\begin{document}
\title{Quantum effects of Aharonov-Bohm type and noncommutative quantum mechanics}
\author{Miguel E. Rodriguez R.}
\affiliation{Mechatronics Engineering Department, Faculty of Engineering of Applied
Science, Technical University North, Ibarra, 100150-Ecuador}
\pacs{03.65.-w, 02.40.Gh, 03.65.Vf}
\keywords{Aharonov-Bohm effect, quantum mechanics, noncommutative space--time.}
\begin{abstract}
Quantum mechanics in noncommutative space modifies the standard result of the
Aharonov-Bohm effect for electrons and of other recent quantum effects. Here we
obtain the phase in noncommutative space for the Spavieri effect, a generalization of the Aharonov-Bohm effect that
involves a coherent superposition of particles with opposite charges moving along a single open interferometric path. By means of the experimental considerations a
limit $\sqrt{\theta}\simeq\left( 0.13\,\text{TeV}\right) ^{-1}$ is
achieved, improving by 10 orders of magnitude the limit derived by Chaichian
\textit{et al.} for the Aharonov-Bohm effect. It is also shown that the
noncommutative phases of the Aharonov-Casher and He-McKellar-Wilkens effects
are nullified in the current experimental tests.
\end{abstract}
\maketitle
\section{Introduction}
Recently there has been a growing interest in studying quantum mechanics in
noncommutative (NC) space \cite{Ma2017120}\cite{Ma2016306}\cite{Benchikha2017}
\cite{Bertolami-2015}\cite{Kovacik-2017}\cite{Ababekri:2016ois}. Because
quantum effects are measured with high precision, they provide
feasible scenarios for setting limits on the experimental manifestation of NC space. The Aharonov-Bohm (AB) effect, in which two coherent beams of charged
particles encircle an infinite solenoid \cite{AB}, has been studied by
Chaichian et al. \cite{Chaichian2002149}, and by Li and Dulat \cite{Li2006825}
in the NC space. The expression of the obtained phase includes an additional
term dependent on the NC space parameter, $\theta$ (measured in units of
$(length)^{2}$ ). The limit on $\theta$ found in the AB effect is of the order
of $\sqrt{\theta}\leqslant$ $10^{6}$GeV$^{-1}$ which corresponds to a
relatively large scale of 1\AA \ \cite{Chaichian2002149}. This same approach
was extended to the Aharonov-Casher (AC) effect \cite{AC} by Li and Wang
\cite{Li20071007}, and by Mirza and Zarei \cite{Mirza2004583}; in this effect, two
coherent beams of neutral particles encircle an infinite charged wire.
Considering the reported experimental error of the AC effect ($\sim25\%$)
\cite{CIMINO-PhysRevLett.63.380}, a limit $\sqrt{\theta}\leqslant10^{7}
$GeV$^{-1}$ is obtained \cite{Mirza2004583} \cite{Li20071007}. The
He-McKellar-Wilkens (HMW) effect \cite{He-Mackellar-PhysRevA} \cite{Wilkens},
in which neutral particles with an electric dipole moment interact with a
magnetic field, has been studied in the NC context by Wang and Li
\cite{Wang20072197} \cite{Wang20075}, and by Dayi \cite{Dayi2009} and in the
context of the Anandan phase \cite{Anandan-PhysRevLett.85.1354} by Passos
\cite{Pasos-PhysRevA.76.012113}. There is no experimental report on the
limit of the parameter $\theta$ in quantum effects for electric dipoles. We consider
here a new effect of AB type, proposed by Spavieri in \cite{Spavieri-E-P}. In
this effect, two beams of particles with charges $+q$ and $-q$ move along a
single side of an infinite solenoid, so that the beams do not enclose the
solenoid (as they do in the ordinary AB effect). The advantage of this effect, called
here \textquotedblleft the S effect\textquotedblright, is that the size of the
solenoid has no limit, so that it can be considered to be very large, such as
a cyclotron. The S effect has been studied by Spavieri and Rodriguez
\cite{spavieri:052113} in the context of massive electrodynamics (or photon
mass). Under certain experimental considerations proposed and discussed in
\cite{spavieri:052113}, Spavieri and Rodriguez envisage a limit on the mass of
the photon of $m_{\gamma}\sim10^{-51}$g, which is the best limit obtainable
for the photon mass by means of a laboratory experiment with a quantum
approach. Consequently, due to the success of the S effect in the photon mass
scenario, we derive here the phase of the S effect in the context of NC quantum mechanics
as an application of the phase found by Chaichian et al.
\cite{Chaichian2002149}. Keeping the experimental proposal of
\cite{spavieri:052113}, we get a new limit on $\theta$ in the context of the
quantum effects of the AB type. In addition, recent advances in atomic
interferometry have allowed measurements of the HMW phase
\cite{Lepoutre2013}, which makes it possible to explore experimentally the manifestation
of NC space in the HMW effect by means of the phases found in
\cite{Pasos-PhysRevA.76.012113} for these effects. This same analysis may be
extended to the experimental configuration proposed by Sangster et al.
\cite{Sangster-PhysRevLett.71.3641} for the AC effect, where the particles do
not enclose the charged wire.
\section{NC quantum mechanics }
In NC quantum mechanics, the commutation relationships of the position
operators satisfy the relation, $\left[ \hat{x}_{i},\hat{x}_{j}\right]
=$i$\theta_{ij}$, where $\left\{ \theta_{ij}\right\} $ is a fully
antisymmetric real matrix representing the noncommutativity of space
and $\hat{x}_{i}$ represents the coordinate operator ($\hat{p}_{i}$ is the
corresponding momentum operator) in NC space. In this scenario the product
of two functions is replaced by the Moyal-Weyl (or star
\textquotedblleft$\ast$\textquotedblright) product \cite{MANKO2007522}, so the
ordinary Schr\"{o}dinger equation, $H\psi=E\psi$, is written as:
\begin{equation}
H\left( \hat{x}_{i},\hat{p}_{i}\right) \ast\psi=E\psi
\label{Schrodinger-NCCM}
\end{equation}
Now, the star product between two functions in an NC plane $(i,j=1,2)$ is
defined by the following expression:
\begin{align}
\left( f\ast g\right) \left( x\right) & =e^{\frac{\text{i}}{2}
\theta_{ij}\partial_{x_{i}}\partial_{x_{j}}}f\left( x_{i}\right) g\left(
x_{j}\right) \label{Moyal-Weyl}\\
& =f\left( x\right) g\left( x\right) +\left. \frac{\text{i}}{2}
\theta_{ij}\partial_{i}f\partial_{j}g\right\vert _{x_{i}=x_{j}}+O\left(
\theta^{2}\right) ,\nonumber
\end{align}
where $f\left( x\right) $ and $g\left( x\right) $ are two arbitrary
functions. Usually, the NC operators are expressed by means of the formulation
of the Bopp shift \cite{Bopp-Shift-DULAT-2006}\ (equivalent to
(\ref{Moyal-Weyl})). This formalism maps the NC problem onto the usual
commutative space by means of new NC variables defined in terms of the commutative
variables. That is to say,
\begin{equation}
\hat{x}_{i}=x_{i}-\frac{1}{2\hbar}\theta_{ij}p_{j}\text{, \ \ }i,j=1,2,
\label{coordenadas-NCMC}
\end{equation}
where the variables $x_{i}$ and $p_{i}$ satisfy the usual canonical
commutation relations, $\left[ x_{i},x_{j}\right] =0$, $\left[ x_{i}
,p_{j}\right] =i\hbar\delta_{ij}$ and $\left[ p_{i},p_{j}\right] =0.$ With
these considerations the Hamiltonian undergoes a coordinate transformation,
$H\left( \hat{x}_{i},\hat{p}_{i}\right) =H\left( x_{i}-\frac{1}{2\hbar
}\theta_{ij}p_{j},p_{i}\right) $. Note that $\theta_{ij}\ll1$, so that the
effects of the NC space can always be treated as a perturbation. If we consider
a particle of mass $m$ and charge $q$ in the presence of a magnetic field (or
vector potential $A_{i}$), then the Hamiltonian in NC space, $H(
\hat{x}_{i},\hat{p}_{i},\hat{A}_{i}) $, undergoes a Bopp shift in both
$\hat{x}_{i}$ and $\hat{A}_{i}$. Therefore, in NC space and with a
magnetic field the Schr\"{o}dinger equation takes the following form:
\begin{equation}
\frac{\hbar^{2}}{2m}\left( p_{i}-qA_{i}-\frac{1}{2}q\theta_{lj}p_{l}
\partial_{j}A_{i}\right) ^{2}\psi=E\psi, \label{schrodinger-NC-final-1}
\end{equation}
whose solution is:
\begin{equation}
\psi=\psi_{0}\exp\left[ \text{i}\frac{q}{\hbar}\int_{x_{0}}^{x}\left(
A_{i}+\frac{1}{2}\theta_{lj}p_{l}\partial_{j}A_{i}\right) dx_{i}\right] ,
\label{fase-AB}
\end{equation}
where $\psi_{0}$ is the solution of
(\ref{schrodinger-NC-final-1}) when $A_{i}=0$.
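As a consistency check (a step written out here for clarity, not present in the original derivation), the Bopp-shifted coordinates (\ref{coordenadas-NCMC}) indeed reproduce the NC algebra, using only the canonical relations quoted above and the antisymmetry of $\theta_{ij}$:
\begin{align*}
\left[ \hat{x}_{i},\hat{x}_{j}\right]  & =-\frac{1}{2\hbar}\theta_{jl}\left[
x_{i},p_{l}\right] -\frac{1}{2\hbar}\theta_{ik}\left[ p_{k},x_{j}\right] \\
& =-\frac{\text{i}}{2}\theta_{ji}+\frac{\text{i}}{2}\theta_{ij}
=\text{i}\theta_{ij}.
\end{align*}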
\section{Phase of the S effect in NC quantum mechanics and limit on $\theta$}
In \cite{Spavieri-E-P} Spavieri has pointed out that the observable quantity in
the AB effect is actually the phase difference
\begin{equation}
\Delta\varphi=\frac{e}{\hbar}\left[\int \mathbf{A}\cdot d\mathbf{l-}
\int\mathbf{A}_{0}\cdot d\mathbf{l}\right] , \label{phase-shift-S}
\end{equation}
where the integral can be taken over an open path. For the usual
closed path $C$ encircling the solenoid and bounding the surface $S$, the
observable quantity is the phase-shift variation, $\Delta\phi\propto
\oint_{C}\mathbf{A}\cdot d\mathbf{l}-\oint_{C}\mathbf{A}_{0}\cdot
d\mathbf{l}=\int_{S}\mathbf{B}\cdot d\mathbf{S}-\int_{S}\mathbf{B}_{0}\cdot
d\mathbf{S}.$ In fact, in interferometric experiments involving the AB and AC
effects \cite{CHAMBERS-PhysRevLett.5.3} \cite{Sangster-PhysRevLett.71.3641}
the direct measurement of the phase $\varphi\propto\int\mathbf{A}\cdot
d\mathbf{l}$ or phase shift $\phi\propto\oint\mathbf{A}\cdot d\mathbf{l}$~is
impossible in principle without the comparison of the actual interference
pattern, due to $\mathbf{A}$, with a reference interference pattern, due to $\mathbf{A}_{0}$. Thus, $\varphi$ or $\phi$ are
not observable, but the variations $\Delta\varphi$ and $\Delta\phi$ are both
gauge-invariant observable quantities \cite{Spavieri-E-P}. Therefore, with these considerations introduced by Spavieri \cite{Spavieri-E-P}, it is possible to consider a new effect of AB type without the particles encircling the solenoid. In this case the particles must have opposite charges, $\pm e$, and move along one side of the solenoid, i.e., along a path $b$. Since the two beams carry opposite charges, their contributions along $b$ add, and the phase of this new effect, called here the Spavieri (S) effect, is:
\begin{equation}
\Delta\varphi_{S}=\frac{e}{\hbar}\int_{b} \mathbf{A}\cdot d\mathbf{l}-
\frac{(-e)}{\hbar}\int_{b}\mathbf{A}\cdot d\mathbf{l}= \frac{2e}{\hbar}\int_{b}\mathbf{A}\cdot d\mathbf{l}. \label{phase-shift-S1}
\end{equation}
We are now interested in the phase (\ref{phase-shift-S1}) in the context of NC quantum mechanics. Substituting the phase (\ref{fase-AB}) into (\ref{phase-shift-S1}) and retaining only the term proportional to the parameter $\theta$, we obtain the correction to the S-effect phase due to the NC space:
\begin{equation}
\Delta\varphi_{S}^{NC} = \frac{e}{\hbar}\int_{b}
\theta_{lj}p_{l}\partial_{j}A_{i} dx_{i}. \label{phase-shift-S2}
\end{equation}
Writing (\ref{phase-shift-S2}) in Cartesian coordinates, we obtain the phase shift of the S effect in NC space:
\begin{equation}
\Delta\phi_{S}^{NC}=-\frac{em}{4\hbar^{2}}\vec{\theta}\cdot\int\left[ \left(
\mathbf{v}\times\vec{\nabla}A_{i}\right) -\frac{e}{m}\left( \mathbf{A}
\times\vec{\nabla}A_{i}\right) \right] dx_{i}, \label{fase-S-NC}
\end{equation}
where $i=1,2$ labels the Cartesian components $x$ and $y$, $m$ is the mass of the electron, and $\mathbf{v}$ is the velocity of the particles.
Although the effect for $\pm q$ charged particles is viable
\cite{Spavieri-E-P}, the technology and interferometry for the test of this
effect needs improvements. It is worth recalling that not long ago the
technology and interferometry for beams of particles with opposite magnetic
$\pm\mathbf{m}$ or electric $\pm \mathbf{d} $ dipole moments was likewise
unavailable, but is today a reality \cite{Sangster-PhysRevLett.71.3641}
\cite{DWF-PhysRevLett.83.2486}. Discussions on this subject may act as a
stimulating catalyst for further studies and technological advances that will
lead to the experimental test of this quantum effect. An important step in
this direction has already been made \cite{Spavieri-E-P} by showing that, at
least in principle and as far as gauge invariance requirements are concerned,
this effect is physically feasible.
In the experimental setups detecting the traditional AB effect there are
limitations imposed by the suitable type of interferometer related to the
electron wavelength, the corresponding convenient size of the solenoid or
toroid, and the maximum achievable size $\rho$ of the coherent electron beam
encircling the magnetic flux \cite{BD-PhysRevLett.63.2319}. In the analysis
made by Boulware and Deser \cite{BD-PhysRevLett.63.2319} in the context of the limit on the photon mass, the radius of the
solenoid is $a=0.1$cm, and $\rho$ is taken to be about 10cm, implying that the
electron beam keeps its state of coherence up to a size $\rho=10^{2}a$, i.e.,
fifty times the solenoid diameter. The advantage of the new approach for the
$\pm q$ beam of particles is that the dimension of the solenoid has no upper
limits and is conditioned only by practical limits of the experimental setup,
while the size of the coherent beam of particles plays no important role. Due
to these advantages of the new approach introduced by Spavieri and its success
in the exploration of the limit on the mass of the photon, one may ask what
limit could be reached for the parameter $\theta$ of NC quantum mechanics. To answer this question, we consider that the electrons move along the straight line $y=y_{0}$ from $x=-x_{0}$ to
$x=x_{0}$ (the open path $b$ in (\ref{phase-shift-S2})), with velocity $\mathbf{v}=v\mathbf{i}$, so that $i=x$ in
(\ref{fase-S-NC}). In addition, as in \cite{Chaichian2002149}, here it is
considered that $\vec{\theta}=\theta\mathbf{z}$. To complete the calculation it is necessary to know the component $A_{x}$ of
the vector potential external to an infinite solenoid,
\[
A_{x}=-B_{0}\frac{a^{2}}{2}\left( \frac{y}{x^{2}+y^{2}}\right) .
\]
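The partial derivatives of $A_{x}$ that enter (\ref{fase-S-NC}) are (an intermediate step written out here for clarity)
\[
\partial_{x}A_{x}=B_{0}a^{2}\frac{xy}{\left( x^{2}+y^{2}\right) ^{2}},\qquad
\partial_{y}A_{x}=-\frac{B_{0}a^{2}}{2}\,\frac{x^{2}-y^{2}}{\left( x^{2}+y^{2}\right) ^{2}}.
\]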
Thus, the terms in parentheses of (\ref{fase-S-NC}) are:
\begin{equation}
\mathbf{v}\times\vec{\nabla}A_{x}=-B_{0}\frac{a^{2}v}{2}\left( \frac{
x^{2}-y^{2} }{\left( x^{2}+y^{2}\right) ^{2}}\right) \mathbf{z}
\label{termino-A-1}
\end{equation}
and
\begin{equation}
\mathbf{A}\times\vec{\nabla}A_{x}=-\frac{1}{4}\frac{B_{0}^{2}a^{4}y}{\left(
x^{2}+y^{2}\right) ^{2}}\mathbf{z} \label{termino-A-2}
\end{equation}
where $a$ is the radius of the solenoid and $B_{0}$ is the magnetic field inside the solenoid. Substituting (\ref{termino-A-1}) and (\ref{termino-A-2}) in
(\ref{fase-S-NC}) and performing the integration from $-x_{0}$ to
$x_{0}$, the NC correction to the phase of the S effect is obtained:
\begin{equation}
\Delta\phi_{S}^{NC}=\frac{1}{8}\theta\left( \frac{\Phi}{\Phi_{0}}\right)
^{2}\left\{
\begin{array}
[c]{c}
\dfrac{\arctan\left( \dfrac{x}{y}\right) }{y^{2}}+\dfrac{x/y}
{x^{2}+y^{2}} \\ \\
+\dfrac{8\pi}{\lambda_{e}}\dfrac{\Phi_{0}}{\Phi}\dfrac{v}{c}\dfrac{x}{x^{2}+y^{2}}
\end{array}
\right\} \label{phase-S-NCQM}
\end{equation}
where $\Phi=\pi a^{2}B_{0}$ is the magnetic flux enclosed within the solenoid, $\Phi_{0}=2.06\times10^{-15}$\,T$\cdot$m$^{2}$ is the elementary flux
quantum, $\lambda_{e}$ is the Compton wavelength of the electron, and $c$ is the speed of light. To
estimate a limit on $\theta$, here we consider the same experimental parameters
introduced and discussed in \cite{spavieri:052113} for the study of the mass of the photon
in the context of the S effect, namely $a=5$\,m, $x=30$\,m, $y=8a/5=8$\,m,
and $B_{0}=10$\,T. With these parameters it can be shown that the orders of
magnitude (in units of m$^{-2}$) of the terms in braces are the following: $\frac
{\arctan\left( \frac{x}{y}\right) }{y^{2}}\sim10^{-2}$, $\frac{x/y}
{x^{2}+y^{2}}\sim10^{-3}$, and $\frac{8\pi}{\lambda_{e}}\frac{\Phi_{0}}{\Phi
}\frac{v}{c}\frac{x}{x^{2}+y^{2}}\sim3.3\times10^{-15}\,v$. If the velocity of the
electrons is $v=2\times10^{8}$\,m/s, as in the Tonomura
\cite{Tonomura1987639} experiment for the Aharonov-Bohm effect, then the order
of magnitude of the kinetic term is $10^{-7}$. This analysis shows that the
kinetic term is up to five orders of magnitude smaller than the geometric terms. This contrasts
with the analysis made by Chaichian et al. \cite{Chaichian2002149} for the AB
effect, where the kinetic term is five orders of magnitude greater than the geometric term.
Consequently, for the estimation of the limit on $\theta$ we consider only the
first term in brackets of expression (\ref{phase-S-NCQM}), i.e. $\Delta
\phi_{S}^{NC}\simeq\frac{1}{8}\theta\left( \frac{\Phi}{\Phi_{0}}\right)
^{2}\frac{\arctan\left( \frac{x}{y}\right) }{y^{2}}$. As the NC
correction is very small, its effect must be masked within the experimental
error, $\epsilon$, so that $\Delta\phi_{S}^{NC}\leqslant \epsilon$. The same argument is followed in works related to the estimation
of the mass of the photon \cite{BD-PhysRevLett.63.2319}
\cite{Neyenhuis-Mass-Photon}\cite{spavieri:052113} \cite{Rodriguez2009373}. According to recent
advances in atomic interferometry \cite{1674-1056-24-5-053702,gustavson-2000}, the
experimental error that can be reached in the measurement of quantum phases is
of the order of $10^{-4}$\,rad. This can be seen in the measurement of the AC phase
\cite{Sangster-PhysRevLett.71.3641}, where the phase has been measured with an experimental error of $0.11$\,mrad $=1.1\times10^{-4}$\,rad; Zhou et al. \cite{Zhou-2016}, by means of simulation, even project a measurement of the AC phase with an error of $10^{-5}$\,rad. Consequently, in this work, to be conservative, it is considered that
$\epsilon=10^{-4}$\,rad. Therefore, the estimated limit on
$\theta$ in the context of the S effect is
\[
\sqrt{\theta}\leqslant\left[ \frac{1}{8y}\left( \frac{\Phi}
{\Phi_{0}}\right)\sqrt{\frac{\arctan\left( \frac{x}{y}\right) }{\epsilon
}} \, \right] ^{-1}\simeq\left( 0.13\,\text{TeV}\right) ^{-1},
\]
which is 10 orders of magnitude smaller than the value obtained by Chaichian et al.
\cite{Chaichian2002149} for the AB effect.
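The order-of-magnitude estimates above and the quoted bound can be checked numerically. The following short script is an illustrative sketch added here, not part of the original analysis; it simply evaluates the expressions of this section with the quoted parameters, taking $\Phi_{0}=h/2e$ and the standard values of $\lambda_{e}$ and $\hbar c$:
\begin{verbatim}
import numpy as np

# Experimental parameters quoted in the text (SI units)
a, x, y, B0 = 5.0, 30.0, 8.0, 10.0    # solenoid radius, path endpoints, field
v, c = 2e8, 3e8                       # electron speed, speed of light (m/s)
lambda_e = 2.43e-12                   # Compton wavelength of the electron (m)
Phi0 = 2.07e-15                       # elementary flux quantum h/2e (T m^2)
eps = 1e-4                            # assumed experimental phase error (rad)
hbar_c = 1.97e-16                     # hbar*c in GeV*m (length -> energy)

Phi = np.pi * a**2 * B0               # magnetic flux inside the solenoid

# The three terms in the braces of the NC phase (units of m^-2)
t1 = np.arctan(x / y) / y**2
t2 = (x / y) / (x**2 + y**2)
t3 = (8 * np.pi / lambda_e) * (Phi0 / Phi) * (v / c) * x / (x**2 + y**2)
print(t1, t2, t3)                     # ~1e-2, ~1e-3, ~1e-7

# Limit on sqrt(theta) from Delta phi <= eps, keeping only the first term
sqrt_theta = 1.0 / ((1 / (8 * y)) * (Phi / Phi0)
                    * np.sqrt(np.arctan(x / y) / eps))
print(sqrt_theta)                     # length scale, ~1.5e-18 m
print(hbar_c / sqrt_theta)            # ~1.3e2 GeV, i.e. (0.13 TeV)^-1
\end{verbatim}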
\section{Quantum effect for electric and magnetic dipoles in NC quantum mechanics}
The phases for magnetic dipoles, $\mathbf{m}$ (AC effect), and for electric dipoles, $\mathbf{d}$ (HMW effect),
in noncommutative space have also been calculated by Passos et al.
\cite{Pasos-PhysRevA.76.012113}. The expressions are as follows:
\begin{align}
\phi_{AC} & =\text{i}\oint\left( \mathbf{m}\times\mathbf{E}\right)
\cdot d\mathbf{r}+\frac{\text{i}}{2}m\oint\vec{\theta}\mathbf{\cdot}\left[
\mathbf{v\times\nabla\cdot}\left( \mathbf{m}\times\mathbf{E}\right)
\right] \cdot d\mathbf{r}\nonumber\\
& \mathbf{-}\frac{\text{i}}{2}m\oint\vec{\theta}\mathbf{\cdot}\left[ \left(
\mathbf{m}\times\mathbf{E}\right) \mathbf{\times\nabla\cdot}\left(
\mathbf{m}\times\mathbf{E}\right) \right] \cdot d\mathbf{r}
\label{AC-passos}
\end{align}
\begin{align}
\phi_{HMW} & =-\text{i}\oint\left( \mathbf{d}\times\mathbf{B}\right) \cdot
d\mathbf{r-}\frac{\text{i}}{2}m\oint\vec{\theta}\mathbf{\cdot}\left[
\mathbf{v\times\nabla\cdot}\left( \mathbf{d}\times\mathbf{B}\right) \right]
\cdot d\mathbf{r}\nonumber\\
& \mathbf{+}\frac{\text{i}}{2}m\oint\vec{\theta}\mathbf{\cdot}\left[ \left(
\mathbf{d}\times\mathbf{B}\right) \mathbf{\times\nabla\cdot}\left(
\mathbf{d}\times\mathbf{B}\right) \right] \cdot d\mathbf{r},
\label{hwm-passos}
\end{align}
where $m$ is the mass of the electric or magnetic dipoles. In the experimental setup proposed by Sangster et al.
\cite{Sangster-PhysRevLett.71.3641}, magnetic dipoles with opposing dipole
moments are moving on the same interferometric path. With this configuration
of magnetic moments the beams do not need to enclose a charged wire (as in the
ordinary AC effect), but are moving in the presence of a homogeneous electric
field produced by a parallel-plate capacitor. In the configuration of
Sangster \cite{Sangster-PhysRevLett.71.3641} the magnetic dipole moments,
$\mathbf{m}$, are perpendicular to the electric field, $\mathbf{E}$,
thus the terms $\mathbf{\nabla\cdot}\left( \mathbf{m}\times
\mathbf{E}\right) $ in (\ref{AC-passos}) vanish and the effects of
the NC space cannot be observed in this configuration. In the same sense, in a recent
experiment carried out to observe the HMW phase \cite{Lepoutre2013}, the
electric dipole moments, $\mathbf{d}$, of the beams are perpendicular to the
magnetic field, $\mathbf{B}$, involved in the effect. Thus, the terms
$\mathbf{\nabla\cdot}\left( \mathbf{d}\times\mathbf{B}\right) $ in the
expression (\ref{hwm-passos}) vanish and the NC effect, according to the
expression (\ref{hwm-passos}) derived by Passos et al., is
not observable in this configuration. Another proposed configuration for
observing the AB effect for electric dipoles is known as the Tkachuk effect \cite{Takchuk-PhysRevA.62.052112}. In this configuration, one considers
two infinite wires with opposite magnetic polarizations that depend on the
position along the wire, that is, $M\left( z\right) =-qz$, where $q$ can be
treated as a linear magnetic charge density. If the wires are sufficiently
long, the magnetic vector potential can be written as $\mathbf{A}
_{T}=z\mathbf{A}_{AB}$, $\mathbf{A}_{AB}$ being the ordinary vector potential
of the AB effect. Thus, the magnetic field is $\mathbf{B}=\nabla
\times\mathbf{A}_{T}=z\left( \nabla\times\mathbf{A}_{AB}\right)
-\mathbf{A}_{AB}\times\mathbf{z}$. In this effect the beams of electric
dipoles move in the middle plane of the wires, that is, at $z=0$, with their
polarization, $\mathbf{d}$, parallel to the wire axis. The term of interest in
the NC context according to (\ref{hwm-passos}) is $\mathbf{d}\times\mathbf{B}
$. Therefore, $\mathbf{d}\times\mathbf{B=}d$ $\mathbf{z}\times\left( z\left(
\nabla\times\mathbf{A}_{AB}\right) -\mathbf{A}_{AB}\times\mathbf{z}\right)
$, evaluated at $z=0$, we obtain that $\mathbf{d}\times\mathbf{B=}
d\mathbf{A}_{AB}$, but $\nabla\cdot\left( \mathbf{d}\times\mathbf{B}\right)
=d\nabla\cdot\mathbf{A}_{AB}=0$ for the ordinary AB effect (Coulomb gauge).
This implies that in the scenario presented by Tkachuk
\cite{Takchuk-PhysRevA.62.052112}, the NC effect cannot be observed.
\section{Conclusions}
We have derived the AB phase for an open path, or S effect, in the context of NC quantum
mechanics (eq.~\ref{phase-S-NCQM}). Considering the experimental parameters proposed for the S effect to obtain a limit on the photon mass, we obtained an upper limit on the
NC parameter, $\sqrt{\theta}\leqslant\left( 0.13\,\text{TeV}\right) ^{-1}$, 10
orders of magnitude smaller than in previous scenarios of the AB type, and three orders of magnitude smaller than the limit derived
by Moumni et al. \cite{Moumni2011151}, $\sqrt{\theta}\leqslant\left(
0.16\,\text{GeV}\right) ^{-1}$, in the context of the energy lines of the
hydrogen atom, which is also a quantum scenario. It can also be observed that
the kinetic term in our result, which includes the speed and mass of the
particle, is not relevant for the calculation of $\theta$ since it is several orders
of magnitude smaller than the geometric terms, which is opposite to the result
found by Chaichian et al. \cite{Chaichian2002149}. We have also shown that the
NC effects are not manifested in the experimental configurations (for open interferometric paths) proposed by
Sangster et al. \cite{Sangster-PhysRevLett.71.3641} for the AC effect and in the
proposals of Lepoutre et al. \cite{Lepoutre2013}\cite{Lepoutre2012} and
Tkachuk \cite{Takchuk-PhysRevA.62.052112} for the HMW effect, from the point of
view of the phases derived by Passos et al. \cite{Pasos-PhysRevA.76.012113}
for these effects. Finally, it is important to mention that these results can be improved in the future thanks to the development of new atomic interferometers, with higher precision and longer interferometric paths \cite{Wang-Jin-2015}.
\begin{thebibliography}{39}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Ma and Wang}(2017)}]{Ma2017120}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Ma}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.-H.} \bibnamefont{Wang}},
\bibinfo{journal}{Ann. Phys.} \textbf{\bibinfo{volume}{383}},
\bibinfo{pages}{120 } (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Ma et~al.}(2016)\citenamefont{Ma, Wang, and
Yang}}]{Ma2016306}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Ma}},
\bibinfo{author}{\bibfnamefont{J.-H.} \bibnamefont{Wang}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.-X.} \bibnamefont{Yang}},
\bibinfo{journal}{Phys. Lett. B} \textbf{\bibinfo{volume}{759}},
\bibinfo{pages}{306 } (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Benchikha et~al.}(2017)\citenamefont{Benchikha, Merad,
and Birkandan}}]{Benchikha2017}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Benchikha}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Merad}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Birkandan}},
\bibinfo{journal}{Mod. Phys. Lett.} \textbf{\bibinfo{volume}{A32}},
\bibinfo{pages}{1750106} (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Bertolami and Leal}(2015)}]{Bertolami-2015}
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Bertolami}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Leal}},
\bibinfo{journal}{Phys. Lett. B} \textbf{\bibinfo{volume}{750}},
\bibinfo{pages}{6 } (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Kov\'a\v{c}ikand and
Pre\v{s}najder}(2017)}]{Kovacik-2017}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Kov\'a\v{c}ikand}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Pre\v{s}najder}},
\bibinfo{journal}{J. Math. Phys.} \textbf{\bibinfo{volume}{58}},
\bibinfo{pages}{012101} (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Ababekri et~al.}(2016)\citenamefont{Ababekri, Abduwali,
Mamatabdulla, and Reyima}}]{Ababekri:2016ois}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Ababekri}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Abduwali}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Mamatabdulla}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Reyima}},
\bibinfo{journal}{Front. Phys.} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{1} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Aharonov and Bohm}(1959)}]{AB}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Aharonov}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bohm}},
\bibinfo{journal}{Phys. Rev.} \textbf{\bibinfo{volume}{115}},
\bibinfo{pages}{485} (\bibinfo{year}{1959}).
\bibitem[{\citenamefont{Chaichian et~al.}(2002)\citenamefont{Chaichian,
Pre\v{s}najder, Sheikh-Jabbari, and Tureanu}}]{Chaichian2002149}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Chaichian}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Pre\v{s}najder}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Sheikh-Jabbari}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tureanu}},
\bibinfo{journal}{Phys. Lett. B} \textbf{\bibinfo{volume}{527}},
\bibinfo{pages}{149} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Li and Dulat}(2006)}]{Li2006825}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Li}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Dulat}},
\bibinfo{journal}{Eur. Phys. J. C} \textbf{\bibinfo{volume}{46}},
\bibinfo{pages}{825} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Aharonov and Casher}(1984)}]{AC}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Aharonov}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Casher}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{53}},
\bibinfo{pages}{319} (\bibinfo{year}{1984}).
\bibitem[{\citenamefont{Li and Wang}(2007)}]{Li20071007}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Li}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Wang}},
\bibinfo{journal}{Eur. Phys. J. C} \textbf{\bibinfo{volume}{50}},
\bibinfo{pages}{1007} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Mirza and Zarei}(2004)}]{Mirza2004583}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Mirza}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Zarei}},
\bibinfo{journal}{Eur. Phys. J. C} \textbf{\bibinfo{volume}{32}},
\bibinfo{pages}{583} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Cimmino et~al.}(1989)\citenamefont{Cimmino, Opat,
Klein, Kaiser, Werner, Arif, and Clothier}}]{CIMINO-PhysRevLett.63.380}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Cimmino}},
\bibinfo{author}{\bibfnamefont{G.~I.} \bibnamefont{Opat}},
\bibinfo{author}{\bibfnamefont{A.~G.} \bibnamefont{Klein}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Kaiser}},
\bibinfo{author}{\bibfnamefont{S.~A.} \bibnamefont{Werner}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Arif}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Clothier}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{63}},
\bibinfo{pages}{380} (\bibinfo{year}{1989}).
\bibitem[{\citenamefont{He and McKellar}(1993)}]{He-Mackellar-PhysRevA}
\bibinfo{author}{\bibfnamefont{X.-G.} \bibnamefont{He}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{B.~H.~J.} \bibnamefont{McKellar}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{47}},
\bibinfo{pages}{3424} (\bibinfo{year}{1993}).
\bibitem[{\citenamefont{Wilkens}(1994)}]{Wilkens}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Wilkens}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{72}},
\bibinfo{pages}{5} (\bibinfo{year}{1994}).
\bibitem[{\citenamefont{Wang and Li}(2007{\natexlab{a}})}]{Wang20072197}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Wang}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Li}}, \bibinfo{journal}{J.
Phys. A: Math. Theor.} \textbf{\bibinfo{volume}{40}}, \bibinfo{pages}{2197}
(\bibinfo{year}{2007}{\natexlab{a}}).
\bibitem[{\citenamefont{Wang and Li}(2007{\natexlab{b}})}]{Wang20075}
\bibinfo{author}{\bibfnamefont{J.-H.} \bibnamefont{Wang}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Li}},
\bibinfo{journal}{Chinese Phys. Lett.} \textbf{\bibinfo{volume}{24}},
\bibinfo{pages}{5} (\bibinfo{year}{2007}{\natexlab{b}}).
\bibitem[{\citenamefont{Dayi}(2009)}]{Dayi2009}
\bibinfo{author}{\bibfnamefont{O.~F.} \bibnamefont{Dayi}},
\bibinfo{journal}{EPL} \textbf{\bibinfo{volume}{85}} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Anandan}(2000)}]{Anandan-PhysRevLett.85.1354}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Anandan}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{85}},
\bibinfo{pages}{1354} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Passos et~al.}(2007)\citenamefont{Passos, Ribeiro,
Furtado, and Nascimento}}]{Pasos-PhysRevA.76.012113}
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Passos}},
\bibinfo{author}{\bibfnamefont{L.~R.} \bibnamefont{Ribeiro}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Furtado}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~R.} \bibnamefont{Nascimento}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{76}},
\bibinfo{pages}{012113} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{{Spavieri}}(2006)}]{Spavieri-E-P}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{{Spavieri}}},
\bibinfo{journal}{Eur. Phys. J. D} \textbf{\bibinfo{volume}{37}},
\bibinfo{pages}{327} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Spavieri and Rodriguez}(2007)}]{spavieri:052113}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Spavieri}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Rodriguez}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{75}},
\bibinfo{eid}{052113} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Lepoutre et~al.}(2013)\citenamefont{Lepoutre, Gillot,
Gauguet, B\"uchner, and Vigu\'e}}]{Lepoutre2013}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lepoutre}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Gillot}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gauguet}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{B\"uchner}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Vigu\'e}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{88}}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Sangster et~al.}(1993)\citenamefont{Sangster, Hinds,
Barnett, and Riis}}]{Sangster-PhysRevLett.71.3641}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Sangster}},
\bibinfo{author}{\bibfnamefont{E.~A.} \bibnamefont{Hinds}},
\bibinfo{author}{\bibfnamefont{S.~M.} \bibnamefont{Barnett}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Riis}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{71}},
\bibinfo{pages}{3641} (\bibinfo{year}{1993}).
\bibitem[{\citenamefont{Man'ko et~al.}(2007)\citenamefont{Man'ko, Man'ko,
Marmo, and Vitale}}]{MANKO2007522}
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Man'ko}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Man'ko}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Marmo}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Vitale}},
\bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{360}},
\bibinfo{pages}{522 } (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Dulat and LI}(2006)}]{Bopp-Shift-DULAT-2006}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Dulat}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{LI}}, \bibinfo{journal}{Mod.
Phys. Lett. A} \textbf{\bibinfo{volume}{21}} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Chambers}(1960)}]{CHAMBERS-PhysRevLett.5.3}
\bibinfo{author}{\bibfnamefont{R.~G.} \bibnamefont{Chambers}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{3} (\bibinfo{year}{1960}).
\bibitem[{\citenamefont{Dowling et~al.}(1999)\citenamefont{Dowling, Williams,
and Franson}}]{DWF-PhysRevLett.83.2486}
\bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}},
\bibinfo{author}{\bibfnamefont{C.~P.} \bibnamefont{Williams}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~D.}
\bibnamefont{Franson}}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{2486} (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Boulware and Deser}(1989)}]{BD-PhysRevLett.63.2319}
\bibinfo{author}{\bibfnamefont{D.~G.} \bibnamefont{Boulware}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Deser}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{63}},
\bibinfo{pages}{2319} (\bibinfo{year}{1989}).
\bibitem[{\citenamefont{Tonomura}(1987)}]{Tonomura1987639}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tonomura}},
\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{59}},
\bibinfo{pages}{639} (\bibinfo{year}{1987}).
\bibitem[{\citenamefont{Neyenhuis et~al.}(2007)\citenamefont{Neyenhuis,
Christensen, and Durfee}}]{Neyenhuis-Mass-Photon}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Neyenhuis}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Christensen}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.~S.}
\bibnamefont{Durfee}}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{99}}, \bibinfo{eid}{200401} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Rodriguez}(2009)}]{Rodriguez2009373}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Rodriguez}},
\bibinfo{journal}{Rev. Mex. Fis.} \textbf{\bibinfo{volume}{55}},
\bibinfo{pages}{373} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Jin}(2015{\natexlab{a}})}]{1674-1056-24-5-053702}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Jin}},
\bibinfo{journal}{Chinese Phys. B} \textbf{\bibinfo{volume}{24}},
\bibinfo{pages}{053702} (\bibinfo{year}{2015}{\natexlab{a}}).
\bibitem[{\citenamefont{Gustavson et~al.}(2000)\citenamefont{Gustavson,
Landragin, and Kasevich}}]{gustavson-2000}
\bibinfo{author}{\bibfnamefont{T.~L.} \bibnamefont{Gustavson}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Landragin}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~A.}
\bibnamefont{Kasevich}}, \bibinfo{journal}{Classical and Quantum Gravity}
\textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{2385} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Zhou et~al.}(2016)\citenamefont{Zhou, Zhang, Duan, Ke,
Shao, and Hu}}]{Zhou-2016}
\bibinfo{author}{\bibfnamefont{M.-K.} \bibnamefont{Zhou}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Zhang}},
\bibinfo{author}{\bibfnamefont{X.-C.} \bibnamefont{Duan}},
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Ke}},
\bibinfo{author}{\bibfnamefont{C.-G.} \bibnamefont{Shao}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{Z.-K.} \bibnamefont{Hu}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{93}},
\bibinfo{pages}{023641} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Tkachuk}(2000)}]{Takchuk-PhysRevA.62.052112}
\bibinfo{author}{\bibfnamefont{V.~M.} \bibnamefont{Tkachuk}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{62}},
\bibinfo{pages}{052112} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Moumni et~al.}(2011)\citenamefont{Moumni, BenSlama, and
Zaim}}]{Moumni2011151}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Moumni}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{BenSlama}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Zaim}}, \bibinfo{journal}{J.
Geom. Phys.} \textbf{\bibinfo{volume}{61}}, \bibinfo{pages}{151 }
(\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Lepoutre et~al.}(2012)\citenamefont{Lepoutre, Gauguet,
Tr\'enec, B\"uchner, and Vigu\'e}}]{Lepoutre2012}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lepoutre}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gauguet}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Tr\'enec}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{B\"uchner}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Vigu\'e}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{109}}
(\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Jin}(2015{\natexlab{b}})}]{Wang-Jin-2015}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Jin}},
\bibinfo{journal}{Chinese Physics B} \textbf{\bibinfo{volume}{24}},
\bibinfo{eid}{53702} (pages~\bibinfo{numpages}{0})
(\bibinfo{year}{2015}{\natexlab{b}}).
\end{thebibliography}
\end{document}
\begin{document}
\title{{\Large \textbf{On the clustering aspect of nonnegative matrix factorization}}}
\author{\IEEEauthorblockN{Andri Mirzal}
\IEEEauthorblockA{Grad.~School of Information Science and Technology,\\
Hokkaido University, Kita 14 Nishi 9, Kita-Ku,\\
Sapporo, Japan\\
andri@complex.eng.hokudai.ac.jp}
\and
\IEEEauthorblockN{Masashi Furukawa}
\IEEEauthorblockA{Grad.~School of Information Science and Technology,\\
Hokkaido University, Kita 14 Nishi 9, Kita-Ku,\\
Sapporo, Japan\\
mack@complex.eng.hokudai.ac.jp}
}
\maketitle
\begin{abstract}
This paper provides a theoretical explanation of the clustering aspect of nonnegative matrix factorization (NMF). We prove that even without imposing orthogonality or sparsity constraints on the basis and/or coef\mbox{}ficient matrix, NMF can still give clustering results, thus providing theoretical support for many works, e.g., Xu et al.~\cite{Xu} and Kim et al.~\cite{Kim}, that show the superiority of the standard NMF as a clustering method.\\
\end{abstract}
\begin{IEEEkeywords}
\textit{bound-constrained optimization, clustering method, non-convex optimization, nonnegative matrix factorization}
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction} \label{introduction}
NMF is a matrix approximation technique that factorizes a nonnegative matrix into a pair of other nonnegative matrices of much lower rank:
\begin{equation}
\mathbf{A} \approx \mathbf{B}\mathbf{C},
\label{eq1}
\end{equation}
where $\mathbf{A}\in\mathbb{R}_{+}^{M\times N}=\left[\mathbf{a}_1,\ldots,\mathbf{a}_N\right]$ denotes the feature-by-item data matrix, $\mathbf{B}\in\mathbb{R}_{+}^{M\times K}=\left[\mathbf{b}_1,\ldots,\mathbf{b}_K\right]$ denotes the basis matrix, $\mathbf{C}\in\mathbb{R}_{+}^{K\times N}=\left[\mathbf{c}_1,\ldots,\mathbf{c}_N\right]$ denotes the coef\mbox{}ficient matrix, and $K$ denotes the number of factors, which is usually chosen so that $K\ll\min(M,N)$. There are also other variants of NMF like semi-NMF, convex NMF, and symmetric NMF. Detailed discussions can be found in, e.g., \cite{Li} and \cite{Ding3}.
The nonnegativity constraints and the reduced dimensionality define the uniqueness and power of NMF. The nonnegativity constraints allow only nonsubstractive linear combinations of the basis vectors $\mathbf{b}_k$ to construct the data vectors $\mathbf{a}_n$, thus providing the parts-based interpretations as shown in \cite{Lee, SZLi, Hoyer}. And the reduced dimensionality provides NMF with the clustering aspect and data compression capabilities.
The most important application of NMF is in data clustering, as some works have shown that it is a superior method compared to standard clustering methods like spectral methods and the $K$-means algorithm. In particular, Xu et al.~\cite{Xu} showed that NMF outperforms standard spectral methods in finding document clusterings in two text corpora, TDT2 and Reuters. And Kim et al.~\cite{Kim} showed that NMF and sparse NMF are superior to the $K$-means algorithm on both a synthetic dataset (which is well separated) and a real dataset (TDT2).
If sparsity constraints are imposed on columns of $\mathbf{C}$, the clustering aspect of NMF is intuitive since in the extreme case where there is only one nonzero entry per column, NMF is equivalent to the $K$-means algorithm applied to the data vectors $\mathbf{a}_n$ \cite{Ding}, and the sparsity constraints can be thought of as a relaxation of the strict orthogonality constraints on rows of $\mathbf{C}$ (an equivalent explanation can also be stated for imposing sparsity on rows of $\mathbf{B}$).
However, as reported by Xu et al.~\cite{Xu} and Kim et al.~\cite{Kim}, even without imposing sparsity constraints, NMF can still give very promising clustering results. But the authors did not give any theoretical analysis of why the standard NMF---NMF without sparsity or orthogonality constraints---can give such good results. So far the best explanation for this remarkable fact is only qualitative: the standard NMF produces non-orthogonal latent semantic directions (the basis vectors) that are more likely to correspond to each of the clusters than those produced by the spectral methods, thus the clustering induced from the latent semantic directions of the standard NMF is better than the clustering given by the spectral methods \cite{Xu}. Therefore, this work attempts to provide theoretical support for the clustering aspect of the standard NMF.
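As a concrete illustration of this clustering aspect, the following sketch (an example added here for illustration; the original experiments of \cite{Xu} and \cite{Kim} use text corpora, not this toy data) runs the standard NMF implementation of scikit-learn on a synthetic feature-by-item matrix with three planted co-clusters and reads the cluster assignments directly off $\mathbf{B}$ and $\mathbf{C}$:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic feature-by-item matrix A (M x N) with K planted co-clusters.
M, N, K = 45, 90, 3
A = 0.05 * rng.random((M, N))          # small nonnegative noise
for k in range(K):
    A[k * 15:(k + 1) * 15, k * 30:(k + 1) * 30] += 1.0

# Standard NMF: no orthogonality or sparsity constraints are imposed.
model = NMF(n_components=K, init="nndsvda", max_iter=500, random_state=0)
B = model.fit_transform(A)             # basis matrix (M x K)
C = model.components_                  # coefficient matrix (K x N)

# Item n goes to the factor with the largest coefficient in column c_n,
# feature m to the factor with the largest entry in row m of B.
item_labels = C.argmax(axis=0)
feature_labels = B.argmax(axis=1)
print(item_labels)                     # should recover the planted item clusters
print(feature_labels)                  # should recover the planted feature clusters
\end{verbatim}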
\section{Clustering aspect of NMF} \label{clusteringnmf}
To compute $\mathbf{B}$ and $\mathbf{C}$, eq.~\ref{eq1} is usually rewritten as a minimization problem in the Frobenius norm criterion:
\begin{equation}
\min_{\mathbf{B},\mathbf{C}}J\left(\mathbf{B},\mathbf{C}\right)=\frac{1}{2}\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2}\;\,\mathrm{s.t.}\;\, \mathbf{B}\ge\mathbf{0},\mathbf{C}\ge\mathbf{0}.
\label{eq2}
\end{equation}
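For reference (the update rules are not derived in this paper, and other solvers are discussed below), a standard way to find a local solution of eq.~\ref{eq2} is the multiplicative update of Lee and Seung, which keeps $\mathbf{B}$ and $\mathbf{C}$ nonnegative at every step:
\[
b_{mk}\leftarrow b_{mk}\frac{\left(\mathbf{A}\mathbf{C}^T\right)_{mk}}{\left(\mathbf{B}\mathbf{C}\mathbf{C}^T\right)_{mk}},\qquad
c_{kn}\leftarrow c_{kn}\frac{\left(\mathbf{B}^T\mathbf{A}\right)_{kn}}{\left(\mathbf{B}^T\mathbf{B}\mathbf{C}\right)_{kn}}.
\]
Under these rules the objective in eq.~\ref{eq2} is nonincreasing.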
In addition to the usual Frobenius norm criterion, the family of Bregman divergences---of which the Frobenius norm and the Kullback-Leibler divergence are members---can also be used as af\mbox{}finity measures. A detailed discussion of the Bregman divergences for NMF can be found in \cite{Dhillon}.
Sometimes it is more practical and intuitive to decompose $J\left(\mathbf{B},\mathbf{C}\right)$ into a series of smaller objectives.
\begin{align}
&\min_{\mathbf{B},\mathbf{C}}J\left(\mathbf{B},\mathbf{C}\right)\equiv \left(\min_{\mathbf{B},\mathbf{c}_1}J_1(\mathbf{B},\mathbf{c}_1),\ldots,\min_{\mathbf{B},\mathbf{c}_N}J_N(\mathbf{B},\mathbf{c}_N)\right), \label{eq000}\\
&\text{where} \nonumber \\
&\min_{\mathbf{B},\mathbf{c}_n}J_n\left(\mathbf{B},\mathbf{c}_n\right)=\frac{1}{2}\|\mathbf{a}_n-\mathbf{B}\mathbf{c}_n\|_2^2,\;\,n\in[1,N]. \label{eq010}
\end{align}
Minimizing $J_n$ is known as the nonnegative least squares (NLS) problem, and some fast NMF algorithms are developed based on solving the NLS subproblems, e.g., alternating NLS with the block principal pivoting algorithm \cite{Kim2}, the active set method \cite{HKim}, and the projected quasi-Newton algorithm \cite{DKim}. Decomposing the NMF problem into NLS subproblems also transforms the non-convex optimization in eq.~\ref{eq000} into the convex optimization subproblems in eq.~\ref{eq010}. Even though eq.~\ref{eq010} is not strictly convex, for the two-block case any limit point of the sequence \{$\mathbf{B}^t$,$\mathbf{C}^t$\}, where $t$ is the updating step, is a stationary point \cite{Grippo}.
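A minimal sketch of this alternating NLS scheme (an illustration added here, using SciPy's \texttt{nnls} routine rather than the faster solvers cited above):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def anls_nmf(A, K, iters=50, seed=0):
    """Alternate between the NLS subproblems for C (columns) and B (rows)."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    B = rng.random((M, K))
    C = np.zeros((K, N))
    for _ in range(iters):
        # Fix B: each column c_n solves min ||a_n - B c_n||_2 s.t. c_n >= 0.
        for n in range(N):
            C[:, n], _ = nnls(B, A[:, n])
        # Fix C: each row of B solves the transposed NLS problem.
        for m in range(M):
            B[m, :], _ = nnls(C.T, A[m, :])
    return B, C
\end{verbatim}
Each inner call solves one convex NLS subproblem of the form of eq.~\ref{eq010}; the row updates of $\mathbf{B}$ solve the transposed subproblems.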
The objective in eq.~\ref{eq010} aims to simultaneously find the suitable basis vectors such that the latent factors are revealed, and the coef\mbox{}ficient vector $\mathbf{c}_n$ such that a linear combination of the basis vectors ($\mathbf{B}\mathbf{c}_n$) is close to $\mathbf{a}_n$. In clustering term this can be rephrased as: to simultaneously find the cluster centers and the cluster assignments.
To investigate the clustering aspect of NMF, four possibilities of NMF settings are discussed: (1) imposing orthogonality constraints on both rows of $\mathbf{C}$ and columns of $\mathbf{B}$, (2) imposing orthogonality constraints on rows of $\mathbf{C}$, (3) imposing orthogonality constraints on columns of $\mathbf{B}$, and (4) no orthogonality constraint is imposed. The last case is the standard NMF, whose clustering aspect is the focus of this paper, as many works have reported that it is a very ef\mbox{}fective clustering method.
\subsection{Orthogonality constraints on both $\mathbf{B}$ and $\mathbf{C}$} \label{BC}
The following theorem proves that imposing column-orthogonality constraints on $\mathbf{B}$ and row-orthogonality constraints on $\mathbf{C}$ leads to the simultaneous clustering of similar items and related features.
\begin{theorem} \label{theorem1}
Minimizing the following objective
\begin{eqnarray}
&&\min_{\mathbf{B},\mathbf{C}}J_a\left(\mathbf{B},\mathbf{C}\right)=\frac{1}{2}\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2} \label{eqa}\\
&&\mathrm{s.t.}\;\,\mathbf{B}\ge\mathbf{0},\mathbf{C}\ge\mathbf{0},\mathbf{B}^T\mathbf{B}=\mathbf{I}, \mathbf{C}\mathbf{C}^T=\mathbf{I} \nonumber
\label{eqb}
\end{eqnarray}
is equivalent to applying ratio association to $\mathcal{G}(\mathbf{A}^T\mathbf{A})$ and $\mathcal{G}(\mathbf{A}\mathbf{A}^T)$, where $\mathbf{A}^T\mathbf{A}$ and $\mathbf{A}\mathbf{A}^T$ are the item af\mbox{}finity matrix and the feature af\mbox{}finity matrix respectively, and thus leads to the simultaneous clustering of similar items and related features.
\end{theorem}
\begin{IEEEproof}
\begin{align}
\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2}=\;&\mathrm{tr}\left(\left(\mathbf{A}-\mathbf{B}\mathbf{C}\right)^T\left(\mathbf{A}-\mathbf{B}\mathbf{C}\right)\right) \nonumber \\
=\;&\mathrm{tr}\left(\mathbf{A}^T\mathbf{A}-2\mathbf{C}^T\mathbf{B}^T\mathbf{A}+\mathbf{I}\right). \label{eqc}
\end{align}
The Lagrangian function:
\begin{align}
L_a\left(\mathbf{B},\mathbf{C}\right)=\;&J_a\left(\mathbf{B},\mathbf{C}\right)-\mathrm{tr}\left(\mathbf{\Gamma}_{\mathbf{B}}\mathbf{B}^T\right)-\mathrm{tr}\left(\mathbf{\Gamma}_{\mathbf{C}}\mathbf{C}\right) + \nonumber \\
&\mathrm{tr}\left(\mathbf{\Lambda}_{\mathbf{B}}\left(\mathbf{B}^T\mathbf{B}-\mathbf{I}\right)\right)+\mathrm{tr}\left(\mathbf{\Lambda}_{\mathbf{C}}\left(\mathbf{C}\mathbf{C}^T-\mathbf{I}\right)\right), \label{eqaa}
\end{align}
where $\mathbf{\Gamma}_{\mathbf{B}}\in\mathbb{R}_{+}^{M\times K}$, $\mathbf{\Gamma}_{\mathbf{C}}\in\mathbb{R}_{+}^{N\times K}$, $\mathbf{\Lambda}_{\mathbf{B}}\in\mathbb{R}_{+}^{K\times K}$, and $\mathbf{\Lambda}_{\mathbf{C}}\in\mathbb{R}_{+}^{K\times K}$ are the Lagrange multipliers. By the Karush-Kuhn-Tucker (KKT) optimality conditions we get:
\begin{align}
\nabla_{\mathbf{B}}L_a=\;&\mathbf{B}-\mathbf{A}\mathbf{C}^T-\mathbf{\Gamma}_{\mathbf{B}}+2\mathbf{B}\mathbf{\Lambda}_{\mathbf{B}}=\mathbf{0},\label{eqbb} \\
\nabla_{\mathbf{C}}L_a=\;&\mathbf{C}-\mathbf{B}^T\mathbf{A}-\mathbf{\Gamma}_{\mathbf{C}}^T+2\mathbf{\Lambda}_{\mathbf{C}}\mathbf{C}=\mathbf{0},\label{eqcc}
\end{align}
with complementary slackness:
\begin{equation}
\mathbf{\Gamma}_{\mathbf{B}}\otimes\mathbf{B}=\mathbf{0},\;\,\mathbf{\Gamma}_{\mathbf{C}}^T\otimes\mathbf{C}=\mathbf{0}, \label{eqdd}
\end{equation}
where $\otimes$ denotes component-wise multiplications. Assuming $\mathbf{\Gamma}_{\mathbf{B}}=\mathbf{0}$, $\mathbf{\Lambda}_{\mathbf{B}}=\mathbf{0}$, $\mathbf{\Gamma}_{\mathbf{C}}=\mathbf{0}$, and $\mathbf{\Lambda}_{\mathbf{C}}=\mathbf{0}$ (at the stationary point these assumptions are reasonable since the complementary slackness conditions hold and the Lagrange multipliers can be assigned to zeros), we get:
\begin{align}
\mathbf{B}=\;&\mathbf{A}\mathbf{C}^T\;\text{and} \label{eqee}\\
\mathbf{C}=\;&\mathbf{B}^T\mathbf{A}. \label{eqff}
\end{align}
Substituting eq.~\ref{eqee} into eq.~\ref{eqc}, we get:
\begin{equation}
\min_{\mathbf{C}}J_a\left(\mathbf{C}\right) \equiv \max_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\right).
\label{eqf}
\end{equation}
Similarly, substituting eq.~\ref{eqff} into eq.~\ref{eqc}, we get:
\begin{equation}
\min_{\mathbf{B}}J_a\left(\mathbf{B}\right) \equiv \max_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{A}\mathbf{A}^T\mathbf{B}\right).
\label{eqe}
\end{equation}
Therefore, minimizing $J_a$ is equivalent to simultaneously optimizing:
\begin{align}
&\max_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\right)\;\,\mathrm{s.t.}\;\,\mathbf{C}\mathbf{C}^T=\mathbf{I},\;\,\mathrm{and} \label{eqg}\\
&\max_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{A}\mathbf{A}^T\mathbf{B}\right)\;\,\mathrm{s.t.}\;\,\mathbf{B}^T\mathbf{B}=\mathbf{I}.\label{eq2s4}
\end{align}
Eq.~\ref{eqg} and eq.~\ref{eq2s4} are the ratio association objectives (see \cite{Dhillon1} for details on various graph cut objectives) applied to $\mathcal{G}(\mathbf{A}^T\mathbf{A})$ and $\mathcal{G}(\mathbf{A}\mathbf{A}^T)$ respectively. Thus minimizing $J_a$ leads to the simultaneous clustering of similar items and related features.
\end{IEEEproof}
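To make the connection explicit (an illustrative step added here, cf.~\cite{Dhillon1}), let $\mathbf{C}$ be a normalized cluster indicator matrix with $c_{kn}=1/\sqrt{|\pi_{k}|}$ if item $n$ belongs to cluster $\pi_{k}$ and $c_{kn}=0$ otherwise, so that $\mathbf{C}\ge\mathbf{0}$ and $\mathbf{C}\mathbf{C}^T=\mathbf{I}$. Then
\[
\mathrm{tr}\left(\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\right)=\sum_{k}\frac{1}{|\pi_{k}|}\sum_{n,n'\in\pi_{k}}\left(\mathbf{A}^T\mathbf{A}\right)_{nn'},
\]
which is exactly the ratio association objective on $\mathcal{G}(\mathbf{A}^T\mathbf{A})$; eq.~\ref{eqg} is its relaxation to arbitrary nonnegative row-orthonormal $\mathbf{C}$.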
\subsection{Orthogonality constraints on $\mathbf{C}$} \label{C}
When the orthogonality constraints are imposed only on rows of $\mathbf{C}$, it is no longer clear whether columns of $\mathbf{B}$ will lead to the feature clustering. The following theorem shows that without imposing the orthogonality constraints on $\mathbf{b}_k$, the resulting $\mathbf{B}$ can still lead to the feature clustering.
\begin{theorem} \label{theorem2}
Minimizing the following objective
\begin{align}
&\min_{\mathbf{B},\mathbf{C}}J_{b}\left(\mathbf{B},\mathbf{C}\right)=\frac{1}{2}\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2} \label{eq4}\\
&\mathrm{s.t.}\;\,\mathbf{B}\ge\mathbf{0},\mathbf{C}\ge\mathbf{0},\mathbf{C}\mathbf{C}^T=\mathbf{I} \nonumber
\end{align}
is equivalent to applying ratio association to $\mathcal{G}(\mathbf{A}^T\mathbf{A})$, and also leads to the feature clustering indicator matrix $\mathbf{B}$ which is approximately column-orthogonal.
\end{theorem}
\begin{IEEEproof}
\begin{align}
\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2}=\;&\mathrm{tr}\left(\left(\mathbf{A}-\mathbf{B}\mathbf{C}\right)^T\left(\mathbf{A}-\mathbf{B}\mathbf{C}\right)\right) \nonumber \\
=\;&\mathrm{tr}\left(\mathbf{A}^T\mathbf{A}-2\mathbf{B}^T\mathbf{A}\mathbf{C}^T+\mathbf{C}^T\mathbf{B}^T\mathbf{B}\mathbf{C}\right). \label{eq6a}
\end{align}
The Lagrangian function:
\begin{align}
L_b\left(\mathbf{B},\mathbf{C}\right)=\;&J_b\left(\mathbf{B},\mathbf{C}\right)-\mathrm{tr}\left(\mathbf{\Gamma}_{\mathbf{B}}\mathbf{B}^T\right)-\mathrm{tr}\left(\mathbf{\Gamma}_{\mathbf{C}}\mathbf{C}\right) + \nonumber \\
&\mathrm{tr}\left(\mathbf{\Lambda}_{\mathbf{C}}\left(\mathbf{C}\mathbf{C}^T-\mathbf{I}\right)\right). \label{eq7a}
\end{align}
By applying the KKT conditions, we get:
\begin{align}
\mathbf{B}=\;&\mathbf{A}\mathbf{C}^T\;\text{and} \label{eq8a}\\
\mathbf{C}=\;&\mathbf{B}^T\mathbf{A}. \label{eq9a}
\end{align}
By substituting eq.~\ref{eq8a} and eq.~\ref{eq9a} into eq.~\ref{eq6a}, minimizing $J_b$ is equivalent to simultaneously optimizing:
\begin{align}
&\max_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\right)\;\,\text{s.t.}\;\,\mathbf{C}\mathbf{C}^T=\mathbf{I}, \label{eq10a}\\
&\max_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{A}\mathbf{A}^T\mathbf{B}\right), \label{eq11a}\;\,\text{and} \\
&\min_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{A}^T\mathbf{B}\mathbf{B}^T\mathbf{B}\mathbf{B}^T\mathbf{A}\right)\equiv\min_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{B}\mathbf{B}^T\mathbf{B}\right).
\label{eq12a}
\end{align}
Note that the step in eq.~\ref{eq12a} is justifiable since $\mathbf{A}$ is a constant matrix. By using the fact $\mathrm{tr}(\mathbf{X}^T\mathbf{X})=\|\mathbf{X}\|_F^2$, eq.~\ref{eq12a} can be rewritten as:
\begin{equation}
\min_{\mathbf{B}}\;\left\|\mathbf{B}^T\mathbf{B}\right\|_F^2=\min_{\mathbf{B}}\;\Big(\sum_{i}\left(\mathbf{b}_{i}^T\mathbf{b}_{i}\right)^2 + \sum_{i\ne j}\left(\mathbf{b}_{i}^T\mathbf{b}_{j}\right)^2 \Big). \label{eq13abc}
\end{equation}
The objective in eq.~\ref{eq10a} is equivalent to eq.~\ref{eqg} and eventually leads to the clustering of similar items. So the remaining problem is how to prove that optimizing eq.~\ref{eq11a} and \ref{eq13abc} simultaneously will lead to the feature clustering indicator matrix $\mathbf{B}$ which is approximately column-orthogonal.
Eq.~\ref{eq11a} resembles eq.~\ref{eqe}, but without an orthogonality or upper bound constraint, so one can trivially optimize eq.~\ref{eq11a} by letting the entries of $\mathbf{B}$ grow without bound. However, this violates eq.~\ref{eq13abc}, which favors small $\mathbf{B}$. Conversely, one can optimize eq.~\ref{eq13abc} by setting $\mathbf{B}$ to the null matrix, but again this violates eq.~\ref{eq11a}. Therefore, these two objectives create implicit lower and upper bound constraints on $\mathbf{B}$, and eq.~\ref{eq11a} and eq.~\ref{eq13abc} can be rewritten as:
\begin{align}
&\max_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{\hat{A}}\mathbf{B}\right),\;\,\text{and} \label{eq13aa} \\
&\min_{\mathbf{B}}\;\Big( \underbrace{\sum_{i}\left(\mathbf{b}_{i}^T\mathbf{b}_{i}\right)^2}_{j_{b1}} + \underbrace{\sum_{i\ne j}\left(\mathbf{b}_{i}^T\mathbf{b}_{j}\right)^2}_{j_{b2}} \Big) \label{eq13ab} \\
&\text{s.t.}\;\,\mathbf{0}\le\mathbf{B}\le\mathbf{\Upsilon}_{\mathbf{B}}, \nonumber
\end{align}
where $\mathbf{\hat{A}}$ denotes the feature affinity matrix and $\mathbf{\Upsilon}_{\mathbf{B}}$ denotes the upper bound constraints on $\mathbf{B}$. Now we have box-constrained objectives, which are known to behave well and are guaranteed to converge to a stationary point \cite{Calamai}.
Even though the objectives are now transformed into box-constrained optimization problems, since there is no column-orthogonality constraint, maximizing eq.~\ref{eq13aa} can easily be done by setting each entry of $\mathbf{B}$ to its largest possible value (in graph terms this means creating only one partition on $\mathcal{G}(\mathbf{\hat{A}})$). But this scenario results in the maximum value of eq.~\ref{eq13ab}, which violates that objective. Conversely, minimizing eq.~\ref{eq13ab} to the smallest possible value (minimizing $j_{b1}$ implies minimizing $j_{b2}$, but not vice versa) violates eq.~\ref{eq13aa}.
Thus, the most reasonable scenario is to set $j_{b2}$ as small as possible and to balance $j_{b1}$ with eq.~\ref{eq13aa}. This scenario is the relaxed ratio association applied to $\mathcal{G}(\mathbf{\hat{A}})$, and as long as the vertices of $\mathcal{G}(\mathbf{\hat{A}})$ are clustered, simultaneously optimizing eq.~\ref{eq13aa} and eq.~\ref{eq13ab} leads to the clustering of related features. Moreover, as $j_{b2}$ is minimized, $\mathbf{B}$ is approximately column-orthogonal.
\end{IEEEproof}
\subsection{Orthogonality constraints on $\mathbf{B}$} \label{B}
\begin{theorem} \label{theorem3}
Minimizing the following objective
\begin{align}
&\min_{\mathbf{B},\mathbf{C}}J_{c}\left(\mathbf{B},\mathbf{C}\right)=\frac{1}{2}\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2} \label{eq13}\\
&\mathrm{s.t.}\;\,\mathbf{B}\ge\mathbf{0},\mathbf{C}\ge\mathbf{0},\mathbf{B}^T\mathbf{B}=\mathbf{I} \nonumber
\end{align}
is equivalent to applying ratio association to $\mathcal{G}(\mathbf{A}\mathbf{A}^T)$, and also leads to the item clustering indicator matrix $\mathbf{C}$ which is approximately row-orthogonal.
\end{theorem}
\begin{IEEEproof}
By following the proof of theorem \ref{theorem2}, minimizing $J_c$ is equivalent to simultaneously optimizing:
\begin{align}
&\max_{\mathbf{B}}\;\mathrm{tr}\left(\mathbf{B}^T\mathbf{A}\mathbf{A}^T\mathbf{B}\right)\;\,\text{s.t.}\;\,\mathbf{B}^T\mathbf{B}=\mathbf{I}, \label{eq14a}\\
&\max_{\mathbf{C}}\;\mathrm{tr}\left(2\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\right),\;\,\text{and} \label{eq14b}\\
&\min_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}^T\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\mathbf{C}\right)\equiv\min_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}\mathbf{C}^T\mathbf{C}\mathbf{C}^T\right).
\label{eq14c}
\end{align}
Eq.~\ref{eq14a} is equivalent to eq.~\ref{eq2s4} and leads to the clustering of related features. Optimizing eq.~\ref{eq14b} and eq.~\ref{eq14c} simultaneously is equivalent to:
\begin{align}
&\max_{\mathbf{C}}\;\mathrm{tr}\left(\mathbf{C}\mathbf{\tilde{A}}\mathbf{C}^T\right),\;\,\text{and} \label{eq14aa} \\
&\min_{\mathbf{C}}\;\Big(\underbrace{\sum_{i}\left(\mathbf{\check{c}}_{i}\mathbf{\check{c}}_{i}^T\right)^2}_{j_{c1}} + \underbrace{\sum_{i\ne j}\left(\mathbf{\check{c}}_{i}\mathbf{\check{c}}_{j}^T\right)^2}_{j_{c2}} \Big) \label{eq14ab} \\
&\text{s.t.}\;\,\mathbf{0}\le\mathbf{C}\le\mathbf{\Upsilon}_{\mathbf{C}}, \nonumber
\end{align}
where $\mathbf{\tilde{A}}$ denotes the item affinity matrix, $\mathbf{\check{c}}_i$ denotes the $i$-th row of $\mathbf{C}$, and $\mathbf{\Upsilon}_{\mathbf{C}}$ denotes the upper-bound constraints on $\mathbf{C}$.
As in the proof of theorem \ref{theorem2}, the most reasonable scenario in simultaneously optimizing eq.~\ref{eq14aa} and eq.~\ref{eq14ab} is by setting $j_{c2}$ as small as possible and balancing $j_{c1}$ with eq.~\ref{eq14aa}. This leads to the clustering of similar items, and as $j_{c2}$ is minimum, $\mathbf{C}$ is approximately row-orthogonal.
\end{IEEEproof}
\subsection{No orthogonality constraint on both $\mathbf{B}$ and $\mathbf{C}$} \label{NO}
In this section we prove that applying the standard NMF to the feature-by-item data matrix eventually leads to the simultaneous feature and item clustering.
\begin{theorem} \label{theorem4}
Minimizing the following objective
\begin{align}
&\min_{\mathbf{B},\mathbf{C}}J_{d}\left(\mathbf{B},\mathbf{C}\right)=\frac{1}{2}\|\mathbf{A}-\mathbf{B}\mathbf{C}\|_{F}^{2} \label{eq15}\\
&\mathrm{s.t.}\;\,\mathbf{B}\ge\mathbf{0},\mathbf{C}\ge\mathbf{0}, \nonumber
\end{align}
leads to the feature clustering indicator matrix $\mathbf{B}$ and the item clustering indicator matrix $\mathbf{C}$ which are approximately column- and row-orthogonal respectively.
\end{theorem}
\begin{IEEEproof}
By following the proof of theorem \ref{theorem2}, minimizing $J_d$ is equivalent to simultaneously optimizing:
\begin{align}
&\max_{\mathbf{B},\mathbf{C}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{A}\mathbf{C}^T\right), \label{eq17a} \;\,\text{and} \\
&\min_{\mathbf{B},\mathbf{C}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{B}\mathbf{C}\mathbf{C}^T\right). \label{eq17b}
\end{align}
By substituting $\mathbf{B}=\mathbf{A}\mathbf{C}^T$ and $\mathbf{C}=\mathbf{B}^T\mathbf{A}$ into the above equations, we get:
\begin{align}
&\max_{\mathbf{B}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{\hat{A}}\mathbf{B}\right), \label{eq18a} \;\,\text{and} \\
&\min_{\mathbf{B}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{B}\mathbf{B}^T\mathbf{A}\mathbf{A}^T\mathbf{B}\right)\equiv\min_{\mathbf{B}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{B}\mathbf{B}^T\mathbf{B}\right) \label{eq18b}
\end{align}
for feature clustering, and:
\begin{align}
&\max_{\mathbf{C}}\mathrm{tr}\left(\mathbf{C}\mathbf{\tilde{A}}\mathbf{C}^T\right), \label{eq19a} \;\,\text{and} \\
&\min_{\mathbf{C}}\mathrm{tr}\left(\mathbf{C}\mathbf{A}^T\mathbf{A}\mathbf{C}^T\mathbf{C}\mathbf{C}^T\right)\equiv\min_{\mathbf{C}}\mathrm{tr}\left(\mathbf{C}\mathbf{C}^T\mathbf{C}\mathbf{C}^T\right) \label{eq19b}
\end{align}
for item clustering. Therefore, minimizing $J_d$ is equivalent to simultaneously optimizing:
\begin{align}
&\max_{\mathbf{B}}\mathrm{tr}\left(\mathbf{B}^T\mathbf{\hat{A}}\mathbf{B}\right),\label{eq20a}\\
&\min_{\mathbf{B}}\;\Big(\sum_{i}\left(\mathbf{b}_{i}^T\mathbf{b}_{i}\right)^2 + \sum_{i\ne j}\left(\mathbf{b}_{i}^T\mathbf{b}_{j}\right)^2\Big), \label{eq20b} \\
&\max_{\mathbf{C}}\mathrm{tr}\left(\mathbf{C}\mathbf{\tilde{A}}\mathbf{C}^T\right),\;\,\text{and}\label{eq20c} \\
&\min_{\mathbf{C}}\;\Big(\sum_{i}\left(\mathbf{\check{c}}_{i}\mathbf{\check{c}}_{i}^T\right)^2 + \sum_{i\ne j}\left(\mathbf{\check{c}}_{i}\mathbf{\check{c}}_{j}^T\right)^2\Big), \label{eq20d} \\
&\text{s.t.}\;\,\mathbf{0}\le\mathbf{B}\le\mathbf{\Upsilon}_{\mathbf{B}},\;\,\text{and}\;\,\mathbf{0}\le\mathbf{C}\le\mathbf{\Upsilon}_{\mathbf{C}}, \nonumber
\end{align}
which will lead to the feature clustering indicator matrix $\mathbf{B}$ and the item clustering indicator matrix $\mathbf{C}$ that are approximately column- and row-orthogonal respectively.
\end{IEEEproof}
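As a quick illustration of Theorem \ref{theorem4} (our own synthetic example, not an experiment reported in this paper), one can apply the standard NMF to a matrix with two planted feature--item co-clusters and inspect how close $\mathbf{B}$ is to column-orthogonality:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
A = np.zeros((40, 60))
A[:20, :30] = rng.uniform(1, 2, (20, 30))   # co-cluster 1
A[20:, 30:] = rng.uniform(1, 2, (20, 30))   # co-cluster 2
A += 0.05 * rng.uniform(size=A.shape)       # noise; A stays nonnegative

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
B = model.fit_transform(A)      # basis ~ feature clustering indicator
C = model.components_           # coefficient ~ item clustering indicator

G = B.T @ B
off = np.abs(G - np.diag(np.diag(G))).sum() / np.diag(G).sum()
print(off)                      # relative off-diagonal mass of B^T B
print(C.argmax(axis=0))         # item labels recovered from C
\end{verbatim}
In our runs the relative off-diagonal mass of $\mathbf{B}^T\mathbf{B}$ stays close to zero and $\mathbf{C}$ recovers the two item groups, in line with Theorem \ref{theorem4}.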
\section{Unipartite and directed graph cases} \label{ud}
\balance
The affinity matrix $\mathbf{W}$ induced from a unipartite (undirected) graph is a symmetric matrix, which is a special case of the rectangular affinity matrix $\mathbf{A}$. Therefore, following the discussion in section \ref{clusteringnmf}, it can be shown that the standard NMF applied to $\mathbf{W}$ leads to a clustering indicator matrix that is almost orthogonal.
The affinity matrix $\mathbf{V}$ induced from a directed graph is an asymmetric square matrix. Since the columns and rows of $\mathbf{V}$ correspond to the same set of vertices in the same order, as far as the clustering problem is concerned, $\mathbf{V}$ can be replaced by $\mathbf{V}+\mathbf{V}^T$, which is a symmetric matrix. The standard NMF can then be applied to this symmetric matrix to obtain a clustering indicator matrix that is almost orthogonal.
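A corresponding sketch for the directed case, again with a toy matrix of ours:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

V = np.array([[0, 3, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 2],
              [0, 0, 4, 0]], dtype=float)   # directed affinities
W = V + V.T                                  # symmetrize
H = NMF(n_components=2, init="nndsvda",
        random_state=0).fit_transform(W)
print(H.argmax(axis=1))   # cluster labels; vertices {0,1} and {2,3}
                          # are expected to group together
\end{verbatim}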
\section{Related works} \label{rw}
Ding et al.~\cite{Ding} provide a theoretical analysis of the equivalence between orthogonal NMF and $K$-means clustering for both rectangular data matrices and symmetric matrices. However, as their proofs utilize the zero-gradient conditions, the hidden assumptions (setting the Lagrange multipliers to zero) are not revealed there. In fact, it can easily be shown that their approach amounts to the KKT conditions applied to the unconstrained version of eq.~\ref{eq2}. Thus there is no guarantee that minimizing eq.~\ref{eq2} by using the zero-gradient conditions leads to a stationary point located in the nonnegative orthant, as required by the objective.
That applying the standard NMF to a symmetric matrix leads to an almost orthogonal matrix was previously proven by Ding et al.~\cite{Ding2}. But due to the approach used, the theorem cannot be extended to rectangular matrices, which are by far the most common form of the data (practical applications of NMF seem to involve almost exclusively rectangular matrices). Therefore, their results cannot be used to explain the abundant experimental results that show the power of the standard NMF in clustering, latent factor identification, learning the parts of objects, and producing sparse matrices even without an explicit sparsity constraint \cite{Lee}.
\section{Conclusion} \label{conc}
By using the strict KKT optimality conditions, we showed that even without explicitly imposing orthogonality or sparsity constraints, NMF produces an approximately column-orthogonal basis matrix and an approximately row-orthogonal coefficient matrix, which lead to simultaneous feature and item clustering. This result therefore gives a theoretical explanation for experimental results showing the power of the standard NMF as a clustering tool, which has been reported to outperform spectral methods \cite{Xu} and the $K$-means algorithm \cite{Kim}.
\end{document} |
\begin{document}
\begin{abstract}
In this paper we present a combinatorial
proof of the Kronecker--Weber Theorem
for global fields of positive characteristic.
The main tools are the use of Witt vectors and their
arithmetic developed by H. L. Schmid. The key
result is to obtain, using counting arguments,
how many $p$--cyclic extensions of fixed degree
and bounded conductor exist in which only
one prime ramifies. We then
compare this
number with the number of subextensions of
cyclotomic function fields of the same
type and verify that these two numbers
are the same.
\end{abstract}
\maketitle
\section{Introduction}\label{S1}
The classical Kronecker--Weber Theorem establishes that every
finite abelian extension of ${\mathbb Q}$, the field of rational numbers, is
contained in a cyclotomic field. Equivalently, the maximal abelian
extension of ${\mathbb Q}$ is the union of all cyclotomic
fields. In 1974, D. Hayes \cite{Hay74},
proved the analogous result for rational congruence function fields.
Hayes constructed first cyclotomic function fields as
the analogue to classical cyclotomic fields. Indeed,
the analogy was developed in the
first place by L. Carlitz in the 1930's.
The union of all these cyclotomic function fields is not the
maximal abelian extension of the rational congruence function
field $k={\mathbb F}_q(T)$ since all these extensions are geometric
and the infinite prime is tamely ramified. Hayes proved that
the maximal abelian extension of $k$ is the composite of
three linearly disjoint fields: the first one is
the union of all cyclotomic fields; the second one is
the union of all constant extensions
and the third one is the union of all the subfields of the
corresponding cyclotomic function fields, where the
infinite prime is totally wildly ramified. The proof of this
theorem uses Artin--Takagi's reciprocity law
in class field theory.
In the classical case, possibly the simplest proof of the
Kronecker--Weber Theorem uses ramification groups.
The key tool in the proof is that there is only one cyclic
extension of ${\mathbb Q}$ of degree $p$, $p$ an odd prime,
where $p$ is the only ramified prime. Indeed, this field
is the unique subfield of degree $p$ of the cyclotomic
field of $p^2$--roots of unity.
In the case of function fields the situation is quite different.
There exist infinitely many cyclic extensions of $k$ of degree $p$
where only one fixed prime divisor is ramified.
In this paper we present a proof of the Kronecker--Weber
Theorem analogue for rational congruence function fields using
counting arguments for the case of wild ramification.
First, similarly to the classical case, we prove that a
finite abelian tamely ramified extension of $k$ is contained in
the composite of a cyclotomic function field and a constant
extension (see \cite{SaRzVi2012}). Next, the key part, is to show that every
cyclic extension of $p$--power degree, where there
is only one ramified prime and it is fully ramified, and
the infinite prime is fully decomposed, is contained
in a cyclotomic function field. The particular case of an
Artin--Schreier extension was completely solved in \cite{SaRzVi2012-2}
using counting techniques.
In this paper we present another proof for the Artin--Schreier case
that also uses counting techniques but is
suitable for generalization to cyclic extensions of degree $p^n$.
Once the latter is proven, the
rest of the proof follows easily. We use the arithmetic
of Witt vectors developed by Schmid in \cite{Sch36} to give
two proofs, one by induction and the other direct.
\section{The result}\label{S2}
We give first some notation and some results in the theory of
cyclotomic function fields developed by D. Hayes
\cite{Hay74}. See also \cite{Vil2006}. Let $k={\mathbb F}_q(T)$ be
a congruence rational function field, ${\mathbb F}_q$ denoting the finite
field of $q=p^s$ elements, where $p$ is the characteristic.
Let $R_T={\mathbb F}_q[T]$ be the ring of
polynomials, that is, $R_T$ is
the ring of integers of $k$.
For $N\in R_T\setminus\{0\}$, $\Lambda_N$ denotes the $N$--torsion of the Carlitz
module and $k(\Lambda_N)$ denotes the $N$--th cyclotomic
function field. The $R_T$--module $\Lambda_N$ is cyclic.
For any $m\in{\mathbb N}$,
$C_m$ denotes a cyclic group of order $m$. Let
$K_T:=\bigcup_{M\in R_T} k(\Lambda_M)$ and ${\mathbb F}_{\infty}:=
\bigcup_{m\in{\mathbb N}}{\mathbb F}_{q^m}$.
We denote by ${\mathfrak p}_{\infty}$
the pole divisor of $T$ in $k$.
In $k(\Lambda_N)/k$, ${\eu p}_{\infty}$ has ramification index $q-1$
and decomposes into $\frac{|G_N|}{q-1}$ different
prime divisors of $k(\Lambda_N)$ of degree $1$, where
$G_N:=\operatorname{Gal}(k(\Lambda_N)/k)$. Furthermore,
with the identification $G_N\cong \G N$, the inertia
group ${\mathfrak I}$ of ${\mathfrak p}_{\infty}$ is ${\mathbb F}_q^{\ast}\subseteq \G N$,
that is, ${\mathfrak I}=\{\sigma_a\mid a\in {\mathbb F}_q^{\ast}\}$,
where for $A\in R_T$ we use the notation $\sigma_A(\lambda)=\lambda^A$ for $\lambda \in \Lambda_N$.
We denote by $R_T^+$ the set of monic irreducible polynomials
in $R_T$. The
primes that ramify in $k(\Lambda_N)/k$ are ${\mathfrak p}_{\infty}$
and the polynomials $P\in R_T^+$ such that $P\mid N$,
except in the extreme case $q=2$, $N\in\{T,T+1,T(T+1)\}$
because in this case we have $k(\Lambda_N)=k$.
We set $L_n$ to be the largest
subfield of $k\big(\Lambda_{1/T^{n+1}}\big)$ where ${\eu p}_{\infty}$
is fully and purely wildly ramified, $n\in{\mathbb N}$.
That is, $L_n=k\big(\Lambda_{1/T^{n+1}}\big)^{{\mathbb F}_q^{\ast}}$.
Let $L_{\infty}:=\bigcup_{n\in{\mathbb N}}L_n$.
For any prime divisor ${\mathfrak p}$ in a field $K$, $v_{\mathfrak p}$ will denote
the valuation corresponding to ${\mathfrak p}$.
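For concreteness (a standard computation, not needed in the sequel), recall that the Carlitz action of $T$ on $u$ is $u^T=Tu+u^q$, so $\Lambda_{T^2}$ consists of the $q^2$ roots of the additive polynomial obtained by iterating this action; the following short script checks the relevant degrees for $q=3$:
\begin{verbatim}
import sympy as sp

q = 3                                  # illustration with q = 3
T, u = sp.symbols("T u")
act_T = lambda x: T*x + x**q           # Carlitz action of T
C_T2 = sp.expand(act_T(act_T(u)))      # action of T^2
print(sp.degree(C_T2, u))   # q**2 = 9 roots, i.e. |Lambda_{T^2}|
print(q*(q - 1))            # Phi(T^2) = 6 = [k(Lambda_{T^2}) : k]
\end{verbatim}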
The main goal
of this paper is to prove the following result.
\begin{theorem}[Kronecker--Weber, {\cite{Hay74}},
{\cite[Theorem 12.8.31]{Vil2006}}]\label{T2.1}
The maximal abelian extension $A$ of $k$ is $A=K_T
{\mathbb F}_{\infty} L_{\infty}$. \qed
\end{theorem}
To prove Theorem \ref{T2.1} it suffices to show
that any finite abelian extension of $k$ is contained in
$k(\Lambda_N) {\mathbb F}_{q^m} L_n$ for some $N\in R_T$
and $m,n\in{\mathbb N}$.
Let $L/k$ be a finite abelian extension. Let
\[
G:=\operatorname{Gal}(L/k)\cong
C_{n_1}\times \cdots\times C_{n_l}\times C_{p^{a_1}}\times
\cdots \times C_{p^{a_h}}
\]
where $\gcd (n_i,p)=1$ for $1\leq
i\leq l$ and $a_j\in{\mathbb N}$ for $1\leq j\leq h$. Let $S_i\subseteq L$
be such that $\operatorname{Gal}(S_i/k)\cong C_{n_i}$, $1\leq i\leq l$ and let
$R_j\subseteq L$ be such that $\operatorname{Gal}(R_j/k)\cong C_{p^{a_j}}$,
$1\leq j\leq h$. To prove Theorem \ref{T2.1} it is enough
to show that each $S_i$ and each $R_j$ are contained
in $k(\Lambda_N) {\mathbb F}_{q^m} L_n$ for some $N\in R_T$
and $m,n\in{\mathbb N}$.
In short, we may assume that $L/k$ is a cyclic extension
of degree $h$ where either $\gcd(h,p)=1$ or $h=p^n$ for
some $n\in{\mathbb N}$.
\section{Geometric tamely ramified extensions}\label{S3}
In this section, we prove Theorem \ref{T2.1} for the particular case
of a tamely ramified extension.
Let $L/k$ be an abelian extension. Let $P\in R_T$, $d:=\deg P$.
\begin{proposition}\label{P3.1}
Let $P$ be tamely ramified in $L/k$. If $e$ denotes
the ramification index of $P$ in $L$, we have
$e\mid q^d-1$.
\end{proposition}
\proof
First we consider in general an abelian extension $L/k$. Let
$G_{-1}=D$ be the decomposition group of $P$, $G_0=I$
be the inertia group and $G_i$, $i\geq 1$ be the
ramification groups. Let ${\mathfrak P}$ be a prime divisor
in $L$ dividing $P$. Then, if ${\mathcal O}_{{\mathfrak P}}$
denotes the valuation ring of ${\mathfrak P}$, we have
\[
U^{(i)}=1+{\mathfrak P}^i\subseteq {\mathcal O}_{\mathfrak P}^{\ast}=
{\mathcal O}_{\mathfrak P}\setminus {\mathfrak P},\quad i\geq 1,\qquad
U^{(0)}={\mathcal O}_{\mathfrak P}^{\ast}.
\]
Let $l({\mathfrak P}):= {\mathcal O}_{\mathfrak P}/{\mathfrak P}$ be the
residue field at ${\mathfrak P}$. The following are monomorphisms:
\begin{eqnarray*}
G_i/G_{i+1}&\stackrel{\varphi_i}{\hookrightarrow} &U^{(i)}/U^{(i+1)}\cong
\begin{cases}
l({\mathfrak P})^{\ast}, & i=0\\
{\mathfrak P}^i/{\mathfrak P}^{i+1}\cong l({\mathfrak P}), & i\geq 1,
\end{cases}\\
\bar{\sigma}&\mapsto&\sigma\pi/\pi
\end{eqnarray*}
where $\pi$ denotes a prime element for ${\mathfrak P}$.
We will prove that if $G_{-1}/G_1=D/G_1$ is abelian, then
\[
\varphi=\varphi_0\colon G_0/G_1\longrightarrow U^{(0)}/U^{(1)}\cong
\big({\mathcal O}_{\mathfrak P}/{\mathfrak P}\big)^{\ast}
\]
satisfies that $\operatorname{im} \varphi\subseteq {\mathcal O}_P/(P)\cong
R_T/(P)\cong {\mathbb F}_{q^d}$. In particular it will follow that
$\big|G_0/G_1\big|\mid \big|{\mathbb F}_{q^d}^{\ast}\big|=q^d-1$.
To prove this statement, note that
\[
\operatorname{Aut}\big(({\mathcal O}_{\mathfrak P}/{\mathfrak P}) / ({\mathcal O}_P/(P))\big)\cong
\operatorname{Gal}\big(({\mathcal O}_{\mathfrak P}/{\mathfrak P})/({\mathcal O}_P/(P))\big)=D/I=G_{-1}/G_0
\]
(see \cite[Corollary 5.2.12]{Vil2006}).
Let $\sigma\in G_0$ and $\varphi(\bar{\sigma})=\varphi(\sigma
\bmod G_1)=[\alpha]\in \big({\mathcal O}_{\mathfrak P}/{\mathfrak P}\big)^{\ast}$.
Therefore $\sigma\pi\equiv \alpha\pi\bmod {\mathfrak P}^2$.
Let $\theta\in G_{-1}=D$ be arbitrary and let $\pi_1:=\theta^{-1}\pi$.
Then $\pi_1$ is a prime element for ${\mathfrak P}$. Since
$\varphi$ is independent of the prime element, it follows that
$\sigma \pi_1\equiv \alpha \pi_1\bmod {\mathfrak P}^2$,
that is, $\sigma\theta^{-1}\pi\equiv \alpha\theta^{-1}\pi\bmod {\mathfrak P}^2$.
Since $G_{-1}/G_1$ is an abelian group, we have
\[
\sigma\pi=(\theta\sigma\theta^{-1})(\pi)\equiv \theta(\alpha)\pi\bmod{\mathfrak P}^2.
\]
Thus $\sigma\pi\equiv \theta(\alpha)\pi\bmod{\mathfrak P}^2$ and
$\sigma\pi\equiv \alpha\pi\bmod {\mathfrak P}^2$. It follows that
$\theta(\alpha)\equiv \alpha\bmod {\mathfrak P}$ for all $\theta\in G_{-1}$.
If we write $\tilde{\theta}=\theta\bmod G_0$, then $\tilde{\theta}[\alpha]
=[\alpha]$, that is, $[\alpha]$ is a fixed element under the action
of the group $G_{-1}/G_0\cong \operatorname{Gal}(({\mathcal O}_{\mathfrak P}/{\mathfrak P})/({\mathcal O}_P/(P)))$.
We obtain that $[\alpha]\in {\mathcal O}_P/(P)$.
Therefore $\operatorname{im} \varphi\subseteq \big({\mathcal O}_P/(P)\big)^{\ast}$
and $\big|G_0/G_1\big|\mid \big|\big({\mathcal O}_P/(P)\big)^{\ast}\big|=q^d-1$.
Finally, since $L/k$ is abelian and $P$ is tamely ramified, $G_1=\{1\}$, it
follows that $e=|G_0|=|G_0/G_1|\mid q^d-1$. \qed
Now consider a finite abelian tamely ramified extension $L/k$
where $P_1,\ldots,P_r$ are the finite ramified primes.
Let $P\in\{P_1,\ldots,P_r\}$, $\deg P=d$. Let $e$ be the ramification
index of $P$ in $L$. Then by Proposition \ref{P3.1} we have
$e\mid q^d-1$. Now $P$ is totally ramified in $k(\Lambda_P)/k$ with
ramification index $q^d-1$. In this extension ${\eu p}_{\infty}$ has ramification
index equal to $q-1$.
Let $k\subseteq E\subseteq k(\Lambda_P)$ with $[E:k]=e$.
Let $\tilde{{\mathfrak P}}$ be a prime divisor in $LE$ dividing $P$.
Let ${\mathfrak q}:= \tilde{\mathfrak P}|_E$ and ${\mathfrak P}:=\tilde{\mathfrak P}|_L$.
\[
\xymatrix{
{\mathfrak P}\ar@/^1pc/@{-}[rrrr]\ar@/_1pc/@{-}[dddd]\ar@{--}[dr]&&&&\tilde{\mathfrak P}
\ar@/^1pc/@{-}[dddd]\ar@{--}[dl]\\
&L\ar@{--}[rr]\ar@{-}[dd]&&LE\ar@{--}[dd]\\
&&M\ar@{-}[dl]\ar@{-}[ur]_H\\
&k\ar@{-}[rr]_e&&E\ar@{-}[rr]\ar@{--}[dr]&&k(\Lambda_P)\\
P\ar@/_1pc/@{-}[rrrr]\ar@{--}[ru]&&&&{\mathfrak q}
}
\]
We have $e=e_{L/k}({\mathfrak P}|P)=e_{E/k}({\mathfrak q}|P)$. By
Abhyankar's Lemma \cite[Theorem 12.4.4]{Vil2006}, we obtain
\[
e_{LE/k}(\tilde{\mathfrak P}|P)=\operatorname{lcm} [e_{L/k}({\mathfrak P}|P), e_{E/k}({\mathfrak q}|P)]=\operatorname{lcm}[e,e]=e.
\]
Let $H\subseteq \operatorname{Gal}(LE/k)$ be the inertia group of $\tilde{\mathfrak P}/P$.
Set $M:=(LE)^H$. Then $P$ is unramified in $M/k$.
We want to see that $L\subseteq Mk(\Lambda_P)$. Indeed, we have
$[LE:M]=e$ and $E\cap M=k$ since $P$ is totally ramified
in $E/k$ and unramified in $M/k$. It follows that
$[ME:k]=[M:k][E:k]$. Therefore
\[
[LE:k]=[LE:M][M:k]=e\frac{[ME:k]}{[E:k]}=
e\frac{[ME:k]}{e}=[ME:k].
\]
Since $ME\subseteq LE$ it follows that
$LE=ME=EM\subseteq k(\Lambda_P)M$. Thus
$L\subseteq k(\Lambda_P)M$.
In $M/k$ the finite ramified primes are
$\{P_2,\cdots, P_r\}$. In case $r-1\geq 1$,
we may apply the above argument to $M/k$ and we
obtain $M_2/k$ such that at most $r-2$ finite
primes are ramified and $M\subseteq k(\Lambda_{P_2})M_2$,
so that $L\subseteq k(\Lambda_{P_1})M\subseteq
k(\Lambda_{P_1})k(\Lambda_{P_2})M_2=
k(\Lambda_{P_1P_2})M_2$.
Performing the above process at most $r$ times we
have
\begin{equation}\label{E3.2}
L\subseteq k(\Lambda_{P_1P_2\cdots P_r})M_0
\end{equation}
where in $M_0/k$ the only possible ramified prime is ${\eu p}_{\infty}$.
We also have
\begin{proposition}\label{P3.3}
Let $L/k$ be an abelian extension where at most
one prime divisor ${\mathfrak p}_0$ of degree $1$ is
ramified and the extension is tamely ramified. Then
$L/k$ is a constant extension.
\end{proposition}
\begin{proof}
By Proposition \ref{P3.1} we have $e:=e_{L/k}({\mathfrak p}_0)\mid q-1$.
Let $H$ be the inertia group of ${\mathfrak p}_0$. Then $|H|=e$ and
${\mathfrak p}_0$ is unramified in $E:=L^H/k$. Therefore $E/k$
is an unramified extension. Thus $E/k$ is a constant
extension.
Let $[E:k]=m$. Then if ${\mathfrak P}_0$ is a prime divisor in $E$
dividing ${\mathfrak p}_0$, the relative degree $d_{E/k}({\mathfrak P}_0|{\mathfrak p}_0)$
is equal to $m$, the number of prime divisors of $E$ above ${\mathfrak p}_0$
is $1$ and the degree of ${\mathfrak P}_0$
is $1$ (see \cite[Theorem 6.2.1]{Vil2006}).
Therefore ${\mathfrak P}_0$ is the only prime divisor
ramified in $L/E$ and it is of degree $1$ and totally ramified.
Furthermore $[L:E]=e\mid q-1=|{\mathbb F}_q^{\ast}|$.
The $(q-1)$-th roots of unity belong to ${\mathbb F}_q\subseteq k$.
Hence $k$ contains the $e$--th roots of unity and $L/E$
is a Kummer extension, say $L=E(y)$ with $y^e=\alpha
\in E=k{\mathbb F}_{q^m}={\mathbb F}_{q^m}(T)$.
We write $\alpha$ in a normal form as prescribed by Hasse
\cite{Has35}: $(\alpha)_E=\frac{{\mathfrak P}_0^a {\mathfrak a}}{{\mathfrak b}}$,
$0<a<e$. Now, since $\deg (\alpha)_E=0$,
it follows that $\deg_E {\mathfrak a}$ or $\deg_E {\mathfrak b}$ is not
a multiple of $e$. This contradicts that ${\mathfrak p}_0$ is the
only ramified prime. Therefore $L/k$ is a constant extension.
\end{proof}
As a corollary to (\ref{E3.2}) and Proposition \ref{P3.3}
we obtain
\begin{corollary}\label{C3.5}
If $L/k$ is a finite abelian tamely ramified extension where
the ramified finite prime divisors are $P_1,\ldots,P_r$, then
\[
L\subseteq k(\Lambda_{P_1\cdots P_r})\, {\mathbb F}_{q^m},
\]
for some $m\in{\mathbb N}$.
\qed
\end{corollary}
\section{Reduction steps}\label{S2(1)}
As a consequence of Corollary \ref{C3.5}, Theorem
\ref{T2.1} will follow if we prove it for the particular
case of a cyclic extension $L/k$ of degree $p^n$ for
some $n\in{\mathbb N}$. Now, this kind of extension is
given by a Witt vector:
\[
K=k(\vec y)=k(y_1,\ldots,y_n) \quad\text{with}\quad
\vec y^p\Witt - \vec y =
\vec \beta=(\beta_1,\ldots,\beta_n)\in W_n(k)
\]
where
for any field $E$ of characteristic $p$,
$W_n(E)$ denotes the ring of Witt vectors
of length $n$ with components in $E$.
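For instance, for vectors of length two the Witt sum is given explicitly by
\[
(a_1,a_2)\Witt + (b_1,b_2)=\Big(a_1+b_1,\ a_2+b_2-\sum_{i=1}^{p-1}\frac{1}{p}\binom{p}{i}a_1^{i}b_1^{p-i}\Big),
\]
where each coefficient $\frac{1}{p}\binom{p}{i}$ is an integer, so the expression makes sense in characteristic $p$; this standard formula, obtained from the Witt polynomials, is recalled here only for the reader's convenience.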
The following result was proved in \cite{MaRzVi2013}. It
``separates'' the ramified prime divisors.
\begin{theorem}\label{T2.3} Let $K/k$ be a cyclic extension of degree $p^n$
where $P_1,\ldots,P_r\in R_T^+$ and possibly ${\eu p}_{\infty}$, are the ramified prime
divisors. Then $K=k(\vec y)$ where
\[
\vec y^p\Witt -\vec y=\vec \beta={\vec\delta}_1\Witt + \cdots \Witt + {\vec\delta}_r
\Witt + \vec\mu,
\]
with $\delta_{ij}=\frac{Q_{ij}}{P_i^{e_{ij}}}$, $e_{ij}\geq 0$, $Q_{ij}\in R_T$
and
\begin{list}{(\alph{bean})}{\usecounter{bean}}
\item if $e_{ij}=0$ then $Q_{ij}=0$;
\item if $e_{ij}>0$ then $p\nmid e_{ij}$, $\gcd(Q_{ij},P_i)=1$ and
$\deg (Q_{ij})<\deg (P_i^{e_{ij}})$,
\end{list}
and ${\mu}_j=f_j(T)\in R_T$ with
\begin{list}{(\alph{bean})}{\usecounter{bean}}
\setcounter{bean}{2}
\item $p\nmid \deg f_j$ when $f_j\not\in {\mathbb F}_q$ and
\item $\mu_j\notin\wp({\mathbb F}_q):=\{a^p-a\mid a\in{\mathbb F}_q\}$ when
$\mu_j\in {\mathbb F}_q^{\ast}$.\qed
\end{list}
\end{theorem}
Consider the field $K=k(\vec y)$ as above, where precisely
one prime divisor $P\in R_T^+$ ramifies, with
\begin{gather}\label{EqNew1}
\begin{array}{l}
\text{$\beta_i=\frac{Q_i}{P^{\lambda_i}}$, $Q_i\in R_T$
such that $\lambda_i\geq 0$},\\
\text{if $\lambda_i=0$ then $Q_i=0$,}\\
\text{if $\lambda_i>0$ then $\gcd(\lambda_i,p)=1$,
$\gcd(Q_i,P)=1$ and $\deg Q_i<\deg P^{\lambda_i}$},\\
\lambda_1>0.
\end{array}
\end{gather}
A particular case of Theorem \ref{T2.3} suitable for our study
is given in the following proposition.
\begin{proposition}\label{P2.6} Assume that every extension $K_1/k$
that meets the conditions of {\rm (\ref{EqNew1})}
satisfies that $K_1\subseteq k(\Lambda_{P^{\alpha}})$ for some
$\alpha\in{\mathbb N}$. Let $K/k$ be the extension defined by
$K=k(\vec{y})$ where $\wp(\vec{y})= \vec y^p\Witt - \vec y=
\vec \beta$ with $\vec \beta=(\beta_1,\ldots,\beta_n)$,
$\beta_i$ given in
normal form: $\beta_i\in{\mathbb F}_q$
or $\beta_i=\frac{Q_i}{P^{\lambda_i}}$,
$Q_i\in R_T$ and $\lambda_i>0$,
$\gcd(\lambda_i,p)=1$, $\gcd(Q_i,P)=1$ and $\deg Q_i\leq
\deg P^{\lambda_i}$. Then $K\subseteq {\mathbb F}_{q^{p^n}}
\lam \alpha$ for some $\alpha\in{\mathbb N}$.
\end{proposition}
\proof From Theorem \ref{T2.3} we have that we can
decompose the vector $\vec \beta$ as
$\vec \beta=\vec \varepsilon\Witt + \vec \gamma$ with
$\varepsilon_i\in{\mathbb F}_q$ for all $1\leq i\leq n$ and
$\gamma_i=0$
or $\gamma_i=\frac{Q_i}{P^{\lambda_i}}$,
$Q_i\in R_T$ and $\lambda_i>0$,
$\gcd(\lambda_i,p)=1$, $\gcd(Q_i,P)=1$ and $\deg Q_i<
\deg P^{\lambda_i}$.
Let
$\gamma_1=\cdots=\gamma_r=0$, and
$\gamma_{r+1}\notin {\mathbb F}_q$.
We have $K\subseteq k(\vec \varepsilon)k(\vec \gamma)$.
Now $k(\vec \varepsilon)\subseteq {\mathbb F}_{q^{p^n}}$ and
$k(\vec \gamma)=k(0,\ldots,0,\gamma_{r+1},\ldots,\gamma_n)$.
For any Witt vector $\vec x=(x_1,\ldots,x_n)$ we have
the decomposition given by Witt himself
\begin{align*}
\vec x=&(x_1,0,0,\ldots,0)\Witt +(0,x_2,0,\ldots,0)\Witt +\cdots
\Witt + (0,\ldots, 0,x_j,0,\ldots,0)\\
&\Witt +(0,\ldots, 0, x_{j+1}, \ldots, x_n)
\end{align*}
for each $0\leq j\leq n-1$. It follows that
$k(\vec \gamma)=k(\gamma_{r+1},\ldots,\gamma_n)$.
Since this field fulfills the conditions of (\ref{EqNew1}),
we have $k(\vec \gamma)\subseteq k(\Lambda_{P^{\alpha}})$
for some $\alpha\in{\mathbb N}$. The result follows. \qed
\begin{remark}\label{R2.6'}
The prime ${\eu p}_{\infty}$ can be handled in the same way. The conditions
(\ref{EqNew1}) for ${\eu p}_{\infty}$ are the following. Let $K=k(\vec \mu)$
with $\mu_j=f_j(T)\in R_T$, with $f_j(0)=0$ for all $j$ and
either $f_j(T)=0$ or $f_j(T)\neq 0$ and $p\nmid \deg f_j(T)$.
The condition $f_j(0)=0$ means that the infinite prime for
$T^{\prime}=1/T$ is either decomposed or ramified in each layer,
that is, its inertia degree is $1$ in $K/k$.
In this case, with the change of variable $T^{\prime}=1/T$,
the hypotheses in Proposition \ref{P2.6} say that
any field meeting these conditions satisfies that
$K\subseteq k(\Lambda_{T^{\prime m}})=
k(\Lambda_{T^{-m}})$ for some $m\in{\mathbb N}$.
However, since the degree of the extension $K/k$
is a power of $p$, we must have that $K\subseteq
k(\Lambda_{T^{-m}})^{{\mathbb F}_q^{\ast}}=L_{m-1}$.
\end{remark}
With the notation of Theorem \ref{T2.3}
we obtain that if $\vec z_i^p\Witt - \vec z_i
=\vec \delta_i$, $1\leq i\leq r$ and if $\vec v^p\Witt -\vec v=\vec \mu$,
then $L=k(\vec y)\subseteq k(\vec z_1,\ldots,\vec z_r,\vec v)=
k(\vec z_1)\ldots k(\vec z_r) k(\vec v)$. Therefore if Theorem
\ref{T2.1} holds for each $k(\vec z_i)$, $1\leq i\leq r$ and for
$k(\vec v)$, then it holds for $L$.
From Theorem \ref{T2.3}, Proposition \ref{P2.6} and
the remark after this proposition, we obtain that
to prove Theorem \ref{T2.1} it suffices to show that
any field extension $K/k$ meeting the
conditions of (\ref{EqNew1}) satisfies that
either $K\subseteq k(\Lambda_{P^{\alpha}})$
for some $\alpha\in{\mathbb N}$ or $K\subseteq L_m$
for some $m\in{\mathbb N}$.
It suffices to study the case for $P\in R_T$.
Next we study the behavior of ${\eu p}_{\infty}$ in
an arbitrary cyclic
extension $K/k$ of degree $p^n$.
Consider first the case $K/k$ cyclic of degree $p$.
Then $K=k(y)$ where $y^p-y=\alpha\in k$.
The equation can be
normalized as:
\begin{equation}\label{E2.2}
y^p-y=\alpha=\sum_{i=1}^r\frac{Q_i}{P_i^{e_i}} + f(T)=
\frac{Q}{P_1^{e_1}\cdots P_r^{e_r}}+f(T),
\end{equation}
where $P_i\in R_T^+$, $Q_i\in R_T$,
$\gcd(P_i,Q_i)=1$, $e_i>0$, $p\nmid e_i$, $\deg Q_i<
\deg P_i^{e_i}$, $1\leq i\leq r$,
$\deg Q<\sum_{i=1}^r\deg P_i^{e_i}$, $f(T)\in R_T$,
with $p\nmid \deg f$ when $f(T)\not\in {\mathbb F}_q$
and $f(T)\notin \wp({\mathbb F}_q)$ when $f(T)
\in {\mathbb F}_q^{\ast}$.
We have that the finite primes ramified in $K/k$ are precisely
$P_1,\ldots,P_r$ (see \cite{Has35}). With respect to ${\eu p}_{\infty}$ we have
the following well known result. We present a proof for the
sake of completeness.
\begin{proposition}\label{P2.4}
Let $K=k(y)$ be given by {\rm (\ref{E2.2})}. Then
the prime ${\eu p}_{\infty}$ is
\begin{list}{(\alph{bean})}{\usecounter{bean}}
\item decomposed if $f(T)=0$.
\item inert if $f(T)\in {\mathbb F}_q$ and $f(T)\not\in \wp({\mathbb F}_q)$.
\item ramified if $f(T)\not\in {\mathbb F}_q$ (thus $p\nmid\deg f$).
\end{list}
\end{proposition}
\proof
First consider the case $f(T)=0$. Then $v_{{\eu p}_{\infty}}(\alpha)=
\deg(P_1^{e_1}\cdots P_r^{e_r})-\deg Q>0$.
Therefore ${\eu p}_{\infty}$ is unramified. Now $y^p-y=\prod_{i=0}^{p-1}
(y-i)$. Let ${\mathfrak P}_{\infty}\mid{\eu p}_{\infty}$. Then
\[
v_{{\mathfrak P}_{\infty}}(y^p-y)=\sum_{i=0}^{p-1}v_{{\mathfrak P}_{\infty}}
(y-i)=e({\mathfrak P}_{\infty}|{\eu p}_{\infty})v_{{\eu p}_{\infty}}(\alpha)=v_{{\eu p}_{\infty}}(\alpha)>0.
\]
Therefore, there exists $0\leq i\leq p-1$ such that
$v_{{\mathfrak P}_{\infty}}(y-i)>0$. Without loss of generality we may assume that $i=0$.
Let $\sigma\in \operatorname{Gal}(K/k)\setminus \{\operatorname{Id}\}$. Assume that
${\mathfrak P}_{\infty}^{\sigma}={\mathfrak P}_{\infty}$. We have $y^{\sigma}=
y-j$, $j\neq 0$. Thus, on the one hand
\[
v_{{\mathfrak P}_{\infty}}(y-j)=v_{{\mathfrak P}_{\infty}}(y^{\sigma})=
v_{\sigma({\mathfrak P}_{\infty})}(y)=v_{{\mathfrak P}_{\infty}}(y)>0.
\]
On the other hand, since
$v_{{\mathfrak P}_{\infty}}(y)>0=v_{{\mathfrak P}_{\infty}}(j)$,
it follows that
\[
v_{{\mathfrak P}_{\infty}}(y-j)=\min\{
v_{{\mathfrak P}_{\infty}}(y),v_{{\mathfrak P}_{\infty}}(j)\}=0.
\]
This contradiction shows that
${\mathfrak P}_{\infty}^{\sigma}\neq {\mathfrak P}_{\infty}$,
so that ${\eu p}_{\infty}$ decomposes in $K/k$.
Now we consider the case $f(T)\neq 0$. If $f(T)\not\in {\mathbb F}_q$, then
${\eu p}_{\infty}$ ramifies since it is in the normal form prescribed by
Hasse \cite{Has35}.
The last case is when $f(T)\in{\mathbb F}_q$, $f(T)\not\in \wp({\mathbb F}_q)$. Let
$b\in {\mathbb F}_{q^p}$ with $b^p-b=a=f(T)$. Since $\deg {\eu p}_{\infty}=1$,
${\eu p}_{\infty}$ is inert in the constant extension $k(b)/k$ (\cite[Theorem 6.2.1]{Vil2006}).
Assume that ${\eu p}_{\infty}$ decomposes in $k(y)/k$. We have
the following diagram
\[
\xymatrix{
k(y)\ar@{-}[r]^{{\eu p}_{\infty}}_{\text{inert}}\ar@{-}[d]_{\substack{{\eu p}_{\infty}\\ \text{decomposes}}}
&k(y,b)\ar@{-}[d]\\ k\ar@{-}[r]^{{\eu p}_{\infty}}_{\text{inert}}&k(b)
}
\]
The decomposition group of ${\eu p}_{\infty}$ in $k(y,b)/k$ is
$\operatorname{Gal}(k(y,b)/k(y))$. Therefore ${\eu p}_{\infty}$ is inert in every
field of degree $p$ over $k$ other than $k(y)$. Since the
fields of degree $p$ are $k(y+ib), k(b)$, $0\leq i\leq p-1$,
in $k(y-b)/k$ we have
\[
(y-b)^p-(y-b)=(y^p-y)-(b^p-b)=\alpha-a =\frac{Q}{P_1^{e_1}\cdots P_r^{e_r}}
\]
with $\deg (\alpha-a)<0$. Hence, by the first part, ${\eu p}_{\infty}$ decomposes
in $k(y-b)/k$, which is impossible since ${\eu p}_{\infty}$ is inert in $k(y-b)/k$.
Thus ${\eu p}_{\infty}$ is inert in $k(y)/k$. \qed
The general case for the behavior of ${\eu p}_{\infty}$
in a cyclic $p$--extension is given in
\cite[Proposition 5.6]{MaRzVi2013} and it is a consequence of Proposition
\ref{P2.4}.
\begin{proposition}\label{P2.5}
Let $K/k$ be given as in Theorem {\rm{\ref{T2.3}}}. Let
$\mu_1=\cdots=\mu_s=0$, $\mu_{s+1}\in {\mathbb F}_q^{\ast}$,
$\mu_{s+1}\not\in \wp({\mathbb F}_q)$ and finally, let $t+1$ be the
first index with $f_{t+1}\not\in{\mathbb F}_q$ (and therefore $p\nmid \deg f_{t+1}$).
Then the ramification index of ${\eu p}_{\infty}$ is $p^{n-t}$, the inertia degree
of ${\eu p}_{\infty}$ is $p^{t-s}$ and the decomposition number
of ${\eu p}_{\infty}$ is $p^s$. More precisely, if $\operatorname{Gal}(K/k)=\langle\sigma\rangle
\cong C_{p^n}$, then the inertia group of ${\eu p}_{\infty}$ is ${\mathfrak I}=\langle\sigma^{p^t}\rangle$
and the decomposition group of ${\eu p}_{\infty}$ is ${\mathfrak D}=\langle\sigma^{p^s}\rangle$. \qed
\end{proposition}
From Propositions \ref{P2.4} and \ref{P2.5} we obtain
\begin{proposition}\label{P2.5'}
If $K$ is a field defined by an equation of the
type given in {\rm (\ref{EqNew1})}, then
$K/k$ is a cyclic extension of degree $p^n$, $P$
is the only ramified prime, it is fully ramified and ${\eu p}_{\infty}$ is
fully decomposed.
Similarly, if $K=k(\vec v)$ where $v_i=f_i(T)\in R_T$, $f_i(0)=0$
for all $1\leq i\leq n$ and $f_1(T)\notin {\mathbb F}_q$,
$p\nmid \deg f_1(T)$, then ${\eu p}_{\infty}$ is the only
ramified prime in $K/k$, it is fully ramified and the
zero divisor of $T$ which is the infinite prime in $R_{1/T}$,
is fully decomposed.\qed
\end{proposition}
We have reduced the proof of Theorem \ref{T2.1} to prove
that any extension of the type given in Proposition \ref{P2.5'} is
contained in either $k(\Lambda_{P^{\alpha}})$ for
some $\alpha\in{\mathbb N}$ or in $L_m$ for some
$m\in{\mathbb N}$. The second case is a consequence of the first
one with the change of variable $T^{\prime}=1/T$.
Let $n,\alpha\in{\mathbb N}$. Denote by $v_n(\alpha)$
the number of cyclic groups of order $p^n$
contained in $\G {P^{\alpha}}\cong
\operatorname{Gal}(k(\Lambda_{P^{\alpha}})/k)$. We have
that $v_n(\alpha)$ is the number of cyclic field
extensions $K/k$ of degree $p^n$
and $K\subseteq k(\Lambda_{P^{\alpha}})$.
Every such extension satisfies that its
conductor ${\mathfrak F}_K$ divides $P^{\alpha}$.
Let $t_n(\alpha)$ be the number of cyclic field
extensions $K/k$ of degree $p^n$ such that $P$ is the only
ramified prime, it is fully ramified, ${\eu p}_{\infty}$ is
fully decomposed and its conductor
${\mathfrak F}_K$ is a divisor of $P^{\alpha}$.
Since every cyclic extension $K/k$ of degree
$p^n$ such that $k\subseteq K\subseteq
k(\Lambda_{P^{\alpha}})$ satisfies these
conditions we have $v_n(\alpha)\leq t_n(\alpha)$.
If we prove $t_n(\alpha)\leq v_n(\alpha)$ then
every extension satisfying equation (\ref{EqNew1})
is contained in a cyclotomic extension and
Theorem \ref{T2.1} follows.
Therefore, to prove Theorem \ref{T2.1}, it suffices to
prove
\begin{gather}\label{EqNew2}
t_n(\alpha)\leq v_n(\alpha)\quad\text{for all}\quad
n, \alpha\in{\mathbb N}.
\end{gather}
\section{Wildly ramified extensions}\label{S4}
In this section we prove (\ref{EqNew2}) by induction on $n$ and as
a consequence we obtain
our main result, Theorem \ref{T2.1}.
First we compute $v_n(\alpha)$ for all $n,\alpha\in{\mathbb N}$.
\begin{proposition}\label{P4.1}
The number $v_n(\alpha)$ of different cyclic groups of order $p^n$
contained in $\G {P^{\alpha}}$ is
\[
v_n(\alpha)=\frac{q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^n}\big)}-q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}\big)}}{p^{n-1}(p-1)}=
\frac{q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}\big)}\big(q^{d\big(\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n}}\big)}-1\big)}{p^{n-1}(p-1)},
\]
where $\lceil x\rceil$ denotes the {\em ceiling function},
that is, $\lceil x\rceil$ denotes the minimum integer greater
than or equal to $x$.
\end{proposition}
\proof
Let $P\in R_T^+$ and $\alpha\in{\mathbb N}$ with $\deg P=d$.
First we consider how many cyclic extensions of degree $p^n$
are contained in $\lam {\alpha}$. Since ${\eu p}_{\infty}$ is tamely
ramified in $\lam \alpha$, if $K/k$ is a cyclic extension
of degree $p^n$, ${\eu p}_{\infty}$ decomposes fully in $K/k$
(\cite[Theorem 12.4.6]{Vil2006}). We have
$\operatorname{Gal}(\lam \alpha/k)\cong \G{P^{\alpha}}$ and
the exact sequence
\begin{gather}\label{E4.1}
0\longrightarrow D_{P,P^{\alpha}}\longrightarrow
\G {P^{\alpha}}\stackrel{\varphi}{\longrightarrow} \G P\longrightarrow 0,
\end{gather}
where
\begin{eqnarray*}
\varphi\colon \G {P^{\alpha}}&\longrightarrow&\G P\\
A\bmod P^{\alpha}&\longmapsto& A\bmod P
\end{eqnarray*}
and $D_{P,P^{\alpha}}=\{N\bmod P^{\alpha}\mid
N\equiv 1\bmod P\}$. We safely may consider
$D_{P,P^{\alpha}}=\{1+hP\mid h\in R_T,\ \deg (hP)<\deg P^{\alpha}=d\alpha\}$.
We have $\G {P^{\alpha}}\cong \G P \times D_{P,P^{\alpha}}$
and $\G P\cong C_{q^d-1}$.
First we compute how many elements of order $p^n$
are contained in $\G {P^{\alpha}}$. These elements belong to
$D_{P,P^{\alpha}}$. Let $A=1+hP\in D_{P,P^{\alpha}}$
of order $p^n$. We write $h=gP^{\gamma}$ with
$g\in R_T$, $\gcd (g,P)=1$ and $\gamma\geq 0$.
We have $A=1+gP^{1+\gamma}$. Since $A$ is
of order $p^n$, it follows that
\begin{gather}
A^{p^n}=1+g^{p^n}P^{p^n(1+\gamma)}\equiv 1\bmod P^{\alpha}\label{E4.2}\\
\intertext{and}
A^{p^{n-1}}=1+g^{p^{n-1}}P^{p^{n-1}(1+\gamma)}
\not\equiv 1\bmod P^{\alpha}\label{E4.3}.
\end{gather}
From (\ref{E4.2}) and (\ref{E4.3}) it follows that
\begin{gather}\label{E4.4}
p^{n-1}(1+\gamma)<\alpha\leq p^n(1+\gamma),\\
\intertext{which is equivalent to}
\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^n}-1\leq \gamma <\genfrac\lceil\rceil{0.5pt}0{\alpha}
{p^{n-1}}-1.
\end{gather}
Observe that for the existence
of at least one element of order $p^n$ we need
$\alpha>p^{n-1}$.
Now, for each $\gamma$ satisfying (\ref{E4.4}) we have
$\gcd (g,P)=1$ and $\deg g + d(1+\gamma) <d \alpha$,
that is, $\deg g<d(\alpha-\gamma-1)$. Thus, there exist
$\Phi(P^{\alpha-\gamma-1})$ such $g$'s, where
for any $N\in R_T$, $\Phi(N)$ denotes
the order $\big|\G N\big|$.
Therefore the number of elements of order $p^n$
in $D_{P,P^{\alpha}}$ is
\begin{gather}\label{E4.5}
\sum_{\gamma=\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^n}-1}^{\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}-2}\Phi(P^{\alpha-\gamma-1})=
\sum_{\gamma'=\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}+1}^{\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^n}}\Phi(P^{\gamma'}).
\end{gather}
Note that for any $1\leq r\leq s$ we have
\begin{align*}
\sum_{i=r}^{s}\Phi(P^i)&=\sum_{i=r}^s q^{d(i-1)}(q^d-1)=
(q^d-1)q^{d(r-1)}\sum_{j=0}^{s-r}q^{dj}\\
&=(q^d-1)q^{d(r-1)}\frac{q^{d(s-r+1)}-1}{q^d-1}=q^{ds}-q^{d(r-1)}.
\end{align*}
Hence (\ref{E4.5}) is equal to
\[
q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^n}\big)}-q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}\big)}=
q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}\big)}\big(q^{d\big(\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n-1}}-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p^{n}}\big)}-1\big).
\]
Since each cyclic group of order $p^n$ has $\varphi(p^n)=
p^{n-1}(p-1)$ generators, we obtain the result. \qed
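As an illustration (it plays no role in the proof), the following elementary script verifies the formula for $v_n(\alpha)$ by brute force in the smallest case $q=p=2$, $d=1$, $P=T$, where $\G{P^{\alpha}}=D_{P,P^{\alpha}}$ consists of the polynomials over ${\mathbb F}_2$ of degree less than $\alpha$ with constant term $1$:
\begin{verbatim}
def mulmod(a, b, alpha):        # product in F_2[T]/(T^alpha), bitmask encoding
    r = 0
    while b:
        if b & 1: r ^= a
        a <<= 1; b >>= 1
    return r & ((1 << alpha) - 1)

def order(a, alpha):            # multiplicative order of a unit
    k, x = 1, a
    while x != 1:
        x = mulmod(x, a, alpha); k += 1
    return k

def iceil(a, b): return -((-a) // b)

def v_formula(n, alpha, q=2, d=1, p=2):
    return (q**(d*(alpha - iceil(alpha, p**n)))
            - q**(d*(alpha - iceil(alpha, p**(n-1))))) // (p**(n-1)*(p-1))

alpha, p = 6, 2
units = [u for u in range(1, 1 << alpha) if u & 1]   # constant term 1
for n in range(1, 4):
    count = sum(order(u, alpha) == p**n for u in units) // (p**(n-1)*(p-1))
    print(n, count, v_formula(n, alpha))             # the two counts agree
\end{verbatim}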
Note that if $K/k$ is any field contained in $\lam {\alpha}$ then the
conductor ${\mathfrak F}_K$ of $K$ is a divisor of $P^{\alpha}$.
During the proof of Proposition \ref{P4.3}
we compute $t_1(\alpha)$, that is,
the number of cyclic extensions $K/k$ of degree $p$
such that $P$ is the only ramified prime (it is fully ramified),
${\eu p}_{\infty}$ is decomposed in
$K/k$ and ${\mathfrak F}_K\mid P^{\alpha}$ and we obtain
(\ref{EqNew2}) for the case $n=1$.
We have already solved this case in \cite{SaRzVi2012-2}. Here
we present another proof, which is suitable for
generalization to the case of cyclic extensions of
degree $p^n$.
\begin{proposition}\label{P4.3}
Every cyclic extension $K/k$ of degree $p$ such that $P$ is the only
ramified prime, ${\eu p}_{\infty}$ decomposes in $K/k$ and ${\mathfrak F}_K\mid P^{\alpha}$
is contained in $\lam \alpha$.
\end{proposition}
\proof
From the Artin--Schreier theory
(see (\ref{E2.2})) and Proposition \ref{P2.4},
we have that the field $K$ is given by
$K=k(y)$ with the Artin--Schreier equation of
$y$ normalized as prescribed by Hasse \cite{Has35}. That is
\[
y^p-y=\frac{Q}{P^{\lambda}},
\]
where $P\in R_T^+$, $Q\in R_T$,
$\gcd(P,Q)=1$, $\lambda >0$, $p\nmid \lambda$, $\deg Q<
\deg P^{\lambda}$. Now the conductor ${\mathfrak F}_K$ satisfies
${\mathfrak F}_K=P^{\lambda+1}$, so
$\lambda\leq \alpha-1$.
We have that if $K=k(z)$ with $z^p-z=a$ then there exist
$j\in {\mathbb F}_p^{\ast}$ and $c\in k$ such that
$z=jy+c$ and $a=j\frac{Q}{P^{\lambda}}+\wp(c)$ where
$\wp (c)=c^p-c$. If $a$ is also given in normal form,
then $c=\frac{h}{P^{\gamma}}$ with $p\gamma\leq \lambda$
(indeed, $p\gamma<\lambda$ since $\gcd (\lambda,p)=1$)
and $\deg h<\deg P^{\gamma}$ or $h=0$. Let $\gamma_0:=
\genfrac[]{0.5pt}{}{\lambda}{p}$. Then any such $c$ can be
written as $c=\frac{hP^{\gamma_0-\gamma}}{P^{\gamma_0}}$.
Therefore $c\in {\mathcal G}:=\Big\{\frac{h}{P^{\gamma_0}}\mid
h\in R_T,\ \deg h<\deg P^{\gamma_0}=d\gamma_0
\text{\ or\ } h=0\Big\}$.
If $c\in{\mathcal G}$ and $j\in\{1,2,\ldots,p-1\}$ we have
\begin{align*}
a&=j\frac{Q}{P^{\lambda}}+\wp(c)=j\frac{Q}{P^{\lambda}}+
\frac{h^p}{P^{p\gamma_0}}-\frac{h}{P^{\gamma_0}}\\
&=\frac{jQ+P^{\lambda-p\gamma_0}h^p-P^{\lambda-\gamma_0}h}
{P^{\lambda}}=\frac{Q_1}{P^{\lambda}},
\end{align*}
with $\deg Q_1<\deg P^{\lambda}$.
Since $\lambda-p\gamma_0>0$ and $\lambda-\gamma_0>0$,
we have $\gcd(Q_1,P)=1$. Therefore $a$ is in normal form.
It follows that the same field has $|{\mathbb F}_p^{\ast}||\wp({\mathcal G})|$
different representations in standard form.
Now ${\mathcal G}$ and $\wp({\mathcal G})$ are additive
groups and $\wp\colon {\mathcal G}\to \wp({\mathcal G})$ is a
group epimorphism with kernel $\ker \wp = {\mathcal G}\cap
\{c\mid \wp(c)=c^p-c=0\}={\mathcal G}\cap {\mathbb F}_p=\{0\}$.
We have $|\wp({\mathcal G})|=|{\mathcal G}|=|R_T/(P^{\gamma_0})|=
q^{d\gamma_0}$.
From the above discussion we obtain that the number of different
cyclic extensions $K/k$ of degree $p$ such that the conductor
of $K$ is ${\mathfrak F}_K= P^{\lambda+1}$ is equal to
\begin{equation}\label{E4.8}
\frac{\Phi(P^{\lambda})}{|{\mathbb F}_p^{\ast}||\wp({\mathcal G})|}=
\frac{q^{d(\lambda-1)}(q^d-1)}{(p-1)q^{d\gamma_0}}=
\frac{q^{d(\lambda-\genfrac[]{0.5pt}{}{\lambda}{p}-1)}(q^d-1)}{p-1}=
\frac{1}{p-1}\Phi\big(P^{\lambda-\genfrac[]{0.5pt}{}{\lambda}{p}}\big).
\end{equation}
Therefore the number of different
cyclic extensions $K/k$ of degree $p$ such that the conductor
${\mathfrak F}_K$ of $K$ is a divisor of $P^{\alpha}$ is given by
$t_1(\alpha)=\frac{w(\alpha)}{p-1}$ where
\begin{equation}\label{E3.10}
w(\alpha)=\sum_{\substack{\lambda=1\\ \gcd(\lambda,p)=1}}^{\alpha-1}
\Phi\big(P^{\lambda-\genfrac[]{0.5pt}{}{\lambda}{p}}\big).
\end{equation}
To compute $w(\alpha)$ write $\alpha-1=pt_0+r_0$ with $t_0\geq 0$
and $0\leq r_0\leq p-1$. Now $\{\lambda\mid 1\leq \lambda \leq
\alpha-1, \gcd(\lambda,p)=1\}={\mathcal A}\cup {\mathcal B}$ where
\begin{gather*}
{\mathcal A}=\{pt+r\mid 0\leq t\leq t_0-1,\ 1\leq r\leq p-1\}
\quad \text{and}\quad {\mathcal B}=
\{pt_0+r\mid 1\leq r\leq r_0\}.
\intertext{Then}
w(\alpha)=\sum_{\lambda\in{\mathcal A}}
\Phi\big(P^{\lambda-\genfrac[]{0.5pt}{}{\lambda}{p}}\big)+
\sum_{\lambda\in{\mathcal B}}\Phi\big(P^{\lambda-\genfrac[]{0.5pt}{}{\lambda}{p}}\big)
\end{gather*}
where we understand that if a set ${\mathcal A}$ or
${\mathcal B}$ is empty, the respective sum is $0$.
Then
\begin{align}\label{E4.8'}
w(\alpha)&=\sum_{\substack{0\leq t\leq t_0-1\\ 1\leq r\leq p-1}}
q^{d(pt+r-t-1)}(q^d-1)+\sum_{r=1}^{r_0}q^{d(pt_0+r-t_0-1)}(q^d-1)\nonumber\\
&=(q^d-1)\Big(\sum_{t=0}^{t_0-1}q^{d(p-1)t}\Big)\Big(\sum_{r=1}^{p-1}q^{d(r-1)}\Big)+
(q^d-1)q^{d(p-1)t_0}\sum_{r=1}^{r_0}q^{d(r-1)}\\
&=(q^d-1)\frac{q^{d(p-1)t_0}-1}{q^{d(p-1)}-1}\,\frac{q^{d(p-1)}-1}{q^d-1}
+(q^d-1)q^{d(p-1)t_0}\frac{q^{dr_0}-1}{q^d-1}\nonumber\\
&=q^{d((p-1)t_0+r_0)}-1=q^{d(pt_0+r_0-t_0)}-1=
q^{d\big(\alpha-1-\genfrac[]{0.5pt}{}{\alpha-1}{p}\big)}-1.\nonumber
\end{align}
Therefore, the number of different cyclic extensions $K/k$ of degree $p$
such that $P$ is the only ramified prime, ${\mathfrak F}_K\mid P^{\alpha}$ and
${\eu p}_{\infty}$ decomposes, is
\begin{equation}\label{E4.7}
t_1(\alpha)=\frac{w(\alpha)}{p-1}=\frac{q^{d\big(\alpha-1-\genfrac[]{0.5pt}{}{\alpha-1}{p}\big)}-1}{p-1}.
\end{equation}
To finish the proof of Proposition \ref{P4.3} we need the following
\begin{lemma}\label{L4.2}
For any $\alpha\in{\mathbb Z}$ and $s\in {\mathbb N}$ we have
\begin{list}{(\alph{bean})}{\usecounter{bean}}
\item $\genfrac[]{0.5pt}0{\genfrac[]{0.5pt}{}{\alpha}{p^s}}{p}=
\genfrac[]{0.5pt}0{\genfrac[]{0.5pt}{}{\alpha}{p}}{p^s}=\genfrac[]{0.5pt}0{\alpha}{p^{s+1}}$.
\item $\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^s}=\genfrac[]{0.5pt}0{\alpha-1}{p^s}+1$.
\end{list}
\end{lemma}
\proof
For (a), we prove only $\genfrac[]{0.5pt}0{\genfrac[]{0.5pt}{}{\alpha}{p^s}}{p}
=\genfrac[]{0.5pt}0{\alpha}{p^{s+1}}$; the other equality is similar.
Note that the case $s=0$ is clear. Set $\alpha=tp^{s+1}+r$
with $0\leq r\leq p^{s+1}-1$. Let $r=lp^s+r'$ with $0\leq r'\leq p^s-1$. Note
that $0\leq l\leq p-1$. Hence $\alpha=tp^{s+1}+lp^s+r'$, $0\leq r'\leq p^s-1$
and $0\leq l\leq p-1$. Therefore $\genfrac[]{0.5pt}0{\alpha}{p^s}=tp+l$, and
$\displaystyle\frac{\genfrac[]{0.5pt}0{\alpha}{p^s}}{p}=t+\frac{l}{p}$, $0\leq l\leq p-1$. Therefore
$\genfrac[]{0.5pt}0{\genfrac[]{0.5pt}{}{\alpha}{p^s}}{p}=t=\genfrac[]{0.5pt}0{\alpha}{p^{s+1}}$.
For (b) write $\alpha=p^s t+r$ with
$0\leq r\leq p^s-1$. If $p^s\mid \alpha$ then $r=0$ and $\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^s}
=t$, $\genfrac[]{0.5pt}0{\alpha-1}{p^s}=\genfrac[]{0.5pt}0{p^s t-1}{p^s}=\displaystyle\Big[t-\frac{1}{p^s}\Big]=
t-1=\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^s}-1$.
If $ p^s\nmid \alpha$, then $1\leq r\leq p^s-1$ and $\alpha-1=p^st+(r-1)$
with $0\leq r-1\leq p^s-2$. Thus $\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^s}=\displaystyle\Big\lceil t+\frac{r}{p^s}
\Big\rceil=t+1$ and $\genfrac[]{0.5pt}0{\alpha-1}{p^s}=\displaystyle\Big[t+\frac{r-1}{p^s}\Big]=t=
\genfrac\lceil\rceil{0.5pt}0{\alpha}{p^s}-1$. This finishes the proof of Lemma \ref{L4.2}. \qed
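The identities of Lemma \ref{L4.2} are also easy to check numerically over a sample range (this check is ours and purely illustrative; Python's integer division is the floor):
\begin{verbatim}
p = 3
for alpha in range(-30, 60):
    for s in range(0, 5):
        assert (alpha // p**s) // p == alpha // p**(s + 1)      # part (a)
        assert -((-alpha) // p**s) == (alpha - 1) // p**s + 1   # part (b)
\end{verbatim}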
From Lemma \ref{L4.2} (b) we obtain that (\ref{E4.7}) is equal to
\begin{equation}\label{E3.11}
t_1(\alpha)=\frac{w(\alpha)}{p-1}=\frac{q^{d\big(\alpha-1-\big(\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}-1\big)\big)}-1}{p-1}=
\frac{q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}\big)}-1}{p-1}=v_1(\alpha).
\end{equation}
As a consequence of (\ref{E3.11}), we have
Proposition \ref{P4.3}.\qed
Proposition \ref{P4.3} proves (\ref{EqNew2}) for $n=1$ and all
$\alpha\in{\mathbb N}$.
Now consider any cyclic extension $K_n/k$ of degree $p^n$ such that
$P$ is the only ramified prime, it is fully ramified, ${\eu p}_{\infty}$
decomposes fully in $K_n/k$ and
${\mathfrak F}_{K_n}\mid P^{\alpha}$. We want to prove that $K_n\subseteq
\lam \alpha$, that is, (\ref{EqNew2}): $t_n(\alpha)\leq
v_n(\alpha)$. This will be proved by induction on $n$. The case
$n=1$ is Proposition \ref{P4.3}. We assume that any cyclic extension
$K_{n-1}$ of degree $p^{n-1}$, $n\geq 2$ such that $P$ is the only ramified
prime, ${\eu p}_{\infty}$ decomposes fully in $K_{n-1}/k$ and ${\mathfrak F}_{K_{n-1}}\mid P^{\delta}$,
is contained in $\lam \delta$ where $\delta\in{\mathbb N}$.
Let $K_n$ be any cyclic extension of degree $p^n$ such that
$P$ is the only ramified prime and it is fully ramified,
${\eu p}_{\infty}$ decomposes fully in $K_n/k$
and ${\mathfrak F}_{K_n}\mid P^{\alpha}$.
Let $K_{n-1}$ be the subfield of $K_n$ of degree $p^{n-1}$
over $k$.
Now we consider $K_n/k$ generated by the Witt vector $\vec{\beta}=
(\beta_1,\ldots,\beta_{n-1},\beta_n)$, that is, $\wp(\vec{y})=
\vec{y}^p\Witt - \vec{y}=\vec{\beta}$,
and we assume that $\vec{\beta}$ is in the normal form
described by Schmid (see Theorem \ref{T2.3}, \cite{Sch36}).
Then $K_{n-1}/k$ is
given by the Witt vector $\vec{\beta'}=(\beta_1,\ldots,\beta_{n-1})$.
If $\vec{\lambda}:=(\lambda_1,\ldots,\lambda_{n-1},\lambda_n)$ is
the vector of Schmid's parameters, that is, each $\beta_i$
is given by
\begin{gather*}
\beta_i=\frac{Q_i}{P^{\lambda_i}}, \text{\ where\ } Q_i=0 \text{\
(that is, $\beta_i=0$) and $\lambda_i=0$ or}\\
\gcd(Q_i,P)=1,\ \deg Q_i<\deg P^{\lambda_i},\ \lambda_i>0 \text{\ and\ }
\gcd(\lambda_i,p)=1.
\end{gather*}
Since $P$ is fully ramified we have $\lambda_1>0$.
Now we compute how many different extensions
$K_n/K_{n-1}$ can be constructed
by means of $\beta_n$.
\begin{lemma}\label{LN1}
For a fixed $K_{n-1}$ the number of different fields $K_n$
is less than or equal to
\begin{equation}\label{EqNew5}
\frac{1+w(\alpha)}{p}=
\frac{1}{p}q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}\big)}.
\end{equation}
\end{lemma}
\proof
For $\beta_n\neq 0$, each equation in normal
form is given by
\begin{gather}\label{E4.11}
y_n^p-y_n=z_{n-1}+\beta_n,
\end{gather}
where $z_{n-1}$ is the element in $K_{n-1}$ obtained by
the Witt generation of $K_{n-1}$ by the vector $\vec{\beta'}$ (see
\cite[page 161]{Sch36}). In fact
$z_{n-1}$ is given, formally, by
\[
z_{n-1}=\sum_{i=1}^{n-1}\frac{1}{p^{n-i}}\big[
y_i^{p^{n-i}}+\beta_i^{p^{n-i}}-(y_i+\beta_i+
z_{i-1})^{p^{n-i}}\big],
\]
with $z_0=0$.
As in the case $n=1$ we have that there
exist $\Phi(P^{\lambda_n})$
different $\beta_n$ with $\lambda_n>0$.
With the change of variable
$y_n\to y_n+c$, $c\in {\mathcal G}_{\lambda_n}:
=\big\{\frac{h}{P^{\gamma_n}}\mid
h\in R_T,\ \deg h<\deg P^{\gamma_n}=d\gamma_n
\text{\ or\ } h=0\big\}$
where $\gamma_n=\genfrac[]{0.5pt}0{\lambda_n}{p}$, we obtain
$\beta_n\to \beta_n+\wp(c)$ also in normal form. Therefore
the number of different
elements $\beta_n$ which provide the same field $K_n$ with this
change of variable is $q^{d\big(\genfrac[]{0.5pt}{}{\lambda_n}{p}\big)}$. Therefore we obtain at most
$\Phi\big(P^{\lambda_n-\genfrac[]{0.5pt}{}{\lambda_n}{p}}\big)$ possible
fields $K_n$ for each $\lambda_n>0$ (see (\ref{E4.8})).
More precisely, if for each $\beta_n$ with $\lambda_n>0$ we set
$\overline{\beta_n}:=\{\beta_n+\wp(c)\mid c\in {\mathcal G}_{\lambda_n}\}$,
then any element of $\overline{\beta_n}$ gives the same field $K_n$.
Let $v_P$ denote the valuation at $P$ and
\begin{gather*}
{\mathcal A}_{\lambda_n}:=\{\overline{\beta_n}\mid v_P(\beta_n)=-
\lambda_n\},\\
{\mathcal A}:=\bigcup_{\substack{\lambda_n=1\\
\gcd(\lambda_n,p)=1}}^{\alpha-1} {\mathcal A}_{\lambda_n}.
\end{gather*}
Then any field $K_n$ is given by $\beta_n=0$ or $\overline{\beta_n}\in
{\mathcal A}$. From (\ref{E4.8'})
we have that the number of
fields $K_n$ containing a fixed $K_{n-1}$
that we obtain in (\ref{E4.11}) is less than or equal to
\begin{gather}\label{E410'}
1+|{\mathcal A}|=1+w(\alpha)=q^{d\big(\alpha-1-\genfrac[]{0.5pt}{}{\alpha-1}{p}\big)}=q^{d\big(\alpha-1-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}+1\big)}=
q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}\big)}.
\end{gather}
Now with the substitution $y_n\to y_n+jy_1$, $j=0,1,\ldots,p-1$,
in (\ref{E4.11}) we obtain
\[
(y_n+jy_1)^p-(y_n+jy_1)=y_n^p-y_n +j(y_1^p-y_1)=
z_{n-1}+\beta_n+j\beta_1.
\]
Therefore each of the extensions obtained in (\ref{E4.11}) is repeated
at least $p$ times, that is, for each $\beta_n$, we obtain the same extension
with $\beta_n,\beta_n+\beta_1,\ldots, \beta_n+(p-1)\beta_1$.
We will prove that different $\beta_n+j\beta_1$ correspond to
different elements of $\{0\}\cup {\mathcal A}$.
Fix $\beta_n$. We modify each $\beta_n+j\beta_1$ into its normal
form: $\beta_n+j\beta_1 +\wp(c_{\beta_n,j})$ for some
$c_{\beta_n,j}\in k$. Indeed $\beta_n+j\beta_1$ is
always in normal form with the possible exception that $\lambda_n=
\lambda_1$ and in this case it holds for at most one index $j\in
\{0,1,\ldots,p-1\}$: if $\lambda_n\neq \lambda_1$,
\[
v_P(\beta_n+j\beta_1)=
\begin{cases}
-\lambda_n&\text{if $j=0$}\\
-\max\{\lambda_n,\lambda_1\}&\text{if $j\neq 0$}
\end{cases}.
\]
When $\lambda_n= \lambda_1$, if $v_P(\beta_n+j
\beta_1)=u>-\lambda_n=-\lambda_1$ and $p\mid u$,
then for $i\neq j$, $v_P(\beta_n+i\beta_1)=v_P(
\beta_n+j\beta_1+(i-j)\beta_1)=-\lambda_n=-\lambda_1$.
In other words $c_{\beta_n,j}=0$ with very few exceptions.
Each $\mu=\beta_n+j\beta_1+\wp(c_{\beta_n,j})$, $j=0,1,\ldots,p-1$
satisfies that either $\mu=0$ or $\overline{\mu}\in{\mathcal A}$. We will
see that all these elements give different elements of $\{0\}\cup
{\mathcal A}$.
If $\beta_n=0$, then for $j\neq 0$, $v_P(j\beta_1)=-\lambda_1$, so
$\overline{j\beta_1}\in{\mathcal A}$. Now if
$\overline{j\beta_1}=\overline{i\beta_1}$,
then
\[
j\beta_1=\beta_n^{\prime}+\wp(c_1)\quad\text{and}\quad
i\beta_1=\beta_n^{\prime}+\wp(c_2)
\]
for some $\beta_n^{\prime}\neq 0$ and some
$c_1,c_2\in{\mathcal G}_{\lambda_1}$. It follows that
$(j-i)\beta_1=\wp(c_2-c_1)\in\wp(k)$. This is not possible by the choice
of $\beta_1$ unless $j=i$.
Let $\beta_n\neq 0$. The case $\beta_n+j\beta_1=0$
for some $j\in\{0,1,\ldots,p-1\}$ has already been considered
in the first case. Thus we consider the case $\beta_n+j\beta_1
+\wp(c_{\beta_n,j})\neq 0$ for all $j$. If for some $i,j\in\{0,1,\ldots,p-1\}$
we have $\overline{\beta_n+j\beta_1+\wp(c_{\beta_n,j})}=\overline{
\beta_n+i\beta_1+\wp(c_{\beta_n,i})}$ then there exists $\beta_n^{\prime}$
and $c_1,c_2\in k$ such that
\[
\beta_n+j\beta_1+\wp(c_{\beta_n,j})=\beta_n^{\prime}+\wp(c_1)\quad
\text{and}\quad
\beta_n+i\beta_1+\wp(c_{\beta_n,i})=\beta_n^{\prime}+\wp(c_2).
\]
It follows that $(j-i)\beta_1=\wp(c_1-c_2+
c_{\beta_n,i}-c_{\beta_n,j})\in\wp(k)$ so that
$i=j$.
Therefore each field $K_n$ is represented by at least $p$
different elements of $\{0\}\cup {\mathcal A}$. The result follows.
\qed
Now, according to Schmid \cite[page 163]{Sch36}, the conductor of $K_n$
is $P^{M_n+1}$ where $M_n=\max\{pM_{n-1},\lambda_n\}$ and
$P^{M_{n-1}+1}$ is the conductor of $K_{n-1}$. Since ${\mathfrak F}_{K_n}\mid
P^{\alpha}$, we have $M_n\leq \alpha-1$. Therefore
$pM_{n-1}\leq \alpha-1$ and $\lambda_n\leq \alpha-1$. Hence
${\mathfrak F}_{K_{n-1}}\mid P^{\delta}$ with $\delta=\genfrac[]{0.5pt}0{\alpha-1}{p}+1$.
\begin{proposition}\label{P4.4}
We have
\[
\frac{v_n(\alpha)}{v_{n-1}(\delta)}=
\frac{q^{d\big(\alpha-\genfrac\lceil\rceil{0.5pt}{}{\alpha}{p}\big)}}{p},
\]
where $\delta=\genfrac[]{0.5pt}0{\alpha-1}{p}+1$.
\end{proposition}
{\eu p}_{\infty}roof
From Proposition \ref{P4.1} we obtain
\begin{align*}
v_n(\alpha)&=\frac{q^{d\big(\alpha-\big\lceil\tfrac{\alpha}
{p^{n-1}}\big\rceil\big)}\big(q^{d\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil
-\big\lceil\tfrac{\alpha}{p^{n}}\big\rceil\big)}-1\big)}{p^{n-1}(p-1)}\\
&=\frac{q^{d\big(\alpha-\big\lceil\tfrac{\alpha}
{p^{n-1}}\big\rceil\big)}}{p^{n-1}(p-1)}\big(q^{d\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil
-\big\lceil\tfrac{\alpha}{p^{n}}\big\rceil\big)}-1\big),\\
\intertext{and}
v_{n-1}(\delta)&=\frac{q^{d\big(\delta-\big\lceil\tfrac{\delta}
{p^{n-2}}\big\rceil\big)}\big(q^{d\big(\big\lceil\tfrac{\delta}{p^{n-2}}\big\rceil
-\big\lceil\tfrac{\delta}{p^{n-1}}\big\rceil\big)}-1\big)}{p^{n-2}(p-1)}\\
&=\frac{q^{d\big(\delta-\big\lceil\tfrac{\delta}
{p^{n-2}}\big\rceil\big)}}{p^{n-2}(p-1)}\big(q^{d\big(\big\lceil\tfrac{\delta}{p^{n-2}}\big\rceil
-\big\lceil\tfrac{\delta}{p^{n-1}}\big\rceil\big)}-1\big).
\end{align*}
From Lemma \ref{L4.2} we have
\begin{align*}
\left\lceil\frac{\delta}{p^{n-2}}\right\rceil-\left\lceil\frac{\delta}{p^{n-1}}\right\rceil&=\Big(
\left[\frac{\delta-1}{p^{n-2}}\right]+1\Big)-\Big(\left[\frac{\delta-1}{p^{n-1}}\right]+1\Big)\\
&=\left[\frac{\delta-1}{p^{n-2}}\right]-\left[\frac{\delta-1}{p^{n-1}}\right]=
\left[\frac{\big[\tfrac{\alpha-1}{p}\big]}{p^{n-2}}\right]-
\left[\frac{\big[\tfrac{\alpha-1}{p}\big]}{p^{n-1}}\right]\\
&=\left[\frac{\alpha-1}{p^{n-1}}\right]-\left[\frac{\alpha-1}{p^n}\right]=
\Big(\left\lceil\frac{\alpha}{p^{n-1}}\right\rceil-1\Big)-\Big(\left\lceil\frac{\alpha}{p^{n}}\right\rceil-1\Big)\\
&=\left\lceil\frac{\alpha}{p^{n-1}}\right\rceil-\left\lceil\frac{\alpha}{p^n}\right\rceil,\\
\delta-\left\lceil\frac{\delta}{p^{n-2}}\right\rceil&=\Big(\left[\frac{\alpha-1}{p}\right]+1\Big)
-\Big(\left[\frac{\delta-1}{p^{n-2}}\right]+1\Big)\\
&=\left[\frac{\alpha-1}{p}\right]-\left[\frac{\delta-1}{p^{n-2}}\right]=\left[\frac{\alpha-1}{p}\right]
-\left[\frac{\big[\tfrac{\alpha-1}{p}\big]}{p^{n-2}}\right]\\
&=\left[\frac{\alpha-1}{p}\right]-\left[\frac{\alpha-1}{p^{n-1}}\right].
\end{align*}
Therefore
\begin{gather*}
v_{n-1}(\delta)=\frac{q^{d\big(\big[\tfrac{\alpha-1}{p}\big]-\big[\tfrac{
\alpha-1}{p^{n-1}}\big]\big)}}{p^{n-2}(p-1)}\Big(q^{d\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil
-\big\lceil\tfrac{\alpha}{p^n}\big\rceil\big)}-1\Big).
\end{gather*}
Thus, again by Lemma \ref{L4.2}
\begin{align*}
\frac{v_n(\alpha)}{v_{n-1}(\delta)}&=
\frac{\frac{q^{d\big(\alpha-\big\lceil\tfrac{\alpha}
{p^{n-1}}\big\rceil\big)}}{p^{n-1}(p-1)}\big(q^{d\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil
-\big\lceil\tfrac{\alpha}{p^{n}}\big\rceil\big)}-1\big)}
{\frac{q^{d\big(\big[\tfrac{\alpha-1}{p}\big]-\big[\tfrac{
\alpha-1}{p^{n-1}}\big]\big)}}{p^{n-2}(p-1)}\Big(q^{d\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil
-\big\lceil\tfrac{\alpha}{p^n}\big\rceil\big)}-1\Big)}\\
&=\frac{1}{p}q^{d\big(\alpha-\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil-\big[\tfrac{
\alpha-1}{p}\big]+\big[\tfrac{\alpha-1}{p^{n-1}}\big]\big)}\\
&=\frac{1}{p}q^{d\big(\alpha-\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil-\big(\big\lceil\tfrac{
\alpha}{p}\big\rceil-1\big)+\big(\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil-1\big)\big)}
=\frac{1}{p}q^{d\big(\alpha-\big\lceil\tfrac{\alpha}{p}\big\rceil\big)}.
\end{align*}
This proves the result. \qed
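For instance (a numerical illustration only, not needed for the proof), take $p=q=2$, $d=1$, $n=2$ and $\alpha=5$, so that $\delta=\left[\frac{4}{2}\right]+1=3$. The displayed formulas give
\[
v_2(5)=\frac{2^{5-\lceil 5/2\rceil}\big(2^{\lceil 5/2\rceil-\lceil 5/4\rceil}-1\big)}{2}=2,
\qquad
v_1(3)=\frac{2^{3-\lceil 3\rceil}\big(2^{\lceil 3\rceil-\lceil 3/2\rceil}-1\big)}{1}=1,
\]
and indeed $v_2(5)/v_1(3)=2=\frac{1}{2}\,q^{d(5-\lceil 5/2\rceil)}$, in agreement with Proposition \ref{P4.4}.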
Hence, from Proposition \ref{P4.4}, Lemma
\ref{LN1} (\ref{EqNew5}) and since
by the induction hypothesis,
$t_{n-1}(\delta)=v_{n-1}(\delta)$,
we obtain
\[
t_n(\alpha)\leq t_{n-1}(\delta)\big(\frac{1}{p}q^{d\big(\alpha-\big\lceil\tfrac
{\alpha}{p}\big\rceil\big)}\big)=v_{n-1}(\delta)\big(\frac{1}{p}q^{d\big(\alpha-\big\lceil\tfrac
{\alpha}{p}\big\rceil\big)}\big)=v_n(\alpha).
\]
This proves (\ref{EqNew2}) and Theorem \ref{T2.1}.
\section{Alternative proof of (\ref{EqNew2})}\label{S6}
We keep the same notation as in previous sections. Let $K/k$
be an extension satisfying the conditions (\ref{EqNew1})
and with conductor a divisor of $P^{\alpha}$.
We have ${\mathfrak F}_K=P^{M_n+1}$, where
\begin{gather*}
M_n=\max\{p^{n-1}\lambda_1,p^{n-2}\lambda_2,\ldots,
p\lambda_{n-1},\lambda_n\},\\
\intertext{see \cite{Sch36}. Therefore}
{\mathfrak F}_K\mid P^{\alpha}\iff M_n+1\leq \alpha \iff
p^{n-i}\lambda_i\leq \alpha-1, \quad i=1,\ldots, n.
\end{gather*}
Thus $\lambda_i\leq \left[\frac{\alpha-1}{p^{n-i}}\right]$. These
conditions describe all cyclic extensions of degree $p^n$
in which $P\in R_T^+$ is the only ramified prime, $P$ is fully
ramified, ${\eu p}_{\infty}$ decomposes fully, and the conductor divides
$P^{\alpha}$. Now we estimate the number of different
ways of generating $K$.
Let $K=k(\vec y)$. First, note that replacing
$y_i$ by $y_i+c_i$, with $c_i\in k$,
for each $i$ yields the same extension.
For these new ways of
generating $K$ to satisfy (\ref{EqNew1}), we
must have:
\begin{itemize}
\item If $\lambda_i=0$, then $c_i=0$.
\item If $\lambda_i>0$, then $c_i
\in \Big\{\frac{h}{P^{\gamma_i}}\mid
h\in R_T, \deg h<\deg P^{\gamma_i}=d\gamma_i
\text{\ or\ } h=0\Big\}$,
where $\gamma_i=\left[\frac{\lambda_i}{p}\right]$.
Therefore we have at most
$\Phi\big(P^{\lambda_i-\big[\tfrac{\lambda_i}{p}\big]}\big)$
extensions for this $\lambda_i$
(see (\ref{E4.8})). Since $1\leq \lambda_i
\leq\left[\frac{\alpha-1}{p^{n-i}}\right]$ and $\gcd(\lambda_i,p)=1$,
if we let $\delta_i:=\left[\frac{\alpha-1}{p^{n-i}}\right]+1$,
from (\ref{E3.10}) and (\ref{E4.8'}) we obtain that
we have at most
\begin{gather}\label{EqNew3}
w(\delta_i)=
\sum_{\substack{\lambda_i=1\\
\gcd(\lambda_i,p)=1}}^{\delta_i-1}
\Phi\big(P^{\lambda_i-\big[\tfrac{\lambda_i}{p}\big]}\big)=
q^{d\big(\delta_i-1-\big[\tfrac{\delta_i-1}{p}\big]\big)}-1
\end{gather}
different expressions for all possible $\lambda_i>0$.
Now by Lemma \ref{L4.2} we have
\[
\delta_i-1-\left[\frac{\delta_i-1}{p}\right]=\left[\frac{\alpha-1}{p^{n-i}}\right]-
\left[\frac{\left[\frac{\alpha-1}{p^{n-i}}\right]}{p}\right]=\left[\frac{\alpha-1}
{p^{n-i}}\right]-\left[\frac{\alpha-1}{p^{n-i+1}}\right].
\]
Therefore
\begin{gather}\label{EqNew4}
w(\delta_i)=q^{d\big(\big[\tfrac{\alpha-1}{p^{n-i}}\big]-\big[\tfrac{
\alpha-1}{p^{n-i+1}}\big]\big)}-1.
\end{gather}
\end{itemize}
When $\lambda_i=0$ is allowed, we have at most $w(\delta_i)+1$
extensions with parameter $\lambda_i$. Therefore, since $\lambda_1
>0$ and $\lambda_i\geq 0$ for $i=2,\ldots, n$, we have that
the number of extensions satisfying (\ref{EqNew1}) and with
conductor a divisor of $P^{\alpha}$ is at most
\begin{gather*}
s_n(\alpha):=w(\delta_1)\cdot \prod_{i=2}^n\big(w(\delta_i)+1\big).\\
\intertext{From (\ref{EqNew3}) and (\ref{EqNew4}), we obtain}
s_n(\alpha)=\Big(q^{d\big(\big[\tfrac{\alpha-1}{p^{n-1}}\big]
-\big[\tfrac{\alpha-1}{p^n}\big]\big)}-1\Big)\cdot \prod_{i=2}^n
q^{d\big(\big[\tfrac{\alpha-1}{p^{n-i}}\big]
-\big[\tfrac{\alpha-1}{p^{n-i+1}}\big]\big)}.\\
\intertext{Therefore $\prod_{i=2}^n(w(\delta_i)+1)=q^{d\mu}$, where}
\begin{align*}
\mu &=\sum_{i=2}^n \Big(\left[\frac{\alpha-1}{p^{n-i}}\right]
-\left[\frac{\alpha-1}{p^{n-i+1}}\right]\Big)=
\sum_{i=2}^n \left[\frac{\alpha-1}{p^{n-i}}\right]
-\sum_{j=1}^{n-1}\left[\frac{\alpha-1}{p^{n-j}}\right]\\
&=\left[\frac{\alpha-1}{p^{n-n}}\right]-\left[\frac{\alpha-1}{p^{n-1}}\right]=
\alpha-1-\left[\frac{\alpha-1}{p^{n-1}}\right].
\end{align*}
\end{gather*}
Hence
\begin{align*}
s_n(\alpha)&=\Big(q^{d\big(\big[\tfrac{\alpha-1}{p^{n-1}}\big]
-\big[\tfrac{\alpha-1}{p^n}\big]\big)}-1\Big)\cdot q^{d\big(\alpha
-1-\big[\tfrac{\alpha-1}{p^{n-1}}\big]\big)}\\
&= q^{d\big(\big[\tfrac{\alpha-1}{p^{n-1}}\big]-
\big[\tfrac{\alpha-1}{p^{n}}\big]+\alpha-1-
\big[\tfrac{\alpha-1}{p^{n-1}}\big]\big)}-
q^{d\big(\alpha-1-\big[\tfrac{\alpha-1}{p^{n-1}}\big]\big)}\\
&=q^{d\big(\alpha-1-\big[\tfrac{\alpha-1}{p^{n}}\big]\big)}
-q^{d\big(\alpha-1-\big[\tfrac{\alpha-1}{p^{n-1}}\big]\big)}.
\end{align*}
From Lemma \ref{L4.2} (b) we obtain
\begin{align*}
\alpha-1-\left[\frac{\alpha-1}{p^n}\right]=\alpha-\left\lceil\frac{\alpha}{p^n}\right\rceil
\quad\text{and}\quad \alpha-1-\left[\frac{\alpha-1}{p^{n-1}}\right]=
\alpha-\left\lceil\frac{\alpha}{p^{n-1}}\right\rceil.\\
\intertext{Thus}
s_n(\alpha)=q^{d\big(\alpha-\big\lceil\tfrac{\alpha}{p^n}\big\rceil\big)}
-q^{d\big(\alpha-\big\lceil\tfrac{\alpha}{p^{n-1}}\big\rceil\big)}=
p^{n-1}(p-1)v_n(\alpha).
\end{align*}
Finally, the change of variable $\vec y\to \vec j \Witt \times \vec y$ with
$\vec j\in W_n({\mathbb F}_p)^{\ast}\cong \big({\mathbb Z}/p^n{\mathbb Z}
\big)^{\ast}$ gives the same field and we have
$\vec \beta\to \vec j\Witt \times \vec \beta$. Therefore
\[
t_n(\alpha)\leq \frac{s_n(\alpha)}{\varphi(p^n)}=\frac{s_n(\alpha)}
{p^{n-1}(p-1)}=v_n(\alpha).
\]
This proves (\ref{EqNew2}) and Theorem \ref{T2.1}.
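For the reader who wishes to check the counting identity $s_n(\alpha)=p^{n-1}(p-1)v_n(\alpha)$ numerically, the following short script is an illustrative sketch only: the functions \texttt{v} and \texttt{s} simply transcribe the closed forms displayed above, and the parameter grid is arbitrary.
\begin{verbatim}
from fractions import Fraction

def cdiv(a, b):                 # ceiling of a/b for positive integers
    return -(-a // b)

def v(n, alpha, p, q, d):
    # closed form of v_n(alpha), as displayed in Section 5
    A = alpha - cdiv(alpha, p**(n - 1))
    B = cdiv(alpha, p**(n - 1)) - cdiv(alpha, p**n)
    return Fraction(q**(d * A) * (q**(d * B) - 1), p**(n - 1) * (p - 1))

def s(n, alpha, p, q, d):
    # closed form of s_n(alpha) obtained in the alternative proof
    return q**(d * (alpha - 1 - (alpha - 1) // p**n)) \
         - q**(d * (alpha - 1 - (alpha - 1) // p**(n - 1)))

for (p, q) in [(2, 2), (2, 4), (3, 3), (3, 9)]:
    for d in (1, 2):
        for n in (1, 2, 3):
            for alpha in range(2, 25):
                assert p**(n - 1) * (p - 1) * v(n, alpha, p, q, d) \
                       == s(n, alpha, p, q, d)
print("s_n(alpha) = p^(n-1)(p-1) v_n(alpha) on the whole sample grid")
\end{verbatim}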
\end{document} |
\begin{document}
\title{Mechanism Design for Maximum Vectors}
\begin{abstract}
We consider the {\em Maximum Vectors problem} in a strategic setting. In the classical setting this problem consists, given a set of $k$-dimensional vectors, in computing the set of all \emph{nondominated} vectors. Recall that a vector $v=(v^1, v^2, \ldots, v^k)$ is said to be nondominated if there does
not exist another vector $v_*=(v_*^1, v_*^2, \ldots, v_*^k)$ such that
$v^l \leq v_*^{l}$ for $1\leq l\leq k$,
with at least one strict inequality among the $k$ inequalities.
This problem is strongly related to other known problems such as the \emph{Pareto curve computation} in multiobjective optimization. In a strategic setting, each vector is owned by a selfish agent who can misreport her values in order to become nondominated by other vectors. Our work explores under which conditions it is
possible to incentivize agents to report their true values
using the algorithmic mechanism design framework. We provide impossibility results along with positive ones, according to various assumptions.
\end{abstract}
\section{Introduction}
A great variety of algorithms and methods have been designed for various optimization problems. In classic \emph{Combinatorial Optimization}, the algorithm knows the complete input of the problem, and its goal is to produce an optimal or near optimal solution. However, in many modern applications the input of the problem is spread among a set of \emph{selfish agents}, where each one owns a different part of the input as private knowledge. Hence, every agent can manipulate the algorithm by misreporting its part of the input in order to maximize its personal payoff. In their seminal paper, Nisan and Ronen \cite{NR99} were the first to study the impact of the ``strategic'' behavior of the agents on the difficulty of an optimization problem.
Since then, \emph{Algorithmic Mechanism Design} has studied optimization problems in the presence of selfish agents with private knowledge of the input and potentially conflicting individual objective functions. The goal is to know whether it is possible to propose a \emph{truthful (or incentive compatible) mechanism}, i.e., an algorithm solving the optimization problem together with a set of incentives/payments for the agents motivating them to report their part of the input honestly.
As an illustrating example, consider the problem of finding the maximum of a set of values $v_1,v_2,\ldots,v_n$. In the classic setting, computing the maximum value is trivial. Let us consider now the case when the inputs are strategic. It means that
each of the $n$ selfish agents $i$, for $1\leq i\leq n$,
has a private value $v_i$ (not known to the algorithm) for
being selected (as the maximum), and may report any value $b_i$. If the agents know that the maximum value will be computed using the classic algorithm: $\max_i b_i$, then each agent will have an incentive
to cheat by declaring $+\infty$ instead of her true value.
In such a strategic setting, we need a mechanism that is capable of giving the agents an incentive to report their true values. For doing so, we may use Vickrey's (also known as \emph{second-price}) mechanism \cite{Vickrey}. In this setting, the maximum $\max_i b_i$ is still
computed, but the agent $i^*:=\arg\max_i b_i$ is charged the second
highest reported value $p^*:=\max_{j \neq i^*} b_j$. Her utility is
therefore $v_{i^*}-p^*$. It can be shown that in such a setting each
agent $i$ has an incentive to report $b_i=v_i$, and hence
this mechanism is able to compute the maximum value in this strategic environment \cite{D16}.
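As a minimal illustration of this classical mechanism (an illustrative sketch only; the function name and the tie-breaking rule are ours), the winner and her payment can be computed as follows.
\begin{verbatim}
def second_price(bids):
    """Return (winner index, payment) for Vickrey's second-price mechanism."""
    # Winner: the agent with the highest bid (ties broken by lowest index).
    winner = max(range(len(bids)), key=lambda i: bids[i])
    # Payment: the highest bid among the remaining agents.
    payment = max(b for i, b in enumerate(bids) if i != winner)
    return winner, payment

# Example: agent 2 wins and pays the second-highest reported value, 7.
print(second_price([5, 7, 10]))   # -> (2, 7)
\end{verbatim}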
In this paper, we consider the problem of \emph{maximum vectors}, i.e., the problem of finding the maxima of a set of vectors in a strategic environment. The classic problem of computing the maxima of a
set of vectors can be stated as follows: we are given a set $V$ of $n$ $k$-dimensional vectors $v_1,v_2,\ldots,v_n$ with $v_i=(v_i^1, v_i^2, \ldots, v_i^k)$ for $i=1,2,\ldots,n$. Given two vectors
$v_i=(v_i^1, v_i^2, \ldots, v_i^k)$ and
$v_j=(v_j^1, v_j^2, \ldots, v_j^k)$, we say that $v_i$ is {\em dominated}
by $v_j$ if $v_i^l \leq v_j^{l}$ for $1\leq l\leq k$,
with at least one strict inequality among the $k$ inequalities. The problem consists in computing $MAX(V)$, i.e. the set of all \emph{nondominated} vectors among the $n$ given vectors. This problem is related to other well-known problems, such as the \emph{Pareto curve computation} in multiobjective optimization \cite{Ehr00,PY,S86}, the \emph{skyline} problem in databases \cite{KRR,PTFS03}, or the \emph{contour} problem \cite{M74}.
In a ``strategic'' setting the problem is as follows:
there are $n$ selfish agents $1,2,\ldots,n$ and the value of
agent $i$ is described by a vector $v_i=(v_i^1, v_i^2, \ldots, v_i^k)$ for being \emph{selected}\footnote{An agent is \emph{selected} if its bid belongs to the set of nondominated vectors.}. The vector $v_i$ is a \emph{private information} known only by agent $i$. Computing the set of nondominated vectors by using one of the classic algorithms gives incentive to the agents to cheat by declaring $+\infty$ in all the coordinates of their vectors instead of their true values per coordinate. Our work explores under which conditions it is
possible to incentivize agents to report their true values. In order to
precisely answer this question, it is useful to distinguish two cases.
In the strongest case, the mechanism is able to enforce truthtelling
for each agent regardless of the reports of the others (\emph{truthfulness}).
In the second case, the mechanism is able to enforce truthtelling for each
agent assuming that the others report their true values
(\emph{equilibria truthfulness}).
\emph{Previous works}
The Artificial Intelligence (AI) community is faced with many real-world problems involving multiple, conflicting and noncommensurate objectives in path planning \cite{Donald,Khouadjia,Quemy}, game search \cite{Dasgupta}, preference-based configuration \cite{Benabbou}, ... Modeling such problems using a single scalar-valued criterion may be problematic (see for instance \cite {Zeleny}) and hence multiobjective approaches have been studied in the AI literature \cite{Hart,Mandow}.
Some multiobjective problems have been considered in the mechanism design framework. However, these works apply a budget approach where instead of computing the set of all Pareto solutions (or an approximation of this set), they consider that among the different criteria, one is the main criterion to be optimized while the others are modeled via budget constraints \cite{Bilo,Grandoni}.
Another family of related works concern \emph{auction theory}.
In the
classical setting, the item as well as the valuation of the bidders are characterized
by a scalar representing the price/value of the item. However, in many situations
an item is characterized, besides its price, by quality measures, delivery times,
technical specifications, etc. In such cases, the valuations of the bidders for the
item are vectors. Auctions where the item to sell or buy is characterized by a vector
are known as \emph{multi-attribute auctions} \cite{Bellosta,Bellosta2,Bichler,Bichler2,Bichler3,Branco,Che,Smet,desmet-new}.
In most of these works, a scoring rule is used for combining the values of the different attributes in order to determine the winner of the auction.
\emph{Our contribution} We first show that neither truthfulness nor equilibria truthfulness are achievable. However, if one assumes that the agents have distinct values in each of the dimensions, we show that it
is possible to design an equilibria truthful mechanism for the {\em Maximum Vector problem}. We also show that the payments that our algorithm computes are the only payments that give this guarantee. In order to go beyond the negative result concerning ties in the valuations of agents, we show that it is possible to get an equilibria truthful mechanism for the
{\em Weakly Maximum Vector problem} in which one looks for weakly nondominated vectors instead of nondominated ones \cite{Ehr00}.
\section{Problem definition}\label{sec-prbdef}
The following definition and notations will be useful in the sequel of the paper.
\begin{definition} Given two vectors $x,y \in \mathbb{R}_+^k$ we say that:
\begin{itemize}
\item $x$ {\em weakly dominates} $y$, denoted by $x \succeq y$, iff $x^j \ge y^j$ for all $j \in \{1, \ldots ,k\}$;
\item $x$ {\em dominates} $y$, denoted by $x \succ y$, iff $x \succeq y$ and $x^j > y^j$ holds for at least one coordinate $j \in \{1, \ldots ,k\}$;
\item $x$ {\em strongly dominates} $y$, denoted by $x \gg y$, iff
$x^j > y^j$ holds for all coordinates $j \in \{1, \ldots ,k\}$;
\item $x$ and $y$ are {\em incomparable}, denoted by $x \sim y$,
iff there exist two coordinates, say $j$ and $j'$, such that $x^j < y^j$ and $x^{j'} > y^{j'}$.
\end{itemize}
\end{definition}
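For concreteness, these relations can be transcribed directly into code (an illustrative sketch of ours; vectors are represented as tuples of numbers).
\begin{verbatim}
def weakly_dominates(x, y):       # x >= y componentwise
    return all(xj >= yj for xj, yj in zip(x, y))

def dominates(x, y):              # x >= y with at least one strict inequality
    return weakly_dominates(x, y) and any(xj > yj for xj, yj in zip(x, y))

def strongly_dominates(x, y):     # x > y componentwise
    return all(xj > yj for xj, yj in zip(x, y))

def incomparable(x, y):           # some coordinate larger, some smaller
    return any(xj > yj for xj, yj in zip(x, y)) and \
           any(xj < yj for xj, yj in zip(x, y))
\end{verbatim}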
We denote by $\mathbb{R}^k_{+}$ (resp. $\mathbb{R}^k_{*+}$)
the set of vectors $v\in \mathbb{R}^k$ such that $v\succ \vec{0}$ (resp.
$v\gg \vec{0}$), with $\vec{0}:=(0,\ldots ,0)$ the zero vector.
Given a set $F \subset \mathbb{R}^k_+$, as stated before, we denote by $MAX(F)$ the subset of all nondominated vectors, i.e. $MAX(F) := \{v\in F\: :\: \not\exists v_*\in F, v_*\succ v \}$.
Such a set is composed of pairwise incomparable vectors.
In a similar way, we will denote by $MIN(F)$ the set
$\{v\in F\: :\: \not\exists v_*\in F, v\succ v_* \}$. We will also consider the subset of all weakly nondominated vectors, i.e. $WMAX(F) := \{v\in F\: :\: \not\exists v_*\in F, v_*\gg v \}$.
The Maximum Vector problem has been studied in the classical framework, and the following proposition is known:
\begin{proposition}
(from \cite{KLP75})\label{lem-par} The set $MAX(F)$ can be computed in
$O(|F| \log |F|)$ time for $k=2,3$ and at most
$O(|F| (\log |F|)^{k-2})$ for $k\geq 4$.
\end{proposition}
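A straightforward quadratic-time way to compute $MAX(F)$, which is enough for the small examples in this paper although it does not attain the bounds of Proposition~\ref{lem-par}, is sketched below (an illustrative sketch of ours, not the algorithm of \cite{KLP75}).
\begin{verbatim}
def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def MAX(F):
    """Set of nondominated vectors of F (naive pairwise filtering)."""
    return [v for v in F if not any(dominates(w, v) for w in F)]

print(MAX([(3, 1), (1, 3), (2, 2), (1, 1)]))   # -> [(3, 1), (1, 3), (2, 2)]
\end{verbatim}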
Following the mechanism design framework~\cite{IntroMD}, we aim to design
a mechanism, that we call {\em Pareto mechanism}, such that no agent has an incentive to misreport its vector in order to increase her utility.
The set of agents is denoted by $N$.
Each agent $i$ has a private {\em vector} $v_i=(v_i^1, v_i^2, \ldots, v_i^k)$ representing the agent valuations on $k$ numerical criteria for being selected. In the following, we consider that $k$ is a fixed constant.
We denote by $V$ the set of private vectors.
Each agent $i$ reports a vector (a bid) $b_i=(b_i^1, b_i^2,\ldots, b_i^k)$. We denote by $B$ the set of all reported vectors. Based on the set of reported vectors, the mechanism computes for each agent $i$ a vector-payment $p_i=(p_i^1,p_i^2,\ldots ,p_i^k)$.
For each agent $i$, if $b_i$ belongs to $MAX(B)$ she has to pay $p_i$ and so her utility is $u_i := v_i-p_i$, while if $b_i$ does not belong to $MAX(B)$ her utility is $u_i=\vec{0}$ (zero vector).
If no agent has an incentive to misreport her vector in order to increase her utility, then we will have $MAX(B)=MAX(V)$, and hence we will be able to correctly compute $MAX(V)$ by computing $MAX(B)$.
If we consider $WMAX$ instead of $MAX$ we use the term of a weakly Pareto mechanism.
\section{Preliminaries}
The Pareto mechanism we want to design must satisfy several properties.
\begin{definition}[multiobjective individual rationality]
A Pareto mechanism satisfies the multiobjective individual rationality (MIR)
constraint iff $u_i \succeq \vec{0}$ for all agents $i$.
\end{definition}
By the MIR constraint, it is always better for an agent to participate in
the mechanism (i.e. reports a vector) than not participating. In the following we will always assume that the mechanism satisfies the MIR constraint.
We want that the Pareto mechanism incentivize agents to report their true values. This leads to the two following formal definitions.
\begin{definition}[multiobjective truthfulness]
For any fixed set of reported vectors $b_{i'}$, $i'\neq i$, let $u_i$ be agent $i$'s utility if she reports $b_i=v_i$ and let $u'_i$ denote her
utility if she reports $b_i\neq v_i$ (the reported vectors of all the
other agents remaining unchanged). A Pareto mechanism is said to be
multiobjective {\em truthful} iff
$u_i \succeq u'_i$ or $u_i \sim u'_i$ for any agent $i \in
\{1, \ldots, n\}$.
\end{definition}
\begin{definition}[multiobjective equilibria truthfulness]
As in the previous definition, let $u_i$ be agent $i$'s utility if $b_i=v_i$ and let
$u'_i$ denote her utility if $b_i\neq v_i$. A Pareto mechanism is said to be
multiobjective {\em equilibria truthful} iff
$u_i \succeq u'_i$ or $u_i \sim u'_i$ for any agent $i \in
\{1, \ldots, n\}$, assuming that $b_{i'}=v_{i'}$ for all $i'\neq i$.
\end{definition}
Honestly reporting her valuation is a dominant strategy for any agent if the mechanism is truthful.
We will also need some additional definitions in the context of multicriteria
optimization, along with some technical lemmas. The missing proofs can be found in
the Appendix Section. In the sequel, all sets $S$ have a finite size.
\begin{definition}\label{def-t1}
Let $S\subset \mathbb{R}^k_{*+}$ be a finite set of $k$-dimensional vectors.
We define the {\em reference points}\footnote{This set is known in multiobjective optimization as the set of \emph{local upper bounds} \cite{Vander}.} of $S$, denoted by ${\cal T}(S)$,
as the minimum subset of $\mathbb{R}^k_{+}$ such that
for any $v\in \mathbb{R}^k_{*+}$ with $v\not\in MAX(S)$,
one has $v\in MAX(S\cup \{v\})$ iff
$\exists t\in {\cal T}(S)$ such that $v\gg t$.
\end{definition}
Such a set can be easily computed in dimension 2. For $k=2$, an example is depicted in
Figure~\ref{fig-refpoints}.
Let $S\subset \mathbb{R}^2_{*+}$. By Proposition~\ref{lem-par}, we compute $MAX(S) = \{s_1,\ldots ,s_r\}$, where the solutions
$s_i$, $1\leq i\leq r$, are pairwise incomparable.
Without loss of generality we assume that
$s_1^1 < s_2^1 < \ldots < s_r^1$ and $s_1^2 > s_2^2 > \ldots > s_r^2$.
Then one has ${\cal T}(S)=\{t_1,\ldots t_{r+1}\}$
with $t_1 = (0,s_1^2)$, $t_{r+1} = (s_r^1,0)$
and $t_l=(s_{l-1}^1,s_l^2)$ for $2\leq l\leq r$.
The overall complexity to compute ${\cal T}(S)$ is therefore $O(|S| \log |S|)$ in dimension 2.
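The two-dimensional construction just described can be implemented directly (an illustrative sketch of ours; it assumes that $MAX(S)$ has already been computed, e.g., with the naive filter sketched above).
\begin{verbatim}
def reference_points_2d(max_S):
    """T(S) for k=2, given MAX(S) as a list of pairwise incomparable pairs."""
    s = sorted(max_S)                      # increasing 1st coord., decreasing 2nd
    T = [(0, s[0][1])]                     # t_1 = (0, s_1^2)
    T += [(s[l - 1][0], s[l][1]) for l in range(1, len(s))]
    T.append((s[-1][0], 0))                # t_{r+1} = (s_r^1, 0)
    return T

print(reference_points_2d([(3, 1), (1, 3), (2, 2)]))
# -> [(0, 3), (1, 2), (2, 1), (3, 0)]
\end{verbatim}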
The existence and uniqueness of such a set for any dimension follows from
Proposition~\ref{prop-ref}.
Let $D_S^j:=\{0\} \cup \{s^j \, : \, s\in S\}$ for $j=1,\ldots ,k$,
and $\Omega_S := D_S^1 \times D_S^2 \times \cdots \times D_S^k$.
\begin{proposition} \label{prop-ref}
For any finite set $S\subset \mathbb{R}^k_{*+}$,
one has ${\cal T}(S) = MIN( \{t \in \Omega_S \, : \,
\forall s \in S, \; s \not \gg t \}).$
\end{proposition}
Notice that $|\Omega_S| \leq (|S|+1)^k$ and by using Proposition~\ref{lem-par}
we obtain that for any $S\subset \mathbb{R}^k_{*+}$ its set of reference
points ${\cal T}(S)$ can be computed in polynomial time with respect to
$|S|$ ($k$ is assumed to be a constant).
For example, with $k=3$ and
$S=\{(2,2,2),(1,3,3),(3,1,1)\}$, using Proposition~\ref{prop-ref} one
obtains:
${\cal T}(S)=\{(3,0,0),(2,1,0),(2,0,1),(1,2,0),(1,0,2),(0,3,0),(0,0,3)\}.$
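Proposition~\ref{prop-ref} also yields a direct, if brute-force, way to compute ${\cal T}(S)$ in any dimension (an illustrative sketch of ours, polynomial in $|S|$ for fixed $k$ as noted above); on the three-vector example it returns exactly the seven reference points listed.
\begin{verbatim}
from itertools import product

def strongly_dominates(x, y):
    return all(a > b for a, b in zip(x, y))

def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def reference_points(S):
    """T(S) = MIN({t in Omega_S : no s in S strongly dominates t})."""
    k = len(S[0])
    axes = [sorted({0} | {s[j] for s in S}) for j in range(k)]
    omega = [t for t in product(*axes)
             if not any(strongly_dominates(s, t) for s in S)]
    return [t for t in omega if not any(dominates(t, u) for u in omega)]

S = [(2, 2, 2), (1, 3, 3), (3, 1, 1)]
print(sorted(reference_points(S)))
# [(0, 0, 3), (0, 3, 0), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0), (3, 0, 0)]
\end{verbatim}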
\begin{figure}
\caption{The reference points ${\cal T}(S)$ of a set $S$ in dimension $k=2$.}
\label{fig-refpoints}
\end{figure}
\section{Impossibility results}
Because of the following Proposition, achieving an equilibria truthful Pareto
mechanism is the best we can hope for.
\begin{proposition}\label{not-truth}A Pareto mechanism cannot be truthful.
\end{proposition}
\begin{proof}
Let us consider an instance in two dimensions, with three agents reporting $b_1=(3,1)$,
$b_2=(1,3)$ and $b_3=(2,2)$. Then $b_3\in MAX(B)$, and agent 3 is charged
some payment $p_3\preceq (2,2)$ by the MIR assumption.
Let us define the following region of payments ${\cal R} = ([1,2]\times [1,2])
\setminus \{(1,\alpha)\cup (\alpha,1)\}$ for $1\leq \alpha\leq 2$, depicted
in Figure~\ref{fig-just}.
We claim that we can not have $p_3\in {\cal R}$.
Indeed if it was the case, then if $v_3=(2,2)$ agent 3's interest would
be to lie and report $b_3=((1+p_3^1)/2,(1+p_3^2)/2)$. She would still be in $MAX(B)$ and get charged $p'_3\preceq b_3\prec p_3$.\\
Now, since $p_3\preceq (2,2)$ and $p_3\not\in {\cal R}$ it means that
either $p_3\preceq (1,2)$ (case 1) or $p_3\preceq (2,1)$ (case 2).
In the first case, if $v_3=(1,2.5)$ then agent 3's interest would
be to lie and report $b_3=(2,2)$. She would belong to $MAX(B)$ and
her utility would be $(1,2.5)-p_3\succ (0,0)$, whereas if she reports $b_3=v_3$
she would not belong to $MAX(B)$ and therefore gets a utility $(0,0)$.
In the second case, in a similar way if $v_3=(2.5,1)$ then agent 3's interest
would be to lie and report $b_3=(2,2)$.
\end{proof}
\begin{definition}
An instance satisfies the {\em DV} property (distinct values) if
for every couple of distinct agents $i$, $i' \in N$, and
every $j \in \{1,\ldots ,k\}$, $v_i^j \neq v_{i'}^j$ holds.
\end{definition}
Let us motivate the introduction of this property.
\begin{proposition}\label{dvprop} Without the DV property, a Pareto mechanism cannot be
equilibria-truthful.
\end{proposition}
\begin{proof}
The proof is very similar to the one for Proposition~\ref{not-truth}.
We only need to assume that agents 1 and 2 are reporting their true vectors, i.e.
$b_1=v_1=(3,1)$ and $b_2=v_2=(1,3)$, and notice that the DV assumption does
not hold in cases 1 and 2.
\end{proof}
\begin{figure}
\caption{\small The region of payments ${\cal R}$ used in the proofs of Propositions~\ref{not-truth} and~\ref{dvprop}.}
\label{fig-just}
\end{figure}
\section{A Pareto mechanism for the Maximum Vector problem}
We are going to present a Pareto mechanism, denoted by $\cal M$, which
satisfies the MIR constraint and which is equilibria-truthful under the hypothesis DV.
The mechanism is described in Algorithm~\ref{figpmeca}. In the initial step,
we remove all identical vectors. This means that if there is a set of
agents with the same reported vector $b$, this vector is removed from the set $B$ and
all such agents will not be considered anymore in the mechanism.
Notice that this case will not occur, since we are in the context of
an equilibria truthful mechanism and we have the DV assumption. Not having
two identical vectors is a formal requirement used in the proof of
Lemma~\ref{lem-prop1}.
The mechanism computes for all agents $i$ such that $b_i\in MAX(B)$ a set of possible payments, denoted by $PAY(i)$, and can charge agent $i$ any payment from this set.
\begin{algorithm}
\begin{algorithmic}[1]
\STATE Remove all identical vectors and corresponding agents.
\STATE Compute $MAX(B)$.
\STATE For all $b_i\in MAX(B)$, set $PAY(i) := \{t \in
{\cal T}(B\setminus \{b_i\}) \: |\: b_i \succeq t\}$, and choose any
$p_i\in PAY(i)$.
\STATE For all $b_i\notin MAX(B)$, we set $p_i=\vec{0}$.
\caption{\label{figpmeca}The Pareto mechanism ${\cal M}$.}
\end{algorithmic}
\end{algorithm}
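The following is a compact transcription of ${\cal M}$ (an illustrative sketch of ours; it re-implements, for self-containment, the naive $MAX$ and brute-force ${\cal T}$ computations sketched earlier, and picks the first admissible payment arbitrarily).
\begin{verbatim}
from itertools import product

def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def weakly_dominates(x, y):
    return all(a >= b for a, b in zip(x, y))

def strongly_dominates(x, y):
    return all(a > b for a, b in zip(x, y))

def MAX(F):
    return [v for v in F if not any(dominates(w, v) for w in F)]

def T(S):   # reference points, via the brute-force characterization above
    k = len(S[0])
    axes = [sorted({0} | {s[j] for s in S}) for j in range(k)]
    omega = [t for t in product(*axes)
             if not any(strongly_dominates(s, t) for s in S)]
    return [t for t in omega if not any(dominates(t, u) for u in omega)]

def mechanism_M(B):
    """Return {agent index: payment vector} for the selected agents."""
    B = [tuple(b) for b in B]
    unique = [b for b in B if B.count(b) == 1]      # step 1: drop duplicates
    selected = MAX(unique)
    payments = {}
    for i, b in enumerate(B):
        if b in selected:
            others = [c for c in unique if c != b] or [tuple(0 for _ in b)]
            pay_set = [t for t in T(others)
                       if weakly_dominates(b, t)]   # PAY(i), nonempty by Lemma 1
            payments[i] = pay_set[0]
    return payments

print(mechanism_M([(3, 1), (1, 3), (2, 2)]))
# -> {0: (2, 0), 1: (0, 2), 2: (1, 1)}
\end{verbatim}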
\begin{lemma} \label{lem-prop1}
For any agent $i$ such that $b_i\in MAX(B)$, one has $PAY(i)\neq \emptyset$.
\end{lemma}
\begin{proof}
We need the following notations.
Let $D_B^j:=\{0\} \cup \{b^j \, : \, b\in B\}$ for $j=1,\ldots ,k$.
Given $a \in \mathbb{R}_{*+}$, let us denote by
$dec_{B,j}(a)$ the quantity $\max\{x \in D_B^j \, : \, x<a\}$.\\
Let us consider a vector $r =(dec_{B,1}(b_i^1),\ldots,dec_{B,k}(b_i^k))$.
If there exists an agent $z\neq i$ such that $b_z \gg r$ then
$b_z \succ b_i$ (since no two reported vectors are identical), which is a
contradiction with $b_i \in MAX(B)$.
Therefore $r \in \{t \in \Omega_{B\setminus\{b_i\}} \, : \, \forall s \in B
\setminus \{b_i\}, \, s \not \gg t\}$.
By definition of the $MIN$ operator, there exists
$w \in MIN(\{t \in \Omega_{B\setminus\{b_i\}} \, : \, \forall s \in B
\setminus \{b_i\}, \, s \not \gg t\}) =
{\cal T}(B \setminus \{b_i\})$ such that $w \preceq r$.
Since $r \prec b_i$ (by construction), we get $w \prec b_i$.
Therefore $w \in PAY(i)$ and $PAY(i)\neq \emptyset$.
\end{proof}
\begin{theorem} The Pareto mechanism ${\cal M}$ satisfies the MIR constraint.
\label{theo-mir}
\end{theorem}
\begin{proof}
Given an agent $i$ such that $b_i\not\in MAX(B)$ her utility $u_i$ is the zero vector $\vec{0}$ by definition.
Now given an agent $i$ such that $b_i\in MAX(B)$, we have to prove that $b_i \succeq p_i$ for all
$p_i \in PAY(i)$. By Lemma~\ref{lem-prop1}, $PAY(i)$ is nonempty, and according
to the mechanism $\cal M$, $PAY(i)$ only contains vectors weakly dominated by $b_i$, thus
the property holds.
\end{proof}
In what follows, we use the following standard notation in game theory.
Given a set of reported vectors $B := \{b_j\:|\:j\in N\}$, we denote by
$(b_{-i},v_i)$ the set in which each agent $j\in N\setminus \{i\}$
reports $b_j$, excepted the agent $i$ which reports $v_i$ instead, and we denote by
$(b_{-i},b_i)$ the set in which each agent $j\in N$ reports $b_j$
including the agent $i$ which reports $b_i$.
Before proving Theorem~\ref{theo-eqt} we need the following two lemmas:
\begin{lemma}\label{lem-la}
Let $S\subset \mathbb{R}^k_{*+}$ be a finite set.
Then, $\forall t\in {\cal T}(S)$ and $\forall j\in \{1,\ldots, k\}$, either
$t^j=0$ or $\exists s\in S$ such that $t^j=s^j$.
\end{lemma}
\begin{lemma}\label{lem-lt}
Let $S\subset \mathbb{R}^k_{*+}$ be a finite set.
Then, ${\cal T}(S)$ is composed of mutually noncomparable vectors, i.e.
$\forall t,t'\in {\cal T}(S)$, one has $t=t'$ or $t\sim t'$.
\end{lemma}
\begin{theorem}\label{theo-eqt}The Pareto mechanism ${\cal M}$ is equilibria-truthful.
\end{theorem}
\begin{proof}
Let $u_i$ be agent $i$'s utility if $b_i=v_i$ and let $u'_i$ denote her
utility if $b_i\neq v_i$.
We need to show that $u_i \succeq u'_i$ or $u_i \sim u'_i$.
Recall that we always assume that $b_i \gg \vec{0}$ and $v_i \gg \vec{0}$.\\
We have two cases to consider. First, let assume that
$v_i\not\in MAX(b_{-i}, v_i)$.
The utility $u_i$ of agent $i$ is $\vec{0}$. In that case, agent $i$ may have an
incentive to report a vector $b_i\neq v_i$ such that $b_i\in MAX(b_{-i},b_i)$.
According to the mechanism $\cal M$, agent $i$ will be charged some
$t\in {\cal T}(B\setminus \{b_i\})$.
Since $v_i\not\in MAX(b_{-i}, v_i)$ we get from
Definition~\ref{def-t1} with $S=B\setminus \{b_i\}$ and $v=v_i$
that $v_i\not\gg t$. From the DV (distinct values) hypothesis
and Lemma~\ref{lem-la} and using that $v_i\gg \vec{0}$, we can conclude
that either $v_i\sim t$ or $t\gg v_i$. Indeed, if it were not the case, then
$v_i\not\gg t$, $v_i\not\sim t$ and $t\not\gg v_i$ would imply that
$\exists j$ such that $v_i^j = t^j$, and by Lemma~\ref{lem-la} we know that
either $t^j=0$ or $\exists s\in B\setminus \{b_i\}$ such that $t^j=s^j$, meaning that
either $v_i^j=0$ or $v_i^j=s^j$ with $s$ the reported vector of an agent different from $i$.
But recall that we have assumed that $v_i \gg \vec{0}$ and moreover since the other agents
report their true values and by the DV hypothesis this is not possible.
This case is illustrated in Figure~\ref{fig-eqt-c1}.
Therefore, the utility
$u'_i = v_i - t$ of agent $i$ will satisfy $\vec{0}\sim u'_i$ or $\vec{0}\gg u'_i$.\\
Assume now that $v_i\in MAX(b_{-i}, v_i)$. According to the mechanism
${\cal M}$, if agent $i$ declares her true value, she will be charged some
$t$ such that $t\in {\cal T}(B\setminus \{b_i\})$. Her utility $u_i$ will
be $v_i-t$.
If agent $i$ reports $b_i$ such that $b_i\in MAX(b_{-i},b_i)$, then
she will be charged $t'$ for some
$t'\in {\cal T}(B\setminus \{b_i\})$ and her utility $u'_i$ will be
$v_i - t'$. Since by Lemma~\ref{lem-lt} one has $t=t'$ or $t\sim t'$, the
utilities will satisfy $u'_i=u_i$ or $u'_i\sim u_i$.
This case is illustrated in Figure~\ref{fig-eqt-c2}.
Finally if agent $i$ reports $b_i$ such that $b_i\not\in MAX(b_{-i},b_i)$,
her utility will be zero, i.e. $u'_i=\vec{0}$ whereas $u_i\succeq \vec{0}$ according to
Theorem~\ref{theo-mir}.
\begin{figure}
\caption{\small Illustration of the two cases in the proof of Theorem~\ref{theo-eqt}.}
\label{fig-eqt}
\label{fig-eqt-c1}
\label{fig-eqt-c2}
\end{figure}
\end{proof}
We are now going to prove that the Pareto mechanism we introduced is the
unique way of achieving equilibria truthfulness.
Let $\pi$ be a truthful payment function, i.e. given a set of reported vectors $B$, for any agent $i$, $\pi_i(B)$ is the amount charged to the agent $i$, such that no agent has an incentive to declare a false
vector.
Recall that by the MIR constraint, we assume that
\begin{eqnarray}
b_i \succeq \pi_i(b_{-i},b_i) \mbox{ for any vector } b_i. \label{ineg1}
\end{eqnarray}
\begin{lemma}\label{lem1}
For any two different reported vectors
$b_i$ and $b'_i$, such that $b_i\in MAX(b_{-i}, b_i)$ and
$b'_i\in MAX(b_{-i}, b'_i)$, either
$\pi_i(b_{-i},b_i)\sim \pi_i(b_{-i},b'_i)$ or
$\pi_i(b_{-i},b_i) = \pi_i(b_{-i},b'_i)$.
\end{lemma}
\begin{proof}
Let us assume that $\pi_i(b_{-i},b'_i)\succ \pi_i(b_{-i},b_i)$. Then
if $v_i=b'_i$, agent $i$ would have an incentive to report $b_i$ instead
of her true value $v_i$.
The same line of reasoning shows that one cannot have
$\pi(b_{-i},b_i) \succ \pi(b_{-i},b'_i)$ either. \\$\Box$
\end{proof}
We need additional lemmas.
\begin{lemma}\label{lem-t01}
Let $S\subset \mathbb{R}^k_{*+}$ be a finite set.
Then, $\forall s\in MAX(S)$, $\exists t\in {\cal T}(S)$ such that
$s \succ t$.
\end{lemma}
\begin{lemma}\label{lem-t5}Let $x,y,z \in \mathbb{R}^k_+$ be three
$k$-dimensional vectors, such that $y\succ x$ and $y\sim z$.
Then, $x\not\succeq z$.
\end{lemma}
\begin{proof}
The proof is by contradiction. Assume that $x\succeq z$. Since
$y\sim z$, one can assume without loss of generality that
$y^1 > z^1$ and $y^2 < z^2$.
Since $y\succ x$, one has $y^2 \geq x^2$. Therefore, $z^2 > x^2$, which is
in contradiction with $x\succeq z$.
\end{proof}
\begin{lemma}\label{lem-t4}Let $S\subset \mathbb{R}^k_+$ be a set of
$k$-dimensional vectors, mutually non comparable, i.e.
$\forall s,s'\in S$, one has $s\sim s'$.
Let $p\in \mathbb{R}^k_+$ such that $s^*\succ p$, for some $s^*\in S$.
Then $\forall s\in S$, $s\succ p$ or $s\sim p$.
\end{lemma}
\begin{proof}
Let us consider any $s\in S$ with $s\neq s^*$. We know that $s\sim s^*$,
and by using Lemma~\ref{lem-t5} with $x=p$, $y=s^*$ and $z=s$, we obtain
that $p\not\succeq s$, i.e. $s\succ p$ or $s\sim p$.
\end{proof}
\begin{lemma}\label{lem-t2}Let $S\subset \mathbb{R}^k_+$ be a set of
$k$-dimensional vectors, and let $p\in \mathbb{R}^k_+$ such that
$\forall s\in S$, $p\sim s$. Then, there exists $\delta \in \mathbb{R}^k_{*+}$
such that $\forall s\in S$, $p+\delta \sim s$.
\end{lemma}
\begin{proof}
Let $S=\{s_1,\ldots ,s_{|S|}\}$. Since $p\sim s$, $\forall s\in S$, it means
that $\forall s_i\in S$, $\exists l_i,l'_i \in \{1,\ldots ,k\}$ such that
$s_i^{l_i} > p^{l_i}$ and $s_i^{l'_i} < p^{l'_i}$.
Let $I_j = \{ i \: | \: l_i = j \}$.
For all $j\in \{1,\ldots ,k\}$, let
$\varepsilon^j = \min_{i\in I_j} (s_i^{l_i} - p^{l_i})$ if
$I_j \neq \emptyset$, and let $\varepsilon^j$ be any positive real number
otherwise.
Then it is easy to see that the vector
$\delta=(\varepsilon^1/2,\ldots ,\varepsilon^k/2)$ satisfies the lemma.
Indeed, observe that $\varepsilon^j>0$ for $1\leq j\leq k$.
Now let us consider any $s_r\in S$ with $1\leq r\leq |S|$.
One has $s_r^{l_r} > p^{l_r}$ and $s_r^{l'_r} < p^{l'_r}$.
Therefore $s_r^{l'_r} < p^{l'_r} + \varepsilon^{l'_r}/2 = (p+\delta)^{l'_r}$.
We are going to show that $s_r^{l_r} > (p+\delta)^{l_r}$.
One has $r\in I_{l_r}$, hence $\varepsilon^{l_r} \leq s_r^{l_r}-p^{l_r}$, and
$\delta^{l_r}=\varepsilon^{l_r}/2 < s_r^{l_r}-p^{l_r}$, which means that
$s_r^{l_r} > (p+\delta)^{l_r}$.
\end{proof}
\begin{lemma}\label{lem-t3}Let $S\subset \mathbb{R}^k_+$ a set of
$k$-dimensional vectors, and let $p\in \mathbb{R}^k_+$ such that
one has $\forall s\in S_1$, $s\sim p$, and one has
$\forall s\in S_2$, $s\succ p$, with $S_1$ and $S_2$ a bipartition of $S$,
i.e. $S_1\cup S_2=S$ and $S_1\cap S_2=\emptyset$.
Then, there exists $\delta\in \mathbb{R}^k_{*+}$ such that
$\forall s\in S$, $s\sim p+\delta$ or $s\succ p+\delta$.
\end{lemma}
\begin{proof}
This result can be considered as a generalization of Lemma~\ref{lem-t2}
and the proof is quite similar.
For the set $S_1$ we define $\varepsilon^j$ as previously. For the set
$S_2$, we define $\varepsilon'^j = \min_{\{s\in S_2\, : \, s^j-p^j>0\}} s^j-p^j$
if the set $\{s\in S_2\, : \, s^j-p^j>0\}$ is not empty, otherwise
$\varepsilon'^j$ is any positive value.
Then it is easy to see that the vector
$\delta$ with $\delta^j = \min \{\varepsilon^j, \varepsilon'^j\}/2$,
for $1\leq j\leq k$ satisfies the lemma.
For example, let us consider any $s\in S_2$.
Since $s\succ p$, $\exists l$, $1\leq l\leq k$, such that $s^l>p^l$.
Then $s^l \geq p^l+\varepsilon'^l > p^l + \delta^l$.
It implies that $s\sim p+\delta$ or $s\succ p+\delta$.
If $s\in S_1$, the proof is the same as in Lemma~\ref{lem-t2}.
\end{proof}
\begin{lemma}\label{lem-infinity}
Let $z_j$ and $u_j$, for $j\geq 1$, be two infinite sequences of points in
$\mathbb{R}^k_{+}$ such that $z_j \gg z_{j+1}$, $z_j \succeq u_j\succ t$, and
$\lim_{j\to \infty} z_j = t$. Then $\exists l,l'$ such that $u_l \succ u_{l'}$.
\end{lemma}
\begin{proof}
First, we assume there exists a point $u_i$ such that $u_i\gg t$.
Let $\delta=\min_{1\leq j\leq k} (u_i^j - t^j) >0$.
Since $\lim_{j\to \infty} z_j = t$, $\exists N$ such that $\forall i\geq N$
one has $||z_i-t||_{\infty} = \max_{1\leq j\leq k} (z_i^j-t^j) \leq \delta/2$.
We have $u_i^j \geq t^j+\delta > t^j+\delta/2 \geq z_N^j \geq u_N^j$, meaning
that $u_i \succ u_N$.\\
Now assume that $\forall i$, $u_i\not\gg t$.
Since $u_i\succ t$, it means that $\exists \alpha_i$ such that
$u_i^{\alpha_i}=t^{\alpha_i}$. Without loss of generality by considering
an (infinite) subsequence of $u_i$ we can assume that $\forall i$, $\alpha_i=1$.
Again by considering an (infinite) subsequence we can assume that
$\exists K$ with $1\leq K\leq k-1$ such that
$\forall j$, $1\leq j\leq K$, one has $u_i^j =t^j$, and
$\forall j$, $K+1\leq j\leq k$, one has $u_i^j > t^j$. Now we can apply
the same line of reasoning as in the first case, by considering only
the coordinates between $K+1$ and $k$.
\end{proof}
\begin{theorem}\label{theo-all}The Pareto mechanism $\cal M$ gives all the equilibria
truthful payments.
\end{theorem}
\begin{proof}
The proof is by contradiction. We assume that the payment is computed in a different way than in our Pareto mechanism, and we show that there exists a configuration for which an agent has an incentive to lie. We consider any agent $i$, and any vector $b_i$ such that
$b_i\in MAX(b_{-i}, b_i)$.\\
Let us first assume that the computed payment $\pi(b_i)$ is incomparable with
the set of points ${\cal T}(B\setminus \{b_i\})$, i.e.
$\forall t\in {\cal T}(B\setminus \{b_i\})$ one has $\pi(b_i)\sim t$.
Using Lemma~\ref{lem-t2}, there exists $\delta\gg \vec{0}$ such that
$\forall t\in {\cal T}(B\setminus \{b_i\})$, $\pi(b_i)+\delta\sim t$.
Let us assume that $v_i=\pi(b_i)+\delta$. Since
$\forall t\in {\cal T}(B\setminus \{b_i\})$, $v_i\sim t$, we know by
Definition~\ref{def-t1} that
$v_i\not\in MAX(b_{-i}, v_i)$ or
$v_i\in MAX(B\setminus \{b_i\})$.
One cannot have $v_i\in MAX(B\setminus \{b_i\})$, because
Lemma~\ref{lem-t01} would contradict the fact that
$\forall t\in {\cal T}(B\setminus \{b_i\})$, $v_i\sim t$.
Therefore one has $v_i\not\in MAX(b_{-i}, v_i)$,
and the utility of agent $i$ is $\vec{0}$ if $i$ reports her true value $v_i$.
If agent $i$ reports $b_i$ her utility will be $u'_i=v_i-\pi(b_i)=\delta\gg \vec{0}$,
meaning that she has an incentive to lie.
This case is illustrated in the Figure~\ref{fig-theo-all} (Case 1).\\
We assume now that the payment computed $\pi(b_i)$ is dominated by
at least one point from ${\cal T}(B\setminus \{b_i\})$.
Using Lemma~\ref{lem-t4} it means that there exists a bipartition
${\cal T}_1,{\cal T}_2$ of ${\cal T}(B\setminus \{b_i\})$ such that
$\forall t\in {\cal T}_1$, $t \succ \pi(b_i)$ and
$\forall t\in {\cal T}_2$, $t \sim \pi(b_i)$.
Using Lemma~\ref{lem-t3} there exists $\delta\gg \vec{0}$ such that
$\forall t\in {\cal T}(B\setminus \{b_i\})$, $t \succ \pi(b_i)+\delta$ or
$t \sim \pi(b_i)+\delta$.
Let us assume that $v_i=\pi(b_i)+\delta$. According to the
Definition~\ref{def-t1}, $v_i\not\in MAX(b_{-i}, v_i)$ or
$v_i\in MAX(B\setminus \{b_i\})$. One cannot have
$v_i\in MAX(B\setminus \{b_i\})$, because
Lemma~\ref{lem-t01} would contradict the fact that
$\forall t\in {\cal T}_1$, $t \succ v_i$ and
$\forall t\in {\cal T}_2$, $t \sim v_i$.
Therefore $v_i\not\in MAX(b_{-i},v_i)$ and the utility of agent $i$ is
$\vec{0}$ if $i$ reports her true value $v_i$.
If agent $i$ reports $b_i$ her utility will be $u'_i=v_i-\pi(b_i)=\delta\gg \vec{0}$,
meaning that she has an incentive to lie. This case is illustrated in the
Figure~\ref{fig-theo-all} (Case 2).\\
We assume now that the payment computed $\pi(b_i)$ strictly dominates at least
one point $t$ from ${\cal T}(B\setminus \{b_i\})$.
Now there are two cases to consider, either
$\pi(\pi(b_i))\neq \pi(b_i)$ or $\pi(\pi(b_i))=\pi(b_i)$.
If $\pi(\pi(b_i))\neq \pi(b_i)$, by using (\ref{ineg1}) one has
$\pi(b_i)\succ \pi(\pi(b_i))$.
If we assume that $v_i=b_i$ then agent $i$ would have an incentive to report
$\pi(b_i)$ instead of her true value $b_i$. Indeed, since $\pi(b_i)\gg t$,
one can conclude from Definition~\ref{def-t1} that
$\pi(b_i)\in MAX(b_{-i},\pi(b_i))$.
Moreover, agent $i$ would pay $\pi(\pi(b_i))$ instead of $\pi(b_i)$. This case
is illustrated in the Figure~\ref{fig-theo-all} (Case 3).
Assume now that $\pi(\pi(b_i))=\pi(b_i)$. Consider any vector $b'_i$ such that
$\pi(b_i)\succ b'_i$ and $b'_i\in MAX(b_{-i}, b'_i)$. For example,
one can take $b'_i = \pi(b_i) - (\pi(b_i)-t)/2$ (since $\pi(b_i)\gg t$, one
has $b'_i\gg t$ and hence by Definition~\ref{def-t1} one has
$b'_i\in MAX(b_{-i}, b'_i)$).
Using Lemma~\ref{lem1}, either $\pi(b'_i)$ is incomparable with
$\pi(\pi(b_i))=\pi(b_i)$, or $\pi(b'_i)=\pi(\pi(b_i))=\pi(b_i)$.
If $\pi(b'_i)\sim \pi(b_i)$ then by using Lemma~\ref{lem-t5} with
$x=b'_i$, $y=\pi(b_i)$ and $z=\pi(b'_i)$, one has
$b'_i\not\succeq \pi(b'_i)$. However, this contradicts the assumption that
$b'_i\succeq \pi(b'_i)$ (see (\ref{ineg1})).
If $\pi(b'_i) = \pi(b_i)$ then since $\pi(b_i)\succ b'_i$, we have
$\pi(b'_i)\succ b'_i$, and there is again a contradiction with the
assumption (\ref{ineg1}). This case is illustrated in the
Figure~\ref{fig-theo-all} (Case 4).\\
According to the previous discussion, the only remaining case we need to
consider is when $\pi(b_i)$ dominates, but not strictly dominates, at least
one point from ${\cal T}(B\setminus \{b_i\})$.
Since $b_i\in MAX(b_{-i}, b_i)$ (and
$b_i\not\in MAX(B\setminus \{b_i\})$) we know by
Definition~\ref{def-t1} that $\exists t'\in {\cal T}(B\setminus \{b_i\})$
such that $b_i\gg t'$.
We are going to consider a set of vectors
$b_{i,l}$, with $l\geq 1$, such that $\forall l\geq 1$,
$b_{i,l}\gg b_{i,l+1}$, $b_{i,l}\gg t'$ and $\lim_{l\to \infty} b_{i,l} = t'$.
Since ${\cal T}(B\setminus \{b_i\})$ has a finite number of elements,
we can assume without loss of generality that all the vectors $\pi(b_{i,l})$
dominate the same point $t\in {\cal T}(B\setminus \{b_i\})$.
Recall also that, by inequality~(\ref{ineg1}),
$b_{i,l} \succeq \pi(b_{-i},b_{i,l})= \pi(b_{i,l})$.
One can easily see that necessarily $t=t'$. Now using Lemma~\ref{lem-infinity}
we obtain that, $\exists l,l'$ such that $\pi(b_{i,l}) \succ \pi(b_{i,l'})$.
If we assume that $v_i=b_{i,l}$ then agent $i$ would have an incentive to report
$b_{i,l'}$ instead of her true value $b_{i,l}$.
\end{proof}
\begin{figure}
\caption{\small Illustration of the cases considered in the proof of Theorem~\ref{theo-all}.}
\label{fig-theo-all}
\end{figure}
\section{A Pareto mechanism for the Weakly Maximum Vector problem}
We are going to present a Pareto mechanism, denoted by $\cal M'$, which
satisfies the MIR constraint and which is equilibria-truthful.
For doing so, we modify the mechanism ${\cal M}$ in order to
remove the DV condition.
The modified mechanism is given
in Algorithm~\ref{figpmecarelaxed} and illustrated in
Figure~\ref{fig-mprime}. It can be shown that $\cup_{i\in W} b_i = WMAX(B)$.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\STATE Remove all identical vectors and corresponding agents.
\STATE For all $b_i\in B$, set $PAY(i) := \{t \in
{\cal T}(MAX(B)\setminus \{b_i\}) \: |\: b_i \succeq t\}$.
\STATE $W$ is the set of agents $i$ such that $PAY(i)\neq \emptyset$.
\STATE For all $i\in W$ choose any $p_i\in PAY(i)$.
\STATE For all $i\notin W$, we set $p_i=\vec{0}$.
\caption{\label{figpmecarelaxed}The weakly Pareto mechanism ${\cal M'}$.}
\end{algorithmic}
\end{algorithm}
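A transcription of ${\cal M'}$ is sketched below (an illustrative sketch of ours; it reuses the \texttt{MAX}, \texttt{T} and \texttt{weakly\_dominates} helpers from the sketch of ${\cal M}$ given earlier).
\begin{verbatim}
def mechanism_M_prime(B):
    """Return {agent index: payment} for the agents selected by M'."""
    B = [tuple(b) for b in B]
    unique = [b for b in B if B.count(b) == 1]      # step 1: drop duplicates
    max_B = MAX(unique)
    payments = {}
    for i, b in enumerate(B):
        if b not in unique:
            continue
        others = [c for c in max_B if c != b] or [tuple(0 for _ in b)]
        pay_set = [t for t in T(others) if weakly_dominates(b, t)]
        if pay_set:                                 # agent i belongs to W
            payments[i] = pay_set[0]
    return payments
\end{verbatim}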
\begin{figure}
\caption{\small Illustration of the weakly Pareto mechanism ${\cal M'}$.}
\label{fig-mprime}
\end{figure}
\appendix
\section{Appendix}
In order to prove Proposition~\ref{prop-ref}, we first need a lemma.
\begin{lemma}\label{lem-p1}
Given $t \in \Omega_S$, one has
$\big( \forall s \in S, \, s \not \gg t \big) \Leftrightarrow \big( \forall s \in MAX(S), \, s \not \gg t \big)$.
\end{lemma}
\begin{proof}
\noindent ($\Rightarrow$) Obvious since $MAX(S) \subseteq S$.
\noindent ($\Leftarrow$) Obvious if $S=MAX(S)$, so let us assume that
$MAX(S)\subset S$ and let $s \in S \setminus MAX(S)$.
By the definition of $MAX$, there exists $x \in MAX(S)$
such that $x \succeq s$. Therefore $x^j \ge s^j$ for all $j \in K$.
Since $x \not \gg t$ (by hypothesis) there is $\ell \in K$ such that
$x^{\ell} \le t^{\ell}$. We deduce that $s^\ell \le t^{\ell}$ and
$s \not \gg t$ follows.
\ \\$\Box$
\end{proof}
\noindent {\bf Proof of Proposition~\ref{prop-ref}:}\\
For any finite set $S\subset \mathbb{R}^k_{*+}$,
and any $v \in \mathbb{R}^k_{*+} \setminus MAX(S)$,
we want to prove the following equivalence:
\[ \exists t \in {\cal T}(S) \, : \, v \gg t \Longleftrightarrow
v \in MAX(S \cup \{v\}), \]
with ${\cal T}(S) := MIN( \{t \in \Omega_S \, : \,
\forall s \in S, \; s \not \gg t \}).$\\
$(\Rightarrow)$ Since $v \gg t$, $v^i > t^i$ for all $i \in K$.
Since $t \in {\cal T}(S)$, $s \not \gg t$ for all
$s \in MAX(S)$. It means that for all
$s \in MAX(S)$ there exists (at least) one index,
say $i(s)$, such that $s^{i(s)} \le t^{i(s)}$.
Then, for all $s \in MAX(S)$, $s^{i(s)} \le t^{i(s)}
< v^{i(s)}$ holds. It follows that $s \not \succ v $ for all
$s \in MAX(S)$. Thus, $v \in MAX(S \cup
\{v\})$.
\noindent $(\Leftarrow)$ $v \in MAX(S \cup \{v\})$
means that for all $s \in MAX(S)$ either
$v \sim s$ or $v \succeq s$. Since $v \in
\mathbb{R}^k_{*+} \setminus MAX(S)$, the case
$v \succeq s$ becomes $v \succ s$. Both cases imply that, for all
$s \in MAX(S)$, there exists an index, say
$i(s)$, such that $v^{i(s)} > s^{i(s)}$.
Given $a \in \mathbb{R}_{*+}$, let us denote by
$dec_{S,j}(a)$ the quantity $\max\{x \in D_S^j \, : \, x<a\}$, with
$D_S^j:=\{0\} \cup \{s^j \, : \, s\in S\}$ for $j=1,\ldots ,k$.
Now let us consider a
vector $\omega$ satisfying $\omega^i = dec_{S,i}(v^i)$ for all $i \in K$.
By definition of $dec_{S,i}$, one has $\omega^{i(s)} \ge s^{i(s)}$ for
all $s \in MAX(S)$. Therefore,
$s \not \gg \omega$ for all $s \in MAX(S)$. We
deduce that $\omega \in \{ t \in \Omega_S \, : \, \forall s \in
MAX(S), \; s \not \gg t \}$.
By definition of a minimum operator $MIN$,
there exists $q \in MIN( \{ t \in \Omega_S \, : \,
\forall s \in MAX(S), \; s \not \gg t \})$ such that
$q \preceq \omega$.
Using Lemma~\ref{lem-p1}, the set $\{ t \in \Omega_S \, : \, \forall s \in
MAX(S) \; s \not \gg t \}=\{ t \in \Omega_S \, : \,
\forall s \in S, \; s \not \gg t \}$. By construction,
$v \gg \omega$ holds.
Finally, $v \gg \omega \succeq q$ implies $v \gg q$ where
$q \in MIN(\{ t \in \Omega_S \, : \, \forall s \in S,
\; s \not \gg t \})={\cal T}(S)$.
\ \\$\Box$
For $s\in \mathbb{R}^k_{*+}$ and $1\leq j\leq k$, let us define $s(j)$ as the vector
which has its $j$-th coordinate equals to $s^j$, and 0 elsewhere.
Given a set of vectors $v_1,\ldots ,v_n\in \mathbb{R}^k$, let us define
$VMAX(\{v_1,\ldots ,v_n\})$ as the vector obtained by taking on each coordinate
$j\in \{1,\ldots ,k\}$ the maximum value among the $j$-th coordinate of
$v_1,\ldots ,v_n$, i.e.
$VMAX(\{v_1,\ldots ,v_n\}) = (\max_{1\leq i\leq n} v_i^1, \ldots
, \max_{1\leq i\leq n} v_i^k)$.
\begin{proposition} \label{prop-ref2}
For any finite set $S\subset \mathbb{R}^k_{*+}$, one has\\
${\cal T}(S) = MIN(\cup_{\{j_s\in \{1,\ldots ,k\} |
s\in MAX(S)\}} VMAX(\cup_{s\in MAX(S)} s(j_s))).$
\end{proposition}
\begin{proof}
Let us first describe another way to compute the set of reference points which will
be useful to prove some properties on it.
Let $S=\{s_1,\ldots ,s_n\} \subset \mathbb{R}^k_{*+}$, and let us consider
a vector $v\in \mathbb{R}^k_{*+}\setminus MAX(S)$ such that
$v\in MAX(S\cup \{v\})$. Of course $v\not\in S$, and
it means that $\forall s\in S$, $v$ is not dominated by $s$, i.e.
$s\not \succ v$ and $s\neq v$, i.e. $\exists j$, $1\leq j\leq k$, such that
$v^j>s^j$.
We define a boolean formula $\psi$ indexed by the vector $s$, in the
following way: $\psi(s) := (v\gg s(1)) \vee (v\gg s(2)) \vee
\ldots \vee (v\gg s(k))$.
Since we know that $v\gg 0$, we get that $v$ is not dominated by $s$ if and
only if the boolean formula $\psi(s)$ is true.
Finally, observe that $\forall s\in S$, $v$ is not dominated by $s$, is
equivalent with $\forall s\in MAX(S)$, $v$ is not dominated by $s$.
Therefore, one has $v\in MAX(S\cup \{v\})$ if and only if
$\psi := \bigwedge_{s\in MAX(S)} \psi(s)$ is true, i.e.
$$\psi := \bigwedge_{s\in MAX(S)}
[ (v\gg s(1)) \vee (v\gg s(2)) \vee \ldots \vee (v\gg s(k)) ]$$
is true.
We can rewrite $\psi$ as
$$\psi := \bigvee_{\{j_s\in \{1,\ldots ,k\} | s\in MAX(S)\}} \
\bigwedge_{s\in MAX(S)} (v\gg s(j_s)).$$
In this formula, the $\vee$ is taken over all tuples
$(j_s)_{s\in MAX(S)}$, with $j_s\in \{1,\ldots ,k\}$ for
each $s\in MAX(S)$, there are therefore
$k^{|MAX(S)|}$ such tuples.
Clearly, one has $\bigwedge_{s\in MAX(S)} (v\gg s(j_s)) \iff v\gg VMAX(\cup_{s\in MAX(S)}
s(j_s))$, therefore we can write
$$\psi := \bigvee_{\{j_s\in \{1,\ldots ,k\} | s\in MAX(S)\}}
(v\gg VMAX(\cup_{s\in MAX(S)} \: s(j_s))).$$
Now observe that if $v_1\succ v_2$, clearly one has
$v\gg v_2$ if and only if $(v\gg v_1) \vee (v\gg v_2)$.
We therefore have proved the proposition.
\ \\$\Box$
\end{proof}
\noindent {\bf Proof of Lemma~\ref{lem-la} and \ref{lem-lt}:}\\
It is a direct consequence of Proposition~\ref{prop-ref2}.
\noindent {\bf Proof of Lemma~\ref{lem-t01}:}\\
Let $s\in MAX(S)$. Then $\forall s'\in MAX(S)$,
$s' \neq s$, $\exists l_{s'}$,
$1\leq l_{s'}\leq k$, such that
$(s')^{l_{s'}} < s^{l_{s'}}$. If we take $j_{s'}=l_{s'}$ for $s'\neq s$, and
$j_s$ any integer value between 1 and $k$, we have
$\forall s'\in MAX(S)$, $s\succ s'(j_{s'})$, and therefore
$s\succ VMAX(\cup_{s'\in MAX(S)} s'(j_{s'}))$.
The result now follows from Proposition~\ref{prop-ref2}.
\ \\$\Box$
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Meta-learning PINN loss functions}
\author[mymainaddress]{Apostolos~F~Psaros}
\author[Kenjiaddress]{Kenji~Kawaguchi}
\author[mymainaddress]{George~Em~Karniadakis\corref{mycorrespondingauthor}}\cortext[mycorrespondingauthor]{Corresponding Author}\ead{george_karniadakis@brown.edu}
\address[mymainaddress]{Division of Applied Mathematics, Brown University, Providence, RI 02906, USA}
\address[Kenjiaddress]{Center of Mathematical Sciences and Applications, Harvard University, Cambridge, MA 02138, USA}
\begin{abstract}
We propose a meta-learning technique for offline discovery of physics-informed neural network (PINN) loss functions.
We extend earlier works on meta-learning, and develop a gradient-based meta-learning algorithm for addressing diverse task distributions based on parametrized partial differential equations (PDEs) that are solved with PINNs.
Furthermore, based on new theory we identify two desirable properties of meta-learned losses in PINN problems, which we enforce by proposing a new regularization method or using a specific parametrization of the loss function.
In the computational examples, the meta-learned losses are employed at test time for addressing regression and PDE task distributions.
Our results indicate that significant performance improvement can be achieved by using a shared-among-tasks offline-learned loss function even for out-of-distribution meta-testing.
In this case, we solve for test tasks that do not belong to the task distribution used in meta-training, and we also employ PINN architectures that are different from the PINN architecture used in meta-training.
To better understand the capabilities and limitations of the proposed method, we consider various parametrizations of the loss function and describe different algorithm design options and how they may affect meta-learning performance.
\end{abstract}
\begin{keyword}
physics-informed neural networks, meta-learning, meta-learned loss function
\end{keyword}
\end{frontmatter}
\section{Introduction}
\subsection{Related work and motivation}
The physics-informed neural network (PINN) is a recently proposed method for solving forward and inverse problems involving partial differential equations (PDEs); see, e.g., \cite{raissi2019physicsinformed,lu2021deepxde,jagtap2020adaptive,meng2020composite,pang2020npinns,kharazmi2021hpvpinns,yang2021bpinns,cai2021physicsinformed,shukla2021parallel,jin2021nsfnets} for different versions of PINNs.
PINNs are based on (a) constructing a neural network (NN) approximator for the PDE solution that is inserted via automatic differentiation in the nonlinear operators describing the PDE, and (b) learning the solution by minimizing a composite objective function comprised of the residual terms for PDEs and boundary and initial conditions in the strong form; other PINN types in variational (weak) form have also been developed in \cite{kharazmi2021hpvpinns}.
Similar to solving supervised learning tasks with NNs, optimally solving PDEs with PINNs requires selecting the architecture, the optimizer, the learning rate schedule, and other hyperparameters (considered as such in a broad sense), each of which plays a different role in training.
Moreover, optimally enforcing the physics-based constraints, e.g., by controlling the number and locations of residual points for each term in the composite objective function, introduces additional PINN-specific hyperparameters to be selected.
Overall, partly because of the above and despite the conceptual simplicity of PINNs, the resulting learning problem is theoretically and practically challenging; see, e.g., \cite{wang2020understanding}.
In general, the loss function in NN training interacts with the optimization algorithm and affects both the convergence rate and the performance of the obtained minimum.
In addition, the loss function interacts with the NN for shaping the loss landscape; i.e., the training objective as a function of the trainable NN parameters.
As a result, from this point of view, deciding whether to use mean squared error (MSE), mean absolute error (MAE) or a different loss function can be considered as an additional hyperparameter to be selected; trial and error is often employed in practice, with MSE being the most popular option for PINNs.
For facilitating automatic selection, a parametrized loss function has been proposed in \cite{barron2019general}, which includes standard losses as special cases and can be optimized online for improving performance.
Although such an adaptive loss is shown in \cite{barron2019general} to be highly effective in the computer vision problems considered therein, it increases training computational cost and does not benefit from prior knowledge regarding the particular problem being solved.
The following question, therefore, arises in the context of PINNs: \textit{Can we develop a framework for encoding the underlying physics of a parametrized PDE in a loss function optimized offline?}
Meta-learning is an emerging field that aims to optimize various parts of learning by infusing prior knowledge of a given task distribution such that new tasks drawn from the distribution can be solved faster and more accurately; see \cite{hospedales2020metalearning} for a comprehensive review.
One form of meta-learning, namely gradient-based, alternates between an inner optimization in which dependence of updates on meta-learned parameters is tracked, and an outer optimization in which meta-learned parameters are updated based on differentiating the inner optimization paths.
Such differentiation can be performed either exactly or approximately for reducing computational cost; see, e.g., \cite{grefenstette2019generalized,nichol2018firstorder,rajeswaran2019metalearning,lorraine2020optimizing}.
In this regard, a recent research direction is concerned with loss function meta-learning, with diverse applications in supervised and reinforcement learning \cite{sung2017learning,houthooft2018evolved,wu2018learning,xu2018metagradient,zheng2018learning,antoniou2019learning,grabocka2019learning,huang2019addressing,zou2019reward,bechtle2020metalearning,gonzalez2020effective,gonzalez2020improved,kirsch2020improving,sicilia2021multidomain}.
Although different works utilize different meta-learning techniques and have different goals, it has been shown that loss functions obtained via meta-learning can lead to an improved convergence of the gradient-descent-based optimization.
Moreover, meta-learned loss functions can improve test performance under few-shot and semi-supervised conditions as well as cases involving a mismatch between train and test distributions, or between train loss functions and test evaluation metrics (e.g., because of non-differentiability of the latter).
For simplicity, we refer to meta-learned loss functions as \textit{learned losses} in this paper, while loss functions optimized during training are referred to as \textit{online adaptive losses}.
\subsection{Overview of the proposed method}
In this work, we propose a method for offline discovery via meta-learning of PINN loss functions by utilizing information from task distributions defined based on parametrized PDEs.
In the learned loss function, we encode information specific to the considered PDE task distribution by differentiating the PINN optimization path with respect to the loss function parametrization.
Discovering loss functions by differentiating the physics-informed optimization path can enhance our understanding about the complex problem of solving PDEs with PINNs.
Following the PDE task distribution definition and the learned loss parametrization, the learned loss parameters are optimized via meta-training by repeating the following steps until a stopping criterion is met:
(a) PDE tasks are drawn from the task distribution;
(b) in the inner optimization part, they are solved with PINNs for a few iterations using the current learned loss, and the gradient of the learned loss parameters is tracked throughout optimization; and
(c) in the outer optimization part, the learned loss parameters are updated based on the MSE evaluated at the final (optimized) PINN parameters.
After meta-training, the obtained learned loss is used for meta-testing, which entails solving unseen tasks until convergence.
A schematic illustration of the above alternating optimization procedure is provided in Fig.~\ref{fig:overview}.
As we demonstrate in the computational examples of the present paper, the proposed loss function meta-learning technique can improve PINN performance significantly even compared with the online adaptive loss proposed in \cite{barron2019general}, while also allocating the loss function training computational cost to the offline phase.
\begin{figure}
\caption{Schematic illustration of the proposed meta-learning method for discovering PINN loss functions.
In the inner optimization part, the PINN parameters $\theta$ are updated based on $\pazocal{L}_{\tau}(\theta, \eta)$ of Eq.~\eqref{eq:lossml:inner:loss} using the current learned loss, with the dependence of the updates on the learned loss parameters $\eta$ being tracked; in the outer optimization part, $\eta$ is updated based on $\pazocal{L}_O$ of Eq.~\eqref{eq:lossml:outer:loss}.}
\label{fig:overview}
\end{figure}
\subsection{Summary of innovative claims}
\begin{itemize}
\item We propose a gradient-based meta-learning algorithm for offline discovery of PINN loss functions pertaining to diverse PDE task distributions.
\item We extend the loss function meta-learning technique of \cite{bechtle2020metalearning} by considering alternative loss parametrizations and various algorithm design options.
\item By proving two new theorems and a corollary, we identify two desirable properties of learned loss functions and propose a new regularization method to enforce them.
We also prove that the loss function parametrization proposed in \cite{barron2019general} is guaranteed to satisfy the two desirable properties.
\item We define several representative benchmarks for demonstrating the performance of the considered algorithm design options as well as the applicability and the limitations of the proposed method.
\end{itemize}
\subsection{Organization of the paper}
We organize the paper as follows.
In Section~\ref{sec:prelim} we provide a brief overview of PINNs for solving and discovering PDEs as well as of standard NN training.
In Section~\ref{sec:meta} we summarize meta-learning for PINNs, discuss our PINNs loss function meta-learning technique in detail, and present the theoretical results.
In Section~\ref{sec:examples}, we perform various computational experiments involving diverse PDE task distributions.
Finally, we summarize our findings in Section~\ref{sec:summary}, while the theorem proofs as well as additional design options and computational results are included in Appendices~\ref{app:meta:design:multi}-\ref{app:add:results}.
\section{Preliminaries}\label{sec:prelim}
\subsection{PINN solution technique overview}
\label{sec:prelim:pinns}
Consider a general problem defined as
\begin{subequations}\label{eq:param:pde}
\begin{align}
\pazocal{F}_{\lambda}[u](t, x) &= 0 \text{, } (t, x) \in [0, T] \times \Omega \label{eq:param:pde:a}\\
\pazocal{B}_{\lambda}[u](t, x) & = 0 \text{, } (t, x) \in [0, T] \times \partial\Omega \label{eq:param:pde:b}\\
u(0, x) & = u_{0, \lambda}(x) \text{, } x \in \Omega, \label{eq:param:pde:c}
\end{align}
\end{subequations}
where $\Omega \subset \mathbb{R}^{D_x}$ is a bounded domain with boundary $\partial \Omega$, $T > 0$, and $u(t, x) \in \mathbb{R}^{D_u}$ denotes the solution at $(t, x)$.
Eq.~\eqref{eq:param:pde:a} is a PDE expressed with a nonlinear operator $\pazocal{F}_{\lambda}[\cdot]$ that contains identity and differential operators as well as source terms; Eq.~\eqref{eq:param:pde:b} represents the boundary conditions (BCs) expressed with an operator $\pazocal{B}_{\lambda}[\cdot]$, and Eq.~\eqref{eq:param:pde:c} represents the initial conditions (ICs) expressed with a function $u_{0, \lambda}$.
In Eqs.~\eqref{eq:param:pde:a}-\eqref{eq:param:pde:c}, $\lambda$ represents the parametrization of the problem and is considered as shared among the operators $\pazocal{F}_{\lambda}$, $\pazocal{B}_{\lambda}$ and the function $u_{0, \lambda}$.
Furthermore, in practice Eqs.~\eqref{eq:param:pde:a}-\eqref{eq:param:pde:c} are often given in the form of collected data; i.e., they are only specified on discrete sets of locations that are subsets of $[0, T] \times \Omega$, $[0, T] \times \partial \Omega$ and $\Omega$, respectively.
In this regard, one problem scenario pertains to fixing the model parameters $\lambda$ and aiming to obtain the solution $u(t, x)$ for every $(t, x) \in [0, T] \times \Omega$.
This problem is henceforth referred to as solving the PDE or as forward problem.
Another problem scenario pertains to having observations from the solution $u(t, x)$ and aiming to obtain the parameters $\lambda$ that best describe the data.
This problem is henceforth referred to as discovering the PDE or as inverse problem.
PINNs were proposed in \cite{raissi2019physicsinformed} for addressing both problem scenarios.
In the case of a forward problem, the solution $u$ is represented with a NN $\hat{u}$, which is trained such that a composite objective comprised of the strong-form (PDE), boundary and initial residuals corresponding to Eqs.~\eqref{eq:param:pde:a}-\eqref{eq:param:pde:c}, respectively, on a discrete set of points is minimized.
For addressing the inverse problem with the PINNs solution technique of \cite{raissi2019physicsinformed}, an additional term corresponding to the solution $u$ data misfit is added to the composite objective and $\hat{u}$ is trained simultaneously with $\lambda$.
Equivalently, we can view PINNs from a data-driven perspective.
By defining $f(t, x)$ as the output of the left-hand side of Eq.~\eqref{eq:param:pde} for an arbitrary function $u(t, x)$, i.e.,
\begin{equation}\label{}
f(t, x) := \pazocal{F}_{\lambda}[u](t, x),
\end{equation}
the operator $\pazocal{F}_{\lambda}: u \mapsto f$ can be construed as a map from $V_u$, the set of all admissible functions $u$ to $V_f$, the image of $V_u$ under $\pazocal{F}_{\lambda}$.
In this context, the function $u$ that satisfies Eq.~\eqref{eq:param:pde:a} corresponds to a mapped function $f \in V_f$ that is zero for every $(t, x) \in [0, T] \times \Omega$.
As a result, satisfying Eq.~\eqref{eq:param:pde:a} is equivalent to having observations from this zero-outputting $f$ at every $(t, x) \in [0, T] \times \Omega$ (or in a subset of it).
Furthermore, satisfying Eq.~\eqref{eq:param:pde:b} is equivalent to having observations in $[0, T] \times \partial\Omega$ from a zero-outputting $b$ function defined as
\begin{equation}\label{}
b(t, x) := \pazocal{B}_{\lambda}[u](t, x)
\end{equation}
and satisfying Eq.~\eqref{eq:param:pde:c} to having observations in $\{t = 0\} \times \Omega$ from the solution $u$.
Even if Eqs.~\eqref{eq:param:pde:a}-\eqref{eq:param:pde:c} are defined analytically for every point of the domains $[0, T] \times \Omega$, $[0, T] \times \partial \Omega$ and $\Omega$, respectively, a finite dataset is considered in practice.
In this context, PINNs address the forward problem by (a) constructing three NN approximators $\hat{u}$, $\hat{f}$, and $\hat{b}$ that are connected via both the operators $\pazocal{F}_{\lambda}$ and $\pazocal{B}_{\lambda}$ and parameter sharing, i.e., $\hat{f}_{\theta, \lambda} = \pazocal{F}_{\lambda}[\hat{u}_{\theta}]$ and $\hat{b}_{\theta, \lambda} = \pazocal{B}_{\lambda}[\hat{u}_{\theta}]$, where $\theta$ are the shared parameters of the NN approximators, and (b) by training $\hat{u}$, $\hat{f}$, and $\hat{b}$ (simultaneously because of parameter sharing) to fit the complete dataset.
For the inverse problem, in which observations of the solution $u$ for times $t$ other than zero are also available, the same three connected approximators $\hat{u}$, $\hat{f}$, and $\hat{b}$ are constructed, but the additional constraint of fitting the $u$ observations is also included for learning $\lambda$ too.
Overall, learning the approximators $\hat{u}$, $\hat{f}$, and $\hat{b}$ (and, potentially, $\lambda$ too) takes the form of a minimization problem expressed as
\begin{equation}\label{eq:pinns:loss:1}
\min_{\theta \ (, \ \lambda)} \pazocal{L}_f(\theta, \lambda) +
\pazocal{L}_b(\theta, \lambda)
+ \pazocal{L}_{u_0}(\theta, \lambda)
+ \pazocal{L}_u(\theta),
\end{equation}
where $\pazocal{L}_f$ represents the loss related to the $f$ data, i.e., the physics of Eq.~\eqref{eq:param:pde:a}, $\pazocal{L}_b(\theta, \lambda)$ and $\pazocal{L}_{u_0}(\theta, \lambda)$ the losses related to the BCs and ICs, respectively, and $\pazocal{L}_u$ the loss related to the $u$ data.
Next, considering $\{N_f, N_b, N_{u_0}, N_u\}$ datapoints, the terms $\{\pazocal{L}_f, \pazocal{L}_b, \pazocal{L}_{u_0}, \pazocal{L}_u\}$ in Eq.~\eqref{eq:pinns:loss:1} expressed as the weighted average dataset errors become
\begin{subequations}\label{eq:pinns:loss:2}
\begin{align}
\pazocal{L}_f &= \frac{w_f}{N_f}\sum_{i=1}^{N_f}\ell(\hat{f}_{\theta, \lambda}(t_i, x_i), 0) \label{eq:pinns:loss:2:a}\\
\pazocal{L}_b & = \frac{w_b}{N_b}\sum_{i=1}^{N_b}\ell(\hat{b}_{\theta, \lambda}(t_i, x_i), 0) \label{eq:pinns:loss:2:b}\\
\pazocal{L}_{u_0} & = \frac{w_{u_0}}{N_{u_0}}\sum_{i=1}^{N_{u_0}}\ell(\hat{u}_{\theta}(0, x_i), u_{0, \lambda}(x_i)) \label{eq:pinns:loss:2:c} \\
\pazocal{L}_u & = \frac{w_u}{N_u}\sum_{i=1}^{N_u}\ell(\hat{u}_{\theta}(t_i, x_i), u(t_i, x_i)). \label{eq:pinns:loss:2:d}
\end{align}
\end{subequations}
In Eq.~\eqref{eq:pinns:loss:2}, $\ell(prediction, target)$, with $\ell: \mathbb{R}^{D_u} \times \mathbb{R}^{D_u} \to \mathbb{R}_{\geq 0}$, is a loss function that takes as input the NN prediction at a domain point and the target value, and outputs the corresponding loss; $\hat{u}_{\theta}(t_i, x_i)$, $\hat{f}_{\theta, \lambda}(t_i, x_i)$, $\hat{b}_{\theta, \lambda}(t_i, x_i)$ denote the NN predictions $\hat{u}_{\theta}$ and $\hat{f}_{\theta, \lambda} = \pazocal{F}_{\lambda}[\hat{u}_{\theta}]$, $\hat{b}_{\theta, \lambda} = \pazocal{B}_{\lambda}[\hat{u}_{\theta}]$ evaluated at the $i^{th}$ domain point $(t_i, x_i)$, respectively; $u_{0,\lambda}(x_i)$ denotes the $i^{th}$ target value corresponding to the ICs function $u_{0,\lambda}$; and $u(t_i, x_i)$ denotes the $i^{th}$ target value corresponding to the solution $u$ (if available).
Clearly, the point sets $\{t_i, x_i\}_{i=1}^{N_{f}}$, $\{t_i, x_i\}_{i=1}^{N_{b}}$, $\{t_i, x_i\}_{i=1}^{N_{u_0}}$, and $\{t_i, x_i\}_{i=1}^{N_{u}}$ represent domain points at different locations, although the same symbol $(t_i, x_i)$ has been used for all of them, for notation simplicity.
Note that optimal weights $\{w_f, w_b, w_{u_0}, w_u\}$ to be used in Eq.~\eqref{eq:pinns:loss:1} are not known a priori and are often set in practice as equal to one or obtained via trial and error; see \cite{wang2020understanding} for discussion and an adaptive method for addressing this issue.
Finally, by considering as $\ell$ the squared $\ell_2$-norm of the discrepancy between predictions and targets, i.e., MSE if it is averaged over the dataset, Eq.~\eqref{eq:pinns:loss:2} reduces to
\begin{subequations}\label{eq:pinns:loss:3}
\begin{align}
\pazocal{L}_f &= \frac{w_f}{N_f}\sum_{i=1}^{N_f}
||\hat{f}_{\theta, \lambda}(t_i, x_i)||_2^2 \label{eq:pinns:loss:3:a}\\
\pazocal{L}_b & = \frac{w_b}{N_b}\sum_{i=1}^{N_b}
||\hat{b}_{\theta, \lambda}(t_i, x_i)||_2^2 \label{eq:pinns:loss:3:b}\\
\pazocal{L}_{u_0} & = \frac{w_{u_0}}{N_{u_0}}\sum_{i=1}^{N_{u_0}}
||\hat{u}_{\theta}(0, x_i) - u_{0, \lambda}(x_i)||_2^2 \label{eq:pinns:loss:3:c} \\
\pazocal{L}_u & = \frac{w_u}{N_u}\sum_{i=1}^{N_u}
||\hat{u}_{\theta}(t_i, x_i) - u(t_i, x_i)||_2^2. \label{eq:pinns:loss:3:d}
\end{align}
\end{subequations}
We note that the terms MSE and squared $\ell_2$-norm of the discrepancy are used interchangeably in this paper depending on the context.
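To make the above concrete, the following minimal PyTorch sketch assembles the loss terms of Eq.~\eqref{eq:pinns:loss:3} for an illustrative forward problem (a toy heat equation $u_t - u_{xx} = 0$ on $[0,1]\times[0,1]$ with homogeneous Dirichlet BCs and a sine IC); the network size, point counts, and the specific PDE are arbitrary choices made here for illustration and are not prescribed in this paper.
\begin{verbatim}
import math
import torch

# u_hat: NN approximator of the solution (sizes are illustrative)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def u_hat(t, x):
    return net(torch.cat([t, x], dim=1))

def f_hat(t, x):
    # f_hat = F_lambda[u_hat]: here the toy residual u_t - u_xx,
    # obtained by automatic differentiation of u_hat
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = u_hat(t, x)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x),
                               create_graph=True)[0]
    return u_t - u_xx

# Training points for the residual (f), boundary (b) and initial (u0) terms
t_f, x_f = torch.rand(100, 1), torch.rand(100, 1)
t_b = torch.rand(20, 1)
x_b = torch.randint(0, 2, (20, 1)).float()     # x on the boundary {0, 1}
x_0 = torch.rand(20, 1)
u_0 = torch.sin(math.pi * x_0)                 # illustrative IC

# MSE-type loss terms of Eq. (pinns:loss:3) with unit weights
L_f = f_hat(t_f, x_f).pow(2).mean()
L_b = u_hat(t_b, x_b).pow(2).mean()            # homogeneous Dirichlet BCs
L_u0 = (u_hat(torch.zeros_like(x_0), x_0) - u_0).pow(2).mean()
loss = L_f + L_b + L_u0
\end{verbatim}
Note that, as described above, $\hat{f}_{\theta, \lambda}$ is not constructed as a separate network; it is obtained from $\hat{u}_{\theta}$ through automatic differentiation and parameter sharing.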
\subsection{Loss functions in neural network training}\label{sec:prelim:loss}
This section serves as a brief summary of standard NN training and as an introduction to the loss function meta-learning presented in Section~\ref{sec:meta}.
As described in Section~\ref{sec:prelim:pinns}, addressing forward and inverse PDE problems with PINNs requires solving the minimization problem of Eq.~\eqref{eq:pinns:loss:1} with a composite objective function comprised of the loss terms of Eq.~\eqref{eq:pinns:loss:2}.
All of the functionals $\pazocal{L}_f(\theta, \lambda)$, $\pazocal{L}_b(\theta, \lambda)$, $\pazocal{L}_{u_0}(\theta, \lambda)$ and $\pazocal{L}_u(\theta)$ in Eq.~\eqref{eq:pinns:loss:2} are given as the average discrepancies over $f$, $b$, $u_0$ and $u$ data, respectively, and the total loss $\pazocal{L}$ is expressed as the weighted sum of the individual terms.
Therefore, we can study in this section, without loss of generality, each part of the sum separately by only considering the objective function
\begin{equation}\label{eq:gen:loss}
\pazocal{L}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\ell(\hat{u}_{\theta}(t_i, x_i), u(t_i, x_i)),
\end{equation}
where $\hat{u}_{\theta}(t_i, x_i)$ denotes the prediction value for each $(t_i, x_i)$ (i.e., either $\hat{f}_{\theta, \lambda}(t_i, x_i)$, $\hat{b}_{\theta, \lambda}(t_i, x_i)$, $\hat{u}_{\theta}(0, x_i)$, or $\hat{u}_{\theta}(t_i, x_i)$ in Eq.~\eqref{eq:pinns:loss:2}) and $u(t_i, x_i)$ denotes the target (i.e., either $0$, $u_{0, \lambda}(x_i)$, or $u(t_i, x_i)$ in Eq.~\eqref{eq:pinns:loss:2}).
For each $(t, x)$ in Eq.~\eqref{eq:param:pde}, $u(t, x)$ as well as $\pazocal{F}_\lambda[u](t,x)$ and $\pazocal{B}_\lambda[u](t,x)$ belong to $\mathbb{R}^{D_u}$; thus, $\hat{u}_{\theta}(t_i, x_i)$ and $u(t_i, x_i)$ in Eq.~\eqref{eq:gen:loss} are also considered $D_u$-dimensional.
Furthermore, $N$ represents the size of the corresponding dataset, i.e., $N_f$, $N_b$, $N_{u_0}$, or $N_u$.
An optimization technique that uses only first-order information of the objective function is (stochastic) gradient descent, typically denoted as SGD whether or not stochastic gradients are used, i.e., whether or not mini-batches of the data in Eq.~\eqref{eq:gen:loss} are considered.
Using SGD, the NN parameters $\theta$ are updated based on
\begin{equation}
\theta \leftarrow \theta - \epsilon \nabla_{\theta}\pazocal{L},
\end{equation}
where $\epsilon$ is the learning rate,
\begin{equation}\label{eq:theta:grad}
\nabla_{\theta}\pazocal{L} = \frac{\partial \pazocal{L}}{\partial \theta} = \left[\frac{\partial \pazocal{L}}{\partial \theta_1}, \dots, \frac{\partial \pazocal{L}}{\partial \theta_{D_{\theta}}}\right]
\end{equation}
is the gradient of $\pazocal{L}$ with respect to $\theta$, and $D_{\theta}$ is the number of NN parameters (weights and biases).
Employing the chain rule for each $\frac{\partial \pazocal{L}}{\partial \theta_j}$, $j \in \{1,\dots,D_{\theta}\}$, Eq.~\eqref{eq:theta:grad} becomes
\begin{equation}\label{eq:theta:grad:2}
\nabla_{\theta}\pazocal{L} = \left[\left.\frac{1}{N}\sum_{i=1}^{N}\nabla_{q}\ell(q, u(t_i, x_i))\right\vert_{q=\hat{u}_{\theta}(t_i, x_i)}\right]J_{\hat{u}_{\theta}, \theta},
\end{equation}
where $\nabla_{q}\ell(q, u(t_i, x_i)) \in \mathbb{R}^{1 \times D_u}$ is the gradient of the scalar output $\ell(q, u(t_i, x_i))$ with respect to the prediction $q=\hat{u}_{\theta}(t_i, x_i)$.
Furthermore, $J_{\hat{u}_{\theta}, \theta} \in \mathbb{R}^{D_u \times D_{\theta}}$ is the Jacobian matrix
\begin{equation}
J_{\hat{u}_{\theta}, \theta} = \left[\nabla_{\theta}(\hat{u}_{\theta, 1}),\dots,\nabla_{\theta}(\hat{u}_{\theta, D_u})\right]^\top
\end{equation}
of the NN transformation $\theta \mapsto \hat{u}_{\theta}$.
If $\hat{u}_{\theta}$ is one-dimensional, Eq.~\eqref{eq:theta:grad:2} reduces to
\begin{equation}\label{eq:theta:grad:3}
\nabla_{\theta}\pazocal{L} = \left[\left.\frac{1}{N}\sum_{i=1}^{N}\frac{\partial \ell(q, u(t_i, x_i))}{\partial q}\right\vert_{q=\hat{u}_{\theta}(t_i, x_i)}\right]\nabla_{\theta}\hat{u}_{\theta}
\end{equation}
and if, in addition, $\ell$ is the squared $\ell_2$-norm, to
\begin{equation}\label{eq:theta:grad:4}
\nabla_{\theta}\pazocal{L} = \left[\frac{1}{N}\sum_{i=1}^{N}2(\hat{u}_{\theta}(t_i, x_i)-u(t_i, x_i))\right]\nabla_{\theta}\hat{u}_{\theta}.
\end{equation}
The term enclosed in brackets in Eqs.~\eqref{eq:theta:grad:2}-\eqref{eq:theta:grad:4}, and more specifically the loss function $\ell$, controls how the objective function behaves for increasing discrepancies from the target.
If $\hat{u}_{\theta}$ is multi-dimensional, the loss function through $\left.\nabla_{q}\ell(q, u(t_i, x_i))\right\vert_{q=\hat{u}_{\theta}(t_i, x_i)}$ also controls how the discrepancy in each dimension affects the final gradient $\nabla_{\theta}\pazocal{L}$; note that each component of $\nabla_{\theta}\pazocal{L}$ is an inner product between the term inside brackets and $\frac{\partial \hat{u}_{\theta}}{\partial \theta_j}$, $j \in \{1,\dots,D_{\theta}\}$.
For instance, by using the squared $\ell_2$-norm as $\ell$, which is given as the sum of squared discrepancies across dimensions, all dimensions are treated uniformly; see Appendix~\ref{app:meta:design:multi} regarding how a parametrized loss function for multi-dimensional inputs can be constructed.
For other standard first-order optimization algorithms, such as AdaGrad and Adam, the parameter $\theta$ updates depend not only on the current iteration gradient $\nabla_{\theta}\pazocal{L}$ of Eqs.~\eqref{eq:theta:grad:2}-\eqref{eq:theta:grad:4}, but also on the $\theta$ updates history.
For standard second-order algorithms, such as Newton's method and BFGS, $\theta$ updates depend not only on $\nabla_{\theta}\pazocal{L}$, but also on the Hessian or the approximate Hessian, respectively, of $\pazocal{L}$ with respect to $\theta$.
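As a small illustration of the role of $\ell$ in the update, the following sketch performs one SGD step on the objective of Eq.~\eqref{eq:gen:loss} with a swappable pointwise loss; only the bracketed term of Eqs.~\eqref{eq:theta:grad:2}-\eqref{eq:theta:grad:4} changes when $\ell$ changes (the tiny linear model and random data are placeholders).
\begin{verbatim}
import torch

torch.manual_seed(0)
model = torch.nn.Linear(1, 1)                 # placeholder for u_hat
xs, targets = torch.randn(16, 1), torch.randn(16, 1)

def ell_mse(q, u):   # squared discrepancy: bracketed term is 2*(q - u)
    return (q - u).pow(2)

def ell_mae(q, u):   # absolute discrepancy: bracketed term is sign(q - u)
    return (q - u).abs()

def sgd_step(ell, eps=1e-2):
    L = ell(model(xs), targets).mean()        # Eq. (gen:loss)
    grads = torch.autograd.grad(L, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= eps * g
    return L.item()

print(sgd_step(ell_mse), sgd_step(ell_mae))
\end{verbatim}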
\section{Meta-learning loss functions for PINNs}\label{sec:meta}
\subsection{Defining PDE task distributions}\label{sec:meta:dist}
In this section, we consider only the forward problem scenario as defined in Eq.~\eqref{eq:param:pde}.
As explained in Section~\ref{sec:prelim:pinns}, solving Eq.~\eqref{eq:param:pde} with PINNs for a value of $\lambda$ requires learning the PINN parameters $\theta$ by solving the minimization problem of Eq.~\eqref{eq:pinns:loss:1}.
Overall, a NN is constructed, an optimization strategy is selected and the optimization algorithm is run until some convergence criterion is met.
However, all of the above steps, from defining the optimization problem to solving it, require the user to make certain design choices, which
affect the overall training procedure and the final approximation accuracy.
For example, they affect the convergence rate of the optimizer as well as the training and test error at the obtained minimum.
For simplicity, all these aspects of training that depend on the selected hyperparameters are henceforth collectively called \textit{performance or efficiency} of the selected hyperparameters and of the training in general.
For example, we may refer to a set of hyperparameters as being \textit{more efficient} than another set.
Indicatively, the PINN architecture and activation function, the optimization algorithm, and the number of collocation points (points at which the PDE residual is evaluated) correspond to hyperparameters that must be tuned/selected a priori or be optimized in an online manner (see, e.g., \cite{jagtap2020adaptive} for adaptive activation functions).
In this regard, it is standard practice to experiment with many different hyperparameter settings by performing a few iterations of the optimization algorithm and by evaluating performance based on validation error; i.e., to perform hold-out validation.
In the context of PINNs, the validation error can be computed over collocation points not used in training or testing.
After performing this trial-and-error procedure, the problem can then be fully solved; i.e., the optimization is run until convergence.
For solving a novel, different PDE of the form of Eq.~\eqref{eq:param:pde}, hold-out validation is either repeated from scratch, or search is limited to a tight range of hyperparameter settings, depending on the experience of the user with the novel parameter $\lambda$ in Eq.~\eqref{eq:param:pde}.
Thus, it becomes clear that solving novel problems more efficiently requires:
\begin{enumerate}[(a)]
\item To define families of related PDEs such that hyperparameters selected for solving one member of the family are expected to perform well also for other members.
\item To effectively utilize information acquired from solving a few representative members of the family in order to solve other members efficiently.
\end{enumerate}
Families of related machine learning problems are typically called task distributions in the literature; see, e.g., \cite{hospedales2020metalearning}.
For example, approximating the function $y = \sin(x + \pi)$ is related to approximating $y = \sin(x)$, in the sense that a hyperparameter setting found efficient for the former problem can be used as is, or with minimal modifications, for solving the latter one efficiently.
Therefore, a task distribution can, indicatively, be defined by functions of the form $y = \sin(x + \lambda)$, with $\lambda$ drawn uniformly from $[-\pi, \pi]$.
More generally, it is assumed that tasks are parametrized by $\lambda$, which is drawn from some distribution $p(\lambda)$.
In addition, each $\lambda$ defines a learning task, which can be shown, experimentally or theoretically, to be related to other learning tasks drawn from $p(\lambda)$.
Note that although the discussion up to this point has been motivated by the burden of repetitive hyperparameter tuning even for related problems, meta-learning has a rich range of applications that goes well beyond selecting NN architectures, learning rates and activation functions.
For example, optimizing the loss function or selecting a NN initialization shared among tasks are not generally considered hyperparameter tuning.
It is useful, however, to interpret all the different factors that affect the NN optimization procedure and the final performance as hyperparameters, some of which we try to optimize and some of which we fix.
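As a concrete (and deliberately simple) instance of the above, the following sketch samples tasks from the $y = \sin(x + \lambda)$ distribution with $\lambda$ drawn uniformly from $[-\pi, \pi]$; the dataset sizes are arbitrary illustrative choices.
\begin{verbatim}
import math
import torch

def sample_task(n_train=64, n_val=256):
    # One task = one value of lambda plus task-specific train/validation data
    lam = (2 * torch.rand(1) - 1) * math.pi      # lambda ~ U[-pi, pi]
    def make(n):
        x = (2 * torch.rand(n, 1) - 1) * math.pi
        return x, torch.sin(x + lam)
    return {"lambda": lam, "train": make(n_train), "val": make(n_val)}

tasks = [sample_task() for _ in range(4)]        # a small task set Lambda
\end{verbatim}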
\subsection{Meta-learning as a bi-level minimization problem}\label{sec:meta:bilevel}
As explained in Section~\ref{sec:meta:dist}, a task can be defined as a fixed PDE problem associated with a parameter $\lambda$ that is drawn from a pre-specified task distribution $p(\lambda)$.
Furthermore, in conjunction with task-specific training data, a task is solved with PINNs until convergence via Eqs.~\eqref{eq:pinns:loss:1}-\eqref{eq:pinns:loss:3}.
Final accuracy or overall performance of the training procedure can, indicatively, be evaluated based on final validation data error after training (e.g., PDE residual on a large set of points in the domain).
Next, given a task distribution defined via $p(\lambda)$ and a set of optimizable hyperparameters $\eta$, meta-learning seeks the optimal setting of $\eta$, where optimality is defined in terms of average performance across tasks drawn from $p(\lambda)$.
For example, if the performance of PINN training is evaluated based on the $\ell_2$-norm of the PDE residual on validation points, the average performance of $\eta$ refers to the final $\ell_2$ error averaged across $\lambda$ values.
Following \cite{hospedales2020metalearning} and considering a finite set $\Lambda = \{\lambda_{\tau}\}_{\tau = 1}^{T}$ of $\lambda \sim p(\lambda)$ values, meta-learning is commonly formalized as a bi-level minimization problem expressed as
\begin{subequations}\label{eq:bilevel}
\begin{align}
\min_{\eta} \pazocal{L}_O(\theta^*(\eta), \eta) \label{eq:bilevel:a}\\
\text{s.t. } \theta^*(\eta) = \{\theta^*_{\tau}(\eta)\}_{\tau = 1}^{T} \label{eq:bilevel:b}\\
\text{with } \theta^*_{\tau}(\eta) = \argmin_{\theta} \pazocal{L}_{\tau}(\theta, \eta), \label{eq:bilevel:c}
\end{align}
\end{subequations}
where Eq.~\eqref{eq:bilevel:a} is referred to as outer optimization, whereas Eqs.~\eqref{eq:bilevel:b}-\eqref{eq:bilevel:c} as inner optimization.
In Eq.~\eqref{eq:bilevel}, $\theta^*(\eta) = \{\theta^*_{\tau}(\eta)\}_{\tau = 1}^{T}$ consists of the final NN parameters $\theta^*$ obtained after solving all tasks in the set $\Lambda$ using hyperparameters $\eta$, and $\pazocal{L}_{\tau}(\theta, \eta)$, which is given by Eqs.~\eqref{eq:pinns:loss:1}-\eqref{eq:pinns:loss:3}, is called inner- or base-objective.
The subscript $\tau$ and the arguments $\eta$, $\theta$, and $\theta^*$ in Eq.~\eqref{eq:bilevel} are used to signify task-dependent quantities as well as dependencies on (optimizable) $\eta$ and $\theta$, and (optimum) $\theta^*_{\tau}(\eta)$, respectively.
Furthermore, $\pazocal{L}_O(\theta^*(\eta), \eta)$, which is called outer- or meta-objective and pertains to the ultimate goal of meta-learning, is commonly considered to be the average performance of $\eta$.
For example, if validation error is measured based on final $\ell_2$-norm of PDE residual on a validation set of size $N_{f, val}$, the outer-objective $\pazocal{L}_O(\theta^*(\eta), \eta)$ can be expressed as
\begin{equation}\label{}
\pazocal{L}_O(\theta^*(\eta), \eta) = \mathbb{E}_{\lambda_{\tau} \in \Lambda}\left[\frac{1}{N_{f, val}}\sum_{i=1}^{N_{f, val}}||\hat{f}_{\theta^*_{\tau}, \lambda_{\tau}}(t_i, x_i)||_2^2\right],
\end{equation}
where $N_{f, val}$ and domain points $\{(t_i, x_i)\}_{i=1}^{N_{f, val}}$ are considered the same for all tasks for notation simplicity and without loss of generality.
For more design choices regarding outer-objective $\pazocal{L}_O$ see Section~\ref{sec:meta:lossml} and \cite{hospedales2020metalearning}.
Next, for addressing the bi-level minimization problem of Eq.~\eqref{eq:bilevel}, an alternating approach between outer and inner optimization can be considered, which in practice can be achieved by performing one or only a few steps for each optimization.
Three main families of techniques exist (\cite{hospedales2020metalearning}): gradient-based, reinforcement-learning-based, and evolutionary meta-learning.
Discussions in this paper are limited to gradient-based approaches.
Although inner optimization corresponds to standard PINN training of Eqs.~\eqref{eq:pinns:loss:1}-\eqref{eq:pinns:loss:3}, gradient-based outer optimization requires computing the total derivative
\begin{equation}\label{eq:many:steps:derivative}
\boldsymbol{d}_{\eta}\pazocal{L}_O(\theta^*(\eta), \eta) = \nabla_{\eta} \pazocal{L}_O +\left[\nabla_{\theta^*}\pazocal{L}_O\right] J_{\theta^*(\eta), \eta},
\end{equation}
where $\boldsymbol{d}_{\eta}$ represents total derivative with respect to $\eta$, $\nabla_{\eta}$ and $\nabla_{\theta^*}$ here represent partial derivatives with respect to $\eta$ and $\theta^*(\eta)$, respectively, and $J_{\theta^*(\eta), \eta}$ is the Jacobian matrix of the transformation from $\eta$ to $\theta^*(\eta)$.
The first term on the right-hand side of Eq.~\eqref{eq:many:steps:derivative} refers to direct dependence of $\pazocal{L}_O$ on $\eta$ (e.g., as is the case for neural architecture search; see \cite{liu2019darts}), whereas the second term refers to dependence of $\pazocal{L}_O$ on $\eta$ through the obtained optimal $\theta^*(\eta)$.
Because $\theta^*(\eta)$ is the result of a number of inner optimization steps, obtaining the Jacobian $J_{\theta^*(\eta), \eta}$ via chain rule is often referred to as \textit{differentiating over the optimization path}.
In this regard, see Section~\ref{sec:meta:design:inner} for addressing the exploding gradients pathology arising because of path differentiation in loss function meta-learning as well as \cite{nichol2018firstorder,rajeswaran2019metalearning,lorraine2020optimizing} on approximate ways of computing Eq.~\eqref{eq:many:steps:derivative}.
Overall, the computation of Eq.~\eqref{eq:many:steps:derivative} can be performed by designing for this purpose automatic differentiation algorithms; see \cite{grefenstette2019generalized} for more information and open-source code.
Of course, for utilizing Eq.~\eqref{eq:many:steps:derivative} the outer-objective $\pazocal{L}_O$ and the inner optimization steps must be differentiable.
For instance, a single SGD step given as $\theta \leftarrow \theta - \epsilon \nabla_{\theta}\pazocal{L}(\theta,\eta)$, where $\epsilon$ is the learning rate, is differentiable with respect to $\eta$ if $\pazocal{L}$ is also differentiable with respect to $\eta$, meaning that the Jacobian in Eq.~\eqref{eq:many:steps:derivative} can be computed.
The same holds for multiple SGD steps as well as for other optimization algorithms, such as AdaGrad and Adam.
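The following sketch illustrates this differentiability for a single SGD step with a toy one-parameter loss $\ell_{\eta}(q, u) = |q - u|^{\eta}$; the linear predictor and this particular $\ell_{\eta}$ are illustrative stand-ins and not parametrizations used later in this paper.
\begin{verbatim}
import torch

torch.manual_seed(0)
theta = torch.randn(3, requires_grad=True)     # toy "NN parameters"
eta = torch.tensor(2.0, requires_grad=True)    # toy loss hyperparameter
x, u = torch.randn(8, 3), torch.randn(8)

def inner_loss(theta, eta):
    q = x @ theta                              # toy linear predictor
    return ((q - u).abs() + 1e-8).pow(eta).mean()

eps = 1e-2
g = torch.autograd.grad(inner_loss(theta, eta), theta, create_graph=True)[0]
theta_new = theta - eps * g                    # one differentiable SGD step

# Probe of the Jacobian J_{theta*(eta), eta}: derivative with respect to eta
# of the sum of the updated parameters
jac_probe = torch.autograd.grad(theta_new.sum(), eta)[0]
\end{verbatim}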
Algorithm~\ref{algo:general} is a general gradient-based meta-learning algorithm for arbitrary, admissible $\eta$.
The input to the meta-learning algorithm includes the number of outer and inner iterations, $I$ and $J$, respectively, and the outer and inner learning rates, $\epsilon_1$, $\epsilon_2$, respectively; see Appendix~\ref{app:meta:design:other} for stopping criteria.
Furthermore, the algorithm input includes the task distribution $p(\lambda)$ to be used for resampling during training and the number of tasks $T$.
As shown in Algorithm~\ref{algo:general}, the task set $\Lambda$ is not required to remain the same during optimization.
Optimizing $\eta$ using a meta-learning algorithm such as Algorithm~\ref{algo:general} is typically called \textit{meta-training}, and using the obtained $\eta$ for solving unseen tasks from the task distribution is called \textit{meta-testing}.
\begin{algorithm}[H]
\label{algo:general}
\SetAlgoLined
\textbf{input:} $\epsilon_1$, $\epsilon_2$, $I$, $T$, $J$, and task distribution $p(\lambda)$
initialize $\eta$ with $\eta^{(0)}$
\For{$i \in \{1,\dots, I\}$}{
sample set $\Lambda$ of $T$ tasks from $p(\lambda)$
\For{$\tau \in \{1,\dots, T\}$}{
initialize $\theta_{\tau}$ with $\theta_{\tau}^{(0)}$
\For{$j \in \{1,\dots, J\}$}{
\Comment*[f]{Inner step for each task}
$\theta_{\tau}^{(j)} = \theta_{\tau}^{(j-1)} - \epsilon_2 \nabla_{\theta}\pazocal{L}_{\tau}(\theta,\eta)\bigg\rvert_{\theta = \theta_{\tau}^{(j-1)}, \ \eta = \eta^{(i-1)}}$
\Comment*[f]{PINN step}
\Comment*[f]{$\pazocal{L}_{\tau}$ given by Eq.~\eqref{eq:lossml:inner:loss}}
}
set $\theta^{*(i)}_{\tau} = \theta_{\tau}^{(J)}$
}
$\eta^{(i)} = \eta^{(i-1)} - \epsilon_1 \boldsymbol{d}_{\eta}\pazocal{L}_O(\theta^*(\eta),\eta)\bigg\rvert_{\theta^* = \{\theta^{*(i)}_{\tau}\}_{\tau = 1}^{T}, \ \eta = \eta^{(i-1)}}$
\Comment*[f]{Outer step}
\Comment*[f]{$\boldsymbol{d}_{\eta}\pazocal{L}_O$ given by Eqs.~\eqref{eq:many:steps:derivative} and \eqref{eq:lossml:outer:loss}}
}
\textbf{return} $\eta^{(I)}$
\caption{General gradient-based meta-learning algorithm for PINNs}
\end{algorithm}
\subsection{An algorithm for meta-learning PINN loss functions}\label{sec:meta:lossml}
Meta-learning PINN loss functions by utilizing the concepts of Section~\ref{sec:meta:bilevel} requires defining an admissible hyperparameter $\eta$ that can be used in conjunction with Algorithm~\ref{algo:general}.
In this regard, a parametrization $\eta$ can be used for the loss function $\ell$ of Eq.~\eqref{eq:pinns:loss:2} for which $\pazocal{L}_O$ and the inner optimization steps are differentiable.
Indicatively, $\ell$ can be represented with a feed-forward NN (FFN) and thus $\eta$ represents the weights and biases of the FFN.
Such a parametrization in conjunction with Algorithm~\ref{algo:general} has been proposed in \cite{bechtle2020metalearning} for supervised learning and reinforcement learning problems.
Alternatively, $\ell$ can be the adaptive loss function proposed in \cite{barron2019general} (see Eqs.~\eqref{eq:barron:loss:1} and \eqref{eq:barron:loss:2}) and thus $\eta$ can be the robustness and scale parameters.
More discussion and options regarding loss function parametrization can be found in Section~\ref{sec:meta:design}.
Following parametrization, the PINN objective function for each task $\tau$ is the same as in standard PINNs training and given as
\begin{equation}\label{eq:lossml:inner:loss}
\pazocal{L}_{\tau}(\theta, \eta) = \pazocal{L}_f(\theta, \lambda_{\tau}) +
\pazocal{L}_b(\theta, \lambda_{\tau})
+ \pazocal{L}_{u_0}(\theta, \lambda_{\tau}),
\end{equation}
with the terms $\{\pazocal{L}_f, \pazocal{L}_b, \pazocal{L}_{u_0}\}$ given from Eq.~\eqref{eq:pinns:loss:2} and evaluated on the training datasets of sizes $\{N_{f}, N_{b}, N_{u_0}\}$ by using the parametrized loss function $\ell_{\eta}$ instead of a fixed $\ell$.
The objective function $\pazocal{L}_{\tau}(\theta, \eta)$ of Eq.~\eqref{eq:lossml:inner:loss} is optimized with respect to $\theta$ in the inner optimization step of Algorithm~\ref{algo:general}, while also tracking the optimization path dependence on $\eta$ (see \cite{grefenstette2019generalized}).
Next, the outer-objective $\pazocal{L}_O$, which is used for optimizing the learned loss $\ell_{\eta}$, can be defined as the MSE on validation data, i.e.,
\begin{equation}\label{eq:lossml:outer:loss}
\pazocal{L}_O(\theta^*(\eta), \eta) = \mathbb{E}_{\lambda_{\tau} \in \Lambda}\left[
\pazocal{L}_f(\theta^*_{\tau}, \lambda_{\tau}) +
\pazocal{L}_b(\theta^*_{\tau}, \lambda_{\tau})
+ \pazocal{L}_{u_0}(\theta^*_{\tau}, \lambda_{\tau})
\right],
\end{equation}
with the terms $\{\pazocal{L}_f, \pazocal{L}_b, \pazocal{L}_{u_0}\}$ given from Eq.~\eqref{eq:pinns:loss:2} and evaluated on the validation datasets of sizes $\{N_{f, val}, N_{b, val}, N_{u_0, val}, N_{u, val}\}$ by using the task parameters $\lambda_{\tau}$ and the optimal task-specific parameters $\theta^*_{\tau}$ for every $\tau$.
Clearly, Eq.~\eqref{eq:lossml:outer:loss} has the same form as Eq.~\eqref{eq:lossml:inner:loss}, except for the fact that (a) the number of validation points $\{N_{f, val}, N_{b, val}, N_{u_0, val}, N_{u, val}\}$ can be different from $\{N_{f}, N_{b}, N_{u_0}, N_{u}\}$, (b) MSE is used in Eq.~\eqref{eq:lossml:outer:loss} as a loss function, whereas $\ell_{\eta}$ is used in Eq.~\eqref{eq:lossml:inner:loss}, and (c) $\pazocal{L}_O$ is an average loss across tasks, whereas $\pazocal{L}_{\tau}$ is task-specific.
Optimizing $\ell_{\eta}$ utilizing Algorithm~\ref{algo:general} with $\pazocal{L}_O$ defined based on Eq.~\eqref{eq:lossml:outer:loss} aims to answer the following question: \textit{Is there a loss function $\ell_{\eta}$ which if used for PINN training (Eq.~\eqref{eq:lossml:inner:loss}) will perform better in terms of average across tasks MSE error (Eq.~\eqref{eq:lossml:outer:loss})?}
Note that in Eq.~\eqref{eq:lossml:outer:loss} additional data not used for inner training can also be utilized.
For example, we may have solution data $u_{\tau}$ corresponding to $\lambda_{\tau}$ values sampled from the task distribution $p(\lambda)$ (e.g., produced by traditional numerical solvers or measurements), which can be used in the outer optimization step.
By utilizing such additional data, the aforementioned question that loss function meta-learning aims to answer is augmented with the following: \textit{Is there a loss function $\ell_{\eta}$ that, in addition, leads to better solution $u$ performance error (Eq.~\eqref{eq:lossml:outer:loss}), although training has been performed based only on PDE residual and BCs/ICs data (Eq.~\eqref{eq:lossml:inner:loss})?}
Finally, see Section~\ref{sec:meta:design:properties} for more information regarding imposing additional properties to the loss function $\ell_{\eta}$ through penalties in Eq.~\eqref{eq:lossml:outer:loss}.
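A minimal, self-contained sketch of Algorithm~\ref{algo:general} is given below, with the inner objective of Eq.~\eqref{eq:lossml:inner:loss} replaced by a plain regression loss on the illustrative $y = \sin(x + \lambda)$ task distribution and the outer objective of Eq.~\eqref{eq:lossml:outer:loss} taken as the validation MSE. The learned loss is a small network acting on the discrepancy (a simplification of the parametrizations discussed in Section~\ref{sec:meta:design:param}), and the inner optimization path is differentiated by keeping the inner parameters as plain tensors and using \texttt{create\_graph=True}; all sizes and iteration counts are illustrative.
\begin{verbatim}
import math
import torch

torch.manual_seed(0)

def sample_task(n_train=64, n_val=64):
    lam = (2 * torch.rand(1) - 1) * math.pi          # lambda ~ U[-pi, pi]
    def make(n):
        x = (2 * torch.rand(n, 1) - 1) * math.pi
        return x, torch.sin(x + lam)
    return make(n_train), make(n_val)

# Learned loss ell_eta: small FFN on the discrepancy, nonnegative via softplus
loss_net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1), torch.nn.Softplus(),
)
outer_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)

def init_theta():
    # Functional two-layer tanh network, kept as plain tensors so that the
    # inner updates remain differentiable with respect to eta
    return [(0.5 * torch.randn(1, 32)).requires_grad_(True),
            torch.zeros(32, requires_grad=True),
            (0.5 * torch.randn(32, 1)).requires_grad_(True),
            torch.zeros(1, requires_grad=True)]

def forward(theta, x):
    W1, b1, W2, b2 = theta
    return torch.tanh(x @ W1 + b1) @ W2 + b2

I, T, J, eps_inner = 200, 4, 5, 1e-2
for _ in range(I):                                   # outer iterations
    outer_loss = 0.0
    for _ in range(T):                               # tasks drawn from p(lambda)
        (x_tr, u_tr), (x_val, u_val) = sample_task()
        theta = init_theta()
        for _ in range(J):                           # inner steps with ell_eta
            inner = loss_net(forward(theta, x_tr) - u_tr).mean()
            grads = torch.autograd.grad(inner, theta, create_graph=True)
            theta = [p - eps_inner * g for p, g in zip(theta, grads)]
        # Outer objective: MSE evaluated at the final inner parameters theta*
        outer_loss = outer_loss + (forward(theta, x_val) - u_val).pow(2).mean() / T
    outer_opt.zero_grad()
    outer_loss.backward()                            # differentiates the inner path
    torch.nn.utils.clip_grad_norm_(loss_net.parameters(), 1.0)
    outer_opt.step()
\end{verbatim}
Gradient clipping of the outer update is included here in anticipation of the discussion in Section~\ref{sec:meta:design:inner}.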
\subsection{Algorithm design and theory}\label{sec:meta:design}
\subsubsection{Loss function parametrization and initialization}
\label{sec:meta:design:param}
\paragraph{Adaptive loss function.}\label{sec:meta:design:param:LAL}
A loss function parametrization that can be used in conjunction with Algorithm~\ref{algo:general} has been proposed in \cite{barron2019general}.
In this regard, the one-dimensional loss function is parametrized by the shape parameter $\alpha \in \mathbb{R}$, which controls robustness to outliers (see Section~\ref{sec:prelim:loss}) and the scale parameter $c>0$ that controls the size of the quadratic bowl near zero.
Specifically, the loss function is expressed as
\begin{equation}\label{eq:barron:loss:1}
\rho_{\alpha,c}(d) = \frac{|\alpha - 2|}{\alpha}\left(\left(\frac{(d/c)^2}{|\alpha - 2|} + 1\right)^{\alpha/2} - 1\right),
\end{equation}
where $d$ denotes the discrepancy between each dimension of the prediction and the target; e.g., $d=\hat{u}_{\theta, j}(t,x)-u_j(t,x)$ or $d = \hat{f}_{\theta, \lambda, j}(t,x)$ for each $j\in\{1,\dots,D_u\}$ in PINNs.
For fixed values of $\alpha$, Eq.~\eqref{eq:barron:loss:1} yields known losses; see \cite{barron2019general} and Table~\ref{tab:losses}.
An extension to multi-dimensional inputs can be achieved via Eqs.~\eqref{eq:g:sum:loss}-\eqref{eq:g:sum:loss:2}.
Nevertheless, the loss function of Eq.~\eqref{eq:barron:loss:1} cannot be used directly as an adaptive loss function to be optimized in the online manner (i.e., simultaneously with NN parameters) proposed in \cite{barron2019general}.
Specifically, $\rho_{\alpha,c}(d)$ in Eq.~\eqref{eq:barron:loss:1} is monotonic with respect to $\alpha$, and thus attempting to optimize $\alpha$ by minimizing Eq.~\eqref{eq:barron:loss:1} trivially sets $\alpha$ to be as small as possible.
To address this issue, \cite{barron2019general} defined also the corresponding probability density function, i.e.,
\begin{equation}\label{eq:barron:pdf}
p_{\alpha,c}(d) = \frac{1}{c Z(\alpha)}\exp(-\rho_{\alpha,c}(d)),
\end{equation}
which is valid only for $\alpha \geq 0 $ as $Z(\alpha)$ is divergent for $\alpha <0$.
Furthermore, \cite{barron2019general} defined a loss function based on the negative log likelihood of $p_{\alpha,c}(d)$, i.e.,
\begin{equation}\label{eq:barron:loss:2}
\hat{\ell}_{\alpha,c}(d) = \log(c) + \log(Z(\alpha)) + \rho_{\alpha,c}(d),
\end{equation}
which is simply a shifted version of Eq.~\eqref{eq:barron:loss:1}.
Because the partition function $Z(\alpha)$ is difficult to evaluate and differentiate, $\log(Z(\alpha))$ is approximated with a cubic Hermite spline, which induces an added computational cost.
The loss function of Eq.~\eqref{eq:barron:loss:2} has been used in \cite{barron2019general} in conjunction with Eqs.~\eqref{eq:g:sum:loss}-\eqref{eq:g:sum:loss:2} as an adaptive loss function that is optimized online.
Specifically, either the same pair $(\alpha, c)$ is used for each dimension, i.e., Eq.~\eqref{eq:g:sum:loss} is employed with unit weights and $\eta = \{\alpha, c\}$, or a different pair is used per dimension, i.e., Eq.~\eqref{eq:g:sum:loss} with $\eta = \{\alpha_1, c_1,\dots, \alpha_{D_u}, c_{D_u}\}$ is employed.
Regarding implementation of the constraints $\alpha \geq 0$ and $c>0$, the parameters $\alpha$ and $c$ for every dimension (if applicable) can be expressed as
\begin{equation}\label{eq:barron:imple}
\begin{aligned}
\alpha &= (\alpha_{max}-\alpha_{min})\, \mathrm{sigmoid}(\hat{\alpha}) + \alpha_{min} \\
c &= \mathrm{softplus}(\hat{c}) + c_{min}
\end{aligned}
\end{equation}
and $\hat{\alpha}$, $\hat{c}$ are optimized simultaneously with the NN parameters.
In Eq.~\eqref{eq:barron:imple}, the sigmoid function limits $\alpha$ in the range $[\alpha_{min}, \alpha_{max}]$, whereas the softplus function constrains $c$ to being greater than $c_{min}$; e.g., $c_{min} = 10^{-8}$ for avoiding degenerate optima.
Note that Algorithm~\ref{algo:general} uses the outer objective $\pazocal{L}_O$ of Eq.~\eqref{eq:lossml:outer:loss} for optimizing the loss function, which is different from the inner objectives $\pazocal{L}_{\tau}$ of Eq.~\eqref{eq:lossml:inner:loss}.
As a result, using Eq.~\eqref{eq:barron:loss:1} as a loss function parametrization in Algorithm~\ref{algo:general} does not lead to trivial $\alpha$ solutions as in \cite{barron2019general}.
Thus, either Eq.~\eqref{eq:barron:loss:1} or Eq.~\eqref{eq:barron:loss:2} can be considered as loss function parametrizations for Algorithm~\ref{algo:general}.
In Section~\ref{sec:meta:design:properties}, we prove that the loss function of Eq.~\eqref{eq:barron:loss:1} automatically satisfies the specific conditions required for successful training according to our new theorems, without adding any regularization terms in meta-training.
In the present paper, the loss function parametrization of this section is referred to as \textit{LAL} when used as a meta-learning parametrization, and as \textit{OAL} when used as an online adaptive loss.
For initialization, we consider the values $\alpha = 2.01$ and $c = 1/\sqrt{2}$, which approximate the squared $\ell_2$-norm.
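A sketch of this parametrization with the constraints of Eq.~\eqref{eq:barron:imple} and the above initialization is given below; the range $[\alpha_{min}, \alpha_{max}] = [0, 4]$ and $c_{min} = 10^{-8}$ are illustrative choices.
\begin{verbatim}
import torch

alpha_min, alpha_max, c_min = 0.0, 4.0, 1e-8

def lal_loss(d, alpha_hat, c_hat):
    # alpha in [alpha_min, alpha_max] and c > c_min via Eq. (barron:imple);
    # alpha = 0 and alpha = 2 are special cases of Eq. (barron:loss:1) handled
    # separately in a careful implementation; the initialization below avoids them
    alpha = (alpha_max - alpha_min) * torch.sigmoid(alpha_hat) + alpha_min
    c = torch.nn.functional.softplus(c_hat) + c_min
    b = (alpha - 2.0).abs()
    return (b / alpha) * (((d / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Initialization approximating the squared l2-norm: alpha = 2.01, c = 1/sqrt(2)
alpha0, c0 = 2.01, 2.0 ** -0.5
a_frac = torch.tensor((alpha0 - alpha_min) / (alpha_max - alpha_min))
alpha_hat = torch.logit(a_frac).requires_grad_(True)
c_hat = torch.log(torch.expm1(torch.tensor(c0 - c_min))).requires_grad_(True)

d = torch.linspace(-2, 2, 5)
print(lal_loss(d, alpha_hat, c_hat))   # approximately d**2
\end{verbatim}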
\paragraph{NN-parametrized loss function.}\label{sec:meta:design:param:FFN}
An alternative parametrization based on FFN has been proposed in \cite{bechtle2020metalearning}.
Specifically, for the one-dimensional supervised learning regression problem considered in \cite{bechtle2020metalearning}, the most expressive representation $\ell_{\eta} = \hat{\ell}_{\eta}(\hat{u}_{\theta}, u)$ of Fig.~\ref{fig:gNN:higher} has been utilized.
In terms of parametrizing $\hat{\ell}_{\eta}$, $2$ hidden layers with $40$ neurons each, without biases and with ReLU activation functions, have been used, while the output is also passed through a softplus activation function for producing the final loss output.
Finally, the NN parameters are initialized using the Xavier uniform initializer.
In this regard, we note that ensuring positivity of the loss function does not affect NN parameter optimization; i.e., the softplus output activation function affects the results of \cite{bechtle2020metalearning} only through its nonlinearity and not by dictating positive loss outputs.
Furthermore, instead of randomly initializing NN parameters $\eta$, one can alternatively initialize them so that $\ell_{\eta}$ approximates a known loss function such as the squared $\ell_2$-norm.
For obtaining such an initialization, it suffices to perform even a few Adam iterations with synthetic data obtained by computing the $\ell_2$-norm of randomly sampled values in the considered domain; see computational examples in Section~\ref{sec:examples} for more information.
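A sketch of this parametrization and of the initialization that approximates the squared $\ell_2$-norm is given below; the optimizer settings, number of fitting iterations, and sampling range are illustrative choices.
\begin{verbatim}
import torch

torch.manual_seed(0)
# FFN loss: 2 hidden layers of 40 neurons, no biases, ReLU, softplus output
loss_net = torch.nn.Sequential(
    torch.nn.Linear(2, 40, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(40, 40, bias=False), torch.nn.ReLU(),
    torch.nn.Linear(40, 1, bias=False), torch.nn.Softplus(),
)
for m in loss_net:
    if isinstance(m, torch.nn.Linear):
        torch.nn.init.xavier_uniform_(m.weight)      # Xavier uniform initializer

def ell_eta(q, u):
    return loss_net(torch.cat([q, u], dim=-1))

# Initialize ell_eta to approximate (q - u)^2 using synthetic samples
opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)
for _ in range(1000):
    q = 4 * torch.rand(256, 1) - 2
    u = 4 * torch.rand(256, 1) - 2
    fit = ((ell_eta(q, u) - (q - u) ** 2) ** 2).mean()
    opt.zero_grad(); fit.backward(); opt.step()
\end{verbatim}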
\paragraph{Meta-learning the composite objective function weights.}\label{sec:meta:design:param:weights}
Finally, the composite objective function weights $\{w_f, w_b, w_{u_0}\}$, corresponding to the PDE residual, BCs and ICs loss terms, respectively, in Eqs.~\eqref{eq:pinns:loss:2}-\eqref{eq:pinns:loss:3}, can also be included in the meta-learned parameters $\eta$.
As a result, three different loss functions are learned that are equivalent up to a scaling factor; see computational examples in Section~\ref{sec:examples} for experiments.
For restricting $\{w_f, w_b, w_{u_0}\}$ to values greater than zero or a minimum value, the softplus activation function can be used similarly to the $c$ parameter in Eq.~\eqref{eq:barron:imple}.
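A minimal sketch of this option is given below; the raw variable and the minimum value are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

w_raw = torch.zeros(3, requires_grad=True)     # meta-learned alongside eta
w_min = 1e-3

def composite_weights():
    # (w_f, w_b, w_u0) > w_min via softplus, as for c in Eq. (barron:imple)
    return F.softplus(w_raw) + w_min

w_f, w_b, w_u0 = composite_weights()
# total inner loss: w_f * L_f + w_b * L_b + w_u0 * L_u0
\end{verbatim}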
\subsubsection{Inner optimization steps}\label{sec:meta:design:inner}
As discussed in Sections~\ref{sec:meta:bilevel}-\ref{sec:meta:lossml}, the gradient $\nabla_{\eta}\pazocal{L}_O$ required to update $\eta$ in Algorithm~\ref{algo:general} is obtained through inner optimization path differentiation, i.e., via Eq.~\eqref{eq:many:steps:derivative}.
For each outer iteration $i$, the obtained NN parameters $\theta^{*(i)}_{\tau}$ for each task $\tau$ following $J$ inner SGD steps with hyperparameters $\eta^{(i)}$ are given as
\begin{equation}\label{eq:theta:many:steps}
\theta^{*(i)}_{\tau} = \theta_{\tau}^{(J)} = \theta_{\tau}^{(0)} - \epsilon_2 \sum_{j=1}^{J}\nabla_{\theta}\pazocal{L}_{\tau}(\theta,\eta)\bigg\rvert_{\theta = \theta_{\tau}^{(j-1)}, \ \eta = \eta^{(i-1)}},
\end{equation}
where $\pazocal{L}_{\tau}$ is given by Eq.~\eqref{eq:lossml:inner:loss}.
As a result, the Jacobian $J_{\theta^*(\eta), \eta}$ in Eq.~\eqref{eq:many:steps:derivative}, which is the average over tasks of the derivative of Eq.~\eqref{eq:theta:many:steps} with respect to $\eta$, is given as a sum of $J$ gradient terms, i.e.,
\begin{equation}\label{eq:jacobian:theta:star}
J_{\theta^*(\eta), \eta} = \mathbb{E}_{\lambda_{\tau} \in \Lambda}\left[- \epsilon_2 \sum_{j=1}^{J}\nabla_{\eta}\left(\nabla_{\theta}\pazocal{L}_{\tau}(\theta,\eta)\bigg\rvert_{\theta = \theta_{\tau}^{(j-1)}, \ \eta = \eta^{(i-1)}}\right)^\top\right].
\end{equation}
Clearly, using a large number of inner steps $J$ enables $\eta$ optimization to take into account larger parts of the $\theta$ optimization history, which can be construed as enriching the meta-learning dataset.
In this regard, the increased computational cost associated with large $J$ values can be addressed by considering approximations of Eq.~\eqref{eq:jacobian:theta:star}; see, e.g., \cite{nichol2018firstorder} for an introduction.
However, if the $\eta$ gradients for subsequent $j$ values point to similar directions, the summation of Eq.~\eqref{eq:jacobian:theta:star} can lead to large Jacobian values.
In turn, large Jacobian values in Eq.~\eqref{eq:jacobian:theta:star} lead to large $\boldsymbol{d}_{\eta}\pazocal{L}_O(\theta^*(\eta), \eta)$ values in Eq.~\eqref{eq:many:steps:derivative}, which make the optimization of $\eta$ unstable.
For the computational example of Section~\ref{sec:examples:regression} we demonstrate in Appendix~\ref{app:add:results} the effect of the number of inner steps as well as the exploding gradients pathology with an experiment.
Finally, to address this exploding $\eta$ gradients issue, we have tested (a) dividing $\boldsymbol{d}_{\eta}\pazocal{L}_O(\theta^*(\eta), \eta)$ values by the number of inner iterations $J$, (b) normalizing the gradient by its norm, and (c) performing gradient clipping, i.e., setting a cap for the gradient norm.
The fact that the norm of the $\eta$ gradient does not explode in every outer iteration makes dividing by $J$ too strict, while normalizing by the gradient norm deprives the $\eta$ gradient of its capability to provide also an update magnitude apart from a direction.
For these reasons, gradient clipping is expected to be a better option.
This is corroborated by the experiments of Section~\ref{sec:examples}.
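The three options are summarized in the following sketch, intended to be called between the outer backward pass and the outer optimizer step; \texttt{eta\_params} and $J$ are placeholders for the meta-learned parameters and the number of inner iterations.
\begin{verbatim}
import torch

def stabilize(eta_params, J, mode="clip", max_norm=1.0):
    grads = [p.grad for p in eta_params if p.grad is not None]
    if mode == "divide":                              # (a) too strict
        for g in grads:
            g /= J
    elif mode == "normalize":                         # (b) loses the magnitude
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        for g in grads:
            g /= (total + 1e-12)
    elif mode == "clip":                              # (c) preferred option
        torch.nn.utils.clip_grad_norm_(eta_params, max_norm)
\end{verbatim}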
\subsubsection{Theoretical derivation of desirable loss function properties}\label{sec:meta:design:properties}
In general, meta-learning $\ell_{\eta}$ via Algorithm~\ref{algo:general} in conjunction with Eqs.~\eqref{eq:lossml:inner:loss}-\eqref{eq:lossml:outer:loss} aims to maximize meta-testing performance by considering an expressive $\ell_{\eta}$, which is learned during meta-training.
However, because the loss function plays a central role in optimization as explained in Section~\ref{sec:prelim:loss}, it would be important for the learned loss $\ell_{\eta}$ to satisfy certain conditions to allow efficient gradient-based training.
In this section, we theoretically identify \textit{the optimal stationarity condition}\ and \textit{the MSE relation condition}\ as the two desirable conditions that enable efficient training in our problems. Moreover, we propose a novel regularization method to impose the conditions. The LAL parametrization of Section~\ref{sec:meta:design:param:LAL} is then proven to satisfy the two conditions without any regularization.
The results presented in this section are general and pertain to regression problems as well.
For this reason, we consider a scenario more general than that of PINNs: a training dataset $\{(x_i,u_i)\}_{i=1}^N$ of $N$ samples, where $(x_i, u_i)$, with $x_i \in \pazocal{X} \subseteq \mathbb{R}^{D_x}$ and $u_i \in\pazocal{U}\subseteq \mathbb{R}^{D_u}$ for $i \in \{1,\dots,N\}$, is the $i$-th (input, target) pair.
For learning a NN approximator $\hat{u}_{\theta}$, we minimize the objective
\begin{align}
\pazocal{L}(\theta)= \frac{1}{N} \sum_{i=1}^N \ell(\hat{u}_{\theta}(x_{i}),u_{i}),
\end{align}
over $\theta \in \mathbb{R}^{D_{\theta}}$, where $\ell: \mathbb{R}^{D_{u}} \times\pazocal{U} \rightarrow \mathbb{R}_{\ge 0}$ is the selected or learned loss function.
Note that in the following, the function $\hat{u}_{\theta}$ is allowed to represent a wide range of network architectures, including ones with batch normalization, convolutions, and skip connections. In this section, we assume that the map $\hat{u}_{i}:\theta \mapsto \hat{u}_{\theta}(x_i)$ is differentiable for every $i \in \{1,\dots, N\}$.
We define the output vector for all $N$ data points by
\begin{equation}
\hat{u}_{X}(\theta)= \vect((\hat{u}_{\theta}(x_{1}),\dots,\hat{u}_{\theta}(x_{N}))) \in \mathbb{R}^{N D_u},
\end{equation}
and let $\{\theta^{(r)}\}_{r=0}^\infty$ be the optimization sequence defined by
\begin{equation}
\theta^{(r+1)} = \theta^{(r)}-\epsilon^{(r)}\bar g^{(r)},
\end{equation}
with an initial parameter vector $\theta^{(0)}$, a learning rate $\epsilon^{(r)}$, and an update vector $\bar g^{(r)}$. One of the theorems in this section relies on the following assumption on the update vector $\bar g^{(r)}$:
\begin{assumption} \label{assump:5}
There exist $\bar c,\textit{\underbar c}>0$ such that
$
\textit{\underbar c} \|\nabla_{\theta}\pazocal{L}(\theta_{}^{(r)})\|^2 \allowbreak \le\nabla_{\theta}\pazocal{L}(\theta_{}^{(r)})^\top \bar g^{(r)}
$
and
$
\|\bar g^{(r)}\|^2 \le \bar c \|\nabla_{\theta}\pazocal{L}(\theta_{}^{(r)})\|^2
$ for any $r \ge 0$.
\end{assumption}
\noindent
It is noted that Assumption \ref{assump:5} is satisfied by using
$
\bar g^{(r)} = D^{(r)}\nabla_{\theta}\pazocal{L}(\theta^{(r)}),
$
where $D^{(r)}$ is any positive definite symmetric matrix with eigenvalues in the interval $[\textit{\underbar c}, \sqrt{\bar c}]$.
Setting $D^{(r)}=I$ corresponds to SGD and Assumption \ref{assump:5} is satisfied with $\textit{\underbar c}=\bar c = 1$.
Next, we define \textit{the optimal stationarity condition} as well as \textit{the MSE relation condition} and provide the main Theorems~\ref{thm:1}-\ref{thm:2} and Corollary \ref{corollary:1} of this section; corresponding proofs can be found in Appendix~\ref{app:proofs}.
\begin{definition}\label{def:1}
The learned loss $\ell$ is said to satisfy \textit{the optimal stationarity condition} if the following holds: for all $u \in \pazocal{U}$ and $q \in \mathbb{R}^{D_u}$, $\nabla_{q} \ell(q,u)$ exists and $\nabla_{q} \ell(q,u)=0$ implies that $\ell(q,u) \le \ell(q',u)$ for all $q' \in \mathbb{R}^{D_u}$.
\end{definition}
\begin{theorem} \label{thm:1}
If the learned loss $\ell$ satisfies the optimal stationarity condition, then any stationary point $\theta$ of $\pazocal{L}$ is a global minimum of $\pazocal{L}$ when $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$. If the learned loss $\ell$ does not satisfy the optimal stationarity condition\ by having a point $q \in \mathbb{R}^{D_u}$ such that $\nabla_{q} \ell(q,u)=0$ and $\ell(q,u) > \ell(q',u)$ for some $q'\in \mathbb{R}^{D_u}$, then there exists a stationary point $\theta$ of $\pazocal{L}$ that is not a global minimum of $\pazocal{L}$ when $\{\hat{u}_X (\theta)\in \mathbb{R}^{N D_u}:\theta\in \mathbb{R}^{D_{\theta}}\}=\mathbb{R}^{N D_u}$.
\end{theorem}
\begin{theorem} \label{thm:2}
Suppose Assumption \ref{assump:5} holds. Assume that the learned loss $\ell$ satisfies the optimal stationarity condition, $\|\nabla_{\theta}\pazocal{L}(\theta)-\nabla_{\theta}\pazocal{L}(\theta')\|\le L \|\theta-\theta'\|$ for all $\theta,\theta'$ in the domain of $\pazocal{L}$ for some $L \ge 0$, and the learning rate sequence $\{\epsilon^{(r)}\}_{r}$ satisfies either (i) $\zeta \le \epsilon^{(r)} \le \frac{\textit{\underbar c} (2-\zeta)}{L\bar c}$ for some $\zeta>0$,
or (ii) $\lim_{r \rightarrow \infty}\epsilon^{(r)} =0$ and $\sum_{r=0}^\infty \epsilon^{(r)} = \infty$.
Then, for any limit point $\theta$ of the sequence $\{\theta^{(r)}\}_{r}$, the limit point $\theta$ is a global minimum of $\pazocal{L}$ if $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$.
\end{theorem}
\begin{definition}\label{def:2}
The learned loss $\ell$ is said to satisfy \textit{the MSE relation condition} if the following holds: for all $u \in \pazocal{U}$ and $q \in \mathbb{R}^{D_u}$, $\nabla_{q} \ell(q,u)=0$ if and only if $q=u$.
\end{definition}
\begin{corollary} \label{corollary:1}
If the learned loss $\ell$ satisfies the optimal stationarity condition\ and the MSE relation condition, then any stationary point $\theta$ of $\pazocal{L}$ is a global minimum of the MSE loss $\pazocal{L}_{\mathrm{MSE}}$ when $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$, where $\pazocal{L}_{\mathrm{MSE}}(\theta)=\frac{1}{N}\sum_{i=1}^N\|\hat u_\theta(x_{i})-u_{i}\|_2^2$.
\end{corollary}
For example, the rank condition $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$ (as well as the expressivity condition of $\{\hat{u}_X (\theta)\in \mathbb{R}^{N D_u}:\theta\in \mathbb{R}^{D_\theta}\}=\mathbb{R}^{N D_u}$) is guaranteed to be satisfied by using wide NNs (e.g., see \citealp{kawaguchi2019gradient,huang2020dynamics,kawaguchi2021recipe}).
Nevertheless, the rank condition $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$ is more general than the condition of using wide NNs, in the sense that the latter implies the former but not vice versa. Moreover, the standard loss functions used in PINNs, such as squared $\ell_2$-norm, satisfy the differentiability condition of Definition~\ref{def:1}, and are convex with respect to $q$: i.e., $\ell_{u} :q\mapsto \ell(q,u)$ is convex for all $u \in \pazocal{U}$. It is known that for any differentiable convex function $\ell_{u}:\mathbb{R}^{D_u} \rightarrow \mathbb{R}$, we have $\ell_u(q')\ge\ell_u(q)+\nabla_q\ell_u(q)^\top(q'-q)$ for all $q,q' \in \mathbb{R}^{D_u}$. Because the latter implies the optimal stationarity condition, we conclude that standard loss functions typically used in PINNs satisfy the optimal stationarity condition.
Unlike standard loss functions, the flexibility provided by meta-learning the loss function allows the learned loss $\ell_{u} :q\mapsto \ell(q,u)$ to be non-convex in $q$. This flexibility allows the learned loss to be well tailored to the task distribution considered, while we can still check or impose the optimal stationarity condition; see also Section~\ref{sec:examples}.
Corollary \ref{corollary:1} shows that for our PINN problems with the MSE measure, it is desirable for the learned loss to satisfy the optimal stationarity condition~and the MSE relation condition.
The MSE relation condition~imposes the existence of the desired stationary point related to MSE, whereas the optimal stationarity condition~ensures global optimality. Since the optimal stationarity condition~does not guarantee the existence of a stationary point, it is possible that no stationary point exists without the MSE relation condition\ or an additional condition of this type. As a pathological example, we may have $\ell_{\eta}(q, u)=q-u$, for which there is no stationary point and Theorems \ref{thm:1}--\ref{thm:2} vacuously hold true.
Therefore, depending on the measures used in applications (i.e., MSE in our case), it is beneficial to impose an additional condition of this type along with the optimal stationarity condition~in order to guarantee the existence of a desirable stationary point.
Using this theoretical result, we now propose a novel penalty term $\pazocal{L}_{O, add}$ to be added to the outer objective $\pazocal{L}_O$ of Eq.~\eqref{eq:lossml:outer:loss} in order to penalize the deviation of the learned loss from the conditions in Corollary \ref{corollary:1}.
More concretely, the novel penalty term $\pazocal{L}_{O, add}$ based on Corollary \ref{corollary:1} is expressed as
\begin{equation}\label{eq:lossml:outer:loss:penalty}
\pazocal{L}_{O,add}(\eta) = \mathbb{E}_{q}[\norm{\nabla_{q} \ell_{\eta}(q, q)}_2^2] + \mathbb{E}_{q \neq q'}[\max(0, c-\norm{\nabla_{q} \ell_{\eta}(q, q')}_2^2)],
\end{equation}
where $q, q' \in \mathbb{R}^{D_u}$ are arbitrary inputs to the loss function, and $c$ is a hyperparameter that can be set equal to a small value (e.g., $10^{-2}$).
The first term on the right-hand side of Eq.~\eqref{eq:lossml:outer:loss:penalty} drives the first-order derivative of the learned loss toward zero at zero discrepancy, so as to obtain a learned loss that satisfies the MSE relation condition.
Furthermore, the second term on the right-hand side of Eq.~\eqref{eq:lossml:outer:loss:penalty} pushes the derivative away from zero (up to the margin $c$) for nonzero discrepancy, so as to obtain a learned loss that satisfies the optimal stationarity condition.
The terms $\mathbb{E}_{q}[\norm{\nabla_{q} \ell_{\eta}(q, q)}_2^2]$ and $\mathbb{E}_{q \neq q'}[\max(0, c-\norm{\nabla_{q} \ell_{\eta}(q, q')}_2^2)]$ can, in practice, be computed by drawing some $q$ and a random pair $(q,q')$ such that $q \neq q'$, in each outer iteration, and by computing $\norm{\nabla_{q} \ell_{\eta}(q, q)}_2^2$ and $\max(0, c-\norm{\nabla_{q} \ell_{\eta}(q, q')}_2^2)$, respectively.
Alternatively, we can define an empirical distribution on $q$ and on $(q,q')$, and replace the expectations by summations over finite points.
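As an illustration of this sampling-based evaluation, the following minimal PyTorch sketch computes $\pazocal{L}_{O,add}$ by Monte Carlo; the callable \texttt{ell\_eta} and all other names are placeholders for a learned loss and are not part of our actual implementation.
\begin{verbatim}
import torch

def penalty_term(ell_eta, D_u, c=1e-2, n_samples=16):
    # Sketch of the penalty term: expectations replaced by Monte Carlo samples;
    # ell_eta(q, u) is assumed to return one scalar loss value per row.
    q = torch.randn(n_samples, D_u, requires_grad=True)
    # First term: gradient w.r.t. q at zero discrepancy (u = q, detached).
    g0 = torch.autograd.grad(ell_eta(q, q.detach()).sum(), q, create_graph=True)[0]
    term1 = (g0 ** 2).sum(dim=1).mean()
    # Second term: hinge penalty keeping the gradient away from zero when q != u.
    q2 = torch.randn(n_samples, D_u, requires_grad=True)
    u2 = q2.detach() + torch.randn(n_samples, D_u)  # q2 != u2 almost surely
    g1 = torch.autograd.grad(ell_eta(q2, u2).sum(), q2, create_graph=True)[0]
    term2 = torch.clamp(c - (g1 ** 2).sum(dim=1), min=0.0).mean()
    return term1 + term2
\end{verbatim}
Since the gradients are computed with \texttt{create\_graph=True}, the returned penalty remains differentiable with respect to the learned loss parameters $\eta$ and can simply be added to the outer objective.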
By following the same rationale and by augmenting the outer objective $\pazocal{L}_O$ of Eq.~\eqref{eq:lossml:outer:loss}, other problem-specific constraints can be imposed on the loss function as well.
Finally, we prove that a learned loss with the LAL parametrization of Section~\ref{sec:meta:design:param:LAL} automatically satisfies the optimal stationarity condition~and the MSE relation condition\ without adding the regularization term:
\begin{proposition} \label{prop:1}
Any LAL loss of the form $\ell(q,u)=\rho_{\alpha,c}(q-u) = \frac{|\alpha - 2|}{\alpha}((\frac{((q-u)/c)^2}{|\alpha - 2|} + 1)^{\alpha/2} - 1)$ satisfies the optimal stationarity condition~and the MSE relation condition\ if $c>0$, $\alpha\neq0$, and $\alpha\neq 2$.
\end{proposition}
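As an informal sketch of why Proposition~\ref{prop:1} holds in the one-dimensional case (writing $d=q-u$ and using elementary calculus, not the formal argument), differentiation gives
\begin{equation*}
\frac{\partial \rho_{\alpha,c}(d)}{\partial d} = \frac{d}{c^2}\left(\frac{(d/c)^2}{|\alpha - 2|}+1\right)^{\alpha/2-1},
\end{equation*}
which vanishes if and only if $d=0$, since the second factor is strictly positive for $c>0$, $\alpha\neq 0$, and $\alpha\neq 2$. Moreover, $\rho_{\alpha,c}(d)\ge \rho_{\alpha,c}(0)=0$ for all $d$ (for $\alpha>0$ the prefactor and the bracketed term are both nonnegative, while for $\alpha<0$ they are both nonpositive), so the unique stationary point $q=u$ is also the global minimum, which is precisely what the optimal stationarity condition~and the MSE relation condition~require.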
\section{Computational examples}\label{sec:examples}
We consider four computational examples in order to demonstrate the applicability and the performance of Algorithm~\ref{algo:general} for meta-learning PINN loss functions.
In Section~\ref{sec:examples:regression}, we address the problem of discontinuous function approximation with varying frequencies. Because the function approximation problem is conceptually simpler than solving PDEs and computationally cheaper, Section~\ref{sec:examples:regression} serves not only as a pedagogical example but also as a guide for understanding the behavior of Algorithm~\ref{algo:general} when different loss function parametrizations and initializations, inner optimizers and other design options are used; see Appendices~\ref{app:meta:design:other} and \ref{app:add:results}.
In Section~\ref{sec:examples:ad} we address the problem of solving the advection equation with varying initial conditions and discontinuous solutions, in Section~\ref{sec:examples:rd} we solve a steady-state version of the reaction-diffusion equation with varying source term, and in Section~\ref{sec:examples:burgers} we solve the Burgers equation with varying viscosity in two regimes.
For each computational example, both FFN and LAL parametrizations from Section~\ref{sec:meta:design:param} are studied. For the FFN parametrization, we perform meta-training with and without the theoretically-driven regularization terms of Eq.~\eqref{eq:lossml:outer:loss:penalty} as developed in Section~\ref{sec:meta:design:properties}. We present the regularization results explicitly only when the terms of Eq.~\eqref{eq:lossml:outer:loss:penalty} are not zero; otherwise, the respective results are identical. For the LAL parametrization, the regularization is not required because the desirable conditions of Section~\ref{sec:meta:design:properties} are automatically satisfied according to Proposition~\ref{prop:1}.
In all the examples, meta-training is performed using Adam as the outer optimizer for $10{,}000$ iterations. During meta-training, 6 snapshots of the learned loss are captured, with 0 corresponding to initialization.
Furthermore, only one task is used in each outer step throughout this section ($T = 1$ in Algorithm~\ref{algo:general}); increasing this number up to 5 does not provide any significant performance increase in the considered cases. In meta-testing, we compare the performance of the snapshots of the learned loss captured during meta-training with standard loss functions from Table~\ref{tab:losses}.
Specifically, we compare $12$ learned losses ($6$ snapshots of the FFN and $6$ of the LAL parametrization; see Sections~\ref{sec:meta:design:param:LAL}-\ref{sec:meta:design:param:FFN}) with the squared $\ell_2$-norm (MSE), the absolute error (L1), the Cauchy, and the Geman-McClure (GMC) loss functions.
In addition, we compare with the OAL of \cite{barron2019general} (see Section~\ref{sec:meta:design:param:LAL}) with $2$ learning rates ($0.01$ and $0.1$, denoted as OAL 1 and OAL 2) for its trainable parameters (robustness and scale); only the robustness parameter is trained in the computational examples because this setting yielded better performance. For evaluating the performance of the considered loss functions we use them for meta-testing on either $5$ or $10$ unseen tasks, either in-distribution (ID) or out-of-distribution (OOD), and record the relative $\ell_2$ test error (rl2) on exact solution datapoints averaged over tasks.
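For completeness, and assuming the standard definition of the relative $\ell_2$ error, the rl2 metric reported throughout is
\begin{equation*}
\mathrm{rl2}=\frac{\big(\sum_{i}\|\hat{u}_{\theta}(x_{i})-u(x_{i})\|_2^{2}\big)^{1/2}}{\big(\sum_{i}\|u(x_{i})\|_2^{2}\big)^{1/2}},
\end{equation*}
computed on the exact-solution datapoints of each test task and subsequently averaged over tasks.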
\subsection{Discontinuous function approximation with varying frequencies and heteroscedastic noise}\label{sec:examples:regression}
We first consider a distribution of functions in $[0, 4\pi ]$ defined as
\begin{equation}\label{eq:regex:udef}
u(x)=
\begin{cases}
\sin(\omega_1 x) + \epsilon,& \text{if } 0 \leq x \leq 2\pi\\
k (1 + \sin(\omega_2 (x - 2\pi))), & \text{if } 2\pi < x \leq 4\pi,
\end{cases}
\end{equation}
where $k$ denotes the magnitude of discontinuity and $\epsilon$ represents a zero-mean Gaussian noise term defined only in $[0, 2\pi]$ with standard deviation $\sigma_{\epsilon}$.
In this regard, a task distribution $p(\lambda)$ can be defined by randomly drawing frequency values $\lambda = \{\omega_1, \omega_2\}$ from $\mathcal{U}_{[\omega_{1, min}, \omega_{1, max}]}$ and $\mathcal{U}_{[\omega_{2, min}, \omega_{2, max}]}$, respectively, where $\mathcal{U}$ denotes the uniform distribution.
The defined task distribution is used in this section for function approximation.
The values of the fixed parameters used in this example can be found in Table~\ref{tab:reg:params}.
\begin{table}[H]
\centering
\caption{Function approximation: Task distribution values pertaining to Eq.~\eqref{eq:regex:udef}, used in meta-training and OOD meta-testing.
}
\begin{tabular}{ccccccc}
\toprule
&$k$ & $\omega_{1,min}$ & $\omega_{1,max}$ & $\omega_{2,min}$ & $\omega_{2,max}$ & $\sigma_{\epsilon}$\\
\midrule
meta-training&$1$ & $1$ & $3$ & $5$ & $6$ & $0.2$\\
OOD meta-testing&$1$ & $0.5$ & $4$ & $6$ & $7$ & $0.2$\\
\bottomrule
\end{tabular}
\label{tab:reg:params}
\end{table}
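To make the task-sampling step concrete, a minimal NumPy sketch that draws one task from Eq.~\eqref{eq:regex:udef} with the meta-training values of Table~\ref{tab:reg:params} is given below; all function and variable names are illustrative rather than part of our implementation.
\begin{verbatim}
import numpy as np

def sample_task(rng, w1_rng=(1.0, 3.0), w2_rng=(5.0, 6.0), k=1.0, sigma_eps=0.2):
    # Draw lambda = {w1, w2} and return the corresponding noisy target function.
    w1 = rng.uniform(*w1_rng)
    w2 = rng.uniform(*w2_rng)
    def u(x):
        x = np.asarray(x, dtype=float)
        left = np.sin(w1 * x) + sigma_eps * rng.standard_normal(x.shape)
        right = k * (1.0 + np.sin(w2 * (x - 2.0 * np.pi)))
        return np.where(x <= 2.0 * np.pi, left, right)  # noise only on [0, 2*pi]
    return u

rng = np.random.default_rng(0)
u = sample_task(rng)
x_train = np.linspace(0.0, 4.0 * np.pi, 100)  # N_u = 100 training points
y_train = u(x_train)
\end{verbatim}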
\paragraph*{Meta-training.}
Following the design options experiment in Appendix~\ref{app:add:results}, we fix the design options of Algorithm~\ref{algo:general} to $J=20$ inner steps and to resampling and re-initializing every outer iteration.
The approximator NN architecture consists of $3$ hidden layers with $40$ neurons each and $\tanh$ activation function.
In the inner objective we consider $N_u = 100$ noisy datapoints with $\sigma_{\epsilon} = 0.2$, whereas in the outer objective $N_{u, val} = 1{,}000$ and $\sigma_{\epsilon} = 0$.
This can be interpreted as leveraging clean historical data in the offline phase, synthetically corrupting it with noise, in order to meta-learn a loss function that also works well at test time for noisy data.
Moreover, both parametrizations (FFN and LAL) are used for comparison, and Adam is used as both inner and outer optimizer, with learning rates $10^{-3}$ and $10^{-4}$, respectively.
In Fig.~\ref{fig:reg:snaps} we show the learned loss snapshots and their first-order derivatives, compared with the MSE, for the FFN and LAL parametrizations.
Both FFN and LAL parametrizations with MSE initialization yield highly different learned losses as compared to MSE.
Being more flexible than LAL, FFN leads to more complex learned losses as depicted especially in the first-order derivative plots (Figs.~\ref{fig:reg:snaps:ffn:lossder} and \ref{fig:reg:snaps:lal:lossder}).
\begin{figure}
\caption{Function approximation: Learned loss snapshots (a, c) and corresponding first-order derivatives (b, d), as captured during meta-training (distributed evenly in $10{,}000$ outer iterations).
}
\label{fig:reg:snaps}
\end{figure}
\paragraph*{Meta-testing.}
For evaluating the performance of the captured learned loss snapshots, we use them for meta-testing on $10$ OOD tasks and compare with standard loss functions from Table~\ref{tab:losses}.
Specifically, we train $10$ tasks with Adam for $50{,}000$ iterations, using $18$ different loss functions ($6$ FFN, $6$ LAL, and $6$ standard), and record the rl2 error on $1{,}000$ exact solution datapoints.
The test tasks are sampled from a distribution defined by combining Eq.~\eqref{eq:regex:udef} with $k=1$ and $\sigma_{\epsilon} = 0.2$, and with the uniform distributions $\mathcal{U}_{[\omega_{1, min}, \omega_{1, max}]}$ and $\mathcal{U}_{[\omega_{2, min}, \omega_{2, max}]}$, where $\{\omega_{1, min}, \omega_{1, max}, \omega_{2, min}, \omega_{2, max}\}$ are shown in Table~\ref{tab:reg:params}.
The minimum rl2 error results are shown in Fig.~\ref{fig:reg:test:min:stats}.
We see that the loss functions learned with the FFN parametrization do not generalize well, whereas the ones learned with the LAL parametrization achieve an average minimum rl2 error that is smaller than the error corresponding to all the other considered loss functions by at least $15 \%$.
For example, the average minimum rl2 error for LAL 4 is approximately $17 \%$ and is obtained (on average) close to iteration $20{,}000$, whereas the corresponding error for MSE is approximately $20 \%$ and is obtained (on average) close to iteration $50{,}000$.
Despite the improvement demonstrated in this example, in Fig.~\ref{fig:ad:traintest:Adam:experiment} we show for the advection equation example that the performance of the learned loss depends on the exponential decay parameters of Adam that control the dependence of the updates on the gradient history.
\begin{figure}
\caption{Function approximation: Minimum relative test $\ell_2$ error (rl2) averaged over $10$ OOD tasks during meta-testing with Adam for $50{,}000$ iterations.
}
\label{fig:reg:test:min:stats}
\end{figure}
\subsection{Task distributions defined based on advection equation with varying initial conditions and discontinuous solutions}\label{sec:examples:ad}
Next, we consider the $(1 + 1)$-dimensional advection equation given as
\begin{equation}\label{eq:advection}
\partial_t u + V \partial_x u = 0,
\end{equation}
where $V$ is the constant advection velocity, $x \in [-1, 1]$ and $t \in [0, 1]$.
The considered Dirichlet BCs are given as
\begin{equation}\label{eq:advection:bcs}
u(-1, t) = u(1, t) = 0
\end{equation}
and the ICs as
\begin{equation}\label{eq:advection:ics}
u(x, 0) = u_{0, \lambda}(x) =
\begin{cases}
\frac{1}{\lambda},& \text{if } -1 \leq x \leq -1 + \lambda\\
0, & \text{if } -1 + \lambda < x \leq 1,
\end{cases}
\end{equation}
i.e., $u_{0, \lambda}(x)$ is a normalized box function of length $\lambda$.
The exact solution for this problem is given as $u_{0, \lambda}(x - V t)$, which is also a box function that advects in time.
In this regard, we can define a PDE task distribution comprised of problems of the form of Eq.~\eqref{eq:advection} with ICs given by Eq.~\eqref{eq:advection:ics} with varying $\lambda$.
A task distribution $p(\lambda)$ can be defined by drawing randomly $\lambda$ values from $\mathcal{U}_{[\lambda_{min}, \lambda_{max}]}$, which correspond to different initial conditions $u_{0, \lambda}(x)$ in Eq.~\eqref{eq:advection:ics}.
In this example, $V = 1$, $\lambda_{min} = 0.5$, and the maximum value of $\lambda$ that can be considered so that the BCs are not violated is $1$; thus we consider $\lambda_{max} = 1$.
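For reference, a minimal NumPy sketch of the box initial condition of Eq.~\eqref{eq:advection:ics} and of the advected exact solution $u_{0, \lambda}(x - V t)$, which we use only to indicate how exact-solution datapoints for the rl2 error can be generated (names are illustrative), is:
\begin{verbatim}
import numpy as np

V = 1.0  # advection velocity used in this example

def box_ic(lam):
    # Normalized box initial condition u_{0,lam}(x) of Eq. (advection:ics).
    return lambda x: np.where((x >= -1.0) & (x <= -1.0 + lam), 1.0 / lam, 0.0)

def exact_solution(lam):
    # Exact solution u_{0,lam}(x - V t): the box simply advects in time.
    u0 = box_ic(lam)
    return lambda x, t: u0(x - V * t)

lam = np.random.default_rng(0).uniform(0.5, 1.0)  # lambda ~ U[0.5, 1]
u = exact_solution(lam)
\end{verbatim}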
\paragraph*{Meta-training.}
During meta-training, Algorithm~\ref{algo:general} is employed with either SGD or Adam as inner optimizer with learning rate $10^{-2}$ or $10^{-3}$, respectively.
Both FFN and LAL are initialized as MSE approximations, the number of inner iterations is $20$, and tasks are resampled and approximator NNs (PINNs in this case) are randomly re-initialized in every outer iteration.
The number of datapoints $\{N_{f}, N_{b}, N_{u_0}\}$ and $\{N_{f, val}, N_{b, val}, N_{u_0, val}\}$ used for evaluating both the inner objective of Eq.~\eqref{eq:lossml:inner:loss} and the outer objective of Eq.~\eqref{eq:lossml:outer:loss}, respectively, are the same and equal to $\{1{,}000, 100, 200\}$, and $N_{u, val} = 0$.
Furthermore, the PINN architecture consists of $4$ hidden layers with $20$ neurons each and $\tanh$ activation function.
We show in Fig.~\ref{fig:ad:comp} the final loss functions (FFN and LAL parametrizations) as obtained with SGD as inner optimizer, with and without meta-learning the objective function weights (Section~\ref{sec:meta:design:param:weights}), and as obtained with Adam as inner optimizer.
For SGD as inner optimizer, both FFN and LAL parametrizations with MSE initialization yield highly different learned losses as compared to MSE, with FFN yielding more complex learned losses.
Furthermore, objective function weights meta-learning leads to an asymmetric final learned loss and is found in meta-testing to deteriorate performance.
For Adam as inner optimizer, the final learned losses are close to MSE.
The corresponding objective function weights trajectories are shown in Fig.~\ref{fig:ad:params}.
All objective function weights increase for both parametrizations, which translates into learning rate increase, while FFN and LAL disagree on how they balance ICs.
The meta-testing results ($100$ test iterations) obtained while meta-training for the case of SGD as inner optimizer are shown in Fig.~\ref{fig:ad:traintest}.
Although initially objective function weights meta-learning improves performance, the corresponding final learned losses in conjunction with the final learned weights eventually deteriorate performance.
\begin{figure}
\caption{Advection equation: Final learned losses (a, c) and corresponding first-order derivatives (b, d), with FFN (a, b) and LAL (c, d) parametrizations.
Results obtained via meta-training with SGD as inner optimizer (without and with meta-learning the objective function weights; SGD and SGD - ofw) and with Adam optimizer; comparisons with MSE are also included.
For SGD as inner optimizer, both FFN and LAL parametrizations with MSE initialization yield highly different learned losses as compared to MSE, with FFN yielding more complex learned losses.
SGD - ofw leads to an asymmetric final learned loss and is found in meta-testing to deteriorate performance.
For Adam as inner optimizer, final learned losses are close to MSE.
}
\label{fig:ad:comp}
\end{figure}
\begin{figure}
\caption{Advection equation: Learned objective function weights, pertaining to PDE, BCs, and ICs residuals, as a function of outer iteration in meta-training; see Section~\ref{sec:meta:design:param:weights}.
}
\label{fig:ad:params}
\end{figure}
\begin{figure}
\caption{Advection equation: Meta-testing results (relative $\ell_2$ test error on $1$ unseen task after $100$ iterations) obtained using learned loss snapshots and performed during meta-training (every $500$ outer iterations); these can be construed as \textit{meta-validation error}.
}
\label{fig:ad:traintest}
\end{figure}
\paragraph*{Meta-testing with SGD.}
For evaluating the performance of the captured learned loss snapshots, we employ them for meta-testing on $5$ ID tasks and compare with standard loss functions from Table~\ref{tab:losses}.
Specifically, we train $5$ tasks with SGD for $10{,}000$ iterations with learning rate $0.01$ (same as in meta-training), using learned and standard loss functions, and record the rl2 error on $10{,}000$ exact solution datapoints.
As learned losses, we use the ones obtained with SGD as inner optimizer and without objective function weights meta-learning.
The test error histories as well as the OAL parameters during training are shown in Fig.~\ref{fig:ad:test:misc}, and the minimum rl2 error results are shown in Fig.~\ref{fig:ad:test:min:stats}.
As shown in Fig.~\ref{fig:ad:test:oal:params}, the final learned loss LAL 5 differs substantially from both OAL 1 and OAL 2, which converge to a robustness parameter value close to $3$ for all tasks.
In Figs.~\ref{fig:ad:test:train:hists}-\ref{fig:ad:test:min:stats}, we see that the loss functions learned with both parametrizations achieve an average minimum rl2 error during $10{,}000$ iterations that is significantly smaller than all the considered losses (even the online adaptive ones) although they have been meta-trained with only $20$ iterations.
\begin{figure}
\caption{Advection equation: In-distribution (ID) meta-testing results.
Results obtained using SGD with learning rate $0.01$ for $10{,}000$ iterations.
}
\label{fig:ad:test:misc}
\end{figure}
\begin{figure}
\caption{Advection equation: Minimum relative test $\ell_2$ error (rl2) averaged over $5$ ID tasks during meta-testing with SGD for $10{,}000$ iterations.
}
\label{fig:ad:test:min:stats}
\end{figure}
\paragraph*{Meta-testing with Adam.}
Next, we employ the learned losses obtained using Adam as inner optimizer in the same meta-testing experiment as the one of Figs.~\ref{fig:ad:test:misc}-\ref{fig:ad:test:min:stats}; except for the fact that Adam with learning rate $10^{-3}$ is used in meta-testing instead of SGD.
The minimum rl2 error results are shown in Fig.~\ref{fig:ad:adam:test:min:stats}, where we see that the learned losses do not improve performance as compared to MSE; the same behavior has been observed for the examples of Sections~\ref{sec:examples:rd}-\ref{sec:examples:burgers}, but those results are not included in this paper.
One reason for this result is the fact that Adam depends on the whole history of gradients during optimization through an exponentially decaying average that only discards far in the past gradients.
However, our learned losses have been meta-trained with only $20$ inner iterations, and thus it is unlikely that they could have learned this memory property of Adam.
To illustrate the validity of the above explanation, we perform meta-training with Adam as inner optimizer with varying exponential decay parameters and subsequently meta-testing with the obtained learned loss snapshots (see Fig.~\ref{fig:ad:traintest:Adam:experiment}).
Specifically, we use values for the pair ($\beta_1, \beta_2$), corresponding to the decay parameters for the first and second moment estimates in Adam (see \cite{kingma2014adam}) in the set $\{(0.5, 0.5), (0.8, 0.8), (0.9, 0.999) = \text{default}, (0.99, 0.9999)\}$ with higher numbers corresponding to higher dependency on the far past.
Note that the decay factors multiplying the $21$st gradient in the past (i.e., one gradient beyond the history used in meta-training) are approximately $10^{-6}$ and $10^{-2}$ for the pairs $(0.5, 0.5)$ and $(0.8, 0.8)$, respectively.
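The order of magnitude of these factors can be reproduced with a one-line check, under the assumption that the factor multiplying the $21$st-oldest gradient is taken as $\beta^{20}$ (i.e., $20$ applications of the decay beyond the most recent gradient):
\begin{verbatim}
for beta in (0.5, 0.8, 0.9, 0.99):
    print(beta, beta ** 20)  # 0.5 -> 9.5e-07, 0.8 -> 1.2e-02, 0.9 -> 0.12, 0.99 -> 0.82
\end{verbatim}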
As expected and shown in Fig.~\ref{fig:ad:traintest:Adam:experiment}, higher $\beta$ values corresponding to higher dependency on the far past yield deteriorating performance of the learned losses.
In this regard, we use only SGD as inner optimizer in the rest of the computational examples and leave the task of improving the performance of the technique for addressing inner optimizers with memory, such as SGD with momentum, AdaGrad, RMSProp and Adam, as future work.
\begin{figure}
\caption{Advection equation: Minimum relative test $\ell_2$ error (rl2) averaged over $5$ ID tasks during meta-testing with Adam for $10{,}000$ iterations.
}
\label{fig:ad:adam:test:min:stats}
\end{figure}
\begin{figure}
\caption{Advection equation: Meta-testing results (relative $\ell_2$ test error on $10$ unseen tasks after $100$ iterations) obtained using learned loss snapshots and performed during meta-training (every $500$ outer iterations).
Results are related to meta-training with FFN parametrization and Adam as inner optimizer with 4 different $\beta$ pair values, from low to high dependency on the gradient history: $(0.5, 0.5)$, $(0.8, 0.8)$, $(0.9, 0.999)$ (default), and $(0.99, 0.9999)$.
}
\label{fig:ad:traintest:Adam:experiment}
\end{figure}
\subsection{Task distributions defined based on reaction-diffusion equation with varying source term}
\label{sec:examples:rd}
Reaction-diffusion equations are used to describe diverse systems ranging from population dynamics to chemical reactions and have the general form
\begin{equation}\label{eq:react}
\partial_tu = D\Delta u + z(x, u, \nabla u),
\end{equation}
where $\Delta$ denotes the Laplace operator and $D$ is the diffusion coefficient.
In Eq.~\eqref{eq:react}, $D\Delta u$ represents the diffusion term, whereas $z(x, u, \nabla u)$ represents the reaction term.
In the following, without loss of generality, we consider a two-dimensional nonlinear, steady-state version of Eq.~\eqref{eq:react} given as
\begin{equation}\label{eq:react:ex}
k (\partial^2_{x_1} u + \partial^2_{x_2} u) + u(1 - u^2) = z,
\end{equation}
where $x_1, x_2 \in [-1, 1]$ refer to space dimensions, $z$ can be interpreted as a source term, and $u$ is considered as known at all boundaries.
To demonstrate the role of task distributions in the context of varying excitation terms, we consider a family of fabricated solutions $u$ that after differentiation produce a family of $z$ terms in Eq.~\eqref{eq:react:ex}.
As an illustrative example, the task distribution $p(z_{\lambda})$ can be defined by drawing $\lambda = \{\alpha_1, \alpha_2, \omega_1, \omega_2, \omega_3, \omega_4\}$, with $\lambda \sim p(\lambda)$, constructing an analytical solution $u = \alpha_1 \tanh(\omega_1 x_1) \tanh(\omega_2 x_2) + \alpha_2 \sin(\omega_3 x_1) \sin(\omega_4 x_2)$, and obtaining $z_{\lambda}$ via Eq.~\eqref{eq:react:ex}.
In practice, of course, the roles are reversed: a different excitation term $z$ poses a novel problem of the form of Eq.~\eqref{eq:react:ex} to be solved.
Nevertheless, defining $z$ by using fabricated $u$ solutions that are by construction related to each other helps to demonstrate the concepts of this work in a more straightforward manner.
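As an illustration of this construction, the following SymPy sketch builds one fabricated solution $u$ and the corresponding source term $z_{\lambda}$ via Eq.~\eqref{eq:react:ex}; the value of $k$ and the specific draw of $\lambda$ are placeholders and not values used in our experiments.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
k = sp.Rational(1, 100)  # placeholder value of the coefficient k
# Example draw of lambda = {a1, a2, w1, w2, w3, w4}.
a1, a2, w1, w2, w3, w4 = 0.5, 0.3, 2.0, 3.0, 1.5, 4.0
u = a1 * sp.tanh(w1 * x1) * sp.tanh(w2 * x2) + a2 * sp.sin(w3 * x1) * sp.sin(w4 * x2)
# Source term obtained by substituting the fabricated u into the PDE.
z = k * (sp.diff(u, x1, 2) + sp.diff(u, x2, 2)) + u * (1 - u**2)
z_fn = sp.lambdify((x1, x2), sp.simplify(z), "numpy")  # callable for training data
\end{verbatim}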
\paragraph*{Meta-training.}
The task distribution parameters used in meta-training are shown in Table~\ref{tab:rd:params}.
The meta-training design options are the same as in Section~\ref{sec:examples:ad} except for the fact that objective function weights are not meta-learned.
Furthermore, the PINN architecture consists of $3$ hidden layers with $20$ neurons each and $\tanh$ activation function.
In Fig.~\ref{fig:rd:comp}, we show the final loss functions (FFN and LAL parametrizations) as obtained with SGD and Adam as inner optimizers.
In addition, in Fig.~\ref{fig:rd:comp} we include the loss functions obtained using the exact solution data in the outer objective instead of the composite PINNs loss of Eq.~\eqref{eq:lossml:outer:loss}.
In the considered example, these data are available because a fabricated solution is used, whereas in practice they may originate from a numerical solver or from measurements.
This is referred to as \textit{double data (DD)} in the plots because different data is used for the inner and for the outer objective; single data is denoted as SD.
The number of datapoints used for evaluating the outer objective of Eq.~\eqref{eq:lossml:outer:loss} is shown in Table~\ref{tab:rd:outer:data}, whereas the corresponding number for the inner objective of Eq.~\eqref{eq:lossml:inner:loss} is the same as the single-data case in Table~\ref{tab:rd:outer:data}.
\paragraph*{Connection with the theory of Section~\ref{sec:meta:design:properties}.}
Whereas regularization is not required for LAL (see Proposition~\ref{prop:1}), as shown in Fig.~\ref{fig:rd:comp} the loss function corresponding to SGD with FFN and no regularization is shifted to the right; i.e., its first-order derivative at $0$ discrepancy is not zero.
As a result, the MSE relation condition\ of Corollary~\ref{corollary:1} is not satisfied and the learned loss leads to divergence in optimization when used for meta-testing.
Thus, we also include in Fig.~\ref{fig:rd:comp} the regularized loss function obtained via meta-training with the theoretically-driven gradient penalty of Eq.~\eqref{eq:lossml:outer:loss:penalty}, which solves this issue; see also Fig.~\ref{fig:rd:snaps:loss:ffn} for the loss function snapshots captured during meta-training for the non-regularized and regularized cases with the FFN parametrization.
\begin{table}[H]
\centering
\caption{Reaction-diffusion equation: Task distribution values used in meta-training and OOD meta-testing.
Parameters $\alpha_1$, $\alpha_2$ and parameters $\omega_1$, $\omega_2$, $\omega_3$, $\omega_4$ share the same limits $\alpha_{min}$, $\alpha_{max}$ and $\omega_{min}$, $\omega_{max}$, respectively.
}
\begin{tabular}{ccccc}
\toprule
&$\alpha_{min}$& $\alpha_{max}$& $\omega_{min}$& $\omega_{max}$\\
\midrule
meta-training&$0.1$& $1$& $1$& $5$\\
OOD meta-testing&$0.1$& $2$& $0.5$& $7$\\
\bottomrule
\end{tabular}
\label{tab:rd:params}
\end{table}
\begin{table}[H]
\centering
\caption{Reaction-diffusion equation: PDE, BCs, ICs, and solution data considered in the outer objective of Eq.~\eqref{eq:lossml:outer:loss} for single data (outer objective data same as inner objective) and for double data (solution data considered available in the outer objective) in meta-training ($\{N_{f, val}, N_{b, val}, N_{u_0, val}, N_{u, val}\}$), as well as for OOD meta-testing ($\{N_{f}, N_{b}, N_{u_0}, N_{u}\}$).
}
\begin{tabular}{c|cccc}
\toprule
& $N_{f(, val)}$ & $N_{b(, val)}$ & $N_{u_0(, val)}$ & $N_{u(, val)}$ \\
single data (SD) meta-training & $1{,}600$ & $160$ & NA & $0$ \\
double data (DD) meta-training & $0$ & $0$ & NA & $1{,}600$ \\
OOD meta-testing & $2{,}500$ & $200$ & NA & $0$ \\
\bottomrule
\end{tabular}
\label{tab:rd:outer:data}
\end{table}
\begin{figure}
\caption{Reaction-diffusion equation: Final learned losses (a, c) and corresponding first-order derivatives (b, d), with FFN (a, b) and LAL (c, d) parametrizations.
Results obtained via meta-training with SGD as inner optimizer (with single data, without and with regularization, and with double data; SGD, SGD - reg, SGD - dd) and with Adam optimizer; comparisons with MSE are also included.
Whereas regularization is not required for LAL (Proposition~\ref{prop:1}), the FFN loss learned with SGD and no regularization is shifted to the right, and the regularized case (SGD - reg) corrects this.
}
\label{fig:rd:comp}
\end{figure}
\begin{figure}
\caption{Reaction-diffusion equation: Learned loss snapshots captured during meta-training (distributed evenly in $10{,}000$ outer iterations) for the non-regularized and regularized cases with the FFN parametrization.
}
\label{fig:rd:snaps:loss:ffn}
\end{figure}
\paragraph*{Meta-testing.}
For evaluating the performance of the captured learned loss snapshots, we employ them for meta-testing on $5$ OOD tasks and compare with standard loss functions from Table~\ref{tab:losses}.
Specifically, we train $5$ tasks with SGD for $20{,}000$ iterations with learning rate $0.01$ (same as in meta-training), using learned and standard loss functions, and record the rl2 error on $2{,}500$ exact solution datapoints.
The OOD test tasks are drawn based on the parameter limits shown in Table~\ref{tab:rd:params}.
In addition, to increase the difficulty of OOD meta-testing, we also draw random architectures to be used in meta-testing that are different than the meta-training architecture; the number of hidden layers is drawn from $\mathcal{U}_{[2, 5]}$ and the number of neurons in each layer from $\mathcal{U}_{[15, 55]}$.
A learned loss that performs equally well for architectures not used in meta-training is desirable if, for example, we seek to optimize the PINN architecture with a fixed learned loss.
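A minimal sketch of how such random architectures could be drawn is given below; we assume discrete uniform draws, two inputs $(x_1, x_2)$, one output, and $\tanh$ activations, and all names are illustrative.
\begin{verbatim}
import numpy as np
import torch

def random_pinn(rng, d_in=2, d_out=1):
    # Hidden layers ~ U{2,...,5}, neurons per layer ~ U{15,...,55}, tanh activations.
    n_layers = int(rng.integers(2, 6))            # 2..5 inclusive
    widths = rng.integers(15, 56, size=n_layers)  # 15..55 inclusive
    layers, prev = [], d_in
    for w in widths:
        layers += [torch.nn.Linear(prev, int(w)), torch.nn.Tanh()]
        prev = int(w)
    layers.append(torch.nn.Linear(prev, d_out))
    return torch.nn.Sequential(*layers)

net = random_pinn(np.random.default_rng(0))
\end{verbatim}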
The test error histories as well as the OAL parameters during meta-training are shown in Fig.~\ref{fig:rd:test:misc}, and the minimum rl2 error results are shown in Fig.~\ref{fig:rd:test:min:stats}.
As shown in Fig.~\ref{fig:rd:test:oal:params}, the online adaptive losses OAL converge to different loss functions for each task, whereas LAL provides a shared learned loss for all tasks.
In Fig.~\ref{fig:rd:test:min:stats} we see that the loss functions learned with the FFN parametrization achieve an average minimum rl2 error during $20{,}000$ iterations that is significantly smaller than most considered standard losses although they have been meta-trained with only $20$ iterations, with a different PINN architecture and on a different task distribution.
On the other hand, LAL does not generalize well.
This is attributed to the fact that, for this example, potentially a task-specific loss would be more appropriate (as suggested by Fig.~\ref{fig:rd:test:oal:params}), and the LAL parametrization is not flexible enough to provide a shared loss function that performs well across tasks (as opposed to FFN).
\begin{figure}
\caption{Reaction-diffusion equation: Out-of-distribution meta-testing results obtained using SGD with learning rate $0.01$ for $20{,}000$ iterations.
}
\label{fig:rd:test:misc}
\end{figure}
\begin{figure}
\caption{Reaction-diffusion equation: Minimum relative test $\ell_2$ error (rl2) averaged over $5$ OOD tasks during meta-testing with SGD for $20{,}000$ iterations.
}
\label{fig:rd:test:min:stats}
\end{figure}
\subsection{Task distributions defined based on Burgers equation with varying viscosity}
\label{sec:examples:burgers}
Finally, we consider the Burgers equation defined by
\begin{align}\label{eq:burgers}
&\partial_t u + u \partial_x u = \lambda \partial^2_x u, ~ x \in [-1, 1], t \in [0, 1],\\
&u(x, 0) = -\sin(\pi x), u(-1, t) = u(1, t) = 0,
\end{align}
where $u$ denotes the flow velocity and $\lambda$ the viscosity of the fluid.
From a function approximation point of view, solutions corresponding to values of $\lambda$ very close to zero (e.g., $\lambda \approx 10^{-3}$) are expected to share common characteristics, such as steep gradients after some time $t$.
This fact justifies defining a task distribution comprised of PDEs of the form of Eq.~\eqref{eq:burgers} with, indicatively, $\lambda < 2\times 10^{-3}$ and with the same ICs/BCs.
A similar explanation can be given for smoother solutions corresponding to values of $\lambda > 10^{-1}$, for example.
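Concretely, and anticipating the meta-training ranges listed in Table~\ref{tab:burgers:params}, sampling a task in either regime amounts to drawing a single viscosity value; a minimal sketch (names are illustrative) is:
\begin{verbatim}
import numpy as np

REGIMES = {"r1": (1e-3, 2e-3), "r2": (1e-1, 1.0)}  # meta-training viscosity ranges

def sample_viscosity(regime, rng):
    lam_min, lam_max = REGIMES[regime]
    return rng.uniform(lam_min, lam_max)  # lambda ~ U[lam_min, lam_max]

lam = sample_viscosity("r1", np.random.default_rng(0))
\end{verbatim}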
\paragraph*{Meta-training.}
For meta-training, we use two regimes for $\lambda$ as shown in Table~\ref{tab:burgers:params}.
The design options are the same as in Section~\ref{sec:examples:ad} except for the fact that only SGD is used in this example as an inner optimizer candidate.
The number of datapoints used for evaluating the outer objective of Eq.~\eqref{eq:lossml:outer:loss} is shown in Table~\ref{tab:burgers:outer:data}, whereas the corresponding number for the inner objective of Eq.~\eqref{eq:lossml:inner:loss} is the same as the single-data case in Table~\ref{tab:burgers:outer:data}.
Furthermore, the PINN architecture consists of $3$ hidden layers with $20$ neurons each and $\tanh$ activation function.
In Fig.~\ref{fig:burgers:comp} we show the final loss functions (FFN and LAL parametrizations) as obtained with SGD as inner optimizer with single data with and without objective function weights meta-learning, and with double data.
The corresponding objective function weights trajectories are shown in Fig.~\ref{fig:burgers:params}.
The learned losses with single data have steeper derivatives because they lack the objective function weights, which are shown in Fig.~\ref{fig:burgers:params} to be greater than 1.
Furthermore, for single data the loss functions obtained for the two regimes are slightly different when objective function weights are not meta-learned but almost identical when they are.
This means that the meta-learning algorithm compensates for the difference in the two regimes by yielding the same learned loss with different balancing of the PDE, BCs, and ICs terms in the PINNs objective function.
On the other hand, when solution data is used in the outer objective (available via the analytical solution), the obtained loss functions are highly different; see also Fig.~\ref{fig:burgers:snaps:loss:ffn} for the loss function snapshots captured during meta-training for both regimes and for the single and double data cases with the FFN parametrization.
\paragraph*{Connection with the theory of Section~\ref{sec:meta:design:properties}.}
Finally, see Fig.~\ref{fig:burgers:trainloss:ffn} for the outer objective trajectories for both single and double data training with the FFN parametrization, as well as Fig.~\ref{fig:burgers:traintest} for the corresponding test performance on $5$ unseen tasks ($20$ test iterations each) recorded during meta-training.
It is shown in Fig.~\ref{fig:burgers:trainloss:ffn:dd:r1} that the outer objective drops significantly during training for double data and regime 1; this can be construed as \textit{meta-training error}.
Furthermore, for the same case it is shown in Fig.~\ref{fig:burgers:traintest:dd} that rl2 drops significantly during training; this can be construed as \textit{meta-validation error}.
However, the learned loss obtained during this training with no regularization does not satisfy the optimal stationarity condition~of Section~\ref{sec:meta:design:properties}; see Figs.~\ref{fig:burgers:comp}, \ref{fig:burgers:snaps:loss:ffn} and notice that the stationary point at 0 is not a global minimum.
In line with our theoretical results of Section~\ref{sec:meta:design:properties}, these learned losses have also been found in our experiments to lead to divergence if used for full meta-testing ($20{,}000$ iterations), i.e., they do not generalize well although meta-training and meta-validation performance is satisfactory.
Finally, we also include in Fig.~\ref{fig:burgers:final:ffn:dd} the regularized loss functions obtained via meta-training with the theoretically-driven gradient penalty of Eq.~\eqref{eq:lossml:outer:loss:penalty}, which solves this issue with the FFN parametrization and double data meta-training.
\begin{table}[H]
\centering
\caption{Burgers equation: Task distribution values used in meta-training and OOD meta-testing for both task regimes r1 and r2.
}
\begin{tabular}{ccccc}
\toprule
&r1 $\lambda_{min}$& r1 $\lambda_{max}$& r2 $\lambda_{min}$& r2 $\lambda_{max}$\\
\midrule
meta-training&$10^{-3}$& $2\times 10^{-3}$& $10^{-1}$& $1$\\
OOD meta-testing&$10^{-3}$& $10^{-2}$& $10^{-2}$& $2$\\
\bottomrule
\end{tabular}
\label{tab:burgers:params}
\end{table}
\begin{table}[H]
\centering
\caption{Burgers equation: PDE, BCs, ICs, and solution data considered in the outer objective of Eq.~\eqref{eq:lossml:outer:loss} for single data (outer objective data same as inner objective) and for double data (solution data considered available in the outer objective) in meta-training ($\{N_{f, val}, N_{b, val}, N_{u_0, val}, N_{u, val}\}$), as well as for OOD meta-testing ($\{N_{f}, N_{b}, N_{u_0}, N_{u}\}$).
}
\begin{tabular}{c|cccc}
\toprule
& $N_{f(, val)}$ & $N_{b(, val)}$ & $N_{u_0(, val)}$ & $N_{u(, val)}$ \\
single data (SD) meta-training & $1{,}000$ & $200$ & $100$ & $0$ \\
double data (DD) meta-training & $0$ & $0$ & $0$ & $10{,}000$ \\
OOD meta-testing & $2{,}000$ & $200$ & $100$ & $0$ \\
\bottomrule
\end{tabular}
\label{tab:burgers:outer:data}
\end{table}
\begin{figure}
\caption{Burgers equation: Final learned losses (a, c) and corresponding first-order derivatives (b, d), with FFN (a, b) and LAL (c, d) parametrizations for task regimes $1$ and $2$ (r1 and r2).
Results obtained via meta-training with SGD as inner optimizer (with single data, with and without objective function weights meta-learning, and with double data; SD r1-r2, SD r1-r2 - ofw, DD r1-r2); comparisons with MSE are also included.
Regularization is not required for LAL (Proposition~\ref{prop:1}).
}
\label{fig:burgers:comp}
\end{figure}
\begin{figure}
\caption{Burgers equation: Learned objective function weights, pertaining to PDE, BCs, and ICs residuals, as a function of outer iteration in meta-training and task regimes (r1-r2); see Section~\ref{sec:meta:design:param:weights}.
}
\label{fig:burgers:params}
\end{figure}
\begin{figure}
\caption{Burgers equation: Learned loss snapshots captured during meta-training (distributed evenly in $10{,}000$ outer iterations) for both task regimes and for the single and double data cases with the FFN parametrization.
}
\label{fig:burgers:snaps:loss:ffn}
\end{figure}
\begin{figure}
\caption{Burgers equation: Outer objective values recorded during meta-training as well as corresponding moving averages ($500$ iterations window size); these can be construed as \textit{meta-training error}.
}
\label{fig:burgers:trainloss:ffn}
\end{figure}
\begin{figure}
\caption{Burgers equation: Meta-testing results (relative $\ell_2$ test error on $5$ unseen tasks after $20$ iterations) obtained using learned loss snapshots and performed during meta-training (every $500$ outer iterations); these can be construed as \textit{meta-validation error}.
}
\label{fig:burgers:traintest}
\end{figure}
\begin{figure}
\caption{Burgers equation: Final learned losses (a) and corresponding first-order derivatives (b), with FFN parametrization for task regime $1$ with double data meta-training.
Results obtained with and without regularization (reg) and with and without objective function weights meta-learning (ofw); comparisons with MSE are also included.
Whereas regularization is not required for LAL (Proposition~\ref{prop:1}), it is needed here for the FFN parametrization so that the optimal stationarity condition~is satisfied.
}
\label{fig:burgers:final:ffn:dd}
\end{figure}
\paragraph*{Meta-testing.}
For evaluating the performance of the captured learned loss snapshots, we train $5$ OOD tasks with SGD for $20{,}000$ iterations with learning rate $0.01$ (same as in meta-training), using learned and standard loss functions, and record the rl2 error on $10{,}000$ exact solution datapoints.
Regarding learned losses, we employ the ones obtained with single data and with objective function weights meta-learning, as they performed better in our experiments.
The OOD test tasks are drawn based on the parameter limits shown in Table~\ref{tab:burgers:params}, and the random architectures used are drawn in the same way as in Section~\ref{sec:examples:rd}.
The OAL parameters during training are shown in Fig.~\ref{fig:burgers:test:oal:params} and the minimum rl2 error results are shown in Fig.~\ref{fig:burgers:test:min:stats}.
For both regimes, the final learned loss LAL 5 differs from both OAL 1 and OAL 2, which converge to a robustness parameter value close to $3$ for all tasks.
Finally, the loss functions learned with both parametrizations achieve an average minimum rl2 error during $20{,}000$ iterations that is significantly smaller than most considered losses (even the online adaptive ones), although they have been meta-trained with only $20$ iterations with a different PINN architecture and on a different task distribution.
\begin{figure}
\caption{Burgers equation: Robustness parameter trajectories for online adaptive loss functions OAL 1 and OAL 2 and for both test task regimes.
Loss-specific learning rates $0.01$ (OAL 1) and $0.1$ (OAL 2); see Section~\ref{sec:meta:design:param:LAL}.
}
\label{fig:burgers:test:oal:params}
\end{figure}
\begin{figure}
\caption{Burgers equation: Minimum relative test $\ell_2$ error (rl2) averaged over $5$ OOD tasks during meta-testing with SGD for $20{,}000$ iterations.
}
\label{fig:burgers:test:min:stats}
\end{figure}
\section{Summary}\label{sec:summary}
We have presented a meta-learning method for offline discovery of physics-informed neural network (PINN) loss functions, addressing diverse task distributions defined based on parametrized partial differential equations (PDEs) that are solved with PINNs.
For employing our technique given a PDE task distribution definition, we parametrize and optimize the learned loss via meta-training by following an alternating optimization procedure until a stopping criterion is met.
Specifically, in each loss function update step, (a) PDE tasks are drawn from the task distribution;
(b) in the inner optimization, they are solved with PINNs for a few iterations using the current learned loss and the gradient of the learned loss parameters is tracked throughout optimization;
and (c) in the outer optimization, the learned loss parameters are updated based on MSE of the final (semi-optimized) PINN parameters.
Furthermore, we have presented and proven two new theorems, involving a condition, namely the optimal stationarity condition, that the learned loss should satisfy for successful training. If satisfied, this condition assures that under certain assumptions, a global minimum is reached by using gradient descent with the learned loss. In addition, we have proven that under a mean squared error (MSE) relation condition combined with the optimal stationarity condition, any stationary point obtained based on the learned loss function is a global minimum of the MSE-based loss as well. Driven by these theoretical results, we have also proposed a novel regularization method for imposing the above desirable conditions. Finally, we have proven that one of the two parametrizations used in this paper for the learned loss, namely the learned adaptive loss (LAL) discussed in Section~\ref{sec:meta:design:param:LAL} and proposed in \cite{barron2019general}, satisfies automatically these two conditions without any regularization.
In the computational examples, the learned losses have been employed at test time for addressing regression and PDE task distributions. Our results have demonstrated that significant performance improvement can be achieved by using a shared-among-tasks offline-learned loss function even for out-of-distribution meta-testing; i.e., solving test tasks not belonging to the task distribution used in meta-training and utilizing PINN architectures that are different than the PINN architecture used in meta-training.
Note that a learned loss that performs equally well for architectures not used in meta-training is desirable if, for example, we seek to optimize the PINN architecture with a fixed learned loss.
Moreover, improved performance has been demonstrated even compared with adapting the loss function online as proposed in \cite{barron2019general}.
In this regard, we have considered the problems of discontinuous function approximation with varying frequencies, of the advection equation with varying initial conditions and discontinuous solutions, of the reaction-diffusion equation with varying source term, and of the Burgers equation with varying viscosity in two parametric regimes.
We have demonstrated the importance of different loss function parametrizations, as well as of other meta-learning algorithm design options discussed in Section~\ref{sec:meta:design}.
As of future work, an interesting direction pertains to improving the performance of the technique for addressing inner optimizers with memory, such as RMSProp and Adam.
Specifically, although the LAL learned losses coupled with Adam performed better than MSE in the function approximation case of Section~\ref{sec:examples:regression}, neither feed-forward NN (FFN) nor LAL learned losses exhibited satisfactory generalization capabilities in the PINN examples of Sections~\ref{sec:examples:ad}-\ref{sec:examples:burgers}.
One reason for this result is the fact that Adam depends on the whole history of gradients during optimization through an exponentially decaying average that only discards far in the past gradients.
To illustrate this, we have shown in Section~\ref{sec:examples:ad} that higher $\beta$ values in Adam corresponding to higher dependency on the far past yield deteriorating performance of the learned losses.
\section{Acknowledgment}
This work was supported by OSD/AFOSR MURI grant FA9550-20-1-0358.
\counterwithin{figure}{section}
\counterwithin{table}{section}
\counterwithin{equation}{section}
\begin{appendices}
\section{Loss functions with multi-dimensional inputs}\label{app:meta:design:multi}
For each $(t, x)$ in Eq.~\eqref{eq:param:pde}, $u(t, x)$, $\pazocal{F}_\lambda[u](t,x)$, and $\pazocal{B}_\lambda[u](t,x)$ belong to $\mathbb{R}^{D_u}$.
The same holds for the outputs of the NN approximators $\hat{u}_{\theta}$, $\hat{f}_{\theta, \lambda}$ and $\hat{b}_{\theta, \lambda}$.
As a result, the loss function $\ell_{\eta}$, with $\ell_{\eta}: \mathbb{R}^{D_u} \times \mathbb{R}^{D_u} \to \mathbb{R}_{\geq 0}$, outputting the distance between the predicted $\pazocal{F}_\lambda[\hat{u}_{\theta}](t,x)$ and $0$, between the predicted $\pazocal{B}_\lambda[\hat{u}_{\theta}](t,x)$ and $0$, and between the predicted $\hat{u}_{\theta}(0,x)$ and $u_{0, \lambda}(x)$ in Eq.~\eqref{eq:pinns:loss:2}, takes as input two $D_u$-dimensional vectors and outputs a scalar loss value.
Considering for simplicity, as in Section~\ref{sec:prelim:loss}, each part of the objective function of Eqs.~\eqref{eq:pinns:loss:1}-\eqref{eq:pinns:loss:3} separately, the squared $\ell_2$-norm loss between $\hat{u}_{\theta}(t, x)$ and $u(t, x)$ for each datapoint $((t, x), u(t, x))$ is given as
\begin{equation}\label{eq:l2:loss}
||\hat{u}_{\theta}(t, x) - u(t, x)||_2^2 = \sum_{j=1}^{D_u}(\hat{u}_{\theta, j}(t, x)-u_j(t, x))^2.
\end{equation}
In Eq.~\eqref{eq:l2:loss}, the loss for each $j \in \{1,\dots,D_u\}$ depends only on the discrepancy between $\hat{u}_{\theta, j}(t, x)$ and $u_j(t, x)$, and the total loss $||\hat{u}_{\theta}(t, x) - u(t, x)||_2^2$ is given as the sum of the one-dimensional losses (i.e., of the losses pertaining to one-dimensional inputs).
In this regard, see Table~\ref{tab:losses} for candidates other than the squared error for the one-dimensional loss of Eq.~\eqref{eq:l2:loss}.
As a first attempt towards constructing a loss function with multi-dimensional inputs, one can generalize Eq.~\eqref{eq:l2:loss} by considering a parametrized function $\hat{\ell}_{\eta}$ instead of the squared error in the summation.
This gives rise to a parametrized loss function given as
\begin{equation}\label{eq:g:sum:loss}
\ell_{\eta}(\hat{u}_{\theta}(t, x), u(t, x)) = \sum_{j=1}^{D_u}a_j\hat{\ell}_{\eta}(\hat{u}_{\theta, j}(t, x)-u_j(t, x)),
\end{equation}
where, in addition, each directional loss is multiplied by a weight $a_j$ for making the loss function $\ell_{\eta}$ more flexible/expressive.
The weighted sum of Eq.~\eqref{eq:g:sum:loss} in conjunction with the meta-learning Algorithm~\ref{algo:general} can lead to a loss function that normalizes each directional loss optimally.
Instead of using the same $\hat{\ell}_{\eta}$ for each dimension as in Eq.~\eqref{eq:g:sum:loss}, more expressive loss functions can be constructed by considering a different loss function $\hat{\ell}_{\eta, j}$ for each dimension $j$, i.e.,
\begin{equation}\label{eq:g:sum:loss:2}
\ell_{\eta}(\hat{u}_{\theta}(t, x), u(t, x)) = \sum_{j=1}^{D_u}\hat{\ell}_{\eta, j}(\hat{u}_{\theta, j}(t, x)-u_j(t, x)).
\end{equation}
The latter can be extended further by parametrizing $\hat{\ell}_{\eta}$ in such a way that both $\hat{u}_{\theta}(t, x)$ and $u(t, x)$ are given as inputs, rather than only the discrepancy $\hat{u}_{\theta}(t, x) - u(t, x)$.
For obtaining even more expressive loss functions $\ell_{\eta}$, one can replace the summation formulas of Eqs.~\eqref{eq:g:sum:loss}-\eqref{eq:g:sum:loss:2} by a more general function $\hat{\ell}_{\eta}$ that takes as inputs the vectors $\hat{u}_{\theta}(t, x)$, $u(t, x)$ instead of the corresponding values in each dimension.
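To make one of these options concrete, the following PyTorch sketch implements the weighted sum of Eq.~\eqref{eq:g:sum:loss} with a shared one-dimensional learned loss applied per output dimension; the module name, the assumption that \texttt{ell\_hat} acts elementwise, and the trainability of the weights $a_j$ are illustrative choices rather than a description of our actual implementation.
\begin{verbatim}
import torch

class WeightedSumLoss(torch.nn.Module):
    def __init__(self, ell_hat, D_u):
        super().__init__()
        self.ell_hat = ell_hat                        # 1-D learned loss, elementwise
        self.a = torch.nn.Parameter(torch.ones(D_u))  # per-dimension weights a_j
    def forward(self, u_hat, u):
        d = u_hat - u                                 # discrepancy, shape (..., D_u)
        per_dim = self.ell_hat(d)                     # one-dimensional loss per entry
        return (self.a * per_dim).sum(dim=-1).mean()  # weighted sum, batch-averaged
\end{verbatim}
The per-dimension variant of Eq.~\eqref{eq:g:sum:loss:2} is obtained analogously by keeping a separate \texttt{ell\_hat} for each output dimension.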
See Fig.~\ref{fig:gNN:higher} for a schematic illustration of the aforementioned indicative options.
\begin{figure}
\caption{Various indicative ways to represent the loss function $\ell_{\eta}$ with multi-dimensional inputs.
}
\label{fig:gNN:higher}
\end{figure}
\begin{center}
\begin{table}[H]
\caption{Standard one-dimensional loss functions (i.e., pertaining to one-dimensional inputs $d=\hat{u}_{\theta, j}(t, x)-u_j(t, x)$ for each $j\in\{1,\dots,D_u\}$), accompanied by the corresponding first-order derivatives.
Loss function general forms are adopted by \cite{barron2019general}; see more information in Section~\ref{sec:meta:design:param}.
}
\centering
\begin{tabular}{ | c | c | c | c |}
\hline
\textbf{Name} & \textbf{General form} &
\textbf{Loss sketch} & \textbf{Derivative sketch} \\
\hline
&&&\\[-1.67em]
Squared error & $\frac{1}{2}(d/c)^2$ &
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/MSE}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/MSE_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
Absolute error & $|d|$
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/L1}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/L1_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
Huber &
\makecell{
\shortstack{$0.5d^2 / c, \ \text{if } |d| < c$ \\ $\text{else} \ |d| - 0.5c$}
}
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Huber}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Huber_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
pseudo-Huber & $\sqrt{(d/c)^2 + 1} - 1$
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/pHuber}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/pHuber_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
Cauchy & $\log\left(\frac{1}{2}(d/c)^2 + 1\right)$
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Cauchy}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Cauchy_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
Geman-McClure & $\frac{2(d/c)^2}{(d/c)^2 + 4}$
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/GMC}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/GMC_der}
\end{minipage} \\
\hline
&&&\\[-1.67em]
Welsch & $1 - \exp\left(-\frac{1}{2}(d/c)^2\right)$
& \begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Welsch}
\end{minipage}&
\begin{minipage}[]{.1\textwidth}
\includegraphics[width=\linewidth, height=10mm]{./figs/standard_losses/Welsch_der}
\end{minipage} \\
\hline
\end{tabular}
\label{tab:losses}
\end{table}
\end{center}
\section{Proofs}\label{app:proofs}
\subsection{Proof of Theorem \ref{thm:1}}
\begin{proof}
We now prove the first statement of this theorem.
Let $\theta$ be an arbitrary stationary point.
Then, we have that
\begin{align}
0=\frac{\partial \pazocal{L}(\theta)}{\partial \theta} &=\frac{1}{N} \sum_{i=1}^N \left(\frac{\partial \ell(q,u_{i})}{\partial q} \Big\vert_{q=\hat{u}_{\theta}(x_{i})} \right) \frac{\partial \hat{u}_{\theta}(x_{i})}{\partial \theta}
\\ & =\frac{1}{N} \sum_{i=1}^N \sum_{j=1}^{D_u} \left(\frac{\partial \ell(q,u_{i})}{\partial q_j} \Big\vert_{q=\hat{u}_{\theta}(x_{i})} \right) \frac{\partial \hat{u}_{\theta}(x_{i})_{j}}{\partial \theta} .
\end{align}
By rearranging for the gradient,
\begin{equation} \label{eq:new:1}
0=N\nabla\pazocal{L}(\theta)=\sum_{i=1}^N \sum_{j=1}^{D_u} \left(\frac{\partial \hat{u}_{\theta}(x_{i})_{j}}{\partial \theta}\right)^\top \left(\frac{\partial \ell(q,u_{i})}{\partial q_j} \Big\vert_{q=\hat{u}_{\theta}(x_{i})} \right)^\top.
\end{equation}
By rearranging the double sum into the matrix-vector product,
\begin{equation}
0 =\left(\frac{\partial \hat{u}_X (\theta)}{\partial \theta}\right)^\top v,
\end{equation}
where
\begin{equation}
v=\vect\left(\left(\frac{\partial \ell(q,u_{1})}{\partial q} \Big\vert_{q=\hat{u}_{\theta}(x_{1})} \right)^\top,\dots,\left(\frac{\partial \ell(q,u_{N})}{\partial q} \Big\vert_{q=\hat{u}_{\theta}(x_{N})} \right)^\top \right)\in \mathbb{R}^{N D_u}.
\end{equation}
Therefore, if $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$, we have that $v=0$.
By the definition of $v$, the equality $v=0$ implies that
\begin{equation}
\frac{\partial \ell(q,u_{i})}{\partial q} \Big\vert_{q=\hat{u}_{\theta}(x_{i})}=0,
\end{equation}
for all $i =1,\dots, N$.
Since the loss $\ell$ satisfies the optimal stationarity condition, this implies that for all $i =1,\dots, N$,
\begin{equation}
\ell(\hat{u}_{\theta}(x_{i}),u_{i})\le \ell(q',u_i), \qquad \forall q' \in \mathbb{R}^{D_u}.
\end{equation}
Since the objective function $\pazocal{L}$ is the sum of these terms,
\begin{equation}
\pazocal{L}(\theta)=\frac{1}{N} \sum_{i=1}^N \ell(\hat{u}_{\theta}(x_{i}),u_{i})\le\frac{1}{N} \sum_{i=1}^N \ell(q'_{i},u_i), \qquad \forall q'_{1},\dots, q'_{N}\in \mathbb{R}^{D_u}.
\end{equation}
This implies that
\begin{equation}
\pazocal{L}(\theta)\le \inf_{ q'_{1},\dots, q'_{N}\in \mathbb{R}^{D_u}}\frac{1}{N} \sum_{i=1}^N \ell(q'_{i},u_i) \le\pazocal{L}(\theta'), \qquad \forall \theta' \in \mathbb{R}^{D_{\theta}},
\end{equation}
where $D_{\theta}$ is the dimension of $\theta$.
This shows that an arbitrary stationary point $\theta$ is a global minimum of $\pazocal{L}$ if $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$. This proves the first statement of this theorem.
We now proceed to prove the second statement of this theorem. Using Eq.~\eqref{eq:new:1}, for any $\theta$ such that $(\frac{\partial \ell(q,u_{i})}{\partial q_j} \vert_{q=\hat{u}_{\theta}(x_{i})})=0$ for all $i \in \{1,\dots,N\}$,
we have $\nabla\pazocal{L}(\theta)=0$. In other words, every $\theta$ such that $(\frac{\partial \ell(q,u_{i})}{\partial q_j} \vert_{q=\hat{u}_{\theta}(x_{i})})=0$ for all $i \in \{1,\dots,N\}$ is a stationary point of $\pazocal{L}$. Using the assumption of $\{\hat{u}_X (\theta)\in \mathbb{R}^{N D_u}:\theta\in \mathbb{R}^{D_{\theta}}\}=\mathbb{R}^{N D_u}$, if there exists a global minimum of $\pazocal{L}$ (it is possible that a global minimum does not exist), then it achieves the global minimum values of $\ell(\cdot,u_i)$ for all $i \in \{1,\dots,N\}$.
Thus, all we need to show is the existence of a stationary point of $\pazocal{L}$ that does not achieve the global minimum values of $\ell(\cdot,u_i)$ for all $i \in \{1,\dots,N\}$. Using the assumption of $\{\hat{u}_X (\theta)\in \mathbb{R}^{N D_u}:\theta\in \mathbb{R}^{D_{\theta}}\}=\mathbb{R}^{N D_u}$ and the assumption of having a point $q \in \mathbb{R}^{D_u}$ such that $\nabla_{q} \ell(q,u)=0$ and $\ell(q,u) > \ell(q',u)$ for some $q' \in \mathbb{R}^{D_u}$, there exists a $\bar \theta$ such that for all $i \in \{1,\dots,N\},$
$$
\frac{\partial \ell(q,u_{i})}{\partial q_j} \Big\vert_{q=\hat{u}_{\bar \theta}(x_{i})}=0 \text{ and } \ell(\hat{u}_{\bar \theta}(x_{i}),u_{i}) > \ell(q'_{i},u_{i}) \text{ for some } q'_{i} \in \mathbb{R}^{D_u}.
$$
Here, $\bar \theta$ is a stationary point of $\pazocal{L}$ since $\frac{\partial \ell(q,u_{i})}{\partial q_j} \vert_{q=\hat{u}_{\bar \theta}(x_{i})}=0$ for all $i \in \{1,\dots,N\}$. Moreover, $\bar \theta$ is not a global minimum of $\pazocal{L}$ since $\ell(\hat{u}_{\bar \theta}(x_{i}),u_{i}) > \ell(q'_{i},u_{i}) \text{ for some } q'_{i} \in \mathbb{R}^{D_u}$. This proves the second statement of this theorem.
\end{proof}
\subsection{Proof of Theorem \ref{thm:2}}
To prove this theorem, we utilize the following known lemma:
\begin{lemma} \label{lemma:known_1}
For any differentiable function $\varphi: \dom(\varphi) \rightarrow \mathbb{R}$ with an open convex domain $\dom(\varphi) \subseteq \mathbb{R}^{D_\varphi}$, if $\|\nabla \varphi(z') - \nabla \varphi(z)\| \le L_{ \varphi} \|z'-z\|$ for all $z,z' \in \dom(\varphi)$, then
\begin{align}
\varphi(z') \le \varphi(z) + \nabla \varphi(z)^\top (z'-z) + \frac{L_{ \varphi}}{2} \|z'-z\|^2 \quad \text{for all $z,z' \in \dom(\varphi) $}.
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:known_1}]
Fix $z,z'\in \dom(\varphi) \subseteq \mathbb{R}^{D_{\varphi}}$. Since $\dom(\varphi)$ is a convex set, $z+r(z'-z)\in\dom(\varphi)$ for all $r \in [0, 1]$.
Since $\dom(\varphi) $ is open, there exists $\zeta >0$ such that $z+(1+\zeta')(z'-z)\in\dom(\varphi)$ and $z+(0-\zeta')(z'-z)\in\dom(\varphi)$ for all $\zeta'\le \zeta$. Fix $\zeta>0$ to be such a number. Combining these, $z+r(z'-z)\in\dom(\varphi)$ for all $r \in [0-\zeta, 1+\zeta]$.
Accordingly, we can define a function $\bar \varphi: [0-\zeta, 1+\zeta] \rightarrow \mathbb{R}$ by $\bar \varphi(r)=\varphi(z+r(z'-z))$. Then, $\bar \varphi(1)=\varphi(z')$, $\bar \varphi(0)=\varphi(z)$, and $\nabla \bar \varphi(r)=\nabla\varphi(z+r(z'-z))^\top (z'-z)$ for $r \in [0, 1] \subset (0-\zeta,1+\zeta)$. Since $\|\nabla \varphi(z') - \nabla \varphi(z)\| \le L_{ \varphi} \|z'-z\|$,
\begin{align}
\|\nabla\bar \varphi(r')-\nabla\bar \varphi(r) \| &=\|[\nabla\varphi(z+r'(z'-z)) -\nabla\varphi(z+r(z'-z))]^\top (z'-z) \|
\\ &\le \|z'-z\|\|\nabla\varphi(z+r'(z'-z)) -\nabla\varphi(z+r(z'-z)) \| \\ & \le L_{ \varphi}\|z'-z\|\|(r'-r)(z'-z) \|
\\ & \le L_{ \varphi}\|z'-z\|^{2}\|r'-r \|.
\end{align}
Thus, $\nabla \bar \varphi:[0, 1]\rightarrow \mathbb{R}$ is Lipschitz continuous with the Lipschitz constant $L_{ \varphi}\|z'-z\|^{2}$, and hence $\nabla\bar \varphi$ is continuous.
By using the fundamental theorem of calculus with the continuous function $\nabla\bar \varphi:[0, 1] \rightarrow \mathbb{R}$,
\begin{align}
\varphi(z')&=\varphi(z)+ \int_0^1 \nabla\varphi(z+r(z'-z))^\top (z'-z)dr
\\ &=\varphi(z)+\nabla\varphi(z)^\top (z'-z)+ \int_0^1 [\nabla\varphi(z+r(z'-z))-\nabla\varphi(z)]^\top (z'-z)dr
\\ & \le \varphi(z)+\nabla\varphi(z)^\top (z'-z)+ \int_0^1 \|\nabla\varphi(z+r(z'-z))-\nabla\varphi(z)\| \|z'-z \|dr
\\ & \le \varphi(z)+\nabla\varphi(z)^\top (z'-z)+ \int_0^1 r L_{ \varphi}\|z'-z\|^{2}dr
\\ & = \varphi(z)+\nabla\varphi(z)^\top (z'-z)+\frac{L_{ \varphi}}{2}\|z'-z\|^{2}.
\end{align}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:2}]
The function $\pazocal{L}$ is differentiable since $\ell(\cdot,u_{i})$ is differentiable, $\theta\mapsto \hat{u}_{\theta}(x_{i})$ is differentiable, and a composition of differentiable functions is differentiable. We will first show that, in both cases (i) and (ii) for the learning rates, we have $\lim_{r \rightarrow \infty}\nabla\pazocal{L}(\theta_{}^{(r)}) = 0$. If $\nabla\pazocal{L}(\theta_{}^{(r)})=0$ at any $r\ge 0$, then
Assumption \ref{assump:5} ($\|\bar g^{(r)}\|_2^2 \le \bar c \|\nabla\pazocal{L}(\theta_{}^{(r)})\|_2^2$) implies $\bar g^{(r)}=0$, which implies
\begin{equation}
\theta_{}^{(r+1)}= \theta^{(r)}_{} \text{ and } \nabla\pazocal{L}(\theta_{}^{(r+1)})=\nabla\pazocal{L}(\theta_{}^{(r)})=0.
\end{equation}
This means that if $\nabla\pazocal{L}(\theta_{}^{(r)})=0$ at any $r\ge 0$, we have that $\bar g^{(r)}=0$ and $\nabla\pazocal{L}(\theta_{}^{(r')})=0$ for all $r '\ge r$ and hence
\begin{align}
\lim_{r \rightarrow \infty}\nabla\pazocal{L}(\theta_{}^{(r)}) = 0,
\end{align}
as desired. Therefore, we now focus on the remaining scenario where $\nabla\pazocal{L}(\theta_{}^{(r)})\neq 0$ for all $r \ge0$.
By using Lemma \ref{lemma:known_1},
\begin{equation}
\pazocal{L}(\theta_{}^{(r+1)})\le \pazocal{L}(\theta_{}^{(r)})-\epsilon^{(r)} \nabla\pazocal{L}(\theta_{}^{(r)}) ^\top\bar g^{(r)} + \frac{L(\epsilon^{(r)})^2 }{2} \|\bar g^{(r)} \|^2.
\end{equation}
By rearranging and using Assumption \ref{assump:5},
\begin{align}
\pazocal{L}(\theta_{}^{(r)})-\pazocal{L}(\theta_{}^{(r+1)}) &\ge \epsilon^{(r)} \nabla\pazocal{L}(\theta_{}^{(r)}) ^\top\bar g^{(r)} - \frac{L(\epsilon^{(r)})^2 }{2} \|\bar g^{(r)} \|^2
\\ & \ge\epsilon^{(r)} \textit{\underbar c} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2 -\frac{L(\epsilon^{(r)})^2 }{2}\bar c \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2.
\end{align}
By simplifying the right-hand-side,
\begin{align} \label{eq:9_2}
\pazocal{L}(\theta_{}^{(r)})-\pazocal{L}(\theta_{}^{(r+1)}) &\ge\epsilon^{(r)} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2 (\textit{\underbar c}-\frac{L\epsilon^{(r)} }{2}\bar c).
\end{align}
Let us now focus on case (i). Then, using $ \epsilon^{(r)} \le \frac{\textit{\underbar c}\ (2-\zeta)}{L\bar c}$,
\begin{equation}
\frac{L\epsilon^{(r)}}{2}\bar c\le\frac{L\textit{\underbar c}\ (2-\zeta)}{2L\bar c}\bar c =\textit{\underbar c}-\frac{\zeta}{2}\textit{\underbar c} \text{ .}
\end{equation}
Using this inequality and using $\zeta\le \epsilon^{(r)}$ in Eq.~\eqref{eq:9_2},
\begin{align} \label{eq:10_2}
\pazocal{L}(\theta_{}^{(r)})-\pazocal{L}(\theta_{}^{(r+1)}) &\ge\frac{\textit{\underbar c}\zeta^{2}}{2}
\|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2.
\end{align}
Since $\nabla\pazocal{L}(\theta_{}^{(r)})\neq 0$ for any $r\ge 0$ (see above) and $\zeta >0 $, this means that the sequence $(\pazocal{L}(\theta_{}^{(r)}))_{r}$ is monotonically decreasing. Since $\pazocal{L}(\theta) \ge 0$ for any $\theta$ in its domain, this implies that the sequence $(\pazocal{L}(\theta_{}^{(r)}))_{r}$ converges. Therefore, $\pazocal{L}(\theta_{}^{(r)})-\pazocal{L}(\theta_{}^{(r+1)}) \rightarrow 0$ as $r \rightarrow \infty$. Using Eq.~\eqref{eq:10_2}, this implies that
\begin{equation}
\lim_{r \rightarrow \infty}\nabla\pazocal{L}(\theta_{}^{(r)}) = 0,
\end{equation}
which proves the desired result for the case (i).
We now focus on the case (ii). Then, we still have Eq.~\eqref{eq:9_2}. Since $\lim_{r \rightarrow \infty}\epsilon^{(r)} =0$ in Eq.~\eqref{eq:9_2}, the first order term in $\epsilon^{(r)}$ dominates after sufficiently large $r$: i.e., there exists $\bar r \ge0$ such that for any $r\ge \bar r$,
\begin{align} \label{eq:11_2}
\pazocal{L}(\theta_{}^{(r)})-\pazocal{L}(\theta_{}^{(r+1)}) \ge c \epsilon^{(r)} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2
\end{align}
for some constant $c>0$. Since $\nabla\pazocal{L}(\theta_{}^{(r)})\neq 0$ for any $r\ge 0$ (see above) and $c \epsilon^{(r)}>0$, this means that the sequence $(\pazocal{L}(\theta_{}^{(r)}))_{r}$ is monotonically decreasing. Since $\pazocal{L}(\theta) \ge 0$ for any $\theta$ in its domain, this implies that the sequence $(\pazocal{L}(\theta_{}^{(r)}))_{r}$ converges to a finite value.
Thus, by summing both sides of Eq.~\eqref{eq:11_2} over all $r \ge \bar r$,
\begin{align}
\infty > \pazocal{L}(\theta_{}^{(\bar r)})-\lim_{r \rightarrow \infty}\pazocal{L}(\theta_{}^{(r)}) \ge c \sum _{r=\bar r}^\infty \epsilon^{(r)} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2.
\end{align}
Since $\sum _{r=0}^\infty \epsilon^{(r)} = \infty$, this implies that $\liminf _{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|=0$. We now show that $ \limsup_{r\to \infty }\|\nabla\pazocal{L}(\allowbreak \theta^{(r)})\|=0$ by contradiction. Suppose that $\limsup_{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\| > 0$. Then, there exists $\delta>0$ such that $\limsup_{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|\ge \delta$. Since $\liminf _{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|=0$ and $\limsup_{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|\ge \delta$, let $\{\rho_j\}_{j}$ and $\{\rho'_j\}_j$ be sequences of indices such that $\rho_j<\rho'_j<\rho_{j+1}$, $\|\nabla\pazocal{L}(\theta^{(r)})\|>\frac{\delta}{3}$ for $\rho_j \le r < \rho_j'$, and $\|\nabla\pazocal{L}(\theta^{(r)})\|\le \frac{\delta}{3}$ for $\rho_j '\le r < \rho_{j+1}$. Since $\sum _{r=\bar r}^\infty \epsilon^{(r)} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2< \infty$, let $\bar j$ be sufficiently large such that $\sum _{r=\rho_{\bar j}}^\infty \epsilon^{(r)} \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2<\frac{\delta^2}{9L\sqrt{\bar c}}$. Then, for any $j \ge \bar j$ and any $\rho$ such that $\rho_j \le \rho \le \rho'_j -1$, we have that
\begin{align}
\|\nabla\pazocal{L}(\theta_{}^{(\rho)})\|-\|\nabla\pazocal{L}(\theta_{}^{(\rho_j')})\|&\le\|\nabla\pazocal{L}(\theta_{}^{(\rho_j')})-\nabla\pazocal{L}(\theta_{}^{(\rho)})\|
\\ &=\left\|\sum_{r=\rho}^{\rho'_j-1}\nabla\pazocal{L}(\theta_{}^{(r+1)})-\nabla\pazocal{L}(\theta_{}^{(r)})\right\|
\\ & \le \sum_{r=\rho}^{\rho'_j-1}\left\|\nabla\pazocal{L}(\theta_{}^{(r+1)})-\nabla\pazocal{L}(\theta_{}^{(r)})\right\| \\ & \le L \sum_{r=\rho}^{\rho'_j-1}\left\|\theta_{}^{(r+1)}-\theta_{}^{(r)}\right\|
\\ & \le L\sqrt{\bar c} \sum_{r=\rho}^{\rho'_j-1} \epsilon^{(r)} \left\|\nabla\pazocal{L}(\theta_{}^{(r)})\right\|,
\end{align}
where the first and third lines use the triangle inequality (and symmetry), the fourth line uses the assumption that $\|\nabla\pazocal{L}(\theta)-\nabla\pazocal{L}(\theta')\|\le L \|\theta-\theta'\|$, and the last line follows from the definition $\theta_{}^{(r+1)}-\theta_{}^{(r)}=-\epsilon^{(r)} \bar g^{(r)}$ and the assumption that $\|\bar g^{(r)}\|^2 \le \bar c \|\nabla\pazocal{L}(\theta_{}^{(r)})\|^2
$. Then, by using the definition of the sequences of indices,
\begin{align}
\|\nabla\pazocal{L}(\theta_{}^{(\rho)})\|-\|\nabla\pazocal{L}(\theta_{}^{(\rho_j')})\|&\le \frac{3L\sqrt{\bar c}}{\delta} \sum_{r=\rho}^{\rho'_j-1} \epsilon^{(r)} \left\|\nabla\pazocal{L}(\theta_{}^{(r)})\right\|^2 \le \frac{\delta}{3}.
\end{align}
Here, since $\|\nabla\pazocal{L}(\theta_{}^{(\rho_j')})\|\le \frac{\delta}{3}$, by rearranging the inequality, we have that for any $\rho\ge \rho_{\bar j}$,
\begin{align}
\|\nabla\pazocal{L}(\theta_{}^{(\rho)})\|\le\frac{2\delta}{3}.
\end{align}
This contradicts the inequality $\limsup_{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|\ge \delta$. Thus, we have
\begin{align}
\limsup_{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\| = \liminf _{r\to \infty }\|\nabla\pazocal{L}(\theta^{(r)})\|= 0.
\end{align}
This implies that
\begin{equation}
\lim_{r \rightarrow \infty}\nabla\pazocal{L}(\theta_{}^{(r)}) = 0,
\end{equation}
which proves the desired result for the case (ii).
Therefore, in both cases (i) and (ii) for the learning rates, we have $\lim_{r \rightarrow \infty}\nabla\pazocal{L}(\theta^{(r)}) = 0$. From Theorem \ref{thm:1}, this implies that an arbitrary limit point $\theta$ of the sequence $\{\theta^{(r)}\}_{r=0}^{\infty}$ is a global minimum of $\pazocal{L}$ if $\rank(\frac{\partial \hat{u}_X (\theta)}{\partial \theta})=ND_u$.
\end{proof}
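As an illustrative numerical companion to this argument (not the paper's implementation), the following minimal Python sketch runs the update $\theta_{}^{(r+1)}=\theta_{}^{(r)}-\epsilon^{(r)}\bar g^{(r)}$ with exact gradients (so that $\textit{\underbar c}=\bar c=1$ in Assumption \ref{assump:5}) and a constant learning rate consistent with case (i) on a toy least-squares objective; the problem sizes and seed are arbitrary.
\begin{verbatim}
# Toy sketch of case (i): constant learning rate 1/L with exact gradients.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)

def grad(theta):                 # gradient of (1/2N) * ||A theta - b||^2
    return A.T @ (A @ theta - b) / len(b)

L_smooth = float(np.linalg.eigvalsh(A.T @ A).max()) / len(b)
eps = 1.0 / L_smooth             # within [zeta, (2 - zeta)/L] for small zeta
theta = np.zeros(10)
for _ in range(2000):
    theta = theta - eps * grad(theta)
print(np.linalg.norm(grad(theta)))   # gradient norm is driven towards 0
\end{verbatim}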
\subsection{Proof of Corollary \ref{corollary:1}}
\begin{proof}
By following the proof of the first statement of Theorem \ref{thm:1} (see Eq. \eqref{eq:new:2}), we have that at any stationary point $\theta$ of $\pazocal{L}$,
\begin{equation} \label{eq:new:2}
\frac{\partial \ell(q,u_{i})}{\partial q} \Big\vert_{q=\hat{u}_{\theta}(x_{i})}=0 \;\text{ for all } i =1,\dots, N.
\end{equation}
By the assumption of $\nabla_{q} \ell(q,u)=0$ if and only if $q=u$, this implies that at any stationary point $\theta$ of $\pazocal{L}$,
$$
\hat{u}_{\theta}(x_{i})=u_{i} \;\text{ for all } i =1,\dots, N.
$$
This implies that at any stationary point $\theta$ of $\pazocal{L}$, we have that $\pazocal{L}_{\mathrm{MSE}}(\theta)=\frac{1}{N}\sum_{i=1}^N\|\hat u_\theta(x_{i})-u_{i}\|_2^2=0$, which is the global minimum value of $\pazocal{L}_{\mathrm{MSE}}$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:1}}
\begin{proof} Let $\ell(q,u)=\rho_{\alpha,c}(q-u) = \frac{|\alpha - 2|}{\alpha}((\frac{((q-u)/c)^2}{|\alpha - 2|} + 1)^{\alpha/2} - 1)$.
Let $c$ and $\alpha$ be real numbers such that $c>0$, $\alpha\neq0$, and $\alpha\neq 2$. Then,\begin{align}
\frac{\partial \ell(q,u)}{\partial q} &=\frac{|\alpha - 2|}{\alpha} \left(\frac{\alpha}{2} \left(\frac{((q-u)/c)^2}{|\alpha - 2|} + 1\right)^{(\alpha/2)-1} \right)\frac{1}{|\alpha - 2|} \frac{1}{c^2} 2(q-u)
\\ & =\left(\frac{1}{c^2}\left(\frac{((q-u)/c)^2}{|\alpha - 2|} + 1\right)^{(\alpha/2)-1}\right) (q-u). \label{eq:new:3}
\end{align}
Here, since any (real) power of a strictly positive real number is strictly positive, we have
\begin{align} \label{eq:new:4}
\frac{1}{c^2}\left(\frac{((q-u)/c)^2}{|\alpha - 2|} + 1\right)^{(\alpha/2)-1} >0.
\end{align}
By combining Eq. \eqref{eq:new:3} and Eq. \eqref{eq:new:4}, we have that $\frac{\partial \ell(q,u)}{\partial q}=0$ implies $q-u =0$. On the other hand, using Eq. \eqref{eq:new:3}, we have that $q-u =0$ implies $\frac{\partial \ell(q,u)}{\partial q}=0$. In other words,
\begin{align} \label{eq:new:5}
\frac{\partial \ell(q,u)}{\partial q}=0 \qquad \Longleftrightarrow \qquad q = u.
\end{align}
This proves the statement for the second condition that $\nabla_{q} \ell(q,u)=0$ if and only if $q=u$. We now prove the statement for the optimal stationarity condition\ by showing that $q = u$ implies that $\ell(q,u) \le \ell(q',u)$ for all $q' \in \mathbb{R}$. If $q = u$, then
\begin{align}
\ell(q,u)=\frac{|\alpha - 2|}{\alpha}\left(\left( 1\right)^{\alpha/2} - 1\right)=0.
\end{align}
On the other hand, we have that
\begin{align}
\ell(q,u) \ge 0, \qquad \forall q,u\in \mathbb{R},
\end{align}
since
$
((\frac{(d/c)^2}{|\alpha - 2|} + 1)^{\alpha/2} - 1) \ge 0
$ if $\alpha \ge 0$ and
$
((\frac{(d/c)^2}{|\alpha - 2|} + 1)^{\alpha/2} - 1) \le 0
$ if $\alpha \le 0$. Therefore, for any $q,u \in \mathbb{R}$, having $q = u$ implies that $\ell(q,u) \le \ell(q',u)$ for all $q' \in \mathbb{R}$. By using Eq. \eqref{eq:new:5}, this proves the statement for the optimal stationarity condition.
\end{proof}
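As a quick numerical sanity check of these two properties (not part of the proof), the following minimal Python sketch evaluates $\rho_{\alpha,c}$ and its derivative on a grid of residuals $d=q-u$ for a few values of $\alpha$; the grid and tolerances are illustrative.
\begin{verbatim}
# Numerical check: rho_{alpha,c} >= 0 and its derivative vanishes iff d = 0.
import numpy as np

def rho(d, alpha, c=1.0):
    return (abs(alpha - 2) / alpha) * (
        ((d / c) ** 2 / abs(alpha - 2) + 1) ** (alpha / 2) - 1)

def drho(d, alpha, c=1.0):
    return (1.0 / c ** 2) * (
        (d / c) ** 2 / abs(alpha - 2) + 1) ** (alpha / 2 - 1) * d

d = np.linspace(-3, 3, 601)                  # grid containing d = 0
for alpha in (-2.0, 0.5, 1.0, 4.0):          # alpha != 0 and alpha != 2
    assert np.all(rho(d, alpha) >= -1e-12)
    g = drho(d, alpha)
    assert np.all((np.abs(g) < 1e-12) == (np.abs(d) < 1e-12))
\end{verbatim}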
\section{Other algorithm design options}\label{app:meta:design:other}
Apart from the most important options presented in Section~\ref{sec:meta:design} and Appendix~\ref{app:meta:design:multi} that pertain to the loss function parametrization, the number of inner optimization steps and imposing desirable loss function properties, some additional design options are presented in this section.
First, consider the options of sampling new tasks $\Lambda$ from the task distribution $p(\lambda)$ (line 4 in Algorithm~\ref{algo:general}) and initializing the approximator NN parameters $\theta_{\tau}$ for $\tau \in \{1, \dots, T\}$ (line 6 in Algorithm~\ref{algo:general}).
Resampling a set $\Lambda$ of $T$ tasks in every outer iteration exposes the learned loss to more samples from the task distribution.
Similarly, solving these tasks with $T$ new randomly initialized NNs exposes the loss function to more samples from the NN initialization distribution.
Although such introduced randomness is generally expected to improve test performance, it leads to unstable training that depends also on the number of inner optimization steps.
As a result, we have the option to resample new tasks and new NN parameter initializations every $I'$ and $I''$ outer iterations, respectively, instead of every single iteration; i.e., $\theta_{\tau}$ is re-initialized in line 6 of Algorithm~\ref{algo:general} with a setting $\theta_{\tau}^{(0)}$ that is replaced every $I''$ iterations.
An indicative experiment for demonstrating the effect of these options on training and test performance of the learned loss is performed in Appendix~\ref{app:add:results} for the regression example of Section~\ref{sec:examples:regression}.
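For concreteness, a minimal sketch of how the two frequencies $I'$ and $I''$ could be wired into the outer loop of Algorithm~\ref{algo:general} is given below; all helper names (\texttt{sample\_task}, \texttt{init\_params}, \texttt{inner\_solve}, \texttt{outer\_update}) are placeholders rather than the actual implementation.
\begin{verbatim}
# Hypothetical sketch: tasks are redrawn every I_task outer iterations and
# the approximator parameters are re-initialized every I_init iterations.
def meta_train(I, T, I_task, I_init,
               sample_task, init_params, inner_solve, outer_update, phi):
    tasks, theta0 = None, None
    for i in range(I):
        if i % I_task == 0:
            tasks = [sample_task() for _ in range(T)]     # line 4 analogue
        if i % I_init == 0:
            theta0 = [init_params() for _ in range(T)]    # line 6 analogue
        thetas = [inner_solve(phi, t, th0) for t, th0 in zip(tasks, theta0)]
        phi = outer_update(phi, tasks, thetas)            # update learned loss
    return phi
\end{verbatim}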
Finally, as a stopping criterion for Algorithm~\ref{algo:general}, i.e., for selecting the maximum number of outer iterations $I$, a performance metric can be recorded during meta-training and the algorithm can be stopped if no progress is observed for a number of iterations.
Two candidate options for this metric are the following: (a) the outer objective of Eq.~\eqref{eq:lossml:outer:loss}, corresponding to a \textit{meta-training error}, and (b) the meta-test performance on a few test iterations with learned loss snapshots captured during meta-training, corresponding to a \textit{meta-validation error}.
However, because of the aforementioned induced randomness during meta-training, the outer objective may be too noisy for it to be a useful metric, especially when only one task is used for each outer update ($T = 1$ in Algorithm~\ref{algo:general}); thus, either a less noisy option can be utilized (see Fig.~\ref{fig:reg:exp:train}) or a moving average can be recorded (see Fig.~\ref{fig:burgers:trainloss:ffn}).
Furthermore, depending on the parametrization and other design options, the meta-validation error can also be noisy (see Figs.~\ref{fig:reg:exp:test}, \ref{fig:ad:traintest}, and \ref{fig:burgers:traintest}).
For this reason, in the computational examples of Section~\ref{sec:examples}, we meta-train for a sufficiently large number of outer iterations ($10{,}000$) that works reasonably well according to both metrics.
We also capture snapshots of the learned loss and use all of them in meta-testing for gaining a deeper understanding of the algorithm's applicability for solving PDEs with PINNs.
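As an illustration of such a stopping rule, a minimal sketch of a moving-average monitor is given below; the window and patience values are placeholders, and the metric fed to it can be either the meta-training or the meta-validation error.
\begin{verbatim}
# Hypothetical sketch: stop when the smoothed metric has not improved
# for `patience` consecutive outer iterations.
from collections import deque

def make_stopper(window=50, patience=500):
    history, best, since_best = deque(maxlen=window), float("inf"), 0
    def should_stop(metric):
        nonlocal best, since_best
        history.append(metric)
        smoothed = sum(history) / len(history)
        if smoothed < best - 1e-8:
            best, since_best = smoothed, 0
        else:
            since_best += 1
        return since_best >= patience
    return should_stop
\end{verbatim}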
\section{Additional computational results related to the function approximation example}\label{app:add:results}
\paragraph*{Effect of loss function initialization.}
To demonstrate the effect of the initialization of the NN-parametrized loss function, the outer learning rate, as well as the gradient clipping approach discussed in Section~\ref{sec:meta:design:inner}, we first consider employing Algorithm~\ref{algo:general} with a randomly initialized loss function, a large learning rate equal to $5\times 10^{-2}$ and $J = 20$ inner optimization steps.
SGD is considered as both the inner and outer optimizer, the approximator NN architecture consists of $3$ hidden layers with $40$ neurons each and $\tanh$ activation function, the number of datapoints used for inner training is $N_u = 100$, and the number of datapoints used for updating the loss is $N_{u, val} = 1{,}000$; the approximator NN as well as $N_u$ and $N_{u, val}$ are the same for all experiments in this section.
Note that this is the only case in the computational examples that SGD is used as outer optimizer; in all other cases Adam is used.
In Fig.~\ref{fig:reg:expl:grad:info}, the norm of the loss parameters as well as the norm and the maximum of their gradient for each outer iteration are shown.
The gradient on iteration $4$ explodes and leads to a large jump in the loss parameters norm as well.
Furthermore, Fig.~\ref{fig:reg:expl:grad:inner} shows the loss gradient norm for each outer iteration and as a function of inner iterations.
Clearly, although for increasing inner steps $J$ we differentiate over a longer optimization path as explained in Section~\ref{sec:meta:design:inner}, the loss gradient norm does not necessarily increase with increasing $J$.
For this reason, throughout the rest of the computational examples we use a gradient clipping approach for addressing the exploding gradient issue, instead of, for example, uniformly dividing all gradients by $J$.
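For reference, a minimal framework-agnostic sketch of clipping the loss-parameter gradients by their global norm is given below; the threshold is illustrative and this is not necessarily the exact clipping variant used in the experiments.
\begin{verbatim}
# Sketch: rescale the list of gradient arrays so that their global norm
# does not exceed max_norm, leaving small gradients unchanged.
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]
\end{verbatim}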
\begin{figure}
\caption{Function approximation: Loss parameters norm as well as norm and maximum of their gradient for each outer iteration for the case of randomly initializing the loss function parameters and not using gradient clipping.
The gradient on iteration $4$ explodes and leads to a large jump on the loss parameters norm as well.}
\label{fig:reg:expl:grad:info}
\end{figure}
\begin{figure}
\caption{Function approximation: Loss parameters gradient norm as a function of inner and outer iterations.
Gradient norm does not increase with increasing number of inner iterations for all outer iterations.}
\label{fig:reg:expl:grad:inner}
\end{figure}
\paragraph*{Design options experiment.}
Next, to demonstrate the effect of the design options discussed in Section~\ref{sec:meta:design} and to evaluate the generalization capabilities of the algorithm we consider the following experiment:
Algorithm~\ref{algo:general} is employed with an LAL-parametrized loss function initialized with an MSE approximation (see Section~\ref{sec:meta:design:param:LAL}), with Adam as inner optimizer, and with $I = 1{,}000$ outer steps.
Furthermore, the number of inner steps $J$ varies between values in $\{1, 20\}$, the frequency according to which we sample new tasks varies between values in $\{1, 10, 100, 1000 = \text{no resampling}\}$, and the option of using a newly initialized approximator NN in each outer iteration is set to either True or False; see Appendix~\ref{app:meta:design:other} for relevant discussion.
In Fig.~\ref{fig:reg:exp:train} we show the outer objective during meta-training and as a function of outer iteration for $J=1$ and $J=20$.
Clearly, resampling tasks and re-initializing the approximator NN introduces noise to the training as depicted by the corresponding outer objective trajectories shown with gray lines in Fig.~\ref{fig:reg:exp:train}; see also relevant results in Fig.~\ref{fig:burgers:trainloss:ffn} pertaining to the Burgers equation example.
Furthermore, in Fig.~\ref{fig:reg:exp:test}, the generalization capacity of the obtained loss functions is evaluated by performing meta-testing on $5$ ID unseen tasks for $100$ and $500$ test iterations.
Note that each line in Fig.~\ref{fig:reg:exp:test} corresponds to the performance of $101$ different loss functions and is obtained by meta-training with different design options; i.e., we perform a test every $10$ outer iterations for each meta-training session.
Overall, increasing the number of inner iterations improves the learned loss test performance as well as its robustness with increasing outer iterations and with varying design options.
Moreover, the patterns observed for 100 test iterations are almost identical to the ones observed for 500 test iterations, which means that if we attempt to optimize the design options based on meta-testing with 100 or 500 iterations we will end up with the same optimal ones.
\begin{figure}
\caption{Function approximation: Outer objective values recorded during meta-training ($1{,}000$ outer iterations).}
\label{fig:reg:exp:train}
\end{figure}
\begin{figure}
\caption{Function approximation: Meta-testing performance (100 (a, b) and 500 (c, d) test iterations) as a function of outer iteration during meta-training ($1{,}000$ outer iterations).}
\label{fig:reg:exp:test}
\end{figure}
\end{appendices}
\end{document} |
\begin{document}
\title{A NEUTRAL RELATION BETWEEN METALLIC\ STRUCTURE\ AND ALMOST\ QUADRATIC
$\phi $-STRUCTURE}
\author{Sinem G\"{o}n\"{u}l$^{1}$}
\email{sinemarol@gmail.com}
\author{\.{I}rem K\"{u}peli Erken$^{2}$}
\address{$^{2}$Faculty of Engineering and Naturel Sciences, Department of
Mathematics, Bursa Technical University, Bursa, Turkey }
\email{irem.erken@btu.edu.tr}
\author{Aziz Yazla$^{1}$}
\address{$^{1}$Uludag University, Science Institute, Gorukle 16059,
Bursa-TURKEY}
\email{501411002@ogr.uludag.edu.tr}
\author{Cengizhan Murathan$^{3}$}
\address{$^{3}$Art and Science Faculty, Department of Mathematics, Uludag
University, 16059 Bursa, TURKEY}
\email{cengiz@uludag.edu.tr}
\subjclass[2010]{Primary 53C15, 53C25, 53C55; Secondary 53D15.}
\keywords{Polynomial structure, golden structure, metallic structure, almost
quadratic $\phi $-structure.}
\begin{abstract}
In this paper, metallic structures and almost quadratic metric $\phi $
-structures are studied. Based on metallic (polynomial) Riemannian manifolds,
Kenmotsu quadratic metric manifolds and cosymplectic quadratic metric manifolds
are defined and some examples are given. Finally, we construct a quadratic $\phi $
-structure on a hypersurface $M^{n}$ of a locally metallic Riemannian
manifold $\tilde{M}^{n+1}.$
\end{abstract}
\maketitle
\section{I\textbf{ntroduction}}
\label{introduction} In \cite{GOLYANO} and \cite{SP}, S. I. Goldberg, K. Yano
and N. C. Petridis defined a new type of structure, called a
polynomial structure, on an $n$-dimensional differentiable manifold $M$. A
polynomial structure of degree 2 can be given by
\begin{equation}
J^{2}=pJ+qI, \label{M1}
\end{equation}
where $J$ is a $(1,1)$ tensor field on $M,$ $I$ is the identity operator on
the Lie algebra $\Gamma (TM)$ of vector fields on $M$, and $p,q$ are real
numbers. This structure can also be viewed as a generalization of the following
well-known structures:
$\cdot $ If $p=0$, $q=1$, then $J$\ is called almost product or almost para
complex structure and denoted by $F$ \cite{PITIS}, \cite{NAVEIRA},
$\cdot $ If $p=0$, $q=-1$, then $J$ is called almost complex structure \cite
{YANO},
$\cdot $ If $p=1$, $q=1$, then $J$\ is called golden structure \cite{H1},
\cite{H2},
$\cdot $ If $p$ is a positive integer and $q=-1$, then $J$ is called a
poly-Norden structure \cite{SAHIN},
$\cdot $ If $p=1$, $q=\frac{-3}{2}$, then $J$ is called almost complex
golden structure \cite{Bilen},
$\cdot $ If $p$ and $q$ are positive integers, then $J$ is called metallic
structure \cite{H3}.
If a differentiable manifold $M$ is endowed with a metallic structure $J$, then the
pair $(M,J)$ is called a metallic manifold. Any metallic structure $J$ on $M$
induces two almost product structures on $M$
\begin{equation*}
F_{\pm }=\pm \frac{2}{2\sigma _{p,q}-p}J-\frac{p}{2\sigma _{p,q}-p}I\text{,}
\end{equation*}
where $\sigma _{p,q}=\frac{p+\sqrt{p^{2}+4q}}{2}$ is the metallic number,
which is the positive solution of the equation $x^{2}-px-q=0$ for $p$ and $q$
nonzero natural numbers. Conversely, any almost product structure $F$ on $M$
induces two metallic structures on $M$
\begin{equation*}
J_{\pm }=\pm \frac{2\sigma _{p,q}-p}{2}F+\frac{p}{2}I.
\end{equation*}
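Indeed, since $F^{2}=I$ and $\sigma _{p,q}^{2}=p\sigma _{p,q}+q$, so that $(2\sigma _{p,q}-p)^{2}=p^{2}+4q$, one checks directly that $J_{\pm }$ satisfies (\ref{M1}):
\begin{equation*}
J_{\pm }^{2}=\frac{(2\sigma _{p,q}-p)^{2}+p^{2}}{4}I\pm \frac{p(2\sigma
_{p,q}-p)}{2}F=\left( q+\frac{p^{2}}{2}\right) I\pm \frac{p(2\sigma
_{p,q}-p)}{2}F=pJ_{\pm }+qI.
\end{equation*}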
If $M$ is Riemannian, the metric $g$ is said to be compatible with\ the
polynomial structure $J$ if
\begin{equation}
g(JX,Y)=g(X,JY) \label{M2}
\end{equation}
for $X,Y\in \Gamma (TM)$. In this case $(g,J)$ is called a metallic Riemannian
structure and $(M,g,J)$ a metallic Riemannian manifold (\cite{DK}). By (\ref{M1})
and (\ref{M2}), one can get
\begin{equation}
g(JX,JY)=pg(JX,Y)+qg(X,Y), \label{M3}
\end{equation}
for $X,Y\in \Gamma (TM)$. The Nijenhuis torsion $N_{K}$ of an arbitrary tensor
field $K$ of type $(1,1)$ on $M$ is the tensor field of type $(1,2)$ defined by
\begin{equation}
N_{K}(X,Y)=K^{2}[X,Y]+[KX,KY]-K[KX,Y]-K[X,KY] \label{M4}
\end{equation}
where $[X,Y]$ is the commutator for arbitrary differentiable vector fields $
X,Y\in \Gamma (TM).$ The polynomial structure $J$ is said to be integrable
if $N_{J}$ $=0.$ A metallic Riemannian structure $J$ is said to be locally
metallic if $\nabla J=0$, where $\nabla $ is the Levi-Civita connection with
respect to $g$. Thus one can deduce that a locally metallic Riemannian
manifold is always integrable.
On the other hand, P. Debnath and A. Konar \cite{DK} recently introduced a
new type of structure, called an almost quadratic $\phi $-structure $(\phi
,\eta ,\xi )$, on an $n$-dimensional differentiable manifold $M$, determined
by a $(1,1)$-tensor field $\phi $, a unit vector field $\xi $ and a $1$-form
$\eta $ which satisfy the relations:
\begin{equation*}
\phi \xi =0
\end{equation*}
\begin{equation}
\phi ^{2}=a\phi +b(I-\eta \otimes \xi );\text{ \ }a^{2}+4b\neq 0 \label{DK1}
\end{equation}
where $a$ is an arbitrary constant and $b$ is a nonzero constant. If $M$ is a
Riemannian manifold, the Riemannian metric $g$ is said to be compatible
with the polynomial structure $\phi $ if
\begin{equation}
g(\phi X,Y)=g(X,\phi Y) \label{CR}
\end{equation}
which is equivalent to
\begin{equation}
g(\phi X,\phi Y)=pg(\phi X,Y)+q(g(X,Y)-\eta (X)\eta (Y)). \label{MR}
\end{equation}
In this case $(g,\phi ,\eta ,\xi )$ is called an almost quadratic metric $\phi $
-structure. The manifold $M$ is said to be an almost quadratic metric $\phi $
-manifold if it is endowed with an almost quadratic metric $\phi $-structure
\cite{DK}. They proved necessary and sufficient conditions under which an almost
quadratic $\phi $-manifold induces an almost contact or almost paracontact
structure.
Recently, A.M. Blaga and C. E. Hretcanu \cite{BH} characterized the metallic
structure on the product of two metallic manifolds in terms of metallic maps
and provided a necessary and sufficient condition for the warped product of
two locally metallic Riemannian manifolds to be locally metallic.
The paper is organized in the following way.
Section $2$ is a preliminary section, where we recall some properties of
almost quadratic metric $\phi $-structures and warped product manifolds. In
Section $3$, we define $(\beta ,\phi )$-Kenmotsu quadratic metric manifolds
and cosymplectic quadratic metric manifolds. We mainly prove that if $
(N,g,\nabla ,J)$ is a locally metallic Riemannian manifold, then $
\mathbb{R}
\times _{f}N$ is a $(-\frac{f^{\prime }}{f},\phi )$-Kenmotsu quadratic
metric manifold, and we show that every differentiable manifold $M$ endowed
with an almost quadratic $\phi $-structure $(\phi ,\eta ,\xi )$ admits an
associated Riemannian metric. We prove that on a $(\beta ,\phi )$-Kenmotsu
quadratic metric manifold the Nijenhuis tensor $N_{\phi }\equiv 0$. We also
give examples of $(\beta ,\phi )$-Kenmotsu quadratic metric manifolds.
Section $4$ is devoted to quadratic $\phi $-hypersurfaces of metallic
Riemannian manifolds. We show that there are almost quadratic $\phi $
-structures on hypersurfaces of metallic Riemannian manifolds. Then we
give the necessary and sufficient condition for the characteristic vector field $\xi $
to be Killing in a quadratic metric $\phi $-hypersurface. Furthermore, we
obtain the Riemannian curvature tensor of a quadratic metric $\phi $
-hypersurface.
\section{Preliminaries}
Let $M^{n}$ be an almost quadratic $\phi $-manifold. As in the
almost contact case, Debnath and Konar \cite{DK} proved that $\eta \circ
\phi =0,$ $\eta (\xi )=1$ and $\mathrm{rank}\, \phi =n-1$. They also showed that the
eigenvalues of the structure tensor $\phi $ are $\frac{a+\sqrt{a^{2}+4b}}{2}$, $
\frac{a-\sqrt{a^{2}+4b}}{2}$ and $0.$ If $\lambda _{i}$, $\sigma _{j}$ and $
\xi $ are eigenvectors corresponding to the eigenvalues $\frac{a+\sqrt{
a^{2}+4b}}{2},$ $\frac{a-\sqrt{a^{2}+4b}}{2}$ and $0$ of $\phi $,
respectively, then $\lambda _{i}$, $\sigma _{j}$ and $\xi $ are linearly
independent. Consider the distributions
$\cdot \Pi _{p}=\{X\in \Gamma (TM):\alpha LX=-\phi ^{2}X-(\frac{\sqrt{
a^{2}+4b}-a}{2})\phi X,\alpha =-2b-\frac{a^{2}+a\sqrt{a^{2}+4b}}{2}\};\dim
\Pi _{p}=p,$
$\cdot \Pi _{q}=\{X\in \Gamma (TM):\beta QX=-\phi ^{2}X+(\frac{\sqrt{a^{2}+4b
}+a}{2})\phi X,\beta =-2b-\frac{a^{2}-a\sqrt{a^{2}+4b}}{2}\};\dim \Pi
_{q}=q, $
$\cdot \Pi _{1}=\{X\in \Gamma (TM):\beta RX=\phi ^{2}X-a\phi X-bX=-b\eta
(X)\xi \};\dim \Pi _{1}=1.$
With the above notation, P. Debnath and A. Konar proved the following theorem.
\begin{theorem}[\protect \cite{DK}]
The necessary and sufficient condition for a manifold $M^{n}$ to be an
almost quadratic $\phi $-manifold is that at each point of the manifold $
M^{n}$ it contains distributions $\Pi _{p},\Pi _{q}$ and $\Pi _{1}$ such
that $\Pi _{p}\cap \Pi _{q}=\{ \varnothing \},\Pi _{p}\cap \Pi _{1}=\{
\varnothing \},\Pi _{q}\cap \Pi _{1}=\{ \varnothing \}$ and $\Pi _{p}\cup
\Pi _{q}\cup \Pi _{1}=TM$.
\end{theorem}
Let $(M^{m},g_{M})$ and $(N^{n},g_{N})$ be two Riemannian manifolds and $
\tilde{M}$ =$M\times N.$ The warped product metric $<,>$ on $\tilde{M}$ is
given by
\begin{equation*}
<\tilde{X},\tilde{Y}>=g_{M}(\pi {}_{\ast }\tilde{X},\pi {}_{\ast }\tilde{Y}
)+(f\circ \pi )^{2}g_{N}(\sigma {}_{\ast }\tilde{X},\sigma {}_{\ast }\tilde{Y
})
\end{equation*}
for every $\tilde{X}$ and $\tilde{Y}$ $\in $ $\Gamma (T\tilde{M})$, where $
f:M\overset{C^{\infty }}{\rightarrow }
\mathbb{R}
^{+}$, and $\pi :M\times N\rightarrow M,$ $\sigma :M\times N\rightarrow N$ are
the canonical projections (see \cite{BO}). The warped product manifold is
denoted by $\tilde{M}$ $=(M\times _{f}N,<,>).$ The function $f$ is called
the warping function of the warped product. If the warping function $f$ is $
1 $, then $\tilde{M}=(M\times _{f}N,<,>)$ reduces to the Riemannian product
manifold. The manifolds $M$ and $N$ are called the base and the fiber of $
\tilde{M}$, respectively. For a point $(p,q)\in M\times N,$ the tangent
space $T_{(p,q)}(M\times N)$ is isomorphic to the direct sum $
T_{(p,q)}(M\times q)\oplus T_{(p,q)}(p\times N)\equiv T_{p}M\oplus T_{q}N.$
Let $\mathcal{L}_{\mathcal{H}}(M)$ (resp. $\mathcal{L}_{\mathcal{V}}(N)$) be
the set of all vector fields on $M\times N$ which are horizontal lifts of
vector fields on $M$ (resp. vertical lifts of vector fields on $N$).
Thus a vector field on $M\times N$ can be written as $\bar{E}$ $=$ $\bar{X}+
\bar{U}$, with $\bar{X}\in $ $\mathcal{L}_{\mathcal{H}}(M)$ and $\bar{U}$ $
\in $ $\mathcal{L}_{\mathcal{V}}(N)$. \ One can see that
\begin{equation*}
\pi _{\ast }(\mathcal{L}_{\mathcal{H}}(M))=\Gamma (TM)\text{ ,\ }\sigma
_{\ast }(\mathcal{L}_{\mathcal{V}}(N))=\Gamma (TN)
\end{equation*}
and so $\pi _{\ast }(\bar{X})=X$ $\in $\ $\Gamma (TM)$ and $\sigma _{\ast }(
\bar{U})=U\in \Gamma (TN)$. If $\bar{X},\bar{Y}\in \mathcal{L}_{\mathcal{H}
}(M)$ then $[\bar{X},\bar{Y}]=\overset{-}{[X,Y]}\in \mathcal{L}_{\mathcal{H}
}(M)$ and similarly for $\mathcal{L}_{\mathcal{V}}(N)$; moreover, if $\bar{X}
\in \mathcal{L}_{\mathcal{H}}(M),\bar{U}$ $\in $ $\mathcal{L}_{\mathcal{V}
}(N)$ then $[\bar{X},\bar{U}]=0$ \cite{ONEILL}.
The Levi-Civita connection $\bar{\nabla}$ of $M\times _{f}N$ is related to
the Levi-Civita connections of $M$ and $N$ as follows:
\begin{proposition}[\protect \cite{ONEILL}]
\label{ON1}
For $\bar{X},\bar{Y}\in \mathcal{L}_{\mathcal{H}}(M)$ and $\bar{U},\bar{V}$ $
\in $ $\mathcal{L}_{\mathcal{V}}(N)$,
(a) $\bar{\nabla}_{\bar{X}}\bar{Y}\in \mathcal{L}_{\mathcal{H}}(M)$ is the
lift of $^{M}\nabla _{X}Y$, that is $\pi _{\ast }(\bar{\nabla}_{\bar{X}}\bar{
Y})=$ $^{M}\nabla _{X}Y$
(b) $\bar{\nabla}_{\bar{X}}\bar{U}=\bar{\nabla}_{\bar{U}}\bar{X}=\frac{X(f)}{
f}U$
(c) $\bar{\nabla}_{\bar{U}}\bar{V}$ $=$ $\ ^{N}\nabla _{U}V-\frac{<U,V>}{f}
\func{grad}f,$ where $\sigma {}_{\ast }(\bar{\nabla}_{\bar{U}}\bar{V})=$ $
^{N}\nabla _{U}V.$
\end{proposition}
Here the notation is simplified by writing $f$ for $f\circ \pi $ and $
\func{grad}f$ for $\func{grad}(f\circ \pi )$.
Now, we consider the special warped product manifold
\begin{equation*}
\tilde{M}=I\times _{f}N,\text{ }<,>=dt^{2}+f^{2}(t)g_{N}.
\end{equation*}
In practice, the bar denoting lifts is omitted. In this case
\begin{equation}
\tilde{\nabla}_{\partial _{t}}\partial _{t}=0,\tilde{\nabla}_{\partial
_{t}}X=\tilde{\nabla}_{X}\partial _{t}=\frac{f^{\prime }(t)}{f(t)}X\text{
and }\tilde{\nabla}_{X}Y=\ ^{N}\nabla _{X}Y-\frac{<X,Y>}{f(t)}f^{\prime
}(t)\partial _{t}. \label{AZ}
\end{equation}
\section{Almost quadratic metric $\protect \phi $-structure}
Let $(N,g,J)$ be a metallic Riemannian manifold with metallic structure $J$.
By (\ref{M1}) and (\ref{M2}) we have
\begin{equation}
g(JX,JY)=pg(X,JY)+qg(X,Y). \label{COMPLEX2}
\end{equation}
Let us consider the warped product $\tilde{M}=
\mathbb{R}
\times _{f}N$, with warping function $f>0$, endowed with the Riemannian
metric
\begin{equation*}
<,>=dt^{2}+f^{2}g.
\end{equation*}
Now we will define an almost quadratic metric $\phi $-structure on $(\tilde{M
},\tilde{g})$ by using a method similar to that in \cite{ALPG}. Denote an arbitrary
vector field on $\tilde{M}$ by $\tilde{X}=\eta (\tilde{X})\xi +X,$ where $X$
is a vector field on $N$ and $\eta =dt$. With the help of the tensor field $J$,
a new tensor field $\phi $ of type $(1,1)$ on $\tilde{M}$ can be given by
\begin{equation}
\phi \tilde{X}=JX,\text{ \ }X\in \Gamma (TN), \label{CM1}
\end{equation}
for $\tilde{X}\in $ $\Gamma (T\tilde{M})$. So we get $\phi \xi =\phi (\xi
+0)=J0=0$ and $\eta (\phi \tilde{X})=0,$ for any vector field $\tilde{X}$ on
$\tilde{M}$. Hence, we obtain
\begin{equation}
\phi ^{2}\tilde{X}=p\phi \tilde{X}+q(\tilde{X}-\eta (\tilde{X})\xi )
\label{CONTA1}
\end{equation}
and arrive at
\begin{eqnarray*}
&<&\phi \tilde{X}\text{ },\tilde{Y}>=f^{2}g(JX,Y) \\
&=&f^{2}g(X,JY) \\
&=&<\tilde{X}\text{ },\phi \tilde{Y}>,
\end{eqnarray*}
for $\tilde{X},\tilde{Y}\in \Gamma (T\tilde{M})$. Moreover, we get
\begin{eqnarray*}
&<&\phi \tilde{X},\phi \tilde{Y}>=f^{2}g(JX,JY) \\
&=&f^{2}(pg(X,JY)+qg(X,Y)) \\
&=&p<\tilde{X}-\eta (\tilde{X})\xi ,\phi \tilde{Y}>+q(<\tilde{X},\tilde{Y}
>-\eta (\tilde{X})\eta (\tilde{Y})) \\
&=&p<\tilde{X},\phi \tilde{Y}>+q(<\tilde{X},\tilde{Y}>-\eta (\tilde{X})\eta (
\tilde{Y})).
\end{eqnarray*}
Thus we have proved the following proposition.
\begin{proposition}
\label{QUADRATIC}If $(N,g,J)$ is a metallic Riemannian manifold, then there
is an almost quadratic metric $\phi $-structure on warped product manifold $(
\tilde{M}=
\mathbb{R}
\times _{f}N,<,>=dt^{2}+f^{2}g)$.
\end{proposition}
An almost quadratic metric $\phi $-manifold $(M,g,\nabla ,\phi ,\xi ,\eta )$
is called a $(\beta ,\phi )$-Kenmotsu quadratic metric manifold if
\begin{equation}
(\nabla _{X}\phi )Y=\beta \{g(X,\phi Y)\xi +\eta (Y)\phi X\},\beta \in
C^{\infty }(M). \label{K1}
\end{equation}
Taking $Y=\xi $ in (\ref{K1}) and using (\ref{DK1}), we obtain
\begin{equation}
\nabla _{X}\xi =-\beta (X-\eta (X)\xi ). \label{K2}
\end{equation}
Moreover, by (\ref{K2}) we get $d\eta =0.$ If $\beta =0$, then this kind of
manifold is called a cosymplectic quadratic metric manifold.
\begin{theorem}
\label{QC}If $(N,g,\nabla ,J)$ is a locally metallic Riemannian manifold,$\ $
then $
\mathbb{R}
\times _{f}N$ is a $(-\frac{f^{\prime }}{f},\phi )$-Kenmotsu quadratic
metric manifold.
\end{theorem}
\begin{proof}
We consider $\tilde{X}=\eta (\tilde{X})\xi +X$ and $\tilde{Y}=\eta (\tilde{Y}
)\xi +Y$ vector fields on $
\mathbb{R}
\times _{f}N$ , where $X,Y\in \Gamma (TN)$ and $\xi =\frac{\partial }{
\partial t}$ $\in $ $\Gamma (
\mathbb{R}
)$. With the help of (\ref{CM1}), we have
\begin{eqnarray}
(\tilde{\nabla}_{\tilde{X}}\phi )\tilde{Y} &=&\tilde{\nabla}_{\tilde{X}}\phi
\tilde{Y}-\phi \tilde{\nabla}_{\tilde{X}}\tilde{Y} \notag \\
&=&\tilde{\nabla}_{X}JY+\eta (\tilde{X})\tilde{\nabla}_{\xi }JY-\phi (\tilde{
\nabla}_{X}\tilde{Y}+\eta (\tilde{X})\tilde{\nabla}_{\xi }\tilde{Y}) \notag
\\
&=&\tilde{\nabla}_{X}JY+\eta (\tilde{X})\tilde{\nabla}_{\xi }JY-\phi (\tilde{
\nabla}_{X}Y+X(\eta (\tilde{Y}))\xi +\eta (\tilde{Y})\tilde{\nabla}_{X}\xi
\label{W3} \\
&&+\eta (\tilde{X})\tilde{\nabla}_{\xi }Y+\xi (\eta (\tilde{Y}))\eta (\tilde{
X})\xi ). \notag
\end{eqnarray}
Using (\ref{AZ}) in (\ref{W3}), we get
\begin{eqnarray}
(\tilde{\nabla}_{\tilde{X}}\phi )\tilde{Y} &=&(\nabla _{X}J)Y-\frac{f^{\prime }}{f}<X,JY>\xi +\eta (\tilde{X})\frac{f^{\prime }}{f}JY-\phi (\eta (
\tilde{Y})\frac{f^{\prime }}{f}X+\eta (\tilde{X})\frac{f^{\prime }}{f}Y)
\label{W4} \\
&=&(\nabla _{X}J)Y-\frac{f^{\prime }}{f}(<\tilde{X},\phi \tilde{Y}>\xi +\eta
(\tilde{Y})\phi \tilde{X}). \notag
\end{eqnarray}
Since $\nabla J=0,$ the last equation is reduced to
\begin{equation}
(\tilde{\nabla}_{\tilde{X}}\phi )\tilde{Y}=-\frac{f^{\prime }}{f}(<\tilde{X}
,\phi \tilde{Y}>\xi +\eta (\tilde{Y})\phi \tilde{X}). \label{w5}
\end{equation}
Using $\tilde{\nabla}_{X}\xi =\frac{f^{\prime }}{f}X$ , we have
\begin{equation*}
\tilde{\nabla}_{\tilde{X}}\xi =\frac{f^{\prime }}{f}(\tilde{X}-\eta (\tilde{X
})\xi ).
\end{equation*}
So $
\mathbb{R}
\times _{f}N$ is a $(-\frac{f^{\prime }}{f},\phi )$-Kenmotsu quadratic
metric manifold.
\end{proof}
\begin{corollary}
Let $(N,g,\nabla ,J)$ be a locally metallic Riemannian manifold. Then
product manifold $
\mathbb{R}
\times N$ is a cosymplectic quadratic metric manifold.
\begin{example}
A.M. Blaga and C.E. Hretcanu \cite{BH} constructed a metallic structure on $
\mathbb{R}
^{n+m}$ in the following manner:
\begin{equation*}
J(x_{1},...,x_{n},y_{1},...,y_{m})=(\sigma x_{1},...,\sigma x_{n},\bar{\sigma
}y_{1},...,\bar{\sigma}y_{m}),
\end{equation*}
where $\sigma =\sigma _{p,q}=\frac{p+\sqrt{p^{2}+4q}}{2}$ and $\bar{\sigma}=
\bar{\sigma}_{p,q}=\frac{p-\sqrt{p^{2}+4q}}{2}$ for $p,q$ positive
integers. By Theorem \ref{QC}, $H^{n+m+1}=
\mathbb{R}
\times _{e^{t}}
\mathbb{R}
^{n+m}$ is a $(-1,\phi )$-Kenmotsu quadratic metric manifold.
\end{example}
\end{corollary}
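As a quick numerical sanity check of this construction (not part of the text), the following Python sketch verifies the relation $J^{2}=pJ+qI$ for the illustrative choice $p=2$, $q=1$ and $n=m=2$.
\begin{verbatim}
# Check J^2 = pJ + qI for the diagonal metallic structure on R^{n+m}.
import numpy as np

p, q, n, m = 2, 1, 2, 2
sigma = (p + np.sqrt(p ** 2 + 4 * q)) / 2
sigma_bar = (p - np.sqrt(p ** 2 + 4 * q)) / 2
J = np.diag([sigma] * n + [sigma_bar] * m)
assert np.allclose(J @ J, p * J + q * np.eye(n + m))
\end{verbatim}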
$M$ is said to be a metallic shaped hypersurface in a space form $N^{n+1}(c)$
if the shape operator $A$ of $M$ is a metallic structure (see \cite{OZGUR}).
\begin{example}
In \cite{OZGUR}, C. \"{O}zg\"{u}r and N. Y\i lmaz \"{O}zg\"{u}r proved that the sphere $
S^{n}(\frac{2}{p+\sqrt{p^{2}+4q}})$ is a locally metallic shaped
hypersurface in $
\mathbb{R}
^{n+1}$. Using Theorem \ref{QC}, we have that
\begin{equation*}
H^{n+1}=
\mathbb{R}
\times _{\cosh (t)}S^{n}(\frac{2}{p+\sqrt{p^{2}+4q}})
\end{equation*}
is a $(-\tanh t,\phi )$-Kenmotsu quadratic metric manifold.
\end{example}
\begin{example}
P. Debnath and A. Konar \cite{DK} gave an example of an almost quadratic $\phi
$-structure on $
\mathbb{R}
^{4}$ as follows:
If the $(1,1)$ tensor field $\phi ,$ the 1-form $\eta $ and the vector field $\xi $
are defined as
\begin{equation*}
\phi =
\begin{bmatrix}
2 & 1 & 0 & 0 \\
9 & 2 & 0 & 0 \\
0 & 0 & 5 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
,\eta =
\begin{bmatrix}
0 & 0 & 0 & 1
\end{bmatrix}
,\xi =
\begin{bmatrix}
0 \\
0 \\
0 \\
1
\end{bmatrix}
,
\end{equation*}
then
\begin{equation*}
\phi ^{2}=4\phi +5(I_{4}-\eta \otimes \xi ).
\end{equation*}
Thus $
\mathbb{R}
^{4}$ has an almost\ quadratic $\phi $-structure.
\end{example}
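A short numerical verification of this example (not part of the original text), with $\xi =(0,0,0,1)^{T}$ so that $\eta \otimes \xi $ acts as $X\mapsto \eta (X)\xi $:
\begin{verbatim}
# Check phi^2 = 4*phi + 5*(I - eta (x) xi) on R^4.
import numpy as np

phi = np.array([[2., 1., 0., 0.],
                [9., 2., 0., 0.],
                [0., 0., 5., 0.],
                [0., 0., 0., 0.]])
eta = np.array([0., 0., 0., 1.])
xi = np.array([0., 0., 0., 1.])
assert np.allclose(phi @ phi,
                   4 * phi + 5 * (np.eye(4) - np.outer(xi, eta)))
\end{verbatim}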
\begin{theorem}
Every differentiable manifold $M$ endowed with an almost quadratic $\phi $
-structure $(\phi ,\eta ,\xi )$ admits an associated Riemannian metric.
\begin{proof}
Let $\tilde{h}$ be any Riemannian metric. Putting
\begin{equation*}
h(X,Y)=\tilde{h}(\phi ^{2}X,\phi ^{2}Y)+\eta (X)\eta (Y),
\end{equation*}
we have $\eta (X)=h(X,\xi ).$ We now define $g$ by
\begin{equation*}
g(X,Y)=\frac{1}{\alpha +\delta }[\alpha h(X,Y)+\beta h(\phi X,\phi Y)+\frac{
\gamma }{2}(h(\phi X,Y)+h(X,\phi Y))+\delta \eta (X)\eta (Y)]\text{,}
\end{equation*}
where $\alpha ,\beta ,\gamma ,\delta ,q$ are nonzero constants satisfying $
\beta q=p\frac{\gamma }{2}+\alpha ,$ $\alpha +\delta \neq 0.$ It is clearly
seen that
\begin{equation*}
g(\phi X,\phi Y)=pg(\phi X,Y)+q(g(X,Y)-\eta (X)\eta (Y))
\end{equation*}
for any $X,Y\in \Gamma (TM).$
\end{proof}
\end{theorem}
\begin{remark}
If we choose $\alpha =\delta =q,\beta =\gamma =1$, then we have $p=0.$ In
this case, we obtain Theorem 4.1 of \cite{DK}.
\end{remark}
\begin{proposition}
Let $(M,g,\nabla ,\phi ,\xi ,\eta )$ be a $(\beta ,\phi )$-Kenmotsu
quadratic metric manifold. Then quadratic structure $\phi $ is integrable,
that is, the Nijenhuis tensor $N_{\phi }\equiv 0.$
\begin{proof}
Using (\ref{CONTA1}) in (\ref{M4}), we have
\begin{eqnarray}
N_{\phi }(X,Y) &=&\phi ^{2}[X,Y]+[\phi X,\phi Y]-\phi \lbrack \phi X,Y]-\phi
\lbrack X,\phi Y] \notag \\
&=&p\phi \lbrack X,Y]+q([X,Y]-\eta ([X,Y])\xi )+\nabla _{\phi X}\phi Y
\notag \\
&&-\nabla _{\phi Y}\phi X-\phi (\nabla _{\phi X}Y-\nabla _{Y}\phi X)-\phi
(\nabla _{X}\phi Y-\nabla _{\phi Y}X) \notag \\
&=&p\phi \nabla _{X}Y-p\phi \nabla _{Y}X+q\nabla _{X}Y-q\nabla _{Y}X-q\eta
([X,Y])\xi \notag \\
&&+(\nabla _{\phi X}\phi )Y-(\nabla _{\phi Y}\phi )X+\phi \nabla _{Y}\phi
X-\phi \nabla _{X}\phi Y. \label{n1}
\end{eqnarray}
for $X,Y\in \Gamma (TM)$. In order to prove this proposition, we first note that
\begin{eqnarray}
p\phi \nabla _{X}Y-\phi \nabla _{X}\phi Y &=&p\phi \nabla _{X}Y+(\nabla
_{X}\phi )\phi Y-\nabla _{X}\phi ^{2}Y \notag \\
&&\overset{(\ref{CONTA1})}{=}-p(\nabla _{X}\phi )Y+(\nabla _{X}\phi )\phi
Y-q\nabla _{X}Y \notag \\
&&+qX(\eta (Y))\xi +q(\eta (Y))\nabla _{X}\xi . \label{n2}
\end{eqnarray}
If we write the last equation in (\ref{n1}), we get
\begin{eqnarray}
N_{\phi }(X,Y) &=&-p(\nabla _{X}\phi )Y+p(\nabla _{Y}\phi )X+(\nabla
_{X}\phi )\phi Y-(\nabla _{Y}\phi )\phi X \notag \\
&&+(\nabla _{\phi X}\phi )Y-(\nabla _{\phi Y}\phi )X+q(X\eta (Y)\xi -Y\eta
(X)\xi -\eta ([X,Y])\xi ) \notag \\
&&+q(\eta (Y)\nabla _{X}\xi -\eta (X)\nabla _{Y}\xi ). \label{n3}
\end{eqnarray}
Employing (\ref{K1}), (\ref{K2}) and (\ref{CONTA1}) in (\ref{n3}), we deduce that
\begin{eqnarray*}
N_{\phi }(X,Y) &=&q(X\eta (Y)\xi -Y\eta (X)\xi -\eta ([X,Y])\xi ) \\
&=&0.
\end{eqnarray*}
The last equality follows since $d\eta =0$ by (\ref{K2}). This completes the proof of the proposition.
\end{proof}
\end{proposition}
\section{Quadratic metric $\protect \phi $-hypersurfaces of metallic
Riemannian manifolds}
\begin{theorem}
Let $\tilde{M}^{n+1}$ be a differentiable manifold with a metallic structure $
J $ and let $M^{n}$ be a hypersurface of $\tilde{M}^{n+1}.$ Then there is an
almost quadratic $\phi $-structure $(\phi ,\eta ,\xi )$ on $M^{n}.$
\begin{proof}
Denote by $\nu $ a unit normal vector field of $M^{n}.$ For any vector field $
X$ tangent to $M^{n}$, we put
\begin{eqnarray}
JX &=&\phi X+\eta (X)\nu ,\text{ \ \ } \label{1} \\
J\nu &=&q\xi +p\nu \label{2} \\
J\xi &=&\nu , \label{3}
\end{eqnarray}
where $\phi $ is an $(1,1)$ tensor field on $M^{n}$, $\xi $ $\in \Gamma (TM)$
and $\eta $ is a 1-form such that $\eta (\xi )=1$ and $\eta \circ \phi =0.$
On applying the operator $J$ on the above equality (\ref{1}) and using (\ref
{2}) we have
\begin{eqnarray}
J^{2}X &=&J(\phi X)+\eta (X)J\nu \notag \\
&=&\phi ^{2}X+\eta (X)(q\xi +p\nu ). \label{4}
\end{eqnarray}
Using (\ref{M1}) in (\ref{4})
\begin{equation*}
p\phi X+p\eta (X)\nu +qX=\phi ^{2}X+\eta (X)(q\xi +p\nu ).
\end{equation*}
Hence, we are led to the conclusion
\begin{equation}
\phi ^{2}X=p\phi X+q(X-\eta (X)\xi ). \label{5}
\end{equation}
\end{proof}
\end{theorem}
Let $M^{n}$ be a hypersurface of an $(n+1)$-dimensional metallic Riemannian
manifold $\tilde{M}^{n+1}$ and let $\nu $ be a global unit normal vector
field on $M^{n}$. Denote by $\tilde{\nabla}$ the Levi-Civita connection with
respect to the Riemannian metric $\tilde{g}$ of $\tilde{M}^{n+1}$. Then the
Gauss and Weingarten formulas are given respectively by
\begin{equation}
\tilde{\nabla}_{X}Y=\nabla _{X}Y+g(AX,Y)\nu , \label{6}
\end{equation}
\begin{equation}
\tilde{\nabla}_{X}\nu =-AX \label{7}
\end{equation}
for any $X,Y\in \Gamma (TM)$, where $g$ denotes the Riemannian metric of $
M^{n}$ induced from $\tilde{g}$ and $A$ is the shape operator of $M^{n}$.
\begin{proposition}
\label{HYPERSURFACES}Let $(\tilde{M}^{n+1},<,>,\tilde{\nabla},J)$ be a
locally metallic Riemannian manifold. If $(M^{n},g,\nabla ,\phi )$ is a
quadratic metric $\phi $-hypersurface of $\tilde{M}^{n+1}$, then
\begin{equation}
(\nabla _{X}\phi )Y=\eta (Y)AX+g(AX,Y)\xi , \label{8}
\end{equation}
\begin{equation}
\nabla _{X}\xi =pAX-\phi AX,\text{ }A\xi =0, \label{9a}
\end{equation}
and
\begin{equation}
(\nabla _{X}\eta )Y=pg(AX,Y)-g(AX,\phi Y) \label{9}
\end{equation}
\end{proposition}
\begin{proof}
If we take the covariant derivative of the metallic structure tensor $J$
with respect to $X$ and use (\ref{1})-(\ref{3}) together with the Gauss and
Weingarten formulas, we get
\begin{eqnarray}
0 &=&(\nabla _{X}\phi )Y-\eta (Y)AX-qg(AX,Y)\xi \label{9b} \\
&&+(g(AX,\phi Y)+X(\eta (Y))-\eta (\nabla _{X}Y)-pg(AX,Y))\nu . \notag
\end{eqnarray}
If we identify the tangential components and the normal components of the
equation (\ref{9b}), respectively, we have
\begin{equation}
(\nabla _{X}\phi )Y-\eta (Y)AX-qg(AX,Y)\xi =0. \label{9c}
\end{equation}
\begin{equation}
g(AX,\phi Y)+X(\eta (Y))-\eta (\nabla _{X}Y)-pg(AX,Y)=0. \label{9d}
\end{equation}
Using the compatibility condition of $J$, we have
\begin{eqnarray}
g(JX,JY) &=&pg(X,JY)+qg(X,Y) \notag \\
&&\overset{(\ref{1})}{=}pg(X,\phi Y)+qg(X,Y). \label{9e}
\end{eqnarray}
Expressed in another way, we obtain
\begin{eqnarray}
&&g(JX,JY)\overset{(\ref{1})}{=}g(\phi X,\phi Y)+\eta (X)\eta (Y), \notag \\
&&\overset{(\ref{MR})}{=}pg(X,\phi Y)+q(g(X,Y)-\eta (X)\eta (Y)) \notag \\
&&+\eta (X)\eta (Y) \notag \\
&=&pg(X,\phi Y)+qg(X,Y)+(1-q)\eta (X)\eta (Y). \label{9f}
\end{eqnarray}
Considering (\ref{9e}) and (\ref{9f}), we get $q=1$. By (\ref{9c}) we arrive
at (\ref{8}). If we put $Y=\xi $ in (\ref{9c}), we get
\begin{equation}
\phi \nabla _{X}\xi =-AX-g(AX,\xi )\xi . \label{10}
\end{equation}
Taking the inner product of both sides of (\ref{10}) with $\xi $, we have $A\xi =0$.
Applying $\phi $ to both sides of the equation (\ref{10}) and using $A\xi =0$, we
obtain
\begin{eqnarray*}
-\phi AX &=&p\phi \nabla _{X}\xi +(\nabla _{X}\xi -\eta (\nabla _{X}\xi )\xi
) \\
&&\overset{(\ref{10})}{=}-pAX+\nabla _{X}\xi .
\end{eqnarray*}
Hence we arrive at the first equation of (\ref{9a}). With the help of (\ref{9a}), we
readily obtain (\ref{9}). This completes the proof.
\end{proof}
\begin{proposition}[\protect \cite{blair}]
\label{killing}Let $(M,g)$ be a Riemannian manifold and let $\nabla $ be the
Levi-Civita connection on $M$ induced by $g$. For every vector field $X$ on $
M$, the following conditions are equivalent:
(1) $X$ is a Killing vector field; that is, $L_{X}g=0$.
(2) $g(\nabla _{Y}X,Z)+g(\nabla _{Z}X,Y)=0$ for all $Y,Z\in \chi (M)$.
\end{proposition}
\begin{proposition}
Let $(M^{n},g,\nabla ,\phi ,\eta ,\xi )$ be a quadratic metric $\phi $
-hypersurface of a locally metallic Riemannian manifold $(\tilde{M}^{n+1},
\tilde{g},\tilde{\nabla},J)$. The characteristic vector field $\xi $ is a
Killing vector field if and only if $\phi A+A\phi =2pA$.
\end{proposition}
\begin{proof}
From Proposition \ref{killing}, we have
\begin{equation*}
g(\nabla _{X}\xi ,Y)+g(\nabla _{Y}\xi ,X)=0.
\end{equation*}
Making use of (\ref{9a}) in the last equation, we get
\begin{equation*}
pg(AX,Y)-g(\phi AX,Y)+pg(AY,X)-g(\phi AY,X)=0.
\end{equation*}
Using the symmetric property of $A$ and $\phi $, we obtain
\begin{equation}
2pg(AX,Y)=g(\phi AX,Y)+g(A\phi X,Y). \label{irem}
\end{equation}
We arrive at the requested equation from (\ref{irem}).
\end{proof}
\begin{proposition}
\label{SINEM}If $(M^{n},g,\nabla ,\phi ,\xi )$ is a $(\beta ,\phi )$
-Kenmotsu quadratic hypersurface of a locally metallic Riemannian manifold
on $(\tilde{M}^{n+1},\tilde{g},\tilde{\nabla},J)$, then $\phi A=A\phi $ and $
A^{2}=\beta pA+\beta ^{2}(I-\eta \otimes \xi )$.
\end{proposition}
\begin{proof}
Since $d\eta =0,$ using (\ref{9a}), we have
\begin{eqnarray*}
0 &=&g(Y,\nabla _{X}\xi )-g(X,\nabla _{Y}\xi ) \\
&=&pg(Y,AX)-g(Y,\phi AX)-pg(X,AY)+g(X,\phi AY) \\
&=&g(A\phi X-\phi AX,Y).
\end{eqnarray*}
So, we get $\phi A=A\phi $. By (\ref{K1}) and (\ref{8}), we get
\begin{equation*}
\beta (g(X,\phi Y)\xi +\eta (Y)\phi X)=\eta (Y)AX+g(AX,Y)\xi .
\end{equation*}
Taking the inner product of both sides of the last equation with $\xi $, we obtain
\begin{equation*}
\beta g(X,\phi Y)=g(AX,Y).
\end{equation*}
Namely
\begin{equation}
\beta \phi X=AX. \label{mert}
\end{equation}
Putting $AX$ \ instead of $X$ and using (\ref{5}) in (\ref{mert}), we get $
A^{2}X=\beta pAX+\beta ^{2}(X-\eta (X)\xi ).$ This completes the proof.
\end{proof}
With the help of (\ref{mert}), we obtain the following.
\begin{corollary}
Let $(M^{n},g,\nabla ,\phi ,\xi )$ be a cosymplectic quadratic metric $\phi $
-hypersurface of a locally metallic Riemannian manifold. Then $M$ is totally
geodesic.
\end{corollary}
\begin{remark}
C. E. Hretcanu and M. Crasmareanu \cite{H3} investigated some properties
of the induced structure on a hypersurface of a metallic Riemannian
manifold. The argument in Proposition \ref{HYPERSURFACES}, however, is aimed at
obtaining a quadratic $\phi $-hypersurface of a metallic Riemannian manifold. In the
same paper, they proved that the induced structure on $M$ is parallel with
respect to the induced Levi-Civita connection if and only if $M$ is totally geodesic.
\end{remark}
By Proposition \ref{HYPERSURFACES}, we have the following.
\begin{proposition}
Let $(M^{n},g,\nabla ,\phi ,\xi )$ be a quadratic metric $\phi $
-hypersurface of a locally metallic Riemannian manifold.\ Then
\begin{equation*}
R(X,Y)\xi =p((\nabla _{X}A)Y-(\nabla _{Y}A)X)-\phi ((\nabla _{X}A)Y-(\nabla
_{Y}A)X),
\end{equation*}
for any $X,Y\in \Gamma (TM).$
\end{proposition}
\begin{corollary}
Let $(M^{n},g,\nabla ,\phi ,\xi )$ be a quadratic metric $\phi $-
hypersurface of a locally metallic Riemannian manifold.\ If the second
fundamental form is parallel, then $R(X,Y)\xi =0.$
\end{corollary}
\end{document} |
\begin{document}
\title{The Ring of Support-Classes of $\mathrm{SL}_2(\mathbb F_q)$}
{\sl Abstract\footnote{Keywords: Support-classes, Association Scheme, Representation Theory.
Math. class: Primary: 20G40 , Secondary: 05E30}: We introduce and study a subring $\mathcal{SC}$ of $\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$ obtained by summing elements of $\mathrm{SL}_2(\mathbb F_q)$
according to their support. The ring $\mathcal SC$ can be used
for the construction of several association schemes.}
\section{Main results}
Summing elements of the finite group $\mathrm{SL}_2(\mathbb F_q)$
according to their support (locations of non-zero matrix coefficients),
we get seven
elements (six when working over $\mathbb F_2$) in the integral group-ring
$\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$.
Integral linear combinations of these seven elements
form a subring $\mathcal{SC}$, called the \emph{ring of support classes}, of the integral group-ring
$\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$. Supposing $q>2$, we thus get
a $7$-dimensional algebra
$\mathcal{SC}_{\mathbb K}=
\mathcal{SC}\otimes_{\mathbb Z}\mathbb K$ over a field $\mathbb K$
when considering $\mathbb K-$linear combinations.
This paper is devoted to the definition
and the study of a few features of $\mathcal{SC}$.
More precisely, in Section \ref{sectringsuppcl} we prove that
the ring of support-classes $\mathcal{SC}$
is indeed a ring by computing its structure-constants.
Section \ref{sectalgprop} describes the structure
of $\mathcal{SC}_{\mathbb Q}=\mathcal{SC}\otimes_{\mathbb Z}\mathbb Q$ as a
semi-simple algebra independent of $q$ for $q>2$.
In Section \ref{sectass} we recall the definition of association schemes
and use $\mathcal{SC}$ for the construction of
hopefully interesting examples.
Finally, we study a few representation-theoretic aspects in
Section \ref{sectrepr}.
\section{The ring of support-classes}\label{sectringsuppcl}
Given subsets $\mathcal A,\mathcal B,\mathcal C,\mathcal D$ of a finite field $\mathbb F_q$,
we denote by
$$\left(\begin{array}{cc}\mathcal A&\mathcal B\\\mathcal C&\mathcal D\end{array}\right)=\sum_{(a,b,c,d)\in
\mathcal A\times \mathcal B\times \mathcal C\times \mathcal D,ad-bc=1}\left(\begin{array}{cc}a&b\\c&d
\end{array}\right)$$
the element of $\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$
obtained by summing all matrices of $\mathrm{SL}_2(\mathbb F_q)$
with coefficients $a\in \mathcal A,b\in \mathcal B,c\in \mathcal C$ and $d\in \mathcal D$.
Identifying $0$ with the singleton subset $\{0\}$ of $\mathbb F_q$ and
denoting by $\mathbb F_q^*=\mathbb F_q\setminus\{0\}$
the set of all units in $\mathbb F_q$, we consider the seven elements
\begin{eqnarray*}
&&A=\left(\begin{array}{cc}\mathbb F_q^*&0\\0&\mathbb F_q^*\end{array}\right),\
B=\left(\begin{array}{cc}0&\mathbb F_q^*\\\mathbb F_q^*&0\end{array}\right),\
C=\left(\begin{array}{cc}\mathbb F_q^*&\mathbb F_q^*\\\mathbb F_q^*&\mathbb F_q^*\end{array}\right),\\
&&D_+=\left(\begin{array}{cc}\mathbb F_q^*&\mathbb F_q^*\\0&\mathbb F_q^*\end{array}\right),\ D_-=
\left(\begin{array}{cc}\mathbb F_q^*&0\\\mathbb F_q^*&\mathbb F_q^*\end{array}\right),\\
&&E_+=
\left(\begin{array}{cc}\mathbb F_q^*&\mathbb F_q^*\\\mathbb F_q^*&0\end{array}\right),\ E_-=\left(\begin{array}{cc}0&\mathbb F_q^*\\\mathbb F_q^*&\mathbb F_q^*\end{array}\right).
\end{eqnarray*}
corresponding to all possible supports of matrices in
$\mathrm{SL}_2(\mathbb F_q)$.
Over $\mathbb F_2$, the element $C$ is of course missing (and the remaining
elements are simply the six matrices of $\mathrm{SL}_2(\mathbb F_2)$).
For the sake of concision,
we will always assume in the sequel that the field $\mathbb F_q$ has more than $2$ elements
(there is however nothing wrong with finite fields of characteristic $2$
having at least $4$ elements).
We denote by
$$\mathcal{SC}=\mathbb Z A+\mathbb Z B+\mathbb ZC+\mathbb ZD_++\mathbb ZD_-
+\mathbb Z E_++\mathbb Z E_-$$
the free $\mathbb Z-$module of rank seven spanned by these seven elements.
We call $\mathcal{SC}$ the \emph{ring of support-classes of
$\mathrm{SL}_2(\mathbb F_q)$}, a terminology motivated by
our main result:
\begin{thm}\label{mainthma} $\mathcal{SC}$ is a
subring of the integral group-ring $\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$.
\end{thm}
The construction of $\mathcal{SC}$ can be carried over to
the projective special linear groups $\mathrm{PSL}_2(\mathbb F_q)$
without difficulties by dividing all structure-constants by $2$
if $q$ is odd. The obvious modifications are left to the reader.
Another obvious variation is to work with matrices in
$\mathrm{GL}_2(\mathbb F_q)$. This multiplies all
structure-constants by $(q-1)$ (respectively by $m$
if working with the subgroup of matrices in $\mathrm{GL}_2(\mathbb F_q)$
having their determinants in a fixed multiplicative
subgroup $M\subset \mathbb F_q^*$ with $m$ elements).
We hope to address a few other variations of our main construction
in a future paper.
Products among generators of $\mathcal{SC}$ are given by
\begin{eqnarray*}
AX&=&XA=(q-1)X\hbox{ for }X\in\{A,B,C,D_\pm,E_\pm\},\\
B^2&=&(q-1)A,\\
BC=CB&=&(q-1)C,\\
BD_+=D_-B&=&(q-1)E_-,\\
BD_-=D_+B&=&(q-1)E_+,\\
BE_+=E_-B&=&(q-1)D_-,\\
BE_-=E_+B&=&(q-1)D_+,\\
C^2&=&(q-1)^2(q-2)(A+B)+(q-1)(q-3)(q-4)C\\
&&\ +(q-1)(q-2)(q-3)(D_++D_-+E_++E_-),\\
CD_+=CE_-&=&(q-1)(q-3)C+(q-1)(q-2)(D_-+E_+),\\
CD_-=CE_+&=&(q-1)(q-3)C+(q-1)(q-2)(D_++E_-),\\
D_+C=E_+C&=&(q-1)(q-3)C+(q-1)(q-2)(D_-+E_-),\\
D_-C=E_-C&=&(q-1)(q-3)C+(q-1)(q-2)(D_++E_+),\\
D_+^2=E_+E_-&=&(q-1)^2A+(q-1)(q-2)D_+,\\
D_+D_-=E_+^2&=&(q-1)(C+E_-),\\
D_-D_+=E_-^2&=&(q-1)(C+E_+),\\
D_+E_+=E_+D_-&=&(q-1)^2B+(q-1)(q-2)E_+,\\
E_+D_+=D_+E_-&=&(q-1)(C+D_-),\\
E_-D_+=D_-E_-&=&(q-1)^2B+(q-1)(q-2)E_-,\\
D_-^2=E_-E_+&=&(q-1)^2A+(q-1)(q-2)D_-,\\
D_-E_+=E_-D_-&=&(q-1)(C+D_+).\\
\end{eqnarray*}
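These formulae can also be checked by brute force over a small field. The following minimal Python sketch (for $q=5$; it is not part of the proof given below) builds the class sums inside the group-ring and verifies a few of the products.
\begin{verbatim}
# Brute-force check of some structure constants of SC for q = 5.
from itertools import product
from collections import Counter

q = 5
G = [(a, b, c, d) for a, b, c, d in product(range(q), repeat=4)
     if (a * d - b * c) % q == 1]

def support_class(pattern):          # pattern: which entries are nonzero
    return Counter(g for g in G if tuple(x != 0 for x in g) == pattern)

A  = support_class((True, False, False, True))
B  = support_class((False, True, True, False))
C  = support_class((True, True, True, True))
Dp = support_class((True, True, False, True))
Dm = support_class((True, False, True, True))
Ep = support_class((True, True, True, False))
Em = support_class((False, True, True, True))

def mul(X, Y):                       # product in Z[SL_2(F_q)]
    Z = Counter()
    for (a, b, c, d), x in X.items():
        for (e, f, g, h), y in Y.items():
            Z[((a*e + b*g) % q, (a*f + b*h) % q,
               (c*e + d*g) % q, (c*f + d*h) % q)] += x * y
    return Z

def comb(*terms):                    # integer combination of class sums
    Z = Counter()
    for coeff, X in terms:
        for elt, x in X.items():
            Z[elt] += coeff * x
    return Z

assert mul(Dp, Dm) == comb((q - 1, C), (q - 1, Em))
assert mul(Dp, Dp) == comb(((q - 1) ** 2, A), ((q - 1) * (q - 2), Dp))
assert mul(Dp, C) == comb(((q - 1) * (q - 3), C),
                          ((q - 1) * (q - 2), Dm), ((q - 1) * (q - 2), Em))
assert mul(C, C) == comb(((q - 1) ** 2 * (q - 2), A),
                         ((q - 1) ** 2 * (q - 2), B),
                         ((q - 1) * (q - 3) * (q - 4), C),
                         ((q - 1) * (q - 2) * (q - 3), Dp),
                         ((q - 1) * (q - 2) * (q - 3), Dm),
                         ((q - 1) * (q - 2) * (q - 3), Ep),
                         ((q - 1) * (q - 2) * (q - 3), Em))
\end{verbatim}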
Easy consistency checks of these formulae are given by the antiautomorphisms
$\sigma$ and $\tau$ obtained respectively by matrix-inversion and matrix-transposition. Their composition $\sigma\circ \tau=\tau\circ \sigma$ is
of course an involutive automorphism of $\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$
which restricts to
an automorphism of $\mathcal{SC}$. It coincides on $\mathcal{SC}$
with the action of the inner automorphism $X\longmapsto
\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)X
\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$ of
$\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$, fixes $A,B,C$ and
transposes the elements of the two pairs $\{D_+,D_-\}$ and $\{E_+,E_-\}$.
\begin{rem} The construction of the ring $\mathcal{SC}$ described by Theorem
\ref{mainthma} does not generalise to the matrix-algebra of all
$2\times 2$ matrices over $\mathbb F_q$. Indeed,
$\left(\begin{array}{cc}\mathbb F_q^*&0\\\mathbb F_q^*&0\end{array}\right)
\left(\begin{array}{cc}\mathbb F_q^*&\mathbb F_q^*\\0&0\end{array}\right)$
equals, up to a factor $(q-1)$, the sum of all $(q-1)^3$ possible
rank $1$ matrices with all four coefficients in $\mathbb F_q^*$.
Square rank-one matrices of any size behave however rather well: the set
of all $(2^n-1)^2$ possible sums of rank $1$ matrices of size $n\times n$ over $\mathbb F_q$
with prescribed support is a $\mathbb Z$-basis of a ring (defined by
extending the matrix product bilinearly), after identifying the
zero matrix with $0$.
\end{rem}
\subsection{Proof of Theorem \ref{mainthma}}
We show that the formulae for the products are correct.
We are however not going to prove all $7^2=49$
possible identities: all omitted cases are similar
and can be derived by symmetry arguments, use of
the antiautomorphisms given by matrix-inversion and
transposition, or (left/right)-multiplication by
$B$.
Products with $A$ or $B$ are easy and left to the reader.
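For instance, writing $B=\sum_{b\in\mathbb F_q^*}\left(\begin{array}{cc}0&b\\-1/b&0\end{array}\right)$, we have
$$\left(\begin{array}{cc}0&b_1\\-1/b_1&0\end{array}\right)
\left(\begin{array}{cc}0&b_2\\-1/b_2&0\end{array}\right)=
\left(\begin{array}{cc}-b_1/b_2&0\\0&-b_2/b_1\end{array}\right)$$
and every diagonal unimodular matrix arises in this way from exactly $q-1$ pairs $(b_1,b_2)$, showing $B^2=(q-1)A$.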
We start with the easy product $D_+^2$
(the products $$D_+E_+,E_+D_-,E_-D_+,D_-E_-,D_-^2,E_-E_+$$
are similar and left to the reader). Since $A+D_+$ is the sum of all elements
of the group of unimodular upper-triangular
matrices, a group with $q(q-1)$ elements,
we have $(A+D_+)^2=q(q-1)(A+D_+)$, showing that
\begin{eqnarray*}
D_+^2&=&(A+D_+)^2-2AD_+-A^2\\
&=&q(q-1)(A+D_+)-2(q-1)D_+-(q-1)A\\
&=&(q-1)^2A+(q-1)(q-2)D_+.
\end{eqnarray*}
For $D_+D_-$ we consider
$$\left(\begin{array}{cc}a_1&b_1\\0&1/a_1\end{array}\right)
\left(\begin{array}{cc}a_2&0\\b_2&1/a_2\end{array}\right)
=\left(\begin{array}{cc}a_1a_2+b_1b_2&b_1/a_2\\b_2/a_1&1/(a_1a_2)
\end{array}\right).
$$
Every unimodular matrix $\left(\begin{array}{cc}\alpha&\beta\\
\gamma&\delta\end{array}\right)$
with $\beta,\gamma,\delta$ in $\mathbb F_q^*$ can be realised as
a summand in the product $D_+D_-$ in exactly $(q-1)$ different
ways by choosing $a_1$ freely in $\mathbb F_q^*$ and by setting
$$b_1=\frac{\beta}{a_1\delta},a_2=\frac{1}{a_1\delta},b_2=a_1\gamma.$$
This shows $D_+D_-=(q-1)(C+E_-)$. The products
$$E_+^2,D_-D_+,E_-^2,E_+D_+,D_+E_-,D_-E_+,E_-D_-$$
are similar.
In order to compute $D_+C$, we consider $(D_++A)(C+D_-+E_+)$.
Since
$$\left(\begin{array}{cc}
a_1&b_1\\0&1/a_1\end{array}\right)
\left(\begin{array}{cc}
a_2&b_2\\c_2&(1+b_2c_2)/a_2\end{array}\right)
=\left(\begin{array}{cc}
a_1a_2+b_1c_2&a_1b_2+b_1(1+b_2c_2)/a_2\\
c_2/a_1&(1+b_2c_2)/(a_1a_2)\end{array}\right),$$
every unimodular
matrix $\left(\begin{array}{cc}\alpha&\beta\\\gamma&\delta\end{array}\right)$
with $\gamma\in\mathbb F_q^*$
can be realised
in exactly $(q-1)^2$ ways as a summand in $(D_++A)(C+D_-+E_+)$ by
choosing $a_1,a_2$ freely in $\mathbb F_q^*$ and by setting
$$b_1=\frac{\alpha-a_1a_2}{a_1\gamma},b_2=\frac{a_1a_2\delta-1}{a_1\gamma},
c_2=a_1\gamma.$$
This shows $(D_++A)(C+D_-+E_+)=(q-1)^2(B+C+D_-+E_++E_-)$. We get thus
\begin{eqnarray*}
&&D_+C\\
&=&(D_++A)(C+D_-+E_+)-A(C+D_-+E_+)-D_+D_--D_+E_+\\
&=&(q-1)^2(B+C+D_-+E_++E_-)-(q-1)(C+D_-+E_+)\\
&&\ -(q-1)(C+E_-)-(q-1)^2B-(q-1)(q-2)E_+\\
&=&(q-1)(q-3)C+(q-1)(q-2)(D_-+E_-).
\end{eqnarray*}
The products
$$CD_+,CE_-,CD_-,CE_+,E_+C,D_-C,E_-C$$
are similar.
Using all previous products, the formula for $C^2$ can now be recovered
from $(A+B+C+D_++D_-+E_++E_-)^2=(q^3-q)(A+B+C+D_++D_-+E_++E_-)$.
We have indeed
\begin{eqnarray*}
C^2&=&(A+B+C+D_++D_-+E_++E_-)^2-\sum_{(X,Y)\not=(C,C)}XY
\end{eqnarray*}
where the sum is over all elements of
$\{A,B,C,D_+,D_-,E_+,E_-\}^2\setminus\{(C,C)\}$. All products on the right-hand side
are known and thus determine $C^2$.
Alternatively, the structure-constants of $C^2$
have to be polynomials of degree at most $3$ in $q$.
They can thus also be computed by interpolating the coefficients
in $4$ explicit examples. (Using divisibility by $q-1$,
computing $3$ examples is in fact enough.)
The existence of these formulae proves Theorem \ref{mainthma}.
$\Box$
\subsection{Matrices for left-multiplication by generators}
Left-multiplications by generators
with respect to the basis $A,B,C,D_+,D_-,E_+,E_-$
of $\mathcal{SC}$ are encoded by the matrices
$$M_B=(q-1)\left(\begin{array}{ccccccc}
0&1&0&0&0&0&0\\
1&0&0&0&0&0&0\\
0&0&1&0&0&0&0\\
0&0&0&0&0&0&1\\
0&0&0&0&0&1&0\\
0&0&0&0&1&0&0\\
0&0&0&1&0&0&0\\
\end{array}\right)$$
$$M_C=(q-1)
\left(\begin{array}{ccccccc}
0&0&(q-1)(q-2)&0&0&0&0\\
0&0&(q-1)(q-2)&0&0&0&0\\
1&1&(q-3)(q-4)&q-3&q-3&q-3&q-3\\
0&0&(q-2)(q-3)&0&q-2&q-2&0\\
0&0&(q-2)(q-3)&q-2&0&0&q-2\\
0&0&(q-2)(q-3)&q-2&0&0&q-2\\
0&0&(q-2)(q-3)&0&q-2&q-2&0\\
\end{array}\right)$$
$$M_{D_+}=(q-1)\left(\begin{array}{ccccccc}
0&0&0&q-1&0&0&0\\
0&0&0&0&0&q-1&0\\
0&0&q-3&0&1&0&1\\
1&0&0&q-2&0&0&0\\
0&0&q-2&0&0&0&1\\
0&1&0&0&0&q-2&0\\
0&0&q-2&0&1&0&0\\
\end{array}\right)$$
$$M_{E_+}=(q-1)\left(\begin{array}{ccccccc}
0&0&0&0&0&0&q-1\\
0&0&0&0&q-1&0&0\\
0&0&q-3&1&0&1&0\\
0&1&0&0&0&0&q-2\\
0&0&q-2&1&0&0&0\\
1&0&0&0&q-2&0&0\\
0&0&q-2&0&0&1&0\\
\end{array}\right)$$
The remaining matrices are given by $M_A=\frac{1}{q-1}(M_B)^2$,
$M_{D_-}=\alpha M_{D_+}\alpha$ and $M_{E_-}=\alpha M_{E_+}\alpha$
where
$$\alpha=\left(\begin{array}{ccccccc}
1&0&0&0&0&0&0\\
0&1&0&0&0&0&0\\
0&0&1&0&0&0&0\\
0&0&0&0&1&0&0\\
0&0&0&1&0&0&0\\
0&0&0&0&0&0&1\\
0&0&0&0&0&1&0\\
\end{array}\right)$$
is the matrix corresponding to the automorphism $\sigma\circ \tau$.
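Observe that the matrix displayed above for $M_B$ is $(q-1)$ times an involutive permutation matrix; hence
$$M_A=\frac{1}{q-1}(M_B)^2=(q-1)\,\mathrm{Id},$$
in accordance with the relations $AX=XA=(q-1)X$.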
The map $\{A,B,C,D_+,D_-,E_+,E_-\}\ni X\longmapsto M_X$ extends of course to an isomorphism between
$\mathcal{SC}$ and
$$\mathbb ZM_A+\mathbb ZM_B+\mathbb ZM_C+\mathbb ZM_{D_+}+\mathbb ZM_{D_-}
+\mathbb ZM_{E_+}+\mathbb ZM_{E_-}$$
which is a ring. Computations are easier and faster in
this matrix-ring than in the subring $\mathcal{SC}$
of $\mathbb Z[\mathrm{SL}_2(\mathbb F_q)]$.
\section{Algebraic properties of $\mathcal{SC}_{\mathbb Q}$}\label{sectalgprop}
\subsection{$\mathcal{SC}_{\mathbb Q}$ as a semisimple algebra}\label{subsectsemisimple}
\begin{thm}\label{thmstructureSC}
The algebra $\mathcal{SC}_{\mathbb Q}=\mathcal{SC}\otimes_{\mathbb Z}\mathbb Q$
is a semi-simple algebra isomorphic to $\mathbb Q\oplus \mathbb Q\oplus \mathbb Q
\oplus M_2(\mathbb Q)$.
\end{thm}
The structure of $\mathcal{SC}_{\mathbb K}=\mathcal{SC}\otimes_{\mathbb Z}\mathbb K$
is of course easy to deduce for any field of characteristic $0$.
The algebra $\mathcal{SC}_{\mathbb K}$ is also semi-simple (and has the same structure) over most finite fields.
Theorem \ref{thmstructureSC} is an easy consequence of the following
computations:
The center of $\mathcal{SC}_{\mathbb Q}$
has rank $4$. It is spanned by the $3$
minimal central idempotents
\begin{eqnarray*}
\pi_1&=&\frac{1}{q^3-q}(A+B+C+D_++D_-+E_++E_-),\\
\pi_2&=&\frac{q-2}{2(q^2-q)}(A+B)+\frac{1}{q(q-1)^2}C-\frac{q-2}{2q(q-1)^2}
(D_++D_-+E_++E_-),\\
\pi_3&=&\frac{1}{2(q+1)}(A-B)+\frac{1}{2(q^2-1)}(-D_+-
D_-+E_++E_-)
\end{eqnarray*}
and by the minimal central idempotent
\begin{eqnarray*}
\pi_4&=&\frac{1}{(q+1)(q-1)^2}(2(q-1)A-2C+(q-2)(D_++D_-)-(E_++E_-))
\end{eqnarray*}
which is however not minimal among all (not necessarily central) idempotents.
The three idempotents $\pi_1,\pi_2,\pi_3$ induce three different
characters (homomorphisms from $\mathcal{SC}_{\mathbb Q}$ into $\mathbb Q$).
Identifying the factor $\mathbb Q\pi_i$ with $\mathbb Q$ (so that $\pi_i$ corresponds to $1\in\mathbb Q$) in each case,
the three homomorphisms are given by
$$\begin{array}{||c||c|c|c|c|c||}
\hline\hline
&A&B&C&D_\pm&E_\pm\\
\hline
\pi_1&q-1&q-1&(q-1)^2(q-2)&(q-1)^2&(q-1)^2\\
\pi_2&q-1&q-1&2(q-1)&1-q&1-q\\
\pi_3&q-1&1-q&0&1-q&q-1\\
\hline\hline
\end{array}$$
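Note that all three characters take the value $q-1$ on $A$, as forced by the relations $AX=XA=(q-1)X$.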
The character associated to $\pi_1$ is of course simply the augmentation map,
counting the number of matrices involved in each generator.
The idempotent $\pi_1+\pi_2+\pi_3+\pi_4=\frac{1}{q-1}A$ is the identity
of $\mathcal{SC}_{\mathbb Q}$.
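As a quick consistency check, the coefficients of $A$ in $\pi_1,\dots,\pi_4$ indeed add up to $\frac{1}{q-1}$:
$$\frac{1}{q^3-q}+\frac{q-2}{2(q^2-q)}+\frac{1}{2(q+1)}+\frac{2}{q^2-1}
=\frac{2+(q^2-q-2)+(q^2-q)+4q}{2q(q^2-1)}=\frac{1}{q-1}.$$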
The idempotent $\pi_4$ projects $\mathcal{SC}_{\mathbb Q}$
homomorphically onto a matrix-algebra of $2\times 2$ matrices.
$\pi_4$ can be written (not uniquely) as a sum of two minimal
non-central idempotents. We have for example $\pi_4=M_{1,1}+M_{2,2}$
where
\begin{eqnarray*}
M_{1,1}&=&\frac{1}{q^2-1}(A-B)+
\frac{1}{2(q^2-1)}(D_++D_--E_+-E_-),\\
M_{2,2}&=&\frac{1}{q^2-1}(A+B)-\frac{2}{(q+1)(q-1)^2}C+\\
&&\quad +
\frac{q-3}{2(q+1)(q-1)^2}(D_++D_-+E_++E_-).\\
\end{eqnarray*}
Considering also
\begin{eqnarray*}
M_{1,2}&=&\frac{1}{2(q-1)^2}(D_+-D_-+E_+-E_-),\\
M_{2,1}&=&\frac{1}{2(q^2-1)}(D_+-D_--E_++E_-),\\
\end{eqnarray*}
the elements
$M_{i,j}$ behave like matrix-units and the map
\begin{eqnarray}\label{exoticisom}
\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\longmapsto
aM_{1,1}+bM_{1,2}+cM_{2,1}+dM_{2,2}
\end{eqnarray}
defines thus an injective ring-homomorphism from the ring of integral $2\times 2$ matrices
into $\mathcal{SC}_{\mathbb Q}=\mathcal{SC}\otimes_{\mathbb Z}\mathbb Q$.
If $\mathbb F_q$ has odd characteristic, the elements $M_{i,j}$ can be
realised in $\mathcal{SC}_{\mathbb F_q}=\mathcal{SC}\otimes_{\mathbb Z}\mathbb F_q$.
In particular, Formula (\ref{exoticisom}) gives an \lq\lq exotic\rq\rq,
non-unital embedding of $\mathrm{SL}_2(\mathbb F_q)$ into
the group-algebra $\mathbb F_q[\mathrm{SL}_2(\mathbb F_q)]$
(in fact, (\ref{exoticisom}) gives an embedding of
$\mathrm{SL}_2(\mathbb F_r)$ into
$\mathbb F_r[\mathrm{SL}_2(\mathbb F_q)]$ whenever the prime power $r$
is coprime to $2(q^2-1)$).
The nice non-central idempotent $$\pi_3+M_{1,1}=\frac{1}{2(q-1)}(A-B)$$
projects $\mathcal{SC}_{\mathbb Q}$ onto the eigenspace of eigenvalue
$1-q$ of the map $X\longmapsto BX$.
The projection of $\mathcal{SC}_{\mathbb Q}$ onto the eigenspace of eigenvalue
$q-1$ of the map $X\longmapsto BX$ is similarly given by
$$\pi_1+\pi_2+M_{2,2}=\frac{1}{2(q-1)}(A+B).$$
\subsection{A few commutative subalgebras of $\mathcal{SC}_{\mathbb Q}$}
The previous subsection shows that the dimension of a commutative subalgebra of
$\mathcal{SC}_{\mathbb Q}$ cannot exceed $5$.
The center of $\mathcal{SC}_{\mathbb Q}$ is of course of rank $4$ and
spanned by the four minimal central idempotents $\pi_1,\dots,\pi_4$
of the previous subsection.
Splitting $\pi_4=M_{1,1}+M_{2,2}$, we get a maximal commutative subalgebra
of rank $5$ by considering the vector space spanned by
the minimal idempotents
$\pi_1,\pi_2,\pi_3,M_{1,1},M_{2,2}$. Equivalently, this vector space
is spanned by $A,B,C,D=D_++D_-,E=E_++E_-$, as shown by the formulae
for $\pi_i$ and $M_{1,1},M_{2,2}$.
In terms of $A,B,C,D,E$, minimal idempotents $\pi_1,\pi_2,\pi_3,M_{1,1},M_{2,2}$ are given by
\begin{eqnarray*}
\pi_1&=&\frac{1}{q^3-q}(A+B+C+D+E),\\
\pi_2&=&\frac{q-2}{2(q^2-q)}(A+B)+\frac{1}{q(q-1)^2}C-\frac{q-2}{2q(q-1)^2}
(D+E),\\
\pi_3&=&\frac{1}{2(q+1)}(A-B)+\frac{1}{2(q^2-1)}(-D+E),\\
M_{1,1}&=&\frac{1}{q^2-1}(A-B)+
\frac{1}{2(q^2-1)}(D-E),\\
M_{2,2}&=&\frac{1}{q^2-1}(A+B)-\frac{2}{(q+1)(q-1)^2}C+\\
&&\quad +
\frac{q-3}{2(q+1)(q-1)^2}(D+E).\\
\end{eqnarray*}
They define five characters $\mathbb Q A+\dots+\mathbb Q E\longrightarrow \mathbb Q$ given by
$$\begin{array}{||c||c|c|c|c|c||}
\hline\hline
&A&B&C&D&E\\
\hline
\pi_1&q-1&q-1&(q-1)^2(q-2)&2(q-1)^2&2(q-1)^2\\
\pi_2&q-1&q-1&2(q-1)&2(1-q)&2(1-q)\\
\pi_3&q-1&1-q&0&2(1-q)&2(q-1)\\
M_{1,1}&q-1&1-q&0&(q-1)^2&-(q-1)^2\\
M_{2,2}&q-1&q-1&2(1-q)(q-2)&(q-1)(q-3)&(q-1)(q-3)\\
\hline\hline
\end{array}$$
Moreover,
$A,B,C,F=D+E$ span a commutative $4$-dimensional subalgebra.
Minimal idempotents are
\begin{eqnarray*}
\pi_1&=&\frac{1}{q^3-q}(A+B+C+F),\\
\pi_2&=&\frac{q-2}{2(q^2-q)}(A+B)+\frac{1}{q(q-1)^2}C-\frac{q-2}{2q(q-1)^2}
F,\\
\pi_3+M_{1,1}&=&\frac{1}{2(q-1)}(A-B),\\
M_{2,2}&=&\frac{1}{q^2-1}(A+B)-\frac{2}{(q+1)(q-1)^2}C+\\
&&\quad +
\frac{q-3}{2(q+1)(q-1)^2}F.\\
\end{eqnarray*}
with character-table
$$\begin{array}{||c||c|c|c|c||}
\hline\hline
&A&B&C&F\\
\hline
\pi_1&q-1&q-1&(q-1)^2(q-2)&4(q-1)^2\\
\pi_2&q-1&q-1&2(q-1)&4(1-q)\\
\pi_3+M_{1,1}&q-1&1-q&0&0\\
M_{2,2}&q-1&q-1&2(1-q)(q-2)&2(q-1)(q-3)\\
\hline\hline
\end{array}$$
The elements $I=A+B$, $C$ and $F=D+E$ span a commutative $3$-dimensional subalgebra
of $\mathcal{SC}_{\mathbb C}$.
Generators of this last algebra are sums of elements in
$\mathrm{SL}_2(\mathbb F_q)$ with supports of given cardinality.
$I$ is the sum of all elements with exactly two non-zero coefficients,
$C$ is the sum of all elements with four non-zero coefficients, and
$F$ is the sum of all elements with exactly three non-zero coefficients.
Products are given by
$IX=XI=2(q-1)X$ for $X\in\{I,C,F\}$
and
\begin{eqnarray*}
C^2&=&(q-1)^2(q-2)I+(q-1)(q-3)(q-4)C+(q-1)(q-2)(q-3)F,\\
CF&=&FC=4(q-1)(q-3)C+2(q-1)(q-2)F,\\
F^2&=&4(q-1)^2I+8(q-1)C+2(q-1)^2F.
\end{eqnarray*}
Idempotents are given by
\begin{eqnarray*}
\pi_1&=&\frac{1}{q^3-q}(I+C+F),\\
\pi_2&=&\frac{q-2}{2(q^2-q)}I+\frac{1}{q(q-1)^2}C-\frac{q-2}{2q(q-1)^2}
F,\\
M_{2,2}&=&\frac{1}{q^2-1}I-\frac{2}{(q+1)(q-1)^2}C+\\
&&\quad +
\frac{q-3}{2(q+1)(q-1)^2}F.\\
\end{eqnarray*}
with character-table
$$\begin{array}{||c||c|c|c||}
\hline\hline
&I&C&F\\
\hline
\pi_1&2(q-1)&(q-1)^2(q-2)&4(q-1)^2\\
\pi_2&2(q-1)&2(q-1)&4(1-q)\\
M_{2,2}&2(q-1)&2(1-q)(q-2)&2(q-1)(q-3)\\
\hline\hline
\end{array}$$
Working over $\mathbb C$ (or over a suitable extension of $\mathbb Q$) and setting
$$\tilde I=\frac{1}{2(q-1)}I,\tilde C=\frac{1}{\sqrt{2(q-1)^3(q-2)}}C,\tilde F=\frac{1}{\sqrt{8(q-1)^3}}F$$
we get products
$XY=\sum_{Z\in \{\tilde I,\tilde C,\tilde F\}}N_{X,Y,Z}Z$
defined by symmetric structure-constants $N_{X,Y,Z}=N_{Y,X,Z}=N_{X,Z,Y}$
for all $X,Y,Z\in \{\tilde I,\tilde C,\tilde F\}$.
Up to symmetric permutations, the structure-constants are given by
\begin{eqnarray*}
N_{\tilde I,X,Y}&=&\delta_{X,Y}\\
N_{\tilde C,\tilde C,\tilde C}&=&\frac{(q-3)(q-4)}{\sqrt{2(q-1)(q-2)}},\\
N_{\tilde C,\tilde C,\tilde F}&=&(q-3)\sqrt{\frac{2}{q-1}}\\
N_{\tilde C,\tilde F,\tilde F}&=&\sqrt{\frac{2(q-2)}{q-1}},\\
N_{\tilde F,\tilde F,\tilde F}&=&\sqrt{\frac{q-1}{2}}
\end{eqnarray*}
where $\delta_{X,Y}=1$ if and only if $X=Y$ and $\delta_{X,Y}=0$ otherwise.
The evaluation at $q=3$ leads to particularly
nice structure constants
with values in $\{0,1\}$.
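Explicitly, at $q=3$ the only non-trivial structure-constants (besides $N_{\tilde I,X,Y}=\delta_{X,Y}$) are
$$N_{\tilde C,\tilde F,\tilde F}=N_{\tilde F,\tilde F,\tilde F}=1,$$
while $N_{\tilde C,\tilde C,\tilde C}=N_{\tilde C,\tilde C,\tilde F}=0$.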
Algebras with generating systems having symmetric structure-constants
and a character taking
real positive values on generators (satisfied by $\pi_1$ for $q>2$)
are sometimes called
\emph{algebraic fusion-algebras}, see for example \cite{Ba}.
\section{Association schemes and Bose-Mesner algebras}
\label{sectass}
An \emph{association scheme} is a set of $d+1$ square matrices
${\mathcal C}_0,\dots,{\mathcal C}_d$ with coefficients in $\{0,1\}$ such that
${\mathcal C}_0$ is the identity-matrix, ${\mathcal C}_0+\dots+{\mathcal C}_d$ is the all-one matrix
and $\mathbb Z{\mathcal C}_0+\dots+\mathbb Z{\mathcal C}_d$ is a commutative ring with
(necessarily integral) structure constants
$p_{i,j}^k=p_{j,i}^k$ defined by ${\mathcal C}_i{\mathcal C}_j={\mathcal C}_j{\mathcal C}_i=\sum_{k=0}^d p_{i,j}^k{\mathcal C}_k$.
An association scheme is \emph{symmetric} if
${\mathcal C}_1,\dots,{\mathcal C}_d$ are symmetric matrices. The algebra (over a field)
generated by the elements ${\mathcal C}_i$ is called a \emph{Bose-Mesner algebra}. See for example the monograph \cite{B} for additional information.
Identifying an element $g$ of $\mathrm{SL}_2(\mathbb F_q)$ with the
permutation-matrix associated to left-multiplication by $g$
we get a commutative association scheme with $d=5$ if $q\geq 4$ by setting
$${\mathcal C}_0=\mathrm{Id}, {\mathcal C}_1=A-\mathrm{Id}, {\mathcal C}_2=B,{\mathcal C}_3=C,{\mathcal C}_4=D_++D_-,{\mathcal C}_5=E_++E_-$$
where we consider sums of
permutation-matrices. All matrices are symmetric. Products with ${\mathcal C}_0,{\mathcal C}_1$
are given by ${\mathcal C}_0X=X{\mathcal C}_0=X, {\mathcal C}_1^2=(q-2){\mathcal C}_0+(q-3){\mathcal C}_1,{\mathcal C}_1Y=Y{\mathcal C}_1=(q-2)Y$
for $X\in\{{\mathcal C}_0,\dots,{\mathcal C}_5\}$ and $Y\in\{{\mathcal C}_2,\dots,{\mathcal C}_5\}$. The remaining products
are given by
\begin{eqnarray*}
{\mathcal C}_2^2&=&(q-1)({\mathcal C}_0+{\mathcal C}_1),\\
{\mathcal C}_2{\mathcal C}_3={\mathcal C}_3{\mathcal C}_2&=&(q-1){\mathcal C}_3\\
{\mathcal C}_2{\mathcal C}_4={\mathcal C}_4{\mathcal C}_2&=&(q-1){\mathcal C}_5\\
{\mathcal C}_2{\mathcal C}_5={\mathcal C}_5{\mathcal C}_2&=&(q-1){\mathcal C}_4\\
{\mathcal C}_3^2&=&(q-1)^2(q-2)({\mathcal C}_0+{\mathcal C}_1+{\mathcal C}_2)+(q-1)(q-3)(q-4){\mathcal C}_3\\
&&\quad +(q-1)(q-2)(q-3)({\mathcal C}_4+{\mathcal C}_5)\\
{\mathcal C}_3{\mathcal C}_4={\mathcal C}_4{\mathcal C}_3&=&2(q-1)(q-3){\mathcal C}_3+(q-1)(q-2)({\mathcal C}_4+{\mathcal C}_5)\\
{\mathcal C}_3{\mathcal C}_5={\mathcal C}_5{\mathcal C}_3&=&2(q-1)(q-3){\mathcal C}_3+(q-1)(q-2)({\mathcal C}_4+{\mathcal C}_5)\\
{\mathcal C}_4^2&=&2(q-1)^2({\mathcal C}_0+{\mathcal C}_1)+2(q-1){\mathcal C}_3\\
&&\quad +(q-1)(q-2){\mathcal C}_4+(q-1){\mathcal C}_5\\
{\mathcal C}_4{\mathcal C}_5={\mathcal C}_5{\mathcal C}_4&=&2(q-1)^2{\mathcal C}_2+2(q-1){\mathcal C}_3\\
&&\quad +(q-1){\mathcal C}_4+(q-1)(q-2){\mathcal C}_5\\
{\mathcal C}_5^2&=&2(q-1)^2({\mathcal C}_0+{\mathcal C}_1)+2(q-1){\mathcal C}_3\\
&&\quad +(q-1)(q-2){\mathcal C}_4+(q-1){\mathcal C}_5
\end{eqnarray*}
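The formula for ${\mathcal C}_1^2$ given above follows, for example, directly from $A^2=(q-1)A$:
$${\mathcal C}_1^2=(A-\mathrm{Id})^2=A^2-2A+\mathrm{Id}=(q-3)A+\mathrm{Id}=(q-2){\mathcal C}_0+(q-3){\mathcal C}_1.$$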
The reader should be warned that ${\mathcal C}_1$ does not behave exactly like $(q-2){\mathcal C}_0$.
We leave it to the reader to write down matrices for multiplications
with basis-elements and to compute the complete list of minimal idempotents.
Additional association schemes are given by
${\mathcal C}_0,{\mathcal C}_1,{\mathcal C}_2,{\mathcal C}_3,{\mathcal C}_4+{\mathcal C}_5$ and ${\mathcal C}_0,{\mathcal C}_1+{\mathcal C}_2,{\mathcal C}_3,{\mathcal C}_4+{\mathcal C}_5$.
It is also possible to split ${\mathcal C}_1$ and/or ${\mathcal C}_2$ according to subgroups
of $\mathbb F_q^*$ into several matrices (or classes, as they are sometimes
called).
We now discuss in a little more detail the smallest
interesting association scheme, with classes ${\tilde{\mathcal C}}_0,{\tilde{\mathcal C}}_1={\mathcal C}_1+{\mathcal C}_2,{\tilde{\mathcal C}}_2={\mathcal C}_3,{\tilde{\mathcal C}}_3={\mathcal C}_4+{\mathcal C}_5$. Products are given by
${\tilde{\mathcal C}}_0X=X{\tilde{\mathcal C}}_0=X$ and
\begin{eqnarray*}
{\tilde{\mathcal C}}_1^2&=&(2q-3){\tilde{\mathcal C}}_0+2(q-2){\tilde{\mathcal C}}_1\\
{\tilde{\mathcal C}}_1{\tilde{\mathcal C}}_2={\tilde{\mathcal C}}_2{\tilde{\mathcal C}}_1&=&(2q-3){\tilde{\mathcal C}}_2\\
{\tilde{\mathcal C}}_1{\tilde{\mathcal C}}_3={\tilde{\mathcal C}}_3{\tilde{\mathcal C}}_1&=&(2q-3){\tilde{\mathcal C}}_3\\
{\tilde{\mathcal C}}_2^2&=&(q-1)^2(q-2)({\tilde{\mathcal C}}_0+{\tilde{\mathcal C}}_1)+(q-1)(q-3)(q-4){\tilde{\mathcal C}}_2\\
&&\quad +(q-1)(q-2)(q-3){\tilde{\mathcal C}}_3\\
{\tilde{\mathcal C}}_2{\tilde{\mathcal C}}_3={\tilde{\mathcal C}}_3{\tilde{\mathcal C}}_2&=&4(q-1)(q-3){\tilde{\mathcal C}}_2+2(q-1)(q-2){\tilde{\mathcal C}}_3\\
{\tilde{\mathcal C}}_3^2&=&4(q-1)^2({\tilde{\mathcal C}}_0+{\tilde{\mathcal C}}_1)+8(q-1){\tilde{\mathcal C}}_2+2(q-1)^2{\tilde{\mathcal C}}_3
\end{eqnarray*}
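These formulae follow from the products computed previously; for instance
$${\tilde{\mathcal C}}_1^2=({\mathcal C}_1+{\mathcal C}_2)^2={\mathcal C}_1^2+2(q-2){\mathcal C}_2+(q-1)({\mathcal C}_0+{\mathcal C}_1)=(2q-3){\tilde{\mathcal C}}_0+2(q-2){\tilde{\mathcal C}}_1.$$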
Matrices $M_0,\dots,M_3$ corresponding to multiplication by ${\tilde{\mathcal C}}_0,\dots,{\tilde{\mathcal C}}_3$
are given by
$$
M_0=\left(\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right),\quad M_1=\left(\begin{array}{cccc}0&2q-3&0&0\\1&2(q-2)&0&0\\0&0&2q-3&0\\0&0&0&2q-3\end{array}\right)$$
$$M_2=\left(\begin{array}{cccc}0&0&(q-1)^2(q-2)&0\\
0&0&(q-1)^2(q-2)&0\\1&2q-3&(q-1)(q-3)(q-4)&4(q-1)(q-3)\\
0&0&(q-1)(q-2)(q-3)&2(q-1)(q-2)\end{array}\right),$$
$$M_3=\left(\begin{array}{cccc}0&0&0&4(q-1)^2\\
0&0&0&4(q-1)^2\\0&0&4(q-1)(q-3)&8(q-1)\\
1&2q-3&2(q-1)(q-2)&2(q-1)^2\end{array}\right)$$
and minimal idempotents are given by
\begin{eqnarray*}
\beta_0&=&\frac{1}{q^3-q}({\tilde{\mathcal C}}_0+{\tilde{\mathcal C}}_1+{\tilde{\mathcal C}}_2+{\tilde{\mathcal C}}_3),\\
\beta_1&=&\frac{q-2}{2(q^2-q)}({\tilde{\mathcal C}}_0+{\tilde{\mathcal C}}_1)+\frac{1}{q(q-1)^2}{\tilde{\mathcal C}}_2-\frac{q-2}{2q(q-1)^2}{\tilde{\mathcal C}}_3,\\
\beta_2&=&\frac{2q-3}{2(q-1)}{\tilde{\mathcal C}}_0-\frac{1}{2(q-1)}{\tilde{\mathcal C}}_1,\\
\beta_3&=&\frac{1}{q^2-1}({\tilde{\mathcal C}}_0+{\tilde{\mathcal C}}_1)-\frac{2}{(q-1)^2(q+1)}{\tilde{\mathcal C}}_2+\frac{q-3}{2(q-1)^2(q+1)}{\tilde{\mathcal C}}_3.
\end{eqnarray*}
The coefficient of ${\tilde{\mathcal C}}_0$ multiplied by $(q^3-q)$ gives the dimension of the associated eigenspace. Eigenvalues (with multiplicities) of the generators, obtained
by evaluating the characters $\beta_0,\dots,\beta_3$ on ${\tilde{\mathcal C}}_0,\dots,{\tilde{\mathcal C}}_3$,
are given by
$$\begin{array}{c||c|cccc}
&\hbox{dim}&{\tilde{\mathcal C}}_0&{\tilde{\mathcal C}}_1&{\tilde{\mathcal C}}_2&{\tilde{\mathcal C}}_3\\
\hline
\beta_0&1&1&2q-3&(q-1)^2(q-2)&4(q-1)^2\\
\beta_1&\frac{(q+1)(q-2)}{2}&1&2q-3&2(q-1)&4(1-q)\\
\beta_2&\frac{q(2q-3)(q+1)}{2}&1&-1&0&0\\
\beta_3&q&1&2q-3&2(1-q)(q-2)&2(q-1)(q-3)
\end{array}$$
\begin{rem} There exist a few more exotic variations
of this construction. An example is given by
partitioning the elements of
$\mathrm{SL}_2(\mathbb F_5)$ according to
the $30$ possible patterns of values of the Legendre symbol $\left(\frac{x}{5}\right)$
on the entries.
Since $-1$ is a square modulo $5$,
these classes are well-defined on $\mathrm{PSL}_2(\mathbb F_5)$
which is isomorphic to the simple group $A_5$.
Details (and a few similar examples)
will hopefully appear in a future paper.
\end{rem}
\section{Representation-theoretic aspects}\label{sectrepr}
In this Section, we work over $\mathbb C$ for the sake of simplicity.
\subsection{Traces}
Left-multiplication
by the identity $\frac{1}{q-1}A$ of $\mathcal{SC}_{\mathbb C}$ defines an idempotent
on $\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$
whose trace equals the dimension $\frac{1}{q-1}(q^3-q)=q(q+1)$ of its image. Indeed,
every non-trivial element of $\mathrm{SL}_2(\mathbb F_q)$ acts by left-multiplication as a permutation-matrix of trace
$0$, while the identity acts with trace $q^3-q$ since it
fixes all $q^3-q$ elements of $\mathrm{SL}_2(\mathbb F_q)$.
A basis of this $q(q+1)$-dimensional
image (the eigenspace for the eigenvalue $1$) of $\frac{1}{q-1}A$ is given by the sums over all
matrices whose rows represent two
fixed distinct elements of the projective line over $\mathbb F_q$.
The traces $\mathrm{tr}(\pi_1),\dots,\mathrm{tr}(\pi_4)$ of the four
minimal central projectors $\pi_1,\dots,\pi_4$ of $\mathcal{SC}_{\mathbb C}$
are equal to $q^3-q$ times the coefficient of $A$ in $\pi_i$.
They are thus given by
$$\begin{array}{c|cccc}
&\pi_1&\pi_2&\pi_3&\pi_4\\
\hline
\mathrm{trace}&1&\frac{(q+1)(q-2)}{2}&\frac{q(q-1)}{2}&2q
\end{array}$$
and we have
$$q(q+1)=\mathrm{tr}(\pi_1)+\mathrm{tr}(\pi_2)+\mathrm{tr}(\pi_3)+
\mathrm{tr}(\pi_4),$$
as expected.
\subsection{Characters}
Since the simple matrix-algebra factors of $\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$
are indexed by the irreducible characters of $\mathrm{SL}_2(\mathbb F_q)$,
it is perhaps interesting to understand all irreducible
characters involved in idempotents of $\mathcal{SC}_\mathbb C\subset
\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$.
The algebra $\mathcal{SC}_{\mathbb C}$ is in some sense almost ``orthogonal''
to the center of $\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$. The algebra
$\mathcal{SC}_{\mathbb C}$ should thus involve many
different irreducible characters of $\mathrm{SL}_2(\mathbb F_q)$.
We will see that this is indeed the case.
We decompose first the identity
$\frac{1}{q-1}A$ according
to irreducible characters of $\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$.
We refine this
decomposition to the minimal central idempotents $\pi_1,\dots,\pi_4$
of $\mathcal{SC}_\mathbb C$ in Subsections \ref{subsectdecomppiiqeven} and \ref{subsectdecomppiiqodd}.
For simplicity we work over $\mathrm{GL}_2(\mathbb F_q)$ which has
essentially the same character-theory as $\mathrm{SL}_2(\mathbb F_q)$.
We work over $\mathbb C$ and we identify (irreducible) characters
with the corresponding (irreducible) representations.
In order to do this, we introduce $F=\sum_{\lambda\in\mathbb F_q^*}\left(\begin{array}{cc}
1&0\\0&\lambda\end{array}\right)\in \mathbb Z[\mathrm{GL}_2(\mathbb F_q)]$.
We have $F^2=(q-1)F$ and $FX=XF$ for $X\in\{A,B,C,D_\pm, E_\pm\}$, with $X$ considered
as an element of $\mathbb Z[\mathrm{GL}_2(\mathbb F_q)]$.
The map $X\longmapsto \frac{1}{q-1}FX$ preserves traces and
defines an injective
homomorphism of $\mathcal{SC}$ into $\mathbb Q[\mathrm{GL}_2(\mathbb F_q)]$.
We use the conventions of Chapter 5 of \cite{FH} for
conjugacy classes of $\mathrm{GL}_2(\mathbb F_q)$. More precisely,
we denote by $a_x$ conjugacy classes of central diagonal matrices
with common diagonal value $x$ in $\mathbb F_q^*$, by $b_x$
conjugacy classes given by
multiplying unipotent matrices by a scalar $x$ in $\mathbb F_q^*$, by $c_{x,y}$
conjugacy classes with two distinct eigenvalues $x,y\in\mathbb F_q^*$
and by $d_\xi$ conjugacy classes with two conjugate eigenvalues $\xi,\xi^q\in
\mathbb F_{q^2}^*\setminus\mathbb F_q$. The number of conjugacy classes of each type is given by
$$\begin{array}{c|c|c|c}
a_x&b_x&c_{x,y}&d_\xi\\
\hline
q-1&q-1&\frac{(q-1)(q-2)}{2}&\frac{q(q-1)}{2}
\end{array}.$$
The character table of
$\mathrm{GL}_2(\mathbb F_q)$, copied from \cite{FH},
is now given by
$$\begin{array}{r|cccc}
&1&q^2-1&q^2+q&q^2-q\\
&a_x&b_x&c_{x,y}&d_\xi\\
\hline
U_\alpha&\alpha(x^2)&\alpha(x^2)&\alpha(xy)&\alpha(\xi^{q+1})\\
V_\alpha&q\alpha(x^2)&0&\alpha(xy)&-\alpha(\xi^{q+1})\\
W_{\alpha,\beta}&(q+1)\alpha(x)\beta(x)&\alpha(x)\beta(x)&\alpha(x)\beta(y)+
\alpha(y)\beta(x)&0\\
X_\varphi&(q-1)\varphi(x)&-\varphi(x)&0&-(\varphi(\xi)+\varphi(\xi^q))
\end{array}$$
where $\alpha,\beta$ are distinct characters of $\mathbb F_q^*$ and where
$\varphi$ is a character of $\mathbb F_{q^2}^*$ with $\varphi^{q-1}$ non-trivial.
The first row indicates the number of elements in a conjugacy class
indicated by the second row. The remaining rows give the character-table.
$U_\alpha$ are the one-dimensional representations factoring through the
determinant. $V_\alpha=V\otimes U_\alpha$ are the twists of the $q$-dimensional irreducible representation $V$
(also denoted $V_1$) defined by the decomposition $V\oplus U_1$ of the permutation-representation
describing the permutation-action of $\mathrm{GL}_2(\mathbb F_q)$
on all $q+1$ points of the projective line over $\mathbb F_q$.
$W_{\alpha,\beta}$ (isomorphic to $W_{\beta,\alpha}$)
are induced from non-trivial
$1$-dimensional representations of a Borel subgroup (given for example
by all upper triangular matrices). $X_\varphi$ (isomorphic to
$X_{\overline{\varphi}}$) are the remaining
irreducible representations
$V_1\otimes W_{\varphi\vert_{\mathbb F_q^*},1}-
W_{\varphi\vert_{\mathbb F_q^*},1}-\mathrm{Ind}_{\varphi}$
with $\mathrm{Ind}_{\varphi}$ obtained by inducing a $1$-dimensional representation $\varphi\not=\varphi^q$
of a cyclic subgroup isomorphic to $\mathbb F_{q^2}^*$.
The contribution of an individual irreducible representation of $\mathrm{GL}_2(\mathbb F_q)$ to the trace of the
idempotent $\pi=\frac{1}{(q-1)^2}FA$ (acting by left-multiplication on the group-algebra) is now given by:
\begin{eqnarray*}
U_\alpha&:&\frac{1}{(q-1)^2}\left(\sum_{x\in \mathbb F_q^*}\alpha(x)\right)^2\\
V_\alpha&:&q\left(\frac{1}{q-1}\sum_{x\in \mathbb F_q^*}\alpha(x^2)+
\frac{1}{(q-1)^2}\left(\sum_{x\in \mathbb F_q^*}\alpha(x)\right)^2\right)\\
W_{\alpha,\beta}&:&(q+1)\left(\frac{1}{q-1}\sum_{x\in \mathbb F_q^*}\alpha(x)\beta(x)+
\frac{2}{(q-1)^2}\left(\sum_{x\in \mathbb F_q^*}\alpha(x)\right)\left(\sum_{x\in \mathbb F_q^*}\beta(x)\right)\right)\\
X_\varphi&:&(q-1)\,\frac{1}{q-1}\sum_{x\in\mathbb F_q^*}\varphi(x)
\end{eqnarray*}
The factors $q,q\pm 1$ are the multiplicities of the irreducible
representations $V_\alpha,W_{\alpha,\beta}$ and $X_\varphi$ in the
regular (left or right) representation. They are of course equal
to the dimensions of $V_\alpha,W_{\alpha,\beta}$ and $X_\varphi$.
Irreducible representations $U_\alpha$ are involved in $\pi$ only
if $\alpha$ is the trivial character. This corresponds of course
to the central idempotent $\pi_1$ of $\mathcal{SC}_{\mathbb Q}$.
For $V_\alpha$ we get $2q$ for $\alpha$ trivial, $q$ for odd $q$
if $\alpha$ is the quadratic character (defined by the Legendre-symbol
and existing only for odd $q$)
and $0$ otherwise.
For $W_{\alpha,\beta}$ we get $q+1$ if $\beta=\overline{\alpha},\beta\not=\alpha$
and $0$ otherwise. There are $\frac{q-3}{2}$ such representations for odd $q$ and $\frac{q-2}{2}$ such representations for even $q$.
For $X_\varphi$ we get $q-1$ if the character $\varphi$ of $\mathbb F_{q^2}^*$
with non-trivial $\varphi^{q-1}$
restricts to the trivial character of
$\mathbb F_q^*$ and $0$ otherwise. Characters of $\mathbb F_{q^2}^*$
with trivial restrictions to $\mathbb F_q^*$ are in one-to-one correspondence
with characters of the additive group $\mathbb Z/(q+1)\mathbb Z$.
The character $\varphi^{q-1}$ is trivial for two
of them (the trivial and the quadratic one) if $q$ is odd.
This gives thus $\frac{q-1}{2}$ such irreducible representations for odd $q$.
For even $q$ we get $\frac{q}{2}$ irreducible representations.
All irreducible representations of $\mathrm{GL}_2(\mathbb F_q)$
involved in $\pi=\frac{1}{(q-1)^2}FA$ have irreducible restrictions
to $\mathrm{SL}_2(\mathbb F_q)$.
The following table sums up contributions of all different irreducible characters to the trace of $\pi=\frac{1}{(q-1)^2}FA$:
$$\begin{array}{c||c|c|c|c}
q&U&V&W&X\\
\hline
\hbox{odd}&1&3q&(q+1)\frac{q-3}{2}&(q-1)\frac{q-1}{2}\\
\hbox{even}&1&2q&(q+1)\frac{q-2}{2}&(q-1)\frac{q}{2}\\
\end{array}$$
and we have
$$\mathrm{tr}(\pi)=q(q+1)=1+3q+(q+1)\frac{q-3}{2}+(q-1)\frac{q-1}{2},$$
respectively
$$\mathrm{tr}(\pi)=q(q+1)=1+2q+(q+1)\frac{q-2}{2}+(q-1)\frac{q}{2},$$
as expected.
\subsection{Decompositions of $\pi_1,\dots,\pi_4$ for even $q$}\label{subsectdecomppiiqeven}
Over a finite field of characteristic $2$, contributions of the different conjugacy
classes to the elements $FX$ (for $X$ a generator of $\mathcal{SC}$) are given by
$$\begin{array}{r|cccc}
&q-1&q-1&\frac{(q-1)(q-2)}{2}&\frac{q(q-1)}{2}\\
&a_x&b_x&c_{x,y\not=x}&d_{\xi}\\
\hline
FA&1&0&2&0\\
FB&0&q-1&0&0\\
FC&0&(q-1)(q-2)&(q-1)(q-4)&(q-1)(q-2)\\
FD_\pm&0&q-1&2(q-1)&0\\
FE_\pm&0&0&q-1&q-1\\
\hline
&1&q^2-1&q^2+q&q^2-q\\
\end{array}$$
This implies the following decompositions of central idempotents of
$\mathcal{SC}$: The idempotent $\pi_1$ of rank $1$ is involved with multiplicity
$1$ in the trivial representation of type $U$.
The idempotent $\pi_2$ of rank $\frac{(q+1)(q-2)}{2}$ (in $\mathbb C[\mathrm{SL}_2(\mathbb F_q)]$) is involved with
multiplicity $q+1$ in all $\frac{q-2}{2}$ relevant characters of type W.
The idempotent $\pi_3$ of rank $\frac{q(q-1)}{2}$ is involved with multiplicity
$q-1$ in all $\frac{q}{2}$ relevant characters of type $X$. Finally, the idempotent $\pi_4$
is involved with multiplicity $q$ in the irreducible representation $V$.
\subsection{Decompositions of $\pi_1,\dots,\pi_4$ for odd $q$}\label{subsectdecomppiiqodd}
Over a finite field of odd characteristic, contributions of the different conjugacy
classes to the elements $FX$ (for $X$ a generator of $\mathcal{SC}$) are given by
$${\scriptsize \begin{array}{r|cccccc}
&q-1&q-1&\frac{q-1}{2}&\frac{(q-1)(q-3)}{2}&\frac{q-1}{2}&\frac{(q-1)^2}{2}\\
&a_x&b_x&c_{x,-x}&c_{x,y\not=\pm x}&d_{\xi,\mathrm{tr}(\xi)=0}&d_{\xi,\mathrm{tr}(\xi)\not=0}\\
\hline
FA&1&0&2&2&0&0\\
FB&0&0&q-1&0&q-1&0\\
FC&0&(q-1)(q-3)&(q-1)(q-3)&(q-1)(q-4)&(q-1)^2&(q-1)(q-2)\\
FD_\pm&0&q-1&2(q-1)&2(q-1)&0&0\\
FE_\pm&0&q-1&0&q-1&0&q-1\\
\hline
&1&q^2-1&q^2+q&q^2+q&q^2-q&q^2-q\\
\end{array}}$$
and conjugacy classes of $\mathrm{GL}_2(\mathbb F_q)$ are involved in
projectors $\tilde\pi_1=\frac{1}{q-1}\pi_1F,\dots,\tilde\pi_4=\frac{1}{q-1}\pi_4F$ with the following coefficients
$$\begin{array}{r|cccccc|c}
&q-1&q-1&\frac{q-1}{2}&\frac{(q-1)(q-3)}{2}&\frac{q-1}{2}&\frac{(q-1)^2}{2}&\\
&a_x&b_x&c_{x,-x}&c_{x,y\not=\pm x}&d_{\xi,\mathrm{tr}(\xi)=0}&d_{\xi,\mathrm{tr}(\xi)\not=0}&\\
\hline
\tilde\pi_1&\frac{1}{(q-1)^2q(q+1)}&\frac{1}{q(q-1)}&\frac{1}{(q-1)^2}&\frac{1}{(q-1)^2}&\frac{1}{q^2-1}&\frac{1}{q^2-1}\\
\tilde\pi_2&\frac{q-2}{2q(q-1)^2}&\frac{-1}{q(q-1)}&\frac{q-3}{2(q-1)^2}&\frac{-1}{(q-1)^2}&\frac{1}{2(q-1)}&0\\
\tilde\pi_3&\frac{1}{2(q^2-1)}&0&\frac{-1}{2(q-1)}&0&\frac{-1}{2(q+1)}&\frac{1}{q^2-1}\\
\tilde\pi_4&\frac{2}{(q-1)^2(q+1)}&0&\frac{2}{(q-1)^2}&\frac{2}{(q-1)^2}&\frac{-2}{q^2-1}&\frac{-2}{q^2-1}\\
\end{array}$$
The idempotent $\tilde\pi_1$ (or equivalently the idempotent $\pi_1$ of
$\mathcal{SC}$) is only involved in the trivial representation with multiplicity $1$. The idempotent $\tilde\pi_4$ appears (with multiplicity $q$) only in the
unique non-trivial $q$-dimensional irreducible representation $V$ involved
in the permutation-representation of $\mathrm{GL}_2(\mathbb F_q)$ acting on
all $q+1$ points of the projective line over $\mathbb F_q$.
The character $V_L=V\otimes \alpha_L$, where $\alpha_L$ is the quadratic
character of $\mathbb F_q^*$ given by the Legendre symbol, has mean-values given
by
$$\begin{array}{r|cccccc|c}
&q-1&q-1&\frac{q-1}{2}&\frac{(q-1)(q-3)}{2}&\frac{q-1}{2}&\frac{(q-1)^2}{2}&\\
&a_x&b_x&c_{x,-x}&c_{x,y\not=\pm x}&d_{\xi,\mathrm{tr}(\xi)=0}&d_{\xi,\mathrm{tr}(\xi)\not=0}&\\
\hline
V_L&q&0&\left(\frac{-1}{q}\right)&-\frac{1+\left(\frac{-1}{q}\right)}{q-3}&\left(\frac{-1}{q}\right)&\frac{1-\left(\frac{-1}{q}\right)}{q-1}
\end{array}$$
on conjugacy-classes. The character $V_L$ involves thus $\tilde\pi_2$
(or $\pi_2$) with multiplicity $q$ if $q\equiv 1\pmod 4$ and
$\tilde\pi_3$
(or $\pi_3$) with multiplicity $q$ if $q\equiv 3\pmod 4$.
Mean-values of a character $W_{\alpha,\beta}$ with non-real
$\beta=\overline{\alpha}$ on conjugacy classes are given by
$$\begin{array}{r|cccccc|c}
&q-1&q-1&\frac{q-1}{2}&\frac{(q-1)(q-3)}{2}&\frac{q-1}{2}&\frac{(q-1)^2}{2}&\\
&a_x&b_x&c_{x,-x}&c_{x,y\not=\pm x}&d_{\xi,\mathrm{tr}(\xi)=0}&d_{\xi,\mathrm{tr}(\xi)\not=0}&\\
\hline
W_{\alpha,\beta}&q+1&1&2\alpha(-1)&\frac{-2(1+\alpha(-1))}{q-3}&0&0
\end{array}$$
They depend on the value $\alpha(-1)\in\{\pm 1\}$. For $q\equiv 1\pmod 4$ there are $\frac{q-5}{4}$ such characters $W_{\alpha,\beta}$ with $\alpha(-1)=1$
and $\frac{q-1}{4}$ such characters with $\alpha(-1)=-1$.
For $q\equiv 3\pmod 4$, there are the same number $\frac{q-3}{4}$
of such characters for both possible values of $\alpha(-1)$.
Such representations involve $\tilde \pi_2$ (or $\pi_2$ of $\mathcal{SC}$)
with multiplicity $q+1$
if $\alpha(-1)=1$ and $\tilde \pi_3$ with multiplicity $q+1$ otherwise.
We consider now a non-real character $\varphi$ of $\mathbb F_{q^2}^*$ with
trivial restriction to $\mathbb F_q^*$. Since $\xi^2$ belongs to
$\mathbb F_q^*$ if $\xi$ has trace $0$, we have
$\sigma=\varphi(\xi)\in \{\pm 1\}$.
Mean-values on conjugacy-classes
of a character $X_\varphi$ with $\varphi$ as above are thus given by
$$\begin{array}{r|cccccc|c}
&q-1&q-1&\frac{q-1}{2}&\frac{(q-1)(q-3)}{2}&\frac{q-1}{2}&\frac{(q-1)^2}{2}&\\
&a_x&b_x&c_{x,-x}&c_{x,y\not=\pm x}&d_{\xi,\mathrm{tr}(\xi)=0}&d_{\xi,\mathrm{tr}(\xi)\not=0}&\\
\hline
X_{\varphi}&q-1&-1&0&0&-2\sigma&\frac{2(1+\sigma)}{q-1}
\end{array}$$
For $q\equiv 1\pmod 4$, there are $\frac{q-1}{4}$ such characters for both possible values of $\sigma$. For $q\equiv 3\pmod 4$, the value $\sigma=1$ is achieved by $\frac{q-3}{4}$ such representations and the value $\sigma=-1$ by
$\frac{q+1}{4}$ representations.
Each such representation with $\sigma=1$ involves $\tilde\pi_3$ (or $\pi_3$)
with multiplicity $q-1$ and each such representation with $\sigma=-1$ involves $\tilde\pi_2$ (or $\pi_2$)
with multiplicity $q-1$.
I thank M. Brion and P. de la Harpe for useful discussions and remarks.
\noindent Roland BACHER, Univ. Grenoble Alpes, Institut
Fourier (CNRS UMR 5582), 38000 Grenoble, France.
\noindent e-mail: Roland.Bacher@ujf-grenoble.fr
\end{document}
\begin{document}
\title[Reduction numbers and regularity of blowup rings and modules]{On reduction numbers and Castelnuovo-Mumford regularity of blowup rings and modules}
\author{Cleto B.~Miranda-Neto}
\address{Departamento de Matemática, Universidade Federal da
Paraíba, 58051-900 João Pessoa, PB, Brazil.}
\email{cleto@mat.ufpb.br}
\author{Douglas S.~Queiroz}
\address{Departamento de Matemática, Universidade Federal da
Paraíba, 58051-900 João Pessoa, PB, Brazil.}
\email{douglassqueiroz0@gmail.com}
\thanks{
Corresponding author: C. B. Miranda-Neto.}
\subjclass[2010]{Primary: 13C05, 13A30, 13C13; Secondary: 13H10, 13D40, 13D45.}
\keywords{Castelnuovo-Mumford regularity, reduction number, blowup algebra, Ratliff-Rush closure, Ulrich ideal}
\begin{abstract} We prove new results on the connections between reduction numbers and the Castelnuovo-Mumford regularity of blowup algebras and blowup modules, the key tool being the operation of Ratliff-Rush closure. First, we answer in two particular cases a question of M. E. Rossi, D. T. Trung, and N. V. Trung about Rees algebras of ideals in two-dimensional Buchsbaum local rings, and we even ask whether one of these situations always holds. In another theorem we generalize a result of A. Mafi on ideals in two-dimensional Cohen-Macaulay local rings, by extending it to arbitrary dimension (and allowing for the setting relative to a Cohen-Macaulay module). We derive a number of applications, including a characterization of (polynomial) ideals of linear type, progress on the theory of generalized Ulrich ideals, and improvements of results by other authors.
\end{abstract}
\maketitle
\centerline{\it Dedicated to the memory of Professor Wolmer Vasconcelos,}
\centerline{\it mentor of generations of commutative algebraists.}
\section{Introduction}
This paper aims at further investigating the interplay between reduction numbers, possibly taken relative to a given module, and the Castelnuovo-Mumford regularity of Rees and associated graded algebras and modules. The basic tool is the so-called Ratliff-Rush closure, a well-known operation which goes back to the classical work \cite{Ratliff-Rush} (see also \cite{Heinzer-Johnson-Lantz-Shah} for the relative case). These topics have been studied for decades and the literature about them is extensive; we mention, e.g., \cite{AHT}, \cite{Brodmann-Linh}, \cite{CPR},
\cite{D-H}, \cite{H-P-T}, \cite{J-U}, \cite{Linh}, \cite{Mafi}, \cite{Marley2}, \cite{Pol-X}, \cite{Puthenpurakal1}, \cite{Rossi-Dinh-Trung}, \cite{R-T-V}, \cite{Strunk}, \cite{Trung}, \cite{Vasc}, \cite{Wu}, \cite{Zamani}.
Let us briefly describe our two main motivations. The first is an interesting question raised by Rossi, Trung, and Trung \cite{Rossi-Dinh-Trung} concerning the Castelnuovo-Mumford regularity of the Rees algebra of a zero-dimensional ideal in a 2-dimensional Buchsbaum local ring, and the second is a result due to Mafi \cite{Mafi} on the Castelnuovo-Mumford regularity of the associated graded ring of a zero-dimensional ideal in a 2-dimensional Cohen-Macaulay local ring, both connected to the reduction number of the ideal. For the former (see Section \ref{R-T-T}) we provide answers in a couple of special cases, and we even ask whether one of the situations always occurs. For the latter (see Section \ref{gen-of-Mafi}), we are able to establish a generalization which extends dimension $2$ to arbitrary dimension and in addition takes into account the more general context of blowup modules (relative to any given Cohen-Macaulay module of positive dimension).
We then provide two sections of applications. In Section \ref{app} we show, for instance, that under appropriate conditions the Castelnuovo-Mumford regularity of Rees modules is not affected by regular hyperplane sections, and moreover, by investigating the role of postulation numbers with a view to the regularity of Rees algebras, we derive an improvement (by different arguments) of a result originally obtained in \cite{Hoa} on the independence of reduction numbers. Still in Section \ref{app} we provide an application regarding ideals of linear type (i.e., ideals for which the Rees and symmetric algebras are isomorphic) in a standard graded polynomial ring over an infinite field. Precisely, if $I$ is a zero-dimensional ideal with reduction number zero, then $I$ is of linear type (the converse is easy and well-known). Finally, in Section \ref{Ulrich}, we establish new facts on the theory of generalized Ulrich ideals introduced in \cite{Goto-Ozeki-Takahashi-Watanabe-Yoshida}. Given such an ideal in a Cohen-Macaulay local ring (of positive dimension), we determine the Castelnuovo-Mumford regularity of its blowup algebras and describe explicitly its Hilbert-Samuel polynomial and postulation number, which as far as we know are issues that have not been considered in the literature. We also correct a result from \cite{Mafi2} and improve results of \cite{Hoa}, \cite{Huneke} and \cite{Itoh} (where extra hypotheses were required), regarding a well-studied link between low reduction numbers and Hilbert functions. In addition, by detecting a normal Ulrich ideal in the ring of a rational double point, we give a negative answer to the 2-dimensional case of a question posed over fifteen years ago in \cite{CPR}.
\noindent {\it Conventions}. Throughout the paper, by {\it ring} we mean commutative, Noetherian, unital ring. If $R$ is a ring, then by a {\it finite} $R$-module we mean a finitely generated $R$-module.
\section{Ratliff-Rush closure}
The concept of the so-called \textit{Ratliff-Rush closure} $\widetilde{I}$
of a given ideal $I$ in a ring $R$ first appeared in the investigation of reductions of ideals developed in \cite{Ratliff-Rush}. Precisely,
$$\widetilde{I} \, = \, \bigcup_{n \geq 1}\,I^{n+1} : I^{n}.$$
This is an ideal of $R$ containing $I$ and contained in the integral closure of $I$, so that $\widetilde{I}=I$ whenever $I$ is integrally closed. Now suppose $I$ contains a regular element (i.e., a non-zero-divisor on $R$). Then $\widetilde{I}$ is the largest ideal having the same sufficiently high powers as $I$. Moreover, if $M$ is an $R$-module then \cite[Section 6]{Heinzer-Johnson-Lantz-Shah} defined the \textit{Ratliff-Rush closure of $I$ with respect to $M$} as
$$\widetilde{I_{M}} \, = \, \bigcup_{n \geq 1}\,I^{n + 1}M :_{M} I^{n} \, = \, \{m \in M \, \mid \, I^{n}m \subseteq I^{n + 1}M \ \mathrm{for \ some \ n\geq 1} \}.$$ Clearly, this retrieves the classical definition by letting $M = R$. Furthermore, $\widetilde{I_{M}}$ is an $R$-submodule of $M$, and it is easy to see that $IM\subseteq \widetilde{I}M \subseteq \widetilde{I_{M}}$. If the equality $IM=\widetilde{I_{M}}$ holds, the ideal $I$ is said to be {\it Ratliff-Rush closed with respect to $M$}; in case $M=R$, we simply say that $I$ is {\it Ratliff-Rush closed} (some authors also use the expression {\it Ratliff-Rush ideal}).
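As a simple illustration (a standard example, recorded here only for the reader's convenience), take $R=K[[t^3,t^4,t^5]]$, with $K$ a field, and $I=(t^3,t^4)$. Then $t^5\notin I$, while $$t^5I=(t^8,t^9)\subseteq (t^6,t^7,t^8)=I^2,$$ so that $t^5\in I^{2}:I\subseteq \widetilde{I}$; hence $I$ is not Ratliff-Rush closed, and in fact $\widetilde{I}=(t^3,t^4,t^5)$.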
\begin{Lemma} $($\cite[Proposition 2.2]{Naghipour}\label{propnaghipour}$)$ \label{lemmanaghipour} Let $R$ be a ring and $M$ be a non-zero finite $R$-module. Assume that $I$ is an ideal of $R$ containing an $M$-regular element {\rm (}i.e., a non-zero-divisor on $M${\rm )} and such that $IM \neq M$ {\rm (}e.g., if $R$ is local{\rm )}. Then the following conditions hold:
\begin{itemize}
\item[(a)] For an integer $n \geq 1$, $\widetilde{I^{n}_{M}}$ is the eventual stable value of the increasing sequence
$$(I^{n + 1}M :_{M} I) \, \subseteq \, (I^{n + 2}M :_{M} I^{2}) \, \subseteq \, (I^{n + 3}M :_{M} I^{3}) \, \subseteq \, \cdots$$
\item[(b)] $\widetilde{I_{M}} \supseteq \widetilde{I^{2}_{M}} \supseteq \widetilde{I^{3}_{M}} \supseteq \cdots \supseteq \widetilde{I^{n}_{M}} = I^{n}M$ for all $n\gg 0$; hence $I^{n}$ is Ratliff-Rush closed with respect to $M$ for all $n\gg 0$.
\end{itemize}
\end{Lemma}
Note that, if the ideal $I$ contains an $M$-regular element, Lemma \ref{propnaghipour}(b) enables us to define the number $$s^{*}(I,M) \, := \, \mathrm{min}\,\{n \in \mathbb{N} \, \mid \, \widetilde{I^{i}_{M}} = I^{i}M \ \mathrm{for \ all} \ i \geq n \},$$ which we simply write $s^{*}(I)$ if $M=R$. Since the equality $\widetilde{I^{i}_{M}} = I^{i}M$ holds trivially for $i=0$, requiring it for all $i\geq 0$ amounts to requiring it for all $i\geq 1$. Thus we may, and do, adopt the convention that, throughout the entire paper, $$s^{*}(I,M) \, \geq \, 1.$$
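In the example $R=K[[t^3,t^4,t^5]]$, $I=(t^3,t^4)$ discussed above, one checks that $I^{n}$ consists of all power series of order at least $3n$ and that $\widetilde{I^{n}}=I^{n}$ for every $n\geq 2$, while $\widetilde{I}\neq I$; hence $s^{*}(I)=2$ in this case.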
\begin{Definition}\rm
Let $R$ be a ring, $I$ an ideal of $R$, and $M$ an $R$-module. We say that $x \in I$ is an $M$-\textit{superficial element of $I$} if there exists $c \in \mathbb{N}$ such that $(I^{n + 1}M :_{M} x) \cap I^{c}M = I^{n}M$, for all $n \geq c$. If $x$ is $R$-superficial, we simply say that $x$ is a \textit{superficial element of $I$}. A sequence $x_{1}, \ldots , x_{s}$ in $I$ is said to be an \textit{$M$-superficial sequence of $I$} if $x_1$ is an $M$-superficial element of $I$ and, for all $i = 2, \ldots , s$, the image of $x_{i}$ in $I/(x_{1}, \ldots , x_{i - 1})$ is an $M/(x_{1}, \ldots , x_{i - 1})M$-superficial element of $I/(x_{1}, \ldots , x_{i - 1})$.
\end{Definition}
If $R$ is local, the above concept allows for an alternative, useful expression for $s^{*}(I,M)$. Indeed, in this case, \cite[Corollary 2.7]{Puthenpurakal1} shows that
$$s^{*}(I,M) \, = \, \mathrm{min}\,\{n \in \mathbb{N} \, \mid \, I^{i + 1}M :_{M} x = I^{i}M \ \mathrm{for \ all} \ i \geq n \},$$
for any given $M$-superficial element $x$ of $I$.
\begin{Lemma}$($\cite[Lemma 8.5.3]{Swanson-Huneke}\label{lemmaswanhun1}$)$
Let $R$ be a ring, $I$ an ideal of $R$, $M$ a finite $R$-module, and $x \in I$ an $M$-regular element. Then, $x$ is an $M$-superficial element of $I$ if and only if $I^{n}M\colon_Mx = I^{n - 1}M$ for all $n\gg 0$.
\end{Lemma}
\begin{Lemma}$($\cite[Lemma 8.5.4]{Swanson-Huneke}\label{lemmaswanhun}$)$
Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finite $R$-module. Assume that $\bigcap_{n} I^{n}M = 0$ and that $I$ contains an $M$-regular element. Then every $M$-superficial element of $I$ is $M$-regular.
\end{Lemma}
The condition $\bigcap_{n} I^{n}M = 0$ is satisfied whenever $R$ is a local ring, as guaranteed by the well-known Krull's intersection theorem, so in order to use Lemma \ref{lemmaswanhun} we just need to find an $M$-regular element in $I$.
Lemma \ref{lemmaswanhun1} and Lemma \ref{lemmaswanhun} put us in a position to prove Proposition \ref{regelem} below, which, according to \cite[Lemma 2.2]{Mafi}, is known when $M = R$. Moreover, in our general context, part (b) has been stated in \cite[Theorem 2.2(2)]{Puthenpurakal1}. However, the proofs are omitted in both situations, so for completeness we provide them here.
\begin{Proposition} \label{regelem}
Let $R$ be a local ring, $M$ a finite $R$-module, and $I$ an ideal of $R$ containing an $M$-regular element. Then, for all $m \in \mathbb{N}$,
\begin{itemize}
\item[(a)] $\widetilde{I^{m + 1}_{M}} :_{M} I = \widetilde{I^{m}_{M}};$
\item[(b)] $\widetilde{I^{m + 1}_{M}} :_{M} x = \widetilde{I^{m}_{M}}$, for every $M$-superficial element $x$ of $I$.
\end{itemize}
\end{Proposition}
\begin{proof} Fix an arbitrary $m \in \mathbb{N}$. By Lemma \ref{lemmanaghipour}(a), we can write $\widetilde{I^{m }_{M}} = I^{m + j}M :_{M} I^{j} = I^{m + j + 1}M :_{M} I^{j + 1}$ and $\widetilde{I^{m + 1}_{M}} = I^{(m + 1) + (j - 1)}M :_{M} I^{j - 1} = I^{m + j + 1}M :_{M} I^{j},$ for all $j \gg 0$. In particular, we get $$\widetilde{I^{m + 1}_{M}} :_{M} I \, = \, (I^{m + j + 1}M :_{M} I^{j}) :_{M} I \, = \, I^{m + j + 1}M :_{M} I^{j + 1} \, = \, \widetilde{I^{m }_{M}},$$ which proves (a). Now, let $x$ be an $M$-superficial element of $I$. By Lemma \ref{lemmaswanhun}, $x$ is $M$-regular, and hence, in view of Lemma \ref{lemmaswanhun1}, we can choose $j$ such that $I^{m + j + 1}M :_{M} x = I^{m + j}M$. It follows that
$$\widetilde{I^{m + 1}_{M}} :_{M} x \, = \, (I^{m + j + 1}M :_{M} I^{j}) :_{M} x \, = \, (I^{m + j + 1}M :_{M} x) :_{M} I^{j} \, = \, I^{m + j}M :_{M} I^{j} \, = \, \widetilde{I^{m}_{M}},$$ thus showing (b). \hspace*{\fill} $\square$
\end{proof}
As a consequence of item (a) we derive the following result, which will be a key tool in Subsection
\ref{RTT} in the case $M=R$. Part (b) of Proposition \ref{regelem} will be used in other sections (e.g., in the proof of Theorem \ref{Mafigene}).
\begin{Proposition}\label{s*conjec}
Let $R$ be a local ring and $M$ a non-zero finite $R$-module. Let $I$ be an ideal of $R$ containing an $M$-regular element. Then $$s^{*}(I, M) \, = \, \mathrm{min}\,\{m \geq 1 \, \mid \, I^{n + 1}M :_{M} I = I^{n}M \ \ \mathit{for \ all} \ \ n \geq m \}.$$
\end{Proposition}
\begin{proof} Write $s = s^{*}(I,M)$ and $t = \mathrm{min}\{m \geq 1 \mid I^{n + 1}M :_{M} I = I^{n}M \ \mathrm{for \ all} \ n \geq m \}$. Let $n \geq s$. From the definition of $s$ we get $ \widetilde{I^{n}_{M}} = I^{n}M$ and $\widetilde{I^{n + 1}_{M}} = I^{n + 1}M$. On the other hand, Proposition \ref{regelem}(a) gives $\widetilde{I^{n + 1}_{M}} :_{M} I = \widetilde{I^{n}_{M}}$. Hence,
$I^{n + 1}M :_{M} I = I^{n}M$ for all $n\geq s$, which forces $s \geq t$. Suppose by way of contradiction that $s > t$. Then $s-1\geq t\geq 1$ and, using Proposition \ref{regelem}(a) again, we obtain
$$\widetilde{I^{s - 1}_{M}} \, = \, \widetilde{I^{s}_{M}} :_{M} I \, = \, I^{s}M :_{M} I \, = \, I^{s - 1}M,$$
thus contradicting the minimality of $s$. We conclude that $s=t$, as needed.\hspace*{\fill} $\square$
\end{proof}
\section{First results and a question of Rossi-Trung-Trung}\label{R-T-T}
Let $S = \bigoplus_{n \geq 0}S_{n}$ be a finitely generated standard graded algebra over a ring $S_{0}$. As usual, by {\it standard} we mean that $S$ is generated by $S_1$ as an $S_0$-algebra. We write $S_{+} = \bigoplus_{n \geq 1}S_{n}$ for the ideal generated by all elements of $S$ of positive degree. Fix a finite graded $S$-module $N\neq 0$. A sequence $y_{1}, \ldots, y_{s}$ of homogeneous elements of $S$ is said to be an \textit{$N$-filter regular sequence} if $$y_{i} \notin P, \ \ \ \ \ \forall \ P \in \textrm{Ass}_{S}\left(\frac{N}{(y_{1}, \ldots, y_{i - 1})N}\right)\setminus V(S_{+}), \ \ \ \ \ i = 1, \ldots, s.$$
Now, for a graded $S$-module $A=\bigoplus_{n \in {\mathbb Z}}A_{n}$ satisfying $A_n=0$ for all $n\gg 0$, we let $a(A) = \textrm{max}\{n\in {\mathbb Z} \ | \ A_{n} \neq 0\}$ if $A\neq 0$, and $a(A)=-\infty$ if $A=0$. For an integer $j\geq 0$, we use the notation $$a_{j}(N) \, := \, a(H_{S_{+}}^{j}(N)),$$
where $H_{S_{+}}^{j}(-)$ stands for the $j$-th local cohomology functor with respect to the ideal $S_{+}$. It is known that $H_{S_{+}}^{j}(N)$ is a graded module with $H_{S_{+}}^{j}(N)_n=0$ for all $n\gg 0$; see, e.g., \cite[Proposition 15.1.5(ii)]{B-S}. Thus, $a_{j}(N)<+\infty$. We can now introduce one of the main numerical invariants studied in this paper.
\begin{Definition}\rm Maintain the above setting and notations. The \textit{Castelnuovo-Mumford regularity} of the $S$-module $N$ is defined as
$$\mathrm{reg}\,N \, := \, \mathrm{max}\{a_j(N) + j \, \mid \, j \geq 0\}.$$
\end{Definition}
It is well-known that $\mathrm{reg}\,N$ governs the complexity of the graded structure of $N$ and is of great significance in commutative algebra and algebraic geometry, for instance in the study of degrees of syzygies over polynomial rings. The literature on the subject is extensive; see, e.g., \cite{Bay-S}, \cite[Chapter 15]{B-S}, \cite{Eis-Go}, and \cite{Trung2}.
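To fix ideas with a standard computation: if $S$ is a standard graded polynomial ring in $n\geq 1$ variables over a field and $f\in S$ is a non-zero homogeneous polynomial of degree $d\geq 1$, then $\mathrm{reg}\,S=0$, whereas $S/(f)$ is Cohen-Macaulay of dimension $n-1$ with $a_{n-1}(S/(f))=d-n$ and all other local cohomology modules vanishing, so that $$\mathrm{reg}\,S/(f)=(d-n)+(n-1)=d-1.$$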
\subsection{First results} Let $I$ be an ideal of a ring $R$. The {\it Rees ring of $I$} is the blowup algebra ${\mathcal R}(I) = \bigoplus_{n \geq 0}I^{n}t^n$ (as usual we put $I^0=R$), which can be expressed as the standard graded subalgebra $R[It]$ of $R[t]$, where $t$ is an indeterminate over $R$. Now, for a finite $R$-module $M$, the {\it Rees module of $I$ relative to $M$} is the blowup module $${\mathcal R}(I, M) \, = \, \bigoplus_{n \geq 0}\,I^{n}M,$$ which in particular is a finite graded module over ${\mathcal R}(I, R)={\mathcal R}(I)$. We will be interested in the relation between ${\rm reg}\,{\mathcal R}(I, M)$ and another key invariant that will be defined shortly. First we recall the following fact.
\begin{Lemma}$($\cite[Lemma 4.5]{Giral-Planas-Vilanova}$)$\label{aistheleast} Let $R$ be a ring, $I$ an ideal of $R$ and $M$ a finite $R$-module. Let $x_{1}, \ldots, x_{s}$ be elements of $I$. Then, $x_{1}t, \ldots, x_{s}t$ is an ${\mathcal R}(I, M)$-filter regular sequence if and only if
\begin{equation}\label{GP-V}
[(x_{1}, \ldots, x_{i - 1})I^{n}M :_{M} x_{i}] \, \cap \, I^{n}M \, = \, (x_{1}, \ldots, x_{i - 1})I^{n - 1}M
\end{equation}
for $i = 1, \ldots, s$ and all $n\gg 0$.
\end{Lemma}
\begin{Definition}\rm Let $I$ be a proper ideal of a ring $R$ and let $M$ be a non-zero finite $R$-module. An ideal $J \subseteq I$ is called a \textit{reduction of $I$ relative to $M$} if $JI^{n}M = I^{n + 1}M$ for some integer $n\geq 0$. Such a reduction $J$ is said to be \textit{minimal} if it is minimal with respect to inclusion. As usual, in the classical case where $M = R$ we say that $J$ is a \textit{reduction of $I$}. If $J$ is a reduction of $I$ relative to $M$, we define the \textit{reduction number of $I$ with respect to $J$ relative to $M$} as $${\rm r}_{J}(I, M) \, := \, \mathrm{min}\,\{m \in \mathbb{N} \, \mid \, JI^{m}M = I^{m + 1}M\},$$
and the \textit{reduction number of $I$ relative to $M$} as
$${\rm r}(I, M) \, := \, \mathrm{min}\,\{{\rm r}_{J}(I, M) \, \mid \, J \ \ \mathrm{is \ a \ minimal \ reduction \ of \ } I \mathrm{ \ relative \ to} \ M\}.$$ We say that
${\rm r}(I, M)$ is \textit{independent} if ${\rm r}_J(I,M) = {\rm r}(I, M)$ for every minimal reduction $J$ of $I$ relative to $M$. If $M = R$, we simply write ${\rm r}_{J}(I)$ (the \textit{reduction number of $I$ with respect to $J$}) and ${\rm r}(I)$ (the \textit{reduction number of $I$}).
\end{Definition}
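To illustrate these notions with a standard example, let $R=K[[x,y]]$, with $K$ an infinite field, and $I=(x,y)^2$. Then $$JI=(x^2,y^2)(x^2,xy,y^2)=(x^4,x^3y,x^2y^2,xy^3,y^4)=I^2$$ for $J=(x^2,y^2)$, so that $J$ is a reduction of $I$ with ${\rm r}_{J}(I)=1$; one checks that $J$ is in fact a minimal reduction, and since every minimal reduction of $I$ is $2$-generated and hence strictly contained in $I$, we also get ${\rm r}(I)=1$.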
\begin{Lemma} $($\cite[Proposition 4.6]{Giral-Planas-Vilanova}$)$\label{propleastnum}
Let $R$ be a ring, $I$ an ideal of $R$ and $M$ a finite $R$-module. Let $J = (x_{1}, \ldots, x_{s})$ be a reduction of $I$ relative to $M$ such that $x_{1}t, \ldots, x_{s}t$ is an ${\mathcal R}(I, M)$-filter regular sequence. Then
$$\mathrm{reg}\,{\mathcal R}(I, M) \, = \, \mathrm{min}\,\{\ell \geq {\rm r}_{J}(I,M) \, \mid \, \mathit{equality \ in \ {\rm (\ref{GP-V})} \ holds \ for \ all} \ \ n \geq \ell + 1\}.$$
\end{Lemma}
The usefulness of the proposition below will be made clear in Remark \ref{reg<r} and Remark \ref{cotareg}. Notice that the case $M=R$ recovers \cite[Proposition 4.7(i)]{Trung}; in fact, the proof is essentially the same and we give it in a little more detail.
\begin{Proposition}\label{proptrung}
Let $R$ be a ring, $I$ an ideal of $R$ and $M$ a finite $R$-module. Let $J = (x_{1}, \ldots, x_{s})$ be a reduction of $I$ relative to $M$ such that
$$[(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}] \, \cap \, I^{\ell + 1}M \, = \, (x_{1}, \ldots, x_{i - 1})I^{\ell}M, \ \ \ \ \ i = 1, \ldots, s,$$
for a fixed $\ell \geq {\rm r}_{J}(I,M)$. Then, for all $n \geq \ell + 1$,
$$[(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}] \, \cap \, I^{n}M \, = \, (x_{1}, \ldots, x_{i - 1})I^{n - 1}M, \ \ \ \ \ i = 1, \ldots, s.$$
\end{Proposition}
\begin{proof} We may take $n > \ell + 1$ as the case $n=\ell + 1$ holds by hypothesis. Since $I^{n}M = JI^{n - 1}M= (x_1, \ldots, x_{s-1})I^{n - 1}M + x_sI^{n - 1}M$, and $(x_1, \ldots, x_{s-1})I^{n - 1}M\subseteq (x_1, \ldots, x_{s-1})M\subseteq (x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}$, we can write
$$ [(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap I^{n}M = [(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap [ (x_1, \ldots, x_{s-1})I^{n - 1}M + x_sI^{n - 1}M]$$
$$=(x_{1}, \ldots, x_{s - 1})I^{n - 1}M + \{[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap x_sI^{n - 1}M\}$$
$$=(x_{1}, \ldots, x_{s - 1})I^{n - 1}M + x_{s}\{ [(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}^{2}] \cap I^{n - 1}M\}.$$
Induction on $n$ yields $$[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap I^{n - 1}M = (x_{1}, \ldots, x_{s - 1})I^{n - 2}M.$$ Thus,
$$[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}^{2}] \cap [I^{n - 1}M :_{M} x_{s}] = \{[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap I^{n - 1}M\} :_{M} x_{s}$$ $$=(x_{1}, \ldots, x_{s - 1})I^{n - 2}M :_{M} x_{s}
\subseteq (x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}.$$ Intersecting with $I^{n - 1}M$, we get $$[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}^{2}] \cap I^{n - 1}M \subseteq [(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap I^{n - 1}M = (x_{1}, \ldots, x_{s - 1})I^{n - 2}M.$$ Therefore, $$x_{s}\{[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}^{2}] \cap I^{n - 1}M\} \subseteq x_{s}[(x_{1}, \ldots, x_{s - 1})I^{n - 2}M] \subseteq (x_{1}, \ldots, x_{s - 1})I^{n - 1}M.$$
Putting these facts together, we obtain
$$[(x_{1}, \ldots, x_{s - 1})M :_{M} x_{s}] \cap I^{n}M = (x_{1}, \ldots, x_{s - 1})I^{n - 1}M.$$
Now, for $i < s$, we can use induction on $n$ and on $i$ to obtain
$$[(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}] \cap I^{n}M = \{[(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}] \cap I^{n - 1}M\} \cap I^{n}M$$ $$= [(x_{1}, \ldots, x_{i - 1})I^{n - 2}M] \cap I^{n}M \subseteq [(x_{1}, \ldots, x_{i})M] \cap I^{n}M$$ $$\subseteq [(x_{1}, \ldots, x_{i})M :_{M} x_{i + 1}] \cap I^{n}M=(x_{1}, \ldots, x_{i})I^{n - 1}M=(x_{1}, \ldots, x_{i-1})I^{n - 1}M+x_iI^{n-1}M.$$ It follows that
$$ [(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}] \cap I^{n}M = (x_{1}, \ldots, x_{i - 1})I^{n - 1}M + x_{i}\{ [(x_{1}, \ldots, x_{i - 1})M :_{M} x_{i}^{2}] \cap I^{n - 1}M\}.$$ Now the result follows similarly as in the case $i = s$. \hspace*{\fill} $\square$
\end{proof}
Let us now introduce another fundamental blowup module. For an ideal $I$ in a ring $R$ and a finite $R$-module $M$, the {\it associated graded module of $I$ relative to $M$} is the blowup module $${\mathcal G}(I, M) \, = \, \bigoplus_{n \geq 0}\, \frac{I^{n}M}{I^{n + 1}M} \, = \, {\mathcal R}(I, M)\otimes_RR/I,$$ which is a finite graded module over the {\it associated graded ring} ${\mathcal G}(I) = {\mathcal G}(I, R)$ of $I$, i.e., ${\mathcal G}(I) = {\mathcal R}(I)\otimes_RR/I$. Notice that this algebra is standard graded over ${\mathcal G}(I)_0=R/I$. As usual, we set ${\mathcal G}(I)_+= \bigoplus_{n \geq 1}I^{n}/I^{n + 1}$.
Here it is worth mentioning for completeness that, under a suitable set of hypotheses, there is a close connection between the Cohen-Macaulayness of ${\mathcal G}(I, M)$ and the property ${\rm r}(I, M)\leq 1$. We refer to \cite[Theorem 3.8]{Pol-X}.
The following fact generalizes \cite[Lemma 4.8]{Oo} (also \cite[Proposition 4.1]{J-U}).
\begin{Lemma}$($\cite[Corollary 3]{Zamani}$)$\label{reg=reg}
Let $R$ be a ring, $I$ an ideal of $R$ and $M$ a
finite $R$-module. Then $\mathrm{reg}\,{\mathcal R}(I, M) = \mathrm{reg}\,{\mathcal G}(I, M) $.
\end{Lemma}
\begin{Remark}\label{reg<r}\rm
In the context of Proposition \ref{proptrung}, a consequence is the following inclusion of submodules of $I^nM$, $$[(x_{1}, \ldots, x_{i - 1})I^{n}M :_{M} x_{i}] \, \cap \, I^{n}M \, \subseteq \, (x_{1}, \ldots, x_{i - 1})I^{n - 1}M$$ for all $n \geq \ell + 1$ and $i = 1, \ldots, s$. But this is then an equality, as
$x_i[(x_{1}, \ldots, x_{i - 1})I^{n - 1}M]\subseteq (x_{1}, \ldots, x_{i - 1})I^{n}M$. It follows that (1) holds for all $n \geq \ell + 1$ and therefore, by Lemma \ref{aistheleast} and Lemma \ref{propleastnum}, we have $\mathrm{reg}\,{\mathcal R}(I, M) \leq \ell$. On the other hand, by Lemma \ref{reg=reg}, there is a general equality $\mathrm{reg}\,{\mathcal R}(I, M) = \mathrm{reg}\,{\mathcal G}(I, M)$. We conclude that $$\mathrm{reg}\,{\mathcal R}(I, M) \, = \, \mathrm{reg}\,{\mathcal G}(I, M) \, \leq \, \ell,$$ which thus generalizes \cite[Proposition 4.7(iv)]{Trung}.
\end{Remark}
\begin{Remark}\rm \label{cotareg}
Let $k \geq 1$ be a positive integer. Making $\ell = {\rm r}_{J}(I,M) + k - 1$ in Proposition \ref{proptrung}, we retrieve the Noetherian part of \cite[Proposition 5.2]{Giral-Planas-Vilanova}. The inequality ${\rm r}_{J}(I,M) \leq \mathrm{reg}\,{\mathcal R}(I, M)$ follows easily from Lemma \ref{propleastnum}, whereas the bound $$\mathrm{reg}\,{\mathcal R}(I, M) \, \leq \, {\rm r}_{J}(I, M) + k - 1$$ is a consequence of Remark \ref{reg<r}. In particular, the case $k=1$ yields $\mathrm{reg}\,{\mathcal R}(I, M) = {\rm r}_{J}(I,M)$.
\end{Remark}
Next, we record a result which provides another set of conditions under which the equality $\mathrm{reg}\,{\mathcal R}(I, M) = {\rm r}_{J}(I,M)$ holds. It will be a key ingredient in the proof of Theorem \ref{Mafigene}. For completeness, we recall that the condition on $x_1, \ldots, x_s$ present in part (a) below means that no element of the sequence lies in the ideal generated by the others and in addition there are equalities $0 :_Mx_{1}x_j = 0 :_Mx_j$ for $j=1, \ldots, s$ and $(x_1, \ldots, x_i)M :_Mx_{i+1}x_j = (x_1, \ldots, x_i)M :_Mx_j$ for $i=1, \ldots, s-1$ and $j=i+1, \ldots, s$. We observe that parts (a) and (b) are satisfied if the elements $x_1, \ldots, x_s$ form an $M$-sequence.
\begin{Lemma}$($\cite[Theorem 5.3]{Giral-Planas-Vilanova}$)$\label{GPV}
Let $R$ be a ring, $I$ an ideal of $R$ and $M$ a
finite $R$-module. Let $J = (x_{1}, \ldots, x_{s})$ be a reduction of $I$ relative to $M$, and ${\rm r}_{J}(I, M) = {\rm r}$. Suppose the following conditions hold:
\begin{itemize}
\item[(a)] $x_{1}, \ldots, x_{s}$ is a $d$-sequence relative to $M$;
\item[(b)] $x_{1}, \ldots, x_{s - 1}$ is an $M$-sequence;
\item[(c)] $(x_{1}, \ldots, x_{i})M \cap I^{{\rm r} + 1}M = (x_{1}, \ldots, x_{i})I^{{\rm r}}M$, \, $i = 1, \ldots, s - 1.$
\end{itemize}
Then, $\mathrm{reg}\,{\mathcal R}(I, M) = {\rm r}_{J}(I,M)$.
\end{Lemma}
\subsection{On a question of Rossi, Trung, and Trung}\label{RTT} In this part we focus on the interesting question raised in \cite[Remark 2.5]{Rossi-Dinh-Trung} as to whether, for a ring $R$ and an ideal $I$ having a minimal reduction $J$, the formula $${\rm reg}\,\mathcal{R}(I) \, = \, \mathrm{min}\,\{n \geq {\rm r}_{J}(I) \, \mid \, I^{n + 1} : I = I^{n}\}$$ holds under the hypotheses of Lemma \ref{rossidinhtrung} below. This lemma is a crucial ingredient in the proof of Theorem \ref{rossiquest}, which will lead us to partial answers to the question as will be explained in Remark \ref{partial} and recorded in Corollary \ref{our-answer}.
\begin{Lemma}$($\cite[Theorem 2.4]{Rossi-Dinh-Trung}$)$\label{rossidinhtrung}
Let $(R, \mathfrak{m})$ be a two-dimensional Buchsbaum local ring with $\mathrm{depth}\,R > 0$. Let $I$ be an $\mathfrak{m}$-primary ideal which is not a parameter ideal, and let $J$ be a minimal reduction of $I$. Then
$${\rm reg}\,\mathcal{R}(I) \, = \, \mathrm{max}\,\{{\rm r}_{J}(I),\,s^{*}(I)\} = \mathrm{min}\,\{n \geq {\rm r}_{J}(I) \, \mid \, I^{n} = \widetilde{I^n}\}.$$
\end{Lemma}
\begin{Theorem}\label{rossiquest}
For $R$, $I$, and $J$ exactly as in Lemma \ref{rossidinhtrung}, there is an equality
$${\rm reg}\,\mathcal{R}(I) \, = \, \mathrm{min}\,\{n \geq {\rm r}_{J}(I) \, \mid \, I^{m + 1} : I = I^{m} \ \ \mathit{for \ all} \ \ m \geq n\}.$$
\end{Theorem}
\begin{proof} By Lemma \ref{rossidinhtrung}, ${\rm reg}\,\mathcal{R}(I) = \mathrm{max}\{{\rm r}_{J}(I), s^{*}(I)\}$. Assume first that $s^{*}(I) \leq {\rm r}_{J}(I)$. Note that, being an $\mathfrak{m}$-primary ideal, $I$ contains a regular element since $\mathrm{depth}\,R > 0$. Thus, Proposition \ref{regelem}(a) yields $I^{m + 1} : I = \widetilde{I^{m + 1}} : I = \widetilde{I^{m}} = I^{m}$ for all $m \geq {\rm r}_{J}(I)$, and therefore $${\rm reg}\,\mathcal{R}(I) \, = \, {\rm r}_{J}(I) \, = \, \mathrm{min}\,\{n \geq {\rm r}_{J}(I) \, \mid \, I^{m + 1} : I = I^{m} \ \mathrm{for \ all} \ m \geq n\}.$$ Now suppose ${\rm r}_{J}(I) < s^{*}(I)$. Thus ${\rm reg}\,\mathcal{R}(I) = s^{*}(I)$. On the other hand, Proposition \ref{s*conjec} gives
$s^{*}(I) = \mathrm{min}\{m \geq 1 \mid I^{n + 1} : I = I^{n} \ \mathrm{for \ all} \ n \geq m \}$. Hence we can write ${\rm reg}\,\mathcal{R}(I) = \mathrm{min}\{m \geq {\rm r}_{J}(I) \mid I^{n + 1} : I = I^{n} \ \mathrm{for \ all} \ n \geq m\}$, as needed. \hspace*{\fill} $\square$
\end{proof}
\begin{Remark}\label{partial}\rm We point out that if $s^{*}(I) \leq {\rm r}_{J}(I) + 1$ then our theorem (with the crucial description of $s^*(I)$ given by Proposition \ref{s*conjec}) settles affirmatively the problem of Rossi-Trung-Trung. This is easily seen to be true in the situation $s^{*}(I) \leq {\rm r}_{J}(I)$. Now suppose $s^{*}(I) = {\rm r}_{J}(I) + 1$, and write ${\rm r}_{J}(I)={\rm r}$. Then necessarily $$I^{{\rm r} + 1} : I \, \neq \, I^{{\rm r}}.$$ It thus follows that Theorem \ref{rossiquest} solves the problem once again. Another situation where our result answers affirmatively the question is when $R$ is Cohen-Macaulay and $\widetilde{I^{\rm r}}=I^{\rm r}$; this holds because, by \cite[Remark 2.5]{Mafi}, we must have $s^{*}(I) \leq {\rm r}_{J}(I)$ in this case, and then we are done by a previous comment. Below we record such facts as a corollary.
\end{Remark}
\begin{Corollary}\label{our-answer}
Let $R$, $I$, and $J$ be exactly as in Lemma \ref{rossidinhtrung}. Suppose in addition any one of the following conditions:
\begin{itemize}
\item[(a)] $s^{*}(I) \leq {\rm r}_{J}(I) + 1$;
\item[(b)] $R$ is Cohen-Macaulay and $\widetilde{I^{\rm r}}=I^{\rm r}$, where ${\rm r}={\rm r}_{J}(I)$.
\end{itemize}
Then the Rossi-Trung-Trung question has an affirmative answer.
\end{Corollary}
As far as we know, under the hypotheses of Lemma \ref{rossidinhtrung} there is no example satisfying $s^*(I)>{\rm r}_{J}(I) + 1$. It is thus natural to raise the following question, to which an affirmative answer would imply the definitive solution of the Rossi-Trung-Trung problem.
\begin{Question}\rm Under the hypotheses of Lemma \ref{rossidinhtrung}, is it true that $s^{*}(I) \leq {\rm r}_{J}(I) + 1$?
\end{Question}
Regarding the setting relative to a module, a question is in order.
\begin{Question}\rm Let $R$ be exactly as in Lemma \ref{rossidinhtrung}. Let $I$ be an $\mathfrak{m}$-primary ideal and let $J$ be a minimal reduction of $I$ relative to a Buchsbaum $R$-module $M$, satisfying ${\rm r}_J(I, M)\geq 1$. We raise the question of whether any of the following equalities
\begin{itemize}
\item[(a)] ${\rm reg}\,\mathcal{R}(I, M) = \mathrm{max}\,\{{\rm r}_{J}(I, M),\,s^{*}(I, M)\}$,
\item[(b)] ${\rm reg}\,\mathcal{R}(I, M) = \mathrm{min}\,\{n \geq {\rm r}_{J}(I, M) \mid I^{m + 1}M :_M I = I^{m}M \ \mathrm{for \ all} \ m \geq n\}$
\end{itemize} is true. Notice that the assertions (a) and (b) are in fact equivalent; the implication (a)$\Rightarrow$(b) can be easily seen by using Theorem \ref{Mafigene} (to be proven in the next section) and Proposition \ref{s*conjec}, whereas (b)$\Rightarrow$(a) follows easily with the aid of Proposition \ref{s*conjec}. We might even conjecture that such equalities hold in the particular case where $R$ and $M$ are Cohen-Macaulay.
\end{Question}
\section{A generalization of a result of Mafi}\label{gen-of-Mafi}
Our main goal in this section is to establish Theorem \ref{Mafigene}, which concerns the interplay between the numbers ${\rm r}_{J}(I, M)$ and $\mathrm{reg}\,{\mathcal R}(I, M)$ in a suitable setting. As we shall explain, our theorem generalizes a result due to Mafi from \cite{Mafi} (see Corollary \ref{Mafigene2}) and gives us a number of other applications, to be described in Section \ref{app} and also in Section \ref{Ulrich}. As a matter of notation, it is standard to write ${\rm grade}(I, M)$ for the maximal length of an $M$-sequence contained in the ideal $I$ of the local ring $(R, \mathfrak{m})$. If $M=R$, the notation is simplified to ${\rm grade}\,I$. Note that ${\rm grade}(I, M)$ is just the $I$-${\rm depth}$ of $M$; in particular, ${\rm grade}(\mathfrak{m}, M)={\rm depth}\,M$.
First we recall two general lemmas.
\begin{Lemma}$($\cite[Lemma 1.2]{Rossi-Valla}$)$\label{LemmaRV0}
Let $R$ be a local ring, $M$ a finite $R$-module of positive dimension, and $I$ a proper ideal. Let $x_1, \ldots, x_s$ be an $M$-superficial sequence of $I$. Then $x_1, \ldots, x_s$ is an $M$-sequence if and only if ${\rm grade}(I, M)\geq s$.
\end{Lemma}
\begin{Lemma}$($\cite[p.\,12]{Rossi-Valla}$)$\label{LemmaRV} Let $(R, \mathfrak{m})$ be a local ring with infinite residue field, $M$ a finite $R$-module of positive dimension, $I$ an $\mathfrak{m}$-primary ideal, and $J$ a minimal reduction of $I$ relative to $M$. Then $J$ can be generated by an $M$-superficial sequence of $I$, which is also a system of parameters of $M$.
\end{Lemma}
Our theorem (which also holds in an appropriate graded setting) is as follows.
\begin{Theorem}\label{Mafigene}
Let $(R, \mathfrak{m})$ be a local ring with infinite residue field, $M$ a Cohen-Macaulay $R$-module of dimension $s\geq 1$, $I$ an $\mathfrak{m}$-primary ideal, and $J = (x_1, \ldots, x_s)$ a minimal reduction of $I$ relative to $M$. Set ${\rm r} = {\rm r}_{J}(I, M)$ and $M_j = M/(x_{1}, \ldots, x_{j - 1})M$ with $M_1=M$. Assume that either ${\rm r} = 0$, or ${\rm r} \geq 1$ and $\widetilde{I^{\rm r}_{M_{j}}} = I^{\rm r}M_{j}$ for $j = 1, \ldots, s-1$ {\rm (}if $s\geq 2${\rm )}. Then
$$\mathrm{reg}\,{\mathcal R}(I, M) \, = \, \mathrm{reg}\,{\mathcal G}(I, M) \, = \, {\rm r}_{J}(I, M).$$ In particular, ${\rm r}(I, M)$ is independent.
\end{Theorem}
\begin{proof} First, by Lemma \ref{reg=reg}, the equality $\mathrm{reg}\,{\mathcal R}(I, M) = \mathrm{reg}\,{\mathcal G}(I, M)$ holds. Thus our objective is to prove that $\mathrm{reg}\,{\mathcal R}(I, M) = {\rm r}$.
Because $M$ is Cohen-Macaulay and $I$ is $\mathfrak{m}$-primary, we have $s = \mathrm{depth}\,M = \mathrm{grade}(I, M)$
and then $x_{1}, \ldots, x_{s}$ must be in fact an $M$-sequence by Lemma \ref{LemmaRV0}. As a consequence, conditions (a) and (b) of Lemma \ref{GPV} are satisfied. We shall prove that \begin{equation}\label{main}(x_{1}, \ldots, x_{i})M \, \cap \, I^{{\rm r} + 1}M \, = \, (x_{1}, \ldots, x_{i})I^{{\rm r}}M, \ \ \ \ \ i = 1, \ldots, s - 1 \end{equation} and then the desired formula $\mathrm{reg}\,{\mathcal R}(I, M) = {\rm r}$ will follow immediately by Lemma \ref{GPV}. Notice that the condition is vacuous if $s=1$ (both sides of (\ref{main}) are regarded as zero), so we can suppose $s\geq 2$ from now on.
The case ${\rm r} = 0$ is now trivial because $(x_{1}, \ldots, x_{i})M \subseteq IM$, or what amounts to the same, $(x_{1}, \ldots, x_{i})M \cap IM = (x_{1}, \ldots, x_{i})M$, for all $i = 1, \ldots, s - 1$.
So let us assume that ${\rm r}\geq 1$ and $\widetilde{I^{\rm r}_{M_{j}}} = I^{\rm r}M_{j}$ for $j = 1, \ldots, s-1$. We proceed by induction on $i$. First, pick an element $f \in (x_{1})M \cap I^{{\rm r} + 1}M$. Then there exists $m \in M$ such that $f = x_{1}m \in I^{{\rm r} + 1}M$. By Proposition \ref{regelem}(b) and the hypothesis that $\widetilde{I^{{\rm r}}_{M}} \, = \, I^{{\rm r}}M$, we have $$m \, \in \, I^{{\rm r} + 1}M :_{M} x_{1} \, \subseteq \, \widetilde{I^{{\rm r} + 1}_{M}} :_{M} x_{1} \, = \, \widetilde{I^{{\rm r}}_{M}} \, = \, I^{{\rm r}}M,$$ so that $f = x_{1}m \in (x_{1})I^{{\rm r}}M$, which shows $(x_{1})M \cap I^{{\rm r} + 1}M = (x_{1})I^{{\rm r}}M$. Now let $i\geq 2$ and suppose \begin{equation}\label{suppose}
(x_{1}, \ldots, x_{i - 1})M \, \cap \, I^{{\rm r} + 1}M \, = \, (x_{1}, \ldots, x_{i - 1})I^{{\rm r}}M.
\end{equation} We claim that
\begin{equation}\label{claimed}
[I^{{\rm r} + 1}M + (x_{1}, \ldots, x_{i - 1})M] \, \cap \, (x_{1}, \ldots, x_{i})M \, = \, (x_{1}, \ldots, x_{i - 1})M + x_{i}I^{{\rm r}}M.
\end{equation} The inclusion
$(x_{1}, \ldots, x_{i - 1})M + x_{i}I^{{\rm r}}M\subseteq [I^{{\rm r} + 1}M + (x_{1}, \ldots, x_{i - 1})M] \, \cap \, (x_{1}, \ldots, x_{i})M$ is clear. Now pick $$g \, \in \, [I^{{\rm r} + 1}M + (x_{1}, \ldots, x_{i - 1})M] \cap (x_{1}, \ldots, x_{i})M.$$ Then, there exist $h \in I^{{\rm r} + 1}M$ and $m_{l}, n_{k} \in M$, with $l = 1, \ldots, i - 1$ and $k = 1, \ldots, i$, such that
$$g \, = \, h + \sum_{l = 1}^{i - 1}\,x_{l}m_{l} \, = \, \sum_{k = 1}^{i}\,x_{k}n_{k}.$$ Hence $h - x_{i}n_{i} \in (x_{1}, \ldots, x_{i - 1})M$. Now, denote
$ (I^{n})_{i} = (I^n + (x_{1}, \ldots, x_{i - 1}))/(x_{1}, \ldots, x_{i - 1})$ for any given $n\geq 0$, which agrees with $(I_i)^n$. Since $M_i = M/(x_{1}, \ldots, x_{i - 1})M$, we have \begin{equation}\label{equal0}
\widetilde{I^{n}_{M_{i}}} \, = \, \widetilde{(I_i)^{n}_{M_{i}}}
\end{equation}
for all $n \geq 1$ by \cite[Observation 2.3]{Puthenpurakal1}. Modulo $(x_{1}, \ldots, x_{i - 1})M$, we can write
\begin{equation}\label{equal1}
\overline{x_{i}n_{i}} \, = \, \overline{h} \, \in \, I^{{\rm r} + 1}M_i.
\end{equation}
Note that $\mathrm{grade}(I_i, M_i) > 0$ as the image $\overline{x_i} \in I_i$ of $x_{i}$ is $M_i$-regular. Moreover, $\overline{x_i}$ is $M_i$-superficial. Applying (\ref{equal0}), (\ref{equal1}), Proposition \ref{regelem}(b) and our hypotheses, we get \begin{equation}\label{all} \overline{n_{i}} \, \in \, I^{{\rm r} + 1}{M_i} :_{M_i} \overline{x_{i}} \, \subseteq \, \widetilde{I^{{\rm r} + 1}_{M_i}} :_{M_i} \overline{x_{i}} \, = \, \widetilde{(I_i)^{{\rm r} + 1}_{M_i}} :_{M_i} \overline{x_{i}} \, = \,
\widetilde{(I_i)^{{\rm r}}_{M_i}} \, = \, \widetilde{I^{{\rm r}}_{M_i}} \, = \, I^{{\rm r}}M_{i},\end{equation} where in particular we are using that $\widetilde{I^{{\rm r}}_{M_s}} = I^{{\rm r}}M_{s}$, which we claim to hold and will confirm in the last part of the proof. So, (\ref{all}) yields $n_{i} \in I^{{\rm r}}M + (x_{1}, \ldots, x_{i - 1})M$,
and this implies $$g \, = \, \left(\displaystyle{\sum_{k = 1}^{i-1}\,x_{k}n_{k}}\right) + x_in_i \, \in \, (x_{1}, \ldots, x_{i - 1})M + x_{i}I^{{\rm r}}M,$$ which thus proves (\ref{claimed}). Now, using (\ref{claimed}) and (\ref{suppose}) we obtain
\begin{eqnarray}
I^{{\rm r} + 1}M \cap (x_{1}, \ldots, x_{i})M &=& I^{{\rm r} + 1}M \cap (I^{{\rm r} + 1}M + (x_{1}, \ldots, x_{i - 1})M) \cap (x_{1}, \ldots, x_{i})M \nonumber \\ ~ &=& I^{{\rm r} + 1}M \cap [(x_{1}, \ldots, x_{i - 1})M + x_{i}I^{{\rm r}}M] \nonumber \\~ &=& [I^{{\rm r} + 1}M \cap (x_{1}, \ldots, x_{i - 1})M] + x_{i}I^{{\rm r}}M \nonumber \\
~ &=& (x_{1}, \ldots, x_{i - 1})I^{{\rm r}}M + x_{i}I^{{\rm r}}M \nonumber \\ ~ &=& (x_{1}, \ldots, x_{i})I^{{\rm r}}M, \nonumber
\end{eqnarray} which gives (\ref{main}), as needed.
To finish the proof, it remains to verify $\widetilde{I^{{\rm r}}_{M_s}} = I^{{\rm r}}M_{s}$. First observe that, for all $n \geq {\rm r}$, $$I^{n + 1}M = JI^{n}M = (x_1, \ldots, x_s)I^{n}M = x_sI^{n}M + (x_1, \ldots, x_{s - 1})I^{n}M.$$
Adding $(x_1, \ldots, x_{s - 1})M$ to both sides, we have $$\begin{array}{cccc}
~ & x_sI^{n}M + (x_1, \ldots, x_{s - 1})M &=& I^{n + 1}M + (x_1, \ldots, x_{s - 1})M \\ \Rightarrow & x_sJI^{n - 1}M + (x_1, \ldots, x_{s - 1})M &=& I^{n + 1}M + (x_1, \ldots, x_{s - 1})M \\ \Rightarrow & x_s[ x_sI^{n - 1}M + (x_1, \ldots, x_{s - 1})I^{n - 1}M] + (x_1, \ldots, x_{s - 1})M &=& I^{n + 1}M + (x_1, \ldots, x_{s - 1})M \\ \Rightarrow & x_s^{2}I^{n - 1}M + (x_1, \ldots, x_{s - 1})M &=& I^{n + 1}M + (x_1, \ldots, x_{s - 1})M.
\end{array}$$
Proceeding with the same argument, we deduce $$x_s^{n - {\rm r} + 1}I^{\rm r}M + (x_1, \ldots, x_{s - 1})M = I^{n + 1}M + (x_1, \ldots, x_{s - 1})M.$$ Using this equality, the standard definitions and the fact that $x_s$ is $M/(x_1, \ldots, x_{s-1})M$-regular, we can take $k \gg 0$ so that
$$\begin{array}{ccccccccc}
\widetilde{I^{\rm r}_{M_{s}}} & = & \widetilde{(I_s)^{\rm r}_{M_s}} & = & \left(\frac{I}{(x_1, \ldots, x_{s - 1})}\right)^{{\rm r} + k}M_s :_{M_s} \left(\frac{I}{(x_1, \ldots, x_{s - 1})}\right)^{k} \\
~ & = & \frac{I^{{\rm r} + k}M + (x_1, \ldots, x_{s - 1})M}{(x_1, \ldots, x_{s - 1})M} :_{M_s} \frac{I^{k} + (x_1, \ldots, x_{s - 1})}{(x_1, \ldots, x_{s - 1})} & \subseteq & \frac{I^{{\rm r} + k}M + (x_1, \ldots, x_{s - 1})M}{(x_1, \ldots, x_{s - 1})M} :_{M_s} \overline{x_s^{k}} \\ ~ & = & \frac{x^{k}_sI^{\rm r}M + (x_1, \ldots, x_{s - 1})M}{(x_1, \ldots, x_{s - 1})M} :_{M_s} \overline{x_s^{k}} & = & \frac{I^{\rm r}M + (x_1, \ldots, x_{s - 1})M}{(x_1, \ldots, x_{s - 1})M} \\ ~ & = & I^{\rm r}M_s.\end{array}$$ \hspace*{\fill} $\square$
\end{proof}
It seems natural to raise the following question.
\begin{Question}\rm \label{questionmafi} In Theorem \ref{Mafigene} (assuming $s\geq 3$ and ${\rm r}\neq 0$), can we replace the set of hypotheses $\widetilde{I^{\rm r}_{M_{j}}} = I^{\rm r}M_{j}$, \,$j = 1, \ldots, s-1$, with the single condition $\widetilde{I^{\rm r}_M} = I^{\rm r}M$\,?
\end{Question}
The result below (the case $s=2$) is an immediate byproduct of Theorem \ref{Mafigene}.
\begin{Corollary} \label{corMafigen}
Let $(R, \mathfrak{m})$ be a local ring with infinite residue field, $M$ a Cohen-Macaulay $R$-module with ${\rm dim}\,M = 2$, $I$ an $\mathfrak{m}$-primary ideal, and $J$ a minimal reduction of $I$ relative to $M$. Setting ${\rm r = r}_{J}(I, M)$, assume that either ${\rm r} = 0$, or ${\rm r} \geq 1$ and $\widetilde{I^{\rm r}_{M}} = I^{\rm r}M$. Then
$$\mathrm{reg}\,{\mathcal R}(I, M) \, = \, \mathrm{reg}\,{\mathcal G}(I, M) \, = \, {\rm r}_{J}(I, M).$$
\end{Corollary}
We are now able to recover an interesting result of Mafi about zero-dimensional ideals in two-dimensional Cohen-Macaulay local rings. More precisely, by taking $M = R$ in Corollary \ref{corMafigen} we readily derive the following fact.
\begin{Corollary}$($\cite[Proposition 2.6]{Mafi}$)$\label{Mafigene2} Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field. Let $I$ be an $\mathfrak{m}$-primary ideal and $J$ a minimal reduction of $I$. Setting ${\rm r = r}_{J}(I)$, assume that either ${\rm r} = 0$, or ${\rm r} \geq 1$ and $\widetilde{I^{\rm r}} = I^{\rm r}$. Then $$\mathrm{reg}\,{\mathcal R}(I) \, = \, \mathrm{reg}\,{\mathcal G}(I) \, = \, {\rm r}_{J}(I).$$
\end{Corollary}
\begin{Example}\rm Consider the ideal $I = (x^6, x^4y^2,x^3y^3,y^6)$ of the formal power series ring $R = k[[x,y]]$, where $k$ is a field. By \cite[Example 3.2]{Huckaba}, we have ${\rm r}(I) = 3$ and $\mathrm{grade}\,\mathcal{G}(I)_{+} > 0$. This latter fact is equivalent to all powers of $I$ being Ratliff-Rush closed
(see \cite[Fact 9]{Heinzer-Johnson-Lantz-Shah}), i.e., $s^{*}(I) = 1$. In particular, $\widetilde{I^3} = I^3$, and therefore Corollary \ref{Mafigene2} gives $$\mathrm{reg}\,\mathcal{R}(I) \, = \, \mathrm{reg}\,\mathcal{G}(I) \, = \, 3.$$ Let us also mention that, in this case, it is easy to alternatively confirm any of these equalities, say $\mathrm{reg}\,\mathcal{R}(I)=3$, by first regarding (for simplicity) $I$ as an ideal of the polynomial ring $k[x, y]$ and then noting that the Rees algebra of $I$ can be presented as $\mathcal{R}(I) = S/\mathcal{J} = k[x, y, Z, W, T, U]/\mathcal{J}$, where now $x, y$ are seen in degree 0 and $Z, W, T, U$ are indeterminates of degree 1 over $k[x, y]$. Explicitly, $\mathcal{J}$ is the homogeneous $S$-ideal $$\mathcal{J} \, = \, (T^2-ZU,\, yW-xT,\, W^3-Z^2U,\, y^2Z-x^2W,\, xW^2-yZT,\, y^3T-x^3U).$$ A computation gives the shifts of the graded $S$-free modules along a (length $3$) resolution of $\mathcal{J}$, and we can finally use a standard device (e.g., \cite[Exercise 15.3.7(iv)]{B-S}) to get ${\rm reg}\,\mathcal{J}=4$, hence
$\mathrm{reg}\,\mathcal{R}(I)=\mathrm{reg}\,S/\mathcal{J}={\rm reg}\,\mathcal{J}-1=3$, as desired.
\end{Example}
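For the reader's convenience, the following short script (a sketch in Python/SymPy, added as a sanity check) verifies that the six listed generators of $\mathcal{J}$ are indeed relations of $\mathcal{R}(I)$, by substituting $Z = x^6t$, $W = x^4y^2t$, $T = x^3y^3t$ and $U = y^6t$; it does not, of course, verify that they generate the whole defining ideal.
\begin{verbatim}
# Sketch: check that the listed generators of the presentation ideal are
# relations of R(I), by substituting Z = x^6 t, W = x^4 y^2 t, T = x^3 y^3 t,
# U = y^6 t.  (This only checks that they lie in the defining ideal, not
# that they generate it.)
import sympy as sp

x, y, t = sp.symbols('x y t')
Z, W, T, U = x**6*t, x**4*y**2*t, x**3*y**3*t, y**6*t
relations = [T**2 - Z*U, y*W - x*T, W**3 - Z**2*U,
             y**2*Z - x**2*W, x*W**2 - y*Z*T, y**3*T - x**3*U]
print([sp.expand(r) for r in relations])   # expected: [0, 0, 0, 0, 0, 0]
\end{verbatim}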
To conclude the section, we illustrate that Corollary \ref{Mafigene2} is no longer valid if we remove the hypothesis involving the Ratliff-Rush closure.
\begin{Example}\rm
Consider the monomial ideal $$I \, = \, (x^{157},\, x^{35}y^{122},\, x^{98}y^{59},\, y^{157}) \, \subset \, k[x, y],$$ where $k[x,y]$ is a (standard graded) polynomial ring over a field $k$. By \cite[Proposition 3.3 and Remark 3.4]{DTrung}, the ideal $J = (x^{157},y^{157})$ is a minimal reduction of $I$ satisfying $${\rm r}_{J}(I) \, = \, 20 \, < \, 21 \, = \, \mathrm{reg}\,{\mathcal R}(I) \, = \, s^{*}(I),$$
where the last equality follows by Lemma \ref{rossidinhtrung}. Thus, ${\rm r}_{J}(I) = s^{*}(I) - 1\neq \mathrm{reg}\,{\mathcal R}(I)$. Finally notice that, as $s^{*}(I)={\rm r}_J(I)+1=20+1$, we have $\widetilde{I^{20}} \neq I^{20}$.
\end{Example}
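Since $I$ and $J$ are monomial ideals, the quoted value ${\rm r}_{J}(I) = 20$ can also be checked by brute force: for monomial ideals, the equality $I^{n+1} = JI^{n}$ amounts to every product of $n+1$ generators of $I$ being divisible by the product of one generator of $J$ with some product of $n$ generators of $I$, and once the equality holds for one $n$ it holds for all larger $n$. A small Python sketch (added here for illustration; it takes a few seconds to run) is as follows.
\begin{verbatim}
# Sketch: brute-force check of r_J(I) = 20 for the monomial ideal above,
# I = (x^157, x^35 y^122, x^98 y^59, y^157),  J = (x^157, y^157).
# Testing n = 20 and n = 19 suffices, since I^{n+1} = J*I^n propagates upward.
from itertools import combinations_with_replacement

I_gens = [(157, 0), (35, 122), (98, 59), (0, 157)]
J_gens = [(157, 0), (0, 157)]

def power_gens(gens, n):
    # exponent vectors of (not necessarily minimal) generators of gens^n
    return {(sum(a for a, _ in c), sum(b for _, b in c))
            for c in combinations_with_replacement(gens, n)}

def product_gens(A, B):
    return {(a1 + b1, a2 + b2) for (a1, a2) in A for (b1, b2) in B}

def contained(A, B):
    # is the monomial ideal generated by A contained in the one generated by B?
    return all(any(a1 >= b1 and a2 >= b2 for (b1, b2) in B) for (a1, a2) in A)

def equality_at(n):   # does I^{n+1} = J*I^n hold?  (the inclusion ">=" is automatic)
    return contained(power_gens(I_gens, n + 1),
                     product_gens(set(J_gens), power_gens(I_gens, n)))

print(equality_at(20), equality_at(19))  # expected (per the cited result): True False
\end{verbatim}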
\section{First applications}\label{app}
We now describe some applications of the results obtained in the preceding section. With the exception of Subsection \ref{lintype}, we shall focus on the two-dimensional case.
\subsection{Hyperplane sections of Rees modules} The first application is the corollary below, which can be of particular interest for inductive arguments in dealing with Castelnuovo-Mumford regularity of Rees modules.
\begin{Corollary}
Let $(R, \mathfrak{m})$ be a local ring with infinite residue field, $I$ an $\mathfrak{m}$-primary ideal, and $M$ a Cohen-Macaulay $R$-module with $\mathrm{dim}\,M = 2$. Suppose $\widetilde{I^{\rm r}_{M}} = I^{\rm r}M$ with ${\rm r} = {\rm r}(I, M)$. Let $x \in \mathfrak{m} \setminus I$ be an $M$-regular element whose initial form in $\mathcal{G}(I)$ is $\mathcal{G}(I, M)$-regular. If $$\widetilde{\left(\frac{I + (x)}{(x)}\right)}_{\frac{M}{xM}} = \frac{IM + xM}{xM}$$ then\,
$\mathrm{reg}\,\mathcal{R}(I, M)/x\mathcal{R}(I, M) = \mathrm{reg}\,\mathcal{R}(I, M).$
\end{Corollary}
\begin{proof} It follows from \cite[Lemma 2.3]{Zamani1} that $$\mathrm{reg}\,\mathcal{R}(I, M)/x\mathcal{R}(I, M) \, = \, \mathrm{reg}\,\mathcal{R}((I + (x))/(x),\,M/xM).$$
Since $M/xM$ is a Cohen-Macaulay $R/(x)$-module of positive dimension (equal to $1$) and the ideal $(I + (x))/(x)$ is $\mathfrak{m}/(x)$-primary, we can apply Corollary \ref{corMafigen} so as to obtain $$\mathrm{reg}\,\mathcal{R}((I + (x))/(x),\,M/xM) \, = \, {\rm r}((I + (x))/(x),\,M/xM).$$ On the other hand, \cite[Lemma 2.2]{Zamani1} yields ${\rm r}((I + (x))/(x),\,M/xM) = {\rm r}(I,\,M)$. Now, again by Corollary \ref{corMafigen}, ${\rm r}(I, M) = \mathrm{reg}\,\mathcal{R}(I, M)$. Therefore, the asserted equality is true. \hspace*{\fill} $\square$
\end{proof}
Taking $M=R$, we immediately get the following consequence.
\begin{Corollary}
Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, and $I$ an $\mathfrak{m}$-primary ideal. Suppose $\widetilde{I^{\rm r}} = I^{\rm r}$ with ${\rm r} = {\rm r}(I)$. Let $x \in \mathfrak{m} \setminus I$ be a regular element whose initial form in $\mathcal{G}(I)$ is regular. If the $R/(x)$-ideal $(I + (x))/(x)$ is Ratliff-Rush closed, then\,
$\mathrm{reg}\,\mathcal{R}(I)/x\mathcal{R}(I) = \mathrm{reg}\,\mathcal{R}(I).$
\end{Corollary}
\subsection{The role of postulation numbers} This subsection (which focuses on the classical case $M=R$) investigates connections of postulation numbers with the Castelnuovo-Mumford regularity of blowup algebras and reduction numbers. First recall that, if $(R, \mathfrak{m})$ is a local ring and $I$ is an $\mathfrak{m}$-primary ideal of $R$, then the corresponding {\it Hilbert-Samuel function} is given by $$H_{I}(n) \, = \, \displaystyle{\lambda\left(R / I^{n}\right)}$$ for any integer $n \geq 1$, and $H_{I}(n) = 0$ if $n \leq 0$. The symbol $\lambda(-)$ denotes length of $R$-modules. It is well-known that $H_I(n)$ coincides, for all sufficiently large integers $n$, with a polynomial $P_I(n)$ -- the {\it Hilbert-Samuel polynomial of $I$}.
\begin{Definition}\rm The \textit{postulation number} of $I$ is the integer
$$\rho(I) \, = \, \mathrm{sup}\{n \in \mathbb{Z} \mid H_{I}(n) \neq P_{I}(n)\}.$$
\end{Definition}
Here it is worth recalling that the functions $H_I(n)$ and $P_I(n)$ are defined for all integers $n$, so $\rho(I)$ can be -- and often is -- negative (as emphasized in \cite[Introduction, p.\,335]{Marley2}).
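A standard example may help fix ideas: for $R = k[[x, y]]$ and $I = \mathfrak{m} = (x, y)$ we have $H_{I}(n) = \binom{n+1}{2} = P_{I}(n)$ for all $n \geq 1$, while $P_{I}(-2) = 1 \neq 0 = H_{I}(-2)$; hence $\rho(\mathfrak{m}) = -2$, in agreement with the value $\rho(I) = -d$ obtained for parameter ideals in Proposition \ref{HS-poly} below.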
In the application below we furnish a characterization of $\mathrm{reg}\,\mathcal{R}(I)$ (which by \cite[Lemma 4.8]{Oo} agrees with $\mathrm{reg}\,\mathcal{G}(I)$), in terms, in particular, of $\rho(I)$. Notice that our statement makes no explicit use of the concept of postulation number $p(\mathcal{G}(I))$ for the ring $\mathcal{G}(I)$ (see \cite[Remark 1.1]{Strunk}), which we only use in the proof.
\begin{Corollary}\label{strunk}
Let $(R,\mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, and let $I$ be an $\mathfrak{m}$-primary ideal with a minimal reduction $J$. If $\mathrm{grade}\,\mathcal{G}(I)_{+} = 0$, then
$$\mathrm{reg}\,\mathcal{R}(I) = {\rm max}\{{\rm r}_J(I),\,\rho(I) + 1\}.$$
\end{Corollary}
\begin{proof} We have ${\rm dim}\,\mathcal{G}(I)=2$ and $H^j_{\mathcal{G}(I)_+}(\mathcal{G}(I))=0$ for all $j>2$, so that $a_j(\mathcal{G}(I))=-\infty$ whenever $j>2$. Moreover, $\mathrm{grade}\,\mathcal{G}(I)_{+} = 0$ and then $a_{0}(\mathcal{G}(I)) < a_{1}(\mathcal{G}(I))$ by virtue of \cite[Theorem 2.1(a)]{Marley2}.
Therefore, $$\mathrm{reg}\,\mathcal{G}(I) \, = \, {\rm max}\{a_1(\mathcal{G}(I))+1,\, a_2(\mathcal{G}(I))+2\}.$$ Now let us consider the subcase where $a_{1}(\mathcal{G}(I)) \leq a_{2}(\mathcal{G}(I))$, so that $\mathrm{reg}\,\mathcal{G}(I) = a_{2}(\mathcal{G}(I)) + 2$. On the other hand, \cite[Lemma 1.2]{Marley2} yields $$a_{2}(\mathcal{G}(I)) + 2 \, \leq \, {\rm r}_{J}(I) \, \leq \, \mathrm{reg}\,\mathcal{G}(I),$$ and it follows that $\mathrm{reg}\,\mathcal{G}(I)= {\rm r}_{J}(I)$.
Finally, if $a_{2}(\mathcal{G}(I)) < a_{1}(\mathcal{G}(I))$, then $a_{2}(\mathcal{G}(I)) +2\leq a_{1}(\mathcal{G}(I))+1$ and hence $\mathrm{reg}\,\mathcal{G}(I) = a_{1}(\mathcal{G}(I)) + 1$. In addition, \cite[Proof of Theorem 3.10]{Strunk} gives $\rho(I)=p(\mathcal{G}(I))$, and on the other hand, applying \cite[Corollary 2.3(2)]{Brodmann-Linh} we can write $p(\mathcal{G}(I))=a_{1}(\mathcal{G}(I))$ since $a_{1}(\mathcal{G}(I))$ is strictly bigger than both $a_{0}(\mathcal{G}(I))$ and $a_{2}(\mathcal{G}(I))$. Thus, $\mathrm{reg}\,\mathcal{G}(I) = \rho(I) + 1$. \hspace*{\fill} $\square$
\end{proof}
Next we provide a different proof (in fact an improvement) of \cite[Proposition 3.7]{Hoa}. Notice that Hoa's $c(I)$ is just $s^*(I)-1$.
\begin{Corollary}
Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, and let $I$ be an $\mathfrak{m}$-primary ideal. If $\rho(I) \neq s^{*}(I) - 1$ then ${\rm r}_{J}(I)=\mathrm{reg}\,\mathcal{R}(I)$ for any minimal reduction $J$ of $I$. In particular,
${\rm r}(I)$ is independent.
\end{Corollary}
\begin{proof} Set ${\rm r}={\rm r}(I)$. By virtue of Corollary \ref{Mafigene2}, we can assume that ${\rm r}(I)\geq 1$ (in particular, $I$ cannot be a parameter ideal; see Subsection \ref{lintype} below). Let us consider first the case $\mathrm{grade}\,\mathcal{G}(I)_{+} >0$. According to \cite[Fact 9]{Heinzer-Johnson-Lantz-Shah}, all powers of $I$ must be Ratliff-Rush closed, and hence $s^*(I)=1$. Note ${\rm r}(I)\geq 1$ implies ${\rm r}_J(I)\geq 1=s^*(I)$ for every minimal reduction $J$ of $I$. By Lemma \ref{rossidinhtrung}, the asserted equality follows.
So we can suppose $\mathrm{grade}\,\mathcal{G}(I)_{+} = 0$. As verified in the proof of Corollary \ref{strunk}, if $a_{1}(\mathcal{G}(I)) \leq a_{2}(\mathcal{G}(I))$ then for any minimal reduction $J$ of $I$ we can write
${\rm r}_{J}(I)=\mathrm{reg}\,\mathcal{G}(I)$, which as we know coincides with $\mathrm{reg}\,\mathcal{R}(I)$. Moreover, we have seen that if $a_{2}(\mathcal{G}(I)) < a_{1}(\mathcal{G}(I))$ then $\mathrm{reg}\,\mathcal{G}(I) = \rho(I) + 1$, so that $\mathrm{reg}\,\mathcal{G}(I)\neq s^*(I)$, which is tantamount to saying that $\mathrm{reg}\,\mathcal{R}(I) \neq s^*(I)$.
Now Lemma \ref{rossidinhtrung} yields $\mathrm{reg}\,\mathcal{R}(I) = {\rm r}_{J}(I)$ whenever $J$ is a minimal reduction of $I$, as needed. \hspace*{\fill} $\square$
\end{proof}
\subsection{Ideals of linear type}\label{lintype}
For an ideal $I$ of a ring $R$, there is a canonical homomorphism $\pi \colon \mathcal{S}(I)\rightarrow \mathcal{R}(I)$ from the symmetric algebra $\mathcal{S}(I)$ of $I$ onto its Rees algebra $\mathcal{R}(I)$. The ideal $I$ is said to be of {\it linear type} if $\pi$ is an isomorphism. To see what this means a bit more concretely, we can make use of some (any) $R$-free presentation $$R^m\stackrel{\varphi}{\longrightarrow} R^{\nu}\longrightarrow I\longrightarrow 0.$$ Letting $S=R[T_1, \ldots, T_{\nu}]$ be a standard graded polynomial ring in variables $T_1, \ldots, T_{\nu}$ over $R=S_0$, we can identify $\mathcal{S}(I)=S/\mathcal{L}$, where $\mathcal{L}$ is the ideal generated by the $m$ linear forms in the $T_i$'s given by the entries of the product $(T_1 \cdots T_{\nu})\cdot \varphi$, where $\varphi$ is regarded as a $\nu \times m$ matrix taken with respect to the canonical bases of $R^{\nu}$ and $R^m$. We can also write $\mathcal{R}(I)=S/\mathcal{J}$, for a certain ideal $\mathcal{J}$ containing $\mathcal{L}$. Precisely, expressing $\mathcal{R}(I)=R[It]$ (where $t$ is an indeterminate over $R$), then $\mathcal{J}$ is the kernel of the natural epimorphism $S\rightarrow \mathcal{R}(I)$. Now, the above-mentioned map $\pi$ can be simply identified with the surjection $S/\mathcal{L}\rightarrow S/\mathcal{J}$. It follows that $I$ is of linear type if and only if $\mathcal{J}=\mathcal{L}$. For instance, if $I$ is generated by a regular sequence then $I$ is of linear type (see \cite[5.5]{Swanson-Huneke}).
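To fix ideas with the simplest (standard) example: for $I = (x, y)$ in $R = k[x, y]$, the syzygy module of $x, y$ is generated by $(y, -x)$, so that $\mathcal{L} = (yT_{1} - xT_{2})$; on the other hand, $\mathcal{R}(I) = k[x, y][T_{1}, T_{2}]/(yT_{1} - xT_{2})$, whence $\mathcal{J} = \mathcal{L}$ and $I$ is of linear type, in accordance with the remark just made about regular sequences.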
Now assume $(R, \mathfrak{m})$ is either local with infinite residue field or a standard graded algebra over an infinite field. The {\it analytic spread} of $I$, which we denote $s(I)$, is defined as the Krull dimension of the special fiber ring $\mathcal{R}(I)/\mathfrak{m} \mathcal{R}(I)$. It is a classical fact that ${\rm r}(I)=0$ if and only if $I$ can be generated by $s(I)$ elements (see \cite[Theorem 4(ii)]{NR}). So the property ${\rm r}(I)=0$ is easily seen to take place, for example, whenever $I$ is of linear type (or if $I$ is an $\mathfrak{m}$-primary parameter ideal).
While, in general, there exist examples of ideals with reduction number zero that are not of linear type, the natural question remains as to under what conditions the property ${\rm r}(I)=0$ forces $I$ to be of linear type. The corollary below is a quick application of Theorem \ref{Mafigene} (along with the property that the Castelnuovo-Mumford regularity controls degrees over graded polynomial rings) and contributes to this problem in a suitable setting. It is plausible to believe that the result is known to experts, but, as far as we know, it has not yet been recorded in the literature.
\begin{Corollary}\label{app-lintype}
Let $R$ be a standard graded polynomial ring over an infinite field, and let $I$ be a zero-dimensional ideal of $R$. If ${\rm r}(I) = 0$, then $I$ is of linear type.
\end{Corollary}
\begin{proof} Applying the graded analogue of Theorem \ref{Mafigene} with $M=R$, we obtain $\mathrm{reg}\,{\mathcal R}(I)=0$. Using the above notations, we get $\mathrm{reg}\,S/{\mathcal J}=0$, so that $\mathrm{reg}\,{\mathcal J}=1$, which implies that the homogeneous $S$-ideal $\mathcal{J}$ admits no minimal generator of degree greater than 1. In other words, we must have ${\mathcal J}={\mathcal L}$. As already clarified, this means that $I$ is of linear type. \hspace*{\fill} $\square$
\end{proof}
\section{More applications: Ulrich ideals and a question of Corso-Polini-Rossi}\label{Ulrich}
This last section provides applications concerning the theory of generalized Ulrich ideals. As we shall explain, this includes a negative answer (in dimension 2) to a question by Corso, Polini, and Rossi.
Here is the central notion (and the setup) of this section.
\begin{Definition}$($\cite[Definition 1.1]{Goto-Ozeki-Takahashi-Watanabe-Yoshida}$)$ \label{Ulrideal} \rm Let $(R, \mathfrak{m})$ be a $d$-dimensional Cohen-Macaulay local ring and let $I$ be an $\mathfrak{m}$-primary ideal. Suppose $I$ contains a parameter ideal $J$ as a reduction. Note this is satisfied whenever $R/\mathfrak{m}$ is infinite (indeed, in this situation, a minimal reduction -- which does exist -- of an $\mathfrak{m}$-primary ideal must necessarily be a parameter ideal). We say that $I$ is a \textit{generalized Ulrich ideal} -- {\it Ulrich ideal}, for short -- if $I$ satisfies the following properties:
\begin{itemize}
\item[(i)] ${\rm r}_{J}(I) \leq 1$;
\item[(ii)] $I/I^2$ is a free $R/I$-module.
\end{itemize}
\end{Definition}
\begin{Remark}\label{G-is-CM}\rm Let us recall a couple of interesting basic facts about Ulrich ideals. First, since $R$ is Cohen-Macaulay, it is clear that every parameter ideal is Ulrich. Second, if $R$, $I$ and $J$ are as above and $R/\mathfrak{m}$ is infinite, then by \cite[Lemma 1 and Theorem 1]{Valla} the associated graded ring $\mathcal{G}(I)$ must be Cohen-Macaulay whenever ${\rm r}_{J}(I) \leq 1$. Therefore, if $I$ is Ulrich then $\mathcal{G}(I)$ is Cohen-Macaulay.
\end{Remark}
\subsection{Regularity of blowup algebras} In order to determine the Castelnuovo-Mumford regularity of the blowup algebras of an Ulrich ideal, we first recall one of the ingredients.
\begin{Lemma}$($\cite[(1.3)]{Heinzer-Lantz-Shah}$)$\label{rednum1} If $(R, \mathfrak{m})$ is a Cohen-Macaulay local ring of positive dimension and $I$ is an $\mathfrak{m}$-primary ideal with ${\rm r}(I)\leq 1$, then every power of $I$ is Ratliff-Rush closed.
\end{Lemma}
This lemma immediately gives the following fact, which will be useful in the proof of Corollary \ref{reg=red/ulrich}.
\begin{Corollary}\label{UlrRatliff}
Let $R$ be a Cohen-Macaulay local ring of positive dimension with infinite residue field and let $I$ be an Ulrich ideal of $R$. Then $$\widetilde{I^{n}}=I^n, \ \ \ \ \ \forall \, n\geq 1.$$ Therefore, for any parameter ideal $J$ which is a reduction of $I$, we have either ${\rm r}_{J}(I) = 0$ or $s^{*}(I) = {\rm r}_{J}(I) = 1$.
\end{Corollary}
This corollary is particularly useful for showing that a given ideal is not Ulrich, as illustrated in the next two examples.
\begin{Example} \rm Let $k$ be an infinite field and $R = k[[x,y]]$. Then, by Corollary \ref{UlrRatliff}, the ideal $I = (x^{4}, x^{3}y, xy^{3}, y^{4})$ is not Ulrich, since $x^{2}y^{2} \in \widetilde{I} \setminus I$.
\end{Example}
\begin{Example} \rm Let $k$ be an infinite field and $R = k[[t^{3}, t^{10}, t^{11}]]$. As observed in \cite[Example 1.16]{Heinzer-Lantz-Shah}, the ideal $I = (t^{9}, t^{10}, t^{14})$ is not Ratliff-Rush closed (precisely, $t^{11} \in \widetilde{I} \setminus I$). Hence, Corollary \ref{UlrRatliff} gives that $I$ is not Ulrich.
\end{Example}
The next example shows that the converse of Corollary \ref{UlrRatliff} is not true.
\begin{Example}\rm Let $k$ be an infinite field and consider the zero-dimensional ideal $$I \, = \, (x^{6},\, x^{4}y^{2},\,
x^{3}y^{3},\, x^{2}y^{4},\, xy^{5},\, y^{6}) \, \subset \, R \, = \, k[[x, y]].$$ According to \cite[Example 2.15]{Mafi2}
we have $\mathrm{depth}\,\mathcal{G}(I) = 1$ and
hence $\mathcal{G}(I)$ is not Cohen-Macaulay. By Remark \ref{G-is-CM}, the ideal $I$ cannot be Ulrich. On the other hand, because $\mathrm{grade}\,\mathcal{G}(I)_{+} > 0$ we have $\widetilde{I^{n}} = I^{n}$ for all $n \geq 1$.
\end{Example}
Now we are able to find the regularity of the Rees algebra -- hence of the associated graded ring -- of an Ulrich ideal (in arbitrary positive dimension).
\begin{Corollary}\label{reg=red/ulrich}
Let $R$ be a Cohen-Macaulay local ring of positive dimension with infinite residue field, and let $I$ be an Ulrich ideal of $R$. Then,
$$\mathrm{reg}\,\mathcal{R}(I) \, = \, \mathrm{reg}\,\mathcal{G}(I) \, \leq \, 1,$$ with equality if and only if $I$ is not a parameter ideal. Furthermore, ${\rm r}(I)$ is independent.
\end{Corollary}
\begin{proof} Pick a minimal reduction $J=(x_1, \ldots, x_d)$ of $I$ (notice that $J$ is necessarily a parameter ideal; see Definition \ref{Ulrideal}). Since $I$ is Ulrich, \cite[Lemma 2.3]{Goto-Ozeki-Takahashi-Watanabe-Yoshida} yields that $I/J$ is a free $R/I$-module. For $i = 1, \ldots, d$, set $R_i = R/(x_1, \ldots, x_{i - 1})$ (with $R_1=R$), $I_i = IR_i$ and $J_i = JR_i$. As $JI = I^2$, we have $J_iI_i = (I_i)^{2}$. Moreover, $I_i/J_i \cong I/J$ and $R_i/I_i \cong R/I$, which gives that $I_i/J_i$ is a free $R_i/I_i$-module. Applying \cite[Lemma 2.3]{Goto-Ozeki-Takahashi-Watanabe-Yoshida} once again, we get that $I_i$ is an Ulrich ideal of $R_i$ for all $i$. Now, by Corollary \ref{UlrRatliff} (with $n=1$) we have $\widetilde{I_i} = I_i=IR_i$ for all $i$. On the other hand, it is easy to see that $\widetilde{I_i} = \widetilde{I_{R_i}}$ in the notation of Theorem \ref{Mafigene} (with $M=R$), which therefore gives $\mathrm{reg}\,\mathcal{R}(I) = \mathrm{reg}\,\mathcal{G}(I) = {\rm r}_J(I) \leq 1$, as asserted. Observe that this also shows the independence of ${\rm r}(I)$.
Now the characterization of equality can be rephrased as ${\rm r}_J(I)=0$ if and only if $I$ is a parameter ideal. Obviously,
${\rm r}_J(I)=0$ means $I=J$, which is a parameter ideal. Conversely, if $I$ is a parameter ideal then ${\rm r}(I)=0$ (see Subsection \ref{lintype}) and hence ${\rm r}_J(I)=0$ by independence. \hspace*{\fill} $\square$
\end{proof}
\begin{Example} \label{ex-Goto} \rm Let $k$ be an infinite field. Given $\ell \geq 2$, consider the zero-dimensional ideal $$I \, = \, (t^{4},\, t^{6}) \, \subset \, R \, = \, k[[t^{4}, t^{6}, t^{4\ell - 1}]],$$ which clearly is not a parameter ideal. According to \cite[Example 2.7(1)]{Goto-Ozeki-Takahashi-Watanabe-Yoshida}, this ideal is Ulrich. Applying Corollary \ref{reg=red/ulrich}, we obtain $\mathrm{reg}\,\mathcal{R}(I) = \mathrm{reg}\,\mathcal{G}(I) = 1$.
\end{Example}
\subsection{Hilbert-Samuel polynomial and postulation number}
Our next goal is to determine the Hilbert-Samuel coefficients and the postulation number of an Ulrich ideal. Recall that if $(R, \mathfrak{m})$ is a Cohen-Macaulay local ring of dimension $d\geq 1$ and $I$ is an $\mathfrak{m}$-primary ideal of $R$, then it is a well-known fact that the Hilbert-Samuel polynomial of $I$ can be expressed as
\begin{equation}\label{hilbpoly}
P_{I}(n) \, = \,
\sum_{i = 0}^{d}(-1)^{i}{\rm e}_{i}(I)\binom{n + d - i - 1}{d - i},
\end{equation}
where ${\rm e}_{0}(I), \ldots, {\rm e}_{d}(I)$ are the so-called {\it Hilbert-Samuel coefficients of $I$}. The number ${\rm e}_{0}(I)$ (which is always positive) is the multiplicity, while ${\rm e}_{1}(I)$ is dubbed the {\it Chern number} of $I$.
For the connection between postulation and reduction numbers, the following fact will be useful.
\begin{Lemma}$($\cite[Theorem 2]{Marley}$)$\label{LemmaM}
Let $(R, \mathfrak{m})$ be a $d$-dimensional Cohen-Macaulay local ring with infinite residue field, and let $I$ be an $\mathfrak{m}$-primary ideal with $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq d-1$. Then ${\rm r}(I) = \rho(I) +d$.
\end{Lemma}
\begin{Proposition}\label{HS-poly}
Let $R$ be a Cohen-Macaulay local ring of dimension $d\geq 1$ and with infinite residue field, and let $I$ be an Ulrich ideal of $R$ minimally generated by $\nu(I)$ elements. Then $$P_{I}(n) \, = \, \lambda(R/I)\left[(\nu(I) - d + 1)\binom{n+d-1}{d} - (\nu(I) - d)\binom{n+d-2}{d-1}\right].$$ Furthermore, $\rho(I) = - d$ if $I$ is a parameter ideal, and $\rho(I) = 1 - d$ otherwise. In particular, $H_I(n)=P_I(n)$ for all $n\geq 1$.
\end{Proposition}
\begin{proof} Let $J$ be a minimal reduction of $I$ (note that $J$ is a parameter ideal; see Definition \ref{Ulrideal}). Since $I^2=IJ$, \cite[Theorem 2.1]{Huneke} shows that the Chern number of $I$ is ${\rm e}_{1}(I) = {\rm e}_{0}(I) - \lambda(R/I)$ and that, in addition, ${\rm e}_{j}(I) = 0$ for all $j = 2, \ldots, d$. Moreover, by \cite[Lemma 1]{Valla},
\begin{equation}\label{vallaequality}
\lambda(I/I^2) \, = \, {\rm e}_0(I) + (d-1)\lambda(R/I).
\end{equation}
On the other hand, $I/I^2$ is a free $R/I$-module by hypothesis, and clearly the minimal number of generators of $I/I^2$ coincides with $\nu := \nu(I)$. Thus, setting $\lambda :=\lambda(R/I)$, we get $\lambda(I/I^2)=\lambda((R/I)^{\nu})=\nu \lambda$. Therefore, by (\ref{vallaequality}), $${\rm e}_0(I) \, = \, \nu \lambda - (d-1)\lambda \, = \, \lambda(\nu - d +1)$$ and hence ${\rm e}_1(I) = {\rm e}_0(I) - \lambda = \lambda(\nu - d)$. Using (\ref{hilbpoly}), the formula for $P_I(n)$ follows.
Finally, note that \cite[Theorem 1]{Valla} yields $\mathrm{grade}\,\mathcal{G}(I)_{+} = d$. It now follows from Lemma \ref{LemmaM} that $\rho(I) = {\rm r}(I) - d$. By Corollary \ref{reg=red/ulrich} and its proof, we get
${\rm r}(I) \leq 1$ with ${\rm r}(I) = 1$ if and only if $I$ is not a parameter ideal, so the assertions about $\rho(I)$ hold. In particular, $\rho(I)\leq 0$, which gives $H_I(n)=P_I(n)$ for all $n\geq 1$.
\hspace*{\fill} $\square$
\end{proof}
In the Gorenstein case, Proposition \ref{HS-poly} admits the following version.
\begin{Corollary}\label{HS-poly-Gor}
Let $R$ be a Gorenstein local ring of dimension $d\geq 1$ and with infinite residue field, and let $I$ be an Ulrich ideal of $R$ which is not a parameter ideal. Then $$P_{I}(n) \, = \, \lambda(R/I^n) \, = \, \frac{\lambda(R/I)}{d!}\cdot \frac{(2n+d-2)(n+d-2)!}{(n-1)!}, \ \ \ \ \ \forall n\geq 2-d.$$
\end{Corollary}
\begin{proof} In this case, we have $\nu(I)=d+1$ by \cite[Corollary 2.6(b)]{Goto-Ozeki-Takahashi-Watanabe-Yoshida}, and thus Proposition \ref{HS-poly} gives $P_{I}(n) = \lambda(R/I)N$, where $N=2\binom{n+d-1}{d} - \binom{n+d-2}{d-1}$. Now, by elementary simplifications, we see that $N=(2n+d-2)(n+d-2)!/\big(d!\,(n-1)!\big)$, as needed. The fact that the equality holds for all $n\geq 2-d$ follows from $\rho(I)=1-d$, as shown in the proposition. \hspace*{\fill} $\square$
\end{proof}
Next we remark that Ulrich ideals that are not parameter ideals are the same as Ulrich ideals with non-zero Chern number.
\begin{Remark}\rm Maintain the setting and hypotheses of Proposition \ref{HS-poly}. We have shown in particular that the Chern number of $I$ is given by $${\rm e}_1(I) \, = \, \lambda(R/I)(\nu(I) - d).$$ As a consequence, the Ulrich ideal $I$ is a parameter ideal if and only if ${\rm e}_1(I)=0$. For instance, if $R$ is Gorenstein and $I$ is not a parameter ideal, then ${\rm e}_1(I)=\lambda (R/I)\neq 0$.
\end{Remark}
\begin{Example} \rm Let us revisit the Ulrich ideal $I\subset R$ of Example \ref{ex-Goto}. Note that $\nu(I)=2$ and $\lambda(R/I)=2$. Using Proposition \ref{HS-poly} we get ${\rm e}_0(I)=4$, ${\rm e}_1(I)=2$, $\rho(I)=0$, and then $$P_I(n) \, = \, \lambda(R/I^n) \, = \, 4n-2, \ \ \ \ \ \forall n\geq 1.$$
\end{Example}
\begin{Example}\label{E8} \rm Consider the so-called $E_8$-singularity $R={\mathbb C}[[x, y, z]]/(x^3+y^5+z^2)$, which is a rational double point.
By \cite[Example 7.2]{Goto-et-al-2}, the ideal $I=(x, y^2, z)R$ is an Ulrich ideal. Here we have $d=2$, $\nu(I)=3$ and $\lambda(R/I)=2$, so that ${\rm e}_0(I)=4$, ${\rm e}_1(I)=2$. Also, $\rho(I)=-1$. By Corollary \ref{HS-poly-Gor}, we get
$$P_I(n) \, = \, \lambda(R/I^n) \, = \, \lambda(R/I)\,n^2 \, = \, 2n^2, \ \ \ \ \ \forall n\geq 0.$$
\end{Example}
\begin{Example} \rm Fix integers $a\geq b\geq c\geq 2$, and consider the local ring
$$R \, = \, {\mathbb C}[[x, y, z, t]]/(xy - t^{a+b},\, xz - t^{a+c} + zt^a,\, yz - yt^c+zt^b)$$
which is a rational surface singularity (more precisely, a rational triple point), hence Cohen-Macaulay. Given an integer $\ell$ with $1\leq \ell \leq c$, consider the ideal $I = (x, y, z, t^{\ell})R$. By \cite[Example 7.5]{Goto-et-al-2}, the ideal $I$ is Ulrich, with $\lambda(R/I)=\ell$. Moreover, $d=2$ and $\nu(I)=4$, so that ${\rm e}_0(I)=3\ell$ and ${\rm e}_1(I)=2\ell$. We also have $\rho(I)=-1$. Proposition \ref{HS-poly} thus yields
$$P_I(n) \, = \, \lambda(R/I^n) \, = \, 3\ell\binom{n+1}{2} - 2\ell n \, = \, \frac{\ell n}{2}(3n-1), \ \ \ \ \ \forall n\geq 0.$$
\end{Example}
\begin{Example} \rm Fix integers $m\geq 1$, $d\geq 2$ and $n_1,\ldots, n_d\geq 2$. Given an infinite field $K$, consider the $d$-dimensional local hypersurface ring $$R \, = \, K[[x_0, x_1, \ldots, x_d]]/(x_0^{2m}+x_1^{n_1}+ \ldots + x_d^{n_d}).$$ Now, fix integers $k_1, \ldots, k_d$ with $1\leq k_i\leq
\lfloor n_i/2\rfloor$ for all $i=1,\ldots, d$. By \cite[Example 2.4]{Goto-et-al-2}, the ideal $I = (x_0^m, x_1^{k_1}, \ldots, x_d^{k_d})R$ is Ulrich. In this example we have $\lambda(R/I)=mk_1\cdots k_d$, so that
${\rm e}_0(I)=2mk_1\cdots k_d$ and ${\rm e}_1(I)=mk_1\cdots k_d$. By Corollary \ref{HS-poly-Gor},
$$P_I(n) \, = \, \lambda(R/I^n) \, = \, \frac{mk_1\cdots k_d}{d!}\cdot \frac{(2n+d-2)(n+d-2)!}{(n-1)!}, \ \ \ \ \ \forall n\geq 2-d.$$
In particular, for $d=3$, we have $P_I(n)=\lambda(R/I^n)=\frac{mk_1k_2k_3}{6}(2n+1)(n+1)n$, for all $n\geq -1$. Note that $P_I(-2)=-mk_1k_2k_3\neq 0=H_I(-2)$, while $P_I(-1)=0=H_I(-1)$, $P_I(0)=0=H_I(0)$, $P_I(1)=mk_1k_2k_3=\lambda(R/I)=H_I(1)$, and so on.
\end{Example}
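For the reader's convenience, the closed forms displayed in the examples above can be compared numerically with the general formula of Proposition \ref{HS-poly}; the short Python sketch below does this for small $n$ (the parameter values $\ell = 5$ and $mk_1k_2k_3 = 7$ are arbitrary sample choices).
\begin{verbatim}
# Sketch: compare the general formula of Proposition HS-poly,
#   P_I(n) = lam*[ (nu-d+1)*C(n+d-1, d) - (nu-d)*C(n+d-2, d-1) ],
# with the closed forms displayed in the examples above, for small n.
from math import comb

def P(n, d, nu, lam):
    return lam * ((nu - d + 1) * comb(n + d - 1, d)
                  - (nu - d) * comb(n + d - 2, d - 1))

ell, lam3 = 5, 7   # arbitrary sample values for ell and m*k_1*k_2*k_3
for n in range(1, 8):
    assert P(n, d=1, nu=2, lam=2) == 4*n - 2                         # I = (t^4, t^6)
    assert P(n, d=2, nu=3, lam=2) == 2*n**2                          # E_8-singularity
    assert P(n, d=2, nu=4, lam=ell) == ell*n*(3*n - 1)//2            # rational triple point
    assert P(n, d=3, nu=4, lam=lam3) == lam3*(2*n + 1)*(n + 1)*n//6  # hypersurface, d = 3
print("all closed forms agree")
\end{verbatim}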
\subsection{Further results}
In this part we shall provide a correction of a proposition of Mafi as well as improvements of independent results by other authors.
Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, and let $I$ be an $\mathfrak{m}$-primary ideal satisfying $\widetilde{I}=I$. Let $J$ be a minimal reduction of $I$. Then, \cite[Proposition 2.6]{Mafi2} claims that ${\rm r}_{J}(I) = 2$ if and only if $$H_{I}(n) = P_{I}(n), \ \ \ \ \ n = 1,\, 2.$$ However, if we take $I$ to be an Ulrich ideal, then ${\rm r}_{J}(I) \leq 1$ and we have seen in Corollary \ref{UlrRatliff} that $\widetilde{I}=I$; moreover, our Proposition \ref{HS-poly} yields in particular $H_{I}(n) = P_{I}(n)$ for $n = 1, 2$. Any such $I$ is therefore a counter-example to Mafi's claim.
We shall establish the correct statement in Proposition \ref{genItoh}. First, we need a lemma.
\begin{Lemma}\label{2lema} Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, $I$ an $\mathfrak{m}$-primary ideal and $J$ a minimal reduction of $I$ with ${\rm r}_{J}(I) \leq 2$. Then, $\widetilde{I}=I$ if and only if $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1$.
\end{Lemma}
\begin{proof} From \cite[Fact 9]{Heinzer-Johnson-Lantz-Shah} we have $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1$ if and only if all powers of $I$ are Ratliff-Rush closed. In particular, if $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1$ then $\widetilde{I}=I$. Conversely, suppose $\widetilde{I}=I$. First, if ${\rm r}_{J}(I) \leq 1$ then ${\rm r}(I) \leq 1$ and it follows from Lemma \ref{rednum1} that $\widetilde{I^{n}}=I^n$ for all $n \geq 1$. Now if ${\rm r}_{J}(I) = 2$, with say $J=(x, y)$ (note that by letting $M=R$ in Lemma \ref{LemmaRV}, or alternatively by \cite[Lemma 1.2]{Rossi-Dinh-Trung}, we can take $\{x, y\}$ as being a superficial sequence for $I$), then using Proposition \ref{regelem}(b) we can write $$I \, \subseteq \, I^{2} : x \, \subseteq \, \widetilde{I^{2}} : x \, = \, \widetilde{I} \, = \, I,$$ which gives $I^2 : x = I$. By \cite[Corollary 2.3]{Mafi2} (which requires ${\rm r}_{J}(I) = 2$) we get $\widetilde{I^{n}}=I^n$ for all $n \geq 1$. Therefore, $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1$ in both cases. \hspace*{\fill} $\square$
\end{proof}
\begin{Proposition}\label{genItoh}
Let $(R, \mathfrak{m})$ be a two-dimensional Cohen-Macaulay local ring with infinite residue field, $I$ an $\mathfrak{m}$-primary ideal with $\widetilde{I} = I$ and $J$ a minimal reduction of $I$. Then the following assertions are equivalent:
\begin{itemize}
\item[(a)] ${\rm r}_{J}(I) \leq 2$;
\item[(b)] $H_{I}(n) = P_{I}(n)$ for all $n \geq 1$;
\item[(c)] $H_{I}(n) = P_{I}(n)$ for $n = 1, 2$.
\end{itemize}
\end{Proposition}
\begin{proof} Assume (a). By Lemma \ref{2lema} we get $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1$, which as we know is equivalent to $\widetilde{I^{n}} = I^{n}$ for all $n \geq 1$. Then (c) follows from \cite[Proposition 16]{Itoh}, which also gives the implication (c) $\Rightarrow$ (a). Thus (a) and (c) are equivalent. The implication (b) $\Rightarrow$ (c) is obvious. Finally let us show that (a) $\Rightarrow$ (b). If ${\rm r}_{J}(I) \leq 2$ then Lemma \ref{2lema} yields $\mathrm{grade}\,\mathcal{G}(I)_{+} \geq 1=d-1$, so we can apply Lemma \ref{LemmaM} and obtain ${\rm r}(I) = \rho(I) + 2$. Therefore, $\rho(I) = {\rm r}(I) - 2 \leq {\rm r}_J(I) - 2 \leq 2 - 2 = 0$, which gives (b). \hspace*{\fill} $\square$
\end{proof}
\begin{Remark}\rm Besides correcting Mafi's proposition (as explained above), our Proposition \ref{genItoh} also sharpens independent results by Hoa, Huneke, and Itoh (see \cite[Theorem 3.3]{Hoa}, \cite[Theorem 2.11]{Huneke}, and \cite[Proposition 16]{Itoh}, respectively), where additional hypotheses are required.
\end{Remark}
\subsection{Negative answer to a question of Corso, Polini, and Rossi} In this last part, recall that the integral closure of an ideal $I$ of a Noetherian ring is the set $\overline{I}$ formed by the elements $r$ satisfying an equation of the form $$r^m+a_1r^{m-1}+\ldots + a_{m-1}r+a_m=0, \quad a_i\in I^i, \quad i=1, \ldots, m.$$ Clearly, $\overline{I}$ is an ideal containing $I$. The ideal $I$ is integrally closed (or complete) if $\overline{I}=I$. Moreover, $I$ is
{\it normal} if $I^j$ is integrally closed for every $j\geq 1$.
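A standard example: in $R = k[x, y]$ one has $xy \in \overline{(x^2, y^2)} \setminus (x^2, y^2)$, since $r = xy$ satisfies $r^{2} + a_{2} = 0$ with $a_{2} = -x^{2}y^{2} \in (x^{2}, y^{2})^{2}$; thus $(x^2, y^2)$ is not integrally closed.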
Let $(R, \mathfrak{m})$ be a Gorenstein local ring (with positive dimension and infinite residue field), and let $I$ be a normal $\mathfrak{m}$-primary ideal of $R$. The following problem appeared in \cite[Question 4.4]{CPR}: Does ${\rm e}_3(I)=0$ imply ${\rm r}(I)=2$\,? It has also been pointed out that an affirmative answer to this question is given in \cite{Itoh} in the case $I=\mathfrak{m}$ (recall that $\mathfrak{m}$ may not be normal in general).
Now note that if ${\rm dim}\,R=2$ then the condition ${\rm e}_3(I)=0$ holds trivially. So, the 2-dimensional case of the above question is: If $I$ is a normal $\mathfrak{m}$-primary ideal of a 2-dimensional Gorenstein local ring, is it true that ${\rm r}(I)=2$\,? Below we
answer this question negatively. In our example, the ideal $I$ satisfies in addition the property ${\rm e}_2(I)=0$ (indeed, $I$ is Ulrich and this vanishing was observed in the proof of Proposition \ref{HS-poly}). Thus we may also consider a version of the Corso-Polini-Rossi problem by adding the hypothesis ${\rm e}_2(I)\neq 0$, which we hope to pursue in a future work.
\begin{Proposition} There exists a normal $\mathfrak{m}$-primary ideal $I$, in a $2$-dimensional local hypersurface ring, such that ${\rm r}(I)=1$.
\end{Proposition}
\begin{proof} Consider the rational double point $R={\mathbb C}[[x, y, z]]/(x^3+y^5+z^2)$ and the ideal $I=(x, y^2, z)R$. As mentioned in Example \ref{E8}, the ideal $I$ is Ulrich. As such, being in addition a non-parameter ideal, it satisfies ${\rm r}(I)=1$ (see the proof of Corollary \ref{reg=red/ulrich}). It remains to show that $I$ is normal. First, notice that $I$ is integrally closed, because
$x^3+z^2\notin I^5$ and hence $y\notin \overline{I}$. Now, since $R$ is a rational surface singularity, the normality of $I$ follows by \cite[Theorem 7.1]{Lip}.
\hspace*{\fill} $\square$
\end{proof}
\noindent{\bf Acknowledgements.} The first author was partially supported by the CNPq-Brazil grants 301029/2019-9 and 406377/2021-9. The second author was supported by a CAPES Doctoral Scholarship.
\end{document} |
\begin{document}
\begin{abstract}
Combinatorial classes ${\mathcal T}$ that are recursively defined using
combinations of
the standard \emph{multiset}, \emph{sequence}, \emph{directed cycle} and \emph{cycle}
constructions, and their restrictions,
have generating series ${\bf T}(z)$ with a positive radius of
convergence; for most of these a simple test can be used
to quickly show that the form of the asymptotics is the same
as that for the class of rooted trees: $C \rho^{-n} n^{-3/2}$\,,
where $\rho$ is the radius of convergence of ${\bf T}$.
\end{abstract}
\title{Counting Rooted Trees\,:\\ The Universal Law
$t(n)\,\sim\,C \rho^{-n} n^{-3/2}$}
\maketitle
\section{Introduction}
The class of rooted trees, perhaps with additional structure as in the planar case,
is unique among the well-studied classes of structures: it is
remarkably easy to find endless possibilities for defining interesting
subclasses as {\em the fixpoint of a class construction}, where the constructions
used are combinations of a few standard constructions like {\em sequence},
{\em multiset} and {\em add-a-root}. This fortunate situation is based on a simple
reconstruction property: removing the root from a tree gives a collection of
trees (called a forest); and it is trivial to reconstruct the
original tree from the forest (by adding a root).
Since we will be frequently referring to {\em rooted} trees, and rarely to
{\em free} (i.e., {\em unrooted}) trees, from now on we will assume, unless
the context says otherwise, that the word `tree' means `rooted tree'.
\subsection{Cayley's fundamental equation for trees}
Cayley \cite{Cy1} initiated the tree investigations\footnote{This was in
the context of an algorithm for expanding partial differential operators.
Trees play an important role in the modern theory of differential equations
and integration---see for example Butcher \cite{bu1972}.}
in 1857
when he presented the well known infinite product
representation\footnote{This representation
uses $t(n)$ to count the number of trees on $n$ vertices. Cayley actually
used $t(n)$ to count the number of trees with $n$ edges, so his formula was
\[
{\bf T}(z)\ =\ z \prod_{j\ge 1}\big(1-z^j\big)^{-t(j-1)}.
\]
}
\[
{\bf T}(z)\ =\ z \prod_{j\ge 1}\big(1-z^j\big)^{-t(j)}\,.
\]
Cayley used this to calculate $t(n)$ for $1\le n\le 13$\,.
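These values are easy to reproduce: the following short sketch (in Python; it uses a standard recurrence obtained from Cayley's product by logarithmic differentiation, rather than Cayley's original hand procedure) lists $t(n)$ for $1\le n\le 13$.
\begin{verbatim}
# Sketch: t(n) = number of rooted trees on n vertices, for n <= 13, computed
# from the standard recurrence (logarithmic differentiation of Cayley's
# product):  n*t(n+1) = sum_{k=1}^{n} ( sum_{d | k} d*t(d) ) * t(n-k+1).
def rooted_trees(N):
    t = [0] * (N + 1)
    t[1] = 1
    for n in range(1, N):
        total = sum(sum(d * t[d] for d in range(1, k + 1) if k % d == 0)
                    * t[n - k + 1] for k in range(1, n + 1))
        t[n + 1] = total // n          # the sum is always divisible by n
    return t[1:]

print(rooted_trees(13))
# -> [1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486]
\end{verbatim}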
More than a decade later (\cite{Cy3}, \cite{Cy4}, \cite{Cy6})
he used this method to give
\emph{recursion procedures} for finding the coefficients of generating
functions for the chemical diagrams of certain families of compounds.
\subsection{P\'olya's analysis of the generating series for trees}
Following on Cayley's work and further contributions by chemists,
P\'olya published his classic 1937 paper\footnote{Republished in book form
in \cite{P:R}.}
that presents:
(1) his {\em group-theoretic} approach to enumeration, and (2) the primary analytic
technique to establish the \emph{asymptotics} of recursively defined classes
of trees. Let us review the latter as it has provided the paradigm for all
subsequent investigations into generating series defined by
recursion equations.
Let ${\bf T}(z)$ be the generating series for the class of all unlabelled
trees. P\'olya first converts Cayley's equation to the form
\[
{\bf T}(z)\ =\ z\cdot\exp\Big(\sum_{m\ge 1}{\bf T}(z^m)/m\Big).
\]
From this he quickly deduces that the radius of convergence $\rho$
of ${\bf T}(z)$ lies in $(0,1)$ and that ${\bf T}(\rho)<\infty$.
He defines the bivariate function
\[
{\bf E}(z,w) := z e^w\cdot \exp\Big(\sum_{m\ge 2}{\bf T}(z^m)/m\Big),
\]
giving the recursion equation ${\bf T} = {\bf E}\big(z,{\bf T}\big)$. Since
${\bf E}(z,w)$ is holomorphic in a neighborhood of ${\bf T}$
he can invoke the Implicit Function Theorem to show that a necessary
condition for $z$ to be a dominant singularity of ${\bf T}$, that is, a
singularity on the circle of convergence, is
\[
{\bf E}_w\big(z,{\bf T}(z)\big) = 1.
\]
From this P\'olya deduces that ${\bf T}$ has a unique dominant singularity, namely
$z=\rho$. Next, since
${\bf E}_z\big(\rho,{\bf T}(\rho)\big),\, {\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)\neq 0$,
the Weierstra{\ss} Preparation Theorem shows that $\rho$ is a square-root
type singularity. Applying well known results derived from the
Cauchy Integral Theorem
\begin{equation}\label{cauchy}
t(n)\ =\ \frac{1}{2\pi i}\int_{\mathcal C} \frac{{\bf T}(z)}{z^{n+1}}dz
\end{equation}
one has the famous asymptotics
\[
\begin{array}{l @{\quad} l}
\pmb{(\star)}&\qquad\qquad\qquad\qquad
t(n)\ \sim\ C \rho^{-n} n^{-3/2}\hspace*{2in}
\end{array}
\]
which occur so frequently in the study of recursively defined classes.
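As an aside (not part of P\'olya's argument), one can already glimpse this law
numerically from the coefficients computed in the sketch of the previous
subsection; the value usually quoted for rooted trees is
$\rho\approx 0.3383$, that is, $\rho^{-1}\approx 2.9558$.
\begin{verbatim}
vals = rooted_trees(60)            # vals[k] = t(k+1), from the earlier sketch
r = vals[59] / vals[58]            # t(60)/t(59) ~ (1/rho)*(59/60)^(3/2)
print(r * (60 / 59) ** 1.5)        # roughly 2.956
\end{verbatim}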
\subsection{Subsequent developments}
Bender (\cite{Be}, 1974) proposed a general version of the P\'olya result,
but Canfield (\cite{Canf}, 1983) found a flaw in the proof, and proposed a
more restricted version. Harary, Robinson and Schwenk (\cite{HRS}, 1975)
gave a 20 step guideline on how to carry out a P\'olya style analysis of a
recursion equation.
Meir and Moon (\cite{Me:Mo2}, 1989) made some further proposals on how to
modify Bender's approach; in particular it was found that the hypothesis that
{\em the coefficients of ${\bf E}$ be nonnegative} was highly desirable, and covered
a great number of important cases.
This nonnegativity condition has continued to find
favor, being used in Odlyzko's survey paper \cite{Odly} and in the forthcoming book
\cite{Fl:Se} of
Flajolet and Sedgewick. Odlyzko's version seems to be a current standard---here it is
(with minor corrections due to Flajolet and Sedgewick \cite{Fl:Se}).
\begin{theorem}[Odlyzko \cite{Odly}, Theorem 10.6]
\label{Odlyzko Thm}
Suppose
\begin{eqnarray}
{\bf E}(z,w)&=&\sum_{i,j\ge 0}e_{ij} z^i w^j \quad
\text{with } e_{00} = 0,\, e_{01} < 1,\, (\forall i,j)\, e_{ij}\ge 0\\
{\bf T}(z)&=&\sum_{i\ge 1}t_i z^i\quad\text{with }(\forall i)\, t_i\ge 0
\end{eqnarray}
are such that
\begin{thlist}
\item ${\bf T}(z)$ is analytic at $z=0$
\item ${\bf T}(z) = {\bf E}\big(z,{\bf T}(z)\big)$
\item ${\bf E}(z,w)$ is nonlinear in $w$
\item there are positive integers $i,j,k$ with $i<j<k$ such that
\begin{eqnarray*}
t_i,t_j, t_k&>&0\\
\gcd(j-i,k-i)&=&1.
\end{eqnarray*}
\end{thlist}
Suppose furthermore that there exist $\delta, r,s > 0$ such that
\begin{thlist}
\item [e] ${\bf E}(z,w)$ is analytic in $|z|< r+\delta$ and $|w|<s+\delta$
\item [f] ${\bf E}(r,s) = s$
\item [g] ${\bf E}_w(r,s) = 1$
\item [h] ${\bf E}_z(r,s) \neq 0$ and ${\bf E}_{ww}(r,s)\neq 0$.
\end{thlist}
Then $r$ is the radius of convergence of ${\bf T}$, ${\bf T}(r) = s$,
and as $n\rightarrow \infty$
\[
t_n\ \sim\ \sqrt{\frac{r {\bf E}_z(r,s)}{2\pi
{\bf E}_{ww}(r,s)}} \cdot r^{-n} n^{-3/2}.
\]
\end{theorem}
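As a quick illustration (our example, not part of Odlyzko's text), consider
planar rooted trees, where the subtrees of each node are linearly ordered,
specified recursively by $w = z + z\,{\sf Seq}(w)$, so that
\[
{\bf E}(z,w)\ =\ \frac{z}{1-w}\ =\ z + zw + zw^2 + \cdots .
\]
Taking $(i,j,k)=(1,2,3)$ in (d) we have $t_1,t_2,t_3>0$ and $\gcd(1,2)=1$, and
$r=1/4$, $s={\bf T}(1/4)=1/2$ satisfy (e)--(h): ${\bf E}(1/4,1/2)=1/2$,
${\bf E}_w(1/4,1/2)=1$, ${\bf E}_z(1/4,1/2)=2$ and ${\bf E}_{ww}(1/4,1/2)=4$.
The theorem then gives
\[
t_n\ \sim\ \sqrt{\frac{(1/4)\cdot 2}{2\pi\cdot 4}}\; 4^{\,n} n^{-3/2}
\ =\ \frac{1}{4\sqrt{\pi}}\, 4^{\,n} n^{-3/2},
\]
in agreement with the classical asymptotics of the Catalan numbers
$t_n = \frac{1}{n}\binom{2n-2}{n-1}$.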
\begin{remark}\label{Odly Rem}
As with P\'olya's original result, the asymptotics in these more general theorems
follow from information gathered on the location and nature of the dominant
singularities of ${\bf T}$. It has become popular to require that the solution ${\bf T}$
have a {\bf unique} dominant singularity---to guarantee this happens the above
theorem has the hypothesis \mbox{\rm(d)}. One can achieve this with a weaker hypothesis,
namely one only needs to require
\[
\mbox{\rm(d$'$)}
\quad \gcd\big(\{j-i : i< j\text{ and } t_i, t_j>0\}\big) = 1.
\]
Actually, given the other hypotheses of Theorem \ref{Odlyzko Thm}, the condition
\mbox{\rm(d$'$)}
is necessary and sufficient for ${\bf T}$ to have a unique dominant singularity.
\end{remark}
The generalization of P\'olya's result that we find most convenient is given
in Theorem \ref{basic thm}. We will also adopt the condition that ${\bf E}$
have nonnegative coefficients, but point out that {\em under
this hypothesis the
location of the dominant singularities is quite easy to determine}.
Consequently the unique singularity condition is not needed to determine the
asymptotics.
For further remarks on previous variations and
generalizations of the work of P\'olya see $\S\,$\ref{history}.
The condition that the ${\bf E}$ have nonnegative coefficients forces us to omit
the ${\sf Set}$ operator from our list of standard combinatorial operators. There
are a number of complications in trying to extend the results of this paper
to recursion equations $w = {\bf G}(z,w)$ where the coefficients of ${\bf G}$
have mixed signs, including the problem of locating the
dominant singularities of the solution. The situation with mixed signs is
discussed in $\S$\,\ref{mixed}.
\subsection{Goal of this paper}
Aside from the proof details that show we do not need to require that
the solution ${\bf T}$ have a unique dominant singularity, this paper is {\em not}
about finding a
better way of generalizing P\'olya's theorem on trees. Rather the paper is concerned
with the ubiquity of the form $\pmb{(\star)}$ of asymptotics that
P\'olya found for the recursively defined class of trees.\footnote{The motivation
for our work came when a colleague, upon seeing the asymptotics of
P\'olya for the first time, said ``Surely the form $\pmb{(\star)}$ hardly ever
occurs! (when finding the asymptotics for the solution of an equation
$w = \Theta(w)$ that recursively defines a class of trees)". A quick examination
of the literature, a few examples, and we were convinced that quite the
opposite held, that {\em almost any} reasonable class of trees defined by a
recursive equation
that is nonlinear in $w$ would lead to an asymptotic law of P\'olya's form
$\pmb{(\star)}$.}
The goal of this paper is to exhibit a very large class of natural
and easily recognizable operators $\Theta$ for
which we can guarantee that a solution $w = {\bf T}(z)$ to the recursion equation
$w = \Theta(w)$ has coefficients that satisfy $\pmb{(\star)}$. By `easily
recognizable' we mean that you only have to look at the expression describing
$\Theta$---no further analysis is needed. This contrasts with the existing
literature where one is expected to carry out some calculations to determine
if the solution ${\bf T}$ will have certain properties. For example, in Odlyzko's
version, Theorem \ref{Odlyzko Thm}, there is a great deal of work to be done,
starting with checking that the solution ${\bf T}$ is analytic at $z=0$.
In the formal specification theory for \emph{combinatorial classes}
(see Flajolet and Sedgewick \cite{Fl:Se}) one starts with the binary operations
of \emph{disjoint union} and \emph{product}
and adds unary
\emph{constructions} that transform a collection of objects (like trees)
into a collection of objects (like forests). Such constructions
are \emph{admissible} if the generating series of the output class of the
construction is completely determined by the generating series of the
input class.
We want to show that a recursive specification using almost any combination of these
constructions, and others that we will introduce, yields classes whose
generating series have coefficients that obey the asymptotics
$\pmb{(\star)}$
of P\'olya. It is indeed a {\em universal law}.
More concretely, our aim is to provide truly {\em practical} criteria
(Theorem \ref{Main Thm})
to verify that many, if not most, of the common nonlinear recursion equations lead to
$\pmb{(\star)}$.
Here is a contrived example to which this theorem applies:
\begin{equation}\label{ex1}
w\ =\ z \ +\ z {\sf MSet}\Big({\sf Seq}\big(\sum_{n\in{\sf Odd}} 6^n w^n\big)\Big) \sum_{n\in{\sf Even}}(2^n+1)\big({\sf DCycle}_{{\sf Primes}}(w)\big)^n\,.
\end{equation}
An easy application of Theorem \ref{Main Thm}
(see $\S\,$\ref{applications})
tells us this particular recursion equation
has a recursively defined solution ${\bf T}(z)$ with a positive radius
of convergence, and the asymptotics for the coefficients $t_n$ have the form
$\pmb{(\star)}$.
The results of this paper apply to any combinatorial situation
described by a recursion equation of the type studied here. We put our
focus on classes of trees because they are by far the most popular setting
for such equations.
\subsection{First definitions}
We start with our basic notation for number systems, power series and
open discs.
\begin{definition}
\begin{thlist}\itemsep=1ex
\item
${\mathbb R}$ is the set of \underline{reals};
${\mathbb R}^{\ge 0}$ is the set of \underline{nonnegative reals}.
\item
${\mathbb P}$ is the set of \underline{positive integers}.
${\mathbb N}$ is the set of \underline{nonnegative integers}.
\item
${\mathbb R}^{\ge 0}[[z]]$ is the set of
power series in $z$ with nonnegative coefficients.
\item
$\rho_{\bf A}$ is the \underline{radius {(of convergence)}}
of the power series ${\bf A}$\,.
\item
For ${\bf A}\in{\mathbb R}^{\ge 0}[[z]]$ we write
${\bf A}\,=\,\sum_n a(n) z^n$ or ${\bf A}\,=\,\sum_n a_n z^n$\,.
\item
For $r>0$ and $z_0\in{\mathbb C}$ the open disc of radius $r$ about $z_0$ is
${\mathbb D}_r(z_0)\,:=\,\{z : |z-z_0|< r\}$
\end{thlist}
\end{definition}
\subsection{Selecting the domain }
We want to select a suitable collection of power series to work with when
determining solutions $w={\bf T}$ of recursion equations $w = \Phi(w)$.
The intended application is that ${\bf T}$ be a generating series for some
collection of combinatorial objects.
Since generating series have nonnegative coefficients we naturally focus
on series in ${\mathbb R}^{\ge 0}[[z]]$.
There is one restriction that seems most desirable, namely to consider as generating
functions
only series whose constant term is 0. A generating series ${\bf T}$ has
the coefficient $t(n)$ of $z^n$ counting (in some fashion) objects of size $n$.
It has become popular when working with combinatorial systems
to admit a constant coefficient when it makes a result look simpler, for example
with permutations we write ${\bf A}(z) = \exp\big({\bf Q}(z)\big)$, where ${\bf A}(z)$ is the exponential
generating series for permutations, and ${\bf Q}(z)$ the exponential generating series
for cycles.
${\bf Q}(z) = \log\big(1/(1-z)\big)$ will have a constant term 0, but ${\bf A}(z)=1/(1-z)$ will have
the constant term 1.
Some authors like to introduce an `ideal' object of size 0 to go along with this
constant term.
There is a problem with this convention if one wants to look at compositions of
operators. For example, suppose you wanted to look at sequences of permutations.
The natural way to write the generating series would be to apply the sequence
operator ${\sf Seq}$ to $1/(1-z)$ above, giving $\sum 1/(1-z)^n$.
Unfortunately this ``series'' has constant coefficient $=\infty$, so we
do not have an analytic function.
The culprit is the constant 1 in ${\bf A}(z)$. If we drop the 1, so that we are
counting only `genuine' permutations, the generating series for permutations
is $z/(1-z)$; applying ${\sf Seq}$ to this gives $z/(1-2z)$, an analytic function
with radius of convergence 1/2.
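For the record, the computation behind this claim is immediate:
\[
{\sf Seq}\Big(\frac{z}{1-z}\Big)\ =\ \sum_{n\ge 1}\Big(\frac{z}{1-z}\Big)^{n}
\ =\ \frac{z/(1-z)}{1 - z/(1-z)}\ =\ \frac{z}{1-2z}\,.
\]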
Consequently in this paper we return to the older convention of having the
constant term be 0, so that we are only counting `genuine' objects.
\begin{definition}
For ${\bf A}\in{\mathbb R}[[z]]$ we write ${\bf A}\unrhd 0$ to say that all coefficients
$a_i$ of ${\bf A}$ are nonnegative. Likewise for ${\bf B}\in{\mathbb R}[[z,w]]$ we write
${\bf B}\unrhd 0$ to say all coefficients $b_{ij}$ are nonnegative. Let
\begin{thlist}
\item
${\mathbb{DOM}}[z]\,:=\,\{{\bf A}\in {\mathbb R}^{\ge 0}[[z]] : {\bf A}(0) = 0\}$,
the set of power series ${\bf A} \unrhd 0$ with constant term $0$; and let
\item
${\mathbb{DOM}}[z,w]\,:=\,\{{\bf E}\in {\mathbb R}^{\ge 0}[[z,w]] : {\bf E}(0,0) = 0\},$
the set of power series ${\bf E} \unrhd 0$ with constant term $0$\,.
Members of this class are called \underline{elementary} power
series.\footnote{We use the name {\em elementary} since a recursion
equation
of the form $w = {\bf E}(z,w)$ is in the proper form to employ the tools
of analysis that are presented in the next section.}
\end{thlist}
\end{definition}
When working with a member ${\bf E}\in{\mathbb{DOM}}[z,w]$
it will be convenient to use various series formats for writing ${\bf E}$,
namely
\begin{eqnarray*}
{\bf E}(z,w) &=& \sum_{ij} e_{ij} z^iw^j\\
{\bf E}(z,w) &=& \sum_j {\bf E}_j(z) w^j\\
{\bf E}(z,w) &=& \sum_j \Big(\sum_i e_{ij} z^i \Big) w^j.
\end{eqnarray*}
This is permissible from a function-theoretic viewpoint
since all coefficients $e_{ij}$ are
nonnegative; for any given $z,w \ge 0$ the three formats converge to the
same value (possibly infinity).
An immediate advantage of working with series having nonnegative coefficients
is that the series is defined (possibly infinite) at its radius of convergence.
\begin{lemma}
For ${\bf T}\,\in\,{\mathbb{DOM}}[z]$ one has ${\bf T}(\rho_{\bf T})\in[0,\infty]$.
Suppose ${\bf T}(\rho_{\bf T})\,\in\,(0,\infty)$. Then
$\rho_{\bf T}\,<\,\infty$\,; in particular ${\bf T}$ is not a polynomial.
If furthermore ${\bf T}$ has integer coefficients then $\rho_{\bf T}\,<\,1$.
\end{lemma}
\section{The theoretical foundations} \label{theor sect}
We want to show that the series ${\bf T}$ that are recursively defined as
solutions to functional equations $w = {\bf G}(z,w)$ are such that
with remarkable frequency the
asymptotics of the coefficients $t_n$ are given by $\pmb{(\star)}$.
Our main results deal with the
case that ${\bf G}(z,w)$ is holomorphic in a neighborhood of $(0,0)$,
and the expansion $\sum g_{ij} z^iw^j$ is such that {\em all
coefficients $g_{ij}$ are nonnegative}. This covers most of the equations
arising from combinations of the popular combinatorial operators like
Sequence, MultiSet and Cycle.
The referee noted that we had omitted one popular construction, namely
${\sf Set}$, and the various restrictions ${\sf Set}_{\mathbb M}$ of ${\sf Set}$, and asked that
we explain this omission. Although the equation $w = z+z {\sf Set}(w)$ has
been successfully analyzed in \cite{HRS}, there are difficulties when
one wishes to form composite operators involving ${\sf Set}$. These
difficulties arise from the fact that the resulting equation $w = {\bf G}(z,w)$
has ${\bf G}$ with coefficients having mixed signs.
A general discussion of the mixed signs case is given in $\S$\,\ref{sub mixed}
and a particular discussion of the ${\sf Set}$ operator in $\S$\,\ref{sub set}.
Since the issue of
mixed signs is so important we introduce the following abbreviations.
\begin{definition}
A bivariate series ${\bf E}(z,w)$ and the associated functional equation
$w = {\bf E}(z,w)$ are \underline{nonnegative} if the coefficients of
${\bf E}$ are nonnegative. A bivariate series ${\bf G}(z,w)$ and the associated
functional equation $w = {\bf G}(z,w)$ have
\underline{mixed signs} if some coefficients $g_{ij}$ are positive and
some are negative.
\end{definition}
To be able to locate the difficulties when working with mixed signs, and
to set the stage for further research on this topic, we have put together
an essentially complete outline of the steps we use to prove that
a solution ${\bf T}$ to a functional equation
$w={\bf E}(z,w)$ satisfies the P\'olya asymptotics $\pmb{(\star)}$,
starting with the bedrock results of analysis such as the
Weierstra{\ss} Preparation Theorem and the Cauchy Integral Formula.
Although this background material has often been cited in work on
recursive equations, it has never been written down in
a single unified comprehensive exposition. Our treatment of this background
material goes beyond the existing literature to include a precise analysis of
the nonnegative recursion equations whose solutions have multiple dominant
singularities.
\subsection{A method to prove $\pmb{(\star)}$}
Given ${\bf E}\in{\mathbb{DOM}}[z,w]$ and ${\bf T}\in{\mathbb{DOM}}[z]$ such that ${\bf T} = {\bf E}(z,{\bf T})$,
we use the following steps to show that the coefficients $t_n$
satisfy $\pmb{(\star)}$.
\begin{thlist}
\item
{\sc Show:}
${\bf T}$ has radius of convergence $\rho := \rho_{\bf T} > 0$.
\item
{\sc Show:}
${\bf T}(\rho)<\infty$.
\item
{\sc Show:}
$\rho <\infty$.
\item
{\sc Let:} ${\bf T}(z) = z^d {\bf V}(z^q)$ where ${\bf V}(0) \neq 0$ and
$\gcd\big\{n : v(n) \neq 0\big\} = 1$.
\item
{\sc Let:} $\omega = \exp(2\pi i/q)$.
\item
{\sc Observe:}
${\bf T}(\omega z) = \omega^d{\bf T}(z)$, for $|z|<\rho$.
\item
{\sc Show:}
The set of dominant singularities of ${\bf T}$ is
$\{z : z^q = \rho^q\}$.
\item
{\sc Show:}
${\bf T}$ satisfies a quadratic equation, say
\[
{\bf Q}_0(z) + {\bf Q}_1(z) {\bf T}(z) + {\bf T}(z)^2 \ =\ 0
\]
for $|z|<\rho$ and sufficiently near $\rho$,
where ${\bf Q}_0(z),{\bf Q}_1(z)$ are analytic at $\rho$.
\item
{\sc Let:} ${\bf D}(z) = {\bf Q}_1(z)^2 - 4{\bf Q}_0(z)$, the {\em discriminant} of
the equation in (h).
\item
{\sc Show:} ${\bf D}'(\rho)\neq 0$ in order to conclude that $\rho$ is a branch
point of order 2, that is, for $|z|<\rho$ and
sufficiently near $\rho$ one has
${\bf T}(z)\ =\ {\bf A}(\rho - z) + {\bf B}(\rho - z)\sqrt{\rho -z}$,
where ${\bf A}$ and ${\bf B}$ are analytic at $0$, and
${\bf B}(0)< 0$.
\item
{\sc Design:} A contour that is invariant under multiplication by $\omega$
to be used in the Cauchy Integral Formula to calculate $t(n)$.
\item
{\sc Show:} The full contour integral for $t(n)$ reduces to evaluating
the portion lying between the angles $-\pi/q$ and $\pi/q$.
\item
{\sc Optional:} One has a Darboux expansion for the asymptotics of $t(n)$.
\end{thlist}
Given that ${\bf E}$ has nonnegative coefficients, items (a)--(g) can be
easily established by imposing modest conditions on ${\bf E}$ (see
Theorem \ref{basic thm}).
For (h) the method is to show that one has a functional
equation
${\bf F}\big(z,{\bf T}(z)\big)=0$
holding for $|z|\le\rho$ and sufficiently near $\rho$, that
${\bf F}(z,w)$ is holomorphic in a neighborhood of
$\big(\rho,{\bf T}(\rho)\big)$, and that
${\bf F}\big(\rho,{\bf T}(\rho)\big)\,=\,
{\bf F}_w\big(\rho,{\bf T}(\rho)\big)\,=\,0$, but
${\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)\,\neq\,0$.
These hypotheses allow one to apply the Weierstra{\ss}
Preparation Theorem to obtain a quadratic equation for ${\bf T}(z)$.
\begin{theorem}[Weierstra{\ss} Preparation Theorem]\label {WPThm}
Suppose ${\bf F}(z,w)$ is
a function of two complex variables and $(z_0,w_0)$
is a point in ${\mathbb C}^{\,2}$ such that:
\begin{thlist}
\item
${\bf F}(z,w)$ is holomorphic in a neighborhood of $(z_0,w_0)$
\item
$\displaystyle {\bf F}(z_0,w_0)
\,=\, \frac{\partial {\bf F}}{\partial w}(z_0,w_0)
\,=\,\cdots\,=\, \frac{\partial^{k-1} {\bf F}}{\partial w^{k-1}}(z_0,w_0)\,=\,0$
\item
$\displaystyle
\frac{\partial^k {\bf F}}{\partial w^k}(z_0,w_0)\,\neq\,0$.
\end{thlist}
Then in a neighborhood of $(z_0,w_0)$ one has ${\bf F}(z,w)\,=\,{\bf P}(z,w){\bf R}(z,w)$,
a product of two holomorphic functions ${\bf P}(z,w)$ and ${\bf R}(z,w)$ where
\begin{itemize}
\item [(i)] ${\bf R}(z,w)\neq 0$ in this neighborhood,
\item [(ii)] ${\bf P}(z,w)$ is a `monic polynomial of degree $k$' in $w$, that is
${\bf P}(z,w) \,=\, {\bf Q}_0(z) + {\bf Q}_1(z) w +\cdots + {\bf Q}_{k-1}(z) w^{k-1} + w^k$,
and the ${\bf Q}_i(z)$ are analytic in a neighborhood of $z_0$.
\end{itemize}
\end{theorem}
\begin{proof}
An excellent reference is Markushevich \cite{Marku}, Section 16, p.~105,
where one finds a leisurely and complete proof of the
Weierstra{\ss} Preparation Theorem.
\end{proof}
There are two specializations of this result that we will be particularly
interested in: $k=1$ gives the Implicit Function Theorem, the best known
corollary of the Weierstra{\ss} Preparation Theorem;
and $k=2$ gives a quadratic equation for ${\bf T}(z)$.
\subsection{$k=1$: The implicit function theorem}
\begin{corollary} [k=1: Implicit Function Theorem] \label{IFThm}
Suppose ${\bf F}(z,w)$ is
a function of two complex variables and $(z_0,w_0)$
is a point in ${\mathbb C}^{\,2}$ such that:
\begin{thlist}
\item
${\bf F}(z,w)$ is holomorphic in a neighborhood of $(z_0,w_0)$
\item
$\displaystyle {\bf F}(z_0,w_0)\,=\,0$
\item
$\displaystyle
\frac{\partial {\bf F}}{\partial w}(z_0,w_0)\,\neq\,0$.
\end{thlist}
Then there is an $\varepsilon>0$
and a function ${\bf A}(z)$ such that for
$z\,\in\,{\mathbb D}_\varepsilon(z_0)$,
\begin{itemize}
\item[(i)]
${\bf A}(z)$ is analytic
in ${\mathbb D}_\varepsilon(z_0)$ ,
\item[(ii)]
${\bf F}\big(z,{\bf A}(z)\big)\,=\,0$
for $z\,\in\,{\mathbb D}_\varepsilon(z_0)$ ,
\item[(iii)]
for all $(z,w)\in {\mathbb D}_\varepsilon(z_0)\times {\mathbb D}_\varepsilon(w_0)$,
if ${\bf F}(z,w)\,=\,0$ then $w\,=\,{\bf A}(z)$.
\end{itemize}
\end{corollary}
\begin{proof}
From Theorem \ref{WPThm} there is an $\varepsilon>0$
and a factorization of
${\bf F}(z,w)\,=\,{\bf L}(z,w){\bf R}(z,w)$,
valid in
${\mathbb D}_\varepsilon(z_0)\times {\mathbb D}_\varepsilon(w_0)$,
such that ${\bf R}(z,w)\neq 0$ for
$(z,w)\,\in\,{\mathbb D}_\varepsilon(z_0)\times {\mathbb D}_\varepsilon(w_0)$, and
${\bf L}(z,w)\,=\,{\bf L}_0(z)+ w$, with ${\bf L}_0(z)$ analytic in
${\mathbb D}_\varepsilon(z_0)$.
Thus ${\bf A}(z)\,=\,-{\bf L}_0(z)$ is such that
${\bf L}\big(z,{\bf A}(z)\big)\,=\,0$ on ${\mathbb D}_\varepsilon(z_0)$; so
${\bf F}\big(z,{\bf A}(z)\big)\,=\,0$ on ${\mathbb D}_\varepsilon(z_0)$.
Furthermore, if ${\bf F}(z,w)\,=\, 0$ with
$(z,w)\in {\mathbb D}_\varepsilon(z_0)\times {\mathbb D}_\varepsilon(w_0)$,
then ${\bf L}(z,w)\,=\,0$, so $w\,=\,{\bf A}(z)$.
\end{proof}
\subsection{$k=2$: The quadratic functional equation}
The fact that $\rho$ is an order 2 branch point comes
out of the $k=2$ case in the Weierstra{\ss} Preparation Theorem.
\begin{corollary}[$k=2$]\label{WPThm2}
Suppose ${\bf F}(z,w)$ is
a function of two complex variables and $(z_0,w_0)$
is a point in ${\mathbb C}^{\,2}$ such that:
\begin{thlist}
\item
${\bf F}(z,w)$ is holomorphic in a neighborhood of $(z_0,w_0)$
\item
$\displaystyle {\bf F}(z_0,w_0) \,=\, \frac{\partial {\bf F}}{\partial w}(z_0,w_0) \,=\,0$
\item
$\displaystyle
\frac{\partial^2 {\bf F}}{\partial w^2}(z_0,w_0)\,\neq\,0$.
\end{thlist}
Then in a neighborhood of $(z_0,w_0)$ one has ${\bf F}(z,w)\,=\,{\bf Q}(z,w){\bf R}(z,w)$,
a product of two holomorphic functions ${\bf Q}(z,w)$ and ${\bf R}(z,w)$ where
\begin{itemize}
\item [(i)] ${\bf R}(z,w)\neq 0$ in this neighborhood,
\item [(ii)] ${\bf Q}(z,w)$ is a `monic quadratic polynomial' in $w$, that is
${\bf Q}(z,w) \,=\, {\bf Q}_0(z) + {\bf Q}_1(z) w + w^2$,
where ${\bf Q}_0$ and ${\bf Q}_1$ are analytic in a neighborhood of $z_0$.
\end{itemize}
\end{corollary}
\subsection{Analyzing the quadratic factor ${\bf Q}(z,w)$}
\label{implicit}
Simple calculations are known
(see \cite{Pl:Ro})
for finding all the partial derivatives of ${\bf Q}$ and ${\bf R}$ at
$\big(z_0,w_0\big)$
in terms of the partial derivatives of ${\bf F}$ at the same point.
From this we can obtain important
information about the coefficients of the discriminant ${\bf D}(z)$
of ${\bf Q}(z,w)$.
\begin{lemma}\label {WPThm2a}
Given the hypotheses (a)-(c) of Corollary \ref{WPThm2}
let
${\bf Q}(z,w)$ and ${\bf R}(z,w)$ be as described in (i)-(ii) of that corollary.
Then
\begin{itemize}
\item [(i)] ${\bf Q}(z_0,w_0)\, =\, {\bf Q}_w(z_0,w_0)\, =\, 0$
\item [(ii)] ${\bf R}(z_0,w_0)\, =\, {\bf F}_{ww}(z_0,w_0)/2$.
\end{itemize}
Let ${\bf D}(z) \ =\ {\bf Q}_1(z)^2 - 4{\bf Q}_0(z)$, the discriminant of ${\bf Q}(z,w)$.\\
Then
\begin{itemize}
\item [(iii)] ${\bf D}(z_0) = 0$
\item [(iv)] ${\bf D}'(z_0)\, =\, -8 {\bf F}_z(z_0,w_0)\big/{\bf F}_{ww}(z_0,w_0)$.
\end{itemize}
\end{lemma}
\begin{proof}
For (i) use Corollary \ref{WPThm2} (b), the fact that ${\bf R}(z_0,w_0) \neq 0$, and
\begin{eqnarray*}
{\bf F}(z_0,w_0)&=&{\bf Q}(z_0,w_0) {\bf R}(z_0,w_0)\\
{\bf F}_w(z_0,w_0)
&=& {\bf Q}_w(z_0,w_0) {\bf R}(z_0,w_0)\ +\
{\bf Q}(z_0,w_0) {\bf R}_w(z_0,w_0)\\
&=& {\bf Q}_w(z_0,w_0) {\bf R}(z_0,w_0).
\end{eqnarray*}
For (ii), since ${\bf Q}$ and ${\bf Q}_w$ vanish and ${\bf Q}_{ww}$
evaluates to $2$ at $(z_0,w_0)$,
\begin{eqnarray*}
{\bf F}_{ww}(z_0,w_0)
&=& 2{\bf R}(z_0,w_0).
\end{eqnarray*}
For (iii) we have from (i)
\begin{eqnarray*}
0& =&{\bf Q}_0(z_0) + {\bf Q}_1(z_0) w_0 + {w_0}^2\\
0& =&{\bf Q}_1(z_0)+ 2w_0
\end{eqnarray*}
and thus
\begin{eqnarray}
{\bf Q}_1(z_0)& =&-2w_0 \label{A1}\\
{\bf Q}_0(z_0)& =&{w_0}^2.\label{A0}
\end{eqnarray}
From \eqref{A1} and \eqref{A0} we have
\begin{eqnarray*}
{\bf D}(z_0)&=& {\bf Q}_1(z_0)^2 - 4{\bf Q}_0(z_0)\ =\ 4{w_0}^2 - 4 {w_0}^2\ =\ 0,
\end{eqnarray*}
which is claim (iii).
For claim (iv) start with
\begin{eqnarray*}
{\bf F}_z(z_0,w_0)
&=&{\bf Q}_z(z_0,w_0) {\bf R}(z_0,w_0)\\
&=& \big({\bf Q}_0'(z_0) + w_0 {\bf Q}_1'(z_0) \big){\bf R}(z_0,w_0).
\end{eqnarray*}
From the definition of ${\bf D}(z)$ and \eqref{A1}
\begin{eqnarray*}
{\bf D}'(z_0)
&=& 2{\bf Q}_1(z_0) {\bf Q}_1'(z_0) - 4{\bf Q}_0'(z_0)\\
&=& -4\big( {\bf Q}_0'(z_0) + w_0 {\bf Q}_1'(z_0) \big),
\end{eqnarray*}
so
\[
-4{\bf F}_z(z_0,w_0)\ =\ {\bf D}'(z_0) {\bf R}(z_0,w_0).
\]
Now use (ii) to finish the derivation of (iv).
\end{proof}
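As a concrete check of the lemma (our own toy example), take
${\bf F}(z,w)=w-z-zw^2$ and $(z_0,w_0)=(1/2,1)$, so that
${\bf F}={\bf F}_w=0$ and ${\bf F}_{ww}=-1\neq 0$ at this point. The
Weierstra{\ss} factorization can be written down explicitly:
\[
{\bf F}(z,w)\ =\ \Big(w^2 - \tfrac{1}{z}\, w + 1\Big)\cdot(-z),
\]
so ${\bf Q}_1(z)=-1/z$, ${\bf Q}_0(z)=1$ and ${\bf D}(z)=1/z^2-4$. Then
${\bf D}(1/2)=0$ and ${\bf D}'(1/2)=-16$, in agreement with (iv), since
$-8\,{\bf F}_z(1/2,1)/{\bf F}_{ww}(1/2,1)=-8\cdot(-2)/(-1)=-16$; likewise
${\bf R}(1/2,1)=-1/2={\bf F}_{ww}(1/2,1)/2$, which is (ii).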
\subsection{A square-root continuation of ${\bf T}(z)$ when $z$ is near $\rho$}
Let us combine the above information into a proposition about
a solution to a functional equation.
\begin{proposition} \label{crucial1}
Suppose ${\bf T}\in{\mathbb{DOM}}[z]$ is such that
\begin{thlist}
\item
$\rho:=\rho_{\bf T}\in(0,\infty)$
\item
${\bf T}(\rho)<\infty$
\end{thlist}
and ${\bf F}(z,w)$ is a function of two complex variables such that:
\begin{thlist}
\item[c]there is an $\varepsilon > 0$ such that ${\bf F}\big(z,{\bf T}(z)\big) = 0$
for $|z| < \rho$ and $|z-\rho| < \varepsilon$
\item[d]
${\bf F}(z,w)$ is holomorphic in a neighborhood of $\big(\rho,{\bf T}(\rho)\big)$
\item[e]
$\displaystyle {\bf F}\big(\rho,{\bf T}(\rho)\big) \,=\,
\frac{\partial {\bf F}}{\partial w}\big(\rho,{\bf T}(\rho)\big) \,=\,0$
\item[f]
$\displaystyle
\frac{\partial {\bf F}}{\partial z}\big(\rho,{\bf T}(\rho)\big)\cdot
\frac{\partial^2 {\bf F}}{\partial w^2}\big(\rho,{\bf T}(\rho)\big)\,>\,0$.
\end{thlist}
Then there are functions ${\bf A}(z), {\bf B}(z)$ analytic at $0$
such that
\[
{\bf T}(z)\ =\ {\bf A}(\rho - z) \,+\, {\bf B}(\rho - z)\sqrt{\rho -z}
\]
for $|z|<\rho$ and near $\rho$
(see Figure \ref{puiseux}), and
\[
{\bf B}(0)\ =\ -\sqrt{\frac{2{\bf F}_z\big(\rho,{\bf T}(\rho)\big)}
{{\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)}}\ <\ 0.
\]
\begin{figure}
\caption{${\bf T}(z)$ near the dominant singularity $\rho$}
\label{puiseux}
\end{figure}
\end{proposition}
\begin{proof}
Items (d)--(f) give the hypotheses of Corollary \ref{WPThm2}
with $(z_0,w_0) = \big(\rho,{\bf T}(\rho)\big)$.
Let ${\bf Q}_0(z)$, ${\bf Q}_1(z)$ and ${\bf D}(z)={\bf Q}_1(z)^2 - 4{\bf Q}_0(z)$
be as in Corollary \ref{WPThm2}.
From conclusion (iv) of Lemma \ref{WPThm2a} we have
\begin{equation}\label{D'}
{\bf D}'(\rho)\ =\ -8\frac{{\bf F}_z\big(\rho,{\bf T}(\rho)\big)}
{{\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)}\ <\ 0.
\end{equation}
From (c) and Corollary \ref{WPThm2}(i)
\[
{\bf Q}_0(z)\,+\,{\bf Q}_1(z){\bf T}(z)\,+\,{\bf T}(z)^2\ =\ 0
\]
holds in a neighborhood of $z=\rho$ intersected with
${{\mathbb D}_\rho(0)}$
(as pictured in Figure \ref{puiseux}), so in this region
\[
{\bf T}(z)\ =\ -\frac{1}{2}{\bf Q}_1(z)\,+\,\frac{1}{2}\sqrt{{\bf D}(z)}
\]
for a suitable branch of the square root.
Expanding ${\bf D}(z)$ about $\rho$ gives
\begin{equation}\label{Dz}
{\bf D}(z)\ =\ \sum_{k\ge 1} d_k(\rho -z)^k
\end{equation}
since ${\bf D}(\rho)=0$ by (iii) of Lemma \ref{WPThm2a};
and $d_1 = -{\bf D}'(\rho) > 0$ by \eqref{D'}.
Consequently
\begin{equation}\label{sqrt cont}
{\bf T}(z)\ =\ \underbrace{-\frac{1}{2}{\bf Q}_1(z)}_{\displaystyle{\bf A}(\rho - z)}\,
\underbrace{-\,\frac{1}{2}\sqrt{d_1}
\sqrt{1 + \sum_{k\ge 2}\frac{d_k}{d_1}(\rho - z)^{k-1}}}_{\displaystyle {\bf B}(\rho-z)}
\,\cdot\,\sqrt{\rho -z}
\end{equation}
holds for $|z|<\rho$ and near $\rho$. The negative sign of the second term is due to choosing
the branch of the square root which is consistent with the choice of branch implicit
in Lemma \ref{binomial} when $\alpha = 1/2$, given that the $t(n)$'s are nonnegative.
Thus we have functions ${\bf A}(z), {\bf B}(z)$ analytic in a neighborhood
of $0$ with ${\bf B}(0)\neq 0$ such that
\[
{\bf T}(z)\ =\ {\bf A}(\rho-z) + {\bf B}(\rho-z)\sqrt{\rho -z}
\]
for $|z|<\rho$ and near $\rho$.
From \eqref{D'}, \eqref{Dz} and \eqref{sqrt cont}
\[
{\bf B}(0)\ =\ -\frac{1}{2}\sqrt{d_1}\ =\ -\frac{1}{2}\sqrt{-{\bf D}'(\rho)}\ =\
-\sqrt{\frac{2{\bf F}_z\big(\rho,{\bf T}(\rho)\big)}
{{\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)}}\ <\ 0.
\]
\end{proof}
Now we turn to recursion equations $w={\bf E}(z,w)$.
So far in our discussion of the role of the Weierstra{\ss} Preparation Theorem we have not made any reference to the signs of the coefficients in the
recursion equation. The following proposition establishes a square-root
singularity at $\rho$, and {\em the proof uses the fact that
all coefficients of ${\bf E}$ are nonnegative}. If we did not make
this assumption then items \eqref{Fww} and \eqref{Fz} below might fail to hold.
If \eqref{Fz} is false then ${\bf F}_z\big(\rho,{\bf T}(\rho)\big)$ may be
$0$, in which case
$\pmb{(\star)}$ fails.
See section \ref{B0 condition} for a further discussion of this issue.
\begin{corollary} \label{crucial2}
Suppose ${\bf T}\in{\mathbb{DOM}}[z]$ and ${\bf E}\in{\mathbb{DOM}}[z,w]$
are such that
\begin{thlist}
\item
$\rho:=\rho_{\bf T}\in(0,\infty)$
\item
${\bf T}(\rho)<\infty$
\item
${\bf T}(z) = {\bf E}\big(z,{\bf T}(z)\big)$ holds as an identity between formal
power series,
\item
${\bf E}(z,w)$ is not linear in $w$,
\item
${\bf E}_z\neq 0$
\item
$(\exists \varepsilon > 0)\,
\Big({\bf E}\big(\rho+\varepsilon,{\bf T}(\rho)+\varepsilon\big)\,<\,\infty\Big)$.
\end{thlist}
Then there are functions ${\bf A}(z), {\bf B}(z)$ analytic at $0$
such that
\[
{\bf T}(z)\ =\ {\bf A}(\rho - z) + {\bf B}(\rho - z)\sqrt{\rho -z}
\]
for $|z|<\rho$ and near $\rho$
(see Figure \ref{puiseux}), and
\[
{\bf B}(0)\ =\ -\sqrt{\frac{2{\bf E}_z\big(\rho,{\bf T}(\rho)\big)}
{{\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)}}\ <\ 0.
\]
\end{corollary}
\begin{proof}
By (f) we can
choose $\varepsilon>0$ such that ${\bf E}$ is holomorphic in
\[
{\mathbb U}\ =\ {\mathbb D}_{\rho + \varepsilon}(0) \times
{\mathbb D}_{{\bf T}(\rho) + \varepsilon}(0),
\]
an open polydisc neighborhood of the graph of ${\bf T}$.
Let
\begin{equation}\label{def of F}
{\bf F}(z,w)\ :=\ w \,-\,{\bf E}(z,w).
\end{equation}
Then ${\bf F}$ is holomorphic in ${\mathbb U}$, and one readily sees that
\begin{eqnarray}
{\bf F}\big(z,{\bf T}(z)\big)& =& {\bf T}(z) \,-\,{\bf E}\big(z,{\bf T}(z)\big)\ =\ 0
\quad\text{for }|z|\le \rho\label{F}\\
{\bf F}_w(z,w)& =& 1 \,-\,{\bf E}_w(z,w)\label{Fw}\\
{\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)& =& -
{\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)\ <\ 0
\quad\text{by (d) and ${\bf E}\unrhd 0$} \label{Fww}\\
{\bf F}_z\big(\rho,{\bf T}(\rho)\big)&=& -{\bf E}_z\big(\rho,{\bf T}(\rho)\big)\ < \ 0
\quad\text{by (e) and ${\bf E}\unrhd 0$}\label{Fz}.
\end{eqnarray}
By Pringsheim's Theorem $\rho$ is a singularity of ${\bf T}$. Thus
${\bf F}_w\big(\rho,{\bf T}(\rho)\big)=0$ since one cannot use the Implicit Function
Theorem to analytically continue
${\bf T}$ at $\rho$.
We have satisfied the hypotheses of Proposition \ref{crucial1}---use
\eqref{Fww} and \eqref{Fz} to obtain the formula for ${\bf B}(0)$.
\end{proof}
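For a concrete instance (ours), take ${\bf E}(z,w)=z+zw^2$. Solving the
quadratic gives ${\bf T}(z)=\big(1-\sqrt{1-4z^2}\,\big)/(2z)$, so
$\rho=1/2$ and ${\bf T}(\rho)=1$, and hypotheses (a)--(f) are immediate.
Since ${\bf E}_z(1/2,1)=2$ and ${\bf E}_{ww}(1/2,1)=1$, the corollary gives
${\bf B}(0)=-\sqrt{2\cdot 2/1}=-2$, which matches the expansion
${\bf T}(z)=1-2\sqrt{\tfrac{1}{2}-z}+{\rm O}\big(\tfrac{1}{2}-z\big)$
obtained directly from the closed form.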
\subsection{Linear recursion equations}
In a {\em linear} recursion equation
\[
w\ =\ {\bf A}_0(z) + {\bf A}_1(z) w
\]
one has
\begin{equation} \label{linear}
w\ =\ \frac{{\bf A}_0(z)}{1 - {\bf A}_1(z)}.
\end{equation}
From this we see that the collection of solutions to linear equations
covers an enormous range. For example, in the case
\[
w\ =\ {\bf A}_0(z) + z w,
\]
any ${\bf T}(z)\in{\mathbb{DOM}}[z]$ with
nondecreasing eventually positive coefficients
is a solution to the above linear equation (which satisfies
${\bf A}_0(z)+z w\,\unrhd\,0$) if we choose
${\bf A}_0(z) := (1-z) {\bf T}(z)$.
When one moves to a $\Theta(w)$ that is
nonlinear in $w$, the range of solutions seems to be greatly constricted.
In particular with remarkable frequency one encounters solutions ${\bf T}(z)$
whose coefficients are asymptotic to $C\rho^{-n}n^{-3/2}$.
\subsection{Binomial coefficients} \label{ultimate}
The asymptotics for the coefficients
in the binomial expansion of $(\rho-z)^\alpha$ are the ultimate basis for
the universal law $\pmb{(\star)}$. Of course if
$\alpha\in{\mathbb N}$ then $(\rho-z)^\alpha$ is just a polynomial and the coefficients
are eventually 0.
\begin{lemma}[See Wilf \cite{Wi}, p. 179] \label{binomial}
For $\alpha\in {\mathbb R} \setminus {\mathbb N}$ and $\rho\in(0,\infty)$
\[
[z^n]\,(\rho-z)^\alpha\ =\
(-1)^n
\binom{\alpha}{n}
\rho^{\alpha-n}\
\sim\ \frac{\rho^\alpha}{\Gamma(-\alpha)} \rho^{-n} n^{-\alpha-1}.
\]
\end{lemma}
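A quick numerical check of the lemma for $\alpha=1/2$ (our own sketch, not
needed for the development); the factor $\rho^{\alpha-n}$ is common to both
sides and cancels, so $\rho$ drops out of the ratio.
\begin{verbatim}
from math import gamma

def gen_binom(alpha, n):
    # generalized binomial coefficient C(alpha, n)
    out = 1.0
    for k in range(n):
        out *= (alpha - k) / (k + 1)
    return out

alpha = 0.5
for n in (10, 100, 1000, 10000):
    # ratio of the exact coefficient to the asymptotic estimate
    ratio = (-1) ** n * gen_binom(alpha, n) * gamma(-alpha) * n ** (alpha + 1)
    print(n, ratio)          # tends to 1 as n grows
\end{verbatim}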
\subsection{The Flajolet and Odlyzko singularity analysis}
In \cite{Fl:Od} Flajolet and Odlyzko develop {\em transfer} theorems via
singularity analysis for functions ${\bf S}(z)$ that
have a unique dominant singularity.
The goal is to develop a catalog of translations, or transfers,
that say: {\em if ${\bf S}(z)$ behaves like such and such
near the singularity $\rho$
then the coefficients $s(n)$ have such and such asymptotic behaviour.}
Their work is based on applying the Cauchy Integral Formula to an
analytic continuation of ${\bf S}(z)$ beyond its circle of convergence.
This leads to their basic notion of
a \underline{Delta neighborhood} $\Delta$ of $\rho$, that is,
a closed disc which is somewhat larger than the disc of radius $\rho$,
but with an open pie shaped wedge cut out at the point $z=\rho$
(see Fig. \ref{delta}).
We are particularly interested in their transfer theorem that directly
generalizes the binomial asymptotics given in Lemma \ref{binomial}.
\begin{figure}
\caption{A Delta region and associated contour}
\label{delta}
\end{figure}
\begin{proposition}
[\cite{Fl:Od}, Corollary 2]
\label{FlOd prop}
Let $\rho\in(0,\infty)$ and suppose
${\bf S}$ is analytic in $\Delta\setminus\{\rho\}$
where $\Delta$ is a Delta neighborhood of $\rho$.
If $\alpha\notin{\mathbb N}$ and
\begin{equation}
\label{alpha}
{\bf S}(z)\ \sim \ K\big(\rho-z\big)^\alpha
\end{equation}
as $z\rightarrow \rho$ in $\Delta$, then
\[
s(n) \ \sim
[z^n]\,
K\big(\rho-z\big)^\alpha\ =\
(-1)^n K
\binom{\alpha}{n}
\rho^{\alpha-n}\
\sim\ \frac{K\rho^\alpha}{\Gamma(-\alpha)}\cdot \rho^{-n}n^{-\alpha-1}.
\]
\end{proposition}
Let us apply this to the square-root singularities that
we are working with to see that one ends up with the
asymptotics satisfying $\pmb{(\star)}$.
\begin{corollary}
\label{FlOd cor}
Suppose ${\bf S}\in{\mathbb{DOM}}[z]$ has radius of convergence
$\rho\in(0,\infty)$, and $\rho$ is the only dominant singularity
of ${\bf S}$.
Furthermore suppose ${\bf A}$ and ${\bf B}$ are analytic at $0$ with
${\bf B}(0) < 0$, ${\bf A}(0)>0$ and
\begin{equation}\label{sqrt}
{\bf S}(z)\ = {\bf A}(\rho -z) + {\bf B}(\rho - z)\sqrt{\rho-z}
\end{equation}
for $z$ in some neighborhood of $\rho$, and $|z|<\rho$.
Then
\[
s(n) \ \sim \ [z^n]\, {\bf B}(0)\sqrt{\rho-z}
\ \sim\ \frac{-{\bf B}(0) \sqrt{\rho}}{2\sqrt{\pi}}
\cdot \rho^{-n} n^{-3/2}.
\]
\end{corollary}
\begin{proof}
One can find a Delta neighborhood $\Delta$ of $\rho$
(as in Fig. \ref{delta}) such that ${\bf S}$ has an analytic
continuation to $\Delta\setminus\{\rho\}$; and for
$z\in\Delta$ and near $\rho$ one has \eqref{sqrt} holding.
Consequently
\[
{\bf S}(z) - {\bf A}(0)\ \sim\ {\bf B}(0)\sqrt{\rho-z}
\]
as $z\rightarrow \rho$ in $\Delta$.
This means we can apply
Proposition \ref{FlOd prop} to obtain
\begin{eqnarray*}
s(n) & \sim& \frac{{\bf B}(0) \sqrt{\rho}}{\Gamma(-1/2)}
\cdot \rho^{-n} n^{-3/2}.
\end{eqnarray*}
Since $\Gamma(-1/2) = -2\sqrt{\pi}$, this is the asserted asymptotics.
\end{proof}
\subsection{On the condition ${\bf B}(0)< 0$} \label{B0 condition}
In the previous corollary suppose that ${\bf B}(0)= 0$ but ${\bf B}\neq 0$.
Let $b_k$ be the first nonzero coefficient of ${\bf B}$.
The asymptotics for $s(n)$ are
\[
s(n)\ \sim\ b_k[z^n]\,\big(\rho - z\big)^{k +\frac{1}{2}},
\]
giving a law of the form $C \rho^{-n} n^{-k -\frac{3}{2}}$.
{\em We do not know of an example of ${\bf S}$ defined by a nonlinear
functional equation that gives rise to such a solution with $k>0$, that is,
with the exponent of $n$ being $-5/2$, or $-7/2$, etc. Meir and Moon
(p.~83 of \cite{Me:Mo2}, 1989) give the example
\[
w\ =\ (1/6) e^w \sum_{n\ge 1} z^n/n^2
\]
where the solution $w={\bf T}$ has coefficient asymptotics given by
$t_n\sim C/n$.}
\subsection{Handling multiple dominant singularities}
We want to generalize Proposition \ref{FlOd prop} to cover the case of several dominant
singularities equally spaced around the circle of convergence
and with the function ${\bf S}$ enjoying a certain kind of symmetry.
\begin{proposition} \label{FlOd Multi}
Given $q\in{\mathbb P}$ and $\rho\in(0,\infty)$ let
\begin{eqnarray*}
\omega &:=&e^{2\pi i/q}\\
U_{q,\rho}& :=& \{ \omega^j\rho: j=0,1,\ldots,q-1\}.
\end{eqnarray*}
Suppose $\Delta$ is a generalized Delta-neighborhood of $\rho$
with wedges removed at the points in $U_{q,\rho}$
(see Fig. \ref{MultiDelta} for $q=3$),
\begin{figure}
\caption{Multiple dominant singularities}
\label{MultiDelta}
\end{figure}
suppose
${\bf S}$ is continuous on $\Delta$ and
analytic in $\Delta\setminus U_{q,\rho}$, and suppose
$d$ is a nonnegative integer such that
$
{\bf S}\big(\omega z\big)\ = \omega^d {\bf S}(z)
$
for $z\in\Delta$.
If
${\bf S}(z)\sim K(\rho-z)^\alpha$ as $z\rightarrow \rho$ in $\Delta$
and $\alpha\notin {\mathbb N}$
then
\[
s(n) \ \sim\ \frac{q K \rho^\alpha}
{\Gamma(-\alpha)} \cdot \rho^{-n}
n^{-\alpha-1}\quad
\text{if } n\equiv d\mod q,
\]
$s(n)=0$ otherwise.
\end{proposition}
\begin{proof}
Given $\varepsilon > 0$ choose the contour ${\mathcal C}$ to follow the
boundary of $\Delta$ except for
a radius $\varepsilon$ circular detour around each
singularity $\omega^j\rho$ (see Fig. \ref{MultiDelta}). Then
\[
s(n)\ =\ \frac{1}{2\pi i}\int_{{\mathcal C}} \frac{{\bf S}(z)}{z^{n+1}}dz.
\]
Subdivide ${\mathcal C}$ into $q$ congruent pieces
${\mathcal C}_0,\ldots,{\mathcal C}_{q-1}$ with ${\mathcal C}_j$ centered around
$\omega^j\rho$, choosing as
the dividing points on ${\mathcal C}$ the bisecting
points between successive singularities (see Fig. \ref{bisect2} for $q=3$).
\begin{figure}
\caption{The congruent contour segments ${\mathcal C}_0,\ldots,{\mathcal C}_{q-1}$}
\label{bisect2}
\end{figure}
Then ${\mathcal C}_j = \omega^j {\mathcal C}_0$.
Let $s_j(n)$ be the portion of the integral for $s(n)$
taken over ${\mathcal C}_j$, that is:
\[
s_j(n)\ =\ \frac{1}{2\pi i}\int_{{\mathcal C}_j} \frac{{\bf S}(z)}{z^{n+1}}dz.
\]
Then from ${\bf S}(\omega z) = \omega^d {\bf S}(z)$ and
${\mathcal C}_j = \omega^j {\mathcal C}_0$ we have
\begin{eqnarray*}
s_j(n)
& =& \frac{1}{2\pi i}\int_{{\mathcal C}_j} \frac{{\bf S}(z)}{z^{n+1}}dz\\
& =& \frac{1}{2\pi i}\int_{{\mathcal C}_0}
\frac{\omega^{dj} {\bf S}(z)}{(\omega^j z)^{n+1}}\omega^j dz\\
& =& \omega^{j(d-n)} \frac{1}{2\pi i}\int_{{\mathcal C}_0}
\frac{{\bf S}(z)}{z^{n+1}} dz\\
&=& \omega^{j(d-n)} s_0(n),
\end{eqnarray*}
so
\begin{eqnarray*}
s(n)
& =& \sum_{j=0}^{q-1} s_j(n)\\
& =&
\Big(\sum_{j=0}^{q-1} \omega^{j(d-n)}\Big) s_0(n)\\
& =& \begin{cases} q s_0(n)&\text{if }n\equiv d \mod q\\
0&\text{otherwise.}
\end{cases}
\end{eqnarray*}
We have reduced the integral calculation to the integral over
${\mathcal C}_0$, and this proceeds exactly as in \cite{Fl:Od} in the unique
singularity case described in Proposition \ref{FlOd prop}.
\end{proof}
Let us apply this result to the case of ${\bf S}(z)$ having multiple
dominant singularities, equally spaced on the circle of convergence,
with a square-root singularity at $\rho$.
\begin{corollary} \label{MultiTransfer}
Given $q\in{\mathbb P}$ and $\rho\in(0,\infty)$ let
\begin{eqnarray*}
\omega &:=&e^{2\pi i/q}\\
U_{q,\rho}& :=& \{ \omega^j\rho: j=0,1,\ldots,q-1\}.
\end{eqnarray*}
Suppose ${\bf S}\in{\mathbb{DOM}}[z]$ has radius of convergence
$\rho\in(0,\infty)$, $U_{q,\rho}$ is the set of dominant
singularities of ${\bf S}$, and
${\bf S}(\omega z) = \omega^d {\bf S}(z)$ for $|z|<\rho$ and
for some $d\in{\mathbb N}$.
Furthermore suppose ${\bf A}$ and ${\bf B}$ are analytic at $0$ with
${\bf B}(0) < 0$, ${\bf A}(0) > 0$ and
\begin{equation}\label{sqrt2}
{\bf S}(z)\ = {\bf A}(\rho -z) + {\bf B}(\rho - z)\sqrt{\rho-z}
\end{equation}
for $z$ in some neighborhood of $\rho$, and $|z|<\rho$.
Then
\begin{equation}\label{multi sqrt}
s(n) \ \sim \
\frac{q {\bf B}(0) \sqrt{\rho}}
{\Gamma(-1/2)} \cdot \rho^{-n}
n^{-3/2}\quad
\quad\text{for } n\equiv d \mod q.
\end{equation}
Otherwise $s(n)=0$.
\end{corollary}
\begin{proof}
Since the set of dominant singularities $U_{q,\rho}$ is finite
one can find a generalized Delta neighborhood $\Delta$ of $\rho$
(as in Fig. \ref{MultiDelta}) such that ${\bf S}$ has a continuous extension
to $\Delta$ which is an analytic continuation to $\Delta\setminus U_{q,\rho}$;
and for
$z\in\Delta$ and near $\rho$ one has \eqref{sqrt2} holding.
Consequently
\[
{\bf S}(z) - {\bf A}(0) \ \sim\ {\bf B}(0)\sqrt{\rho-z}
\]
as $z\rightarrow \rho$ in $\Delta$.
This means we can apply Proposition \ref{FlOd Multi}
to obtain \eqref{multi sqrt}.
\end{proof}
\subsection{Darboux's expansion}
In 1878 Darboux \cite{darboux} published a procedure for expressing the
asymptotics of the coefficients $s(n)$ of a power series ${\bf S}$ with
algebraic dominant singularities.
Let us focus first on the case that ${\bf S}$ has a single dominant
singularity, namely $z=\rho$, and it is of square-root type, say
\[
{\bf S}(z)\ =\ {\bf A}(\rho - z) + {\bf B}(\rho - z) \sqrt{\rho - z}
\]
for $|z|< \rho$ and sufficiently close to $\rho$, where ${\bf A}$ and ${\bf B}$
are analytic at 0 and ${\bf B}(0)< 0$.
From Proposition \ref{FlOd prop} we know that
\[
s(n)\ =\ \big(1+{\rm o}(1)\big) b(0) [z^n]\,\sqrt{\rho - z}.
\]
Rewriting the expression for ${\bf S}(z)$ as
\[
{\bf S}(z)\ =\
\sum_{j=0}^\infty \Big(a_j(\rho - z)^j \,+ \,
b_j(\rho - z)^{j+\frac{1}{2}}\Big)
\]
we can see that the $m$th derivative of ${\bf S}$ `blows up' as $z$ approaches
$\rho$ because the $m$th derivatives of the half-integer power terms on the right
with $j< m$ involve negative powers of $\rho-z$. However for $j\ge m$ the
terms on the right have $m$th derivatives that behave nicely near $\rho$.
By shifting the troublesome terms to the left side of the equation, giving
\begin{eqnarray*}
{\bf S}_m(z)& :=& {\bf S}(z) -
\sum_{j<m} \Big(a_j(\rho - z)^j \,+ \,
b_j(\rho - z)^{j+\frac{1}{2}}\Big)\\
& =&
\sum_{j\ge m} \Big(a_j(\rho - z)^j \,+ \,
b_j(\rho - z)^{j+\frac{1}{2}}\Big),
\end{eqnarray*}
one can see by looking at the right side that the $m$th
derivative
${\bf S}_m^{(m)}(z)$
of ${\bf S}_m(z)$ has a square-root singularity at $\rho$ provided some
$b_j\neq 0$ for $j\ge m$.
Indeed
${\bf S}_m^{(m)}(z)$ is very much like ${\bf S}(z)$, being
analytic for $|z|\le \rho$ provided $z\neq \rho$.
If $b_m\neq 0$ we can apply Proposition \ref{FlOd prop}
to obtain (for suitable $C_m$)
\[
[z^n]\, {\bf S}_m^{(m)}(z) \ \sim\ C_m\rho^{-n} n^{-\frac{3}{2}}
\]
and thus, adjusting the constant $C_m$ accordingly,
\[
[z^n]\, {\bf S}_m(z) \ \sim\ C_m\rho^{-n}
n^{-m-\frac{3}{2}}.
\]
This tells us that
\[
s(n)\ =\ \sum_{j<m} [z^n]\,\Big(a_j(\rho - z)^j \,+ \,
b_j(\rho - z)^{j+\frac{1}{2}}\Big)\ +\
\big(1+{\rm o}(1)\big) C_m\rho^{-n} n^{-m-\frac{3}{2}}.
\]
For $n\ge m$ the part with the $a_j$ drops out, so we have
the Darboux expansion
\[
s(n)\ =\ \sum_{j<m} [z^n]\,\Big(
b_j(\rho - z)^{j+\frac{1}{2}}\Big)\ +\
\big(1+{\rm o}(1)\big) C_m\rho^{-n} n^{-m-\frac{3}{2}}.
\]
The case of multiple dominant singularities is handled as previously.
Here is the result for the general exponent $\alpha$.
\begin{proposition}[Multi Singularity Darboux Expansion]\,
Given $q\in{\mathbb P}$ and $\rho\in(0,\infty)$ let
\begin{eqnarray*}
\omega &:=&e^{2\pi i/q}\\
U_{q,\rho}& :=& \{ \omega^j\rho: j=0,1,\ldots,q-1\}.
\end{eqnarray*}
Suppose we have a generalized Delta-neighborhood $\Delta$
with wedges removed at the points in $U_{q,\rho}$ (see Fig. \ref{MultiDelta})
and
${\bf S}$ is analytic in $\Delta\setminus U_{q,\rho}$. Furthermore suppose
$d$ is a nonnegative integer such that
$
{\bf S}\big(\omega z\big)\ = \omega^d {\bf S}(z)
$
for $|z|<\rho$.
If
\[
{\bf S}(z)\ =\ {\bf A}(\rho-z) + {\bf B}(\rho-z)(\rho-z)^\alpha
\]
for $|z|<\rho$ and in a neighborhood of $\rho$,
and $\alpha\notin {\mathbb N}$,
then given $m\in{\mathbb N}$ with $b_m\neq 0$ there is a $C_m\neq 0$
such that
for $n\equiv d \mod q$
\[
s(n)\ =\ q\sum_{j<m} [z^n]\,\Big(
b_j(\rho - z)^{j+\alpha}\Big)\ +\
\big(1+{\rm o}(1)\big) C_m\rho^{-n} n^{-\alpha-(m+1)}.
\]
\end{proposition}
\subsection{An alternative approach:
reduction to the aperiodic case} \label{altern}
In the literature one finds references to the option of using the
aperiodic reduction ${\bf V}$ of ${\bf T}$, that is, using
${\bf T}(z) = z^d{\bf V}(z^q)$ where ${\bf V}(0)\neq 0$ and $\gcd\{n : v(n) \neq 0\}=1$.
${\bf V}$ has a unique dominant singularity at
$\rho_{\bf V}\,=\,{\rho_{\bf T}}^q$, so the hope
would be that one could use a well known result like
Theorem \ref{Odlyzko Thm} to prove that $\pmb{(\star)}$ holds for $v(n)$.
Then $t(nq + d) = v(n)$ gives the asymptotics for the coefficients of ${\bf T}$.
One can indeed make the transition from ${\bf T} = {\bf E}(z,{\bf T})$ to a functional
equation ${\bf V} = {\bf H}(z,{\bf V})$, but it is not clear if the property that
${\bf E}$ is holomorphic at the endpoint of the graph of ${\bf T}$ implies
${\bf H}$ is holomorphic at the endpoint of the graph of ${\bf V}$.
Instead of the property
\[
(\exists \varepsilon > 0)\,\Big({\bf E}\big(\rho+\varepsilon,
{\bf T}(\rho)+\varepsilon\big)\,<\,\infty\Big)
\]
of ${\bf E}$ used previously, a stronger version seems to be needed, namely:
\[
(\forall y > 0)\Big[{\bf E}\big(\rho,y\big)\,<\,\infty \Rightarrow\
(\exists \varepsilon > 0)\,\Big({\bf E}\big(\rho+\varepsilon,
y+\varepsilon\big)\,<\,\infty\Big)\Big].
\]
We chose the singularity analysis because it sufficed to require
the weaker condition that ${\bf E}$ be holomorphic at $\big(\rho,{\bf T}(\rho)\big)$,
and because the expression for the constant term in the asymptotics was
far simpler than what we obtained through the use of ${\bf V} = {\bf H}(z,{\bf V})$.
Furthermore, in any attempt to extend the analysis of the asymptotics to
other kinds of recursion equations one would like to have the ultimate
foundations of the Weierstra{\ss} Preparation Theorem and the Cauchy Integral
Theorem to fall back on.
\section{The Dominant Singularities of ${\bf T}(z)$}
The recursion equations $w = {\bf E}(z,w)$ we consider will be such that
the solution $w={\bf T}$ has a radius of convergence $\rho$ in $(0,\infty)$ and finitely
many dominant singularities, that is finitely many singularities on
the circle of convergence.
In such cases the primary technique to find the asymptotics
for the coefficients $t(n)$ is to apply Cauchy's Integral
Theorem \eqref{cauchy}.
Experience suggests that properly
designed contours ${\mathcal C}$ will concentrate the value of the integral \eqref{cauchy}
on small
portions of the contour near the dominant singularities of ${\bf T}$---consequently great value
is placed on locating the dominant singularities of ${\bf T}$.
\begin{definition}
For ${\bf T}\in{\mathbb{DOM}}[z]$ with radius $\rho\in (0,\infty)$ let
${\sf DomSing}({\bf T})$ be the set of dominant singularities of ${\bf T}$, that is,
the set of singularities on the circle of convergence of ${\bf T}$.
\end{definition}
\subsection{The spectrum of a power series}
\begin{definition} For ${\bf A}\in{\mathbb{DOM}}[z]$ let the \underline{spectrum} ${\sf Spec}({\bf A})$
of ${\bf A}$ be the set of $n$ such that the $n$th coefficient $a(n)$ is not
zero.\footnote{
In the 1950s the logician Scholz defined
the {\em spectrum} of a first-order sentence $\varphi$ to be the set of
sizes of the finite models of $\varphi$. For example if $\varphi$ is an
axiom for fields, then the spectrum would be the set of powers of primes.
There are many papers on this topic: a famous open problem due to Asser
asks if the collection of spectra of first-order sentences is closed
under complementation. This turns out to be equivalent to an open
question in complexity theory. The recent paper \cite{Fi:Ma} of Fischer
and Makowsky has an
excellent bibliography of 62 items on the subject of spectra.
For our purposes, if ${\bf A}(z)$ is a generating series for a class ${\mathcal A}$ of
combinatorial objects then the set of sizes of the objects
in ${\mathcal A}$ is precisely ${\sf Spec}({\bf A})$.
}
It will be convenient to denote ${\sf Spec}({\bf A})$ simply by $A$, so we have
\[
A \ =\ {\sf Spec}({\bf A})\ =\ \{n : a(n) \neq 0\}.
\]
\end{definition}
In our analysis of the dominant singularities of ${\bf T}$ it will be most
convenient
to have a simple calculus to work with the spectra of power series.
\subsection{An algebra of sets}
The spectrum of a power series from ${\mathbb{DOM}}[z]$ is a subset of
positive integers; the calculus
we use has certain operations on the subsets of the nonnegative integers.
\begin{definition}
For $I,J\subseteq {\mathbb N} $ and $j,m\in{\mathbb N}$ let
\begin{eqnarray*}
I+J& :=& \big\{i+j : i\in I, j\in J\big\}\\
I-j& :=& \big\{i-j : i\in I\big\}\quad\text{where }j\le\min(I)\\
m \cdot J& :=& \{m\cdot j : j\in J\}\quad\text{for }m\ge 1\\
0\odot J &:=& \{0\}\\
m\odot J& :=& \underbrace{J+\cdots +J}_{m-{\rm times}} \quad \text{for $m \ge 1$}\\
I\odot J& :=& \bigcup_{i\in I} i\odot J\\
m| J&\Leftrightarrow & \big(\forall j\in J\big)\,(m|j).
\end{eqnarray*}
\end{definition}
\subsection{The periodicity constants}
Periodicity plays an important role in determining the dominant singularities.
For example the generating series ${\bf T}(z)$ of planar (0,2)-binary trees,
that is, planar trees where each node has 0 or 2 successors, is defined by
\[
{\bf T}(z)\ =\ z + z{\bf T}(z)^2.
\]
It is clear that all such trees have odd size, so one has
\[
{\bf T}(z)\ =\ \sum_{j=0}^\infty t(2j+1) z^{2j+1}
\ =\ z \sum_{j=0}^\infty t(2j+1) (z^2)^j.
\]
This says we can write ${\bf T}(z)$ in the form
\[
{\bf T}(z)\ =\ z {\bf V}(z^2).
\]
From such considerations one finds that ${\bf T}(z)$ has exactly
two dominant singularities, $\rho$ and $-\rho$. (The general result is
given in Lemma \ref{dom sing}.)
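In this particular case the dominant singularities can also be read off from a
closed form: solving $z w^2 - w + z = 0$ gives
\[
{\bf T}(z)\ =\ \frac{1-\sqrt{1-4z^2}}{2z}\,,
\]
whose only singularities on the circle of convergence are the branch points
$z=\pm 1/2$, that is, $\rho$ and $-\rho$ with $\rho=1/2$.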
\begin{lemma}\label{periodic form}
For ${\bf A}\in {\mathbb{DOM}}[z]$ let
\[
p\ :=\ \gcd A\qquad d\ :=\ \min A\qquad q\ :=\ \gcd (A-d).
\]
Then there are ${\bf U}(z)$ and ${\bf V}(z)$ in ${\mathbb R}^{\ge 0}[[z]]$ such that
\begin{thlist}
\item
${\bf A}(z) \ =\ {\bf U}\big(z^p\big)$\quad with $\gcd(U) = 1$
\item
${\bf A}(z) = z^d{\bf V}\big(z^q\big)$ with ${\bf V}(0)\neq 0$ and $\gcd(V)=1$.
\end{thlist}
\end{lemma}
\begin{proof} (Straightforward.)
\end{proof}
\begin{definition}
With the notation of Lemma \ref{periodic form},
${\bf U}(z^p)$ is the \underline{purely periodic form} of ${\bf A}(z)$; and
$z^d{\bf V}\big(z^q\big)$
is the \underline{shift periodic form} of ${\bf A}(z)$.
\end{definition}
The next lemma is quite important---it says that the
$q$ equally spaced points on the circle of convergence are all
dominant singularities of ${\bf T}$.
Our main results depend heavily on the fact that the
equations we consider are such that these are the {\em only} dominant
singularities of ${\bf T}$.
\begin{lemma} \label{min sing}
Let ${\bf T}\in{\mathbb{DOM}}[z]$ have radius of convergence $\rho\in(0,\infty)$
and the shift periodic form $z^d {\bf V}(z^q)$.
Then
\[
\{z : z^q = \rho^q\} \ \subseteq\ {\sf DomSing}({\bf T}).
\]
\end{lemma}
\begin{proof}
Suppose ${z_0}^q = \rho^q$ and suppose
${\bf S}(z)$ is an analytic continuation
of ${\bf T}(z)$ into a neighborhood ${\mathbb D}_\varepsilon(z_0)$ of $z_0$.
Let $\omega := z_0/\rho$. Then $\omega^q = 1$.
The function
${\bf S}_0(z) := {\bf S}(\omega z)/\omega^d$ is an analytic function
on ${\mathbb D}_\varepsilon(\rho)$.
For $z\in {\mathbb D}_\varepsilon(\rho)\cap {\mathbb D}_\rho(0)$ we have
\[
\omega z\in {\mathbb D}_\varepsilon(z_0)\cap {\mathbb D}_\rho(0),
\]
so
\[
{\bf S}_0(z) \ = \ {\bf S}(\omega z)/\omega^d\ =\ {\bf T}(\omega z)/\omega^d\ =\ {\bf T}(z).
\]
This means ${\bf S}_0(z)$ is an analytic continuation of ${\bf T}(z)$ at $z=\rho$,
contradicting Pringsheim's Theorem that $\rho$ is a dominant singularity.
\end{proof}
\subsection{Determining the shift periodic parameters from ${\bf E}$} \label{det shift}
\begin{lemma} \label{periodic constants}
Suppose ${\bf T}(z) = {\bf E}\big(z,{\bf T}(z)\big)$ is a formal recursion that defines
${\bf T} \in {\mathbb{DOM}}[z]$, where ${\bf E}\in{\mathbb{DOM}}[z,w]$.
Let the shift periodic form of ${\bf T}(z)$
be $z^d {\bf V}(z^q)$. Then
\begin{eqnarray*}
d &=&\min(T)\ =\ \min(E_0)\\
q &=&\gcd(T-d)\ =\ \gcd\bigcup_{n\ge 0} \Big(E_n + (n-1)d\Big).
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since ${\bf T}$ is recursively defined by
\[
{\bf T}(z) \ = \ \sum_{n\ge 0} {\bf E}_n(z) {\bf T}(z)^n
\]
one has the first nonzero coefficient of ${\bf T}$ being the first nonzero
coefficient of ${\bf E}_0$, and thus $d = \min(T) = \min(E_0)$.
It is easy to see that we also have $q=\gcd(T-d)$.
Next apply the spectrum operator to the above functional equation to obtain
the set equation
\[
T\ =\ \bigcup_{n\ge 0} E_n + n\odot T,
\]
and thus
\[
T-d\ =\ \bigcup_{n\ge 0} \Big(E_n + (n-1)d + n\odot (T-d)\Big).
\]
Since $q = \gcd(T-d)$ it follows that
$q | r := \gcd \Big(\bigcup_n E_n + (n-1)d\Big)$.
To show that $r | q$, and hence that $r = q$, note that
\[
w =\ \bigcup_{n\ge 0} \Big(E_n + (n-1)d + n\odot w\Big)
\]
is a recursion equation whose unique solution is $w=T-d$.
Furthermore we can find the solution $w$ by iteratively applying
the set operator
\[
\Theta(w)\ :=\ \bigcup_{n\ge 0} \Big(E_n + (n-1)d + n\odot w\Big)
\]
to \O, that is,
\[
T-d\ =\ \lim_{n\rightarrow \infty} \Theta^n(\text{\O}).
\]
Clearly $r\,|\,\text{\O}$, and a simple induction shows that for
every $n$ we have $r\,\big|\,\Theta^n(\text{\O})$.
Thus $r\,|\,(T-d)$, so $r\,|\,q$, giving $r = q$.
This finishes the proof that $q$ is the gcd of the
set $\bigcup_n\Big( E_n + (n-1)d\Big)$.
\end{proof}
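The lemma translates directly into a small computation. Here is a sketch
(ours), which assumes that only finitely many of the ${\bf E}_n$ are nonzero and
represents each spectrum $E_n$ as a plain set of exponents.
\begin{verbatim}
from math import gcd
from functools import reduce

def shift_periodic_constants(E):
    # E maps n to Spec(E_n), the set of exponents i with e_{i,n} != 0
    # (only the finitely many nonzero E_n need to be listed).
    d = min(E[0])
    shifted = set()
    for n, spec_n in E.items():
        shifted |= {i + (n - 1) * d for i in spec_n}
    q = reduce(gcd, shifted)
    return d, q

# (0,2)-binary trees:  E(z,w) = z + z*w^2
print(shift_periodic_constants({0: {1}, 2: {1}}))            # (1, 2)
# plane trees:  E(z,w) = z/(1-w), truncated to the terms z*w^j, j <= 5
print(shift_periodic_constants({n: {1} for n in range(6)}))  # (1, 1)
\end{verbatim}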
\subsection{Determination of the dominant singularities}
The following lemma completely determines the dominant singularities
of ${\bf T}$.
\begin{lemma}\label{dom sing}
Suppose
\begin{thlist}
\item
${\bf T}\in{\mathbb{DOM}}[z]$ has radius of convergence $\rho\in (0,\infty)$
with ${\bf T}(\rho) < \infty$, and
\item
${\bf T}(z) = {\bf E}\big(z,{\bf T}(z)\big)$, where ${\bf E}\in{\mathbb{DOM}}[z,w]$
is nonlinear in $w$ and holomorphic on (the graph of) ${\bf T}$.
\end{thlist}
Let the shift periodic form of ${\bf T}(z)$ be $z^d {\bf V}(z^q)$.
Then
\[
{\sf DomSing}({\bf T})\ =\ \{z : z^q = \rho ^ q\}.
\]
\end{lemma}
\begin{proof}
By the usual application of the implicit function theorem,
if $z$ is a dominant singularity of ${\bf T}$ then
\begin{equation}\label{orig sing cond}
{\bf E}_w\big(z,{\bf T}(z)\big) = 1.
\end{equation}
As
$\rho$ is a dominant singularity we can replace \eqref{orig sing cond} by
\begin{equation}\label{new sing cond}
{\bf E}_w\big(z,{\bf T}(z)\big) \ =\ {\bf E}_w\big(\rho,{\bf T}(\rho)\big).
\end{equation}
Let ${\bf U}(z^p)$
be the purely periodic form of ${\bf E}_w\big(z,{\bf T}(z)\big)$.
As the coefficients of ${\bf E}_w$ are nonnegative it follows that
\eqref{new sing cond} implies
\[
{\sf DomSing}({\bf T})\ \subseteq\ \{z : z^p = \rho^p\}.
\]
We know from Lemma \ref{min sing} that
\[
\{z : z^q = \rho^q\}\ \subseteq\ {\sf DomSing}({\bf T}),
\]
consequently $q|p$.
To show that $p \leq q$ first note that if $m\in {\mathbb N}$ then
\[
\gcd(m+T)\,\big|\,q.
\]
For if $r=\gcd(m+T)$ then for any $n\in T$ we have
$r|(m+n)$ and $r|(m+d)$. Consequently $r|(n-d)$, so $r|(T-d)$,
and thus $r|q$.
Since
\[
{\bf U}(z^p)\ =\ {\bf E}_w\big(z,{\bf T}(z)\big)\ =\ \sum_{n\ge 1} {\bf E}_n(z) n
{\bf T}(z)^{n-1},
\]
applying the spectrum operator gives
\begin{eqnarray*}
{\sf Spec}\big({\bf U}(z^p)\big)
& =& \bigcup_{n\ge 1} E_n + (n-1)\odot T.
\end{eqnarray*}
Choose $n\ge 2$ such that $E_n\neq \text{\O}$ and choose $a\in E_n$. Then
\begin{eqnarray*}
{\sf Spec}\big({\bf U}(z^p)\big)
& \supseteq& E_n + (n-1)\odot T\\
& \supseteq& \big(a+ (n-2)d\big) + T,
\end{eqnarray*}
so
taking the $\gcd$ of both sides gives
\begin{eqnarray*}
p
&=&\gcd {\sf Spec}\big({\bf U}(z^p)\big)\\
&\le& \gcd\Big(\big(a+ (n-2)d\big) + T\Big)\,\Big|\,q.
\end{eqnarray*}
With $p =q$ it follows that we have proved the dominant
singularities are as claimed.
\end{proof}
\subsection{Solutions that converge at the radius of convergence}
The equations $w=\Theta(w)$ that we are pursuing
will have a solution ${\bf T}$ that converges at the finite and positive
radius of convergence $\rho_{\bf T}$.
\begin{definition}
Let
\[
{\mathbb{DOM}}^\star[z]\ :=\ \{{\bf T} \in {\mathbb{DOM}}[z] : \rho_{\bf T}\in (0,\infty),\ {\bf T}(\rho_{\bf T})<\infty\}.
\]
\end{definition}
\subsection{A basic theorem}
The next theorem summarizes what we need from the preceding
discussions
to show that ${\bf T} = {\bf E}(z,{\bf T})$ leads to
$\pmb{(\star)}$ holding for
the coefficients $t_n$ of ${\bf T}$.
\begin{theorem} \label{basic thm}
Suppose ${\bf T}\in{\mathbb{DOM}}[z]$ and ${\bf E}\in{\mathbb{DOM}}[z,w]$
are such that
\begin{thlist}
\item
${\bf T}(z) = {\bf E}\big(z,{\bf T}(z)\big)$ holds as an identity between formal
power series
\item
${\bf T}\in{\mathbb{DOM}}^\star[z]$
\item
${\bf E}(z,w)$ is nonlinear in $w$
\item
${\bf E}_z\neq 0$
\item
$(\exists \varepsilon > 0)\,\Big({\bf E}\big(\rho+\varepsilon,
{\bf T}(\rho)+\varepsilon\big)\,<\,\infty\Big)$, where $\rho:=\rho_{\bf T}$.
\end{thlist}
Then
\[
t(n)\ \sim\ q\sqrt
{\frac{\rho{\bf E}_z\big(\rho,{\bf T}(\rho)\big)}
{2\pi {\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)}}
\rho^{-n} n^{-3/2}
\quad\text{for } n\equiv d \mod q.
\]
Otherwise $t(n)=0$.
Thus $\pmb{(\star)}$ holds on $\{n : t(n) > 0\}$.
\end{theorem}
\begin{proof}
By Corollary \ref{crucial2},
Corollary \ref{MultiTransfer} and Lemma \ref{dom sing}.
\end{proof}
\section{Recursion Equations using Operators}
Throughout the theoretical section, $\S\,$\ref{theor sect}, we only considered
recursive equations based on elementary operators ${\bf E}(z,w)$.
Now we want to expand beyond these to include recursions that are based
on popular combinatorial constructions
used with classes of unlabelled structures. As an umbrella concept
to create these various recursions we introduce the notion of
{\em operators} $\Theta$.
Actually if one is only interested in working with classes of labelled
structures then it seems that the recursive equations based on
elementary power series are all that one needs.
However, when working with classes of unlabelled structures,
the natural way of writing down an equation corresponding to a recursive
specification is in terms of combinatorial operators like ${\sf MSet}$
and ${\sf Seq}$. The resulting equation $w=\Theta(w)$, if properly designed,
will have a unique solution ${\bf T}(z)$ whose coefficients are recursively
defined, and this solution will likely be needed to construct the
translation of $w=\Theta(w)$ to an elementary recursion
$w={\bf E}(z,w)$, a translation that is needed in order to apply the
theoretical machinery of $\S\,$\ref{theor sect}.
\subsection{Operators} \label{comb op}
The mappings on generating series corresponding to combinatorial
constructions are called operators. But we want to go beyond the
obvious and include complex combinations of elementary and
combinatorial operators. For this purpose we introduce
a very general definition of an operator.
\begin{definition}
An \underline{operator} is a mapping
$\Theta: {\mathbb{DOM}}[z]\,\rightarrow\, {\mathbb{DOM}}[z]$\,.
\end{definition}
Note that operators $\Theta$ act on ${\mathbb{DOM}}[z]$\,, the set of formal
power series with {\em nonnegative coefficients} and {\em constant term 0}.
As mentioned before, the constraint that the constant terms of the power
series be 0 makes for an elegant theory because compositions of operators
are always defined.
A primary concern, as in the original work of P\'olya,
is to be able to handle combinatorial operators $\Theta$ that,
when acting on ${\bf T}(z)$, introduce terms like ${\bf T}(z^2), {\bf T}(z^3)$ etc.
For such operators it is natural to use power series ${\bf T}(z)$ with
integer coefficients as one is usually working in the context of
ordinary generating functions. In such cases one has
$\rho\le 1$ for the radius of convergence of ${\bf T}$,
provided ${\bf T}$ is not a polynomial.
\begin{definition}
An \underline{integral operator} is a mapping
$\Theta: {\mathbb{IDOM}}[z]\,\rightarrow\, {\mathbb{IDOM}}[z]$\,,
where
${\mathbb{IDOM}}[z]\ :=\ \big\{{\bf A}\in{\mathbb N}[[z]] : {\bf A}(0) = 0\big\}$,
the set of power series with nonnegative integer coefficients
and constant term zero.
\end{definition}
\begin{remark}
Many of the lemmas, etc, that follow have both a version for general
operators and a version for integral operators. We will usually just
state and prove the general version, leaving the completely parallel
integral version as a routine exercise.
\end{remark}
\subsection{The arithmetical operations on operators} \label{arith sec}
The operations of addition, multiplication, positive scalar
multiplication and composition are defined on the set of operators
in the natural manner:
\begin{definition}
\begin{eqnarray*}
(\Theta_1 \,+\, \Theta_2)({\bf T})& :=& \Theta_1({\bf T}) \,+ \,\Theta_2({\bf T})\\
(\Theta_1 \,\cdot\, \Theta_2)({\bf T})& :=& \Theta_1({\bf T}) \,\cdot \,\Theta_2({\bf T})\\
(c\cdot \Theta)({\bf T})&:=&c\cdot \Theta({\bf T})\\
(\Theta_1 \,\circ\, \Theta_2)({\bf T})& :=& \Theta_1\big(\Theta_2({\bf T})\big)\,,
\end{eqnarray*}
where the operations on the right side are the operations of formal power series.
A set of operators is \underline{closed}
if it is closed under the four arithmetical operations.
\end{definition}
Note that when working with integral operators the scalars should be positive
integers.
The operation of addition corresponds to the construction {\em disjoint union}
and the operation of multiplication to the construction {\em product}, for both
the unlabelled and the labelled case.
Clearly the set of all [integral] operators is closed.
\subsection{Elementary operators}
In a most natural way we can think of elementary power
series ${\bf E}(z,w)$ as operators.
\begin{definition}
Given ${\bf E}(z,w)\in{\mathbb{DOM}}[z,w]$ let the associated \underline{elementary
operator} be given by
\[
{\bf E} : {\bf T}\mapsto{\bf E}(z,{\bf T})\quad\text{for }{\bf T}\in{\mathbb{DOM}}[z].
\]
\end{definition}
Two particular kinds of elementary operators are as follows.
\begin{definition}
Let ${\bf A}\in{\mathbb{DOM}}[z]$.
\begin{thlist}
\item
The \underline{constant operator} $\Theta_{\bf A}$ is given by
$\Theta_{\bf A}:{\bf T}\mapsto {\bf A}$ for ${\bf T}\in{\mathbb{DOM}}[z]$, and
\item
the \underline{simple operator} ${\bf A}(w)$ maps ${\bf T}\in{\mathbb{DOM}}[z]$ to
the power series that is the formal expansion of
\[
\sum_{n\ge 1}a_n \Big(\sum_{j\ge 1} t_j z^j\Big)^n\,.
\]
\end{thlist}
\end{definition}
\subsection{Open elementary operators}
\begin{definition}
Given $a,b>0$,
an elementary operator ${\bf E}(z,w)$ is \underline{open at $(a,b)$} if
\[
(\exists \varepsilon > 0)\Big({\bf E}(a+\varepsilon,b+\varepsilon)<\infty\Big).
\]
${\bf E}$ is \underline{open} if it is open at every $(a,b)$ with
$a,b>0$ for which ${\bf E}(a,b)<\infty$.
\end{definition}
Eventually we will be wanting an elementary operator to
be open at $\big(\rho,{\bf T}(\rho)\big)$ in order
to invoke the Weierstra{\ss} Preparation Theorem.
\begin{lemma} \label{cs open}
Suppose ${\bf A}\in{\mathbb{DOM}}[z]$ and $a,b>0$. \\
The constant operator $\Theta_{\bf A}$
\begin{thlist}
\item
is open at $(a,b)$ iff
$ a<\rho_{\bf A}$;
\item
it is open iff $\rho_{\bf A}>0 \ \Rightarrow\ {\bf A}(\rho_{\bf A})=\infty$.
\end{thlist}
\noindent
The simple operator ${\bf A}(w)$
\begin{thlist}
\item[c]
is open at $(a,b)$ iff
$ b<\rho_{\bf A}$;
\item[d]
it is open iff $\rho_{\bf A}>0 \ \Rightarrow\ {\bf A}(\rho_{\bf A})=\infty$.
\end{thlist}
\end{lemma}
\begin{proof}
$\Theta_{\bf A}$ is open at $(a,b)$ iff for some $\varepsilon > 0$
we have ${\bf A}(a+\varepsilon)<\infty$. This is clearly
equivalent to $a<\rho_{\bf A}$.
Thus if $\rho_{\bf A}>0$ and ${\bf A}(\rho_{\bf A})<\infty$ then
$\Theta_{\bf A}$ is not open at $(\rho_{\bf A},b)$ for any $b>0$,
hence it is not open. Conversely, if $\Theta_{\bf A}$ is not open then
there are $a,b>0$ with ${\bf A}(a)<\infty$ but
${\bf A}(a+\varepsilon)=\infty$
for every $\varepsilon>0$; this forces $a = \rho_{\bf A}$, so $\rho_{\bf A}>0$
and ${\bf A}(\rho_{\bf A})<\infty$.
The proof for the simple operator ${\bf A}(w)$ is similar.
\end{proof}
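For example, the constant operator given by ${\bf A}(z)=\sum_{n\ge 1}z^n$ has
$\rho_{\bf A}=1$ and ${\bf A}(1)=\infty$, so it is open; whereas the constant operator
given by ${\bf A}(z)=\sum_{n\ge 1}z^n/n^2$ has $\rho_{\bf A}=1$ but
${\bf A}(1)=\pi^2/6<\infty$, so it is not open (it fails to be open at $(1,b)$
for every $b>0$).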
\subsection{Operational closure of the set of open ${\bf E}$}
\begin{lemma} \label{closure of open E}
Let $a,b>0$.
\begin{thlist}
\item
The set of elementary operators open at $(a,b)$
is closed under the arithmetical
operations of
scalar multiplication, addition and multiplication.
If\, ${\bf E}_2$ is open at $(a,b)$ and ${\bf E}_1$ is
open at $\big(a,{\bf E}_2(a,b)\big)$ then
${\bf E}_1\big(z,{\bf E}_2(z,w)\big)$ is open at $(a,b)$.
\item
The set of open elementary operators
is closed.
\end{thlist}
\end{lemma}
\begin{proof}
Let $c>0$ and let ${\bf E},{\bf E}_1,{\bf E}_2$ be elementary operators
open at $(a,b)$.
Then
\small
\begin{eqnarray*}
(\exists \varepsilon>0)\,{\bf E}\big(a+\varepsilon,b+\varepsilon\big)<\infty
&\Rightarrow&(\exists \varepsilon>0)\,(c{\bf E})\big(a+\varepsilon,b+\varepsilon\big)<\infty\\
(\exists \varepsilon_1>0)\,{\bf E}_1\big(a+\varepsilon_1,b+\varepsilon_1\big)<\infty
&\text{and}&
(\exists \varepsilon_2>0)\,{\bf E}_2\big(a+\varepsilon_2,b+\varepsilon_2\big)<\infty
\\
&\Rightarrow&(\exists \varepsilon>0)
{\bf E}_i\big(a+\varepsilon,b+\varepsilon\big)<\infty\
\text{for}\ i=1,2\\
&\Rightarrow&(\exists \varepsilon>0)\,
\big({\bf E}_1 + {\bf E}_2\big)(a+\varepsilon,b+\varepsilon)<\infty\\
(\exists \varepsilon_1>0)\,{\bf E}_1\big(a+\varepsilon_1,b+\varepsilon_1\big)<\infty
&\text{and}&
(\exists \varepsilon_2>0)\,{\bf E}_2\big(a+\varepsilon_2,b+\varepsilon_2\big)<\infty
\\
&\Rightarrow&(\exists \varepsilon>0)
{\bf E}_i\big(a+\varepsilon,b+\varepsilon\big)<\infty\
\text{for}\ i=1,2\\
&\Rightarrow&(\exists \varepsilon>0)\,
\big({\bf E}_1 {\bf E}_2\big)(a+\varepsilon,b+\varepsilon)<\infty.
\end{eqnarray*}
\normalsize
Now suppose ${\bf E}_2$ is open at $(a,b)$ and ${\bf E}_1$ is
open at $\big(a,{\bf E}_2(a,b)\big)$. Then
\small
\begin{eqnarray*}
(\exists \varepsilon_2>0)\,{\bf E}_2\big(a+\varepsilon_2,b+\varepsilon_2\big)<\infty
&\text{and}&
(\exists \varepsilon_1>0)\,
{\bf E}_1\big(a+\varepsilon_1,{\bf E}_2(a,b)+\varepsilon_1\big)<\infty\\
&\Rightarrow&
(\exists \varepsilon>0)\,
{\bf E}_1\big(a+\varepsilon,
{\bf E}_2(a+\varepsilon,b+\varepsilon)+\varepsilon\big)<\infty.
\end{eqnarray*}
\normalsize
This completes the proof for (a). Part (b) is proved similarly.
\end{proof}
The {\em base} operators that we will use as a starting point
are the elementary operators ${\bf E}$ and all possible restrictions
$\Theta_{\mathbb M}$ of the standard operators $\Theta$ of combinatorics
discussed below.
More complex operators called {\em composite} operators will be
fabricated from these base operators by using the familiar
{\em arithmetical operations} of addition, multiplication, scalar multiplication
and composition discussed in $\S\,$\ref{arith sec}.
\subsection{The standard operators on ${\mathbb{DOM}}[z]$} \label{std op sec}
Following the lead of Flajolet and Sedgewick \cite{Fl:Se} we adopt as our
\emph{standard operators}
${\sf MSet}$ (multiset), ${\sf Cycle}$ (undirected cycle),
${\sf DCycle}$ (directed cycle) and ${\sf Seq}$ (sequence),
corresponding to the constructions by the same names.\footnote{Flajolet and Sedgewick
also include
${\sf Set}$ as a standard operator, but we will not do so since, as mentioned in the second paragraph of $\S$\,\ref{theor sect}, for a given ${\bf T}$, the
series ${\bf G}(z,w)$ associated with ${\sf Set}({\bf T})$ may very well not be elementary.
For a discussion of mixed sign equations see $\S\,$\ref{mixed}.
}
These operators have well known analytic expressions, for example,
\[
\begin{array}{l l}
\text{unlabelled multiset operator}&
1+{\sf MSet}({\bf T})\ =\ \exp\Big(\sum_{j\ge 1} {\bf T}(z^j)/j\Big)\\
\text{labelled multiset operator}&
\widehat{\sf MSet}({\bf T})\ =\ \sum_{j\ge 1}{\bf T}(z)^j/j! \ =\ e^{{\bf T}(z)} -1
\end{array}
\]
\subsection{Restrictions of standard operators} \label{restriction}
Let ${\mathbb M}\subseteq{\mathbb P}$\,.
(We will always assume ${\mathbb M}$ is nonempty.)
The \emph{${\mathbb M}$-restriction} of
a standard construction $\Delta$ applied to a class of trees means that
one only takes those forests
in $\Delta({\mathcal T})$ where the number of trees is in ${\mathbb M}$\,.
Thus ${\sf MSet}_{\{2,3\}}({\mathcal T})$ takes all multisets of
two or three trees from ${\mathcal T}$\,.
The P\'olya {\em cycle index polynomials}
${\bf Z}({\sf H},z_1,\ldots,z_m)$
are very convenient for expressing such operators; such a polynomial
is connected with a permutation group ${\sf H}$ acting on an $m$-element
set (see Harary and Palmer \cite{H:P}, p.~35). For $\sigma\in{\sf H}$
let $\sigma_j$ be the number of $j$-cycles in a decomposition of
$\sigma$ into disjoint cycles. Then
\[
{\bf Z}({\sf H},z_1,\ldots,z_m)\ :=\ \frac{1}{|{\sf H}|}\sum_{\sigma\in {\sf H}}
\prod_{j=1}^m {z_j}^{\sigma_j}.
\]
The only groups we consider are the following:
\begin{thlist}
\item
${\sf S}_m$ is the {\em symmetric group} on $m$ elements,
\item
${\sf D}_m$ the {\em dihedral group} of order $2m$,
\item
${\sf C}_m$ the {\em cyclic group} of order $m$, and
\item
${\sf Id}_m$ the one-element {\em identity group} on $m$ elements.
\end{thlist}
The \emph{${\mathbb M}$-restrictions} of the standard operators are
each of the form $\Delta_{\mathbb M}\,:=\,\sum_{m\in{\mathbb M}} \Delta_m$ where
$\Delta\in \{{\sf MSet},{\sf DCycle},{\sf Cycle},{\sf Seq}\}$ and
$\Delta_m$ is given by:
{
\small
\[
\begin{array}{l l| l l}
{\sf operator}&{\sf unlabelled\ case}&{\sf operator}&{\sf labelled\ case}\\
{\sf MSet}_m({\bf T})
& {\bf Z}\big({\sf S}_m,{\bf T}(z),\ldots,{\bf T}(z^m)\big)
&\widehat{\sf MSet}_m({\bf T})
&(1/m!){\bf T}(z)^m\\
{\sf Cycle}_m({\bf T})
& {\bf Z}\big({\sf D}_m,{\bf T}(z),\ldots,{\bf T}(z^m)\big)
&\widehat{\sf Cycle}_m({\bf T})
& (1/2m){\bf T}(z)^m\\
{\sf DCycle}_m({\bf T})
& {\bf Z}\big({\sf C}_m,{\bf T}(z),\ldots,{\bf T}(z^m)\big)
&\widehat{\sf DCycle}_m({\bf T})
& (1/m){\bf T}(z)^m\\
{\sf Seq}_m({\bf T})
& {\bf Z}\big({\sf Id}_m,{\bf T}(z),\ldots,{\bf T}(z^m)\big)
&\widehat{\sf Seq}_m({\bf T})
& {\bf T}(z)^m\\
\end{array}
\]
\normalsize
}
Note that the labelled version of $\Delta_m$ is just
the first term of the cycle index polynomial for the unlabelled
version,
and the sequence operators are the same in both cases.
We write simply ${\sf MSet}$ for ${\sf MSet}_{\mathbb M}$ if ${\mathbb M}$ is ${\mathbb P}$, etc.
In the labelled case the standard operators (with restrictions) are
simple operators, whereas in the unlabelled case only $\Delta_{\{1\}}$ and the ${\sf Seq}_{\mathbb M}$ are simple.
The other standard operators in the unlabelled case are not elementary
because of the presence of terms ${\bf T}(z^j)$ with $j>1$ when ${\mathbb M}\neq\{1\}$.
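For example, ${\sf MSet}_2({\bf T})\,=\,{\bf Z}\big({\sf S}_2,{\bf T}(z),{\bf T}(z^2)\big)
\,=\,\frac{1}{2}\big({\bf T}(z)^2+{\bf T}(z^2)\big)$, while the labelled version
$\widehat{\sf MSet}_2({\bf T})=\frac{1}{2}{\bf T}(z)^2$ keeps only the leading term of the
cycle index polynomial.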
\subsection{Examples of recursion equations}
Table \ref{std examp} gives the recursion equations
for the generating series
of several well-known classes of trees.
\begin{table}[h]
\small
\[
\begin{array}{l @{\quad} l}
\text{Recursion Equation}&\text{Class of Rooted Trees}\\
\hline
w\,=\,z\,+\,z w&\text{chains}\\
w\,=\,z\,+\,z {\sf Seq}(w)&\text{planar}\\
w\,=\,m z\,+\,m z {\sf Seq}(w)&\text{$m$-flagged
planar\footnotemark}\\
w\,=\,z e^w&\text{labelled}\\
w\,=\,z\,+\,z {\sf MSet}(w)&\text{unlabelled }\\
w\,=\,z\,+\,z {\sf MSet}_{\{2,3\}}(w)&\text{unlabelled (0,2,3)-}\\
w\,=\,z\,+\,z{\sf Seq}_2(w)&\text{unlabelled binary planar}\\
w\,=\,z\,+\,z{\sf MSet}_2(w)&\text{unlabelled binary }\\
w\,=\,z\,+\,z w^2&\text{labelled binary }\\
w\,=\,z\,+\,z\big(w\,+\,{\sf MSet}_2(w)\big)&\text{unlabelled unary-binary }\\
w\,=\,z\,+\,z{\sf MSet}_r(w)&\text{unlabelled $r$-regular }\\
\hline
\end{array}
\]
\caption{Familiar examples of recursion equations\label{std examp}}
\end{table} \footnotetext{\emph{$m$-flagged}
means one can attach any subset of $m$ given flags to each vertex.
This is just a colorful way of saying that the tree structures are augmented
with $m$-unary predicates $U_1,\ldots,U_m$\,, and each can hold on any subset
of a tree independently of where the others hold. }
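As a quick computational illustration (not part of the formal development; the code and
its helper names are ours), the following sketch computes the first coefficients of the
solution of the unlabelled-tree equation $w=z+z\,{\sf MSet}(w)$ from Table \ref{std examp}.
It iterates the equation coefficient by coefficient (a procedure justified by
Lemma \ref{retro lem} below), using the analytic form
$1+{\sf MSet}({\bf T})=\exp\big(\sum_{j\ge 1}{\bf T}(z^j)/j\big)$ of the unlabelled
multiset operator and exact rational arithmetic.
\begin{verbatim}
from fractions import Fraction

N = 10  # number of coefficients to compute

def series_exp(a):
    # exp of a truncated series with a[0] = 0, via n*b_n = sum_k k*a_k*b_{n-k}
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(Fraction(k) * a[k] * b[n - k] for k in range(1, n + 1)) / n
    return b

def theta(t):
    # Theta(T) = z * exp( sum_{j>=1} T(z^j)/j ), truncated to N coefficients
    a = [Fraction(0)] * (N + 1)
    for j in range(1, N + 1):
        for n in range(1, N // j + 1):
            a[j * n] += Fraction(1, j) * t[n]   # contribution of T(z^j)/j
    e = series_exp(a)
    return [Fraction(0)] + e[:N]                # the final multiplication by z

t = [Fraction(0)] * (N + 1)                     # start at the zero series
for _ in range(N):
    t = theta(t)

print([int(c) for c in t[1:]])   # 1, 1, 2, 4, 9, 20, 48, 115, 286, 719
\end{verbatim}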
\subsection{Key properties of operators}
Now we give a listing of the various properties of abstract
operators that are needed to prove a universal law for recursion equations.
The first question to be addressed is
``Which properties does $\Theta$ need in order to guarantee that
$w=\Theta(w)$ has a solution?''
\subsection{Retro operators}
There is a simple natural property of an operator $\Theta$ that guarantees
an equation $w = \Theta(w)$ has a {\em unique} solution that is
determined by a recursive computation of the coefficients, namely
$\Theta$ calculates, given ${\bf T}$, the $n$th coefficient of
$\Theta({\bf T})$ solely on the basis of the values of $t(1),\ldots,t(n-1)$.
\begin{definition}
An operator $\Theta$ is
\underline{retro}
if there is a sequence $\sigma$ of functions such that for
${\bf B}\,=\,\Theta({\bf A})$ one has
$b_n\,=\,\sigma_n(a_1,\ldots,a_{n-1})$\,, where $\sigma_1$ is a
constant\,.
\end{definition}
There is a strong temptation to call such $\Theta$ \emph{recursion}
operators since they will be used to recursively define
generating series. But without the context of a recursion equation
there is nothing recursive
about $b_n$ being a function of $a_1,\ldots,a_{n-1}$\,.
\begin{lemma}\label{retro lem}
A retro operator $\Theta$ has a unique fixpoint in ${\mathbb{DOM}}[z]$,
that is,
there is a unique power series ${\bf T}\,\in\,{\mathbb{DOM}}[z]$ such that
$
{\bf T}\,=\, \Theta({\bf T})\,.
$
We can obtain ${\bf T}$ by an iterative application of $\Theta$
to the constant power series $0$:
\[
{\bf T}\ = \ \lim_{n\rightarrow \infty} \Theta^n(0)\,.
\]
If $\Theta$ is an integral retro operator then ${\bf T}\in{\mathbb{IDOM}}[z]$.
\end{lemma}
\begin{proof}
Let $\sigma$ be the sequence of functions that witness the
fact that $\Theta$ is retro.
If ${\bf T}=\Theta({\bf T})$ then
\begin{eqnarray*}
t(1)&=&\sigma_1\\
t(n)&=&\sigma_n\big(t(1),\ldots,t(n-1)\big)\quad\text{for }n>1.
\end{eqnarray*}
Thus there is at most one possible fixpoint ${\bf T}$ of $\Theta$; and
these two equations show how to recursively find such a ${\bf T}$.
A simple argument shows that $\Theta^{n+k}(0)$ agrees with $\Theta^n(0)$
on the first $n$ coefficients, for all $k\ge 0$\,. Thus
$\lim_{n\rightarrow \infty} \Theta^n(0)$ is a fixpoint,
and hence {\em the} fixpoint\,. If $\Theta$ is also integral then
each stage $\Theta^n(0)\in{\mathbb{IDOM}}[z]$, so ${\bf T}\in{\mathbb{IDOM}}[z]$.
\end{proof}
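To see the iteration at work on the simplest nontrivial example (the code and helper
names are ours), take the retro operator $\Theta(w)=z+zw^2$ from Table \ref{std examp};
each iterate $\Theta^n(0)$ fixes at least one more coefficient, exactly as in the proof.
\begin{verbatim}
N = 8  # work with the first N coefficients

def theta(t):
    # the retro operator Theta(w) = z + z*w^2, truncated to N coefficients
    out = [0] * (N + 1)
    for n in range(1, N + 1):
        conv = sum(t[i] * t[n - 1 - i] for i in range(1, n - 1))  # [z^(n-1)] w^2
        out[n] = (1 if n == 1 else 0) + conv
    return out

t = [0] * (N + 1)           # the zero series
for step in range(1, N + 1):
    t = theta(t)
    print(step, t[1:])      # Theta^step(0); the first `step` coefficients are final
# final coefficients: 1, 0, 1, 0, 2, 0, 5, 0  (Catalan numbers at the odd indices)
\end{verbatim}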
Thus if $\Theta$ is a retro operator then the functional equation
$ w = \Theta(w)$ has a unique solution ${\bf T}(z)$.
Although the end goal is to have an equation $w=\Theta(w)$ with $\Theta$ a retro
operator, for the intermediate stages it is often more desirable to work with
{\em weakly} retro operators.
\begin{definition}
An operator $\Theta$ is
\underline{weakly retro}
if there is a sequence $\sigma$ of functions such that for
${\bf B}\,=\,\Theta({\bf A})$ one has
$b_n\,=\,\sigma_n(a_1,\ldots,a_n)$\,.
\end{definition}
\begin{lemma} \label{retro is full}
\begin{thlist}
\item
The set of retro operators is closed.
\item
The set of weakly retro operators is closed and
includes all elementary operators and all restrictions of standard operators.
\item
If $\Theta$ is a weakly retro operator then $z \Theta$ and
$w\Theta$ are both retro operators.
\end{thlist}
\end{lemma}
\begin{proof}
For (a),
given retro operators $\Theta,\Theta_1,\Theta_2$, a positive constant $c$ and a
power series ${\bf T}\in{\mathbb{DOM}}[z]$, we have
\begin{eqnarray*}
{[z^n]\,}(c \Theta)({\bf T}) &=&c\big([z^n]\,\Theta({\bf T})\big)\\
{[z^n]\,}(\Theta_1+\Theta_2)({\bf T})& =& [z^n]\,\Theta_1({\bf T}) + [z^n]\,\Theta_2({\bf T})\\
{[z^n]\,}(\Theta_1\Theta_2)({\bf T})
& =& \sum_{j=1}^{n-1}[z^j]\,\Theta_1({\bf T}) [z^{n-j}]\Theta_2({\bf T})\\
{[z^n]\,}(\Theta_1\circ\Theta_2)({\bf T})
& =& \sigma_n
\big([z^1]\Theta_2({\bf T}),\ldots,[z^n]\,\Theta_2({\bf T})\big),
\end{eqnarray*}
where $\sigma$ is the sequence of functions that witness the fact that
$\Theta_1$ is a retro operator.
In each case it is clear that the value of the right side depends only
on the first $n-1$ coefficients of ${\bf T}$. Thus the set of retro
operators is closed.
For (b) use the same proof as in (a), after changing the initial operators to
weakly retro operators, to show that the set of weakly retro operators is closed.
For an elementary operator ${\bf E}(z,w)$ and power series ${\bf T}\in{\mathbb{DOM}}[z]$
we have, after writing ${\bf E}(z,w)$ as
$\sum_{i\ge 0} {\bf E}_i(z) w^i$,
\begin{eqnarray*}
{[z^n]\,}{\bf E}\big(z,{\bf T}(z)\big)
&=&[z^n]\, \sum_{j\ge 0} {\bf E}_j(z) {\bf T}(z)^j\\
&=&\sum_{j\ge 0} \sum_{i= 0}^n e_{ij} \big[z^{n-i}\big]{\bf T}(z)^j.
\end{eqnarray*}
The last expression clearly depends only on the first $n$
coefficients of ${\bf T}(z)$.
Thus all elementary operators are weakly retro operators.
Let ${\bf Z}({\sf H},z_1,\ldots,z_m)$ be a cycle index polynomial.
Then for ${\bf T}\in{\mathbb{DOM}}[z]$ one has
\[
{[z^n]\,}{\bf T}(z^j)\ =\
\begin{cases}
0&\text{if $j$ does not divide $n$}\\
t(n/j)&\text{if }j|n.
\end{cases}
\]
Thus the operator that maps ${\bf T}(z)$ to ${\bf T}(z^j)$ is
a weakly retro operator.
The set of weakly retro operators is closed,
so the operator mapping ${\bf T}$ to
${\bf Z}\big({\sf H},{\bf T}(z),\ldots,{\bf T}(z^m)\big)$ is weakly retro.
Now every restriction $\Delta_{\mathbb M}$ of a standard operator is a
(possibly infinite) sum of such instances of cycle index polynomials;
thus they are also weakly retro.
For (c) note that
\begin{eqnarray*}
{[z^n]}\,\big(z \Theta({\bf T})\big)
&=&[z^{n-1}]\,\Theta({\bf T})\\
{[z^n]}\,\big({\bf T} \Theta({\bf T})\big)
&=&\sum_{j=1}^{n-1} t_j [z^{n-j}]\,\Theta({\bf T}),
\end{eqnarray*}
and in both cases the right side depends only on $t_1,\ldots,t_{n-1}$.
\end{proof}
\begin{lemma}\label{retro form}
\begin{thlist}
\item
An elementary operator ${\bf E}(z,w)=\sum_{ij} e_{ij}z^iw^j$ is
retro iff $e_{01} = 0$.
\item
A restriction $\Delta_{\mathbb M}$ of a standard operator $\Delta$ is
retro iff $1\notin{\mathbb M}$.
\end{thlist}
\end{lemma}
\begin{proof}
For (a) let ${\bf T}\in{\mathbb{DOM}}[z]$. Then
\[
[z^n]\,{\bf E}\big(z,{\bf T}(z)\big)\ =\
\sum_{j\ge 0} \sum_{i= 0}^n e_{ij} \big[z^{n-i}\big]{\bf T}(z)^j,
\]
which does not depend on $t(n)$ iff $e_{01}=0$.
For (b) one only has to look at the definition of the P\'olya cycle index
polynomials.
\end{proof}
The property of being retro for an elementary ${\bf E}(z, w)$ is very closely related to the necessary and sufficient conditions for an equation $w = {\bf E}(z,w)$ to give a recursive definition of a function ${\bf T}\in{\mathbb{DOM}}[z]$ that is not 0. To see this rewrite the equation in the form
\begin{eqnarray*}
(1-e_{01}) w
& =& \big(e_{10}z + e_{20}z^2 + \cdots \big) +
\big(e_{11}z + \cdots \big) w + \cdots.
\end{eqnarray*}
(We know that $e_{00}=0$ as ${\bf E}$ is elementary.)
So the first restriction needed on ${\bf E}$ is that $e_{01}< 1$.
Suppose this condition on $e_{01}$ holds.
Dividing through by $1-e_{01}$ gives an {\em equivalent} equation
with no occurrence of the linear term $z^0w^1$ on the right hand side,
thus leading to the use of
$
e_{01} = 0
$
rather than the apparently weaker condition $e_{01} < 1$.
To guarantee a nonzero solution we also need that
$
{\bf E}_0(z) \neq 0,
$
and by the recursive construction these conditions suffice.
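For instance, the equation $w=z+\frac{1}{2}w+zw^2$ has $e_{01}=\frac{1}{2}<1$ and
${\bf E}_0(z)=z\neq 0$; dividing through by $1-\frac{1}{2}$ produces the equivalent
equation $w=2z+2zw^2$, which is retro and determines its solution coefficient
by coefficient.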
Now that we have a condition, being retro, to guarantee that
$w=\Theta(w)$ is a recursion equation with a unique solution
$w={\bf T}$, the next goal is to find simple conditions on
$\Theta$ that ensure this solution will have the desired
asymptotics.
\subsection{Dominance between power series}
It is useful to have a notation to indicate that
the coefficients of one series dominate those of another.
\begin{definition}
For power series ${\bf A},{\bf B}\,\in\,{\mathbb{DOM}}[z]$
we say ${\bf B}$ \underline{dominates} ${\bf A}$\,,
written ${\bf A}\,\unlhd\,{\bf B}$\,, if $a_j\,\le\, b_j$ for all $j$.
Likewise for power series ${\bf G},{\bf H}\,\in\,{\mathbb{DOM}}[z,w]$
we say ${\bf H}$ \underline{dominates} ${\bf G}$\,,
written ${\bf G}\,\unlhd\,{\bf H}$\,, if $g_{ij}\,\le\, h_{ij}$ for all $i,j$.
\end{definition}
\begin{lemma} \label{dom series}
The dominance relation $\unlhd$ is a partial ordering on ${\mathbb{DOM}}[z]$
preserved by the arithmetical operations:
for ${\bf T}_1,{\bf T}_2,{\bf T}\in{\mathbb{DOM}}[z]$ and a constant $c>0$,
if
${\bf T}_1\unlhd{\bf T}_2$ then
\begin{eqnarray*}
c\cdot{\bf T}_1&\unlhd& c\cdot{\bf T}_2\\
{\bf T}_1 + {\bf T}& \unlhd& {\bf T}_2 + {\bf T}\\
{\bf T}_1 \cdot {\bf T}& \unlhd& {\bf T}_2 \cdot {\bf T}\\
{\bf T}_1 \circ{\bf T}& \unlhd& {\bf T}_2 \circ {\bf T}\\
{\bf T} \circ{\bf T}_1& \unlhd& {\bf T} \circ {\bf T}_2.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\subsection{The dominance relation on the set of operators}
\begin{definition}
\begin{thlist}
\item
For operators $\Theta_1,\Theta_2$
we say
$\Theta_2$ \underline{dominates} $\Theta_1$\,,
symbolically $\Theta_1\,{\,\sqsubseteq\,}\,\Theta_2$\,,
if for any ${\bf T}\in {\mathbb{DOM}}[z]$ one has
$\Theta_1({\bf T})\,\unlhd\,\Theta_2({\bf T})$\,.
\item
For integral operators $\Theta_1,\Theta_2$
we say
$\Theta_2$ \underline{dominates} $\Theta_1$\,,
symbolically $\Theta_1\,{\,\sqsubseteq_{\mathbf I}\,}\,\Theta_2$\,,
if for any ${\bf T}\in {\mathbb{IDOM}}[z]$ one has
$\Theta_1({\bf T})\,\unlhd\,\Theta_2({\bf T})$\,.
\end{thlist}
\end{definition}
As usual we continue our discussion mentioning only the general
operators when the integral case is exactly parallel.
It is straightforward to check that the dominance relation ${\,\sqsubseteq\,}$ is a
partial ordering on the set of operators which is preserved by
addition, multiplication and positive scalar multiplication.
Composition on the right also preserves ${\,\sqsubseteq\,}$, that is, for operators
$\Theta_1 {\,\sqsubseteq\,} \Theta_2$ and $\Theta$,
\[
\Theta_1 \circ\Theta \ {\,\sqsubseteq\,} \ \Theta_2 \circ \Theta.
\]
However composition on the left requires an additional property, \emph{monotonicity}.
Each bivariate ${\bf E}\in{\mathbb{DOM}}[z,w]$ plays a dual role: on the one
hand simply as a power series, and on the other as an elementary operator.
Each role comes with its own notion of dominance, and the two are related.
\begin{lemma} \label{bidom}
For ${\bf E},{\bf F}\in{\mathbb{DOM}}[z,w]$ we have
\[
{\bf E} \unlhd {\bf F}\ \Rightarrow\ {\bf E} {\,\sqsubseteq\,}{\bf F}.
\]
\end{lemma}
\begin{proof}
Suppose ${\bf E} \unlhd {\bf F}$ and let ${\bf T}\in{\mathbb{DOM}}[z]$.
Then
\[
[z^n]\,{\bf E}(z,{\bf T})
\ =\ \sum e_{ij}[z^{n-i}]{\bf T}(z)^j
\ \le\ \sum f_{ij}[z^{n-i}]{\bf T}(z)^j
\ =\ [z^n]\,{\bf F}(z,{\bf T}),
\]
so ${\bf E}(z,{\bf T})\unlhd{\bf F}(z,{\bf T})$. As ${\bf T}$ was arbitrary, ${\bf E}{\,\sqsubseteq\,}{\bf F}$.
\end{proof}
\subsection{Monotone operators}
\begin{definition}
An operator $\Theta$
is \underline{monotone}
if it preserves $\unlhd$\,, that is, ${\bf A}\,\unlhd\,{\bf B}$ implies
$\Theta({\bf A})\,\unlhd\,\Theta({\bf B})$ for ${\bf A},{\bf B}\in{\mathbb{DOM}}[z]$\,.
\end{definition}
\begin{lemma}
If $\Theta_1 {\,\sqsubseteq\,}\Theta_2$ and $\Theta$ is monotone then
\[
\Theta \circ\Theta_1 \ {\,\sqsubseteq\,}\ \Theta \circ \Theta_2.
\]
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\begin{lemma} \label{monotone closed}
The set of monotone operators is closed and includes
all elementary operators and all restrictions of the standard operators.
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
\subsection{Bounded series}
\begin{definition}\label{AR}
For $R>0$ let ${\bf A}_R(z) := \sum_{n\ge 1}R^n\,z^n$.
A series ${\bf T}\in{\mathbb{DOM}}[z]$ is
\underline{bounded} if ${\bf T}\unlhd{\bf A}_R$ for some $R>0$.
\end{definition}
An easy application of the Cauchy-Hadamard Theorem shows that ${\bf T}$ is
bounded iff it is analytic at $0$.
The following basic facts about the series ${\bf A}_R(z)$ show
that the collection of bounded series is closed under the
arithmetical operations, a well known fact. Of more interest
will be the application of this to the collection of bounded
operators in Section \ref{bdd closed sect}.
\begin{lemma}\label{bounded updir}
For $c,R,R_1,R_2>0$
\begin{eqnarray*}
R_1\le R_2&\Rightarrow&{\bf A}_{R_1}\unlhd{\bf A}_{R_2}\\
c {\bf A}_R& \unlhd& {\bf A}_{(c+1)R}\\
{\bf A}_{R_1} + {\bf A}_{R_2}&\unlhd& {\bf A}_{R_1+R_2}\\
{\bf A}_{R_1} {\bf A}_{R_2}&\unlhd& {\bf A}_{R_1+R_2 }\\
{\bf A}_{R_1} \circ{\bf A}_{R_2}&\unlhd& {\bf A}_{ 2(1+R_1+R_2)^2}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
The details are quite straightforward---we give the proofs for the
last two items.
\begin{eqnarray*}
\big({\bf A}_{R_1} {\bf A}_{R_2}\big)(z)
&=&
\Big(\sum_{j\ge 1}{R_1}^j z^j\Big)\
\cdot\ \Big(\sum_{j\ge 1}{R_2}^j z^j\Big) \\
&=&
\sum_{n\ge 1} \sum_{\substack{i+j = n\\i,j\ge 1}}
\big(R_1^i z^i\big)\cdot \big(R_2^j z^j\big)\\
&=&
\sum_{n\ge 1} \Big(\sum_{\substack{i+j = n\\i,j\ge 1}}
{R_1}^i {R_2}^j\Big) z^n\\
&\unlhd&
\sum_{n\ge 1}(R_1+R_2)^n z^n
\ =\ {\bf A}_{R_1+R_2}(z).
\end{eqnarray*}
For composition,
letting $R_0= 1+R_1+R_2$:
\begin{eqnarray*}
\big({\bf A}_{R_1}\circ{\bf A}_{R_2}\big)(z)
&\unlhd& \big({\bf A}_{{R_0}}\circ{\bf A}_{{R_0}}\big)(z)
\ =\ \sum_{i\ge 1} {R_0}^i\Big( \sum_{j\ge 1} {R_0}^j z^j\Big)^i\\
&=& \sum_{i\ge 1} \Big( \sum_{j\ge 1} {R_0}^{1+j} z^j\Big)^i
\ \unlhd\ \sum_{i\ge 1}\Big(\sum_{j\ge 1}({R_0}^2 z)^j\Big)^i\\
&\unlhd& \sum_{n\ge 1}(2{R_0}^2 z)^n \ =\ {\bf A}_{2 {R_0}^2}(z).
\end{eqnarray*}
\end{proof}
\subsection{Bounded operators}
The main tool for showing that the solution $w={\bf T}$ to $w=\Theta(w)$ has
a positive radius of convergence, which is essential to employing the
methods of analysis, is to show that $\Theta$ is
{\em bounded}.
\begin{definition}
For $R>0$ define the simple operator ${\bf A}_R$ by
\[
{\bf A}_R(w)\ =\ \sum_{j\ge 1}R^j\,w^j.
\]
An operator $\Theta$ is \underline{bounded} if
$\big(\exists R>0\big)\Big(\Theta(w){\,\sqsubseteq\,}{\bf A}_R(z + w)\Big)$, that is,
\[
\big(\exists R>0\big)\,\big(\forall {\bf T}\in{\mathbb{DOM}}\big)\,\Big(\Theta({\bf T})\unlhd {\bf A}_R(z + {\bf T})\Big).
\]
\end{definition}
Of course we will want to use integer values of $R$ when working with integral
operators.
\subsection{When is an elementary operator bounded?}
The properties {\em weakly retro} and {\em monotone} investigated
earlier hold for all elementary operators. This is certainly not the
case with the bounded property. In this subsection we give a
simple univariate test for being bounded.
As mentioned before, any
${\bf E}\in{\mathbb{DOM}}[z,w]$ plays a dual role in this paper, one as a bivariate
power series and the other as an elementary operator. Each of these
roles has its own definition as to what bounded means, namely:
\begin{eqnarray*}
{\bf E}\unlhd {\bf A}_R(z+w)&\Leftrightarrow&
\big(\forall i,j\ge 1\big)\,\Big(e_{ij} \le [z^i\,w^j]\,{\bf A}_R(z+w)\Big)\\
{\bf E}{\,\sqsubseteq\,}{\bf A}_R(z+w)&\Leftrightarrow&
\big(\forall {\bf T}\in{\mathbb{DOM}}[z]\big)\,\Big({\bf E}(z,{\bf T})\,\unlhd\,{\bf A}_R(z+{\bf T})\Big).
\end{eqnarray*}
The two definitions are equivalent.
\begin{lemma}\label{two defs}
Let ${\bf E}$ be an elementary operator.
\begin{thlist}
\item
${\bf E}(z,w)$ is bounded as an operator iff\, ${\bf E}(z,z)$ is bounded
as a power series.
Indeed
\begin{eqnarray*}
{\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)
&\Rightarrow&
{\bf E}(z,z)\unlhd{\bf A}_{2R}(z)\quad\text{for }R>0\\
{\bf E}(z,z)\unlhd{\bf A}_R(z)&\Rightarrow&
{\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_{R}(z+w) \quad\text{for }R>1.
\end{eqnarray*}
\item
The equivalence of bivariate bounded and operator bounded
follows from
\begin{eqnarray*}
{\bf E}(z,w)\unlhd{\bf A}_R(z+w)&\Rightarrow& {\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)\quad\text{for }R>0\\
{\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)&\Rightarrow& {\bf E}(z,w)\unlhd{\bf A}_{2R}(z+w)\quad\text{for }R>1.
\end{eqnarray*}
\end{thlist}
\end{lemma}
\begin{proof}
For (a) suppose $R>0$ and ${\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)$.
Since $z \in{\mathbb{DOM}}[z]$, we have
\[
{\bf E}(z,z) \ \unlhd\ \sum_{j\ge 1} R^j (2z)^j \ =\ {\bf A}_{2R}(z),
\]
so ${\bf E}(z,z)$ is a bounded power series.
Conversely, suppose $R>1$ and ${\bf E}(z,z)\unlhd {\bf A}_R(z)$.
Then
\[
{\bf E}(z,z)\ \unlhd\ \sum_{j\ge 1} R^j z^j,
\]
so for $j\ge 1$
\[
[z^j]\, {\bf E}(z,z) \ \le \ R^j.
\]
Then from ${\bf E}(z,w) =\sum e_{i,j} z^i w^j$
we have $e_{i,j} \le R^{i+j}$,
so
\begin{eqnarray*}
{\bf E}(z,w)&\unlhd &\sum_{i+j\ge 1} R^{i+j} z^i w^j\\
&\unlhd& \sum_{i+j\ge 1} R^{i+j} \binom{i+j}{i} z^iw^j\\
& =& {\bf A}_R(z+w).
\end{eqnarray*}
Applying Lemma \ref{bidom} gives
${\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)$.
For (b) the first claim is just Lemma \ref{bidom}. For the second
claim suppose $R>1$ and
${\bf E}(z,w){\,\sqsubseteq\,}{\bf A}_R(z+w)$.
From the first part of (a) we have
${\bf E}(z,z)\unlhd {\bf A}_{2R}(z)$, and the argument in the second part of (a)
(with $2R$ in place of $R$) then gives ${\bf E}(z,w)\unlhd{\bf A}_{2R}(z+w)$.
\end{proof}
\begin{corollary} \label{cs bdd}
Given ${\bf A}\in{\mathbb{DOM}}[z]$, the constant operator $\Theta_{\bf A}$
as well as the simple operator ${\bf A}(w)$ are bounded iff $\rho_{\bf A} >0$.
\end{corollary}
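For example, the simple operator ${\sf Seq}(w)=\sum_{n\ge 1}w^n$ has
$\rho_{\bf A}=1>0$ and so is bounded, whereas the simple operator
${\bf A}(w)=\sum_{n\ge 1}n!\,w^n$ has $\rho_{\bf A}=0$ and so is not bounded.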
\subsection{Bounded operators form a closed set} \label{bdd closed sect}
\begin{lemma} \label{bdd is closed}
The set of bounded operators is closed.
\end{lemma}
\begin{proof}
Let $\Theta,\Theta_1,\Theta_2$ be bounded operators as witnessed by
the following:\\
$\Theta(w){\,\sqsubseteq\,} {\bf A}_R(z+w)$, $\Theta_1(w){\,\sqsubseteq\,} {\bf A}_{R_1}(z+w)$
and $\Theta_2(w){\,\sqsubseteq\,} {\bf A}_{R_2}(z+w)$.
With $c>0$ we have from Lemma \ref{bounded updir}
\begin{eqnarray*}
\big(c \Theta\big)(w)& {\,\sqsubseteq\,}& c {\bf A}_R(z+w)\ {\,\sqsubseteq\,}\ {\bf A}_{(1+c)R}(z+w)\\
\big(\Theta_1 + \Theta_2\big)(w)& {\,\sqsubseteq\,}& {\bf A}_{R_1}(z+w) + {\bf A}_{R_2}(z+w)\ {\,\sqsubseteq\,}\ {\bf A}_{R_1+R_2}(z+w)\\
\big(\Theta_1 \Theta_2\big)(w)
& {\,\sqsubseteq\,}& {\bf A}_{R_1}(z+w) {\bf A}_{R_2}(z+w)\ {\,\sqsubseteq\,}\
{\bf A}_{R_1+R_2}(z+w)\\
\big(\Theta_1 \circ\Theta_2\big)(w)& {\,\sqsubseteq\,}& {\bf A}_{R_1}(z+w) \circ{\bf A}_{R_2}(z+w)\ {\,\sqsubseteq\,}\ {\bf A}_{2(1+R_1+R_2)^2}(z+w).
\end{eqnarray*}
\end{proof}
\begin{lemma} All restrictions of standard operators are
bounded operators.
\label{std bdd}
\end{lemma}
\begin{proof}
Let $\Delta$ be a standard operator. Then for any ${\mathbb M}\subseteq {\mathbb P}$ we
have $\Delta_{\mathbb M} {\,\sqsubseteq\,} \Delta$, so it suffices to show the standard
operators are bounded. But this is evident from the well known fact that
\begin{eqnarray*}
{\sf MSet}(w)
&{\,\sqsubseteq\,}& {\sf Cycle}(w)\
{\,\sqsubseteq\,}\ {\sf DCycle}(w)\\
&&{\,\sqsubseteq\,}\
{\sf Seq}(w)\ =\ \sum_{n\ge 1}w^n\ = {\bf A}_1(w)\ {\,\sqsubseteq\,}\ {\bf A}_1(z+w).
\end{eqnarray*}
So the choice of $R$ is $R=1$.
\end{proof}
\subsection{When dominance of operators gives dominance of fixpoints}
This is part of proving that the solution $w={\bf T}$ to $w=\Theta(w)$
has a positive radius of convergence.
\begin{lemma}\label{domi}
Let ${\bf T}_i$ satisfy the recursion equation
${\bf T}_i\, =\,\Theta_i({\bf T}_i)$ for $i=1,2$\,.
If the $\Theta_i$ are retro operators,
$\Theta_1\,{\,\sqsubseteq\,}\,\Theta_2$, and
$\Theta_1$ or $\Theta_2$ is monotone
then ${\bf T}_1\,\unlhd\,{\bf T}_2$\,.
\end{lemma}
\begin{proof}
Since each $\Theta_i(w)$ is a retro operator,
by Lemma \ref{retro lem} we have
\[
{\bf T}_i\ =\ \lim_{n\rightarrow\infty}{\Theta_i}^n(0)\,.
\]
Let us use induction to show
\[
{\Theta_1}^n(0)\,\unlhd\,{\Theta_2}^n(0)
\]
holds for $n\ge 1$\,.
For $n=1$ this follows from the assumption that $\Theta_2$
dominates $\Theta_1$. So suppose it holds for $n$. Then
\[
\begin{array}{l}
{\Theta_1}^{n+1}(0)\
\unlhd\ \Theta_1\Big({\Theta_2}^n(0)\Big)\
\unlhd\ {\Theta_2}^{n+1}(0)\quad\text{if $\Theta_1$ is monotone}\\
{\Theta_1}^{n+1}(0)\
\unlhd\ \Theta_2\Big({\Theta_1}^n(0)\Big)\
\unlhd\ {\Theta_2}^{n+1}(0)\quad\text{if $\Theta_2$ is monotone}.
\end{array}
\]
Thus ${\bf T}_1\,\unlhd\,{\bf T}_2$\,.
\end{proof}
\subsection{The nonzero radius lemma}
To apply complex analysis methods to a solution ${\bf T}$ of a recursion
equation we need ${\bf T}$ to be analytic at 0.
\begin{lemma}\label{modify bound}
Let $\Theta$ be a retro operator with
$\Theta(w)\sqsubseteq{\bf A}_R(z+w)$.
Then
\[
\Theta(w)\ \sqsubseteq\ {\bf A}_R(z+w) - Rw.
\]
\end{lemma}
\begin{proof}
Since $\Theta$ is retro there is a sequence $\sigma_n$ of functions such
that for ${\bf T}\in{\mathbb{DOM}}[z]$,
\[
[z^n]\,\Theta({\bf T})\ =\ \sigma_n\big(t_1,\ldots,t_{n-1}\big).
\]
Let
\[
\Phi(w) \ :=\ \sum_{n\ge 2}R^n (z+w)^n,
\]
which is easily seen to be a retro operator.
Choose $\widehat{\sigma}_n$ such that
for ${\bf T}\in {\mathbb{DOM}}[z]$
\[
[z^n]\,\Phi({\bf T})
\ =\
\widehat{\sigma}_n\big(t_1,\ldots,t_{n-1}\big).
\]
Then, since ${\bf A}_R(z+w) = R(z+w) + \Phi(w)$, from the dominance of
$\Theta(w)$ by ${\bf A}_R(z+w)$ we have, for any $t_i\ge 0$
and $n\ge 2$,
\[
\sigma_n\big(t_1,\ldots,t_{n-1}\big)\ \le\ R t_n\, + \,
\widehat{\sigma}_n\big(t_1,\ldots,t_{n-1}\big).
\]
As the left side does not depend on $t_n$ we can put $t_n=0$
to deduce
\[
\sigma_n\big(t_1,\ldots,t_{n-1}\big)\ \le\
\widehat{\sigma}_n\big(t_1,\ldots,t_{n-1}\big),
\]
which gives the desired conclusion.
\end{proof}
\begin{lemma} \label{pos rad lemma}
Let $\Theta$ be a bounded retro operator. Then $w=\Theta(w)$ has
a unique solution $w={\bf T}$, and $\rho_{\bf T}>0$.
\end{lemma}
\begin{proof}
By Lemma \ref{retro lem} we know there is a unique solution ${\bf T}$. Choose
$R>1$ such that $\Theta(w) \sqsubseteq {\bf A}_R(z + w)$.
From Lemma \ref{modify bound} we can change this to
\begin{equation} \label{theta R}
\Theta(w)\ \sqsubseteq\ {\bf A}_R(z + w) - Rw.
\end{equation}
The right side is a monotone retro operator,
so Lemma \ref{domi} says that the
fixpoint ${\bf S}$ of ${\bf A}_R(z+w)-Rw$ dominates the fixpoint ${\bf T}$ of $\Theta(w)$.
Thus
\[
{\bf S}\ =\ {\bf A}_R(z+{\bf S}) - R{\bf S}.
\]
To show $\rho_{\bf T}>0$
it suffices to show $\rho_{\bf S}>0$.
We would like to sum the geometric series ${\bf A}_R\big(z+{\bf S}(z)\big)$;
however, since we do not yet know that ${\bf S}$ is analytic at $z=0$, we
perform an equivalent maneuver by multiplying both sides of the displayed
equation for ${\bf S}$
by $1-Rz - R{\bf S}$ to obtain the quadratic equation
\[
\left( R+{R}^{2} \right) {{\bf S}}^{2}+ \left( {R}^{2}z+Rz-1 \right) {\bf S}+Rz
\ =\ 0.
\]
The discriminant of this equation is
\[
{\bf D}(z)\ =\ \left( {R}^{2}z+Rz-1 \right)^2\, -\,
4\left( R+{R}^{2} \right) Rz.
\]
Since ${\bf D}(0) = 1 $ is positive
it follows that $\sqrt{{\bf D}(z)}$ is analytic in a
neighborhood of $z=0$. Consequently ${\bf S}(z)$ has a nonzero radius
of convergence.
\end{proof}
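Concretely, solving the quadratic in the proof and selecting the branch with
${\bf S}(0)=0$ gives
\[
{\bf S}(z)\ =\ \frac{1-(R+R^2)z-\sqrt{{\bf D}(z)}}{2(R+R^2)}\,,
\]
so $\rho_{\bf S}$ is at least as large as the smallest positive zero of ${\bf D}$.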
\subsection{The set of composite operators}
The sets ${\mathcal O}_E$ and ${\mathcal O}_I$ of operators that we
eventually will exhibit as
``guaranteed to give the universal law'' will be subsets of the
following {\em composite} operators.
\begin{definition} \label{composite}
The composite operators are those obtained from
the base operators, namely
\begin{thlist}\itemsep=1ex
\item
the elementary operators ${\bf E}(z,w)$ and
\item
the ${\mathbb M}$-restrictions of the standard operators:
${\sf MSet}_{\mathbb M}$,
${\sf Cycle}_{\mathbb M}$,
${\sf DCycle}_{\mathbb M}$
and ${\sf Seq}_{\mathbb M}$,
\end{thlist}
using the variables $z,w$, scalar multiplication by positive reals,
and the binary operations addition
\mbox{\rm($+$)}, multiplication \mbox{\rm($\cdot$)}
and composition \mbox{\rm($\circ$)}.
\end{definition}
\begin{lemma} \label{2 constr props}
The set of composite operators is closed under the arithmetical operations
and
all composite operators $\Theta$ are monotone and weakly retro.
\end{lemma}
\begin{proof}
The closure property is immediate from the definition of the set of
composite operators,
the monotone property is from Lemma \ref{monotone closed}, and
the weakly retro property is from Lemma \ref{retro is full} (b).
\end{proof}
An expression like $z\, +\,z {\sf Seq}(w)$ that describes
how a composite operator is constructed is
called a \emph{term}. Terms can be visualized as trees, for example
the term just described and the term in \eqref{ex1} have the
trees shown in Figure \ref{term tree}. (A small empty box in the figure
shows where the argument below the box is to be inserted.)
\begin{figure}
\caption{Two examples of term trees}
\label{term tree}
\end{figure}
Composite operators are, like their counterparts called {\em term functions}
in universal algebra and logic, valued because one can
(1) define functions on the class by induction on terms, and (2)
prove facts about the class by induction on terms.
Perhaps the simplest explanation of why we like the composite operators
$\Theta$ so much is: we have a routine procedure to convert the
equation $w = \Theta(w)$ into an equation $w = {\bf E}(z,w)$ where ${\bf E}$
is elementary. This is the next topic.
\subsection{Representing a composite operator $\Theta$ at ${\bf T}$}
In order to apply analysis to the solution $w={\bf T}$ of a recursion
equation $w=\Theta(w)$ we want to put the
equation into the form $w = {\bf E}(z,w)$ with ${\bf E}$ analytic on ${\bf T}$.
The next definition describes a natural candidate for
${\bf E}$ in the case that $\Theta$ is composite.
\begin{definition} \label{repr}
Given a base operator $\Theta$ and a ${\bf T}\in{\mathbb{DOM}}[z]$
define an elementary operator
${\bf E}^{\Theta,{\bf T}}$
as follows:
\begin{thlist}
\item
${\bf E}^{{\bf E},{\bf T}} \,=\, {\bf E}$ for ${\bf E}$ an elementary operator.
\item
For $\Theta={\sf MSet}_{\mathbb M}$ let
${\bf E}^{\Theta,{\bf T}}\,=\,\sum_{m\in{\mathbb M}} {\bf Z}\big({\sf S}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big)$.
\item
For $\Theta={\sf DCycle}_{\mathbb M}$ let
${\bf E}^{\Theta,{\bf T}}\,=\,\sum_{m\in{\mathbb M}} {\bf Z}\big({\sf C}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big)$.
\item
For $\Theta={\sf Cycle}le_{\mathbb M}$ let
${\bf E}^{\Theta,{\bf T}}\,=\,\sum_{m\in{\mathbb M}} {\bf Z}\big({\sf D}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big)$.
\item
For $\Theta={\sf Seq}_{\mathbb M}$ let
${\bf E}^{\Theta,{\bf T}}\,=\,\sum_{m\in{\mathbb M}} w^m$.
\end{thlist}
Extend this to all composite operators using the obvious inductive definition:
\begin{eqnarray*}
{\bf E}^{c \Theta,{\bf T}}&:=&c {\bf E}^{\Theta,{\bf T}}\\
{\bf E}^{\Theta_1 + \Theta_2,{\bf T}}& :=& {\bf E}^{\Theta_1,{\bf T}} + {\bf E}^{\Theta_2,{\bf T}}\\
{\bf E}^{\Theta_1 \Theta_2,{\bf T}}& :=& {\bf E}^{\Theta_1,{\bf T}} {\bf E}^{\Theta_2,{\bf T}}\\
{\bf E}^{\Theta_1 \,\circ\, \Theta_2,{\bf T}}& :=&
{\bf E}^{\Theta_1,\Theta_2({\bf T})}\big(z,{\bf E}^{\Theta_2,{\bf T}}\big).
\end{eqnarray*}
\end{definition}
The definition is somewhat redundant as
the ${\sf Seq}_{\mathbb M}$ operators are included in the elementary operators.
\begin{lemma}\label{repres}
For $\Theta$ a composite operator and ${\bf T}\in{\mathbb{DOM}}[z]$ we have
\[
\Theta({\bf T})\ =\ {\bf E}^{\Theta,{\bf T}}(z,{\bf T}).
\]
We will simply say that ${\bf E}^{\Theta,{\bf T}}$
\underline{represents $\Theta$ at ${\bf T}$}.
\end{lemma}
\begin{proof}
By induction on terms.
\end{proof}
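To illustrate, take the unlabelled-tree operator $\Theta(w)=z+z\,{\sf MSet}(w)$ and its
solution ${\bf T}$. Using the classical identity
$\sum_{m\ge 0}{\bf Z}({\sf S}_m,x_1,\ldots,x_m)=\exp\big(\sum_{j\ge 1}x_j/j\big)$,
the operator representing $\Theta$ at ${\bf T}$ can be written as
\[
{\bf E}^{\Theta,{\bf T}}(z,w)\ =\ z\,+\,z\Big(\exp\Big(w+\sum_{j\ge 2}{\bf T}(z^j)/j\Big)-1\Big),
\]
and ${\bf T}={\bf E}^{\Theta,{\bf T}}(z,{\bf T})$ recovers the familiar equation
${\bf T}(z)=z\exp\big(\sum_{j\ge 1}{\bf T}(z^j)/j\big)$.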
\subsection{Defining linearity for composite operators}
\begin{definition}
Let $\Theta$ be a composite operator. We say $\Theta$ is
\underline{linear} (in $w$) if the elementary operator
${\bf E}^{\Theta,z}$ representing $\Theta$ at $z$ is
linear in $w$. Otherwise we say $\Theta$ is \underline{nonlinear} (in $w$).
\end{definition}
\begin{lemma} \label{lin inv}
Let $\Theta$ be a composite operator. Then the elementary operator
${\bf E}^{\Theta,{\bf T}}(z,w)$ representing $\Theta$ at ${\bf T}$ is either
linear in $w$ for all ${\bf T}\in{\mathbb{DOM}}[z]$, or it is
nonlinear in $w$ for all ${\bf T}\in{\mathbb{DOM}}[z]$.
\end{lemma}
\begin{proof}
Use induction on terms.
\end{proof}
\subsection{When ${\bf T}$ belongs to ${\mathbb{DOM}}^\star[z]$}
\begin{proposition}\label{dom star}
Let $\Theta$ be a bounded nonlinear retro composite operator. Then there
is a unique solution $w = {\bf T}$ to $w = \Theta(w)$, and ${\bf T}\in{\mathbb{DOM}}^\star[z]$,
that is, $\rho_{\bf T}\in(0,\infty)$ and ${\bf T}(\rho_{\bf T})<\infty$.
\end{proposition}
\begin{proof}
From Lemma \ref{pos rad lemma} we know that $w=\Theta(w)$ has a unique
solution ${\bf T}\in{\mathbb{DOM}}[z]$, and $\rho :=\rho_{\bf T}>0$.
Let ${\bf E}(z,w)$ be the elementary operator representing $\Theta$ at ${\bf T}$.
Then ${\bf T} = {\bf E}(z,{\bf T})$. As $\Theta$ is nonlinear there is a positive
coefficient $e_{ij}$ of ${\bf E}$ with $j\ge 2$. Clearly
\[
{\bf T}(x) \ \ge\ e_{ij} x^i {\bf T}(x)^j\quad\text{for }x\ge 0.
\]
Divide through by ${\bf T}(x)^2$ and let $x$ approach $\rho^-$: if ${\bf T}(x)\rightarrow\infty$
then the left side $1/{\bf T}(x)$ tends to $0$ while the right side
$e_{ij}x^i{\bf T}(x)^{j-2}$ stays bounded away from $0$, which is impossible.
Hence ${\bf T}(\rho) < \infty$, and thus $\rho<\infty$.
This shows ${\bf T}\in{\mathbb{DOM}}^\star[z]$.
\end{proof}
\subsection{Composite operators that are open for ${\bf T}$}
Many examples of elementary operators enjoy the open property, but
(restrictions of) the standard operators rarely do: only the various
${\sf Seq}_{\mathbb M}$ and $\Delta_{\{1\}}$ for $\Delta$ any of the standard
operators.
For the standard operators other than ${\sf Seq}$, and hence for
most of the composite operators, it is
very important that we use the concept of `open at ${\bf T}$'
when setting up for the Weierstra{\ss} Preparation Theorem.
\begin{definition} \label{op open def}
Let ${\bf T}\in{\mathbb{DOM}}^\star[z]$.
A composite operator $\Theta$ is \underline{open for ${\bf T}$} iff
${\bf E}^{\Theta,{\bf T}}$ is
open at $\big(\rho,{\bf T}(\rho)\big)$.
\end{definition}
The next lemma determines when the base operators are open for a given
${\bf T}\in{\mathbb{DOM}}^\star[z]$.
\begin{lemma} \label{open constr}
Suppose ${\bf T}\in{\mathbb{DOM}}^\star[z]$ and let $\rho\in(0,\infty)$ be its
radius of convergence.
Then the following hold:
\begin{thlist}
\item
An elementary operator ${\bf E}$ is open for ${\bf T}$ iff it is open
at $\big(\rho,{\bf T}(\rho)\big)$.
\item
A constant operator $\Theta_{\bf A}(w)$ is open for ${\bf T}$\, iff\,
$\rho<\rho_{\bf A}$.
\item
A simple operator ${\bf A}(w)$ is open for ${\bf T}$ \,iff\,
${\bf T}(\rho)<\rho_{\bf A}$.
\item
${\sf Seq}_{\mathbb M}$ is open for ${\bf T}$ iff ${\mathbb M}$ is finite or
${\bf T}(\rho)<1$.
\item
${\sf MSet}_{\mathbb M}$ is open for ${\bf T}$\, iff\,
${\mathbb M}=\{1\}$ or $\rho<1$.
\item
${\sf DCycle}_{\mathbb M}$, or ${\sf Cycle}_{\mathbb M}$, is open for ${\bf T}$\, iff\,
${\mathbb M}=\{1\}$\ or\
$({\mathbb M}$ is finite and $\rho<1)$ or
$({\mathbb M}$ is infinite and $\rho,{\bf T}(\rho)<1)$.
\end{thlist}
\end{lemma}
\begin{proof}
For (a) note that an open operator represents itself at ${\bf T}$.
For (b) and (c) use Lemma \ref{cs open}.
For (d) note that ${\sf Seq}_{\mathbb M}(w)$
is the simple operator ${\bf A}(w) :=\sum_{m\in{\mathbb M}} w^m$, so (c) applies.
For (e) let ${\bf E} := {\bf E}^{\Theta,{\bf T}}$
where $\Theta := {\sf MSet}_{\mathbb M}$.
Then
\[
{\bf E}(z,w)\ :=\ \sum_{m\in{\mathbb M}}
{\bf Z}\big({\sf S}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big).
\]
If ${\mathbb M}=\{1\}$
then ${\bf E}(z,w)=w$ and (c) applies.
So suppose ${\mathbb M}\neq\{1\}$.
The term ${\bf T}(z^2)$ appears in ${\bf E}(z,w)$, and this diverges
at $\rho + \varepsilon$ if $\rho \geq 1$. Thus $\rho < 1$ is
a necessary condition for ${\bf E}$ to be open for ${\bf T}$.
So suppose $\rho<1$.
The representative for ${\sf MSet}$
dominates the representative of any
${\sf MSet}_{\mathbb M}$. Thus for any $x\in(0,\sqrt{\rho})$ and $y>0$:
\[
{\bf E}(x,y)\ \le\ e^y \exp\Big(\sum_{m\ge 2}{\bf T}\big(x^m\big)\big/m\Big)\ <\ \infty.
\]
Since one can find $\varepsilon>0$ such that the
right hand side is finite at
$\big(\rho+\varepsilon,{\bf T}(\rho)+\varepsilon\big)$, it follows
that ${\sf MSet}_{\mathbb M}$ is open for ${\bf T}$ when $\rho<1$.
For (f) let
${\bf E} := {\bf E}^{\Theta,{\bf T}}$
where $\Theta := {\sf DCycle}_{\mathbb M}$.
Then
\begin{eqnarray*}
{\bf E}(z,w)
& :=&
\sum_{m\in {\mathbb M}} {\bf Z}\big({\sf C}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big)\\
& =&
\underbrace{\sum_{m\in{\mathbb M}}\frac{1}{m} w^m}_{{\bf A}(w)}\ +\
\underbrace{\sum_{k\ge 2}\frac{\varphi(k)}{k}\sum_{jk\in{\mathbb M}}\frac{1}{j}{\bf T}(z^k)^j}_{{\bf B}(z)}.
\end{eqnarray*}
If ${\mathbb M}=\{1\}$ then, as before, there are no further restrictions needed
as ${\bf E}(z,w) := w$.
So now suppose ${\mathbb M}\neq\{1\}$.
The presence of some ${\bf T}(z^k)$ with $k\ge 2$ in the expression
for ${\bf E}(z,w)$ shows, as in (e), that a necessary condition is $\rho<1$.
This condition implies $\rho_{\bf B} \ge \sqrt{\rho}$.
If ${\mathbb M}$ is finite then $\rho_{\bf A}=\infty$, and $\rho_{\bf B} \ge \sqrt{\rho}$,
consequently ${\bf E}$ is open at $\big(\rho,{\bf T}(\rho)\big)$.
If ${\mathbb M}$ is infinite then $\rho_{\bf A}=1$.
Suppose ${\bf E}$ is open at $\big(\rho,{\bf T}(\rho)\big)$.
Then ${\bf A}\big({\bf T}(\rho)+\varepsilon\big)$ converges for some
$\varepsilon>0$, so ${\bf T}(\rho)<1$. The conditions $\rho,{\bf T}(\rho)<1$
are easily seen to be sufficient in this case.
For the ${\sf Cycle}_{\mathbb M}$ case let
${\bf E} := {\bf E}^{\Theta,{\bf T}}$
where $\Theta := {\sf Cycle}_{\mathbb M}$.
\begin{eqnarray*}
{\sf Cycle}_{\mathbb M}\big({\bf T}(z)\big)& =& \frac{1}{2}\,{\sf DCycle}_{\mathbb M}\big({\bf T}(z)\big)\\
&& +\
\frac{1}{4}
\sum_{m\in{\mathbb M}}
\begin{cases}
2{\bf T}(z) {\bf T}(z^2)^{(m-1)/2} &\text{if $m$ is odd}\\
{\bf T}(z)^2 {\bf T}(z^2)^{(m-2)/2}\ +\ {\bf T}(z^2)^{m/2} &\text{if $m$ is even}.
\end{cases}
\end{eqnarray*}
Thus
\begin{eqnarray*}
{\bf E}(z,w)
& :=&
\sum_{m\in {\mathbb M}} {\bf Z}\big({\sf D}_m,w,{\bf T}(z^2),\ldots,{\bf T}(z^m)\big)\\
& =&
\frac{1}{2}\sum_{m\in{\mathbb M}}\frac{1}{m} w^m\ +\
\frac{1}{2}\sum_{k\ge 2}\frac{\varphi(k)}{k}\sum_{jk\in{\mathbb M}}\frac{1}{j}{\bf T}(z^k)^j\\
&& +\
\frac{1}{4}\sum_{m\in{\mathbb M}}
\begin{cases}
2 w {\bf T}(z^2)^{(m-1)/2} &\text{if $m$ is odd}\\
w^2 {\bf T}(z^2)^{(m-2)/2}\ +\ {\bf T}(z^2)^{m/2} &\text{if $m$ is even}
\end{cases}
\end{eqnarray*}
and we can use the same arguments as for ${\sf DCycle}$\,.
\end{proof}
\subsection{Closure of the composite operators that are open for
${\bf T}\in{\mathbb{DOM}}^\star[z]$} \label{aeo}
\begin{lemma} \label{aeo lem}
Suppose ${\bf T}\in{\mathbb{DOM}}^\star[z]$. Then the following hold:
\begin{thlist}
\item
The set of composite operators that are open for ${\bf T}$
is closed under addition, scalar multiplication and multiplication.
\item
Given composite operators $\Theta_1,\Theta_2$ with
$\Theta_2$ open for ${\bf T}$ and $\Theta_1$ open for
${\bf T}_1 := \Theta_2({\bf T})$, the composition $\Theta_1\circ \Theta_2$
is open for ${\bf T}$.
\end{thlist}
\end{lemma}
\begin{proof}
Just apply Lemma \ref{closure of open E}.
\end{proof}
\subsection{Closure of the composite integral operators that are open
for ${\bf T}\in{\mathbb{IDOM}}^\star[z]$} \label{aio}
\begin{definition}
${\mathbb{IDOM}}^\star[z] = {\mathbb{IDOM}}[z] \cap {\mathbb{DOM}}^\star[z]$.
\end{definition}
\begin{lemma} \label{aio lem}
Suppose ${\bf T}\in{\mathbb{IDOM}}^\star[z]$. Then the following hold:
\begin{thlist}
\item
The set of integral composite operators that are open for ${\bf T}$
is closed under addition, positive integer scalar multiplication and multiplication.
\item
Given integral composite operators $\Theta_1,\Theta_2$ with
$\Theta_2$ open for ${\bf T}$ and $\Theta_1$ open for
${\bf T}_1 := \Theta_2({\bf T})$, the composition $\Theta_1\circ \Theta_2$
is integral and open for ${\bf T}$.
\end{thlist}
\end{lemma}
\begin{proof}
This is just a repeat of the previous proof, noting that at
each stage we are dealing with integral operators acting on ${\mathbb{IDOM}}[z]$.
\end{proof}
\subsection{A special set of operators called ${\mathcal O}$}
This is the penultimate step in describing the promised collection
of recursion equations.
\begin{definition} \label{def cO}
Let ${\mathcal O}$ be the set of operators that can be constructed from
\begin{thlist}\itemsep=1ex
\item
the bounded and open elementary operators ${\bf E}(z,w)$ and
\item
the ${\mathbb M}$-restrictions of the standard operators:
${\sf MSet}_{\mathbb M}$,
${\sf Cycle}_{\mathbb M}$,
${\sf DCycle}_{\mathbb M}$
and ${\sf Seq}_{\mathbb M}$,
where in the case of the cycle constructions we require the set ${\mathbb M}$ to
be either finite or to satisfy $\sum_{m\in{\mathbb M}} 1/m \,=\, \infty$,
\end{thlist}
using the variables $z,w$, scalar multiplication by positive reals,
and the binary operations addition
\mbox{\rm($+$)}, multiplication \mbox{\rm($\cdot$)}
and composition \mbox{\rm($\circ$)}.
Within ${\mathcal O}$ let ${\mathcal O}_E$ be the set of bounded and open
elementary operators; and let ${\mathcal O}_I$ be the closure
under the arithmetical operations of the
bounded and open integral elementary operators along with the
standard operators listed in (b).
\end{definition}
Clearly ${\mathcal O}$ is a subset of the composite operators.
\begin{lemma} \label{O facts}
\begin{thlist}
\item
Every $\Theta\in{\mathcal O}$ is a bounded monotone and weakly retro operator.
\item
Each of the sets ${\mathcal O},{\mathcal O}_E,{\mathcal O}_I$ is closed under the
arithmetical operations.
\end{thlist}
\end{lemma}
\begin{proof}
For (a)
we know from our assumption on the elementary operators in ${\mathcal O}$ and
Lemma \ref{std bdd} that the base operators in ${\mathcal O}$ are bounded---then
Lemma \ref{bdd is closed} shows that all members of ${\mathcal O}$ are
bounded.
All members of ${\mathcal O}$ are monotone and weakly retro
by Lemma \ref{2 constr props}.
Regarding (b), use Lemma \ref{closure of open E} (b) for ${\mathcal O}_E$, and
Definition \ref{def cO} for the other two sets.
\end{proof}
\begin{lemma}\label{open integral}
Let $\Theta\in{\mathcal O}_I$.
If $\,{\bf T}\in{\mathbb{IDOM}}^\star[z]$ and
$\Theta({\bf T})(\rho_{\bf T})<\infty$ then $\Theta$ is open for ${\bf T}$.
\end{lemma}
\begin{proof}
Since ${\bf T}\in{\mathbb{IDOM}}^\star[z]$ we must have $\rho:=\rho_{\bf T} < 1$
(a non-polynomial series with nonnegative integer coefficients that converges
at its radius of convergence must have radius less than $1$).
Let
\[
{\mathcal O}^\star\ :=\ \{\Theta\in{\mathcal O}_I : \Theta({\bf T})(\rho) < \infty\}.
\]
An induction proof will show that for $\Theta\in{\mathcal O}^\star$
we have $\Theta$ open for ${\bf T}$.
The elementary base operators of ${\mathcal O}^\star$ are given to be
open, hence they are open for ${\bf T}$.
The restrictions of the standard operators in ${\mathcal O}^\star$
are covered by parts (d)--(f) of Lemma \ref{open constr}, with
one exception.
We need to verify in certain ${\sf DCycle}$
and ${\sf Cycle}$ cases that ${\bf T}(\rho)<1$. In these cases
one has ${\mathbb M}$ infinite, and then we must have ${\bf T}(\rho)<1$
in order for $\Theta({\bf T})$ to converge at $z=\rho$ since $\sum_{m\in{\mathbb M}} 1/m \,=\, \infty$.
For the induction step simply apply Lemma \ref{aio lem}.
\end{proof}
\subsection{The Main Theorem}
The following is our main theorem, exhibiting many $\Theta$ for which
$w = \Theta(w)$ is a recursion equation whose solution satisfies
the universal law. Several examples follow the proof.
\begin{theorem}\label{Main Thm}
Let $\Theta_1$ be a nonlinear retro member of
${\mathcal O}_E$, respectively ${\mathcal O}_I$, and let ${\bf A}(z)\in {\mathbb{DOM}}[z]$,
respectively ${\bf A}(z)\in{\mathbb{IDOM}}[z]$, be such that
${\bf A}(\rho_{\bf A})=\infty$.
Then there is a unique ${\bf T}\in{\mathbb{DOM}}[z]$, respectively ${\bf T}\in{\mathbb{IDOM}}[z]$,
such that ${\bf T} = {\bf A}(z) + \Theta_1({\bf T})$.
The coefficients of ${\bf T}$ satisfy the
universal law $\pmb{(\star)}$ in the form
\[
t(n)\ \sim\ q\sqrt
{\frac{\rho{\bf E}_z\big(\rho,{\bf T}(\rho)\big)}
{2\pi {\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)}}
\cdot
\rho^{-n} n^{-3/2}
\quad\text{for } n\equiv d \mod q.
\]
Otherwise $t(n)=0$.
Thus $\pmb{(\star)}$ holds on $\{n : t(n) > 0\}$.
Here $\rho:=\rho_{\bf T}$, ${\bf E}(z,w)$ is the elementary operator representing
${\bf A}(z)+\Theta_1(w)$ at ${\bf T}$, and the constants $d,q$ are from the
shift periodic form ${\bf T}(z) = z^d{\bf V}(z^q)$.
\end{theorem}
\begin{proof}
Let $\Theta(w) = {\bf A}(z) + \Theta_1(w)$, by Lemma \ref{O facts}
a member of ${\mathcal O}_E$, respectively
${\mathcal O}_I$.
By Proposition \ref{dom star}
there is a unique solution $w={\bf T}$ to $w=\Theta(w)$ and
${\bf T}\in{\mathbb{DOM}}^\star[z]$.
Let ${\bf E}_1(z,w)={\bf E}^{\Theta_1,{\bf T}}$. Then the elementary representative
${\bf E}={\bf E}^{\Theta,{\bf T}}$ of $\Theta(w)={\bf A}(z)+\Theta_1(w)$ is given by
\[
{\bf E}(z,w) \ :=\ {\bf A}(z) + {\bf E}_1(z,w).
\]
We will verify the hypotheses (a)--(e) of Theorem \ref{basic thm}.
${\bf T} = {\bf E}(z,{\bf T})$ by Lemma \ref{repres};
this is \ref{basic thm} (a).
The fact that ${\bf T}\in{\mathbb{DOM}}^\star[z]$ is \ref{basic thm} (b).
By Lemma \ref{lin inv} we get \ref{basic thm} (c).
Since ${\bf A}(0) = 0$ and ${\bf A} \neq 0$
it follows that ${\bf A}_z \neq 0$.
As ${\bf E}(z,0) = {\bf A}(z)$ it follows that ${\bf E}_z \neq 0$.
This is \ref{basic thm} (d).
To show $\Theta$ is open for ${\bf T}$: in the case of operators coming
from ${\mathcal O}_E$ the elementary operators involved are given to be open,
and the hypothesis ${\bf A}(\rho_{\bf A})=\infty$ makes the constant operator
$\Theta_{\bf A}$ open as well (Lemma \ref{cs open}); in the case of operators
coming from ${\mathcal O}_I$ use Lemma \ref{open integral}.
This gives \ref{basic thm} (e).
\end{proof}
\subsection{Applications of the main theorem} \label{applications}
One readily checks that all the recursion equations given in
Table \ref{std examp} satisfy the hypotheses of Theorem \ref{Main Thm}.
One can easily produce more complicated examples such as
\[
w\ =\ 3z^3\,+\,z^4 {\sf Cycle}(w)
\,+\,w^2 {\sf DCycle}(w) \,+\,{\sf MSet}_2(w).
\]
Such simple cases barely scratch the surface
of the possible applications of Theorem \ref{Main Thm}. Let us turn to
the more dramatic example given earlier in \eqref{ex1}, namely:
\[
w\ =\ z \ +\ z {\sf MSet}
\Big({\sf Seq}\big(
\sum_{n\in{\sf Odd}} 6^n w^n
\big)\Big)
\sum_{n\in{\sf Even}}(2^n+1)
\big({\sf DCycle}_{{\sf Primes}}(w)\big)^n\,.
\]
We will analyze this from `the inside out', naming the operators encountered
as we work up the term tree. First we give names to the nodes of the term tree:
\[
\begin{array}{l @{\quad}l@{\quad} l@{\quad} l }
\Phi_1\ :=\ \sum_{n\in{\sf Odd}} 6^n w^n&
\Phi_2\ :=\ {\sf DCycle}_{{\sf Primes}}(w)&
\Phi_3\ :=\ {\sf Seq}\big(\Phi_1\big)\\
\Phi_4\ :=\ {\sf MSet}\big(\Phi_3\big)&
\Phi_5\ :=\ \sum_{n\in{\sf Even}}(2^n+1) w^n&
\Phi_6\ :=\ \Phi_5(\Phi_2)\\
{\bf A}(z)\ :=\ z&
\Theta_1\ :=\ z\Phi_4 \Phi_6&&.
\end{array}
\]
Now we argue that each of these operators is in ${\mathcal O}_I$:
\begin{thlist}
\item
$\Phi_1$ is an elementary (actually simple) integral operator with radius
of convergence $1/6$. Thus it is bounded. Since it diverges at its radius
of convergence, it is open. Thus $\Phi_1\in{\mathcal O}_I$.
\item
$\Phi_2$ is a restriction of ${\sf DCycle}$ to the
set of prime numbers; since $\sum_{m\in{\sf Primes}} 1/m = \infty$ we have
$\Phi_2\in{\mathcal O}_I$.
\item
$\Phi_3$ is in ${\mathcal O}_I$ as it is a composition of two operators in ${\mathcal O}_I$.
\item
$\Phi_4$ is in ${\mathcal O}_I$ as it is a composition of two operators in ${\mathcal O}_I$.
\item
$\Phi_5$ is an elementary (actually simple) integral operator with radius
of convergence $1/2$. Thus it is bounded. Since it diverges at its radius
of convergence, it is open. Thus $\Phi_5\in{\mathcal O}_I$.
\item
$\Phi_6$ is in ${\mathcal O}_I$ as it is a composition of two operators in ${\mathcal O}_I$.
\item
$\Theta_1$ is in ${\mathcal O}_I$ as it is a product of two operators in ${\mathcal O}_I$.
\item
$\Theta_1$ is a nonlinear retro operator in ${\mathcal O}_I$.
\end{thlist}
Thus we have an equation
$w\ =\ {\bf A}(z) + \Theta_1(w)$
that satisfies the hypotheses of Theorem \ref{Main Thm}; consequently
the solution $w={\bf T}(z)$ has coefficients satisfying the universal law.
\subsection{Recursion specifications for planar trees}
When working with either labelled trees or planar trees the
recursion equations are elementary. Here is a popular
example that we will examine in detail.
\begin{example}[Planar Binary Trees] The defining equation is
\[
w \ =\ z \,+\, z w^2.
\]
This simple equation can be handled directly since it is a quadratic,
giving the solution
\[
{\bf T}(z) \ =\ \frac{1 - \sqrt{1-4z^2}}{2z}.
\]
Clearly $\rho = 1/2$ and for $n\ge 1$ we have $t(2n)=0$, and Lemma
\ref{binomial} gives
\begin{eqnarray*}
t(2n-1)
&=& (-1)^{n+1} \frac{4^n}{2}\binom{1/2}{n}
\sim\
\frac{4^n}{2}
\cdot \frac{n^{-3/2}}{2 \sqrt{\pi}}
\ =\ \frac{1}{\sqrt{\pi}} 4^{n-1} n^{-3/2}.
\end{eqnarray*}
For illustrative purposes let us examine this in light of
the results in this paper. Note that
\[
{\bf E}(z,w)\ :=\ z + z w^2
\]
is in the desired form ${\bf A}(z) + \Theta_1(w)$ with
${\bf A}(\rho_{\bf A}) = \infty$
and
$\Theta_1$ a bounded retro nonlinear (elementary) operator.
The constants $d,q$ of the shift periodic form are given by:
\begin{thlist}
\item
$d=1$ as ${\bf E}_0(z) = z$ implies $E_0=\{1\}$.
\item
$q = 2$ as $E_0 = \{1\}$, $E_2 = \{1\}$, and otherwise $E_j=\text{\O}$;
thus $\bigcup E_n+(n-1)d = (E_0 - 1)\cup(E_2+1) = \{0,2\}$, so
$q = \gcd\{0,2\}= 2$.
\end{thlist}
Thus $t(n)>0$ implies $n\equiv 1 \mod 2$, that is, $n$ is an odd number.
For the constant in the asymptotics we have
\begin{eqnarray*}
{\bf E}_z(z,w)&=& 1+w^2\\
{\bf E}_{ww}(z,w)&=& 2z.
\end{eqnarray*}
In this case we know $\rho = 1/2$ and ${\bf T}(\rho)=1$ (from solving the
quadratic equation), so
\begin{eqnarray*}
{\bf E}_z\big(\rho,{\bf T}(\rho)\big)&=& 2\\
{\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)&=& 1.
\end{eqnarray*}
Thus
\begin{eqnarray*}
t(n)&\sim&
q
\sqrt{\frac{\rho{\bf E}_z\big(\rho,{\bf T}(\rho)\big)}
{2\pi {\bf E}_{ww}\big(\rho,{\bf T}(\rho)\big)}}
\cdot \rho^{-n} n^{-3/2}\\
&=& 2\sqrt{\frac{1}{2\pi}} \cdot 2^{n} n^{-3/2}\\
&=& \sqrt{\frac{2}{\pi}} \cdot 2^n n^{-3/2}\quad\text{for }
n\equiv 1 \mod 2.
\end{eqnarray*}
\end{example}
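For readers who want to see the universal law numerically, here is a short Python sketch (ours, purely illustrative; the helper names are arbitrary) that iterates the retro recursion $w \mapsto z + zw^2$ on truncated series and compares the coefficients with $\sqrt{2/\pi}\,2^n n^{-3/2}$.
\begin{verbatim}
# Sketch: coefficients of the planar binary tree series T = z + z*T^2,
# compared with the predicted asymptotics sqrt(2/pi) * 2^n * n^(-3/2).
import math

N = 25
t = [0] * (N + 1)              # t[n] holds the coefficient of z^n in T
for _ in range(N + 1):         # iterate w <- z + z*w^2; since the operator is
    sq = [0] * (N + 1)         # retro, degree n is exact after n iterations
    for i in range(N + 1):
        for j in range(N - i + 1):
            sq[i + j] += t[i] * t[j]
    new = [0] * (N + 1)
    new[1] = 1
    for n in range(2, N + 1):
        new[n] = sq[n - 1]
    t = new

for n in range(15, N + 1, 2):  # odd n only; the even coefficients vanish
    print(n, t[n], round(math.sqrt(2 / math.pi) * 2**n * n ** -1.5))
\end{verbatim}
The agreement is already within a few percent at $n=25$, as one expects from an $n^{-3/2}$ law.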
\subsection{On the need for integral operators}
Since the standard operators, and their restrictions, are defined
on ${\mathbb{DOM}}[z]$ it would be most welcome if one could unify the treatment
so that the main theorem were simply a theorem about operators on ${\mathbb{DOM}}[z]$,
instead of having one part for elementary operators on ${\mathbb{DOM}}[z]$ and another
part for integral operators acting on ${\mathbb{IDOM}}[z]$. However the following
example indicates that one has to exercise some caution when working
with standard operators that mention ${\bf T}(z^j)$ for some $j\ge 2$.
Let
\[
\Theta(w)\ :=\ \frac{z}{2}\big(1 + {\sf MSet}_2(w)\big).
\]
This is 1/2 the operator one uses to define (0,2)-trees. This operator
is clearly in ${\mathcal O}$ and of the form
${\bf A}(z) + \Theta_1(w)$;
however it is not in either ${\mathcal O}_E$ or ${\mathcal O}_I$, as required by the main
theorem.
$\Theta$ is clearly retro and monotone.
Usual arguments show that $w = \Theta(w)$ has a unique solution $w={\bf T}$
which is in ${\mathbb{DOM}}^\star$, and we have
\begin{equation}\label{one}
{\bf T}(\rho)
\ =\ \frac{1}{2}\rho + \frac{1}{4}\rho{\bf T}(\rho)^2 +
\frac{1}{4}\rho{\bf T}(\rho^2).
\end{equation}
Since $\Theta({\bf T})$ involves ${\bf T}(z^2)$ it follows that
$\rho\le 1$ (for otherwise ${\bf T}(\rho^2)$ diverges).
Suppose $\rho<1$.
Following P\'olya let us write the equation for ${\bf T}$ as $w = {\bf E}(z,w)$
where
\[
{\bf E}(z,w)\ :=\
\frac{1}{2}z + \frac{1}{4}z w^2 + \frac{1}{4}z {\bf T}(z^2).
\]
Then the usual condition for the singularity $\rho$ is
$1 = {\bf E}_w(\rho,{\bf T}(\rho))$, that is
\begin{equation}\label{two}
1\ =\ \frac{1}{4}(\rho 2{\bf T}(\rho))\ =\ \frac{\rho{\bf T}(\rho)}{2},
\end{equation}
so $\rho{\bf T}(\rho)=2$.
Putting ${\bf T}(\rho) = 2/\rho$ into equation \eqref{one} gives
\[
\frac{2}{\rho}\ =\ \frac{1}{2}\rho + \frac{1}{\rho} +
\frac{1}{4}\rho {\bf T}(\rho^2),
\]
so
\[
4\ =\ 2\rho^2 + \rho^2{\bf T}(\rho^2).
\]
Since ${\bf T}(\rho^2) < {\bf T}(\rho)=2/\rho$ we have
\[
4\ <\ 2\rho^2 + 2\rho,
\]
a contradiction as $\rho<1$.
Thus $\rho = 1$, and we cannot apply the method of P\'olya since
${\bf E}(z,w)$ is not holomorphic at $\big(1,{\bf T}(1)\big)$.
\section{Algorithmic Aspects}
\subsection{An algorithm for nonlinearity}
Given a term $\Phi(z,w)$ that describes a composite operator $\Theta$
there is a simple algorithm to determine if $\Theta$ is nonlinear.
Let us use the abbreviation $\Delta$ for the various standard unary
operators and their restrictions as well as the elementary operators
${\bf E}(z,w)$\,.
We can assume that any
occurrence of a ${\mathbb M}$-restriction of a standard operator $\Delta$ in $\Phi$
is such that
${\mathbb M}\,\neq\,\{1\}$ since if
${\mathbb M}\,=\,\{1\}$ then $\Delta_{\mathbb M}$ is just the identity operator.
Given an occurrence of a $\Delta$ in $\Phi$ let $T_\Delta$ be the full
subtree of $\Phi$ rooted at the occurrence of $\Delta$\,.
\noindent
{\sf An algorithm to determine if a composite $\Theta$ is nonlinear}
\begin{itemize}
\item
First we can assume that constant operators $\Theta_{\bf A}$ are only located at the
leaves of the tree.
\item
If there exists a $\Delta$ in the tree of $\Phi$ such that a leaf $w$ is
below $\Delta$\,,
where $\Delta$ is either a restriction of a standard
operator or a nonlinear elementary ${\bf B}$, then $\Theta$ is nonlinear.
\item
If there exists a node labelled with multiplication in the tree such that
each of its two branches has a $w$ on or below it, then
$\Theta$ is nonlinear.
\item
Otherwise $\Theta$ is linear in $w$.
\end{itemize}
\begin{proof}[Proof of the correctness of the Algorithm.]
(A routine induction argument on terms.)
\end{proof}
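Purely as an illustration, the algorithm admits a direct rendering in code. The following Python sketch is ours; the term-tree encoding (labels \texttt{'+'}, \texttt{'*'}, \texttt{'w'}, \texttt{'const'}, and \texttt{'delta'} for an occurrence of a $\Delta$, together with a flag marking the nonlinear $\Delta$'s) is an assumption of the sketch, not notation from this paper.
\begin{verbatim}
# Sketch of the nonlinearity test on term trees.  A node is labelled '+', '*',
# 'w', 'const', or 'delta' (an occurrence of some Delta); for a 'delta' node
# the flag nl records that Delta is a restriction of a standard operator or a
# nonlinear elementary operator.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)
    nl: bool = False               # only meaningful for 'delta' nodes

def has_w(t: Node) -> bool:
    return t.label == "w" or any(has_w(c) for c in t.children)

def nonlinear(t: Node) -> bool:
    if t.label == "delta" and t.nl and has_w(t):
        return True                # a w-leaf sits below a nonlinear Delta
    if t.label == "*" and sum(has_w(c) for c in t.children) >= 2:
        return True                # a product with w on or below two factors
    return any(nonlinear(c) for c in t.children)

# Example: the term  z * MSet_2(w)  is nonlinear, since MSet_2 is a restriction.
term = Node("*", [Node("const"), Node("delta", [Node("w")], nl=True)])
print(nonlinear(term))             # True
\end{verbatim}
Here a \texttt{'const'} leaf stands for a constant operator $\Theta_{\bf A}$, kept at the leaves as in the first step of the algorithm.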
\section{Equations $w = {\bf G}\big(z,w\big)$ with mixed
sign coefficients}\label{mixed}
\subsection{Problems with mixed sign coefficients} \label{sub mixed}
We would like to include the possibility of mixed sign
coefficients in a recursion equation $w = {\bf G}(z,w)$.
The following table shows the key steps we used to prove
$\pmb{(\star)}$ holds in the nonnegative case, and the situation
if we try the same steps in the mixed sign case.
\[
\begin{tabular}{|l |l |l|}
\hline
&${\bf G}\in{\mathbb R}^{\ge 0}[[z,w]]$& ${\bf G}\in{\mathbb R}[[z,w]]$\\
&Nonnegative ${\bf G}$&Mixed Signs ${\bf G}$\\
\hline
Property&Reason &Reason\\
\hline
$(\exists!{\bf T})\,\big({\bf T}={\bf G}(z,{\bf T})\big)$&$g_{01}=0$& $g_{01}=0$\\
$\rho>0$&${\bf G}$ is bounded& ${\bf G}$ is abs. bounded\\
$\rho<\infty$&${\bf G}$ is nonlinear in $w$& (?)\\
${\bf T}(\rho)<\infty$&${\bf G}$ is nonlinear in $w$&(?)\\
${\bf G}$ holomorphic in nbhd of ${\bf T}$& ${\bf G}(\rho+\varepsilon,{\bf T}(\rho)+\varepsilon)<\infty$&(?)\\
${\bf G}_{ww}\big(\rho,{\bf T}(\rho)\big)\neq 0$&${\bf G}$ is nonlinear in $w$& (?)\\
${\bf G}_z\big(\rho,{\bf T}(\rho)\big)\neq 0$&${\bf G}_0(z)\neq 0$& (?)\\
${\sf DomSing} = \{z : z^q = \rho^q\}$&${\sf Spec}\,{\bf G}_w\big(z,{\bf T}(z)\big)$ is nice& (?)\\
\hline
\end{tabular}
\]
As indicated in this table, many of the techniques that we used for the
case of a nonnegative equation do not carry over to the mixed case.
\begin{thlist}
\item
To show that a unique solution $w={\bf T}$ exists in the mixed sign case
we can use the retro property, precisely as with the nonnegative case.
The condition for ${\bf G}\in {\mathbb R}[[z,w]]$ to be retro is that $g_{01}=0$.
\item
To show $\rho>0$ in the nonnegative case we used the existence of an
$R>0$ such that ${\bf E}(z,{\bf T}) \unlhd {\bf A}_R\big(z+{\bf T}\big)$. In the mixed
sign case we could require that ${\bf G}(z,{\bf T})$ be absolutely dominated
by ${\bf A}_R\big(z+{\bf T}\big)$.
\item
To show $\rho < \infty$ and ${\bf T}(\rho)<\infty$ we used the nonlinearity
of ${\bf E}(z,w)$ in $w$. Then ${\bf T} = {\bf E}(z,{\bf T})$ implies
${\bf T}(x) \ge e_{ij}x^i{\bf T}(x)^j$ for some $e_{ij}>0$ with $j\ge 2$.
This conclusion does not follow in the mixed sign case.
\item
After proving that ${\bf T}\in{\mathbb{DOM}}^\star[z]$,
to be able to invoke the theoretical machinery of $\S$\,\ref{theor sect}
we required that ${\bf E}$ be open at $\big(\rho,{\bf T}(\rho)\big)$, that is,
\[
\big(\exists \varepsilon>0\big)\,
\Big({\bf E}\big(\rho+\varepsilon,{\bf T}(\rho)+\varepsilon\big) \,<\, \infty \Big).
\]
This shows ${\bf E}$ is holomorphic on a neighborhood of ${\bf T}$.
In the mixed sign case there seems to be no such easy condition unless we
know that $\sum_{ij} |g_{ij}|\rho^i{\bf T}(\rho)^j < \infty$.
\item
In the nonnegative case, if ${\bf E}$ is nonlinear in $w$ then ${\bf E}_{ww}$ does
not vanish, and hence it cannot be 0 when evaluated at
$\big(\rho,{\bf T}(\rho)\big)$. With the mixed signs case, proving
${\bf G}_{ww}\big(\rho,{\bf T}(\rho)\big)\neq 0$
requires a fresh analysis.
\item
A similar discussion applies to showing ${\bf G}_z\big(\rho,{\bf T}(\rho)\big)\neq 0$.
\item
Finally there is the issue of locating the dominant singularities.
The one condition we have to work with is that the dominant singularities
must satisfy ${\bf G}_w\big(z,{\bf T}(z)\big) = 1$.
In the nonnegative case we were able to use the analysis of the
spectrum of ${\bf E}_w$:
\[
{\sf Spec}\Big({\bf E}_w\big(z,{\bf T}(z)\big)\Big)\ = \ \bigcup_n E_n + (n-1)\odot T.
\]
This tied in with an expression for the spectrum of ${\bf E}\big(z,{\bf T}(z)\big)$.
However for the mixed case we only have
\[
{\sf Spec}\Big({\bf E}_w\big(z,{\bf T}(z)\big)\Big)\ \subseteq \ \bigcup_n E_n + (n-1)\odot T.
\]
In certain mixed sign equations one has a promising property, namely
\[
{\bf G}_w\big(z,{\bf T}(z)\big)\in {\mathbb{DOM}}[z].
\]
This happens with the equation for identity trees.
In such a case put
${\bf G}_w\big(z,{\bf T}(z)\big)$ in its pure periodic form ${\bf U}(z^p)$. Then
the necessary condition on the dominant singularities $z$ becomes
simply
\[
z^p \ =\ \rho^p.
\]
If one can prove $p=q$, as we did with elementary recursions,
then the dominant singularities are as simple as one could hope for.
\end{thlist}
There is clearly considerable work to be done to develop a theory of solutions
to mixed sign recursion equations.
\subsection{The operator ${\sf Set}$} \label{Set sect} \label{sub set}
The above considerations led us to omit the
popular ${\sf Set}$ operator from our list of standard
combinatorial operators.
In the equation $w = z+z{\sf Set}(w)$ for the class of {\em identity trees}
(that is, trees whose only automorphism is the identity), one can readily show that the only
dominant singularity of the solution ${\bf T}$ is $\rho$. But if we look at more complex
equations, like
\[
w = z +z^3 + z^5 + z {\sf Set}\big({\sf Set}(w){\sf MSet}(w)\big),
\]
the difficulties of determining the locations of the dominant singularities
appear substantial.
\begin{example}
Consider the restrictions of the ${\sf Set}$ operator
\begin{eqnarray*}
{\sf Set}_{\mathbb M}({\bf T})& =& \sum_{m\in{\mathbb M}}{\sf Set}_m({\bf T}),\quad\text{where}\\
{\sf Set}_m({\bf T})& = & {\bf Z}\big({\sf S}_m,{\bf T}(z),-{\bf T}(z^2),\ldots,(-1)^{m+1}{\bf T}(z^m)\big).
\end{eqnarray*}
Thus in particular
\[
{\sf Set}_2({\bf T})\ = \ \frac{1}{2}\big({\bf T}(z)^2 - {\bf T}(z^2)\big).
\]
The recursion equation
\[
w\ =\ z\,+\,{\sf Set}_2(w)
\]
exhibits different behavior than what has been seen so far
since the solution is
$
{\bf T}(z)\, =\, z,
$
which is not a proper infinite series.
The solution certainly does not have coefficients
satisfying the universal law, nor does it have a finite radius of convergence
that played such an important role.
We can modify this equation slightly to obtain a more interesting solution,
namely let
\[
\Theta(w)\ =\ z\,+\,z^2\,+\,{\sf Set}_2(w).
\]
Then $\Theta$ is integral retro, and the unique solution $w={\bf T}$ to
$w = \Theta(w)$ is
\[
{\bf T}(z)\ =\ z \,+\, z^2\, +\,z^3\,+\,z^4\,+\,2z^5\,+\,3z^6\,+\,6z^7\,+\,11z^8\,+\,\cdots
\]
with $t(n)\ge 1$ for $n\ge 1$.
Consequently we have the radius of convergence
\begin{equation}\label{rho in 01}
\rho :=\rho_{\bf T}\in[0,1].
\end{equation}
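The coefficients displayed above are easy to check by machine; the following Python sketch (ours, for illustration only) iterates the recursion on truncated series using exact rational arithmetic.
\begin{verbatim}
# Sketch: coefficients of T, where T = z + z^2 + (T(z)^2 - T(z^2))/2.
from fractions import Fraction

N = 12
t = [Fraction(0)] * (N + 1)
for _ in range(N + 1):               # the recursion is retro, so the degree-n
    sq = [Fraction(0)] * (N + 1)     # coefficient stabilizes after n rounds
    for i in range(N + 1):
        for j in range(N - i + 1):
            sq[i + j] += t[i] * t[j]
    new = [Fraction(0)] * (N + 1)
    new[1] += 1
    new[2] += 1
    for n in range(N + 1):
        tz2 = t[n // 2] if n % 2 == 0 else Fraction(0)   # [z^n] T(z^2)
        new[n] += (sq[n] - tz2) / 2
    t = new

print([int(c) for c in t[1:9]])      # [1, 1, 1, 1, 2, 3, 6, 11]
\end{verbatim}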
We will give a detailed proof that ${\bf T}$ has coefficients satisfying the
universal law, to hint at the added difficulties that might occur
in trying to add ${\sf Set}$ to our standard operators.
Let
\[
\Theta_1(w) \ =\ z + z^2 + \frac{1}{2}w^2
\]
which is a bounded open nonlinear retro elementary operator, hence an
operator in ${\mathcal O}_E$ to which the Main Theorem applies.
For ${\bf A}\in{\mathbb{IDOM}}$ note that
$\Theta({\bf A})\, \unlhd\,\Theta_1({\bf A})$, so we can use the monotonicity of $\Theta_1$
to argue that $\Theta^n(0)\,\unlhd\,{\Theta_1}^n(0)$ for all $n\ge 1$.
Thus ${\bf T}$ is dominated by the solution
${\bf S}$ to $w=\Theta_1(w)$. At this point we know that
$\rho_{\bf T} \ge\rho_{\bf S}> 0$.
Since $\rho\in(0,1]$
we have ${\bf T}(x^2) < {\bf T}(x)$ for $x\in(0,\rho)$. Thus for $x\in(0,\rho)$
\[
{\bf T}(x)\ >\ x + x^2 + \frac{1}{2}\big({\bf T}(x)^2 - {\bf T}(x)\big),
\]
or
\[
\frac{3}{2}{\bf T}(x)\ >\ x + x^2 + \frac{1}{2}{\bf T}(x)^2.
\]
Thus ${\bf T}(x)$ cannot approach $\infty$ as $x\rightarrow \rho^-$, for otherwise the
right side, which is quadratic in ${\bf T}(x)$, would eventually exceed the left side.
Consequently ${\bf T}(\rho)<\infty$; and since $t(n)\ge 1$ for all $n\ge 1$ gives ${\bf T}(1)=\infty$,
we must also have $\rho<1$.
By defining
\begin{eqnarray*}
{\bf G}(z,w)&:=& z\,+\,z^2+\, \frac{1}{2}\big(w^2 - {\bf T}(z^2)\big)\\
& =& \frac{1}{2}w^2\,+\,z\,+\,\frac{1}{2}z^2
\,-\,\frac{1}{2}z^4
\,-\,\frac{1}{2}z^6 \,-\,\cdots,
\end{eqnarray*}
we have the recursion equation $w = {\bf G}(z,w)$ satisfied by $w={\bf T}$, and
${\bf G}(z,w)$ has mixed signs of coefficients.
As $\rho<1$ we know that $ {\bf G}(z,w)$
is holomorphic in a neighborhood of the graph of ${\bf T}$, so
a necessary condition for $z$ to be a dominant singularity is that
${\bf G}_w\big(z,{\bf T}(z)\big)=1$, that is,
${\bf T}(z)\ =\ 1$.
Since ${\bf T}$ is aperiodic,
this tells us we have a {\em unique dominant singularity}, namely $z=\rho$,
and we have ${\bf T}(\rho)\ =\ 1$.
Differentiating the equation
\[
{\bf T}(z)\ =\ z + z^2 + \frac{1}{2}{\bf T}(z)^2 - \frac{1}{2}{\bf T}(z^2)
\]
gives
\[
{\bf T}'(z)\ =\ 1 + 2z + {\bf T}(z){\bf T}'(z) - z {\bf T}'(z^2)
\]
or equivalently
\[
\big(1-{\bf T}(z)\big){\bf T}'(z)
\ =\
(1 + 2z)\,-\, z{\bf T}'(z^2)
\quad\text{for }|z|<\rho.
\]
Since $\rho<1$ we know that
\[
\lim_{\substack{z\rightarrow \rho\\|z|<\rho}} \Big((1+2z) -z{\bf T}'(z^2)\Big)
\ =\ (1 + 2\rho) - \rho{\bf T}'(\rho^2).
\]
Let $\lambda$ be this limiting value.
Consequently
\begin{equation}\label{TT'}
\lim_{\substack{z\rightarrow \rho\\|z|<\rho}} \big(1-{\bf T}(z)\big){\bf T}'(z)\ =\ \lambda.
\end{equation}
By considering the limit along the real
axis, as $x\rightarrow \rho^-$, we see that $\lambda\ge 0$, so
\[
(1 + 2\rho) - \rho{\bf T}'(\rho^2)\ =\ \lambda \ \ge \ 0.
\]
Let
\[
{\bf F}(z,w)\ :=\ w \,-\,\Big(z + z^2 + \frac{1}{2}w^2 - \frac{1}{2}{\bf T}(z^2)\Big).
\]
Then
\[
{\bf F}_z(z,w)\ =\ -\Big(1 + 2z - z{\bf T}'(z^2)\Big)\ =\ z{\bf T}'(z^2) - (1+2z),
\]
so
\[
{\bf F}_z\big(\rho,{\bf T}(\rho)\big)\ =\ \rho{\bf T}'(\rho^2) - (1+2\rho)\ =\ -\lambda.
\]
If $\lambda > 0$ then
${\bf F}_z\big(\rho,{\bf T}(\rho)\big)\, <\, 0$; and since ${\bf F}_{ww} = -1$ we have
\[
{\bf F}_z\big(\rho,{\bf T}(\rho)\big)
{\bf F}_{ww}\big(\rho,{\bf T}(\rho)\big)\ > \ 0.
\]
This means we have all the hypotheses needed to apply Proposition \ref{crucial1}
to get the square root asymptotics which lead to the universal law for ${\bf T}$.
To conclude that we indeed have the universal law we will show that $\lambda >0$.
Let $\alpha\in[\rho,1]$. Then for $x\in(0,\rho)$ we have
${\bf T}(x^2) \le \alpha {\bf T}(x)$ (since $x^{2n}=x^n\cdot x^n\le \alpha x^n$ for $n\ge 1$), and thus for $x \in (0,\rho)$
\[
{\bf T}(x)\ >\ x + x^2 + \frac{1}{2}\Big({\bf T}(x)^2 - \alpha{\bf T}(x)\Big).
\]
Let
\[
{\bf U}(x)\ =\ x + x^2 + \frac{1}{2}\Big({\bf U}(x)^2 - \alpha{\bf U}(x)\Big).
\]
Then
\begin{eqnarray*}
{\bf U}(x)& =& \frac{1}{2} \Big( (2+\alpha) - \sqrt{ (2+\alpha)^2 - 8(x + x^2)}\Big)\\
\rho_{\bf U} &=& -\frac{1}{2} + \frac{1}{4}\sqrt{4 + 2(2+\alpha)^2}.
\end{eqnarray*}
Now for $x\in I := (0,\min(\rho,\rho_{\bf U}))$
\[
(2+\alpha){\bf T}(x) - {\bf T}(x)^2
\ >\
(2+\alpha){\bf U}(x) - {\bf U}(x)^2
\]
so
\[
{\bf U}(x)^2 - {\bf T}(x)^2\ >\ (2 + \alpha)\big( {\bf U}(x) - {\bf T}(x)\big).
\]
Thus ${\bf U}(x) \neq {\bf T}(x)$ for $x\in I$. If ${\bf U}(x)>{\bf T}(x)$ on $I$ then
\[
{\bf U}(x) + {\bf T}(x)\ >\ 2 + \alpha \quad\text{for }x\in I.
\]
But this is impossible since on $I$ we have
\begin{eqnarray*}
{\bf U}(x) &<& {\bf U}(\rho_{\bf U}) = 1 + \alpha/2\\
{\bf T}(x) &<& {\bf T}(\rho)\ =\ 1.
\end{eqnarray*}
Thus we have
\[
{\bf U}(x) \ <\ {\bf T}(x)\quad\text{on } I.
\]
If $\rho_{\bf U} \le \rho$ then, letting $x\rightarrow\rho_{\bf U}^-$, we get ${\bf U}(\rho_{\bf U}) \le {\bf T}(\rho_{\bf U})$; this is also impossible, since ${\bf U}(\rho_{\bf U})=1+\alpha/2>1\ge{\bf T}(\rho_{\bf U})$.
Thus
\[
\rho < \rho_{\bf U}.
\]
Now define a function $f$ on $[\rho,1]$ that maps $\alpha\in[\rho,1]$ to
$\rho_{\bf U}$ as given in the preceding lines, that is:
\[
f(\alpha)\ =\ (-1/2) + (1/4)\sqrt{4 + 2(2+\alpha)^2}.
\]
Then $\alpha \in [\rho,1]$ implies $f(\alpha) \in (\rho,1]$.
Calculation gives $f^3(1) = 0.536\ldots$, so
\[
\rho \ <\ 0.54
\]
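This numerical bound is trivial to reproduce; for instance, the following throwaway Python check (ours, not part of the argument) prints the first three iterates of $f$ starting at $1$.
\begin{verbatim}
# Sketch: iterate f(alpha) = -1/2 + (1/4)*sqrt(4 + 2*(2 + alpha)^2) from alpha = 1.
import math

def f(alpha):
    return -0.5 + 0.25 * math.sqrt(4 + 2 * (2 + alpha) ** 2)

a = 1.0
for k in range(1, 4):
    a = f(a)
    print(k, round(a, 4))   # 1 0.6726, 2 0.569, 3 0.5368
\end{verbatim}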
Since $\rho_{\bf S} = \big(\sqrt{3}-1\big)/2 = 0.366\ldots$
we have $\rho^2 < \rho_{\bf S}$,
and then
\[
{\bf T}'(\rho^2)\ <\ {\bf S}'(\rho^2),
\]
so
\[
-\lambda \ =\ \rho{\bf T}'(\rho^2) - (1+2\rho)\ <\ \rho{\bf S}'(\rho^2) - (1+2\rho)\ <\ 0
\]
since $x{\bf S}'(x^2) - (1+2x)\ < \ 0$ for $x\in (0,0.55)$.
This proves $\lambda>0$, and hence the universal law holds for the coefficients of
${\bf T}$.
This example shows that the generating function ${\bf T}^*$ for the class
of identity (0,1,2)-trees satisfies the universal law. We have
${\bf T}^*$ defined by the equation $w = z + z\, {\sf Set}_{\{1,2\}}(w)$, and it turns out
that $t^*(n) = t(n+1)$. (We discovered this connection with ${\bf T}^*$
when looking for the first few coefficients of ${\bf T}$ in the {\em On-Line
Encyclopedia of Integer Sequences}.)
\end{example}
\begin{example}
In the 20 Steps paper of
Harary, Robinson and Schwenk \cite{HRS}
the asymptotics for the class of {\em identity trees} (those with no nontrivial automorphism)
was successfully analyzed
by first showing that the associated recursion equation
\[
w\ =\ z \,+\, z {\sf Set}(w)
\]
has a unique solution $w={\bf T}\in{\mathbb{DOM}}^\star[z]$.
Then
\[
{\bf G}(z,w)\ :=\ z\,+\,z e^w\cdot\exp\Big(\sum_{m\ge 2}(-1)^{m+1}{\bf T}(z^m)/m\Big)
\]
is holomorphic in a neighborhood of the graph of ${\bf T}$. One has
\[
z + {\bf G}_w(z,w)\ =\ {\bf G}(z,w),
\]
so the necessary condition
${\bf G}_w\big(z,{\bf T}(z)\big)=1$
for a dominant singularity
is just the condition ${\bf T}(z) = 1+z$.
$\rho$ is the only solution of this equation on the circle
of convergence as ${\bf T}-z\,\unrhd\,0$ and is aperiodic.
Consequently {\em the only dominant singularity is $z=\rho$}.
The equation for identity trees is of mixed signs.
From
\begin{equation}\label{SetEq}
{\bf T}(z)\ =\ z \prod_{j\ge 1}\big(1+z^j\big)^{t_j}
\end{equation}
we can calculate the first few values of $t(n)$ for identity trees:\footnote{One
can also look up sequence number A004111 in the
{\em On-Line Encyclopedia of Integer Sequences.}}
\[
\begin{array}{| c | c | c | c | c | c | c | c | c | c |}
t(1)&t(2)&t(3)&t(4)&t(5)&t(6)&t(7)&t(8)&t(9)&t(10)\\
\hline
1&1&1&2&3&6&12&25&52&113
\end{array}
\]
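These values are easily reproduced from \eqref{SetEq}; here is a short Python sketch (ours, purely illustrative) that peels the coefficients off the product one degree at a time.
\begin{verbatim}
# Sketch: rooted identity trees via T(z) = z * prod_{j>=1} (1 + z^j)^{t_j}.
N = 10
t = [0] * (N + 1)           # t[n] = number of rooted identity trees on n nodes
P = [0] * (N + 1)
P[1] = 1                    # P holds z * prod_{j<=k} (1 + z^j)^{t_j}, truncated
for n in range(1, N + 1):
    t[n] = P[n]             # factors with j >= n do not affect the degree-n term
    for _ in range(t[n]):   # multiply P by (1 + z^n)^{t[n]}
        for d in range(N, n - 1, -1):
            P[d] += P[d - n]
print(t[1:])                # [1, 1, 1, 2, 3, 6, 12, 25, 52, 113]
\end{verbatim}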
Returning to the definition of ${\bf G}$ we have
\begin{eqnarray*}
{\bf G}(z,w)& =& z + z e^w
\sum_{n\ge 0}
\Big(\sum_{m\ge 2} (-1)^{m+1}\big(t(1)z^m + t(2)z^{2m} +\cdots\big)\big/m\Big)^n\Big/ n!\\
& =& z + z e^w
\sum_{n\ge 0}\Big(-\big(z^2 + z^{4} +\cdots\big)\big/2
\ +\ \big(z^3 + z^{6} +\cdots\big)\big/3\\
&&\qquad\ -\ \big(z^4 + z^{8} +\cdots\big)\big/4
\Big)^n\Big/ n!\\
& =& z + z e^w
\sum_{n\ge 0}\Big(-z^2/2 + z^3/3 -3 z^{4}/4 +\cdots\Big)^n\Big/ n!\\
& =&z + z e^w
\Big(1 -z^2/2 + z^3/3 - 5z^{4}/8 +\cdots\Big).
\end{eqnarray*}
Thus for some of the $z^iw^j$ the coefficients are positive, and some are
negative; ${\bf G}(z,w)$ is a mixed sign operator.
\end{example}
If one were to form more complex operators $\Theta$ by adding the operator
${\sf Set}$ and its restrictions ${\sf Set}_{\mathbb M}$ to our set of Standard Operators, then
there is some hope for proving that one always has the universal
law holding for the solution to $w = \Theta(w)$, provided one has a solution
that is not a polynomial.
The hope stems from the fact that although the ${\bf G}(z,w)$
associated with $\Theta$ may have mixed sign coefficients, when it comes to the
condition ${\bf G}_w\big(z,{\bf T}(z)\big) = 1$ on the dominant singularities we have
the good fortune that ${\bf G}_w\big(z,{\bf T}(z)\big)\unrhd 0$, that is, it expands
into a series with nonnegative coefficients. The reason is quite simple,
namely using the bivariate generating function we have
\[
{\sf Set}_m({\bf T})\ =\ [u^m]\,\exp\Big(\sum_{n\ge 1} (-1)^{n-1}u^n{\bf T}(z^n)/n\Big).
\]
Letting ${\bf G}_m(z,w)$ be ${\bf Z}_m\big({\sf S}_m,w,-{\bf T}(z^2),\ldots,(-1)^{m+1}{\bf T}(z^m)\big)$
we have
\[
{\bf G}_m(z,w)\ =\
[u^m]\,e^{uw}\cdot\exp\Big(\sum_{n\ge 2} (-1)^{n-1}u^n{\bf T}(z^n)/n\Big),
\]
thus
\begin{eqnarray*}
\frac{\partial {\bf G}_m}{\partial w}
& =& \frac{\partial }{\partial w}
[u^m]\,e^{uw}\cdot\exp\Big(\sum_{n\ge 2} (-1)^{n-1}u^n{\bf T}(z^n)/n\Big)\\
& =& [u^m]\,u e^{uw}\cdot\exp\Big(\sum_{n\ge 2} (-1)^{n-1}u^n{\bf T}(z^n)/n\Big)\\
& =& [u^{m-1}]\, e^{uw}\cdot\exp\Big(\sum_{n\ge 2} (-1)^{n-1}u^n{\bf T}(z^n)/n\Big)\\
&=& {\bf G}_{m-1}(z,w).
\end{eqnarray*}
Consequently if we put ${\bf G}_w\big(z,{\bf T}(z)\big)$ into
its pure periodic form ${\bf U}(z^p)$ then we have the necessary condition
$z^p = \rho^p$ on the dominant singularities $z$. Letting
$z^d{\bf V}(z^q)$ be the shift periodic form of ${\bf T}(z)$ it follows that
$q \big| p$. If we can show that $p = q$ then
${\sf DomSing} = \{z : z^q = \rho^q\}$, which is as simple as possible.
Indeed this has been the case with the few examples we have worked out by
hand.
\section{Comments on Background Literature} \label{history}
Two important sources offer global views on finding asymptotics.
\subsection{The ``20 Step algorithm'' of \cite{HRS}\label{20 step}}
This 1975 paper by Harary, Robinson and Schwenk
is in good part a heuristic for
how to apply P\'olya's method\footnote{The paper also has a
proof that the generating function for the class of identity trees
(defined by $w = z + z {\sf Set}(w)$) satisfies the universal law.},
and in places the
explanations show an affinity for operators close to the
original ones studied by P\'olya.
For example it says that
${\bf G}(z,w)$ should be analytic for $|z|\,<\,\sqrt{\rho_{\bf T}}$ and
$|w|\,<\,\infty$. This strong condition on $w$ fails for most of the simple
classes studied by Meir and Moon, and hence for the setting of this
paper.
The algorithm of {\em 20 Steps} also discusses how to find asymptotics for the class of
free trees obtained from a rooted class defined by recursion.
Given a class ${\mathcal T}$ of rooted trees let ${\mathcal U}$ be the associated class of
free (unrooted) trees, that is, the members of ${\mathcal U}$ are the same
as the members of
${\mathcal T}$ except that the designation of an element as the root has been
removed. Let the corresponding generating series\footnote{{\em 20 Steps}
uses $t(n)$ to denote the number of rooted trees in ${\mathcal T}$ on $n+1$ points, whereas
$u(n)$ denotes the number of free trees in ${\mathcal U}$ on $n$ points. We will let $t(n)$
denote the number of rooted trees in ${\mathcal T}$ on $n$ points, as we have
done before, since this will have no material effect on the efficacy of
the 20 Steps.}
be ${\bf T}(z)$ and ${\bf U}(z)$.
The initial assumptions are only two: that ${\bf T}$ is not a polynomial and
it is aperiodic.
Step 2 of the 20 steps is: express ${\bf U}(z)$ in terms of ${\bf T}(z)$ and ${\bf T}(z^2)$.
{\em 20 Steps}
says that Otter's {\em dissimilarity characteristic} can usually be
applied to achieve this.
Step 20 is to deduce that $u(n) \sim C \rho^{-n} n^{-5/2}$.
This outline suggests that it is widely possible to find the asymptotics
of the coefficients $u(n)$, and evidently this gives a second universal law
involving the exponent $-5/2$ instead of the $-3/2$.
Our investigations suggest that determining the growth
rate of the associated classes of free trees will be quite challenging.
Suppose ${\mathcal T}$ is a class of rooted trees for which the P\'olya
style analysis has been successful, that we have found the radius of convergence
$\rho \in (0,1)$, that ${\bf T}(\rho)< \infty$, and that
$t(n) \sim C \rho^{-n} n^{-3/2}$. What can we say about the generating
function ${\bf U}(z)$ for the corresponding class of free trees?
Since we have the inequality $t(n)/n \le u(n) \le t(n)$ (note
that one has only $n$ ways to choose the root in a free tree of size $n$), it follows
by the Cauchy-Hadamard Theorem that ${\bf U}(z)$ has the same radius of convergence
$\rho$ as ${\bf T}(z)$. From this and the asymptotics for $t(n)$ it also follows
that one can find $C_1,C_2>0$ such that for $n\ge 1$
\[
C_1 \rho^{-n} n^{-5/2}\ \le \ u(n)\ \le \ C_2 \rho^{-n} n^{-3/2}.
\]
Thus $u(n)$ is sandwiched between a $-5/2$ expression and a $-3/2$ expression.
In the case that ${\mathcal T}$ is the class of {\em all} rooted trees, Otter \cite{Otter}
showed that
\[
{\bf U}(z)\ =\ {\bf T}(z) - {\sf Set}_2({\bf T})
\ =\ {\bf T}(z) - \frac{1}{2}\big({\bf T}(z)^2 - {\bf T}(z^2)\big),
\]
and from this he was able to find the asymptotics for $u(n)$ with
a $-5/2$ exponent.
However let ${\mathcal T}$ be the class of rooted
trees such that every node has either 0, 2, or 5 descending branches. The
recursion equation for ${\bf T}(z)$ is
\[
{\bf T}(z)\ =\ z\,+\,z\Big({\sf MSet}_2({\bf T}) \,+\, {\sf MSet}_5({\bf T})\Big),
\]
and ${\bf T}(z)$ is aperiodic.
By Theorem \ref{Main Thm} we know that the coefficients of ${\bf T}(z)$ satisfy the
universal law
\[
t(n)\ \sim\ C \rho^{-n} n^{-3/2}.
\]
Let ${\mathcal U}$ be the corresponding set of free trees. Note that when one converts a
rooted tree $T$ in ${\mathcal T}$ to a free tree $F$, a root with 2 descending branches
will give a node of degree 2 in $F$, and a root with 5 descending branches will
give a node of degree 5 in $F$.
Any non root node with 2 descending branches will give a node of degree 3 in $F$;
and any non root node with 5 descending branches will give a node of degree 6 in $F$.
Thus $F$ will have exactly one node of degree 2 or degree 5, and not both, so one
can identify the node that corresponds to the root of $T$. This means that there
is a bijection between the rooted trees on $n$ vertices in ${\mathcal T}$ and the free
trees on $n$ vertices in ${\mathcal U}$. Consequently $t(n)=u(n)$, and thus
\[
u(n)\ \sim\ C \rho^{-n} n^{-3/2}.
\]
Clearly $u(n)$ cannot also satisfy a $-5/2$ law. Such examples are easy
to produce.
Thus it is not clear to what extent the program of {\em 20 Steps} can be carried
through for free trees. It seems that free trees are rarely defined by a
single recursion equation, and it is doubtful if there is always a
recursive relationship between ${\bf U}(z)$ and ${\bf T}(z),{\bf T}(z^2),\ldots$.
Furthermore it is not clear what the possible asymptotics for the $u(n)$
could look like---is it possible that one will always have either a
$-3/2$ or a $-5/2$ law? Since a class ${\mathcal U}$ derived from a ${\mathcal T}$ which
has a nice recursive specification can be defined by
a monadic second order sentence, there is hope that the $u(n)$
will obey a reasonable asymptotic law. (See Q5 in $\S\,$\ref{Open Probs}.)
In {\em 20 Steps} consideration is also given to techniques for calculating
the radius
of convergence $\rho$ of ${\bf T}$ and the constant $C$ that appears in the
asymptotic formula for the $t(n)$\,. In this regard the reader should consult
the paper of Plotkin and Rosenthal \cite{Pl:Ro} as there are evidently some
numerical errors in the constants calculated in {\em 20 Steps}.
\subsection{Meir and Moon's global approach}\label{MM sec}
In 1978 Meir and Moon \cite{Me:Mo1}
considered classes ${\mathcal T}$ of trees with generating
functions ${\bf T}(z)\,=\,\sum_{n\ge 1} t(n) z^n$
such that
\begin{quote}
\begin{thlist}\itemsep=1ex
\item[1]
$t(1)\,=\,1$\,;
\item[2]
${\mathcal T}$ can be obtained by taking certain forests of trees
from ${\mathcal T}$ and adding a root to each one
(this choice of certain forests is evidently a `construction');
\item[3]
this `construction' and `conditions implicit in the definition' of ${\mathcal T}$
give rise to a `recurrence relation' for the $t(n)$\,, evidently
a sequence $\sigma$ of functions $\sigma_n$ such that
$t(n)\,=\,\sigma_n\big(t(1),\ldots,t(n-1)\big)$\,;
\item[4]
there is an `operator' $\Gamma$\,, acting on (possibly infinite) sequences
of power series, such that the recurrence relation for the $t(n)$
`can be expressed in terms of generating series', for example
${\bf T}(z)\,=\,\Gamma\big({\bf T}(z),{\bf T}(z^2),\ldots\big)$\,, which is abbreviated
to ${\bf T}(z)\,=\,\Gamma\{{\bf T}(z)\}$\,.
\end{thlist}
\end{quote}
This is the most penetrating presentation we have seen of a foundation
for recursively defined classes of trees, a goal that we find most
fascinating since to prove global results one needs a global setting.
However their conditions have limitations that we want to point out.
(1) requires $t(1)=1$, so there is only one object of size 1; this means
multicolored trees are ruled out. (2) indicates that one
is using a specification\footnote{We have not needed a specification
language so far in this paper---for this comment it is useful.
Let $\bullet$ denote the tree with one node, and $\bullet\big/\Box$ says to
add a root to any forest in $\Box$\,.
}
like
${\mathcal T}\,=\,\{\bullet\}\,\cup\,\bullet\big/{\sf Seq}_{\mathbb M}({\mathcal T})$\,. The recurrence
relation in (3) is the one item that seems to be appropriately general. It
corresponds to what we call `retro'. (4) is too
vague; after all, a function of $\big({\bf T}(z),{\bf T}(z^2),\ldots\big)$
is really just a function of ${\bf T}(z)$ since ${\bf T}(z)$ completely
determines all the ${\bf T}(z^k)$. This
formulation is surely motivated by the desire to include the ${\sf MSet}$
construction; perhaps the authors were thinking of `natural' functions
of these arguments like $\sum_{n\ge 1} {\bf T}(z^n)/n$\,; or perhaps something
of an effective nature, an algorithm.
After this general discussion, without any attempt to prove theorems in this
context, they turn the focus to \emph{simple} classes ${\mathcal T}$ of rooted trees,
namely those for which
\begin{quote}
\begin{itemize}
\item[(M1)]
the generating series ${\bf T}(z)$ is defined by a `simple' recursion
equation
\[
w\ =\ z {\bf A}(w)\,,
\]
where ${\bf A}\,\in\,{\mathbb R}^{\ge 0}[[z]]$ with ${\bf A}(0)\,=\,1$\,.
\end{itemize}
\end{quote}
Additional conditions\footnote{Their original 1978 conditions
had a minor restriction, that $a(1) > 0$. That was soon replaced
by the condition (M3)---see for example \cite{Me:Mo3}.
}
are needed to prove their theorems,
namely
\begin{quote}
\begin{itemize}
\item
[(M2)] ${\bf A}(w)$ is analytic at 0,
\item
[(M3)] $\gcd\{n\in {\mathbb P}: a(n)>0\}\, =\,1$\,,
\item
[(M4)] ${\bf A}(w)$ is not a linear polynomial $aw+b$\,, and
\item
[(M5)] ${\bf A}(w)\,=\,w {\bf A}'(w)$ has a positive solution
$y\,<\,\rho_{\bf A}$
\end{itemize}
\end{quote}
to guarantee that the methods of P\'olya apply to give the
asymptotic form $\pmb{(\star)}$\,.
(M2) makes ${\bf T}(z)$ analytic at 0, (M3) ensures that ${\bf T}(z)$ is aperiodic,
(M4) leads to $\rho_{\bf T}\,< \,\infty$ and ${\bf T}(\rho_{\bf T})\,<\,\infty$\,,
and (M5) shows that ${\bf F}(z,w)\,:=\,w\,-\,z {\bf A}(w)$
is holomorphic in a neighborhood
of $\big(\rho_{\bf T},{\bf T}(\rho_{\bf T})\big)$.
Thanks to the restriction to recursion identities based on simple operators
they are able to employ the more powerful condition (M5) instead of our
condition ${\bf A}(\rho_{\bf A})\,=\,\infty$\,. Our condition
is easier to use in practice, and it covers the two examples frequently cited
by Meir and Moon, namely planar trees with ${\bf A}(w)\,=\,\sum_{n\ge 0}w^n$
and planar binary trees with ${\bf A}(w)\,=\,1+w^2$\,.
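Concretely, for planar trees ${\bf A}(w)=1/(1-w)$, so $\rho_{\bf A}=1$ and ${\bf A}(\rho_{\bf A})=\infty$; for planar binary trees $\rho_{\bf A}=\infty$ and again ${\bf A}(\rho_{\bf A})=\infty$. Checking (M5) instead means solving ${\bf A}(w)=w{\bf A}'(w)$: in the first case this reads $1/(1-w)=w/(1-w)^2$, giving $y=1/2<1=\rho_{\bf A}$, and in the second it reads $1+w^2=2w^2$, giving $y=1<\infty=\rho_{\bf A}$.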
For the simple recursion equations one can replace
our ${\bf A}(\rho_{\bf A})\,=\,\infty$ by the condition
${\bf A}'(\rho_{\bf A})\,=\,\infty$.
\section{Open Problems} \label{Open Probs}
\begin{thlist}
\item [Q1]
If ${\bf T} = {\bf E}(z,{\bf T})$ with ${\bf E}\in{\mathbb{DOM}}[z,w]$ and ${\bf T}(z)$ has
the shift periodic form $z^d {\bf V}(z^q)$ then one can use the
spectrum calculus to show there is an ${\bf H}\in{\mathbb{DOM}}[z,w]$ such that
${\bf V}(z) = {\bf H}\big(z,{\bf V}(z)\big)$.
If ${\bf E}$ is open at $\big(\rho_{\bf T},{\bf T}(\rho_{\bf T})\big)$
does it follow that
${\bf H}$ is open at $\big(\rho_{\bf V},{\bf V}(\rho_{\bf V})\big)$?
If so one would have an easy way of reducing the multi-singularity case
of ${\bf T}$ to the unique singularity case of ${\bf V}$.
As mentioned in $\S\,$\ref{altern}, we were not able to prove this,
but instead needed an additional hypothesis on ${\bf E}$.
Partly in order to avoid this extra hypothesis we used a detailed singularity
analysis approach.
\item[Q2]
Determine whether or not the ${\sf Set}_{\mathbb M}$ operators can be adjoined
to the standard operators used in this paper and still have the
universal law hold. (See $\S$\,\ref{Set sect}.)
A simple and interesting case to consider is that of identity
(0,1,...,m)-trees with generating function ${\bf T}$ defined by
$w = z + z{\sf Set}_{\{1,...,m\}}(w)$. Does ${\bf T}$ satisfy the universal
law? Example 78 shows the answer is yes for $m=2$, but it seems
the question is open for any $m \ge 3$.
\item[Q3]
Expand the theory to handle recursion equations $w = {\bf G}(z,w)$
with ${\bf G}$ having mixed sign coefficients.
\item[Q4]
Find large collections of classes
satisfying the universal law
(or any other law) that are recursively defined by
systems of equations
\[
(**)\quad
\begin{array}{l l l}
w_1&=&\Theta_1\big(w_1,\ldots,w_k\big)\\
&\vdots&\\
w_k&=&\Theta_k\big(w_1,\ldots,w_k\big),
\end{array}
\]
where the $\Theta_i$ are multivariate operators.
Classes defined by specifications using the standard operators
that correspond to such a system of equations are called
{\em constructible} classes by Flajolet and Sedgewick;
the asymptotics of the case that the operators are {\em polynomial} has been
studied in \cite{Fl:Se}, Chapter VII, \emph{provided the dependency digraph
has a single strong component},
and shown to satisfy the universal law $\pmb{(\star)}$.
\item[Q5]
The study of systems (**) is of particular interest to those investigating
the behavior of monadic second order definable classes ${\mathcal T}$
since every such class is a finite disjoint union of some of
the ${\mathcal T}_i$ defined by such a system (see Woods \cite{Wo}).
In brief, Woods proved: \emph{every MSO class of trees with the same radius
as the whole class of trees satisfies the universal law.} However his results
seem to give very little for the MSO classes of smaller radius beyond the
fact that they have smaller radius.
Here is a plausible direction:
If ${\mathcal T}$ is a class of trees defined by a MSO sentence, does it follow
that ${\mathcal T}$ decomposes into finitely many ${\mathcal T}_i$ such that each satisfies
a nice law on its support?
\item[Q6]
Among the MSO classes of trees perhaps the best known are the \emph{exclusion}
classes ${\mathcal T}\,=\,{\sf Excl}(T_1,\ldots,T_n)$, defined by saying that certain trees
$T_1,\ldots,T_n$ are not to appear as induced subtrees.
The $T_i$ are called \emph{forbidden trees}\,.
A good example is
`trees of height $n$', defined by excluding a chain of height $n+1$\,;
or unary-binary trees defined by excluding the four-element tree of height 1.
Even restricting one's attention to the collection of classes
${\mathcal T}\,=\,{\sf Excl}(T)$ defined by excluding a single tree
offers considerable challenges to the development of a global theory of
enumeration.
Which of these classes are defined by recursion?
Which of these obey the universal law?
Given two trees
$T_1,T_2$\,, which of ${\sf Excl}(T_1),{\sf Excl}(T_2)$ has the greater radius (for its
generating series)?
From Schwenk \cite{Schw} (1973) we know that if one excludes any limb from the
class of trees then the radius of the resulting class is larger than
what one started with.
Much later, in 1997, Woods \cite{Wo}
rediscovered a part of Schwenk's result in the context of logical
limit laws; this can be used to quickly show that the class of
free trees has a monadic second order 0--1 law.
Aaron Tikuisis, an undergraduate at the University of Waterloo, has
determined which ${\sf Excl}(T)$ have radius $<1$.
\item[Q7]
Find a method to determine the asymptotics of a class ${\mathcal U}$ of free trees
obtained from a recursively defined class ${\mathcal T}$ of rooted trees.
(See $\S\,$\ref{20 step}.)
\end{thlist}
\end{document} |
\begin{document}
\title{Some properties of differentiable $p$-adic functions}
\author[Fern\'andez]{J. Fern\'andez-S\'anchez}
\address[J. Fern\'andez-S\'anchez]{\mbox{}\newline \indent
Grupo de investigaci\'on ``Teor\'ia de c\'opulas y aplicaciones'', \newline \indent
Universidad de Almer\'ia, \newline \indent
Carretera de Sacramento s/n, \newline \indent
04120 Almer\'ia (Spain).}
\email{juanfernandez@ual.es}
\author[Maghsoudi]{S. Maghsoudi}
\address[S. Maghsoudi]{\mbox{}\newline \indent Department of Mathematics,\newline \indent University of Zanjan,\newline \indent Zanjan 45371-38791, Iran}
\email{s\_maghsodi@znu.ac.ir}
\author[Rodr\'{i}guez]{D.L.~Rodr\'{i}guez-Vidanes}
\address[D.L.~Rodr\'{i}guez-Vidanes]{\mbox{}\newline\indent Instituto de Matem\'atica Interdisciplinar (IMI), \newline \indent Departamento de An\'{a}lisis y Matem\'{a}tica Aplicada \newline \indent Facultad de Ciencias Matem\'{a}ticas \newline \indent Plaza de Ciencias 3 \newline \indent Universidad Complutense de Madrid \newline \indent Madrid, 28040 (Spain).}
\email{dl.rodriguez.vidanes@ucm.es}
\author[Seoane]{J.B. Seoane--Sep\'ulveda}
\address[J.B. Seoane--Sep\'ulveda]{\mbox{}\newline\indent Instituto de Matem\'atica Interdisciplinar (IMI), \newline\indent Departamento de An\'{a}lisis Matem\'{a}tico y Matem\'atica Aplicada,\newline\indent Facultad de Ciencias Matem\'aticas, \newline\indent Plaza de Ciencias 3, \newline\indent Universidad Complutense de Madrid,\newline\indent 28040 Madrid, Spain.}
\email{jseoane@ucm.es}
\keywords{$p$-adic function, strict differentiability, Lipschitz function, bounded $p$-adic derivative, lineability, algebrability.}
\subjclass[2010]{MSC 2020: 15A03, 46B87, 26E30, 46S10, 32P05.}
\thanks{The third and fourth authors were supported by Grant PGC2018-097286-B-I00. The third author was also supported by the Spanish Ministry of Science, Innovation and Universities and the European Social Fund through a “Contrato Predoctoral para la Formación de Doctores, 2019” (PRE2019-089135). The second author was supported by the Iran National Science Foundation (INSF) grant no. 99019850.}
\begin{abstract}
In this paper, using the tools from the lineability theory, we distinguish certain subsets of $p$-adic differentiable functions. Specifically, we show that the following sets of functions are large enough to contain an infinite dimensional algebraic structure: (i) continuously differentiable but not strictly differentiable functions, (ii) strictly differentiable functions of order $r$ but not strictly differentiable of order $r+1$, (iii) strictly differentiable functions with zero derivative that are not Lipschitzian of any order $\alpha >1$, (iv) differentiable functions with unbounded derivative, and (v) continuous functions that are differentiable on a full set with respect to the Haar measure but not differentiable on its complement having cardinality the continuum.
\end{abstract}
\maketitle
\section{Introduction and terminology}
In the non-Archimedean setting, at least two notions of differentiability have been defined: the classical derivative and the strict derivative. The classical derivative has some unpleasant and strange behaviors, and it has long been known that strict differentiability in the sense of Bourbaki is the hypothesis most useful for geometric applications, such as the inverse function theorem. Let us recall the definitions.
Let $\mathbb K$ be a valued field containing $\mathbb{Q}_p$ such that $\mathbb K$ is complete (as a metric space), and $X$ be a nonempty subset of $\mathbb K$ without isolated points.
Let $f\colon X \to \mathbb K$ and $r$ be a natural number. Set
$$
\nabla^{r} X:=\begin{cases}
X & \text{if } r=1,\\
\left\{\left(x_1, x_2,\ldots, x_r\right)\in X^r \colon x_i\neq x_j \text{ if } i\neq j \right\} & \text{if } r\geq 2.
\end{cases}
$$
The $r$-th difference quotient $\Phi_rf: \nabla^{r+1} X \to \mathbb K$ of $f$, with $r\geq 0$, is recursively given by $\Phi_0f=f$ and, for $r\geq 1$ and $(x_1, x_2,\ldots, x_{r+1})\in \nabla^{r+1} X$, by
$$
\Phi_r f(x_1, x_2,\ldots, x_{r+1})=\frac{\Phi_{r-1}f(x_1, x_3,\ldots, x_{r+1})-\Phi_{r-1}f(x_2, x_3,\ldots, x_{r+1})}{x_1-x_2}.
$$
Then a function $f\colon X \to \mathbb K$ is said to be, at a point $a\in X$:
\begin{itemize}
\item \textbf{differentiable} if $\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$ exists;
\item \textbf{strictly differentiable of order $r$} if $\Phi_r f$ can be extended to a continuous function $\overline{\Phi}_r f: X^{r+1} \to \mathbb K$. We then set $D_rf(a)=\overline{\Phi}_r f(a, a,\ldots, a)$.
\end{itemize}
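The recursion defining $\Phi_r f$ translates directly into code; the following small Python sketch (ours, written over exact rationals rather than a non-Archimedean field, and purely to fix the indexing) evaluates $\Phi_r f$ at pairwise distinct points.
\begin{verbatim}
# Sketch of the r-th difference quotient Phi_r f at pairwise distinct points.
from fractions import Fraction

def Phi(f, r, xs):
    """Phi_r f(x_1,...,x_{r+1}); xs is a tuple of r+1 pairwise distinct points."""
    if r == 0:
        return f(xs[0])
    a = Phi(f, r - 1, (xs[0],) + xs[2:])   # Phi_{r-1} f(x_1, x_3, ..., x_{r+1})
    b = Phi(f, r - 1, (xs[1],) + xs[2:])   # Phi_{r-1} f(x_2, x_3, ..., x_{r+1})
    return (a - b) / (xs[0] - xs[1])

square = lambda x: x * x
print(Phi(square, 1, (Fraction(3), Fraction(1))))               # 4
print(Phi(square, 2, (Fraction(3), Fraction(1), Fraction(7))))  # 1
\end{verbatim}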
Our aim in this work is to study these notions through a recently coined approach, namely lineability theory. By searching for large algebraic structures inside sets of objects sharing a special property, this approach yields a deeper understanding of the behavior of the objects under discussion. In \cites{jms1,jms2} the authors studied lineability notions in $p$-adic analysis; see
also \cites{preprint-June-2019,jms3, fmrs}. The study of lineability can be traced back to Levine
and Milman \cites{levinemilman1940} in 1940, and to V. I. Gurariy \cite{gur1} in 1966. These works, among
others, motivated the introduction of the notion of lineability in 2005 \cite{AGS2005} (a notion coined by V. I. Gurariy). Since then it has become a rapidly developing trend in mathematical analysis. There is an extensive literature on classical lineability theory, see e.g. \cites{ar,AGS2005,ar2,survey,bbf,bay}, whereas some recent topics can be found in \cites{TAMS2020,abms,bns,cgp,nak,CS2019,bo,egs,bbls,ms}. It is interesting to note that Mahler in \cite{Ma2} stated that:
\begin{quote}
``On the other hand, the behavior of continuous functions of a $p$-adic variable is quite distinct from that of real continuous functions, and many basic theorems of real analysis have no $p$-adic analogues.
$\ldots$ there exist infinitely many linearly independent non-constant functions the derivative of which vanishes identically $\ldots$''.
\end{quote}
Before going further, let us recall three essential notions.
Let $\kappa$ be a cardinal number. We say that a subset $A$ of a vector space $V$ over a field $\mathbb{K}$ is
\begin{itemize}
\item \textbf{$\kappa$-lineable} in $V$ if there exists a vector space $M$ of dimension $\kappa$ such that $
M\setminus \{0\} \subseteq A$;
\end{itemize}
and following \cites{ar2, ba3}, if $V$ is
contained in a (not necessarily unital) linear algebra, then $A$ is called
\begin{itemize}[resume*]
\item \textbf{$\kappa$-algebrable} in $V$ if there exists an algebra $M$ such that $
M\setminus \{0\} \subseteq A$ and $M$ is a $\kappa$-dimensional vector
space;
\item \textbf{strongly $\kappa$-algebrable} in $V$ if there exists a $\kappa$-generated free algebra $M$ such that $M\setminus \{0\}\subseteq A$.
\end{itemize}
Note that if $V$ is also contained in a commutative algebra, then a set $B\subset V$ is a generating set of a free algebra contained in $A$ if and only if for any $n\in \mathbb N$ with $n\leq \text{card}(B)$ (where $\text{card}(B)$ denotes the cardinality of $B$), any nonzero polynomial $P$ in $n$ variables with coefficients in $\mathbb K$ and without free term, and any distinct $b_1,\ldots,b_n\in B$, we have $P(b_1,\ldots,b_n)\neq 0$.
Now we can give an outline of our work. In Section \ref{bac} we recall some standard concepts and notations concerning
non-Archimedean analysis. Then, in the section of Main results, we
show, among other things, that:
(i) the set of functions $\mathbb{Q}_p\rightarrow \mathbb{Q}_p$ with continuous derivative that are not strictly differentiable is $\mathfrak{c}$-lineable ($\mathfrak{c}$ denotes the cardinality of the continuum),
(ii) the set of strictly differentiable functions $\mathbb{Q}_p\rightarrow \mathbb{Q}_p$ of order $r$ but not strictly differentiable of order $r+1$ is $\mathfrak c$-lineable,
(iii) the set of all strictly differentiable functions $\mathbb{Z}_p\rightarrow \mathbb K$ with zero derivative that are not Lipschitzian of any order $\alpha>1$ is $\mathfrak{c}$-lineable and $1$-algebrable,
(iv) the set of differentiable functions $\mathbb{Q}_p\rightarrow \mathbb{Q}_p$ whose derivative is unbounded is strongly $\mathfrak{c}$-algebrable,
(v) the set of continuous functions $\mathbb{Z}_p\rightarrow \mathbb{Q}_p$ that are differentiable with bounded derivative on a full set for any positive real-valued Haar measure on $\mathbb{Z}_p$ but not differentiable on its complement having cardinality $\mathfrak{c}$ is $\mathfrak c$-lineable.
\section{Preliminaries for $p$-adic analysis}
\label{bac}
We summarize some basic definitions of $p$-adic analysis (for a
more profound treatment of this topic we refer the interested reader to \cites{go,kato,va,sc}).
We shall use standard set-theoretical notation. As usual, $\mathbb{N},
\mathbb{N}_0, \mathbb{Z}, \mathbb{Q}$, $\mathbb{R}$ and $\mathbb P$ denote
the sets of natural numbers, natural numbers including zero, integers, rational numbers, real numbers, and prime numbers, respectively.
The restriction of a function $f$ to a set $A$ and the characteristic function of a set $A$ will be denoted
by $f\restriction A$ and $1_A$, respectively.
Frequently, we use a theorem due to Fichtenholz-Kantorovich-Hausdorff \cites{fich,h}: For any set $X$ of infinite cardinality there exists a family $\mathcal{B}\subseteq
\mathcal{P}(X)$ of cardinality $2^{\text{card}(X)}$ such that for any finite
sequence of distinct sets $B_1,\ldots , B_n\in \mathcal{B}$ and any $\varepsilon_1, \ldots,
\varepsilon_n \in \{0,1\}$ we have $B_1^{\varepsilon_1}\cap \ldots \cap
B_n^{\varepsilon_n}\neq \emptyset$, where $B^1=B$ and $B^0=X\setminus B$.
A family of subsets of $X$ that satisfy the latter condition is called a family of \textit{independent subsets} of $X$.
Here $\mathcal{P}(X)$ denotes the power set of $X$. In what follows we fix $\mathcal{N}$, $\mathcal N_0$ and $\mathcal P$ for families of independent subsets of $\mathbb{N}$, $\mathbb N_0$ and $\mathbb P$, respectively,
such that $\text{card}(\mathcal N)=\text{card}(\mathcal N_0)=\text{card}(\mathcal P)=\mathfrak c$.
Now let us recall that given a field $\mathbb K$, an absolute value on $\mathbb K$ is a function
$$
|\cdot|\colon \mathbb K\to [0,+\infty)
$$
such that:
\begin{itemize}
\item $|x|=0$ if and only if $x=0$,
\item $|x y|=|x||y|$, and
\item $|x+y|\leq |x|+|y|$,
\end{itemize}
for all $x,y\in \mathbb{K}$.
The last condition is the so-called \textit{triangle inequality}.
Furthermore, if $(\mathbb K,|\cdot|)$ satisfies the condition $|x+y|\leq \max\{|x|, |y|\}$ (the \textit{strong triangle inequality}), then $(\mathbb K,|\cdot|)$ is called a non-Archimedean field.
Note that $(\mathbb K,|\cdot|)$ is a normed space since $\mathbb K$ is a vector space over itself.
For simplicity, we will denote for the rest of the paper $(\mathbb K,|\cdot|)$ by $\mathbb K$.
Clearly, $|1|=|-1|=1$ and, if $\mathbb K$ is a non-Archimedean field, then $\underbrace{|1+\cdots+1|}_{n \text{ times}}\leq 1$
for all $n\in \mathbb{N}$. An immediate consequence of the strong triangle
inequality is that $|x|\neq |y|$ implies $|x+y|=\max\{|x|, |y|\}$.
Notice that if $\mathbb K$ is a finite field, then the only possible absolute value that can be defined on $\mathbb K$ is the trivial absolute value, that is, $|x|=0$ if $x=0$, and $|x|=1$ otherwise.
Furthermore, given any field $\mathbb K$, the topology induced by the trivial absolute value on $\mathbb K$ is the discrete topology.
Let us fix a prime number $p$ throughout this work. For any nonzero integer
$n$, let $\text{ord}_p (n)$ be the largest exponent $k\in\mathbb N_0$ such that $p^{k}$ divides
$n$. Then we define $|n|_p=p^{-\text{ord}_p (n)}$, $|0|_p=0$ and $|\frac{n}{m}
|_p=p^{-\text{ord}_p (n)+\text{ord}_p (m)}$; this is the $p$-adic absolute value. The completion of the field of
rationals, $\mathbb{Q}$, with respect to the $p$-adic absolute value
is called the field of $p$-adic numbers $\mathbb{Q}_p$.
An important property of $p$-adic numbers is that each nonzero $x\in \mathbb{Q}_p$ can be represented as
$$
x=\sum_{n=m}^\infty a_n p^n,
$$
where $m\in \mathbb Z$, $a_n\in \mathbb F_p$ (the finite field of $p$ elements) and $a_m\neq 0$.
If $x=0$, then $a_n=0$ for all $n\in \mathbb Z$.
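To make these definitions concrete, here is a small Python sketch (ours, not taken from the references; the helper names are arbitrary, and \texttt{pow(den, -1, p)} requires Python~3.8 or later) computing $|\cdot|_p$ on rationals and the first digits of a rational lying in $\mathbb{Z}_p$.
\begin{verbatim}
# Sketch: the p-adic absolute value on Q and p-adic digits of rationals in Z_p.
from fractions import Fraction

def ord_p(x: Fraction, p: int) -> int:
    """ord_p(a/b) = ord_p(a) - ord_p(b) for nonzero x."""
    def v(n: int) -> int:
        n, k = abs(n), 0
        while n % p == 0:
            n //= p
            k += 1
        return k
    return v(x.numerator) - v(x.denominator)

def abs_p(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-ord_p(x)), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(1, p) ** ord_p(x, p)

def digits(x: Fraction, p: int, k: int) -> list:
    """First k digits a_0, ..., a_{k-1} of x (denominator prime to p)."""
    out = []
    for _ in range(k):
        a = (x.numerator * pow(x.denominator, -1, p)) % p
        out.append(a)
        x = (x - a) / p
    return out

print(abs_p(Fraction(75), 5))        # |75|_5   = 1/25
print(abs_p(Fraction(7, 50), 5))     # |7/50|_5 = 25
print(digits(Fraction(-1), 5, 6))    # [4, 4, 4, 4, 4, 4]
print(digits(Fraction(1, 3), 5, 6))  # [2, 3, 1, 3, 1, 3]
\end{verbatim}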
The $p$-adic absolute value
satisfies the strong triangle inequality. Ostrowski's Theorem states that every nontrivial
absolute value on $\mathbb{Q}$ is equivalent (i.e., defines the same
topology) to an absolute value $|\cdot|_p$, where $p$ is a prime number, or
the usual absolute value (see \cite{go}).
Let $a\in \mathbb{Q}_{p}$ and $r$ be a positive number. The set $
B_r(a)=\{x\in \mathbb{Q}_{p}\colon |x-a|_{p}<r\}$ is called the open ball of
radius $r$ with center $a$, $\overline{B}_{r}(a)=\{x\in \mathbb{Q}
_{p}\colon |x-a|_{p}\leq r\}$ the closed ball of radius $r$ with center $a$, and $
S_r(a)=\{x\in \mathbb{Q}_{p}\colon |x-a|_{p}=r\}$ the sphere of radius $r$ and
center $a$.
It is important to mention that $B_r(a)$, $\overline{B}_{r}(a)$ and $S_r(a)$ are clopen sets in $\mathbb{Q}_p$.
The closed unit ball
$$
\mathbb{Z}_{p}=\{x\in \mathbb{Q}_{p}\colon |x|_{p}\leq 1\}=\left\{x\in \mathbb{Q}_{p}\colon x=\sum_{i=k}^\infty a_ip^i,\ a_i\in \{0,1,\ldots ,p-1\},
\ k\in \mathbb N_0 \right\}
$$
is called the ring of $p$-adic integers in $\mathbb{Q}_{p}$. We know that $\mathbb{Z}_{p}$ is a compact set and $\mathbb{N}$ is
dense in $\mathbb{Z}_{p}$ (\cite{go}).
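For instance, in $\mathbb{Z}_5$ the natural numbers $4, 24, 124, 624,\ldots$ converge to $-1$, since $\left|-1-(5^k-1)\right|_5=\left|-5^k\right|_5=5^{-k}\rightarrow 0$.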
Throughout this article we shall consider all vector spaces and algebras taken over the
field $\mathbb{Q}_p$ (unless stated otherwise).
\section{Main results}
We are ready to present our results. For the rest of this work, $X$ will denote a nonempty subset of $\mathbb K$ without isolated points.
Let us fix two notations:
$$
C^1(X,\mathbb K)=\{f\colon X \to \mathbb K \colon f \text{ has continuous (classical) derivative on }X\},
$$
$$
S^r(X,\mathbb K)=\{f\colon X \to \mathbb K \colon f \text{ is strictly differentiable of order }r \text{ on }X\}.
$$
Our first result shows that, unlike in the classical case, strict differentiability is a stronger condition than simply having a continuous derivative.
An analogue to part (ii) of the result for the classical case can be found in \cite{bbf}.
\begin{theorem}
\label{34}
~\begin{itemize}
\item[(i)] The set $C^1(\mathbb{Q}_p,\mathbb{Q}_p)\setminus S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)$ is $\mathfrak{c}$-lineable.
\item[(ii)] The set $S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)\setminus S^{2}(\mathbb{Q}_p,\mathbb{Q}_p)$ is $\mathfrak{c}$-lineable. In general, $S^{r}(\mathbb{Q}_p,\mathbb{Q}_p)\setminus S^{r+1}(\mathbb{Q}_p,\mathbb{Q}_p)$ is $\mathfrak{c}$-lineable for any $r\geq 1$.
\end{itemize}
\end{theorem}
\begin{proof}
\textit{(i).} Notice that $B_{p^{-2n}}(p^n)\subset S_{p^{-n}}(0)$ for every $n\in\mathbb N$; therefore the balls $B_{p^{-2n}}(p^n)$, $n\in\mathbb N$, are pairwise disjoint.
Also, let us define $f_N\colon \mathbb{Q}_p\rightarrow \mathbb{Q}_p$ for every $N\in\mathcal N$ as follows:
$$
f_N(x)=\begin{cases}
p^{2n} & \text{if }x\in B_{p^{-2n}}(p^n) \text{ with } n\in N,\\
0 & \text{otherwise}.
\end{cases}
$$
First, notice that $f_N$ is locally constant outside $0$ and, thus, $f_N$ is differentiable everywhere except maybe at $0$ with $f_N'(x)=0$ for every $x\in\mathbb{Q}_p\setminus\{0\}$.
Moreover, we have
$$
\left|\frac{f_N(x)-f_N(0)}{x}\right|_p=\left|\frac{f_N(x)}{x}\right|_p=\begin{cases}
p^{-n} & \text{if } x\in B_{p^{-2n}}(p^n) \text{ with } n\in N,\\
0 & \text{otherwise},
\end{cases}
$$
i.e., $\left|\frac{f_N(x)-f_N(0)}{x}\right|_p\rightarrow 0$ as $x\rightarrow 0$.
Hence, $f_N^\prime(0)=0$.
Therefore, $f_N^\prime$ exists everywhere and is continuous since $f_N^\prime\equiv 0$, that is, $f_N\in C^1(\mathbb{Q}_p,\mathbb{Q}_p)$.
It is enough to show that the family of functions $V=\{f_N\colon N\in\mathcal N \}$ is linearly independent over $\mathbb{Q}_p$ and the vector space generated by $V$, denoted by $\text{span}(V)$, satisfies $\text{span}(V)\setminus \{0\}\subset C^1(\mathbb{Q}_p,\mathbb{Q}_p)\setminus S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)$.
Take $f=\sum_{i=1}^m \alpha_i f_{N_i}$, where $\alpha_1,\ldots, \alpha_m\in\mathbb{Q}_p$, $N_1,\ldots, N_m\in\mathcal N$ are distinct and $m\in \mathbb N$.
Assume that $f\equiv 0$; then, by taking $x=p^n$ with $n\in N_1^1\cap N_2^0\cap\cdots \cap N_m^0$, we have that $0=f(x)=\alpha_1 f_{N_1}(x)=\alpha_1 p^{2n}$, i.e., $\alpha_1=0$.
Therefore, applying similar arguments, we arrive at $\alpha_i=0$ for every $i\in\{1,\ldots, m \}$.
Assume now that $\alpha_i\neq 0$ for every $i\in\{1,\ldots, m \}$.
Since $C^1(\mathbb{Q}_p,\mathbb{Q}_p)$ forms a vector space, we have that $f\in C^1(\mathbb{Q}_p,\mathbb{Q}_p)$.
Moreover, by construction we have $f'\equiv 0$.
It remains to prove that $f\notin S^1(\mathbb{Q}_p,\mathbb{Q}_p)$.
To do so, take the sequences $$(x_n)_{n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0}=(p^n)_{n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0}$$ and $$(y_n)_{n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0}=(p^n-p^{2n})_{n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0}.$$
Notice that both sequences converge to $0$, and $f(x_n)=\alpha_1 p^{2n}$ and $f(y_n)=0$ for each $n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0$.
Hence,
$$
\frac{f(x_n)-f(y_n)}{x_n-y_n}=\alpha_1,
$$
for every $n\in N_1^1\cap N_2^0\cap \cdots\cap N_m^0$.
Since $\alpha_1\neq 0$, we have the desired result.
\textit{(ii).} We will prove the case when $r=1$ since the general case can be easily deduced.
For every nonempty subset $N$ of $\mathbb N$, let us define $g_N \colon \mathbb{Q}_p\rightarrow\mathbb{Q}_p$ as follows: for every $x=\sum_{n=m}^\infty a_n p^n\in\mathbb{Q}_p$, take
$$
g_N(x)=\sum_{n=0}^\infty b_n p^{2n},
$$
where
$$
b_n=\begin{cases}
a_n & \text{if } n\in N,\\
0 & \text{otherwise},
\end{cases}
$$
for every $n\in\mathbb N_0$.
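For instance, if $p=3$, $1,3\in N$ and $0,2\notin N$, then $g_N(2+3+2\cdot 3^{2}+3^{3})=3^{2}+3^{6}$: the digit of $x$ in a position $n\in N$ is copied to position $2n$, while all remaining digits (including those in negative positions) are discarded.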
For any $x,y\in \mathbb{Q}_p$, notice that
$$
|g_N(x)-g_N(y)|_p\leq |x-y|_p^2.
$$
Hence,
$$
\left|\frac{g_N(x)-g_N(z)}{x-z} \right|_p\leq |x-z|_p\rightarrow 0
$$
as $x\rightarrow z$ for any $z\in\mathbb{Q}_p$, that is, $g_N$ is differentiable and $g_N^\prime\equiv 0$.
Moreover,
$$
\left|\frac{g_N(x)-g_N(y)}{x-y} \right|_p\leq |x-y|_p\rightarrow 0,
$$
when $(x,y)\rightarrow (z,z)$ for any $z\in\mathbb{Q}_p$, and where $(x,y)\in \nabla^2 \mathbb{Q}_p$.
Thus, $g_N\in S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)$.
We will prove that the family of functions $W=\{g_N \colon N\in\mathcal N \}$ is linearly independent over $\mathbb{Q}_p$ and $\text{span}(W)\setminus\{0\}\subset S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)\setminus S^{2}(\mathbb{Q}_p,\mathbb{Q}_p)$.
It is easy to see that any linear combination of the functions in $W$ belongs to $S^{1}(\mathbb{Q}_p,\mathbb{Q}_p)$.
Take now $g=\sum_{i=1}^k \beta_i g_{N_i}$, where $\beta_1,\ldots,\beta_k\in \mathbb{Q}_p$, $N_1,\ldots,N_k\in\mathcal N$ are distinct and $k\in\mathbb N$.
Assume that $g\equiv 0$. Then, by taking $x=p^{n}$ with $n\in N_1^1\cap N_2^0\cap \cdots\cap N_k^0$ fixed, we have that $0=g(x)=\beta_1 p^{2n}$, i.e., $\beta_1=0$.
By applying similar arguments we see that $\beta_i=0$ for every $i\in \{1,\ldots,k \}$.
Therefore, assume that $\beta_i\neq 0$ for every $i\in \{1,\ldots,k \}$.
For every $n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0$, denote
$$
n_{+}=\min\{l\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0 \colon l>n \}.
$$
Now consider the sequences $\overline x=(x_n)_{n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0}$, $\overline y=(y_n)_{n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0}$ and $\overline z=(z_n)_{n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0}$ defined as $x_n=p^n$, $y_n=0$ and $z_n=p^{n}+p^{n_+}$ for every $n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0$.
(Notice that the sequences $\overline{x}$, $\overline{y}$ and $\overline{z}$ converge to $0$.)
Then,
\begin{align*}
& \left|(y_n-z_n)^{-1}\right|_p \left|\frac{g(x_n)-g(y_n)}{x_n-y_n}-\frac{g(x_n)-g(z_n)}{x_n-z_n} \right|_p\\
& = \left|(p^n+p^{n_+})^{-1}\right|_p \left|\frac{\beta_1 p^{2n}}{p^n}-\frac{\beta_1 p^{2n}-\beta_1 p^{2n}-\beta_1 p^{2n_{+}} }{p^n-p^n-p^{n_+}} \right|_p\\
& = |\beta_1|_p \left|\frac{p^n-p^{n_+}}{p^n+p^{n_+}} \right|_p=|\beta_1|_p\neq 0,
\end{align*}
for every $n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0$.
However, $g^{\prime\prime}\equiv 0$, so if $g$ belonged to $S^{2}(\mathbb{Q}_p,\mathbb{Q}_p)$ the second-order difference quotients above would have to converge to $\tfrac{1}{2}g^{\prime\prime}(0)=0$, a contradiction; hence $g\notin S^{2}(\mathbb{Q}_p,\mathbb{Q}_p)$.
This finishes the proof.
\end{proof}
Let $\mathbb K$ be a non-Archimedean field with absolute value $|\cdot|$ that contains $\mathbb{Q}_p$.
For any $\alpha>0$, the space of Lipschitz functions from $X$ to $\mathbb K$ of order $\alpha$ is defined as
$$
\text{Lip}_\alpha (X,\mathbb K)=\{f\colon X\rightarrow \mathbb K \colon \text{there exists } M>0 \text{ such that } |f(x)-f(y)|\leq M|x-y|^\alpha \text{ for all } x,y\in X \}.
$$
Let
$$
N^1(X,\mathbb K)=\{f\in S^1(X,\mathbb K)\colon f' \equiv 0 \}.
$$
In view of \cite{sc}*{Exercise~63.C} we have
$$
N^1(\mathbb{Z}_p,\mathbb K)\setminus \bigcup_{\alpha>1} \text{Lip}_\alpha (\mathbb{Z}_p, \mathbb K)\neq \emptyset.
$$
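Observe that, conversely, $\text{Lip}_\alpha (\mathbb{Z}_p, \mathbb K)\subset N^1(\mathbb{Z}_p,\mathbb K)$ for every $\alpha>1$: if $|f(x)-f(y)|\leq M|x-y|^\alpha$ for all $x,y\in\mathbb{Z}_p$, then
$$
\left|\frac{f(x)-f(y)}{x-y}\right|\leq M|x-y|^{\alpha-1}\rightarrow 0
$$
as $(x,y)$ approaches the diagonal, so $f$ is strictly differentiable with $f^\prime\equiv 0$. The theorem below shows that this inclusion is, in a strong sense, strict.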
To prove the next theorem, we need a characterization of the spaces $N^1(\mathbb{Z}_p,\mathbb K)$ and $\text{Lip}_\alpha (\mathbb{Z}_p,\mathbb K)$ from \cite{sc}.
For this we will denote by $(e_n)_{n\in \mathbb N_0}$ the van der Put base of $C(\mathbb{Z}_p,\mathbb K)$, which is given by $e_0\equiv 1$ and $e_n$ is the characteristic function of $\{x\in\mathbb{Z}_p \colon |x-n|_p<1/n \}$ for every $n\in \mathbb N$.
\begin{proposition}\label{thm:1}~\begin{itemize}
\item[(i)] Let $f=\sum_{n=0}^\infty a_n e_n \in C(\mathbb{Z}_p,\mathbb K)$.
Then $f\in N^1(\mathbb{Z}_p,\mathbb K)$ if and only if $(|a_n| n)_{n\in \mathbb N_0}$ converges to $0$ (see \cite{sc}*{Theorem~63.3}).\label{thm:63.3}
\item[(ii)]
Let $f=\sum_{n=0}^\infty a_n e_n \in C(\mathbb{Z}_p,\mathbb K)$ and $\alpha>0$.
Then $f\in \text{Lip}_\alpha (\mathbb{Z}_p,\mathbb K)$ if and only if
$$
\sup \{|a_n| n^\alpha \colon n\in\mathbb N_0 \}<\infty
$$
(see \cite{sc}*{Exercise~63.B}).\label{ex:63.B}
\end{itemize}
\end{proposition}
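To illustrate the use of Proposition~\ref{thm:1}, consider for instance $f=\sum_{n=0}^\infty p^n e_n$: since $|p^n| n=p^{-n}n\rightarrow 0$ and $\sup\{p^{-n}n^\alpha\colon n\in\mathbb N_0\}<\infty$ for every $\alpha>0$, this function belongs to $N^1(\mathbb{Z}_p,\mathbb K)$ and to every $\text{Lip}_\alpha(\mathbb{Z}_p,\mathbb K)$. The coefficients chosen in the next proof decay much more slowly, precisely so that the second condition fails for every $\alpha>1$.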
The next result shows that there is a vector space of maximum dimension consisting of strictly differentiable functions with zero derivative that are not Lipschitzian of any order $\alpha>1$.
Our next three results can be compared with some results obtained in \cites{jms, bbfg,bq} for the classical case.
\begin{theorem}
The set $N^1(\mathbb{Z}_{p},\mathbb K)\setminus\bigcup_{\alpha>1}{\rm{Lip}}_{\alpha }(\mathbb{Z}_{p},\mathbb K)$ is $\mathfrak{c}$-lineable (as a $\mathbb K$-vector space).
\end{theorem}
\begin{proof}
Fix $n_1\in\mathbb N$ and take $B_1=\{x\in \mathbb Z_p\colon |x-n_1|_p<1/n_1 \}$.
Since $\mathbb Z_p$ and $B_1$ are clopen sets, we have that $\mathbb Z_p\setminus B_1$ is open and also nonempty.
Therefore there exists an open ball $D_1\subset \mathbb Z_p\setminus B_1$.
Furthermore, as $\mathbb N$ is dense in $\mathbb Z_p$, there exists $n_2\in \mathbb N\setminus \{n_1\}$ such that $B_2=\{ x\in \mathbb Z_p\colon |x-n_2|_p<1/n_2 \}\subset D_1$.
By recurrence, we can construct a set $M=\{n_k\colon k\in\mathbb N \}\subset \mathbb N$ such that the balls $B_k=\{x\in\mathbb Z_p\colon |x-n_k|_p<1/n_k \}$ are pairwise disjoint.
Let $\sigma\colon \mathbb N_0\rightarrow M$ be the increasing bijective function and let $(m_n)_{n\in\mathbb N}\subset \mathbb N$ be an increasing sequence such that $p^{-m_n}n\rightarrow 0$ and $p^{-m_n}n^\alpha\rightarrow\infty$ for every $\alpha>1$ when $n\rightarrow\infty$. (This can be done for instance by taking $m_n=\left\lfloor \ln(n \ln(n))/\ln(p) \right\rfloor$.)
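For completeness, let us verify this choice: for $n\geq 2$ we have $p^{m_n}\leq n\ln (n)<p^{m_n+1}$, so $p^{-m_n}n<\frac{p}{\ln (n)}\rightarrow 0$, while $p^{-m_n}n^{\alpha}\geq \frac{n^{\alpha-1}}{\ln (n)}\rightarrow\infty$ for every $\alpha>1$, as $n\rightarrow\infty$.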
For every $N\in\mathcal N_0$, define $f_N\colon \mathbb Z_p\rightarrow \mathbb K$ as
$$
f_N=\sum_{n=0}^\infty 1_N(n) p^{m_{\sigma(n)}} e_{\sigma(n)}.
$$
Since every $N\in\mathcal N_0$ is infinite, we have that $|1_N(n)| p^{-m_{\sigma(n)}} \sigma(n)\rightarrow 0$ when $n\rightarrow \infty$ and
$$
\{|1_N(n)| p^{-m_{\sigma(n)}} \sigma(n)^\alpha \colon n\in\mathbb N_0 \}
$$
is unbounded for every $\alpha>1$.
Hence, by Proposition~\ref{thm:1}, we have $f_N\in N^1(\mathbb{Z}_{p},\mathbb K)\setminus\bigcup_{\alpha>1}{\rm{Lip}}_{\alpha }(\mathbb{Z}_{p},\mathbb K)$ for every $N\in\mathcal N_0$.
We will prove now that the functions in $V=\{f_N\colon N\in\mathcal N_0 \}$ are linearly independent over $\mathbb K$ and such that any nonzero linear combination of $V$ over $\mathbb K$ is contained in $N^1(\mathbb{Z}_{p},\mathbb K)\setminus\bigcup_{\alpha>1}{\rm{Lip}}_{\alpha }(\mathbb{Z}_{p},\mathbb K)$.
Take $f=\sum_{i=1}^r a_i f_{N_i}$, where $a_1,\ldots, a_r\in\mathbb K$, $N_1,\ldots, N_r\in\mathcal N_0$ are distinct and $r\in\mathbb N$.
Assume that $f\equiv 0$. Fix $n\in N_1^1\cap N_2^0\cap \cdots\cap N_r^0$ and take $x\in\mathbb Z_p$ such that $x\in B_{\sigma(n)}$, then $0=f(x)=a_1 p^{m_{\sigma(n)}}$, i.e., $a_1=0$.
By applying similar arguments we have $a_i=0$ for every $i\in\{1,\ldots,r \}$.
Assume for the rest of the proof that $a_i\neq 0$ for every $i\in\{1,\ldots,r \}$.
Notice that $f=\sum_{n=0}^\infty \left(\sum_{i=1}^r a_i 1_{N_i}\right)(n) p^{m_{\sigma(n)}} e_{\sigma(n)}$, where $\left|\left(\sum_{i=1}^r a_i 1_{N_i}\right)(n) p^{m_{\sigma(n)}}\right|\leq |p^{m_{\sigma(n)}}| \max\{|a_i|\colon i\in\{1,\ldots,r \} \}=p^{-m_{\sigma(n)}} \max\{|a_i|\colon i\in\{1,\ldots,r \} \}$.
Therefore $\left|\left(\sum_{i=1}^r a_i 1_{N_i}\right)(n) p^{m_{\sigma(n)}}\right| \sigma(n)\rightarrow 0$ when $n\rightarrow \infty$.
Finally, as $N_1^1\cap N_2^0\cap \cdots\cap N_r^0$ is infinite, we have that
\begin{align*}
& \left\{\left|\left(\sum_{i=1}^r a_i 1_{N_i}\right)(n) p^{m_{\sigma(n)}}\right| \sigma(n)^\alpha \colon n\in N_1^1\cap N_2^0\cap \cdots\cap N_r^0 \right\}\\
& = \{|a_1| p^{-m_{\sigma(n)}} \sigma(n)^\alpha \colon n\in N_1^1\cap N_2^0\cap \cdots\cap N_r^0 \}
\end{align*}
is unbounded for every $\alpha>1$. Hence, by Proposition~\ref{thm:1}, $f\in N^1(\mathbb{Z}_{p},\mathbb K)\setminus\bigcup_{\alpha>1}{\rm{Lip}}_{\alpha }(\mathbb{Z}_{p},\mathbb K)$, which completes the proof.
\end{proof}
The next lineability result can be considered as a non-Archimedean counterpart of \cite{g7}*{Theorem 6.1}.
To prove it we will make use of the following lemma.
(For more information on the usage of the lemma see \cite[Section~5]{fmrs}.)
In order to understand it, let us consider the functions $x\mapsto (1+x)^\alpha$ where $x\in p\mathbb Z_p$ and $\alpha\in \mathbb Z_p$.
It is well known that $(1+x)^\alpha$ is defined analytically by $(1+x)^\alpha = \sum_{i=0}^\infty {\alpha \choose i} x^i$.
Moreover $x\mapsto (1+x)^\alpha$ is well defined (see \cite[Theorem~47.8]{sc}), differentiable with derivative $\alpha(1+x)^{\alpha-1}$, and $x\mapsto (1+x)^\alpha$ takes values in $\mathbb Z_p$ (in particular $(1+x)^\alpha=1+y$ for some $y\in p\mathbb Z_p$, see \cite[Theorem~47.10]{sc}).
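For instance, for $\alpha=-1$ one has $\binom{-1}{i}=(-1)^i$, so the series reduces to the geometric series $(1+x)^{-1}=\sum_{i=0}^\infty (-1)^i x^i$, which indeed converges for $x\in p\mathbb{Z}_p$ because $|(-1)^i x^i|_p\leq p^{-i}\rightarrow 0$.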
\begin{lemma}\label{lem:1}
If $\alpha_1,\ldots,\alpha_n\in \mathbb Z_p\setminus \{0\}$ are distinct, with $n\in \mathbb N$, then every linear combination $\sum_{i=1}^n \gamma_i (1+x)^{\alpha_i}$, with $\gamma_i\in \mathbb Q_p\setminus \{0\}$ for every $1\leq i\leq n$, is not constant.
\end{lemma}
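For instance, in the simplest nontrivial case $n=2$, $\gamma_1=1$, $\gamma_2=-1$ and $\alpha_1\neq\alpha_2$, the function $(1+x)^{\alpha_1}-(1+x)^{\alpha_2}$ has derivative $\alpha_1(1+x)^{\alpha_1-1}-\alpha_2(1+x)^{\alpha_2-1}$, which takes the nonzero value $\alpha_1-\alpha_2$ at $x=0$, so the function cannot be constant.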
\begin{theorem}
\label{16} The set of everywhere differentiable functions $\mathbb{Q}_{p}\rightarrow \mathbb{Q}_{p}$
whose derivative is unbounded is strongly $\mathfrak{c}$-algebrable.
\end{theorem}
\begin{proof}
Let $\mathcal H$ be a Hamel basis of $\mathbb Q_p$ over $\mathbb Q$ contained in $p\mathbb Z_p$, and for each $\beta \in \mathbb Z_p\setminus\{0\}$ define $f_\beta : \mathbb Q_p\to \mathbb Q_p$ by
$$
f_\beta (x)=\begin{cases}
p^{-n} (1+y)^\beta & \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ where } a_{-n}\neq 0,\ n\in \mathbb N_0 \text{ and } y\in p\mathbb{Z}_p,\\
0 & \text{otherwise}.
\end{cases}
$$
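For instance, if $x=p^{-2}+1+y$ with $y\in p\mathbb{Z}_p$, then $n=2$, $a_{-2}=a_0=1$ and $a_{-1}=0$, so $f_\beta(x)=p^{-2}(1+y)^{\beta}$; in general, $|f_\beta(x)|_p=|x|_p$ for every $x\in\mathbb{Q}_p\setminus p\mathbb{Z}_p$, while $f_\beta$ vanishes on $p\mathbb{Z}_p$.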
The function $f_\beta$ is differentiable everywhere for any $\beta\in \mathbb Z_p$.
Indeed, firstly we have that $f_\beta$ is locally constant on $p\mathbb Z_p$ as $f_\beta\restriction p\mathbb Z_p \equiv 0$.
Lastly, it remains to prove that $f_\beta$ is differentiable at every point $x_0=\sum_{k=-n}^0 a_k p^k +y_0$ with $a_{-n}\neq 0$, $n\in\mathbb N_0$ and $y_0\in p\mathbb{Z}_p$, i.e., that the limit
\begin{equation}\label{equ:2}
\lim_{x\rightarrow x_0} \frac{p^{-m}(1+y)^\beta-p^{-n}(1+y_0)^\beta}{x-x_0}
\end{equation}
exists, where the values $x$ are of the form $x=\sum_{k=-m}^0 b_k p^k +y$.
Moreover, as $x$ approaches $x_0$ in the limit \eqref{equ:2}, we can assume that $x=\sum_{k=-n}^0 a_k p^k +y$.
Hence, the limit in \eqref{equ:2} can be simplified to
\begin{align*}
\lim_{x\rightarrow x_0} \frac{p^{-m}(1+y)^\beta-p^{-n}(1+y_0)^\beta}{x-x_0} & =\lim_{x\rightarrow x_0} \frac{p^{-n}(1+y)^\beta-p^{-n}(1+y_0)^\beta}{x-x_0}\\
& =p^{-n} \lim_{y\rightarrow y_0} \frac{(1+y)^\beta-(1+y_0)^\beta}{y-y_0}=p^{-n}\beta(1+y_0)^{\beta-1}.
\end{align*}
In particular the derivative of $f_\beta$ is given by
$$
f_\beta^\prime (x)=\begin{cases}
p^{-n} \beta (1+y)^{\beta-1} & \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ where } a_{-n}\neq 0,\ n\in \mathbb N_0, y\in p\mathbb{Z}_p,\\
0 & \text{otherwise},
\end{cases}
$$
and it is unbounded since $\lim_{n\rightarrow \infty} |p^{-n}\beta(1+y)^{\beta-1}|_p=\lim_{n\rightarrow \infty} p^{n}|\beta(1+y)^{\beta-1}|_p=|\beta|_p\lim_{n\rightarrow \infty} p^{n}=\infty$, where we have used the fact that $\beta\neq 0$.
Let $h_1,\ldots,h_m \in \mathcal H$ be distinct and take $P$ a polynomial in $m$ variables with coefficients in $\mathbb Q_p\setminus \{0\}$ and without free term, that is, $P(x_1,\ldots,x_m)=\sum_{r=1}^d \alpha_r x_1^{k_{r,1}}\cdots x_m^{k_{r,m}}$, where $d\in \mathbb N$, $\alpha_r\in\mathbb{Q}_p\setminus \{0\}$ for every $1\leq r\leq d$, $k_{r,i}\in\mathbb N_0$ for every $1\leq r\leq d$ and $1\leq i\leq m$ with $\overline{k}_r:=\sum_{i=1}^m k_{r,i}\geq 1$, and the $m$-tuples $(k_{r,1},\ldots,k_{r,m})$ are pairwise distinct.
Assume also without loss of generality that $\overline{k}_1\geq \cdots\geq \overline{k}_d$.
We will prove first that $P(f_{h_1},\ldots,f_{h_m})\not\equiv 0$, i.e., the functions in $\{f_h \colon h\in \mathcal H \}$ are algebraically independent.
Notice that $P(f_{h_1},\ldots,f_{h_m})$ is of the form
$$
\begin{cases}
\sum_{r=1}^d p^{-n\overline{k}_r} \alpha_r (1+y)^{\beta_r} & \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ with } a_{-n}\neq 0,\ n\in \mathbb N_0, y\in p\mathbb{Z}_p,\\
0 & \text{otherwise},
\end{cases}
$$
where the exponents $\beta_r:=\sum_{i=1}^m k_{r,i} h_i$ belong to $p\mathbb{Z}_p\setminus\{0\}$ and are pairwise distinct because $\mathcal H$ is a Hamel basis of $\mathbb Q_p$ over $\mathbb Q$ contained in $p\mathbb{Z}_p$, $k_{r,i}\in\mathbb N_0$, $\overline{k}_r\neq 0$ and the numbers $h_1,\ldots,h_m$ as well as the $m$-tuples $(k_{r,1},\ldots,k_{r,m})$ are pairwise distinct.
Fix $n\in\mathbb N_0$.
Since $p^{-n\overline{k}_r} \alpha_r \neq 0$ for every $1\leq r\leq d$, by Lemma~\ref{lem:1}, there is $y\in p\mathbb{Z}_p$ such that $\sum_{r=1}^d p^{-n\overline{k}_r} \alpha_r (1+y)^{\beta_r}\neq 0$.
Hence, by taking $x=p^{-n}+y$, we have $P(f_{h_1},\ldots,f_{h_m})(x)\neq 0$.
Finally, let us prove that $P(f_{h_1},\ldots,f_{h_m})'$ exists and it is unbounded.
Clearly $P(f_{h_1},\ldots,f_{h_m})$ is differentiable and the derivative is given by
\begin{equation}\label{equ:6}
\begin{cases}
\displaystyle \sum_{r=1}^d p^{-n\overline{k}_r} \alpha_r \beta_r (1+y)^{\beta_r-1} & \displaystyle \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ with } a_{-n}\neq 0,\ n\in \mathbb N_0, y\in p\mathbb{Z}_p,\\
0 & \text{otherwise}.
\end{cases}
\end{equation}
Notice that $\beta_r\neq 1$ for every $1\leq r\leq d$ since $\beta_r\in p\mathbb{Z}_p$.
We will now rewrite formula \eqref{equ:6} in order to simplify the proof.
Notice that if some of the exponents $\overline{k}_r$ are equal, i.e., for instance $\overline{k}_i=\cdots=\overline{k}_j$ for some $1\leq i<j\leq d$, then $p^{-n\overline{k}_i}$ is a common factor in each summand $p^{-n\overline{k}_i} \alpha_i \beta_i (1+y)^{\beta_i-1},\ldots,p^{-n\overline{k}_j} \alpha_j \beta_j (1+y)^{\beta_j-1}$.
Therefore, $P(f_{h_1},\ldots,f_{h_m})^\prime(x)$ can also be written as
\begin{equation}\label{equ:7}
\begin{cases}
\displaystyle \sum_{q=1}^{\widetilde{d}} p^{-n\widetilde{k}_q} \sum_{s=1}^{m_q}\alpha_{q,s}\beta_{q,s} (1+y)^{\beta_{q,s}-1} & \displaystyle \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ where } a_{-n}\neq 0,\\
& \displaystyle n\in \mathbb N_0 \text{ and } y\in p\mathbb{Z}_p,\\
0 & \displaystyle \text{otherwise},
\end{cases}
\end{equation}
where $\widetilde{d}\in \mathbb{N}$, the $\widetilde{k}_q$'s are the distinct values among the exponents $\overline{k}_r$, ordered so that $\widetilde{k}_1>\cdots>\widetilde{k}_{\widetilde{d}}$, and $\alpha_{q,s}$ and $\beta_{q,s}$ are the corresponding coefficients $\alpha_r$ and exponents $\beta_r$, respectively.
Now, since $\alpha_{1,s} \beta_{1,s}\neq 0$ and the exponents $\beta_{1,s}-1$ are pairwise distinct and not equal to $0$ for every $1\leq s\leq m_1$, by Lemma~\ref{lem:1}, there exists $y_1\in p\mathbb{Z}_p$ such that $\sum_{s=1}^{m_1}\alpha_{1,s}\beta_{1,s} (1+y_1)^{\beta_{1,s}-1}\neq 0$.
Take the sequence $(x_n)_{n=1}^\infty$ defined by $x_n=p^{-n}+y_1$.
Since $\widetilde{k}_1>\cdots>\widetilde{k}_{\widetilde{d}}$, there exists $n_0\in \mathbb N$ such that
\begin{align*}
|P(f_{h_1},\ldots,f_{h_m})^\prime (x_n)|_p & =\left|\sum_{q=1}^{\widetilde{d}} p^{-n\widetilde{k}_q} \sum_{s=1}^{m_q}\alpha_{q,s}\beta_{q,s} (1+y_1)^{\beta_{q,s}-1} \right|_p\\
& =\left|p^{-n\widetilde{k}_1}\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1} + \right.\\
&\left. + \sum_{q=2}^{\widetilde{d}} p^{-n\widetilde{k}_q} \sum_{s=1}^{m_q}\alpha_{q,s} \beta_{q,s} (1+y_1)^{\beta_{q,s}-1} \right|_p\\
& =\left|p^{-n\widetilde{k}_1}\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1}\right|_p\\
& =p^{n\widetilde{k}_1}\left|\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1}\right|_p,
\end{align*}
for every $n\geq n_0$.
Hence, $\lim_{n\to \infty} |P(f_{h_1},\ldots,f_{h_m})^\prime (x_n)|_p=\infty$.
\end{proof}
The reader may have noticed that the functions in the proof of Theorem~\ref{16} have unbounded derivative but the derivative is bounded on each ball of $\mathbb{Q}_p$.
The following result (whose proof is a modification of the one of Theorem~\ref{16}) shows that we can obtain a similar optimal result when the derivative is unbounded on each ball centered at a fixed point $a\in \mathbb{Q}_p$.
The functions will not be differentiable at $a$.
\begin{corollary}
\label{15} Let $a\in \mathbb{Q}_p$.
The set of continuous functions $\mathbb{Q}_{p}\rightarrow \mathbb{Q}_{p}$ that are differentiable except at $a$ and whose derivative is unbounded on $\mathbb{Q}_p\setminus (a+\mathbb{Z}_p)$ and on $(a+\mathbb{Z}_p)\setminus\{a\}$ is strongly $\mathfrak{c}$-algebrable.
\end{corollary}
\begin{proof}
Fix $a\in \mathbb{Q}_p$.
Let $\mathcal H$ be a Hamel basis of $\mathbb Q_p$ over $\mathbb Q$ contained in $p\mathbb Z_p$.
For every $\beta\in \mathbb{Z}_p\setminus\{0\}$ take the function $f_\beta$ defined in the proof of Theorem~\ref{16} and also define $g_\beta\colon \mathbb{Q}_p\rightarrow \mathbb{Q}_p$ by
$$
g_\beta(x)=\begin{cases}
p^{n}[p^{-n^2}(x-a)]^\beta & \text{if } x\in \overline{B}_{p^{-(n^2+1)}}(a+p^{n^2}) \text{ for some } n\in \mathbb N,\\
0 & \text{otherwise}.
\end{cases}
$$
Notice that by applying the change of variable $y=x-a$ we can assume, without loss of generality, that $a=0$.
Since for any $x\in \overline{B}_{p^{-(n^2+1)}}(p^{n^2})$ with $n\in\mathbb N$, $x$ is of the form $p^{n^2}+\sum_{k=n^2+1}^\infty a_k p^k$ with $a_{k}\in \{0,1,\ldots,p-1\}$ for every integer $k\geq n^2+1$, we have that $p^{-n^2}x=1+\sum_{k=n^2+1}^\infty a_k p^{k-n^2}\in 1+p\mathbb{Z}_p$.
Thus $g_\beta$ is well defined.
Now, for every $\beta\in \mathbb{Z}_p\setminus\{0\}$, let $F_{\beta}:=f_\beta+g_\beta$.
It is easy to see that $F_\beta$ is differentiable at every $x\in \mathbb{Q}_p\setminus\{0\}$ and, in particular, continuous on $\mathbb{Q}_p\setminus\{0\}$.
Let us prove now that $F_\beta$ is continuous at $0$.
Fix $\varepsilon>0$ and take $n\in\mathbb N$ such that $p^{-n}<\varepsilon$.
If $0<|x|_p\leq p^{-n^2}$, then $x\in p\mathbb{Z}_p$, so $f_\beta(x)=0$ and
\begin{align*}
|F_\beta(x)|_p & =\begin{cases}
|p^m (p^{-m^2}x)^\beta|_p & \text{if } x\in \overline{B}_{p^{-(m^2+1)}}(p^{m^2}) \text{ for some } m\in\mathbb N,\\
0 & \text{otherwise},
\end{cases}\\
& =\begin{cases}
p^{-m} & \text{if } x\in \overline{B}_{p^{-(m^2+1)}}(p^{m^2}) \text{ for some } m\in\mathbb N,\\
0 & \text{otherwise}.
\end{cases}
\end{align*}
In the first case, $|x|_p=p^{-m^2}\leq p^{-n^2}$ forces $m\geq n$; hence, in any case, $|F_\beta(x)|_p\leq p^{-n}<\varepsilon$ (and trivially $|F_\beta(0)|_p=0<\varepsilon$).
Hence $F_\beta$ is continuous.
Moreover, $F_\beta$ is not differentiable at $0$.
Indeed, by considering the sequence $(x_n)_{n=1}^\infty=(p^{n^2}+p^{n^2+1})_{n=1}^\infty$ which converges to $0$ when $n\to \infty$ we have
\begin{align*}
\lim_{n\to \infty} \frac{|F_\beta(x_n)-F_\beta(0)|_p}{|x_n|_p} & =\lim_{n\to \infty} \frac{|p^{n} [p^{-n^2}(p^{n^2}+p^{n^2+1})]^\beta|_p}{|p^{n^2}+p^{n^2+1}|_p}\\
& =\lim_{n\to \infty} \frac{p^{-n} |(1+p)^\beta|_p}{p^{-n^2}}=\lim_{n\to \infty} p^{n^2-n}=\infty.
\end{align*}
In particular, by the chain rule, the derivative of $F_\beta$ on $\mathbb{Q}_p\setminus \{0\}$ is as follows
$$
\hspace*{-1cm} F_\beta^\prime(x)=\begin{cases}
p^{-n} \beta (1+y)^{\beta-1} & \displaystyle \text{if } x=\sum_{k=-n}^0 a_k p^k +y, \text{ with } a_{-n}\neq 0,\ n\in \mathbb N_0, y\in p\mathbb{Z}_p,\\
p^{n-n^2}\beta(p^{-n^2}x)^{\beta-1} & \text{if } x\in \overline{B}_{p^{-(n^2+1)}}(p^{n^2}) \text{ for some } n\in \mathbb N,\\
0 & \text{otherwise}.
\end{cases}
$$
By the proof of Theorem~\ref{16} the functions in the set $V=\{F_h \colon h\in \mathcal H \}$ are algebraically independent, and every function in the algebra $\mathcal A$ generated by $V$ over $\mathbb{Q}_p$ that is not the $0$ function is continuous, differentiable on $\mathbb{Q}_p\setminus \{0\}$ and has unbounded derivative on $\mathbb{Q}_p\setminus \mathbb{Z}_p$.
It remains to prove that any nonzero algebraic combination in $V$ has unbounded derivative on $\mathbb{Z}_p\setminus\{0\}$.
To do so, let $h_1,\ldots,h_m \in \mathcal H$ be distinct and take $P$ a polynomial in $m$ variables with coefficients in $\mathbb Q_p\setminus \{0\}$ and without free term.
Then, by the chain rule and after grouping equal total degrees as in \eqref{equ:7}, $P(F_{h_1},\ldots,F_{h_m})^\prime$ on $\overline{B}_{p^{-(n^2+1)}}(p^{n^2})$ is of the form
$$
p^{-n^2}\sum_{q=1}^d p^{nk_q} \sum_{s=1}^{m_q} \alpha_{q,s} \beta_{q,s} (p^{-n^2}x)^{\beta_{q,s}-1},
$$
see the proof of Theorem~\ref{16}.
Assume without loss of generality that $k_1<\cdots<k_d$.
Since $\alpha_{1,s} \beta_{1,s}\neq 0$ and the exponents $\beta_{1,s}- 1$ are pairwise distinct and not $0$ for every $1\leq s\leq m_1$, by Lemma~\ref{lem:1}, there exists $y_1\in p\mathbb{Z}_p$ such that $\sum_{s=1}^{m_1} \alpha_{1,s} \beta_{1,s}(1+y_1)^{\beta_{1,s}-1}\neq 0$.
For every $n\in\mathbb N$, take $x_n=p^{n^2}(1+y_1)$.
Then, notice that there exists $n_0\in\mathbb N$ such that
\begin{align*}
|P(F_{h_1},\ldots,F_{h_m})^\prime (x_n)|_p & =\left|p^{-n^2}\sum_{q=1}^{d} p^{nk_q} \sum_{s=1}^{m_q}\alpha_{q,s}\beta_{q,s} (p^{-n^2}x_n)^{\beta_{q,s}-1} \right|_p\\
& =p^{n^2}\left|\sum_{q=1}^{d} p^{nk_q} \sum_{s=1}^{m_q}\alpha_{q,s}\beta_{q,s} (1+y_1)^{\beta_{q,s}-1} \right|_p\\
& =p^{n^2}\left|p^{nk_1}\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1} + \right. \\
& \left. + \sum_{q=2}^{d} p^{nk_q} \sum_{s=1}^{m_q} \alpha_{q,s} \beta_{q,s} (1+y_1)^{\beta_{q,s}-1} \right|_p\\
& =p^{n^2}\left|p^{nk_1}\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1}\right|_p\\
& =p^{n^2-nk_1}\left|\sum_{s=1}^{m_1 }\alpha_{1,s} \beta_{1,s} (1+y_1)^{\beta_{1,s}-1}\right|_p,
\end{align*}
for every $n\geq n_0$.
Therefore $\lim_{n\to \infty} |P(F_{h_1},\ldots,F_{h_m})^\prime (x_n)|_p=\infty$.
\end{proof}
In Corollary~\ref{15} we can replace the unboundedness of the derivative with the failure of the Lipschitz condition, although the conclusion is weaker in terms of lineability, as shown in the following proposition.
\begin{proposition}
\label{26} The set of continuous functions $\mathbb{Q}_{p}\rightarrow \mathbb{Q}_{p}$ which are differentiable except at $0$, with bounded derivative on $\mathbb{Q}_p\setminus\{0\}$ and not Lipschitzian of any order $\alpha>0$, is $\mathfrak{c}$-lineable and 1-algebrable.
\end{proposition}
\begin{proof}
Let us prove first the lineability part.
For any $N\in\mathcal N$, let $f_N\colon \mathbb{Q}_p\rightarrow \mathbb{Q}_p$ be:
$$
f_N(x)=\begin{cases}
p^{n} & \text{if } x\in S_{p^{-n^2}}(0) \text{ and } n\in N,\\
0 & \text{otherwise}.
\end{cases}
$$
For every $x\in\mathbb{Q}_p\setminus\{0\}$, it is clear that there exists a neighborhood of $x$ on which $f_N$ is constant, since the spheres are open sets.
Thus, $f_N$ is locally constant on $\mathbb{Q}_p\setminus\{0\}$ which implies that $f_N$ is continuous, differentiable on $\mathbb{Q}_p\setminus\{0\}$ and $f_N^\prime(x)=0$ for every $x\in \mathbb{Q}_p\setminus\{0\}$.
Moreover, it is easy to see that $f_N$ is continuous at $0$.
However, $f_N$ is not differentiable at $0$.
Indeed, take $x_n=p^{n^2}$ for every $n\in N$.
It is clear that the sequence $(x_n)_{n\in N}$ converges to $0$ and also, for every $\alpha>0$,
$$
\frac{|f_N(x_n)|_p}{|x_n|_p^\alpha}=p^{\left(-1+\alpha n\right) n}\rightarrow \infty,
$$
when $n\in N$ tends to infinity.
Therefore $f_N$ is not differentiable at $0$.
Furthermore, notice that for any $M>0$ there are infinitely many $x\in \mathbb{Z}_p$ such that $|f_N(x)|_p> M|x|_p^\alpha$.
Hence $f_N$ is not Lipschitzian of order $\alpha>0$.
It remains to prove that the functions in $V=\{f_N\colon N\in\mathcal N \}$ are linearly independent over $\mathbb{Q}_p$ and such that the functions in $\text{span}(V)\setminus\{0\}$ are differentiable except at $0$, with bounded derivative on $\mathbb{Q}_p\setminus\{0\}$ and not Lipschitzian of order $\alpha>0$.
Let $f=\sum_{i=1}^m \alpha_i f_{N_i}$, where $\alpha_1,\ldots, \alpha_m\in \mathbb{Q}_p$, $N_1,\ldots, N_m\in \mathcal N$ are distinct and $m\in \mathbb N$.
Assume that $f\equiv 0$ and take $n\in N_1^1\cap N_2^0\cap \cdots \cap N_m^0$.
Then $0=f(p^{n^2})=\alpha_1 p^{n}$ which implies that $\alpha_1=0$.
Applying similar arguments we have that $\alpha_i=0$ for every $i\in \{1,\ldots, m \}$.
Finally, assume that $\alpha_i\neq 0$ for every $i\in \{1,\ldots, m \}$.
It is clear that $f$ is continuous on $\mathbb{Q}_p$ and differentiable on $\mathbb{Q}_p\setminus\{0\}$ with $f^\prime(x)=0$ for every $x\in \mathbb{Q}_p\setminus \{0\}$.
Let $x_n=p^{n^2}$ for every $n\in N_1^1\cap N_2^0\cap \cdots \cap N_m^0$ and notice that $(x_n)_{n\in N_1^1\cap N_2^0\cap \cdots \cap N_m^0}$ converges to $0$.
Moreover, for every $\alpha>0$,
$$
\frac{|f(x_n)|_p}{|x_n|_p^\alpha}=|\alpha_1|_p p^{\left(-1+\alpha n\right) n}\rightarrow \infty,
$$
when $n\in N_1^1\cap N_2^0\cap \cdots \cap N_m^0$ tends to infinity.
Hence, $f$ is not differentiable at $0$, and also for every $M>0$ there are infinitely many $x\in \mathbb{Z}_p$ such that $|f(x)|_p> M|x|_p^\alpha$.
For the algebrability part, let $g\colon \mathbb{Q}_p\rightarrow\mathbb{Q}_p$ be defined as:
$$
g(x)=\begin{cases}
p^n & \text{if } x\in S_{p^{-n^2}}(0) \text{ for some } n\in\mathbb N,\\
0 & \text{otherwise}.
\end{cases}
$$
By applying similar arguments to those used in the first part of the proof, we have that $g$ is continuous, differentiable on $\mathbb{Q}_p\setminus\{0\}$ with $g^\prime(x)=0$ for every $x\in \mathbb{Q}_p\setminus\{0\}$, and not Lipschitzian of any order $\alpha>0$.
To finish the proof, let $G$ be a nonzero element of the algebra generated by $g$, that is, $G=\sum_{k=1}^{d}\beta_k g^{k}$ with $d\in\mathbb N$, $\beta_1,\ldots,\beta_d\in\mathbb{Q}_p$ not all zero, and set $k_0:=\min\{k\colon \beta_k\neq 0\}$.
It is obvious that $G$ is continuous, differentiable on $\mathbb{Q}_p\setminus\{0\}$ and $G^\prime(x)=0$ for every $x\in\mathbb{Q}_p\setminus\{0\}$.
Now, let $x_n=p^{n^2}$ for every $n\in\mathbb N$.
It is easy to see that $(x_n)_{n\in\mathbb N}$ converges to $0$ and that $|G(x_n)|_p=\left|\sum_{k=k_0}^{d}\beta_k p^{nk}\right|_p=|\beta_{k_0}|_p\, p^{-nk_0}$ for all sufficiently large $n$, so that, for every $\alpha>0$,
$$
\frac{|G(x_n)|_p}{|x_n|_p^\alpha}=|\beta_{k_0}|_p\, p^{\left(-k_0+\alpha n\right) n}\rightarrow \infty,
$$
when $n\rightarrow\infty$.
In particular, taking $\alpha=1$ shows that $G$ is not differentiable at $0$, and $G$ is not Lipschitzian of any order $\alpha>0$; moreover, no such $G$ vanishes identically, so the powers $g,g^2,g^3,\ldots$ are linearly independent and the algebra generated by $g$ is infinite-dimensional.
\end{proof}
Let $\mathcal B$ be the $\sigma$-algebra of all Borel subsets of $\mathbb{Z}_p$ and $\mu$ be any non-negative real-valued Haar measure on the measurable space $(\mathbb{Z}_p,\mathcal B)$.
In particular, if $\mu$ is normalized, then $\mu\left(x+p^n\mathbb{Z}_p\right)=p^{-n}$ for any $x\in \mathbb{Z}_p$ and $n\in\mathbb N$.
For the rest of the paper $\mu$ will denote a non-negative real-valued Haar measure on $(\mathbb{Z}_p,\mathcal B)$.
As usual, a Borel subset $B$ of $\mathbb{Z}_p$ is called a null set for $\mu$ provided that $\mu(B)=0$.
We also say that a Borel subset $B$ of $\mathbb{Z}_p$ is a full set for $\mu$ if $\mathbb{Z}_p\setminus B$ is a null set. (See \cite[Section~2.2]{Fo} for more details on the Haar measure.)
It is easy to see that the singletons of $\mathbb{Z}_p$ are null sets for any Haar measure $\mu$ on $(\mathbb{Z}_p,\mathcal B)$.
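Indeed, for any $x\in\mathbb{Z}_p$ and $n\in\mathbb N$ we have $\{x\}\subset x+p^n\mathbb{Z}_p$, so for the normalized Haar measure $\mu(\{x\})\leq p^{-n}$ for every $n$, and hence $\mu(\{x\})=0$; the same holds for an arbitrary Haar measure, which is a positive multiple of the normalized one.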
Therefore Proposition~\ref{26} states in particular that, for any Haar measure $\mu$ on $(\mathbb{Z}_p,\mathcal B)$, the set of continuous functions $\mathbb{Z}_p\rightarrow\mathbb{Q}_p$ that are differentiable except on a null set for $\mu$ of cardinality $1$, with bounded derivative elsewhere, is $\mathfrak{c}$-lineable.
The following result shows that a similar version can be obtained when we consider null sets of cardinality $\mathfrak{c}$ for any Haar measure $\mu$ on $(\mathbb{Z}_p,\mathcal B)$.
In order to prove it, we recall the following definition and result from probability theory.
\begin{definition}\label{def:1}
Let $(\Omega,\mathcal F,P)$ be a probability space and $Y$ be a measurable real-valued function on $\Omega$. We say that $Y$ is a random variable.
\end{definition}
\begin{theorem}\normalfont{(Strong law of large numbers, see \cite[Theorem~22.1]{Bi}).}
Let $(Y_n)_{n\in \mathbb N_0}$ be a sequence of independent and identically distributed real-valued random variables on a probability space $(\Omega,\mathcal F,P)$ such that, for each $n\in\mathbb N_0$, $E[Y_n]=m$ for some $m\in\mathbb R$ (where $E$ denotes the expected value).
Then
$$
P\left(\left\{x\in \Omega\colon \exists\lim_{n\rightarrow \infty} \frac{\sum_{k=0}^{n-1} Y_k(x)}{n}=m \right\} \right)=1.
$$
\end{theorem}
\begin{theorem}\label{thm:2}
Let $\mu$ be a Haar measure on $(\mathbb{Z}_p,\mathcal B)$.
The set of continuous functions $\mathbb{Z}_p\rightarrow\mathbb{Q}_p$ which are differentiable, with bounded derivative, on a full set for $\mu$, but not differentiable on the complement, which has cardinality $\mathfrak c$, is $\mathfrak c$-lineable.
\end{theorem}
\begin{proof}
We will prove the result for $\mu$ the normalized Haar measure on $(\mathbb{Z}_p,\mathcal B)$ since any null set for the normalized Haar measure is also a null set for any other non-negative real-valued Haar measure on $(\mathbb{Z}_p,\mathcal B)$.
This is an immediate consequence of Haar's Theorem which states that Haar measures are unique up to a positive multiplicative constant (see \cite[Theorem~2.20]{Fo}).
Let $f\colon \mathbb{Z}_p\rightarrow \mathbb{Z}_p$ be defined as follows: for every $x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p$, we have
\begin{equation}\label{equ:3}
f(x)=\begin{cases}
x & \text{if } (x_{2i},x_{2i+1})\neq (0,0) \text{ for all } i\in \mathbb N_0,\\
\displaystyle \sum_{i=0}^{2n+1} x_i p^i & \text{if } (x_{2i},x_{2i+1})\neq (0,0) \text{ for all } i\leq n \\
& \text{with } n\in \mathbb N_0 \text{ and } (x_{2n+2},x_{2n+3})= (0,0),\\
0 & \text{if } (x_0,x_1)=(0,0).
\end{cases}
\end{equation}
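For instance, taking $p=3$ and writing $x=\sum_{i=0}^\infty x_i 3^i$: we have $f(1+3+3^{2}+3^{7})=1+3+3^{2}$, since $(x_4,x_5)$ is the first digit pair equal to $(0,0)$; $f(3^{2}+3^{3})=0$, since $(x_0,x_1)=(0,0)$; and $f\left(\sum_{i=0}^\infty 3^i\right)=\sum_{i=0}^\infty 3^i$, since no digit pair vanishes.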
The function $f$ is continuous.
Indeed, let $x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p$ and fix $\varepsilon>0$.
Take any $m\in \mathbb N_0$ such that $p^{-(2m+1)}<\varepsilon$.
Then for any $y\in \mathbb{Z}_p$ such that $|x-y|_p<p^{-(2m+1)}$ we have that $y$ is of the form $\sum_{i=0}^{2m+1} x_i p^i + \sum_{i=2m+2}^\infty y_i p^i$.
Hence, notice that in any possible case of $x$ given in \eqref{equ:3}, we have that $|f(x)-f(y)|_p< p^{-(2m+1)}<\varepsilon$.
Let us define, for every $i\in \mathbb N_0$, the random variables $Y_i\colon \mathbb{Z}_p\rightarrow \{0,1\}$ in the following way: for any $x=\sum_{j=0}^\infty x_j p^j\in \mathbb{Z}_p$,
$$
Y_i(x)=\begin{cases}
1 & \text{if } (x_{2i},x_{2i+1})=(0,0),\\
0 & \text{if } (x_{2i},x_{2i+1})\neq (0,0).
\end{cases}
$$
Notice that the random variables $(Y_i)_{i\in \mathbb N_0}$ are independent and identically distributed with $E[Y_i]=\frac{1}{p^2}$ for every $i\in \mathbb N_0$.
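Indeed, the event $\{Y_i=1\}$ is the disjoint union, over the $p^{2i}$ possible choices of the digits $x_0,\ldots,x_{2i-1}$, of the balls $\sum_{j=0}^{2i-1}x_j p^j+p^{2i+2}\mathbb{Z}_p$, each of normalized Haar measure $p^{-(2i+2)}$, so that $E[Y_i]=\mu(\{Y_i=1\})=p^{2i}\cdot p^{-(2i+2)}=p^{-2}$.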
Thus, by the strong law of large numbers, the set
$$
D=\left\{x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p \colon \exists\lim_{n\rightarrow\infty} \frac{\sum_{i=0}^{n-1} Y_i(x)}{n}=\frac{1}{p^2} \right\}
$$
has measure $1$.
Now, since for every $i\in \mathbb N_0$, $Y_i(x)=0$ for all $x=\sum_{j=0}^\infty x_j p^j$ that satisfy $(x_{2j},x_{2j+1})\neq (0,0)$ for each $j\in \mathbb N_0$, we have that $\lim_{n\to \infty} \frac{\sum_{i=0}^{n-1} Y_i(x)}{n}=0$ for all such $x$.
Hence, it is clear that $E:=\left\{\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p\colon (x_{2i},x_{2i+1})\neq (0,0) \text{ for all } i\in \mathbb N_0 \right\}\subset \mathbb{Z}_p\setminus D$.
Moreover, by construction $\text{card}(E)=\mathfrak{c}$.
Notice that it is not obvious that $E$ is a Borel set since any Haar measure $\mu$ on $(\mathbb{Z}_p,\mathcal B)$ is not complete.
However, as $\mathbb{Z}_p\setminus D$ is a null set, we have that if $E$ were a Borel set, then $E$ would be a null set.
Let us prove that $E$ is a Borel set.
Consider the finite field of $p$ elements $\mathbb F_p$ endowed with the trivial absolute value $|\cdot|_T$.
Then $\mathbb F_p$ is a discrete topological space, which implies that the product space $\mathbb F_p^2:=\mathbb F_p\times \mathbb F_p$ has the discrete topology. (Recall that the finite product of discrete topological spaces has the discrete topology.)
For every $n\in \mathbb N_0$, let $\pi_n\colon \mathbb{Z}_p\to \mathbb F_p^2$ be defined as $\pi_n(x)=(x_{2n},x_{2n+1})$ for every $x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p$.
Let $n\in \mathbb N_0$, $x\in \mathbb{Z}_p$ and fix $\varepsilon>0$.
Take an integer $m>n$.
Then for every $y\in \mathbb{Z}_p$ such that $|x-y|_p<p^{-(2m+1)}$ we have that $\pi_n(x)=\pi_n(y)$, i.e., $|\pi_n(x)-\pi_n(y)|_T=0<\varepsilon$.
Hence $\pi_n$ is continuous.
Note that $E=\bigcap_{n=0}^\infty \pi_n^{-1}(\{(x,y)\in \mathbb F_p^2 \colon (x,y)\neq (0,0) \})$, where $\pi_n^{-1}(\{(x,y)\in \mathbb F_p^2 \colon (x,y)\neq (0,0) \})$ is closed since $\pi_n$ is continuous and $\{(x,y)\in \mathbb F_p^2 \colon (x,y)\neq (0,0) \}$ is closed in $\mathbb F_p^2$.
Hence, $E$ is closed since it is the countable intersection of closed sets and, therefore, a Borel set.
Let us analyze now the differentiability of $f$.
On the one hand, if for $x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p$ there exists $i_0\in\mathbb N_0$ such that $(x_{2i_0},x_{2i_0+1})=(0,0)$, then it is clear that $f$ is constant on some neighborhood of $x$, and hence differentiable at $x$.
On the other hand, if $f$ were differentiable at a point $x=\sum_{i=0}^\infty x_i p^i\in \mathbb{Z}_p$ satisfying $(x_{2i},x_{2i+1})\neq (0,0)$ for all $i\in \mathbb N_0$, then we would have $f^\prime (x)=1$: indeed, any point $y$ that agrees with $x$ in sufficiently many initial digits and whose remaining digit pairs are all nonzero satisfies $f(y)=y$ and $f(x)=x$, so the corresponding difference quotients equal $1$, and such points $y$ can be taken arbitrarily close to $x$.
Assume, by means of contradiction, that $f$ is differentiable at $x$.
For every $n\in \mathbb N_0$, take $\overline{x}_n:=\sum_{i=0}^{2n+1} x_i p^i + \sum_{i=2n+4}^\infty y_i p^i$ with $y_{2n+4}\neq 0$, then
\begin{align*}
\frac{f(x)-f(\overline{x}_n)}{x-\overline{x}_n} & = \frac{\sum_{i=0}^\infty x_i p^i-\sum_{i=0}^{2n+1} x_i p^i}{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i} }\\
& = \frac{\sum_{i=2n+2}^\infty x_{i} p^{i}}{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i}}\\
& = \frac{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i}+\sum_{i=2n+4}^\infty y_i p^{i} }{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i}}\\
& = 1+\frac{\sum_{i=2n+4}^\infty y_i p^{i}}{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i}}.
\end{align*}
Now, as $y_{2n+4} \neq 0$, we have
$$
\left|\frac{\sum_{i=2n+4}^\infty y_i p^{i}}{\sum_{i=2n+2}^\infty x_{i} p^{i}-\sum_{i=2n+4}^\infty y_i p^{i}} \right|_p=\begin{cases}
p^{-2} & \text{if } x_{2n+2}\neq 0,\\
p^{-1} & \text{if } x_{2n+2}=0 \text{ and } x_{2n+3}\neq 0.
\end{cases}
$$
Thus, $\lim_{n\to \infty} |x-\overline{x}_n|_p=0$ while $\left|\frac{f(x)-f(\overline{x}_n)}{x-\overline{x}_n}-1\right|_p\geq p^{-2}$ for every $n\in\mathbb N_0$, so the difference quotients cannot converge to $f^\prime(x)=1$, a contradiction.
Let us define the function $g\colon \mathbb{Z}_p\rightarrow \mathbb{Q}_p$ by:
$$
g(x)=\begin{cases}
p^n f(x^\prime) & \text{if } x=p^n+p^{n+1} x^\prime \text{ with } n\in\mathbb N \text{ and } x^\prime \in \mathbb{Z}_p,\\
0 & \text{otherwise}.
\end{cases}
$$
Notice that $g$ is well defined since the sets $B_n:=p^n+p^{n+1}\mathbb{Z}_p$ are pairwise disjoint.
(The sets $B_n$ are the closed balls $\overline{B}_{p^{-(n+1)}}(p^n)$.)
If $x\in \mathbb{Z}_p\setminus \left(\{0\} \cup \bigcup_{n=1}^\infty B_n\right)$, then there exists an open neighborhood $U^x$ of $x$ such that $g$ is identically zero on $U^x$, i.e., $g$ is differentiable at $x$.
Now, as $g(p^n+p^{n+1}x)=p^n f(x)$ for every $n\in\mathbb N$ and $x\in \mathbb{Z}_p$, and since $f$ is continuous, it is obvious that $g$ is continuous on $\bigcup_{n=1}^\infty B_n$.
Moreover, $g$ is also continuous at $0$.
To prove it fix $\varepsilon>0$ and take $n\in \mathbb N$ such that $p^{-n}<\varepsilon$.
If $x\in \mathbb{Z}_p$ is such that $0<|x|_p\leq p^{-n}$, write $|x|_p=p^{-m}$ with $m\geq n$, so that $x=x_m p^m+p^{m+1}x^\prime$ with $x_m\neq 0$ and $x^\prime\in\mathbb{Z}_p$.
Furthermore,
\begin{align*}
|g(0)-g(x)|_p & =\begin{cases}
|p^m f(x^\prime)|_p & \text{if } x_m=1,\\
0 & \text{otherwise},
\end{cases}
=\begin{cases}
p^{-m}|f(x^\prime)|_p & \text{if } x_m=1,\\
0 & \text{otherwise}.
\end{cases}
\end{align*}
Since $f$ takes values in $\mathbb{Z}_p$, we conclude that $|g(0)-g(x)|_p\leq p^{-m}\leq p^{-n}<\varepsilon$.
Therefore we have proven that $g$ is continuous on $\mathbb{Z}_p$.
Moreover, $g$ is also differentiable on $\bigcup_{n=1}^\infty (B_n\setminus E_n)$, where $E_n:=p^n+p^{n+1}E$, with bounded derivative, since $f$ is differentiable on $\mathbb{Z}_p\setminus E$ with $f^\prime\equiv 0$ there; and $g$ is not differentiable at any point of $\bigcup_{n=1}^\infty E_n$, since $f$ is not differentiable at any point of $E$.
Notice that once again $\text{card}(E_n)=\mathfrak{c}$ for every $n\in \mathbb N$.
Let us prove that $E_n$ is a Borel set with $\mu(E_n)=0$ for every $n\in \mathbb N$.
To do so, let us consider the restricted measure $\mu_n=p^{n+1}\mu$ on the measurable space $(B_n,\mathcal B_n)$, where $\mathcal B_n$ is the $\sigma$-algebra of all Borel subsets of $B_n$.
Notice that $\mathcal B_n=\{B\cap B_n\colon B\in \mathcal B \}$ and $(B_n,\mathcal B_n,\mu_n)$ is a probability space.
Define now for every $i\in \mathbb N_0$ the random variables $Y_{n,i}\colon B_n\rightarrow\{0,1\}$ as follows: for $x=p^n+p^{n+1}\sum_{j=0}^\infty x_j p^j\in B_n$, we have
$$
Y_{n,i}(x)=\begin{cases}
1 & \text{if } (x_{2i},x_{2i+1})=(0,0),\\
0 & \text{if } (x_{2i},x_{2i+1}) \neq (0,0).
\end{cases}
$$
Once again the random variables $(Y_{n,i})_{i\in \mathbb N_0}$ are independent and identically distributed with $E[Y_{n,i}]=\frac{1}{p^2}$ for every $i\in\mathbb N_0$.
Thus, the set
$$
\left\{x=p^n+p^{n+1}\sum_{i=0}^\infty x_i p^i\in B_n \colon \exists\lim_{m\rightarrow\infty} \frac{\sum_{i=0}^{m-1} Y_{n,i}(x)}{m}=\frac{1}{p^2} \right\}=p^n+p^{n+1}D=:D_n
$$
is a full set for $\mu_n$.
By considering for each $k\in \mathbb N_0$ the function $\pi_{n,k}\colon B_n\to \mathbb F_p^2$ given by $\pi_{n,k}(x)=(x_{2k},x_{2k+1})$ for every $x=p^n+p^{n+1}\sum_{i=0}^\infty x_i p^i\in B_n$ and applying similar arguments as above, we have that $\pi_{n,k}$ is continuous.
Hence $E_n=\bigcap_{k=0}^\infty \pi_{n,k}^{-1}(\{(x,y)\in \mathbb F_p^2\colon (x,y) \neq (0,0) \})$ is once again a Borel set.
Furthermore, since $E_n\subset B_n\setminus D_n$ we have that $E_n$ is a null set for $\mu_n$.
Thus $\mu(E_n)=p^{-(n+1)}\mu_n(E_n)=0$ for every $n\in \mathbb N$.
Finally let us prove that $g$ is not differentiable at $0$.
Since every neighborhood containing $0$ on $\mathbb{Z}_p$ contains points $x$ such that $g(x)=0$, if $g$ were differentiable at $0$ then $g^\prime (0)=0$.
Assume that $g$ is differentiable at $0$.
As $p^n+p^{n+1}\sum_{i=0}^\infty p^i=p^n \sum_{i=0}^\infty p^i\in B_n$ for every $n\in \mathbb N$, we have that
$$
\left|\frac{g\left(p^n+p^{n+1}\sum_{i=0}^\infty p^i \right)-g(0)}{p^n+p^{n+1}\sum_{i=0}^\infty p^i} \right|_p=\left|\frac{p^n f\left(\sum_{i=0}^\infty p^i \right)}{p^n \sum_{i=0}^\infty p^i} \right|_p=\left|\frac{p^n \sum_{i=0}^\infty p^i}{p^n \sum_{i=0}^\infty p^i} \right|_p=1,
$$
where $\lim_{n\to \infty } \left|p^n+p^{n+1}\sum_{i=0}^\infty p^i \right|_p=\lim_{n\to \infty} p^{-n}=0$, a contradiction.
For every $N\in \mathcal N$, let us define $f_N\colon \mathbb{Z}_p\rightarrow \mathbb{Q}_p$ by:
$$
f_N(x)=g(x)\sum_{n\in N} 1_{B_n}(x).
$$
Since $B_n\cap B_m=\emptyset$ for every distinct $n,m\in \mathbb N$, the function $f_N$ is well defined.
Furthermore, since each $N\in \mathcal N$ is infinite, we can apply the above arguments to prove that $f_N$ is continuous and differentiable on a full set for $\mu$ with bounded derivative but not differentiable on the complement having cardinality $\mathfrak c$.
It remains to prove that the functions in $V=\{f_N\colon N\in \mathcal N \}$ are linearly independent over $\mathbb{Q}_p$ and any nonzero linear combination over $\mathbb{Q}_p$ of the functions in $V$ satisfies the necessary properties.
Let $F:=\sum_{i=1}^k a_i f_{N_i}$, where $k\in \mathbb N$, $a_1,\ldots,a_k\in \mathbb{Q}_p$, and $N_1,\ldots,N_k\in \mathcal N$ are distinct.
We begin by showing the linear independence.
Assume that $F\equiv 0$.
Fix $n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0$ and take $x=p^n+p^{n+1}\sum_{i=0}^\infty p^i\in B_n$.
Then, $0=F(x)=a_1 f_{N_1}(x)=a_1 p^n\sum_{i=0}^\infty p^i$, so that $a_1=0$.
By repeating the same argument, it is easy to see that $a_i=0$ for every $i\in \{1,\ldots,k \}$.
Assume now that $a_i\neq 0$ for every $i\in \{1,\ldots,k \}$.
Then $F$ is continuous on $\mathbb{Z}_p$ and differentiable on
$$
\Delta:=\mathbb{Z}_p\setminus \left(\{0\}\cup \left(\bigcup_{n\in \bigcup_{i=1}^k N_i} E_n \right) \right)
$$
(with bounded derivative).
Applying similar arguments as above, we have that $F$ is not differentiable at $0$.
Let $x\in E_n$ with $n\in \bigcup_{i=1}^k N_i$.
We will analyze the differentiability of $F$ at $x$ depending on the values that $F$ takes on $B_n$.
We have two possible cases.
\textit{Case 1:} If $F$ is identically $0$ on $B_n$, then $F$ is differentiable at $x$.
\textit{Case 2:} If $F$ is not identically $0$ on $B_n$, then there exists $a\in \mathbb{Q}_p\setminus\{0\}$ such that $F=ag$ on $B_n$.
Hence $F$ is not differentiable at $x$ since $g$ is not differentiable at $x$.
Notice that Case 2 holds, with $a=a_1\neq 0$, for every $n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0$, since on such $B_n$ we have $F=a_1 g$.
By the above case analysis, the set $\Lambda$ of points at which $F$ is not differentiable consists of $\{0\}$ together with the union of those $E_n$ for which Case 2 occurs; in particular, $\Lambda$ is a Borel set containing $\bigcup_{n\in N_1^1\cap N_2^0\cap \cdots \cap N_k^0} E_n$, and hence of cardinality $\mathfrak c$, while $F$ is differentiable on $\mathbb{Z}_p\setminus\Lambda$ with bounded derivative (on each $E_n$ where Case 1 occurs, $F$ vanishes identically on $B_n$, so its derivative there is $0$).
To finish the proof, it is enough to show that $\mathbb{Z}_p\setminus\Lambda$ is a full set for $\mu$, but this is an immediate consequence of the fact that $\Lambda\subset \{0\}\cup \left(\bigcup_{n\in \bigcup_{i=1}^k N_i} E_n \right)$, and the latter set is a countable union of null sets for $\mu$, so that
$$
\mu\left(\{0\}\cup \left(\bigcup_{n\in \bigcup_{i=1}^k N_i} E_n \right) \right)=0.
$$
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{abms}{article}{
AUTHOR = {Ara\'{u}jo, G.},
AUTHOR = {Bernal-Gonz\'{a}lez, L.},
AUTHOR = {Mu\~{n}oz-Fern\'{a}ndez, G. A.},
AUTHOR = {Seoane-Sep\'{u}lveda, J. B.},
TITLE = {Lineability in sequence and function spaces},
JOURNAL = {Studia Math.},
VOLUME = {237},
YEAR = {2017},
NUMBER = {2},
PAGES = {119--136},
}
\bib{ar}{book}{
author={Aron, R.M.},
author={Bernal Gonz\'{a}lez, L.},
author={Pellegrino, D.M.},
author={Seoane Sep\'{u}lveda, J.B.},
title={Lineability: the search for linearity in mathematics},
series={Monographs and Research Notes in Mathematics},
publisher={CRC Press, Boca Raton, FL},
date={2016},
pages={xix+308},
isbn={978-1-4822-9909-0},
}
\bib{AGS2005}{article}{
author={Aron, R. M.},
author={Gurariy, V. I.},
author={Seoane-Sep\'{u}lveda, J. B.},
title={Lineability and spaceability of sets of functions on $\mathbb R$},
journal={Proc. Amer. Math. Soc.},
volume={133},
date={2005},
number={3},
pages={795--803},
}
\bib{ar2}{article}{
author={Aron, R.M.},
author={P\'{e}rez-Garc\'{\i }a, D.},
author={Seoane-Sep\'{u}lveda, J.B.},
title={Algebrability of the set of non-convergent Fourier series},
journal={Studia Math.},
volume={175},
date={2006},
number={1},
pages={83--90},
}
\bib{bbf}{article}{
AUTHOR = {Balcerzak, Marek},
AUTHOR = {Bartoszewicz, Artur},
AUTHOR = {Filipczak, Ma\l gorzata},
TITLE = {Nonseparable spaceability and strong algebrability of sets of
continuous singular functions},
JOURNAL = {J. Math. Anal. Appl.},
VOLUME = {407},
YEAR = {2013},
NUMBER = {2},
PAGES = {263--269},
}
\bib{bbfg}{article}{
AUTHOR = {Bartoszewicz, Artur},
AUTHOR = {Bienias, Marek},
AUTHOR = {Filipczak, Ma\l gorzata},
AUTHOR = {G\l \c{a}b, Szymon},
TITLE = {Strong {$\germ{c}$}-algebrability of strong
{S}ierpi\'{n}ski-{Z}ygmund, smooth nowhere analytic and other sets
of functions},
JOURNAL = {J. Math. Anal. Appl.},
VOLUME = {412},
YEAR = {2014},
NUMBER = {2},
PAGES = {620--630},
}
\bib{barglab}{article}{
author={Bartoszewicz, A.},
AUTHOR = {Bienias, M.},
author={G\l \c {a}b, S.},
TITLE = {Independent {B}ernstein sets and algebraic constructions},
JOURNAL = {J. Math. Anal. Appl.},
VOLUME = {393},
YEAR = {2012},
NUMBER = {1},
PAGES = {138--143},
}
\bib{bar20}{article}{
AUTHOR = {Bartoszewicz, Artur},
AUTHOR ={ Filipczak, Ma\l gorzata},
AUTHOR = {Terepeta, Ma\l gorzata},
TITLE = {Lineability of {L}inearly {S}ensitive {F}unctions},
JOURNAL = {Results Math.},
VOLUME = {75},
YEAR = {2020},
NUMBER = {2},
PAGES = {Paper No. 64},
}
\bib{ba3}{article}{
author={Bartoszewicz, A.},
author={G\l \c {a}b, S.},
title={Strong algebrability of sets of sequences and functions},
journal={Proc. Amer. Math. Soc.},
volume={141},
date={2013},
number={3},
pages={827--835},
}
\bib{bay}{article}{
AUTHOR = {Bayart, Fr\'{e}d\'{e}ric},
TITLE = {Linearity of sets of strange functions},
JOURNAL = {Michigan Math. J.},
VOLUME = {53},
YEAR = {2005},
NUMBER = {2},
PAGES = {291--303},
}
\bib{bq}{article}{
AUTHOR = {Bayart, Fr\'{e}d\'{e}ric},
AUTHOR = {Quarta, Lucas},
TITLE = {Algebras in sets of queer functions},
JOURNAL = {Israel J. Math.},
VOLUME = {158},
YEAR = {2007},
PAGES = {285--296},
}
\bib{bbls}{article}{
AUTHOR = {Bernal-Gonz\'{a}lez, L.},
AUTHOR = {Bonilla, A.},
AUTHOR = {L\'{o}pez-Salazar, J.},
AUTHOR = {Seoane-Sep\'{u}lveda, J. B.},
TITLE = {Nowhere h\"{o}lderian functions and {P}ringsheim singular
functions in the disc algebra},
JOURNAL = {Monatsh. Math.},
VOLUME = {188},
YEAR = {2019},
NUMBER = {4},
PAGES = {591--609},
}
\bib{TAMS2020}{article}{
author={Bernal-Gonz\'{a}lez, L.},
author={Cabana-M\'{e}ndez, H. J.},
author={Mu\~{n}oz-Fern\'{a}ndez, G. A.},
author={Seoane-Sep\'{u}lveda, J. B.},
title={On the dimension of subspaces of continuous functions attaining
their maximum finitely many times},
journal={Trans. Amer. Math. Soc.},
volume={373},
date={2020},
number={5},
pages={3063--3083},
}
\bib{b20}{article}{
author={Bernal-Gonz\'{a}lez, L.},
author={Mu\~{n}oz-Fern\'{a}ndez, G. A.},
author={Rodr\'{\i }guez-Vidanes, D. L.},
author={Seoane-Sep\'{u}lveda, J. B.},
title={Algebraic genericity within the class of sup-measurable functions},
journal={J. Math. Anal. Appl.},
volume={483},
date={2020},
number={1},
pages={123--576},
}
\bib{bo}{article}{
AUTHOR = {Bernal-Gonz\'{a}lez, Luis},
AUTHOR = {Ord\'{o}\~{n}ez Cabrera, Manuel},
TITLE = {Lineability criteria, with applications},
JOURNAL = {J. Funct. Anal.},
VOLUME = {266},
YEAR = {2014},
NUMBER = {6},
PAGES = {3997--4025},
}
\bib{survey}{article}{
author={Bernal-Gonz\'{a}lez, L.},
author={Pellegrino, D.},
author={Seoane-Sep\'{u}lveda, J.B.},
title={Linear subsets of nonlinear sets in topological vector spaces},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={51},
date={2014},
number={1},
pages={71--130},
}
\bib{bns}{article}{
AUTHOR = {Biehler, N.},
AUTHOR = {Nestoridis, V.},
AUTHOR = {Stavrianidi, A.},
TITLE = {Algebraic genericity of frequently universal harmonic
functions on trees},
JOURNAL = {J. Math. Anal. Appl.},
VOLUME = {489},
YEAR = {2020},
NUMBER = {1},
PAGES = {124132, 11},
}
\bib{Bi}{book}{
author={Billingsley, Patrick},
title={Probability and measure},
series={Wiley Series in Probability and Mathematical Statistics},
edition={3},
note={A Wiley-Interscience Publication},
publisher={John Wiley \& Sons, Inc., New York},
date={1995},
pages={xiv+593},
}
\bib{cgp}{article}{
AUTHOR = {Calder\'{o}n-Moreno, M. C.},
AUTHOR = {Gerlach-Mena, P. J.},
AUTHOR = {Prado-Bassas, J. A.},
TITLE = {Lineability and modes of convergence},
JOURNAL = {Rev. R. Acad. Cienc. Exactas F\'{\i}s. Nat. Ser. A Mat. RACSAM},
VOLUME = {114},
YEAR = {2020},
NUMBER = {1},
PAGES = {Paper No. 18, 12},
}
\bib{CS2019}{article}{
author={Ciesielski, Krzysztof C.},
author={Seoane-Sep\'{u}lveda, Juan B.},
title={Differentiability versus continuity: restriction and extension theorems and monstrous examples},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={56},
date={2019},
number={2},
pages={211--260},
}
\bib{egs}{article}{
AUTHOR = {Enflo, Per H.},
AUTHOR = {Gurariy, Vladimir I.},
AUTHOR = {Seoane-Sep\'{u}lveda, Juan B.},
TITLE = {Some results and open questions on spaceability in function
spaces},
JOURNAL = {Trans. Amer. Math. Soc.},
VOLUME = {366},
YEAR = {2014},
NUMBER = {2},
PAGES = {611--625},
}
\bib{fmrs}{article}{
AUTHOR = {Fern\'{a}ndez-S\'{a}nchez, J.},
author={Maghsoudi, S.},
author={Rodr\'{\i}guez-Vidanes, D. L.},
author={Seoane-Sep{\'u}lveda, J. B.},
TITLE = {Classical vs. non-Archimedean analysis. An approach via algebraic genericity},
status={Preprint (2021)},
}
\bib{preprint-June-2019}{article}{
author={Fern\'andez-S\'anchez, J.},
author={Mart\'inez-G\'omez, M.E.},
author={Seoane-Sep\'ulveda, J.B.},
title={Algebraic genericity and special properties within sequence spaces and series},
journal={Preprint (2019)},
}
\bib{fich}{article}{
author={Fichtenholz, G.},
author={Kantorovich, L.},
title={Sur les op\'{e}rations dans l'espace des functions born\'{e}es},
journal={Studia Math.},
volume={5},
date={1934},
number={},
pages={69-98},
}
\bib{Fo}{book}{
author={Folland, Gerald B.},
title={A course in abstract harmonic analysis},
series={Textbooks in Mathematics},
edition={2},
publisher={CRC Press, Boca Raton, FL},
date={2016},
pages={xiii+305 pp.+loose errata},
}
\bib{g7}{article}{
AUTHOR = {Garc\'{\i}a-Pacheco, F. J.},
AUTHOR = {Palmberg, N.},
AUTHOR = {Seoane-Sep\'{u}lveda, J. B.},
TITLE = {Lineability and algebrability of pathological phenomena in analysis},
JOURNAL = {J. Math. Anal. Appl.},
VOLUME = {326},
YEAR = {2007},
NUMBER = {2},
PAGES = {929--939},
}
\bib{go}{book}{
author={Gouv\^{e}a, F.Q.},
title={$p$-adic numbers, An introduction},
series={Universitext},
edition={2},
publisher={Springer-Verlag, Berlin},
date={1997},
pages={vi+298},
}
\bib{gur1}{article}{
author={Gurari\u {\i }, V.I.},
title={Subspaces and bases in spaces of continuous functions},
language={Russian},
journal={Dokl. Akad. Nauk SSSR},
volume={167},
date={1966},
pages={971--973},
}
\bib{h}{article}{
author={Hausdorff, F.},
title={Uber zwei Satze von G. Fichtenholz und L. Kantorovich},
language={German},
journal={Studia Math.},
volume={6},
date={1936},
pages={18--19},
}
\bib{jms}{article}{
AUTHOR = {Jim\'{e}nez-Rodr\'{\i}guez, P.},
AUTHOR = {Mu\~{n}oz-Fern\'{a}ndez, G. A.},
AUTHOR = {Seoane-Sep\'{u}lveda, J. B.},
TITLE = {Non-{L}ipschitz functions with bounded gradient and related
problems},
JOURNAL = {Linear Algebra Appl.},
VOLUME = {437},
YEAR = {2012},
NUMBER = {4},
PAGES = {1174--1181},
}
\bib{kato}{book}{
AUTHOR = {Katok, S.},
TITLE = {{$p$}-adic analysis compared with real},
SERIES = {Student Mathematical Library},
VOLUME = {37},
PUBLISHER = {American Mathematical Society, Providence, RI; Mathematics
Advanced Study Semesters, University Park, PA},
YEAR = {2007},
PAGES = {xiv+152},
}
\bib{jms1}{article}{
author={Khodabendehlou, J.},
author={Maghsoudi, S.},
author={Seoane-Sep{\'u}lveda, J.B.},
title={Algebraic genericity and summability within the non-Archimedean setting},
journal={Rev. R. Acad. Cienc. Exactas F\'{\i}s. Nat. Ser. A Mat. RACSAM},
volume={115},
number={21},
date={2021},
}
\bib{jms2}{article}{
author={Khodabendehlou, J.},
author={Maghsoudi, S.},
author={Seoane-Sep{\'u}lveda, J.B.},
title={Lineability and algebrability within $p$-adic function spaces},
journal={Bull. Belg. Math. Soc. Simon Stevin},
volume={27},
pages={711--729},
date={2020},
}
\bib{jms3}{article}{
author={Khodabendehlou, J.},
author={Maghsoudi, S.},
author={Seoane-Sep{\'u}lveda, J.B.},
title={Lineability, continuity, and antiderivatives in the
non-Archimedean setting},
journal={Canad. Math. Bull.},
volume={64},
date={2021},
number={3},
pages={638--650},
}
\bib{levinemilman1940}{article}{
author={Levine, B.},
author={Milman, D.},
title={On linear sets in space $C$ consisting of functions of bounded variation},
language={Russian, with English summary},
journal={Comm. Inst. Sci. Math. M\'{e}c. Univ. Kharkoff [Zapiski Inst. Mat. Mech.] (4)},
volume={16},
date={1940},
pages={102--105},
}
\bib{mahler}{book}{
AUTHOR = {Mahler, K.},
TITLE = {$p$-adic numbers and their functions},
SERIES = {Cambridge Tracts in Mathematics},
VOLUME = {76},
EDITION = {Second},
PUBLISHER = {Cambridge University Press, Cambridge-New York},
YEAR = {1981},
PAGES = {xi+320},
}
\bib{Ma2}{article}{
author={Mahler, K.},
title={An interpolation series for continuous functions of a $p$-adic
variable},
journal={J. Reine Angew. Math.},
volume={199},
date={1958},
pages={23--34},
}
\bib{ms}{article}{
AUTHOR = {Moothathu, T. K. Subrahmonian},
TITLE = {Lineability in the sets of {B}aire and continuous real
functions},
JOURNAL = {Topology Appl.},
VOLUME = {235},
YEAR = {2018},
PAGES = {83--91},
}
\bib{nak}{article}{
AUTHOR = {Natkaniec, Tomasz},
TITLE = {On lineability of families of non-measurable functions of two
variable},
JOURNAL = {Rev. R. Acad. Cienc. Exactas F\'{\i}s. Nat. Ser. A Mat. RACSAM},
VOLUME = {115},
YEAR = {2021},
NUMBER = {1},
PAGES = {Paper No. 33, 10},
}
\bib{ro}{book}{
AUTHOR = {Robert, A. M.},
TITLE = {A course in $p$-adic analysis},
SERIES = {Graduate Texts in Mathematics},
VOLUME = {198},
PUBLISHER = {Springer-Verlag, New York},
YEAR = {2000},
PAGES = {xvi+437},
}
\bib{sc}{book}{
author={Schikhof, W.H.},
title={Ultrametric calculus, An introduction to $p$-adic analysis},
series={Cambridge Studies in Advanced Mathematics},
volume={4},
publisher={Cambridge University Press, Cambridge},
date={1984},
pages={viii+306},
}
\bib{juanksu}{book}{
author={Seoane-Sep\'{u}lveda, J.B.},
title={Chaos and lineability of pathological phenomena in analysis},
note={Thesis (Ph.D.)--Kent State University},
publisher={ProQuest LLC, Ann Arbor, MI},
date={2006},
pages={139},
}
\bib{va}{book}{
author={van Rooij, A.C.M.},
title={Non-Archimedean functional analysis},
series={Monographs and Textbooks in Pure and Applied Math.},
volume={51},
publisher={Marcel Dekker, Inc., New York},
date={1978},
pages={x+404},
}
\end{biblist}
\end{bibdiv}
\end{document} |
\begin{document}
\pagenumbering{arabic}
\title[Control biharmonic: Star graphs]{Controllability for Schrödinger type system with mixed dispersion on compact star graphs}
\author[Capistrano--Filho]{Roberto de A. Capistrano--Filho}
\address{\emph{Departamento de Matem\'atica, Universidade Federal de Pernambuco (UFPE), 50740-545, Recife (PE), Brazil.}}
\email{roberto.capistranofilho@ufpe.br}
\author[Cavalcante]{M\'arcio Cavalcante}
\address{\emph{Instituto de Matem\'{a}tica, Universidade Federal de Alagoas (UFAL), Macei\'o (AL), Brazil}}
\email{marcio.melo@im.ufal.br}
\author[Gallego]{Fernando A. Gallego}
\address{\emph{Departamento de Matematicas y Estad\'istica, Universidad Nacional de Colombia (UNAL), Cra 27 No. 64-60, 170003, Manizales, Colombia}}
\email{fagallegor@unal.edu.co}
\makeatletter
\@namedef{subjclassname@2020}{\textup{2020} Mathematics Subject Classification}
\makeatother
\subjclass[2020]{35R02, 35Q55, 35G30, 93B05, 93B07}
\keywords{Exact controllability, Schrödinger type equation, Star graph, Neumann boundary conditions}
\begin{abstract}
In this work we are concerned with solutions to the linear Schrödinger type system with mixed dispersion, the so-called biharmonic Schrödinger equation. Precisely, we are able to prove an exact control property for these solutions with the control in the energy space posed on an oriented star graph structure $\mathcal{G}$ for $T>T_{min}$, with $$T_{min}=\sqrt{ \frac{ \overline{L} (L^2+\pi^2)}{\pi^2\varepsilon(1- \overline{L} \varepsilon)}},$$ when the couplings and the controls appear only on the Neumann boundary conditions.
\end{abstract}
\maketitle
\section{Introduction}
The fourth-order nonlinear Schr\"odinger (4NLS) equation or biharmonic cubic nonlinear \linebreak Schr\"odinger equation
\begin{equation}
\label{fourtha}
i\partial_tu +\partial_x^2u-\partial_x^4u=\lambda |u|^2u,
\end{equation}
was introduced by Karpman \cite{Karpman} and Karpman and Shagalov \cite{KarSha} to take into account the role of small fourth-order dispersion terms in the propagation of intense laser beams in a bulk medium with Kerr nonlinearity. Equation \eqref{fourtha} arises in many scientific fields, such as quantum mechanics, nonlinear optics and plasma physics, and has been intensively studied (see \cite{Ben,Karpman} and the references therein).
In the past twenty years the 4NLS equation has been deeply studied from different mathematical points of view. For example, Fibich \textit{et al.} \cite{FiIlPa} studied various properties of the equation in the subcritical regime, with part of their analysis relying on very interesting numerical developments. The well-posedness problem and the existence of solutions have been established (see, for instance, \cite{tsutsumi}) by means of the energy method, harmonic analysis, etc.
\subsection{Dispersive models on star graphs}
The study of nonlinear dispersive models on metric graphs has attracted a lot of attention from mathematicians, physicists, chemists and engineers; see \cite{BK, BlaExn08, BurCas01, Mug15} and references therein for details. In particular, the prototypical framework (graph geometry) for the description of these phenomena has been a {\it star graph} $\mathcal G$, namely, a metric graph with $N$ half-lines of the form $(0, +\infty)$ connected at a common vertex $\nu=0$, together with a nonlinear equation suitably defined on the edges, such as the nonlinear Schr\"odinger equation (see Adami {\it{et al.}} \cite{AdaNoj14, AdaNoj15} and Angulo and Goloshchapova \cite{AngGol17a, AngGol17b}). We note that with the introduction of nonlinearities in the dispersive models, the network provides a nice setting where one can look for interesting soliton propagation and nonlinear dynamics in general. A central point that makes this analysis a delicate problem is the presence of a vertex where the underlying one-dimensional star graph bifurcates (or multi-bifurcates in a general metric graph).
Looking at other nonlinear dispersive systems on graph structures, we have some interesting results. For example, regarding well-posedness theory, the second author \cite{Cav} studied the local well-posedness of the Cauchy problem associated with the Korteweg-de Vries equation on a metric star graph with three semi-infinite edges given by one negative half-line and two positive half-lines attached to a common vertex $\nu=0$ (the $\mathcal Y$-junction framework). Another nonlinear dispersive equation, the Benjamin--Bona--Mahony (BBM) equation, is treated in \cite{bona,Mugnolobbm}. More precisely, Bona and Cascaval \cite{bona} obtained local well-posedness in the Sobolev space $H^1$, and Mugnolo and Rault \cite{Mugnolobbm} showed the existence of traveling waves for the BBM equation on graphs. Using a different approach, Ammari and Crépeau \cite{AmCr1} derived well-posedness and stabilization results for the Benjamin--Bona--Mahony equation on a star-shaped network with bounded edges.
Recently, in \cite{CaCaGa1}, the authors presented answers to some questions left open in \cite{CaCaGa} concerning the study of the cubic fourth order Schr\"odinger equation on a star graph structure $\mathcal{G}$. Precisely, they considered $\mathcal{G}$ composed by $N$ edges parameterized by half-lines $(0,+\infty)$ attached to a common vertex $\nu$. With this structure, that manuscript studied the well-posedness of a dispersive model on star graphs with three appropriate vertex conditions.
Regarding control theory and inverse problems, let us cite some previous works on star graphs. Ignat \textit{et al.} in \cite{Ignat2011} worked on the inverse problem for the heat equation and the Schr\"odinger equation on a tree. Later on, Baudouin and Yamamoto \cite{Baudouin} proposed a unified - and simpler - method to study the inverse problem of determining a coefficient. Results of stabilization and boundary controllability for the KdV equation on star-shaped graphs were also proved in \cite{AmCr,Cerpa,Cerpa1}. Finally, more recently, Duca in \cite{Duca,Duca1} showed the controllability of the bilinear Schrödinger equation defined on a compact graph. In both works, with different main goals, the author showed control properties for this system.
We caution that this is only a small sample of the extant work on graph structures for partial differential equations.
\subsection{Functional framework}
Let us define the graph ${\mathcal{G}}$ given by the central node $0$ and the edges $I_j$, for $j=1,2,\cdots, N$. Thus, for any function $f: {\mathcal{G}}\rightarrow \mathbb C$, we set $f_j= f|_{I_j},$
$$
L^2({\mathcal{G}}):= \bigoplus_{j=1}^{N}L^2(I_j):=\left\lbrace f: {\mathcal{G}} \rightarrow \mathbb C: f_j \in L^2(I_j), j \in \{1,2,\cdots, N\} \right\rbrace, \quad \|f\|_2=\left( \sum_{j=1}^N \|f_j\|^2_{L^2(I_j)}\right)^{1/2}
$$
and
$$
\left( f, g\right)_{L^2({\mathcal{G}})}:= {\text{Re }} \int_{-l_1}^0 f_1(x)\overline{g_1(x)}dx +{\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} f_j(x)\overline{g_j(x)}dx.
$$
Also, we need the following spaces
$$
H^m_0({\mathcal{G}}):= \bigoplus_{j=1}^{N}H^m_0(I_j):=\left\lbrace f: {\mathcal{G}} \rightarrow \mathbb C: f_j \in H^m(I_j), j \in \{1,2,..., N\}\right\rbrace,
$$
where $\partial_x^i f_1(-l_1)=\partial_x^i f_j(l_j)=0, \ i \in \{1,2,..., m-1\}$, and $ f_1(0)=\alpha_j f_j(0), \ j \in \{2,..., N\}$, with
$$
\|f\|_{H^m_0({\mathcal{G}})}=\left( \sum_{j=1}^{N} \|f_j\|_{H^m(I_j)}^2\right)^{1/2},
$$
for $m\in {\mathbb N}$, with the natural inner product of $H^m_0(I_j)$. We often write,
$$
\int_{\mathcal{G}} f d x=\int_{-l_{1}}^{0} f_{1}(x) d x+\sum_{j=2}^{N} \int_{0}^{l_{j}} f_{j}(x) d x
$$
Then the inner products and the norms of the Hilbert spaces $L^2({\mathcal{G}})$ and $H^m_0 ({\mathcal{G}})$ are defined by
\begin{align*}
\langle f, g\rangle_{L^{2}(\mathcal{G})}={\text{Re }}\int_{\mathcal{G}} f(x) \overline{ g(x)} d x\quad \text { and } \quad \|f\|_{L^{2}(\mathcal{G})}^{2} &=\int_{\mathcal{G}}|f|^{2} d x,
\\
\langle f, g\rangle_{H_{0}^{m}(\mathcal{G})}={\text{Re }} \sum_{k\leq m}\int_{\mathcal{G}} \partial^k_x f(x) \overline{ \partial_x^k g(x)} d x \quad \text { and } \quad \|f\|_{H_{0}^{m}(\mathcal{G})}^{2} &=\sum_{k\leq m}\int_{\mathcal{G}}\left|\partial_x^k f\right|^{2} d x.
\end{align*}
We will denote by $H^{-s}({\mathcal{G}})$ the dual of $H^s_0({\mathcal{G}})$. By using the Poincaré inequality, it follows that
$$
\|f\|_{L^2({\mathcal{G}})}^2 \leq \frac{L^2}{\pi^2} \|\partial_x f\|_{L^2({\mathcal{G}})}^2, \quad \forall f \in H^1_0({\mathcal{G}}),
$$
where $L=\max_{j=1,2,...,N}\left\lbrace l_j \right\rbrace.$ Thus, we have that
\begin{equation}\label{poincare}
\sum_{k=1 }^m \|\partial_x^k f\|_{L^2({\mathcal{G}})}^2 \leq \|f\|_{H_{0}^{m}(\mathcal{G})}^{2} \leq \left( \frac{L^2}{\pi^2}+1\right)\sum_{k=1 }^m \|\partial_x^k f\|_{L^2({\mathcal{G}})}^2.
\end{equation}
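Indeed, writing $\|f\|_{H_{0}^{m}(\mathcal{G})}^{2}=\|f\|_{L^{2}(\mathcal{G})}^{2}+\sum_{k=1}^{m}\|\partial_{x}^{k}f\|_{L^{2}(\mathcal{G})}^{2}$ and estimating the zero-order term by the Poincaré inequality above,
$$
\|f\|_{L^{2}(\mathcal{G})}^{2}\leq \frac{L^{2}}{\pi^{2}}\|\partial_{x}f\|_{L^{2}(\mathcal{G})}^{2}\leq \frac{L^{2}}{\pi^{2}}\sum_{k=1}^{m}\|\partial_{x}^{k}f\|_{L^{2}(\mathcal{G})}^{2},
$$
both estimates in \eqref{poincare} follow at once.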
\subsection{Setting of the problem and main result} Let us now present the problem that we will study in this manuscript. Due to the results presented in the work \cite{CaCaGa1}, it is natural to ask what happens with the control properties for a linear Schrödinger type system with mixed dispersion on a compact graph structure $\mathcal{G}$ of $(N+1)$ edges $e_{j}$ (where $N \in \mathbb{N}^{*}$), of lengths $l_{j}>0$, $j \in\{1,\dots, N+1\}$, connected at one vertex that we assume to be 0 for all the edges. Precisely, we assume that the first edge $e_{1}$ is parametrized on the interval $I_{1}:=\left(-l_{1}, 0\right)$ and the $N$ other edges $e_{j}$ are parametrized on the interval $I_{j}:=\left(0, l_{j}\right)$. On each edge we pose a linear biharmonic NLS equation (Bi-NLS). On the first edge $(j=1)$ we put no control and on the other edges $(j=2, \cdots, N+1)$ we consider Neumann boundary controls (see Fig. \ref{control}).
\begin{figure}
\caption{A compact graph with $N+1$ edges}
\end{figure}
Thus, in this work, we consider the following system
\begin{equation}\label{graph}
\begin{cases}
i\partial_t u_j +\partial_x^2 u_j - \partial_x^4 u_j =0, & (x,t)\in I_j \times (0,T),\ j=1,2, ..., N\\
u_j(x,0)=u_{j0}(x), & x\in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with appropriate boundary conditions as follows
\begin{equation}\label{bound}
\left\lbrace
\begin{split}
&u_1(-l_1,t )=\partial_x u_1(-l_1,t)=0,\\
& u_j(l_j,t)=0, & j \in \{2,3,\cdots, N\},\\
& \partial_x u_j(l_j,t)= h_j(t), &j \in \{2,3,\cdots, N\}, \\
& u_1(0,t)=\alpha_j u_j(0,t), & j \in \{2,3,\cdots, N\},\\
&\partial_x u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x u_j(0,t)}{\alpha_j} \\
& \partial_x^2 u_1(0,t)= \alpha_j \partial_x^2 u_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Here $u_{j}(x, t)$ is the amplitude of propagation of intense laser beams on the edge $e_{j}$ at position $x \in I_{j}$ at time $t, h_{j}=h_{j}(t)$ is the control on the edge $e_{j}\ (j \in\{2, \cdots, N+1\})$ belonging to $L^{2}(0, T)$ and $\alpha_{j} \ (j \in\{2, \cdots, N+1\})$ is a positive constant. The initial data $u_{j 0}$ are supposed to be $H^{-2}(\mathcal{G})$ functions of the space variable.
With this framework in hand, our work deals with the following classical control problem.
\noindent\textbf{Boundary controllability problem:}\textit{ For any $T > 0$, $l_j > 0$, $u_{j0}\in H^{-2}(\mathcal{G})$ and $u_{T}\in H^{-2}(\mathcal{G})$, is it possible to find $N$ Neumann boundary controls $h_j\in L^2(0,T)$ such that the solution $u$ of \eqref{graph}-\eqref{bound} on the tree shaped network of $N+1$ edges (see Fig. \ref{control}) satisfies
\begin{equation}\label{ect}
u(\cdot,0) = u_{0}(\cdot) \quad \text{ and } \quad u(\cdot,T)=u_{T}(\cdot) ?
\end{equation}}
The answer for that question is given by the following result.
\begin{theorem}\label{Th_Control_N}
For $T>0$ and $l_1, l_2, \cdots, l_{N}$ positive real numbers, let us suppose that
\begin{equation}\label{condition1}
T > \sqrt{ \frac{ \overline{L} (L^2+\pi^2)}{\pi^2\varepsilon(1- \overline{L} \varepsilon)}}:=T_{min}
\end{equation}
where
\begin{equation}\label{condition2}
L=\max \left\lbrace l_1, l_2 \cdots, l_{N}\right\rbrace, \quad \overline{L} =\max \left\lbrace 2l_1, \max \left\lbrace l_2,l_3, \cdots, l_N \right\rbrace + l_1\right\rbrace,
\end{equation}
and
\begin{equation}\label{condition3}
0<\varepsilon < \frac{1}{ \overline{L} }.
\end{equation}
Additionally, suppose that the coefficients of the boundary conditions \eqref{bound} satisfy
\begin{equation}\label{putaquepario1}
\sum_{j=2}^{N}\frac{1}{\alpha^2_j}=1 \quad \text{and} \quad \frac{1}{\alpha^2_j}\leq \frac{1}{N-1}.
\end{equation}
Then for any $u_{0},\ u_{T}\in H^{-2}(\mathcal{G})$, there exists a control $h_j(t)\in L^2(0,T)$, for $j=2,...,N$, such that the unique solution $u(x,t)\in C([0,T];H^{-2}(\mathcal{G}))$ of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, satisfies \eqref{ect}.
\end{theorem}
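\begin{remark}
Let us illustrate the assumptions of Theorem \ref{Th_Control_N} with two elementary observations; they are not needed in the proof, but may help the reader.
\begin{itemize}
\item[1.] A choice of coefficients satisfying \eqref{putaquepario1} is $\alpha_j=\sqrt{N-1}$ for every $j\in\{2,\dots,N\}$, since then $\sum_{j=2}^{N}\frac{1}{\alpha_j^2}=\frac{N-1}{N-1}=1$ and $\frac{1}{\alpha_j^2}=\frac{1}{N-1}$. In fact, the two conditions in \eqref{putaquepario1} together force $\alpha_j^2=N-1$ for every $j$.
\item[2.] The dependence of $T_{min}$ on the free parameter $\varepsilon$ can be optimized: the product $\varepsilon(1-\overline{L}\varepsilon)$ attains its maximum $\frac{1}{4\overline{L}}$ at $\varepsilon=\frac{1}{2\overline{L}}$, which is admissible by \eqref{condition3}, so the least restrictive control time allowed by \eqref{condition1} is
$$
T_{min}\Big|_{\varepsilon=\frac{1}{2\overline{L}}}=\frac{2\overline{L}}{\pi}\sqrt{L^2+\pi^2}.
$$
\end{itemize}
\end{remark}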
\subsection{Outline and structure of the paper} In this article we prove the exact controllability of the Schrödinger type system with mixed dispersion on a star graph structure $\mathcal{G}$ of $(N+1)$ edges $e_{j}$ of lengths $l_{j}>0$, $j \in\{1,\dots, N+1\}$, connected at one vertex that we assume to be $0$ for all the edges (see Fig. \ref{control}). Precisely, we are able to prove that solutions of the adjoint system associated to \eqref{graph}, with boundary conditions \eqref{bound}, satisfy conservation laws in $L^2(\mathcal{G})$, $H^1(\mathcal{G})$ and $H^2(\mathcal{G})$ (see Appendix \ref{Sec4}), which are proved \textit{via} Morawetz multipliers. With this in hand, an \textit{observability inequality} associated with the solution of the adjoint system is proved. Here, the relation $T>T_{min}$, where $$T_{min}=\sqrt{ \frac{ \overline{L} (L^2+\pi^2)}{\pi^2\varepsilon(1- \overline{L} \varepsilon)}},$$ is crucial to prove the result.
\begin{remark}Let us give some remarks in order.
\begin{itemize}
\item[1.] It is important to point out that the transmission conditions at the central node 0 are inspired by the recent papers \cite{CaCaGa,Cav,GM,MNS}. It is not the only possible choice, and the main motivation is that they guarantee uniqueness of the regular solutions of the $(Bi-NLS)$ equation linearized around 0.
\item[2.] An important fact is that we are able to deal with the mixed dispersion in the system \eqref{graph}, that is, with Laplacian and bi-Laplacian terms in the system. The Laplacian term gives us an extra difficulty in dealing with the adjoint system associated to \eqref{graph}. Precisely, if we remove the term $\partial^2_x$ in \eqref{graph} and deal only with the fourth order Schrödinger equation with the boundary conditions \eqref{bound}, we can use two different constants $\alpha_j$ and $\beta_j$ in the traces of the boundary conditions.
\item[3.] We are able to control $N+1$ edges with $N$ boundary controls; however, we do not have sharp conditions on the lengths $l_j$. Moreover, the time of control $T>T_{min}$ is not sharp, but we get an explicit constant in the observability inequality. In this way, these two problems remain open.
\end{itemize}
\end{remark}
To end our introduction, we present the outline of the manuscript. Section \ref{Sec2} is concerned with the well-posedness results for the system \eqref{graph}-\eqref{bound} and its adjoint. In Section \ref{Sec3}, we give a rigorous proof of the observability inequality, and with this in hand, we are able to prove Theorem \ref{Th_Control_N}. In Appendix \ref{Sec4} we present key lemmas, obtained via Morawetz multipliers, which are crucial to prove the main result of the paper.
\section{Well-posedness results}\label{Sec2}
We first study the homogeneous linear system (without control) and the adjoint system associated to \eqref{graph}-\eqref{bound}. After that, the linear biharmonic Schrödinger equation with regular initial data and controls is studied.
\subsection{Study of the linear system} In this section we consider the following linear model
\begin{equation}\label{graph_1}
\begin{cases}
i\partial_t u_j +\partial_x^2 u_j - \partial_x^4 u_j =0, & (t,x)\in (0,T) \times I_j,\ j=1,2, ..., N\\
u_j(0,x)=u_{j0}(x), & x \in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with the boundary conditions
\begin{equation}\label{bound1}
\left\lbrace
\begin{split}
&u_1(-l_1,t )=\partial_x u_1(-l_1,t)=0\\
&u_j(l_j,t)= \partial_x u_j(l_j,t)=0, & j \in \{2,3,\cdots, N\}, \\
& u_1(0,t)=\alpha_j u_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x u_j(0,t)}{\alpha_j} \\
& \partial_x^2 u_1(0,t)= \alpha_j \partial_x^2 u_j(0,t), &j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Additionally, from now on we use the notation introduced in the introduction of the manuscript.
Let us consider the differential operator $$A: u=\left(u_{1}, \cdots, u_{N}\right) \in \mathcal{D}(A) \subset L^{2}(\mathcal{G}) \mapsto i \partial_x^2 u - i \partial_x^4 u \in L^{2}(\mathcal{G})$$
with domain defined by
\begin{equation}\label{b1}
D(A):= \left\lbrace u \in \prod_{j=1}^N H^4(I_j) \cap V : \partial_x u_1(0)=\sum_{j=2}^{N}\frac{\partial_x u_j(0)}{\alpha_j}, \,\, \partial_x^3 u_1(0)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0)}{\alpha_j} \right\rbrace,
\end{equation}
where
\begin{multline}\label{b2}
V=\left\lbrace u \in \prod_{j=1}^N H^2(I_j) : u_1(-l_1 )=\partial_x u_1(-l_1)=u_j(l_j)=\partial_x u_j(l_j)=0, \right. \\
\left. u_1(0)=\alpha_j u_j(0), \ \partial_x^2 u_1(0)= \alpha_j \partial_x^2 u_j(0), \quad j \in \{2,3,\cdots, N\} \right\rbrace.
\end{multline}
Then the homogeneous linear system \eqref{graph_1}-\eqref{bound1} takes the form
\begin{equation}\label{gen}
\begin{cases}
u_t (t)= Au(t), & t>0 \\
u(0)=u_0 \in L^2({\mathcal{G}}).
\end{cases}
\end{equation}
The following proposition guarantees some properties for the operator $A$. Precisely, the following holds.
\begin{proposition}\label{selfadjoint}
The operator $A:D(A)\subset L^2(\mathcal G)\rightarrow L^2(\mathcal G)$ is self-adjoint in $L^2({\mathcal{G}})$.
\end{proposition}
\begin{proof}
Let us first prove that $A$ is a symmetric operator. To do this, let $u$ and $v$ be in $D(A)$. Then, by approximating $u$ and $v$ by $C^4(\mathcal{G})$ functions, integrating by parts and using the boundary conditions \eqref{b1} and \eqref{b2}, we have that
\begin{equation*}
\begin{split}
(Au,v)_{L^{2}(\mathcal{G})}&
= {\text{Re }}\int_{-l_1}^0 (Au)_1(x)\overline{v_1(x)}dx + {\text{Re }} \sum_{j=2}^{N} \int_{0}^{l_j} (Au)_j(x)\overline{v_j(x)}dx\\
&={\text{Re }} \int_{-l_1}^0 (i \partial_x^2 u_1 - i \partial_x^4 u_1)\overline{v_1(x)}dx + {\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} (i \partial_x^2 u_j - i \partial_x^4 u_j)(x)\overline{v_j(x)}dx\\
&={\text{Re }} \int_{-l_1}^0 {u_1} (i \partial_x^2 \overline{v}_1 - i \partial_x^4 \overline{v}_1)dx +{\text{Re }} \sum_{j=2}^{N}\int_{0}^{l_j} {u_j} (i \partial_x^2 \overline{v}_j - i \partial_x^4 \overline{v}_j)dx\\
&\quad \quad + {\text{Re }} i\sum_{j=1}^{N} \left[\partial_xu_j\overline{v}_j-u_j\partial_x\overline{v}_j-\partial_x^3u_j\overline{v}_j+\partial_x^2u_j\partial_x\overline{v}_j-\partial_xu_j\partial_x^2\overline{v}_j+u_j\partial_x^3\overline{v}_j\right]_{\partial\mathcal{G}}\\
&=(u,Av)_{L^2(\mathcal{G})},\ \forall\ u,v\in D(A),
\end{split}
\end{equation*}
that is, $A$ is symmetric. It is not hard to see that $D(A^*)=D(A)$, so $A$ is self-adjoint. This finishes the proof.
\end{proof}
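To illustrate how the vertex conditions produce the cancellation of the boundary terms above, consider for instance the contribution of $u\,\partial_x^3\overline{v}$. Since $u_1(-l_1)=0$ and $u_j(l_j)=0$ for $j\in\{2,\dots,N\}$, only the traces at the common vertex survive, and the conditions $u_1(0)=\alpha_j u_j(0)$ and $\partial_x^3 \overline{v}_1(0)=\sum_{j=2}^{N}\frac{\partial_x^3 \overline{v}_j(0)}{\alpha_j}$ give
$$
u_1(0)\,\partial_x^3\overline{v}_1(0)-\sum_{j=2}^{N}u_j(0)\,\partial_x^3\overline{v}_j(0)
=\sum_{j=2}^{N}\frac{u_1(0)}{\alpha_j}\,\partial_x^3\overline{v}_j(0)-\sum_{j=2}^{N}u_j(0)\,\partial_x^3\overline{v}_j(0)=0.
$$
The remaining boundary terms cancel in the same way, using in each case the appropriate pair of conditions in \eqref{b1}-\eqref{b2}.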
By using semigroup theory, $A$ generates a strongly continuous unitary group on $L^2({\mathcal{G}})$, and for any $u_0=(u_{10}, u_{20},..., u_{N0}) \in L^2({\mathcal{G}})$ there exists a unique mild solution $u \in C([0,T];L^2({\mathcal{G}}))$ of \eqref{gen}. Furthermore, if $u_0 \in D(A)$, then \eqref{gen} has a classical solution satisfying $u \in C([0,T];D(A)) \cap C^1([0,T];L^2({\mathcal{G}}))$. Summarizing, we have the following result.
\begin{proposition}\label{mild}
Let $u_0=(u_{10}, u_{20},..., u_{N0})\in H_0^k(\mathcal G)$, for $k\in\{0,1,2,3,4\}$. Then the linear system \eqref{graph_1} with boundary conditions \eqref{bound1} has a unique solution $u$ in the space $C([0,T];H_0^k(\mathcal G))$. In particular, for $k=4$ we get a classical solution and for the other cases ($k\in\{0,1,2,3\}$) the solution is a mild solution.
\end{proposition}
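In particular, since $A$ generates a unitary group, the mild solution of \eqref{gen} can be written as
$$
u(t)=e^{tA}u_0, \qquad t\in[0,T],
$$
so that $\|u(t)\|_{L^2(\mathcal{G})}=\|u_0\|_{L^2(\mathcal{G})}$ for every $t\in[0,T]$, in agreement with the conservation law \eqref{L2_c} proved in Appendix \ref{Sec4}.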
Now, we deal with the adjoint system associated to \eqref{graph_1}-\eqref{bound1}. As the operator $A=i\partial_x^2-i\partial_x^4$ is self-adjoint (see Proposition \ref{selfadjoint}), the adjoint system is defined as follows
\begin{equation}\label{adj_graph_1a}
\begin{cases}
i\partial_t v_j +\partial_x^2 v_j - \partial_x^4 v_j =0, & (t,x)\in (0,T) \times I_j,\ j=1,2, ..., N\\
v_j(T,x)=v_{jT}(x), & x \in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with the boundary conditions
\begin{equation}\label{adj_bound1}
\left\lbrace
\begin{split}
&v_1(-l_1,t )=\partial_x v_1(-l_1,t)=0\\
&v_j(l_j,t)=\partial_x v_j(l_j,t)=0, & j \in \{2,3,\cdots, N\}, \\
& v_1(0,t)=\alpha_j v_j(0,t), & j \in \{2,3,\cdots, N\}, \\
&\partial_x v_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x v_j(0,t)}{\alpha_j} \\
& \partial_x^2 v_1(0,t)= \alpha_j \partial_x^2 v_j(0,t), &j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 v_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 v_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
Also, as $A=A^*$ we have that $D(A^{*})=D(A)$, and the proof of well-posedness is the same as in Proposition \ref{mild}.
\section{Exact boundary controllability}\label{Sec3}
This section is devoted to the analysis of the exact controllability property for the
linear system corresponding to \eqref{graph} with boundary control \eqref{bound}. Here, we will present the answer for the control problem presented in the introduction of this work. First, let us present two definitions that will be important for the rest of the work.
\begin{definition}
Let $T > 0$. The system \eqref{graph}-\eqref{bound} is exactly controllable in time $T$ if for any initial and final data $u_0, u_T\in H^{-2}(\mathcal{G})$ there exist control functions $h_j\in L^2(0,T)$, $j\in \{2,3,\cdots, N\}$, such that the solution $u$ of \eqref{graph}-\eqref{bound} on the tree shaped network of $N+1$ edges satisfies \eqref{ect}. In addition, when $u_T=0$ we say that the system \eqref{graph}-\eqref{bound} is null controllable in time $T$.
\end{definition}
From now on, we consider the transposition solution to \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, which is given by the following definition.
\begin{definition}
We say that $u\in L^{\infty}(0,T;H^{-2}(\mathcal{G}))$ is a solution of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$,
in the transposition sense if and only if
$$
\sum_{j=1}^{N}\left(\int_0^T\left\langle u_j(t),f_j(t) \right\rangle dt+i\langle u_j(0),v_j(0)\rangle\right)+\sum_{j=2}^{N}\left(\int_{\mathcal{G}}h_j\partial^2_x\overline{v_j}dx\right)=0,
$$
for every $f\in L^2(0,T;H^2_0(\mathcal{G}))$, where $v(x,t)$ is the mild solution to the problem \eqref{adj_graph_1a}-\eqref{adj_bound1} in the space $C([0,T];H_0^2(\mathcal{G}))$, with $v(x,T)=0$, obtained in Proposition \ref{mild}. Here, $\left\langle \cdot,\cdot \right\rangle$ means the duality between the spaces $H^{-2}(\mathcal{G})$ and $H^2_0(\mathcal{G})$.
\end{definition}
With this in hand, the following lemma gives an equivalent condition for the exact controllability property.
\begin{lemma}\label{L_CECP_1}
Let $u_T\in H^{-2}(\mathcal{G})$. Then, there exist controls $h_j(t)\in L^2(0,T)$, for $j=2,...,N$, such that the solution $u(x,t)$ of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, satisfies \eqref{ect} if and only if
\begin{equation}\label{CECP_1}
i\sum_{j=1}^N\int_{I_j}\langle u_j(T),\overline{v}_j(T)\rangle dx=\sum_{j=2}^{N}\int_0^Th_j(t)\partial^2_xv_j(l_j,t)dt,
\end{equation}
where $v$ is solution of \eqref{adj_graph_1a}-\eqref{adj_bound1}, with initial data $v(x,T)=v(T)$.
\end{lemma}
\begin{proof}
Relation \eqref{CECP_1} is obtained by multiplying \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, by the solution $v$ of \eqref{adj_graph_1a}-\eqref{adj_bound1} and integrating by parts on $\mathcal{G}\times(0,T)$.
\end{proof}
\subsection{Observability inequality} A fundamental role will be played by the following observability result, which together with Lemma \ref{L_CECP_1} gives us Theorem \ref{Th_Control_N}.
\begin{proposition}\label{Oinequality_1}
Let $l_j>0$ for any $j\in \{1,\cdots,N+1\}$ satisfying \eqref{condition1} and assume
that \eqref{putaquepario1} holds. There exists a positive constant $T_{min}$ such that if $T>T_{min}$, then the following inequality holds
\begin{equation}\label{OI_2}
\|v(x,T)\|_{H^2_0(\mathcal{G})}^2\leq C\sum_{j=2}^{N}\|\partial^2_xv_j(l_j,t)\|^2_{L^2(0,T)}
\end{equation}
for any $v=\left(v_{1}, v_{2}, \cdots, v_{N+1}\right)$ solution of \eqref{adj_graph_1a}-\eqref{adj_bound1} with final condition $v_{T}=\left(v_{1}^{T}, v_{2}^{T}, \cdots, v_{N+1}^{T}\right) \in H^2_0(\mathcal{G})$ and for a positive constant $C >0$.
\end{proposition}
\begin{proof}
Firstly, taking $f=0$ and choosing $q(x,t)=1$ in \eqref{identity}, we get that
\begin{multline*}
\begin{split}
-\frac{Im}{2}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{Im}{2}\left.\int_0^Tv\overline{\partial_tv}\right]_{\partial\mathcal{G}}dt&+\frac{1}{2}\left.\int_0^T|\partial_xv|^2\right]_{\partial\mathcal{G}}dt +\frac{1}{2}\left.\int_0^T|\partial_x^2v|^2\right]_{\partial\mathcal{G}}dt \\&-Re\int_0^T\left[ \partial_x^3 v\overline{\partial_xv}\right]_{\partial\mathcal{G}}dt=0,
\end{split}
\end{multline*}
or equivalently,
\begin{multline*}
\begin{split}
0=&-\frac{Im}{2}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{Im}{2}\int_0^T \left( \left[ v_1\overline{\partial_t v}_1\right]_{-l_1}^0 + \left[ \sum_{j=2}^{N} v_j\overline{\partial_t v}_j\right]_{0}^{l_j}\right) dt \\
& +\frac{1}{2}\int_0^T\left( \left[ |\partial_x v_1|^2 \right]_{-l_1}^0 + \left[ \sum_{j=2}^{N} |\partial_x v_j|^2 \right]_{0}^{l_j} \right)dt
+\frac{1}{2}\int_0^T\left( \left[ |\partial_x^2 v_1|^2 \right]_{-l_1}^0 + \left[ \sum_{j=2}^{N} |\partial_x^2 v_j|^2 \right]_{0}^{l_j} \right)dt\\
& -Re\int_0^T\left(\left[ \partial_x^3 v_1\overline{\partial_x v}_1\right]_{-l_1}^0 + \left[ \sum_{j=2}^{N} \partial_x^3 v_j\overline{ \partial_x v}_j\right]_{0}^{l_j}\right)dt.
\end{split}
\end{multline*}
By using the boundary conditions \eqref{adj_bound1}, it follows that
\begin{multline*}
\begin{split}
0=&-\frac{Im}{2}\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx+\frac{Im}{2}\int_0^T \left( v_1(0)\overline{\partial_t v}_1(0) - \sum_{j=2}^{N} v_j(0)\overline{\partial_t v}_j(0)\right) dt \\
&+\frac{1}{2}\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt \\
&+\frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2 - |\partial_x^2 v_1(-l_1)|^2 + \sum_{j=2}^{N} \left( |\partial_x^2 v_j(l_j)|^2 - |\partial_x^2 v_j(0)|^2 \right) \right)dt \\
&-Re\int_0^T\left(\partial_x^3 v_1 (0)\overline{\partial_x v}_1(0) - \sum_{j=2}^{N} \partial_x^3 v_j(0)\overline{ \partial_x v}_j(0) \right)dt.
\end{split}
\end{multline*}
Once again, due to the boundary conditions \eqref{adj_bound1} and relations \eqref{putaquepario1}, we have that
\begin{equation}\label{perfect}
\begin{split}
\int_0^T |\partial_x^2&v_1(-l_1)|^2 dt =-Im\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\
&+\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt
+\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} |\partial_x^2 v_j(0)|^2 \right)dt.
\end{split}
\end{equation}
Thanks to relations \eqref{putaquepario1}, we deduce that
\begin{multline*}
\int_0^T\left( |\partial_x v_1(0)|^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt = \frac{1}{2}\int_0^T\left( \left| \sum_{j=2}^{N} \frac{\partial_x v_j(0)}{\alpha_j}\right |^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt \\
\leq \frac{1}{2}\int_0^T\left( (N-1) \sum_{j=2}^{N} \left|\frac{\partial_x v_j(0)}{\alpha_j}\right |^2- \sum_{j=2}^{N} |\partial_x v_j(0)|^2 \right)dt = \frac{1}{2}\int_0^T \sum_{j=2}^{N} |\partial_x v_j(0)|^2\left( \frac{N-1}{\alpha_j^2}-1 \right)dt \leq 0
\end{multline*}
and
\begin{align*}
\frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} |\partial_x^2 v_j(0)|^2 \right)dt &= \frac{1}{2}\int_0^T\left( |\partial_x^2 v_1(0)|^2- \sum_{j=2}^{N} \left| \frac{\partial_x^2 v_1(0)}{\alpha_j} \right|^2 \right)dt \\
&= \frac{1}{2}\int_0^T |\partial_x^2 v_1(0)|^2\left( 1- \sum_{j=2}^{N} \frac{1}{\alpha_j^2} \right)dt= 0.
\end{align*}
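In the first of these two estimates we used the Cauchy--Schwarz inequality $\big|\sum_{j=2}^{N}a_j\big|^2\leq (N-1)\sum_{j=2}^{N}|a_j|^2$, with $a_j=\frac{\partial_x v_j(0)}{\alpha_j}$, together with the second condition in \eqref{putaquepario1}, which guarantees that $\frac{N-1}{\alpha_j^2}-1\leq 0$ for every $j$; in the second one we used the trace condition $\partial_x^2 v_j(0,t)=\frac{\partial_x^2 v_1(0,t)}{\alpha_j}$ from \eqref{adj_bound1} together with the first condition in \eqref{putaquepario1}.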
Thus, previous calculations ensure that
\begin{equation}\label{newmult_1}
\int_0^T |\partial_x^2 v_1(-l_1)|^2 dt \leq Im\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt .
\end{equation}
Now, choosing $q(x,t)=x$ in \eqref{identity}, by using the boundary conditions \eqref{adj_bound1} and taking $f=0$, we get
\begin{align*}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt &=\left.\int_0^T|\partial^2_xv|^2x\right]_{\partial\mathcal{G}}dt-Im\left.\int_{\mathcal{G}}v\overline{\partial^2_xv}\right]_0^Tdx \\
&=l_1 \int_0^T |\partial_x^2 v_1(-l_1)|^2 dt + \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^{2}l_jdt-Im\left.\int_{\mathcal{G}}v\overline{\partial_xv}x\right]_0^Tdx.
\end{align*}
From inequality \eqref{newmult_1}, it yields that
\begin{multline*}
\begin{split}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt \leq& \ l_1Im\left.\int_{\mathcal{G}}v\overline{\partial_xv}\right]_0^Tdx +l_1\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\
& + \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^{2}l_jdt-Im\left.\int_{\mathcal{G}}v\overline{\partial_xv}x\right]_0^Tdx,
\end{split}
\end{multline*}
hence
\begin{equation}\label{OI1_b_1}
2\int_Q|\partial_xv|^2dxdt+4\int_Q|\partial_x^2v|^2dxdt\leq Im\left.\int_{\mathcal{G}}v\overline{\partial_xv} (l_1-x)\right]_0^Tdx +(\overline{L}+l_1)\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt ,
\end{equation}
where $\overline{L}$ is defined by \eqref{condition2}.
Now, we are in position to prove \eqref{OI_2}. Thanks to \eqref{OI1_b_1}, it follows that
\begin{multline}\label{OI1_c_1}
\begin{split}
2\int_Q(|\partial_xv|^2+|\partial_x^2v|^2)dxdt\leq&\ (\overline{L}+l_1)\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt + \left|\left.\int_{\mathcal{G}}v\overline{\partial_xv} (l_1-x)\right]_0^Tdx \right| \\
\leq & \ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt + \int_{\mathcal{G}}|v(T)||\overline{\partial_xv(T)}| |l_1-x|dx \\
&+ \int_{\mathcal{G}}|v(0)||\overline{\partial_xv(0)}| |l_1-x|dx .
\end{split}
\end{multline}
Since the conservation law \eqref{H12_c} holds for the solutions of \eqref{adj_graph_1aa}-\eqref{adj_bound1aa} with $f=0$, using it on the left hand side of \eqref{OI1_c_1} yields that
\begin{multline}\label{OI1_d_1}
\begin{split}
2\int_0^T(\|\partial_xv(T)\|^2_{L^2(\mathcal{G})}+&\|\partial^2_xv(T)\|^2_{L^2(\mathcal{G})})dt
\leq \ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\ & + \overline{L} \left( \int_{\mathcal{G}}|v(T)||\overline{\partial_xv(T)}| dx + \int_{\mathcal{G}}|v(0)||\overline{\partial_xv(0)}| dx\right).
\end{split}
\end{multline}
Applying Young's inequality in \eqref{OI1_d_1}, with $\varepsilon >0$ satisfying \eqref{condition3}, we deduce that
\begin{multline}\label{OI1_d_1_2}
2T \left(\|\partial_xv(T)\|^2_{L^2(\mathcal{G})}+\|\partial_x^2v(T)\|^2_{L^2(\mathcal{G})}\right)
\leq \ L \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt \\ + \overline{L} \left( \frac{1}{\varepsilon T} \int_{\mathcal{G}}|v(T)|^2 dx + \varepsilon T \int_{\mathcal{G}} |\overline{\partial_xv(T)}|^2 dx + \frac{1}{\varepsilon T} \int_{\mathcal{G}}|v(0)|^2 dx + \varepsilon T \int_{\mathcal{G}}|\overline{\partial_xv(0)}|^2 dx\right).
\end{multline}
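Here, for each of the two products on the right hand side of \eqref{OI1_d_1}, we have used the elementary Young inequality
$$
|v||\partial_x v|\leq \frac{1}{2\varepsilon T}|v|^2+\frac{\varepsilon T}{2}|\partial_x v|^2\leq \frac{1}{\varepsilon T}|v|^2+\varepsilon T|\partial_x v|^2,
$$
valid for any $\varepsilon, T>0$.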
Therefore, due to \eqref{OI1_d_1_2} and again using the conservation law, we have the following estimate
\begin{equation*}
2T(1-\overline{L}\varepsilon ) \left( \|\partial_xv(T)\|^2_{L^2(\mathcal{G})}+\|\partial_x^2v(T)\|^2_{L^2(\mathcal{G})}\right)
\leq L \int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt +\frac{2\overline{L} }{\varepsilon T} \|v(T)\|^2_{L^2(\mathcal{G})}.
\end{equation*}
From relation \eqref{poincare}, we have that
\begin{multline*}
\begin{split}
2(1-\overline{L}\epsilon )T \left(|\!|\!|rm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+|\!|\!|rm{\partial_x^2v(T)}^2_{L^2(\mathcal{G})}\right)\leq&\ L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt\\ & +\frac{2 M }{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right) (|\!|\!|rm{\partial_xv(T)}^2_{L^2(\mathcal{G})}+|\!|\!|rm{\partial^2_xv(T)}^2_{L^2(\mathcal{G})}).
\end{split}
\end{multline*}
Equivalently, we get that
\begin{equation*}
2\overline{L}\left[ \left( \frac{1}{ \overline{L}} -\varepsilon \right) T - \frac{1}{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right)\right] \left(\|\partial_xv(T)\|^2_{L^2(\mathcal{G})}+\|\partial_x^2v(T)\|^2_{L^2(\mathcal{G})} \right)
\leq L\int_0^T \sum_{j=2}^{N} |\partial_x^2 v_j(l_j)|^2 dt.
\end{equation*}
Note that the conditions \eqref{condition1}, \eqref{condition2} and \eqref{condition3} imply that
\begin{equation*}
K=\left[ \left( \frac{1}{\overline{L}} -\varepsilon \right) T - \frac{1}{\varepsilon T} \left( \frac{L^2}{\pi^2} +1 \right)\right] >0.
\end{equation*}
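Indeed, since \eqref{condition3} gives $\frac{1}{\overline{L}}-\varepsilon>0$, the positivity of $K$ is equivalent to
$$
T^2>\frac{\frac{L^2}{\pi^2}+1}{\varepsilon\left(\frac{1}{\overline{L}}-\varepsilon\right)}=\frac{\overline{L}\,(L^2+\pi^2)}{\pi^2\varepsilon(1-\overline{L}\varepsilon)}=T_{min}^2,
$$
which is exactly condition \eqref{condition1}.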
Thus, again using \eqref{poincare}, we achieve the observability inequality \eqref{OI_2}.
\end{proof}
\subsection{Proof of Theorem \ref{Th_Control_N}}
Notice that Theorem \ref{Th_Control_N} is a consequence of the observability inequality \eqref{OI_2}. In fact, without loss of generality, pick $u_0=0$ on $\mathcal{G}$. Define $\Gamma$ to be the linear and bounded map
$\Gamma:H^2_0(\mathcal{G})\longrightarrow H^{-2}(\mathcal{G})$
by
\begin{equation}
\Gamma(v(\cdot,T))=\langle u(\cdot,T),\overline{v}(\cdot,T)\rangle,
\end{equation}
where $v=v(x,t)$ is the solution of \eqref{adj_graph_1a}-\eqref{adj_bound1}, with initial data $v(x,T)=v(T)$, $\left\langle \cdot,\cdot \right\rangle$ means the duality between the spaces $H^{-2}(\mathcal{G})$ and $H^2_0(\mathcal{G})$, $u=u(x,t)$ is the solution of \eqref{graph}-\eqref{bound}, with $h_1(t)=0$, and
\begin{equation}\label{control2a}
h_j(t)=\partial^2_xv_j(l_j,t),
\end{equation}
for $j=2,\cdots,N$.
According to Lemma \ref{L_CECP_1} and Proposition \ref{Oinequality_1}, we obtain
\begin{equation*}
\langle \Gamma(v(T)),v(T)\rangle =\sum_{j=2}^{N}\|h_j(t)\|^2_{L^2(0,T)}\geq C^{-1}\|v(T)\|^2_{H^2_0(\mathcal{G})}.
\end{equation*}
Thus, by the Lax–Milgram theorem, $\Gamma$ is invertible. Consequently, for a given $u(T)\in H^{-2}(\mathcal{G})$, we can define $v(T):=\Gamma^{-1}(u(T))$ and solve \eqref{adj_graph_1a}-\eqref{adj_bound1} with this datum. Then, if $h_j(t)$, for $j=2,\cdots,N$, is defined by \eqref{control2a}, the corresponding solution $u$ of the system \eqref{graph}-\eqref{bound} satisfies \eqref{ect}, and so Theorem \ref{Th_Control_N} holds.
\appendix
\section{Auxiliary lemmas}\label{Sec4}
\subsection{Morawetz multipliers}
This section is dedicated to establishing fundamental identities by the multipliers method, which will be presented in two lemmas. For $f\in L^2(0,T;H^2_0(\mathcal{G}))$, let us consider the following system
\begin{equation}\label{adj_graph_1aa}
\begin{cases}
i\partial_t u_j +\partial_x^2 u_j - \partial_x^4 u_j =f, & (t,x)\in (0,T) \times I_j,\ j=1,2, ..., N\\
u_j(0,x)=u_{j0}(x), & x\in I_j,\ j=1,2, ..., N\\
\end{cases}
\end{equation}
with the boundary conditions
\begin{equation}\label{adj_bound1aa}
\left\lbrace
\begin{split}
&u_1(-l_1,t )=\partial_x u_1(-l_1,t)=0\\
& u_j(l_j,t)= \partial_x u_j(l_j,t)=0, \quad j \in \{2,3,\cdots, N\}, \\
& u_1(0,t)=\alpha_j u_j(0,t), \quad j \in \{2,3,\cdots, N\}, \\
&\partial_x u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x u_j(0,t)}{\alpha_j} \\
& \partial_x^2 u_1(0,t)= \alpha_j \partial_x^2 u_j(0,t), \quad j \in \{2,3,\cdots, N\}, \\
&\partial_x^3 u_1(0,t)=\sum_{j=2}^{N}\frac{\partial_x^3 u_j(0,t)}{\alpha_j}.
\end{split}\right.
\end{equation}
The first lemma gives us an identity which will help us to prove the main result of this article.
\begin{lemma}\label{Id_Obs}
Let $q=q(x,t)\in C^4(\overline{\mathcal{G}}\times(0,T),\mathbb{R})$, with $\overline{\mathcal{G}}$ being the closure of $\mathcal{G}$. For every solution of \eqref{adj_graph_1aa}-\eqref{adj_bound1aa} with $f \in\mathcal{D}(\mathcal{G})$ and $u_0\in\mathcal{D}(\mathcal{G})$, the following identity holds:
\begin{equation}\label{identity}
\begin{split}
&\frac{Im}{2}\int_Qu\overline{\partial_xu}\partial_tqdxdt-\int_Q|\partial_xu|^2\partial_xqdxdt-2\int_Q|\partial_x^2u|^2\partial_xqdxdt-\frac{Re}{2}\int_{Q}\partial_xu\overline{u}\partial_x^2qdxdt\\
&+\frac{3}{2}\int_{Q}|\partial_xu|^2\partial_x^3qdxdt+\frac{Re}{2}\int_Q\partial_xu\overline{u}\partial_x^4qdxdt-\frac{Im}{2}\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx+\frac{1}{2}\left.\int_0^T|\partial_x^2u|^2q\right]_{\partial\mathcal{G}}dt\\
&+\frac{Im}{2}\left.\int_0^Tu\overline{\partial_tu}q\right]_{\partial\mathcal{G}}dt+\frac{1}{2}\left.\int_0^T|\partial_xu|^2q\right]_{\partial\mathcal{G}}dt+\frac{Re}{2}\left.\int_0^T\partial_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}} dt\\
&+\int_0^T\left[-|\partial_xu|^2\partial_x^2q+\frac{3}{2}Re(\partial_x^2u\overline{\partial_xu})\partial_xq-Re(\partial_x^3u\overline{\partial_xu})q\right]_{\partial\mathcal{G}}dt\\
&+\int_0^T\left[-\frac{Re}{2}(\partial_xu\overline{u})\partial_x^3q+\frac{Re}{2}(\partial_x^2u\overline{u})\partial_x^2q-\frac{Re}{2}(\partial_x^3u\overline{u})\partial_xq\right]_{\partial\mathcal{G}}dt=\int_Qf(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt,
\end{split}
\end{equation}
where $Q:=\mathcal{G}\times[0,T]$ and $\partial\mathcal{G}$ is the boundary of $\mathcal{G}$.
\end{lemma}
\begin{proof} Multiplying \eqref{adj_graph_1aa} by $\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq$, we have that
\begin{equation*}
\begin{split}
&\int_Qi\partial_tu(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt+\int_Q\partial_x^2u(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt-\int_Q\partial_x^4u(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt\\
&-\int_Qf(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt:=I_1+I_2-I_3-\int_Qf(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt.
\end{split}
\end{equation*}
Now, we split the proof in three steps.
\noindent\textbf{Step 1.} Analysis of $I_1$.
Integrating by parts, several times, on $Q$, we get
\begin{equation*}
\begin{split}
I_1=&-i\int_Qu\overline{\partial_t\partial_xu}qdxdt-i\int_Qu\overline{\partial_xu}\partial_tqdxdt+i\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx+\frac{i}{2}\int_Q\partial_xu\overline{\partial_tu}qdxdt+\frac{i}{2}\int_Qu\overline{\partial_t\partial_xu}qdxdt\\
&-\frac{i}{2}\left.\int_0^Tu\overline{\partial_tu}q\right]_{\partial\mathcal{G}}dt+\frac{i}{2}\int_Q\partial_xu\overline{u}\partial_tqdxdt+\frac{i}{2}\int_Qu\overline{\partial_xu}\partial_tqdxdt-\frac{i}{2}\left.\int_0^Tu\overline{u}\partial_tq\right]_{\partial\mathcal{G}}dt+\frac{i}{2}\left.\int_{\mathcal{G}}u\overline{u}\partial_xq\right]_0^Tdx\\
=&-\frac{i}{2}\int_Qu\overline{\partial_t\partial_xu}qdxdt-\frac{i}{2}\int_Qu\overline{\partial_xu}\partial_tqdxdt-\frac{i}{2}\int_{Q}\partial_t\partial_xu\overline{u}qdxdt+i\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx-\frac{i}{2}\left.\int_0^Tu\overline{\partial_tu}q\right]_{\partial\mathcal{G}}dt\\
&-\frac{i}{2}\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx-\left.\frac{i}{2}\int_{\mathcal{G}}u\overline{u}\partial_xq\right]_0^Tdx+\frac{i}{2}\left.\left[u\overline{u}q\right]_0^T\right]_{\partial\mathcal{G}}-\frac{i}{2}\left.\int_0^T u\overline{u}\partial_tq\right]_{\partial\mathcal{G}}dt+\frac{i}{2}\left.\int_{\mathcal{G}}u\overline{u}\partial_xq\right]_0^Tdx.
\end{split}
\end{equation*}
Thus, putting together the similar terms in the last equality, yields that
\begin{equation}\label{i1a}
\begin{split}
I_1=&-\frac{i}{2}\int_Q(u\overline{\partial_t\partial_xu}+\partial_t\partial_xu\overline{u})qdxdt-\frac{i}{2}\int_Qu\overline{\partial_xu}\partial_tqdxdt+\frac{i}{2}\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx\\
&-\frac{i}{2}\left.\int_0^Tu\overline{\partial_tu}q\right]_{\partial\mathcal{G}}dt+\frac{i}{2}\left.\left[u\overline{u}q\right]_0^T\right]_{\partial\mathcal{G}}-\frac{i}{2}\left.\int_0^T u\overline{u}\partial_tq\right]_{\partial\mathcal{G}}dt.
\end{split}
\end{equation}
Finally, taking the real part of \eqref{i1a}, we have
\begin{equation}\label{i1}
\begin{split}
Re(I_1)=\frac{Im}{2}\int_Qu\overline{\partial_xu}\partial_tqdxdt-\frac{Im}{2}\left.\int_{\mathcal{G}}u\overline{\partial_xu}q\right]_0^Tdx
+Re\left(-\frac{i}{2}\left.\int_0^Tu\overline{\partial_tu}q\right]_{\partial\mathcal{G}}dt\right).
\end{split}
\end{equation}
\noindent\textbf{Step 2.} Analysis of the Laplacian integral $I_2$.
Integrating by parts, several times, on $Q$ and taking the real part of $I_2$, it follows that
\begin{equation*}
\begin{split}
Re(I_2)=&\int_QRe(\partial_x^2u\overline{\partial_xu}q)dxdt+\frac{Re}{2}\left(\int_Q\partial^2_xu\overline{u}\partial_xqdxdt\right)\\
=&-\frac{1}{2}\int_Q|\partial_xu|^2\partial_xqdxdt+\left.\frac{1}{2}\int_0^T|\partial_xu|^2q\right]_{\partial\mathcal{G}}dt-\frac{1}{2}\int_Q|\partial_xu|^2\partial_xqdxdt\\
&-\frac{Re}{2}\left(\int_Q\partial_xu\overline{u}\partial^2_xqdxdt\right)+\frac{Re}{2}\left.\left(\int_0^T\partial_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}}dt\right).
\end{split}
\end{equation*}
Consequently,
\begin{equation}\label{i2}
\begin{split}
Re(I_2)=&-\int_Q|\partial_xu|^2\partial_xqdxdt-\frac{Re}{2}\left(\int_Q\partial_xu\overline{u}\partial^2_xqdxdt\right)\\
&+\left.\frac{1}{2}\int_0^T|\partial_xu|^2q\right]_{\partial\mathcal{G}}dt+\frac{Re}{2}\left.\left(\int_0^T\partial_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}}dt\right).
\end{split}
\end{equation}
\noindent\textbf{Step 3.} Analysis of the bi-Laplacian integral $I_3$.
Integrating by parts on $Q$ gives us
\begin{equation}\label{i3a}
\begin{split}
-I_3=&\int_Q\partial^3_xu\overline{\partial^2_xu}qdxdt+\int_Q\partial^3_xu\overline{\partial_xu}\partial_xqdxdt-\left.\int_0^T\partial^3_xu\overline{\partial_xu}q\right]_{\partial\mathcal{G}}dt\\
&+\frac{1}{2}\int_Q\partial^3_xu\overline{\partial_xu}\partial_xqdxdt+\frac{1}{2}\int_Q\partial^3_xu\overline{u}\partial^2_xqdxdt-\left.\frac{1}{2}\int_0^T\partial^3_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}}dt.
\end{split}
\end{equation}
Taking the real part of \eqref{i3a} and integrating, again, by parts on $Q$, we have
\begin{equation*}
\begin{split}
-Re(I_3)=&-\frac{1}{2}\int_Q|\partial^2_xu|^2\partial_xqdxdt+\frac{1}{2}\left.\int_0^T|\partial^2_xu|^2q\right]_{\partial\mathcal{G}}dt-\frac{3}{2}\int_Q|\partial_x^2u|^2\partial_xqdxdt+\int_Q|\partial_xu|^2\partial^3_xqdxdt\\
&-\left.\int_0^T|\partial_xu|^2\partial^2_xq\right]_{\partial\mathcal{G}}dt+\frac{3Re}{2}\left(\left.\int_0^T\partial^2_xu\overline{\partial_xu}\partial_xq\right]_{\partial\mathcal{G}}dt\right)-Re\left(\left.\int_0^T\partial^3_xu\overline{\partial_xu}q\right]_{\partial\mathcal{G}}dt\right)\\
&+\frac{Re}{2}\left(\int_Q|\partial_xu|^2\partial^3_xqdxdt\right)+\frac{Re}{2}\left(\int_Q\partial_xu\overline{u}\partial_x^4qdxdt\right)-\frac{Re}{2}\left.\left(\int_0^T\partial_xu\overline{u}\partial_x^3q\right]_{\partial\mathcal{G}}dt\right)\\
&+\frac{Re}{2}\left.\left(\int_0^T\partial^2_xu\overline{u}\partial_x^2q\right]_{\partial\mathcal{G}}dt\right)-\frac{Re}{2}\left.\left(\int_0^T\partial^3_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}} dt\right).
\end{split}
\end{equation*}
The last equality ensures that
\begin{equation}\label{i3}
\begin{split}
-Re(I_3)=&-2\int_Q|\partial_x^2u|^2\partial_xqdxdt+\frac{3}{2}\int_Q|\partial_xu|^2\partial^3_xqdxdt+\frac{Re}{2}\left(\int_Q\partial_xu\overline{u}\partial^4_xqdxdt\right)\\
&+\frac{1}{2}\left.\int_0^T|\partial^2_xu|^2q\right]_{\partial\mathcal{G}}dt-\left.\int_0^T|\partial_xu|^2\partial^2_xq\right]_{\partial\mathcal{G}}dt+\frac{3Re}{2}\left(\left.\int_0^T\partial^2_xu\overline{\partial_xu}\partial_xq\right]_{\partial\mathcal{G}}dt\right)\\
&-Re\left(\left.\int_0^T\partial^3_xu\overline{\partial_xu}q\right]_{\partial\mathcal{G}}dt\right)-\frac{Re}{2}\left.\left(\int_0^T\partial_xu\overline{u}\partial_x^3q\right]_{\partial\mathcal{G}}dt\right)\\
&+\frac{Re}{2}\left.\left(\int_0^T\partial^2_xu\overline{u}\partial^2_xq\right]_{\partial\mathcal{G}}dt\right)-\frac{Re}{2}\left.\left(\int_0^T\partial^3_xu\overline{u}\partial_xq\right]_{\partial\mathcal{G}} dt\right).
\end{split}
\end{equation}
Finally, taking into account \eqref{i1}, \eqref{i2} and \eqref{i3}, we have
$$Re(I_1)+Re(I_2)-Re(I_3)=\int_Qf(\overline{\partial_xu}q+\frac{1}{2}\overline{u}\partial_xq)dxdt,$$
then \eqref{identity} holds.
\end{proof}
\subsection{Conservation laws}
The final result of the appendix gives us the conservation laws for the solutions of \eqref{adj_graph_1aa}-\eqref{adj_bound1aa}, with $f=0$. More precisely, the result is the following one.
\begin{lemma}\label{conservations}
For any positive time $t$, the solution $u$ of \eqref{adj_graph_1aa}-\eqref{adj_bound1aa}, with $f=0$, satisfies
\begin{equation}\label{L2_c}
\|u(t)\|^2_{L^2(\mathcal{G})}=\|u(0)\|^2_{L^2(\mathcal{G})}
\end{equation}
and
\begin{equation}\label{H12_c}
\|\partial_xu(t)\|^2_{L^2(\mathcal{G})}+\|\partial_x^2u(t)\|^2_{L^2(\mathcal{G})}=\|\partial_xu(0)\|^2_{L^2(\mathcal{G})}+\|\partial^2_xu(0)\|^2_{L^2(\mathcal{G})}.
\end{equation}
Additionally, we have
\begin{equation}\label{H2_c}
\|u(t)\|^2_{L^2(\mathcal{G})}+\|\partial_xu(t)\|^2_{L^2(\mathcal{G})}+\|\partial_x^2u(t)\|^2_{L^2(\mathcal{G})}=\|u(0)\|^2_{L^2(\mathcal{G})}+\|\partial_xu(0)\|^2_{L^2(\mathcal{G})}+\|\partial^2_xu(0)\|^2_{L^2(\mathcal{G})}.
\end{equation}
\end{lemma}
\begin{proof} Multiplying system \eqref{adj_graph_1aa}-\eqref{adj_bound1aa}, with $f=0$, by $i\overline{u}$ and integrating in $\mathcal{G}$, we get
\begin{equation}\label{L2_caa}
0=-\int_{\mathcal{G}}\partial_t u \,\overline{u}dx+i\int_{\mathcal{G}}\partial_x^2 u\, \overline{u}dx -i\int_{\mathcal{G}} \partial_x^4 u\,\overline{u}dx=L_1+L_2+L_3.
\end{equation}
We now analyze the term $L_2+L_3$. Let us first rewrite these quantities as follows
\begin{equation}\label{L2_ca}
\begin{split}
L_2+L_3=i\sum_{j=1}^{N}\left(\int_{I_j}\partial^2_xu_j\overline{u}_jdx-\int_{I_j}\partial^4_xu_j\overline{u}_jdx\right).
\end{split}
\end{equation}
Integrating \eqref{L2_ca} by parts and taking the real part of $L_2+L_3$, we get that
\begin{equation}\label{L2_ca1}
\begin{split}
Re(L_2+L_3)=&Re\left( i\sum_{j=1}^{N}\left(\left.-\int_{I_j}|\partial_xu_j|^2dx+\partial_xu_j\overline{u}_j\right]_{\partial I_j}\right)\right)\\
&+Re\left(i\sum_{j=1}^{N}\left(-\left.\left.\int_{I_j}|\partial^2_xu_j|^2dx+\partial^2_xu_j\partial_x\overline{u}_j\right]_{\partial I_j}-\partial^3_xu_j\overline{u}_j\right]_{\partial I_j}\right)\right)\\
=&Re\left.i\left(\partial_xu_1\overline{u}_1+\partial^2_xu_1\partial_x\overline{u}_1-\partial^3_xu_1\overline{u}_1\right)\right]_{-l_1}^0\\
&+Re\left.i\sum_{j=2}^{N}\left(\partial_xu_j\overline{u}_j+\partial^2_xu_j\partial_x\overline{u}_j-\partial^3_xu_j\overline{u}_j\right)\right]_0^{l_j}.
\end{split}
\end{equation}
By using the boundary conditions \eqref{adj_bound1aa}, we have
\begin{equation}\label{L2_ca2}
\begin{split}
Re(L_2+L_3)=&Re \,\, i\left(\partial_xu_1(0)\overline{u}_1(0)+\partial^2_xu_1(0)\partial_x\overline{u}_1(0)-\partial^3_xu_1(0)\overline{u}_1(0)\right.\\
&+Re\left.i\sum_{j=2}^{N}\left(-\partial_xu_j(0)\overline{u}_j(0)-\partial^2_xu_j(0)\partial_x\overline{u}_j(0)+\partial^3_xu_j(0)\overline{u}_j(0)\right)\right.=0.
\end{split}
\end{equation}
Thus, replacing \eqref{L2_ca2} in \eqref{L2_caa}, \eqref{L2_c} holds.
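More precisely, taking the real part of \eqref{L2_caa} and using \eqref{L2_ca2}, we are left with
$$
0=Re(L_1)=-Re\int_{\mathcal{G}}\partial_t u\, \overline{u}\,dx=-\frac{1}{2}\frac{d}{dt}\int_{\mathcal{G}}|u|^2dx,
$$
so that $\|u(t)\|_{L^2(\mathcal{G})}$ is constant in time.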
We now prove \eqref{H12_c}. Multiplying \eqref{adj_graph_1aa}, with $f=0$, by $\overline{\partial_tu}$, integrating on $\mathcal{G}$ and taking the real part gives us
\begin{equation}\label{L2_ca2a}
Re\left(i\int_{\mathcal{G}}|\partial_tu|^2dx\right)+Re\left(\int_{\mathcal{G}}\partial^2_xu\overline{\partial_tu}dx\right)-Re\left(\int_{\mathcal{G}}\partial^4_xu\overline{\partial_tu}dx\right)=0.
\end{equation}
Integrating \eqref{L2_ca2a} by parts on $\mathcal{G}$ and using the boundary conditions \eqref{adj_bound1aa}, yields that
\begin{equation}\label{L2_ca2aa}
\begin{split}
\frac{1}{2}\frac{\partial}{\partial t}\int_{\mathcal{G}}&\left(|\partial_xu|^2+|\partial^2_xu|^2\right)dx=-Re\left[\partial^2_xu_1(0)\partial_x\partial_t\overline{u}_1(0)+\partial_xu_1(0)\partial_t\overline{u}_1(0)-\partial^3_xu_1(0)\partial_t\overline{u}_1(0)\right]\\
&+\sum_{j=2}^{N}\left(-\partial^2_xu_j(0)\partial_x\partial_t\overline{u}_j(0)-\partial_xu_j(0)\partial_t\overline{u}_j(0)+\partial^3_xu_j(0)\partial_t\overline{u}_j(0)\right).
\end{split}
\end{equation}
The boundary conditions give us that the right-hand side of \eqref{L2_ca2aa} is zero, that is,
$$\frac{1}{2}\frac{\partial}{\partial t}\int_{\mathcal{G}}\left(|\partial_xu|^2+|\partial^2_xu|^2\right)dx=0,$$
which implies \eqref{H12_c}. Finally, adding \eqref{L2_c} and \eqref{H12_c}, we have \eqref{H2_c}.
\end{proof}
\subsection*{Acknowledgments}
Capistrano–Filho was supported by CNPq 408181/2018-4, CAPES-PRINT 88881.311964/2018-01, CAPES-MATHAMSUD 88881.520205/2020-01, MATHAMSUD 21-MATH-03 and Propesqi (UFPE). Cavalcante was supported by CAPES-MATHAMSUD 88887.368708/2019-00. Gallego was supported by MATHAMSUD 21-MATH-03 and the 100.000 Strong in the Americas Innovation Fund. This work was carried out during visits of the authors to the Universidade Federal de Alagoas, Universidade Federal de Pernambuco and Universidad Nacional de Manizales. The authors would like to thank these universities for their hospitality.
\end{document}
\begin{document}
\title{The topology of quaternionic contact manifolds}
\begin{abstract}
We explore the consequences of curvature and torsion on the topology of quaternionic contact manifolds with integrable vertical distribution. We prove a general Myers theorem and establish a Cartan-Hadamard result for almost qc-Einstein manifolds. \end{abstract}
\maketitle
\section{Introduction}
In \cite{Biquard}, Biquard introduced quaternionic contact manifolds as the key tool to study the conformal boundaries at infinity of quaternionic K\"ahler manifolds. Along with strictly pseudoconvex pseudohermitian manifolds, this class describes a model category of sub-Riemannian manifolds with special holonomy. Subsequently these manifolds themselves have been the objects of extensive study, see for example \cite{AFIV, Duchemin,IMV2,IMV3,IPV, VassilevEinstein,Vassilev1} amongst others.
Quaternionic contact manifolds possess a connection adapted to the quaternionic structure \cite{Biquard,Duchemin}. As shown in \cite{VassilevEinstein,Vassilev1}, this connection enjoys the remarkable property that its Ricci tensor can be decomposed entirely into three torsion components. In this paper, under the assumption that the canonical vertical distribution is integrable, we study the effect of these torsion components on the underlying topology of the manifold. The structure of the paper is as follows. In section 2, we review some basic properties of the Biquard connection and derive some new decompositions of the vertical components of curvature into torsion pieces. In section 3, we show how the Levi-Civita connection for a canonical family of Riemannian metrics can be computed in terms of the Biquard connection and its torsion. In section 4, we establish a general Bonnet-Myers type theorem. In section 5, we introduce the category of almost qc-Einstein manifolds, where the horizontal scalar curvature is constant and the horizontal Ricci curvature satisfies
\[ \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace (JX,Y) +\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace(X,JY)=0\]
for any of the horizontal operators $J$ derived from quaternionic multiplication. For these manifolds, we employ some techniques of foliation theory to derive Cartan-Hadamard type theorems. In particular, when the horizontal sectional curvatures are non-positive, we show that the universal cover is either $\rn{h+3}$ or $\rn{h} \times \sn{3}$ and that the two cases can be distinguished by properties of the torsion.
\section{Basic properties of quaternionic contact manifolds}
In this section, we review the basic definitions associated to quaternionic contact manifolds and establish the basic properties of the Biquard connection. In particular, we derive precise expressions for vertical Ricci and sectional curvatures in terms of torsion components.
\bgD{s2sr}
A step $2$ sub-Riemannian manifold is a smooth manifold $M$ together with a smooth distribution $H \subseteq TM$ of dimension $h$ and a smooth positive definite inner product $g^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$ on $H$ such that at every point $[H,H] =TM$.
\enD
If we let $V^* \subseteq T^*M$ consist of the covectors $\xi$ that annihilate $H$, then we can define a bundle map $\mathcal{J} \colon V^* \to \text{End}(H)$ by
\[ \aip{ \mathcal{J}(\xi)(X) }{Y}{} = d\xi(X,Y).\]
It is straightforward to verify that $\mathcal{J}(\xi)$ is independent of how $\xi$ is extended to a $1$-form and so is well-defined. The inner product on $H$ allows for pointwise identification of $\text{End}(H)$ with $\mathfrak{gl}_h$, well-defined up to conjugation. If we impose the standard inner product
\[ \aip{A}{B}{} = \text{tr} (B^\top A) \]
on $\mathfrak{gl}_h$, then we can define a canonical inner product on $V^*$ by
\[ \aip{\xi}{\eta}{} = \frac{1}{h} \aip{\mathcal{J}(\xi) }{\mathcal{J}(\eta)}{}.\]
If $V$ is a {\em complement} to $H$, i.e. a subbundle $V \subseteq TM$ such that $TM = H \oplus V$, then there is a canonical extension of the sub-Riemannian inner product to a Riemannian metric $g$, defined by declaring $H$, $V$ to be orthogonal and using the dual of the inner product of $V^*$ on $V$. For convenience of notation, we denote by $U_\xi $ the dual element in $V$ to $\xi \in V^*$.
\bgD{quaternionic}
A step $2$ sub-Riemannian manifold is quaternionic contact if $\mathcal{J}(V^*)$ is isomorphic to $\mathfrak{sp}_1$ at every point.
\enD
Thus there is an $\text{SO}(3)$-bundle of triples of unit length forms $\eta^1,\eta^2,\eta^3$ such that for each $a =1,2,3$,
\[ J_{a}^2 =- 1 =J_{123} \]
where $J_a = \mathcal{J}(\eta^a)$ and if $I=a_1a_2\dots a_k$ then $J_I = J_{a_1} J_{a_2} \dots J_{a_k}$. Furthermore, we can always choose a reduction of the horizontal frame bundle to $\text{Sp}(1)\text{Sp}(h/4)$. We introduce
\begin{align*}
{\mathfrak{t}}_0 &= \{ A \in {\mathfrak{so}}_h \colon [A,\mathfrak{sp}_1]=0\} \cong \mathfrak{sp}_{h/4} \\
{\mathfrak{t}} &= {\mathfrak{t}}_0 \oplus \mathfrak{sp}_1
\end{align*}
and denote by ${\mathfrak{t}}^\perp$ the orthogonal complement in ${\mathfrak{so}}_h$. It is easily seen that
\[ {\mathfrak{t}} = \{ A \in {\mathfrak{so}}_h \colon [A,\mathfrak{sp}_1] \subseteq \mathfrak{sp}_1 \} \cong \mathfrak{sp}_1 \oplus \mathfrak{sp}_{h/4}.\]
The foundational theorem on quaternionic contact manifolds, due to Biquard \cite{Biquard}, posits the existence of a connection adapted to the quaternionic structure.
\bgT[Biquard]{Biquard}
If $M$ is quaternionic contact with $h>4$ then there is a unique complement $V$ and connection $\nabla$ such that
\begin{itemize}
\item $H$, $V$, $g$ and $\mathcal{J}$ are parallel
\item $\text{Tor}(H,H) \subseteq V$, $\text{Tor}(H,V) \subseteq H$
\item For all $U \in V$, the operator $\text{Tor}(U,\cdot) \colon H \to H$ is in ${\mathfrak{t}}^\perp \oplus \Sigma^h$.
\end{itemize}
\enT
Here $\Sigma^h \subset \mathfrak{gl}_h $ denotes the space of symmetric elements of $\mathfrak{gl}_h$. If $h=4$ then Duchemin \cite{Duchemin} showed that the same result holds under the additional assumption that there exists a complement where for $a,b \in\{1,2,3\}$,
\[ d\eta^a(U_b,X) + d\eta^b(U_a,X) =0\]
for all $X \in H$. The Duchemin condition is equivalent to $1$-normal or $V$-normal in the language of \cite{Hladky4,Hladky5}. A quaternionic contact manifold is often referred to as {\em integrable} if either $h>4$ or it satisfies the Duchemin condition. However, for the sake of brevity, we shall henceforth always assume, unless otherwise stated, that all quaternionic contact manifolds under consideration are integrable.
The property that $\mathcal{J}$ is parallel can easily be shown to be equivalent to the identity
\[ 0 \equiv \nabla \text{Tor} (X,Y,A) \]
for $X,Y \in H$ and $A \in TM$.
Henceforth, we shall also always assume that $E_1,\dots,E_h$ is an orthonormal frame for $H$ and that $U_1,U_2,U_3$ is an orientable orthonormal frame for $V$ with coframe $\eta^1,\dots,\eta^3$. The orientability condition is equivalent to $J_{123} = -1$.
For $a \in \{1,2,3\}$, we denote by $a^+,a^-$ the subspaces of $\mathfrak{gl}_h$ that commute and anti-commute with $J_a$ respectively. It is also easy to see that $\mathfrak{gl}_h = \Psi[3]\oplus\Psi[-1]$ where $\Psi[\lambda]$ is the eigenspace of the invariant Casimir operator $A \mapsto -\sum\limits_a J_a A J_a$. Clearly, the eigenvalue corresponds to the difference in count between the $J_a$'s that commute with $A$ and those that anti-commute. This is also invariant under orthonormal changes of the vertical frame.
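To see this decomposition concretely, note that if $A$ commutes with $J_a$ then $-J_aAJ_a=-AJ_a^2=A$, while if $A$ anti-commutes with $J_a$ then $-J_aAJ_a=J_a^2A=-A$. Moreover, since $J_1J_2=J_3$, an element of $\mathfrak{gl}_h$ commuting with two of the $J_a$ necessarily commutes with the third, and one anti-commuting with two necessarily commutes with the remaining one; hence only the eigenvalues
\[ 3-0=3 \qquad \text{and} \qquad 1-2=-1\]
actually occur.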
For ease of notation, we denote $\text{Tor}(A,B)$ by $T(A,B)$. For $a,b,c \in \{1,2,3\}$ we define torsion operators by
\begin{align}
T_a \colon H \to H, &\qquad T_a X = \text{Tor}(U_a,X)\\
T_a^\text{\tiny $V$\normalsize } \colon V \to V, & \qquad T_a^\text{\tiny $V$\normalsize } U =T(U_a,U)_\text{\tiny $V$\normalsize }
\end{align}
and functions $\tau^a_{bc}$ by
\[ \tau^a_{bc} = \eta^a\left( \text{Tor}(U_b, U_c)\right).\]
Furthermore, we denote the decomposition of each $T_a$ into symmetric and skew-symmetric components by $T_a = T_a^{\scriptscriptstyle \Sigma} + T^{{\mathfrak{o}}}_a$ and define tensors $T^{\scriptscriptstyle \Sigma} = \sum_a \eta^a \otimes T^{\scriptscriptstyle \Sigma}_a$, $T^{{\mathfrak{o}}} = \sum_a \eta^a \otimes T^{{\mathfrak{o}}}_a$.
The Biquard connection then has the following useful properties.
\bgL{easytor2}
For elements $a,b \in \{1,2,3\}$,
\begin{enumerate}
\item $\{J_a,T_b\} \in a^+ \cap b^+$, $\{J_a,T_b\} + \{J_b,T_a\} =0$ and hence $T^{\scriptscriptstyle \Sigma}_a \in a^-$.
\item $\text{tr}_\ensuremath{\text{\tiny $H$\normalsize }}\xspace \left( T^{\scriptscriptstyle \Sigma}_a \right) =0$.
\item $[J_a,T^{{\mathfrak{o}}}_b] \in a^- \cap b^-$, $[J_a,T^{{\mathfrak{o}}}_b]+[J_b,T^{{\mathfrak{o}}}_a]=0$ and hence $T^{{\mathfrak{o}}}_a \in a^+ $.
\item $\tau_{12}^3 = \tau_{23}^1 =\tau_{31}^2 $
\end{enumerate}
where $\{A,B\}$ is the symmetric sum on $\mathfrak{gl}_h$, $\{A,B\} = AB+BA$.
\enL
As a result of the last part, we shall simplify notation by defining the {\em vertical torsion function}
\bgE{tau} \tau =-\tau_{12}^3,\enE
which shall become integral to later results. The reason for the sign choice should also become clear later on.
\pf
The general Bianchi Identity states
\bgE{Bianchi}
\mathscr{C} \left( R(A,B)C - \nabla T (A,B,C) + T(A,T(B,C)) \right)=0
\enE
where $\mathscr{C}$ denotes the cyclic sum. Various projections of this identity will be of particular use to us:
if $X,Y,Z$ are sections of $H$, then
\begin{align} \label{E:B}
\mathscr{C} R(X,Y)Z &= -\mathscr{C} T(X,T(Y,Z)) = \mathscr{C} T(T(X,Y),Z).
\end{align}
If $X,Y\in H$ then
\bgE{BV} \mathscr{C} R(X,Y)U_b = -T^\text{\tiny $V$\normalsize }_b T(X,Y) + T(X,T_b Y) - T(Y,T_b X ).
\enE
If we apply $\eta^a$ to \rfE{BV} we obtain
\begin{align*}
\eta^a T_b^\text{\tiny $V$\normalsize } T(X,Y) + \eta^a R(X,Y)U_b &=\eta^a( T(X,T_b Y) - T(Y,T_b X) ) \\
&= \aip{J_a X }{T_b Y}{} - \aip{J_a Y }{T_b X}{} \\
&= \aip{ \left( \{ J_a, T^{\scriptscriptstyle \Sigma}_b \} + [J_a ,T^{{\mathfrak{o}}}_b ] \right) X}{Y}{}.
\end{align*}
Therefore if we define operators $R_b^a \colon H \to H$ by $\aip{R_b^a X}{Y}{} =\eta^a R(X,Y)U_b $ then
\bgE{raw} \{ J_a, T^{\scriptscriptstyle \Sigma}_b \} + [J_a ,T^{{\mathfrak{o}}}_b ] = \sum\limits_c \tau^a_{bc} J_c + R^a_b .\enE
For any operators $L$ and $M$, $ \{J_a, L\} \in a^+ $ and $ [J_a,M] \in a^-$, and so projecting \rfE{raw} onto $a^+$ gives
\[ \{J_a,T^{\scriptscriptstyle \Sigma}_b\} = \tau^a_{ba} J_a + (R^a_b)^{a^+} .\]
Now if we set $a=b$ and project onto the $a^+$ component we see that $\{J_a,T^{\scriptscriptstyle \Sigma}_a\}=0$. Thus $T^{\scriptscriptstyle \Sigma}_a \in a^-$ and so for $b\ne a$ we must have $J_bT^{\scriptscriptstyle \Sigma}_a,T^{\scriptscriptstyle \Sigma}_aJ_b \in a^+$. From this, we clearly see that $\{J_b,T^{\scriptscriptstyle \Sigma}_a\} \in a^+ \cap b^+$.
Next a symmetric application of \rfE{raw} shows that
\bgE{rawS} \left( \{ J_a, T^{\scriptscriptstyle \Sigma}_b \} +\{ J_b, T^{\scriptscriptstyle \Sigma}_a \} \right) + \left( [J_a ,T^{{\mathfrak{o}}}_b ]+ [J_b ,T^{{\mathfrak{o}}}_a ] \right) - \sum\limits_c \left( \tau^a_{bc} +\tau^b_{ac} \right) J_c =0 \enE
Since $\tau^a_{bb} =0 =\tau^b_{aa}$, the final term is contained in $(a^- + b^-) \cap \mathfrak{sp}_1$. The first term is in $a^+\cap b^+$ as noted earlier. Moreover, since $T^{{\mathfrak{o}}}_a,T^{{\mathfrak{o}}}_b \in {\mathfrak{t}}^\perp$ and
\[ \aip{ [\mathfrak{sp}_1,{\mathfrak{t}}^\perp] }{\mathfrak{sp}_1}{} = \aip{ {\mathfrak{t}}^\perp }{[\mathfrak{sp}_1,\mathfrak{sp}_1]}{} =0,\]
it is easy to see that the middle term of \rfE{rawS} is in $(a^- + b^-) \cap \mathfrak{sp}_1^\perp$. Thus the three grouped terms in \rfE{rawS} are mutually orthogonal. Hence each must vanish.
From this it is easy to see that $\tau^a_{bc}$ is fully alternating as a tensor.
\epf
The previous lemma can be generalized to contact manifolds based on structures, such as the octonions or Clifford algebras, that share many properties with the quaternions. The following result, however, is specific to the quaternionic case.
\bgC{qcB}
For an integrable quaternionic contact manifold,
\[ J_1 T^{{\mathfrak{o}}}_1 = J_2 T^{{\mathfrak{o}}}_2=J_3 T^{{\mathfrak{o}}}_3.\]
Hence for $a,b \in \{1,2,3\}$, $\|T^{{\mathfrak{o}}}_a\| =\|T^{{\mathfrak{o}}}_b\|$ and if $a \ne b$, then $\{T^{{\mathfrak{o}}}_a,T^{{\mathfrak{o}}}_b\}=0$ and $\aip{T^{{\mathfrak{o}}}_a X}{T^{{\mathfrak{o}}}_b X}{}=0$ for all $X \in H$.
\enC
\pf
Since $T^{{\mathfrak{o}}}_a $ is orthogonal to ${\mathfrak{t}}_0$ it must lie within $\Psi[-1]$. As it commutes with $J_a$, it must therefore lie in $b^-$ if $b \ne a$. But then \rfL{easytor2} implies that for $a \ne b$,
\[ J_b T^{{\mathfrak{o}}}_a = \frac{1}{2} [J_b ,T^{{\mathfrak{o}}}_a ] = - \frac{1}{2} [J_a ,T^{{\mathfrak{o}}}_b ] = -J_a T^{{\mathfrak{o}}}_b .\]
The first result follows easily.
We also see that for $a \ne b$, $T^{{\mathfrak{o}}}_a = -J_{ab} T^{{\mathfrak{o}}}_b$. Since $J_{ab} \in b^-$ and $T^{{\mathfrak{o}}}_b \in b^+$, the remaining properties follow also.
\epf
We define functions $\Bt{\mathcal{T}}$, $\mathcal{T}^+$ by
\bgE{B} \Bt{\mathcal{T}} = \| T^{{\mathfrak{o}}}_1\|, \qquad \mathcal{T}^+ = \sup\limits_{|X|=1} | T^{{\mathfrak{o}}}_1 X| \enE
The content of the previous corollary is that these definitions are independent of the choice of orthonormal frame for $V$; indeed, for any unit-length $U \in V$, the mean-square and maximum of the eigenvalues of the operator $ iT(U, \cdot) \colon H \to H$ are $\Bt{\mathcal{T}}^2$ and $\mathcal{T}^+$ respectively.
For convenience, we introduce the invariant trace-free symmetric operators
\bgE{tfs}
[JT^{\scriptscriptstyle \Sigma}] = \sum\limits_a J_aT^{\scriptscriptstyle \Sigma}_a, \qquad
[JT^{{\mathfrak{o}}}] = \sum\limits_a J_a T^{{\mathfrak{o}}}_a.
\enE
\bgC{jbn}
The symmetric operator $[JT^{{\mathfrak{o}}}]$ has norm $\|[JT^{{\mathfrak{o}}}]\| =3 \Bt{\mathcal{T}}$ and largest eigenvalue $3\mathcal{T}^+$.
\enC
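A quick way to see the norm statement (a sketch, using only \rfC{qcB} and the fact that composing with the orthogonal endomorphism $J_1$ preserves the norm on $\text{End}(H)$): since $J_1 T^{{\mathfrak{o}}}_1=J_2T^{{\mathfrak{o}}}_2=J_3T^{{\mathfrak{o}}}_3$,
\[ [JT^{{\mathfrak{o}}}] = 3\, J_1 T^{{\mathfrak{o}}}_1, \qquad \big\|[JT^{{\mathfrak{o}}}]\big\| = 3\big\|T^{{\mathfrak{o}}}_1\big\| = 3\Bt{\mathcal{T}},\]
and the eigenvalue statement reduces to the assertion that the largest eigenvalue of the single symmetric operator $J_1T^{{\mathfrak{o}}}_1$ is $\mathcal{T}^+$.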
For a quaternionic contact manifold it makes sense to split curvature operators into horizontal and vertical pieces. Since the horizontal bundle is fundamental to the definition of a sub-Riemannian manifold, it is desirable to understand how the horizontal components alone affect the geometry and topology of the manifold. Here we shall focus on the Ricci curvatures.
\bgD{Rc}
If $E_1,\dots,E_h$ and $U_1,U_2,U_3$ denote orthonormal bases for $H$ and $V$ respectively, then
\begin{align*}
\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace(A,B) &= \sum\limits_i \aip{R(E_i,A)B}{E_i}{}\\
\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(A,B) &= \sum\limits_a \aip{R(U_a,A)B}{U_a}{}\\
\end{align*}
\enD
The horizontal Ricci curvature on an integrable quaternionic contact manifold is well understood. In \cite{Vassilev2} it is shown that, as an operator on $H$, $\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$ is symmetric and has an orthogonal decomposition into $\text{Sp}(1)\text{Sp}(h/4)$ invariant torsion components given by
\bgE{RcH} \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace = \left( \frac{h}{4}+2 \right) \tau -\left( \frac{h}{4} +1 \right) [JT^{\scriptscriptstyle \Sigma}] - \frac{ h+10}{6} [JT^{{\mathfrak{o}}}] \enE
where the difference of a factor of $2$ from \cite{Vassilev2} derives from a minor difference of convention in the relationship between the metric and quaternionic endomorphisms.
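Since $[JT^{\scriptscriptstyle \Sigma}]$ and $[JT^{{\mathfrak{o}}}]$ are trace-free, one immediate consequence of \rfE{RcH} that is worth recording (a direct computation, with the conventions used here) is that the horizontal scalar curvature is controlled by $\tau$ alone:
\[ \text{tr}_\ensuremath{\text{\tiny $H$\normalsize }}\xspace\, \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace = h \left( \frac{h}{4}+2\right) \tau.\]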
For mixed terms we can use a double cyclic argument (similar to \cite{Hladky5}, Lemma 1), noting that $\aip{R(E_i,X)U}{E_i}{}=0$ since $\nabla$ preserves the splitting $H \oplus V$, to see
\begin{align*}
\aip{R(E_i,U)X}{E_i}{} &= \aip{R(E_i,U)X}{E_i}{} - \aip{R(E_i,X)U}{E_i}{} \\
&= \frac{1}{2} \mathscr{C} \aip{\mathscr{C}R(U,X)E_i}{E_i}{} \\
& = \frac{1}{2} \mathscr{C} \aip{\mathscr{C} T (T(U,X),E_i)}{E_i}{}+\frac{1}{2} \mathscr{C} \aip{\mathscr{C} \nabla T(U,X,E_i)}{E_i}{} \\
& =\aip{ T(T(X,E_i),U)}{E_i}{} + \aip{\nabla T(U,X,E_i)}{E_i}{} \\
& \qquad -\aip{\nabla T (U,E_i,X)}{E_i}{}.
\end{align*}
Since \rfL{easytor2} (2) implies that $\sum\limits_i \aip{T(U,E_i)}{E_i}{} =0$ we can easily see that $\sum\limits_i \aip{\nabla T (U,E_i,X)}{E_i}{}=0$. Hence
\bgE{Hricmix}
\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace(U,X) = \sum\limits_a \aip{T(U,U_a)}{J_a X}{} + \sum\limits_ i \aip{\nabla T(U,X,E_i)}{E_i}{}.
\enE
Furthermore, if $V$ is integrable then the first term on the right vanishes.
We shall primarily be interested in $\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }$ as an operator on $V$. It is less well-behaved than $\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$, having both symmetric and skew-symmetric components, however it too can be decomposed into torsion components.
We first note that all purely vertical components of the full curvature tensor of $\nabla$ can actually be computed using purely horizontal operators, expressible either in terms of curvatures or torsions.
\bgL{RmV}
For $a,b,c,d\in \{1,2,3\}$,
\[ \aip{R(U_a,U_b)U_c}{U_d}{} = -\frac{2}{h}\aip{ R(U_a,U_b)}{J_{dc}}{}= -\frac{2}{h} \aip{[T_a,T_b]}{J_{dc} }{} \]
where the operator inner products are taken on $\text{End}(H)$.
\enL
\pf
This begins with an easy computation using the parallel torsion properties. Namely,
\begin{align*}
\aip{R(U_a,U_b)U_c}{U_d}{} &= \frac{1}{h} \sum\limits_i \aip{R(U_a,U_b) T(E_i,J_c E_i) }{U_d}{} \\
&= \frac{1}{h}\sum\limits_i \aip{ T(R(U_a,U_b) E_i,J_c E_i) +T(E_i,R(U_a,U_b) J_c E_i)}{U_d}{} \\
&= \frac{1}{h} \aip{ J_d R(U_a,U_b)}{J_c}{} + \frac{1}{h} \aip{J_d}{R(U_a,U_b)J_c}{}\\
&= -\frac{2}{h}\aip{ R(U_a,U_b)}{J_{dc}}{}.
\end{align*}
Next we use the cyclic identities to break the operator $R(U_a,U_b)$ into torsion pieces. For $E \in H$,
\bgE{RVtoH}
\begin{split}
R(U_a,U_b)E &= \left( \mathscr{C} R(U_a,U_b)E \right)_\ensuremath{\text{\tiny $H$\normalsize }}\xspace = \mathscr{C} T(T(U_a,U_b),E)_\ensuremath{\text{\tiny $H$\normalsize }}\xspace + \mathscr{C} \nabla T (U_a,U_b,E)_\ensuremath{\text{\tiny $H$\normalsize }}\xspace \\
&= \tau_{ab}^c T_c E -[T_a,T_b]E + [\nabla_{U_a}, T_b]E -\aip{\nabla_{U_a} U_b}{U_c}{} T_c E \\
& \qquad - [\nabla_{U_b}, T_a]E + \aip{\nabla_{U_b} U_a}{U_c}{} T_c E.
\end{split}
\enE
Now
\begin{align*}
\aip{ [ \nabla_{U_b}, T_a]}{J_c}{} &= U_b \aip{T_a}{J_c}{} - \aip{T_a}{\nabla_{U_b} J_c }{} + \aip{\nabla_{U_b} }{(T^{{\mathfrak{o}}}_a - T^{\scriptscriptstyle \Sigma}_a) J_c}{} \\
&= 0 -\aip{T_a}{ J_{\nabla_{U_b} \eta^c} }{} + \aip{J_c T_a + (T^{{\mathfrak{o}}}_a - T^{\scriptscriptstyle \Sigma}_a) J_c }{\nabla_{U_b}}{} \\
&= \aip{ \{J_c,T^{{\mathfrak{o}}}_a\} + [J_c,T^{\scriptscriptstyle \Sigma}_a] }{\nabla_{U_b}}{}\\
&=0
\end{align*}
as the left hand side of the penultimate line is symmetric but the right hand side is skew-symmetric.
Thus, recalling that $J_{dc} \in {\mathfrak{j}} \oplus \langle I_h \rangle$, and that $T_c$ is always trace free and orthogonal to ${\mathfrak{j}}$, we see that
\bgE{RT1} \aip{R(U_a,U_b)U_c}{U_d}{} = - \frac{2}{h} \aip{[T_a,T_b]}{J_{dc} }{} .\enE
\epf
We can pursue this further to compute the vertical sectional curvatures in terms of the torsion operators as follows.
\bgL{KV}
For $a,b \in \{1,2,3\}$ with $a\ne b$, the vertical sectional curvatures satisfy
\[ \frac{h}{2} K(U_a,U_b) = \Bt{\mathcal{T}}^2 - \big\| (T^{\scriptscriptstyle \Sigma}_a)^{b^+} \big\|^2=\Bt{\mathcal{T}}^2 - \big\| (T^{\scriptscriptstyle \Sigma}_b)^{a^+} \big\|^2.\]
\enL
\pf
We apply \rfE{RT1} with $c=a$ and $b=d$ and note that $J_{ba}$ is either pure trace or skew-symmetric depending on whether $a=b$. Now $T_c$ splits into symmetric and skew-symmetric pieces as $T_c = T^{\scriptscriptstyle \Sigma}_c + T^{{\mathfrak{o}}}_c$. We then observe that $[T_a,T_b]$ is trace-free and its skew-symmetric component is $[T^{\scriptscriptstyle \Sigma}_a,T^{\scriptscriptstyle \Sigma}_b] + [T^{{\mathfrak{o}}}_a,T^{{\mathfrak{o}}}_b]$.
Next we see
\begin{align*}
\aip{ T^{\scriptscriptstyle \Sigma}_a T^{\scriptscriptstyle \Sigma}_b - T^{\scriptscriptstyle \Sigma}_b T^{\scriptscriptstyle \Sigma}_a}{J_{ab}}{} &=-2 \aip{J_bT^{\scriptscriptstyle \Sigma}_b }{J_a T^{\scriptscriptstyle \Sigma}_a}{}
\end{align*}
and using the torsion properties we see that if $a\ne b$
\begin{align*}
0 & = \aip{ \{ J_a ,T^{\scriptscriptstyle \Sigma}_b\} + \{J_b ,T^{\scriptscriptstyle \Sigma}_a\} }{ J_b T^{\scriptscriptstyle \Sigma}_a }{} \\
&= \aip{ -J_b J_a T^{\scriptscriptstyle \Sigma}_b -J_bT^{\scriptscriptstyle \Sigma}_b J_a}{T^{\scriptscriptstyle \Sigma}_a }{} + \aip{ T^{\scriptscriptstyle \Sigma}_a -J_b T^{\scriptscriptstyle \Sigma}_a J_b}{T^{\scriptscriptstyle \Sigma}_a}{}\\
&= -2 \aip{J_bT^{\scriptscriptstyle \Sigma}_b }{J_a T^{\scriptscriptstyle \Sigma}_a}{} + \aip{ T^{\scriptscriptstyle \Sigma}_a -J_b T^{\scriptscriptstyle \Sigma}_a J_b}{T^{\scriptscriptstyle \Sigma}_a}{}\\
&= -2 \aip{J_bT^{\scriptscriptstyle \Sigma}_b }{J_a T^{\scriptscriptstyle \Sigma}_a}{} + 2 \aip{ (T^{\scriptscriptstyle \Sigma}_a)^{b^+} }{T^{\scriptscriptstyle \Sigma}_a}{} .
\end{align*}
Thus
\[ \aip{[T^{\scriptscriptstyle \Sigma}_a,T^{\scriptscriptstyle \Sigma}_b]}{J_{ba}}{} = -2 \big\| (T^{\scriptscriptstyle \Sigma}_a)^{b^+}\big\|^2 .\]
A similar argument shows that
\begin{align*}
\aip{ T^{{\mathfrak{o}}}_aT^{{\mathfrak{o}}}_b-T^{{\mathfrak{o}}}_bT^{{\mathfrak{o}}}_a }{J_{ba}}{} &=2 \aip{J_b T^{{\mathfrak{o}}}_b}{J_a T^{{\mathfrak{o}}}_a}{}=2\Bt{\mathcal{T}} ^2
\end{align*}
The result follows easily.
\epf
The curvature of the Biquard connection does not enjoy all the symmetries possessed by that of the Levi-Civita connection. The sectional curvatures do not determine the full curvature tensor via polarization, or even the Ricci curvatures. The symmetric portion of $\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }$ is of course determined by the sectional curvatures, but unlike $\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$ or the Riemannian Ricci tensor, there is also a skew-symmetric component.
\bgL{RcV}
For an integrable quaternionic contact manifold with integrable vertical complement, if $J_{abc}=-1$ then
\[ \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_a,U_b) = \frac{4}{h} \aip{ T^{{\mathfrak{o}}}_a }{T^{{\mathfrak{o}}}_b}{}- \frac{4}{h} \aip{T^{\scriptscriptstyle \Sigma}_a }{T^{\scriptscriptstyle \Sigma}_b}{} +\frac{1}{2} d\tau(U_c) .\]
\enL
\pf
First, we choose an orthonormal frame $\eta^1,\dots,\eta^3$ for $V^*$ such that $\xi = f\eta^1$. Then
\[ \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_\xi,U_\xi) =f^2 \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_1,U_1).\]
Now since $T^{{\mathfrak{o}}}_1 \in \Psi[-1]$, it easily follows that
\[ \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_1,U_1) = \sum\limits_a K(U_a,U_1) = \frac{4}{h}\big\| T^{{\mathfrak{o}}}_1\big\|^2 - \frac{4}{h} \big\| T^{\scriptscriptstyle \Sigma}_1\big\|^2.\]
The symmetric component of $\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }$ can now simply be computed from a polarization argument, so it only remains to find the skew-symmetric part. Now
\bgE{cyclic}\begin{split}
2 &\aip{R(U_3,U_1)U_2}{U_1}{} - 2 \aip{R(U_2,U_1)U_3}{U_1}{} \\
&\qquad =\mathscr{C} \aip{\mathscr{C}R(U_1,U_2)U_3}{U_1}{} \\
&\qquad =\mathscr{C} \aip{\mathscr{C} T (T(U_1,U_2),U_3)}{U_1}{} +\mathscr{C} \aip{\mathscr{C} \nabla T(U_1,U_2,U_3)}{U_1}{} \\
&\qquad= 2 d\tau(U_1)
\end{split}\enE
Hence
\[ \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_2,U_3) -\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_3,U_2) = d\tau(U_1). \]
With an identical argument for the other components, the result follows immediately.
\epf
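It is perhaps worth recording the trace of the vertical Ricci curvature. Repeating the computation of the diagonal terms in the proof above for each $U_a$, and using $\|T^{{\mathfrak{o}}}_a\| = \Bt{\mathcal{T}}$ from \rfC{qcB}, we obtain (a direct consequence of the formulas above rather than a new result)
\[ \sum\limits_a \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_a,U_a) = \frac{12}{h}\, \Bt{\mathcal{T}}^{\,2} - \frac{4}{h} \sum\limits_a \big\| T^{\scriptscriptstyle \Sigma}_a \big\|^2 .\]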
Somewhat surprisingly, for mixed terms, the vertical Ricci tensor is often better behaved than the horizontal.
\bgL{MixedRc}
For $U \in V$ and $X \in H$,
\[ \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(X,U) =-\sum\limits_a \aip{T(U,U_a)}{J_a X}{} .\]
If $V$ is integrable then $\ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(X,U) =0$.
\enL
\pf
Here we employ standard results derived from the algebraic Bianchi identity for a connection with torsion, noting also that $\aip{R(U_a,U)X}{U_a}{}=0$ since $\nabla$ preserves the splitting $H\oplus V$. Notably
\begin{align*}
\aip{R(U_a,X)U}{U_a}{} &= \aip{R(U_a,X)U}{U_a}{} - \aip{R(U_a,U)X}{U_a}{} \\
&= \frac{1}{2} \mathscr{C} \aip{\mathscr{C}R(X,U)U_a}{U_a}{} \\
& = \frac{1}{2} \mathscr{C} \aip{\mathscr{C} T (T(X,U),U_a)}{U_a}{}+\frac{1}{2} \mathscr{C} \aip{\mathscr{C} \nabla T(X,U,U_a)}{U_a}{} \\
& = -\aip{T(U,U_a)}{J_a X}{} + \aip{\nabla T(U,U_a,X)}{U_a}{}
\end{align*}
However by skew-symmetry of the purely vertical torsion, we see
\[ \aip{T(U,U_a)}{U_a}{} =0.\] The result follows easily.
\epf
\section{Comparison with weighted Levi-Civita connections.}
The extension of the metric on $H$ to a full Riemannian metric $g$ described earlier was canonical only up to a constant scaling factor. Thus it is natural to consider a family of Riemannian metrics defined by
\[ g^\lambda(A,B) = \aip{A_\ensuremath{\text{\tiny $H$\normalsize }}\xspace}{B_\ensuremath{\text{\tiny $H$\normalsize }}\xspace}{} + \lambda^2 g(A_\text{\tiny $V$\normalsize },B_\text{\tiny $V$\normalsize }) .\]
While the Biquard connection associated to the quaternionic structure contains more refined information on the geometry of a quaternionic contact manifold, the torsion-free nature of the Levi-Civita connections $\overline{\nabla}^\lambda$ allows for the application of the more deeply developed theory of Riemannian geometry.
Our first step is to compare the Biquard connection to these weighted Levi-Civita connections. This is technically much simpler if we make the assumption that the vertical distribution is integrable. It should however be noted that this condition appears automatically in many important cases such as qc-Einstein manifolds (see \cite{Vassilev2}) and the families involved in the partial solution to the qc-Yamabe problem from the same paper. One consequence of this is that $M$ then admits a foliation $\mathcal{F}^\text{\tiny $V$\normalsize }$ whose leaves are $3$-dimensional manifolds everywhere spanned by $V$.
\bgL{ConComp} If the vertical distribution $V$ is integrable then the Levi-Civita derivatives with respect to the metric $g^\lambda$ can be computed for sections $X,Y$ of $H$ as follows:
\begin{align*}
\overline{\nabla}^\lambda_X Y &=\nabla_X Y - \frac{1}{2} \sum\limits_a \aip{J_a X}{Y}{} U_a - \lambda^{-2} \sum\limits_a \aip{T^{\scriptscriptstyle \Sigma}_a X}{Y}{} U_a,\\
\overline{\nabla}^\lambda_{U_a} X &= \nabla_{U_a} X + \frac{\lambda^2}{2} J_a X -T^{{\mathfrak{o}}}_a X, \\
\overline{\nabla}^\lambda_X U_a &= \nabla_X U_a + \frac{\lambda^2}{2} J_a X +T^{\scriptscriptstyle \Sigma}_a X,\\
\overline{\nabla}^\lambda_{U_a} U_b &= \nabla_{U_a} U_b-\frac{1}{2} \sum\limits_c \tau_{ab}^c U_c.
\end{align*}
\enL
Here we get the first indication that the vertical torsion function $\tau$ will play an instrumental role in determining the topology in the leaves of $\mathcal{F}^\text{\tiny $V$\normalsize }$.
\pf The proof works by using the standard formulas expressing the Levi-Civita connections in terms of Lie brackets and then decomposes those Lie brackets using the Biquard connection. The computations are similar for all parts, so we shall prove the first and leave the others to the reader.
We begin by noting that, since the difference between connections is tensorial, it suffices to prove the results for $X,Y$ members of a horizontal orthonormal frame. Then letting $Z$ be another member of the same horizontal frame
\begin{align*} g^\lambda(\overline{\nabla}^\lambda_X Y,Z) &=-\frac{1}{2} \left( \aip{X}{[Y,Z]}{} +\aip{Y}{[X,Z]}{} - \aip{Z}{[X,Y]}{} \right) \\
&= \aip{\nabla_X Y }{Z}{}
\end{align*}
\begin{align*}
g^\lambda(\overline{\nabla}^\lambda_XY,U_b) &= -\frac{1}{2} \left( \aip{X}{[Y,U_b]}{} +\aip{Y}{[X,U_b]}{} - g^\lambda(U_b,[X,Y]) \right) \\
&= -\frac{1}{2} \Big( \aip{X}{ -\nabla_{U_b} Y -T_b Y}{} + \aip{Y}{-\nabla_{U_b} X -T_b X}{} \\
& \qquad + \lambda^2 g(U_b,T(X,Y) ) \Big)\\
&= \frac{1}{2} \aip{T^{\scriptscriptstyle \Sigma}_b X}{Y}{} -\frac{\lambda^2}{2} \aip{J_b X}{Y}{}.
\end{align*}
The proof of the first result is completed by then recalling that $\lambda^{-1} U_1 ,\dots, \lambda^{-1} U_3$ is an orthonormal frame for $V$ with respect to $g^\lambda$.
\epf
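As a quick consistency check of \rfL{ConComp} (not needed in what follows), metric compatibility in the mixed directions can be verified directly from the stated formulas. Recalling that $\nabla$ preserves the splitting $H \oplus V$ and that each $U_a$ has $g^\lambda$-length $\lambda$, we find for sections $X,Y$ of $H$ that
\begin{align*}
g^\lambda(\overline{\nabla}^\lambda_X U_a, Y) + g^\lambda( U_a, \overline{\nabla}^\lambda_X Y) &= \aip{\frac{\lambda^2}{2} J_a X + T^{\scriptscriptstyle \Sigma}_a X}{Y}{} - \frac{\lambda^2}{2} \aip{J_a X}{Y}{} - \aip{T^{\scriptscriptstyle \Sigma}_a X}{Y}{} \\
&= 0 = X\, g^\lambda(U_a,Y),
\end{align*}
as must be the case for the Levi-Civita connection of $g^\lambda$.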
With this comparison in hand, we can attend to the laborious task of comparing the curvature tensors.
\bgT{Rm} If $M$ is an integrable quaternionic contact manifold with integrable vertical distribution then the curvature tensors of the Levi-Civita connections associated to $g^\lambda$ can be computed from the Biquard connection as follows.
\begin{align*}
\Bt{\text{Rm}}^\lambda& (X,Y,Z,W) = \text{Rm}(X,Y,Z,W) \\
& - \lambda^{-2} \sum\limits_a \big[\frac{\lambda^2}{2} \aip{J_a Y}{Z}{}+ \aip{T^{\scriptscriptstyle \Sigma}_a Y}{Z}{}\big] \big[ \frac{\lambda^2}{2} \aip{J_a X}{W}{}+ \aip{T^{\scriptscriptstyle \Sigma}_a X}{W}{}\big] \\
& + \lambda^{-2} \sum\limits_a \big[\frac{\lambda^2}{2} \aip{J_a X}{Z}{}+ \aip{T^{\scriptscriptstyle \Sigma}_a X}{Z}{}\big] \big[ \frac{\lambda^2}{2} \aip{J_a Y}{W}{}+ \aip{T^{\scriptscriptstyle \Sigma}_a Y}{W}{}\big] \\
& +\frac{\lambda^2}{2}\sum\limits_a \aip{J_a X}{Y}{} \aip{ J_a Z}{W}{} \\
\Bt{\text{Rm}}^\lambda &(X,Y,U_a,Z) = \aip{ \nabla T^{\scriptscriptstyle \Sigma} (U_a,Y,X)}{Z}{}-\aip{ \nabla T^{\scriptscriptstyle \Sigma} (U_a,X,Y)}{Z}{}\\
\Bt{\text{Rm}}^\lambda & (X,U_a,U_a,X) = \aip{\nabla T(X,U_a,U_a)}{X}{} + \frac{\lambda^4}{4} \left|X\right|^2 -\lambda^2 \aip{J_a T^{\scriptscriptstyle \Sigma}_a X}{X}{} \\ & \qquad +\left|T^{{\mathfrak{o}}}_a X\right|^2 -\left|T_a X\right|^2 \\
\Bt{\text{Rm}}^\lambda &(U_b,Y,U_a,U_c) = \lambda^2 \text{Rm}(U_b,Y,U_a,U_c) -\lambda^2 \aip{ \nabla T(U_b,U_a,Y) }{U_c}{}\\
\Bt{\text{Rm}}^\lambda&(U_a,U_b,U_c,U_d) = \lambda^2 \text{Rm}(U_a,U_b,U_c,U_d) + \frac{\lambda^2}{2} \left[ d\tau^d_{bc}(U_a) -d \tau^d_{ac}(U_b) \right]\\
& \qquad + \frac{\lambda^2}{4} \sum\limits_e \left[ \tau^e_{ac}\tau^e_{bd} - \tau^e_{bc}\tau^e_{ad} - 2 \tau_{ab}^e \tau^e_{cd} \right]
\end{align*}
\enT
\pf
Again, all parts of the proof are by direct computation. However we can simplify the process considerably by making use of a technical result from \cite{Vassilev2}. Near any point $p \in M$ there always exist orthonormal frames $E_1,\dots,E_h$ and $U_1,\dots,U_3$ such that $(\nabla E_i)_{|p} = 0 =(\nabla U_a)_{|p}$ for all $i,a$.
Here, we shall prove the first result and the last and leave the remainder to the reader. Let $X,Y,Z,W$ be elements of an orthonormal horizontal frame as above. Then at $p$,
\begin{align*}
g^\lambda&( \overline{\nabla}^\lambda_X \overline{\nabla}^\lambda_Y Z, W) = X \aip{ \overline{\nabla}^\lambda_Y Z}{W}{} - g^\lambda(\overline{\nabla}^\lambda_Y Z, \overline{\nabla}^\lambda_X W) \\
&= \aip{\nabla_X \nabla_Y Z}{W}{} \\
& \quad - \lambda^2 \sum\limits_a \big[\frac{1}{2} \aip{J_a Y}{Z}{}+\lambda^{-2} \aip{T^{\scriptscriptstyle \Sigma}_a Y}{Z}{}\big] \big[ \frac{1}{2} \aip{J_a X}{W}{}+\lambda^{-2} \aip{T^{\scriptscriptstyle \Sigma}_a X}{W}{}\big]
\end{align*}
and
\begin{align*}
g^\lambda(\overline{\nabla}^\lambda_{[X,Y]}Z,W) &= \sum\limits_a \aip{[X,Y]}{U_a}{} \aip{\overline{\nabla}^\lambda_{U_a} Z}{W}{} \\
&= -\sum\limits_a \aip{J_a X}{Y}{} \aip{ \frac{\lambda^2}{2} J_a Z}{W}{}
\end{align*}
The first result follows easily.
Now let $U_1,U_2,U_3$ be an orthonormal vertical frame such that each $\nabla U_a $ vanishes at $p$. Then
\begin{align*}
g^\lambda(\overline{\nabla}^\lambda_{U_a} \overline{\nabla}^\lambda_{U_b} U_c,U_d) &=U_a g^\lambda( \overline{\nabla}^\lambda_{U_b} U_c,U_d) - g^\lambda( \overline{\nabla}^\lambda_{U_b} U_c,\overline{\nabla}^\lambda_{U_a} U_d) \\
&= \lambda^2 \aip{ \nabla_{U_a} \nabla_{U_b} U_c}{U_d}{} + \frac{\lambda^2}{2} d\tau_{bc}^d (U_a) - \frac{\lambda^2}{4} \sum\limits_e \tau_{bc}^e \tau_{ad}^e
\end{align*}
\begin{align*}
g^\lambda(\overline{\nabla}^\lambda_{[U_a,U_b]} U_c,U_d) = - \lambda^2 \sum\limits_e \tau_{ab}^e \aip{\overline{\nabla}^\lambda_{U_e} U_c}{U_d}{} = \frac{\lambda^2}{2} \sum\limits_e \tau_{ab}^e \tau_{ec}^d
\end{align*}
Again the last result follows easily.
\epf
From these we get the following simple corollaries.
\bgC{Sectional} If $X,Y \in H$ are unit length and orthogonal and $a \ne b$ then,
\begin{align*}
\Bt{K}^\lambda(X,Y) &= K(X,Y) -\lambda^{-2}\sum\limits_a \Big[ \aip{T^{\scriptscriptstyle \Sigma}_a X}{X}{} \aip{T^{\scriptscriptstyle \Sigma}_a Y}{Y}{} -\aip{T^{\scriptscriptstyle \Sigma}_a X}{Y}{}^2 \Big] \\
& \qquad - \frac{3\lambda^2}{4} \sum\limits_a \aip{J_a X}{Y}{}^2\\
\Bt{K}^\lambda(X,U_a) &= \frac{\lambda^2}{4} - \aip{J_a T^{\scriptscriptstyle \Sigma}_a X}{X}{} \\
& \qquad + \lambda^{-2} \Big[ \left|T^{{\mathfrak{o}}}_a X\right|^2 -\left|T_a X\right|^2-\aip{\nabla T^{\scriptscriptstyle \Sigma}(U_a, X,U_a)}{X}{} \Big]\\
\Bt{K}^\lambda(U_a,U_b) &= \lambda^{-2} \left[ K(U_a,U_b) + \frac{\tau^2}{4} \right]
\end{align*}
\enC
\bgC{RcStuff}
\begin{align*}
\Bt{\ensuremath{\text{Rc}}\xspace}^\lambda(X,X) &= \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace (X,X) -\aip{[JT^{\scriptscriptstyle \Sigma}]X}{X}{} -\frac{3\lambda^2}{2} \left|X\right|^2 \\
& \quad - \lambda^{-2}\Big[ \aip{ \text{tr}_\text{\tiny $V$\normalsize } \nabla T^{\scriptscriptstyle \Sigma}(X)}{X}{} + 2 \sum\limits_a \aip{ T^{{\mathfrak{o}}}_aX}{T^{\scriptscriptstyle \Sigma}_aX}{} \Big] \\
\Bt{\ensuremath{\text{Rc}}\xspace}^\lambda(X,U_a) &=\aip{ \text{tr}_\ensuremath{\text{\tiny $H$\normalsize }}\xspace \nabla T^{\scriptscriptstyle \Sigma} (U_a)}{X}{} \\
\Bt{\ensuremath{\text{Rc}}\xspace}^\lambda(U_a,U_a) &= \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_a,U_a) + \frac{\tau^2}{2} + \frac{h \lambda^4}{4} +\|T^{{\mathfrak{o}}}_a\|^2 -\|T_a\|^2 \\
&=\frac{\tau^2}{2}+ \frac{h\lambda^4}{4} + \frac{4\Bt{\mathcal{T}}^2}{h} - \frac{h+4}{h} \left\| T^{\scriptscriptstyle \Sigma}_a \right\|^2 \end{align*}
\enC
Since we are assuming that the vertical distribution is integrable, it is natural to ask what effect torsion has on the topology of the leaves of $\mathcal{F}^\text{\tiny $V$\normalsize }$.
\bgC{RcF}
The Levi-Civita connection associated to the restriction of any metric $g^\lambda$ to a leaf of the foliation by $\mathcal{F}^\text{\tiny $V$\normalsize }$ has Ricci curvature given by
\[ \ensuremath{\text{Rc}}\xspace^{L}(U_a,U_a) =\frac{\tau^2}{2} + \ensuremath{\text{Rc}}\xspace^\text{\tiny $V$\normalsize }(U_a,U_a) = \frac{\tau^2}{2} + \frac{4\Bt{\mathcal{T}}^2}{h} - \frac{4}{h} \|T_a^{\scriptscriptstyle \Sigma}\|^2 \]
and sectional curvatures
\[ K^L(U_a,U_b) = \lambda^{-2} \left( \frac{\tau^2}{2} + \frac{4\Bt{\mathcal{T}}^2}{h} - \frac{4}{h} \|(T_a^{\scriptscriptstyle \Sigma})^{b+})\|^2 \right).\]
\enC
Thus we see that if $T^{\scriptscriptstyle \Sigma}$ is small compared to $\tau$ and $\Bt{\mathcal{T}}$, then the leaves of the foliation will be compact. Conversely, if $T^{\scriptscriptstyle \Sigma}$ is the dominant torsion component, we would have leaves that are covered by $\rn{3}$ and so would either be non-compact or have considerable topology.
\section{A qc-Bonnet-Myers theorem}
In this section, we explore conditions that ensure compactness of a quaternionic contact manifold. We shall make the standing assumption that $M$ is complete with respect to one (and hence all) of the metrics $g^\lambda$. It is well known that this is equivalent to $M$ being complete for the underlying sub-Riemannian metric too. The strategy is to choose the scaling factor $\lambda$ carefully and employ the traditional Bonnet-Myers theorem.
Considering \rfC{RcStuff}, the obvious difficulty is that letting $\lambda \to \infty$ produces two opposing effects. The vertical Ricci curvatures become more positive, but the horizontal Ricci curvatures become more negative.
A necessary condition for $M$ to be compact is that the various torsion operators must be bounded. Thus when looking for sufficient conditions, it makes sense to explore the consequences of bounds on particular pieces of torsion. Accordingly, throughout the remainder of this section, we shall suppose that $\ua,\ub,\uc$ are constants satisfying the following pointwise bounds at all points of $M$ and for all $X \in H$ and $U\in V$:
\begin{align} -2 \ua \left| X \right| \, \left|U \right| &\leq \aip{ \text{tr}_\ensuremath{\text{\tiny $H$\normalsize }}\xspace \nabla T^{\scriptscriptstyle \Sigma} (U)}{X}{} \\
-\ub \left|X\right|^2 &\leq -\aip{ \text{tr}_\text{\tiny $V$\normalsize } \nabla T^{\scriptscriptstyle \Sigma} (X)}{X}{}-2 \sum\limits_a \aip{T_a^{\scriptscriptstyle \Sigma} X }{ T^{{\mathfrak{o}}}_aX}{} ,\\
\frac{h+4}{h} \left\| T^{\scriptscriptstyle \Sigma}(U,\cdot) \right\|^2 & \leq \left( \uc+ \frac{\tau^2}{2} + \frac{4\Bt{\mathcal{T}}^2}{h} \right) \left|U\right|^2 . \end{align}
It should be remarked that, by necessity, we must always have $\ua \geq 0$. Then, for any $\e>0$, estimating the cross terms by $2\ua\left|X\right|\left|\mu\right| \leq \ua\left(\e\left|X\right|^2+\e^{-1}\mu^2\right)$, we obtain
\begin{align*}
\Bt{\ensuremath{\text{Rc}}\xspace}^\lambda(X+ \mu U_a,X+ \mu U_a) &\geq \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace (X,X) -\aip{[JT^{\scriptscriptstyle \Sigma}]X}{X}{} - \Big[ \frac{\ub}{\lambda^2} + \frac{3 \lambda^2}{2}+\ua \e \Big] \left|X\right|^2 \\
& \qquad + \left( \frac{h\lambda^4}{4} -\uc - \frac{\ua}{\e}\right)\mu^2 \\
\end{align*}
In order to apply the standard Bonnet-Myers theorem, there must be a positive constant $c$, such that
\[ \Bt{\ensuremath{\text{Rc}}\xspace}^\lambda(X+\mu U_a,X+ \mu U_a) \geq c \left( |X|^2 +\lambda^2 \mu^2 \right) \]
regardless of choice of $X$ or $a$. This clearly requires us to choose
\[ \e > \frac{ 4\ua}{h\lambda^4 - \uc}. \]
This then easily leads to the following theorem.
\bgT{qcBM}
Suppose $M$ is a complete, integrable, quaternionic contact manifold with integrable vertical distribution.
If there exists a constant $\rho_0$ such that
\bgE{RcC} \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace(X,X) - \aip{[JT^{\scriptscriptstyle \Sigma}]X}{X}{} \geq \rho_0 \|X\|^2 \enE
with
\bgE{rhoC} \rho_0 > \min\limits_{0 < x <x_L} \left\{ \frac{\ub}{x} + \frac{3x}{2}+\frac{4\ua^2}{h x^2 -\uc} \right\}, \qquad x_L = \begin{cases} \sqrt{\frac{\uc}{h} }, \quad & \uc>0 \\
\infty, \quad & \uc\leq 0\end{cases} \enE
then $M$ is compact with finite fundamental group.
\enT
We remark that it is mainly the presence of the torsion term $T^{\scriptscriptstyle \Sigma}$ that complicates the expressions above. This theorem simplifies considerably if we place additional constraints on this term. Indeed, we can easily establish the following result.
\bgC{TV}
Suppose $M$ is a complete, integrable, quaternionic contact manifold with integrable vertical distribution.
\begin{itemize}
\item If $\text{tr}_\ensuremath{\text{\tiny $H$\normalsize }}\xspace \nabla T^{\scriptscriptstyle \Sigma} \equiv 0$ and there is a constant $\rho_0 > \sqrt{\frac{3\ub}{2}}$ such that \rfE{RcC} holds then $M$ is compact.
\item If $T^{\scriptscriptstyle \Sigma}\equiv 0$ and there is a constant $\rho_0>0$ such that \rfE{RcC} holds then $M$ is compact.
\end{itemize}
\enC
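To illustrate how the second statement follows from the theorem above (a sketch): if $T^{\scriptscriptstyle \Sigma}\equiv 0$ then the three pointwise bounds hold with $\ua=\ub=\uc=0$, so the quantity minimized in \rfE{rhoC} reduces to
\[ \frac{\ub}{x}+\frac{3x}{2}+\frac{4\ua^2}{hx^2-\uc} = \frac{3x}{2},\]
which can be made smaller than any prescribed $\rho_0>0$ by choosing $x$ sufficiently small. Hence any positive constant $\rho_0$ for which \rfE{RcC} holds suffices.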
It should be remarked however that the combination of an integrable vertical distribution and even the first of these conditions is quite restrictive. Indeed it follows from Theorem 4.8 in \cite{Vassilev2} that under such hypotheses $M$ must have constant scalar curvature.
\section{Almost qc-Einstein manifolds}
Of particular importance in the theory of quaternionic contact manifolds is the class of qc-Einstein manifolds, for which the Ricci tensor $\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$ is scalar as an operator on $H$. These manifolds have very special torsion properties due to the following theorem, shown for $h>4$ in \cite{Vassilev2} and for $h=4$ in \cite{VassilevEinstein}.
\bgT{qcEbase}
An integrable quaternionic contact manifold is qc-Einstein if and only if the torsion operators $T^{\scriptscriptstyle \Sigma},B$ vanish identically. Furthermore, if $M$ is qc-Einstein then the vertical distribution is integrable and $\tau$ is constant.
\enT
Therefore every qc-Einstein manifold has a vertical foliation $\mathcal{F}^\text{\tiny $V$\normalsize }$. Here, motivated in part by the last section, we shall consider a larger class of manifolds that almost meet these conditions.
\bgD{aqcE}
An integrable quaternionic contact manifold is aqc-Einstein, or almost qc-Einstein, if it has constant scalar curvature (i.e. $d\tau =0$) and the Ricci operator is in $\Psi[3]$ (i.e. commutes with all $J$ operators).
We say that an aqc-Einstein manifold is of noncompact-type if the following three conditions all hold:
\begin{itemize}
\item[(NC1)] $\tau = 0$,
\item[(NC2)] for every leaf $L$ in $\mathcal{F}^\text{\tiny $V$\normalsize }$, $\inf_L \Bt{\mathcal{T}}=0$,
\item[(NC3)] for every compact leaf $L$ in $\mathcal{F}^\text{\tiny $V$\normalsize }$, $\Bt{\mathcal{T}} \equiv 0$ on $L$.
\end{itemize}
Otherwise $M$ is of compact-type.
\enD
Theorem 4.8 in \cite{Vassilev2} implies that $M$ is aqc-Einstein if and only if $T^{\scriptscriptstyle \Sigma} \equiv 0$ and the vertical distribution is integrable. The motivation for the type definitions will come later in \rfL{qcE}.
This means that aqc-Einstein manifolds possess a vertical foliation $\mathcal{F}^\text{\tiny $V$\normalsize }$ and a much simpler form of the Bonnet-Myers theorem.
\bgT{qcEp}
Suppose that $M$ is a complete aqc-Einstein manifold such that
\[ \left(\frac{h}{4} +2 \right) \tau - \left( \frac{h+10}{2} \right) \sup\limits_M \mathcal{T}^+ >0.\] Then $M$ is compact with finite fundamental group.
\enT
\pf This is an easy consequence of \rfC{TV} together with the decomposition of $\ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace$ into torsion components.
\epf
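Explicitly, the estimate being used is the following (a sketch): for an aqc-Einstein manifold $T^{\scriptscriptstyle \Sigma}\equiv 0$, so \rfE{RcH} together with \rfC{jbn} gives, for every horizontal $X$,
\[ \ensuremath{\text{Rc}}\xspace^\ensuremath{\text{\tiny $H$\normalsize }}\xspace(X,X) - \aip{[JT^{\scriptscriptstyle \Sigma}]X}{X}{} \geq \left[ \left(\frac{h}{4}+2\right)\tau - \frac{h+10}{2}\, \sup\limits_M \mathcal{T}^+ \right] \left|X\right|^2,\]
so the hypothesis of the theorem supplies the positive constant $\rho_0$ required in \rfC{TV}.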
To proceed we shall first need to collect some results from the theory of foliations. The qc-Einstein version of the following result first appeared in \cite{VassilevEinstein}; here we generalize it to the aqc-case.
\bgL{qcE}
Suppose that $M$ is a complete aqc-Einstein quaternionic contact manifold. Then
\begin{itemize}
\item $\mathcal{F}^\text{\tiny $V$\normalsize }$ is a Riemannian, bundle-like foliation with respect to each $g^\lambda$ such that each leaf is complete and totally geodesic,
\item if $M$ is qc-Einstein, then every leaf of $\mathcal{F}^\text{\tiny $V$\normalsize }$ is isometric to a complete, $3$-dimensional space-form with constant sectional curvature $\tau^2/2\lambda^2$,
\item if $M$ is of compact-type then every leaf is compact with universal cover $\sn{3}$,
\item if $M$ is of noncompact-type then every leaf has universal cover $\rn{3}$.
\end{itemize}
\enL
\pf
That $\mathcal{F}^\text{\tiny $V$\normalsize }$ is totally geodesic and that each metric $g^\lambda$ is bundle-like with respect to $\mathcal{F}^\text{\tiny $V$\normalsize }$ follow from the observations that for $X,Y$ sections of $H$ and $U,W$ sections of $V$ respectively,
\[
\left( \overline{\nabla}^\lambda_U W\right)_\ensuremath{\text{\tiny $H$\normalsize }}\xspace=0, \qquad \left(\overline{\nabla}^\lambda_X Y + \overline{\nabla}^\lambda_Y X\right)_\text{\tiny $V$\normalsize } =0 .\]
See Lemma 1.2 in \cite{Johnson} and \cite{ONeill} for details. Completeness follows easily from the totally geodesic property.
The second part follows trivially from the curvature computations and \rfL{KV}.
For the last two parts, we first note Corollary 4 in \cite{Reinhart} stating that all leaves of such a foliation have the same universal covering space. Now if (NC1) or (NC2) fails, then from \rfC{RcF} there is a leaf with positive Ricci curvature which must therefore be compact. If (NC3) fails, there is a compact leaf $L$ such that the Ricci curvature on $L$ is quasi-positive. Since $L$ is compact, following the proof of a theorem by Aubin (\cite{aubin}, pp. 398-399) the metric can be deformed on $L$ into one of strictly positive Ricci curvature. In either case, the universal covering space $\tilde{L}$ must then be compact. Now $\tilde{L}$ is compact, simply connected and oriented from the quaternionic identity $J_{123}=-1$. From Poincar\'e duality, we then have $H_1(\tilde{L})=H_2(\tilde{L})=0$, and the Hurewicz theorem then implies that $\pi_2(\tilde{L}) =0$ and $\pi_3(\tilde{L}) =\mathbb{Z}$. By Whitehead's theorem, any generator of $\pi_3(\tilde{L})$ is then represented by a homotopy equivalence and so $\tilde{L}$ is homotopy equivalent to $\sn{3}$. Following Perelman's proof of the Poincar\'e conjecture (or Hamilton's partial solution in \cite{Hamilton}), $\tilde{L}$ must be diffeomorphic to $\sn{3}$. As every leaf has the same universal cover, we must have that each leaf is compact.
Now if $M$ is of noncompact-type and $\Bt{\mathcal{T}}$ vanishes everywhere then $M$ is qc-Einstein with $\tau=0$ and so the universal cover is $\rn{3}$ by the second part. If $\Bt{\mathcal{T}}$ is not identically zero, then there must be a noncompact leaf which admits a point where $\Bt{\mathcal{T}}>0$. This leaf will then have quasi-positive Ricci curvature by \rfC{RcF}. Its universal cover is then a simply connected, open $3$-manifold with quasi-positive Ricci curvature. A theorem of Zhu, \cite{Zhu}, then implies that the universal cover is $\rn{3}$.
\epf
While it initially looks difficult to check the type of an aqc-Einstein manifold, it can be reduced to the study of a single leaf.
\bgC{NCT}
Suppose that $M$ is a complete aqc-Einstein manifold with $\tau = 0$ and that $p \in M$ satisfies $\Bt{\mathcal{T}}(p) >0$. Then $M$ is of compact-type if and only if the leaf through $p$ is compact.
\enC
\pf
Following the proof of the previous lemma, we note that the leaf through $p$ has quasi-positive Ricci curvature and so has universal cover $\sn{3}$ if it is compact and $\rn{3}$ otherwise. As all leaves have the same universal cover, the result follows immediately.
\epf
We now note the work of Reinhart on foliations with bundle-like metrics, specifically Corollary 3 from \cite{Reinhart}.
\bgT{Rh} If $M$ is a foliated manifold that is complete with respect to bundle-like metric and the foliation is regular, then $M$ is isometric to a fibre bundle $\pi \colon M \to M^\prime$ where $M^\prime$ is a complete Riemannian manifold and the leaves of $\mathcal{F}$ coincide with the fibres of $\pi$.
\enT
It should be remarked here that it is well-known that the regularity condition can be weakened to requiring that leaves of the foliation have trivial leaf holonomy. This is also implied by the simpler condition that each of the leaves is simply connected.
While this result puts quite strong conditions on the topology of an aqc-Einstein manifold with regular foliation, this regularity condition is often non-trivial to check and indeed often fails. For the remainder of this section, we shall focus on partially replacing this condition with a tensorial, geometric property, namely non-positive horizontal sectional curvatures. As motivation, we note that, if $\mathcal{F}^\text{\tiny $V$\normalsize }$ is regular, the horizontal sectional curvatures descend as Riemannian sectional curvatures on the base manifold.
\bgC{qcE}
If $M$ is a complete aqc-Einstein manifold such that every leaf of $\mathcal{F}^\text{\tiny $V$\normalsize }$ has trivial leaf holonomy, then $M$ is a fibre bundle $\pi\colon M \to M^\prime$ over a Riemannian manifold $M^\prime$ such that for $X,Y \in H$,
\[ K(X,Y) = K^\prime(\pi_* X,\pi_*Y) \]
\enC
This is actually independent of which of the metrics $g^\lambda$ we are considering.
\pf The previous theorem establishes that $M$ is a fibre bundle. It follows from standard results on Riemannian submersions that for sections $X,Y$ of $H$,
\[ \pi_* \overline{\nabla}^\lambda_X Y = \nabla^\prime_{\pi_* X} (\pi_* Y), \qquad \pi_* \overline{\nabla}^\lambda_{U_a} X = \frac{\lambda^2}{2} \pi_* J_a X = \pi_* \overline{\nabla}^\lambda_X U_a. \]
From this it is easy to compute that
\begin{align*}
\pi_* \overline{\nabla}^\lambda_X \overline{\nabla}^\lambda_Y Y &= \nabla^\prime_{\pi_* X} \nabla^\prime_{\pi_* Y} (\pi_* Y) \\
\pi_* \overline{\nabla}^\lambda_Y \overline{\nabla}^\lambda_X Y &= \nabla^\prime_{\pi_* Y} \nabla^\prime_{\pi_* X} (\pi_* Y) -\frac{\lambda^2}{4} \sum\limits_a \aip{J_a X}{Y}{} \pi_*J_a Y \\
\pi_* \overline{\nabla}^\lambda_{[X,Y]} Y &= \nabla^\prime_{[\pi_*X,\pi_* Y]} \pi_* Y - \frac{\lambda^2}{2} \sum\limits_a \aip{J_a X}{Y}{} \pi_* J_a Y
\end{align*}
and hence that
\[ K^\prime (\pi_*X ,\pi_* Y) = \Bt{K}^\lambda(X,Y) + \frac{3\lambda^2}{4} \sum\limits_a \aip{J_a X}{Y}{}^2 = K(X,Y).\]
\epf
In fact, not only do the sectional curvatures descend, but so does the quaternionic structure, and, as noted in \cite{VassilevEinstein}, the leaf space $M^\prime$ is actually locally hyper-K\"ahler. In the case of non-positive sectional curvatures, this can be improved as follows.
\bgC{aqcE}
If $M$ is a complete aqc-Einstein manifold such that $\mathcal{F}^\text{\tiny $V$\normalsize }$ is regular and the horizontal sectional curvatures are non-positive, then $M$ is diffeomorphic to a fibre-bundle over a manifold $M^\prime$ with universal cover $\rn{h}$.
\enC
\pf Since $M^\prime$ must have non-positive Riemannian sectional curvatures, it has universal cover $\rn{h}$ by the standard Cartan-Hadamard theorem.
\epf
In fact, we can improve this result by using Hebda's generalization of the Cartan-Hadamard theorem (\cite{Hebda} Theorem 2). This states that for a Riemannian foliation on $M$ with complete bundle-like metric, if the transverse sectional curvatures are non-positive then the universal cover of $M$ fibers over a complete simply-connected Riemannian manifold $M^\prime$ with non-positive sectional curvature and the fibres are the universal cover of the leaves (which we recall must all be the same). Thus we have established the following aqc-Cartan-Hadamard theorem.
\bgT{qcEm}
Suppose that $M$ is a complete aqc-Einstein manifold such that all the horizontal sectional curvatures for the Biquard connection are non-positive. Then the universal cover of $M$ is diffeomorphic to $\rn{h} \times \sn{3}$ if $M$ is of compact-type and diffeomorphic to $\rn{h+3}$ if $M$ is of non-compact type.
\enT
It should be remarked here that $[JT^{{\mathfrak{o}}}]$ is symmetric and trace-free, so its largest eigenvalue is non-negative. The non-positive sectional curvatures imply non-positive horizontal Ricci curvature and so require $\tau \leq 0$. It should also be mentioned that the remarkable fact that the torsion terms determine the Ricci tensor does not appear to extend to the sectional curvatures, and so the non-positive sectional curvature condition is unlikely to be redundant.
We conclude with the observation that while it would be useful to remove the restrictive aqc-Einstein condition here, the vertical spaces in the presence of torsion appear to be much wilder. Even in the case where there is a vertical foliation, the foliation would not be Riemannian and we would not expect any analogue of the fibration results. In particular, it would seem unlikely there would be a similarly simple classification of the universal covering spaces under the non-positive sectional curvature condition.
\end{document} |
\begin{document}
\title{Biphoton ququarts as either pure or mixed states, features and reconstruction from coincidence measurements}
\author{M.V. Fedorov}
\affiliation{A.M.Prokhorov General Physics Institute, Russian Academy of Science, Moscow, Russia\\ e-mail: fedorovmv@gmail.com}
\begin{abstract}
Features of biphoton polarization-frequency ququarts are considered. Their wave functions are defined as functions of both polarization and frequency variables of photons with the symmetry obligatory for two-boson states taken into account. In experiments, biphoton ququarts can display different features depending on whether the experiments involve purely polarization or, alternatively, polarization-frequency measurements. If one uses only polarization measurements, the originally pure states of ququarts can be seen as mixed biphoton polarization states. Features of such states are described and discussed in detail. Schemes of coincidence measurements for reconstruction of the ququart's parameters are suggested and described.
\end{abstract}
\pacs{03.67.Bg, 03.67.Mn, 42.65.Lm}
\maketitle
\section{Introduction}
Biphoton polarization-frequency ququarts can be produced in processes of collinear Spontaneous Parametric Down-Conversion (SPDC) non-degenerate with respect to the photon frequencies \cite{Bogd-06}. In such states photons have two degrees of freedom: polarization and frequency. In terms of photon polarization and frequency variables, $\sigma$ and $\omega$, each of them can take independently one of two values: $\sigma=H\,{\rm or}\,V$ (horizontal or vertical polarization) and $\omega=\omega_h\,{\rm or}\,\omega_l$ (high or low frequencies) \cite{NJP},\cite{PRA}. For experimental investigation of such states one has to use detectors provided with both polarizers and frequency filters. However, sometimes it is more convenient and, maybe, even more interesting to use only polarizers and wide-band detectors, non-selective in frequencies. In the theoretical description, such a situation corresponds to averaging the ququart's states over photon frequencies, or to taking traces of the biphoton density matrix with respect to the frequency variables of the photons. In a general case, this gives rise to two-qubit biphoton mixed polarization states (MPS) \cite{PRA}. In this paper, features of such mixed states are briefly summarized and schemes for measuring their parameters are described. The method to be described is based on a series of coincidence measurements. In comparison with the general scheme of measuring the ququart's parameters suggested earlier (Appendix B of Ref. \cite{NJP}), the case of MPS has its own rather interesting peculiarities.
\section{Biphoton ququarts and mixed polarization states}
In a general form, the state vector of an arbitrary polarization-frequency biphoton ququart is given by a superposition of four basis state vectors
\begin{gather}
\nonumber
|\Psi^{(4)}\rangle=C_1\,a_{H,h}^\dag a_{H,l}^\dag|0\rangle+
C_2\,a_{V,h}^\dag a_{V,l}^\dag|0\rangle\\
\label{qqrt-st-vect}
+C_3\,a_{H,h}^\dag a_{V,l}^\dag|0\rangle+C_4\,a_{V,h}^\dag a_{H,l}^\dag|0\rangle,
\end{gather}
where $C_i$ are arbitrary complex constants obeying the normalization condition $\sum_i|C_i|^2=1$; $a_{H,h}^\dag,a_{H,l}^\dag,a_{V,h}^\dag,\,{\rm and}\,a_{V,l}^\dag$ are one-photon creation operators for four one-photon modes $\{H,\omega_h\}$, $\{H,\omega_l\}$, $\{V,\omega_h\}$, and $\{V,\omega_l\}$. Superpositions of one-photon states $a_{H,h}^\dag|0\rangle,a_{H,l}^\dag|0\rangle,a_{V,h}^\dag|0\rangle,\,{\rm and}\,a_{V,l}^\dag|0\rangle$ form one-photon qudits with the dimensionality of the one-photon Hilbert space $d=4$. Double population of different one-photon modes corresponds to the basis state vectors in Eq. (\ref{qqrt-st-vect}), like $a_{H,h}^\dag a_{H,l}^\dag|0\rangle$, etc. These basis state vectors, as well as their superpositions, can be considered as describing two-qudit states belonging to the two-photon Hilbert space of the dimensionality $D=d^2=16$.
It is very fruitful to use not only state vectors of biphoton ququarts but also their wave functions. In a general case of arbitrary bipartite states with arbitrary variables of two particles $x_1$ and $x_2$ the bipartite wave function $\Psi(x_1,x_2)$ is defined via the bipartite state vector $|\Psi\rangle$ as $\Psi(x_1,x_2)=\langle x_1,x_2|\Psi\rangle$. One reason why it is important to use wave functions is that in terms of wave functions one can use the simplest definition of entanglement. According to this definition a bipartite state is entangled if its wave function cannot be presented in the form of a product of two single-particle wave functions
\begin{equation}
\label{ent-definition}
\Psi(x_1,x_2)\neq\varphi(x_1)\,\chi(x_2).
\end{equation}
Otherwise, if one can find functions $\varphi(x_1)$ and $\chi(x_2)$ such that $\Psi(x_1,x_2)=\varphi(x_1)\,\chi(x_2)$, the state is disentangled.
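As a simple illustration of the criterion (\ref{ent-definition}) (an elementary check): the polarization Bell states $\Psi_\pm$ written out below in Eq. (\ref{Bell-pol}) satisfy $\Psi_\pm(H,H)=\Psi_\pm(V,V)=0$ while $\Psi_\pm(H,V)\neq 0$ and $\Psi_\pm(V,H)\neq 0$. A product form $\varphi(\sigma_1)\chi(\sigma_2)$ would then require
\[ \varphi(H)\chi(V)\neq 0, \quad \varphi(V)\chi(H)\neq 0 \;\;\Rightarrow\;\; \varphi(H)\chi(H)\neq 0,\]
contradicting $\Psi_\pm(H,H)=0$; hence these states are entangled in the sense of Eq. (\ref{ent-definition}).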
In the case of biphoton states with two degrees of freedom for each photon, for defining two-photon wave functions one has to introduce two pairs of variables, $x_1=\{\sigma_1,\omega_1\}$ and $x_2=\{\sigma_2,\omega_2\}$. As photons are indistinguishable, we cannot attribute variable numbers 1 and 2 to either of the two photons, though we know for sure that the number of variables equals the number of photons (2) times the number of degrees of freedom (2), which gives 4, i.e., two pairs. The biphoton wave function corresponding to the state vector of Eq. (\ref{qqrt-st-vect}) can be found with the help of the general rules of quantum electrodynamics (see, e.g., \cite{Schweber}). The result can be written in different forms. The form most convenient for the further consideration is that related to the use of polarization and frequency Bell states $\Psi_\pm$:
\begin{gather}
\nonumber
\Psi^{(4)}(\sigma_1,\omega_1;\,\sigma_2,\omega_2)=\Psi^{(3)}(\sigma_1,\sigma_2)\Psi_+(\omega_1,\omega_2)\\
\label{qqrt-wf}
+B_-\Psi_-(\sigma_1,\sigma_2)\Psi_-(\omega_1,\omega_2),
\end{gather}
where $\Psi^{(3)}(\sigma_1,\sigma_2)$ is the wave function of a purely polarization qutrit
\begin{gather}
\nonumber
\Psi^{(3)}(\sigma_1,\sigma_2)=C_1\,\Psi_{HH}(\sigma_1,\sigma_2)+B_+\,\Psi_+(\sigma_1,\sigma_2)\\
\label{qtr-wf}
+C_4\,\Psi_{VV}(\sigma_1,\sigma_2)
\end{gather}
with
\begin{gather}
\label{HH}
\Psi_{HH}(\sigma_1,\sigma_2)=\delta_{\sigma_1,H}\,\delta_{\sigma_2,H}\equiv \left({1\atop 0}\right)_1^{pol}
\otimes\left({1\atop 0}\right)_2^{pol},\\
\label{VV}
\Psi_{VV}(\sigma_1,\sigma_2)=\delta_{\sigma_1,V}\,\delta_{\sigma_2,V}\equiv \left({0\atop 1}\right)_1^{pol}
\otimes\left({0\atop 1}\right)_2^{pol},
\end{gather}
and the Bell-state wave functions depending on polarization or frequency variables are given by
\begin{gather}
\nonumber
\Psi_\pm(\sigma_1,\sigma_2)=\displaystyle\frac{\delta_{\sigma_1,H}\delta_{\sigma_2,V}
\pm\delta_{\sigma_1,V}\delta_{\sigma_2,H}}{\sqrt{2}}\\
\equiv\frac{1}{\sqrt{2}}\left\{\left({1\atop 0}\right)_1^{pol}\otimes\left({0\atop 1}\right)_2^{pol}\pm\left({0\atop 1}\right)_1^{pol}\otimes\left({1\atop 0}\right)_2^{pol}\right\},
\label{Bell-pol}
\end{gather}
\begin{gather}
\nonumber
\Psi_\pm(\omega_1,\omega_2)=\displaystyle\frac{\delta_{\omega_1,\omega_h}\delta_{\omega_2,\omega_l}
\pm\delta_{\omega_1,\omega_l}\delta_{\omega_2,\omega_h}}{\sqrt{2}} \\
\equiv\frac{1}{\sqrt{2}}\left\{\left({1\atop 0}\right)_1^{fr}\otimes\left({0\atop 1}\right)_2^{fr}\pm\left({0\atop 1}\right)_1^{fr}\otimes\left({1\atop 0}\right)_2^{fr}\right\};
\label{Bell-freq}
\end{gather}
superscripts ``$pol$'' and ``$fr$'' in Eqs. (\ref{HH})-(\ref{Bell-freq}) indicate polarization and frequency degrees of freedom. Besides, as is clear from comparison of the functional and matrix forms of the biphoton wave functions in Eqs. (\ref{HH})-(\ref{Bell-freq}), the upper lines in two-line columns correspond to the horizontal polarization and higher frequency $\omega_h$, and the lower lines correspond to the vertical polarization and lower frequency $\omega_l$. The constants $B_\pm$ in Eqs. (\ref{qqrt-wf}), (\ref{qtr-wf}) are expressed via $C_{2,3}$ of Eq. (\ref{qqrt-st-vect}) as
\begin{equation}
\label{B via C}
B_{\pm}=\frac{C_2\pm C_3}{\sqrt{2}}
\end{equation}
with the normalization condition taking the form $|C_1|^2+|B_+|^2+|B_-|^2+|C_4|^2=1$.
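Inverting Eq. (\ref{B via C}) gives (an elementary rearrangement)
\[ C_2=\frac{B_++B_-}{\sqrt{2}}, \qquad C_3=\frac{B_+-B_-}{\sqrt{2}}, \qquad |C_2|^2+|C_3|^2=|B_+|^2+|B_-|^2,\]
so the change of variables $(C_2,C_3)\rightarrow(B_+,B_-)$ is unitary and the two forms of the normalization condition are equivalent.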
Note that the ququart's wave function (\ref{qqrt-wf}) contains both symmetric and antisymmetric wave functions of the polarization and frequency Bell states. But the antisymmetric Bell-state wave functions $\Psi^{pol}_-$ and $\Psi^{fr}_-$ appear only in the form of their product, which makes the total wave function $\Psi^{(4)}$ symmetric with respect to the transposition of photon variables $1\rightleftharpoons 2$. This is the obligatory feature of two-boson pure states, which often is not taken seriously but which manifests itself, e.g., in the existence of MPS discussed below and in Ref. \cite{PRA}. Besides, owing to the symmetry, both terms in the ququart's wave function (\ref{qqrt-wf}) obey the entanglement criterion (\ref{ent-definition}) and, hence, all biphoton ququarts are entangled. In some cases this is a purely frequency entanglement (e.g., if $C_4=B_+=B_-=0$ and $C_1=1$), but in a general case entanglement of biphoton ququarts is an inseparable mixture of the polarization and frequency entanglement.
The density matrix of the state (\ref{qqrt-st-vect}), (\ref{qqrt-wf}) is given by
\begin{equation}
\label{qqrt-dens-matr}
\rho^{(4)}=\Psi^{(4)}\otimes\Psi^{(4)\,\dag}.
\end{equation}
This density matrix characterizes pure states. But being averaged over frequency variables, $\rho^{(4)}$ turns into the density matrix of a mixed two-qubit polarization state \cite{PRA}. Written down in the basis $\big\{\Psi_{HH},\,\Psi_+^{pol},\,\Psi_{VV},\,\Psi_-^{pol}\big\}$ and with the 12 zero rows and columns dropped, the averaged density matrix takes a rather simple form
\begin{gather}
\label{rho-mixed}
\overline{\rho}={\rm Tr}_{\omega_1,\omega_2}\rho^{(4)}=\left(\begin{matrix}\rho^{(3)}& 0\\ 0&|B_-|^2\end{matrix}\right)\\
\label{pol-4x4}
=\left(
\setlength{\extrarowheight}{0.1cm}
\begin{matrix}
|C_1|^2 & C_1B_+^* & C_1C_4^* & 0\\
B_+C_1^* & |B_+|^2 & B_+C_4^* & 0\\
C_4C_1^* & C_4B_+^* & |C_4|^2 & 0\\
0 & 0 & 0 & |B_-|^2
\end{matrix}
\right),
\end{gather}
where
\begin{equation}
\label{rho-qtrt}
\rho^{(3)}=\Psi^{(3)}\otimes\Psi^{(3)\,\dag}
\end{equation}
is the qutrit's coherence matrix \cite{Klyshko}. In a general case the density matrix $\overline{\rho}$ characterizes a mixed polarization state. The only two exceptions occur in the cases $B_-=0$ and $|B_-|=1$. In the first case the ququart is reduced to a qutrit, and in the second case the qutrit's contribution to the ququart's wave function (\ref{qqrt-wf}) equals zero (as $C_1=C_4=B_+=0$). In both cases $B_-=0$ and $|B_-|=1$ the ququart's wave function $\Psi^{(4)}$ factorizes into parts depending on the polarization and frequency variables separately. This is the reason why in these cases averaging of a pure polarization-frequency state over frequency variables leaves the remaining polarization state pure. In all other cases ($|B_-|\neq 1,\,0$) there is no factorization into frequency and polarization parts in $\Psi^{(4)}$ and, hence, the state arising after averaging over frequency variables is mixed.
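The dividing line between the pure and mixed cases can also be seen from the purity of $\overline{\rho}$. A short calculation based on Eq. (\ref{rho-mixed}) and on the normalization $|C_1|^2+|B_+|^2+|C_4|^2=1-|B_-|^2$ gives
\[ {\rm Tr}\,\overline{\rho}^{\,2}=\left(1-|B_-|^2\right)^2+|B_-|^4,\]
which equals unity only at $|B_-|=0$ or $|B_-|=1$, in agreement with the discussion above.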
There are other forms of presenting the averaged polarization density matrix $\overline{\rho}$, alternative to that of Eq. (\ref{pol-4x4}). One of them, used below, consists in presenting $\overline{\rho}$ as a sum of products of $2\times 2$ single-photon polarization matrices:
\begin{gather}
\begin{matrix}
\overline{\rho}=\left[|C_1|^2\left(\begin{matrix}1&0\\0&0\end{matrix}\right)_1+
\frac{|B_+|^2+|B_-|^2}{2}\left(\begin{matrix}0&0\\0&1\end{matrix}\right)_1\right]\otimes
\left(\begin{matrix}1&0\\0&0\end{matrix}
\right)_2+\\
\,\\
\left[\frac{|B_+|^2+|B_-|^2}{2}\left(\begin{matrix}1&0\\0&0\end{matrix}\right)_1
+|C_4|^2\left(\begin{matrix}0&0\\0&1\end{matrix}\right)_1\right]\otimes
\left(\begin{matrix}0&0\\0&1\end{matrix}\right)_2+ ...
\end{matrix}
\label{pol-2x2x2}
\end{gather}
As is clear from the definition of $\overline{\rho}$, all matrices in this equation and further below refer to the polarization degree of freedom, with averaging over the frequency variables already performed. For this reason, to shorten notations, here and below we drop the superscript $pol$ common to all arising matrices. In Eq. (\ref{pol-2x2x2}) only four products of $2\times 2$ matrices are shown explicitly. In these four products all matrices are diagonal, whereas in all other 12 products at least one of the matrices $\left(*\;*\atop{*\;*}\right)_1$ or $\left(*\;*\atop{*\;*}\right)_2$ is off-diagonal. Such terms do not contribute to the conditional probabilities analyzed below in section V.
The density matrix of MPS can be further reduced over polarization variables of one of two photons to give rise to the mixed-state reduced density matrix of the form \cite{PRA}
\begin{equation}
\overline{\rho}_r=\left(
\begin{matrix}
|C_1|^2+\frac{|B_+|^2+|B_-|^2}{2} & \frac{C_1B_+^*+B_+C_4^*}{\sqrt{2}}\\
\frac{C_1^*B_++B_+^*C_4}{\sqrt{2}} & |C_4|^2+\frac{|B_+|^2+|B_-|^2}{2}
\end{matrix} \right).
\label{rho-red-pol}
\end{equation}
\section{Correlations in mixed biphoton polarization states}
Two correlation parameters found in the general form from the density matrices (\ref{pol-4x4}), (\ref{pol-2x2x2}), (\ref{rho-red-pol}) are the Schmidt parameter $\overline{K}$ and concurrence $\overline{C}$ \cite{PRA}:
\begin{gather}
\label{K-pol}
\overline{K}=\frac{2}{1+(1-|B_-|^{\,2})^2-|2C_1C_4-B_+^2|^2}
\end{gather}
and
\begin{equation}
\label{C-pol}
\overline{C}=\left||2C_1C_4-B_+^2|-|B_-|^2\right|.
\end{equation}
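As an illustrative special case (obtained by direct substitution into Eqs. (\ref{K-pol}) and (\ref{C-pol})): for $C_1=C_4=0$ the normalization gives $|B_+|^2=1-|B_-|^2$, and
\[ \overline{K}=2, \qquad \overline{C}=\big|1-2|B_-|^2\big|,\]
so within this family the Schmidt parameter is constant, whereas the concurrence vanishes at $|B_-|=1/\sqrt{2}$.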
The concurrence $\overline{C}$ (\ref{C-pol}) characterizes the degree of entanglement or the amount of quantum correlations in MPS. Another quantifier of quantum correlations in such states is the so-called relative entropy \cite{Vedral}, defined as the ``distance'' between the density matrix $\overline{\rho}$ and the density matrix $\sigma$ of the closest disentangled state
\begin{equation}
\label{S-rel}
S_{rel}=Tr[\overline{\rho}(\log_2 \overline{\rho}-\log_2\sigma)].
\end{equation}
For MPS with $C_1=C_4=0$ the relative entropy was found in \cite{PRA} and shown to be less than the concurrence at any values of the remaining nonzero parameters $|B_-|$ and $|B_+|=\sqrt{1-|B_-|^{\,2}}$. The only exceptions occur at $|B_-|=0,\,1\,{\rm and}\,1/\sqrt{2}$, where the concurrence and relative entropy are equal. Thus, it was found that $S_{rel}\leq {\overline C}$, which can be interpreted as an indication that the relative entropy is a better entanglement quantifier than the concurrence and that the latter can slightly exaggerate the degree of entanglement in the case of mixed polarization states. For such states, in accordance with the ideas of Refs. \cite{Vedral,Hamieh,Oh-Kim}, one can define the quantifier of classical correlations as the difference between the von Neumann mutual information $I=2S({\overline\rho}_r)-S({\overline\rho})$ and the relative entropy $S_{rel}$
\begin{equation}
\label{C-cl}
C_{cl}=I-S_{rel}.
\end{equation}
As for the Schmidt parameter of mixed states ${\overline K}$ (\ref{K-pol}), in contrast with pure biphoton polarization states (qutrits), ${\overline K}$ is no longer related to the concurrence ${\overline C}$ (\ref{C-pol}): ${\overline C}\neq \sqrt{2\left(1-{\overline K}^{\,-1}\right)}$ and ${\overline K}\neq 1/(1-{\overline C}^{\,2}/2)$, as would be the case for pure states of biphoton qutrits. On the other hand, the Schmidt parameter of MPS remains related to their degree of polarization ${\overline P}$ by the same relation as in the case of pure states of biphoton qutrits
\begin{gather}
\label{P-K}
{\overline P}^{\,2}+2\left(1-{\overline K}^{\,-1}\right)=1,
\end{gather}
where ${\overline P}=|{\overline{\vec S}}|$, ${\overline{\vec S}}= Tr\left({\vec\sigma}\,{\overline\rho}_r\right)$ is the vector of Stokes parameters, and
${\vec\sigma}$ is the vector of Pauli matrices. Evidently,
$Tr\left({\vec\sigma}{\overline\rho}_r\,\right)\equiv Tr\left({\vec\sigma}\,\rho^{(4)}\right)$ and, hence, ${\overline P}=P^{(4)}$, i.e., the degree of polarization of the mixed state coincides with the degree of polarization of the original two-qudit ququart, and they both are determined by the Schmidt parameter of the mixed state ${\overline K}$ via Eq. (\ref{P-K}).
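For orientation, two simple checks of Eq. (\ref{P-K}), obtained by substituting into Eq. (\ref{K-pol}): the state with $C_1=1$ and $B_+=B_-=C_4=0$ gives ${\overline K}=1$ and hence ${\overline P}=1$ (a fully polarized $HH$ photon pair), whereas any state with $C_1=C_4=0$ gives ${\overline K}=2$ and ${\overline P}=0$.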
As the degree of polarization is a classical concept, we can deduce from Eq. (\ref{P-K}) that in the case of mixed states the Schmidt parameter $\overline{K}$ is related to the amount of classical rather than quantum correlations. In terms of ${\overline K}$, a new parameter characterizing the amount of classical correlations can be defined as
\begin{equation}
\label{C-cl-via-K}
{\overline C_{cl}}=\sqrt{2\left(1-{\overline K}^{\,-1}\right)}.
\end{equation}
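Combining Eqs. (\ref{P-K}) and (\ref{C-cl-via-K}) gives the equivalent form
\[ {\overline C_{cl}}=\sqrt{1-{\overline P}^{\,2}},\]
so this quantifier of classical correlations is large precisely when the biphoton light is weakly polarized.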
It may be interesting to notice that for the state with $C_1=C_4=0$ this parameter coincides with that of (\ref{C-cl})
\begin{equation}
\label{C-cl=C-cl-K}
{\overline C_{cl}}=C_{cl}.
\end{equation}
Note finally that in the other special cases, $B_-=0$ or $|B_-|=1$, when the states averaged over frequencies remain pure, all the discussed parameters of quantum and classical correlations coincide with each other and are equal to one half of the von Neumann mutual information
\begin{equation}
\label{all-coinciduing}
{\overline C}=S_{rel}=C_{cl}={\overline C}_{cl}=I/2.
\end{equation}
In these cases the relation ${\overline C}=\sqrt{2\left(1-{\overline K}^{\,-1}\right)}$ becomes valid again, and this is the reason why in pure bipartite states the Schmidt parameter $K$ can be used for characterization of the amounts of both quantum and classical correlations.
\section{Comparison with a two-qubit pure-state model of biphoton ququarts}
The picture of mixed two-qubit polarization states described above differs significantly from the traditional and widely used model of two-qubit pure-state ququarts. This model starts from the same state vector as given by Eq. (\ref{qqrt-st-vect}). But then the frequencies of SPDC photons, $\omega_1$ and $\omega_2$, are considered as given numbers rather than variables, e.g., as $\omega_1\equiv\omega_h$ and $\omega_2\equiv\omega_l$. Owing to this, the two photons of SPDC pairs are considered ``partially distinguishable$"$, and the polarization biphoton wave function is allowed to be non-symmetric with respect to the transposition of the particle variables $1\rightleftharpoons 2$; it can then be written in the form
\begin{gather}
\nonumber
\Psi^{(4)}_{2\,qb}(\sigma_1, \sigma_2)= C_1\Psi_{HH}(\sigma_1, \sigma_2) +B_+\Psi_+(\sigma_1, \sigma_2)\\
\label{Psi 2 qb}
+C_4\Psi_{VV}(\sigma_1, \sigma_2)+B_-\Psi_-(\sigma_1, \sigma_2).
\end{gather}
This is a wave function of a pure two-qubit state, and it yields the well known results for the Schmidt parameter and concurrence:
\begin{gather}
\label{K-2-qb}
K_{2\,qb}^{(4)}=\frac{2}{2-\left|2C_1C_4-B_+^2+B_-^2\right|^2},\\
\label{conc-2-qb}
C_{2\,qb}^{(4)}=\sqrt{2\left(1-K_{2\,qb}^{-1}\right)}=\left|2C_1C_4-B_+^2+B_-^2\right|.
\end{gather}
In a general case, these expressions differ from $\overline{K}$ and $\overline{C}$ of Eqs. (\ref{K-pol}) and (\ref{C-pol}). We believe that the correct results are those based on the picture of two-qudit polarization-frequency biphoton ququarts and of MPS arising after averaging over frequencies, i.e., the results determined by Eqs. (\ref{K-pol})-(\ref{P-K}). The weak points of the two-qubit theory of biphoton ququarts are evident. Photons of SPDC pairs are always indistinguishable. If there is something that looks like ``partial distinguishability$"$, this is an indication that there is, in fact, an additional degree of freedom, and with this degree of freedom taken into account accurately, the photons are evidently indistinguishable. The wave function of two photons in a pure state cannot be asymmetric with respect to the transposition of their variables: its symmetry is dictated by the Bose-Einstein statistics of photons. This feature is clearly violated in the two-qubit wave function of Eq. (\ref{Psi 2 qb}), where the symmetric and antisymmetric Bell-state wave functions are summed on equal footing. Note, however, that a simple symmetrization of the expression in Eq. (\ref{Psi 2 qb}) eliminates the term proportional to $\Psi_-$ and reduces the ququart's wave function $\Psi_{2qb}^{(4)}$ to that of a qutrit, $\Psi^{(3)}$ (\ref{qtr-wf}). To get the correct ququart's wave function (\ref{qqrt-wf}), in addition to symmetrization, one has to give freedom to the photon frequencies $\omega_{1,2}$ by considering them as variables which can take one of two values each: either $\omega_1=\omega_h$ and $\omega_2=\omega_l$ or $\omega_1=\omega_l$ and $\omega_2=\omega_h$. Actually, this means that we never know which photon has which frequency. Averaging of states of biphoton ququarts over frequencies $\omega_{1,2}$ gives rise to the MPS considered here and in Ref. \cite{PRA}. There is no way to get such states in a two-qubit model. In principle, differences between the predictions of the theory of mixed states and of the two-qubit model of ququarts can be seen in experiments on measurement of the degree of polarization of biphoton polarization-frequency ququarts. Some simple examples of experimental schemes where these differences are well pronounced are described in Ref. \cite{PRA}.
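Within the two-qubit model itself, Eqs. (\ref{K-2-qb}) and (\ref{conc-2-qb}) are easy to verify numerically from the purity of the single-qubit reduced matrix of the pure state (\ref{Psi 2 qb}); the minimal sketch below does this for arbitrary placeholder amplitudes.
\begin{verbatim}
import numpy as np

# Placeholder amplitudes (normalized): C1, B+, C4, B-.
C1, Bp, C4, Bm = 0.5, 0.6, 0.4 * np.exp(0.7j), np.sqrt(0.23) * np.exp(1.3j)

# Pure two-qubit state of Eq. (Psi 2 qb) in the basis (HH, HV, VH, VV).
psi = np.array([C1, (Bp + Bm) / np.sqrt(2), (Bp - Bm) / np.sqrt(2), C4])

M = psi.reshape(2, 2)
rho_A = np.einsum('ij,kj->ik', M, M.conj())              # reduced one-qubit matrix
K = 1 / np.real(np.trace(rho_A @ rho_A))                 # Schmidt parameter
C = np.sqrt(max(0.0, 2 * (1 - 1 / K)))                   # concurrence of a pure state

print(K, 2 / (2 - abs(2 * C1 * C4 - Bp**2 + Bm**2)**2))  # Eq. (K-2-qb)
print(C, abs(2 * C1 * C4 - Bp**2 + Bm**2))               # Eq. (conc-2-qb)
\end{verbatim}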
\section{Reconstruction of ququart's parameters in experiments}
The next question is how to measure experimentally the parameters of MPS and of pure states of polarization-frequency ququarts. It was shown earlier \cite{NJP} that, in principle, a series of coincidence polarization-frequency measurements in three different bases provides a sufficient amount of data to get a complete set of equations for finding all ququart's parameters. But the equations obtained in that way were rather complicated and not convenient for practical purposes. Here we will consider first the simpler problem of finding the parameters of the above-discussed MPS related to biphoton ququarts. Then, at the last stage, we will show how this procedure can be extended in a very simple way to reconstruct explicitly all ququart's parameters. Note also that the methods based on series of coincidence measurements for reconstructing parameters of quantum states are an alternative to the standard and rather widely used methods of quantum tomography for biphoton ququarts \cite{Bogd-03,Bogd-04,Bogd-06}.
\subsection{Independent constants characterizing biphoton qutrits, ququarts and mixed polarization states}
Pure states of qutrits (\ref{qtr-wf}) and ququarts (\ref{qqrt-wf}) are characterized, correspondingly, by three and four complex parameters, $\{C_1,\,B_+,\,C_4 \}$ and $\{C_1,\,B_+,\,C_4,\,B_- \}$, which corresponds to 6 and 8 real constants. But these parameters are not completely independent: there are normalization conditions and, besides, in both cases the common phases of the wave functions do not affect measurable quantities and, hence, can be assigned arbitrary, most conveniently chosen values. These conditions reduce the number of independent real constants characterizing qutrits and ququarts, correspondingly, to 4 and 6. The MPS considered above occupy an intermediate position between qutrits and ququarts. The parameters characterizing these states are the same as in the case of pure-state ququarts, $\{C_1,\,B_+,\,C_4,\,B_- \}$. But, in addition to the normalization and common-phase conditions, we now have one more condition: as seen from the structure of the density matrix $\overline{\rho}$ written in the form (\ref{pol-4x4}), the features of mixed states do not depend on the phase of the parameter $B_-$. This is seen well from the derived expressions for the Schmidt parameter and concurrence, (\ref{K-pol}) and (\ref{C-pol}), which depend on $B_-$ only through $|B_-|$. Also, as seen from the definition of $\overline{\rho}$ in the form of Eqs. (\ref{rho-mixed}), (\ref{rho-qtrt}), the density matrix of MPS does not depend on the phase of the qutrit's wave function $\Psi^{(3)}$, which enters the definition of $\overline{\rho}$ as determining one of its components. Owing to this, the phase of $\Psi^{(3)}$ can be chosen, e.g., in a way making the parameter $B_+$ real and positive. Thus, we find that in this case MPS are characterized completely by 5 independent real constants: $|C_1|,\,\varphi_1,\,B_+,\,|C_4|,\,\varphi_4$, where $\varphi_{1,4}$ are the phases of the parameters $C_{1,4}$; the constant $|B_-|$ is then found from the normalization condition.
\subsection{Conditional probabilities and coincidence measurements}
By definition, the conditional probability $\left.w_\sigma\right|_{\sigma^\prime}$ is the probability for photon 1 to have polarization $\sigma$ under the condition that the second photon (2) of the same pair has polarization $\sigma^\prime$. Relations between the conditional probabilities and the parameters of MPS follow directly from the representation of the density matrix $\overline{\rho}$ in the form of a sum of products of $2\times 2$ single-photon matrices (\ref{pol-2x2x2}):
\begin{gather}
\label{cond-HH-VV}
\left.w_H\right|_H=|C_1|^2,\,\left.w_V\right|_V=|C_4|^2,\\
\label{cond-HV-VH}
\left.w_H\right|_V=\left.w_V\right|_H=\frac{|B_+|^2+|B_-|^2}{2}.
\end{gather}
Owing to normalization, $\sum_{\sigma,\sigma^\prime}\left.w_\sigma\right|_{\sigma^\prime}=1$.
In experiment, conditional probabilities can be found from coincidence measurements.
For this purpose the biphoton beam has to be divided into two channels, 1 and 2, by a non-selective beam splitter (BS).
Half of the SPDC pairs will be divided between the channels, whereas the other half, the undivided pairs, will appear
either in channel 1 or in channel 2 (see the scheme in Fig. \ref{Fig1}).
\begin{figure}
\caption{{\protect\footnotesize{(a): a scheme of the experiment for coincidence measurements, $BS$ - beam splitter, $P$ - polarizers, $D$ - detectors, $M$ - mirror; (b): the horizontal-vertical coordinate frame and the frame turned by an angle $\alpha$.}}
\label{Fig1}}
\end{figure}
In coincidence measurements only divided pairs are registered, and for such pairs the numbers of channels 1 and 2 can be associated with the photon or variable numbers 1 and 2 in Eqs. (\ref{qqrt-wf})-(\ref{Bell-freq}) and (\ref{pol-2x2x2}). The measurements consist in counting photons with a given polarization. Selection of polarization is provided by polarizers installed in each channel in front of the detectors. The orientation of each polarizer can be changed independently from horizontal to vertical and vice versa. Each series of measurements with given polarizer orientations has to be performed under identical conditions and has to take the same time. As the undivided pairs do not participate in such measurements, the number of photons with any given polarization accessible for registration is half of that in the original beam. Besides, the detectors have an efficiency below $100\%$, which further diminishes the number of registered photons. But these losses do not affect the relative numbers of registered photons. A computer receiving signals from both detectors registers only coinciding signals and thus provides measurements of the relative numbers of counts, which coincide with the conditional probabilities (\ref{cond-HH-VV}), (\ref{cond-HV-VH}). Let $\left.N_\sigma\right|_{\sigma^\prime}$ be the counted number of pairs with photon polarization $\sigma$ in channel 1 and $\sigma^\prime$ in channel 2. Then we have
\begin{equation}
\label{coinc-cond}
\frac{\left.N_\sigma\right|_{\sigma^\prime}}{\sum_{\sigma,\sigma^\prime}\left.N_\sigma\right|_{\sigma^\prime}}
=\left.w_\sigma\right|_{\sigma^\prime}.
\end{equation}
Now Eqs. (\ref{cond-HH-VV}) and (\ref{coinc-cond}) can be used for finding two real constants characterizing MPS, $|C_1|^2$ and $|C_4|^2$, directly from experimental measurements with the polarizers in both channels 1 and 2 oriented either horizontally or vertically. Measurements with the other orientations, $(HV)$ and $(VH)$, are necessary too, but only for determining the normalizing factor in the denominator of Eq. (\ref{coinc-cond}). The sum $|B_+|^2+|B_-|^2=1-|C_1|^2-|C_4|^2$ is determined by normalization, but the constants $B_+$ and $|B_-|$ themselves remain undetermined, as well as the phases $\varphi_1$ and $\varphi_4$. For finding them, other measurements are needed.
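Before turning to those, here is a minimal numerical sketch of the first step just described; the parameter values are arbitrary placeholders, and the multinomial draw is only an idealization of the coincidence counting.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'true' MPS parameters (placeholders, not taken from the paper).
C1, C4, Bp = 0.5, 0.4, 0.6
Bm = np.sqrt(1 - C1**2 - C4**2 - Bp**2)

# Conditional probabilities of Eqs. (cond-HH-VV) and (cond-HV-VH).
w = {('H', 'H'): C1**2,
     ('V', 'V'): C4**2,
     ('H', 'V'): (Bp**2 + Bm**2) / 2,
     ('V', 'H'): (Bp**2 + Bm**2) / 2}

# Simulate N registered coincidences distributed according to w (Eq. coinc-cond).
N = 10**5
keys = list(w)
counts = dict(zip(keys, rng.multinomial(N, [w[k] for k in keys])))

# Reconstruction of |C1| and |C4| from the relative numbers of counts.
C1_est = np.sqrt(counts[('H', 'H')] / N)
C4_est = np.sqrt(counts[('V', 'V')] / N)
print(C1_est, C4_est)   # close to |C1| = 0.5 and |C4| = 0.4
\end{verbatim}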
\subsection{Conditional probabilities in rotated frames}
Additional information about the parameters of MPS can be obtained from coincidence measurements with the polarizers in both channels 1 and 2 turned by the same angle $\alpha$ with respect to the horizontal or vertical axes.
The results of such measurements (photon counting) are related to the corresponding conditional probabilities by the same relation as in the case $\alpha=0$ (\ref{coinc-cond}):
\begin{equation}
\label{coinc-cond-alpha}
\displaystyle\frac{\left.N_\alpha\right|_{\alpha^\prime}}{N_{tot}}=\left.w_\alpha\right|_{\alpha^\prime},\;
\displaystyle\frac{\left.N_{90^\circ+\alpha}\right|_{\alpha^\prime}}{N_{tot}}
=\left.w_{90^\circ+\alpha}\right|_{\alpha^\prime},
\end{equation}
where $\alpha^\prime=\alpha$ or $\alpha+90^\circ$ and $N_{tot}= \sum_{\alpha^\prime}[\left.N_\alpha\right|_{\alpha^\prime}
+\left.N_{90^\circ+\alpha}\right|_{\alpha^\prime}]$.
For finding the conditional probabilities $\left.w_\alpha\right|_{\alpha^\prime}$ and $\left.w_{90^\circ+\alpha}\right|_{\alpha^\prime}$, we have to rewrite the same ququart's wave function as given by Eq. (\ref{qqrt-wf}) in the frame turned by an angle $\alpha$ around the photon propagation axis $Oz$ (in the ($x_\alpha,y_\alpha$)-plane in Fig. 1$b$). The transformation to this frame is provided by the basic transformation formulas for one-photon states
\begin{gather}
\label{trasnsf}
\setlength{\extrarowheight}{0.3cm}
\begin{matrix}
\displaystyle\left({1\atop 0}\right)=\cos\alpha\left({1\atop 0}\right)^\alpha-\sin\alpha\left({0\atop 1}\right)^\alpha,\\
\displaystyle\left({0\atop 1}\right)=\sin\alpha\left({1\atop 0}\right)^\alpha+\cos\alpha\left({0\atop 1}\right)^\alpha,
\end{matrix}
\end{gather}
where $\left({1\atop 0}\right)^\alpha$ and $\left({0\atop 1}\right)^\alpha$ correspond to polarizations along the directions $\alpha$ and $90^\circ+\alpha$. Evidently, the transformations (\ref{trasnsf}) do not affect the frequency part of the ququart's wave function (\ref{qqrt-wf}). Besides, as can be easily checked, the polarization antisymmetric Bell-state wave function is invariant with respect to the transformations (\ref{trasnsf}), i.e., $\Psi_-^\alpha$ expressed via $\left({1\atop 0}\right)^\alpha$ and $\left({0\atop 1}\right)^\alpha$ has the same form as $\Psi_-$ expressed via $\left({1\atop 0}\right)$ and $\left({0\atop 1}\right)$ [Eq. (\ref{Bell-pol})]. This means that after the transformation (\ref{trasnsf}) the part of the ququart's wave function containing the product of antisymmetric Bell states does not mix with the symmetric part, and the form of the ququart's wave function is invariant with respect to the transformation:
\begin{equation}
\Psi^{(4)}=\Psi^{(3)\,\alpha}\Psi_+^{fr}+B_-\Psi_-^{pol\, \alpha}\Psi_-^{fr},
\label{qqrt-wf-alpha}
\end{equation}
where $\Psi^{(3)\,\alpha}$ has the same form in the transformed basis $\left({1\atop 0}\right)^\alpha$ and $\left({0\atop 1}\right)^\alpha$ as $\Psi^{(3)}$ in the original $HV$ basis (\ref{qtr-wf})
\begin{equation}
\label{qtr-wf-alpha}
\Psi^{(3)\,\alpha}=C_1^\alpha\,\Psi_{\alpha,\alpha}+B_+^\alpha\,\Psi_+^\alpha
+C_4^\alpha\,\Psi_{90^\circ+\alpha,90^\circ+\alpha}.
\end{equation}
The coefficients $C_1^\alpha$, $B_+^\alpha$ and $C_4^\alpha$ are easily found to be given by \cite{NJP} [Eq. (A.13)]
\begin{eqnarray}
\label{C1-alpha}
C_1^\alpha=\cos^2\alpha \,C_1+\sqrt{2}\cos\alpha\sin\alpha \,B_++\sin^2\alpha \,C_4,\\
\label{B+-alpha}
B_+^\alpha=-\sqrt{2}\cos\alpha\sin\alpha\, (C_1-C_4)+\cos 2\alpha \,B_+,\\
\label{C4-alpha}
C_4^\alpha=\sin^2\alpha \,C_1-\sqrt{2}\cos\alpha\sin\alpha\,B_++\cos^2\alpha\, C_4.
\end{eqnarray}
The conditional probabilities in the turned frame are defined similarly to their definition in the $HV$-frame (\ref{cond-HH-VV}):
\begin{gather}
\nonumber
\left.w_\alpha\right|_\alpha=|C_1^\alpha|^2\\
\label{w-alpha-alpha}
=|\cos^2\alpha \,C_1+\sqrt{2}\cos\alpha\sin\alpha \,B_++\sin^2\alpha \,C_4|^2,\\
\nonumber
\left.w_{90^\circ+\alpha}\right|_{90^\circ+\alpha}=|C_4^\alpha|^2\\
\label{w-alpha+90-alpha+90}
=|\sin^2\alpha \,C_1-\sqrt{2}\cos\alpha\sin\alpha\,B_++\cos^2\alpha\, C_4|^2,\\
\left.w_{\alpha}\right|_{90^\circ+\alpha}=\left.w_{90^\circ+\alpha}\right|_{\alpha}
=\frac{|B_+^\alpha|^2+|B_-|^2}{2}.
\label{w-alpha-alpha+90}
\end{gather}
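The transformation (\ref{C1-alpha})-(\ref{C4-alpha}) and the conditional probabilities (\ref{w-alpha-alpha})-(\ref{w-alpha-alpha+90}) are straightforward to implement. The short sketch below (with placeholder amplitudes) also checks that the rotation preserves the norm of the qutrit part of the state, $|C_1^\alpha|^2+|B_+^\alpha|^2+|C_4^\alpha|^2=|C_1|^2+|B_+|^2+|C_4|^2$.
\begin{verbatim}
import numpy as np

def rotated_coeffs(C1, Bp, C4, alpha):
    # Eqs. (C1-alpha)-(C4-alpha): qutrit amplitudes in the frame turned by alpha.
    c, s = np.cos(alpha), np.sin(alpha)
    C1a = c**2 * C1 + np.sqrt(2) * c * s * Bp + s**2 * C4
    Bpa = -np.sqrt(2) * c * s * (C1 - C4) + np.cos(2 * alpha) * Bp
    C4a = s**2 * C1 - np.sqrt(2) * c * s * Bp + c**2 * C4
    return C1a, Bpa, C4a

def conditional_probs(C1, Bp, C4, Bm, alpha):
    # Eqs. (w-alpha-alpha)-(w-alpha-alpha+90) in the turned frame.
    C1a, Bpa, C4a = rotated_coeffs(C1, Bp, C4, alpha)
    return abs(C1a)**2, abs(C4a)**2, (abs(Bpa)**2 + abs(Bm)**2) / 2

# Placeholder amplitudes (B_+ real and positive, as assumed in the text).
C1, Bp, C4, Bm = 0.5 * np.exp(0.7j), 0.6, 0.4 * np.exp(1.3j), np.sqrt(0.23)
C1a, Bpa, C4a = rotated_coeffs(C1, Bp, C4, 0.3)
assert np.isclose(abs(C1a)**2 + abs(Bpa)**2 + abs(C4a)**2,
                  abs(C1)**2 + abs(Bp)**2 + abs(C4)**2)
print(conditional_probs(C1, Bp, C4, Bm, 0.3))
\end{verbatim}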
If $\alpha$ is small, in the linear approximation in $\alpha$, Eqs. (\ref{w-alpha-alpha}) and (\ref{w-alpha+90-alpha+90}) take the form
\begin{gather}
\label{cos-phi1}
\left.w_\alpha\right|_\alpha=\left.w_H\right|_H+2\sqrt{2}\,\alpha|C_1|B_+\cos\varphi_1,\\
\label{cos-phi4}
\left.w_{90^\circ+\alpha}\right|_{90^\circ+\alpha}=\left.w_V\right|_V-2\sqrt{2}\,\alpha|C_4|B_+\cos\varphi_4,
\end{gather}
where, as assumed, $B_+$ is taken real and positive. By denoting
\begin{equation}
\label{theta1-theta4}
\setlength{\extrarowheight}{0.3cm}
\begin{matrix}
\lim_{\alpha\rightarrow 0}\frac{\left.w_\alpha\right|_\alpha-\left.w_H\right|_H}{\alpha}=\tan\theta_1,\\
\lim_{\alpha\rightarrow 0}\frac{\left.w_{90^\circ+\alpha}\right|_{90^\circ+\alpha}-\left.w_V\right|_V}{\alpha}=\tan\theta_4
\end{matrix},
\end{equation}
we can rewrite Eqs. (\ref{cos-phi1}) and (\ref{cos-phi4}) as
\begin{equation}
\label{phi via theta}
\cos\varphi_1=\frac{\tan\theta_1}{2^{3/2}|C_1|B_+},\;
\cos\varphi_4=\frac{\tan\theta_4}{2^{3/2}|C_4|B_+}.
\end{equation}
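A quick numerical check of Eq. (\ref{phi via theta}) can be made with placeholder amplitudes, a finite difference playing the role of the limit in Eq. (\ref{theta1-theta4}):
\begin{verbatim}
import numpy as np

def w_alpha_alpha(C1, Bp, C4, alpha):
    # |C1^alpha|^2, Eq. (w-alpha-alpha)
    c, s = np.cos(alpha), np.sin(alpha)
    return abs(c**2 * C1 + np.sqrt(2) * c * s * Bp + s**2 * C4)**2

# Placeholder amplitudes with a known phase phi_1 (B_+ real and positive).
phi1 = 0.7
C1, Bp, C4 = 0.5 * np.exp(1j * phi1), 0.6, 0.4

a0 = 1e-4   # small angle standing in for the limit alpha -> 0
tan_theta1 = (w_alpha_alpha(C1, Bp, C4, a0) - w_alpha_alpha(C1, Bp, C4, 0.0)) / a0
cos_phi1 = tan_theta1 / (2**1.5 * abs(C1) * Bp)   # Eq. (phi via theta)
print(cos_phi1, np.cos(phi1))                     # agree up to O(a0)
\end{verbatim}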
Here the expressions for the parameters $|C_1|$ and $|C_4|$ in terms of conditional probabilities are known (\ref{cond-HH-VV}), and the derived Eqs. (\ref{phi via theta}) determine directly the phases $\varphi_1$ and $\varphi_4$ as functions of $B_+$. For finding $B_+$ [and then $|B_-|$ from Eq. (\ref{cond-HV-VH})] one has to make one more measurement, for example, in the frame with $\alpha=45^\circ$, i.e., in the frame turned by $45^\circ$ with respect to the original $HV$- ($xy$-) frame. In this case we find from Eq. (\ref{B+-alpha}) that $|B_+^{45^\circ}|^2=|C_1-C_4|^2/2$ and, hence, Eqs. (\ref{cond-HV-VH}) and (\ref{w-alpha-alpha+90}) yield the following equation:
\begin{equation}
\label{Eq for B+}
\left.w_{45^\circ}\right|_{135^\circ}=\frac{|C_1-C_4|^2}{4}+\left.w_H\right|_V-\frac{B_+^2}{2},
\end{equation}
where
\begin{equation}
\label{(C1-C4)2}
|C_1-C_4|^2=|C_1|^2+|C_4|^2-2|C_1||C_4|\cos (\varphi_1-\varphi_4)
\end{equation}
and
\begin{gather}
\nonumber
\cos (\varphi_1-\varphi_4)=\frac{\tan\theta_1}{2^{3/2}|C_1|B_+}\,\frac{\tan\theta_4}{2^{3/2}|C_4|B_+}\\
\label{cos phi1-phi4}
+\left[1-\left(\frac{\tan\theta_1}{2^{3/2}|C_1|B_+}\right)^2\right]^{1/2}
\left[1-\left(\frac{\tan\theta_4}{2^{3/2}|C_4|B_+}\right)^2\right]^{1/2}.
\end{gather}
With $|C_{1,4}|$ and $\tan\theta_{1,4}$ known, the only unknown parameter in Eq. (\ref{Eq for B+}) [combined with Eqs. (\ref{(C1-C4)2}) and (\ref{cos phi1-phi4})] is $B_+$, and this equation has to be solved numerically. When $B_+$ is found, Eq. (\ref{cond-HV-VH}) yields $|B_-|=\left[2\left.w_H\right|_V-|B_+|^2\right]^{1/2}.$ This concludes the determination of all five parameters of MPS in terms of conditional probabilities.
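The numerical solution of Eq. (\ref{Eq for B+}) can be sketched as follows. The ``measured$"$ quantities below are generated from placeholder parameters with $\varphi_{1,4}\in(0,\pi)$ (so that the positive square roots in Eq. (\ref{cos phi1-phi4}) apply), and the bracketing interval for the root search is chosen so that $\cos\varphi_{1,4}\leq 1$ and $B_+^2\leq 2\left.w_H\right|_V$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Placeholder 'true' parameters: B_+ real and positive, phases in (0, pi).
phi1, phi4 = 0.7, 1.1
aC1, aC4, Bp_true = 0.5, 0.4, 0.6
Bm = np.sqrt(1 - aC1**2 - aC4**2 - Bp_true**2)
C1, C4 = aC1 * np.exp(1j * phi1), aC4 * np.exp(1j * phi4)

# 'Measured' quantities entering Eqs. (Eq for B+)-(cos phi1-phi4).
wHV = (Bp_true**2 + Bm**2) / 2
t1 = 2**1.5 * aC1 * Bp_true * np.cos(phi1)           # tan(theta_1)
t4 = 2**1.5 * aC4 * Bp_true * np.cos(phi4)           # tan(theta_4)
w45_135 = (abs(C1 - C4)**2 / 2 + Bm**2) / 2          # Eq. (w-alpha-alpha+90), 45 deg

def residual(Bp):
    c1 = t1 / (2**1.5 * aC1 * Bp)                    # cos(phi_1)
    c4 = t4 / (2**1.5 * aC4 * Bp)                    # cos(phi_4)
    cos_d = c1 * c4 + np.sqrt(1 - c1**2) * np.sqrt(1 - c4**2)
    C1mC4_sq = aC1**2 + aC4**2 - 2 * aC1 * aC4 * cos_d
    return C1mC4_sq / 4 + wHV - Bp**2 / 2 - w45_135  # Eq. (Eq for B+)

lo = max(t1 / (2**1.5 * aC1), t4 / (2**1.5 * aC4)) + 1e-9
hi = np.sqrt(2 * wHV)
Bp_rec = brentq(residual, lo, hi)
Bm_rec = np.sqrt(2 * wHV - Bp_rec**2)
print(Bp_rec, Bm_rec)    # ~0.6 and ~0.48 for these placeholders
\end{verbatim}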
\subsection{Scenarios for experimental measurement of the parameters of mixed polarization states}
In all cases, the first step consists in performing at least three series of coincidence measurements of photon numbers with the polarizers in channels 1 and 2 installed along either the horizontal or the vertical direction,
$\left.N_H\right|_H$, $\left.N_H\right|_V$, and $\left.N_V\right|_V$ (because of photon indistinguishability,
$\left.N_V\right|_H=\left.N_H\right|_V$). Then, with the help of Eqs. (\ref{cond-HH-VV}), we find the two
constants $|C_1|$ and $|C_4|$, plus the relation (\ref{cond-HV-VH}) between $|B_+|^2$ and $|B_-|^2$. The next steps differ depending on whether the obtained values of $|C_1|$ and $|C_4|$ are zero or nonzero.
\subsubsection{$C_1=C_4=0$}
If the above-described measurements with horizontal-vertical orientations of the polarizers give $|C_1|=|C_4|=0$, the remaining two constants to be found are $|B_+|$ and $|B_-|$, and their measurement is very simple. E.g., from Eq. (\ref{C1-alpha}) we find $C_1^\alpha=\sin 2\alpha\, B_+/\sqrt{2}$. Then Eq. (\ref{w-alpha-alpha}) yields
\begin{equation}
\label{zero C1 C4}
|B_+|^2=\frac{2\left.w_\alpha\right|_\alpha}{\sin^22\alpha}
=\frac{2\left.N_\alpha\right|_\alpha/N_{tot}}{\sin^22\alpha}
\end{equation}
and $|B_-|=\sqrt{1-|B_+|^2}.$ Thus, if a complete set of horizontal-vertical coincidence measurements gives $C_1=C_4=0$, then for finding $|B_+|$ and $|B_-|$ one has to make only one more coincidence measurement, with identically oriented polarizers $P_1$ and $P_2$ and with an arbitrarily chosen orientation angle $\alpha$ such that $\sin 2\alpha\neq 0$.
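For purely illustrative numbers: if the polarizers are set at $\alpha=30^\circ$ and the measured relative coincidence rate is $\left.N_\alpha\right|_\alpha/N_{tot}=0.27$, then Eq. (\ref{zero C1 C4}) gives
\begin{displaymath}
|B_+|^2=\frac{2\cdot 0.27}{\sin^2 60^\circ}=\frac{0.54}{0.75}=0.72,\qquad
|B_-|=\sqrt{1-0.72}\approx 0.53.
\end{displaymath}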
\subsubsection{$C_1\neq 0,\,C_4=0$}
If the horizontal-vertical coincidence measurements give $C_1\neq 0,\,C_4=0$, then at $\alpha=45^\circ$ the formula in Eq. (\ref{B+-alpha}) reduces to $|B_+^{45^\circ}|=|C_1|/\sqrt{2}$, owing to which Eq. (\ref{w-alpha-alpha+90}) takes the form
\begin{equation}
\label{45-135}
\left.w_{135^\circ}\right|_{45^\circ}\equiv\frac{\left.N_{135^\circ}\right|_{45^\circ}}{N_{tot}}=
\frac{|C_1|^2}{4}+\frac{|B_-|^2}{2}.
\end{equation}
This is the equation for finding $|B_-|$, after which $|B_+|$ is easily found from the normalization condition, $|B_+|=\sqrt{1-|C_1|^2-|B_-|^2}$. Thus, for measuring the absolute values of all three constants, $|C_1|$, $|B_+|$, and $|B_-|$ (with $C_4=0$), it is sufficient to complement the horizontal-vertical coincidence measurements by a measurement of the coincidence number of photons with the polarizers $P_1$ and $P_2$ turned, correspondingly, by $135^\circ$ and $45^\circ$ with respect to the horizontal direction. In principle, in addition to these three constants there is one more constant characterizing MPS with $C_4=0$: the phase $\varphi_1$ of the parameter $C_1$ (with real $B_+$). The way of measuring it is described in the following subsubsection. But it should be noted that in the case $C_4=0$ the MPS correlation parameters $\overline{K}$ (\ref{K-pol}) and $\overline{C}$ (\ref{C-pol}) do not depend on $\varphi_1$. Note also that the described scheme of measurements is valid in the case $C_1=0,\,C_4\neq 0$ as well.
\subsubsection{Nonzero $C_{1,4}$}
Let now both constants $|C_1|$ and $|C_4|$ found from the horizontal-vertical coincidence measurements be different from zero, $C_{1,4}\neq 0$. Then the procedure of measuring the other MPS parameters is more complicated. In particular, for measuring the phases $\varphi_{1,4}$ of $C_{1,4}$, we suggest using their relations (\ref{phi via theta}) with the parameters $\tan\theta_{1,4}$ (\ref{theta1-theta4}), which characterize the behavior of the function $\left.w_\alpha\right|_\alpha(\alpha)$ in small vicinities of the points $\alpha=0$ and $\alpha=90^\circ$.
Specifically, for finding $\theta_1$ and $\varphi_1$ we suggest measuring the coincidence numbers of photons $\left.N_{\pm\alpha_0}\right|_{\pm\alpha_0}$ with both polarizers $P_1$ and $P_2$ turned by small angles $\alpha_0$ and $-\alpha_0$ with respect to the horizontal direction (e.g., with $\alpha_0=5^\circ=0.087\,{\rm rad}\ll 1$). In accordance with Eqs. (\ref{coinc-cond-alpha}), together with the earlier measurement of $\left.w_H\right|_H$, this gives three values of the function $\left.w_\alpha\right|_\alpha(\alpha)$, at $\alpha=-\alpha_0,\,0,\,{\rm and}\,\alpha_0$. In Fig. \ref{Fig2} these values are indicated by the letters
\begin{figure}
\caption{{\protect\footnotesize{Conditional probability $\left.w_\alpha\right|_\alpha(\alpha)$ at $|\alpha|\ll 1$.}}
\label{Fig2}}
\end{figure}
$A$, $O$, and $B$. With these three values we can reconstruct the function $\left.w_\alpha\right|_\alpha(\alpha)$ in a small vicinity of the point $\alpha=0$. The pictures $(a),\,(b),\,{\rm and}\,(c)$ correspond to three different possible locations of the points $A$ and $B$. The picture ${(a)}$ corresponds to the case when the points $A$ and $B$ are symmetric with respect to $O$, and all three points $A$, $O$, and $B$ can be connected by a single straight line. In this case the angle $\alpha_0$ is small enough for the validity of the approximation linear in $\alpha$ in Eq. (\ref{cos-phi1}) over the whole range $[-\alpha_0,\alpha_0]$. In the case $(b)$ the positions of the points $A$ and $B$ are asymmetric, and the points $A$, $O$, and $B$ can be connected only by a parabola. In this case, the straight line corresponding to the linear approximation of Eq. (\ref{cos-phi1}) is tangent to the parabola at the point $O$. The angle $\theta_1$ is determined in these two cases as the angle between the horizontal line and either the line $\left.w_\alpha\right|_\alpha(\alpha)$ in the case $(a)$ or the line tangent to the curve $\left.w_\alpha\right|_\alpha(\alpha)$ at the point $O$ in the case $(b)$. In both cases, with known $|C_1|$ and $\theta_1$, we can use Eq. (\ref{phi via theta}) for finding the phase $\varphi_1$ of the parameter $C_1$ as a function of $B_+$. Similar measurements and calculations can be done with the polarizers deviating by a small angle $\alpha$ from the vertical orientation, to determine $\theta_4$ and then $\varphi_4$ as a function of $B_+$. The last step is a single coincidence measurement with one of the two polarizers turned by $45^\circ$ and the other one by $135^\circ$. This measurement gives $\left.w_{135^\circ}\right|_{45^\circ}$, and then Eqs. (\ref{Eq for B+})-(\ref{cos phi1-phi4}) can be used for finding $B_+$ and, then, $\varphi_{1,4}$. With $|B_-|$ found from Eq. (\ref{cond-HV-VH}), this finalizes the procedure of finding all parameters of MPS of a general form.
\subsubsection{$B_+=0$}
In the picture $(c)$ of Fig. \ref{Fig2} the points $A$ and $B$ are located symmetrically with respect to the vertical axis crossing the point $O$. Again, three points $A$, $O$, and $B$ can be connected by a parabola,
\begin{equation}
\label{parabola}
\left.w_\alpha\right|_\alpha\approx\left.w_H\right|_H +k\alpha^2.
\end{equation}
The line tangent to this parabola at the point $O$ is horizontal. This means that in this case there is no validity region for the linear approximation of Eqs. (\ref{cos-phi1}) and (\ref{cos-phi4}), which is possible (at $C_{1,4}\neq 0$) only if $B_+=0$. In this case a direct measurement of the parameter $k$ in Eq. (\ref{parabola}) appears to be sufficient for completing the reconstruction of the parameters characterizing MPS. Indeed, in the case $B_+=0$ and $\alpha\ll 1$ Eq. (\ref{C1-alpha}) takes the form
\begin{equation}
\label{C1 at zero B+}
C_1^\alpha\approx C_1+ (C_4-C_1)\, \alpha^2.
\end{equation}
Moreover, as $B_+=0$, we can now choose the arbitrary phase of $\Psi^{(3)}$ in a way providing $\varphi_1=0$, which leaves only one phase parameter, $\varphi_4$, to be determined from experiments in addition to $C_1$ and $C_4$. Under this assumption Eq. (\ref{C1 at zero B+}) gives
\begin{equation}
\label{C1 squared}
\left|C_1^\alpha\right|^2\approx |C_1|^2+ 2|C_1|(|C_4|\cos\varphi_4-|C_1|)\, \alpha^2
\end{equation}
and, being compared with Eq. (\ref{parabola}),
\begin{equation}
\label{k via cos phi4}
k=2|C_1|(|C_4|\cos\varphi_4-|C_1|).
\end{equation}
Thus, by measuring experimentally the parabola parameter $k$ of Fig. 2$(c)$ and Eq. (\ref{parabola}), we find from Eq. (\ref{k via cos phi4}) the phase $\varphi_4$, and this concludes the reconstruction of all parameters of MPS in the case $B_+=0$.
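For convenience, Eq. (\ref{k via cos phi4}) can be inverted explicitly (this rearrangement uses only the quantities already determined),
\begin{displaymath}
\cos\varphi_4=\frac{1}{|C_4|}\left(\frac{k}{2|C_1|}+|C_1|\right),
\end{displaymath}
with $|C_1|$ and $|C_4|$ known from the horizontal-vertical measurements (\ref{cond-HH-VV}).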
\section{ Reconstruction of the ququart's parameters}
Let us return now to pure states of biphoton ququarts. The analysis of the previous Section shows that by means of purely polarization measurements one can determine all parameters of the ``qutrit's part$"$ of the ququart, $\Psi^{(3)}$ (except its phase), and the parameter $|B_-|$. The only remaining unknown parameter of the ququart's state is the phase $\varphi_-$ of the parameter $B_-$. Though the characteristics of MPS do not depend on $\varphi_-$, features of pure states of ququarts can be phase-sensitive. The phase $\varphi_-$ cannot be found from any purely polarization measurements and requires combined polarization-frequency measurements. This means that in the experimental scheme of Fig. \ref{Fig1} one has to install in front of the detectors both polarizers and frequency filters. Coincidence measurements sufficient for determining the phase $\varphi_-$ are simplest in the case of ququarts with $B_+\neq 0$. Then one of the measurable conditional probabilities is the probability of registering a high-frequency horizontally polarized photon in channel 1 under the condition that simultaneously a low-frequency vertically polarized photon is registered in channel 2
\begin{gather}
\nonumber
\left.w_{H,h}\right|_{V,l}=\frac{\left.N_{H,h}\right|_{V,l}}{N_{tot}}=\frac{|B_++B_-|^2}{2}\\
\label{cond-freq}
=\frac{|B_+|^2+|B_-|^2+2|B_+||B_-|\cos(\varphi_--\varphi_+)}{2}.
\end{gather}
By assuming again that $\varphi_+=0$, we find from this equation $\cos\varphi_-$ expressed in terms of the experimentally measurable relative numbers of photon counts.
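Written out explicitly (a simple rearrangement of Eq. (\ref{cond-freq}) with $\varphi_+=0$),
\begin{displaymath}
\cos\varphi_-=\frac{2\left.w_{H,h}\right|_{V,l}-|B_+|^2-|B_-|^2}{2|B_+||B_-|},
\end{displaymath}
where $|B_+|$ and $|B_-|$ are already known from the purely polarization measurements.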
The case $B_+=0$ (but $C_{1,4}\neq 0$) is not much more difficult. With the polarizers in both channels 1 and 2 turned by an arbitrary but identical angle $\alpha$, one can use, in fact, the same scheme of measurements as described above for the case $B_+\neq 0$. Indeed, as mentioned above, in the case $B_+=0$ Eq. (\ref{B+-alpha}) takes the form $B_+^\alpha=\sqrt{2}\cos\alpha\sin\alpha\, (C_4-C_1)$. As the parameters $C_1$ (with $\varphi_1=0$) and $C_4=|C_4|e^{i\varphi_4}$ are supposed to be known already from purely polarization measurements, we can write the difference $C_4-C_1$ as $C_4-C_1=|C_4-C_1|e^{i\varphi_{4-1}}$, where $|C_4-C_1|$ and $\varphi_{4-1}$ are easily calculable. Now, with the turned polarizers, one can measure the conditional probability of registering a high-frequency photon polarized in the direction $\alpha$ in channel 1 and a low-frequency photon polarized in the direction $\alpha+90^\circ$ in channel 2. This conditional probability is related to the unknown phase $\varphi_-$ by a formula very similar to that of Eq. (\ref{cond-freq})
\begin{gather}
\nonumber
\left.w_{\alpha,h}\right|_{\alpha+90^\circ,l}=\frac{\left.N_{\alpha,h}\right|_{\alpha+90^\circ,l}}{N_{tot}}
=\frac{|B_+^\alpha +B_-|^2}{2}\\
\label{cond-freq-alpha}
=\frac{|B_+^\alpha|^2+|B_-|^2+2|B_+^\alpha||B_-|\cos(\varphi_--\varphi_{4-1})}{2},
\end{gather}
where $|B_+^\alpha|=\frac{1}{\sqrt{2}}|\sin 2\alpha||C_4-C_1|$.
Eq. (\ref{cond-freq-alpha}) can be used for finding the phase $\varphi_-$ from the data to be obtained from the coincidence polarization-frequency measurements in the case of ququarts with $B_+=0$.
\section{Conclusion}
Thus, biphoton ququarts are more complicated, and their physics is richer and more interesting, than usually assumed. The key elements of this new understanding are (i) the obligatory symmetry of biphoton wave functions (in pure states) as wave functions of two indistinguishable bosons, and (ii) consideration of the frequencies of photons in biphoton polarization-frequency ququarts as variables independent of polarizations rather than as given numbers. In this approach biphoton polarization-frequency ququarts are states having two degrees of freedom for each photon, polarization and frequency. Owing to this, all biphoton polarization-frequency ququarts are entangled and, in a general case, their entanglement is an inseparable mixture of polarization and frequency entanglement. Another interesting consequence of the formulated features of biphoton ququarts concerns their images to be seen in experiments. Whereas in full polarization-frequency coincidence measurements ququarts are seen as pure states, in simpler purely polarization measurements (non-selective in frequencies) the same states are seen as two-qubit mixed polarization states, MPS. MPS are characterized by the ququart's density matrix reduced with respect to the frequency variables. Parameters of MPS are found. They appear to be rather peculiar, differing significantly from those of the full-dimensionality ququarts, and rather useful. In particular, the Schmidt parameter of MPS, ${\overline K}$, is found to be related directly to the degree of polarization of ququarts. Features of MPS can be used straightforwardly for experimental measurement of ququart's parameters if they are not known in advance. A scheme of such measurements is suggested and described. The main idea is the separation of a biphoton beam into two channels by a simple non-selective beam splitter and the performance of series of coincidence measurements with photon counters. The first stage consists in making purely polarization measurements with different orientations of the polarizers in both channels and in finding in this way all parameters of MPS. Then, for reconstructing completely the ququart's state, one has to find additionally only one of its phase parameters, which requires making one additional polarization-frequency coincidence measurement with both polarizers and frequency filters installed in both channels in front of the detectors. Thus, the suggested scheme for reconstruction of the ququart's parameters separates purely polarization and polarization-frequency measurements and minimizes the number of polarization-frequency measurements. We believe that such experiments are feasible and that their results may be quite interesting.
\end{document} |
\begin{document}
\begin{abstract} We consider the problem of slicing a compact metric space $\Omega$ with sets of the form $\pi_{\lambda}^{-1}\{t\}$, where the mappings $\pi_{\lambda} \colon \Omega \to \mathbb{R}$, $\lambda \in \mathbb{R}$, are \emph{generalized projections}, introduced by Yuval Peres and Wilhelm Schlag in 2000. The basic question is: assuming that $\Omega$ has Hausdorff dimension strictly greater than one, what is the dimension of the 'typical' slice $\pi_{\lambda}^{-1}\{t\}$, as the parameters $\lambda$ and $t$ vary? In the special case of the mappings $\pi_{\lambda}$ being orthogonal projections restricted to a compact set $\Omega \subset \mathbb{R}^{2}$, the problem dates back to a 1954 paper by Marstrand: he proved that for almost every $\lambda$ there exist positively many $t \in \mathbb{R}$ such that $\dim \pi_{\lambda}^{-1}\{t\} = \dim \Omega - 1$. For generalized projections, the same result was obtained 50 years later by J\"arvenp\"a\"a, J\"arvenp\"a\"a and Niemel\"a. In this paper, we improve the previously existing estimates by replacing the phrase 'almost all $\lambda$' with a sharp bound for the dimension of the exceptional parameters.
\end{abstract}
\title{Slicing Sets and Measures, and the Dimension of Exceptional Parameters}
\section{Introduction}
\emph{Generalised projections} were introduced by Yuval Peres and Wilhelm Schlag in 2000: these are families of continuous mappings $\pi_{\lambda} \colon \Omega \to \mathbb{R}^{n}$, $\lambda \in Q$, where $\Omega$ is a compact metric space and $Q$ is an open set of parameters in $\mathbb{R}^{m}$. The projections $\pi_{\lambda}$ are required to satisfy certain conditions, see Definition \ref{projections}, which guarantee that they behave regularly with respect to $\lambda$ and are never too severely non-injective. As was shown in \cite{PS}, these conditions are sufficient to produce results in the spirit of Marstrand's projection theorem, which were previously known only for orthogonal projections in $\mathbb{R}^{n}$.
Let us quickly review the classical theory related to orthogonal projections, restricting attention to $\mathbb{R}^{2}$. According to a 1954 result of Marstrand, see \cite{Mar}, any Borel set $B$ of dimension $\dim B = s \leq 1$ is projected into a set of dimension $s$ in almost all directions; if $\dim B > 1$, the projections typically have positive length. Since 1954, these results have been sharpened by examining the largest possible dimension of the set of \emph{exceptional} directions, that is, the directions for which the typical behaviour described by the theorem fails. Let $\rho_{\theta}(x) = x \cdot (\cos \theta,\sin \theta)$ denote the orthogonal projection onto the line spanned by $(\cos \theta, \sin \theta)$. Then we have the bounds
\begin{equation}\label{kaufman} \dim \{\theta \in [0,2\pi) : \dim \rho_{\theta}(B) < \dim B\} \leq \dim B, \qquad \dim B \leq 1, \end{equation}
due to Kaufman, see \cite{Ka}, and
\begin{equation}\label{falconer} \dim \{\theta \in [0,2\pi) : \mathcal{L}^{1}(\rho_{\theta}(B)) = 0\} \leq 2 - \dim B, \qquad \dim B > 1, \end{equation}
due to Falconer, see \cite{Fa}. Both estimates are known to be sharp.
Orthogonal projections are a special case of the general formalism of Peres and Schlag, and all the results stated above -- along with their higher dimensional analogues -- follow from the theory in \cite{PS}. Besides orthogonal projections, Peres and Schlag provide multiple examples to demonstrate the wide applicability of their formalism. These examples include the mappings $\pi_{\lambda}(x) = |x - \lambda|^{2}$, $\lambda,x \in \mathbb{R}^{n}$, and the 'Bernoulli projections'
\begin{displaymath} \pi_{\lambda}(x) = \sum_{j = 1}^{\infty} x_{j}\lambda^{j}, \qquad x = (x_{1},x_{2},\ldots) \in \Omega = \{-1,1\}^{\mathbb{N}}, \end{displaymath}
for $\lambda \in (0,1)$. Indeed, estimates similar to \eqref{kaufman} and \eqref{falconer} are obtained for these projections, and many more, in \cite{PS}.
In the field of geometric measure theory, Marstrand's projection theorem is not the only result involving orthogonal projections. Marstrand's theorem is certainly matched in fame by the \emph{Besicovitch-Federer projection theorem}, characterising rectifiability in $\mathbb{R}^{n}$ in terms of the behaviour of orthogonal projections. In the plane, this theorem states that a Borel set $B$ with positive and finite $1$-dimensional measure is purely unrectifiable, if and only if almost all of the sets $\rho_{\theta}(B)$ have zero length. Considering the success of Peres and Schlag's projections in generalizing Marstrand's theorem, it is natural to ask whether also the characterisation of Besicovitch-Federer would permit an analogue in terms of the generalized projections. This question was recently resolved by Hovila, J\"arvenp\"a\"a, J\"arvenp\"a\"a and Ledrappier: Theorem 1.2 in \cite{HJJL} essentially shows that orthogonal projections can be replaced by \emph{any} family of generalized projections in the theorem of Besicovitch-Federer.
There is a third classical result in geometric measure theory related intimately, though slightly covertly, to orthogonal projections. In his 1954 article mentioned above, Marstrand also studied the following question: given a Borel set $B \subset \mathbb{R}^{2}$ with $\dim B > 1$, what can be said of the dimension of the intersections $B \cap L$, where $L$ ranges over the lines of $\mathbb{R}^{2}$? Marstrand proved that in almost all directions there exist positively many lines intersecting $B$ in a set of dimension $\dim B - 1$. The multidimensional analogue of this result was obtained by Mattila first in \cite{Mat1} and later in \cite{Mat2} using a different technique: if $B \subset \mathbb{R}^{n}$ is a Borel set of dimension $\dim B > m$, then positively many translates of almost every $m$-codimensional subspace intersect $B$ in dimension $\dim B - m$. These results are easily formulated in terms of orthogonal projections. Once more restricting attention to the plane, the theorem of Marstrand can be stated as follows: given a Borel set $B \subset \mathbb{R}^{2}$ with $\dim B > 1$, we have
\begin{equation}\label{marstrand2} \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \rho_{\theta}^{-1}\{t\}] \geq \dim B - 1\}) > 0 \end{equation}
for $\mathcal{L}^{1}$ almost every $\theta \in [0,2\pi)$. Indeed, all lines of $\mathbb{R}^{2}$ are of the form $\rho_{\theta}^{-1}\{t\}$ for some $\theta \in [0,2\pi)$ and $t \in \mathbb{R}$. Inspecting \eqref{marstrand2}, one arrives at the following conjecture: let $J \subset \mathbb{R}$ be an open interval and let $\pi_{\lambda} \colon \Omega \to \mathbb{R}$ be a family of generalized projections. Then \eqref{marstrand2} holds with $\rho_{\theta}$ replaced with $\pi_{\lambda}$, for any Borel set $B \subset \Omega$ with $\dim B > 1$. This conjecture, and its higher dimensional analogue, was verified in the 2004 paper \cite{JJN} by J\"arvenp\"a\"a, J\"arvenp\"a\"a and Niemel\"a.
To the best of our knowledge, no work has previously been done on obtaining a sharpened version of \eqref{marstrand2}. By a 'sharpened version' we mean a result which would yield \eqref{marstrand2} not only for almost all $\theta \in [0,2\pi)$, but also give an estimate for the dimension of the set of exceptional parameters $\theta$ similar to \eqref{falconer}. It is easy to guess the correct analogue of \eqref{falconer} in our situation: first, note that if \eqref{marstrand2} holds for some $\theta \in [0,2\pi)$, then we automatically have $\mathcal{L}^{1}(\rho_{\theta}(B)) > 0$. The estimate \eqref{falconer} is known to be sharp, which means that $\mathcal{L}^{1}(\rho_{\theta}(B)) > 0$ may fail (for some particular set $B$, see Section \ref{sharpness} for references) for all parameters $\theta$ in a set of dimension $2 - \dim B$: thus, also \eqref{marstrand2} may fail for all parameters $\theta$ in a set of dimension $2 - \dim B$. The converse result is proven below: for any Borel set $B \subset \mathbb{R}^{2}$ with $\dim B > 1$, we have \eqref{marstrand2} for all parameters $\theta \in [0,2\pi) \setminus E$, where $\dim E \leq 2 - \dim B$. Inspired by the generalization due to J\"arvenp\"a\"a, J\"arvenp\"a\"a and Niemel\"a, all our estimates will also be couched in the formalism of Peres and Schlag.
Straightforward applications to the mappings $\pi_{\lambda}(x) = |\lambda - x|^{2}$ and $\pi_{\lambda}(x) = \sum x_{i}\lambda^{i}$ are presented in Sections \ref{sharpness} and \ref{further}.
\section{Definitions and a Result of Peres and Schlag}
We start by defining our central object of study, the projections $\pi_{\lambda}$:
\begin{definition}[The Projections $\pi_{\lambda}$]\label{projections} Let $(\Omega,d)$ be a compact metric space. Suppose that an open interval $J \subset \mathbb{R}$ parametrises a collection of continuous mappings $\pi_{\lambda} \colon \Omega \to \mathbb{R}$, $\lambda \in J$. These mappings, often referred to as \emph{projections}, are assumed to satisfy the following properties (see Remark \ref{ps} for a discussion on the origins of our assumptions):
\begin{itemize}
\item[(i)] For every compact subinterval $I \subset J$ and every $l \in \mathbb{N}$ there exist constants $C_{I,l} > 0$ such that
\begin{displaymath} |\partial^{l}_{\lambda} \pi_{\lambda}(x)| \leq C_{I,l} \end{displaymath}
for every $\lambda \in I$ and $x \in \Omega$.
\item[(ii)] Write
\begin{displaymath} \Phi_{\lambda}(x,y) := \begin{cases} \frac{\pi_{\lambda}(x) - \pi_{\lambda}(y)}{d(x,y)}, & \lambda \in J, \: x,y \in \Omega, \: x \neq y\\
0, & \lambda \in J, \: x = y \in \Omega, \end{cases} \end{displaymath}
and fix $\tau \in [0,1)$. For every compact subinterval $I \subset J$, there exist constants $\delta_{I,\tau}$ such that
\begin{equation}\label{transversality} |\Phi_{\lambda}(x,y)| \leq \delta_{I,\tau}d(x,y)^{\tau} \quad \Longrightarrow \quad |\partial_{\lambda}\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}d(x,y)^{\tau} \end{equation}
for every $\lambda \in I$ and $x,y \in \Omega$. The projections $\pi_{\lambda}$ are then said to satisfy \emph{transversality of order $\tau$}.
\item[(iii)] For every compact subinterval $I \subset J$, every $\tau > 0$ and every $l \in \mathbb{N}$ there exist constants $C_{I,l,\tau} > 0$ such that
\begin{equation}\label{regularity} |\partial^{(l)}_{\lambda}\Phi_{\lambda}(x,y)| \leq C_{I,l,\tau}d(x,y)^{-l\tau} \end{equation}
for every $\lambda \in I$ and $x,y \in \Omega$. This property is called \emph{regularity of order $\tau$}.
\end{itemize}
\end{definition}
Under these hypotheses, our main result is the following
\begin{thm}\label{main} Let $(\pi_{\lambda})_{\lambda \in J} \colon \Omega \to \mathbb{R}$ be a family of projections satisfying (i), (ii) and (iii) of Definition \ref{projections} for some $\tau > 0$. Let $B \subset \Omega$ be a Borel set with $\dim B = s$ for some $1 < s < 2$. Then there exists a set $E \subset J$ such that $\dim E \leq 2 - s + \delta(\tau)$, and
\begin{displaymath} \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1 - \delta(\tau)\}) > 0, \qquad \lambda \in J \setminus E. \end{displaymath}
Here $\delta(\tau) > 0$ is a constant depending only on $\tau$, and $\delta(\tau) \to 0$ as $\tau \to 0$. If the requirements of Definition \ref{projections} are satisfied with $\tau = 0$ and $\mathcal{H}^{s}(B) > 0$, then the assertions above hold with $\delta(\tau) = 0$.
\end{thm}
\begin{remark}\label{ps} Throughout the paper, $\dim$ will always refer to Hausdorff dimension. Our definition of the projections $\pi_{\lambda}$ is a slightly specialized version of \cite[Definition 2.7]{PS}. The most notable strengthening in our hypotheses is that, in the original definition in \cite{PS}, the bound in (\ref{regularity}) is only assumed to hold \emph{under the condition that} $|\Phi_{\lambda}(x,y)| \leq \delta_{I,\tau}d(x,y)^{\tau}$, whereas we assume it for all $\lambda \in I$ and $x,y \in \Omega$. Second, Peres and Schlag also obtain results for projections $\pi_{\lambda}$ such that (\ref{regularity}) holds only for a finite number of $\lambda$-derivatives of the function $\Phi_{\lambda}$: our projections are $\infty$-regular in the language of \cite{PS}.
Inspecting (i) and (ii) above, one should note that the easiest way to establish (\ref{transversality}) and (\ref{regularity}) for all $\tau > 0$ is to establish them (with some constants) for $\tau = 0$: indeed, this is possible in all known (to the author, at least) 'geometric' applications of the projection formalism -- but not possible in \emph{all} applications.
\end{remark}
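As a concrete illustration of this remark, consider the simplest geometric example, the planar orthogonal projections $\pi_{\theta}(x) = \rho_{\theta}(x) = x \cdot (\cos \theta, \sin \theta)$ restricted to a compact set $\Omega \subset \mathbb{R}^{2}$ (the following short verification is standard and included only for concreteness). Writing $(x - y)/|x - y| = (\cos \beta, \sin \beta)$ for $x \neq y$, we have
\begin{displaymath} \Phi_{\theta}(x,y) = \cos(\theta - \beta) \quad \text{and} \quad \partial_{\theta}\Phi_{\theta}(x,y) = -\sin(\theta - \beta), \end{displaymath}
so that $\Phi_{\theta}(x,y)^{2} + (\partial_{\theta}\Phi_{\theta}(x,y))^{2} = 1$. Hence $|\Phi_{\theta}(x,y)| \leq \delta$ forces $|\partial_{\theta}\Phi_{\theta}(x,y)| \geq \sqrt{1 - \delta^{2}} \geq \delta$ whenever $\delta \leq 1/\sqrt{2}$, which is transversality of order $\tau = 0$; moreover, all $\theta$-derivatives of $\Phi_{\theta}$ are bounded by one, which gives regularity of order $0$.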
A word on notation before we proceed. If $A,B > 0$ and $p_{1},\ldots,p_{k}$ are parameters, we write $A \lesssim_{p_{1},\ldots,p_{k}} B$, if there exists a finite constant $C > 0$, depending only on the parameters $p_{1},\ldots,p_{k}$, such that $A \leq CB$. The two-sided inequality $A \lesssim_{p_{1},\ldots,p_{k}} B \lesssim_{p_{1},\ldots,p_{k}} A$ is further abbreviated to $A \asymp_{p_{1},\ldots,p_{k}} B$.
We now cite the parts of \cite[Theorem 2.8]{PS} that will be needed later:
\begin{thm}\label{sobolev} Let $\pi_{\lambda} \colon \Omega \to \mathbb{R}$, $\lambda \in J$, be a family of projections as in Definition \ref{projections}, satisfying (ii) and (iii) for some $\tau \in [0,1)$. Let $\mu$ be a Radon measure on $\Omega$ such that $I_{s}(\mu) := \iint d(x,y)^{-s} \, d\mu x \, d\mu y < \infty$ for some $s > 0$. Write $\mu_{\lambda} := \pi_{\lambda\sharp}\mu$.\footnote{Thus $\mu_{\lambda}(B) = \mu(\pi_{\lambda}^{-1}(B))$ for Borel sets $B \subset \mathbb{R}$.} Then
\begin{equation}\label{sobolevIneq} \int_{I} \|\mu_{\lambda}\|_{2,\gamma}^{2} \, d\lambda \lesssim_{I,\gamma} I_{s}(\mu), \qquad 0 < (1 + 2\gamma)(1 + a_{0}\tau) \leq s, \end{equation}
where $\|\mu_{\lambda}\|_{2,\gamma}^{2} := \int |t|^{2\gamma}|\widehat{\mu_{\lambda}}(t)|^{2} \, dt$ is the Sobolev-norm of $\mu_{\lambda}$ with index $\gamma \in \mathbb{R}$, and $a_{0} > 0$ is an absolute constant. Moreover, we have the estimate
\begin{displaymath} \dim \{\lambda \in J : \mu_{\lambda} \not\ll \mathcal{L}^{1}\} \leq 2 - \frac{s}{1 + a_{0}\tau}. \end{displaymath}
\end{thm}
\begin{definition}[The Mappings $\Psi_{\lambda}$]\label{psi} Given a family of projections as in Definition \ref{projections}, we define the mappings $\Psi_{\lambda} \colon \Omega \to \mathbb{R}^{2}$, $\lambda \in J$, by
\begin{displaymath} \Psi_{\lambda}(x) := (\partial_{\lambda}\pi_{\lambda}(x),\pi_{\lambda}(x)), \qquad x \in \Omega. \end{displaymath}
\end{definition}
\begin{remark}[H\"older continuity of $\Psi_{\lambda}$] Note that
\begin{displaymath} \frac{\Psi_{\lambda}(x) - \Psi_{\lambda}(y)}{d(x,y)} = (\partial_{\lambda}\Phi_{\lambda}(x,y),\Phi_{\lambda}(x,y)), \qquad x,y \in \Omega,\: x \neq y. \end{displaymath}
Assuming that the projections $\pi_{\lambda}$ are transversal and regular of order $\tau \in [0,1)$, the inequalities (\ref{transversality}) and (\ref{regularity}) yield
\begin{displaymath} \left| \frac{\Psi_{\lambda}(x) - \Psi_{\lambda}(y)}{d(x,y)} \right| \gtrsim |\partial_{\lambda}\Phi_{\lambda}(x,y)| + |\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}d(x,y)^{\tau} \end{displaymath}
and
\begin{displaymath} \left| \frac{\Psi_{\lambda}(x) - \Psi_{\lambda}(y)}{d(x,y)} \right| \lesssim \max\{C_{I,0,\tau},C_{I,1,\tau}\}d(x,y)^{-\tau} \end{displaymath}
for all $\lambda$ in any compact subinterval $I \subset J$ and all $x,y \in \Omega$. In brief, the mapping $\Psi_{\lambda}$ satisfies bi-H\"older continuity in the form
\begin{equation}\label{holder} d(x,y)^{1 + \tau} \lesssim_{I,\tau} |\Psi_{\lambda}(x) - \Psi_{\lambda}(y)| \lesssim_{I,\tau} d(x,y)^{1 - \tau} \end{equation}
for $x,y \in \Omega$.
\end{remark}
We close this chapter by stating a result on 'slicing' any Radon measure $\mu$ on $\mathbb{R}^{2}$ with respect to a continuous function $\pi \colon \mathbb{R}^{2} \to \mathbb{R}$. For the technical details, we refer to \cite{Mat2} or \cite[Chapter 10]{Mat3}.
\begin{defThm}[Sliced Measures]\label{slices} Let $\mu$ be a compactly supported Radon measure on $\mathbb{R}^{2}$. If $\rho \colon \mathbb{R}^{2} \to \mathbb{R}$ is any orthogonal projection, we may for $\mathcal{L}^{1}$ almost every $t \in \mathbb{R}$ define the \emph{sliced measure} $\mu_{\rho,t}$ with the following properties:
\begin{itemize}
\item[(i)] $\operatorname{spt} \mu_{\rho,t} \subset \operatorname{spt} \mu \cap \rho^{-1}\{t\}$,
\item[(ii)]
\begin{displaymath} \int \eta \, d\mu_{\rho,t} = \lim_{\delta \to 0} (2\delta)^{-1} \int_{\rho^{-1}(t - \delta,t + \delta)} \eta \, d\mu, \qquad \eta \in C(\mathbb{R}^{2}). \end{displaymath}
\end{itemize}
For every Borel set $B \subset \mathbb{R}$ and non-negative lower semicontinuous function $\eta$ on $\mathbb{R}^{2}$ we have the inequality
\begin{displaymath} \int_{B} \int \eta \, d\mu_{\rho,t} \, dt \leq \int_{\rho^{-1}(B)} \eta \, d\mu. \end{displaymath}
Moreover, equality holds, if $\rho_{\sharp}\mu \ll \mathcal{L}^{1}$. In this case, taking $B = \{t : \nexists \,\mu_{\rho,t} \text{ or } \mu_{\rho,t} \equiv 0\}$ and $\eta \equiv 1$ yields
\begin{displaymath} \rho_{\sharp}\mu(B) = \int_{\rho^{-1}(B)} \, d\mu = \int_{B} \int d\mu_{\rho,t} \, dt = 0, \end{displaymath}
which shows that $\mu_{\rho,t}$ exists and is non-trivial for $\rho_{\sharp}\mu$ almost every $t \in \mathbb{R}$.
\end{defThm}
\section{Preliminary Lemmas}
For the rest of the paper we will write $e_{1} = (1,0)$, $e_{2} = (0,1)$, $\rho_{1} := \rho_{e_{1}}$ and $\rho_{2} := \rho_{e_{2}}$. Thus $\rho_{1}(x,y) = x$ and $\rho_{2}(x,y) = y$ for $(x,y) \in \mathbb{R}^{2}$. Our first lemma is motivated by the following idea: if $\mu$ were a smooth function on $\mathbb{R}^{2}$, say $\mu \in C_{c}^{\infty}(\mathbb{R}^{2})$ (the subscript $c$ indicates compact support), then one may easily check that any slice $\mu_{t} := \mu_{\rho_{2},t}$, $t \in \mathbb{R}$, coincides with $(\mu\mathcal{H}^{1})\llcorner L_{t}$, where $L_{t}$ is the line $L_{t} = \{(s,t) : s \in \mathbb{R}\}$, and $(\mu\mathcal{H}^{1})\llcorner L_{t}$ is the measure defined by
\begin{displaymath} \int \varphi \, d(\mu\mathcal{H}^{1})\llcorner L_{t} := \int_{\mathbb{R}} \varphi(s,t)\mu(s,t) \, ds \end{displaymath}
for $\varphi \in C(\mathbb{R}^{2})$. Since an arbitrary Radon measure $\mu$ on $\mathbb{R}^{2}$ can be approximated by a family $(\mu_{\varepsilon})_{\varepsilon > 0}$ of smooth functions, one may ask whether also the measures $(\mu_{\varepsilon}\mathcal{H}^{1})\llcorner L_{t}$ converge weakly to $\mu_{t}$ as $\varepsilon \searrow 0$. Below, we will prove that \textbf{if the functions $\mu_{\varepsilon}$ are chosen suitably}, then $\mu_{t} = \lim_{\varepsilon \to 0} (\mu_{\varepsilon}\mathcal{H}^{1})\llcorner L_{t}$.
\begin{lemma} Let $Q := (-1/2,1/2) \times (-1/2,1/2) \subset \mathbb{R}^{2}$ be the unit square centered at the origin, and let $\chi_{\varepsilon}$ be the lower semicontinuous function $\chi_{\varepsilon}(x) := \varepsilon^{-2}\chi_{Q}(x/\varepsilon)$. Let $\mu$ be a compactly supported Radon measure on $\mathbb{R}^{2}$, and write $\mu_{\varepsilon} := \chi_{\varepsilon} \ast \mu$. Then
\begin{equation}\label{form1} \int_{\mathbb{R}^{2}} \eta \, d\mu_{t} = \lim_{\varepsilon \to 0} \int_{\mathbb{R}} \eta(s,t)\mu_{\varepsilon}(s,t) \, ds, \qquad \eta \in C(\mathbb{R}^{2}), \end{equation}
whenever $\mu_{t} := \mu_{\rho_{2},t}$ exists in the sense of Definition \ref{slices}. Moreover, the convergence is uniform on any compact family (in the $\sup$-norm topology) of functions $K \subset C(\mathbb{R}^{2})$.
\end{lemma}
\begin{proof} We first establish pointwise convergence, and then use the Arzelà-Ascoli theorem to verify the stronger conclusion. Assume that $\mu_{t}$ exists for some $t \in \mathbb{R}$. We start with a simple reduction. Namely, we observe that it suffices to prove \eqref{form1} only for functions $\eta$ with the special form $\eta(x_{1},x_{2}) = \eta(x_{1},t)$, $(x_{1},x_{2}) \in \mathbb{R}^{2}$. Indeed, if $\eta \in C(\mathbb{R}^{2})$ is arbitrary, we note that both sides of \eqref{form1} depend only on the values of $\eta$ on the line $L_{t}$: in particular, both sides of \eqref{form1} remain unchanged, if we replace $\eta$ by the function $\tilde{\eta} \in C^{+}(\mathbb{R}^{2})$ defined by $\tilde{\eta}(x_{1},x_{2}) = \eta(x_{1},t)$. The function $\tilde{\eta}$ has the 'special form'.
Fix a function $\eta \in C(\mathbb{R}^{2})$ of the 'special form'.
Starting from the right hand side of \eqref{form1}, we compute
\begin{align}\label{form30} \int_{\mathbb{R}} & \eta(s,t)\mu_{\varepsilon}(s,t) \, ds = \int_{\mathbb{R}} \eta(s,t) \varepsilon^{-2} \int_{\mathbb{R}^{2}} \chi_{Q}\left(\frac{(s,t) - x}{\varepsilon}\right) \, d\mu x \, ds\notag\\
& = \varepsilon^{-1} \int_{\rho_{2}^{-1}(t - \varepsilon/2, t + \varepsilon/2)} \left(\varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} \eta(s,t)\, ds \right) \, d\mu x. \end{align}
The domain $\rho_{2}^{-1}(t - \varepsilon/2,t + \varepsilon/2)$ results from the fact that the kernel $\chi_{Q}([(s,t) - x]/\varepsilon)$ is zero whenever the second coordinate of $x = (x_{1},x_{2})$ differs from $t$ by more than $\varepsilon/2$. On the other hand, if $|t - x_{2}| < \varepsilon/2$, we see that $\chi_{Q}([(s,t) - x]/\varepsilon) = 1$, if and only if $|s - x_{1}| < \varepsilon/2$. Next, we use the uniform continuity of $\eta$ on $\operatorname{spt} \mu$ and the 'special form' property to deduce that
\begin{align}\label{form2} \sup & \left\{\left|\varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} \eta(s,t) \, ds - \eta(x) \right| : x = (x_{1},x_{2}) \in \operatorname{spt} \mu \right\}\notag\\
& \leq \sup \left\{\varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} |\eta(s,t) - \eta(x_{1},t)| \, ds : (x_{1},x_{2}) \in \operatorname{spt} \mu \right\} \to 0, \end{align}
as $\varepsilon \to 0$. We write $\| \cdot \|_{\infty}$ for the $L^{\infty}$-norm on $C(\operatorname{spt} \mu)$. Let us consider the continuous linear functionals $\Lambda_{\varepsilon} \colon (C(\operatorname{spt} \mu),\| \cdot \|_{\infty}) \to \mathbb{R}$, defined by
\begin{displaymath} \Lambda_{\varepsilon}(\psi) := \varepsilon^{-1}\int_{\rho_{2}^{-1}(t - \varepsilon/2,t + \varepsilon/2)} \psi \, d\mu, \qquad \psi \in C(\operatorname{spt} \mu). \end{displaymath}
Since $\mu_{t}$ exists, the orbits $\{\Lambda_{\varepsilon}(\psi) : \varepsilon > 0\}$ are bounded subsets of $\mathbb{R}$ for any $\psi \in C(\operatorname{spt} \mu)$. So, it follows from the Banach-Steinhaus theorem, see \cite[Theorem 2.5]{Ru}, that these functionals are uniformly bounded: there exists $C > 0$, independent of $\varepsilon > 0$, such that $|\Lambda_{\varepsilon}(\psi)| \leq C\|\psi\|_{\infty}$. We apply the bound with $\psi = \psi_{\varepsilon}$ defined by
\begin{displaymath} \psi_{\varepsilon}(x) = \varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} \eta(s,t) \, ds - \eta(x), \quad x = (x_{1},x_{2}) \in \operatorname{spt} \mu. \end{displaymath}
Recalling \eqref{form2}, we have
\begin{align*} \limsup_{\varepsilon \to 0} & \left| \varepsilon^{-1} \int_{\rho_{2}^{-1}(t - \varepsilon/2, t + \varepsilon/2)} \left[\varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} \eta(s,t) \, ds - \eta(x) \right] d\mu x \right|\\
& = \limsup_{\varepsilon \to 0} |\Lambda_{\varepsilon}(\psi_{\varepsilon})| \leq C \cdot \limsup_{\varepsilon \to 0} \|\psi_{\varepsilon}\|_{\infty} \stackrel{\eqref{form2}}{=} 0, \end{align*}
which implies that
\begin{align*} \lim_{\varepsilon \to 0} \varepsilon^{-1} & \int_{\rho_{2}^{-1}(t - \varepsilon/2, t + \varepsilon/2)} \left(\varepsilon^{-1} \int_{x_{1} - \varepsilon/2}^{x_{1} + \varepsilon/2} \eta(s,t) \, ds \right) \, d\mu x\\
& = \lim_{\varepsilon \to 0} \varepsilon^{-1} \int_{\rho_{2}^{-1}(t - \varepsilon/2,t + \varepsilon/2)} \eta(x) \, d\mu x =: \int \eta \, d\mu_{t}. \end{align*}
The existence of the former limit is a consequence of this equation, and the \emph{a priori} information on the existence of the latter limit. Combined with \eqref{form30}, this finishes the proof of pointwise convergence in \eqref{form1}.
Next, we fix a compact family of functions $K \subset C(\mathbb{R}^{2})$, and demonstrate that the convergence is uniform on $K$. Let $B \subset \mathbb{R}^{2}$ be a closed ball large enough to contain the supports of all the measures $\mu_{\varepsilon}$, for $0 < \varepsilon \leq 1$, say. Consider the linear functionals
\begin{displaymath} \Gamma_{\varepsilon}(\psi) := \int_{\mathbb{R}} \psi(r,t)\mu_{\varepsilon}(r,t) \, dr, \qquad \psi \in C(B). \end{displaymath}
Since the functionals $\Gamma_{\varepsilon}$ and $\psi \mapsto \Gamma(\psi) := \int \psi \, d\mu_{t}$ only depend on the values of the test function on $B$, it suffices to show that $\Gamma_{\varepsilon} \to \Gamma$ uniformly on $K \cap C(B)$: in fact, we may and will assume that $K \subset C(B)$. This way, we may view the mappings $\Gamma_{\varepsilon}$ not only as a family of functionals on $C(B)$, but also as a family of continuous functions $(K,\|\cdot\|_{L^{\infty}(B)}) \to \mathbb{R}$. Above, we showed that $\Gamma_{\varepsilon}(\psi) \to \Gamma(\psi)$ for every $\psi \in C(B)$: thus, the orbits $\{\Gamma_{\varepsilon}(\psi) : \varepsilon > 0\}$ are bounded for every $\psi \in C(B)$ -- and for every $\psi \in K$, in particular. Applying the Banach-Steinhaus theorem again, we see that
\begin{equation}\label{form32} |\Gamma_{\varepsilon}(\psi)| \leq C\|\psi\|_{L^{\infty}(B)}, \qquad \psi \in C(B), \end{equation}
for some constant $C > 0$ independent of $\varepsilon$. This implies that the functions $\Gamma_{\varepsilon}$ are equicontinuous on $K$: if $\psi \in K$ and $\delta > 0$, we have $|\Gamma_{\varepsilon}(\psi) - \Gamma_{\varepsilon}(\eta)| = |\Gamma_{\varepsilon}(\psi - \eta)| \leq \delta$ as soon as $\|\psi - \eta\|_{L^{\infty}(B)} \leq \delta/C$, for any $0 < \varepsilon \leq 1$. We have now demonstrated that $\{\Gamma_{\varepsilon} \colon K \to \mathbb{R} : 0 < \varepsilon \leq 1\}$ is a pointwise bounded equicontinuous family of functions. By the Arzelà-Ascoli theorem, see \cite[Theorem A5]{Ru}, every sequence in $\{\Gamma_{\varepsilon} : 0 < \varepsilon \leq 1\}$ contains a uniformly convergent subsequence. According to our result on pointwise convergence, the only possible limit of any such sequence is the functional $\psi \mapsto \int \psi \, d\mu_{t}$. This finishes the proof.
\end{proof}
\begin{cor} Let $(\varphi_{\varepsilon})_{\varepsilon > 0}$ be a family of smooth test functions satisfying $\varphi_{\varepsilon} \geq \chi_{\varepsilon}$. Let $\mu$ be a compactly supported Radon measure on $\mathbb{R}^{2}$, and write $\tilde{\mu}_{\varepsilon} := \mu \ast \varphi_{\varepsilon}$. Then
\begin{displaymath} \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta(x;y) \, d\mu_{t} x \, d\mu_{t} y \leq \liminf_{\varepsilon \to 0} \iint_{\mathbb{R} \times \mathbb{R}} \eta((r,t);(s,t))\tilde{\mu}_{\varepsilon}(r,t)\tilde{\mu}_{\varepsilon}(s,t) \, dr \, ds \end{displaymath}
for all non-negative lower semicontinuous functions $\eta \colon \mathbb{R}^{2} \times \mathbb{R}^{2} \to \mathbb{R}$, and for all $t \in \mathbb{R}$ such that $\mu_{t} = \mu_{\rho_{2},t}$ exists.
\end{cor}
\begin{proof} Assume that $\mu_{t}$ exists. Approximating from below, it suffices to prove the inequality for continuous compactly supported functions $\eta \colon \mathbb{R}^{2} \times \mathbb{R}^{2} \to \mathbb{R}$. For such $\eta$, the plan is to first prove equality with $\tilde{\mu}_{\varepsilon}$ replaced by $\mu_{\varepsilon} = \mu \ast \chi_{\varepsilon}$, and then simply apply the estimate $\mu_{\varepsilon} \leq \tilde{\mu}_{\varepsilon}$.
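In slightly more detail, the approximation step can be recorded as follows (a standard sketch): a non-negative lower semicontinuous function $\eta$ is the increasing pointwise limit of non-negative functions $\eta_{k} \in C(\mathbb{R}^{2} \times \mathbb{R}^{2})$ with compact support, and once the asserted inequality is known for each $\eta_{k}$, the monotone convergence theorem and the bound $\eta_{k} \leq \eta$ give
\begin{displaymath} \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta(x;y) \, d\mu_{t} x \, d\mu_{t} y = \lim_{k \to \infty} \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta_{k}(x;y) \, d\mu_{t} x \, d\mu_{t} y \leq \liminf_{\varepsilon \to 0} \iint_{\mathbb{R} \times \mathbb{R}} \eta((r,t);(s,t))\tilde{\mu}_{\varepsilon}(r,t)\tilde{\mu}_{\varepsilon}(s,t) \, dr \, ds. \end{displaymath}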
Fix a non-negative, compactly supported $\eta \in C(\mathbb{R}^{2} \times \mathbb{R}^{2})$. It follows immediately from the previous lemma that
\begin{displaymath} \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta(x;y) \, d\mu_{t}x \, d\mu_{t}y = \int_{\mathbb{R}^{2}} \lim_{\varepsilon \to 0} \int_{\mathbb{R}} \eta((r,t);y)\mu_{\varepsilon}(r,t) \, dr \, d\mu_{t} y. \end{displaymath}
Moreover, the convergence of the inner integrals is uniform in $\varepsilon$ on the compact family of functions $K = \{\eta(\cdot \, ; y) : y \in \operatorname{spt} \mu\} \subset C(\mathbb{R}^{2})$. Hence, the numbers
\begin{displaymath} \left|\int_{\mathbb{R}} \eta((r,t);y)\mu_{\varepsilon}(r,t) \, dr \right|, \qquad y \in \operatorname{spt} \mu, \end{displaymath}
are uniformly bounded by a constant independent of $\varepsilon$. This means that the use of the dominated convergence theorem is justified:
\begin{equation}\label{form31} \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta(x;y) \, d\mu_{t}x \, d\mu_{t}y = \lim_{\varepsilon \to 0} \int_{\mathbb{R}^{2}}\int_{\mathbb{R}}\eta((r,t);y)\mu_{\varepsilon}(r,t) \, dr \, d\mu_{t}y. \end{equation}
We may now estimate as follows:
\begin{align} & \left| \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} \eta(x;y) \, d\mu_{t}x \, d\mu_{t}y - \iint_{\mathbb{R} \times \mathbb{R}} \eta((r,t);(s,t))\mu_{\varepsilon}(r,t)\mu_{\varepsilon}(s,t) \, dr \, ds \right|\notag\\
& \leq \left| \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}}\eta(x;y) \, d\mu_{t}x\,d\mu_{t}y - \int_{\mathbb{R}^{2}}\int_{\mathbb{R}} \eta((r,t);y)\mu_{\varepsilon}(r,t) \, dr \, d\mu_{t}y \right|\notag\\
&\label{form33} \qquad + \left|\int_{\mathbb{R}} \left[ \int_{\mathbb{R}^{2}} \eta((r,t);y) \, d\mu_{t}y - \int_{\mathbb{R}} \eta((r,t);(s,t))\mu_{\varepsilon}(s,t) \,ds \right]\mu_{\varepsilon}(r,t) \, dr \right| \end{align}
As $\varepsilon \to 0$, the first term tends to zero according to \eqref{form31}. To see that the second term vanishes as well, we need to apply the previous lemma again. Namely, we first observe that the outer integration (with respect to $r$) can be restricted to some compact interval $[-R,R]$. The family of continuous mappings $\{\eta((r,t); \cdot) \colon \mathbb{R}^{2} \to \mathbb{R} : r \in [-R,R]\}$ is then compact, so the previous lemma implies that
\begin{displaymath} \int_{\mathbb{R}} \eta((r,t);(s,t))\mu_{\varepsilon}(s,t) \, ds \to \int_{\mathbb{R}^{2}} \eta((r,t);y) \, d\mu_{t} y \end{displaymath}
uniformly with respect to $r \in [-R,R]$, as $\varepsilon \to 0$. An application of \eqref{form32} then shows that the term on line \eqref{form33} converges to zero as $\varepsilon \to 0$. As we mentioned at the beginning of the proof, the assertion of the corollary now follows from the inequality $\mu_{\varepsilon} \leq \tilde{\mu}_{\varepsilon}$.
\end{proof}
The corollary will soon be used to prove an inequality concerning the energies of sliced measures. First, though, let us briefly recall some facts about Fourier transforms of measures on $\mathbb{R}^{n}$. If $\mu$ is a finite Borel measure on $\mathbb{R}^{n}$, then its Fourier transform $\hat{\mu}$ is, by definition, the complex function
\begin{displaymath} \hat{\mu}(x) = \int_{\mathbb{R}^{n}} e^{-ix \cdot y} \, d\mu y, \qquad x \in \mathbb{R}^{n}. \end{displaymath}
It is well-known, see \cite[Lemma 12.12]{Mat3}, that the $s$-energy $I_{s}(\mu)$ of $\mu$ can be expressed in terms of the Fourier transform:
\begin{equation}\label{s-energy} I_{s}(\mu) := \iint \frac{d\mu x \, d\mu y}{|x - y|^{s}} = c_{s,n}\int_{\mathbb{R}^{n}} |x|^{s - n}|\hat{\mu}(x)|^{2} \, dx, \qquad 0 < s < n. \end{equation}
This will be applied with $n = 1$ below.
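For the reader's convenience, here is the $n = 1$ instance of \eqref{s-energy} in the form in which it will be applied, with a non-negative $f \in L^{1}(\mathbb{R})$ playing the role of the density $\tilde{\mu}_{\varepsilon}(\cdot,t)$ below (this is only a restatement; the precise statement is \cite[Lemma 12.12]{Mat3}):
\begin{displaymath} \iint_{\mathbb{R} \times \mathbb{R}} \frac{f(r)f(s) \, dr \, ds}{|r - s|^{d - 1}} = c_{d - 1,1}\int_{\mathbb{R}} |\xi|^{d - 2}\left|\int_{\mathbb{R}} e^{-ir\xi}f(r) \, dr \right|^{2} \, d\xi, \qquad 1 < d < 2. \end{displaymath}
Heuristically, this reflects the fact that the distributional Fourier transform of the kernel $r \mapsto |r|^{-(d - 1)}$ is a constant multiple of $\xi \mapsto |\xi|^{d - 2}$, combined with Parseval's formula.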
\begin{lemma} Let $\mu$ be a compactly supported Radon measure on $\mathbb{R}^{2}$. Then, with $\mu_{t}$ as in the previous lemma, we have
\begin{displaymath} \int_{\mathbb{R}} I_{d - 1}(\mu_{t}) \, dt \lesssim_{d} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{d - 2}|\hat{\mu}(x)|^{2} \, dx, \quad 1 < d < 2. \end{displaymath}
\end{lemma}
\begin{proof} Choose a family of test functions $(\varphi_{\varepsilon})_{\varepsilon > 0}$ such that $\chi_{\varepsilon} \leq \varphi_{\varepsilon}$, and $\|\varphi_{\varepsilon}\|_{L^{1}(\mathbb{R}^{2})} \leq 2$. Applying the previous corollary with $\tilde{\mu}_{\varepsilon} = \mu \ast \varphi_{\varepsilon}$ and $\eta(x,y) = |x - y|^{1 - d}$, we estimate
\begin{align*} I_{d - 1}(\mu_{t}) & = \iint_{\mathbb{R}^{2} \times \mathbb{R}^{2}} |x - y|^{1 - d} \, d\mu_{t}x \, d\mu_{t}y\\
& \leq \liminf_{\varepsilon \to 0} \iint_{\mathbb{R} \times \mathbb{R}} |r - s|^{1 - d} \tilde{\mu}_{\varepsilon}(r,t)\tilde{\mu}_{\varepsilon}(s,t) \, dr \, ds. \end{align*}
Integrating with respect to $t \in \mathbb{R}$, applying (\ref{s-energy}) and using Plancherel yields
\begin{align*} \int_{\mathbb{R}} I_{d - 1}(\mu_{t}) \, dt & \lesssim_{d} \liminf_{\varepsilon \to 0} \int_{\mathbb{R}} \int_{\mathbb{R}} |s|^{d - 2} \left|\int_{\mathbb{R}} e^{irs} \tilde{\mu}_{\varepsilon}(r,t) \, dr \right|^{2} \, ds \, dt\\
& = \liminf_{\varepsilon \to 0}\int_{\mathbb{R}} |s|^{d - 2} \int_{\mathbb{R}} \left| \int_{\mathbb{R}} e^{irs}\tilde{\mu}_{\varepsilon}(r,t) \, dr\right|^{2} \, dt \, ds\\
& \asymp \liminf_{\varepsilon \to 0} \int_{\mathbb{R}} |s|^{d - 2} \int_{\mathbb{R}} \left| \int_{\mathbb{R}} e^{itu} \int_{\mathbb{R}} e^{irs} \tilde{\mu}_{\varepsilon}(r,u) \, dr \, du \right|^{2} \, dt \, ds\\
& = \liminf_{\varepsilon \to 0} \iint_{\mathbb{R} \times \mathbb{R}} |s|^{d - 2} \left|\iint_{\mathbb{R} \times \mathbb{R}} e^{i(s,t) \cdot (r,u)} \tilde{\mu}_{\varepsilon}(r,u) \, dr \, du \right|^{2} \, ds \, dt\\
& = \liminf_{\varepsilon \to 0} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{d - 2} |\widehat{\tilde{\mu}_{\varepsilon}}(x)|^{2} \, dx\\
& = \liminf_{\varepsilon \to 0} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{d - 2} |\widehat{\varphi_{\varepsilon}}(x)|^{2}|\hat{\mu}(x)|^{2} \, dx\\
& \lesssim \liminf_{\varepsilon \to 0} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{d - 2} |\hat{\mu}(x)|^{2} \, dx\\
& = \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{d - 2} |\hat{\mu}(x)|^{2} \, dx, \end{align*}
as claimed.
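In the chain above, discarding the factor $|\widehat{\varphi_{\varepsilon}}|^{2}$ in the penultimate step used only the trivial bound (recorded here as a sketch; recall that $\varphi_{\varepsilon} \geq \chi_{\varepsilon} \geq 0$ and $\|\varphi_{\varepsilon}\|_{L^{1}(\mathbb{R}^{2})} \leq 2$):
\begin{displaymath} |\widehat{\varphi_{\varepsilon}}(x)| = \left|\int_{\mathbb{R}^{2}} e^{-ix \cdot y}\varphi_{\varepsilon}(y) \, dy \right| \leq \int_{\mathbb{R}^{2}} \varphi_{\varepsilon}(y) \, dy = \|\varphi_{\varepsilon}\|_{L^{1}(\mathbb{R}^{2})} \leq 2, \qquad x \in \mathbb{R}^{2}. \end{displaymath}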
\end{proof}
Our third and final lemma concerns the set of divergence of certain parametrised power series. The result is due to Peres and Schlag, and the proof can be found in \cite{PS}:
\begin{lemma}\label{series} Let $U \subset \mathbb{R}$ be an open interval, and let $(h_{j})_{j \in \mathbb{N}} \subset C^{\infty}(U)$. Suppose that there exist finite constants $B > 1$, $R > 1$, $C > 0$, $L \in \mathbb{N}$ and $C_{l} > 0$, $l \in \{1,\ldots,L\}$, such that $\|h_{j}^{(l)}\|_{\infty} \leq C_{l}B^{jl}$ and $\|h_{j}\|_{1} \leq CR^{-j}$, for $j \in \mathbb{N}$ and $1 \leq l \leq L$. Then, if $1 \leq \tilde{R} < R$ and $\alpha \in (0,1)$ is such that $B^{\alpha}\tilde{R}^{\alpha/L} \leq R/\tilde{R} \leq B\tilde{R}^{1/L}$, we have the estimate
\begin{displaymath} \dim \left\{ \lambda \in U : \sum_{j \in \mathbb{N}} \tilde{R}^{j}|h_{j}(\lambda)| = \infty\right\} \leq 1 - \alpha. \end{displaymath}
\end{lemma}
\begin{remark} The exact reference to this result is \cite[Lemma 3.1]{PS}. The result there is formulated with $L = \infty$, but the proof actually yields the slightly stronger statement above -- and we will need it. The stronger version is observed by Peres and Schlag themselves on the first few lines of \cite[\S3.2]{PS}.
\end{remark}
\section{The Main Lemma}
Now we are equipped to study the dimension of the sliced measures
\begin{displaymath} \mu_{\lambda,t} := (\Psi_{\lambda\sharp}\mu)_{\rho_{2},t}, \end{displaymath}
where $\Psi_{\lambda} \colon \Omega \to \mathbb{R}^{2}$ is the function introduced in Definition \ref{psi}. Thus, we are not attempting to slice the measure $\mu$ with respect to the transversal projections $\pi_{\lambda}$. Rather, we first map the measure into $\mathbb{R}^{2}$ with the mapping $\Psi_{\lambda}$, which distorts dimension only slightly, and then slice the image measure $\Psi_{\lambda\sharp}\mu$. The reason for this is simple: on $\Omega$, where $\mu$ lives, we could not use the Fourier-analytic machinery required to prove Lemma \ref{thm1} below.
\begin{lemma}\label{thm1} Let $\mu$ be a Radon measure on $\Omega$, and let $1 < s < 2$.
\begin{itemize}
\item[(i)] If the projections $\pi_{\lambda}$ satisfy the regularity and transversality assumptions (\ref{transversality}) and (\ref{regularity}) with $\tau = 0$, then $I_{s}(\mu) < \infty$ implies
\begin{equation}\label{thm1Ineq} \dim \left\{\lambda \in J : \int_{\mathbb{R}} I_{s - 1}(\mu_{\lambda,t}) \, dt = \infty\right\} \leq 2 - s. \end{equation}
\item[(ii)] If the projections $\pi_{\lambda}$ only satisfy (\ref{transversality}) and (\ref{regularity}) for some $\tau > 0$ so small that $s + \tau^{1/3} < 2$, we still have (\ref{thm1Ineq}), assuming that $I_{t}(\mu) < \infty$ for some $t \geq s + \varepsilon(\tau)$, where $\varepsilon(\tau) > 0$ is a constant depending on $\tau$. Moreover, $\varepsilon(\tau) \to 0$ as $\tau \to 0$.
\end{itemize}
\end{lemma}
\begin{proof} Assume that $I_{t}(\mu) < \infty$ for some $t \geq s$. We aim to determine the range of parameters $\tau > 0$ for which (\ref{thm1Ineq}) holds under this hypothesis. According to our second lemma we have
\begin{displaymath} \int_{\mathbb{R}} I_{s - 1}(\mu_{\lambda,t}) \, dt \lesssim_{s} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx \end{displaymath}
for any $\lambda \in J$. We will attempt to show that the integral on the right hand side is finite for as many $\lambda \in J$ as possible. This is achieved by expressing the integral in the form of a power series and then applying Lemma \ref{series}. Some of the work in verifying the conditions of Lemma \ref{series} involves practically replicating ingredients from Peres and Schlag's original proof of Theorem \ref{sobolev}. Instead of using phrases such as 'we then argue as in \cite{PS}', we provide all the details for the reader's convenience, some of them in the Appendix.
Fix a compact subinterval $I \subset J$, and let $\lambda \in I$. We start by splitting our integral in two pieces:
\begin{align}\label{form5} \int_{\mathbb{R}^{2}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx & = \int_{\mathcal{C}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx\\
&\label{form6} + \int_{\mathbb{R}^{2} \setminus \mathcal{C}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx,
\end{align}
where $\mathcal{C}$ is the \emph{vertical} cone
\begin{displaymath} \mathcal{C} = \{z : -\pi/4 \leq \arg z \leq \pi/4\} \cup \{\bar{z} : -\pi/4 \leq \arg z \leq \pi/4\}. \end{displaymath}
Here $\arg z$ is shorthand for \emph{signed angle formed by $z$ with the positive $y$-axis}, and $\bar{z}$ refers to complex conjugation. With this choice of $\mathcal{C}$, we have $|\rho_{1}(x)| \gtrsim |x|$ for $x \in \mathbb{R}^{2} \setminus \mathcal{C}$, which means that the integral on line (\ref{form6}) is easily estimated with the aid of equation \eqref{s-energy} and the H\"older bound \eqref{holder}:
\begin{align*} \int_{\mathbb{R}^{2} \setminus \mathcal{C}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx & \lesssim_{s} \int |x|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx\\
& \asymp_{s} \iint |x - y|^{-s} \, d\Psi_{\lambda\sharp}\mu x \, d\Psi_{\lambda\sharp}\mu y\\
& = \iint_{\Omega \times \Omega} |\Psi_{\lambda}(x) - \Psi_{\lambda}(y)|^{-s} \, d\mu x \, d\mu y\\
& \lesssim_{I,\tau} \iint_{\Omega \times \Omega} d(x,y)^{-(1 + \tau)s} \, d\mu x \, d\mu y \end{align*}
Thus the integral over $\mathbb{R}^{2} \setminus \mathcal{C}$ is finite for \emph{all} $\lambda \in I$, as soon as $(1 + \tau)s \leq t$, which sets the first restriction for the admissible parameters $\tau > 0$.
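Indeed, splitting the double integral according to whether $d(x,y) \leq 1$ or $d(x,y) > 1$, the finiteness follows from the elementary comparison below (a sketch, using only that $(1 + \tau)s \leq t$ and that $\mu$ is a finite measure, as it is in the applications below):
\begin{displaymath} \iint_{\Omega \times \Omega} d(x,y)^{-(1 + \tau)s} \, d\mu x \, d\mu y \leq \iint_{\{d(x,y) \leq 1\}} d(x,y)^{-t} \, d\mu x \, d\mu y + \mu(\Omega)^{2} \leq I_{t}(\mu) + \mu(\Omega)^{2} < \infty. \end{displaymath}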
Write $\mathcal{C} = \mathcal{C}^{1} \cup \mathcal{C}^{2} \cup \mathcal{C}^{3} \cup \mathcal{C}^{4}$, where $\mathcal{C}^{i}$ is the intersection of $\mathcal{C}$ with the $i^{th}$ quadrant in $\mathbb{R}^{2}$. We will show that, if $\tau > 0$ is small enough, then the integral of $|\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2}$ over each of these smaller cones can be infinite only for parameters $\lambda$ in a set of dimension at most $2 - s$. This will prove the lemma. The treatment of each of the cones $\mathcal{C}^{i}$ is similar, so we restrict attention to $\mathcal{C}^{2}$, and, for simplicity, write $\mathcal{C} := \mathcal{C}^{2} = \{z \in \mathbb{C} : 0 \leq \arg z \leq \pi/4\}$ (thus $\mathcal{C}$ lies in the upper left quadrant of the plane). Further split $\mathcal{C}$ into sub-cones $\mathcal{C}_{i}$, $i = 2,3,\ldots$, where $\mathcal{C}_{i} = \{z : 2^{-i - 1}\pi < \arg z \leq 2^{-i}\pi\}$. The $y$-axis is not covered, but this has no effect on integration:
\begin{align} \int_{\mathcal{C}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx & = \sum_{i = 2}^{\infty} \int_{\mathcal{C}_{i}} |\rho_{1}(x)|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx\notag\\
&\label{form7} \asymp_{s} \sum_{i = 2}^{\infty} 2^{i(2 - s)} \int_{\mathcal{C}_{i}} |x|^{s - 2} |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \,dx. \end{align}
The passage to (\ref{form7}) follows by writing $|\rho_{1}(x)| = |\rho_{1}(x/|x|)|\cdot |x|$ and noting that $|\rho_{1}(\zeta)| = \sin (\arg \zeta) \asymp \arg \zeta$ for $\zeta \in \mathcal{C} \cap S^{1}$. To prove that (\ref{form7}) is finite for as many $\lambda \in I$ as possible, we need to replace $\chi_{\mathcal{C}_{i}}$ by something smoother. To this end, choose an infinitely differentiable function $\varphi$ on $\mathbb{R}$ satisfying
\begin{displaymath} \chi_{[-1,1]} \leq \varphi \leq \chi_{[-2,2]}. \end{displaymath}
Then let $\varphi_{i}$ be defined by $\varphi_{i}(t) = \varphi(c_{i}t + a_{i})$, where the numbers $a_{i}$ and $c_{i}$ are so chosen that
\begin{displaymath} \chi_{I_{i}} \leq \varphi_{i} \leq \chi_{2I_{i}}, \end{displaymath}
with $I_{i} = [\pi2^{-i - 1},\pi2^{-i}]$, and $2I_{i}$ denotes the interval with the same mid-point and twice the length as $I_{i}$. Clearly $c_{i} \asymp 2^{i}$. Now
\begin{displaymath} \int_{\mathcal{C}_{i}} |x|^{s - 2} |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \,dx \leq \int \varphi_{i}(\arg x)|x|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx. \end{displaymath}
With our eyes fixed on applying Lemma \ref{series}, define
\begin{displaymath} h_{i}(\lambda) := 2^{-i}\int \varphi_{i}(\arg x)|x|^{s - 2}|\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx. \end{displaymath}
The lemma will be proved by showing that
\begin{equation}\label{form8} \dim \left\{\lambda \in I : \sum_{i = 2}^{\infty} 2^{i(3 - s)}h_{i}(\lambda) = \infty\right\} \leq 2 - s, \end{equation}
provided that $\tau > 0$ is small. In order to apply Lemma \ref{series}, we now need to estimate both the derivatives and integrals of the functions $h_{i}$.
\subsection{The Integrals} Write $A_{j} := \{x \in \mathbb{R}^{2} : 2^{j - 1} \leq |x| \leq 2^{j}\}$ and $A_{ij} := \{x \in A_{j} : \arg x \in 2I_{i}\}$ for $i = 2,3,\ldots$ and $j \in \mathbb{Z}$. Then, by the choice of $\varphi_{i}$, we have
\begin{equation}\label{form9} h_{i}(\lambda) \lesssim_{s} 2^{-i} \sum_{j \in \mathbb{Z}} 2^{j(s - 2)} \int_{A_{ij}} |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx. \end{equation}
Using basic trigonometry, one sees that $x_{1} \asymp -2^{j - i}$ and $x_{2} \asymp 2^{j}$ for $x = (x_{1},x_{2}) \in A_{ij}$. In other words, one may choose an absolute constant $a \geq 1$ such that the sets $A_{ij}$ are covered by the rectangles $Q_{ij} = [-a2^{j - i},-a^{-1}2^{j - i}] \times [a^{-1}2^{j},a2^{j}]$, see the picture below.
\begin{figure}
\caption{The sets $A_{ij}$.}
\end{figure}
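For the record, the trigonometry behind the previous estimates can be sketched as follows (using the convention that $\arg x$ is the signed angle between $x$ and the positive $y$-axis, so that $x = |x|(-\sin(\arg x),\cos(\arg x))$ in the quadrant under consideration): for $x \in A_{ij}$ we have $|x| \asymp 2^{j}$ and $\arg x \in 2I_{i}$, whence $\arg x \asymp 2^{-i}$, and therefore
\begin{displaymath} x_{2} = |x|\cos(\arg x) \asymp 2^{j} \quad \text{and} \quad x_{1} = -|x|\sin(\arg x) \asymp -2^{j - i}, \end{displaymath}
since $\sin \theta \asymp \theta$ and $\cos \theta \asymp 1$ for $0 \leq \theta \leq 5\pi/16$.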
Next, let $\eta$ be an infinitely differentiable function on $\mathbb{R}$, satisfying $\operatorname{spt} \eta \subset (a^{-2},2a)$ and $\eta|[a^{-1},a] \equiv 1$. Then the function $\eta_{ij} := \widetilde{\eta_{j - i}}\times\eta_{j}$ defined by
\begin{displaymath} \widetilde{\eta_{j - i}}\times\eta_{j}(x) := \widetilde{\eta_{j - i}}(x_{1})\eta_{j}(x_{2}), \qquad x = (x_{1},x_{2}) \in \mathbb{R}^{2}, \end{displaymath}
is identically one on $Q_{ij}$ (and thus $A_{ij}$) for every pair of indices $i = 2,3,\ldots$ and $j \in \mathbb{Z}$, where $\eta_{k}(t) := \eta(2^{-k}t)$ and $\tilde{f}(t) := f(-t)$ for any function $f \colon \mathbb{R} \to \mathbb{R}$. Using some basic properties of the Fourier transform, and the identity
\begin{displaymath} \Psi_{\lambda}(x) - \Psi_{\lambda}(y) = (d(x,y)\partial_{\lambda}\Phi_{\lambda}(x,y),d(x,y)\Phi_{\lambda}(x,y)), \end{displaymath}
we may now estimate
\begin{align*} \int_{A_{ij}} & |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx \leq \int \eta_{ij} |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx = \iint \widehat{\eta_{j - i} \times \widetilde{\eta_{j}}}\,(x - y) \, d\Psi_{\lambda\sharp}\mu x \, d\Psi_{\lambda\sharp}\mu y\\
& = \iint_{\Omega \times \Omega} \widehat{\eta_{j - i}}(\rho_{1}[\Psi_{\lambda}(x) - \Psi_{\lambda}(y)])\widehat{\widetilde{\eta_{j}}}(\rho_{2}[\Psi_{\lambda}(x) - \Psi_{\lambda}(y)]) \, d\mu x \, d\mu y\\
& = 2^{2j - i}\iint_{\Omega \times \Omega} \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y))\bar{\hat{\eta}}(2^{j}r\Phi_{\lambda}(x,y)) \, d\mu x \, d\mu y, \end{align*}
where $r := d(x,y)$. To bound the $\lambda$-integral of the expression on the last line, we need
\begin{lemma}\label{PSLemma} Let $\gamma$ be a smooth function supported on $J$. Then, for any $j \in \mathbb{Z}$ and $q \in \mathbb{N}$, we have the estimate
\begin{displaymath} \left| \int_{\mathbb{R}} \gamma(\lambda)\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y))\bar{\hat{\eta}}(2^{j}r\Phi_{\lambda}(x,y)) \, d\lambda \right| \lesssim_{d(\Omega),\gamma,q} (1 + 2^{j}r^{1 + A\tau})^{-q}, \end{displaymath}
where $A \geq 1$ is some absolute constant.
\end{lemma}
\begin{proof} The proof given in \cite{PS} for \cite[Lemma 4.6]{PS} extends to our situation. In fact, the statement would be virtually the same as in \cite[Lemma 4.6]{PS} without the presence of the factor
\begin{displaymath} \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y)). \end{displaymath}
Unfortunately, the proof of \cite[Lemma 4.6]{PS} requires something more delicate than 'bringing the absolute values inside the integral', so this factor cannot be completely dismissed. We discuss the lengthy details in Appendix A.
\end{proof}
Now we are prepared to estimate the $L^{1}(I)$-norms of the functions $h_{i}$, as required by Lemma \ref{series}. Fix any $r \in (0,1)$, and let $\gamma$ be a smooth function supported on $J$ and identically one on $I$. For brevity, write
\begin{displaymath} \Gamma_{x,y}^{ij}(\lambda) := \gamma(\lambda)\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y))\bar{\hat{\eta}}(2^{j}r\Phi_{\lambda}(x,y)). \end{displaymath}
Then, use (\ref{form9}) and Lemma \ref{PSLemma} above:
\begin{align*} \sum_{i = 2}^{\infty} 2^{i(r + 1)} \int_{I} h_{i}(\lambda) \, d\lambda & \lesssim_{s} \sum_{i = 2}^{\infty} 2^{ir} \sum_{j \in \mathbb{Z}} 2^{j(s - 2)} \int_{\mathbb{R}} \gamma(\lambda) \int_{A_{ij}} |\widehat{\Psi_{\lambda\sharp}\mu}(x)|^{2} \, dx \, d\lambda\\
& \leq \sum_{i = 2}^{\infty} 2^{i(r - 1)} \sum_{j \in \mathbb{Z}} 2^{js} \iint_{\Omega \times \Omega} \left| \int_{\mathbb{R}} \Gamma_{x,y}^{ij}(\lambda) \, d\lambda \right| \, d\mu x \, d\mu y\\
& \lesssim_{I} \sum_{i = 2}^{\infty} 2^{i(r - 1)} \iint_{\Omega \times \Omega} \sum_{j \in \mathbb{Z}} 2^{js} (1 + 2^{j}d(x,y)^{1 + A\tau})^{-2} \, d\mu x \, d\mu y\\
& \lesssim_{s} \sum_{i = 2}^{\infty} 2^{i(r - 1)} \iint_{\Omega \times \Omega} d(x,y)^{-(1 + A\tau)s} \, d\mu x \, d\mu y
\end{align*}
This sum is finite if $\tau > 0$ is so small that $(1 + A\tau)s \leq t$. Under this hypothesis, we conclude that
\begin{equation}\label{form10} \|h_{i}\|_{L^{1}(I)} \lesssim_{I,r} 2^{-i(r + 1)}, \qquad r \in (0,1). \end{equation}
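For the reader's convenience, the dyadic summation used in the last step of the long chain of inequalities above is the following standard sketch: writing $R := d(x,y)^{1 + A\tau}$ and splitting the sum at $2^{j} \leq R^{-1}$,
\begin{displaymath} \sum_{j \in \mathbb{Z}} 2^{js}(1 + 2^{j}R)^{-2} \leq \sum_{2^{j} \leq R^{-1}} 2^{js} + R^{-2}\sum_{2^{j} > R^{-1}} 2^{j(s - 2)} \lesssim_{s} R^{-s} + R^{-2} \cdot R^{2 - s} \asymp R^{-s}, \end{displaymath}
since $0 < s < 2$, both sums being geometric.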
\subsection{The Derivatives}
First, we need to find out the Fourier transform of the function $x \mapsto \varphi_{j}(\arg x)|x|^{s - 2}$. To this end, note that $\varphi$ is certainly smooth enough to have an absolutely convergent Fourier series representation:
\begin{displaymath} \varphi_{j}(\theta) = \sum_{k \in \mathbb{Z}} \hat{\varphi}_{j}(k)e^{ik\theta}. \end{displaymath}
Thus
\begin{displaymath} \varphi_{j}(\arg x)|x|^{s - 2} = |x|^{s - 2}\sum_{k \in \mathbb{Z}} \hat{\varphi}_{j}(k) e^{ik\arg x} = \sum_{k \in \mathbb{Z}} |x|^{s - 2 - |k|}P_{j,k}(x), \end{displaymath}
where
\begin{displaymath} P_{j,k}(x) := |x|^{|k|}\hat{\varphi}_{j}(k)e^{ik\arg x} \end{displaymath}
is a harmonic polynomial of degree $|k|$ in $\mathbb{R}^{2}$, which is also homogeneous of degree $|k|$.\footnote{In complex notation, $P_{j,k}(z) = i^{-k}\hat{\varphi}_{j}(k)z^{k}$ if $k \geq 0$, and $P_{j,k} = i^{-k}\hat{\varphi}_{j}(k)\bar{z}^{|k|}$ if $k < 0$. The factor $i^{-k}$ results from the fact that $\arg z = \operatorname{Arg} z - \pi/2$ with our definition of $\arg$.} The Fourier transform of each term $K_{j,k,s}(x) := P_{j,k}(x)|x|^{s - 2 - |k|}$ can be computed by an explicit formula given in \cite[Chapter IV, Theorem 4.1]{SW}:
\begin{displaymath} \hat{K}_{j,k,s}(x) = i^{-|k|}\pi^{1-s}\frac{\Gamma\left(\frac{|k| + s}{2}\right)}{\Gamma\left(\frac{2 + |k| - s}{2}\right)}P_{j,k}(x)|x|^{-|k| - s}.\end{displaymath}
Thus, the Fourier transform of $x \mapsto \varphi_{j}(\arg x)|x|^{s - 2}$ is the function
\begin{displaymath} F_{j}(x) = \pi^{1-s}|x|^{-s}\sum_{k \in \mathbb{Z}} i^{-|k|}\frac{\Gamma\left(\frac{|k| + s}{2}\right)}{\Gamma\left(\frac{2 + |k| - s}{2}\right)}\hat{\varphi}_{j}(k)H_{k}(x), \end{displaymath}
where $H_{k}(x) = e^{ik\arg x}$. We may now rewrite $h_{j}$ as
\begin{align*} h_{j}(\lambda) & = 2^{-j}\iint F_{j}(x - y) \, d\Psi_{\lambda\sharp}\mu(x) \, d\Psi_{\lambda\sharp}\mu(y)\\
& = 2^{-j}\iint F_{j}(\Psi_{\lambda}(x) - \Psi_{\lambda}(y)) \, d\mu x \, d\mu y, \end{align*}
whence, in order to evaluate the $\partial_{\lambda}^{(l)}$-derivatives of $h_{j}$, we only need to consider the corresponding derivatives of the mappings $\lambda \mapsto F_{j}(\Psi_{\lambda}(x) - \Psi_{\lambda}(y))$ for arbitrary $x,y \in \Omega$. From the bounds we obtain for these derivatives, it will be clear that if $I_{t}(\mu) < \infty$ for some sufficiently large $t \geq s$, then exchanging the order of differentiation and integration is legitimate. For fixed $x,y \in \Omega$, the mapping $\lambda \mapsto F_{j}(\Psi_{\lambda}(x) - \Psi_{\lambda}(y))$ is the composition of $F_{j}$ with the path $\lambda \mapsto \Psi_{\lambda}(x) - \Psi_{\lambda}(y) =: \gamma(\lambda)$, whence $\partial_{\lambda}^{(l)}F_{j}(\Psi_{\lambda}(x) - \Psi_{\lambda}(y)) = (F_{j} \circ \gamma)^{(l)}(\lambda)$.
\begin{proposition}\label{Pderivatives} The derivative $(F_{j} \circ \gamma)^{(l)}(\lambda)$ consists of finitely many terms of the form
\begin{displaymath} \partial^{\beta}F_{j}(\gamma(\lambda))\tilde{\gamma}_{1}(\lambda)\cdots\tilde{\gamma}_{|\beta|}(\lambda), \end{displaymath}
where $\beta$ is a multi-index of length $|\beta| \leq l$ and $\tilde{\gamma}_{j} = \gamma_{i}^{(k)}$ for some $i \in \{1,2\}$ and $k \leq l$.
\end{proposition}
\begin{proof} To get the induction started, note that
\begin{displaymath} (F_{j} \circ \gamma)'(\lambda) = \nabla F_{j}(\gamma(\lambda)) \cdot \gamma'(\lambda) = \partial_{1}F_{j}(\gamma(\lambda))\gamma_{1}'(\lambda) + \partial_{2}F_{j}(\gamma(\lambda))\gamma_{2}'(\lambda). \end{displaymath}
The expression on the right is certainly of the correct form. Then assume that the claim holds up to some $l \geq 1$. Then $(F_{j} \circ \gamma)^{(l + 1)}(\lambda)$ is the sum of the \emph{derivatives} of the terms occurring in the expression for $(F_{j} \circ \gamma)^{(l)}(\lambda)$. Take one such term $\partial^{\beta}F_{j}(\gamma(\lambda))\tilde{\gamma}_{1}(\lambda)\cdots\tilde{\gamma}_{|\beta|}(\lambda)$. The product rule shows that the derivative of this term is
\begin{displaymath} [\nabla \partial^{\beta}F_{j}(\gamma(\lambda)) \cdot \gamma'(\lambda)]\tilde{\gamma}_{1}(\lambda)\cdots\tilde{\gamma}_{|\beta|}(\lambda) + \partial^{\beta}F_{j}(\gamma(\lambda))\sum_{j = 1}^{|\beta|} \tilde{\gamma}_{1}(\lambda) \cdots \tilde{\gamma}_{j}'(\lambda) \cdots \tilde{\gamma}_{|\beta|}(\lambda) \end{displaymath}
The claim now follows immediately from this formula.
\end{proof}
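To make the structure of these terms concrete, here is the case $l = 2$ written out (a routine computation, included only for illustration):
\begin{displaymath} (F_{j} \circ \gamma)''(\lambda) = \sum_{a,b = 1}^{2} \partial_{a}\partial_{b}F_{j}(\gamma(\lambda))\gamma_{a}'(\lambda)\gamma_{b}'(\lambda) + \sum_{a = 1}^{2} \partial_{a}F_{j}(\gamma(\lambda))\gamma_{a}''(\lambda). \end{displaymath}
The terms in the first sum are of the stated form with $|\beta| = 2$ and two first-order factors, and those in the second sum with $|\beta| = 1$ and a single second-order factor.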
At this point we state the estimate we aim to prove:
\begin{claim}\label{Fderivatives} Let, as before, $\gamma(\lambda) = \gamma_{x,y}(\lambda) = \Psi_{\lambda}(x) - \Psi_{\lambda}(y)$. Then
\begin{displaymath} |(F_{j} \circ \gamma)^{(l)}(\lambda)| \lesssim_{I,l,s} 2^{j(l + 1)}d(x,y)^{-s - \tau(l^{2} + l + s)}, \qquad \lambda \in I,\: l \geq 0. \end{displaymath}
\end{claim}
Once this is established, we will immediately obtain
\begin{displaymath} \|h_{j}^{(l)}\|_{L^{\infty}(I)} \lesssim_{I,l,s} 2^{jl} \iint d(x,y)^{-s - \tau(l(l + 1) + s)} \, d\mu x \, d\mu y = 2^{jl}I_{s + \tau(l(l + 1) + s)}(\mu), \end{displaymath}
so that Lemma \ref{series} can be applied with $B = 2$ and any $L \in \mathbb{N}$ such that
\begin{equation}\label{form11} s + \tau(L(L + 1) + s) \leq t. \end{equation}
By Proposition \ref{Pderivatives}, it suffices to prove Claim \ref{Fderivatives} for all products of the form $\partial^{\beta} F_{j}(\gamma(\lambda))\tilde{\gamma}_{1}(\lambda)\cdots\tilde{\gamma}_{|\beta|}(\lambda)$, where $|\beta| \leq l$ and $\tilde{\gamma}_{j} = \gamma_{i}^{(k)}$ for some $i \in \{1,2\}$ and $k \leq l$. First of all,
\begin{displaymath} |\tilde{\gamma}_{j}(\lambda)| \leq |\gamma^{(k)}(\lambda)| = |\partial_{\lambda}^{k}(\Psi_{\lambda}(x) - \Psi_{\lambda}(y))| \lesssim_{I,l,\tau} d(x,y)^{1 - (k + 1)\tau} \lesssim d(x,y)^{1 - (l + 1)\tau} \end{displaymath}
according to the regularity assumption (\ref{regularity}). This yields
\begin{displaymath} |\partial^{\beta} F_{j}(\gamma(\lambda))\tilde{\gamma}_{1}(\lambda)\cdots\tilde{\gamma}_{|\beta|}(\lambda)| \lesssim_{I,l} |\partial^{\beta}F_{j}(\gamma(\lambda))|d(x,y)^{|\beta|(1 - (l + 1)\tau)}. \end{displaymath}
Next we will prove that $|\partial^{\beta}F_{j}(\gamma(\lambda))| \lesssim_{I,l,s} 2^{j(l + 1)}d(x,y)^{-(1 + \tau)(s + |\beta|)}$ for multi-indices $\beta$ of length $|\beta| \leq l$. This will prove Claim \ref{Fderivatives}. One of the factors in the definition of $F_{j}$ is the Riesz kernel $x \mapsto k_{s}(x) = |x|^{-s}$, the expression of which does not depend on $j \geq 1$. It is easily checked that $|\partial^{\beta}k_{s}(x)| \lesssim_{l} s^{|\beta|}|x|^{-s - |\beta|}$, if $|\beta| \leq l$. What about the other factor? First we need the following estimate:
\begin{equation}\label{form12} |\partial^{\beta}H_{k}(x)| \lesssim_{l} |k|^{|\beta|}|x|^{-|\beta|} \leq |k|^{l}|x|^{-|\beta|}, \qquad k \in \mathbb{Z}, \: |\beta| \leq l. \end{equation}
Note that $H_{k}(z) = i^{-k}z^{k}/|z|^{k}$ or $H_{k}(z) = i^{-k}\bar{z}^{|k|}/|z|^{|k|}$ for $z \in \mathbb{C} = \mathbb{R}^{2}$ (depending on whether $k \geq 0$ or $k < 0$). First,
\begin{displaymath} |\partial^{\beta}z^{k}| = |\partial^{\beta}(x + iy)^{k}| = |k(k - 1)\cdots(k - |\beta| + 1)(x + iy)^{k - |\beta|}| \leq |k|^{|\beta|}|z|^{k - |\beta|}, \end{displaymath}
and the same estimate holds for $\partial^{\beta}\bar{z}^{|k|}$. Second, the estimate for the derivatives of the Riesz kernel yields $|\partial^{\beta}|z|^{-k}| \lesssim_{l} |k|^{|\beta|}|z|^{-k - |\beta|}$ for $k \geq 0$ and $|\beta| \leq l$, and (\ref{form12}) then follows by applying the Leibniz formula.
Mostly for convenience,\footnote{Any estimate of the form $\Gamma(x + \alpha)/\Gamma(x) \lesssim x^{c(\alpha)}$ would suffice for our purposes.} we use the well-known fact that $\Gamma(x + \alpha)/\Gamma(x) \asymp x^{\alpha}$ for $\alpha > 0$ and large $x > 0$. In particular,
\begin{displaymath} \frac{\Gamma\left(\frac{|k| + s}{2}\right)}{\Gamma\left(\frac{2 + |k| - s}{2}\right)} = \frac{\Gamma\left(\frac{2 + |k| - s}{2} + s - 1\right)}{\Gamma\left(\frac{2 + |k| - s}{2}\right)} \lesssim |k|^{s - 1}. \end{displaymath}
Now we may use the rapid decay bound $|\hat{\varphi}_{j}(k)| = c_{j}^{-1}|\hat{\varphi}(k/c_{j})| \lesssim_{l} 2^{j(l + 1)}|k|^{-(l + 2)}$ to conclude that
\begin{align*} \left| \partial^{\beta} \sum_{k \in \mathbb{Z}} i^{-|k|}\frac{\Gamma\left(\frac{|k| + s}{2}\right)}{\Gamma\left(\frac{2 + |k| - s}{2}\right)}\hat{\varphi}_{j}(k)H_{k}(x) \right| & \lesssim_{l} 2^{j(l + 1)} \sum_{k \in \mathbb{Z}} |k|^{-(l + 2)}|k|^{s - 1}|\partial^{\beta}H_{k}(x)|\\
& \lesssim 2^{j(l + 1)}|x|^{-|\beta|} \sum_{k \in \mathbb{Z}} |k|^{s - 3} \lesssim_{s} 2^{j(l + 1)}|x|^{-|\beta|}. \end{align*}
Finally, using Leibniz's rule once more, and also the fact from estimate \eqref{holder} that
\begin{displaymath} |\gamma(\lambda)| = |\Psi_{\lambda}(x) - \Psi_{\lambda}(y)| \gtrsim_{I,\tau} d(x,y)^{1 + \tau}, \end{displaymath}
we obtain
\begin{displaymath} |\partial^{\beta}F_{j}(\gamma(\lambda))| \lesssim_{l,s} 2^{j(l + 1)}\sum_{\alpha \leq \beta}|c_{\alpha\beta}||\gamma(\lambda)|^{-s - |\alpha|}|\gamma(\lambda)|^{|\alpha| - |\beta|} \lesssim_{l} 2^{j(l + 1)}d(x,y)^{-(1 + \tau)(s + |\beta|)} \end{displaymath}
This finishes the proof of Claim \ref{Fderivatives}.
\subsection{Completion of the Proof of Lemma \ref{thm1}}
Now we are prepared to apply Lemma \ref{series} and prove (\ref{form8}). Let us first consider the simpler case $\tau = 0$. Let $t = s$, $B = 2$ and $\tilde{R} = 2^{3 - s}$. Then fix $0 < \alpha < s - 1$ and choose $r \in (0,1)$ so close to one that $\alpha < r + s - 2$. Next, let $R := 2^{r + 1}$, and choose $L \in \mathbb{N}$ so large that $\alpha + \alpha(3 - s)/L \leq r + s - 2$. As $\tau = 0$, the hypotheses $(1 + A\tau)s \leq t$ and $s + \tau(L(L + 1) + s) \leq t$ from (\ref{form10}) and (\ref{form11}) are automatically satisfied. With these notations, we have
\begin{displaymath} \|h_{i}\|_{L^{1}(I)} \lesssim R^{-i}, \quad \|h_{i}^{(l)}\|_{L^{\infty}(I)} \lesssim_{l} B^{il} \text{ for } 1 \leq l \leq L, \quad B^{\alpha}\tilde{R}^{\alpha/L} \leq \frac{R}{\tilde{R}} \leq B\tilde{R}^{1/L}, \end{displaymath}
whence Lemma \ref{series} yields
\begin{displaymath} \dim \left\{\lambda \in I : \sum_{i = 2}^{\infty} 2^{i(3 - s)}h_{i}(\lambda) = \infty\right\} \leq 1 - \alpha. \end{displaymath}
The proof of Lemma \ref{thm1}(i) is finished by letting $\alpha \nearrow s - 1$.
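For completeness, here is the elementary bookkeeping behind the double inequality in the last display (a sketch): since $R/\tilde{R} = 2^{r + 1 - (3 - s)} = 2^{r + s - 2}$, the two inequalities amount to
\begin{displaymath} 2^{\alpha + \alpha(3 - s)/L} \leq 2^{r + s - 2} \quad \text{and} \quad 2^{r + s - 2} \leq 2 \cdot 2^{(3 - s)/L}, \end{displaymath}
the first of which is precisely the defining property of $L$, while the second holds trivially because $r + s - 2 < 1$. Note also that $0 < \alpha < r + s - 2 < 1$ guarantees $\alpha \in (0,1)$ and $1 \leq \tilde{R} = 2^{3 - s} < 2^{r + 1} = R$, as required in Lemma \ref{series}.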
In the case (ii) of Lemma \ref{thm1}, we first set $s(\tau) := s + \tau^{1/3} < 2$ and define the functions $\tilde{h}_{i}$ in the same way as the functions $h_{i}$, only replacing $s$ by $s(\tau)$ in the definition. Then we write $\alpha := s - 1 < s(\tau) - 1$ and choose $r := 1 - \tau^{1/3}/2$. If $L$ is an integer satisfying $4\tau^{-1/3} \leq L \leq 8\tau^{-1/3}$, we then have
\begin{displaymath} \alpha + \alpha(3 - s(\tau))/L \leq \alpha + 2/L \leq s - 1 + \tau^{1/3}/2 = r + s(\tau) - 2, \end{displaymath}
which means that
\begin{displaymath} B^{\alpha}\tilde{R}^{\alpha/L} \leq \frac{R}{\tilde{R}} \leq 2 \leq B\tilde{R}^{1/L} \end{displaymath}
with $B = 2$, $R = 2^{r + 1}$ and $\tilde{R} = 2^{3 - s(\tau)}$. Moreover, the proofs above show (nothing more is required than the 'change of variables' $s \mapsto s(\tau)$) that the estimates
\begin{displaymath} \|\tilde{h}_{i}\|_{L^{1}(I)} \lesssim R^{-i} \quad \text{and} \quad \|\tilde{h}^{(l)}_{i}\|_{L^{\infty}(I)} \lesssim_{l} B^{il} \text{ for } 1 \leq l \leq L, \end{displaymath}
hold, this time provided that $(1 + A\tau)s(\tau) \leq t$ and $s(\tau) + \tau(L(L + 1) + s(\tau)) \leq t$. The first condition says that $t \geq s + \tau^{1/3} + As\tau + A\tau^{4/3} =: s + \varepsilon_{1}(\tau)$. On the other hand, the upper bound on $L$ gives the estimate
\begin{displaymath} s(\tau) + \tau(L(L + 1) + s(\tau)) \leq s + \tau^{1/3} + 128\tau^{1/3} + \tau(s + \tau^{1/3}) =: s + \varepsilon_{2}(\tau). \end{displaymath}
Thus, if $t \geq s + \varepsilon(\tau) := s + \max\{\varepsilon_{1}(\tau),\varepsilon_{2}(\tau)\}$, we again have
\begin{displaymath} \dim \left\{\lambda \in I : \sum_{i = 2}^{\infty} 2^{i(3 - s(\tau))}\tilde{h}_{i}(\lambda) = \infty\right\} \leq 1 - \alpha = 2 - s. \end{displaymath}
Arguing in the same way as we arrived at (\ref{form8}), this even proves that
\begin{displaymath} \dim\left\{\lambda \in J : \int_{\mathbb{R}} I_{s(\tau) - 1}(\mu_{\lambda,t}) \, dt = \infty \right\} \leq 2 - s. \end{displaymath}
The proof of Lemma \ref{thm1}(ii) is finished, since $I_{s - 1}(\mu_{\lambda,t}) \lesssim_{\lambda} I_{s(\tau) - 1}(\mu_{\lambda,t})$ for $\lambda \in J$ and $t \in \mathbb{R}$.
\end{proof}
\section{Proof of Theorem \ref{main} and Applications}
\begin{remark}\label{form3} If $B \subset \Omega$ is a Borel set with $\dim B > 1$, and $\operatorname{spt} \mu \subset B$, the previous lemma can be used to extract information on the dimension of $\Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}$ for various $\lambda \in J$ and $t \in \mathbb{R}$, since $\operatorname{spt} \mu_{\lambda,t} \subset \Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}$. Of course, we were originally interested in knowledge concerning $\dim [B \cap \pi_{\lambda}^{-1}\{t\}]$. Fortunately $\pi_{\lambda} = \rho_{2} \circ \Psi_{\lambda}$, so that
\begin{displaymath} \Psi_{\lambda}(B \cap \pi_{\lambda}^{-1}\{t\}) = \Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}. \end{displaymath}
By the H\"older continuity, recall (\ref{holder}), of the mappings $\Psi_{\lambda}$, we then have
\begin{displaymath} \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq (1 - \tau)\dim [\Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}] \end{displaymath}
for $\lambda \in J$ and $t \in \mathbb{R}$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{main}] Let us first prove the case $\tau = 0$. Assume that $\mathcal{H}^{s}(B) > 0$. We claim that the set
\begin{displaymath} E := \{\lambda \in J : \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1\}) = 0\} \end{displaymath}
satisfies $\dim E \leq 2 - s$. By Frostman's lemma (the version for metric spaces, see \cite{Ho}), we may choose a non-trivial compactly supported Radon measure $\mu$ on $\Omega$ with $\operatorname{spt} \mu \subset B$ and $\mu(B(x,r)) \leq r^{s}$ for $x \in \Omega$ and $r > 0$. Then $I_{r}(\mu) < \infty$ for every $1 < r < s$, so that by Theorem \ref{sobolev} we have the estimate
\begin{equation}\label{form4} \dim \{\lambda \in J : \pi_{\lambda\sharp}\mu \not\ll \mathcal{L}^{1}\} \leq 2 - s. \end{equation}
By (\ref{form4}) and Lemma \ref{thm1}, the set
\begin{displaymath} E_{\mu}^{j} := \left\{\lambda \in J : \pi_{\lambda\sharp}\mu \not\ll \mathcal{L}^{1} \text{ or } \int_{\mathbb{R}} I_{(s - 1/i) - 1}(\mu_{\lambda,t}) \, dt = \infty \text{ for some } i \geq j \right\} \end{displaymath}
has dimension $\dim E_{\mu}^{j} \leq 2 - s + 1/j$. If $\lambda \in J \setminus E_{\mu}^{j}$, we know that the set $N_{\mu,\lambda} := \{t \in \mathbb{R} : \exists \, \mu_{\lambda,t} \text{ and } \mu_{\lambda,t} \not\equiv 0\}$ has positive $\mathcal{L}^{1}$ measure.\footnote{This follows from the equation $\rho_{2\sharp}(\Psi_{\lambda\sharp}\mu) = \pi_{\lambda\sharp}\mu$: whenever $\pi_{\lambda\sharp}\mu \ll \mathcal{L}^{1}$, we also have $\rho_{2\sharp}(\Psi_{\lambda\sharp}\mu) \ll \mathcal{L}^{1}$, and $\mathcal{L}^{1}(N_{\mu,\lambda}) > 0$ in this case, see Definition \ref{slices}.} Also, by definition of $E_{\mu}^{j}$, we know that for $\mathcal{L}^{1}$ almost every $t \in N_{\mu,\lambda}$ the measure $\mu_{\lambda,t}$ has finite $(s - 1 - 1/i)$-energy \emph{for every $i \geq j$}: in particular,
\begin{displaymath} \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq \dim [\Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}] \geq \dim (\operatorname{spt} \mu_{\lambda,t}) \geq s - 1 \end{displaymath}
for these $t \in N_{\mu,\lambda}$. Thus $\mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1\}) \geq \mathcal{L}^{1}(N_{\mu,\lambda}) > 0$, and we conclude that $\lambda \in J \setminus E$. It follows that $E \subset E_{\mu}^{j}$, whence $\dim E \leq 2 - s + 1/j$. Letting $j \to \infty$ now shows that $\dim E \leq 2 - s$, and
\begin{displaymath} \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1\}) > 0, \qquad \lambda \in J \setminus E. \end{displaymath}
Next assume that $\tau > 0$, and $\dim B = s$. Let $\delta_{0}(\tau) = 2\varepsilon(\tau)$, where $\varepsilon(\tau)$ is the constant from Lemma \ref{thm1}. Then choose a measure $\mu$ on $\Omega$, supported on $B$, such that $I_{s - \varepsilon(\tau)}(\mu) < \infty$. Then, as $s - \varepsilon(\tau) = (s - \delta_{0}(\tau)) + \varepsilon(\tau)$, Lemma \ref{thm1} asserts that
\begin{displaymath} \dim \left\{ \lambda \in J : \int_{\mathbb{R}} I_{s - 1 - \delta_{0}(\tau)}(\mu_{\lambda,t}) \, dt = \infty \right\} \leq 2 - s + \delta_{0}(\tau). \end{displaymath}
On the other hand, Theorem \ref{sobolev} gives
\begin{displaymath} \dim\{\lambda \in J : \pi_{\lambda\sharp}\mu \not\ll \mathcal{L}^{1}\} \leq 2 - \frac{s - \varepsilon(\tau)}{1 + a_{0}\tau} \leq 2 - s + \frac{\varepsilon(\tau) + 2a_{0}\tau}{1 + a_{0}\tau} =: 2 - s + \delta_{1}(\tau). \end{displaymath}
This means, again, that the set $N_{\mu,\lambda} = \{t \in \mathbb{R} : \exists \, \mu_{\lambda,t} \text{ and } \mu_{\lambda,t} \not\equiv 0\}$ has positive $\mathcal{L}^{1}$ measure for all $\lambda \in J$ except for a set of dimension at most $2 - s + \delta_{1}(\tau)$. Writing $\delta(\tau) = \max\{\delta_{0}(\tau) + \tau,\delta_{1}(\tau)\}$, we may conclude that there exists a set $E \subset J$ of dimension at most $2 - s + \delta(\tau)$ such that
\begin{align*} \dim [B \cap \pi_{\lambda}^{-1}\{t\}] & \geq (1 - \tau)\dim [\Psi_{\lambda}(B) \cap \rho_{2}^{-1}\{t\}]\\
& \geq (1 - \tau) \dim (\operatorname{spt} \mu_{\lambda,t})\\
& \geq (1 - \tau)(s - 1 - \delta_{0}(\tau)) \geq s - 1 - \delta(\tau) \end{align*}
holds for $\lambda \in J \setminus E$ and for $\mathcal{L}^{1}$ almost all $t \in N_{\mu,\lambda}$. In short,
\begin{displaymath} \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim[B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1 - \delta(\tau)\}) = \mathcal{L}^{1}(N_{\mu,\lambda}) > 0, \quad \lambda \in J \setminus E.\end{displaymath}
\end{proof}
\begin{remark}\label{sharp} For later use, let us note that the proof above yields the following result: \emph{if $\mu$ is a Borel measure on $\Omega$ with $I_{s}(\mu) < \infty$ for some $1 < s < 2$, then there exists a set $E \subset J$ of dimension $\dim E \leq 2 - s + \delta(\tau)$ such that for every $\lambda \in J \setminus E$ and $\pi_{\lambda\sharp}\mu$ almost every $t \in \mathbb{R}$} it holds that $\pi_{\lambda\sharp}\mu \ll \mathcal{L}^{1}$ and $\dim [\operatorname{spt} \mu \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1 - \delta(\tau)$. Indeed, as shown above, the inequality $\dim [\operatorname{spt} \mu \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1 - \delta(\tau)$ holds for $\mathcal{L}^{1}$ almost every $t \in N_{\mu,\lambda}$. Moreover, as noted in Definition and Theorem \ref{slices}, we have $\pi_{\lambda\sharp}\mu(\mathbb{R} \setminus N_{\mu,\lambda}) = 0$ whenever $\pi_{\lambda\sharp}\mu \ll \mathcal{L}^{1}$.
\end{remark}
\subsection{Applications and Remarks on the Sharpness of Theorem \ref{main}}\label{sharpness}
As the first application of Theorem \ref{main}, we consider orthogonal projections to lines in $\mathbb{R}^{2}$. Namely, let $\pi_{\lambda}(x) := x \cdot (\cos \lambda,\sin \lambda)$ for $x \in \mathbb{R}^{2} = \Omega$ and $\lambda \in (0,2\pi)$. Then $\pi_{\lambda}$ is the orthogonal projection onto the line spanned by the vector $(\cos \lambda, \sin \lambda) \in S^{1}$. These projections are perhaps the most basic example of generalized projections: transversality with $\tau = 0$ follows immediately from the equation
\begin{displaymath} |\Phi_{\lambda}(x,y)|^{2} + |\partial_{\lambda}\Phi_{\lambda}(x,y)|^{2} = 1, \qquad x,y \in \mathbb{R}^{2}, \end{displaymath}
and the other conditions are even easier to verify. Applied to these projections Theorem \ref{main} assumes the form
\begin{cor} Let $B \subset \mathbb{R}^{2}$ be a Borel set with $0 < \mathcal{H}^{s}(B) < \infty$ for some $s > 1$. Denote by $L_{\lambda,t}$ the line $L_{\lambda,t} = \{x \in \mathbb{R}^{2} : x \cdot (\cos \lambda,\sin \lambda) = t\} = \pi_{\lambda}^{-1}\{t\}$. Then there exists a set $E \subset (0,2\pi)$ of dimension $\dim E \leq 2 - s$ such that
\begin{displaymath} \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap L_{\lambda,t}] \geq s - 1\}) > 0, \qquad \lambda \in (0,2\pi) \setminus E. \end{displaymath}
\end{cor}
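For completeness, the transversality identity stated before the corollary can be checked by a direct computation (a sketch; recall that here $\Phi_{\lambda}(x,y) = (\pi_{\lambda}(x) - \pi_{\lambda}(y))/|x - y|$): writing $(x - y)/|x - y| = (\cos \beta,\sin \beta)$ for some $\beta$ depending only on $x$ and $y$, we have
\begin{displaymath} \Phi_{\lambda}(x,y) = \cos \beta \cos \lambda + \sin \beta \sin \lambda = \cos(\lambda - \beta) \quad \text{and} \quad \partial_{\lambda}\Phi_{\lambda}(x,y) = -\sin(\lambda - \beta), \end{displaymath}
so that $|\Phi_{\lambda}(x,y)|^{2} + |\partial_{\lambda}\Phi_{\lambda}(x,y)|^{2} = 1$.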
In fact, the inequality $\dim [B \cap L_{\lambda,t}] \geq s - 1$ above could be replaced by equality: indeed, if $\pi \colon \mathbb{R}^{2} \to \mathbb{R}$ is any Lipschitz map, then $\dim [B \cap \pi^{-1}\{t\}] \leq \dim B - 1$ for almost every $t \in \mathbb{R}$. This follows from \cite[Theorem 7.7]{Mat3}.
In the case of orthogonal projections, the bound obtained above for $\dim E$ is sharp. To see this, first note that if $B \subset \mathbb{R}^{2}$ is a Borel set with $\dim B > 1$, and if $\lambda \in (0,2\pi)$ is any parameter such that $\mathcal{L}^{1}(\{t : \dim [B \cap L_{\lambda,t}] \geq \dim B - 1\}) > 0$, then it follows immediately that $\mathcal{L}^{1}(\pi_{\lambda}(B)) > 0$. On the other hand, it is known that a compact set $B \subset \mathbb{R}^{2}$ exists with $s = \dim B > 1$, and such that
\begin{equation}\label{FKMP} \dim \{\lambda \in (0,2\pi) : \mathcal{L}^{1}(\pi_{\lambda}(B)) = 0\} = 2 - s. \end{equation}
This yields
\begin{displaymath} \dim \{\lambda \in (0,2\pi) : \mathcal{L}^{1}(\{t : \dim [B \cap L_{\lambda,t}] \geq s - 1\}) = 0\} \geq 2 - s \end{displaymath}
for this particular set $B$, so that the bound $\dim E \leq 2 - s$ in the previous corollary cannot be improved. The construction of the compact set $B$ satisfying (\ref{FKMP}) is essentially the same as that of a similar counter-example obtained by Kaufman and Mattila in \cite{KM}. The applicability of the example in \cite{KM} to this situation was observed by K. Falconer in his paper \cite{Fa}, and the full details were worked out by A. Peltomäki in his licentiate thesis \cite{Pe}.
Before discussing another application, we consider the following question. Let $B \subset \mathbb{R}^{2}$ be a Borel set with $\mathcal{H}^{s}(B) > 0$ for some $s > 1$, and let $(\pi_{\lambda})_{\lambda \in J}$ be a family of projections satisfying the regularity and transversality conditions of Definition \ref{projections} with $\tau = 0$. Then we have the bounds
\begin{equation}\label{exception1} \dim \{\lambda \in J : \mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1\}) = 0\} \leq 2 - s, \end{equation}
given by Theorem \ref{main}, and
\begin{equation}\label{exception2} \dim \{\lambda \in J : \mathcal{L}^{1}(\pi_{\lambda}(B)) = 0\} \leq 2 - s, \end{equation}
by Theorem \ref{sobolev}. Moreover, any reader familiar with the proofs of Peres and Schlag in \cite{PS} will note that the projection results there and the slicing result here are very strongly connected. Thus, one might wonder if the 'exceptional sets' of dimension at most $2 - s$ on lines \eqref{exception1} and \eqref{exception2} might, in fact, coincide. In other words, the question is: does $\mathcal{L}^{1}(\pi_{\lambda}(B)) > 0$ imply $\mathcal{L}^{1}(\{t \in \mathbb{R} : \dim [B \cap \pi_{\lambda}^{-1}\{t\}] \geq s - 1\}) > 0$ (the converse implication is trivial and true, as noted above)? The answer is negative (for orthogonal projections, at least), as can be seen by considering the self-affine fractal $F \subset [0,1] \times [0,1]$ obtained by iterating the scheme below:
\begin{center}
\includegraphics[scale = 0.9]{Image1.pdf}
\end{center}
It is easy to prove that if $\mu$ is the uniformly distributed measure on $F$, then $\mu(Q) \lesssim d(Q)^{3/2}$ for all cubes $Q \subset \mathbb{R}^{2}$, whence $\mathcal{H}^{3/2}(F) > 0$. Also, the projection of $F$ onto the real axis is the interval $[0,1]$. However, the intersection of any vertical line with $F$ contains at most two points; in particular, these vertical slices of $F$ have dimension zero instead of $3/2 - 1 = 1/2$.
We finish this section with another application of Theorem \ref{main}. As noted in Remark \ref{ps}, there exist families of projections, which only satisfy the regularity and transversality requirements of Definition \ref{projections} for strictly positive parameters $\tau > 0$. Several such examples are presented in the original paper by Peres and Schlag, so here and now we will content ourselves with just one, the \emph{Bernoulli convolutions}. Let $\Omega = \{-1,1\}^{\mathbb{N}}$ and let $\mu$ be the product measure on $\Omega$. For $\lambda \in (0,1)$, consider the projections
\begin{displaymath} \pi_{\lambda}(\omega) := \sum_{n = 0}^{\infty} \omega_{n}\lambda^{n}, \qquad \omega = (\omega_{0},\omega_{1},\omega_{2},\ldots) \in \Omega. \end{displaymath}
Our result yields some information on the number of distinct solutions $\omega \in \Omega$ for the equation $\pi_{\lambda}(\omega) = t$, for $\mathcal{L}^{1}$ almost every $t \in [-1/(1 - \lambda),1/(1 - \lambda)] =: I_{\lambda}$ (the equation obviously has \emph{no} solutions for $t$ outside $I_{\lambda}$). If $(a,b) \subset (0,1)$, write $G(a,b) := \{\lambda \in (a,b) : \text{number of solutions to } \pi_{\lambda}(\omega) = t \text{ is uncountable for a.e. } t \in I_{\lambda}\}$.
\begin{cor} If $(a,b) \subset (2^{-1},2^{-2/3}] \approx (0.5,0.63]$, we have
\begin{displaymath} \dim [(a,b) \setminus G(a,b)] \leq 2 - \frac{\log 2}{-\log a}. \end{displaymath}
\end{cor}
\begin{proof} The proof is essentially the same as the proof of \cite[Theorem 5.4]{PS}, but we provide it here for completeness. We may assume that $a > 1/2$. Fix $\tau > 0$ and divide the interval $(a,b)$ into finitely many subintervals $J_{i} = (a_{i},b_{i})$ satisfying $a_{i} > b_{i}^{1 + \tau}$. Given $J_{i}$, define the metric $d_{i}$ on $\Omega$ by setting $d_{i}(\omega^{1},\omega^{2}) := b_{i}^{|\omega^{1} \land \,\omega^{2}|}$, where $|\omega^{1} \land \,\omega^{2}| = \min\{j \geq 0 : \omega^{1}_{j} \neq \omega^{2}_{j}\}$. As shown in \cite[Lemma 5.3]{PS}, the projections $\pi_{\lambda}$ satisfy the requirements of Definition \ref{projections} on $J_{i}$ with the metric $d_{i}$ and the fixed constant $\tau > 0$.
Let $\mu$ be the product measure on $\Omega$: then $I_{s}(\mu) < \infty$ with respect to $d_{i}$, if and only if $b_{i}^{s} > 1/2$, if and only if $s < -\log 2/\log b_{i}$. Let $s_{i} := -\log 2/\log b_{i} - \delta(\tau)$, where $\delta(\tau)$ is the constant from Theorem \ref{main}. Using Remark \ref{sharp}, one then finds a set $E_{i} \subset J_{i}$ of dimension
\begin{displaymath} \dim E_{i} \leq 2 - s_{i} + \delta(\tau) \leq 2 + \log 2/\log a + 2\delta(\tau) \end{displaymath}
such that $\pi_{\lambda\sharp}\mu \ll \mathcal{L}^{1}$ and
\begin{displaymath} \dim \pi_{\lambda}^{-1}\{t\} = \dim [\operatorname{spt} \, \mu \cap \pi_{\lambda}^{-1}\{t\}] \geq s_{i} - \delta(\tau) \end{displaymath}
for every $\lambda \in J_{i} \setminus E_{i}$ and $\pi_{\lambda\sharp}\mu$ almost every $t \in \mathbb{R}$. By now it is well-known, see \cite{MS}, that $\pi_{\lambda\sharp}\mu \ll \mathcal{L}^{1}$ implies $\mathcal{L}^{1}\llcorner I_{\lambda} \ll \pi_{\lambda\sharp}\mu$. Thus, we see that the inequality $\dim \pi_{\lambda}^{-1}\{t\} \geq s_{i} - \delta(\tau)$ holds for $\mathcal{L}^{1}$ almost every $t \in I_{\lambda}$. Also, we clearly have $\operatorname{card}(\{\omega \in \Omega : \pi_{\lambda}(\omega) = t\}) > \aleph_{0}$, whenever $\dim \pi_{\lambda}^{-1}\{t\} \geq s_{i} - \delta(\tau) > 0$, that is, whenever $\tau > 0$ is small enough. Thus $\dim [(a,b) \setminus G(a,b)] \leq 2 + \log 2/\log a + 2\delta(\tau)$. The proof is finished by letting $\tau \to 0$.
\end{proof}
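For the reader's convenience, the energy computation used in the proof is the following elementary sketch: since, for the $(\frac{1}{2},\frac{1}{2})$-product measure $\mu$, we have $(\mu \times \mu)(\{(\omega^{1},\omega^{2}) : |\omega^{1} \land \omega^{2}| = k\}) = 2^{-k - 1}$ for $k \geq 0$, it follows that
\begin{displaymath} I_{s}(\mu) = \iint_{\Omega \times \Omega} d_{i}(\omega^{1},\omega^{2})^{-s} \, d\mu\omega^{1} \, d\mu\omega^{2} = \sum_{k = 0}^{\infty} b_{i}^{-sk}2^{-k - 1} = \frac{1}{2}\sum_{k = 0}^{\infty} (2b_{i}^{s})^{-k}, \end{displaymath}
which converges if and only if $2b_{i}^{s} > 1$, that is, if and only if $s < -\log 2/\log b_{i}$. To get a feeling for the bound of the corollary, note that it approaches $2 - 3/2 = 1/2$ for intervals $(a,b)$ with $a$ close to $2^{-2/3}$, while it degenerates to the trivial estimate $1$ as $a \downarrow 2^{-1}$.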
\begin{remark} The proof above does not work with $(2^{-1},2^{-2/3}]$ replaced with $(2^{-1},1]$. This unfortunate fact is concealed in our reference to \cite[Lemma 5.3]{PS} above: the transversality conditions cannot be verified for the projections $\pi_{\lambda}$ on the whole interval $(2^{-1},1]$. Our methods reveal nothing on the dimension of the sets $(a,b) \setminus G(a,b)$ for intervals $(a,b)$ outside $(2^{-1},2^{-2/3})$.
\end{remark}
\subsection{Further Generalisations}\label{further}
In \cite{PS} Peres and Schlag obtained results analogous to Theorem \ref{sobolev} for $\mathbb{R}^{m}$-valued projections with an $\mathbb{R}^{n}$-valued parameter set. It seems reasonable to conjecture that an analogue of Theorem \ref{main} would also hold for such projections, but the technical details might be rather tedious. In some special circumstances, even the inherently one-dimensional Theorem \ref{main} can be used to deduce results for projections with a multidimensional parameter set. For example, let us consider the projections $\pi_{\lambda} \colon \mathbb{R}^{2} \to [0,\infty)$, $\lambda \in \mathbb{R}^{2}$, defined by $\pi_{\lambda}(x) := |\lambda - x|^{2}$. Assume that $B \subset \mathbb{R}^{2}$ is a Borel set with $\mathcal{H}^{s}(B) > 0$ for some $s > 3/2$.
\begin{claim} There exists a point $\lambda \in B$ such that
\begin{equation}\label{form14} \mathcal{L}^{1}(\{r > 0 : \dim [B \cap S(\lambda,r)] \geq s - 1\}) > 0, \end{equation}
where $S(\lambda,r) = \pi_{\lambda}^{-1}\{r^{2}\} = \{y : |\lambda - y| = r\}$.
\end{claim}
\begin{proof} It follows from the classical line-slicing result of Marstrand (but also from Theorem \ref{main} applied to orthogonal projections) that there exists a line $L \subset \mathbb{R}^{2}$ such that $\dim [B \cap L] > 1/2$. The mappings $\pi_{\lambda}$ with $\lambda \in L \cong \mathbb{R}$ now form a collection of projections with a one-dimensional parameter set. They do not satisfy transversality as projections from the whole plane to $\mathbb{R}$, but this is no issue: choose a compact subset $C \subset B$ with $\mathcal{H}^{s}(C) > 0$ \emph{lying entirely on one side of, and well separated from, the line $L$.} Then it is an easy exercise to check that the restricted projections $\pi_{\lambda} \colon C \to \mathbb{R}$, $\lambda \in L \cong \mathbb{R}$, satisfy the requirements of Definition \ref{projections} with $\tau = 0$. Thus, according to Theorem \ref{main}, the equation \eqref{form14} holds for all $\lambda \in L \setminus E$, where $\dim E \leq 2 - \dim C < 1/2$. Since $\dim [B \cap L] > 1/2$, we may find a point $\lambda \in [B \cap L] \setminus E$. This finishes the proof.
\end{proof}
\section{Acknowledgments}
I am grateful to my advisor Pertti Mattila for useful comments. I would also like to give many thanks to an anonymous referee for making numerous detailed observations and pointing out several mistakes in the original manuscript.
\appendix
\section{Proof of Lemma \ref{PSLemma}}
In this section we provide the details for the proof of Lemma \ref{PSLemma}. \emph{The argument is the same as used to prove \cite[Lemma 4.6]{PS}, and we claim no originality on this part}. As mentioned right after Lemma \ref{PSLemma}, the only reason for reviewing the proof here is to make sure that the factor
\begin{displaymath} \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y))\end{displaymath}
produces no trouble -- and in particular, no dependence on the index $i \in \mathbb{N}$! In this respect, everything depends on an estimate made no earlier than on page 27 of this paper.
\begin{lemma}\label{Intervals} Fix $x,y \in \Omega$, $x \neq y$, write $r = d(x,y)$, and let $I$ be a compact subinterval of $J$. The set
\begin{displaymath} \{\lambda \in \operatorname{int} I : |\Phi_{\lambda}(x,y)| < \delta_{I,\tau}r^{\tau}\} \end{displaymath}
can be written as the countable union of disjoint (maximal) open intervals $I_{1},I_{2},\ldots \subset I$.
\begin{itemize}
\item[(i)] The intervals $I_{j}$ satisfy $\mathcal{L}^{1}(I_{j}) \leq 2$. Furthermore, if $I_{j}$ and $I$ have no common boundary, then $r^{2\tau} \lesssim_{I,\tau} \mathcal{L}^{1}(I_{j})$. Thus, in fact, there are only finitely many intervals $I_{j}$.
\item[(ii)] There exist points $\lambda_{j} \in \bar{I}_{j}$, which satisfy: if $\lambda \in \bar{I}_{j}$, then $|\Phi_{\lambda}(x,y)| \geq |\Phi_{\lambda_{j}}(x,y)|$ and $|\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau}|\lambda - \lambda_{j}|$. Furthermore, there exists a constant $\varepsilon > 0$, depending only on $I$ and $\tau$, with the following properties: \emph{(a)} if $\lambda \in I_{j}$ and $|\Phi_{\lambda}(x,y)| \leq \delta_{I,\tau}r^{\tau}/2$, then $(\lambda - \varepsilon r^{2\tau},\lambda + \varepsilon r^{2\tau}) \cap I \subset I_{j}$, and \emph{(b)} if $|\Phi_{\lambda_{j}}(x,y)| \geq \delta_{I,\tau}r^{\tau}/2$, then $|\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau}/2$ for all $\lambda \in (\lambda_{j} - \varepsilon r^{2\tau}, \lambda_{j} + \varepsilon r^{2\tau}) \cap I$.
\end{itemize}
\end{lemma}
\begin{proof} Let $J$ be one of the intervals $I_{j}$, see Figure 2. According to the transversality condition (\ref{transversality}), we have
\begin{displaymath} \partial_{\lambda}\Phi_{\lambda}(x,y) \geq \delta_{I,\tau}r^{\tau} \quad \text{or} \quad \partial_{\lambda}\Phi_{\lambda}(x,y) \leq -\delta_{I,\tau}r^{\tau} \end{displaymath}
for all $\lambda \in J$, which means that the mapping $\lambda \mapsto \Phi_{\lambda}(x,y)$ is strictly monotonic on $J$. The first inequality in (i) follows from (\ref{transversality}) via the mean value theorem: if $[a,b] \subset J$ and $\xi \in (a,b)$ is the point specified by the mean value theorem, we have
\begin{displaymath} \delta_{I,\tau}r^{\tau}(b - a) \leq |\partial_{\lambda}\Phi_{\xi}(x,y)|(b - a) = |\Phi_{b}(x,y) - \Phi_{a}(x,y)| \leq 2\delta_{I,\tau}r^{\tau}. \end{displaymath}
For the second inequality in (i) we apply the regularity condition (\ref{regularity}): write $J = (a,b)$. Since $J$ and $I$ have no common boundary, we have $\{\Phi_{a}(x,y),\Phi_{b}(x,y)\} = \{-\delta_{I,\tau}r^{\tau},\delta_{I,\tau}r^{\tau}\}$. Hence, by (\ref{regularity}) and the mean value theorem,
\begin{displaymath} C_{I,1,\tau}r^{-\tau}\mathcal{L}^{1}(J) \geq |\partial_{\lambda}\Phi_{\xi}(x,y)|(b - a) = |\Phi_{b}(x,y) - \Phi_{a}(x,y)| = 2\delta_{I,\tau}r^{\tau}. \end{displaymath}
\begin{figure}
\caption{The interval $J$ and some of the points $\lambda_{i}$.}
\end{figure}
To prove (ii), let $\lambda_{j} \in \bar{I}_{j}$ be the unique point in $\bar{I}_{j}$ where the mapping $\lambda \mapsto |\Phi_{\lambda}(x,y)|$ attains its minimum on $I_{j}$. Such a point exists by continuity and monotonicity: note that on all but possibly two of the intervals $I_{j}$ (the left- and rightmost ones) $\lambda_{j}$ is the unique zero of the mapping $\lambda \mapsto \Phi_{\lambda}(x,y)$ on $I_{j}$. Now if $\lambda \in I_{j}$ is any point, we see that $\Phi_{\lambda}(x,y)$ has the same sign as $\Phi_{\lambda_{j}}(x,y)$, and that $|\Phi_{\lambda}(x,y)| \geq |\Phi_{\lambda_{j}}(x,y)|$. This gives
\begin{displaymath} |\Phi_{\lambda}(x,y)| \geq |\Phi_{\lambda}(x,y) - \Phi_{\lambda_{j}}(x,y)| \geq \delta_{I,\tau}r^{\tau}|\lambda - \lambda_{j}| \end{displaymath}
by \eqref{transversality} and the mean value theorem. All that is left now are (a) and (b) of (ii): set $\varepsilon := \delta_{I,\tau}C_{I,1,\tau}^{-1}/2 > 0$. Assume first that $\lambda \in I_{j}$, $|\Phi_{\lambda}(x,y)| \leq \delta_{I,\tau}r^{\tau}/2$ and $t \in (\lambda - \varepsilon r^{2\tau},\lambda + \varepsilon r^{2\tau}) \cap I$. Then, by the regularity assumption (\ref{regularity}) and $|t - \lambda| < \varepsilon r^{2\tau}$ we get
\begin{displaymath} |\Phi_{t}(x,y)| \leq |\Phi_{t}(x,y) - \Phi_{\lambda}(x,y)| + \delta_{I,\tau}r^{\tau}/2 \leq C_{I,1,\tau}r^{-\tau}|t - \lambda| + \delta_{I,\tau}r^{\tau}/2 < \delta_{I,\tau}r^{\tau}. \end{displaymath}
Since the same also holds for all $t'$ between $\lambda$ and $t$, we see that $t \in I_{j}$. Finally, suppose that $|\Phi_{\lambda_{j}}(x,y)| \geq \delta_{I,\tau}r^{\tau}/2$ and fix $\lambda \in (\lambda_{j} - \varepsilon r^{2\tau}, \lambda_{j} + \varepsilon r^{2\tau}) \cap I$. There are three cases: first, if $\lambda \in I_{j}$, then, by choice of $\lambda_{j}$, we clearly have $|\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau}/2$. Second, if $\lambda$ belongs to none of the intervals $I_{j}$, we even have the stronger conclusion $|\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau}$. Finally, if $\lambda \in I_{i}$ for some $i \neq j$, let $t \in I$ be the end point of $I_{i}$ between $\lambda_{j}$ and $\lambda$. Then $|\Phi_{t}(x,y)| = \delta_{I,\tau}r^{\tau}$ and $|t - \lambda| \leq \varepsilon r^{2\tau}$ so that by (\ref{regularity}), the mean value theorem and the choice of $\varepsilon$,
\begin{displaymath} |\Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau} - |\Phi_{t}(x,y) - \Phi_{\lambda}(x,y)| \geq \delta_{I,\tau}r^{\tau} - C_{I,1,\tau}r^{-\tau}\varepsilon r^{2\tau} = \delta_{I,\tau}r^{\tau}/2. \end{displaymath}
This finishes the proof of (ii).
\end{proof}
Now we are prepared to prove Lemma \ref{PSLemma}. Recall that we should prove
\begin{displaymath} \left| \int_{\mathbb{R}} \gamma(\lambda)\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}(x,y))\bar{\hat{\eta}}(2^{j}r\Phi_{\lambda}(x,y)) \, d\lambda \right| \lesssim_{\gamma,q} (1 + 2^{j}r^{1 + A\tau})^{-q}, \end{displaymath}
where $x,y \in \Omega$, $r = d(x,y)$, $q \in \mathbb{N}$, $j \in \mathbb{Z}$, $\gamma$ is any compactly supported smooth function on $J,$ and $A \geq 1$ is an absolute constant. We may and will assume that $q \geq 2$. Moreover, recall that $\eta$ was a fixed smooth function satisfying $\chi_{[a^{-1},a]} \leq \eta \leq \chi_{[a^{-2},2a]}$ for some large $a \geq 1$. Setting $\psi := \bar{\hat{\eta}}$ (in the appendix, we freely recycle all the Greek letters and other symbols that had a special meaning in the previous sections), this definition implies that $\psi$ is a rapidly decreasing function, that is, obeys the bounds $|\psi(t)| \lesssim_{N} (1 + |t|)^{-N}$ for any $N \in \mathbb{N}$, and also satisfies
\begin{equation}\label{form13} \hat{\psi}^{(l)}(0) = 0, \qquad l \in \mathbb{N}. \end{equation}
Fixing $x,y \in \Omega$, we will also temporarily write
\begin{displaymath} \Gamma(\lambda) := \gamma(\lambda)\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda}), \end{displaymath}
where $\Phi_{\lambda} := \Phi_{\lambda}(x,y)$. For quite some while, it suffices to know that $\Gamma$ is a smooth function satisfying $\|\Gamma\|_{L^{\infty}(\mathbb{R})} \lesssim_{\gamma} 1$. The inequality we are supposed to prove now takes the form
\begin{equation}\label{Projection ineq} \left| \int_{\mathbb{R}} \Gamma(\lambda)\psi(2^{j}r\Phi_{\lambda}) \, d\lambda \right| \lesssim_{\gamma,q} (1 + 2^{j}r^{1 + A\tau})^{-q}. \end{equation}
We claim that $A = 14$ will do the trick. First of all, we may assume that
\begin{equation}\label{reduction} 2^{j}r^{1 + m\tau} > 1, \qquad 0 \leq m \leq 14. \end{equation}
Indeed, if this were not the case for some $0 \leq m \leq 14$, then
\begin{displaymath} \left| \int_{\mathbb{R}} \Gamma(\lambda)\psi(2^{j}r\Phi_{\lambda}) \, d\lambda \right| \lesssim \|\gamma\|_{L^{1}(\mathbb{R})} \lesssim_{\gamma,q} 2^{-q} \leq (1 + 2^{j}r^{1 + m\tau})^{-q} \lesssim_{q,d(\Omega)} (1 + 2^{j}r^{1 + 14\tau})^{-q}, \end{displaymath}
and we would be done. Next choose an auxiliary function $\varphi \in C^{\infty}(\mathbb{R})$ with $\chi_{[-1/4,1/4]} \leq \varphi \leq \chi_{[-1/2,1/2]}$. Then split the integration in (\ref{Projection ineq}) into two parts:
\begin{align} \int_{\mathbb{R}} \Gamma(\lambda)\psi(& 2^{j}r\Phi_{\lambda}) \, d\lambda \notag\\
&\label{Piece I} = \int_{\mathbb{R}} \Gamma(\lambda)\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda\\
&\label{Piece II} + \int_{\mathbb{R}} \Gamma(\lambda)\psi(2^{j}r\Phi_{\lambda})[1 - \varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda})] \, d\lambda. \end{align}
Here $\delta_{\tau} := \delta_{I,\tau} > 0$ is the constant from Definition \ref{projections}, and $I \subset J$ is some compact interval containing the support of $\Gamma$. The integral of line (\ref{Piece II}) is easy to bound, since the integrand vanishes whenever $|\Phi_{\lambda}| \leq \delta_{\tau}r^{\tau}/4$, and if $|\Phi_{\lambda}| \geq \delta_{\tau}r^{\tau}/4$, we have the estimate
\begin{displaymath} |\psi(2^{j}r\Phi_{\lambda})| \lesssim_{q} (1 + \delta_{\tau}2^{j - 2}r^{\tau + 1})^{-q} \lesssim_{\tau,q} (1 + 2^{j}r^{\tau + 1})^{-q}, \end{displaymath}
whence
\begin{equation}\label{Rapid decay} \left| \int_{\mathbb{R}} \Gamma(\lambda)\psi(2^{j}r\Phi_{\lambda})[1 - \varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda})] d\lambda \right| \lesssim_{q,\gamma,\tau} (1 + 2^{j}r^{\tau + 1})^{-q}. \end{equation}
Moving on to line \eqref{Piece I}, let the intervals $I_{1},\ldots,I_{N}$, the points $\lambda_{i} \in I_{i}$ and the constant $\varepsilon > 0$ be as provided by Lemma \ref{Intervals} (related to the interval $I$ specified above). Choose another auxiliary function $\chi \in C^{\infty}(\mathbb{R})$ with $\chi_{[-\varepsilon/2,\varepsilon/2]} \leq \chi \leq \chi_{(-\varepsilon,\varepsilon)}$. Then split the integration on line (\ref{Piece I}) into $N + 1$ parts:
\begin{align} \int_{\mathbb{R}} \Gamma(\lambda)& \psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda \notag\\
&\label{Piece III} = \sum_{i = 1}^{N} \int_{\mathbb{R}} \Gamma(\lambda)\chi(r^{-2\tau}(\lambda - \lambda_{i}))\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda \\
&\label{Piece IV} + \int_{\mathbb{R}} \Gamma(\lambda) \left[ 1 - \sum_{i = 1}^{N} \chi(r^{-2\tau}(\lambda - \lambda_{i})) \right]\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda. \end{align}
With the aid of part (ii) of Lemma \ref{Intervals}, the integral on line (\ref{Piece IV}) is easy to handle. If the integrand is non-vanishing at some point $\lambda \in I$, we necessarily have $\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \neq 0$, which means that $|\Phi_{\lambda}| \leq \delta_{\tau}r^{\tau}/2$: in particular $\lambda \in I_{i}$ for some $1 \leq i \leq N$. Now (a) of Lemma \ref{Intervals}(ii) tells us that $(\lambda - \varepsilon r^{2\tau}, \lambda + \varepsilon r^{2\tau}) \cap I \subset I_{i}$, whence $r^{-2\tau}|\lambda - \lambda_{j}| \geq \varepsilon$ and $\chi(r^{-2\tau}(\lambda - \lambda_{j})) = 0$ for $j \neq i$. But, since the integrand is non-vanishing at $\lambda \in I$, this enables us to conclude that $\chi(r^{-2\tau}(\lambda - \lambda_{i})) < 1$: in particular, $|\lambda - \lambda_{i}| \geq \varepsilon r^{2\tau}/2$. Then Lemma \ref{Intervals}(ii) shows that $|\Phi_{\lambda}| \geq \delta_{\tau}r^{\tau}|\lambda - \lambda_{i}| \geq \varepsilon \delta_{\tau}r^{3\tau}/2$, and using the rapid decay of $\psi$ as on line (\ref{Rapid decay}) one obtains
\begin{displaymath} \left|\int_{\mathbb{R}} \Gamma(\lambda) \left[ 1 - \sum_{j = 1}^{N} \chi(r^{-2\tau}(\lambda - \lambda_{j})) \right]\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda \right| \lesssim_{q,\gamma,\tau} (1 + 2^{j}r^{1 + 3\tau})^{-q}. \end{displaymath}
Now we turn our attention to the $N$ integrals on line (\ref{Piece III}). If $|\Phi_{\lambda_{i}}| \geq \delta_{\tau}r^{\tau}/2$ for some $1 \leq i \leq N$, part (b) of Lemma \ref{Intervals}(ii) says that $|\Phi_{\lambda}| \geq \delta_{\tau}r^{\tau}/2$ for all $\lambda \in (\lambda_{i} - \varepsilon r^{2\tau},\lambda_{i} + \varepsilon r^{2\tau}) \cap I$, that is, for all such $\lambda \in I$ that $\chi(r^{-2\tau}(\lambda - \lambda_{i})) \neq 0$. But for such $\lambda \in I$ it holds that $\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) = 0$, whence
\begin{displaymath} \int_{\mathbb{R}} \Gamma(\lambda)\chi(r^{-2\tau}(\lambda - \lambda_{i}))\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda = 0. \end{displaymath}
So we may assume that $|\Phi_{\lambda_{i}}| < \delta_{\tau}r^{\tau}/2$. In this case part (a) of Lemma \ref{Intervals}(ii) tells us that the support of $\lambda \mapsto \Gamma(\lambda)\chi(r^{-2\tau}(\lambda - \lambda_{i}))$ is contained in $I_{i}$. The restriction of $\lambda \mapsto \Phi_{\lambda}$ to this interval $I_{i}$ is strictly monotonic, so the inverse $g := \Phi_{(\cdot)}^{-1} \colon \{\Phi_{\lambda} : \lambda \in I_{i}\} \to I_{i}$ exists. We perform the change of variables $\lambda \mapsto g(u)$:
\begin{align} \int_{\mathbb{R}} & \Gamma(\lambda)\chi(r^{-2\tau}(\lambda - \lambda_{i}))\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda \notag\\
&\label{Piece IIIb} = \int_{\mathbb{R}} \Gamma(g(u))\chi(r^{-2\tau}(g(u) - \lambda_{i}))\psi(2^{j}r u)\varphi(\delta_{\tau}^{-1}r^{-\tau} u)g'(u) \, du. \end{align}
Write $F(u) := \Gamma(g(u))\chi(r^{-2\tau}(g(u) - \lambda_{i}))\varphi(\delta_{\tau}^{-1}r^{-\tau} u)g'(u)$ for $u \in \{\Phi_{\lambda} : \lambda \in I_{i}\}$. The support of $\lambda \mapsto \Gamma(\lambda)\chi(r^{-2\tau}(\lambda - \lambda_{i}))$ is compactly contained in $I_{i}$, which means that the support of $u \mapsto \Gamma(g(u))\chi(r^{-2\tau}(g(u) - \lambda_{i}))$ is compactly contained in the open interval $\{\Phi_{\lambda} : \lambda \in I_{i}\}$. Thus $F$ may be defined smoothly on the real line by setting $F(u) := 0$ for $u \notin \{\Phi_{\lambda} : \lambda \in I_{i}\}$. We will need the following lemmas:
\begin{lemma}\label{Inverse derivatives} Let $h \in C^{k}(a,b)$ with $h'(x) \neq 0$ for $x \in (a,b)$. Suppose the inverse $h^{-1} \colon h(a,b) \to (a,b)$ exists. Then, for $x \in (a,b)$ and $1 \leq l \leq k$,
\begin{displaymath} (h^{-1})^{(l)}(h(x)) = \sum_{m = 0}^{l - 1} \frac{(-1)^{m}}{m!}(h'(x))^{-l - m} \sum_{\mathbf{b} \in \mathbb{N}^{m}} \frac{(l - 1 + m)!}{b_{1}! \cdots b_{m}!}h^{(b_{1})}(x) \cdots h^{(b_{m})}(x), \end{displaymath}
where the inner summation runs over those $\mathbf{b} = (b_{1},\ldots,b_{m}) \in \mathbb{N}^{m}$ with $b_{1} + \ldots + b_{m} = (l - 1) + m$ and $b_{i} \geq 2$ for all $1 \leq i \leq m$.
\end{lemma}
\begin{proof} See \cite{Jo1}. \end{proof}
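As a quick sanity check (not needed for the argument), the case $l = 1$ of Lemma \ref{Inverse derivatives} is the familiar inverse function rule
\begin{displaymath} (h^{-1})'(h(x)) = \frac{1}{h'(x)}, \end{displaymath}
so for $g = \Phi_{(\cdot)}^{-1}$ the transversality condition \eqref{transversality} gives $|g'(u)| = |\partial_{\lambda}\Phi_{\lambda}|^{-1} \leq (\delta_{\tau}r^{\tau})^{-1}$ on $I_{i}$, in agreement with the case $l = 1$ of the bound (\ref{derivatives}) derived below.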
\begin{lemma}[Faà di Bruno formula] Let $f,h \in C^{l}(\mathbb{R})$. Then
\begin{equation}\label{Faa di Bruno} (f \circ h)^{(l)}(\lambda) = \sum_{\mathbf{m} \in \mathbb{N}^{l}} b_{\mathbf{m}} f^{(m_{1} + \ldots + m_{l})}(h(\lambda)) \cdot [h'(\lambda)]^{m_{1}} \cdots [h^{(l)}(\lambda)]^{m_{l}}, \end{equation}
where $\mathbf{m} = (m_{1},\ldots,m_{l}) \in \mathbb{N}^{l}$, and $b_{\mathbf{m}} \neq 0$ if and only if $m_{1} + 2m_{2} + \ldots + lm_{l} = l$.
\end{lemma}
\begin{proof} See \cite{Jo2}. \end{proof}
We apply the first lemma to $g$: if $u = \Phi_{\lambda}$ for some $\lambda \in I_{i}$, the derivatives $\partial_{\lambda}^{m}\Phi_{\lambda}$ satisfy the transversality and regularity conditions of Definition \ref{projections}. Hence
\begin{align} |g^{(l)}(u)| & = \left| \sum_{m = 0}^{l - 1} \frac{(-1)^{m}}{m!}(\partial_{\lambda}\Phi_{\lambda})^{-l - m} \sum_{\mathbf{b} \in \mathbb{N}^{m}} \frac{(l - 1 + m)!}{b_{1}! \cdots b_{m}!}\partial^{b_{1}}_{\lambda}\Phi_{\lambda} \cdots \partial^{b_{m}}_{\lambda}\Phi_{\lambda} \right| \notag\\
& \leq \sum_{m = 0}^{l - 1} \sum_{\mathbf{b} \in \mathbb{N}^{m}} \frac{(l - 1 + m)!}{b_{1}! \cdots b_{m}!} (\delta_{\tau}r^{\tau})^{-l - m}(C_{\tau,b_{1}}r^{-\tau b_{1}}) \cdots (C_{\tau,b_{m}}r^{-\tau b_{m}}) \notag\\
& \lesssim_{l,\tau} \sum_{m = 0}^{l - 1} \sum_{\mathbf{b} \in \mathbb{N}^{m}} \frac{(l - 1 + m)!}{b_{1}! \cdots b_{m}!} r^{-\tau(l + m + b_{1} + \ldots + b_{m})} \notag\\
&\label{derivatives} \lesssim_{l,\tau} \sum_{m = 0}^{l - 1} r^{-\tau(l + m + (l - 1) + m)} \lesssim_{l,\tau} r^{-\tau(4l - 3)}, \qquad l \geq 1. \end{align}
On the last line we simply ignored the summation and employed the inequality
\begin{equation}\label{r ineq} r^{-s} \lesssim_{s,t,d(\Omega)} r^{-t}, \qquad s \leq t, \end{equation}
with $s = m \leq l - 1 = t$ (this follows from $d(\Omega) < \infty$). Next we wish to estimate the derivatives of $F$, so we recall that
\begin{displaymath} F(u) = \gamma(g(u))\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{g(u)})\chi(r^{-2\tau}(g(u) - \lambda_{i}))\varphi(\delta_{\tau}^{-1}r^{-\tau} u)g'(u)\end{displaymath}
If $u \notin \{\Phi_{\lambda} : \lambda \in I_{i}\}$, we have $F^{(l)}(u) = 0$ for all $l \geq 0$. So, let $u = \Phi_{\lambda}$, $\lambda \in I_{i}$. By (\ref{derivatives}) and (\ref{r ineq}) we have $|g^{(l)}(u)| \lesssim_{l,\tau} r^{-4\tau l}$ for $l \geq 1$, so (\ref{Faa di Bruno}) gives
\begin{align} |(\gamma \circ g)^{(l)}(u)| & \lesssim_{l,\gamma} \sum_{\mathbf{m} \in \mathbb{N}^{l}} |b_{\mathbf{m}}||g'(u)|^{m_{1}} \cdots |g^{(l)}(u)|^{m_{l}} \notag \\
&\label{rho derivative} \lesssim_{l,\tau} \sum_{\mathbf{m} \in \mathbb{N}^{l}} |b_{\mathbf{m}}| (r^{-4 \tau})^{m_{1}} \cdots (r^{-4\tau l})^{m_{l}} \\
& = \sum_{\mathbf{m} \in \mathbb{N}^{l}} |b_{\mathbf{m}}| r^{-4\tau(m_{1} + 2m_{2} + \ldots + lm_{l})} \lesssim_{l} r^{-4\tau l}. \notag \end{align}
A similar computation also yields
\begin{equation}\label{chi and phi derivatives} |\partial^{(l)}_{u}\chi(r^{-2\tau}(g(u) - \lambda_{i}))| \lesssim_{l,\tau} r^{-6\tau l} \quad \text{and} \quad |\partial^{(l)}_{u}\varphi(\delta_{\tau}^{-1}r^{-\tau}u)| \lesssim_{l,\tau} r^{-\tau l}. \end{equation}
\emph{The presence of the factor $\hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{g(u)})$ in the definition of $F(u)$ is the only place where our proof of Lemma \ref{PSLemma} differs from the original proof of \cite[Lemma 4.6]{PS} -- and, indeed, we only need to check that the $l^{th}$ derivatives of this factor admit bounds similar to those of the other factors.} Applying \eqref{Faa di Bruno} with $f(\lambda) = \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda})$ and $h(u) = g(u)$, we use the bounds $|g^{(l)}(u)| \lesssim_{l,\tau} r^{-4\tau l}$ to obtain
\begin{align*} |\partial^{(l)}_{u} \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{g(u)})| & \leq \sum_{\mathbf{m} \in \mathbb{N}^{l}} |b_{\mathbf{m}}||f^{(m_{1} + \ldots + m_{l})}(g(u))| \cdot |g'(u)|^{m_{1}}\cdots|g^{(l)}(u)|^{m_{l}}\\
& \lesssim_{l,\tau} r^{-4\tau l}\left( \sum_{\mathbf{m} \in \mathbb{N}^{l}} |b_{\mathbf{m}}||f^{(m_{1} + \ldots + m_{l})}(g(u))| \right).
\end{align*}
Here, for any $k = m_{1} + \ldots + m_{l} \leq l$ and $\lambda \in I$, we may use \eqref{Faa di Bruno} again, combined with rapid decay bounds of the form $|\hat{\eta}^{(p)}(t)| \lesssim_{N,p} |t|^{-N}$, to estimate
\begin{align*} |f^{(k)}(\lambda)| & \leq \sum_{\mathbf{m} \in \mathbb{N}^{k}} |b_{\mathbf{m}}||\hat{\eta}^{(m_{1} + \ldots + m_{k})}(2^{j - i}r\partial_{\lambda}\Phi_{\lambda})|(2^{j - i}r)^{m_{1} + \ldots + m_{k}}|\partial_{\lambda}^{2}\Phi_{\lambda}|^{m_{1}}\cdots|\partial_{\lambda}^{k + 1}\Phi_{\lambda}|^{m_{k}}\\
& \lesssim_{l,\tau} \sum_{\mathbf{m} \in \mathbb{N}^{k}} |b_{\mathbf{m}}|\cdot|\partial_{\lambda}\Phi_{\lambda}|^{-(m_{1} + \ldots + m_{k})}\cdot r^{-2m_{1}\tau}\cdots r^{-(k + 1)m_{k}\tau}\\
& \lesssim_{d(\Omega)} \sum_{\mathbf{m} \in \mathbb{N}^{k}} |b_{\mathbf{m}}|\cdot |\partial_{\lambda}\Phi_{\lambda}|^{-(m_{1} + \ldots + m_{k})}\cdot r^{-2k\tau}. \end{align*}
Finally, recalling that $g(u) \in I_{i}$, we have $|\partial_{\lambda}\Phi_{g(u)}| \gtrsim_{\tau,I} r^{\tau}$ by \eqref{transversality}, whence
\begin{displaymath} |f^{(m_{1} + \ldots + m_{l})}(g(u))| \lesssim_{I,l,\tau} r^{-3(m_{1} + \ldots + m_{l})\tau} \lesssim_{l,\tau} r^{-3\tau l}, \end{displaymath}
and so
\begin{displaymath} |\partial^{(l)}_{u} \hat{\eta}(2^{j - i}r\partial_{\lambda}\Phi_{g(u)})| \lesssim_{I,l,\tau} r^{-7\tau l}. \end{displaymath}
Now that we have estimated the derivatives of all the factors of $F$ separately, we may conclude from the Leibniz formula that
\begin{equation}\label{F derivatives} |F^{(l)}(u)| \lesssim_{\gamma,\tau,l} r^{-7\tau l}, \qquad l \geq 1. \end{equation}
For $l = 0$ estimate (\ref{derivatives}) yields $|F(u)| \lesssim_{\gamma,\tau} |g'(u)| \lesssim_{\tau} r^{-\tau}$.
Next we write $F$ as a degree $2(q - 1)$ Taylor polynomial centered at the origin:
\begin{displaymath} F(u) = \sum_{l = 0}^{2(q - 1)} \frac{F^{(l)}(0)}{l!}u^{l} + \mathcal{O}(F^{(2q - 1)}(u)u^{2q - 1}). \end{displaymath}
Then we split the integration in (\ref{Piece IIIb}) in two for one last time. Denoting $U_{1} = \{u \in \mathbb{R} : |u| < (2^{j}r)^{-1/2}\}$ and $U_{2} = \mathbb{R} \setminus U_{1}$,
\begin{align}\label{Piece V} \int_{\mathbb{R}} F(u)\psi(2^{j}r u) \, du & = \int_{U_{1}} \psi(2^{j}r u) \left[ \sum_{l = 0}^{2(q - 1)} \frac{F^{(l)}(0)}{l!}u^{l} + \mathcal{O}(F^{(2q - 1)}(u)u^{2q - 1}) \right] \, du \\
&\label{Piece VI} + \int_{U_{2}} \psi(2^{j}r u)F(u) \, du. \end{align}
To estimate the integral of line (\ref{Piece VI}) we use the bounds $|\psi(2^{j}ru)| \lesssim_{q} (2^{j}r|u|)^{-2q - 1}$, $|F(u)| \lesssim_{\gamma,\tau} r^{-\tau}$ and $2^{j}r > 1$ to obtain
\begin{equation}\label{Piece VII} \left|\int_{U_{2}} \psi(2^{j}r u)F(u) \, du\right| \lesssim_{q,\gamma,\tau} (2^{j}r)^{-2q - 1} r^{-\tau} \int_{(2^{j}r)^{-1/2}}^{\infty} u^{-2q - 1} \, du \lesssim_{q} (2^{j}r)^{-q}r^{-\tau}. \end{equation}
As regards (\ref{Piece V}), note that the Fourier transform of $x \mapsto x^{l}\psi(x)$ is (a constant multiple of) $i^{l}\hat{\psi}^{(l)}$, whence
\begin{displaymath} \int_{\mathbb{R}} x^{l}\psi(x) \, dx = \int_{\mathbb{R}} x^{l}\psi(x) e^{-ix \cdot 0} \, dx \asymp i^{l}\hat{\psi}^{(l)}(0) = 0, \quad l \geq 0, \end{displaymath}
according to \eqref{form13}. This shows that
\begin{displaymath} \int_{U_{1}} \psi(2^{j}r u) \sum_{l = 0}^{2(q - 1)} \frac{F^{(l)}(0)}{l!} u^{l} \, du = -\sum_{l = 0}^{2(q - 1)} \int_{U_{2}} \psi(2^{j}r u) \frac{F^{(l)}(0)}{l!} u^{l} \, du. \end{displaymath}
The term corresponding to $l = 0$ may be bounded as on line (\ref{Piece VII}). For $l \geq 1$ we apply the estimates $|\psi(2^{j}ru)| \lesssim_{q} (2^{j}ru)^{-2q - l - 1}$, $1 \leq l \leq 2(q - 1)$, and (\ref{F derivatives}):
\begin{align*} \left|\sum_{l = 1}^{2(q - 1)} \int_{U_{2}} \psi(2^{j}r u) \frac{F^{(l)}(0)}{l!} u^{l} \, du \right| & \lesssim_{q,\tau} \sum_{l = 1}^{2(q - 1)} \left( r^{-7\tau l}(2^{j}r)^{-2q - l - 1} \int_{(2^{j}r)^{-1/2}}^{\infty} u^{-2q - 1} \, du \right) \\
& \asymp_{q} \sum_{l = 1}^{2(q - 1)} r^{-7\tau l}(2^{j}r)^{-q - l - 1} \lesssim_{q} r^{-7\tau}(2^{j}r)^{-q}. \end{align*}
On the last line, we need not actually perform the summation: just note that the terms are decreasing, since $2^{j}r^{1 + 7\tau} > 1$, and their number is $2(q - 1) \lesssim_{q} 1$. Since also
\begin{align*} \left|\int_{U_{1}} \psi(2^{j}r u)\mathcal{O}(F^{(2q - 1)}(u)u^{2q - 1}) \, du\right| & \lesssim_{q,\tau} r^{-7\tau(2q - 1)}\int_{0}^{(2^{j}r)^{-1/2}} u^{2q - 1} \, du \\
& \asymp_{q} r^{-7\tau(2q - 1)}(2^{j}r)^{-q}, \end{align*}
we have shown that
\begin{equation}\label{Piece VIII} \left| \int_{U_{1}} \psi(2^{j}r u) \left[ \sum_{l = 0}^{2(q - 1)} \frac{F^{(l)}(0)}{l!}u^{l} + \mathcal{O}(F^{(2q - 1)}(u)u^{2q - 1}) \right] \, du \right| \lesssim_{q,\tau} r^{-7\tau(2q - 1)}(2^{j}r)^{-q}. \end{equation}
Now we wrap things up. On line \eqref{Piece III} it follows from part (i) of Lemma \ref{Intervals} that the number $N$ of intervals $I_{i}$ meeting the support of $\Gamma$ (or $\gamma$) is bounded above by $N \lesssim_{\gamma,\tau} r^{-2\tau}$. This, together with the estimates (\ref{Piece VII}) and (\ref{Piece VIII}), yields
\begin{align*} \Bigg| \sum_{i = 1}^{N} \int_{\mathbb{R}} \Gamma(\lambda) & \chi(r^{-2\tau}(\lambda - \lambda_{i}))\psi(2^{j}r\Phi_{\lambda})\varphi(\delta_{\tau}^{-1}r^{-\tau}\Phi_{\lambda}) \, d\lambda \Bigg| \\
& \lesssim_{q,\gamma,\tau,d(\Omega)} r^{-2\tau}((2^{j}r)^{-q}r^{-\tau} + r^{-7\tau(2q - 1)}(2^{j}r)^{-q})\\
& \lesssim_{\tau,d(\Omega)} (2^{j}r^{1 + 14\tau})^{-q} \lesssim_{q} (1 + 2^{j}r^{1 + 14\tau})^{-q}. \end{align*}
The last inequality uses the assumption $2^{j}r^{1 + 14\tau} > 1$ from (\ref{reduction}). This finishes the proof, since each of the finitely many pieces into which the integral on line (\ref{Projection ineq}) was decomposed has been shown to be bounded by $\lesssim_{q,\gamma,\tau,d(\Omega)} (1 + 2^{j}r^{1 + c\tau})^{-q}$ for some $0 \leq c \leq 14$.
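For the reader's convenience, here is the exponent bookkeeping behind the second-to-last inequality of the final display (a routine verification, not part of the original argument). Since $r \leq d(\Omega) < \infty$, the dominant term satisfies
\begin{displaymath} r^{-2\tau}\,r^{-7\tau(2q - 1)}(2^{j}r)^{-q} = (2^{j}r^{1 + 14\tau})^{-q}\,r^{5\tau} \lesssim_{\tau,d(\Omega)} (2^{j}r^{1 + 14\tau})^{-q}, \end{displaymath}
and the remaining term $r^{-2\tau}(2^{j}r)^{-q}r^{-\tau}$ obeys the same bound by (\ref{r ineq}), since $3\tau \leq 14q\tau$.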
\end{document} |
\begin{document}
\title{\textbf{QMA-complete problems}}
\thispagestyle{fancy}
\abstract{
In this paper we give an overview of the quantum computational complexity class QMA and a description of known QMA-complete problems to date\footnote{The reader is invited to share more proven QMA-complete problems with the author.}. Such problems are believed to be difficult to solve, even with a quantum computer, but have the property that if a purported solution to the problem is given, a quantum computer would easily be able to verify whether it is correct.
An attempt has been made to make this paper as self-contained as possible so that it can be accessible to computer scientists, physicists, mathematicians, and quantum chemists. Problems of interest to all of these professions can be found here.
}
\section{Introduction}
\subsection{Background}
The class QMA is the natural extension of the classical class NP to the quantum computing world. NP is an extremely important class in classical complexity theory, containing (by definition) those problems that
have a short proof (or witness) that can be efficiently checked to verify that a valid solution to the problem exists. The class NP is of great importance because many interesting and important problems have this property -- they may be difficult to solve, but given a solution, it is easy to verify that the solution is correct.
The probabilistic extension of NP is the class MA, standing for ``Merlin-Arthur". Unlike in NP where witnesses must be verifiable with certainty, in MA valid witnesses need only be accepted with probability greater than $2/3$ (and invalid witnesses rejected with probability greater than $2/3$). MA can be thought of as the class of languages $L$ for which a computationally-unbounded but untrustworthy prover, Merlin, can convince (with high probability) a verifier, Arthur (who is limited to polynomial-time computation), that a particular instance $x$ is in $L$. Furthermore, when the instance $x$ is not in $L$, the probability of Merlin successfully cheating must be low.
Because quantum computers are probabilistic by nature (as the outcome of a quantum measurement can generally be predicted only probabilistically),
the \emph{natural} quantum analogue of NP is actually the quantum analogue of MA, whence the quantum class QMA -- Quantum-Merlin-Arthur\footnote{Initially QMA was referred to as BQNP \cite{KSV02}; the name QMA was coined in \cite{Watrous00}.}.
QMA, then, is the class of languages for which membership of a given string can be certified, with high probability, by a small \emph{quantum} witness state that causes an efficient \emph{quantum verifier circuit} to output 1. It was first studied by Kitaev~\cite{KSV02} and Knill~\cite{Knill96}. A more precise definition will be given later.
The history of QMA takes its lead from the history of NP.
In complexity theory, one of the most important results about NP was the first proof that it contains
complete problems.
A problem is NP-hard if, given the ability to solve it, one can also efficiently (that is, with only polynomial overhead) solve \emph{any} other NP problem; in other words, a problem is NP-hard if any NP problem can be \emph{reduced} to it. If, in addition to being NP-hard, a problem is itself in NP, it is called an NP-complete problem, and can be considered among the hardest of all the problems in NP.
Two simple examples of NP-complete problems are \langfont{Boolean satisfiability} (SAT) and \langfont{circuit satisfiability} (CSAT).
The problem CSAT is to determine, given a Boolean circuit, whether there exists an input that will be evaluated by the circuit as true. The problem SAT is to determine, given a set of clauses containing Boolean variables, whether there is an assignment of those variables that will satisfy all of the clauses. If the clauses are restricted to containing at most $k$ literals each, the problem is called $k$-SAT. One may also consider the problem MAX-SAT, which is to determine, given a set of clauses (of Boolean variables) and an integer $m$, whether at least $m$ clauses can be simultaneously satisfied.
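As a toy illustration (constructed here for concreteness, not taken from the references), consider the 2-SAT instance
\[ (x_1 \vee x_2) \wedge (\neg x_1 \vee x_2) \wedge (\neg x_2). \]
It is unsatisfiable, since the last clause forces $x_2$ to be false, after which the first two clauses cannot both hold; however, every assignment satisfies exactly two of the three clauses, so as a MAX-SAT instance it is a yes instance for $m = 2$ and a no instance for $m = 3$.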
The fact that a complete problem exists for NP is actually trivial,
as the problem of deciding whether there exists a (small) input that will be accepted (in polynomial time) by a given Turing machine is trivially NP-complete; rather, the importance of NP-completeness is due to the existence of interesting NP-complete problems. The famous Cook-Levin theorem, which pioneered the study of NP-completeness, states that SAT
is NP-complete. A common way of proving this theorem is to first show that the above trivial NP-complete problem can be reduced to CSAT,
and to then show that CSAT can be reduced to SAT.
The quantum analogue of CSAT is the \langfont{quantum circuit satisfiability} problem (QCSAT), which is trivially QMA-complete (since QMA is defined in terms of quantum circuits). But QMA was found to have other, natural complete problems too. The most important of these, the \langfont{$k$-local Hamiltonian} problem, was defined by Kitaev \cite{KSV02}, inspired by Feynman's ideas on Hamiltonian quantum computing \cite{Feynman82}. This problem is a quantum analogue of MAX-SAT, in which Boolean variables are replaced by qubits and clauses are replaced by local Hamiltonians (which may be viewed as local constraints on the qubits); it is defined formally below in \ref{k-local}.
Just as the Cook-Levin theorem opened the study of NP-completeness by showing that SAT is NP-complete, so too the study of QMA-completeness began by showing that \langfont{5-local Hamiltonian} is QMA-complete.
However, unlike NP, for which thousands of complete problems are known, there are currently relatively few known QMA-complete problems. In this paper we will survey many, if not all, of them. This paper divides the QMA-complete problems into three main groups and one subgroup:
\begin{itemize}
\item Quantum circuit/channel property verification (V)
\item Hamiltonian ground state estimation (H)
\begin{itemize}
\item Quantum $k$-SAT (S)
\end{itemize}
\item Density matrix consistency (C)
\end{itemize}
The letters in parentheses are used as labels to identify the group.
\subsection{Formal definition of QMA}
We now give a formal definition of QMA.
\begin{definition}
QMA is the set of all languages $L\subset\{0,1\}^*$ for which there exists a (uniform family of)
quantum polynomial-time verifier circuit $V$ such that for every $x\in \{0,1\}^*$ of length $n=|x|$,
\begin{description}[leftmargin=80pt, align=right, font=\normalfont, labelindent=60pt, itemsep=1ex]
\item [if $x\in L$] then there exists a poly$(n)$-qubit witness state $\ket{\psi_x}$ such that $V\big(x,\ket{\psi_x}\big)$ accepts with probability $\geqslant 2/3$
\item [if $x\not\in L$] then for every purported poly$(n)$-qubit witness state $\ketpsi$, $V\big(x,\ketpsi\big)$ accepts with \\probability~$\leqslant 1/3$.
\end{description}
\end{definition}
Although the definition above used the numbers $2/3$ and $1/3$ (as is standard),
we can generally define the class QMA$(b,a)$: Given functions $a,b : \N \rightarrow (0,1)$ with $b(n)-a(n)\geqslant 1/\poly(n)$, a language is in QMA$(b,a)$ if $2/3$ and $1/3$ in the definition above are replaced by $b$ and $a$, respectively.
It is important to note that doing this does not change the class: QMA$(2/3,1/3)$ = QMA$(1-\epsilon, \epsilon)$ provided that $\epsilon \geqslant 2^{-\poly(n)}$. Moreover, in going from QMA$(2/3,1/3)$ to QMA$(1-\epsilon, \epsilon)$, the amplification procedure can be carried out in such a way that the same witness is used, i.e. Merlin need only ever send a single copy of the witness state. \cite{MW05}
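As a rough illustration of why such amplification is plausible (a back-of-the-envelope estimate; the subtle point that a single quantum witness may be entangled across repetitions is precisely what is handled in \cite{MW05}): if Arthur runs $k$ independent verifications, each erring with probability at most $1/3$, and outputs the majority verdict, then by a standard Chernoff bound the majority is wrong with probability at most
\[ \exp\!\left(-2k\left(\tfrac{1}{2}-\tfrac{1}{3}\right)^{2}\right) = e^{-k/18}, \]
so polynomially many repetitions already give exponentially small error.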
When $b=1$, i.e. when the witness must be verifiable with no error, the class is called QMA$_1$; thus QMA$_1$ = QMA$(1,1/3)$ = QMA$(1,\epsilon)$. For the classical complexity class MA it is known that MA = MA$_1$,
and also for the class QCMA, which is QMA restricted to classical witnesses, it has recently been shown~\cite{JKNN12} that QCMA=QCMA$_1$,
but it is still an open question whether QMA=QMA$_1$. Several QMA$_1$-complete problems are presented in this paper.
Furthermore, it should be noted that QMA actually consists of promise problems, meaning that when considering whether Merlin can truthfully convince Arthur or trick Arthur, we restrict our consideration to a subset of possible instances -- we may assume that we are promised that our instance of consideration falls in this subset. In physical problems, this restriction could correspond to a limitation in the measurement precision available to us.
With the above remarks, we can write the definition of QMA in a style matching that of the problem definitions provided in this paper.
\begin{definition}[QMA]
A promise problem $L = L_{\text{yes}}\cup L_{\text{no}}\subset\{0,1\}^*$ is in QMA if there exists a quantum polynomial-time verifier circuit $V$ such that for every $x\in \{0,1\}^*$ of length $n=|x|$,
\begin{description}[leftmargin=80pt, align=right, font=\normalfont, labelindent=60pt, itemsep=1ex]
\item [(yes case)] if $x\in L_{\text{yes}}$ then $\exists \poly(n)$-qubit state $\ket{\psi_x}$ such that $\Pr\Big[V\big(x,\ket{\psi_x}\big) \text{ accepts }\Big] \geqslant b$
\item [(no case)] if $x\in L_{\text{no}}$ then $\forall \poly(n)$-qubit states $\ketpsi$, $\Pr\Big[V\big(x,\ketpsi\big) \text{ accepts }\Big] \leqslant a$
\end{description}
promised that one of these is the case (i.e. either $x$ is in $L_{\text{yes}}$ or $L_{\text{no}}$), \\
{\wherePoly{b-a}{n}} and $0<\epsilon<a<b<1-\epsilon$, with $\epsilon \geqslant 2^{-\poly(n)}$. If, instead, the above is true with $b=1$, then $L$ is in the class QMA$_1$.
\end{definition}
Except for a glossary at the end, which provides the definitions of several basic reoccurring mathematical terms that appear in this work, the remainder of this paper is devoted to listing known QMA-complete problems, along with their description and sometimes a brief discussion.
When a problem is given matrices, vectors, or constants as inputs, it is assumed that they are given to precision of $\poly(n)$ bits. When a problem is given a unitary or quantum circuit, $U_x$, it is assumed that the problem is actually given a classical description $x$ of the corresponding quantum circuit, which consists of $\poly(|x|)$ elementary gates. Likewise, quantum channels are specified by efficient classical descriptions.
This paper has attempted to be as self-contained as possible, but for a more complete description and motivation of a problem, the reader is invited to consult the references provided. An attempt has been made to include as many currently known QMA-complete and QMA$_1$-complete problems as possible, but it is, of course, unlikely that this goal has been accomplished in full. The reader is invited to share other proven QMA-complete problems with the author for their inclusion in future versions of this work. Note that this paper has restricted itself to QMA-complete and QMA$_1$-complete problems; it does not include other QMA-inspired classes, such as QMA(2) (when there are multiple unentangled Merlins) or QCMA (when the witness is classical).
\setlist[enumerate,1]{itemsep=20pt}
\newcommand{\setEnumLabel}[1]{\setlist[enumerate,1]{label*=\fbox{{#1}-\arabic*} , ref={#1}-\arabic* ,itemsep=20pt }}
\newcommand{\EnumItem}[1]{\item{\textbf{\MakeTextUppercase{#1}}} \\}
\newcommand{\probIntro}[1]{#1
}
\newcommand{\probPreamble}[1]{\textit{Preliminary information}:\\ #1
}
\newcommand{\probDesc}[4]{
\fbox{
\parbox{400pt}{
Problem: #1, \\
determine whether:
\begin{description}[leftmargin=80pt, align=right, font=\normalfont, labelindent=60pt, itemsep=1ex]
\item [(yes case)] #2
\item [(no case)] #3,
\end{description}
promised one of these to be the case\ifthenelse{\equal{#4}{}}{.}{,\\#4.}
}}
}
\newcommand{\probTheorem}[3]{Theorem: QMA-complete #1 [proven by #2 \cite{#3}]}
\newcommand{\probTheoremO}[3]{Theorem: QMA$_1$-complete #1 [proven by #2 \cite{#3}]}
\newcommand{\probTheoremH}[3]{Theorem: QMA-hard #1 [proven by #2 \cite{#3}]}
\newcommand{\probRed}[2]{Hardness reduction from: #1 (#2)}
\newcommand{\probClassical}[1]{Classical analogue: #1.}
\newcommand{\probNotes}[1]{\begin{description}[font=\normalfont\emph, leftmargin=31pt]
\item[Notes:] #1
\end{description}}
\section{Quantum circuit/channel property verification}
\setEnumLabel{V} \begin{enumerate}
\EnumItem{Quantum Circuit-SAT (QCSAT) \label{QCSAT}}
\probDesc{
Given a quantum circuit $V$ on $n$ witness qubits and $m=\poly(n)$ ancilla qubits}
{$\exists$ $n$-qubit state $\ket{\psi}$ such that $V(\ket{\psi}\ket{0\ldots 0})$ accepts with probability $\geqslant b$, i.e. outputs a state with $\ket{1}$ on the first qubit with amplitude-squared $\geqslant b$}
{$\forall$ $n$-qubit state $\ket{\psi}$, $V(\ket{\psi}\ket{0\ldots 0})$ accepts with probability $\leqslant a$}
{\wherePoly{b-a}{n} and $\ket{0\ldots 0}$ is the all-zero $m$-qubit ancilla state}
This problem is QMA-complete immediately from the definition of QMA.
\EnumItem{non-identity check \label{non-identity}}
\probDesc{
Given a unitary $U$ implemented by a quantum circuit on $n$ qubits,
determine whether $U$ is \textit{not} close to a trivial unitary (the identity times a phase), i.e.}
{$\forall \phi\in[0,2\pi), \norm{U - e^{i \phi}\id} \geqslant b$}
{$\exists \phi\in[0,2\pi)$ such that $\norm{U - e^{i \phi}\id} \leqslant a$}
{\wherePoly{b-a}{n}}
\probTheorem{}{Janzing, Wocjan, and Beth}{JWB05}\\
\probTheorem{even for small-depth quantum circuits}{Ji and Wu}{JW09}\\
\probRed{QCSAT}{\ref{QCSAT}}
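As a toy yes instance (constructed here for illustration, not taken from \cite{JWB05}): the single-qubit gate $X$ has eigenvalues $\pm 1$, so for every phase $\phi$,
\[ \norm{X - e^{i \phi}\id} = \max\left(|1 - e^{i\phi}|, |1 + e^{i\phi}|\right) \geqslant \sqrt{2}, \]
since $|1 - e^{i\phi}|^{2} + |1 + e^{i\phi}|^{2} = 4$; no single phase can be close to two antipodal eigenvalues at once.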
\EnumItem{non-equivalence check}
\probIntro{This problem, a generalisation of \langfont{non-identity check} (\ref{non-identity}), is to determine whether two quantum circuits (do not) define approximately the same unitary (up to phase) on some chosen invariant subspace. The subspace could, of course, be chosen to be the entire space, but in many cases one is interested in restricting attention to a subspace, e.g. one defined by a quantum error-correcting code.
}
\probDesc{
Given two unitaries, $U_1$ and $U_2$, implemented by a quantum circuit on $n$ qubits, let $\mathcal{V}$ be a common invariant subspace of $(\C^2)^{\otimes n}$ specified by a quantum circuit $V$ (that ascertains with certainty whether a given input is in $\mathcal{V}$ or not). The problem is to determine, given $U_1$, $U_2$, and $V$, whether the restrictions of $U_1$ and $U_2$ to $\mathcal{V}$ are not approximately equivalent, i.e.}
{$\exists \ \ket{\psi}\in\mathcal{V}$ such that $\forall \phi\in[0,2\pi), \norm{(U_1 U_2^\dag - e^{i \phi}\id)\ketpsi} \geqslant b$}
{$\exists \ \phi\in[0,2\pi)$ such that $\forall \ket{\psi}\in\mathcal{V}, \norm{(U_1 U_2^\dag - e^{i \phi}\id)\ketpsi} \leqslant a$}
{\wherePoly{b-a}{n}}
\probTheorem{}{Janzing, Wocjan, and Beth}{JWB05}\\
\probRed{\langfont{non-identity check}}{\ref{non-identity}}
\EnumItem{Mixed-state non-identity check \label{mixed-non-identity}}
\probIntro{In this problem, either the given circuit acts like some unitary $U$ that is far from the identity, or else it acts like the identity. This is very similar to \langfont{non-identity check} (\ref{non-identity}), but allows mixed-state circuit inputs. The diamond norm used here is defined in the glossary (appendix \ref{glossary}).}
\probDesc{
Given a quantum circuit $C$ on $n$-qubit density matrices}
{$\Vert C-\id \Vert_{\diamondsuit} \geqslant 2-\epsilon$ and there is an efficiently implementable unitary $U$ and state $\ketpsi$ such that $\Vert C(\selfketbra{\psi}) - U\selfketbra{\psi}U^\dag\Vert_{tr} \leqslant \epsilon$ and $\Vert U\selfketbra{\psi}U^\dag - \selfketbra{\psi}\Vert_{tr} \geqslant 2-\epsilon$}
{$\Vert C-\id \Vert_{\diamondsuit} \leqslant\epsilon$}
{where $1 > \epsilon \geqslant 2^{-\poly(n)}$}
\probTheorem{}{Rosgen}{Rosgen11b}\\
\probRed{Quantum circuit testing (see appendix~\ref{app:hard})}{\ref{circuit_testing}}
\EnumItem{non-isometry testing \label{non-isometry}}
\probPreamble{
This problem tests to see if a quantum channel is not almost a linear isometry (given a mixed-state quantum circuit description of the channel).
\begin{definition}[isometry]
A linear isometry is a linear map $U : {\cal H}_1 \rightarrow {\cal H}_2$ that preserves inner products, i.e. $U^\dag U = \id_{{\cal H}_1}$.
\end{definition}
Note that this is more general than a unitary operator, as ${\cal H}_1$ and ${\cal H}_2$ may have different sizes and $U$ need not be surjective.
Practically speaking, isometries are the operations involving unitaries that have access to fixed ancillae (say, ancillae starting in the $\ket{0}$ state).
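For example (the canonical instance, stated for concreteness): appending a fixed ancilla qubit,
\[ V\ket{\psi} := \ket{\psi}\otimes\ket{0}, \qquad V^\dag V = \id_{{\cal H}_1}, \]
is an isometry from ${\cal H}_1$ to ${\cal H}_1\otimes\C^2$ that is not unitary, since it is not surjective.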
This problem asks how far from an isometry the input is, so it requires a notion of approximate isometries. A characterising property of isometries is that they map pure states to pure states, even in the presence of a reference system; therefore, the notion of an approximate isometry is defined in terms of how mixed the output of a channel is in the presence of a reference system.
\begin{definition}[$\epsilon$-isometry]
A quantum channel $\Phi$ that is a linear transformation from ${\cal H}_1$ to ${\cal H}_2$ is an $\epsilon$-isometry if $\forall \ket{\psi} \in {\cal H}_1\otimes {\cal H}_1$, we have $\norm{ (\Phi\otimes\id_{{\cal H}_1})(\selfketbra{\psi})} \geqslant 1-\epsilon$, i.e. it maps pure states (in a combined input and reference system) to almost-pure states. The norm appearing in this definition is the operator norm\footnote{definition provided in the glossary (appendix \ref{glossary})}.
\end{definition}}
\probDesc{
Given a quantum channel $\Phi$ that takes density matrices of ${\cal H}_1$ to density matrices of ${\cal H}_2$}
{$\Phi$ is not an $\epsilon$-isometry, i.e. $\exists \ket{\psi}$ such that $\norm{ (\Phi\otimes\id_{{\cal H}_1})(\selfketbra{\psi})} \leqslant\epsilon$}
{$\Phi$ is an $\epsilon$-isometry, i.e. $\forall\ket{\psi}$, $\norm{ (\Phi\otimes\id_{{\cal H}_1})(\selfketbra{\psi})} \geqslant 1-\epsilon$}
{}
\probTheorem{when $0<\epsilon < 1/19$}{Rosgen}{Rosgen11a,Rosgen11b}
\probRed{QCSAT}{\ref{QCSAT}}
\EnumItem{Detecting insecure quantum encryption \label{insecure_q_enc}}
\probIntro{In this problem, we wish to determine whether the given purported encryption channel $E$ is insecure on a large subspace (for any key), or is close to being perfectly secure. The diamond norm used here is defined in the glossary (appendix~\ref{glossary}).}
\probPreamble{
A private channel is a quantum channel with a classical key such that the input state cannot be determined from the output state without the key. Formally, it is defined as follows.
\begin{definition} [$\epsilon$-private channel] Suppose $E$ is a channel taking as input an integer $k\in\lbrace 1,\ldots,K\rbrace$ and a quantum state in space ${\cal H}_1$, and producing a quantum output in space ${\cal H}_2$, with $\text{dim\,}{\cal H}_1\leqslant\text{dim\,}{\cal H}_2$. Let $E_k$ be the quantum channel where the integer input is fixed as $k$. Let $\Omega$ be the completely depolarizing channel that maps all density matrices to the maximally mixed state.
$E$ is an $\epsilon$-private channel if $\Vert \frac{1}{K} \sum_k E_k - \Omega \Vert_{\diamondsuit} \leqslant \epsilon$ (so if the key $k$ is not known, the output of $E$ gives almost no information about the input)
and
there is a polysize decryption channel $D$ (operating on the same space as $E$) such that $\forall k, \Vert D_k \circ E_k -\id\Vert_{\diamondsuit} \leqslant \epsilon$ (i.e. if $k$ is known, the output can be reversed to obtain the input).
\end{definition}
}
\probDesc{
Let $\delta\in(0,1]$. Given circuit $E$, which upon input $k$ implements channel $E_k$ acting from space ${\cal H}_1$ to ${\cal H}_2$ (with $\text{dim\,}{\cal H}_1\leqslant\text{dim\,}{\cal H}_2$)}
{$\exists$ subspace $S$, with $\text{dim\,} S \geqslant (\text{dim\,} {\cal H}_1)^{1-\delta}$, such that for any $k$ and any reference space $\cal R$, if $\rho$ is a density matrix on $S\otimes \cal R$ then $\Vert (E_k\otimes\id_{R})(\rho) - \rho\Vert_{tr} \leqslant \epsilon$}
{$E$ is an $\epsilon$-private channel}
{where $1 > \epsilon \geqslant 2^{-\poly(n)}$}
\probTheorem{for $0<\epsilon<1/8$}{Rosgen}{Rosgen11b}\\
\probRed{Quantum circuit testing (see appendix~\ref{app:hard})}{\ref{circuit_testing}}
\probNotes{In this problem, channels are given as mixed-state circuits.}
\EnumItem{Quantum clique}
\probIntro{
This is the quantum analogue of the NP-complete problem \langfont{largest independent set} on a graph $G$, which asks for the size of the largest set of vertices in which no two vertices are adjacent. According to the analogy, the graph $G$ becomes a channel, and two inputs are `adjacent' if they can be confused after passing through the channel, i.e. if there is an output state that could have come from either of the two input states. In this quantum QMA-complete problem, the channel is a quantum entanglement-breaking channel $\Phi$ and the problem is to find the size of the largest set of input states that cannot be confused after passing through the channel, that is, to determine if there are $k$ inputs $\rho_1,\ldots,\rho_k$ such that $\Phi(\rho_1),\ldots,\Phi(\rho_k)$ are (almost) orthogonal under the trace inner product. Regarding the name, note that the NP-complete problems \langfont{largest independent set} and \langfont{largest clique} (which asks for the largest set of vertices, all of which are adjacent) are essentially the same: a set of vertices is an independent set on a graph $G$ if and only if it is a clique on the complement of $G$.
}
\probPreamble{
Let $S$ be the SWAP gate, so $S\ket{\psi}\otimes\ket{\phi} = \ket{\phi}\otimes\ket{\psi}$. Note that $\trP(\sigma_1\sigma_2)=\trP(S\, \sigma_1\otimes\sigma_2)$ for all density matrices $\sigma_1$ and $\sigma_2$, so the right hand side can be used to evaluate the trace inner product (and therefore determine orthogonality) of $\sigma_1$ with $\sigma_2$.
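A one-line verification of this standard swap trick (recorded for convenience): in any orthonormal basis,
\[ \trP\left(S\, \sigma_1\otimes\sigma_2\right) = \sum_{i,j} \bra{ij} S \left(\sigma_1\otimes\sigma_2\right) \ket{ij} = \sum_{i,j} \bra{ji} \sigma_1\otimes\sigma_2 \ket{ij} = \sum_{i,j} (\sigma_1)_{ji}(\sigma_2)_{ij} = \trP(\sigma_1\sigma_2). \]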
For any density matrix $\rho$ on $k$ registers, let $\rho^i$ denote the result of tracing out all but the $i$th register of $\rho$. Similarly, define $\rho^{i,j} = \trP_{\lbrace 1,\ldots k\rbrace \smallsetminus{\lbrace i,j \rbrace}} (\rho)$.
\begin{definition}[entanglement-breaking channel; q-c channel]
A quantum channel $\Phi$ is \textit{entanglement-breaking} if there are POVM (Hermitian, positive-semidefinite operators that sum to the identity) $\lbrace M_i \rbrace$ and states $\sigma_i$ such that $\Phi(\chi) = \sum_i \trP(M_i\chi)\sigma_i$. In this case it is a fact that $\Phi^{\otimes 2} (\rho^{1,2})$ is always a separable state.
If the $\sigma_i$ in the above definition can be chosen to be $\sigma_i = \selfketbra{i}$, where $\ket{i}$ are orthogonal states, then $\Phi$ is called a \textit{quantum-classical channel} (q-c channel).
\end{definition}}
\probDesc{
Given an integer $k$ and a quantum entanglement-breaking channel $\Phi$ acting on $n$-qubit states}
{$\exists\ \rho_1\otimes\cdots\otimes\rho_k$ such that $\sum_{i,j} \trP(S \Phi(\rho_i)\otimes\Phi(\rho_j)) \leqslant a$}
{$\forall\ k$-register state $\rho$, $\sum_{i,j} \trP(S \Phi^{\otimes 2}(\rho^{i,j})) \geqslant b$}
{where $b$ and $a$ are inverse-polynomially separated}
There are two theorems associated with this problem.
\begin{enumerate}
\item \probTheorem{}{Beigi and Shor}{BS07}\\
\item \probTheoremO{when $a=0$ and $\Phi$ is further restricted to q-c channels}{Beigi and Shor}{BS07}
\end{enumerate}
\probRed{\langfont{$k$-local Hamiltonian}}{\ref{k-local}}\\
\probClassical{\langfont{largest independent set} is NP-complete}
\EnumItem{quantum non-expander}
\probIntro{
A quantum expander is a superoperator that rapidly takes density matrices towards the maximally mixed state. The \langfont{quantum non-expander} problem is to check whether a given superoperator is \textit{not} a good quantum expander. This problem uses the Frobenius norm\footnote{definition provided in the glossary (appendix \ref{glossary})}.
}
\probPreamble{
A density matrix can always be written as $\rho = I + A$, where $I$ is the maximally mixed state and $A$ is traceless. A quantum expander is linear (and unital), so $\Phi(\rho) = I + \Phi(A)$, which differs from $I$ by $\Phi(A)$. Thus a good quantum expander rapidly kills traceless matrices. We have the following formal definition.
\begin{definition}[quantum expander]
Let $\Phi$ be a superoperator acting on $n$-qubit density matrices and obeying $\Phi(\rho) = \frac{1}{D}\sum_d U_d \rho U_d^\dag$ where $\lbrace U_d : d=1,\ldots,D \rbrace$ is a collection of $D=\poly(n)$ efficiently-implementable unitary operators. $\Phi$ is a $\kappa$-contractive quantum expander if $\forall\ 2^n\times 2^n$ traceless matrix~$A$, $\norm{\Phi(A)}_F \leqslant \kappa \norm{A}_F$.
\end{definition}
}
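For intuition, here is a standard toy example (not taken from \cite{BJLW12}): on a single qubit, the Pauli twirl
\[ \Phi(\rho) = \frac{1}{4}\sum_{P\in\lbrace \id, X, Y, Z\rbrace} P\rho P = \frac{\trP(\rho)}{2}\,\id , \]
where $X$, $Y$, $Z$ are the Pauli matrices, sends every traceless matrix $A$ to $0$; it is therefore a $\kappa$-contractive quantum expander for every $\kappa\geqslant 0$, realised with $D = 4$ efficiently implementable unitaries.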
\probDesc{Given a superoperator $\Phi$ that can be written in the form appearing in the above definition}
{$\Phi$ is not a $b$-contractive quantum expander}
{$\Phi$ is an $a$-contractive quantum expander}
{\wherePoly{b-a}{n}}
\probTheorem{}{Bookatz, Jordan, Liu, and Wocjan}{BJLW12}\\
\probRed{QCSAT}{\ref{QCSAT}}
\end{enumerate}
\section{Hamiltonian ground-state energy estimation}
\setEnumLabel{H} \begin{enumerate}
\EnumItem{$k$-local Hamiltonian \label{k-local}}
\probIntro{
This is the problem of estimating the ground-state energy\footnote{see the glossary (appendix \ref{glossary}) for a very brief definition of these terms}
of a Hamiltonian in which all interactions are $k$-local, that is, they only ever involve at most $k$ particles at a time. Formally, $H$ is a $k$-local Hamiltonian if $H=\sum_i H_i$ where each $H_i$ is a Hermitian operator acting (non-trivially) on at most $k$ qubits.
In addition to restricting the locality of a Hamiltonian in terms of the number of qubits on which it acts, one can also consider geometric restrictions on the Hamiltonian. Indeed, one can imagine a 2-local Hamiltonian in which interactions can only occur between neighbouring sites, e.g. $H = \sum_{i=1}^{n-1} H_{i,i+1}$ where each $H_{i,i+1}$ acts non-trivially only on particles $i$ and $i+1$ arranged on a line. The results of these considerations will also be mentioned below. Note that all these problems use the operator norm\addtocounter{footnote}{-1}\addtocounter{Hfootnote}{-1}\footnotemark}.
\probDesc{
Given a $k$-local Hamiltonian on $n$ qubits, $H=\sum_{i=1}^r H_i$, where $r=\poly(n)$ and each $H_i$ acts non-trivially on at most $k$ qubits and has bounded operator norm $\norm{H_i}\leqslant \poly(n)$}
{$H$ has an eigenvalue less than $a$}
{all of the eigenvalues of $H$ are larger than $b$}
{\wherePoly{b-a}{n}}
\probTheorem{for $k\geqslant 2$}{Kempe, Kitaev, and Regev}{KKR06}\\
\probRed{QCSAT}{\ref{QCSAT}}\\
Additionally, it has been proved that it is:
\begin{enumerate}
\item \probTheorem{when $k=O(\log n)$ (still provided $k\geqslant 2$)}{Kitaev}{KSV02}\\
\item \probTheorem{even when $k=3$ with constant norms, i.e. $\norm{H_i}=O(1)$}{Nagaj}{NM07}\\
\item \probTheorem{even when 2-local on a line of 8-dimensional qudits\addtocounter{footnote}{-1}\addtocounter{Hfootnote}{-1}\footnotemark, i.e. when the qudits are arranged on a line and only nearest-neighbour interactions are present}{Hallgren, Nagaj, and Narayanaswami\footnote{improving the work by Aharonov, Gottesman, Irani, and Kempe\cite{AGIK09} who showed this for 12-dimensional qudits}}{HNN13} \\
\item \label{2D-lattice} \probTheorem{even when 2-local on a 2-D lattice}{Oliveira and Terhal}{OT08} \\
\item \probTheorem{even for interacting bosons under two-body interactions}{Wei, Mosca, and Nayak}{WMN10} \\
\item \probTheorem{even for interacting fermions under two-body interactions}{Whitfield, Love, and Aspuru-Guzik}{WLA12} \\
\item \label{realHam} \probTheorem{even when restricted to real 2-local Hamiltonians}{Biamonte and Love}{BL08}\\
\item \label{stochastic_ham} \probTheorem{even for stochastic\footnote{definition provided in the glossary (appendix \ref{glossary})}\ Hamiltonians (i.e. symmetric Markov matrices) when $k\geqslant 3$}{Jordan, Gosset, and Love}{JGL10}\\
\end{enumerate}
\probNotes{
For $k=1$, the \langfont{1-local Hamiltonian} is in P \cite{KKR06}. \\
Many other simple modifications of \langfont{$k$-local Hamiltonian} are also QMA-complete. For example\footnote{This result is from \cite{GK12}, who actually defined their problem for finding the highest energy of a positive-semidefinite Hamiltonian. Their interest lay in finding approximation algorithms for this problem.}, QMA-completeness is not changed when restricting to dense $k$-local Hamiltonians, i.e. for a negative-semidefinite Hamiltonian when the ground energy is (in absolute value) $\Omega(n^k)$.}
\probClassical{MAX-$k$-SAT is NP-complete for $k\geqslant 2$.\\
This problem may easily be rephrased in terms of satisfying constraints imposed by the $H_i$ terms. The yes case corresponds to the existence of a state whose expected total weight of violated constraints is less than $a$; the no case, to every state violating, in expectation value, constraints of total weight at least $b$. This problem can therefore be viewed as estimating the largest number of simultaneously satisfiable constraints, whence the analogy to MAX-SAT}
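To make the analogy concrete, here is an illustrative toy mapping (a standard construction, described here for concreteness rather than quoted from the cited works): the clause $x_1\vee x_2$, which is violated only by the assignment $x_1 = x_2 = 0$, corresponds to the 2-local projector
\[ H_C = \selfketbra{00} \]
acting on qubits 1 and 2. For a computational basis state $\ket{z}$, the energy $\bra{z}\sum_C H_C\ket{z}$ counts exactly the number of violated clauses, and since such a Hamiltonian is diagonal, its lowest eigenvalue equals the minimum number of clauses violated by any Boolean assignment.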
\EnumItem{excited $k$-local Hamiltonian}
\probIntro{
We have seen that estimating the ground-state energy of a Hamiltonian is QMA-complete. The current problem shows that estimating the low-lying excited energies of a Hamiltonian is QMA-complete; specifically, estimating the $c^\text{th}$ energy eigenvalue of a $k$-local Hamiltonian is QMA-complete when $c=O(1)$.
}
\probDesc{
Given a $k$-local Hamiltonian $H$ on $n$ qubits}
{the $c^\text{th}$ eigenvalue of $H$ is $\leqslant a$}
{the $c^\text{th}$ eigenvalue of $H$ is $\geqslant b$}
{\wherePoly{b-a}{n}}
\probTheorem{for $c=O(1)$ and $k\geqslant 3$}{Jordan, Gosset, and Love}{JGL10} \\
\probRed{\langfont{2-local Hamiltonian}}{\ref{k-local}}
\EnumItem{highest energy of a $k$-local stoquastic Hamiltonian}
\probIntro{Problem \ref{stochastic_ham} states that finding the lowest eigenvalue of a stochastic\footnote{definition provided in the glossary (appendix \ref{glossary})}\addtocounter{footnote}{-1}\addtocounter{Hfootnote}{-1}
Hamiltonian is QMA-complete. Since if $H$ is a stochastic Hamiltonian then $-H$ is stoquastic\footnotemark, we also have QMA-completeness for the problem of estimating the largest energy of a stoquastic Hamiltonian.}
\probDesc{
Given a $k$-local stoquastic Hamiltonian $H$ on $n$ qubits}
{$H$ has an eigenvalue greater than $b$}
{all of the eigenvalues of $H$ are less than $a$}
{\wherePoly{b-a}{n}}
\probTheorem{for $k\geqslant 3$}{Jordan, Gosset, and Love}{JGL10}\\
\probRed{\langfont{$k$-local stochastic Hamiltonian} (\ref{stochastic_ham}) which itself is from}{\ref{BL08-1}}
\probNotes{\langfont{$k$-local stoquastic Hamiltonian}, i.e. finding the lowest energy rather than the highest energy, is in AM. \cite{BDOT08}}
\EnumItem{Separable $k$-local Hamiltonian}
\probIntro{
This problem is the \langfont{$k$-local Hamiltonian} problem with the extra restriction that the quantum state of interest be a separable state, i.e. the question is whether there is a \textit{separable} state with energy less than $a$ (or greater than $b$). Separable, here, is with respect to a given partition of the space into two sets, between which the state must not be entangled.
}
\probDesc{
Given the same input as described in the \langfont{$k$-local Hamiltonian} problem, as well as a partition of the qubits into two disjoint sets $\calA$ and $\calB$}
{$\exists \ketpsi = \ketpsi_A\otimes\ketpsi_B$, with $\ketpsi_A\in\calA$ and $\ketpsi_B\in\calB$, such that $\bra{\psi}H\ket{\psi}\leqslant a$}
{$\forall \ketpsi = \ketpsi_A\otimes\ketpsi_B$, with $\ketpsi_A\in\calA$ and $\ketpsi_B\in\calB$, $\bra{\psi}H\ket{\psi}\geqslant b$}
{\wherePoly{b-a}{n}}
\probTheorem{}{Chailloux and Sattath}{CS12}\\
\probRed{\langfont{$k$-local Hamiltonian}}{\ref{k-local}}
\probNotes{
Interestingly, although the QMA-hardness proof follows immediately from \langfont{$k$-local Hamiltonian}, the ``in QMA" proof is non-trivial and relies on the \langfont{local consistency problem} (\ref{local_consistency}).
}
\EnumItem{Physically relevant Hamiltonians}
\langfont{2-local Hamiltonian} is also QMA-complete when the Hamiltonian is restricted to various physically-relevant forms. These Hamiltonians may serve as good models for phenomena found in nature, or may be relatively easy to physically implement.
We will not explain all of the relevant physics and quantum chemistry here. However, we use the following notations:
The Pauli matrices $X$, $Y$, and $Z$ are denoted as
\[
\sigma^x = X, \quad\quad \sigma^y = Y, \quad\quad \sigma^z = Z, \quad\quad \boldsymbol{\sigma} = (\sigma^x,\sigma^y,\sigma^z) .
\]
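For reference (standard definitions, recorded for convenience), in the computational basis these are
\[ \sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \]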
When particles are on a lattice,
$\langle i,j \rangle$ denotes nearest neighbours on the lattice.
An electron on a lattice is located at some lattice site $i$
and can be either spin-up ($\uparrow$) or spin-down ($\downarrow$).
The operators $a^\dag_{i,s}$ and $a_{i,s}$ are the fermionic raising and lowering operators, respectively; they create and annihilate an electron of spin $s\in\lbrace\uparrow,\downarrow\rbrace$ at site
$i$, respectively.
The operator corresponding to the number of electrons of spin $s$ at site $i$ is $n_{i,s} = a^\dag_{i,s} a_{i,s}$.
Note that proving the QMA-completeness of physical Hamiltonians is related to the goal of implementing adiabatic quantum computation: techniques used to prove that a Hamiltonian is QMA-complete are often also used to prove that it is universal for adiabatic quantum computation.
\begin{enumerate}
\item \label{BL08-1} The 2-local Hamiltonian
\[
H_{ZZXX} = \sum_i h_i \sigma^z_i + \sum_i d_i \sigma^x_i + \sum_{i,j} J_{ij} \sigma^z_i \sigma^z_j + \sum_{i,j} K_{ij} \sigma^x_i \sigma^x_j
\]
where coefficients $d_i, h_i, K_{ij}, J_{ij}$ are real numbers.
This Hamiltonian represents a 2-local Ising model with 1-local transverse field and a tunable 2-local transverse $\sigma^x\sigma^x$ coupling. The $\sigma^x\sigma^x$ coupling is realisable, e.g., using capacitive coupling of flux qubits or with polar molecules \cite{BL08}.
\probTheorem{}{Biamonte and Love}{BL08}\\
\probRed{\langfont{2-local real Hamiltonian}}{\ref{realHam}}\\
\probClassical{When $K_{ij}=d_i=0$ we obtain the famous Ising (spin glass) model with a magnetic field, which is NP-complete on a planar graph \cite{Barahona82}}
\\
\item \label{BL08-2} The 2-local Hamiltonian
\[
H_{ZX} = \sum_i h_i \sigma^z_i + \sum_i d_i \sigma^x_i + \sum_{i<j} J_{ij} \sigma^z_i \sigma^x_j + \sum_{i<j} K_{ij} \sigma^x_i \sigma^z_j
\]
where coefficients $d_i, h_i, K_{ij}, J_{ij}$ are real numbers.
The $\sigma^x\sigma^z$ coupling is realisable using flux qubits \cite{BL08}.
\probTheorem{}{Biamonte and Love}{BL08} \\
\probRed{\langfont{2-local real Hamiltonian}}{\ref{realHam}}
\\
\item \label{Heisenberg} The 2D Heisenberg Hamiltonian with local magnetic fields
The 2D Heisenberg Hamiltonian is a model for spins on a 2-dimensional lattice in a magnetic system, and is often used to study phase transitions. It takes the form
\[
H_{\text{Heis}} = J\sum_{\langle i,j\rangle} \boldsymbol{\sigma_i}\cdot\boldsymbol{\sigma_j} - \sum_i \boldsymbol{\sigma_i}\cdot\boldsymbol{B}_i .
\]
Here, sums over $i$ range over all sites $i$ in the lattice, and $\langle i,j \rangle$ range over nearest-neighbouring sites. The local magnetic field at site $i$ is denoted by $\boldsymbol{B_i}$, and the coupling constant $J$ is a real number.
Hamiltonians restricted to this form are QMA-complete both for $J>0$ and for $J<0$.
\probTheorem{}{Schuch and Verstraete}{SV09}\\
\probRed{2-local 2D-lattice Hamiltonian}{\ref{2D-lattice}}
\\
\item \label{Hubbard} The 2D Hubbard Hamiltonian with local magnetic fields
The 2D Hubbard model describes a system of fermions on a 2-dimensional lattice and
is therefore used to model electrons in solid-state systems.
It takes the form
\[
H_{\text{Hubb}} = -t\sum_{\langle i,j\rangle, s} a^\dag_{i,s} a_{j,s} + U\sum_i n_{i,\uparrow} n_{i,\downarrow} - \sum_i \boldsymbol{\bar \sigma_i}\cdot\boldsymbol{B}_i \ .
\]
Here, sums over $i$ range over all sites $i$ in the lattice, $\langle i,j \rangle$ ranges over nearest-neighbouring sites, and $s$ ranges over spins ${\lbrace \uparrow,\downarrow \rbrace}$. In this model, $\boldsymbol{\bar \sigma_i}$ is the vector of on-site spin operators built from the Pauli matrices and the fermionic operators: $\boldsymbol{\bar \sigma_i}= (\bar \sigma^x_i, \bar \sigma^y_i, \bar \sigma^z_i)$ with $\bar \sigma^\alpha_i = \sum_{s,s'} \sigma^\alpha_{ss'} a^\dag_{i,s} a_{i,s'}$, where $\sigma^\alpha_{ss'}$ denotes the $(s,s')$ entry of the Pauli matrix $\sigma^\alpha$. The local magnetic field at site $i$ is denoted by $\boldsymbol{B}_i$, and $U$ and $t$ are positive numbers representing the on-site electron-electron Coulomb repulsion and the electron tunneling amplitude, respectively.
\probTheorem{}{Schuch and Verstraete}{SV09}\\
\probRed{Heisenberg Hamiltonian}{\ref{Heisenberg}}
\end{enumerate}
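The following script is an illustrative sketch added to this survey of the $H_{ZZXX}$ Hamiltonian of item~\ref{BL08-1}; it does not come from \cite{BL08} or any other cited work, and the function names and coupling values in it are arbitrary. It builds the Hamiltonian as a dense matrix with NumPy and diagonalises it by brute force, which is only feasible for a handful of qubits.
\begin{verbatim}
# Illustration only: dense construction of H_ZZXX for a few qubits.
# The couplings h, d, J, K are arbitrary random numbers, not from any reference.
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op_on(ops, n):
    """Tensor {site: single-qubit matrix} (identity elsewhere) into an n-qubit operator."""
    out = np.array([[1.0]])
    for site in range(n):
        out = np.kron(out, ops.get(site, I2))
    return out

def h_zzxx(n, h, d, J, K):
    """H = sum_i h_i Z_i + d_i X_i + sum_{i<j} J_ij Z_i Z_j + K_ij X_i X_j."""
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H += h[i] * op_on({i: Z}, n) + d[i] * op_on({i: X}, n)
        for j in range(i + 1, n):
            H += J[i, j] * op_on({i: Z, j: Z}, n)
            H += K[i, j] * op_on({i: X, j: X}, n)
    return H

n = 4
rng = np.random.default_rng(0)
h, d = rng.normal(size=n), rng.normal(size=n)
J, K = rng.normal(size=(n, n)), rng.normal(size=(n, n))
print("ground-state energy:", np.linalg.eigvalsh(h_zzxx(n, h, d, J, K)).min())
\end{verbatim}
Estimating the ground-state energy this way scales exponentially in the number of qubits; the QMA-completeness statements above concern deciding that energy to inverse-polynomial precision.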
\EnumItem{Translationally invariant $k$-local Hamiltonian}
\probIntro{There has been some interest in studying the \langfont{$k$-local Hamiltonian} (\ref{k-local}) problem with the added restriction that the Hamiltonian be \textit{translationally invariant}, i.e. that the Hamiltonian be identical at each particle (qudit\footnote{definition provided in the glossary (appendix \ref{glossary})}) in the system. Such systems are generally in a one-dimensional geometry with periodic boundary conditions. Some problems additionally employ geometric locality (which we refer to here as being on a line), such as constraining interactions to be between nearest-neighbouring particles, or between nearby (but not necessarily nearest-neighbouring) particles; some problems do not, however, have such geometric locality constraints. Current results are listed here. These results are all built on Ref.\cite{AGIK09}. Finally, note that there may be complications in discussing QMA-completeness, since if a Hamiltonian is local and translationally invariant, the only input that scales is the number of qudits, $n$; it may need to be assumed that $n$ is given to the problem in unary to avoid these complications, but we will not discuss this here.
}
The \langfont{$k$-local Hamiltonian} problem is:
\begin{enumerate}
\item \probTheorem{even with a translationally invariant 3-local Hamiltonian with 22-state qudits, but where the interactions are not necessarily geometrically local.}{Vollbrecht and Cirac}{VC08}
\\
\item \probTheorem{even for translationally invariant 2-local Hamiltonians on $\poly(n)$-state qudits}{Kay}{Kay07}
\\
\item \probTheorem{even for translationally invariant $O(\log n)$-local Hamiltonians on 7-state qudits, where the interactions are geometrically local (albeit not restricted to nearest-neighbours)}{Kay}{Kay07}
\\
\item \probTheorem{even for 2-local Hamiltonians on a line of 49-state qudits where all strictly-2-local Hamiltonian terms are translationally invariant, although the 1-local terms can still be position-dependent}{Kay}{Kay08}
\\
\probNotes{Although not discussed here, similar results exist for rotationally invariant Hamiltonians \cite{Kay09}.\\
There exist translationally invariant 2-local Hamiltonian problems on constant\hyp{}dimensional qudits, where the interactions are only between nearest-neighbours
(and in which the only input is the size of the system) that are QMA$_{\text{EXP}}$-complete, where QMA$_{\text{EXP}}$ is the quantum analogue of the classical complexity class NEXP; see \cite{GI10}.}
\end{enumerate}
\EnumItem{Universal functional of DFT}
\probPreamble{
In quantum chemistry, \textit{density functional theory} (DFT) is a method for approximating the ground-state energy of an electron system (see \cite{SV09} and the references therein). The Hamiltonian for a system of $N$ electrons is $H = T^e + V^{ee} + V^e$ where the kinetic energy, electron-electron Coulomb potential, and local potential are given respectively by
\[
\begin{split}
& T^e = -\frac{1}{2}\sum_{i=1}^N \nabla^2_i \\
& V^{ee} = \sum_{1\leqslant i < j \leqslant N} \frac{\gamma}{|\boldsymbol r_i - \boldsymbol r_j|} \\
& V^e = \sum_{i=1}^N V(\boldsymbol x_i)
\end{split}
\]
where $\gamma>0$, $\boldsymbol r_i$ is the position of the $i$th electron, $\boldsymbol x_i = (\boldsymbol r_i, s_i)$ is the position $(\boldsymbol r_i$) of the $i$th electron together with its spin ($s_i$), and $\nabla^2$ is the Laplacian operator.
The ground-state energy of a system of $N$ electrons can be found by minimizing the energy over all $N$-electron densities $\rho^{(N)}(\boldsymbol{x})$, but it can also be given by minimizing over all single-electron probability distributions $n(\boldsymbol{x})$ as
\[
E_0 = \min_n \Big(\tr{V^e n(\boldsymbol{x})} + F[n(\boldsymbol{x})] \Big)
\]
where the \textit{universal functional of DFT} is
\[
F[n(\boldsymbol{x})] = \min_{\rho^{(N)} \rightarrow n} \tr{(T^e + V^{ee})\rho^{(N)}(\boldsymbol{x})} .
\]
In the universal functional, the minimization is over all $N$-electron densities $\rho^{(N)}(\boldsymbol{x})$ that give rise to the reduced-density $n(\boldsymbol{x})$; therefore $F[n]$ gives the lowest energy of $T^e + V^{ee}$ consistent with $n$. The difficult part of DFT is approximating $F[n(\boldsymbol{x})]$, which is independent of the external potential $V^e$ and is therefore universal for all systems.
}
\probDesc{
Given an integer $N$, representing the number of electrons, and a one-electron probability density $n(\boldsymbol{x})$}
{$F[n(\boldsymbol{x})] \leqslant a$}
{$F[n(\boldsymbol{x})] \geqslant b$}
{\wherePoly{b-a}{N} and the strength of the Hamiltonian is bounded by $\poly(N)$}
\probTheorem{}{Schuch and Verstraete}{SV09, WLA12}\\
\probRed{Hubbard model}{\ref{Hubbard}} [Turing reduction]
\end{enumerate}
\subsection{Quantum $k$-SAT and its variations}
\langfont{Quantum $k$-SAT} is really just the \langfont{$k$-local Hamiltonian} problem restricted to projection operators. Nonetheless, it is included here as a subsection of its own due to its high level of interest and study. Note that occasionally people speak of the problem \langfont{MAX-quantum-$k$-SAT}; this is just another name for the \langfont{$k$-local Hamiltonian} problem (\ref{k-local}), and is therefore QMA-complete for $k\geqslant 2$. The problem \langfont{quantum $k$-SAT}, however, is different.\\
\setEnumLabel{S} \begin{enumerate}
\EnumItem{quantum $k$-SAT \label{q-k-SAT}}
\probIntro{
\langfont{Quantum $k$-SAT} is the quantum analogue of the classical problem $k$-SAT. It is actually simply the \langfont{$k$-local Hamiltonian problem} restricted to the case of $k$-local projector Hamiltonians\footnote{definition provided in the glossary (appendix \ref{glossary})}.
In classical $k$-SAT, the objective is to determine whether there exists a bit string (so each character in the string can be either 0 or 1) that satisfies (all of) a set of Boolean clauses, each of which only involves at most $k$ bits of the string. In the quantum analogue, rather than Boolean clauses we have projection operators. A \langfont{quantum $k$-SAT} instance has a solution if there is a quantum state that passes every projection operator, i.e., is a 0-eigenvector of (equivalently, lies in the kernel of) each of them.
We provide two equivalent definitions of this problem here. The first emphasises \langfont{quantum $k$-SAT} as a special case of \langfont{$k$-local Hamiltonian}, and the second emphasises the similarity to classical $k$-SAT; a small numerical sketch is given at the end of this subsection.
}
\probDesc{
Given $k$-local projection operators $\lbrace \Pi_1,\ldots,\Pi_m \rbrace$ on $n$ qubits, where $m=\poly(n)$, and letting $H=\sum_{i=1}^m \Pi_i$}
{$H$ has an eigenvalue of precisely 0}
{all of the eigenvalues of $H$ are larger than $b$}
{\wherePoly{b}{n}}
Equivalently, we can define the problem as follows.\\
\probDesc{
Given polynomially many $k$-local projection operators $\lbrace\Pi_i\rbrace$}
{$\exists\ \ket{\psi}$ such that $\Pi_i \ket{\psi} = 0$ $\forall i$}
{$\forall\ \ketpsi, \sum_i \bra{\psi} \Pi_i \ket{\psi} \geqslant \epsilon $ (i.e. the expected number of `clause violations' is $\geqslant\epsilon$)}
{\wherePoly{\epsilon}{n}}
\probTheoremO{for $k\geqslant 3$}{Gosset and Nagaj\footnote{improving the results of Bravyi\cite{Bravyi06}, which showed this for $k\geqslant 4$}}{GN13}\\
\probRed{QCSAT}{\ref{QCSAT}}
\probNotes{
\langfont{Quantum $k$-SAT} is in P for $k=2$.\cite{Bravyi06}\\
\langfont{Quantum $k$-SAT} is still QMA$_1$-complete if instead of demanding that $\Pi_i$ be projectors, we demand they be positive-semidefinite operators with zero ground-state energies and constant norms \cite{NM07}.
}
\probClassical{$k$-SAT is NP-complete for $k\geqslant 3$}
\EnumItem{quantum ($d_1,d_2,\ldots,d_k$)-SAT}
\probIntro{\langfont{Quantum ($d_1,d_2,\ldots,d_k$)-SAT} is a quantum $k$-SAT problem but with qudits rather than qubits. Specifically, in a \langfont{quantum ($d_1,d_2,\ldots,d_k$)-SAT} instance, the system consists of $n$ qudits (of various dimension), and each projection operator acts non-trivially on at most $k$ of these $n$ qudits, of the types specified, namely one $d_1$-dimensional qudit, one $d_2$-dimensional qudit,\ldots, and one $d_k$-dimensional qudit. Bear in mind that, e.g., if $d_1\geqslant d_2$ then a $d_2$-dimensional qudit is itself considered a type of $d_1$-dimensional qudit, so a projection operator that acts only on two $d_2$-dimensional qudits is also allowed. For example, an instance of \langfont{quantum (5,3)-SAT} involves a system of $n$ qudits (3-dimensional qudits called qutrits and 5-dimensional qudits called cinquits) such that each projection operator acts (non-trivially) on a single qudit, on one cinquit and one qutrit, or on two qutrits (but not on two cinquits).
For purposes of notation, we assume that $d_1\geqslant d_2 \geqslant \ldots \geqslant d_k$.
For $k\geqslant 3$, this class is trivial\footnote{Earlier work by Nagaj and Mozes~\cite{NM07} showing that \langfont{quantum (3,2,2)-SAT} is QMA$_1$-complete is now subsumed by the result that \langfont{quantum 3-SAT} is QMA$_1$-complete.}: since \langfont{quantum 3-SAT} is QMA$_1$-complete for qubits, it is certainly QMA$_1$-complete for qudits. The case $k=2$ is not fully understood; however, the following results are known.}
\begin{enumerate}
\item
\langfont{quantum (5,3)-SAT}, i.e. with a cinquit and a qutrit \\%: i.e. quantum 2-SAT with a cinquit and qutrit, rather than with two qubits.\cite{ER08}
\probTheoremO{for $k=2$ with $d_1\geqslant 5$, $d_2\geqslant 3$}{Eldar and Regev}{ER08}
\\
\item
\langfont{quantum (11,11)-SAT on a (one-dimensional) line} \\
\probTheoremO{}{Nagaj}{Nagaj08}\\
\end{enumerate}
\probNotes{
\langfont{Quantum (2,2)-SAT}, i.e. \langfont{quantum 2-SAT}, is in P.\\
The complexity of \langfont{quantum ($d_1,d_2$)-SAT} when $d_1<5$ or $d_2=2$ is an open question; these cases are known to be NP-hard (except for $d_1=d_2=2$, which is in P).\cite{ER08}
}
\probClassical{even though classical 2-SAT is in P, classical (3,2)-SAT, where one of the binary variables is replaced by a ternary variable, is NP-complete\cite{ER08}}
\EnumItem{stochastic $k$-SAT}
\probIntro{This problem is like \langfont{quantum $k$-SAT}, except that instead of projection operators it uses stochastic, Hermitian, positive-semidefinite operators (see glossary, appendix~\ref{glossary}, for definitions).}
\probDesc{
Given polynomially many $k$-local stochastic, Hermitian, positive-semidefinite operators $\lbrace H_1,\ldots,H_m \rbrace$ on $n$-qubits with norms bounded by $\poly(n)$}
{the lowest eigenvalue of $H=\sum_i H_i$ is 0}
{all eigenvalues of $H$ are $\geqslant b$}
{\wherePoly{b}{n}}
\probTheoremO{for $k=6$}{Jordan, Gosset, and Love}{JGL10}\\
\probRed{\langfont{quantum 4-SAT}}{\ref{q-k-SAT}}
\probNotes{
\langfont{Stoquastic quantum $k$-SAT}, where the word `stochastic' is replaced by `stoquastic' above, is in MA and is MA-complete for $k\geqslant 6$. \cite{BT08} Note that \langfont{stochastic $k$-SAT} makes no mention of projection operators, and therefore isn't really a quantum $k$-SAT problem. In \langfont{stoquastic quantum $k$-SAT}, its MA-complete cousin, however, the operators can be converted to equivalent operators that are projectors, whence the relation to \langfont{quantum $k$-SAT}. No connection to projectors is known in the stochastic case.
}
\end{enumerate}
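As an illustration of the \langfont{quantum $k$-SAT} acceptance condition (\ref{q-k-SAT}), the following sketch is added to this survey; it is not taken from any of the cited works, and all instance data in it is random. It places one rank-1 projector on each nearest-neighbour pair of a small chain of qubits and checks whether the sum of the constraints has a zero eigenvalue.
\begin{verbatim}
# Illustration only: a random quantum-2-SAT-style instance on a chain of qubits.
import numpy as np

rng = np.random.default_rng(1)
n = 5                                   # number of qubits

def random_rank1_projector(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())        # Pi = |v><v|, so Pi^2 = Pi

def on_pair(P, i, n):
    """Embed a 4x4 operator acting on qubits (i, i+1) into the full 2^n space."""
    return np.kron(np.kron(np.eye(2**i), P), np.eye(2**(n - i - 2)))

H = sum(on_pair(random_rank1_projector(4), i, n) for i in range(n - 1))
print("smallest eigenvalue of sum of constraints:", np.linalg.eigvalsh(H)[0])
# ~0  iff  some state is a 0-eigenvector of every projector (a satisfying state)
\end{verbatim}
Since each constraint is positive semidefinite, the sum has eigenvalue exactly $0$ precisely when some state lies in the kernel of every constraint, which is the acceptance condition of \langfont{quantum $k$-SAT}.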
\section{Density matrix consistency}
\setEnumLabel{C} \begin{enumerate}
\EnumItem{$k$-local density matrix consistency \label{local_consistency}}
\probIntro{
Given a set of density matrices on subsystems of a constant number of qubits, this problem is to determine whether there is a global density matrix on the entire space that is consistent with the subsystem density matrices. A small numerical sketch of the consistency condition is given after this list.
}
\probDesc{
Consider a system of $n$ qubits. Given $m=\poly(n)$ $k$-local density matrices $\rho_1, \ldots, \rho_m$, such that each $\rho_i$ acts only on a subset $C_i \subseteq \lbrace 1,\ldots, n \rbrace$ of qubits with $|C_i| \leqslant k$
{$\exists$ $n$-qubit density matrix $\sigma$ such that $\forall i$, $\Vert\rho_i - \tilde\sigma_i\Vert_{tr} = 0$ where $\tilde \sigma_i = \trP_{\{1,\ldots, n\}\smallsetminus C_i} (\sigma)$}
{$\forall$ $n$-qubit density matrix $\sigma$, $\exists i$ such that $\Vert\rho_i - \tilde\sigma_i\Vert_{tr} \geqslant b$ where $\tilde \sigma_i = \trP_{\{1,\ldots, n\}\smallsetminus C_i} (\sigma)$}
{\wherePoly{b}{n}}
\probTheorem{even for $k=2$}{Liu}{Liu06}\\
\probRed{\langfont{$k$-local Hamiltonian}}{\ref{k-local}} [Turing reduction]
\probClassical{\langfont{consistency of marginal distributions} is NP-hard}
\EnumItem{$N$-representability}
\probIntro{
This is the same problem as \langfont{$2$-local density matrix consistency} (\ref{local_consistency}), but specialised to fermions (particles whose quantum state must be antisymmetric under interchange of particles).
}
\probDesc{
Given a system of $N$ fermions and $d$ possible modes, with $N\leqslant d\leqslant \poly(N)$, and a $\frac{d(d-1)}{2} \times \frac{d(d-1)}{2}$ 2-fermion density matrix $\rho$}
{$\exists$ ${d \choose N} \times {d \choose N}$ $N$-fermion density matrix $\sigma$ such that $\trP_{\lbrace 3,\ldots,N\rbrace}(\sigma) = \rho$}
{$\forall$ $N$-fermion density matrices $\sigma$, $\Vert\rho - \trP_{\lbrace 3,\ldots,N\rbrace}(\sigma)\Vert_{tr} \geqslant b$}
{\wherePoly{b}{N}}
\probTheorem{}{Liu, Christandl, and Verstraete}{LCF07}\\
\probRed{\langfont{2-local Hamiltonian}}{\ref{k-local}} [Turing reduction]
\EnumItem{bosonic $N$-representability}
\probIntro{
This is the same problem as \langfont{$2$-local density matrix consistency} (\ref{local_consistency}), but specialised to bosons (particles whose quantum state must be symmetric under interchange of particles).
}
\probDesc{
Given a system of $N$ bosons and $d$ possible modes, with $d\geqslant cN$ (for some constant $c>0$), and a $\frac{d(d+1)}{2} \times \frac{d(d+1)}{2}$ 2-boson density matrix $\rho$}
{$\exists$ ${N+d-1 \choose N} \times {N+d-1 \choose N}$ $N$-boson density matrix $\sigma$ such that $\trP_{\lbrace 3,\ldots,N\rbrace}(\sigma) = \rho$}
{$\forall$ $N$-boson density matrices $\sigma$, $\Vert\rho - \trP_{\lbrace 3,\ldots,N\rbrace}(\sigma)\Vert_{tr} \geqslant b$}
{\wherePoly{b}{N}}
\probTheorem{}{Wei, Mosca, and Nayak}{WMN10} \\
\probRed{2-local Hamiltonian}{\ref{k-local}} [Turing reduction]
\end{enumerate}
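The sketch below is added only for illustration of the partial-trace operation that defines consistency in problem~\ref{local_consistency}; it is not part of the cited results, and the state and helper function in it are made up. It draws a random global $n$-qubit state, computes its 2-local reduced density matrices on neighbouring pairs, and checks that each marginal is a valid density matrix.
\begin{verbatim}
# Illustration only: reduced density matrices of a random global state.
import numpy as np

def partial_trace_keep(rho, keep, n):
    """Reduced density matrix of an n-qubit state rho on the qubits in `keep`."""
    A = rho.reshape([2] * (2 * n))
    n_cur = n
    # Trace out the unwanted qubits, highest index first, so that the axis
    # positions of the remaining targets are unchanged.
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        A = np.trace(A, axis1=q, axis2=q + n_cur)
        n_cur -= 1
    d = 2 ** len(keep)
    return A.reshape(d, d)

n = 4
rng = np.random.default_rng(2)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
sigma = np.outer(psi, psi.conj())        # global n-qubit density matrix

for i in range(n - 1):                   # marginals on the pairs C_i = {i, i+1}
    rho_i = partial_trace_keep(sigma, [i, i + 1], n)
    print(i, round(np.trace(rho_i).real, 6),
          bool(np.linalg.eigvalsh(rho_i).min() > -1e-12))
\end{verbatim}
The marginals produced this way are consistent with $\sigma$ by construction; the QMA-hard task is the converse direction, deciding whether an arbitrary given family of marginals is (close to) consistent with \emph{some} global state.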
\appendix
\section{Glossary }
\label{glossary}
The definitions given here are not necessarily the most general or precise possible, but they suffice for the needs of this paper.
\newcommand{\GlosItem}[1]{\item[#1] --}
\begin{description}
\GlosItem{$i^\text{th}$ energy of a Hamiltonian $H$} the $i^\text{th}$ smallest eigenvalue of $H$.
\GlosItem{ground-state energy of a Hamiltonian $H$} the smallest eigenvalue of $H$.
\GlosItem{Hamiltonian} the generator of time-evolution in a quantum system. Its eigenvalues correspond to the allowable energies of the system. It also dictates what interactions are present in a system. As a matrix, it is Hermitian.
\GlosItem{Hermitian matrix} a square matrix $H$ that is equal to its own conjugate-transpose, i.e. $H^\dag = H$.
\GlosItem{norms of matrices} Several different matrix norms appear in this paper; a short numerical illustration is given at the end of this glossary.
Given a matrix $A$ with elements $a_{ij}$, the
\begin{description}
\item[operator norm] of $A$ is
$\Vert A\Vert = \max{\left\lbrace \Vert A\ketpsi\Vert_2 : \Vert\ketpsi\Vert_2 =1 \right\rbrace}$. For a square matrix, it is also known as the spectral norm; it is the largest singular value of $A$, and if $A$ is normal, then it is the largest absolute value of the eigenvalues of $A$.
\item[Frobenius norm] of $A$ is $\Vert A\Vert_F = \sqrt{\tr{A^\dag A}} = \sqrt{\sum_{i,j} |a_{ij}|^2}$.
\item[trace norm] of $A$ is $\Vert A\Vert_{tr} = \tr{\sqrt{A^\dag A}}$,
which when $A$ is normal is the sum of the absolute value of its eigenvalues. It is often written $\Vert A\Vert_{tr} = \trP|A|$ where $|A|$ denotes $\sqrt{A^\dag A}$.
\end{description}
\GlosItem{norms of quantum superoperators} Occasionally norms of superoperators are required in this paper.
\begin{description}
\item[diamond norm] of a superoperator $\Phi$ that acts on density matrices that act on a Hilbert space $\cal{H}$ is $\Vert \Phi \Vert_\diamondsuit = \sup_{X \neq 0}\Vert(\Phi\otimes\id)(X)\Vert_{tr}/\Vert X\Vert_{tr}$ where the supremum is taken over all nonzero linear operators
$X:\cal{H}\otimes \cal{H} \rightarrow \cal{H}\otimes \cal{H}$.
\end{description}
\GlosItem{positive-semidefinite matrix}
a Hermitian matrix whose eigenvalues are all non-negative.
\GlosItem{$k$-local projector on $n$ qubits} a Hermitian matrix of the form $\Pi = \id^{\otimes (n-k)}\otimes \sum_i\selfketbra{\psi}_i$ where the $\ket{\psi}_i$ are orthonormal $k$-qubit states. It satisfies $\Pi^2 = \Pi$.
\GlosItem{stochastic matrix}
a square matrix of non-negative real numbers such that each row sums to 1. If additionally each column sums to 1, it is called a doubly stochastic matrix.
\GlosItem{stoquastic Hamiltonian}
a Hamiltonian in which the off-diagonal matrix elements are non-positive real numbers in the standard basis.
\GlosItem{qudit}
generalization of a qubit: for some $d$, a $d$-state quantum-mechanical system, or mathematically, a unit-normalized vector in $\C^d$ (but where global phase is irrelevant). When $d=2$ it is called a qubit, when $d=3$ it is called a qutrit. When $d=5$ it may be called a cinquit\cite{ER08}, but to avoid headaches, I advise against trying to name the $d=4$ version.
\end{description}
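As promised under \emph{norms of matrices}, here is a short numerical illustration added for convenience (the matrix is random and the script is mine, not from any cited work): all three matrix norms above can be read off from the singular values, and the results are checked against NumPy's built-in matrix norms.
\begin{verbatim}
# Illustration only: the matrix norms from the glossary, via singular values.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
s = np.linalg.svd(A, compute_uv=False)             # singular values of A

print(s.max(),               np.linalg.norm(A, 2))     # operator (spectral) norm
print(np.sqrt((s**2).sum()), np.linalg.norm(A, 'fro')) # Frobenius norm
print(s.sum(),               np.linalg.norm(A, 'nuc')) # trace (nuclear) norm
\end{verbatim}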
\section{QMA-hard theorems \label{app:hard}}
This appendix contains a theorem that allows one to prove that several quantum circuit verification problems are QMA-hard. Note that it doesn't prove QMA-completeness, only QMA-hardness, so it is relegated to the appendix.
\setEnumLabel{X} \begin{enumerate}
\EnumItem{Quantum circuit testing \label{circuit_testing}}
\probIntro{This problem involves testing the behaviour of a quantum circuit. Given input circuit $C$, one wishes to determine whether it acts like a circuit from uniform circuit family $\mathscr{C}_0$ on a large input space, or like a circuit from uniform circuit family $\mathscr{C}_1$ for all inputs, promised that the two families are significantly different.}
\probDesc{
Let $\delta\in(0,1]$ and let $\mathscr{C}_0$ and $\mathscr{C}_1$ be two uniform families of quantum circuits. Given input quantum circuit $C$ acting on $n$-qubit input space ${\cal H}$, let $C_0\in \mathscr{C}_0$ and $C_1\in \mathscr{C}_1$ act on the same input space ${\cal H}$. The problem is}
{$\exists$ subspace $S$, with $\text{dim} S \geqslant (\text{dim} {\cal H})^{1-\delta}$, such that for any reference space ${\cal R}$, if $\rho$ is a density matrix on $S\otimes {\cal R}$ then $\Vert (C\otimes\id_{{\cal R}})(\rho) - (C_0\otimes\id_{\cal R})(\rho)\Vert_{tr} \leqslant \epsilon$}
{for any reference space ${\cal R}$, if $\rho$ is a density matrix on the full space ${\cal H}\otimes {\cal R}$ then $\Vert (C\otimes\id_{\cal R})(\rho) - (C_1\otimes\id_{\cal R})(\rho)\Vert_{tr} \leqslant \epsilon$}
{where $1 > \epsilon \geqslant 2^{-\poly(n)}$. Note that the promise actually imposes a condition on the allowable $\mathscr{C}_0$ and $\mathscr{C}_1$, forcing them to be significantly different}
\probTheoremH{for constant $\delta$}{Rosgen}{Rosgen11b}\\
\probRed{QCSAT}{\ref{QCSAT}}
\probNotes{leads to: \langfont{mixed-state non-identity check} (\ref{mixed-non-identity}), \langfont{non-isometry testing} (\ref{non-isometry}), \langfont{Detecting insecure quantum encryption}(\ref{insecure_q_enc})}
\end{enumerate}
\section{Diagram of QMA-complete problems}
\begin{figure}
\caption{Diagram of QMA-complete problems.\label{fig:tree}}
\end{figure}
\subsection*{Acknowledgments}
The author gratefully acknowledges support from the Department of Physics at MIT. Much appreciation goes to Scott Aaronson, for whose excellent MIT 6.845 course I prepared this paper, and to Pawel Wocjan, for greatly advancing my knowledge of the class QMA and its history.
\end{document} |
\begin{document}
\title{Ramified class field theory and duality over finite fields}
\author{Rahul Gupta, Amalendu Krishna}
\address{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg,
93040, Regensburg, Germany.}
\email{Rahul.Gupta@mathematik.uni-regensburg.de}
\address{Department of Mathematics, Indian Institute of Science,
Bangalore, 560012, India.}
\email{amalenduk@iisc.ac.in}
\keywords{Milnor $K$-theory, Class field theory,
$p$-adic {\'e}tale cohomology, Cycles with modulus}
\subjclass[2020]{Primary 14C25; Secondary 14F42, 19E15}
\maketitle
\begin{quote}\emph{Abstract.}
We prove a duality theorem for the $p$-adic {\'e}tale motivic
cohomology of a variety $U$ which is the
complement of a divisor on a smooth projective variety over ${\mathbb F}_p$.
This extends the duality theorems of Milne and Jannsen-Saito-Zhao.
The duality introduces a filtration on $H^1_{\text{\'et}}(U, {{\mathbb Q}}/{{\mathbb Z}})$.
We identify this filtration with the classically known
Matsuda filtration when the
reduced part of the divisor is smooth.
We prove a reciprocity theorem
for the idele class groups with modulus introduced by Kerz-Zhao and R{\"u}lling-Saito.
As an application, we derive the failure of Nisnevich descent for
Chow groups with modulus.
\end{quote}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{sec:Intro}
The objective of this paper is to study the duality and reciprocity theorems
for non-complete smooth varieties over finite fields and draw consequences.
Below we describe the contexts and statements of our main results.
\subsection{The duality theorem}\label{sec:D*}
Let $k$ be a finite field of characteristic $p$
and $X$ a smooth projective variety of dimension
$d$ over $k$. Let $W_m\Omega^r_{X,{\operatorname{log}}}$ be the logarithmic Hodge-Witt
sheaf on $X$, defined as the image of the dlog map
from the Milnor $K$-theory sheaf ${\mathcal K}^M_{r,X}$ to the $p$-typical
de Rham-Witt sheaf $W_m\Omega^r_X$ in the {\'e}tale topology.
Milne \cite{Milne-Zeta} proved that there is a perfect pairing of
cohomology groups
\begin{equation}\label{eqn:Milne-Z}
H^i_{\text{\'et}}(X, W_m\Omega^r_{X,{\operatorname{log}}}) \times
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{X,{\operatorname{log}}})
\to {{\mathbb Z}}/{p^m}.
\end{equation}
By \cite[Theorem~8.4]{Geisser-Levine}, there is an isomorphism
$H^i_{\text{\'et}}(X, W_m\Omega^r_{X,{\operatorname{log}}}) \cong H^{i+r}_{\text{\'et}}(X, {{\mathbb Z}}/{p^m}(r))$,
where the latter is the $p$-adic {\'e}tale motivic cohomology
due to Suslin-Voevodsky. Milne's theorem can thus be considered as
a perfect duality for the $p$-adic {\'e}tale motivic cohomology of
smooth projective varieties over $k$. The corresponding
duality for the $\ell$-adic {\'e}tale cohomology is the classical
Poincar{\'e} duality for {\'e}tale cohomology for a prime $\ell \neq p$.
If $U$ is a smooth quasi-projective variety over $k$ which is not
complete, then there is a perfect duality (see \cite{Saito-89}) for the
$\ell$-adic {\'e}tale cohomology in the form
\begin{equation}\label{eqn:Saito}
H^{i}_{\text{\'et}}(U, {{\mathbb Z}}/{\ell^m}(r)) \times H^{2d+1-i}_{c}(U, {{\mathbb Z}}/{\ell^m}(d-r))
\to {{\mathbb Z}}/{\ell^m},
\end{equation}
where $H^{i}_{c}(U, {{\mathbb Z}}/{\ell^m}(j))$ is the $\ell$-adic {\'e}tale cohomology
of $U$ with compact support.
However, there is no known $p$-adic analogue of the {\'e}tale cohomology of $U$
with compact support such that a $p$-adic analogue of ~\eqref{eqn:Saito} could
hold. The construction of such a duality is still an open problem.
Recall that one of the applications (which is the interest of this paper)
of such a duality theorem is to the
study of the mod-$p$ {\'e}tale fundamental group of $U$, which is in general
a very complicated object. A duality theorem such as above would allow one to study this
group in terms of a more tractable {\'e}tale cohomology
of $U$ with compact support.
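For orientation, we record the special case of ~\eqref{eqn:Milne-Z} which is most relevant for fundamental groups; this observation is standard and is included here only as an illustration. Taking $i = 1$ and $r = 0$, and using $W_m\Omega^0_{X,{\operatorname{log}}} \cong {{\mathbb Z}}/{p^m}$, Milne's pairing reads
\[
H^1_{\text{\'et}}(X, {{\mathbb Z}}/{p^m}) \times H^{d}_{\text{\'et}}(X, W_m\Omega^{d}_{X,{\operatorname{log}}})
\to {{\mathbb Z}}/{p^m}.
\]
Since $H^1_{\text{\'et}}(X, {{\mathbb Z}}/{p^m}) \cong {\rm Hom}_{\rm cont}(\pi^{\rm ab}_1(X), {{\mathbb Z}}/{p^m})$ for connected $X$, the duality computes the mod-$p^m$ characters of the abelianized {\'e}tale fundamental group in terms of the top cohomology of a logarithmic Hodge-Witt sheaf. The results below aim at an analogue of this picture for the non-complete variety $U$.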
In \cite{JSZ}, Jannsen-Saito-Zhao proposed an approach in a special case
when $U$ is the complement of a divisor $D$ on a
smooth projective variety $X$ over $k$ such that $D_{\rm red}$ is a
simple normal crossing divisor.
They constructed a relative version of the logarithmic Hodge-Witt sheaves
on $X$, denoted by $W_m\Omega^r_{X|D,{\operatorname{log}}}$. Using these sheaves, they showed
that there is a semi-perfect pairing (see Definition~\ref{defn:PF-lim-colim})
\begin{equation}\label{eqn:JSZ-main}
H^i_{\text{\'et}}(U, W_m\Omega^r_{U,{\operatorname{log}}}) \times {\varprojlim}_n
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{X|nD,{\operatorname{log}}}) \to {{\mathbb Z}}/{p^m}
\end{equation}
of topological abelian groups,
where the first group has discrete topology and the second has
profinite topology. This pairing is perfect when $i = 1$ and $r = 0$.
In this paper, we propose a different approach to the $p$-adic
duality for $U$. This new approach has the advantage that it allows
$D$ to be an arbitrary divisor on $X$. This is possible by virtue of the
choice of the relative version of the logarithmic
Hodge-Witt sheaves. Instead of using the Hyodo-Kato de Rham-Witt sheaves
with respect to a suitable log structure considered in \cite{JSZ},
we use a simpler version of the relative logarithmic
Hodge-Witt sheaf, which we denote by $W_m\Omega^r_{(X,D), {\operatorname{log}}}$. The latter is defined to be the
kernel of the
canonical surjection $W_m\Omega^r_{X,{\operatorname{log}}} \twoheadrightarrow W_m\Omega^r_{D,{\operatorname{log}}}$.
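The following elementary computation is standard and is included only as an illustration of this definition. For $r = 0$ one has $W_m\Omega^0_{X,{\operatorname{log}}} \cong {{\mathbb Z}}/{p^m}$ and likewise on $D$, and the restriction map is the canonical surjection ${{\mathbb Z}}/{p^m} \to \iota_*({{\mathbb Z}}/{p^m})$, so that
\[
W_m\Omega^0_{(X,D), {\operatorname{log}}} \cong j_!({{\mathbb Z}}/{p^m}),
\]
where $j \colon U \hookrightarrow X$ is the inclusion of the complement of $D$. In higher degrees the sheaf $W_m\Omega^r_{(X,D), {\operatorname{log}}}$ is in general sensitive to the multiplicities of $D$, and this is what the filtration by $nD$ in Theorem~\ref{thm:Main-1} exploits.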
Our main result on the $p$-adic duality for $U$ is roughly the
following. We refer to Theorem~\ref{thm:Duality-main} for the precise
statement.
\begin{thm}\label{thm:Main-1}
Let $X$ be a smooth projective variety of dimension $d$
over a finite field $k$ of characteristic $p$.
Let $D \subset X$ be an effective Cartier divisor
with complement $U$. Let $r, i \ge 0$ and $m \ge 1$ be integers.
Then there is a semi-perfect pairing of topological abelian groups
\[
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \times
{\varprojlim}_n
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD),{\operatorname{log}}}) \to {{\mathbb Z}}/{p^m}.
\]
This pairing is perfect if $D_{\rm red}$ is a simple normal crossing divisor,
$i = 1$, $r = 0$, and either $d \neq 2$ or $k \neq {\mathbb F}_2$.
\end{thm}
We show that Theorem~\ref{thm:Main-1} recovers the duality theorem of
Jannsen-Saito-Zhao if $D_{\rm red}$ is a simple normal crossing divisor.
\vskip .3cm
\begin{comment}
There is a canonical map $H^d_{\text{\'et}}l(X, W_m\Omega^{d}_{(X,D),{\operatorname{log}}})^\vee
\to {\varinjlim}_n H^{d}_{\text{\'et}}l(X, W_m\Omega^{d}_{(X,nD),{\operatorname{log}}})^\vee$,
where $G^\vee$ denotes the Pontryagin dual of a profinite or discrete
abelian group $G$.
One then defines ${\mathbb F}il^{{\text{\'et}}l}_{D} H^1_{\text{\'et}}l(U, {{\mathbb Z}}/{p^m})$ to be the inverse
image of the image of this map under the isomorphism
\[
H^1_{{\text{\'et}}l}(U, {{\mathbb Z}}/{p^m}) \xrightarrow{\cong}
{\varinjlim}_n H^{d}_{\text{\'et}}l(X, W_m\Omega^{d-r}_{(X,nD),{\operatorname{log}}})^\vee.
\]
Define ${\mathbb F}il^{{\text{\'et}}l}_{D} H^1(K)$ to be the subgroup
$H^1(K)\{p'\} \bigoplus {\varinjlim}_m {\mathbb F}il^{{\text{\'et}}l}_{D} H^1_{\text{\'et}}l(U, {{\mathbb Z}}/{p^m})$
of $H^1(K)$.
\end{comment}
As a consequence of Theorem~\ref{thm:Main-1}, we obtain a filtration
$\{{\rm Fil}^{\text{\'et}}_{nD} H^1(K)\}_n$, where $K$ is the function field of $X$ and
$H^1(K)$ is a shorthand for $H^1_{\text{\'et}}(K, {{\mathbb Q}}/{{\mathbb Z}})$ (see \S~\ref{sec:Filt-et}).
We let ${\rm Fil}_{D} H^1(K)$ be the subgroup of $H^1(K)$ introduced in
\cite[Definition~7.12]{Gupta-Krishna-REC}. This coincides with
the filtration defined in \cite[Definition~2.9]{Kerz-Saito} by
\cite[Theorem~1.2]{Gupta-Krishna-BF}. ${\rm Fil}_{D} H^1(K)$ can be described as
the subgroup
of continuous characters of the absolute Galois group of $K$ whose Artin
conductors (see \cite[Definition~3.2.5]{Matsuda})
at the generic points of $D$ are bounded by the multiplicities of $D$ along
the respective components of $D_{\rm red}$. This is a more intricate subgroup of $H^1(K)$ than
${\rm Fil}^{\text{\'et}}_{D} H^1(K)$ because the latter can be described in terms of
simpler objects such as the cohomology of the relative logarithmic Hodge-Witt sheaves.
On the other hand, ${\rm Fil}_{D} H^1(K)$ determines the ramification theory of
finite {\'e}tale coverings of $U$.
It is therefore desirable to know if and
when these two filtrations agree. Our next result is the following.
\begin{thm}\label{thm:Main-2}
Let $X$ be a smooth projective variety of dimension $d$
over a finite field $k$ of characteristic $p$.
Let $D \subset X$ be an effective Cartier divisor
such that $D_{\rm red}$ is regular. Assume that either $d \ne 2$ or $k \neq {\mathbb F}_2$.
Then
\[
{\rm Fil}_D H^1(K) = {\rm Fil}^{\text{\'et}}_D H^1(K).
\]
\end{thm}
\vskip .3cm
\subsection{The reciprocity theorem}\label{sec:R**}
The purpose of reciprocity theorems in class field theory over a perfect
field is to be able to represent the abelianized
{\'e}tale fundamental groups of varieties
over the field in terms of idele class groups which are often described
in terms of explicit sets of generators and relations.
Let us assume that the base field $k$ is finite of characteristic $p$.
In this case, such a reciprocity theorem for smooth projective varieties
over $k$ is due to Kato-Saito \cite{Kato-Saito-83} which describes the abelianized
{\'e}tale fundamental group in terms of the Chow group of 0-cycles.
For the more intricate case of a
smooth quasi-projective variety $U$ which is not complete, an approach
was introduced by Kato-Saito \cite{Kato-Saito} whose underlying idea
is to study the so called `{\'e}tale fundamental group of $X$ with modulus $D$',
where $D \subset X$ is a fixed divisor which is supported on the
complement of $U$ in a normal compactification $U \subset X$.
This group characterizes the finite {\'e}tale coverings of $U$ whose
ramification along $X\setminus U$ is bounded by $D$ in a certain sense.
There are various ways to make sense of this bound on the ramification,
and they give rise to several definitions of the
{\'e}tale fundamental group with modulus.
It turns out that depending on what one wants to do, each of these has
certain advantage over the others.
Kato and Saito were able to describe $\pi^{\rm ab}_1(U)$ in terms of the limit
(over $D$) of the idele class groups with modulus
$H^d_{{\rm nis}}(X, {\mathcal K}^M_{d,(X,D)})$, where $d = \text{\rm dim}(X)$.
In \cite[Theorems~1.1]{Gupta-Krishna-BF}, it was shown that
$H^d_{{\rm nis}}(X, {\mathcal K}^M_{d,(X,D)})$ describes $\pi^{{\rm adiv}}_1(X,D)$ for
every $D$ if we let $\pi^{{\rm adiv}}_1(X,D)$ be the Pontryagin dual to
the Matsuda filtration ${\rm Fil}_D H^1(K)$.
It was also shown in loc. cit. that $\pi^{{\rm adiv}}_1(X,D)$
coincides with the fundamental group
with modulus $\pi^{\rm ab}_1(X,D)$, introduced earlier by Deligne and Laumon
\cite{Laumon} if $X$ is smooth. The latter was shown to coincide (on the degree zero parts)
with the Chow group of 0-cycles with modulus ${\rm CH}_0(X|D)$ by Kerz-Saito
\cite{Kerz-Saito} (when $p \neq 2$) and Binda-Krishna-Saito \cite{BKS}.
If we use Kato's Swan conductor instead of Matsuda's Artin conductor
to bound the ramification in terms of a divisor supported away from $U$, we are led to a
different notion of the abelianized {\'e}tale fundamental group
with modulus which we denote by $\pi^{\rm abk}_1(X,D)$.
This is defined as the Pontryagin dual to the subgroup
${\rm Fil}^{{\rm bk}}_D H^1(K)$. The latter is
the subgroup
of continuous characters of the absolute Galois group of $K$ whose Swan
conductors (defined by Kato \cite{Kato-89})
at the generic points of $D$ are bounded by the multiplicities of $D$ along
the respective components of $D_{\rm red}$.
One can now ask if $\pi^{\rm abk}_1(X,D)$ could be described by
an idele class group with modulus, similar to the $K$-theoretic
idele class group of Kato-Saito and the cycle-theoretic idele class group of
Kerz-Saito. Our next result solves this problem.
Let $\widehat{{\mathcal K}}^M_{r,X}$ be the improved Milnor $K$-theory sheaf of Gabber and
Kerz \cite{Kerz-10}.
Let $\widehat{{\mathcal K}}^M_{r,X|D}$ be the relative Milnor $K$-theory sheaf, defined
locally as the image of the map
${\mathcal K}^M_{1, (X,D)} \otimes_{{\mathbb Z}} j_* {\mathcal O}^{\times}_U \otimes_{{\mathbb Z}} \cdots
\otimes_{{\mathbb Z}} j_* {\mathcal O}^{\times}_U \to \widehat{{\mathcal K}}^M_{r,X}$.
We refer to Lemma~\ref{lem:RS-GK} for the proof that this map is defined.
This sheaf was considered by R{\"u}lling-Saito \cite{Rulling-Saito} when
$X$ is smooth and $D_{\rm red}$ is a simple normal crossing divisor.
There are
degree maps $\deg \colon H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d,X|D}) \to {\mathbb Z}$
(see \S~\ref{sec:Degree}) and
$\deg' \colon \pi^{\rm abk}_1(X,D) \to \widehat{{\mathbb Z}}$. We let
$H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d,X|D})_0 = {\rm Ker}(\deg)$ and
$\pi^{\rm abk}_1(X,D)_0 = {\rm Ker}(\deg')$.
\begin{thm}\label{thm:Main-3}
Let $X$ be a normal projective variety of dimension $d$
over a finite field.
Let $D \subset X$ be an effective Cartier divisor whose complement is regular.
Then there is a continuous reciprocity homomorphism
\[
\rho'_{X|D} \colon H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d,X|D}) \to \pi_1^{\rm abk}(X, D)
\]
with dense image such that the induced map
\[
\rho'_{X|D} \colon H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d,X|D})_0 \xrightarrow{\cong}
\pi_1^{\rm abk}(X, D)_0
\]
is an isomorphism of finite groups.
\end{thm}
\vskip .3cm
Let $H^i_{\mathcal M}(X|D, {\mathbb Z}(j))$ be the motivic cohomology with modulus.
This is defined as the Nisnevich hypercohomology
of the sheafified cycle complex with modulus
$z^j(X|D, 2j- \bullet)$, introduced in \cite{Binda-Saito}.
Using Theorem~\ref{thm:Main-3} and
\cite[Theorem~1]{Rulling-Saito}, we obtain the following.
\begin{cor}\label{cor:Main-4}
Assume in Theorem~\ref{thm:Main-3} that $X$ is regular and
$D_{\rm red}$ is a simple normal crossing
divisor. Then there is an isomorphism of finite groups
\[
cyc'_{X|D} \colon H^{2d}_{\mathcal M}(X|D, {\mathbb Z}(d))_0 \xrightarrow{\cong}
\pi_1^{\rm abk}(X, D)_0.
\]
\end{cor}
This result can be viewed as the cycle-theoretic description of
$\pi_1^{\rm abk}(X, D)$, analogous to the corresponding result for $\pi_1^{{\rm adiv}}(X,D)$ proven in
\cite{Kerz-Saito}, \cite{BKS} and \cite{Gupta-Krishna-BF}.
\vskip .3cm
\subsection{Failure of Nisnevich descent for Chow group with modulus}
\label{sec:Chow-nis}
Recall that the classical higher Chow groups of smooth varieties
satisfy the Nisnevich descent in the sense that the canonical map
${\rm CH}^i(X,j) \to H^{2i-j}_{\mathcal M}(X, {\mathbb Z}(i))$ is an isomorphism for $i, j \ge 0$,
where the latter is the Nisnevich hypercohomology of the sheafified
(in Nisnevich topology) Bloch's cycle complex.
However, this question for the higher Chow groups with modulus is
still an open problem. Some cases of this were verified by
R{\"u}lling-Saito \cite[Theorem~3]{Rulling-Saito}.
As an application of Corollary~\ref{cor:Main-4}, we prove the following
result which provides a counterexample to the Nisnevich descent for the
Chow groups with modulus. This was
one of our motivations for studying the reciprocity for
$\pi_1^{\rm abk}(X, D)$.
\begin{thm}\label{thm:Main-5}
Let $X$ be a smooth projective surface
over a finite field.
Let $D \subset X$ be an effective Cartier divisor such that
$D_{\rm red}$ is a simple normal crossing divisor.
Then the canonical map
\[
{\rm CH}_0(X|D) \to H^{4}_{\mathcal M}(X|D, {\mathbb Z}(2))
\]
is not always an isomorphism.
\end{thm}
Using Theorem~\ref{thm:Main-5}, one can show that the Nisnevich descent for the Chow groups with
modulus fails over infinite fields too.
Recall that the map in Theorem~\ref{thm:Main-5} is known to be an isomorphism if $X$ is a curve.
This is related to the fact that for a Henselian discrete valuation field
$K$ with perfect residue field, the Swan conductor and the Artin conductor
agree for the characters of the absolute Galois group of $K$.
In this context, we also remark that one could attempt to define an analogue
of the Deligne-Laumon fundamental group with modulus $\pi^{\rm ab}_1(X,D)$
by using the Brylinski-Kato filtration along all integral curves in $X$ not
contained in $D$. However, the resulting fundamental group will coincide with
$\pi^{{\rm adiv}}_1(X,D)$ and not yield anything new for the same reason as above.
\vskip .3cm
\subsection{Overview of proofs}\label{sec:Overview}
We give a brief overview of our proofs.
The main idea behind the proof of our duality theorem is the observation that
the naive relative logarithmic Hodge-Witt sheaves are isomorphic to the
naive relative mod-$p$ Milnor $K$-theory sheaves (in the sense of
Kato-Saito) in the pro-setting. The hope that the
{\'e}tale cohomology groups of the relative mod-$p$ Milnor $K$-theory sheaves are the
correct objects to use for duality originated from our results in previous
papers which showed that the Nisnevich cohomology of the
Kato-Saito relative Milnor $K$-theory yields a reciprocity isomorphism
without any condition on the divisor.
To implement the above ideas, we need to prove many results about
the relative logarithmic Hodge-Witt sheaves and their relation with
various versions of relative Milnor $K$-theory. One of these is a
pro-isomorphism between the relative logarithmic Hodge-Witt sheaves
and the twisted de Rham-Witt sheaves. The latter are easier objects
to work with because duality for them follows from the
Grothendieck-Serre coherent duality.
The next step is to construct higher Cartier operators on the
twisted de Rham-Witt sheaves. This allows us to construct the relative
version of the two-term complexes considered by Milne
\cite{Milne-Zeta}.
The proof of the duality theorem is then reduced to coherent duality
using an induction procedure.
To prove Theorem~\ref{thm:Main-2}, we use our duality theorem and the reciprocity
theorem of \cite{Gupta-Krishna-BF} to reduce it to proving an
independent statement that the top Nisnevich cohomology of the
relative Milnor $K$-theory sheaf coincides with the corresponding
{\'e}tale cohomology if the reduced divisor is regular
(see Theorem~\ref{thm:Comparison-3}).
To prove Theorem~\ref{thm:Main-3}, we follow the strategy we used
in the proof of a similar result for $\pi^{{\rm adiv}}_1(X,D)$ in
\cite{Gupta-Krishna-BF}. But there are some new ingredients to be used.
The first key ingredient is a result of Kerz-Zhao
\cite{Kerz-Zhau} which gives an idelic presentation of
$H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d,X|D})$.
The second key result is a theorem of Kato \cite{Kato-89}
which gives a criterion for the
characters of the absolute Galois group of a Henselian discrete valuation
field to annihilate various subgroups of the Milnor $K$-theory of the
field under a suitable pairing (see Theorem~\ref{thm:Fil-main}).
What remains after this is to prove Proposition~\ref{prop:Fil-SH-BK}.
We prove various results about relative Milnor $K$-theory in
sections ~\ref{sec:Milnor-K}, ~\ref{sec:KZ} and ~\ref{sec:Comp**}.
We study the relative logarithmic Hodge-Witt sheaves and prove some
results about their cohomology in \S~\ref{sec:Hodge-Witt}.
We introduce the Cartier operators on the twisted de Rham-Witt sheaves
in \S~\ref{sec:Cartier}. We construct the pairing for the duality theorem
and prove its perfectness in sections ~\ref{sec:DT} and ~\ref{sec:Perfectness}.
The reciprocity theorem is proven in \S~\ref{sec:REC*} and the
failure of Nisnevich descent for the Chow groups with
modulus is shown in \S~\ref{sec:CE}.
\vskip .3cm
\subsection{Notation}\label{sec:Notn}
We shall work over a field $k$ of characteristic $p > 0$ throughout this
paper. We let ${\operatorname{\mathbf{Sch}}}_k$ denote the category of separated and
essentially of finite type schemes over $k$. The product $X \times_{{\rm Spec \,}(k)} Y$
in ${\operatorname{\mathbf{Sch}}}_k$ will be written as $X \times Y$. We let
$X^{(q)}$ (resp. $X_{(q)}$) denote the set of points on $X$ having codimension
(resp. dimension) $q$.
We let ${\operatorname{\mathbf{Sch}}}_{k/{\rm zar}}$ (resp. ${\operatorname{\mathbf{Sch}}}_{k/{\rm nis}}$, resp. ${\operatorname{\mathbf{Sch}}}_{k/{\text{\'et}}})$
denote the Zariski (resp. Nisnevich, resp. {\'e}tale) site of ${\operatorname{\mathbf{Sch}}}_{k}$.
We let $\epsilon \colon
{\operatorname{\mathbf{Sch}}}_{k/{\text{\'et}}} \to {\operatorname{\mathbf{Sch}}}_{k/{\rm nis}}$ denote the canonical morphism of sites.
If ${\mathcal F}$ is a sheaf on ${\operatorname{\mathbf{Sch}}}_{k/{\rm nis}}$,
we shall denote $\epsilon^* {\mathcal F}$ also by ${\mathcal F}$ as long as the usage of
the {\'e}tale topology is clear in a context.
For $X \in {\operatorname{\mathbf{Sch}}}_k$, we shall let $\phi \colon X \to X$ denote the
absolute Frobenius morphism.
For an abelian group $A$, we shall write ${\rm Tor}^1_{{\mathbb Z}}(A, {{\mathbb Z}}/n)$ as
$_nA$ and $A/{nA}$ as $A/n$.
The tensor product $A \otimes_{{\mathbb Z}} B$ will be written as $A \otimes B$.
We shall let $A\{p'\}$ denote the
subgroup of elements of $A$ which are torsion of order prime to $p$.
We let $A\{p\}$ denote the subgroup of elements of $A$ which are torsion of
order some power of $p$.
\section{Relative Milnor $K$-theory}\label{sec:Milnor-K}
In this section, we recall several versions of relative Milnor $K$-theory
sheaves and establish some relations among their cohomology.
For a commutative ring $A$, the Milnor $K$-group $K^M_r(A)$
(as defined by Kato \cite{Kato-86}) is
the $r$-th graded piece of the graded ring $K^M_*(A)$.
The latter is the quotient of the tensor algebra $T_*(A^{\times})$ by the
two-sided graded ideal generated by the homogeneous elements
$a_1 \otimes \cdots \otimes a_r$ such that $r \ge 2$, and $a_i + a_j = 1$ for
some $1 \le i \neq j \le r$.
The residue class of $a_1 \otimes \cdots \otimes a_r \in T_r(A^{\times})$
in $K^M_r(A)$ is denoted by the Milnor symbol $\underline{a} = \{a_1, \ldots , a_r\}$.
Given an ideal $I \subset A$, the relative Milnor $K$-theory $K^M_*(A, I)$ is
defined as the kernel of the restriction map
$K^M_*(A) \to K^M_*(A/I)$.
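For orientation, we record two standard elementary examples; they are included only for the reader's convenience. First, $K^M_1(A) = A^{\times}$, and if $A$ is local and $I \subsetneq A$ is an ideal, then $K^M_1(A,I) = {\rm Ker}(A^{\times} \to (A/I)^{\times}) = 1 + I$. Second, if $F$ is a field, the Steinberg relation forces the symbols to be anti-commutative: for $a \in F^{\times}$ with $a \neq 1$ (the case $a = 1$ being trivial), one has $-a = \frac{1-a}{1-a^{-1}}$, whence
\[
\{a, -a\} = \{a, 1-a\} - \{a, 1-a^{-1}\} = \{a^{-1}, 1-a^{-1}\} = 0,
\]
and therefore $\{a, b\} + \{b, a\} = \{ab, -ab\} - \{a, -a\} - \{b, -b\} = 0$ in $K^M_2(F)$.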
Let $\widetilde{K}^M_r(A)$ denote the $r$-th graded piece of the graded ring
$\widetilde{K}^M_*(A)$, where the latter is the quotient of the tensor algebra
$T_*(A^{\times})$ by the two-sided graded ideal generated by the homogeneous
elements $a_1 \otimes a_2$ such that $a_1 + a_2 = 1$.
We let $\widetilde{K}^M_*(A, I)$ be the kernel of the restriction map
$\widetilde{K}^M_*(A) \to \widetilde{K}^M_*(A/I)$.
It is clear that there is a natural surjection $\widetilde{K}^M_*(A) \twoheadrightarrow
{K}^M_*(A)$. This is an isomorphism if $A$ is a local ring with infinite
residue field (see \cite[Proposition~2]{Kerz-10}). The following says that
a similar thing holds also for the relative $K$-theory.
\begin{lem}\label{lem:BT-KS}
Let $A$ be a local ring and let $I \subset A$ be an
ideal. Then the following hold.
\begin{enumerate}
\item
$K^M_r(A,I)$ and $\widetilde{K}^M_r(A,I)$ are generated by the Milnor symbols
$\{a_1, \ldots , a_r\}$
such that $a_i \in K^M_1(A,I)$ for some $1 \le i \le r$.
\item
The canonical map $\widetilde{K}^M_r(A,I) \to K^M_r(A,I)$ is surjective.
This is an isomorphism if $A$ has infinite residue field.
\end{enumerate}
\end{lem}
\begin{proof}
The assertion (1) for $K^M_r(A,I)$ is \cite[Lemma~1.3.1]{Kato-Saito} and the
proof of (1) for $\widetilde{K}^M_r(A,I)$ is completely identical to that of
$K^M_r(A,I)$.
The second part of (2) follows from the corresponding result in the
non-relative case mentioned above. To prove the first part of (2),
we fix an integer $r \ge 1$.
Let $\underline{a} = \{a_1, \ldots , a_r\} \in \widetilde{K}^M_r(A)$ be such that $a_i \in
\widetilde{K}^M_1(A,I) = K^M_1(A,I)$ for some $1 \le i \le r$. It is then clear that
$\underline{a} \in \widetilde{K}^M_r(A,I)$. Using this observation, our assertion follows
directly from item (1) for $K^M_r(A,I)$.
\end{proof}
For a ring $A$ as above, we let $\widehat{K}^M_*(A)$ denote the improved
Milnor $K$-theory defined independently by Gabber and Kerz \cite{Kerz-10}.
For an ideal $I \subset A$, we define $\widehat{K}^M_*(A,I)$ to be the kernel
of the map $\widehat{K}^M_*(A) \to \widehat{K}^M_*(A/I)$.
Let $K_*(A)$ denote the Quillen $K$-theory of $A$.
We state the following facts as a lemma and refer to \cite{Kerz-10}
for their source.
\begin{lem}\label{lem:Milnor-Kerz}
There are natural maps
\begin{equation}\label{eqn:Milnor-K-0}
K^M_*(A) \xleftarrow{\alpha_A} \widetilde{K}^M_*(A) \xrightarrow{\beta_A}
\widehat{K}^M_*(A) \xrightarrow{\gamma_A} K_*(A),
\end{equation}
where $\alpha_A$ is always surjective and $\beta_A$ is surjective
if $A$ is local. These two maps are isomorphisms if $A$ is either a field
or a local ring with infinite residue field.
\end{lem}
Let $X$ be a Noetherian scheme and $\iota \colon D \hookrightarrow X$ a
closed immersion.
We shall say that $(X,D)$ is a modulus pair if $D$ is an effective
Cartier divisor on $X$. This Cartier divisor may be empty.
If ${\mathcal P}$ is a property of schemes,
we shall say that $(X,D)$ satisfies ${\mathcal P}$ if $X$ does so.
We shall say that $(X,D)$ has dimension $d$ if $X$ has Krull
dimension $d$.
Given a Noetherian scheme $X$ and an integer $r \ge 1$,
let ${\mathcal K}^M_{r,X}$ denote the sheaf on $X_{\rm nis}$
whose stalk at a point $x \in X$ is $K^M_{r}({\mathcal O}^h_{X,x})$. We let
${\mathcal K}^M_{r, (X,D)}$ be the kernel of the restriction map
${\mathcal K}^M_{r,X} \twoheadrightarrow {\mathcal K}^M_{r,D} := \iota_* {\mathcal K}^M_{r,D}$. We shall usually refer to
${\mathcal K}^M_{r, (X,D)}$ as the Kato-Saito relative Milnor $K$-sheaves.
We define the sheaves $\widetilde{{\mathcal K}}^M_{r, (X,D)}$ and $\widehat{{\mathcal K}}^M_{r, (X,D)}$
in an analogous way. We let ${\mathcal K}_{r,X}$ be the Quillen $K$-theory sheaf on
$X_{\rm nis}$.
\begin{lem}\label{lem:Milnor-top-coh}
Let $X$ be a reduced Noetherian
scheme of Krull dimension $d$ and let $D \subset X$
be a nowhere dense closed subscheme. Then the canonical map
\[
H^d_{\rm nis}(X, {\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n) \to
H^d_{\rm nis}(X, {{\mathcal K}^M_{r,(X,D)}}/n)
\]
is an isomorphism for every integer $n \ge 0$.
\end{lem}
\begin{proof}
We look at the commutative diagram
\begin{equation}\label{eqn:Milnor-top-coh-0}
\xymatrix@C.8pc{
\widetilde{{\mathcal K}}^M_{r,(X,D)} \ar@{^{(}->}[r] \ar[d] & \widetilde{{\mathcal K}}^M_{r,X} \ar@{->>}[d] \\
{\mathcal K}^M_{r,(X,D)} \ar@{^{(}->}[r] & {\mathcal K}^M_{r,X}.}
\end{equation}
Lemma~\ref{lem:BT-KS} says that the left vertical arrow is surjective.
We have seen above that the kernel of the right vertical arrow is
supported on a nowhere dense closed subscheme of $X$ (this uses reducedness of $X$).
Hence, the same holds for the left vertical arrow.
But this easily implies the lemma when $n = 0$.
For $n \ge 1$, we use the commutative diagram
\[
\xymatrix@C.8pc{
H^d_{\rm nis}(X, {\widetilde{{\mathcal K}}^M_{r,(X,D)}}) \ar[r]^-{n} \ar[d] &
H^d_{\rm nis}(X, {\widetilde{{\mathcal K}}^M_{r,(X,D)}}) \ar[r] \ar[d] &
H^d_{\rm nis}(X, {\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n) \ar[r] \ar[d] & 0 \\
H^d_{\rm nis}(X, {{\mathcal K}^M_{r,(X,D)}}) \ar[r]^-{n} &
H^d_{\rm nis}(X, {{\mathcal K}^M_{r,(X,D)}}) \ar[r] & H^d_{\rm nis}(X, {{\mathcal K}^M_{r,(X,D)}}/n)
\ar[r] & 0.}
\]
It is easily seen that the two rows are exact. The $n = 0$ case shows that
the left and the middle vertical arrows are isomorphisms.
It follows that the right vertical arrow is also an isomorphism.
\end{proof}
For $D \subset X$ as above and
$n \ge 1$, let
\begin{equation}\label{eqn:Milnor-mod-n}
\overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n}
:= {\widetilde{{\mathcal K}}^M_{r,(X,D)}}/{(\widetilde{{\mathcal K}}^M_{r,(X,D)} \cap n \widetilde{{\mathcal K}}^M_{r,X})} =
{\rm Image}(\widetilde{{\mathcal K}}^M_{r,(X,D)} \to {\widetilde{{\mathcal K}}^M_{r,X}}/n).
\end{equation}
Since the map $n\widetilde{{\mathcal K}}^M_{r,X} \to
n \widetilde{{\mathcal K}}^M_{r,D}$ is surjective, it easily follows that there are
exact sequences
\begin{equation}\label{eqn:Rel-Milnor}
0 \to \overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n} \to
{\widetilde{{\mathcal K}}^M_{r,X}}/n \to {\widetilde{{\mathcal K}}^M_{r,D}}/n \to 0;
\end{equation}
\[
_{n}{\widetilde{{\mathcal K}}^M_{r,D}} \to {\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n \to
\overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n} \to 0.
\]
We let $\overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n}
:= {\widehat{{\mathcal K}}^M_{r,(X,D)}}/{(\widehat{{\mathcal K}}^M_{r,(X,D)} \cap n \widehat{{\mathcal K}}^M_{r,X})}$.
It is then clear that the two exact sequences of ~\eqref{eqn:Rel-Milnor}
hold for the improved Milnor $K$-sheaves as well.
The following is immediate from ~\eqref{eqn:Rel-Milnor}.
\begin{lem}\label{lem:Milnor-d}
Let $X$ be a Noetherian scheme of Krull dimension $d$ and let $D \subset X$
be a nowhere dense closed subscheme. Then the canonical map
\[
H^d_{\rm nis}(X, {\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n) \to
H^d_{\rm nis}(X, \overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n})
\]
is an isomorphism for all $n \ge 1$.
\end{lem}
To proceed further, we need the following result about the
sheaf cohomology. Given a field $k$ and a topology
$\tau$ on ${\rm Spec \,}(k)$, let $cd_\tau(k)$ denote the cohomological
dimension of $k$ for torsion $\tau$-sheaves.
\begin{lem}\label{lem:Sheaf-0}
Let $k$ be a field and $X \in {\operatorname{\mathbf{Sch}}}_k$. Let ${\mathcal F}$ be a torsion sheaf on $X_{\rm nis}$
such that ${\mathcal F}_x = 0$ for every $x \in X_{(q)}$ with $q > 0$. Then
$H^q_{\tau}(X, {\mathcal F}) = 0$ for $q > cd_\tau(k)$ and $\tau \in \{{\rm nis}, {\text{\'et}}\}$.
In particular, given an exact sequence of Nisnevich sheaves
\[
0 \to {\mathcal F} \to {\mathcal F}' \to {\mathcal F}'' \to 0,
\]
the induced map $H^q_{\tau}(X, {\mathcal F}') \to H^q_{\tau}(X, {\mathcal F}'')$ is an isomorphism
for $q > cd_\tau(k)$ and $\tau \in \{{\rm nis}, {\text{\'et}}\}$.
\end{lem}
\begin{proof}
We fix $\tau \in \{{\rm nis}, {\text{\'et}}\}$.
Let $J$ be the set of finite subsets of $X_{(0)}$.
Given any $\alpha \in J$, let $S_\alpha \subset X$ be the
0-dimensional reduced closed subscheme defined by $\alpha$ and let
$U_\alpha = X \setminus S_\alpha$. Let $\iota_\alpha \colon S_\alpha \hookrightarrow X$
and $j_\alpha \colon U_\alpha \hookrightarrow X$ be the inclusions.
As $J$ is cofiltered
with respect to inclusion, there is a cofiltered system of short exact
sequences of $\tau$-sheaves
\begin{equation}\label{eqn:Sheaf-0-0}
0 \to (j_\alpha)_!({\mathcal F}|_{U_\alpha}) \to {\mathcal F} \to (\iota_\alpha)_*({\mathcal F}|_{S_\alpha}) \to 0.
\end{equation}
It is easily seen from our assumption that
${\underset{\alpha \in J}\varinjlim} \
(j_\alpha)_!({\mathcal F}|_{U_\alpha}) = 0$ so that the map
${\mathcal F} \to {\underset{\alpha \in J}\varinjlim} \ (\iota_\alpha)_*({\mathcal F}|_{S_\alpha})$ is an
isomorphism. Since $H^*_\tau(S_\alpha, {\mathcal F}|_{S_\alpha}) \cong
H^*_\tau(X, (\iota_\alpha)_*({\mathcal F}|_{S_\alpha}))$, we are done by
\cite[Proposition~58.89.6]{SP}.
\end{proof}
\begin{lem}\label{lem:Milnor-d-0}
Let $k$ be a field and $(X, D)$ a reduced modulus pair in ${\operatorname{\mathbf{Sch}}}_k$.
Assume that $X$ is of pure dimension $d \ge 1$ and
$\tau \in \{{\rm nis}, {\text{\'et}}\}$. Let $n \ge 1$ be an integer and let
\begin{equation}\label{eqn:Milnor-d-0-0}
H^d_\tau(X, \widetilde{{\mathcal K}}^M_{r,(X,D)}) \to
H^d_\tau(X, \widehat{{\mathcal K}}^M_{r,(X,D)});
\end{equation}
\begin{equation}\label{eqn:Milnor-d-0-0-*}
H^d_\tau(X, \overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n}) \to
H^d_\tau(X, \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n})
\end{equation}
be the canonical maps. Then the following hold.
\begin{enumerate}
\item
If $k$ is infinite, then both maps are isomorphisms.
\item
If $d = 1$ and $r \le 1$,
then both maps are isomorphisms.
\item
If $d \ge 2$ and $\tau = {\rm nis}$, then both maps are isomorphisms.
\item
If $k$ is finite, $d \ge 3$ and $\tau = {\text{\'et}}$, then
~\eqref{eqn:Milnor-d-0-0-*} is an isomorphism.
\item
If $k \neq {\mathbb F}_2$ is finite, $d = 2, r \le 2$ and
$\tau = {\text{\'et}}$, then ~\eqref{eqn:Milnor-d-0-0-*} is an isomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
The items (1) and (2) are clear from Lemma~\ref{lem:Milnor-Kerz}.
To prove (3) and (4), we let $n \ge 1$ and consider
the commutative diagram
\[
\xymatrix@C.3pc{
H^{d-1}_\tau(X, {\widetilde{{\mathcal K}}^M_{r,X}}/n) \ar[r] \ar[d] &
H^{d-1}_\tau(D, {\widetilde{{\mathcal K}}^M_{r,D}}/n) \ar[r] \ar[d] &
H^d_\tau(X, \overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n}) \ar[r] \ar[d] &
H^{d}_\tau(X, {\widetilde{{\mathcal K}}^M_{r,X}}/n) \ar[r] \ar[d] &
H^{d}_\tau(D, {\widetilde{{\mathcal K}}^M_{r,D}}/n) \ar[d] \\
H^{d-1}_\tau(X, {\widehat{{\mathcal K}}^M_{r,X}}/n) \ar[r] &
H^{d-1}_\tau(D, {\widehat{{\mathcal K}}^M_{r,D}}/n) \ar[r] &
H^d_\tau(X, \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n}) \ar[r] &
H^{d}_\tau(X, {\widehat{{\mathcal K}}^M_{r,X}}/n) \ar[r] &
H^{d}_\tau(D, {\widehat{{\mathcal K}}^M_{r,D}}/n)}
\]
with $d \ge 2$.
The two rows are exact by ~\eqref{eqn:Rel-Milnor}. It suffices to show that all vertical
arrows except possibly the middle one are isomorphisms.
But this follows easily from Lemmas~\ref{lem:Milnor-Kerz} and
~\ref{lem:Sheaf-0} if we observe that $cd_{\rm nis}(k) = 0$ for $k$ arbitrary and
$cd_{\text{\'et}}(k) = 1$ for $k$ finite (see \cite[Expos{\'e}~X]{SGA4}).
The proof of ~\eqref{eqn:Milnor-d-0-0} for $\tau = {\rm nis}$
is identical if we observe that Lemma~\ref{lem:Sheaf-0}
holds even if the sheaf ${\mathcal F}$ is not torsion when $\tau = {\rm nis}$.
We now prove (5). We only consider the case $r = 2$ as the other cases
are trivial.
We look at the commutative diagram of Nisnevich (or {\'e}tale) sheaves
\begin{equation}\label{eqn:Milnor-d-0-1}
\xymatrix@C.8pc{
0 \ar[r] & \overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n} \ar[r] \ar[d] &
{\widetilde{{\mathcal K}}^M_{r,X}}/n \ar[r] \ar[d]^-{\alpha_X} &
{\widetilde{{\mathcal K}}^M_{r,D}}/n \ar[r] \ar[d]^-{\alpha_D} & 0 \\
0 \ar[r] & \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n} \ar[r] &
{\widehat{{\mathcal K}}^M_{r,X}}/n \ar[r] & {\widehat{{\mathcal K}}^M_{r,D}}/n \ar[r] & 0.}
\end{equation}
We now recall from \cite[Proposition~10(6)]{Kerz-10} that for a
local ring $A$ containing $k$, the map $\widehat{K}^M_2(A) \to K_2(A)$
is an isomorphism. Using this, it follows from \cite[Corollary~3.3]{Kolster}
that $\widehat{K}^M_2(A)$ is generated by the Milnor symbols $\{a, b\}$ subject
to bilinearity, Steinberg relation, and the relation $\{a, b\} = -\{b, a\}$.
Since ${\rm Ker}(\widetilde{K}^M_2(A) \twoheadrightarrow \widehat{K}^M_2(A))$ surjects onto
${\rm Ker}({\widetilde{K}^M_2(A)}/n \twoheadrightarrow {\widehat{K}^M_2(A)}/n)$, it follows that
the latter is generated by the images of the symbols of the type
$\{a, b\} + \{b, a\}$. Since $A^{\times} \to (A/I)^{\times}$ is surjective for
any ideal $I \subset A$, we conclude that the map
\begin{equation}\label{eqn:Milnor-d-0-2}
{\rm Ker}({\widetilde{K}^M_2(A)}/n \twoheadrightarrow {\widehat{K}^M_2(A)}/n) \to
{\rm Ker}({\widetilde{K}^M_2(A/I)}/n \twoheadrightarrow {\widehat{K}^M_2(A/I)}/n)
\end{equation}
is surjective.
We have therefore shown that ${\rm Ker}(\alpha_X) \twoheadrightarrow {\rm Ker}(\alpha_D)$ in
~\eqref{eqn:Milnor-d-0-1}. Subsequently, a diagram chase shows that
\begin{equation}\label{eqn:Milnor-d-0-3}
\overline{{\widetilde{{\mathcal K}}^M_{r,(X,D)}}/n} \to \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n}
\end{equation}
is a surjective map of Nisnevich (or {\'e}tale) sheaves.
We let ${\mathcal F}$ denote the kernel of this map.
A diagram similar to ~\eqref{eqn:Milnor-top-coh-0}
(with ${\mathcal K}^M_{r,X}$ replaced by $\widehat{{\mathcal K}}^M_{r,X}$) and
Lemma~\ref{lem:Milnor-Kerz} together show that
${\mathcal F}_x = 0$ for every $x \in X_{(q)}$ with $q > 0$.
We now apply Lemma~\ref{lem:Sheaf-0} to finish the proof.
\end{proof}
Combining Lemmas~\ref{lem:Milnor-top-coh}, ~\ref{lem:Milnor-d} and
~\ref{lem:Milnor-d-0}, we get the following key result that we shall use.
\begin{prop}\label{prop:Milnor-iso}
Let $k$ be a field and $(X, D)$ a reduced modulus pair in ${\operatorname{\mathbf{Sch}}}_k$.
Assume that $X$ is of pure dimension $d \ge 2$. Then there are
natural isomorphisms
\[
H^d_{\rm nis}(X, {\mathcal K}^M_{r,(X,D)}) \xrightarrow{\cong}
H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{r,(X,D)});
\]
\[
H^d_{\rm nis}(X, {{\mathcal K}^M_{r,(X,D)}}/n) \xrightarrow{\cong}
H^d_{\rm nis}(X, \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/n})
\]
for every integer $n \ge 1$. If $d = 1$, these isomorphisms hold if
either $k$ is infinite or $r \le 1$.
\end{prop}
\vskip .3cm
\section{The idele class group of Kerz-Zhao}\label{sec:KZ}
Let $A$ be a local domain with fraction field $F$ and let $I = (f)$ be a
principal ideal, where $f \in A$ is a non-zero element.
Let $A_f$ denote the localization $A[f^{-1}]$ obtained by
inverting the powers of $f$ so that there are inclusions of rings $A \hookrightarrow A_f \hookrightarrow F$.
We let $\widehat{K}^M_1(A|I) = K^M_1(A,I)$ and for $r \ge 2$, we let
$\widehat{K}^M_r(A|I)$ denote the image of the canonical map
of abelian groups
\begin{equation}\label{eqn:RS-0}
K^M_1(A,I) \otimes (A_f)^{\times} \otimes \cdots \otimes (A_f)^{\times} \to
K^M_r(F),
\end{equation}
induced by the product in Milnor $K$-theory, where the tensor product is
taken $r$ times.
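In other words, since $K^M_1(A,I)$ is the group $1 + I$ of relative units, the group
$\widehat{K}^M_r(A|I)$ is generated, as a subgroup of $K^M_r(F)$, by the Milnor symbols
$\{1 + af, u_2, \ldots , u_r\}$ with $a \in A$ and $u_i \in (A_f)^{\times}$.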
These groups coincide with the relative Milnor $K$-groups of R\"ulling-Saito
(see \cite[Definition~2.4 and Lemma~2.1]{Rulling-Saito}) when
$A$ is regular and
${\rm Spec \,}(A/f)_{\rm red}$ is a normal crossing divisor on ${\rm Spec \,}(A)$.
We shall denote the associated sheaf on an integral scheme $X$ with
an effective Cartier divisor $D$ by $\widehat{{\mathcal K}}^M_{r, X|D}$.
We let $E_r(A,I)$ be the image of the canonical map
${K}^M_r(A,I) \to K^M_r(F)$. This is the same as the image of the map
$\widetilde{K}^M_r(A,I) \to K^M_r(F)$ by Lemma~\ref{lem:BT-KS}.
The following result will be important for us.
\begin{lem}\label{lem:RS-GK}
We have the following.
\begin{enumerate}
\item
The composite map
\[
\widehat{K}^M_r(A|I) \to K^M_r(F) \xrightarrow{\partial}
{\underset{{\rm ht} ({\mathfrak p}) = 1}\oplus} \ K^M_{r-1}(k({\mathfrak p}))
\]
is zero. In particular, $\widehat{K}^M_r(A|I) \subset \widehat{K}^M_r(A)$ if
$A$ is regular and equicharacteristic or a Henselian discrete valuation ring.
\item
There is an inclusion $E_r(A,I) \subseteq \widehat{K}^M_r(A|I)$
which is an equality if $I$ is a prime ideal.
\end{enumerate}
\end{lem}
\begin{proof}
We let ${\mathfrak p} \subset A$ be a prime ideal of height one such that $f \in {\mathfrak p}$.
To prove (1), it suffices to show that the composite
$\widehat{K}^M_r({A_{\mathfrak p}}|{I_{\mathfrak p}}) \to K^M_r(F) \xrightarrow{\partial}
K^M_{r-1}(k({\mathfrak p}))$
is zero, where $\partial$ is the boundary map defined in
\cite[\S~1]{Kato-86}. We can therefore assume that $\text{\rm dim}(A) = 1$ and
$f \notin A^\times$. By the definition of $\partial$, we can assume that
$A$ is normal. But one easily checks in this case that
$\widehat{K}^M_r(A|I) \subset \widehat{K}^M_r(A)$
(see \cite[Proposition~2.8]{Rulling-Saito})
and $\partial(\widehat{K}^M_r(A)) = 0$ (see \cite[Proposition~2.7]{Dahlhausen}).
The second part of (1) follows from \cite[Proposition~10]{Kerz-10} and
\cite[Theorem~5.1]{Luders}.
We now prove (2). One checks using Lemma~\ref{lem:BT-KS} that
$E_r(A,I) \subseteq \widehat{K}^M_r(A|I)$. Suppose now that $I$ is prime
and consider an element
$\alpha = \{1 + af, u_2, \ldots , u_{r}\}$, where
$a \in A$ and $u_i \in (A_f)^\times$ for $i \ge 2$.
Since $I$ is prime, it is easy to check that $u_i = a_if^{n_i}$ for
$a_i \in A^{\times}$ and $n_i \ge 0$. Using the bilinearity and the Steinberg
relations, it easily follows from \cite[Lemma~1]{Kato-86} that
$\alpha \in E_r(A,I)$.
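For instance, when $r = 2$ and $u_2 = a_2f^{n_2}$ with $a_2 \in A^{\times}$, bilinearity gives
\[
\{1 + af, \ a_2f^{n_2}\} = \{1 + af, \ a_2\} + n_2\{1 + af, \ f\}.
\]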
\end{proof}
Suppose that $k$ is any field and $X \in {\operatorname{\mathbf{Sch}}}_k$ is regular.
Let $D \subset X$ be an effective Cartier divisor.
It follows from Lemma~\ref{lem:RS-GK} that there are
inclusions $\widehat{{\mathcal K}}^M_{r, X|nD} \subset \widehat{{\mathcal K}}^M_{r, X} \supset
\widehat{{\mathcal K}}^M_{r, (X,nD)}$ for $n \ge 1$.
We let $\overline{{\widehat{{\mathcal K}}^M_{r,X|D}}/m} =
{\widehat{{\mathcal K}}^M_{r, X|D}}/{(\widehat{{\mathcal K}}^M_{r, X|D} \cap m \widehat{{\mathcal K}}^M_{r,X})}$.
\begin{prop}\label{prop:Rel-RS}
Let $k$ be a field and $X \in {\operatorname{\mathbf{Sch}}}_k$ a
regular scheme of dimension $d \ge 1$.
Let $D \subset X$ be a simple normal crossing divisor.
Let $\tau \in \{{\rm nis}, {\text{\'et}}l\}$ and $m \ge 1$.
Then the conclusions (1) $\sim$ (5) of Lemma~\ref{lem:Milnor-d-0}
hold for the maps
\[
\{H^d_\tau(X, \widetilde{{\mathcal K}}^M_{r,(X,nD)})\}_n \to
\{H^d_\tau(X, \widehat{{\mathcal K}}^M_{r,X|nD})\}_n;
\]
\[
\{H^d_\tau(X, \overline{{\widetilde{{\mathcal K}}^M_{r,(X,nD)}}/m})\}_n \to
\{H^d_\tau(X, \overline{{\widehat{{\mathcal K}}^M_{r,X|nD}}/m})\}_n
\]
of pro-abelian groups.
\end{prop}
\begin{proof}
We let ${\mathcal E}_{r, (X,nD)} = {\rm Image}(\widetilde{{\mathcal K}}^M_{r,(X,nD)} \to
\widehat{{\mathcal K}}^M_{r,X})$ and $\overline{{{\mathcal E}_{r, (X,nD)}}/m} =
{\rm Image}(\widetilde{{\mathcal K}}^M_{r,(X,nD)} \to {\widehat{{\mathcal K}}^M_{r,X}}/m)$.
It is easy to check using \cite[Proposition~2.8]{Rulling-Saito} that
the inclusions $\{{\mathcal E}_{r, (X,nD)}\}_n \hookrightarrow \{\widehat{{\mathcal K}}^M_{r,X|nD}\}_n$ and
$\{\overline{{{\mathcal E}_{r, (X,nD)}}/m}\}_n \hookrightarrow \{\overline{{\widehat{{\mathcal K}}^M_{r,X|nD}}/m}\}_n$
are pro-isomorphisms. The rest of the proof is now completely identical to that
of Lemma~\ref{lem:Milnor-d-0}.
\end{proof}
Let us now assume that $k$ is any field
and $X \in {\operatorname{\mathbf{Sch}}}_k$ is an integral scheme of dimension $d \ge 1$.
We shall endow $X$ with its canonical dimension function $d_X$ (see \cite{Kerz-11}).
Assume that $D \subset X$ is an effective Cartier divisor.
Let $F$ denote the function field of $X$ and $U = X \setminus D$.
We refer the reader to \cite[\S~1]{Kato-Saito} or
\cite[\S~2]{Gupta-Krishna-REC} for the definition
and all properties of Parshin chains and their Milnor $K$-groups that
we shall need.
We let ${\mathcal P}_{U/X}$ denote the set of Parshin chains on the pair $(U \subset X)$
and let ${\mathcal P}^{{\operatorname{\rm max}}}_{U/X}$ be the subset of ${\mathcal P}_{U/X}$ consisting of
maximal Parshin chains.
If $P$ is a Parshin chain on $X$ of dimension $d_X(P)$, then we shall consider
$K^M_{d_X(P)}(k(P))$ as a topological abelian group with its canonical
topology if $P$ is maximal (see \cite[\S~2,5]{Gupta-Krishna-REC}).
Otherwise, we shall consider
$K^M_{d_X(P)}(k(P))$ as a topological abelian group with its discrete topology.
If $P = (p_0, \ldots , p_d)$ is a maximal Parshin chain on $X$, we shall let $P' =
(p_0, \ldots , p_{d-1})$.
Recall from \cite[Definition~3.1]{Gupta-Krishna-REC} that the idele
group of $(U \subset X)$ is defined as
\begin{equation}\label{eqn:Idele-U-0}
I_{U/X} = {\underset{P \in {\mathcal P}_{U/X}}\bigoplus} K^M_{d_X(P)}(k(P)).
\end{equation}
We consider $I_{U/X}$ as a topological group with its direct sum topology.
We let
\begin{equation}\label{eqn:Idele-D}
I(X|D) = {\rm Coker}\left({\underset{P \in {\mathcal P}^{{\operatorname{\rm max}}}_{U/X}}\bigoplus}
\widehat{K}^M_{d_X(P)}({\mathcal O}^h_{X, P'}|I_D) \to I_{U/X}\right),
\end{equation}
where $I_D$ is the extension of the ideal sheaf ${\mathcal I}_D \subset {\mathcal O}_X$
to ${\mathcal O}^h_{X,P'}$ and the map on the right is induced by the composition
$\widehat{K}^M_{d_X(P)}({\mathcal O}^h_{X, P'}|I_D) \hookrightarrow K^M_{d_X(P)}(k(P)) \to I_{U/X}$ for
$P \in {\mathcal P}^{{\operatorname{\rm max}}}_{U/X}$.
We consider $I(X|D)$ as a topological group with its quotient topology.
Recall from \cite[\S~3.1]{Gupta-Krishna-REC} that $I(X,D)$ is
defined analogously to $I(X|D)$, by replacing
$\widehat{K}^M_{d_X(P)}({\mathcal O}^h_{X, P'}|I_D)$ with ${K}^M_{d_X(P)}({\mathcal O}^h_{X, P'}, I_D)$.
We let ${\mathcal Q}_{U/X}$ denote the set of all $Q$-chains on $(U \subset X)$.
Recall that the idele class group of the pair $(U \subset X)$
is the topological abelian group
\begin{equation}\label{eqn:IC-0}
C_{U/X} = {\rm Coker}\left({\underset{Q \in {\mathcal Q}_{U/X}}\bigoplus}
K^M_{d_X(Q)}(k(Q)) \xrightarrow{\partial_{U/X}} I_{U/X}\right)
\end{equation}
with its quotient topology.
We let
\begin{equation}\label{eqn:IC-1}
C(X|D) = {\rm Coker}\left({\underset{Q \in {\mathcal Q}_{U/X}}\bigoplus}
K^M_{d_X(Q)}(k(Q)) \to I(X|D)\right).
\end{equation}
This is a topological group with its quotient topology.
This idele class group was introduced by Kerz-Zhao in
\cite[Definition~2.1.4]{Kerz-Zhau}. We let $C(X,D)$ be the cokernel of the
map ${\underset{Q \in {\mathcal Q}_{U/X}}\oplus}
K^M_{d_X(Q)}(k(Q)) \to I(X,D)$.
One checks using part (2) of Lemma~\ref{lem:RS-GK} that there is a commutative diagram
\begin{equation}\label{eqn:IC-2}
\xymatrix@C.8pc{
I_{U/X} \ar[r] \ar[d] & I(X,D) \ar[r] \ar[d] & I(X|D) \ar[d] \\
C_{U/X} \ar[r] & C(X,D) \ar[r] & C(X|D).}
\end{equation}
If $\text{\rm dim}(X) = d$, we let $C_{KS}(X|D) = H^d_{\rm nis}(X, \widehat{{\mathcal K}}^M_{d, X|D})$
and $C^{{\text{\'et}}l}_{KS}(X|D) = H^d_{\text{\'et}}l(X, \widehat{{\mathcal K}}^M_{d, X|D})$.
We let $C_{KS}(X,D) = H^d_{\rm nis}(X, {{\mathcal K}}^M_{d, (X,D)})$
and $C^{{\text{\'et}}l}_{KS}(X,D) = H^d_{\text{\'et}}l(X, {{\mathcal K}}^M_{d, (X,D)})$.
\begin{lem}\label{lem:KS-GK-iso}
The canonical map $C_{KS}(X,D) \to C_{KS}(X|D)$ is surjective.
It is an isomorphism if $D \subset X$ is a reduced Cartier divisor.
\end{lem}
\begin{proof}
It is clear that the stalks of the kernel and
cokernel of the
map ${\mathcal K}^M_{r,(X,D)} \to \widehat{{\mathcal K}}^M_{r,X|D}$ vanish at the generic point of $X$.
It follows moreover from Lemma~\ref{lem:RS-GK} that these stalks vanish at all
points of $X$ with dimension $d-1$ or more if $D$ is reduced.
A variant of Lemma~\ref{lem:Sheaf-0} therefore implies the desired result.
\end{proof}
Let ${\mathcal Z}_0(U)$ be the free abelian group on $U_{(0)}$.
It is clear from the definitions of $C_{KS}(X|D)$ and $C(X|D)$ that there
is a diagram
\begin{equation}\label{eqn:idele}
\xymatrix@C.8pc{
{\mathcal Z}_0(U) \ar[dr] \ar[d] & \\
C(X|D) \ar@{.>}[r] & C_{KS}(X|D),}
\end{equation}
where the left vertical arrow is induced by the inclusion of the Milnor
$K$-groups of the length zero Parshin chains
on $(U \subset X)$ into $I_{U/X}$, and the diagonal arrow is induced by
the coniveau spectral sequence for $C_{KS}(X|D)$.
The following is \cite[Theorem~3.1.1]{Kerz-Zhau} whose proof is
obtained by simply repeating the proof of \cite[Theorem~8.2]{Kerz-11}
mutatis mutandis (see \cite[Theorem~3.8]{Gupta-Krishna-REC}).
\begin{thm}\label{thm:KZ-main}
Assume that $U$ is regular. Then one has an isomorphism
\[
\phi_{X|D} \colon C(X|D) \xrightarrow{\cong} C_{KS}(X|D)
\]
such that ~\eqref{eqn:idele} is commutative.
\end{thm}
\subsection{Degree map for $C_{KS}(X|D)$}\label{sec:Degree}
Let $k$ be any field and $X \in {\operatorname{\mathbf{Sch}}}_k$ an integral scheme of dimension $d \ge 1$.
Assume that $D \subset X$ is an effective Cartier divisor.
Let ${\mathbb C}H^F_r(X)$ denote the Chow group of $r$-dimensional cycles
on $X$ as in \cite[Chapter~1]{Fulton}.
It follows from Lemma~\ref{lem:RS-GK} and \cite[Proposition~1]{Kato-86} that
\begin{equation}\label{eqn:KS-Chow-0}
\widehat{{\mathcal K}}^M_{r,X|D} \to {\underset{x \in X^{(0)}}\oplus} (\iota_x)_*(K^M_{r}(k(x)))
\to {\underset{x \in X^{(1)}}\oplus} (\iota_x)_*(K^M_{r-1}(k(x))) \to
\cdots
\hspace*{2cm}
\end{equation}
\[
\hspace*{3.8cm} \cdots
\to {\underset{x \in X^{(d-1)}}\oplus} (\iota_x)_*(K^M_{r-d+1}(k(x)))
\xrightarrow{\partial}
{\underset{x \in X^{(d)}}\oplus} (\iota_x)_*(K^M_{r-d}(k(x))) \to 0
\]
is a complex of Nisnevich sheaves on $X$.
An elementary cohomological argument (see \cite[\S~2.1]{Gupta-Krishna-BF})
shows that by taking the cohomology of
the sheaves in this complex, one gets a canonical
homomorphism
\begin{equation} \label{eqn:RS-Chow*-1}
\nu_{X|D} \colon C_{KS}(X|D) \to {\mathbb C}H^{F}_0(X).
\end{equation}
If $X$ is projective over $k$, there is a degree map
$\deg \colon {\mathbb C}H^{F}_0(X) \to {\mathbb Z}$. We let $\deg \colon C_{KS}(X|D) \to {\mathbb Z}$
be the composition $C_{KS}(X|D) \xrightarrow{\nu_{X|D}} {\mathbb C}H^{F}_0(X)
\xrightarrow{\deg} {\mathbb Z}$. We let
$C_{KS}(X|D)_0 = {\rm Ker}(\deg)$.
We let $C_{KS}(X,D)_0$ be the kernel of the composite map
$C_{KS}(X,D) \twoheadrightarrow C_{KS}(X|D) \xrightarrow{\deg} {\mathbb Z}$.
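Here, recall that the degree of a zero-cycle on the projective scheme $X$ is given by
\[
\deg\left(\sum_{x \in X_{(0)}} n_x[x]\right) = \sum_{x \in X_{(0)}} n_x \, [k(x) : k].
\]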
\section{Comparison theorem for cohomology of
$K$-sheaves}\label{sec:Comp**}
Let $k$ be a perfect field of characteristic $p > 0$
and $X \in {\operatorname{\mathbf{Sch}}}_k$ an integral regular scheme of dimension $d$.
Let $D \subset X$ be an effective Cartier divisor.
Let $U = X \setminus D$ and $F = k(X)$.
Let $m, r \ge 1$ be integers.
A combination of \cite[Theorem~1.1.5]{JSZ} and
\cite[Theorems~2.2.2, 3.3.1]{Kerz-Zhau} implies the following.
\begin{thm}\label{thm:RS-comp}
Assume that $k$ is finite and $D_{\rm red}$ is a simple normal crossing
divisor. Then the canonical map
\[
H^d_{\rm nis}(X, \overline{{\widehat{{\mathcal K}}^M_{d, X|D}}/{p^m}}) \to
H^d_{\text{\'et}}l(X, \overline{{\widehat{{\mathcal K}}^M_{d, X|D}}/{p^m}})
\]
is an isomorphism.
\end{thm}
\begin{cor}\label{cor:RS-K-comp}
Under the assumptions of Theorem~\ref{thm:RS-comp}, the canonical map
\[
\{H^d_{\rm nis}(X, \overline{{\widehat{{\mathcal K}}^M_{d, (X,nD)}}/{p^m}})\}_n \to
\{H^d_{\text{\'et}}l(X, \overline{{\widehat{{\mathcal K}}^M_{d, (X,nD)}}/{p^m}})\}_n
\]
is an isomorphism if either $k \neq {\mathbb F}_2$ or $d \neq 2$.
\end{cor}
\begin{proof}
Combine Lemma~\ref{lem:Milnor-d-0}, Proposition~\ref{prop:Rel-RS} and
Theorem~\ref{thm:RS-comp}.
\end{proof}
Our goal in this section is to prove an analogue of
Theorem~\ref{thm:RS-comp} for the
cohomology of ${\mathcal K}^M_{d, (X,D)}$ when $D_{\rm red}$ is regular. This will be used in the proof of
Theorem~\ref{thm:Main-2}.
\subsection{Filtrations of Milnor $K$-theory}
\label{sec:Filt*}
Let $A$ be a regular local ring essentially of finite type over $k$
and $I = (f)$ a principal ideal, where $f \in A$ is a non-zero
element such that $R = {A}/{I}$ is regular.
Let $A_f$ denote the localization $A[f^{-1}]$ obtained by
inverting the powers of $f$. Let $F$ be the quotient field of
$A$ so that there are inclusions of rings $A \hookrightarrow A_f \hookrightarrow F$.
For an integer $n \ge 1$, let $I^n = (f^n)$.
We let $E_r(A,I^n)$ be the image of the canonical map
$\widetilde{K}^M_r(A,I^n) \to \widehat{K}^M_r(A) \subset K^M_r(F)$.
One easily checks that
there is a commutative diagram (see ~\eqref{eqn:Milnor-mod-n})
\begin{equation}\label{eqn:Rel-filtn-0}
\xymatrix@C.8pc{
\widehat{K}^M_r(A,I^n) \ar@{->>}[r]
& \overline{{\widehat{K}^M_r(A,I^n)}/{p^m}} \ar[rr]^-{\cong} & &
\frac{\widehat{K}^M_r(A,I^n) + p^m\widehat{K}^M_r(A)}{p^m\widehat{K}^M_r(A)}
\ar@{^{(}->}[dd] & \\
\widetilde{K}^M_r(A,I^n) \ar@{->>}[r] \ar[u] \ar[d] &
\overline{{\widetilde{K}^M_r(A,I^n)}/{p^m}} \ar@{->>}[r] \ar[u] \ar[d] &
\frac{E_r(A,I^n) + p^m\widehat{K}^M_r(A)}{p^m\widehat{K}^M_r(A)} \ar@{^{(}->}[ur]
\ar@{^{(}->}[d] & \\
\widehat{K}^M_r(A|I^n) \ar@{->>}[r] & \overline{{\widehat{K}^M_r(A|I^n)}/{p^m}}
\ar[r]^-{\cong} &
\frac{\widehat{K}^M_r(A|I^n) + p^m\widehat{K}^M_r(A)}{p^m\widehat{K}^M_r(A)} \ar@{^{(}->}[r] &
\frac{\widehat{K}^M_r(A)}{p^m\widehat{K}^M_r(A)},}
\end{equation}
where the existence of the left-most bottom vertical arrow and
the right-most bottom horizontal inclusion follows from
\cite[Proposition~2.8]{Rulling-Saito} (see also
\cite[Lemma~3.8]{Gupta-Krishna-p}).
We consider the filtrations $F^\bullet_{m,r}(A)$ and
$G^\bullet_{m,r}(A)$ of ${\widehat{K}^M_r(A)}/{p^m}$ by letting
$F^n_{m,r}(A) = \frac{E_r(A,I^n) + p^m\widehat{K}^M_r(A)}{ p^m\widehat{K}^M_r(A)}$ and
$G^n_{m,r}(A) = \frac{\widehat{K}^M_r(A|I^n) + p^m\widehat{K}^M_r(A)}{p^m\widehat{K}^M_r(A)}$.
We then have a commutative diagram of filtrations
\begin{equation}\label{eqn:Rel-filtn-1}
\xymatrix@C.8pc{
E_r(A, I^\bullet) \ar@{^{(}->}[r] \ar@{->>}[d] &
\widehat{K}^M_r(A|I^\bullet) \ar@{^{(}->}[r] \ar@{->>}[d] &
\widehat{K}^M_r(A) \ar@{->>}[d] \\
F^\bullet_{m,r}(A) \ar@{^{(}->}[r] & G^\bullet_{m,r}(A) \ar@{^{(}->}[r] &
{\widehat{K}^M_r(A)}/{p^m}.}
\end{equation}
\begin{lem}\label{lem:Rel-filtn-2}
One has $\widehat{K}^M_r(A|I^{n+1}) \subseteq E_r(A, I^n)$ and
$G^{n+1}_{m,r}(A) \subseteq F^{n}_{m,r}(A)$. Furthermore,
$E_r(A, I) = \widehat{K}^M_r(A|I)$ and $F^{1}_{m,r}(A) = G^{1}_{m,r}(A)$.
\end{lem}
\begin{proof}
This is immediate from \cite[Proposition~2.8]{Rulling-Saito}.
\end{proof}
We let $\overline{\widetilde{K}^M_r(A_f)}$ be the image of the canonical map
$\widetilde{K}^M_r(A_f) \to K^M_r(F)$. We let
$H^n_{m,r}(A) = \frac{\widehat{K}^M_r(A|I^n) +
p^m\overline{\widetilde{K}^M_r(A_f)}}{p^m\overline{\widetilde{K}^M_r(A_f)}}$.
Note that
the canonical map $\widetilde{K}^M_r(A) \twoheadrightarrow \widehat{K}^M_r(A)$ is surjective.
We therefore have a surjective map $G^n_{m,r}(A) \twoheadrightarrow H^n_{m,r}(A)$.
\begin{lem}\label{lem:BK-RS}
The canonical map
\[
\frac{G^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \to \frac{H^n_{m,r}(A)}{H^{n+1}_{m,r}(A)}
\]
is an isomorphism.
\end{lem}
\begin{proof}
The map in question is clearly surjective since both of its sides are quotients
of $\widehat{K}^M_r(A|I^n)$. So we only need to prove injectivity.
Using Lemma~\ref{lem:Rel-filtn-2} and an easy reduction, one can see that
it suffices to show that the map
\[
\frac{\overline{\widetilde{K}^M_r(A_f)}}{\widehat{K}^M_r(A)} \xrightarrow{p^m}
\frac{\overline{\widetilde{K}^M_r(A_f)}}{\widehat{K}^M_r(A)}
\]
is injective.
Since $\overline{\widetilde{K}^M_r(A_f)}$ has no $p$-torsion, a snake lemma argument
shows that the previous injectivity is equivalent to the injectivity of the
map ${\widehat{K}^M_r(A)}/{p^m} \to {\overline{\widetilde{K}^M_r(A_f)}}/{p^m}$.
We now look at the composition
${\widehat{K}^M_r(A)}/{p^m} \to {\overline{\widetilde{K}^M_r(A_f)}}/{p^m} \to
{K^M_r(F)}/{p^m}$. It suffices to show that this composition is injective.
But this is an easy consequence of \cite[Proposition~10(8)]{Kerz-10}
and \cite[Theorem~8.1]{Geisser-Levine}.
\end{proof}
\vskip .3cm
We now consider the map
\begin{equation}\label{eqn:Rel-filtn-3}
\rho \colon \Omega^{r-1}_R \oplus \Omega^{r-2}_R \to
{\widehat{K}^M_r(A|I^n)}/{\widehat{K}^M_r(A|I^{n+1})}
\end{equation}
having the property that
\[
\rho(a{\rm dlog} b_1 \wedge \cdots \wedge {\rm dlog} b_{r-1},0)
= \{1 + \widetilde{a}f^n, \widetilde{b}_1, \ldots , \widetilde{b}_{r-1}\};
\]
\[
\rho(0, a{\rm dlog} b_1 \wedge \cdots \wedge {\rm dlog} b_{r-2}) =
\{1 + \widetilde{a}f^n, \widetilde{b}_1, \ldots , \widetilde{b}_{r-2}, f\},
\]
where $\widetilde{a}$ and $\widetilde{b}_i$'s are arbitrary lifts of $a$ and $b_i$'s,
respectively, under the surjection $A \twoheadrightarrow R$.
It follows from \cite[\S4, p.~122]{Bloch-Kato} that this map is well defined.
It is clear from ~\eqref{eqn:Rel-filtn-1} and Lemma~\ref{lem:Rel-filtn-2}
that $\rho$ restricts to a map
\begin{equation}\label{eqn:Rel-filtn-4}
\rho \colon \Omega^{r-1}_R \to
{E_r(A, I^n)}/{\widehat{K}^M_r(A|I^{n+1})}.
\end{equation}
This map is surjective by Lemma~\ref{lem:BT-KS}.
Let $\psi \colon R \to R$ be the absolute Frobenius morphism.
Let $Z_i\Omega^{r-1}_R$ be the unique $R$-submodule of $\psi^i_* \Omega^{r-1}_R$
such that the inverse Cartier operator
(see \cite[Chap.~0, \S~2]{Illusie})
induces an $R$-linear isomorphism $C^{-1} \colon Z_{i-1}\Omega^{r-1}_R
\xrightarrow{\cong} {Z_{i}\Omega^{r-1}_R}/{d\Omega^{r-2}_R}$, where
we let $Z_1 \Omega^{r-1}_R = {\rm Ker}(\Omega^{r-1}_R \to \Omega^{r}_R)$.
We let $B_1\Omega^{r-1}_R = d\Omega^{r-2}_R$ and
let $B_i \Omega^{r-1}_R$ ($i \ge 2$)
be the unique $R$-submodule of $\psi^i_* \Omega^{r-1}_R$
such that $C^{-1} \colon B_{i-1} \Omega^{r-1}_R \to
{B_i\Omega^{r-1}_R}/{d\Omega^{r-2}_R}$ is an isomorphism of $R$-modules.
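Concretely, the inverse Cartier operator is determined by the formula
\[
C^{-1}\left(a\, db_1 \wedge \cdots \wedge db_{r-1}\right) \equiv
a^p\, b_1^{p-1} \cdots b_{r-1}^{p-1}\, db_1 \wedge \cdots \wedge db_{r-1}
\pmod{d\Omega^{r-2}_R},
\]
and one has the chain of inclusions
$B_1\Omega^{r-1}_R \subseteq B_2\Omega^{r-1}_R \subseteq \cdots \subseteq
Z_2\Omega^{r-1}_R \subseteq Z_1\Omega^{r-1}_R$.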
\begin{lem}\label{lem:Coherence}
Let $n = n_1p^s$, where $s \ge 0$ and $p \nmid n_1$.
Then $\rho$ induces isomorphisms
\[
\rho^n_{m,r} \colon \frac{\psi^m_* \Omega^{r-1}_R}{Z_m\Omega^{r-1}_R}
\xrightarrow{\cong} \frac{F^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ \ {\rm if} \
m \le s;
\]
\[
\rho^n_{m,r} \colon \frac{\psi^s_* \Omega^{r-1}_R}{B_s\Omega^{r-1}_R}
\xrightarrow{\cong} \frac{F^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ \ {\rm if} \
m > s.
\]
\end{lem}
\begin{proof}
Once it exists, $\rho^n_{m,r}$ is clearly surjective in both cases. We therefore
need to prove its existence and injectivity. These are easy consequences
of various lemmas in \cite[\S~4]{Bloch-Kato} (see also
\cite[Proposition~1.1.9]{JSZ}) once we have Lemma~\ref{lem:BK-RS}.
The proof goes as follows.
When $m \le s$, we look at the diagram
\begin{equation}\label{eqn:Coherence-0}
\xymatrix@C.8pc{
\frac{\psi^m_* \Omega^{r-1}_R}{Z_m\Omega^{r-1}_R}
\ar@{.>}[r]^-{\rho^n_{m,r}} \ar@{^{(}->}[d] &
\frac{F^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ar@{^{(}->}[d] & \\
\frac{\psi^m_* \Omega^{r-1}_R}{Z_m\Omega^{r-1}_R} \oplus
\frac{\psi^m_* \Omega^{r-2}_R}{Z_m\Omega^{r-2}_R} \ar[r]^-{\rho} &
\frac{G^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ar[r]^-{\cong} &
\frac{H^n_{m,r}(A)}{H^{n+1}_{m,r}(A)},}
\end{equation}
where the bottom right isomorphism is by Lemma~\ref{lem:BK-RS}.
The right vertical arrow is clearly injective.
The composite arrow on the bottom is an isomorphism by
\cite[(4.7), Remark~4.8]{Bloch-Kato}. It follows that the top
horizontal arrow exists and it is injective.
When $m > s$, we look at the commutative diagram
\begin{equation}\label{eqn:Coherence-1}
\xymatrix@C.8pc{
& \frac{\psi^s_* \Omega^{r-1}_R}{B_s\Omega^{r-1}_R}
\ar@{.>}[r]^-{\rho^n_{m,r}} \ar[d] \ar@{^{(}->}[dl] &
\frac{F^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ar@{^{(}->}[d] & \\
\frac{\psi^s_* \Omega^{r-1}_R}{B_s\Omega^{r-1}_R} \oplus
\frac{\psi^s_* \Omega^{r-2}_R}{B_s\Omega^{r-2}_R} \ar@{->>}[r] &
{\rm Coker}(\theta) \ar[r]^-{\rho} &
\frac{G^n_{m,r}(A)}{G^{n+1}_{m,r}(A)} \ar[r]^-{\cong} &
\frac{H^n_{m,r}(A)}{H^{n+1}_{m,r}(A)},}
\end{equation}
where $\theta$ is defined in \cite[Lemma~4.5]{Bloch-Kato}.
The map $\rho$ on the bottom is again an isomorphism.
Hence, it suffices to show that the left vertical arrow is injective.
But this is immediate from the definition of $\theta$ (see loc. cit.)
since $C^{-s}$ is an isomorphism.
\end{proof}
\subsection{The comparison theorem}\label{sec:NIS-ET}
Let $k$ be a perfect field of characteristic $p > 0$.
Let $X \in {\operatorname{\mathbf{Sch}}}_k$ be a regular scheme of pure dimension $d \ge 1$ and
let $D \subset X$ be a regular closed subscheme of codimension one.
Let ${\mathcal F}^n_{m,r, X}$ be the sheaf on $X_{{\rm nis}}$ such that for any
point $x \in X$, the stalk $({\mathcal F}^n_{m,r, X})_x$ coincides with
$F^n_{m,r}({\mathcal O}^{h}_{X,x})$, where $I = {\mathcal I}_D {\mathcal O}^{h}_{X,x}$.
The corresponding {\'e}tale sheaf is defined
similarly by replacing the Henselization ${\mathcal O}^h_{X,x}$ by the strict
Henselization ${\mathcal O}^{sh}_{X,x}$.
We define the Nisnevich and {\'e}tale sheaves ${\mathcal G}^n_{m,r, X}$
in an identical way, using the groups $G^n_{m,r}(A)$ for stalks.
\begin{lem}\label{lem:Comparison-0}
The change of topology map
\[
H^i_{\rm nis}(X, {{\mathcal F}^1_{m,r, X}}/{{\mathcal F}^n_{m,r, X}}) \to
H^i_{\text{\'et}}l(X, {{\mathcal F}^1_{m,r, X}}/{{\mathcal F}^n_{m,r, X}})
\]
is an isomorphism for every $i \ge 0$.
\end{lem}
\begin{proof}
We consider the exact sequence of Nisnevich (or {\'e}tale) sheaves
\begin{equation}\label{eqn:Comparison-10}
0 \to \frac{{\mathcal F}^n_{m,r, X}}{{\mathcal G}^{n+1}_{m,r, X}}
\to \frac{{\mathcal G}^1_{m,r, X}}{{\mathcal G}^{n+1}_{m,r, X}}
\to \frac{{\mathcal F}^1_{m,r, X}}{{\mathcal F}^n_{m,r, X}} \to 0,
\end{equation}
where we have replaced ${\mathcal G}^1_{m,r, X}$ by ${\mathcal F}^1_{m,r, X}$ in the quotient on the
right using Lemma~\ref{lem:Rel-filtn-2}.
Working inductively on $n$, it follows from \cite[Proposition~1.1.9]{JSZ}
that the middle term in ~\eqref{eqn:Comparison-10} has identical
Nisnevich and {\'e}tale cohomology. It suffices therefore to show
that the same holds for the left term in ~\eqref{eqn:Comparison-10} too.
Since this isomorphism is obvious for $i = 0$, it suffices to show using the
Leray spectral sequence that
${\bf R}^i\epsilon_*({{\mathcal F}^n_{m,r, X}}/{{\mathcal G}^{n+1}_{m,r, X}}) = 0$ for
$i > 0$.
Equivalently, we need to show that if $A = {\mathcal O}^h_{X,x}$ for some $x \in X$
and $f \in A$ defines $D$ locally at $x$, then
$H^i_{\text{\'et}}l(A, {F^n_{m,r}(A)}/{G^{n+1}_{m,r}(A)}) = 0$ for $i > 0$.
But this is immediate from Lemma~\ref{lem:Coherence}.
\end{proof}
\begin{lem}\label{lem:Comparison-1}
Assume that $k$ is finite. Then the change of topology map
\[
H^d_{\rm nis}(X, {{\mathcal F}^n_{m,d, X}}) \to
H^d_{\text{\'et}}l(X, {{\mathcal F}^n_{m,d, X}})
\]
is an isomorphism.
\end{lem}
\begin{proof}
We first assume $n =1$. Using Lemma~\ref{lem:Rel-filtn-2}, we can
replace ${\mathcal F}^1_{m,d, X}$ by ${\mathcal G}^1_{m,d, X}$.
By \cite[Theorem~1.1.5]{JSZ} and
\cite[Theorem~2.2.2, Proposition~2.2.5]{Kerz-Zhau},
the dlog map ${\mathcal G}^1_{m,d, X} \to W_m\Omega^d_{X, {\operatorname{log}}}$ is an isomorphism
in Nisnevich and {\'e}tale topologies. We conclude by
\cite[Theorem~3.3.1]{Kerz-Zhau}.
We now assume $n \ge 2$ and look at the exact sequence
\[
0 \to {\mathcal F}^n_{m,d, X} \to {\mathcal F}^1_{m,d, X} \to \frac{{\mathcal F}^1_{m,d, X}}{{\mathcal F}^n_{m,d, X}}
\to 0.
\]
We saw above that the middle term can be replaced by $W_m\Omega^d_{X, {\operatorname{log}}}$.
We can now conclude the proof by combining Lemma~\ref{lem:Comparison-0}
with Lemma~\ref{lem:Kato-complex} (whose proof is independent of the present lemma) via the five lemma.
\end{proof}
\begin{lem}\label{lem:Comparison-2}
Assume that $k$ is finite. Then the change of topology map
\[
H^d_{\rm nis}(X, \overline{{\widetilde{{\mathcal K}}^M_{d, (X, nD)}}/{p^m}}) \to
H^d_{\text{\'et}}l(X, \overline{{\widetilde{{\mathcal K}}^M_{d, (X, nD)}}/{p^m}})
\]
is an isomorphism if either $d \neq 2$ or $k \neq {\mathbb F}_2$.
\end{lem}
\begin{proof}
In view of Lemma~\ref{lem:Comparison-1}, it suffices to show that the canonical
map (see ~\eqref{eqn:Rel-filtn-0})
\begin{equation}\label{eqn:Comparison-2-0}
H^d_\tau(X, \overline{{\widetilde{{\mathcal K}}^M_{d, (X, nD)}}/{p^m}}) \to
H^d_\tau(X, {{\mathcal F}^n_{m,d, X}})
\end{equation}
is an isomorphism for $\tau \in \{{\rm nis}, {\text{\'et}}l\}$.
Using Lemma~\ref{lem:Sheaf-0} and ~\eqref{eqn:Rel-filtn-0}, the proof of this is
completely identical to that of Lemma~\ref{lem:Milnor-d-0}.
\end{proof}
The following is the main result we were after in this section. We restate
all assumptions for convenience.
\begin{thm}\label{thm:Comparison-3}
Let $k$ be a finite field of characteristic $p$.
Let $X \in {\operatorname{\mathbf{Sch}}}_k$ be a regular scheme of pure dimension $d \ge 1$ and
let $D \subset X$ be a regular closed subscheme of codimension one.
Assume that either $d \neq 2$ or $k \neq {\mathbb F}_2$.
Then there is a natural isomorphism
\[
{H^d_{\rm nis}(X, {{\mathcal K}^M_{d,(X,nD)}})}/{p^m} \xrightarrow{\cong}
H^d_{\text{\'et}}l(X, \overline{{\widehat{{\mathcal K}}^M_{d, (X,nD)}}/{p^m}})
\]
for every $m,n \ge 1$.
\end{thm}
\begin{proof}
Combine Lemmas~\ref{lem:Milnor-top-coh}, ~\ref{lem:Milnor-d},
~\ref{lem:Milnor-d-0} and ~\ref{lem:Comparison-2}.
\end{proof}
\begin{cor}\label{cor:Comparison-4}
Under the assumptions of Theorem~\ref{thm:Comparison-3}, there is a commutative
diagram
\begin{equation}\label{eqn:Comparison-4-0}
\xymatrix@C.8pc{
{H^d_{\rm nis}(X, {{\mathcal K}^M_{d,(X,nD)}})}/{p^m} \ar[r] \ar[d] &
H^d_{\text{\'et}}l(X, \overline{{\widehat{{\mathcal K}}^M_{d, (X,nD)}}/{p^m}}) \ar[d] \\
H^d_{\rm nis}(X, \overline{{\widehat{{\mathcal K}}^M_{d, X|nD}}/{p^m}}) \ar[r] &
H^d_{\text{\'et}}l(X, \overline{{\widehat{{\mathcal K}}^M_{d, X|nD}}/{p^m}})}
\end{equation}
for $m,n \ge 1$ in which the horizontal arrows are isomorphisms.
\end{cor}
\begin{proof}
In view of Theorems~\ref{thm:RS-comp} and ~\ref{thm:Comparison-3}, we
only have to explain the vertical arrows.
But their existence follows from Lemmas~\ref{lem:Milnor-top-coh},
~\ref{lem:Milnor-d} and ~\ref{lem:Milnor-d-0}, and the
diagram ~\eqref{eqn:Rel-filtn-0}.
\end{proof}
\begin{remk}\label{remk:Comparison-5}
The reader may note that Theorem~\ref{thm:RS-comp} holds whenever $D_{\rm red}$ is a simple normal crossing
divisor, but this is not the case with Theorem~\ref{thm:Comparison-3}. The main reason for this
is that while $\widehat{K}^M_{d,X|D}$ satisfies cdh-descent for closed covers \cite{JSZ}, the same
is not true for $\widehat{K}^M_{d,(X,D)}$. This prevents us from using an induction argument on the number
of irreducible components of $D$.
\end{remk}
\section{Relative logarithmic Hodge-Witt sheaves}
\label{sec:Hodge-Witt}
In this section, we shall recall the relative logarithmic Hodge-Witt sheaves
and show that they coincide with the Kato-Saito relative Milnor $K$-theory
with ${{\mathbb Z}}/{p^m}$ coefficients in a pro-setting. We fix a field $k$ of
characteristic $p > 0$ and let $X \in {\operatorname{\mathbf{Sch}}}_k$.
\subsection{The relative Hodge-Witt sheaves}\label{sec:dRW}
Let $\{W_m\Omega^\bullet_X\}_{m \ge 1}$ denote the pro-complex of de
Rham-Witt (Nisnevich) sheaves on $X$. This is a pro-complex of differential
graded algebras with the structure map $R$ and the
differential $d$. Let $[-]_m \colon {\mathcal O}_X \to W_m{\mathcal O}_X$ be the multiplicative
Teichm{\"u}ller homomorphism. Recall that the pro-complex
$\{W_m\Omega^\bullet_X\}_{m \ge 1}$ is equipped with the
Frobenius homomorphism of graded algebras $F \colon W_m\Omega^r_X \to
W_{m-1}\Omega^r_X$ and the additive Verschiebung homomorphism
$V \colon W_m\Omega^r_X \to W_{m+1}\Omega^r_X$.
These homomorphisms satisfy the following properties which we shall use
frequently.
\begin{enumerate}
\item $FV = p = VF$.
\item $FdV = d$, $dF = p Fd$ and $p dV = Vd$.
\item $F(d[a]) = [a^{p-1}] d[a]$
for all $a \in {\mathcal O}_X$.
\item $V(F(x) y ) = x V(y)$ and $V(x dy)= Vx d Vy$ for all
$x \in W_m \Omega^i_X$, $y \in W_m \Omega^j_X$.
\end{enumerate}
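For orientation, in the simplest case $X = {\rm Spec \,}({\mathbb F}_p)$ one has
$W_m\Omega^0_X = W_m({\mathbb F}_p) \cong {{\mathbb Z}}/{p^m}$ and $W_m\Omega^r_X = 0$ for $r > 0$;
under this identification $F$ becomes the reduction map
${{\mathbb Z}}/{p^m} \to {{\mathbb Z}}/{p^{m-1}}$, $V$ becomes multiplication by $p$, and the
identity $FV = VF = p$ is immediate.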
Let ${\mathbb F}il^i_V W_m\Omega^r_X = V^iW_{m-i}\Omega^r_X + dV^iW_{m-i}\Omega^{r-1}_X,
\ {\mathbb F}il^i_R W_m\Omega^r_X = {\rm Ker}(R^{m-i})$ and ${\mathbb F}il^i_p W_m\Omega^r_X =
{\rm Ker}(p^{m-i})$ for $0 \le i \le m$. By \cite[Lemma~3.2.4]{Hesselholt-Madsen}
and \cite[Proposition~I.3.4]{Illusie}, we know that
${\mathbb F}il^i_V W_m\Omega^r_X = {\mathbb F}il^i_R W_m\Omega^r_X \subseteq
{\mathbb F}il^i_p W_m\Omega^r_X$ and the last inclusion is an equality if
$X$ is regular. We let $Z W_m\Omega^r_X = {\rm Ker}(d \colon W_m\Omega^r_X \to
W_m\Omega^{r+1}_X)$ and $B W_m\Omega^r_X =
{\rm Image}(d \colon W_m\Omega^{r-1}_X \to
W_m\Omega^{r}_X)$. We set ${\mathcal H}^r(W_m\Omega^\bullet_X) =
{Z W_m\Omega^r_X}/{B W_m\Omega^r_X}$.
Let $\psi \colon X \to X$ denote the absolute Frobenius morphism.
One then knows that $d \colon \psi_* W_1 \Omega^r_X \to \psi_* W_1 \Omega^{r+1}_X$
is ${\mathcal O}_X$-linear. In particular, the sheaves $\psi_* Z W_1\Omega^r_X, \
\psi_* B W_1\Omega^r_X$ and
$\psi_* {\mathcal H}^r(W_1\Omega^\bullet_X)$ are quasi-coherent on $X_{\rm nis}$.
If $X$ is $F$-finite, then $\psi$ is a finite morphism, and therefore,
$\psi_* Z W_1\Omega^r_X, \ \psi_* B W_1\Omega^r_X$ and
$\psi_* {\mathcal H}^r(W_1\Omega^\bullet_X)$ are coherent sheaves on $X_{\rm nis}$.
Let $D \subset X$ be a closed subscheme defined by a sheaf of ideals ${\mathcal I}_D$
and let $nD \subset X$ be the closed subscheme defined by ${\mathcal I}^n_D$ for
$n \ge 1$.
We let $W_m\Omega^\bullet_{(X,D)} = {\rm Ker}(W_m\Omega^\bullet_X \twoheadrightarrow
\iota_* W_m\Omega^\bullet_D)$, where $\iota \colon D \hookrightarrow X$ is the inclusion.
We shall often use the notation $W_m\Omega^\bullet_{(A,I)}$ if $X = {\rm Spec \,}(A)$
and $D = {\rm Spec \,}(A/I)$.
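Note, for instance, that in degree zero one has
$W_m\Omega^0_{(A,I)} = W_m(I) := {\rm Ker}\left(W_m(A) \to W_m(A/I)\right)$,
the subgroup of Witt vectors all of whose components lie in $I$;
this group appears again in the proof of Lemma~\ref{lem:pro-iso-rel-local} below.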
If $D$ is an effective Cartier divisor on $X$ and $n\in {\mathbb Z}$,
we let $W_m {\mathcal O}_X(nD)$ be the
Nisnevich sheaf on $X$ which is locally defined as
$W_m {\mathcal O}_X(n D) = [f^{-n}] \cdot W_m {\mathcal O}_X \subset j_* W_m{\mathcal O}_U$,
where $f\in {\mathcal O}_X$ is a local equation of $D$ and $j \colon U \hookrightarrow X$ is the
inclusion of the complement of $D$ in $X$. One knows that $W_m {\mathcal O}_X(nD)$
is a sheaf of invertible $W_m{\mathcal O}_X$-modules. We let $W_m\Omega^\bullet_X(nD) =
W_m\Omega^\bullet_X \otimes_{W_m{\mathcal O}_X} W_m{\mathcal O}_X(nD)$. It is clear that there is
a canonical $W_m{\mathcal O}_X$-linear map $W_m\Omega^\bullet_X(-D) \to
W_m\Omega^\bullet_{(X,D)}$. The reader is invited to compare the
following with a weaker statement \cite[Proposition~2.5]{Geisser-Hesselholt-top}.
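The existence of this map can be checked locally: if $X = {\rm Spec \,}(A)$ and
$D = {\rm Spec \,}(A/f)$, then the Teichm{\"u}ller lift $[f]_m \in W_m(A)$ maps to
$[0]_m = 0$ in $W_m(A/f)$, and hence
\[
[f]_m \cdot W_m\Omega^r_A \subseteq
{\rm Ker}\left(W_m\Omega^r_A \to W_m\Omega^r_{A/f}\right) = W_m\Omega^r_{(A, (f))}.
\]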
\begin{lem}\label{lem:pro-iso-rel}
If $(X,D)$ is a modulus pair and $m \ge 1$ an
integer, then the canonical map
of pro-sheaves
\[
\{W_m\Omega^\bullet_X(-nD)\}_n \to \{W_m\Omega^\bullet_{(X,nD)}\}_n
\]
on $X_{\rm nis}$ is surjective.
If $X$ is furthermore regular, then the
map $W_m\Omega^\bullet_X(-D) \to W_m\Omega^\bullet_{(X,D)}$ is injective.
\end{lem}
\begin{proof}
The first part follows directly from Lemma~\ref{lem:pro-iso-rel-local} below.
We prove the second part.
Let $U = X \setminus D$ and $j \colon U \hookrightarrow X$ the inclusion.
Using the N{\'e}ron-Popescu approximation and the Gersten resolution of
$W_m\Omega^\bullet_X$ (when $X$ is smooth), proven in
\cite[Proposition~II.5.1.2]{Gross}, it follows that the canonical map
$W_m\Omega^\bullet_X \to j_* W_m\Omega^\bullet_U$ is injective (see
\cite[Proposition~2.8]{KP-dRW}).
Since $W_m{\mathcal O}_X(-D)$ is invertible as a $W_m{\mathcal O}_X$-module,
there are natural isomorphisms
$j_* W_m\Omega^\bullet_U \otimes_{W_m{\mathcal O}_X} W_m{\mathcal O}_X(-D)
\to j_*(W_m\Omega^\bullet_U \otimes_{W_m{\mathcal O}_U} j^* W_m{\mathcal O}_X(-D)) \cong
j_* W_m\Omega^\bullet_U$ (see \cite[Exercise~II.5.1(d)]{Hartshorne-AG}).
Using the invertibility of $W_m{\mathcal O}_X(-D)$ again, we conclude that the
canonical map $W_m\Omega^\bullet_X(-D) \to j_* W_m\Omega^\bullet_U$ is injective.
Since this map factors through
$W_m\Omega^\bullet_X(-D) \to W_m\Omega^\bullet_{(X,D)} \hookrightarrow W_m\Omega^\bullet_X$,
the lemma follows.
\end{proof}
\begin{lem}\label{lem:pro-iso-rel-local}
Let $A$ be a commutative ${\mathbb F}_p$-algebra and $I = (f) \subseteq A$ a
principal ideal. Let $m \ge 1$ be any integer and $n = p^m$.
Then the inclusion
$W_m\Omega^r_{(A, I^n)} \subseteq W_m\Omega^r_A$ of $W_m(A)$-modules
factors through $W_m\Omega^r_{(A, I^n)} \subseteq [f] \cdot W_m\Omega^r_A$
for every $r \ge 0$.
\end{lem}
\begin{proof}
By \cite[Lemma~1.2.2]{Hesselholt-2004}, $W_m\Omega^r_{(A,I)}$ is the
$W_m(A)$-submodule of $W_m\Omega^r_A$ generated by the de Rham-Witt forms
of the type $a_0da_1 \wedge \cdots \wedge da_r$,
where $a_i \in W_m(A)$ for all $i$ and $a_i \in W_m(I)$ for some $i$.
We let $\omega = a_0da_1 \wedge \cdots \wedge da_r \in
W_m \Omega^r_{(A, I^n)}$ such that $a_i \in W_m(I^{n})$ for some
$i \neq 0$.
We can assume after a permutation that $i = 1$. We let
$\omega' = da_2 \wedge \cdots \wedge da_r$.
We can write $a_1 = \stackrel{m-1}{\underset{i = 0}\sum}
V^i([f^{n}]_{m-i}[a'_i]_{m-i})$ for some $a'_i \in A$.
It follows that
$\omega = \stackrel{m-1}{\underset{i = 0}\sum}
a_0dV^i([f^{n}]_{m-i}[a'_i]_{m-i}) \wedge \omega'$.
We fix an integer $0 \le i \le m-1$ and let
$\omega_i = a_0dV^i([f^{n}]_{m-i}[a'_i]_{m-i}) \wedge \omega'$.
It suffices to show that each $\omega_i \in W_m\Omega^r_X(-D)$, where
$X = {\rm Spec \,}(A)$ and $D = {\rm Spec \,}(A/I)$.
However, we have
\[
\begin{array}{lll}
\omega_i & = & a_0dV^i([f^{n}]_{m-i}[a'_i]_{m-i}) \wedge \omega' \\
& = & a_0d([f^{p^{m-i}}]_mV^i[a'_i]_{m-i}) \wedge \omega' \\
& = & [f^{p^{m-i}}]_ma_0dV^i[a'_i]_{m-i} \wedge \omega' +
p^{m-i} [f^{p^{m-i}-1}]_ma_0V^i[a'_i]_{m-i}d[f]_m \wedge \omega'.
\end{array}
\]
It is clear that both terms in the last expression lie in $W_m\Omega^r_X(-D)$,
since $[f^j]_m = [f]_m^j$ and the exponents $p^{m-i}$ and $p^{m-i} - 1$ are
positive (as $i \le m-1$).
This finishes the proof.
\end{proof}
The following result will play a key role
in our exposition.
\begin{cor}\label{cor:pro-iso-rel*}
If $(X,D)$ is a regular modulus pair, then the
canonical map of pro-sheaves
\[
\{W_m\Omega^\bullet_X(-nD)\}_n \to \{W_m\Omega^\bullet_{(X,nD)}\}_n
\]
on $X_{\rm nis}$ is an isomorphism for every $m \ge 1$.
\end{cor}
\subsection{The relative logarithmic sheaves}
\label{sec:Rel-lo}
In the remaining part of \S~\ref{sec:Hodge-Witt}, our default topology will
be the {\'e}tale topology. All other topologies will be mentioned specifically.
Let $k$ be a field of characteristic $p > 0$ and $D \subseteq X$ a closed
immersion in ${\operatorname{\mathbf{Sch}}}_k$.
Recall from \cite{Illusie} that $W_m\Omega^r_{X, {\operatorname{log}}}$ is the {\'e}tale
subsheaf of $W_m\Omega^r_X$ which is the image of the
map ${\rm dlog} \colon {\widetilde{{\mathcal K}}^M_{r,X}}/{p^m} \to W_m\Omega^r_X$, given by
${\rm dlog}(\{a_1, \ldots , a_r\}) = {\rm dlog}[a_1]_m \wedge \cdots \wedge {\rm dlog}[a_r]_m$.
It is easily seen that this map exists.
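Indeed, the multiplicativity of the Teichm{\"u}ller lift and the Leibniz rule give
\[
{\rm dlog}[ab]_m = \frac{d([a]_m[b]_m)}{[a]_m[b]_m} = {\rm dlog}[a]_m + {\rm dlog}[b]_m,
\]
and for $m = 1$ the Steinberg relation is killed because
${\rm dlog}(a) \wedge {\rm dlog}(1-a) = \frac{-\, da \wedge da}{a(1-a)} = 0$
(the case of general $m$ requires more care, cf. the remark cited below).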
One knows (e.g., see \cite[Remark~1.6]{Luders-Morrow}) that this map in fact
factors through
${\rm dlog} \colon {\widehat{{\mathcal K}}^M_{r,X}}/{p^m} \twoheadrightarrow W_m\Omega^r_{X, {\operatorname{log}}}$.
Moreover, this map is multiplicative.
The naturality of the dlog map gives rise to its relative version
\begin{equation}\label{eqn:dlog-rel}
{\rm dlog} \colon \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/{p^m}} \to W_m\Omega^r_{(X,D), {\operatorname{log}}}
:= {\rm Ker}(W_m\Omega^r_{X, {\operatorname{log}}} \twoheadrightarrow W_m\Omega^r_{D, {\operatorname{log}}})
\end{equation}
which is a morphism of {\'e}tale sheaves of ${\widehat{{\mathcal K}}^M_{*,X}}/{p^m}$-modules
as $r \geq 0$ varies.
Recall that the Frobenius map
$F \colon W_{m+1}\Omega^r_X \to W_{m}\Omega^r_X$ sends ${\rm Ker}(R)$ into the
subsheaf $dV^{m-1}\Omega^{r-1}_X$ so that there is an induced map
$\overline{F} \colon W_{m}\Omega^r_X \to {W_{m}\Omega^r_X}/{dV^{m-1}\Omega^{r-1}_X}$.
We let $\pi \colon W_{m}\Omega^r_X \to {W_{m}\Omega^r_X}/{dV^{m-1}\Omega^{r-1}_X}$
denote the projection map. Since $R - F \colon
W_{m+1}\Omega^r_X \to W_{m}\Omega^r_X$ is surjective by
\cite[Proposition~I.3.26]{Illusie}, it follows that $\pi -\overline{F}$ is also
surjective.
Indeed, this surjectivity is proven for smooth schemes over $k$ in loc. cit., but
then the claim follows because $X$ can be seen locally as a closed subscheme
of a regular scheme and a regular scheme is a
filtered inductive limit of smooth schemes
over $k$, by N\'eron–Popescu desingularisation.
We thus get a commutative diagram
\begin{equation}\label{eqn:dlog-2-*}
\xymatrix@C.8pc{
0 \ar[r] & W_{m}\Omega^r_{X, {\operatorname{log}}} \ar[r] \ar@{->>}[d] &
W_{m}\Omega^r_X \ar@{->>}[d] \ar[r]^-{\pi - \overline{F}} &
\frac{W_{m}\Omega^r_X}{dV^{m-1}\Omega^{r-1}_X} \ar[r] \ar@{->>}[d] & 0 \\
0 \ar[r] & W_{m}\Omega^r_{D, {\operatorname{log}}} \ar[r] &
W_{m}\Omega^r_D \ar[r]^-{\pi - \overline{F}} &
\frac{W_{m}\Omega^r_D}{dV^{m-1}\Omega^{r-1}_D} \ar[r] & 0.}
\end{equation}
The middle and the right vertical arrows are surjective by definition
and the two rows are exact by loc. cit. and \cite[Corollary~4.2]{Morrow-ENS}.
The
surjectivity of the left vertical arrow follows by applying the dlog map to
the surjection $\widehat{{\mathcal K}}^M_{r,X} \twoheadrightarrow \widehat{{\mathcal K}}^M_{r,D}$.
Since the vertical arrows are surjective, taking their kernels yields (by the snake lemma) a short exact sequence
\begin{equation}\label{eqn:dlog-2}
0 \to W_m\Omega^r_{(X,D), {\operatorname{log}}} \to W_m\Omega^r_{(X,D)} \to
\frac{W_m\Omega^r_{(X,D)}}{W_m\Omega^r_{(X,D)} \cap dV^{m-1} \Omega^{r-1}_{X}} \to 0.
\end{equation}
The following property of the relative dlog map will be important to us.
\begin{lem}\label{lem:Rel-dlog-iso}
Assume that $k$ is perfect and $X$ is regular. Then the
map of {\'e}tale pro-sheaves
\[
{\rm dlog} \colon \{\overline{{\widehat{{\mathcal K}}^M_{r,(X,nD)}}/{p^m}}\}_n \to
\{W_m\Omega^r_{(X,nD), {\operatorname{log}}}\}_n
\]
is an isomorphism for every $m, r \ge 1$.
\end{lem}
\begin{proof}
We consider the commutative diagram of pro-sheaves:
\begin{equation}\label{eqn:dlog-0}
\xymatrix@C.8pc{
0 \ar[r] & \{\overline{{\widehat{{\mathcal K}}^M_{r,(X,nD)}}/{p^m}}\}_n \ar[r] \ar[d]_-{{\rm dlog}} &
{\widehat{{\mathcal K}}^M_{r,X}}/{p^m} \ar[d]^-{{\rm dlog}}_-{\cong} \ar[r] &
\{{\widehat{{\mathcal K}}^M_{r,nD}}/{p^m}\}_n \ar[d]^-{{\rm dlog}} \ar[r] & 0 \\
0 \ar[r] & \{W_m\Omega^r_{(X,nD), {\operatorname{log}}}\}_n \ar[r] & W_m\Omega^r_{X,{\operatorname{log}}} \ar[r] &
\{W_m\Omega^r_{nD,{\operatorname{log}}}\}_n \ar[r] & 0.}
\end{equation}
The middle vertical arrow is an isomorphism by the Bloch-Gabber-Kato theorem
for fields \cite[Corollary~2.8]{Bloch-Kato} and the proof of the Gersten
conjecture for Milnor $K$-theory (\cite[Proposition~10]{Kerz-10}) and
logarithmic Hodge-Witt sheaves (\cite[Th{\`e}or{\'e}me~1.4]{Gross-Suwa-88}).
The right vertical arrow is an
isomorphism by \cite[Theorem~0.3]{Luders-Morrow}. Since the rows are exact,
it follows that the left vertical arrow is also an isomorphism.
This finishes the proof.
\end{proof}
We now prove a Nisnevich version of Lemma~\ref{lem:Rel-dlog-iso}.
For any $k$-scheme $X$, we let $W_m\Omega^r_{X, {\operatorname{log}}, {\rm nis}}$ be the image of the
map of Nisnevich sheaves
${\rm dlog} \colon {\widehat{{\mathcal K}}^M_{r,X}}/{p^m} \to W_m\Omega^r_X$, given by
${\rm dlog}(\{a_1, \ldots , a_r\}) = {\rm dlog}[a_1]_m \wedge \cdots \wedge {\rm dlog}[a_r]_m$
(see \cite[Remark~1.6]{Luders-Morrow} for the existence of this map).
The naturality of this map gives rise to its relative version
\begin{equation}\label{eqn:dlog-rel-*}
{\rm dlog} \colon \overline{{\widehat{{\mathcal K}}^M_{r,(X,D)}}/{p^m}} \to W_m\Omega^r_{(X,D), {\operatorname{log}}, {\rm nis}}
:= {\rm Ker}(W_m\Omega^r_{X, {\operatorname{log}}, {\rm nis}} \twoheadrightarrow W_m\Omega^r_{D, {\operatorname{log}}, {\rm nis}})
\end{equation}
if $D \subseteq X$ is a closed immersion.
\begin{lem}\label{lem:Rel-dlog-iso-nis}
Assume that $k$ is perfect and $X$ is regular. Then the
map of Nisnevich pro-sheaves
\[
{\rm dlog} \colon \{\overline{{\widehat{{\mathcal K}}^M_{r,(X,nD)}}/{p^m}}\}_n \to
\{W_m\Omega^r_{(X,nD), {\operatorname{log}}, {\rm nis}}\}_n
\]
is an isomorphism for every $m, r \ge 1$.
\end{lem}
\begin{proof}
The proof is completely identical to that of Lemma~\ref{lem:Rel-dlog-iso}, where
we only have to observe that the middle and the right vertical arrows
are isomorphisms even in the Nisnevich topology, by the same references that
we used for the {\'e}tale case.
\end{proof}
\begin{comment}
An argument identical to those of Lemmas~\ref{lem:Rel-dlog-iso} and
~\ref{lem:Rel-dlog-iso-nis} yields the following.
\begin{lem}\label{lem:Rel-dlog-iso-nil}
Assume that $k$ is perfect and $D \subseteq X$ is a closed immersion of
regular schemes in ${{\text{\rm op}}eratorname{\mathbf{Sch}}}_k$. Then the canonical maps of Nisnevich and
{\'e}tale pro-sheaves
\[
{\rm dlog} \colon \{\overline{{\widehat{{\mathcal K}}^M_{r,(nD,D)}}/{p^m}}\}_n \to
\{W_m\Omega^r_{(nD,D), {\operatorname{log}}, {\rm nis}}\}_n
\]
\[
{\rm dlog} \colon \{\overline{{\widehat{{\mathcal K}}^M_{r,(nD,D)}}/{p^m}}\}_n \to
\{W_m\Omega^r_{(nD,D), {\operatorname{log}}}\}_n
\]
are isomorphisms for every $m, r \ge 1$.
\end{lem}
\end{comment}
\begin{lem}\label{lem:Kato-complex}
Assume that $k$ is a finite field and $X$ is regular of pure dimension
$d \ge 1$. Then the canonical map
\[
H^i_{\rm nis}(X, {\widehat{{\mathcal K}}^M_{d,X}}/{p^m}) \to H^i_{\text{\'et}}l(X, {\widehat{{\mathcal K}}^M_{d,X}}/{p^m})
\]
is an isomorphism for $i = d$ and surjective for $i = d-1$.
\end{lem}
\begin{proof}
We have seen in the proofs of Lemmas~\ref{lem:Rel-dlog-iso} and
~\ref{lem:Rel-dlog-iso-nis} that for every $r \ge 0$,
there are natural isomorphisms
\begin{equation}\label{eqn:Kato-complex-0}
{({\widehat{{\mathcal K}}^M_{r,X}}/{p^m})}_{{\rm nis}} \cong W_m\Omega^r_{X, {\operatorname{log}}, {\rm nis}} \ \
{\rm and} \ \
{({\widehat{{\mathcal K}}^M_{r,X}}/{p^m})}_{{\text{\'et}}l} \cong W_m\Omega^r_{X, {\operatorname{log}}}.
\end{equation}
Using these isomorphisms, the lemma follows from
\cite[Propositions~3.3.2, 3.3.3]{Kerz-Zhau}. The
only additional input one has to use is that
$H^b_x(X_{\text{\'et}}l, W_m\Omega^d_{X, {\operatorname{log}}}) = 0$ if $x \in X^{(a)}$ and $b < a$ by
\cite[Corollary~2.2]{Milne-Zeta} so that the right hand side of
[loc. cit., (3.3.3)] is zero. This is needed in the proof of the
$i = d-1$ case of the lemma.
\end{proof}
\begin{comment}
\begin{lem}\label{lem:Nis-et-pro-rel}
Assume that $k$ is a perfect field. Let $X \in {{\text{\rm op}}eratorname{\mathbf{Sch}}}_k$ be regular of pure
dimension $d \ge 1$ and $D \subset X$ a simple normal crossing
divisor. Then the canonical map
\[
\{(\overline{{\widehat{{\mathcal K}}^M_{d,nD}}/{p^m}})_{\rm nis}\}_n \to
\{{\bf R}\epsilon_*((\overline{{\widehat{{\mathcal K}}^M_{d,nD}}/{p^m}})_{\text{\'et}}l)\}_n
\]
is an isomorphism in the derived category of Nisnevich sheaves on $X$ for
every $m \ge 1$.
\end{lem}
\begin{proof}
If $D$ is regular, it follows from the formal smoothness property that
the inclusion $D \hookrightarrow nD$ locally admits a retract. In particular, there is a
short exact sequence of Nisnevich or {\'e}tale sheaves
\begin{equation}\label{eqn:Nis-et-2}
0 \to {\widehat{{\mathcal K}}^M_{d,(nD, D)}}/{p^m} \to {\widehat{{\mathcal K}}^M_{d,nD}}/{p^m} \to
{\widehat{{\mathcal K}}^M_{d,D}}/{p^m} \to 0.
\end{equation}
Since $D$ is regular of dimension at most $d-1$, it follows from
~\eqref{eqn:Kato-complex-0} that ${\widehat{{\mathcal K}}^M_{d,D}}/{p^m} = 0$.
In particular, the map ${\widehat{{\mathcal K}}^M_{d,(nD, D)}}/{p^m} \to
{\widehat{{\mathcal K}}^M_{d,nD}}/{p^m}$ of Nisnevich or {\'e}tale sheaves
is an isomorphism for every $n \ge 1$.
The result now follows from ~Lemma~\ref{lem:Rel-dlog-iso-nil} and
\cite[Corollary~4.12]{Morrow-ENS}.
If $D$ is not regular, then we can use induction on
the number of irreducible components of $D$ and apply
Corollary~4.10 of {\sl loc. cit.} to finish the proof.
\end{proof}
\end{comment}
Assume now that $k$ is a perfect field, $X \in {\operatorname{\mathbf{Sch}}}_k$
is a regular scheme of pure dimension
$d \ge 1$ and $D \subset X$ is a nowhere dense closed
subscheme. We fix integers $m \geq 1$ and $i, r\geq 0$.
Under these assumptions, we shall prove the
following results.
\begin{lem}\label{lem:log-HW}
There is a short exact sequence
\[
0 \to W_{m-i}\Omega^r_{X, {\operatorname{log}}} \xrightarrow{\underline{p}^i}
W_{m}\Omega^r_{X, {\operatorname{log}}} \xrightarrow{R^{m-i}} W_{i}\Omega^r_{X, {\operatorname{log}}}
\to 0
\]
of sheaves on $X$ in Nisnevich and {\'e}tale
topologies.
\end{lem}
\begin{proof}
The {\'e}tale version of the lemma is already known by
\cite[Lemma~3]{CTSS}. To prove its Nisnevich version, we can use
the first isomorphism of ~\eqref{eqn:Kato-complex-0}.
This reduces the proof to showing that tensoring the exact sequence
\[
0 \to {{\mathbb Z}}/{p^{m-i}} \xrightarrow{p^i} {{\mathbb Z}}/{p^{m}} \xrightarrow{\pi_{m-i}}
{{\mathbb Z}}/{p^{i}} \to 0
\]
with ${\widehat{{\mathcal K}}^M_{r,X}}$ yields an exact sequence
\begin{equation}\label{eqn:log-HW-0}
0 \to {\widehat{{\mathcal K}}^M_{r,X}}/{p^{m-i}} \xrightarrow{p^i}
{\widehat{{\mathcal K}}^M_{r,X}}/{p^{m}} \xrightarrow{\pi_{m-i}} {\widehat{{\mathcal K}}^M_{r,X}}/{p^{i}} \to 0
\end{equation}
of Nisnevich sheaves on $X$.
But this follows directly from the fact that ${\widehat{{\mathcal K}}^M_{r,X}}$ has
no $p^i$-torsion (see \cite[Theorem~8.1]{Geisser-Levine}).
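In more detail, tensoring with ${\widehat{{\mathcal K}}^M_{r,X}}$ gives the exact sequence
\[
{\rm Tor}_1^{{\mathbb Z}}({\widehat{{\mathcal K}}^M_{r,X}}, {{\mathbb Z}}/{p^{i}}) \to
{\widehat{{\mathcal K}}^M_{r,X}}/{p^{m-i}} \xrightarrow{p^i}
{\widehat{{\mathcal K}}^M_{r,X}}/{p^{m}} \xrightarrow{\pi_{m-i}}
{\widehat{{\mathcal K}}^M_{r,X}}/{p^{i}} \to 0,
\]
and the Tor term is the $p^i$-torsion subsheaf of ${\widehat{{\mathcal K}}^M_{r,X}}$, which vanishes.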
\end{proof}
\begin{lem}\label{lem:log-HW-pro}
There is a short exact sequence
\[
0 \to \{W_{m-i}\Omega^r_{nD, {\operatorname{log}}}\}_n \xrightarrow{\underline{p}^i}
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \xrightarrow{R^{m-i}} \{W_{i}\Omega^r_{nD, {\operatorname{log}}}\}_n
\to 0
\]
of pro-sheaves on $D$ in Nisnevich and {\'e}tale
topologies.
\end{lem}
\begin{proof}
The argument below works for either of Nisnevich and {\'e}tale
topologies.
We shall prove the lemma by modifying the proof of
\cite[Theorem~4.6]{Morrow-ENS}. The latter result says that there is an
exact sequence
\begin{equation}\label{eqn:log-HW-pro-0}
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \xrightarrow{{p}^i}
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \xrightarrow{R^{m-i}} \{W_{i}\Omega^r_{nD, {\operatorname{log}}}\}_n
\to 0.
\end{equation}
It suffices therefore to show that the first arrow in this sequence has a
factorization
\begin{equation}\label{eqn:log-HW-pro-1}
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \stackrel{R^i}{\twoheadrightarrow}
\{W_{m-i}\Omega^r_{nD, {\operatorname{log}}}\}_n \stackrel{\underline{p}^i}{\hookrightarrow}
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n.
\end{equation}
To prove this factorization, we look at the commutative diagram
\begin{equation}\label{eqn:log-HW-pro-2}
\xymatrix@C.8pc{
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \ar[r]^-{{p}^i} \ar@{^{(}->}[d] &
\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n \ar@{^{(}->}[d] \\
\{W_{m}\Omega^r_{nD}\}_n \ar[r]^-{{p}^i} &
\{W_{m}\Omega^r_{nD}\}_n.}
\end{equation}
It follows from \cite[Proposition~2.14]{Morrow-ENS} that the map $p^i$ at
the bottom has a factorization (see \S~\ref{sec:dRW} for the definitions of
various filtrations of $W_m\Omega^\bullet_{nD}$)
\begin{equation}\label{eqn:log-HW-pro-3}
\{W_{m}\Omega^r_{nD}\}_n \stackrel{R^i}{\twoheadrightarrow}
\{W_{m-i}\Omega^r_{nD}\}_n \stackrel{\underline{p}^i}{\hookrightarrow}
\{W_{m}\Omega^r_{nD}\}_n.
\end{equation}
Since the image of $\{W_{m}\Omega^r_{nD, {\operatorname{log}}}\}_n$ under $R^i$ is
$\{W_{m-i}\Omega^r_{nD, {\operatorname{log}}}\}_n$ (note the surjectivity of the second
arrow in ~\eqref{eqn:log-HW-pro-0}),
it follows at once
from ~\eqref{eqn:log-HW-pro-3} that the map $p^i$ on the
top in ~\eqref{eqn:log-HW-pro-2} has a factorization as
desired in ~\eqref{eqn:log-HW-pro-1}. This concludes the proof.
\end{proof}
\begin{prop}\label{prop:log-HW-pro-*}
There is a short exact sequence
\[
0 \to \{W_{m-i}\Omega^r_{(X,nD), {\operatorname{log}}}\}_n \xrightarrow{\underline{p}^i}
\{W_{m}\Omega^r_{(X,nD), {\operatorname{log}}}\}_n \xrightarrow{R^{m-i}}
\{W_{i}\Omega^r_{(X,nD), {\operatorname{log}}}\}_n
\to 0
\]
of pro-sheaves on $X$ in Nisnevich and {\'e}tale
topologies.
\end{prop}
\begin{proof}
Combine the previous two lemmas and use that the maps
$W_{m}\Omega^r_{X, {\operatorname{log}}} \to W_{m}\Omega^r_{nD, {\operatorname{log}}}$ and
$W_{m}\Omega^r_{X, {\operatorname{log}}, {\rm nis}} \to W_{m}\Omega^r_{nD, {\operatorname{log}}, {\rm nis}}$ are surjective.
\end{proof}
Applying the cohomology functor, we get
\begin{cor}\label{cor:log-HW-pro-coh}
There is a long exact
sequence of pro-abelian groups
\[
\cdots \to \{H^j_{\text{\'et}}l(X, W_{m-i}\Omega^r_{(X,nD), {\operatorname{log}}})\}_n
\xrightarrow{\underline{p}^i} \{H^j_{\text{\'et}}l(X, W_{m}\Omega^r_{(X,nD), {\operatorname{log}}})\}_n
\hspace*{3cm}
\]
\[
\hspace*{7.5cm}
\xrightarrow{R^{m-i}} \{H^j_{\text{\'et}}l(X, W_{i}\Omega^r_{(X,nD), {\operatorname{log}}})\}_n
\xrightarrow{\partial^i}
\cdots
\]
The same also holds in the Nisnevich topology.
\end{cor}
\vskip .3cm
\subsection{Some cohomology exact sequences}\label{sec:Exact}
Let us now assume that $k$ is a finite field and $X \in {\operatorname{\mathbf{Sch}}}_k$ is regular of
pure dimension $d \ge 1$. For any $p^m$-torsion abelian group $V$, we let $V^\star =
{\rm Hom}_{{{\mathbb Z}}/{p^m}}(V,{{\mathbb Z}}/{p^m})$.
Let $D \subset X$ be an effective Cartier divisor
with complement $U$.
We let $F^j_{m,r}(n) = H^j_{\text{\'et}}l(X, W_{m}\Omega^{r}_{(X,nD), {\operatorname{log}}})$
and $F^j_{m,r}(U) = {\varprojlim}_n F^j_{m,r}(n)$.
Each group ${(F^j_{m,r}(n))}^\star$
is a profinite abelian group (see \cite[Theorem~2.9.6]{Pro-fin}).
\begin{lem}\label{lem:Lim-exact}
The sequence
\[
\cdots \to F^j_{m-1,r}(U) \to F^j_{m,r}(U) \to F^j_{1,r}(U) \to
F^{j+1}_{m-1,r}(U) \to \cdots
\]
is exact.
\end{lem}
\begin{proof}
We first prove by induction on $m$ that ${\varprojlim}^1_n F^j_{m,r}(n) = 0$.
The $m =1$ case follows from Lemma~\ref{lem:Finite-0}. In general,
we break the long exact sequence of Corollary~\ref{cor:log-HW-pro-coh}
into short exact sequences. This yields exact sequences
of pro-abelian groups
\begin{equation}\label{eqn:Lim-exact-0}
0 \to {\rm Image}(\partial^{1}) \to \{F^j_{m-1,r}(n)\}_n
\to {\rm Ker}(R^{m-1}) \to 0;
\end{equation}
\begin{equation}\label{eqn:Lim-exact-1}
0 \to {\rm Ker}(R^{m-1}) \to \{F^j_{m,r}(n)\}_n \to {\rm Image}(R^{m-1}) \to 0;
\end{equation}
\begin{equation}\label{eqn:Lim-exact-2}
0 \to {\rm Image}(R^{m-1}) \to \{F^j_{1,r}(n)\}_n \to {\rm Image}(\partial^1)
\to 0.
\end{equation}
Lemma~\ref{lem:Finite-0} says that $\{F^j_{1,r}(n)\}_n$ is
a pro-system of finite abelian groups. This implies by
~\eqref{eqn:Lim-exact-2} that each of
${\rm Image}(R^{m-1})$ and ${\rm Image}(\partial^1)$ is isomorphic to a
pro-system of finite abelian groups. In particular, the
pro-systems of ~\eqref{eqn:Lim-exact-2} do not admit higher derived
lim functors. On the other hand, ${\varprojlim}^1_n F^j_{m-1,r}(n) = 0$ by
induction. It follows from ~\eqref{eqn:Lim-exact-0} that
${\varprojlim}^1_n {\rm Ker}(R^{m-1}) = 0$.
Using ~\eqref{eqn:Lim-exact-1}, we get ${\varprojlim}^1_n F^j_{m,r}(n) = 0$.
It follows from what we have shown that the above three short
exact sequences of pro-abelian groups remain exact after
applying the inverse limit functor. But this implies that
the long exact sequence of Corollary~\ref{cor:log-HW-pro-coh} also remains exact
after applying the inverse limit functor.
This proves the lemma.
\end{proof}
\begin{lem}\label{lem:long-exact-lim}
There is a long exact
sequence
\[
\cdots \to {\varinjlim}_n {(F^j_{i,r}(n))}^\star \xrightarrow{(R^{m-i})^\star}
{\varinjlim}_n {(F^j_{m,r}(n))}^\star \xrightarrow{(\underline{p}^i)^\star}
{\varinjlim}_n {(F^j_{m-i,r}(n))}^\star \to \cdots
\]
of abelian groups for every $i \ge 0$.
\end{lem}
\begin{proof}
This is an easy consequence of Corollary~\ref{cor:log-HW-pro-coh} using the fact
that the Pontryagin dual functor (recalled in \S~\ref{sec:Cont})
is exact on the category of discrete torsion
abelian groups (see \cite[Theorem~2.9.6]{Pro-fin}) and the direct limit
functor is exact on the category of ind-abelian groups.
We also have to note that the Pontryagin dual of a discrete $p^m$-torsion
abelian group $V$ coincides with $V^\star$, since every homomorphism from such a
group to ${{\mathbb Q}}/{{\mathbb Z}}$ takes values in $\frac{1}{p^m}{{\mathbb Z}}/{{\mathbb Z}} \cong {{\mathbb Z}}/{p^m}$.
\end{proof}
\begin{comment}
\begin{proof}
Since each $F^j_{m,r}(n)$ is a discrete $p^m$-torsion group and since
the Pontryagin dual functor (recalled in \S~\ref{sec:Cont})
is exact on the category of discrete torsion
abelian groups by \cite[Theorem~2.9.6]{Pro-fin}, it follows by
applying the Pontryagin duals to the exact sequence of
Corollary~\ref{cor:log-HW-pro-coh} that the sequence of ind-abelian groups
\[
\cdots \to \{{(F^j_{i,r}(n))}^\star\}_n \xrightarrow{(R^{m-i})^\star}
\{{(F^j_{m,r}(n))}^\star\}_n \xrightarrow{(\underline{p}^i)^\star}
\{{(F^j_{m-i,r}(n))}^\star\}_n \to \cdots
\]
is exact. Note here that the Pontryagin dual of a discrete $p^m$-torsion
abelian group $V$ coincides with $V^\star$.
Since the direct limit functor is exact, the proof concludes.
\end{proof}
\end{comment}
\begin{lem}\label{lem:long-exact-lim-d}
Assume further that $D \subset X$ is a simple normal crossing
divisor.
Then the group
${\varprojlim}_n F^d_{m,d}(n)$ is profinite and the canonical map
\[
{\varinjlim}_n {(F^d_{m,d}(n))}^\star \to
({\varprojlim}_n F^d_{m,d}(n))^\star
\]
is an isomorphism of topological abelian groups if
either $k \neq {\mathbb F}_2$ or $d \neq 2$.
\end{lem}
\begin{proof}
By \cite[Theorem~9.1]{Kato-Saito}, Proposition~\ref{prop:Milnor-iso},
Corollary~\ref{cor:RS-K-comp} and
Lemma~\ref{lem:Rel-dlog-iso}, $\{F^d_{m,d}(n)\}_n$ is isomorphic to
a pro-abelian group $\{E_n\}_n$ such that each $E_n$ is finite and
the map $E_{n+1} \to E_n$ is surjective. We can therefore apply
Lemma~\ref{lem:Top}.
\end{proof}
\section{Cartier map for twisted de
Rham-Witt complex}\label{sec:Cartier}
In this section, we shall prove the existence and some properties of the
Cartier homomorphism for the twisted de Rham-Witt complex.
We fix a perfect field $k$ of characteristic $p >0$ and a modulus pair
$(X,D)$ in ${\operatorname{\mathbf{Sch}}}_k$ such that $X$ is connected and regular
of dimension $d \ge 1$. Let $\iota \colon D \hookrightarrow X$
and $j \colon U \hookrightarrow X$ be the inclusions, where $U = X \setminus D$.
Let $K$ denote the function field of $X$.
We also fix integers $m \geq 1$,
$n \in {\mathbb Z}$ and $r \geq 0$.
Let $W_m {\mathcal O}_X(nD) $ be the Nisnevich sheaf on $X$
defined in \S~\ref{sec:dRW}. We begin with the following result describing the
behavior of the Frobenius and Verschiebung operators on
$W_\bullet{\mathcal O}_X(nD)$. We shall then use this result to describe these
operators on the full twisted de Rham-Witt complex.
Until we talk about the topology of $X$ again in this section, we shall assume
it to be the Nisnevich topology.
\begin{lem}\label{lem:F-V-Witt-0}
We have
\begin{enumerate}
\item
$F(W_{m+1}{\mathcal O}_X(nD)) \subseteq W_{m}{\mathcal O}_X(pnD)$.
\item
$V(W_{m}{\mathcal O}_X(pnD)) \subseteq W_{m+1}{\mathcal O}_X(nD)$.
\item
$R(W_{m+1}{\mathcal O}_X(nD)) \subseteq W_{m}{\mathcal O}_X(nD)$.
\end{enumerate}
\end{lem}
\begin{proof}
We can check the lemma locally at a point $x \in X$. We let $A = {\mathcal O}^h_{X,x}$
and let $f \in A$ define $D$ at $x$. The lemma is easy to check when $n \le 0$.
We therefore assume $n \ge 1$. For $w \in
W_{m+1}(A)$, we have $F(w[f^{-n}]) = F(w) \cdot F([f^{-n}]) = F(w) \cdot [f^{-pn}]
\in W_m{\mathcal O}^h_{X,x}(pnD)$. This proves (1). For (2), we use the projection
formula to get $V(w'[f^{-pn}]) = V(w' \cdot F([f^{-n}])) = V(w') \cdot
[f^{-n}] \in W_{m+1}{\mathcal O}^h_{X,x}(nD)$ for $w' \in W_m(A)$. The part (3) is obvious.
\end{proof}
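For the reader's convenience, we record the standard identities for Witt vectors
of an ${\mathbb F}_p$-algebra which the above computation uses
(see, e.g., \cite{Illusie}): for a local section $a$ of ${\mathcal O}_X$ and Witt vectors
$x, y$, we have
\[
F([a]) = [a]^p = [a^p], \qquad x \cdot V(y) = V(F(x)\, y), \qquad FV = VF = p.
\]
The second identity is the projection formula used in the proof of (2).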
\begin{lem}\label{lem:F-V-Witt-1}
For $i \geq 1$, there is a short exact sequence
\[
0 \to W_{m-i}{\mathcal O}_X(p^inD) \xrightarrow{V^i} W_m {\mathcal O}_X(nD) \xrightarrow{R^{m-i}}
W_i{\mathcal O}_X(nD) \to 0.
\]
\end{lem}
\begin{proof}
Since the restriction $R$ is a surjective ring homomorphism on $j_* W_\bullet{\mathcal O}_U$,
it follows that $R^{m-i} \colon W_m {\mathcal O}_X(nD) \to W_i{\mathcal O}_X(nD)$ is surjective.
Since the sequence
\begin{equation}\label{eqn:F-V-Witt-1-0}
0 \to W_{m-i}{\mathcal O}_X \xrightarrow{V^i} W_m{\mathcal O}_X \xrightarrow{R^{m-i}} W_i{\mathcal O}_X \to
0
\end{equation}
is exact, and since $V^i$ and $R^{m-i}$ restrict to the twisted sheaves by
Lemma~\ref{lem:F-V-Witt-0}, we only have to show that
$V^i(j_*W_{m-i}{\mathcal O}_U) \cap W_m{\mathcal O}_X(nD) = V^i(W_{m-i}{\mathcal O}_X(p^inD))$.
Arguing by induction on $i \ge 1$, it suffices to check this for $i = 1$.
Now, we let $w = (a_1, \ldots , a_{m-1}) \in j_*W_{m-1}{\mathcal O}_U$ be such that
$V(w) = (0, a_1, \ldots , a_{m-1}) = (b_1, \ldots , b_{m}) \cdot [f^{-n}]
= (b_1f^{-n}, b_2f^{-pn}, \ldots , b_{m}f^{-p^{m-1}n})$, where $b_i \in {\mathcal O}_X$.
This implies that $b_1 = 0$ and $a_i = b_{i+1}f^{-p^{i}n}$ for $i \ge 1$.
Hence, $w = (b_2f^{-pn}, \ldots , b_{m}f^{-p^{m-1}n}) = (b_2, \ldots , b_{m})
\cdot [f^{-pn}] \in W_{m-1}{\mathcal O}_X(pnD)$.
\end{proof}
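To illustrate, for $m = 2$ and $i = 1$ the sequence of Lemma~\ref{lem:F-V-Witt-1} reads
\[
0 \to {\mathcal O}_X(pnD) \xrightarrow{V} W_2{\mathcal O}_X(nD) \xrightarrow{R} {\mathcal O}_X(nD) \to 0,
\]
where, locally as in the proof above, $V(a f^{-pn}) = (0, a f^{-pn}) = (0,a) \cdot [f^{-n}]$
and $R(b_1 f^{-n}, b_2 f^{-pn}) = b_1 f^{-n}$.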
We shall now extend Lemma~\ref{lem:F-V-Witt-0} to the de Rham-Witt forms of
higher degrees.
One knows using the Gersten conjecture for the de Rham-Witt sheaves
due to Gros \cite{Gross} that the map
$W_m \Omega^r_X \to W_m \Omega^r_K$ is
injective. It follows that the canonical map
$W_m \Omega^r_X \to j_* W_m \Omega^r_U$ is injective too.
Using the invertibility of the $W_m{\mathcal O}_X$-module $W_m {\mathcal O}_X(nD)$,
we get an injection
\begin{equation}\label{eqn:Injection}
\xymatrix@C.8pc{
W_m \Omega^r_X(nD) \ar@{=}[r] \ar@{^{(}->}[dr] &
W_m {\mathcal O}_X(nD) \otimes_{W_m{\mathcal O}_X} W_m \Omega^r_X
\ar@{^{(}->}[r] & W_m {\mathcal O}_X(nD) \otimes_{W_m{\mathcal O}_X} j_* W_m \Omega^r_U
\ar@{=}[d] \\
& j_* W_m \Omega^r_U \ar@{=}[r] & W_m {\mathcal O}_X(nD) \cdot j_* W_m \Omega^r_U.}
\end{equation}
\begin{lem}\label{lem:Limit}
Let $q \ge 1$ be a positive integer. Then the inclusions
$W_m {\mathcal O}_X(qnD) \hookrightarrow W_m {\mathcal O}_X(q(n+1)D)$ induce an
isomorphism
\[
{\underset{n \ge 1}\varinjlim} \ W_m {\mathcal O}_X(qnD)
\xrightarrow{\cong} j_* W_m {\mathcal O}_U.
\]
\end{lem}
\begin{proof}
We shall prove the lemma by induction on $m$.
For $m = 1$, the statement is clear. We now assume $m \ge 2$ and use the
commutative diagram
\begin{equation}\label{eqn:Limit-0}
\xymatrix@C.8pc{
0 \ar[r] & {\underset{n \ge 1}\varinjlim} \ W_{m-1} {\mathcal O}_X(qpnD) \ar[r]^-{V}
\ar[d] &
{\underset{n \ge 1}\varinjlim} \ W_{m} {\mathcal O}_X(qnD) \ar[r]^-{R^{m-1}} \ar[d] &
{\underset{n \ge 1}\varinjlim} \ W_1 {\mathcal O}_X(qnD) \ar[r] \ar[d] & 0 \\
0 \ar[r] & j_* W_{m-1} {\mathcal O}_U \ar[r]^-{V} & j_* W_{m} {\mathcal O}_U \ar[r]^-{R^{m-1}} &
j_* W_{1} {\mathcal O}_U \ar[r] & 0.}
\end{equation}
It is easy to check that the bottom row is exact. The top row is
exact by Lemma~\ref{lem:F-V-Witt-1}.
The right vertical arrow is an isomorphism by the $m = 1$ case and the
left vertical arrow is an isomorphism by induction on $m$
(with $q$ replaced by $qp$).
The lemma follows.
\end{proof}
\begin{cor}\label{cor:Limit-1}
For $q \ge 1$, the canonical map
$W_m\Omega^r_X(qnD) \to W_m\Omega^r_X(q(n+1)D)$ is injective and the map
${\underset{n \ge 1}\varinjlim} \ W_m\Omega^r_X(qnD) \to j_* W_m\Omega^r_U$
is an isomorphism of $W_m{\mathcal O}_X$-modules.
\end{cor}
\begin{proof}
The injectivity claim follows from ~\eqref{eqn:Injection}.
To prove the isomorphism, we take the tensor product with
the $W_m {\mathcal O}_X$-module $W_m \Omega^r_X$
on the two sides
of the isomorphism in Lemma~\ref{lem:Limit}.
This yields an isomorphism
${\underset{n \geq 1}\varinjlim} \ W_m \Omega^r_X(qnD) \cong (j_* W_m {\mathcal O}_U)
\otimes_{W_m{\mathcal O}_X} W_m \Omega^r_X$.
On the other hand, the {\'e}tale descent property (see \cite[Proposition I.1.14]{Illusie})
of the de Rham-Witt sheaves says that the canonical map
$(j_* W_m {\mathcal O}_U) \otimes_{W_m{\mathcal O}_X} W_m \Omega^r_X \to j_* W_m\Omega^r_U$
is an isomorphism. The corollary follows.
\end{proof}
In view of Corollary~\ref{cor:Limit-1}, we shall hereafter consider
the sheaves $W_m \Omega^r_X(nD)$ as $W_m {\mathcal O}_X$-submodules of
$j_* W_m\Omega^r_U$.
\begin{lem}\label{lem:FRV}
The $V,F$ and $R$ operators of $j_* W_m\Omega^r_U$ satisfy the following.
\begin{enumerate}
\item
$F(W_m \Omega^r_X(nD)) \subseteq W_{m-1} \Omega^r_X(pnD)$.
\item
$V(W_m \Omega^r_X(pnD)) \subseteq W_{m+1} \Omega^r_X(nD)$.
\item
$R(W_m \Omega^r_X(nD)) \subseteq W_{m-1} \Omega^r_X(nD)$.
\end{enumerate}
\end{lem}
\begin{proof}
We have $F([f^{-n}]\omega) = F(\omega)
F([f^{-n}]) \in W_{m-1} \Omega^r_X \cdot W_{m-1} {\mathcal O}_X(pnD) =
W_{m-1} \Omega^r_X(pnD)$, where the first inclusion holds by
Lemma~\ref{lem:F-V-Witt-0}.
We have $V([f^{-pn}]\omega) = V(F([f^{-n}]) \omega) = [f^{-n}] V(\omega)
\in W_{m+1} {\mathcal O}_X(nD) \cdot W_{m+1} \Omega^r_X = W_{m+1} \Omega^r_X(nD)$,
where the second inclusion holds again by Lemma~\ref{lem:F-V-Witt-0}.
The last assertion is clear.
\end{proof}
\begin{lem}\label{lem:mult-p}
The multiplication by $p$
map $p \colon W_{m+1}\Omega^r_X(nD) \to W_{m+1}\Omega^r_X(nD)$ has a
factorization
\[
W_{m+1}\Omega^r_X(nD) \stackrel{R}{\twoheadrightarrow} W_{m}\Omega^r_X(nD)
\stackrel{\underline{p}}{\hookrightarrow} W_{m+1}\Omega^r_X(nD)
\]
in the category of sheaves of $W_{m+1}{\mathcal O}_X$-modules. This factorization
is natural in $X$.
\end{lem}
\begin{proof}
Since $R$ is $W_{m+1}{\mathcal O}_X$-linear, we get an exact sequence
(see \S~\ref{sec:dRW})
\begin{equation}\label{eqn:mult-p-0}
0 \to (V^{m}W_1\Omega^r_X + dV^{m}W_1\Omega^{r-1}_X) \cdot W_{m+1}{\mathcal O}_X(nD) \to
W_{m+1}\Omega^r_X(nD) \xrightarrow{R} W_{m} \Omega^r_X(nD) \to 0.
\end{equation}
On the other hand, we note that
$p \colon W_{m+1}\Omega^r_X \to W_{m+1}\Omega^r_X$ is also a
$W_{m+1}{\mathcal O}_X$-linear homomorphism. Since $W_{m+1}{\mathcal O}_X(nD)$ is an
invertible $W_{m+1} {\mathcal O}_X$-module, we get an exact sequence
\begin{equation}\label{eqn:mult-p-1}
0 \to {{\rm Ker}(p)} \cdot W_{m+1}{\mathcal O}_X(nD) \to
W_{m+1}\Omega^r_X(nD) \xrightarrow{p} W_{m+1} \Omega^r_X(nD).
\end{equation}
Since ${\rm Ker}(p) = {\rm Ker}(R)$ as $X$ is regular (see \S~\ref{sec:dRW}), we get
${{\rm Ker}(p)} \cdot W_{m+1}{\mathcal O}_X(nD) =
(V^{m}W_1\Omega^r_X + dV^{m}W_1\Omega^{r-1}_X) \cdot W_{m+1}{\mathcal O}_X(nD)$.
The first part of the lemma now follows. The naturality is clear from the
proof.
\end{proof}
\begin{lem}\label{lem:Cartesian-X}
The square
\begin{equation}\label{eqn:Cartesian-X-0}
\xymatrix@C.8pc{
W_m\Omega^r_X \ar[r]^-{\underline{p}} \ar[d]_-{j^*} & W_{m+1}\Omega^r_X \ar[d]^-{j^*} \\
j_* W_m\Omega^r_U \ar[r]^-{\underline{p}} & j_* W_{m+1}\Omega^r_U}
\end{equation}
is Cartesian.
\end{lem}
\begin{proof}
We can check this locally. So let $x \in j_* W_m\Omega^r_U$ be such that
$\underline{p}(x) \in W_{m+1}\Omega^r_X$. Since $j$ is affine and
$W_m\Omega^r_X$ is an $W_m{\mathcal O}_X$-module, it follows that
$j_* W_{m+1}\Omega^r_U \xrightarrow{j_* R} j_* W_m\Omega^r_U$ is surjective.
We can therefore find $\tilde{x} \in j_* W_{m+1}\Omega^r_U$ such that
$R(\tilde{x}) = x$. In particular, $p \tilde{x} = \underline{p}(x) \in
W_{m+1}\Omega^r_X$.
We thus get $VF(\tilde{x}) = p \tilde{x} \in W_{m+1}\Omega^r_X$. Since
\[
(j_* V W_m \Omega^r_U) \cap W_{m+1} \Omega^r_X =
{\rm Ker}(F^{m} \colon W_{m+1} \Omega^r_X \to j_* \Omega^r_U) =
{\rm Ker}(F^{m} \colon W_{m+1} \Omega^r_X \to \Omega^r_X)
\]
\[
= VW_m \Omega^r_X,
\]
it follows that there exists $y' \in W_m \Omega^r_X$ such that
$VF(\tilde{x}) = Vy'$.
Since ${\rm Ker}(V \colon j_*W_m \Omega^r_U \to j_* W_{m+1} \Omega^r_U) =
j_* FdV^{m} \Omega^{r-1}_U $
(see \cite[I.3.21.1.4]{Illusie}), it follows that there exists
$z' \in j_* \Omega^{r-1}_U$ such that
$FdV^m(z') = F(\tilde{x}) - y'$. Equivalently, $F(\tilde{x} - dV^m(z')) = y'
\in W_m \Omega^r_X$.
Since
\[
j_* FW_{m+1} \Omega^r_U \cap W_m \Omega^r_X =
{\rm Ker}(F^{m-1}d \colon W_m \Omega^r_X \to j_* \Omega^{r+1}_U) =
{\rm Ker}(F^{m-1}d \colon W_m \Omega^r_X \to \Omega^{r+1}_X)
\]
\[
= F W_{m+1} \Omega^r_X,
\]
we can find $y'' \in W_{m+1} \Omega^r_X$ such that $F(\tilde{x} - dV^m(z'))
= F(y'')$.
Since ${\rm Ker}(F \colon j_* W_{m+1} \Omega^r_{U} \to j_* W_m \Omega^r_U)
= j_* V^m \Omega^r_U$ (see \cite[I.3.21.1.2]{Illusie}),
we can find $z'' \in j_* \Omega^r_U$ such that
$V^m(z'') = \tilde{x} - dV^m(z') - y''$. Equivalently,
$\tilde{x} - y'' = V^m(z'') + dV^m(z')$.
On the other hand, we have
\[
V^m j_* \Omega^r_U + dV^m j_* \Omega^{r-1}_U =
{\rm Ker}(R \colon j_* W_{m+1} \Omega^r_U \to j_* W_m \Omega^r_U).
\]
We thus get $x = R(\tilde{x}) = R(y'')$. Since $y'' \in W_{m+1} \Omega^r_X$,
we get $x \in W_{m} \Omega^r_X$. This proves the lemma.
\end{proof}
It is easy to see that for $n \in {\mathbb Z}$, the differential
$d \colon j_* W_{m}\Omega^r_X \to j_* W_{m}\Omega^{r+1}_X$ restricts to a
homomorphism $d \colon W_{m}\Omega^r_X(nD) \to W_{m}\Omega^{r+1}_X((n+1)D)$
such that the composite map
$d^2 \colon W_{m}\Omega^r_X(nD) \to W_{m}\Omega^{r+2}_X((n+2)D)$ is zero
by Corollary~\ref{cor:Limit-1}.
The map $d \colon W_{m}\Omega^r_X(p^mnD) \to W_{m}\Omega^{r+1}_X((p^mn+1)D)$
actually factors through $d \colon
W_{m}\Omega^r_X(p^mnD) \to W_{m}\Omega^{r+1}_X(p^mnD)$ as one easily checks.
In particular, $W_m\Omega^\bullet_X(p^mnD)$ is a complex for every $m\ge 1$ and
$n \in {\mathbb Z}$.
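For completeness, here is the easy check alluded to above (with notation as before).
Locally, a section of $W_{m}\Omega^r_X(p^mnD)$ has the form $[f^{-p^mn}]\,\omega$ with
$\omega \in W_m\Omega^r_X$ and $f$ a local equation of $D$ (say $n \ge 1$; the case
$n \le 0$ is identical with $[f^{-1}]$ replaced by $[f]$). The Leibniz rule then gives
\[
d([f^{-p^mn}]\,\omega) = [f^{-p^mn}]\, d\omega +
p^mn\, [f^{-1}]^{p^mn - 1}\, d[f^{-1}] \wedge \omega = [f^{-p^mn}]\, d\omega,
\]
because $p^m$ annihilates $W_m\Omega^{r+1}_X$.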
Let
\[
Z_1 W_m\Omega^r_X(nD) = (j_* Z_1 W_m\Omega^r_U) \cap W_m\Omega^r_X(nD)
= j_*{\rm Ker}(F^{m-1}d) \cap W_m\Omega^r_X(nD)
\]
\[
= {\rm Ker}(F^{m-1}d \colon W_m\Omega^r_X(nD) \to j_* \Omega^{r+1}_U) =
{\rm Ker}(F^{m-1}d \colon W_m\Omega^r_X(nD) \to \Omega^{r+1}_X(p^{m-1}(n+1)D)),
\]
where the third equality is by the left exactness of $j_*$ and
the last equality by Lemma~\ref{lem:FRV}.
If $m =1$, we get
\begin{equation}\label{eqn:Loc-free-0}
Z_1W_1\Omega^r_X(nD) = Z\Omega^r_X(nD) = {\rm Ker}(d \colon
\Omega^r_X(nD) \to \Omega^{r+1}_X((n+1)D)).
\end{equation}
We let $B\Omega^r_X(nD) = {\rm Image}(d \colon \Omega^{r-1}_X((n-1)D) \to
\Omega^r_X(nD))$.
\begin{prop}\label{prop:Cartier-map}
There exists a homomorphism $C \colon Z_1 W_m\Omega^r_X(pnD)
\to W_m\Omega^r_X(nD)$ such that the diagram
\begin{equation}\label{eqn:Cartier-map-0}
\xymatrix@C.8pc{
Z_1 W_m\Omega^r_X(pnD) \ar[rr]^-{V} \ar[dr]_-{C} & & W_{m+1}\Omega^r_X(nD) \\
& W_m\Omega^r_X(nD) \ar[ur]_-{\underline{p}}}
\end{equation}
is commutative. The map $C$ induces an isomorphism of ${\mathcal O}_X$-modules
\[
C \colon {\mathcal H}^r(\psi_* W_1\Omega^\bullet_X(pnD)) \xrightarrow{\cong}
W_1\Omega^r_X(nD).
\]
\end{prop}
\begin{proof}
We consider the diagram
\begin{equation}\label{eqn:Cartier-map-10}
\xymatrix@C.8pc{
Z_1 W_m\Omega^r_X(pnD) \ar@{.>}[r] \ar[d] \ar@/^1cm/[rr]^-{V} & W_m\Omega^r_X(nD)
\ar[r]^-{\underline{p}} \ar[d] & W_{m+1}\Omega^r_X(nD) \ar[d] \\
j_* Z_1 W_m\Omega^r_U \ar[r]^-{C} \ar@/_1cm/[rr]^-V
& j_* W_m\Omega^r_U \ar[r]^-{\underline{p}} & j_* W_{m+1}\Omega^r_U,}
\end{equation}
where the vertical arrows are inclusions from Corollary~\ref{cor:Limit-1}.
The right square exists and commutes by Lemma~\ref{lem:mult-p}.
The big outer square clearly commutes. It suffices therefore to show
that the right square is Cartesian.
We have a commutative diagram of $W_{m+1}{\mathcal O}_X$-modules
\begin{equation}\label{eqn:Cartier-map-1}
\xymatrix@C.8pc{
W_{m+1}\Omega^r_X(nD) \ar[r]^{p} \ar[d] & W_{m+1}\Omega^r_X(nD) \ar[r] \ar[d]
& {W_{m+1}\Omega^r_X(nD)}/p \ar[r] \ar[d] & 0 \\
j_* W_{m+1}\Omega^r_U \ar[r]^{p} \ar[r] & j_* W_{m+1}\Omega^r_U \ar[r]
& j_* {W_{m+1}\Omega^r_U}/p \ar[r] & 0.}
\end{equation}
The top sequence is clearly exact, and the bottom sequence is exact
because all the sheaves involved are quasi-coherent $W_{m+1}{\mathcal O}_X$-modules and
$j$ is an affine morphism, so that $j_*$ is exact on them.
By Lemma~\ref{lem:mult-p}, it suffices to show that
the right vertical arrow is injective.
To prove this injectivity, we note that the top row of
~\eqref{eqn:Cartier-map-1} is same as the sequence
\[
W_{m+1}\Omega^r_X \otimes_{{\mathcal O}} {\mathcal O}(nD)
\xrightarrow{p \otimes 1}
W_{m+1}\Omega^r_X \otimes_{{\mathcal O}} {\mathcal O}(nD)
\to {W_{m+1}\Omega^r_X}/p \otimes_{{\mathcal O}} {\mathcal O}(nD) \to 0,
\]
where ${\mathcal O} = W_{m+1}{\mathcal O}_X$. Similarly, we have
\[
j_* {W_{m+1}\Omega^r_U}/p \otimes_{{\mathcal O}} {\mathcal O}(nD) \cong
j_*({W_{m+1}\Omega^r_U}/p \otimes_{W_{m+1}{\mathcal O}_U} j^* {\mathcal O}(nD))
\cong
\]
\[
j_*({W_{m+1}\Omega^r_U}/p \otimes_{W_{m+1}{\mathcal O}_U} W_{m+1}{\mathcal O}_U)
\cong j_* {W_{m+1}\Omega^r_U}/p,
\]
where the first isomorphism follows from the projection formula for
${\mathcal O}$-modules using the fact that ${\mathcal O}(nD)$ is an invertible ${\mathcal O}$-module
(see \cite[Exercise~II.5.1]{Hartshorne-AG}).
Moreover, it is clear that the right vertical arrow in
~\eqref{eqn:Cartier-map-1} is
the map
\[
{W_{m+1}\Omega^r_X}/p \otimes_{{\mathcal O}} {\mathcal O}(nD)
\xrightarrow{j^* \otimes 1} (j_* {W_{m+1}\Omega^r_U}/p) \otimes_{{\mathcal O}} {\mathcal O}(nD).
\]
Since ${\mathcal O}(nD)$ is an invertible ${\mathcal O}$-module, it suffices to show that
the map $j^* \colon {W_{m+1}\Omega^r_X}/p \to j_* {W_{m+1}\Omega^r_U}/p$
is injective. But this follows from Lemma~\ref{lem:Cartesian-X} since $j^*$
in ~\eqref{eqn:Cartesian-X-0} is injective. This proves the first part of
the proposition.
We now prove the second part for which we can assume $m =1$.
We know classically that the map $C$ on $Z\Omega^r_X$ induces an
${\mathcal O}_X$-linear isomorphism $C \colon {\mathcal H}^r(\psi_* \Omega^\bullet_X)
\xrightarrow{\cong} \Omega^r_X$.
Taking its inverse, we get an ${\mathcal O}_X$-linear isomorphism
\[
C^{-1} \colon \Omega^r_X \xrightarrow{\cong} {\mathcal H}^r(\psi_* \Omega^\bullet_X).
\]
Since $\psi_* \Omega^\bullet_X \in D^{+}({\rm Coh}_X)$, we see that
${\mathcal H}^r(\psi_* \Omega^\bullet_X)$ is a coherent ${\mathcal O}_X$-module.
In particular, we get an isomorphism
\[
C^{-1} \colon \Omega^r_X(nD) \xrightarrow{\cong}
{\mathcal H}^r(\psi_* \Omega^\bullet_X)(nD).
\]
On the other hand, we have
\[
{\mathcal H}^r(\psi_* \Omega^\bullet_X)(nD) \cong {\mathcal H}^r((\psi_* \Omega^\bullet_X)(nD))
\cong {\mathcal H}^r(\psi_*(\Omega^\bullet_X \otimes_{{\mathcal O}_X} \psi^*({\mathcal O}(nD))))
\cong {\mathcal H}^r(\psi_*(\Omega^\bullet_X (pnD))).
\]
This proves the second part.
\end{proof}
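For the reader's convenience, we recall the classical local description of the inverse
Cartier operator used in the second part of the above proof (see, e.g., \cite{Illusie}):
for local sections $a, b_1, \ldots , b_r$ of ${\mathcal O}_X$, one has
\[
C^{-1}(a\, db_1 \wedge \cdots \wedge db_r) =
\left[a^p\, b^{p-1}_1 \cdots b^{p-1}_r\, db_1 \wedge \cdots \wedge db_r\right]
\ \in \ {\mathcal H}^r(\psi_* \Omega^\bullet_X).
\]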
\begin{lem}\label{lem:Z_1-mod}
We have the following.
\begin{enumerate}
\item
${\rm Ker}(F^{m-1} \colon W_{m} \Omega^r_X(nD) \to \Omega^r_X(p^{m-1}nD)) =
V W_{m-1} \Omega^r_X(pnD)$.
\item
$Z_1 W_m\Omega^r_X(pnD) = F W_{m+1} \Omega^r_X(nD)$.
\end{enumerate}
\end{lem}
\begin{proof}
We first prove (1). It is clear that the right hand side is contained in the
left hand side. We prove the other inclusion.
It suffices to show that
\[
(j_* V W_{m-1} \Omega^r_U) \cap W_{m} \Omega^r_X(nD) \subset
V W_{m-1} \Omega^r_X(pnD).
\]
We can check this locally. So let $D$ be defined by $f \in {\mathcal O}_X$ and
let $y = Vx = [f^{-n}]\omega$,
where $x \in j_* W_{m-1} \Omega^r_U$ and $\omega \in W_{m} \Omega^r_X$.
This yields $\omega = [f^n] Vx = V(F([f^n]) x) = V([f^{pn}]x)$.
This implies that $\omega \in W_{m} \Omega^r_X \cap j_* V W_{m-1} \Omega^r_U$.
On the other hand, we have
\[
W_{m} \Omega^r_X \cap j_* V W_{m-1} \Omega^r_U =
{\rm Ker}(F^{m-1} \colon W_{m} \Omega^r_X \to j_*\Omega^r_U) =
{\rm Ker}(F^{m-1} \colon W_{m} \Omega^r_X \to \Omega^r_X)
\]
\[
= V W_{m-1}\Omega^r_X.
\]
We thus get $\omega \in V W_{m-1}\Omega^r_X$. Let $y' \in W_{m-1}\Omega^r_X$
be such that $\omega = Vy'$. This yields
$y = [f^{-n}]\omega = [f^{-n}]Vy' = V(F([f^{-n}])y') = V([f^{-pn}]y')
\in VW_{m-1}\Omega^r_X(pnD)$. This proves (1).
We now prove (2). Since
$FW_{m+1} \Omega^r_X(nD) \subset j_* F W_{m+1} \Omega^r_U \cap
W_m\Omega^r_X(pnD)$ by Lemma~\ref{lem:FRV}, it follows that
$FW_{m+1} \Omega^r_X(nD) \subset Z_1 W_m \Omega^r_X(pnD)$.
We show the other inclusion.
We let $z \in Z_1 W_m \Omega^r_X(pnD)$ so that
$z = Fx = [f^{-pn}]\omega$ for some $x \in j_* W_{m+1}\Omega^r_U$ and
$\omega \in W_m\Omega^r_X$.
We can then write $\omega = [f^{pn}]Fx = F([f^n]x)$.
This implies that $\omega \in W_m\Omega^r_X \cap j_* F W_{m+1} \Omega^r_U$.
On the other hand, we have
\[
W_m\Omega^r_X \cap j_* F W_{m+1} \Omega^r_U =
{\rm Ker}(F^{m-1} d \colon W_m\Omega^r_X \to j_* \Omega^{r+1}_U)
= {\rm Ker}(F^{m-1} d \colon W_m\Omega^r_X \to \Omega^{r+1}_X)
\]
\[
= F W_{m+1} \Omega^{r}_X.
\]
We can thus write $\omega = Fx'$ for some $x' \in W_{m+1} \Omega^{r}_X$.
This gives $z = [f^{-pn}]\omega = [f^{-pn}]Fx' = F([f^{-n}]) Fx' =
F([f^{-n}]x') \in F W_{m+1}\Omega^r_X(nD)$. This proves (2).
\end{proof}
\section{The pairing of cohomology groups}\label{sec:DT}
We fix a finite field $k$ of characteristic $p$
and an integral smooth projective scheme $X$
of dimension $d \ge 1$ over $k$. Let $D \subset X$ be an effective Cartier
divisor with complement $U$. Let $\iota \colon D \hookrightarrow X$ and
$j \colon U \hookrightarrow X$ be the inclusions.
In this section, we shall establish the pairing for
our duality theorem for the $p$-adic {\'e}tale
cohomology of $U$. We fix integers $m, n \ge 1$ and $r \ge 0$.
We shall consider the {\'e}tale topology throughout our discussion of
duality.
\subsection{The complexes}\label{sec:Complexes}
We consider the complex of {\'e}tale sheaves
\begin{equation}\label{eqn:F-com}
W_m{\mathcal F}^{r, \bullet}_{n} = [Z_1W_m\Omega^r_{X}(nD) \xrightarrow{1 - C}
W_m\Omega^r_{X}(nD)].
\end{equation}
Here, the map $C$ in the differential $1 - C$ is the composition
\[
Z_1W_m\Omega^r_{X}(nD) \hookrightarrow Z_1W_m\Omega^r_{X}(pnD) \xrightarrow{C}
W_m\Omega^r_{X}(nD),
\]
where the last map is defined by virtue of Proposition~\ref{prop:Cartier-map}.
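For orientation (this special case plays no role in what follows), let $m =1$, $r = 0$
and $n = 0$. Then $Z_1W_1\Omega^0_X = {\rm Ker}(d \colon {\mathcal O}_X \to \Omega^1_X) = {\mathcal O}^p_X$
and $C(a^p) = a$, so that ~\eqref{eqn:F-com} becomes
\[
W_1{\mathcal F}^{0, \bullet}_{0} = [{\mathcal O}^p_X \xrightarrow{1-C} {\mathcal O}_X], \qquad
a^p \mapsto a^p - a,
\]
which, by the Artin-Schreier theory, is quasi-isomorphic in the {\'e}tale topology to the
constant sheaf ${{\mathbb Z}}/p = W_1\Omega^0_{X, {\operatorname{log}}}$.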
We now consider the map $F \colon W_{m+1}\Omega^r_X(-nD) \to W_m\Omega^r_X(-pnD)$
whose existence is shown in Lemma~\ref{lem:FRV}.
We have shown in ~\eqref{eqn:dlog-2-*} that
$F({\rm Ker}(R)) = F((V^mW_1\Omega^r_X + dV^mW_1\Omega^{r-1}_X) \cap
W_{m+1}\Omega^r_X(-nD)) \subseteq dV^{m-1}\Omega^{r-1}_X \cap
W_m\Omega^r_X(-pnD)$. It follows that
$F$ induces a map
$\overline{F} \colon W_{m}\Omega^r_X(-nD) \to
{W_m\Omega^r_X(-pnD)}/{(dV^{m-1}\Omega^{r-1}_X \cap
W_m\Omega^r_X(-pnD))}$.
We denote the composition
\[
W_{m}\Omega^r_X(-nD) \to
\frac{W_m\Omega^r_X(-pnD)}{dV^{m-1}\Omega^{r-1}_X \cap
W_m\Omega^r_X(-pnD)} \hookrightarrow \frac{W_m\Omega^r_X(-nD)}{dV^{m-1}\Omega^{r-1}_X \cap
W_m\Omega^r_X(-nD)}
\]
also by $\overline{F}$ and consider the complex of {\'e}tale sheaves
\begin{equation}\label{eqn:G-com}
W_m{\mathcal G}^{r, \bullet}_{n} = \left[W_{m}\Omega^r_X(-nD) \xrightarrow{\pi - \overline{F}}
\frac{W_m\Omega^r_X(-nD)}{dV^{m-1}\Omega^{r-1}_X \cap
W_m\Omega^r_X(-nD)}\right],
\end{equation}
where $\pi$ is the quotient map.
We let
\begin{equation}\label{eqn:H-com}
W_m{\mathcal H}^{d,\bullet} = [W_m\Omega^d_X \xrightarrow{1 -C} W_m\Omega^d_X],
\end{equation}
where the map $C$ (see Proposition~\ref{prop:Cartier-map} for $n = 0$) is defined
because $Z_1W_m\Omega^d_X = W_m\Omega^d_X$.
\subsection{The pairing of complexes}\label{sec:pair}
We consider the pairing
\begin{equation}\label{eqn:Pair-0}
\langle, \rangle_1 \colon Z_1W_m\Omega^r_{X}(nD) \times W_{m}\Omega^{d-r}_X(-nD)
\xrightarrow{\wedge} W_m\Omega^d_X
\end{equation}
by letting $\langle w_1, w_2\rangle_1 = w_1 \wedge w_2$.
This is well defined by the definition of the twisted de Rham-Witt sheaves.
We define a pairing
$\langle, \rangle_2 \colon Z_1W_m\Omega^r_{X}(nD) \times W_{m}\Omega^{d-r}_X(-nD)
\to W_m\Omega^d_X$ by $\langle w_1, w_2\rangle_2 = - C(w_1 \wedge w_2)$.
We claim that $C(w_1 \wedge w_2) = 0$ if
$w_2 \in dV^{m-1}\Omega^{d-r-1}_X \cap W_m\Omega^{d-r}_X(-nD)$.
To prove it, we write $w_1 = F( w'_1)$ for some $w'_1 \in
j_* W_{m+1} \Omega^r_U$.
This gives
$V(w_1 \wedge j^* w_2) = V(F(w'_1) \wedge j^* w_2) = w'_1 \wedge j^* V w_2 = 0$, where
the last equality holds because $V d V^{m-1} \Omega_X^{d-r-1} \subset
p d V^{m} \Omega_X^{d-r-1}$ and $p \Omega_X^{d-r-1} = 0$.
In particular,
$j^*(w_1 \wedge w_2) \in {\rm Ker} (V: W_m \Omega_U^{d} \to W_{m+1} \Omega_U^d) =
Fd V^m \Omega_U^{d-1} = d V^{m-1} \Omega_U^{d-1} = {\rm Ker}(C_U)$,
where $C_U$ is the Cartier map for $U$
(see \cite[Chapitre~III]{I-R-83} for the last equality).
It follows that $j^* \circ C(w_1 \wedge w_2) =
C_U \circ j^*(w_1 \wedge w_2) = 0$.
Since $W_m \Omega^d_X \hookrightarrow j_* W_m \Omega^d_U$, the claim follows.
Using the claim, we get a pairing
\begin{equation}\label{eqn:Pair-1}
\langle, \rangle_2 \colon Z_1W_m\Omega^r_{X}(nD) \times
\frac{W_m\Omega^{d-r}_X(-nD)}{dV^{m-1}\Omega^{d-r-1}_X \cap
W_m\Omega^{d-r}_X(-nD)} \to W_m\Omega^d_X.
\end{equation}
We define our third pairing of {\'e}tale sheaves
\begin{equation}\label{eqn:Pair-2}
\langle, \rangle_3 \colon W_m\Omega^r_{X}(nD) \times W_m\Omega^{d-r}_X(-nD) \to
W_m\Omega^d_X
\end{equation}
by $\langle w_1, w_2\rangle_3 = w_1 \wedge w_2$.
\begin{lem}\label{lem:Pairing-main-sheaf}
The above pairings of {\'e}tale sheaves give rise to a pairing of
two-term complexes of sheaves
\begin{equation}\label{eqn:Pair-3}
\langle, \rangle \colon W_m{\mathcal F}^{r, \bullet}_{n} \times W_m{\mathcal G}^{d-r, \bullet}_{n} \to
W_m{\mathcal H}^{d, \bullet}.
\end{equation}
\end{lem}
\begin{proof}
By \cite[\S~1, p.175]{Milne-Duality}, we only have to show that
\begin{equation}\label{eqn:Pair-3-0}
(1-C)(w_1 \wedge w_2) = (1-C)(w_1) \wedge w_2 - C(w_1 \wedge (\pi - \overline{F})w_2)
\end{equation}
for all $w_1 \in Z_1W_m\Omega^r_{X}(nD)$
and $w_2 \in W_m\Omega^{d-r}_{X}(-nD)$.
But this follows from the equalities
\[
\begin{array}{lll}
(1-C)(w_1 \wedge w_2) & = & w_1 \wedge w_2 - C(w_1 \wedge w_2) \\
& = & w_1 \wedge w_2 - C(w_1) \wedge C(w_2) \\
& = & w_1 \wedge w_2 - C(w_1) \wedge w_2 + C(w_1) \wedge w_2 -
C(w_1) \wedge C(w_2) \\
& = & (1-C)(w_1) \wedge w_2 - C(w_1) \wedge (C-1)(w_2) \\
& {=}^\dagger & (1-C)(w_1) \wedge w_2 - C(w_1 \wedge (\pi - \overline{F})(w_2)),
\end{array}
\]
where ${=}^\dagger$ holds because $C \circ \overline{F} = {\rm id}$
(see \cite[Proposition~1.1.4]{Zhau}).
\end{proof}
\subsection{The pairing of {\'e}tale cohomology}\label{sec:Coh-pair}
Using Lemma~\ref{lem:Pairing-main-sheaf}, we get a pairing of hypercohomology
groups
\begin{equation}\label{eqn:Pair-4-0}
{\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n}) \times
{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n}) \to
{\mathbb H}^{d+1}_{\text{\'et}}(X, W_m{\mathcal H}^{d, \bullet}).
\end{equation}
By \cite[Proposition~1.1.7]{Zhau} and \cite[Corollary~1.12]{Milne-Zeta},
there is a quasi-isomorphism
$W_m\Omega^d_{X, {\operatorname{log}}} \xrightarrow{\cong} W_m{\mathcal H}^{d, \bullet}$, and a
bijective trace homomorphism
${\mathbb T}r \colon H^{d+1}_{\text{\'et}}(X, W_m\Omega^d_{X, {\operatorname{log}}}) \xrightarrow{\cong}
{{\mathbb Z}}/{p^m}$. We thus get a pairing of ${{\mathbb Z}}/{p^m}$-modules
\begin{equation}\label{eqn:Pair-4-1}
{\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n}) \times
{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n}) \to {{\mathbb Z}}/{p^m}.
\end{equation}
Since this pairing is compatible with the change in values of $m$ and $n$, we
get a pairing of ind-abelian (in first coordinate) and pro-abelian
(in second coordinate) groups
\begin{equation}\label{eqn:Pair-4-2}
\{{\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})\}_n \times
\{{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n})\}_n \to {{\mathbb Z}}/{p^m}.
\end{equation}
It follows from Corollary~\ref{cor:Limit-1} that
${\varinjlim}_n \ W_m{\mathcal F}^{r, \bullet}_{n}
\xrightarrow{\cong} j_*([Z_1W_m\Omega^r_{U} \xrightarrow{1 - C}
W_m\Omega^r_{U}])$.
Since $j$ is affine, we get
\begin{equation}\label{eqn:Pair-4-3}
{\varinjlim}_n \ W_m{\mathcal F}^{r, \bullet}_{n}
\xrightarrow{\cong}
{\bf R}j_*([Z_1W_m\Omega^r_{U} \xrightarrow{1 - C}
W_m\Omega^r_{U}]) \cong {\bf R}j_*(W_m\Omega^r_{U, {\operatorname{log}}}).
\end{equation}
In order to understand the pro-complex $\{W_m{\mathcal G}^{r, \bullet}_{n}\}_n$,
we let
\[
W_m\widetilde{{\mathcal G}}^{r, \bullet}_{n} =
\left[W_m\Omega^r_{(X,nD)} \xrightarrow{\pi - \overline{F}}
\frac{W_m\Omega^r_{(X,nD)}}{dV^{m-1}\Omega^{r-1}_X \cap W_m\Omega^r_{(X,nD)}}\right].
\]
This complex is defined by the exact sequence
~\eqref{eqn:dlog-2}.
It follows then from Corollary~\ref{cor:pro-iso-rel*} that
the canonical inclusion
$\{W_m{\mathcal G}^{r, \bullet}_{n}\}_n \hookrightarrow \{W_m\widetilde{{\mathcal G}}^{r, \bullet}_{n}\}_n$
is an isomorphism of pro-complexes of sheaves.
Using the exact sequence ~\eqref{eqn:dlog-2} and Lemma~\ref{lem:Rel-dlog-iso},
we get that there is a canonical isomorphism of pro-complexes
\begin{equation}\label{eqn:Pair-4-4}
{\rm dlog} \colon \{\overline{{\widehat{{\mathcal K}}^M_{r, (X, nD)}}/{p^m}}\}_n \xrightarrow{\cong}
\{W_m\Omega^r_{(X,nD), {\operatorname{log}}}\}_n \xrightarrow{\cong} \{W_m{\mathcal G}^{r, \bullet}_{n}\}_n.
\end{equation}
We conclude that ~\eqref{eqn:Pair-4-2} is equivalent to the pairing
of ind-abelian (in first coordinate) and pro-abelian
(in second coordinate) groups
\begin{equation}\label{eqn:Pair-4-5}
\{{\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})\}_n \times
\{H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})\}_n \to {{\mathbb Z}}/{p^m}.
\end{equation}
\subsection{Continuity of the pairing}\label{sec:Cont}
Recall from \S~\ref{sec:Exact}
that for a ${{\mathbb Z}}/{p^m}$-module $V$, one has $V^\star =
{\rm Hom}_{{{\mathbb Z}}/{p^m}}(V, {{\mathbb Z}}/{p^m})$.
For any profinite or discrete torsion abelian group
$A$, let $A^\vee = {\rm Hom}_{{\rm cont}}(A, {{\mathbb Q}}/{{\mathbb Z}})$ denote the Pontryagin dual of
$A$ with the compact-open topology (see \cite[\S~7.4]{Gupta-Krishna-REC}).
If $A$ is discrete and $p^m$-torsion, then we have
$A^\vee = A^\star$.
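To fix ideas, we note the elementary examples
\[
({{\mathbb Z}}/{p^m})^\vee = ({{\mathbb Z}}/{p^m})^\star \cong {{\mathbb Z}}/{p^m}, \qquad
{{\mathbb Z}_p}^\vee \cong {{\mathbb Q}_p}/{{\mathbb Z}_p}, \qquad
({{\mathbb Q}_p}/{{\mathbb Z}_p})^\vee \cong {\mathbb Z}_p,
\]
where ${\mathbb Z}_p$ is profinite and ${{\mathbb Q}_p}/{{\mathbb Z}_p}$ is discrete torsion.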
We shall use the following topological results.
\begin{lem}\label{lem:lim-colim}
Let $\{A_n\}_n$ be a direct system of discrete torsion
topological abelian groups
whose limit is $A$ with the direct limit topology. Then
the canonical map $\lambda \colon A^\vee \to {\varprojlim}_n A^\vee_n$
is an isomorphism of profinite groups.
\end{lem}
\begin{proof}
The fact that $\lambda$ is an isomorphism of abelian groups is
well known and elementary.
We show that $\lambda$ is a homeomorphism. Since its source and target
are compact Hausdorff, it suffices to show that it is continuous.
We let $B_n = {\rm Image}(A_n \to A)$.
Then we have a surjection of ind-groups $\{A_n\}_n \twoheadrightarrow \{B_n\}_n$
whose colimits both coincide with $A$.
We thus get maps $A^\vee \xrightarrow{\lambda'} {\varprojlim}_n B^\vee_n
\xrightarrow{\lambda''} {\varprojlim}_n A^\vee_n$ such that
$\lambda = \lambda'' \circ \lambda'$.
Since $\lambda''$ is clearly continuous, we have to only show that
$\lambda'$ is continuous. We can therefore assume that
each $A_n$ is a subgroup of $A$. The lemma now
follows from \cite[Lemma~2.9.3 and Theorem~2.9.6]{Pro-fin}.
\end{proof}
\begin{lem}\label{lem:Top}
Let $\{A_n\}_n$ be an inverse system of discrete topological
abelian groups. Let $A$ be the limit of $\{A_n\}_n$ with the inverse
limit topology. Then any $f \in A^\vee$ factors through the
projection $\lambda_n \colon A \to A_n$ for some $n \ge 1$.
In particular, the canonical map
${\varinjlim}_n A^\vee_n \xrightarrow{\eta} A^\vee$ is continuous and surjective.
This map is an isomorphism if the transition maps of $\{A_n\}_n$
are surjective.
\end{lem}
\begin{proof}
Since $f \colon A \to {{\mathbb Q}}/{{\mathbb Z}}$ is continuous and its target is discrete,
it follows that ${\rm Ker}(f)$ is open. By the definition of the inverse limit
topology, the latter contains ${\rm Ker}(\lambda_n)$ for some $n \ge 1$.
Letting $A'_n = {A}/{{\rm Ker}(\lambda_n)}$, this implies that $f$ factors through
$f'_n \colon A'_n \to {{\mathbb Q}}/{{\mathbb Z}}$. Since $A_n$ is discrete, the map
$(A_n)^\vee \to (A'_n)^\vee$ is surjective by
\cite[Lemma~7.10]{Gupta-Krishna-REC}. Choosing a lift $f_n$ of $f'_n$
under this surjection, we see that $f$ factors through
$A \xrightarrow{\lambda_n} A_n \xrightarrow{f_n} {{\mathbb Q}}/{{\mathbb Z}}$.
The continuity of $\eta$ is equivalent to the assertion that
the map $A^\vee_n \to A^\vee$ is continuous for each $n$.
But this is well known since $\lambda_n$ is continuous.
The remaining parts of the lemma are now obvious.
\end{proof}
\begin{remk}\label{remk:Top-0}
The reader should note that the map ${\varinjlim}_n A^\vee_n \to A^\vee$
may not in general be injective even if each $A_n$ is finite.
\end{remk}
\vskip .3cm
For $n \ge 1$, we endow each of ${\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})$ and
$H^{i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})$ with the discrete topology.
We endow ${\varprojlim}_n H^{i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})$
with the inverse limit topology and
${\varinjlim}_n {\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n}) \cong
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}})$ with the direct limit topology.
Note that the latter topology is discrete.
If we let $x \in {\varinjlim}_n {\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})$,
then we can find some $n \gg 0$ such that
$x = f_n(x')$ for some $x' \in {\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})$,
where $f_n \colon {\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n}) \to
{\varinjlim}_n {\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})$ is the canonical map
to the limit.
This gives a map
\begin{equation}\label{eqn:Pair-4-6}
\langle x, \cdot\rangle \colon {\varprojlim}_n
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}) \to {{\mathbb Z}}/{p^m}
\end{equation}
which sends $y$ to $\langle x', \pi(y)\rangle$ under the pairing ~\eqref{eqn:Pair-4-1},
where $\pi$ is the composite map
\[
{\varprojlim}_n H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})
\cong {\varprojlim}_n {\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n})
\xrightarrow{g_n} {\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n}).
\]
One checks that ~\eqref{eqn:Pair-4-6} is well defined.
Since $H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}})^\vee$ is profinite,
this shows that the map (see Lemma~\ref{lem:lim-colim})
\begin{equation}\label{eqn:Pair-5-0}
\theta_m \colon {\varprojlim}_n
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}) \to
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}})^\vee
\end{equation}
is continuous.
Since $H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}})$ is discrete, the map
\begin{equation}\label{eqn:Pair-5-1}
\vartheta_m \colon H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \to
({\varprojlim}_n H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}))^\vee
\end{equation}
is clearly continuous. We have thus shown that after taking the limits,
~\eqref{eqn:Pair-4-5} gives rise to a
continuous pairing of topological abelian groups
\[
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \times {\varprojlim}_n \
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}) \to {{\mathbb Q}}/{{\mathbb Z}}.
\]
Since each $H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})$ is $p^m$-torsion
and discrete, we get
the following.
\begin{prop}\label{prop:Cont-pair}
There is a continuous pairing of topological abelian groups
\begin{equation}\label{eqn:Pair-4-7}
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \times {\varprojlim}_n \
H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}) \to {{\mathbb Z}}/{p^m}.
\end{equation}
\end{prop}
Equivalently, there is a continuous pairing of topological abelian groups
\begin{equation}\label{eqn:Pair-4-8}
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \times {\varprojlim}_n \
H^{d+1-i}_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d-r, (X, nD)}}/{p^m}}) \to {{\mathbb Z}}/{p^m}.
\end{equation}
These pairings are compatible with respect to the canonical surjection
${{\mathbb Z}}/{p^m} \twoheadrightarrow {{\mathbb Z}}/{p^{m-1}}$ and the inclusion
${{\mathbb Z}}/{p^{m-1}} \stackrel{p}{\hookrightarrow} {{\mathbb Z}}/{p^m}$.
It follows from ~\eqref{eqn:Pair-4-5} that the map $\vartheta_m$
in ~\eqref{eqn:Pair-5-1} has a factorization
\begin{equation}\label{eqn:Pair-5-2}
H^i_{\text{\'et}}(U, W_m\Omega^r_{U, {\operatorname{log}}}) \xrightarrow{\theta'_m}
{\varinjlim}_n H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})^\vee
\stackrel{\vartheta'_m}{\twoheadrightarrow} \hspace*{3cm}
\end{equation}
\[
\hspace*{8cm}
({\varprojlim}_n H^{d+1-i}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}}))^\vee,
\]
where $\vartheta'_m$ is the canonical map. This map is continuous and surjective
by Lemma~\ref{lem:Top}. The map $\theta'_m$ is continuous since
its source is discrete.
Our goal in the next section will be to show that the maps
$\theta_m$ and $\theta'_m$ are isomorphisms.
\section{Perfectness of the pairing}\label{sec:Perfectness}
We continue with the setting of \S~\ref{sec:DT}.
Our goal in this section is to prove a perfectness statement for the pairing
~\eqref{eqn:Pair-4-5}. To make our statement precise, it is
convenient to have the following definition.
\begin{defn}\label{defn:PF-lim-colim}
Let $\{A_n\}_n$ and $\{B_n\}_n$ be ind-system and pro-system
of $p^m$-torsion discrete abelian groups, respectively.
Suppose that there is a pairing
\[
(\star) \hspace*{1cm} \{A_n\}_n \times \{B_n\}_n \to {{\mathbb Z}}/{p^m}.
\]
We shall say that this pairing is continuous and semi-perfect if the induced
maps
\[
\theta \colon {\varprojlim}_n B_n \to ({\varinjlim}_n A_n)^\vee
\ \ {\rm and} \ \
\theta' \colon {\varinjlim}_n A_n \to {\varinjlim}_n B^\vee_n
\]
are continuous and bijective homomorphisms of topological abelian groups.
Recall here
that $ ({\varinjlim}_n A_n)^\vee \cong {\varprojlim}_n A^\vee_n$
by Lemma~\ref{lem:lim-colim}.
We shall say that $(\star)$ is perfect if it is semi-perfect,
the surjective map (see Lemma~\ref{lem:Top})
${\varinjlim}_n B^\vee_n \to ( {\varprojlim}_n B_n)^\vee$
is bijective and $\theta$ is a homeomorphism.
Note that perfectness implies that $\theta'$ is also a homeomorphism.
\end{defn}
The following is easy to check.
\begin{lem}\label{lem:Semi-perfect*}
The pairing $(\star)$ is perfect if it is semi-perfect and
$\{B_n\}_n$ is isomorphic to a surjective inverse system of compact groups.
\end{lem}
Our goal is to show that ~\eqref{eqn:Pair-4-5} is semi-perfect by
induction on $m \ge 1$.
We first consider the case $m =1$.
We shall prove this case using our earlier observation that $W_m\Omega^\bullet_X(p^mnD)$ is a
complex for every $m \ge 1$ and $n \in {\mathbb Z}$.
In this case, we have $Z_1\Omega^r_X(pnD) =
{\rm Ker}(d \colon \Omega^r_X(pnD) \to \Omega^{r+1}_X(pnD))$ by
~\eqref{eqn:Loc-free-0}. We claim that the inclusion
$d\Omega^{r-1}_X(-pnD) \hookrightarrow \Omega^r_X(-pnD) \cap d\Omega^{r-1}_X$ is actually
a bijection when $n \ge 1$. Equivalently, the map
${\Omega^r_X(-pnD)}/{d\Omega^{r-1}_X(-pnD)} \to {\Omega^r_X}/{d\Omega^{r-1}_X}$
is injective. Since $\psi$ is the identity on the topological space
$X$, the latter injectivity is
equivalent to showing that the map
\begin{equation}\label{eqn:Pair-1-psi}
\frac{(\psi_* \Omega^r_X)(-nD)}{d (\psi_* \Omega^{r-1}_X)(-nD)} \cong
\frac{\psi_* \Omega^r_X(-pnD)}{d \psi_* \Omega^{r-1}_X(-pnD)} \to
\frac{\psi_* \Omega^r_X}{d \psi_* \Omega^{r-1}_X}
\end{equation}
is injective. Using a snake lemma argument, it suffices to show that
the map $\psi_* Z_1\Omega^{r-1}_X \to \psi_* Z_1\Omega^{r-1}_X \otimes_{{\mathcal O}_X}
{\mathcal O}_{nD}$ is surjective. But this is obvious.
\begin{lem}\label{lem:LFPP}
For $n \in {\mathbb Z}$, the sheaves
$\psi_* (Z_1\Omega^r_X(pnD))$ and
$\psi_*({\Omega^r_X(pnD)}/{d\Omega^{r-1}_X(pnD)})$
are locally free ${\mathcal O}_X$-modules. For $n \ge 1$, the map
\[
\psi_* (Z_1\Omega^r_X(pnD)) \to
{\mathcal H}om_{{\mathcal O}_X}(\psi_*({\Omega^{d-r}_X(-pnD)}/{d\Omega^{d-r-1}_X(-pnD)}), \Omega^d_X),
\]
induced by ~\eqref{eqn:Pair-1}, is an isomorphism.
\end{lem}
\begin{proof}
We first note that
\begin{equation}\label{eqn:LFPP-0}
\begin{array}{lll}
\psi_* (Z_1\Omega^r_X(pnD)) & \cong &
\psi_*({\rm Ker}(\Omega^r_X(pnD) \to \Omega^{r+1}_X(pnD))) \\
& \cong & {\rm Ker}(\psi_*\Omega^r_X(pnD) \to \psi_*\Omega^{r+1}_X(pnD)) \\
& \cong & {\rm Ker}((\psi_*\Omega^r_X)(nD) \to (\psi_*\Omega^{r+1}_X)(nD)) \\
& \cong & ({\rm Ker}(\psi_*\Omega^r_X \to \psi_*\Omega^{r+1}_X))(nD) \\
& \cong & (\psi_*Z_1\Omega^r_X)(nD).
\end{array}
\end{equation}
Since $\psi_*$ also commutes with ${\rm Coker}(d)$, we similarly get
\begin{equation}\label{eqn:LFPP-1}
\psi_*({\Omega^r_X(pnD)}/{d\Omega^{r-1}_X(pnD)}) \cong
({\psi_* \Omega^{r}_X}/{d\psi_* \Omega^{r-1}_X})(nD).
\end{equation}
In the exact sequence
\[
0 \to \psi_* Z_1\Omega^r_X \to \psi_* \Omega^r_X \xrightarrow{d}
\psi_* \Omega^{r+1}_X \to {\psi_* \Omega^{r+1}_X}/{d\psi_* \Omega^{r}_X} \to 0
\]
of ${\mathcal O}_X$-linear maps between coherent ${\mathcal O}_X$-modules,
all terms are locally free by \cite[Lemma~1.7]{Milne-Duality}.
It therefore remains an exact sequence of locally free ${\mathcal O}_X$-modules after
tensoring with ${\mathcal O}_X(nD)$ for any $n \in {\mathbb Z}$.
The first part of the lemma now follows by using
~\eqref{eqn:LFPP-0} and ~\eqref{eqn:LFPP-1} (for different values of $r$).
To prove the second part, we can again use
~\eqref{eqn:LFPP-0} and ~\eqref{eqn:LFPP-1}. Since ${\mathcal O}_X(nD)$ is
invertible, it suffices to show that
$(\psi_* Z_1\Omega^r_X)(nD) \to
{\mathcal H}om_{{\mathcal O}_X}({\psi_* \Omega^{d-r}_X}/{d\psi_* \Omega^{d-r-1}_X}, \Omega^d_X(nD))$
is an isomorphism. But the term on the right side of this map is
isomorphic to the sheaf
${\mathcal H}om_{{\mathcal O}_X}({\psi_* \Omega^{d-r}_X}/{d\psi_* \Omega^{d-r-1}_X}, \Omega^d_X)(nD)$
via the canonical isomorphism
\[
{\mathcal H}om_{{\mathcal O}_X}({\mathcal A}, {\mathcal B}) \otimes_{{\mathcal O}_X} {\mathcal E} \xrightarrow{\cong}
{\mathcal H}om_{{\mathcal O}_X}({\mathcal A}, {\mathcal B}\otimes_{{\mathcal O}_X} {\mathcal E}),
\]
which locally sends $(f \otimes b) \to (a \mapsto f(a) \otimes b)$
if ${\mathcal E}$ is locally free.
We therefore have to only show that the map
$\psi_* Z_1\Omega^r_X \to
{\mathcal H}om_{{\mathcal O}_X}({\psi_* \Omega^{d-r}_X}/{d\psi_* \Omega^{d-r-1}_X}, \Omega^d_X)$
is an isomorphism. But this follows from \cite[Lemma~1.7]{Milne-Duality}.
\end{proof}
\begin{lem}\label{lem:Finite-0}
The groups ${\mathbb H}^i_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{n})$ and
$H^i_{\text{\'et}}(X, \Omega^r_{(X,nD), {\operatorname{log}}})$
are finite for $i, r \ge 0$.
\end{lem}
\begin{proof}
Using ~\eqref{eqn:dlog-2}, the finiteness of
$H^i_{\text{\'et}}(X, \Omega^r_{(X,nD), {\operatorname{log}}})$ is reduced to showing that
the group
$H^i_{\text{\'et}}(X, \frac{\Omega^r_{(X,nD)}}{\Omega^r_{(X,nD)} \cap d\Omega^{r-1}_{X}})$
is finite. By \eqref{eqn:dlog-2-*}, it suffices to show that
$H^i_{\text{\'et}}(Z, \frac{\Omega^r_Z}{d\Omega^{r-1}_Z})$ is finite if
$Z \in \{X, nD\}$. Since $k$ is perfect, the absolute Frobenius $\psi$ is
a finite morphism. In particular, $\psi_*$ is exact.
It suffices therefore to show that
$H^i_{\text{\'et}}(Z, \frac{\psi_*\Omega^r_Z}{\psi_* d\Omega^{r-1}_Z})$ is finite.
But this is clear because $k$ is finite, $Z$ is projective over $k$ and
${\psi_*\Omega^r_Z}/{\psi_* d\Omega^{r-1}_Z}$ is coherent.
The finiteness of ${\mathbb H}^i_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{n})$
follows by a similar argument because $\psi_*(Z_1\Omega^r_X(nD))$ is coherent.
\end{proof}
\begin{lem}\label{lem:PP-pro-1}
The pairing
\[
\{{\mathbb H}^i_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{n})\}_n \times
\{{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_1{\mathcal G}^{d-r, \bullet}_{n})\}_n \to {{\mathbb Z}}/{p}
\]
is a perfect pairing of the ind-abelian and pro-abelian finite groups.
\end{lem}
\begin{proof}
The finiteness follows from Lemma~\ref{lem:Finite-0}.
For perfectness, it suffices to prove that, at each fixed level and with $n$
replaced by $pn$, the pairing in question is a perfect pairing
of finite abelian groups. We shall show the latter.
For an ${\mathbb F}_p$-vector space $V$, we let $V^\star = {\rm Hom}_{{\mathbb F}_p}(V, {\mathbb F}_p)$.
Using the definitions of various pairings of sheaves and complexes of
sheaves, we have a commutative diagram of long exact sequences
\begin{equation}\label{eqn:PP-pro-1-0}
\xymatrix@C.4pc{
\cdots \ar[r] & H^{i-1}_{\text{\'et}}(X, Z_1\Omega^r_X(pnD)) \ar[r] \ar[d] &
H^{i-1}_{\text{\'et}}(X, \Omega^r_X(pnD)) \ar[r] \ar[d] &
{\mathbb H}^{i}_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{pn}) \ar[r] \ar[d] &
\cdots \\
\cdots \ar[r] &
H^{d+1-i}_{\text{\'et}}(X, \frac{\Omega^{d-r}_X(-pnD)}{d\Omega^{d-r-1}_X(-pnD)})^\star
\ar[r] &
H^{d+1-i}_{\text{\'et}}(X, \Omega^{d-r}_X(-pnD))^\star \ar[r] &
{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_1{\mathcal G}^{d-r, \bullet}_{pn})^\star \ar[r] &
\cdots .}
\end{equation}
The exactness of the bottom row follows from Lemma~\ref{lem:Finite-0} and
\cite[Lemma~7.10]{Gupta-Krishna-REC}.
Using Lemma~\ref{lem:LFPP} and Grothendieck duality for the structure map
$X \to {\rm Spec \,}({\mathbb F}_p)$, the map
\[
H^{i}_{\text{\'et}}(X, \psi_*(Z_1\Omega^r_X(pnD))) \to
H^{d-i}_{\text{\'et}}(X,
\psi_*(\frac{\Omega^{d-r}_X(-pnD)}{d\Omega^{d-r-1}_X(-pnD)}))^\star
\]
is an isomorphism.
Since $\psi_*$ is exact, it follows that the map
\begin{equation}\label{eqn:PP-pro-1-1}
H^{i}_{\text{\'et}}(X, Z_1\Omega^r_X(pnD)) \to
H^{d-i}_{\text{\'et}}(X, \frac{\Omega^{d-r}_X(-pnD)}{d\Omega^{d-r-1}_X(-pnD)})^\star
\end{equation}
is an isomorphism.
By the same reason, the map
\begin{equation}\label{eqn:PP-pro-1-2}
H^{i}_{\text{\'et}}(X, \Omega^r_X(pnD)) \to H^{d-i}_{\text{\'et}}(X, \Omega^{d-r}_X(-pnD))^\star
\end{equation}
is an isomorphism.
Using ~\eqref{eqn:PP-pro-1-0}, ~\eqref{eqn:PP-pro-1-1} and
~\eqref{eqn:PP-pro-1-2}, we conclude that the right vertical arrow
in ~\eqref{eqn:PP-pro-1-0} is an isomorphism.
An identical argument shows that the
map ${\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_1{\mathcal G}^{d-r, \bullet}_{pn}) \to
{\mathbb H}^{i}_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{pn})^\star$
is an isomorphism. This finishes
the proof of the perfectness of the
pairing of the hypercohomology groups.
\end{proof}
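For the reader's convenience, the duality invoked in the middle of the above proof is
the classical Serre duality for the smooth projective scheme $X$ of dimension $d$ over
the perfect field $k$ (see, e.g., \cite[Chapter~III, \S~7]{Hartshorne-AG}): for a locally
free ${\mathcal O}_X$-module ${\mathcal E}$, the cup product followed by the trace map yields a perfect
pairing of finite-dimensional $k$-vector spaces
\[
H^{i}(X, {\mathcal E}) \times H^{d-i}(X, {\mathcal H}om_{{\mathcal O}_X}({\mathcal E}, \Omega^d_X)) \to
H^{d}(X, \Omega^d_X) \xrightarrow{\rm tr} k.
\]
Composing with the (nonzero) trace map $k \to {\mathbb F}_p$ keeps the pairing perfect over
${\mathbb F}_p$, which is, in essence, how the $(-)^\star$-duals above arise.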
We can now prove the main duality theorem of this paper.
\begin{thm}\label{thm:Duality-main}
Let $k$ be a finite field and $X$ a smooth and projective scheme
of pure dimension $d \ge 1$ over $k$. Let $D \subset X$ be an effective
Cartier divisor with complement $U$.
Let $m \ge 1$ and $i, r \ge 0$ be integers. Then
\[
\{{\mathbb H}^i_{\text{\'et}}(X, W_m{\mathcal F}^{r, \bullet}_{n})\}_n \times
\{{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_m{\mathcal G}^{d-r, \bullet}_{n})\}_n \to {{\mathbb Z}}/{p^m}
\]
is a semi-perfect pairing of ind-abelian and pro-abelian groups.
\end{thm}
\begin{proof}
We have shown already (see ~\eqref{eqn:Pair-4-7}) that the pairing is
continuous after taking limits. We need to show that the maps
$\theta_m$ (see ~\eqref{eqn:Pair-5-0}) and $\theta'_m$ (see
~\eqref{eqn:Pair-5-2}) are isomorphisms of abelian groups.
We shall prove this by induction on $m \ge 1$.
We first assume $m = 1$. Then Lemma~\ref{lem:PP-pro-1} implies that the
map
\[
\theta_1 \colon
\{{\mathbb H}^{d+1-i}_{\text{\'et}}(X, W_1{\mathcal G}^{d-r, \bullet}_{n})\}_n \to
\{{\mathbb H}^i_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{n})^\vee\}_n
\]
is an isomorphism of pro-abelian groups.
Taking the limit and using ~\eqref{eqn:Pair-4-4}, we get an isomorphism
\[
\theta_1 \colon F^{d+1-i}_{1,r}(U) \xrightarrow{\cong}
{\varprojlim}_n {\mathbb H}^i_{\text{\'et}}(X, W_1{\mathcal F}^{r, \bullet}_{n})^\vee.
\]
But the term on the right is the same as $H^i_{\text{\'et}}(U, W_1\Omega^r_{U, {\operatorname{log}}})^\vee$
by ~\eqref{eqn:Pair-4-3} and Lemma~\ref{lem:lim-colim}.
Lemma~\ref{lem:PP-pro-1} also implies that $\theta'_1$ is an isomorphism.
This proves $m =1$ case of the theorem.
We now assume $m \ge 2$ and recall the definitions of
$F^j_{m,r}(n)$ and $F^j_{m,r}(U)$ from
\S~\ref{sec:Exact}. We consider the commutative diagram
\begin{equation}\label{eqn::Duality-main-1}
\xymatrix@C.8pc{
\cdots \ar[r] & F^{d+1-i}_{m-1,r}(U) \ar[r] \ar[d]_-{\theta_{m-1}} &
F^{d+1-i}_{m,r}(U) \ar[r] \ar[d]^-{\theta_m} &
F^{d+1-i}_{1,r}(U) \ar[r] \ar[d]^-{\theta_1} & \cdots \\
\cdots \ar[r] & H^i_{\text{\'et}}(U, W_{m-1}\Omega^r_{U, {\operatorname{log}}})^\vee \ar[r]^-{R^\vee} &
H^i_{\text{\'et}}(U, W_{m}\Omega^r_{U, {\operatorname{log}}})^\vee \ar[r]^-{({\underline{p}}^{m-1})^\vee} &
H^i_{\text{\'et}}(U, W_{1}\Omega^r_{U, {\operatorname{log}}})^\vee \ar[r] & \cdots.}
\end{equation}
The top row is exact by Lemma~\ref{lem:Lim-exact}.
The bottom row is exact by Lemma~\ref{lem:log-HW} and
\cite[Lemma~7.10]{Gupta-Krishna-REC}.
The maps $\theta_{m-1}$ and $\theta_1$ (in every cohomological degree) are
isomorphisms by induction.
We conclude from the five lemma that $\theta_m$ is also an isomorphism.
An identical argument, where we apply Lemma~\ref{lem:long-exact-lim}
instead of Lemma~\ref{lem:Lim-exact}, shows that $\theta'_m$ is an
isomorphism. This finishes the proof.
\end{proof}
\begin{remk}\label{remk:Perf-semi-perf}
In \cite[Theorem~4.1.4]{JSZ} and \cite[Theorem~3.4.2]{Zhau},
a pairing using a logarithmic version of the pro-system
$\{H^{d+1-r}_{\text{\'et}}(X, W_m\Omega^{d-r}_{(X,nD), {\operatorname{log}}})\}_n$ is
given under the assumption that
$D_{\rm red}$ is a simple normal crossing divisor. The authors in these
papers say that their pairing is perfect. Although they do not explain
their interpretations of perfectness,
what they actually prove is the semi-perfectness in the sense of
Definition~\ref{defn:PF-lim-colim}, according to
our understanding.
\end{remk}
\begin{cor}\label{cor:Perfect**}
The pairing of Theorem~\ref{thm:Duality-main} is perfect if
$D_{\rm red}$ is a simple normal crossing divisor, $i = 1$, $r = 0$, and at least one of the
conditions $d \neq 2$ and $k \neq {\mathbb F}_2$ holds.
\end{cor}
\begin{proof}
Combine Theorem~\ref{thm:Duality-main}, Lemma~\ref{lem:Semi-perfect*},
Corollary~\ref{cor:RS-K-comp}, Proposition~\ref{prop:Milnor-iso},
\cite[Theorem~9.1]{Kato-Saito} and the
equivalence of ~\eqref{eqn:Pair-4-7} and ~\eqref{eqn:Pair-4-8}.
\end{proof}
We let $C^{\text{\'et}}_{KS}(X,D;m) = H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,(X,D)}}/{p^m}})$.
We shall study the following special case in the next
section.
\begin{cor}\label{cor:Duality-main-d}
Under the assumptions of Theorem~\ref{thm:Duality-main}, the
map
\[
\theta_m \colon {\varprojlim}_n C^{\text{\'et}}_{KS}(X,nD;m) \to
{\pi^{\rm ab}_1(U)}/{p^m}
\]
is a bijective and continuous homomorphism between topological abelian groups.
This is an isomorphism of topological groups under the assumptions of
Corollary~\ref{cor:Perfect**}.
\end{cor}
\subsection{The comparison theorem}\label{sec:Comp}
We shall continue with the assumptions of Theorem~\ref{thm:Duality-main}.
We fix an integer $m \ge 1$. Let $\pi^{\rm ab}_1(U)$ be the abelianized
{\'e}tale fundamental group of $U$ and $\pi^{\rm adiv}_1(X,D)$ the
co-1-skeleton {\'e}tale fundamental group of $X$ with modulus $D$,
introduced in \cite[Definition~7.5]{Gupta-Krishna-REC}. The latter
characterizes the finite abelian covers of $U$ whose ramifications are
bounded by $D$ at each of its generic points, where the bound is
given by means of Matsuda's Artin conductor.
There is a natural surjection $\pi^{\rm ab}_1(U) \twoheadrightarrow \pi^{\rm adiv}_1(X,D)$.
Let $C_{KS}(X,D) = H^d_{\rm nis}(X, {\mathcal K}^M_{d, (X,D)})$ and
$C_{KS}(X,D; m) = H^d_{\rm nis}(X, {{\mathcal K}^M_{d, (X,D)}}/{p^m}) \cong
{C_{KS}(X,D)}/{p^m}$. By Proposition~\ref{prop:Milnor-iso}, we have
\begin{equation}\label{eqn:KS-NiS}
C_{KS}(X,D; m) \xrightarrow{\cong} H^d_{\rm nis}(X, \overline{{\widehat{{\mathcal K}}^M_{d,(X,D)}}/{p^m}}).
\end{equation}
We let $\widetilde{C}_{U/X} = \varprojlim_n C_{KS}(X,nD)$.
The canonical map ${\widetilde{C}_{U/X}}/{p^m} \to \varprojlim_n C_{KS}(X,nD; m)$
is an isomorphism by \cite[Lemma~5.9]{Gupta-Krishna-BF}.
The groups $C_{KS}(X,D)$ and $C_{KS}(X,D; m)$
have the discrete topology while $\widetilde{C}_{U/X}$ and
${\widetilde{C}_{U/X}}/{p^m}$ have the inverse limit topology.
The groups $\pi^{\rm ab}_1(U)$ and $\pi^{\rm adiv}_1(X,D)$ have the profinite topology.
By \cite[Theorem~1.2]{Gupta-Krishna-REC}, there is a commutative diagram
of continuous homomorphisms of topological abelian groups
\begin{equation}\label{eqn:REC}
\xymatrix@C.8pc{
\widetilde{C}_{U/X} \ar[r]^-{\rho_{U/X}} \ar@{->>}[d] & \pi^{\rm ab}_1(U) \ar@{->>}[d] \\
C(X,D) \ar[r]^-{\rho_{X|D}} & \pi^{{\rm adiv}}_1(X,D).}
\end{equation}
The horizontal arrows are injective with dense images.
They become isomorphisms after tensoring with
${{\mathbb Z}}/{p^m}$ by \cite[Corollary~5.15]{Gupta-Krishna-BF}.
We let $\widetilde{C}^{\text{\'et}}_{U/X}(m) = {\varprojlim}_n C^{\text{\'et}}_{KS}(X,nD;m)$.
By Corollary~\ref{cor:Duality-main-d}, we have a
bijective and continuous homomorphism between topological abelian groups
$\rho^{\text{\'et}}_{U/X} \colon \widetilde{C}^{\text{\'et}}_{U/X}(m) \rightarrow
{\pi^{\rm ab}_1(U)}/{p^m} \cong H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m})^\star$.
We therefore have a diagram
\begin{equation}\label{eqn:REC-0}
\xymatrix@C.8pc{
{\widetilde{C}_{U/X}}/{p^m} \ar[dr]^-{\rho_{U/X}} \ar[d]_-{\eta} & \\
\widetilde{C}^{\text{\'et}}_{U/X}(m) \ar[r]^-{\rho^{\text{\'et}}_{U/X}} & {\pi^{\rm ab}_1(U)}/{p^m}}
\end{equation}
of continuous homomorphisms,
where $\eta$ is the change of topology homomorphism.
We wish to prove the following.
\begin{prop}\label{prop:Nis-et-rec}
The diagram ~\eqref{eqn:REC-0} is commutative.
\end{prop}
\begin{proof}
It follows from \cite[Theorem~1.2]{Gupta-Krishna-BF} that
${\rho_{U/X}}$ is an isomorphism of profinite groups.
Since the image of the composite map
${\mathcal Z}_0(U) \xrightarrow{{\operatorname{\rm cyc}}_{U/X}} {\widetilde{C}_{U/X}}/{p^m} \xrightarrow{\rho_{U/X}}
{\pi^{\rm ab}_1(U)}/{p^m}$ is dense by the generalized Chebotarev density theorem,
it follows that the image of ${\operatorname{\rm cyc}}_{U/X}$ is also dense in
${\widetilde{C}_{U/X}}/{p^m}$. Since ${\pi^{\rm ab}_1(U)}/{p^m}$ is Hausdorff, it suffices
to show that $\rho_{U/X} \circ {\operatorname{\rm cyc}}_{U/X} = \rho^{\text{\'et}}_{U/X} \circ \eta
\circ {\operatorname{\rm cyc}}_{U/X}$.
Equivalently, we have to show that for every closed point $x \in U$,
one has that $(\rho^{\text{\'et}}_{U/X} \circ \eta \circ {\operatorname{\rm cyc}}_{U/X})([x])$
is the image of the
Frobenius element under the map ${\mathbb G}al({\overline{k}}/{k(x)}) \to
{\pi^{\rm ab}_1(U)}/{p^m}$.
But this is well known (e.g., see \cite[Theorem~3.4.1]{Kerz-Zhau}).
\end{proof}
\subsection{A new filtration of $H^1_{\text{\'et}}(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$}
\label{sec:Filt-et}
We keep the assumptions of Theorem~\ref{thm:Duality-main}.
By ~\eqref{eqn:Pair-4-4} and Theorem~\ref{thm:Duality-main}, we
have the isomorphism of abelian groups
\[
\theta'_m \colon H^1(U, {{\mathbb Z}}/{p^m}) \xrightarrow{\cong}{\varinjlim}_n C^{\text{\'et}}_{KS}(X,nD;m)^{\vee},
\]
where $H^1(U, {{\mathbb Z}}/{p^m})$ denotes the \'etale cohomology
$H_{\text{\'et}}^1(U, {{\mathbb Z}}/{p^m})$.
We let
\[
{\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Z}}/{p^m}) =
(\theta'_m)^{-1}({\rm Image}(C^{\text{\'et}}_{KS}(X,nD;m)^\vee \to
{\varinjlim}_n C^{\text{\'et}}_{KS}(X,nD;m)^{\vee})).
\]
We set
\begin{equation}\label{eqn:Filt-et-0}
{\mathbb F}il^{{\text{\'et}}l}_D H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p}) =
{\varinjlim}_m {\mathbb F}il^{{\text{\'et}}l}_D H^1(U, {{\mathbb Z}}/{p^m}).
\end{equation}
It follows that $\{{\mathbb F}il^{\text{\'et}}_{nD} H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})\}_n$ defines
an increasing filtration of $H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$. This is an
{\'e}tale version of the filtration ${\mathbb F}il_{D} H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$
defined in \cite[Definition~7.12]{Gupta-Krishna-REC}.
This new filtration is clearly exhaustive.
We do not know if ${\mathbb F}il_{D} H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$ and
${\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$ are comparable in general.
We can however prove the following.
\begin{thm}\label{thm:Comp-Main}
Let $k$ be a finite field and $X$ a smooth and projective scheme
of pure dimension $d \ge 1$ over $k$. Let $D \subset X$ be an effective
Cartier divisor with complement $U$ such that $D_{\rm red}$ is a simple
normal crossing divisor.
Assume that either $d \neq 2$ or $k \neq {\mathbb F}_2$.
Then one has
\[
{\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Z}}/{p^m}) \subseteq
{\mathbb F}il_D H^1(U, {{\mathbb Z}}/{p^m}) \ \ {\rm and} \ \
{\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p}) \subseteq
{\mathbb F}il_D H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})
\]
as subgroups of $H^1(U, {{\mathbb Z}}/{p^m})$ and
$ H^1(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$, respectively.
These inclusions are equalities if $D_{\rm red}$ is regular.
\end{thm}
\begin{proof}
The claim about the two inclusions follows directly from the
definitions of the filtrations in view of Proposition~\ref{prop:Nis-et-rec}
and Corollary~\ref{cor:Perfect**}.
Moreover, Corollary~\ref{cor:Duality-main-d}
yields that $\rho^{\text{\'et}}_{U/X}$ is an isomorphism of
profinite topological groups. Hence, there exists a
unique quotient $\pi^{\text{\'et}}_1(X,nD;m)$ of ${\pi^{\rm ab}_1(U)}/{p^m}$ such that
the diagram
\begin{equation}\label{eqn:REG-*0}
\xymatrix@C.8pc{
\widetilde{C}^{\text{\'et}}_{U/X}(m) \ar[r]^-{\rho^{\text{\'et}}_{U/X}} \ar@{->>}[d]
& {\pi^{\rm ab}_1(U)}/{p^m} \ar@{->>}[d] \\
\pi^{\text{\'et}}_1(X,nD;m) \ar[r]^-{\rho^{\text{\'et}}_{X|nD}} &
{\pi^{{\rm adiv}}_1(X,nD)}/{p^m}}
\end{equation}
commutes and the horizontal arrows are isomorphisms of topological groups.
One knows that ${\mathbb F}il_{nD} H^1(U, {{\mathbb Z}}/{p^m}) =
({\pi^{{\rm adiv}}_1(X,nD)}/{p^m})^\vee$. By Theorem~\ref{thm:Comparison-3},
it is easy to see that ${\mathbb F}il^{\text{\'et}}_{nD} H^1(U, {{\mathbb Z}}/{p^m})
\cong \pi^{\text{\'et}}_1(X,nD;m)^\vee$ when $D_{\rm red}$ is regular.
More precisely, if $D_{\rm red}$ is a simple
normal crossing divisor and
either $d \neq 2$ or $k \neq {\mathbb F}_2$, we have the following
diagram.
\begin{equation}\label{eqn:REG-*0.5}
\xymatrix@C.8pc{
H^1(U, {{\mathbb Z}}/{p^m}) \ar[r]^-{\theta'}_-{\cong} &
{\varinjlim}_n C^{\text{\'et}}_{KS}(X,nD;m)^{\vee}
\ar[r]^-{\cong} & ({\varprojlim}_n C^{\text{\'et}}_{KS}(X,nD;m))^{\vee}\\
{\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Z}}/{p^m}) \ar@{_{(}->}[d] \ar@{^{(}->}[u]
\ar@{.>}[r]
& C^{\text{\'et}}_{KS}(X,nD;m)^{\vee} \ar[u] \ar[d]& \\
{\mathbb F}il_D H^1(U, {{\mathbb Z}}/{p^m}) \ar[r]^-{\cong}&C_{KS}(X,nD;m)^{\vee}. }
\end{equation}
Assume now that $D_{{\rm red}}$ is regular. By Theorem~\ref{thm:Comparison-3},
it then follows that the transition maps in the pro-system
$\{C^{\text{\'et}}_{KS}(X,nD;m)\}_n$ are surjective. In particular, the
right top vertical arrow is injective and hence the middle
horizontal dotted arrow is an honest arrow such that the top square
commutes. It follows from Proposition~\ref{prop:Nis-et-rec} that the
bottom square commutes as well.
By the definition of ${\mathbb F}il^{\text{\'et}}_D H^1(U, {{\mathbb Z}}/{p^m})$,
it is clear that the middle
horizontal arrow is now surjective.
To prove its injectivity, it suffices to show
that the right bottom vertical arrow is injective. But this follows from Theorem~\ref{thm:Comparison-3}.
\end{proof}
\section{Reciprocity theorem for $C_{KS}(X|D)$}\label{sec:REC*}
In this section, we shall prove the reciprocity theorem for the idele
class group $C_{KS}(X|D)$. Before going into this, we recall the
definition of some filtrations of $H^1_{\text{\'et}}(K, {{\mathbb Q}}/{{\mathbb Z}})$ for
a Henselian discrete valuation field $K$.
\subsection{The Brylinski-Kato and Matsuda filtrations}
\label{sec:BKM}
Let $K$ be a Henselian discrete valuation field of
characteristic $p > 0$ with ring of integers
${\mathcal O}_K$, maximal ideal ${\mathfrak m}_K \neq 0$ and residue field ${\mathfrak f}$.
We let $H^q(A) = H^q_{\text{\'et}}(A, {{\mathbb Q}}/{{\mathbb Z}}(q-1))$ for any commutative ring $A$.
The generalized Artin-Schreier sequence gives rise to an exact sequence
\begin{equation}\label{eqn:DRC-3}
0 \to {{\mathbb Z}}/{p^r} \to W_r(K) \xrightarrow{1 - F} W_r(K)
\xrightarrow{\partial} H^1_{{\text{\'et}}}(K, {{\mathbb Z}}/{p^r}) \to 0,
\end{equation}
where $F((a_{r-1}, \ldots , a_{0})) = (a^p_{r-1}, \ldots , a^p_{0})$.
We write the Witt vector $(a_{r-1}, \ldots , a_{0})$ as $\underline{a}$ in short.
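For example (a routine specialization of ~\eqref{eqn:DRC-3}, recorded here only for orientation),
for $r = 1$ we have $W_1(K) = K$ and $F(a) = a^p$, so the above sequence becomes the classical
Artin-Schreier sequence
\[
0 \to {{\mathbb Z}}/{p} \to K \xrightarrow{1 - F} K \xrightarrow{\partial} H^1_{{\text{\'et}}}(K, {{\mathbb Z}}/{p}) \to 0.
\]
In particular, every class in $H^1_{{\text{\'et}}}(K, {{\mathbb Z}}/{p})$ is of the form $\partial(a)$
for some $a \in K$, uniquely determined up to elements of the form $b - b^p$ with $b \in K$.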
Let $v_K \colon K^{\times} \to {\mathbb Z}$ be the normalized valuation.
Let $\delta_r$ denote the composite map
$W_r(K) \xrightarrow{\partial} H^1_{{\text{\'et}}}(K, {{\mathbb Z}}/{p^r}) \hookrightarrow H^1(K)$.
Then one knows that $\delta_r = \delta_{r+1} \circ V$.
For an integer $m \ge 1$, let
${\rm ord}_p(m)$ denote the $p$-adic order of $m$ and
let $r' = \min(r, {\rm ord}_p(m))$. We let ${\rm ord}_p(0) = - \infty$.
For $m \ge 0$, we let
\begin{equation}\label{eqn:DRC-4}
{\mathbb F}il^{{\rm bk}}_m W_r(K) = \{\underline{a} \,|\, p^{i}v_K(a_i) \ge - m \ \mbox{for all} \ i\}; \ \ {\rm and}
\end{equation}
\begin{equation}\label{eqn:DRC-5}
{\mathbb F}il^{{\rm ms}}_m W_r(K) = {\mathbb F}il^{{\rm bk}}_{m-1} W_r(K) +
V^{r-r'}({\mathbb F}il^{{\rm bk}}_{m} W_{r'}(K)).
\end{equation}
We have ${\mathbb F}il^{{\rm bk}}_0 W_r(K) = {\mathbb F}il^{{\rm ms}}_0 W_r(K) = W_r({\mathcal O}_K)$.
We let ${\mathbb F}il^{{\rm bk}}_{-1} W_r(K) = {\mathbb F}il^{{\rm ms}}_{-1} W_r(K) = 0$.
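As a quick sanity check on these definitions (obtained by directly unwinding ~\eqref{eqn:DRC-4}
and ~\eqref{eqn:DRC-5}; it is not used in the sequel), consider $r = 1$, so that $W_1(K) = K$
and $W_0(K) = 0$. Then for $m \ge 1$, we have
\[
{\mathbb F}il^{{\rm bk}}_m W_1(K) = \{a \in K \,|\, v_K(a) \ge -m\} \ \ {\rm and} \ \
{\mathbb F}il^{{\rm ms}}_m W_1(K) =
\begin{cases}
{\mathbb F}il^{{\rm bk}}_{m-1} W_1(K) & {\rm if} \ p \nmid m, \\
{\mathbb F}il^{{\rm bk}}_{m} W_1(K) & {\rm if} \ p \mid m,
\end{cases}
\]
since $r' = \min(1, {\rm ord}_p(m))$. This is consistent with Theorem~\ref{thm:Fil-main}(3) below.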
For $m \geq 1$, we let
\begin{equation}\label{eqn:DRC-6}
{\mathbb F}il^{{\rm bk}}_{m-1} H^1(K) = H^1(K)\{p'\} \bigoplus {\underset{r \ge 1}\bigcup}
\delta_r({\mathbb F}il^{{\rm bk}}_{m-1} W_r(K)) \ \ \mbox{and}
\end{equation}
\[
{\mathbb F}il^{{\rm ms}}_m H^1(K) = H^1(K)\{p'\} \bigoplus {\underset{r \ge 1}\bigcup}
\delta_r({\mathbb F}il^{{\rm ms}}_m W_r(K)).
\]
Moreover, we let
${\mathbb F}il^{{\rm ms}}_0 H^1(K) = {\mathbb F}il^{{\rm bk}}_{-1} H^1(K) = H^1({\mathcal O}_K)$,
the subgroup
of unramified characters.
The filtrations ${\mathbb F}il^{{\rm bk}}_\bullet H^1(K)$ and ${\mathbb F}il^{{\rm ms}}_\bullet H^1(K)$
are due to Brylinski-Kato \cite{Kato-89} and Matsuda \cite{Matsuda},
respectively. We refer to \cite[Theorem~6.1]{Gupta-Krishna-REC} for the
following.
\begin{thm}\label{thm:Fil-main}
The two filtrations defined above satisfy the following relations.
\begin{enumerate}
\item
$H^1(K) = {\underset{m \ge 0}\bigcup} {\mathbb F}il^{{\rm bk}}_m H^1(K)
= {\underset{m \ge 0}\bigcup} {\mathbb F}il^{{\rm ms}}_m H^1(K) $.
\item
${\mathbb F}il^{{\rm ms}}_m H^1(K) \subset {\mathbb F}il^{{\rm bk}}_m H^1(K) \subset {\mathbb F}il^{{\rm ms}}_{m+1} H^1(K)$
for all $m \ge -1$.
\item If $m \geq 1$ such that ${\rm ord}_p(m)=0$, then
${\mathbb F}il^{{\rm bk}}_{m-1} H^1(K) = {\mathbb F}il^{{\rm ms}}_m H^1(K)$. In particular,
${\mathbb F}il^{{\rm bk}}_0 H^1(K) = {\mathbb F}il^{{\rm ms}}_1 H^1(K)$, which is the subgroup of
tamely ramified characters.
\end{enumerate}
\end{thm}
For integers $m \ge 0$ and $r \ge 1$, let $U_m K^M_r(K)$ be
the subgroup $\{1 + {\mathfrak m}^m_K, K^{\times}, \ldots , K^{\times}\}$ of $K^M_r(K)$.
We let $U'_mK^M_r(K)$ be the subgroup $\{1 + {\mathfrak m}^m_K, {\mathcal O}^{\times}_K, \ldots ,
{\mathcal O}^{\times}_K\}$ of $K^M_r(K)$.
It follows from \cite[Lemma~6.2]{Gupta-Krishna-REC} that
$U_{m+1}K^M_r(K) \subseteq U'_mK^M_r(K) \subseteq U_{m}K^M_r(K)$ for every
integer $m \ge 0$.
If $K$ is a $d$-dimensional Henselian local field
(see \cite[\S~5.1]{Gupta-Krishna-REC}), there is a pairing
\begin{equation}\label{eqn:Kato-pair}
\{,\} \colon K^M_d(K) \times H^1(K) \to H^{d+1}(K) \cong {{\mathbb Q}}/{{\mathbb Z}}.
\end{equation}
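For orientation (we make no use of this remark), when $d = 1$, i.e., $K$ is a Henselian
discrete valuation field with finite residue field, we have $K^M_1(K) = K^{\times}$ and
$U_m K^M_1(K) = 1 + {\mathfrak m}^m_K$ for $m \ge 1$, and ~\eqref{eqn:Kato-pair} is the familiar pairing
$K^{\times} \times H^1(K) \to H^2(K) \cong {{\mathbb Q}}/{{\mathbb Z}}$ of local class field theory.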
The following result is due to Kato and Matsuda (when $p \neq 2$).
We refer to \cite[Theorem~6.3]{Gupta-Krishna-REC} for a proof.
\begin{thm}\label{thm:Filt-Milnor-*}
Let $\chi \in H^1(K)$ be a character. Then the following hold.
\begin{enumerate}
\item
For every integer $m \ge 0$, we have that
$\chi \in {\mathbb F}il^{{\rm bk}}_m H^1(K)$ if and only if $\{\alpha, \chi\} = 0$
for all $\alpha \in U_{m+1} K^M_d(K)$.
\item
For every integer $m \ge 1$, we have that $\chi \in {\mathbb F}il^{{\rm ms}}_m H^1(K)$ if and
only if $\{\alpha, \chi\} = 0$ for all $\alpha \in U'_{m} K^M_d(K)$.
\end{enumerate}
\end{thm}
We have the inclusions $K \hookrightarrow K^{sh} \hookrightarrow \overline{K}$, where $\overline{K}$ is
a fixed separable closure of $K$ and $K^{sh}$ is the strict Henselization of
$K$. We shall use the following key result in the proof of our reciprocity
theorem.
\begin{prop}\label{prop:Fil-SH-BK}
For $m \ge 0$, the canonical square
\[
\xymatrix@C.8pc{
{\mathbb F}il^{{\rm bk}}_m H^1(K) \ar[r] \ar[d] & H^1(K) \ar[d] \\
{\mathbb F}il^{{\rm bk}}_m H^1(K^{sh}) \ar[r] & H^1(K^{sh})}
\]
is Cartesian.
\end{prop}
\begin{proof}
Since ${\mathbb F}il^{{\rm bk}}_{\bullet} H^1(K)$ is an exhaustive
filtration of $H^1(K)$ by Theorem~\ref{thm:Fil-main}(1), it
suffices to show that for every $m \ge 1$, the square
\[
\xymatrix@C.8pc{
{\mathbb F}il^{{\rm bk}}_{m-1} H^1(K) \ar[r] \ar[d] & {\mathbb F}il^{{\rm bk}}_{m} H^1(K) \ar[d] \\
{\mathbb F}il^{{\rm bk}}_{m-1} H^1(K^{sh}) \ar[r] & {\mathbb F}il^{{\rm bk}}_{m} H^1(K^{sh})}
\]
is Cartesian. Equivalently, it suffices to show that for every $m \ge 1$, the
map
\[
\phi^*_m \colon {\rm gr}^{{\rm bk}}_m H^1(K) \to {\rm gr}^{{\rm bk}}_m H^1(K^{sh}),
\]
induced by the inclusion $\phi \colon K \hookrightarrow K^{sh}$, is injective.
We fix $m \ge 1$. By \cite[Corollary~5.2]{Kato-89},
there exists an injective (non-canonical)
homomorphism
${\rm rsw}_{\pi_K} \colon {\rm gr}^{{\rm bk}}_m H^1(K) \hookrightarrow \Omega^1_{{\mathfrak f}} \oplus {\mathfrak f}$.
By Theorem~5.1 of loc. cit., this refined Swan conductor
${\rm rsw}_{\pi_K} $
depends only on the choice of a uniformizer $\pi_K$ of $K$.
Since ${\mathcal O}^{sh}_K$ is unramified over ${\mathcal O}_K$, we can choose
$\pi_K$ to be a uniformizer of $K^{sh}$ as well. It therefore
follows that for all $m\geq 1$, the diagram
\[
\xymatrix@C2pc{
{\rm gr}^{{\rm bk}}_m H^1(K) \ar@{^{(}->}[r]^-{{\rm rsw}_{\pi_K}} \ar[d] &
\Omega^1_{{\mathfrak f}} \oplus {\mathfrak f} \ar[d] \\
{\rm gr}^{{\rm bk}}_m H^1(K^{sh}) \ar@{^{(}->}[r]^-{{\rm rsw}_{\pi_K}} &
\Omega^1_{\overline{{\mathfrak f}}} \oplus \overline{{\mathfrak f}}}
\]
is commutative, where $\overline{{\mathfrak f}}$ is a separable closure of
${\mathfrak f}$.
Since the horizontal arrows in the above diagram are injective, it suffices to
show
that the natural map $\Omega^1_{{\mathfrak f}} \to \Omega^1_{\overline{{\mathfrak f}}}$ is injective.
But this is clear because $\overline{{\mathfrak f}}$ is separable over ${\mathfrak f}$, so that
$\Omega^1_{\overline{{\mathfrak f}}} \cong \Omega^1_{{\mathfrak f}} \otimes_{{\mathfrak f}} \overline{{\mathfrak f}}$,
and tensoring an ${\mathfrak f}$-vector space with a field extension is injective.
\end{proof}
\subsection{Logarithmic fundamental group with modulus}
\label{sec:LFGM}
Let $k$ be a finite field of characteristic $p$ and $X$ an integral
projective scheme over $k$ of dimension $d \ge 1$. Let $D \subset X$
be an effective Cartier divisor with complement $U$. Let $K = k(\eta)$ denote
the function field of $X$. We let $C = D_{\rm red}$.
We fix a separable closure $\overline{K}$ of $K$
and let $G_K$ denote the absolute Galois group of $K$.
Recall the following notations from \cite[\S~3.3]{Kato-Saito} or
\cite[\S~2.3]{Gupta-Krishna-REC}.
Assume that $X$ is normal.
Let $\lambda$ be a generic point of $D$.
Let $K_\lambda$ denote the Henselization of $K$ at $\lambda$.
Let $P = (p_0, \ldots , p_{d-2}, \lambda, \eta)$ be a Parshin
chain on $(U \subset X)$.
Let $V \subset K$ be a $d$-DV which dominates $P$.
Let $V = V_0 \subset \cdots \subset V_{d-2}
\subset V_{d-1} \subset V_d = K$ be the chain of valuation rings
in $K$ induced by $V$. Since $X$ is normal, it is easy to check
that for any such chain, one must have $V_{d-1} = {\mathcal O}_{X,\lambda}$.
Let $V'$ be the image of $V$ in $k(\lambda)$. Let $\widetilde{V}_{d-1}$
be the unique Henselian discrete valuation ring
having an ind-{\'e}tale local homomorphism
$V_{d-1} \to \widetilde{V}_{d-1}$ such that its residue field $E_{d-1}$
is the quotient field of $(V')^h$. Then $V^h$ is the inverse image of
$(V')^h$ under the quotient map $\widetilde{V}_{d-1} \twoheadrightarrow E_{d-1}$.
It follows that its quotient field $Q(V^h)$ is a $d$-dimensional
Henselian discrete valuation field whose ring of integers is
$\widetilde{V}_{d-1}$ (see \cite[\S~3.7.2]{Kato-Saito}).
It then follows that
there are canonical inclusions of discrete valuation rings
\begin{equation}\label{eqn:Fil-SH-3}
{\mathcal O}_{X,\lambda} \hookrightarrow \widetilde{V}_{d-1} \hookrightarrow {\mathcal O}^{sh}_{X,\lambda}.
\end{equation}
Moreover, we have (see the proof of \cite[Proposition~3.3]{Kato-Saito})
\begin{equation}\label{eqn:Fil-SH-5}
{\mathcal O}^h_{X,P'} \cong {\underset{V \in {\mathcal V}(P)}\prod} \widetilde{V}_{d-1},
\end{equation}
where ${\mathcal V}(P)$ is the set of $d$-DV's in $K$ which dominate $P$.
As an immediate consequence of Proposition~\ref{prop:Fil-SH-BK},
we therefore get the following.
\begin{cor}\label{cor:Fil-SH-4}
For every $m \ge 0$, the square
\[
\xymatrix@C.8pc{
{\mathbb F}il^{{\rm bk}}_m H^1(K_\lambda) \ar[r] \ar[d] & H^1(K_\lambda) \ar[d] \\
{\mathbb F}il^{{\rm bk}}_m H^1(Q(V^h)) \ar[r] & H^1(Q(V^h))}
\]
is Cartesian.
\end{cor}
Let ${\mathbb I}rr_C$ denote the set of all generic points of $C$ and
let $C_\lambda$ denote the closure of an element $\lambda \in {\mathbb I}rr_C$.
We write $D = {\underset{\lambda \in {\mathbb I}rr_C}\sum} n_\lambda C_\lambda$.
We can also write $D = {\underset{x \in X^{(1)}}\sum} n_x \overline{\{x\}}$, where $n_x =0$ for all $x\in U$.
We allow $D$ to be empty in which case we
write $D = 0$.
For any $x \in X^{(1)} \cap C$, we let $\widehat{K}_x$ denote the quotient field of
the ${\mathfrak m}_x$-adic completion $\widehat{{\mathcal O}_{X,x}}$ of ${\mathcal O}_{X,x}$.
Let ${\mathcal O}^{sh}_{X,x}$ denote the strict Henselization of ${\mathcal O}_{X,x}$
and let $K^{sh}_x$ denote its quotient field.
Then it is clear from the definitions that there are
inclusions
\begin{equation}\label{eqn:Field-incln}
K \hookrightarrow K_x \hookrightarrow K^{sh}_x \hookrightarrow \overline{K} \ \mbox{and} \ \ K \hookrightarrow K_x \hookrightarrow \widehat{K}_x.
\end{equation}
\begin{defn}\label{defn:Fid_D}
Let ${\mathbb F}il^{{\rm bk}}_D H^1(K)$ denote the subgroup of characters
$\chi \in H^1(K)$ such that for every $x \in X^{(1)}$,
the image $\chi_{x}$ of
$\chi$ under the canonical surjection $H^1(K) \twoheadrightarrow H^1(K_{x})$
lies in ${\mathbb F}il^{{\rm bk}}_{n_{x}-1} H^1(K_{x})$.
It is easy to check that ${\mathbb F}il^{{\rm bk}}_D H^1(K) \subset H^1(U)$.
We let ${\mathbb F}il^{{\rm bk}}_D H^1_{\text{\'et}}(U, {{\mathbb Z}}/{m}) =
H^1_{\text{\'et}}(U, {{\mathbb Z}}/{m}) \cap {\mathbb F}il^{{\rm bk}}_D H^1(K)$.
\end{defn}
Recall that $H^1(K)$ is a torsion abelian group. We consider it a
topological abelian group with discrete topology. In particular,
all the subgroups of $H^1(U)$ (e.g., ${\mathbb F}il^{{\rm bk}}_D H^1(K)$) are
also considered as discrete topological abelian groups.
Recall from \cite[Definition~7.12]{Gupta-Krishna-REC} that
${\mathbb F}il_D H^1(K)$ is a subgroup of $H^1(U)$ which is
defined similarly to ${\mathbb F}il^{{\rm bk}}_D H^1(K)$, where we only
replace ${\mathbb F}il^{{\rm bk}}_{n_{\lambda}-1} H^1(K_{\lambda})$ by
${\mathbb F}il^{{\rm ms}}_{n_{\lambda}} H^1(K_{\lambda})$.
\begin{defn}\label{defn:Fun-D-BK}
We define the quotient $\pi_1^{\rm abk}(X, D)$ of $\pi_1^{\rm ab}(U)$
to be the Pontryagin dual of ${\mathbb F}il^{{\rm bk}}_D H^1(K) \subset H^1(U)$,
i.e.,
\[
\pi_1^{\rm abk}(X, D) := {\rm Hom}_{{\rm cont}}({\mathbb F}il^{{\rm bk}}_D H^1(K), {\mathbb Q}/{\mathbb Z}) =
{\rm Hom}({\mathbb F}il^{{\rm bk}}_D H^1(K), {\mathbb Q}/{\mathbb Z}).
\]
\end{defn}
Since $H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m}) = \ _{p^m}H^1(U, {{\mathbb Q}}/{{\mathbb Z}})$,
it follows that
\[
{\pi_1^{\rm abk}(X, D)}/{p^m} \cong
{\rm Hom}_{{\rm cont}}({\mathbb F}il^{{\rm bk}}_D H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m}), {{\mathbb Q}}/{{\mathbb Z}}).
\]
Since ${\mathbb F}il^{{\rm bk}}_D H^1(K)$ is a discrete topological group, it follows that
$\pi_1^{\rm abk}(X, D)$ is a profinite group. Moreover,
\cite[Theorem~2.9.6]{Pro-fin} implies that
\begin{equation}\label{eqn:dual-D-BK}
\pi_1^{\rm abk}(X, D)^\vee \cong {\mathbb F}il^{{\rm bk}}_D H^1(K) \ \ {\rm and} \ \
({\pi_1^{\rm abk}(X, D)}/{p^m})^\vee \cong {\mathbb F}il^{{\rm bk}}_D H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m}).
\end{equation}
\begin{lem} \label{lem:adiv-abk}
The quotient map $\pi^{\rm ab}_1(U) \to \pi_1^{\rm abk}(X, D)$
factors through $\pi_1^{{\rm adiv}}(X, D)$.
\end{lem}
\begin{proof}
This is a straightforward consequence of Theorem~\ref{thm:Fil-main} and
\cite[Theorem~7.16]{Gupta-Krishna-REC} once we recall that
\begin{equation}\label{eqn:adiv-abk-0}
\pi_1^{{\rm adiv}}(X, D) = {\rm Hom}({\mathbb F}il_D H^1(K), {\mathbb Q}/{\mathbb Z}).
\end{equation}
\end{proof}
\begin{remk}\label{remk:Tannakian-defn}
Using \cite[\S~3]{Abbes-Saito}, one can mimic the
construction of \cite[\S~7]{Gupta-Krishna-REC} to show that
$\pi_1^{\rm abk}(X, D)$ is isomorphic to the abelianization of the
automorphism group of the fiber functor of a certain Galois subcategory of
the category of finite {\'e}tale covers of $U$.
Under this Tannakian interpretation,
$\pi_1^{\rm abk}(X, D)$ characterizes the finite
{\'e}tale covers of $U$ whose ramifications are bounded at each generic
point of $D$ by means of Kato's Swan conductor.
\end{remk}
\vskip .3cm
\subsection{Reciprocity for $C_{KS}(X|D)$}\label{sec:REC-KS}
We shall continue with the setting of \S~\ref{sec:LFGM}.
Before we prove the reciprocity theorem for $C_{KS}(X|D)$,
we recall the construction of the reciprocity map for
$C_{U/X}$ from \cite[\S~5.4]{Gupta-Krishna-REC}.
We let $P = (p_0, \ldots , p_s)$ be any Parshin chain on $X$ with
the condition that $p_s \in U$.
Let $X(P) = \overline{\{p_s\}}$ be the integral closed subscheme of $X$
and let $U(P) = U \cap X(P)$.
We let $V \subset k(p_s)$ be an $s$-DV dominating $P$.
Then $Q(V^h)$ is an
$s$-dimensional Henselian local field. By
\cite[Proposition~5.1]{Gupta-Krishna-REC}, we therefore
have
the reciprocity map
\begin{equation}\label{eqn:Kato-rec}
\rho_{Q(V^h)} \colon K^M_s(Q(V^h)) \to {\mathbb G}al({{Q(V^h)}^{\rm ab}}/{Q(V^h)})
\cong \pi^{\rm ab}_1({\rm Spec \,}(Q(V^h))).
\end{equation}
Taking the sum of these maps over ${\mathcal V}(P)$ and using
\cite[Lemma~5.10]{Gupta-Krishna-REC}, we get a reciprocity map
\begin{equation}\label{eqn:Kato-rec-0}
\rho_{k(P)} \colon K^M_s(k(P)) \to \pi^{\rm ab}_1({\rm Spec \,}(k(P))) \to \pi^{\rm ab}_1(U),
\end{equation}
where the last map exists because $p_s \in U$. Taking the sum over all Parshin
chains on the pair $(U\subset X)$, we get a reciprocity map
$\rho_{U/X} \colon I_{U/X} \to \pi^{\rm ab}_1(U)$.
By \cite[Theorem~5.13, Proposition~5.15]{Gupta-Krishna-REC},
this descends to a continuous homomorphism of topological groups
\begin{equation}\label{eqn:Kato-rec-4}
\rho_{U/X} \colon C_{U/X} \to
\pi^{\rm ab}_1(U).
\end{equation}
Recall that $\pi_1^{{\rm adiv}}(X, D)_0$ is the kernel of the
composite map $\pi_1^{{\rm adiv}}(X, D) \twoheadrightarrow \pi_1^{\rm ab}(X) \to \widehat{{\mathbb Z}}$.
One defines $\pi_1^{\rm abk}(X, D)_0$ similarly.
One of the main results of this paper is the following.
\begin{thm}\label{thm:REC-RS-Main}
There is a continuous homomorphism
\[
\rho'_{X|D} \colon C(X|D) \to \pi_1^{\rm abk}(X, D)
\]
with dense image such that the diagram
\begin{equation}\label{eqn:Rec-D-map-0}
\xymatrix@C.8pc{
{C}_{U/X} \ar[r]^-{\rho_{U/X}} \ar@{->>}[d]_-{p'_{X|D}} & \pi^{\rm ab}_1(U)
\ar@{->>}[d]^-{q'_{X|D}} \\
C(X|D) \ar[r]^-{\rho'_{X|D}} & \pi^{\rm abk}_1(X,D)}
\end{equation}
is commutative. If $X$ is
normal and $U$ is regular, then $\rho'_{X|D}$ induces an
isomorphism of finite groups
\[
\rho'_{X|D} \colon C(X|D)_0 \xrightarrow{\cong} \pi_1^{\rm abk}(X, D)_0.
\]
\end{thm}
\begin{proof}
We first show the existence of $\rho'_{X|D}$.
Its continuity and the density of its image will then follow from
the corresponding assertions for $\rho_{U/X}$, shown in
\cite{Gupta-Krishna-REC}.
In view of ~\eqref{eqn:Kato-rec-4}, we only have to show that
if $\chi$ is a character of $\pi_1^{\rm abk}(X, D)$, then the composite
$\chi \circ q'_{X|D} \circ \rho_{U/X}$ annihilates ${\rm Ker}(p'_{X|D})$.
By ~\eqref{eqn:Idele-D}, we only need to show that
$\chi \circ q'_{X|D} \circ \rho_{U/X}$ annihilates
the image of $\widehat{K}^M_{d}({\mathcal O}^h_{X,P'}|I_D) \to C_{U/X}$, where $P$ is any
maximal Parshin chain on $(U \subset X)$.
But the proof of this is completely identical to that of
\cite[Theorem~8.1]{Gupta-Krishna-REC}, the only difference being
that we have to use part (1) of Theorem~\ref{thm:Filt-Milnor-*}
instead of part (2).
We now assume that $X$ is normal and $U$ is regular.
In this case, we can replace $C(X|D)$ by $C_{KS}(X|D)$ by
Theorem~\ref{thm:KZ-main}. We show that $\rho'_{X|D}$ is injective on all of
$C_{KS}(X|D)$.
By Lemmas~\ref{lem:KS-GK-iso} and ~\ref{lem:adiv-abk}, there is a diagram
\begin{equation}\label{eqn:Rec-D-map-1}
\xymatrix@C.8pc{
{\mathbb F}il^{{\rm bk}}_D H^1(K) \ar[r]^-{\rho'^\vee_{X|D}} \ar[d]_-{\alpha^\vee} &
C_{KS}(X|D)^\vee \ar[d]^-{\beta^\vee} \\
{\mathbb F}il_D H^1(K) \ar[r]^-{\rho^\vee_{X|D}} & C_{KS}(X,D)^\vee,}
\end{equation}
whose vertical arrows are injective. It is clear from the construction of
the reciprocity maps that this diagram is commutative.
To show that $\rho'_{X|D}$ is injective, it suffices to show that
$\rho'^\vee_{X|D}$ is surjective (see \cite[Lemma~7.10]{Gupta-Krishna-REC}).
We fix a character
$\chi \in C_{KS}(X|D)^\vee$ and let $\widetilde{\chi} = \beta^\vee(\chi)$.
Since $\rho^\vee_{X|D}$ is surjective by
\cite[Theorem~1.1]{Gupta-Krishna-BF}, we can find a character
$\chi' \in {\mathbb F}il_D H^1(K)$ such that $\widetilde{\chi} = \rho^\vee_{X|D}(\chi')$.
We need to show that $\chi' \in {\mathbb F}il^{{\rm bk}}_D H^1(K)$.
We fix a point $x \in {\mathbb I}rr_C$ and let $\chi'_x$ be the
image of $\chi'$ in
$H^1(K_x)$. We need to show that $\chi'_x \in {\mathbb F}il^{{\rm bk}}_{n_x -1} H^1(K_x)$,
where $n_x$ is the multiplicity of $D$ at $x$.
By Corollary~\ref{cor:Fil-SH-4}, it suffices to show that
for some maximal Parshin chain $P = (p_0, \ldots , p_{d-2}, x, \eta)$
on $(U \subset X)$
and $d$-DV $V \subset K$ dominating $P$, the image of $\chi'_x$ in
$H^1(Q(V^h))$ lies in the subgroup ${\mathbb F}il^{{\rm bk}}_{n_x -1} H^1(Q(V^h))$.
But this is proven by repeating the proof of
\cite[Theorem~1.1]{Gupta-Krishna-BF} mutatis mutandis, with only
one modification: we need to use part (1) of
Theorem~\ref{thm:Filt-Milnor-*} instead of part (2).
The surjectivity of $\rho'_{X|D}$ on the degree zero part follows because
the top horizontal arrow in the
commutative diagram
\begin{equation}\label{eqn:Rec-mod-bk-2}
\xymatrix@C.8pc{
C_{KS}(X,D)_0 \ar[r]^-{\rho_{X|D}} \ar@{->>}[d]_-{\beta} & \pi^{{\rm adiv}}_1(X,D)_0
\ar@{->>}[d]^-{\alpha} \\
C_{KS}(X|D)_0 \ar[r]^-{\rho'_{X|D}} & \pi^{\rm abk}_1(X,D)_0}
\end{equation}
is surjective by \cite[Theorem~1.1]{Gupta-Krishna-BF}.
The finiteness of $C_{KS}(X|D)_0$ follows because
$\beta$ is surjective by Lemma~\ref{lem:KS-GK-iso} and
$C_{KS}(X,D)_0$ is finite by \cite[Theorem~4.8]{Gupta-Krishna-BF}.
This concludes the proof.
\end{proof}
Using Theorem~\ref{thm:REC-RS-Main} and \cite[Lemma~8.4]{Gupta-Krishna-REC},
we get the following.
\begin{cor}\label{cor:REC-RS-Main-fin}
Under the additional assumptions of Theorem~\ref{thm:REC-RS-Main}, the map
\[
\rho'_{X|D} \colon {C_{KS}(X|D)}/m \to {\pi^{\rm abk}_1(X,D)}/m
\]
is an isomorphism of finite groups for every integer $m \ge 1$.
\end{cor}
\subsection{Filtration of Kerz-Zhao}
\label{sec:Etale-bk}
Let $k$ be a finite field of characteristic $p$ and $X$ an integral regular
projective scheme over $k$ of dimension $d \ge 1$. Let $D \subset X$
be an effective Cartier divisor with complement $U$ such that
$D_{\rm red}$ is a simple normal crossing divisor. Let $K$ denote
the function field of $X$.
By \cite[Theorems~1.1.5, 4.1.4]{JSZ}, there is an isomorphism
\begin{equation}\label{eqn:JSJ-0}
\lambda_m \colon H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m}) \xrightarrow{\cong}
{\varinjlim}_n H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,X|nD}}/{p^m}})^\vee.
\end{equation}
Each group $H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,X|nD}}/{p^m}})$ is finite
by \cite[Theorem~3.3.1]{Kerz-Zhau} and Corollary~\ref{cor:REC-RS-Main-fin}.
It follows that ~\eqref{eqn:JSJ-0}
is an isomorphism of discrete torsion topological abelian groups.
Since $H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,X|D}}/{p^m}})^\vee
\hookrightarrow {\varinjlim}_n H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,X|nD}}/{p^m}})^\vee$,
we can define
\[
{\mathbb F}il^{{\rm bk}}_{D} H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m}) :=
(\lambda_m)^{-1}(H^d_{\text{\'et}}(X, \overline{{\widehat{{\mathcal K}}^M_{d,X|D}}/{p^m}})^\vee).
\]
This filtration was defined by Kerz-Zhao \cite[Definition~3.3.7]{Kerz-Zhau}.
We let ${\mathbb F}il^{{\rm bk}}_{D} H^1_{\text{\'et}}(U, {{\mathbb Q}_p}/{{\mathbb Z}_p}) = {\varinjlim}_m
{\mathbb F}il^{{\rm bk}}_{D} H^1_{\text{\'et}}(U, {{\mathbb Z}}/{p^m})$ and
${\mathbb F}il^{{\rm bk}}_{D, {\text{\'et}}} H^1(K) = H^1(K)\{p'\} \oplus
{\mathbb F}il^{{\rm bk}}_{D} H^1_{\text{\'et}}(U, {{\mathbb Q}_p}/{{\mathbb Z}_p})$.
Using \cite[Theorem~3.3.1]{Kerz-Zhau} and Corollary~\ref{cor:REC-RS-Main-fin}, we
get the following logarithmic version of Theorem~\ref{thm:Comp-Main} (without assuming that
$D_{\rm red}$ is regular).
This identifies the filtration due to Kerz-Zhao with
the one induced by the Brylinski-Kato filtration.
\begin{cor}\label{cor::REC-RS-Main-fin-0}
As subgroups of $H^1(K)$, one has
\[
{\mathbb F}il^{{\rm bk}}_{D, {\text{\'et}}} H^1(K) = {\mathbb F}il^{{\rm bk}}_D H^1(K).
\]
\end{cor}
\vskip .3cm
\section{Counterexamples to Nisnevich descent}\label{sec:CE}
We shall now prove Theorem~\ref{thm:Main-5}.
Let $k$ be a finite field of characteristic $p$ and $X$ an integral regular
projective scheme of dimension two over $k$.
Let $C \subset X$ be an integral regular curve with complement $U$.
Let $K = k(X)$ and ${\mathfrak f} = k(\lambda)$, where $\lambda$ is the
generic point of $C$. Let $K_\lambda$ be the Henselian discrete valuation field
with ring of integers ${\mathcal O}^h_{X,\lambda}$ and residue field ${\mathfrak f}$.
Fix a positive integer $m_0 = p^rm'$, where $r \ge 1$ and $p \nmid m'$.
Assume that $p \neq 2$.
Then Matsuda has shown (see the proof of \cite[Proposition~3.2.7]{Matsuda})
that there is an isomorphism
\begin{equation}\label{eqn:Artin-cond}
\eta \colon \frac{{\mathbb F}il^{{\rm ms}}_{m_0} H^1(K_\lambda)}{{\mathbb F}il^{{\rm bk}}_{m_0 -1} H^1(K_\lambda)}
\xrightarrow{\cong} B_r \Omega^1_{{\mathfrak f}},
\end{equation}
where $B_\bullet \Omega^1_{{\mathfrak f}}$ is an increasing filtration of
$\Omega^1_{{\mathfrak f}}$, recalled in the proof of Lemma~\ref{lem:BK-RS}.
Suppose that $B_1 \Omega^1_{{\mathfrak f}} = d({\mathfrak f}) \subset \Omega^1_{{\mathfrak f}}$ is not zero.
Then $B_r\Omega^1_{{\mathfrak f}} \neq 0$ for every $r \ge 1$.
Hence, the left hand side of ~\eqref{eqn:Artin-cond} is not zero.
Since the restriction map $\delta_\lambda \colon H^1(K) \to H^1(K_\lambda)$
is surjective, it follows that we can find a continuous character
$\chi \in H^1(K)$ such that $\delta_\lambda(\chi) \in
{\mathbb F}il^{{\rm ms}}_{m_0} H^1(K_\lambda) \setminus {\mathbb F}il^{{\rm bk}}_{m_0 -1} H^1(K_\lambda)$.
We let $U' \subseteq U$ be the largest open subscheme where
$\chi$ is unramified and let $C' = X\setminus U'$ with the reduced
closed subscheme structure.
Since $X$ is a surface, we can find a morphism $f \colon X' \to X$
which is a composition of a sequence of monoidal transformations
such that the reduced closed subscheme $f^{-1}(C')$ is a simple
normal crossing divisor.
In particular, $E_0 \to C$ is an isomorphism if we let $E_0$ be
the strict transform of $C$. We let $E = f^{-1}(C')$ with reduced
structure. Note that there is a finite closed subset $T \subset C'$
such that $f^{-1}(X \setminus T) \to X \setminus T$ is an isomorphism.
We let $E = E_0 + E_1 + \cdots + E_s$, where each $E_i$ is integral.
We let $\lambda_i$ be the generic point of $E_i$ so that $\lambda_0
= \lambda$.
Let ${\rm ar}_{\lambda_i} \colon H^1(K) \to {\mathbb Z}$ be the Artin conductor
(see \cite[Definition~3.2.5]{Matsuda}).
Theorem~\ref{thm:Fil-main} implies that
${\rm ar}_{\lambda_0}(\chi) = m_0$.
We let $m_i = {\rm ar}_{\lambda_i}(\chi)$ for $i \ge 1$
and define $D' = \stackrel{s}{\underset{i = 0}\sum} m_i E_i$.
It is then clear that $\chi \in {\mathbb F}il_{D'} H^1(K) \setminus {\mathbb F}il^{{\rm bk}}_{D'} H^1(K)$.
We have thus found a smooth projective integral surface $X'$ and an
effective Cartier divisor $D' \subset X'$ with the property that
$D'_{\rm red}$ is a simple normal crossing divisor and
$\frac{{\mathbb F}il_{D'} H^1(K)}{{\mathbb F}il^{{\rm bk}}_{D'} H^1(K)} \neq 0$ if $K$ is the function
field of $X'$.
It follows that ${\rm Ker}(\pi^{{\rm adiv}}_1(X', D') \twoheadrightarrow \pi^{\rm abk}_1(X', D'))
\neq 0$. Equivalently,
${\rm Ker}(\pi^{{\rm adiv}}_1(X', D')_0 \twoheadrightarrow \pi^{\rm abk}_1(X', D')_0)
\neq 0$.
We now look at the commutative diagram
\begin{equation}\label{eqn:Rec-cycle}
\xymatrix@C.8pc{
{\mathbb C}H_0(X'|D')_0 \ar[r]^-{{\operatorname{\rm cyc}}_{X'|D'}} \ar[d] &
\pi^{{\rm adiv}}_1(X', D')_0 \ar[d] \\
H^{4}_{\mathcal M}(X'|D', {\mathbb Z}(2))_0 \ar[r]^-{{\operatorname{\rm cyc}}'_{X'|D'}} &
\pi^{\rm abk}_1(X', D')_0.}
\end{equation}
The top horizontal arrow is an isomorphism by
\cite[Theorems~1.2, 1.3]{Gupta-Krishna-BF} and the bottom
horizontal arrow is an isomorphism by Corollary~\ref{cor:Main-4}.
We conclude that the left vertical arrow is surjective but not injective.
To complete the construction of a counterexample, what remains is to
find a pair $(X,C)$ such that $d({\mathfrak f}) \subset \Omega^1_{{\mathfrak f}}$ is not zero.
But this is an easy exercise. For instance, take $X = {\mathbb P}^2_k$ and
$C \subset {\mathbb P}^2_k$ a coordinate hyperplane. Then ${\mathfrak f} \cong k(t)$
is a purely transcendental extension of transcendence degree one and
$d(t)$ is a free generator of $\Omega^1_{{\mathfrak f}}$.
We remark that we had assumed above that $p \neq 2$, but this condition
can be removed using the proof of \cite[Theorem~6.3]{Gupta-Krishna-REC}.
\vskip .3cm
Let $(X',D')$ be as above and let $F = k(t)$ be a purely transcendental extension of degree one.
We let $\widetilde{X} = X'_{F}$ and $\widetilde{D} = D'_F$. Then we have a commutative diagram
\[
\xymatrix@C.8pc{
{\mathbb C}H_0(X'|D') \ar[r] \ar[d] & {\mathbb C}H_0(\widetilde{X}|\widetilde{D}) \ar[d] \\
H^{4}_{\mathcal M}(X'|D', {\mathbb Z}(2)) \ar[r] & H^{4}_{\mathcal M}(\widetilde{X}|\widetilde{D}, {\mathbb Z}(2)),}
\]
where the horizontal arrows are the flat pull-back maps. It easily follows from the proof of
\cite[Proposition~4.3]{KP-Doc} that the top horizontal arrow is injective. It follows that
the right vertical arrow is not injective. This shows the failure of Nisnevich descent for
the Chow groups with modulus over infinite fields too.
\vskip .4cm
\noindent\emph{Acknowledgements.}
Gupta was supported by the
SFB 1085 \emph{Higher Invariants} (Universit\"at Regensburg).
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
In this paper, we consider the sequential decision problem where the goal is to minimize the \emph{general dynamic regret} on a complete Riemannian manifold. The task of offline optimization on such a domain, also known as a \emph{geodesic metric space}, has recently received significant attention. The online setting has received significantly less attention, and it has remained an open question whether the body of results that hold in the Euclidean setting can be transplanted into the land of Riemannian manifolds where new challenges (e.g., \emph{curvature}) come into play.
In this paper, we show how to get optimistic regret bound on manifolds with non-positive curvature whenever improper learning is allowed and propose an array of adaptive no-regret algorithms. To the best of our knowledge, this is the first work that considers general dynamic regret and develops ``optimistic'' online learning algorithms which can be employed on geodesic metric spaces.
\end{abstract}
\begin{keywords}
Riemannian Manifolds, Optimistic Online Learning, Dynamic Regret
\end{keywords}
\section{Introduction}
Online convex optimization (OCO) in Euclidean space is a well-developed area with numerous applications. In each round, the learner takes an action from a decision set, while the \textit{adversary} chooses a loss function. The long-term performance metric of the learner is (static) regret, which is defined as the difference between the learner's cumulative loss and the loss of playing the best-fixed decision in hindsight. As the name suggests, OCO requires both the losses and the decision set to be \emph{convex}. From the theoretical perspective, convex functions and sets are well-behaved objects with many desirable properties that are generally required to obtain tight regret bounds.
Typical algorithms in OCO, such as \textit{mirror descent}, determine how one should adjust parameter estimates in response to arriving data, typically by shifting parameters against the gradient of the loss. But in many cases of interest, the underlying parameter space is not only non-convex but non-Euclidean. The \textit{hyperboloid}, for example, arising from the solution set of a degree-two polynomial, is a Riemannian manifold that has garnered interest as a tool in tree-embedding tasks \citep{lou2020differentiating}. On such manifolds, we do have a generalized notion of convexity, known as \textit{geodesic convexity} \citep{udriste2013convex}. There are many popular problems of interest \citep{hosseini2015matrix,vishnoi2018geodesic,sra2018geodesically} where the underlying objective function is geodesically convex (gsc-convex) under a suitable Riemannian metric. But there has thus far been significantly limited research on how to do \textit{adaptive learning} in such spaces and to understand when regret bounds are obtainable.
\begin{table*}[!hbpt]
\centering
\caption{Summary of bounds. $\delta$ describes the discrepancy between the decision set and the comparator set. We define $\zeta$ in Def.~\ref{def1} and let $B_T\coloneqq\min\{V_T,F_T\}$.}\label{table-1}
\begin{tabular}{cccc}
\toprule[1pt]
Algorithm& Type & Dynamic regret \\
\hline
\textsc{Radar}\xspace& Standard & $O(\sqrt{{\zeta}(1+P_T)T})$ \\
& Lower bound &$\Omega(\sqrt{(1+P_T)T})$ \\
\textsc{Radar-V}\xspace& Gradient-variation & $O(\sqrt{{\zeta}(\frac{1+P_T}{\delta^2}
+V_T)(P_T+1)})$ \\
\textsc{Radar-S}\xspace& Small-loss & $O(\sqrt{{\zeta}((1+P_T){\zeta}+F_T)(P_T+1)})$ \\
\textsc{Radar-B}\xspace& Best-of-both-worlds & {\small $O\lt(\sqrt{\zeta( P_T(\zeta+\frac{1}{\delta^2})+B_T+1)(P_T+1)+ B_T\ln T}\rt)$} \\
\bottomrule[1pt]
\end{tabular}
\end{table*}
Let $\N$ be a gsc-convex subset of a geodesic metric space $\M$. In this paper, we consider the problem of minimizing the \emph{general dynamic regret} on $\N$, defined as
$$
\text{D-Regret}_T := \sumT f_t(\x_t)-\sumT f_t(\u_t),
$$
where $\x_1, \ldots, \x_T \in \N$ is the sequence of actions taken by the learner, whose loss is evaluated relative to the sequence of ``comparator'' points $\u_1,\dots,\u_T\in\N$. There has been recent work establishing that sublinear regret is possible as long as $\N$ and the $f_t$'s are gsc-convex, for example using a Riemannian variant of Online Gradient Descent ~\citep{wang2021no}. But so far there are no such results that elicit better D-Regret using \textit{adaptive} algorithms.
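To make this baseline concrete, here is a minimal, self-contained numerical sketch of Riemannian online gradient descent (our own toy illustration; it is neither the Riemannian OGD analysis cited above nor any of the \textsc{Radar}\xspace variants developed below). The manifold, the losses $f_t(x)=(\log x-a_t)^2$, and the step size are illustrative choices: we use the positive reals with the metric $g_x(u,v)=uv/x^2$, a (flat) Hadamard manifold on which these losses are gsc-convex.
\begin{verbatim}
import numpy as np

# Toy Hadamard manifold: R_{>0} with metric g_x(u, v) = u*v / x^2.
# It is isometric to the Euclidean line via x -> log(x), and the losses
# f_t(x) = (log x - a_t)^2 below are geodesically convex for this metric.

def riemannian_grad(x, a):
    # Euclidean derivative of f(x) = (log x - a)^2 is 2*(log x - a)/x;
    # multiplying by x^2 converts it to the Riemannian gradient for g.
    return x**2 * 2.0 * (np.log(x) - a) / x

def exp_map(x, v):
    # Exponential map of the metric dx^2/x^2 at the point x.
    return x * np.exp(v / x)

def riemannian_ogd(a_seq, x0=1.0, eta=0.1):
    """OGD along geodesics: x_{t+1} = exp_{x_t}(-eta * grad f_t(x_t))."""
    x, losses = x0, []
    for a in a_seq:
        losses.append((np.log(x) - a) ** 2)           # suffer the loss f_t(x_t)
        x = exp_map(x, -eta * riemannian_grad(x, a))  # geodesic gradient step
    return losses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(sum(riemannian_ogd(rng.normal(size=200))))
\end{verbatim}
The only manifold-specific ingredients are the Riemannian gradient and the exponential map; on a curved manifold these are simply replaced by the corresponding intrinsic operations.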
What do we mean by ``adaptive'' in this context? In the online learning literature there have emerged three key quantities of interest in the context of sequential decision making, the \textit{comparator path length}, the \textit{gradient variation}, and the \textit{comparator loss}, defined respectively as:
\begin{eqnarray}
\textstyle P_T & := &\textstyle\sumsT d(\u_t,\u_{t-1}),\label{path} \\
\textstyle V_T & := & \textstyle\sum_{t=2}^T\sup_{\x\in\N}\|\nabla f_{t-1}(\x)-\nabla f_t(\x)\|^2, \nonumber \\
\textstyle F_T & := & \textstyle\sumT f_t(\u_t). \nonumber
\end{eqnarray}
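For example, if the comparator sequence is constant, $\u_1=\cdots=\u_T$, then $P_T=0$ and the general dynamic regret above reduces to the usual static regret against that fixed comparator.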
Let us start by considering regret minimization with respect to path length. While it has been observed that $O(\sqrt{(1+P_T)T})$ is optimal in a minimax sense~\citep{zhang2018adaptive}, a great deal of research for the Euclidean setting~\citep{srebro2010smoothness,chiang2012online,rakhlin2013online} has shown that significantly smaller regret is achievable when any one of the above quantities is small. These are not just cosmetic improvements either; many fundamental applications of online learning rely on these adaptive methods and bounds. We give a thorough overview in Section~\ref{sec:related}.
The goal of the present paper is to translate to the Riemannian setting an array of adaptive regret algorithms and prove corresponding bounds. We propose a family of algorithms which we call \textsc{Radar}\xspace, for \underline{R}iemannian \underline{a}daptive \underline{d}yn\underline{a}mic \underline{r}egret. The three important variants of \textsc{Radar}\xspace are \textsc{Radar}\xspacev, \textsc{Radar}\xspaces, and \textsc{Radar}\xspaceb; we prove regret bounds for each, summarized in Table \ref{table-1}. We allow \emph{improper learning} for the gradient-variation bound, which means the player can choose $\x_1,\dots,\x_T$ from a slightly larger set $\N_{\delta G}$ (formally defined in Definition \ref{def2}).
As a general matter, convex constraints on a Riemannian manifold introduce new difficulties in optimization that are not typically present in the Euclidean case, and there has been limited work on addressing these.
To the best of our knowledge, there are only three papers
considering how to incorporate constraints on manifolds, and these all make further assumptions on either the curvature or the diameter of the feasible set. \cite{martinez2022global} only applies to hyperbolic and spherical spaces. \cite{criscitiello2022negative} works for complete Riemannian manifolds with sectional curvature in $[-K,K]$, but requires the diameter of the decision set to be at most $\textstyle O(\frac{1}{\sqrt{K}})$.
\cite{martinez2022accelerated} mainly works for locally symmetric Hadamard manifolds. Our paper is the first to handle the projection distortion in the \emph{online} setting; our results apply to all Hadamard manifolds, with no further constraints on the diameter or the curvature, as long as improper learning is allowed.
Obtaining adaptive regret guarantees in the Riemannian setting is by no means a trivial task, as the new geometry introduces various additional technical challenges. Here is but one example: whereas the cost of a (Bregman) projection into a feasible region can be controlled using a generalized ``Pythagorean'' theorem in the Euclidean setting, this same issue becomes more difficult on a manifold as we encounter geometric distortion due to curvature. To better appreciate this, for a Hadamard manifold $\M$, assume the projection of $\x\in \M$ onto a convex subset $\N\subset\M$ is ${\z}$. While it is true that for any $\y\in\N\setminus \{\z\}$ the angle between geodesics $\overline{\z\x}$ and $\overline{\z\y}$
is obtuse, this property only holds in the tangent space at $\z$, yet we need to analyze gradients in the tangent space at ${\x}$. The use of \textit{parallel transport} between $\x$ and $\z$ unavoidably incurs extra distortion and could potentially lead to $O(T)$ regret.
The last challenge comes from averaging on manifolds. For example, many adaptive OCO algorithms rely on the \textit{meta-expert} framework, described by \cite{van2016metagrad}, that runs several learning algorithms in parallel and combines them through appropriately-weighted averaging. There is not, unfortunately, a single way to take convex combinations in a geodesic metric space, and all such averaging schemes need to account for the curvature of the manifold and incorporate the associated costs. We finally find the \emph{Fr\'echet mean} to be a desirable choice, but the analysis must proceed with care.
The key contributions of our work can be summarized as follows:
\begin{itemize}
\item We develop the optimistic mirror descent (OMD) algorithm on Hadamard manifolds\footnote{We focus on Hadamard manifolds in the main paper and extend the guarantee to CAT$(\kappa)$ spaces in Appendix \ref{cat-proof}.} in the online improper learning setting. Interestingly, we also show Optimistic Hedge, a variant of OMD, works for gsc-convex losses. We believe these tools may have significant applications to research in online learning and Riemannian optimization.
\item We combine our analysis on OMD with the \emph{meta-expert framework} ~\citep{van2016metagrad} to get several adaptive regret bounds, as shown in Table \ref{table-1}.
\item We develop a novel dynamic regret lower bound, which renders our $O(\sqrt{\zeta(1+P_T)T})$ bound to be tight up to the geometric constant $\zeta$.
\end{itemize}
\section{Preliminaries}
In this section, we introduce background knowledge of OCO and Riemannian manifolds.
\paragraph{OCO in Euclidean space.} We first formally describe OCO in Euclidean space. For each round $t=1,\dots,T$, the learner makes a decision $\x_t\in\mathcal{X}$ based on historical losses $f_1,\dots,f_{t-1}$ where $\X$ is a convex decision set, and then the adversary reveals a convex loss function $f_t$. The goal of the learner is to minimize the difference between the cumulative loss and that of the best-fixed decision in hindsight: $
\text{Regret}_T=\sum_{t=1}^Tf_t(\x_t)-\min_{\x\in\mathcal{X}}\sum_{t=1}^Tf_t(\x),
$
which is usually referred to as the \emph{static regret}, since the comparator is a fixed decision.
In the literature, there exist a large number of algorithms~\citep{hazan2016introduction} for minimizing the static regret. However, the underlying assumption of the static regret is that the adversary's behavior does not change drastically, which can be unrealistic in real applications. To address this issue, dynamic regret has been proposed~\citep{zinkevich2003online}, defined as
$$
\text{Regret}_T(\u_1,\dots,\u_T)=\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t),
$$
where $\u_1,\dots,\u_T\in\X$ is a comparator sequence. Dynamic regret has received considerable attention recently~\citep{besbes2015non,jadbabaie2015online,mokhtari2016online,NIPS:2017:Zhang,zhang2018adaptive,zhao2020dynamic,wan2021projection,L4DC:2021:Zhao,pmlr-v134-baby21a} due to its flexibility. However, dynamic regret can be as large as $O(T)$ in general. Thus, regularity conditions need to be imposed on the comparator sequence to ensure no-regret online learning. A common assumption~\citep{zinkevich2003online} is that the path-length (see Equation \eqref{path}) of the comparator sequence is bounded. We refer to the corresponding dynamic regret as the \textit{general dynamic regret} because any assignment of $\u_1,\dots,\u_T$ subject to the path-length constraint is feasible.
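As a point of reference for the Riemannian algorithms studied later, the following is a minimal Python sketch of Euclidean OGD with dynamic-regret bookkeeping; the oracle names (\texttt{proj}, \texttt{losses}, \texttt{grads}, \texttt{comparators}) are illustrative placeholders rather than library functions.
\begin{verbatim}
import numpy as np

def ogd_dynamic_regret(losses, grads, comparators, x0, eta, proj):
    # Euclidean OGD: x_{t+1} = proj_X(x_t - eta * grad f_t(x_t));
    # regret is accumulated against the comparators u_1, ..., u_T.
    x, regret = np.asarray(x0, dtype=float), 0.0
    for f, g, u in zip(losses, grads, comparators):
        regret += f(x) - f(u)
        x = proj(x - eta * g(x))
    return regret
\end{verbatim}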
\textbf{Riemannian manifolds.} Here, we give a brief overview of Riemannian geometry; this is a rich subject, and we refer the reader to, e.g., \cite{petersen2006riemannian} for a full treatment. A Riemannian manifold $(\mathcal{M},g)$ is a smooth manifold $\M$ equipped with a Riemannian metric $g$. The tangent space $T_{\x}\M\cong\R^d$, generalizing the concept of the tangent plane, contains the vectors tangent to any curve passing through $\x$. The Riemannian metric $g$ induces the inner product $\langle\u,\v\rangle_{\x}$ and the Riemannian norm $\|\u\|_{\x}=\sqrt{\langle\u,\u\rangle_{\x}}$ where $\u,\v\in T_{\x}\M$ (we omit the reference point $\x$ when it is clear from the context). We use $d(\x,\y)$ to denote the Riemannian distance between $\x,\y\in\M$, which is the greatest lower bound of the lengths of all piecewise smooth curves joining $\x$ and $\y$.
A curve connecting $\x,\y\in\M$ is a geodesic if it is locally length-minimizing. For two points $\x,\y\in\M$, suppose there exists a geodesic $\gamma(t):[0,1]\rightarrow\M$ such that $\gamma(0)=\x,\gamma(1)=\y$ and $\gamma'(0)=\v\in T_{\x}\M$. The exponential map $\expmap_{\x}(\cdot):T_{\x}\M\rightarrow\M$ maps $\v\in T_{\x}\M$ to $\y\in \M$. Correspondingly, the inverse exponential map $\expmap_{\x}^{-1}(\cdot):\M\rightarrow T_{\x}\M$ maps $\y\in\M$ to $\v\in T_{\x}\M$.
Since a geodesic is traversed at constant speed, we indeed have $d(\x,\y)=\|\expmap_{\x}^{-1}\y\|_{\x}$. It is also useful to compare tangent vectors in different tangent spaces. Parallel transport $\Gamma_{\x}^{\y}\u$ translates $\u$ from $T_{\x}\M$ to $T_{\y}\M$ and preserves the inner product, i.e., $\langle \u,\v\rangle_{\x} = \langle \Ga_{\x}^{\y}\u,\Ga_{\x}^{\y}\v\rangle_{\y}$.
The curvature of a Riemannian manifold reflects the extent to which the manifold differs from a Euclidean surface. For optimization purposes, it usually suffices to consider the sectional curvature. Following \cite{zhang2016first,wang2021no}, in this paper we mainly consider Hadamard manifolds, which are complete and simply connected manifolds with non-positive sectional curvature. On such manifolds, every two points are connected by a unique distance-minimizing geodesic~\citep{petersen2006riemannian}.
A subset $\N$ of $\M$ is gsc-convex if for any $\x,\y\in\N$, there exists a geodesic connecting $\x,\y$ and fully lies in $\N$. A function $f:\N\rightarrow \mathbb{R}$ is gsc-convex if $\N$ is gsc-convex and the composition $f(\gamma(t))$ satisfies
$
f(\gamma(t))\leq (1-t)f(\gamma(0))+tf(\gamma(1))
$
for any geodesic $\gamma(t)\subseteq\N$ and $t\in[0,1]$. An alternative definition of geodesic convexity is
$$
\textstyle f(\y)\geq f(\x)+\langle\nabla f(\x),\expmap_{\x}^{-1}\y\rangle,\qquad\forall\;\x,\y\in\N,
$$
where the Riemannian gradient $\nabla f(\x)\in T_{\x}\M$ is the unique vector determined by
$
D f(\x)[\v] = \langle\v, \nabla f(\x)\rangle
$
and $D f(\x)[\v]$ is the differential of $f$ along $\v\in T_{\x}\M$.
Similarly, an $L$-geodesically-smooth ($L$-gsc-smooth) function $f$ satisfies
$\|\Ga_{\y}^{\x}\nabla f(\y)-\nabla f(\x)\|\leq L\cdot d(\x,\y)$ for all $\x,\y\in\N$, or
$$
\textstyle f(\y)\leq f(\x)+\langle\nabla f(\x),\expmap_{\x}^{-1}\y\rangle+\frac{L}{2}d(\x,\y)^2.
$$
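To make the abstract operations above concrete, the following is a minimal Python sketch (assuming NumPy and SciPy) of the exponential map, inverse exponential map, and distance on the SPD manifold under the affine-invariant metric, whose closed forms are recalled in Appendix \ref{app:lb}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

def exp_map(p, u):
    # Exp_p(U) = p^{1/2} exp(p^{-1/2} U p^{-1/2}) p^{1/2}
    ph = fractional_matrix_power(p, 0.5)
    pih = fractional_matrix_power(p, -0.5)
    return ph @ expm(pih @ u @ pih) @ ph

def log_map(p, q):
    # Exp_p^{-1}(q) = p^{1/2} log(p^{-1/2} q p^{-1/2}) p^{1/2}
    ph = fractional_matrix_power(p, 0.5)
    pih = fractional_matrix_power(p, -0.5)
    return ph @ logm(pih @ q @ pih) @ ph

def dist(p, q):
    # d(p, q)^2 = sum_i log(lambda_i(q^{-1/2} p q^{-1/2}))^2
    qih = fractional_matrix_power(q, -0.5)
    eigs = np.linalg.eigvalsh(qih @ p @ qih)
    return np.sqrt(np.sum(np.log(eigs) ** 2))
\end{verbatim}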
\section{Related Work} \label{sec:related}
In this section, we briefly review past work on OCO in Euclidean space, and on online optimization and optimism on Riemannian manifolds.
\subsection{OCO in Euclidean Space}
\paragraph{Static regret.} We first consider work on static regret. In Euclidean space, it is well known that online gradient descent (OGD) guarantees $O(\sqrt{T})$ and $O(\log T)$ regret for convex and strongly convex losses \citep{hazan2016introduction}, which are also minimax optimal \citep{abernethy2008optimal}. However, the aforementioned bounds are not fully adaptive due to the dependence on $T$. Therefore, there is a tendency to replace $T$ with problem-dependent quantities. \cite{srebro2010smoothness} first notice that smooth and non-negative losses satisfy the self-bounding property, thus establishing the small-loss bound $O(\sqrt{F_T^\star})$ where $F_T^{\star}=\sum_{t=1}^Tf_t(\x^\star)$ is the cumulative loss of the best action in hindsight. \cite{chiang2012online} propose an extra-gradient method
to get an $O(\sqrt{V_T})$ gradient-variation regret bound for convex and smooth losses where $V_T=\sum_{t=2}^T\sup_{\x\in\X}\|\nabla f_{t-1}(\x)-\nabla f_t(\x)\|_2^2$.
\cite{rakhlin2013online} generalize the work of \cite{chiang2012online} and propose optimistic mirror descent, which has become a standard tool in online learning since then.
\paragraph{Dynamic regret.} Now we switch to the related work on dynamic regret. \cite{zinkevich2003online} propose to use OGD to get an $O\lt(\eta T+\frac{1+P_T}{\eta}\rt)$ regret bound where $\eta$ is the step size, but the result turns out to be $O((1+P_T)\sqrt{T})$ since the value of $P_T$ is unknown to the learner. The seminal work of \cite{zhang2018adaptive} uses Hedge to combine the advice of experts with different step sizes, and shows an $O\lt(\sqrt{(1+P_T)T}\rt)$ regret bound. A matching lower bound is also established therein. \cite{zhao2020dynamic} utilize smoothness to get a gradient-variation bound, a small-loss bound, and a best-of-both-worlds bound in Euclidean space.
\subsection{Online Learning and Optimism on Riemannian Manifolds}
In the online setting, \cite{becigneul2018riemannian} consider adaptive stochastic optimization on Riemannian manifolds but their results only apply to the Cartesian product of one-manifolds.
\cite{maass2022tracking}
study the \emph{restricted dynamic regret} on Hadamard manifolds under the gradient-free setting and provide an $O(\sqrt{T}+P_T^\star)$ bound for gsc-strongly convex and gsc-smooth functions, where $P_T^\star$ is the path-length of the comparator sequence formed by $\u_t=\argmin_{\x\in\X}f_t(\x)$. On Hadamard manifolds, \cite{wang2021no} apply Riemannian OGD (R-OGD) to get an $O(\sqrt{T})$ upper bound and an $\Omega(\sqrt{T})$ randomized lower bound. Comparatively, we focus on general and adaptive dynamic regret on Hadamard manifolds. Our minimax lower bound is also novel.
There also exist algorithms considering optimism on Riemannian manifolds. \cite{zhang2022minimax} propose Riemannian Corrected Extra Gradient (RCEG) for unconstrained minimax optimization on manifolds. \cite{karimi2022riemannian} consider a Robbins-Monro framework on Hadamard manifolds which subsumes Riemannian stochastic extra-gradient. By imposing a weak asymptotic coercivity condition and using a decaying step size, the trajectory is guaranteed to be finite \citep{karimi2022riemannian}. However, our paper is the first to consider the constrained case and the online setting. For the improper learning setting, we show that a constant step size achieves the same guarantee as in Euclidean space.
\section{Path Length Dynamic Regret Bound on Manifolds}
\label{radar}
In this section, we present the results related to the minimax path-length bound on manifolds. Before diving into the details, following previous work \citep{zinkevich2003online,wang2021no}, we introduce some standard assumptions and definitions.
\begin{ass}\label{hada}
$\M$ is a Hadamard manifold and its sectional curvature is lower bounded by $\kappa\leq 0$.
\end{ass}
\begin{ass}\label{diam}
The decision set $\N$ is a gsc-convex compact subset of $\M$ with diameter upper bounded by $D$, i.e., $\sup_{\x,\y\in\N} d(\x,\y)\leq D$. For optimistic online learning, we allow the player to choose decisions from $\N_{\delta M}$, which is defined in Definition \ref{def2}; the diameter then becomes $(D+2\delta M)$.
\end{ass}
\begin{ass}\label{grad}
The norms of the Riemannian gradients are bounded by $G$, i.e., $\sup_{\x\in\N}\|\nabla f_t(\x)\|\leq G.$ When improper learning is allowed, we assume $\sup_{\x\in\ndm}\|\nabla f_t(\x)\|\leq G.$
\end{ass}
\begin{defn}\label{def1}
Under Assumptions \ref{hada}, \ref{diam}, we denote
${\zeta}\coloneqq\sqrt{-\kappa} D \operatorname{coth}(\sqrt{-\kappa} D)$. When improper learning is allowed, ${\zeta}\coloneqq\sqrt{-\kappa} (D+2\delta M) \operatorname{coth}(\sqrt{-\kappa} (D+2\delta M))$, where $M$ is in Definition \ref{def2}. {Note that, on manifolds of zero sectional curvature ($\kappa=0$), we have $\zeta=\lim_{x\to 0}x\cdot\coth{x}=1$.}
\end{defn}
The seminal work of \cite{zinkevich2003online} shows that the classical OGD algorithm can minimize the general dynamic regret in Euclidean space. Motivated by this, we consider the Riemannian OGD (R-OGD) algorithm \citep{wang2021no}:
\begin{equation}
\label{alg:rogd}
\x_{t+1}=\Pi_{\N}\expmap_{\x_t}(-\eta\nabla f_t(\x_t)),
\end{equation}
which is a natural extension of OGD to the manifold setting. We show that R-OGD can also minimize the general dynamic regret on manifolds. Due to space limitations, we defer details to Appendix \ref{app:radar}.
\begin{thm}\label{ROGD}
Suppose Assumptions \ref{hada}, \ref{diam} and \ref{grad} hold. Then the general dynamic regret of R-OGD defined in Equation \eqref{alg:rogd} satisfies
\begin{equation}
\text{D-Regret}_T\leq \frac{D^2+2DP_T}{2\eta}+\frac{\eta{\zeta} G^2T}{2}.
\end{equation}
\end{thm}
Theorem \ref{ROGD} implies that R-OGD yields an $O(\frac{P_T+1}{\eta}+\eta T)$ general dynamic regret bound, which means the optimal step size is $\eta= O{\scriptstyle\left(\sqrt{\frac{1+P_T}{T}}\right)}$.
However, this configuration of $\eta$ is infeasible, since $P_T$ is unknown to the learner. Although a sub-optimal choice for $\eta$, i.e., $\textstyle{\eta=O\left(\frac{1}{\sqrt{T}}\right)}$, is available, the resulting algorithm suffers $O((1+P_T)\sqrt{T})$ regret.
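Before turning to the meta-expert construction, here is a minimal sketch of the R-OGD update in Equation \eqref{alg:rogd} with a fixed step size; \texttt{exp\_map} is a manifold oracle as sketched above, and \texttt{proj\_N} denotes a problem-specific metric projection onto $\N$ which we simply assume is available.
\begin{verbatim}
def r_ogd(x0, grads, eta, exp_map, proj_N):
    # R-OGD: x_{t+1} = Pi_N Exp_{x_t}(-eta * grad f_t(x_t))
    xs, x = [x0], x0
    for grad_f in grads:
        x = proj_N(exp_map(x, -eta * grad_f(x)))
        xs.append(x)
    return xs
\end{verbatim}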
{The meta-expert framework \citep{van2016metagrad} consists of a meta algorithm and several expert algorithm instances. The constructions are modular, so we can use different meta algorithms and expert algorithms to achieve different regret guarantees. For optimizing dynamic regret, the seminal work of \citet{zhang2018adaptive} proposes Ader based on this framework.} In every round $t$, each expert runs OGD with a different step size, and the meta algorithm applies Hedge to learn the best weights. The step sizes used by the experts are carefully designed so that there always exists an expert which is almost optimal. The regret of Ader is $O(\sqrt{(1+P_T)T})$, which is minimax-optimal in Euclidean space \citep{zhang2018adaptive}.
However, at first glance it is unclear how to extend Ader to manifolds, since we need to figure out the ``correct'' way to do averaging. In this paper, we resolve this problem using the \emph{Fr\'echet mean} and the \emph{geodesic mean}. Our proposed algorithm, called \textsc{Radar}\xspace, consists of $N$ instances of the expert algorithm (Algorithm \ref{alg:expert}), each of which runs R-OGD with a different step size, and a meta algorithm (Algorithm \ref{alg:meta}), which enjoys regret close to that of the best expert. We denote the set of all step sizes $\{\eta_i\}$ by $\H$. In the $t$-th round, the expert algorithms submit all $\x_{t,i}$'s ($i=1,\dots,N$) to the meta algorithm. Then the meta algorithm computes either the Fr\'{e}chet mean or the geodesic mean (see Algorithm \ref{alg:gsc} in Appendix \ref{app:lem} for details) as $\x_t$. After receiving $f_t$, the meta algorithm updates the weight of each expert $w_{t+1,i}$ via Hedge and sends $\nabla f_t(\x_{t,i})$ to the $i$-th expert, which computes $\x_{t+1,i}$ by R-OGD. The regret of the meta algorithm of \textsc{Radar}\xspace is bounded by Lemma \ref{meta-rader} below.
\begin{minipage}{0.488\textwidth}
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\xspace: Meta Algorithm}\label{alg:meta}
\KwData{Learning rate $\beta$, set of step sizes $\mathcal{H}$, initial weights $w_{1,i}=\frac{N+1}{i(i+1)N}$}
\For{$t=1,\dots,T$}{
Receive $\x_{t,i}$ from experts with stepsize $\eta_i$\\
$\x_t=\argmin_{\x\in\N}\sum_{i\in[N]}w_{t,i} d(\x,\x_{t,i})^2$\\
Observe the loss function $f_t$\\
Update $w_{t+1,i}$ by Hedge with $f_{t}(\mathbf{x}_{t,i})$\\
Send gradient $\nabla f_t(\x_{t,i})$ to each expert
}
\end{algorithm2e}
\end{minipage}
\begin{minipage}{0.466\textwidth}
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\xspace: Expert Algorithm}\label{alg:expert}
\KwData{ A step size $\eta_i$}
Let $\x_{1,i}^\eta$ be any point in $\N$\\
\For{$t=1,\dots,T$} {
Submit $\x_{t,i}$ to the meta algorithm\\
Receive gradient $\nabla f_t(\x_{t,i})$ from the meta algorithm\\
Update:\\
$\x_{t+1,i}=\Pi_{\N}\expmap_{\x_{t,i}}(-\eta_i\nabla f_t(\x_{t,i}))$\\
}
\end{algorithm2e}
\end{minipage}
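For concreteness, the two main ingredients of the meta step in Algorithm \ref{alg:meta}, the Hedge weight update and the weighted Fr\'echet mean, can be sketched as follows. This is only an illustrative sketch: the constrained minimization over $\N$ is approximated by a few unconstrained Karcher-mean iterations, and \texttt{exp\_map}/\texttt{log\_map} are manifold oracles as above.
\begin{verbatim}
import numpy as np

def hedge_update(w, expert_losses, beta):
    # w_{t+1,i} ~ w_{t,i} * exp(-beta * f_t(x_{t,i}))
    w = w * np.exp(-beta * np.asarray(expert_losses))
    return w / w.sum()

def frechet_mean(points, w, exp_map, log_map, steps=20):
    # approximately minimize sum_i w_i d(x, x_i)^2 by Riemannian
    # gradient descent; the direction is sum_i w_i Exp_x^{-1}(x_i)
    x = points[int(np.argmax(w))]
    for _ in range(steps):
        g = sum(wi * log_map(x, xi) for wi, xi in zip(w, points))
        x = exp_map(x, g)
    return x
\end{verbatim}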
\begin{lemma}\label{meta-rader}
Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, and setting $\beta=\sqrt{\frac{8}{G^2D^2T}}$, the regret of Algorithm \ref{alg:meta} satisfies
$$
\textstyle\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq\sqrt{\frac{G^2D^2T}{8}}\left(1+\ln\frac{1}{w_{1,i}}\right).
$$
\end{lemma}
We show that, by configuring the step sizes in $\H$ carefully, \textsc{Radar}\xspace ensures an $O(\sqrt{(1+P_T)T})$ bound on geodesic metric spaces.
\begin{thm}\label{RAder}
Set $\textstyle\mathcal{H}=\left\{\eta_i=2^{i-1}\sqrt{\frac{D^2}{G^2{\zeta} T}}\big| i\in[N]\right\}$
where $N=\lceil\frac{1}{2}\log_2(1+2T)\rceil+1$ and $\beta=\sqrt{\frac{8}{G^2D^2T}}$. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, for any comparator sequence $\u_1,\dots,\u_T\in\N$, the general dynamic regret of \textsc{Radar}\xspace satisfies
\begin{equation*}
\text{D-Regret}_T=O(\sqrt{{\zeta}(1+P_T)T}).
\end{equation*}
\end{thm}
\begin{remark}
Note that if $\M$ is Euclidean space, then ${\zeta}=1$ and we get $O(\sqrt{(1+P_T)T})$ regret, which is the same as in \cite{zhang2018adaptive}.
\end{remark}
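The step-size grid $\H$ of Theorem \ref{RAder} is straightforward to construct; a minimal sketch (the names are illustrative):
\begin{verbatim}
import math

def radar_stepsizes(D, G, zeta, T):
    # H = { 2^{i-1} * sqrt(D^2 / (G^2 * zeta * T)) : i in [N] },
    # with N = ceil(0.5 * log2(1 + 2T)) + 1, as in the theorem above
    N = math.ceil(0.5 * math.log2(1 + 2 * T)) + 1
    base = math.sqrt(D ** 2 / (G ** 2 * zeta * T))
    return [2 ** (i - 1) * base for i in range(1, N + 1)]
\end{verbatim}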
{A disadvantage of \textsc{Radar}\xspace is that $\Theta(\log T)$ gradient queries are required at each round. In Euclidean space, \citet{zhang2018adaptive} use a linear surrogate loss to achieve the same bound with $O(1)$ gradient queries. But on manifolds, the existence of such functions implies that the sectional curvature of the manifold is everywhere $0$~\citep{kristaly2016convexities}. It is interesting to investigate whether $\Omega(\log T)$ gradient queries are necessary to achieve dynamic regret on manifolds. We would also like to point out that $O(\log T)$ is reasonably small, and the work of \citet{zhang2018adaptive} still needs $O(\log T)$ computational complexity per round.}
Using the Busemann function as a bridge, we show the following dynamic regret lower bound, with proof deferred to Appendix \ref{app:lb}.
\begin{thm}\label{lb_dynamic}
There exists a comparator sequence which satisfies $\sumsT d(\u_t,\u_{t-1})\leq P_T$ and incurs $\Omega(\sqrt{(1+P_T)T})$
dynamic regret on Hadamard manifolds.
\end{thm}
Although the regret guarantee in Theorem \ref{RAder}
is optimal in $T$ and $P_T$ up to constants in view of the corresponding lower bound, it still depends on $T$ and thus cannot adapt to benign environments. In Euclidean space, the smoothness of losses enables adaptive regret bounds, including the gradient-variation bound \citep{chiang2012online} and the small-loss bound \citep{srebro2010smoothness}. It is then natural to ask whether similar bounds can be established on manifolds by assuming gsc-smoothness. We provide an affirmative answer to this question and show how to obtain problem-dependent bounds under the \textsc{Radar}\xspace framework.
\section{Gradient-variation Bound on Manifolds}
\label{radarv}
In this section, we show how to obtain the gradient-variation bound on manifolds under the \textsc{Radar}\xspace framework with alternative expert and meta algorithms.
\paragraph{Expert Algorithm.} For minimax optimization on Riemannian manifolds, \cite{zhang2022minimax} propose Riemannian Corrected Extra Gradient (RCEG), which performs the following iterates:
\begin{equation*}
\begin{split}
\textstyle&\x_{t}=\expmap_{\y_t}(-\eta \nabla f_{t-1}(\y_t))\\
&\y_{t+1}=\expmap_{\x_t}\left(-\eta\nabla f_t(\x_t)+\expmap_{\x_t}^{-1}(\y_t)\right).
\end{split}
\end{equation*}
However, this algorithm does not work in the constrained case, which has been left as an open problem \citep{zhang2022minimax}. The online improper learning setting~\citep{hazan2018online,pmlr-v134-baby21a} allows the decision set to be different from (usually larger than) the set of strategies we want to compete against. Under such a setting, we find the geometric distortion due to projection can be bounded in an elegant way, and
generalize RCEG to incorporate an optimism term $M_t\in T_{\y_t}\M$.
\begin{defn}\label{def2}
We use $M_t$ to denote the optimism at round $t$ and assume there exists $M$ such that $\|M_t\|\leq M$ for all $t$. We define $\N_c= \{\x \,|\, d(\x,\N)\leq c\}$ where $d(\x,\N)\coloneqq\inf_{\y\in\N}d(\x,\y)$. In the improper setting, we allow the player to choose decisions from $\ndm$.
\end{defn}
\begin{thm}\label{var-expert}
Suppose all losses $f_t$ are $L$-gsc-smooth on $\M$. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, the iterates
\begin{equation}\label{omd}
\begin{split}
\textstyle &\x_{t}'=\expmap_{\y_t}(-\eta M_t)\\
&\x_t=\Pi_{\ndm}\x_t'\\
&\y_{t+1}=\Pi_{\N}\expmap_{\x_t'}\left(-\eta\nabla f_t(\x_t')+\expmap_{\x_t'}^{-1}(\y_t)\right).
\end{split}
\end{equation}
satisfy
\begin{equation*}
\begin{split}
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)\leq\eta{\zeta}\sumT\|\nabla f_t(\y_t)-M_t\|^2+\frac{D^2+2DP_T}{2\eta}.
\end{split}
\end{equation*}
for any $\u_1,\dots,\u_T\in\N$ and $\eta\leq\frac{\delta M}{G+(G^2+2\zeta\delta^2M^2L^2)^{\frac{1}{2}}}$. Specifically, we achieve $\eta{\zeta}V_T+\frac{D^2+2DP_T}{2\eta}$ regret by choosing $M_t=\nabla f_{t-1}(\y_t)$. In this case, $M=G$ and we need $\eta\leq\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}$.
\end{thm}
\paragraph{Proof sketch.} We use the special case $M_t=\nabla f_{t-1}(\y_t)$ to illustrate the main idea of the proof. We first decompose $f_t(\x_t)-f_t(\u_t)$ into two terms,
\[
\begin{split}
& f_t(\x_t)-f_t(\u_t)={(f_t(\x_t)-f_t(\x_t'))}+{(f_t(\x_t')-f_t(\u_t))}\\
\leq&G\cdot d(\x_t,\x_t')+{(f_t(\x_t')-f_t(\u_t))}\leq \underbrace{G\cdot d(\x_t',\y_t)}_{\text{troublesome term } 1}+\underbrace{(f_t(\x_t')-f_t(\u_t))}_{\text{unconstrained RCEG}}
\end{split}
\]
where the first inequality holds because the gradient norms are bounded by $G$ (Assumption \ref{grad}), and the second one follows from the non-expansiveness of the projection.
For the unconstrained RCEG term, we have the following decomposition,
\[
\begin{split}
&f_t(\x_t')-f_t(\u_t)\leq \underbrace{\frac{1}{2\eta}(2\eta^2\zeta L^2-1)d(\x_t',\y_t)^2}_{\text{troublesome term }2}\\
+&\underbrace{\eta\zeta\|\nabla f_t(\y_t)-\nabla f_{t-1}(\y_t)\|^2}_{\eta\zeta V_T}+\underbrace{\frac{1}{2\eta}\lt(d(\y_t,\u_t)^2-d(\y_{t+1},\u_t)^2\rt)}_{\frac{D^2+2DP_T}{2\eta}}
\end{split}
\]
where the second and the third terms correspond to the gradient-variation term and the dynamic-regret term, respectively.
In the improper learning setting, we can show $d(\x_t',\y_t)\geq\delta G$. Combining both troublesome terms, it suffices to find $\eta$ which satisfies
\[
2\eta G+2\eta^2\zeta L^2\lambda-\lambda\leq 0,\;\forall \lambda\coloneqq d(\x_t',\y_t)\geq\delta G.
\]
\begin{remark}
{We generalize Theorem \ref{var-expert} from Hadamard manifolds to CAT$(\kappa)$ spaces in Appendix \ref{cat-proof}.} Note that although we allow the player to make improper decisions, $V_T$ is still defined on $\N$ instead of $\N_{\delta G}$. For the static setting, $P_T=0$ and the resulting regret bound is $O(\sqrt{V_T}+\frac{1}{\delta})$. Also, in this setting, we can use an adaptive step-size
\[
\eta_t=\min\lt\{\frac{1}{\sqrt{1+\sum_{s=2}^t\|\nabla f_t(\y_t)-\nabla f_{t-1}(\y_t)\|^2}},\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}\rt\}
\]
to eliminate the dependence on $V_T$.
\end{remark}
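A minimal sketch of a single iterate of Equation \eqref{omd} is given below; \texttt{proj\_N} and \texttt{proj\_NdM} stand for (assumed) metric projection oracles onto $\N$ and $\N_{\delta M}$, and in the gradient-variation instantiation the optimism is $M_t=\nabla f_{t-1}(\y_t)$.
\begin{verbatim}
def omd_step(y, M_t, grad_f, eta, exp_map, log_map,
             proj_N, proj_NdM):
    # optimistic step, improper projection, then corrected update
    x_prime = exp_map(y, -eta * M_t)   # x_t' = Exp_{y_t}(-eta M_t)
    x = proj_NdM(x_prime)              # x_t  = Pi_{N_{dM}} x_t'
    g = grad_f(x_prime)                # gradient queried at x_t'
    y_next = proj_N(exp_map(x_prime,
                            -eta * g + log_map(x_prime, y)))
    return x, y_next
\end{verbatim}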
\paragraph{Meta algorithm.} Intuitively, we can run OMD with different step sizes and
apply a meta algorithm to estimate the optimal step size. Previous studies in learning with multiple step sizes usually adopt Hedge to aggregate the experts' advice. However, the regret of Hedge is $O(\sqrt{T\ln N})$ and thus is undesirable for our purpose. Inspired by optimistic online learning \citep{rakhlin2013online,syrgkanis2015fast}, \cite{zhao2020dynamic} adopt Optimistic Hedge as the meta algorithm to get $O(\sqrt{(V_T+P_T)P_T})$ gradient-variation bound. After careful analysis, we show Optimistic Hedge works for gsc-convex losses regardless of the geometric distortion and get the desired gradient-variation bound.
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\xspacev: Expert Algorithm}\label{alg:expert_v}
\KwData{A step size $\eta_i$}
Let $\x_{1,i}^\eta$ be any point in $\N$\\
\For{$t=1,\dots,T$} {
Submit $\x_{t,i}$ to the meta algorithm\\
Receive gradient $\nabla f_t(\cdot)$ from the meta algorithm\\
Each expert runs Equation \eqref{omd} with $M_t=\nabla f_{t-1}(\y_t)$, $M=G$ and step size $\eta_i$\\
}
\end{algorithm2e}
\begin{algorithm2e}[H]
\caption{\textsc{Radar}\xspacev: Meta Algorithm}\label{alg:meta_v}
\KwData{A learning rate $\beta$, a set of step sizes $\H$, initial weights $w_{1,i}=w_{0,i}=\frac{1}{N}$}
\For{$t=1,\dots,T$} {
Receive all $\x_{t,i}$'s from experts with step size $\eta_i$\\
$\bx_t=\argmin_{\x\in\N_{\Gd}}\sum_{i\in[N]}w_{t-1,i} d(\x,\x_{t,i})^2$\\
Update $w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right)$ by Equation \eqref{radarv_opt}\\
$\x_t=\argmin_{\x\in\N_{\Gd}}\sum_{i\in[N]}w_{t,i} d(\x,\x_{t,i})^2$\\
Observe $f_t(\cdot)$ and send $\nabla f_t(\cdot)$ to experts\\
}
\end{algorithm2e}
We denote $\l_t,\m_t\in\mathbb{R}^N$ as the surrogate loss and the optimism at round $t$. The update rule of Optimistic Hedge is:
\[
\textstyle w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right),
\]
which achieves adaptive regret due to the optimism. The following technical lemma \citep{syrgkanis2015fast} is critical for our analysis of Optimistic Hedge, and the proof is in Appendix \ref{app:lem:opthedge} for completeness.
\begin{lemma}\label{optHedge}
For any $i\in[N]$, Optimistic Hedge satisfies
\begin{equation*}
\begin{split}
\textstyle \sum_{t=1}^T\lt(\la\w_t,\l_t\ra-\ell_{t,i}\rt)\leq\frac{2+\ln N}{\beta}+\beta\sum_{t=1}^T\|\l_t
-\m_t\|_{\infty}^2-\frac{1}{4\beta}\sum_{t=2}^T\|\w_t-\w_{t-1}\|_1^{2}.
\end{split}
\end{equation*}
\end{lemma}
Following the insightful work of \cite{zhao2020dynamic}, we also adopt the Optimistic Hedge algorithm as the meta algorithm, but there are some key differences in the design of the surrogate loss and optimism. To respect the Riemannian metric, we propose the following:
\begin{equation}\label{radarv_opt}
\begin{split}
&\ell_{t,i}=\la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra \\
&m_{t,i}=\la\nabla f_{t-1}(\bar{\x}_t),\expmap_{\bx_t}^{-1}\x_{t,i}\ra \\
\end{split}
\end{equation}
where $\x_t$ and $\bx_t$ are Fr\'echet averages of $\x_{t,i}$ w.r.t. linear combination coefficients $\w_{t}$ and $\w_{t-1}$ respectively. Under the Fr\'echet mean, we can show
\[
f_t(\x_t)-f_t(\x_{t,i})\leq\la\w_t,\l_t\ra-\ell_{t,i},
\]
which ensures that Lemma \ref{optHedge} can be applied to bound the meta-regret; the geodesic mean does not meet this requirement. We also emphasize that the design of the surrogate loss and optimism is highly non-trivial. As we will see in the proof of Theorem \ref{opthedge-reg}, the combination of the surrogate loss and the gradient-vanishing property of the Fr\'{e}chet mean ensures that Lemma \ref{optHedge} can be invoked to upper bound the regret of the meta algorithm. However, $\m_{t}$ cannot rely on $\x_t$; thus, we need to design the optimism in the tangent space at ${\bx_t}$, which incurs an extra cost. Luckily, under Equation \eqref{radarv_opt}, we find a reasonable upper bound on this geometric distortion by showing
\[
\begin{split}
\|\l_t-\m_t\|_{\infty}^2&\leq O(1)\cdot\sup_{\x\in\N_{\delta G}}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2+O(1)\cdot d(\x_t,\bx_t)^2\\
&\leq O(1)\cdot\sup_{\x\in\N_{\delta G}}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2+\Tilde{O}(1) \cdot \|\w_t-\w_{t-1}\|_1^2.
\end{split}
\]
Thus we can apply the negative term in Lemma \ref{optHedge} to eliminate undesired terms in $\|\l_t-\m_t\|_{\infty}^2$.
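A minimal sketch of the Optimistic Hedge weights and the surrogates in Equation \eqref{radarv_opt} is given below; \texttt{metric(p, u, v)} is an assumed oracle for the Riemannian inner product $\la\u,\v\ra_{p}$, and \texttt{log\_map} is as before.
\begin{verbatim}
import numpy as np

def optimistic_hedge_weights(cum_loss, optimism, beta):
    # w_{t,i} ~ exp(-beta * (sum_{s<t} l_{s,i} + m_{t,i}))
    z = -beta * (np.asarray(cum_loss) + np.asarray(optimism))
    w = np.exp(z - z.max())       # shift by the max for stability
    return w / w.sum()

def surrogate_and_optimism(x, x_bar, experts,
                           grad_t, grad_prev, log_map, metric):
    # l_{t,i} = <grad f_t(x_t),        Exp_{x_t}^{-1}  x_{t,i}>_{x_t}
    # m_{t,i} = <grad f_{t-1}(xb_t),   Exp_{xb_t}^{-1} x_{t,i}>_{xb_t}
    gx, gb = grad_t(x), grad_prev(x_bar)
    ell = np.array([metric(x, gx, log_map(x, xi)) for xi in experts])
    opt = np.array([metric(x_bar, gb, log_map(x_bar, xi))
                    for xi in experts])
    return ell, opt
\end{verbatim}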
Algorithms \ref{alg:expert_v} and \ref{alg:meta_v} describe the expert algorithm and the meta algorithm of \textsc{Radar}\xspacev. We show
the meta-regret and the total regret of \textsc{Radar}\xspacev in Theorems \ref{opthedge-reg} and \ref{var-reg}, respectively. Detailed proofs for this section are deferred to Appendix \ref{app:radarv}.
\begin{thm}\label{opthedge-reg}
Assume all losses are $L$-gsc-smooth on $\M$. Then under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, the regret of Algorithm \ref{alg:meta_v} satisfies:
\begin{equation*}
\begin{split}
\textstyle & \sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\\
\leq&\frac{2+\ln N}{\beta}+3D^2\beta (V_T+G^2)+\sumsT\left(3\beta (D^4L^2+D^2G^2{\zeta}^2)-\frac{1}{4\beta}\right)\|\w_t-\w_{t-1}\|_1^2.
\end{split}
\end{equation*}
\end{thm}
\begin{thm}\label{var-reg}
Let $\textstyle\beta=\min\left\{\sqrt{\frac{2+\ln N}{3D^2V_T}},\frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}\right\}$, $\textstyle\H=\left\{\eta_i=2^{i-1}\sqrt{\frac{D^2}{8{\zeta} G^2T}}\big|i\in[N]\right\}$, where $\textstyle N=\lt\lceil\frac{1}{2}\log\frac{8\zeta\delta^2G^2T}{({1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}})^2}\rt\rceil+1$. Assume all losses are $L$-gsc-smooth on $\M$ and allow improper learning. Under Assumptions \ref{hada}, \ref{diam} and \ref{grad},
the regret of \textsc{Radar}\xspacev satisfies
$$
\text{D-Regret}_T={O}\left(\sqrt{{\zeta}(V_T+(1+P_T)/\delta^2)(1+P_T)}\right).
$$
\end{thm}
In Theorem \ref{var-reg}, $\beta$ relies on $V_T$, and this dependence can be eliminated by showing a variant of Lemma \ref{optHedge} with an adaptive learning rate $\beta_t$.
\section{Small-loss Bound on Manifolds}
\label{radars}
For dynamic regret, the small-loss bound replaces the dependence on $T$ by $F_T=\sum_{t=1}^Tf_t(\u_t)$, which adapts to the function values of the comparator sequence.
In Euclidean space, \cite{srebro2010smoothness} show this adaptive regret by combining OGD with the self-bounding property of smooth and non-negative functions, which reads
$
\|\nabla f(\x)\|_2^2\leq 4L\cdot f(\x)
$
where $L$ is the smoothness constant. We show a similar conclusion on manifolds and defer the proof details of this part to Appendix \ref{app:radars}.
\begin{lemma}\label{self-bound}
Suppose $f:\M\rightarrow\R$ is both $L$-gsc-smooth and non-negative on its domain, where $\M$ is a Hadamard manifold. Then we have
$\|\nabla f(\x)\|^2\leq 2L\cdot f(\x)$.
\end{lemma}
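As a simple illustration of why the constant $2L$ is natural, consider the flat case ($\zeta=1$): the Euclidean function $f(\x)=\frac{L}{2}\|\x-\x^\star\|_2^2$ is $L$-smooth and non-negative, and
$$
\|\nabla f(\x)\|^2=L^2\|\x-\x^\star\|_2^2=2L\cdot f(\x),
$$
so the self-bounding inequality is attained with equality in this case.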
To facilitate the discussion, we denote $\bar{F}_T=\sumT f_t(\x_t)$ and $\bar{F}_{T,i}=\sumT f_t(\x_{t,i})$.
We use R-OGD as the expert algorithm (Algorithm \ref{alg:expert}) and Hedge with surrogate loss
$$
\ell_{t,i}=\la\nabla f_t(\x_t),\invexp_{\x_t}\x_{t,i}\ra
$$
as the meta algorithm (Algorithm \ref{alg:meta}). The following lemma
considers the regret of a single expert and shows that R-OGD achieves a small-loss dynamic regret bound on geodesic metric spaces.
\begin{lemma}\label{sml-expert}
Suppose all losses are $L$-gsc-smooth and non-negative on $\M$. Under Assumptions \ref{hada}, \ref{diam}, by choosing any step size $\eta\leq \frac{1}{2{\zeta}L}$, R-OGD achieves $O\left(\frac{P_T}{\eta}+\eta F_T\right)$ regret.
\end{lemma}
Again, we cannot directly set $\textstyle\eta=O\lt(\sqrt{\frac{1+P_T}{F_T}}\rt)$ because $P_T$ is unknown, which is precisely why we need the meta algorithm. The meta-regret of Hedge is as follows.
\begin{lemma}\label{sml-meta}
Suppose all losses are $L$-gsc-smooth and non-negative on $\M$.
Under Assumptions \ref{hada}, \ref{diam}, by setting the learning rate of Hedge as $\beta=\sqrt{\frac{(2+\ln N)}{D^2\bar{F}_T}}$, the regret of the meta algorithm satisfies
\begin{equation*}
\begin{split}
\textstyle\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq 8D^2L(2+\ln N)
+\sqrt{8D^2L(2+\ln N)\bar{F}_{T,i}}.
\end{split}
\end{equation*}
\end{lemma}
Now as we have the guarantee for both the expert algorithm and the meta algorithm, a direct combination yields the following general dynamic regret guarantee.
\begin{thm}\label{sml-reg}
Suppose all losses are $L$-gsc-smooth and non-negative on $\M$, and Assumptions \ref{hada} and \ref{diam} hold. Set $\H=\lt\{\eta_i=2^{i-1}\sqrt{\frac{D}{2 {\zeta} LGT}}\big|i\in[N]\rt\}$ where $N=\lc\frac{1}{2}\log\frac{GT}{2 LD{\zeta}}\rc+1$ and {\small $\beta=\sqrt{\frac{(2+\ln N)}{D^2\bar{F}_T}}$}. Then for any comparator sequence $\u_1,\dots,\u_T\in\N$, we have
$$
\text{D-Regret}_T=O(\sqrt{\zeta({\zeta}(P_T+1)+F_T)(P_T+1)}).
$$
\end{thm}
\begin{remark}
If we take $\M$ as Euclidean space, the regret guarantee shown in Theorem \ref{sml-reg} becomes { $O(\sqrt{(P_T+F_T+1)(P_T+1)})$}, which matches
the result of \cite{zhao2020dynamic}.
\end{remark}
\section{Best-of-both-worlds Bound on Manifolds}
\label{radarb}
We have now achieved both the gradient-variation bound and the small-loss bound on manifolds. To highlight the differences between them, we provide an example in Appendix \ref{bbw} showing that, in certain scenarios, one bound can be much tighter than the other.
The next natural question is: can we obtain a best-of-both-worlds bound on manifolds?
We initialize $N\coloneqq N^v+N^s$ experts as shown in Theorems \ref{var-reg} and \ref{sml-reg} where $N^v$ and $N^s$ are the numbers of experts running OMD and R-OGD, respectively. For each expert $i\in [N]$, the surrogate loss and the optimism are
\begin{equation}\label{radarb_opt}
\begin{split}
&\ell_{t,i}=\la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra \\
&m_{t,i}=\gamma_t\la\nabla f_{t-1}(\bar{\x}_t),\expmap_{\bx_t}^{-1}\x_{t,i}\ra. \\
\end{split}
\end{equation}
$\gamma_t$ controls the optimism used in the meta algorithm. When $\gamma_t=1$, the optimism for the gradient-variation bound is recovered, and $\gamma_t=0$ corresponds to the optimism for the small-loss bound.
Following \cite{zhao2020dynamic}, we use Hedge for two experts to get a best-of-both-worlds bound. The analysis therein relies on the strong convexity of $\|\nabla f_t(\x_t)-\m\|_2^2$ in $\m$, which is generally not the case on manifolds. So an alternative scheme needs to be proposed. We denote
\begin{equation}
\begin{split}
&m_{t,i}^v=\la\nabla f_{t-1}(\bx_t),\invexp_{\bx_t}\x_{t,i}\ra\\
&m_{t,i}^s=0,\\
\end{split}
\end{equation}
and let $\m_t^v$ and $\m_t^s$ be the corresponding vectors. Then $\m_t=\gamma_t\m_t^v+(1-\gamma_t)\m_t^s$, which is exactly the combination rule of Hedge. The function $\|\l_t-\m\|_{\infty}^2$ is convex with respect to $\m$ but not strongly convex, so we instead use $d_t(\m)\coloneqq \|\l_t-\m\|_{2}^2$ for Hedge, and the weight $\gamma_t$ is updated as
{
\begin{equation}\label{gamma}
{\gamma_t=\frac{\exp\lt(-\tau\lt(\sum_{r=1}^{t-1}d_r(\m_r^v)\rt)\rt)}{\exp\lt(-\tau\lt(\sum_{r=1}^{t-1}d_r(\m_r^v)\rt)\rt)+\exp\lt(-\tau\lt(\sum_{r=1}^{t-1}d_r(\m_r^s)\rt)\rt)} }
\end{equation}
}
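A minimal sketch of the update in Equation \eqref{gamma}; the cumulative quantities $\sum_{r<t}d_r(\m_r^v)$ and $\sum_{r<t}d_r(\m_r^s)$ are assumed to be maintained by the caller.
\begin{verbatim}
import numpy as np

def gamma_weight(cum_dv, cum_ds, tau):
    # two-expert Hedge over the cumulative optimism losses
    a, b = -tau * cum_dv, -tau * cum_ds
    m = max(a, b)                 # shift for numerical stability
    ea, eb = np.exp(a - m), np.exp(b - m)
    return ea / (ea + eb)
\end{verbatim}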
Algorithm \ref{alg:meta_b} summarizes the meta algorithm as well as the expert algorithm for \textsc{Radar}\xspaceb.
\begin{algorithm2e}
\caption{\textsc{Radar}\xspaceb: Algorithm}\label{alg:meta_b}
\KwData{Learning rate $\beta$ for Optimistic Hedge, learning rate $\tau=\frac{1}{8NG^2D^2}$ for hedging the two types of optimism, and a set $\mathcal{H}=\{\eta_i\}$ of $N=N^v+N^s$ step sizes}
\For{$t=1,\dots,T$} {
Run Algorithms \ref{alg:expert_v} and \ref{alg:expert} on the first $N^v$ experts and the last $N^s$ experts, respectively\\
$\bx_t=\argmin_{\x\in\N_{\Gd}}\sum_{i\in[N]}w_{t-1,i} d(\x,\x_{t,i})^2$\\
Update $\gamma_t$ as in Equation \eqref{gamma}\\
Update $w_{t,i}\propto\exp\left(-\beta\left(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\right)\right)$ by Equation \eqref{radarb_opt}\\
$\x_t=\argmin_{\x\in\N_{\Gd}}\sum_{i\in[N]}w_{t,i} d(\x,\x_{t,i})^2$\\
Observe $f_t$ and send $\nabla f_t(\cdot)$ to each expert
}
\end{algorithm2e}
In Theorem \ref{best-meta} we show the guarantee of the meta algorithm of \textsc{Radar}\xspaceb and postpone proof details of this section to Appendix \ref{app:radarb}.
\begin{thm}\label{best-meta}
Suppose all losses are $L$-gsc-smooth and non-negative on $\M$, and Assumptions \ref{hada}, \ref{diam}, \ref{grad} hold. Set the learning rates $\tau=\frac{1}{8NG^2D^2}$ and
\begin{equation*}
\begin{split}
\beta=\min\left\{\sqrt{\frac{2+\ln N}{N(D^2\min\{3(V_T+G^2),\bar{F}_T\}+8G^2D^2\ln 2)} }, \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}\right\}.
\end{split}
\end{equation*}
Then the regret of the meta algorithm satisfies
$$
\textstyle\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})=O\lt(\sqrt{\ln T\min\{V_T,\bar{F}_T\}}\rt)
$$
where $N=N^v+N^s$ and $\bar{F}_T=\sumT f_t(\x_t)$.
\end{thm}
Finally, we show the regret of \textsc{Radar}\xspaceb is bounded by the smaller of two problem-dependent bounds as follows.
\begin{thm}\label{best-reg}
Suppose all losses are $L$-gsc-smooth and non-negative on $\M$, and improper learning is allowed. Under Assumptions \ref{hada}, \ref{diam}, \ref{grad}, if we set the set of candidate step sizes as
\begin{equation}\label{radar_b}
\H=\H^v\cup\H^s,
\end{equation}
where $\H^v$ and $\H^s$ are the sets of step sizes from Theorem \ref{var-reg} with $N=N^v$ and Theorem \ref{sml-reg} with $N=N^s$, respectively, then Algorithm \ref{alg:meta_b} satisfies
{ $$
\text{D-Regret}_T= O\lt(\sqrt{\zeta( P_T(\zeta+1/\delta^2)
+B_T+1)(1+P_T)+\ln T\cdot B_T}\rt)
$$}
where $B_T\coloneqq\min\{V_T,F_T\}$.
\end{thm}
\begin{remark}
Compared to the result in \cite{zhao2020dynamic}, the bound of Theorem \ref{best-reg} has an additional $\sqrt{\ln T}$ factor, which comes from our construction of hedging the two types of optimism. It would be interesting to remove this dependence.
\end{remark}
\section{Conclusion}
In this paper, we consider adaptive online learning on Riemannian manifolds. Equipped with the idea of learning with multiple step sizes and optimistic mirror descent, we derive a series of no-regret algorithms that adapt to quantities reflecting the intrinsic difficulty of the online learning problem in different respects. In future work, it would be interesting
to investigate how to achieve optimistic online learning in the proper learning setting. Moving forward, one could further examine whether $\Omega(\log T)$ gradient queries in each round are truly necessary. A curvature-dependent lower bound, like the one in \citet{criscitiello2022negative}, for Riemannian online optimization also remains open.
\acks{We would like to thank three anonymous referees for constructive comments and suggestions. We gratefully thank the AI4OPT Institute for funding, as part of NSF Award 2112533. We also
acknowledge the NSF for their support through Award IIS-1910077. GW would like to acknowledge an ARC-ACO fellowship provided by Georgia Tech.}
\appendix
\section{Omitted Proof for Section \ref{radar}}
\label{app:radar}
\subsection{Proof of Theorem \ref{ROGD}}
\label{app:thm:rogd}
We denote $\x_{t+1}'=\expmap_{\x_t}(-\eta\nabla f_t(\x_t))$ and start from the geodesic convexity:
\begin{equation}\label{rogd-eq1}
\begin{split}
f_t(\x_t)-f_t(\u_t)\stackrel{\rm (1)}{\leq}&\langle\nabla f_t(\x_t),-\expmap_{\x_t}^{-1}(\u_t)\rangle\\
=&\frac{1}{\eta}\langle\expmap_{\x_t}^{-1}\x_{t+1}',\expmap_{\x_t}^{-1}\u_t\rangle\\
\stackrel{\rm (2)}{\leq} &\frac{1}{2\eta}\left(\|\expmap_{\x_t}^{-1}\u_t\|^2-\|\expmap_{\x_{t+1}'}^{-1}\u_t\|^2+{\zeta}\|\expmap_{\x_t}^{-1}\x_{t+1}'\|^2\right)\\
\stackrel{\rm (3)}{\leq}&\frac{1}{2\eta}\left(\|\expmap_{\x_t}^{-1}\u_t\|^2-\|\expmap_{\x_{t+1}}^{-1}\u_t\|^2\right)+\frac{\eta{\zeta}G^2}{2}\\
=&\frac{1}{2\eta}\left(\|\expmap_{\x_t}^{-1}\u_t\|^2-\|\expmap_{\x_{t+1}}^{-1}\u_{t+1}\|^2+\|\expmap_{\x_{t+1}}^{-1}\u_{t+1}\|^2-\|\expmap_{\x_{t+1}}^{-1}\u_t\|^2\right)+\frac{\eta{\zeta}G^2}{2}\\
\stackrel{\rm (4)}{\leq} &\frac{1}{2\eta}\left(\|\expmap_{\x_t}^{-1}\u_t\|^2-\|\expmap_{\x_{t+1}}^{-1}\u_{t+1}\|^2+2D\|\expmap_{\u_t}^{-1}\u_{t+1}\|\right)+\frac{\eta{\zeta}G^2}{2},
\end{split}
\end{equation}
where for the second inequality we use Lemma \ref{cos1}, while the third is due to Lemma \ref{proj} and Assumption \ref{grad}. For the last inequality, we invoke triangle inequality and Assumption \ref{diam}.
WLOG, we can assume $\u_{T+1}=\u_T$ and sum from $t=1$ to $T$:
\begin{equation}
\sum_{t=1}^T f_t(\x_t)-f_t(\u_t)\leq \frac{D^2}{2\eta}+\frac{DP_T}{\eta}+\frac{\eta{\zeta}G^2 T}{2}.
\end{equation}
\subsection{Proof of Lemma \ref{meta-rader}}
\label{app:lem:meta-radar}
This is a generalization of \cite[Theorem 2.2]{cesa2006prediction} to Riemannian manifolds. Let $L_{t,i}=\sum_{s=1}^tf_s(\x_{s,i})$ and $W_t=\sum_{i=1}^Nw_{1,i}e^{-\beta L_{t,i}}$. We have the following lower bound for $\ln W_T$,
$$
\ln(W_T)=\ln\lt(\sum_{i\in[N]}w_{1,i}e^{-\beta L_{T,i}}\rt)\geq -\beta\min_{i\in[N]}\lt(L_{T,i}+\frac{1}{\beta}\ln\frac{1}{w_{1,i}}\rt).
$$
For the next step, we try to get an upper bound on $\ln W_T$. When $t\geq 2$, we have
$$
\ln\lt(\frac{W_t}{W_{t-1}}\rt)=\ln\frac{\sum_{i\in[N]}w_{1,i}e^{-\beta L_{t-1,i}}e^{-\beta f_t(\x_{t,i})}}{\sum_{i\in[N]}w_{1,i}e^{-\beta L_{t-1,i}}}=\ln\lt(\sum_{i\in[N]}w_{t,i}e^{-\beta f_t(\x_{t,i})}\rt),
$$
where the updating rule of Hedge
$$
w_{t,i}=\frac{w_{1,i}e^{-\beta L_{t-1,i}}}{\sum_{j\in[N]}w_{1,j}e^{-\beta L_{t-1,j}}}
$$
is applied.
Therefore
\begin{equation}
\begin{split}
&\ln W_T=\ln W_1+\sumsT\ln\lt(\frac{W_t}{W_{t-1}}\rt)=\sumT\ln\lt(\sum_{i\in[N]}w_{t,i}e^{-\beta f_t(\x_{t,i})}\rt)\\
\leq&\sumT\lt(-\beta\sum_{i\in[N]}w_{t,i}f_t(\x_{t,i})+\frac{\beta^2G^2D^2}{8}\rt)\leq\sumT\lt(-\beta f_t(\x_t)+\frac{\beta^2G^2D^2}{8}\rt),
\end{split}
\end{equation}
where the first inequality follows from Hoeffding's inequality together with the fact that $f_t(\x^\star)\leq f_t(\x)\leq f_t(\x^\star)+G\cdot D$ holds for any $\x\in\N$ with $\x^\star=\argmin_{\x\in\N}f_t(\x)$, and the second inequality holds because both the Fr\'echet mean and the geodesic mean satisfy Jensen's inequality: for the Fr\'echet mean we apply Lemma \ref{jensen}, while Lemmas \ref{gsc-mean} and \ref{gsc-com} ensure that the geodesic mean satisfies the requirement.
Combining the lower and upper bound for $\ln W_T$, we see
$$
-\beta\min_{i\in[N]}\lt(L_{T,i}+\frac{1}{\beta}\ln\frac{1}{w_{1,i}}\rt)\leq\sumT\lt(-\beta f_t(\x_t)+\frac{\beta^2G^2D^2}{8}\rt).
$$
After simplifying, we get
$$
\sumT f_t(\x_t)-\min_{i\in[N]}\lt(\sumT f_t(\x_{t,i})+\frac{1}{\beta}\ln\frac{1}{w_{1,i}}\rt)\leq\frac{\beta G^2D^2T}{8}.
$$
Setting $\beta=\sqrt{\frac{8}{G^2D^2T}}$, we have
$$
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq\sqrt{\frac{G^2D^2T}{8}}\left(1+\ln\frac{1}{w_{1,i}}\right).
$$
\subsection{Proof of Theorem \ref{RAder}}
\label{app:thm:rader}
Each expert performs R-OGD, so by Theorem \ref{ROGD} we have
\begin{equation}
\sum_{t=1}^Tf_t(\x_{t,i})-f_t(\u_t)\leq\frac{D^2+2DP_T}{2\eta}+\frac{\eta{\zeta}G^2T}{2}.
\end{equation}
holds for any $i\in [N]$.
Now it suffices to verify that there always exists $\eta_k\in\mathcal{H}$ which is close to the optimal stepsize
\begin{equation}\label{eta_star}
\eta^\star=\sqrt{\frac{D^2+2DP_T}{{\zeta} G^2T}}.
\end{equation}
By Assumption \ref{diam},
$$
0 \leq P_{T}=\sum_{t=2}^{T}d(\u_{t-1},\u_t)\leq T D.
$$
Thus
$$
\sqrt{\frac{D^{2}}{TG^{2}{\zeta}}} \leq \eta^{*} \leq \sqrt{\frac{D^{2}+2 T D^{2}}{TG^{2}{\zeta}}}
$$
It is obvious that
$$
\min \mathcal{H}=\sqrt{\frac{D^{2}}{TG^{2}{\zeta}}}, \text { and } \max \mathcal{H} \geq 2\sqrt{\frac{D^{2}+2 T D^{2}}{TG^{2}{\zeta}}}
$$
Therefore, there exists $k\in[N-1]$ such that
\begin{equation}
\label{etak}
\eta_{k}=2^{k-1} \sqrt{\frac{D^{2}}{TG^{2}{\zeta}}} \leq \eta^{*} \leq 2 \eta_{k}
\end{equation}
The dynamic regret of the $k$-th expert is
\begin{equation}
\begin{split}
\label{expert-rader}
& \sum_{t=1}^{T}f_{t}(\x_{t,k})-\sum_{t=1}^{T} f_{t}\left(\mathbf{u}_{t}\right) \\
\stackrel{\rm (1)}{\leq} & \frac{D^{2}}{2 \eta_{k}}+\frac{DP_T}{\eta_{k}}+\left(\frac{\eta_{k} TG^{2}{\zeta}}{2}\right) \\
\stackrel{\rm (2)}{\leq} & \frac{D^{2}}{\eta^{*}}+\frac{2 DP_T}{\eta^{*}}+\left(\frac{\eta^{*} TG^{2}{\zeta}}{2}\right) \\
{=}& \frac{3}{2} \sqrt{TG^{2}{\zeta}\left(D^{2}+2 D P_{T}\right)}.
\end{split}
\end{equation}
The second inequality follows from Equation \eqref{etak} and we use Equation \eqref{eta_star} to derive the last equality.
Since the initial weight of the $k$-th expert satisfies
$$
w_{1,k}=\frac{N+1}{k(k+1)N}\geq\frac{1}{(k+1)^2},
$$
the regret of the meta algorithm with respect to the $k$-th expert is bounded by
\begin{equation}\label{eq:meta-rader}
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,k})\leq\sqrt{\frac{G^2D^2T}{8}}\left(1+2\ln(k+1)\right)
\end{equation}
in view of Lemma \ref{meta-rader}.
Combining Equations \eqref{expert-rader} and \eqref{eq:meta-rader}, we have
\begin{equation}
\begin{split}
&\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)\\
\leq& \frac{3}{2} \sqrt{TG^{2}{\zeta}\left(D^{2}+2 D P_{T}\right)}+\sqrt{\frac{G^2D^2T}{8}}\left(1+2\ln(k+1)\right)\\
\leq &\sqrt{2\lt(\frac{9}{4}TG^{2}{\zeta}\left(D^{2}+2 D P_{T}\right)+\frac{G^2D^2T}{8}\left(1+2\ln(k+1)\right)^2 \rt)}\\
=&O\left(\sqrt{{\zeta}(1+P_T)T}\right),
\end{split}
\end{equation}
where $\sqrt{a}+\sqrt{b}\leq\sqrt{2(a+b)}$ is applied to derive the second inequality, and the final equality follows from $\ln k=O(\log\log P_T)=o(\sqrt{P_T})$.
\subsection{Minimax Dynamic Regret Lower Bound on Hadamard Manifolds}
\label{app:lb}
In this part, we first establish an $\Omega(\sqrt{T})$ minimax static regret lower bound on Hadamard manifolds following the classical work of \citet{abernethy2008optimal}, then follow the reduction in \cite{zhang2018adaptive} to get its dynamic counterpart. We focus on the manifold of SPD matrices \citep{bhatia2009positive} $\S^n_{++}=\{p:p\in\mathbb{R}^{n\times n},p=p^T \text{ and } p\succ 0^{n\times n}\}$, which becomes Hadamard when equipped with the affine-invariant metric $\la U,V\ra_p=tr(p^{-1}Up^{-1}V)$, where $tr(\cdot)$ is the trace operator. The tangent space of $\S^n_{++}$ is $\S^n=\{U:U\in\mathbb{R}^{n\times n},U=U^T\}$. Under the affine-invariant metric, we have
\begin{equation*}
\begin{split}
&\expmap_pU=p^{\frac{1}{2}}\exp\lt(p^{-\frac{1}{2}}Up^{-\frac{1}{2}}\rt)p^{\frac{1}{2}} \\
&\invexp_pq=p^{\frac{1}{2}}\log\lt(p^{-\frac{1}{2}}qp^{-\frac{1}{2}}\rt)p^{\frac{1}{2}} \\
&d(p, q)=\sqrt{\sum_{i=1}^n(\log\lambda_i(q^{-\frac{1}{2}}pq^{-\frac{1}{2}}))^2}.
\end{split}
\end{equation*}
For technical reasons, we restrict to the submanifold of positive-definite diagonal matrices $\D_{++}^n=\{p:p\in\mathbb{R}^{n\times n},p_{i,i}>0, p \text{ is diagonal}\}$, whose tangent space is $\D^n=\{U:U\in\mathbb{R}^{n\times n}, U \text{ is diagonal} \}$. A key component of the proof is that the Busemann function \citep{ballmann2012lectures} on $\D_{++}^n$ equipped with the affine-invariant metric has a closed form, which we describe as follows.
\begin{defn}\citep{ballmann2012lectures}
Suppose $\M$ is a Hadamard manifold and $c:[0,\infty)\rightarrow\M$ is a geodesic ray on $\M$ with $\|\dot{c}(0)\|=1$. Then the Busemann function associated with $c$ is defined as
$$
b_c(p)=\lim_{t\rightarrow\infty}(d(p, c(t))-t).
$$
\end{defn}
Busemann functions enjoy the following useful properties.
\begin{lemma}\citep{ballmann2012lectures}\label{buse}
A Busemann function $b_c$ satisfies
\begin{enumerate}[label=\arabic*)]
\item $b_c$ is gsc-convex;
\item $\nabla b_c(c(t))=\dot{c}(t)$ for any $t\in[0,\infty)$;
\item $\|\nabla b_c(p)\|\leq 1$ for every $p\in \M$.
\end{enumerate}
\end{lemma}
\begin{lemma}\citep[Chapter \RomanNumeralCaps{2}.10]{bridson2013metric}
On $\D_{++}^n$, suppose $c(t)=\expmap_{I}(tX)$, $p=\expmap_{I}(Y)$ and $\|X\|=1$, then the Busemann function is
$$
b_c(p)=-\text{tr}(XY).
$$
\end{lemma}
\begin{proof}
We first compute
\begin{equation}
\begin{split}
d(p, c(t))^2=&d(\expmap_{I}(Y), \expmap_{I}(tX))^2\\
=&d(e^Y, e^{tX})^2\\
=&\sum_{i=1}^n(\log \lambda_i(e^{-t X+Y}))^2\\
=&d(I, e^{Y-tX})^2\\
=&d(I, \expmap_{I}(Y-tX))^2\\
=&\|Y-tX\|^2\\
=&tr((Y-tX)(Y-tX))\\
=&tr(Y^2)-2t\cdot tr(XY)+t^2\cdot tr(X^2),
\end{split}
\end{equation}
where we use facts that $\expmap_{I}(X)=e^X$ and $d(p, q)=\sqrt{\sum_{i=1}^n(\log\lambda_i(q^{-\frac{1}{2}}pq^{-\frac{1}{2}}))^2}$.
Meanwhile,
$$
\lim_{t\rightarrow\infty}\frac{d(p, c(t))}{t}=1
$$
due to the triangle inequality on $\triangle (p, c(0), c(t))$. Therefore,
\begin{equation*}
\begin{split}
b_c(p)=&\lim_{t\rightarrow\infty}(d(p, c(t))-t)\\
=&\lim_{t\rightarrow\infty}\frac{d(p, c(t))^2-t^2}{2t}\\
=&-tr(XY).
\end{split}
\end{equation*}
\end{proof}
\begin{remark}
We consider $\D_{++}^n$ to ensure $X$ and $Y$ commute thus also $e^Y$ and $e^{tX}$ commute. This is necessary to get a closed form of the Busemann function.
\end{remark}
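As a quick numerical sanity check of the closed form (a sketch reusing the SPD primitives \texttt{exp\_map} and \texttt{dist} sketched in the preliminaries): for diagonal $X$ with $\|X\|=1$ and diagonal $Y$, the quantity $d(p,c(t))-t$ should approach $-tr(XY)$ as $t$ grows.
\begin{verbatim}
import numpy as np
# reuses exp_map and dist for the affine-invariant metric

X = np.diag([3.0, -4.0, 0.0]) / 5.0  # diagonal, unit Frob. norm
Y = np.diag([0.2, -0.1, 0.5])
I = np.eye(3)
p = exp_map(I, Y)                    # p = e^Y
for t in [1.0, 10.0, 50.0]:
    ct = exp_map(I, t * X)           # c(t) = e^{tX}
    print(t, dist(p, ct) - t, -np.trace(X @ Y))
\end{verbatim}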
Now we describe the minimax game on $\D_{++}^n$. Each round $t$, the player chooses $p_t$ from $\N=\{p:d(p,I)\leq\frac{D}{2}\}=\{p:p=\expmap_{I}(Y),\|Y\|\leq\frac{D}{2}\}$, and the adversary is allowed to pick a geodesic $c^t$, which determines a loss function in
\begin{equation}
\begin{split}
\F_t=&\{\alpha_t G_t b_c(p): \|\dot{c}(0)\|=1, \alpha_t\in[0,1] \}\\
=&\{-\alpha_t G_t\, tr(X_tY): \|X_t\|=1, \alpha_t\in[0,1] \}\\
=&\{-tr(X_tY): \|X_t\|\leq G_t \}\\
\end{split}
\end{equation}
The domain is gsc-convex by Lemmas \ref{gsc-convex} and \ref{levelset}. Each loss function is gsc-convex and has gradient norm at most $G_t$ by the third item of Lemma \ref{buse}. WLOG, we assume $D=2$, and the value of the game is
\[\V_T(\N, \{\F_t\})\coloneqq\inf_{\|Y_1\| \leq 1}\sup_{\|X_1\| \leq G_1}\dots \inf_{\|Y_T\| \leq 1}\sup_{\|X_T\| \leq G_T} \lt[ \sumT -tr(X_tY_t)-\inf_{\|Y\|\leq 1}\sumT -tr(X_tY)\rt].\]
\begin{lemma}\label{lb1}
The value of the minimax game $\V_T$ can be written as
\[\V_T(\N, \{\F_t\})=\inf_{\|Y_1\| \leq 1}\sup_{\|X_1\| \leq G_1}\dots \inf_{\|Y_T\| \leq 1}\sup_{\|X_T\| \leq G_T} \lt[ \sumT -tr(X_tY_t)+\lt\|\sumT X_t\rt\|\rt]\]
\end{lemma}
\begin{proof}
This is obvious due to Cauchy–Schwarz inequality:
\[\inf_{\|Y\|\leq 1}\sumT -tr(X_tY)=-\sup_{\|Y\|\leq 1}tr\lt(\sumT X_t Y\rt)=-\lt\|\sumT X_t\rt\|.\]
\end{proof}
\begin{lemma}\label{lb2}
For $n>2$, the adversary can guarantee regret at least $\sqrt{\sumT G_t^2}$ regardless of the player's strategy.
\end{lemma}
\begin{proof}
Each round, after the player chooses $Y_t$, the adversary chooses $X_t$ such that $\|X_t\|=G_t$, $\la X_t,Y_t\ra=0$ and $\la X_t, \sum_{s=1}^{t-1}X_s\ra =0$. This is always possible when $n>2$. Under this strategy, $\sumT-tr(X_tY_t)=0$ and we can show $\lt\|\sumT X_t\rt\|=\sqrt{\sumT G_t^2}$ by induction. The case for $T=1$ is obvious. Assume $\lt\|\sum_{s=1}^{t-1}X_s\rt\|=\sqrt{\sum_{s=1}^{t-1}G_s^2}$, then
$$
\lt\|\sum_{s=1}^tX_s\rt\|=\lt\|\sum_{s=1}^{t-1}X_s+X_t\rt\|=\sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\|X_t\|^2}=\sqrt{\sum_{s=1}^tG_s^2}.
$$
where the second equality is due to $\la X_t, \sum_{s=1}^{t-1}X_s\ra =0$.
\end{proof}
\begin{lemma}\label{lb3}
Let $X_0=0$. If the player plays
$$
Y_t=\frac{\sum_{s=1}^{t-1}X_s}{\sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\sum_{s=t}^TG_s^2}},
$$
then
\[\sup_{\|X_1\| \leq G_1}\sup_{\|X_2\| \leq G_2}\dots \sup_{\|X_T\| \leq G_T} \lt[ \sumT -tr(X_tY_t)+\lt\|\sumT X_t\rt\|\rt]\leq\sqrt{\sumT G_t^2}.\]
\end{lemma}
\begin{proof}
Let $\Gamma_t^2=\sum_{s=t}^TG_s^2$, $\Gamma_{T+1}=0$ and $\Xt_t=\sum_{s=1}^tX_s$. We define
$$
\Phi_t(X_1,\dots, X_{t-1})=\sum_{s=1}^{t-1}-tr(X_sY_s)+\sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\Gamma_t^2}
$$
and $\Phi_1=\sqrt{\sumT G_t^2}$. We further let
$$
\Psi_t(X_1,\dots, X_{t-1})=\sup_{\|X_t\| \leq G_t}\dots \sup_{\|X_T\| \leq G_T} \lt[ \sum_{s=1}^T -tr(X_sY_s)+\lt\|\sum_{s=1}^T X_s\rt\|\rt]
$$
be the payoff of the adversary when the first $t-1$ moves are $X_1,\dots, X_{t-1}$ and the adversary plays optimally thereafter.
We prove the following claim by backward induction: for all $t\in\{1,\dots,T+1\}$,
$$
\Psi_t(X_1,\dots,X_{t-1})\leq\Phi_t(X_1,\dots,X_{t-1}).
$$
The case of $t=T+1$ is obvious because $\Psi_{T+1}=\Phi_{T+1}$. Assume the claim holds for $t+1$; we now show it for $t$.
\begin{equation}
\begin{split}
&\Psi_t(X_1,\dots,X_{t-1})\\
=&\sup_{\|X_t\|\leq G_t}\Psi_{t+1}(X_1,\dots,X_t)\\
\leq&\sup_{\|X_t\|\leq G_t}\Phi_{t+1}(X_1,\dots,X_t)\\
=&\sum_{s=1}^{t-1}-tr(X_sY_s)+\sup_{\|X_t\|\leq G_t}\lt[-tr(X_tY_t)+\sqrt{\lt\|\sum_{s=1}^tX_s\rt\|^2+\Gamma_{t+1}^2}\rt].
\end{split}
\end{equation}
Now it suffices to show
$$
\sup_{\|X_t\|\leq G_t}\lt[-tr(X_tY_t)+\sqrt{\lt\|\sum_{s=1}^tX_s\rt\|^2+\Gamma_{t+1}^2}\rt]\leq \sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\Gamma_t^2}
$$
to establish our argument. Recall that
$$
Y_t=\frac{\sum_{s=1}^{t-1}X_s}{\sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\sum_{s=t}^TG_s^2}},
$$
and denote $\Xt_{t-1}=\sum_{s=1}^{t-1}X_s$. It turns out that what we need to show is
$$
\sup_{\|X_t\|\leq G_t} -tr\lt(\frac{\la X_t, \Xtt\ra}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}\rt)+\sqrt{\|\Xtt+X_t\|^2+\Gamma_{t+1}^2}\leq\sqrt{\|\Xtt\|^2+\Gamma_t^2}.
$$
We use the Lagrange multiplier method to prove this argument. Let
$$
g(X_t)=-tr\lt(\frac{\la X_t, \Xtt\ra}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}\rt)+\sqrt{\|\Xtt+X_t\|^2+\Gamma_{t+1}^2}+\lambda(\|X_t\|^2-G_t^2),
$$
then the stationary point of $g$ satisfies
$$
\frac{\partial g(X_t)}{\partial X_t}=-\frac{\Xtt}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}+\frac{\Xtt+X_t}{\sqrt{\|\Xtt+X_t\|^2+\Gamma_{t+1}^2}}+2\lambda X_t=0
$$
and
$$\lambda(\|X_t\|^2-G_t^2)=0.$$
We first consider the case where $X_t$ is collinear with $\Xtt$.
When $\lambda=0$, we have $X_t=c\Xtt$ where
$$
c = \frac{\Gamma_{t+1}}{\Gamma_t}-1.
$$
If $\Xtt$ is collinear with $X_t$ and $\lambda\neq 0$, we know $\|X_t\|=G_t$, so $X_t=G_t\frac{\Xtt}{\|\Xtt\|}$ or $X_t=-G_t\frac{\Xtt}{\|\Xtt\|}$. Then we need to ensure
$$
g(c\Xtt)\leq\sqrt{\|\Xtt\|^2+\Gamma_t^2}
$$
holds for $c = \frac{\Gamma_{t+1}}{\Gamma_t}-1,\frac{G_t}{\|\Xtt\|}$ and $-\frac{G_t}{\|\Xtt\|}$.
By Lemma \ref{auxi}, it suffices to verify
$$
(c^2-1)\|\Xtt\|^2\Gamma_t^2+(\|\Xtt\|^2+\Gamma_t^2)\Gamma_{t+1}^2\leq\Gamma_t^4.
$$
If $c=\frac{\Gamma_{t+1}}{\Gamma_t}-1$, we have to ensure
\begin{equation}
\begin{split}
& (c^2-1)\|\Xtt\|^2\Gamma_t^2+(\|\Xtt\|^2+\Gamma_t^2)\Gamma_{t+1}^2-\Gamma_t^4\\
=&\lt(\frac{\Gtt^2}{\Gt^2}-2\frac{\Gtt}{\Gt}\rt)\|\Xtt\|^2\Gt^2+\|\Xtt\|^2\Gtt^2+\Gt^2\Gtt^2-\Gt^4\\
=&2(\Gtt-\Gt)\Gtt\|\Xtt\|^2+\Gt^2(\Gtt^2-\Gt^2)\leq 0.
\end{split}
\end{equation}
For the case where $c^2=\frac{G_t^2}{\|\Xtt\|^2}$, we have
\begin{equation}
\begin{split}
& (c^2-1)\|\Xtt\|^2\Gamma_t^2+(\|\Xtt\|^2+\Gamma_t^2)\Gamma_{t+1}^2-\Gamma_t^4\\
=&\lt(\frac{G_t^2}{\|\Xtt\|^2}-1\rt)\|\Xtt\|^2\Gt^2+(\|\Xtt\|^2+\Gt^2)\Gtt^2-\Gt^4\\
=&\Gt^2(G_t^2+\Gtt^2-\Gt^2)-G_t^2\|\Xtt\|^2\\
=&-G_t^2\|\Xtt\|^2\leq 0.
\end{split}
\end{equation}
The only case left is when $X_t$ is not parallel to $\Xtt$.
$\lambda=0$ implies $X_t=0$ and thus
$$
g(0)=\sqrt{\|\Xtt\|^2+\Gtt^2}\leq\sqrt{\|\Xtt\|^2+\Gt^2}.
$$
If $\lambda\neq 0$ then $\|X_t\|=G_t$. We have
$$
-\frac{\Xtt}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}+\frac{\Xtt}{\sqrt{\|\Xtt+X_t\|^2+\Gtt^2}}=0
$$
which in turn implies $\la X_t,\Xtt\ra=0$. This is the maximum point of $g$ as now
$$
g(X_t)=\sqrt{\|\Xtt\|^2+\Gt^2}.
$$
so the claimed bound holds in this case as well. Thus the induction step is complete and the lemma is established.
\end{proof}
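As an illustrative aside (not part of the proof), the key one-step inequality above can be checked numerically. The following Python sketch, assuming the trace inner product and the Frobenius norm on symmetric matrices and using arbitrary constants, samples feasible moves $X_t$ and verifies that the objective never exceeds $\sqrt{\|\Xtt\|^2+\Gamma_t^2}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def rand_sym(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

n, G_t, Gam_next = 3, 1.5, 2.0
Gam_t = np.sqrt(G_t**2 + Gam_next**2)      # Gamma_t^2 = G_t^2 + Gamma_{t+1}^2
Xbar = 3.0 * rand_sym(n)                   # stands for sum_{s<t} X_s
rhs = np.sqrt(np.linalg.norm(Xbar)**2 + Gam_t**2)

worst = -np.inf
for _ in range(10000):
    X = rand_sym(n)
    X *= G_t * rng.uniform() / np.linalg.norm(X)   # enforce ||X_t|| <= G_t
    lhs = (-np.trace(X @ Xbar) / rhs
           + np.sqrt(np.linalg.norm(Xbar + X)**2 + Gam_next**2))
    worst = max(worst, lhs)

print(worst <= rhs + 1e-9)                 # expected: True
\end{verbatim}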
\begin{lemma}\label{auxi}
$$
-tr\lt(\frac{\la X_t, \Xtt\ra}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}\rt)+\sqrt{\|\Xtt+X_t\|^2+\Gamma_{t+1}^2}\leq\sqrt{\|\Xtt\|^2+\Gamma_t^2}.
$$ holds for $X_t=c \Xtt$ iff
$$
(c^2-1)\|\Xtt\|^2\Gamma_t^2+(\|\Xtt\|^2+\Gamma_t^2)\Gamma_{t+1}^2\leq\Gamma_t^4.
$$
\end{lemma}
\begin{proof}
The statement we want to show is
$$
-\frac{c\|\Xtt\|^2}{\sqrt{\|\Xtt\|^2+\Gamma_t^2}}+\sqrt{\|\Xtt\|^2(1+c)^2+\Gamma_{t+1}^2}\leq \sqrt{\|\Xtt\|^2+\Gamma_t^2}.
$$
Let $\alpha=\|\Xtt\|^2$, $\beta=\Gamma_t^2$ and $\gamma=\Gamma_{t+1}^2$. Moving the first term to the right-hand side, squaring, and simplifying, this is equivalent to
$$
(c^2-1)\alpha\beta+(\alpha+\beta)\gamma\leq \beta^2.
$$
The claim follows after substituting back $\alpha, \beta, \gamma$.
\end{proof}
\begin{thm}\label{lb_static}
There exists a game on $\D_{++}^n$ for which the minimax regret can be computed exactly. Specifically,
the decision set of the player is $\N=\{p:p=\expmap_{I}(Y),\|Y\|\leq\frac{D}{2}\}$, and the adversary is allowed to pick a loss function in
$$
\F_t=\{\alpha_t G_t b_c(p): \|\dot{c}(0)\|=1, \alpha_t\in[0,1] \}=\{-tr(X_tY): \|X_t\|\leq G_t \}.
$$
Then the minimax value of the game is
$$
\V_T(\N,\{\F_t\})=\frac{D}{2}\sqrt{\sumT G_t^2}.
$$
In addition, the optimal strategy of the player is
$$
Y_t=\frac{\sum_{s=1}^{t-1}X_s}{\sqrt{\lt\|\sum_{s=1}^{t-1}X_s\rt\|^2+\sum_{s=t}^TG_s^2}}.
$$
\end{thm}
\begin{proof}
The theorem is a direct consequence of Lemmas \ref{lb1}, \ref{lb2}, and \ref{lb3}.
\end{proof}
\begin{thm}
For any $P_T\in[0,TD]$ and comparator sequences satisfying $\sumsT d(\u_t,\u_{t-1})\leq P_T$,
the minimax dynamic regret on Hadamard manifolds is lower bounded by $\Omega(G\sqrt{T(D^2+DP_T)})$.
\end{thm}
\begin{proof}
We combine Theorem \ref{lb_static} with a reduction in \cite{zhang2018adaptive} to finish the proof. By Theorem \ref{lb_static} we have
$$
\V_T=\inf_{\|Y_1\| \leq 1}\sup_{\|X_1\| \leq G}\dots \inf_{\|Y_T\| \leq 1}\sup_{\|X_T\| \leq G} \lt[ \sumT -tr(X_tY_t)-\inf_{\|Y\|\leq 1}\sumT -tr(X_tY)\rt]=\frac{GD\sqrt{T}}{2}.
$$
Note that the path-length is upper bounded by $TD$. For any $\tau\in[0, TD]$, we define the set of comparators with path-length bounded by $\tau$ as
$$
C(\tau)=\lt\{\u_1,\dots,\u_T\in\N:\sumsT d(\u_t,\u_{t-1})\leq\tau\rt\}
$$
where $\N=\{\u:d(I, \u)\leq\frac{D}{2}\}$ is a gsc-convex subset and the minimax dynamic regret w.r.t. $C(\tau)$ is
$$
\V_T(C(\tau))=\inf_{\small\|Y_1\| \leq 1}\sup_{\small\|X_1\| \leq G}\dots \inf_{\small\|Y_T\| \leq 1}\sup_{\small\|X_T\| \leq G} \lt[ \sumT -tr(X_tY_t)-\inf_{\u_1,\dots,\u_T\in C(\tau)}\sumT -tr(X_t\invexp_I \u_t)\rt].
$$
We distinguish two cases. When $\tau\leq D$, we invoke the minimax static regret directly to get
\begin{equation}\label{eq-lb1}
\V_T(C(\tau))\geq \V_T=\frac{GD\sqrt{T}}{2}.
\end{equation}
For the case of $\tau\geq D$, WLOG, we assume $\lc\tau/D\rc$ divides $T$ and let $L$ be the quotient. We construct a subset of $C(\tau)$, named $C'(\tau)$, which contains comparators that are constant on each block of $L$ consecutive rounds. Specifically,
$$
C'(\tau)=\lt\{\u_1,\dots,\u_T\in\N:\u_{(i-1)L+1}=\dots=\u_{i L},\forall i\in [1,\lc\tau/D\rc]\rt\}.
$$
Note that the path-length of comparators in $C'(\tau)$ is at most $(\lc\tau/D\rc-1)D\leq \tau$, which implies $C'(\tau)$ is a subset of $C(\tau)$. Thus we have
\begin{equation}\label{eq-lb2}
\V_T(C(\tau))\geq \V_T(C'(\tau)).
\end{equation}
The point of introducing $C'(\tau)$ is that we can set $\u_{(i-1)L+1}=\dots=\u_{iL}$ to be the offline minimizer of the $i$-th segment and invoke the minimax lower bound for the static regret on each segment. Thus we have
\begin{equation}\label{eq-lb3}
\begin{split}
&\V_T(C'(\tau))\\
=&\inf_{\small\|Y_1\| \leq 1}\sup_{\small\|X_1\| \leq G}\dots \inf_{\small\|Y_T\| \leq 1}\sup_{\small\|X_T\| \leq G} \lt[ \sumT -tr(X_tY_t)-\inf_{\u_1,\dots,\u_T\in C'(\tau)}\sumT -tr(X_t\invexp_I \u_t)\rt]\\
=&\inf_{\small\|Y_1\| \leq 1}\sup_{\small\|X_1\| \leq G}\dots \inf_{\small\|Y_T\| \leq 1}\sup_{\small\|X_T\| \leq G} \lt[ \sumT -tr(X_tY_t)-\sum_{i=1}^{\lc\tau/D\rc}\inf_{\|Y\|\leq 1}\sum_{t=(i-1)L+1}^{iL} -tr(X_tY)\rt]\\
=&\lc\tau/D\rc\frac{GD\sqrt{L}}{2}=\frac{GD\sqrt{T\lc\tau/D\rc}}{2}\geq\frac{G\sqrt{TD\tau}}{2}.
\end{split}
\end{equation}
Combining Equations \eqref{eq-lb1}, \eqref{eq-lb2} and \eqref{eq-lb3} yields
$$
\V_T(C(\tau))\geq \frac{G}{2}\max(D\sqrt{T},\sqrt{TD\tau})=\Omega(G\sqrt{T(D^2+D\tau)}).
$$
\end{proof}
\section{Omitted Proof for Section \ref{radarv}}
\label{app:radarv}
\subsection{Proof of Theorem \ref{var-expert}}
\label{app:thm:var-expert}
We first argue $\N_c$ is gsc-convex for any $c\geq 0$ to ensure the algorithm is well-defined. By Lemma \ref{gsc-convex}, $d(\x, \N)$ is gsc-convex on Hadamard manifolds. The sub-level set of a gsc-convex function is a gsc-convex set due to Lemma \ref{levelset}, which implies $\N_c$ is gsc-convex.
We notice that
$$
f_t(\x_t)-f_t(\u_t)=(f_t(\x_t)-f_t(\x_t'))+(f_t(\x_t')-f_t(\u_t))
$$
and derive upper bounds for the two terms individually. If $\x_t'\in\ndm$ then $\x_t'=\x_t$ and $f_t(\x_t)-f_t(\x_t')=0$. If this is not the case, by Lemma \ref{proj}, we have
$$
d(\x_t', \x_t)\leq d(\x_t',\z)\leq d(\x_t', \y_t)$$ where $\z$ is the point at which the geodesic segment connecting $\x_t'$ and $\y_t$ enters $\ndm$. Thus
\begin{equation}\label{var-exp-eq0}
f_t(\x_t)-f_t(\x_t')\leq\la\nabla f_t(\x_t),-\invexp_{\x_t}{\x_t'}\ra\leq\|\nabla f_t(\x_t)\|\cdot d(\x_t',\x_t)\leq G\cdot d(\x_t', \y_t),
\end{equation}
where we notice $\x_t\in\ndm$ and use Assumption \ref{grad}.
Let $\y_{t+1}'=\expmap_{\x_t'}\left(-\eta\nabla f_t(\x_t')+\expmap_{\x_t'}^{-1}(\y_t)\right)$. The second term $f_t(\x_t')-f_t(\u_t)$ can be bounded by
\begin{equation}\label{var-exp-eq1}
\begin{split}
&f_t(\x_t')-f_t(\u_t)\stackrel{\rm (1)}{\leq} -\langle\expmap_{\x_t'}^{-1}(\u_t),\nabla f_t(\x_t')\rangle\\
=&\frac{1}{\eta}\langle\expmap_{\x_t'}^{-1}(\u_t),\expmap_{\x_t'}^{-1}(\y_{t+1}')-\expmap_{\x_t'}^{-1}(\y_t)\rangle\\
\stackrel{\rm (2)}{\leq}&\frac{1}{2\eta}\left({\zeta}d(\x_t',\y_{t+1}')^2+d(\x_t',\u_t)^2-d(\y_{t+1}',\u_t)^2\right)-\frac{1}{2\eta}\left(d(\x_t',\y_t)^2+d(\x_t',\u_t)^2-d(\y_t,\u_t)^2\right)\\
=&\frac{1}{2\eta}\left({\zeta}d(\x_t',\y_{t+1}')^2-d(\x_t',\y_t)^2+d(\y_t,\u_t)^2-d(\y_{t+1}',\u_t)^2\right)\\
=&\frac{1}{2\eta}\left({\zeta}\|-\eta\nabla f_t(\x_t')+\expmap_{\x_t'}^{-1}(\y_t)\|^2
-d(\x_t',\y_t)^2+d(\y_t,\u_t)^2-d(\y_{t+1}',\u_t)^2\right)\\
=&\frac{1}{2\eta}\left({\zeta}\|-\eta\nabla f_t(\x_t')+\eta\Ga_{\y_t}^{\x_t'}M_t\|^2
-d(\x_t',\y_t)^2+d(\y_t,\u_t)^2-d(\y_{t+1}',\u_t)^2\right)\\
\stackrel{\rm (3)}{\leq}&\frac{1}{2\eta}\left({\zeta}\|-\eta\nabla f_t(\x_t')+\eta\Ga_{\y_t}^{\x_t'}M_t\|^2
-d(\x_t',\y_t)^2+d(\y_t,\u_t)^2-d(\y_{t+1},\u_t)^2\right)
\end{split}
\end{equation}
where the second inequality follows from Lemmas \ref{cos1} and \ref{cos2} and the third one is due to the non-expansive property of projection onto Hadamard manifolds. We apply $\Ga_{\x}^{\y}\expmap_{\x}^{-1}\y=-\expmap_{\y}^{-1}\x$ to derive the last equality.
Now we can get the desired squared term $\|\nabla f_t(\y_t)-M_t\|^2$ by considering
\begin{equation}\label{var-exp-eq2}
\begin{split}
&\|\nabla f_t(\x_t')-\Ga_{\y_t}^{\x_t'}M_t\|^2\\
=&\|\nabla f_t(\x_t')-\Ga_{\y_t}^{\x_t'}\nabla f_t(\y_t)+\Ga_{\y_t}^{\x_t'}\nabla f_t(\y_t)-\Ga_{\y_t}^{\x_t'}M_t\|^2\\
\leq& 2\left(\|\nabla f_t(\x_t')-\Ga_{\y_t}^{\x_t'}\nabla f_t(\y_t)\|^2+\|\Ga_{\y_t}^{\x_t'}\nabla f_t(\y_t)-\Ga_{\y_t}^{\x_t'}M_t\|^2\right)\\
\leq &2L^2d(\x_t',\y_t)^2+2\|\nabla f_t(\y_t)-M_t\|^2,
\end{split}
\end{equation}
where in the first inequality we use the fact that $\|\x+\y\|^2\leq 2\|\x\|^2+2\|\y\|^2$ holds for any norm, and the second inequality is due to the smoothness of $f$ together with the fact that parallel transport is an isometry. Combining Equations \eqref{var-exp-eq0}, \eqref{var-exp-eq1} and \eqref{var-exp-eq2}, we have
\begin{equation}\label{var-exp-eq3}
\begin{split}
&f_t(\x_t)-f_t(\u_t)\\
\leq&G d(\x_t', \y_t)+\frac{\eta{\zeta}}{2}\left(2\|\nabla f_t(\y_t)-M_t\|^2+2L^2d(\x_t',\y_t)^2\right)\\
& -\frac{1}{2\eta}d(\x_t',\y_t)^2+\frac{1}{2\eta}(d(\y_t,\u_t)^2-d(\y_{t+1},\u_t)^2)\\
\leq&\eta{\zeta}\|\nabla f_t(\y_t)-M_t\|^2+\frac{1}{2\eta}\left(2\eta G+2\eta^2{\zeta}L^2d(\x_t',\y_t)-d(\x_t',\y_t)\right)d(\x_t',\y_t)\\
& +\frac{1}{2\eta}(d(\y_t,\u_t)^2-d(\y_{t+1},\u_t)^2).
\end{split}
\end{equation}
Now we show
\begin{equation}\label{key}
\frac{1}{2\eta}\left(2\eta G+2\eta^2{\zeta}L^2d(\x_t',\y_t)-d(\x_t',\y_t)\right)d(\x_t',\y_t)\leq 0
\end{equation}
holds for any $t\in[T]$. First we consider the case $d(\x_t',\y_t)\leq \delta M$, which means $\x_t'\in\ndm$ and hence $f_t(\x_t)=f_t(\x_t')$. In this case the $2\eta G$ term in Equation \eqref{key} is not needed (it only bounded $f_t(\x_t)-f_t(\x_t')$, which is now zero), so it suffices to have
$$
2\eta^2{\zeta}L^2d(\x_t',\y_t)-d(\x_t',\y_t)\leq 0,$$
which is obviously true by considering our assumption on $\eta$:
$$\eta\leq\frac{\delta M}{G+(G^2+2\zeta\delta^2M^2L^2)^{\frac{1}{2}}}\leq\frac{1}{\sqrt{2\zeta L^2}}.$$
When $d(\x_t', \y_t)\geq\delta M$, to simplify the proof, we denote $\lambda=d(\x_t', \y_t)$ and try to find $\eta$ such that
\begin{equation}\label{quadratic}
h(\eta;\lambda)\coloneqq 2\eta G+2\eta^2\zeta L^2\lambda-\lambda\leq 0
\end{equation}
holds for any $\lambda\geq \delta M$. We denote the only non-negative root of $h(\eta;\lambda)$ as $\eta(\lambda)$, which can be solved explicitly as
$$
\eta(\lambda)=\frac{-G+(G^2+2\zeta \lambda^2L^2)^{\frac{1}{2}}}{2\zeta\lambda L^2}.
$$
Applying Lemma \ref{lem:tech} with $a=G$ and $b=2\zeta L^2$, we know $\eta(\lambda)$ increases on $[0,\infty)$. Thus $\eta(\lambda)\geq\eta(\delta M)$ holds for any $\lambda\geq\delta M$. Combining with the fact that $h(0;\lambda)=-\lambda<0$, we know $h(\eta;\lambda)\leq 0$ holds for any $\eta\leq\eta(\lambda)$, so we can simply set
$$\eta\leq \min_{\lambda} \eta(\lambda)=\eta\lt(\delta M\rt)=\frac{\delta M}{G+(G^2+2\zeta\delta^2M^2L^2)^{\frac{1}{2}}}
$$
to ensure $h(\eta;\lambda)\leq 0$ always holds.
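For concreteness, the monotonicity argument above can also be checked numerically. The following Python sketch (an illustrative aside with arbitrary constants standing in for $G$, $\zeta$, $L$ and $\delta M$) verifies that $\eta(\delta M)$ matches the chosen step size and that $h(\eta;\lambda)\leq 0$ for all $\lambda\geq\delta M$.
\begin{verbatim}
import numpy as np

# Arbitrary illustrative constants (not taken from the paper).
G, zeta, L, dM = 1.0, 2.0, 3.0, 0.5

def eta_root(lam):
    # unique non-negative root of h(eta; lam) = 2*eta*G + 2*eta^2*zeta*L^2*lam - lam
    return (-G + np.sqrt(G**2 + 2 * zeta * lam**2 * L**2)) / (2 * zeta * lam * L**2)

def h(eta, lam):
    return 2 * eta * G + 2 * eta**2 * zeta * L**2 * lam - lam

eta = dM / (G + np.sqrt(G**2 + 2 * zeta * dM**2 * L**2))   # step size from the proof
lams = np.linspace(dM, 100 * dM, 1000)
print(np.isclose(eta, eta_root(dM)))            # the two closed forms agree
print(np.all(np.diff(eta_root(lams)) >= 0))     # eta(lambda) is non-decreasing
print(np.all(h(eta, lams) <= 1e-12))            # h(eta; lambda) <= 0 for lambda >= dM
\end{verbatim}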
Now it suffices to bound
\begin{equation}
\begin{split}
&\sum_{t=1}^T\frac{1}{2\eta}(d(\y_t,\u_t)^2-d(\y_{t+1},\u_t)^2)\\
\leq& \frac{d(\y_1,\u_1)^2}{2\eta}+\sum_{t=2}^T\frac{\left(d(\y_t,\u_t)^2-d(\y_t,\u_{t-1})^2\right)}{2\eta}\\
\leq&\frac{D^2}{2\eta}+2D\frac{\sum_{t=2}^Td(\u_t,\u_{t-1})}{2\eta}=\frac{D^2+2DP_T}{2\eta}.
\end{split}
\end{equation}
Finally, summing Equation \eqref{var-exp-eq3} over $t\in[T]$ and applying the two bounds above, we obtain
\begin{equation}
\begin{split}
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)\leq\eta{\zeta}\sumT\|\nabla f_t(\y_t)-M_t\|^2+\frac{D^2+2DP_T}{2\eta}.
\end{split}
\end{equation}
\subsection{Extension of Theorem \ref{var-expert} to CAT\texorpdfstring{$(\kappa)$}{(kappa)} Spaces}
\label{cat-proof}
In this part, we show how to obtain an optimistic regret bound on CAT$(\kappa)$ spaces, whose sectional curvature is upper bounded by $\kappa$. Note that Hadamard manifolds are complete CAT$(0)$ spaces. To proceed, we make the following assumption.
\begin{ass}\label{cat}
The sectional curvature of manifold $\M$ satisfies $-\kappa_1\leq\kappa\leq\kappa_2$ where $\kappa_1\geq 0$. We define
$$
\D(\kappa)\coloneqq \begin{cases}
\infty,\quad &\kappa\leq 0 \\
\frac{\pi}{\sqrt{\kappa}},\quad &\kappa>0.
\end{cases} $$
The diameter of the gsc-convex set $\N\subset \M$ is $D$ and we assume $D+2\delta M\leq \D(\kappa_2)$. The gradient satisfies $\sup_{\x\in\ndm}\|\nabla f_t(\x)\|\leq G.$
\end{ass}
\begin{lemma}\cite[Corollary 2.1]{alimisis2020continuous}\label{cos3}
Let $\M$ be a Riemannian manifold with sectional curvature upper bounded by $\kappa_2$ and $\N\subset\M$ be a gsc-convex set with diameter upper bounded by $\D(\kappa_2)$. For a geodesic triangle that lies entirely within $\N$ with side lengths $a, b, c$, we have
$$
a^{2} \geq {\xi}(\kappa_2, D) b^{2}+c^{2}-2 b c \cos A
$$
where ${\xi}(\kappa_2, D)=\sqrt{-\kappa_2} D \operatorname{coth}(\sqrt{-\kappa_2} D)$ when $\kappa_2\leq 0$ and ${\xi}(\kappa_2, D)=\sqrt{\kappa_2} D \operatorname{cot}(\sqrt{\kappa_2} D)$ when $\kappa_2> 0$.
\end{lemma}
\begin{defn}
We define $\zeta=\zeta(-\kappa_1, D+2\delta M)$ and $\xi=\xi(\kappa_2, D+2\delta M)$ where $\xi(\cdot,\cdot)$ and $\zeta(\cdot,\cdot)$ are defined in Lemmas \ref{cos3} and \ref{cos1}, respectively.
\end{defn}
\begin{lemma}\citep{bridson2013metric}\label{cat2}
For a CAT$(\kappa)$ space $\M$, a ball of diameter smaller than $\D(\kappa)$ is convex. Let $C$ be a convex subset in $\M$. If $d(\x, C)\leq \frac{\D(\kappa)}{2}$, then the function $\x\mapsto d(\x, C)$ is convex and there exists a unique point $\Pi_C(\x)\in C$ such that $d(\x,\Pi_C(\x))=d(\x,C)=\inf_{\y\in C}d(\x,\y)$.
\end{lemma}
\begin{thm}
Suppose all losses $f_t$ are $L$-gsc-smooth on $\M$. Under Assumption \ref{cat}, the iterates
\begin{equation}
\begin{split}
\textstyle &\x_{t}'=\expmap_{\y_t}(-\eta M_t)\\
&\x_t=\Pi_{\ndm}\x_t'\\
&\y_{t+1}=\Pi_{\N}\expmap_{\x_t'}\left(-\eta\nabla f_t(\x_t')+\expmap_{\x_t'}^{-1}(\y_t)\right).
\end{split}
\end{equation}
satisfy
\begin{equation*}
\begin{split}
\textstyle\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)\leq\eta{\zeta}\sumT\|\nabla f_t(\y_t)-M_t\|^2+\frac{D^2+2DP_T}{2\eta}.
\end{split}
\end{equation*}
for any $\u_1,\dots,\u_T\in\N$ and $\eta\leq\min\lt\{\frac{\xi\delta M}{G+(G^2+2\zeta\xi\delta^2M^2L^2)^{\frac{1}{2}}},\frac{\D(\kappa_2)}{2(G+2M)}\rt\}$.
\end{thm}
\begin{proof}
We highlight the key differences from the proof of Theorem \ref{var-expert}. Again we let $\y_{t+1}'=\expmap_{\x_t'}\left(-\eta\nabla f_t(\x_t')+\expmap_{\x_t'}^{-1}(\y_t)\right)$. First, we need to argue that the algorithm is well-defined. The diameter of $\ndm$ is at most $D+2\delta M$ by the triangle inequality, so $\ndm$ is gsc-convex by Assumption \ref{cat} and Lemma \ref{cat2}. We also need to ensure that $d(\x_t',\ndm)\leq\frac{\dk}{2}$ and $d(\y_{t+1}',\N)\leq\frac{\dk}{2}$, and apply Lemma \ref{cat2} to show that the projections are unique and non-expansive. For $d(\x_t',\ndm)$, we have
$$
d(\x_t',\ndm)\leq d(\x_t',\y_t)\leq \eta M\leq\frac{\D(\kappa_2)}{2}
$$
by $\eta\leq\frac{\D(\kappa_2)}{2(G+2M)}$. Similarly, for $d(\y_{t+1}',\N)$,
\begin{equation*}
\begin{split}
&d(\y_{t+1}',\N)\leq d(\y_{t+1}',\y_t)\leq d(\y_{t+1}',\x_t')+d(\y_{t},\x_t')\\
\leq& \|-\eta\nabla f_t(\x_t')+\eta\Ga_{\y_t}^{\x_t'}M_t\|+\eta M\\
\leq& \eta(G+2M)\leq\frac{\dk}{2}.
\end{split}
\end{equation*}
We can bound $f_t(\x_t)-f_t(\x_t')$ in the same way as Theorem \ref{var-expert}, but we now use Lemmas \ref{cos3} and \ref{cos1} to bound $f_t(\x_t')-f_t(\u_t)$.
\begin{equation}
\begin{split}
&f_t(\x_t')-f_t(\u_t){\leq} -\langle\expmap_{\x_t'}^{-1}(\u_t),\nabla f_t(\x_t')\rangle\\
=&\frac{1}{\eta}\langle\expmap_{\x_t'}^{-1}(\u_t),\expmap_{\x_t'}^{-1}(\y_{t+1}')-\expmap_{\x_t'}^{-1}(\y_t)\rangle\\
{\leq}&\frac{1}{2\eta}\left({\zeta}d(\x_t',\y_{t+1}')^2+d(\x_t',\u_t)^2-d(\y_{t+1}',\u_t)^2\right)-\frac{1}{2\eta}\left(\xi d(\x_t',\y_t)^2+d(\x_t',\u_t)^2-d(\y_t,\u_t)^2\right).\\
\end{split}
\end{equation}
Finally, we need to show
\begin{equation}
2\eta G+2\eta^2{\zeta}L^2d(\x_t',\y_t)-\xi d(\x_t',\y_t)\leq 0
\end{equation}
Following the proof of Theorem \ref{var-expert}, we find
$$\eta\leq\frac{\xi\delta M}{G+(G^2+2\zeta\xi\delta^2M^2L^2)^{\frac{1}{2}}}$$
satisfies the required condition. The guarantee is thus established.
\end{proof}
\subsection{Proof of Lemma \ref{optHedge}}
\label{app:lem:opthedge}
We first show
\begin{equation}\label{opthedge-eq1}
\begin{split}
\sumT\la\l_t,\w_t-\w^*\ra\leq \frac{\ln N+R(\w^*)}{\beta}+\beta\sumT\|\l_t-\m_t\|_{\infty}^2-\frac{1}{2\beta}\sumT\lt(\|\w_t-\w_t'\|_1^2+\|\w_t-\w_{t-1}'\|_1^2\rt)
\end{split}
\end{equation}
holds for any $\w^*\in\Delta_N$, where
$$
\w_t=\argmin_{\w\in\Delta_N}\;\beta\la\sum_{s=1}^{t-1}\l_s+\m_t,\w\ra +R(\w), \quad t\geq 1
$$
and
$$
\w_t'=\argmin_{\w\in\Delta_N}\;\beta\la\sum_{s=1}^{t}\l_s,\w\ra +R(\w), \quad t\geq 0.
$$
Note that $R(\w)=\sum_{i\in[N]}w_i\ln w_i$ is the negative entropy. According to the equivalence between Hedge and follow-the-regularized-leader with the negative-entropy regularizer, we have
$
w_{t,i}\propto e^{-\beta\lt(\sum_{s=1}^{t-1}\ell_{s,i}+m_{t,i}\rt)}
$
and
$
w_{t,i}'\propto e^{-\beta\lt(\sum_{s=1}^{t}\ell_{s,i}\rt)}.
$
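For readers who prefer code, the two weight vectors can be computed as in the following Python sketch; this is only an illustration of the displayed proportionalities (the function name and the log-sum-exp stabilization are our own), not the implementation used in the paper.
\begin{verbatim}
import numpy as np

def optimistic_hedge_weights(losses, m_t, beta):
    # w_t  ~ exp(-beta * (sum_{s<t} l_s + m_t)),  w_t' ~ exp(-beta * sum_{s<=t} l_s).
    # losses: array of shape (t, N) holding l_1, ..., l_t; m_t: optimism vector.
    cum_prev = losses[:-1].sum(axis=0)      # sum_{s=1}^{t-1} l_s
    cum_full = losses.sum(axis=0)           # sum_{s=1}^{t} l_s
    def softmax(z):
        z = z - z.max()                     # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()
    return softmax(-beta * (cum_prev + m_t)), softmax(-beta * cum_full)

# toy usage with N = 3 experts and t = 2 rounds of losses
losses = np.array([[0.2, 0.5, 0.1], [0.3, 0.1, 0.4]])
w_t, w_t_prime = optimistic_hedge_weights(losses, m_t=losses[-1], beta=0.5)
print(w_t, w_t_prime)
\end{verbatim}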
To prove Equation \eqref{opthedge-eq1}, we consider the following
decomposition:
$$
\la\l_t,\w_t-\w^*\ra=\la \l_t-\m_t,\w_t-\w_t'\ra+\la\m_t,\w_t-\w_t'\ra+\la\l_t,\w_t'-\w^*\ra.
$$
Since $R(\cdot)$ is $1$-strongly convex w.r.t. the $\ell_1$ norm, by Lemma \ref{dual}, we have $\|\w_t-\w_t'\|_1\leq\beta\|\l_t-\m_t\|_{\infty}$ and
$$
\la\l_t-\m_t,\w_t-\w_t'\ra\leq\|\l_t-\m_t\|_{\infty}\|\w_t-\w_t'\|_1\leq\beta\|\l_t-\m_t\|_{\infty}^2
$$
by H\"{o}lder's inequality.
Hence it suffices to show
\begin{equation}\label{opthedge-eq2}
\sumT\la\m_t,\w_t-\w_t'\ra+\la\l_t,\w_t'-\w^*\ra\leq\frac{\ln N+R(\w^*)}{\beta}-\frac{1}{2\beta}\sumT\lt(\|\w_t-\w_t'\|_1^2+\|\w_t-\w_{t-1}'\|_1^2\rt)
\end{equation}
to prove Equation \eqref{opthedge-eq1}.
Equation \eqref{opthedge-eq2} holds for $T=0$ because $R(\w^*)\geq -\ln N$ holds for any $\w^*\in\Delta_N$. To proceed, we need the following proposition:
\begin{equation}\label{strong-cvx}
g(\w^*)+\frac{c}{2}\|\w-\w^*\|^2\leq g(\w)
\end{equation}
holds for any $c$-strongly convex $g(\cdot):\W\rightarrow\R$ where $\w^*=\argmin_{\w\in\W}g(\w)$. This fact can be easily seen by combining the strong convexity and the first-order optimality condition for convex functions.
Assume Equation \eqref{opthedge-eq2} holds for round $T-1\; (T\geq 1)$ and we denote
$$
C_T=\frac{1}{2\beta}\sumT\lt(\|\w_t-\w_t'\|_1^2+\|\w_t-\w_{t-1}'\|_1^2\rt).
$$
Now for round $T$,
\begin{equation}
\begin{split}
&\sumT\lt(\la \m_t,\w_t-\w_t'\ra+\la\l_t,\w_t'\ra\rt)\\
\stackrel{\rm (1)}{\leq}&\sum_{t=1}^{T-1}\la\l_t,\w_{T-1}'\ra+\frac{\ln N+R(\w_{T-1}')}{\beta}-C_{T-1}+\la\m_T,\w_T-\w_T'\ra+\la\l_T,\w_T'\ra\\
\stackrel{\rm (2)}{\leq}&\sum_{t=1}^{T-1}\la\l_t,\w_{T}\ra+\frac{\ln N+R(\w_{T})}{\beta}-C_{T-1}+\la\m_T,\w_T-\w_T'\ra+\la\l_T,\w_T'\ra-\frac{1}{2\beta}\|\w_T-\w_{T-1}'\|_1^2\\
=&\lt(\sum_{t=1}^{T-1}\la\l_t,\w_{T}\ra+\la\m_T,\w_T\ra+\frac{\ln N+R(\w_{T})}{\beta}\rt)+\la\l_T-\m_T,\w_T'\ra-C_{T-1}-\frac{1}{2\beta}\|\w_T-\w_{T-1}'\|_1^2\\
\stackrel{\rm (3)}{\leq}&\lt(\sum_{t=1}^{T-1}\la\l_t,\w_{T}'\ra+\la\m_T,\w_T'\ra+\frac{\ln N+R(\w_{T}')}{\beta}\rt)+\la\l_T-\m_T,\w_T'\ra\\
&-C_{T-1}-\frac{1}{2\beta}\|\w_T-\w_{T-1}'\|_1^2-\frac{1}{2\beta}\|\w_T-\w_{T}'\|_1^2\\
=&\sumT\la\l_t,\w_T'\ra+\frac{\ln N+R(\w_T')}{\beta}-C_T\\
\stackrel{\rm (4)}{\leq}&\sumT\la\l_t,\w^*\ra+\frac{\ln N+R(\w^*)}{\beta}-C_T.
\end{split}
\end{equation}
The first inequality is due to the induction hypothesis with $\w^*=\w_{T-1}'$. The second and the third are applications of Equation \eqref{strong-cvx}. Specifically, $\w_{T-1}'$ and $\w_T$ minimize $\sum_{t=1}^{T-1}\beta\la\l_t,\w\ra+{R(\w)}$ and $\sum_{t=1}^{T-1}\beta\la\l_t,\w\ra+\beta\la\m_T,\w\ra+{R(\w)}$, respectively. The fourth inequality follows from the fact that $\w_T'$ minimizes $\beta\sumT\la\l_t,\w\ra+R(\w)$.
We now demonstrate how to remove the dependence on $\w_t'$:
\begin{equation}\label{opthedge-eq3}
\begin{split}
&\frac{1}{2\beta}\sumT\lt(\|\w_t-\w_t'\|_1^2+\|\w_t-\w_{t-1}'\|_1^2\rt)\\
\geq&\frac{1}{2\beta}\sumT\lt(\|\w_t-\w_t'\|_1^2+\|\w_{t+1}-\w_t'\|_1^2\rt)-\frac{1}{2\beta}\|\w_{T+1}-\w_T'\|_1^2\\
\geq& \frac{1}{4\beta}\sum_{t=2}^{T}\|\w_t-\w_{t-1}\|_1^2-\frac{2}{\beta},
\end{split}
\end{equation}
where the last inequality follows from the fact that $\|\x+\y\|^2\leq 2(\|\x\|^2+\|\y\|^2)$ holds for any norm. Now the proof is completed by combining Equations \eqref{opthedge-eq1} and \eqref{opthedge-eq3}.
\subsection{Proof of Theorem \ref{opthedge-reg}}
\label{app:thm:opthedge-reg}
We first show $f_t(\x_t)-f_t(\x_{t,i})\leq\la\w_t,\l_t\ra-\ell_{t,i}$ so that Lemma \ref{optHedge} can be invoked to bound the regret of the meta algorithm. We start from gsc-convexity,
\begin{equation}
\begin{split}
&f_t(\x_t)-f_t(\x_{t,i})\leq -\la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra \\
=&\la\nabla f_t(\x_t),\sum_{j=1}^Nw_{t,j}\expmap_{\x_t}^{-1}\x_{t,j}\ra-\la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra\\
=&\sum_{j=1}^Nw_{t,j}\ell_{t,j}-\ell_{t,i}=\la\w_t,\l_t\ra-\ell_{t,i},
\end{split}
\end{equation}
where the first equality is due to the fact that the gradient of $\sum_{i=1}^Nw_{t,i}d(\x,\x_{t,i})^2$ vanishes at $\x_t$. Now we can bound the regret as
\begin{equation}\label{grad-var-eq1}
\begin{split}
&\sum_{t=1}^Tf_t(\x_t)-f_t(\x_{t,i})\leq\sum_{t=1}^T\la\w_t,\l_t\ra-\ell_{t,i}\\
\leq&\frac{2+\ln N}{\beta}+\beta\sum_{t=1}^T\|\l_t-\m_t\|_{\infty}^2-\frac{1}{4\beta}\sum_{t=2}^T\|\w_t-\w_{t-1}\|_1^{2}\\
\end{split}
\end{equation}
It now suffices to bound $\|\l_t-\m_t\|_{\infty}^2$ in terms of the gradient-variation $V_T$ and $\|\w_t-\w_{t-1}\|_1^2$. We start with the definition of the infinity norm.
\begin{equation}\label{grad-var-eq2}
\begin{split}
&\|\l_t-\m_t\|_{\infty}^2\\
=&\max_{i\in[N]}(\ell_{t,i}-m_{t,i})^2\\
=&\max_{i\in[N]}\left(\la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra-\la\nabla f_{t-1}(\bar{\x}_t),\expmap_{\bx_t}^{-1}\x_{t,i}\ra\right)^2\\
=&\max_{i\in[N]}\Bigl(\Bigl. \la\nabla f_t(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra-\la\nabla f_{t-1}(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra+\la\nabla f_{t-1}(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra\\&
-\la\Ga_{\bx_t}^{\x_t}\nabla f_{t-1}(\bar{\x}_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra+\la\Ga_{\bx_t}^{\x_t}\nabla f_{t-1}(\bar{\x}_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra-\la\Ga_{\bx_t}^{\x_t}\nabla f_{t-1}(\bar{\x}_t),\Ga_{\bx_t}^{\x_t}\expmap_{\bx_t}^{-1}\x_{t,i}\ra\Bigl.\Bigl)^2\\
\stackrel{\rm (1)}{\leq}&3\max_{i\in[N]}\Bigl(\Bigl. \la\nabla f_t(\x_t)-\nabla f_{t-1}(\x_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra^2+\la\nabla f_{t-1}(\x_t)-\Ga_{\bx_t}^{\x_t}\nabla f_{t-1}(\bar{\x}_t),\expmap_{\x_t}^{-1}\x_{t,i}\ra^2\\
&+\la\Ga_{\bx_t}^{\x_t}\nabla f_{t-1}(\bar{\x}_t),\expmap_{\x_t}^{-1}\x_{t,i}-\Ga_{\bx_t}^{\x_t}\expmap_{\bx_t}^{-1}\x_{t,i}\ra^2\Bigl. \Bigl)\\
\stackrel{\rm (2)}{\leq}&3\left(D^2\sup_{\x\in\N}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2+D^2L^2d(\x_t,\bx_t)^2+G^2\|\expmap_{\x_t}^{-1}(\x_{t,i})-\Ga_{\bx_t}^{\x_t}\expmap_{\bx_t}^{-1}(\x_{t,i})\|^2\right)
\end{split}
\end{equation}
where the first inequality relies on the fact that $(a+b+c)^2\leq 3(a^2+b^2+c^2)$ holds for any $a,b,c\in\R$, and the second one follows from Assumptions \ref{diam}, \ref{grad}, $L$-gsc-smoothness and H\"{o}lder's inequality.
By Lemma \ref{smooth}, on a Hadamard manifold with sectional curvature lower bounded by $-\kappa$, $h(\x)\coloneqq\frac{1}{2}d(\x,\x_{t,i})^2$ is $\frac{\sqrt{\kappa}D}{\tanh(\sqrt{\kappa}D)}$-smooth (which is exactly ${\zeta}$ as in Definition \ref{def1}). Thus
\begin{equation}\label{grad-var-eq3}
\|\expmap_{\x_t}^{-1}(\x_{t,i})-\Ga_{\bx_t}^{\x_t}\expmap_{\bx_t}^{-1}(\x_{t,i})\|=\|-\nabla h(\x_t)+\Ga_{\bx_t}^{\x_t}\nabla h(\bx_t)\|\leq {\zeta} d(\x_t,\bx_t).
\end{equation}
We need to bound $d(\x_t,\bx_t)$ in terms of $\|\w_t-\w_{t-1}\|_1$ to make full use of the negative term in Lemma \ref{optHedge}. By Lemma \ref{frechet}
\begin{equation}\label{grad-var-eq4}
d(\x_t,\bx_t)\leq\sum_{i=1}^Nw_{t,i}\cdot d(\x_{t,i},\x_{t,i})+D\|\w_t-\w_{t-1}\|_1=D\|\w_t-\w_{t-1}\|_1.
\end{equation}
Combining Equations \eqref{grad-var-eq1}, \eqref{grad-var-eq2}, \eqref{grad-var-eq3} and \eqref{grad-var-eq4}, we have
\begin{equation}
\sum_{t=1}^Tf_t(\x_t)-f_t(\x_{t,i})\leq\frac{2+\ln N}{\beta}+3\beta D^2(V_T+G^2)+\sumsT\left(3\beta (D^4L^2+D^2G^2{\zeta}^2)-\frac{1}{4\beta}\right)\|\w_t-\w_{t-1}\|_1^2,
\end{equation}
where the $3\beta D^2G^2$ term is due to the fact that the calculation of $V_T$ starts from $t=2$, while $\w_0=\w_1$ ensures $\x_1=\bx_1$ and thus $d(\x_1,\bx_1)=0$.
\subsection{Proof of Theorem \ref{var-reg}}
\label{app:thm:var-reg}
The optimal step size, according to Theorem \ref{var-expert} is
$$
\eta^\star=\min\left\{\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}, \sqrt{\frac{D^2+2DP_T}{2{\zeta}V_T}}\right\}.
$$
Based on Assumption \ref{grad}, we know $V_T$ has an upper bound $V_T=\sum_{t=2}^T\sup_{\x\in\N}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2\leq 4G^2T$. Therefore, $\eta^\star$ can be bounded by
$$
\sqrt{\frac{D^2}{8{\zeta}G^2T}}\leq\eta^\star\leq\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}.
$$
According to the construction of $\H$,
$$
\min\H=\sqrt{\frac{D^2}{8{\zeta}G^2T}},\qquad\max\H\geq 2\frac{\delta}{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}.
$$
Therefore, there always exists $k\in[N]$ such that $\eta_k\leq\eta^\star\leq 2\eta_k$.
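To make the existence of such a $k$ concrete, the following Python sketch assumes (as an illustration only; the precise construction of $\H$ is given in the main text) a geometric grid that doubles from $\min\H$ until it exceeds $\max\H$, and locates an $\eta_k$ within a factor of two of any $\eta^\star$ in the covered range.
\begin{verbatim}
import numpy as np

def doubling_grid(eta_min, eta_max):
    # geometric grid {eta_min * 2^i, i = 0, 1, ...} covering [eta_min, eta_max]
    N = int(np.ceil(np.log2(eta_max / eta_min))) + 1
    return eta_min * 2.0 ** np.arange(N)

grid = doubling_grid(1e-4, 1.0)        # illustrative stand-ins for min H and max H
eta_star = 0.0123                      # any eta* with min H <= eta* <= max H
k = np.searchsorted(grid, eta_star, side="right") - 1
assert grid[k] <= eta_star <= 2 * grid[k]
print(len(grid), grid[k], eta_star)
\end{verbatim}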
We can bound the regret of the $k$-th expert as
\begin{equation}\label{var-expert-eq1}
\begin{split}
&\sum_{t=1}^Tf_t(\x_{t,k})-\sum_{t=1}^Tf_t(\u_t)\leq\eta_k{\zeta}V_T+\frac{D^2+2DP_T}{2\eta_k}\leq\eta^\star{\zeta}V_T+\frac{D^2+2DP_T}{\eta^\star}\\
\leq&\zeta V_T \sqrt{\frac{D^2+2DP_T}{2\zeta V_T}}+{(D^2+2DP_T)}\cdot\max\lt\{\sqrt{\frac{2\zeta V_T}{D^2+2DP_T}}, \frac{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}{\delta} \rt\}\\
\leq&\frac{3}{2}\sqrt{2(D^2+2DP_T){\zeta}V_T}+(D^2+2DP_T)\frac{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}{\delta}.\\
\end{split}
\end{equation}
The dynamic regret can be decomposed as the sum of the meta-regret and the expert-regret:
$$
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)=\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,k})+\sum_{t=1}^Tf_t(\x_{t,k})-\sum_{t=1}^Tf_t(\u_t).
$$
Applying Theorem \ref{opthedge-reg} with $\beta\leq \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}$, we have
$$
\left(3\beta (D^4L^2+D^2G^2{\zeta}^2)-\frac{1}{4\beta}\right)\leq 0
$$
and
$$
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq\frac{2+\ln N}{\beta}+3\beta D^2 (V_T+G^2).
$$
We need to consider two cases based on the value of $\beta$.
If $\sqrt{\frac{2+\ln N}{3D^2(V_T+G^2)}}\leq \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}$, then
$$
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq 2\sqrt{3D^2(V_T+G^2)(2+\ln N)}.
$$
Otherwise, we have
$$
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq 2(2+\ln N)\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}.
$$
In sum,
\begin{equation}\label{var-meta}
\begin{split}
&\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\\
\leq&\max\left\{2\sqrt{3D^2(V_T+G^2)(2+\ln N)},2(2+\ln N)\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}\right\}.\\
\end{split}
\end{equation}
Combining Equations \eqref{var-meta} and \eqref{var-expert-eq1}, we have
\begin{equation}
\begin{split}
&\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\u_t)\\
\leq &\max\left\{2\sqrt{3D^2(V_T+G^2)(2+\ln N)},2(2+\ln N)\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}\right\}\\
&+\frac{3}{2}\sqrt{2(D^2+2DP_T){\zeta}V_T}+(D^2+2DP_T)\frac{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}{\delta}\\
=&O\left(\sqrt{(V_T+{\zeta}^2\ln N)\ln N}\right)+O\left(\sqrt{{\zeta}(V_T+(1+P_T)/\delta^2)(1+P_T)}\right)\\
=&{O}\left(\sqrt{{\zeta}(V_T+(1+P_T)/\delta^2)(1+P_T)}\right),
\end{split}
\end{equation}
where we use ${O}(\cdot)$ to hide $O(\log\log T)$ factors, following \cite{luo2015achieving} and \cite{zhao2020dynamic}; note that $N=O(\log T)$ leads to $\ln N=O(\log\log T)$.
\section{Omitted Proof for Section \ref{radars}}
\label{app:radars}
\subsection{Proof of Lemma \ref{self-bound}}
\label{app:lem:self-bound}
By $L$-gsc-smoothness, we have
$$
f(\y)\leq f(\x)+\la\nabla f(\x),\expmap_{\x}^{-1}(\y)\ra+\frac{L\cdot d(\x,\y)^2}{2}.
$$
Setting $\y=\expmap_{\x}\left(-\frac{1}{L}\nabla f(\x)\right)$, we have
\begin{equation}
\begin{split}
0\leq& f(\y)\leq f(\x)-\frac{1}{L}\|\nabla f(\x)\|^2+\frac{L}{2}\cdot\frac{1}{L^2}\|\nabla f(\x)\|^2\\
=&f(\x)-\frac{1}{2L}\|\nabla f(\x)\|^2,
\end{split}
\end{equation}
where we use the non-negativity of $f$. The above inequality is equivalent to
$$
\|\nabla f(\x)\|^2\leq 2L\cdot f(\x),
$$
in which the constant is two times better than that of \cite{srebro2010smoothness}.
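As a tiny illustration of the self-bounding property (not needed for the proof), consider the non-negative $L$-smooth quadratic $f(\x)=\frac{L}{2}\|\x-\x_0\|^2$, for which the inequality holds with equality; the following Python lines check this numerically with arbitrary choices of $L$ and $\x_0$.
\begin{verbatim}
import numpy as np

L = 4.0
x0 = np.array([1.0, -2.0])

def f(x):                         # non-negative and L-smooth
    return 0.5 * L * np.sum((x - x0) ** 2)

def grad_f(x):
    return L * (x - x0)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(2)
    lhs = np.sum(grad_f(x) ** 2)  # ||grad f(x)||^2
    rhs = 2 * L * f(x)            # self-bounding upper bound
    print(lhs <= rhs + 1e-9)      # equality (up to rounding) for this quadratic
\end{verbatim}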
\subsection{Proof of Lemma \ref{sml-expert}}
\label{app:lem:sml-expert}
The proof is similar to that of Theorem \ref{ROGD}. Let $\x_{t+1}'=\expmap_{\x_t}\lt(-\eta\nabla f_t(\x_t)\rt)$; then, analogously to Equation \eqref{rogd-eq1}, we have
\begin{equation}
\begin{split}
f_t(\x_t)-f_t(\u_t)\leq&\frac{1}{2\eta}\lt(\|\invexp_{\x_t}\u_t\|^2-\|\invexp_{\x_{t+1}}\u_{t+1}\|^2+2D\|\invexp_{\u_t}\u_{t+1}\| \rt)+\frac{\eta{\zeta}\|\nabla f_t(\x_t)\|^2}{2}\\
\leq& \frac{1}{2\eta}\lt(\|\invexp_{\x_t}\u_t\|^2-\|\invexp_{\x_{t+1}}\u_{t+1}\|^2+2D\|\invexp_{\u_t}\u_{t+1}\| \rt)+
\eta{\zeta} Lf_t(\x_t),
\end{split}
\end{equation}
where for the second inequality we apply Lemma \ref{self-bound}.
WLOG, we can assume $\u_{T+1}=\u_T$ and sum from $t=1$ to $T$:
$$
\sum_{t=1}^T\lt(f_t(\x_t)-f_t(\u_t)\rt)\leq\frac{D^2+2DP_T}{2\eta}+\eta{\zeta}L\sum_{t=1}^Tf_t(\x_t).
$$
After simplifying, we get
\begin{equation}
\begin{split}
\sum_{t=1}^T\lt( f_t(\x_t)-f_t(\u_t)\rt)
\leq&\frac{D^2+2DP_T}{2\eta(1-\eta{\zeta} L)}+\frac{\eta{\zeta}L\sum_{t=1}^Tf_t(\u_t)}{1-\eta{\zeta} L}\\
=&\frac{D^2+2DP_T}{2\eta(1-\eta{\zeta} L)}+\frac{\eta{\zeta}LF_T}{1-\eta{\zeta} L}\\
\leq &\frac{D^2+2DP_T}{\eta}+2\eta{\zeta} L F_T\\
=&O\left(\frac{1+P_T}{\eta}+\eta F_T\right).\\
\end{split}
\end{equation}
where $\eta\leq\frac{1}{2\zeta L}$ is used to obtain the second inequality.
\subsection{Proof of Lemma \ref{sml-meta}}
\label{app:lem:sml-meta}
We again apply Lemma \ref{optHedge}, with the surrogate loss $\ell_{t,i}=\la\nabla f_t(\x_t),\invexp_{\x_t}\x_{t,i}\ra$ and the optimism $m_{t,i}=0$ for any $i\in[N]$.
In this way,
\begin{equation}
\begin{split}
&\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\\
\leq& -\sumT\la\nabla f_t(\x_t),\invexp_{\x_t}\x_{t,i}\ra=\sumT\la\w_t,\l_t\ra-\ell_{t,i}\\
\leq&\frac{2+\ln N}{\beta}+\beta\sum_{t=1}^T\|\l_t\|_{\infty}^2-\frac{1}{4\beta}\sum_{t=2}^T\|\w_t-\w_{t-1}\|_1^2\\
\leq &\frac{2+\ln N}{\beta}+\beta D^2\sumT\|\nabla f_t(\x_t)\|^2\\
\leq &\frac{2+\ln N}{\beta}+2\beta D^2L\sumT f_t(\x_t)=\frac{2+\ln N}{\beta}+2\beta D^2L\bar{F}_T,
\end{split}
\end{equation}
where the second inequality follows from Lemma \ref{optHedge}, the third one follows from Assumption \ref{diam} and H\"{o}lder's inequality, while the last inequality is due to Lemma \ref{self-bound}. By setting $\beta=\sqrt{\frac{2+\ln N}{2LD^2\bar{F}_T}}$, the regret of the meta algorithm is upper bounded by
\begin{equation}\label{sml-meta-eq1}
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\leq\sqrt{8D^2L(2+\ln N)\bar{F}_T}=\sqrt{8D^2L(2+\ln N)\sumT f_t(\x_t)}.
\end{equation}
Although $\bar{F}_T$ is unknown, similar to the case of Optimistic Hedge, we can use techniques like the doubling trick or a time-varying step size $\beta_t=O\lt(\frac{1}{\sqrt{1+\bar{F}_t}}\rt)$ to overcome this hurdle.
The RHS of Equation \eqref{sml-meta-eq1} depends on the cumulative loss of the iterates $\x_t$, which is not directly available. Here we apply the algebraic fact that $x-y\leq\sqrt{ax}$ implies $x-y\leq a+\sqrt{ay}$ for any non-negative $x,y$ and $a$. Then
\begin{equation}
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\leq 8D^2L(2+\ln N)+\sqrt{8D^2L(2+\ln N)\bar{F}_{T,i}}
\end{equation}
where we recall that $\bar{F}_{T,i}=\sumT f_t(\x_{t,i})$.
\subsection{Proof of Theorem \ref{sml-reg}}
\label{app:thm:sml-reg}
Recall the regret of the meta algorithm as in Lemma \ref{sml-meta}:
\begin{equation}\label{sml-eq1}
\sum_{t=1}^Tf_t(\x_t)-\sum_{t=1}^Tf_t(\x_{t,i})\leq 8D^2L(2+\ln N)+\sqrt{8D^2L(2+\ln N)\bar{F}_{T,i}}.
\end{equation}
On the other hand, we know the regret can be decomposed as
\begin{equation}\label{sml-eq2}
\begin{split}
\sumT f_t(\x_t)-\sumT f_t(\u_t)=&\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})+\sumT f_t(\x_{t,i})-\sumT f_t(\u_t)\\
=&\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})+\bar{F}_{T,i}-F_T.
\end{split}
\end{equation}
We first show that there exists an almost-optimal step size in the grid and bound the regret of the corresponding expert. That regret bound immediately provides an upper bound on $\bar{F}_{T,i}$ in terms of $F_T$. This eliminates the dependence on $\bar{F}_{T,i}$ and leads to a regret bound depending only on $F_T$ and $P_T$.
Now we bound the regret of the best expert. Note that due to Lemma \ref{sml-expert}, the optimal step size is $\eta=\min\lt\{\frac{1}{2{\zeta} L},\sqrt{\frac{D^2+2DP_T}{2{\zeta} LF_T}}\rt\}$. According to Assumptions \ref{diam}, \ref{grad}, $F_T\leq GDT$. Thus the optimal step size $
\eta^\star$ is bounded by
$$
\sqrt{\frac{D}{4LGT}}\leq \eta^\star\leq\frac{1}{2{\zeta} L}.
$$
Due to our construction of $\H$, there exists $k\in[N]$ such that $\eta_k\leq\eta^\star\leq 2\eta_k$.
According to Lemma \ref{sml-expert},
\begin{equation}\label{sml-eq3}
\begin{split}
&\sumT f_t(\x_{t,k})-\sumT f_t(\u_t)\\
\leq&\frac{D^2+2DP_T}{2\eta_k(1-\eta_k{\zeta}L)}+\frac{\eta_k{\zeta}LF_T}{1-\eta_k{\zeta}L}\leq\frac{D^2+2DP_T}{\eta_k}+2\eta_k{\zeta}LF_T\\
\leq&\frac{2(D^2+2DP_T)}{\eta^\star}+2\eta^\star{\zeta}LF_T\\
\leq&2(D^2+2DP_T)\lt(2{\zeta}L+\sqrt{\frac{2{\zeta}LF_T}{D^2+2DP_T}}\rt)+2{\zeta}LF_T\cdot\sqrt{\frac{D^2+2DP_T}{2{\zeta}LF_T}}\\
=&4{\zeta}L(D^2+2DP_T)+3\sqrt{2{\zeta}LF_T(D^2+2DP_T)}\\
\leq &\sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}
\end{split}
\end{equation}
where we apply $\eta^\star\leq \sqrt{\frac{D^2+2DP_T}{2\zeta LF_T}}$, $\frac{1}{\eta^\star}\leq 2{\zeta}L+\sqrt{\frac{2\zeta LF_T}{D^2+2DP_T}}$ and $\sqrt{a}+\sqrt{b}\leq\sqrt{2(a+b)}$.
Now as we combine Equations \eqref{sml-eq1}, \eqref{sml-eq2}, \eqref{sml-eq3},
\begin{equation}
\begin{split}
&\sumT f_t(\x_t)-\sumT f_t(\u_t)\\
\leq& 8D^2L(2+\ln N)+\sqrt{8D^2L(2+\ln N)\bar{F}_{T,k}}+\sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}\\
\leq &8D^2L(2+\ln N)+\sqrt{8D^2L(2+\ln N)\lt(F_{T}+\sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}\rt)}\\
&+\sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}\\
=&O(\sqrt{\zeta({\zeta}(1+P_T)+F_T)(P_T+1)}).
\end{split}
\end{equation}
where we again use $O(\cdot)$ to hide the $\log\log T$ term.
\section{Omitted Proof for Section \ref{radarb}}
\label{app:radarb}
\subsection{Necessity of Best-of-both-worlds Bound}
\label{bbw}
We highlight the necessity of achieving a best-of-both-worlds bound by computing the Fr\'{e}chet mean in the online setting on the $d$-dimensional unit Poincar\'{e} disk. The Poincar\'{e} disk looks like a unit ball in Euclidean space, but its Riemannian metric blows up near the boundary:
$$
\la \u,\v\ra_{\x}=\frac{4\la \u,\v\ra_2}{(1-\|\x\|_2^2)^2}
$$
and has constant sectional curvature $-1$. We use $\o$ to denote the origin of the Poincar\'{e} ball and $\e_i$ to be the $i$-th unit vector in the standard basis. The Poincar\'{e} ball has the following property \citep{lou2020differentiating}:
\begin{equation*}
\begin{split}
&d(\x,\y)=\arcosh\lt(1+\frac{2\|\x-\y\|_2^2}{(1-\|\x\|_2^2)(1-\|\y\|_2^2)} \rt)\\
&\invexp_{\mathbf{0}}\y=\arctanh(\|\y\|_2)\frac{\y}{\|\y\|_2}.
\end{split}
\end{equation*}
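For concreteness, the two formulas above can be transcribed directly into code; the following Python sketch (an illustrative aside using the Euclidean 2-norm, with helper names of our own choosing) evaluates the distance and the inverse exponential map at the origin.
\begin{verbatim}
import numpy as np

def poincare_dist(x, y, eps=1e-12):
    # geodesic distance on the unit Poincare ball (constant curvature -1)
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))

def log_at_origin(y, eps=1e-12):
    # inverse exponential map at the origin: arctanh(||y||) * y / ||y||
    r = np.linalg.norm(y)
    return np.zeros_like(y) if r < eps else np.arctanh(r) * y / r

y = np.array([0.3, 0.1])
print(poincare_dist(np.zeros(2), y))     # equals 2 * arctanh(||y||_2)
print(np.linalg.norm(log_at_origin(y)))  # equals arctanh(||y||_2)
\end{verbatim}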
Now consider the following loss function
$$
f_t(\x)=\sum_{i=1}^{2d}\frac{d(\x,\x_{t,i})^2}{2d}
$$
where $\x_{t,i}=\frac{t}{2T}\e_i$ for $1\leq i\leq d$ and $\x_{t,i}=-\frac{t}{2T}\e_{i-d}$ for $d+1\leq i\leq 2d$. We choose $\N$ to be the convex hull of $\pm \frac{1}{2} \e_i$, $i=1,\dots, d$. The comparator is $\u_t=\argmin_{\u\in\N}f_t(\u)$, which is the origin $\o$ by symmetry. Now we can bound $V_T$ by
\begin{equation}
\begin{split}
V_T=&\sumsT\sup_{\x\in\N}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2\\
=&\frac{1}{4d^2}\sumsT\sup_{\x\in\N}\lt\|\sum_{i=1}^{2d}\lt(\invexp_{\x}\x_{t,i}-\invexp_{\x}\x_{t-1,i}\rt)\rt\|^2\\
\leq&\frac{1}{4d^2}\sumsT c\cdot\lt(\sum_{i=1}^{2d}d(\x_{t,i},\x_{t-1,i})\rt)^2=\sumsT c\lt(\arcosh\lt(1+ \frac{2(\frac{t}{2T}-\frac{t-1}{2T} )^2}{(1-(\frac{t}{2T})^2)(1-(\frac{t-1}{2T})^2)}\rt)\rt)^2\\
\leq & \sumsT c\lt(\arcosh\lt(1+\frac{8}{9T^2}\rt)\rt)^2\\
\leq& \sumsT c\cdot\lt(\frac{8}{9T^2}+\sqrt{\frac{64}{81T^4}+\frac{16}{9T^2}}\rt)^2=\sumsT c\cdot O\lt(\frac{1}{{T^2}}\rt)=O\lt(\frac{1}{T}\rt),\\
\end{split}
\end{equation}
where the first inequality is due to the triangle inequality and Lemma \ref{compare}. We note that $c$ is a constant depending on the diameter of $\N$ and the sectional curvature of $\M$. The second inequality is due to $t\leq T$, while the third follows from $\arcosh(1+x)\leq x+\sqrt{x^2+2x}$.
Similarly, we can evaluate $F_T$:
\begin{equation}
\begin{split}
F_T=&\sumT f_t(\u_t)=\sumT f_t(\o)=\sumT\sum_{i=1}^{2d}\frac{d(\o,\x_{t,i})^2}{2d}\\
=&\sumT\arcosh\lt(\frac{1+\frac{t^2}{4T^2}}{1-\frac{t^2}{4T^2}} \rt)^2=\sumT 4\arctanh^2\lt(\frac{t}{2T}\rt)=\Theta(T),
\end{split}
\end{equation}
where we use $\arcosh\lt(\frac{1+a^2}{1-a^2}\rt)=2\arctanh(a)$, and the last step follows since every summand is at most $4\arctanh^2\lt(\frac{1}{2}\rt)$ while the summands with $t\geq T/2$ are at least $4\arctanh^2\lt(\frac{1}{4}\rt)$.
When the input losses change smoothly, as in this example ($V_T=O(1/T)$ while $F_T=\Theta(T)$), the gradient-variation bound is much tighter than the small-loss bound.
There also exist scenarios in which the small-loss bound is tighter. We still consider computing the Fr\'{e}chet mean on the Poincar\'{e} disk
$$
f_t(\x)=\sum_{i=1}^nd(\x,\x_{t,i})^2/n,
$$
but assume $\x_{t,i}=\y_i$ when $t$ is odd and $\x_{t,i}=-\y_i$ when $t$ is even. We restrict $\y_1,\dots,\y_n\in\mathbb{B}(\frac{\e_1}{2},T^{-\alpha})$ where $\mathbb{B}(\p,r)$ is the geodesic ball centered at $\p$ with radius $r$. $\N$ is the convex hull of $\mathbb{B}(\frac{\e_1}{2},T^{-\alpha})\cup\mathbb{B}(-\frac{\e_1}{2},T^{-\alpha})$. Since the input sequence is alternating, $\sup_{\x\in \N}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2$ is a constant over time, and we can lower bound it by
\begin{equation}
\begin{split}
&\sup_{\x\in \N}\|\nabla f_t(\x)-\nabla f_{t-1}(\x)\|^2\\
=&\sup_{\x\in\N}\lt\|\frac{1}{n}\lt(\sum_{i=1}^n\invexp_{\x}\x_{t,i}-\sum_{i=1}^n\invexp_{\x}\x_{t-1,i}\rt)\rt\|^2\\
=&\sup_{\x\in\N}\lt\|\frac{2}{n}\sum_{i=1}^n\invexp_{\x}\y_i\rt\|^2\geq \frac{4}{n^2}\lt\|\sum_{i=1}^n\invexp_{\o}\y_i\rt\|^2\\
=&\frac{4}{n^2}\lt(\lt\| \lt(\sum_{i=1}^n\invexp_{\o}\y_i\rt)^{\parallel}\rt\|^2 + \lt\|\lt(\sum_{i=1}^n\invexp_{\o}\y_i\rt)^{\perp}\rt\|^2\rt)\\
\geq&\frac{4}{n^2}\lt\| \lt(\sum_{i=1}^n\invexp_{\o}\y_i\rt)^{\parallel}\rt\|^2\geq \frac{4}{n^2}\lt\|n\cdot\arctanh\lt(\frac{1}{2}-T^{-\alpha}\rt)\e_1\rt\|_{\mathbf{0}}^2\\
=&16\arctanh^2\lt(\frac{1}{2}-T^{-\alpha}\rt)\\
\geq& 16\lt(\frac{\frac{1}{2}-T^{-\alpha}}{\frac{3}{2}-T^{-\alpha}}\rt)^2=\Omega(1). \\
\end{split}
\end{equation}
where we use $\a^{\parallel}$ and $\a^{\perp}$ to denote the components parallel and orthogonal to the direction of $\e_1$, respectively. The key observation is that the lower bound is attained when $\lt(\sum_{i=1}^n\invexp_{\o}\y_i\rt)^{\perp}$ is zero and each $\y_i$ has the smallest possible component along $\e_1$, i.e., $\y_i=\lt(\frac{1}{2}-T^{-\alpha}\rt)\e_1$. We also use $\arctanh(x)\geq\frac{x}{1+x}$. Thus we have $V_T=\Omega(T)$. Now we consider $F_T$. By Lemma \ref{jensen}, we know that $\u_t$ lies within the same geodesic ball as $\x_{t,i}$, $i\in[n]$. Thus
$$
F_T=\sumT f_t(\u_t)=\sumT\sum_{i=1}^n d(\u_t,\x_{t,i})^2/n\leq T\cdot \lt(\frac{2}{T^\alpha}\rt)^2=O(T^{1-2\alpha}).
$$
We can see whenever $\alpha>0$, $F_T=o(T)$ but $V_T=\Omega(T)$.
\subsection{Proof of Theorem \ref{best-meta}}
\label{app:thm:best-meta}
By Lemma \ref{optHedge} we have
\begin{equation}\label{best-eq1}
\sum_{t=1}^Tf_t(\x_t)-f_t(\x_{t,i})
\leq\frac{2+\ln N}{\beta}+\beta\sum_{t=1}^T\|\l_t-\m_t\|_{\infty}^2-\frac{1}{4\beta}\sum_{t=2}^T\|\w_t-\w_{t-1}\|_1^{2}
\end{equation}
We bound $\sum_{t=1}^T\|\l_t-\m_t\|_{\infty}^2$ in terms of $\sum_{t=1}^T\|\l_t-\m_t^v\|_{\infty}^2$ and $\sum_{t=1}^T\|\l_t-\m_t^s\|_{\infty}^2$ as follows. By Assumptions \ref{diam} and \ref{grad}, $\|\l_t-\m_t\|_2^2\leq 4NG^2D^2$, and one can verify that $d_t(\m)\coloneqq \|\l_t-\m\|_{2}^2$ is $\frac{1}{8NG^2D^2}$-exp-concave.
We have
\begin{equation}\label{best-eq2}
\begin{split}
&\sumT\|\l_t-\m_t\|_{\infty}^2\leq\sumT\|\l_t-\m_t\|_2^2\\
\leq&\min\lt\{\sumT\|\l_t-\m_t^v\|_2^2,\sumT\|\l_t-\m_t^s\|_2^2\rt\}+8NG^2D^2\ln 2\\
\leq &N\min\lt\{\sumT\|\l_t-\m_t^v\|_{\infty}^2,\sumT\|\l_t-\m_t^s\|_{\infty}^2\rt\}+8NG^2D^2\ln 2,\\
\end{split}
\end{equation}
where the second inequality uses Lemma \ref{hedge}, and the first and the third use the norm inequality $\|\x\|_{\infty}\leq\|\x\|_2\leq\sqrt{N}\|\x\|_{\infty}$.
Combining Equations \eqref{var-meta}, \eqref{best-eq1}, \eqref{best-eq2} and Lemma \ref{sml-meta}, we have
$$
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\leq\frac{2+\ln N}{\beta}+\beta N\lt(D^2\min\{3(V_T+G^2),\bar{F}_T\}+8G^2D^2\ln 2\rt)
$$
holds for any $\beta\leq \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}$ and $i\in[N]$.
Suppose $\beta^\star=\sqrt{\frac{2+\ln N}{N(D^2\min\{3(V_T+G^2),\bar{F}_T\}+8G^2D^2\ln 2)} }\leq \frac{1}{\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}}$, then
$$
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\leq 2\sqrt{(2+\ln N)N(D^2\min\{3(V_T+G^2),\bar{F}_{T}\}+8G^2D^2\ln 2)}.
$$
Otherwise
$$
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\leq 2(2+\ln N)\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}.
$$
In sum, we have
\begin{equation}
\begin{split}
&\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})\\
\leq &\max\lt\{2\sqrt{(2+\ln N)N(D^2\min\{3(V_T+G^2),\bar{F}_{T}\}+8G^2D^2\ln 2)},2(2+\ln N)\sqrt{12(D^4L^2+D^2G^2{\zeta}^2)}\rt\}\\
=&O\lt(\sqrt{\log T\cdot\min\{V_T,\bar{F}_T\}}\rt).
\end{split}
\end{equation}
\subsection{Proof of Theorem \ref{best-reg}}
\label{app:thm:best-reg}
By Theorem \ref{best-meta}, we know
$$
\sumT f_t(\x_t)-\sumT f_t(\x_{t,i})=O\lt(\sqrt{\ln T(\min\{V_T,\bar{F}_T\})}\rt)
$$
holds for any $i\in [N^v+N^s]$. WLOG, we assume $k$ and $k'$ are the indices of the best experts for the gradient-variation bound and the small-loss bound, respectively. Then by Theorem \ref{var-reg},
\begin{equation}
\sumT f_t(\x_{t,k})-f_t(\u_t)\leq\frac{3}{2}\sqrt{2(D^2+2DP_T){\zeta}V_T}+(D^2+2DP_T)\frac{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}{\delta}
\end{equation}
while by Theorem \ref{sml-reg},
\begin{equation}
\sumT f_t(\x_{t,k'})-f_t(\u_t)\leq \sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}.
\end{equation}
Since the regret admits the following decompositions
\begin{equation}
\begin{split}
&\sumT f_t(\x_t)-\sumT f_t(\u_t)\\
=&\sumT f_t(\x_t)-\sumT f_t(\x_{t,k})+\sumT f_t(\x_{t,k})-\sumT f_t(\u_t)\\
=&\sumT f_t(\x_t)-\sumT f_t(\x_{t,k'})+\sumT f_t(\x_{t,k'})-\sumT f_t(\u_t),
\end{split}
\end{equation}
we indeed have
\begin{equation}
\begin{split}
&\sumT f_t(\x_t)-\sumT f_t(\u_t)\\
\leq & O\lt(\sqrt{\ln T(\min\{V_T,\bar{F}_T\})}\rt)+\min\lt\{\frac{3}{2}\sqrt{2(D^2+2DP_T){\zeta}V_T}+(D^2+2DP_T)\frac{1+(1+2\zeta\delta^2L^2)^{\frac{1}{2}}}{\delta},\rt .\\
& \lt.\sqrt{2\lt(16{\zeta}^2L^2(D^2+2DP_T)^2+18{\zeta}LF_T(D^2+2DP_T)\rt)}\rt\}\\
=&O\lt(\sqrt{\ln T(\min\{V_T,\bar{F}_T\})}\rt)\\
&+O\lt(\min\lt\{\sqrt{{\zeta}(1+P_T)((1+P_T)/\delta^2+V_T)}, \sqrt{{\zeta}(1+P_T)(\zeta(1+P_T)+F_T)}\rt\}\rt)
\end{split}
\end{equation}
Note that $\bar{F}_T$ can be handled similarly to Lemma \ref{sml-meta} and Theorem \ref{sml-reg} so that it is replaced by $F_T$.
In sum, the regret is bounded by
$$
\sumT f_t(\x_t)-\sumT f_t(\u_t)=O\lt(\sqrt{\zeta(P_T(\zeta +1/\delta^2)+\min\{V_T,F_T\}+1)(1+P_T)+\ln T\min\{V_T,F_T\}}\rt).
$$
\section{Technical Lemmas}
\label{app:lem}
We need the following technical lemmas.
\begin{lemma}\label{gsc-mean}\cite[Theorem 2.1]{bento2021elements}
Suppose $f:\M\rightarrow\R$ is geodesically convex and $\M$ is Hadamard. Given points $\x_1,\dots,\x_N\in\M$, the geodesic mean $\bar{\x}_k$ w.r.t. coefficients $a_1,\dots,a_N$ ($\sum_{i=1}^Na_i=1$, $a_i\geq 0$) is defined as:
\begin{equation}
\begin{split}
&\bar{\x}_1=\x_1\\
&\bar{\x}_k=\expmap_{\bar{\x}_{k-1}}\left(\frac{a_k}{\sum_{i=1}^ka_i}\expmap_{\bar{\x}_{k-1}}^{-1}\x_k\right),\quad k>1.
\end{split}
\end{equation}
Then we have
\begin{equation}
f(\bar{\x}_N)\leq\sum_{i=1}^Na_i f(\x_i).
\end{equation}
\end{lemma}
\begin{proof}
We use induction to show a stronger statement
$$
f(\bar{\x}_k)\leq\sum_{i=1}^k\frac{a_i}{\sum_{j=1}^k a_j} f(\x_i)
$$
holds for $k=1,\dots,N$.
For $k=1$, this is obviously true because $\bar{\x}_1=\x_1$. Suppose
$$
f(\bar{\x}_k)\leq\sum_{i=1}^k\frac{a_i}{\sum_{j=1}^k a_j} f(\x_i)
$$
holds for some $k$, then by geodesic convexity,
\begin{equation}
\begin{split}
f(\bar{\x}_{k+1})\leq& \left(1-\frac{a_{k+1}}{\sum_{j=1}^{k+1}a_j}\right)f(\bar{\x}_k)+\frac{a_{k+1}}{\sum_{j=1}^{k+1}a_j}f(\x_{k+1})\\
\leq&\sum_{i=1}^k\frac{a_i}{\sum_{j=1}^{k+1}a_j}f(\x_i)+\frac{a_{k+1}}{\sum_{j=1}^{k+1}a_j}f(\x_{k+1})\\
=&\sum_{i=1}^{k+1}\frac{a_if(\x_i)}{\sum_{j=1}^{k+1}a_j}.
\end{split}
\end{equation}
The first inequality is due to gsc-convexity: for the geodesic determined by $\gamma(0)=\bx_k$ and $\gamma(1)=\x_{k+1}$ we have $\bx_{k+1}=\gamma\lt(\frac{a_{k+1}}{\sum_{i=1}^{k+1}a_i}\rt)$ and thus $f(\gamma(t))\leq(1-t)f(\gamma(0))+tf(\gamma(1))$. For the second inequality, we use the induction hypothesis.
Given $\sum_{i=1}^Na_i=1$, the lemma is proved.
\end{proof}
The computation of the geodesic averaging is summarized in Algorithm \ref{alg:gsc}, which serves as a sub-routine of \textsc{Radar}\xspace.
\begin{algorithm2e}[H]
\caption{Geodesic Averaging}\label{alg:gsc}
\KwData{$N$ points $\x_1,\dots,\x_N\in \N$ and $N$ real weights $w_1,\dots,w_N$.}
Let $\bar{\x}_{1}=\x_1$\\
\For{$k=2,\dots,N$} {
$\bar{\x}_k=\expmap_{\bar{\x}_{k-1}}\left(\frac{w_k}{\sum_{i=1}^kw_i}\expmap^{-1}_{\bar{\x}_{k-1}}\x_k\right)$\\
}
Return $\bar{\x}_N$.\\
\end{algorithm2e}
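A minimal Python sketch of this sub-routine is given below; it is written generically in terms of user-supplied exponential and inverse-exponential maps (which we assume are available, e.g.\ from a manifold library), and the Euclidean maps used in the toy call are placeholders under which the routine reduces to the ordinary weighted average.
\begin{verbatim}
import numpy as np

def geodesic_average(points, weights, expmap, logmap):
    # incremental geodesic averaging:
    # bar_x_k = Exp_{bar_x_{k-1}}( (w_k / sum_{i<=k} w_i) * Log_{bar_x_{k-1}}(x_k) )
    bar_x, cum_w = points[0], weights[0]
    for x_k, w_k in zip(points[1:], weights[1:]):
        cum_w += w_k
        bar_x = expmap(bar_x, (w_k / cum_w) * logmap(bar_x, x_k))
    return bar_x

# Euclidean placeholders: Exp_x(v) = x + v and Log_x(y) = y - x.
exp_euc = lambda x, v: x + v
log_euc = lambda x, y: y - x

pts = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 3.0])]
w = [0.2, 0.5, 0.3]
print(geodesic_average(pts, w, exp_euc, log_euc))   # (1.0, 0.9), the weighted mean
\end{verbatim}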
\begin{lemma}
\label{frechet}
Suppose $\x_1,\dots,\x_N,\y_1,\dots,\y_N\in\N$ where $\N$ is a gsc-convex subset of a Hadamard manifold $\M$ and the diameter of $\N$ is upper bounded by $D$. Let $\bar{\x}$ and $\bar{\y}$ be the weighted Fr\'echet mean with respect to coefficient vectors $\a$ and $\b$ $(\a,\b\in\Delta_N)$, defined as $\bar{\x}=\argmin_{\x\in\N}\sum_{i=1}^N a_i\cdot d(\x,\x_i)^2$ and $\bar{\y}=\argmin_{\y\in\N}\sum_{i=1}^N b_i\cdot d(\y,\y_i)^2$.
Then we have
\begin{equation}
d(\bar{\x},\bar{\y})\leq \sum_{i=1}^N a_i\cdot d(\x_i,\y_i)+D\sum_{i=1}^N|a_i-b_i|.
\end{equation}
\end{lemma}
\begin{proof}
Recall that on Hadamard manifolds, the following inequality \citep[Prop. 2.4]{sturm2003probability}
$$
d(\x,\y)^2+d(\u,\v)^2\leq d(\x,\v)^2+d(\y,\u)^2+2d(\x,\u)\cdot d(\y,\v)$$
holds for any $\x,\y,\u,\v\in\M$. A direct application of the above inequality yields
\begin{equation}\label{eq2-0}
d(\x_i,\bar{\y})^2+d(\y_i,\bar{\x})^2\leq d(\x_i,\bar{\x})^2+d(\y_i,\bar{\y})^2+2d(\bar{\x},\bar{\y})\cdot d(\x_i,\y_i)\qquad \forall i\in [N].
\end{equation}
By \cite[Theorem 2.4]{bacak2014computing}:
\begin{equation}
\label{eq2-1}
\sum_{i=1}^Na_i\cdot d(\x_i,\bar{\x})^2+\sum_{i=1}^N b_i\cdot d(\y_i,\bar{\y})^2+2d(\bar{\x},\bar{\y})^2\leq \sum_{i=1}^N a_i\cdot d(\x_i,\bar{\y})^2+\sum_{i=1}^N b_i\cdot d(\y_i,\bar{\x})^2
\end{equation}
Multiplying Equation \eqref{eq2-0} by $a_i$, summing from $i=1$ to $N$ and adding Equation \eqref{eq2-1}, we have
\begin{equation}
\begin{split}
2d(\bar{\x},\bar{\y})^2\leq& 2d(\bar{\x},\bar{\y})\sum_{i=1}^N a_i\cdot d(\x_i,\y_i)+\sum_{i=1}^N(a_i-b_i) d(\y_i,\bar{\y})^2+\sum_{i=1}^N(b_i-a_i)d(\y_i,\bar{\x})^2\\
=&2d(\bar{\x},\bar{\y})\sum_{i=1}^N a_i\cdot d(\x_i,\y_i)+\sum_{i=1}^N (a_i-b_i)\cdot (d(\y_i,\bar{\y})-d(\y_i,\bar{\x}))\cdot (d(\y_i,\bar{\y})+d(\y_i,\bar{\x}))\\
\leq& 2d(\bar{\x},\bar{\y})\sum_{i=1}^N a_i\cdot d(\x_i,\y_i)+2D\sum_{i=1}^N |a_i-b_i|d(\bar{\x},\bar{\y}),
\end{split}
\end{equation}
where for the last inequality we use the triangle inequality for geodesic metric spaces and Assumption \ref{diam}. Dividing both sides by $2d(\bar{\x},\bar{\y})$ completes the proof.
\end{proof}
\begin{lemma}
\label{cos1}
\cite[Lemma 5]{zhang2016first}. Let $\mathcal{M}$ be a Riemannian manifold with sectional curvature lower bounded by $\kappa \leq 0$. Consider $\N$, a gsc-convex subset of $\M$ with diameter $D$. For a geodesic triangle that lies entirely within $\N$ with side lengths $a, b, c$, we have
$$
a^{2} \leq {\zeta}(\kappa, D) b^{2}+c^{2}-2 b c \cos A
$$
where ${\zeta}(\kappa, D):=\sqrt{-\kappa} D \operatorname{coth}(\sqrt{-\kappa} D)$.
\end{lemma}
\begin{lemma}
\label{cos2}
\cite[Prop. 4.5]{sakai1996riemannian}
Let $\mathcal{M}$ be a Riemannian manifold with sectional curvature upper bounded by $\kappa\leq 0$. Consider $\N$, a gsc-convex subset of $\M$ with diameter $D$.
For a geodesic triangle that lies entirely within $\N$ with side lengths $a, b, c$, we have
$$
a^{2} \geq b^{2}+c^{2}-2 b c \cos A.
$$
\end{lemma}
\begin{lemma}\cite[Theorem 2.1.12]{bacak2014convex}\label{proj}
Let $(\H, d)$ be a Hadamard space and $C\subset \H$ be a closed convex set. Then $\Pi_C{\x}$ is singleton and $d(\x, \Pi_C\x)\leq d(\x,\z)$ for any $\z\in C\setminus\{\Pi_C\x\}$.
\end{lemma}
\begin{lemma}\cite[Prop. 6.1 and Theorem 6.2]{sturm2003probability}\label{jensen}
Suppose $\x_1,\dots, \x_N\in\N$ where $\N$ is a bounded gsc-convex subset of a Hadamard space. Let $\bx$ be the weighted Fr\'echet mean of $\x_1,\dots, \x_N$ w.r.t. non-negative weights $w_1,\dots,w_N$ such that $\sum_{i=1}^Nw_i=1$, and let $f:\N\rightarrow\R$ be a gsc-convex function. Then $\bx\in\N$ and
$$
f(\bx)\leq \sum_{i=1}^Nw_if(\x_i).
$$
\end{lemma}
\begin{lemma}\label{lem:tech}
Let
\[
g(x)\coloneqq \frac{-a+({a^2+bx^2})^{\frac{1}{2}}}{x},
\]
where $a,b\in\R^{+}$,
then $g(x)$ increases on $[0,\infty)$.
\end{lemma}
\begin{proof}
Taking the derivative w.r.t. $x$, we have
\[
\begin{split}
g'(x)=&\frac{\frac{1}{2}\cdot 2bx(a^2+bx^2)^{-\frac{1}{2}}\cdot x-(-a+(a^2+bx^2)^{\frac{1}{2}})\cdot 1}{x^2}\\
=&\frac{bx^2-(-a\sqrt{a^2+bx^2}+a^2+bx^2)}{x^2\sqrt{a^2+bx^2}}\\
=&\frac{a\sqrt{a^2+bx^2}-a^2}{x^2\sqrt{a^2+bx^2}}\geq 0
\end{split}
\]
holds for any $x>0$. By L'H\^{o}pital's rule, $g(0)=0$ and $g'(0)=\frac{b}{2a}$. Thus we know $g(x)$ increases on $[0,\infty)$.
\end{proof}
\begin{lemma}\cite[Theorem 3.1]{zhou2019revision}\label{gsc-com}
On a Hadamard manifold $\M$, a subset $C$ is gsc-convex iff it contains the geodesic convex combinations of any countably many points in $C$.
\end{lemma}
\begin{lemma}\cite[Prop. H.1]{ahn2020nesterov}\label{smooth}
Let $\M$ be a Riemannian manifold with sectional curvatures lower bounded by $-\kappa<0$ and the distance function $d(\x)=\frac{1}{2}d(\x,\p)^2$ where $\p\in\M$. For $D\geq 0$, $d(\cdot)$ is gsc-smooth within the domain $\{\u\in\M:d(\u, \p)\leq D\}$.
\end{lemma}
\begin{lemma}\cite[Corollary 5.6]{ballmann2012lectures}\label{gsc-convex}
Let $\M$ be a Hadamard space and $C\subset\M$ a convex subset. Then the function $\z\mapsto d(\z, C)$ is gsc-convex on $\M$.
\end{lemma}
\begin{lemma}\cite[Section 2.1]{bacak2014convex}\label{levelset}
Let $\H$ be a Hadamard manifold, $f:\H\rightarrow(-\infty, \infty)$ be a convex lower semicontinuous function. Then any $\beta$-sublevel set of $f$:
$$
\{\x\in\H:f(\x)\leq\beta\}
$$
is a closed convex set.
\end{lemma}
\begin{lemma}\cite[Lemma 4]{sun2019escaping}\label{compare}
Let the sectional curvature of $\M$ be in $[-K,K]$ and let $\x,\y,\z\in\M$ have pairwise distances upper bounded by $R$. Then
$$
\|\invexp_{\x}\y-\invexp_{\x}\z\|\leq(1+c(K)R^2)d(\y,\z).
$$
\end{lemma}
\begin{lemma}\cite[Lemma 2]{duchi2012dual}\label{dual}
Let
$$
\Pi_{\X}^{\psi}(\z,\alpha)=\argmin_{\x\in\X}\la\z,\x\ra+\frac{\psi(\x)}{\alpha}
$$
where $\psi(\cdot)$ is $1$-strongly convex w.r.t. $\|\cdot\|$, then
$$
\|\Pi_{\X}^{\psi}(\u,\alpha)-\Pi_{\X}^{\psi}(\v,\alpha)\|\leq\alpha\|\u-\v\|_{\star}.
$$
\end{lemma}
\begin{lemma}\cite[Prop. 3.1 and Theorem 3.2]{cesa2006prediction}\label{hedge}
Suppose the loss functions $\ell_t$ are $\eta$-exp-concave for some $\eta>0$; then the regret of Hedge is at most $\frac{\ln N}{\eta}$, where $N$ is the number of experts.
\end{lemma}
\end{document} |
\begin{document}
\title{Some characterizations of special rings by delta-invariant}
\author[M. T. Dibaei]{Mohammad T. Dibaei$^1$}
\author[Y. Khalatpour]{Yaser Khalatpour$^2$}
\address{$^{1, 2}$ Faculty of Mathematical Sciences and Computer,
Kharazmi University, Tehran, Iran.}
\address{$^{1}$ School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran }
\email{dibaeimt@ipm.ir} \email{yaserkhalatpour@gmail.com}
\keywords{linkage of ideals, $\delta$-invariant, generically Gorenstein and Gorenstein rings, regular rings.}
\subjclass[2010]{13C14, 13D02, 13H10, 16G50, 16E65, 13H05}
\maketitle
\begin{abstract} This paper is devoted to presenting some characterizations for a local ring to be generically Gorenstein or Gorenstein by means of the $\delta$-invariant and linkage theory.
\end{abstract}
\section{Introduction}
Characterizations of important classes of commutative Noetherian rings are among the main interests of the authors. There is a vast number of papers characterizing rings that are regular, Gorenstein, generically Gorenstein, Cohen-Macaulay, etc.\ (see \cite[Theorem 4.3]{CB}, \cite[Theorem 2.3]{JX}, \cite[Theorem 1.3]{CC}, \cite[Theorem 2.3]{LL}, \cite[Theorem 1]{SD}). For a finite module $M$ over a local ring $R$, the delta invariant of $M$, $\delta_R(M)$, was defined by M. Auslander \cite{AB}. In this paper we use the delta invariant to study special rings such as generically Gorenstein, Gorenstein and regular rings.
A commutative Noetherian ring $R$ is called \emph{generically Gorenstein} whenever $R_\mathfrak{p}$ is Gorenstein for all minimal prime ideals $\mathfrak{p}$ of $R$. It is well known that if $(R, \mathfrak{m})$ is a Cohen-Macaulay local ring with canonical module, then $R$ is generically Gorenstein if and only if the canonical module is isomorphic to an ideal of $R$ (see \cite[Proposition 3.3.18]{BH}).
In Section 2, we investigate the delta invariant in order to characterize generically Gorenstein, Gorenstein, and regular rings.
Our first result characterizes Cohen-Macaulay local rings with canonical modules which are generically Gorenstein but not Gorenstein (Theorem \ref{g}). We prove that a complete local ring $(R, \mathfrak{m}, k)$ is regular if and only if $R$ is Gorenstein and a syzygy module of $k$ has a principal direct summand $R$-module whose delta invariant equals $1$ and which satisfies an extra condition (see Theorem \ref{2}).
Section 3 is devoted to characterizing non-regular Gorenstein rings by means of higher delta invariants. We end the section with the following result.
{\bf Corollary
\ref{cor}}. {\it Suppose that $(R, \mathfrak{m})$ is a Gorenstein local ring of dimension $d$ such that $R/\mathfrak{m} $ is infinite. Consider the following statements.
\begin{itemize}
\item [(a)] $R$ is not regular.
\item [(b)] There exists an $ \mathfrak{m} $-primary ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $ I^{i}/I^{i+1} $ is free $R/I $-module for any $ i\geq0 $, and
\item[(ii)] for all $R$-regular sequences ${\bf x}=x_1,\cdots, x_s$ in $ I $ with \\ $ x_i+(x_1,\cdots, x_{i-1}) \in ( I/(x_1,\cdots, x_{i-1}))\setminus (I/(x_1,\cdots, x_{i-1}))^{2}$, $1\leq i\leq s$, $\delta^{n}_{R/{\bf x}R}\left( R/I\right)=0$ for all $ n\geq0 $.
\end{itemize}
\item [(c)] There exists a non-zero ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $ \mbox{depth}\,( G )\geq \mbox{depth}\,_R(R/I) $, and
\item[(ii)] $\delta_R^{n}(R/I^{m})=0 $ for all integers $ n\geq d-\mbox{depth}\,( G )+1 $ and $ m\geq1 $.
\end{itemize}
\end{itemize}
Then the implications (a)$\Rightarrow$(b) and (b)$\Rightarrow$(c) hold true. If $ \mbox{depth}\,(G)>\mbox{depth}\,_R(R/I) $, the implication (c)$\Rightarrow$(a) holds true. }
In Section 4, we are interested in characterizing a ring as Gorenstein by means of linkage theory and show how the generically Gorenstein property of a ring of positive dimension may be reduced to that of one of smaller dimension. As a result, a class of non-generically Gorenstein rings may be recognized.
Here is our main result in Section 4. If $I$ and $J$ are ideals of a Cohen-Macaulay local ring with canonical module $\omega_R$ such that $0:_{\omega_R} I=J\omega_R $ and $0:_{\omega_R} J=I\omega_R $ (these conditions coincide with Peskine--Szpiro's conditions when the base ring is Gorenstein, see \cite{PS}), then the Cohen-Macaulay-ness of $R/I$ and of $R/J$ are equivalent provided the $\mbox{G}$-dimension of certain particular modules is finite (see Theorem \ref{H}). We use this result to show when $\delta_R^i(I\omega_R)$ vanishes for all $i\geq 0$, where $\delta_R^i$ denotes the higher delta invariant.
Throughout $( R,\mathfrak{m})$ is a commutative local Noetherian ring with maximal ideal $\mathfrak{m} $ and residue field $k=R/\mathfrak{m} $, and all modules are finite (i.e. finitely generated).
\section{Characterization by delta invariant}
The delta invariant of $M$ was introduced by M. Auslander in the paragraph just after \cite[Proposition 5.3]{AB}. For a finite (i.e.\ finitely generated) $R$-module $M$, denote by $M^\text{cm}$ the sum of all submodules $\phi(L)$ of $M$, where $L$ ranges over all maximal Cohen-Macaulay $R$-modules with no non-zero free direct summands and $\phi$ ranges over all $R$-linear homomorphisms from $L$ to $M$. The $\delta$-invariant of $M$, denoted by $\delta_{R}\left( M\right)$, is defined to be $\mu_R(M/M^\text{cm})$, the minimal number of generators of the quotient module $M/M^\text{cm}$.
A short exact sequence $0\longrightarrow Y \longrightarrow X \overset{\varphi}{\longrightarrow} M \longrightarrow 0 $ of $R$-modules is called a Cohen-Macaulay approximation of $ M $ if $X$ is a maximal Cohen-Macaulay $R$-module and $Y$ has finite injective dimension over $R$. A Cohen-Macaulay approximation $0\longrightarrow Y \longrightarrow X \overset{\varphi}{\longrightarrow} M \longrightarrow 0 $ of $M$ is called minimal if each endomorphism $ \psi $ of $ X $, with $ \varphi\circ \psi=\varphi $, is an automorphism of $ X $. If $R$ is Cohen-Macaulay with canonical module $ \omega_R$, then a minimal Cohen-Macaulay approximation of $ M $ exists and is unique up to isomorphism (see \cite[Theorem 11.16]{GL}, \cite[Corollary 2.4]{MA} and \cite[Proposition 1.7]{MR}). If the sequence $ 0\longrightarrow Y \longrightarrow X \overset{\varphi}{\longrightarrow} M \longrightarrow 0 $ is a minimal Cohen-Macaulay approximation of $ M $, then $ \delta_R(M) $ equals the maximal rank of a free direct summand of $ X $ (see \cite[Exercise 11.47]{GL} and \cite[Proposition 1.3]{MM}). Also it can be shown that if $R$ is a Cohen-Macaulay ring which admits a canonical module, then $\delta_R(M)\leq n$ whenever there is an epimorphism $X\oplus R^{n}\longrightarrow M$ with $X$ a maximal Cohen-Macaulay module with no free direct summands (see \cite[Proposition 11.25]{GL} and \cite[Proposition 4.8]{AA}). This definition of delta is used by Ding \cite{BB}. We recall the basic properties of the delta invariant.
\begin{prop}\label{a}
\cite[Corollary 11.26]{GL} and \cite[Lemma 1.2]{AD}. Let $M $ and $N $ be finite modules over a Gorenstein local ring $(R, \mathfrak{m}, k)$. Then the following statements hold true.
\begin{itemize}
\item [(i)] $\delta_{R}(M\oplus N)=\delta_{R}\left( M\right)+\delta_{R}\left( N\right) $.
\item[(ii)] If there is an $R$-epimorphism $M\longrightarrow N $, then $\delta_{R}\left( M\right)\geq \delta_{R}\left( N\right)$.
\item[(iii)] $\delta_{R}(M)\leq\ \mu(M) $.
\item[(iv)] $\delta_{R}\left( k\right)=1$ if and only if $R$ is regular.
\item[(v)] $\delta_{R}(M)=\mu(M) $ when $\mbox{proj.dim}\,_{R}(M) $ is finite.
\end{itemize}
\end{prop}
Here is our first observation, which shows how one may use the $\delta$-invariant to characterize when a Cohen-Macaulay local ring with canonical module is generically Gorenstein.
\begin{thm}\label{g} Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring of dimension $d>0$ with canonical module $\omega_{R}$. Then the following statements are equivalent.
\begin{itemize}
\item [(a)] The ring $R$ is generically Gorenstein and non-Gorenstein.
\item [(b)] There exists an ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $\delta_{R}\left( R/I\right)=1$,
\item[(ii)] $\mbox{ht}\,_{R}\left( I\right) = 1$,
\item[(iii)] There exists a commutative diagram
\begin{center}
$\begin{array}{cccccccccc}
& & R & \longrightarrow & R/I & \longrightarrow & 0\\
& &\downarrow\cong & & \downarrow\cong & & \\
& &\mbox{Hom}\,_{R}(I,\omega_{R}) & \longrightarrow &\mbox{Ext}\,^{1}_{R}(R/I,\omega_{R}) &\longrightarrow & 0
\end{array}$
\end{center}
where the vertical maps are isomorphisms.
\end{itemize}
\end{itemize}
\end{thm}
\begin{proof}
(a)$\Rightarrow$(b). Assume that $R$ is generically Gorenstein and that $\omega_{R}\ncong R$. As $\omega_{R}$ is an ideal of $R$, we consider the exact sequence
\begin{center}
$0\longrightarrow \omega_{R} \longrightarrow R\overset{\pi}{\longrightarrow} M \longrightarrow 0, $
\end{center} where $M:=R/\omega_R$.
Let $L$ be a maximal Cohen-Macaulay $R$-module with no free direct summands, $\phi:L\longrightarrow M$ an $R$-homomorphism. Applying the functor $\mbox{Hom}\,_R(L,-)$
gives the long exact sequence
\begin{center}
$0\longrightarrow \mbox{Hom}\,_{R} (L, \omega_{R} ) \longrightarrow \mbox{Hom}\,_{R} (L,R) \longrightarrow \mbox{Hom}\,_{R} (L,M )\longrightarrow \mbox{Ext}\,^{1}_{R}(L,\omega_{R}).$
\end{center}
As $\mbox{Ext}\,^{1}_{R}(L,\omega_{R})=0$, there exists $\alpha\in \mbox{Hom}\,_{R} (L,R)$ such that $\pi \circ \alpha =\phi $. If there exists $x\in L$ such that $\phi(x)\notin \mathfrak{m} M$ then we have $\alpha(x)\notin \mathfrak{m}$, i.e. $\alpha(x)$ is a unit and so $\alpha$ is an epimorphism, which means that $L$ has a free direct summand; this is not the case. Hence $\phi(L)\subseteq \mathfrak{m} M $. Therefore $M^\text{cm} \subseteq \mathfrak{m} M$ and we have
$$\delta_{R}(M)=\mu(M/M^\text{cm})= \mbox{vdim}\,_{k}(M/(M^\text{cm}+\mathfrak{m} M))=\mu(M/\mathfrak{m} M)=\mu(M)=1. $$
Moreover, we have $\mbox{Ext}\,_{R}^{1}\left( R/\omega_{R},\omega_{R}\right) \cong R/\omega_{R}$ since $R/\omega_{R}$ is a Gorenstein ring of dimension $d-1$, and $\mbox{Hom}\,_{R} (\omega_{R} ,\omega_{R})\cong R$, $\mbox{ht}\,_{R}\left( \omega_{R}\right) =1$. Now statement (iii) follows naturally.
(b)$\Rightarrow$(a). As $\mbox{ht}\,(I)= 1$, $I\nsubseteq\underset{\mathfrak{p}\in\mbox{Ass}\,(R)}{\cup}\mathfrak{p}$ and so $\mbox{Hom}\,_{R}(R/I,\omega_{R})=0$. Hence, naturally, we obtain the exact sequence
\begin{center}
$0\longrightarrow \mbox{Hom}\,_{R}(R,\omega_{R})\longrightarrow \mbox{Hom}\,_{R}(I,\omega_{R})\longrightarrow \mbox{Ext}\,^{1}_{R}(R/I,\omega_{R})\longrightarrow 0.$
\end{center}
One has the following commutative diagram
\begin{center}
$\begin{array}{cccccccccc}
0 & \longrightarrow & I & \longrightarrow & R & \longrightarrow & R/I &\longrightarrow &0& \\
& & & & \downarrow\cong & & \downarrow \cong& & \\
0 & \longrightarrow & \mbox{Hom}\,_{R}(R,\omega_{R}) & \longrightarrow &\mbox{Hom}\,_{R}(I,\omega_{R}) & \longrightarrow &\mbox{Ext}\,^{1}_{R}(R/I,\omega_{R}) &\longrightarrow & 0.
\end{array}$
\end{center}
Therefore we obtain $I\cong \omega_{R} $, which means that $R$ is generically Gorenstein.
To see the final claim, assume contrarily that $R$ is Gorenstein. Hence $ \omega_R\cong R$ and $ \mbox{Hom}\,_{R}(R,\omega_{R})\cong \mbox{Hom}\,_{R}(I,\omega_{R}) $. Now, the commutative diagram (iii) implies that $R/I=0 $, so $ \delta_R(R/I)=0 $, which is a contradiction.
\end{proof}
Our next observation identifies ideals $J$ with $\delta_R(R/J)=1$.
\begin{prop}
Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring with canonical module $\omega_R$, and let $I$ and $J$ be two ideals of $R$ such that $R/J\cong\Omega_R(R/I)$, $J\cong \mbox{Ann}\,_{\omega_R}(I) $ and $ \mbox{ht}\,_R(J)\geq 2$. Then $R$ is generically Gorenstein and $ \delta_R(R/J)=1 $.
\end{prop}
\begin{proof}
Apply the functor $\mbox{Hom}\,_R(-, \omega_R) $ on the exact sequence $ 0\longrightarrow R/I\longrightarrow \overset{t}{\oplus}R\longrightarrow R/J\longrightarrow 0 $ to get the exact sequence
\begin{center}
$ 0\longrightarrow \mbox{Hom}\,_R(R/J, \omega_R)\longrightarrow \mbox{Hom}\,_R(\overset{t}{\oplus}R, \omega_R)\longrightarrow \mbox{Hom}\,_R(R/I, \omega_R)\longrightarrow \mbox{Ext}\,^{1}_R(R/J, \omega_R)\longrightarrow0 $.
\end{center}
But we have $ \mbox{Hom}\,_R(R/J, \omega_R)=0=\mbox{Ext}\,^{1}_R(R/J, \omega_R) $, since $ \mbox{ht}\,_R(J)\geq 2 $. Therefore we have $\overset{t}{\oplus}\omega_R\cong J$. Then $R$ is generically Gorenstein.
Finally, assume that $L$ is a maximal Cohen-Macaulay $R$-module with no free direct summands and that $f: L\rightarrow R/J$ is an $R$-homomorphism. Applying $\mbox{Hom}\,(L, -)$ on the exact sequence $$ 0\longrightarrow \overset{t}{\oplus}\omega_R\cong J\longrightarrow R \longrightarrow R/J \longrightarrow 0 $$ implies that $f(L)\subseteq\mathfrak{m}(R/J)$, so that $\delta_R(R/J)=1$.
\end{proof}
Over a Gorenstein local ring $R$, Proposition \ref{a}(iii) states the inequality $\delta_R(M)\leq\mu(M)$. In the following, we explore when equality holds true by means of Gorenstein dimensions.
A finite $R$-module $ M $ is said to be {\it totally reflexive} if the natural map $ M\longrightarrow \mbox{Hom}\,_{R}(\mbox{Hom}\,_{R}(M, R), R) $ is an isomorphism and
$ \mbox{Ext}\,^{i}_{R}(M, R)=0=\mbox{Ext}\,^{i}_{R}(\mbox{Hom}\,_{R}(M, R), R)$ for all $i>0$.
An $R$-module $ M $ is said to have Gorenstein dimension $\leq n$, written $\mbox{G-dim}\,_R(M)\leq n $, if there exists an exact sequence $$ 0\longrightarrow G_{n}\longrightarrow\cdots \longrightarrow G_{1}\longrightarrow G_{0}\longrightarrow M\longrightarrow 0, $$ of $R$-modules such that each $ G_{i} $ is totally reflexive. Write $ \mbox{G-dim}\,_{R}(M)= n $ if there is no such sequence of shorter length. If no such finite exact sequence exists, we write $\mbox{G-dim}\,_R(M)=\infty$.
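As an illustration (a standard example, recorded here only for the reader's convenience), over the Artinian Gorenstein ring $ R=k[[x]]/(x^2) $ the residue field $k$ is totally reflexive: one has $\mbox{Hom}\,_R(k,R)\cong \mbox{Soc}\,(R)\cong k$, the biduality map is an isomorphism, and $\mbox{Ext}\,^{i}_{R}(k,R)=0$ for all $i>0$ since $R$ is self-injective. Hence
\[
\mbox{G-dim}\,_R(k)=0 \qquad \text{while} \qquad \mbox{proj.dim}\,_R(k)=\infty,
\]
so the Gorenstein dimension may be finite even when the projective dimension is not.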
Our first result indicates that the existence of a finite length $R$-module $M$ for which the equality $\delta_R(M)=\mu(M)$ holds true may put a strong condition on $R$. More precisely:
In \cite[Theorem 6.5]{CB}, it is shown that the local ring $(R, \mathfrak{m}, k)$ is Gorenstein if and only if $\Omega^{n}_{R}(k) $ has a G-projective summand for some $n$, $ 0\leq n\leq \mbox{depth}\, R+2 $ .
\begin{thm} \label{l}
Let $(R, \mathfrak{m}) $ be a local ring. The following statements are equivalent.
\begin{itemize}
\item[(i)] $R$ is Gorenstein.
\item[(ii)] There exists an $R$--module $M$ such that $ \delta_R(M)=\mu(M)$, $ \mathfrak{m}^{n}M=0$, and $\emph{\mbox{G-dim}\,}_{R}(\mathfrak{m}^{n-2}M^{\emph{cm}})<\infty$ for some integer $ n\geq 2 $.
\end{itemize}
\end{thm}
\begin{proof}
Assume first that $R$ is Gorenstein and that $\underline{x}$ is a maximal $R$--regular sequence. Thus there is a surjective homomorphism $R/\mathfrak{m}^{t}\longrightarrow R/\underline{x}R$ for some integer $t\geq 1$. As $ \mbox{proj.dim}\,(R/\underline{x}R)<\infty $, Proposition \ref{a} implies that
\begin{center}
$1=\mu(R/\underline{x}R)=\delta_R(R/\underline{x}R)\leq\delta_R(R/\mathfrak{m}^{t})\leq\mu(R/\mathfrak{m}^{t})=1.$
\end{center}
Therefore $\delta_R(R/\mathfrak{m}^{t})=1=\mu(R/\mathfrak{m}^{t})$. Now by setting $ n=t+1\geq2 $, the module $M:=R/\mathfrak{m}^{t}$ trivially justifies claim (ii).
For the converse, consider the natural exact sequence
\begin{center}
$0\longrightarrow (M^{\text{cm}}+\mathfrak{m} M)/\mathfrak{m} M\longrightarrow M/\mathfrak{m} M\longrightarrow \frac{M/M^{\text{cm}}}{\mathfrak{m}(M/M^{\text{cm}})}\longrightarrow 0 $.
\end{center}
Now the equality $\delta_R(M)=\mu(M)$ implies that $M^{\text{cm}}\subseteq \mathfrak{m} M $. As $\mathfrak{m}^nM=0$, $ \mathfrak{m}^{n-2}M^{\text{cm}} $ is a vector space over $k$. Our assumption $\mbox{G-dim}\,_{R}(\mathfrak{m}^{n-2}M^{\text{cm}})<\infty$ implies that $\mbox{G-dim}\,_{R}(R/\mathfrak{m})<\infty$. Hence $R$ is Gorenstein by \cite[Theorem 1.4.9]{BA}.
\end{proof}
Assume that $ (R, \mathfrak{m}, k) $ is a local ring with residue field $k$. In \cite[Corollary 1.3]{CC}, Dutta presents a characterization for $R$ to be regular in terms of the existence of a syzygy of $k$ with a free direct summand. Later on,
Takahashi, in \cite[Theorem 4.3]{CB}, generalized the result in terms of the existence of a syzygy module of the residue field having a semidualizing module as its direct summand. Also Ghosh et al., in \cite[Theorem 3.7]{BD}, have shown that the ring is regular if and only if a syzygy module of $ k $ has a non-zero direct summand of finite injective dimension.
Now we investigate these notions by means of the delta invariant. Denote by $\Omega^{i}_{R}(k) $ the $i$th syzygy, in the minimal free resolution, of $k$. In the next result we prove, by means of the $\delta$-invariant, that for $i\gg 0$ (in fact, for every $i>d$), $\Omega_R^i(k)$ does not possess a non-zero direct summand of finite injective dimension. D. Ghosh informed us that this result has been proved by him directly \cite[Theorem 3.3]{GO}.
\begin{prop} \label{f}
Let $(R, \mathfrak{m}, k)$ be a local ring of dimension $d$. Then, for any $ i>d $, the following statements hold true.
\begin{itemize}
\item [(a)] $\Omega^{i}_{R}(k) $ has no non-zero direct summand of finite injective dimension.
\item[(b)] If $R$ is Cohen-Macaulay with canonical module $ \omega_R$, then $\Omega^{i}_{R}(k) $ has no direct summand isomorphic to $ \omega_R$.
\item[(c)] $\Omega^{i}_{R}(k) $ has no free direct summand.
\end{itemize}
\end{prop}
\begin{proof}
(a). Assume contrarily that, for an integer $i>d$, $\Omega^{i}_{R}(k)=M\oplus N$ for some $R$--modules $ M\neq 0 $ and $ N $ with $ \mbox{inj.dim}\,_R(M)<\infty$. By \cite[Theorem 3.7]{BD}, $R$ is regular, so that $ \mbox{proj.dim}\,_R(M)<\infty$. Thus, by Proposition \ref{a}(v), $ \delta_R(M)=\mu(M)$. As $i-1\geq d$, $\Omega^{i-1}_{R}(k) $ is a maximal Cohen-Macaulay $R$-module. By the paragraph just after \cite[Proposition 5.3]{AB}, the exact sequence $0\longrightarrow \Omega^{i}_{R}(k)\longrightarrow \oplus R\longrightarrow \Omega^{i-1}_{R}(k)\longrightarrow 0$ implies that $ \delta_R(\Omega^{i}_{R}(k))=0 $. Therefore by Proposition \ref{a}(i) we obtain $ \mu(M)=0 $, which contradicts the fact that $ M\neq 0$.
Parts (b) and (c) are clear conclusions of (a).
\end{proof}
\begin{dfn}
An $R$-module $ X $ is said to satisfy the condition $ (\ast) $ whenever, for any $ X $-regular element $ a $, $ X/aX $ is indecomposable as an $R/aR$-module.
\end{dfn}
\begin{thm} \label{2}
Let $( R, \mathfrak{m}, k) $ be a complete local ring of dimension $ d $. The following statements are equivalent.
\begin{itemize}
\item[(i)] $R$ is regular.
\item[(ii)] $R$ is Gorenstein and, for some $ n\geq 0$, $\Omega^{n}_{R}(k) $ has a direct summand which is a principal $R$-module with delta invariant $1$ and which satisfies the property $ (\ast) $.
\end{itemize}
\end{thm}
\begin{proof}
(i)$\Rightarrow$(ii). $k=\Omega^{0}_{R}(k) $ fulfills our statement by Proposition \ref{a}.
(ii)$\Rightarrow$(i). Suppose that $R$ is Gorenstein and, for an integer $n\geq 0$, $\Omega^{n}_{R}(k)\cong X\oplus Y $ for some $R$-modules $ X $ and $ Y $ such that $X\cong R/\mbox{Ann}\,_R(X)$ with $ \delta_R(X)=1 $. The case $ n=0 $ implies that $R$ is regular. So we may assume that $ n\geq 1$.
We proceed by induction on $d$.
For the case $ d=0 $, if $ \mathfrak{m}\neq0 $ then $ \mbox{Soc}\,(R)\neq0 $ and $R/\mbox{Soc}\,(R) $ is a maximal Cohen-Macaulay $R$-module with no free direct summand, so that $ \delta_R(R/\mbox{Soc}\,(R))=0 $. On the other hand, by \cite[Lemma 2.1]{BD}, $ \mbox{Soc}\,(R)\subseteq \mbox{Ann}\,_R(\Omega^{n}_{R}(k) )=\mbox{Ann}\,_R(X\oplus Y)\subseteq \mbox{Ann}\,_R(X)$. Therefore the natural surjection $R/\mbox{Soc}\,(R)\longrightarrow R/\mbox{Ann}\,_R(X)\cong X $ implies that $ 1=\delta_R(X)\leq \delta_R(R/\mbox{Soc}\,(R))=0$, which is absurd. Hence $ \mathfrak{m}=0 $ and $R=R/\mathfrak{m} $ is regular.
Now we suppose that $ d\geq1 $ and the statement is settled for $ d-1 $. As $R$ is Cohen-Macaulay, we choose an $R$-regular element $ y\in \mathfrak{m}\setminus\mathfrak{m}^{2} $. Hence $ y $ is $\Omega^{n}_{R}(k) $-regular and $X$-regular. We set $ \overline{(-)}=(-)\otimes_R R/yR$. Note that $ \overline{X} $ is a principal $ \overline{R}$-module and that, by \cite[Corollary 2.5]{CA} and Proposition \ref{a}, $ 1=\delta_R(X)\leq \delta_{\overline{R}}(\overline{X}) \leq \mu(\overline{X})=1 $. Note that, by \cite[Corollary 3.5]{CB}, we have
\begin{center}
$ \overline{\Omega^{n}_{R}(k)}\cong \Omega^{n}_{\overline{R}}(k)\oplus \Omega^{n-1}_{\overline{R}}(k). $
\end{center}
Therefore we have $ \overline{X}\oplus \overline{Y} \cong\overline{\Omega^{n}_{R}(k)}\cong \Omega^{n}_{\overline{R}}(k)\oplus \Omega^{n-1}_{\overline{R}}(k) $. But, by the property $ (\ast) $, $ \overline{X} $ is an indecomposable $ \overline{R} $-module, so, by the Krull-Schmidt uniqueness theorem (see \cite[Theorem 21.35]{BN}), $ \overline{X} $ is a direct summand of $ \Omega^{n-1}_{\overline{R}}(k) $ or of $ \Omega^{n}_{\overline{R}}(k) $. Now our induction hypothesis implies that $ \overline{R} $ is regular and so is $R$.
\end{proof} We end this section with the following remark, which gives some more information on the delta invariant.
\begin{rmk} \label{n}
Let $(R, \mathfrak{m})$ be a local ring.
\begin{itemize}
\item[(a)] The ring $R$ is regular if and only if $R$ is Gorenstein and $\delta_R(M)>0$ for every non-zero finitely generated $R$-module $ M $.
\item[(b)] If $R$ is Cohen-Macaulay with canonical module $ \omega_R$, then $R$ is not Gorenstein if and only if there exists a non-zero $R$-module $M$ with $ \delta_R(M)=0$ and $\mbox{inj.dim}\,_{R}(M)<\infty $.
\end{itemize}
\end{rmk}
\begin{proof}
(a). Suppose that $R$ is regular. Assume contrarily that there exists a non-zero $R$--module $ M $ such that $\delta_R(M)=0$. By the definition of delta, there exists a surjective homomorphism $ X\longrightarrow M $ such that $ X $ is a maximal Cohen-Macaulay $R$-module with no free direct summand. On the other hand, as $R$ is regular, $ \mbox{proj.dim}\,_R(X)=0 $ and so $ X $ is a free $R$-module, which is not the case.
Conversely, by assumption and Proposition \ref{a}, $ 1\leq\delta_R(R/\mathfrak{m})\leq \mu(R/\mathfrak{m})=1$. Hence, by Proposition \ref{a}, $R$ is regular.
(b). Assume that $R$ is not Gorenstein. As $\omega_R$ is an indecomposable maximal Cohen-Macaulay module and $R$ is not Gorenstein, $\omega_R$ has no free direct summand and so we trivially have $ \delta_R(\omega_{R})=0 $ with $ \mbox{inj.dim}\,_R(\omega_{R})<\infty $.
Conversely, assume contrarily that $R$ is Gorenstein. As $\mbox{inj.dim}\,_R(M)<\infty$, we have $ \mbox{proj.dim}\,_R(M)<\infty $, by \cite[Exercise 3.1.25]{BH}. Hence Proposition \ref{a} implies that $ 0=\delta_R(M)=\mu(M)$ which contradicts the fact that $ M\neq 0$.
\end{proof}
\section{Gorenstein non-regular rings}
For an integer $n\geq 0$ and an $R$--module $M$, we set $\delta_{R}^{n}(M):=\delta_R(\Omega^{n}_{R}(M)) $, the $n$th higher delta invariant of $M$, where $ \Omega^{n}_{R}(M) $ is the $ n $th syzygy module of $ M $ in its minimal free resolution. The following result will be used in characterizing non-regular Gorenstein rings of dimension $\leq 2$.
\begin{prop}\label{sa}
Assume that $(R, \mathfrak{m})$ is a $1$-dimensional Gorenstein local ring and that $ I $ is an $ \mathfrak{m} $-primary ideal of $R$ such that $ \mu_R(I)\geq2 $. Then
$\delta_R(I^{n}/I^{n+1})=0 $ and $ \delta_R^{m}(R/I^{n})=0 $ for all positive integers $ n $ and $ m $.
\end{prop}
\begin{proof}
The assumption $ \mbox{dim}\,(R/I)=0 $ and the exact sequence $ 0\longrightarrow I\longrightarrow R\longrightarrow R/I\longrightarrow0 $ imply that $ I $ is maximal Cohen-Macaulay as an $R$--module. As $ \mu_R(I)\geq2 $, $ I $ has no free direct summand and so $ \delta_R(I)=0 $. For a finite $R$-module $M$, the natural epimorphism $ \overset{\mu_R(M)}{\oplus}I \longrightarrow IM $, by Proposition \ref{a}, implies that $ \delta_R(IM)\leq \delta_R(\overset{\mu_R(M)}{\oplus}I)=0 $, which gives $ \delta_R(IM)=0 $. Let $ n $ and $ m $ be positive integers. As $ \Omega^{m}_{R}(R/I^{n}) $ is a maximal Cohen-Macaulay $R$-module for all $ m\geq1 $, we have $ \delta_R^{m}(R/I^{n})=0 $ for all $ m>1 $ (see the paragraph just after \cite[Proposition 5.3]{AB}). For the case $ m=1 $ we have $\delta_R^{1}(R/I^{n})=\delta_R(\Omega^{1}_{R}(R/I^{n}))=\delta_R(I^{n})=\delta_R(II^{n-1})=0 $.
\end{proof}
For an $R$-regular element $ x$ in $\mathfrak{m} $, we set $ \overline{(-)}=(-)\otimes_R R/xR$. Recall from the first paragraph of section 5 of \cite{AB} that an $R$-module $ M $ is called {\it weakly liftable} on $ \overline{R}$ if $ M $ is a direct summand of $\overline{N} $ for some $R$-module $N$. We recall the following result from \cite[Proposition 5.7]{AB} and provide a proof, using Proposition \ref{sa}, for the convenience of the reader.
\begin{rmk} \label{12}
Suppose that $(R, \mathfrak{m}, k)$ is a Gorenstein local ring of dimension $ d\geq1 $. If $R$ is not regular then $ \delta_R^{n}(R/\mathfrak{m})=0 $ for all integers $ n\geq0 $.
\end{rmk}
\begin{proof}
Suppose that $R$ is not regular. If $ n=0 $ then we have nothing to prove. For $ n\geq1 $, we proceed by induction on $ d $. If $ d=1 $ then $ \mu_R(\mathfrak{m})\geq2$ and the result follows by Proposition \ref{sa}. Now assume that $ d \geq 2 $ and that the result has been proved for $< d $. Choose an $R$-regular element $ x\in \mathfrak{m} \setminus \mathfrak{m}^{2} $, set $ \overline{R}=R/xR$ and $ \overline{\mathfrak{m}}=\mathfrak{m}/xR$. We have, by \cite[Corollary 3.5]{CB}, $ \overline{\Omega^{n}_{R}(k)}\cong \Omega^{n}_{\overline{R}}(k)\oplus \Omega^{n-1}_{\overline{R}}(k) $ and so
\[\begin{array}{rl}
\delta_R^{n}(k)=\delta_R(\Omega^{n}_{R}(k))&\leq\delta_{\overline{R}}(\overline{\Omega^{n}_{R}(k)})\\
&=\delta_{\overline{R}}(\Omega^{n}_{\overline{R}}(k))+\delta_{\overline{R}}(\Omega^{n-1}_{\overline{R}}(k))\\
\text{(by induction hypothesis)} &=0+0\\
&=0.\\
\end{array}\]
\end{proof}
\begin{cor} \label{7}
Assume that $(R, \mathfrak{m})$ is a Gorenstein local ring of positive dimension $d\leq2$. Then the following statements are equivalent.
\begin{itemize}
\item [(a)] $R$ is not regular.
\item [(b)] There exists an $ \mathfrak{m} $-primary ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $ \mu_R(I)\geq d+1 $,
\item[(ii)] $ I^{d-1}/I^{d} $ is a free $R/I $-module,
\item[(iii)] There exists an $R$-regular element $ x\in I^{d-1} \setminus I^{d} $ such that the natural map $ R/I\overset{x.}{\longrightarrow} I^{d-1}/I^{d} $ is injective and $\delta_{R/xR}\left( R/I\right)=0$.
\end{itemize}
\item [(c)] There exists a non-zero ideal $I$ of $R$ such that $ \delta_R^{n}(R/I)=0 $ for all positive integers $ n\geq1 $.
\end{itemize}
\end{cor}
\begin{proof}
(a)$\Rightarrow$(b). We set $ I=\mathfrak{m} $. If $ d=1 $ the result is trivial. For $ d=2 $, choose an $R$-regular element $ x\in \mathfrak{m} \setminus \mathfrak{m}^{2} $. Hence the map $ R/\mathfrak{m}\longrightarrow \mathfrak{m}/\mathfrak{m}^{2} $, with $a+\mathfrak{m}\rightsquigarrow ax+\mathfrak{m}^2$, is injective. As $ R/xR $ is not regular, $\delta_{R/xR}\left( R/\mathfrak{m}\right)=\delta_{R/xR}\left( (R/xR)/(\mathfrak{m}/xR) \right)=0 $ by Remark \ref{12}, and it is also clear that $ \mathfrak{m}/{\mathfrak{m}}^{2} $ is a free $ R/\mathfrak{m} $-module.
(b)$\Rightarrow$(c). Set $ \overline{R}=R/xR$ and assume that $ n\geq1 $. If $ d=1 $, the result follows by Proposition \ref{sa}. Let $ d=2 $. As $ \mbox{dim}\,(\overline{R})=1 $ and $ \mu_{\bar R}(\bar I)\geq 2 $, by Proposition \ref{sa} and assumption (iii), we have $ \delta_{\overline{R}}(\Omega^{n}_{\overline{R}}(R/I))=\delta_{\overline{R}}(\Omega^{n}_{\overline{R}}(\overline{R}/\overline{I}))=0$ and $\delta_{\overline{R}}(\Omega^{n-1}_{\overline{R}}(R/I))=\delta_{\overline{R}}(\Omega^{n-1}_{\overline{R}}(\overline{R}/\overline{I}))=0$. On the other hand, $ \mbox{dim}\, R/I=0 $ and the exact sequence $$ 0 \longrightarrow R/I\longrightarrow I/I^{2}\longrightarrow I/(xR+I^{2})\longrightarrow 0 $$ imply that $ \mbox{proj.dim}\,_{R/I}(I/(xR+I^{2})) =0 $ by the Auslander-Buchsbaum formula. Therefore, the commutative diagram
\begin{center}
$\begin{array}{cccccccccc}
0 & \longrightarrow & R/I & \overset{x.}{\longrightarrow} & I/xI & \longrightarrow &I/xR &\longrightarrow &0& \\
& & \parallel & & \downarrow & & \downarrow\\
0 & \longrightarrow & R/I & \overset{x.}{\longrightarrow} &I/I^{2} & \longrightarrow &I/(xR+I^{2}) &\longrightarrow & 0
\end{array} $
\end{center}
with exact rows implies that the upper row
splits. Hence $R/I $, as an $\bar{R}$--module, is weakly liftable on $\bar{R}$. Thus, by \cite[Proposition 5.2]{CB}, we obtain
$ \overline{\Omega^{n}_{R}(R/I)}\cong \Omega^{n}_{\overline{R}}(R/I)\oplus \Omega^{n-1}_{\overline{R}}(R/I) $.
Therefore
\[\begin{array}{rl}
\delta_R^{n}(R/I)= \delta_R(\Omega^{n}_{R}(R/I))&\leq\delta_{\overline{R}}(\overline{\Omega^{n}_{R}(R/I)})\\
&=\delta_{\overline{R}}(\Omega^{n}_{\overline{R}}(R/I))+\delta_{\overline{R}}(\Omega^{n-1}_{\overline{R}}(R/I))\\
&=0+0\\
&=0.\\
\end{array}\]
(c)$\Rightarrow$(a). By assumption $ \delta_R(I)=\delta_R^{1}(R/I)=0 $; therefore, by Remark \ref{n}, $R$ is not regular.
\end{proof}
It is shown by Yoshino \cite[Theorem 2.3]{y} that, in a non-regular Gorenstein local ring $(R, \mathfrak{m})$ of dimension $d$ with $\mbox{depth}\, ( \mathcal{G}_\mathfrak{m}(R) )\geq d-1$, one has $ \delta_R^{n}(R/\mathfrak{m}^{m})=0 $ for all positive integers $ n $ and $ m $, where $ \mathcal{G}_\mathfrak{m}(R) $ denotes the associated graded ring of $R$ with respect to $\mathfrak{m}$. In order to present a generalization of this theorem, we first record a lemma. For an ideal $I$ of a ring $R$, we set $ G:=\mathcal{G}_I(R) $, the associated graded ring of $R$ with respect to $I$.
\begin{lem}\label{sb}
Assume that $(R, \mathfrak{m})$ is a local ring and that $ I $ is an $ \mathfrak{m} $-primary ideal of $R$ such that $ I^{i}/I^{i+1} $ is a free $R/I $-module for all $ i\geq0 $ $($ e.g. $ I $ an Ulrich ideal of $R$, see \cite[Proposition 3.2]{K} $ ) $. Suppose that $ x\in I\setminus I^2 $ is such that $ x^{\ast}:=x+I^2 $ is a $ G $-regular element of $ G $. Set $ \bar{R}=R/xR$. Then, for any $n\geq 0$, we have $ \Omega^{n}_R(I^{m})\otimes_R \bar{R}\cong \Omega^{n}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{n}_{\bar{R}}(I^{m}/xI^{m-1}) $ for all $ m\geq1 $.
\end{lem}
\begin{proof}
As $ x^{\ast} $ is a $ G $-regular element in $ G $, the map $ I^{m-1}/I^{m}\overset{x.}{\longrightarrow} I^{m}/I^{m+1} $ is injective for all $ m\geq1 $. We prove the claim by induction on $ n $. By \cite[Lemma 2.3]{K} we have $ I^{m}/xI^{m}\cong I^{m-1}/I^{m}\oplus I^{m}/xI^{m-1}$.
Therefore
\[\begin{array}{rl}
\Omega^{0}_R(I^{m})\otimes_R \bar{R}&=I^{m}\otimes_R \bar{R} \\
&\cong I^{m}/xI^{m}\\
&\cong I^{m-1}/I^{m}\oplus I^{m}/xI^{m-1}\\
&=\Omega^{0}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{0}_{\bar{R}}(I^{m}/xI^{m-1})
\end{array},\]
which proves the claim for $n=0$.
Now we assume that $ n> 0 $ and the claim is settled for integers less than $ n $. Note that a minimal free cover $ 0\longrightarrow \Omega^{n}_R(I^{m})\longrightarrow F \longrightarrow \Omega^{n-1}_R(I^{m})\longrightarrow 0$ of $ \Omega^{n-1}_R(I^{m}) $ gives a minimal cover
\begin{center}
$ 0\longrightarrow \Omega^{n}_R(I^{m})\otimes_R \bar{R}\longrightarrow F\otimes_R \bar{R} \longrightarrow \Omega^{n-1}_R(I^{m})\otimes_R \bar{R}\longrightarrow 0$
\end{center}
of $\Omega^{n-1}_R(I^{m})\otimes_R \bar{R}$ over $ \bar{R} $. Hence we get
$ \Omega^{n}_R(I^{m})\otimes_R \bar{R}\cong \Omega^{1}_{\bar{R}}(\Omega^{n-1}_R(I^{m})\otimes_R \bar{R})$.
By induction hypothesis we have
\[\begin{array}{rl} \Omega^{n}_R(I^{m})\otimes_R \bar{R}&\cong \Omega^{1}_{\bar{R}}(\Omega^{n-1}_R(I^{m})\otimes_R \bar{R})\\
&\cong \Omega^{1}_{\bar{R}}(\Omega^{n-1}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{n-1}_{\bar{R}}(I^{m}/xI^{m-1}))\\
&\cong \Omega^{n}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{n}_{\bar{R}}(I^{m}/xI^{m-1}).
\end{array}\]
\end{proof}
\begin{thm} \emph{(Compare \cite[Theorem 2.3]{y})}\label{sf}
Suppose that $(R, \mathfrak{m})$ is a Gorenstein local ring of dimension $d$ with infinite residue field $R/\mathfrak{m} $. Assume that $ I $ is an $ \mathfrak{m} $-primary ideal of $R$ such that:
\begin{itemize}
\item[(i)] For any $ i\geq0 $, $ I^{i}/I^{i+1} $ is a free $R/I $-module, and
\item [(ii)] for any $R$-regular sequence ${\bf x}=x_1,\cdots, x_s$ in $ I $ with $$ x_i+(x_1,\cdots, x_{i-1}) \in ( I/(x_1,\cdots, x_{i-1}))\setminus (I/(x_1,\cdots, x_{i-1}))^{2}, \ 1\leq i\leq s,$$ we have $\delta^{n}_{R/{\bf x}R}\left( R/I\right)=0$ for all $ n\geq0 $.
\end{itemize}
Then $ \delta_R^{n}(R/I^{m})=0 $ for all integers $ n\geq d+1-\mbox{depth}\, G $ and all $ m\geq1 $. In particular, if $ \mbox{depth}\, G =d-1 $, then $ \delta_R^{n}(R/I^{m})=0 $ for all $ n \geq 2 $ and all $ m\geq1 $.
\end{thm}
\begin{proof}
Let $ m\geq1 $ and set $ t:=\mbox{depth}\, G $. If $ d=0 $ the result is trivial. We assume that $ d>0 $ and $ n\geq d+1-t $.
If $ t=0 $ then $ n\geq d+1 $ and the result is clear by \cite[Corollary 1.2.5]{Av}. Now assume that $ d>0 $ and $ t>0 $. As $ R/\mathfrak{m} $ is infinite, \cite[Lemma 2.1]{HM} implies that the map $I^{m-1}/I^{m}\overset{x.}{\longrightarrow} I^{m}/I^{m+1} $ is injective for some $ x\in I \setminus I^{2} $ such that $ x^{\ast}:= x+I^2\in G $ is $ G $-regular. Note that $ \mathcal{G}_I(I^{m})\subseteq G $ implies that $ x\in I $ is a non-zero-divisor on $ I^{m} $. Set $ \bar{R}=R/xR $ and $ \bar{I}=I/xR $; recall that $ n\geq d-t+1 $. By Lemma \ref{sb} we have
\begin{center}
$ \Omega^{n-1}_R(I^{m})\otimes_R \bar{R}\cong \Omega^{n-1}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{n-1}_{\bar{R}}(I^{m}/xI^{m-1}) $.
\end{center}
On the other hand $ x $ is $ \Omega^{n-1}_R(I^{m}) $-regular, therefore by \cite[Corollary 2.5]{CA} we have
\begin{center}
$ \delta_R(\Omega^{n-1}_R(I^{m}))\leq\delta_{\bar{R}}(\Omega^{n-1}_R(I^{m})\otimes_R \bar{R}) $.
\end{center}
Therefore
\begin{equation}\label{4.1}
\begin{array}{rl}
\delta_R^{n}(R/I^{m})&=\delta_R(\Omega^{n-1}_R(I^{m}))\\
&\leq\delta_{\bar{R}}(\Omega^{n-1}_R(I^{m})\otimes_R \bar{R})\\
&=\delta_{\bar{R}}( \Omega^{n-1}_{\bar{R}}(I^{m-1}/I^{m})\oplus \Omega^{n-1}_{\bar{R}}(I^{m}/xI^{m-1}))\\
&=\delta_{\bar{R}}(\Omega^{n-1}_{\bar{R}}(I^{m-1}/I^{m}))+\delta_{\bar{R}}(\Omega^{n-1}_{\bar{R}}(I^{m}/xI^{m-1})) \\
&=\delta_{\bar{R}}^{n-1}(I^{m-1}/I^{m})+\delta_{\bar{R}}^{n-1}(I^{m}/xI^{m-1}).
\end{array}
\end{equation}
The injective map $I^{m-1}/I^{m}\overset{x.}{\longrightarrow} I^{m}/I^{m+1} $ implies, by induction on $m$, that $xI^{m-1}=xR\cap I^{m}$ so we get $ (\bar{I})^{m}=I^{m}/(xR\cap I^{m})=I^{m}/xI^{m-1} $, therefore
\[\begin{array}{rl}
\delta_R^{n}(R/I^{m}) &\leq\delta_{\bar{R}}^{n-1}(I^{m}/xI^{m-1})+\delta_{\bar{R}}^{n-1}(I^{m-1}/I^{m})\\&
=\delta_{\bar{R}}^{n-1}(({\bar{I}})^{m})+\delta_{\bar{R}}^{n-1}(I^{m-1}/I^{m}).
\end{array}\]
Note that, by assumption, $ I^{m-1}/I^{m}\cong \oplus R/I $, and $ \delta_{\bar{R}}^{n-1}(I^{m-1}/I^{m})=\delta_{\bar{R}}^{n-1}(R/I)=0 $.
If $ d=1 $ then $ \mbox{dim}\,(\bar R)=0$, so $\delta_{\bar{R}}^{n-1}(({\bar{I}})^{m})= \delta_{\bar{R}}(\Omega^{n-1}_{\bar{R}}(({\bar{I}})^{m}))=\delta^{n}_{\bar{R}}(\bar{R}/(\bar{I})^{m})=0 $ and hence the result is clear.
Suppose that $ d\geq2 $. As $ n\geq d-t+1=(d-1)-(t-1)+1 $, when $t=1$ we have $ n\geq (d-1)+1 $, hence $ \delta^{n}_{\bar{R}}(\bar{R}/(\bar{I})^{m})=0 $ and therefore $\delta_R^{n}(R/I^{m})= 0$.
If $ t\geq2 $ then $ \mbox{depth}\, ( gr_{\bar{R}}(\bar{I}))=\mbox{depth}\, ( gr_R(I)/x^{\ast}gr_R(I))=t-1>0 $. As $ \bar R/\bar \mathfrak{m} \cong R/\mathfrak{m} $ is infinite and $ \mbox{dim}\,(\bar R)>0 $ and $ \mbox{depth}\, ( gr_{\bar{R}}(\bar{I}))>0 $, by \cite[Lemma 2.1]{HM}, there exists $ \bar y =y+xR\in \bar I \setminus \bar I^{2} $ such that $ \bar y^{\ast} $ is $ \bar G $-regular. Therefore the map $ \bar I^{m-1}/\bar I^{m}\overset{\bar y}{\longrightarrow} \bar I^{m}/\bar I^{m+1} $ is injective. On the other hand we have $ (\bar I)^{m}/(\bar I)^{m+1}\cong I^{m}/(xI^{m-1}+I^{m+1}) $ and, by \cite[Lemma 2.3]{K}, $I^{m}/(xI^{m-1}+I^{m+1}) $ is a direct summand of $ I^{m}/I^{m+1} $. Therefore $ (\bar I)^{m}/(\bar I)^{m+1} $ is a free $ \bar R/\bar I $-module for any $ m\geq1 $. Also $ gr_{\bar I}(\bar I^{m})\subseteq \bar G $ implies that $ \bar y\in \bar I $ is a non-zero-divisor on $ \bar I^{m} $. Set $ \bar {\bar R}=\bar R/\bar y \bar R $ and $ \bar {\bar I}=\bar I/\bar y \bar I $. Then by the same argument as above we have $\delta_{\bar R}^{n}(\bar R/\bar I^{m}) \leq\delta^{n}_{\bar{\bar{R}}}(\bar{\bar{R}}/(\bar{\bar{I}})^{m})+\delta_{\bar{\bar{R}}}^{n-1}(\oplus R/I) $.
By our assumption $ \delta_{\bar{\bar{R}}}^{n-1}(\oplus R/I)= \delta_{R/(x,y)}^{n-1}(\oplus R/I)=0 $. When $ d=2 $, $ \mbox{dim}\,(\bar{\bar R})=0$ and so $ \delta^{n}_{\bar{\bar{R}}}(\bar{\bar{R}}/(\bar{\bar{I}})^{m})=0 $ and the result is clear. Suppose that $ d\geq3 $. As $ n\geq d-t+1=(d-2)-(t-2)+1 $, if $ t=2 $ then $ \delta^{n}_{\bar{\bar{R}}}(\bar{\bar{R}}/(\bar{\bar{I}})^{m})=0 $, and therefore (\ref{4.1}) implies that
\[\begin{array}{rl}
\delta_R^{n}(R/I^{m})
&\leq \delta^{n}_{\bar{R}}(\bar{R}/(\bar{I})^{m})+\delta_{\bar{R}}^{n-1}(\oplus R/I) \\
&\leq\delta^{n}_{\bar{\bar{R}}}(\bar{\bar{R}}/(\bar{\bar{I}})^{m})+\delta_{R/(x,y)}^{n-1}(\oplus R/I)+\delta_{R/xR}^{n-1}(\oplus R/I)\\&
=0.
\end{array}\]
For the case $ t\geq3 $, we proceed by the same argument as above to find $ \delta_R^{n}(R/I^{m})=0$.
\end{proof}
\begin{cor}\label{cor}
Suppose that $(R, \mathfrak{m})$ is a Gorenstein local ring of dimension $d$ such that $R/\mathfrak{m} $ is infinite. Consider the following statements.
\begin{itemize}
\item [(a)] $R$ is not regular.
\item [(b)] There exists an $ \mathfrak{m} $-primary ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $ I^{i}/I^{i+1} $ is free $R/I $-module for any $ i\geq0 $, and
\item[(ii)] for any $R$-regular sequence ${\bf x}=x_1,\cdots, x_s$ in $ I $ such that $$ x_i+(x_1,\cdots, x_{i-1}) \in ( I/(x_1,\cdots, x_{i-1}))\setminus (I/(x_1,\cdots, x_{i-1}))^{2},\ 1\leq i\leq s,$$ one has $\delta^{n}_{R/{\bf x}R}\left( R/I\right)=0$ for all $ n\geq0 $.
\end{itemize}
\item [(c)] There exists a non-zero ideal $I$ of $R$ such that
\begin{itemize}
\item[(i)] $ \mbox{depth}\,( G )\geq \mbox{depth}\,_R(R/I) $, and
\item[(ii)] $\delta_R^{n}(R/I^{m})=0 $ for all integers $ n\geq d-\mbox{depth}\,( G )+1 $ and $ m\geq1 $.
\end{itemize}
\end{itemize}
Then the implications (a)$\Rightarrow$(b) and (b)$\Rightarrow$(c) hold true. If $ \mbox{depth}\,(G)>\mbox{depth}\,_R(R/I) $, the statements (a), (b), and (c) are equivalent.
\end{cor}
\begin{proof}
(a)$\Rightarrow$(b). We show that $ I=\mathfrak{m} $ works. Assume that ${\bf x}=x_1,\cdots, x_s$ is an $R$-regular sequence in $ \mathfrak{m} $ such that $ x_1\in \mathfrak{m} \setminus \mathfrak{m}^{2}$ and $ x_i+(x_1,\cdots, x_{i-1}) \in ( \mathfrak{m}/(x_1,\cdots, x_{i-1}))\setminus (\mathfrak{m}/(x_1,\cdots, x_{i-1}))^{2}$. Set $ \bar{R}=R/(x_1,\cdots, x_{s-1})R $, $ \bar{\mathfrak{m}}=\mathfrak{m}/(x_1,\cdots, x_{s-1})R $ and $\bar x_s= x_s+(x_1,\cdots, x_{s-1}) $. As $ \bar R/\bar{x_s}\bar R $ is not regular, by Remark \ref{12} we have, for all $n\geq 0$,
\[\begin{array}{rl}
\delta^{n}_{ R/ ( x_1,\cdots, x_s) R}\left( R/ \mathfrak{m} \right)
& =\delta^{n}_{(\bar R/ \bar{x_s}\bar R) }\left( R/ \mathfrak{m} \right)
\\&=\delta^{n}_{(\bar R/ \bar{x_s}\bar R) }\left( \bar R/ \bar {\mathfrak{m}} \right)\\&
=\delta^{n}_{(\bar R/ \bar{x_s}\bar R) }\left( (\bar R/\bar{x_s}\bar R)/ (\bar \mathfrak{m}/\bar{x_s}\bar R) \right)\\&
=0.
\end{array}\]
(b)$\Rightarrow$(c). Apply Theorem \ref{sf}.
(c)$\Rightarrow$(a). Assume, to the contrary, that $ R $ is a regular ring. Our assumption, Remark \ref{n} and the Auslander-Buchsbaum formula imply that $ d-\mbox{depth}\,_R(R/I)=\mbox{proj.dim}\,_R(R/I)\leq d-\mbox{depth}\,( G ) $, which contradicts the assumption that $ \mbox{depth}\,(G)>\mbox{depth}\,_R(R/I) $.
\end{proof}
\section{Characterization by linkage theory}
The notion of linkage of ideals in commutative algebra was introduced by Peskine and Szpiro \cite{PS}. Two ideals $I$ and $J$ in a Cohen-Macaulay local ring $R$ are said to be linked if there is a
regular sequence $\underline{a}$ in their intersection such that $I = (\underline{a}) :_R J$ and $J = (\underline{a}) :_R I$. They have shown that the Cohen-Macaulay-ness property
is preserved under linkage over Gorenstein local rings and provided a counterexample to show that
the above result is no longer true if the base ring is Cohen-Macaulay but not Gorenstein. In the following, we investigate the situation over a Cohen-Macaulay local ring with canonical module and generalize the result of Peskine and Szpiro \cite{PS}.
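For instance (a standard example, not taken from \cite{PS}), in the ring $ R=k[[x,y]] $ the height one ideals $ I=(x) $ and $ J=(y) $ are linked via the regular sequence consisting of the single element $ xy\in I\cap J $, since
\[
(xy):_R(y)=(x)=I \qquad \text{and} \qquad (xy):_R(x)=(y)=J.
\]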
\begin{thm} \label{H}
Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring of dimension $ d $ with canonical module $\omega_R$. Suppose that $ I $ and $ J $ are two ideals of $ R $ such that $ 0:_{\omega_R}I=J \omega_R$, $ 0:_{\omega_R}J=I \omega_R$, $ \mbox{G-dim}\,_{R/I}(\omega_{R}/I\omega_R)<\infty $ and $\mbox{G-dim}\,_{R/J}(\omega_{R}/J\omega_R)<\infty $ (e.g. $R$ is Gorenstein). Then $R/I $ is a Cohen-Macaulay $ R $-module if and only if $R/J $ is a Cohen-Macaulay $ R $-module.
\end{thm}
\begin{proof}
Assume that $R/I $ is Cohen-Macaulay. Set $ t:=\mbox{grade}\,(I, R)$ so that $t=\mbox{ht}\,_R(I)=\mbox{dim}\, R- \mbox{dim}\, R/I $. If $ t>0 $ then there exists an $R$-regular element $x$ in $ I $. As $ \omega_R $ is maximal Cohen-Macaulay, $ x $ is also $ \omega_R $-regular, which implies that $J\omega_R=(0:_{\omega_R}I)=0$. Hence $J=0$, which is absurd. Therefore $ t=0 $, which implies that $R/I $ is a maximal Cohen-Macaulay $R$--module, so that $\mbox{Ext}\,^{i}_R(R/I, \omega_R)=0 $ for all $ i\geq 1 $. Apply the functor $ \mbox{Hom}\,_R(-, \omega_R) $ on a minimal free resolution $$ \cdots \longrightarrow \overset{t_2}{\oplus}R\longrightarrow \overset{t_1}{\oplus}R\longrightarrow R\longrightarrow R/I\longrightarrow0 $$ of $R/I $, to obtain the induced exact sequence
\begin{center}
$0\longrightarrow\omega_{R}/J\omega_R\longrightarrow\overset{t_1}{\oplus}\omega_R\longrightarrow \overset{t_2}{\oplus}\omega_R\longrightarrow\cdots.$
\end{center}
Splitting into the short exact sequences
\begin{center}
$\begin{array}{ccccccccc}
0 & \longrightarrow & \omega_{R}/J\omega_R & \longrightarrow& \overset{t_1}{\oplus }\omega_R & \longrightarrow & C_1 &\longrightarrow&0 \\
0& \longrightarrow &C_1 &\longrightarrow & \overset{t_2}{\oplus }\omega_R & \longrightarrow & C_2 &\longrightarrow &0 \\
0 & \longrightarrow & C_2 &\longrightarrow &\overset{t_3}{\oplus }\omega_R &\longrightarrow &C_3 &\longrightarrow &0 \\
& & & &\vdots & & & &
\end{array} $
\end{center}
where $ C_{i}=\mbox{Im}\,{f_{i+1}}$ for $ i\geq1 $, we obtain $\mbox{depth}\,_R(\omega_R/J\omega_R)=d$. Note that $\mbox{G-dim}\,_{R/J}(\omega_{R}/J\omega_R)<\infty $ implies, by the Auslander-Bridger formula, that $d= \mbox{depth}\,_{R/J}(\omega_{R}/J\omega_R)\leq\mbox{depth}\,_{R/J}(R/J)$. Thus
$R/J $ is also a maximal Cohen-Macaulay $R$-module.
\end{proof}
To see some applications of Theorem \ref{H}, we refer to the $ n$th $ \delta $-invariant of an $R$--module $M$ as in the paragraph just after \cite[Proposition 5.3]{AB}.
\begin{cor}\label{Z}
Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring of dimension $ d $ with canonical module $\omega_R$. Let $ I $ and $ J $ be ideals of $R$.
\begin{itemize}
\item [(a)] If $ 0:_{\omega_R}I=J \omega_R$ and $R/I $ is a maximal Cohen-Macaulay $R$-module, then $\delta_{R}^{i}(J\omega_R)=0$ for all $ i\geq1 $.
\item[(b)] If $ 0:_{\omega_R}I=J \omega_R$, $ 0:_{\omega_R}J=I \omega_R$, $R/I $ is a maximal Cohen-Macaulay $R$-module, and $\mbox{G-dim}\,_{R/J}(\omega_{R}/J\omega_R)<\infty $, then $\delta_{R}^{i}(I\omega_R)=0$ for all $ i\geq1 $.
\end{itemize}
\end{cor}
\begin{proof}
(a). A similar argument to the one in the proof of Theorem \ref{H} implies that $ \mbox{depth}\,_R(\omega_{R}/J\omega_R)=d $ and $ \omega_{R}/J\omega_R$ is maximal Cohen-Macaulay. By the paragraph just after \cite[Proposition 5.3]{AB}, we get $\delta_{R}^{i}(J\omega_R)=0$ for all $ i\geq1 $.
(b). By Theorem \ref{H}, $R/J $ is maximal Cohen-Macaulay $R$-module so, by part (a), $\delta_{R}^{i}(I\omega_R)=0$ for all $ i\geq1 $.
\end{proof}
\begin{dfn}
\emph{Let $(R, \mathfrak{m})$ be a local ring. An ideal $I$ of $R$ is said to be a linked ideal if $I=0:_R(0:_RI)$. We call an ideal $I$ {\it generically linked} if for any $\mathfrak{p}\in\mbox{Spec}\, R$ with $\mbox{ht}\,\mathfrak{p}=0$, the ideal $IR_\mathfrak{p}$ is linked in $R_\mathfrak{p}$.}
\end{dfn}
Note that a local ring $R$ is generically Gorenstein if and only if any ideal of height $0$ is generically linked.
In the following, we recall a characterization for a Cohen-Macaulay ring to be Gorenstein in terms of linkage of particular ideals. The proof is straightforward and we include it here for the convenience of the reader.
\begin{prop}\label{gor}
Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring. The following statements are equivalent.
\begin{itemize}
\item [(i)] $R$ is Gorenstein.
\item[(ii)] For any non-zero proper ideal $I$ of $R$ and each $R$-regular sequence ${\bf x}=x_1,\cdots, x_t$ in $I$ with $t=\emph{\mbox{ht}\,} I$, the ideal $I/({\bf x})$ is generically linked in $R/({\bf x})$.
\end{itemize}
\end{prop}
\begin{proof}
(i)$\Rightarrow$(ii). Let $I$ be an ideal of $R$ with $\mbox{ht}\, I=t$. Let ${\bf x}=x_1,\cdots, x_t\in I$ be an $R$-sequence. Then $\mbox{ht}\, (I/({\bf x}))=0$ in $R/({\bf x})$. As $R/({\bf x})$ is Gorenstein, $I/({\bf x})$ is generically linked in $R/({\bf x})$.
(ii)$\Rightarrow$(i). We proceed by induction on $d:=\mbox{dim}\, R$. In the case $d=0$, we have $I=0:_R(0:_RI)$ for all ideals $I$ by our assumption. Therefore $R$ is Gorenstein. Let $d>0$ and assume that the result has been settled for rings of dimension smaller than $d$. Choose an $R$--regular element $x$ and set $\overline{R}=R/Rx$. We note that $\overline{R}$ is a Cohen-Macaulay local ring of dimension $d-1$. Let $\overline{I}:=I/Rx$ be an ideal of $\overline{R}$ of height $t$, so that there is an $\overline{R}$--regular sequence $\overline{y}_1, \cdots, \overline{y}_t$ with $y_1, \ldots, y_t\in I$. Note that ${\bf x}:= x, y_1, \cdots, y_{t}$ is an $R$--sequence contained in $I$ and $\mbox{ht}\, I=t+1$. By our assumption, $I/(x, y_1, \cdots, y_t)$ is generically linked in $R/(x, y_1, \cdots, y_{t})$. Equivalently, the ideal $\frac{I/Rx}{(\overline{y_1}, \cdots, \overline{y_t})}$ is generically linked in $\frac{R/Rx}{(\overline{y_1}, \cdots, \overline{y_t})}$. By the induction hypothesis, $R/Rx$ is Gorenstein and so is $R$.
\end{proof}
We end the paper with the following remark, which shows that a generically Gorenstein local ring possessing a canonical module and of dimension greater than $1$ can be reduced to one of lower dimension by factoring out a suitable non-zero divisor.
\begin{rmk}\label{rg}
Let $(R, \mathfrak{m})$ be a Cohen-Macaulay local ring with canonical module $\omega_{R}$ and $\emph{\mbox{dim}\,} R>1$. Then the following are equivalent.
\begin{itemize}
\item [(i)] $R$ is generically Gorenstein.
\item [(ii)] There exists an $R$--regular element $x$ such that $R/Rx$ is generically Gorenstein.
\end{itemize}
\end{rmk}
\begin{proof}
(i)$\Rightarrow$(ii). We may assume that $R$ is not Gorenstein. As $R$ is generically Gorenstein, by \cite[Proposition 3.3.18]{BH}, $\omega_{R}$ is an ideal of $R$. As $\mathfrak{m}\not\subseteq(\underset{\mathfrak{p}\in\mbox{Ass}\, R}{\cup }\mathfrak{p})\cup(\underset{\mathfrak{q}\in\mbox{Ass}\,_R( R/\omega_R)}{\cup}\mathfrak{q})$, there is $x\in \mathfrak{m}$ which is regular on both $R$ and $R/\omega_R$. We define an $R$-homomorphism $\alpha :\omega_R/x\omega_R\longrightarrow R/xR$ by $ \alpha(u+x\omega_R)=u+xR $. As $x$ is regular on $R/\omega_R$, $\alpha$ is injective, so $\omega_{R}/x\omega_{R}$ can be embedded into $R/Rx$. As $\omega_{R}/x\omega_{R}\cong\omega_{R/Rx}$ is the canonical module of $R/Rx$, the result follows.
(ii)$\Rightarrow$(i). Assume that $R/Rx$ is generically Gorenstein for some $R$-regular element $x$. Let $\mathfrak{p}$ be a minimal prime ideal of $R$. Thus one has $\mbox{dim}\, R/(Rx+\mathfrak{p})=\mbox{dim}\, R-1$ and $\mbox{ht}\,(Rx+\mathfrak{p})=1$. Hence $Rx+\mathfrak{p}\subseteq\mathfrak{q}$ for some prime ideal $\mathfrak{q}$ with $\mbox{ht}\,\mathfrak{q}=1$. By our assumption, $(R/Rx)_{\mathfrak{q}/Rx}$, and so $R_\mathfrak{q}/xR_\mathfrak{q}$, is Gorenstein. Hence $R_\mathfrak{q}$, and therefore $R_\mathfrak{p}$, is Gorenstein.
\end{proof}
\begin{eg}
\emph{Assume that $R$ is a non-Gorenstein Cohen-Macaulay local ring of dimension $1$ with canonical module $\omega_R$ (e.g. $R= k \llbracket t^3, t^5, t^7 \rrbracket $, where $k$ is a field). Then, for any $R$-regular element $x$, $R/Rx$ is not generically Gorenstein.}
\end{eg}
\end{document} |
\begin{document}
\title[Extreme contractions on finite-dimensional polygonal spaces]{Extreme contractions on finite-dimensional polygonal Banach spaces}
\author[Debmalya Sain, Anubhab Ray and Kallol Paul ]{ Debmalya Sain, Anubhab Ray and Kallol Paul}
\newcommand{\newline\indent}{\newline\indent}
\address[Sain]{Department of Mathematics\\ Indian Institute of Science\\ Bengaluru 560012\\ Karnataka \\India\\ }
\email{saindebmalya@gmail.com}
\address[Ray]{Department of Mathematics\\ Jadavpur University\\ Kolkata 700032\\ West Bengal\\ INDIA}
\email{anubhab.jumath@gmail.com}
\address[Paul]{Department of Mathematics\\ Jadavpur University\\ Kolkata 700032\\ West Bengal\\ INDIA}
\email{kalloldada@gmail.com}
\thanks{The research of Dr. Debmalya Sain is sponsored by Dr. D. S. Kothari Postdoctoral Fellowship, under the mentorship of Professor Gadadhar Misra. Dr. Debmalya Sain feels elated to acknowledge the extraordinary presence of Mr. Anirban Dey, his beloved brother-in-arms, in his everyday life! The second author would like to thank DST, Govt. of India, for the financial support in the form of doctoral fellowship. The research of third author is supported by project MATRICS of DST, Govt. of India.
}
\subjclass[2010]{Primary 46B20, Secondary 47L05}
\keywords{extreme contractions; polygonal Banach spaces; strict convexity; Hilbert spaces.}
\begin{abstract}
We explore extreme contractions on finite-dimensional polygonal Banach spaces, from the point of view of attainment of norm of a linear operator. We prove that if $ X $ is an $ n- $dimensional polygonal Banach space and $ Y $ is any normed linear space and $ T \in L(X,Y) $ is an extreme contraction, then $ T $ attains norm at $ n $ linearly independent extreme points of $ B_{X}. $ Moreover, if $ T $ attains norm at $ n $ linearly independent extreme points $ x_1, x_2, \ldots, x_n $ of $ B_X $ and does not attain norm at any other extreme point of $ B_X, $ then each $ Tx_i $ is an extreme point of $ B_Y.$ We completely characterize extreme contractions between a finite-dimensional polygonal Banach space and a strictly convex normed linear space. We introduce L-P property for a pair of Banach spaces and show that it has natural connections with our present study. We also prove that for any strictly convex Banach space $ X $ and any finite-dimensional polygonal Banach space $ Y, $ the pair $ (X,Y) $ does not have L-P property. Finally, we obtain a characterization of Hilbert spaces among strictly convex Banach spaces in terms of L-P property.
\end{abstract}
\maketitle
\section{Introduction.}
The purpose of the present paper is to study the structure and properties of extreme contractions on finite-dimensional real polygonal Banach spaces. Characterization of extreme contractions between Banach spaces is a rich and intriguing area of research. It is worth mentioning that extreme contractions on a Hilbert space are well-understood \cite{Ga, K, N, S}. However, we are far from completely describing extreme contractions between general Banach spaces, although several mathematicians have studied the problem for particular Banach spaces \cite{G, I, Ki, Sh, Sha}. Very recently, a complete characterization of extreme contractions between two-dimensional strictly convex and smooth real Banach spaces has been obtained in \cite{SPM}. To the best of our knowledge, the characterization problem remains unsolved for higher dimensional Banach spaces. Lindenstrauss and Perles carried out a detailed investigation of extreme contractions on a finite-dimensional Banach space in \cite{LP}. It follows from their seminal work that extreme contractions on a finite-dimensional Banach space may have additional properties if the unit sphere of the space is a polytope. Motivated by this observation, we further explore extreme contractions between finite-dimensional polygonal Banach spaces. Without further ado, let us establish the relevant notations and terminologies to be used throughout the paper. \\
In this paper, letters $X,~Y$ denote real Banach spaces. Let $ B_{X}=\{ x\in X ~:~\|x\| \leq 1 \} $ and
$ S_{ X }=\{ x\in X ~:~\|x\|=1 \} $ denote the unit ball and the unit sphere of $X$ respectively and let $ L(X,Y) $ be the Banach space of all bounded linear operators from $ X $ to $ Y, $ endowed with the usual operator norm. Let $E_X$ be the collection of all extreme points of the unit ball $B_X.$ We say that a finite-dimensional Banach space $ X $ is a polygonal Banach space if $ S_{X} $ is a polytope, or equivalently, if $ B_{X} $ contains only finitely many extreme points. An operator $ T \in L(X,Y) $ is said to be an extreme contraction if $ T $ is an extreme point of the unit ball $ B_{L(X,Y)} $. We would like to note that extreme contractions are norm one elements of the Banach space $ L(X,Y) $ but the converse is not necessarily true. For a bounded linear operator $ T \in L(X,Y), $ let $ M_T $ be the collection of all unit vectors in $ X $ at which $ T $ attains norm, i.e.,
\[ M_T = \{ x \in S_{X} : \| Tx \| = \| T \| \}. \]
As we will see in due course of time, the notion of the norm attainment set $ M_T, $ corresponding to a linear operator $ T $ between the Banach spaces $ X $ and $ Y, $ plays a very important role in determining whether $ T $ is an extreme contraction or not. As a matter of fact, we prove that if $ X $ is an $ n- $dimensional polygonal Banach space and $ Y $ is any normed linear space, then $ T \in L(X,Y) $ is an extreme contraction implies that $ span(M_T \cap E_X) = X. $ Moreover, if $ M_T \cap E_X $ contains exactly $2n $ elements then $ T( M_T \cap E_X ) \subset E_Y.$ Indeed, this novel connection between extreme contractions and the corresponding norm attainment set is a major highlight of the present paper. As an application of this result, we completely characterize extreme contractions from a finite-dimensional polygonal Banach space to any strictly convex normed linear space. We would like to note that extreme contractions in $ L(l_{1},Y), $ where $ Y $ is any Banach space, have been completely characterized by Sharir in \cite{Sha}. In this paper, we show that extreme contractions in $ L(l_{1}^{n},Y) $ can be characterized using our approach. We also illustrate that the nature of extreme contractions can change dramatically if we consider the domain space to be something other than finite-dimensional polygonal Banach spaces.\\
In \cite{LP}, Lindenstrauss and Perles studied the set of extreme contractions in $ L(X,X), $ for a finite-dimensional Banach space $ X $. One of the main results presented in \cite{LP} is the following:
\begin{theorem}
If $ X $ is a finite-dimensional Banach space, then the following statements are equivalent:\\
$ (1) $ $ T \in E_{L(X,X)},~~ x \in E_X \Rightarrow Tx \in E_X. $\\
$ (2) $ $ T_1,T_2 \in E_{L(X,X)} \Rightarrow T_1 \circ T_2 \in E_{L(X,X)}. $\\
$ (3) $ $ \{T_i\}_{i=1}^{m} \subseteq E_{L(X,X)} \Rightarrow \|T_1 \circ \ldots \circ T_m \|=1, $ for all $ m. $
\end{theorem}
Furthermore, they also proved in \cite{LP} that if $ X $ is a Banach space of dimension strictly less than $ 5 $ then $ X $ has the Properties $ (1), $ $ (2) $ and $ (3), $ as mentioned in the above theorem, if and only if either of the following is true:\\
\noindent $ (i) $ $ X $ is an inner product space.\\
\noindent $ (ii) $ $ B_{X} $ is a polytope with the property that for every facet (i.e., maximal proper face) $ K $ of $ B_{X}, $ $ B_{X} $ is the convex hull of $ K \bigcup (-K). $\\
In view of the results obtained in \cite{LP}, it seems natural to introduce the following definition in the study of extreme contractions between Banach spaces:
\begin{definition}
Let $ X,~Y $ be Banach spaces. We say that the pair $ (X,Y) $ has L-P (abbreviated form of Lindenstrauss-Perles) property if a norm one bounded linear operator $ T \in L(X,Y) $ is an extreme contraction if and only if $ T(E_X) \subseteq E_Y. $
\end{definition}
We would like to remark that although the study conducted in \cite{LP} was only for the case $ X=Y, $ we have consciously formulated the above mentioned definition in a broader context, by not imposing the restriction that the domain space and the range space must be identical. In order to illustrate that our definition is meaningful, we provide examples of Banach spaces $ X,~Y $ such that $ X \neq Y $ and the pair $ (X,Y) $ has L-P property. As a matter of fact, we observe that the pair $ (l_{\infty}^{3},l_{1}^{3}) $ has L-P property. It is apparent that our study in this paper is motivated by the investigations carried out in \cite{LP}. We further give examples of finite-dimensional polygonal Banach spaces $ X,~Y $ such that the pair $ (X,Y) $ does not have L-P property. We prove that if $ X $ is a strictly convex Banach space and $ Y $ is a finite-dimensional polygonal Banach space, then the pair $ (X,Y) $ does not have L-P property. We end the present paper with a characterization of Hilbert spaces among strictly convex Banach spaces in terms of L-P property.
\section{ Main Results.}
Let us first make note of the following easy proposition:
\begin{prop}\label{proposition:bounded}
Let $ X $ be an $ n- $dimensional Banach space and let $ A \subseteq X $ be a bounded set. Suppose
$ \{ x_1, x_2, \ldots , x_n\}$ is a basis of $ X. $ Then the following set $ S= \{ |\alpha_i| : z=\sum\limits_{i=1}^{n} \alpha_i x_i~~ \mbox{and}~~ z \in A \} $ is bounded.
\end{prop}
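Although the proof of Proposition \ref{proposition:bounded} is omitted as elementary, we indicate one possible argument for the reader's convenience: since $ X $ is finite-dimensional, the coordinate functionals $ x_1^*, x_2^*, \ldots, x_n^* $ associated with the basis $ \{ x_1, x_2, \ldots, x_n \} $ are bounded, and hence $ |\alpha_i| = |x_i^*(z)| \leq \|x_i^*\| \sup_{w \in A} \|w\| < \infty $ for every $ z \in A. $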
Using the above proposition, we obtain a necessary condition for a linear operator on a polygonal Banach space to be an extreme contraction, in terms of the norm attainment set.
\begin{theorem}\label{theorem:dimension-n}
Let $ X $ be an $ n- $dimensional polygonal Banach space and let $ Y $ be any normed linear space. Let $ T \in L(X,Y) $ be an extreme contraction. Then $ span(M_T \cap E_X) = X. $ Moreover, if $ M_T \cap E_X $ contains exactly $2n $ elements then $ T( M_T \cap E_X ) \subset E_Y.$
\end{theorem}
\begin{proof}
First, we prove that $ span(M_T \cap E_X) = X. $ Since $ X $ is an $ n- $dimensional Banach space, an easy application of the Krein-Milman theorem ensures that $ B_{X} $ has at least $ 2n $ extreme points, out of which $ n $ are linearly independent. If $ M_T $ contains $ n $ linearly independent extreme points then we are done. Suppose that $ M_T $ does not contain $ n $ linearly independent extreme points, and let $ k < n $ be the maximum number of linearly independent extreme points contained in $ M_T. $ Let $ \{x_1, x_2, \ldots, x_k\} $ be such a linearly independent subset of $ M_T, $ consisting of extreme points of $ B_X. $ We extend it to a basis $ \{ x_1, x_2, \ldots,x_k,x_{k+1}, \ldots , x_n\} $ of $ X $ such that each $ x_i, ~~ i=1, 2, \ldots, n $ is an extreme point of $ B_X. $
Now, we can choose $ w(\neq 0) \in B_Y $ such that $ T{x_n} \pm \frac{w}{j} \neq 0 $ for every $ j \in \mathbb{N}. $
For each $ j \in \mathbb{N}, $ define linear operators $ T_j, ~~ S_j : X \rightarrow Y $ as follows:
\begin{equation*}
\begin{aligned}[c]
T_j(x_i) &=T{x_i} ~ (i =1,2,\ldots, n-1)\\
T_j(x_n) &=Tx_n + \frac{w}{j}
\end{aligned}
\qquad \qquad
\begin{aligned}[c]
S_j(x_i) & = T{x_i} ~(i = 1,2,\ldots, n-1)\\
S_j(x_n) & = Tx_n - \frac{w}{j}.
\end{aligned}
\end{equation*}
Then $ T \neq T_j $ and $ T \neq S_j $ for all $ j \in \mathbb{N}. $ Also, we have $ T= \frac{1}{2}(T_j+S_j) $ for all
$ j \in \mathbb{N}. $\\
Since $ T $ is an extreme contraction, for each $ j \in \mathbb{N}, $ either $ \|T_{j}\| > 1 $ or $ \|S_{j}\| > 1. $ Hence there exists a subsequence $\{j_k\}$ such that $ \|T_{j_k}\| > 1 $ for all $ k, $ or $ \|S_{j_k}\| > 1 $ for all $ k. $ Without loss of generality, we may and do assume that $ \|T_j\| > 1 $ for all $ j \in \mathbb{N}.$\\
Since $ X $ is finite-dimensional, each $ T_j $ attains norm at an extreme point of $ B_X. $ Suppose $ \|T_j{y_j}\|= \|T_j\| > 1 $ where each $ y_j $ is an extreme point of $ B_X. $\\
Let $ z = a_1x_1+\ldots+a_{n-1}x_{n-1}+a_nx_n \in S_X $ be arbitrary, then by Proposition \ref{proposition:bounded}, there exists $ r>0 $ such that $ |a_n| \leq r, $ for each $ z \in S_X.$\\
Now, $ \| (T_j-T)z\|=\|(T_j-T)(a_1x_1+\ldots+a_nx_n)\|=\|\frac{a_nw}{j}\| \leq \frac{r}{j} \rightarrow 0 $ as $ j \rightarrow \infty.$\\
So, $ T_j \rightarrow T $ as $ j \rightarrow \infty. $ As each $ y_j \in S_X $ and $ S_X $ is compact, we may assume without loss of generality that
$ y_j \rightarrow y_0 $ as $ j \rightarrow \infty, $ where $ y_0 \in S_X.$\\
Noting that there are only finitely many extreme points and each $ y_j $ is an extreme point, we can further assume that $ y_j=y_0 $ for each $ j. $ So, $ T_j{y_j}=T_j{y_0}\rightarrow Ty_0 $ as $ j \rightarrow \infty. $\\
Therefore, $ \|Ty_0\| \geq 1. $ As $ \|T\|=1, $ $ \|Ty_0\|=1. $ So, $ y_0 \in M_T .$ Since $ y_0 $ is an extreme point and $ M_T $ has at most $ k $ linearly independent extreme points, the set $ \{ y_0, x_1, x_2, \ldots, x_k \} $ is linearly dependent. Let $ y_0=a_1x_1+a_2x_2+\ldots + a_k x_k. $ Then,
$ T_j{y_j}=T_j{y_0}=T_j(a_1x_1+\ldots+a_kx_k)=a_1Tx_1+\ldots+a_kTx_k=Ty_0. $
Therefore, $ \|T_j{y_j}\|=\|Ty_0\|=1, $ which contradicts the fact that $\|T_j\| > 1 $ for all $j \in \mathbb{N}. $ Thus, if $ T $ is an extreme contraction, then $ span(M_T \cap E_X) = X. $\\
Now, we prove the next part of the theorem. As $ X $ is a finite-dimensional polygonal Banach space, $ E_X $ is a finite set. Let $ \{ \pm{x_1}, \pm{x_2}, \ldots, \pm{x_m} : ~~ m\geq n \} =E_X $ and $ (M_T \cap E_X) = \{ \pm{x_i} : ~~i=1,2,\ldots,n \}. $\\
As $ T $ attains norm only at $ \{ \pm{x_i} : ~~i=1,2,\ldots,n \}, $ $ \|T\|=\|Tx_i\|=1 $ for all $ i=1,2,\ldots,n $ and $ \|Tx_j\| < \|T\|=1 $ for all $ j=n+1,n+2,\ldots,m. $ Therefore, there exists $ \epsilon > 0 $ such that $ \|Tx_j\| < 1-\epsilon $ for all $ j=n+1, n+2, \ldots ,m. $\\
Clearly, $ \{x_1, x_2, \ldots,x_n\} $ is a basis of $ X. $ Therefore, there exist scalars $ a_{1j}, a_{2j}, \ldots , a_{nj} $ such that
\[ x_j=a_{1j}x_1+a_{2j}x_2+\ldots+a_{nj}x_n, ~\forall ~ j=n+1,n+2,\ldots,m. \]
Now, using Proposition \ref{proposition:bounded}, there exists $ r>0 $ such that $ |a_{ij}| \leq r $ for all $ i=1,2,\ldots,n $ and $ j=n+1,n+2,\ldots,m. $\\
Suppose $ Tx_k $ is not an extreme point of $ B_Y $ for some $ k \in \{ 1,2,\ldots,n\}. $ Then there exist $ y,z \in S_Y \cap B[Tx_k, \frac{\epsilon}{2r}]$ such that $ Tx_k= (1-t)y+tz $ for some $ t \in (0,1). $\\
Let us define $ T_1:X \rightarrow Y $ by $ T_1{x_i}=Tx_i $ for all $ i=1,2, \ldots,n $ with $ i\neq k, $ and $ T_1{x_k}=y. $\\
Similarly, we define $ T_2:X \rightarrow Y $ by $ T_2{x_i}=Tx_i $ for all $ i=1,2, \ldots,n $ with $ i\neq k, $ and $ T_2{x_k}=z. $\\
Clearly, $ T_1 \neq T $ and $ T_2 \neq T. $ Also, we have $ T=(1-t)T_1+tT_2. $ We claim that $ \|T_1\|=\|T_2\|=1. $\\
Clearly, $ \|T_1{x_i}\|=\|Tx_i\|=1 $ for all $ i=1,2,\ldots,n ~\mbox{and}~ i\neq k. $ Also, $ \|T_1{x_k}\|=\|y\|=1 $ as $ y \in S_Y \cap B[Tx_k, \frac{\epsilon}{2r}].$\\
Furthermore, for each $ j=n+1,\ldots,m, $ we have,
\begin{eqnarray*}
\|T_1{x_j}\| & = & \|a_{1j}T_1{x_1}+\ldots+a_{kj}T_1{x_{k}}+\ldots +a_{nj}T_1{x_n}\| \\
& = & \|a_{1j}T{x_1}+\ldots+a_{kj}y+\ldots +a_{nj}T{x_n}\| \\
& = & \|a_{1j}T{x_1}+\ldots +a_{nj}T{x_n}+a_{kj}(y-T{x_k})\| \\
& \leq & \|T{x_j}\| + |a_{kj}|\|y-T{x_k}\| \\
& \leq & 1- \epsilon + r\frac{\epsilon}{2r} \\
& = & 1-\frac{\epsilon}{2} < 1.
\end{eqnarray*}
Thus, for any extreme point $ x_i $ of $ B_X, $ $ \|T_1{x_i}\| \leq 1 $ and so $ \|T_1\|=1. $
Similarly, $ \|T_2\|=1.$\\
Therefore, $ T=(1-t)T_1+tT_2 $ and $\|T_1\|=\|T_2\|=1. $ This contradicts the fact that $ T $ is an extreme contraction. Thus, if $ M_T \cap E_X $ contains exactly $2n $ elements then $ T( M_T \cap E_X ) \subset E_Y.$ This completes the proof of the theorem.
\end{proof}
As an immediate application of Theorem \ref{theorem:dimension-n}, it is possible to characterize extreme contractions from a finite-dimensional polygonal Banach space to a strictly convex normed linear space. We present the characterization in the form of the next theorem.
\begin{theorem}\label{theorem:poly}
Let $ X $ be a finite-dimensional polygonal Banach space and let $ Y $ be a strictly convex normed linear space. Then $ T \in L(X,Y), $ with $ \|T\|=1, $ is an extreme contraction if and only if $ span (M_T \cap E_X) = X.$
\end{theorem}
\begin{proof}
The proof of the necessary part of the theorem follows directly from Theorem \ref{theorem:dimension-n}. Here, we prove the sufficient part of the theorem. Suppose that $ T $ is not an extreme contraction. Then there exist $ T_1, T_2 \in L(X,Y) $ such that $ T_1, T_2 \neq T, $ $ \|T_1\|=\|T_2\|=1 $ and $ T = (1-t)T_1+tT_2 $ for some $ t \in (0,1). $ Let $ \{ x_1, x_2, \ldots, x_n \} \subseteq M_T \cap E_X $ be a basis of $ X. $ Then $ Tx_i = (1-t)T_{1}x_{i}+ t T_{2}x_{i}, $ for each $ i \in \{1,2, \ldots, n \}. $ We also note that $ T_{1}x_{i}, T_{2}x_{i} \in B_Y, $ as $ \|T_1\|=\|T_2\|=1. $ As $ Y $ is strictly convex, we have $ E_Y=S_Y. $ Therefore, each $ Tx_{i} $ is an extreme point of $ B_{Y} $ and it follows from that $ Tx_{i}= T_{1}x_{i} = T_{2}x_{i}, $ for each $ i \in \{1,2, \ldots, n \}. $ However, this implies that $ T_1, T_2 $ agree with $ T $ on a basis of $ X $ and therefore, $ T_1 = T_2 = T.$ This contradicts our initial assumption that $ T_1, T_2 \neq T. $ Thus $ T $ is an extreme contraction and this completes the proof of the theorem.
\end{proof}
As another useful application of Theorem \ref{theorem:dimension-n}, it is possible to characterize extreme contractions from $ l_1^n $ to $ Y, $ where $ Y $ is any normed linear space. It is worth mentioning that the following corollary also follows from \cite[Lemma 2.8]{Sha}.
\begin{cor}\label{corollary:l_1}
Let $ X=l_{1}^{n} $ and let $ Y $ be any normed linear space. Let $ T \in L(X,Y) $ with $\|T\|=1.$ Then $ T $ is an extreme contraction if and only if $ M_T=E_X $ and $ T(E_X) \subseteq E_Y. $
\end{cor}
\begin{proof}
Firstly, we know that $ \{ \pm e_1, \pm e_2, \ldots, \pm e_n \} $ is the set of all extreme points of the unit ball of $ l_{1}^{n},$ where $e_i=(\underbrace{0,0,\ldots,0,1,}_{i}0,\ldots,0)$ for each $ i \in \{1,2, \ldots, n\}. $ Now, the proof of the necessary part of the theorem follows directly from Theorem \ref{theorem:dimension-n}. Here, we prove the sufficient part of the theorem. Suppose that $ T $ is not an extreme contraction. Then there exist $ T_1, T_2 \in L(X,Y) $ such that $ T_1, T_2 \neq T, $ $ \|T_1\|=\|T_2\|=1 $ and $ T = (1-t)T_1+tT_2 $ for some $ t \in (0,1). $ Then $ Te_i = (1-t)T_{1}e_{i}+ t T_{2}e_{i}, $ for each $ i \in \{1,2, \ldots, n \}. $ We also note that for each $ i \in \{ 1,2, \ldots, n \}, $ $ T_{1}e_{i}, T_{2}e_{i} \in B_Y, $ as $ \|T_1\|=\|T_2\|=1. $ Since $ Te_{i} \in E_Y, $ it follows that $ Te_{i}= T_{1}e_{i} = T_{2}e_{i}, $ for each $ i \in \{1,2, \ldots, n \}. $ However, this implies that $ T_1, T_2 $ agree with $ T $ on a basis of $ X $ and therefore, $ T_1 = T_2 = T.$ This contradicts our initial assumption that $ T_1, T_2 \neq T. $ Thus $ T $ is an extreme contraction and this completes the proof of the corollary.
\end{proof}
\begin{remark}
If $ Y $ is a polygonal Banach space with $ p $ pairs of extreme points then the number of extreme contractions in $ L(l_{1}^{n},Y) $ is $ (2p)^n. $ Moreover, since the number of extreme contractions in $ L(l_{1}^{n},Y) $ is finite, each extreme point of the unit ball of $ L(l_{1}^{n},Y) $ is also an exposed point of the unit ball of $ L(l_{1}^{n},Y). $ Therefore, the number of exposed points of the unit ball of $ L(l_{1}^{n},Y) $ is also $ (2p)^n. $ In particular,\\
\noindent (i) the number of extreme contractions in $ L(l_{1}^{n},l_{1}^{n}) $ is $ {2^n}{n^n}. $\\
\noindent (ii) the number of extreme contractions in $ L(l_{1}^{n}, l_{\infty}^{n}) $ is $ 2^{n^2}. $\\
\end{remark}
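As a concrete illustration of the count above, the following Python sketch (a brute-force sanity check only, assuming NumPy; it is not part of any proof) enumerates, for $ n=2 $ and $ Y=l_{1}^{2} $ (so $ p=2 $), all linear maps sending each $ e_i $ to an extreme point of $ B_Y $ and confirms that there are $ (2p)^n = 2^n n^n = 16 $ of them, each of operator norm one.
\begin{verbatim}
# Brute-force sanity check of the count (2p)^n for n = 2 and Y = l_1^2, so p = 2.
# A map T in L(l_1^2, l_1^2) is determined by the columns T(e_1), T(e_2), and its
# operator norm from l_1^2 to l_1^2 equals the maximal l_1-norm of a column.
import itertools
import numpy as np

ext_points = [np.array(v) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]   # E_Y

count = 0
for cols in itertools.product(ext_points, repeat=2):    # choices of T(e_1), T(e_2)
    T = np.column_stack(cols)
    op_norm = np.abs(T).sum(axis=0).max()               # max column l_1-norm
    assert abs(op_norm - 1.0) < 1e-12
    count += 1

print(count)    # 16 = (2p)^n = 2^n * n^n for n = p = 2
\end{verbatim}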
As illustrated in \cite{LP}, one of the most intriguing aspects of the study of extreme contractions between Banach spaces is to explore the extremity of the images of the extreme points of the unit ball of the domain space under extreme contractions. Till now, we have considered only finite-dimensional polygonal Banach spaces as the domain space. However, if we choose the domain space to be something other than the polygonal Banach space then the scenario changes drastically. In fact, choosing the Euclidean space $ l_2^n $ as the domain space and $ l_{\infty}^n $ as the co-domain space, we have the following proposition:
\begin{prop}\label{proposition:l_2}
Let $ T \in L(X,Y) $ with $ \|T\|=1,$ where $ X=l_{2}^n $ and $ Y=l_{\infty}^n. $ Then $ T $ is an extreme contraction if and only if the matrix representation of $ T $ with respect to the standard ordered bases is of the form
\[
\begin{bmatrix}
a_{11}&a_{12}&\cdots &a_{1n} \\
a_{21}&a_{22}&\cdots &a_{2n} \\
\vdots & \vdots & \ddots & \vdots\\
a_{n1}&a_{n2}&\cdots &a_{nn}
\end{bmatrix}\]
where $ \sum \limits_{j=1}^{n} {a_{ij}}^2=1 $ for all $ i=1,2,\ldots,n.$
\end{prop}
\begin{proof}
We first note that a bounded linear operator $ T $ between reflexive Banach spaces is an extreme contraction if and only if $ T^* $ is an extreme contraction, which follows from the facts that $ T = (1-t)T_1+ t T_2 \Leftrightarrow T^*=(1-t)T_{1}^{*} + t T_{2}^{*}, $ $ \|T\|=\|T^*\| $ and $ (T^*)^*=T $ by reflexivity of domain and co-domain spaces. Now, the proof of the proposition directly follows from Corollary \ref{corollary:l_1}.
\end{proof}
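The row condition in Proposition \ref{proposition:l_2} reflects the elementary fact that the norm of an operator from $ l_{2}^n $ to $ l_{\infty}^n $ is the largest Euclidean norm of a row of its matrix. The following Python sketch (a numerical illustration only, assuming NumPy) samples a matrix with unit Euclidean rows and checks that $ \|Tx\|_{\infty} \leq 1 $ on random unit vectors, the bound being attained at the rows themselves.
\begin{verbatim}
# Numerical illustration: a matrix with unit Euclidean rows has norm 1 as an
# operator from l_2^n to l_infty^n, since ||Tx||_inf = max_i |<a_i, x>|.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)       # normalize each row a_i

# ||Tx||_inf <= 1 for random unit vectors x ...
xs = rng.standard_normal((1000, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
assert np.all(np.abs(xs @ A.T).max(axis=1) <= 1 + 1e-12)

# ... and the bound is attained at x = a_i, since each row is a unit vector.
assert np.allclose(np.abs(A @ A.T).max(axis=1), 1.0)

print("operator norm from l_2^n to l_infty^n equals the max row Euclidean norm")
\end{verbatim}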
In the following two examples, we further illustrate the variation in the norm attainment set of extreme contractions, depending on the domain and the range space.
\begin{example}
In $ L(l_{2}^2, l_{\infty}^2), $ \[ T=
\begin{bmatrix}
1&0 \\
1&0
\end{bmatrix}\] is an extreme contraction though $ T $ attains norm only at $ \pm(1,0).$
\end{example}
\begin{example}
In $ L(l_{2}^2, l_{\infty}^2), $ \[ T=
\begin{bmatrix}
1&0 \\
0&1
\end{bmatrix}\] is an extreme contraction. Here, $ T $ attains norm at $ \pm(1,0)$ and $ \pm(0,1) $ but $T(1,0)=(1,0)$ and $T(0,1)=(0,1) $ are not extreme points of the unit ball of $l_{\infty}^2.$
\end{example}
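The norm attainment sets in the two examples above can also be seen numerically. The following Python sketch (an illustration only, assuming NumPy) evaluates $ \|Tz\|_{\infty} $ over a fine sample of the unit circle of $ l_{2}^{2} $ and prints the points where the value $ 1 $ is (numerically) attained.
\begin{verbatim}
# Norm attainment for the two examples: T1 = [[1,0],[1,0]] and T2 = I, viewed as
# operators from l_2^2 to l_infty^2 (each has operator norm 1).
import numpy as np

T1 = np.array([[1.0, 0.0], [1.0, 0.0]])
T2 = np.eye(2)

theta = np.linspace(0, 2 * np.pi, 100001)
circle = np.column_stack([np.cos(theta), np.sin(theta)])    # unit sphere of l_2^2

for T in (T1, T2):
    values = np.abs(circle @ T.T).max(axis=1)               # ||Tz||_inf on the sphere
    attained_at = circle[values > 1 - 1e-9]
    print(np.round(attained_at, 3))   # T1: only +-(1,0); T2: +-(1,0) and +-(0,1)
\end{verbatim}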
Let us now focus on the connections between the study carried out in \cite{LP} and some of the results obtained by us in the present paper, in light of the newly introduced notion of L-P property for a pair of Banach spaces $ (X,Y). $ It is easy to observe that $ l_{1}^{n} $ is a universal L-P space, in the sense that the pair $ (l_{1}^{n},Y) $ has L-P property for every Banach space $ Y. $ We would also like to note that it follows from the works \cite{Li, L} of Lima that there exist finite-dimensional polygonal Banach spaces $ X,~Y, $ with $ X \neq Y, $ such that the pair $ (X,Y) $ has L-P property. Also, there are finite-dimensional polygonal Banach spaces $ X, Y $ with $ X \neq Y, $ such that the pair $ (X,Y) $ does not satisfy L-P property. As for example, the pair $ (l_{\infty}^{3},l_{1}^{3}) $ has L-P property, which follows from \cite[Theorem 2.1]{L}, whereas the pair $ (l_{\infty}^{4},l_{1}^{4}) $ does not satisfy L-P property, which follows from Lemma $ 3.2 $ of \cite{L}.
Let us observe that Proposition \ref{proposition:l_2} implies, in particular, that the pair $ (l_{2}^{n}, l_{\infty}^{n}) $ does not have L-P property. Now, we prove that this observation is actually a consequence of the following general result.
\begin{theorem}\label{theorem:L-Pstrict}
Let $ X $ be any strictly convex Banach space and $ Y $ be a finite-dimensional polygonal Banach space. Then the pair $ (X,Y) $ does not have L-P property.
\end{theorem}
\begin{proof}
Suppose that the pair $ (X,Y) $ satisfies L-P property. Then for any extreme contraction $ T \in L(X,Y), $ we have $ T(E_X) \subseteq E_Y. $ As $ X $ is strictly convex, $ B_X $ has infinitely many extreme points, in fact, $ E_X=S_X. $ On the other hand, as $ Y $ is a finite-dimensional polygonal Banach space, $ B_Y $ has only finitely many extreme points. Thus, for any extreme contraction $ T \in L(X,Y), $ $ T $ cannot be a one-to-one operator, since $ T(S_X) \subseteq E_Y $ is finite while $ S_X $ is infinite. Therefore, there exists a nonzero $ x \in B_X $ such that $ T(x)=0. $ So, $ T(\frac{x}{\|x\|})=0, $ which contradicts the fact that $ T(E_X)= T(S_X)\subseteq E_Y. $ This completes the proof of the fact that the pair $ (X,Y) $ does not have L-P property.
\end{proof}
\begin{remark}
From Theorem \ref{theorem:L-Pstrict}, we can conclude that the pairs $ (l_{p}^{n},l_{1}^{n}) $ and $ (l_{p}^{n},l_{\infty}^{n}) $ do not have L-P property for any $ 1 < p < \infty. $ Furthermore, we already know that the pair $ (l_{1}^{n},Y) $ has L-P property for every Banach space $ Y. $ This illustrates that there exist Banach spaces $ X, ~~Y $ such that the pair $ (X,Y) $ has L-P property but the pair $ (Y,X) $ does not have L-P property.
\end{remark}
\begin{remark}
The above Theorem \ref{theorem:L-Pstrict} holds for any Banach space $Y$ with countably many extreme points.
\end{remark}
Now, we give a characterization of finite-dimensional Hilbert spaces among all finite-dimensional strictly convex Banach spaces in terms of L-P property.
\begin{theorem}\label{theorem:Hilbert-finite}
Let $ X $ be a finite-dimensional strictly convex Banach space. Then $ X $ is a Hilbert space if and only if the pair $ (X, X) $ has L-P property.
\end{theorem}
\begin{proof}
Firstly, we claim that for a strictly convex Banach space $ X, $ the pair $ (X, X) $ has L-P property if and only if every extreme contraction in $ L(X,X) $ is an isometry. Clearly, if every extreme contraction in $ L(X,X) $ is an isometry then the pair $ (X,X) $ has L-P property, as for a strictly convex space $ X,$ $ S_X=E_X. $ Conversely, since $ (X,X) $ satisfies L-P property, $ T \in L(X,X) $ is an extreme contraction if and only if $ T(E_X) \subseteq E_X. $ As $ X $ is strictly convex, $ E_X = S_X. $ Thus, $ T \in L(X,X) $ is extreme contraction if and only if $ T(S_X) \subseteq S_X. $ In other words, we must have $ M_T=S_X $ and $ \|T\|=1, $ i.e., $ T \in L(X,X) $ is an isometry. This completes the proof of our claim. On the other hand, in \cite{N}, Navarro proved that a finite-dimensional Banach space $ X $ is a Hilbert space if and only if every extreme contraction in $ L(X,X) $ is an isometry. The proof of the theorem follows directly from a combination of these two facts.
\end{proof}
As an application of Theorem \ref{theorem:Hilbert-finite}, we characterize Hilbert spaces among all strictly convex Banach spaces in terms of L-P property.
\begin{cor}
A strictly convex Banach space $ X $ is a Hilbert space if and only if for every two-dimensional subspace $ Y $ of $ X, $ the pair $ (Y, Y) $ has L-P property.
\end{cor}
\begin{proof}
Let us first prove the ``only if" part. If $ X $ is a Hilbert space then every two-dimensional subspace $ Y $ of $ X $ is also a Hilbert space. Again, we know that for a finite-dimensional Hilbert space $ Y, $ isometries are the only extreme contractions in $ L(Y,Y) $ and $ E_Y = S_Y. $ Thus, for every two-dimensional subspace $ Y $ of $ X, $ the pair $ (Y,Y) $ has L-P property.\\
Let us now prove the ``if" part. If $ X $ is a strictly convex Banach space then every two-dimensional subspace $ Y $ of $ X $ is also a strictly convex Banach space. Since the pair $ (Y,Y) $ has L-P property, by Theorem \ref{theorem:Hilbert-finite}, $ Y $ is a Hilbert space. Thus every two-dimensional subspace of $ X $ is a Hilbert space, which means that $ X $ itself has to be a Hilbert space. This establishes the corollary.\\
\end{proof}
\end{document} |
\begin{document}
\title{The Metric Completion of
Outer Space}
\author{Yael Algom-Kfir
\thanks{Electronic address: \texttt{yalgom@univ.haifa.ac.il}}}
\affil{Mathematics Department, University of Haifa\\ Mount Carmel, Haifa, 31905, Israel}
\maketitle
\begin{abstract}We develop the theory of a metric completion of an asymmetric metric space.
We characterize the points on the boundary of Outer Space that are in the metric completion of Outer Space with the Lipschitz metric. We prove that the simplicial completion, the subset of the completion consisting of simplicial tree actions, is homeomorphic to the free splitting complex.
We use this to give a new proof of a theorem by Francaviglia and Martino that the group of isometries of Outer Space is equal to $\textup{Out}(F_n)$ for $n \geq 3$ and to $\text{PSL}(2,\mathbb{Z})$ for $n=2$.
\end{abstract}
Outer Space, defined by Culler and Vogtmann \cite{CV}, has become in the past decades an important tool for
studying the group of outer automorphisms of the free group $\textup{Out}(F_n)$.
It is defined as the space of minimal, free and simplicial metric trees with an
isometric action of $F_n$ (see section \ref{secOuterSpace}).
Outer Space, denoted $\mathcal{X}_n$, admits a natural (non-symmetric)
metric: the distance $d(X,Y)$ is the maximal amount of stretching
any equivariant map from $X$ to $Y$ must apply to the edges of $X$. The group
$\textup{Out}(F_n)$ acts on Outer Space by isometries. As mentioned, this metric is non-symmetric
and in fact $\frac{d(X,Y)}{d(Y,X)}$ can be arbitrarily large (see \cite{AKB} for a
general theorem about the asymmetry of Outer Space). Moreover it is not proper in
the sense that outgoing balls
\[\overline{B_{\text{out}}}(X,r) = \{Y \mid d(X,Y) \leq r \}\]
are not compact (See Proposition \ref{compactBalls}). One way to fix this is to symmetrize the metric, i.e. define
$d_s(X,Y) = d(X,Y) + d(Y,X)$. Closed balls in the metric $d_s$ are compact thus
resolving both problems. However, in symmetrizing we lose much of the insight that
Outer Space provides into the dynamics of the action of $\textup{Out}(F_n)$ on $F_n$.
Moreover, Outer Space with the symmetric metric is
not a geodesic space \cite{FM}. Therefore we prefer to keep the non-symmetric metric and determine the metric completion of
$\mathcal{X}_n$ with the asymmetric metric. This raises the general question of how to
complete an asymmetric metric space.
This turns out to be an interesting problem in and of itself. The new terms appearing in Theorem \ref{ThmCompl} are defined in Section 1.
\begin{introthm}\label{ThmCompl}
For any forwards continuous asymmetric space $(X,d)$ which satisfies property (\ref{star}), there is a unique forwards
complete asymmetric space $(\hat X, \hat d)$
and an isometric embedding $\iota: X \to \hat X$
such that $\hat X$ is semi forwards continuous with respect to $\text{Im} (\iota)$ and so that $\iota(X)$ is forwards dense in $\hat X$.
\end{introthm}
We also prove
\begin{introcor}\label{isomXintro}
For any forwards continuous asymmetric space $(X,d)$ which satisfies property (\ref{star}),
each isometry $f \colon X \to X$ induces a unique isometry $F \colon \hat X \to \hat X$.
\end{introcor}
After the completion of this article, it was brought to my attention that the completion of an asymmetric metric space (also called a quasi-metric space) has been previously addressed, see for example \cite{BvBR}. However, their definition of a forward limit is slightly different (see \cite[page 8]{BvBR}), as is their categorical approach. Our approach is a little more hands-on, providing some interesting examples as well to show the necessity of the conditions in our theorems. Both approaches give the same completion and we believe each has its merits.
We then address the issue of completing Outer Space. The boundary of Outer Space $\partial \mathcal{X}_n$ is the space of all homothety classes of \emph{very small} $F_n$ trees (see definition \ref{defnVerySmall}) that are not free and simplicial.
Our main result is:
\begin{introthm}\label{myB}
Let $[T]$ be a homothety class in $\partial \mathcal{X}_n$. The point $[T]$ is contained in the completion of $\mathcal{X}_n$ if and only if for any (equivalently, for some) $F_n$ tree $T$ in the class $[T]$, orbits in $T$ are not dense and arc stabilizers are trivial.
\end{introthm}
We show that the Lipschitz distance can be extended to the completion
(allowing the value $\infty$) and that isometries of $\mathcal{X}_n$ uniquely extend
to the completion. We distinguish the set of simplicial trees in the metric
completion of Outer Space and refer to it as the \emph{simplicial metric
completion}. The simplicial metric completion is related to a well
studied complex called the complex of free splittings. We show that the simplicial metric completion is invariant under any isometry of the completion of Outer Space (see Proposition \ref{isomExtend}).
The complex of free splittings of $F_n$, denoted $FS_n$ (and
introduced by Hatcher \cite{HatFS} as the sphere complex) is the
complex of minimal simplicial actions of $F_n$ on simplicial trees
with trivial edge stabilizers (see definition \ref{defnSplittingComplex}).
The group $\textup{Out}(F_n)$ acts on $FS_n$ by simplicial automorphisms. This
action is cocompact but has large stabilizers.
Aramayona and Souto \cite{AS} proved that $\textup{Out}(F_n)$
is the full group of automorphisms of $FS_n$.
The free splitting complex is one of the analogues of
the curve complex for mapping class groups. For example,
Handel and Mosher \cite{HMhyp} showed that the free splitting complex
with the Euclidean metric is Gromov hyperbolic (this proof was later
simplified by \cite{HilionHorbezFS}). We prove that
\begin{introthm}\label{myC}
The simplicial metric completion of Outer Space with the Lipschitz topology is homeomorphic to the free splitting complex with the Euclidean topology.
\end{introthm}
We prove that an isometry of Outer Space extends to a simplicial automorphism of $\text{FS}_n$ (Proposition \ref{HomeoPresSimp}) and use Aramayona and Souto's theorem to prove the following theorem of Francaviglia and Martino.
\begin{introthm}\cite{FMIsom}\label{myD}
The group of isometries of Outer Space is equal to $\textup{Out}(F_n)$ for $n\geq 3$ and to $\text{PSL}(2,\mathbb{Z})$ for $n=2$.
\end{introthm}
Francaviglia-Martino prove Theorem \ref{myD} for both the Lipschitz metric and for the symmetric metric.
The techniques in this paper only apply to the asymmetric case.
However, once the theory of the completion of Outer Space is established, the proof of Theorem \ref{myD} is natural and relatively light in computations.
As highlighted in \cite{FMIsom}, an application of Theorem \ref{myD} is that if $\Gamma$ is an irreducible lattice in a higher-rank connected semi-simple Lie group, then every action of $\Gamma$ on
Outer Space has a
global fixed point (this follows from a result of Bridson and Wade \cite{BW}
that the image of $\Gamma$ in $\textup{Out}(F_n)$ is always finite). In lay terms, $\Gamma$ cannot act on $\mathcal{X}_n$ in an interesting way.
We remark that the Thurston metric on Teichm\"{u}ller space, which inspired the
Lipschitz metric on Outer Space, is in fact a proper metric, i.e., closed balls are
compact. Walsh \cite{WalshHoroBoundary} proved that the isometry group of Teichm\"{u}ller space with the
Thurston metric is the extended mapping class group. His technique was to embed
Teichm\"{u}ller space into the space of functions and consider its horofunction boundary.
However, he uses the fact that the Teichm\"{u}ller space with the Thurston metric is proper, which is false in the case of Outer Space.
\begin{comment}
We begin this article by developing a theory for the completion of an asymmetric metric space, and prove Theorem \ref{ThmCompl}. In section 2 we give some background material
on Outer Space and its topologies and boundary.
In section 3 we characterize the completion points in $\partial \mathcal{X}_n$ and prove Theorem \ref{myB}.
In section 4 we prove that the Lipschitz distance extends to the completion.
In section 5 we prove that the completion metric is a natural extension of the Lipchitz metric and prove Theorem \ref{myC}.
In section 6 we give the new proof of Theorem \ref{myD}.
\end{comment}
I originally wrote a version of this paper in 2012 as part of my postdoctoral work.
The original paper was rejected by a journal after an unusually long refereeing process and as often happens, I put it away for a long time.
I wish to thank Ilya Kapovich and Ursula Hamenst\"{a}dt for encouraging me to look at it again and to extend the results of the original work. I also thank Emily Stark for thoroughly reading the first section and Mladen Bestvina for inspiring the original work. Thanks to David Dumas for pointing me to papers on quasi-metric spaces.
\section{The completion of an asymmetric metric space}
This section develops the theory of the completion of an incomplete asymmetric metric space (see Definition \ref{asymMetricSp}). Working out the details often reminded me of W. Thurston's quote in his famous ``notes'' \cite{ThurstonNotes} ``Much of the material or technique is new, and more of it was new to me. As a consequence, I did not always know where I was going \dots The countryside is scenic, however, and it is fun to tramp around \dots''.
The starting point was to imitate the completion of a metric space using Cauchy sequences. This strategy eventually works but many potholes must first be overcome.
For example, not every convergent sequence is a Cauchy sequence (Example \ref{ConvNoCauchy}). In fact, the Cauchy condition is a little unnaturally strict in the asymmetric setting so we relax it to define admissible sequences (see Definition \ref{add_seq_defn}). We define a limiting distance between two admissible sequences (Lemma \ref{dist_limit}). In the symmetric setting the completion is the space of equivalence classes of sequences with limiting distances equal to 0. In the asymmetric setting an admissible sequence $\{x_k\}$ can converge to a point $x$ but one of the limiting distances between $\{x_k\}$ and the constant sequence $\{x\}$ is not 0. Therefore we define a weaker equivalence relation (see Definition \ref{defnEquiv}). We then define the completion $\hat X$ of $X$ as the set of equivalence classes of admissible sequences with an appropriate notion of $\hat d$ (see Definition \ref{defHat}). However, one must add extra conditions to guarantee that $(\hat X, \hat d)$ is an asymmetric metric space (Lemma \ref{triDHatLemma}), and that it is complete (Lemma \ref{bigLemma}). Finally, under certain conditions we show that for $(X,d)$ an asymmetric metric space $(\hat X, \hat d)$ is the unique complete asymmetric metric space that contains a dense copy of $X$ (Theorem \ref{ThmCompl}), and that isometries of $X$ uniquely extend to $\hat X$ (see Corollary \ref{isomX}).
The following are not strictly needed for the rest of the paper but are included for clarity: Proposition \ref{hatXsatisfies11}, Remark \ref{remarkBadExample} and Lemma \ref{connection}.
\begin{definition}\label{asymMetricSp}
An asymmetric metric on a set $X$ is a function $d:X \times X \to {\mathbb R}\cup \{\infty\}$ which satisfies the following properties:
\begin{enumerate}
\item $d(x,y) \geq 0$
\item If $d(x,y) = 0$ and $d(y,x) = 0$ then $x=y$.
\item For any $x,y,z \in X$, $d(x,z) \leq d(x,y) + d(y,z)$.
\end{enumerate}
\end{definition}
Since the metric is not symmetric, a sequence might have many (forward)
limits.
\begin{defn}\label{dfCFL} The point $x \in X$ is a \emph{forward limit} of the sequence
$\{x_n\}_{n=1}^\infty \subset X$ if
\[\lim_{n \to \infty} d(x_n,x)=0,\] and it is the \emph{closest forward limit} (CFL) if additionally, for every $y$ in $X$
such that $\lim_{n \to \infty} d(x_n,y)=0$ we have $d(x,y)=0$. If the two conditions are satisfied then
we write $ x = \fl{n} x_n$.
\end{defn}
Note that if $\{x_n\}$ admits a CFL $x$ then it is unique since if $z$
was another such limit we would get $d(x,z) = 0 = d(z,x)$ hence $x=z$.
A closest
backward limit (CBL) is defined as in Definition \ref{dfCFL}, only switch the order of the parameters of $d$.
We would like to define the completion of an asymmetric metric
space in a similar fashion to
the completion of a metric space, as equivalence classes of (appropriately defined)
Cauchy sequences.
We introduce the notion of an admissible sequence which is better suited to the asymmetric setting.
\begin{defn}[Admissible sequences, Cauchy sequences]\label{add_seq_defn}
A sequence $ \xi = \{x_n\} $ is \emph{forwards Cauchy} if for all $\varepsilon>0$ there is an $N(\varepsilon) \in \NN$ such that
\begin{equation}\label{eq21} d(x_i, x_j)< \varepsilon \quad \text{for all} \quad j>i>N(\varepsilon)\end{equation}
In words, we first choose the left index to be large enough and then the right index to be even larger. For backwards Cauchy we choose the right index first and then the left index. \\
A sequence $ \{ x_n \} \subseteq X $ is \emph{forwards admissible} if for all $\varepsilon>0$ there is a natural number $N(\varepsilon)$ such that for all $ n >N(\varepsilon)$ there is a natural number $K(n,\varepsilon) > n $ such that for all $k>K(n,\varepsilon)$,
\begin{equation}\label{eq22}d(x_n, x_k)<\varepsilon. \end{equation}
Refer to Figure \ref{AdmissibleFig} for an illustration of the indices.
A backwards admissible sequence is defined similarly (choosing the right index first). \\
When we leave out the adjective forwards or backwards for an admissible sequence we shall always mean a forwards admissible sequence.
\end{defn}
\begin{figure}
\caption{\label{AdmissibleFig} An illustration of the indices $N(\varepsilon)$ and $K(n,\varepsilon)$ in the definition of a forwards admissible sequence.}
\end{figure}
Note that if $d$ is a symmetric metric then the Cauchy and admissible definitions are equivalent.
\begin{example}\label{NonSymEx}
Consider $X=[0,1]$ where $d(x,x') = x-x'$
if $x>x'$, $d(x,x') =1$ if $x<x'$, and $d(x,x)=0$. The sequence $x_n = \left\{ \begin{array}{ll} \frac{1}{n} & n \text{ odd}\\[0.2 cm] \frac{1}{2n} & n \text{ even} \end{array}\right.$ is a forwards admissible sequence that is not forwards Cauchy. The point $x =0$ is its CFL.
\end{example}
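For concreteness, the following Python sketch (a finite numerical illustration only, not a proof) implements the metric of Example \ref{NonSymEx}, checks the directed triangle inequality on a grid, and exhibits both the failure of the forwards Cauchy condition and the smallness of the tail distances $d(x_n,x_k)$.
\begin{verbatim}
# Finite illustration of Example NonSymEx:
# d(x,x') = x - x' if x > x', d(x,x') = 1 if x < x', and d(x,x) = 0.
def d(x, xp):
    if x == xp:
        return 0.0
    return x - xp if x > xp else 1.0

def x(n):                                  # the sequence of the example
    return 1.0 / n if n % 2 == 1 else 1.0 / (2 * n)

# directed triangle inequality d(a,c) <= d(a,b) + d(b,c), checked on a grid
grid = [i / 20 for i in range(21)]
assert all(d(a, c) <= d(a, b) + d(b, c) + 1e-12
           for a in grid for b in grid for c in grid)

# not forwards Cauchy: an even index followed by the next (odd) index gives distance 1
print(d(x(4), x(5)))                       # 1.0, since x(4) = 1/8 < x(5) = 1/5

# forwards admissible: far enough along the tail, every term lies strictly below x(n)
n = 6
print(max(d(x(n), x(k)) for k in range(2 * n + 2, 200)))    # < x(6) = 1/12
\end{verbatim}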
\begin{remark}\label{NKdefn}
We use the convention that $N(\varepsilon), K(n,\varepsilon)$ denote the smallest integers with the required property.
Clearly, the indices $N$ and $K$ also depend on the sequence $\xi$; when emphasizing this dependence we will write $N(\xi, \varepsilon)$, etc.
\end{remark}
Unlike in a symmetric metric space, it is false that a convergent sequence is a Cauchy sequence. Even more disturbingly, a sequence admitting a CFL may not have an admissible subsequence!
\begin{example}\label{ConvNoCauchy}
Let $X$ be the space from Example \ref{NonSymEx}; we glue countably many different copies of $X$ along the point $0$:
\[ Y = X \times \mathbb{N}/ (0,i) \sim (0,j) \text{ for } i,j \in \mathbb{N}, \]
and extend the metric by declaring $d((x,i), (x',j)) = 1$ if $x\neq 0$ and $x'\neq0$ and $i \neq j$. Consider the sequence $y_k = (\frac{1}{k}, k)$. Then $y_k$ has a CFL, namely the equivalence class of the point $(0,1)$. But $d(y_n, y_k) =1$ for each $n \neq k$, so it admits no admissible subsequence.
\end{example}
However, we do have the following properties:
\begin{prop}\label{ssProp}
If $\{x_n\}$ is a forwards admissible sequence and $\{x_{n_j}\}$ is an infinite subsequence then
\begin{enumerate}
\item $\{ x_{n_j} \}$ is admissible.
\item if $x$ is a forward limit of $\{x_n\}$ then it is a forward limit of $\{x_{n_j}\}$.
\item if $x$ is a forward limit of $\{x_{n_j}\}$ then it is a forward limit of $\{x_n\}$.
\item $x = \fl{n} x_n$ if and only if $x=\fl{j} x_{n_j}$.
\end{enumerate}
\end{prop}
\begin{proof}
Items (1) and (2) follow from the definitions. To prove (3), suppose $\lim_{j \to \infty} d(x_{n_j}, x) = 0$. Given $\varepsilon>0$, let $N(\varepsilon)$ be the constant from Definition \ref{add_seq_defn}, let $n>N(\varepsilon)$, and let $K(n, \varepsilon)$ be the constant from the same definition. Then for $n_j>K(n,\varepsilon)$ we have:
\[ d(x_n,x) \leq d(x_n, x_{n_j}) + d(x_{n_j}, x) \leq d(x_{n_j},x) + \varepsilon, \] and letting $j \to \infty$ gives $d(x_n,x) \leq \varepsilon$ for all $n>N(\varepsilon)$, so $\lim_{n \to \infty} d(x_n,x)=0$.
Item (4) follows from (2) and (3).
\end{proof}
\begin{comment}
To fix this we define the notion of a continuous metric space. From the triangle inequality we automatically have that $d(x, \cdot)$ is lower semi-continuous (and $d(\cdot, y)$ is upper semi-continuous). Thus if $\displaystyle \lim_{m \to \infty} y_m = y$ then $d(x,y_m) \geq d(x,y) - \varepsilon$ for large $m$. For a forward continuous metric space, we require that if $y$ is the CFL of an admissible sequence $\{y_m\}$ then $\displaystyle \lim_{m \to \infty} d(x,y_m) = d(x,y)$.
\begin{defn}[CHECK DEFINITION]
An asymmetric metric $d$ is \emph{forwards continuous} if for
every sequence\footnote{admissible?}
$\{x_m\}_{m=1}^\infty$ that admits a CFL $x$, and for any $y \in X$ so that $y \neq x$ we have
\begin{equation}\leftarrowbel{continuityEq}
d(y,x) = \lim_{m \to \infty} d( y,x_m)
\end{equation}
Similarly it is \emph{backwards continuous} if every sequence $\{x_m\}_{m=1}^\infty$ hat admits a CBL $x$, and for any $y \in X$, $y \neq x$ we have
\[d(x,y) = \lim_{m \to \infty}d(x_m,y)\]
\end{defn}
\begin{example}\leftarrowbel{example1}
\begin{enumerate}
\item We will show that Outer Space $\mathcal{X}_n$ with the Lipschitz metric is both forwards and backwards continuous.
\item The space $X$ from Example \ref{ConvNoCauchy} is forward continuous.
\item The space $Y$ from Example \ref{ConvNoCauchy} is not forward continuous: Take $x = (\frac{1}{2},1)$ and $y_k = (\frac{1}{k},2)$ then $y = \fl{k} y_k = (0,2) \sim (0,1)$ but $d(x,y) = \frac{1}{2} \neq 1 = \lim d(x,y_k)$.
\item Note that if we don't require that $y \neq x$ in Equation (\ref{continuityEq}) we get that
$\lim_{k \to \infty} d(x,x_k) = 0$ and this is too strong a condition. For example, this does not occur in the completion of Outer Space.
\end{enumerate}
\end{example}
\begin{prop} If $(X,d)$ is a forward continuous asymmetric metric space then every convergent sequence that has a CFL is forwards admissible.
\end{prop}
\begin{proof}
Let $\{x_m \} \subset X$ be a sequence admitting a CFL $x \in X$. For each $\varepsilon>0$ there is an $M(\varepsilon)$ so that for $m>M(\varepsilon)$: $d(x_m,x) < \varepsilon$. Fix $m>M(\varepsilon)$ then $\lim_{k \to \infty} d(x_m, x_k) = d(x_m,x) < \varepsilon$. Therefore, there exists a $K(m,\varepsilon)$ so that for all $k> K(m,\varepsilon)$ we have $d(x_m , x_k) < 2\varepsilon$.
\end{proof}
\end{comment}
\begin{obs}\label{obs1} In keeping with the conventions of Remark \ref{NKdefn}, we observe that if $\xi$ is forwards admissible and $\varepsilon' \leq \varepsilon$ then
\begin{enumerate}
\item $N(\varepsilon) \leq N(\varepsilon')$.
\item $K(n,\varepsilon) \leq K(n,\varepsilon')$
\item $n' \geq K(n,\varepsilon)$ implies $K(n,\varepsilon) \leq K(n',\varepsilon)$
\end{enumerate}
\end{obs}
\begin{prop}\label{addConCauchy}
Every forwards admissible sequence $\xi = \{ x_n\}$ has a subsequence $\{x_{n_i}\}$ which is forwards Cauchy. Moreover we can choose this subsequence so that for all $i<j$ we have \[ d(x_{n_i},x_{n_j})< \frac{1}{2^i}\]
\end{prop}
\begin{proof}
For convenience let us denote $x(n) = x_n$. Since $\{x(n)\}$ is admissible, then for $\varepsilon>0$ let $N(\varepsilon)$ and for $n>N(\varepsilon)$ let $K(n,\varepsilon)$ be the constants from Definition \ref{add_seq_defn}.
Then the Cauchy subsequence will be given recursively by $n_1 = N(\frac{1}{2})$ and
\[n_{j+1} = \max \left\{ N \left( \frac{1}{2^{j+1}} \right)+1 , K \left( n_j, \frac{1}{2^{j+1}} \right) \right\}\]
For all $j>i$ we have $n_i>N(\frac{1}{2^i})$ and $n_j \geq K( n_{j-1}, \frac{1}{2^j})$ and by \ref{obs1}(3) $n_j \geq K( n_i, \frac{1}{2^i})$ hence $d(x(n_i), x(n_j))< \frac{1}{2^i}$. \end{proof}
We define the limiting distance between two admissible sequences.
\begin{lemma}\label{dist_limit}
Let $\xi = \{x_n\}, \eta=\{y_n\}$ be forwards admissible. Then the iterated limit $\displaystyle c(\xi,\eta) = \lim_{n \to \infty} \lim_{k \to \infty} d(x_n,y_k)$ exists in $[0,\infty]$. That is, one of the following two options holds:
\begin{enumerate}
\item\label{case1} For all $r>0$ there is an $N(r) \in \NN$ so that for all $n>N(r)$ there is a $K(n,r) \in \NN$ such that, \[ d(x_n, y_k) > r \quad \text{for all} \quad k>K(n,r)\]
In this case we write: $c(\xi,\eta) = \infty$.
\item\label{case2} There is a number $c(\xi,\eta)\geq 0$ such that for all $\varepsilon>0$ there is an $N(\varepsilon) \in \NN$ so that for all $n>N(\varepsilon)$ there is a $K(n,\varepsilon) \in \NN$ such that, \[| d(x_n, y_k) - c(\xi,\eta)| < \varepsilon \quad \text{for all} \quad k>K(n,\varepsilon)\]
\end{enumerate}
\end{lemma}
\begin{comment}
\begin{figure}
\caption{\leftarrowbel{DistanceSequences}
\end{figure}
\end{comment}
We shall need the following definition and proposition to prove this lemma.
\begin{defn}\label{Defn:AlmostMonotonicallyDec}
A sequence $ \{ r_n\}_{n=1}^{\infty} $ in $\mathbb{R}$ is \emph{almost monotonically decreasing} if for every $\varepsilon>0$ there is a natural $N(\varepsilon)$ such that for all $n>N(\varepsilon)$ there is a natural number $K(n,\varepsilon)$ so that for all $k>K(n,\varepsilon)$
\[ r_{k} \leq r_n+\varepsilon \]
$\{ r_n\}_{n=1}^{\infty}$ is almost monotonically increasing if $\{- r_n\}_{n=1}^{\infty}$ is almost monotonically decreasing.
\end{defn}
\begin{prop}\label{Prop:BoundedAndAlmostMonDec}
If $\{ r_i\}$ is almost monotonically decreasing and bounded below or almost monotonically increasing and bounded above then it converges.
\end{prop}
\begin{proof} We shall prove the almost monotonically decreasing case.
Since $\{ r_i\}$ is bounded, there is a subsequence $\{ r_{i_j}\}$ converging to some number $R$.
We show that $R$ is in fact the limit of $\{ r_i\}$. Let $\varepsilon>0$, and $M=M(\varepsilon)$ be such that for $j \geq M$ we have $|r_{i_j} - R| < \varepsilon$.
Let $K = K(i_M,\varepsilon)$ be the constant from Definition \ref{Defn:AlmostMonotonicallyDec}. So for any $k>K$
\begin{equation}\label{eq2321} r_k \leq r_{i_M} + \varepsilon < R+2\varepsilon \end{equation}
For the other inequality, let $K' = K(k,\varepsilon)$ be the constant from the almost monotonically decreasing
definition \ref{Defn:AlmostMonotonicallyDec}, and choose $s$ large enough so that $i_s>K'$ and $s>M$ then
\begin{equation}\label{eq24} R - \varepsilon < r_{i_s} \leq r_k +\varepsilon \end{equation}
From equations \ref{eq2321} and \ref{eq24} we get $|r_k-R| < 2\varepsilon$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{dist_limit}]
We must show that every pair of admissible sequences $\xi = \{x_n\}$, $\eta = \{y_n\}$
satisfies either (1) or (2). Let $\varepsilon>0$, fix $n \in \NN$, and construct the sequence $\alpha(n) = \{a_k\}_{k=n}^\infty$ in $\mathbb{R}$ by defining $a_k = d(x_n,y_k)$ for $n \leq k \in \NN$.
For a fixed $n$, the sequence $\alpha(n)$ is non-negative and almost monotonically decreasing. This follows from the triangle inequality and because $\eta$ is admissible.
By Proposition \ref{Prop:BoundedAndAlmostMonDec} $\alpha(n)= \{a_k\}_{k=n}^\infty$ converges to a limit $c_n$.
We now show that $\{c_n\}_{n=1}^\infty$ is almost monotonically increasing.
Since $\lim_{j \to \infty} d(x_n, y_j) = c_n$
then for each $n$ let $J(n,\varepsilon)$ be an index so that for all $j>J(n,\varepsilon)$ \begin{equation}\label{eq25} |d(x_n,y_j) - c_n|< \varepsilon. \end{equation}
We can assume that $J(n,\varepsilon)$ is monotonically increasing with $n$.
Let $N:= N_{\ref{add_seq_defn}}(\xi,\varepsilon)$ be the constant from Definition \ref{add_seq_defn}, so that for all $n>N$ there is a $K_{\ref{add_seq_defn}}(\xi,n,\varepsilon)$ such that for $k>K_{\ref{add_seq_defn}}(\xi,n,\varepsilon)$ we have $d(x_{n}, x_k)< \varepsilon$.
We take $n>N$, $j>K(n,\varepsilon)$ and $t>\max\{j, J(j,\varepsilon)\}$ (then $t>J(n,\varepsilon)$ ), then we have:
\begin{align}
c_j & \geq d(x_j,y_t) - \varepsilon \label{eqA} \\[0.2 cm]
&\geq d(x_n,y_t) - d(x_n,x_j) - \varepsilon \label{eqB} \\[0.2 cm]
& \geq d(x_n,y_t) - 2 \varepsilon \label{eqC} \\[0.2 cm]
&\geq c_n - 3 \varepsilon \label{eqD}
\end{align}
Therefore $\{c_n \}$ is almost monotonically increasing. So either
\begin{itemize}
\item $\{c_n\}$ is bounded above and hence converges to a limit $c$. This implies case \ref{case2} of the statement, or,
\item $\{c_n\}$ is unbounded and so $\xi, \eta$ satisfy case \ref{case1} of the statement. \qedhere
\end{itemize}
\end{proof}
\begin{obs}\label{cTriangleInequality}
The function $c( \cdot , \cdot )$ satisfies the triangle inequality.
\end{obs}
\begin{proof}
Let $\xi = \{x_n\}, \eta=\{y_n\}, \zeta = \{z_n\}$ be admissible sequences, then for indexes $n<k<m$ we have:
\[ d(x_n, z_m) \leq d(x_n, y_k) + d(y_k, z_m) \]
The triangle inequality for $c(\xi,\zeta)$ follows.
\end{proof}
\begin{defn}\label{interlace}
Let $\xi = \{x_n\}$ and $\eta = \{y_n\}$ be sequences in $X$. We denote their interlace sequence
by $j(\xi, \eta) = \zeta = \{ z_n \}$ which is given by:
\[ z_n = \left\{ \begin{array}{ll} x_{\frac{n+1}{2}} & n \text{ odd }
\\ y_{\frac{n}{2}} & n \text{ even } \end{array} \right. \]
We call the admissible sequences $\xi$ and $\eta$ \emph{neighbors},
if their interlace is admissible.
\end{defn}
\begin{lemma}\label{equiv_relation}
Let $\xi = \{x_n\}$ and $\eta = \{y_n\}$ be forwards admissible sequences then $\xi$
and $\eta$ are neighbors iff \[c(\xi,\eta) = 0 \text{ and } c( \eta,\xi)=0\]
\end{lemma}
\begin{proof}
Suppose the interlace $j(\xi,\eta) = \zeta = \{z_n\}$ is admissible. For any $n$, $x_n = z_{2n-1}$ and $y_n = z_{2n}$. Therefore, for large $n<k$
\[ d(x_n,y_k) = d(z_{2n-1},z_{2k})\]
which is small provided $2n-1 >N_{\ref{add_seq_defn}}(\zeta,\varepsilon)$ and $2k> K_{\ref{add_seq_defn}}(\zeta, 2n-1, \varepsilon)$. This proves that $c(\xi,\eta) = 0$ and similarly $c(\eta,\xi) =0$.
For the other deduction assume that both sequences are forwards admissible and that
both limits are $0$.
We must find for all $\varepsilon>0$ a natural number $N=N(\varepsilon)$ and for all $n>N$ a natural number $K = K(n,\varepsilon)$ so that for all $n>N$ and $k>K$
\begin{equation}\label{eq33} d(z_n,z_k)<\varepsilon \end{equation}
We now have four cases:
If both $n,k$ are odd then inequality \ref{eq33} follows from the admissibility of $\xi$.
If both $n,k$ are even then inequality \ref{eq33} follows from the admissibility of $\eta$.
If $n$ is odd and $k$ even
then inequality \ref{eq33} follows from $c(\xi,\eta)=0$.
If $n$ is even and $k$ odd
then inequality \ref{eq33} follows from $c(\eta,\xi)=0$.
\end{proof}
\begin{prop}\label{c_well_defined_on_neighbors} Suppose $\xi$ and $\xi'$ are neighbors, and $\eta$ and $\eta'$ are neighbors. Then,
\[ c(\xi, \eta) = c(\xi', \eta'). \]
\end{prop}
\begin{proof}
If $\xi, \xi'$ are neighbors and $\eta,\eta'$ are neighbors then by Observation \ref{cTriangleInequality} and Lemma \ref{equiv_relation} we have $c(\xi, \eta) \leq c(\xi,\xi') + c(\xi', \eta') + c(\eta',\eta) = c(\xi',\eta')$. By symmetry of the roles of the two pairs, $c(\xi',\eta') \leq c(\xi,\eta)$ as well, and the equality follows.
\end{proof}
\begin{remark}\label{CFLnotnbrs}
Notice that if $\xi = \{x_n\}_{n=1}^\infty$ and
$\eta = \{y_n\}_{n=1}^\infty$ are
admissible sequences with the same closest forward limit then they are not necessarily neighbors.
For example, consider $X$ in Example \ref{NonSymEx} and the sequences $x_n = 0$ and $y_n = \frac{1}{n}$ for all $n$.
\end{remark}
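The two limiting distances in the remark can be computed directly: with $\bold{x}$ the constant sequence $0$ and $\eta = \{\frac{1}{n}\}$ in Example \ref{NonSymEx}, one has $c(\bold{x},\eta) = 1$ while $c(\eta,\bold{x}) = 0$, so by Lemma \ref{equiv_relation} the two sequences are not neighbors. A short Python illustration (only an illustration) of the relevant distances:
\begin{verbatim}
# In Example NonSymEx, the constant sequence 0 and eta = {1/n} both have CFL 0,
# yet c(bold(0), eta) = 1 while c(eta, bold(0)) = 0, so they are not neighbors.
def d(x, xp):
    if x == xp:
        return 0.0
    return x - xp if x > xp else 1.0

print([d(0.0, 1.0 / k) for k in (10, 100, 1000)])   # constantly 1:  c(bold(0), eta) = 1
print([d(1.0 / n, 0.0) for n in (10, 100, 1000)])   # tends to 0:    c(eta, bold(0)) = 0
\end{verbatim}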
\begin{remark}
By Lemma \ref{equiv_relation} and the triangle inequality we have that being
neighbors is an equivalence relation. However, as noted in Remark \ref{CFLnotnbrs} this is not the equivalence relation we should use to define the forwards completion.
\end{remark}
\begin{notation}
For $x \in X$ we denote the constant sequence $\{x\}_{n=1}^\infty$ by $\bold{x}$.
\end{notation}
\begin{defn}[Equivalent sequences, realization in $X$]\label{defnEquiv}
We call the forwards admissible sequences $\xi$ and $\eta$ equivalent if either they are neighbors or they have the same CFL. This is an equivalence relation (using Proposition \ref{ssProp}). \\
If an equivalence class $\alpha$ has a representative $\xi\in\alpha$ with a CFL in $X$ denoted $x$, then the constant sequence $\bold{x}$ also belongs to $\alpha$ and we say that $\alpha$ is realized by $x$ in $X$.
\end{defn}
\begin{obs}\label{obs2}
\begin{enumerate}
\item There exists some representative $\xi \in \alpha$ such that $x = \overrightarrow{\lim} \xi$ if and only if for all $\xi \in \alpha$, $x = \overrightarrow{\lim} \xi$.
\item If $\alpha$ is not realized then for all $\xi, \xi' \in \alpha$, $\xi$ and $\xi'$ are neighbors.
\end{enumerate}
\end{obs}
\begin{defn}\label{defHat}
Let $\hat X$ be the quotient set of forwards admissible sequences in $X$ by the equivalence relation in Definition \ref{defnEquiv}.
We define the function on the set $\hat X$
\[ \begin{array}{l}
\hat d: \hat X \times \hat X \to \mathbb{R}\cup \{\infty\}\\
\hat d([\xi],[\eta]) = \inf \{ c(\xi',\eta') \mid \xi\sim \xi' \text{ and } \eta \sim \eta' \}
\end{array}\]
\end{defn}
\begin{example}
It is possible that for $x, y \in X$, $d(x,y) > \hat d([\bold{x}], [\bold{y}])$.
Let $X = [0,1]$ and suppose $d(x,y) = x-y$ for $x>y$, $d(x,y) = 1$ for $0 \neq x<y$, $d(0,y) = 2$ for all $y>0$, and $d(x,x)=0$. Then $d(0,1) = 2 > \lim_{k \to \infty} d(\frac{1}{k}, 1) = 1 = \hat d( \bold{0}, \bold{1})$. We therefore define the notion of a forwards continuous metric space.
\end{example}
\begin{defn}\label{contDefn}
An asymmetric metric $d$ is \emph{forwards continuous} if for
every forwards admissible sequence
$\{x_m\}_{m=1}^\infty$ that admits a CFL $x$ and any $y \in X$ we have
\begin{align}
d(x, y) & = \lim_{m \to \infty} d( x_m,y ) \quad \text{ and } \quad \label{continuityEq1} \\
d(y,x) & = \lim_{m \to \infty} d(y, x_m ) \label{continuityEq2}
\end{align}
Similarly, it is \emph{backwards continuous} if for every backwards admissible sequence $\{x_m\}_{m=1}^\infty$ that admits a CBL $x$ and any $y \in X$, equations (\ref{continuityEq1}) and (\ref{continuityEq2}) hold.
\end{defn}
\begin{remark}
The triangle inequality automatically implies that $d(y, \cdot)$ is lower semi-continuous and $d(\cdot, y)$ is upper semi-continuous. Explicitly, if
$\displaystyle \lim_{m \to \infty} x_m = x$ then
$d(x_m, y) \leq d(x,y) + \varepsilon$ for large $m$ and
$d(y, x_m) \geq d(y,x) - \varepsilon$ for large $m$.
\end{remark}
\begin{remark}\label{osFrwrdsCont}
We will show that Outer Space $\mathcal{X}_n$ with the Lipschitz metric is both forwards and backwards continuous (see Proposition \ref{conditions}).
\end{remark}
\begin{example}\label{notMinimizing}
Even when $X$ is forwards continuous, the infimum in $\hat d$ may not be realized.
Consider the space $X = \{x^i_n\}_{i,n=1}^\infty \sqcup \{y_m\}_{m=1}^\infty \sqcup \{0\}$ (see Figure \ref{infNotMin}).
We define an asymmetric metric as follows:
\[ d(x^i_n, 0) = \frac{1}{n} \]
and
\[\begin{array}{ll}
d(x^{i}_n, y_m) =
\left\{ \begin{array}{ll}
2 & n \geq m \\
1+ 1/i & n<m
\end{array} \right. ,
&
d(0,y_m) = 2
\end{array} \]
We also have,
\[\begin{array}{ll}
d(x^i_n, x^i_k) =
\left\{ \begin{array}{ll}
1/n-1/k & n < k \\
1 & n>k
\end{array} \right. ,
&
d(0, x^i_k) = 1
\end{array}\]
We define
\[d(y_n, y_m) =
\left\{ \begin{array}{ll}
1/n-1/m & n < m \\
1 & n>m
\end{array} \right.
\]
And the rest of the distances are equal to $1$, e.g. $d(y_m, x^i_k)$, $d(x^i_n, x^j_m)$ for $i \neq j$. One can directly check that the directional triangle inequality holds and therefore, $X$ is an asymmetric metric space.
\begin{figure}
\caption{\label{infNotMin} A schematic picture of the space $X$ of Example \ref{notMinimizing}.}
\end{figure}
It is elementary to check that $X$ is continuous. The sequence $\eta = \{y_n\}$ is admissible as well as $\xi_i = \{x^i_n\}_{n=1}^\infty$ for all $i \geq 1$. Moreover, for each $i$, $0 = \fl{n} x^i_n$. Thus, $\xi_i \in [\bold{0}]$ for all $i$. We have
\[ c(\xi_i, \eta) = \lim_{n \to \infty} \lim_{m \to \infty} d(x_n^i, y_m) = 1 + \frac{1}{i}\]
Therefore $\hat{d}([\bold{0}], [\eta]) \leq 1$. However, if $\xi$ is an admissible sequence whose CFL is $0$ then either it eventually becomes the constant sequence $\bold{0}$ or there exists an $i$ so that it eventually becomes a subsequence of $\{x^i_n\}_{n=1}^\infty$. Thus, $c(\xi, \eta) > 1$ for any $\xi \in [\bold{0}]$. Hence $\hat d$ is not realized by any sequence.
\end{example}
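Because the metric in this example is defined by cases, a direct finite check may reassure the reader. The following Python sketch (an illustration only; the truncation parameters are arbitrary) encodes a finite piece of $X$, verifies the directed triangle inequality on all triples of encoded points, and prints one of the distances $d(x^i_n, y_m) = 1+\frac{1}{i}$ entering the computation of $c(\xi_i,\eta)$.
\begin{verbatim}
# Finite sanity check of the example: encode a truncated piece of X and verify
# the directed triangle inequality d(a,c) <= d(a,b) + d(b,c) on all triples.
from itertools import product

I, N, M = 3, 6, 6                                    # arbitrary truncation parameters
points = ([("o",)]
          + [("x", i, n) for i in range(1, I + 1) for n in range(1, N + 1)]
          + [("y", m) for m in range(1, M + 1)])

def d(p, q):
    if p == q:
        return 0.0
    if p[0] == "x" and q[0] == "o":
        return 1.0 / p[2]                            # d(x^i_n, 0) = 1/n
    if p[0] == "x" and q[0] == "y":
        i, n, m = p[1], p[2], q[1]
        return 2.0 if n >= m else 1.0 + 1.0 / i      # d(x^i_n, y_m)
    if p[0] == "o" and q[0] == "y":
        return 2.0                                   # d(0, y_m) = 2
    if p[0] == "x" and q[0] == "x" and p[1] == q[1]:
        n, k = p[2], q[2]
        return 1.0 / n - 1.0 / k if n < k else 1.0   # d(x^i_n, x^i_k)
    if p[0] == "o" and q[0] == "x":
        return 1.0                                   # d(0, x^i_k) = 1
    if p[0] == "y" and q[0] == "y":
        n, m = p[1], q[1]
        return 1.0 / n - 1.0 / m if n < m else 1.0   # d(y_n, y_m)
    return 1.0                                       # all remaining distances

assert all(d(a, c) <= d(a, b) + d(b, c) + 1e-12
           for a, b, c in product(points, repeat=3))
print(d(("x", 2, 3), ("y", 5)))                      # 1 + 1/2, i.e. the value 1 + 1/i
\end{verbatim}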
\begin{lemma}\label{infIsMin}
If $(X,d)$ is a forwards continuous metric space then
\begin{enumerate}
\item For all $\alpha, \beta \in \hat X$, if $\alpha$ is not realized then the infimum in $\hat d(\alpha, \beta)$ is a minimum, and
\item If both $\alpha, \beta$ are not realized then
\[ \hat d(\alpha,\beta) = c(\xi, \eta) \text{ for all } \xi \in \alpha, \eta \in \beta \]
\item If $\beta$ is realized by $y$ in $X$ then
\[ \hat d(\alpha, \beta) = \inf \{ c(\xi, \bold{y}) \mid \xi \in \alpha\} \]
If $\alpha$ is not realized then for all $\xi \in \alpha$, $\hat d(\alpha,\beta) = c(\xi, \bold{y})$.
\item If both $\alpha, \beta \in \hat X$ are realized by $x,y \in X$ then \[\hat d(\alpha,\beta) = d(x,y).\]
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (3), for any admissible $\xi = \{x_m\} \in \alpha$ and any $\eta = \{y_k\} \in \beta$:
\[ \begin{array}{ll}
\displaystyle c(\xi,\eta) = \lim_{m \to \infty} \lim_{k \to \infty} d(x_m, y_k) & \displaystyle \geq
\lim_{m \to \infty} \lim_{k \to \infty} \left( d(x_m, y) - d(y_k , y) \right) \\
\displaystyle & = \displaystyle
\lim_{m \to \infty} d(x_m,y) \\
\displaystyle & = c(\xi, \bold{y})
\end{array}\]
Moreover, if $\alpha$ is not realized then by Observation \ref{obs2}, for all $\xi,\xi'\in\alpha$, $\xi$ and $\xi'$ are neighbors, so by Proposition \ref{c_well_defined_on_neighbors}, $c(\xi,\bold{y}) = c(\xi',\bold{y})$.
To prove (4), we have that for all $\xi = \{x_n\} \in \alpha$, $\fl{n} x_n = x$ and by continuity, $d(x,y) = \lim_{m \to \infty} d(x_m,y) = c(\xi,\bold{y})$. By (3), $d(x,y) = \hat d( \alpha,\beta)$.
To prove (2), if both $\alpha$, $\beta$ are not realized then for all $\xi,\xi' \in \alpha$ and $\eta, \eta' \in \beta$ we have $c(\xi, \eta) = c(\xi',\eta')$ by Observation \ref{obs2} and by Proposition \ref{c_well_defined_on_neighbors}. Thus, the infimum is taken over a set of one element.
To prove (1), the case where $\beta$ is not realized follows from (2) and if $\beta$ is realized this follows from (3).
\end{proof}
\begin{defn}\label{miniDefn}
Let $\alpha \in \hat X$. If $\xi \in \alpha$ is an admissible sequence in $X$ such that
\begin{equation*}
\hat d(\alpha, \beta) = \inf \{c(\xi, \eta) \mid \eta \in \beta\}
\end{equation*}
for all $\beta \in \hat X$ then $\xi$ is called a good representative of $\alpha$. If each $\alpha \in \hat X$ admits a good representative then we shall say that $X$ is \emph{minimizing}.
\end{defn}
\begin{remark}
Notice that if $\alpha$ is realized by $x$ then $[\bold{x}]$ is not necessarily a good representative (see Example \ref{notMinimizing}).
\end{remark}
\begin{prop}\label{realizationOfDistance}
If $(X,d)$ is forwards continuous and minimizing then for $\xi$ a good representative of $\alpha$,
\[\hat d(\alpha, \beta) = c(\xi, \eta) \text{ for all } \eta \in \beta \]
\end{prop}
\begin{proof}
If $\beta$ is not realized then for all $\eta' \in \beta$,
\[ \hat d(\alpha,\beta) = \inf \{ c(\xi, \eta) \mid \eta \in \beta\} = c(\xi, \eta') \]
the last equality holds since all elements of $\beta$ are neighbors. If $\beta$ is realized by $y$ then by Lemma \ref{infIsMin},
\[ \hat d(\alpha, \beta) = \inf \{ c(\xi', \bold{y}) \mid \xi' \in \alpha \} = c(\xi, \bold{y}) \]
Let $\xi = \{x_n\}$ and $\eta = \{y_m\}$; then by continuity,
\[ c(\xi, \eta) = \lim_{n \to \infty} \lim_{m \to \infty} d(x_n, y_m) = c(\xi, \bold{y}) \qedhere \]
\end{proof}
\begin{comment}
\begin{prop} If $(X,d)$ is a forward continuous asymmetric metric space then every convergent sequence that has a CFL is forwards admissible.
\end{prop}
\begin{proof}
Let $\{x_m \} \subset X$ be a sequence admitting a CFL $x \in X$. For each $\varepsilon>0$ there is an $M(\varepsilon)$ so that for $m>M(\varepsilon)$: $d(x_m,x) < \varepsilon$. Fix $m>M(\varepsilon)$ then $\lim_{k \to \infty} d(x_m, x_k) = d(x_m,x) < \varepsilon$. Therefore, there exists a $K(m,\varepsilon)$ so that for all $k> K(m,\varepsilon)$ we have $d(x_m , x_k) < 2\varepsilon$.
\end{proof}
\end{comment}
\begin{lemma}\label{triDHatLemma}
If $(X,d)$ is forwards continuous and minimizing then the function $\hat d$ satisfies the triangle inequality.
\end{lemma}
\begin{proof}
Let $\alpha,\beta,\gamma \in \hat X$ and let $\xi,\eta,\nu$ be good representatives of these classes respectively. Then,
$\hat d( \alpha , \beta ) = c( \xi,\eta )$,
$\hat d( \alpha , \gamma) = c(\xi , \nu)$ and $\hat d(\gamma ,\beta ) = c(\nu , \eta )$. The triangle inequality of $\hat d$ now follows from Observation (\ref{cTriangleInequality}).
\end{proof}
\begin{defn}\label{defnCompl}
Let $(X,d)$ be a forwards continuous, minimizing, asymmetric metric space. Then $(\hat X, \hat d)$ from Definition \ref{defHat} is an asymmetric metric space called the forwards completion of $(X,d)$.
\end{defn}
\begin{remark}
One way in which $\hat d$ may differ from $d$ is the separation axioms it satisfies.
The function $d$ might satisfy \[ d(x,y) = 0 \implies x=y\] even while $\hat d$ may not.
This in fact occurs in $\hat \mathcal{X}_n$ (see the proof of Proposition \ref{isomExtend}).
\end{remark}
We will need the following technical lemma.
\begin{lemma}\label{convenientSubsequences}
If $\{\alpha_n\}$ is a sequence of admissible sequences so that $c(\alpha_n, \alpha_{n+1}) = c_{n,n+1}$ then for each $n$ there is a subsequence $\alpha_n'$ of $\alpha_n$ with $\alpha_n' = \{x_{n,k}\}_{k=1}^\infty$ so that for all $n$ and $k<k'$ we have
\begin{enumerate}
\item $d(x_{n,k}, x_{n,k'})\leq \frac{1}{2^k}$.
\item $d(x_{n,k}, x_{n+1,k'}) \leq c_{n,n+1} + \frac{1}{2^{nk}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\textbf{Step 1:}
Using Proposition \ref{addConCauchy} we first extract subsequences $\alpha'_n$ of $\alpha_n$ so that property (1) holds. Note that $c(\alpha'_n, \alpha'_j) = c(\alpha_n, \alpha_j)$ for all $n,j$.
\textbf{Step 2:}
Let $\alpha'_n = \{x_{n,k}\}$ be the subsequences extracted in Step 1. We now recursively extract subsequences of $\alpha_n'$ that satisfy Property (2) in the statement of the Lemma. Note that every subsequence of $\alpha_n'$ satisfies Property (1).
Since $c(\alpha'_1, \alpha'_2) = c_{1,2}$, for all $i \geq 1$ there exists a $K(i)$ so that for all $k \geq K(i)$ there exists a $J(k,i)$ such that for all $j \geq J(k,i)$ we have $|d(x_{1,k}, x_{2,j}) - c_{1,2}| < \frac{1}{2^i}$. We take the subsequence $x_{1,i}' = x_{1,K(i)}$ of $\alpha_1'$, and (temporarily) take the subsequence of $\alpha'_2$ with indices $J(K(i), i)$. Thus for all $i$, the new sequences, which we denote by $\{x_{1,i}'\}, \{x_{2,i}'\}$, satisfy \[ |d(x_{1,i}', x_{2,i}')-c_{1,2}|<\frac{1}{2^i}.\]
Notice that this inequality still holds for any subsequence of $x_{2,i}'$.
We now suppose that we have extracted subsequences of $\alpha'_n$ for $n \leq N$ that satisfy property (2) for each $n \leq N-1$. In the induction step we extract a subsequence of $\alpha'_{N}$ and a (temporary) subsequence of $\alpha'_{N+1}$ that satisfy property (2) for $n=N$. \\
Since $c_{N,N+1} = c(\alpha'_N, \alpha'_{N+1})$, then for each $i$ we will let $\varepsilon = \frac{1}{2^{iN}}$ and thus there exists a natural number $K(i)$ so that for all $k>K(i)$ there exists a natural number $J(k,i)$ satisfying for all $j>J(k,i)$,
\[ |d(x_{N,k}, x_{N+1,j})-c_{N,N+1}|< \frac{1}{2^{Ni}}.\]
We take the subsequence $x_{N, K(i)}$ of $\alpha'_{N}$ and the subsequence of $\alpha'_{N+1}$ whose indices are $J(K(i), i)$. We denote the new sequences by $\{x'_{N, i}\}$ and $\{x'_{N+1,i}\}$. Then for all $i$ we have:
$x'_{N,i} = x_{N,K(i)}$ and for $j \geq i$ we have $x'_{N+1, j} = x_{N+1, J(K(j),j)}$ and $J(K(j), j) \geq J(K(i), i)$, thus
\[d(x'_{N, i}, x'_{N+1, j})< \frac{1}{2^{Ni}} \text{ for } j\geq i.\]
The subsequence of $\alpha'_{N+1}$ will be changed to a new subsequence in the next step but this process does not ruin property (2) for $n=N$.
\end{proof}
\begin{definition}
An asymmetric space $(Y,\rho)$ is
\emph{forwards complete} if every forwards admissible
sequence $\{ y_n \}$ in $Y$ has a closest forward limit.
\end{definition}
\begin{lemma}\label{bigLemma}
If $(X,d)$ is a forwards continuous, minimizing, asymmetric metric space then its forward completion $(\hat X, \hat d)$ is forwards complete.
\end{lemma}
\begin{proof}
Let $\{\alpha_n\} \subset \hat X$ be an admissible sequence; we will first show that it has a forwards limit $\alpha \in \hat X$. By Proposition \ref{ssProp} and Proposition \ref{addConCauchy}, we may replace $\{\alpha_n\}$ by a subsequence so that
\begin{equation}\label{eq55} \hat d (\alpha_n,\alpha_m)< \frac{1}{2^n} \text{ for } 1 \leq n < m. \end{equation}
By Proposition \ref{realizationOfDistance} we may choose
good representatives $\xi_n$ of $\alpha_n$ so that for all $n$:
\[ c(\xi_n, \xi_{n+1}) = \hat d(\alpha_n, \alpha_{n+1}) < \frac{1}{2^n}.\]
We can now use Lemma \ref{convenientSubsequences} to extract subsequences $\xi'_n$ of $\xi_n$ so that for $\xi'_n = \{x_{n,k}\}_{k=1}^\infty$ we have for all $n$ and for all $k\leq j$:
\begin{enumerate}
\item $d(x_{n,k}, x_{n,j}) < \frac{1}{2^k}$, and
\item $d(x_{n,k}, x_{n+1,j})< \frac{1}{2^n} + \frac{1}{2^{nk}}$
\end{enumerate}
We now let $\xi = \{x_{n,n}\}$ and note that it is a Cauchy sequence. We also see that for $k>n$ we have:
\[ d(x_{n,k}, x_{k,k}) \leq \sum_{j=n}^{k-1} d(x_{j,k}, x_{j+1, k}) < \sum_{j=n}^{k-1} \left( \frac{1}{2^j} + \frac{1}{2^{jk}} \right) \leq \frac{1}{2^{n-2}}\]
We therefore see that $\lim_{n \to \infty} c(\xi_n, \xi) =0$ hence denoting $\alpha = [\xi]$ we have that $\alpha$ is a forward limit of $\alpha_n$.
We claim that it is the closest forward limit. Indeed, let $\zeta = \{z_m\}$ be such that $[\zeta]$ is a forward limit of $\alpha_n$. The same sequences
$\{\xi_n\}$ satisfy $c(\xi_n,\zeta) = \hat d(\alpha_n, [\zeta]) \to 0$ as $n \to \infty$. Thus, for all $\varepsilon$ there exists an $N(\varepsilon)$ so that for all $n>N(\varepsilon)$ there is an $M(n, \varepsilon)$ such that for $m\geq M(n,\varepsilon)$ there is a $K(m,n,\varepsilon)$ satisfying
\[ d(x_{n,m}, z_k)< \varepsilon \text{ for } k>K(m,n,\varepsilon) \]
This implies for all $n>N(\varepsilon)$ and $k> K(M(n,\varepsilon),n, \varepsilon)$ that
\[ d(x_{n,n}, z_k) \leq d(x_{n,n}, x_{n,M(n,\varepsilon)}) + d(x_{n,M(n, \varepsilon)}, z_k) \leq \frac{1}{2^{n-1}}+\varepsilon \]
Hence $c(\xi, \zeta) = 0$. This implies that $\alpha = [\xi]$ is the CFL of $\{\alpha_n\}$.
\begin{comment}
We now wish to show that if $\mu = \{x_{n,k(n)}\}$ is such that $k(n)$ is a function limiting to $\infty$ then $\xi$ and $\mu$ are neighbors.
Indeed, for $n$ let $n'>n$ so that $k(n') >n$ then
\[ d(x_{n,n}, x_{n',k(n')}) \leq d(x_{n,n}, x_{n, k(n')})+ d(x_{n, k(n')}, x_{n',k(n')} \leq \frac{2}{2^n}+ \frac{2}{2^n}.\]
Hence $c(\xi, \mu) = 0$. On the other hand let $j = \max\{n, k(n)\}$ and $i = \min\{n,k(n)\}$ then a similar application of the triangle inequality implies,
\[ d(x_{n, k(n)}, x_{j,j}) \leq d(x_{n,k(n)}, x_{n,j}) + d(x_{n,j}, x_{j,j}) \leq \frac{4}{2^i}.\] Hence $c(\mu, \xi) = 0$.
\end{comment}
\end{proof}
\begin{prop}\label{hatXsatisfies11}
Under the assumptions of Lemma \ref{bigLemma}, $\hat X$ need not be forwards continuous, but it does satisfy Equation (\ref{continuityEq1}) of the definition of forwards continuous.
\end{prop}
\begin{proof}
The example in Remark \ref{remarkBadExample} shows that $\hat X$ need not be forwards continuous. We will show that it satisfies Equation (\ref{continuityEq1}). Let $\alpha \in \hat X$ be the CFL of the admissible sequence $\{\alpha_n\} \subset \hat X$.
Let $\beta \in \hat X$ be any point, we wish to show that $\hat d(\alpha, \beta) = \lim_{n \to \infty} \hat d( \alpha_n, \beta)$.
As before we find sequences $\xi_n = \{x_{n,k}\}, \eta = \{y_m\} \subset X$ so that $\eta$ is a good representative of $\beta$ and $\xi_n$ are good representatives of $\alpha_n$. We extract subsequences satisfying items (1)+(2) as before, and denote by $\xi = \{x_{n,n}\}$.
Then since $\alpha$ and $[\xi]$ are both CFLs, $\xi \in \alpha$. For $\varepsilon>0$ let $N(\varepsilon)$ be so that for all $n>N(\varepsilon)$, $c(\xi_n, \eta) = \hat d(\alpha_n, \beta)$ is within $\varepsilon$ of $L = \lim_{n \to \infty} \hat d(\alpha_n, \beta)$. Let $K(n,\varepsilon)$ and for $k>K(n,\varepsilon)$ let $M(n,k,\varepsilon)$ be as in Definition \ref{dist_limit}, then take $k= \max\{n,K(n,\varepsilon)\}$ and $m > M(n,k,\varepsilon)$ then
\[ d(x_{n,n}, y_m) \leq d(x_{n,n}, x_{n,k}) + d(x_{n,k}, y_m) \leq \frac{1}{2^n} + c(\xi_n,\eta) + \varepsilon \leq L + 3 \varepsilon\] for large $n$.
This implies $\hat d(\alpha, \beta) \leq c(\xi, \eta) \leq L$.
Moreover, $\hat d(\alpha_n, \beta) \leq \hat d(\alpha_n , \alpha) + \hat d(\alpha, \beta)$ implies $L \leq \hat d(\alpha,\beta)$ which gives the equality we need.
\end{proof}
\begin{comment}
We would like to know that in certain cases, the forward completion is also backwards complete.
\begin{prop}
Let $(X,d)$ be a continuous asymmetric metric space. Suppose that the following implication holds for all sequences $\{x_k\}_{k=1}^\infty \subset X$ and points $x \in X$
\begin{equation}\label{eq323}
\lim_{k \to \infty}d(x, x_k)=0 \implies \lim_{k \to \infty}d(x_k, x)=0
\end{equation}
Then the forwards completion $\hat X$ is also backwards complete.
\end{prop}
\begin{proof}
Let $\boldsymbol{\xi} = \{\xi_i\} \subset \hat X$ be a backwards admissible sequence of forwards admissible sequences.
By Proposition \ref{ssProp} and Proposition \ref{addConCauchy} for backwards admissible sequences, we may pass to a
subsequence, so that for every $n$, and for all $k>n>N$,
\begin{equation}\label{eq31} \hat d(\xi_k, \xi_n) < \frac{1}{2^n} \end{equation}
We may assume that the sequences $\xi_i = \{ x_{ij} \}_{j=1}^\infty$ have been replaced with equivalent sequences so that for every $i$ and for all $j>m$:
\begin{equation} d( x_{im}, x_{ij}) < \min \left\{\frac{1}{2^i},\frac{1}{2^m} \right\} \end{equation}
By equation (\ref{eq31}),
for every $n$ there exists an $M(n)$ so that for all $m>M(n)$, there exists a $J(m,n)$ so that for all $j>J(m,n)$
\begin{equation} d(x_{n+1,m}, x_{n,j})< \frac{1}{2^{n-1}} \end{equation}
We explain how to replace the sequences $\xi_{n}$ by equivalent sequences so that for all $m$:
\begin{equation} d(x_{n+1,m}, x_{n,m})< \frac{1}{2^{n-1}} \end{equation}
By induction, truncate
$\xi_{n+1}$ by $M(n)$ elements, and replace the sequence $\xi_n$ with the subsequence $\{ x_{n,J(m,n)} \}_{m=M(n)}^\infty$. \\
Since $X$ is backwards complete, for every $j$ there is a backwards closest limit $x_{\infty j}$ of the sequence $\{x_{nj}\}_{n=1}^\infty$. We claim that the sequence $\xi_\infty = \{ x_{\infty j} \}_{j=1}^\infty$ is forwards admissible. For $j<k$, let $N$ be large enough so that for all $n>N$ we have
\begin{equation}\label{eq322}
d(x_{\infty,j} , x_{n,j}) < \varepsilon
\quad \mbox{and} \quad d(x_{\infty,k} , x_{n,k}) < \varepsilon
\end{equation}
Then
\begin{equation}\label{eq321}
d(x_{\infty,j} , x_{\infty,k}) < d(x_{\infty,j} , x_{n,j}) + d(x_{n,j}, x_{n,k}) + d(x_{n,k} , x_{\infty,k})
\end{equation}
The last term in equation (\ref{eq321}) is small because of equation (\ref{eq322}) and
implication (\ref{eq323}). Therefore $d(x_{\infty,j} , x_{\infty,k})$ is arbitrarily
small, hence $\xi_\infty$ is forwards Cauchy.
We now show that $\xi_\infty$ is a backward limit of $\{\xi_n\}_{n=1}^\infty$,
\begin{align*}
\lim_{n \to \infty} \hat d([\xi_\infty], [\xi_n]) & =
\lim_{n \to \infty} \lim_{m \to \infty} \lim_{k \to \infty} d(x_{\infty,m}, x_{n,k}) \\ & \leq
\lim_{n \to \infty} \lim_{m \to \infty} \lim_{k \to \infty} d(x_{\infty,m}, x_{n,m}) + d(x_{n,m}, x_{n,k}) = 0
\end{align*}
It is the closest since if $\eta = \{y_m\}$ is another backwards limit then,
\begin{align*}
\hat d([\eta],[\xi_{\infty}]) & \leq
\lim_{m \to \infty} \lim_{k \to \infty} d(y_m, x_{\infty,k}) \\
& \leq \lim_{m \to \infty} \lim_{k \to \infty} d(y_m, x_{n,k}) + d(x_{n,k}, x_{\infty,k})
\end{align*}
For large $n$ the first term is small because $\eta$ is a backward limit of $\{\xi_n\}$ we have $\lim_{n \to \infty} \lim_{m \to \infty} \lim_{k \to \infty} d(y_m, x_{n,k}) =0$.
As for the second term,
we know that for each $k$, $\lim_{n \to \infty} d(x_{\infty,k}, x_{n,k}) =0$ thus by implication (\ref{eq323}),
we have $\lim_{n \to \infty} d(x_{n,k}, x_{\infty,k}) = 0$ for each $k$.
We can extract a subsequence of $\xi_n$ so that for every $k$, $ d(x_{n,k}, x_{\infty,k}) \leq \frac{1}{n}$. Thus the second term will also be small for large $n$. Thus $\hat d([\eta],[\xi_{\infty}])=0$.
\end{proof}
\end{comment}
\begin{defn}\label{denseDefn}
If $X \subset Y$ and for all $y \in Y$, there exists a forwards admissible sequence $\{y_n\} \subset X$ such that $y = \fl{n} y_n$ then we will say that $X$ is forward dense in $Y$.
\end{defn}
\begin{lemma}\label{embedding}
The map $\iota: X \hookrightarrow \hat X$ defined by $\iota(x) = [\bold x]$ is an isometric embedding and its image $\iota(X)$ is forward dense.
\end{lemma}
\begin{proof}
The fact that $\iota$ is an isometric embedding follows from Lemma \ref{infIsMin}(4). Let $\alpha \in \hat X$ then there exists a forwards admissible sequence $\xi \subset X$ such that $\alpha = [\xi]$. If $\xi = \{x_n\}_{n=1}^\infty$ then for $\varepsilon>0$ since $\xi$ is admissible, let $n>N_{\ref{add_seq_defn}}(\varepsilon)$ and $k>K_{\ref{add_seq_defn}}(n,\varepsilon)$
\[ c(\bold{x_n}, \xi) = \lim_{k \to \infty} d(x_n, x_k) <\varepsilon. \]
Thus, $\hat d(\iota(x_n), \alpha)< \varepsilon$. Hence $\alpha$ is a forward limit of the sequence $\{ \iota(x_n)\}$. Moreover, suppose $\beta \in \hat X$ is such that $\lim_{n \to \infty} \hat d(\iota(x_n), \beta) = 0$. Then there exists an admissible sequence in $X$, $\eta = \{y_m\} \in \beta$, so that for all $\varepsilon$ there exists an $M(\varepsilon)$ so that for all $n>M(\varepsilon)$, $c(\bold{x_n},\eta)<\varepsilon$. Thus, there exists a $J(n,\varepsilon)$ so that
$d(x_n,y_j)<\varepsilon$ for all $j>J(n,\varepsilon)$. But this exactly means that $c(\xi, \eta)<\varepsilon$ and hence $\hat d(\alpha,\beta) <\varepsilon$. Thus $\alpha$ is the closest forward limit of $\{\iota(x_n)\}$.
\end{proof}
The space $\hat X$ will not necessarily be forwards continuous (Definition \ref{contDefn}) as we shall later see in Remark \ref{remarkBadExample}.
However, we need some continuity property of $\hat X$ to hold in order to extend an isometry of $X$ to $\hat X$.
We weaken the notion of forwards continuous to continuity with respect to a subspace as follows.
\begin{defn}\label{semiForwardsContinuous}
Suppose $(Y,d)$ is an asymmetric metric space and $X \subset Y$. Suppose that for all $a,b \in Y$ and all forwards admissible sequences $\{a_n\}, \{b_n\} \subset X$ such that $\fl{n}a_n = a$ and $\fl{n}b_n=b$ we have,
\[ d(a,b) = \lim_{n \to \infty} \lim_{k \to \infty} d(a_n, b_k) = c(\{a_n\}, \{b_n\}) \]
then we say that $Y$ is semi forwards continuous with respect to $X$.
\end{defn}
For $Y = \hat X$ we will say that $\hat X$ is semi forwards continuous with respect to $X$ when we mean with respect to $\text{Im}(\iota)$.
\begin{remark}\label{remarkBadExample}
$Y$ may be forwards continuous and minimizing but still $\hat Y$ may NOT be semi-forwards continuous with respect to $Y$. Indeed consider the space $X$ of Example \ref{notMinimizing}. Let $Y = X \coprod \{x^0_n\}_{n = 1}^\infty$ and extend the distance:
\[ d(x^0_n, x^0_m) = \left\{ \begin{array}{ll}
\frac{1}{n}-\frac{1}{m} & n<m \\
1 & n>m
\end{array} \right. , \quad d(x^0_n,0) = \frac{1}{n} , \quad
d(x^0_n, y_m) = \left\{ \begin{array}{ll}
1 & n<m \\
2 & n>m
\end{array} \right.
\]
and the unmentioned new distances are equal to $1$.
Then $Y$ is forwards continuous (since $X$ was) but now it is also minimizing since for $\eta = \{y_m\}$ there exists $\xi = \{x^0_n\}_{n=1}^{\infty} \in [\bold{0}]$ such that
\[ \hat d([\bold{0}], [\eta]) = c(\xi,\eta) = 1.\]
However, $\hat Y$ is not semi forwards continuous with respect to $Y$ since for $\bold{0}$, $c(\bold{0}, \eta) = 2 > c(\xi, \eta) = 1$. So we have a sequence $a_n = [\bold{0}] \in \text{Im}(\iota)$ and $b_m = [\bold{y_m}] \in \text{Im}(\iota)$ with $\fl{n} a_n = [\bold{0}]$ and $\fl{m}b_m = [\eta]$ so that
\[ 2 = \lim_{n \to \infty} \lim_{m \to \infty} \hat d(a_n, b_m) \neq \hat d([\bold{0}],[\eta]) =1\] contradicting the semi forwards continuous property.
\end{remark}
It is not so clear how to check that $X$ is minimizing or that $\hat X$ is semi-forwards continuous with respect to $X$. The following Lemma gives a sufficient condition for both.
\begin{lemma}\label{keyLemma}
Suppose that $(X,d)$ is an asymmetric metric space so that:
\begin{equation}\label{star}
\begin{array}{l}
\text{If } \{x_n\}_{n =1}^\infty \subset X \text{ is a forwards admissible sequence that has a} \\
\text{CFL } x \in X
\text{ then } \{x_n\} \text{ is also backwards admissible and } x \text{ is }\\ \text{its CBL.}
\end{array}\tag{*}
\end{equation}
If $X$ is forwards continuous then $X$ is minimizing and $\hat X$ is semi forwards continuous with respect to $X$.
\end{lemma}
\begin{proof}
First notice that if $\xi, \xi'$ are forwards admissible sequences with the same CFL $x$ then they are neighbors. Indeed, since $\lim_{k \to \infty} d(x_k',x) = 0$, property (\ref{star}) gives $\lim_{k \to \infty} d(x,x'_k) = 0$. Therefore, $d(x_n, x'_k) \leq d(x_n, x)+ d(x, x'_k)$ limits to $0$ as $n,k \to \infty$.
Now, for every $\alpha \in \hat X$, and for every $\xi,\xi' \in \alpha$, $\xi$ and $\xi'$ are neighbors (whether or not $\alpha$ is realized). This implies by Lemma \ref{c_well_defined_on_neighbors} that
\[ \hat d(\alpha,\beta) = c(\xi,\eta) \quad \text{ for every } \xi\in \alpha, \eta\in\beta. \]
In particular $X$ is minimizing.
To show that $\hat X$ is semi forwards continuous with respect to $X$, let $\alpha,\beta \in \hat X$ and $\{\alpha_n\}, \{\beta_k\} \subset \text{Im}(\iota)$ forwards admissible sequences with $\alpha=\fl{n} \alpha_n$ and $\beta = \fl{n}\beta_n$.
We have to show that $\hat d(\alpha, \beta) = \lim_{n \to \infty} \lim_{k \to \infty} \hat d(\alpha_n,\beta_k)$.
There exist $a_n, b_n \in X$ such that $\iota(a_n) = \alpha_n$ and $\iota(b_n) = \beta_n$.
Note that $\hat d(\alpha_n,\beta_k) = c(\bold{a_n}, \bold{b_k}) = d(a_n,b_k)$.
Moreover, $\fl{n} \alpha_n = \alpha$ implies that $\{a_n\}_{n=1}^\infty \in \alpha$ and similarly, $\{b_n\}_{n=1}^\infty \in \beta$.
Therefore,
\[ \lim_{n \to \infty} \lim_{k \to \infty} \hat d(\alpha_n,\beta_k) =
\lim_{n \to \infty} \lim_{k \to \infty} d(a_n,b_k) = c(\{a_n\}, \{b_n\}) = \hat d(\alpha,\beta).\qedhere \]
\end{proof}
\begin{lemma}\label{connection}
If $X$ is forwards continuous, and $\hat X$ is semi-forwards continuous with respect to $X$ then $X$ is minimizing.
\end{lemma}
\begin{proof}
We remark that we will not use the directed triangle inequality for $\hat X$, for which we assumed that $X$ was minimizing.
By Lemma \ref{infIsMin} we know that $\hat d(\alpha,\beta)$ is a minimum for all $\alpha \in \hat X$ that are not realized. Now assume $\alpha$ is realized by $x$.
We need to show that for all $\beta \in \hat X$, $\hat d(\alpha,\beta) = \min \{ c(\xi,\eta) \mid \xi \in \alpha, \eta \in \beta \}$.
Let $\xi \in \alpha$ then $\xi = \{x_n\} \subset X$ and $\fl{n} x_n = x$ and let $\eta = \{y_n\} \subset \beta$. Then $\alpha_n = \iota(x_n) \in \text{Im}(\iota)$ satisfies $\fl{n} \alpha_n =\alpha$, and similarly for $\beta_n = \iota(y_n) \in \text{Im}(\iota)$, $\fl{n} \beta_n = \beta$. Note that by Lemma \ref{infIsMin}, $\hat d(\alpha_n, \beta_k) = c(\bold{x_n}, \bold{y_k}) = d(x_n, y_k)$. By semi forwards continuity,
\[ \hat d(\alpha, \beta) = \lim_{n \to \infty} \lim_{k \to \infty} \hat d(\alpha_n, \beta_k) = \lim_{n \to \infty} \lim_{k \to \infty} d(x_n, y_k) = c(\xi,\eta)\]
Thus $\xi$ and $\eta$ realize the distance for every $\xi \in \alpha$ and $\eta \in \beta$.
\end{proof}
We now prove that we can extend an isometry of $X$ to $\hat X$.
\begin{theorem}\label{extIsom}
Let $X$ be a forwards continuous asymmetric metric space and suppose $\hat X$ is semi forwards continuous with respect to $X$.
Suppose $(Y,\rho)$ is a forwards complete asymmetric metric space and let
\[ f:(X,d) \hookrightarrow (Y,\rho)\] be an isometric embedding so that $Y$ is semi forwards continuous with respect to $\text{Im}(f)$. Then there exists a
lift of $f$ to an isometric embedding $F:(\hat X, \hat d) \hookrightarrow (Y, \rho)$ so that $f = F \circ \iota$.
\end{theorem}
\begin{proof}
By Lemma \ref{connection}, $X$ is minimizing.
Let $\alpha \in \hat X$ and let $\xi = \{x_k\} \subset X$ be an admissible sequence so that $\lim_{k \to \infty} \iota(x_k) = \alpha$.
The sequence $\{f(x_k)\}$ is admissible in $Y$ and $Y$ is forwards complete, hence it has a CFL $y$. We define $F(\alpha) = y$.
We now check that $F$ is well defined: If $\alpha \in \hat X - \iota(X)$ then for all $\xi, \xi' \subset X$ such that the CFL of both $\iota(\xi), \iota(\xi')$ is $\alpha$, we have $\xi,\xi' \in \alpha$. Since $\alpha$ is not realized, $\xi,\xi'$ are neighbors. Therefore, $f(\xi), f(\xi')$ are neighbors and therefore they have the same CFL in $Y$. Now for the other case, if $\alpha \in \iota(X)$ then let $\alpha = \iota(x)$ and let $\xi = \{x_n\}$ and $\xi' = \{x_n'\}$ be sequences in $X$ so that $\fl{n} x_n = \fl{n} x_n' = x$. Denote by $y,y'$ the CFLs of $f(\xi),f(\xi')$. Since $Y$ is semi forwards
continuous with respect to $\text{Im}(f)$, then $\rho(y,y') = \lim_{n \to \infty} \lim_{k \to \infty} \rho( f(x_n), f(x_k'))$. Since $f$ is an isometry, $\rho(f(x_n), f(x_k')) = d(x_n, x_k')$. Therefore $\rho(y,y') = c(\xi,\xi')$. Since $X$ is forwards continuous we have $c(\xi,\xi') = \lim_{n \to \infty} \lim_{k \to \infty} d(x_n, x_k') = \lim_{n \to \infty} d(x_n, x) = 0$. Thus, $\rho(y,y') = 0$ and similarly $\rho(y',y) = 0$. This concludes the argument that $F$ is well defined.
Given $\alpha,\beta \in \hat X$ let $\xi = \{x_n\} \in \alpha$ and $\eta = \{x'_n\} \in \beta$ then in $\hat X$ we have $\hat d(\alpha,\beta) = c(\xi, \eta)$ since $\hat X$ is semi forwards continuous with respect to $X$. We denote by $F(\alpha)$ the CFL of $f(\xi)$ and by $F(\beta)$ the CFL of $f(\eta)$.
Since $Y$ is semi forwards continuous with respect to $\text{Im} f$ we have that $\rho(F(\alpha), F(\beta)) = \lim_{n \to \infty} \lim_{k \to \infty} \rho(f(x_n), f(x'_k))$. Since $f$ is an isometric embedding,
\[ \hat d(\alpha, \beta) = \lim_{n \to \infty} \lim_{k \to \infty} d(x_n, x'_k) = \lim_{n \to \infty} \lim_{k \to \infty} \rho(f(x_n), f(x'_k)) = \rho(F(\alpha), F(\beta)) \]
Hence $F$ is an isometric embedding.
\end{proof}
\begin{example}
This is an example where the extension $F$ of $f$ is not unique.
Let $X = (0,1]$ with the asymmetric distance from Example \ref{NonSymEx}. Let $Y = X \cup \{0\} \cup\{0'\}$ with $d(t,0) = t = d(t,0')$ for all $t \in X$, and $d(0,t) = d(0',t) = 1$ and $d(0,0') = 0$ and $d(0', 0) =1$. Then $0$ is the CFL of $\{ \frac{1}{n}\}$ and $0'$ is a forward limit of $\{\frac{1}{n}\}$. The inclusion $f \colon X \to Y$ is an isometric embedding. $X$ is forwards continuous and minimizing, and $Y$ is semi forwards continuous with respect to $X$. However, $f$ has two extensions to $\hat X$: one can define $F([\{\frac{1}{n}\}])$ to be either $0$ or $0'$.
\end{example}
\begin{prop}\label{uniqueness}
Under the hypothesis of Theorem \ref{extIsom}, if $\text{Im}(f)$ is forwards dense in $Y$ then $F$ is unique.
\end{prop}
\begin{proof}
Let $F \colon \hat X \to Y$ be an isometric embedding that extends $f$. Let $\alpha \in \hat X$, and $\xi = \{x_n\} \subset X$ a forwards admissible sequence such that $\xi \in \alpha$. Let $y$ be the CFL of $f(\xi)$. Then since $F$ is an isometric embedding, we have,
\[ \begin{array}{l}
\displaystyle \lim_{n \to \infty} \rho(f(x_n), F(\alpha)) =
\lim_{n \to \infty} \rho(F \circ \iota(x_n), F(\alpha)) = \\
\displaystyle \lim_{n \to \infty} \hat d(\iota(x_n), \alpha) =
\lim_{n \to \infty} \lim_{k \to \infty} d(x_n, x_k) = 0.
\end{array}\] Because $y$ is the CFL of $\{f(x_n)\}_{n=1}^\infty$ by the definition of CFL we have $\rho(y,F(\alpha))=0$.
Conversely, since $\text{Im}(f)$ is forwards dense in $Y$, $F(\alpha)$ is a CFL of some forwards admissible sequence in $\text{Im}(f)$. Thus there exists a forwards admissible $\zeta = \{z_n\} \subset X$ so that $F(\alpha)$ is the CFL of $f(\zeta)$. Since $Y$ is semi forwards continuous with respect to $\text{Im}(f)$ and $F(\alpha) = \fl{n} f(z_n)$ and $y = \fl{n} f(x_n)$ then we have
\begin{equation}\label{eq444}
\rho(F(\alpha),y) = \lim_{n \to \infty} \lim_{k \to \infty} \rho( f(z_n), f(x_k)) =
\lim_{n \to \infty} \lim_{k \to \infty} d( z_n, x_k)
\end{equation}
Since $F(\alpha)$ is the CFL of $f(\zeta)$,
$\lim_{n \to \infty} \rho(f(z_n), F(\alpha)) = 0$. Therefore, since $F$ is an isometric embedding extending $f$, $\lim_{n \to \infty} \hat d(\iota(z_n), \alpha) = 0$.
Since $\hat X$ is semi forwards continuous with respect to $X$, we have
$\lim_{n \to \infty} \lim_{k \to \infty} d(z_n, x_k) =0$. Along with equation \ref{eq444} we get $\rho(F(\alpha),y) = 0$.
\end{proof}
\begin{cor}\label{isomX}
For any forwards continuous asymmetric space $(X,d)$ which satisfies property (\ref{star}),
each isometry $f \colon X \to X$ induces a unique isometry $F \colon \hat X \to \hat X$.
\end{cor}
\begin{proof}
Since $X$ satisfies property (\ref{star}) and is forwards continuous we have that $X$ is minimizing and $\hat X$ is semi-forwards continuous with respect to $X$ (Lemma \ref{keyLemma}). By Lemma \ref{embedding}, $\iota(X)$ is forwards dense in $\hat X$. An isometry $f \colon X \to X$ induces an isometric embedding $\iota \circ f \colon X \to \hat X$ and by Theorem \ref{extIsom}, $\iota \circ f$ extends to an isometric embedding $F \colon \hat X \to \hat X$. Since $\iota(X)$ is forwards dense, $F$ is an isometry. Moreover, by Proposition \ref{uniqueness}, $F$ is unique.
\end{proof}
We finally prove our main theorem.
\begin{proof}[Proof of Theorem \ref{ThmCompl}]
The existence part of the theorem follows from Lemma \ref{keyLemma}, Definition \ref{defnCompl}, Lemma \ref{bigLemma} and Lemma \ref{embedding}.
Now suppose that $(Y,\rho)$ is a forwards complete asymmetric metric space and $f \colon X \to Y$ is an isometric embedding so that $Y$ is semi forwards continuous with respect to $\text{Im}(f)$ and $\text{Im}(f)$ is forwards dense in $Y$. Then by Theorem \ref{extIsom} $f$ extends to an isometric embedding $F \colon \hat X \to Y$ and by density $F$ is an isometry. Thus $(Y,\rho)$ is isometric to $(\hat X, \hat d)$.
\end{proof}
\section{Background on Outer Space} \label{secOuterSpace}
In this section we review the graph and tree approaches to Outer Space. We then formulate a lifting proposition that allows us to lift an affine difference in marking of graphs to an affine equivariant map of $F_n$-trees with specified data, see Proposition \ref{prop_lift}. We end the section with a review of the boundary of Outer Space.
\subsection{Outer Space in terms of marked graphs}
Fix a basis $\{ x_1, \dots x_n \}$ of $F_n$. When we talk of reduced words we will mean relative to this basis.
\begin{defn}[The rose]
Let $R$ be the wedge of $n$ circles and denote the vertex of $R$ by $*$. We identify $x_i$ with the positively oriented edges of $R$. This gives us an identification of $\pi_1(R,*)$ with $F_n$ that we will suppress from now on.
\end{defn}
\begin{defn}[Outer Space]
A point in outer space is an equivalence class of a triple $x = (G,\tau,\ell)$ where $G$ is a graph (a finite 1 dimensional cell complex), $\tau \colon R \to G$ and $\ell \colon E(G) \to (0,1)$ are maps, and $(G,\tau,\ell)$ satisfy:
\begin{enumerate}
\item the valence of every vertex $v \in V(G)$ is greater than $2$.
\item $\tau$ is a homotopy equivalence.
\item $\sum_{e \in E(G)} \ell(e) = 1$.
\end{enumerate}
The equivalence relation is given by: $(G,\tau,\ell) \sim (G',\tau',\ell')$ if there is an isometry $f \colon (G,\ell) \to (G',\ell')$ such that $f \circ \tau$ is freely homotopic to $\tau'$. We shall sometimes abuse notation and use the triple notation $(G,\tau,\ell)$ to mean its equivalence class.
\end{defn}
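For example, taking the marking to be the identity and writing $\ell_0$ for the constant length function, the triple
\[ \left( R, \mathrm{id}_R, \ell_0 \right), \qquad \ell_0(e_i) = \tfrac{1}{n} \ \text{ for } i = 1, \dots, n, \]
satisfies (1)--(3) and hence defines a point of $\mathcal{X}_n$; any other assignment of positive edge lengths summing to $1$ gives a point in the same simplex (see Definition \ref{simplTopo} below).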
We will always identify words in $F_n$ with edge paths in $R$, note that reduced words are identified with immersed paths in $R$. Using this identification, an automorphism $\phi:F_n \to F_n$ can be viewed as an affine map $\phi: R \to R$.
\begin{defn}[$\textup{Out}(F_n)$ action]
There is a right $\text{Aut}(F_n)$ action on the set of metric marked graphs given by: $x \cdot \phi = (G, \tau \circ \phi, \ell)$.
This action is constant on equivalence classes, and inner automorphisms act trivially. Therefore, this action descends to an $\textup{Out}(F_n)$ action on $\mathcal{X}_n$.
\end{defn}
\subsection{Outer Space in terms of tree actions}
An equivalent description of Outer Space is given in terms of minimal free simplicial metric $F_n$-trees.
\begin{defn}[Tree definition of $\mathcal{X}_n$]
Outer Space $\mathcal{X}_n$ is the set of equivalence classes of pairs $(X,\rho)$ where $X$ is a metric tree, and $\rho: F_n \to \text{Isom}(X)$ is a homomorphism and the following conditions are satisfied:
\begin{enumerate}
\item The action is free - if $\rho(g)(p) = p$ for $p \in X$ and $g \in F_n$ then $g = 1$.
\item $X$ is simplicial - for any $1 \neq g \in F_n$, the translation length
\[ l(\rho(g),X) := \inf \{ d(x, \rho(g) x) \mid x \in X\}\]
is bounded away from zero by a global constant independent of $g$.
\item The action is minimal - no proper subtree of $X$ is invariant under the group $\rho(F_n)$.
\item The action is normalized to have unit volume - $X/\rho(F_n)$ is a finite graph whose sum of edge lengths is $1$.
\end{enumerate}
The equivalence relation on the collection of $F_n$ tree actions is defined as follows: $(X,\rho) \sim (Y, \mu)$ if there is
an isometry $f \colon X \to Y$ with $f^{-1} \circ \mu(g)\circ f (x) = \rho(g)(x)$ for all $g \in F_n$ and $x \in X$. In this case
$(X,\rho)$, $(Y,\mu)$ are called isometrically conjugate.
\end{defn}
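For example, the $F_n$-tree corresponding to the rose with all petals of length $\tfrac{1}{n}$ (its universal cover, as explained below) is the Cayley tree of $F_n$ with respect to $\{x_1, \dots, x_n\}$, with every edge of length $\tfrac{1}{n}$ and $F_n$ acting by left translations; conditions (1)--(4) are immediate for this action, and the quotient is again the rose with $n$ petals of length $\tfrac{1}{n}$.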
\begin{remark}\label{graphQuotient}
The first three items imply that $X/\rho(F_n)$ is a finite metric graph. Indeed, by (1) and (2) the
action is properly discontinuous therefore $p \colon X \to X/\rho(F_n)$ is a covering map.
Since the action is free, $\pi_1(X/\rho(F_n),*) \cong F_n$ and because of (3), there are no valence $1$ vertices in $X/\rho(F_n)$.
\end{remark}
There is an action of $\text{Aut}(F_n)$ on simplicial minimal metric $F_n$-trees
given by
\[(X, \rho) \cdot \phi = (X, \rho \circ \phi)\]
Clearly, the action is
constant on the equivalence classes.
To see that inner automorphisms act trivially, assume $\phi =
i_g$ and take $f = \rho(g^{-1}) \colon X \to X$; then $f$ is an isometry such that
\begin{align*}
f^{-1} \circ \rho(h) \circ f(x) &= \rho(g) \circ \rho(h)(\rho(g^{-1})(x)) \\
& = \rho(ghg^{-1})(x) \\
& = [\rho \circ i_g](h) (x)
\end{align*}
Therefore $\rho$ is isometrically conjugate to $\rho \circ i_g$ and hence the
action descends to an action of $\textup{Out}(F_n)$ on the isometry classes of trees.
\subsection{An Equivalence of the Categories}
\textbf{From graphs to trees.} The operation of lifting gives us a way of converting a marked graph to an $F_n$-tree. Let
$(G, \tau, \ell)$ be a metric marked graph, by choosing a point $w$ in the fiber of $\tau(*)$ we obtain
an action of $\pi_1(G,\tau(*))$ by deck transformations on the universal cover of $G$, $\widetilde G$.
By precomposing with $\tau_*$ we obtain a homomorphism $\rho^G_w:F_n \to \text{Isom}(\widetilde G)$. A
different choice of a point $z \in p^{-1}(\tau(*))$, where $p\colon \widetilde G \to G$ is the covering map, results in $\rho^G_z$ that is isometrically conjugate to $\rho^G_w$.
A marking $\tau'$ homotopic to $\tau$
would produce a homomorphism $\rho'$ isometrically conjugate to $\rho$.
\textbf{From trees to graphs.} There is also an inverse operation, namely,
taking the quotient of a simplicial metric tree $T$ by a free and simplicial $F_n$ action, see Remark \ref{graphQuotient}. The quotient is a finite metric graph with a marking $\tau$ determined by a basepoint $q \in T$. A change of basepoint would lead to a marking $\tau'$ homotopic to $\tau$. Moreover, the tree is the universal cover of the quotient. Therefore, the operations of lifting and taking quotients are inverses of each other.
Let $x=(G,\tau, \ell), y=(H,\mu,\ell')$ be two points in $\mathcal{X}_n$. A \emph{difference in markings} is a
map $f:G \to H$ such that $ f \circ \tau $ is homotopic to $\mu$. Thus, $y$ is equal to the equivalence class of $(H, f \circ \tau, \ell')$ and this will be our preferred representative of the equivalence class.
\begin{prop}\label{liftingAmap}
Let $f \colon G \to H$ be a graph map and
let $( \widetilde G,p),(\widetilde H,p')$ be the universal covers of $G$ and $H$.
Given a choice of
basepoints $w \in p^{-1}(\tau(*))$
and $z \in p'^{-1}(f \circ \tau(*))$
there is a unique lift $\widetilde{f_{wz}}: \widetilde G \to \widetilde H$ of $f \circ p:\widetilde G \to H$ with $\widetilde{f_{wz}}(w) = z$.
\begin{enumerate}
\item If $f$ is a difference in marking then for all $h \in F_n$,
\[ \widetilde{f_{wz}} \circ \rho^G_w(h) = \rho^H_z(h) \circ \widetilde{f_{wz}} \]
\item If $f$ is affine then $\widetilde{f_{wz}}$ is affine.
\end{enumerate}
\end{prop}
We note that the other direction is also true. Given a linear equivariant map ${\sf f}:\widetilde G \to \widetilde H$, it descends to a map $f:G \to H$ which is a difference of the induced markings.
Therefore the lifting operation defines an equivalence of categories between (equivalence classes of) marked metric graphs and linear differences of markings, and the category of (equivalence classes of) $F_n$-trees and linear equivariant maps.
\subsection{Lifting optimal maps}
Let $x$ be a point in Outer Space and let $\alpha$ be a loop in $x$ (i.e. in the underlying graph of $x$). We denote by $l(\alpha,x)$ the length of the immersed loop homotopic to $\alpha$.
For $a \in F_n$ we denote by $l(a,x)=l(\tau(a),x)$ or equivalently the length of each element in the conjugacy class of $a$ in $x$.
Denote by $X$ the $F_n$-tree obtained as the universal cover of $x$. Since the action is free, each $1 \neq a \in F_n$ acts as a hyperbolic isometry and its translation length is equal to $l(a,x)$. It has an axis denoted by $A_X(a)$.
\begin{defn}
A loop $\alpha$ in $x$ is a \emph{candidate} if it is an embedded circle, an embedded figure 8, or an embedded barbell. \end{defn}
\begin{theoremDT}\cite{FM}
Let $x,y \in \mathcal{X}_n$ then the function
\begin{equation} \label{eq23}
\begin{array}{ll}
d(x,y) & = \log \inf \{ \text{Lip}(f) \mid f \colon x \to y \text{ is a Lipschitz difference in markings }\} \\
& = \log \sup \left. \left\{ \frac{l(\gamma,y)}{l(\gamma,x)} \right| \gamma \text{ is a loop in } x \right\}
\end{array} \end{equation}
defines an asymmetric distance on $\mathcal{X}_n$. Additionally, the supremum and infimum in equation \ref{eq23} are realized.
\end{theoremDT}
\begin{defn}
A loop which realizes the maximum in equation \ref{eq23} is called a \emph{witness}. Sometimes we shall call a witness the element of $F_n$ or its conjugacy class that corresponds to the witness loop. A map that realizes the minimum in equation \ref{eq23} is called an \emph{optimal map}.
\end{defn}
\begin{prop}\cite{FM}\label{witnessOptimal}
For $x,y \in \mathcal{X}_n$ there is a candidate $\alpha$ of $x$ that witnesses the distance $d(x,y)$.
Moreover, if $\beta$ is a witness and $f \colon x \to y$ is an optimal difference in marking then $f(\beta)$ is an immersed loop in $y$ and $d(x,y) = \log \frac{l(f(\beta),y)}{l(\beta,x)}$.
\end{prop}
One can compute the distance
$d(x,y)$ by going over all candidates in $x$ and finding those which stretch maximally.
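For instance (a standard rank $2$ computation), suppose $x$ and $y$ are both roses with two petals, marked by the identity, with edge lengths $(a, 1-a)$ and $(b, 1-b)$ for $a,b \in (0,1)$. The candidates of $x$ are the two petals and the figure eight loops $x_1x_2$ and $x_1x_2^{-1}$ (a rose contains no embedded barbell), with stretch factors
\[ \frac{b}{a}, \qquad \frac{1-b}{1-a}, \qquad \frac{b+(1-b)}{a+(1-a)} = 1, \]
so that $d(x,y) = \log \max\left\{ \frac{b}{a}, \frac{1-b}{1-a} \right\}$ while $d(y,x) = \log \max\left\{ \frac{a}{b}, \frac{1-a}{1-b} \right\}$; in general these differ, illustrating the asymmetry of $d$.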
\begin{defn}
Let $x \in \mathcal{X}_n$. A basis $\mathcal{B}$ of $F_n$ is \emph{short} with respect to $x$ if for every $a \in \mathcal{B}$, $l(\tau(a),x)\leq 2$.
\end{defn}
The following proposition follows from standard covering space theory.
\begin{prop}\label{prop_lift}
Let $x = (G, \tau, \ell)$ and $y=(H, \mu, \ell')$ be elements of $\mathcal{X}_n$, and
$f:x \to y$ an affine optimal map. Let $\beta$ be a candidate witness for the distance $d(x,y)$, and let $b = \tau_*^{-1}(\beta) \in F_n$. Then for any choice of basepoint $w'$ in the image of the loop $\beta$ in $G$ and for any choice of lift $w \in \widetilde G$ of $w'$ and for any choice of lift $z \in \widetilde H$ of $f(w')$ on the axis $A_{\widetilde H}(b)$ the lift $\widetilde{f_{wz}}:\widetilde G \to \widetilde H$, from Proposition \ref{liftingAmap}, restricts to a linear map from
$A_{\widetilde G}(b)$ to $A_{\widetilde H}(b)$. \end{prop}
\subsection{The boundary of Outer Space}
The advantage of the tree approach to Outer Space is that
it extends to a compactification of Outer Space. Given an $F_n$ tree $X$ consider the function of translation lengths in $X$
\begin{align*}
\ell_X : & F_n \to {\mathbb R}\\
& \ell_X(a) = \ell(\rho(a),X)
\end{align*}
Culler and Morgan \cite{CM} defined a set of five axioms and called a function
$\ell: F_n \to {\mathbb R}$ a \emph{pseudo length function} if it satisfies their axioms. Any
length function of an $F_n$ tree is a pseudo length function. They proved
\begin{theorem}\cite{CM}\label{uniqueTree}
If a length function $\ell$ is irreducible (there are $g,h \in F_n$ with $\ell(h), \ell(g)$ and $\ell([g,h])$ non-zero) then there exists a \textbf{unique} minimal tree $X$ so that $\ell = \ell_X$ and this tree is irreducible (i.e. there is no global fixed point, no invariant end, and no invariant line).
\end{theorem}
Let $\mathcal{PLF}$, the space of projective length functions, be the quotient of the space of length functions, considered as a subspace of ${\mathbb R}^{F_n}$, by the scaling action of ${\mathbb R}_{>0}$. By the previous paragraph, there exists an injection
\begin{equation}\label{eqInjection} \mathcal{X}_n \to \mathcal{PLF} \end{equation}
\begin{defn}\label{axesTopo}
The topology of $\mathcal{X}_n$ induced by the injection \ref{eqInjection} is called the \emph{axes topology}.
\end{defn}
Culler and Morgan \cite{CM} proved
that $\mathcal{PLF}$ is compact. The closure $\overline{\mathcal{X}_n}$ of $\mathcal{X}_n$ in $\mathcal{PLF}$ is called the compactification of
Outer Space. Cohen and Lustig \cite{CohL} and Bestvina and Feighn \cite{BF} showed that the compactification of
Outer Space $\overline{\mathcal{X}_n}$ can be characterized as the set of equivalence classes of
very small $F_n$-trees.
\begin{defn}\label{defnVerySmall}A very small $F_n$-tree is a metric ${\mathbb R}$-tree $T$ and a homomorphism
$\rho: F_n \to \text{Isom}(T)$ so that
\begin{enumerate}
\item $\rho$ is minimal and irreducible.
\item For every set ${\sf s} \subset T$ that is isometric to an interval, $\stab{\sf s}$ is cyclic or trivial. If $\stab{{\sf s}} = \langle g \rangle$ then $g$ is not a proper power.
\item For every tripod $\sf t$ in $T$, $\stab{\sf t} = \{1\}$.
\end{enumerate}
Two very small $F_n$ trees are equivalent if there is an equivariant isometry between them.
\end{defn}
Free simplicial $F_n$-trees are very small trees and those are precisely the points of
Outer Space.
\begin{comment}
\begin{defn}
The map in \ref{eqInjection} defines a topology on the set of equivalence classes of
very small $F_n$-trees called \emph{the axes topology}.
We give an explicit description of a basis element in the axes topology. \\
\indent A basis element
$U(T,P,\varepsilon)$ is parameterized by a very small $F_n$ tree $T$, a finite
subset $P < F_n$, and
$\varepsilon>0$ and is given by
\[U(T,P,\varepsilon) = \{ S \in \overline{\mathcal{X}_n} \mid |l(a,T) - l(a,S)|<\varepsilon
\quad\forall a \in P \}\]
\end{defn}
\end{comment}
There is another topology on $\overline{\mathcal{X}_n}$ called the \emph{Gromov topology}.
\begin{defn}
A basis element $O(T,K,P,\varepsilon)$ is parameterized by an ${\mathbb R}$-tree
$T$, a compact subset $K$ in $T$, a finite subset $P< F_n$ and $\varepsilon>0$.
A \emph{$P$-equivariant $\varepsilon$-relation} $R$ between $K \subset T$ and $K' \subset T'$ is a subset
$R \subset K \times K'$ so that the following hold
\begin{enumerate}
\item the projection of $R$ is surjective on each factor,
\item if $(x,x') , (y,y') \in R$ then $|d(x,y) - d(x',y')|< \varepsilon$,
\item for all $a \in P$ if $x$ and $\rho(a) x \in K$
and $(x,x') \in R$ then $\rho'(a)x' \in K'$ and $(\rho(a)x,\rho'(a)x') \in R$.
\end{enumerate}
A basis element $O(T,K,P,\varepsilon)$ for the Gromov topology is the
set of trees $S$ so that there is a compact $K' \subset S$ and a $P$-equivariant
$\varepsilon$-relation $R \subset K \times K'$.
\end{defn}
Paulin \cite{P} showed that the axes topology and the Gromov topology coincide on $\mathcal{X}_n$. They are both equal to the simplicial topology defined by using the $l_1$ metric on the following simplices.
\begin{defn}\label{simplTopo}
Let $G$ be a graph with no vertices of valence $<3$ and $\tau:R_n \to G$ a marking. The simplex $S_{G,\tau} \subset \mathcal{X}_n$ is the set of points $(G, \tau,\ell)$ with $\ell$ ranging over all possible functions
$\ell:E(G) \to (0,1)$ with $\sum_{e \in E(G)} \ell(e) = 1$.
The $l_1$ metric between $(G,\tau,\ell), (G, \tau, \ell')$ in this simplex is $\sum_{e \in E(G)} |\ell(e) - \ell'(e)|$. A face of $S_{G,\tau}$ corresponds to a forest collapse.
This defines a simplicial structure on $\mathcal{X}_n$ that is locally finite. The topology induced on $\mathcal{X}_n$ is called
\emph{the simplicial topology}. \end{defn}
\section{A characterization of the completion points}
\subsection{Outer Space admits a forwards completion.}
We will first prove that Outer Space is backwards complete but not forwards complete.
Next we will show that it satisfies the conditions of Theorem \ref{ThmCompl} and thus it admits a forwards completion.
Finally we will characterize the points of the compactification of Outer Space that are completion points.
\begin{defn}\label{defBInOut}
Let $x \in \mathcal{X}_n$ and $0 \leq r \in {\mathbb R}$. The incoming ball centered at $x$ with radius $r$ is
\[ B_\text{in}(x,r) = \{y \in \mathcal{X}_n \mid d(y,x) < r\}. \]
The closed ball is $\overline{B}_{\text{in}}(x,r) = \{ y \in \mathcal{X}_n \mid d(y,x)\leq r\}$. Similarly, the outgoing ball centered at $x$ with radius $r$ is
\[ B_\text{out}(x,r) = \{y \in \mathcal{X}_n \mid d(x,y) < r\} \]
and $\overline{B}_\text{out}(x,r) = \{ y \in \mathcal{X}_n \mid d(x,y)\leq r\}$ is the closed ball.
\end{defn}
\begin{prop}\label{compactBalls}
$\overline{B}_{\text{in}}(x,r)$ is compact (while $\overline{B}_\text{out}(x,r)$ is not).
\end{prop}
\begin{proof}
Consider $y \in \overline{B}_{\text{in}}(x,r)$. Since $d(y,x)\leq r$, for all $a \in F_n$ we have $l(a,y) \geq e^{-r} l(a,x)$. Let $s$ be the length of the shortest loop in $x$, then $l(a,y) \geq e^{-r}s$. This implies that $y$ lies in the $e^{-r}s$ thick part of $\mathcal{X}_n$ and by \cite{AKB}, there exists a constant $A = A(s,r)$ so that $d(x,y) \leq A d(y,x) \leq Ar$. Thus, for all $a \in F_n$ we also have $l(a,y) \leq e^{Ar} l(a,x)$. In conclusion,
\[ e^{-r} l(a,x) \leq l(a,y) \leq e^{Ar} l(a,x) \]
Now suppose $z$ lies in the closure of $\overline{B}_{\text{in}}(x,r)$ in the space of projective length functions. Let $\{x_k \}$ be a sequence in $\overline{B}_{\text{in}}(x,r)$ projectively converging to $z$, thus
\[ \frac{l(a,z)}{l(b,z)} = \lim_{k \to \infty} \frac{l(a,x_k)}{l(b,x_k)} \text{ for all } a,b \in F_n \]
Fix $b \in F_n$ then $l(b,x)>0$. Thus, for all $a \in F_n$,
\[ e^{-(A+1)r} \frac{l(b,z)}{l(b,x)} l(a,x) \leq l(a,z) \leq e^{(A+1)r} \frac{l(b,z)}{l(b,x)} l(a,x). \]
In particular $z$ corresponds to a free and simplicial $F_n$ tree and so $z$ lies in Outer Space. Thus, the closure of $\overline{B}_{\text{in}}(x,r)$ in the space of projective length functions is equal to its closure in Outer Space. Since the former is compact we obtain our statement.
Finally, to see that $\overline{B}_{\text{out}}$ is not compact, let $\alpha$ be an embedded loop in $x$ and let $x_k$ be a graph so that the edges of $\alpha$ are rescaled by $\frac{1}{k}$ and the other edges are rescaled so the total volume is 1. It is elementary to see that $d(x,x_k)$ is at most $r = \frac{1}{s}$ where $s$ is the length of the shortest edge in $x$. Hence $x_k \in B_\text{out}(x, r)$ but $x_k$ converges projectively to a non free $F_n$ tree hence the limit lies outside Outer Space.
\end{proof}
\begin{lemma}\label{addmissibleConv}
If $\{x_k\} \subset \mathcal{X}_n$ is a forwards admissible sequence then for each $a \in F_n$, $l(a,x_k)$ is almost monotonically decreasing.
If $\{x_k\} \subset \mathcal{X}_n$ is a backwards admissible sequence then for each $a \in F_n$, $l(a,x_k)$ is almost monotonically increasing.
\end{lemma}
\begin{proof}
We know that for all $\varepsilon$ there exists an $N(\varepsilon) \in \NN$ so that for all $m>N(\varepsilon)$ there is a $K(m,\varepsilon) \in \NN$ such that $d(x_m,x_k) < \log(1+\varepsilon)$ for all $k>K(m,\varepsilon)$. Therefore, for all $a$, $\frac{l(a,x_k)}{l(a,x_m)}< e^{\log(1+\varepsilon)} = 1+ \varepsilon$. Hence $l(a,x_k) < l(a,x_m) + \varepsilon l(a,x_m)$. Now let $\varepsilon=1$; for $S = N(1)+1$ denote $M = l(a,x_S)$, then for all $m>K(S,1)$ we have $l(a,x_m) < 2M$. Now let $\varepsilon>0$ be arbitrary and take $m> \max\{N(\frac{\varepsilon}{2M}), K(S,1)\}$; then there exists a $K(m,\frac{\varepsilon}{2M})$ such that for all $k>K(m,\frac{\varepsilon}{2M})$ we have
\[ l(a,x_k)< l(a,x_m) + \frac{\varepsilon}{2M} l(a,x_m) < l(a,x_m) + \varepsilon. \]
If $\{x_k\}$ is backwards admissible then for all $\varepsilon$ there exists an $N(\varepsilon) \in \NN$ such that for all $m>N(\varepsilon)$ there is a $K(m,\varepsilon) \in \NN$ so that for all $k>K(m,\varepsilon)$ we have $\frac{l(a,x_m)}{l(a,x_k)}<1+\varepsilon$. We proceed as in the case of a forwards admissible sequence.
\end{proof}
\begin{prop}\label{BackAddConv}
If $\{x_k\}$ is a backwards admissible sequence then it has a unique forwards limit $x$. The point $x$ is also the limit of $\{x_k\}$ in the axes topology.
\end{prop}
\begin{proof}
If $\{ x_k\}$ is backwards admissible then $\{x_k\} \subseteq \overline{B}_{\text{in}}(x_0, r)$ for a large enough $r$. Thus, $\{x_k\}$ has a partial limit $x \in \mathcal{X}_n$. Fix a conjugacy class $\alpha$ and consider the sequence $l(\alpha,x_k)$; since $x_k$ is backwards admissible, $l(\alpha,x_k)$ is almost monotonically increasing. Moreover, if $\{x_{k_i}\}$ is the subsequence that limits to $x$ then $l(\alpha, x_{k_i})$ limits to $l(\alpha,x)$ and therefore, $l(\alpha,x_k)$ is bounded. Therefore $l(\alpha,x_k)$ converges by Proposition \ref{Prop:BoundedAndAlmostMonDec}, and its limit is $l(\alpha,x)$. Thus, the partial limit $x \in \mathcal{X}_n$ is in fact the limit of $\{x_k\}$ in the axes topology. If $y \in \mathcal{X}_n$ is such that $\lim_{k \to \infty} d(x_k,y) = 0$ then $d(x,y) = 0$ (since the length functions converge) and thus, $y=x$.
\end{proof}
\begin{prop}
$\mathcal{X}_n$ is backwards complete but not forwards complete.
\end{prop}
\begin{proof}
Outer Space is backwards complete by Proposition \ref{BackAddConv}.
To see that it is not forwards complete, consider the sequence from the proof of Proposition \ref{compactBalls} showing that outgoing balls are not compact, and notice that it is a forwards Cauchy sequence with no forward limit in $\mathcal{X}_n$: the lengths $l(\alpha,x_k) = \frac{1}{k} l(\alpha,x)$ tend to $0$, so no point of $\mathcal{X}_n$ can be a forward limit.
\end{proof}
To prove that $\mathcal{X}_n$ has a forwards completion guaranteed by Theorem \ref{ThmCompl} we must show that it is forwards continuous (Definition \ref{contDefn}) and minimizing (Definition \ref{miniDefn}), and that its forward completion is semi forwards continuous with respect to $\mathcal{X}_n$ (Definition \ref{semiForwardsContinuous}). By Lemma \ref{keyLemma} it suffices to show that $\mathcal{X}_n$ is forwards continuous and that it satisfies condition (\ref{star}) of Lemma \ref{keyLemma} (we recall this condition at the end of the proof of Proposition \ref{conditions}).
\begin{prop}\label{conditions}
Outer Space is forwards and backwards continuous. Moreover, $\mathcal{X}_n$ satisfies condition (\ref{star}) of Lemma \ref{keyLemma} and therefore it is minimizing and its forward completion is semi forwards continuous with respect to $\mathcal{X}_n$.
\end{prop}
\begin{proof}
Suppose that $\{x_k\}$ is a sequence in Outer Space with a forward limit $x \in \mathcal{X}_n$. Then there exists $K\in \NN$ such that for all $k>K$ we have $d(x_k,x)<1$. Thus for each conjugacy class $[a]$ in $F_n$ we have $l(a,x_k)> e^{-1} l(a,x)$.
Since $x \in \mathcal{X}_n$ its length function is positive and bounded away from zero, so there exists $\varepsilon_0$ so that $x_k \in \mathcal{X}_n^{\geq \varepsilon_0}$ for all $k>K$. Therefore there exists $A>0$ such that $d(x,x_k) \leq A d(x_k,x)$ for all $k>K$. Thus, $x$ is also a backwards limit of $\{x_k\}$.
Therefore, $x$ is the limit of $\{x_k\}$ (by Proposition \ref{BackAddConv}).
Now for every $y \in \mathcal{X}_n$, we have $d(x_m,y) - d(x_m,x) \leq d(x,y) \leq d(x,x_m) + d(x_m,y)$ and hence $\displaystyle \lim_{m \to \infty}d(x_m,y) \leq d(x,y) \leq \lim_{m \to \infty} d(x_m,y)$. Similarly $d(y,x) = \lim_{m \to \infty} d(y,x_m)$. Hence $\mathcal{X}_n$ is forwards continuous. The proof that it is backwards continuous is analogous and left to the reader.
Condition \ref{star} states that every forwards admissible sequence $\{x_m\}$ in $\mathcal{X}_n$ that has a CFL in $\mathcal{X}_n$ denoted $x$ is also backwards admissible and $x$ is its CBL. This is precisely what we have already shown.
\end{proof}
\begin{cor}
Outer Space admits a forwards completion.
\end{cor}
In the next sections we will work to characterize this completion as a subset of $\overline{\mathcal{X}_n}$.
\subsection{The limit of a forward admissible sequence}
Recall that $\overline{\mathcal{X}_n}$ is compact. In general, if $\{X_m\}$ is a sequence in $\overline{\mathcal{X}_n}$ then there exists a subsequence $\{X_{m_j}\}$ and a sequence of scalars $\{c_j\}$ so that $\lim_{j \to \infty} c_j X_{m_j}$ converges to $X \in \overline{\mathcal{X}_n}$ (in the space of length functions).
We observe that a forward Cauchy sequence in $\mathcal{X}_n$ converges in $\overline{\mathcal{X}_n}$, and without the need to rescale the trees, i.e. we can take $c_j =1$ for each $j$.
\begin{cor}\label{limitExists}
Let $\{X_k\} \subset \mathcal{X}_n$ be a forwards Cauchy sequence. Then there exists an $F_n$-tree $X$ whose homothety class lies in $\overline{\mathcal{X}_n}$, so that $\lim_{k \to \infty} X_k = X$ as length functions.
\end{cor}
\begin{proof}
Let $\| \cdot \|_k$ be the translation length function of $X_k$.
For each conjugacy class $\alpha$, the sequence $\| \alpha \|_{k}$ is positive and almost monotonically decreasing (Lemma \ref{addmissibleConv}). Therefore $\| \cdot \|_{k}$ converges pointwise to a function, which is the length function of a very small tree action \cite{CohL}.
\end{proof}
We denote by $x_m$ the $F_n$-quotient of $X_m$ and let $h_{m,k}\colon x_m \to x_k$ be an optimal difference in marking. Then $\log \text{Lip} h_{m,k} = d(x_m,x_k)$. Fix $m$ and let $k$ vary, then $\text{Lip} h_{m,k}$ is an almost monotonically decreasing Cauchy sequence.
\begin{cor}\label{limitLipConst}
If $\{x_m\}$ is an admissible sequence and $h_{m,k} \colon x_m \to x_k$ optimal differences in marking then $\{ \text{Lip} h_{m,k} \}_{k = m}^\infty$ converges to some limit $L_m$ and it is bounded by $M_m$.
\end{cor}
Our next goal is to show that for each $m$ there exists a map $f_{m,\infty}: X_m \to X$ such that $\text{Lip} f_{m,\infty} \leq L_m$.
The limit $X$ is given in terms of length functions, so some work is needed to give another description of it that readily admits such a map.
For the remainder of this section, we fix $1 \leq m \in \NN$, following closely Bestvina's construction\footnote{In fact the rest of this section is a repetition of the arguments of \cite{degen} in our setting (that is a little different than the original).} in \cite{degen}, and using Gromov's Theorem \ref{GromovThm}, we construct a metric tree $X_\infty(m)$, with an $F_n$-action, and an equivariant map $f_{m, \infty} \colon X_m \to X_\infty(m)$.
All of these constructions depend on $m$. However, we will show that the tree $X_\infty(m)$ is non-trivial, irreducible and that the length functions of $X_k$ converge to that of $X_\infty(m)$. Therefore, by Theorem \ref{uniqueTree}, $X_\infty(m)$ is equivariantly isometric to the limit $X$ from Corollary \ref{limitExists}.
We construct $X_\infty(m)$ as a union of finite trees $X^l_\infty(m)$ as follows. For every $k>m$ there is a candidate $\beta_k$ in $x_m$ that is stretched maximally by the difference in marking $h_{m,k} \colon x_m \to x_k$.
By passing to a subsequence we may assume that it is
the same for all $k>m$ and denote it by $\beta$.
Note that $\beta$ depends on $m$ but since we have now fixed $m$, we shall drop the $m$-index to simplify the notation.
We denote by $\tau_k \colon R \to x_k$ the marking of $x_k$. Let $\mathcal{B}$ (also depends on $m$) be a basis such that for every $c \in \mathcal{B}$, $\tau_m(c)$ is homotopic to a candidate of $x_m$ and there is a $b$ in $\mathcal{B}$ such that $\tau_m(b) = \beta$.
Recall that $h_{m,k}(\beta)$ is immersed in $x_k$. Let $A_m(b)$ be the axis of $b$ in $X_m$. Let $w \in X_m$ be a point on $A_m(b)$ ($w$ depends on $m$), let $w'$ be the projection in $x_m$ of $w$, and choose
$w_k \in X_k$ to be a point on $A_k(b)$ in the preimage of $h_{m,k}(w')$ (here $w_k$ depends on $m$ as well, but again we suppress this dependence).
By Proposition \ref{prop_lift} there exists a map $f_{m,k} \colon X_m \to X_k$ that lifts $h_{m,k}$ and so that $f_{m,k}(w) = w_k$ and $f_{m,k}|_{A_m(b)}$ is a linear map onto $A_k(b)$.
\begin{prop}
For each basis element $c \in \mathcal{B}$, the $f_{m,k}$ image of the axis $A_m(c)$ is contained in the $2\text{Lip}(f_{m,k})$ neighborhood of
the axis $A_k(c)$.
\end{prop}
\begin{proof}
The cycle $\gamma = \tau_m(c)$ is a candidate in $x_m$. Therefore, it is short, i.e. $l(c,x_m) \leq 2$ (a candidate crosses each edge of $x_m$ at most twice and the total volume of $x_m$ is $1$), and hence $l(c,x_k) \leq 2 \text{Lip}(f_{m,k})$. Part of $f_{m,k}(\gamma)$ survives after tightening the loop. This is the part that lifts to a fundamental domain of $A_k(c)$. Therefore, $f_{m,k}(A_m(c))$ is contained in the $2 \text{Lip}(f_{m,k})$ neighborhood of $A_k(c)$.
\end{proof}
We exhaust $F_n$ via $\mathcal{B}$-length:
\[ W^l = \{ g \in F_n \mid |g|_{\mathcal{B}}\leq l\}\]
Denote by $X_k^l$ the convex
hull of $W^l \cdot w_k$. A \emph{diagonal} in $X_k^l$ is a path of the
form $[\rho_k(g) w_k, \rho_k(h) w_k]$. Each diagonal in $X_k^l$ can be covered
by $\frac{2M_ml}{\varepsilon}$ balls of radius $\varepsilon$ because:
\[d_{X_k}(w_k, \rho_k(g) w_k) \leq \text{Lip}(f_{m,k}) d(w_m ,\rho_m(g) w_m) \leq \text{Lip}(f_{m,k}) l \leq M_m l \]
Note that this number is uniform over all $k$. We now apply
\begin{theorem}\cite{Gr}\label{GromovThm}
If $\{Y_k\}_{k=k_0}^\infty$ is a sequence of compact metric spaces so that for every $\varepsilon$ there is an $N(\varepsilon)$ so that $Y_k$ may be covered by $N(\varepsilon)$ $\varepsilon$-balls then there is a subsequence $Y_{k_j}$ which converges in the Gromov sense to a compact metric space.
\end{theorem}
Therefore, for each $l \in \NN$ we denote by $X^l_\infty(m)$ the limit space provided by the theorem and by $\{k_j^l\}_{j=1}^\infty$ the subsequence so that $\lim_{j \to \infty} X^l_{k^l_j} = X^l_\infty(m)$. By a diagonal argument we may pass to a subsequence $k_j$ so that $X^l_{k_j}$
converges to $X^l_\infty(m)$ for every $l$.
\begin{defn}\label{actionDefn}
We require of the sequence $\{k_j\}$ that for each $g \in F_n$ the sequence $\rho_{k_j}(g)(w_{k_j}) \in X_{k_j}$ converges in $X^l_\infty(m)$ and denote its limit by $\rho(g) \cdot w$. The existence of the $k_j$ sequence follows by a diagonal argument from the fact that $F_n$ is countable.
\end{defn}
\begin{prop}
$X^l_\infty(m)$ is a finite tree.
\end{prop}
\begin{proof}
The proof is identical to that of Lemma 3.5 in \cite{degen}. One shows that the limit of 0-hyperbolic spaces is a 0-hyperbolic space. It is elementary to show that
\[ X^l_\infty(m) = \bigcup_{g,g' \in W^l} [g \cdot w, g' \cdot w]. \]
This implies that $X^l_\infty(m)$ is connected and that it is a finite tree.
\begin{comment}
This is a repetition of the proof of Lemma 3.5 of \cite{degen}, we include it for the reader's convenience.
We first show that for every $a,b \in X_\infty^l(m)$ for every $0 \leq t \leq D = d_{X_\infty^l(m)}(a,b)$ there
is a unique point $c \in X_\infty^l(m)$ such that $d_{X_\infty^l(m)}(a,c) = t$ and $d_{X_\infty^l(m)}(c,b) = D-t$.
To see this, let $a_k, b_k$ be points in $X_k^l$ such that $\lim_{k \to \infty} a_k = a$ and
$\lim_{k \to \infty} b_k = b$ then there is a point $c_k \in T^l_k$ such that $d(a_k,c_k) = t_k$
and $d(c_k, b_k) = s_k$ with $\lim t_k = t$ and $\lim s_k = D - t$. $c_k$ has a convergent
subsequence to a point $c \in X_\infty^l(m)$. If $c'$ is another point with $d(a,c') = t$ and
$d(c',b) = D-t$ and $c'_k$ a sequence such that $\lim c'_k = c'$ then for large $k$,
$d(a_k,c'_k) + d(b_k,c'_k) \leq d(a_k,b_k) + \varepsilon$ hence $[a_k,c'_k] \cup [c'_k,b'_k]$ is a
tripod and $c'_k$ is a distance less than $\varepsilon$ away from the vertex of the tripod which itself is
a distance approximately $t$ from $a$ and $D-t$ from $b$. The same is true for $c_k$ hence
$d(c_k,c'_k) < 2\varepsilon$ thus $d(c,c') = 0$. \\
\indent For each $g \in W^l$ let $g \cdot w_m$ be the limit in $X_\infty^l(m)$ of the sequence
$\rho_k(g) \cdot z_k$. Let
$H \subset X_\infty^l(m)$ be the union of all diagonals of elements in $W^l$, i.e. all segments of the form
$[g \cdot w, g' \cdot w] \in X^l_\infty (m)$ for $g,g' \in W^l$. We claim that $H = X_\infty^l(m)$. To see this suppose
$x \in X_\infty^l(m)$ is not covered by a diagonal. Then there is an $\varepsilon$ such that
$d(g \cdot w,x) + d(x,g' \cdot w) > \varepsilon+ d(g \cdot w, g' \cdot w)$ for all $g,g' \in W^l$. Thus,
for a large $k$, there is an $x_k \in X^l_k$ with
$d(g \cdot w_k,x_k) + d(x_k,g' \cdot w_k) > \frac{\varepsilon}{2}+ d(g \cdot w_k, g' \cdot w_k)$.
Hence $x_k$ is not in the convex hull of $W^l \cdot w_k$ which is a contradiction.
\end{comment}
\end{proof}
The definitions of $X_\infty^l(m)$ and $k_j$ imply that $X_\infty^l(m) \subset X_\infty^{l+1}(m)$. Moreover, the inclusion sends the basepoint $w$ in $X_\infty^l(m)$ to the basepoint in $X_\infty^{l+1}(m)$ and the point $\rho(g) \cdot w$ of $X^l_\infty(m)$ to $\rho(g) \cdot w$ of $X^{l+1}_\infty(m)$.
\begin{defn}
Let
$X_\infty(m) = \cup_{l=1}^{\infty} X_\infty^l(m)$.
Then $X_\infty(m)$ is a tree and there is a well defined basepoint $w \in X_\infty(m)$ that is the image of each basepoint $w \in X_\infty^l(m)$, and the points $\rho(g) \cdot w$ for $g \in F_n$ are also well-defined.
\end{defn}
In Propositions \ref{Fntree}, \ref{Limitminimal} and \ref{XmIsX} to simplify notation we write $X_k$ instead of $X_{k_j}$ but we always mean that we restrict our attention to the subsequence.
\begin{prop}\label{Fntree}
There exists a homomorphism $\rho: F_n \to \text{Isom}(X_\infty(m))$ so that for every $q \in X_\infty(m)$ and for every sequence $\{ q_k \} \subset X_k$ so that $\lim_{k \to \infty} q_k = q$ the following equation holds
\[ \rho(g) q = \lim_{k \to \infty} \rho_k(g) q_k \]
\end{prop}
\begin{proof}
We follow the proof in \cite{degen} Proposition 4.1 and Theorem 4.2.
We have already defined the action of $F_n$ on $w$ in Definition \ref{actionDefn}. We check that it is an isometry on orbits of $w$,
\begin{eqnarray*}
d \mathopen{} \left( \rho(g) \Bigl( \rho(h) w \Bigr), \rho(g) \Bigl( \rho(h') w \Bigr) \right) &= &\displaystyle \lim_{k \to \infty} d(\rho_k(g) \rho_k(h) w_k , \rho_k(g) \rho_k(h') w_k)\\
& = & \displaystyle \lim_{k \to \infty} d(\rho_k(h) w_k , \rho_k(h') w_k) \\
& = & \displaystyle d(\rho(h) w, \rho(h') w)
\end{eqnarray*}
For a point $q \in X_\infty(m)$ there is some $l$ such that $q \in X^l_\infty(m)$. There is a sequence
$q_k \in X^l_k$ so that $\lim_{k \to \infty}q_k = q$. Let $g \in W^s$, then $q_k \in [\rho_k(g') w_k, \rho_k(g'') w_k]$ for $g',g'' \in W^l$ implies that
$\rho_k(g)(q_k) \in [\rho_k(gg') w_k, \rho_k(gg'') w_k]$ so $\rho_k(g)(q_k) \in X_k^{s+l}$. We define
\[ \rho(g) q = \lim_{k \to \infty} \rho_k(g)(q_k)\] in $X_\infty^{l+s}(m)$. We note that $\rho(g)$ behaves well with respect to distances to orbit points of $w$,
\begin{eqnarray*}
\displaystyle d(\rho(g)q, \rho(h) w) &= &\displaystyle \lim_{k \to \infty} d(\rho_k(g) q_k, \rho_k(h) w_k) \\
&\displaystyle = &\lim_{k \to \infty} d( q_k, \rho_k(g^{-1}h) w_k) \\
& \displaystyle= & d(q,\rho(g^{-1}h) w) \\
& = & \lim_{k \to \infty}d(q_k, \rho_k(g^{-1}h) w_k)
\end{eqnarray*}
The point $\rho(g)q$ is determined by the distances above
and it is independent of the sequence $\{q_k\}$.
We now have to prove that $\rho(g)$ is an isometry. Let $q \in X_\infty(m)$ and
suppose $q'$ is in the orbit of $w$, i.e. $q' = \rho(h) w$ for some $h \in F_n$. Then $\rho(g)$ preserves the distance, indeed, $d(\rho(g) q, \rho(g) q') = d(\rho(g) q, \rho(g) \rho(h)w) = \lim_{k \to \infty} d(q_k, \rho_k(g^{-1} gh) w_k) = d(q, \rho(h) w)$.
More generally, for $q,q' \in X_\infty(m)$ let $w',w''$ be points in the orbit of $w$ such that $q,q'$ lie on the diagonal $[w',w'']$. $\rho(g)$ preserves the distance between $w'$ and $w''$, as well as the distances from $q$ and $q'$ to $w'$ and $w''$. Since $X_\infty(m)$ is a tree, it must preserve the distance $d(q,q')$.
\end{proof}
\begin{prop}\label{Limitminimal}
$X_\infty(m)$ is non-trivial and minimal.
\end{prop}
\begin{proof}
Since $w_m \in A_{m}(b)$, $f_{m,k}(w_m) = w_k$, and $f_{m,k}$ restricts to an affine map on $A_{m}(b)$, we have $w_k \in A_{k}(b)$. Therefore, for all $k$,
\[ d(w_k, \rho_k(b^2)(w_k)) = 2 d(w_k, \rho_k(b)(w_k)).\]
The derivative of $f_{m,k}$ on $A_m(b)$ is $\text{Lip}(f_{m,k}) \geq 1$, hence $d(w_k, \rho_k(b) w_k)\geq d(w_m, \rho_m(b) w_m)$, therefore
\begin{equation}\label{AxisOfb}
d(w, \rho(b^2) w) = 2 d(w, \rho(b)w) \geq 2d(w_m,\rho_m(b)w_m)>0 \end{equation}
Hence $\rho(b)$ is a hyperbolic isometry and $w$ lies on the axis of $\rho(b)$ in $X_\infty(m)$.
The tree is minimal: If $H$ is an invariant subtree then it must contain the axis of $\rho(b)$ in $X_\infty(m)$ and its orbit under $F_n$. Since $H$ is connected it must also contain the convex hull of this set. By construction $X_\infty(m)$ is precisely the convex hull of an orbit of $w$, hence $H=X_\infty(m)$.
\end{proof}
\begin{prop}\label{XmIsX}
For every $g \in F_n$:
$\|g\|_{X_\infty(m)} = \displaystyle \lim_{k \to \infty} \|g\|_k$
\end{prop}
\begin{proof} By definition
\[ \displaystyle \| g \|_{X_\infty(m)} = \lim_{s \to \infty} \frac{d(w, \rho(g^s) w) }{s}= \lim_{s \to \infty} \lim_{k \to \infty} \frac{d(w_k, \rho_k(g^s) w_k)}{s}\]
Observe that, since $g$ acts on $X_k$ as a hyperbolic isometry with axis $A_k(g)$,
$d(w_k, \rho_k(g^s) w_k)= 2 d(w_k, A_k(g)) + s \|g\|_k$ and
\[d(w_k, A_k(g)) \leq \text{Lip}(f_{m,k}) d(w_m, A_m(g)) \leq M_m d(w_m, A_m(g)) = D(g,m)\]
Therefore, $s \|g\|_k \leq d(w_k, \rho(g^s) w_k) \leq 2D(g,m) + s \|g\|_k$. Hence,
\[
\displaystyle \lim_{k \to \infty} \| g\|_k \leq
\| g\|_{X_\infty(m)} \leq \lim_{k \to \infty} \lim_{s \to \infty} \frac{1}{s} (s \|g\|_k + 2D(g,m)) = \lim_{k \to \infty} \| g \|_k \qedhere \] \end{proof}
We now consider the entire sequence $\{X_k\}$ and not the subsequence $\{X_{k_j}\}$.
\begin{theorem}\label{mapExists}
Let $\{ X_m\}$ be a forward Cauchy sequence in Outer Space, then there is a point $X \in \overline{\mathcal{X}_n}$ so that $X$ is the limit of $\{X_m\}$ in the axes topology and
there is an equivariant Lipschitz map $f_{m,\infty}: X_m \to X$ with $\text{Lip}(f_{m, \infty}) = \lim_{k \to \infty} \text{Lip}(f_{m,k})$.
\end{theorem}
\begin{proof}
The first part of the statement is Corollary \ref{limitExists}. By Proposition \ref{XmIsX}, the
length functions of $X$ and $X_\infty(m)$ coincide. This length function is non-abelian and both $X$ and $X_\infty(m)$ are irreducible $F_n$-trees, hence by Theorem \ref{uniqueTree}, there is an equivariant
isometry $k_m: X_\infty(m) \to X$.
Let $f_{m,\infty}: X_m \to \mathcal{X}m$ be the equivariant affine map that sends $w_m \to w$ whenever $m = k_j$ is an index of the subsequence; otherwise let $k_j$ be the smallest index of the subsequence with $k_j \geq m$ and let $f_{m,\infty}$ be the composition $X_m \to X_{k_j} \to \mathcal{X}m$.
Then $\text{Lip}(f_{m,\infty}) = \lim_{k \to \infty} \text{Lip}(f_{m,k})$.
The required maps $X_m \to X$ are $k_m \circ f_{m,\infty}$ and are denoted by $f_{m,\infty}$ as well.
\end{proof}
\subsection{A characterization of completion points}
Once we have a map $f_{m, \infty}: X_m \to X$ it is straightforward to characterize $X \in \mathcal{C}os$. We begin with a definition of a quotient volume of an $F_n$-tree.
If $V$ is a finite metric tree then $V = \sqcup \sigma_i$ a finite union of arcs
$\sigma_i$ with disjoint interiors. The volume of $V$ is the sum of lengths of $\sigma_i$.
It is easy to see that the volume does not depend on the decomposition of $V$ into non-overlapping arcs.
\begin{defn}\label{tree_volume}
Let $T$ be an (infinite) $F_n$-tree. The quotient volume of $T$ is
\[ qvol(T) = \inf \{ vol(V) \mid V \subset T \text{ finite forest and } F_n \cdot V = T \} \]
\end{defn}
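For example (a simple illustrative case, not needed for the arguments below), if $T$ is the universal cover of a metric rose with petals of lengths $\ell_1, \dots, \ell_n$, then a single lift of each petal is a finite forest whose $F_n$-orbit covers $T$, so $qvol(T) \leq \ell_1 + \dots + \ell_n$; since the petals lie in distinct edge orbits, any covering forest has volume at least $\ell_1 + \dots + \ell_n$, hence $qvol(T) = \ell_1 + \dots + \ell_n$.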
\begin{prop}\label{howtogen}
If $h: R \to T$ is an $L$-Lipschitz equivariant map then \[ qvol(T) \leq L \cdot qvol(R)\]
\end{prop}
\begin{proof} For each subset $V \subset R$,
$vol(h(V)) \leq L vol(V)$.
Moreover, if $F_n \cdot V = R$ then the orbit of $h(V)$ covers $T$. Therefore,
$qvol(T) \leq L qvol(R)$.
\end{proof}
If $S$ is a simplicial tree then $qvol(S)$ is equal to the sum of lengths of edges
of $S/F_n$. More generally,
\begin{prop}\label{Pgenerates}
Let $T$ be an $F_n$-tree, $U \subset T$ a finite subtree and \[P=\{ g \in F_n \mid gU \cap U \neq \emptyset \}\] then $P$ generates $F_n$ if and only if $F_n \cdot U = T$.
\end{prop}
\begin{proof}
Suppose $P$ generates $F_n$, we show that $F_n \cdot U$ is connected and by minimality it must coincide with $T$. Let $W^k$ be the set of words $g \in F_n$ that can be written as $g=p_1 \dots p_k$ for $p_i \in P$. For each $i$, $p_1 \cdots p_{i-1} U \cap p_1 \cdots p_i U \neq \emptyset$. Therefore, $\cup_{k} \cup_{g\in W^k} gU = F_n \cdot U$ is connected. \\
Conversely, suppose $F_n \cdot U = T$, choose a basepoint $u \in U$ and consider $gu$. Let $g_1, \dots g_k \in F_n$ be such that the geodesic from $u$ to $gu$ passes successively through $g_1U, \dots , g_kU$ and $g_k = g$. Since $g_iU \cap g_{i+1}U \neq \emptyset$ we have $g_i^{-1} g_{i+1} \in P$. Moreover, since $u \in U$ then $g_1 \in P$, thus
\[ g = g_1 \cdot g_1^{-1} g_2 \cdot g_2^{-1} g_3 \dots g_{k-1}^{-1} g_k \] is a product of elements in $P$.
\end{proof}
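To illustrate the proposition in a simple case (chosen only as an example): let $T$ be the universal cover of the rose with two petals $a,b$ of length $1$, with base vertex $x_0$, and let $U$ be the union of the two edges joining $x_0$ to $a x_0$ and to $b x_0$. The translates of $U$ cover $T$, and indeed $P$ contains $a^{\pm 1}$ and $b^{\pm 1}$ (for instance $aU$ and $U$ share the vertex $a x_0$), so $P$ generates $F_2$.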
\begin{prop}\label{FindUandS}
For every $F_n$-tree $T$ such that $[T] \in \overline{\mathcal{X}_n}$ and for every $\varepsilon>0$ there is a finite and connected subtree $U$, and a finite generating set $\mathcal{S}$ of $F_n$ such that
\begin{enumerate}
\item For each $g \in \mathcal{S}$, $gU \cap U \neq \emptyset$, and
\item $vol(U) \leq qvol(T) + \varepsilon$.
\end{enumerate}
Additionally, there is a simplicial $F_n$-tree $T'$ admitting an equivariant 1-Lipschitz quotient map $p: T \to T'$, so that $qvol(T') = qvol(T)$.
\end{prop}
\begin{proof}
We will use a result of Levitt \cite{Lev} about countable groups acting on ${\mathbb R}$-trees such that the action is a $J$-action. This condition is satisfied by every $F_n$ tree $T$ whose homothety class $[T]$ lies in the closure of Outer Space. Levitt shows in the Lemma on page 31 that if $B \subset T$ is the set of branch points of $T$, then the number of $F_n$ orbits of $\pi_0(T-\overline{B})$ is finite. Each connected component of $T-\overline{B}$ is an arc. We choose arcs $\sigma_1, \dots, \sigma_k$ so that the union of their $F_n$ orbits covers $T-\overline{B}$. Let $\theta_1, \dots , \theta_k$ be the associated lengths, and let $\varepsilon>0$ be smaller than $\frac{\theta_i}{2}$ for each $i$.
Consider $N_1 \subset T$ the $\varepsilon$-neighborhood of $F_n \cdot \sigma_1$. If $N_1$ is connected then by minimality $N_1 = T$. But the midpoint of $\sigma_2$ cannot be contained in $N_1$ since $\varepsilon< \frac{\theta_2}{2}$, hence $k=1$.
Otherwise, $N_1$ is not connected. Denote by $S_1$ the $\varepsilon$-neighborhood of $\sigma_1$. Therefore, there exists an $i$ and an element $a \in F_n$ so that $S_1 \cap a \cdot \sigma_i \neq \emptyset$. We reorder the arcs so that $i=2$ and change $\sigma_2$ with an element in its orbit so that $a=1$. Let $N_2$ be the $\varepsilon$-neighborhood of $F_n \cdot (\sigma_1 \cup \sigma_2)$ if $N_2$ is connected we are done, otherwise continue choosing representatives $\sigma_3 , \dots , \sigma_k$ such that the convex hull $W = Hull(\cup_{i=1}^k \sigma_i)$ has total volume $\leq \sum_{i=1}^k \theta_i + k \varepsilon$.
Each component of $V = T \setminus F_n \cdot \bigcup_{i=1}^k \sigma_i$ is a tree. There are finitely many orbits of connected components of $V$. Each orbit of connected components of $V$ meets $W$. Let $J_1, \dots , J_m$ be connected components of $V$, that meet $W$ and belong to distinct orbits and so that the orbit $W \cup \bigcup_{i=1}^m J_i$ is equal to $T$.
For each $i$, $J_i$ is a tree and let $H_i < F_n$ be the stabilizer of $J_i$ in $F_n$. Then $H_i$ is a finite rank free group. Since $B \cap J_i$ is dense in $J_i$, then the action of $H_i$ on $J_i$ has dense orbits. By \cite{LL} there exists a basis $\mathcal{B}_i$ of $H_i$ and a point $y_i \in J_i$ such that $d(y_i,W)<\varepsilon$ and $d(y_i, h \cdot y_i)<\varepsilon$ for all $h \in \mathcal{B}_i$. Define
\[
U = Hull \left( W \cup \bigcup_{i =1}^m \{ h \cdot y_i \mid h \in \mathcal{B}_i \} \right) \]
We note that $U$ is a finite tree and the volume of $U$ is approximately equal to that of $W$ which is approximately equal to the volume of the sum of lengths of $\sigma_1, \dots, \sigma_k$.
\begin{eqnarray*}
vol(U) & \leq & vol(W) + \sum_{i=1}^m \Big(d(y_i,W) + \sum_{h \in \mathcal{B}_i}d(y_i, h \cdot y_i)\Big) \\
& \leq & vol(W) + \sum_{i=1}^m (|\mathcal{B}_i|+1) \varepsilon \\
& \leq & \sum_{i=1}^k \theta_i + \left( k + \sum_{i=1}^m(|\mathcal{B}_i|+1) \right) \varepsilon
\end{eqnarray*}
We claim that the translates of $U$ cover $T$. Indeed, the translates of $Hull\{ h \cdot y_i \mid h \in \mathcal{B}_i \cup \{1\} \}$ cover $J_i$ and the orbit of the union of the $J_i$ covers $V$, which is the complement of the translates of the segments $\sigma_1,\dots, \sigma_k$. Therefore, $qvol(T) \leq vol(U) \leq \sum_{i=1}^k \theta_i + \left( k + \sum_{i=1}^m(|\mathcal{B}_i|+1) \right) \varepsilon$. But since the $\sigma_i$'s belong to distinct orbits, $qvol(T) \geq \sum_{i=1}^k \theta_i$. In conclusion, $qvol(T) = \sum_{i=1}^k \theta_i$ and is within a multiple of $\varepsilon$ away from $vol(U)$.
We define the tree $T'$ to be the result of collapsing the sets $g \cdot J_i$ for each $i \in \{1,\dots, m\}$ and $g \in F_n$. Let $p \colon T \to T'$ be the quotient map. Then $T'$ is a simplicial tree whose edges correspond to translates of the $\sigma_i$'s. Let $U' = p(U)$ then $U'$ is connected. The set
$P' = \{ g \in F_n \mid |g|>0 \text{ and } gU' \cap U' \neq \emptyset \}$ is finite. Let
\[ \mathcal{S} = P' \cup \bigcup_{i=1}^k \mathcal{B}_i. \]
It is elementary to show that $\mathcal{S}$ generates the set $P = \{ g \in F_n \mid gU' \cap U' \neq \emptyset \}$. Since $P$ generates $F_n$ (Proposition \ref{Pgenerates}), then $\mathcal{S}$ generates $F_n$. Moreover, it is also clear that $g \cdot U \cap U \neq \emptyset$ for each $g \in \mathcal{S}$. We leave the details to the reader.
\end{proof}
\begin{prop}\label{forwardDirection}
If $X$ is a limit of a forwards Cauchy sequence in $\mathcal{X}_n$ then it has unit quotient volume.
\end{prop}
\begin{proof}
Let $x_m$ be a forward Cauchy sequence and $X$ its limit.
For all $\varepsilon>0$ there is an $N(\varepsilon)$ so that for $m>N(\varepsilon)$, $\text{Lip}(f_{m,\infty}) < 1+\varepsilon$. Therefore, $qvol(X) \leq (1+ \varepsilon) qvol(X_m) = 1+\varepsilon$, since $\varepsilon$ was arbitrary, $qvol(X) \leq 1$. \\
\indent To show the other inequality: suppose by contradiction that $qvol(X)<1$. By Proposition \ref{FindUandS} there exists a finite
subtree $U$ in $X$
such that $vol(U) = c <1$ and a finite generating set $\mathcal{S}$ of $F_n$ so that for all
$g \in \mathcal{S}$, $gU \cap U \neq \emptyset$.
Suppose that $U$ is a union of $k$ non-overlapping
segments. Let $\varepsilon = \frac{1-c}{6nk}$ and assume $m$ is
large enough so that there is a set $U' \subset X_m$ with an $\mathcal{S}$-invariant
$\varepsilon$-relation to $U$.
Let $\theta_1, \dots, \theta_k$ be the lengths of the arcs $\sigma_1, \dots, \sigma_k$ of $U$. Then there exist arcs $\tau_1, \dots \tau_k$ in $U'$ so that
$|len(\tau_i) - len(\sigma_i)|< \varepsilon$ for each $i$.
If $\tau_1, \dots , \tau_k$ do not cover $U'$ let $\tau_{k+1}, \dots , \tau_J$ be the remaining arcs. Since the valence at each vertex is no greater than $3n-3$ we have $J<(3n-3)k$. Moreover, since the $\varepsilon$-relation is onto, the length of $\tau_j$ for $k+1\leq j \leq J$ is no greater than $\varepsilon$. Therefore,
\[ vol(U') \leq \sum_{i=1}^J len(\tau_i) \leq \sum_{i=1}^k len(\tau_i) + J \varepsilon \leq vol(U) + (k+J)\varepsilon<1 \]
However, for each $g \in \mathcal{S}$, $gU' \cap U' \neq \emptyset$. Therefore, $qvol(X_m) \leq vol(U')<1$ which is a contradiction.
\end{proof}
\begin{prop}\label{forwardDirection2}
If $X$ is a limit of a forwards admissible sequence in $\mathcal{X}_n$ then it admits no arc with a non-trivial stabilizer.
\end{prop}
\begin{proof}
Let $\{X_m\} \subset \mathcal{X}_n$ be a forwards admissible sequence and $X$ its limit.
The idea is
that an arc stabilizer will take up a definite part of the volume which would lead to
$X_m$ having less than unit quotient volume.
Let $X'$ be the simplicial tree provided in Proposition \ref{FindUandS} and let $\theta$ be the length of the smallest edge in $X'$. We take $\varepsilon$ to be smaller than $\theta/3$ and let $U$ be a finite subtree of $X$ so that $vol(U)< 1 + \varepsilon$ and let
$\mathcal{S} \subset F_n$ be a generating set such that for each $g \in \mathcal{S}$, $gU \cap U \neq \emptyset$.
Suppose $U$ contains an arc
$\nu$ and $a \in F_n$ such that $a \cdot \nu = \nu$.
A segment with a non-trivial
stabilizer cannot be contained in a subtree on which the action has dense orbits (see e.g. \cite[Lemma 4.2]{LL}). Thus we may choose $\nu$ with $len(\nu,X) \geq \theta$.
Let $U' \subset X_m$ be a set with an $\mathcal{S} \cup \{a\}$ equivariant $\varepsilon$-relation to
$U$.
As in the proof of Proposition \ref{FindUandS} we have $vol(U') < qvol(X) + (3n-2)k \varepsilon$.
Let $\nu'=[p,q]$ be a segment approximating $\nu$ in $U'$, then
$len(\nu') \geq \theta - \varepsilon$.
We claim that $len(a \nu' \cap \nu') > len(\nu') - 2 \varepsilon$.
The segments $[p,ap]$ and $[q,aq]$
have length bounded above by $\varepsilon$; choosing $\varepsilon< \frac{len(\nu')}{3}$
we have $[p,ap] \cap [q,aq] = \emptyset$. Let $[m,n]$ be the bridge between $[p,ap]$ and $[q,aq]$.
Then $len([m,n])> len(\nu') - 2 \varepsilon > \varepsilon$ and $[m,n] = \nu' \cap a \nu'$.
Since the action of $a$ is hyperbolic, both segments $[p,ap]$ and $[q,aq]$ intersect $A_{X_m}(a)$. Therefore $[m,n] \subset A_{X_m}(a)$ and $l(\rho_m(a),X_m)< \varepsilon$.
Thus we may chop off most of the segment $[m,n]$ in $U'$ leaving a segment of length $ \leq \varepsilon$, and get a set $U''$ whose translates cover $X_m$. However,
\begin{eqnarray*}
vol(U'') & \leq &vol(U') - ( len[m,n] - \varepsilon) \\ & \leq & (vol(U) + (3n-2)k \varepsilon) - ( (\theta - 3\varepsilon) - \varepsilon)
< 1- \theta + (3n+3)k \varepsilon. \end{eqnarray*}
Therefore, if $\varepsilon< \frac{\theta}{2(3n+3)k}$ then $vol(U'')<1$ and $X_m$ is covered by translates of $U''$ which is a contradiction to $qvol(X_m) = 1$. Hence there are no arcs with non-trivial stabilizers.
\end{proof}
A very small $F_n$ tree $T$ gives rise to a graph of actions.
\begin{defn}\label{graphOfDefn}
\cite{Lev} A \emph{graph of actions} $\mathcal{G}$ consists of
\begin{enumerate}
\item a \emph{metric graph of groups} which consists of the following data: a metric graph $G$, with vertex groups $H_v$, and edge groups $H_e$ and injections $i_e: H_e \to H_v$ when $v$ is the initial point of the oriented edge $e$.
\item for every vertex $v$, an action of $H_v$ on an ${\mathbb R}$-tree $T_v$.
\item for every oriented edge $e$ with $v = i(e)$ there is a point $q_e \in T_v$ which is fixed under the subgroup $i_{e}(H_e)$.
\end{enumerate}
\end{defn}
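For example, the one-edge free splitting $F_2 = \langle a \rangle * \langle b \rangle$ can be encoded as a graph of actions (a simple illustration of the definition): $G$ is a single edge with vertex groups $\langle a \rangle$ and $\langle b \rangle$ and trivial edge group, each vertex tree $T_v$ is a single point on which $H_v$ acts trivially, and the attaching points $q_e$ are these points; developing this data recovers the Bass-Serre tree of the splitting.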
Let $T$ be a very small $F_n$ tree; one constructs the simplicial $F_n$-tree $T'$ and $p: T \to T'$ as described in Proposition \ref{FindUandS}. The graph $G$ is the quotient of $T'$ by the $F_n$-action. Lifting an edge $e$ of $G$ to $e'$ in $T'$ define $H_e = \text{Stab}(e')$ and $H_{i(e)} = \text{Stab}_{F_n}(i(e'))$. For a vertex $v$ in $G$ let $v'$ be some lift of $v$ and let $T_v$ be the preimage of $v'$ under $p$. The point $q_e \in T_v$ is the point where the segment of $T$ corresponding to $e'$ meets $T_v$.
Conversely, given a graph of actions, one can combine the universal cover of $G$ with the trees $T_v$ to obtain a very small $F_n$-tree \cite[Theorem 5]{Lev}.
\begin{prop}\label{backDir}
If $T$ is a very small $F_n$ tree with unit quotient volume and no arc stabilizers then there exists a forwards Cauchy sequence $\{X_m\} \subset \mathcal{X}_n$ such that $\lim_{m \to \infty} X_m = T$.
\end{prop}
\begin{proof}
Let ${\mathcal G}$ be the Levitt graph of actions of $T$. Since all edge groups are trivial,
all vertex groups are free factors. Let $V$ be the set of vertices of ${\mathcal G}$ with non-trivial vertex
groups. For each $v \in V$ there is a tree $R_v$ in $T$, invariant under (a conjugate of) the vertex group $H_v$
and so that $H_v$ acts on $R_v$ with dense orbits. Levitt and Lustig \cite{LL} show that
for every $\varepsilon$ there is a free simplicial tree $S_{v,\varepsilon}$ that admits a 1-Lipschitz equivariant map $f_{v,\varepsilon} \colon S_{v,\varepsilon} \to R_v$ and $qvol(S_{v, \varepsilon}) = \varepsilon$. For an edge $e$ in $G$ (the underlying graph of ${\mathcal G}$) let $q_e^\varepsilon = f_{i(e),\varepsilon}(q_{i(e)})$.
We construct a new graph of actions ${\mathcal G}_\varepsilon$ by replacing $R_v$ and $q_e$ in ${\mathcal G}$ with $S_{v,\varepsilon}$ and $q_e^\varepsilon$.
The resulting graph of actions ${\mathcal G}_\varepsilon$ can be developed to a simplicial tree $X_\varepsilon$ with quotient volume equal to $1+M \varepsilon$ where $M$ is bounded uniformly by $n$.
There is an equivariant $1$-Lipschitz map $f_{\varepsilon}$ from $X_\varepsilon$ to $T$ that restricts to $f_{v,\varepsilon}$ on each tree $S_{v,\varepsilon}$. Moreover, for
$\varepsilon'<\varepsilon$ there is an equivariant $1$-Lipschitz map $f_{\varepsilon,\varepsilon'}$ from $X_{\varepsilon}$ onto $X_{\varepsilon'}$.
To make $X_\varepsilon$ have unit quotient volume we replace $X_\varepsilon$ with $X_\varepsilon'$ by rescaling the edges (outside of the dense trees) by $1/\left( 1+ \sum_{v \in V} qvol(S_{v,\varepsilon}) \right)$. Thus $qvol(X_\varepsilon') = 1$. Moreover, define $f_{\varepsilon,\varepsilon'}'$ to be the affine map induced by $f_{\varepsilon,\varepsilon'}$ and $f_\varepsilon'$ the map induced by $f_\varepsilon$. Then $\text{Lip}(f_\varepsilon') \leq 1+|V|\varepsilon$ and also $\text{Lip}(f_{\varepsilon,\varepsilon'}') \leq 1+|V|\varepsilon$. Therefore, $\{X_{1/m}\}$ is a forwards Cauchy sequence and $T$ is its forward limit.
\end{proof}
We can now prove Theorem \ref{myB} from the introduction.
\begin{proof}[Proof of Theorem \ref{myB}]
This follows from Propositions \ref{forwardDirection}, \ref{forwardDirection2} and Proposition \ref{backDir}.
\end{proof}
\section{Description of the distance in the completion}
The distance in the completion is defined in Definition \ref{defHat}. We will show that it can also be defined as follows: Let $\mathcal{C}os$ be the set of very small $F_n$-trees with unit quotient volume and no non-trivial arc stabilizers. For $X,Y \in \mathcal{C}os$ we let
\begin{equation}\label{Cosdistance}
d'(X,Y) = \log \sup \left\{ \left. \frac{l(g,Y)}{l(g,X)} \right| g \in F_n \right\}
\end{equation}
Notice that for $X,Y \in \mathcal{X}_n$, $d'(X,Y) = d(X,Y)$.
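For a concrete illustration in rank two (with the identity marking on the rose, chosen only for this example), let $X$ be the rose with both petals $a,b$ of length $\tfrac12$ and let $Y$ be the rose with petal lengths $s$ and $1-s$. A cyclically reduced word with $p$ occurrences of $a^{\pm1}$ and $q$ occurrences of $b^{\pm1}$ has length $\tfrac{p+q}{2}$ in $X$ and $ps+q(1-s)$ in $Y$, so
\[ d'(X,Y) = \log\max\{2s, 2(1-s)\}, \qquad d'(Y,X) = \log\max\Big\{\tfrac{1}{2s}, \tfrac{1}{2(1-s)}\Big\}, \]
both realized on the petals; in particular the two directed distances differ whenever $s \neq \tfrac12$.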
\begin{prop}\label{propd'}
The function $d' \colon \mathcal{C}os \times \mathcal{C}os \to {\mathbb R}\cup \{\infty\}$ satisfies:
\begin{enumerate}
\item The directed triangle inequality.
\item If $d'(X,Y) = d'(Y,X) = 0$ then $X=Y$.
\item If there exists $g \in F_n$ with $l(g,X)=0$ and $l(g,Y)>0$ then $d'(X,Y) = \infty$.
\item If there exists an $L$-Lipschitz equivariant map $f \colon X \to Y$ then $d'(X,Y) \leq \log L$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item This is obvious from the properties of $\sup$.
\item If $X \neq Y$ then their length functions are different. Therefore, there exists $g \in F_n$ such that $l(g,X)<l(g,Y)$ or $l(g,Y)< l(g,X)$, which implies $d'(X,Y) \neq 0$ or $d'(Y,X) \neq 0$.
\item This is obvious.
\item For each $p,q \in X$ we have $d(f(p), f(q)) \leq L d(p,q)$. Therefore, every $g$ that is elliptic in $X$ is also elliptic in $Y$. Moreover, if $g$ is hyperbolic in $X$ with translation length $l(g,X)$ then $l(g,Y) \leq L l(g,X)$. $\qedhere$
\end{enumerate}
\end{proof}
\begin{prop}\label{distIsLip}
For every $X,Y \in \mathcal{C}os$, $d'(X,Y) = \hat d(X,Y)$.
\end{prop}
\begin{proof}
By forwards continuity, $\hat d = d = d'$ for $X,Y \in \mathcal{X}_n$.
Let $\{ X_m \}_{m=1}^\infty, \{Y_k \}_{k=1}^\infty$ be Cauchy sequences in $\mathcal{X}_n$ such that $X=\lim_{m\to \infty}X_m$ and $Y=\lim_{k \to \infty}Y_k$ as length functions.
We need to prove:
\begin{enumerate}
\item $d'(X,Y) = c< \infty$ if and only if for all $\varepsilon>0$ there exists an $N=N(\varepsilon)$ such that for all $m>N$ there is a $K=K(m,\varepsilon)$ such that $| d(X_m,Y_k) -c |<\varepsilon$ for all $k>K$.
\item\label{case2} $d'(X,Y) = \infty$ iff for all $r$ there is an $N(r)$ such that for all $m > N(r)$ there is a $K(m,r)$ such that $d(X_m, Y_k)> r$ for all $k>K$.
\end{enumerate}
By the triangle inequality we have $d'(X,Y) \geq d'(X_m,Y) - d'(X_m,X)$, thus by Proposition \ref{propd'}(4) for large enough $m$,
\begin{equation}\label{triangle}
d'(X,Y) \geq d'(X_m,Y) - \varepsilon
\end{equation}
Since there exists an equivariant map $X_m \to Y$, the distance $d'(X_m, Y)$ is finite (by Proposition \ref{propd'}(4) ).
Fix $m$, and let $\beta_1, \dots ,\beta_s$ be the list of candidates of $X_m$.
Choose $K= K(m,\varepsilon)$ large enough so that for all $k>K$:
\begin{enumerate}
\item if $l(\beta_i, Y) = 0$ then $l(\beta_i, Y_k) < injrad(X_m)$ and
\item if $l(\beta_i,Y) >0$ then $|l(\beta_i,Y_k) - l(\beta_i,Y)| < \varepsilon l(\beta_i,Y)$. This is possible since the list of candidates in $X_m$ is finite, and $\lim_{k \to \infty} l(\beta_i, Y_k) = l(\beta_i,Y)$.
\end{enumerate}
Let $\gamma$ be a candidate of $X_m$ that is elliptic in $Y$; by item (1),
$\frac{l(\gamma,Y_k)}{l(\gamma,X_m)}<1$, so $\gamma$ cannot
realize the distance $d(X_m,Y_k)$.
Let $\delta$ be the candidate that is a witness to the distance $d(X_m,Y_k)$
then by item (2), \[l(\delta,Y_k) \leq (1+\varepsilon) l(\delta,Y)\]
dividing by $l(\delta,X_m)$ and taking the log we get $d(X_m,Y_k) \leq \log(1+\varepsilon) + d(X_m,Y)$. Combining this with inequality (\ref{triangle}), there is a constant $C$ so that for all large enough $m,k$:
\begin{equation}\label{eqFirstDir}
d'(X,Y) \geq d'(X_m,Y_k) - C \varepsilon.
\end{equation}
If $d'(X,Y) < \infty$ then for all $\varepsilon$ there is some $b \in F_n$ that $\varepsilon$ approximates the distance in equation \ref{Cosdistance}.
Thus, $l(b,X)>0$ and let $N(\varepsilon)$ be such that for all $m>N(\varepsilon)$,
$|l(b, X_m) - l(b,X)|< \varepsilon l(b,X)$.
Thus \[\frac{l(b,Y)}{l(b,X_m)} \geq \frac{l(b,Y)}{(1+\varepsilon) l(b,X)}\]
which implies $d'(X,Y) \leq d'(X_m,Y) + \log(1+\varepsilon)$ for $m>N(\varepsilon)$. By the
triangle inequality, $d'(X_m,Y) \leq d'(X_m,Y_k) + d'(Y_k,Y)$. Thus
for $K(\varepsilon)$ with the property that $k>K(\varepsilon)$ implies $d'(Y_k,Y)<\varepsilon$, we have
\begin{equation}\label{eqOtherDir}
d'(X,Y) \leq d'(X_m,Y_k) + \varepsilon + \log(1+\varepsilon)
\end{equation}
for all $m>N(\varepsilon)$ and $k>K(\varepsilon)$. Thus, if $d'(X,Y)<\infty$ then equations \ref{eqFirstDir} and \ref{eqOtherDir} imply $d'(X,Y) = \hat d(X,Y)$.
If $d'(X,Y) = \infty$ then either there is some $\beta$ so that $l(\beta,X) = 0$ and $l(\beta,Y) >0$, or for all $r>1$ there is some $\beta$ in $X$ so that $\frac{l(\beta,Y)}{l(\beta,X)} > 2r$. If the former occurs, then there exist $N$ and $K$ so that for $m>N,k>K$ we have $l(\beta,X_m)<\frac{l(\beta,Y)}{r }$ and $l(\beta,Y_k) \geq (1 - \frac{1}{r})l(\beta,Y)$ thus $\frac{l(\beta,Y_k)}{l(\beta,X_m)}\geq \frac{(1-\frac{1}{r})l(\beta,Y)}{\frac{1}{r} l(\beta,Y)} \geq r-1$ and $d(X_m,Y_k) > \log(r-1)$. If the latter occurs, then $l(\beta,X), l(\beta,Y) >0$ and there are $N,K$ large enough so that for all $m>N, k>K$ we have $l(\beta,X_m) \leq (1 + \frac{1}{r})l(\beta,X)$ and $l(\beta,Y_k) \geq (1 - \frac{1}{r})l(\beta,Y)$. Thus $\frac{l(\beta,Y_k)}{l(\beta,X_m)} \geq \frac{(1-\frac{1}{r}) l(\beta,Y)}{(1+\frac{1}{r})l(\beta,X)} \geq r$ and $d(X_m,Y_k)>\log r$ for $m>M$ and $k>K$.
In both cases we have shown item (\ref{case2}).
\end{proof}
\begin{notation}
From now on we denote both $\hat d$ and $d'$ by $d$.
\end{notation}
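As an illustration of the infinite values allowed by Proposition \ref{propd'} (with the identity marking, chosen only for this example): in rank two let $X \in \mathcal{X}_2$ be the rose with petals $a,b$ of length $\tfrac12$, and let $Y \in \mathcal{C}os$ be the Bass-Serre tree of the one-edge-loop splitting in which $\langle b \rangle$ is the vertex group and $a$ labels the loop of length $1$. Then $l(g,Y)$ is the number of occurrences of $a^{\pm1}$ in a cyclically reduced representative of $g$, so $d(X,Y) = \log 2$ (realized by $g=a$), while $d(Y,X) = \infty$ since $b$ is elliptic in $Y$ but hyperbolic in $X$.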
\begin{comment}
\begin{prop}
Let $\{X_m\} \subset \mathcal{X}_n$ be forwards Cauchy and let $X$ be its limit in the axes topology. Then $X$ is the CLS of $\{X_m\}$.
\end{prop}
\begin{proof}
For all $g \in F_n$ we have $\lim_{m \to \infty} l(g,X_m) = l(g,X)$. This implies $ \lim_{m \to \infty} d(X_m, X) = 0$ by Equation (\ref{Cosdistance}). Moreover, if $Y$ is another limit then for all $\varepsilon>0$ there is an $N(\varepsilon)$ such that for all $m>N(\varepsilon)$, $l(g,Y) \leq (1+\varepsilon) l(g, X_m)$. This implies $l(g,Y) \leq l(g,X)$. Hence $d(X,Y)=0$.
\end{proof}
\end{comment}
When $T$ is any tree, the supremum in the formula for the distance might not be realized. We now show that if $T$ is simplicial then the supremum is realized and can be obtained by taking a maximum on a finite set of conjugacy classes which we call candidates. We first recall the definition of the Bass Group of a graph of groups.
\begin{defn}[The Bass group of a graph of groups.]
Given a graph of groups ${\mathcal G} = (G, \{ G_v\}_{v \in V}, \{H_e\}_{e \in E}, \{ i_e: H_e \to G_{ter(e)} \})$, where $V$ is the set of vertices of $G$ and $E$ the set of edges of $G$ (see Definition \ref{graphOfDefn}), let $F_E = F(\{t_e \mid e \in E\})$ denote the free group on the basis $\{t_e \mid e \in E\}$. The Bass group of ${\mathcal G}$, denoted $\mathcal{B}({\mathcal G})$, is the quotient of the free product
\[*_{v \in V} G_v * F_E / R \] where $R$ is the normal subgroup generated by
\begin{enumerate}[a)]
\item $t_e^{-1} = t_{\bar e}$
\item $t_e i_e(g) t_e^{-1} = i_{\bar e}(g)$ for all $e \in E$ and $g \in H_e$.
\end{enumerate}
A connected word $w$ in the Bass group has the form
$w=r_0 t_1 r_1 t_2 r_2 \dots t_q r_q$ for which there is an edge path $e_1 \dots e_q$ in $G$ such
that $r_0 \in G_{ini(e_1)}$, $r_i \in G_{ter(e_i)}$ and $t_i = t_{e_i}$. The word
$w$ is a cyclic word if the edge path it follows is a loop.
$w$ is reduced if either $r_0 \neq 1$ and $q=0$ or, $q>0$ and $w$ does not contain a subword of the form $t_e i_e(g) t_e^{-1}$.
The \emph{fundamental group of the graph of groups} $\pi({\mathcal G},P)$ based at the point $P \in G$ is the subgroup of $\mathcal{B}({\mathcal G})$ of all cyclic words based at $P$. See \cite{CohL} for more detailed definitions.
\end{defn}
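For example (an illustration of the definitions only), consider the graph of groups with a single vertex with group $\langle b \rangle$ and a single loop edge $e$ with trivial edge group. The Bass group is $\langle b \rangle * \langle t_e \rangle$ (the relations only identify $t_{\bar e}$ with $t_e^{-1}$), the word $b\, t_e\, b^2$ is a reduced cyclic word following the loop $e$, and the fundamental group based at the vertex is generated by $b$ and $t_e$, recovering a free group of rank two.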
\begin{defn}\label{can}
A candidate $\alpha$ in a marked metric graph of groups $x$ is a cyclically reduced cyclic word of the Bass group that follows a path of the following type:
\begin{enumerate}
\item an embedded loop
\item an embedded figure 8
\item a barbell
\item a barbell whose bells are single points
\item a barbell which has one proper bell and one collapsed bell.
\end{enumerate}
\end{defn}
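For example, in the barbell graph with trivial vertex groups, two embedded loops $a$ and $b$ joined by a separating arc $\delta$ (a hypothetical marked graph, used only for illustration), the candidates are represented by the embedded loops $a$ and $b$ together with the barbell word $a \delta b \bar\delta$, which crosses the arc $\delta$ twice.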
The following Proposition is a generalization of what happens in Outer Space.
\begin{prop} \label{ConstructLipMap}
If $S$ is simplicial and $T \in \mathcal{C}os$ let
\[ M_{stretch} = \max \{ st(\alpha,S,T) \mid \alpha \text{ a candidate}\}\]
and let
$m_{lip}= \infty$ if no equivariant Lipschitz map $S \to T$ exists, and otherwise
\[m_{lip} = \min \{ \text{Lip}(h) \mid h:S \to T \text{ an equivariant Lipschitz map} \}.\] Then
\[ d(S,T) = \log M_{stretch} = \log m_{lip} \]
\end{prop}
\begin{proof}
We wish to show that if one of the quantities in the equations is infinite then so is the other.
The proof that the quantities are equal in the case that they are finite is identical to the case both trees are in $\mathcal{X}_n$ and can be found in \cite{FM}.
If there exists some equivariant Lipschitz map $f: S \to T$ then $M_{stretch} \leq \text{Lip}(f)$. Thus, if $M_{stretch} = \infty$ then there is no Lipschitz map $S \to T$.
Conversely,
suppose that $M_{stretch} < \infty$ so in particular, all of
the elliptic elements of $S$ are also elliptic in $T$.
More generally, if $H<F_n$ is elliptic in $S$ then it must be elliptic in $T$. Indeed suppose $g,g'$ both fix $p \in S$ and $g$ fixes $q$ in $T$ and $g'$ fixes $q'$ in $T$ with $q \neq q'$. Then $gg'$ is elliptic in $S$ (fixing $p$) but in $T$, since there are no fixed arcs, it is hyperbolic.
We wish to construct an equivariant map from $S$ to $T$.
Let $g \in F_n$ be elliptic in $S$ fixing $p \in S$. Suppose $g$ fixes $q$ in $T$. Define $f \colon S \to T$ by setting $f( a \cdot p) = a \cdot q$ for all $a \in F_n$ and extending $f$ piecewise linearly on edges (there are some choices here). We claim that $f$ is well defined. Indeed, if $a \cdot p = b \cdot p$ then $a^{-1}b$ stabilizes $p$. Therefore, $a^{-1}b \in Stab_S(p) \subset Stab_T(q)$. Hence $a \cdot q = b \cdot q$. In conclusion, $M_{stretch} < \infty$ implies the existence of a Lipschitz map and hence $m_{lip}< \infty$.
\end{proof}
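The reduction in Proposition \ref{ConstructLipMap} is effectively computable once the (finite) candidate list and the corresponding translation lengths are known. The following sketch illustrates the computation; the input dictionaries and the candidate labels are hypothetical data supplied by the caller, not part of the paper.
\begin{verbatim}
import math

def lipschitz_distance(lengths_S, lengths_T):
    """Log of the maximal stretch over a finite candidate list.

    lengths_S, lengths_T: dicts mapping a candidate label to its
    translation length in S, resp. T (0 means the candidate is elliptic).
    Returns d(S, T) = log of the maximal stretch, or infinity if some
    candidate is elliptic in S but hyperbolic in T.
    """
    stretch = 0.0
    for alpha, lS in lengths_S.items():
        lT = lengths_T[alpha]
        if lS == 0:
            if lT > 0:            # elliptic in S, hyperbolic in T
                return math.inf
            continue              # elliptic in both: no constraint
        stretch = max(stretch, lT / lS)
    return math.log(stretch) if stretch > 0 else float("-inf")

# Rank-two roses with identity marking: petals of lengths (1/2, 1/2) in S
# and (3/4, 1/4) in T; the candidates are the petals and the figure eights.
S = {"a": 0.5, "b": 0.5, "ab": 1.0, "ab^-1": 1.0}
T = {"a": 0.75, "b": 0.25, "ab": 1.0, "ab^-1": 1.0}
print(lipschitz_distance(S, T))   # log(3/2), approximately 0.405
\end{verbatim}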
\begin{question}
Does Proposition \ref{ConstructLipMap} hold even when $S$ is not simplicial?
\end{question}
When $T$ is non-simplicial, there is a collapse map to a simplicial tree
$T \to T'$ so that $d(T,T') = 0$; see Proposition \ref{FindUandS}. The next
proposition shows that this cannot happen when $T$ is simplicial. This will be important in the proof that the simplicial completion is invariant under an isometry.
\begin{prop}\label{zero_distance}
If $X$ is simplicial and $Y \in \mathcal{C}os$ so that $d(X,Y) = 0$ then $X=Y$.
\end{prop}
\begin{proof}
By Proposition \ref{ConstructLipMap}, there is an equivariant map
$f:X \to Y$ that is 1-Lipschitz. The map $f$ is onto (since $Y$ is minimal). Any map
can be homotoped without increasing its Lipschitz constant, so that the restriction of
the new map on each edge is an immersion or a collapse to one point. No edge is
stretched since $\text{Lip}(f) = 1$. No edge is shrunk or collapsed because
$qvol(X) =1= qvol(Y)$. So $f$ restricted to each edge is an isometry.
We claim that $f$ is an immersion which would finish the proof.
Indeed if $f$ is not
injective then $f(p) = f(p')$ and $f|_{[p,p']}$ is not immersed. Suppose $f$ is not
an immersion at a neighborhood of $v$. Then there are two edges $e_1, e_2$ incident at $v$ so that
$f(e_1), f(e_2)$ define the same germ. If $e_1, e_2$ are not in the same orbit then
this would contradict $qvol(X) = qvol(Y)$ by Proposition \ref{forwardDirection} ($f$ loses a definite part of the volume).
So assume there is a $g \in F_n$ such that $g \cdot e_1 = e_2$ (with the appropriate
orientation). But then $f( g \cdot e_1 ) = f(e_1)$ so $g$ stabilizes a non-trivial segment in
$Y$ which contradicts Proposition \ref{forwardDirection2}. Hence $f$ is a surjective isometric immersion
i.e. an isometry. In particular, $Y$ is simplicial. \end{proof}
\section{Topologies on the simplicial metric completion}
\begin{defn}\label{defnSplittingComplex}
A \emph{free splitting} of $F_n$ is a simplicial tree $S$ along with a simplicial $F_n$ action
on $S$, so that this action is minimal, non-trivial, irreducible, and edge stabilizers are
trivial. The quotient graph $S/F_n$ is a finite graph and we say the \emph{covolume}
of $S$ is the number of edges in this quotient graph. Two free splittings $S,S'$ are
compatible if there is a free splitting $S''$ and equivariant edge collapses $S'' \to S$
and $S'' \to S'$. \\
The \emph{free splitting complex} $\text{FS}_n$ is the simplicial complex whose set of
vertices is the set of free splittings of covolume 1 and there is a simplex spanned by
$S_1, \dots S_k$ if they are pairwise compatible. This turns out to be equivalent to
the existence of a splitting $S$ and maps $S \to S_i$ for $i=1, \dots , k$ that are
equivariant edge collapses.
\end{defn}
Outer Space $\mathcal{X}_n$ naturally embeds in $\text{FS}_n$ as a subcomplex with missing faces. If we
add the missing faces we obtain $\text{FS}_n$. Therefore, $\text{FS}_n$ is called the \emph{simplicial
completion} of Outer Space.
\begin{defn}
The (metric) simplicial completion $\mathcal{\hat{X}}_n^S$ is the set of points $X \in \mathcal{C}os$ such that $X$ is simplicial.
\end{defn}
By the characterization of the completion points in the boundary, Theorem \ref{myB}, the set of simplicial trees in the completion is the same as the simplicial completion, i.e. $\text{FS}_n$. In this section we compare two natural topologies on this set.
\begin{defn}[The Euclidean topology on $\text{FS}_n$]
Let $\sigma$ be a simplex in $\text{FS}_n$ and let $d_1(x,y)$ be the $l_1$ metric for $x,y \in \sigma$ (using the edge lengths).
We define $B_\sigma(x,\varepsilon) = \{ y \in \sigma \mid d_1(x,y) <\varepsilon \}$. The Euclidean ball around $x$ is $B_{Euc}(x,\varepsilon) = \cup_{x \in \sigma} B_\sigma(x,\varepsilon)$. The Euclidean topology is the topology generated by Euclidean balls.
\end{defn}
There is also a natural Lipschitz topology on $\mathcal{\hat{X}}_n^S$.
\begin{defn}[The (incoming-)Lipschitz topology on $\mathcal{\hat{X}}_n^S$]
A basis for the Lipschitz topology is the collection of incoming balls, see Definition \ref{defBInOut}.
\end{defn}
The Lipschitz topology will be preserved under isometries of Outer Space. We show that the Euclidean topology coincides with the Lipschitz topology.
\begin{remark}The topology generated by the ``outgoing" balls is different from the Euclidean topology. Consider a point $x$ in the completion so that the underlying graph of the quotient $x/F_n$ is a single, non-separating edge (a one edge loop). For such $x$ and for all $x \neq y \in \mathcal{\hat{X}}_n^S$, $d(x,y) = \infty$. Hence the only open sets of the outgoing-Lipschitz topology containing $x$ are $\{ x \}, \mathcal{\hat{X}}_n^S$.
\end{remark}
\begin{notation}
For $x \in \mathcal{\hat{X}}_n^S$ we denote by $\theta(x)$ the length of the smallest edge of $x$, and by $\text{Inj}(x)$ the
injectivity radius of $x$. Also notice that we are considering two metrics here: the Lipschitz metric $d(x,y)$ and the Euclidean metric $d_1(x,y)$.
\end{notation}
\begin{lemma}\label{lem2}
The function $d( \cdot, x)$ is continuous with respect to the Euclidean topology on $\mathcal{\hat{X}}_n^S$.
\end{lemma}
\begin{proof}
For $y \in \mathcal{\hat{X}}_n^S$ let
$\delta = \min \left\{ \theta(y)\varepsilon, \frac{\text{Inj}(y) \varepsilon}{6n-6} \right\}$,
we claim that for $y' \in B_{Euc}(y,\delta)$ we have $|d(y',x) - d(y,x)|< \varepsilon$.
Let $y' \in B_{Euc}(y,\delta)$ and let $\sigma$ be a simplex containing $y,y'$, and
enumerate the edges in the graph corresponding to $\sigma$ by $e_1, \dots , e_J$.
Thus, $y=(y_1, \dots , y_J), y' = (y_1', \dots , y_J')$ in the $\sigma$ coordinates, and
$|y'_i - y_i|< \delta$. By our choice of $\delta$, for $i$ so that $y_i>0$, $\frac{y_i}{y_i'}< \frac{y_i}{y_i-y_i\varepsilon}= \frac{1}{1-\varepsilon}$. Therefore,
\begin{equation}\label{eq1024}
d(y',x) \leq d(y',y) + d(y,x) \leq \log\left( \frac{1}{1-\varepsilon}\right) + d(y,x)
\end{equation}
Let $\alpha$ be a realizing candidate for $d(y,y')$. Since $|y_i'-y_i|<\delta$ and $\alpha$ contains no more than $2(3n-3)$ edges we have
\[ \frac{l(\alpha,y')}{l(\alpha,y)} \leq \frac{l(\alpha,y) + 2(3n-3)\delta}{l(\alpha,y)}< 1+ \varepsilon \]
Thus $d(y',x) \geq d(y,x) - d(y,y') \geq d(y,x) - \log(1+\varepsilon) $. Combining with equation (\ref{eq1024}) we conclude the proof.
\end{proof}
\begin{cor}\label{EucFinerThanLip}
The Euclidean topology is finer than the Lipschitz topology.
\end{cor}
To show that they are in fact equal we need the following two lemmas.
\begin{lemma}\label{lem1}
For every simplex $\sigma$ in $\text{FS}_n$
\begin{enumerate}
\item For every $x \in \sigma$ and for every face $\tau$ of $\sigma$ so that $x \notin \tau$ there is an $\varepsilon(x,\tau)>0$ such that $d(\tau,x)\geq\varepsilon(x,\tau)$.
\item Let $x \in \sigma$, for all $r>0$ there is a $t(x,\sigma,r)$ such that if $y \in \sigma$ and $d(y,x)<t(x,\sigma,r)$ then $d_1(y,x)<r$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove part (1), $\tau$ is compact in the Euclidean topology, and $d(\cdot,x)$ is continuous.
Thus $d(\cdot,x)$ achieves a minimum $\varepsilon(x,\tau)$ on $\tau$.
If $\varepsilon(x,\tau) = 0$ then there is a $y \in \tau$ with $d(y,x)= 0$ and by Proposition \ref{zero_distance} $y=x$ but $x \notin \tau$. Therefore $\varepsilon(x,\tau) >0$. \\
To prove part (2), consider for $r>0$,
\[A(x,r) = \{y \in \sigma \mid d_1(y,x) \geq r \}\]
Since $A(x,r)$ is compact in the Euclidean topology, then there exists a minimum $t(x,\sigma,r)$ to the set $\{d(a,x) \mid a \in A(x,r) \}$. As before, $t \neq 0$ since $x \notin A(x,r)$. Therefore, $d(y,x)<t$ and $y \in \sigma$ implies that $d_1(y,x)<r$.
\end{proof}
\begin{lemma}\label{InSimplex}
For every $x \in \mathcal{\hat{X}}_n^S$ there is a constant $\varepsilon(x)>0$ such that
for all $y \in \mathcal{\hat{X}}_n^S$ with
$d(y,x)< \varepsilon$ there exists a simplex $\sigma \in \text{FS}_n$ that contains both $x$ and $y$.
\end{lemma}
\begin{proof}
Let $x$ be contained in the interior of the simplex $\tau$. By Lemma \ref{lem1} for
any simplex $\sigma \supseteq \tau$ and for any face $\tau'$ of $\sigma$ that does not
contain $x$ there is an $\varepsilon = \varepsilon(x,\tau')$ so that $d(\tau',x)> \varepsilon$.
We show that we can find such an $\varepsilon$ independent of $\tau'$. The difficulty is that the link of $x$ is potentially infinite.
Recall that $\textup{Out}(F_n)$ acts cocompactly on $\text{FS}_n$ by simplicial automorphisms.
For an automorphism $\phi \in \textup{Out}(F_n)$, $\varepsilon(\phi(x), \phi(\tau)) = \varepsilon(x,\tau)$. For $H<\textup{Out}(F_n)$ a finite index torsion free subgroup, the quotient $\text{FS}_n/H$ is a finite CW-complex, therefore, there are finitely many isometry types of simplices $\sigma$ containing $x$. Therefore the set
\[ \{ \varepsilon(x,\sigma,\tau) \mid x \in \sigma, x \notin \tau \subset \sigma \}\]
is finite and therefore achieves a minimum $\varepsilon(x)$.
Let $y \in \mathcal{\hat{X}}_n^S$ be such that $d(y,x)< \infty$ and there is no simplex
containing both $x$ and $y$; we claim that $d(y,x)>\frac{\varepsilon(x)}{2}$.
By Proposition \ref{ConstructLipMap} there is an equivariant Lipschitz
map $f:y \to x$.
Let $y'$ be a point in the same simplex as $y$ so that there is a Stallings fold
sequence from $y'$ to $x$ (perturb the edges lengths in $y$ so that the stretch of the
edges of the optimal map are all rational). Moreover, we can guarantee that
$d(y',x)<d(y,x)+ \frac{\varepsilon(x)}{2}$. Let
$f':z \to x$ be the last fold in the sequence. Then $z$ and $x$ are contained in the same
simplex. Moreover, $d(y',x)>d(z,x)$, and $d(z,x) > \varepsilon(x)$ hence $d(y,x)>
\frac{\varepsilon(x)}{2}$.
\end{proof}
\begin{theorem}\label{topo}
The Lipschitz topology and the Euclidean topology on $\mathcal{\hat{X}}_n^S$ coincide.
\end{theorem}
\begin{proof}
By Corollary \ref{EucFinerThanLip} it is enough to show that the Lipschitz
topology is finer than the Euclidean topology. Let $B_{Euc}(x,r)$ be a
ball in the Euclidean topology. By Lemma \ref{InSimplex}, we may
choose $\varepsilon$ small enough so that $B_{\text{in}}(x,\varepsilon)$ is contained in the star of $x$.
By lemma \ref{lem1}, there is a $t(x,\sigma,r)$ so that for all $y\in \sigma$ if
$d(y,x)<t(x,\sigma,r)$ then $d_1(y,x)<r$. We need to find $t$ that works for
all $\sigma$ containing $x$.
As in the proof of Lemma \ref{InSimplex} we use the simplicial action of $\textup{Out}(F_n)$ on $\text{FS}_n$.
The quotient $\text{FS}_n/H$ where $H<\textup{Out}(F_n)$ is finite index and torsion free is compact.
Since $\textup{Out}(F_n)$ acts by Lipschitz isometries as well as Euclidean
isometries
$t(x,\sigma,r)$ is invariant under this action. Thus the set
\[ \{ t(x,\sigma,r) \mid x \in \sigma \} \]
is finite, and therefore achieves a minimum $t(x,r)$.
Thus, for $r>0$ let $\delta = \min\{ \varepsilon(x), t(x,r) \}$ (where $\varepsilon(x)$ is
the constant from Lemma \ref{InSimplex}) then if $d(y,x)<\delta$ then there
exists a simplex $\sigma$ containing $y$ and $x$ and moreover, $d_1(y,x)<r$.
\end{proof}
This completes the proof of Theorem \ref{myC}.
\begin{remark}
The Gromov/Axes topology on $\mathcal{\hat{X}}_n^S$ is strictly finer than the Lipschitz/Euclidean topology. This was included in a previous preprint but we chose to omit it for the sake of brevity. However, to see that they are not the same is quick. Consider $F_2=\langle a,b \rangle$ and let $x$ be the graph of groups with one vertex whose vertex group is $\langle b \rangle$ and one edge labeled $a$. Now suppose $y$ is a point in the simplex with a graph of groups which is a rose with edges labeled $ab^i, b$. Then if $y$ is in the Gromov-topology-neighborhood $U(a,\varepsilon)$ of $x$ then the edge labeled $b$ in $y$ has to have length shorter than $\frac{\varepsilon}{i}$. Since the length depends on $i$, it depends on $\sigma$. Thus, there is no open set in the Euclidean topology that is contained in $U(a,\varepsilon)$.
\end{remark}
\section{The isometries of Outer Space}
\begin{prop}\label{isomExtend}
Every Lipschitz isometry $F:\mathcal{X}_n \to \mathcal{X}_n$ extends to an isometry of the completion $\hat F:\mathcal{C}os \to \mathcal{C}os$. The simplicial metric completion $\mathcal{\hat{X}}_n^S = \text{FS}_n$ is an invariant subspace and $\hat F|_{\mathcal{\hat{X}}_n^S}$ is a homeomorphism of $\text{FS}_n$ with the Euclidean topology.
\end{prop}
\begin{proof}
By Corollary \ref{isomX} and Proposition \ref{conditions} the isometry $F$ extends to an isometry of $\mathcal{C}os$.
We claim that $\mathcal{\hat{X}}_n^S$ is invariant under $\hat F$. The reason is as follows: Suppose $T \in \mathcal{C}os$ is not simplicial. Let $B$ be the set of branch points of $T$. Note that $\overline{B} \neq T$ since $qvol(T) = 1$. By \cite{Lev} we may equivariantly collapse the components of $\overline{B}$ in $T$ to obtain a simplicial $F_n$ tree $T'$ with $qvol(T') = 1$.
Moreover since arc stabilizers in $T$ are trivial then in $T'$ they are trivial as well. This implies that there is a $T' \in \mathcal{C}os$ such that $d'(T,T') = 0$ and $T' \neq T$.
By Proposition \ref{zero_distance} and the previous paragraph, $x \in \mathcal{\hat{X}}_n^S$ if and only if for all $x\neq y \in \mathcal{C}os$, $d(x,y)>0$. Thus $\hat F$ preserves $\mathcal{\hat{X}}_n^S$ and its metric $d$. By Theorem \ref{topo} the Lipschitz topology is the same as the Euclidean topology. \end{proof}
We now work to show that $\hat F|_{\text{FS}_n} \colon \text{FS}_n \to \text{FS}_n$ is simplicial.
\begin{defn}
A blowup of a graph of groups $G$ is a graph of groups $G'$ together with a marking preserving map $c \colon G' \to G$ such that $c$ collapses a proper subset of $E(G')$.
\end{defn}
\begin{lemma}\label{BlowupLemma}
Let $G$ be a graph of groups representing a free splitting of $F_n$, for $n\geq 3$. Then either:
\begin{enumerate}
\item There are three or more different 1-edge blowups of $G$ in $\text{FS}_n$.
\end{enumerate}
or, the graph $G$ is one of the types $(A), (B)$ or $(C)$ in Figure \ref{exTableFig}, i.e.
\begin{enumerate}\setcounter{enumi}{1}
\item Type A: $G$ has a unique vertex $v$ with non-trivial group $G_v$. $G_v$ is cyclic and the valence of $v$ is 1 and of all other vertices is $3$. In this case there is a single 1-edge blowup of $G$ in $\text{FS}_n$ and no other possible blowups.
\item Type B: $G$ has a unique vertex $v$ with non-trivial group $G_v$ and $G_v$ is cyclic. Moreover, there is a unique embedded circle containing $v$. The valence of $v$ is two and the rest of the vertices have valence $3$. In this case there are two 1-edge blowups and two 2-edge blowups.
\item Type C: $G$ has precisely two vertices $v_1, v_2$ with non-trivial groups $G_{i}$ that are cyclic for $i=1,2$.
The valence of $v_1, v_2$ is one, and all other vertices have valence equal to $3$. In this case there are two 1-edge blowups and one 2-edge blowup.
\end{enumerate}
\end{lemma}
\begin{figure}
\caption{The graphs of groups\label{exTableFig}}
\end{figure}
\begin{proof}
The following types of graphs of groups have three or more 1-edge blowups.
\begin{enumerate}
\item $G$ contains a vertex of valence 4 or more.
\item $G$ contains a vertex with a non-cyclic vertex group.
\item $G$ contains three or more vertices with non-trivial vertex groups.
\item $G$ contains a vertex $v$ with $G_v \neq \{ 1 \}$ and there exist two distinct embedded loops $\beta, \gamma$ containing $v$. See figure \ref{example1}.
\item $G$ contains a vertex $v$ with $G_v \neq \{ 1 \}$ and a separating edge $e$ with $G - e = X \cup Y$ with $v \in X$ and $X - \{ v\} \neq \emptyset$ then again there are infinitely many 1-edge blowups of $G$ (see figure \ref{example2}).
\end{enumerate}
\begin{figure}
\caption{\label{example1}}
\end{figure}
\begin{figure}
\caption{\label{example2}}
\end{figure}
Therefore, the remaining cases are where $G$ contains at most two vertices with non-trivial groups and the remaining vertices have valence $= 3$. Moreover, by (2) the non-trivial groups are cyclic and either $v_i$ is a valence 1 vertex or it has valence two and there is a unique embedded circle containing $v_i$.
These are the possibilities illustrated in Figure \ref{exTableFig}. Notice that type $(D)$ has three 1-edge blowups and case $(E)$ has 4 1-edge blowups. Therefore the graphs with less than three 1-edge blowups are A,B and C.
\end{proof}
\begin{defn}
Let $X$ be a simplicial complex. A point $x \in X$ is $j$-smooth if there exists a neighborhood $U \subset X$ of $x$ that is homeomorphic to ${\mathbb R}^j$.
\end{defn}
\begin{prop}\label{jsmoothprop}
Suppose $X$ is a simplicial complex of dimension $j$. If each $(j-1)$-simplex is contained in either one or at least three $j$-simplices then the $j$-smooth points are the interiors of $j$-simplices.
\end{prop}
\begin{proof}
If $x \in X$ is $j$-smooth then its link is homeomorphic to $S^{j-1}$. In particular, if $x$ lies in the interior of a $(j-1)$-simplex $\tau$ then $\tau$ must be contained in precisely two $j$-simplices.
\end{proof}
\begin{comment}
In Table \ref{dim Table}, we summarize Lemma \ref{BlowupLemma}. For a graph $G$ of the type in Figure \ref{exTableFig}, let $\sigma$ be its corresponding simplex in $\text{FS}_n$. We count the number of simplexes $\sigma$ is contained in of higher dimensions then itself. This will play an important role in the proof of Proposition \ref{HomeoPresSimp}.
\begin{table}[hb]
\centering
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
Type & Codim & \# top & \# top-1 & \# top-2 & \# top-3 & smooth \\
\hline
A & 1 & 1 & & & & no \\ \hline
B & 2 & 2 & 1 & & & no \\ \hline
C & 2 & 1 & 2 & & & 3n-5 \\ \hline
D & 3 & 2 & 3 & 2 & & 3n-6 \\ \hline
E & 4 & 4 & 4 & 5 & 2 & 3n-7 \\ \hline
\end{tabular}
\caption{Let $G$ be one of the graphs in Figure \ref{exTableFig}, and let $\sigma$ be the simplex in $\text{FS}_n$ corresponding to it. This table specifies, according to the type of $G$, the codimension of $\sigma$, the number of top dimensional simplicies that $\sigma$ is contained in, the number of codimension 1,2 and 3 simplicies that $\sigma$ is contained in and whether the points of $\sigma$ are $j$-smooth in the $j$-skeleton of $\text{FS}_n$.}
\leftarrowbel{dim Table}
\end{table}
\end{comment}
\begin{prop}\label{HomeoPresSimp}
$\hat F$ is a simplicial self map of $\text{FS}_n$.
\end{prop}
\begin{proof}
Francaviglia and Martino \cite{FMIsom} prove that if $F$ is a homeomorphism of $\mathcal{X}_n$ then $F$ is simplicial. They prove it by induction on
the codimension. They consider $\mathcal{X}_n^i$ the $i$-skeleton of $\mathcal{X}_n$ and show that every
$(i-1)$-simplex is attached to three or more $i$-simplices.
Therefore, $i$-simplices are the connected components of the $i$-smooth set in $\mathcal{X}_n^i$ and therefore, $F$ must map each $i$-simplex to an $i$-simplex.
This statement is false for $\text{FS}_n$. Consider for example the free splitting complex for $n=2$. Consider the simplex $\sigma$ corresponding to a barbell. This simplex has two codimension 1 faces $\tau_1, \tau_2$ that correspond to collapsing one of the bells, and a codimension 2 simplex $\nu$ that corresponds to collapsing both bells. It is possible to perturb the identity homeomorphism to a homeomorphism $\hat F$ of $\text{FS}_2$ that maps $\tau_1$ into the union $\tau_1 \cup \nu \cup \tau_2$ (but not into a single simplex).
Therefore, here we must use the hypothesis that $\hat F$ preserves the Lipschitz metric on $\text{FS}_n$.
Since top dimensional simplices of $\text{FS}_n$ are in fact contained in $\mathcal{X}_n$ then they are preserved by \cite{FMIsom}.
$\hat F$ restricts to a homeomorphism on the codimension 1 skeleton and preserves the part of it that is contained in $\mathcal{X}_n$. Let $\tau$ be a top dimensional simplex, then $\hat F(\bar \tau) = \overline{ \hat F(\tau)}$. Let $\sigma$ be an open $(3n-5)$-simplex in $\partial \tau$.
Then $\hat F(\sigma)$ is an open set in $\partial F(\tau)$. If $\hat F(\sigma)$ is not contained in a face of $\hat F(\tau)$ then there must exist two points $x,x' \in \sigma$ so that $\hat F(x), \hat F(x')$ lie in different codimension 1 faces of $\hat F(\tau)$.
Two different codimension 1 faces of $\hat F(\tau)$ that are not contained in $\mathcal{X}_n$ correspond to shrinking two different 1-edge loops in the graph corresponding to $\hat F(\tau)$.
Thus, $d(\hat F(x), \hat F(x'))= \infty$
However, $d(x,x')<\infty$ since they belong to the same open simplex. We get a contradiction so $\hat F(\sigma)$ must be contained in a codimension 1 face of $\hat F(\tau)$. Since $\hat F$ is invertible, $\hat F(\sigma)$ is equal to a codimension 1 face. This shows that $\hat F$ sends codimension 1 simplices of $\text{FS}_n$ to codimension 1 simplices and preserves the codimension 2 skeleton.
For $j<3n-5$, by Lemma \ref{BlowupLemma} and Proposition \ref{jsmoothprop}, the connected components of the set of $j$-smooth points of the $j$-skeleton of $\text{FS}_n$ are precisely the open $j$-simplices, so $\hat F$ preserves the set of open $j$-simplices and the induction pushes through.
\end{proof}
\begin{cor}\label{IsomAction}
There is a homomorphism $\phi: \text{Isom}(\mathcal{X}_n) \to \text{Aut}(\text{FS}_n)$.
\end{cor}
\begin{proof}
$\hat G \circ \hat F$ is an isometry of $\mathcal{\hat{X}}_n^S$ that restricts to $G \circ F$ on $\mathcal{X}_n$. By the uniqueness in proposition \ref{extIsom}, $\hat G \circ \hat F = \widehat{G \circ F}$.
\end{proof}
\begin{cor}\label{homo}
For $n\geq 3$ there is a homomorphism $\phi: \text{Isom}(\mathcal{X}_n) \to \textup{Out}(F_n)$. For $n=2$, there is a homomorphism $\phi: \text{Isom}(\mathcal{X}_2) \to \text{PSL}(2,{\mathbb Z})$.
\end{cor}
\begin{proof}
Using Corollary \ref{IsomAction}, we must show that $Aut(\text{FS}_n) \cong \textup{Out}(F_n)$ for $n\geq 3$ and $Aut(\text{FS}_2) \cong \text{PSL}(2,{\mathbb Z})$. For $n \geq 3$ this is a result of
Aramayona and Souto \cite{AS}.
For $n=2$, simplices with free faces in
$\text{FS}_2$ are precisely the graphs with separating edges. Thus, an automorphism
of $\text{FS}_2$ preserves the ``non-separating splitting complex''. This complex is the Farey complex and its automorphism group is $\text{PSL}(2,{\mathbb Z})$. We therefore get a homomorphism $\phi \colon \text{Isom}(\mathcal{X}_2) \to \text{PSL}(2,{\mathbb Z})$.
\end{proof}
We now wish to show that the kernel of this homomorphism is trivial.
\begin{defn}
Let $(G,\tau)$ be a marked graph representing the simplex $\sigma$. For any proper subgraph $\emptyset \neq H \subset G$ let $\sigma_H$ denote the face of $\sigma$ obtained by collapsing each connected component of the complement of $H$.
\end{defn}
\begin{prop}\label{distDetermine}
Let $\sigma$ be a simplex in $\text{FS}_n$ corresponding to the marked graph $G$. Let $\alpha$ be a candidate loop in $G$ and let $\alpha_G$ denote its image in $G$. For every $x \in \text{int}(\sigma)$, let $vol(\alpha,x)$ denote the volume of $\alpha_G$ in $x$, then
\[ d(x,\sigma_\alpha) = \log \frac{1}{vol(\alpha,x)}.\]
\end{prop}
\begin{proof}
Denote $\lambda =1/vol(\alpha,x)$ and let $y \in \bar\sigma$ be the point such that
\[ \left\{ \begin{array}{lr}
len(e,y) = 0 & \text{for } e \subset G-\alpha_G \\
len(e,y) = \lambda len(e,x) & \text{for } e \subset \alpha_G
\end{array} \right. \]
Note that the volume of $y$ is 1 since only the edges of $\alpha_G$ survive.
The natural map $f:x \to y$
stretching the edges in $\alpha_G$ by $\lambda$ and collapsing the others to points satisfies
$\text{Lip}(f) = \lambda$. Therefore $d(x,y) \leq \log\lambda$ and $d(x,\sigma_\alpha) \leq \log\lambda$.
When $\alpha$ is a candidate that is injective except on possibly finitely many points (for example a circle or a figure 8) then in $\bar\sigma$, $l(\alpha, \cdot) = vol(\alpha, \cdot)$. Thus,
$st(\alpha) = \frac{1}{l(\alpha,x)} = \lambda$ and so $d(x,z)\geq \log\lambda$ for every $z \in \sigma_\alpha$. Therefore, $d(x,\sigma_\alpha) = \log\lambda$.
Otherwise $\alpha$ has the form $\alpha=\beta \delta \gamma \bar \delta$ and for each $z \in \sigma_\alpha$ we have
\[ l(\beta,z) + l(\gamma,z) + l(\delta,z) = 1 = \lambda (l(\beta,x) + l(\gamma,x) + l(\delta,x)).\]
If either $l(\beta,z)> \lambda l(\beta,x)$ or $l(\gamma,z)> \lambda l(\gamma,x)$ then $d(x,z)> \log \lambda$.
Otherwise, $l(\delta,z) \geq \lambda l(\delta,x)$ hence
$l(\alpha,z) = 1+ l(\delta,z) \geq 1+\lambda l(\delta,x) =
\lambda vol(\alpha,x) + \lambda l(\delta,x) =
\lambda l(\alpha, x)$.
Hence $d(x,z) \geq \log \lambda$. We thus get that $d(x,\sigma_\alpha) = \log\lambda$.
\end{proof}
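For example (a rank-two illustration with the identity marking), let $x$ be the rose with petals $a$ and $b$ of lengths $s$ and $1-s$ and let $\alpha$ be the embedded loop $a$. Then $vol(\alpha,x) = s$, the face $\sigma_\alpha$ is obtained by collapsing the petal $b$, and Proposition \ref{distDetermine} gives $d(x,\sigma_\alpha) = \log\frac{1}{s}$, which blows up as the petal $a$ shrinks.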
\begin{cor}\label{corCandidates}
Let $x$ be a point in the interior of the simplex $\sigma$. The lengths of candidate loops of $x$ are determined by the Lipschitz distance of $x$ to the faces of $\sigma$.
\end{cor}
\begin{proof}
If $\alpha$ is a candidate that is injective into $x$ for all but finitely many points then $l(\alpha,x) = vol(\alpha,x)$ and the statement follows from Proposition \ref{distDetermine}. Otherwise $\alpha = \beta \delta \gamma \bar \delta$. The lengths of $\beta$ and $\gamma$ are determined by the distances to the appropriate faces, and the length of $\delta$ can then be computed from the volume of $\alpha$; hence we can compute the length of $\alpha$.
\end{proof}
We now prove Theorem \ref{myD}
\begin{theorem}[\cite{FMIsom}]
The group of isometries of $\mathcal{X}_n$ with the Lipschitz metric is $\textup{Out}(F_n)$ for $n \geq 3$.
The isometry group of $\mathcal{X}_2$ with the Lipschitz metric is $\textup{PSL}(2,{\mathbb Z}) \cong \textup{Out}(F_2)/\{x_i \to x_i^{-1}\}$.
\end{theorem}
\begin{proof}
We wish to show that the homomorphisms in Corollary \ref{homo} are injective.
It is enough to show that if $F$ is an isometry of
$\mathcal{X}_n$ such that $\phi(F) = id$
then $F$ is the identity on $\mathcal{X}_n$. Since $\phi(F)$ is the identity then for each simplex $\sigma \in \text{FS}_n$ we have $F(\sigma) = \sigma$.
Let $x \in \mathcal{X}_n$ and $\sigma$ a simplex so that $x \in int(\sigma)$.
For all faces $\tau$ of $\sigma$, $d(x,\tau) = d(F(x),F(\tau)) = d(F(x),\tau)$. By
Corollary \ref{corCandidates}, the lengths of all candidate loops of $G$, the underlying marked graph of $\sigma$, are the
same in both $x$ and $F(x)$. Since the distance $d(x,F(x))$ is the logarithm of the maximal stretch of
candidate loops of $x$ then $d(x,F(x))=0$ therefore $F(x) = x$ by Proposition
\ref{zero_distance}.
\end{proof}
\end{document} |
\begin{document}
\title{Superluminal Signals: Causal Loop Paradoxes Revisited}
\author{J. C. Garrison, M. W. Mitchell,
and R. Y. Chiao \\
University of California\\
Berkeley, CA 94720}
\author{E. L. Bolda\\University of Auckland\\ Private Bag
92019, Auckland, New Zealand}
\maketitle
\begin{abstract}
Recent results demonstrating superluminal group velocities and
tachyonic dispersion relations reopen the question of superluminal
signals and causal loop paradoxes. The sense in which superluminal
signals are permitted is explained in terms of pulse reshaping, and
the self-consistent behavior which prevents causal loop paradoxes is
illustrated by an explicit example.
\end{abstract}
\section{Introduction} \label{intro}
The idea of ``tachyons'', i.e., particles that travel in the vacuum
faster than light, has been the source of controversy for many years.
Although special relativity does not strictly outlaw tachyons
\cite{1167,1168,1103}, the interaction of tachyons with ordinary
matter does raise difficult questions. One of these is the
possibility of violating the familiar relativistic prohibition of
faster-than-light (superluminal) signals. A closely related concern
is that \textit{any} interaction of tachyons with ordinary matter
would lead to logical inconsistencies through the formation of closed
causal loops \cite{1096,764,835}. The participants in the ongoing
debates are at liberty to hold their various views largely because of
the complete absence of any experimental data. This unsatisfactory
situation persists as far as true tachyons are concerned, but not with
regard to ``quasitachyons'', i.e., excitations in a material medium
exhibiting tachyon-like behavior. Theoretical considerations have
shown that superluminal and even negative group velocities are
physically meaningful \cite{403,1164,143,144}, and that excitations
with tachyonic dispersion relations exist \cite{1159,1191,1192,1156}.
Superluminal group velocities have been experimentally observed for
propagation through an absorbing medium \cite{248}, for microwave
pulses \cite{1160,1193,1162}, and for light transmitted through a
dielectric mirror \cite{1163,1165}. There has been a parallel
theoretical controversy over the possibility of superluminal behavior
in quantum tunneling of electrons and photons. Recent experiments
using photons as the tunneling particles \cite{1163,1189,1170,1190}
have confirmed Wigner's early prediction that the time required for a
particle to traverse a tunneling barrier of width $d$ can indeed be
less than $d/c$.
The existence of superluminal group velocities and quasitachyons
raises questions of the same general kind as those sparked by the
previous speculations about tachyons. Is there any conflict with
special relativity? Can these phenomena be used to send superluminal
signals? What mechanism prevents logical contradictions through the
formation of closed causal loops? In order to arrive at reasonably
sharp and concise answers to these questions we will restrict the
following discussion primarily to classical phenomena. The answer to
the first question is straightforward. In all cases considered so
far, the propagation of excitations in a medium is described by
theories, e.g., Maxwell's equations, which are consistent with special
relativity; therefore, the predictions cannot violate special
relativity. The remaining questions require somewhat more discussion.
Superluminal signaling will be examined in Sec.~\ref{super}, in the
context of choosing an appropriate definition of signal propagation
speed. In Sec.~\ref{paradoxes} we investigate the issue of causality
paradoxes in a somewhat simpler context. Finally the lessons drawn
from these considerations will be discussed in Sec.~\ref{discuss}.
\section{Superluminal signals} \label{super}
A common, if loosely worded, statement of an important consequence of
special relativity is: ``No signal can travel faster than light.'' The
more sweeping statement, ``Nothing can travel faster than light,'' is
contradicted by the familiar example of the spot of light thrown on a
sufficiently distant screen by a rotating beacon \cite{1173}. The
apparent velocity of the spot of light can exceed $c$, but this does
not contradict special relativity since there is no causal relation
between successive appearances of the spot. Any discussion of the
first statement requires a definition of what is meant by a ``signal''
and what is meant by ``signal velocity''. For our purposes it is
sufficient to define a signal as the emission of a well defined pulse,
e.g., of electromagnetic radiation, at one point and the detection of
the same pulse at another point.
The classic analysis of Sommerfeld and Brillouin \cite{1169}
identified five different velocities associated with a
finite-bandwidth pulse of electromagnetic radiation propagating in a
linear dispersive medium. We will consider here only the ``front
velocity'', the velocity of the ``front'', i.e., the boundary
separating the region in which the field vanishes identically from the
region in which the field assumes nonzero values, and the ``group
velocity'', which describes the overall motion of the pulse envelope.
It is worth noting that the definition of the front does not require a
jump discontinuity at the leading edge of the pulse. Discontinuities
of this kind are convenient idealizations suggested by the difficulty
of following the behavior of the pulse at very short time scales, but
they can always be replaced by smooth behavior. An envelope function
which is sufficiently smooth at the front, e.g., the function and its
first derivative vanish there, will have a finite bandwidth in the
precise sense that the rms dispersion of the frequency, calculated
from the power spectrum, will be finite. This definition of bandwidth
is the one used in the uncertainty relation. The existence of the
front has an important consequence which is best described in the
simple example of one-dimensional propagation. If the incident wave
arrives at $x = 0$ at $t = 0$ and the envelope,
$u(x,t)$, satisfies $u(0,t) = 0$ for $t < 0$, then physically
plausible assumptions for the behavior of the medium guarantee that
the front propagates precisely at $c$, the velocity of light
\textit{in vacuo} \cite{544}. On the other hand, the group velocity
can take on any value. Indeed it has been shown that ``abnormal''
(either superluminal or negative) group velocities are required by the
Kramers-Kronig relation for some range of carrier frequencies away
from a gain line or within an absorption line \cite{143}.
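As a rough numerical illustration of this last point, the short Python sketch below evaluates the group index $n_g = n + \omega\, dn/d\omega$ for a single Lorentz absorption line; the oscillator parameters are arbitrary illustrative values, not taken from Ref.~\cite{143}.
\begin{verbatim}
import numpy as np

# Group index n_g = n + omega * dn/domega for one Lorentz absorption line.
# All parameter values are arbitrary illustrative choices.
omega0 = 1.0        # resonance frequency (arbitrary units)
gamma_a = 0.05      # linewidth
strength = 0.01     # coupling (oscillator strength times plasma frequency^2)

omega = np.linspace(0.8, 1.2, 4001)
chi = strength / (omega0**2 - omega**2 - 1j * gamma_a * omega)
n = 1.0 + 0.5 * np.real(chi)              # dilute-medium refractive index
n_g = n + omega * np.gradient(n, omega)   # group index

i0 = np.argmin(np.abs(omega - omega0))
print("n_g at line center :", round(float(n_g[i0]), 2))
print("minimum n_g in band:", round(float(n_g.min()), 2))
\end{verbatim}
For these values the group index is negative inside the absorption line, so the group velocity $v_g = c/n_g$ is abnormal there, consistent with the Kramers-Kronig argument cited above.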
The principle of relativistic causality states that a source cannot
cause any effects outside its forward light cone. Since the front of
a pulse emitted by the source traces out the light cone, this means
that no detection can occur before the front arrives at the detector.
A general signal will be a linear superposition of the elementary
signals described above. The simplest model of this general behavior
is that the pulse envelope $u(x,t)$ is
sectionally analytic in $t$, i.e., the front is a point of
nonanalyticity separating two regions in which the pulse envelope has
different analytic forms. The values of the pulse envelope on any
finite segment in the interior of a domain of analyticity determine,
by the uniqueness theorem for analytic functions, the pulse envelope
up to the next point of nonanalyticity. It is therefore tempting to
associate the arrival of new data with the points of nonanalyticity
\cite{1170} in the pulse envelope. From this point of view, it is
reasonable to identify the signal velocity with the front velocity. A
happy consequence of this choice is that superluminal signals are
uniformly forbidden, but this conceptual tidiness is purchased at a
price in terms of experimental realism. By definition, the field
vanishes at the front, and for smooth pulses will remain small for
some time thereafter. Thus the front itself cannot be observed by a
detector with finite detection threshold. Nevertheless, an
operational definition of the front velocity can be given in terms of
a limiting procedure in which identical pulses are detected by a
sequence of detectors, $D_{n}$, with decreasing thresholds
$S_{n}$. Let the pulse be initiated at $t = 0$ and
denote by $t_{n}$ the time of first detection by $D_{n}$;
then the effective signal velocity is $v_{n} = d/t_{n}$, where
$d$ is the distance from the source to the detector. The front
velocity would then be defined by extrapolating $v_{n}$ as
$S_{n} \rightarrow 0$. While physically
and logically sound, this procedure is scarcely practical.
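To make the limiting procedure concrete, the following sketch mimics a sequence of detectors with decreasing thresholds; the recorded envelope assumed here (zero before the front arrives at $t = d/c$, rising quadratically afterwards) is hypothetical, and only its smooth front matters.
\begin{verbatim}
import numpy as np

# Operational front velocity: detectors with decreasing thresholds S_n give
# first-detection times t_n, and v_n = d/t_n is extrapolated as S_n -> 0.
c = 3.0e8                # speed of light (m/s)
d = 1.0                  # source-detector distance (m)
t_front = d / c          # arrival time of the front

t = np.linspace(0.0, 3.0 * t_front, 200001)
u = np.where(t > t_front, (t - t_front) ** 2, 0.0)  # u and u' vanish at the front
u /= u.max()                                        # normalize the peak to 1

for S in [1e-1, 1e-3, 1e-6, 1e-9]:                  # decreasing thresholds S_n
    t_n = t[np.argmax(u >= S)]                      # first time the detector fires
    print("S_n = %.0e :  v_n/c = %.5f" % (S, (d / t_n) / c))
\end{verbatim}
The printed ratios $v_n/c$ creep toward unity as the threshold is lowered, which is precisely the extrapolation $S_n \rightarrow 0$ described above, and also why the procedure is impractical for detectors with finite thresholds.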
We now turn from consideration of a series of increasingly sensitive
detectors to a single detector with threshold close to the expected
peak strength of the signal, $S_{0} \approx |u_{peak}|$. We also assume that the pulse
is not strongly distorted during propagation. Under these
circumstances the pulse envelope propagates rigidly with the group
velocity $v_{g}$. This suggests identifying the signal velocity
with the group velocity. This is an attractive choice from the
experimental point of view, but this benefit also has a price. First
note that the peak cannot overtake the front \cite{1169} and that the
front travels with velocity $c$. This prompts the question: in
what sense is the signal superluminal even if $v_{g} > c$? To
answer this, consider an experiment in which the original signal is
divided, e.g., by use of a beam splitter. One copy is sent through
the vacuum and the other through a medium, and the firing times of
identical detectors placed at the ends of the two paths, each of
length $d$, are then recorded. The difference between the two times,
the ``group delay'', is given by
\begin{equation}
t_{g} = \frac{d}{v_{g}} - \frac{d}{c}~.
\label{grpdelay}
\end{equation}
The group delay is positive for normal media ($0 < v_{g} < c$), and
negative for abnormal media ($v_{g} > c$ or
$v_{g} < 0$). It is only in this sense that the abnormally
propagated signal is superluminal. With these definitions it is then
correct to say that special relativity does not prohibit superluminal
signals. This is a fairly innocuous complication of the usual
discussion, but there are more serious problems related to the
robustness of the definition. For example, if a more sensitive
detector were used the measured group delay could be significantly
smaller than that given by (\ref{grpdelay}). Indeed as the threshold
of the detector approaches zero, the group delay would approach zero.
In other words the signal velocity would approach the front velocity.
The only simple way to remove this ambiguity would be to identify the
arrival of the pulse with the arrival of the peak. This would seem to
attribute an unwarranted fundamental significance to the peak.
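As a simple numerical check of Eq.~(\ref{grpdelay}), with made-up group velocities (the values below are purely illustrative):
\begin{verbatim}
# Sign of the group delay t_g = d/v_g - d/c for a few assumed group velocities.
c = 3.0e8   # m/s
d = 1.0     # m
for label, v_g in [("normal,   v_g = 2c/3", 2 * c / 3),
                   ("abnormal, v_g = 3c  ", 3 * c),
                   ("abnormal, v_g = -c/3", -c / 3)]:
    t_g = d / v_g - d / c
    print("%s :  t_g = %+7.2f ns" % (label, t_g * 1e9))
\end{verbatim}
Only the normal case gives a positive delay over the one-meter path; the two abnormal cases give negative delays, which is the sense of ``superluminal'' used here.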
\section{Causal loop paradoxes}\label{paradoxes}
Causal loop paradoxes are usually introduced by considering two
observers, A and B, each equipped with transmitters and receivers for
tachyons. At time $t_{A}$, A sends a tachyonic message to B, who
then sends a return message to A timed to arrive at $t_{A}' < t_{A}$,
where both times are measured in A's rest frame. The
paradox occurs if the return message activates a mechanism which
prevents A from sending the original message \cite{1096}. Our next
task is to reexamine this issue in the context of the two definitions
of signal velocity discussed above. No paradoxical behavior is
possible if the signal velocity is identified with the front velocity,
since the signal velocity then equals the velocity of light. When the
signal velocity is equated to the group velocity, more discussion is
needed, since negative group delays are possible.
The core of the tachyon paradox is the ability to send messages into
the past. It is therefore sufficient to devise a situation in which
messages can be sent to the past at a single point in space
\cite{764}. A concrete example can be constructed by using an
electronic circuit, for which the light transit time across the system
is negligible compared to all other time constants. Propagation
effects are then irrelevant, and the system can be described by a
function, \(V\left({t}\right)\), which depends only on time. For
these systems we can use the principle of elementary causality which
states that the output signal depends only on past values of the input
signal. We will assume that both the input and the output pulse have
well defined peaks occurring at \({t}_{in}\) and \({t}_{out}\)
respectively. The time difference \({t}_{g} = {t}_{out} - {t}_{in}\)
is called the group delay by analogy to (\ref{grpdelay}). Analyzing
the relation between input and output in this way is analogous to
choosing the group velocity to represent the signal velocity in the
propagative problem. We can attempt to create the paradox by
designing a circuit with negative group delay, i.e., the output peak
leaves the amplifier \textit{before} the input peak has entered. A low
frequency bandpass amplifier with this rather bizarre property has
been experimentally demonstrated \cite{1158}, and it will be used in
the following discussion. To get a message from the future it is
necessary to construct a feedback loop in which the output of the
amplifier is used to modulate the input, as shown in
Fig.~\ref{circuit}. If the amplifier produces a negative group delay
this arrangement could apparently be used to turn off the input
prematurely, e.g., before the peak.
\begin{center}
\begin{figure}
\caption{Feedback circuit: the modulator multiplies the input signal
by $M(t)$ when the detector fires.}
\label{circuit}
\end{figure}
\end{center}
The Green's function of the amplifier, i.e., the Fourier transform of
the frequency-domain transfer function, is
\begin{equation}
G(t) = \delta(t) + G'(t)~,
\end{equation}
\begin{equation}
G'(t) \equiv G_{0}\,\gamma\,\theta(t)\,e^{-\gamma t}
\left[\cos(\omega_{r}t) + \frac{\gamma}{\omega_{r}}\sin(\omega_{r}t)\right]~,
\label{prop}
\end{equation}
where $\theta(t)$ is the step function, $\gamma$ and $\omega_{r}$ are
respectively the damping rate and resonant frequency of the amplifier,
and the dimensionless parameter $G_{0}$ describes the overall
amplification \cite{1158}. The presence of the step function
in (\ref{prop}) imposes the retarded Green's function and guarantees
elementary causality. In the absence of feedback the output signal is
\begin{equation}
V_{out}(t) = V_{in}(t) +
\int_{-\infty}^{t}d\tau\, G'(t-\tau)\,V_{in}(\tau)~.
\label{vout}
\end{equation}
A simple example of an input signal which has a continuous first
derivative everywhere and vanishes outside a finite interval is
given by
\begin{equation}
V_{in}(t) = V_{0}\,\theta(T_{f} - |t|)\,\cos(\omega_{c}t)\,
\cos^{2}\!\left(\frac{\pi t}{2T_{f}}\right)~.
\end{equation}
In the interior of the interval $(-T_{f}, T_{f})$, this signal
resembles a Gaussian pulse peaked at $t = 0$ and modulated at carrier
frequency $\omega_{c}$. An example of negative group delay for this
input is shown in Fig.~\ref{ngrpdelay}.
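The negative group delay can also be checked directly from Eqs.~(\ref{prop}) and (\ref{vout}) by discretizing the convolution. In the following sketch the amplifier and pulse parameters are illustrative stand-ins chosen for a narrowband, low-frequency pulse; they are not the values of Ref.~\cite{1158}.
\begin{verbatim}
import numpy as np

# Discretized evaluation of Eq. (vout): V_out = V_in + (G' convolved with V_in).
# Parameters below are illustrative stand-ins, not those of the experiment.
gamma, omega_r, G0 = 0.5, 3.0, 30.0   # damping rate, resonance, gain
V0, T_f, omega_c = 1.0, 20.0, 0.1     # pulse amplitude, half-width, carrier

dt = 0.01
t = np.arange(-30.0, 60.0, dt)
s = np.arange(0.0, 60.0, dt)          # argument of the causal kernel G'(s), s >= 0

V_in = np.where(np.abs(t) < T_f,
                V0 * np.cos(omega_c * t) * np.cos(np.pi * t / (2 * T_f)) ** 2,
                0.0)
Gp = G0 * gamma * np.exp(-gamma * s) * (np.cos(omega_r * s)
                                        + (gamma / omega_r) * np.sin(omega_r * s))

V_out = V_in + dt * np.convolve(V_in, Gp)[:len(t)]   # causal convolution

t_in, t_out = t[np.argmax(V_in)], t[np.argmax(V_out)]
print("input peak at t = %+5.2f, output peak at t = %+5.2f" % (t_in, t_out))
print("group delay t_out - t_in = %+5.2f (negative = advanced)" % (t_out - t_in))
\end{verbatim}
For these values the output peak should emerge roughly half a time unit before the input peak, i.e., the group delay comes out negative, while the output of course remains identically zero before the input is switched on at $t = -T_{f}$.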
\begin{center}
\begin{figure}
\caption{$V_{out}(t)$: an example of negative group delay for the input pulse defined in the text.}
\label{ngrpdelay}
\end{figure}
\end{center}
In the feedback circuit the detector, with threshold set to $S_{0}$,
triggers the modulator, which in turn multiplies the input
voltage by a factor $M(t)$. The input to the amplifier
is then $M(t)V_{in}(t)$, and the signal
$V(t)$ satisfies
\begin{eqnarray}
V(t) &=& M(t)V_{in}(t)
\nonumber \\
&&+ \int_{-\infty}^{t}d\tau\,
G'(t-\tau)\,M(\tau)\,V_{in}(\tau)~.
\label{feedback}
\end{eqnarray}
This seems to open the way for a variant of the time travel paradox in
which the traveller journeys to the past and kills his grandfather
before his own father is born. The analogous situation for the
feedback circuit would be to employ the output peak to turn off the
input before it has reached its peak. Fig.~\ref{ngrpdelay} shows that
this seems to be possible. If there is a paradox, the integral
equation (\ref{feedback}) should fail to have a solution when the
modulation function is chosen in this way. In an attempt to produce the
paradoxical situation we choose the modulating function
$M(t)$ as follows. For any signal
$V(t)$ which rises smoothly from zero, define
$t_{1}$ and $t_{2}$ as the
first two times for which $|V(t)| = S_{0}$. The first peak of the amplitude exceeding $S_{0}$
is guaranteed to lie between these two times, provided that the value
of $S_{0}$ is below the absolute maximum value of the feedback
signal. The modulating function is then chosen as
\begin{equation}
M(t) = \theta(t_{2} - t)~.
\end{equation}
Thus the input is unmodulated until the peak of the feedback signal
has passed and the detector again registers $S_{0}$. At this time
the input is set to zero. The integral equation (\ref{feedback}) for
the signal is now
\begin{eqnarray}
V(t) &=& \theta(t_{2} - t)\,V_{in}(t)
\nonumber \\
&&+ \int_{-\infty}^{t}d\tau\,
G'(t-\tau)\,\theta(t_{2} - \tau)\,V_{in}(\tau)~.
\label{paradox}
\end{eqnarray}
For times $t < t_{2}$, the modulation function in both
terms of (\ref{paradox}) is unity, and comparison with (\ref{vout})
shows that $V(t) = V_{out}(t)$ for $t < t_{2}$.
Thus the time $t_{2}$ is determined
by the simple output function $V_{out}$, and the solution to
(\ref{paradox}) is
\begin{eqnarray}
V(t) &=& \theta(t_{2} - t)\,V_{out}(t)
+ \theta(t - t_{2})
\nonumber \\
&&\times \int_{-\infty}^{t}d\tau\,G'(t-\tau)\,
\theta(t_{2} - \tau)\,V_{in}(\tau)~,
\end{eqnarray}
where $t_{2}$ is determined by $V_{out}$. A computer simulation of
the solution, using the same parameters as in Fig.~\ref{circuit}, is
plotted in Fig.~\ref{fdbksol}. The self-consistent signal follows the
amplifier output until the detector is triggered. This occurs after
the output signal has reached its peak but before the input achieves
its peak. The sudden termination of the input then sets off a damped
oscillation. The existence of a self-consistent solution shows that
there is no paradox; i.e., the theory does not suffer from internal
contradictions. This feature is shared with previous resolutions of
apparent paradoxes associated with tachyons \cite{764,835}, or with
the use of advanced Green's functions in the Wheeler-Feynman radiation
theory \cite{1157}.
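The same discretization can be pushed through the feedback scenario. Continuing the numerical sketch above (same arrays $t$, $V_{in}$, $G'$, $V_{out}$ and step $dt$; the detector threshold, chosen just below the output peak, is an arbitrary choice), the closed-form solution above is assembled explicitly:
\begin{verbatim}
# Continuation of the earlier sketch (same t, dt, V_in, Gp, V_out).
S0 = 0.999 * np.abs(V_out).max()         # threshold just below the output peak
above = np.abs(V_out) >= S0
i1 = np.argmax(above)                    # index of t_1: |V| first reaches S0
i2 = i1 + np.argmax(~above[i1:])         # index of t_2: |V| falls back below S0
t2 = t[i2]

M = np.where(t <= t2, 1.0, 0.0)          # modulation M(t) = theta(t_2 - t)
V = M * V_in + dt * np.convolve(M * V_in, Gp)[:len(t)]   # self-consistent signal

print("input switched off at t_2 = %+5.2f" % t2)
print("output peak at %+5.2f, input peak would have been at %+5.2f"
      % (t[np.argmax(V_out)], t[np.argmax(V_in)]))
\end{verbatim}
With these hypothetical parameters the cutoff time $t_{2}$ falls after the output peak but before the time at which the input would have peaked, and the array \verb|V| is nevertheless a well-defined solution of Eq.~(\ref{feedback}): the simulation terminates without any inconsistency, in line with the discussion above.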
\begin{center}
\begin{figure}
\caption{$V(t)/V_{0}$: the self-consistent solution of the feedback circuit.}
\label{fdbksol}
\end{figure}
\end{center}
\section{Discussion and conclusions}\label{discuss}
In Sec.~\ref{super} we considered only two candidates for the signal
velocity. While they may not be the only possibilities, the front and
group velocities do seem to have the strongest \textit{a priori} claims.
Furthermore there is a form of complementarity between them. The front
velocity is conceptually simple but operationally complex, and the
group velocity is conceptually complex but operationally simple. Each
alternative has strengths and weaknesses which we now discuss.
Identification of the front velocity as the signal velocity uniformly
forbids the appearance of any superluminal signals, either in the
vacuum or in a medium. This definition is Lorentz invariant, and it
automatically excludes the possibility of any causal loop paradoxes.
For a given distance $d$ between source and detector, the predicted
arrival time $d/c$ represents the earliest possible time for
detection. This interpretation is related to the most serious
drawback of the definition, namely that a detector with finite threshold cannot
respond until some time $t>d/c$. Thus the arrival time $d/c$ can only
be approximated by a limiting procedure such as that discussed in
Sec.~\ref{super}. This objection is not fatal, since definitions of
fundamental notions in terms of a limiting procedure are common, e.g.,
the definition of an electric field as the ratio of the force on a
test charge to the charge as the charge approaches zero. An
additional drawback is that the front-velocity definition mandates
that all signals travel exactly at $c$, whether in the vacuum or in a
medium. In particular this means that signals travel exactly at $c$ in normal
dielectrics, not slower than $c$. This is not in accord with our usual usage and intuitions.
Identification of the group velocity with the signal velocity has the
advantage of easy experimental realization, but there are also
disadvantages. For example, one can imagine signals transmitted by
pulses lacking a well defined group velocity, e.g., in the
presence of strong group velocity dispersion. Furthermore, this
definition actually requires the existence of superluminal signals, in
the sense of negative group delays. In view of this, it is natural to
wonder how superluminal signals can be consistent with special
relativity. To begin, recall that special relativity is based on two
postulates: (A) The laws of physics have the same form in all inertial
frames. (B) The velocity of light \textit{in vacuo} is independent of
the velocity of the source. The first postulate is already present in
Newtonian mechanics, so it is the second that leads to
characteristically relativistic phenomena. Neither postulate says
anything directly about the propagation of excitations in a material
medium. The implications of special relativity for this question can
only be found by using a theory of the medium, e.g., the macroscopic
form of Maxwell's equations, which is consistent with special
relativity. In all such theories the response of the medium to the
incident wave is described by retarded propagators, in accordance with
both relativistic and elementary causality. With this in mind,
superluminal propagation of electromagnetic fields can be understood
as reshaping of the pulse envelope by interaction with the medium
\cite{144}. For propagation in a linear medium, it has long been
known that the peak of the pulse can never overtake the
front\cite{1169}. This conclusion holds for nonlinear media as well,
e.g., superluminal propagation in a laser amplifier \cite{534}. In
all cases the pulse shape will become increasingly distorted as it
asymptotically attempts to overtake the front.
The would-be paradoxical feedback circuit analyzed in
Sec.~\ref{paradoxes} at first appears to present a puzzle. With the
choice of parameters in Fig.~\ref{fdbksol}, the peak of the feedback
signal is used to turn off the input signal before it achieves its
peak. This would seem to satisfy the requirements of the paradox, but
the feedback problem does have a self-consistent solution. The
apparent difficulty here stems from the natural assumption that the
output peak is causally related to the input peak. This assumption
has been criticized previously \cite{1173}, and recent experimental
results \cite{1158}, as well as the simulation results shown in
Fig.~\ref{fdbksol}, show it to be false. In order for event A to be
the cause of event B, it must be that preventing A also prevents B.
Both experiment and theory show that preventing the peak in the input
does not prevent the peak in the output, therefore the peaks are not
causally related. The peak in the output is however causally related
to earlier parts of the input, since cutting off the input
sufficiently early will prevent the output peak from appearing
\cite{1187}. This shows that the analytic continuation of an initial
part of the smooth pulse, discussed in Sec.~\ref{super}, is not just a
theoretical artifact; the experimental apparatus actually performs the
necessary extrapolation. However the apparatus cannot send a signal
to any time prior to the initiation of the input signal. In other
words, the output signal vanishes identically for \(t < - {T}_{f}\);
this is guaranteed by the use of retarded propagators.
The discussion so far has been carried out at the classical level, but
there are general arguments suggesting that there will be no surprises
at the quantum level. The relevant setting here is quantum field
theory. In the Heisenberg picture the operator field equations have
the same form as the classical field equations, so it is plausible
that the solutions will be described by the same propagators. In
particular, the electromagnetic field operators arising from an
electric current localized in a small space-time region will be
related to the source by the standard retarded propagator which
vanishes outside the light cone. Indeed the solution to the point
source problem involves only the retarded propagator even for models
with tachyonic dispersion relations \cite{10}. Explicit calculations
for one such model display the same pulse reshaping features as the
classical case \cite{1176}. A rigorous, general argument has been
given by Eberhard and Ross \cite{1154}, who show that if classical
influences satisfy relativistic causality, then no signals outside the
forward light cone will be observed in a fully quantal calculation.
The essential point for their argument is the postulate that operators
localized in space-like separated regions commute. This is used to
show that actions performed in one region cannot change the
probability distributions for measurements in a space-like separated
region.
The first conclusion to be drawn from this discussion is that there is
no completely compelling argument that would allow a choice between
the proposed definitions of signal velocity. The front-velocity
definition eliminates all superluminal signals and causality problems
at a single stroke, but at the expense of an indirect operational
definition. The group-velocity definition is operationally simple,
but it provides a well defined sense in which superluminal signalling
is allowed by relativity and by elementary causality, namely in those
media allowing negative group delay. The description of negative
group delay as superluminal propagation is, to some extent, a question
of language. The values of the group delay, whether positive or
negative, come from pulse reshaping effects. Thus one could speak of
``group advance'' for abnormal propagation and ``group
retardation'' for normal propagation. The choice between
``superluminal signal'' and ``group advance'' is a matter of taste,
but it should be kept in mind that the group delay is a measurable
quantity and that negative values have been observed. Another point
to consider is that before the work of Garrett and McCumber\cite{403}
the possibility of negative group delays would have been rejected as
obviously forbidden by relativity. The second conclusion is that the
superluminal propagation allowed by the group-velocity definition does
not give rise to any causal loop paradoxes. The third conclusion is
that no fundamental modifications in physics are needed to explain
these phenomena. Finally the possibility of interesting applications
of superluminal signals (in the sense of negative group delays) is an
open question.
\section*{Acknowledgments}
R. Y. Chiao and M. W. Mitchell were supported by ONR grant number
N000149610034. E.L. Bolda was supported by the Marsden Fund of the
Royal Society of New Zealand. It is a pleasure to acknowledge many
useful conversations with Prof. C. H. Townes, Prof. A. M. Steinberg,
and Jack Boyce.
\end{document}
\begin{document}
\title[Fibrations whose geometric fibers are nonreduced]
{On Fibrations whose geometric fibers are nonreduced}
\author[Stefan Schr\"oer]{Stefan Schr\"oer}
\address{Mathematisches Institut, Heinrich-Heine-Universit\"at,
40225 D\"usseldorf, Germany}
\curraddr{}
\email{schroeer@math.uni-duesseldorf.de}
\subjclass{14A15, 14D05, 14J70}
\dedicatory{25 February 2009}
\begin{abstract}
We give a bound on embedding dimensions of geometric generic fibers
in terms of the dimension of the base, for fibrations
in positive characteristic. This generalizes the well-known fact
that for fibrations over curves, the geometric generic fiber is reduced.
We illustrate our results with Fermat hypersurfaces and genus one curves.
\end{abstract}
\maketitle
\tableofcontents
\renewcommand{\labelenumi}{(\roman{enumi})}
\section*{Introduction}
The goal of this paper is to understand \emph{geometric nonreducedness} for fibrations
in characteristic $p>0$. Roughly speaking, we deal with nonreducedness that becomes visible only after
purely inseparable base change.
My starting point is the following well-known fact: Let $k$ be an algebraically closed
ground field, $S$ be a normal algebraic scheme, and $f:S\ra C$ be a fibration
over a curve: then the geometric generic fiber $S_{\bar{\eta}}$ is reduced.
This fact goes back to MacLane (\cite{MacLane 1939}, Theorem 2); a proof can also be found
in B\u{a}descu's monograph (\cite{Badescu 2001}, Lemma 7.2).
It follows, for example, that geometric nonreducedness plays no role in the Enriques classification of surfaces.
A natural question comes to mind: Are there generalizations to higher dimensions?
The main result of this paper is indeed such a generalization:
\begin{maintheorem}
Let $f:X\ra B$ be a proper morphism with $\O_B=f_*(\O_X)$ between normal $k$-schemes of finite type.
Then the generic embedding dimension of $X_{\bar{\eta}}$ is smaller than $\dim(B)$.
\end{maintheorem}
Here the \emph{generic embedding dimension} is the embedding dimension of the local ring at the generic
point of $X_{\bar{\eta}}$.
I expect that the phenomenon of geometric nonreducedness in fibrations will
play a role in the characteristic-$p$-theory of genus-one-fibrations, Albanese maps,
and the minimal model program. For example,
Koll\'{a}r mentions the threefold $X\subset\mathbb{P}^2\times\mathbb{P}^2$ in characteristic two defined by
the bihomogeneous equation $xu^2+yv^2+zw^2=0$, whose projection onto the first factor defines
a Mori fiber space that has the structure of a geometrically nonreduced conic bundle (\cite{Kollar 1991}, Example 4.12).
The whole theory, however, is largely unexplored to date.
It is actually better to formulate our result in a more general framework, in which an
arbitrary field $K$ takes over the role of the function field of the scheme $B$. In this setting,
the dimension of $B$, which is also the transcendence degree of the function field $\kappa(B)$,
has to be replaced by the \emph{degree of inseparability} of $K$, which is
the length of any $p$-basis for the extension $K^p\subset K$.
\begin{maintheorem}
Let $X$ be a proper normal $K$-scheme with $K=H^0(X,\O_X)$.
Suppose $X$ is geometrically nonreduced. Then the geometric generic embedding dimension of $X$
is smaller than the degree of inseparability of $K$.
\end{maintheorem}
The key observation of this paper is that base change with field extensions $K\subset L$ of degree $p$ cannot
produce nilpotent functions on $X\otimes_KL$. However, the global functions on the \emph{normalisation} of $X\otimes_KL$
may form a field that is larger than $L$.
Our arguments also hinge on Kraft's beautiful description
of finitely generated field extensions in positive characteristics \cite{Kraft 1970}.
To illustrate our result, we study \emph{$p$-Fermat hypersurfaces} $X\subset\mathbb{P}^n_K$, which are
defined by a homogeneous equation of the form
$$
\lambda_0U_0^p+\lambda_1U_1^p+\ldots+\lambda_nU_n^p=0.
$$
These are twisted forms of an infinitesimal neighborhood of a hyperplane.
Mori and Saito \cite{Mori; Saito 2003} studied them in the context of fibrations, using the name \emph{wild hypersurface bundle},
in connection with the minimal model program.
We attach to such $p$-Fermat hypersurfaces a numerical invariant $0\leq d\leq n$
defined as the $p$-degree of a certain field extension depending on the coefficients
$\lambda_i$ (for details, see Section \ref{fermat hypersurfaces}), and show that this number has a geometric significance:
\begin{maintheorem}
The $p$-Fermat hypersurface $X\subset\mathbb{P}^n$ is regular if and only if $d=n$.
If $X$ is not regular, then the singular set $\Sing(X)\subset X$ has codimension $d$, where $0\leq d\leq n-1$.
\end{maintheorem}
This generalizes parts of a result of Buchweitz, Eisenbud and Herzog, who studied quadrics
in characteristic two (\cite{Buchweitz; Eisenbud; Herzog 1987}, Theorem 1.1). Our proof uses
different methods and relies on Grothendieck's
theory of \emph{the generic hyperplane section}.
The result nicely shows that field extensions of degree $p$ may increase the dimension of the singular set
only stepwise.
Then we take a closer look at the case of \emph{$p$-Fermat plane curves} $X\subset\mathbb{P}^2$ that
have an isolated singularity. It turns out that the normalization is a projective line over
a purely inseparable field extension $K\subset L$ of degree $p$.
Roughly speaking, the curve $X$ arises from this projective line by \emph{thinning out} an infinitesimal
neighborhood of an $L$-rational point. Here some amazing phenomena related to nonuniqueness of
\emph{coefficient fields in local rings} come into light.
We then study the question whether curves $C$ arising as such a denormalization
are necessarily $p$-Fermat plane curves. I could not settle this. However,
we shall see that this is true under the assumption that $p\neq 3$ and $C$ globally embeds
into a smooth surface.
In this context we touch upon the theory of \emph{abstract multiple curves}, which
were studied, for example, by B\u{a}nic\u{a} and Forster \cite{Banica; Forster 1986}, Manolache \cite{Manolache 1994}, and Drezet \cite{Drezet 2007}.
We also make use of Grothendieck's theory of \emph{Brauer--Severi schemes}.
Finally, we study \emph{genus one curves}, that
is, proper curves $X$ with cohomological invariants $h^0(\O_X)=h^1(\O_X)=1$. They should play an important role in the
characteristic-$p$-theory of genus one fibrations. Here we have a result
on the structure of the Picard scheme:
If $X$ is a genus one curve that is regular but geometrically nonreduced, then
$\Pic^0_X$ is unipotent, that is, a twisted form of the additive group scheme
$\mathbb{G}_a$.
As a consequence, the reduction of $X\otimes_K\bar{K}$ cannot be
an elliptic curve.
\section{Geometric nonreducedness}
\mylabel{geometric nonreducedness}
Throughout this paper, we fix a prime number $p>0$.
Let $K$ be a field of characteristic $p$,
and let $X$ be a proper $K$-scheme so that the canonical inclusion $K\subset H^0(X,\O_X)$
is bijective. The latter can always be achieved, at least if $X$ is connected and reduced, by replacing the
ground field by the finite extension field $H^0(X,\O_X)$.
If $K\subset K'$ is a field extension, the induced proper $K'$-scheme $X'=X\otimes_K K'$
may be less regular than $X$, and in fact nilpotent elements may appear. Such phenomena are the topic of this paper.
We start with obvious facts:
\begin{proposition}
\mylabel{embedded components}
If $X$ contains no embedded components, then $X'=X\otimes_KK'$ contains no embedded
components. In particular, if $X$ is reduced, then $X'$ is reduced if and only if
$X'$ is generically reduced.
\end{proposition}
\proof
The first statement is contained in \cite{EGA IVb}, Proposition 4.2.7,
and the second is an immediate consequence.
\qed
Recall that $X$ is called \emph{geometrically reduced} if $X'=X\otimes_KK'$ is reduced for
all field extension $K\subset K'$.
Actually, it suffices to check reducedness for a single field extension:
\begin{proposition}
\mylabel{geometrically reduced}
The scheme $X$ is geometrically reduced if and only if $X\otimes_K K^{1/p}$ is reduced.
\end{proposition}
\proof
Necessity is trivial. Now suppose that $X\otimes_K K^{1/p}$ is reduced.
Let $\eta\in X$ be a generic point, and let $F=\O_{X,\eta}$ be the corresponding field of functions.
Then $F\otimes_KK^{1/p}$ is reduced. By MacLane's Criterion (see for example \cite{A 4-7}, Chapter IV, \S15, No.\ 4, Theorem 2),
the field extension $K\subset F$ is separable. In light of Proposition \ref{embedded components},
$X$ must be geometrically reduced.
\qed
Purely inseparable field extensions that are smaller than $K^{1/p}$
do not necessarily uncover geometric nonreducedness. In fact, degree $p$ extensions
are incapable of doing so:
\begin{lemma}
\mylabel{remains integral}
Suppose that $K\subset K'$ is purely inseparable of degree $p$.
If the scheme $X$ is normal, then the induced scheme $X'=X\otimes_K K'$ is at least integral.
\end{lemma}
\proof
Let $F=\O_{X,\eta}$ be the function field of $X$.
In light of Proposition \ref{embedded components}, it suffices to check that the local Artin ring $F'=F\otimes_KK'$ is a field.
Choose an element $a\in K'$ not contained in $K$. Then $b=a^p$ lies in the subfield $K\subset K'$, and $T^p-b\in K[T]$ is the minimal polynomial of $a\in K'$.
It follows that $K'=K[T]/(T^p-b)$ and $F'=F[T]/(T^p-b)$. Thus our task is to show that $b\in F$ is not a $p$-th power.
Seeking a contradiction, we assume $b=c^p$ for some $c\in F$. Then for each affine open subset $U=\Spec(A)$ in $X$, the element $c\in F$
lies in the integral closure of $A\subset F$. Since $X$ is normal, $c\in A$, whence $c\in H^0(X,\O_X)$ according to the sheaf axioms.
By our overall assumption $K=H^0(X,\O_X)$, and we conclude that $b$ is a $p$-th power in $K$, contradiction.
\qed
Suppose our field extension $K\subset K'$ is so that the induced scheme $X'=X\otimes_KK'$
remains integral; then we consider its normalization $Z\ra X'$ and obtain
a new field $L=H^0(Z,\O_Z)$, such that we have a sequence of field extensions
$$
K\subset K'\subset L.
$$
Here $K'\subset L$ is finite. In some sense and under suitable assumptions, this extension $K\subset L$ is also not too big.
Recall that a purely inseparable field extension $K\subset E$ is called \emph{of height} $\leq 1$ if one has $E^{p}\subset K$.
\begin{proposition}
\mylabel{height one}
Suppose that $K\subset K'$ is purely inseparable of height $\leq 1$ so that $X'=X\otimes_KK'$
is integral. Then the
field extension $K\subset L$ is purely inseparable and has height $\leq 1$ as well.
\end{proposition}
\proof
Let $f\in H^0(Z,\O_Z)$. We have to show that $g=f^p$ lies
in the image of $H^0(X,\O_X)$ with respect to the canonical projection $Z\ra X$.
Let $F $ and $F'$ be the function fields of $X$ and $X'$,
respectively. Then $F'=F\otimes_KK'$, and this is also the function field of $Z$.
The description of $F'$ as tensor product gives $g\in F$.
To finish the proof, let $x\in X$ be a point of codimension one.
Using that $X$ is normal, it suffices to show that $g\in F$ is contained
in the valuation ring $\O_{X,x}\subset F$. Suppose this is not the case.
Then $1/g\in \maxid_x$. Consider the point $z\in Z$
corresponding to $x\in X$. Then $1/g\in\maxid_z$ since $\maxid_z\cap\O_{X,x}=\maxid_x$.
On the other hand, we have $g\in \O_{Z,z}$, whence $1\in\maxid_z$, contradiction.
\qed
Recall that the \emph{$p$-degree} of an extension $K\subset L$ of height $\leq 1$
is the cardinality of any $p$-basis, confer \cite{A 4-7}, Chapter V, \S 13. The \emph{$p$-degree} $[L:K]_p$ of an arbitrary extension is
defined as the $p$-degree of the height one extension $K(L^p)\subset L$.
The \emph{degree of imperfection} of
a field $K$ is the $p$-degree of $K$ over its prime field. In other words, it is the
$p$-degree of $K^p\subset K$, or equivalently of $K\subset K^{1/p}$.
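For instance, if $k$ is a perfect field and $K=k(t_1,\ldots,t_n)$ is a rational function field, then $K^p=k(t_1^p,\ldots,t_n^p)$, the elements $t_1,\ldots,t_n$ form a $p$-basis for $K^p\subset K$, and the degree of imperfection of $K$ equals $n$.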
\begin{proposition}
\mylabel{maximal extension}
There is a purely inseparable extension
$K\subset K'$ of height $\leq 1$ so that the induced scheme $X'=X\otimes_KK'$ is integral,
and that the field of global functions $L=H^0(Z,\O_Z)$ on the
normalization $Z\ra X'$ is isomorphic to $K^{1/p}$ as an extension of $K$.
\end{proposition}
\proof
This is an application of Zorn's Lemma.
Let $K\subset K_\alpha\subset K^{1/p}$ be the collection of
all intermediate fields with $X\otimes_KK_\alpha$ integral.
We may view this collection as an ordered set, where the ordering comes
from the inclusion relation. This ordered set is inductive by
\cite{EGA IVc}, Corollary 8.7.3. By dint of Zorn's Lemma,
we choose a maximal intermediate field $K'=K_\alpha$ and consider $L=H^0(Z,\O_Z)$. Then $L\subset K^{1/p}$ by Proposition \ref{height one}.
Seeking a contradiction, we assume that the latter inclusion is not an equality.
Then there is a purely inseparable extension $K\subset E$
of degree $p$ that is linearly disjoint from $K\subset L$. Consider the composite field $K''=K'\otimes_K E$.
Then $K\subset K''$ is purely inseparable of height $\leq 1$ and strictly larger
than $K'$, so $X''=X\otimes_KK''$ is generically nonreduced. On the other hand,
$X''$ is birational to
$$
Z\otimes_{K'}K''=Z\otimes_L (L\otimes_{K'}K'')= Z\otimes_L (L\otimes_K E),
$$
which is integral
by Lemma \ref{remains integral}, contradiction.
\qed
\section{Generic embedding dimension}
\mylabel{generic embedding dimension}
Let $K$ be a field of characteristic $p>0$, and $K\subset F$ be
a finitely generated extension field.
If $K\subset K'$ is purely inseparable, the tensor product
$R=F\otimes_KK'$ is a local Artin ring. We now investigate
its \emph{embedding dimension} $\edim(R)$, that is, the smallest number of generators
for the maximal ideal $\maxid_R$, or equivalently the vector space dimension
of the cotangent space $\maxid_R/\maxid_R^2$ over the residue field
$R/\maxid_R$.
We will first relate the embedding dimension
$\edim(F\otimes_KK')$, which we regard as an invariant from
algebraic geometry, with some invariants from field theory.
Our analysis hinges on Kraft's beautiful result on the structure of
finitely generated field extensions: According to \cite{Kraft 1970},
there is a chain of intermediate fields
\begin{equation}
\label{kraft presentation}
K\subset F_0\subset F_1\subset\ldots\subset F_m=F
\end{equation}
so that $K\subset F_0$ is separable, each $F_i\subset F_{i+1}$ is purely inseparable,
and moreover $F_{i+1} $ is generated over $F_i$ by a single element $a_i$ whose minimal polynomial is
of the form $T^{p^{r_i}}-b_i$ with constant term $b_i\in K(F_i^{p^{r_i}})$ and $r_i>0$. We now
exploit the rather special form of the constant terms $b_i$.
\begin{proposition}
\mylabel{integers coincide}
Suppose the extension $K\subset K'$ contains $K^{1/p}$. Then the following integers coincide:
\begin{enumerate}
\item
The embedding dimension of $F\otimes_K K'$.
\item
The number $m$ of purely inseparable field extensions in the chain (\ref{kraft presentation}).
\item
The difference between the $p$-degree and the transcendence degree of $K\subset F$.
\end{enumerate}
\end{proposition}
\proof
Recall that the \emph{$p$-degree} of an arbitrary field extension $K\subset F$ is defined
as the $p$-degree of the height $\leq 1$ extension
$K(F^p)\subset F$.
Let $t_1,\ldots,t_n\in F_0$ be a separating transcendence basis over $K$, such that $n$ is
the transcendence degree for $K\subset F$. Clearly, $t_1,\ldots,t_n$ together
with $a_1,\ldots,a_m$ comprise a $p$-basis for $K(F^p)\subset F$, so the integers in
(ii) and (iii) are indeed the same.
We now check $\edim(F\otimes_KK')=m$ by induction on $m$.
If $m=0$, then $K\subset F$ is separable, whence $F\otimes_KK'$ is a field.
Suppose now $m\geq 1$, and assume inductively that the local Artin ring $F_{m-1}\otimes_KK'$
has embedding dimension $m-1$.
Write $F=F_{m-1}[T]/(T^{p^r}-b)$ for some $b\in K(F_{m-1}^{p^r})$.
Clearly, $b$ is not a $p$-th power, but it becomes a $p$-th power after tensoring
with $K'$ because $K^{1/p}\subset K'$. Now write
$$
F\otimes_K K'= (F_{m-1}\otimes_KK' )[T]/(T^{p^r}-b\otimes1).
$$
Our claim then follows from the following Lemma.
\qed
\begin{lemma}
\mylabel{plus one}
Suppose $R$ is a local Noetherian ring in characteristic $p>0$,
and let $A=R[T]/(T^{p^r}-f^p)$ for some integer $r\geq 1$ and some element $f\in R$.
Then we have $\edim(A)=\edim(R)+1$.
\end{lemma}
\proof
We may assume that $\maxid_R^2=0$. Let $\bar{f}$ denote the class of $f$
in the residue field $k=R/\maxid_R$. Write $\bar{f}=\bar{g}^{p^{m-1}}$ with $\bar{g}\in k$
and $0\leq m-1\leq r$ as large as possible, and choose a representant $g\in R$ for $\bar{g}$.
Then $f^p-g^{p^m}\in\maxid_R^p$, whence $f^p=g^{p^m}$. Set $h=T^{p^{r-m}}-g$, such that
$A=R[T]/(h^{p^m})$. Consider the ring $A'=R[T]/(h)$. This is a \emph{gonflement}
of $R$ in the sense of \cite{AC 8-9}, Chapter IX, Appendix to No.\ 1, and we have $\edim(A')=\edim(R)$ by loc.\ cit.\ Proposition 2.
The element $h\in A$ is nilpotent, and $A'=A/hA$ by definition. It therefore suffices
to check $h\not\in\maxid_A^2$. Seeking a contradiction, we assume $h\in \maxid_A^2$.
Applying the functor $\otimes_Rk$, we reduce to the case that $R=k$ and $A'=k'$ are fields,
such that $\maxid_A=hA$. It then follows that $\maxid_A=\cap_{i\geq 0}\maxid_A^i=0$, such that
the projection $A\ra A'$ is bijective. Taking ranks of these free $R$-modules, we obtain $p^m=1$
and thus $m=0$, which contradicts $0\leq m-1$.
\qed
In \cite{Kraft 1970}, the integer in Proposition \ref{integers coincide} is
called the \emph{inseparability} of $K\subset F$.
We prefer to call it the \emph{geometric embedding dimension} for $K\subset F$, to
avoid confusion with other measures of inseparability and to stress its
geometric meaning. Note that this invariant neither depends on the choice of $K'$
nor on the choice of the particular chain (\ref{kraft presentation}).
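To have a concrete example at hand, take $K=\mathbb{F}_p(s,t)$ and $F=K(x)[y]/(y^p-s-tx^p)$. The extension $K\subset K(x)$ is separable, and $s+tx^p$ lies in $K(x^p)$ but is not a $p$-th power in $K(x)$, so there is a chain (\ref{kraft presentation}) with $m=1$; hence the geometric embedding dimension of $K\subset F$ is one, which is indeed the $p$-degree two minus the transcendence degree one. Concretely, the element $y-s^{1/p}-t^{1/p}x$ is a nonzero nilpotent in $F\otimes_KK^{1/p}$, because its $p$-th power equals $y^p-s-tx^p=0$.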
We now go back to the setting of algebraic geometry:
Suppose $X$ is an integral proper $K$-scheme with $K=H^0(X,\O_X)$.
Let $F=\O_{X,\eta}$ be its function field. We call the embedding dimension
of $F\otimes_K K^{1/p}$ the \emph{geometric generic embedding dimension} of $X$.
The main result of the paper is the following:
\begin{theorem}
\mylabel{inseparability degree}
Let $X$ be a proper normal $K$-scheme with $K=H^0(X,\O_X)$.
Suppose that $X$ is not geometrically reduced.
Then the geometric generic embedding dimension of $X$ is smaller
than the degree of imperfection for $K$.
\end{theorem}
\proof
The statement is trivial if the degree of imperfection for $K$ is infinite.
Assume now that this degree of imperfection is finite.
According to Proposition \ref{maximal extension}, there is a purely inseparable extension $K\subset K'$
of height $\leq 1$ so that the induced scheme $X'=X\otimes_KK'$ is integral,
and that its normalization $Z$ has the property that $H^0(Z,\O_Z)=K^{1/p}$.
Note that $K$ cannot be perfect, since $X$ is not geometrically reduced,
and hence $K\neq K'$.
We now use the transitivity properties of tensor products: The scheme
$$
X\otimes_KK^{1/p} = (X\otimes_KK')\otimes_{K'}K^{1/p} = X'\otimes_{K'}K^{1/p}
$$
is birational to
$$
Z\otimes_{K'}K^{1/p}= Z\otimes_{K^{1/p}}(K^{1/p}\otimes_{K'}K^{1/p}).
$$
The local Artin ring $K^{1/p}\otimes_{K'}K^{1/p}$ has residue field $K^{1/p}$, and its embedding
dimension coincides with the $p$-degree of $K'\subset K^{1/p}$,
by Lemma \ref{embedding dimension} below. This embedding dimension is clearly the geometric generic
embedding dimension of $X$.
The $p$-degree of $K'\subset K^{1/p}$ is strictly smaller than the $p$-degree of $K\subset K^{1/p}$, which coincides with the degree of imperfection for $K$.
\qed
\begin{corollary}
\mylabel{base dimension}
Let $k$ be a perfect field, $f:X\ra B$ be a proper morphism
with $\O_B=f_*(\O_X)$ between integral normal algebraic $k$-schemes.
Let $\eta\in B$ be the generic point and suppose $\dim(B)>0$.
Then the geometric generic embedding dimension of $X_\eta$
is smaller than $\dim(B)$.
\end{corollary}
\proof
The statement is trivial if $X_\eta$ is geometrically reduced.
If $X_\eta$ is not geometrically reduced, then its geometric generic embedding
dimension is smaller than the degree of imperfection of the function field $K=\O_{B,\eta}$.
The latter coincides with $\dim(B)$ by \cite{A 4-7}, Chapter IV, \S 16, No.\ 6,
Corollary 2 to Theorem 4, because the ground field $k$ is perfect.
\qed
Let us also restate MacLane's result \cite{MacLane 1939}, Theorem 2 in geometric form:
\begin{corollary}
\mylabel{over curve}
Assumptions as in Corollary \ref{base dimension}.
Suppose additionally that $B$ is a curve.
Then $X_\eta$ is geometrically reduced.
\end{corollary}
\proof
According to the previous Corollary, the geometric generic embedding dimension of $X_\eta$
is zero.
\qed
In the proof for Theorem \ref{inseparability degree}, we needed the following fact:
\begin{lemma}
\mylabel{embedding dimension}
Let $K\subset L$ be a finite purely inseparable extension of height $\leq 1$.
Then the local Artin ring $R=L\otimes_{K}L$ has residue field isomorphic to $L$,
and its embedding dimension equals the $p$-degree of $K\subset L$.
\end{lemma}
\proof
Choose a $p$-basis $a_1,\ldots,a_n\in L$, say with $a_i^p=b_i\in K$.
Then we have $L=K[T_1,\ldots,T_n]/(T_1^p-b_1,\ldots,T_n^p-b_n)$, and consequently
$$
R=L[U_1,\ldots,U_n]/(U_1^p-a_1^p,\ldots,U_n^p-a_n^p).
$$
Clearly, the $U_i-a_i$ are nilpotent and generate the $L$-algebra $R$,
and their residue classes in $\maxid_R/\maxid_R^2$
comprise a vector space basis. Hence $\edim(R)$ equals the $p$-degree of $K\subset L$.
\qed
\section{$p$-Fermat hypersurfaces}
\mylabel{fermat hypersurfaces}
Let $K$ be a ground field of characteristic $p>0$.
In this section, we shall consider \emph{Fermat hypersurfaces} $X\subset\mathbb{P}^n$ defined by
a homogeneous equation of the special form
$$
\lambda_0 U_0^p+\lambda_1 U_1^p+\ldots+\lambda_n U_n^p=0.
$$
Here we write $\mathbb{P}^n=\Proj(K[U_0,\ldots,U_n])$, and $\lambda_0,\ldots,\lambda_n\in K$ are scalars
not all zero. Let us call such subschemes \emph{$p$-Fermat hypersurfaces}.
If the scalars are contained in $K^p\subset K$, then $X$
is the $(p-1)$-th infinitesimal neighborhood of a hyperplane.
In any case, $X$ is a twisted form of the $(p-1)$-th infinitesimal neighborhood of
a hyperplane, such that $X$ is nowhere smooth. The following is immediate:
\begin{proposition}
\mylabel{rational point}
We have $X(K)=\emptyset$ if and only if the scalars
$\lambda_0,\ldots,\lambda_n\in K$ are linearly independent over $K^p$.
\end{proposition}
The main goal of this section is to understand the \emph{singular locus} $\Sing(X)\subset X$,
which comprises the points $x\in X$ whose local ring $\O_{X,x}$ is not regular.
To this end, let
$$
f=\lambda_0 U_0^p+\lambda_1 U_1^p+\ldots +\lambda_n U_n^p\in K[U_0,\ldots,U_n]
$$
be the homogeneous polynomial defining $X=V_+(f)$, and consider the intermediate field
$$
K^p\subset E\subset K
$$
generated over $K^p$ by all fractions $f(\alpha_0,\ldots,\alpha_n)/f(\beta_0,\ldots,\beta_n)$
with nonzero denominator, and scalars $\alpha_i,\beta_i\in K$. Clearly,
this intermediate field depends only on the closed subscheme $X\subset\mathbb{P}^n$, and does not change if
one translates $X$ by an automorphism of $\mathbb{P}^n$.
A more direct but less invariant description of this intermediate field is as follows:
\begin{proposition}
\mylabel{uncanonical description}
Suppose $\lambda_r\neq 0$. Then the extension $K^p\subset E$
is generated by the fractions $\lambda_i/\lambda_r$, $0\leq i\leq n$.
\end{proposition}
\proof
Let $K^p\subset E'\subset K$ be the intermediate field generated by the $\lambda_i/\lambda_r$.
The inclusion $E'\subset E$ is trivial.
The converse inclusion follows from
$$
f(\alpha_0,\ldots,\alpha_n)/f(\beta_0,\ldots,\beta_n) =
{\sum_{i=0}^n}\alpha_i^p
(\sum_{j=0}^n \frac{\lambda_j}{\lambda_r}\frac{\lambda_r}{\lambda_i}\beta_j^p )^{-1},
$$
where the outer sum is restricted to those indices $0\leq i\leq n$ with $\lambda_i\neq 0$.
\qed
We obtain a \emph{numerical invariant} $d=[E:K^p]_p$ for our $p$-Fermat variety $X\subset\mathbb{P}^n$,
the $p$-degree of the field extension $K^p\subset E$.
In light of Proposition \ref{uncanonical description}, we have $0\leq d\leq n$.
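For instance, over $K=\mathbb{F}_p(s,t)$ and with $n=2$, the choice $\lambda=(s,t,1)$ yields $E=K^p(s,t)$ and $d=2$, the choice $\lambda=(s,1,1)$ yields $E=K^p(s)$ and $d=1$, and the choice $\lambda=(1,1,1)$ yields $E=K^p$ and $d=0$, the last curve being the $p$-fold line.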
This numerical invariant has a geometric significance:
\begin{theorem}
\mylabel{codimension singular}
The scheme $X$ is regular if and only if $d=n$.
If the singular set $\Sing(X)\subset X$ is nonempty, then its codimension equals the $p$-degree
$d=[E:K^p]_p$.
\end{theorem}
\proof
Let $c\in\left\{0,\ldots,n-1,\infty\right\}$ be the codimension of $\Sing(X)\subset X$.
In the first part of the proof we show that $c\geq d$.
By convention, the case $c=\infty $ means that $\Sing(X)$ is empty.
Without loss of generality we may assume that $\lambda_0=1$,
and that $\lambda_1,\ldots,\lambda_d\in K$ are $p$-linearly independent.
For $d+1\leq j \leq n$, we then may write
$$
\lambda_j = P_j(\lambda_1,\ldots,\lambda_d),
$$
where $P_j(V_1,\ldots,V_d)$ is a polynomial with coefficients in $K^p$ and of degree $\leq p-1$ in each of the variables.
Since the scalars $\lambda_1,\ldots,\lambda_d\in K$ are $p$-linearly independent,
there are derivations $D_i:K\ra K$ with $D_i(\lambda_j)=\delta_{ij}$,
the Kronecker Delta. We may extend them to derivations of degree zero
$D_i:K[X_0,\ldots,X_n]\ra K[X_0,\ldots,X_n]$ sending the variables to zero.
Then the singular locus of $X=V_+(f)$ is contained in the closed subscheme
$S\subset\mathbb{P}^n$ defined by the vanishing of the homogeneous polynomials
\begin{gather*}
f=U_0^p+\lambda_1 U_1^p+\ldots+\lambda_n U_n^p,\\
D_i(f)=U_i^p + \sum_{j=d+1}^{n}\frac{\partial P_j}{\partial V_i}(\lambda_1,\ldots,\lambda_d)U_j^p,\quad 1\leq i\leq d.
\end{gather*}
Substituting the relations $D_i(f)$ into the former relation $f=0$,
we infer that $S\subset\mathbb{P}^n$ is defined by the vanishing of homogeneous polynomials of the form
$$
U_i^p-Q_i(U_{d+1},\ldots,U_n),\quad 0\leq i\leq d.
$$
It suffices to check that $\dim(S)\leq n-1-d$. Suppose this is not the case.
By Krull's Principal Ideal Theorem, the intersection $S'=S\cap V_+(U_{d+1},\ldots,U_n)$
has dimension $\geq 0$. On the other hand, we have
$S'_\red=V_+(U_0,\ldots,U_n)=\emptyset$, contradiction.
This proves that $c\geq d$. In particular, the scheme $X$ is regular provided $d=n$.
To finish the proof we claim that $c=d$ if $d<n$.
We prove the claim by induction on $n$. The case $n=1$ is trivial.
Now suppose $n\geq 2$, and that the claim is true for $n-1$.
Suppose $d<n$, otherwise there is nothing to prove.
Without loss of generality we may assume that $\lambda_n\in K$ is nonzero and $p$-linearly
dependent on $\lambda_0,\ldots, \lambda_{n-1}$.
To proceed we employ Grothendieck's theory of \emph{the generic hyperplane
section} as exposed in the unpublished manuscript \cite{EGA V}, confer
also Jouanolou's monograph \cite{Jouanolou 1983}. Let $\check{\mathbb{P}}^n$ be the scheme of hyperplanes in $\mathbb{P}^n$,
and $H\subset\mathbb{P}^n\times\check{\mathbb{P}}^n$ be the universal hyperplane,
and $\eta\in\check{\mathbb{P}}^n$ be the generic point.
We then denote by $Y=(X\times\check{\mathbb{P}}^n)\cap H$ the universal hyperplane section of $X$,
and by $Y_\eta=Y\times_{\check{\mathbb{P}}^n}\Spec \kappa(\eta)$ the generic hyperplane
section of $X$. Note that $Y_\eta$ is actually a hyperplane section in $X\otimes_K\kappa(\eta)$,
and that the projection $Y_\eta\ra X$ has highly unusual geometric properties.
We have a closed embedding $Y_\eta\subset H_\eta$,
and $H_\eta$ is isomorphic to the projective space of dimension $n-1$ over
$K'=\kappa(\eta)$. To describe it explicitly, let
$U_i^*$ be the homogeneous coordinates for $\check{\mathbb{P}}^n$ dual to the $U_i$.
Then the universal hyperplane $H\subset\mathbb{P}^n\times\check{\mathbb{P}}^n$ is defined by
the bihomogeneous equation $\sum U_i\otimes U_i^*=0$, and
the function field of $\check{\mathbb{P}}^n$ is the subfield
$K'\subset K(U^*_0,\ldots,U_n^*)$
generated by the fractions $U^*_i/U_n^*$, $0\leq i\leq n$.
In turn, the generic hyperplane $H_\eta\subset\mathbb{P}^n_\eta$ is defined
by the homogeneous equation $\sum_{i=0}^n U_i\otimes U_i^*/U_n^*=0$, such that we have
the additional relation $U_n\otimes 1=-\sum_{i=0}^{n-1} U_i\otimes U_i^*/U_n^*$.
Hence we may write the generic hyperplane $H_\eta=\Proj K'[U'_0,\ldots,U'_{n-1}]$
with $U'_i=U_i\otimes 1$.
In these homogeneous coordinates, the
generic hyperplane section is the $p$-Fermat variety given by $Y_\eta=V_+(f')$ with
$$
f'=\sum_{i=0}^{n-1} (\lambda_i/\lambda_n - (U_i^*/U_n^*)^p)(U'_i)^p.
$$
Now let $c'\in\left\{0,\ldots,n-2,\infty\right\}$ be the codimension of the
singular set $\Sing(Y_\eta)\subset Y_\eta$, and $d'=[E':K'^p]_p$ be the numerical
invariant of the $p$-Fermat variety $Y_\eta\subset H_\eta$. Using that $\lambda_n$ is $p$-linearly dependent
on the $\lambda_1,\ldots,\lambda_{n-1}$, we easily see
that $d=d'$. We now use that the projection $Y_\eta\ra X$ is flat with geometrically regular fibers,
and that its set-theoretical image is the \emph{set of nonclosed points} $x\in X$.
This implies that
$$
c'=\begin{cases}
\infty &\text{if $c=n-1$,}\\
c &\text{if $c\leq n-2$.}
\end{cases}
$$
We now distinguish two cases:
Suppose first that $d=n-1$, such that $d'=n-1$. We already saw that
this implies that $Y_\eta$ is regular, whence $c'=\infty$, and therefore $c=n-1$.
Now consider the case that $d\leq n-2$. Then $d'\leq (n-1)-1$, and
the induction hypothesis implies $c'=d'$, and finally $d=d'=c'=c$.
Summing up, we have $c=d$ in both cases.
\qed
\section{Singularities on $p$-Fermat plane curves}
\mylabel{fermat curves}
Keeping the assumptions of the previous section, we now
study the singularities lying on \emph{$p$-Fermat plane curves} $X\subset\mathbb{P}^2_K$,
say defined by
\begin{equation}
\label{fermat equation}
\lambda_0U_0^p+\lambda_1U_1^p+\lambda_2U_2^p=0
\end{equation}
with $\lambda_0,\lambda_1,\lambda_2\in K$.
First note that $X$ is a geometrically irreducible projective curve,
and has $h^0(\O_X)=1$ and $h^1(\O_X)=(p-1)(p-2)/2$.
Let $0\leq d\leq 2$ be the numerical invariant of the $p$-Fermat variety $X$, as defined
in the previous section. The curve $X$ is regular if $d=2$, nonreduced if $d=0$,
and has isolated singularities if $d=1$.
With respect to singularities, only the case $d=1$ is of interest.
Throughout, we suppose that $d=1$. Without loss of generality,
we may then assume that $\lambda_2=1$, that $\lambda=\lambda_0$ does not lie in $K^p$,
and that $\lambda_1=P(\lambda)$ is a polynomial of degree $<p$ in $\lambda$.
So $X\subset\mathbb{P}^2_K$ is defined by the homogeneous equation
$$
\lambda U_0^p+P(\lambda)U_1^p+U_2^p=0.
$$
Consider the field extension $L=K(\lambda^{1/p})$, and let $T_0,T_1$ be two indeterminates.
The map
\begin{gather*}
K[U_0,U_1,U_2]\lra L[T_0,T_1],\\
U_0\longmapsto T_0 ,\quad
U_1\longmapsto T_1,\quad
U_2\longmapsto -\lambda^{1/p}T_0-P(\lambda^{1/p})T_1
\end{gather*}
defines a morphism $\mathbb{P}^1_L\ra\mathbb{P}^2_K$ factoring over $X\subset\mathbb{P}^2_K$.
\begin{proposition}
\mylabel{normalization}
The induced morphism $\nu:\mathbb{P}^1_L\ra X$ is the normalization of $X$.
\end{proposition}
\proof
It suffices to check that the morphism
$\nu:\mathbb{P}^1_L\ra X$ has degree one. The intersection $X\cap V_+(U_0)$
is a Cartier divisor of length $p$.
Its preimage on $\mathbb{P}^1_L$ is given by $V_+(T_0)$, which also has length $p$ over $K$.
Whence the degree in question is one.
\qed
It follows that the field extension $K\subset L$ is nothing but the field of global
sections on the normalization of $X$. Therefore, it depends only on the curve $X$,
and not on the chosen Fermat equation (\ref{fermat equation}).
\begin{proposition}
\mylabel{singular point}
The singular locus $\Sing(X)$ consists of precisely one point
$a_0\in X$. The corresponding point $a\in\mathbb{P}^1_L$ on the normalization
has residue field $\kappa(a)=L$.
\end{proposition}
\proof
Taking the derivative with respect to $\lambda$, we see that the singular locus $\Sing(X)$ is contained in the closed subscheme defined by
$$
\lambda U_0^p+P(\lambda)U_1^p+U_2^p=0\quadand
U_0^p+P'(\lambda)U_1^p=0.
$$
Substituting the latter into the former, we see that there is only one singular point $a_0\in X$.
The preimage on $\mathbb{P}^1_L$ is defined by $T_0^p+P'(\lambda)T_1^p$,
which clearly defines an $L$-rational point $a\in\mathbb{P}^1_L$.
\qed
\begin{remark}
\mylabel{homogeneous coordinates}
The homogeneous coordinates for the preimage $a\in\mathbb{P}^1_L$ of the singular point $a_0\in X$
are $(-P'(\lambda^{1/p}):1)$.
\end{remark}
Let $\mathfrak{c}\subset\nu_*(\O_{\mathbb{P}^1_L})$ be the conductor ideal for the finite birational map $\nu:\mathbb{P}^1_L\ra X$,
which is the largest $\nu_*(\O_{\mathbb{P}^1_L})$-ideal contained in the subring $\O_X\subset\nu_*(\O_{\mathbb{P}^1_L})$.
The conductor ideal defines closed subschemes $A\subset\mathbb{P}^1_L$ and $A_0\subset X$,
such that we have a commutative diagram
\begin{equation}
\label{conductor square}
\begin{CD}
A @>>>\mathbb{P}^1_L\\
@VVV @VV\nu V\\
A_0 @>>> X,
\end{CD}
\end{equation}
which is cartesian and cocartesian.
By abuse of notation, we write $\O_{A_0} $ and $\O_A$ for the local Artin rings defining
the conductor schemes $A_0$ and $A$. Then we have $\O_A=L[u]/(u^l)$ for some integer $l\geq 1$, where $u\in\O_{\mathbb{P}^1,a}$ denotes a uniformizer,
and the subring $\O_{A_0}\subset\O_A$ is a $K$-subalgebra.
\begin{proposition}
\mylabel{properties subalgebras}
We have $\O_A=L[u]/(u^{p-1})$, and $\O_{A_0}\subset\O_A$ is a $K$-subalgebra generated by two
elements, with $L\not\subset\O_{A_0}$ and $\dim_K(\O_{A_0})=p(p-1)/2$.
\end{proposition}
\proof
The conductor square (\ref{conductor square}) yields an exact sequence of coherent sheaves
\begin{equation*}
\label{conductor sequence}
0\lra\O_X\lra\O_{A_0}\oplus\nu_*(\O_{\mathbb{P}^1_L})\lra\nu_*(\O_A)\lra 0,
\end{equation*}
which in turn gives a long exact sequence of $K$-vector spaces
\begin{equation}
\label{exact sequence}
0\lra H^0(X,\O_X)\lra \O_{A_0}\oplus L\lra \O_A\lra H^1(X,\O_X)\lra 0.
\end{equation}
Using that $h^0(\O_X)=1$ we infer that $\O_{A_0}\cap L=K$.
The latter is equivalent to $L\not\subset \O_{A_0}$, because the field
extension $K\subset L$ has prime degree.
Being a complete intersection, the curve $X$ is Gorenstein.
According to \cite{Serre 1975}, Chapter IV, \S 3.11, Proposition 7, this implies $\dim_K(\O_{A_0})=\dim_K(\O_{A})/2$.
Now set $r=\dim_K(\O_{A})$.
In light of $h^1(\O_X)=(p-1)(p-2)/2$, it follows $r=(p-1)(p-2)/2 + r/2 + p -1$, whence $r=p(p-1)$.
Finally, since the scheme $\Spec(\O_{A_0})$ contains only one point and embeds into $\mathbb{P}^2_K$,
it embeds even into $\mathbb{A}^2_K=\Spec K[U_1,U_2]$, whence the $K$-subalgebra $\O_{A_0}\subset \O_{A}$
is generated by two elements.
\qed
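To make the count concrete, for $p=3$ one gets $\O_A=L[u]/(u^2)$, so $\dim_K(\O_A)=[L:K]\cdot 2=6=p(p-1)$ and $\dim_K(\O_{A_0})=3=p(p-1)/2$; accordingly, the alternating sum of dimensions in the sequence (\ref{exact sequence}) is $1-(3+3)+6-1=0$, as it must be.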
The situation is very simple in characteristic two:
\begin{corollary}
\mylabel{conductor 2}
If $p=2$, then $\O_A=L$ and $\O_{A_0}=K$.
\end{corollary}
There is more to say for odd primes. Here we have to distinguish two cases,
according to the residue field of the singularity $a_0\in X$, which is either
$K$ or $L$:
\begin{proposition}
\mylabel{conductor L}
Suppose $p\geq 3$ and $\kappa(a_0)=\O_{A_0}/\maxid_{A_0}$ equals $L$.
Then we have $\O_{A_0}=K[\mu+f,g]$ for some $\mu\in L\smallsetminus K$ and
$f\in\maxid_{A}\smallsetminus \maxid_{A}^2$ and $g\in\maxid_A^2\smallsetminus\maxid_A^3$.
\end{proposition}
\proof
According to \cite{Matsumura 1980}, Theorem 60, the projection $\O_{A_0}\ra L$ onto the residue field admits a section.
In other words, it is possible to embed $L=\O_{A_0}/\maxid_{A_0}$
as a \emph{coefficient field} $L'\subset\O_{A_0}$;
but note that such coefficient fields are not unique.
By Proposition \ref{properties subalgebras}, the $K$-algebra $\O_{A_0}$ is generated by two elements, say $h,g\in \O_{A_0}$.
If both $h,g$ were contained in $\maxid_{A_0}\cup K$, the residue field would be $K$, a contradiction.
Without loss of generality we may assume that $h\in\O_{A_0}$ generates a coefficient
field $L'\subset\O_{A_0}$.
Let $\mu\in L$ be the image of $h$ in the residue field.
Inside $\O_A=L[u]/(u^{p-1})$, we have $h=\mu+f$ for some
$f\in\maxid_A$.
To continue, write $g=\epsilon u^l$ for some
unit $\epsilon\in \O_A$ and some integer $0\leq l\leq p-1$.
Adding some polynomial in $h$ to $g$, we may assume $l\geq 1$.
We now check that $l=2$. Clearly, $\O_{A_0}=L'[g]/(g^{d})$,
where $d=\lceil(p-1)/l\rceil$, such that
$$
p(p-1)/2=\dim_K(\O_{A_0})=p \lceil(p-1)/l\rceil.
$$
This equation implies $(p-1)/2\geq (p-1)/l> (p-1)/2-1$,
which gives $l=2$ whenever $p\neq 5$. For $p=5$ the inequality also allows $l=3$; in that case, however, $L'g$ would be the socle $Lu^{p-2}$, a nonzero $\O_A$-ideal contained in $\O_{A_0}$, contradicting the maximality of the conductor ideal. Hence $l=2$ in all cases.
It remains to verify $f\not\in\maxid_A^2$.
One way of seeing this is to use that the local Artin ring $\O_A=\O_{\mathbb{P}^1_L}/\mathfrak{c}\simeq L[u]/(u^{p-1})$
contains a \emph{canonical} coefficient field, namely the image of $H^0(\mathbb{P}^1_L,\O_{\mathbb{P}^1_L})$.
This gives a splitting $\O_A=L\oplus\maxid_A$ of $L$-vector spaces,
and whence a projection $\pr:\O_A\ra\maxid_A$.
Note that this projection does not depend on any choices of the uniformizer $u$.
Seeking a contradiction, we now assume that $f\in\maxid_A^2$.
Then $\pr(\O_{A_0})\subset\maxid_A^2$.
On the other hand, the discussion at the beginning of
Section \ref{fermat curves} tells us that $X\subset\mathbb{P}^2_K$
is defined by a homogeneous equation of
the form $\lambda U_0^p+P(\lambda)U_1^p+U_2^p=0$.
The preimage $a\in \mathbb{P}^1_L$ of the singular point $a_0\in X$ has
homogeneous coordinates $(-P'(\lambda^{1/p}):1)$, according
to Remark \ref{homogeneous coordinates}.
The normalization is described in Proposition \ref{normalization},
and sends $U_0/U_1$ to the element $T_0/T_1=u-P'(\lambda^{1/p})$,
where $u=T_0/T_1+P'(\lambda^{1/p})$. It follows that $u\in\maxid_A\smallsetminus\maxid_A^2$ is contained in
$\pr(\O_{A_0})\subset\maxid_A^2$, a contradiction.
\qed
\begin{proposition}
\mylabel{conductor K}
Suppose $p\geq 3$ and $\kappa(a_0)=\O_{A_0}/\maxid_{A_0}$ equals $K$. Then we have
$\O_{A_0}=K[v,w]$ for some $v,w\in\maxid_A$ so that their classes mod $\maxid_A^2$
are $K$-linearly independent.
\end{proposition}
\proof
Write $\O_{A_0}=K[v,w]$ for some $v,w\in \maxid_A$.
Then the monomials $v^iw^j$ with $i+j\geq p-1$ vanish.
Since we have $\dim_K(\O_{A_0})=p(p-1)/2$, the monomials $v^iw^j\in \O_{A_0}$ with $i+j\leq p-2$
form a $K$-basis. Whence $v^{p-2},w^{p-2}\neq 0$, such that $v,w\not\in\maxid_A^2$.
Seeking a contradiction, we now assume that $v\equiv \alpha w$ modulo $\maxid_A^2$
for some $\alpha\in K$. Then $v^{p-2}=\alpha^{p-2}w^{p-2}$, contradicting
the linear independence of the basis monomials $v^{p-2}$ and $w^{p-2}$.
\qed
\section{Abstract multiple curves}
\mylabel{multiple curves}
We keep the notation from the preceding section, but
now reverse the situation:
Fix an $L$-rational point $a\in\mathbb{P}^1_L$.
Let $A\subset\mathbb{P}^1_L$ be the $(p-2)$-th infinitesimal neighborhood of $a$, and
write $\O_A=L[u]/(u^{p-1})$. Now choose a $K$-subalgebra $\O_{A_0}\subset\O_A$
and consider the resulting morphism $A=\Spec(\O_A)\ra\Spec(\O_{A_0})=A_0$.
The pushout square
$$
\begin{CD}
A @>>> \mathbb{P}^1_L\\
@VVV @VVV\\
A_0 @>>> C
\end{CD}
$$
defines a proper integral curve $C$, with normalization $\nu:\mathbb{P}^1_L\ra C$
and containing a unique singular point $a_0\in C$.
We call the subalgebra $\O_{A_0}\subset\O_A$ \emph{admissible} if
it takes the form described in Corollary \ref{conductor 2} or Proposition \ref{conductor L} or Proposition \ref{conductor K}.
From now on we assume that our subalgebra is admissible, and ask whether or not
the resulting curve $C$ is embeddable into $\mathbb{P}^2_K$ as a $p$-Fermat plane curve. The cohomology groups indeed have the right dimensions:
\begin{proposition}
\mylabel{cohomology dimensions}
We have $h^0(\O_C)=1$ and $h^1(\O_C)=(p-1)(p-2)/2$.
\end{proposition}
\proof
It is easy to see that an admissible subalgebra $\O_{A_0}\subset\O_A$ satisfies the conclusion
of Proposition \ref{properties subalgebras}, that is,
$\dim_K(\O_{A_0})=p(p-1)/2$ and $L\not\subset\O_{A_0}$.
The statement in question follows from the
exact sequence
$$
0\lra H^0(C,\O_C)\lra \O_{A_0}\oplus L\lra \O_A\lra H^1(C,\O_C)\lra 0,
$$
as in the proof for Proposition \ref{properties subalgebras}.
\qed
To proceed, we have to study the behavior of $C$ under base change.
If $K\subset K'$ is a field extension, then the induced curve $C'=C\otimes_KK'$
sits inside the cartesian and cocartesian diagram
\begin{equation}
\label{base change diagram}
\begin{CD}
A' @>>> \mathbb{P}^1_{L\otimes_KK'}\\
@VVV @VVV\\
A_0' @>>> C',
\end{CD}
\end{equation}
with $A'=A\otimes_KK'$ and $A_0'=A_0\otimes_KK'$.
In particular, $C'$ is nonreduced if and only if the extension field $K\subset K'$
contains $L$. The next result implies that under suitable assumptions,
$C'$ locally looks like a Cartier divisor inside a regular surface:
\begin{proposition}
Suppose $L\subset K'$. Then every closed point on $C'$ has embedding dimension two.
\end{proposition}
\proof
The assertion is clear outside the closed point $a_0'\in C'$ corresponding to the singularity $a_0\in C$.
To understand the embedding dimension of $\O_{C',a_0'}$, let $M,G$ be two indeterminates, and consider
the $K$-algebra
$$
R=K[M][[G]],
$$
which is a regular ring of dimension two that is a formally smooth $K$-algebra.
Therefore, it suffices to construct a surjection $R\ra\O_{C,a_0}^\wedge$.
We do this for the case that $\kappa(a_0)=L$, that is, $\O_{A_0}=K[\mu+f,g]$
as in Proposition \ref{conductor L} (the other cases are similar, actually simpler).
Choose lifts $\tilde{f},\tilde{g}\in\O_{C,a_0}$ for $f,g\in\O_{A_0}$, and define
$$
h:R\lra\O_{C,a_0}^\wedge,\quad M\longmapsto \mu+\tilde{f},\quad G\longmapsto\tilde{g}.
$$
Note that we have to work with $R$ rather than its formal completion $K[[M,G]]$,
because the image $\mu+\tilde{f}$ does not lie in the maximal ideal.
Obviously, the composite map $R\ra\O^\wedge_{C,a_0}\ra\O_{A_0}$ is surjective.
Now let $\lambda u^i\in\O_{C,a_0}^\wedge\subset L[[u]]$ be a monomial with $\lambda\in L$ and $i\geq p-1$.
By completeness, it suffices to check that this monomial lies in $h(R)$ modulo $u^{i+1}$.
Consider first the case that $i$ is even.
Write $g^{i/2}=\lambda'u^i$ modulo $u^{i+1}$ for some nonzero $\lambda'\in L$, and write $\lambda/\lambda' =P(\mu)$
as a polynomial of degree $<p$ with coefficients in $K$ in terms of the generator $\mu\in L$.
Then $\lambda u^i = P(\mu +f)\cdot g^{i/2}$ modulo $u^{i+1}$.
Finally suppose $i$ is odd. Write $i=p+j$ for some even $j\geq 0$.
Then $(\mu+f)^p=\mu^p+\alpha u^p$ modulo $u^{p+1}$ with nonzero $\mu^p,\alpha\in K$.
Moreover, $g^{j/2}=\lambda' u^j$ modulo $u^{j+1}$ for some nonzero $\lambda'\in L$. As above, we find some polynomial $P$ of degree $<p$
with $P(\mu)=\lambda/\lambda'$. Then
$$
\lambda u^i=P(\mu+f) \frac {(\mu+f)^p -\mu^p}{\alpha} g^{j/2} \mod u^{i+1}.
$$
Using that $R$ is $G$-adically complete and that $\O_{C,a_0}^\wedge$ is complete,
we infer that $h:R\ra\O_{C,a_0}^\wedge$ is surjective.
\qed
\begin{proposition}
Suppose $L\subset K'$. Then $C'_\red=\mathbb{P}^1_{K'}$.
\end{proposition}
\proof
It suffices to treat the case $K'=L$.
The diagram (\ref{base change diagram}) yields a birational morphism
$$
\nu:\mathbb{P}^1_{K'}=(\mathbb{P}^1_{L\otimes_KK'})_\red\lra C'_\red.
$$
Let $a_0'\in C'$ and $a'\in\mathbb{P}^1_{K'}$ be the points corresponding
to the singularity $a_0\in C$.
It suffices to check that the fiber $\nu^{-1}(a_0')\subset\mathbb{P}^1_{K'}$
is the reduced scheme given by $a'$.
In other words, the maximal ideal $\maxid_{A'}\subset\O_{A'}$ is generated
by the maximal ideal $\maxid_{A_0}$ and the nilradical of $\O_{\mathbb{P}^1_{L\otimes_KK'}}$.
Let us do this in the case the residue field of $a_0\in C$ is $L$,
that is, $\O_{A_0}=K[\mu+f,g]$ as in Proposition
\ref{conductor L} (the other two cases being similar).
Then $\O_{A_0'}$ is generated over $L$ by $\mu\otimes1+f\otimes1$ and $g\otimes 1$,
whence also by $\mu\otimes1-1\otimes\mu +f\otimes 1$ and $g\otimes 1$.
But the nilradical of $\mathbb{P}^1_{L\otimes_KK'}$ is generated by $\mu\otimes 1-1\otimes\mu$.
By assumption, $f\in\O_{A}$ generates the maximal ideal,
and this implies that $f\otimes1\in\O_{A'} $ generates the maximal ideal modulo
$\mu\otimes1-1\otimes\mu$, whence the claim.
\qed
If $L\subset K'$, then $C'$ is a nonreduced curve with smooth reduction and multiplicity $p$, and for each closed point $y\in C'$,
the complete local ring $\O_{C',y}^\wedge$ is a quotient of a formal power series ring in two variables over $\kappa(y)$.
Loosely speaking, one may say that $C'$ \emph{locally embeds into regular surfaces}.
Such curves were studied, for example, by B\u{a}nic\u{a} and Forster \cite{Banica; Forster 1986}, Manolache \cite{Manolache 1994}, and Drezet \cite{Drezet 2007},
and are called \emph{abstract multiple curves}.
Let $\shN\subset\O_{C'}$ be the nilradical. Then the ideal powers
$$
\O_{C'}=\shN^0\supset\shN\supset\shN^2\supset\ldots\supset\shN^p=0
$$
define a filtration on the structure sheaf $\O_{C'}$.
The graded piece $\shL=\shN/\shN^2$ is a coherent sheaf on $C'_\red=\mathbb{P}^1_{K'}$,
and we obtain an algebra map $\Sym(\shL)\ra\gr(\O_{C'})$.
A computation in the complete local rings shows that this map is surjective, with
kernel the ideal generated by $\Sym^p(\shL)$, and that $\shL$ is invertible. Set $d=\deg(\shL)$. Using
$$
1-(p-1)(p-2)/2=\chi(\O_{C'})=\chi(\gr(\O_{C'}))=\sum_{j=0}^{p-1} (jd+1) = d p(p-1)/2 + p,
$$
we infer $\deg(\shL)=-1$. This shows:
\begin{proposition}
\mylabel{graded algebra}
If we have $L\subset K'$, then the associated graded algebra is given by $\gr(\O_{C'})=\bigoplus_{i=0}^{p-1}\O_{\mathbb{P}^1_{K'}}(-i)$,
with the obvious multiplication law.
\end{proposition}
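As a quick consistency check of the preceding computation, take $p=3$: then $\chi(\O_{C'})=1-1=0$, while $\gr(\O_{C'})=\O_{\mathbb{P}^1_{K'}}\oplus\O_{\mathbb{P}^1_{K'}}(-1)\oplus\O_{\mathbb{P}^1_{K'}}(-2)$ has Euler characteristic $1+0+(-1)=0$ as well.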
We shall say that $C'$ \emph{globally embeds into a smooth surface} if there is a smooth proper
connected $K'$-surface $S$ into which $C'$ embeds.
We remark in passing that the surface $S$ is then geometrically connected, or equivalently $K'=H^0(S,\O_S)$,
because we have $K'\subset H^0(S,\O_S)\ra H^0(C',\O_{C'})=K'$.
\begin{theorem}
\mylabel{surface plane}
Suppose that $C'$ embeds globally into a smooth surface and that $p\neq 3$.
Then $C'$ embeds as a $p$-Fermat plane curve into $\mathbb{P}^2_{K'}$.
\end{theorem}
\proof
We first consider the case that $L\subset K'$, for which the assumption $p\neq 3$ plays no role.
Choose an embedding $C'\subset S$ into a smooth proper connected surface.
Then $D=C'_\red$ is isomorphic to a projective line. By Proposition \ref{graded algebra}, its selfintersection
inside $S$ is the number $D^2=1$. Set $\shL=\O_S(D)$. The exact sequence of sheaves
$0\ra\O_S\ra\shL\ra\shL_D\ra 0$ gives an exact sequence
$$
0\ra K'\lra H^0(S,\shL)\lra H^0(D,\shL_D) \lra H^1(S,\O_S).
$$
Suppose for the moment that $H^1(S,\O_S)=0$. Then $\shL$ is globally generated and has $h^0(\shL)=3$.
The resulting morphism $r:S\ra\mathbb{P}^2_{K'}$ is surjective, because $(\shL\cdot\shL)\neq 0$,
and its degree equals $\deg(r)=(\shL\cdot\shL)=1$, whence $r$ is birational.
Moreover, the induced morphism $r:D\ra\mathbb{P}^2_{K'}$ is a closed embedding,
and $r(D)\subset\mathbb{P}^2_{K'}$ is a line. Using $D^2=1=r(D)^2$, we conclude that the exceptional curves
for $r$ are disjoint from $D$. It follows that $C'$ embeds into $\mathbb{P}^2_{K'}$.
Obviously, it becomes a $p$-Fermat plane curve, because $D=C'_\red$ becomes a line.
We now check that indeed $H^1(S,\O_S)=0$. For this we may assume that $K'$ is algebraically closed.
Now $K_S\cdot D=-3$, whence $K_S$ is not numerically effective. By the Enriques classification of surfaces,
$S$ is either the projective plane or ruled. If there is a ruling $f:S\ra B$, then
$D$ is not contained in a fiber, because $D^2>0$. Whence $D\ra B$ is dominant, and it follows
that $B$ is a projective line. In any case, $H^1(S,\O_S)=0$.
Finally, we have to treat the case that $K\subset K'$ is linearly disjoint from $L$.
Tensoring with $K'$, we easily reduce to the case $K'=K$.
Choose separable and algebraic closures $K\subset K^s\subset\bar{K}$
and an embedding $C\subset S$
into a proper smooth connected surface $S$.
Following \cite{GB III}, Section 5, we write $\Pic(S/K)=\Pic_{S/K}(K)$ for the group
of rational points on the Picard scheme.
According to the previous paragraph, $S_{\bar{K}}$ is a rational surface, whence the abelian group $\Pic(S_{\bar{K}})$ is
free of finite rank, and the scheme $\Pic_{S/K}$ is \'etale at each point.
It follows that the canonical map $\Pic(S_{K^s}/K^s)\ra\Pic(S_{\bar{K}}/\bar{K})$ is bijective.
Now let $[D]\in\Pic(S_{K^s})$ be the class
of $D=(C_{\bar{K}})_\red$. This class is necessarily invariant under
the action of the Galois group $\Gal(K^s/K)$, because $pD=C_{\bar{K}}$ comes
from a curve on $S$ and the abelian group $\Pic(S_{\bar{K}})$ is torsion free. We conclude that the class $[D]\in\Pic(S/K)$ exists, although it does not
come from an invertible sheaf on $S$. However, it gives rise to a 2-dimensional Brauer--Severi scheme $B$,
which comes along with a morphism $r:S\ra B$ that induces the morphism
$r_{\bar{K}}:S_{\bar{K}}\ra\mathbb{P}^2_{\bar{K}}$ defined above.
The upshot is that there is an embedding $C\subset B$,
and that the class of $C$ inside $\Pic(B/K)=\mathbb{Z}$ equals $p$.
On the other hand, the class of the dualizing sheaf $\omega_B$ equals $-3$.
Since $p\neq 3$ by assumption, we must have $\Pic(B)=\mathbb{Z}$,
and consequently our Brauer--Severi scheme is $B\simeq\mathbb{P}^2_K$.
It is then easy to check that $C\subset\mathbb{P}^2_K$ is indeed a $p$-Fermat plane curve.
\qed
\section{Genus one curves}
\mylabel{genus one}
The goal of this section is to study geometric nonreducedness for genus one curves.
Throughout, $K$ denotes a ground field of arbitrary characteristic $p>0$.
A \emph{genus one curve} is a proper geometrically irreducible curve $X$ over $K$ with $h^0(\O_X)=h^1(\O_X)=1$.
Clearly, this notion is stable under field extensions $K\subset K'$.
Since $h^0(\O_X)=1$, the curve $X$ contains no embedded component, such that
the dualizing sheaf $\omega_X$ exists.
\begin{proposition}
\mylabel{dualizing sheaf}
Let $X$ be a reduced genus one curve. Then $X$ is Gorenstein, and $\omega_X\simeq\O_X$.
\end{proposition}
\proof
We have $h^0(\omega_X)=h^1(\omega_X)=1$, hence $\omega_X$ admits a nonzero section $s$.
The map $s:\O_X\ra\omega_X$ is injective, because $X$ is reduced.
Hence we have a short exact sequence
$$
0\lra\O_X\stackrel{s}{\lra}\omega_X\lra\shF\lra 0
$$
for some torsion sheaf $\shF$. Using $h^1(\shF)=0$ and $\chi(\shF)=\chi(\omega_X)-\chi(\O_X)=0$,
we conclude $\shF=0$, and the result follows.
\qed
Since $H^2(X,\O_X)=0$, the Picard scheme $\Pic_X$ is smooth and 1-dimensional, so the connected component
of the origin $\Pic_X^0$ is either an elliptic curve, a twisted form of $\mathbb{G}_m$, or a twisted form of $\mathbb{G}_a$.
In other words, it is proper, of multiplicative type, or unipotent.
A natural question is whether all three possibilities occur in genus one curves that are regular
but geometrically nonreduced. It turns out that this is not the case:
\begin{theorem}
\mylabel{picard unipotent}
Let $X$ be a genus one curve that is regular but geometrically nonreduced.
Then $\Pic^0_X$ is unipotent.
\end{theorem}
\proof
Seeking a contradiction, we assume that the Picard scheme is not unipotent.
Choose an algebraic closure $K\subset\bar{K}$, and set $Y=X\otimes_K\bar{K}$.
Proposition \ref{dualizing sheaf} implies that $Y$ is a genus one curve with $\omega_Y=\O_Y$.
Let $\shN\subset\O_Y$ be the nilradical, which defines the closed subscheme
$Y_\red\subset Y$. We first check that $Y_\red$ is also a genus one curve with $\omega_{Y_\red}=\O_{Y_\red}$.
The short exact sequence $0\ra\shN\ra\O_Y\ra\O_{Y_\red}\ra 0$ yields a long exact sequence
$$
H^1(Y,\shN)\lra H^1(Y,\O_Y)\lra H^1(Y_\red,\O_{Y_\red})\lra 0.
$$
Since the Picard scheme $\Pic^0_Y$ contains no unipotent subgroup scheme,
the restriction mapping $H^1(Y,\O_Y)\ra H^1(Y_\red,\O_{Y_\red})$ is injective,
which follows from \cite{Bosch; Luetkebohmert; Raynaud 1990}, Section 9.2, Proposition 5. Whence $h^1(\O_{Y_\red})=1$.
Furthermore, we have $h^0(\O_{Y_\red})=1$ since $Y_\red$ is integral and $\bar{K}$ is algebraically closed.
Consequently, $Y_\red$ is a genus one curve,
and Proposition \ref{dualizing sheaf} tells us that $\omega_{Y_\red}=\O_{Y_\red}$.
Relative duality for the inclusion morphism $Y_\red\ra Y$ yields the formula
$$
\O_{Y_\red}=\omega_{Y_\red} =\shHom(\O_{Y_\red},\omega_Y)=\shHom(\O_{Y_\red},\O_Y).
$$
The term on the right is nothing but the annihilator ideal $\shA\subset\O_Y$ of the nilradical $\shN\subset\O_Y$,
such that $\shA=\O_{Y_\red}$ as $\O_Y$-modules, and in particular $h^0(\shA)=1$.
To finish the proof, consider the closed subscheme $Y'\subset Y$ defined by $\shA\subset\O_Y$.
By assumption, $\shN\neq 0$, such that $\shA\neq \O_Y$, and therefore $Y'\neq\emptyset$.
We have an exact sequence
$$
0\lra H^0(Y,\shA)\lra H^0(Y,\O_Y)\lra H^0(Y',\O_{Y'}),
$$
and consequently $h^0(\O_Y)\geq 2$, contradiction.
\qed
\begin{corollary}
Let $X$ be a genus one curve that is regular but geometrically nonreduced.
Then the reduction of the induced curve $\bar{X}$ over the algebraic closure $K\subset \bar{K}$
is isomorphic to the projective line or the rational cuspidal curve.
\end{corollary}
\begin{proof}
Let $C\subset\bar{X}$ be the reduction. Clearly, $h^1(\O_C)\leq 1$.
We have $C=\mathbb{P}^1_{\bar{K}}$ if $h^1(\O_C)=0$. Now assume that $h^1(\O_C)=1$.
Then $\Pic^0_C$ is unipotent by Theorem \ref{picard unipotent}.
According to \cite{Bosch; Luetkebohmert; Raynaud 1990}, Section 9.3, Corollary 12, this implies that the normalization map $\nu:C'\ra C$
is a homeomorphism, which means that $C$ is the rational cuspidal curve.
\end{proof}
\end{document} |
\begin{document}
\title{Refinement of some partition identities of Merca and Yee}
\author[P. J. Mahanta]{Pankaj Jyoti Mahanta}
\address{Gonit Sora, Dhalpur, Assam 784165, India}
\email{pankaj@gonitsora.com}
\author[M. P. Saikia]{Manjil P. Saikia}
\address{School of Mathematics, Cardiff University, Cardiff, CF24 4AG, UK}
\email{manjil@saikia.in}
\keywords{integer partitions, generating functions, partition identities, truncated partition theorems.}
\subjclass[2020]{11P83, 11P84, 05A17, 05A19.}
\date{\today.}
\begin{abstract}
Recently, Merca and Yee proved some partition identities involving two new partition statistics. We refine these statistics and generalize the results of Merca and Yee. We also correct a small mistake in a result of Merca and Yee.
\end{abstract}
\maketitle
\section{Introduction}
A partition of an integer $n$ is a sequence of weakly decreasing positive integers that sum to $n$. The terms of the sequence are called parts, and a partition $\lambda$ of $n$ is denoted by $\lambda \vdash n$. We denote by $p(n)$ the number of partitions of $n$. For instance, $2+2+1$ is a partition of $5$ and $p(5)=7$. A masterful treatment of this topic is in the book by Andrews \cite{AndrewsBook}.
There is a rich history and literature on partitions with various statistics attached to them. Recently, Merca and Yee \cite{MercaYee} studied several such statistics and proved several interesting results (both analytically and combinatorially). The aim of this paper is to refine the results of Merca and Yee \cite{MercaYee} by imposing additional constraints on the partition statistics they studied.
The following functions are of interest in this paper.
\begin{definition}
For a positive integer $n$, we define
\begin{enumerate}
\item $a_k(n)$ to be the sum of the parts which are divisible by $k$ counted without multiplicity in all the partitions of $n$,
\item $a_{k,p}(n)$ to be the sum of the parts which are congruent to $p \pmod k$ counted without multiplicity in all the partitions of $n$, where $0\leq p\leq k-1$, and
\item $b_k(n)$ to be the sum of the distinct parts that appear at least $k$ times in all the partitions of $n$.
\end{enumerate}
\end{definition}
For example, $a_3(5)=6$, $a_{3,0}(5)=6$, $a_{3,1}(5)=9$, $a_{3,2}(5)=11$ and $b_3(5)=2$, which can be seen from the fact that the partitions of $5$ are \[5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1.\]
Merca and Yee \cite{MercaYee} studied related functions. In particular, they studied $a(n)$, the sum of parts counted without multiplicity in all the partitions of $n$ and $b(n)$, the sum of distinct parts that appear at least $2$ times in all the partitions of $n$. It is clear from the definition that \[
a(n)=\sum_{p=0}^{k-1}a_{k,p}(n),\] and $b_2(n)=b(n)$. So, $a_{k,p}(n)$ and $b_k(n)$ can be said to be refinements of $a(n)$ and $b(n)$. They also studied the functions $a_{2,0}(n)$ and $a_{2,1}(n)$, which they denoted by $a_e(n)$ and $a_o(n)$ respectively. We will keep their notation for these special cases in the remainder of this paper.
Merca and Yee \cite{MercaYee} found the generating functions of $a(n)$, $a_e(n)$, $a_o(n)$ and $b(n)$, connected these functions in terms of very simple relations and then further connected the function $b(n)$ to two other partition functions $M_\ell (n)$ and $MP_\ell(n)$, which we will define in the next section. The aim of the present paper is to generalize all of these results for our refined functions $a_{k,p}(n)$ and $b_k(n)$. While doing this, we also correct a minor error in a result of Merca and Yee \cite{MercaYee}.
The rest of the paper is organized as follows: in Section \ref{sec:results} we state all of our results and show as corollaries all of the results of Merca and Yee \cite{MercaYee}, in Section \ref{sec:proof1} we prove our results using analytical techniques, in Section \ref{sec:comb} we prove all but one of our results using combinatorial techniques, and finally we end the paper with some remarks in Section \ref{sec:rem}. We closely follow the techniques used by Merca and Yee \cite{MercaYee} in our proofs.
\section{Results and Corollaries}\label{sec:results}
We need the notation for the $q$-Pochhammer symbol
\[
(a;q)_\infty=\prod_{n=0}^\infty (1-aq^n) \quad \text{for}~|q|<1.
\]The generating functions for $a_k(n)$, $a_{k,p}(n)$ and $b_k(n)$ are given in the following theorem.
\begin{theorem}\label{ThmGF}
We have
\begin{align*}
\sum_{n=1}^\infty a_k(n)q^n &=~\frac{1}{(q;q)_\infty}\cdot \frac{kq^k}{(1-q^k)^2},\\
\sum_{n=1}^\infty a_{k,p}(n)q^n &=~\frac{1}{(q;q)_\infty}\cdot \frac{(pq^{p-k}+(k-p)q^p)q^k}{(1-q^k)^2},\\
\sum_{n=1}^\infty b_k(n)q^n &=~\frac{1}{(q;q)_\infty}\cdot \frac{q^k}{(1-q^k)^2}.
\end{align*}
\end{theorem}
\noindent From the above theorem, the following result follows (it also follows combinatorially, as we will prove later).
\begin{theorem}\label{ThmComb}
For all $n\geq 1$, we have
\begin{enumerate}
\item $a_k(n)=kb_k(n)$, and
\item $a_{k,p}(n)=(k-p)b_k(n-p)+pb_k(n+k-p)$.
\end{enumerate}
\end{theorem}
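For example, for $k=3$ and $n=5$ the values computed in the introduction give $a_3(5)=6=3\cdot 2=3\,b_3(5)$; moreover, listing the partitions of $3$, $4$, $6$ and $7$, one checks directly that $b_3(3)=1$, $b_3(4)=1$, $b_3(6)=5$ and $b_3(7)=7$, so that $a_{3,1}(5)=9=2\,b_3(4)+b_3(7)$ and $a_{3,2}(5)=11=b_3(3)+2\,b_3(6)$, in agreement with part (2).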
As easy corollaries of the above results, two results of Merca and Yee \cite{MercaYee} follow.
\begin{corollary}[Theorem 1.2, \cite{MercaYee}]\label{CorM}
We have
\begin{align*}
\sum_{n=1}^\infty a_e(n)q^n =~\sum_{n=1}^\infty a_{2,0}(n)q^n &=~\frac{1}{(q;q)_\infty}\cdot \frac{2q^2}{(1-q^2)^2},\\
\sum_{n=1}^\infty a_o(n)q^n =~\sum_{n=1}^\infty a_{2,1}(n)q^n &=~\frac{1}{(q;q)_\infty}\cdot \frac{q(1+q^2)}{(1-q^2)^2},
\end{align*}
and
\[
\sum_{n=1}^\infty a(n)q^n =~\frac{1}{(q;q)_\infty}\cdot \frac{q}{(1-q)^2}.
\]
\end{corollary}
\begin{corollary}[Theorem 1.3, \cite{MercaYee}]
For all $n\geq 1$, we have
\begin{enumerate}
\item $a_e(n)=a_{2,0}(n)=2b(n)$,
\item $a_o(n)=a_{2,1}(n)=b(n+1)+b(n-1)$, and
\item $a(n)=a_{2,0}(n)+a_{2,1}(n)=b(n+1)+2b(n)+b(n-1)$.
\end{enumerate}
\end{corollary}
Andrews and Merca \cite{AndrewsMerca1} introduced a new partition function $M_\ell(n)$, which counts the number of partitions of $n$ where $\ell$ is the least positive integer that is not a part and there are more parts which are greater than $\ell$ than there are parts less than $\ell$. For instance $M_3(5)=0$. We can connect this function with $b_k(n)$ in the following way.
\begin{theorem}\label{Thm:trunc}
For any positive integers $k$, $\ell$ and $n$, we have
\begin{multline*}
(-1)^{\ell-1}\left(\sum_{j=-(\ell-1)}^\ell (-1)^j b_k(n-j(3j-1)/2)-\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot \frac{n}{k} \right)\\=\sum_{j=1}^{\lfloor n/k\rfloor}jM_\ell(n-kj),
\end{multline*}
where we have used the Iverson bracket $[P]$, which returns the value $1$ if the logical proposition $P$ is satisfied, and $0$ otherwise.
\end{theorem}
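As a small illustration of Theorem \ref{Thm:trunc}, take $k=3$, $\ell=1$ and $n=5$: the left hand side equals $b_3(5)-b_3(4)=2-1=1$ (the correction term vanishes since $5\not\equiv 0\pmod 3$), while the right hand side equals $M_1(2)=1$, the only partition counted being $2$ itself.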
The following are two easy corollaries of the above theorem.
\begin{corollary}\label{cor-m-t}
For any positive integers $k$, $\ell$ and $n$, we have
\[
(-1)^{\ell-1}\left(\sum_{j=-(\ell-1)}^\ell (-1)^j b_k(n-j(3j-1)/2)-\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot \frac{n}{k} \right)\geq 0.
\]
\end{corollary}
\begin{corollary}
For any positive integer $k$ and $n$, we have
\[
\sum_{j=-\infty}^\infty (-1)^j b_k(n-j(3j-1)/2)=\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot \frac{n}{k}.
\]
\end{corollary}
From the above theorem and corollaries, the following results follow easily.
\begin{corollary}[Theorem 1.4, \cite{MercaYee}]
For any positive integers $\ell$ and $n$, we have
\[
(-1)^{\ell-1}\left(\sum_{j=-(\ell-1)}^\ell (-1)^j b(n-j(3j-1)/2)-\frac{1+(-1)^{n}}{2}\cdot \frac{n}{2} \right)=\sum_{j=1}^{\lfloor n/2\rfloor}jM_\ell(n-2j).
\]
\end{corollary}
\begin{corollary}[Corollary 1.5, \cite{MercaYee}]
For any positive integers $\ell$ and $n$, we have
\[
(-1)^{\ell-1}\left(\sum_{j=-(\ell-1)}^\ell (-1)^j b(n-j(3j-1)/2)-\frac{1+(-1)^{n}}{2}\cdot \frac{n}{2} \right)\geq 0.
\]
\end{corollary}
\begin{corollary}[Corollary 1.6, \cite{MercaYee}]
For any positive integer $n$, we have
\[
\sum_{j=-\infty}^\infty (-1)^j b(n-j(3j-1)/2)=\frac{1+(-1)^{n}}{2}\cdot \frac{n}{2}.
\]
\end{corollary}
Andrews and Merca \cite{AndrewsMerca2} studied a new partition function $MP_\ell (n)$, which counts the number of partitions of $n$ in which the first part larger than $2\ell-1$ is odd and appears exactly $\ell$ times, and all other parts appear at most once. For instance, $MP_3(5)=3$. We can connect the function $b_k(n)$ with $MP_\ell(n)$ using a new function $c_k(n)$ in the following way.
\begin{theorem}\label{Thm:gen1.7}
For any positive integers $k$, $\ell$ and $n$, we have
\begin{multline*}
(-1)^{\ell-1}\left(\sum_{j=0}^{2\ell-1} (-1)^{\frac{j(j+1)}{2}} b_k(n-j(j+1)/2)-\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot c_k(n) \right)\\=\sum_{j=0}^n c_k(j)MP_\ell(n-j),
\end{multline*}
where we have used the Iverson bracket $[P]$, which returns the value $1$ if the logical proposition $P$ is satisfied and $0$ otherwise, and the function $c_k(n)$ is defined as
\[
c_k(n)=\sum_{j=1}^{\lfloor n/k\rfloor}jQ\left(\frac{n-kj}{2}\right),
\]
and $Q(m)$ denotes the number of partitions of $m$ into distinct parts. Here $Q(x)=0$ if $x\notin \mathbb{N}$.
\end{theorem}
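To illustrate the function $c_k(n)$: for $k=3$ and $n=7$ only the term $j=1$ contributes, since $(7-6)/2\notin\mathbb{N}$, and we get $c_3(7)=Q(2)=1$; similarly, $c_3(9)=Q(3)+3Q(0)=2+3=5$.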
The following are two easy corollaries of the above theorem.
\begin{corollary}
For any positive integers $k$, $\ell$ and $n$, we have
\[
(-1)^{\ell-1}\left(\sum_{j=0}^{2\ell-1} (-1)^{\frac{j(j+1)}{2}} b_k(n-j(j+1)/2)-\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot c_k(n) \right)\geq 0.
\]
\end{corollary}
\begin{corollary}
For positive integers $n$ and $k$, we have
\[
\sum_{j=0}^{\infty} (-1)^{\frac{j(j+1)}{2}} b_k(n-j(j+1)/2)=\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot c_k(n).
\]
\end{corollary}
From the above theorem and corollaries, the following results of Merca and Yee \cite{MercaYee} follow as corollaries. Here we have corrected the exponent of the $-1$ inside the summation in the left hand side, which is $\dfrac{j(j+1)}{2}$, but was mentioned as $j$ by Merca and Yee \cite{MercaYee}.
\begin{corollary}[Theorem 1.7, \cite{MercaYee}]
For any positive integers $\ell$ and $n$, we have
\[
(-1)^{\ell-1}\left(\sum_{j=0}^{2\ell-1} (-1)^{\frac{j(j+1)}{2}} b(n-j(j+1)/2)-\frac{1+(-1)^n}{2}\cdot c\left(\frac{n}{2}\right) \right)=\sum_{j=1}^{\lfloor n/2\rfloor}c(j)MP_\ell(n-2j),
\]
where $c(n)$ is the number of subsets of $\{1, 2, \ldots, n\}$ which contain a number that is greater than the sum of the other numbers in the subset.
\end{corollary}
\begin{proof}
We notice that
\[
c_2(2n)=\sum_{j=1}^njQ(n-j)=\sum_{m=0}^{n-1}(n-m)Q(m),
\]which was shown to be equal to $c(n)$ in the proof of Theorem 4.1 in Merca and Yee's \cite{MercaYee} work. So, we have $c_2(n)=c\left(\frac{n}{2}\right)$. Putting $k=2$ in Theorem \ref{Thm:gen1.7} we get the result.
\end{proof}
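As a sanity check, $c_2(6)=Q(2)+2Q(1)+3Q(0)=1+2+3=6$, which indeed equals $c(3)$: the six subsets of $\{1,2,3\}$ counted are $\{1\}$, $\{2\}$, $\{3\}$, $\{1,2\}$, $\{1,3\}$ and $\{2,3\}$.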
\begin{corollary}[Corollary 4.2, \cite{MercaYee}]
Let $\ell$ and $n$ be positive integers, then we have
\[
(-1)^{\ell-1}\left(\sum_{j=0}^{2\ell-1} (-1)^{\frac{j(j+1)}{2}} b(n-j(j+1)/2)-\frac{1+(-1)^n}{2}\cdot c\left(\frac{n}{2}\right) \right)\geq 0.
\]
\end{corollary}
\begin{corollary}[Corollary 4.3, \cite{MercaYee}]
Let $n$ be a positive integer, then we have
\[
\sum_{j=0}^{\infty} (-1)^{\frac{j(j+1)}{2}} b(n-j(j+1)/2)=\frac{1+(-1)^n}{2}\cdot c\left(\frac{n}{2} \right).
\]
\end{corollary}
\section{Analytical Proofs of our Main Results}\label{sec:proof1}
In this section, we prove all the theorems stated in the previous section, using analytical methods. Our proofs follow closely the techniques used by Merca and Yee \cite{MercaYee}.
\subsection{Proof of Theorems \ref{ThmGF} and \ref{ThmComb}}
We start with the generating function for partitions where the power of $z$ keeps track of parts with multiplicity $\geq k$,
\begin{multline*}
\prod_{j=1}^\infty (1+q^j+q^{2j}+\cdots +q^{(k-1)j}+z^j(q^{kj}+q^{(k+1)j}+\cdots))\\ =\prod_{j=1}^\infty \left(\frac{q^{kj}-1}{q^j-1}+z^j\frac{q^{kj}}{1-q^j} \right)
=\frac{1}{(q;q)_\infty}\prod_{j=1}^\infty (1+(z^j-1)q^{kj}).
\end{multline*}
Now, taking the derivative w.r.t. $z$ and setting $z\rightarrow 1$ we get,
\[
\sum_{n=1}^\infty b_k(n)q^n=\frac{1}{(q;q)_\infty}\sum_{j=1}^\infty jq^{kj}=\frac{1}{(q;q)_\infty}\cdot \frac{q^k}{(1-q^k)^2}.
\]
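Note that, expanding $\frac{1}{(q;q)_\infty}$, this identity says that $b_k(n)=\sum_{j\geq 1}j\,p(n-kj)$; for instance, for $k=3$ and $n=5$ it gives $b_3(5)=p(2)=2$, in agreement with the value recorded in the introduction.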
In a similar way, we have
\begin{multline*}
\sum_{n=1}^\infty a_{k,p}(n)q^n\\
=\frac{\partial}{\partial z} (1-q^p+z^pq^p) \prod_{j=1}^\infty ((1+q^j+q^{2j}+\cdots) -(q^{kj+p}+q^{(k+1)j+p}+\cdots) \\+z^{kj+p}(q^{kj+p}+q^{(k+1)j+p}+\cdots))\mid_{z=1}.
\end{multline*}
We subtract $(q^{kj+p}+q^{(k+1)j+p}+\cdots)$ from the first term because the parts of the form $kj+p$ with $j\geq1$ are counted in the third term, and we multiply by $(1-q^p+z^pq^p)$ to account for the part of the form $kj+p$ with $j=0$, namely the part $p$. Therefore,
\begin{align*}
\sum_{n=1}^\infty a_{k,p}(n)q^n
&=~\frac{\partial}{\partial z} (1-q^p+z^pq^p) \prod_{j=1}^\infty \left( \frac{1-q^{kj+p}}{1-q^j}+z^{kj+p}\frac{q^{kj+p}}{1-q^j} \right)\bigg|_{z=1}\\
&=~\frac{1}{(q;q)_\infty}\frac{\partial}{\partial z} \prod_{j=0}^\infty \left( 1+(z^{kj+p}-1)q^{kj+p}\right)\bigg|_{z=1}\\
&=~\frac{1}{(q;q)_\infty}\sum_{j=0}^\infty (kj+p)q^{kj+p}\\
&=~\frac{1}{(q;q)_\infty} \bigg(kq^p\frac{q^k}{(1-q^k)^2}+pq^p\frac{1}{1-q^k}\bigg) \\
&=~\frac{1}{(q;q)_\infty}\cdot \frac{pq^p+(k-p)q^{p+k}}{(1-q^k)^2}.
\end{align*}
The case $p=0$ in the above will give us the generating function for $a_k(n)$.
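As a quick check of the coefficients, for $(k,p)=(3,1)$ the formula gives $\frac{q+2q^4}{(1-q^3)^2}=q+4q^4+O(q^7)$, so the coefficient of $q^5$ in $\sum_n a_{3,1}(n)q^n$ is $p(4)+4\,p(1)=5+4=9$, matching the value $a_{3,1}(5)=9$ from the introduction.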
Theorem \ref{ThmComb} immediately follows from Theorem \ref{ThmGF}; we just compare coefficients.
\subsection{Proof of Theorem \ref{Thm:trunc}}
The generating function for $M_\ell(n)$ was found by Andrews and Merca \cite{AndrewsMerca1}, when they studied a truncated version of Euler's pentagonal number theorem
\begin{equation}\label{eq-1}
\frac{(-1)^{\ell-1}}{(q;q)_\infty}\sum_{n=-(\ell-1)}^\ell (-1)^n q^{n(3n-1)/2}=(-1)^{\ell-1}+\sum_{n=\ell}^\infty\frac{q^{\binom{\ell}{2}+(\ell+1)n}}{(q;q)_n}\gauss{n-1}{\ell-1},
\end{equation}
where $\ell \geq 1$, $(a;q)_n=\dfrac{(a;q)_\infty}{(aq^n;q)_\infty}$ and the Gaussian binomial $\gauss{n}{\ell}$ equals $\dfrac{(q;q)_n}{(q;q)_\ell(q;q)_{n-\ell}}$ whenever $0\leq \ell \leq n$ and is $0$ otherwise. The sum on the right hand side of equation \eqref{eq-1} is the generating function of $M_\ell(n)$, that is
\begin{equation}\label{eq-2}
\sum_{n=0}^\infty M_\ell(n)q^n=\sum_{n=\ell}^\infty\frac{q^{\binom{\ell}{2}+(\ell+1)n}}{(q;q)_n}\gauss{n-1}{\ell-1}.
\end{equation}
We now multiply both sides of equation \eqref{eq-1} by
\[
\sum_{n=0}^\infty nq^{kn}=\frac{q^k}{(1-q^k)^2},
\]
which gives us (after using equation \eqref{eq-2}),
\begin{multline*}
(-1)^{\ell-1}\left(\left(\sum_{n=1}^\infty b_k(n)q^n\right)\left(\sum_{n=-(\ell-1)}^\ell (-1)^n q^{n(3n-1)/2}\right)-\sum_{n=0}^\infty nq^{kn}\right)\\=\left(\sum_{n=0}^\infty nq^{kn}\right)\left(\sum_{n=0}^\infty M_\ell(n)q^n\right).
\end{multline*}
Using the Cauchy product of two power series, Theorem \ref{Thm:trunc} follows from the above.
\subsection{Proof of Theorem \ref{Thm:gen1.7}} The generating function for $MP_\ell(n)$ was found by Andrews and Merca \cite{AndrewsMerca2} when they considered a truncated theta identify of Gauss,
\begin{equation}\label{eq:3.1}
\frac{(-q;q^2)_\infty}{(q^2;q^2)_\infty}\sum_{j=0}^{2\ell-1}(-q)^{j(j+1)/2}
=1+(-1)^{\ell-1}\frac{(-q;q^2)_\ell}{(q^2;q^2)_{\ell-1}}\sum_{j=0}^\infty \frac{q^{\ell(2\ell+2j+1)}(-q^{2\ell+2j+3};q^2)_\infty}{(q^{2\ell+2j+2};q^2)_\infty}.
\end{equation}
The sum on the right hand side of equation \eqref{eq:3.1} is the generating function of $MP_\ell(n)$.
We now multiply both sides of equation \eqref{eq:3.1} by $\dfrac{q^k}{(1-q^k)^2}\cdot (-q^2;q^2)_\infty$ and deduce the following identity
\begin{multline}\label{eq:pj}
(-1)^{\ell-1}\left(\left(\sum_{n=0}^\infty b_k(n)q^n\right)\left(\sum_{n=0}^{2\ell-1} (-q)^{n(n+1)/2}\right)- \dfrac{q^k}{(1-q^k)^2}\cdot (-q^2;q^2)_\infty \right)\\=\left(\dfrac{q^k}{(1-q^k)^2}\cdot (-q^2;q^2)_\infty\right)\left(\sum_{n=0}^\infty MP_\ell(n)q^n\right).
\end{multline}
We know that the generating function of the number of partitions into distinct parts is \[ \sum_{n=0}^\infty Q(n)q^n=(-q;q)_\infty.\] Using this, we have
\begin{align*}
\dfrac{q^k}{(1-q^k)^2}\cdot (-q^2;q^2)_\infty &=~\sum_{n=0}^{\infty}nq^{kn}\cdot \sum_{m=0}^{\infty}Q(m)q^{2m}\\
&=~\sum_{n=0}^{\infty}\sum_{j=1}^{\lfloor n/k\rfloor} j Q\left(\frac{n-kj}{2}\right)q^n\\
&=~\sum_{n=0}^\infty c_k(n)q^n.
\end{align*}
Putting this in equation \eqref{eq:pj} we get,
\begin{multline*}
(-1)^{\ell-1}\left(\left(\sum_{n=0}^\infty b_k(n)q^n\right)\left(\sum_{n=0}^{2\ell-1} (-q)^{n(n+1)/2}\right)-\sum_{n=0}^\infty c_k(n)q^n \right)\\=\left(\sum_{n=0}^\infty c_k(n)q^n\right)\left(\sum_{n=0}^\infty MP_\ell(n)q^n\right).
\end{multline*}
Using the Cauchy product of two power series, Theorem \ref{Thm:gen1.7} follows from the above.
\section{Combinatorial Proofs of some of our Results}\label{sec:comb}
In this section we give combinatorial proofs of all but one (Theorem \ref{Thm:gen1.7}) of our results. The approach again closely follows that of Merca and Yee \cite{MercaYee}.
\subsection{Proof of Theorem \ref{ThmComb}} For part (1), we note that for a partition $\lambda \vdash n$, if $ka$ is a part of $\lambda$ then we split this part into $k$ $a$'s while keeping the remaining parts of $\lambda$ unchanged. Let us call the new partition $\mu$, then clearly the part $a$ has multiplicity at least $k$ in $\mu$, so we get
\begin{align*}
a_k(n)&=~\sum_{\lambda \vdash n}\text{different parts divisible by}~k\\
&=~ k\sum_{\mu\vdash n}\text{different parts with multiplicity}\geq k =kb_k(n).
\end{align*}
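For instance, for $k=3$ and $n=5$ the parts divisible by $3$ occur in $3+2$ and $3+1+1$; splitting the part $3$ into $1+1+1$ produces $2+1+1+1$ and $1+1+1+1+1$, which are precisely the partitions containing a part (namely $1$) of multiplicity at least $3$, and indeed $a_3(5)=6=3\cdot 2=3\,b_3(5)$.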
For part (2), let $ka+p$ be a part of $\lambda \vdash n$ which is counted in $a_{k,p}(n)$. We now split $ka$ into $k$ $a$'s while keeping the remaining parts unchanged to get a new partition $\mu \vdash n-p$. Again, we split $(ka+p)+(k-p)$ into $k$ $(a+1)$'s while keeping the remaining parts unchanged to get a new partition $\nu \vdash n+k-p$. We have
\begin{align*}
a_{k,p}(n)=&~\sum_{\lambda \vdash n}\text{different parts}\equiv p \pmod k\\
=&~(k-p)\sum_{\mu \vdash n-p}\text{different parts with multiplicity}\geq k \\&+p\sum_{\nu \vdash n+k-p}\text{different parts with multiplicity}\geq k\\
=&~(k-p)b_k(n-p)+pb_k(n+k-p).
\end{align*}
\subsection{Proof of Theorem \ref{ThmGF}}
We prove the generating function for $a_k(n)$ here; the other two generating functions can be proved combinatorially by combining the previous subsection with this proof. In fact, our proof is the same when $2$ is replaced by $k$ in the proof of Corollary \ref{CorM} given by Merca and Yee \cite{MercaYee}, so for the sake of brevity we just outline the steps.
We work with two sets of overpartitions, let $\bar P_k(n)$ be the set of overpartitions of $n$ where exactly one part divisible by $k$ is overlined, and let $\bar A_k(n)$ be the set of colored overpartitions of $n$ where exactly one part divisible by $k$ is overlined and at most one other part divisible by $k$ is colored with blue color. For instance, we have
\[
\bar P_3(6)=\{\bar 6, \bar 3+3, \bar 3+2+1, \bar 3+1+1+1\},
\]
and
\[
\bar A_3(6)=\{\bar 6, \bar 3+3, \bar 3+\textcolor{blue}{3}, \bar 3+2+1, \bar 3+1+1+1\}.
\]
Clearly, $\bar P_k(n)$ is a subset of $\bar A_k(n)$, and we have
\begin{equation}\label{eq:p1}
a_k(n)=\sum_{\lambda \in \bar P_k(n)}\text{the overlined part of}~\lambda .
\end{equation}
Also note that for each partition in $\bar A_k(n)$ we can decompose it into a tuple $(\lambda,\mu, \nu)$ where $\lambda$ is the overline part, $\mu$ is the colored part and $\nu$ are the non-colored parts. This gives us
\begin{equation}\label{eq:p2}
\sum_{n\geq 0}\bar A_k(n)q^n=\frac{q^k}{(1-q^k)}\cdot \frac{1}{(1-q^k)}\cdot \frac{1}{(q;q)_\infty}.
\end{equation}
We now set up the following surjection from $\bar A_k(n)$ to $\bar P_k(n)$: if there is a colored part, we merge it with the overlined part to get a resulting overlined part. The new partition is clearly in $\bar P_k(n)$, and if there are no colored parts then we keep the partition unchanged. Now, for an overlined part $\overline{ka}$ of $\mu\in \bar P_k(n)$, there are $a$ ways to merge an overlined part with a colored part to get $ka$, so we have
\begin{equation}\label{eq:p3}
\sum_{\mu \in \bar P_k(n)}\text{the overlined part of}~\mu=\sum_{\nu \in \bar A_k(n)}k.
\end{equation}
From equations \eqref{eq:p1}, \eqref{eq:p2} and \eqref{eq:p3} we get
\[
\sum_{n\geq 0}a_k(n)q^n=\frac{1}{(q;q)_\infty}\cdot \frac{kq^k}{(1-q^k)^2}.
\]
\subsection{Proof of Theorem \ref{Thm:trunc}}
Again, our proof is similar to the proof of Corollary \ref{cor-m-t}, given by Merca and Yee \cite{MercaYee}, so we mention the main steps without going into too much details. Theorem \ref{Thm:trunc} is equivalent to the following
\begin{multline*}
(-1)^{\ell-1}\left(\sum_{j=-(\ell-1)}^\ell (-1)^j \left(\sum_{\lambda \in \bar P_k(n-j(3j-1)/2)}\text{overlined part of }\lambda \right)\right.\\-\left.\frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot n \right)=\sum_{j=1}^{\lfloor n/k\rfloor}kjM_\ell(n-kj),
\end{multline*}
where we have used Theorem \ref{ThmComb} and equation \eqref{eq:p1}.
We note that
\[
\sum_{\lambda \in \bar P_k(n)}\text{overlined part of }\lambda=\sum_{m=1}^{\lfloor n/k\rfloor}km\sum_{\mu \vdash (n-km)}1=\sum_{m=1}^{\lfloor n/k\rfloor}km \cdot p(n-km).
\]
The above equation is true since any partition $\lambda \in \bar P_k(n)$ can be made into a pair of partitions $(\nu, \mu)$ where $\nu$ is the overlined part and $\mu$ is then an ordinary partition.
So, we get
\begin{multline*}
(-1)^{\ell-1}\sum_{j=-(\ell-1)}^\ell (-1)^j \left(\sum_{\lambda \in \bar P_k(n-j(3j-1)/2)}\text{overlined part of }\lambda \right)\\
=(-1)^{\ell-1}\sum_{j=-(\ell-1)}^\ell (-1)^j \sum_{m=1}^{\lfloor (n-j(3j-1)/2)/k\rfloor}km \cdot p(n-j(3j-1)/2-km).
\end{multline*}
The above is equivalent to
\begin{equation}\label{pf-t-1}
\sum_{m=1}^{\lfloor n/k\rfloor}km\left((-1)^{\ell-1}\sum_{j=-(\ell-1)}^\ell (-1)^j p(n-km-j(3j-1)/2) \right),
\end{equation}
where we rearrange the summation and take $p(n)=0$ if $n<0$.
Merca and Yee \cite{MercaYee} have given a combinatorial proof of the truncated pentagonal number theorem, which is equivalent to the following identity
\begin{equation}\label{pf-t-2}
(-1)^{\ell-1}\sum_{j=0}^{\ell -1}(-1)^j(p(n-j(3j+1)/2)-p(n-(j+1)(3j+2)/2))=M_\ell(n).
\end{equation}
Using equation \eqref{pf-t-2} in \eqref{pf-t-1}, we get that \eqref{pf-t-1} is equal to
\[
\sum_{m=1}^{\lfloor n/k\rfloor}km\cdot M_\ell(n-km)+(-1)^{\ell-1}\cdot \frac{1+(-1)^{[n \equiv 0 \pmod k]+1}}{2}\cdot n,
\]
which proves the result, since equation \eqref{pf-t-2} already has a combinatorial proof. We need the term $(-1)^{\ell-1}n$ when $n \equiv 0 \pmod k$ because, if $n=kr$ for some $r$ and $m=n/k$, then without this term we get
\[
(-1)^{\ell-1}np(0)=nM_\ell(0)\Rightarrow n=0.
\]
\section{Concluding Remarks}\label{sec:rem}
\begin{enumerate}
\item A combinatorial proof of Theorem \ref{Thm:gen1.7} is left as an open problem. Any combinatorial proof of Theorem \ref{Thm:gen1.7} would hinge on a combinatorial interpretation of $c_k(n)$, like we have for $c_2(2n)$. So, a first step towards a combinatorial proof would be such an interpretation of $c_k(n)$.
\item Identities of the type in Theorems \ref{Thm:trunc} and \ref{Thm:gen1.7} are also known for some other partition statistics, for instance one can see some recent work of Merca \cite{Merca}. It would be interesting to see if one can relate such partition statistics with the ones introduced in this paper.
\end{enumerate}
\end{document} |
\begin{document}
\renewcommand{\textbullet}{\textbullet}
\title{\textbf{\textit{A priori}
\begin{abstract}
In this paper, we consider the frictionless unilateral contact problem between a rigid body and a deformable one in the framework of isogeometric analysis. We present the theoretical analysis of the mixed problem. For the displacement, we use the pushforward of a NURBS space of degree $p$ and, for the Lagrange multiplier, the pushforward of a B-Spline space of degree $p-2$. This choice of spaces allows us to prove an inf-sup condition and hence the stability of the method. An active set strategy is used in order to avoid any geometrical hypothesis on the contact set.
An optimal \textit{a priori} error estimate is demonstrated without assumption on the unknown contact set. Several numerical examples in two and three dimensions, in small and large deformation, demonstrate the accuracy of the proposed method.
\end{abstract}
\section*{Introduction}
\label{sec:sec0}
\addcontentsline{toc}{section}{Introduction}
In the past few years, the study of contact problems in small and large deformation has increased. The numerical solution of contact problems presents several difficulties, such as the computational cost, the high nonlinearity and the ill-conditioning. Contrary to many other problems in nonlinear mechanics, these problems cannot always be solved at a satisfactory level of robustness and accuracy with the numerical methods introduced so far \cite{laursen-02,wriggers-06}.
One of the reasons that make robustness and accuracy hard to achieve is that the computation of the gap, i.e. the distance between the deformed body and the obstacle, is indeed an ill-posed problem, and its numerical approximation often introduces extra discontinuities that break the convergence of the iterative schemes; see \cite{al-cu1988,laursen-02,wriggers-06,kon-sch-13}, where a master-slave method is introduced to weaken this effect.
In this respect, the use of NURBS or spline approximations within the framework of isogeometric analysis \cite{hughes05} holds great promise, thanks to the increased regularity of the geometric description, which makes the gap computation intrinsically easier.
IGA-based methods use a generalization of B\'ezier curves, namely B-Splines and non-uniform rational B-Splines (NURBS). These functions, used to represent the geometry of the domains in CAD, are also used as basis functions to approximate the partial differential equations; this is called the isoparametric paradigm. The smooth IGA basis functions possess a number of significant advantages for the analysis, including exact geometry and superior approximation.
Isogeometric methods for \hbox{frictionless} contact problems have been introduced in \cite{lorenzis9,Temizer11,Temizer12,lorenzis12,lorenzisreview,lorenzis15}; see also the approaches with primal and dual elements \cite{Wohlmuth00,hueber-Wohl-05,hueber-Stadler-Wohlmuth-08,wohl12,seitz16}. Both point-to-segment and segment-to-segment (i.e., mortar type) algorithms have been designed and tested with an engineering perspective, showing that, indeed, the use of a smooth geometric representation helps the design of reliable methods for contact problems.
In this paper, we take a slightly different point of view. Inspired by the recent design and analysis of isogeometric mortar methods in \cite{brivadis15}, we consider a formulation of frictionless contact based on the choice of the Lagrange multiplier space proposed there. Indeed, we associate to a NURBS displacement of degree $p$ a Lagrange multiplier space of degree $p-2$. The use of lower order multipliers has several advantages, because it makes the evaluation of averaged gap values at active and inactive control points simpler, more accurate and substantially more local. This choice of multipliers is then coupled with an active-set strategy, as the one proposed and used in \cite{hueber-Wohl-05,hueber-Stadler-Wohlmuth-08}.
Finally, we perform a comprehensive set of tests, both in small and large deformation, which show the good performance of our method. These tests have been performed with an in-house code developed upon the public library igatools \cite{Pauletti2015}.
The outline of the paper is as follows: in Section \ref{sec:sec1}, we introduce the unilateral contact problem and some notations. In Section \ref{sec:discrete_pb}, we describe the discrete spaces and their properties.
In Section \ref{sec:sec2}, we present the theoretical analysis of the mixed problem. An {optimal} \textit{a priori} error estimate without assumption on the unknown contact set is presented.
In the last section, some two- and three-dimensional problems in small deformation are presented in order to illustrate the convergence of the method with the active-set strategy. A two-dimensional problem in large deformation with a Neo-Hookean material law is provided to show the robustness of this method.
\begin{remark*}
The letter $C$ stands for a generic constant, independent of the discretization parameters and the solution $u$ of the variational problem. For two scalar quantities $a$ and $b$, the notation $a \lesssim b$ means there exists a constant $C$, independent of the mesh size parameters, such that $a \leq Cb$. Moreover, $a \sim b$ means that $a \lesssim b$ and $b \lesssim a$.
\end{remark*}
\section{Preliminaries and notations}
\label{sec:sec1}
\subsection{Unilateral contact problem}
\label{subsec:continuous_pb}
Let $\Omega \subset \mathbb{R}^d$ ($d=2$ or $3$) be a bounded regular domain which represents the reference configuration of a body submitted to a Dirichlet condition on $\Gamma_D$ (with $\textrm{meas}(\Gamma_D) >0$), a Neumann condition on $\Gamma_N$ and a unilateral contact condition on a potential zone of contact $\Gamma_C$ with a rigid body. Without loss of generality, it is assumed that the body is subjected to a volume force $f$, to a surface traction $\ell$ on $\Gamma_N$ and clamped at $\Gamma_D$. Finally, we denote by $n_{\Omega}$ the unit outward normal vector on $\partial \Omega$.
In what follows, we call $u$ the displacement of $\Omega$, $\displaystyle \varepsilon (u) = \frac{1}{2} (\nabla u + \nabla u^T)$ its linearized strain tensor and we denote by $\displaystyle {\sigma} = ({\sigma}_{ij})_{1 \leq i,j \leq d} $ the stress tensor. We assume a linear constitutive law between ${\sigma}$ and $\varepsilon$, \textit{i.e.} $\displaystyle {\sigma} (u)= A \varepsilon (u)$, where $A=(a_{ijkl})_{1 \leq i,j,k,l \leq d}$ is a fourth order symmetric tensor verifying the usual bounds:
\begin{itemize}
\item $\displaystyle a_{ijkl} \in L^{\infty}(\Omega), \textrm{ \textit{i.e.} there exists a constant $m$ such that } \max_{1 \leq i,j,k,l \leq d} \abs{a_{ijkl}} \leq m;$
\item $\displaystyle \textrm{there exists a constant } M>0 \textrm{ such that \textit{a.e.} on } \Omega,$
$$\displaystyle a_{ijkl} \varepsilon_{ij}\varepsilon_{kl} \geq M \varepsilon_{ij} \varepsilon_{ij} \quad \forall \varepsilon \in \mathbb{R}^{d \times d} \textrm{ with } \varepsilon_{ij} = \varepsilon_{ji}.$$
\end{itemize}
Let $n$ be the outward unit normal vector at the rigid body. {From now on, we assume that $n$ is an infinitely regular field}. For any
displacement field $u$ and for any density of surface forces ${\sigma} (u) n$ defined on $\partial \Omega$, we adopt the following notation: $$ u = u_n n + u_t \qquad \textrm{and} \qquad {\sigma} (u) n = {\sigma}_n (u) n + {\sigma}_t (u) ,$$ where $u_t$ (resp. ${\sigma}_t (u)$) are the tangential components with respect to $n$.
The unilateral contact problem between a rigid body and the elastic body $\Omega$ consists in finding the displacement $u$ satisfying:
\begin{eqnarray} \label{eq:strong}
\begin{array}{rcll}
\div {\sigma} (u) +f \!\!\!&\!\!\!=\!\!\!&\!\!\! 0 &\qquad \textrm{in } \Omega, \\
{\sigma} (u) \!\!\!&=&\!\!\! A \varepsilon (u)&\qquad \textrm{in } \Omega,\\
u \!\!\!&=&\!\!\! 0 &\qquad \textrm{on } \Gamma_D,\\
{\sigma} (u) {n_\Omega} \!\!\!&=&\!\!\! \ell &\qquad \textrm{on } \Gamma_N.
\end{array}
\end{eqnarray}
and the conditions describing unilateral contact without friction at $\Gamma_C$ are:
\begin{eqnarray} \label{eq:contact_cond}
\begin{array}{rl}
u_n \geq &\!\!\! 0 \quad (i), \\
{\sigma}_n (u) \leq &\!\!\! 0 \quad (ii), \\
{\sigma}_n (u) u_n = &\!\!\! 0 \quad (iii), \\
{\sigma}_t (u) = &\!\!\! 0 \quad (iv).
\end{array}
\end{eqnarray}
\noindent In order to describe the variational formulation of \eqref{eq:strong}-\eqref{eq:contact_cond}, we consider the Hilbert spaces: $$ \displaystyle V := H^1_{0,\Gamma_D}(\Omega)^d = \{ v \in H^1(\Omega)^d , \quad v = 0 \textrm{ on } \Gamma_D \}, \quad W=\{ \restrr{v_n}{\Gamma_C}, \quad v \in V \},$$ and their dual spaces $V'$, $W'$ endowed with their usual norms. We denote by: $$\displaystyle \norm{v}_{V} = \left(\norm{v}_{L^2(\Omega)^d}^2 + \abs{v}_{H^1(\Omega)^d}^2\right)^{1/2} , \ \forall v \in V.$$
If $\overline{\Gamma}_D \cap \overline{\Gamma}_C = \emptyset$ and $n$ is regular enough, it is well known that $W=H^{1/2}(\Gamma_C) $ and we denote $W'$ by $H^{-1/2}(\Gamma_C)$. On the other hand, if $\overline{\Gamma}_D \cap \overline{\Gamma}_C \neq \emptyset$, it will hold that $H^{1/2}_{00} (\Gamma_C) \subset W \subset H^{1/2}(\Gamma_C)$.
\noindent In all cases, we will denote by $\norm{\cdot}_W $ the norm on $W$ and by $\langle\cdot,\cdot\rangle$ the duality pairing between $W'$ and $W$.
\noindent For all $u$ and $v$ in $V$, we set: $$\displaystyle a(u,v) = \int_{\Omega} {\sigma}(u): \varepsilon(v)\ {\rm d}\Omega \quad \textrm{and} \quad L(v) = \int_{\Omega} f \cdot v \ {\rm d}\Omega + \int_{\Gamma_N} \ell \cdot v \ {\rm d} \Gamma.$$
\noindent Let ${K_C}$ be the closed convex cone of admissible displacement fields satisfying the non-interpenetration conditions, $\displaystyle {K_C} := \{ v \in V, \quad v_n \geq 0 \textrm{ on } \Gamma_C\}$. A weak formulation of Problem \eqref{eq:strong}-\eqref{eq:contact_cond} (see \cite{lions-magenes-72}), as a variational inequality, is to find $u \in {K_C} $ such that:
\begin{eqnarray} \label{eq:var_ineq}
\displaystyle a(u,v-u) \geq L(v-u), \qquad \forall v \in {K_C}.
\end{eqnarray}
We cannot directly use a Newton-Raphson method to solve the formulation \eqref{eq:var_ineq}. A classical solution is to introduce a new variable, the Lagrange multiplier, denoted by $\lambda$, which represents the surface normal force. For all $\lambda$ in $W'$, we denote $\displaystyle b(\lambda,v) = - \langle \lambda, v_n\rangle$ and $M$ is the classical convex cone of multipliers on $\Gamma_C$:
$$ \displaystyle M := \{ \mu \in W', \quad \langle \mu, \psi \rangle \leq 0 \quad \forall \psi \in H^{1/2}(\Gamma_C), \quad \psi \geq 0 \textit{ a.e.} \textrm{ on } \Gamma_C \} .$$
\noindent The complementarity conditions in terms of the Lagrange multiplier write as follows:
\begin{eqnarray} \label{eq:contact_cond_mult}
\begin{array}{rl}
u_n \geq &\!\!\! 0 \quad (i), \\
\lambda \leq &\!\!\! 0 \quad (ii), \\
\lambda u_n = &\!\!\! 0 \quad (iii).
\end{array}
\end{eqnarray}
\noindent {The mixed formulation \cite{ben-belgacem-renard-03} of the Signorini problem \eqref{eq:strong} and \eqref{eq:contact_cond_mult} consists in finding $(u,\lambda) \in V \times M$ such that:}
\begin{eqnarray} \label{eq:mixed_form}
\left\{
\begin{array}{rl}
\displaystyle a(u,v) - b(\lambda,v) = L(v),& \displaystyle \qquad \forall v \in V,\\
\displaystyle b(\mu-\lambda,u) \geq 0,&\displaystyle \qquad \forall \mu \in M.
\end{array}
\right.
\end{eqnarray}
Stampacchia's Theorem ensures that problem \eqref{eq:var_ineq} admits a unique solution.
\noindent The existence and uniqueness of the solution $(u,\lambda)$ of the mixed formulation have been established in \cite{haslinger-96}, and it holds $\lambda = {\sigma}_n (u)$.
\noindent {To simplify the notation, we denote by $\norm{\cdot}_{3/2+s,\Omega} $ the norm on $H^{3/2+s}(\Omega)^d$ and by $\norm{\cdot}_{s,\Gamma_C} $ the norm on $H^s(\Gamma_C)$.}
\noindent So, the following classical inequality (see \cite{daveiga06}) holds:
\begin{theorem}\label{thm:u-lambda}
Given $s>0$, if the displacement $u$ verifies $u \in H^{3/2+s}(\Omega)^d$, then $\lambda \in H^s(\Gamma_C)$ and it holds:
\begin{eqnarray} \label{ineq:Sp-2 primo}
\displaystyle \norm{\lambda}_{s,\Gamma_C} \leq \norm{u}_{3/2+s,\Omega}.
\end{eqnarray}
\end{theorem}
The aim of this paper is to discretize the problem \eqref{eq:mixed_form} within the isogeometric paradigm, \textit{i.e.} with splines and NURBS. Moreover, in order to properly choose the space of Lagrange multipliers, we will be inspired by \cite{brivadis15}. In what follows, we introduce NURBS spaces and assumptions together with relevant choices of space pairings. In particular, following \cite{brivadis15}, we concentrate on B-Spline displacement spaces of degree $p$ and multiplier spaces of degree $p-2$.
\subsection{NURBS discretisation}
\label{subsec:nurbs_discretisation}
In this section, we briefly give an overview of isogeometric analysis, providing the notation and concepts needed in the next sections. Firstly, we define B-Splines and NURBS in one dimension. Secondly, we extend these definitions to the multi-dimensional case. Finally, we define the primal and the dual spaces for the contact boundary.
{Let us denote by $p$ the degree of univariate B-Splines and by $\Xi$ an open univariate knot vector, where the first and last entries are repeated $(p + 1)$-times, \textit{i.e.} $$\Xi := \{ 0= \xi_1= \cdots = \xi_{p+1} < \xi_{p+2} \leq \ldots \leq \xi_{\eta}< \xi_{\eta+1}= \cdots = \xi_{\eta+p+1} \}. $$}
Let us define $Z = \{ \zeta_1, \ldots, \zeta_E\}$ as the vector of breakpoints, \textit{i.e.} knots taken without repetition, and $m_j$ the multiplicity of the breakpoint $\zeta_j, \ j=1, \ldots , E$, so that $\Xi$ is the open knot vector associated to $Z$ in which each breakpoint $\zeta_j$ is repeated $m_j$-times.
In what follows, we suppose that $m_1 = m_E = p+1$, while $m_j \leq p-1$, $\forall j = 2,\ldots, E-1$. We denote by $\hat{B}^p_i(\zeta)$, $i=1,\ldots,\eta$, the $i$-th univariate B-Spline based on the univariate knot vector $\Xi$ and the degree $p$, and we set $S^p (\Xi)= Span\{ \hat{B}^p_i(\zeta), \ i=1,\ldots,\eta \}$. Moreover, for further use we denote by $\tilde{\Xi}$ the sub-vector of $\Xi$ obtained by removing the first and the last knots.
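For the reader's convenience, a minimal Python sketch of the evaluation of the univariate B-Splines $\hat{B}^p_i$ through the Cox--de Boor recursion is reported below. It is purely illustrative (the function name is ours and it is not the implementation used in Section \ref{sec:sec3}):
\begin{verbatim}
# Minimal sketch: i-th univariate B-Spline of degree p on the open knot
# vector Xi, evaluated at zeta via the Cox-de Boor recursion (0-based index i).
def bspline(i, p, Xi, zeta):
    if p == 0:
        if Xi[i] <= zeta < Xi[i + 1]:
            return 1.0
        # close the last non-empty span so that zeta = Xi[-1] is covered
        if zeta == Xi[-1] and Xi[i] < Xi[i + 1] == Xi[-1]:
            return 1.0
        return 0.0
    val = 0.0
    if Xi[i + p] > Xi[i]:              # skip zero-length spans (repeated knots)
        val += (zeta - Xi[i]) / (Xi[i + p] - Xi[i]) * bspline(i, p - 1, Xi, zeta)
    if Xi[i + p + 1] > Xi[i + 1]:
        val += (Xi[i + p + 1] - zeta) / (Xi[i + p + 1] - Xi[i + 1]) \
               * bspline(i + 1, p - 1, Xi, zeta)
    return val

# Example: p = 2, eta = 4 functions on Xi = {0,0,0,1/2,1,1,1}; at any zeta
# the values sum to 1 (partition of unity).
Xi = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
print([bspline(i, 2, Xi, 0.25) for i in range(4)])
\end{verbatim}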
Multivariate B-Splines in dimension $d$ are obtained by tensor product of univariate B-Splines. For any direction $\delta \in \{ 1, \ldots, d\}$, we define by
$\eta_\delta$ the number of B-Splines, $\Xi_\delta$ the open knot vector and $Z_\delta$ the breakpoint vector. Then, we define the multivariate knot vector by $\boldsymbol{\Xi} = ( \Xi_1 \times \ldots \times \Xi_d) $ and the multivariate breakpoint vector by $\boldsymbol{Z} = ( Z_1 \times \ldots \times Z_d ) $. We introduce a set of multi-indices $\boldsymbol{I} = \{ \boldsymbol{i} =(i_1, \ldots, i_d) \mid 1 \leq i_\delta \leq \eta_\delta \}$.
We build the multivariate B-Spline functions for each multi-index $\boldsymbol{i}$ by tensorization from the univariate B-Splines, where $\boldsymbol{\zeta} = (\zeta_1,\ldots,\zeta_d)$ denotes the parametric coordinates of a generic point: $$\hat{B}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}) = \hat{B}^p_{i_1}(\zeta_1) \ldots \hat{B}^p_{i_d}(\zeta_d) .$$
Let us define the multivariate spline space in the reference domain by (for more details, see \cite{brivadis15}): $$S^p(\boldsymbol{\Xi}) = Span\{ \hat{B}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}), \ \boldsymbol{i}\in \boldsymbol{I} \}.$$
We define $N^p(\boldsymbol{\Xi})$ as the NURBS space, spanned by the functions $\hat{N}^p_{\boldsymbol{i}}(\boldsymbol{\zeta})$ with $$\hat{N}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}) = \frac{\omega_{\boldsymbol{i}} \hat{B}^p_{\boldsymbol{i}}(\boldsymbol{\zeta})}{\hat{W}(\boldsymbol{\zeta})}, $$ where $\{\omega_{\boldsymbol{i}}\}_{\boldsymbol{i}\in \boldsymbol{I}}$ is a set of positive weights and $\displaystyle \hat{W}(\boldsymbol{\zeta}) = \sum_{\boldsymbol{i}\in \boldsymbol{I}} \omega_{\boldsymbol{i}} \hat{B}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}) $ is the weight function, and we set $$ N^p(\boldsymbol{\Xi}) = Span\{ \hat{N}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}) , \ \boldsymbol{i}\in \boldsymbol{I} \} .$$
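The rational basis can be sketched in the same spirit, reusing the routine and the knot vector of the previous sketch together with a set of illustrative weights:
\begin{verbatim}
# Sketch: univariate NURBS basis function N_i^p = w_i B_i^p / W, where
# W(zeta) = sum_j w_j B_j^p(zeta) is the weight function.
def nurbs(i, p, Xi, w, zeta):
    eta = len(Xi) - p - 1                          # number of basis functions
    W = sum(w[j] * bspline(j, p, Xi, zeta) for j in range(eta))
    return w[i] * bspline(i, p, Xi, zeta) / W

w = [1.0, 0.8, 1.2, 1.0]                           # illustrative positive weights
print(sum(nurbs(i, 2, Xi, w, 0.25) for i in range(4)))   # prints 1.0
\end{verbatim}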
In what follows, we will assume that $\Omega$ is obtained as image of $\displaystyle\hat{\Omega} = ]0,1[^d$ through a NURBS mapping $\varphi_0$, \textit{i.e.} $\displaystyle\Omega = \varphi_0(\hat{\Omega})$. Moreover, in order to simplify our presentation, we assume that $\Gamma_C$ is the image of a full face $\displaystyle\hat{f}$ of $\displaystyle\bar{\hat{\Omega}}$, \textit{i.e.} $\displaystyle{\Gamma_C} = \varphi_0(\hat{f})$. We denote by $\displaystyle \varphi_{0,\Gamma_C}$ the restriction of $\displaystyle \varphi_{0}$ to $\displaystyle\hat{f}$. \\
{
\noindent A NURBS surface, in $d=2$, or solid, in $d=3$, is parameterised by $$ {\cal C}( \boldsymbol{\zeta}) = \sum_{\boldsymbol{i} \in \boldsymbol{I}} C_{\boldsymbol{i}} \hat{N}^p_{\boldsymbol{i}}(\boldsymbol{\zeta}) ,$$
where $\{C_{\boldsymbol{i}}\}_{\boldsymbol{i} \in \boldsymbol{I}} \subset \mathbb{R}^d$ is the set of control point coordinates. The control points are somewhat analogous to nodal points in finite element analysis. The NURBS geometry is defined as the image of the reference domain $\hat{\Omega}$ by the geometric mapping $\varphi$, $\Omega_t = \varphi(\hat{\Omega})$. \\
}
We remark that the physical domain $\Omega$ is split into elements by the image of $\boldsymbol{Z}$ through the map $\varphi_0$. We denote this physical mesh by ${\cal Q}_h$, and the physical elements of this mesh will be called $Q$.
$\Gamma_C$ inherits a mesh that we denote by $ \restr{{\cal Q}_h}{\Gamma_C} $. Elements of this mesh will be denoted by $Q_C$. \\
Finally, we introduce some notations and assumptions on the mesh.
\noindent \textbf{Assumption 1.} The mapping $\varphi_0$ is considered to be a bi-Lipschitz homeomorphism. Furthermore, for any parametric element ${\hat{Q}}$, $ \displaystyle \restr{\varphi_0}{\bar{\hat{Q}}}$ is in ${\cal C}^\infty(\bar{\hat{Q}})$ and for any physical element ${{Q}}$, $ \displaystyle \restr{\varphi_0^{-1}}{{\bar{Q}}}$ is in ${\cal C}^\infty({\bar{Q}})$.
\noindent Let $h_Q = \textrm{diam} (Q)$ be the size of a physical element $Q$. In the same way, we define the mesh size of any parametric element. In addition, Assumption 1 ensures that the two mesh sizes are equivalent. We denote the maximal mesh size by $\displaystyle h = \max_{Q \in {\cal Q}_h} h_Q$.
\noindent \textbf{Assumption 2.} The mesh ${\cal Q}_h$ is quasi-uniform, \textit{i.e.} there exists a constant $\theta$ such that $\displaystyle \frac{h_Q}{h_{Q'}} \leq \theta$ for all $Q$, $Q' \in {\cal Q}_h$.
\section{Discrete spaces and their properties}
\label{sec:discrete_pb}
We concentrate now on the definition of spaces on the domain $\Omega$.
\noindent For displacements, we denote by $V^h {\sigma}ubset V$ the space of mapped NURBS of degree $p$ with appropriate homogeneous Dirichlet boundary condition: $$\displaystyle V^h := \{ v^h = \hat{v}^h \circ \varphi^{-1}_0, \quad \hat{v}^h \in N^p(\boldsymbol{\Xi})^d \} \cap V .$$
\noindent We denote the space of traces normal to the rigid body as: $$\displaystyle W^h := \{ \psi^h, \quad \exists v^h \in V^h: \quad v^h \cdot n = \psi^h \textrm{ on } \Gamma_C \}.$$
\noindent For multipliers, following the ideas of \cite{brivadis15}, we define the space of B-Splines of degree $p-2$ on the potential contact zone $\displaystyle \Gamma_C = \varphi_{0,\Gamma_C}(\hat{f})$. We denote by $\boldsymbol{\Xi}_{\hat{f}}$ the knot vector defined on $\hat{f}$ and by $\tilde{\boldsymbol{\Xi}}_{\hat{f}}$ the knot vector obtained by removing the first and last value in each knot vector. We define:
$$\displaystyle \Lambda^h := \{ \lambda^h = \hat{\lambda}^h \circ \varphi_{0,\Gamma_C}^{-1}, \quad \hat{\lambda}^h \in S^{p-2}(\tilde{\boldsymbol{\Xi}}_{\hat{f}}) \} .$$
The scalar space $\Lambda^h$ is spanned by mapped B-Splines of the type $\displaystyle \hat{B}^{p-2}_{\boldsymbol{i}}(\boldsymbol{\zeta})\circ \varphi_{0,\Gamma_C}^{-1}$ for $\boldsymbol{i}$ belonging to a suitable set of indices. In order to reduce our notation, we call $K$ the unrolling of the multi-index $\boldsymbol{i}$, $K=0 \ldots {\cal K}$, and
remove super-indices: for the $K$ corresponding to a given $\boldsymbol{i}$, we set $ \hat{B}_K (\boldsymbol{\zeta}) = \hat{B}^{p-2}_{\boldsymbol{i}}(\boldsymbol{\zeta})$, $ {B}_K = \hat{B}_K \circ \varphi_{0,\Gamma_C}^{-1}$ and:
\begin{eqnarray}\label{def:space_mult}
\displaystyle \Lambda^h := Span \{ {B_K(x)}, \quad K=0 \ldots {\cal K} \} .
\end{eqnarray}
For further use, for $v \in \displaystyle L^2(\Gamma_C)$ {and for each $K=0 \ldots {\cal K}$}, we denote by $\displaystyle (\Pi_\lambda^h \cdot )_K$ {the following weighted average of $v$}: \begin{eqnarray}\label{def:l2proj_K}
\displaystyle (\Pi_\lambda^h v)_K = \frac{\displaystyle \int_{\Gamma_C} v B_K \ {\rm d} \Gamma}{\displaystyle \int_{\Gamma_C} B_K \ {\rm d} \Gamma},
\end{eqnarray}
and by $\Pi_\lambda^h$ the global operator such that: \begin{eqnarray}\label{def:l2proj}
\displaystyle \Pi_\lambda^h v = \sum_{K = 0}^{\cal K} (\Pi_\lambda^h v)_K B_K.
\end{eqnarray}
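In practice, the coefficients $(\Pi_\lambda^h v)_K$ are ratios of integrals over $\Gamma_C$ and can be assembled with any quadrature rule on the contact mesh. A minimal sketch is given below (the data layout and the function names are illustrative assumptions, not the implementation used in Section \ref{sec:sec3}):
\begin{verbatim}
import numpy as np

def pi_lambda_coeffs(v_qp, B_qp, qw):
    """Coefficients (Pi_lambda^h v)_K = int(v B_K) / int(B_K) over Gamma_C.

    v_qp : values of v at the quadrature points of Gamma_C, shape (nqp,)
    B_qp : values of the multiplier B-Splines B_K, shape (nK, nqp)
    qw   : quadrature weights including the surface measure, shape (nqp,)
    """
    num = B_qp @ (qw * v_qp)     # int_{Gamma_C} v B_K dGamma, for each K
    den = B_qp @ qw              # int_{Gamma_C} B_K dGamma,   for each K
    return num / den

def pi_lambda_field(v_qp, B_qp, qw):
    """Values of Pi_lambda^h v = sum_K (Pi_lambda^h v)_K B_K at the
    quadrature points."""
    return pi_lambda_coeffs(v_qp, B_qp, qw) @ B_qp
\end{verbatim}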
We denote by $L^h$ {the subset} of $W^h$ on which the non-negativity holds only at the control points:$$\displaystyle L^h = \{ \varphi^h \in W^h , \quad (\Pi_\lambda^h \varphi^h)_K \geq 0\quad \forall K\} .$$
\noindent We note that $L^h$ is a convex subset of $W^h$.
Next, we define the discrete convex cone of Lagrange multipliers as the negative polar cone of $L^h$ in $\Lambda^h$: $$\displaystyle M^h :=L^{h,*} = \{ \mu^h \in \Lambda^h, \quad \int_{\Gamma_C} \mu^h \varphi^h \ {\rm d} \Gamma \leq 0 \quad \forall \varphi^h \in L^{h} \}.$$
\noindent For any $Q_C \in \restr{{\cal Q}_h}{\Gamma_C}$, $\tilde{Q}_C$ denotes the support extension of $Q_C$ (see \cite{daveiga06,daveiga2014}), defined as the image of the union of the supports of the B-Splines that do not vanish on $\hat{Q}_C= \varphi_{0,\Gamma_C}^{-1} (Q_C)$.
\noindent We notice that the operator $\Pi_\lambda^h$ satisfies the following error estimate:
\begin{lemma}\label{lem:op_h1}
Let $\psi \in H^s(\Gamma_C)$ with $0\leq s\leq 1$. The local interpolation error satisfies:
\begin{eqnarray} \label{ineq:local disp}
\displaystyle \norm{\psi -\Pi_\lambda^h (\psi )}_{0,Q_C} \lesssim h^s \norm{\psi}_{s,\tilde{Q}_C} , \qquad \forall Q_C \in \restr{{\cal Q}_h}{\Gamma_C}.
\end{eqnarray}
\end{lemma}
\noindent {\bf Proof: } First, let $c$ be a constant. It holds:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle \Pi_\lambda^h c &=& \displaystyle \sum_{K = 0}^{\cal K} (\Pi_\lambda^h c)_K B_K = \sum_{K = 0}^{\cal K} \frac{\int_{\Gamma_C} c B_K \ {\rm d} \Gamma}{ \int_{\Gamma_C} B_K \ {\rm d} \Gamma} B_K = \sum_{K = 0}^{\cal K} c \frac{\int_{\Gamma_C} B_K \ {\rm d} \Gamma}{ \int_{\Gamma_C} B_K \ {\rm d} \Gamma} B_K = c \sum_{K = 0}^{\cal K} B_K .
\end{array}\nonumber
\end{eqnarray}
Using the fact that the B-Splines form a partition of unity, we obtain $ \Pi_\lambda^h c =c$.
\noindent Let $\psi \in H^s(\Gamma_C)$, it holds:
\begin{eqnarray}\label{ineq:v1}
\begin{array}{lcl}
\displaystyle \norm{\psi -\Pi_\lambda^h (\psi )}_{0,Q_C} &=& \displaystyle \norm{\psi-c -\Pi_\lambda^h (\psi -c)}_{0,Q_C}\\[0.2cm]
& \leq & \displaystyle \norm{\psi-c}_{0,Q_C} + \norm{\Pi_\lambda^h (\psi -c)}_{0,Q_C}.\\[0.2cm]
\end{array}
\end{eqnarray}
We need now to bound the operator $\Pi_\lambda^h$. We obtain:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle \norm{\Pi_\lambda^h (\psi -c )}_{0,Q_C} &=& \displaystyle \norm{ \sum_{K = 0}^{\cal K} \frac{\int_{\Gamma_C} (\psi -c ) B_K \ {\rm d} \Gamma}{ \int_{\Gamma_C} B_K \ {\rm d} \Gamma} B_K}_{0,Q_C} \\[0.4cm]
&\leq& \displaystyle \sum_{K: \ supp B_K \cap Q_C \neq \emptyset} \abs{ \frac{\int_{\Gamma_C} (\psi -c ) B_K \ {\rm d} \Gamma}{ \int_{\Gamma_C} B_K \ {\rm d} \Gamma} } \norm{B_K}_{0,Q_C} \\[0.4cm]
&\leq& \displaystyle \sum_{K: \ supp B_K \cap Q_C \neq \emptyset} \norm{\psi -c }_{0,\tilde{Q}_C} \frac{\norm{B_K}_{0,\tilde{Q}_C}}{\int_{\Gamma_C} B_K \ {\rm d} \Gamma} \norm{B_K}_{0,Q_C}.
\end{array}\nonumber
\end{eqnarray}
Using $\norm{B_K}_{0,\tilde{Q}_C} \sim \abs{ \tilde{Q}_C}^{1/2}$, $\norm{B_K}_{0,Q_C} \sim \abs{Q_C}^{1/2}$, $\displaystyle \int_{\Gamma_C} B_K \ {\rm d} \Gamma \sim \abs{\tilde{Q}_C}$ and Assumption 1, it holds:
\begin{eqnarray}\label{ineq:v2}
\begin{array}{lcl}
\displaystyle \norm{\Pi_\lambda^h (\psi -c)}_{0,Q_C} &\lesssim& \displaystyle \norm{\psi -c }_{0,\tilde{Q}_C}.
\end{array}
\end{eqnarray}
Using the previous inequalities \eqref{ineq:v1} and \eqref{ineq:v2}, and choosing for $c$ the mean value of $\psi$ on $\tilde{Q}_C$ (Deny--Lions lemma), for $0 \leq s \leq 1$ we obtain:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle \norm{\psi -\Pi_\lambda^h (\psi )}_{0,Q_C}& \lesssim & \displaystyle \norm{\psi -c}_{0,\tilde{Q}_C}
\ \lesssim \displaystyle h^s_{\tilde{Q}_C} \abs{\psi}_{s,\tilde{Q}_C} \ \lesssim \ h^s \norm{\psi}_{s,\tilde{Q}_C}.
\end{array}\nonumber
\end{eqnarray}
{$\mbox{}$
$\square$} \\
{\begin{proposition}\label{prop:infsupL2}
For $h$ sufficiently small, there exists a $\beta>0$ such that:
\begin{eqnarray} \label{ineq:infsupL2}
\displaystyle \inf_{\mu^h \in M^h} \sup_{\psi^h \in W^h} \frac{ - \int_{\Gamma_C} \psi^h \mu^h \ {\rm d} \Gamma}{\norm{\psi^h}_{0,\Gamma_C}\norm{\mu^h}_{0,\Gamma_C} } \geq \beta .
\end{eqnarray}
\end{proposition} }
\noindent {\bf Proof: } In the article \cite{brivadis15}, the authors prove that, if $h$ is sufficiently small, there exists a constant $\beta$ independent of $h$ such that:
\begin{eqnarray} \label{ineq:infsup_ericka}
\displaystyle \forall {\phi^h} \in (\Lambda^h)^d , \quad \exists u^h \in \restr{V^h}{\Gamma_C}, \quad \textrm{s.t.} \quad \displaystyle \frac{ - \int_{\Gamma_C} {\phi^h} \cdot u^h \ {\rm d} \Gamma}{\norm{u^h}_{0,\Gamma_C} } \geq \beta \norm{{\phi^h}}_{0,\Gamma_C} .
\end{eqnarray}
\noindent Given now $\lambda^h \in \Lambda^h$ and $\psi^h \in W^h$, we would like to choose ${\phi^h} = {\lambda^h n}$ and $\psi^h = u^h \cdot n$ in \eqref{ineq:infsup_ericka}, but, unfortunately, it is clear that ${\phi^h} \not\in (\Lambda^h)^d$. Nevertheless, \eqref{ineq:infsupL2} can be obtained from \eqref{ineq:infsup_ericka} via a superconvergence argument that we discuss in the next lines.
\noindent Let $\Pi_{(\Lambda^h)^d} : L^2(\Gamma_C)^d \rightarrow (\Lambda^h)^d$ be a quasi-interpolant defined and studied, \textit{e.g.}, in \cite{daveiga2014}.
\noindent If $n \in W^{p-1,\infty} (\Gamma_C)$, by the same super-convergence argument used in \cite{brivadis15}, we obtain that:
\begin{eqnarray}\label{ineq:alpha}
\begin{array}{lcl}
\displaystyle \norm{{\phi^h} - \Pi_{(\Lambda^h)^d}({\phi^h})}_{0,\Gamma_C}
&\leq& \displaystyle \alpha h \norm{{\phi^h} }_{0,\Gamma_C}.
\end{array}
\end{eqnarray}
Note that:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle b(\lambda^h, u^h)& =&\displaystyle - \int_{\Gamma_C} \lambda^h (u^h \cdot n) \ {\rm d} \Gamma=- \int_{\Gamma_C} {\phi^h} \cdot u^h \ {\rm d} \Gamma \\
& =&\displaystyle - \int_{\Gamma_C} \Pi_{(\Lambda^h)^d}({\phi^h}) \cdot u^h \ {\rm d} \Gamma - \int_{\Gamma_C} \left({\phi^h} - \Pi_{(\Lambda^h)^d}({\phi^h}) \right)\cdot u^h \ {\rm d} \Gamma .
\end{array}\nonumber
\end{eqnarray}
By the $\inf$-$\sup$ condition \eqref{ineq:infsup_ericka}, we get:
\begin{eqnarray}
\displaystyle \sup_{u^h \in V^h} \frac{- \int_{\Gamma_C} \Pi_{(\Lambda^h)^d}({\phi^h}) \cdot u^h \ {\rm d} \Gamma}{\norm{u^h}_{0,\Gamma_C} } \geq \beta \norm{\Pi_{(\Lambda^h)^d}({\phi^h})}_{0,\Gamma_C}, \nonumber
\end{eqnarray}
By \eqref{ineq:alpha}, it holds:
$$\int_{\Gamma_C} \left({\phi^h} - \Pi_{(\Lambda^h)^d}({\phi^h}) \right)\cdot u^h \ {\rm d} \Gamma \leq \alpha h \norm{{\phi^h} }_{0,\Gamma_C} \norm{u^h }_{0,\Gamma_C} .$$
Thus:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle \frac{b( \lambda^h , u^h )}{~\norm{u^h}_{0,\Gamma_C} } &\geq&\displaystyle \beta \norm{\Pi_{(\Lambda^h)^d}({\phi^h})}_{0,\Gamma_C} - \alpha h \norm{{\phi^h} }_{0,\Gamma_C} .
\end{array}\nonumber
\end{eqnarray}
Noting that $ \norm{\Pi_{(\Lambda^h)^d}({\phi^h})}_{0,\Gamma_C} \geq \norm{{\phi^h} }_{0,\Gamma_C} - \alpha h \norm{{\phi^h} }_{0,\Gamma_C}$, $ \norm{{\phi^h} }_{0,\Gamma_C} \sim \norm{\lambda^h }_{0,\Gamma_C}$ and $ \norm{u^h }_{0,\Gamma_C} \sim \norm{\psi^h }_{0,\Gamma_C}$, we finally obtain:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle \sup_{\psi^h \in W^h} \frac{ - \int_{\Gamma_C} \psi^h \lambda^h \ {\rm d} \Gamma}{\norm{\psi^h}_{0,\Gamma_C} } &\geq&\displaystyle \beta \norm{\lambda^h}_{0,\Gamma_C} - \alpha h \norm{\lambda^h}_{0,\Gamma_C}.
\end{array}\nonumber
\end{eqnarray}
For $h$ sufficiently small, this implies that there exists a constant $\beta'$ independent of $h$ such that:
\begin{eqnarray}\label{ineq:infsup l2}
\begin{array}{lcl}
\displaystyle \sup_{\psi^h \in W^h} \frac{ - \int_{\Gamma_C} \psi^h \lambda^h \ {\rm d} \Gamma }{~\norm{\psi^h}_{0,\Gamma_C} }&\geq&\displaystyle \beta' \norm{\lambda^h}_{0,\Gamma_C}.
\end{array}
\end{eqnarray}
{$\mbox{}$
$\square$}
{\begin{lemma}\label{lem:neg}
For $h$ sufficiently small, $M^h$ can be characterised as follows:
$$M^h \equiv \{ \mu^h \in \Lambda^h, \quad \mu^h = \sum_K \mu^h_K B_K, \quad \mu^h_K\leq 0 \} .$$
\end{lemma}
\noindent {\bf Proof: } By definition of $\displaystyle M^h$, we have $\displaystyle \int_{\Gamma_C} \mu^h \varphi^h \ {\rm d} \Gamma \leq 0$ for all $\displaystyle \varphi^h \in L^h$. We have:
$$ \displaystyle \int_{\Gamma_C} \mu^h \varphi^h \ {\rm d} \Gamma = \sum_K \mu^h_K \int_{\Gamma_C} B_K \varphi^h \ {\rm d} \Gamma = \sum_K \mu^h_K (\Pi_\lambda^h \varphi^h)_K \int_{\Gamma_C} B_K\ {\rm d} \Gamma \leq 0 .$$
Thus the question is whether it is possible to find, for each $K= 0 \ldots {\cal K}$, a function $\varphi^h_K \in L^h$ such that:
$$ \displaystyle \int_{\Gamma_C} \mu^h \varphi^h_K \ {\rm d} \Gamma = \mu^h_K \int_{\Gamma_C} B_K\ {\rm d} \Gamma. $$
The existence of such a $\varphi^h_K$ is guaranteed by Proposition \ref{prop:infsupL2}: a consequence of the $\inf$-$\sup$ condition \eqref{ineq:infsupL2} is that the matrix representing the scalar product $\displaystyle \int_{\Gamma_C} \mu^h \varphi^h \ {\rm d} \Gamma$ has full rank. }
{$\mbox{}$
$\square$} \\
Then a discretized mixed formulation of the problem \eqref{eq:mixed_form} consists in finding $(u^h,\lambda^h) \in V^h \times M^h$ such that:
\begin{eqnarray} \label{eq:discrete_mixed_form}
\left\{
\begin{array}{rl}
\displaystyle a(u^h,v^h) - b(\lambda^h,v^h) = L(v^h),& \displaystyle \qquad \forall v^h \in V^h,\\
\displaystyle b(\mu^h-\lambda^h,u^h) \geq 0,&\displaystyle \qquad \forall \mu^h \in M^h.
\end{array}
\right.
\end{eqnarray}
According to Lemma \ref{lem:neg}, we get: $$\{ \mu^h \in M^h: \quad b(\mu^h,v^h)=0 \quad \forall v^h \in V^h \} =\{0\} ,$$
and, using the ellipticity of the bilinear form $a(\cdot,\cdot)$ on $V^h$, the problem \eqref{eq:discrete_mixed_form} admits a unique solution $(u^h,\lambda^h) \in V^h \times M^h$.
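For the implementation, it is useful to keep in mind the algebraic form of \eqref{eq:discrete_mixed_form}. Writing $u^h = \sum_A u_A w_A$, with $\{w_A\}$ a basis of $V^h$, and $\lambda^h = \sum_K \lambda_K B_K$, and using the characterisation of $M^h$ given in Lemma \ref{lem:neg}, the discrete problem can be rewritten in a standard way (with componentwise inequalities) as the complementarity system
\begin{eqnarray}
\left\{
\begin{array}{l}
\displaystyle \mathbb{A} \boldsymbol{u} - \mathbb{B}^T \boldsymbol{\lambda} = \boldsymbol{L}, \\[0.1cm]
\displaystyle \boldsymbol{\lambda} \leq 0, \qquad \mathbb{B} \boldsymbol{u} \leq 0, \qquad \boldsymbol{\lambda}^T \mathbb{B} \boldsymbol{u} = 0,
\end{array}
\right.
\qquad \mathbb{A}_{AB} = a(w_B,w_A), \quad \mathbb{B}_{KA} = b(B_K,w_A), \quad \boldsymbol{L}_{A} = L(w_A), \nonumber
\end{eqnarray}
where $(\mathbb{B}\boldsymbol{u})_K = b(B_K,u^h) = -(\Pi_\lambda^h u^h_n)_K \int_{\Gamma_C} B_K \, {\rm d}\Gamma$, so that the constraint $\mathbb{B}\boldsymbol{u} \leq 0$ is precisely the condition $u^h_n \in L^h$.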
Before addressing the analysis of \eqref{eq:discrete_mixed_form}, let us recall the following approximation results (see \cite{daveiga06}), which hold for the primal and the dual spaces.
\begin{theorem}
Let the mesh be quasi-uniform and let $r, s$ be such that $0 \leq r \leq s \leq p +1$. Then, there exists a constant depending only on $p, \theta, \varphi_0$ and $\hat{W}$ such that, for any $v \in H^s(\Omega)$, there exists an approximation $v^h \in V^h$ satisfying
\begin{eqnarray} \label{ineq:Np}
\displaystyle \norm{v-v^h}_{r,\Omega} \lesssim h^{s-r} \norm{v}_{s,\Omega}.
\end{eqnarray}
\end{theorem}
\noindent We will also make use of the local approximation estimates for splines quasi-interpolants that can be found \textit{e.g.} in \cite{daveiga06,daveiga2014}.
\begin{lemma}\label{lem:op_mult}
Let $\lambda \in H^s(\Gamma_C)$ with $0\leq s \leq p-1$. Then, with a constant depending only on $p,\varphi_0$ and $\theta$, there exists an approximation $\lambda^h \in \Lambda^h$ such that:
\begin{eqnarray} \label{ineq:local}
\displaystyle h^{-1/2} \norm{\lambda-\lambda^h}_{-1/2,Q_C} + \norm{\lambda-\lambda^h}_{0,Q_C} \lesssim h^{s} \norm{\lambda}_{s,\tilde{Q}_C} , \qquad \forall Q_C \in \restr{{\cal Q}_h}{\Gamma_C}.
\end{eqnarray}
\end{lemma}
{\noindent It is well known \cite{fortinbook13} that the stability of the mixed problem \eqref{eq:mixed_form} is linked to the $\inf$-$\sup$ condition.
\begin{theorem}
For $h$ sufficiently small, $n$ sufficiently regular and for any $\mu^h \in \Lambda^h$, it holds:
\begin{eqnarray} \label{ineq:infsup}
\displaystyle \sup_{v^h \in V^h} \frac{b(\mu^h,v^h)}{\norm{v^h}_{V} } \geq \beta \norm{\mu^h}_{W'},
\end{eqnarray}
where $\beta$ is independent of $h$.
\end{theorem} }
\noindent {\bf Proof: }
{By Lemma \ref{lem:neg},} there exists
a Fortin operator $\Pi : \ L^2(\Gamma_C) \rightarrow \restr{V^h}{\Gamma_C} \cap H^1_0(\Gamma_C) $ such that
\begin{eqnarray}
\begin{array}{ll}
&\displaystyle b(\lambda,\Pi (u))= b(\lambda,u) , \ \forall \lambda \in M \quad \textrm{ and } \quad \displaystyle \norm{\Pi (u)}_{0,\Gamma_C} \leq \norm{u}_{0,\Gamma_C}.
\end{array}\nonumber
\end{eqnarray}
Let $I_h$ be an $L^2$- and $H^1$-stable quasi-interpolant onto $\restr{V^h}{\Gamma_C}$ (for example, the Schumaker quasi-interpolant; see \cite{daveiga2014} for more details). It is important to notice that $I_h$ preserves the homogeneous Dirichlet boundary condition.
\noindent We set $\Pi_F = \Pi (I-I_h) + I_h$. Since $b(\lambda,\Pi(v)) = b(\lambda,v)$ for all $\lambda \in M$ and all $v$, it is classical to see that:
\begin{eqnarray}\label{prop:pif-b}
\begin{array}{ll}
\displaystyle b(\lambda,\Pi_F (u)) = b(\lambda,u) , \quad \forall \lambda \in M,
\end{array}
\end{eqnarray}
and, since $I_h$ preserves the functions of $\restr{V^h}{\Gamma_C}$, it is easy to see that:
\begin{eqnarray}\label{prop:pif-=}
\begin{array}{ll}
\displaystyle \Pi_F (u^h)= u^h , \quad \forall u^h \in \restr{V^h}{\Gamma_C} .
\end{array}
\end{eqnarray}
Moreover, by stability of $\Pi$ and $I_h$, it holds:
\begin{eqnarray}\label{prop:pif-leq}
\begin{array}{ll}
\displaystyle \displaystyle \norm{\Pi_F (u)}_{0,\Gamma_C} \lesssim \norm{u}_{0,\Gamma_C}, \quad \forall u \in L^2(\Gamma_C).
\end{array}
\end{eqnarray}
and also
\begin{eqnarray}\label{prop:pif-leqh1}
\begin{array}{lcl}
\norm{\Pi_F (u)}_{1,\Gamma_C} \lesssim \norm{u}_{1,\Gamma_C} , \quad \forall u \in H^1(\Gamma_C).
\end{array}
\end{eqnarray}
{To conclude, we distinguish between two cases :
\begin{itemize}
\item If $\overline{\Gamma}_D \cap \overline{\Gamma}_C =\emptyset$, it is well known that $W=H^{1/2}(\Gamma_C)$. By interpolation of Sobolev spaces, using \eqref{prop:pif-leq} and \eqref{prop:pif-leqh1}, we obtain:
$$ \displaystyle b(\lambda,\Pi_F (u)) = b(\lambda,u), \quad \forall \lambda \in M \quad \textrm{ and } \quad \displaystyle \norm{\Pi_F (u)}_{W} \lesssim \norm{u}_{W}. $$
Then the $\inf$-$\sup$ condition \eqref{ineq:infsup} holds thanks to Proposition 5.4.2 of \cite{fortinbook13}.
\item If $\overline{\Gamma}_D \cap \overline{\Gamma}_C \neq \emptyset$, it is enough to recall that, for all $u \in H^1_{0, \Gamma_D \cap \Gamma_C} (\Gamma_C)$, we have $\displaystyle \Pi_F (u) \in H^1_{0, \Gamma_D \cap \Gamma_C} (\Gamma_C)$ and \eqref{prop:pif-leqh1} is valid on the subspace $H^1_{0, \Gamma_D \cap \Gamma_C} (\Gamma_C)$.
Again, by an interpolation argument between \eqref{prop:pif-leq} and \eqref{prop:pif-leqh1}, it holds $ \displaystyle \norm{\Pi_F (u)}_{W} \leq C\norm{u}_{W}$, which ends the proof.
\end{itemize}}
{$\mbox{}$
$\square$}
\section{\textit{A priori} error analysis}
\label{sec:sec2}
In this section, we present an optimal \textit{a priori} error estimate for the Signorini mixed problem. Our estimates follow the ones for finite elements provided in \cite{coorevits-hild-lhalouani-sassi-02,hild-laborde-02} and refined in \cite{drouet15}. In particular, in \cite{drouet15} the authors overcome a technical assumption on the geometric structure of the contact set, and we are able to avoid such assumptions also in our case.
Indeed, for any $p$, we prove our method to be optimal for solutions with regularity up to $5/2$.
Thus, optimality for the displacement is obtained for any $p \geq 2$. The cheapest and most convenient method proved optimal corresponds to the choice $p=2$. Larger values of $p$ may be of interest because they produce continuous pressures, but, on the other hand, the error bounds remain limited by the regularity of the solution, \textit{i.e.}, up to $C h^{3/2}$. Clearly, to enhance the approximation, suitable local refinement may be used \cite{lorenzis11,lorenzis12}, but this choice is outside the scope of this paper.
In order to prove Theorem \ref{tmp:apriori} which follows, we need a few preparatory Lemmas.
\noindent First, we introduce some notation and some basic estimates. Let us define the contact and non-contact sets for the variational problem. Given an element $Q_C \in \restr{{\cal Q}_h}{\Gamma_C}$ of the undeformed mesh, we denote by $Z_{C}(Q_C)$ the contact set and by $Z_{NC}(Q_C)$ the non-contact set in $Q_C$, as follows:
\begin{eqnarray}
\begin{array}{rl}
\displaystyle Z_{C}(Q_C) = \{ x \in Q_C, \quad u_n (x) = 0\} \quad \textrm{and} \quad \displaystyle Z_{NC}(Q_C) = \{ x \in Q_C, \quad u_n (x) > 0\} .
\end{array}\nonumber
\end{eqnarray}
$|Z_{C}(Q_C)|$ and $|Z_{NC}(Q_C)|$ stand for their measures and $|Z_{C}(Q_C)|+|Z_{NC}(Q_C)| = |Q_C| = C h_{Q_C}^{d-1}$.
\begin{remark}\label{rq:cont}
Since $u \in H^{3/2+\nu}(\Omega)^{d}$ with $0<\nu<1$, the trace $u_n$ belongs to $H^{1+\nu}(\Gamma_C)$ and, if $d=2$, the Sobolev embeddings ensure that $u_n \in {\cal C}^0(\overline{\Gamma}_C)$. It implies that $Z_{C}(Q_C)$ and $Z_{NC}(Q_C)$ are measurable as inverse images of a set by a continuous function.
\end{remark}
The following estimates are the generalization to the mixed problem of Lemma 2 in the Appendix of \cite{drouet15}. We recall that if $(u,\lambda)$ is a solution of the mixed problem \eqref{eq:mixed_form} then $\sigma_n(u)=\lambda$. So, the following lemma can be proven exactly in the same way.
\begin{lemma} \label{lemma1}
Let $d=2$ or $3$. Let $(u,\lambda)$ be the solution of the mixed formulation \eqref{eq:mixed_form} and let $u \in H^{3/2+ \nu}(\Omega)^d$ with $0<\nu<1$. Let $h_{Q_C}$ be the diameter of the trace element $Q_C$, and let $Z_{C}(Q_C)$ and $Z_{NC}(Q_C)$ be the contact and non-contact sets defined previously in $Q_C$. \\
If $|Z_{NC}(Q_C)|>0$, the following
$L^2$-estimate holds for $\lambda$:
\begin{eqnarray} \label{estimate2}
\displaystyle \norm{\lambda}_{0,Q_C} \leq \frac{1}{|Z_{NC}(Q_C)|^{1/2}} h_{Q_C}^{d/2+\nu-1/2} |\lambda|_{\nu,Q_C } .
\end{eqnarray}
If $|Z_{C}(Q_C)|>0$, the following
$L^2$-estimate holds for $\nabla u_n$:
\begin{eqnarray} \label{estimate4}
\displaystyle \norm{\nabla u_n}_{0,Q_C} \leq \frac{1}{|Z_{C}(Q_C)|^{1/2}} h_{Q_C}^{d/2+\nu-1/2} |\nabla u_n|_{\nu,Q_C } .
\end{eqnarray}
\end{lemma}
\begin{theorem}\label{tmp:apriori} Let $(u,\lambda)$ and $(u^h,\lambda^h)$ be respectively the solution of the mixed problem \eqref{eq:mixed_form} and the discrete mixed problem \eqref{eq:discrete_mixed_form}. Assume that $u \in H^{3/2+\nu}(\Omega)^d$ with $0<\nu<1$. Then, the following error estimate is satisfied:
\begin{eqnarray} \label{eq:apriori}
\norm{u-u^h}_{V}^2 + \norm{\lambda-\lambda^h}^2_{W'} \lesssim h^{1+2\nu} \norm{u}^2_{3/2 + \nu,\Omega}.
\end{eqnarray}
\end{theorem}
\noindent {\bf Proof: } {In the article \cite{hild-laborde-02} Proposition 4.1, it is proved that if $(u,\lambda)$ is the solution of the mixed problem \eqref{eq:mixed_form} and $(u^h,\lambda^h)$ is the solution of the discrete mixed problem \eqref{eq:discrete_mixed_form}, it holds :}
\begin{eqnarray}
\begin{array}{rl}
\displaystyle \norm{u-u^h}_{V}^2 + \norm{\lambda-\lambda^h}_{W'}^2 \lesssim & \displaystyle
\norm{u-v^h}_{V}^2 +
\norm{\lambda-\mu^h}_{W'}^2 \\[0.4cm]
& \displaystyle +\max(-b(\lambda,u^h),0) +\max(- b(\lambda^h,u),0) .
\end{array}\nonumber
\end{eqnarray}
It remains to estimate the last two terms of the previous inequality in order to obtain the estimate \eqref{eq:apriori}. \\
\noindent\textbf{Step 1: estimate of $-b(\lambda,u^h) \displaystyle = \int_{\Gamma_C} \lambda u^h_n \ {\rm d} \Gamma$.}\\
\noindent Using the operator $\Pi_\lambda^h$ defined in \eqref{def:l2proj}, it holds:
\begin{eqnarray}
\begin{array}{lcl}
\displaystyle -b(\lambda,u^h) & \displaystyle =&\displaystyle \int_{\Gamma_C} \lambda u^h_n \ {\rm d} \Gamma \displaystyle= \int_{\Gamma_C} \lambda \left(u^h_n - \Pi_\lambda^h (u^h_n)\right) \ {\rm d} \Gamma + \int_{\Gamma_C} \lambda \Pi_\lambda^h ( u^h_n) \ {\rm d} \Gamma \\[0.4cm]
& \displaystyle = &\displaystyle \int_{\Gamma_C} \left(\lambda -\Pi_\lambda^h (\lambda) \right) \left(u^h_n - \Pi_\lambda^h (u^h_n)\right) \ {\rm d} \Gamma + \int_{\Gamma_C} \Pi_\lambda^h (\lambda) \left(u^h_n - \Pi_\lambda^h (u^h_n)\right) \ {\rm d} \Gamma \\[0.4cm]
&& \displaystyle + \int_{\Gamma_C} \lambda \Pi_\lambda^h ( u^h_n) \ {\rm d} \Gamma.
\end{array}\nonumber
\end{eqnarray}
\noindent Since $\lambda$ is a solution of \eqref{eq:mixed_form}, it holds $\displaystyle \Pi_\lambda^h (\lambda) \leq 0$. Furthermore, $u^h$ is a solution of \eqref{eq:discrete_mixed_form}, thus $\displaystyle \int_{\Gamma_C} \Pi_\lambda^h (\lambda) \left(u^h_n - \Pi_\lambda^h (u^h_n)\right) \ {\rm d} \Gamma \leq 0$ and $\displaystyle \int_{\Gamma_C} \lambda \Pi_\lambda^h ( u^h_n) \ {\rm d} \Gamma \leq 0$.
\noindent We obtain:
\begin{eqnarray} \label{eq:eq1}
\begin{array}{lcl}
\displaystyle -b(\lambda,u^h)& \displaystyle \leq& \displaystyle \int_{\Gamma_C} \left(\lambda -\Pi_\lambda^h (\lambda) \right) \left(u^h_n - \Pi_\lambda^h (u^h_n)\right) \ {\rm d} \Gamma \\[0.4cm]
& \leq &\displaystyle \int_{\Gamma_C} \left(\lambda -\Pi_\lambda^h (\lambda) \right) \left(u^h_n -u_n- \Pi_\lambda^h (u^h_n-u_n)\right) \ {\rm d} \Gamma \\[0.4cm]
& &+\displaystyle\int_{\Gamma_C} \left(\lambda -\Pi_\lambda^h (\lambda) \right) \left(u_n- \Pi_\lambda^h (u_n)\right) \ {\rm d} \Gamma.
\end{array}
\end{eqnarray}
The first term of \eqref{eq:eq1} is bounded in an optimal way by using \eqref{ineq:local disp}, the summation on each physical element, Theorem \ref{thm:u-lambda} and the trace theorem:
\begin{eqnarray}
\begin{array}{ll}
\displaystyle \int_{\Gamma_C} \left(\lambda -\Pi_\lambda^h (\lambda) \right) \left(u^h_n -u_n- \Pi_\lambda^h (u^h_n-u_n)\right) \ {\rm d} \Gamma \!\!\!\!\!&\leq \displaystyle \norm{\lambda -\Pi_\lambda^h (\lambda) }_{0,\Gamma_C} \norm{u^h_n -u_n- \Pi_\lambda^h (u^h_n-u_n)}_{0,\Gamma_C}\\[0.2cm]
&\leq \displaystyle C h^{1/2+\nu} \norm{\lambda }_{\nu,\Gamma_C} \norm{u_n-u^h_n}_{W}\\[0.2cm]
&\leq \displaystyle C h^{1/2+\nu} \norm{u}_{3/2+\nu,\Omega} \norm{u-u^h}_{V}.
\end{array}\nonumber
\end{eqnarray}
We need now to bound the second term in \eqref{eq:eq1}. Let $Q_C$ be an element of $ \restr{{\cal Q}_h}{\Gamma_C}$. If either $|Z_{C}(Q_C)|$ or $|Z_{NC}(Q_C)|$ is null, the integral on $Q_C$ vanishes. So we suppose that one of $|Z_{C}(Q_C)|$ and $|Z_{NC}(Q_C)|$ is greater than $\displaystyle|Q_C|/2= C h_{Q_C}^{d-1}$ and we consider the two cases separately. \\
Similarly to the article \cite{hild-laborde-02}, we can prove the following bounds:
\begin{itemize}
\item If $|Z_{C}(Q_C)| \geq \displaystyle {|Q_C|}/{2} $: using the estimate \eqref{ineq:local disp}, the estimate \eqref{estimate4} of Lemma \ref{lemma1} and Young's inequality, it holds:
\begin{eqnarray}
\begin{array}{rcl}
\displaystyle \int_{Q_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma
&\lesssim&\displaystyle h^{1+2\nu} (\norm{\lambda}^2_{\nu,Q_C} + \norm{ u_n}^2_{1+\nu,\tilde{Q}_C}).
\end{array}\nonumber
\end{eqnarray}
\item If $|Z_{NC}(Q_C)| \geq \displaystyle {|Q_C|}/{2} $: using the estimate \eqref{ineq:local disp}, the estimate \eqref{estimate2} of Lemma \ref{lemma1} and Young's inequality, it holds:
\begin{eqnarray}
\begin{array}{rcl}
\displaystyle \int_{Q_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma
& \lesssim &\displaystyle h^{1+2\nu} (\norm{\lambda}^2_{\nu,\tilde{Q}_C} + \norm{ u_n}^2_{1+\nu,\tilde{Q}_C}).
\end{array}\nonumber
\end{eqnarray}
\end{itemize}
\noindent Summing over all the contact elements and distinguishing the two cases $|Z_{C}(Q_C)| \geq {|Q_C|}/{2}$ and $|Z_{NC}(Q_C)| \geq {|Q_C|}/{2}$, it holds:
\begin{eqnarray}
\begin{array}{rcl}
\displaystyle \int_{\Gamma_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma & = &\displaystyle \sum_{Q_C \in \restr{{\cal Q}_h}{\Gamma_C}} \int_{Q_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma \\[0.4cm]
&\leq&\displaystyle C h^{1+2\nu} \sum_{Q_C \in \restr{{\cal Q}_h}{\Gamma_C}} \Big{(} \norm{\lambda}^2_{\nu,Q_C} + \norm{\lambda}^2_{\nu,\tilde{Q}_C} + \norm{ u_n}^2_{1+\nu,\tilde{Q}_C} \Big{)} \\[0.4cm]
&\leq&\displaystyle C h^{1+2\nu} \sum_{Q_C \in \restr{{\cal Q}_h}{\Gamma_C}} \Big{(} \norm{\lambda}^2_{\nu,Q_C} + \sum_{Q_C' \subset \tilde{Q}_C} \norm{\lambda}^2_{\nu,Q_C'} + \norm{ u_n}^2_{1+\nu,Q_C'} \Big{)} \\[0.4cm]
&\leq&\displaystyle C h^{1+2\nu} \Big{(} \norm{\lambda}^2_{\nu,\Gamma_C} + \sum_{Q_C \in \restr{{\cal Q}_h}{\Gamma_C}} \sum_{Q_C' \subset \tilde{Q}_C} \norm{\lambda}^2_{\nu,Q_C'} + \norm{ u_n}^2_{1+\nu,Q_C'} \Big{)}.
\end{array}\nonumber
\end{eqnarray}
Due to the compact supports of the B-Splines basis functions, there exists a constant $C$ depending only on the degree $p$ and the dimension $d$ of the undeformed domain such that: $$\displaystyle \sum_{Q_C \in \restr{{\cal Q}_h}{\Gamma_C}} \sum_{Q_C' \subset \tilde{Q}_C} \norm{\lambda}^2_{\nu,Q_C'} + \norm{ u_n}^2_{1+\nu,Q_C'} \leq C \norm{\lambda}^2_{\nu,\Gamma_C} + C\norm{ u_n}^2_{1+\nu,\Gamma_C} .$$
So we have: $$\displaystyle \int_{\Gamma_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma \leq C h^{1+2\nu } \Big{(} \norm{\lambda}^2_{\nu,\Gamma_C} + \norm{ u_n}^2_{1+\nu,\Gamma_C}\Big{)} ,$$
\textit{i.e.} $$\displaystyle \int_{\Gamma_C} (\lambda -\Pi_\lambda^h (\lambda) ) (u_n -\Pi_\lambda^h (u_n ) ) \ {\rm d} \Gamma \leq C h^{1+2\nu} \norm{u}^2_{3/2+\nu,\Omega} .$$
\noindent We conclude that:
\begin{eqnarray}
\begin{array}{rl}
\displaystyle -b(\lambda,u^h) & \displaystyle \lesssim h^{1/2+\nu} \norm{u}_{3/2+\nu,\Omega} \norm{u-u^h}_{V} + h^{1+2\nu}\norm{u}^2_{3/2+\nu,\Omega}.
\end{array}\nonumber
\end{eqnarray}
\noindent Using Young's inequality, we obtain:
\begin{eqnarray} \label{step2}
\begin{array}{rl}
-b(\lambda,u^h) \lesssim h^{1+2\nu} \norm{u}^2_{3/2+\nu,\Omega} + \norm{u-u^h}^2_{V} .
\end{array}
\end{eqnarray}
\noindent\textbf{Step 2: estimate of $-b(\lambda^h,u) \displaystyle = \int_{\Gamma_C} \lambda^h u_n \ {\rm d} \Gamma$.}\\
Let us denote by $j^h$ the Lagrange interpolation operator of order one on {$ \restr{{\cal Q}_h}{\Gamma_C}$}.
\begin{eqnarray}
\begin{array}{rl}
\displaystyle -b(\lambda^h,u) & \displaystyle = \int_{\Gamma_C} \lambda^h u_n \ {\rm d} \Gamma = \int_{\Gamma_C} \lambda^h (u_n -j^h (u_n )) \ {\rm d} \Gamma + \int_{\Gamma_C} \lambda^h j^h (u_n ) \ {\rm d} \Gamma .\\
\end{array}\nonumber
\end{eqnarray}
Note that, by Remark \ref{rq:cont}, $u_n$ is continuous and $j^h (u_n )$ is well defined. \\
\noindent Since $u$ is a solution of \eqref{eq:mixed_form}, it holds $\displaystyle j^h (u_n ) \geq 0$. Thus, $\displaystyle \int_{\Gamma_C} \lambda^h j^h (u_n ) \ {\rm d} \Gamma \leq 0$ for $\lambda^h \in M^h$.
\noindent As previously, we obtain:
\begin{eqnarray}
\begin{array}{rl}
\displaystyle -b(\lambda^h,u) & \displaystyle \leq \int_{\Gamma_C} \lambda^h u_n \ {\rm d} \Gamma \displaystyle \leq \int_{\Gamma_C} \lambda^h (u_n -j^h (u_n )) \ {\rm d} \Gamma \\
& \displaystyle \leq \int_{\Gamma_C} (\lambda^h - \lambda)(u_n -j^h (u_n )) \ {\rm d} \Gamma + \int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma\\
& \displaystyle \leq \int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma+ \norm{\lambda-\lambda^h}_{W'} \norm{u_n - j^h (u_n )}_{W}\\
& \displaystyle \leq \int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma + h^{1/2 +\nu}\norm{u_n }_{1+\nu,\Gamma_C} \norm{\lambda-\lambda^h}_{W'}\\
& \displaystyle \leq \int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma + h^{1/2 +\nu}\norm{u}_{3/2+\nu,\Omega} \norm{\lambda-\lambda^h}_{W'}.
\end{array}\nonumber
\end{eqnarray}
\noindent Now, we need to show that: \begin{eqnarray} \label{step1_1}
\int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma \leq C h^{1+2\nu} \norm{u}_{3/2+\nu,\Omega}^2.\end{eqnarray}
\noindent The proof of this inequality is done in the paper \cite{drouet15} for both linear and quadratic finite elements, and can be repeated here verbatim. In this proof, two cases are considered:
\begin{enumerate}
\item either $|Z_{C}(Q_C)|$ or $|Z_{NC}(Q_C)|$ is null and thus the inequality is trivial;
\item either $|Z_{C}(Q_C)|$ or $|Z_{NC}(Q_C)|$ is greater than $\displaystyle {|Q_C|}/{2}= C h_{Q_C}^{d-1}$.
\end{enumerate}
As previously, distinguishing between the two cases, using Lemma \ref{lemma1} and summing over all the elements of the mesh,
{\noindent we conclude that:
\begin{eqnarray}
\begin{array}{rl}
\displaystyle -b(\lambda^h,u) & \displaystyle \leq \int_{\Gamma_C} \lambda (u_n -j^h (u_n )) \ {\rm d} \Gamma + h^{1/2 +\nu}\norm{u}_{3/2+\nu,\Omega} \norm{\lambda-\lambda^h}_{W'}\\
& \displaystyle \lesssim h^{1+2\nu}\norm{u}^2_{3/2+\nu,\Omega} + h^{1/2 +\nu}\norm{u}_{3/2+\nu,\Omega} \norm{\lambda-\lambda^h}_{W'}.
\end{array}\nonumber
\end{eqnarray}
Using Young's inequality, we obtain:
\begin{eqnarray} \label{step1}
\begin{array}{rl}
\displaystyle -b(\lambda^h,u) & \displaystyle \lesssim h^{1+2\nu}\norm{u}^2_{3/2+\nu,\Omega} + \norm{\lambda-\lambda^h}^2_{W'}.
\end{array}
\end{eqnarray}}
Finally, we can conclude using \eqref{step1} and \eqref{step2}.
{$\mbox{}$
$\square$}
\section{Numerical Study}
\label{sec:sec3}
In this section, we perform a numerical validation of the proposed method in small as well as in large deformation frameworks, \textit{i.e.}, also beyond the theory developed in the previous sections. Due to the intrinsic lack of regularity of contact solutions, we restrict ourselves to the case $p=2$, for which the $N_2 / S_0$ method is tested.
The suite of benchmarks reproduces the classical Hertz contact problem \cite{Hertz1882,johnsonbook}:
Sections \ref{subsec:Hertz problems} and \ref{subsec:Hertz problems 3d} analyse
the two- and three-dimensional cases in a small deformation setting, whereas
Section \ref{subsec:large deformation} considers the large deformation problem in 2D.
The examples were performed using an in-house code based on the igatools library (see \cite{Pauletti2015} for further details).
In the following examples, to prevent the initial contact zone from being empty, we considered, only for the initial gap, that there is contact if $\displaystyle g_n\leq 10^{-9}$.\\
\subsection{Two-dimensional Hertz problem}
\label{subsec:Hertz problems}
The first example included in this section analyses the two-dimensional frictionless Hertz contact problem considering small elastic deformations.
It consists in an infinitely long half-cylinder body with radius $R=1$, which is deformable and whose
material is linear elastic, with Young's modulus $E = 1$ and Poisson's ratio $\nu = 0.3$.
A uniform pressure $P=0.003$ is applied on the top face of the cylinder while the curved surface contacts against a horizontal rigid plane
(see Figure \ref{fig:mesh1_a}).
Taking into account the test symmetry and the ideally infinite length of the cylinder,
the problem is modelled as a 2D quarter of a disc with proper boundary conditions.
Under the hypothesis that the contact area is small compared to the cylinder dimensions, Hertz's analytical solution (see \cite{Hertz1882,johnsonbook})
predicts that the contact region is an infinitely long band whose width is $2a$, with $a = \sqrt{8R^2P(1- \nu^2)/(\pi E)}$.
Thus, the normal pressure, which follows an elliptical distribution along the width direction $r$, is $p(r)=p_0\sqrt{1-r^2/a^2}$,
where the maximum pressure, at the central line of the band ($r=0$), is $p_0 = 4 RP/ (\pi a)$.
For the geometrical, material and load data chosen in this numerical test,
the characteristic values of the solution are $a=0.083378$ and $p_0 = 0.045812$.
Notice that, as required by Hertz's theory hypotheses, $a$ is sufficiently small compared to $R$.
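These reference values can be checked directly from the formulas recalled above; the following few lines of Python are only a sanity check of the reported numbers:
\begin{verbatim}
from math import pi, sqrt

R, E, nu = 1.0, 1.0, 0.3
P = 0.003                                        # pressure applied on the top face
a = sqrt(8 * R**2 * P * (1 - nu**2) / (pi * E))  # half-width of the contact band
p0 = 4 * R * P / (pi * a)                        # maximum pressure at r = 0
print(a, p0)                                     # 0.083378..., 0.045812...
\end{verbatim}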
It is important to remark that, despite the fact that Hertz's theory provides a
full description of the contact area and the normal contact pressure
in the region, it does not describe analytically the deformation of the whole elastic domain.
Therefore, for all the test cases hereinafter, the $L^2$ {error norm} and $H^1$ {error semi-norm} of
the displacement obtained numerically are computed taking a more refined solution
as a reference. For this bidimensional test case, the mesh size
of the refined solution $h_{ref}$ is such that, for all the discretizations,
$4 h_{ref} \leq h$, where $h$ is the size of the mesh considered.
Additionally, as it is shown in Figure \ref{fig:mesh1_a}, the mesh is finer in the
vicinity of the potential contact zone. The knot vector values
are defined such that $80\%$ of the knot spans are located within $10\%$ of the total length of the knot vector.
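A possible way to generate such a graded knot vector (purely illustrative; it only mimics the distribution described above, with the contact side placed at $\zeta=0$) is the following:
\begin{verbatim}
import numpy as np

def graded_knots(n_spans, frac_spans=0.8, frac_length=0.1):
    """Breakpoints in [0,1] with frac_spans of the knot spans located in the
    first frac_length portion of the parametric length."""
    n_fine = int(round(frac_spans * n_spans))
    fine = np.linspace(0.0, frac_length, n_fine + 1)
    coarse = np.linspace(frac_length, 1.0, n_spans - n_fine + 1)
    return np.unique(np.concatenate([fine, coarse]))

print(graded_knots(10))   # 8 spans in [0, 0.1] and 2 spans in [0.1, 1]
\end{verbatim}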
\begin{figure}
\caption{2D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=0.003$.}
\label{fig:mesh1_a}
\label{fig:mesh1_b}
\label{fig:mesh1}
\end{figure}
In particular, the analysis of this example focuses on the effect of the
interpolation order on the quality of contact stress distribution.
Thus, in Figure \ref{fig:mesh1_b} we compare the reference pressure solution
with the Lagrange multiplier values computed at the control points, \textit{i.e.} its constant values, and with a post-processing which consists in a $P1$ re-interpolation.
The dimensionless contact pressure $p/p_0$ is plotted with respect to the normalized
coordinate $r/a$.
{The results are very good: the maximum pressure computed
and the pressure distribution, even across the boundary of the contact region (on the contact and non contact zones), are close to the analytical solution.}
In Figure \ref{fig:disp 0point003_a}, absolute errors in $L^2${-norm} and $H^1${-semi-norm}
for the $N_2 / S_0$ choice are shown. As expected, optimal convergence is obtained
for the displacement error in the $H^1$-{semi-}norm: the convergence rate is close to the expected $3/2$ value.
Nevertheless, the $L^2$-norm of the displacement error presents suboptimal convergence (close to $2$), whereas,
according to the Aubin--Nitsche lemma in the linear case, the expected convergence rate is lower than $5/2$.
On the other hand, in Figure \ref{fig:disp 0point003_b} the
$L^2$-norm of the Lagrange multiplier error is presented; the expected convergence rate is $1$.
A convergence rate close to $0.6$ is achieved when we compare the numerical solution with Hertz's analytical solution, and a rate close to $0.8$ when we compare it with the refined numerical solution.
\begin{figure}
\caption{2D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=0.003$.
Absolute displacement errors in $L^2$-norm and $H^1$-semi-norm.}
\label{fig:disp 0point003_a}
\label{fig:disp 0point003_b}
\label{fig:disp 0point003}
\end{figure}
As a second example, we present the same test case but with a significantly higher applied pressure,
$P = 0.01$. Under these load conditions, the contact area is wider ($a = 0.15223$) and the
contact pressure higher ($p_0 = 0.083641$).
It can be considered that the ratio $a/R$ no longer satisfies the hypotheses of Hertz's theory.
In the same way as before, Figure \ref{fig:mesh2} shows the stress tensor magnitude
and computed contact pressure.
\begin{figure}
\caption{2D Hertz contact problem with $N_2 / S_0$ method for a higher applied pressure ($P=0.01$).}
\label{fig:mesh2_a}
\label{fig:mesh2_b}
\label{fig:mesh2}
\end{figure}
Figure \ref{fig:disp 0point01_a} shows the displacement absolute errors in $L^2$-norm and $H^1$-semi-norm
for the $N_2 / S_0$ method. As expected, optimal convergence is obtained in the $H^1$-semi-norm
(the convergence rate is close to $1.5$), while, for the $L^2$-norm, we obtain a better rate (as expected by the Aubin--Nitsche lemma) which can hardly be estimated precisely from the graph.
On the other hand, in Figure \ref{fig:disp 0point01_b} it can be seen that the $L^2$-norm of the error of the Lagrange multipliers
evidences a suboptimal behaviour: the error, which initially decreases, remains constant for smaller values of $h$.
This may be due to the choice of an excessively large normal pressure:
the approximate solution converges, but not to the analytical solution, which is no longer valid.
Indeed, when compared to a refined numerical solution (Figure \ref{fig:disp 0point01_b}),
the computed Lagrange multipliers solution converges optimally. As pointed out above,
for these examples the displacement solution error is computed with respect to a more refined numerical solution; therefore, this effect
is not present in the displacement results.
\begin{figure}
\caption{2D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=0.01$.
Absolute displacement errors in $L^2$-norm and $H^1$-semi-norm.}
\label{fig:disp 0point01_a}
\label{fig:disp 0point01_b}
\label{fig:disp 0point01}
\end{figure}
\subsection{Three-dimensional Hertz problem}
\label{subsec:Hertz problems 3d}
In this section, the three-dimensional frictionless Hertz problem is studied.
It consists in a hemispherical elastic body with radius $R$ that contacts against a horizontal rigid plane as a consequence of
a uniform pressure $P$ applied on the top face (see Figure \ref{fig:3d p=0.0001_a}).
Hertz's theory predicts that the contact region is a circle of radius $a = (3\pi R^3P(1- \nu^2)/(4 E))^{1/3}$ (the resultant force applied on the top face being $F = \pi R^2 P$)
and the contact pressure follows a hemispherical distribution
$p(r)=p_0 \sqrt{1-r^2/a^2}$, with $p_0 = 3R^2P/ (2 a^2)$, $r$ being the distance to the centre of the circle
(see \cite{Hertz1882,johnsonbook}).
In this case, for the chosen values $R=1$, $E=1$, $\nu=0.3$ and $P=10^{-4}$, the contact radius is $a=0.059853$ and the maximum pressure $p_0 = 0.041872$.
As in the two-dimensional case, Hertz's theory relies on the hypothesis that $a$ is small compared to $R$ and the deformations are small.
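As in the two-dimensional case, the reference values follow directly from the formulas above (written here with the resultant force $F=\pi R^2 P$ substituted) and can be checked with a few lines of Python:
\begin{verbatim}
from math import pi

R, E, nu, P = 1.0, 1.0, 0.3, 1e-4
a = (3 * pi * R**3 * P * (1 - nu**2) / (4 * E)) ** (1.0 / 3.0)  # contact radius
p0 = 3 * R**2 * P / (2 * a**2)                                  # maximum pressure
print(a, p0)                                      # 0.059853..., 0.041872...
\end{verbatim}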
Considering the axial symmetry of the problem, the test is reproduced using an octant of a sphere with proper boundary conditions.
Figure \ref{fig:3d p=0.0001_a} shows the problem setup and the magnitude of the computed stresses.
As in the 2D case, in order to achieve more accurate results in the contact region, the mesh is refined in the vicinity of the potential contact zone.
The knot vectors are defined such that $75\%$ of the elements are located within $10\%$ of the total length of the knot vector.
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=10^{-4}$.}
\label{fig:3d p=0.0001_a}
\label{fig:3d p=0.0001_b}
\label{fig:3d p=0.0001}
\end{figure}
In Figure \ref{fig:3d p=0.0001_b}, we compare Hertz's solution with the computed contact pressure at the control points
and a $P1$ re-interpolation of those values, for a mesh with size $h=0.1$.
On the other hand, in Figure \ref{fig:3d cp 0.0001} the contact pressure is shown at the control points for mesh sizes $h=0.4$ and $h=0.2$. As can be
appreciated, good agreement between the analytical and computed pressure is obtained in all cases.
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=10^{-4}$.}
\label{fig:3d cp 0.0001_a}
\label{fig:3d cp 0.0001_b}
\label{fig:3d cp 0.0001}
\end{figure}
As in the previous test, the displacement of the deformed elastic body is not fully described by Hertz's theory.
Therefore, the $L^2$ error norm and $H^1$ error semi-norm of the displacement are evaluated
by comparing the obtained solution with a more refined case. Nonetheless, the
computed Lagrange multiplier solutions are compared with the analytical contact pressure.
In this test case, the size of the refined mesh is $h_{ref}=0.1175$ ($0.0025$ in the contact region), and
it is such that $2 h_{ref} \leq h$.
In Figure \ref{fig:3d error disp 1_a} the displacement error norms are reported. As can be seen, they present suboptimal convergence rates
both in the $L^2$-norm and in the $H^1$-semi-norm. Convergence rates are close to $1.26$ and $0.5$, respectively.
The large mesh size of the numerical reference solution $h_{ref}$, limited by our computational resources, seems
to be the cause of these suboptimal results. Due to the coarse reference mesh, the presented rates are only pre-asymptotic.
Better behaviour is observed for the Lagrange multipliers error (Figure \ref{fig:3d error disp 1_b}).
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=10^{-4}$.}
\label{fig:3d error disp 1_a}
\label{fig:3d error disp 1_b}
\label{fig:3d error disp 1}
\end{figure}
By considering a higher pressure ($P=5\cdot10^{-4}$), the radius of the contact
zone becomes larger ($a = 0.10235$), and the ratio $a/R$ does not satisfy the theory hypotheses.
Figure \ref{fig:3d p = 0.0005} shows the stress magnitude and contact pressure at the control points
for a given mesh.
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for a higher pressure ($P=5\cdot10^{-4}$).}
\label{fig:3d p = 0.0005_a}
\label{fig:3d p = 0.0005_b}
\label{fig:3d p = 0.0005}
\end{figure}
Similarly, in Figure \ref{fig:3d cp 0.0005} the analytical contact pressure is compared
with the computed Lagrange multiplier values associated with the control points for different meshes. Satisfactory results
are observed in all cases.
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=5\cdot10^{-4}$.}
\label{fig:3d cp 0.0005_a}
\label{fig:3d cp 0.0005_b}
\label{fig:3d cp 0.0005}
\end{figure}
As in the previous test, the coarse value of the reference mesh size $h_{ref}$ seems
to be the cause of the suboptimal convergence of the displacement shown in Figure \ref{fig:3d error disp 2_a}.
An optimal convergence is observed for the Lagrange multipliers error in the $L^2$-norm (see Figure \ref{fig:3d error disp 2_b}). However, due to the coarse value of the mesh size, we do not observe the expected threshold of the $L^2$ error for the Lagrange multipliers between the analytical and approximate solutions.
\begin{figure}
\caption{3D Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=5\cdot10^{-4}$.}
\label{fig:3d error disp 2_a}
\label{fig:3d error disp 2_b}
\label{fig:3d error disp 2}
\end{figure}
\subsection{Two-dimensional Hertz problem with large deformations} \label{subsec:large deformation}
Finally, in this section the two-dimensional frictionless Hertz problem is studied considering large deformations and strains.
For that purpose, a Neo-Hookean material constitutive law (a hyper-elastic law that accounts for finite strains) with Young's modulus $E = 1$ and Poisson's ratio $\nu = 0.3$,
has been used for the deformable body.
As in Section \ref{subsec:Hertz problems}, the performance
of the $N_2 / S_0$ method is analysed and the problem is modelled as an elastic quarter of disc with proper boundary conditions.
The considerations made about the mesh size in Section \ref{subsec:Hertz problems} are also valid for the present case.
The radius of the cylinder is $R=1$ and the applied pressure $P=0.1$ (ten times higher than the one considered in Section \ref{subsec:Hertz problems}).
In this large deformation framework the exact solution is unknown: the error of the computed displacement and Lagrange multipliers are studied
taking a refined numerical solution as reference.
Figure \ref{fig:mesh3} shows the final deformation of the elastic body and the computed contact pressure.
\begin{figure}
\caption{2D large deformation Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P = 0.1$.}
\label{fig:mesh3_a}
\label{fig:mesh3_b}
\label{fig:mesh3}
\end{figure}
In Figure \ref{fig:disp 0point1}, the displacement and multiplier errors are reported. It can be seen that the obtained displacement presents
optimal convergence both in $L^2$-norm and $H^1$-{semi-}norm; analogously, optimal convergence is also achieved
for the computed Lagrange multipliers.
\begin{figure}
\caption{2D large deformation Hertz contact problem with $N_2 / S_0$ method for an applied pressure $P=0.1$.
Absolute displacement errors in $L^2$-norm and $H^1$-semi-norm.}
\label{fig:disp 0point1}
\end{figure}
As a last example, the same large deformation Hertz problem is considered, but modifying its boundary conditions:
instead of pressure, a uniform downward displacement $u_y=-0.4$ is applied on the top surface of the cylinder.
The large deformation of the body and computed contact pressure are presented in Figure \ref{fig:mesh4}.
\begin{figure}
\caption{2D large deformation Hertz contact problem with $N_2 / S_0$ method with a uniform downward displacement $u_y = -0.4$.}
\label{fig:mesh4_a}
\label{fig:mesh4_b}
\label{fig:mesh4}
\end{figure}
As in the previous case (large deformation with applied pressure) optimal results are obtained for
the computed displacement and Lagrange multipliers.
\begin{figure}
\caption{2D large deformation Hertz contact problem with $N_2 / S_0$ method with a uniform downward displacement $u_y = -0.4$.
Absolute displacement errors in $L^2$-norm and $H^1$-semi-norm.}
\label{fig:disp 0point4}
\end{figure}\\
\section*{Conclusions}
\label{sec: conclusion}
\addcontentsline{toc}{section}{Conclusions}
In this work, we present an optimal \textit{a priori} error estimate for the frictionless unilateral contact problem between a deformable body and a rigid one.
From the numerical point of view, we observe the optimality of this method for both variables, the displacement and the Lagrange multiplier. In our experiments, we use NURBS of degree $2$ for the primal space and B-Splines of degree $0$ for the dual space. Thanks to this choice of approximation spaces, we observe stability of the Lagrange multipliers and a good approximation of the pressure in the two-dimensional case, while we observe sub-optimality in the three-dimensional case. The sub-optimality observed in the three-dimensional case may be due to the coarse reference mesh used. This NURBS-based contact formulation seems to provide a robust description of large deformations.
\section*{Acknowledgements}
\label{sec: acknowledgements}
\addcontentsline{toc}{section}{acknowledgements}
This work has been partially supported by Michelin under the contract A10-4087. A. Buffa acknowledges the support of the ERC Advanced grant no. 694515 and of the PRIN-MIUR project "Metodologie innovative nella modellistica differenziale numerica".
\addcontentsline{toc}{section}{Appendices}
\section*{Appendix 1}
\label{sec:appendix1}
In this appendix, we provide the ingredients needed to fully discretise the problem \eqref{eq:discrete_mixed_form} as well as its large deformation version, which we have used in Section \ref{sec:sec3}. First we introduce the contact status, an active-set strategy for the discrete problem, and then the fully discrete problem. For the purpose of this appendix, we adopt notation suitable for large deformations and denote by $g_n$ the distance between the rigid and the deformable body. In small deformation, it holds $g_n(u)=u \cdot n$.
\subsection*{Contact status}
\label{subsec:contact stat}
Let us first deal with the contact status. The active-set strategy is defined in \cite{hueber-Wohl-05,hueber-Stadler-Wohlmuth-08} and is updated at each Newton iteration. Due to the deformation, parts of the workpiece may come into contact or, conversely, may lose contact. This change of contact status changes the loading that is applied on the boundary of the mesh. This method is used to track the location of contact during the change in boundary conditions.\\
Let $K$ be a control point of the B-Spline space \eqref{def:space_mult}, let $(\Pi^h_\lambda \cdot )_K$ be the local projection defined in \eqref{def:l2proj_K} and let $P{\{}\lambda_K, (\Pi_\lambda^h g_n)_K{\}}$ be the operator defined componentwise by:
\begin{itemize}
\item $\lambda_K = 0$, \\[0.2cm]
\qquad (1) if $ (\Pi_\lambda^h g_n)_K \geq 0$, then $P{\{}\lambda_K, (\Pi_\lambda^h g_n)_K{\}}=0$,\\[0.2cm]
\qquad (2) if $ (\Pi_\lambda^h g_n)_K < 0$, then $P{\{}\lambda_K, (\Pi_\lambda^h g_n)_K{\}}= (\Pi_\lambda^h g_n)_K$,
\item $\lambda_K < 0$, \\[0.2cm]
\qquad (3) $P{\{}\lambda_K, (\Pi_\lambda^h g_n)_K{\}}= (\Pi_\lambda^h g_n)_K$.
\end{itemize}
The optimality conditions are then written as $P{\{}\lambda_K, (\Pi_\lambda^h g_n)_K{\}}=0$. Thus in case $(1)$ the constraint is inactive, whereas in cases $(2)$ and $(3)$ it is active.
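For concreteness, this component-wise classification can be sketched as follows (a minimal Python illustration of the status update only, assuming the multiplier values $\lambda_K$ and the local projections $(\Pi_\lambda^h g_n)_K$ are stored in arrays; it is not the implementation used for the numerical experiments).
\begin{verbatim}
import numpy as np

def contact_status(lam, proj_gap):
    """Component-wise evaluation of P{lambda_K, (Pi_lambda^h g_n)_K}.

    lam      : array of multiplier values lambda_K (assumed <= 0)
    proj_gap : array of local projections (Pi_lambda^h g_n)_K
    Returns the values P_K and a boolean mask of the active constraints.
    """
    inactive = (lam == 0.0) & (proj_gap >= 0.0)   # case (1)
    P = np.where(inactive, 0.0, proj_gap)         # cases (2) and (3)
    return P, ~inactive
\end{verbatim}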
\subsection*{Discrete problem}
\label{subsec: discrete problem}
\noindent The space $V^h$ is spanned by mapped NURBS of type $\hat{N}^p_{\boldsymbol{i}} (\boldsymbol{\zeta})\circ \varphi_{0,\Gamma_C}^{-1}$ for $\boldsymbol{i}$ belonging to a suitable set of indices. In order to simplify and reduce our notation, we denote by $A$ the running index of the control points associated with the surface $\displaystyle \Gamma_C$, $A=0 \ldots {\cal A}$, on this basis and set:
\begin{eqnarray}\label{def:space_disp}
\displaystyle V^h = Span \{ N_A(x), \quad A=0 \ldots {\cal A} \} \cap V.
\end{eqnarray}
Now, we express quantities on the contact interface $\Gamma_C$ as follows: $$\displaystyle \restrr{u}{\Gamma_C} = \sum_{A=1}^{{\cal A}} u_A N_A , \qquad \restrr{\delta u}{\Gamma_C} = \sum_{A=1}^{{\cal A}} \delta u_A N_A \qquad \textrm{and} \qquad x = \sum_{A=1}^{{\cal A}} x_A N_A,$$
where
$C_A$, $u_A$, $\delta u_A$ and $x_A = \varphi(X_A)$ are the related reference coordinate, displacement, displacement variation and current coordinate vectors. \\
By substituting the interpolations, the normal gap becomes:
$$g_n = \left[\sum_{A=1}^{{\cal A}} C_A N_A(\zeta) + \sum_{A=1}^{{\cal A}} u_A N_A(\zeta) \right] \cdot n .$$
In the previous equation, $\zeta$ denotes the parametric coordinates of a generic point on $\Gamma_C$. To simplify, in what follows we denote ${\cal D} g_n[\delta u] = \delta g_n$. The virtual variation follows as $$\delta g_n = \left[\sum_{A=1}^{{\cal A}} \delta u_A N_A(\zeta) \right] \cdot n . $$
In order to formulate the problem in matrix form, the following vectors are introduced:
$$ \delta \boldsymbol{u} = \begin{bmatrix} \delta u_1\\ \vdots \\ \delta u_{{\cal A}} \end{bmatrix}, \qquad \Delta \boldsymbol{u} = \begin{bmatrix} \Delta u_1\\ \vdots \\ \Delta u_{{\cal A}} \end{bmatrix}, \qquad \boldsymbol{N} = \begin{bmatrix} N_1(\zeta) n \\ \vdots \\ N_{{\cal A}}(\zeta) n \end{bmatrix}.$$
With the above notations, the virtual variation and the linearized increments can be written in matrix form as follows:
$$ \delta g_n = \delta \boldsymbol{u}^T \boldsymbol{N} , \qquad \Delta g_n = \boldsymbol{N}^T \Delta \boldsymbol{u} .$$
The contact contribution of the virtual work is expressed as follows:
\begin{eqnarray}
\begin{array}{l}
\displaystyle \delta W_c = \int_{\Gamma_C} \lambda \delta g_n \ \textrm{d}\Gamma + \int_{\Gamma_C} \delta \lambda g_n \ \textrm{d}\Gamma .
\end{array}\nonumber
\end{eqnarray}
The discretized contact contribution can be expressed as follows:
\begin{eqnarray}
\begin{array}{ll}
\displaystyle \delta W_c & \displaystyle = \int_{\Gamma_C} \sum_{K=1}^{{\cal K}} \lambda_{K} {B}_{K} \delta g_n \ \textrm{d}\Gamma + \int_{\Gamma_C} \sum_{K=1}^{{\cal K}} \delta \lambda_{K} {B}_{K} g_n \ \textrm{d}\Gamma ,\\[0.4cm]
& \displaystyle = \sum_{K} \lambda_{K} \int_{\Gamma_C} B_K \delta g_n \ \textrm{d}\Gamma + \delta \lambda_{K} \int_{\Gamma_C} B_K g_n \ \textrm{d}\Gamma ,\\[0.4cm]
& \displaystyle = \sum_{K} \Big{(} \lambda_{K} (\Pi_\lambda^h \delta g_{n})_K + \delta \lambda_{K} (\Pi_\lambda^h g_{n})_K \Big{)} K_K ,\\ \end{array}\nonumber
\end{eqnarray}
where $\displaystyle K_K = \int_{\Gamma_C} B_K \ \textrm{d}\Gamma$.
\noindent Indeed, we need to solve a variational inequality. Using the contact status, we distinguish between the case where the constraints at the control point $K$ are active, \textit{i.e.} when contact occurs, and the case where they are inactive, \textit{i.e.} when contact is lost. \\
\noindent Using the active-set strategy on the local gap $ (\Pi_\lambda^h g_{n})_K$ and $\lambda_{K}$, it holds:
\begin{eqnarray}
\begin{array}{l}
\displaystyle \delta W_c = \sum_{K,act} \Big{(} \lambda_{K} (\Pi_\lambda^h \delta g_{n})_K + \delta \lambda_{K} (\Pi_\lambda^h g_{n})_K \Big{)}K_K .
\end{array}\nonumber
\end{eqnarray}
\noindent At the discrete level we proceed as follows:
\begin{itemize}
\item We have $\displaystyle \sum_{K,inact} \delta \lambda_{K} (\Pi_\lambda^h g_{n})_K \leq 0, \ \forall \delta \lambda_{K}$, \textit{i.e.} $\displaystyle (\Pi_\lambda^h g_{n})_K \geq 0$ \textit{a.e.} on the inactive part.
\item On the active part, it holds $\displaystyle \sum_{K,act} \delta \lambda_{K} (\Pi_\lambda^h g_{n})_K = 0, \ \forall \delta \lambda_{K}$, \textit{i.e.} $\displaystyle (\Pi_\lambda^h g_{n})_K = 0$ \textit{a.e.}.
\item We also impose $\displaystyle \sum_{K,inact} \lambda_{K} (\Pi_\lambda^h \delta g_{n})_K = 0, \ \forall (\Pi_\lambda^h \delta g_{n})_K$, \textit{i.e.} $\lambda_{K}=0$ \textit{a.e.} on the inactive boundary.
\end{itemize}
\noindent For the further developments it is convenient to define the vectors of the virtual variations and linearizations for the Lagrange multipliers: $$ \delta \boldsymbol{\lambda} = \begin{bmatrix} \delta \lambda_{1}\\ \vdots \\ \delta \lambda_{{\cal K}} \end{bmatrix}, \qquad \Delta \boldsymbol{\lambda} = \begin{bmatrix} \Delta \lambda_{1}\\ \vdots \\ \Delta \lambda_{{\cal K}} \end{bmatrix}, \qquad \boldsymbol{N}_{\lambda,g} = \begin{bmatrix} (\Pi_\lambda^h g_{n})_{1,act} K_{1,act} \\ \vdots \\ (\Pi_\lambda^h g_{n})_{{\cal K},act} K_{{\cal K}, act} \end{bmatrix}, \qquad \boldsymbol{B}_{\lambda} = \begin{bmatrix} B_{1}(\zeta) \\ \vdots \\ B_{{\cal K}}(\zeta) \end{bmatrix} .$$
\noindent In the matrix form, it holds: $$\displaystyle \delta W_c = \delta \boldsymbol{u}^T \int_{\Gamma_C} \Big{(} \sum_{K,act} B_K \lambda_{K} \Big{)} \boldsymbol{N} \ \textrm{d}\Gamma + \delta \boldsymbol{\lambda}^T \boldsymbol{N}_{\lambda,g} , $$
and the residual for the Newton-Raphson iterative scheme is obtained as: $$\displaystyle R = \begin{bmatrix} R_u \\ R_\lambda \end{bmatrix}= \begin{bmatrix} \int_{\Gamma_C} \left( \sum_{K,act} B_K \lambda_{K} \right) \boldsymbol{N} \ \textrm{d}\Gamma \\ \boldsymbol{N}_{\lambda,g} \end{bmatrix} .$$
\noindent The linearization yields:
\begin{eqnarray}
\begin{array}{l}
\displaystyle \Delta\delta W_c = \int_{\Gamma_C} \Delta \lambda \delta g_n \ \textrm{d}\Gamma + \int_{\Gamma_C} \delta \lambda \Delta g_n \ \textrm{d}\Gamma .
\end{array}\nonumber
\end{eqnarray}
Using the active-set strategy, the discretised contact contribution can be expressed as follows:
\begin{eqnarray}
\begin{array}{ll}
\displaystyle \Delta\delta W_c &\displaystyle= \sum_{K,act} \sum_{A} \int_{\Gamma_C} \Delta \lambda_{K} B_K N_A \delta u_A \cdot n \ \textrm{d}\Gamma + \int_{\Gamma_C} \delta \lambda_{K} B_K N_A \Delta u_A \cdot n \ \textrm{d}\Gamma ,\\
&\displaystyle= \delta \boldsymbol{u}^T \int_{\Gamma_C, act} \boldsymbol{N} \boldsymbol{B}_{\lambda}^T \ \textrm{d}\Gamma \Delta \boldsymbol{\lambda} + \delta \boldsymbol{\lambda}^T \int_{\Gamma_C, act} \boldsymbol{B}_{\lambda} \boldsymbol{N}^T \ \textrm{d}\Gamma \Delta \boldsymbol{u}.
\end{array}\nonumber
\end{eqnarray}
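Together with the linearization of the bulk virtual work, whose tangent stiffness we denote by $\boldsymbol{K}_{uu}$ (a notation introduced only for this illustration), each Newton iteration therefore amounts to solving a saddle-point system of the form
\begin{eqnarray}
\begin{bmatrix} \boldsymbol{K}_{uu} & \boldsymbol{C}^T \\ \boldsymbol{C} & 0 \end{bmatrix}
\begin{bmatrix} \Delta \boldsymbol{u} \\ \Delta \boldsymbol{\lambda} \end{bmatrix}
= - \begin{bmatrix} R_u \\ R_\lambda \end{bmatrix},
\qquad \boldsymbol{C} = \int_{\Gamma_C, act} \boldsymbol{B}_{\lambda} \boldsymbol{N}^T \ \textrm{d}\Gamma , \nonumber
\end{eqnarray}
where $R_u$ is understood to collect the bulk residual contributions in addition to the contact term above, and the contact status is updated after each iteration.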
\addcontentsline{toc}{section}{Bibliography}
\end{document}
\begin{document}
\title{Dirac and Lagrange algebraic constraints in nonlinear port-Hamiltonian systems
}
\titlerunning{Algebraic constraints in nonlinear port-Hamiltonian systems}
\author{Arjan van der Schaft \and \\Bernhard Maschke
}
\institute{A.J. van der Schaft \at
Bernoulli institute for Mathematics, CS and AI, University of Groningen, the Netherlands \\
Tel.: +31-50-3633731\\
Fax: +31-50-3633800\\
\email{a.j.van.der.schaft@rug.nl}
\and
B. Maschke \at
Universit\'e Claude Bernard Lyon-1, France
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
After recalling standard nonlinear port-Hamiltonian systems and their algebraic constraint equations, called here Dirac algebraic constraints, an extended class of port-Hamiltonian systems is introduced. This is based on replacing the Hamiltonian function by a general Lagrangian submanifold of the cotangent bundle of the state space manifold, motivated by developments in \cite{barbero} and extending the linear theory as developed in \cite{DAE}, \cite{beattie}. The resulting new type of algebraic constraint equations is called Lagrange algebraic constraints. It is shown how Dirac algebraic constraints can be converted into Lagrange algebraic constraints by the introduction of extra state variables, and, conversely, how Lagrange algebraic constraints can be converted into Dirac algebraic constraints by the use of Morse families.
\keywords{Differential-algebraic equations \and Nonlinear control \and Hamiltonian systems \and Dirac structures \and Lagrangian submanifolds}
\subclass{MSC 34A09 \and MSC 65L80 \and MSC 53D12 \and MSC 70B45 \and MSC 93C10}
\end{abstract}
\section{Introduction}
\label{intro}
When modeling dynamical systems, the appearance of algebraic constraint equations next to differential equations is ubiquitous. This is especially clear in network modeling of physical systems, where the interconnections between different dynamical components of the overall system almost inevitably lead to algebraic constraints. On the other hand, the analysis and simulation of such systems of differential-algebraic equations (DAEs) poses delicate problems; especially in the nonlinear case, see e.g. the already classical treatise \cite{kunkel}, and the references quoted in there. The situation is even more challenging for control of DAE systems, and, up to now, most of the control literature is devoted to systems {\it without} algebraic constraints.
On the other hand, the DAE systems as resulting from the modeling of physical systems often have special properties, which makes their analysis, simulation and control potentially more feasible. A prominent framework for network modeling of multiphysics systems is {\it port-based modeling}, and the resulting theory of port-Hamiltonian systems; see e.g. \cite{maschkevdsbordeaux}, \cite{gsbm}, \cite{vanderschaftmaschkearchive}, \cite{dalsmo}, \cite{passivitybook}, \cite{geoplexbook}, \cite{NOW}. In \cite{phDAE}, see also \cite{passivitybook}, initial investigations were done on the algebraic constraint equations appearing in standard port-Hamiltonian systems; linear or nonlinear. The two main conclusions are that the algebraic constraints in such systems have index one once the Hamiltonian is non-degenerate in the state variables, and furthermore that generally elimination of algebraic constraints leads to systems that are again in port-Hamiltonian form.
Very recently, see especially \cite{beattie}, {\it another} type of algebraic constraint equations in linear physical system models was studied. In \cite{DAE}, \cite{beattie} these were identified as arising from generalized port-based modeling with a state space that has higher dimension than the minimal number of energy variables, corresponding to {\it implicit} energy storage relations which can be formulated as Lagrangian subspaces. For linear time-varying systems this formulation has led to various interesting results on their index, regularization and numerical properties \cite{mmw}; see also \cite{bmd} for related developments.
The formulation of implicit energy storage relations is very similar to an independent line of research initiated in \cite{barbero}, where the Hamiltonian in nonlinear Hamiltonian dynamics is replaced by a general Lagrangian submanifold (motivated, among others, by optimal control).
The precise relation between the algebraic constraints as arising in linear standard port-Hamiltonian systems (called Dirac algebraic constraints) and those in linear generalized port-Hamiltonian systems with implicit storage (called Lagrange algebraic constraints) was studied in \cite{beattie}, \cite{DAE}. In particular, in \cite{DAE} it was shown how in this linear case Dirac algebraic constraints can be converted into Lagrange algebraic constraints, and conversely; in both cases by extending the state space (e.g., introduction of Lagrange multipliers).
In the present paper we will continue on \cite{DAE}, \cite{beattie}, by extending the theory and constructions to the nonlinear case; inspired by \cite{barbero}.
\section{The standard definition of port-Hamiltonian systems and Dirac algebraic constraints}
\label{sec:1}
The standard definition of a port-Hamiltonian system, see e.g. \cite{vanderschaftmaschkearchive}, \cite{dalsmo}, \cite{phDAE}, \cite{passivitybook}, \cite{NOW} for more details and further ramifications, is depicted in Fig.~\ref{fig:pHsystems}.
\begin{figure}
\caption{From port-based modeling to port-Hamiltonian system.}
\label{fig:pHsystems}
\end{figure}
The system is modeled by identifying energy-storing elements $\mathcal{S}$ and energy-dissipating elements $\mathcal{R}$, which are linked to a central energy-routing structure, geometrically to be defined as a Dirac structure. This linking takes place via pairs $(f,e)$ of equally dimensioned vectors $f$ and $e$ (commonly called {\it flow} and {\it effort} variables). A pair $(f,e)$ of vectors of flow and effort variables defines a {\it port}, and the total set of variables $f,e$ is also called the set of {\it port variables}.
Fig.~\ref{fig:pHsystems} shows three ports: the port $(f_S, e_S)$ linking to energy storage, the port $(f_R, e_R)$ corresponding to energy dissipation, and the external port $(f_P,e_P)$, by which the system interacts with its environment (including controller action). The scalar quantities $e^T_Sf_S$, $e^T_R f_R$, and
$e^T_P f_P$ denote the instantaneous powers transmitted through the links.
Any physical system that is represented (modeled) in this way defines a port-Hamiltonian system. Furthermore, experience has shown that even for very complex physical systems port-based modeling leads to satisfactory and insightful models; certainly for control purposes; see e.g. \cite{passivitybook}, \cite{gsbm}, \cite{NOW}, and the references quoted in there.
The definition of a {\it constant} Dirac structure starts from a finite-dimensional {\it linear space} of \textit{flows} $\mathcal{F}$ and the dual linear space of efforts $\mathcal{E}:= \mathcal{F}^*$.
The \textit{power} $P$ on the total space $\mathcal{F} \times \mathcal{E}$ of port variables is defined by the duality product
$P =\;< e \mid f >$. In the common case $\mathcal{F} \simeq \mathcal{E} \simeq \mathbb{R}^k$ this simply amounts to $P= e^Tf$.
\begin{definition}\cite{Cour:90}, \cite{Dorf:93}
\label{def:schaft_dn2.1}
Consider a finite-dimensional linear space $\mathcal{F}$ with $\mathcal{E} = \mathcal{F}^*$. A subspace $\mathcal{D} \subset \mathcal{F} \times \mathcal{E}$ is a \textit{Dirac structure} if
\begin{enumerate}
\item[(i)] $< e \mid f > = 0$, for all $(f,e) \in \mathcal{D}$,
\item[(ii)] $\dim \mathcal{D} = \dim \mathcal{F}$.
\end{enumerate}
\end{definition}
The {\it maximal dimension} of any subspace $\mathcal{D} \subset \mathcal{F} \times \mathcal{E}$
satisfying the power-conservation property $(i)$ can be easily shown to be equal to $\dim \mathcal{F}$. Thus a constant Dirac structure is a {\it maximal} power-conserving subspace.
In the nonlinear case, e.g. mechanical systems, we need to extend the notion of a constant Dirac structure to a {\it Dirac structure on a manifold}\footnote{All notions will be assumed to be smooth; i.e., $C^{\infty}$.} $\mathcal{M}$.
\begin{definition}\cite{Cour:90}, \cite{Dorf:93}
A Dirac structure on a finite-dimensional manifold $\mathcal{M}$ is defined as a subbundle $\mathcal{D} \subset T\mathcal{M} \oplus T^*\mathcal{M}$ such that for every $x \in \mathcal{M}$ the subspace $\mathcal{D}(x) \subset T_x\mathcal{M} \times T^*_x\mathcal{M}$ is a constant Dirac structure as before.
\end{definition}
We note that in the standard definition \cite{Cour:90}, \cite{Dorf:93} of a Dirac structure on a manifold an additional {\it integrability} condition is imposed; generalizing the Jacobi identity for Poisson structures or closedness of symplectic forms. However, for many purposes this integrability condition is not needed, while on the other hand there are interesting port-Hamiltonian system classes (like mechanical systems with nonholonomic kinematic constraints) that do {\it not} satisfy this integrability condition \cite{passivitybook}, \cite{dalsmo}, \cite{NOW}.
The {\it dynamics} of port-Hamiltonian systems derives from the time-integration taking place in the energy storage. Let $f_S,e_S$ be the vector of flow and effort variables of the energy storage port. Time-integration of the flow vector $f_S$ leads to the equally dimensioned vector of state variables $x \in \mathcal{X}$ satisfying $\dot{x}=-f_S$. Energy storage in a standard port-Hamiltonian system is then expressed by a Hamiltonian (total energy)
\begin{equation}
H: \mathcal{X} \to \mathbb{R},
\end{equation}
defining the vector $e_S$ as $e_S = \nabla H(x)$, where $\nabla H(x)$ is the column vector of partial derivatives of $H$. Obviously, this implies
\begin{equation}
\frac{d}{dt}H(x(t))= \left(\nabla H(x(t))\right)^T \dot{x}(t)= - e_S^T(t)f_S(t),
\end{equation}
i.e., the increase of stored energy is equal to the power delivered to the energy-storing elements through the left link in Figure \ref{fig:pHsystems}.
Furthermore, energy dissipation is any relation $\mathcal{R}$ between the flow and effort variables $f_R,e_R$ of the energy-dissipating port having the property that
\begin{equation}
\label{R}
e_R^Tf_R \leq 0, \quad (f_R,e_R) \in \mathcal{R}
\end{equation}
Consider now a Dirac structure $\mathcal{D}$ on the manifold $\mathcal{X} \times \mathcal{F}_R \times \mathcal{F}_P$, which is independent\footnote{This can be formalized as a {\it symmetry} of the Dirac structure: the Dirac structure $\mathcal{D}$ is {\it invariant} with respect to arbitrary transformations on $\mathcal{F}_R \times \mathcal{F}_P$; see \cite{vdssym}.} of the point in $\mathcal{F}_R \times \mathcal{F}_P$; i.e., for every $x \in \mathcal{X}$
\begin{equation}
\mathcal{D}(x) \subset T_x \mathcal{X} \times \mathcal{F}_R \times \mathcal{F}_P \times T_x^*\mathcal{X} \times \mathcal{E}_R \times \mathcal{E}_P
\end{equation}
is a Dirac structure as in Definition \ref{def:schaft_dn2.1}.
Then the triple $(\mathcal{D},H,\mathcal{R})$, with the energy storage $H: \mathcal{X} \to \mathbb{R}$, and energy dissipation $\mathcal{R} \subset \mathcal{F}_R \times \mathcal{E}_R$ defines a {\it port-Hamiltonian system} (sometimes abbreviated as {\it pH system}), geometrically defined as the implicit dynamics
\begin{equation}
\label{id}
\begin{array}{c}
(-\dot{x}(t),f_R(t),f_P(t), \nabla H(x(t)), e_R(t), e_P(t)) \in \mathcal{D}(x(t))\\[2mm]
(f_R(t),e_R(t)) \in \mathcal{R}, \quad t \in \mathbb{R}
\end{array}
\end{equation}
in the state variables $x$, with external port-variables $f_P,e_P$.
A specific class of port-Hamiltonian systems is obtained by considering Dirac structures which are the graph of a skew-symmetric bundle map
\begin{equation}
\begin{bmatrix} - J(x) & -G_R(x) & -G(x) \\ G_R^T(x) & 0 & 0 \\G^T(x) & 0 & 0 \end{bmatrix}, \quad J(x)=-J^T(x), \quad x \in \mathcal{X} ,
\end{equation}
from $e_S,e_R,e_P$ to $f_S,f_R,f_P$, and a linear energy dissipation relation $e_R=-\bar{R}f_R$ for some matrix $\bar{R}=\bar{R}^T\geq 0$. This yields {\it input-state-output} port-Hamiltonian systems
\begin{equation}
\label{iso}
\begin{array}{rcl}
\dot{x} & = & \left[ J(x) - R(x) \right] \nabla H(x) + G(x)u \\[2mm]
y & = & G^T(x) \nabla H(x),
\end{array}
\end{equation}
where $R(x)=G_R(x)\bar{R}\,G_R^T(x)$, and $u=e_P$ is the {\it input} and $y=f_P$ the {\it output} vector. This is often taken as the starting point for control purposes \cite{passivitybook}.
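For instance (a standard textbook illustration, not taken from the cited references), a mass--spring--damper system with state $x=(q,p)$ (position and momentum), Hamiltonian $H(q,p)=\frac{p^2}{2m}+\frac{k}{2}q^2$, and external force $u$ fits the form \eqref{iso} with
\begin{equation}
J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad R = \begin{bmatrix} 0 & 0 \\ 0 & d \end{bmatrix}, \quad G = \begin{bmatrix} 0 \\ 1 \end{bmatrix},
\end{equation}
where $m,k,d>0$ denote the mass, stiffness and damping coefficient; this yields $\dot{q}=\frac{p}{m}$, $\dot{p}=-kq-d\frac{p}{m}+u$ and the output $y=\frac{p}{m}$, the velocity.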
On the other hand, for a general Dirac structure {\it algebraic constraints} in the state variables $x$ may easily appear; leading to port-Hamiltonian systems which are {\it not} of the form \eqref{iso}. In fact, if the projection $\rho^*(x)(\mathcal{D}(x))$ of $\mathcal{D}(x)$ to $T^*_x\mathcal{X}$ under the canonical projection $\rho^*(x): T_x \mathcal{X} \times T_x^*\mathcal{X} \to T_x^*\mathcal{X}$ is a strict subspace of $T^*_x \mathcal{X}$, then necessarily $x$ should be such that $\nabla H(x) \in \rho^*(x)(\mathcal{D}(x))$; leading to algebraic constraints \cite{phDAE}, \cite{dalsmo}, \cite{passivitybook}. In the sequel these algebraic constraints, stemming directly from port-based modeling, will be referred to as {\it Dirac algebraic constraints}.
\section{Port-Hamiltonian systems with implicit energy storage and Lagrange algebraic constraints}
An interesting extension of the standard nonlinear port-Hamiltonian systems as discussed in the previous section is obtained as follows.
For any Hamiltonian $H: \mathcal{X} \to \mathbb{R}$ the submanifold
\begin{equation}
\mathrm{graph\,} \nabla H := \{(x,\nabla H(x) ) \mid x \in \mathcal{X} \}
\end{equation}
is a {\it Lagrangian submanifold} \cite{arnold} of the cotangent bundle $T^*\mathcal{X}$. Thus, instead of considering energy storage defined by a Hamiltonian $H$ we may as well consider a general {\it implicit} energy storage defined by a general Lagrangian submanifold $\mathcal{L}$. In fact \cite{arnold} a general Lagrangian submanifold $\mathcal{L} \subset T^*\mathcal{X}$ will be of the form $\mathrm{graph\,} \nabla H$ for a certain $H$ if and only if the {\it projection} of $\mathcal{L} \subset T^*\mathcal{X}$ to $\mathcal{X}$ under the canonical projection $\pi: T^*\mathcal{X} \to \mathcal{X}$ is equal to the whole of $\mathcal{X}$. On the other hand, if and only if the projection $\pi(\mathcal{L})$ of $\mathcal{L} \subset T^*\mathcal{X}$ to $\mathcal{X}$ is {\it not} equal to the whole of $\mathcal{X}$, then a new class of algebraic constraints arises, namely $x \in \pi(\mathcal{L})$. These algebraic constraints will be called {\it Lagrange algebraic constraints}; extending the terminology in the linear case in \cite{DAE}.
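As a simple illustration of such constraints (an example of our own, not taken from the cited references), take $\mathcal{X}=\mathbb{R}^2$ and
\begin{equation}
\mathcal{L} = \{(x_1,x_2,e_1,e_2) \in T^*\mathbb{R}^2 \mid e_1 = x_1, \; x_2=0 \}.
\end{equation}
This is a two-dimensional submanifold on which the canonical symplectic form $dx_1 \wedge de_1 + dx_2 \wedge de_2$ vanishes, hence a Lagrangian submanifold; however, $\pi(\mathcal{L})=\{x \in \mathbb{R}^2 \mid x_2=0\}$ is a strict subset of $\mathcal{X}$, so the Lagrange algebraic constraint is $x_2=0$, while $e_2$ remains free and plays the role of a multiplier.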
The resulting triple $(\mathcal{D},\mathcal{L},\mathcal{R})$ will be called a {\it generalized} port-Hamiltonian system, defining the dynamics (generalizing \eqref{id})
\begin{equation}
\begin{array}{c}
(-\dot{x}(t),f_R(t),f_P(t), e_S(t), e_R(t), e_P(t)) \in \mathcal{D}(x(t))\\[2mm]
(f_R(t),e_R(t)) \in \mathcal{R}, \quad (x(t),e_S(t)) \in \mathcal{L}, \quad t \in \mathbb{R}
\end{array}
\end{equation}
in the state variables $x$, with external port-variables $f_P,e_P$.
The basic idea of this definition (without the inclusion of an energy dissipation relation and external port) can be already found in \cite{barbero}. The definition of the generalized port-Hamiltonian system $(\mathcal{D},\mathcal{L},\mathcal{R})$ extends the definition in the {\it linear} case as recently given in \cite{DAE}; partly motivated by \cite{beattie}.
\subsection{Properties of the Legendre transform}
Before going on with a discussion of the properties of generalized port-Hamiltonian systems $(\mathcal{D},\mathcal{L},\mathcal{R})$ and their Lagrange algebraic constraints, let us recall the basic properties of the Legendre transform.
Consider a smooth function $P: \mathcal{X} \to \mathbb{R}$, with column vector of partial derivatives denoted by $\nabla P(x)$.
The {\it Legendre transform} of $P$ is defined in local coordinates $x$ for $\mathcal{X}$ as the expression
\begin{equation}
\label{leg}
P^*(e)= e^Tx - P(x), \quad e = \nabla P(x),
\end{equation}
where $e$ are corresponding coordinates for the cotangent space. In the expression \eqref{leg} it is meant that $x$ is expressed as a function of $e$ through the equation $e = \nabla P(x)$; thus obtaining a function $P^*$ of $e$ only. This requires that the map $x \mapsto \nabla P(x)$ is injective\footnote{However, more generally, i.e., without this assumption, we can still define $P^*$ as the restriction of the function $e^Tx - P(x)$ defined on $T^*\mathcal{X}$, but {\it restricted} to the submanifold $e = \nabla P(x)$. On this submanifold obviously the partial derivatives of $e^Tx - P(x)$ with respect to $x$ are zero, and thus the function is determined as a function of $e$ only.}.
A well-known property of the Legendre transform is the fact that the Legendre transform of $P^*$ is equal to $P$; i.e., $P^{**}=P$. Furthermore, the inverse of the map $ x \mapsto e = \nabla P(x)$ is given as $ e \mapsto x = \nabla P^*(e)$, that is
\begin{equation}
\nabla P^*(\nabla P(x))=x, \quad \nabla P(\nabla P^*(e))=e
\end{equation}
Given the Legendre transform $P^*$ of $P$ one may also define the new function
\begin{equation}
\widetilde{P}(x) := P^*(\nabla P(x))
\end{equation}
In the case of a {\it quadratic} function $P(x)=x^TMx$ for some invertible symmetric matrix $M$ it is straightforward to check that $\widetilde{P}=P$; however, for a general $P$ this need not be the case.
Interestingly, using $P^*(e)= e^Tx - P(x)$, $x = \nabla P^*(e)$, and the identity $\nabla P(\nabla P^*(e))=e$, the function $\widetilde{P}$ can be also expressed as
\begin{equation}
\widetilde{P}(x) = (\nabla P(x))^T \nabla P^*(\nabla P(x)) - P(\nabla P^*(\nabla P(x))) = x^T \nabla P(x) - P(x)
\end{equation}
Furthermore, we note the following remarkable property
\begin{equation}
\nabla \widetilde{P}(x) = \nabla^2 P(x) \nabla P^*(\nabla P(x)) = \nabla^2 P(x)x,
\end{equation}
with $\nabla^2 P(x)$ denoting the Hessian matrix of $P$.
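As a simple one-dimensional illustration of these identities (our own example), take $P(x)=e^x$. Then $e=\nabla P(x)=e^x$ gives $x=\ln e$ and $P^*(e)=e\ln e - e$, so that
\begin{equation}
\widetilde{P}(x)= P^*(e^x) = xe^x - e^x = x\nabla P(x) - P(x) \neq P(x), \qquad \nabla \widetilde{P}(x) = xe^x = \nabla^2 P(x)\, x,
\end{equation}
in accordance with the formulas above, whereas for a quadratic $P(x)=x^TMx$ the same computation indeed returns $\widetilde{P}=P$.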
Finally, all of this theory can be extended to {\it partial} Legendre transformations. Consider a partitioning $I \cup J= \{1,\cdots,n\}$, and the corresponding splitting $x=(x_I,x_J)$, $e=(e_I,e_J)$. The partial Legendre transform of $P(x_I,x_J)$ with respect to $x_J$ is defined as
\begin{equation}
P^*(x_I,e_J) = e_J^Tx_J - P(x_I,x_J), \quad e_J= \frac{\partial P}{\partial x_J}(x_I,x_J),
\end{equation}
where $x_J$ is solved from $e_J= \frac{\partial P}{\partial x_J}(x_I,x_J)$.
\subsection{Explicit representation of implicit storage relations}
In this subsection we will show how generalized port-Hamiltonian systems $(\mathcal{D},\mathcal{L},\mathcal{R})$ with implicit energy storage relations $\mathcal{L} \subset T^*\mathcal{X}$ can be explicitly represented as follows. This extends the observations made in the linear case \cite{beattie}, \cite{DAE} in a non-trivial way.
The starting point is the fact that any Lagrangian submanifold $\mathcal{L} \subset T^*\mathcal{X}$, with $\dim \mathcal{X} =n$, can be locally written as \cite{arnold}
\begin{equation}
\label{L}
\mathcal{L} = \{(x,e_S)=(x_I,x_J,e_I,e_J) \in T^*\mathcal{X} \mid e_I = \frac{\partial V}{\partial x_I}, x_J=-\frac{\partial V}{\partial e_J} \} ,
\end{equation}
for some splitting $\{1, \cdots,n\}=I \cup J$ of the index set, and a function $V(x_I,e_J)$, called the {\it generating function} of the Lagrangian submanifold $\mathcal{L}$. In particular, $x_I,e_J$ define local coordinates for $\mathcal{L}$. Now define the Hamiltonian $\widetilde{H} (x_I,e_J)$ as
\begin{equation}
\widetilde{H} (x_I,e_J):= V(x_I,e_J) - e^T_J \frac{\partial V}{\partial e_J}(x_I,e_J)
\end{equation}
By Equation \ref{L} the coordinate expressions of $f_S=-\dot{x},e_S$ (in terms of $x_I,e_J$) are given as
\begin{equation}
-f_S= \begin{bmatrix} I & 0 \\ -\frac{\partial^2 V}{\partial e_J \partial x_I} & -\frac{\partial^2 V}{\partial e^2_J} \end{bmatrix}
\begin{bmatrix} \dot{x}_I \\ \dot{e}_J \end{bmatrix}, \quad e_S = \begin{bmatrix} \frac{\partial V}{\partial x_I} \\ e_J \end{bmatrix}
\end{equation}
This yields
\begin{equation}
\frac{d}{dt} \widetilde{H}(x_I,e_J)= -e_S^T(t)f_S(t)
\end{equation}
Consider furthermore any modulated Dirac structure $\mathcal{D}(x) \subset T_x\mathcal{X} \times T_x^*\mathcal{X} \times \mathcal{F}_R \times \mathcal{E}_R \times \mathcal{F}_P \times \mathcal{E}_P$. Since by the power-conservation property of Dirac structures $-e_S^Tf_S = e_R^Tf_R + e_P^Tf_P$ it thus follows that
\begin{equation}
\frac{d}{dt} \widetilde{H}(x_I,e_J)= e_R^T(t)f_R(t) + e_P^T(t)f_P(t) \leq e_P^T(t)f_P(t)
\end{equation}
Hence $\widetilde{H} (x_I,e_J)$ serves as the expression of an {\it energy function} (however, {\it not} in the original state variables $x$, but instead in the mixed set of coordinates $x_I,e_J$).
Note that if the mapping $x_J=-\frac{\partial V}{\partial e_J}$ from $e_J$ to $x_J$ is {\it invertible}, so that the Lagrangian submanifold can also be parametrized by $x=(x_I,x_J)$ and thus $\mathcal{L}=\{(x,\nabla H(x)) \mid x \in \mathcal{X} \}$ for a certain $H$, then $\widetilde{H}(x_I,x_J)$ is actually minus the partial Legendre transform of $V(x_I,e_J)$ with respect to $e_J$, and equals $H$.
As another special case, let us take $I$ to be empty and thus $e_J=e_S$. Let $V^*(x)$ be the Legendre transform of $V(e_S)$. Then it follows that
\begin{equation}
\widetilde{H}(e_S) = V(e_S) - e_S^T \nabla V(e_S) = - \widetilde{V}(e_S)
\end{equation}
\section{Transformation of Dirac algebraic constraints into Lagrange algebraic constraints, and back}
In the previous sections it was discussed how in standard port-Hamiltonian systems $(\mathcal{D},H,\mathcal{R})$ {\it Dirac algebraic constraints} may arise (whenever the projection of the Dirac structure onto the cotangent space of the state space is not full), while generalized port-Hamiltonian systems $(\mathcal{D},\mathcal{L},\mathcal{R})$ may have (additional) {\it Lagrange algebraic constraints} (due to the projection of $\mathcal{L}$ on the state space manifold $\mathcal{X}$ not being surjective).
In this section we will show how one can {\it convert} Dirac algebraic constraints (as favored by port-based modeling) into Lagrange algebraic constraints (sometimes having advantages from a numerical simulation point of view); by adding {\it extra state variables}. This extends the constructions detailed in \cite{DAE} from the linear to the nonlinear case.
\subsection{From Dirac algebraic constraints to Lagrange algebraic constraints}
The first observation \cite{dalsmo} to be made is that a general Dirac structure $\mathcal{D}$ can be written as the graph of a skew-symmetric map on an extended state space as follows. In fact, suppose $\rho^*(x)(\mathcal{D}(x)) \subset T_x^*\mathcal{X}$ is $(n-k)$-dimensional. Define $\Lambda:= \mathbb{R}^k$. Then there exists a full-rank $n \times k$ matrix $B(x)$ and a skew-symmetric $n \times n$ matrix $J(x)$ such that
\begin{equation}
\label{Diracconstraint}
\mathcal{D}(x) \! = \! \{(f,e) \! \in T_x\mathcal{X} \times T_x^*\mathcal{X} \mid \exists \lambda^* \in \Lambda^* \mbox{ s.t. }-f = J(x)e + B(x)\lambda^*, 0 = B^T(x)e \}
\end{equation}
Conversely, any such equations for a skew-symmetric map $J(x): T_x^*\mathcal{X} \to T_x\mathcal{X}$ define a Dirac structure. Now, let the energy-storage relation of the port-Hamiltonian system be given in a standard way; i.e., by a Hamiltonian $H: \mathcal{X} \to \mathbb{R}$. Then with respect to the {\it extended} state space $\mathcal{X}_e:= \mathcal{X} \times \Lambda$ we may define the {\it implicit} energy storage relation given by the Lagrangian submanifold (of the same type as in \eqref{L})
\begin{equation}
\mathcal{L}_e := \{ (x,\lambda, e, \lambda^*) \in T^* \mathcal{X}_e \mid e= \nabla H(x), \lambda = 0 \},
\end{equation}
corresponding to the Lagrange algebraic constraint $0=\lambda (\,= B^T(x)\nabla H(x))$. Hence the Dirac algebraic constraint $0=B^T(x) \nabla H(x)$ has been transformed into the Lagrange algebraic constraint $\lambda=0$ on the extended state space $\mathcal{X}_e$. The generating function of $\mathcal{L}_e$ is $H$, which is independent of $\lambda^*$, and therefore $\widetilde{H}(x, \lambda^*):= H(x) - \lambda^{*T}\frac{\partial H}{\partial \lambda^*}(x)=H(x)$.
\subsection{From Lagrange algebraic constraints to Dirac algebraic constraints}
The conversion of Lagrange algebraic constraints into Dirac algebraic constraints is also based on an extension of the state space. It is based on the fact, see e.g. \cite{barbero} and the references quoted in there, that any Lagrangian submanifold $\mathcal{L} \subset T^*\mathcal{X}$ can be locally represented by a parametrized family of generating functions, called a {\it Morse family}.
To be precise, given a Lagrangian submanifold $\mathcal{L} \subset T^*\mathcal{X}$, a point $P \in \mathcal{L}$ and projection $\pi(P) \in \mathcal{X}$, there exists a neighborhood $V$ of $\pi(P)$, a natural number $k$, a neighborhood $W$ of $0 \in \mathbb{R}^k$, together with a smooth function $F: V \times W \to \mathbb{R}$, such that the rank of $\frac{\partial F}{\partial \lambda}$, with $\lambda \in \mathbb{R}^k$, is equal to $k$ on $\left(\frac{\partial F}{\partial \lambda}\right)^{-1}(0)$, and
\begin{equation}
\{ \left(x, \frac{\partial F}{\partial x}(x, \lambda) \right) \mid \frac{\partial F}{\partial \lambda}(x, \lambda)=0 \}
\end{equation}
is a neighborhood of the point $P$ in $\mathcal{L}$. The function $F(x,\lambda)$, seen as a function of $x$, parametrized by $\lambda$, is called a Morse family for the Lagrangian submanifold $\mathcal{L}$.
Furthermore, given any (modulated) Dirac structure $\mathcal{D}(x) \subset T_x \mathcal{X} \times \mathcal{F}_R \times \mathcal{F}_P \times T_x^*\mathcal{X} \times \mathcal{E}_R$ as before, one may take the direct product with the (trivial) Dirac structure $\{(f_{\lambda},e_{\lambda}) \mid e_{\lambda}=0 \}$, so as to obtain an extended Dirac structure $\mathcal{D}_e$. This defines an {\it extended} port-Hamiltonian system $(\mathcal{D}_e,F, \mathcal{R})$ with {\it explicit} energy function $F(x,\lambda)$, and thus without Lagrange algebraic constraints.
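As a simple illustration (our own example), consider the Lagrangian submanifold $\mathcal{L}=\{(x,e) \in T^*\mathbb{R} \mid x=0\}$, which is not the graph of any $\nabla H$ and carries the Lagrange algebraic constraint $x=0$. It is generated by the Morse family $F(x,\lambda)=\lambda x$ with $k=1$: indeed $\frac{\partial F}{\partial \lambda}=x$ has rank one, and on $\left(\frac{\partial F}{\partial \lambda}\right)^{-1}(0)=\{x=0\}$ the points $\left(x, \frac{\partial F}{\partial x}(x, \lambda)\right)=(0,\lambda)$ sweep out $\mathcal{L}$. The corresponding extended port-Hamiltonian system has the explicit energy function $F(x,\lambda)=\lambda x$ on the extended state space with coordinates $(x,\lambda)$, and the trivial Dirac structure on the $\lambda$-port imposes the Dirac algebraic constraint $e_\lambda=\frac{\partial F}{\partial \lambda}(x,\lambda)=x=0$.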
\begin{example}[Optimal control \cite{barbero}]\label{singular} Consider the optimal control problem of minimizing a cost functional $\int L(q,u) dt$ for the control system $\dot{q}=f(q,u)$, with $q \in \mathbb{R}^n, u \in \mathbb{R}^m$. Define the optimal control Hamiltonian
\begin{equation}
H(q,p,u)= p^T f(q,u) + L(q,u) ,
\end{equation}
with $p \in \mathbb{R}^n$ the co-state vector. Application of Pontryagin's Maximum principle leads to the consideration of the standard port-Hamiltonian system (without inputs and outputs) on the space with coordinates $(q,p,u)$, given as
\begin{equation}
\label{o1}
\begin{bmatrix}
\dot{q} \\[2mm] \dot{p} \\[2mm] 0
\end{bmatrix} =
\begin{bmatrix}
0 & I_n & 0 \\[2mm]
-I_n & 0 & 0 \\[2mm]
0 & 0 & I_m
\end{bmatrix}
\begin{bmatrix}
\frac{\partial H}{\partial q}(q,p,u) \\[2mm]
\frac{\partial H}{\partial p}(q,p,u) \\[2mm]
\frac{\partial H}{\partial u}(q,p,u)
\end{bmatrix}
\end{equation}
The underlying Dirac structure is given as
\begin{equation}
\mathcal{D} =\{ \left( \begin{bmatrix} f_q \\[2mm] f_p \\[2mm] f_u \end{bmatrix} , \begin{bmatrix} e_q \\[2mm] e_p \\[2mm] e_u \end{bmatrix}\right) \mid f_q = - e_p, \, f_p = e_q, \, e_u=0\} ,
\end{equation}
i.e., the direct product of the Dirac structure on the $(q,p)$ space given by the graph of the canonical skew-symmetric map $\begin{bmatrix} 0 & -I \\ I & 0 \end{bmatrix}$ from $\begin{bmatrix} f_q \\ f_p \end{bmatrix}$ to $\begin{bmatrix} e_q \\ e_p \end{bmatrix}$, with the trivial Dirac structure $\{(f_u,e_u) \mid e_u=0 \}$. The resulting Dirac algebraic constraint is $\frac{\partial H}{\partial u}(q,p,u) =0$.
System \eqref{o1} can be equivalently rewritten as a port-Hamiltonian system only involving the $(q,p)$ variables, with {\it implicit} energy storage relation given by the Lagrangian submanifold
\begin{equation}
\label{o2}
\mathcal{L} = \{\left(
\begin{bmatrix}
q \\[2mm] p
\end{bmatrix},
\begin{bmatrix}
e_q \\[2mm] e_p
\end{bmatrix} \right)
\mid
\exists u \mbox{ s.t. }
\begin{bmatrix}
e_q \\[2mm] e_p
\end{bmatrix} =
\begin{bmatrix}
\frac{\partial H}{\partial q}(q,p,u) \\[2mm]
\frac{\partial H}{\partial p}(q,p,u)
\end{bmatrix} , \; \frac{\partial H}{\partial u}(q,p,u)=0
\}
\end{equation}
Thus the function $H(q,p,u)$ defines a Morse family (a function of $(q,p)$ parametrized by $u$) for this Lagrangian submanifold, and the conversion of \eqref{o2} into \eqref{o1} is an example of the conversion of a Lagrange algebraic constraint into a Dirac algebraic constraint. See \cite{DAE} for the linear case.
\end{example}
\section{Conclusions}
We have laid down a framework for studying Dirac and Lagrange algebraic constraint equations as arising in (generalized) port-Hamiltonian systems, extending the linear results of \cite{DAE}, \cite{beattie}. In particular, a definition is provided of a nonlinear generalized port-Hamiltonian system, extending the one in \cite{barbero} by including energy dissipation and external ports. Furthermore, we have shown how implicit energy storage relations locally can be explicitly represented by a Hamiltonian depending on part of the state variables and a complementary part of the co-state variables. Also, by extension of the state space (inclusion of Lagrange multipliers) we have shown how Dirac algebraic constraints can be converted into Lagrange algebraic constraints, and conversely.
This work should be seen as a starting point for further study on the numerical properties of the resulting structured classes of nonlinear DAE systems; including their index and regularization \cite{kunkel}. Also it motivates the development of control theory for classes of physical nonlinear DAE systems, as well as extensions to the distributed-parameter case.
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\end{document}
\begin{document}
\title{Empirical Bernstein Bounds and Sample Variance Penalization}
\begin{abstract}
We give improved constants for data dependent and variance sensitive
confidence bounds, called empirical Bernstein bounds, and extend these
inequalities to hold uniformly over classes of functions whose growth
function is polynomial in the sample size $n$. The bounds lead us to
consider {\em sample variance penalization}, a novel learning method which
takes into account the empirical variance of the loss function. We
give conditions under which sample variance penalization is
effective. In particular, we present a bound on the excess risk
incurred by the method. Using this, we argue that there are situations
in which the excess risk of our method is of order $1/n$, while the
excess risk of empirical risk minimization is of order
$1/\sqrt{n}$. We show some experimental results, which confirm the
theory. Finally, we discuss the potential application of our results
to sample compression schemes.
\end{abstract}
\section{Introduction}
The method of empirical risk minimization (ERM) is so intuitive that
some of the less plausible alternatives have received little attention
by the machine learning community. In this work we present
sample variance penalization (SVP), a method which is motivated by
some variance-sensitive, data-dependent confidence bounds, which we
develop in the paper. We describe circumstances under which SVP works
better than ERM and provide some preliminary experimental results
which confirm the theory.
In order to explain the underlying ideas and highlight the differences
between SVP and ERM, we begin with a discussion of the confidence
bounds most frequently used in learning theory.
\begin{theorem}[Hoeffding's inequality]
\label{Theorem Hoeffdings Inequality}Let $Z,Z_{1},\dots,Z_{n}$ be i.i.d. random
variables with values in $\left[ 0,1\right] $ and let $\delta >0$. Then with
probability at least $1-\delta $ in $\left( Z_{1},\dots,Z_{n}\right) $ we have
\begin{equation*}
\mathbb{E}Z-\frac{1}{n}\sum_{i=1}^{n}Z_{i}\leq \sqrt{\frac{\ln 1/\delta }{2n}}.
\end{equation*}
\end{theorem}
It is customary to call this result Hoeffding's inequality. It
appears in a stronger, more general form in Hoeffding's 1963
milestone paper \cite{Hoeffding 1963}. Proofs can be found in
\cite{Hoeffding 1963} or \cite{McDiarmid 1998}. We cited
Hoeffding's inequality in form of a confidence-dependent bound on
the deviation, which is more convenient for our discussion than a
deviation-dependent bound on the confidence. Replacing $Z$ by $1-Z$
shows that the confidence interval is symmetric about $\mathbb{E }Z$.
Suppose some underlying observation is modeled by a random variable
$X$, distributed in some space $\mathcal{X}$ according to some
law $\mu $. In learning theory Hoeffding's inequality is
often applied when $Z$ measures the loss incurred by some hypothesis
$h$ when $X$ is observed, that is,
\begin{equation*}
Z=\ell _{h}\left( X\right) .
\end{equation*}
The expectation $\mathbb{E}_{X\sim \mu }\ell _{h}\left( X\right) $ is
called the risk associated with hypothesis $h$ and distribution $\mu
$. Since the risk depends only on the function $\ell _{h}$ and on
$\mu $ we can write the risk as
\begin{equation*}
P\left( \ell _{h},\mu \right) ,
\end{equation*}
where $P$ is the expectation functional. If an i.i.d. vector $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) $ has been observed, then Hoeffding's inequality
allows us to estimate the risk, for fixed hypothesis, by the empirical risk
\begin{equation*}
P_{n}\left( \ell _{h},\mathbf{X}\right) = \frac{1}{n} \sum_{i}\ell
_{h}\left( X_{i}\right)
\end{equation*}
within a confidence interval of length $2\sqrt{\left( \ln 1/\delta \right)
/\left( 2n\right) }$.
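As a concrete illustration, with $n=1000$ observations and $\delta =0.05$ the half-width of this interval is $\sqrt{\ln \left( 20\right) /2000}\approx 0.039$, irrespective of the hypothesis under consideration.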
Let us call the set $\mathcal{F}$ of functions $\ell _{h}$ for all different
hypotheses $h$ the \textit{hypothesis space} and its members \textit{hypotheses},
ignoring the distinction between a hypothesis $h$ and the
induced loss function $\ell _{h}$. The bound in Hoeffding's inequality
can easily be adjusted to hold uniformly over any finite hypothesis space $
\mathcal{F}$ to give the following well known result \cite{Anthony 1999}.
\begin{corollary}
\label{Corollary Hoeffding finite function class}Let $X$ be a random
variable with values in a set $\mathcal{X}$ with distribution $\mu $, and
let $\mathcal{F}$ be a finite class of hypotheses $f:\mathcal{X\rightarrow }
\left[ 0,1\right] $ and $\delta >0$. Then with probability at least $
1-\delta $ in $\mathbf{X}=\left( X_{1},\dots,X_{n}\right) \sim \mu ^{n}$
\begin{equation*}
P\left( f,\mu \right) -P_{n}\left( f,
\mathbf{X}\right) \leq \sqrt{\frac{\ln \left( \left\vert \mathcal{F}
\right\vert /\delta \right) }{2n}},~~\forall f\in \mathcal{F},
\end{equation*}
where $\left\vert \mathcal{F}\right\vert $ is the cardinality of $\mathcal{F}
$.
\end{corollary}
This result can be further extended to hold uniformly over hypothesis
spaces whose complexity can be controlled with different covering
numbers which then appear in place of the cardinality $\left\vert
\mathcal{F}\right\vert $ above. A large body of literature exists on
the subject of such uniform bounds to justify hypothesis selection by
empirical risk minimization, see \cite{Anthony 1999} and references
therein. Given a sample $\mathbf{X}$ and a hypothesis space
$\mathcal{ F}$, empirical risk minimization selects the hypothesis
\begin{equation*}
ERM\left( \mathbf{X}\right) =\arg \min_{f\in \mathcal{F}}P_{n}\left(
f,
\mathbf{X}\right) .
\end{equation*}
A drawback of Hoeffding's inequality is that the confidence interval is
independent of the hypothesis in question, and always of order $\sqrt{1/n}$,
leaving us with a uniformly blurred view of the hypothesis class. But for
hypotheses of small variance better estimates are possible, such as the
following, which can be derived from what is usually called Bennett's
inequality (see e.g. Hoeffding's paper \cite{Hoeffding 1963}).
\begin{theorem}[Bennett's inequality]
\label{Theorem Bernsteins inequality}Under the conditions of Theorem \ref
{Theorem Hoeffdings Inequality} we have with probability at least $1-\delta $
that
\begin{equation*}
\mathbb{E}Z-\frac{1}{n}\sum_{i=1}^{n}Z_{i}\leq \sqrt{\frac{2\mathbb{V}Z\ln
1/\delta }{n}}+\frac{\ln 1/\delta }{3n},
\end{equation*}
where $\mathbb{V}Z$ is the variance $\mathbb{V}Z=\mathbb{E}\left( Z-\mathbb{E
}Z\right) ^{2}$.
\end{theorem}
The bound is symmetric about $\mathbb{E}Z$ and for large $n$ the confidence
interval is now close to $2\sqrt{\mathbb{V}Z}$ times the confidence interval
in Hoeffding's inequality. A version of this bound which is uniform over
finite hypothesis spaces, analogous to Corollary \ref{Corollary Hoeffding
finite function class}, is easily obtained, involving now for each
hypothesis $h$\ the variance $\mathbb{V}h\left( X\right) $. If $h_{1}$ and $
h_{2}$ are two hypotheses then $2\sqrt{\mathbb{V}h_{1}\left( X\right) }$ and
$2\sqrt{\mathbb{V}h_{2}\left( X\right) }$ are always less than or equal to $
1 $ but they can also be much smaller, or one of them can be substantially
smaller than the other one. For hypotheses of zero variance the diameter of
the confidence interval decays as $O\left( 1/n\right) $.
Bennett's inequality therefore provides us with estimates of lower accuracy
for hypotheses of large variance, and higher accuracy for hypotheses of
small variance. Given many hypotheses of equal and nearly minimal empirical
risk it seems intuitively safer to select the one whose true risk can be
most accurately estimated (a point to which we shall return). But
unfortunately the right hand side of Bennett's inequality depends on the
unobservable variance, so our view of the hypothesis class remains uniformly
blurred.
\subsection{Main results and SVP algorithm}
We are now ready to describe the main results of the paper, which
provide the motivation for the SVP algorithm.
Our first result provides a purely data-dependent bound with similar
properties as Bennett's inequality.
\begin{theorem}
\label{Theorem empirical Bernstein bound degree 1}Under the conditions of
Theorem \ref{Theorem Hoeffdings Inequality} we have with probability at
least $1-\delta $ in the i.i.d. vector $\mathbf{Z}=\left(
Z_{1},\dots,Z_{n}\right) $ that
\begin{equation*}
\mathbb{E}Z-\frac{1}{n}\sum_{i=1}^{n}Z_{i}\leq \sqrt{\frac{2V_{n}\left(
\mathbf{Z}\right) \ln 2/\delta }{n}}+\frac{7\ln 2/\delta }{3\left(
n-1\right) },
\end{equation*}
where $V_{n}\left( \mathbf{Z}\right) $ is the sample variance
\begin{equation*}
V_{n}\left( \mathbf{Z}\right) =\frac{1}{n\left( n-1\right) }\sum_{1\leq
i<j\leq n}\left( Z_{i}-Z_{j}\right) ^{2}.
\end{equation*}
\end{theorem}
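As a minimal numerical illustration (not part of the proofs), the right hand side of Theorem \ref{Theorem empirical Bernstein bound degree 1} can be evaluated directly from a sample; note that $V_{n}\left( \mathbf{Z}\right)$ coincides with the usual unbiased sample variance.
\begin{verbatim}
import numpy as np

def empirical_bernstein_bound(z, delta):
    """Data-dependent bound on E[Z] - mean(z), holding with prob. >= 1 - delta.

    z is a 1-d array of i.i.d. observations in [0, 1]; delta lies in (0, 1).
    A minimal sketch, not the code used for the experiments reported below.
    """
    n = len(z)
    v = np.var(z, ddof=1)  # equals the pairwise sample variance V_n
    return (np.sqrt(2.0 * v * np.log(2.0 / delta) / n)
            + 7.0 * np.log(2.0 / delta) / (3.0 * (n - 1)))

# example: 1000 Bernoulli(0.1) losses, 95% confidence
rng = np.random.default_rng(0)
z = rng.binomial(1, 0.1, size=1000).astype(float)
print(empirical_bernstein_bound(z, delta=0.05))
\end{verbatim}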
We next extend Theorem \ref{Theorem empirical Bernstein bound degree 1} to hold uniformly over a finite function class.
\begin{corollary}
\label{Corollary empirical Bernstein finite function class}Let $X$ be a
random variable with values in a set $\mathcal{X}$ with distribution $\mu $,
and let $\mathcal{F}$ be a finite class of hypotheses $f:\mathcal{
X\rightarrow }\left[ 0,1\right] $. For $\delta >0,$ $n\geq 2$ we have with
probability at least $1-\delta $ in $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) \sim \mu ^{n}$ that
\begin{align*}
P\left( f,\mu \right) -P_{n}\left( f,\mathbf{X}\right) \leq {}&
\sqrt{\frac{2V_{n}\left( f,\mathbf{X}\right) \ln \left( 2\left\vert
\mathcal{F}\right\vert /\delta \right) }{n}} \\
&+ \frac{7\ln \left( 2\left\vert
\mathcal{F}\right\vert /\delta \right) }{3\left( n-1\right) },~~\forall f\in \mathcal{F},
\end{align*}
where $V_{n}\left( f,\mathbf{X}\right) =V_{n}\left( f\left( X_{1}\right)
,\dots,f\left( X_{n}\right) \right) $.
\end{corollary}
Theorem \ref{Theorem empirical Bernstein bound degree 1} makes the diameter
of the confidence interval observable. The corollary is obtained from a union bound over $\mathcal{F}$,
analogous to Corollary \ref{Corollary Hoeffding finite function
class}, and provides us with a view of the loss class which is blurred
for hypotheses of large sample variance, and more in focus for
hypotheses of small sample variance.
We note that an analogous result to Theorem \ref{Theorem empirical
Bernstein bound degree 1} is given by Audibert et al. \cite{Audibert
2007}. Our technique of proof is new and the bound we derive has
a slightly better constant.
Theorem \ref{Theorem empirical Bernstein bound degree 1} itself resembles
Bernstein's or Bennett's
inequality, in confidence bound form, but in terms of observable
quantities. For this reason it has been called an
\textit{empirical Bernstein bound} in \cite{Audibert 2008a}. In \cite{Audibert 2007}
Audibert et al. apply their result to the analysis of algorithms for
the multi-armed bandit problem and in \cite{Audibert 2008a} it is used to
derive stopping rules for sampling procedures. We will prove Theorem
\ref{Theorem empirical Bernstein bound degree 1} in Section
\ref{Section EBB and Variance}, together with some useful confidence
bounds on the standard deviation, which may be valuable in their own right.
Our next result extends the uniform estimate in Corollary
\ref{Corollary empirical Bernstein finite function class} to infinite
loss classes whose complexity can be suitably controlled. Beyond the
simple extension involving covering numbers for $\mathcal{F}$ in the
uniform norm $\left\Vert \cdot\right\Vert _{\infty }$, we can use the
following complexity measure, which is also fairly commonplace in the
machine learning literature \cite{Anthony 1999},
\cite{Guo}.
For $\epsilon >0$, a function class $\mathcal{F}$ and an integer $n$, the
``growth function'' $\mathcal{N}_{\infty }\left( \epsilon ,\mathcal{F}
,n\right) $ is defined as
\begin{equation*}
\mathcal{N}_{\infty }\left( \epsilon ,\mathcal{F},n\right) =\sup_{\mathbf{x}
\in \mathcal{X}^{n}}\mathcal{N}\left( \epsilon ,\mathcal{F}\left( \mathbf{x}\right)
,\left\Vert \cdot\right\Vert _{\infty }\right) ,
\end{equation*}
where $\mathcal{F}\left( \mathbf{x}\right) =\left\{ \left( f\left(
x_{1}\right) ,\dots,f\left( x_{n}\right) \right) :f\in \mathcal{F}\right\}
\subseteq \mathbb{R}^{n}$ and for $A\subseteq
\mathbb{R}^{n}$ the number $\mathcal{N}\left( \epsilon ,A,\left\Vert \cdot\right\Vert
_{\infty }\right) $ is the smallest cardinality $\left\vert
A_{0}\right\vert $ of a set $A_{0}\subseteq A$ such that $A$ is
contained in the union of $\epsilon $-balls centered at points in
$A_{0}$, in the metric induced by $\left\Vert \cdot\right\Vert _{\infty
}$.
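For instance, for a finite class $\mathcal{F}$ one trivially has $\mathcal{N}_{\infty }\left( \epsilon ,\mathcal{F},n\right) \leq \left\vert \mathcal{F}\right\vert $ for every $\epsilon >0$ and every $n$, so that the uniform bound below recovers the finite-class case up to constants.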
\begin{theorem}
\label{Theorem covering numbers}Let $X$ be a random variable with values in
a set $\mathcal{X}$ with distribution $\mu $ and let $\mathcal{F}$ be a
class of hypotheses $f:\mathcal{X\rightarrow }\left[ 0,1\right] $. Fix $
\delta \in \left( 0,1\right) ,$ $n\geq 16$ and set
\begin{equation*}
\mathcal{M}\left( n\right) =10\mathcal{N}_{\infty }\left( 1/n,\mathcal{F}
,2n\right) .
\end{equation*}
Then with probability at least $1-\delta $ in the random vector $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) \sim \mu ^{n}$ we have
\begin{align*}
P\left( f,\mu \right) -P_{n}\left( f,\mathbf{X}\right) \leq {}& \sqrt{\frac{18V_{n}\left( f,\mathbf{X}\right) \ln \left( \mathcal{M}
\left( n\right) /\delta \right) }{n}} \\
&+\frac{15\ln \left( \mathcal{M}\left(
n\right) /\delta \right) }{n-1},~~\forall f\in \mathcal{F}.
\end{align*}
\end{theorem}
The structure of this bound is very similar to Corollary
\ref{Corollary empirical Bernstein finite function class}, with
$2\left\vert \mathcal{F}\right\vert$ replaced by
$\mathcal{M}(n)$. In a number of practical cases polynomial growth of
$\mathcal{N}_{\infty }\left( 1/n,\mathcal{F},n\right)$ in $n$ has been
established. For instance, we quote \cite[equation~(28)]{Guo}
which states that for the bounded linear functionals in the
reproducing kernel Hilbert space associated with Gaussian kernels one
has $\ln
\mathcal{N}_{\infty }\left( 1/n,\mathcal{F},2n\right) =O\left( \ln
^{3/2}n\right) $. Composition with fixed Lipschitz functions preserves this
property, so we can see that Theorem \ref{Theorem covering numbers} is
applicable to a large family of function classes which occur in machine
learning. We will prove Theorem \ref{Theorem covering numbers} in Section
\ref{Section covering numbers}.
Since the minimization of uniform upper bounds is frequent practice in
machine learning, one could consider minimizing the bounds in
Corollary \ref{Corollary empirical Bernstein finite function class} or
Theorem \ref{Theorem covering numbers}. This leads to
\textit{sample variance penalization}, a technique which selects the
hypothesis
\begin{equation*}
SVP_{\lambda }\left( \mathbf{X}\right) =\arg \min_{f\in
\mathcal{F}}P_{n}\left( f,\mathbf{X}\right) +\lambda
\sqrt{\frac{V_{n}\left( f,\mathbf{X}\right) }{n}},
\end{equation*}
where $\lambda \geq 0$ is some regularization parameter. For $\lambda =0$ we
recover empirical risk minimization. The last term on the right hand side
can be regarded as a data-dependent regularizer.
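As a minimal sketch (our own illustration, with a hypothetical loss matrix as input), for a finite hypothesis class the two selection rules differ only in the variance penalty.
\begin{verbatim}
import numpy as np

def select(loss, lam=0.0):
    """ERM for lam = 0, sample variance penalization for lam > 0.

    loss[f, i] is the loss of hypothesis f on sample point X_i, in [0, 1].
    Returns the index of the selected hypothesis.
    """
    n = loss.shape[1]
    emp_risk = loss.mean(axis=1)           # P_n(f, X)
    sample_var = loss.var(axis=1, ddof=1)  # V_n(f, X)
    return int(np.argmin(emp_risk + lam * np.sqrt(sample_var / n)))
\end{verbatim}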
Why, and under which circumstances, should sample variance
penalization work better than empirical risk minimization? If two
hypotheses have the same empirical risk, why should we discard the one
with higher sample variance? After all, the empirical risk of the
high variance hypothesis may be just as much overestimating the true
risk as underestimating it. In Section \ref{Section SVP vs ERM} we
will argue that the decay of the excess risk of sample variance
penalization can be bounded in terms of the variance of an optimal
hypothesis (see Theorem \ref{Theorem excess risk bound})
and if there is an optimal hypothesis with zero variance,
then the excess risk decreases as $1/n$. We also give an example of
such a case where the excess risk of empirical risk minimization
cannot decrease faster than $O\left( 1/\sqrt{n}\right) $. We then
report on the comparison of the two algorithms in a toy experiment.
Finally, in Section \ref{sec:SC} we present some preliminary observations concerning
the application of empirical Bernstein bounds to sample-compression schemes.
\subsection{Notation}
We summarize the notation used throughout the paper. We define the
following functions on the cube $\left[ 0,1\right] ^{n}$, which will
be used throughout. For every $\mathbf{x}=\left( x_{1},\dots,x_{n}\right)
\in \left[ 0,1\right] ^{n}$ we let
$$
P_{n}\left( \mathbf{x}\right) =\frac{1}{n}\sum_{i=1}^{n}x_{i}
$$
and
$$
V_{n}\left( \mathbf{x}\right) =\frac{1}{n\left( n-1\right) }
\sum_{i,j=1}^{n}\frac{\left( x_{i}-x_{j}\right) ^{2}}{2}.
$$
If $\mathcal{X}$ is some set, $f:\mathcal{X\rightarrow }\left[ 0,1\right] $
and $\mathbf{x}=\left( x_{1},\dots,x_{n}\right) \in \mathcal{X}^{n}$ we write $
f\left( \mathbf{x}\right) =\left( f\left( x_{1}\right) ,\dots,f\left(
x_{n}\right) \right)$, $P_{n}\left( f,\mathbf{x}
\right) =P_{n}\left( f\left( \mathbf{x}\right) \right) $ and $V_{n}\left( f,
\mathbf{x}\right) =V_{n}\left( f\left( \mathbf{x}\right) \right) $.
Questions of measurability will be ignored throughout, if necessary
this is enforced through finiteness assumptions. If $X$ is a real
valued random variable we use $\mathbb{E}X$ and $\mathbb{V}X$ to
denote its expectation and variance, respectively. If $X$ is a random
variable distributed in some set $\mathcal{X}$ according to a
distribution $\mu $, we write $X\sim \mu $. Product measures are
denoted by the symbols $\times $ or $\prod $, $\mu ^{n}$ is the
$n$-fold product of $\mu $ and the random variable $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) \sim \mu ^{n}$ is an i.i.d. sample generated from
$\mu $. If $X\sim \mu $ and $f:\mathcal{X\rightarrow \mathbb{R}}$
then we write $P\left( f,\mu \right) =\mathbb{E}_{X\sim \mu }f\left(
X\right) =\mathbb{E}f\left( X\right) $ and $V\left( f,\mu \right)
=\mathbb{V}_{X\sim \mu }f\left( X\right) =\mathbb{V}f\left( X\right)
$.
\section{Empirical Bernstein bounds and variance estimation\label{Section
EBB and Variance}}
In this section, we prove Theorem \ref{Theorem empirical Bernstein bound
degree 1} and some related useful results, in particular concentration
inequalities for the variance of a bounded random variable, (\ref{Inequality
for i.i.d.}) and (\ref{Upper tail})\ below, which may be of independent
interest. For future use we derive our results for the more general case
where the $X_{i}$ in the sample are independent, but not necessarily
identically distributed.
We need two auxiliary results. One is a concentration inequality for
self-bounding random variables (Theorem 13 in \cite{Maurer 2006}):
\begin{theorem}
\label{Theorem concentration}Let $\mathbf{X}=\left( X_{1},\dots,X_{n}\right) $
be a vector of independent random variables with values in some set $
\mathcal{X}$. For $1\leq k\leq n$ and $y\in \mathcal{X}$, we use $\mathbf{X}
_{y,k}$ to denote the vector obtained from $\mathbf{X}$ by replacing $X_{k}$
by $y$. Suppose that $a\geq 1$ and that $Z=Z\left( \mathbf{X}\right) $
satisfies the inequalities
\begin{eqnarray}
Z\left( \mathbf{X}\right) -\inf_{y\in \mathcal{X}}Z\left( \mathbf{X}
_{y,k}\right) &\leq &1,\forall k \label{concentration contition I} \\
\sum_{k=1}^{n}\left( Z\left( \mathbf{X}\right) -\inf_{y\in \mathcal{X}
}Z\left( \mathbf{X}_{y,k}\right) \right) ^{2} &\leq &aZ\left( \mathbf{X}
\right) \label{concentration contition II}
\end{eqnarray}
almost surely. Then, for $t>0$,
\begin{equation*}
\Pr \left\{ \mathbb{E}Z-Z>t\right\} \leq \exp \left( \frac{-t^{2}}{2a\mathbb{
E}Z}\right) .
\end{equation*}
If $Z$ satisfies only the self-boundedness condition (\ref{concentration
contition II}) we still have
\begin{equation*}
\Pr \left\{ Z-\mathbb{E}Z>t\right\} \leq \exp \left( \frac{-t^{2}}{2a\mathbb{
E}Z+at}\right) .
\end{equation*}
\end{theorem}
The other result we need is a technical lemma on conditional expectations.
\begin{lemma}
\label{Lemma tedious}Let $X$, $Y$ be i.i.d. random variables with values in an
interval $\left[ a,a+1\right] $. Then
\begin{equation*}
\mathbb{E}_{X}\left[ \mathbb{E}_{Y}\left( X-Y\right) ^{2}\right] ^{2}\leq
\left( 1/2\right) \mathbb{E}\left( X-Y\right) ^{2}.
\end{equation*}
\end{lemma}
\begin{proof}
The right side of the above inequality equals the variance of $X$, which, since $X$ and $Y$ are i.i.d., may be written as $\mathbb{E}\left[ X^{2}-XY
\right] $. One computes
\begin{equation*}
\mathbb{E}_{X}\left[ \mathbb{E}_{Y}\left( X-Y\right) ^{2}\right] ^{2}=
\mathbb{E}\left[ X^{4}+3X^{2}Y^{2}-4X^{3}Y\right] .
\end{equation*}
We therefore have to show that $\mathbb{E}\left[ g\left( X,Y\right) \right]
\geq 0$ where
\begin{equation*}
g\left( X,Y\right) =X^{2}-XY-X^{4}-3X^{2}Y^{2}+4X^{3}Y
\end{equation*}
A rather tedious computation gives
\begin{align*}
& g\left( X,Y\right) +g\left( Y,X\right) = ~~\\
& =X^{2}-XY-X^{4}-3X^{2}Y^{2}+4X^{3}Y + \\
& \hspace{.7truecm} +Y^{2}-XY-Y^{4}-3X^{2}Y^{2}+4Y^{3}X \\
& =\left( X-Y+1\right) \left( Y-X+1\right) \left( Y-X\right) ^{2}.
\end{align*}
The latter expression is clearly nonnegative, so
\begin{equation*}
2\left[ \mathbb{E}g\left( X,Y\right) \right] =\mathbb{E}\left[ g\left(
X,Y\right) +g\left( Y,X\right) \right] \geq 0,
\end{equation*}
which completes the proof.
\end{proof}
When the random variables $X$ and $Y$ are uniformly distributed on a
finite set, $\left\{ x_{1},\dots,x_{n}\right\}$, Lemma \ref{Lemma tedious}
gives the following useful corollary.
\begin{corollary}
\label{Corollary tedious}Suppose $\left\{ x_{1},\dots,x_{n}\right\} \subset
\left[ 0,1\right] $. Then
\begin{equation*}
\frac{1}{n}\sum_{k}\left( \frac{1}{n}\sum_{j}\left( x_{k}-x_{j}\right)
^{2}\right) ^{2}\leq \frac{1}{2n^{2}}\sum_{k,j}\left( x_{k}-x_{j}\right)
^{2}.
\end{equation*}
\end{corollary}
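As a quick numerical sanity check of this corollary (an added illustration, assuming Python with \texttt{numpy}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(50)                       # points in [0, 1]
d2 = (x[:, None] - x[None, :]) ** 2      # squared differences
lhs = np.mean(np.mean(d2, axis=1) ** 2)
rhs = d2.sum() / (2 * len(x) ** 2)
assert lhs <= rhs
\end{verbatim}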
We first establish confidence bounds for the standard deviation.
\begin{theorem}
\label{Theorem realvalued}Let $n\geq 2$ and $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) $ be a vector of independent random variables with
values in $\left[ 0,1\right] $. Then for $\delta >0$ we have, writing $
\mathbb{E}V_{n}$ for $\mathbb{E}_{\mathbf{X}}V_{n}\left( \mathbf{X}\right) $,
\begin{eqnarray}
\Pr \left\{ \sqrt{\mathbb{E}V_{n}}>\sqrt{V_{n}\left( \mathbf{X}\right) }+
\sqrt{\frac{2\ln 1/\delta }{n-1}}\right\} &\leq &\delta
\label{Stdev upper bound} \\
\Pr \left\{ \sqrt{V_{n}\left( \mathbf{X}\right) }>\sqrt{\mathbb{E}V_{n}}+
\sqrt{\frac{2\ln 1/\delta }{n-1}}\right\} &\leq &\delta .
\label{Stdev lower bound}
\end{eqnarray}
\end{theorem}
\begin{proof}
Write $Z\left( \mathbf{X}\right) =nV_{n}\left( \mathbf{X}\right) $. Now fix
some $k$ and choose any $y\in \left[ 0,1\right] $. Then
\begin{align*}
& Z\left( \mathbf{X}\right) -Z\left( \mathbf{X}_{y,k}\right) = \\
& =\frac{1}{n-1}\sum_{j}\left( \left( X_{k}-X_{j}\right) ^{2}-\left(
y-X_{j}\right) ^{2}\right) \\
& \leq \frac{1}{n-1}\sum_{j}\left( X_{k}-X_{j}\right) ^{2}.
\end{align*}
It follows that $Z\left( \mathbf{X}\right) -\inf_{y\in \left[ 0,1\right] }Z\left(
\mathbf{X}_{y,k}\right) \leq 1$. We also get
\begin{align*}
& \sum_{k}\left( Z\left( \mathbf{X}\right) -\inf_{y\in \left[ 0,1\right]
}Z\left( \mathbf{X}_{y,k}\right) \right) ^{2} \leq ~~\\
& \leq \sum_{k}\left( \frac{1}{n-1}\sum_{j}\left( X_{k}-X_{j}\right)
^{2}\right) ^{2} \\
& \leq \frac{n^{3}}{\left( n-1\right) ^{2}}\frac{1}{2n^{2}}\sum_{kj}\left(
X_{k}-X_{j}\right) ^{2} \\
& =\frac{n}{n-1}Z\left( \mathbf{X}\right) ,
\end{align*}
where we applied Corollary \ref{Corollary tedious} to get the second
inequality. It follows that $Z$ satisfies (\ref{concentration contition I})
and (\ref{concentration contition II}) with $a=n/\left( n-1\right) $. From
Theorem \ref{Theorem concentration} and
$$\Pr \left\{ \pm \mathbb{E}V_{n}\mp
V_{n}\left( \mathbf{X}\right) >s\right\} =\Pr \left\{ \pm \mathbb{E}Z\mp
Z\left( \mathbf{X}\right) >ns\right\}
$$
we can therefore conclude the
following concentration result for the sample variance: For $s>0$
\begin{eqnarray}
\Pr \left\{ \mathbb{E}V_{n}-V_{n}\left( \mathbf{X}\right) >s\right\} &\leq
&\exp \left( \frac{-\left( n-1\right) s^{2}}{2\mathbb{E}V_{n}}\right)
\label{Inequality for i.i.d.}~~~ \\
\Pr \left\{ V_{n}\left( \mathbf{X}\right) -\mathbb{E}V_{n}>s\right\} &\leq
&\exp \left( \frac{-\left( n-1\right) s^{2}}{2\mathbb{E}V_{n}+s}\right).~~~
\label{Upper tail}
\end{eqnarray}
From the lower tail bound (\ref{Inequality for i.i.d.}) we obtain with
probability at least $1-\delta $ that
\begin{equation*}
\mathbb{E}V_{n}-2\sqrt{\mathbb{E}V_{n}}\sqrt{\frac{\ln 1/\delta }{2\left(
n-1\right) }}\leq V_{n}\left( \mathbf{X}\right) .
\end{equation*}
Completing the square on the left hand side, taking the square-root, adding $
\sqrt{\ln \left( 1/\delta \right) /\left( 2\left( n-1\right) \right) }$ and
using $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ gives (\ref{Stdev upper bound}).
Solving the right side of (\ref{Upper tail}) for $s$ and using the same
square-root inequality we find that with probability at least $1-\delta $ we
have
\begin{eqnarray*}
V_{n}\left( \mathbf{X}\right) &\leq &\mathbb{E}V_{n}+2\sqrt{\frac{\mathbb{E}
V_{n}\ln 1/\delta }{2\left( n-1\right) }}+\frac{\ln 1/\delta }{\left(
n-1\right) } \\
&=&\left( \sqrt{\mathbb{E}V_{n}}+\sqrt{\frac{\ln 1/\delta }{2\left(
n-1\right) }}\right) ^{2}+\frac{\ln 1/\delta }{2\left( n-1\right) }.
\end{eqnarray*}
Taking the square-root and using the root-inequality again gives (\ref{Stdev
lower bound}).
\end{proof}
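The two bounds above translate into an observable confidence interval for the standard deviation $\sqrt{\mathbb{E}V_{n}}$. The following sketch (an added illustration, assuming Python with \texttt{numpy}) computes it; each side holds with probability at least $1-\delta $.
\begin{verbatim}
import numpy as np

def stddev_confidence_interval(x, delta):
    # two-sided bound for sqrt(E V_n), each side with prob. >= 1 - delta
    x = np.asarray(x, dtype=float)
    n = len(x)
    v_n = np.var(x, ddof=1)              # sample variance V_n
    slack = np.sqrt(2.0 * np.log(1.0 / delta) / (n - 1))
    lower = max(np.sqrt(v_n) - slack, 0.0)
    upper = np.sqrt(v_n) + slack
    return lower, upper

lo, up = stddev_confidence_interval(np.random.rand(1000), delta=0.05)
\end{verbatim}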
We can now prove the empirical Bernstein bound, which reduces to Theorem \ref
{Theorem empirical Bernstein bound degree 1} for identically distributed
variables.
\begin{theorem}
\label{Theorem Bernstein Variance}Let $\mathbf{X}=\left(
X_{1},\dots,X_{n}\right) $ be a vector of independent random variables with
values in $\left[ 0,1\right] $. Let $\delta >0$. Then with probability at
least $1-\delta $ in $\mathbf{X}$\ we have
\begin{equation*}
\mathbb{E}\left[ P_{n}\left( \mathbf{X}\right) \right] \leq P_{n}\left(
\mathbf{X}\right) +\sqrt{\frac{2V_{n}\left( \mathbf{X}\right) \ln 2/\delta }{
n}}+\frac{7\ln 2/\delta }{3\left( n-1\right) }.
\end{equation*}
\end{theorem}
\begin{proof}
Write $W=\left( 1/n\right) \sum_{i}\mathbb{V}X_{i}$ and observe that
\begin{eqnarray}
W &\leq &\frac{1}{n}\sum_{i}\mathbb{E}\left( X_{i}-\mathbb{E}X_{i}\right)
^{2} \label{Variance bound} \\
&&+\frac{1}{2n\left( n-1\right) }\sum_{i\neq j}\left( \mathbb{E}X_{i}-
\mathbb{E}X_{j}\right) ^{2} \notag \\
&=&\frac{1}{2n\left( n-1\right) }\sum_{i,j}\mathbb{E}\left(
X_{i}-X_{j}\right) ^{2} \notag \\
&=&\mathbb{E}V_{n}. \notag
Recall that Bennett's inequality, which holds also if the $X_{i}$ are not
identically distributed (see \cite{McDiarmid 1998}), implies with
probability at least $1-\delta $
\begin{eqnarray*}
\mathbb{E}P_{n}\left( \mathbf{X}\right) &\leq &P_{n}\left( \mathbf{X}
\right) +\sqrt{\frac{2W\ln 1/\delta }{n}}+\frac{\ln 1/\delta }{3n} \\
&\leq &P_{n}\left( \mathbf{X}\right) +\sqrt{\frac{2\mathbb{E}V_{n}\ln
1/\delta }{n}}+\frac{\ln 1/\delta }{3n},
\end{eqnarray*}
so that the conclusion follows from combining this inequality with (\ref
{Stdev upper bound}) in a union bound and some simple estimates.
\end{proof}
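As a small worked sketch (added for illustration and not part of the original text; it assumes Python with \texttt{numpy}), the resulting observable upper confidence bound on the mean can be computed directly from the sample:
\begin{verbatim}
import numpy as np

def empirical_bernstein_bound(x, delta):
    # upper confidence bound on the mean of [0,1]-valued data:
    # P_n + sqrt(2 V_n ln(2/delta)/n) + 7 ln(2/delta)/(3(n-1))
    x = np.asarray(x, dtype=float)
    n = len(x)
    p_n = x.mean()
    v_n = np.var(x, ddof=1)
    log_term = np.log(2.0 / delta)
    return (p_n + np.sqrt(2.0 * v_n * log_term / n)
                + 7.0 * log_term / (3.0 * (n - 1)))

# for low-variance data the slack is close to the fast O(1/n) rate
bound = empirical_bernstein_bound(np.random.beta(1, 20, size=500), 0.05)
\end{verbatim}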
\section{Empirical Bernstein bounds for function classes of polynomial
growth \label{Section covering numbers}}
We now prove Theorem \ref{Theorem covering numbers}. We will use the
classical double-sample method (\cite{Vapnik 1995}, \cite{Anthony 1999}),
but we have to modify it somewhat to adapt it to the nonlinearity of the
empirical standard-deviation functional. Define functions $\Phi $, $\Psi :
\left[ 0,1\right] ^{n}\times \mathbb{R}_{+}\rightarrow \mathbb{R}$ by
\begin{eqnarray*}
\Phi \left( \mathbf{x},t\right) &=&P_{n}\left( \mathbf{x}\right) +\sqrt{
\frac{2V_{n}\left( \mathbf{x}\right) t}{n}}+\frac{7t}{3\left( n-1\right) }, \\
\Psi \left( \mathbf{x},t\right) &=&P_{n}\left( \mathbf{x}\right) +\sqrt{
\frac{18V_{n}\left( \mathbf{x}\right) t}{n}}+\frac{11t}{n-1}.
\end{eqnarray*}
We first record some simple Lipschitz properties of these functions.
\begin{lemma}
\label{Lemma Lipschitz}For $t>0$, $\mathbf{x},\mathbf{x}^{\prime }\in \left[
0,1\right] ^{n}$ we have
\begin{eqnarray}
\nonumber
(i)~~ \Phi \left( \mathbf{x},t\right) -\Phi \left( \mathbf{x}^{\prime
},t\right) &\leq& \left( 1+2\sqrt{t/n}\right) \left\Vert \mathbf{x}-\mathbf{x}
^{\prime }\right\Vert _{\infty },\\
(ii)~~ \Psi \left( \mathbf{x},t\right) -\Psi \left( \mathbf{x}^{\prime
},t\right) &\leq& \left( 1+6\sqrt{t/n}\right) \left\Vert \mathbf{x}-\mathbf{x}
^{\prime }\right\Vert _{\infty }.
\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
Since $V_{n}\left( \mathbf{x}\right) =\frac{1}{n-1}\sum_{i}\left(
x_{i}-P_{n}\left( \mathbf{x}\right) \right) ^{2}$, the map $\mathbf{x}\mapsto
\sqrt{V_{n}\left( \mathbf{x}\right) }$ is a seminorm, so the triangle
inequality gives
$$
\sqrt{V_{n}\left( \mathbf{x}\right) }-\sqrt{V_{n}\left(
\mathbf{x}^{\prime }\right) }\leq \sqrt{V_{n}\left( \mathbf{x}-\mathbf{x}
^{\prime }\right) }\leq \sqrt{2}\left\Vert \mathbf{x}-\mathbf{x}
^{\prime }\right\Vert _{\infty },
$$ which implies (i) and (ii).
\end{proof}
Given two vectors $\mathbf{x},\mathbf{x}^{\prime }\in \mathcal{X}^{n}$ and $
\mathbf{\sigma }\in \left\{ -1,1\right\} ^{n}$ define $\left( \mathbf{\sigma
},\mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}^{n}$ by $\left(
\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right) _{i}=x_{i}$ if $
\sigma _{i}=1$ and $\left( \mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) _{i}=x_{i}^{\prime }$ if $\sigma _{i}=-1$. In the following the $
\sigma _{i}$ will be independent random variables, uniformly distributed on $
\left\{ -1,1\right\} $.
\begin{lemma}
\label{Lemma symmetrization} Let $\mathbf{X}=\left( X_{1},\dots,X_{n}\right) $
and $\mathbf{X}^{\prime }=\left( X_{1}^{\prime },\dots,X_{n}^{\prime }\right) $
be random vectors with values in $\mathcal{X}$ such that all the $X_{i}$ and
$X_{i}^{\prime }$ are independent and identically distributed. Suppose that $
F:\mathcal{X}^{2n}\rightarrow \left[ 0,1\right] $. Then
\begin{equation*}
\mathbb{E}F\left( \mathbf{X},\mathbf{X}^{\prime }\right) \leq \sup_{\left(
\mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}^{2n}}\mathbb{E}
_{\sigma }F\left( \left( \mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) ,\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right)
\right) .
\end{equation*}
\end{lemma}
\begin{proof}
For any configuration $\mathbf{\sigma }$ and $\left( \mathbf{X},\mathbf{X}
^{\prime }\right) $, the configuration $\left( \left( \mathbf{\sigma },
\mathbf{X},\mathbf{X}^{\prime }\right) ,\left( -\mathbf{\sigma },\mathbf{X},
\mathbf{X}^{\prime }\right) \right) $ is obtained from $\left( \mathbf{X},
\mathbf{X}^{\prime }\right) $ by exchanging $X_{i}$ and $X_{i}^{\prime }$
whenever $\sigma _{i}=-1$. Since $X_{i}$ and $X_{i}^{\prime }$ are
identically distributed this does not affect the expectation. Thus
\begin{eqnarray*}
\mathbb{E}F\left( \mathbf{X},\mathbf{X}^{\prime }\right) &=&\mathbb{E}
_{\sigma }\mathbb{E}F\left( \left( \mathbf{\sigma },\mathbf{X},\mathbf{X}
^{\prime }\right) ,\left( -\mathbf{\sigma },\mathbf{X},\mathbf{X}^{\prime
}\right) \right) \\
&\leq &\sup_{\left( \mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}
^{2n}}\mathbb{E}_{\sigma }F\left( \left( \mathbf{\sigma },\mathbf{x},\mathbf{
x}^{\prime }\right) ,\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) \right) .
\end{eqnarray*}
\end{proof}
The next lemma is where we use the concentration results in Section \ref
{Section EBB and Variance}.
\begin{lemma}
\label{Lemma probability bound}Let $f:\mathcal{X}\rightarrow \left[ 0,1
\right] $ and $\left( \mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}
^{2n}$ be fixed. Then
\begin{equation*}
\Pr_{\sigma }\left\{ \Phi \left( f\left( \mathbf{\sigma },\mathbf{x},\mathbf{
x}^{\prime }\right) ,t\right) >\Psi \left( f\left( -\mathbf{\sigma },\mathbf{
x},\mathbf{x}^{\prime }\right) ,t\right) \right\} \leq 5e^{-t}.
\end{equation*}
\end{lemma}
\begin{proof}
Define the random vector $\mathbf{Y}=\left( Y_{1},\dots,Y_{n}\right) $, where
the $Y_{i}$ are independent random variables, each $Y_{i}$ being uniformly
distributed on $\left\{ f\left( x_{i}\right) ,f\left( x_{i}^{\prime }\right)
\right\} $. The $Y_{i}$ are of course not identically distributed. Within
this proof we use the shorthand notation $\mathbb{E}P_{n}=\mathbb{E}_{
\mathbf{Y}}P_{n}\left( \mathbf{Y}\right) $ and $\mathbb{E}V_{n}=\mathbb{E}_{
\mathbf{Y}}V_{n}\left( \mathbf{Y}\right) $, and let
\begin{equation*}
A=\mathbb{E}P_{n}+\sqrt{\frac{8\mathbb{E}V_{n}~t}{n}}+\frac{14t}{3\left(
n-1\right) }.
\end{equation*}
Evidently
\begin{eqnarray*}
&&\Pr_{\sigma }\left\{ \Phi \left( f\left( \mathbf{\sigma },\mathbf{x},
\mathbf{x}^{\prime }\right) ,t\right) >\Psi \left( f\left( -\mathbf{\sigma },
\mathbf{x},\mathbf{x}^{\prime }\right) ,t\right) \right\} \leq \\
&& \leq \Pr_{\sigma }\left\{ \Phi \left( f\left( \mathbf{\sigma },\mathbf{x},
\mathbf{x}^{\prime }\right) ,t\right) >A\right\}+
\\ && \hspace{1.9truecm}
+\Pr_{\sigma }\left\{
A>\Psi \left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) ,t\right) \right\} = \\
&& = \Pr_{\mathbf{Y}}\left\{ \Phi \left( \mathbf{Y},t\right) >A\right\} +\Pr_{
\mathbf{Y}}\left\{ A>\Psi \left( \mathbf{Y},t\right) \right\} .
\end{eqnarray*}
To prove our result we will bound these two probabilities in turn.
Now
\begin{eqnarray*}
&&\Pr_{\mathbf{Y}}\left\{ \Phi \left( \mathbf{Y},t\right) >A\right\} \leq \\
&\leq &\Pr \left\{ P_{n}\left( \mathbf{Y}\right) >\mathbb{E}P_{n}+\sqrt{
\frac{2\mathbb{E}V_{n}t}{n}}+\frac{t}{3\left( n-1\right) }\right\} + \\
&&+\Pr \left\{ \sqrt{\frac{2V_{n}\left( \mathbf{Y}\right) t}{n}}>\sqrt{\frac{
2\mathbb{E}V_{n}~t}{n}}+\frac{2t}{n-1}\right\} .
\end{eqnarray*}
Since $\sum_{i}\mathbb{V}\left( Y_{i}\right) \leq n\mathbb{E}
V_{n}$ by equation (\ref{Variance bound}), the first of these probabilities
is at most $e^{-t}$ by Bennett's inequality, which also holds for variables
which are not identically distributed. That the second of these
probabilities is bounded by $e^{-t}$ follows directly from Theorem \ref
{Theorem realvalued} (\ref{Stdev lower bound}). We conclude that $\Pr_{
\mathbf{Y}}\left\{ \Phi \left( \mathbf{Y},t\right) >A\right\} \leq 2e^{-t}$.
Since $\sqrt{2}+\sqrt{8}=\sqrt{18}$ we have
\begin{eqnarray*}
&&\Pr_{\mathbf{Y}}\left\{ A>\Psi \left( \mathbf{Y},t\right) \right\} \leq \\
&\leq &\Pr \left\{ \mathbb{E}P_{n}>P_{n}\left( \mathbf{Y}\right) +\sqrt{
\frac{2V_{n}\left( \mathbf{Y}\right) t}{n}}+\frac{7t}{3\left( n-1\right) }
\right\} + \\
&&+\Pr \left\{ \sqrt{\frac{8\mathbb{E}V_{n}~t}{n}}>\sqrt{\frac{8V_{n}\left(
\mathbf{Y}\right) t}{n}}+\frac{4t}{n-1}\right\} .
\end{eqnarray*}
The first probability in the sum is at most $2e^{-t}$ by Theorem \ref
{Theorem Bernstein Variance}, and the second is at most $e^{-t}$ by Theorem
\ref{Theorem realvalued} (\ref{Stdev upper bound}). Hence $\Pr_{\mathbf{Y}
}\left\{ A>\Psi \left( \mathbf{Y},t\right) \right\} \leq
3e^{-t}$, so it follows that
\begin{equation*}
\Pr_{\sigma }\left\{ \Phi \left( f\left( \mathbf{\sigma },\mathbf{x},\mathbf{
x}^{\prime }\right) ,t\right) >\Psi \left( f\left( -\mathbf{\sigma },\mathbf{
x},\mathbf{x}^{\prime }\right) ,t\right) \right\} \leq 5e^{-t}\text{.}
\end{equation*}
\end{proof}
\noindent {\bf Proof of Theorem \protect\ref{Theorem covering numbers}}. It follows from Theorem \ref{Theorem Bernstein Variance} that for $t>\ln 4$
we have for any $f\in \mathcal{F}$ that
\begin{equation*}
\Pr \left\{ \Phi \left( f\left( \mathbf{X}\right) ,t\right) >P\left( f,\mu
\right) \right\} \geq 1/2\text{. }
\end{equation*}
In other words, the functional
\begin{equation*}
f\mapsto \Lambda \left( f\right) =\mathbb{E}_{\mathbf{X}^{\prime }}\mathbf{1}
\left\{ \Phi \left( f\left( \mathbf{X}^{\prime }\right) ,t\right) >P\left(
f,\mu \right) \right\}
\end{equation*}
satisfies $1\leq 2\Lambda \left( f\right) $ for all $f$. Consequently, for
any $s>0$ we have, using $\mathbb{I}A$ to denote the indicator function of $A
$, that
\begin{align*}
& \Pr_{\mathbf{X}}\left\{ \exists f\in \mathcal{F}:P\left( f,\mu \right)
>\Psi \left( f\left( \mathbf{X}\right) ,t\right) +s\right\} \\ \\
& =\mathbb{E}_{\mathbf{X}}\sup_{f\in \mathcal{F}}\mathbb{I}\left\{ P\left(
f,\mu \right) >\Psi \left( f\left( \mathbf{X}\right) ,t\right) +s\right\} \\ \\
& \leq \mathbb{E}_{\mathbf{X}}\sup_{f\in \mathcal{F}}\mathbb{I}\left\{
P\left( f,\mu \right) >\Psi \left( f\left( \mathbf{X}\right) ,t\right)
+s\right\} 2\Lambda \left( f\right) \\ \\
& =2\mathbb{E}_{\mathbf{X}}\sup_{f\in \mathcal{F}}\mathbb{E}_{\mathbf{X}
^{\prime }}\mathbb{I}\left\{
\begin{array}{c}
P\left( f,\mu \right) >\Psi \left( f\left( \mathbf{X}\right) ,t\right) +s
\text{ } \\
\text{and }\Phi \left( f\left( \mathbf{X}^{\prime }\right) ,t\right)
>P\left( f,\mu \right)
\end{array}
\right\} \\ \\
& \leq 2\mathbb{E}_{\mathbf{XX}^{\prime }}\sup_{f\in \mathcal{F}}\mathbb{I}
\left\{
\begin{array}{c}
P\left( f,\mu \right) >\Psi \left( f\left( \mathbf{X}\right) ,t\right) +s
\text{ } \\
\text{and }\Phi \left( f\left( \mathbf{X}^{\prime }\right) ,t\right)
>P\left( f,\mu \right)
\end{array}
\right\} \\ \\
& \leq 2\mathbb{E}_{\mathbf{XX}^{\prime }}\sup_{f\in \mathcal{F}}\mathbb{I}
\left\{ \Phi \left( f\left( \mathbf{X}^{\prime }\right) ,t\right) >\Psi
\left( f\left( \mathbf{X}\right) ,t\right) +s\right\} \\ \\
& \leq 2\sup_{\left( \mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}
^{2n}}\Pr_{\sigma }\big\{ \exists f\in \mathcal{F}:\Phi \left( f\left(
\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right) ,t\right)\\
& \hspace{3truecm} > \Psi \left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right)
,t\right) +s\big\},
\end{align*}
\noindent where we used Lemma \ref{Lemma symmetrization} in the last step.
Now we fix $\left( \mathbf{x},\mathbf{x}^{\prime }\right) \in \mathcal{X}
^{2n}$ and let $\epsilon >0$ be arbitrary. We can choose a finite subset $
\mathcal{F}_{0}$ of $\mathcal{F}$ such that $\left\vert \mathcal{F}
_{0}\right\vert \leq \mathcal{N}\left( \epsilon ,\mathcal{F},2n\right) $ and
that $\forall f\in \mathcal{F}$ there exists $\hat{f}\in \mathcal{F}_{0}$ such
that $\left\vert f\left( x_{i}\right) -\hat{f}\left( x_{i}\right)
\right\vert <\epsilon $ and $\left\vert f\left( x_{i}^{\prime }\right) -\hat{
f}\left( x_{i}^{\prime }\right) \right\vert <\epsilon $, for all $i\in
\left\{ 1,\dots,n\right\} $. Suppose there exists $f\in \mathcal{F}$ such that
\begin{equation*}
\Phi \left( f\left( \mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right)
,t\right) >\Psi \left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}
^{\prime }\right) ,t\right) +\left( 2+8\sqrt{\frac{t}{n}}\right) \epsilon
\text{. }
\end{equation*}
It follows from the Lemma \ref{Lemma Lipschitz} (i) and (ii) that
there must exist $\hat{f}\in \mathcal{F}_{0}$ such that
$$\Phi \left( \hat{f}
\left( \mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right) ,t\right)
>\Psi \left( \hat{f}\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) ,t\right).
$$
We conclude from the above that
\begin{align*}
& \Pr_{\sigma }\left\{
\begin{array}{c}
\exists f\in \mathcal{F}:\Phi \left( f\left( \mathbf{\sigma },\mathbf{x},
\mathbf{x}^{\prime }\right) ,t\right) > \\
>\Psi \left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime
}\right) ,t\right) +\left( 2+8\sqrt{\frac{t}{n}}\right) \epsilon
\end{array}
\right\} \\ \\
& \leq \Pr_{\sigma }\big\{ \exists f\in \mathcal{F}_{0}:\Phi \left( f\left(
\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right) ,t\right) >\Psi
\left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right)
,t\right) \big\} \\
& \leq \sum_{f\in \mathcal{F}_{0}}\Pr_{\sigma }\big\{ \Phi \left( f\left(
\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right) ,t\right) >\Psi
\left( f\left( -\mathbf{\sigma },\mathbf{x},\mathbf{x}^{\prime }\right)
,t\right) \big\} \\
& \leq 5\mathcal{N}\left( \epsilon ,\mathcal{F},2n\right) e^{-t}\text{,}
\end{align*}
\noindent where we used Lemma \ref{Lemma probability bound} in the last step. We
arrive at the statement that
\begin{align*}
& \Pr_{\mathbf{X}}\left\{ \exists f\in \mathcal{F}:P\left( f,\mu \right)
\geq \Psi \left( f\left( \mathbf{X}\right) ,t\right) +\left( 2+8\sqrt{\frac{t
}{n}}\right) \epsilon \right\} \\
& \leq 10\mathcal{N}\left( \epsilon ,\mathcal{F},2n\right) e^{-t}.
\end{align*}
Equating this probability to $\delta $, solving for $t$, substituting $
\epsilon =1/n$ and using $8\sqrt{t/n}\leq 2t$, for $n\geq 16$ and $t\geq 1$,
give the result. \qed
We remark that a simplified version of the above argument gives uniform
bounds for the standard deviation $\sqrt{V\left( f,\mu \right) }$, using
Theorem \ref{Theorem realvalued} (\ref{Stdev lower bound}) and (\ref{Stdev
upper bound}).
\section{Sample variance penalization versus empirical risk minimization
\label{Section SVP vs ERM}}
Since empirical Bernstein bounds are observable, have estimation errors
which can be as small as $O\left( 1/n\right) $ for small sample variances,
and can be adjusted to hold uniformly over realistic function classes, they
suggest a method which minimizes the bounds of Corollary \ref{Corollary
empirical Bernstein finite function class} or Theorem \ref{Theorem covering
numbers}. Specifically we consider the algorithm
\begin{equation}
SVP_{\lambda }\left( \mathbf{X}\right) =\arg \min_{f\in \mathcal{F}
}P_{n}\left( f,\mathbf{X}\right) +\lambda \sqrt{\frac{V_{n}\left( f,\mathbf{X
}\right) }{n}}, \label{SVP definition}
\end{equation}
where $\lambda$ is a non-negative parameter. We call this method
sample variance penalization (SVP). Choosing the
regularization parameter $\lambda =0$ reduces the algorithm to
empirical risk minimization (ERM).
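To make the procedure concrete, here is a minimal sketch of the selection rule (\ref{SVP definition}) over a finite hypothesis class (an added illustration; the loss matrix is a hypothetical placeholder and Python with \texttt{numpy} is assumed):
\begin{verbatim}
import numpy as np

def svp(loss_matrix, lam):
    # loss_matrix[k, i]: loss of hypothesis k on example i, in [0, 1]
    # lam = 0 recovers empirical risk minimization
    n = loss_matrix.shape[1]
    p_n = loss_matrix.mean(axis=1)            # empirical risks
    v_n = loss_matrix.var(axis=1, ddof=1)     # sample variances
    return int(np.argmin(p_n + lam * np.sqrt(v_n / n)))

losses = np.random.rand(20, 100)              # 20 hypotheses, 100 examples
best_erm = svp(losses, lam=0.0)
best_svp = svp(losses, lam=2.5)
\end{verbatim}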
It is intuitively clear that SVP will be inferior to ERM if losses
corresponding to better hypotheses have larger variances than the worse
ones. But this seems to be a somewhat unnatural situation. If, on the other
hand, there are some optimal hypotheses of small variance, then SVP should
work well. To make this rigorous we provide a result, which can be used to
bound the excess risk of $SVP_{\lambda }$. Below we use Theorem \ref
{Theorem covering numbers}, but it is clear how the argument is to be
modified to obtain better constants for finite hypothesis spaces.
\begin{theorem}
\label{Theorem excess risk bound}Let $X$ be a random variable with values in
a set $\mathcal{X}$ with distribution $\mu $, and let $\mathcal{F}$ be a
class of hypotheses $f:\mathcal{X\rightarrow }\left[ 0,1\right] $. Fix $
\delta \in \left( 0,1\right) ,$ $n\geq 2$ and set $\mathcal{M}\left(
n\right) =10\mathcal{N}_{\infty }\left( 1/n,\mathcal{F},2n\right) $ and $
\lambda =\sqrt{18\ln \left( 3\mathcal{M}\left( n\right) /\delta \right) }$.
Fix $f^{\ast }\in \mathcal{F}$. Then with probability at least $1-\delta $
in the draw of $\mathbf{X}\sim \mu ^{n}$,
\begin{multline*}
P\left( SVP_{\lambda }\left( \mathbf{X}\right) ,\mu \right) -P\left( f^{\ast
},\mu \right) \\
\leq \sqrt{\frac{32V\left( f^{\ast },\mu \right) \ln \left( 3\mathcal{M}
\left( n\right) /\delta \right) }{n}} \\
+\frac{22\ln \left( 3\mathcal{M}\left( n\right) /\delta \right) }{n-1}.
\end{multline*}
\end{theorem}
\begin{proof}
Denote the hypothesis $SVP_{\lambda }\left( \mathbf{X}\right) $ by $\hat{f}$.
By Theorem \ref{Theorem covering numbers} we have with probability at
least $1-\delta /3$ that
\begin{eqnarray*}
P\left( \hat{f},\mu \right) &\leq &P_{n}\left( \hat{f},\mathbf{X}\right)
+\lambda \sqrt{\frac{V_{n}\left( \hat{f},\mathbf{X}\right) }{n}}+\frac{
15\lambda ^{2}}{18\left( n-1\right) } \\
&\leq &P_{n}\left( f^{\ast },\mathbf{X}\right) +\lambda \sqrt{\frac{
V_{n}\left( f^{\ast },\mathbf{X}\right) }{n}}+\frac{15\lambda ^{2}}{18\left(
n-1\right) }.
\end{eqnarray*}
The second inequality follows from the definition of $SVP_{\lambda }$. By
Bennett's inequality (Theorem \ref{Theorem Bernsteins inequality}) we have
with probability at least $1-\delta /3$ that
\begin{equation*}
P_{n}\left( f^{\ast },\mathbf{X}\right) \leq P\left( f^{\ast },\mu \right) +
\sqrt{\frac{2V\left( f^{\ast },\mu \right) \ln 3/\delta }{n}}+\frac{\ln
3/\delta }{3n}
\end{equation*}
and by Theorem \ref{Theorem realvalued} (\ref{Stdev lower bound}) we have
with probability at least $1-\delta /3$ that
\begin{equation*}
\sqrt{V_{n}\left( f^{\ast },\mathbf{X}\right) }\leq \sqrt{V\left( f^{\ast
},\mu \right) }+\sqrt{\frac{2\ln 3/\delta }{n-1}}.
\end{equation*}
Combining these three inequalities in a union bound and using $\ln \left( 3
\mathcal{M}\left( n\right) /\delta \right) \geq 1$ and some other crude but
obvious estimates, we obtain with probability at least $1-\delta $
\begin{eqnarray*}
P\left( \hat{f},\mu \right) &\leq &P\left( f^{\ast },\mu \right) +\sqrt{
\frac{32V\left( f^{\ast },\mu \right) \ln \left( 3\mathcal{M}\left( n\right)
/\delta \right) }{n}} \\
&&+\frac{22\ln \left( 3\mathcal{M}\left( n\right) /\delta \right) }{n-1}.
\end{eqnarray*}
\end{proof}
If we let $f^{\ast }$ be an optimal hypothesis we obtain a bound on the
excess risk. The square-root term in the bound scales with the standard
deviation of this hypothesis, which can be quite small. In particular, if
there is an optimal (minimal risk) hypothesis of zero variance, then the
excess risk of the hypothesis chosen by SVP decays as $\left( \ln \mathcal{M}
\left( n\right) \right) /n$. In the case of finite hypothesis spaces $
\mathcal{M}(n)=\left\vert \mathcal{F}\right\vert $ is independent of $n$ and
the excess risk then decays as $1/n$. Observe that apart from the
complexity bound on $\mathcal{F}$ no assumption such as convexity of
the function class or special properties of the loss functions were
needed to derive this result.
To demonstrate a potential competitive edge of SVP over ERM we will now give
a very simple example of this type, where the excess risk of the hypothesis
chosen by ERM is of order $O\left( 1/\sqrt{n}\right) $.
Suppose that $\mathcal{F}$ consists of only two hypotheses $\mathcal{F}
=\left\{ c_{1/2},b_{1/2+\epsilon }\right\} $. The underlying distribution $\mu $ is
such that $c_{1/2}\left( X\right) =1/2$ almost surely and $b_{1/2+\epsilon
}\left( X\right) $ is a Bernoulli variable with expectation $1/2+\epsilon $,
where $\epsilon \leq 1/\sqrt{8}$. The hypothesis $c_{1/2}$ is optimal and
has zero variance, the hypothesis $b_{1/2+\epsilon }$ has excess risk $
\epsilon $ and variance $1/4-\epsilon ^{2}$. We are given an i.i.d. sample $
\mathbf{X}=\left( X_{1},\dots,X_{n}\right) \sim \mu ^{n}$ on which we are to
base the selection of either hypothesis.
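A short simulation of this construction (added for illustration, not part of the original argument; it assumes Python with \texttt{numpy}, and the chosen constants are ours) makes the comparison tangible:
\begin{verbatim}
import numpy as np

def picks_inferior(n, eps, lam, rng):
    # True if the Bernoulli hypothesis b_{1/2+eps} is selected
    b = rng.binomial(1, 0.5 + eps, size=n).astype(float)
    c = np.full(n, 0.5)
    def score(x):
        return x.mean() + lam * np.sqrt(np.var(x, ddof=1) / n)
    return score(b) < score(c)

rng = np.random.default_rng(0)
n, eps, trials = 200, 0.05, 2000
erm_rate = np.mean([picks_inferior(n, eps, 0.0, rng) for _ in range(trials)])
svp_rate = np.mean([picks_inferior(n, eps, 2.5, rng) for _ in range(trials)])
# erm_rate is typically noticeably larger than svp_rate
\end{verbatim}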
It follows from the previous theorem (with $f^{\ast }=c_{1/2}$), that the
excess risk of $SVP_{\lambda }$ decays as $1/n$, for suitably chosen $
\lambda $. To make our point we need to give a lower bound for the excess
risk of empirical risk minimization. We use the following inequality due to
Slud which we cite in the form given in \cite[p.~363]{Anthony 1999}.
\begin{theorem}
\label{Sluds inequality}Let $B$ be a binomial $\left( n,p\right) $ random
variable with $p\leq 1/2$ and suppose that $np\leq t\leq n\left( 1-p\right)$.
Then
\begin{equation*}
\Pr \left\{ B>t\right\} \geq \Pr \left\{ Z>\frac{t-np}{\sqrt{np\left(
1-p\right) }}\right\} ,
\end{equation*}
where $Z$ is a standard normal $N\left( 0,1\right) $-distributed random
variable.
\end{theorem}
Now ERM selects the inferior hypothesis $b_{1/2+\epsilon }$ if
$$
P_{n}\left(b_{1/2+\epsilon },\mathbf{X}\right) <P_{n}\left( c_{1/2},\mathbf{X}\right)
=1/2.
$$
We therefore obtain from Theorem \ref{Sluds inequality}, with
$$
B=n\left( 1-P_{n}\left( b_{1/2+\epsilon }\left( \mathbf{X}\right) \right)
\right),
$$
$p=1/2-\epsilon $ and $t=n/2$ that
\begin{eqnarray*}
\Pr \left\{ ERM\left( \mathbf{X}\right) =b_{1/2+\epsilon }\right\} &=&\Pr
\left\{ P_{n}\left( b_{1/2+\epsilon }\left( \mathbf{X}\right) \right)
<1/2\right\} \\
&\geq &\Pr \left\{ B>t\right\} \\
&\geq &\Pr \left\{ Z>\frac{\sqrt{n}\epsilon }{\sqrt{1/4-\epsilon ^{2}}}
\right\}
\end{eqnarray*}
A well known bound for standard normal random variables gives for $\eta >0$
\begin{eqnarray*}
\Pr \left\{ Z>\eta \right\} &\geq &\frac{1}{\sqrt{2\pi }}\frac{\eta }{
1+\eta ^{2}}\exp \left( \frac{-\eta ^{2}}{2}\right) \\
&\geq &\exp \left( -\eta ^{2}\right) ,\text{ if }\eta \geq 2.
\end{eqnarray*}
If we assume $n\geq \epsilon ^{-2}$ we have $\sqrt{n}\epsilon /\sqrt{
1/4-\epsilon ^{2}}\geq 2$, so
\begin{equation*}
\Pr \left\{ ERM\left( \mathbf{X}\right) =b_{1/2+\epsilon }\right\} \geq \exp
\left( -\frac{n\epsilon ^{2}}{1/4-\epsilon ^{2}}\right) \geq e^{-8n\epsilon
^{2}},
\end{equation*}
where we used $\epsilon \leq 1/\sqrt{8}$\ in the last inequality. Since this
is just the probability that the excess risk is $\epsilon $ we arrive at the
following statement: For every $n\geq \epsilon ^{-2}$ there exists $\delta $
($=e^{-8n\epsilon ^{2}}$) such that the excess risk of the hypothesis
generated by ERM is at least
\begin{equation*}
\epsilon =\sqrt{\frac{\ln 1/\delta }{8n}},
\end{equation*}
with probability at least $\delta $. Therefore the excess risk for ERM
cannot have a faster rate than $O\left( 1/\sqrt{n}\right) $.
This example is of course a very artificial construction, chosen as a simple
illustration. It is clear that the conclusions do not change if we add any
number of deterministic hypotheses with risk larger than $1/2$ (they simply
have no effect), or if we add any number of Bernoulli hypotheses with risk
at least $1/2+\epsilon $ (they just make things worse for ERM).
\begin{figure}
\caption{Comparison of the excess risks of
the hypotheses returned by ERM (circled line) and SVP with $\protect\lambda
=2.5$ for different sample sizes.}
\label{Experimental Findings}
\end{figure}
To obtain a more practical
insight into the potential advantages of SVP we have conducted a simple
experiment, where $\mathcal{X}=\left[ 0,1\right] ^{K}$ and the random
variable $X\in \mathcal{X}$ is distributed according to $\prod_{k=1}^{K}\mu
_{a_{k},b_{k}}$ where
$$\mu _{a,b}=\left( 1/2\right) \left( \delta
_{a-b}+\delta _{a+b}\right).
$$
Each coordinate $\pi _{k}\left( X\right) $ of
$X$ is thus a binary random variable, assuming the values $a_{k}-b_{k}$ and $
a_{k}+b_{k}$ with equal probability, having expectation $a_{k}$ and variance
$b_{k}^{2}$.
The distribution of $X$ is itself generated at random by selecting the pairs
$\left( a_{k},b_{k}\right) $ independently: $a_{k}$ is chosen from the
uniform distribution on $\left[ B,1-B\right] $ and the standard deviation $
b_{k}$ is chosen from the uniform distribution on the interval $\left[ 0,B
\right] $. Thus $B$ is the only parameter governing the generation of the
distribution.
As hypotheses we just take the $K$ coordinate functions $\pi _{k}$ in $\left[
0,1\right] ^{K}$. Selecting the $k$-th hypothesis then just means that we
select the corresponding distribution $\mu _{a_{k},b_{k}}$. Of course we
want to find a hypothesis of small risk $a_{k}$, but we can only observe $
a_{k}$ through the corresponding sample, the observation being obscured by
the variance $b_{k}^{2}$.
We chose $B=1/4$ and $K=500$. We tested the algorithm (\ref{SVP
definition}) with $\lambda =0$, corresponding to ERM, and $\lambda
=2.5$. The sample sizes ranged from 10 to 500. We recorded the true
risks of the respective hypotheses generated, and averaged these risks
over 10000 randomly generated distributions. The results are reported
in Figure \ref{Experimental Findings} and show clearly the advantage
of SVP in this particular case. It must however be pointed out that
this advantage, while being consistent, is small compared to the risk
of the optimal hypotheses (around $1/4$).
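For reproducibility, a compact sketch of this experiment follows (an added illustration; it matches the constants described above, but the implementation details, number of trials and random seed are our own assumptions):
\begin{verbatim}
import numpy as np

def run_trial(n, K=500, B=0.25, lam=2.5, rng=None):
    # one random distribution; returns excess risks of ERM and SVP
    if rng is None:
        rng = np.random.default_rng()
    a = rng.uniform(B, 1 - B, size=K)     # true risks a_k
    b = rng.uniform(0, B, size=K)         # standard deviations b_k
    signs = rng.choice([-1.0, 1.0], size=(n, K))
    X = a + signs * b                     # sample of the K coordinates
    p_n = X.mean(axis=0)
    v_n = X.var(axis=0, ddof=1)
    erm = int(np.argmin(p_n))
    svp = int(np.argmin(p_n + lam * np.sqrt(v_n / n)))
    return a[erm] - a.min(), a[svp] - a.min()

rng = np.random.default_rng(0)
risks = np.array([run_trial(n=50, rng=rng) for _ in range(1000)])
print("ERM:", risks[:, 0].mean(), "SVP:", risks[:, 1].mean())
\end{verbatim}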
If we try to extract a practical conclusion from Theorem \ref{Theorem excess
risk bound}, our example and the experiment, then it appears that SVP might
be a good alternative to ERM, whenever the optimal members of the hypothesis
space still have substantial risk (for otherwise ERM would do just as well),
but there are optimal hypotheses of very small variance. These two
conditions seem to be generic for many noisy situations: when the noise
arises from many independent sources, but does not depend too much on any
single source, then the loss of an optimal hypothesis should be sharply
concentrated around its expectation (e.g. by the bounded difference
inequality - see \cite{McDiarmid 1998}), resulting in a small variance.
\section{Application to sample compression}
\label{sec:SC}
Sample compression schemes \cite{Littlestone 1986} provide an elegant method
to reduce a potentially very complex function class to a finite,
data-dependent subclass. With $\mathcal{F}$ being as usual, assume that some
algorithm $A$ is already specified by a fixed function
\begin{equation*}
A:\mathbf{X}\in \bigcup_{n=1}^{\infty }\mathcal{X}^{n}\mapsto A_{\mathbf{X}
}\in \mathcal{F}.
\end{equation*}
The function $A_{S}$ can be interpreted as the hypothesis chosen by the
algorithm on the basis of the training set $S$, composed with the fixed loss
function. For $x\in \mathcal{X}$ the quantity $A_{S}\left( x\right) $ is
thus the loss incurred by training the algorithm from $S$ and applying the
resulting hypothesis to $x$.
The idea of sample compression schemes \cite{Littlestone 1986} is to
train the algorithm on subsamples of the training data and to use the
remaining data points for testing. A comparison of the different
results then leads to the choice of a subsample and a corresponding
hypothesis. If this hypothesis has small risk, we can say that the
problem-relevant information of the sample is present in the subsample
in a compressed form, hence the name.
Since the method is crucially dependent on the quality of the individual
performance estimates, and empirical Bernstein bounds give tight, variance
sensitive estimates, a combination of sample compression and SVP is
promising. For simplicity we only consider compression sets of a fixed size $
d$. We introduce the following notation for a subset $I\subset \left\{
1,\dots,n\right\} $ of cardinality $\left\vert I\right\vert =d$.
\begin{itemize}
\item $A_{\mathbf{X}[{I}]}=$ the hypothesis trained with $A$ from the
subsample $\mathbf{X}[{I}]$ consisting of those examples whose indices lie in
$I$.
\item For $f\in \mathcal{F}$, we let
$$P_{I^{c}}\left( f\right) =P_{n-d}\left(
f\left( \mathbf{X}[I^{c}]\right) \right)=\frac{1}{n-d}\sum_{i\notin
I}f\left( X_{i}\right),$$
the empirical risk of $f$ computed on the
subsample $\mathbf{X}[{I^{c}}]$ consisting of those examples whose
indices do not lie in $I$.
\item For $f\in \mathcal{F}$, we let
\begin{eqnarray}
\nonumber
V_{I^{c}}(f) & \hspace{-.21truecm}= \hspace{-.21truecm}&V_{n-d}\left(
f\left( \mathbf{X}[{I^{c}}]\right) \right)~~~ \\
\nonumber
~ &
\hspace{-.21truecm} = \hspace{-.21truecm}&\frac{1}{2(n-d)(n-d-1)}
\hspace{-.1truecm} \sum_{i,j\notin I}\left( f\left( X_{i}\right) -f\left( X_{j}\right)
\right) ^{2}\hspace{-.15truecm},~~~
\end{eqnarray}
the sample variance of $f$ computed on $\mathbf{X}[{I^{c}}]$.
\item $\mathcal{C}=$ the collection of subsets $I\subset \left\{
1,\dots,n\right\} $ of cardinality $\left\vert I\right\vert =d$.
\end{itemize}
\noindent With this notation we define our sample compression scheme as
\begin{eqnarray*}
SVP_{\lambda }\left( \mathbf{X}\right) &=&A_{\mathbf{X}[{\hat{I}}]} \\
\hat{I} &=&\arg \min_{I\in \mathcal{C}}P_{I^{c}}\left( A_{\mathbf{X}[
{I}]}\right) +\lambda \sqrt{V_{I^{c}}\left( A_{\mathbf{X}[{I}]}\right) }.
\end{eqnarray*}
As usual, $\lambda =0$ gives the classical sample compression schemes. The
performance of this algorithm can be guaranteed by the following result.
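Before stating it, here is a schematic sketch of the selection rule itself (an added illustration; the training routine \texttt{train} and the loss function \texttt{loss} are hypothetical placeholders, and Python with \texttt{numpy} is assumed):
\begin{verbatim}
import itertools
import numpy as np

def compression_svp(X, train, loss, d, lam):
    # pick the size-d compression set minimizing the penalized
    # hold-out estimate P_{I^c} + lam * sqrt(V_{I^c})
    n = len(X)
    best_I, best_score = None, np.inf
    for I in itertools.combinations(range(n), d):
        rest = [i for i in range(n) if i not in I]
        h = train([X[i] for i in I])      # hypothesis from the subsample
        losses = np.array([loss(h, X[i]) for i in rest])
        score = losses.mean() + lam * np.sqrt(losses.var(ddof=1))
        if score < best_score:
            best_I, best_score = I, score
    return train([X[i] for i in best_I])
\end{verbatim}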
\begin{theorem}
With the notation introduced above fix $\delta \in \left( 0,1\right) ,$ $
n\geq 2$ and set $\lambda =\sqrt{2\ln \left( 6\left\vert \mathcal{C}
\right\vert /\delta \right) }$. Then with probability at least $1-\delta $
in the draw of $\mathbf{X}\sim \mu ^{n}$, we have for every $I^{\ast }\in
\mathcal{C}$
\begin{multline*}
P\left( SVP_{\lambda }\left( \mathbf{X}\right) ,\mu \right) -P\left( A_{
\mathbf{X}[{I^*}]},\mu \right) \\
\leq \sqrt{\frac{8V\left( A_{\mathbf{X}[{I^{\ast }}]},\mu \right) \ln \left(
6\left\vert \mathcal{C}\right\vert /\delta \right) }{n-d}}+\frac{14\ln
\left( 6\left\vert \mathcal{C}\right\vert /\delta \right) }{3\left(
n-d-1\right) }
\end{multline*}
\end{theorem}
\begin{proof} Use a union bound and Theorem \ref{Theorem empirical Bernstein bound degree
1} to obtain an empirical Bernstein bound uniformly valid over all $A_{
\mathbf{X}[{I}]}$ with $I\in \mathcal{C}$ and therefore also valid for $
SVP_{\lambda }\left( \mathbf{X}\right) $. Then follow the proof of Theorem
\ref{Theorem excess risk bound}. Since now $I^{\ast }\in \mathcal{C}$ is
chosen \textit{after} seeing the sample, uniform versions of Bennett's
inequality and Theorem \ref{Theorem realvalued} (\ref{Stdev lower bound})
have to be used, and are again readily obtained with union bounds over $
\mathcal{C}$.
\end{proof}
The interpretation of this result as an excess risk bound is more subtle
than for Theorem \ref{Theorem excess risk bound}, because the optimal
hypothesis is now sample-dependent. If we define
\begin{equation*}
I^{\ast }=\arg \min_{I\in \mathcal{C}}P\left( A_{\mathbf{X}[{I}]},\mu \right)
,
\end{equation*}
then the theorem tells us how close we are to the choice of the optimal
subsample. This will be considerably better than what we get from Hoeffding's inequality if the variance $V\left( A_{\mathbf{X}[{I^{\ast }}]},{\Greekmath 0116}
\right) $ is small and sparse solutions are sought in the sense that $d/n$
is small (observe that $\ln \left\vert \mathcal{C}\right\vert \leq d\ln
\left( ne/d\right) $).
This type of relative excess risk bound is of course more useful if the
minimum $P\left( A_{\mathbf{X}[{I^{\ast }}]},{\Greekmath 0116} \right) $ is close to some
true optimum arising from some underlying generative model. In this case we
can expect the loss $A_{\mathbf{X}[{I^{\ast }}]}$ to behave like a noise
variable centered at the risk $P\left( A_{\mathbf{X}[{I^{\ast }}]},\mu
\right) $. If the noise arises from many independent sources, each of which
makes only a small contribution, then $A_{\mathbf{X}[{I^{\ast }}]}$ will be
sharply concentrated and have a small variance $V\left( A_{\mathbf{X}
[{I^{\ast }}]},\mu \right) $, resulting in tight control of the excess
risk.
\iffalse
As an example consider sparse Kernel PCA \cite{Hussain 2008}. Here $\mathcal{
X}$ is the unit ball of some Hilbert space and $T_{\mathbf{X}^{I}}$ is an
orthogonal projection operator determined from some compressed subsample $
\mathbf{X}^{I}$. The corresponding loss $A_{\mathbf{X}^{I}}$ is just the
squared reconstruction error
\begin{equation*}
A_{\mathbf{X}^{I}}\left( x\right) =\left\Vert T_{\mathbf{X}
^{I}}x-x\right\Vert ^{2}.
\end{equation*}
Suppose $X=Y+Z$, where $Y$ takes values in a $d$-dimensional subspace $M$
and corresponds to \RIfM@\expandafter\text@\else\expandafter\mbox\fiquotedblright useful data\RIfM@\expandafter\text@\else\expandafter\mbox\fiquotedblright , while $
Z$ is independent of and orthogonal to $Y$, essentially high dimensional and
corresponds to centered noise. The optimal hypothesis is of course the
orthogonal projection $P_{M}$ onto $M$, the corresponding loss being simply $
\left\Vert Z\right\Vert ^{2}$. If sample compression works at all, then the
sample $\mathbf{X}$ contains some subsample $\mathbf{X}^{I^{\ast }}$ such
that $\mathbb{E}_{YZ}\left( A_{\mathbf{X}^{I^{\ast }}}\left( Y+Z\right)
-\left\Vert Z\right\Vert ^{2}\right) ^{2}$ is small, so $\mathbb{V}
_{YZ}\left( A_{\mathbf{X}^{I^{\ast }}}\left( Y+Z\right) \right) \approx
\mathbb{V}\left\Vert Z\right\Vert ^{2}$.
For simplicity let us assume that $Z$ is a truncated Gaussian, $Z=Z^{\prime }
$ if $\left\Vert Z^{\prime }\right\Vert \leq 1$, $Z=Z^{\prime }/\left\Vert
Z^{\prime }\right\Vert $ else, where $Z^{\prime }$ is Gaussian with
covariance operator $C$. We can define an effective dimension of $Z$ as the
quotient
\begin{equation*}
d_{e}=tr\left( C\right) ^{2}/tr\left( C^{2}\right) =\left( \mathbb{E}
\left\Vert Z\right\Vert ^{2}\right) ^{2}/tr\left( C^{2}\right) .
\end{equation*}
because it gives us $d_{e}=N$ for an isotropic distribution on an $N$
-dimensional subspace, and our definition just extends this fact to
nonisotropic distributions supported on infinite dimensional spaces. It is
then not hard to show that
\begin{equation*}
\mathbb{V}\left( Z\right) \leq 2\left( \mathbb{E}\left\Vert Z^{\prime
}\right\Vert ^{2}\right) ^{2}/d_{e}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.}
\end{equation*}
If the dimension $d_{e}$ is large, the variance $\mathbb{V}_{X}\left( A_{
\mathbf{X}^{I^{\ast }}}\left( X\right) \right) $ can therefore be very
small, and also considerably smaller than the lower bound $\mathbb{E}
\left\Vert Z^{\prime }\right\Vert ^{2}$ on the risk .
Other examples of sample compression schemes, to which SVP can be applied,
include clustering by selecting the centers from the sample, or kernel
methods, when sparsity has to be enforced for performance reasons.
\fi
\iffalse
\section{Empirical Bernstein bounds for U-statistics\label{Section
U-statistics}}
The above results can be applied if the loss $\ell _{h}$ depends on a single
observation, but there are cases, when it takes more than one observation to
determine the loss incurred by a hypothesis.
In ranking \cite{Clemencon Lugosi 2008}, for example, one observes pairs $
X=\left( X_{in},Y\right) $ of inputs $X_{in}$ and numbers $Y$ and seeks a $
\left\{ -1,1\right\} $-valued rule $r=r\left( X_{in},X_{in}^{\prime }\right)
$ on pairs of inputs, which, given two independent observations $X=\left(
X_{in},Y\right) $ and $X^{\prime }=\left( X_{in}^{\prime },Y^{\prime
}\right) $ reliably predicts if $Y>Y^{\prime }$, that is if $r\left(
X_{in},X_{in}^{\prime }\right) =sgn\left( Y-Y^{\prime }\right) $. A loss
function corresponding to this rule would be
\begin{equation*}
\ell _{r}\left( X,X^{\prime }\right) =\mathbb{I}_{\left\{ \left( Y-Y^{\prime
}\right) r\left( X_{in},X_{in}^{\prime }\right) <0\right\} }\left(
X,X^{\prime }\right) .
\end{equation*}
The associated ranking risk $P\left( \ell _{r},{\Greekmath 0116} \right) =\mathbb{E}
_{X,X^{\prime }\sim {\Greekmath 0116} ^{2}}\ell _{r}\left( X,X^{\prime }\right) $ is just
the probability that the rule $r$ makes an error. Similarity learning gives
a similar example, where the $Y$s are discrete labels and a rule $s$ has to
predict equality of the labels $\left( Y,Y^{\prime }\right) $ from two
observed inputs $\left( X_{in},X_{in}^{\prime }\right) ,$leading to a rather
similar loss function. To control generalization it may in both cases be
convenient to replace the loss by some Lipschitz upper bounds, such as a the
hinge loss.
There are also cases where $\ell _{h}$ depends on more than two independent
observations. One example is cross validation to adjust the parameters of
some learning algorithm $A$ to be trained on $d-1$ examples (we identify $A$
with its parametrization). A correponding loss would be
\begin{equation*}
\ell _{A}\left( X_{1},\dots,X_{d}\right) =l\left( A\left(
X_{1},\dots,X_{d-1}\right) ,X_{d}\right) .
\end{equation*}
Here $A\left( X_{1},\dots,X_{d}\right) $ is the hypothesis generated by the
algorithm $A$ from the i.i.d. sample $\left( X_{1},\dots,X_{d}\right) $ and $l$
measures its loss on the independent test-observation $X_{d}$. The risk
associated with the algorithm $A$ is
\begin{eqnarray*}
P\left( \ell _{A},{\Greekmath 0116} \right) &=&\mathbb{E}_{X_{1},\dots,X_{d}\sim {\Greekmath 0116}
^{d}}\ell _{A}\left( X_{1},\dots,X_{d}\right) \\
&=&\mathbb{E}_{X_{1},\dots,X_{d-1}\sim {\Greekmath 0116} ^{d-1}}\left[ \mathbb{E}_{X_{d}\sim
{\Greekmath 0116} }l\left( A\left( X_{1},\dots,X_{d-1}\right) ,X_{d}\right) \right] .
\end{eqnarray*}
In all these cases the risk $P\left( \ell _{f},\cdot \right) $ associated
with a given hypothesis $f$ is a regular parameter \cite{Hoeffding 1948} in
the sense of the following definition.
\begin{definition}
Let $\mathcal{D}$ be a class of distributions on a space $\mathcal{X}$. A
function $\Pi :\mathcal{D\rightarrow
\mathbb{R}}$ is a regular parameter of degree $d$ if there exists a mble function $q:
\mathcal{X}^{d}\rightarrow \mathcal{
\mathbb{R}}$, the so called kernel, such that
\begin{equation*}
\Pi \left( {\Greekmath 0116} \right) =P\left( q,{\Greekmath 0116} \right) =\mathbb{E}_{X_{1},\dots,X_{d}
\sim {\Greekmath 0116} ^{d}}q\left( X_{1},\dots,X_{d}\right) ,\forall {\Greekmath 0116} \in \mathcal{D}
\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.}
\end{equation*}
\end{definition}
We can use the functions $\ell _{h}$, $\ell _{r}$, $\ell _{s}$, $\ell _{A}$
introduced in the above examples as kernels to see that the risk usually
considered in learning theory, the ranking risk, the risk in similarity
learning, and the cross-validation risk associated with an algorithm are all
regular parameters of degrees 1, 2, 2 and $d$ respectively.
If $P\left( q,.\right) $ is a regular parameter of degree $d$ then the
variance $V\left( q,\cdot \right) $ of $q$
\begin{eqnarray*}
V\left( q,{\Greekmath 0116} \right) &=&P\left( g,{\Greekmath 0116} \right) \\
q_{v}^{\prime }\left( \mathbf{x}\right) &=&q\left( x_{1},\dots,x_{d}\right)
^{2}-q\left( x_{1},\dots,x_{d}\right) q\left( x_{d+1},\dots,x_{2d}\right)
\end{eqnarray*}
is a regular parameter of degree $2d$ with kernel $q_{v}^{\prime }$.
Since the $X_{i}$ are i.i.d. we can always assume $q$ to be permutation
symmetric in its arguments without changing $\Pi $. We can thus replace any
kernel by its symmetrized version, given by its average over all
permutations of the arguments. In the context of cross validation, if the
algorithm $A$ is symmetric, this symmetrized version of $\ell _{A}$ is the
familiar leave-one-out estimator.
We denote the symmetrized form of the variance kernel $q_{v}^{\prime }$ by $
q_{v}$. If $q$ has degree one (only one argument) then
\begin{equation*}
q_{v}\left( x_{1},x_{2}\right) =\frac{\left( q\left( x_{1}\right) -q\left(
x_{2}\right) \right) ^{2}}{2}.
\end{equation*}
In the following we will only consider symmetric kernels.
If $q$ is a symmetric kernel of degree $d$ then an unbiased estimate for the
associated parameter is obtained by averaging $q$ over all size-$d$ subsets
of the available observations.
\begin{definition}
For a measurable symmetric kernel $q:\mathcal{X}^{d}\rightarrow \mathcal{
\mathbb{R}
}$ and a vector $\mathbf{x}=\left( x_{1},\dots,x_{n}\right) \in \mathcal{X}
^{n} $ of size $n\geq d$ the U-statistic is the function $P_{n}\left(
q,\cdot \right) :\mathcal{X}^{n}\rightarrow \mathcal{
\mathbb{R}
}$ defined by
\begin{equation*}
P_{n}\left( q,\mathbf{x}\right) =\binom{n}{d}^{-1}\sum_{1\leq
i_{1}<\dots<i_{d}\leq n}q\left( x_{i_{1}},\dots,x_{i_{d}}\right) ,\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ for }
\mathbf{x}\in \mathcal{X}^{n}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.}
\end{equation*}
\end{definition}
For a hypothesis $f$, loss $\ell _{f}$ and an i.i.d. sample $\mathbf{X}$ the
U-statistic $P_{n}\left( \ell _{f},\mathbf{X}\right) $ is a generalization
of the usual empirical risk. For any kernel $q$ we denote $V_{n}\left( q,
\mathbf{x}\right) =P_{n}\left( q_{v},\mathbf{x}\right) $. If $q$ has degree
1 and $\mathbf{X}$ then $V_{n}\left( q,\mathbf{x}\right) $ is just the
familiar sample variance.
Empirical Bernstein bounds for general regular parameters follow from the
following Theorem (due to Hoeffding \cite{Hoeffding 1963}), which extends
Bennett's inequality to U-statistics. Just as with Theorem \ref{Theorem
Bernsteins inequality} we quote it in a weakened form as a confidence
dependent bound (also using $\left\lfloor n/d\right\rfloor \geq \left(
n-d\right) /d$).
\begin{theorem}
\label{Theorem Hoeffding Bennet}Let $q$ be a symmetric kernel with $d\leq n$
arguments and values in $\left[ 0,1\right] $. For ${\Greekmath 010E} >0$ we have with
probability at least $1-{\Greekmath 010E} $ in $\mathbf{X}\sim {\Greekmath 0116} ^{n}$ that
\begin{equation*}
P\left( q,{\Greekmath 0116} \right) -P_{n}\left( q,\mathbf{X}\right) \leq \sqrt{\frac{
2dV\left( q,{\Greekmath 0116} \right) \ln 1/{\Greekmath 010E} }{n-d}}+\frac{d\ln 1/{\Greekmath 010E} }{3\left(
n-d\right) }.
\end{equation*}
\end{theorem}
Since $q$ has values in $\left[ 0,1\right] $ the variance $V\left( q,{\Greekmath 0116}
\right) $ is bounded by the expectation $P\left( q,{\Greekmath 0116} \right) $, which can
therefore be substituted in the above inequality. This substitution,
exchanging the empirical risk and the root term and completing the square on
the left lead to
\begin{corollary}
\label{Corollary Selfbound U}Let $q$ be a symmetric kernel with $d\leq n$
arguments and values in $\left[ 0,B\right] $ and let ${\Greekmath 010E} >0$. Then with
probability at least $1-{\Greekmath 010E} $ in $\mathbf{X}\sim {\Greekmath 0116} ^{n}$ we have
\begin{eqnarray*}
\sqrt{P\left( q,{\Greekmath 0116} \right) } &\leq &\sqrt{P_{n}\left( q,\mathbf{X}\right) +
\frac{5dB\ln 1/{\Greekmath 010E} }{6\left( n-d\right) }}+\\
&&\sqrt{\frac{dB\ln 1/{\Greekmath 010E} }{
2\left( n-d\right) }}
\end{eqnarray*}
and
\begin{eqnarray*}
P\left( q,{\Greekmath 0116} \right) -P_{n}\left( q,\mathbf{X}\right) &\leq &\sqrt{\frac{
2dBP_{n}\left( q,\mathbf{X}\right) \ln 1/{\Greekmath 010E} }{n-d}}+\\
&&\frac{2dB\ln 1/{\Greekmath 010E} }{n-d}.
\end{eqnarray*}
\end{corollary}
In the context of learning theory the second inequality means that the risk $
P\left( \ell _{f},{\Greekmath 0116} \right) $ of a hypothesis with small empirical risk $
P_{n}\left( \ell _{f},\mathbf{X}\right) $ is more accurately estimated by
the empirical risk, up to $O\left( 1/n\right) $ for consistent hypotheses.
In a noisy situation, however, the empirical risk of a near optimal
hypothesis $f$ may still be quite different from zero. But if the noise
arises from many independent sources making comparable contributions to the
loss $\ell _{f}$ the distribution of $\ell _{f}$ will be sharply
concentrated, so that the variance will be much smaller than the
expectation, making the original bound of Theorem \ref{Theorem Hoeffding
Bennet} tighter, albeit unobservable.
To obtain an empirical Bernstein bound for U-statistics we simply apply the
first inequality in Corollary \ref{Corollary Selfbound U} to the variance $
V\left( q,{\Greekmath 0116} \right) =P\left( q_{v},{\Greekmath 0116} \right) $ and use a union bound to
combine the result with Bennett's inequality Theorem \ref{Theorem Hoeffding
Bennet}. With some simple numeric estimates this gives.
\begin{theorem}
\label{Theorem empirical Bernstein bound U-statistics}Let $q$ be a symmetric
kernel with $d$ arguments and values in $\left[ 0,1\right] $. For ${\Greekmath 010E} >0$
we have with probability at least $1-{\Greekmath 010E} $ in $\mathbf{X}\sim {\Greekmath 0116} ^{n}$
\begin{equation*}
P\left( q,{\Greekmath 0116} \right) -P_{n}\left( q,\mathbf{X}\right) \leq \sqrt{\frac{
2dV_{n}\left( q,\mathbf{X}\right) \ln 2/{\Greekmath 010E} }{n-d}}+\frac{8d\ln 2/{\Greekmath 010E}
}{3\left( n-2d\right) }.
\end{equation*}
\end{theorem}
Uniform extensions to finite classes of kernels or classes with polynomially
growing uniform covering numbers are immediate. We could not arrive at a
result analogous to Theorem \ref{Theorem covering numbers} for general
U-statistics, because it seems difficult to obtain the required version of
Bennett's inequality (Theorem \ref{Theorem Hoeffding Bennet}) for random
variables which are not identically distributed.
If we specialize the Theorem to degree $d=1$ and a $\left[ 0,1\right] $
-valued random variable $X$ we obtain with probability at least $1-{\Greekmath 010E} $
the bound
\begin{equation*}
\mathbb{E}X-P_{n}\left( \mathbf{X}\right) \leq \sqrt{\frac{2V_{n}\left(
\mathbf{X}\right) \ln 2/{\Greekmath 010E} }{n-1}}+\frac{8\ln 2/{\Greekmath 010E} }{3\left(
n-2\right) },
\end{equation*}
which is already stronger than the bound in \cite{Audibert 2007}, and not
quite as good as Theorem \ref{Theorem empirical Bernstein bound degree 1}.
Hoeffding's result (Theorem \ref{Theorem Hoeffding Bennet}) is not
the strongest bound for U-statistics. Recent results (see
\cite{Adamczak 2007} and \cite{Clemencon Lugosi 2008}) are stronger,
but do not appear well suited for conversion to empirical
bounds. Better results are possible because the variance of a
U-statistic $P_{n}\left( q,\mathbf{X}\right) $ can for large $n$ be
considerably smaller than $V\left( q,\mu \right) /n$. If $n$ grows
very large this variance becomes dominated by a term of the form
$d^{2}V_{1}\left( q,\mu \right) /n$ where $V_{1}\left( q,\mu \right) $
is the variance of the conditional expectation,
\begin{equation*}
V_{1}\left( q,\mu \right) =\mathbb{E}\left[ \mathbb{E}q\left(
X_{1},\dots,X_{d}\right) |X_{1}\right] ^{2}-\left[ \mathbb{E}q\left(
X_{1},\dots,X_{d}\right) \right] ^{2}.
\end{equation*}
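As an illustration (this example is ours and does not appear in the original
argument): for the degree-2 kernel $q\left( x_{1},x_{2}\right) =\frac{1}{2}
\left( x_{1}-x_{2}\right) ^{2}$, whose U-statistic is the sample variance, one
computes $\mathbb{E}\left[ q\left( X_{1},X_{2}\right) |X_{1}\right] =\frac{1}{2}
\left( \left( X_{1}-\mathbb{E}X\right) ^{2}+\sigma ^{2}\right) $, where $\sigma
^{2}$ denotes the variance of $X_{1}$, so that
\begin{equation*}
V_{1}\left( q,\mu \right) =\frac{1}{4}\left( \mathbb{E}\left( X_{1}-\mathbb{E}
X\right) ^{4}-\sigma ^{4}\right) ,
\end{equation*}
which, by the conditional variance decomposition, is never larger than
$V\left( q,\mu \right) $.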
This domination has the effect that the asymptotic distribution of
$\sqrt{n} \left( P_{n}\left( q,\mathbf{X}\right) -P\left( q,\mu
\right) \right) $ is a centered normal distribution with variance
$d^{2}V_{1}\left( q,\mu
\right) $. We can therefore expect correspondingly improved tail-bounds.
For U-statistics of degree 2 such a result has been given by Gin\'e
and de la Pe\~{n}a \cite{Gine}, who give a Bernstein type bound with
$V_{1}\left( q,\mu \right) $ in place of the variance. It is not hard
to see that $V_{1}\left( q,\mu \right) $ is a regular parameter of
degree $2d$ so that the result in \cite{Gine} can be made into an
empirical Bernstein bound for regular parameters of degree 2,
potentially much tighter than Theorem \ref{Theorem empirical Bernstein
bound U-statistics}.
\fi
\section{Conclusion}
We presented sample variance penalization as a potential alternative to
empirical risk minimization and analyzed some of its statistical properties
in terms of empirical Bernstein bounds and concentration properties of the
empirical standard deviation. The promise of our method is that,
in simple but perhaps practical scenarios, its excess risk
is guaranteed to be substantially smaller than that of empirical
risk minimization.
The present work raises some questions. Perhaps the most pressing
issue is to find an efficient implementation of the method, to deal
with the fact that sample variance penalization is non-convex in many
situations when empirical risk minimization is convex, and to compare
the two methods on some real-life data sets. Another important issue
is to further investigate the application of empirical Bernstein
bounds to sample compression schemes.
\end{document}
\begin{document}
\title[Fully nonlinear dead-core systems]{Fully nonlinear dead-core systems}
\author[D.J. Ara\'ujo]{Dami\~ao J. Ara\'ujo}
\address{Department of Mathematics, Federal University of Para\'iba, 58059-900, Jo\~ao Pessoa, PB, Brazil}
\email{araujo@mat.ufpb.br}
\author[R. Teymurazyan]{Rafayel Teymurazyan}
\address{University of Coimbra, CMUC, Department of Mathematics, 3001-501 Coimbra, Portugal}
\email{rafayel@utexas.edu}
\begin{abstract}
We study fully nonlinear dead-core systems coupled with strong absorption terms. We discover a chain reaction, exploiting properties of an equation along the system. The lack of both the classical Perron's method and the comparison principle for systems requires new tools for tackling the problem. By means of a fixed point argument, we prove existence of solutions, and obtain higher sharp regularity across the free boundary. Additionally, we prove a variant of a weak comparison principle and derive several geometric measure estimates for the free boundary, as well as Liouville type theorems for entire solutions. These results are new even for linear dead-core systems.
\noindent \textbf{Keywords:} Dead-core systems, elliptic systems, regularity, comparison principle, non-degeneracy, Liouville theorem for systems.
\noindent \textbf{MSC 2020:} 35J57, 35J47, 35J67, 35B53, 35J60, 35B65.
\end{abstract}
\maketitle
\section{Introduction}\label{s1}
Due to applications in population dynamics, combustion processes, catalysis processes and in industry, in particular, in bio-technologies and chemical engineering, reaction-diffusion equations with strong absorption terms have been studied intensively in recent years. In the literature, the regions where the density of a certain substance (liquid, gas) vanishes are referred to as dead-cores. When the density of one substance is influenced by that of another one, we deal with dead-core systems. The system modeling the densities of the reactants $u$, $v:\Omega\rightarrow\R$, where $\Omega\subset\R^n$ is a domain with smooth boundary, is given by
\begin{equation}\label{fgsystem}
\left\{
\begin{array}{ccc}
F(D^2 u,x)&=&f(u,v,x),\,\,\,\text{ in }\,\,\,x\in\Omega,\\[0.2cm]
G(D^2 v,x)&=&g(u,v,x),\,\,\,\text{ in }\,\,\,x\in\Omega.
\end{array}
\right.
\end{equation}
The operators $F$, $G:\mathbb{S}^n\times\Omega\rightarrow\R$, where $\mathbb{S}^n$ is the set of symmetric matrices of order $n$, model the diffusion processes, while the functions $f$ and $g$ are the convection terms of the model. Additionally, systems like \eqref{fgsystem} are related to Lane-Emden systems (see \cite{MNS19}) and also model games that combine the tug-of-war with random walks in two different boards (see \cite{MR20}).
The goal of this paper is to study regularity of solutions for fully non-linear dead-core systems
\begin{equation}\label{1.1}
\left\{
\begin{array}{ccc}
F(D^2u,x)=v_+^{\,p}, & \text{in} & \Omega,\\[0.2cm]
G(D^2v,x)=u_+^q & \text{in} & \Omega,
\end{array}
\right.
\end{equation}
where $\Omega\subset\R^n$, $p$ and $q$ are non-negative constants such that $pq<1$, and $F$ and $G$ are fully nonlinear uniformly elliptic operators with uniformly continuous coefficients, i.e., satisfy
\begin{equation}\label{1.2}
\lambda\|N\|\le\mathcal{F}(M+N,x)-\mathcal{F}(M,x)\le\Lambda\|N\|,\,\,\,\forall N\ge0,
\end{equation}
with $M\in\mathbb{S}^n$, $0<\lambda\le\Lambda$. Here $N$ is a non-negative definite symmetric matrix, therefore the norm $\|N\|$ is equal to the maximum eigenvalue of $N$. Solutions of \eqref{1.1} are understood in the viscosity sense, according to the following definition.
\begin{definition}\label{d1.1}
A couple of functions $u$, $v\in C({\Omega})$ is called a viscosity sub-solution of \eqref{1.1}, if whenever $\varphi, \psi\in C^2(\Omega)$ and $u-\varphi$, $v-\psi$ attain local maximum at $x_0\in\Omega$, then
\begin{equation*}
\left\{
\begin{array}{ccc}
F\left(D^2\varphi(x_0), x_0\right)\ge v^p_+(x_0), & \text{in} & \Omega,\\[0.2cm]
G\left(D^2\psi(x_0), x_0\right)\ge u^q_+(x_0) & \text{in} & \Omega.
\end{array}
\right.
\end{equation*}
Similarly, $u$, $v\in C({\Omega})$ is called a viscosity super-solution of \eqref{1.1}, if whenever $\varphi,\psi\in C^2(\Omega)$ and $u-\varphi$, $v-\psi$ attain local minimum at $x_0\in\Omega$, then
\begin{equation*}
\left\{
\begin{array}{ccc}
F\left(D^2\varphi(x_0), x_0\right)\le v^p_+(x_0), & \text{in} & \Omega,\\[0.2cm]
G\left(D^2\psi(x_0), x_0\right)\le u^q_+(x_0) & \text{in} & \Omega.
\end{array}
\right.
\end{equation*}
A couple of functions $u$, $v\in C({\Omega})$ is called a viscosity solution of \eqref{1.1} if it is both a viscosity sub- and super-solution of \eqref{1.1}.
\end{definition}
In the literature (see, for example, \cite{CIL92, I92,IK91, IK912}) viscosity solutions for systems are defined as locally bounded functions. Since our operators are uniformly elliptic (with continuous coefficients), using a fixed point argument, we are able to prove existence of continuous solutions. Based on this, we define viscosity solutions as continuous functions, which is in accordance with the classical definition for equations. Although existence of solutions, Perron's method, as well as several geometric properties for systems are well studied in the literature \cite{C89,CIL92,I92,IK91,IK912}, these results are valid only under certain superlinearity, monotonicity or growth conditions, which do not apply in our framework.
From the Krylov-Safonov regularity theory (see \cite{CC95}), one may infer that viscosity solutions of the system \eqref{1.1} are locally of the class $C^{0,\alpha}$, for $\alpha\in(0,1)$. The ideas introduced by Teixeira in \cite{T16}, where dead-core equations were studied, inspire a hope for more regularity near the free boundary. They were implemented to derive higher regularity for dead-core problems ruled by the infinity Laplacian, \cite{ALT16} (see also \cite{DT19}). However, the tools used to obtain these regularity and geometric measure estimates for equations do not apply to systems. Unlike dead-core problems for equations, when dealing with systems one has two main challenges: the lack of a comparison principle and the lack of the classical Perron's method in this framework. The latter is valid for systems only under certain structural assumptions on the operators (see \cite{C89,CIL92,I92,IK91,IK912}). Without these tools, even proving existence of solutions is a challenging task, since for systems, in general, the comparison of a sub- and a super-solution does not hold, as the example below shows:
\begin{equation*}
\left\{
\begin{array}{ccc}
-\Delta u+2u+v=0, & \text{in} & \Omega,\\[0.2cm]
-\Delta v+u-1=0 & \text{in} & \Omega.
\end{array}
\right.
\end{equation*}
The Dirichlet problem (with zero boundary data) for this system has a unique solution $(u,v)$, \cite[Remark 3.4]{IK91}. The pairs $(0,0)$ and $(u,v)$ are a sub- and a super-solution respectively, but $(0,0)\le(u,v)$ is not true, \cite[Remark 4.2]{IK91}.
As for the comparison principle, it is not valid even for linear systems. For example, when $F=G=\Delta$ in \eqref{fgsystem}, i.e., considering the system for the Laplace operator, which can be interpreted as the Euler-Lagrange system of the energy functional
$$
I(u,v):=\int_\Omega\nabla u\cdot\nabla v+\mathcal{F}(u,v) \,dx \longrightarrow \mbox{min},
$$
where $f(u,v,x)=\partial_u \mathcal{F}(u,v,x)$ and $g(u,v,x)=\partial_v \mathcal{F}(u,v,x)$, the comparison of solutions on the boundary does not imply comparison inside the domain.
The lack of a comparison principle does not allow one to construct suitable barriers with which to study geometric properties of viscosity solutions. Nevertheless, by means of a careful analysis of solutions of one equation in the system, we discover a chain reaction within the system, exploiting properties of one equation along the system. In particular, we show that if a pair $(u,v)$ of non-negative functions is a viscosity solution of the system \eqref{1.1}, and $x_0\in\partial\{u+v>0\}$, then
$$
c\,r^{\frac{2}{1-pq}}\le\sup_{B_r(x_0)}\left(u^{\frac{1}{1+p}}+v^{\frac{1}{1+q}}\right)\le C\,r^{\frac{2}{1-pq}}
$$
where $r>0$ is small enough and $c,\,C>0$ are constants depending on the dimension. This provides a regularity effect among equations. Observe that if $F=G$, $p=q$ and $u=v$ in \eqref{1.1}, we recover the regularity result obtained in \cite{T16}. It is noteworthy that the regularity we derive for systems is higher than that of Hamiltonian elliptic systems studied in \cite{ST18}. As shown in \cite{ST18}, the $C^{2,\alpha}$ regularity remains valid for systems with vanishing Neumann boundary condition, when $F=G=-\Delta$ and $p$ and $q$ are chosen in a certain way. Note also that one has higher regularity for solutions along the set $\partial\{u+v>0\}$ compared to that coming from the Krylov-Safonov and Schauder regularity theory. Indeed, for example, for the Laplace operator, as $\Delta u\in C^{0,p}_\loc$ and $\Delta v\in C^{0,q}_\loc$, one has $u\in C^{2,p}_\loc$ and $v\in C^{2,q}_\loc$. On the other hand,
$$
\frac{2(1+p)}{1-pq}>2+p\,\,\,\textmd{ and }\,\,\,\frac{2(1+q)}{1-pq}>2+q,
$$
for $p,q\in(0,1)$. An application of this result yields that if a non-negative entire solution of the system \eqref{1.1} vanishes at a point and has a certain growth at infinity, then it must be identically zero. Additionally, we prove a variant of a weak comparison principle, showing that if one can compare viscosity sub- and super-solutions on the boundary, then at least one of the functions that make the pair of these sub- and super-solutions, still can be compared in the domain. The latter leads to several geometric measure estimates. Our results can be applied to systems with more than two equations.
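For the reader's convenience, here is a one-line verification of the first of the two inequalities above (the second follows by interchanging $p$ and $q$):
$$
\frac{2(1+p)}{1-pq}-(2+p)=\frac{2(1+p)-(2+p)(1-pq)}{1-pq}=\frac{p\,(1+2q+pq)}{1-pq}>0\,\,\,\textmd{ for }\,\,\,p,q\in(0,1).
$$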
The paper is organized as follows: in Section \ref{s2}, using Schaefer's fixed point argument, we prove existence of solutions for the system \eqref{1.1} (Proposition \ref{p2.1}). In Section \ref{s4}, we prove regularity of solutions of the system \eqref{1.1} (see Theorem \ref{t3.1}) across the free boundary. In Section \ref{s6}, non-degeneracy of viscosity solutions (Theorem \ref{t6.1}) is obtained. It is also in this section that we prove a weak variant of a comparison principle (Lemma \ref{comparison}). We close the paper with several applications, in Section \ref{s7} deriving geometric measure estimates for the free boundary (Lemma \ref{c6.1}) and proving Liouville type results for entire solutions of \eqref{1.1} (Theorem \ref{t5.1} and Theorem \ref{liouvthm}).
\section{Existence of solutions}\label{s2}
In this section, assuming that the boundary of the domain is of class $C^2$, we prove existence of viscosity solutions of the system
\begin{equation}\label{systemdirichlet}
\left\{
\begin{array}{ccc}
F(D^2u,x)=v_+^{\,p}, & \text{in} & \Omega,\\[0.2cm]
G(D^2v,x)=u_+^q & \text{in} & \Omega,\\[0.2cm]
u=\varphi,\,v=\psi&\text{on}&\partial\Omega,
\end{array}
\right.
\end{equation}
where $\varphi$, $\psi\in C^{0,1}(\partial\Omega)$. Throughout the paper we assume that
$$
p\ge0,\,\,\,q\ge0,\,\,\,pq<1,
$$
and $F$ and $G$ satisfy \eqref{1.2}. As commented above, neither the classical Perron's method nor the standard comparison principle applies in this framework. Thus, we need new tools to tackle the issue. For that purpose, we suitably construct a continuous and compact map in order to apply the following theorem of Schaefer (see, for example, \cite{Z85}), extending the idea from \cite{CT20}.
\begin{theorem}\label{schaefer}
If $T:X \to X$, where $X$ is a Banach space, is continuous and compact, and the set
$$
\mathcal{E}=\{z \in X;\,\,\, \exists\, \theta \in [0,1] \mbox{ such that } z=\theta T(z) \}
$$
is bounded, then $T$ has a fixed point.
\end{theorem}
\begin{proposition}\label{p2.1}
There exists a viscosity solution $(u,v)$ of system \eqref{systemdirichlet}. Moreover, $u$ and $v$ are globally Lipschitz functions.
\end{proposition}
\begin{proof}
Let $f\in C^{0,1}(\overline{\Omega})$ and let $u\in C^{0,1}(\overline{\Omega})$ be the unique viscosity solution of the following problem:
\begin{equation}\label{dirichlet1}
\left\{
\begin{array}{ccc}
F(D^2u,x)=f_+^p(x), & \text{in} & \Omega,\\[0.2cm]
u=\varphi & \text{on} & \partial\Omega,
\end{array}
\right.
\end{equation}
where $\varphi\in C^{0,1}(\partial\Omega)$. Existence and uniqueness of such $u$ is obtained by classical Perron's method (see \cite{CIL92,I89}). Similarly, for a function $g\in C^{0,1}(\overline{\Omega})$, there is a unique function $v\in C^{0,1}(\overline{\Omega})$ that solves the problem
\begin{equation}\label{dirichlet2}
\left\{
\begin{array}{ccc}
G(D^2v,x)=g_+^q(x), & \text{in} & \Omega,\\[0.2cm]
v=\psi & \text{on} & \partial\Omega,
\end{array}
\right.
\end{equation}
where $\psi\in C^{0,1}(\partial\Omega)$. For a given pair of functions $(f,g)\in C^{0,1}(\Omega)\times C^{0,1}(\Omega)$, we then define
$$
T : C^{0,1}(\Omega) \times C^{0,1}(\Omega) \to C^{0,1}(\Omega) \times C^{0,1}(\Omega)
$$
by $T(f,g):=(v,u)$, where $v$ and $u$ are the solutions of \eqref{dirichlet2} and \eqref{dirichlet1} respectively. Note that $T$ is well defined, as \eqref{dirichlet1} and \eqref{dirichlet2} have unique, globally Lipschitz solutions. The aim is to use Schaefer's theorem, so we need to check that $T$ is continuous and compact.
Let $(f_k,g_k) \to (f,g)$ in $C^{0,1}(\Omega) \times C^{0,1}(\Omega)$, and let
$$
(v_k,u_k):=T(f_k,g_k).
$$
We want to show that $T(f_k,g_k) \to T(f,g)$ in $C^{0,1}(\Omega) \times C^{0,1}(\Omega)$. By global Lipschitz regularity (see, for instance, \cite{BD14}), for a universal constant $C>0$ one has
$$
\|u_k\|_{C^{0,1}(\overline{\Omega})}\le C(\|(f_k)_+^p\|_\infty+\|\varphi\|_\infty),
$$
and
$$
\|v_k\|_{C^{0,1}(\overline{\Omega})}\le C(\|(g_k)_+^q\|_\infty+\|\psi\|_\infty).
$$
Since $(f_k,g_k)$ converges, it is bounded; therefore, $\|f_k\|_\infty$ and $\|g_k\|_\infty$ are uniformly bounded. Thus, by the Arzel\`a-Ascoli theorem, the sequence $(v_k,u_k)$ is bounded in $C^{0,1}(\overline{\Omega})\times C^{0,1}(\overline{\Omega})$ and hence, up to a sub-sequence, it converges to some $(v,u)$. Using stability of viscosity solutions under uniform limits, \cite[Proposition 2.1]{I89}, and passing to the limit, we obtain
$$
T(f_k,g_k)=(v_k,u_k)\to(v,u)=T(f,g).
$$
The uniqueness of solutions of \eqref{dirichlet1} and \eqref{dirichlet2} guarantees that any other sub-sequence converges to $(v,u)$. Thus, $T$ is continuous.
To see that $T$ is also compact, let $(f_k,g_k)$ be a bounded sequence in $C^{0,1}(\Omega) \times C^{0,1}(\Omega)$. As above, $(v_k,u_k)=T(f_k,g_k) \in C^{0,1}(\Omega) \times C^{0,1}(\Omega)$ is bounded. Then there is a convergent sub-sequence.
To use Theorem \ref{schaefer}, it remains to check that the set $\mathcal{E}$ of eigenvectors
$$
\mathcal{E}:=\{(f,g) \in C^{0,1}(\Omega) \times C^{0,1}(\Omega);\,\,\, \exists\, \theta \in [0,1] \mbox{ such that } (f,g)=\theta T(f,g) \}
$$
is bounded. Observe that $(0,0)\in\mathcal{E}$ if and only if $\theta=0$. Hence, we can assume $\theta\neq0$. Let $(f,g)\in\mathcal{E}$. For any $0<\theta \leq 1$, we have
$$
\left\{
\begin{array}{ccc}
F_\theta\left(D^2 g, x\right) = \theta f_+^p(x), & \text{in} & \Omega,\\[0.2cm]
g=\theta\varphi & \text{on} & \partial\Omega,
\end{array}
\right.
$$
where $F_\theta(M,x):=\theta F (\theta^{-1} M,x)$ satisfies \eqref{1.2}, and
$$
\left\{
\begin{array}{ccc}
G_\theta\left(D^2 f, x\right) = \theta g_+^q(x), & \text{in} & \Omega,\\[0.2cm]
f=\theta\psi & \text{on} & \partial\Omega,
\end{array}
\right.
$$
where $G_\theta(M,x):=\theta G (\theta^{-1} M,x)$ satisfies \eqref{1.2}. As above, from Krylov-Safonov regularity theory one has
$$
\|g\|_{C^{0,1}(\overline{\Omega})} \leq C(\|f_+^p\|_\infty+\|\varphi\|_\infty),
$$
and
$$
\|f\|_{C^{0,1}(\overline \Omega)}\leq C(\|g_+^q\|_\infty+\|\psi\|_\infty),
$$
where $C>0$ is a universal constant. Moreover, since $F_\theta(D^2g,x)\ge0$, the function $g$ attains its maximum on the boundary and hence is bounded by $\|\varphi\|_\infty$:
$$
\|g\|_\infty\le\|\varphi\|_\infty.
$$
Similarly,
$$
\|f\|_\infty\le\|\psi\|_\infty.
$$
Thus,
$$
\|g\|_{C^{0,1}(\overline{\Omega})} \leq C(\|\psi\|_\infty^p+\|\varphi\|_\infty)
$$
and
$$
\|f\|_{C^{0,1}(\overline{\Omega})} \leq C(\|\varphi\|_\infty^q+\|\psi\|_\infty),
$$
where $C>0$ is a universal constant. This implies that $\mathcal{E}$ is bounded.
Finally, we can now apply Theorem \ref{schaefer} to conclude that $T$ has a fixed point. This finishes the proof.
\end{proof}
\section{Regularity of solutions at the free boundary}\label{s4}
In this section, we derive regularity for solutions of \eqref{1.1} across the free boundary, $\partial\{|(u,v)|>0\}$, where
\begin{equation}\label{2.1}
|(u,v)| := u^{\frac{1}{1+p}}+v^{\frac{1}{1+q}}.
\end{equation}
As an auxiliary step, we show that bounded solutions of a system, that vanish at a point, can be flattened, meaning that they can be made smaller than a given positive constant, once the right hand side is suitably perturbed. As is seen below, there is a natural phenomenon, a \textit{domino effect}, when a certain control over one of the diffusions dictates a wave across the system.
We say $(u,v)\ge 0$, if both $u$ and $v$ are non-negative. As our results are of local nature, we consider $B_1$ instead of $\Omega$.
\begin{lemma}\label{l2.1}
For a given $\mu>0$, there exists $\kappa=\kappa(\mu)>0$, depending only on $\mu$, $\lambda$, $\Lambda$ and $n$, such that if $u,\,v\in[0,1]$ satisfy $u(0)=v(0)=0$ and
\begin{equation*}
\left\{
\begin{array}{ccc}
F(D^2u,x)&=&\delta^2v_+^p,\,\,\text{in}\,\,B_1,\\[0.2cm]
G(D^2v,x)&=&\gamma u_+^q\,\,\text{in}\,\,B_1.
\end{array}
\right.
\end{equation*}
in the viscosity sense for $\delta\in(0,\kappa)$ and $\gamma>0$, then
$$
\sup_{B_{1/2}}|(u,v)|\le\mu.
$$
\end{lemma}
\begin{proof}
We argue by contradiction and suppose the conclusion of the lemma is not true, i.e., for some $\mu_0>0$ there exist sequences of functions $u_i$, $v_i$ such that $u_i,\,v_i\in[0,1]$ and
\begin{equation*}
\left\{
\begin{array}{ccc}
F_i(D^2u_i,x)&=&i^{-2}\left(v_i\right)_+^p,\,\,\text{in}\,\,B_1,\\[0.2cm]
G_i(D^2v_i,x)&=&\gamma\left(u_i\right)_+^q\,\,\text{in}\,\,B_1,
\end{array}
\right.
\end{equation*}
where $F_i$, $G_i$ are elliptic operators satisfying \eqref{1.2}, but
\begin{equation}\label{2.2}
\sup_{B_{1/2}}|(u_i,v_i)|>\mu_0.
\end{equation}
By the Krylov-Safonov regularity theory (see, for example, \cite{CC95}), up to sub-sequences, $u_i$ and $v_i$ converge locally uniformly in $B_{2/3}$ to respectively $u_\infty$ and $v_\infty$, as $i\rightarrow\infty$. Clearly, $u_\infty(0)=0$, $u_\infty\in[0,1]$, and by stability of viscosity solutions, $u_\infty$ in $B_{2/3}$ satisfies
$$
F_\infty(D^2u_\infty,x)=0,
$$
where $F_\infty$ satisfies \eqref{1.2}. The maximum principle then implies that $u_\infty\equiv0$. The latter provides that in addition to $v_\infty(0)=0$ and $v_\infty\in[0,1]$, one also has
$$
G_\infty(D^2v_\infty,x)=0,
$$
where $G_\infty$ satisfies \eqref{1.2}. This \textit{chain reaction} allows one to apply once more the maximum principle and conclude that $v_\infty\equiv0$, which contradicts \eqref{2.2}.
\end{proof}
Next, we apply Lemma \ref{l2.1} to obtain regularity of solutions. Geometrically, it reveals that viscosity solutions of the system \eqref{1.1} touch the free boundary in a smooth fashion. In fact, it provides quantitative information on the speed at which a bounded viscosity solution detaches from the dead core. In the next section, we show that this is the exact speed at which viscosity solutions of \eqref{1.1} grow.
\begin{theorem}\label{t3.1}
Let $(u,v)$ be a non-negative viscosity solution of \eqref{1.1} in $B_1$. There exists a constant $C>0$, depending only on $\lambda,\Lambda,p,q$, $\|u\|_\infty,\|v\|_\infty$, such that for $x_0\in \partial\{|(u,v)|>0\} \cap B_{1/2}$ there holds
\begin{equation}\label{mainest}
|(u(x),v(x))|\le C\,|x-x_0|^{\frac{2}{1-pq}},
\end{equation}
for any $x \in B_{1/4}$. In particular,
$$
u(x)\le C\,|x-x_0|^{\frac{2(1+p)}{1-pq}}
\quad \mbox{and} \quad
v(x) \le C\,|x-x_0|^{\frac{2(1+q)}{1-pq}}.
$$
\end{theorem}
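\begin{remark}
For the reader's convenience, we indicate how the last two estimates follow from \eqref{mainest} and the definition \eqref{2.1}: since $u(x)^{\frac{1}{1+p}}\le|(u(x),v(x))|$, raising \eqref{mainest} to the power $1+p$ gives
$$
u(x)=\left(u(x)^{\frac{1}{1+p}}\right)^{1+p}\le|(u(x),v(x))|^{1+p}\le C^{1+p}\,|x-x_0|^{\frac{2(1+p)}{1-pq}},
$$
and the estimate for $v$ follows in the same way with the exponent $1+q$; the constant $C^{1+p}$ is absorbed into $C$.
\end{remark}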
\begin{proof}
Without loss of generality, we assume that $x_0=0$.
Take now $\mu=2^{-\frac{2}{1-pq}}$, and let $\kappa=\kappa(\mu)>0$ be as in Lemma \ref{l2.1}. We need to apply Lemma \ref{l2.1} to suitably chosen sequences of functions. We construct the first pair of functions in the following way. Set
$$
u_1(x):=b\cdot u(ax)\,\,\,\,\textrm{ and }\,\,\,\,v_1(x):=b\cdot v(ax),
$$
where
$$
a:=\displaystyle\frac{\kappa}{2}\left(\|u\|_\infty+\|v\|_\infty\right)^{\frac{p-1}{4}}\,\,\textrm{ and }\,\,b:=\displaystyle\left(\|u\|_\infty+\|v\|_\infty\right)^{1/2}.
$$
Note that $u_1$, $v_1\in[0,1]$. As $0\in\partial\{|(u,v)|>0\}$, then $u_1(0)=v_1(0)=0$. Also $(u_1,v_1)\ge0$ is a viscosity solution of the system
\begin{equation}\label{3.2}
\left\{
\begin{array}{ccc}
\tilde{F}(D^2u_1,x)&=&\frac{\kappa^2}{4}v_{1}^p(x)\,\,\text{in}\,\,B_1,\\[0.2cm]
\tilde{G}(D^2v,x)&=&b^{-q}u_{1}^q(x)\,\,\text{in}\,\,B_1,
\end{array}
\right.
\end{equation}
where $\tilde{F}(M,x):=ba^2F(b^{-1}a^{-2}M,ax)$, $\tilde{G}(N,x):=G(b^{-1}a^{-2}N,ax)$ are $(\lambda,\Lambda)$ - elliptic operators, i.e., satisfy \eqref{1.2}. Applying Lemma \ref{l2.1}, we deduce
\begin{equation}\label{3.3}
\sup_{B_{1/2}}|(u_1,v_1)|\le 2^{-\frac{2}{1-pq}}.
\end{equation}
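A quick computation confirms that the choice of $a$ and $b$ produces the factor $\frac{\kappa^2}{4}$ in the first equation of \eqref{3.2}: rescaling formally gives $\tilde{F}(D^2u_1(x),x)=b\,a^2\,F\left((D^2u)(ax),ax\right)=b\,a^2\,v_+^p(ax)=b^{1-p}a^2\,(v_1)_+^p(x)$, and
$$
b^{1-p}a^2=\left(\|u\|_\infty+\|v\|_\infty\right)^{\frac{1-p}{2}}\cdot\frac{\kappa^2}{4}\left(\|u\|_\infty+\|v\|_\infty\right)^{\frac{p-1}{2}}=\frac{\kappa^2}{4}.
$$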
Next, we define sequences of functions $u_2,v_2:B_1\rightarrow\R$ by
$$
u_2(x):=2^{\frac{2}{1-pq}(1+p)}u_1\left(\frac{x}{2}\right)\,\,\,\textrm{ and }\,\,\,v_2(x):=2^{\frac{2}{1-pq}(1+q)}v_1\left(\frac{x}{2}\right).
$$
Using \eqref{3.3} we have that $u_2,v_2\in[0,1]$. Also $u_2(0)=v_2(0)=0$, and $(u_2,v_2)\geq0$ is a viscosity solution of \eqref{3.2}, for some uniformly elliptic operators $\tilde{F}$ and $\tilde{G}$. Hence, Lemma \ref{l2.1} gives
$$
\sup_{B_{1/2}}|(u_2,v_2)|\le 2^{-\frac{2}{1-pq}}.
$$
Scaling back we have
$$
\sup_{B_{1/4}}|(u_1,v_1)|\le 2^{-2\cdot\frac{2}{1-pq}}.
$$
In the same way we define $u_i,v_i:B_1\rightarrow\R$, $i=3,4,\ldots$, by
$$
u_i(x):=2^{\frac{2}{1-pq}(1+p)}u_{i-1}\left(\frac{x}{2}\right)\,\,\,\textrm{ and }\,\,\,v_i(x):=2^{\frac{2}{1-pq}(1+q)}v_{i-1}\left(\frac{x}{2}\right)
$$
and deduce that
$$
\sup_{B_{1/2}}|(u_i,v_i)|\le 2^{-\frac{2}{1-pq}}.
$$
Scaling back gives
$$
\sup_{B_{1/2^i}}|(u_1,v_1)|\le 2^{-i\cdot\frac{2}{1-pq}}.
$$
To finish the proof, for any given $\displaystyle r\in\left(0,\frac{a}{2}\right)$, we choose $i\in\mathbb{N}$ such that
$$
2^{-(i+1)}<\frac{r}{a}\le2^{-i},
$$
and estimate
\begin{equation*}
\begin{array}{ccl}
\displaystyle\sup_{B_r}|(u,v)|&\le&\displaystyle\sup_{B_{r/a}}|(u_1,v_1)|\\[0.5cm]
&\le&\displaystyle\sup_{B_{1/2^i}}|(u_1,v_1)|\\[0.5cm]
&\le&2^{-i\cdot\frac{2}{1-pq}}\\[0.3cm]
&\le&\left(\frac{2}{a}\right)^{\frac{2}{1-pq}}r^{\frac{2}{1-pq}}\\[0.3cm]
&=&Cr^{\frac{2}{1-pq}}.
\end{array}
\end{equation*}
\end{proof}
\section{Comparison and non-degeneracy estimates}\label{s6}
In this section we prove a variant of a weak comparison principle for the system \eqref{1.1}. With a careful analysis of radial super-solutions, we also derive a non-degeneracy estimate for viscosity solutions. As was established in Theorem \ref{t3.1}, the quantity $|(u,v)|$, defined by \eqref{2.1}, grows no faster than $r^{\frac{2}{1-pq}}$. Here we prove that it grows exactly at that rate (see Figure 2).
\subsection{A weak comparison principle}
As commented in the introduction, the absence of the classical Perron's method in our framework forces us to find an alternative way to derive an information on relations between viscosity solutions and super-solutions of the system. For that purpose, we recall that uniform ellipticity in terms of extremal Pucci operators can be rewritten as follows:
$$
\mathcal{M}_{\lambda,\Lambda}^-(M-N)\le\mathcal{F}(M)-\mathcal{F}(N)\le\mathcal{M}_{\lambda,\Lambda}^+(M-N),\,\,\,\forall M, N\in\mathbb{S}^n,
$$
with $\mathcal{M}_{\lambda,\Lambda}^\pm$ being the classical extremal Pucci operators, i.e.,
$$
\mathcal{M}_{\lambda,\Lambda}^-(M):=\lambda\sum_{e_i>0}e_i+\Lambda\sum_{e_i<0}e_i,
$$
$$
\mathcal{M}_{\lambda,\Lambda}^+(M):=\Lambda\sum_{e_i>0}e_i+\lambda\sum_{e_i<0}e_i,
$$
where $e_i=e_i(M)$ are the eigenvalues of $M$ and $0<\lambda\le\Lambda$. Since $\mathcal{M}_{\lambda,\Lambda}^+$ and $\mathcal{M}_{\lambda,\Lambda}^-$ are uniformly elliptic operators in the sense of \eqref{1.2}, with ellipticity constants $\lambda$ and $n\Lambda$ (see \cite[Lemma 2.10]{CC95}), the comparison principle for $\mathcal{M}_{\lambda,\Lambda}^-$ allows us to conclude a weak form of the comparison principle for the system \eqref{1.1}: at least one of the functions in a viscosity super-solution stays above the corresponding function of a viscosity sub-solution of the system, as the following lemma states.
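For instance, for the diagonal matrix $M=\mathrm{diag}(2,-3)$ in dimension $n=2$ one has
$$
\mathcal{M}_{\lambda,\Lambda}^-(M)=2\lambda-3\Lambda\,\,\,\textmd{ and }\,\,\,\mathcal{M}_{\lambda,\Lambda}^+(M)=2\Lambda-3\lambda,
$$
so the extremal operators weigh the positive and the negative eigenvalues with opposite ellipticity constants.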
\begin{lemma}\label{comparison}
If $(u^*,v^*)$ is a viscosity super-solution and $(u_*,v_*)$ is a viscosity sub-solution of \eqref{1.1} such that $u^*\ge u_*$ and $v^*\ge v_*$ on $\partial\Omega$, then
$$
\max\{u^*-u_*,v^*-v_*\}\ge0\,\,\textrm{ in }\,\,\Omega.
$$
\end{lemma}
\begin{proof}
Suppose the conclusion of the lemma is not true, i.e., there exists $x_0\in\Omega$ such that
$$
u_*(x_0)>u^*(x_0)\,\,\,\textrm{ and }\,\,\,v_*(x_0)>v^*(x_0).
$$
We then define the open set
$$
\mathcal{D}:=\left\{x\in\Omega;\,u_*(x)>u^*(x)\,\,\,\textrm{ and }\,\,\,v_*(x)>v^*(x)\right\}.
$$
Note that $\mathcal{D}\neq\emptyset$, since $x_0\in\mathcal{D}$. On $\partial\mathcal{D}$ one has
$$
\omega(x):=\max\{u^*-u_*,v^*-v_*\}=0.
$$
Also, since $(u^*,v^*)$ is a viscosity super-solution and $(u_*,v_*)$ is a viscosity sub-solution of the system \eqref{1.1}, then
$$
F(D^2u^*,x)\le \left(v^*\right)_+^p(x)\le \left(v_*\right)_+^p(x)\le F(D^2u_*,x),\,\,\,x\in\mathcal{D}.
$$
As $F$ is uniformly elliptic, the above inequality implies
$$
\mathcal{M}_{\lambda,\Lambda}^-(D^2(u^*-u_*))\le0\,\,\,\textrm{ in }\,\,\,\mathcal{D}.
$$
Similarly, we conclude that
$$
\mathcal{M}_{\lambda,\Lambda}^-(D^2(v^*-v_*))\le0\,\,\,\textrm{ in }\,\,\,\mathcal{D}.
$$
Hence,
$$
\mathcal{M}_{\lambda,\Lambda}^-(D^2\omega)\le0\,\,\,\textrm{ in }\,\,\,\mathcal{D}.
$$
On the other hand, $\omega=0$ on $\partial\mathcal{D}$, therefore $\omega\ge0$ in $\mathcal{D}$, which is a contradiction, since $\omega(x_0)<0$.
\end{proof}
\subsection{Radial analysis}
Here we construct a radial viscosity super-solution for the system \eqref{1.1}. Understanding these radial solutions of the system plays a crucial role in proving non-degeneracy of solutions, as well as certain measure estimates. As was shown in Section \ref{s2}, the system \eqref{1.1} has a solution. In certain cases solutions of the system \eqref{1.1} can be constructed explicitly, as the following example shows. One can easily see that the pair of functions
$$
u(x):=A\left(|x|-\rho\right)_+^\frac{2(1+p)}{1-pq} \quad \mbox{and} \quad
v(x):=B\left(|x|-\rho\right)_+^\frac{2(1+q)}{1-pq},
$$
where $A$, $B$ are universal constants depending only on $p$, $q$ and $n$ and $\rho\ge0$, is a viscosity solution for the system (see Figure 1)
\begin{equation*}
\left\{
\begin{array}{ccc}
\Delta u=v_+^{\,p}, & \text{in} & \R^n,\\[0.2cm]
\Delta v=u_+^q & \text{in} & \R^n.
\end{array}
\right.
\end{equation*}
\begin{figure}
\caption{Radial solution}
\end{figure}
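As a quick consistency check of the exponents in this example, observe that
$$
\frac{2(1+p)}{1-pq}-2=\frac{2p(1+q)}{1-pq}=p\cdot\frac{2(1+q)}{1-pq},
$$
so differentiating $u$ twice lowers its degree of homogeneity in $(|x|-\rho)_+$ by exactly $2$, which matches the degree of $v_+^{\,p}$; the same computation with $p$ and $q$ interchanged takes care of the second equation.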
In general, as the operators $F$ and $G$ are uniformly elliptic, one can show that the system \eqref{1.1} has radial viscosity sub- and super-solutions. Below we construct such functions. For that purpose, let us define
$$
\tilde{u}(x):=A|x|^\alpha\,\,\,\,\,\,\textrm{ and }\,\,\,\,\,\tilde{v}(x):=B|x|^\beta,
$$
where
$$
\alpha:=\frac{2(1+p)}{1-pq},\,\,\,\beta:=\frac{2(1+q)}{1-pq}
$$
and $A$, $B$ are positive constants to be chosen a posteriori. Direct computation gives
$$
\tilde{u}_{ij}(x)=A\alpha\left[(\alpha-2)|x|^{\alpha-4}x_ix_j+\delta_{ij}|x|^{\alpha-2}\right],
$$
where $\delta_{ij}=1$, if $i=j$ and is zero otherwise. Thus, using the ellipticity of $F$, we estimate
\begin{equation}\label{radial1}
F(D^2\tilde{u},x)\le\Lambda\left[A\alpha(\alpha-1)+(n-1)A\alpha\right]|x|^{\alpha-2}.
\end{equation}
The aim is to choose the constants $A$ and $B$ such that $(\tilde{u},\tilde{v})$ is a viscosity super-solution of the system \eqref{1.1}. If $B$ is such that
\begin{equation}\label{B}
\Lambda\left[A\alpha(\alpha-1)+(n-1)A\alpha\right]=B^p,
\end{equation}
then \eqref{radial1} yields
$$
F(D^2\tilde{u},x)\le \left[B|x|^{\frac{2(1+q)}{1-pq}}\right]^p.
$$
Similarly,
\begin{equation}\label{radial2}
G(D^2\tilde{v},x)\le\Lambda\left[B\beta(\beta-1)+(n-1)B\beta\right]|x|^{\beta-2},
\end{equation}
so if $A$ is such that
\begin{equation}\label{A}
\Lambda\left[B\beta(\beta-1)+(n-1)B\beta\right]=A^q,
\end{equation}
then \eqref{radial2} gives
$$
G(D^2\tilde{v},x)\le \left[A|x|^{\frac{2(1+p)}{1-pq}}\right]^q.
$$
Thus, the constants $A$ and $B$ are chosen to satisfy \eqref{B} and \eqref{A}, which provides
\begin{equation}\label{constantA}
A=\Lambda^{\frac{p+1}{pq-1}}\left[\alpha(\alpha-1)+\alpha(n-1)\right]^\frac{1}{pq-1}\left[\beta(\beta-1)+\beta(n-1)\right]^{\frac{p}{pq-1}}
\end{equation}
and
\begin{equation}\label{constantB}
B=\Lambda^{\frac{q+1}{pq-1}}\left[\beta(\beta-1)+\beta(n-1)\right]^\frac{1}{pq-1}\left[\alpha(\alpha-1)+\alpha(n-1)\right]^{\frac{q}{pq-1}}.
\end{equation}
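For completeness, here is the short computation behind \eqref{constantA} and \eqref{constantB}. Writing $c_\alpha:=\alpha(\alpha-1)+\alpha(n-1)$ and $c_\beta:=\beta(\beta-1)+\beta(n-1)$, equation \eqref{B} gives $B=(\Lambda c_\alpha A)^{1/p}$, and substituting this into \eqref{A} yields
$$
\Lambda c_\beta\left(\Lambda c_\alpha A\right)^{1/p}=A^q,\,\,\,\textmd{ that is, }\,\,\,A^{\frac{pq-1}{p}}=\Lambda^{\frac{p+1}{p}}c_\alpha^{\frac{1}{p}}c_\beta.
$$
Raising both sides to the power $\frac{p}{pq-1}$ gives \eqref{constantA}, and then \eqref{constantB} follows from $B^p=\Lambda c_\alpha A$.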
In conclusion, $(\tilde{u},\tilde{v})$ is a radial viscosity super-solution of the system \eqref{1.1}. We state it below for a future reference.
\begin{proposition}\label{radialsupersolution}
The pair of functions
$$
\tilde{u}(x):=A|x|^\frac{2(1+p)}{1-pq}\,\,\,\,\,\,\textrm{ and }\,\,\,\,\,\tilde{v}(x):=B|x|^\frac{2(1+q)}{1-pq},
$$
where $A$ and $B$ are defined by \eqref{constantA} and \eqref{constantB} respectively, is a viscosity super-solution of the system \eqref{1.1}. Similarly, $(\tilde{u},\tilde{v})$ is a radial viscosity sub-solution of \eqref{1.1}, if in the definition of $A$, $B$ the constant $\Lambda$ is substituted by $\lambda$.
\end{proposition}
\subsection{Non-degeneracy of solutions}
Using Lemma \ref{comparison} and Proposition \ref{radialsupersolution}, we obtain non-degeneracy of solutions.
\begin{theorem}\label{t6.1}
If $(u,v)$ is a non-negative bounded viscosity solution of \eqref{1.1} in $B_1$, and $y\in\overline{\left\{|(u,v)|>0\right\}}\cap B_{1/2}$, then for a constant $c>0$, depending on the dimension, one has
$$
\sup_{\overline{B}_r(y)}|(u,v)|\ge cr^{\frac{2}{1-pq}},
$$
for any $r\in\left(0,\frac{1}{2}\right)$.
\end{theorem}
\begin{proof}
Since $u$ and $v$ are continuous, it is enough to prove the theorem for the points $y\in\left\{|(u,v)|>0\right\}\cap B_{1/2}$, i.e. $u(y)>0$ and $v(y)>0$. By translation, we may assume, without loss of generality, that $y=0$. Let now $\tilde{u}$ and $\tilde{v}$ be as in Proposition \ref{radialsupersolution}. Note that if for a $\xi\in\partial B_r(y)$ one has
\begin{equation}\label{enoughtoshow}
\left(u(\xi),v(\xi)\right)\ge\left(\tilde{u}(\xi),\tilde{v}(\xi)\right),
\end{equation}
then
$$
\sup_{\overline{B}_r}|(u,v)|\ge|(u(\xi),v(\xi))|\ge cr^{\frac{2}{1-pq}}.
$$
Hence, it is enough to check \eqref{enoughtoshow}. Suppose it is not true, i.e.,
$$
(u(\xi),v(\xi))<(\tilde{u}(\xi),\tilde{v}(\xi)),\,\,\,\forall\xi\in\partial B_r.
$$
Define
\begin{equation*}
\hat{u}=
\left\{
\begin{array}{ccc}
\min\{u,\tilde{u}\} & \text{in} & B_r\\[0.2cm]
u & \text{in} & B_r^c
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\hat{v}=
\left\{
\begin{array}{ccc}
\min\{v,\tilde{v}\} & \text{in} & B_r\\[0.2cm]
v & \text{in} & B_r^c.
\end{array}
\right.
\end{equation*}
Since $(\tilde{u},\tilde{v})$ is a super-solution and $(u,v)$ is a solution of \eqref{1.1}, the pair $(\hat{u},\hat{v})$ is also a super-solution (see \cite[Lemma 3.1]{IK912}). On the other hand, Lemma \ref{comparison} implies
$$
0<u(0)\le\hat{u}(0)=0\,\,\,\textrm{ or }\,\,\,0<v(0)\le\hat{v}(0)=0,
$$
which is a contradiction.
\end{proof}
\begin{corollary}\label{c4.1}
If $(u,v)$ is a non-negative viscosity solution of \eqref{1.1}, and $x_0\in\partial\{|(u,v)|>0\}$, then
$$
c\,r^{\frac{2}{1-pq}}\le\sup_{B_r(x_0)}|(u,v)|\le C\,r^{\frac{2}{1-pq}}
$$
where $r>0$ is small enough and $c,\,C>0$ are constants depending on the dimension.
\end{corollary}
\begin{figure}
\caption{The growth of $|(u,v)|$ near the free boundary}
\end{figure}
\section{Applications}\label{s7}
As a consequence of the regularity and non-degeneracy results, we obtain geometric measure estimates and derive a Liouville type theorem for entire solutions of the system.
\subsection{Geometric measure estimates}
We establish positive density and porosity results for the free boundary. Moreover, we show that its $(n-\varepsilon)$-dimensional Hausdorff measure is finite, where $\varepsilon>0$. We use $|E|$ for the $n$-dimensional Lebesgue measure of the set $E$. We also recall the definition of a porous set.
\begin{definition}\label{porous}
The set $E\subset\R^n$ is called porous with porosity $\sigma$, if there is $R>0$ such that $\forall x\in E$ and $\forall r\in (0,R)$ there exists $y\in\R^n$ such that
$$
B_{\sigma r}(y)\subset B_{r}(x)\setminus E.
$$
\end{definition}
A porous set of porosity $\sigma$ has Hausdorff dimension not exceeding
$n-c\sigma^n$, where $c>0$ is a constant depending only on dimension. In particular, a porous set has Lebesgue measure zero (see \cite[Theorem 2.1]{KR97}, for instance).
\begin{lemma}\label{c6.1}
If $(u,v)$ is a non-negative bounded viscosity solution of \eqref{1.1} in $B_1$, and $y\in\partial\{|(u,v)|>0\}\cap B_{1/2}$, then
$$
|B_\rho(y)\cap\{|(u,v)|>0\}|\ge c\rho^n,\,\,\,\,\forall\rho\in\left(0,\frac{1}{2}\right),
$$
where $c>0$ is a constant depending only on $\lambda$, $\Lambda$, $\|u\|_\infty$, $\|v\|_\infty$, $p$, $q$ and $n$. Moreover, the free boundary is a porous set, and as a consequence
$$
\mathcal{H}^{n-\varepsilon}\left(\partial\{|(u,v)|>0\}\cap B_{1/2}\right)<\infty,
$$
where $\varepsilon>0$ is a constant depending only on $\lambda$, $\Lambda$, $p$, $q$ and $n$.
\end{lemma}
\begin{proof}
For a $\rho\in(0,1/2)$, Theorem \ref{t6.1} guarantees the existence of a point $\xi_\rho$ such that
\begin{equation}\label{positivedensity}
|\left(u(\xi_\rho),v(\xi_\rho)\right)|\ge c\rho^{\frac{2}{1-pq}}.
\end{equation}
On the other hand, for a $\tau>0$, take
$$
y_0\in B_{\tau\rho}(\xi_\rho)\cap\partial\{|(u,v)|>0\}\neq\emptyset.
$$
Recalling Theorem \ref{t3.1} and using \eqref{positivedensity}, we have
$$
c\rho^{\frac{2}{1-pq}}\le|(u(\xi_\rho),v(\xi_\rho))|\le\sup_{B_{\rho\tau}(y_0)}|(u,v)|\le C\left(\rho\tau\right)^{\frac{2}{1-pq}},
$$
which is a contradiction once
$$
\tau<\left(\frac{c}{C}\right)^\frac{1-pq}{2}.
$$
Therefore, for $\tau>0$ small enough
$$
B_{\tau\rho}(\xi_\rho)\subset\{|(u,v)|>0\},
$$
and hence
$$
|B_\rho(y)\cap\{|(u,v)|>0\}|\ge|B_\rho(y)\cap B_{\tau\rho}(\xi_\rho)|\ge c\rho^n.
$$
It remains to check the finiteness of the $(n-\varepsilon)$-dimensional Hausdorff measure of the free boundary, which, as observed above, is a consequence of its porosity. To show the latter, it is enough to take
$$
y^*:=t\xi_\rho+(1-t)y,
$$
where $t$ is close enough to $1$ to guarantee
$$
B_{\frac{\tau}{2}\rho}(y^*)\subset B_{\tau\rho}(\xi_\rho)\cap B_\rho(y)\subset B_\rho(y)\setminus\partial\{|(u,v)|>0\},
$$
i.e., the set $\partial\{|(u,v)|>0\}\cap B_{1/2}$ is a $\frac{\tau}{2}$-porous set, and the result follows.
\end{proof}
\subsection{Liouville type results for systems}
As another application of the regularity result above, exploiting the ideas from \cite{TU14}, we obtain a Liouville type theorem for solutions of \eqref{1.1} in $\Omega=\R^n$. We refer to them as entire solutions of the system. Although Theorem \ref{t3.1} provides regularity information only across the free boundary, it is enough to show that the only entire solution which vanishes at a point and has growth suitably controlled at infinity is the trivial one.
\begin{theorem}\label{t5.1}
Let $(u,v)$ be a non-negative viscosity solution of
\begin{equation}\label{entire}
\left\{
\begin{array}{ccc}
F(D^2u,x)=v_+^p & \text{in} & \mathbb{R}^n\\[0.2cm]
G(D^2v,x)=u_+^q & \text{in} & \mathbb{R}^n,
\end{array}
\right.
\end{equation}
and $u(x_0)=v(x_0)=0$. If
\begin{equation}\label{5.1}
|(u(x),v(x))|=o\left(|x|^\frac{2}{1-pq}\right),\,\,\,\textrm{ as }\,\,\,|x|\rightarrow\infty,
\end{equation}
then $u\equiv v\equiv0$.
\end{theorem}
\begin{proof}
Without loss of generality, we may assume that $x_0=0$. We then define
$$
u_k(x):=k^{\frac{-2(1+p)}{1-pq}}u(kx)\,\,\,\textrm{ and }\,\,\,v_k(x):=k^{\frac{-2(1+q)}{1-pq}}v(kx),
$$
for $k\in\mathbb{N}$ and note that the pair $(u_k,v_k)$ is a viscosity solution of the system \eqref{1.1} in $B_1$. Moreover, $u_k(0)=v_k(0)=0$, since $0\in\partial\{|(u,v)|>0\}$. Therefore, one can apply Theorem \ref{t3.1} to estimate $|(u_k,v_k)|$. More precisely, let $x_k\in\overline{B}_r$ be such that $|(u_k,v_k)|$ reaches its supremum at that point, for $r>0$ small. Applying Theorem \ref{t3.1}, we obtain
\begin{equation}\label{5.2}
|(u_k(x_k),v_k(x_k))|\le C_k|x_k|^{\frac{2}{1-pq}},
\end{equation}
where $C_k>0$ goes to zero as $k\rightarrow\infty$. Thus, if $|kx_k|$ is bounded as $k\rightarrow\infty$, then $|(u(kx_k),v(kx_k))|$ remains bounded. The latter implies
\begin{equation}\label{5.3}
|(u_k,v_k)|\rightarrow0,\,\,\,\textrm{ as }\,\,\,k\rightarrow\infty.
\end{equation}
Note that due to \eqref{5.1}, \eqref{5.3} remains true also in the case when $|kx_k|\rightarrow\infty$, as $k\rightarrow\infty$. In fact, from \eqref{5.1} one gets
$$
|(u_k(x_k),v_k(x_k))|\le|kx_k|^{-\frac{2}{1-pq}}k^{-\frac{2}{1-pq}}\rightarrow0.
$$
Our aim is to show that both $u$ and $v$ are identically zero. Let us assume this is not the case. If $y\in\R^n$ is such that $|(u(y),v(y))|>0$, then by choosing $k\in\mathbb{N}$ large enough so $y\in B_{kr}$, and using \eqref{5.2}, \eqref{5.3}, we estimate
\begin{equation*}
\begin{array}{ccl}
\displaystyle\frac{|(u(y),v(y))|}{|y|^{\frac{2}{1-pq}}}&\le&\displaystyle\sup_{B_{kr}}\frac{|(u(x),v(x))|}{|x|^{\frac{2}{1-pq}}}\\[0.8cm]
&=&\displaystyle\sup_{B_r}\frac{|(u_k(x),v_k(x))|}{|x|^{\frac{2}{1-pq}}}\\[0.8cm]
&\le&\displaystyle\frac{|(u(y),v(y))|}{2|y|^{\frac{2}{1-pq}}},
\end{array}
\end{equation*}
which implies $|(u(y),v(y))|=0$, a contradiction.
\end{proof}
Additionally, if
\begin{equation}\label{extracondition}
F(0,x)=G(0,x)=0\,\,\,\textrm{ in }\,\,\,\R^n,
\end{equation}
then Theorem \ref{t5.1} can be improved by relaxing \eqref{5.1} and not requiring $\partial \{|(u,v)|>0\} \neq \emptyset$.
\begin{theorem}\label{liouvthm}
Let \eqref{extracondition} hold. If $(u,v)$ is a non-negative viscosity sub-solution of \eqref{entire} and
\begin{equation}\label{asymptotic}
\limsup\limits_{|x| \to \infty} \frac{|(u(x),v(x))|}{|x|^{\frac{2}{1-pq}}} <\min\{A^{\frac{1}{1+p}},B^{\frac{1}{1+q}}\},
\end{equation}
where $A$ and $B$ are the constants defined by \eqref{constantA} and \eqref{constantB} respectively, then $u\equiv v\equiv 0$.
\end{theorem}
\begin{proof}
Set
$$
\mathcal{S}_R:=\sup_{\partial B_R} |(u,v)| = \sup_{\partial B_R}\left( u^{\frac{1}{1+p}}+v^{\frac{1}{1+q}}\right).
$$
Using \eqref{asymptotic}, we choose $\theta<1$ and $R \gg 1$ such that
\begin{equation}\label{asymptotic2}
R^{-\frac{2}{1-pq}}\mathcal{S}_R \leq \theta m,
\end{equation}
where $m:=\min\{A^{\frac{1}{1+p}},B^{\frac{1}{1+q}}\}$. Recall that the pair of functions
$$
\tilde u(x):= A(|x|-r)_+^{\frac{2(1+p)}{1-pq}}\,\,\,\textrm{ and }\,\,\,\tilde v(x):= B(|x|-r)_+^{\frac{2(1+q)}{1-pq}},\,\,\,x \in\R^n,
$$
where $r>0$, is a viscosity super-solution of \eqref{entire} (see Proposition \ref{radialsupersolution}), and the corresponding dead-core is the ball of radius $r$ centered at $0$, that is, $\{|(\tilde{u},\tilde{v})|=0\}=B_r$. For $R\gg 1$, taking
\begin{equation}\label{radius}
r:=R-\left[\frac{1}{m}\mathcal{S}_R\right]^{\frac{1-pq}{2}} \geq (1-\theta^{\frac{1-pq}{2}}) R,
\end{equation}
from \eqref{asymptotic2} on $\partial B_R$ we have
$$
\tilde u = A(R-r)^{\frac{2(1+p)}{1-pq}}_+ = A\left[\frac{1}{m}\mathcal{S}_R\right]^{1+p}\ge\sup\limits_{\partial B_R} u
$$
and
$$
\tilde v = B(R-r)^{\frac{2(1+q)}{1-pq}}_+ = B\left[\frac{1}{m}\mathcal{S}_R\right]^{1+q}\ge\sup\limits_{\partial B_R} v.
$$
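To justify the last two inequalities, note that $m\le A^{\frac{1}{1+p}}$ and $\mathcal{S}_R\ge\sup_{\partial B_R}u^{\frac{1}{1+p}}$, so that
$$
A\left[\frac{1}{m}\mathcal{S}_R\right]^{1+p}\ge A\left[\frac{\mathcal{S}_R}{A^{\frac{1}{1+p}}}\right]^{1+p}=\mathcal{S}_R^{\,1+p}\ge\sup_{\partial B_R}u;
$$
the estimate for $\tilde{v}$ is obtained in the same way using $m\le B^{\frac{1}{1+q}}$.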
Then, by Lemma \ref{comparison}, we deduce
$$
\max\{\tilde u - u, \tilde v - v\} \geq 0\,\,\,\mbox{ in } B_R.
$$
Thus, for a fixed point $x\in\R^n$, choosing $R$ large enough and using \eqref{radius}, we conclude that either $u(x)=0$ or $v(x)=0$, i.e.,
$$
u(x)v(x)=0,\,\,\,x\in\R^n.
$$
The latter implies
\begin{equation}\label{uv}
\{u>0\}\subset\{v=0\}\,\,\,\textrm{ and }\,\,\,\{v>0\}\subset\{u=0\}.
\end{equation}
Suppose there exists $y\in\R^n$ such that $u(y)>0$. Since $u$ is continuous, it remains positive in a neighborhood of $y$. From \eqref{uv} we obtain $v=0$ in that neighborhood. On the other hand, from \eqref{extracondition} we have $0=G(0,y)\ge u^q_+(y)$, which implies that $u(y)=0$, a contradiction.
\end{proof}
\begin{remark}\label{r5.1}
Observe that the constant on the right hand side of \eqref{asymptotic} is sharp. Indeed, for the pair of functions $(\tilde{u},\tilde{v})$ defined above, one has equality in \eqref{asymptotic}, but neither $\tilde{u}$ nor $\tilde{v}$ is identically zero.
\end{remark}
\textbf{Acknowledgments.} DJA is partially supported by CNPq and grant 2019/0014 Paraiba State Research Foundation (FAPESQ). He thanks Centre for Mathematics of the University of Coimbra (CMUC) and the Abdus Salam International Centre for Theoretical Physics (ICTP) for great hospitality during his research visits. RT is partially supported by FCT - Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, I.P., through projects PTDC/MAT-PUR/28686/2017 and UTAP-EXPL/MAT/0017/2017, as well as by the Centre for Mathematics of the University of Coimbra - UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES. RT thanks Department of Mathematics of the Universidade Federal da Para\'iba (UFPB) for hospitality and great working conditions during his research visit.
\end{document}
\begin{document}
\quad \vskip1.375truein
\def\mq{\mathfrak{q}}
\def\mp{\mathfrak{p}}
\def\mH{\mathfrak{H}}
\def\mh{\mathfrak{h}}
\def\ma{\mathfrak{a}}
\def\ms{\mathfrak{s}}
\def\mm{\mathfrak{m}}
\def\mn{\mathfrak{n}}
\def\mz{\mathfrak{z}}
\def\mw{\mathfrak{w}}
\def\Hoch{{\tt Hoch}}
\def\mt{\mathfrak{t}}
\def\ml{\mathfrak{l}}
\def\mT{\mathfrak{T}}
\def\mL{\mathfrak{L}}
\def\mg{\mathfrak{g}}
\def\md{\mathfrak{d}}
\def\mr{\mathfrak{r}}
\title[Exponential convergence for Morse-Bott case]{Analysis of contact Cauchy-Riemann maps II:
canonical neighborhoods and exponential convergence for the Morse-Bott case}
\author{Yong-Geun Oh, Rui Wang}
\address{Center for Geometry and Physics, Institute for Basic Science (IBS),
77 Cheongam-ro, Nam-gu, Pohang, Korea 37673
\& Department of Mathematics, POSTECH, Pohang, Korea, 37673}
\email{yongoh1@postech.ac.kr}
\address{University of California, Irvine, 340 Rowland Hall (Bldg.\# 400),
Irvine, CA 92697-3875, USA}
\email{ruiw10@math.uci.edu}
\thanks{This work is supported by the IBS project \# IBS-R003-D1}
\date{}
\begin{abstract} This is a sequel to the papers \cite{oh-wang1}, \cite{oh-wang2}.
In \cite{oh-wang1}, the authors introduced a canonical
affine connection on $M$ associated to the contact triad $(M,\lambda,J)$.
In \cite{oh-wang2}, they used the connection to establish a priori $W^{k,p}$-coercive estimates
for maps $w: \dot \Sigma \to M$ satisfying $\bar\partial^\pi w= 0, \, d(w^*\lambda \circ j) = 0$
\emph{without involving symplectization}. We call such a pair $(w,j)$ a contact instanton.
In this paper, we first prove a canonical neighborhood theorem of the locus $Q$ foliated by
closed Reeb orbits of a Morse-Bott contact form.
Then using a general framework of the three-interval method,
we establish exponential decay estimates for contact instantons $(w,j)$ of the triad $(M,\lambda,J)$,
with $\lambda$ a Morse-Bott contact form and
$J$ a CR-almost complex structure adapted to $Q$,
under the condition that the asymptotic charge of $(w,j)$ at the associated puncture vanishes.
We also apply the three-interval method to the symplectization case and provide an alternative
approach via tensorial calculations to exponential decay estimates in the Morse-Bott case
for the pseudoholomorphic curves on the symplectization of contact manifolds.
This was previously established by Bourgeois \cite{bourgeois} (resp. by Bao \cite{bao}),
by using special coordinates, for the cylindrical (resp. for the asymptotically cylindrical) ends.
The exponential decay result for the Morse-Bott case is an essential ingredient in the set-up of
the moduli space of pseudoholomorphic curves which plays a central role in contact homology and
symplectic field theory (SFT).
\end{abstract}
\keywords{Contact manifolds, Morse-Bott contact form,
Morse-Bott contact set-up, canonical neighborhood theorem,
adapted $CR$-almost complex structures, contact instantons, exponential decay, three-interval method}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec:intor}
Let $(M,\xi)$ be a contact manifold.
Each contact form $\lambda$ of $\xi$, i.e., a one-form with $\ker \lambda = \xi$, canonically induces a splitting
$$
TM = {\mathbb R}\{X_\lambda\} \oplus \xi.
$$
Here $X_\lambda$ is the Reeb vector field of $\lambda$,
which is uniquely determined by the equations
$$
X_\lambda \rfloor \lambda \equiv 1, \quad X_\lambda \rfloor d\lambda \equiv 0.
$$
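(A standard example, included here for illustration: on ${\mathbb R}^{2n+1}$ with coordinates
$(x_1,\dots,x_n,y_1,\dots,y_n,z)$ and the standard contact form $\lambda_0 = dz - \sum_{i=1}^n y_i\, dx_i$,
one has $d\lambda_0 = \sum_{i=1}^n dx_i \wedge dy_i$, and the two equations above are solved by
$X_{\lambda_0} = \partial/\partial z$, since $\lambda_0(\partial_z) = 1$ and $\partial_z \rfloor d\lambda_0 = 0$.)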
We denote by $\Pi=\Pi_\lambda: TM \to TM$ the idempotent, i.e., an endomorphism satisfying
$\Pi^2 = \Pi$ such that $\ker \Pi = {\mathbb R}\{X_\lambda\}$ and $\operatorname{Im} \Pi = \xi$.
Denote by $\pi=\pi_\lambda: TM \to \xi$ the associated projection.
\begin{defn}[Contact Triad]
We call the triple $(M,\lambda,J)$ a contact triad of $(M,\xi)$ if
$\lambda$ is a contact form of $(M,\xi)$,
and $J$ is an endomorphism of $TM$ with $J^2 = -\Pi$, which we call a
$CR$-almost complex structure, such that the triple $(\xi, d\lambda|_\xi,J|_\xi)$ defines a
Hermitian vector bundle over $M$.
\end{defn}
As long as no confusion arises, we abuse our notation $J$ also for its restriction to $\xi$.
In \cite{oh-wang2}, the authors of the present paper called the pair $(w,j)$ a contact instanton if
$(\Sigma, j)$ is a (punctured) Riemann surface and $w:\Sigma\to M$ satisfies the following equations
\begin{equation}\label{eq:contact-instanton}
\bar\partial^\pi w = 0, \quad d(w^*\lambda \circ j) = 0.
\end{equation}
A priori coercive $W^{k,2}$-estimates for $w$ with a $W^{1,2}$-bound were established
\emph{without involving symplectization}. Moreover, the study of the $W^{1,2}$ (or the derivative) bound
and the definition of the relevant energy is carried out by the first-named author in \cite{oh:energy}.
Furthermore, for the punctured domains $\dot\Sigma$ equipped with a cylindrical metric near the punctures,
the present authors proved the result of asymptotic subsequence \emph{uniform} convergence to a Reeb orbit
(which must be closed
when the corresponding charge vanishes) under the assumption that the $\pi$-harmonic energy is finite
and the $C^0$-norm of the derivative $dw$ is bounded. (Refer to \cite[Section 6]{oh-wang2} for the precise statement and
to Section \ref{sec:pseudo} of the current paper for its review.)
Based on this subsequence uniform convergence result, the present authors previously proved $C^\infty$ exponential decay
in \cite{oh-wang2} when the contact form is nondegenerate.
The proof is based on the so-called \emph{three-interval argument}, which is essentially different
from the proofs of exponential convergence in the existing literature, e.g., from those
in \cite{HWZ1, HWZ2, HWZplane} which use the method of differential inequality.
The present paper is a sequel to the paper \cite{oh-wang2} and
its main purpose is to generalize the exponential
convergence result to the Morse-Bott case. In Part \ref{part:exp} of the
current paper, we systematically develop the above mentioned three-interval method as a general framework
and establish the result for Morse-Bott contact forms.
(Corresponding results for pseudo-holomorphic curves in symplectizations
were provided by various authors including \cite{HWZ3, bourgeois, bao},
and we suggest that readers compare our method with theirs.)
In general, the exponential
convergence result is an important ingredient in the set-up of the Fredholm theory
and in the relevant gluing construction. In contact geometry, the moduli spaces of pseudo-holomorphic curves
with noncompact sources are used in defining contact homology and in setting up
the framework of symplectic field theory (SFT) (see e.g. \cite{SFT} for an introduction).
In this regard, the Morse-Bott case provides important computable examples
in contact geometry and in SFT. (See \cite{bourgeois} for some examples of such computations
based on the Morse-Bott framework of contact homology.)
However, there are various subtleties in
describing the structure of the Morse-Bott moduli spaces and
the corresponding contact homology for contact forms of Morse-Bott type,
which have not been rigorously set up yet.
One of the purposes of the current paper is to provide a careful geometric description
of the locus of closed Reeb orbits and the corresponding tensorial proof of exponential decay results.
Moreover, the abstract framework of the three-interval method we develop in this paper for the exponential decay
proof can be easily applied to other evolution-type equations, and provides a general `black box'
for the exponential decay.
The proof of the exponential decay result consists of two parts, one geometric and the other analytic. Part \ref{part:coordinate} is
devoted to unveiling the geometric structure, the \emph{pre-contact structure}, carried by the locus $Q$ of
the closed Reeb orbits of a Morse-Bott contact form $\lambda$ (see Section \ref{subsec:clean} for the precise definition).
We prove a canonical neighborhood theorem for any pre-contact manifold, which is the contact analogue of
Gotay's theorem on presymplectic manifolds \cite{gotay}; the canonical neighborhood is called the \emph{contact thickening}
of the pre-contact manifold. By using this neighborhood theorem, we obtain a canonical splitting of
the tangent bundle $TM$ in terms of the pre-contact structure of $Q$ and its thickening. Then we
introduce the class consisting of $J$'s \emph{adapted to $Q$} (see Section \ref{sec:adapted} for the definition)
besides the standard compatibility requirement with $d\lambda$. At last we split the derivative $dw$ of a contact instanton $w$
into various components and study them separately.
In this way, we are given the geometric framework which gets us ready to conduct the three-interval method
provided in Part \ref{part:exp}.
Part \ref{part:exp} is then devoted to applying the enhanced version of the three-interval framework
to proving the exponential convergence for the Morse-Bott case, which generalizes the one presented
for the nondegenerate case in \cite{oh-wang2}.
Now we outline the main results of the present paper in more detail.
\subsection{Structure of the locus of closed Reeb orbits}
\label{subsec:clean}
Assume $\lambda$ is a fixed contact form of the contact manifold $(M, \xi)$.
For a closed Reeb orbit $\gamma$ of period $T > 0$, one can write $\gamma(t) = \phi^t(\gamma(0))$, where
$\phi^t= \phi^t_{X_\lambda}$ is the flow of the Reeb vector field $X_\lambda$.
Denote by ${\operatorname{Cont}}(M, \xi)$ the set of all contact one-forms with respect to the contact structure $\xi$,
and by $\CL(M)=C^\infty(S^1, M)$ the space of loops $z: S^1 = {\mathbb R} /{\mathbb Z} \to M$.
Consider the bundle $\mathcal{L}$ over the product
$(0,\infty) \times \CL(M) \times {\operatorname{Cont}}(M,\xi)$ whose fiber
at $(T, z, \lambda)$ is $C^\infty(z^*TM)$.
The assignment
$$
\Upsilon: (T,z,\lambda) \mapsto \dot z - T \,X_\lambda(z)
$$
defines a section of the bundle, where $(T,z)$ is a pair with a loop $z$ parameterized over
the unit interval $S^1=[0,1]/\sim$ defined by $z(t)=\gamma(Tt)$ for
a Reeb orbit $\gamma$ of period $T$. Notice that $(T, z, \lambda)\in \Upsilon^{-1}(0)
:=\mathfrak{Reeb}(M,\xi)$ if and only if there exists some Reeb orbit $\gamma: {\mathbb R} \to M$ with period $T$ such that
$z(\cdot)=\gamma(T\cdot)$.
Denote by
$$
\mathfrak{Reeb}(M,\lambda) : = \{(T,z) \mid (T,z,\lambda) \in \mathfrak{Reeb}(M,\xi)\}
$$
for each $\lambda \in {\operatorname{Cont}}(M,\xi)$. From the formula for the period of a $T$-periodic orbit $(T,\gamma)$,
$T = \int_\gamma \lambda$,
it follows that the period varies smoothly over $\gamma$.
The general Morse-Bott condition (Bott's notion \chiite{bott} of clean critical submanifold in general) for $\lambda} \def\La{\Lambdaambdambda$ corresponds to the statement that
every connected component of $\psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$ is a smooth submanifold
of $ (0,\infty) \tauimes {\muathbb C}L(M)$ and its tangent space at
every pair $(T,z) \in \psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$ therein coincides with $\kappaer d_{(T,z)}\group{U}psilon$.
Denote by $Q$ the locus of closed Reeb orbits contained in a fixed
connected component of $\psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$. Throughout this paper, we also call $Q$
a Morse-Bott submanifold when we want to emphasize its manifold structure.
However, when one tries to set up the moduli space of contact instantons
for Morse-Bott contact forms, more requirements are needed and we recall the definition that Bourgeois
adopted in \chiite{bourgeois}.
{(Strictly speaking, we also need to
take suitable completions of ${\muathbb C}L(M)$ and ${\omega} \def\O{\Omegaperatorname{Cont}}(M,\xii)$
but we ignore this point which does not play any role in our main discussion.)
\betaegin{equation}gin{defn}[Equivalent to Definition 1.7 \chiite{bourgeois}]\lambda} \def\La{\Lambdaambdabel{defn:morse-bott-intro}
A contact form $\lambda} \def\La{\Lambdaambdambda$ is called be of Morse-Bott type, if it satisfies the following:
\betaegin{equation}gin{enumerate}
\item
Every connected component of $\psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$ is a smooth submanifold
of $ (0,\infty) \tauimes {\muathbb C}L(M)$ with its tangent space at
every pair $(T,z) \in \psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$ therein coincides with $\kappaer d_{(T,z)}\group{U}psilon$;
\item The locus $Q$ is embedded;
\item The 2-form $d\lambda} \def\La{\Lambdaambdambda|_Q$ associated to the locus $Q$ of closed Reeb orbits has constant rank.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{enumerate}
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{defn}
Here Condition (1) corresponds to
Bott's notion of Morse-Bott critical manifolds, which we call \emph{standard Morse-Bott type}.
While $\mathfrak{Reeb}(M,\lambda)$ is a
smooth submanifold, the orbit locus $Q \subset M$ of $\mathfrak{Reeb}(M,\lambda)$ is
in general only an immersed submanifold and could have multiple sheets along the locus of multiple orbits.
Therefore we impose Condition (2).
In general, the restriction of the two-form $d\lambda$ to $Q$ has varying rank. It is still not clear whether
the exponential estimates we derive in this paper hold in this general context, because our proof
relies strongly on the existence of a canonical model of neighborhoods of $Q$. For this reason, we
also impose Condition (3).

We remark that Condition (3) means that the 2-form $d\lambda|_Q$ becomes a presymplectic form.
Depending on the type of the presymplectic form,
we say that $Q$ is of pre-quantization type if the rank of $d\lambda|_Q$ is maximal,
and of isotropic type if the rank of $d\lambda|_Q$ is zero. The general case is
a mixture of these two. In particular, when $\dim M = 3$, such $Q$ must be either of
prequantization type or of isotropic type. This is the case dealt with in
\cite{HWZ3}. The general case considered in \cite{bourgeois}, \cite{behwz} includes the mixed type.
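For the reader's orientation we include an illustration of the two extreme types; the following examples are standard and are not used elsewhere in this paper.

\begin{exm}
Consider the round sphere $S^{2n+1} \subset {\mathbb C}^{n+1}$ with the contact form $\lambda$ obtained by restricting
$\frac{1}{2}\sum_{j=0}^{n}(x_j\,dy_j - y_j\,dx_j)$. The Reeb flow is the Hopf circle action, so every Reeb orbit is closed
with the same period and the locus is $Q = S^{2n+1}$ itself. Here $d\lambda|_Q$ has the maximal possible rank $2n$, so $Q$ is
of prequantization type with $Q/S^1 \cong {\mathbb C}P^n$. At the other extreme, if a connected component of the closed Reeb
orbits consists of a single isolated closed orbit, as happens for an ellipsoid with rationally independent radii, then $Q$ is a
circle, $d\lambda|_Q = 0$, and $Q$ is of isotropic type.
\end{exm}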
\begin{defn}[Pre-Contact Form] We call a one-form $\theta$ on a manifold $Q$ a \emph{pre-contact} form
if $d\theta$ has constant rank, i.e., if $d\theta$ is a presymplectic form.
\end{defn}

While the notion of presymplectic manifolds is well established in symplectic geometry,
this contact analogue does not seem to have been formally used in the literature, as far as we know.
With this terminology introduced, we prove the following theorem.

\begin{thm}[Theorem \ref{thm:morsebottsetup}]\label{thm:morsebottsetup-intro}
Let $\lambda$ be a Morse-Bott type contact form of the contact manifold $(M, \xi)$ as defined above.
Let $Q$ be an associated Morse-Bott submanifold of closed Reeb orbits.
Suppose that $Q$ is embedded and $d\lambda|_Q = i_Q^*(d\lambda)$ has constant rank. Then $Q$ carries
\begin{enumerate}
\item a locally free $S^1$-action generated by the Reeb vector field $X_\lambda|_Q$;
\item the pre-contact form $\theta$ given by $\theta = i_Q^*\lambda$ and the splitting
\begin{equation}\label{eq:kernel-dtheta0}
\ker d\theta = {\mathbb R}\{X_\theta\} \oplus H,
\end{equation}
such that the distribution $H = \ker d\theta \cap \xi|_Q$ is integrable;
\item
an $S^1$-equivariant symplectic vector bundle $(E,\Omega) \to Q$ with
$$
E = (TQ)^{d\lambda}/\ker d\theta, \quad \Omega = [d\lambda]_E.
$$
\end{enumerate}
\end{thm}

Here we use the fact that there exists a canonical embedding
\begin{equation}\label{eq:EtoNQM}
E = (TQ)^{d\lambda}/\ker d\theta \hookrightarrow T_QM/ TQ = N_QM,
\end{equation}
and that $d\lambda|_{(TQ)^{d\lambda}}$ canonically induces a bilinear form $[d\lambda]_E$
on $E = (TQ)^{d\lambda}/\ker di_Q^*\lambda$ by symplectic reduction.

\begin{defn} Let $(Q,\theta)$ be a pre-contact manifold
equipped with the splitting \eqref{eq:kernel-dtheta0}. We call such a triple $(Q,\theta,H)$
a \emph{Morse-Bott contact set-up}.
\end{defn}
Denote by $\mathcal{F}$ and $\mathcal{N}$ the foliations associated to the distributions $\ker d\theta$ and $H$,
respectively. We also denote by $T\mathcal{F}$, $T\mathcal{N}$ the associated foliation tangent bundles, and by
$T^*\mathcal{N}$ the foliation cotangent bundle of $\mathcal{N}$.

We prove the following canonical model theorem, which describes a
natural way of thickening a Morse-Bott contact set-up $(Q,\theta,H)$ whenever a
symplectic vector bundle $E \to Q$ is given.

\begin{thm}\label{thm:splitting1} Let $(Q,\theta,H)$ be a Morse-Bott contact set-up,
and let a symplectic vector bundle $(E,\Omega) \to Q$ be given. Then
the bundle $F = T^*\mathcal{N} \oplus E$
carries a canonical contact form $\lambda_{F;G}$, defined as in \eqref{eq:lambdaF}, for
each choice of complement $G$ such that $TQ = T\mathcal{F} \oplus G$. Furthermore, for two such choices
$G, \, G'$, the two induced contact structures are naturally isomorphic.
\end{thm}

Based on this theorem, we denote any such $\lambda_{F;G}$ just by $\lambda_F$,
suppressing $G$ from the notation.
This normal form provides a general class of contact
manifolds equipped with a contact form of Morse-Bott type.

Finally we prove the following canonical neighborhood
theorem for $Q \subset M$, with $Q$ as defined above, for any
Morse-Bott contact form $\lambda$ of the contact manifold $(M,\xi)$.

\begin{thm}[Theorem \ref{thm:neighborhoods2}]\label{thm:neighbohood}
Let $Q$ be the submanifold foliated by closed Reeb orbits of a
Morse-Bott type contact form $\lambda$ of the contact manifold $(M,\xi)$, and let
$(Q,\theta)$ and $(E,\Omega)$ be the associated pair defined above.
Let $(F, \lambda_F)$ be the model contact manifold with $F = T^*\mathcal{N} \oplus E$
and $\lambda_F$ the contact form on $U_F \subset F$ given in \eqref{eq:lambdaF}.
Then there exist neighborhoods $\mathcal{U}$ of $Q$ and $U_F$ of the zero section $o_F$,
a diffeomorphism $\psi: U_F \to \mathcal{U}$ and a function $f: U_F \to {\mathbb R}$ such that
\begin{equation}\label{eq:psi*lambda}
\psi^*\lambda = f\, \lambda_F, \quad f|_{o_F} \equiv 1, \quad df|_{o_F}\equiv 0
\end{equation}
and
\begin{equation}\label{eq:ioFpsi}
i_{o_F}^*\psi^*\lambda = \theta, \quad (\psi^*d\lambda|_{VTF})|_{o_F} = 0\oplus \Omega,
\end{equation}
where we use the canonical identification of $VTF|_{o_F} \cong T^*\mathcal{N} \oplus E$ on the
zero section $o_F \cong Q$.
\end{thm}

\begin{rem}\label{rem:behwz}
We would like to remark that while the bundles $E$ and $TQ/T\mathcal{F}$ carry canonical fiberwise
symplectic forms, and so carry canonical orientations
induced by $d\lambda$, the bundle $T\mathcal{N}$ may not be orientable in general along a
Reeb orbit corresponding to an orbifold point of $P = Q/\sim$.
\end{rem}
\subsection{The three-interval method of exponential estimates}
\label{subsec:three-interval}

For the study of the asymptotic behavior of finite $\pi$-energy contact instantons
$w: \dot\Sigma \to M$ near a Morse-Bott submanifold $Q$,
we introduce the following class of $CR$-almost complex structures.

\begin{defn}[Definition \ref{defn:adapted}] Let $Q \subset M$ be a
Morse-Bott submanifold foliated by closed Reeb orbits of $\lambda$.
Suppose $J$ defines a contact triad $(M,\lambda,J)$.
We say a $CR$-almost complex structure $J$ for $(M,\xi)$ is adapted to
the submanifold $Q$, or simply $Q$-adapted, if $J$ satisfies
\begin{equation}\label{eq:JTNT}
J(TQ) \subset TQ + JT\mathcal{N}.
\end{equation}
\end{defn}

Note that this condition is vacuous in the nondegenerate case,
but for the general Morse-Bott case the class of
adapted $J$ is strictly smaller than that of general $CR$-almost complex structures of
the triad. The set of $Q$-adapted $J$'s is contractible; the proof is given in Appendix \ref{sec:appendix-adapted}.
As far as applications to contact topology are concerned, requiring this
condition is not a restriction, but it seems to be necessary for the analysis of
contact instanton maps or of pseudoholomorphic maps in the symplectization
(or in symplectic manifolds with contact-type boundary).

Let $w: \dot\Sigma \rightarrow M$ be a contact instanton map, i.e., a map
satisfying \eqref{eq:contact-instanton} at a cylindrical end
$[0, \infty)\times S^1$, which now can be written as
\begin{equation}\label{eq:contact-instanton2}
\pi \frac{\partial w}{\partial \tau} + J \pi \frac{\partial w}{\partial t} = 0, \quad
d(w^*\lambda \circ j) = 0,
\end{equation}
for $(\tau, t)\in [0, \infty)\times S^1$. We put the following basic hypotheses
for the study of exponential convergence.

\begin{hypo}[Hypothesis \ref{hypo:basic}]\label{hypo:basic-intro}
\begin{enumerate}
\item \emph{Finite $\pi$-energy}:
$E^\pi(w) := \frac{1}{2} \int_{[0, \infty)\times S^1} |d^\pi w|^2 < \infty$;
\item \emph{Finite derivative bound}:
$\|dw\|_{C^0([0, \infty)\times S^1)} \leq C < \infty$;
\item \emph{Non-vanishing asymptotic action}:
$$
\mathcal{T} := \frac{1}{2}\int_{[0,\infty) \times S^1} |d^\pi w|^2
+ \int_{\{0\}\times S^1}(w|_{\{0\}\times S^1})^*\lambda \neq 0;
$$
\item \emph{Vanishing asymptotic charge}:
$$
\mathcal{Q}:=\int_{\{0\}\times S^1}\big((w|_{\{0\}\times S^1})^*\lambda\circ j\big)=0.
$$
\end{enumerate}
\end{hypo}
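The following observation is meant only to give the reader some feeling for Hypothesis (4); it is a standard remark about the symplectization case and is not needed in the proofs below.

\begin{rem}
If $u = (w,a)$ is a pseudoholomorphic curve in the symplectization with respect to a cylindrical almost complex structure, the second equation of \eqref{eq:contact-instanton2} is replaced by $w^*\lambda\circ j = da$, and hence
$$
\mathcal{Q} = \int_{\{0\}\times S^1}(w^*\lambda\circ j) = \int_{\{0\}\times S^1} da = 0
$$
since $a$ is a single-valued function on $\dot\Sigma$. In other words, the vanishing of the asymptotic charge is automatic in the symplectization case, while for general contact instantons it is imposed as a hypothesis.
\end{rem}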
Under these hypotheses, we establish the following $C^\infty$ uniform exponential convergence of
$w$ to a closed Reeb orbit $z$ of period $T=\mathcal{T}$. This result was already known in
\cite{HWZ3, bourgeois, bao} in the context of pseudoholomorphic curves $u = (w,a)$ in symplectizations.
However, we emphasize that our proof presented here, which uses the three-interval framework, is
different from those of \cite{HWZ3, bourgeois, bao} even in the symplectization case.
Furthermore, when we deal with the case of the symplectization,
our method completely separates the estimates of $w$ from those of $a$.
(See Section \ref{sec:asymp-cylinder}.)

\begin{thm}\label{thm:expdecay} Assume $(M, \lambda)$ is a Morse-Bott contact manifold and
$w$ is a contact instanton satisfying Hypothesis \ref{hypo:basic-intro}
at the given end. Then there exist a closed Reeb orbit $z$ of period $T=\mathcal{T}$ and a positive
constant $\delta$ determined by $z$, such that
$$\|d(w(\tau, \cdot), z(T\cdot))\|_{C^0(S^1)}<C e^{-\delta \tau},$$
and
\begin{eqnarray*}
&&\left\|\pi \frac{\partial w}{\partial\tau}(\tau, \cdot)\right\|_{C^0(S^1)}<Ce^{-\delta\tau}, \quad
\left\|\pi \frac{\partial w}{\partial t}(\tau, \cdot)\right\|_{C^0(S^1)}<Ce^{-\delta\tau},\\
&&\left\|\lambda\Big(\frac{\partial w}{\partial\tau}\Big)(\tau, \cdot)\right\|_{C^0(S^1)}<Ce^{-\delta\tau}, \quad
\left\|\lambda\Big(\frac{\partial w}{\partial t}\Big)(\tau, \cdot)-T\right\|_{C^0(S^1)}<Ce^{-\delta\tau},\\
&&\left\|\nabla^l dw(\tau, t)\right\|_{C^0(S^1)}<C_le^{-\delta\tau} \quad \text{for any}\quad l\geq 1,
\end{eqnarray*}
where $d$ is the distance function induced from the triad metric on $M$ and $C$, $C_{l}$ are positive constants, with $C_l$ depending only on $l$.
\end{thm}

Now comes the outline of the strategy of our proof of exponential convergence in the present paper.
Mundet i Riera and Tian in \cite{mundet-tian} elegantly used a discrete three-interval argument
to prove exponential decay under the assumption that
$C^0$-convergence has already been established. However, in most cases of interest, the $C^0$-convergence
is not given a priori; it is often the case that the $C^0$-convergence
can be obtained only after one proves the exponential convergence of the derivatives.
(See the proofs of, for example, \cite{HWZ1, HWZ2, HWZplane}, \cite{HWZ3}, \cite{bourgeois}, \cite{bao}.)
To obtain the exponential estimates of the derivatives, researchers carry out brute-force calculations to derive the needed
differential inequality, and then proceed from there towards the final result.
Such a calculation, especially in coordinates, becomes quite complicated in the Morse-Bott situation
and hides the geometry that explains why such a differential inequality should be expected.

Our proof is divided into two parts by writing $w=(u, s)$ in the
normalized contact triad $(U_F,\lambda_F, J_0)$ (see Definition \ref{defn:normaltriad}) with $U_F \subset F \to Q$,
for any given compatible $J$ adapted to $Q$, where $J_0$ is the canonical normalized
$CR$-almost complex structure associated to $J$.
We also decompose $s=(\mu, e)$ in terms of the splitting $F = T^*\mathcal{N} \oplus E$.
In this decomposition, the $L^2$-exponential estimate for the $e$-component is an easy consequence of
the three-interval method, which we formulate above in a general abstract framework
(see Theorem \ref{thm:three-interval} for the precise statement). This estimate belongs to the
standard realm of exponential decay proofs for asymptotically cylindrical elliptic equations.
However, the study of $L^2$-exponential estimates for $(u,\mu)$ does not directly belong to this
standard realm. Although we still apply a similar three-interval method for the study of the $L^2$-exponential convergence,
its study is much more subtle than that of the normal component due to the presence of a
non-trivial kernel of the asymptotic operator $B_\infty$ of the linearization.

To handle the $(u,\mu)$-component, we formulate the following general theorem
within the abstract framework of the three-interval argument, and
refer readers to Sections \ref{sec:three-interval} and \ref{subsec:exp-horizontal} for the precise statement and
its proof.

\begin{thm}
Assume $\zeta(\tau, t)$ is a section of some vector bundle over ${\mathbb R}\times S^1$ which satisfies an equation of
Cauchy-Riemann type (or, more generally, any elliptic PDE of evolution type)
$$
\nabla^\pi_\tau\zeta+J\nabla^\pi_t\zeta+S\zeta=L(\tau, t) \quad \text{ with } |L|<Ce^{-\delta_0\tau},
$$
where $S$ is a bounded symmetric operator.
Suppose that there exists a sequence $\{\bar\zeta_k\}$ (e.g., obtained by a suitable
rescaling of $\zeta$) such that at least one subsequence converges to a non-zero
section $\bar\zeta_\infty$ of a (trivial) Banach bundle over a fixed finite interval, say $[0,3]$,
that satisfies the ODE
$$
\frac{D \bar\zeta_\infty}{d\tau}+B_\infty \bar\zeta_\infty =0
$$
in the associated Banach space.
Then, provided $\|\zeta(\tau, \cdot)\|_{L^2(S^1)}$ converges to zero as
$\tau$ goes to $\infty$, $\|\zeta(\tau, \cdot)\|_{L^2(S^1)}$ decays
exponentially fast with rate $\delta >0$ for any constant $\delta < \min\{\lambda_0,\delta_0\}$,
where $\lambda_0$ is the smallest absolute value of the non-zero eigenvalues of $B_\infty$.
\end{thm}

\begin{rem}For the special case when $B_\infty$ has only trivial kernel,
this result can be regarded as the discrete analogue of the differential inequality method
used by Robbin-Salamon in \cite{robbin-salamon}.
\end{rem}
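To orient the reader, we record the elementary mechanism behind the three-interval argument. The following is only a sketch of the simplest (trivial-kernel) situation; the precise version used in this paper is the enhanced statement of Theorem \ref{thm:three-interval}. Suppose $\{x_k\}_{k\geq 0}$ is a sequence of nonnegative numbers, e.g.,
$x_k = \|\zeta\|_{L^2([k,k+1]\times S^1)}^2$, satisfying
$$
x_{k+1} \leq c\,(x_k + x_{k+2}) \quad \text{for all } k \geq 0
$$
for some constant $0 < c < \frac{1}{2}$, and that $x_k \to 0$ as $k \to \infty$. Let
$\rho = \frac{1-\sqrt{1-4c^2}}{2c} \in (0,1)$ be the smaller root of $c\,t^2 - t + c = 0$. Comparing $\{x_k\}$ with the
solutions $A\rho^k + B\rho^{-k}$ of the associated linear recursion via the discrete maximum principle, one obtains
$$
x_k \leq \rho^k\, x_0,
$$
i.e., exponential decay with rate $\delta = -\log\rho$. The analytic work then consists in deriving such a three-interval
inequality for the $L^2$-norms of the relevant components of $dw$ from the equation \eqref{eq:contact-instanton2}.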
In this framework, our exponential convergence proof is based on intrinsic geometric tensor calculations
which are coordinate-free. As a result, our proof makes it manifest that (roughly speaking) the exponential
decay occurs whenever the geometric PDE has bounded energy at cylindrical ends
and the limiting equation is of linear evolution type
$$
\frac{\partial \overline \zeta_\infty}{\partial\tau}+B_\infty \overline \zeta_\infty =0,
$$
where $B_\infty$ is an elliptic operator with discrete spectrum.
If $B_\infty$ has trivial kernel, the conclusion follows rather immediately from
the three-interval argument. Even when $B_\infty$ has non-trivial kernel,
the exponential decay still follows as long as some geometric condition, like the Morse-Bott assumption in
the case of our current interest, enables one to extract
some non-vanishing solution of the limit equation
$\frac{\partial \overline\zeta_\infty}{\partial\tau}+B_\infty \overline \zeta_\infty =0$
that arises in the course of the three-interval argument.
Moreover, the decay rate $\delta> 0$ is governed by the smallest absolute value of the non-zero eigenvalues of $B_\infty$.

Now we roughly explain how the non-vanishing limiting solution mentioned above is obtained in the current situation.
First, the canonical neighborhood provided in Part \ref{part:coordinate} is used to split the contact
instanton equation into its vertical and horizontal parts. In this way, only the horizontal equation can be
involved with the kernel of $B_\infty$, which by the Morse-Bott condition has a
nice geometric structure, in the sense that the kernel can be excluded by looking at
a higher derivative instead of the map itself.
Then, to see that the limit of the derivative is indeed non-vanishing, we apply the
geometric decomposition of the derivative and study the center of mass on the Morse-Bott
submanifold $Q$. The details are presented in Sections \ref{subsec:exp-horizontal}
and \ref{subsec:centerofmass}.

\part{Contact Hamiltonian geometry and canonical neighborhoods}\label{part:neighborhoods}
\label{part:coordinate}

The main purpose of this part is to prove a canonical neighborhood theorem for the locus of closed Reeb orbits when the contact form $\lambda$ of a contact manifold $(M, \xi)$ is of Morse-Bott type. The results of this part
provide the geometric preparation for the study of the asymptotic exponential convergence
of contact instantons at a puncture of the domain Riemann surface.

The section-wise outline of Part 1 is as follows.
\begin{itemize}
\item In Section 2, we review some basic facts related to contact forms of a contact manifold. We first set up a natural isomorphism between $TM$ and $T^*M$
using the contact form $\lambda$.
This is a contact analogue of the isomorphism between the tangent bundle and the
cotangent bundle for symplectic manifolds.
Then we derive explicit formulae for the Reeb vector field $X_{f\lambda}$ and
the contact projection $\pi_{f\lambda}$ in terms of $X_\lambda$, $\pi_\lambda$ and $f$,
respectively.
\item In Section 3, we introduce the definition of Morse-Bott contact forms.
Then we study the canonical pre-contact structure associated to the locus of closed Reeb orbits under the Morse-Bott assumption.
\item In Section 4, we introduce the notion of the contact thickening of a pre-contact structure. It is the contact analogue
of the symplectic thickening of a pre-symplectic structure constructed in \cite{gotay}, \cite{oh-park}.
\item In Section 5, we prove a canonical neighborhood theorem for the locus $Q$ of
closed Reeb orbits under the Morse-Bott assumption.
\item In Section 6, we derive the linearization formula of a Reeb orbit in the normal form.
\item In Section 7, we express the derivative $dw = (du, \nabla_{du} f)$ of
any smooth map $w = (u,f)$ from a (punctured) surface into the normal neighborhood $F$ of $Q$
in terms of the splitting
$$
TU_F = TQ \oplus F = TQ \oplus (E \oplus JT\mathcal{F}), \quad TQ = T\mathcal{F} \oplus G.
$$
\item In Sections 8 and 9, we introduce the class of adapted $CR$-almost complex structures and prove its abundance.
\end{itemize}
\section{Basics on contact forms}

We recall some basic facts on contact geometry and
contact Hamiltonian dynamics, especially in relation to the perturbation of
contact forms for a given contact manifold $(M,\xi)$.

\subsection{$\lambda$-dual vector fields and $\lambda$-dual one-forms}
\label{subsec:some}

Let $(M,\xi)$ be a contact manifold and $\lambda$ a contact form with $\ker \lambda = \xi$. Consider
its associated decomposition
\begin{equation}\label{eq:decomp-TM}
TM = {\mathbb R}\{X_\lambda\} \oplus \xi
\end{equation}
and denote by $\pi=\pi_\lambda: TM \to \xi$ the associated projection.
This decomposition canonically
induces the corresponding dual decomposition
\begin{equation}\label{eq:decomp-T*M}
T^*M = \xi^\perp \oplus ({\mathbb R}\{X_\lambda\})^\perp,
\end{equation}
where $(\cdot)^\perp$ denotes the annihilator of $(\cdot)$. This
gives rise to the decomposition
\begin{equation}\label{eq:alpha-decomp}
\alpha = \alpha(X_\lambda)\, \lambda + \alpha \circ \pi_\lambda
\end{equation}
of any one-form $\alpha$. Then we have the following general lemma, whose proof immediately follows from \eqref{eq:decomp-T*M}.

\begin{lem}\label{lem:decompose}
For any given one-form $\alpha$, there exists a unique $Y_\alpha \in \xi$ such that
$$
\alpha = Y_\alpha \rfloor d\lambda + \alpha(X_\lambda)\, \lambda.
$$
\end{lem}

\begin{defn}[$\lambda$-Dual Vector Field and One-Form] Let $\lambda$ be a given contact form of $(M,\xi)$.
We define the \emph{$\lambda$-dual vector field} of a one-form $\alpha$ to be
$$
\flat_\lambda(\alpha) := Y_\alpha + \alpha(X_\lambda)\, X_\lambda.
$$
Conversely, for any given vector field $X$ we define its \emph{$\lambda$-dual one-form} by
$$
\sharp_\lambda(X)= X \rfloor d\lambda + \lambda(X)\, \lambda.
$$
\end{defn}

For simplicity of notation, we will write $\alpha_X := \sharp_\lambda(X)$.
By definition, we have the identity
\begin{equation}\label{eq:obvious-id}
\lambda(\flat_\lambda(\alpha)) = \alpha(X_\lambda).
\end{equation}
The following proposition is immediate from the definitions of the dual vector field and the dual
one-form.

\begin{prop}\label{prop:inverse} The maps $\flat_\lambda: \Omega^1(M) \to \mathfrak{X}(M), \, \alpha \mapsto \flat_\lambda(\alpha)$
and $\sharp_\lambda: \mathfrak{X}(M) \to \Omega^1(M), \, X \mapsto \alpha_X$ are inverse to each other.
In particular, any vector field can be written as $\flat_\lambda(\alpha)$ for a unique one-form $\alpha$, and
any one-form can be written as $\alpha_X$ for a unique vector field $X$.
\end{prop}

By definition we have $\flat_\lambda(\lambda) = X_\lambda$; that is, the $\lambda$-dual vector field of
the contact form $\lambda$ itself is the Reeb vector field, for which $Y_\lambda = 0$
by definition.
Obviously, when an exact one-form $\alpha$ is given, the choice of $h$ with $\alpha = dh$ is
unique modulo the addition of a constant (on each connected component of $M$).

To equip readers with some feeling for the above decomposition, which is
not common in the literature, we now provide the coordinate expressions of $\flat_\lambda(\alpha)$ and $\alpha_X$
in the Darboux chart $(q_1,\ldots, q_n, p_1, \ldots, p_n,z)$ with respect to the canonical
one-form $\lambda_0 = dz - \sum_{i=1}^n p_i\, dq_i$ on ${\mathbb R}^{2n+1}$, or more generally on the
one-jet bundle $J^1(N)$ of a smooth $n$-manifold $N$.
\emph{However, this coordinate expression will not be used in the rest of the present paper.}
We recall that for this contact form,
the associated Reeb vector field is nothing but
$$
X_{\lambda_0} = \frac{\partial}{\partial z}.
$$
We start with the expression of $\flat_\lambda(\alpha)$ for a given one-form
$$
\alpha = \alpha_0 \, dz + \sum_{i=1}^n a_i\, dq_i + \sum_{j=1}^n b_j\, dp_j.
$$
We write
$$
\flat_\lambda(\alpha) = v_0 \frac{\partial}{\partial z} + \sum_{i=1}^n v_{i;q} \frac{\partial}{\partial q_i}
+ \sum_{j=1}^n v_{j;p} \frac{\partial}{\partial p_j}.
$$
A direct computation using the defining equation of $\flat_\lambda(\alpha)$ leads to the following.

\begin{prop}\label{prop:expressioninDarbouxchart}
Consider the standard contact form $\lambda = dz - \sum_{i=1}^np_i\, dq_i$
on ${\mathbb R}^{2n+1}$. Then for the given one-form $\alpha = \alpha_0\, dz + \sum_{i=1}^na_i\, dq_i + \sum_{j=1}^nb_j \, dp_j$,
\begin{equation}\label{eq:vis}
\flat_\lambda(\alpha) = \left(\alpha_0 + \sum_{k=1}^n p_k\, b_k\right) \frac{\partial}{\partial z} +\sum_{i=1}^n b_i\frac{\partial}{\partial q_i}
+ \sum_{j=1}^n (-a_j - p_j\, \alpha_0) \frac{\partial}{\partial p_j}.
\end{equation}
Conversely, for a given $X = v_0 \frac{\partial}{\partial z} + \sum_{i=1}^n v_{i;q} \frac{\partial}{\partial q_i} + \sum_{j=1}^n v_{j;p} \frac{\partial}{\partial p_j}$, we obtain
\begin{eqnarray}\label{eq:alphais}
\alpha_X & = & \left(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\right)dz \nonumber \\
&{}& \quad - \sum_{i=1}^n \left( v_{i;p} + \left(v_0 - \sum_{j=1}^n p_j\, v_{j;q}\right)p_i\right) dq_i
+\sum_{j=1}^n v_{j;q}\, dp_j.
\end{eqnarray}
\end{prop}

\begin{proof} We first recall the basic identity \eqref{eq:obvious-id}.
By definition, $\flat_\lambda(\alpha)$ is determined by the equation
\begin{equation}\label{eq:alpha-lambda0}
\alpha = \flat_\lambda(\alpha) \rfloor \sum_{i=1}^n dq_i \wedge dp_i
+ \lambda(\flat_\lambda(\alpha))\, \left(dz - \sum_{i=1}^n p_i\, dq_i\right)
\end{equation}
in the current case.
A straightforward computation leads to the formula \eqref{eq:vis}.
Then \eqref{eq:alphais} can be derived either by inverting this formula or by
using the defining equation of $\alpha_X$, which reduces to
$$
\alpha_X = X \rfloor d\lambda + \lambda(X)\, \lambda = X \rfloor \sum_{i=1}^n dq_i \wedge dp_i
+ \Big(dz - \sum_{j=1}^n p_j\, dq_j\Big)(X)\, \lambda.
$$
We omit the details of the computation.
\end{proof}

\begin{exm}
Again consider the canonical one-form $\lambda_0 = dz - \sum_{i=1}^n p_i\, dq_i$. Then we obtain the
following coordinate expression as a special case of \eqref{eq:vis}:
\begin{equation}\label{eq:hamvis}
\flat_\lambda(dh) = \left(\frac{\partial h}{\partial z} + \sum_{i=1}^np_i\frac{\partial h}{\partial p_i}\right)\frac{\partial}{\partial z}
+ \sum_{i=1}^n \frac{\partial h}{\partial p_i}\frac{\partial}{\partial q_i}
+ \sum_{i=1}^n\left(- \frac{\partial h}{\partial q_i} - p_i\frac{\partial h}{\partial z}\right)\frac{\partial}{\partial p_i}.
\end{equation}
\end{exm}
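As a quick sanity check of these formulae (this verification is ours and is not used elsewhere), taking $h = z$ in \eqref{eq:hamvis} gives
$$
\flat_\lambda(dz) = \frac{\partial}{\partial z} - \sum_{i=1}^n p_i\frac{\partial}{\partial p_i},
$$
while taking $\alpha = \lambda_0$ itself, i.e., $\alpha_0 = 1$, $a_i = -p_i$, $b_j = 0$ in \eqref{eq:vis}, gives
$\flat_\lambda(\lambda_0) = \frac{\partial}{\partial z} = X_{\lambda_0}$, consistent with the identity $\flat_\lambda(\lambda) = X_\lambda$ noted above.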
\subsection{Perturbation of contact forms of $(M,\xi)$}
\label{subsec:perturbed-forms}

In this subsection, we exploit the discussion on $\lambda$-dual vector fields and
express the Reeb vector field $X_{f\lambda}$ and the projection $\pi_{f\lambda}$
associated to the contact form $f \lambda$, for a positive function $f > 0$,
in terms of those associated to the given contact form $\lambda$ and
the $\lambda$-dual vector field of $df$.

Recalling Lemma \ref{lem:decompose}, for any smooth function $g$ we can write
$$
dg = Y_{dg} \rfloor d\lambda + dg(X_\lambda) \lambda
$$
with $Y_{dg} \in \xi$ in a unique way. Then by definition we have
$Y_{dg} = \pi_\lambda(\flat_\lambda(dg))$.

We first compute the following useful explicit formula
for the associated Reeb vector field $X_{f\lambda}$ in terms of $X_\lambda$
and $Y_{dg}$.

\begin{prop}[Perturbed Reeb Vector Field]\label{prop:eta} Denote $Y_{dg} = \pi_\lambda(\flat_\lambda(dg))$
as above. Then we have
$$
X_{f\lambda} = \frac{1}{f}(X_{\lambda} + Y_{dg}), \quad g = \log f.
$$
\end{prop}
\begin{proof}
It turns out to be easier to
consider $f\, X_{f\lambda}$, which we compute below. First we have
\begin{equation}\label{eq:fXflambda}
f\,X_{f\lambda}= c \cdot X_\lambda+\eta
\end{equation}
with respect to the splitting $TM = {\mathbb R}\{X_\lambda\} \oplus \xi$, for some function $c$
and some $\eta \in \xi$. We evaluate
$$
c = \lambda(fX_{f\lambda})=(f\lambda)(X_{f\lambda})= 1.
$$
It remains to derive the formula for $\eta$. Using the formula
$$
d(f\lambda)= f\, d\lambda+df\wedge \lambda
$$
and $\lambda(\eta) = 0$,
we compute
\begin{eqnarray*}
\eta\rfloor d\lambda&=&(fX_{f\lambda})\rfloor d\lambda\\
&=& X_{f\lambda}\rfloor d(f\lambda)-X_{f\lambda}\rfloor(df\wedge\lambda)\\
&=&-X_{f\lambda}\rfloor(df\wedge\lambda)\\
&=&-X_{f\lambda}(f)\,\lambda+\lambda(X_{f\lambda})\,df\\
&=&-\frac{1}{f}(X_\lambda+\eta)(f)\,\lambda+\frac{1}{f}\lambda(X_\lambda+\eta)\,df\\
&=&-\frac{1}{f}X_\lambda(f)\,\lambda-\frac{1}{f}\eta(f)\,\lambda+\frac{1}{f}df.
\end{eqnarray*}
Evaluating both sides on $X_\lambda$, we get $\eta(f)=0$, and hence
$$
\eta\rfloor d\lambda=-\frac{1}{f}X_\lambda(f)\,\lambda+\frac{1}{f}df.
$$
Setting $g =\log f$, we can rewrite this as
$$
\eta\rfloor d\lambda=-dg(X_\lambda)\,\lambda+dg.
$$
In other words, we obtain
\begin{equation}\label{eq:eta1}
dg = \eta \rfloor d\lambda + dg(X_\lambda)\,\lambda.
\end{equation}
Therefore by Lemma \ref{lem:decompose} we obtain
$\eta = Y_{dg}$. Substituting this into \eqref{eq:fXflambda} and dividing
by $f$, we finish the proof.
\end{proof}
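To illustrate Proposition \ref{prop:eta} in the Darboux chart used above, consider the following example; it is included only for illustration and, like the coordinate formulae above, is not used in the rest of the paper.

\begin{exm}
On ${\mathbb R}^{2n+1}$ with $\lambda_0 = dz - \sum_{i=1}^n p_i\, dq_i$, take a positive function $f = f(z)$ depending only on $z$, so that $g = \log f$ and $dg = g'(z)\, dz$. By \eqref{eq:hamvis} applied to $h = g$,
$$
\flat_{\lambda_0}(dg) = g'(z)\Big(\frac{\partial}{\partial z} - \sum_{i=1}^n p_i\frac{\partial}{\partial p_i}\Big),
\quad \text{hence} \quad
Y_{dg} = \pi_{\lambda_0}(\flat_{\lambda_0}(dg)) = -g'(z)\sum_{i=1}^n p_i\frac{\partial}{\partial p_i}.
$$
Proposition \ref{prop:eta} then yields
$$
X_{f\lambda_0} = \frac{1}{f(z)}\left(\frac{\partial}{\partial z} - \frac{f'(z)}{f(z)}\sum_{i=1}^n p_i\frac{\partial}{\partial p_i}\right),
$$
which one can also verify directly from the defining conditions $(f\lambda_0)(X_{f\lambda_0}) = 1$ and $X_{f\lambda_0}\rfloor d(f\lambda_0) = 0$.
\end{exm}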
Next we compare the contact projection
$\pi_{\lambda}$ with $\pi_{f\lambda}$, associated to $\lambda$ and $f\lambda$
respectively.

\begin{prop}[Perturbed Contact Projection]\label{prop:xi-projection}
Let $(M,\xi)$ be a contact manifold and let $\lambda$ be a contact form,
i.e., $\ker \lambda = \xi$. Let $f$ be a positive smooth function and $f \, \lambda$
its associated contact form. Denote by $\pi_{\lambda}$ and $\pi_{f\, \lambda}$
their associated $\xi$-projections. Then
\begin{equation}\label{eq:xi-projection}
\pi_{f\, \lambda}(Z) = \pi_{\lambda}(Z)- \lambda(Z)\, Y_{dg}
\end{equation}
for the function $g = \log f$.
\end{prop}
\begin{proof}
We compute
\begin{eqnarray*}
\pi_{f\lambda}(Z)&=& Z - f\lambda(Z)\, X_{f\lambda}
= Z-\lambda(Z)(f X_{f\lambda})\\
&=&Z-\lambda(Z) X_{\lambda}+\big(\lambda(Z) X_{\lambda}-\lambda(Z)(fX_{f\lambda})\big)\\
&=&\pi_{\lambda}(Z)+\lambda(Z)\big(X_{\lambda}-f X_{f\lambda}\big)\\
&=&\pi_{\lambda}(Z)-\lambda(Z)\, Y_{dg}.
\end{eqnarray*}
This finishes the proof.
\end{proof}

\subsection{Linearization formula for the perturbed contact form}

We next study the linearization of
$\Upsilon_\lambda (z) = \dot z - T X_{f \lambda}(z)$, which we denote by
$$
D^\pi \Upsilon(z)(Z) = \nabla_t^\pi Z - T\nabla_Z X_{f\, \lambda},
$$
with respect to the triad connection of $(M,\lambda,J)$ (see Proposition 7.6 in \cite{oh-wang2}),
for a given function $f$. Substituting
$$
X_{f\, \lambda} = \frac{1}{f}(X_{\lambda} + Y_{dg})
$$
into this formula, we derive the following.

\begin{lem}[Linearization]\label{lem:DUpsilon}
Let $\nabla$ be the triad connection of $(M, f \lambda,J)$. Then
for any vector field $Z$ along a Reeb orbit $z = (\gamma(T\cdot),o_{\gamma(T\cdot)})$,
\begin{equation}\label{eq:DUpsilon}
D^\pi \Upsilon(z)(Z)
= \nabla_t^\pi Z - T\left(\frac{1}{f}\nabla_Z X_\lambda + Z[1/f]\, X_\lambda \right)
- T\left(\frac{1}{f} \nabla_Z Y_{dg} + Z[1/f]\, Y_{dg}\right).
\end{equation}
\end{lem}
\begin{proof} Let $\nabla$ be the triad connection of $(M,f\lambda,J)$. Then
by definition its torsion $T$ satisfies the axiom $T(X_\lambda,Z) = 0$ for any vector field $Z$ on $M$
(see Theorem 1 in \cite{oh-wang1}). Using this property, as in Section 7 of \cite{oh-wang2},
we compute
\begin{eqnarray*}
D^\pi \Upsilon(z)(Z) & = & \nabla_t^\pi Z - T\nabla_Z X_{f\lambda} \\
& = & \nabla_t^\pi Z - T\nabla_Z\left(\frac{1}{f}(X_\lambda + Y_{dg})\right) \\
& = & \nabla_t^\pi Z -TZ[1/f](X_\lambda + Y_{dg}) - \frac{T}{f}\nabla_Z(X_\lambda + Y_{dg})\\
& = & \nabla_t^\pi Z - T\left(\frac{1}{f}\nabla_Z X_\lambda + Z[1/f]\, X_\lambda \right)
- T\left(\frac{1}{f} \nabla_Z Y_{dg} + Z[1/f]\, Y_{dg}\right),
\end{eqnarray*}
which finishes the proof.
\end{proof}

We note that when $f\equiv 1$, the above formula reduces to the standard formula
$$
D^\pi \Upsilon(z)(Z)
= \nabla_t^\pi Z - T\nabla_Z X_\lambda,
$$
which is further reduced to
$$
D^\pi \Upsilon(z)(Z) = \nabla_t^\pi Z - \frac{T}{2}(\mathcal{L}_{X_\lambda}J)\, J Z
$$
for any contact triad $(M,\lambda,J)$.
(See Section 7 of \cite{oh-wang2} for some discussion of this formula.)
\section{The locus foliated by closed Reeb orbits}
\label{sec:neighbor}

\subsection{Definition of Morse-Bott contact form}
\label{subsec:morse-bott}

Let $(M,\xi)$ be a contact manifold and $\lambda$ a contact form of $\xi$.
We would like to study the linearization of the equation $\dot x = X_\lambda(x)$
along a closed Reeb orbit.
Let $\gamma$ be a closed Reeb orbit of period $T > 0$. In other words,
$\gamma: {\mathbb R} \to M$ is a solution of $\dot \gamma = X_\lambda(\gamma)$ satisfying $\gamma(t+T) = \gamma(t)$.
By definition, we can write $\gamma(t) = \phi^t_{X_\lambda}(\gamma(0))$
for the Reeb flow $\phi^t= \phi^t_{X_\lambda}$ of the Reeb vector field $X_\lambda$.
In particular, $p = \gamma(0)$ is a fixed point of the diffeomorphism
$\phi^T$ when $\gamma$ is a closed Reeb orbit of period $T$.
Since $\mathcal{L}_{X_\lambda}\lambda = 0$, the contact diffeomorphism $\phi^T$ canonically induces the isomorphism
$$
\Psi_{z;p} : = d\phi^T(p)|_{\xi_p}: \xi_p \to \xi_p,
$$
which is the linearized Poincar\'e return map of $\phi^T$ restricted to $\xi_p$ via the splitting
$$
T_p M=\xi_p\oplus {\mathbb R}\cdot \{X_\lambda(p)\}.
$$

\begin{defn} Let $\gamma$ be a closed Reeb orbit of period $T> 0$ and
denote by $z:S^1 \to M$ the map defined by $z(t) = \gamma(Tt)$.
We say the $T$-closed Reeb orbit $(T,z)$ is \emph{nondegenerate}
if the linearized return map $\Psi_{z;p}:\xi_p \to \xi_p$ with $p = \gamma(0)$ has no eigenvalue equal to $1$.
\end{defn}

Denote by $\operatorname{Cont}(M, \xi)$ the set of contact one-forms with respect to the contact structure $\xi$,
and by $\mathcal{L}(M)=C^\infty(S^1, M)$ the space of loops $z: S^1 = {\mathbb R} /{\mathbb Z} \to M$.
We consider the bundle
$\mathcal{L}$ over the product $(0,\infty) \times \mathcal{L}(M) \times \operatorname{Cont}(M,\xi)$ whose fiber
at $(T, z, \lambda)$ is given by the space $C^\infty(z^*TM)$ of sections of the bundle $z^*TM \to S^1$,
and the assignment
$$
\Upsilon: (T,z,\lambda) \mapsto \dot z - T \,X_\lambda(z),
$$
which is a section of it. Then $(T, z, \lambda)\in \Upsilon^{-1}(0)
:=\mathfrak{Reeb}(M,\xi)$ if and only if there exists some closed Reeb orbit $\gamma: {\mathbb R} \to M$ of period $T$ such that
$z(\cdot)=\gamma(T\cdot)$.

We first start with the standard notion of Morse-Bott critical manifolds introduced by Bott in \cite{bott},
applied to the set of closed Reeb orbits:

\begin{defn} We call a contact form $\lambda$ of \emph{standard Morse-Bott type} if
every connected component of $\mathfrak{Reeb}(M,\lambda)$ is a smooth submanifold
of $(0,\infty) \times \mathcal{L}(M)$ whose tangent space at
every pair $(T,z) \in \mathfrak{Reeb}(M,\lambda)$ coincides with $\ker d_{(T,z)}\Upsilon$.
\end{defn}

The following is an immediate consequence of this definition.

\begin{lem}\label{lem:T-constant} Suppose $\lambda$ is of standard Morse-Bott type. Then
on each connected component of $\mathfrak{Reeb}(M,\lambda)$, the period remains constant.
\end{lem}
\begin{proof}
Let $(T_0, z_0)$ and $(T_1, z_1)$ be two elements in the same
connected component of $\mathfrak{Reeb}(M,\lambda)$.
We connect them by a smooth one-parameter family
$(T_s, z_s)$, $0 \leq s \leq 1$. Since $\dot{z_s}=T_sX_\lambda(z_s)$ and hence
$T_s=\int_{S^1}z^*_s\lambda$, it is enough to prove
$$
\frac{d}{ds} T_s =\frac{d}{ds} \int_{S^1}z^*_s\lambda\equiv 0.
$$
We compute
\begin{eqnarray*}
\frac{d}{ds} z^*_s\lambda &=& z_s^*\big(d (z_s' \rfloor \lambda) +z_s' \rfloor d\lambda\big)\\
&=&d\big(z_s^*(z_s' \rfloor \lambda)\big)+z_s^*(z_s' \rfloor d\lambda)\\
&=&d\big(z_s^*(z_s' \rfloor \lambda)\big).
\end{eqnarray*}
Here $z_s'$ denotes the derivative with respect to $s$, and
the last equality comes from the fact that $\dot{z_s}$ is parallel to $X_\lambda$.
Therefore we obtain, by Stokes' formula,
$$
\frac{d}{ds} \int_{S^1}z^*_s\lambda = \int_{S^1} d\big(z_s^*(z_s' \rfloor \lambda)\big) = 0,
$$
which finishes the proof.
\end{proof}
Now we prove
\betaegin{equation}gin{lem}\lambda} \def\La{\Lambdaambdabel{lem:localfree} Let $\lambda} \def\La{\Lambdaambdambda$ be standard Morse-Bott type.
Fix a connected component ${\muathbb C}R \sigma} \delta} \def\D{\Deltaef\S{\Sigmamaubset \psihi} \def\F{\Phirak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda)$
and denote by $Q \sigma} \delta} \def\D{\Deltaef\S{\Sigmamaubset M$ the locus of the corresponding closed Reeb orbits. Then $Q$ is a smooth immersed submanifold
which carries a natural locally free $S^1$-action induced by the Reeb flow over one period.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{lem}
\betaegin{equation}gin{proof} Consider the evaluation map $ev_{{\muathbb C}R}: \muathfrak{Reeb}(M,\lambda} \def\La{\Lambdaambdambda) \tauo M$ defined by
$ev_{{\muathbb C}R}(T,z) = z(0)$. It is easy to prove that the map is a local immersion and so $Q$ is an
immersed submanifold. Since the closed Reeb orbits have constant period $T>0$ by Lemma \rhoef{lem:T-constant},
the action is obviously locally free and so $ev_{{\muathbb C}R}$ is an immersion
and so $Q$ is immersed in $M$. This finishes the proof.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{proof}
However $Q$ may fail to be embedded along the locus of multiple orbits.
Partially following \cite{bourgeois}, from now on in the rest of the paper,
\emph{we always assume that $Q$ is embedded and compact}.
Denote $\omega_Q := i_Q^*d\lambda$ and
$$
\ker \omega_Q =\{e\in TQ \mid \omega_Q(e, e')= 0 \quad \text{for any} \quad e'\in TQ\}.
$$
We warn the reader that the Morse-Bott condition does not imply that the form $\omega_Q$
has constant rank, and hence the dimension of this kernel may vary from point to point on $Q$.
However, if $\omega_Q$ does have constant rank, then $\ker \omega_Q$ is an integrable distribution and so
defines a foliation, denoted by ${\mathcal F}$, on $Q$.
Since $Q$ is also foliated by closed Reeb orbits and ${\mathcal L}_{X_\lambda}d\lambda=0$,
it follows that ${\mathcal L}_{X_{\lambda}}\omega_Q=0$ when we restrict everything to $Q$. Therefore
each leaf of the foliation is a union of closed Reeb orbits. Motivated by this, we also
impose the condition that \emph{the two-form $\omega_Q$ has constant rank}.
\begin{defn}[Compare with Definition 1.7 \cite{bourgeois}]
\label{defn:morse-bott}
We say that the contact form $\lambda$ is of Morse-Bott type if it satisfies the following:
\begin{enumerate}
\item
Every connected component of $\mathfrak{Reeb}(M,\lambda)$ is a smooth submanifold
of $(0,\infty) \times {\mathcal L}(M)$ whose tangent space at
every pair $(T,z) \in \mathfrak{Reeb}(M,\lambda)$ coincides with $\ker d_{(T,z)}\Upsilon$.
\item $Q$ is embedded.
\item $\omega_Q$ has constant rank on $Q$.
\end{enumerate}
\end{defn}
\subsection{Structure of the locus of closed Reeb orbits}
\label{subsec:structure}
Let $\lambda$ be a Morse-Bott contact form of $(M,\xi)$ and $X_\lambda$
its Reeb vector field. Let $Q$ be as in Definition \ref{defn:morse-bott}. In general, $Q$
carries a natural locally free $S^1$-action induced by the Reeb flow $\phi_{X_\lambda}^T$
(see Lemma \ref{lem:localfree}).
Then by the general theory of compact Lie group actions (see \cite{helgason} for example), the action has a finite
number of orbit types, whose minimal periods are $T/m$ for some integers $m \geq 1$.
The orbit space $Q/S^1$ carries a natural
orbifold structure, with an orbifold point at each multiple orbit whose isotropy group is ${\mathbb Z}/m$ for some $m$.
\begin{rem}\label{rem:noneffective}
Here we would like to mention that the $S^1$-action induced by
$\phi_{X_\lambda}^T$ on $Q$ may not be effective:
it is possible that the connected component $\mathfrak{R}$ of $\mathfrak{Reeb}(M,\lambda)$
consists entirely of multiple orbits.
\end{rem}
Now we fix a connected component of $Q$ and just denote it by $Q$ itself.
Denote $\theta := i_Q^*\lambda$.
We note that the two-form $\omega_Q = d\theta$ is assumed to
have constant rank on $Q$ by the definition of Morse-Bott contact form in Definition \ref{defn:morse-bott}.
The following is an immediate consequence of the definition but exhibits a
particularity of the null foliation of the presymplectic manifold $(Q,\omega_Q)$
arising from the locus of closed Reeb orbits. We note that $\ker \omega_Q$ carries a natural splitting
$$
\ker \omega_Q = {\mathbb R}\{X_\lambda\} \oplus (\ker \omega_Q \cap \xi|_Q).
$$
\begin{lem}\label{lem:integrableCN}
The distribution $(\ker \omega_Q) \cap \xi|_Q$
on $Q$ is integrable.
\end{lem}
\begin{proof} Let $X,\, Y$ be vector fields on $Q$ such that $X,\, Y \in (\ker \omega_Q) \cap \xi|_Q$.
Then $[X, Y]\in \ker \omega_Q$ since $\omega_Q$ is a closed two-form whose
null distribution is integrable.
At the same time, we compute
$i_Q^*\lambda([X,Y]) = X[\lambda(Y)] - Y[\lambda(X)] -\omega_Q(X,Y) = 0$,
where the first two terms vanish since $X, \, Y \in \xi$ and the third vanishes because
$X \in \ker \omega_Q$. This proves $[X,Y] \in \ker \omega_Q \cap \xi|_Q$,
which finishes the proof.
\end{proof}
Therefore $\ker \omega_Q \cap \xi|_Q$ defines another foliation ${\mathcal N}$ on $Q$, and hence
\begin{equation}\label{eq:TCF}
T{\mathcal F} = \mathbb{R}\{X_\lambda\} \oplus T{\mathcal N}.
\end{equation}
Note that this splitting is $S^1$-invariant.
We now recall some basic properties of presymplectic manifolds \cite{gotay}
and their canonical neighborhood theorems.
Fix an $S^1$-equivariant splitting of $TQ$,
\begin{equation}\label{eq:splitting}
TQ = T{\mathcal F} \oplus G =\mathbb{R}X_\lambda\oplus T{\mathcal N} \oplus G,
\end{equation}
by choosing an $S^1$-invariant complementary subbundle $G \subset TQ$.
This splitting is not unique, but its choice will not matter for the coming discussions.
The null foliation carries a natural {\it transverse symplectic form}
in general \cite{gotay}. Since the distribution $T{\mathcal F} \subset TQ$ is preserved by the
Reeb flow, the flow generates an $S^1$-action thereon in the current context. We denote by
$$
p_{T{\mathcal N};G}: TQ \to T{\mathcal N}, \quad p_{G;G}:TQ \to G
$$
the projections to $T{\mathcal N}$ and to $G$, respectively, with respect to the splitting \eqref{eq:splitting}.
We denote by $T^*{\mathcal N} \to Q$ the associated foliation cotangent bundle, i.e., the
dual bundle of $T{\mathcal N}$.
We now consider the isomorphism
\begin{equation}\label{eq:tildedlambda}
\widetilde{d\lambda|_\xi}: \xi \to \xi^*
\end{equation}
and fix a splitting $T_QM = TQ \oplus N_QM$ with $T_QM = TM|_Q$
so that $N_QM \subset \xi|_Q$:
this is possible since ${\mathbb R}\{X_\lambda\} \subset TQ$. We can also choose the splitting so that
it is $S^1$-equivariant. (See the proof of Proposition \ref{prop:neighbor-F} below.)
This leads to the further splitting
\begin{equation}\label{eq:xiQ}
\xi|_Q = T{\mathcal N} \oplus G \oplus N_QM
\end{equation}
combined with \eqref{eq:splitting}, which in turn leads to
\begin{equation}\label{eq:xi*}
\xi^*|_Q = (G \oplus N_QM)^\perp \oplus (T{\mathcal N})^\perp
\end{equation}
where $(\cdot)^\perp$ denotes the annihilator of $(\cdot)$.
In particular, it induces an isomorphism
\begin{equation}\label{eq:T*CN}
T^*{\mathcal N} \cong (G \oplus N_QM)^\perp \subset \xi^*|_Q.
\end{equation}
Now we consider the embedding $T^*{\mathcal N} \to \xi$ defined by
the inverse of \eqref{eq:tildedlambda}, whose image we denote by
\begin{equation}\label{eq:TNdual}
(T{\mathcal N})^{\#_{d\lambda}} = (\widetilde{d\lambda})^{-1}(T^*{\mathcal N}).
\end{equation}
Next we consider the symplectic normal bundle $(TQ)^{d\lambda} \subset T_QM$
defined by
\begin{equation}\label{eq:TQdlambda}
(TQ)^{d\lambda} = \{v \in T_qM \mid d\lambda(v, w) = 0, \ \forall w \in T_qQ\}.
\end{equation}
We define a vector bundle
\begin{equation}\label{eq:E}
E = (TQ)^{d\lambda}/T{\mathcal F},
\end{equation}
and then have the natural embedding
\begin{equation}\label{eq:embed-E}
E = (TQ)^{d\lambda}/T{\mathcal F} \hookrightarrow T_QM/TQ = N_QM
\end{equation}
induced by the inclusion map $(TQ)^{d\lambda} \hookrightarrow T_QM$.
The following is straightforward to check.
\begin{lem} The two-form $d\lambda$ induces a nondegenerate fiberwise 2-form on $E$, and so $E$ carries a
fiberwise symplectic form, which we denote by $\Omega$.
\end{lem}
We now consider the exact sequence
$$
0 \to E \to N_QM \to N_QM/E \to 0
$$
induced by \eqref{eq:embed-E}. The sequence is $S^1$-equivariant with respect to the natural
$S^1$-action thereon. We have the canonical isomorphism
$$
N_QM/E \cong \frac{T_QM}{TQ + (TQ)^{d\lambda}},
$$
which is $S^1$-equivariant. Then we have an $S^1$-equivariant splitting
\begin{equation}\label{eq:TQM}
T_QM = (TQ + (TQ)^{d\lambda}) \oplus (T{\mathcal N})^{\#_{d\lambda}}
\end{equation}
where $(T{\mathcal N})^{\#_{d\lambda}}$ is the $d\lambda$-dual \eqref{eq:TNdual}.
This also induces an embedding
\begin{equation}\label{eq:embed-T*N}
T^*{\mathcal N} \to (T{\mathcal N})^{\#_{d\lambda}} \hookrightarrow T_QM \to N_QM
\end{equation}
which is also $S^1$-equivariant.
We now denote $F := T^*{\mathcal N} \oplus E \to Q$. The following proposition provides a
local model of a neighborhood of $Q \subset M$.
\begin{prop}\label{prop:neighbor-F} We
fix the splittings \eqref{eq:splitting} and \eqref{eq:xiQ}.
Then the sum of \eqref{eq:embed-T*N} and \eqref{eq:embed-E}
defines an isomorphism $T^*{\mathcal N} \oplus E \to N_QM$ depending only on
the splittings.
\end{prop}
\begin{proof}
A straightforward dimension counting shows that the bundle map is indeed an
isomorphism.
\end{proof}
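For the reader who wishes to see the dimension count spelled out, the following brief sketch may help; the integers $n$, $q$ and $l$ are introduced only for this illustration, with $\dim M = 2n+1$, $\dim Q = q$ and $\operatorname{rank} T{\mathcal N} = l$.
$$
\operatorname{rank} (TQ)^{d\lambda} = (2n+1) - q + 1, \qquad
\operatorname{rank} E = (2n+2-q) - (1+l) = 2n+1-q-l,
$$
since $\ker d\lambda = {\mathbb R}\{X_\lambda\}$ is contained in $TQ$ and $\operatorname{rank} T{\mathcal F} = 1 + l$. Hence
$$
\operatorname{rank}\big(T^*{\mathcal N} \oplus E\big) = l + (2n+1-q-l) = 2n+1-q = \operatorname{rank} N_QM,
$$
in agreement with the statement of the proposition.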
Identifying a neighborhood of $Q \subset M$ with a neighborhood of the zero section of $F$
and pulling back the contact form $\lambda$ to $F$, we may assume that our contact form $\lambda$ is
defined on a neighborhood of $o_F \subset F$. We also identify $T^*{\mathcal N}$ and $E$ with
their images in $N_QM$.
\begin{prop}\label{prop:S1-bundle} The $S^1$-action on $Q$
canonically induces an $S^1$-equivariant vector bundle structure on $E$ such that
the form $\Omega$ is invariant under the $S^1$-action on $E$.
\end{prop}
\begin{proof}
The action of $S^1$ on $Q$ by $t\cdot q=\phi^t(q)$
canonically induces an $S^1$-action on $T_QM$ by $t\cdot v=(d\phi^t)(v)$
for $v\in T_QM$.
Since the Reeb flow preserves $\lambda$, it gives rise to the identity
\begin{equation}
(\phi^t)^*d\lambda=d\lambda.\label{eq:S1action}
\end{equation}
We first show that the action is well defined on $E\to Q$, i.e., that if $v\in (T_qQ)^{d\lambda}$,
then $t\cdot v\in (T_{t\cdot q}Q)^{d\lambda}$.
In fact, by using \eqref{eq:S1action}, for $w\in T_{t\cdot q}Q$,
$$
d\lambda(t\cdot v, w)=\left((\phi^t)^*d\lambda\right)\left(v, (d\phi^t)^{-1}(w)\right)
=d\lambda\left(v, (d\phi^t)^{-1}(w)\right).
$$
This vanishes, since $Q$ consists of closed Reeb orbits and thus $d\phi^t$ preserves $TQ$.
Secondly, the same identity \eqref{eq:S1action} further shows that this $S^1$-action preserves
$\Omega$ on the fibers, i.e., $t^*\Omega=\Omega$, and we are done with the proof of this proposition.
\end{proof}
Summarizing the above discussion, we have shown that
the base $Q$ is decorated by
the one-form $\theta:= i_Q^*\lambda$ and the bundle $E$ is decorated by
the fiberwise symplectic 2-form $\Omega$. They satisfy
the following additional properties:
\begin{enumerate}
\item $Q = o_F$ carries an $S^1$-action which is locally free.
In particular $Q/S^1$ is a smooth orbifold.
\item The one-form $\theta$ is $S^1$-invariant, and $d\theta$ is a presymplectic form.
\item The bundle $E$ carries an $S^1$-action that preserves the fiberwise 2-form $\Omega$,
and hence it induces an $S^1$-invariant symplectic vector bundle structure on $E$.
\item The bundle $F = T^*{\mathcal N} \oplus E \to Q$ carries the direct sum $S^1$-equivariant
vector bundle structure compatible with the $S^1$-action on $Q$.
\end{enumerate}
We summarize the above discussion in the following theorem.
\begin{thm}\label{thm:morsebottsetup} Consider the locus $Q$ of closed Reeb orbits of
a Morse-Bott type contact form $\lambda$.
Let $(TQ)^{\omega_Q} \subset TQ$ be the null distribution of $\omega_Q = i_Q^*d\lambda$ and let ${\mathcal F}$ be
the associated characteristic foliation.
Then the restriction of $\lambda$ to $Q$ induces the following geometric structures:
\begin{enumerate}
\item $Q = o_F$ carries an $S^1$-action which is locally free.
In particular $Q/S^1$ is a smooth orbifold.
Fix an $S^1$-invariant splitting \eqref{eq:splitting}.
\item We have the natural identification
\begin{equation}\label{eq:NQM}
N_QM \cong T^*{\mathcal N} \oplus E = F
\end{equation}
as an $S^1$-equivariant vector bundle, where
$$
E := (TQ)^{d\lambda}/T{\mathcal F}
$$
is the symplectic normal bundle.
\item The two-form $d\lambda$ restricts to a nondegenerate skew-symmetric
two-form on $G$, and induces a fiberwise symplectic form
$\Omega$
on $E$ defined as above.
\end{enumerate}
\end{thm}
We say that $Q$ is of pre-quantization type if the rank of $d\lambda|_Q$ is maximal,
and of isotropic type if the rank of $d\lambda|_Q$ is zero. The general case will be
a mixture of the two.
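The following two standard examples, recorded purely for illustration, exhibit the two extremes of this dichotomy.
\begin{rem}
For the standard contact form $\lambda_0 = \frac{1}{2}\sum_i(x_i\,dy_i - y_i\,dx_i)|_{S^3}$ on the round $S^3$, every point lies on a closed (Hopf) Reeb orbit, so $Q = S^3$ and $d\lambda_0|_Q$ has maximal rank: this $Q$ is of pre-quantization type. By contrast, for an irrational ellipsoid
$$
E(a_1,a_2) = \Big\{ (z_1,z_2) \in {\mathbb C}^2 \ \Big| \ \frac{|z_1|^2}{a_1} + \frac{|z_2|^2}{a_2} = 1 \Big\},
\qquad a_1/a_2 \notin {\mathbb Q},
$$
equipped with the restriction of the same primitive, the only closed Reeb orbits are the two coordinate circles $\{z_2 = 0\}$ and $\{z_1 = 0\}$; each component of $Q$ is a single orbit, $d\lambda|_Q = 0$, and $Q$ is of isotropic type.
\end{rem}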
\begin{rem}\label{rem:dim3}
In particular, when $\dim M = 3$, such $Q$ must be either of
pre-quantization type or of isotropic type. This is the case considered in
\cite{HWZ3}. The general case considered in \cite{bourgeois} and \cite{behwz} includes the mixed type.
\end{rem}
\section{Contact thickening of Morse-Bott contact set-up}
\label{sec:thickening}
Motivated by the isomorphism in Theorem \ref{thm:morsebottsetup}, we consider a pair $(Q,\theta)$ and
a symplectic vector bundle $(E,\Omega) \to Q$ that satisfy the above properties.
We assume that $Q$ is compact and connected.
In the next section, we will associate a model contact form to the direct sum
$$
F = T^*{\mathcal N} \oplus E
$$
and prove a canonical neighborhood theorem for the locus of closed Reeb orbits of a
general contact manifold $(M,\lambda)$, in which the zero section of $F$ corresponds to $Q$.
To state our canonical neighborhood theorem, we first need to
identify the relevant geometric structure of the canonical neighborhoods.
For this purpose, the introduction of the following notion is useful.
\begin{defn}[Pre-Contact Form] We call a one-form $\theta$ on a manifold $Q$ a \emph{pre-contact} form
if $d\theta$ has constant rank.
\end{defn}
\subsection{The $S^1$-invariant pre-contact manifold $(Q,\theta)$}
\label{subsec:Q}
First, we consider a pair $(Q,\theta)$ such that $Q$ carries a nontrivial
$S^1$-action preserving the one-form $\theta$. After taking the quotient of $S^1$ by some finite subgroup,
we may assume that the action is effective. We will also assume that the action
is locally free. Then by the general theory of actions of
compact Lie groups (see \cite{helgason} for example),
the action is free on a dense open subset and has only a finite number
of different types of isotropy groups. In particular the quotient
$P := Q/S^1$ becomes a presymplectic orbifold with a finite number of
different types of orbifold points. We denote by $Y$ the vector field generating the
$S^1$-action, i.e., the $S^1$-action is generated by its flow.
We require that the circle action preserves $\theta$, i.e., ${\mathcal L}_Y\theta = 0$.
Since the action is locally free and free on a dense open subset of $Q$,
we can normalize the action so that
\begin{equation}\label{eq:thetaX=1}
\theta(Y) \equiv 1.
\end{equation}
We denote this normalized vector field by $X_\theta$. We would like to emphasize that
$\theta$ may not be a contact form but can be regarded as the connection form of
the circle bundle $S^1 \to Q \to P$ over the orbifold $P$ in general. Although $P$
may carry a non-empty set of orbifold points, the connection form $\theta$ is assumed to
be smooth on $Q$.
Similarly as in Lemma \ref{lem:integrableCN}, we also require
the presence of an $S^1$-invariant splitting
$$
\ker d\theta = {\mathbb R} \{X_\theta\} \oplus H
$$
such that the subbundle $H$ is also integrable.
With these terminologies introduced, we can rephrase Theorem \ref{thm:morsebottsetup} as follows.
\begin{thm}\label{thm:morsebottsetup2}
Let $Q$ be the locus foliated by closed Reeb orbits of
a contact manifold $(M,\lambda)$ of Morse-Bott type.
Then $Q$ carries a locally free $S^1$-action and
\begin{enumerate}
\item an $S^1$-invariant pre-contact form $\theta$ given by $\theta = i_Q^*\lambda$,
\item a splitting
\begin{equation}\label{eq:kernel-dtheta}
\ker d\theta = {\mathbb R}\{X_\theta\} \oplus H,
\end{equation}
such that the distribution $H = \ker d\theta \cap \xi|_Q$ is integrable,
\item
an $S^1$-equivariant symplectic vector bundle $(E,\Omega) \to Q$ with
$$
E = (TQ)^{d\lambda}/\ker d\theta, \quad \Omega = [d\lambda]_E.
$$
\end{enumerate}
\end{thm}
Here we use the fact that there exists a canonical embedding
$$
E = (TQ)^{d\lambda}/\ker d\theta \hookrightarrow T_QM/ TQ = N_QM
$$
and that $d\lambda|_{(TQ)^{d\lambda}}$ canonically induces a bilinear form $[d\lambda]_E$
on $E = (TQ)^{d\lambda}/\ker d\,i_Q^*\lambda$ by symplectic reduction.
\begin{defn} Let $(Q,\theta)$ be a pre-contact manifold
equipped with a locally free $S^1$-action generated by a vector field $Y$,
with an $S^1$-invariant one-form $\theta$ and
the splitting \eqref{eq:kernel-dtheta}. Assume $\theta(Y) \equiv 1$.
We call such a triple $(Q,\theta,H)$
a \emph{Morse-Bott contact set-up}.
\end{defn}
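A basic family of examples of this notion, recorded here only as an illustration (with the usual prequantization conventions assumed, up to sign), is the following.
\begin{rem}
Let $(P,\omega)$ be a closed symplectic manifold with $[\omega]$ integral, and let $\pi: Q \to P$ be a
prequantization circle bundle with connection one-form $\theta$ satisfying $d\theta = \pi^*\omega$,
normalized so that $\theta(Y) \equiv 1$ for the vector field $Y$ generating the circle action.
Then $\ker d\theta = {\mathbb R}\{Y\}$, so one may take $H = 0$, and $(Q,\theta,H)$ is a Morse-Bott
contact set-up in which the foliation ${\mathcal N}$ is trivial; in this case $\theta$ is in fact a contact form.
\end{rem}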
As before, we denote by ${\mathcal F}$ and ${\mathcal N}$
the associated foliations on $Q$, and decompose
$$
T{\mathcal F} = {\mathbb R}\{X_\theta\} \oplus T{\mathcal N}.
$$
We define a one-form $\Theta_G$ on $T^*{\mathcal N}$ as follows. For a tangent vector
$\xi \in T_\alpha(T^*{\mathcal N})$, define
\begin{equation}\label{eq:thetaG}
\Theta_G(\xi) := \alpha\big(p_{T{\mathcal N};G}\, d\pi_{T^*{\mathcal N}}(\xi)\big)
\end{equation}
using the splitting
$$
TQ = {\mathbb R}\{X_\theta\} \oplus T{\mathcal N} \oplus G.
$$
By definition, it follows that $\Theta_G|_{VT(T^*{\mathcal N})} \equiv 0$ and that $d\Theta_G(\alpha)$ is nondegenerate
on
$$
\widetilde{T_q{\mathcal N}} \oplus VT_\alpha T^*{\mathcal N} \cong T_q{\mathcal N} \oplus T_q^*{\mathcal N},
$$
on which it becomes
the canonical pairing defined on $T_q{\mathcal N} \oplus T_q^*{\mathcal N}$ under the identification.
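In adapted local coordinates, $\Theta_G$ is a partial Liouville form. The following local sketch assumes, purely for illustration, that $Q$ is locally a product ${\mathbb R}_t \times {\mathbb R}^l_y \times {\mathbb R}^{q-1-l}_g$ with $X_\theta = \partial_t$, $T{\mathcal N} = \operatorname{span}\{\partial_{y_1},\dots,\partial_{y_l}\}$ and $G = \operatorname{span}\{\partial_{g_1},\dots,\partial_{g_{q-1-l}}\}$; here $l = \operatorname{rank} T{\mathcal N}$ and $q = \dim Q$ are notations used only in this paragraph.
Let $(p_1,\dots,p_l)$ be the fiber coordinates on $T^*{\mathcal N}$ writing $\alpha = \sum_i p_i\, dy_i$. Then the definition \eqref{eq:thetaG} reads
$$
\Theta_G = \sum_{i=1}^{l} p_i\, dy_i, \qquad
d\Theta_G = \sum_{i=1}^{l} dp_i \wedge dy_i,
$$
which exhibits the canonical pairing on $T_q{\mathcal N}\oplus T_q^*{\mathcal N}$ mentioned above.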
\subsection{The bundle $E$}
We next examine the structure of
the $S^1$-equivariant symplectic vector bundle $(E, \Omega)$.
We denote by $\vec R$ the radial vector field which
generates the family of radial multiplications
$$
(c, e) \mapsto c\, e.
$$
This vector field is invariant under the given $S^1$-action on $E$, and vanishes on the zero section.
By definition, $d\pi(\vec R)=0$, i.e., $\vec R$ lies in the vertical distribution, denoted by $VTE$, of $TE$.
Denote the canonical isomorphism $V_eTE \cong E_{\pi(e)}$ by $I_{e;\pi(e)}$.
It obviously intertwines the scalar multiplication, i.e.,
$$
I_{e;\pi(e)}(\mu\, \xi) = \mu\, I_{e;\pi(e)}(\xi)
$$
for a scalar $\mu$.
It also satisfies the following identity \eqref{eq:dRc=Ic} with respect to the derivative of the
fiberwise scalar multiplication map $R_c: E \to E$.
\begin{lem} Let $\xi \in V_e TE$. Then
\begin{equation}\label{eq:dRc=Ic}
I_{c\, e;\pi(c\, e)}(dR_c(\xi)) = c\, I_{e;\pi(e)}(\xi)
\end{equation}
on $E_{\pi(c\, e)} = E_{\pi(e)}$ for any constant $c$.
\end{lem}
\begin{proof} We compute
\begin{eqnarray*}
I_{c\, e;\pi(c\, e)}(dR_c(\xi)) & = & I_{c\, e;\pi(c\, e)}\left(\frac{d}{ds}\Big|_{s=0}c(e + s\xi)\right) \\
& = & I_{c\, e;\pi(c\, e)}(R_c(\xi)) = c\, I_{e;\pi(e)}(\xi),
\end{eqnarray*}
which finishes the proof.
\end{proof}
We then define the fiberwise two-form $\Omega^v$ on $VTE \to E$ by
$$
\Omega^v_e(\xi_1,\xi_2) = \Omega_{\pi(e)}(I_{e;\pi(e)}(\xi_1),I_{e;\pi(e)}(\xi_2))
$$
for $\xi_1, \xi_2\in V_eTE$,
together with the associated one-form
$
\vec R \rfloor \Omega^v.
$
Now we introduce an $S^1$-invariant symplectic connection on $E$ and the associated splitting
$$
TE = HTE \oplus VTE.
$$
The existence of such an invariant connection follows,
e.g., by averaging over the compact group $S^1$. We then extend the fiberwise form $\Omega$ of $E$ to the differential two-form
$\widetilde \Omega$ on $E$ by setting
$$
\widetilde \Omega_e(\xi,\zeta) = \Omega^v_e(\xi^v, \zeta^v).
$$
As before, denote by $\vec R$ the radial vector field of $E \to Q$ and consider the one-form
\begin{equation}
\vec R \rfloor \widetilde \Omega,
\end{equation}
which is invariant under the action of $S^1$ on $E$.
\begin{rem}
Suppose that $g_{E;J_E} := \Omega(\cdot, J_E \cdot)$ defines a
Hermitian vector bundle structure $(E, g_{E;J_E}, J_E)$.
Then we can write the radial vector field considered above as
$$
\vec R(e) = \sum_{i=1}^k r_i \frac{\partial}{\partial r_i}
$$
where $(r_1,\cdots, r_k)$ are the coordinates of $e$ with respect to
a local frame $\{e_1, \cdots, e_k\}$ of the vector bundle $E$. By definition, we have
\begin{equation}\label{eq:vecRe}
I_{e;\pi(e)}(\vec R(e)) = e.
\end{equation}
Obviously the right hand side does not depend on the choice of
local frames.
Let $(E,\Omega,J_E)$ be such a Hermitian vector bundle and define
$|e|^2 = g_{E;J_E}(e,e)$. Motivated by the terminology used in \cite{bott-tu}, we call the one-form
\begin{equation}\label{eq:angularform}
\psi = \psi_\Omega = \frac{1}{r} \frac{\partial}{\partial r} \rfloor \Omega^v
\end{equation}
the \emph{global angular form} for the Hermitian vector bundle $(E,\Omega,J_E)$.
Note that $\psi$ is defined only on $E \setminus o_E$ although
$\Omega$ is globally defined.
\end{rem}
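As a quick sanity check of \eqref{eq:angularform} in the simplest case (a rank-two flat model, chosen only for illustration), take $E = Q \times {\mathbb R}^2$ with fiber coordinates $(r_1,r_2)$, $\Omega^v = dr_1 \wedge dr_2$
and $J_E$ the standard rotation by $90$ degrees. In polar coordinates $r_1 = r\cos\varphi$,
$r_2 = r\sin\varphi$ we have $\Omega^v = r\, dr \wedge d\varphi$ and hence
$$
\psi = \frac{1}{r}\,\frac{\partial}{\partial r} \rfloor \big(r\, dr \wedge d\varphi\big) = d\varphi,
$$
i.e., the angular form on each punctured fiber ${\mathbb R}^2 \setminus \{0\}$, defined away from the zero section as noted above.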
We state the following lemma.
\begin{lem}\label{lem:ROmega} Let $\widetilde\Omega$ be as above.
Then:
\begin{enumerate}
\item $\vec R \rfloor d \widetilde \Omega = 0$.
\item
For any constant $c > 0$, we have
$$
R_{c}^*\widetilde \Omega = c^2\, \widetilde \Omega.
$$
\end{enumerate}
\end{lem}
\begin{proof}
Notice that $\widetilde\Omega$ is compatible with $\Omega$ in the sense of symplectic fibrations,
and the symplectic vector bundle connection is nothing but the
Ehresmann connection induced by $\widetilde\Omega$, which is now a symplectic connection.
Since $\vec R$ is vertical, statement (1) immediately follows from the fact that the symplectic connection is vertically closed.
It remains to prove statement (2).
Let $e \in E$ and $\xi_1, \, \xi_2 \in T_e E$. By definition, we derive
\begin{eqnarray*}
(R_c^*\widetilde \Omega)_e(\xi_1,\xi_2)
& = & \widetilde \Omega_{c\,e}(dR_c(\xi_1^v), dR_c(\xi_2^v))\\
& = & \Omega^v_{c\,e}(dR_c(\xi_1^v), dR_c(\xi_2^v))
= c^2\, \widetilde \Omega_e(\xi_1,\xi_2)
\end{eqnarray*}
where we use the equality \eqref{eq:dRc=Ic} and $\pi(c\, e) = \pi(e)$ for the last equality.
This proves $R_c^*\widetilde \Omega = c^2 \widetilde \Omega$.
\end{proof}
It follows from Lemma \ref{lem:ROmega} (2) that ${\mathcal L}_{\vec R}\widetilde\Omega=2\widetilde\Omega$. Combining this with statement (1) and Cartan's formula, we get
$$
d(\vec R \rfloor \widetilde \Omega) = 2\widetilde \Omega.
$$
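In a local symplectic trivialization this identity is the familiar statement that the Liouville-type one-form $\vec R \rfloor \widetilde\Omega$ is a primitive of $2\widetilde\Omega$; the following sketch, in a flat local model introduced only for illustration, makes this explicit.
Over a trivializing chart write $E \cong U \times {\mathbb R}^{2k}$ with fiber coordinates
$(r_1,s_1,\dots,r_k,s_k)$, flat connection and $\widetilde\Omega = \sum_i dr_i \wedge ds_i$. Then
$$
\vec R = \sum_{i}\Big(r_i\frac{\partial}{\partial r_i} + s_i\frac{\partial}{\partial s_i}\Big), \qquad
\vec R \rfloor \widetilde\Omega = \sum_i \big(r_i\, ds_i - s_i\, dr_i\big),
$$
and hence $d(\vec R \rfloor \widetilde\Omega) = 2\sum_i dr_i\wedge ds_i = 2\widetilde\Omega$, in agreement with the formula above.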
\subsection{Canonical contact form and contact structure on $F$}
Let $(Q,\theta,H)$ be a given Morse-Bott contact set-up and let $(E,\Omega) \to Q$ be
any $S^1$-equivariant symplectic vector bundle equipped with an $S^1$-invariant symplectic connection.
We now equip the bundle $F = T^*{\mathcal N} \oplus E$ with a canonical $S^1$-invariant contact form.
We denote by $\pi_{E;F}: F \to E$ and $\pi_{T^*{\mathcal N};F}: F \to T^*{\mathcal N}$
the bundle projections associated to the splitting $F = T^*{\mathcal N} \oplus E$, and equip
$F = T^*{\mathcal N} \oplus E$ with the direct sum connection.
\begin{thm} Let $(Q,\theta,H)$ be a Morse-Bott contact set-up.
Denote by ${\mathcal F}$ and ${\mathcal N}$ the foliations associated to the distributions $\ker d\theta$ and $H$,
respectively. We also denote by $T{\mathcal F}$, $T{\mathcal N}$ the associated foliation tangent bundles and by
$T^*{\mathcal N}$ the foliation cotangent bundle of ${\mathcal N}$. Then for any symplectic vector bundle $(E,\Omega) \to Q$ with an $S^1$-invariant symplectic connection,
the following holds:
\begin{enumerate}
\item
The total space of the bundle $F = T^*{\mathcal N} \oplus E$ carries a canonical contact form $\lambda_{F;G}$ defined as in \eqref{eq:lambdaF}, for
each choice of complement $G$ such that $TQ = T{\mathcal F} \oplus G$.
\item
For any two such choices $G, \, G'$, the associated contact forms are
canonically gauge-equivalent by a bundle map $\psi_{GG'}: TQ \to TQ$ preserving $T{\mathcal F}$.
\end{enumerate}
\end{thm}
\begin{proof} We define a differential one-form on $F$ explicitly by
\begin{equation}\label{eq:lambdaF}
\lambda_F = \pi_F^*\theta + \pi_{T^*{\mathcal N};F}^*\Theta_G +
\frac{1}{2} \pi_{E;F}^*\left(\vec R \rfloor \widetilde \Omega\right).
\end{equation}
Using Lemma \ref{lem:ROmega}, we obtain
\begin{equation}\label{eq:dlambdaF}
d\lambda_F = \pi_F^*d\theta + \pi_{T^*{\mathcal N};F}^*d\Theta_G + \pi_{E;F}^*\widetilde \Omega
\end{equation}
by taking the differential of \eqref{eq:lambdaF}.
A moment of examination of
this formula gives rise to the following.
\begin{prop}\label{lem:deltaforE} There exists some $\delta> 0$ such that
the one-form $\lambda_F$ is a contact form
on the disc bundle $D^{\delta}(F)$, where
$$
D^{\delta}(F) = \{(q,v) \in F \mid \|v\| < \delta \},
$$
such that $\lambda_F|_{o_F} = \theta$ on $Q \cong o_F \subset F$.
\end{prop}
\begin{proof} This immediately follows from the formulae \eqref{eq:lambdaF}
and \eqref{eq:dlambdaF}.
\end{proof}
This proves statement (1).
For the proof of statement (2), we first note that the bundle $E$
itself does not depend on the choice of $G$. On the other hand, consider the one-form
$$
\lambda_{F;G} = \pi_F^*\theta + \pi_{T^*{\mathcal N};F}^*\Theta_G +
\frac{1}{2}\pi^*_{E;F}\left(\vec R \rfloor \widetilde \Omega\right)
$$
on $F$, which depends on $G$ in general because the one-form $\Theta_G$ does. Furthermore
we recall that the projection map $\pi_{T^*{\mathcal N};F}$ also depends on the canonical splitting
\begin{equation}\label{eq:TQF}
T_Q F = HT_Q F \oplus VT_Q F \cong TQ \oplus F|_Q.
\end{equation}
Now we fix this splitting of $T_Q F$ and let $G,\, G'$ be two splittings
of $TQ$,
$$
TQ = T{\mathcal F} \oplus G = T{\mathcal F} \oplus G'.
$$
Since both $G, \, G'$ are transversal to $T {\mathcal F}$ in $TQ$,
we can represent $G'$ as the graph of a bundle map $A_G: G \to T{\mathcal N}$. Then we consider the
bundle isomorphism
$$
\psi_{GG'}: TQ/{\mathbb R}\{X_\lambda\} \to TQ/{\mathbb R}\{X_\lambda\}
$$
defined by
$$
\psi_{GG'} = \left(\begin{matrix} id_{T{\mathcal N}} & A_G\\
0 & id_G \end{matrix}\right)
$$
with respect to the splitting $TQ = {\mathbb R}\{X_\lambda\} \oplus T{\mathcal N} \oplus G$. Then $\psi_{GG'}(G) = \operatorname{Graph} A_G$
and $\psi_{GG'}|_{T{\mathcal N}} = id_{T{\mathcal N}}$.
Therefore we have $p_{T{\mathcal N};G} = p_{T{\mathcal N};G'}\circ \psi_{GG'}$.
We compute
\begin{eqnarray*}
\Theta_{G}(\alpha)(\eta) & = &\alpha(p_{T{\mathcal N};G}(d\pi_{T^*{\mathcal N}}(\eta)))\\
& =&\alpha(p_{T{\mathcal N};G'}(\psi_{GG'}(d\pi_{T^*{\mathcal N}}(\eta))))
= \Theta_{G'}(\alpha)(\psi_{GG'}(\eta)).
\end{eqnarray*}
This proves $\Theta_{G} = \Theta_{G'}\circ \psi_{GG'}$ and hence the claimed gauge equivalence.
\end{proof}
Now we study the contact geometry of $(D^\delta(F),\lambda_F)$. We first note that the two-form
$d\lambda_F$ is a presymplectic form with one-dimensional kernel and that
$$
d\lambda_F|_{VTF} = \widetilde \Omega^v|_{VTF}.
$$
Denote by $\widetilde{X}:=(d\pi_{F;H})^{-1}(X)$ the horizontal lift of a vector field $X$ on $Q$,
where
$$
d\pi_{F;H}:= d\pi_F|_H:HTF \to TQ
$$
is the bijection between the horizontal distribution and $TQ$.
\begin{lem}[Reeb Vector Field] The Reeb vector field $X_F$ of $\lambda_F$ is given by
$$
X_F=\widetilde X_\theta,
$$
where $\widetilde X_\theta$ denotes the horizontal lift of $X_\theta$ to $F$.
\end{lem}
\begin{proof} We only have to check the defining properties $\widetilde X_\theta \rfloor \lambda_F = 1$
and $\widetilde X_\theta \rfloor d\lambda_F = 0$.
We first look at
\begin{eqnarray*}
\lambda_F(\widetilde X_\theta)&=& \pi_F^*\theta(\widetilde X_\theta) + \pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde X_\theta)
+ \frac{1}{2} \pi_{E;F}^*\left(\vec R \rfloor \widetilde \Omega\right)(\widetilde X_\theta)\\
& = & \theta(X_\theta) + 0 + 0 = 1.
\end{eqnarray*}
Here $\widetilde{\Omega}(\vec{R}, \widetilde X_\theta) = 0$
by definition of $\widetilde\Omega$ since $\widetilde X_\theta^v = 0$.
Then we calculate
\begin{eqnarray*}
\widetilde X_\theta\rfloor d\lambda_F&=&\widetilde X_\theta\rfloor (\pi_F^*d\theta
+\pi_{E;F}^*\widetilde{\Omega}+\pi_{T^*{\mathcal N};F}^*d\Theta_G)\\
&=& \widetilde X_\theta\rfloor \pi_F^*d\theta+\widetilde X_\theta\rfloor \pi_{E;F}^*\widetilde{\Omega}+\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G = 0.
\end{eqnarray*}
We only need to explain why the last term $\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G$ vanishes.
In fact $p_{T{\mathcal N};G}\, d\pi_F(\widetilde X_\theta) = p_{T{\mathcal N}; G}(X_\theta) = 0$. Using this,
the definition of $\Theta_G$, the $S^1$-equivariance of the vector bundle $F \to Q$ and the fact that $\widetilde X_\theta$ is
the vector field generating the $S^1$-action, we derive
$$
\widetilde X_\theta\rfloor \pi_{T^*{\mathcal N};F}^*d\Theta_G = 0
$$
by a straightforward computation.
This finishes the proof.
\end{proof}
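The simplest instance of this construction, recorded here only as an illustration (the special case $Q \cong S^1$ with ${\mathcal N}$ trivial is assumed), recovers the familiar model near a single closed Reeb orbit.
Take $Q = S^1 = {\mathbb R}/{\mathbb Z}$ with coordinate $t$, $\theta = dt$, $H = 0$ (so $T^*{\mathcal N}=0$),
and $E = S^1 \times {\mathbb R}^{2n}$ the trivial bundle with coordinates $(x_1,y_1,\dots,x_n,y_n)$,
flat connection and $\Omega = \sum_i dx_i \wedge dy_i$. Then \eqref{eq:lambdaF} becomes
$$
\lambda_F = dt + \frac{1}{2}\sum_{i=1}^n \big(x_i\, dy_i - y_i\, dx_i\big),
$$
whose Reeb vector field is $X_F = \partial_t = \widetilde X_\theta$, in agreement with the lemma above.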
Now we calculate the contact structure $\xi_F$.
\begin{lem}[Contact Distribution] \label{lem:decomp-VW}
At each point $(\alpha,e) \in U_F \subset F$, we
define two subspaces of $T_{(\alpha,e)}F$,
$$
V:=\{\xi_V \in T_{(\alpha,e)}F \mid \xi_V =-\pi_{T^*{\mathcal N};F}^*\Theta_G(\eta)X_F+ \widetilde\eta, \, \eta \in \ker \theta \}
$$
and
$$
W:=\{\xi_W \in T_{(\alpha,e)}F \mid \xi_W :=-\frac{1}{2}\pi_{E;F}^*\widetilde \Omega(e,v)X_F+v, \, v\in VTF\}.
$$
Then
$
\xi_F=V\oplus W.
$
\end{lem}
\begin{proof}
A straightforward calculation shows that both $V$ and $W$ are subspaces of $\xi_F = \ker \lambda_F$.
For any $\xi\in \xi_F$, we decompose $\xi=\xi^{\text{h}}+\xi^{\text{v}}$
using the decomposition $TF=HTF\oplus VTF$.
Note that $\pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde\eta) = \Theta_G(\eta)$ and that we can write $\xi^{\text{v}} = I_{(e;\pi(e))}(v)$
for a unique $v \in E_{\pi(e)}$. Therefore we need to find $b \in {\mathbb R}$ and $\eta \in \ker \theta$
such that, for the horizontal vector $\xi^{\text{h}}=\widetilde{(\eta+b X_\theta)}$,
\begin{eqnarray*}
\lambda_F(\xi) & = & 0, \\
- \pi_{T^*{\mathcal N};F}^*\Theta_G(\eta)X_F + \widetilde\eta & \in & V,\\
-\frac{1}{2} \pi_{E;F}^*\widetilde \Omega (e, v)X_F+v & \in & W.
\end{eqnarray*}
Then
\begin{eqnarray*}
\xi^{\text{h}}&=&\widetilde{(\eta+ b X_\theta)}\\
&=&\widetilde\eta+bX_F,
\end{eqnarray*}
which determines $\eta \in T_{\pi(e)}{\mathcal N} \oplus G_{\pi(e)}$ uniquely. We need to determine $b$.
Now
\begin{eqnarray*}
0=\lambda_F(\xi)&=&\lambda_F(\widetilde\eta+bX_F+\xi^{\text{v}})\\
&=&b+\lambda_F(\widetilde\eta)+\lambda_F(\xi^{\text{v}})\\
&=&b+ \pi_{T^*{\mathcal N};F}^*\Theta_G (\widetilde\eta)+\frac{1}{2} \pi_{E;F}^*\widetilde \Omega(e,v),
\end{eqnarray*}
i.e., $b = -\pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde\eta) - \frac{1}{2}\pi_{E;F}^*\widetilde \Omega(e,v)$.
We now set $\xi_W := -\frac{1}{2} \pi_{E;F}^*\widetilde \Omega(e, v)X_F+ v$ for the $v$ with $I_{e;\pi(e)}(v) = \xi^{\text{v}}$,
and $\xi_V := - \pi_{T^*{\mathcal N};F}^*\Theta_G(\widetilde\eta) X_F + \widetilde \eta$; the identity for $b$ above then gives $\xi = \xi_V + \xi_W$.
Therefore we have proved $\xi_F=V+W$.
To see that the sum is direct, assume
$$
-(\pi_{T^*{\mathcal N};F}^*\Theta_G)(\widetilde{\eta})X_F+\widetilde\eta-\frac{1}{2}\pi_{E;F}^*\widetilde \Omega(e, v)X_F+v=0
$$
for some $\eta\in \ker \theta$ and $v\in VTF$.
Applying $d\pi_F$ to both sides, we obtain
$$
-\Big((\pi_{T^*{\mathcal N};F}^*\Theta_G)(\widetilde{\eta})+\frac{1}{2} \pi_{E;F}^*\widetilde \Omega(e, v)\Big)X_\theta+\eta=0.
$$
Hence $\eta=0$, and then $v=0$ follows since $X_F$ is horizontal.
This finishes the proof.
\end{proof}
\section{Canonical neighborhoods of the locus of closed Reeb orbits}
\label{sec:normalform}
Now let $Q$ be the submanifold of $(M,\lambda)$ that is
foliated by the closed Reeb orbits of $\lambda$ with constant period $T$.
Consider the Morse-Bott contact set-up $(Q,\theta,H)$ defined as before
and the symplectic vector bundle $(E,\Omega)$ associated to $Q$.
In this section, we prove the following canonical neighborhood
theorem, which is the converse of Theorem \ref{thm:morsebottsetup2}.
\begin{thm}[Canonical Neighborhood Theorem]\label{thm:neighborhoods2}
Let $Q$ be the submanifold of closed Reeb orbits of a
Morse-Bott type contact form $\lambda$, let $(Q,\theta)$ and $(E,\Omega)$
be the associated pair, and let $F = T^*{\mathcal N} \oplus E$ be defined as above.
Then there exist neighborhoods $U$ of $Q$ and $U_F$ of the zero section $o_F$,
a diffeomorphism $\psi: U_F \to U$ and a function $f: U_F \to {\mathbb R}$ such that
\begin{equation}\label{eq:psi*lambda}
\psi^*\lambda = f\, \lambda_{F;G}, \quad f|_{o_F} \equiv 1, \quad df|_{o_F}\equiv 0
\end{equation}
and
\begin{equation}\label{eq:ioFpsi}
i_{o_F}^*\psi^*\lambda = \theta, \quad (\psi^*d\lambda|_{VTF})|_{o_F} = 0\oplus \Omega,
\end{equation}
where we use the canonical identification $VTF|_{o_F} \cong T^*{\mathcal N} \oplus E$ on the
zero section $o_F \cong Q$.
\end{thm}
We first identify the local pair $({\mathcal U}, Q)$ with $(U_F,Q)$ by a
diffeomorphism $\phi: {\mathcal U} \to U_F$ such that
$$
\phi|_Q = \textrm{id}_Q, \quad d\phi(N_QM) = T^*{\mathcal N} \oplus E.
$$
Such a diffeomorphism exists by the definitions of $E$ and $T^*{\mathcal N}$ via
the normal exponential map of any metric $g$ (defined on ${\mathcal U}$)
that respects the splittings chosen as follows.
We note that we have the associated short exact sequences
\begin{eqnarray}
&{}& 0 \to T{\mathcal F} \to TQ \to TQ/T{\mathcal F} \to 0,\label{eq:sequence1}\\
&{}& 0 \to TQ \to T_QM \to N_QM \to 0,\label{eq:sequence2}\\
&{}& 0 \to E \to N_QM \to N_QM/E \to 0,\label{eq:sequence3}
\end{eqnarray}
which are $S^1$-equivariant with respect to the above mentioned
natural induced $S^1$-action on $Q$.
We take $S^1$-equivariant splittings of \eqref{eq:sequence2} and
\eqref{eq:sequence3} in addition to that of \eqref{eq:sequence1}
used in Theorem \ref{thm:splitting1}. We then choose an $S^1$-equivariant
metric on the vector bundle $N_QM \cong F$ whose associated normal exponential
map of $Q \cong o_F$ respects the above chosen splittings.
From now on, we will sometimes denote $U_F$ by $F$ in the following context
if there is no danger of confusion.
Now $F$ carries two contact forms $\psi^*\lambda$ and $\lambda_F$ with $\psi = \phi^{-1}$, and they agree
on the zero section $o_F$. With this preparation, we will derive Theorem \ref{thm:neighborhoods2} from the following
general submanifold version of Gray's theorem.
\begin{thm}\label{thm:normalform} Let $M$ be an odd dimensional manifold with two contact
forms $\lambda_0$ and $\lambda_1$ on it.
Let $Q \subset M$ be a closed submanifold consisting of closed Reeb orbits of $\lambda_0$ and assume
\begin{equation}\label{eq:equalities}
\lambda_0|_{T_QM} =\lambda_1|_{T_QM}, \quad d\lambda_0|_{T_QM} = d\lambda_1|_{T_QM},
\end{equation}
where we denote $T_QM = TM|_Q$.
Then there exist a diffeomorphism $\phi$ from a neighborhood $\mathcal{U}$ of $Q$ to a neighborhood $\mathcal{V}$ of $Q$ such that
\begin{equation}\label{eq:phi}
\phi|_Q = \text{\rm id}_Q,
\end{equation}
and a function $f>0$ such that
$$
\phi^*\lambda_1 = f \cdot\lambda_0
$$
and
\begin{equation}\label{eq:f}
f|_Q\equiv 1,\quad df|_{T_QM}\equiv 0.
\end{equation}
\end{thm}
\begin{proof}
By the assumption on $\lambda_0, \, \lambda_1$, there exists a small tubular neighborhood of $Q$ in $M$, denoted by $\mathcal{U}$,
such that each member of the family $\lambda_t=(1-t)\lambda_0+t\lambda_1$, $t\in [0,1]$, is a contact form on $\mathcal{U}$:
this follows from the requirement \eqref{eq:equalities}. Moreover, we have
$$
\lambda_t|_{T_QM} \equiv \lambda_0|_{T_QM}(=\lambda_1|_{T_QM}), \quad \text{ for any } t\in [0,1].
$$
Then the standard Moser trick finishes the proof. For the reader's convenience, we provide the details here.
We are looking for a family of diffeomorphisms onto their images $\phi_t: {\mathcal U}' \to {\mathcal U}$,
for some smaller open subset ${\mathcal U}' \subset \overline{{\mathcal U}'} \subset {\mathcal U}$,
such that
$$
\phi_t|_Q = \textrm{id}_Q, \quad d\phi_t|_{T_QM} = \textrm{id}_{T_QM}
$$
for all $t \in [0,1]$, together with a family of functions $f_t>0$
defined on $\overline{{\mathcal U}'}$ such that
$$
\phi_t^*\lambda_t = f_t\cdot \lambda_0 \quad \text{on } \, {\mathcal U}'
$$
for $0 \leq t \leq 1$. We will further require $f_t \equiv 1$ on $Q$ and $df_t|_Q \equiv 0$.
Since $Q$ is a closed manifold, it is enough to look for the vector fields $Y_t$ generating $\phi_t$ via
\begin{equation}\label{eq:ddtphit}
\frac{d}{dt}\phi_t=Y_t\circ \phi_t, \quad \phi_0=id,
\end{equation}
that satisfy
$$
\begin{cases}
\phi_t^*\left(\frac{d}{dt}\lambda_t+{\mathcal L}_{Y_t}\lambda_t\right)
= \frac{f_t'}{f_t} \phi_t^*\lambda_t\\
Y_t|_Q\equiv0, \quad \nabla Y_t|_{T_QM} \equiv 0.
\end{cases}
$$
By Cartan's magic formula, the first equation gives rise to
\begin{equation}\label{eq:cartan}
d(Y_t\rfloor\lambda_t)+Y_t\rfloor d\lambda_t={\mathcal L}_{Y_t}\lambda_t=\Big(\frac{f_t'}{f_t}\circ \phi_t^{-1}\Big)\lambda_t-\alpha,
\end{equation}
where
$$
\alpha=\lambda_1-\lambda_0 \ \Big(\equiv \frac{d \lambda_t}{dt}\Big).
$$
Now we need to show that there exists $Y_t$ such that
$\frac{d}{dt}\lambda_t+{\mathcal L}_{Y_t}\lambda_t$ is proportional to $\lambda_t$.
Actually, we can make our choice of $Y_t$ unique if we restrict ourselves
to those tangent to $\xi_t = \ker \lambda_t$, by Lemma \ref{lem:decompose}.
We require $Y_t\in \xi_t$, and then \eqref{eq:cartan} becomes
\begin{equation}\label{eq:alphatYt}
\alpha = - Y_t \rfloor d\lambda_t +\Big(\frac{f_t'}{f_t}\circ \phi_t^{-1}\Big)\lambda_t.
\end{equation}
This determines $Y_t$ uniquely, and in turn $\phi_t$ by integration.
Since $\alpha|_Q =(\lambda_1-\lambda_0)|_Q=0$ and $\lambda_t|_{T_QM} = \lambda_0|_{T_QM}$
(and hence $f_t \equiv 1$ on $Q$), it follows that $Y_t=0$ on $Q$. Therefore, by compactness of
$[0,1] \times Q$, the domain of existence of the ODE $\dot x = Y_t(x)$
includes an open neighborhood of $[0,1] \times Q \subset {\mathbb R} \times M$, which we may
assume is of the form $(-\epsilon, 1+ \epsilon) \times {\mathcal V}$.
Going back to \eqref{eq:alphatYt}, we find that the coefficient
$\frac{f_t'}{f_t}\circ \phi_t^{-1}$ is also uniquely determined.
We evaluate $\alpha = \lambda_1 - \lambda_0$ against the vector fields $X_t := (\phi_t)_*X_{\lambda_0}$ and get
\begin{equation}\label{eq:logft}
\frac{d}{dt}\log f_t = \frac{f_t'}{f_t}=(\lambda_1(X_t)-\lambda_0(X_t))\circ \phi_t,
\end{equation}
which determines $f_t$ by integration with the initial condition $f_0 \equiv 1$.
It remains to check the additional properties \eqref{eq:phi} and \eqref{eq:f}.
We set
$$
h_t = \frac{f_t'}{f_t}\circ \phi_t^{-1}.
$$
\begin{lem}
$$
dh_t|_{T_QM} \equiv 0.
$$
\end{lem}
\begin{proof} By \eqref{eq:logft}, we obtain
$$
dh_t = d(\lambda_1(X_t))-d(\lambda_0(X_t)) = {\mathcal L}_{X_t}(\lambda_1 - \lambda_0) - X_t \rfloor d(\lambda_1 - \lambda_0).
$$
Since $X_t = X_{\lambda_0} = X_{\lambda_1}$ on $Q$ is tangent to $Q$ and $d\lambda_1 = d\lambda_0$ on $T_QM$ by \eqref{eq:equalities}, the
second term vanishes on $T_QM$.
For the first term, consider $p \in Q$ and $v \in T_pM$. Let $Y$ be a locally defined vector field
with $Y(p) = v$. Then we compute
$$
{\mathcal L}_{X_t}(\lambda_1 - \lambda_0)(Y)(p) = {\mathcal L}_{X_t}((\lambda_1 - \lambda_0)(Y))(p) - (\lambda_1 - \lambda_0)({\mathcal L}_{X_t}Y)(p).
$$
The second term of the right hand side vanishes since $\lambda_1 = \lambda_0$ on $T_pM$ for $p \in Q$. For the first one, we note that
$X_t$ is tangent to $Q$ for all $t$ and that $(\lambda_1 - \lambda_0)(Y) \equiv 0$ on $Q$ by the hypothesis
$\lambda_0 = \lambda_1$ on $T_QM$. Therefore the first term also vanishes. This finishes the proof.
\end{proof}
Now we set $g_t = \log f_t$. Since $\phi_t$ fixes $Q$ pointwise, $d\phi_t$ maps $T_QM$ to itself, and hence
$dg'_t = \phi_t^*(dh_t) = 0$ on $T_QM$ for all $t$.
Integrating $dg'_t = 0$ in $t$ with the initial value $dg_0 = 0$ along $Q$, we obtain
$dg_t = 0$ along $Q$ (meaning $dg_t|_{T_QM} = 0$), i.e., $df_t = 0$ on $Q$.
This completes the proof of Theorem \ref{thm:normalform}.
\end{proof}
Applying this theorem to $\lambda$ and $\lambda_F$ on $F$ with $Q$ the zero section $o_F$,
we can wrap up the proof of Theorem \ref{thm:neighborhoods2}.
\begin{proof}[Proof of Theorem \ref{thm:neighborhoods2}]
The requirement \eqref{eq:psi*lambda} and the first identity of \eqref{eq:ioFpsi}
are immediate translations of Theorem \ref{thm:normalform}.
For the second identity in \eqref{eq:ioFpsi}, we compute
$$
\psi^*d\lambda = df \wedge \lambda_F + f\, d\lambda_F.
$$
Using $df|_{o_F}= 0$ and $f|_{o_F} = 1$, we derive
$$
\psi^*d\lambda|_{VTF} = d\lambda_F|_{VTF} = \Omega
$$
on $o_F$. This proves the second identity and hence finishes the proof.
\end{proof}
\betaegin{equation}gin{defn}[Normal Form of Contact Form]\lambda} \def\La{\Lambdaambdabel{defn:normalneighborhood} We call $(U_F,f \, \lambda} \def\La{\Lambdaambdambda_F)$ the normal
form of the contact form $\lambda} \def\La{\Lambdaambdambda$ associated to the Morse-Bott submanifold $Q$ of closed Reeb orbits.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{defn}
Note that the contact structures associated to $\psisi^*\lambda} \def\La{\Lambdaambdambda$ and $\lambda} \def\La{\Lambdaambdambda_F$ are the same which is given by
$$
\xii_F = \kappaer \lambda} \def\La{\Lambdaambdambda_F = \kappaer \psisi^*\lambda} \def\La{\Lambdaambdambda.
$$
This proves the following normal form theorem of the contact structure $(M,\xii)$
in a neighborhood of $Q$.
\betaegin{equation}gin{prop} Suppose that $Q \sigma} \delta} \def\D{\Deltaef\S{\Sigmamaubset M$ be a submanifold of closed Reeb orbits of
$\lambda} \def\La{\Lambdaambdambda$. Then there exists a contactomorphism from
a neighborhood ${\muathbb C}U \sigma} \delta} \def\D{\Deltaef\S{\Sigmamaupset Q$ to a neighborhood of the zero section of
$F$ equipped with $S^1$-equivariant contact structure $\lambda} \def\La{\Lambdaambdambda_F$.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{prop}
\betaegin{equation}gin{defn}[Normal Form of Contact Structure] We call $(F,\xii_F)$ the normal form of
$(M,\xii)$ associated to the Morse-Bott submanifold $Q$ of closed Reeb orbits.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{defn}
However, the Reeb vector fields of $\psisi^*\lambda} \def\La{\Lambdaambdambda$ and $\lambda} \def\La{\Lambdaambdambda_F$ coincide
only along the zero section in general.
In the rest of the paper, we will work with $F$ and for the general contact form $\lambda} \def\La{\Lambdaambdambda$
that satisfies
\betaegin{equation}\lambda} \def\La{\Lambdaambdabel{eq:F-lambda}
\lambda} \def\La{\Lambdaambdambda|_{o_F} \varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonquiv \lambda} \def\La{\Lambdaambdambda_F|_{o_F}, \quad d\lambda} \def\La{\Lambdaambdambda|_{VTF}|_{o_F} = \Omegamega.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilone
In particular $o_F$ is also the locus of closed Reeb orbits with the same period $T$
of a Morse-Bott contact form $\lambda} \def\La{\Lambdaambdambda$.
We re-state the above normal form theorem in this context.
\betaegin{equation}gin{prop} Let $\lambda} \def\La{\Lambdaambdambda$ be any contact form in a neighborhood of $o_F$ on $F$
satisfying \varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonqref{eq:F-lambda}. Then there exist an open embedding $\psisi: {\muathbb C}U \tauo F$
and a function $f$ on ${\muathbb C}U$ for open neighborhoods ${\muathbb C}U, \, F$ of $o_F$ such that
$\psisi|_{oF} = \tauextrm{id}_{o_F}$,
$\psisi^*\lambda} \def\La{\Lambdaambdambda = f\, \lambda} \def\La{\Lambdaambdambda_F$ with $f|_{o_F} \varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonquiv 1$ and $df|_{o_F} \varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonquiv 0$.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{prop}
We denote $\xii$ and $X_\lambda} \def\La{\Lambdaambdambda$ the corresponding contact structure and Reeb vector field of $\lambda} \def\La{\Lambdaambdambda$,
and $\psii_\lambda} \def\La{\Lambdaambdambda$, $\psii_{\lambda} \def\La{\Lambdaambdambda_F}$ the corresponding projection from
$TF$ to $\xii$ and $\xii_F$.
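Explicitly, with respect to the splittings $TF = {\mathbb R}\{X_\lambda\} \oplus \xi$ and $TF = {\mathbb R}\{X_{\lambda_F}\} \oplus \xi_F$, these projections are presumably given by the standard formulas
$$
\pi_\lambda(Y) = Y - \lambda(Y)\, X_\lambda, \qquad \pi_{\lambda_F}(Y) = Y - \lambda_F(Y)\, X_{\lambda_F}
$$
for $Y \in TF$; we record them here only for the reader's orientation.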
\section{Linearization of closed Reeb orbits in the normal form}
\label{sec:linearization-orbits}
In this section, we systematically examine the decomposition of the linearization map of
closed Reeb orbits in terms of the coordinate expression of
the loops $z$ in $F$ in this normal neighborhood.
For a given map $z: S^1 \to F$, we denote $x := \pi_F \circ z$. Then
we can express
$$
z(t) = (x(t), s(t)), \quad t \in S^1
$$
where $s(t) \in F_{x(t)}$, i.e., $s$ is a section of $x^*F$.
We regard this decomposition as the map
$$
{\mathcal I}: C^\infty(S^1, F) \to {\mathcal H}^F_{S^1}
$$
where ${\mathcal H}^F_{S^1}$ is the infinite dimensional vector bundle
$$
{\mathcal H}^F_{S^1} = \bigcup_{x \in C^\infty(S^1,Q)} {\mathcal H}^F_{S^1,x}
$$
and ${\mathcal H}^F_{S^1,x}$ is the vector space given by
$$
{\mathcal H}^F_{S^1,x} = \Omega^0(x^*F),
$$
the set of smooth sections of the pull-back vector bundle $x^*F$. This provides
a coordinate description of $C^\infty(S^1,F)$ in terms of ${\mathcal H}^F_{S^1}$. We denote the corresponding
coordinates by $z = (x_z,s_z)$ when we feel it necessary to make the dependence of $(x,s)$ on $z$
explicit.
We fix an $S^1$-invariant connection on $F$ and the associated splitting
\begin{equation}\label{eq:TF}
TF = HTF \oplus VTF
\end{equation}
which is defined to be the direct sum of the connection on $T^*{\mathcal N}$ and the $S^1$-invariant
connection on the symplectic vector bundle $(E,\Omega)$.
Then we express
$$
\dot z = \left(\begin{matrix} \widetilde{\dot x} \\
\nabla_t s \end{matrix}\right).
$$
Here we regard $\dot x$ as a $TQ$-valued one-form on $S^1$ and
$\nabla_t s$ is defined to be
$$
\nabla_{\dot x} s = (x^*\nabla)_{\frac{\partial}{\partial t}} s,
$$
which we regard as an element of $F_{x(t)}$. Through the identification of
$H_s TF$ with $T_{\pi_F(s)} Q$ and of $V_s TF$ with $F_{\pi(s)}$, or more precisely through the identity
$$
\widetilde{\dot x} \circ I_{s;x} = \dot x,
$$
we will simply write
$$
\dot z = \left(\begin{matrix} \dot x \\
\nabla_t s \end{matrix}\right).
$$
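For example, for a closed Reeb orbit contained in the zero section, written as $z(t) = (x(t), 0)$ with $x(t) = \gamma(T\cdot t)$ as in the next paragraph (and assuming $\gamma$ is parametrized so that $\dot\gamma = X_\theta$), the decomposition simply reads
$$
\dot z = \left(\begin{matrix} T\, X_\theta(x(t)) \\ 0 \end{matrix}\right),
$$
since the section $s \equiv 0$ is parallel.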
Recall that $o_F$ is foliated by the closed Reeb orbits of $\lambda$, which also form
the fibers of the prequantization bundle $Q \to P$.
For a given Reeb orbit $z = (x,s)$, we denote $x(t) = \gamma(T\cdot t)$ where
$\gamma$ is a Reeb orbit of period $T$ of the contact form $\theta$ on $Q$, which is
nothing but a fiber of the prequantization $Q \to P$.
We then decompose
$$
D\Upsilon(z)(Z) = (D\Upsilon(z)(Z))^v + (D\Upsilon(z)(Z))^h.
$$
Then the assignment $Z \mapsto (D\Upsilon(z)(Z))^v$ defines an operator
from $\Gamma(z^*VTF)$ to $\Gamma(z^*VTF)$. We remark that since $VTF \subset \xi$,
we have
\begin{equation}\label{eq:Dvertical}
D\Upsilon(z)(Z) = D^\pi\Upsilon(z)(Z)
\end{equation}
for any vertical vector field $Z$.
Composing with the map $I_{z;x}$,
we obtain an operator from $\Omega^0(x^*F)$ to $\Omega^0(x^*F)$.
We denote this operator by
\begin{equation}\label{eq:Dupsilon}
D\upsilon(x): \Omega^0(x^*F) \to \Omega^0(x^*F).
\end{equation}
Using $X_F = \widetilde X_{\lambda,Q}$ and $\nabla_Y X_F = 0$ for any vertical vector field $Y$,
we derive the following proposition from Lemma \ref{lem:DUpsilon}.
This will be important later for our exponential estimates.
\begin{prop}\label{prop:DupsilonE} Let $D\upsilon = D\upsilon(x)$ be the operator defined above.
Define the vertical Hamiltonian vector field $X^\Omega_g$ by
$$
X^\Omega_g \rfloor \Omega = dg|_{VTF}.
$$
Then
\begin{equation}\label{eq:nablaeY}
D\upsilon = \nabla_t^F - T\, D^v X_g^{\Omega}(z)
\end{equation}
where $z = (x,o_{x})$.
\end{prop}
\begin{proof}
Consider a vertical vector field $Z \in VTF$ along a Reeb orbit $z$ as above
and regard it as the section of $z^*F$ defined by
$$
s_Z(t) = I_{z;x}(Z(t)),
$$
where, as before, $x(t) = \gamma(T\cdot t)$ with $\gamma$ a Reeb orbit of period $T$ of $X_{\lambda,Q}$ on $o_F\cong Q$.
Recall the formula
\begin{eqnarray}\label{eq:DUpsilonE}
D\Upsilon(z)(Z) &= & D^\pi \Upsilon(z)(Z) \nonumber \\
& = & \nabla_t^\pi Z - T \left(\frac{1}{f}\nabla_Z X_{\lambda_F} + Z[1/f] X_{\lambda_F} \right)
- T \left(\frac{1}{f} \nabla_Z Y_{dg} + Z[1/f] Y_{dg}\right) \nonumber\\
&{}&
\end{eqnarray}
from \eqref{eq:Dvertical} and Lemma \ref{lem:DUpsilon}, which we apply to the vertical vector field $Z$ for
the contact manifold $(U_F,\lambda_F)$.
We recall that $f \equiv 1$ on $o_F$ and $df \equiv 0$ on $TF|_{o_F}$.
Therefore we have $Z[1/f] = 0$. Furthermore, recall $X_F = \widetilde X_{\lambda,Q}$ and
$$
\nabla_Z X_F = D^v X_F (s_Z) = D^v \widetilde X_\theta(s_Z) = 0
$$
on $o_F$. On the other hand, by definition, we derive
$$
I_{z;x}\left(\frac{1}{f}\nabla_Z Y_{dg}\right) =
D^v X_g^\Omega (s_Z).
$$
Substituting this into \eqref{eq:DUpsilonE} and composing with $I_{z;x}$,
we have finished the proof.
\end{proof}
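In particular, in the unperturbed case where $f \equiv 1$ near $o_F$ (so that $g = \log f \equiv 0$ and hence $X_g^\Omega = 0$), formula \eqref{eq:nablaeY} reduces to
$$
D\upsilon = \nabla_t^F,
$$
the covariant derivative along the orbit; the zeroth order term $-T\, D^v X_g^{\Omega}(z)$ is produced solely by the perturbation $f$.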
By construction, it follows that the vector fields along $z$ defined by
$$
t \mapsto \phi_{X_\theta}^t(v), \quad t \in [0,1],
$$
for any $v \in T_{z(0)} Q$, lie in $\ker D\Upsilon(z)$.
By the Morse-Bott hypothesis, this set of vector fields exhausts
$
\ker D\Upsilon(z).
$
We denote by $\delta > 0$ the gap between $0$ and the first non-zero eigenvalue of
$D\Upsilon(z)$. Then we obtain the following
\begin{cor}\label{cor:gap} Let $z = (x_z, o_{x_z})$ be a Reeb orbit. Then
for any section $s \in \Omega^0(x^*F)$, we have
\begin{equation}\label{eq:Dupsilon-gap}
\|\nabla_t^F s - T D^v X_g^{\Omega}(z)(s)\|^2 \geq \delta^2 \|s\|_2^2.
\end{equation}
\end{cor}
This inequality plays a crucial role in the study of the exponential convergence of
contact instantons in the Morse-Bott context studied later in the present paper.
\section{Normal coordinates of $dw$ in $(U_F,f \lambda_F)$}
\label{sec:coord}
We fix the splitting $TF = HTF \oplus VTF$ given in \eqref{eq:TF} and
consider the decomposition of $w = (u,s)$ according to the splitting.
For a given map $w: \dot \Sigma \to F$, we denote $u := \pi_F \circ w$. Then
we can express
$$
w(z) = (u(z), s(z)), \quad z \in \dot\Sigma
$$
where $s(z) \in F_{u(z)}$, i.e., $s$ is a section of $u^*F$.
We regard this decomposition as the map
$$
{\mathcal I}: {\mathcal F}(\Sigma, F) \to {\mathcal H}^F_\Sigma
$$
where ${\mathcal H}^F_\Sigma$ is the infinite dimensional vector bundle
$$
{\mathcal H}^F_\Sigma = \bigcup_{u \in {\mathcal F}(\Sigma,Q)} {\mathcal H}^F_{\Sigma,u}
$$
and ${\mathcal H}^F_{\Sigma,u}$ is the vector space given by
$$
{\mathcal H}^F_{\Sigma,u} = \Omega^0(u^*F),
$$
the set of smooth sections of the pull-back vector bundle $u^*F$. This provides
a coordinate description of ${\mathcal F}(\Sigma,F)$ in terms of ${\mathcal H}^F_\Sigma$. We denote the corresponding
coordinates by $w = (u_w,s_w)$ when we feel it necessary to make the dependence of $(u,s)$ on $w$
explicit.
In terms of the splitting \eqref{eq:TF}, we express
$$
dw = \left(\begin{matrix} \widetilde{du} \\
\nabla_{du} s \end{matrix}\right).
$$
Here we regard $du$ as a $TQ$-valued one-form on $\dot\Sigma$ and
$\nabla_{du} s$ is defined to be
$$
\nabla_{du(\eta)} s(z) = (u^*\nabla)_{\eta} s
$$
for a tangent vector $\eta \in T_z\Sigma$, which we regard as an element of $F_{u(z)}$. Through the identification of
$HTF_s$ with $T_{\pi_F(s)} Q$ and of $VTF_s$ with $F_{\pi_F(s)}$, or more precisely through the identity
$$
I_{w;u}(\widetilde{du}) = du,
$$
we will simply write
$$
dw = \left(\begin{matrix} du \\
\nabla_{du} s \end{matrix}\right)
$$
from now on, unless it is necessary to emphasize the fact that $dw$ a priori has values
in $TF = HTF \oplus VTF$, not in $TQ \oplus F$.
To write these in terms of the coordinates $w = (u,s)$, we first derive the
formula for the projection $d^\pi w = d^{\pi_{\lambda}}w$ with $\lambda = f\, \lambda_F$.
For this purpose, we recall the formula for $X_{f\lambda_F}$ from Proposition \ref{prop:eta} in Subsection
\ref{subsec:perturbed-forms},
$$
X_{f\lambda_F} = \frac{1}{f}(X_{\lambda_F} + Y_{dg}), \quad Y_{dg}: = \pi_{\lambda_F}(\flat_{\lambda_F}(dg))
$$
for $g = \log f$. We decompose
$$
Y_{dg} = (Y_{dg})^v + (Y_{dg})^h
$$
into the vertical and the horizontal components. This leads us to the decomposition
\begin{equation}\label{eq:XflambdaF-decompo}
f\, X_{f\lambda_F} = (Y_{dg})^h + X_{\lambda_F} + (Y_{dg})^v
\end{equation}
in terms of the splitting
$$
TF = \widetilde{(\xi_\lambda\cap TQ)} \oplus {\mathbb R}\{X_F\} \oplus VTF, \quad HTF = \widetilde{(\xi_\lambda\cap TQ)} \oplus {\mathbb R}\{X_F\}.
$$
Recalling $d\lambda_F = \pi_F^*d\theta + \pi_{T^*{\mathcal N};F}^*d\Theta_G + \pi_{E;F}^*\widetilde \Omega$, and noting that $d\Theta_G$ vanishes on $VTF$, we derive the following lemma.
\begin{lem} At each $s \in F$,
\begin{equation}\label{eq:Hamiltonian}
(Y_{dg})^v(s) = X_{g|_{F_{\pi(s)}}}^{\Omega^v(s)}.
\end{equation}
\end{lem}
Now we are ready to derive an important formula that will play a
crucial role in our exponential estimates in later sections. Recalling the canonical
isomorphism
$$
I_{s;\pi_F (s)}: VTF_s \to F_{\pi(s)}
$$
introduced in Section 2, we define the following \emph{vertical derivative}.
\begin{defn}\label{defn:vertical-derive}
Let $X$ be a vector field on $F \to Q$. The \emph{vertical derivative} of $X$,
denoted by $D^v X: F \to F$, is the map defined by
\begin{equation}\label{eq:DvX}
D^vX(q)(f): = \frac{d}{dr}\Big|_{r=0} I_{r f;\pi(r f)}(X^v(r f))
\end{equation}
for $q \in Q$ and $f \in F_q$.
\end{defn}
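As a simple illustration of this definition (a hypothetical example, not used elsewhere), suppose the vertical part of $X$ is fiberwise linear, i.e., $X^v(f) = A(\pi(f))f$ under the identification $I$ for some bundle endomorphism $A: F \to F$. Then
$$
D^vX(q)(f) = \frac{d}{dr}\Big|_{r=0} \big(r\, A(q)f\big) = A(q)f,
$$
so the vertical derivative recovers the endomorphism $A$ itself.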
\begin{prop} Let $(E,\Omega, J_E)$ be the Hermitian vector bundle
for $\Omega$ defined as before.
Let $g = \log f$ and let $X_g^{d\lambda_E}$ be the contact Hamiltonian
vector field as above. Then we have
$$
J_E D^v Y_{dg} = \operatorname{Hess}^v g(q,o_q).
$$
In particular, $J_E D^v X_g^{d\lambda_E}: E \to E$
is a symmetric endomorphism with respect to the metric
$g_E = \Omega(\cdot, J_E\cdot)$.
\end{prop}
\begin{proof} Let $q \in Q$ and $e_1, \, e_2 \in E_q$. We compute
\begin{eqnarray*}
\langle D^v Y_{dg}(q)e_1, e_2 \rangle & = & \Omega (D^v Y_{dg}(q) e_1,J_E e_2) \\
d\lambda_E(D^v Y_{dg}(q) e_1,J_E e_2) & = &
\Omega \left(\frac{d}{dr}\Big|_{r=0} I_{re_1;q}((Y_{dg})^v (re_1)), J_E e_2\right) \\
& = & \Omega \left(\frac{d}{dr}\Big|_{r=0} I_{re_1;q}(X_g^{\Omega}(re_1)), J_E e_2\right).
\end{eqnarray*}
Here $\frac{d}{dr}\Big|_{r=0} X_g^{\Omega}(re_1)$ is nothing but
$$
DX_g^{\Omega}(q)(e_1)
$$
where $DX_g^{\Omega}(q)$ is the linearization of the Hamiltonian vector field of $g|_{E_{q}}$ with respect to the
symplectic inner product $\Omega(q)$ on $E_q$. Therefore it lies in the symplectic
Lie algebra $sp(\Omega)$ and so satisfies
\begin{equation}\label{eq:DXg-symp}
\Omega(DX_g^{\Omega}(q)(e_1), e_2) + \Omega(e_1, DX_g^{\Omega}(q)(e_2)) = 0,
\end{equation}
which is equivalent to saying that $J_E DX_g^{\Omega}(q)$ is symmetric with
respect to the inner product $g_E = \Omega(\cdot, J_E \cdot)$. But we also have
$$
J_E DX_g^{\Omega}(q) = D \operatorname{grad}_{g_E(q)} g|_{E_q} = \operatorname{Hess}^v g(q).
$$
On the other hand, \eqref{eq:DXg-symp} also implies
$$
\Omega(DX_g^{\Omega}(q)(J_E e_1), e_2) - \Omega(DX_g^{\Omega}(q)(e_2), J_E e_1) = 0
$$
with $e_1$ replaced by $J_E e_1$ therein. The first term becomes
$$
\langle DX_g^{\Omega}(q)(e_1), e_2 \rangle
$$
and the second term can be written as
\begin{eqnarray*}
\Omega(DX_g^{\Omega}(q)(J_E e_2), e_1)& = & - \Omega(J_E e_2, DX_g^{\Omega}(q)(e_1))\\
& = & \Omega(e_2, J_E DX_g^{\Omega}(q)(e_1)) \\
& = & \langle e_2, DX_g^{\Omega}(q)(e_1)\rangle.
\end{eqnarray*}
Combining the two, we have finished the proof.
\end{proof}
\section{$CR$-almost complex structures adapted to $Q$}
\label{sec:adapted}
\emph{We would like to emphasize that we have not involved any
almost complex structure yet. Now we involve $J$ in our discussion.}
Let $J$ be any $CR$-almost complex structure compatible with $\lambda$ in the sense that
$(M,\lambda,J)$ defines a contact triad, and denote by $g$ the triad metric.
Then we can realize the normal bundle $N_QM = T_QM /TQ$ as the metric normal bundle
$$
N^g_QM = \{ v \in T_QM \mid d\lambda(v, J w) = 0, \ \forall w \in TQ\}.
$$
We start with the following obvious lemma.
\begin{lem} Consider the foliation ${\mathcal N}$ of $(Q,\omega_Q)$, where $\omega_Q = i_Q^*d\lambda$.
Then $JT{\mathcal N}$ is perpendicular to $TQ$ with respect to the triad metric of $(M,\lambda,J)$.
In particular $JT{\mathcal N} \subset N^g_QM$.
\end{lem}
\begin{proof} The first statement follows from the property that
$T{\mathcal N}$ is isotropic with respect to $\omega_Q$.
\end{proof}
Now we introduce the concept of almost complex structures adapted to the locus $Q$ of closed Reeb orbits
in $M$.
\begin{defn}\label{defn:adapted} Let $Q \subset M$ be the locus of closed Reeb orbits of a
Morse-Bott contact form $\lambda$.
Suppose $J$ defines a contact triad $(M,\lambda,J)$.
We say a $CR$-almost complex structure $J$ for $(M,\xi)$ is adapted to
the submanifold $Q$ if $J$ satisfies
\begin{equation}\label{eq:JTQ}
J (TQ) \subset TQ + J T{\mathcal N}.
\end{equation}
\end{defn}
\begin{prop}\label{prop:adapted} The set of adapted $J$ relative to $Q$ is nonempty and is a contractible
infinite dimensional manifold.
\end{prop}
\begin{proof}
For the existence of a $J$ adapted to $Q$, we recall the splitting
\begin{eqnarray*}
T_q Q & = & {\mathbb R}\{X_\lambda(q)\}\oplus T_q {\mathcal N} \oplus G_q,\\
T_q M & \cong & ({\mathbb R}\{X_\lambda(q)\}\oplus T_q{\mathcal N} \oplus G_q) \oplus(T_q^*{\mathcal N} \oplus E_q) \\
& = & {\mathbb R}\{X_\lambda(q)\}\oplus (T_q{\mathcal N} \oplus T_q^*{\mathcal N}) \oplus G_q \oplus E_q
\end{eqnarray*}
on each connected component of $Q$. Therefore we
can find $J$ so that it is compatible on $T{\mathcal N} \oplus T^*{\mathcal N}$ with respect to
$-d\Theta_G|_{T{\mathcal N} \oplus T^*{\mathcal N}}$, compatible on $G$
with respect to $\omega_Q$ and compatible on $E$ with respect to $\Omega$. It follows that
any such $J$ is adapted to $Q$. This proves the first statement.
The proof of the second statement will be postponed to the Appendix.
\end{proof}
We note that each summand $T_q{\mathcal N} \oplus T_q^*{\mathcal N}$,
$G_q$ and $E_q$ in the above splitting of $T_QM$ is symplectic with respect to $d\lambda$.
We recall the embeddings of $T^*{\mathcal N}$ and $E$ into $N_QM$
and the identification $N_Q M \cong T^*{\mathcal N} \oplus E$ discussed in Subsection
\ref{subsec:structure}.
\begin{lem}\label{lem:J-identify} For any adapted $J$, the identification of the normal bundle
\begin{equation}\label{eq:NgQM}
N_Q M \to N^g_Q M; \quad [v] \mapsto \widetilde{d\lambda}(-J v)
\end{equation}
naturally induces the following identifications:
\begin{enumerate}
\item
$
T^*{\mathcal N}\cong JT{\mathcal N}.
$
\item
$
\text{\rm Image}(E \hookrightarrow N_QM)= (TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}.
$
\end{enumerate}
\end{lem}
\begin{proof}
(1) follows by looking at the metric $\langle\cdot, \cdot\rangle=d\lambda(\cdot, J\cdot)$.
Restricting this metric to $(TQ)^{d\lambda}$,
we can identify $E$ with the orthogonal complement of $T{\mathcal F}$ therein,
which is just $(TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}$.
\end{proof}
\begin{lem}\label{lem:JE} For any adapted $J$, $J E\subset E$ in the sense of the identification of $E$ with
the subbundle of $T_QM$ given in the above lemma.
\end{lem}
\begin{proof}
Take $v\in (TQ)^{d\lambda}\cap (JT{\mathcal N})^{d\lambda}$.
Then for any $w\in TQ$,
$$
d\lambda(Jv, w)=-d\lambda(v, Jw)=0
$$
since $JTQ\subset TQ + JT{\mathcal N}$. Hence $Jv\in (TQ)^{d\lambda}$.
For any $w\in T{\mathcal N}$,
$$
d\lambda(Jv, Jw)=d\lambda(v, w)=0
$$
since $v\in (TQ)^{d\lambda}$ and $w\in TQ$. Hence $Jv\in (JT{\mathcal N})^{d\lambda}$,
and we are done.
\end{proof}
\begin{rem}\label{rem:adapted}
\begin{enumerate}
\item
We would like to mention that in the nondegenerate case
the adaptedness is automatically satisfied by any compatible $CR$-almost complex structure
$J \in {\mathcal J}(M,\lambda)$, because in that case $P$ is a point, $HTF = {\mathbb R} \cdot \{X_F\}$ and
$VTF = \xi_F$.
\item
However, for the general Morse-Bott case, the set of adapted $CR$-almost complex
structures is strictly smaller than ${\mathcal J}(M,\lambda)$. It appears that, for the proof of the exponential
convergence result at closed Reeb orbits in the Morse-Bott case, this additional restriction of
$J$ to those adapted to the Morse-Bott submanifold of closed Reeb orbits in the above sense facilitates the geometric computations
considerably. (Compare our computations with those given in \cite{bourgeois}, \cite{behwz}.)
\item When $T{\mathcal N} = \{0\}$, $(Q,\lambda_Q)$ carries the structure of a prequantization bundle $Q \to P = Q/S^1$.
\end{enumerate}
\end{rem}
We now specialize to the normal form $(U_F, f \lambda_F)$.
We note that the complex structure $J_F: \xi_F \to \xi_F$ canonically induces one on the vector bundle $VTF \to F$,
$$
J_F^v: VTF \to VTF,
$$
satisfying $(J_F^v)^2 = -id_{VTF}$. For any given $J$ adapted to $o_F \subset F$,
it has the decomposition
$$
J_{U_F}|_{o_F} = \left(\begin{matrix} \widetilde J_G & 0 & 0 & 0 \\
0 & 0 & I & D \\
C & -I & 0 & 0\\
0 & 0 & 0 & J_E
\end{matrix} \right)
$$
on the zero section with respect to the splitting
$$
TF|_{o_F} \cong {\mathbb R}\{X_F\} \oplus G \oplus (T{\mathcal N} \oplus T^*{\mathcal N}) \oplus E.
$$
Here we note that $C \in {\operatorname{Hom}}(G,T^*{\mathcal N})$ and $D \in {\operatorname{Hom}}(E,T{\mathcal N})$, which depend on
$J$. Indeed, it is easy to see $C = 0 = D$ from a consideration of the equation $J_{U_F}^2 = -Id$.
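To spell this out, assuming (as the block entries $C \in \operatorname{Hom}(G,T^*{\mathcal N})$ and $D \in \operatorname{Hom}(E,T{\mathcal N})$ indicate) that the four blocks correspond to the summands $G$, $T{\mathcal N}$, $T^*{\mathcal N}$ and $E$ in this order, the relevant off-diagonal blocks of $J_{U_F}^2$ are
$$
\big(J_{U_F}^2\big)_{(2,1)} = I\cdot C = C, \qquad \big(J_{U_F}^2\big)_{(3,4)} = (-I)\cdot D = -D,
$$
and both must vanish since $-Id$ has no off-diagonal blocks, which forces $C = 0 = D$.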
Using the splitting
\begin{eqnarray*}
TF & \cong & {\mathbb R}\{X_{\lambda_F}\} \oplus \widetilde G \oplus (\widetilde{T{\mathcal N}} \oplus VT{\mathcal N}^*) \oplus VTF \\
& \cong &
{\mathbb R}\{X_{\lambda_F}\} \oplus \widetilde G \oplus (T{\mathcal N} \oplus T^*{\mathcal N})\oplus E
\end{eqnarray*}
on $o_F \cong Q$, we lift $J_F$ to a $\lambda_F$-compatible almost
complex structure on the total space of $F$, which we denote by $J_0$.
We note that the triad $(F,\lambda_F,J_0)$ is naturally $S^1$-equivariant with respect to the $S^1$-action induced by
the Reeb flow on $Q$.
\begin{defn}[Normalized Contact Triad $(F,\lambda_F,J_0)$]\label{defn:normaltriad}
We call the $S^1$-invariant contact triad $(F,\lambda_F,J_0)$ the normalized
contact triad adapted to $Q$.
\end{defn}
Now we are ready to give the proof of the following.
\begin{prop}\label{prop:connection} Consider the
contact triad $(U_F,\lambda_F,J_0)$ for an adapted $J$ and its associated
triad connection. Then the zero section $o_F \cong Q$ is
totally geodesic and so naturally induces an affine connection on $Q$.
Furthermore, the induced connection on $Q$ preserves $T{\mathcal F}$ and the splitting
$$
T{\mathcal F} = {\mathbb R}\{X_{\lambda_F}\} \oplus T{\mathcal N}.
$$
\end{prop}
\begin{proof}
We note that $\lambda_F$ is invariant under the fiberwise reflection of
the vector bundle $F \to Q$ by the definition \eqref{eq:lambdaF} of $\lambda_F$, and
so is $J_0$ by the construction given above. Therefore the triad metric of $(U_F,\lambda_F,J_0)$
is invariant under the reflection. This implies that the associated triad connection, which
preserves the triad metric by one of the defining properties \cite{oh-wang1},
makes the zero section totally geodesic, since the zero section is the fixed point set of the
reflection, which is an isometry with respect to the triad metric.
Therefore it canonically restricts to
an affine connection on $o_F \cong Q$.
It remains to show that this connection preserves the splitting $T{\mathcal F} = {\mathbb R}\{X_{\lambda,Q}\} \oplus T{\mathcal N}$.
For simplicity of notation, we denote $\lambda_F = \lambda$ in the rest of this proof.
Let $q \in Q$ and $v \in T_qQ$. We pick a vector field $Z$ that is
tangent to $Q$, $S^1$-invariant and satisfies $Z(q) = v$. Such a vector field exists because
${\mathcal F}$ is the null foliation of $\omega_Q = i_Q^*d\lambda$,
and $Q$ carries the $S^1$-action induced by the Reeb flow of $\lambda$.
If $Z$ is a multiple of $X_\lambda$, then we can choose $Z = c\, X_\lambda$ for some
constant $c$, and so $\nabla_Z X_{\lambda} = 0$ by the axiom $\nabla_{X_\lambda}X_\lambda = 0$
of the contact triad connection. Then for $Y \in \xi \cap T{\mathcal F}$, we compute
$$
\nabla_{X_\lambda} Y = \nabla_Y X_\lambda + [X_\lambda,Y] \in \xi
$$
by an axiom of the triad connection. On the other hand, for any $Z$ tangent to $Q$, we derive
$$
d\lambda(\nabla_{X_\lambda} Y, Z) = -d\lambda(Y,\nabla_{X_\lambda}Z) = 0
$$
since $Q = o_F$ is totally geodesic and so $\nabla_{X_\lambda}Z \in TQ$. This proves that
$\nabla_{X_\lambda} T{\mathcal N} \subset T{\mathcal N}$.
For $v \in T_q Q \cap \xi_q$, we have
$
\nabla_Z {X_\lambda} \in \xi \cap TQ.
$
On the other hand,
$$
\nabla_Z {X_\lambda} = \nabla_{X_\lambda} Z + [Z,X_\lambda] = \nabla_{X_\lambda} Z
$$
since $[Z,X_\lambda] = 0$ by the $S^1$-invariance of $Z$. Now let $W \in T{\mathcal N}$ and compute
$$
\langle \nabla_Z {X_\lambda}, W \rangle = d\lambda(\nabla_Z {X_\lambda}, J_0 W).
$$
On the other hand, $J_0 W \in T^*{\mathcal N} \subset T_{o_F}F$ since ${\mathcal F}$ is (maximally)
isotropic with respect to $d\lambda$. Therefore we obtain
$$
d\lambda( \nabla_Z {X_\lambda}, J_0 W) = -\pi_{T^*{\mathcal N};F}^*d\Theta_G(q,0,0)(\nabla_Z {X_\lambda}, J_0 W) = 0.
$$
This proves that $\nabla_Z {X_\lambda}$ is perpendicular to $T{\mathcal N}$ with respect to the triad metric
and so must be parallel to $X_\lambda$. On the other hand,
$$
\langle \nabla_Z W, {X_\lambda} \rangle = - \langle \nabla_Z {X_\lambda}, W \rangle = 0,
$$
and hence, if $W \in T{\mathcal N}$, then $\nabla_Z W$ is perpendicular to $X_\lambda$. Furthermore we have
$$
d\lambda(\nabla_Z W, V) = -d\lambda(W,\nabla_Z V) =0
$$
for any $V$ tangent to $Q$, since $\nabla_Z V \in TQ$ as $Q$ is totally geodesic. This proves that
$\nabla_ZW$ indeed lies in $\xi \cap T{\mathcal F} = T{\mathcal N}$, which finishes the proof.
\end{proof}
\part{Exponential estimates for contact instantons: Morse-Bott case}
\label{part:exp}
In this part, we develop the three-interval method of
proving $C^\infty$ exponential convergence to closed Reeb orbits
of any (charge vanishing) contact instanton with finite $\pi$-energy and bounded gradient for
any Morse-Bott contact form, and use it at each puncture of the domain Riemann surface.
The contents of this part are as follows:
\begin{itemize}
\item In Section \ref{sec:pseudo}, we briefly review the subsequence convergence result
for contact instantons with finite $\pi$-energy and bounded gradient. This is the starting point for
applying the three-interval method introduced in Section \ref{sec:three-interval} and afterwards;
\item In Section \ref{sec:three-interval}, an abstract framework for the three-interval method is presented;
\item In Section \ref{sec:prequantization}, we focus on the prequantization case and use
the three-interval machinery introduced in Section \ref{sec:three-interval} to prove exponential
convergence. The proof is divided into several steps which are organized into
different subsections.
\item In Section \ref{sec:general}, we prove exponential decay for the general case;
\item In Section \ref{sec:asymp-cylinder}, we explain how to apply this method to symplectic
manifolds with asymptotically cylindrical ends.
\end{itemize}
\section{Subsequence convergence on the adapted contact triad $(U_F,\lambda,J)$}
\label{sec:pseudo}
We first introduce the subsequence convergence result for Morse-Bott contact instantons of finite $\pi$-energy and finite gradient bound. The proofs are almost word-by-word the same as in the nondegenerate case considered in \cite{oh-wang2}. For the readers' convenience, we include the details here.
We fix a punctured Riemann surface $(\dot\Sigma, j)$ with $l$ punctures and equip it with
a metric $h$ which is cylindrical at each end.
To be precise, this means that there exists a compact set $K_\Sigma\subset \dot\Sigma$
such that $\dot\Sigma-\text{\rm Int}(K_\Sigma)$ is the disjoint union of $l^+$ positive half cylinders and $l^-$ negative half cylinders with $l=l^++l^-$, i.e.,
$$
\dot\Sigma-\text{\rm Int}(K_\Sigma)=(\sqcup_{i=1, \cdots, l^+}C^+_i)\sqcup (\sqcup_{i=1, \cdots, l^-}C^-_i),\quad l=l^++l^-,
$$
where
$$
C^+_i=[0, \infty)\times S^1, \quad C^-_i=(-\infty, 0]\times S^1
$$
are equipped with the cylindrical metric $h|_{C^\pm_i}=d\tau^2+dt^2$ thereon.
For any smooth map
$
w: \dot\Sigma\to M,
$
the $\pi$-harmonic energy $E^\pi(w)$ is defined as
\begin{equation}\label{eq:endenergy}
E^\pi(w):= E^\pi_{(\lambda,J;\dot\Sigma,h)}(w) = \frac{1}{2} \int_{\dot \Sigma} |d^\pi w|^2
\end{equation}
where the norm is taken with respect to $h$ and the triad metric on $M$.
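For orientation, on a cylindrical end with coordinates $(\tau,t)$ and metric $d\tau^2 + dt^2$, the energy density takes the familiar form (with $\pi = \pi_\lambda$ the projection to $\xi$ and $dA = d\tau\wedge dt$)
$$
\frac{1}{2}|d^\pi w|^2\, dA = \frac{1}{2}\left(\left|\pi\frac{\partial w}{\partial \tau}\right|^2 + \left|\pi\frac{\partial w}{\partial t}\right|^2\right) d\tau\, dt;
$$
these are precisely the quantities whose decay is studied in Corollary \ref{cor:tangent-convergence} below.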
We put the following hypotheses on $w$ in the asymptotic study of Morse-Bott contact instantons:
\begin{hypo}\label{hypo:basic} Consider the contact triad $(M,\lambda,J)$ with a
Morse-Bott contact form $\lambda$.
Let $w:\dot\Sigma\to M$ be a contact instanton, i.e., a solution of the contact instanton equations \eqref{eq:contact-instanton},
defined on a punctured Riemann surface with cylindrical ends $(\dot\Sigma, j, h)$. We assume that $w$ satisfies
\begin{enumerate}
\item $E^\pi(w):=E^\pi_{(\lambda,J;\dot\Sigma,h)}(w)<\infty$, i.e., finite $\pi$-energy;
\item $\|d w\|_{L^\infty(\dot\Sigma)} <\infty$, i.e., finite gradient bound.
\end{enumerate}
\end{hypo}
The following two asymptotic invariants associated to each puncture play essential roles in the study of the asymptotic behavior of a Morse-Bott contact instanton satisfying Hypothesis \ref{hypo:basic}.
\begin{defn} Let $w: \dot \Sigma \to M$ be as in Hypothesis \ref{hypo:basic}.
At each puncture, we define the asymptotic contact action ${\mathcal T}_{C^{\pm}_i}(w)$ and the asymptotic contact charge ${\mathcal Q}_{C^{\pm}_i}(w)$ of a contact instanton $w$ satisfying Hypothesis \ref{hypo:basic} by
\begin{eqnarray}
{\mathcal T}_{C^{\pm}_i}(w) & := & \frac{1}{2}\int_{C^{\pm}_i} |d^\pi w|^2 + \int_{\{0\}\times S^1}(w|_{\{0\}\times S^1})^*\lambda\label{eq:TQ-T}\\
{\mathcal Q}_{C^{\pm}_i}(w) & := & \int_{\{0\}\times S^1}((w|_{\{0\}\times S^1})^*\lambda\circ j)\label{eq:TQ-Q}
\end{eqnarray}
where $C^\pm_i$ is the cylindrical end associated to the given puncture.
\end{defn}
The following remark shows that both ${\mathcal T}$ and ${\mathcal Q}$ are translation invariant.
\begin{rem}\label{rem:TQ}
For any contact instanton $w$ satisfying Hypothesis \ref{hypo:basic} at a puncture with end
$[0, \infty)\times S^1$, we have
$$
{\mathcal T}(w) = \frac{1}{2}\int_{[s,\infty) \times S^1} |d^\pi w|^2 + \int_{\{s\}\times S^1}(w|_{\{s\}\times S^1})^*\lambda, \quad
\text{for any } s\geq 0,
$$
which is due to $\frac{1}{2}|d^\pi w|^2\, dA=d(w^*\lambda)$ and Stokes' formula;
and
$$
{\mathcal Q}(w)=\int_{\{s \}\times S^1}(w|_{\{s \}\times S^1})^*\lambda\circ j, \quad
\text{for any } s \geq 0,
$$
which is due to $d(w^*\lambda\circ j)=0$.
\end{rem}
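For instance, the first identity can be verified directly from Stokes' formula on the finite cylinder $[0,s]\times S^1$:
$$
\int_{\{s\}\times S^1}(w|_{\{s\}\times S^1})^*\lambda - \int_{\{0\}\times S^1}(w|_{\{0\}\times S^1})^*\lambda
= \int_{[0,s]\times S^1} d(w^*\lambda) = \frac{1}{2}\int_{[0,s]\times S^1} |d^\pi w|^2\, dA,
$$
which is exactly the difference between the two expressions for ${\mathcal T}(w)$ evaluated at the levels $0$ and $s$.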
Since our main interest lies in the asymptotic behavior of a fixed contact instanton $w$ at a given puncture,
we will assume that the domain of $w$ is a positive half cylinder $[0, \infty)\times S^1$
without loss of generality. (The case of a negative half cylinder can be treated in the same way.)
We simply denote the asymptotic contact action and charge at this puncture by ${\mathcal T}$ and ${\mathcal Q}$,
respectively.
\bigskip
\begin{thm}[Subsequence Convergence \cite{oh-wang2}]\label{thm:subsequence}
Let $(M, \lambda, J)$ be any, not necessarily Morse-Bott, contact triad.
Assume $w:[0, \infty)\times S^1\to M$ is a contact instanton, i.e., it satisfies the contact instanton equations \eqref{eq:contact-instanton}, and satisfies Hypothesis \ref{hypo:basic}.
Then for any sequence $s_k\to \infty$, there exist a subsequence, still denoted by $s_k$, and
a Reeb trajectory $\gamma$, not necessarily closed, such that
$$
\lim_{k\to \infty}w(s_k + \tau, t) = \gamma(-{\mathcal Q}\tau+{\mathcal T} t)
$$
in the $C^l(K \times S^1, M)$ sense for any $l\geq 0$, where $K\subset {\mathbb R}$ is an arbitrary compact set.
Furthermore, when $(M, \lambda, J)$ is of Morse-Bott type and $w$ has
non-vanishing period ${\mathcal T}\neq 0$, then there exists a connected submanifold $Q$ foliated by closed
Reeb orbits of period ${\mathcal T}$, so that the limit becomes $\gamma(-{\mathcal Q}\tau+{\mathcal T}\, t)$, where $\gamma$ is a
closed Reeb orbit lying in $Q$.
\end{thm}
The first part of the theorem was proved in \cite[Section 6]{oh-wang2}. For the reader's convenience,
we include its complete proof in Appendix \ref{appendix:subseqproof}.
A similar statement for the Morse-Bott case in the context of symplectization
was proved in \cite[Proposition 2.1]{HWZ3} (see also \cite[Proposition 2.1]{HWZ1, HWZ2}).
\begin{cor}\label{cor:tangent-convergence} Assume $w:[0, \infty)\times S^1\to M$ is a contact instanton, i.e., it satisfies the contact instanton equations \eqref{eq:contact-instanton} in a Morse-Bott contact triad $(M, \lambda, J)$, and satisfies
Hypothesis \ref{hypo:basic}.
Then
\begin{eqnarray*}
&&\lim_{s\to \infty}\left|\pi \frac{\partial w}{\partial\tau}(s+\tau, t)\right|=0, \quad
\lim_{s\to \infty}\left|\pi \frac{\partial w}{\partial t}(s+\tau, t)\right|=0\\
&&\lim_{s\to \infty}\lambda\left(\frac{\partial w}{\partial\tau}\right)(s+\tau, t)=-{\mathcal Q}, \quad
\lim_{s\to \infty}\lambda\left(\frac{\partial w}{\partial t}\right)(s+\tau, t)={\mathcal T}
\end{eqnarray*}
and
$$
\lim_{s\to \infty}|\nabla^l dw(s+\tau, t)|=0 \quad \text{for any}\quad l\geq 1.
$$
All the limits are uniform for $(\tau, t)$ in $K\times S^1$ with compact $K\subset {\mathbb R}$.
\end{cor}
\bigskip
From now on, we take $J$ to be a $CR$-almost complex structure adapted to $Q$,
which in turn induces a $CR$-almost complex structure on a neighborhood $U_F$
of the zero section of $F$.
Denote by $(U_F,\lambda,J)$ the corresponding adapted contact triad.
Restricting to each connected component of the locus of closed Reeb orbits, there exists a uniform constant
$\tau_0>0$ such that the image of $w$ lies in a tubular neighborhood of $Q$ whenever $\tau>\tau_0$.
In other words, for the purpose of studying the asymptotic behaviour at the end, it is enough to restrict ourselves to contact instanton maps from the half cylinder $[0, \infty)\times S^1$
to the canonical neighborhood $(U_F, \lambda, J)$ given in Definition \ref{defn:normalneighborhood}.
\medskip
With the normal form
developed in Part \ref{part:coordinate}, we express
$w$ as $w=(u, s)$,
where $u:=\pi\circ w:[0, \infty)\times S^1\to Q$ and
$s=(\mu, e)$ is a section of the pull-back bundle $u^*(JT{\mathcal N})\oplus u^*E\to [0, \infty)\times S^1$.
Recalling from Section \ref{sec:coord} the expression
$$
dw = \left(\begin{matrix}du\\
\nabla_{du} s \end{matrix}\right)
=\left(\begin{matrix}du\\
\nabla_{du} \mu\\
\nabla_{du} e \end{matrix}\right),
$$
we reinterpret the convergence of $w$ stated in Theorem \ref{thm:subsequence}
in terms of the coordinates $w = (u,s)=(u, (\mu, e))$.
\begin{cor}\label{cor:convergence-ue} Let $w = (u,s)=(u, (\mu, e))$ satisfy the same assumptions as in Theorem \ref{thm:subsequence}. Then for any sequence $s_k\to \infty$, there exist a subsequence, still denoted by $s_k$,
and a Reeb orbit $\gamma$ on $Q$ (which may depend on the choice of subsequence) with action ${\mathcal T}$
and charge ${\mathcal Q}$, such that
$$
\lim_{k\to \infty} u(\tau+s_k, t)= \gamma(-{\mathcal Q}\, \tau + {\mathcal T}\, t)
$$
in the $C^l(K \times S^1, M)$ sense for any $l$, where $K\subset [0,\infty)$ is an arbitrary compact set.
Furthermore,
we have
\begin{eqnarray}
\lim_{s\to \infty}\left|\mu(s+\tau, t)\right|=0, &{}& \quad \lim_{s\to \infty}\left|e(s+\tau, t)\right|=0\nonumber\\
\lim_{s \to \infty} \left|d^{\pi_{\lambda}} u(s+\tau, t)\right|= 0, &{}& \quad
\lim_{s \to \infty} u^*\theta(s+\tau, t) = -{\mathcal Q} d\tau + {\mathcal T}\, dt\nonumber\\
\lim_{s \to \infty} \left|\nabla_{du} e(s+\tau, t)\right| = 0&{}&\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\lim_{s \to \infty} \left|\nabla^k d^{\pi_{\lambda}} u(s+\tau, t)\right| = 0, &{}& \quad
\lim_{s \to \infty} \left|\nabla^k u^*\theta(s+\tau, t)\right| =0\nonumber\\
\lim_{s \to \infty} \left|\nabla_{du}^k e(s+\tau, t)\right| = 0 &{}&\label{eq:highere}
\end{eqnarray}
for all $k \geq 1$, and all the limits are uniform for $(\tau, t)$ on $K\times S^1$ with compact $K\subset [0,\infty)$.
\end{cor}
In particular,
$$
\lim_{s \to \infty} du(s+\tau, t) = (-{\mathcal Q}\, d\tau +{\mathcal T}\, dt)\otimes X_\theta
$$
uniformly for $(\tau, t)$, in the $C^\infty$ topology on
$K\times S^1$ for any given compact $K\subset [0,\infty)$.
\bigskip
In the rest of the present paper, we will add the following technical assumption of vanishing charge.
\begin{hypo}[Charge vanishing]\label{hypo:exact}
\begin{equation}\label{eq:asymp-a=0}
{\mathcal Q}:=\int_{\{0\}\times S^1}((w|_{\{0\}\times S^1})^*\lambda\circ j)=0.
\end{equation}
\end{hypo}
Then the uniform convergence proved in this section ensures all the basic requirements (including
uniformly local tameness,
pre-compactness, the uniformly local coercive property and the locally asymptotically cylindrical
property) for applying the three-interval method
to prove the exponential decay of $w$ at the end, which we will introduce in detail in the following sections.
\section{Abstract framework of the three-interval method}
\label{sec:three-interval}
In this section, we introduce a new method of proving exponential decay using the abstract framework of the three-interval
method. In Section \ref{sec:expdecayMB} later, we will apply this scheme
to the normal bundle part.
We remark that the method can also deal with the case of an exponentially decaying perturbation (see Theorem \ref{thm:three-interval}).
The three-interval method is based on the following analytic lemma.
\begin{lem}[\cite{mundet-tian} Lemma 9.4]\label{lem:three-interval}
For a sequence of nonnegative numbers $\{x_k\}_{k=0, 1, \cdots, N}$, if there exists some constant $0<\gamma<\frac{1}{2}$ such that
$$
x_k\leq \gamma(x_{k-1}+x_{k+1})
$$
for every $1\leq k\leq N-1$, then it follows that
$$
x_k\leq x_0\xi^{-k}+x_N\xi^{-(N-k)},\quad k=0, 1, \cdots, N,
$$
where $\xi:=\frac{1+\sqrt{1-4\gamma^2}}{2\gamma}$.
\end{lem}
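To see where the constant $\xi$ comes from, note that it is the larger root of the quadratic equation $\gamma\xi^2 - \xi + \gamma = 0$, so that $\xi + \xi^{-1} = \gamma^{-1}$. Consequently the model sequence $x_k = \xi^{-k}$ satisfies the three-interval inequality with equality,
$$
\gamma(x_{k-1} + x_{k+1}) = \gamma\, \xi^{-k}(\xi + \xi^{-1}) = \xi^{-k} = x_k,
$$
and writing $\gamma = \frac{1}{e^c+e^{-c}}$ gives $\xi = e^c$, which explains the exponential form recorded in Remark \ref{rem:three-interval} below.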
\begin{rem}\label{rem:three-interval}
\begin{enumerate}
\item If we write $\gamma=\gamma(c):=\frac{1}{e^c+e^{-c}}$, where $c>0$ is uniquely determined by $\gamma$, then the conclusion
can be written in the exponential form
$$
x_k\leq x_0e^{-ck}+x_Ne^{-c(N-k)}.
$$
\item
For an infinite sequence of nonnegative numbers $\{x_k\}_{k=0, 1, \cdots}$ satisfying the same recursive inequality, if in addition we have a uniform bound $\sup_k x_k < \infty$,
then the exponential decay
$$
x_k\leq x_0e^{-ck}
$$
follows by letting $N \to \infty$ in the above inequality.
\end{enumerate}
\end{rem}
The analysis for proving the exponential decay will be carried out on a Banach bundle ${\mathcal E}\to [0, \infty)$ modelled on a Banach space ${\mathbb E}$, by which we mean that every fiber ${\mathbb E}_\tau$ is identified with the Banach space ${\mathbb E}$, smoothly depending on $\tau$. We omit this identification when there is no danger of confusion.
First we emphasize that the base $[0,\infty)$ is non-compact and
carries a natural translation map $\sigma_r: \tau \mapsto \tau + r$ for any positive number $r$.
We introduce the following definition,
which enables us to study sections in a local trivialization after passing to a subsequence.
\begin{defn}\label{def:unif-local-tame} Let ${\mathcal E}$ be a Banach bundle modelled on a Banach space ${\mathbb E}$ over $[0, \infty)$.
Let $[a,b] \subset [0,\infty)$ be any given bounded interval and let
$s_k \to \infty$ be any given sequence. A \emph{tame family of trivializations}
over $[a,b]$ relative to the sequence $s_k$ is defined to be
a sequence of trivializations $\{\Phi_k\}$ of the translated bundles over $[a,b]$,
$$
\Phi_k: \sigma_{s_k}^*{\mathcal E}|_{[a+s_k, b+s_k]} \to [a,b] \times {\mathbb E}
$$
for $k\geq 0$, satisfying the following: there exists a sufficiently large $k_0 > 0$
such that for any $k \geq k_0$ the bundle map
$$
\Phi_{k_0+k} \circ \Phi_{k_0}^{-1}: [a,b] \times {\mathbb E} \to [a,b] \times {\mathbb E}
$$
satisfies
\begin{equation}\label{eq:locallytame}
\|\nabla_\tau^l(\Phi_{k_0+k} \circ \Phi_{k_0}^{-1})\|_{{\mathcal L}({\mathbb E},{\mathbb E})} \leq C_l<\infty
\end{equation}
for constants $C_l = C_l(|b-a|)$ depending only on $|b-a|$, $l=0, 1, \cdots$.
We call ${\mathcal E}$ \emph{uniformly locally tame} if it carries a tame family of
trivializations over $[a,b]$ relative to the sequence $s_k$ for any given bounded
interval $[a,b] \subset [0,\infty)$ and any sequence $s_k \to \infty$.
\end{defn}
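A trivial example to keep in mind (included only for illustration): for the product bundle ${\mathcal E} = [0,\infty)\times {\mathbb E}$ one may take every $\Phi_k$ to be the obvious identification
$$
\Phi_k = \mathrm{id}: [a,b]\times {\mathbb E} \to [a,b]\times {\mathbb E},
$$
for which $\Phi_{k_0+k}\circ \Phi_{k_0}^{-1} = \mathrm{id}$ and \eqref{eq:locallytame} holds with $C_0 = 1$ and $C_l = 0$ for $l \geq 1$; hence any product bundle is uniformly locally tame.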
\betaegin{equation}gin{defn} Suppose ${\muathbb C}E$ is uniformly locally tame. We say a connection $\nuabla$ on
${\muathbb C}E$ is \varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonmph{uniformly locally tame} if the push-forward $(\ifmmode{\muathbb P}\else{$\muathbb P$}\psihi} \def\F{\Phiihi_k)_*\nuabla_\tauau$ can be written as
$$
(\ifmmode{\muathbb P}\else{$\muathbb P$}\psihi} \def\F{\Phiihi_k)_*\nuabla_\tauau = \psihi} \def\F{\Phirac{d}{d\tauau} + \Gammaamma_k(\tauau)
$$
for any tame family $\{\ifmmode{\muathbb P}\else{$\muathbb P$}\psihi} \def\F{\Phiihi_k\}$ so that
$\sigma} \delta} \def\D{\Deltaef\S{\Sigmamaup_{\tauau \in [a,b]}\|\Gammaamma_k(\tauau)\|_{{\muathbb C}L({\muathbb E},{\muathbb E})} < C$ for some $C> 0$ independent of $k$'s.
\varphiepsilon} \delta} \def\D{\Deltaef\ep{\epsilonnd{defn}
\begin{defn}
Consider a pair ${\mathcal E}_2 \subset {\mathcal E}_1$ of uniformly locally tame bundles, and a bundle map
$B: {\mathcal E}_2 \to {\mathcal E}_1$.
We say $B$ is \emph{uniformly locally bounded} if for any compact interval $[a,b] \subset [0,\infty)$ and
any sequence $s_k \to \infty$, there exist a subsequence, still denoted by $s_k$, a sufficiently large $k_0 > 0$ and tame families
$\Phi_{1,k}$, $\Phi_{2,k}$ such that for any $k\geq 0$
\begin{equation}\label{eq:loc-uni-bdd}
\sup_{\tau \in [a,b]} \|\Phi_{i,k_0+k} \circ B \circ \Phi_{i,k_0}^{-1}\|_{{\mathcal L}({\mathbb E}_2, {\mathbb E}_1)} \leq C
\end{equation}
where $C$ is independent of $k$.
\end{defn}
For a given uniformly locally tame pair ${\mathcal E}_2 \subset {\mathcal E}_1$, we denote by ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$ the set
of bundle homomorphisms which are uniformly locally bounded.
\begin{lem}
If ${\mathcal E}_1, \, {\mathcal E}_2$ are uniformly locally tame, then so is ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$.
\end{lem}
\begin{defn}\label{defn:precompact} Let ${\mathcal E}_2 \subset {\mathcal E}_1$ be as above and let $B \in {\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$.
We say $B$ is \emph{pre-compact} on $[0,\infty)$ if for any locally tame families $\Phi_{1,k}, \, \Phi_{2,k}$,
there exists a further subsequence such that
$
\Phi_{1, k_0+k} \circ B \circ \Phi_{1, k_0}^{-1}
$
converges to some $B_{\Phi_1\Phi_2;\infty}\in {\mathcal L}(\Gamma([a, b]\times {\mathbb E}_2), \Gamma([a, b]\times {\mathbb E}_1))$.
\end{defn}
Assume $B$ is a bundle map from ${\mathcal E}_2$ to ${\mathcal E}_1$ which is uniformly locally bounded,
where ${\mathcal E}_1 \supset {\mathcal E}_2$ are uniformly locally tame with tame families
$\Phi_{1,k}$, $\Phi_{2,k}$. We can write
$$
\Phi_{2,k_0+k} \circ (\nabla_\tau + B) \circ \Phi_{1,k_0}^{-1} = \frac{\partial}{\partial \tau} + B_{\Phi_1\Phi_2, k}
$$
as a linear map from $\Gamma([a,b]\times {\mathbb E}_2)$ to $\Gamma([a,b]\times {\mathbb E}_1)$, since $\nabla$ is uniformly
locally tame.
Next we introduce the following notion of coerciveness.
\begin{defn}\label{defn:localcoercive}
Let ${\mathcal E}_1, \, {\mathcal E}_2$ be as above and let $B: {\mathcal E}_2 \to {\mathcal E}_1$ be a uniformly locally bounded bundle map.
We say the operator
$$
\nabla_\tau + B: \Gamma({\mathcal E}_2) \to \Gamma({\mathcal E}_1)
$$
is \emph{uniformly locally coercive} if the following holds:
\begin{enumerate}
\item For any pair of bounded closed intervals $I, \, I'$ with $I \subset \operatorname{Int}I'$,
\begin{equation}\label{eq:coercive}
\|\zeta\|_{L^2(I,{\mathcal E}_2)} \leq C(I,I') (\|\nabla_\tau \zeta + B \zeta\|_{L^2(I',{\mathcal E}_1)} + \| \zeta\|_{L^2(I',{\mathcal E}_1)})
\end{equation}
for a constant $C(I, I')$ depending only on $I, \, I'$ but independent of $\zeta$.
\item For any given bounded sequence $\zeta_k \in \Gamma({\mathcal E}_2)$ satisfying
$$
\nabla_\tau \zeta_k + B \zeta_k = L_k
$$
with $|L_k(\tau)|_{{\mathcal E}_1}$ bounded on a given compact subset $K \subset [0,\infty)$,
there exists a subsequence, still denoted by $\zeta_k$, that converges uniformly in ${\mathcal E}_2$.
\end{enumerate}
\end{defn}
\begin{rem}
Let $E \to [0,\infty)\times S$ be a (finite dimensional) vector bundle and denote by
$W^{k,2}(E)$ the set of $W^{k,2}$-sections of $E$ and by $L^2(E)$ the set of $L^2$-sections. Let
$D: L^2(E)\to L^2(E)$ be a first order elliptic operator with cylindrical end.
Denote by $i_\tau: S \to [0,\infty)\times S$ the natural inclusion map. Then
there is a natural pair of Banach bundles ${\mathcal E}_2 \subset {\mathcal E}_1$ over $[0, \infty)$ associated to $E$, whose fibers
are given by ${\mathcal E}_{1,\tau}=L^2(i_\tau^*E)$, ${\mathcal E}_{2,\tau} = W^{1,2}(i_\tau^*E)$.
Furthermore, ${\mathcal E}_i$ for $i=1, \, 2$ are uniformly locally tame provided $S$ is a compact manifold (without boundary).
Then $D$ is uniformly locally coercive, which follows from elliptic bootstrapping and the Sobolev embedding.
\end{rem}
Finally we introduce the notion of an asymptotically cylindrical operator $B$.
\begin{defn}\label{defn:asympcylinderical} We call $B$ \emph{locally asymptotically cylindrical} if the following holds:
any subsequence limit $B_{\Phi_1\Phi_2;\infty}$ appearing in Definition \ref{defn:precompact}
is a \emph{constant} section,
and $\|B_{\Phi_1\Phi_2, k}-\Phi_{2,k_0+k} \circ B \circ \Phi_{1,k_0}^{-1}\|_{{\mathcal L}({\mathbb E}_i, {\mathbb E}_i)}$ converges to zero
as $k\to \infty$ for both $i =1, 2$.
\end{defn}
Now we specialize to the case of Hilbert bundles ${\mathcal E}_2 \subset {\mathcal E}_1$ over $[0,\infty)$ and assume that
${\mathcal E}_1$ carries a connection which is compatible with the Hilbert inner product of ${\mathcal E}_1$. We denote by $\nabla_\tau$ the associated covariant
derivative. We assume that $\nabla_\tau$ is uniformly locally tame.
Denote by $L^2([a,b];{\mathcal E}_i)$ the space of $L^2$-sections $\zeta$ of ${\mathcal E}_i$ over
$[a,b]$, i.e., those satisfying
$$
\int_a^b |\zeta(\tau)|_{{\mathcal E}_i}^2\, d\tau < \infty,
$$
where $|\zeta(\tau)|_{{\mathcal E}_i}$ is the norm with respect to the given Hilbert bundle structure of ${\mathcal E}_i$.
\begin{thm}[Three-Interval Method]\label{thm:three-interval}
Assume ${\mathcal E}_2\subset{\mathcal E}_1$ is a pair of Hilbert bundles over $[0, \infty)$ with fibers ${\mathbb E}_2$ and ${\mathbb E}_1$,
where ${\mathbb E}_2\subset {\mathbb E}_1$ is dense.
Let $B$ be a section of the associated bundle ${\mathcal L}({\mathcal E}_2, {\mathcal E}_1)$ and let
$L \in \Gamma({\mathcal E}_1)$.
We assume the following:
\begin{enumerate}
\item There exists a covariant derivative $\nabla_\tau$
that preserves the Hilbert structure;
\item ${\mathcal E}_i$ for $i=1, \, 2$ are uniformly locally tame;
\item $B$ is precompact, uniformly locally coercive and asymptotically cylindrical;
\item Every subsequence limit $B_{\infty}$ is a self-adjoint unbounded operator on
${\mathbb E}_1$ with domain ${\mathbb E}_2$, and satisfies $\ker B_{\infty} = \{0\}$;
\item There exists some positive number $\delta$
such that every subsequence limiting operator $B_{\infty}$
of the above mentioned pre-compact family has all its eigenvalues $\lambda$
satisfying $|\lambda| >\delta$;
\item
There exist some $R_0 > 0$, $C_0 > 0$ and $\delta_0 > \delta$ such that
$$
|L(\tau)|_{{\mathcal E}_{1,\tau}} \leq C_0 e^{-\delta_0 \tau}
$$
for all $\tau \geq R_0$.
\end{enumerate}
Then for any (smooth) section $\zeta \in \Gamma({\mathcal E}_2)$ with
\begin{equation}\label{eq:sup-bound}
\sup_{\tau \in [R_0,\infty)} |\zeta(\tau,\cdot)|_{{\mathcal E}_{2,\tau}} < \infty
\end{equation}
and satisfying the equation
\begin{equation}\label{eq:nabla=L}
\nabla_\tau \zeta + B(\tau) \zeta(\tau) = L(\tau),
\end{equation}
there exist constants $R$, $C>0$ such that for any $\tau>R$,
$$
|\zeta(\tau)|_{{\mathcal E}_{1,\tau}}\leq C e^{-\delta \tau }.
$$
\end{thm}
\begin{proof}
We divide $[0, \infty)$ into the union of the unit intervals $I_k:=[k, k+1]$ for
$k=0, 1, \cdots$.
We first prove the exponential decay of $\|\zeta\|^2_{L^2(I_k;{\mathcal E}_1)}$ to zero as $k\to \infty$.
By Lemma \ref{lem:three-interval} and Remark \ref{rem:three-interval}, it is enough to prove that,
for the function $\gamma(c) = \frac{1}{e^c + e^{-c}}$ as in Remark \ref{rem:three-interval}, we have
\begin{equation}
\|\zeta\|^2_{L^2(I_k;{\mathcal E}_1)}\leq \gamma(2\delta)(\|\zeta\|^2_{L^2(I_{k-1};{\mathcal E}_1)}+\|\zeta\|^2_{L^2(I_{k+1};{\mathcal E}_1)})\label{eq:3interval-ineq}
\end{equation}
for every $k=1, 2, \cdots$ for some choice of $0 < \delta < 1$.
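To orient the reader, we record the elementary model computation behind the choice of the constant $\gamma(2\delta)$; this check is included only for convenience. For the model sequence $x_k = e^{-2\delta k}$ one has exact equality in \eqref{eq:3interval-ineq}, since
$$
x_{k-1} + x_{k+1} = \left(e^{2\delta} + e^{-2\delta}\right) e^{-2\delta k} = \frac{x_k}{\gamma(2\delta)}.
$$
In other words, \eqref{eq:3interval-ineq} is precisely the discrete three-interval inequality saturated by the squared exponential rate $e^{-2\delta k}$.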
For simplicity of notation, and also because in the discussion below we only use the norms
$\|\zeta\|_{L^2([a,b];{\mathcal E}_1)}$ or $\|\zeta\|_{L^\infty([a,b];{\mathcal E}_1)}$ even for $\zeta \in {\mathcal E}_2$,
we will just denote
$$
L^2([a,b]) := L^2([a,b];{\mathcal E}_1), \quad L^\infty([a,b]): = L^\infty([a,b];{\mathcal E}_1)
$$
for any given interval $[a,b]$.
If the inequality \eqref{eq:3interval-ineq} does not hold for every $k$,
we collect all the $k$'s that reverse the direction of the inequality.
If there are only finitely many such $k$'s, i.e., if \eqref{eq:3interval-ineq} holds after some large $k_0$,
then we still get the exponential estimate
claimed in the theorem.
Otherwise, there are infinitely many such three-intervals, which we enumerate
by $I^{l_k}_{I}:=[l_k, l_k+1], \,I^{l_k}_{II}:=[l_k+1, l_k+2], \,I^{l_k}_{III}:=[l_k+2, l_k+3]$, $k=1, 2, \cdots$, such that
\begin{equation}
\| \zeta\|^2_{L^2(I^{l_k}_{II}) }>
\gamma(2\delta)(\| \zeta\|^2_{L^2(I^{l_k}_{I}) }+\|\zeta\|^2_{L^2(I^{l_k}_{III}) }).\label{eq:against-3interval}
\end{equation}
Before we deal with this case, we first remark that
this hypothesis in particular implies $ \zeta \not \equiv 0$ on
$I^{l_k}:=I^{l_k}_{I}\cup I^{l_k}_{II}\cup I^{l_k}_{III}$, i.e.,
$\|\zeta\|_{L^\infty(I^{l_k}) }\neq 0$.
\emph{If there exists some uniform constant $C_1>0$} such that
on each such three-interval
\begin{equation}
\|\zeta\|_{L^\infty(I^{l_k}) }<C_1e^{-\delta l_k}, \label{eq:expas-zeta}
\end{equation}
it follows that
\begin{equation}
\|\zeta\|_{L^2([l_k+1, l_k+2]) } \leq C_1e^{-\delta l_k}=C e^{-\delta(l_k+1)}.\label{eq:expest-zeta}
\end{equation}
Here $C=C_1e^{\delta}$ is a constant, where $\delta$ will be determined at the end.
From now on, the various constants $C$ appearing below may vary but are independent of $\zeta$.
Recall that under our assumption
we have infinitely many intervals that satisfy \eqref{eq:3interval-ineq}
and the exponential inequality \eqref{eq:expas-zeta}.
If the union of such intervals $[l_k+1,l_k+2]$ is connected after some point, then we already get our conclusion
from \eqref{eq:expest-zeta}, since every interval becomes a middle interval (of the form $[l_k+1, l_k+2]$)
in such a three-interval.
Otherwise the set of $k$'s that satisfy \eqref{eq:3interval-ineq} forms a sequence of clusters,
$$
I_{l_k+1}, I_{l_k+2}, \cdots, I_{l_k+N_k}
$$
for a sequence $l_1, \, l_2, \cdots, l_k, \cdots $ such that $l_{k+1} > l_k +N_k$ and \eqref{eq:3interval-ineq}
holds on each element contained in each cluster.
We remark that each cluster has the farthest left interval $[l_k+1, l_k+2]$ as the middle interval in $I^{l_k}$,
and the farthest right interval $[l_k+N+2, l_k+N+3]$ as the middle interval in $I^{l_{k+1}}$.
(See Figure \ref{fig:three-interval}.)
\begin{figure}[h]
\setlength{\unitlength}{0.37in}
\centering
\begin{picture}(32,6)
\put(2,5){\line(1,0){9}}
\put(2,5){\line(0,1){0.1}}
\put(3,5){\line(0,1){0.1}}
\put(4,5){\line(0,1){0.1}}
\put(5,5){\line(0,1){0.1}}
\put(8,5){\line(0,1){0.1}}
\put(9,5){\line(0,1){0.1}}
\put(10,5){\line(0,1){0.1}}
\put(11,5){\line(0,1){0.1}}
\put(1,3){\textcolor{red}{\line(1,0){3}}}
\put(9,3){\textcolor{red}{\line(1,0){3}}}
\put(1,3){\textcolor{red}{\line(0,1){0.1}}}
\put(2,3){\textcolor{red}{\line(0,1){0.1}}}
\put(3,3){\textcolor{red}{\line(0,1){0.1}}}
\put(4,3){\textcolor{red}{\line(0,1){0.1}}}
\put(9,3){\textcolor{red}{\line(0,1){0.1}}}
\put(10,3){\textcolor{red}{\line(0,1){0.1}}}
\put(11,3){\textcolor{red}{\line(0,1){0.1}}}
\put(12,3){\textcolor{red}{\line(0,1){0.1}}}
\put(2.5, 4.8){\vector(0,-1){1.3}}
\put(10.5, 4.8){\vector(0,-1){1.3}}
\put(2, 5.2){$\overbrace{\qquad\qquad\qquad\qquad}^{I_{l_k+1}}$}
\put(8, 5.2){$\overbrace{\qquad\qquad\qquad\qquad}^{I_{l_k+N}}$}
\put(1, 2.8){$\underbrace{\qquad\qquad\qquad\qquad}_{I^{l_k}}$}
\put(9, 2.8){$\underbrace{\qquad\qquad\qquad\qquad}_{I_{l_{k+1}}=I_{l_k+N+1}}$}
\put(7.5, 4.7){$l_k+N$}
\put(1.5, 4.7){$l_k+1$}
\put(0.9, 3.2){$l_k$}
\put(8.8, 3.2){$l_{k+1}(=l_k+N+1)$}
\put(2,5){\circle*{0.07}}
\put(1,3){\textcolor{red}{\circle*{0.07}}}
\put(8,5){\circle*{0.07}}
\put(9,3){\textcolor{red}{\circle*{0.07}}}
\put(6,5.3){$\cdots\cdots$}
\put(2,5.02){\line(1,0){1}}
\put(10,5.02){\line(1,0){1}}
\put(2,3.02){\textcolor{red}{\line(1,0){1}}}
\put(10,3.02){\textcolor{red}{\line(1,0){1}}}
\put(2,5.03){\line(1,0){1}}
\put(10,5.03){\line(1,0){1}}
\put(2,3.03){\textcolor{red}{\line(1,0){1}}}
\put(10,3.03){\textcolor{red}{\line(1,0){1}}}
\put(2,1){\line(1,0){1}}
\put(3.5,1){\text{denotes the unit intervals that satisfy \eqref{eq:3interval-ineq}}}
\put(2,1){\line(0,1){0.1}}
\put(3,1){\line(0,1){0.1}}
\put(2,0){\textcolor{red}{\line(1,0){1}}}
\put(3.5,0){\text{denotes the unit intervals that satisfy \eqref{eq:against-3interval} and \eqref{eq:expas-zeta}}}
\put(2,0){\textcolor{red}{\line(0,1){0.1}}}
\put(3,0){\textcolor{red}{\line(0,1){0.1}}}
\end{picture}
\caption{}
\label{fig:three-interval}
\end{figure}
Then from \eqref{eq:expest-zeta}, we derive
\begin{eqnarray*}
\|\zeta\|_{L^2([l_k+1, l_k+2]) } &\leq& Ce^{-\delta l_k}, \\
\|\zeta\|_{L^2([l_k+N+2, l_k+N+3]) } &\leq& Ce^{-\delta l_{k+1}}=Ce^{-\delta (l_k+N+1)}.
\end{eqnarray*}
Combining these and Lemma \ref{lem:three-interval}, we get the following estimate for $l_k+1\leq l\leq l_k+N+2$:
\begin{eqnarray*}
&{}&\|\zeta\|_{L^2([l, l+1]) }\\
&\leq&\|\zeta\|_{L^2([l_k+1, l_k+2]) } e^{-\delta (l-(l_k+1))}
+\|\zeta\|_{L^2([l_k+N+2, l_k+N+3]) }e^{-\delta (l_k+N+2-l)}\\
&\leq& Ce^{-\delta l_k}e^{-\delta (l-(l_k+1))}+Ce^{-\delta (l_k+N+1)}e^{-\delta (l_k+N+2-l)}\\
&=&Ce^{\delta}(e^{-\delta l}+e^{-\delta(2l_k+2N+4-l)})
\leq (2Ce^{\delta})e^{-\delta l}.
\end{eqnarray*}
Thus on each such cluster, we have exponential decay with the presumed rate $\delta$ as claimed in the theorem.
\emph{Now if there is no such uniform constant $C=C_1$ for which \eqref{eq:expas-zeta} holds},
then we can find a sequence of constants $C_k\to \infty$
and a subsequence of such three-intervals $\{I^{l_k}\}$, still denoted by $l_k$, such that
\begin{equation}
\|\zeta\|_{L^\infty(I^{l_k}) }\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zeta}
\end{equation}
We can further choose a subsequence, still denoted by $l_k$,
so that $l_k+3<l_{k+1}$, i.e., the intervals do not intersect one another.
We translate the sections
$
\zeta_k:=\zeta|_{[l_k, l_k+3]}
$
and consider the sections ${\widetilde \zeta}_k$ defined on $[0,3]$ given by
$$
{\widetilde \zeta}_k(\tau, \cdot):=\zeta(\tau + l_k, \cdot).
$$
Then \eqref{eq:decay-fail-zeta} becomes
\begin{equation}
\|{\widetilde\zeta}_k\|_{L^\infty([0,3]) }\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zetak}
\end{equation}
If we consider the translations of $L$ given by $\widetilde L_k(\tau, t)=L(\tau+l_k, t)$,
then
\begin{equation}
|\widetilde L_k(\tau, t)|<Ce^{-\delta l_k}e^{-\delta\tau}\leq Ce^{-\delta l_k}\label{eq:Lk}
\end{equation}
for $\tau \geq 0$. It follows that ${\widetilde \zeta}_k$ satisfies the equation
\begin{equation}
\nabla_\tau \widetilde \zeta_k + B(\tau+l_k,\cdot) \widetilde \zeta_k = \widetilde{L}_k(\tau, t).\label{eq:uktilde-zeta}
\end{equation}
We now rescale \eqref{eq:uktilde-zeta} by dividing it by $\|{\widetilde\zeta}_k\|_{L^\infty([0,3])}$, which cannot vanish
by the standing hypothesis, as we remarked right below \eqref{eq:against-3interval}, and
consider the rescaled sequence
$$
\overline \zeta_k:={\widetilde\zeta}_k/\|{\widetilde\zeta}_k\|_{L^\infty([0,3])}.
$$
We now have
\begin{eqnarray}
\|\overline \zeta_k\|_{{L^\infty([0,3])}}&=&1\nonumber\\
\nabla_\tau\overline\zeta_k+ B(\tau+l_k,t) \overline\zeta_k
&=&\frac{\widetilde{L}_k}{\|{\widetilde\zeta}_k\|_{L^\infty([0,3] )}}\label{eq:nablabarzeta}\\
\|\overline \zeta_k\|^2_{L^2([1,2] )}&\geq&\gamma(2\delta)(\|\overline \zeta_k\|^2_{L^2([0,1] )}+\|\overline \zeta_k\|^2_{L^2([2,3] )}).\nonumber
\end{eqnarray}
From \eqref{eq:decay-fail-zetak} and \eqref{eq:Lk}, we get
\begin{eqnarray*}
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3])}}{\|\widetilde\zeta_k\|_{L^\infty([0, 3] )}} \leq \frac{C}{C_k},
\end{eqnarray*}
and then, by our assumption that $C_k\to \infty$,
we conclude that the right hand side of \eqref{eq:nablabarzeta} converges to zero as $k\to \infty$.
Since $B$ is assumed to be pre-compact, we
get a limiting operator $B_{\infty}$ after taking a subsequence (in a trivialization).
On the other hand, since $B$ is locally coercive,
there exists $\overline\zeta_\infty$ such that
$\overline\zeta_k \to
\overline\zeta _\infty$ uniformly in ${\mathcal E}_2$ and $\overline\zeta _\infty$ satisfies
\begin{equation}\label{eq:xibarinfty-zeta}
\nabla_\tau \overline\zeta_\infty+ B_\infty \overline\zeta_\infty=0 \quad \text{on $[0,3]$},
\end{equation}
and
\begin{equation}\label{eq:xibarinfty-ineq-zeta}
\|\overline \zeta_\infty\|^2_{L^2([1,2] )}\geq\gamma(2\delta)(\|\overline \zeta_\infty\|^2_{L^2([0,1] )}
+\|\overline \zeta_\infty\|^2_{L^2([2,3] )}).
\end{equation}
Since $\|\overline\zeta_\infty\|_{L^\infty([0,3])} = 1$,
we have $\overline\zeta_\infty \not \equiv 0$. Recall that $B_\infty$ is assumed to be an
(unbounded) self-adjoint operator on ${\mathbb E}_1$ with domain ${\mathbb E}_2$.
Let $\{e_i\}$ be an orthonormal basis of ${\mathbb E}_1$ consisting of eigenvectors of $B_\infty$.
We consider the eigenfunction expansion of
$\overline \zeta_\infty(\tau,\cdot)$ and write
$$
\overline \zeta_\infty(\tau) = \sum_{i \in \mathbb Z \setminus \{0\}} a_i(\tau)\, e_i
$$
for each $\tau \in [0,3]$, where $e_i$ are the eigenfunctions of $B_\infty$ associated to the eigenvalues $\lambda_i$ with
$$
-\infty < \cdots \leq \lambda_{-k} \leq \lambda_{-k+1} \leq \cdots < 0 < \lambda_1 \leq \cdots \leq \lambda_i \leq \cdots < \infty.
$$
By plugging $\overline\zeta_\infty$ into \eqref{eq:xibarinfty-zeta}, we derive
$$
a_i'(\tau)+\lambda_ia_i(\tau)=0, \quad i \in \mathbb Z \setminus \{0\}.
$$
It follows that
$$
a_i(\tau)=c_ie^{-\lambda_i\tau}, \quad i \in \mathbb Z \setminus \{0\},
$$
for some constants $c_i$ and hence
\begin{equation}\label{eq:|ai|2}
\|a_i\|^2_{L^2([1,2])}=\gamma(2\lambda_i)(\|a_i\|^2_{L^2([0,1])}+\|a_i\|^2_{L^2([2,3])})
\end{equation}
where the function $\gamma$ is given by
$$
\gamma(2c) : =
\frac{\int_1^2 e^{-2c \tau} \, d\tau}
{\int_0^1 e^{-2c \tau} \, d\tau + \int_2^3 e^{-2c \tau} \, d\tau}.
$$
Equivalently, we obtain
$$
\gamma(c) = \frac{e^{-c} - e^{-2c}}{1- e^{-c} +
e^{-2c} - e^{-3c}} = \frac{1}{e^c + e^{-c}}.
$$
\emph{(This is how the function $\gamma$ becomes relevant to this three-interval argument. We note
that $\gamma$ is an even function.)}
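Since the monotonicity of $\gamma$ is used in the next step, we record the elementary properties needed, included here only for convenience: writing $\gamma(c) = \frac{1}{e^c + e^{-c}} = \frac{1}{2\cosh c}$, we have
$$
\gamma(-c) = \gamma(c), \qquad \gamma'(c) = -\frac{\sinh c}{2\cosh^2 c} < 0 \quad \text{for } c > 0,
$$
so $\gamma$ is even and strictly decreasing on $(0,\infty)$. In particular, $|\lambda_i| > \delta$ implies $\gamma(2\lambda_i) = \gamma(2|\lambda_i|) < \gamma(2\delta)$.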
We compute
\begin{eqnarray*}
\|\overline\zeta_\infty\|^2_{L^2([k, k+1])}&=&\int_{[k,k+1]}|\overline\zeta_\infty(\tau)|^2_{{\mathbb E}_1}\,d\tau\\
&=&\int_{[k,k+1]}\sum_i |a_i(\tau)|^2\, d\tau = \sum_i\|a_i\|^2_{L^2([k,k+1])}.
\end{eqnarray*}
By the monotonically decreasing property of $\gamma$ for $c> 0$, this and \eqref{eq:|ai|2} give rise to
$$
\|\overline \zeta_\infty\|^2_{L^2([1,2])}< \gamma(2\delta) (\|\overline \zeta_\infty\|^2_{L^2([0,1])}+\|\overline \zeta_\infty\|^2_{L^2([2,3])})
$$
for any $\delta$ satisfying $0 < \delta < \min\{|\lambda_{-1}|, \lambda_1\}$.
Since $\overline\zeta_\infty\not \equiv 0$, this contradicts \eqref{eq:xibarinfty-ineq-zeta} if we choose
$0 < \delta < \min\{|\lambda_{-1}|, \lambda_1\}$ at the beginning. This finishes the proof of the exponential decay
\begin{equation}\label{eq:L2|zeta|}
\|\zeta\|_{L^2(I_k;{\mathcal E}_1)} \leq C e^{-\delta k}
\end{equation}
as $k\to \infty$.
Now we show that this implies the exponential decay of $|\zeta(\tau)|_{{\mathcal E}_{1,\tau}}$.
Using \eqref{eq:nabla=L} and \eqref{eq:coercive}, we also derive
\begin{eqnarray}\label{eq:nablazeta}
\|\nabla_\tau\zeta\|_{L^2(I_k,{\mathcal E}_1)} & \leq & \|B \zeta\|_{L^2(I_k,{\mathcal E}_1)}
+ \|L\|_{L^2(I_k,{\mathcal E}_1)}\nonumber\\
& \leq & \sup_{\tau \in [R_0,\infty)}\|B(\tau)\|_{{\mathcal L}({\mathcal E}_2,{\mathcal E}_1)} \|\zeta\|_{L^2(I_k,{\mathcal E}_2)}
+ \|L\|_{L^2(I_k,{\mathcal E}_1)}\nonumber\\
& \leq & C_2' C(I_k,I_k')\left(\|(\nabla_\tau + B) \zeta\|_{L^2(I_k',{\mathcal E}_1)} + \|\zeta\|_{L^2(I_k',{\mathcal E}_1)} \right)\nonumber\\
&{}& + \|L\|_{L^2(I_k,{\mathcal E}_1)}\nonumber\\
& \leq & (C_2'C(I_k,I_k') + 1) \|L\|_{L^2(I_k',{\mathcal E}_1)} + C_2' C(I_k,I_k') \| \zeta\|_{L^2(I_k',{\mathcal E}_1)}\nonumber\\
&\leq & C_2'' e^{-\delta k}.
\end{eqnarray}
Here we have chosen $I_k' = [k-\frac{1}{3},k+\frac{4}{3}]$, $C_2'=\sup_{\tau \in [R_0,\infty)}\|B(\tau)\|_{{\mathcal L}({\mathcal E}_2,{\mathcal E}_1)}$ and $C_2'' = (C_2'C(I_k,I_k')e^{\delta/3} + 1)$.
Combining \eqref{eq:L2|zeta|} and \eqref{eq:nablazeta}, we have derived $\|\zeta\|_{W^{1,2}(I_k,{\mathcal E}_1)} \leq C_2 e^{-\delta k}$
for all $k$ with $C_2 = \max\{C,C_2''\}$.
By applying the Sobolev inequality to the section $\zeta: I_k \to {\mathcal E}_1$,
\begin{equation}\label{eq:Sobolev}
\max_{\tau \in I_k} |\zeta(\tau)|_{{\mathcal E}_{1,\tau}} \leq C_3\|\zeta\|_{W^{1,2}(I_k,{\mathcal E}_1)}
\end{equation}
with $C_3$ the Sobolev constant on $I_k$, this now finishes the proof.
\end{proof}
\begin{rem} Since ${\mathcal E}_1$ may not be finite dimensional, the application of the Sobolev inequality \eqref{eq:Sobolev} may not
look standard to some readers. For the readers' convenience, we give
a direct proof of the inequality \eqref{eq:Sobolev} in Appendix \ref{sec:Sobolev}.
\end{rem}
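For orientation, we also record a minimal sketch of the scalar model of \eqref{eq:Sobolev}; the bundle version is the one proved in Appendix \ref{sec:Sobolev}. For a real-valued $f \in W^{1,2}([k,k+1])$ and any $\tau, \, s \in [k,k+1]$,
$$
f(\tau)^2 = f(s)^2 + 2\int_s^\tau f f'\, dr \leq f(s)^2 + \|f\|^2_{L^2([k,k+1])} + \|f'\|^2_{L^2([k,k+1])},
$$
and integrating this inequality over $s \in [k,k+1]$ gives $\max_{\tau \in [k,k+1]} f(\tau)^2 \leq 2\|f\|^2_{W^{1,2}([k,k+1])}$.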
\section{Exponential convergence: the prequantization case}
\label{sec:prequantization}
To make the main arguments transparent in the scheme of our
exponential estimates, we start with the case of prequantization, i.e.,
the case without ${\mathcal N}$, where the normal form contains $E$ only.
The general case will be dealt with in the next section.
We impose the basic hypothesis that
\begin{equation}\label{eq:Hausdorff}
|e(\tau,t)| < \delta
\end{equation}
for all $\tau \geq \tau_0$ in our further study,
where $\delta$ is given as in Proposition \ref{lem:deltaforE}.
By Corollary \ref{cor:convergence-ue} and the remark after it,
we can locally work with everything in a neighborhood of the zero section in the
normal form $(U_E, f\lambda_E, J)$.
\subsection{Computational preparation}
\label{subsec:prepare}
For a smooth function $h$, we can express its gradient vector field $\operatorname{grad} h$
with respect to the metric $g_{(\lambda_E,J_0)} = d\lambda_E(\cdot, J_0\cdot ) + \lambda_E \otimes \lambda_E$
in terms of the $\lambda_E$-contact Hamiltonian vector field $X^{d\lambda_E}_h$ and
the Reeb vector field $X_E$ as
\begin{equation}\label{eq:gradh}
\operatorname{grad} h = -J_0 X^{d\lambda_E}_h + X_E[h]\, X_E.
\end{equation}
Note that the first term
$-J_0 X^{d\lambda_E}_h=:(\operatorname{grad} h)^\pi$ is the $\pi_{\lambda_E}$-component of $\operatorname{grad} h$.
Consider the vector field $Y$ along $u$ given by
$
Y(\tau,t) := \nabla^\pi_\tau e,
$
where $w = (u,e)$ in the coordinates defined in Section \ref{sec:thickening}.
The vector field $e_\tau = e(\tau,t)$,
as a vector field along $u(\tau,t)$, is nothing but
the map $(\tau,t) \mapsto I_{w(\tau,t);u(\tau,t)}(\vec R(w(\tau,t)))$ regarded as a section of $u^*E$. In particular,
$$
e(\infty,t) = I_{w(\infty,t);u(\infty,t)}(\vec R(w(\infty,t)))= I_{z(t);x(Tt)}(\vec R(o_{x(T t)})) = o_{x(T t)}.
$$
Obviously, $I_{w(\tau,t);u(\tau,t)}(\vec R(w(\tau,t)))$ is pointwise perpendicular to $o_E \cong Q$.
In particular,
\begin{equation}\label{eq:perpendicular}
(\Pi_{x(T\cdot)}^{u_\tau})^{-1} e_\tau \in (\ker D\Upsilon(z))^\perp,
\end{equation}
where $e_\tau$ is the vector field along the loop $u_\tau \subset o_E$ and we regard
$(\Pi_{x(T\cdot)}^{u_\tau})^{-1} e_\tau$ as a vector field along $z = (x(T\cdot),o_{x(T\cdot)})$.
For further detailed computations, one needs to decompose the contact instanton map equation
\begin{equation}\label{eq:contact-instanton-E}
{\overline \partial}_J^{f\lambda_E} w = 0, \quad d(w^*(f\, \lambda_E) \circ j) = 0.
\end{equation}
The second equation does not depend on the choice of the endomorphism $J$ and becomes
\begin{equation}\label{eq:dw*flambda}
d(w^* \lambda_E\circ j) = - dg \wedge (w^*\lambda_E\circ j),\quad g = \log f,
\end{equation}
which is equivalent to
\begin{equation}\label{eq:dw*flambda}
d(u^*\theta \circ j + \Omega(e, \nabla^E_{du\circ j} e))
= - dg \wedge (u^* \theta \circ j + \Omega(e, \nabla^E_{du\circ j} e)).
\end{equation}
On the other hand, by the formula \eqref{eq:xi-projection}, the first equation ${\overline \partial}_J^{f\lambda_E} w = 0$ becomes
\begin{equation}\label{eq:delbarwflambda}
{\overline \partial}^{\pi_{\lambda_E}}_{J_0} w = (w^*\lambda_E\, Y_{dg})^{(0,1)} + (J - J_0) d^{\pi_{\lambda_E}} w
\end{equation}
where $(w^*\lambda_E\, Y_{dg})^{(0,1)}$ is the $(0,1)$-part of the one-form $w^*\lambda_E\, Y_{dg}$
with respect to $J_0$.
In terms of the coordinates, the equation can be rewritten as
\begin{eqnarray*}
&{}& \left(\begin{matrix} {\overline \partial}^{\pi_\theta} u - \left(\Omega(\vec R(u,e),\nabla^E_{du} e)\, X_E(u,e)\right)^{(0,1)}\\
(\nabla^E_{du} e)^{(0,1)} \end{matrix}\right)\\
& = & \left(\begin{matrix}
\left( \left(u^*\theta + \Omega(\vec R(u,e),\nabla^E_{du} e)\right) d\pi_E((Y_{dg})^h)\right)^{(0,1)} \\
\left( \left(u^*\theta + \Omega(\vec R(u,e),\nabla^E_{du} e)\right) X_g^\Omega(u,e)\right)^{(0,1)}
\end{matrix}\right) + (J - J_0) d^{\pi_{\lambda_E}} w.
\end{eqnarray*}
Here $\left(\Omega(\vec R(u,e),\nabla^E_{du} e)\, X_E(u,e)\right)^{(0,1)}$ is the $(0,1)$-part with respect to
$J_Q$ and $(\nabla^E_{du} e)^{(0,1)}$ is the $(0,1)$-part with respect to $J_E$.
From this, we have derived
\begin{lem}\label{lem:eq-in-(u,e)} In the coordinates $w = (u,e)$, \eqref{eq:contact-instanton-E} is equivalent to
\begin{eqnarray}
\nabla''_{du} e & = & \left(w^*\lambda_E\, X_g^\Omega(u,e)\right)^{(0,1)} + I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)\label{eq:CR-e} \\
{\overline \partial}^\pi u & = & \left(w^*\lambda_E\, (d\pi_E(Y_{dg})^h)\right)^{(0,1)}
+ \left(\Omega(e, \nabla^E_{du} e)\, X_\theta(u(\tau,t))^h\right)^{(0,1)}\nonumber\\
&{}& + d\pi_E\left((J - J_0) d^{\pi_{\lambda_E}} w\right)^h
\label{eq:CR-uxi}
\end{eqnarray}
and
\begin{equation}\label{eq:CR-uReeb}
d(w^*\lambda_E \circ j) = -dg \wedge (w^*\lambda_E \circ j)
\end{equation}
with the insertion of
$$
w^*\lambda_E = u^*\theta + \Omega(e,\nabla^E_{du}e).
$$
\end{lem}
Note that with the insertion of \eqref{eq:CR-uReeb}, we obtain
\begin{eqnarray}\label{eq:delbarpiu=}
{\overline \partial}^\pi u
& = & \left(u^*\theta\, d\pi_E((Y_{dg})^h)\right)^{(0,1)}
+ \left(\Omega(e, \nabla^E_{du} e)\, (X_\theta(u(\tau,t))+ d\pi_E(Y_{dg}))^h\right)^{(0,1)}\, \nonumber \\
&{}& + d\pi_E\left((J - J_0) d^{\pi_{\lambda_E}} w\right)^h.
\end{eqnarray}
Now let $w=(u,e)$ be a contact instanton in terms of the decomposition as above.
\begin{lem}\label{lem:J-J0} Let $e$ be an arbitrary section over a
smooth map $u:\Sigma \to Q$. Then
\begin{eqnarray}\label{eq:J-J0}
I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)& = & L_1(u,e)(e,(d^\pi u,\nabla_{du} e))\\
d\pi_E\left(((J - J_0) d^{\pi_{\lambda_E}} w)^h\right) & = & L_2(u,e)(e,(d^\pi u,\nabla_{du} e))
\end{eqnarray}
where $L_1(u,e)$ is a $(u,e)$-dependent bilinear map with values in $\Omega^0(u^*E)$ and
$L_2(u,e)$ is a bilinear map with values in $\Omega^0(u^*TQ)$. They also satisfy
\begin{equation}\label{eq:|J-J0|}
|L_i(u,e)| = O(1).
\end{equation}
\end{lem}
An immediate corollary of this lemma is
\begin{cor}\label{cor:|J-J0|}
\begin{eqnarray*}
|I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)| & \leq & O(1)|e|(|d^\pi u| + |\nabla e|) \\
\left|\nabla\left(I_{w;u}\left(((J - J_0) d^{\pi_{\lambda_E}} w)^v\right)\right)\right| & \leq &
O(1)((|du| + |\nabla e|)^2|e| \\
&{}& \quad + |\nabla e|(|du| + |\nabla e|) + |e||\nabla^2 e|).
\end{eqnarray*}
\end{cor}
Next, we give the following lemmas whose proofs are straightforward from the definition of
$X_g^\Omega$.
\begin{lem}\label{lem:MN} Suppose $d_{C^0}(w(\tau,\cdot),z(\cdot)) \leq \iota_g$. Then
\begin{eqnarray*}
X^{\Omega}_{g}(u,e) &= & D^v X^\Omega_g(u,0) e + M_1(u,e)(e,e) \\
d\pi_E(Y_{dg}) & = & M_2(u,e)(e)\\
\Omega(e,\nabla_{du}^E e)\, X_g^\Omega(u,e) & = & N(u,e)(e,\nabla_{du} e,e)
\end{eqnarray*}
where $M_1(u,e)$ is a smoothly $(u,e)$-dependent bilinear map on $\Omega^0(u^*E)$,
$M_2: \Omega^0(u^*E) \to \Omega^0(u^*TQ)$ is a linear map, and $N(u,e)$ is a
$(u,e)$-dependent trilinear map on $\Omega^0(u^*E)$. They also satisfy
$$
|M_i(u,e)| = O(1), \quad |N(u,e)| = O(1).
$$
\end{lem}
\begin{lem}\label{lem:K}
\begin{eqnarray*}
(X^{d\lambda_E}_g)^h(u,e) = K(u,e)\, e
\end{eqnarray*}
where $K(u,e)$ is a $(u,e)$-dependent linear map
from $\Omega^0(u^*E)$ to $\Omega^0(u^*TQ)$ satisfying
$$
|M_1(u,e)| = O(1), \quad |N(u,e)| = O(1), \quad |K(u,e)| = o(1).
$$
\end{lem}
\medskip
\subsection{$L^2$-exponential decay of the normal bundle component $e$}
\label{sec:expdecayMB}
Combining Lemmas \ref{lem:J-J0}, \ref{lem:MN} and \ref{lem:K}, we can write \eqref{eq:CR-e}
as
\begin{equation}\label{eq:nabladue}
\nabla''_{du}e - \left(u^*\theta \, DX^\Omega_g(u)(e)\right)^{(0,1)} = K(e,\nabla_{du} e, d^\pi u).
\end{equation}
By evaluating \eqref{eq:nabladue} against $\frac{\partial}{\partial \tau}$, we derive
\begin{equation}\label{eq:e-reduction}
\nabla_\tau e + J_E(u) \nabla_t e - \theta\left(\frac{\partial u}{\partial \tau}\right)DX_g^\Omega(u)(e)
- J_E\, \theta\left(\frac{\partial u}{\partial t}\right) DX_g^\Omega(u)(e)
= K\left(e, \nabla_\tau e, \pi_\theta \frac{\partial u}{\partial \tau}\right).
\end{equation}
First notice that
\begin{lem} \label{lem:|K|=e2}
$$
\left|K\left(e, \nabla_\tau e, \frac{\partial u}{\partial \tau}\right)\right|_{L^\infty} = o(|e|).
$$
\end{lem}
\begin{proof}
We consider \eqref{eq:e-reduction}
as an equation for $e$. Clearly this is a quasilinear elliptic equation in $e$ when $u$ is fixed.
We also recall that $K(e,\nabla_{du} e, du)$ has the form
$$
L_1(u,e)\left(e,\left(\nabla_\tau e, \frac{\partial u}{\partial \tau}\right)\right)
$$
where $L_1(u,e)$ is a bilinear map with $|L_1(u,e)| = O(1)$ by \eqref{eq:|J-J0|},
which satisfies the inequality
$$
\left|L_1(u,e)\left(e,\left(\nabla_\tau e, \frac{\partial u}{\partial \tau}\right)\right)\right| \leq O(1)
|e|(|d^\pi u| + |\nabla_\tau e|).
$$
(See Corollary \ref{cor:|J-J0|}.) Now the lemma
immediately follows from the convergence $\left|\pi_\theta \frac{\partial u}{\partial \tau}\right|, \, |\nabla_\tau e| \to 0$
established in Corollary \ref{cor:convergence-ue}.
\end{proof}
Denote by $B(\tau)$ a $\tau$-family of operators
$$
B(\tau): W^{1, 2}(u(\tau, \cdot)^*E)\to L^2(u(\tau, \cdot)^*E)
$$
defined by
\begin{eqnarray*}
B(\tau)e& :=& J_E(u) \nabla_t e - \theta\left(\frac{\partial u}{\partial \tau}\right)DX_g^\Omega(u)(e)
- J_E\, \theta\left(\frac{\partial u}{\partial t}\right) DX_g^\Omega(u)(e)\\
&{}& \quad -K\left(e, \nabla_\tau e, \frac{\partial u}{\partial \tau}\right).
\end{eqnarray*}
Then \eqref{eq:e-reduction} for $e$ with $u$ fixed can be rewritten as
$$
\nabla_\tau e(\tau)+B(\tau)e(\tau)=0.
$$
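For the reader's convenience (this is merely \eqref{eq:e-reduction} spelled out with the definition of $B(\tau)$ inserted), the rewritten equation reads
$$
\nabla_\tau e + J_E(u) \nabla_t e - \theta\Big(\frac{\partial u}{\partial \tau}\Big)DX_g^\Omega(u)(e)
- J_E\, \theta\Big(\frac{\partial u}{\partial t}\Big) DX_g^\Omega(u)(e)
- K\Big(e, \nabla_\tau e, \frac{\partial u}{\partial \tau}\Big) = 0,
$$
which is \eqref{eq:e-reduction} with the $K$-term moved to the left hand side.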
Once we know $u(\tau, \cdot) \to z_\infty$ as $\tau \to \infty$ for some Reeb orbit $z_\infty$,
we can use the exponential map from $z_\infty$ to $u(\tau, \cdot)$ for any sufficiently large $\tau$
and its associated parallel transport to regard $B(\tau)$ as a $\tau$-family of
linear operators
$$
W^{1, 2}(z_\infty^*E)\to L^2(z_\infty^*E)
$$
along the limiting closed Reeb orbit $z_\infty$. (See Section 8 of \cite{oh-wang2} for
a detailed discussion of this process.)
\begin{lem}\label{lem:B}
Let $\tau_k$ be a sequence with $\tau_k \to \infty$, and denote also by $\tau_k$
a subsequence thereof appearing in Theorem \ref{thm:subsequence}.
Under the above mentioned identification, the operator $B(\tau_k)$ converges to the linearized operator
$$
B_\infty = J_E (z_\infty(t))(\nabla_t - {\mathcal T}\, DX_g^\Omega(u_\infty))
$$
as $k \to \infty$.
\end{lem}
\begin{proof} Reorganize \eqref{eq:e-reduction} into
\begin{eqnarray*}
\nabla_{\frac{\partial u}{\partial\tau}}e&+&J_E \left(\nabla_{\frac{\partial u}{\partial t}}e
-\lambda_E(\frac{\partial u}{\partial t})X^\Omega_g(u, e)\right)\\
&-&\lambda_E(\frac{\partial u}{\partial\tau})X^\Omega_g(u, e)
- K\left(e, \nabla_\tau e, \frac{\partial u}{\partial \tau}\right) =0.
\end{eqnarray*}
We first note that $\nabla_{\frac{\partial u}{\partial t}(\tau_k,\cdot)} \to {\mathcal T}\, \nabla_{\dot z_\infty}$
in the operator norm under the above mentioned identification.
(See Proposition 8.2 of \cite{oh-wang1} and its proof for the precise explanation of this
statement.)
We now estimate the two terms in the second line.
For the first term, we have
$$
|\lambda_E(\frac{\partial u}{\partial\tau})X^\Omega_g(u, e)|\leq |\lambda_E(\frac{\partial u}{\partial\tau})|\,|X^\Omega_g(u, e)|
=o(1)|e|,
$$
where the last estimate follows from Corollary \ref{cor:convergence-ue} and Lemma \ref{lem:MN}.
For the second term, Lemma \ref{lem:|K|=e2} implies that it is of order $o(|e|)$.
This now completes the proof.
\end{proof}
Note that so far this convergence can only be expected in the subsequential sense.
Fortunately this weak convergence is already enough to carry out our scheme of the three-interval argument,
when combined with the uniform local a priori estimates from \cite{oh-wang2}.
Next we briefly explain how the current situation fits into the general framework set
up in Section \ref{sec:three-interval}. We refer readers to Section 8 of \cite{oh-wang2} for
further details of this verification.
We first consider the two Banach spaces ${\mathcal E}_{1, \tau} \supset {\mathcal E}_{2, \tau}$
defined by
$$
{\mathcal E}_{1, \tau}:=L^2(\iota_\tau^* u^*E), \quad {\mathcal E}_{2, \tau}:=W^{1, 2}(\iota_\tau^* u^*E),
$$
where the maps
$$
\iota_\tau: S^1\to [0, \infty)\times S^1, \quad t\mapsto (\tau, t)
$$
are the embeddings at $\tau\in [0, \infty)$. This family defines the bundle
${\mathcal E}_i$ over $[0, \infty)$ whose fiber at $\tau$ is given by ${\mathcal E}_{i, \tau}$:
its local triviality can again be proved by the parallel transport over a sufficiently
small interval $(\tau - \epsilon, \tau + \epsilon)$ at each given $\tau \in [0,\infty)$.
We denote the translation map $\sigma_s(\tau)=\tau+s$ by $\sigma_s: [a, b]\to [0, \infty)$ for each $s \in [a,b]$
for any given bounded interval $[a,b] \subset [0,\infty)$.
Using the exponential map over the limiting Reeb orbit $z_\infty$ and the associated
parallel transport, for all sufficiently large $k$ the Banach bundles
$${\mathcal E}_{i;k}:=\sigma_{\tau_k}^*{\mathcal E}_i\to [a,b]$$
admit global trivializations
$$\Phi_{i;k}:{\mathcal E}_{i;k}\to L^2(z_\infty^*E)\times [a,b].$$
The uniform convergence proved in Theorem \ref{thm:subsequence} and Corollary \ref{cor:convergence-ue} ensures
the uniform local tameness of ${\mathcal E}_i$ and also the precompactness and uniform local coerciveness of $B$. In particular, since
$$
\theta\Big(\frac{\partial u}{\partial\tau}\Big)(\tau, \cdot)\to {\mathcal Q}=0, \quad \theta\Big(\frac{\partial u}{\partial t}\Big)(\tau, \cdot)\to {\mathcal T} \quad \text{as } \tau\to \infty
$$
uniformly over $[a, b]$, we also conclude that the operator $B_\infty$ defined by
$$
B_\infty = J_E (z_\infty(t))(\nabla_t - {\mathcal T}\, DX_g^\Omega(u_\infty))
$$
is the limit of $B(\tau)$ for $\tau \in [a,b]$ with respect to the subsequence $\{\tau_k\}$, as shown in Lemma \ref{lem:B}.
Moreover, we also notice that $B_\infty$ is invariant under the $\tau$-translations, which shows that
$B$ is asymptotically cylindrical. Also, it follows from the Morse-Bott condition and Corollary \ref{cor:gap}
that the operator $B_\infty$ is an unbounded self-adjoint operator with trivial kernel.
The $C^1$-bound of $e$ from \eqref{eq:Hausdorff} and \eqref{eq:highere} guarantees
the uniform bound of the $W^{1, 2}(S^1)$-norm of $e(\tau)$.
The above discussion verifies that \eqref{eq:e-reduction} fits into
the general abstract framework of Theorem \ref{thm:three-interval} applied to $\zeta= e$.
Therefore we immediately obtain the following $L^2$-exponential estimate.
\begin{prop} \label{prop:expdecayvertical} There exist a sufficiently large $\tau_0 > 0$
and constants $C_0, \, \delta_0 > 0$ such that
$$
\|e(\tau)\|_{L^2(S^1)} < C_0 e^{-\delta_0 \tau}
$$
for all $\tau \geq \tau_0$.
\end{prop}
\subsection{$L^2$-exponential decay of the tangential component $du$ I}
\label{subsec:exp-horizontal}
We summarize the previous geometric calculations, especially the equation
\eqref{eq:delbarpiu=}, into the following basic equation which we will study
using the three-interval argument in this section.
\begin{lem} We can write the equation \eqref{eq:delbarpiu=} in the form
\begin{equation}\label{eq:pidudtauL}
\pi_\theta \frac{\partial u}{\partial \tau}+J(u)\pi_\theta \frac{\partial u}{\partial t}=L(\tau, t),
\end{equation}
where $\|L(\tau, \cdot)\|_{L^2(S^1)}\leq C e^{-\delta \tau}$.
\end{lem}
\begin{proof} We recall $|du| \leq C$, which follows from Corollary
\ref{cor:convergence-ue}. Furthermore, since $X_{dg}^{d\lambda_E}|_Q \equiv 0$, it follows that
$$
\left|\left(u^*\theta\, d\pi_E((X_{dg}^{d\lambda_E})^h)\right)^{(0,1)}\right| \leq C |e|.
$$
Furthermore, by the adaptedness of $J$ and by the definition of the associated $J_0$,
we also have $(J - J_0)|_{Q} \equiv 0$ and so
$$
\left|d\pi_E\left((J - J_0) d^{\pi_{\lambda_E}} w\right)^h\right| \leq C |e|.
$$
It is manifest that the second term above also carries a similar estimate. Combining them,
we have established that the right hand side is bounded by $C|e|$ from above.
Then the required exponential inequality follows from that
of $e$ established in Proposition \ref{prop:expdecayvertical}.
\end{proof}
In the rest of this section and in Section \ref{subsec:centerofmass}, we give the proof of the following proposition.
\begin{prop}\label{prop:expdecayhorizontal}
There exist constants $C_0>0$ and $\delta_0>0$ such that
$$
\left\|\pi_\theta\frac{\partial u}{\partial \tau}\right\|_{L^2(S^1)}< C_0\, e^{-\delta_0 \tau}.
$$
\end{prop}
The proof basically follows the same three-interval argument as in the proof of Theorem \ref{thm:three-interval}.
However, since the current case is much more subtle, we would like to highlight the following points before we start:
\begin{enumerate}
\item Unlike the normal component $e$, whose governing equation \eqref{eq:e-reduction} is an (inhomogeneous) quasi-linear elliptic
equation, \eqref{eq:pidudtauL} is only (inhomogeneous) quasi-linear \emph{degenerate elliptic}:
the limiting operator $B$ of its linearization contains a non-trivial kernel.
\item The nonlinearity of the
equation makes it somewhat cumbersome to formulate the abstract framework of the three-interval argument as in
Theorem \ref{thm:three-interval}, although we believe it is doable. Since this is not our main interest,
we directly deal with \eqref{eq:pidudtauL} in the present paper, postponing such an abstract
framework elsewhere in the future.
\item For the normal component, we directly establish the exponential estimates of the
map $e$ itself. On the other hand, for the tangential component, partly due to the absence of
a direct linear structure on $u$ and also due to the presence of the nontrivial kernel of the asymptotic
operator, we prove the exponential decay of the \emph{derivative} $\pi_\theta\frac{\partial u}{\partial\tau}$ first and
then prove the exponential convergence to some Reeb orbit afterwards.
\item To obtain the exponential decay of the \emph{derivative} term,
we need to exclude the possibility of a kernel element arising as the limit obtained in the
three-interval argument. In Section \ref{subsec:centerofmass} we use the technique of the center of mass as an intrinsic geometric coordinate system to exclude the possibility of the vanishing of the limit.
This idea appears in \cite{mundet-tian} and \cite{oh-zhu11} too.
\item Unlike \cite{HWZ3} and \cite{bourgeois}, our proof directly
obtains $L^2$-exponential decay instead of first showing $C^0$-convergence and then
obtaining exponential decay afterwards.
\end{enumerate}
\muedskip
Starting from now until the end of Section \ref{subsec:centerofmass}, we give the proof of Proposition \ref{prop:expdecayhorizontal}.

Divide $[0, \infty)$ into the union of the unit intervals $I_k=[k, k+1]$ for
$k=0, 1, \cdots$, and denote $Z_k:=[k, k+1]\times S^1$. In what follows, we also denote by
$Z^l$
the union of the three intervals $Z^l_{I}:=[l,l+1]\times S^1$, $Z^l_{II}:=[l+1,l+2]\times S^1$ and
$Z^l_{III}:=[l+2,l+3]\times S^1$.

Consider $x_k: =\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_k)}$ as the symbols in Lemma \ref{lem:three-interval}.
As in the proof of Theorem \ref{thm:three-interval}, we still use the three-interval inequality as the criterion and consider two situations:
\begin{enumerate}
\item If there exists some constant $\delta>0$ such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k})}\leq \gamma(2\delta)\Big(\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k-1})}
+\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z_{k+1})}\Big)\label{eq:3interval-ineq-II}
\end{equation}
holds for every $k$, then from Lemma \ref{lem:three-interval} we are done with the proof;
\item Otherwise, we collect all the three-intervals $Z^{l_k}$ violating \eqref{eq:3interval-ineq-II}, i.e., such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{II})}>
\gamma(2\delta)\Big(\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{I})}+\|\pi_\theta \frac{\partial u}{\partial \tau}\|^2_{L^2(Z^{l_k}_{III})}\Big).\label{eq:against-3interval-II}
\end{equation}
In the rest of the proof, we deal with this case.
\end{enumerate}
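For the reader's convenience, here is a minimal sketch of how an inequality of the type \eqref{eq:3interval-ineq-II} leads to exponential decay. It is only an illustration, under the assumption that $\gamma(2\delta)$ denotes the constant $(e^{2\delta}+e^{-2\delta})^{-1}$ of Lemma \ref{lem:three-interval}, whose precise statement we do not repeat here.
% Illustration only: assumes \gamma(2\delta)=(e^{2\delta}+e^{-2\delta})^{-1} as in the three-interval lemma.
The comparison sequence $y_k = A e^{-2\delta k} + B e^{2\delta k}$ saturates the inequality, since
$$
y_{k-1}+y_{k+1} = \left(e^{2\delta}+e^{-2\delta}\right)\left(A e^{-2\delta k} + B e^{2\delta k}\right) = \frac{y_k}{\gamma(2\delta)}.
$$
A discrete maximum principle then bounds any nonnegative sequence $(x_k)$ satisfying \eqref{eq:3interval-ineq-II} for $1\leq k\leq N-1$ by the comparison sequence whose constants $A, B$ are chosen to match $x_0$ and $x_N$; letting $N\to\infty$ for a bounded sequence yields $x_k\leq C e^{-2\delta k}$.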
First, if there exists some uniform constant $C_1>0$ such that
on each such three-interval
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|_{L^\infty([l_k+0.5, l_k+2.5]\times S^1)}<C_1e^{-\delta l_k}, \label{eq:expas-zeta-II}
\end{equation}
then through the same estimates and analysis as for Theorem \ref{thm:three-interval},
we obtain the exponential decay of $\|\pi_\theta \frac{\partial u}{\partial \tau}\|$ with the presumed rate $\delta$ as claimed.
\begin{rem}Here we look at the $L^\infty$-norm on the smaller intervals $[l_k+0.5, l_k+2.5]\times S^1$
instead of the whole $Z^{l_k}=[l_k, l_k+3]\times S^1$
out of consideration for the elliptic bootstrapping argument in Lemma \ref{lem:boot}.
However, this change does not affect any argument, since the smaller intervals are already enough to cover the middle intervals (see Figure \ref{fig:three-interval}).
\end{rem}
Following the same scheme as for Theorem \ref{thm:three-interval}, we now deal with the case
in which there is no such uniform bound $C_1$. Then there exist a sequence of constants $C_k\to \infty$
and a subsequence of such three-intervals $\{Z^{l_k}\}$ (still indexed by $l_k$) such that
\begin{equation}
\|\pi_\theta \frac{\partial u}{\partial \tau}\|_{L^\infty([l_k+0.5, l_k+2.5]\times S^1)}\geq C_ke^{-\delta l_k}.\label{eq:decay-fail-zeta-II}
\end{equation}
By Theorem \ref{thm:subsequence}
and the local uniform $C^1$-estimate,
we can take a further subsequence, still denoted by $\{Z^{l_k}\}$, such that $u(Z^{l_k})$ lives in a neighborhood of
some closed Reeb orbit $z_\infty$.

Next, we translate the sequence $u_k:=u|_{Z^{l_k}}$ to $\widetilde{u}_k: [0,3]\times S^1\to Q$
by defining
$$
\widetilde{u}_k(\tau, t)=u_k(\tau+l_k, t).
$$
As before, we also define $\widetilde{L}_k(\tau, t)=L(\tau+l_k, t)$. From \eqref{eq:pidudtauL}, we now have
\begin{equation}\label{eq:pidudtauL-t}
\pi_\theta \frac{\partial \widetilde{u}_k}{\partial \tau}+J(\widetilde{u}_k)\pi_\theta \frac{\partial \widetilde{u}_k}{\partial t}=\widetilde{L}_k(\tau, t).
\end{equation}
Recalling that $Q$ carries a natural $S^1$-action induced from the Reeb flow,
we equip $Q$ with an $S^1$-invariant metric and its associated Levi-Civita connection.
In particular, the vector field $X_\lambda$ restricted to $Q$ is a Killing vector
field of the metric and satisfies $\nabla_{X_\lambda}X_\lambda = 0$.

Now, since the image of $\widetilde{u}_k$ lives in a neighborhood of a fixed Reeb orbit $z$ in $Q$,
we can express
\begin{equation}\label{eq:normalexpN}
\widetilde u_k(\tau,t) = \exp_{z_k(\tau,t)}^Z \zeta_k(\tau,t)
\end{equation}
for the normal exponential map $\exp^Z: NZ\to Q$ of the locus $Z$ of $z$, where
$z_k(\tau,t) = \pi_N(\widetilde u_k(\tau,t))$ is the normal projection of $\widetilde u_k(\tau,t)$
to $Z$ and $\zeta_k(\tau,t) \in N_{z_k(\tau,t)}Z = \zeta_{z_k(\tau,t)} \cap T_{z_k(\tau,t)}Q$.
Then
\begin{lem}\label{lem:pithetadel}
\begin{equation}
\pi_\theta\frac{\partial \widetilde{u}_k}{\partial \tau} =\pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_\tau\zeta_k)
\label{eq:utau-thetatau}
\end{equation}
$$
\pi_\theta\frac{\partial \widetilde{u}_k}{\partial t}=\pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_t\zeta_k).
$$
\end{lem}
\begin{proof} To simplify notation, we omit $k$ here.
For each fixed $(\tau,t)$, we compute
$$
D_1 \exp^Z(z(\tau,t))(X_\lambda(z(\tau,t)))
= \frac{d}{ds}\Big|_{s = 0} \exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
$$
for a curve $\alpha: (-\epsilon,\epsilon) \to Q$ satisfying $\alpha(0) = z(\tau,t), \, \alpha'(0) = X_\lambda(z(\tau,t))$.
For example, we can take $\alpha(s) = \phi_{X_\lambda}^s(z(\tau,t))$.

On the other hand, we compare the initial conditions of the two geodesics
$a \mapsto \exp^Z_{\alpha(s)} a\, \Pi_{x}^{\alpha(s)}(X_\lambda(x))$ and
$a \mapsto \phi_{X_\lambda}^s(\exp^Z_x a\, X_\lambda(x))$ with $x = z(\tau,t)$.
Since $\phi_{X_\lambda}^s$ is an isometry, we derive
$$
\phi_{X_\lambda}^s(\exp^Z_x a\, X_\lambda(x)) = \exp^Z_{\alpha(s)} a\, \big(d\phi_{X_\lambda}^s(X_\lambda(x))\big).
$$
Furthermore, we note that $d\phi_{X_\lambda}^s(X_\lambda(x)) = X_\lambda(x)$ at $s = 0$
and that the field $s \mapsto d\phi_{X_\lambda}^s(X_\lambda(x))$ is parallel along the curve $s \mapsto \phi_{X_\lambda}^s(x)$.
Therefore, by the definition of $\Pi_x^{\alpha(s)}(X_\lambda(z(\tau,t)))$, we derive
$$
\Pi_x^{\alpha(s)}(X_\lambda(x))= d\phi_{X_\lambda}^s(X_\lambda(x)).
$$
Combining this discussion, we obtain
$$
\exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
= \phi_{X_\lambda}^s\big(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t)))\big)
$$
for all $s \in (-\epsilon,\epsilon)$. Therefore we obtain
$$
\frac{d}{ds}\Big|_{s = 0} \exp^Z_{\alpha(s)} \Pi_{z(\tau,t)}^{\alpha(s)}(X_\lambda(z(\tau,t)))
= X_\lambda\big(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t)))\big).
$$
This shows
$
(D_1\exp^Z)(X_\lambda) = X_\lambda\big(\exp^Z_{z(\tau,t)}(X_\lambda(z(\tau,t)))\big)
$.

To see $\pi_\theta (D_1\exp^Z)(\frac{\partial z}{\partial \tau}) = 0$, just note that $\frac{\partial z}{\partial \tau}
= k(\tau,t)\, X_\lambda(z(\tau,t))$ for some function $k$, i.e., it is parallel to $X_\lambda$,
and that $z(\tau,t) \in Z$. Using the definition of $D_1\exp^Z(x)(v)$ for $v \in T_x Q$ at $x \in Q$, we
compute
\begin{eqnarray*}
(D_1\exp^Z)(\frac{\partial z}{\partial \tau})(\tau,t) & = & D_1 \exp^Z(z(\tau,t))( k(\tau,t)\, X_\lambda(z(\tau,t)))\\
& = & k(\tau,t)\, D_1 \exp^Z(z(\tau,t))(X_\lambda(z(\tau,t))),
\end{eqnarray*}
and hence the $\pi_\theta$ projection vanishes.

At last, we write
\begin{eqnarray*}
\pi_\theta \frac{\partial \widetilde{u}}{\partial \tau}= \pi_\theta (d_2\exp^Z)(\nabla^{\pi_{\theta}}_\tau\zeta)
+ \pi_\theta(D_1\exp^Z)(\frac{\partial z}{\partial \tau})
\end{eqnarray*}
and we are done with the first identity claimed.
The second one is proved in exactly the same way.
\end{proof}
Further noting that $\pi_\theta (d_2\exp^Z_{z_k(\tau,t)}): \zeta_{z_k(\tau,t)} \to \zeta_{z_k(\tau,t)}$ is invertible,
and using this lemma and \eqref{eq:pidudtauL-t}, we now obtain the equation for $\zeta_k$
\begin{equation}
\nabla^{\pi_{\theta}}_\tau\zeta_k+ \overline J(\tau,t)\, \nabla^{\pi_{\theta}}_t\zeta_k
= [\pi_\theta (d_2\exp^Z)]^{-1}\widetilde{L}_k,
\label{eq:nablaxi}
\end{equation}
where we set $\overline J(\tau,t) := [\pi_\theta (d_2\exp^Z)]^{-1}J(\widetilde{u}_k)\,[\pi_\theta (d_2\exp^Z)]$.
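Explicitly, substituting the two identities of Lemma \ref{lem:pithetadel} into \eqref{eq:pidudtauL-t} gives
$$
\pi_\theta (d_2\exp^Z)(\nabla^{\pi_\theta}_\tau\zeta_k)
+ J(\widetilde u_k)\,\pi_\theta (d_2\exp^Z)(\nabla^{\pi_\theta}_t\zeta_k) = \widetilde L_k,
$$
and applying $[\pi_\theta (d_2\exp^Z)]^{-1}$ to both sides yields \eqref{eq:nablaxi} with $\overline J(\tau,t)$ as defined above.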
Next, we rescale this equation by the norm $\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}$:
\begin{lem} The norm $\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}$ is not zero.
\end{lem}
\begin{proof} Suppose to the contrary that
${\zeta}_k\equiv 0$. This then implies $\widetilde u_k(\tau,t) \equiv z_k(\tau, t)$ for all $(\tau, t)\in [0,3]\times S^1$.
Therefore $\frac{\partial \widetilde u_k}{\partial \tau}$ is parallel to $X_\theta$ on $[0,3] \times S^1$.
In particular, $\pi_\theta \frac{\partial \widetilde u_k}{\partial \tau} \equiv 0$. This
violates the inequality
$$
\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|^2_{L^2([1,2]\times S^1)}>\gamma(2\delta)\Big(\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|^2_{L^2([0,1]\times S^1)}+\|\pi_\theta\frac{\partial{\widetilde{u}_k}}{\partial\tau}\|^2_{L^2([2,3]\times S^1)}\Big),
$$
which is \eqref{eq:against-3interval-II} after translation. Therefore the lemma holds.
\end{proof}
Now the rescaled sequence
$
\overline \zeta_k:={\zeta}_k/\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}
$
satisfies $\|\overline \zeta_k\|_{{L^\infty([0,3]\times S^1)}} = 1$, and
\begin{eqnarray}\label{eq:nablabarxi}
\nabla^{\pi_{\theta}}_\tau\overline\zeta_k+ \overline J(\tau,t)\, \nabla^{\pi_{\theta}}_t\overline\zeta_k &=& \frac{[\pi_\theta (d_2\exp^Z)]^{-1}\widetilde{L}_k}{\|{\zeta}_k\|_{L^\infty([0,3]\times S^1)}},\\
\|\nabla^{\pi_{\theta}}_\tau\overline\zeta_k\|^2_{L^2([1,2]\times S^1)}&\geq&\gamma(2\delta)\Big(\|\nabla^{\pi_{\theta}}_\tau\overline\zeta_k\|^2_{L^2([0,1]\times S^1)}+\|\nabla^{\pi_{\theta}}_\tau\overline\zeta_k\|^2_{L^2([2,3]\times S^1)}\Big).\nonumber
\end{eqnarray}
The next step is to focus on the right hand side of \eqref{eq:nablabarxi}.
\begin{lem}\label{lem:boot} The right hand side of \eqref{eq:nablabarxi} converges to zero as $k \to \infty$.
\end{lem}
\begin{proof}
Since the left hand side of \eqref{eq:nablaxi} is given by an elliptic (Cauchy-Riemann type) operator,
we have the elliptic estimates
\begin{eqnarray*}
\|\nabla^{\pi_\theta}_\tau\zeta_k\|_{W^{l,2}([0.5, 2.5]\times S^1)}&\leq& C_1(\|\zeta_k\|_{L^2([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^2([0, 3]\times S^1)})\\
&\leq& C_2(\|\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}).
\end{eqnarray*}
The Sobolev embedding theorem further gives
$$
\|\nabla^{\pi_\theta}_\tau\zeta_k\|_{L^\infty([0.5, 2.5]\times S^1)}\leq C(\|\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}).
$$
Hence we have
\begin{eqnarray*}
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{\|\nabla^{\pi_\theta}_\tau\zeta_k\|_{L^\infty([0.5, 2.5]\times S^1)}}
&\geq& \frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{C(\|\zeta_k\|_{L^\infty([0, 3]\times S^1)}+\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)})}\\
&=&\frac{1}{C\Big(\frac{\|\zeta_k\|_{L^\infty([0, 3]\times S^1)}}{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}+1\Big)}.
\end{eqnarray*}
By our standing assumption \eqref{eq:decay-fail-zeta-II}
with $C_k \to \infty$ as $k \to \infty$, together with Lemma \ref{lem:pithetadel} and the exponential decay of $\widetilde{L}_k$,
the left hand side converges to zero as $k\to \infty$, so we get
$$
\frac{\|\widetilde{L}_k\|_{L^\infty([0, 3]\times S^1)}}{\|\zeta_k\|_{L^\infty([0, 3]\times S^1)}}\to 0 \quad \text{as} \quad k\to \infty.
$$
Thus the right hand side of \eqref{eq:nablabarxi} converges to zero.
\end{proof}
Then, by the same argument as in the proof of Theorem \ref{thm:three-interval}, after taking a subsequence,
we obtain a limiting section $\overline\zeta_\infty$ of $z_\infty^*\zeta_\theta$ satisfying
\begin{eqnarray}
\nabla_\tau \overline\zeta_\infty+ B_\infty \overline\zeta_\infty&=&0,\label{eq:xibarinfty-II}\\
\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([1,2]\times S^1)}&\geq&\gamma(2\delta)\Big(\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([0,1]\times S^1)}+\|\nabla_\tau\overline \zeta_\infty\|^2_{L^2([2,3]\times S^1)}\Big).
\label{eq:xibarinfty-ineq-II}
\end{eqnarray}
Here, to make the notation compatible with that used in Theorem \ref{thm:three-interval},
we denote by $B_\infty$ the limit operator of $B:=\overline J(\tau,t)\, \nabla^{\pi_{\theta}}_t$.
When applied to the horizontal part, as in the case currently under study,
this operator is nothing but the linearization operator along the Reeb orbit $z_\infty$ composed with the action of $J$.
Write
$$
\overline\zeta_\infty=\sum_{j=1, \cdots, k}a_j(\tau)e_j+\sum_{i\geq k+1}a_i(\tau)e_i,
$$
where, for $i\geq k+1$, the $e_i$ are the eigenfunctions associated to the eigenvalues
$\lambda_i$ with
$$
0< \lambda_{k+1} \leq \lambda_{k+2} \leq \cdots \leq \lambda_i \leq \cdots \to \infty,
$$
and the $e_j$ for $j=1, \cdots, k$ are eigenfunctions of eigenvalue zero.
By plugging $\overline\zeta_\infty$ into \eqref{eq:xibarinfty-II}, we derive
\begin{eqnarray*}
a_j'(\tau)&=&0, \quad j=1, \cdots, k,\\
a_i'(\tau)+\lambda_ia_i(\tau)&=&0, \quad i=k+1, \cdots,
\end{eqnarray*}
and it follows that
\begin{eqnarray*}
a_j&=&c_j, \quad j=1, \cdots, k,\\
a_i(\tau)&=&c_ie^{-\lambda_i\tau}, \quad i=k+1, \cdots.
\end{eqnarray*}
By the same calculation as in the proof of Theorem \ref{thm:three-interval}, it follows that, provided $\nabla^{\pi_\theta}_\tau\overline \zeta_\infty$ does not vanish identically,
$$
\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([1,2]\times S^1)}< \gamma(2\delta) \Big(\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([0,1]\times S^1)}
+\|\nabla^{\pi_\theta}_\tau\overline \zeta_\infty\|^2_{L^2([2,3]\times S^1)}\Big).
$$
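To see where the strict inequality comes from, here is the single-mode computation. It is only a sketch, under the assumptions that $\gamma(2\delta)=(e^{2\delta}+e^{-2\delta})^{-1}$ as in the three-interval lemma, that $\delta$ has been chosen with $\delta<\lambda_{k+1}$, and that the eigenfunctions $e_i$ are $L^2(S^1)$-orthogonal.
% Sketch only: assumes \gamma(2\delta)=(e^{2\delta}+e^{-2\delta})^{-1}, \delta<\lambda_{k+1}, and L^2(S^1)-orthogonality of the e_i.
For a single mode $a_i(\tau)=c_ie^{-\lambda_i\tau}$ with $c_i\neq 0$, set
$$
x_m:=\int_m^{m+1}|a_i'(\tau)|^2\,d\tau = \lambda_i^2c_i^2\,\frac{1-e^{-2\lambda_i}}{2\lambda_i}\,e^{-2\lambda_i m},
\quad\text{so that}\quad x_{m+1}=e^{-2\lambda_i}x_m,
$$
and hence
$$
\frac{x_1}{x_0+x_2}=\frac{e^{-2\lambda_i}}{1+e^{-4\lambda_i}}=\frac{1}{e^{2\lambda_i}+e^{-2\lambda_i}}<\frac{1}{e^{2\delta}+e^{-2\delta}}=\gamma(2\delta),
$$
since $\lambda_i\geq\lambda_{k+1}>\delta$. The zero modes do not contribute to $\nabla^{\pi_\theta}_\tau\overline\zeta_\infty$, and summing over the modes $i\geq k+1$ preserves the strict inequality as long as at least one $c_i\neq 0$.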
As a conclusion of this section, it remains to show
\begin{lem}\label{lem:nonzero}
$$
\nabla_\tau^{\pi_\theta} \overline \zeta_\infty \neq 0.
$$
\end{lem}
This lemma will then lead to a contradiction and hence finish the proof of Proposition \ref{prop:expdecayhorizontal}.
The proof of this non-vanishing is given in the next subsection via the study of the center of mass.
\subsection{$L^2$-exponential decay of the tangential component $du$ II: study of the center of mass}
\label{subsec:centerofmass}

We equip the submanifold $Q$ with an $S^1$-invariant metric. Then its associated Levi-Civita connection $\nabla$
is also $S^1$-invariant.
Denote by $\exp: U_{o_Q} \subset TQ \to Q \times Q$ the exponential map,
which defines a diffeomorphism between the open neighborhood $U_{o_Q}$ of the zero section $o_Q$ of $TQ$
and some open neighborhood $U_\Delta$ of the diagonal $\Delta \subset Q \times Q$. Denote its inverse by
$$
E: U_\Delta \to U_{o_Q}; \quad E(x,y) = \exp_x^{-1}(y).
$$
We refer readers to \cite{katcher} for a detailed study of the various basic derivative
estimates of this map.

The following lemma is a variation of the well-known center of mass technique from Riemannian geometry,
with the contact structure taken into consideration (by introducing the reparametrization function $h$ in the following statement).
\begin{lem}\label{lem:centerofmass-Reeb}
Let $(Q, \theta=\lambda|_Q)$ be the submanifold foliated by closed Reeb orbits of period $T$.
Then there exists some $\delta > 0$ depending only
on $(Q, \theta)$ such that for any $C^{k+1}$ loop $\gamma: S^1 \to M$ with
$d_{C^{k+1}}(\gamma, \mathfrak{Reeb}(Q, \theta)) < \delta$, there exist a unique point $m(\gamma) \in Q$
and a reparametrization map $h:S^1\to S^1$, which is $C^k$-close to $id_{S^1}$,
such that
\begin{eqnarray}
\int_{S^1} E\big(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t))\big)\,dt&=&0,\label{eq:centerofmass-Reeb1}\\
E\big(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t))\big)&\in& \xi_\theta(m) \quad \text{for all $t\in S^1$}.\label{eq:centerofmass-Reeb2}
\end{eqnarray}
\end{lem}
\begin{proof}
Consider the functional
$$
\Upsilon: C^\infty(S^1, S^1)\times Q\times C^{\infty}(S^1, Q)\to TQ\times \mathcal{R}
$$
defined by
$$
\Upsilon(h, m, \gamma):=\left(\left(m, \int_{S^1} E\big(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t))\big)\,dt \right), \theta\Big(E\big(m, (\phi^{Th(t)}_{X_\theta})^{-1}(\gamma(t))\big)\Big) \right),
$$
where $\mathcal{R}$ denotes the trivial bundle ${\mathbb R}\times Q$ over $Q$.
If $\gamma$ is a Reeb orbit with period $T$, then
$h=id_{S^1}$ and $m(\gamma)=\gamma(0)$ solve the equation
$$
\Upsilon(h, m, \gamma)=(o_Q, o_{\mathcal R}).
$$
From the straightforward calculations
\begin{eqnarray*}
&{}& D_h\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}(\eta) \\
&=&\left(\left(m, \int_{S^1} d_2E\big|_{(\gamma(0), \gamma(0))}(\eta(t)TX_\theta)\,dt \right), \theta\big(d_2E\big|_{(\gamma(0), \gamma(0))}(\eta(t)TX_\theta)\big) \right)\\
&=&\left(\left(m, \Big(T\int_{S^1} \eta(t) \,dt\Big)\cdot X_\theta(\gamma(0)) \right), T\eta(t) \right)
\end{eqnarray*}
and
\begin{eqnarray*}
D_m\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}(v)
&=&\left(\left(v, \int_{S^1} D_1E\big|_{(\gamma(0), \gamma(0))}(v)\,dt \right), \theta\big(D_1E\big|_{(\gamma(0), \gamma(0))}(v)\big) \right)\\
&=&\left(\left(v, v \right), \theta(v) \right),
\end{eqnarray*}
we claim that
$D_{(h, m)}\Upsilon$ is transversal to $o_{TQ}\times o_{\mathcal R}$ at the point $(id_{S^1}, \gamma(0), \gamma)$, where $\gamma$ is a Reeb
orbit of period $T$.
To see this, notice that for any point in the set
$$
\Delta:=\{(aX_\theta+\mu, f)\in TQ\times C^\infty(S^1, \mathbb{R})\,\big|\, a=\int_{S^1}f(t)\,dt\},
$$
one can always find a preimage as follows:
for any given $(aX_\theta+\mu, f)\in TQ\times C^\infty(S^1, \mathbb{R})$ with $a=\int_{S^1}f(t)\,dt$,
the pair
\begin{eqnarray*}
v&=&a\cdot X_\theta+\mu,\\
\eta(t)&=&\frac{1}{T}(f(t)-a)
\end{eqnarray*}
lies in the preimage.
This proves the surjectivity of the partial derivative $D_{(h,m)}\Upsilon\big|_{(id_{S^1}, \gamma(0), \gamma)}$.
Then, applying the implicit function theorem, we have finished the proof.
\end{proof}
Using the center of mass, we can derive the following proposition, which will be used to exclude the possibility of the vanishing of $\nabla_\tau^{\pi_\theta} \overline \zeta_\infty$.
\begin{prop}\label{prop:centerofmass-app} Recall the rescaled sequence $\overline\zeta_k=\frac{\zeta_k}{L_k}$, with $L_k:=\|\zeta_k\|_{L^\infty([0,3]\times S^1)}$,
taken in the proof of Proposition \ref{prop:expdecayhorizontal} above,
and assume that for a subsequence
$$
\frac{\zeta_k}{L_k} \to \overline \zeta_\infty
$$
in $L^2$. Then $\int_{S^1} (d\phi_{X_\theta}^{Tt})^{-1}(\overline \zeta_\infty(\tau,t)) \, dt = 0$,
where $(d\phi_{X_\theta}^{Tt})^{-1}(\overline \zeta_\infty(\tau,t)) \in T_{z(0)}Q$.
\end{prop}
\begin{proof}
By the construction of the center of mass applied to the maps $u_k(\tau, \cdot): S^1\to Q$
for $\tau\in [0,3]$, we have obtained
$$
\int_{S^1}E\big(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}u_k(\tau, t)\big)\,dt=0.
$$
If we write $u_k(\tau, t)=\exp_{z_\infty(\tau, t)}\zeta_k(\tau, t)$, where $z_\infty$ is the limit
of the $z_k$ defined in \eqref{eq:normalexpN}, it follows that
\begin{eqnarray}\label{eq:intEmk}
&{}& \int_{S^1}E\big(m_k(\tau), \exp_{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}d(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t)\big)\,dt \nonumber\\
&= &\int_{S^1}E\big(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\exp_{z_\infty(\tau, t)}\zeta_k(\tau, t)\big)\,dt = 0.
\end{eqnarray}
Recall the following lemma, whose proof is direct and so omitted.
\begin{lem} Let $\Pi_y^x$ be the parallel transport along the short geodesic from
$y$ to $x$. Then there exist some sufficiently small $\delta > 0$, depending only on
the given metric on $Q$, and a constant $C = C(\delta) > 0$ with $C(\delta) \to 1$ as $\delta \to 0$,
such that
$$
|E(x, \exp_y^Z(\cdot)) - \Pi_y^x| \leq C \, d(x, y).
$$
In particular, $|E(x, \exp_y^Z(\cdot))| \leq |\Pi_y^x| + C \, d(x, y)$.
\end{lem}
Applying this lemma to \eqref{eq:intEmk}, we obtain
\begin{eqnarray*}
&{}& \left|\int_{S^1}\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}(d\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t)\,dt\right| \\
& \leq &
\int_{S^1} C \,d\big(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)\big)\left|(d\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}\zeta_k(\tau, t)\right|\,dt.
\end{eqnarray*}
We rescale $\zeta_k$ by $L_k$ and derive that
\begin{eqnarray*}
&&{}\left|\int_{S^1}\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}
\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k}\,dt\right| \\
&\leq & \int_{S^1} C \,d\big(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)\big)
\left|\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k}\right| \,dt.
\end{eqnarray*}
Take $k\to \infty$. Since $m_k(\tau) \to z_{\infty}(0)$ and $h_k\to id_{S^1}$ uniformly,
we get $d\big(m_k(\tau), (\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)\big) \to 0$
uniformly over $(\tau,t) \in [0,3] \times S^1$ as $k \to \infty$. Therefore the right hand
side of this inequality goes to $0$. On the other hand, for the same reason, we obtain
$$
\Pi_{m_k(\tau)}^{(\phi_{X_\theta}^{Th_k(\tau, t)})^{-1}z_\infty(\tau, t)}
\left(d\phi_{X_\theta}^{Th_k(\tau, t)}\right)^{-1}\frac{\zeta_k(\tau, t)}{L_k} \to
(d\phi_{X_\theta}^{Tt})^{-1}\overline \zeta_\infty
$$
uniformly, and hence we obtain
$
\int_{S^1}(d\phi_{X_\theta}^{Tt})^{-1}\overline \zeta_\infty\,dt=0.
$
\end{proof}
Using this proposition, we now prove
$
\nabla_\tau^{\pi_\theta} \overline \zeta_\infty \neq 0,
$
which is the last piece needed to finish the proof of Proposition \ref{prop:expdecayhorizontal}.
\begin{proof}[Proof of Lemma \ref{lem:nonzero}]
Suppose to the contrary that $\nabla_\tau^{\pi_\theta} \overline \zeta_\infty = 0$.
Then we would have $J(z_\infty(t))\nabla_t \overline \zeta_\infty = 0$ from
\eqref{eq:xibarinfty-II} and the remark right after it.

Fix a basis $
\{v_1, \cdots, v_{2k}\}
$ of $\xi_{z(0)} \subset T_{z(0)}Q$
and define
$
e_i(t) = d\phi_{X_\theta}^{Tt}(v_i)$ for $i = 1, \cdots, 2k$.
Since $\nabla_t^{\pi_\theta}$ is nothing but the linearization of the Reeb orbit $z_\infty$, the Morse-Bott
condition implies
$
\ker B_\infty = \operatorname{span} \{e_i(t)\}_{i=1}^{2k}.
$
Then one can express
$
\overline \zeta_\infty(t) = \sum_{i=1}^{2k} a_i(t) e_i(t)$.
Moreover, since the $e_i$ are parallel (with respect to the $S^1$-invariant connection) by construction, it follows that the $a_i$ are constants, $i = 1, \cdots, 2k$.
Then we can write
$
\overline \zeta_\infty(t) = d\phi_{X_\theta}^{Tt}(v)$, where $v= \sum_{i=1}^{2k} a_i v_i$,
and further it follows that $\int_{S^1}(d\phi_{X_\theta}^{Tt})^{-1}(\overline \zeta_\infty(t))\, dt = v$.
On the other hand, from Proposition \ref{prop:centerofmass-app}, $v=0$ and hence
$\overline \zeta_\infty \equiv 0$, which contradicts $\|\overline \zeta_\infty\|_{L^\infty([0,3]\times S^1)} = 1$.
Thus we finally conclude that $\nabla_\tau^{\pi_\theta}\overline \zeta_\infty$ cannot be zero.
\end{proof}
This now concludes the proof of Proposition \ref{prop:expdecayhorizontal}.

\subsection{$L^2$-exponential decay of the Reeb component of $dw$}
\label{subsec:Reeb}

We again consider the equation
\begin{equation}\label{eq:nabladue2}
\nabla''_{du}e - \left(u^* \, DX^{\Omega}_g(u)(e)\right)^{(0,1)} = K(e,\nabla_{du} e, du)
\end{equation}
as an equation for $e$. Clearly this is a quasi-linear elliptic equation for $e$ when $u$ is fixed.
Applying the uniform (local) elliptic estimates
to \eqref{eq:nabladue2}, the $L^2$-exponential decay of $e$
and the convergence of $\frac{\partial u}{\partial \tau}$ to $0$ then lead to the $L^2$-exponential decay of $\nabla_{du} e$.
Combining these, we have obtained the $L^2$-exponential estimates of $\pi\, dw$.

Now we consider the original map $w=(u, e)$, which satisfies \eqref{eq:contact-instanton-E}
$$
\bar\partial_J^{\lambda} w = 0, \quad d(w^*\lambda\circ j) = 0,
$$
where $\lambda = f\lambda_E$.
We recall that this is an elliptic system and that the corresponding
uniform local a priori estimates were established in \cite{oh-wang2}.
Then, by the elliptic bootstrapping argument using the local uniform a priori estimates
on the cylindrical region, we obtain higher order $W^{k,2}$-exponential decay of
$\pi\, dw$ for all $k\geq 0$ under Hypothesis \ref{hypo:exact}.
Next,
in the rest of this subsection, we prove the exponential decay of the Reeb component $w^*\lambda$.
For this purpose, we define a complex-valued function
$$
\alpha(\tau,t) = \left(w^*\lambda(\frac{\partial}{\partial t})-T\right) + \sqrt{-1}\left(w^*\lambda(\frac{\partial}{\partial\tau})\right).
$$
The following lemma is easy to prove.
\begin{lem} Let $\zeta = \pi \frac{\partial w}{\partial \tau}$ on the cylindrical ends. Then
$$
*d(w^*\lambda) =|\zeta|^2.
$$
\end{lem}
Combining this lemma with the equation $d(w^*\lambda\circ j)=0$, we
see that $\alpha$ satisfies the equation
\begin{equation}\label{eq:atatau-equation}
\bar\partial \alpha =\nu, \quad \nu
=\frac{1}{2}|\zeta|^2 + \sqrt{-1}\cdot 0,
\end{equation}
where $\bar\partial=\frac{1}{2}\left(\frac{\partial}{\partial \tau}+\sqrt{-1}\frac{\partial}{\partial t}\right)$ is the standard Cauchy-Riemann operator for the standard complex structure $J_0=\sqrt{-1}$.
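For the reader's convenience, we include the short verification of \eqref{eq:atatau-equation}; it uses only the lemma above, the equation $d(w^*\lambda\circ j)=0$, and the convention $j\frac{\partial}{\partial\tau}=\frac{\partial}{\partial t}$, so that $(w^*\lambda\circ j)(\frac{\partial}{\partial\tau})=w^*\lambda(\frac{\partial}{\partial t})$ and $(w^*\lambda\circ j)(\frac{\partial}{\partial t})=-w^*\lambda(\frac{\partial}{\partial\tau})$.
% Verification of eq:atatau-equation; assumes the convention j(d/d tau) = d/dt.
We compute
\begin{eqnarray*}
2\bar\partial \alpha &=& \Big(\frac{\partial}{\partial\tau}\big(w^*\lambda(\tfrac{\partial}{\partial t})\big)
- \frac{\partial}{\partial t}\big(w^*\lambda(\tfrac{\partial}{\partial \tau})\big)\Big)
+ \sqrt{-1}\,\Big(\frac{\partial}{\partial\tau}\big(w^*\lambda(\tfrac{\partial}{\partial \tau})\big)
+ \frac{\partial}{\partial t}\big(w^*\lambda(\tfrac{\partial}{\partial t})\big)\Big)\\
&=& d(w^*\lambda)\Big(\frac{\partial}{\partial\tau},\frac{\partial}{\partial t}\Big)
- \sqrt{-1}\; d(w^*\lambda\circ j)\Big(\frac{\partial}{\partial\tau},\frac{\partial}{\partial t}\Big)
\;=\; |\zeta|^2 + \sqrt{-1}\cdot 0,
\end{eqnarray*}
where we used $*1 = d\tau\wedge dt$ in the identity $*d(w^*\lambda)=|\zeta|^2$, and the constant $T$ drops out upon differentiation.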
Notice that from the previous subsection we have already
established the $W^{1, 2}$-exponential decay of $\nu = \frac{1}{2}|\zeta|^2$.
The exponential decay of $\alpha$ then follows from the following lemma,
which can again be proved by the three-interval method, in a much easier way,
and so its proof is omitted.
\begin{lem}\label{lem:exp-decay-lemma}
Suppose the complex-valued functions $\alpha$ and $\nu$ defined on $[0, \infty)\times S^1$
satisfy
$$
\begin{cases}
\bar\partial \alpha = \nu, \\
\|\nu\|_{L^2(S^1)}+\left\|\nabla\nu\right\|_{L^2(S^1)}\leq Ce^{-\delta \tau} \quad \text{ for some constants } C, \delta>0,\\
\lim_{\tau\rightarrow +\infty}\alpha =0.
\end{cases}
$$
Then $\|\alpha\|_{L^2(S^1)}\leq \overline{C}e^{-\delta \tau}$
for some constant $\overline{C}$.
\end{lem}
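Although we omit the proof, the role of the hypotheses can be illustrated on the $S^1$-average of $\alpha$; this is only a partial sketch (the oscillatory Fourier modes, for which the bound on $\|\nabla\nu\|_{L^2(S^1)}$ and the three-interval method enter, are not treated here).
% Partial sketch only: treats the S^1-average (zero Fourier mode) of alpha.
Set $\alpha_0(\tau):=\int_{S^1}\alpha(\tau,t)\,dt$ and $\nu_0(\tau):=\int_{S^1}\nu(\tau,t)\,dt$. Integrating $\bar\partial\alpha=\nu$ over $S^1$ and using the periodicity in $t$, we get $\frac{1}{2}\alpha_0'(\tau)=\nu_0(\tau)$. Since $\alpha(\tau,\cdot)\to 0$ implies $\alpha_0(\tau)\to 0$, we may write
$$
\alpha_0(\tau) = -2\int_\tau^{\infty}\nu_0(s)\,ds, \qquad
|\alpha_0(\tau)| \leq 2\int_\tau^\infty \|\nu(s,\cdot)\|_{L^2(S^1)}\,ds \leq \frac{2C}{\delta}\,e^{-\delta\tau},
$$
which is the claimed rate of decay for this mode.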
\subsection{$C^0$ exponential convergence}

Now we prove the $C^0$-exponential convergence of $w(\tau,\cdot)$ to some Reeb orbit as $\tau \to \infty$
from the $L^2$-exponential estimates established in the previous subsections.
\begin{prop}\label{prop:czero-convergence}
Under Hypothesis \ref{hypo:basic}, for any contact instanton $w$ with vanishing charge,
there exists a unique Reeb orbit $z(\cdot)=\gamma(T\cdot):S^1\to M$ with period $T>0$ such that
$$
\|d(w(\tau, \cdot), z(\cdot))
\|_{C^0(S^1)}\rightarrow 0
$$
as $\tau\rightarrow +\infty$,
where $d$ denotes the distance on $M$ defined by the triad metric.
\end{prop}
\begin{proof}
We start with the following lemma.
\begin{lem} Let $t \in S^1$ be given. Then
for any given $\epsilon > 0$, there exists a sufficiently large $\tau_1 > 0$ such that
$$
d(w(\tau,t), w(\tau', t)) < \epsilon
$$
for all $\tau, \, \tau' \geq \tau_1$.
\end{lem}
\begin{proof}
Suppose to the contrary that
there exist some $t_0\in S^1$, some constant $\epsilon>0$ and sequences $\tau_k \to \infty$, $p_k>0$ such that
\begin{equation}\label{eq:geqepsilon}
d(w(\tau_{k+p_k}, t_0), w(\tau_k, t_0))\geq\epsilon.
\end{equation}
Then, combining this with the continuity of $w$ in $t$, there exists some small $l>0$ such that
$$
d(w(\tau_{k+p_k}, t), w(\tau_k, t))\geq\frac{\epsilon}{2}, \quad |t-t_0|\leq l.
$$
Therefore
\begin{eqnarray*}
&{}&\int_{S^1}d(w(\tau_{k+p_k}, t), w(\tau_k, t))\ dt\\
&=&\int_{|t-t_0|\leq l}d(w(\tau_{k+p_k}, t), w(\tau_k, t))\ dt+\int_{|t-t_0|>l}d(w(\tau_{k+p_k}, t), w(\tau_k, t))\ dt\\
&\geq&\int_{|t-t_0|\leq l}d(w(\tau_{k+p_k}, t), w(\tau_k, t))\ dt \geq \epsilon\, l.
\end{eqnarray*}
On the other hand, we compute
\begin{eqnarray*}
&{}&\int_{S^1}d(w(\tau_{k+p_k}, t), w(\tau_k, t))\ dt\\
&\leq&\int_{S^1}\int_{\tau_k}^{\tau_{k+p_k}}\left|\frac{\partial w}{\partial s}(s, t)\right|\ ds\ dt
= \int_{\tau_k}^{\tau_{k+p_k}}\int_{S^1}\left|\frac{\partial w}{\partial s}(s, t)\right|\ dt\ ds\\
&\leq&\int_{\tau_k}^{\tau_{k+p_k}}\left(\int_{S^1}\left|\frac{\partial w}{\partial s}(s, t)\right|^2\,
dt\right)^{\frac{1}{2}}\ ds\\
&\leq&\int_{\tau_k}^{\tau_{k+p_k}}Ce^{-\delta s}\ ds
= \frac{C}{\delta}(1-e^{-\delta(\tau_{k+p_k}-\tau_k)})e^{-\delta\tau_k} \leq \frac{C}{\delta}e^{-\delta\tau_k}.
\end{eqnarray*}
For $\tau_k$ sufficiently large, this inequality gives rise to a contradiction to \eqref{eq:geqepsilon}.
Hence the proof.
\end{proof}
Now, using the subsequence convergence from Theorem \ref{thm:subsequence},
we can pick a subsequence $\{\tau_k\}$ and a closed Reeb orbit $(\gamma, T)$ such that
$$
w(\tau_k, t)\to z(t) : = \gamma(Tt), \quad k\to \infty,
$$
uniformly in $t$. Then the above lemma immediately implies that
$
w(\tau, t)
$
converges to $z(t)$ for each $t\in S^1$.

It remains to show that this convergence is uniform in $t$. Suppose to the contrary that
there exist some $\epsilon>0$ and some sequence $(\tau_k, t_k)$ such that
$$
d(w(\tau_k, t_k), z(t_k))\geq2\epsilon.
$$
Since $t_k\in S^1$, we can further take a subsequence, still denoted by $t_k$, such that $t_k\to t_0\in S^1$.
We can take $k$ so large that $d(z(t_k), z(t_0))\leq \frac{1}{2}\epsilon$. We also note
\begin{eqnarray*}
d(w(\tau, t_k), w(\tau, t_0)) \leq \int_{t_0}^{t_k}\left|\frac{\partial w}{\partial t}(\tau, s)\right|\, ds
\leq(t_k-t_0)\|d w\|_{C^0},
\end{eqnarray*}
by which we can also make this distance less than $\frac{1}{2}\epsilon$ by taking $k$ sufficiently large.
Combining these, we derive
\begin{eqnarray*}
d(w(\tau_k, t_0), z(t_0))&\geq&
d(w(\tau_k, t_k), z(t_k))-d(w(\tau_k, t_k), w(\tau_k, t_0))\\
&{}&-d(z(t_k), z(t_0))\\
&\geq&2\epsilon-\frac{1}{2}\epsilon-\frac{1}{2}\epsilon = \epsilon
\end{eqnarray*}
for all sufficiently large $k$.
This gives rise to a contradiction to the pointwise convergence $w(\tau_k,t_0) \to z(t_0)$, which
finishes the proof of the uniform convergence in $t \in S^1$ and hence completes the proof.
\end{proof}
Then the following $C^0$-exponential convergence immediately follows.
\begin{prop}
There exist some constants $C>0$, $\delta>0$ and $\tau_0>0$ large such that for any $\tau>\tau_0$,
\begin{eqnarray*}
\|d\left( w(\tau, \cdot), z(\cdot) \right) \|_{C^0(S^1)} &\leq& C\, e^{-\delta \tau}.
\end{eqnarray*}
\end{prop}
\begin{proof}
For any $\tau<\tau_+$, similarly as in the previous proof,
\begin{eqnarray*}
d(w(\tau, t), w(\tau_+, t))\leq \int^{\tau_+}_{\tau}\left| \frac{\partial w}{\partial \tau}(s,t) \right|\,ds
\leq \frac{C}{\delta}e^{-\delta \tau}.
\end{eqnarray*}
Taking $\tau_+\rightarrow +\infty$ and using the $C^0$ convergence of $w$, i.e.,
Proposition \ref{prop:czero-convergence}, we get
$$d(w(\tau, t), z(t))\leq\frac{C}{\delta}e^{-\delta \tau}.$$
This proves the inequality.
\end{proof}
\subsection{$C^\infty$-exponential decay of $dw - X_\lambda(w) \, d\tau$}
\label{subsec:Cinftydecaydu}

We recall the coordinate expression of $w = (u,e)$ under the
identification of a tubular neighborhood of $Q$ with
a neighborhood of the zero section of
the normal bundle of $Q$. So far, we have established the following:
\begin{itemize}
\item the $W^{1,2}$-exponential decay of the normal component $e$,
\item the $L^2$-exponential decay of the derivative $du$ of the base component $u$,
\item the $C^0$-exponential convergence of $w(\tau,\cdot) \to z(\cdot)$ as $\tau \to \infty$
for some closed Reeb orbit $z$.
\end{itemize}
Combining the above three, we have obtained the $L^2$-exponential estimates of the full derivative $dw$.

Now we are ready to complete the proof of the
$C^\infty$-exponential convergence $w(\tau,\cdot) \to z$
by establishing the $C^\infty$-exponential decay of $dw - X_\lambda(w)\, dt$.
The proof of the latter decay will be
carried out by the bootstrapping arguments
applied to the system \eqref{eq:contact-instanton-E}.

As already used in Section \ref{subsec:Reeb}, we consider the equation
$$
\bar\partial_J^{\lambda} w = 0, \quad d(w^*\lambda \circ j) = 0,
$$
where $\lambda = f \lambda_E$, under Hypothesis \ref{hypo:exact}.
By the bootstrapping argument using the local uniform a priori estimates
on the cylindrical region (see \cite{oh-wang2} for the details),
we obtain higher order $W^{k,2}$-exponential decay of the terms
$$
\frac{\partial w}{\partial t} - T X_\lambda(z), \quad \frac{\partial w}{\partial\tau}
$$
for all $k\geq 0$, where $w(\tau,\cdot)$ converges to $z$ as $\tau \to \infty$ in the $C^0$ sense.
This, combined with the Sobolev embedding, completes the proof of the $C^\infty$-convergence of
$w(\tau,\cdot) \to z$ as $\tau \to \infty$.
\section{Exponential decay: the general Morse-Bott case}
\label{sec:general}

In this section, we consider the general case of a Morse-Bott submanifold.
For this purpose, it is enough to consider the normalized contact triad
$(F, f\lambda_F, J)$, where $J$ is adapted to the zero section $Q$.
Write $w=(u, s)=(u, \mu, e)$, where $\mu\in u^*JT\mathcal{N}$ and $e\in u^*E$.
By the calculations in Section \ref{sec:coord},
and with calculations similar to those of Section \ref{sec:prequantization},
the $e$ part can be dealt with exactly as in the prequantization case,
whose details are skipped here.

After the $e$-part is taken care of, for the $(u, \mu)$ part we derive
$$
\left(\begin{matrix} \pi_\theta\frac{\partial u}{\partial\tau} \\
\nabla_\tau \mu \end{matrix}\right)
+J\left(\begin{matrix} \pi_\theta\frac{\partial u}{\partial t} \\
\nabla_t \mu \end{matrix}\right)=L,
$$
where $|L|\leq C e^{-\delta \tau}$, similarly as in the prequantization case.
Then we apply the three-interval argument,
whose details are similar to the prequantization case and so are omitted.
We only need to establish, as in the prequantization case,
that the limit $(\overline{\zeta}_\infty, \overline{\mu}_\infty)$ is not in the kernel of $B_\infty$.
If $(\overline{\zeta}_\infty, \overline{\mu}_\infty)$ is in the kernel of $B_\infty$, then by the Morse-Bott
condition we have $\overline{\mu}_\infty=0$. With the same procedure for introducing the
center of mass, we can use the same argument to prove that $\overline{\zeta}_\infty$ must vanish
if it is contained in the kernel of $B_\infty$.
This will then prove the following proposition.
\begin{prop} \label{prop:expdecaygeneral}
For any $k=0, 1, \cdots$, there exist constants $C_k>0$ and $\delta_k>0$ such that
\begin{eqnarray*}
\left|\nabla^k \left(\pi\frac{\partial u}{\partial \tau}\right)\right|<C_k\, e^{-\delta_k \tau}, \quad
\left|\nabla^k \mu\right|<C_k\, e^{-\delta_k \tau}.
\end{eqnarray*}
\end{prop}
\section{The case of asymptotically cylindrical symplectic manifolds}
\label{sec:asymp-cylinder}

In this section, we explain how to apply the three-interval method and our tensorial scheme
to non-compact symplectic manifolds with \emph{asymptotically cylindrical ends}.
Here we use Bao's precise definition \cite{bao} of asymptotically cylindrical ends, but
restricted to the case where the asymptotic manifold is a contact manifold $(V,\xi)$.
In this section, we denote the contact manifold by $V$, instead of the $M$ used
in the previous sections, to make the comparison of our definition with Bao's transparent.

Let $(V,\xi)$ be a closed contact manifold of dimension $2n+1$ and let $J$ be an almost complex
structure on $W = [0,\infty) \times V$. We denote
\begin{equation}\label{eq:bfR}
{\bf R}: = J\frac{\partial}{\partial r},
\end{equation}
a smooth vector field on $W$, and let $\xi \subset TW$ be the subbundle defined by
\begin{equation}\label{eq:xi}
\xi_{(r,v)} = J T_{(r,v)}(\{r\} \times V) \cap T_{(r,v)}(\{r\} \times V).
\end{equation}
Then we have the splitting
\begin{equation}\label{eq:splitting-asymp}
TW = {\mathbb R}\Big\{\frac{\partial}{\partial r}\Big\} \oplus {\mathbb R}\{{\bf R}\} \oplus \xi_{(r,v)},
\end{equation}
and we denote by $i: {\mathbb R}\{\frac{\partial}{\partial r}\} \oplus {\mathbb R}\{{\bf R}\} \to {\mathbb R}\{\frac{\partial}{\partial r}\} \oplus {\mathbb R}\{{\bf R}\}$
the almost complex structure defined by
$$
i \frac{\partial}{\partial r} = {\bf R}, \quad i{\bf R} = - \frac{\partial}{\partial r}.
$$
We denote by $\lambda$ and $\sigma$ the $1$-forms dual to ${\bf R}$ and
$\frac{\partial}{\partial r}$, respectively, with $\lambda|_\xi = 0 = \sigma|_\xi$. In particular,
$$
\lambda({\bf R}) = 1 = \sigma\Big(\frac{\partial}{\partial r}\Big), \quad
\lambda\Big(\frac{\partial}{\partial r}\Big) = 0 = \sigma({\bf R}).
$$
We denote by $T_s: [0,\infty) \tauimes V \tauo [-s,\infty) \tauimes V$ the translation $T_s(r,v) = (r+s,v)$
and call a tensor on $W$ is translational invariant if it is invariant under the translation.
The following definition is the special case of the one in \chiite{bao}
restricted to the contact type asymptotical boundary.
\begin{defn}[Asymptotically Cylindrical $(W,\omega, J)$ \cite{bao}]
The almost complex structure $J$ is called $C^\ell$-\emph{asymptotically cylindrical} if there exists a
2-form $\omega$ on $W$ such that the pair $(J,\omega)$ satisfies the following:
\begin{itemize}
\item[{(AC1)}] $\frac{\partial}{\partial r} \rfloor \omega = 0 = {\bf R} \rfloor \omega$,
\item[{(AC2)}] $\omega|_\xi(v, J\, v) \geq 0$ and equality holds iff $v = 0$,
\item[{(AC3)}] There exist a smooth translationally invariant almost complex structure
$J_\infty$ on ${\mathbb R} \times V$ and constants $R_\ell > 0$ and $C_\ell, \, \delta_\ell > 0$ such that
$$
\|(J - J_\infty)|_{[r,\infty) \times V}\|_{C^\ell} \leq C_\ell e^{-\delta_\ell r}
$$
for all $r \geq R_\ell$. Here the norm is computed in terms of the translationally invariant
metric $g_\infty$ and a translationally invariant connection.
\item[{(AC4)}] There exists a smooth translationally invariant closed 2-form $\omega_\infty$ on
${\mathbb R} \times V$ such that
$$
\|(\omega - \omega_\infty)|_{[r,\infty) \times V}\|_{C^\ell} \leq C_\ell e^{-\delta_\ell r}
$$
for all $r \geq R_\ell$.
\item[{(AC5)}] $(J_\infty,\omega_\infty)$ satisfies (AC1) and (AC2).
\item[{(AC6)}] ${\bf R}_\infty\rfloor d\lambda_\infty = 0$, where ${\bf R}_\infty: = \lim_{s \to \infty}T_s^*{\bf R}$ and
$\lambda_\infty : = \lim_{s \to \infty} T_s^*\lambda$; both limits exist on ${\mathbb R} \times V$ by (AC3).
\item[{(AC7)}] ${\bf R}_\infty(r,v) = J_\infty\left(\frac{\partial}{\partial r}\right) \in T_{(r,v)}(\{r\} \times V)$.
\end{itemize}
\end{defn}
For the purpose of the current paper, we restrict ourselves to the case where $\lambda_\infty$ is
a contact form of a contact manifold $(V,\xi)$ and ${\bf R}$ is the translationally
invariant vector field induced by the Reeb vector field on $V$
associated to the contact form $\lambda_\infty$ of $(V,\xi)$. More precisely, we have
$$
{\bf R}(r,v) = (0, X_{\lambda_\infty}(v))
$$
with respect to the canonical splitting $T_{(r,v)}W = {\mathbb R} \oplus T_vV$.
Furthermore we also assume that $(V,\lambda_\infty, J_\infty)$ is a contact triad.
Now suppose that $Q \subset V$ is a Morse-Bott submanifold of closed Reeb orbits of $\lambda_\infty$
and that $\widetilde u:[0,\infty) \times S^1 \to W$ is a $\widetilde J$-holomorphic
curve for which the Subsequence Theorem given in Section 3.2 of \cite{bao} holds.
We also assume that $J_\infty$ is adapted to $Q$ in the sense of Definition \ref{defn:adapted}.
Let $\tau_k \to \infty$ be a sequence such that $a(\tau_k,t) \to \infty$ and
$w(\tau_k,t) \to z$ uniformly as $k \to \infty$, where $z$ is a closed Reeb orbit
whose image is contained in $Q$. By the local uniform elliptic estimates, we may
assume that the same uniform convergence holds on the intervals
$$
[\tau_k, \tau_k+3] \times S^1
$$
as $k \to \infty$. On these intervals, we can write the equation ${\bar\partial}_{\widetilde J} \widetilde u = 0$ as
$$
{\bar\partial}_{J_\infty} \widetilde u\left(\frac{\partial}{\partial \tau}\right)
= (\widetilde J - J_\infty) \frac{\partial \widetilde u}{\partial t}.
$$
We write the endomorphism $(\widetilde J - J_\infty)(r,\Theta) =: M(r,\Theta)$, where
$(r,\Theta) \in {\mathbb R} \times V$, so that
\begin{equation}\label{eq:expdkM}
|\nabla^k M(r,\Theta)| \leq C_k e^{-\delta r}
\end{equation}
for all $r \geq R_0$. Therefore $u = (a,w)$ with $a = r\circ \widetilde u$, $w = \Theta \circ \widetilde u$
satisfies
$$
{\bar\partial}_{J_\infty} \widetilde u\left(\frac{\partial}{\partial \tau}\right) = M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right).
$$
Decomposing ${\bar\partial}_{J_\infty} \widetilde u$ and $\frac{\partial \widetilde u}{\partial t}$ with respect to the decomposition
$$
TW = {\mathbb R} \oplus TV = {\mathbb R}\cdot\frac{\partial}{\partial r} \oplus {\mathbb R} \cdot X_{\lambda_\infty} \oplus \xi,
$$
we derive
\begin{eqnarray}
{\bar\partial}^{\pi_\xi} w\left(\frac{\partial}{\partial \tau}\right) & = & \pi_\xi
\left(M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right)\right), \label{eq:delbarpiw1}\\
(dw^*\circ j - da)\left(\frac{\partial}{\partial \tau}\right) & = & \pi_{{\mathbb C}} \left(M(a,w)\left(\frac{\partial \widetilde u}{\partial t}\right)\right),\label{eq:delbarpiw2}
\end{eqnarray}
where $\pi_\xi$ is the projection to $\xi$ with respect to the contact form $\lambda_\infty$ and
$\pi_{\mathbb C}$ is the projection to ${\mathbb R}\cdot\frac{\partial}{\partial r} \oplus {\mathbb R} \cdot X_{\lambda_\infty}$
with respect to the cylindrical structure $(W,\omega_\infty,J_\infty)$.
Then we obtain from \eqref{eq:expdkM}
$$
|{\bar\partial}^{\pi_\xi} w| \leq Ce^{-\delta a}
$$
as $a \to \infty$. By the subsequence convergence theorem assumption and the local a priori estimates on $\widetilde u$,
we immediately obtain
$$
|\nabla''_\tau e(\tau,t)| \leq C\, e^{-\delta_1 \tau},\,
|\nabla''_\tau \xi_{{\mathcal F}}(\tau,t)| \leq C\, e^{-\delta_1 \tau},\,
|\nabla''_\tau \xi_G(\tau,t)| \leq C\, e^{-\delta_1 \tau},
$$
where $w = \exp_Z(\xi_G + \xi_{{\mathcal F}} + e)$ is the decomposition introduced before.
We can now apply exactly the same proof as the one given in the previous section to establish
the exponential decay property of $dw$.
For the component $a$, we can use \eqref{eq:delbarpiw2} and the argument used in
\cite{oh-wang2} to obtain the required exponential decay as before.
\smallskip
\appendix
\section{Proof of Proposition \ref{prop:adapted}}
\label{sec:appendix-adapted}
In this appendix, we prove the contractibility of the set of $Q$-adapted $CR$-almost complex structures
postponed from the proof of Proposition \ref{prop:adapted}.
We first notice that for any $d\lambda$-compatible $CR$-almost complex structure $J$,
$(TQ\cap JTQ)\cap T{\mathcal F}=\{0\}$: indeed, for any $v\in (TQ\cap JTQ)\cap T{\mathcal F}$,
\begin{eqnarray*}
|v|^2=d\lambda(v, Jv)=0,
\end{eqnarray*}
since $Jv\in TQ$ and $v\in T{\mathcal F}=\ker \omega_Q$.
Therefore $(TQ\cap JTQ)$ and $T{\mathcal F}$ are linearly independent.
We now give the following lemma.
\begin{lem}\label{lem:J-identify-G} $J$ satisfies
the condition $JTQ\subset TQ+JT{\mathcal N}$ if and only if it satisfies
$TQ =(TQ\cap JTQ)\oplus T{\mathcal F}$.
\end{lem}
\begin{proof}
It is obvious that $TQ=(TQ\cap JTQ)\oplus T{\mathcal F}$ implies that $J$ is $Q$-adapted.
It remains to prove the other direction.
For this, we only need to prove that $TQ \subset (TQ\cap JTQ) + T{\mathcal F}$, by the discussion
right before the statement of the lemma.
Let $v \in TQ$. By the definition of the adapted condition,
$Jv\in TQ + JT{\mathcal N}$. Therefore we can write
$$
Jv=w+Ju
$$
for some $w\in TQ$ and $u\in T{\mathcal N}$.
It then follows that $v=-Jw+u$.
Noting that $Jw\in TQ\cap JTQ$, we derive $v \in (TQ\cap JTQ) + T{\mathcal F}$, and so
we have finished the proof.
\end{proof}
This lemma shows that any $Q$-adapted $J$ naturally defines a splitting
\begin{equation}\label{eq:split1}
T{\mathcal F} \oplus G_J = TQ, \quad G_J:= TQ \cap JTQ.
\end{equation}
We also note that such $J$ preserves the subbundle $TQ + JT{\mathcal F} \subset TM$ and so defines an
invariant splitting
\begin{equation}\label{eq:split2}
TM = TQ \oplus JT{\mathcal F} \oplus E_J; \quad E_J = (TQ \oplus JT{\mathcal F})^{\perp_{g_J}}.
\end{equation}
Conversely, for given splittings \eqref{eq:split1}, \eqref{eq:split2},
we can always choose a $Q$-adapted $J$ so that $TQ \cap JTQ = G$,
but the choice of such $J$ is not unique.
It is easy to see that the set of such splittings forms a contractible manifold
(see Lemma 4.1 of \cite{oh-park} for a proof). We also note that the 2-form
$d\lambda$ induces nondegenerate (fiberwise) bilinear 2-forms on $G$ and $E$, which we denote by $\omega_G$ and
$\omega_E$. Now we denote by ${\mathcal J}_{G,E}(\lambda;Q)$ the subset of ${\mathcal J}(\lambda;Q)$ consisting of $J \in {\mathcal J}(\lambda;Q)$
that satisfy \eqref{eq:split1}, \eqref{eq:split2}. Then ${\mathcal J}(\lambda;Q)$ forms a fibration
$$
{\mathcal J}(\lambda;Q) = \bigcup_{G,E}{\mathcal J}_{G,E}(\lambda;Q).
$$
Therefore it is enough to prove that ${\mathcal J}_{G,E}(\lambda;Q)$ is contractible for each fixed $G, \, E$.
We write each $J: TM \to TM$ as a block $4 \times 4$ matrix in terms of the splitting
$$
TM = T{\mathcal F} \oplus G \oplus JT{\mathcal F} \oplus E.
$$
Then one can easily check that the $Q$-adaptedness of $J$ implies that $J$ must have the form
$$
\left(\begin{matrix} 0 & 0 & Id & 0 \\
0 & J_G & 0 & 0 \\
-Id & 0 & 0 & 0 \\
0 & B & 0 & J_E
\end{matrix} \right)
$$
where $J_G:G \to G$ is $\omega_G$-compatible, $J_E:E \to E$ is $\omega_E$-compatible, and $B$ satisfies the relation
$BJ_G = 0$,
which in turn implies $B = 0$.
Since each of the sets of such $J_G$'s and of such $J_E$'s is contractible,
it follows that ${\mathcal J}_{G,E}(\lambda;Q)$ is contractible.
This finishes the proof of the contractibility of ${\mathcal J}(\lambda;Q)$.
\section{Proof of Theorem \ref{thm:subsequence}}
\label{appendix:subseqproof}
In this appendix, we provide the proof of Theorem \ref{thm:subsequence}, borrowing
the exposition from \cite{oh-wang2}.
For a given contact instanton $w: [0, \infty)\times S^1\to M$, we define the maps
$w_s: [-s, \infty) \times S^1 \to M$ by
$w_s(\tau, t) = w(\tau + s, t)$.
For any compact set $K\subset {\mathbb R}$, there exists a sufficiently large $s_0$ such that for every $s\geq s_0$,
$K\subset [-s, \infty)$. For such $s\geq s_0$, we also get an $[s_0, \infty)$-family of maps by defining $w^K_s:=w_s|_{K\times S^1}:K\times S^1\to M$.
The asymptotic behavior of $w$ at infinity can be understood by studying the limit of the sequence of maps
$\{w^K_s:K\times S^1\to M\}_{s\in [s_0, \infty)}$ for any compact set $K\subset {\mathbb R}$.
First of all,
it is easy to check that under Hypothesis \ref{hypo:basic} the family
$\{w^K_s:K\times S^1\to M\}_{s\in [s_0, \infty)}$ satisfies the following:
\begin{enumerate}
\item ${\bar\partial}^\pi w^K_s=0$, $d((w^K_s)^*\lambda\circ j)=0$, for every $s\in [s_0, \infty)$;
\item $\lim_{s\to \infty}\|d^\pi w^K_s\|_{L^2(K\times S^1)}=0$;
\item $\|d w^K_s\|_{C^0(K\times S^1)}\leq \|d w\|_{C^0([0, \infty)\times S^1)}<\infty$.
\end{enumerate}
From (1) and (3), together with the compactness of the target manifold $M$ (which provides the uniform $L^2(K\times S^1)$ bound)
and the coercive estimate for the contact instanton equation derived in \cite[Theorem 5.7]{oh-wang2}, we obtain
$$
\|w^K_s\|_{W^{3,2}(K\times S^1)}\leq C_{K;(3,2)}<\infty
$$
for some constant $C_{K;(3,2)}$ independent of $s$.
Then it follows from the compactness of the embedding of $W^{3,2}(K\times S^1)$ into $C^2(K\times S^1)$ that the
set $\{w^K_s:K\times S^1\to M\}_{s\in [s_0, \infty)}$ is sequentially compact.
Therefore, for any sequence $s_k \to \infty$, there exists a subsequence, still denoted by $s_k$,
that converges to a map $w^K_\infty\in C^2(K\times S^1, M)$ in $C^2(K\times S^1, M)$ as $k\to \infty$.
Combined with (2), we derive the convergence
$$
dw^K_{s_k}\to dw^K_{\infty} \quad \text{and} \quad dw^K_\infty=(w^K_\infty)^*\lambda\otimes X_\lambda.
$$
Finally, taking (1) into consideration,
we also derive that both $(w^K_\infty)^*\lambda$ and $(w^K_\infty)^*\lambda\circ j$ are harmonic $1$-forms.
Recall that these limiting maps $w^K_\infty$ have a common extension $w_\infty: {\mathbb R}\times S^1\to M$
by the nature of the diagonal argument, which takes a sequence of compact sets $K$,
each containing the previous one and exhausting all of ${\mathbb R}$ as $k \to \infty$.
Then $w_\infty$ is $C^2$ (in fact $C^\infty$) and satisfies
$$
\|d w_\infty\|_{C^0({\mathbb R}\times S^1)}\leq \|d w\|_{C^0([0, \infty)\times S^1)}<\infty
$$
and
$$
dw_\infty=(w_\infty)^*\lambda \otimes X_\lambda.
$$
We also note that both $(w_\infty)^*\lambda$ and $(w_\infty)^*\lambda\circ j$
are bounded harmonic one-forms on ${\mathbb R}\times S^1$.
Therefore they must be of the form
$$
(w_\infty)^*\lambda=a\,d\tau+b\,dt, \quad (w_\infty)^*\lambda\circ j=b\,d\tau-a\,dt,
$$
where $a$ and $b$ are constants.
We now show that the constants $a$ and $b$ are related to ${\mathcal T}$ and ${\mathcal Q}$ as
follows.
\begin{lem}
$$
a=-{\mathcal Q}, \quad b= {\mathcal T}.
$$
\end{lem}
\begin{proof} Take an arbitrary point $r\in K$. Using the $C^2$-convergence of some sequence $w_{s_k}|_{\{r\}\times S^1}$
to $w_\infty|_{\{r\}\times S^1}$, we derive
\begin{eqnarray*}
b=\int_{\{r\}\times S^1}(w_\infty|_{\{r\}\times S^1})^*\lambda
&=&\int_{\{r\}\times S^1}\lim_{k\to \infty}(w_{s_k}|_{\{r\}\times S^1})^*\lambda\\
&=&\lim_{k\to \infty}\int_{\{r\}\times S^1}(w_{s_k}|_{\{r\}\times S^1})^*\lambda\\
&=&\lim_{k\to \infty}\int_{\{r+s_k\}\times S^1}(w|_{\{r+s_k\}\times S^1})^*\lambda.
\end{eqnarray*}
On the other hand, recalling $w^*d\lambda = \frac{1}{2} |d^\pi w|^2$ and applying Stokes'
formula and the finiteness of the $\pi$-energy on $[0,\infty) \times S^1$, the latter becomes
$$
\lim_{k\to \infty}\left({\mathcal T}-\frac{1}{2}\int_{[r+s_k, \infty)\times S^1}|d^\pi w|^2\right)
={\mathcal T}-\lim_{k\to \infty}\frac{1}{2}\int_{[r+s_k, \infty)\times S^1}|d^\pi w|^2
={\mathcal T},
$$
which proves $b = {\mathcal T}$.
Similarly, using the closedness of $w^*\lambda \circ j$
and Stokes' formula, we easily compute
\begin{eqnarray*}
-a=\int_{\{r\}\times S^1}(w_\infty|_{\{r\}\times S^1})^*\lambda\circ j
&=&\int_{\{r\}\times S^1}\lim_{k\to \infty}(w_{s_k}|_{\{r\}\times S^1})^*\lambda\circ j\\
&=&\lim_{k\to \infty}\int_{\{r\}\times S^1}(w_{s_k}|_{\{r\}\times S^1})^*\lambda\circ j\\
&=&\lim_{k\to \infty}\int_{\{r+s_k\}\times S^1}(w|_{\{r+s_k\}\times S^1})^*\lambda\circ j
={\mathcal Q}.
\end{eqnarray*}
Here in our derivation we used Remark \ref{rem:TQ}. This proves the lemma.
\end{proof}
By the connectedness of $[0,\infty) \times S^1$, the image of $w_\infty$ is contained in
a single leaf of the Reeb foliation. If $\gamma: {\mathbb R} \to M$ is a parametrization of
the leaf satisfying $\dot \gamma = X_\lambda(\gamma)$,
then we can write $w_\infty(\tau, t)=\gamma(s(\tau, t))$, where
$s:{\mathbb R}\times S^1\to {\mathbb R}$ is given by $s=-{\mathcal Q}\,\tau+{\mathcal T}\,t+c_0$ for some constant $c_0$, since $ds=-{\mathcal Q}\,d\tau+{\mathcal T}\,dt$.
This implies that if ${\mathcal T}\neq 0$, then
$\gamma$ defines a closed Reeb orbit of period ${\mathcal T}$. On the other hand,
if ${\mathcal T}=0$ but ${\mathcal Q}\neq 0$, we can only conclude that $\gamma$ is some Reeb
trajectory parametrized by $\tau\in {\mathbb R}$.
\begin{rem} Of course, if both ${\mathcal T}$ and ${\mathcal Q}$ vanish, then $w_\infty$ is a constant map.
In \cite{oh:energy}, it is shown that such a puncture is a removable singularity
under the finiteness of a suitably defined Hofer-type energy.
\end{rem}
\section{Sobolev's inequality for sections of ${\mathcal E}_1 \to {\mathbb R}$}
\label{sec:Sobolev}
In this section, we give the proof of \eqref{eq:Sobolev} for the sections of
the bundle ${\mathcal E}_1 \to {\mathbb R}$ whose fiber is a Hilbert space, possibly of infinite dimension.
As in the main text, we assume that ${\mathcal E}_2 \subset {\mathcal E}_1$ is a pair of Hilbert bundles satisfying
all the required properties and equipped with a compatible connection $\nabla$.
We denote
by $\Pi_s^\tau$ the parallel transport from the fiber ${\mathcal E}_s$ to ${\mathcal E}_\tau$.
\begin{prop}\label{prop:Sobolev} Let $I \subset {\mathbb R}$ be a closed
interval and let $\zeta: I \to {\mathcal E}_2$ be a smooth section.
Then there exists $C_3 = C_3(I) > 0$, depending only on the length $|I|$ of the interval
but independent of $\zeta$, such that
$$
\|\zeta(\tau)\|_{L^\infty(I,{\mathcal E}_1)} \leq C_3 \|\zeta\|_{W^{1,2}(I,{\mathcal E}_1)}.
$$
\end{prop}
\begin{proof}
Thanks to \eqref{eq:L2|zeta|}, there must be a point
$\tau_0 \in I$ such that
\begin{equation}\label{eq:suptau0}
|\zeta(\tau_0)|_{{\mathcal E}_1,\tau_0} \leq \frac{1}{\sqrt{|I|}}\|\zeta\|_{L^2(I,{\mathcal E}_1)},
\end{equation}
where $|I|$ is the length of the interval $I$.
Then for any $\tau \in I$, we write
$$
\zeta(\tau) - \Pi_{\tau_0}^\tau \zeta(\tau_0) = \int_{\tau_0}^\tau \Pi_{s}^\tau \nabla_s \zeta(s) \, ds.
$$
Therefore we obtain
$$
|\zeta(\tau)|_{{\mathcal E}_1,\tau} \leq |\zeta(\tau_0)|_{{\mathcal E}_1,\tau_0} + \int_{\tau_0}^\tau |\nabla_s \zeta(s)|_{{\mathcal E}_1,s} \, ds.
$$
Applying H\"older's inequality, we derive
\begin{eqnarray*}
\int_{\tau_0}^\tau |\nabla_s \zeta(s)|_{{\mathcal E}_1,s} \, ds
&\leq & \sqrt{|I|} \sqrt{\int_{\tau_0}^\tau |\nabla_s \zeta(s)|_{{\mathcal E}_1,s}^2 \, ds}\\
&\leq & \sqrt{|I|} \sqrt{\int_I |\nabla_s \zeta(s)|_{{\mathcal E}_1,s}^2 \, ds}
\leq \sqrt{|I|}\, \|\nabla_\tau \zeta\|_{L^2(I,{\mathcal E}_1)},
\end{eqnarray*}
since $\tau_0,\, \tau \in I$.
Combining the two estimates, we obtain
$$
|\zeta(\tau)|_{{\mathcal E}_1,\tau} \leq \frac{1}{\sqrt{|I|}}\,\|\zeta\|_{L^2(I,{\mathcal E}_1)} +
\sqrt{|I|}\, \|\nabla_\tau \zeta\|_{L^2(I,{\mathcal E}_1)}
$$
for all $\tau \in I$.
Setting $C_3 = 2 \max\{\sqrt{|I|}, \frac{1}{\sqrt{|I|}}\}$ finishes the proof.
\end{proof}
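As an illustration only (and not part of the proof), the estimate can be checked numerically in the simplest
scalar case, namely the trivial bundle with fiber ${\mathbb R}$ and trivial connection over $I=[0,L]$; the test
function, interval length and grid in the following short script are arbitrary choices of ours.
\begin{verbatim}
# Numerical sanity check of the interval estimate
#   sup_I |zeta| <= |I|^{-1/2} ||zeta||_{L^2(I)} + |I|^{1/2} ||zeta'||_{L^2(I)}
# in the scalar case (fiber R, trivial connection).  Illustrative only.
import numpy as np

L = 2.7                                   # length |I| (arbitrary)
tau = np.linspace(0.0, L, 20001)
zeta = np.sin(3.0 * tau) * np.exp(-0.4 * tau) + 0.3 * tau   # arbitrary smooth section
dzeta = np.gradient(zeta, tau)            # nabla_tau zeta (finite differences)

sup_norm = np.max(np.abs(zeta))
L2 = np.sqrt(np.trapz(zeta**2, tau))
L2_grad = np.sqrt(np.trapz(dzeta**2, tau))

rhs = L2 / np.sqrt(L) + np.sqrt(L) * L2_grad
C3 = 2.0 * max(np.sqrt(L), 1.0 / np.sqrt(L))
W12 = np.sqrt(L2**2 + L2_grad**2)         # W^{1,2} norm

print(sup_norm, rhs, C3 * W12)            # expect sup_norm <= rhs <= C3 * W12
assert sup_norm <= rhs + 1e-8 and rhs <= C3 * W12 + 1e-8
\end{verbatim}
The last inequality in the assertion follows from $A+B\leq 2\sqrt{A^2+B^2}$, which is the way the constant
$C_3$ enters the statement of the proposition.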
\bigskip
\textbf{Acknowledgements:}
Rui Wang would like to thank Erkao Bao and Ke Zhu for useful discussions.
Both authors are extremely grateful to the anonymous referee, who read the paper
carefully, pointed out many typos and errors and provided valuable
suggestions to improve the presentation of this paper.
\begin{thebibliography}{BEHWZ}
\bibitem[A]{arnold:book} Arnold, V. I., Mathematical Methods of
Classical Mechanics, GTM 60, 2nd edition, Springer-Verlag, New York, 1989.
\bibitem[Ba]{bao} Bao, E., {\it
On J-holomorphic curves in almost complex manifolds with asymptotically cylindrical ends},
Pacific J. Math. {\bf 278} (2015), no. 2, 291--324.
\bibitem[Bot]{bott} Bott, R., {\it Nondegenerate critical manifolds}, Ann. of Math. (2) {\bf 60} (1954), 248--261.
\bibitem[BT]{bott-tu} Bott, R., Tu, L., Differential Forms in Algebraic Topology, Springer-Verlag, New York,
1982.
\bibitem[Bou]{bourgeois} Bourgeois, F., {\it A Morse-Bott approach to contact homology}, Ph.D. dissertation,
Stanford University, 2002.
\bibitem[BEHWZ]{behwz} Bourgeois, F., Eliashberg, Y., Hofer, H., Wysocki, K., Zehnder, E.,
{\it Compactness results in symplectic field theory}, Geom. Topol. {\bf 7} (2003), 799--888.
\bibitem[EGH]{SFT} Eliashberg, Y., Givental, A., Hofer, H., {\it Introduction to symplectic field theory}, Visions in Mathematics, Birkh\"auser, Basel, 2000, 560--673.
\bibitem[G]{gotay} Gotay, M., {\it On coisotropic imbeddings of
pre-symplectic manifolds}, Proc. Amer. Math. Soc. {\bf 84} (1982),
111--114.
\bibitem[He]{helgason} Helgason, S., Differential Geometry, Lie Groups, and Symmetric Spaces,
Pure and Applied Mathematics, 80, Academic Press, Inc., New York-London, 1978.
\bibitem[Ho]{hofer} Hofer, H., {\it Pseudoholomorphic curves in symplectizations with
applications to the Weinstein conjecture in dimension three}, Invent. Math. {\bf 114} (1993), no. 1, 515--563.
\bibitem[HWZ1]{HWZ1} Hofer, H., Wysocki, K., Zehnder, E., {\it Properties of pseudoholomorphic
curves in symplectizations, I: asymptotics}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 13} (1996), 337--379.
\bibitem[HWZ2]{HWZ2} Hofer, H., Wysocki, K., Zehnder, E., {\it Correction to ``Properties of pseudoholomorphic
curves in symplectizations, I: asymptotics''}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 15} (1998), 535--538.
\bibitem[HWZ3]{HWZ3} Hofer, H., Wysocki, K., Zehnder, E., {\it Properties of pseudoholomorphic
curves in symplectizations IV: asymptotics with degeneracies}, in Contact and Symplectic Geometry, Cambridge
University Press, 1996, 78--117.
\bibitem[HWZ4]{HWZplane} Hofer, H., Wysocki, K., Zehnder, E., {\it The asymptotic behavior of a finite energy plane},
ETHZ, Institute for Mathematical Research, 2001.
\bibitem[K]{katcher} Karcher, H., {\it Riemannian center of mass and
mollifier smoothing}, Comm. Pure Appl. Math. {\bf 30} (1977), 509--541.
\bibitem[MT]{mundet-tian} Mundet i Riera, I., Tian, G., {\it A compactification of
the moduli space of twisted holomorphic maps}, Adv. Math. {\bf 222}
(2009), 1117--1196.
\bibitem[Oh1]{oh:book} Oh, Y.-G., Symplectic Topology and Floer Homology I,
New Mathematical Monographs 28, Cambridge University Press, Cambridge, 2016.
\bibitem[Oh2]{oh:energy} Oh, Y.-G., {\it Analysis of contact Cauchy-Riemann maps III: energy, bubbling and
Fredholm theory}, preprint, 2013, available from http://cgp.ibs.re.kr/~yongoh/preprints.html.
\bibitem[OP]{oh-park} Oh, Y.-G., Park, J.-S., {\it Deformations of coisotropic submanifolds
and strong homotopy Lie algebroids}, Invent. Math. {\bf 161} (2005), 287--360.
\bibitem[OW1]{oh-wang1} Oh, Y.-G., Wang, R., {\it Canonical connection on contact manifolds},
in ``Real and Complex Submanifolds'', Springer Proceedings in Mathematics \& Statistics, vol. 106,
pp. 43--63, eds. Y.-J. Suh et al., ICM-2014 satellite conference,
Daejeon, Korea, August 2014; arXiv:1212.4817.
\bibitem[OW2]{oh-wang2} Oh, Y.-G., Wang, R., {\it Analysis of contact Cauchy-Riemann maps I:
a priori $C^k$ estimates and asymptotic convergence}, submitted, arXiv:1212.5186v3.
\bibitem[OZ]{oh-zhu11} Oh, Y.-G., Zhu, K.,
{\it Thick-thin decomposition of Floer trajectories and adiabatic gluing}, preprint,
arXiv:1103.3525.
\bibitem[RS]{robbin-salamon} Robbin, J., Salamon, D., {\it Asymptotic behaviour of holomorphic strips},
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 18} (2001), no. 5.
\end{thebibliography}
\end{document}
\begin{document}
\title{Quantum pigeonhole effect, Cheshire cat and contextuality}
\author{Sixia Yu}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542}
\author{C.H. Oh}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542}
\affiliation{Physics Department, National University of Singapore, 2 Science Drive 3, Singapore 117542}
\begin{abstract}
A kind of paradoxical effect has been demonstrated in which the pigeonhole principle, i.e., the statement that if three pigeons are put in two pigeonholes then at least two pigeons must stay in the same hole, fails in certain quantum mechanical scenarios. Here we shall show how to associate a proof of the Kochen-Specker theorem with a quantum pigeonhole effect and vice versa; e.g., from state-independent proofs of the Kochen-Specker theorem some kind of state-independent quantum pigeonhole effects can be demonstrated. In particular, a state-independent version of the quantum Cheshire cat, which can be rendered as a kind of quantum pigeonhole effect about the trouble of putting two pigeons in two pigeonholes, arises from the Peres-Mermin magic square proof of contextuality.
\end{abstract}
\maketitle
Quantum theory confronts us with a reality that is radically different from a classical one because of its contextuality. Roughly speaking, quantum contextuality refers to the property that any realistic theory trying to complete quantum mechanics so as to avoid indeterministic measurement outcomes has to be contextual, i.e., the predetermined outcome of measuring an observable may depend on which set of compatible observables is measured alongside it. This is a seminal no-go theorem proved by Kochen and Specker (KS) \cite{ks} and independently by Bell \cite{bell2}. Bell's nonlocality, as revealed by, e.g., the violation of Bell's inequality \cite{bell,bi} or by arguments without inequalities \cite{ghz,mermin,tang,hardy}, is a special form of contextuality enforced by space-like separation. In different scenarios there are different proofs of the Kochen-Specker theorem, which can be state-dependent \cite{clif,pent}, state-independent and deterministic \cite{ks,cab,peres}, or statistical yet state-independent \cite{yo}.
Recently a kind of quantum pigeonhole effect \cite{qpe,vaid} was demonstrated, in which the pigeonhole principle, which states that if three pigeons are to be put into two pigeonholes then two of the pigeons must stay in the same hole, does not hold in some pre- and post-selection scenarios. This effect has even been promoted to a principle for exploring the nature of quantum correlations. Consider a system of three qubits representing three pigeons in two pigeonholes, i.e., the two eigenstates $\{|0\rangle,|1\rangle\}$ of $Z$, where the three Pauli matrices and the identity matrix are denoted simply by $\{X,Y,Z,I\}$.
Initially the system is prepared in the state $|\psi_i\rangle=|+,+,+\rangle$, which is the common $+1$ eigenstate of the commuting observables $\{X_1,X_2,X_3\}$. At the final stage, the $y$ component of each qubit is measured, i.e., $\{Y_1,Y_2,Y_3\}$, and only those outcomes with three $+1$'s are kept, i.e., the common $+1$ eigenstate $|\psi_f\rangle=|0,0,0\rangle_Y$ is post-selected. In the intermediate stage between the preparation and the post-selection, one asks what would happen if we had tested whether each pair of qubits is in the same state or not. This is equivalent to measuring the observable $Z_{ab}=Z_aZ_b$ on each pair of qubits $a$ and $b$, since the outcome $+1$ or $-1$, which corresponds to the projectors $\Pi_{ab}^{\pm}=(I_{ab}\pm Z_{ab})/2$, indicates that the two qubits are in the same or different states, respectively, with respect to the computational basis. It was then argued that the detectors corresponding to $\{\Pi_{ab}^+\}$ would never fire because of the identities
\begin{equation}\label{cd}
\langle\psi_i|\Pi_{ab}^+|\psi_f\rangle=0 \quad (a,b=1,2,3).
\end{equation}
As a result the detector corresponding to $\Pi_{ab}^-$ would always fire for each pair of qubits $a,b=1,2,3$, meaning that each pair of qubits would have stayed in different states, a violation of the pigeonhole principle.
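As an illustration only, Eq.(\ref{cd}) can be checked numerically with the standard matrix representations of the Pauli operators; the short script below (a sketch, with variable names of our own choosing) computes the three transition amplitudes directly.
\begin{verbatim}
# Numerical check of <psi_i| Pi_ab^+ |psi_f> = 0 for the three-qubit setup.
import numpy as np
from functools import reduce

I2 = np.eye(2); Z = np.diag([1.0, -1.0])
plus_x = np.array([1, 1]) / np.sqrt(2)          # +1 eigenstate of X
plus_y = np.array([1, 1j]) / np.sqrt(2)         # +1 eigenstate of Y

kron = lambda ops: reduce(np.kron, ops)
psi_i = kron([plus_x] * 3)                      # |+,+,+>
psi_f = kron([plus_y] * 3)                      # |0,0,0>_Y

for (a, b) in [(0, 1), (1, 2), (0, 2)]:
    ops = [I2, I2, I2]; ops[a] = Z; ops[b] = Z
    Pi_plus = (np.eye(8) + kron(ops)) / 2       # projector onto Z_a Z_b = +1
    amp = psi_i.conj() @ Pi_plus @ psi_f
    print((a + 1, b + 1), abs(amp))             # all print ~0
\end{verbatim}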
The quantum pigeonhole effect described above is obviously a kind of pre- and post-selection paradox and, as pointed out by Leifer and Spekkens \cite{ls}, any such paradox is associated with a proof of quantum contextuality. In any noncontextual realistic model, according to Kochen and Specker \cite{ks}, there is a so-called KS value assignment of $\{0,1\}$ to all the rays, i.e., rank-1 projections, in the relevant Hilbert space satisfying the following three rules. The {\it noncontextuality rule} states that a ray is assigned the value 0 or 1 regardless of which complete orthonormal basis it belongs to, the {\it orthogonality rule} states that two orthogonal rays cannot be assigned the value 1 simultaneously, and the {\it completeness rule} states that within a complete orthonormal basis there is at least one ray that is assigned the value 1.
A finite set of rays having no KS value assignment is a proof of the KS theorem, i.e., a demonstration of quantum contextuality, and the proof is state-dependent if some states are assigned the value 1 {\it a priori}. The absence of KS value assignments is sufficient for the nonexistence of a noncontextual model but not necessary. There are also state-independent proofs admitting KS value assignments \cite{yo}.
In the above demonstration of the quantum pigeonhole effect, denoting $|\Phi_\pm\rangle\propto |00\rangle\pm|11\rangle$ and $|\Psi_\pm\rangle\propto |01\rangle\pm|10\rangle$, the set of all the relevant 34 pure states
\begin{equation}
|\psi_i\rangle,|\psi_f\rangle, \{|\Phi_\pm\rangle_{ab}|\mu\rangle_c,|\Psi_\pm\rangle_{ab}|\mu\rangle_c\}, \{|\mu,\nu,\tau\rangle\},
\end{equation}
where
$(a,b,c)$ denotes one of the possible cyclic permutations of $(1,2,3)$ and $\mu,\nu,\tau=0,1$, provides a state-dependent proof of the KS theorem if $|\psi_i\rangle$ and $|\psi_f\rangle$ are assigned the value 1. In fact,
for any possible choice of $(a,b,c)$ and $\mu=0,1$, both rays $|\Phi_\pm\rangle_{ab}|\mu\rangle_c$ have to be assigned the value 0 because of Eq.(\ref{cd}) and the orthogonality rule. As a result, either $|\Psi_+\rangle_{ab}|\mu\rangle_c$ or $|\Psi_-\rangle_{ab}|\mu\rangle_c$ has to be assigned the value 1 for any given $\mu=0,1$ and $a,b$, according to the completeness rule. Equivalently, all three rank-2 projections $\{\Pi_{ab}^-\}$ must be assigned the value 1.
Due to the pigeonhole principle, in each one of the eight computational basis states $\{|\mu,\nu,\tau\rangle\}$ at least two qubits are in the same state, so each of these states is orthogonal to at least one of the three subspaces with projections $\{\Pi_{ab}^-\}$. That is to say, all eight computational basis states have to be assigned the value 0, which contradicts the completeness rule.
All
three rules of KS value assignment are also assumed, implicitly or explicitly, in the demonstration of the quantum pigeonhole effect. The orthogonality and completeness rules, which can be enforced by the Aharonov-Bergmann-Lebowitz rule \cite{abl} for pre- and post-selection, have been used in the arguments against certain outcomes of the intermediate measurements. Noncontextuality has also been assumed implicitly in the intermediate measurement of the observable $Z_{ab}$ for each pair of qubits, which appears in two different measurement contexts in order to extract a contradiction. The first measurement context is given by $\{X_{ab},Y_{ab}\}$, which has been used to show that the detector $\Pi_{ab}^+$ would never fire, i.e., $Z_{ab}$ would take the value $-1$, indicating that qubits $a$ and $b$ are in different states. The second measurement context is given by $\{Z_1,Z_2,Z_3\}$, whose eigenstates represent all possible configurations of three pigeons in two pigeonholes. There is a contradiction only if the value of $Z_{ab}$ obtained in the first context is used in the second context. It is clear that no paradox will be present if the assumption of noncontextuality is dropped: the counterfactual measurements might yield contextual values.
For other possible outcomes of the post-selection measurement, similar quantum pigeonhole effects can still be demonstrated \cite{qpe}. Here we shall further remove the dependence on the initial state so that we have a state-independent version of the quantum pigeonhole effect. On a system of three qubits we first measure the complete set $\{X_1,X_2,X_3\}$ with arbitrary outcomes $s_1,s_2,s_3=\pm1$, and at the final stage we measure the complete set $\{Y_1,Y_2,Y_3\}$ with arbitrary outcomes $t_1,t_2,t_3=\pm1$. By the first measurement the system is prepared in the state $|\psi_i^s\rangle=|s_1,s_2,s_3\rangle$ and by the second measurement the system is post-selected into the state $|\psi_f^t\rangle=|t_1,t_2,t_3\rangle_Y$. Let us now examine what would happen if we had measured $Z_{ab}$ at the intermediate stage between the preparation and the post-selection. Denoting $v_{ab}=s_as_bt_at_b$ we have
\begin{eqnarray}\label{test}
v_{ab}\langle\psi_i^s|\Pi_{ab}^{v_{ab}} |\psi^t_f\rangle&=&\langle\psi_i^s|X_{ab}\Pi_{ab}^{v_{ab}} Y_{ab}|\psi^t_f\rangle\nonumber\\
&=&-\langle\psi_i^s|X_{ab}\Pi_{ab}^{v_{ab}} Z_{ab}X_{ab}|\psi^t_f\rangle\nonumber\\
&=&-{v_{ab}} \langle\psi_i^s|\Pi_{ab}^{v_{ab}} |\psi^t_f\rangle=0
\end{eqnarray}
for arbitrary $a,b=1,2,3$. That is, the detector $\Pi_{ab}^-$ (or $\Pi_{ab}^+$) would never fire if $v_{ab}=-1$ (or 1) so that qubits $a$ and $b$ would have stayed in the same (or different) states, respectively. Since
$ v_{12}v_{23}v_{13}=1$ there is an even number of pairs of qubits such that $v_{ab}=-1$, i.e., among the three qubits there is an even number of pairs that would have stayed in the same state. This is impossible: if three pigeons are put into two pigeonholes, the number of pairs of pigeons staying in different holes must be even, and hence the number of pairs staying in the same hole is always odd.
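For completeness, the vanishing amplitudes in Eq.(\ref{test}) can also be checked numerically for all $64$ sign choices; the following sketch (with names of our own choosing) is an illustration only.
\begin{verbatim}
# Check of Eq. (3): <psi_i^s| Pi_ab^{v_ab} |psi_f^t> = 0 with v_ab = s_a s_b t_a t_b,
# for all sign choices (s_1,s_2,s_3,t_1,t_2,t_3).  Illustrative sketch only.
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2); Z = np.diag([1.0, -1.0])
x_eig = {+1: np.array([1, 1]) / np.sqrt(2), -1: np.array([1, -1]) / np.sqrt(2)}
y_eig = {+1: np.array([1, 1j]) / np.sqrt(2), -1: np.array([1, -1j]) / np.sqrt(2)}
kron = lambda ops: reduce(np.kron, ops)

def Pi(a, b, v):                      # projector onto Z_a Z_b = v
    ops = [I2, I2, I2]; ops[a] = Z; ops[b] = Z
    return (np.eye(8) + v * kron(ops)) / 2

worst = 0.0
for s in product([1, -1], repeat=3):
    for t in product([1, -1], repeat=3):
        psi_i = kron([x_eig[si] for si in s])
        psi_f = kron([y_eig[ti] for ti in t])
        for (a, b) in [(0, 1), (1, 2), (0, 2)]:
            v = s[a] * s[b] * t[a] * t[b]
            worst = max(worst, abs(psi_i.conj() @ Pi(a, b, v) @ psi_f))
print("largest forbidden amplitude:", worst)   # ~1e-16
\end{verbatim}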
\begin{figure}
\caption{(Color online.) Waegell-Aravind's triangle configuration for a proof of Kochen-Specker theorem in the case of three qubits. }
\end{figure}
The above state-independent version of the quantum pigeonhole paradox also gives rise to a state-independent proof of the Kochen-Specker theorem due to Waegell and Aravind \cite{wa}. This proof includes 48 rays defined by the eigenstates of 6 maximal sets of mutually commuting observables
$\{X_a\},\{Y_a\},\{Z_a\},\{X_{ab},Y_{ab},Z_c\},
$ with $(a,b,c)$ running over the cyclic permutations of $(1,2,3)$. There is an elegant parity proof given by the Waegell-Aravind triangle configuration as shown in Fig.1. Quantum mechanically, the product of the three observables connected by each thin (blue) straight line is 1 while the product of the three observables connected by each thick (red) line is $-1$. Thus the product of all those line products of three observables equals $-1$. In a noncontextual realistic theory, where all observables have predetermined values $\pm1$ independent of the contexts, the same product yields the value 1 since each observable appears twice in the product.
Any proof of the Kochen-Specker theorem via logical contradiction can give rise to the demonstration of some sort of quantum pigeonhole effect. This is because in each KS proof there is at least one basis, which can be taken without loss of generality to be the computational basis. The logical contradiction can be pushed back to the impossibility of the KS value assignment to this computational basis. We note that the computational basis states represent all possible configurations of pigeons in holes and obviously satisfy the orthogonality and completeness rules, i.e., one and only one configuration is realized. Thus the contradiction as seen in a KS proof can be regarded as a violation of some kind of generalized pigeonhole principle. For example, the 3-box paradox \cite{3box}, which is associated with Clifton's state-dependent proof of the KS theorem \cite{clif}, roughly shows that a single pigeon, when put into three pigeonholes, can stay simultaneously in two holes. Now let us see some more examples.
As the first example, a kind of robust quantum pigeonhole effect can be demonstrated by using the Peres-Mermin magic square for three qubits \cite{wa} as shown in Table I (left). Let the system be prepared in any state $|\psi_i\rangle$ (it may even be mixed) in the common $+1$ eigenspace of the three observables $\{X_{12},X_{23},X_{13}\}$ in the first row, which is spanned by $|+,+,+\rangle$ and $|-,-,-\rangle$. At the final stage we make a post-selection to any state $|\psi_f\rangle$ in the common $+1$ eigenspace of the three observables $\{Y_{12},Y_{23},Y_{13}\}$ in the third row, which is spanned by $|0,0,0\rangle_Y$ and $|1,1,1\rangle_Y$. At the intermediate stage we ask what would happen if we had measured the three observables $\{Z_{12},Z_{23},Z_{13}\}$ in the second row, testing whether each pair of qubits was in the same state or not. In this general setting Eq.(\ref{cd}) still holds, so that each pair of qubits would have stayed in different states, a violation of the pigeonhole principle. The dependence on the pre- and post-selection can easily be removed, and the generalization of the Peres-Mermin magic square as well as the quantum pigeonhole effect to the case of an odd number of qubits is straightforward.
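The statement that Eq.(\ref{cd}) holds for every such pre- and post-selection is equivalent to the operator identity $P_Y\,\Pi_{ab}^{+}\,P_X=0$, where $P_X$ ($P_Y$) denotes the projector onto the common $+1$ eigenspace of the first-row (third-row) observables. The following short script is an illustrative check of this identity only, with names of our own choosing.
\begin{verbatim}
# Check P_Y (I + Z_a Z_b)/2 P_X = 0, where P_X (P_Y) projects onto the common +1
# eigenspace of {X_ab} ({Y_ab}) for three qubits.  Illustrative sketch only.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
kron = lambda ops: reduce(np.kron, ops)

def pair(op, a, b):
    ops = [I2, I2, I2]; ops[a] = op; ops[b] = op
    return kron(ops)

pairs = [(0, 1), (1, 2), (0, 2)]
P_X = reduce(np.matmul, [(np.eye(8) + pair(X, a, b)) / 2 for a, b in pairs])
P_Y = reduce(np.matmul, [(np.eye(8) + pair(Y, a, b)) / 2 for a, b in pairs])

for a, b in pairs:
    M = P_Y @ ((np.eye(8) + pair(Z, a, b)) / 2) @ P_X
    print((a + 1, b + 1), np.max(np.abs(M)))    # all ~0
\end{verbatim}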
\begin{table}
$$
\begin{array}{|c|c|c|}\hline
X_1X_2& X_2X_3& X_1X_3\\
\hline
Z_1Z_2& Z_2Z_3 & Z_1Z_3\\
\hline
Y_1Y_2& Y_2Y_3& Y_1Y_3\\
\hline
\end{array}
\quad\quad
\begin{array}{|c|c|c|}\hline
X_1& X_2& X_1 X_2\\
\hline
Z_2 & Z_1 & Z_1 Z_2\\
\hline
X_1Z_2 & Z_1X_2 & Y_1Y_2\\
\hline
\end{array}
$$
\caption{Peres-Mermin's magic square for three qubits (left) and for two qubits (right). }
\end{table}
As the second example, a kind of paradoxical effect can be demonstrated in the case of two pigeons in two pigeonholes, which turns out to be the quantum Cheshire cat \cite{cat}, a curious situation of a grin without a cat, encountered by Alice only in Wonderland and demonstrated recently in a neutron experiment \cite{catn}. Consider a spin-half particle, e.g., a neutron, in an interferometer with two paths, where the cat is represented by the particle and its grin by its spin. This is effectively a two-qubit system, with the first qubit representing the spatial degree of freedom, i.e., the path, and the second qubit representing the spin.
Let the system be prepared initially in the state $|\psi_i\rangle=|\Phi_+\rangle$ and post-selected to $|\psi_f\rangle=|+\rangle_1|0\rangle_2$ at the final stage. Because $\langle\psi_i |\Pi_1^{Z_1}|\psi_f\rangle=0$, the particle would take the path $|0\rangle_1$ if we had observed its path, where we have denoted by $\Pi^O_u$ the projection to the eigenspace of the observable $O$ corresponding to eigenvalue $(-1)^u$ with $u=0,1$. Because $\langle\psi_i |\Pi_1^{X_2}|\psi_f\rangle=0$, if we had observed its spin component along the $x$ direction the answer would be spin up, $|+\rangle_2$. Because $\langle\psi_i |\Pi^{Z_1X_2}_0|\psi_f\rangle=0$, we see that the path and the spin are anti-correlated, i.e., if the spin is up $|+\rangle_2$ then the path $|1\rangle_1$ should be taken, while if the spin is down $|-\rangle_2$ then the path $|0\rangle_1$ should be taken. To sum up, given the pre- and post-selection, the particle would surely travel along the path $|0\rangle$ and its spin would be up along the $x$ direction, while the path and spin would be anti-correlated, i.e., its spin would be found along a path, $|1\rangle$, different from the one taken by the particle, $|0\rangle$, which is exactly our quantum Cheshire cat.
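The three vanishing amplitudes used above can be verified directly; the following sketch (an illustration only, using the convention that $\Pi^O_u$ projects onto the eigenvalue $(-1)^u$ of $O$) computes them numerically.
\begin{verbatim}
# Check of the three vanishing amplitudes behind the quantum Cheshire cat,
# with |psi_i> = |Phi_+> and |psi_f> = |+>_1 |0>_2.  Illustrative sketch only.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)                      # |Phi_+>
psi_f = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))    # |+>_1 |0>_2

def proj(op, u):       # projector onto eigenvalue (-1)^u of the two-qubit observable op
    return (np.eye(4) + (-1) ** u * op) / 2

amps = [
    phi_plus.conj() @ proj(np.kron(Z, I2), 1) @ psi_f,   # which-path,  Z_1 = -1
    phi_plus.conj() @ proj(np.kron(I2, X), 1) @ psi_f,   # spin,        X_2 = -1
    phi_plus.conj() @ proj(np.kron(Z, X), 0) @ psi_f,    # correlation, Z_1 X_2 = +1
]
print([abs(a) for a in amps])                            # all ~0
\end{verbatim}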
This demonstration of quantum Cheshire cat, being a kind of pre- and post-selection paradox, also gives rise to a state-dependent proof of Kochen-Specker theorem including the eigenstates of $\{Z_1,Z_2\}$,
$\{X_1,X_2\}$ and $\{Z_1X_2,X_1Z_2\}$ in addition to $|\psi_i\rangle$ and $|\psi_f\rangle$.
By considering suitable pre- and post-selection measurements, the dependence of the quantum Cheshire cat on the preparation and post-selection can be removed. For the preparation we measure the observables $\{X_{12},Z_{12}\}$, while for the post-selection we measure the observables $\{X_1,Z_2\}$. Thus initially the system is prepared in the state $|\psi_i\rangle=|\Phi_{\alpha\beta}\rangle$, which is the common eigenstate of $X_{12}$ and $Z_{12}$ corresponding to eigenvalues $(-1)^\alpha$ and $(-1)^\beta$, respectively, with $\alpha,\beta=0,1$, and post-selected into the state $|\psi_f\rangle=|(-1)^\mu\rangle_1|\nu\rangle_2$ with $\mu,\nu=0,1$. The following three observations constitute the quantum Cheshire cat. Firstly, if we had observed which path was taken by the particle we would find the particle in the path $|u\rangle_1$ with $(-1)^u=(-1)^{\beta+\nu}$ since
\begin{eqnarray*}
(-1)^{\beta+\nu}\langle\psi_i |\Pi^{Z_1}_u|\psi_f\rangle&=&\langle\psi_i| Z_{12}\Pi^{Z_1}_uZ_2|\psi_f\rangle\\&=&(-1)^u\langle\psi_i |\Pi^{Z_1}_u|\psi_f\rangle
\end{eqnarray*}
vanishes otherwise.
Secondly, if we had observed the spin along $x$ direction we would find the spin in the state $|(-1)^v\rangle_2$ with $(-1)^v=(-1)^{\alpha+\mu}$ since
\begin{eqnarray*}
(-1)^{\alpha+\mu}\langle\psi_i |\Pi^{X_2}_v|\psi_f\rangle&=&\langle\psi_i| X_{12}\Pi^{X_2}_vX_1|\psi_f\rangle\\&=&(-1)^v\langle\psi_i |\Pi^{X_2}_v|\psi_f\rangle
\end{eqnarray*}
vanishes otherwise. Lastly, if we had observed the correlation $\{Z_1X_2\}$ then the detector $\Pi_w^{ZX}$ would never fire if $(-1)^w=(-1)^{\alpha+\beta+\mu+\nu}$ because in this case
\begin{eqnarray*}
(-1)^{\alpha+\beta+\mu+\nu}\langle\psi_i |\Pi^{ZX}_w|\psi_f\rangle &=&-\langle\psi_i| Y_{12}\Pi^{ZX}_wX_1Z_2|\psi_f\rangle\\&=&-(-1)^w\langle\psi_i |\Pi^{ZX}_w|\psi_f\rangle
\end{eqnarray*}
vanishes. This means that the spin state $|(-1)^v\rangle_2$, as revealed by the measurement $\{X_2\}$, has to be correlated with the path $|u+1\rangle_1$ which is different from the path $|u\rangle_1$ taken by the particle as revealed by the measurement $\{Z_1\}$. This state-independent version of the quantum Cheshire cat turns out to be exactly the parity proof by Peres-Mermin's magic square \cite{pm} as shown in Table I (right). Three commuting observables in three columns are taken as the post-selection, intermediate, and preparation measurements respectively. If we take three sets of row observables instead then we obtain a paradoxical effect that it is impossible to put two pigeons in two holes in a pre- and post-selection scenario.
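The parity structure of Table I (right) underlying this argument can itself be checked with a few lines of code; the sketch below (an illustration only) verifies that all row products and the first two column products equal $+I$ while the last column product equals $-I$.
\begin{verbatim}
# Row/column products for the two-qubit Peres-Mermin square of Table I (right).
# Rows:  X1, X2, X1X2 | Z2, Z1, Z1Z2 | X1Z2, Z1X2, Y1Y2.   Illustrative only.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
k = np.kron

square = [
    [k(X, I2), k(I2, X), k(X, X)],
    [k(I2, Z), k(Z, I2), k(Z, Z)],
    [k(X, Z),  k(Z, X),  k(Y, Y)],
]
for r in range(3):
    P = square[r][0] @ square[r][1] @ square[r][2]
    print("row", r + 1, "product = +I:", np.allclose(P, np.eye(4)))
for c in range(3):
    P = square[0][c] @ square[1][c] @ square[2][c]
    sign = +1 if c < 2 else -1
    print("col", c + 1, "product = %+dI:" % sign, np.allclose(P, sign * np.eye(4)))
\end{verbatim}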
\begin{figure}
\caption{Mermin's magic pentagram for
three qubits. }
\end{figure}
As the last example, let us see how to derive pigeonhole effects from Greenberger-Horne-Zeilinger (GHZ) paradoxes \cite{ghz,mermin,tang}. Consider Mermin's magic pentagram proof of KS theorem for three qubits as shown in Fig.2. Let the system be prepared in the GHZ state $|\psi_i^s\rangle$, i.e., one of the common eigenstates of three commuting observables $\{G_a=X_aZ_bZ_c\}$, with $(a,b,c)$ being one of three cyclic permutations of $(1,2,3)$, corresponding to eigenvalues $\{s_a=\pm1\}$ respectively. As the post-selection we measure observables $\{X_1,X_2,X_3\}$ with outcomes denoted by $t_a=\pm1$, i.e., the post-selection state is $|\psi_f^t\rangle=|t_1,t_2,t_3\rangle$. We note that outcomes must satisfy $st=-1$ where $s=s_1s_2s_3$ and $t=t_1t_2t_3$ since otherwise we will have orthogonal pre- and post-selected states $$-s\langle\psi_i^s|\psi_f^t\rangle=\langle\psi_i^s|X_{123}|\psi_f^t\rangle=t\langle\psi_i^s|\psi_f^t\rangle=0.$$ For each successful pre- and post-selection, if we had measured observables $Z_{ab}$ for $a,b=1,2,3$ then we would register outcome ${v_{ab}}=t_cs_c$ because otherwise
\begin{eqnarray*}
s_ct_c\langle\psi_i^s|\Pi^{v_{ab}}_{{ab}}|\psi_f^t\rangle&=&
\langle\psi_i^s|G_c\Pi^{v_{ab}}_{{ab}}X_c|\psi_f^t\rangle\\
&=&{v_{ab}}\langle\psi_i^s|\Pi^{v_{ab}}_{{ab}}|\psi_f^t\rangle
\end{eqnarray*}
would vanish. Since $v_{ab}=\pm1$ indicates whether the two qubits are in the same state (with respect to the computational basis) or not, from $v_{12}v_{23}v_{13}=st=-1$ it follows that there is an odd number of qubit pairs in different states, which is impossible because if we put three pigeons in two holes the number of pairs of pigeons in different holes must be even. This is a state-independent version of an early proposal of the pigeonhole effect \cite{vaid} using entangled states. GHZ paradoxes for multilevel (even-dimensional) systems and many particles can be systematically constructed from the so-called GHZ graphs \cite{tang} and the corresponding graph states. Each such paradox gives rise to a similar quantum pigeonhole effect as above (see the Appendix).
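The amplitude relation used in this pentagram argument can again be checked numerically; in the sketch below (illustration only) the pre-selected state $|\psi_i^s\rangle$ is obtained, as a choice of ours, by applying the stabilizer projector $\prod_a(I+s_a G_a)/2$ to a fixed fiducial vector.
\begin{verbatim}
# Pre-/post-selection check for the pentagram setup: with G_a = X_a Z_b Z_c,
# the amplitude <psi_i^s| Pi_ab^{v} |psi_f^t> vanishes unless v = s_c t_c.
# Illustrative sketch only.
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
kron = lambda ops: reduce(np.kron, ops)
x_eig = {+1: np.array([1, 1]) / np.sqrt(2), -1: np.array([1, -1]) / np.sqrt(2)}

def single(op, a):
    ops = [I2, I2, I2]; ops[a] = op
    return kron(ops)

G = [single(X, a) @ single(Z, b) @ single(Z, c)
     for (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]]

worst = 0.0
for s in product([1, -1], repeat=3):
    P = reduce(np.matmul, [(np.eye(8) + s[a] * G[a]) / 2 for a in range(3)])
    col = np.argmax(np.linalg.norm(P, axis=0))            # rank-1 projector
    psi_i = P[:, col] / np.linalg.norm(P[:, col])
    for t in product([1, -1], repeat=3):
        if np.prod(s) * np.prod(t) != -1:
            continue                            # post-selection impossible otherwise
        psi_f = kron([x_eig[ti] for ti in t])
        for (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
            v = -s[c] * t[c]                    # the forbidden outcome of Z_ab
            Pi = (np.eye(8) + v * single(Z, a) @ single(Z, b)) / 2
            worst = max(worst, abs(psi_i.conj() @ Pi @ psi_f))
print("largest forbidden amplitude:", worst)    # ~1e-16
\end{verbatim}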
To summarize, we have established an intimate relation between the quantum pigeonhole effects and proofs of KS theorem. On the one hand this relation enables us to understand those paradoxical effects in terms of contextuality, since the assumption of noncontextuality is indispensable in all these paradoxical effects. On the other hand this relation helps to obtain from known proofs of KS theorem some other interesting effects, such as state-independent versions and the impossibility of putting two pigeons in more than two pigeonholes. Most interestingly, the latter effect is related to the quantum Cheshire cat, for which we also find a state-independent version with the help of Peres-Mermin's magic square. From the experimental point of view, the removal of the dependency on the post-selection states enhances the success probability by 300\%. Lastly, it will be interesting to find out what kinds of paradoxical effects correspond to those KS proofs without logical contradictions.
We are indebted to M. Howard for pointing out a mistake in a previous version and to L. Vaidman for providing a helpful reference \cite{vaid}. This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and the National Research
Foundation, Singapore (Grant No. WBS: R-710-000-008-271).
\section*{Appendix}
Here we shall demonstrate how to derive quantum pigeonhole paradoxes from GHZ paradoxes obtained from qudit graph states corresponding to GHZ graphs \cite{tang}.
A weighted graph is defined by a set $V=\{1,2,\ldots,n\}$ of $n$ vertices and a set of edges specified by the adjacency matrix $\Gamma$, whose entry $\Gamma_{ab}\in {\mathbb Z}_d=\{0,1,\ldots,d-1\}$ denotes the weight of the edge connecting two vertices $a,b$. We consider here only undirected graphs without self loops, so that the adjacency matrix is symmetric and has zero diagonal entries. A GHZ graph \cite{tang} is defined to be a weighted graph $G=(V,\Gamma)$ satisfying
$$d_a:=\sum_{b\in V}\Gamma_{ab}\equiv 0\mod d\quad (a\in V),$$
and
$$W:=\sum_{a>b}\Gamma_{ab}\not\equiv 0\mod d.$$
As a result $d$ must be even and $\omega^W=-1$ with $\omega=e^{i\frac{2\pi}d}$. GHZ graphs exist for all values of $n\ge3$ and even values of $d$ with some examples shown in Fig.3.
Consider a system of $n$ qudits labeled with $V$. For each qudit $a\in V$ we denote by $\{|k\rangle_a\mid {k\in {\mathbb Z}_d}\}$ its computational basis and by
\begin{equation*}
{\mathcal X}_a=\sum_{k\in {\mathbb Z}_d}|k\rangle\langle k\oplus1|_a,\quad {\mathcal Z}_a=\sum_{k\in {\mathbb Z}_d}\omega^{k}|k\rangle\langle k|_a
\end{equation*}
its generalized bit- and phase-shift operators, which satisfy ${\mathcal Z}^d_a={\mathcal X}_a^d=I$ and the commutation rule ${\mathcal X}_a{\mathcal Z}_a=\omega{\mathcal Z}_a{\mathcal X}_a$. We denote
$${\mathcal X}_V:=\bigotimes_{a\in V}{\mathcal X}_a,\quad {\mathcal Z}_{N_a}:=\bigotimes_{b\in V}{\mathcal Z}_b^{\Gamma_{ab}},\quad a\in V.$$
The graph state $|G\rangle$ for a given graph $G=(V,\Gamma)$ is defined as the common $+1$ eigenstate of the $n$ commuting unitary observables $\{{\mathcal G}_a={\mathcal X}_a{\mathcal Z}_{N_a}\}$, i.e., ${\mathcal G}_a|G\rangle=|G\rangle$ for all $a\in V$. For a GHZ graph it holds that
$$\prod_{a\in V} {\mathcal G}_a=\omega^W{\mathcal X}_V\prod_{a\in V}{\mathcal Z}_{N_a}=-{\mathcal X}_V\bigotimes_{a\in V}{\mathcal Z}_a^{d_a}=-{\mathcal X}_V$$
based on which we can construct a GHZ paradox as well as a state-independent parity proof of KS theorem as shown in Table II.
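As a small illustrative check (not part of the construction), the following sketch builds the generalized shift operators for arbitrary $d$, verifies the commutation rule ${\mathcal X}_a{\mathcal Z}_a=\omega{\mathcal Z}_a{\mathcal X}_a$, and confirms the identity $\prod_a{\mathcal G}_a=-{\mathcal X}_V$ for one simple GHZ graph of our own choosing, the triangle with $n=3$, $d=2$ and all weights equal to $1$ (for which $d_a=2\equiv0\bmod 2$ and $W=3\not\equiv0\bmod 2$).
\begin{verbatim}
# Generalized shift operators, the rule X Z = w Z X, and the identity
# prod_a G_a = - X_V for the triangle GHZ graph (n = 3, d = 2, weights 1).
# Illustrative sketch only.
import numpy as np
from functools import reduce

def shift_ops(d):
    w = np.exp(2j * np.pi / d)
    Xd = np.zeros((d, d), dtype=complex)
    for k in range(d):
        Xd[k, (k + 1) % d] = 1.0          # X = sum_k |k><k+1|
    Zd = np.diag(w ** np.arange(d))       # Z = sum_k w^k |k><k|
    return Xd, Zd, w

for d in (2, 3, 4, 6):
    Xd, Zd, w = shift_ops(d)
    assert np.allclose(Xd @ Zd, w * Zd @ Xd)      # X Z = w Z X

# Triangle GHZ graph: V = {1,2,3}, Gamma_ab = 1 for a != b, d = 2.
d = 2
Xd, Zd, _ = shift_ops(d)
kron = lambda ops: reduce(np.kron, ops)
Gamma = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])

def G(a):                                 # G_a = X_a prod_b Z_b^{Gamma_ab}
    ops = [np.linalg.matrix_power(Zd, Gamma[a, b]) for b in range(3)]
    ops[a] = Xd @ ops[a]
    return kron(ops)

X_V = kron([Xd] * 3)
print(np.allclose(G(0) @ G(1) @ G(2), -X_V))      # True
\end{verbatim}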
\begin{figure}
\caption{(Color online.) Examples of GHZ graphs \cite{tang}.}
\end{figure}
\begin{table}
$$
\begin{array}{|c|c|cc|c|c|c|}\hline
{\mathcal X}_1& {\mathcal X}_2&\ldots&\ldots& {\mathcal X}_{n-1}&{\mathcal X}_n& {\mathcal X}_V^\dagger\\
\hline
{\mathcal Z}_{N_1}& {\mathcal Z}_{N_2}&\ldots&\ldots& {\mathcal Z}_{N_{n-1}}&{\mathcal Z}_{N_n}& I\\
\hline
{\mathcal G}_1^\dagger& {\mathcal G}_2^\dagger&\ldots&\ldots& {\mathcal G}_{n-1}^\dagger&{\mathcal G}_n^\dagger& {\mathcal X}_V\\
\hline
\end{array}
$$
\caption{Mermin's magic configuration for qudits. }
\end{table}
All unitary observables $\{{\mathcal O}_{rc}\}$ labeled with rows $r=1,2,3$ and columns $c=1,2,\ldots, n+1$ in Table II have spectrum ${\mathbb U}_d=\{\omega^k\mid k\in {\mathbb Z}_d\}$ and observables in the same column or row commute with each other. Quantum mechanically, it is clear that we have an identity
\begin{equation*}
\prod_{r=1}^3\prod_{c=1}^{n+1}{\mathcal O}_{rc}\prod_{c=1}^{n+1}\prod_{r=1}^3{\mathcal O}_{rc}^\dagger=-1.
\end{equation*}
In noncontextual models the $3(n+1)$ observables $\{{\mathcal O}_{rc}\}$ assume predetermined noncontextual values $v_{rc}\in {\mathbb U}_d$, and the above product with the observables replaced by their realistic values equals $\prod_{r,c}|v_{rc}|^2=1\not=-1$, a contradiction.
A quantum pigeonhole paradox about the trouble of putting $n$ pigeons into $d$ holes can be demonstrated as follows. Let the system of $n$ qudits be prepared in the common eigenstate $|\psi_i^g\rangle$ of $\{{\mathcal G}_a\}$ corresponding to eigenvalues $\{g_a\in {\mathbb U}_d\}$ and be post-selected into the common eigenstate $|\psi_f^h\rangle$ of $\{{\mathcal X}_a\}$ with outcomes $\{h_a\in {\mathbb U}_d\}$. The outcomes $\{h_a\}$ and $\{g_a\}$ are jointly possible only if $$\prod_ah_a=-\prod_ag_a$$
because otherwise the pre- and post-selected states will be orthogonal. Given successful pre- and post-selection, if we had measured the observable ${\mathcal Z}_{N_a}$ in the intermediate stage we would obtain outcome $S_a=g_ah_a^*$ for each $a\in V$ because otherwise
\begin{eqnarray*}
S_a\langle\psi_i^g| \Pi^{{\mathcal Z}_{N_a}}_{S_a}|\psi_f^h\rangle&=&\langle\psi_i^g|{\mathcal Z}_{N_a} \Pi^{{\mathcal Z}_{N_a}}_{S_a}|\psi_f^h\rangle\\
&=&\langle\psi_i^g|{\mathcal G}_a \Pi^{{\mathcal Z}_{N_a}}_{S_a} {\mathcal X}_a^\dagger|\psi_f^h\rangle\\
&=&g_ah_a^*\langle\psi_i^g| \Pi^{{\mathcal Z}_{N_a}}_{S_a} |\psi_f^h\rangle
\end{eqnarray*}
will vanish, where we have denoted by $\Pi_O^{{\mathcal O}}$ the projection to the eigenspace of ${\mathcal O}$ corresponding to eigenvalue $O$. However if these values $\{S_a\}$ are measured in the context $\{{\mathcal Z}_a\}$, whose outcomes are denoted by $\{s_a\in{\mathbb U}_d\}$, or a possible configuration of $n$ pigeons in $d$ pigeonholes, then we should have $\prod_aS_a=\prod_as_a^{d_a}=1$, a contradiction. In other words, the possible answers to $n$ questions about how $n$ pigeons are distributed in $d$ pigeonholes, namely, ${\mathcal Z}_{N_a}$ with $a\in V$, provided by measurement context $\{{\mathcal Z}_{N_a},{\mathcal X}_V\}$ are incompatible with every configuration of directly putting $n$ pigeons into $d$ levels, namely, the measurement context $\{{\mathcal Z}_a\}$.
To demonstrate a paradox, the pre-selected state is not necessarily an entangled state and can also be chosen to be a product state. For preparation we measure observables $\{{\mathcal Z}_a\}$ with outcomes denoted by $s_a$, respectively, and for post-selection we still measure observables $\{{\mathcal X}_a\}$ with outcomes $h_a$. Thus the pre-selected state $|\psi_i^s\rangle$ is the common eigenstate of $\{{\mathcal Z}_a\}$ corresponding to eigenvalues $\{s_a\}$ while the post-selected state $|\psi_f^h\rangle$ is the common eigenstate of $\{{\mathcal X}_a\}$ corresponding to eigenvalues $\{h_a\}$. All outcomes are possible. In the intermediate stage, if we had measured observable ${\mathcal G}_a$ then we would obtain outcome $g_a=h_aS_a$ where $S_a=\prod_bs_b^{\Gamma_{ab}}$ since otherwise
\begin{eqnarray*}
h_aS_a\langle\psi_i^s| \Pi^{{\mathcal G}_a}_{g_a}|\psi_f^h\rangle&=&\langle\psi_i^s|{\mathcal Z}_{N_a} \Pi^{{\mathcal G}_a}_{g_a} {\mathcal X}_a|\psi_f^h\rangle\\
&=&{g_a}\langle\psi_i^s| \Pi^{{\mathcal G}_a}_{g_a}|\psi_f^h\rangle
\end{eqnarray*}
would vanish. The dilemma lies in the fact that on the one hand the outcome for measuring observable ${\mathcal X}_V$ should be $-\prod_ag_a=-\prod_a h_a$, since $\prod_aS_a=1$, from the above arguments and on the other hand $|\psi_f^h\rangle$ is an eigenstate of ${\mathcal X}_V$ corresponding to eigenvalue $\prod_ah_a$.
\end{document}
\begin{document}
\title{No Switching Policy is Optimal for a \rev{Positive} Linear System with a Bottleneck Entrance}
\thispagestyle{empty}
\pagestyle{empty}
\begin{abstract}
We consider a nonlinear SISO system that is a cascade of a scalar
``bottleneck entrance'' and an arbitrary Hurwitz positive linear system.
This system entrains, i.e., in response to a $T$-periodic inflow every solution converges to a unique~$T$-periodic solution of the system.
We study the problem of maximizing the averaged throughput
via controlled switching. The objective is to choose a periodic inflow rate with a given mean value that maximizes the averaged outflow rate of the system. We compare two strategies:
1)~switching between a high and low value,
and
2)~using a constant inflow equal to the prescribed mean value. We show that no switching policy can outperform a constant inflow rate, though it can approach it asymptotically.
We describe several potential applications
of this problem in traffic systems, ribosome flow models, and scheduling at security checks.
\end{abstract}
\begin{IEEEkeywords}
Entrainment,
switched systems,
airport security,
ribosome flow model,
traffic systems.
\end{IEEEkeywords}
\section{Introduction} \label{sec:intro}
\IEEEPARstart{M}{aximizing} the throughput is crucial in a wide range of nonlinear applications such as traffic systems, ribosome flow models~(RFMs), and scheduling at security checks.
We model the occupancy at time~$t$ in such applications by the
normalized state-variable~$x(t) \in [0,1]$.
In traffic systems,~$x(t)$ can be interpreted as the number of vehicles relative to the maximum capacity of a highway segment.
In biological transport models, $x(t)$ can be
interpreted as the probability that a biological ``machine'' (e.g. ribosome, motor protein)
is bound to a specific segment of the ``trail'' it is traversing (e.g. mRNA molecule, filament) at time~$t$.
For the security check, it is the number of passengers at a security gate relative to its capacity.
The output in such systems is a nonnegative
outflow which can be interpreted as the rate of cars exiting the highway for the traffic system, ribosomes leaving the~mRNA for the~RFM~\cite{margaliot2012stability}, and passengers leaving the gate for the security check.
The inflow rates are often periodic, such as those controlled by traffic light signals,
the periodic cell-cycle division process, or periodic flight schedules. Proper functioning often
requires \emph{entrainment} to such excitations, i.e., internal
processes must operate in a periodic pattern with the same period as the excitation~\cite{Glass1979}.
In this case, in response to a~$T$-periodic inflow the outflow converges to a~$T$-periodic
pattern, and the
\textit{throughput} is then defined as the ratio of the average outflow relative
to the average inflow over the period~$T$.
As a general model for studying such applications,
we consider the cascade of two systems shown in Fig.~\ref{f.cascade}.
The first block is called the \emph{bottleneck} and is given by:
\begin{align} \label{eq:con}
\dot{x}(t) &= \sigma(t) (1-x(t)) - \lambda x(t),\nonumber\\
w(t)& = \lambda x(t),
\end{align}
where $\sigma(t) >0$ is the inflow rate at time~$t$,
$x(t) \in [0,1]$ is the occupancy of the bottleneck,
and~$\lambda>0$ controls the output flow~$w(t)$.
The rate of change of the occupancy is proportional to the inflow rate~$\sigma(t)$ and the
\emph{vacancy}~$1-x(t)$, that is, as the occupancy increases the effective entry rate decreases.
We assume that the inflow is periodic with period~$T\geq 0$, i.e. $\sigma(t+T)=\sigma(t)$ for all~$t\ge0$. The occupancy~$x(t)$ (and thus also~$w(t)$)
entrains, as the system is
contractive~\cite{sontag_cotraction_tutorial,LOHMILLER1998683}.
In other words, for any initial condition~$x(0)\in[0,1]$
the solution~$x(t)$ converges to a unique~$T$-periodic solution denoted~$x_\sigma$ and thus~$w $
converges to a~$T$-periodic solution~$w_\sigma$.
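As a simple numerical illustration of this entrainment property (a sketch of ours, with an arbitrarily chosen square-wave inflow and parameter values), the following Python snippet integrates \eqref{eq:con} from several initial conditions and checks that, after a transient, all trajectories coincide on a common $T$-periodic orbit.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, T = 1.0, 2.0
sigma = lambda t: 1.5 if (t % T) < T / 2 else 0.5   # T-periodic inflow

def f(t, x):                      # bottleneck dynamics
    return [sigma(t) * (1.0 - x[0]) - lam * x[0]]

t_eval = np.linspace(0.0, 20 * T, 4001)             # spacing 0.01
sols = [solve_ivp(f, (0.0, 20 * T), [x0], t_eval=t_eval,
                  max_step=1e-2).y[0] for x0 in (0.0, 0.5, 1.0)]

tail = slice(-201, None)          # the last period (T = 2 here)
# after the transient all initial conditions give the same T-periodic orbit
print(np.max(np.abs(sols[0][tail] - sols[2][tail])))      # ~ 0
print(np.max(np.abs(sols[0][tail] - sols[0][-401:-200]))) # x(t) ~ x(t - T)
\end{verbatim}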
The outflow of the bottleneck is the
input into a Hurwitz positive linear system:
\begin{align}\label{linear_system}
\dot z &= Az + b w ,\nonumber \\
y & =c^T z,
\end{align}
where $A \in \mathbb R^{n\times n}$ is Hurwitz and
Metzler and~$ b,c \in \mathbb R^n_+$ (see Fig.~\ref{f.cascade}).
It is clear that for a~$T$-periodic~$\sigma(t)$, all trajectories of the cascade
converge to a unique trajectory~$(x_\sigma(t),z_\sigma(t))$ with~$x_\sigma(t)=x_\sigma(t+T)$
and~$z_\sigma(t)=z_\sigma(t+T)$.
\begin{figure}
\caption{Cascade system: the bottleneck is feeding a positive linear system.}
\label{f.cascade}
\end{figure}
Our goal is to compare the average (over a period) of~$y_\sigma(t)$
for various~$T$-periodic inflows.
To make a meaningful comparison,
we consider inflows that have a fixed mean~$\bar \sigma>0$, i.e., \begin{equation}\label{average}
\frac 1T \int_0^T \sigma(t) \diff t
= \bar \sigma.
\end{equation}
The objective is to maximize the gain of the system from~$\sigma$ to~$y$,
i.e., to maximize~$\int_0^T y_\sigma(t) \diff t $ for inputs with mean~$\bar \sigma$.
The trivial periodic inflow rate is the constant rate~$\sigma(t)\equiv \bar \sigma$.
Here, we compare the outflow for this constant inflow
with that obtained for
an inflow that switches between two values~$\sigma_1$ and~$\sigma_2$
such that $\sigma_2>\bar \sigma >\sigma_1>0$.
In other words, $\sigma(t)\in \{\sigma_1,\sigma_2\}$ is periodic and satisfies~\eqref{average}.
As an application, consider the
traffic system depicted in Fig.~\ref{f.traffic}. There are two flows of vehicles with different rates~$\sigma_1,\sigma_2$ (e.g., cars and trucks)
each moving in a separate road and joining
into a two-lane highway. This can be done in two ways. The
first is to place traffic lights at the end of each road, and switch between them before entering the highway as in Fig.~\ref{f.traffic}(a). The periodic traffic light signal~$\sigma(t)$ switches between the two flows,
hence~$\sigma(t) \in \{\sigma_1,\sigma_2\}$. The second strategy is to have each road constricted to a single lane, and then each joining the corresponding lane in the highway as in Fig.~\ref{f.traffic}(b). Hence, the inflow rate is constant
and equal to~$ (\sigma_1+\sigma_2)/2$. In both cases, the occupancy~$x(t)$ of the highway is modeled by~\eqref{eq:con}. For a proper comparison, we require~$\tfrac 1T \int_0^T \sigma(t) \diff t = (\sigma_1+\sigma_2)/2$ as discussed before.
\begin{figure*}
\caption{Traffic system application illustrating the two strategies.
Here $x(t) \in[0,1]$ denotes the occupancy of the bottleneck at time~$t$.
(a)~The inflow rate switches via periodically-varying
traffic lights between two flows with rates $\sigma_1,\sigma_2$. At each time, either vehicles in the upper lane or vehicles in the lower lane can enter the bottleneck, but not both.
(b)~The double lane of each flow is restricted to a single lane and connected directly to the corresponding lane in the bottleneck. }
\label{f.traffic}
\end{figure*}
Another application is the ribosome flow model~(RFM)~\cite{reuveni2011genome}
which is
a deterministic model
for ribosome flow along the mRNA molecule.
It can be derived via a dynamic mean-field
approximation of a fundamental model from statistical physics called
the totally asymmetric simple exclusion process~(TASEP) \cite{TASEP_tutorial_2011,
solvers_guide}. In~TASEP particles hop randomly
along a chain of ordered sites. A site can be either free or contain a single particle.
Totally asymmetric means that the flow is unidirectional, and simple exclusion means that a particle can only hop into a free site. This models the fact that two particles cannot be in the same place at the same time. Note that this generates an indirect coupling between the particles. In particular, if a particle is delayed at a site for a long time then the particles behind it cannot move forward and thus
a ``traffic jam'' of occupied sites may evolve. There is considerable interest in the evolution
and impact of
traffic jams of various ``biological machines'' in the cell (see, e.g.~\cite{Ross5911,ribo_q_2018}).
The~RFM is a nonlinear compartmental model with~$n$ sites.
The state-variable~$x_i(t)$, $i=1,\dots,n$,
describes the normalized density of particles
at site~$i$ at time~$t$, so
that~$x_i(t)=0$ [$x_i(t)=1$] means that site~$i$ is completely
empty [full] at time~$t$. The state-space is thus the closed unit cube~$[0,1]^n$. The model includes~$n+1$ parameters~$\lambda_i$, $i=0,\dots,n$,
where~$\lambda_i>0$ controls the transition rate from site~$i$ to site~$i+1$.
In particular,~$\lambda_0$ [$\lambda_n$] is called the initiation [exit] rate.
The RFM dynamics is described by~$n$ first-order ODEs:
\begin{align}\label{eq:rfm}
\dot x_k &= \lambda_{k-1} x_{k-1} (1-x_k )
- \lambda_k x_k (1-x_{k+1} ), \; k=1,\dots,n,
\end{align}
where we define~$x_0(t)\equiv 1$ and~$x_{n+1}(t)\equiv 0 $.
In the context of translation, the~$\lambda_i$'s
depend on various biomechanical properties for example
the abundance of tRNA molecules that deliver the amino-acids to the ribosomes.
A recent paper suggests that cells vary their~tRNA abundance in order to
control the translation rate~\cite{Torrenteaat6409}.
Thus, it is natural to consider the case where the rates are in fact time-varying.
Ref.~\cite{entrainment} proved, using the fact that the~RFM
is an (almost) contractive system~\cite{cast_book}, that if
all the rates are jointly~$T$-periodic then every solution of the~RFM converges to a unique~$T$-periodic solution~$x^*(t) \in (0,1)^n$.
Consider the RFM with a time-varying initiation
rate~$\lambda_0(t)=\sigma(t)$
and constant~$\lambda_1,\dots,\lambda_n$ such that~$\lambda_i \gg \sigma(t)$ for all~$i\geq 1$ and all~$t$. Then we can expect that the initiation rate becomes the bottleneck rate and thus~$x_i(t)$, $i=2,\dots,n$, converge to values that are close to zero, suggesting that~\eqref{eq:rfm} can be simplified to
\begin{align}\label{eq:rfm_l}
\begin{split}
\dot x_1 &= (1-x_1 )\sigma - \lambda_1 x_1 , \\
\dot x_i&= \lambda_{i-1} x_{i-1} - \lambda_i x_{i},\;\; i\in \{2,\dots,n\},
\end{split}
\end{align}
which has the same form as the cascade in Fig.~\ref{f.cascade}.
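The following Python sketch (ours; the parameter values are chosen arbitrarily, with $\lambda_i\gg\sigma(t)$) integrates the full RFM~\eqref{eq:rfm} with a time-varying initiation rate and the simplified model~\eqref{eq:rfm_l}, and reports the mismatch in the exit flow $\lambda_n x_n$, which is expected to be small in this regime.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n, T = 4, 2.0
lam = np.full(n, 20.0)                   # lambda_1,...,lambda_n >> sigma(t)
sigma = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # lambda_0(t)

def rfm(t, x):                           # full RFM, x_0 = 1, x_{n+1} = 0
    xe = np.concatenate(([1.0], x, [0.0]))
    rate = np.concatenate(([sigma(t)], lam))
    flow = rate * xe[:-1] * (1.0 - xe[1:])
    return flow[:-1] - flow[1:]

def reduced(t, x):                       # simplified cascade model
    dx = np.empty(n)
    dx[0] = sigma(t) * (1.0 - x[0]) - lam[0] * x[0]
    dx[1:] = lam[:-1] * x[:-1] - lam[1:] * x[1:]
    return dx

t_eval = np.linspace(0.0, 20 * T, 2001)
x0 = 0.05 * np.ones(n)
xf = solve_ivp(rfm, (0.0, 20 * T), x0, t_eval=t_eval, max_step=1e-2).y
xr = solve_ivp(reduced, (0.0, 20 * T), x0, t_eval=t_eval, max_step=1e-2).y
print(np.max(np.abs(lam[-1] * (xf[-1] - xr[-1]))))   # small exit-flow mismatch
\end{verbatim}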
The remainder of this
note is organized
as follows. Section~\ref{sec:formulation}
presents a formulation of the problem.
Section~\ref{sec:main} describes our main results for the bottleneck system only, with
the proof given in Section~\ref{sec:proof}.
Section~\ref{sec:casc} shows that the same result can be generalized to the cascade.
\section{Problem Formulation for the Bottleneck~System}\label{sec:formulation}
Fix~$T\geq 0$. For any~$T$-periodic function~$f:\R_+ \to \R$,
let~$\operatorname{ave} (f) : =\frac{1}{T}\int_0^T f(s)\diff s$.
Consider first only the bottleneck
model \eqref{eq:con}.
A~$T$-periodic inflow rate~$\sigma(t)$
induces a unique
attractive~$T$-periodic trajectory that we denote by~$x_\sigma(t)$
and thus a unique~$T$-periodic
production rate~$w_\sigma (t):=\lambda x_\sigma (t)$. Thus, after a transient the
average production rate is~$\operatorname{ave}( {w_\sigma})$.
Now consider the bottleneck system
with a constant inflow rate~$\bar\sigma$.
Each trajectory converges to a unique steady-state~$\bar\sigma/(\lambda+\bar \sigma)$,
and thus to a production rate~$\bar w:=\lambda \bar\sigma/(\lambda+\bar \sigma)$.
The question we are interested
in is: \emph{what is the relation between~$\bar {w} $
and~$\operatorname{ave}( w_\sigma)$}?
To make this a
``fair'' comparison we consider inflow rates from the set~$S_{\bar\sigma,T}$ that includes all
the admissible rates~$\sigma(t)$ that
are~$T$-periodic and satisfy~$\operatorname{ave}(\sigma) = \bar\sigma$. \rev{We note that this problem is related to \emph{periodic optimal control} \cite{bittanti73,gilbert77} where the goal is to find an optimal control~$u$ under the constraint that~$x(T,u)=x(0)$, yet
without the additional constraint enforcing that the average of the control is fixed.}
We call~$ \sup_{\sigma \in S_{\bar\sigma,T}} {\operatorname{ave}(w_\sigma)}/{\bar w}, $ where the~$\sup$ is with respect to all the (non-trivial) rates in~$S_{\bar\sigma,T}$,
the \emph{periodic gain}
of the bottleneck over~$S_{\bar\sigma,T}$.
Indeed, one can naturally argue that the average production rate over a period,
rather than the instantaneous value,
is the relevant quantity. Then a periodic gain~$>1$
implies that we can ``do better'' using periodic rates. A periodic gain of one implies that we neither lose nor gain
anything with respect to the constant rate~$\sigma(t)\equiv \bar\sigma$.
A periodic gain~$<1$ implies that
for any (non-trivial) periodic rate the average production rate is lower than the one obtained for constant
rates. This implies that
entrainment always incurs a cost, as the production rate for constant rates is higher.
\rev{In this paper, we study the periodic gain with respect to inflows that belong to a subset~$B_{\bar\sigma,T} \subset S_{\bar\sigma,T}$ that is defined as follows.} Fix~$\sigma_2>\bar\sigma > \sigma_1>0$ such that~$ (\sigma_1+\sigma_2)/2=\bar\sigma$. Let $\sigma(t)$ be any measurable function
satisfying~$\operatorname{ave}(\sigma)=\bar \sigma$ and~$\sigma(t) \in \{\sigma_1,\sigma_2\}$, or $\sigma(t) \equiv \bar\sigma$. We will actually
allow $\sigma$ to be a combination of the two under a specific condition to be explained below.
Since the system entrains to a periodic signal, it is sufficient to consider it on the compact interval $[0,T]$. The entrained occupancy is periodic, hence we need to enforce the condition $x(0)=x(T)$.
For $\sigma(t) \in \{\sigma_1, \sigma_2\}$ the bottleneck system
switches between two linear systems:
\begin{align*}
\dot x = \sigma_1 - (\sigma_1+\lambda) x ,\quad
\dot x = \sigma_2 - (\sigma_2+\lambda) x.
\end{align*}
If $\sigma(t) \equiv \bar \sigma $ then
\begin{align}\label{ode_constant}
\dot x = \tfrac{\sigma_1+\sigma_2}2 (1-x) - \lambda x,
\end{align}
and adding the constraint $x(0)=x(T)$ implies that $x(t) \equiv \tfrac {\bar\sigma}{\bar\sigma + \lambda}$.
In general, we can consider $\sigma(t) \in \{ \sigma_1, \bar \sigma, \sigma_2\}$ but to have a meaningful combination the occupancy must be constant when $\sigma(t)=\bar\sigma$. In other words, $\sigma(t) = \bar\sigma$ implies that $x(t) = \tfrac {\bar\sigma}{\bar\sigma + \lambda}$. Hence the set of admissible inflows is defined as follows:
\begin{align*}
B_{\bar\sigma,T} &:= \{ \sigma(t)\! \in \! \{\sigma_1,\bar \sigma,\sigma_2\} |\textstyle \operatorname{ave}(\sigma)= \bar\sigma, \sigma(t)\!=\!\sigma(t+T), \\ & \text{ and }
\sigma(t)=\bar\sigma \text{ implies that } x(t)=\bar\sigma/(\lambda+\bar\sigma) \}. \end{align*}
Our objective is to study the following quantity:
\[ J(\sigma(t)) := \operatorname{ave}(\lambda x_\sigma) /( \frac{\lambda \bar\sigma}{\bar\sigma+\lambda}),\]
over $\sigma(t) \in B_{\bar\sigma,T} $.
\rev{ Here $\frac{\lambda \bar\sigma}{\bar\sigma+\lambda}$ is the outflow for the constant input $\sigma(t)=\bar \sigma$, and $\operatorname{ave}(\lambda x_\sigma)$ is the outflow along the unique globally attracting periodic orbit corresponding to the periodic switching strategy.}
\section{ Periodic Gain of the Bottleneck System}\label{sec:main}
The main result of this section can be stated as follows:
\begin{theorem}\label{th1}
Consider the bottleneck system \eqref{eq:con}. If $\sigma \in
B_{\bar \sigma,T}$ then $J(\sigma ) \le 1$.
\end{theorem}
Thus, a constant inflow cannot be outperformed by a periodic one.
\begin{example} \label{ex:1}
Consider a bottleneck system with~$\lambda=1$
and switched inflows~$\sigma(t)$
satisfying~$\operatorname{ave}(\sigma)=1$.
Note that for the
constant inflow~$\sigma(t)\equiv 1$ the
output converges to~$w_\sigma (t)\equiv 1/2 $, so
$\operatorname{ave}(w_\sigma)=1/2$.
Fig.~\ref{f.sim}(a) depicts a histogram of the averaged
outflow for randomly generated switching signals in~$B_{\bar \sigma,T}$.
The mean performance for the switching laws
is~$20\%$ lower than that achieved by the constant inflow.
It follows from averaging theory~\cite[Ch.~10]{khalil} or
using the Lie-Trotter product formula~\cite{Hall2015}
that it is possible to approximate the effect of a constant inflow \rev{with any desired accuracy}
using a \rev{sufficiently} fast switching law, but such a switching law
is not practical in most applications.
Fig.~\ref{f.sim}(b) shows a scatter plot of the average outflow
for a bottleneck system
with~$\lambda=1$ and
different values of~$\bar\sigma$. It may be seen that
it is harder to approach the performance of the constant inflow for higher~$\bar\sigma$.
\end{example}
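The experiment in Example~\ref{ex:1} can be reproduced along the following lines (our sketch, using the closed-form solution of the two linear arcs rather than a generic ODE solver): random bang-bang inflows in~$B_{\bar\sigma,T}$ are generated, the associated periodic orbit is obtained by iterating the period map, and the gain $J(\sigma)$ is computed arc by arc. All sampled gains are strictly below one, in agreement with Theorem~\ref{th1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, sig1, sig2, T = 1.0, 0.5, 1.5, 5.0
sbar = 0.5 * (sig1 + sig2)
w_const = lam * sbar / (lam + sbar)        # outflow for sigma(t) = sbar

def periodic_gain(m):
    # random bang-bang inflow: m (+)-arcs and m (-)-arcs, each family T/2 long
    tp = rng.random(m); tp *= (T / 2) / tp.sum()
    tm = rng.random(m); tm *= (T / 2) / tm.sum()
    durs = np.empty(2 * m); durs[0::2], durs[1::2] = tp, tm
    sigs = np.empty(2 * m); sigs[0::2], sigs[1::2] = sig2, sig1
    b, c = sigs + lam, sigs / (sigs + lam)
    x = sbar / (sbar + lam)
    for _ in range(200):                   # relax onto the periodic orbit
        for tau, bi, ci in zip(durs, b, c):
            x = ci + (x - ci) * np.exp(-bi * tau)
    integral = 0.0                         # int_0^T x(t) dt over one period
    for tau, bi, ci in zip(durs, b, c):
        x_next = ci + (x - ci) * np.exp(-bi * tau)
        integral += ci * tau + (x - x_next) / bi     # per-arc formula
        x = x_next
    return (lam * integral / T) / w_const

gains = [periodic_gain(int(rng.integers(1, 6))) for _ in range(1000)]
print(max(gains) < 1.0)                    # True: J(sigma) < 1
\end{verbatim}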
\begin{figure}
\caption{Performance of the switching policy: (a) Histogram of achievable average outflow rate with $\bar\sigma=1$, $\lambda=1$. (b) Scatter plot with~$\lambda=1$. }
\label{f.sim}
\end{figure}
\section{Proof of Theorem \ref{th1}}\label{sec:proof}
It is useful to parametrize the class of admissible inflows for a given average $\bar\sigma$ as follows:
\begin{equation}\label{control_paramterization}
\sigma(t) = \bar \sigma + \varepsilon \alpha(t),
\end{equation}
where $\alpha(t)$ is measurable,~$\operatorname{ave}(\alpha)=0$,
$\alpha(t) \in \{-1,0,1\}$ for almost all~$t\in[0,T]$,
and~$0\le \varepsilon\le \bar\sigma$.
\rev{Note that
$\alpha(t)=1$ [$\alpha(t)=-1$] corresponds
to $\sigma(t)=\sigma_2$ [$\sigma(t)=\sigma_1$], while $\alpha(t)=0$ corresponds to the constant inflow $\sigma(t)=\bar \sigma=(\sigma_1+\sigma_2)/2$.}
Recall that for every choice of~$\sigma(t)$ we let $x_\sigma$ denote the unique
solution of~\eqref{eq:con} that satisfies~$x(0)=x(T)$.
\subsection{Finite Number of Switchings}
A set $E\subset [0,T]$ is said to be \emph{elementary} if it can be written as a \emph{finite} union of open, closed, or half-open intervals. For a given~$\alpha$,
define~$E^+:=\{t|\alpha(t)=1\}$, $E^-:=\{t|\alpha(t)=-1\}$, and~$E^0:=\{t|\alpha(t)=0\}$. Then $\alpha(t)$ is said to have a finite number of switchings if $E^+$, $E^-$, and~$E^0$ are elementary sets.
We are ready to state the next proposition:
\begin{proposition}\label{th.finite}
Suppose that~$\sigma(t)$ satisfies~\eqref{control_paramterization}
with~$\varepsilon>0$,
$\alpha(t)$ has a finite number of switchings,
and~$\mu(E^0)<T$\rev{, where $\mu$ denotes the Lebesgue measure. }
Then, $
J(\sigma ) <1.
$
\end{proposition}
To prove this we require three simple auxiliary results that we state
as lemmas for easy reference.
\begin{lemma}
Consider the scalar system~$\dot x(t) = a - b x(t)$ with~$x(t_0)=x_0$. Pick~$t_1 \geq t_0$, and
let~$x_1:=x(t_1)$. Then:
\begin{equation} b \int_{t_0}^{t_1} x(t) \diff t = a (t_1-t_0) +x_0-x_1.
\label{eq:intx}
\end{equation}
\end{lemma}
\begin{proof}
The solution at time~$t$ of the scalar equation satisfies
\begin{align}\label{eq:oifp}
b x(t)= b e^{-b(t-t_0)}x_0 + a \Big( 1 - e^{-b(t-t_0)} \Big) ,
\end{align}
and integrating yields
\[
b \int_{t_0}^{t_1} x(s) \diff s=a(t_1-t_0)+ \Big(x_0-\frac{a}{b}\Big)\Big( 1 - e^{-b(t_1-t_0)} \Big),
\]
and combining this with~\eqref{eq:oifp} for~$t=t_1$ gives~\eqref{eq:intx}.
\end{proof}
\begin{lemma} \label{lem:1} For any~$a,b>0$ we have
\begin{align}
\frac{(1-e^{-a})(1-e^{-b})}{1-e^{-(a+b)}} < \frac{a b}{a+b}.
\end{align}
\end{lemma}
\begin{proof} Let $f(a,b):=\frac{1-e^{-(a+b)}}{(1-e^{-a})(1-e^{-b})} - \frac 1a - \frac 1b$. The inequality is proved if we show that $f(a,b)>0$ for all~$a,b>0$. Note that $\lim_{(a,b)\downarrow (0,0)} f(a,b) =0$.
Differentiating~$f$ with respect to $a$ yields
$\frac{\partial f}{\partial a}=\frac 1{a^2}+\frac 1{2-2\cosh(a)}.$
Using the Taylor series of $\cosh(a)$ we find that $2\cosh(a)-2 > a^2$,
so~$\partial f/\partial a>0$. Similarly, $\partial f/\partial b>0$. Hence, $f$ increases in all directions in the positive orthant.
\end{proof}
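A quick numerical spot check of the inequality above (ours, on random samples) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.uniform(1e-3, 10.0, size=(2, 100000))
lhs = (1 - np.exp(-a)) * (1 - np.exp(-b)) / (1 - np.exp(-(a + b)))
rhs = a * b / (a + b)
print(np.all(lhs < rhs))    # True on all random samples
\end{verbatim}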
\begin{lemma}\label{lem:bound}
Consider the scalar systems~$\dot x_i(t) = a_i - b_i x_i(t)$, $i=1,2$, with~$b_i>0$.
Suppose that there exist~$v,w \in \R$ and~$t_1,t_2 > 0$ such that
for~$x_1(0)=v$ and~$x_2(0)=w$, we have~$x_1(t_1)=w$ and~$x_2(t_2)=v$. Then
\begin{align}\label{eq:wmv}
w-v = (c_1-c_2) \frac{ ( 1-e^{-b_1 t_1} )(1-e^{-b_2 t_2} ) }{ 1-e^{-b_1 t_1-b_2 t_2} } ,
\end{align}
where~$c_i:=a_i/b_i$, $i=1,2$. If, furthermore,~$c_1>c_2$ then
\begin{align}\label{eq:bloy}
w-v < (c_1-c_2) b_1 t_1 b_2 t_2 / (b_1 t_1+b_2 t_2 ) .
\end{align}
\end{lemma}
\begin{proof}
We know that $w=e^{-b_1 t_1} v +c_1 (1-e^{-b_1 t_1} )$
and that~$v=e^{-b_2 t_2} w +c_2 (1-e^{-b_2 t_2} )$. Combining these equations yields
\begin{align*}
w&=c_1 (1-e^{-b_1 t_1} )+e^{-b_1 t_1} \left( c_2 (1-e^{-b_2 t_2} ) + e^{-b_2 t_2} w \right ),\\
v&=c_2 (1-e^{-b_2 t_2} ) +e^{-b_2 t_2} \left ( c_1 (1-e^{-b_1 t_1} ) + e^{-b_1 t_1} v \right ) ,
\end{align*}
and this gives~\eqref{eq:wmv}. If~$c_1>c_2$ then Lemma~\ref{lem:1} yields~\eqref{eq:bloy}.
\end{proof}
Going back to our problem, note that the system \eqref{eq:con} with an input $\sigma$
given in~\eqref{control_paramterization} is a
\emph{switched linear system}~\cite{liberzon_book}
which switches between three linear systems in the
form~\rev{$\dot x=a_{z}-b_z x$, $z \in \{-,0,+\}$ (see Fig.~\ref{fig:main})},
with
\begin{equation}
\label{eq:ci}
\begin{alignedat}{3}
a_{-}&:= \bar\sigma -\varepsilon& ,\quad &b_{-}:= \bar \sigma +\lambda-\varepsilon,\\
a_{0}&:= \bar\sigma ,&\quad &b_{0}:= \bar \sigma +\lambda ,\\
a_{+}&:= \bar\sigma + \varepsilon & ,\quad &b_{+}:= \bar \sigma +\lambda + \varepsilon,
\end{alignedat}
\end{equation}
corresponding to~$\alpha(t)=-1$, $0$, and~$1$, respectively.
We refer to the corresponding arcs as a~$z$-arc, with~$z\in\{- ,0, +\}$.
Note that~$b_+>b_0>b_{-}>0$.
Letting~$c_i:=a_i/b_i$, we also have
\begin{align}\label{eq:c1c2}
c_+-c_{- }
&= \frac{2 \lambda \varepsilon} { ( \bar \sigma +\lambda+\varepsilon ) ( \bar \sigma +\lambda-\varepsilon ) } >0.
\end{align}
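Relation~\eqref{eq:c1c2} can also be double checked symbolically, e.g. with the following SymPy snippet (ours):
\begin{verbatim}
import sympy as sp

sbar, lam, eps = sp.symbols('sigma_bar lambda epsilon', positive=True)
c_plus  = (sbar + eps) / (sbar + lam + eps)
c_minus = (sbar - eps) / (sbar + lam - eps)
target  = 2 * lam * eps / ((sbar + lam + eps) * (sbar + lam - eps))
print(sp.simplify(c_plus - c_minus - target))   # prints 0
\end{verbatim}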
Recall that along a $0$-arc the solution satisfies~$x(t)\equiv c_0 $. \rev{
Since~$x(T)=x(0)$, the other arcs form a loop with a finite number of sub-loops. Observe that the $(+)$-arcs and the $(-)$-arcs can be paired: if a $(+)$-arc traverses from~$x(t)=x_i^-$ to~$x(t+t^+_i)=x_i^+$, then there must be a~$(-)$-arc that traverses back from~$x_i^+$ to~$x_i^-$ (see Fig.~\ref{fig:main}). Hence any trajectory can be partitioned into $n$ such pairs of arcs, with switching points $x_1^-,x_1^+,\dots,x_n^-,x_n^+>0$. Let $t_i^+$ [$t_i^-$] be the time spent on the $i$'th $(+)$-arc [$(-)$-arc], respectively. }
Combining this with the assumption that~$\alpha(t)$
has a finite number of switches and~\eqref{eq:intx} implies that
\begin{align*}
\int _0^T x(t)\diff t &= \int_{t\in E^0} x(t)\diff t+ \int_{t\in E^+} x(t)\diff t+ \int_{t\in E^{-}} x(t)\diff t\\
&= c_0 \mu(E^0) + \sum_{i=1}^{n} \left (
c_{+} t_i^+ +(x_i^- - x_i^+ )/b_{+} \right ) \\
& + \sum_{i=1}^{n} \left( c_{- } t_i^- +( x_i^+- x_i^- )/b_{-}\right) .
\end{align*}
Thus,
\begin{align*}
\int _0^T x(t)\diff t
&= c_0 \mu(E^0) +c_+ \mu(E^+) +c_- \mu(E^-) \\
&+ (\frac{1}{b_-} -\frac{1}{b_+} ) \sum_{i=1}^{n} (x_i^+-x_i^- ) .
\end{align*}
It follows from~\eqref{eq:bloy} that
$
x_i^+-x_i^- < (c_+ - c_-) \frac{ b_+ t_i^+ b_- t_i^- }{b_+ t_i ^+ + b_- t_i^- } ,
$
so
\begin{align}\label{eq:poute}
\int _0^T x(t)\diff t
&< c_0 \mu(E^0) +c_+ \mu(E^+) +c_- \mu(E^-)\nonumber \\
&+ (c_+ - c_-) ( {b_+} - {b_-} )
\sum_{i=1}^n \frac{ t_i^+ t_i^- }{ b_+ t_i ^+ + b_- t_i^- } .
\end{align}
Let~$t_i:=t_i^+ +t_i^-$ and~$\delta_i:=t_i^+-t_i^-$. Then
\begin{align}\label{eq:macc}
\frac{ t_i^+ t_i^- }{ b_+ t_i ^+ + b_- t_i^- } &=\frac{ (t_i+\delta_i)(t_i-\delta_i) } { 4(b_0 t_i+\varepsilon \delta_i )}\nonumber \\
&=\frac{t_i^2-\delta_i^2}{4b_0 ( t_i+(\varepsilon \delta_i/b_0 ) )} \nonumber \\
& < \frac{1}{4b_0}(t_i- (\varepsilon \delta_i/b_0 ) ),
\end{align}
where the last inequality follows from the fact that~$b_0^2=(\bar \sigma+\lambda)^2>\varepsilon^2$.
Thus,
\begin{align*}
\sum_{i=1}^n \frac{ t_i^+ t_i^- }{b_+ t_i ^+ + b_- t_i^- }&<
\frac{1}{4b_0} \sum_{i=1}^n \Big( t_i- \frac{\varepsilon \delta_i}{b_0 } \Big)
= \frac{1}{4b_0} \sum_{i=1}^n t_i\\
&= (\mu(E^-)+\mu (E^+))/(4 b_0),
\end{align*}
where we used~$\sum_{i=1}^n\delta_i=\mu(E^+)-\mu(E^-)=0$ (as~$\operatorname{ave}(\alpha)=0$),
and plugging this in~\eqref{eq:poute} yields
\begin{align}\label{eq:ecier}
\int _0^T x(t)\diff t
&< c_0 \mu(E^0) +c_+ \mu(E^+) +c_- \mu(E^-) \\
&+ (c_+ - c_-) ( {b_+} - {b_-} )
(\mu(E^-)+\mu (E^+))/(4b_0).\nonumber
\end{align}
Since~$\operatorname{ave}(\alpha)=0$, $\mu(E^+)-\mu(E^-)=0$, and combining this with the fact that~$\mu(E^+)+\mu(E^-)+\mu(E^0)=T$ yields
\[
\mu(E^+)=\mu(E^-)=(T-\mu(E^0)) /2.
\]
Plugging this and~\eqref{eq:ci} in~\eqref{eq:ecier} and simplifying
yields
$
\int_0^T x(t) \diff t< \frac{ \bar \sigma}{\bar \sigma +\lambda}T ,
$
and this completes the proof of Prop.~\ref{th.finite}.
We note in passing that one advantage of our explicit approach is that
by using a more exact analysis in~\eqref{eq:macc} it is possible to derive
exact results on the ``loss'' incurred by using a non-constant inflow.
\begin{figure}
\caption{A trajectory of the switched system with $x(0)=x(T)$ traverses a loop in the $(\dot x,x)$-plane.
The upper [lower] line corresponds to~$\alpha(t)=1$ [$\alpha(t)=-1$].
There is no line for the case $\alpha(t)=0$ since we assume that the corresponding trajectory stays at a fixed point in the~$(x,\dot x)$ plane.}
\label{fig:main}
\end{figure}
\subsection{Arbitrary Switchings}
For two sets~$A,B$ let~$A \Delta B:=(A\setminus B)\cup (B\setminus A)$ denote the symmetric difference of the sets.
To prove Thm.~\ref{th1}, we need to consider a
measurable signal~$\alpha(t)$ in~\eqref{control_paramterization}.
We use the following characterization of measurable sets:
\begin{lemma}\cite{kolmogorov} Let $E \subset [0,T]$. Then $E$ is measurable if and only if for every $\varepsilon>0$ there exists an elementary set $B_\varepsilon\subset [0,T]$ such that $\mu(E \Delta B_\varepsilon)< \varepsilon$. \label{kolm}
\end{lemma}
We improve on the lemma above, by the following:
\begin{lemma}\label{th.elementary} Let $E \subset [0,T]$. Then $E$ is measurable if and only if for every $\varepsilon>0$ there exists an elementary set $B_\varepsilon\subset [0,T]$ with $\mu(B_\varepsilon)=\mu(E)$ such that $\mu(E \Delta B_\varepsilon)< \varepsilon$.
\end{lemma}
\begin{proof}
Sufficiency is clear. To prove necessity, pick~$\varepsilon>0$.
By Lemma~\ref{kolm}, there exists an elementary
set~$B_{\varepsilon/2} \subset [0,T]$
such that~$\mu(E \Delta B_{\varepsilon/2})< \varepsilon/2$.
We can modify the intervals contained in $B_{\varepsilon/2}$
by up to $\varepsilon/2$ to get $B_\varepsilon $ with $\mu(B_\varepsilon)=\mu(E)$
and~$\mu(E \Delta B_{\varepsilon })< \varepsilon $.
\end{proof}
We generalize Prop.~\ref{th.finite} as follows:
\begin{proposition}\label{th.arbitrary}
If $\sigma(t)$ is given as in~\eqref{control_paramterization} with
$\alpha(t)$ measurable then~$J(\sigma ) \le 1$.
\end{proposition}
\begin{proof}
Let $E^0,E^+,E^-$ be defined as before. \rev{ For any~$i\geq 1$, let $\varepsilon_i:=2^{-i}$. By Lemma~\ref{th.elementary},
there exist elementary
sets~$E_i^0, E_i^+,E_i^-$ such that
$\mu(E_i^0)=\mu(E^0)$
and $\mu(E^0 \Delta E_i^0)<\varepsilon_i=2^{-i}$}, and similarly
for~$E_i^+, E_i^-$. Note that this implies that~$\mu(E_i^0)+\mu(E_i^+)+\mu(E_i^-)=T$. For every~$i\geq 1$
define
\[\alpha_i(t): = \begin{cases}
1,& \text{ if } t \in E_i^+, \\
0, & \text{ if } t \in E_i^0, \\
-1 ,& \text{ if } t \in E_i^-. \end{cases}
\]
Then $\alpha_i(t) $ are elementary simple functions,
and~$\lim_{i\to\infty} \alpha_i(t) = \alpha(t)$ for almost all~$t\in[0,T]$.
For each~$i$, we can apply Prop.~\ref{th.finite} to the periodic
solution~$x_{\alpha_i}$.
Now the proof
follows from Lebesgue's bounded convergence theorem~\cite{kolmogorov}.
\end{proof}
This completes the proof of Thm.~\ref{th1}.
\section{ Periodic Gain of the Cascade System}\label{sec:casc}
We have shown above that a constant inflow
outperforms any switched inflow
for the bottleneck system which is the first block in Fig.~\ref{f.cascade}.
We now show that the result can be generalized if we have a positive
linear system after the bottleneck. We first show that the periodic gain of a Hurwitz
linear system does not depend on the particular periodic signal, but only on the DC gain of the system.
\begin{proposition} \label{prop:9}
Consider a SISO Hurwitz linear system~\eqref{linear_system}.
Let~$w$ be a bounded measurable~$T$-periodic input.
Recall that the output converges to a steady-state~$y_w$
that is~$T$-periodic. Then
\[
\frac{1}{T} \int_0^T y_w(t) \diff t
= H(0) \frac{1}{T} \int_0^T w(t) \diff t ,
\]
where $H(s):=c^T(sI-A)^{-1}b$ is the transfer function of the linear system.
\end{proposition}
\rev{Thus, the periodic gain of a linear system is the
same for any input.}
\begin{proof}
Since $w$ is measurable and bounded,~$w \in L_2 ([0,T])$. Hence, it can be written as a Fourier series~\[
w(t)=\operatorname{ave}( w)+ \sum_i a_i \sin (\omega_i t +\phi_i ).
\]
The output of the linear system
converges to:
$
y_w(t)= H(0) \operatorname{ave}(w) +
\sum_i |H(j\omega_i)| a_i \sin (\omega_i t +\phi_i + \mbox{arg}(H(j\omega_i) )).
$
Each sinusoid in the expansion has period~$T$, so
$ \int_0^T y_w(t) \diff t = T H(0) \operatorname{ave}( w ).$
\end{proof}
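As a numerical illustration of Prop.~\ref{prop:9} (our sketch, with a randomly generated Metzler and strictly diagonally dominant, hence Hurwitz, matrix~$A$ and an arbitrary $T$-periodic input of mean one), the average of the entrained output indeed matches $H(0)$ times the average input:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n, T = 3, 2.0
A = rng.random((n, n)); np.fill_diagonal(A, 0.0)
A -= np.diag(A.sum(axis=1) + 1.0)     # Metzler, strictly diagonally dominant
b, c = rng.random(n), rng.random(n)   # (hence Hurwitz), nonnegative b and c
w = lambda t: 1.0 + 0.8 * np.sin(2 * np.pi * t / T) \
              + 0.3 * np.cos(4 * np.pi * t / T)      # T-periodic, mean 1

sol = solve_ivp(lambda t, z: A @ z + b * w(t), (0.0, 60 * T), np.zeros(n),
                t_eval=np.linspace(59 * T, 60 * T, 2001), max_step=1e-2)
y = c @ sol.y                         # output over the last (entrained) period
H0 = -c @ np.linalg.solve(A, b)       # DC gain H(0) = -c^T A^{-1} b
print(y[:-1].mean(), H0)              # the two numbers agree
\end{verbatim}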
If~$H(0)=-c^TA^{-1}b\geq 0$ (e.g. when the linear system is positive) then
we can extend Thm.~\ref{th1} to the cascade in Fig.~\ref{f.cascade}:
\begin{theorem} \label{th:final}
Consider the cascade of \eqref{eq:con} and~\eqref{linear_system} in Fig.~\ref{f.cascade}. If $\sigma \in B_{\bar \sigma,T}$ then $\frac 1T \int_0^T y_\sigma (t) \diff t \le \tfrac{\lambda \bar\sigma}{\lambda+\bar \sigma} H(0).$
\end{theorem}
Thus, a constant inflow cannot be outperformed by a switched inflow.
\section{Conclusion}
We analyzed the performance of a switching control for a specific type of~SISO
nonlinear system that is relevant in fields such as
traffic control and molecular biology.
\begin{figure}
\caption{Example of the switching design in Fig.~\ref{f.traffic} found in the real world.}
\label{f.google}
\end{figure}
We have shown
that a constant inflow outperforms switching inflows with the same average. The performance of the switching inflow can be enhanced by faster switching, but this is not practical in most
applications (e.g. traffic systems). Nevertheless, the switching inflow can be found in the real world (Fig.~\ref{f.google}), and our analysis
shows that, for our model, such design involves
a deterioration in the throughput of the system.
The periodic gain may certainly be larger than
one for some nonlinear systems and then \rev{nontrivial}
periodic inflows are better than the constant inflow.
For example, given the bottleneck system, let~$\xi(t):= 1- x(t)$.
Then we obtain the dual system
$
\dot \xi(t) = \lambda - (\lambda+\sigma(t)) \xi(t).
$
Defining the output as~$w(t):=\lambda \xi(t)$ it follows
from Thm.~\ref{th1} that here a periodic switching cannot be outperformed
by a constant inflow.
A natural question then is how can one determine
whether the periodic gain of a given nonlinear system is larger or smaller than one.\rev{
Other directions for further research include analyzing
the periodic gain of important nonlinear models like~TASEP
and the~$n$-dimensional~RFM. A possible generalization would be to consider a bottleneck-input boundary linear hyperbolic PDE instead of
a finite-dimensional system and its application to communication and traffic networks (see e.g.~\cite{espitia2017}).}
\section*{Acknowledgments}
This work was partially supported by DARPA FA8650-18-1-7800, NSF 1817936, AFOSR FA9550-14-1-0060, Israel Science Foundation, and the US-Israel Binational Science Foundation.
\end{document}
\begin{document}
\title{Reduced basis approximation and \emph{a posteriori} error estimation for elasticity problems in affinely parametrized geometries}
\begin{abstract}
In this work we consider (hierarchical, Lagrange) reduced basis approximation and {\em a posteriori\/} error estimation for elasticity problems in affinely parametrized geometries. The essential ingredients of the methodology are: a Galerkin projection onto a low-dimensional space associated with a smooth ``parametric
manifold'' --- dimension reduction; an efficient and effective
greedy sampling method for identification of optimal and
numerically stable approximations --- rapid convergence;
an {\em a posteriori\/} error estimation procedure --- rigorous and
sharp bounds for the functional outputs related to the underlying solution or other quantities of interest, like the stress intensity factor; and
Offline-Online computational decomposition strategies --- minimum
{\em marginal cost\/} for high performance in the real-time and many-query (e.g., design and optimization) contexts. We
present several illustrative results for linear elasticity problems in parametrized geometries representing 2D Cartesian or 3D axisymmetric configurations like an arc-cantilever beam, a center crack problem, a composite unit cell or a woven composite beam, a multi-material plate, and a closed vessel.
We consider different parametrizations of the systems: both physical quantities -- to model the materials and loads -- and geometrical
parameters -- to model different geometrical configurations -- with isotropic and orthotropic
materials working in the plane stress and plane strain approximations. We would like to underline the versatility of the methodology across very different problems. As a last example we provide a nonlinear setting with increased complexity.
\end{abstract}
\section{Introduction}
\label{sec:1}
In several fields, from continuum mechanics to fluid dynamics, we need to solve numerically very complex problems that arise from physical laws. Usually we model these phenomena through partial differential equations (PDEs) and we are interested in finding the field solution and also some other quantities that increase our knowledge of the system we are describing. Almost always we are not able to obtain an analytical solution, so we rely on discretization techniques, like Finite Element (FE) or Finite Volume (FV), that furnish an approximation of the solution. We refer to these methods as the ``truth'' ones; they require very high computational costs, especially in a parametrized context.
In fact if the problem depends on some physical or geometrical parameter, the \emph{full-order} or \emph{high-fidelity} model has to be solved many times and this might be quite demanding. Examples of typical applications of relevance are optimization, control, design, bifurcation detection and real time query.
For this class of problems, we aim to replace the high-fidelity problem by one of much lower numerical complexity, through the \emph{model order reduction} approach \cite{MOR2016}.
We focus on the Reduced Basis (RB) method \cite{hesthaven2015certified,Quarteroni2011,quarteroni2015reduced,morepas2017,benner2017model} which provides both \emph{fast} and \emph{reliable} evaluation of an input (parameter)-output relationship. The main features of this methodology are \emph{(i)} those related to the classic Galerkin projection upon which the RB method is built, \emph{(ii)} an \emph{a posteriori} error estimation which provides sharp and rigorous bounds, and \emph{(iii)} an offline/online computational strategy which allows rapid computations.
The goal of this chapter is to present a very efficient \emph{a posteriori} error estimation for parametrized linear elasticity problems. We show many different configurations and settings, applying the RB method to approximate problems in the plane stress and plane strain formulations and dealing with both isotropic and orthotropic materials. We underline that one and the same setting accommodates these very different problems.
This work is organized as follows. In Section 2, we first present a ``unified'' linear elasticity formulation; we then briefly introduce the geometric mapping strategy based on domain decomposition; we end the Section with the affine decomposition forms and the definition of the ``truth'' approximation, upon which we shall build our RB approximation. In Section 3, we present the RB methodology and the offline-online computational strategy for the RB ``compliant'' output. In Section 4, we define our \emph{a posteriori} error estimators for our RB approach, and provide the computation procedures for the two ingredients of our error estimators, which are the dual norm of the residual and the coercivity lower bound. In Section 5, we briefly discuss the extension of our RB methodology to the ``non-compliant'' output. In Section 6, we show several numerical results to illustrate the capability of this method, with a final subsection devoted to providing an introduction to more complex nonlinear problems. Finally, in Section 7, we draw conclusions and discuss future work.
\section{Preliminaries}
\label{sec:2}
In this Section we shall first present a ``unified'' formulation for all the linear elasticity cases -- for isotropic and orthotropic materials, 2D Cartesian and 3D axisymmetric configurations -- we consider in this study. We then introduce a domain decomposition and geometric mapping strategy to recast the formulation in the ``affine forms'', which is a crucial requirement for our RB approximation. Finally, we define the ``truth'' finite element approximation, upon which we shall build the RB approximation, introduced in the next Section.
\subsection{Formulation on the ``Original'' Domain}
\label{subsec:2}
\subsubsection{Isotropic/Orthotropic materials}
We first briefly describe our problem formulation based on the original settings (denoted by a superscript $^{\rm o}$). We consider a solid body in two dimensions $\Omega^{\rm o}(\boldsymbol{\mu}) \in \mathbb{R}^2$ with boundary $\Gamma^{\rm o}$, where $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$} \subset \mathbb{R}^P$ is the input parameter {and} $\mbox{\boldmath$\mathcal{D}$}$ is the parameter domain \cite{sneddon1999mathematical,sneddon2000mathematical}. For the sake of simplicity, in this section, we assume implicitly that any ``original'' quantities (stress, strain, domains, boundaries, etc) with superscript $^{\rm o}$ will depend on the input parameter $\boldsymbol{\mu}$, e.g. $\Omega^{\rm o} \equiv \Omega^{\rm o}(\boldsymbol{\mu})$.
We first make the following assumptions: (i) the solid is free of body forces, (ii) the thermal strains are negligible; note that the extension to include either or both body forces/thermal strains is straightforward. Let us denote by $u^{\rm o}$ the displacement field and by $\mathbf{x}^{\rm o} = (x^{\rm o}_1,x^{\rm o}_2)$ the spatial coordinate; the linear elasticity equilibrium then reads
\begin{equation}\label{eqn:equlibrium}
\frac{\partial \sigma^{\rm o}_{ij}}{\partial x^{\rm o}_j} = 0, \quad \text{in} \ \Omega^{\rm o}
\end{equation}
where $\sigma^{\rm o}$ denotes the stresses, which are related to the strains $\varepsilon^{\rm o}$
by
\begin{equation*}
\sigma^{\rm o}_{ij} = C_{ijkl}\varepsilon^{\rm o}_{kl}, \quad 1 \leq i,j,k,l \leq 2
\end{equation*}
where
\begin{equation*}
\varepsilon^{\rm o}_{kl} = \dfrac{1}{2}\bigg(\frac{\partial u^{\rm o}_k}{\partial x^{\rm o}_l} + \frac{\partial u^{\rm o}_l}{\partial x^{\rm o}_k} \bigg),
\end{equation*}
$u^{\rm o} = (u^{\rm o}_1,u^{\rm o}_2)$ is the displacement and $C_{ijkl}$ is the elastic tensor, which can be expressed in a matrix form as
\begin{equation*}
[\mathbf{C}] = \left[
\begin{array}{cccc}
C_{1111} & C_{1112} & C_{1121} & C_{1122} \\
C_{1211} & C_{1212} & C_{1221} & C_{1222} \\
C_{2111} & C_{2112} & C_{2121} & C_{2122} \\
C_{2211} & C_{2212} & C_{2221} & C_{2222} \\
\end{array}
\right]
= [\mathbf{B}]^T [\mathbf{E}] [\mathbf{B}],
\end{equation*}
where
\begin{equation*}
[\mathbf{B}] = \left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
\end{array}
\right] \quad
[\mathbf{E}] = \left[
\begin{array}{ccc}
c_{11} & c_{12} & 0 \\
c_{21} & c_{22} & 0 \\
0 & 0 & c_{33} \\
\end{array}
\right].
\end{equation*}
The matrix $[\mathbf{E}]$ varies for different material types and is given in the Appendix.
We next consider Dirichlet boundary conditions for both components of $u^{\rm o}$:
\begin{equation*}
u^{\rm o}_i = 0 \quad {\rm{on}} \quad \Gamma_{D,i}^{\rm o},
\end{equation*}
and Neumann boundary conditions:
\begin{eqnarray*}
\sigma^{\rm o}_{ij}e^{\rm o}_{n,j} &=& \bigg\{
\begin{array}{ccc}
f^{\rm o}_ne^{\rm o}_{n,i} & \rm{on} & \Gamma^{\rm o}_{N} \\
0 & \rm{on} & \Gamma^{\rm o}\backslash\Gamma^{\rm o}_{N}\\
\end{array}
\end{eqnarray*}
where $f^{\rm o}_n$ is the specified stress on the boundary edge $\Gamma^{\rm o}_{N}$, and $\mathbf{e}^{\rm o}_n = [e^{\rm o}_{n,1}, e^{\rm o}_{n,2}]$ is the unit normal on $\Gamma^{\rm o}_{N}$. A zero value of $f^{\rm o}_n$ indicates free stress (homogeneous Neumann conditions) on a specific boundary. Here we only consider homogeneous Dirichlet boundary conditions, but extensions to non-homogeneous Dirichlet boundary conditions and/or nonzero traction Neumann boundary conditions are simple and straightforward.
We then introduce the functional space
\begin{equation*}
X^{\rm o} = \{v = (v_1,v_2) \in (H^1(\Omega^{\rm o}))^2 \ | \ v_i = 0 \ {\rm on} \ \Gamma^{\rm o}_{D,i} , i = 1,2\},
\end{equation*}
here $H^1(\Omega^{\rm o}) = \{ v \in L^2(\Omega^{\rm o}) \ | \ \nabla v \in (L^2(\Omega^{\rm o}))^2 \}$ and $L^2(\Omega^{\rm o})$ is the space of square-integrable functions over $\Omega^{\rm o}$. By multiplying \refeq{eqn:equlibrium} by a test function $v \in X^{\rm o}$ and integrating by parts over $\Omega^{\rm o}$ we obtain the weak form
\begin{equation}\label{eqn:wf0}
\int_{\Omega^{\rm o}}\frac{\partial v_i}{\partial x^{\rm o}_j}{C}_{ijkl}\frac{\partial
u^{\rm o}_k}{\partial x^{\rm o}_l}d\Omega^{\rm o} =
\int_{\Gamma^{\rm o}_{N}}{f}^{\rm o}_n e^{\rm o}_{n,j} v_jd\Gamma^{\rm o}.
\end{equation}
Finally, we define our output of interest, which usually is a measurement (of our displacement field or even equivalent derived solutions such as stresses, strains) over a boundary segment $\Gamma^{\rm o}_L$ or a part of the domain $\Omega^{\rm o}_L$. Here we just consider a simple case,
\begin{equation}\label{eqn:output0}
s^{\rm o}(\boldsymbol{\mu}) = \int_{\Gamma^{\rm o}_L}f^{\rm o}_{\ell,i}u^{\rm o}_id\Gamma^{\rm o},
\end{equation}
i.e., the measure of the displacement in either or both the $x^{\rm o}_1$ and $x^{\rm o}_2$ directions along $\Gamma^{\rm o}_L$ with multipliers $f^{\rm o}_{\ell,i}$; more general forms for the output of interest can be handled straightforwardly. Note that our output of interest is a linear function of the displacement; the extension to quadratic outputs can be found in \cite{huynh07:ijnme}.
We can now recover our abstract statement: Given a $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, we evaluate
$$s^{\rm o}(\boldsymbol{\mu}) = \ell^{\rm o}(u^{\rm o};\boldsymbol{\mu}),$$
where $u^{\rm o} \in X^{\rm o}$ satisfies
$$a^{\rm o}(u^{\rm o},v;\boldsymbol{\mu}) = f^{\rm o}(v;\boldsymbol{\mu}), \quad \forall v \in X^{\rm o}.$$
Here $a^{\rm o}(w,v;\boldsymbol{\mu}): X^{\rm o} \times X^{\rm o} \rightarrow \mathbb{R}$, $\forall w,v \in X^{\rm o}$ is the symmetric and positive bilinear form associated to the left hand side term of \refeq{eqn:wf0}; $f^{\rm o}(v;\boldsymbol{\mu}): X^{\rm o} \rightarrow \mathbb{R}$ and $\ell^{\rm o}(v;\boldsymbol{\mu}): X^{\rm o} \rightarrow \mathbb{R}$, $\forall v \in X^{\rm o}$ are the linear forms associated to the right hand side terms of \refeq{eqn:wf0} and \refeq{eqn:output0}, respectively. It will prove convenient to recast $a^{\rm o}(\cdot,\cdot;\boldsymbol{\mu})$, $f^{\rm o}(\cdot;\boldsymbol{\mu})$ and $\ell^{\rm o}(\cdot;\boldsymbol{\mu})$ in the following forms
\begin{equation}\label{eqn:bilinear_o}
a^{\rm o}(w,v;\boldsymbol{\mu}) = \int_{\Omega^{\rm o}}
\left[\dfrac{\partial w_1}{\partial x^{\rm o}_1}, \dfrac{\partial w_1}{\partial x^{\rm o}_2}, \dfrac{\partial w_2}{\partial x^{\rm o}_1}, \dfrac{\partial w_2}{\partial x^{\rm o}_2}, w_1 \right]
[\mathbf{S}^a]
\left[
\begin{array}{c}
\dfrac{\partial v_1}{\partial x^{\rm o}_1} \\ \dfrac{\partial v_1}{\partial x^{\rm o}_2} \\
\dfrac{\partial v_2}{\partial x^{\rm o}_1} \\ \dfrac{\partial v_2}{\partial x^{\rm o}_2} \\
v_1 \\
\end{array}
\right]
d\Omega^{\rm o}, \ \forall w,v \in X^{\rm o},
\end{equation}
\begin{equation}\label{eqn:linear_fo}
f^{\rm o}(v;\boldsymbol{\mu}) =
\int_{\Gamma^{\rm o}_N}
[\mathbf{S}^f]
\left[
\begin{array}{c}
v_1 \\ v_2 \\
\end{array}
\right]
d\Gamma^{\rm o}, \quad \forall v \in X^{\rm o},
\end{equation}
\begin{equation}\label{eqn:linear_lo}
\ell^{\rm o}(v;\boldsymbol{\mu}) = \int_{\Gamma^{\rm o}_L}
[\mathbf{S}^{\ell}]
\left[
\begin{array}{c}
v_1 \\ v_2 \\
\end{array}
\right]
d\Gamma^{\rm o}, \quad \forall v \in X^{\rm o},
\end{equation}
where $[\mathbf{S}^a] \in \mathbb{R}^{5\times 5}$; $[\mathbf{S}^f]\in \mathbb{R}^2$ and $[\mathbf{S}^{\ell}]\in \mathbb{R}^2$ are defined as
\begin{equation*}
[\mathbf{S}^a] = \left[
\begin{array}{ll}
[\mathbf{C}] & [\boldsymbol{0}]^{4\times 1} \\
\left[\boldsymbol{0}\right]^{1\times 4} & 0 \\
\end{array}
\right], \quad
[\mathbf{S}^{f}] = \left[
\begin{array}{cc}
f^{\rm o}_ne^{\rm o}_{n,1} & f^{\rm o}_ne^{\rm o}_{n,2} \\
\end{array}
\right], \quad
[\mathbf{S}^{\ell}] = \left[
\begin{array}{cc}
f^{\rm o}_{\ell,1}& f^{\rm o}_{\ell,2} \\
\end{array}
\right].
\end{equation*}
\subsubsection{Axisymmetric}
Now we shall present the problem formulation for the axisymmetric case. In a cylindrical coordinate system $(r,z,\theta),$\footnote{For the sake of simple illustration, we omit the ``original'' superscript $^{\rm o}$ on $(r,z,\theta)$.} the elasticity equilibrium reads
\begin{eqnarray*}
\dfrac{\partial \sigma^{\rm o}_{rr}}{\partial r} + \dfrac{\partial \sigma^{\rm o}_{zr}}{\partial z} + \dfrac{\sigma^{\rm o}_{rr}-\sigma^{\rm o}_{\theta\theta}}{r} &=& 0 , \quad \text{in} \quad \Omega^{\rm o} \\
\dfrac{\partial \sigma^{\rm o}_{rz}}{\partial r} + \dfrac{\partial \sigma^{\rm o}_{zz}}{\partial z} + \dfrac{\sigma^{\rm o}_{rz}}{r} &=& 0 \nonumber, \quad \text{in} \quad \Omega^{\rm o}
\end{eqnarray*}
where $\sigma^{\rm o}_{rr}$, $\sigma^{\rm o}_{zz}$, $\sigma^{\rm o}_{rz}$, $\sigma^{\rm o}_{\theta\theta}$ are the stress components given by
\begin{equation*}
\left[
\begin{array}{c}
\sigma^{\rm o}_{rr} \\
\sigma^{\rm o}_{zz} \\
\sigma^{\rm o}_{\theta\theta} \\
\sigma^{\rm o}_{rz} \\
\end{array}
\right] = \underbrace{\dfrac{E}{(1+\nu)(1-2\nu)}\left[\begin{array}{cccc}
(1-\nu) & \nu & \nu & 0 \\
\nu & (1-\nu) & \nu & 0 \\
\nu & \nu & (1-\nu) & 0 \\
0 & 0 & 0 & \dfrac{1-2\nu}{2} \\
\end{array}\right]}_{[\mathbf{E}]}
\left[
\begin{array}{c}
\varepsilon^{\rm o}_{rr} \\
\varepsilon^{\rm o}_{zz} \\
\varepsilon^{\rm o}_{\theta\theta} \\
\varepsilon^{\rm o}_{rz} \\
\end{array}
\right]\nonumber,
\end{equation*}
where $E$ and $\nu$ are the axial Young's modulus and the Poisson ratio, respectively. We only consider isotropic materials; however, the extension to general anisotropic materials is possible, as well as to axisymmetric plane stress and plane strain \cite{zienkiewics05:FEM1}.
The strain $\varepsilon^{\rm o}_{rr}$, $\varepsilon^{\rm o}_{zz}$, $\varepsilon^{\rm o}_{rz}$, $\varepsilon^{\rm o}_{\theta\theta}$ are given by
\begin{equation}\label{eqn:stress_axs}
\left[
\begin{array}{c}
\varepsilon^{\rm o}_{rr} \\
\varepsilon^{\rm o}_{zz} \\
\varepsilon^{\rm o}_{\theta\theta} \\
\varepsilon^{\rm o}_{rz} \\
\end{array}
\right] = \left[
\begin{array}{c}
\dfrac{\partial u^{\rm o}_r}{\partial r} \\[6pt]
\dfrac{\partial u^{\rm o}_z}{\partial z} \\[6pt]
\dfrac{u^{\rm o}_r}{r} \\[6pt]
\dfrac{\partial u^{\rm o}_r}{\partial z} + \dfrac{\partial u^{\rm o}_z}{\partial r} \\
\end{array}
\right],
\end{equation}
where $u^{\rm o}_r$, $u^{\rm o}_z$ are the radial displacement and axial displacement, respectively.
Assuming that the axial axis is $x^{\rm o}_2$, letting $[u^{\rm o}_1,u^{\rm o}_2] \equiv [\dfrac{u^{\rm o}_r}{r}, u^{\rm o}_z]$ and denoting \\ $[x^{\rm o}_1, x^{\rm o}_2, x^{\rm o}_3] \equiv [r,z,\theta]$, we can then express \refeq{eqn:stress_axs} as
\begin{equation*}
\left[
\begin{array}{c}
\varepsilon^{\rm o}_{11} \\
\varepsilon^{\rm o}_{22} \\
\varepsilon^{\rm o}_{33} \\
\varepsilon^{\rm o}_{12} \\
\end{array}
\right] = \underbrace{\left[\begin{array}{ccccc}
x^{\rm o}_1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & x^{\rm o}_1 & 1 & 0 & 0 \\
\end{array}\right]}_{[\mathbf{B}_a]}
\left[
\begin{array}{c}
\dfrac{\partial u^{\rm o}_1}{\partial x^{\rm o}_1} \\[7pt]
\dfrac{\partial u^{\rm o}_1}{\partial x^{\rm o}_2} \\[7pt]
\dfrac{\partial u^{\rm o}_2}{\partial x^{\rm o}_1} \\[7pt]
\dfrac{\partial u^{\rm o}_2}{\partial x^{\rm o}_2} \\[7pt]
u^{\rm o}_1 \\
\end{array}
\right]\nonumber.
\end{equation*}
As in the previous case, we consider the usual homogeneous Dirichlet boundary conditions on $\Gamma^{\rm o}_{D,i}$ and Neumann boundary conditions on $\Gamma^{\rm o}$.
Then if we consider the output of interest $s^{\rm o}(\boldsymbol{\mu})$ defined upon $\Gamma^{\rm o}_L$, we arrive at the same abstract statement where
\begin{equation*}
[\mathbf{S}^{a}] = x^{\rm o}_1[\mathbf{B}_a]^T[\mathbf{E}][\mathbf{B}_a], \
[\mathbf{S}^{f}] = \left[(x^{\rm o}_1)^2f^{\rm o}_ne^{\rm o}_{n,1} , \ x^{\rm o}_1f^{\rm o}_ne^{\rm o}_{n,2} \right], \
[\mathbf{S}^{\ell}] = \left[ x^{\rm o}_1f^{\rm o}_ne^{\rm o}_{n,1} ,\ f^{\rm o}_ne^{\rm o}_{n,2}\right].
\end{equation*}
Note that the $x^{\rm o}_1$ multipliers appear in $[\mathbf{S}^{f}]$ during the weak form derivation, while in $[\mathbf{S}^{\ell}]$ they appear in order to retrieve the measurement of the radial displacement $u^{\rm o}_r$ rather than of $u^{\rm o}_1$, due to the change of variables. Also, the $2\pi$ multipliers in both $a^{\rm o}(\cdot,\cdot;\boldsymbol{\mu})$ and $f^{\rm o}(\cdot;\boldsymbol{\mu})$ cancel in the weak form during the derivation, and can be included in $\ell^{\rm o}(\cdot;\boldsymbol{\mu})$, i.e. incorporated into $[\mathbf{S}^{\ell}]$, if the measurement is required to be done in the true (rather than in the axisymmetric) domain.
\subsection{Formulation on Reference Domain}
The RB requires that the computational domain must be parameter-independent; however, our ``original'' domain $\Omega^{\rm o}(\boldsymbol{\mu})$ is obviously parameter-dependent. Hence, to transform $\Omega^{\rm o}(\boldsymbol{\mu})$ into the computational domain, or ``reference'' (parameter-independent) domain $\Omega$, we must perform geometric transformations in order to express the bilinear and linear forms in our abstract statement in appropriate ``affine forms''. This ``affine forms'' formulation allows us to model all possible configurations, corresponding to every $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, based on a single reference-domain \cite{Quarteroni2011,rozza08:ARCME}.
\subsubsection{Geometry Mappings}
We first assume that, for all $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, $\Omega^{\rm o}(\boldsymbol{\mu})$ is expressed as
\begin{equation*}
\Omega^{\rm o}(\boldsymbol{\mu}) = \bigcup_{s = 1}^{L_{\rm reg}}\Omega^{\rm o}_s(\boldsymbol{\mu}),
\end{equation*}
where the $\Omega^{\rm o}_s(\boldsymbol{\mu})$, $s = 1,\ldots,L_{\rm reg}$ are mutually non-overlapping subdomains. In two dimensions, $\Omega^{\rm o}_s(\boldsymbol{\mu})$, $s = 1,\ldots,L_{\rm reg}$ is a set of triangles (or in the general case, a set of ``curvy triangles''\footnote{In fact, a ``curvy triangle'' \cite{rozza08:ARCME} serves as the building block; for its implementation see \cite{rbMIT_URL}.}) such that all important domains/edges (those defining different material regions, boundaries, pressure/traction loaded boundary segments, or boundaries upon which the outputs of interest are calculated) are included in the set. In practice, such a set is generated by a constrained Delaunay triangulation.
We next assume that there exists a reference domain $\Omega \equiv \Omega^{\rm o}(\boldsymbol{\mu}_{\rm ref}) =
\bigcup_{s=1}^{L_{\rm reg}}\Omega_s$ where, for any $\mathbf{x} \in \Omega_s$, $s = 1,\ldots,L_{\rm reg}$, its image $\mathbf{x}^{\rm o} \in \Omega^{\rm o}_s(\boldsymbol{\mu})$ is given by
\begin{equation}\label{eqn:local_mapping}
\mathbf{x}^{\rm o}(\boldsymbol{\mu})
= \mathcal{T}^{\rm aff}_s(\boldsymbol{\mu};\mathbf{x})
= [\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})]\mathbf{x} + [\mathbf{G}^{\rm aff}_s(\boldsymbol{\mu})],
\end{equation}
where $[\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})] \in \mathbb{R}^{2 \times 2}$ and $[\mathbf{G}^{\rm aff}_s(\boldsymbol{\mu})] \in \mathbb{R}^2$. It thus follows from our definitions that $\mathcal{T}_s(\boldsymbol{\mu};\mathbf{x}):\Omega_s\rightarrow\Omega_s^{\rm o}$, $1 \leq s \leq L_{\rm reg}$ is an (invertible) affine mapping from $\Omega_s$ to $\Omega^{\rm o}_s(\boldsymbol{\mu})$, hence the Jacobian $|{\rm det}([\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})])|$ is strictly positive, and that the derivative transformation matrix, $[\mathbf{D}^{\rm aff}_s(\boldsymbol{\mu})] = [\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})]^{-1}$ is well defined. We thus can write
\begin{equation}\label{eqn:spatial_derv}
\frac{\partial}{\partial x^{\rm o}_i} = \frac{\partial x_j}{\partial x^{\rm o}_i}\frac{\partial}{\partial x_j}
= D^{\rm aff}_{s,ij}(\boldsymbol{\mu})\frac{\partial}{\partial x_j}, \quad 1 \leq i,j \leq 2.
\end{equation}
Since, in two dimensions, an affine transformation maps a triangle to a triangle, we can readily calculate $[\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})]$ and $[\mathbf{G}^{\rm aff}_s(\boldsymbol{\mu})]$ for each subdomain $s$ by simply solving a system of six equations formed from \refeq{eqn:local_mapping} by matching parametrized coordinates to reference coordinates for the three triangle vertices.
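A minimal sketch of this vertex-matching computation (ours, with a hypothetical pair of triangles used only for illustration) is the following Python function, which assembles and solves the $6\times6$ system for the entries of $[\mathbf{R}^{\rm aff}_s]$ and $[\mathbf{G}^{\rm aff}_s]$:
\begin{verbatim}
import numpy as np

def affine_map(ref_tri, mapped_tri):
    """Solve the 6x6 system R x + G = x^o for three vertex pairs.

    ref_tri, mapped_tri: (3, 2) arrays of reference / parametrized vertices.
    Returns R (2x2) and G (2,); unknowns ordered R11, R12, R21, R22, G1, G2.
    """
    A = np.zeros((6, 6))
    rhs = np.asarray(mapped_tri, float).ravel()
    for i, (x1, x2) in enumerate(np.asarray(ref_tri, float)):
        A[2 * i, :] = [x1, x2, 0, 0, 1, 0]       # first component equation
        A[2 * i + 1, :] = [0, 0, x1, x2, 0, 1]   # second component equation
    sol = np.linalg.solve(A, rhs)
    return sol[:4].reshape(2, 2), sol[4:]

# hypothetical pair of triangles, for illustration only
ref = [[0, 0], [1, 0], [0, 1]]
mapped = [[1, -1], [3, -1], [1, -0.5]]         # stretch (2, 0.5), shift (1, -1)
R, G = affine_map(ref, mapped)
D = np.linalg.inv(R)              # derivative transformation matrix D = R^{-1}
print(R, G, abs(np.linalg.det(R)))
\end{verbatim}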
We further require a mapping continuity condition: for all $\boldsymbol{\mu} \in \mathcal{D}$,
\begin{equation*}
\mathcal{T}_s(\boldsymbol{\mu};\mathbf{x}) = \mathcal{T}_{s'}(\boldsymbol{\mu};\mathbf{x}),
\quad \forall \mathbf{x} \in \Omega_s \cap \Omega_{s'}, \quad 1 \leq
s,s' \leq L_{\rm reg}.
\end{equation*}
This condition holds automatically if there is no curved edge in the set of $\Omega^{\rm o}_s(\boldsymbol{\mu})$. If a domain contains one or more ``important'' curved edges, special ``curvy triangles'' must be generated appropriately to honour the continuity condition. We refer the readers to \cite{rozza08:ARCME} for the full discussion and the detailed algorithm for such cases.
The global transformation is then defined as follows: for $\mathbf{x} \in \Omega$, the image $\mathbf{x}^{\rm o} \in \Omega^{\rm o}(\boldsymbol{\mu})$ is given by
\begin{equation*}
\mathbf{x}^{\rm o}(\boldsymbol{\mu}) = \mathcal{T}(\boldsymbol{\mu};\mathbf{x}).
\end{equation*}
It thus follows that $\mathcal{T}(\boldsymbol{\mu};\mathbf{x}):\Omega\rightarrow\Omega^{\rm o}(\boldsymbol{\mu})$ is a piecewise-affine geometric mapping.
\subsubsection{Affine Forms}
We now define our functional space $X$ as
\begin{equation*}
X = \{v = (v_1,v_2) \in (H^1(\Omega))^2 | v_i = 0 \ {\rm on} \ \Gamma_{D,i}, i = 1,2\},
\end{equation*}
and recast our bilinear form $a^{\rm o}(w,v;\boldsymbol{\mu})$, by invoking \refeq{eqn:bilinear_o}, \refeq{eqn:local_mapping} and \refeq{eqn:spatial_derv}, to obtain, $\forall w,v \in X$,
\begin{eqnarray*}
a(w,v;\boldsymbol{\mu}) &=& \int_{\bigcup_{s=1}^{L_{\rm reg}}\Omega_s}\left[
\dfrac{\partial w_1}{\partial x_1},
\dfrac{\partial w_1}{\partial x_2},
\dfrac{\partial w_2}{\partial x_1},
\dfrac{\partial w_2}{\partial x_2},
w_1
\right][\mathbf{S}^{a,\rm aff}_s(\boldsymbol{\mu})]
\left[
\begin{array}{c}
\dfrac{\partial v_1}{\partial x_1} \\
\dfrac{\partial v_1}{\partial x_2} \\
\dfrac{\partial v_2}{\partial x_1} \\
\dfrac{\partial v_2}{\partial x_2} \\
v_1 \\
\end{array}
\right]d\Omega,
\end{eqnarray*}
where $[\mathbf{S}^{a,\rm aff}_s(\boldsymbol{\mu})] = [\mathbf{H}_s(\boldsymbol{\mu})][\mathbf{S}^a_s][\mathbf{H}_s(\boldsymbol{\mu})]^T|{\rm det}([\mathbf{R}^{\rm aff}_s(\boldsymbol{\mu})])|$ is the effective elastic tensor matrix, in which
\begin{equation*}
[\mathbf{H}_s(\boldsymbol{\mu})] = \left(
\begin{array}{ccc}
[\mathbf{D}_s(\boldsymbol{\mu})] & [\boldsymbol{0}]^{2 \times 2} & 0 \\
\phantom{1}[\boldsymbol{0}]^{2 \times 2} & [\mathbf{D}_s(\boldsymbol{\mu})] & 0 \\
0 & 0 & 1 \\
\end{array}
\right).
\end{equation*}
Similarly, the linear form $f^{\rm o}(v;\boldsymbol{\mu})$, $\forall v \in X$ can be transformed as
\begin{eqnarray*}
f(v;\boldsymbol{\mu}) = \int_{\bigcup_{s=1}^{L_{\rm reg}}\Gamma_{N_s}}[\mathbf{S}^{f,{\rm aff}}_s]
\left[
\begin{array}{c}
v_1 \\ v_2 \\
\end{array}
\right]
d\Gamma,
\end{eqnarray*}
where $\Gamma_{N_s}$ denotes the portion of $\Gamma_N$ belonging to the subdomain $\Omega_s$ and $[\mathbf{S}^{f,{\rm aff}}_s] = \|[\mathbf{R}_s(\boldsymbol{\mu})]\mathbf{e}_n\|_2[\mathbf{S}^f]$ is the effective load vector, where $\mathbf{e}_n$ is the normal vector to $\Gamma_{N_s}$ and $\|\cdot\|_2$ denotes the usual Euclidean norm. The linear form $\ell(v;\boldsymbol{\mu})$ is transformed in the same manner.
We then replace all ``original'' $x^{\rm o}_1$ and $x^{\rm o}_2$ in the effective elastic tensor matrix $[\mathbf{S}_s^{a,\rm aff}(\boldsymbol{\mu})]$ and the effective load/output vectors $[\mathbf{S}^{f,\rm aff}_s(\boldsymbol{\mu})]$ and $[\mathbf{S}^{\ell,\rm aff}_s(\boldsymbol{\mu})]$ by \refeq{eqn:local_mapping} to obtain an $\mathbf{x}^{\rm o}$-free effective elastic tensor matrix and effective load/output vectors, respectively.\footnote{Here we note that the Young's modulus $E$ in the isotropic and axisymmetric cases (or $E_1$, $E_2$ and $E_3$ in the orthotropic case), under certain conditions, can be a polynomial function of the spatial coordinates $\mathbf{x}^{\rm o}$ as well, and we are still able to obtain our affine forms \refeq{eqn:affine}.}
We next expand the bilinear form $a(w,v;\boldsymbol{\mu})$ by treating each entry of the effective elastic tensor matrix for each subdomain separately, namely
\begin{eqnarray}\label{eqn:bilinear_exp}
a(w,v;\boldsymbol{\mu}) &=& S_{1,11}^{a,\rm aff}(\boldsymbol{\mu})\int_{\Omega_1}\frac{\partial w_1}{\partial x_1}\frac{\partial v_1}{\partial x_1} + S_{1,12}^{a,\rm aff}(\boldsymbol{\mu})\int_{\Omega_1}\frac{\partial w_1}{\partial x_1}\frac{\partial v_1}{\partial x_2} + \ldots \nonumber \\
&& + S_{L_{\rm reg},55}^{a,\rm aff}(\boldsymbol{\mu})\int_{\Omega_{L_{\rm reg}}}w_1w_1.
\end{eqnarray}
Note that here, for simplicity, we consider the case where there is no spatial coordinate dependence in $[\mathbf{S}^{\ell,\rm aff}_s(\boldsymbol{\mu})]$. In general (especially in the axisymmetric case), some or most of the integrals may take the form $\int_{\Omega_s}(x_1)^m(x_2)^n\dfrac{\partial w_i}{\partial x_j}\dfrac{\partial v_k}{\partial x_l}$, where $m,n \in \mathbb{R}$.
Taking into account the symmetry of the bilinear form and of the effective elastic tensor matrix, there will be at most $Q^a = 7L_{\rm reg}$ terms in the expansion. In practice, however, most of the terms can be collapsed by noticing that not only will there be many zero entries in $[\mathbf{S}_s^{a,\rm aff}(\boldsymbol{\mu})]$, $s = 1,\ldots,L_{\rm reg}$, but there will also be many duplicated or ``linearly dependent'' entries, for example, $S_{1,11}^{a,\rm aff}(\boldsymbol{\mu}) = [{\rm Const}]\,S_{2,11}^{a,\rm aff}(\boldsymbol{\mu})$. We can then apply a symbolic manipulation technique \cite{rozza08:ARCME} to identify and eliminate all zero terms in \refeq{eqn:bilinear_exp} and collapse all ``linearly dependent'' terms, ending up with a minimal-$Q^a$ expansion. The same procedure is also applied to the linear forms $f(\cdot;\boldsymbol{\mu})$ and $\ell(\cdot;\boldsymbol{\mu})$.
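As a simplified stand-in for this ``collapsing'' step (the full symbolic manipulation technique is described in \cite{rozza08:ARCME}), the following Python/\texttt{sympy} sketch, with purely hypothetical coefficient entries, drops identically zero entries and groups entries whose ratio is a parameter-independent constant:
\begin{verbatim}
import sympy as sp

mu1, mu2 = sp.symbols('mu1 mu2', positive=True)

# hypothetical entries of the effective elastic tensor matrices
entries = [mu1/(1 - 0.3**2), 2*mu1/(1 - 0.3**2), mu1*mu2, 0*mu1, mu1*mu2/3]

unique, weights = [], []   # representative Theta_q's and each entry's scaling
for e in entries:
    e = sp.simplify(e)
    if e == 0:
        weights.append(None)            # zero term: dropped from the expansion
        continue
    for k, u in enumerate(unique):
        ratio = sp.simplify(e / u)
        if ratio.free_symbols == set(): # constant ratio => linearly dependent
            weights.append((k, float(ratio)))
            break
    else:
        unique.append(e)                # genuinely new Theta_q term
        weights.append((len(unique) - 1, 1.0))

print(len(unique), "independent Theta_q terms out of", len(entries))
\end{verbatim}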
Hence the abstract formulation of the linear elasticity problem in the reference domain $\Omega$ reads as follows: given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, find
\begin{equation*}
s(\boldsymbol{\mu}) = \ell(u(\boldsymbol{\mu});\boldsymbol{\mu}),
\end{equation*}
where $u(\boldsymbol{\mu}) \in X$ satisfies
\begin{equation*}
a(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}) = f(v;\boldsymbol{\mu}), \quad \forall v \in X,
\end{equation*}
where all the bilinear and linear forms are in affine forms,
\begin{eqnarray}\label{eqn:affine}
a(w,v;\boldsymbol{\mu}) &=& \sum_{q=1}^{Q^a}\Theta_q^a(\boldsymbol{\mu}) a_q(w,v), \nonumber \\
f(v;\boldsymbol{\mu}) &=& \sum_{q=1}^{Q^f}\Theta_q^f(\boldsymbol{\mu}) f_q(v), \nonumber \\
\ell(v;\boldsymbol{\mu}) &=& \sum_{q=1}^{Q^{\ell}}\Theta_q^{\ell}(\boldsymbol{\mu}) \ell_q(v), \quad \forall w,v \in X.
\end{eqnarray}
Here $\Theta_q^a(\boldsymbol{\mu})$, $a_q(w,v)$, $q = 1,\ldots,Q^a$; $\Theta_q^f(\boldsymbol{\mu})$, $f_q(v)$, $q = 1,\ldots,Q^f$; and $\Theta_q^{\ell}(\boldsymbol{\mu})$, $\ell_q(v)$, $q = 1,\ldots,Q^{\ell}$ are parameter-dependent coefficients and parameter-independent bilinear and linear forms, respectively.
We close this section by defining several useful terms. We first define our inner product and energy norm as
\begin{equation}\label{eqn:inner_prod}
(w,v)_X = a(w,v;\overline{\boldsymbol{\mu}})
\end{equation}
and $\|w\|_X = (w,w)_X^{1/2}$, $\forall w,v \in X$, respectively, where $\overline{\boldsymbol{\mu}} \in \mbox{\boldmath$\mathcal{D}$}$ is an arbitrary parameter. Other inner products and associated norms are also possible \cite{rozza08:ARCME}. We then define our coercivity and continuity constants as
\begin{equation}\label{eqn:inf}
\alpha(\boldsymbol{\mu}) = \inf_{w\in X}\frac{a(w,w;\boldsymbol{\mu})}{\|w\|_X^2},
\end{equation}
\begin{equation}\label{eqn:sup}
\gamma(\boldsymbol{\mu}) = \sup_{w\in X}\frac{a(w,w;\boldsymbol{\mu})}{\|w\|_X^2},
\end{equation}
respectively. We assume that $a(\cdot,\cdot;\boldsymbol{\mu})$ is symmetric, $a(w,v;\boldsymbol{\mu}) = a(v,w;\boldsymbol{\mu})$, $\forall w,v \in X$, coercive, $\alpha(\boldsymbol{\mu}) > \alpha_0 > 0$, and continuous, $\gamma(\boldsymbol{\mu}) < \gamma_0 < \infty$; and also that $f(\cdot;\boldsymbol{\mu})$ and $\ell(\cdot;\boldsymbol{\mu})$ are bounded functionals. It follows that the problem is well-defined and has a unique solution. These conditions are automatically satisfied given the nature of our considered problems \cite{sneddon1999mathematical, sneddon2000mathematical}.
\subsection{Truth approximation}
From now on, we shall restrict our attention to the ``compliant'' case ($f(\cdot;\boldsymbol{\mu}) = \ell(\cdot;\boldsymbol{\mu})$). The extension to the non-compliant case will be discussed in Section~5.
We now apply the finite element method and we provide a matrix formulation \cite{CMCS-CONF-2009-002}: given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, we evaluate
\begin{equation}\label{eqn:FE_out}
s(\boldsymbol{\mu}) = [\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{u}^\mathcal{N}(\boldsymbol{\mu})],
\end{equation}
where $[\mathbf{u}^\mathcal{N}(\boldsymbol{\mu})]$ represents the finite element solution $u^{\mathcal{N}}(\boldsymbol{\mu}) \in X^{\mathcal{N}} \subset X$ of size $\mathcal{N}$ which satisfies
\begin{equation}\label{eqn:FE_stiff}
[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}(\boldsymbol{\mu})] = [\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})];
\end{equation}
here $[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})]$ and $[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]$ are the (discrete) stiffness matrix and load vector of $a(\cdot,\cdot;\boldsymbol{\mu})$ and $f(\cdot;\boldsymbol{\mu})$, respectively. Note that the stiffness matrix $[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})]$ is symmetric positive definite (SPD). By invoking the affine forms \refeq{eqn:affine}, we can express $[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})]$ and $[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]$ as
\begin{eqnarray}\label{eqn:affine_FE}
[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})] &=& \sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})[\mathbf{K}^\mathcal{N}_q], \nonumber \\
\left[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})\right] &=& \sum_{q = 1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})[\mathbf{F}^\mathcal{N}_q],
\end{eqnarray}
where $[\mathbf{K}^\mathcal{N}_q]$ and $[\mathbf{F}^\mathcal{N}_q]$ are the discrete forms of the parameter-independent bilinear and linear forms $a_q(\cdot,\cdot)$ and $f_q(\cdot)$, respectively. We also denote by $[\mathbf{Y}^\mathcal{N}]$ (an SPD matrix) the discrete form of our inner product \refeq{eqn:inner_prod}. We also assume that the size of our FE approximation, $\mathcal{N}$, is large enough that our FE solution is an accurate approximation of the exact solution.
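For concreteness, a minimal Python sketch of the ``truth'' assembly \refeq{eqn:affine_FE} and solve \refeq{eqn:FE_stiff}--\refeq{eqn:FE_out} might look as follows; the toy matrices below merely stand in for the actual FE blocks $[\mathbf{K}^\mathcal{N}_q]$ and $[\mathbf{F}^\mathcal{N}_q]$, which come from the discretization:
\begin{verbatim}
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import spsolve

def assemble_truth(theta_a, theta_f, Kq_list, Fq_list, mu):
    """Assemble K^N(mu) and F^N(mu) from the parameter-independent pieces
    via the affine expansions."""
    K = sum(theta_a[q](mu) * Kq_list[q] for q in range(len(Kq_list)))
    F = sum(theta_f[q](mu) * Fq_list[q] for q in range(len(Fq_list)))
    return K.tocsc(), F

# toy illustration (the real K_q, F_q come from the FE discretization)
n = 5
Kq_list = [sps.identity(n), sps.diags(np.arange(1.0, n + 1))]
Fq_list = [np.ones(n)]
theta_a = [lambda mu: 1.0, lambda mu: mu[0]]
theta_f = [lambda mu: mu[1]]

K, F = assemble_truth(theta_a, theta_f, Kq_list, Fq_list, mu=(0.5, 2.0))
u = spsolve(K, F)   # truth solution u^N(mu)
s = F @ u           # compliant output s(mu) = F^T u
\end{verbatim}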
\section{Reduced Basis Method}
In this Section we recall the RB method for the ``compliant'' output. We shall first define the RB spaces and the Galerkin projection. We then describe an Offline-Online computational strategy, which allows us to obtain an $\mathcal{N}$-independent calculation of the RB output approximation \cite{hesthaven2015certified,NgocCuong2005}.
\subsection{RB Spaces and Greedy algorithm}
To define the RB approximation we first introduce a (nested) Lagrangian parameter sample for $1 \leq N \leq N_{\max}$,
\begin{equation*}
S_N = \{\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\ldots,\boldsymbol{\mu}_N\},
\end{equation*}
and associated hierarchical reduced basis spaces $(X_N^\mathcal{N} =) W^\mathcal{N}_N$, $1 \leq N \leq N_{\max}$,
\begin{equation*}
W^\mathcal{N}_N = {\rm span}\{u^{\mathcal{N}}(\boldsymbol{\mu}_n),1 \leq n \leq N\} ,
\end{equation*}
where the $\boldsymbol{\mu}_n \in \mbox{\boldmath$\mathcal{D}$}$ are determined by means of a Greedy sampling algorithm \cite{rozza08:ARCME, quarteroni2015reduced}; this is an iterative procedure where at each step a new basis function is added in order to improve the accuracy of the basis set.
The key point of this methodology is the availability of an estimate of the error induced by replacing the full space $X^{\mathcal{N}}$ with the reduced order one $W^\mathcal{N}_N$ in the variational formulation. More specifically we assume that for all $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$ there exist an estimator $\eta(\boldsymbol{\mu})$ such that
\begin{equation*}
|| u^{\mathcal{N}}(\boldsymbol{\mu}) - u^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})|| \leq \eta(\boldsymbol{\mu}) ,
\end{equation*}
where $u^{\mathcal{N}}(\boldsymbol{\mu}) \in X^{\mathcal{N}} \subset X$ represents the finite element solution, $u^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})\in X_N^\mathcal{N} \subset X^\mathcal{N}$ the reduced basis one, and we can choose either the induced or the energy norm.
During this iterative basis selection process, if at the $j$-th step a $j$-dimensional reduced basis space $W^\mathcal{N}_j$ is given, the next basis function is the one that maximizes the estimated model order reduction error over $\mbox{\boldmath$\mathcal{D}$}$, given the $j$-dimensional space $W^\mathcal{N}_j$. That is, at iteration $j+1$ we select $$\boldsymbol{\mu}_{j+1} = \arg \max_{\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}} \eta(\boldsymbol{\mu})$$ and compute $u^{\mathcal{N}}(\boldsymbol{\mu}_{j+1})$ to enrich the reduced space. This is repeated until the maximal estimated error is below a required error tolerance. With this choice the Greedy algorithm always selects the next parameter sample point as the one for which the model error, as estimated by $\eta(\boldsymbol{\mu})$, is maximal; this yields a basis that aims to be optimal in the maximum norm over $\mbox{\boldmath$\mathcal{D}$}$.
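A schematic Python version of this Greedy loop is sketched below; the callables \texttt{solve\_truth} and \texttt{error\_estimator} are placeholders (our names) for the truth solver and the estimator $\eta(\boldsymbol{\mu})$, and the training set is a finite surrogate for $\mbox{\boldmath$\mathcal{D}$}$:
\begin{verbatim}
import numpy as np

def greedy_rb(train_set, solve_truth, error_estimator, tol, n_max):
    """Greedy parameter selection: repeatedly add the sample with the
    largest estimated error until tol or n_max is reached."""
    mu0 = train_set[0]                      # arbitrary starting sample
    S, snapshots = [mu0], [solve_truth(mu0)]
    for _ in range(n_max - 1):
        errs = [error_estimator(mu, snapshots) for mu in train_set]
        worst = int(np.argmax(errs))
        if errs[worst] <= tol:
            break
        S.append(train_set[worst])
        snapshots.append(solve_truth(train_set[worst]))
    return S, snapshots
\end{verbatim}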
Furthermore we can rewrite the reduced space as
\begin{equation*}
W^\mathcal{N}_N = {\rm span}\{\zeta^{\mathcal{N}}_n,1 \leq n \leq N\},
\end{equation*}
where the basis functions $\left\{\zeta^{\mathcal{N}}_n\right\}$ are computed from the snapshots $u^{\mathcal{N}}(\boldsymbol{\mu}_n)$ by a Gram-Schmidt orthonormalization process such that $[\boldsymbol{\zeta}^{\mathcal{N}}_m]^T[\mathbf{Y}^\mathcal{N}][\boldsymbol{\zeta}^{\mathcal{N}}_n] = \delta_{mn}$, where $\delta_{mn}$ is the Kronecker delta. We then define our orthonormalized-snapshot matrix $[\mathbf{Z}_N] \equiv [\mathbf{Z}^\mathcal{N}_N] = [[\boldsymbol{\zeta}^{\mathcal{N}}_1]|\cdots|[\boldsymbol{\zeta}^{\mathcal{N}}_N]]$ of dimension $\mathcal{N} \times N$.
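The $[\mathbf{Y}^\mathcal{N}]$-weighted Gram-Schmidt step can be sketched as follows (a plain classical Gram-Schmidt for illustration; in practice a modified or repeated variant may be preferred for numerical stability):
\begin{verbatim}
import numpy as np

def gram_schmidt_Y(snapshots, Y):
    """Orthonormalize snapshot vectors w.r.t. the inner product u^T Y v."""
    Z = []
    for u in snapshots:
        z = np.array(u, dtype=float)
        for zeta in Z:
            z -= (zeta @ (Y @ u)) * zeta   # remove the component along zeta
        z /= np.sqrt(z @ (Y @ z))          # normalize in the Y-norm
        Z.append(z)
    return np.column_stack(Z)              # columns form [Z_N] (FE dofs x N)

# quick check: Z^T Y Z should be the identity
Y = np.diag([1.0, 2.0, 3.0])
Z = gram_schmidt_Y([np.array([1.0, 0.0, 0.0]),
                    np.array([1.0, 1.0, 0.0])], Y)
assert np.allclose(Z.T @ Y @ Z, np.eye(2))
\end{verbatim}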
\subsection{Galerkin Projection}
We then apply a Galerkin projection on our ``truth'' problem \cite{almroth78:_autom,noor81:_recen,noor82,noor80:_reduc,rozza08:ARCME}: given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, we could evaluate the RB output as
\begin{equation*}
s_N(\boldsymbol{\mu}) = [\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})],
\end{equation*}
where
\begin{equation}\label{eqn:RB_sol}
[\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})] = [\mathbf{Z}_N][\mathbf{u}_N(\boldsymbol{\mu})]
\end{equation}
represents the RB solution $\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu}) \in X_N^\mathcal{N} \subset X^\mathcal{N}$ of size $\mathcal{N}$. Here $[\mathbf{u}_N(\boldsymbol{\mu})]$ is the RB coefficient vector of dimension $N$, which satisfies the RB ``stiffness'' equations
\begin{equation}\label{eqn:RB_semifull}
[\mathbf{K}_N(\boldsymbol{\mu})][\mathbf{u}_N(\boldsymbol{\mu})] = [\mathbf{F}_N(\boldsymbol{\mu})],
\end{equation}
where
\begin{eqnarray}\label{eqn:RB_comp1}
[\mathbf{K}_N(\boldsymbol{\mu})] &=& [\mathbf{Z}_N]^T[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{Z}_N], \nonumber \\
\left[\mathbf{F}_N(\boldsymbol{\mu})\right] &=& [\mathbf{Z}_N]^T[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})].
\end{eqnarray}
Note that the system \refeq{eqn:RB_semifull} is of small size: it is just a set of $N$ linear algebraic equations. We can then evaluate our output as
\begin{equation}\label{eqn:RB_outsemifull}
s_N(\boldsymbol{\mu}) = [\mathbf{F}_N(\boldsymbol{\mu})]^T[\mathbf{u}_N(\boldsymbol{\mu})].
\end{equation}
It can be shown \cite{patera07:book} that the condition number of the RB ``stiffness'' matrix $[\mathbf{Z}_N]^T[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{Z}_N]$ is bounded by $\gamma(\boldsymbol{\mu})/\alpha(\boldsymbol{\mu})$, independently of both $N$ and $\mathcal{N}$.
\subsection{Offline-Online Procedure}
Although the system \refeq{eqn:RB_semifull} is of small size, the computational cost of assembling the RB ``stiffness'' matrix (and the RB ``output'' vector $[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{Z}_N]$) still involves $\mathcal{N}$ and is costly: $O(N\mathcal{N}^2 + N^2\mathcal{N})$ (and $O(N\mathcal{N})$, respectively). However, we can use our affine forms \refeq{eqn:affine} to construct very efficient Offline-Online procedures, as we shall discuss below.
We first insert our affine forms \refeq{eqn:affine_FE} into the expansion \refeq{eqn:RB_semifull} and \refeq{eqn:RB_outsemifull}, by using \refeq{eqn:RB_comp1} we obtain
\begin{equation*}
\sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})[\mathbf{K}_{qN}][\mathbf{u}_N(\boldsymbol{\mu})]
= \sum_{q = 1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})[\mathbf{F}_{qN}]
\end{equation*}
and
\begin{equation*}
s_N(\boldsymbol{\mu}) = \sum_{q = 1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})[\mathbf{F}_{qN}]^T[\mathbf{u}_N(\boldsymbol{\mu})],
\end{equation*}
respectively. Here
\begin{eqnarray*}
[\mathbf{K}_{qN}] &=& [\mathbf{Z}_N]^T[\mathbf{K}^\mathcal{N}_q][\mathbf{Z}_N], \quad 1 \leq q \leq Q^a \\
\left[\mathbf{F}_{qN}\right] &=& [\mathbf{Z}_N]^T[\mathbf{F}^\mathcal{N}_q], \quad 1 \leq q \leq Q^f,
\end{eqnarray*}
are parameter-independent quantities that can be computed just once and then stored for all subsequent $\boldsymbol{\mu}$-dependent queries.
We then observe that all the ``expensive'' matrices $[\mathbf{K}_{qN}]$, $1 \leq q \leq Q^a$, $1 \leq N \leq N_{\max}$ and vectors $[\mathbf{F}_{qN}]$, $1 \leq q \leq Q^f$, $1 \leq N \leq N_{\max}$, are now separated and parameter-independent, hence those can be \emph{pre-computed} in an Offline-Online procedure.
In the Offline stage, we first compute the $[\mathbf{u}^\mathcal{N}(\boldsymbol{\mu}_n)]$, $1 \leq n \leq N_{\max}$, form the matrix $[\mathbf{Z}_{N_{\max}}]$ and then form and store $[\mathbf{F}_{N_{\max}}]$ and $[\mathbf{K}_{qN_{\max}}]$. The Offline operation count depends on $N_{\max}$, $Q^a$ and $\mathcal{N}$, but requires only $O(Q^aN_{\max}^2 + Q^fN_{\max} + Q^\ell N_{\max})$ permanent storage.
In the Online stage, for a given $\boldsymbol{\mu}$ and $N$ ($1 \leq N \leq N_{\max}$), we retrieve the pre-computed $[\mathbf{K}_{qN}]$ and $[\mathbf{F}_{N}]$ (subarrays of $[\mathbf{K}_{qN_{\max}}]$, $[\mathbf{F}_{N_{\max}}]$), form $[\mathbf{K}_N(\boldsymbol{\mu})]$, solve the resulting $N \times N$ system \refeq{eqn:RB_semifull} to obtain $[\mathbf{u}_N(\boldsymbol{\mu})]$, and finally evaluate the output $s_N(\boldsymbol{\mu})$ from \refeq{eqn:RB_outsemifull}. The Online operation count is thus $O(N^3)$ and independent of $\mathcal{N}$. The implication of the latter is two-fold: first, we achieve very fast response in the many-query and real-time contexts, as $N$ is typically very small, $N \ll \mathcal{N}$; and second, we can choose $\mathcal{N}$ arbitrarily large -- to obtain as accurate FE predictions as we wish -- without adversely affecting the Online (marginal) cost.
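The following Python sketch summarizes the Offline projection and the $\mathcal{N}$-independent Online solve; the names are illustrative, and the parameter-independent blocks are assumed to be available from the truth discretization:
\begin{verbatim}
import numpy as np

# Offline (done once): project the parameter-independent pieces onto the RB space
def project_offline(Kq_list, Fq_list, Z):
    KqN = [Z.T @ (Kq @ Z) for Kq in Kq_list]   # N x N matrices, q = 1..Q^a
    FqN = [Z.T @ Fq for Fq in Fq_list]         # N-vectors,      q = 1..Q^f
    return KqN, FqN

# Online (per parameter query): cost depends on N only, not on the FE size
def rb_online(theta_a, theta_f, KqN, FqN, mu):
    K_N = sum(t(mu) * K for t, K in zip(theta_a, KqN))
    F_N = sum(t(mu) * F for t, F in zip(theta_f, FqN))
    u_N = np.linalg.solve(K_N, F_N)            # N x N system
    return F_N @ u_N, u_N                      # compliant output s_N(mu), RB coefficients
\end{verbatim}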
\section{\emph{A posteriori} error estimation}
In this Section we recall the \emph{a posteriori} error estimator for our RB approximation. We shall discuss in detail the computation procedures for the two ingredients of the error estimator: the dual norm of the residual and the coercivity lower bound. We first present the Offline-Online strategy for the computation of the dual norm of the residual; we then briefly discuss the Successive Constraint Method \cite{huynh07:cras} used to compute the coercivity lower bound.
\subsection{Definitions}
We first introduce the error $e^\mathcal{N}(\boldsymbol{\mu}) \equiv u^\mathcal{N}(\boldsymbol{\mu}) - u^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu}) \in X^\mathcal{N}$ and the
residual $r^\mathcal{N}(v;\boldsymbol{\mu}) \in (X^\mathcal{N})'$ (the dual space of $X^\mathcal{N}$), $\forall v \in X^\mathcal{N}$,
\begin{equation}\label{eqn:residual}
r^\mathcal{N}(v;\boldsymbol{\mu}) = f(v;\boldsymbol{\mu}) - a(u^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu}),v;\boldsymbol{\mu}),
\end{equation}
which can be given in the discrete form as
\begin{equation}\label{eqn:FE_residual}
[\mathbf{r}^\mathcal{N}(\boldsymbol{\mu})] = [\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})] - [\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})].
\end{equation}
We then introduce the Riesz representation of $r^\mathcal{N}(v;\boldsymbol{\mu})$: $\hat{e}(\boldsymbol{\mu}) \in X^\mathcal{N}$ defined by $(\hat{e}(\boldsymbol{\mu}),v)_{X^\mathcal{N}} = r^\mathcal{N}(v;\boldsymbol{\mu})$, $\forall v \in X^\mathcal{N}$. In vector form, $\hat{e}(\boldsymbol{\mu})$ can be expressed as
\begin{equation}\label{eqn:rres}
[\mathbf{Y}^\mathcal{N}][\mbox{\boldmath$\hat{e}$}(\boldsymbol{\mu})] = [\mathbf{r}^{\mathcal{N}}(\boldsymbol{\mu})].
\end{equation}
We also require a lower bound to the coercivity constant
\begin{equation}\label{eqn:inf_FE}
\alpha^\mathcal{N}(\boldsymbol{\mu}) = \inf_{w \in X^{\mathcal{N}}}\frac{a(w,w;\boldsymbol{\mu})}{\|w\|^2_{X^{\mathcal{N}}}},
\end{equation}
such that $0< \alpha_{\rm LB}^\mathcal{N}(\boldsymbol{\mu}) \leq \alpha^\mathcal{N}(\boldsymbol{\mu})$, $\forall \boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$.
We may now define our error estimator for our output as
\begin{equation}
\Delta_N^s(\boldsymbol{\mu}) \equiv \frac{\|\hat{e}(\boldsymbol{\mu})\|^2_{X^\mathcal{N}}}{\alpha^\mathcal{N}_{\rm LB}},
\end{equation}
where $\|\hat{e}(\boldsymbol{\mu})\|_{X^\mathcal{N}}$ is the dual norm of the residual.
We can also equip the error estimator with an effectivity defined by
\begin{equation}
\eta_N^s(\boldsymbol{\mu}) \equiv \frac{\Delta_N^s(\boldsymbol{\mu})}{|s^\mathcal{N}(\boldsymbol{\mu})-s_N(\boldsymbol{\mu})|}.
\end{equation}
We can readily demonstrate \cite{rozza08:ARCME, patera07:book} that
\begin{equation*}
1 \leq \eta_N^s(\boldsymbol{\mu}) \leq \frac{\gamma_0(\boldsymbol{\mu})}{\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})}, \quad \forall \boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$};
\end{equation*}
so that the error estimator is both \emph{rigorous} and \emph{sharp}. Note that here we can only claim the \emph{sharp} property for this current ``compliant'' case.
We shall next provide procedures for the computation of the two ingredients of our error estimator: we shall first discuss the Offline-Online strategy to compute the dual norm of the residual $\|\hat{e}(\boldsymbol{\mu})\|_{X^\mathcal{N}}$, and then provide the construction for the lower bound of the coercivity constant $\alpha^\mathcal{N}(\boldsymbol{\mu})$.
\subsection{Dual norm of the residual}
In discrete form, the dual norm of the residual $\varepsilon(\boldsymbol{\mu}) = \|\hat{e}(\boldsymbol{\mu})\|_{X^\mathcal{N}}$ is given by
\begin{equation}\label{eqn:dnres}
\varepsilon^2(\boldsymbol{\mu}) = [\mbox{\boldmath$\hat{e}$}(\boldsymbol{\mu})]^T[\mathbf{Y}^\mathcal{N}][\mbox{\boldmath$\hat{e}$}(\boldsymbol{\mu})].
\end{equation}
We next invoke \refeq{eqn:FE_residual}, \refeq{eqn:rres} and \refeq{eqn:dnres} to arrive at
\begin{eqnarray}\label{eqn:dnres_der1}
\varepsilon^2(\boldsymbol{\mu}) &=& \bigg([\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})] - [\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})]\bigg)^T[\mathbf{Y}^\mathcal{N}]^{-1}\bigg([\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})] - [\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})]\bigg) \nonumber \\
&=& [\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})] - 2[\mathbf{F}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})] \nonumber \\
&& + [\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})]^T[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})].
\end{eqnarray}
We next define the ``pseudo''-solutions $[\mathbf{P}^f_{q}] = [\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{F}_{q}^\mathcal{N}]$, $1 \leq q \leq Q^f$, and $[\mathbf{P}^a_{qN}] = [\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{K}_q^\mathcal{N}][\mathbf{Z}_N]$, $1 \leq q \leq Q^a$, and then apply the affine forms \refeq{eqn:affine_FE} and \refeq{eqn:RB_sol} to \refeq{eqn:dnres_der1} to obtain
\begin{eqnarray}\label{eqn:dnres_der2}
\varepsilon^2(\boldsymbol{\mu}) &=& \sum_{q=1}^{Q^f}\sum_{q'=1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})\Theta_{q'}^f(\boldsymbol{\mu})\bigg([\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^f_{q'}]\bigg) \\
&& -2\sum_{q=1}^{Q^f}\sum_{q'=1}^{Q^a}\Theta_{q'}^a(\boldsymbol{\mu})\Theta_{q}^f(\boldsymbol{\mu})\bigg([\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]\bigg)[\mathbf{u}_N(\boldsymbol{\mu})] \nonumber \\
&&+\sum_{q=1}^{Q^a}\sum_{q'=1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})\Theta_{q'}^a(\boldsymbol{\mu})[\mathbf{u}_N(\boldsymbol{\mu})]^T\bigg([\mathbf{P}^a_{qN}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]\bigg)[\mathbf{u}_N(\boldsymbol{\mu})] \nonumber.
\end{eqnarray}
It is observed that all the terms in brackets in \refeq{eqn:dnres_der2} are parameter-independent; hence they can be \emph{pre-computed} in the Offline stage. The Offline-Online strategy is now clear.
In the Offline stage we form the parameter-independent quantities. We first compute the ``pseudo''-solutions $[\mathbf{P}^f_{q}] = [\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{F}_{q}^\mathcal{N}]$, $1 \leq q \leq Q^f$, and $[\mathbf{P}^a_{qN}] = [\mathbf{Y}^\mathcal{N}]^{-1}[\mathbf{K}_q^\mathcal{N}][\mathbf{Z}_N]$, $1 \leq q \leq Q^a$, $1 \leq N \leq N_{\max}$; and form/store $[\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^f_{q'}]$, $1 \leq q, q' \leq Q^f$, $[\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]$, $1 \leq q \leq Q^f$, $1 \leq q' \leq Q^a$, $1 \leq N \leq N_{\max}$,\\ and $[\mathbf{P}^a_{qN}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]$, $1 \leq q, q' \leq Q^a$, $1 \leq N \leq N_{\max}$. The Offline operation count depends on $N_{\max}$, $Q^a$, $Q^f$, and $\mathcal{N}$.
In the Online stage, for a given $\boldsymbol{\mu}$ and $N$ ($1 \leq N \leq N_{\max}$), we retrieve the pre-computed quantities $[\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^f_{q'}]$, $1 \leq q, q' \leq Q^f$, $[\mathbf{P}^f_{q}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]$, $1 \leq q \leq Q^f$, $1 \leq q' \leq Q^a$, and $[\mathbf{P}^a_{qN}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{P}^a_{q'N}]$, $1 \leq q, q' \leq Q^a$, and then evaluate the sum \refeq{eqn:dnres_der2}. The Online operation count is dominated by $O(((Q^a)^2+(Q^f)^2)N^2)$ and is independent of $\mathcal{N}$.
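A possible Online implementation of \refeq{eqn:dnres_der2} is sketched below, with \texttt{Cff}, \texttt{Cfa} and \texttt{Caa} denoting the pre-computed bracketed blocks (our naming, for illustration only):
\begin{verbatim}
import numpy as np

def residual_norm_sq(theta_a, theta_f, Cff, Cfa, Caa, uN, mu):
    """Online evaluation of eps^2(mu); Cff[q][q'] are scalars, Cfa[q][q']
    are N-vectors and Caa[q][q'] are N x N matrices, all precomputed Offline."""
    tf = [t(mu) for t in theta_f]
    ta = [t(mu) for t in theta_a]
    eps2 = sum(tf[q] * tf[qp] * Cff[q][qp]
               for q in range(len(tf)) for qp in range(len(tf)))
    eps2 -= 2.0 * sum(tf[q] * ta[qp] * (Cfa[q][qp] @ uN)
                      for q in range(len(tf)) for qp in range(len(ta)))
    eps2 += sum(ta[q] * ta[qp] * (uN @ (Caa[q][qp] @ uN))
                for q in range(len(ta)) for qp in range(len(ta)))
    return max(eps2, 0.0)   # guard against round-off making eps^2 slightly negative
\end{verbatim}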
\subsection{Lower bound of the coercivity constant}
We now briefly address some elements for the computation of the lower bound in the coercive case. In order to derive the discrete form of the coercivity constant \refeq{eqn:inf_FE} we introduce the discrete eigenvalue problem: given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, find the eigenpair $([\boldsymbol{\chi}_{\min}(\boldsymbol{\mu})],\lambda_{\min}(\boldsymbol{\mu}))$ associated with the smallest eigenvalue such that
\begin{eqnarray}\label{eqn:inf_truth}
[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\boldsymbol{\chi}(\boldsymbol{\mu})] &=& \lambda_{\min}[\mathbf{Y}^\mathcal{N}][\boldsymbol{\chi}(\boldsymbol{\mu})], \nonumber \\
\left[\boldsymbol{\chi}(\boldsymbol{\mu})\right]^T[\mathbf{Y}^\mathcal{N}][\boldsymbol{\chi}(\boldsymbol{\mu})] &=& 1.
\end{eqnarray}
We can then recover
\begin{equation}
\alpha^\mathcal{N}(\boldsymbol{\mu}) = \lambda_{\min}(\boldsymbol{\mu}).
\end{equation}
However, the eigenproblem \refeq{eqn:inf_truth} is of size $\mathcal{N}$, so using its direct solution as an ingredient of our error estimator is very expensive. Hence, we will construct an inexpensive yet good-quality lower bound $\alpha_{\rm LB}^\mathcal{N}(\boldsymbol{\mu})$ and use this lower bound instead of the expensive ``truth'' (direct) coercivity constant $\alpha^\mathcal{N}(\boldsymbol{\mu})$ in our error estimator.
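For reference, the ``truth'' constant $\alpha^\mathcal{N}(\boldsymbol{\mu})$ could be computed directly with a sparse generalized eigensolver, as in the sketch below; this is exactly the $\mathcal{N}$-dependent computation that the lower bound is designed to avoid in the Online stage:
\begin{verbatim}
from scipy.sparse.linalg import eigsh

def coercivity_constant(K, Y):
    """Smallest generalized eigenvalue of K x = lambda Y x, with K, Y SPD;
    shift-invert around zero targets the smallest eigenvalue."""
    lam = eigsh(K, k=1, M=Y, sigma=0.0, which='LM',
                return_eigenvectors=False)
    return float(lam[0])
\end{verbatim}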
For our current target problems, our bilinear form is coercive and symmetric. We shall construct our coercivity lower bound by the Successive Constraint Method (SCM) \cite{huynh07:cras}. It is noted that the SCM method can be readily extended to non-symmetric as well as non-coercive bilinear forms \cite{huynh07:cras,rozza08:ARCME,patera07:book,huynh08:infsupLB}.
We first introduce an alternative (albeit not very computation-friendly) discrete form for our coercivity constant as
\begin{eqnarray}\label{eqn:inf_Y}
{\rm minimum} && \sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})y_q, \\
{\rm subject \ to} && y_q = \frac{[\mathbf{w}]^T[\mathbf{K}^\mathcal{N}_q][\mathbf{w}]}{[\mathbf{w}]^T[\mathbf{Y}^\mathcal{N}][\mathbf{w}]}, \quad 1 \leq q \leq Q^a, \nonumber
\end{eqnarray}
where $[\mathbf{w}]$ is the discrete vector of an arbitrary $w \in X^\mathcal{N}$ (the same $w$ for all $q$).
We shall now ``relax'' the constraint in \refeq{eqn:inf_Y} by defining the ``continuity constraint box'' associated with $y_{q,\min}$ and $y_{q,\max}$, $1 \leq q \leq Q^a$ obtained from the minimum set $([\mathbf{y}_-(\boldsymbol{\mu})],y_{q,\min})$ and maximum set $([\mathbf{y}_+(\boldsymbol{\mu})],y_{q,\max})$ solutions of the eigenproblems
\begin{eqnarray*}
[\mathbf{K}^\mathcal{N}_q][\mathbf{y}_-(\boldsymbol{\mu})] &=& y_{q,\min}[\mathbf{Y}^\mathcal{N}][\mathbf{y}_-(\boldsymbol{\mu})], \\
\left[\mathbf{y}_-(\boldsymbol{\mu})\right]^T[\mathbf{Y}^\mathcal{N}][\mathbf{y}_-(\boldsymbol{\mu})] &=& 1,
\end{eqnarray*}
and
\begin{eqnarray*}
[\mathbf{K}^\mathcal{N}_q][\mathbf{y}_+(\boldsymbol{\mu})] &=& y_{q,\max}[\mathbf{Y}^\mathcal{N}][\mathbf{y}_+(\boldsymbol{\mu})], \\
\left[\mathbf{y}_+(\boldsymbol{\mu})\right]^T[\mathbf{Y}^\mathcal{N}][\mathbf{y}_+(\boldsymbol{\mu})] &=& 1,
\end{eqnarray*}
respectively, for $1 \leq q \leq Q^a$. We next define a ``coercivity constraint'' sample
\begin{equation*}
C_J = \{\boldsymbol{\mu}^{\rm SCM}_1 \in \mbox{\boldmath$\mathcal{D}$}, \ldots, \boldsymbol{\mu}^{\rm SCM}_J \in \mbox{\boldmath$\mathcal{D}$}\},
\end{equation*}
and denote $C_J^{M,\boldsymbol{\mu}}$ the set of $M$ $(1 \leq M \leq J)$ points in $C_J$ closest (in the usual Euclidean norm) to a given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$. The construction of the set $C_J$ is done by means of a Greedy procedure \cite{huynh07:cras,rozza08:ARCME,patera07:book}. The Greedy selection of $C_J$ can be called the ``Offline stage'', which involves the solutions of $J$ eigenproblems \refeq{eqn:inf_truth} to obtain $\alpha^\mathcal{N}(\boldsymbol{\mu})$, $\forall \boldsymbol{\mu} \in C_J$.
We may now define our lower bound $\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$ as the solution of
\begin{eqnarray}\label{eqn:inf_LB}
{\rm minimum} && \sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})y_q, \\
{\rm subject \ to} && y_{q,\min} \leq y_q \leq y_{q,\max}, \quad 1 \leq q \leq Q^a, \nonumber \\
&& \sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu}')y_q \geq \alpha^\mathcal{N}(\boldsymbol{\mu}'), \quad \forall \boldsymbol{\mu}' \in C_J^{M,\boldsymbol{\mu}}. \nonumber
\end{eqnarray}
We then ``restrict'' the constraint in \refeq{eqn:inf_Y} and define our upper bound $\alpha^\mathcal{N}_{\rm UB}(\boldsymbol{\mu})$ as the solution of
\begin{eqnarray}\label{eqn:inf_UB}
{\rm minimum} && \sum_{q = 1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})y_{q,*}(\boldsymbol{\mu}'), \\
{\rm subject \ to} && y_{q,*}(\boldsymbol{\mu}') = [\boldsymbol{\chi}(\boldsymbol{\mu}')]^T[\mathbf{K}^\mathcal{N}_q][\boldsymbol{\chi}(\boldsymbol{\mu}')], \quad 1 \leq q \leq Q^a, \quad \forall \boldsymbol{\mu}' \in C_J^{M,\boldsymbol{\mu}}, \nonumber
\end{eqnarray}
where $[\boldsymbol{\chi}(\boldsymbol{\mu})]$ is defined by \refeq{eqn:inf_truth}. It can be shown \cite{huynh07:cras,rozza08:ARCME,patera07:book} that the feasible region of \refeq{eqn:inf_UB} is a subset of that of \refeq{eqn:inf_Y}, which in turn, is a subset of that of \refeq{eqn:inf_LB}: hence $\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu}) \leq \alpha^\mathcal{N}(\boldsymbol{\mu}) \leq \alpha^\mathcal{N}_{\rm UB}(\boldsymbol{\mu})$.
We note that the lower bound \refeq{eqn:inf_LB} is a linear optimization problem (or Linear Program (LP)) which contains $Q^a$ design variables and $2Q^a + M$ inequality constraints. Given a value of the parameter $\boldsymbol{\mu}$, the Online evaluation $\boldsymbol{\mu} \rightarrow \alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$ is thus as follows: we find the subset $C_J^{M,\boldsymbol{\mu}}$ of $C_J$ for a given $M$, and then calculate $\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$ by solving the LP \refeq{eqn:inf_LB}. The crucial point here is that the Online evaluation $\boldsymbol{\mu} \rightarrow \alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$ is totally independent of $\mathcal{N}$. The upper bound \refeq{eqn:inf_UB}, in contrast, can be obtained as the solution of a simple enumeration problem; the Online evaluation of $\alpha^\mathcal{N}_{\rm UB}(\boldsymbol{\mu})$ is also independent of $\mathcal{N}$. In general, the upper bound $\alpha^\mathcal{N}_{\rm UB}(\boldsymbol{\mu})$ is not used in the calculation of the error estimator; it is, however, used in the Greedy construction of the set $C_J$ \cite{huynh07:cras}. In practice, when the set $C_J$ does not guarantee a positive $\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$, the upper bound $\alpha^\mathcal{N}_{\rm UB}(\boldsymbol{\mu})$ can be used as a substitute for $\alpha^\mathcal{N}_{\rm LB}(\boldsymbol{\mu})$ since it approximates the ``truth'' $\alpha^\mathcal{N}(\boldsymbol{\mu})$ very well; however, we then lose the rigor of the error estimators.
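A minimal sketch of the Online LP \refeq{eqn:inf_LB} using a generic LP solver is given below; the input arrays are assumed to be assembled from the constraint box and from $C_J^{M,\boldsymbol{\mu}}$ as described above (the names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def scm_lower_bound(theta_mu, y_min, y_max, theta_Cprime, alpha_Cprime):
    """Solve the SCM LP: minimize theta(mu).y subject to the box constraints
    and the M stability constraints collected over C_J^{M,mu}.
    theta_mu: (Q,) array; theta_Cprime: (M, Q) array; alpha_Cprime: (M,)."""
    # linprog enforces A_ub @ y <= b_ub, so flip the sign of the >= constraints
    A_ub = -np.asarray(theta_Cprime)
    b_ub = -np.asarray(alpha_Cprime)
    bounds = list(zip(y_min, y_max))
    res = linprog(c=theta_mu, A_ub=A_ub, b_ub=b_ub,
                  bounds=bounds, method='highs')
    return res.fun   # alpha_LB(mu)
\end{verbatim}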
\section{Extension of the RB method to non-compliant output}
We shall briefly provide the extension of our RB methodology for the ``non-compliant'' case in this Section. We first present a suitable primal-dual formulation for the ``non-compliant'' output; we then briefly provide the extension to the RB methodology, including the RB approximation and its \emph{a posteriori} error estimation.
\subsection{Adjoint Problem}
We shall briefly discuss the extension of our methodology to non-compliant problems. We still require that both $f$ and $\ell$ are bounded functionals, but now $f(\cdot;\boldsymbol{\mu}) \neq \ell(\cdot;\boldsymbol{\mu})$. We still use the previous abstract statement in Section~2. We begin with the definition of the dual problem associated to $\ell$: find $\psi(\boldsymbol{\mu}) \in X$ (our ``adjoint'' or ``dual'' field) such that
\begin{equation*}
a(v,\psi(\boldsymbol{\mu});\boldsymbol{\mu}) = -\ell(v;\boldsymbol{\mu}), \quad \forall v \in X.
\end{equation*}
\subsection{Truth approximation}
We now again apply the finite element method to the dual formulation: given $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, we evaluate
\begin{equation*}
s(\boldsymbol{\mu}) = [\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{u}^\mathcal{N}(\boldsymbol{\mu})],
\end{equation*}
where $[\mathbf{u}^\mathcal{N}(\boldsymbol{\mu})]$ is the finite element solution of size $\mathcal{N}$ satisfying \refeq{eqn:FE_stiff}. The discrete form of the dual solution $\psi^\mathcal{N}(\boldsymbol{\mu}) \in X^\mathcal{N}$ is given by
\begin{equation*}
[\mathbf{K}^\mathcal{N}(\boldsymbol{\mu})][\boldsymbol{\psi}^\mathcal{N}(\boldsymbol{\mu})] = -[\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})];
\end{equation*}
here $[\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})]$ is the discrete load vector of $\ell(\cdot;\boldsymbol{\mu})$. We also invoke the affine forms \refeq{eqn:affine} to express $[\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})]$ as
\begin{eqnarray}\label{eqn:affine_FE_out}
[\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})] &=& \sum_{q = 1}^{Q^\ell}\Theta_q^\ell(\boldsymbol{\mu})[\mathbf{L}^\mathcal{N}_q],
\end{eqnarray}
where all the $[\mathbf{L}^\mathcal{N}_q]$ are the discrete forms of the parameter-independent linear forms $\ell_q(\cdot)$, $1 \leq q \leq Q^\ell$.
\subsection{Reduced Basis Approximation}
We now define our RB spaces: we shall need two Lagrangian parameter sample sets, $S_{N^{\rm pr}} = \{\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\ldots,\boldsymbol{\mu}_{N^{\rm pr}}\}$ and $S_{N^{\rm du}}= \{\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\ldots,\boldsymbol{\mu}_{N^{\rm du}}\}$, corresponding to our primal and dual parameter samples, respectively. We also associate the primal and dual reduced basis spaces $(X_{N^{\rm pr}}^\mathcal{N} =) W^\mathcal{N}_{N^{\rm pr}}$, $1 \leq N^{\rm pr} \leq N^{\rm pr}_{\max}$, and $(X_{N^{\rm du}}^\mathcal{N} =) W^\mathcal{N}_{N^{\rm du}}$, $1 \leq N^{\rm du} \leq N^{\rm du}_{\max}$, to the sets $S_{N^{\rm pr}}$ and $S_{N^{\rm du}}$, respectively; these spaces are constructed from the primal $u^{\mathcal{N}}(\boldsymbol{\mu})$ and dual $\psi^{\mathcal{N}}(\boldsymbol{\mu})$ snapshots by a Gram-Schmidt process as in Section~3. Finally, we denote our primal and dual orthonormalized-snapshot basis matrices as $[\mathbf{Z}^{\rm pr}_{N^{\rm pr}}]$ and $[\mathbf{Z}^{\rm du}_{N^{\rm du}}]$, respectively.
\subsection{Galerkin Projection}
We first denote the RB primal approximation to the ``truth'' primal approximation $u^\mathcal{N}(\boldsymbol{\mu})$ as $u_{{\rm RB},N}^\mathcal{N}(\boldsymbol{\mu})$ and the RB dual approximation to the ``truth'' dual approximation $\psi^\mathcal{N}(\boldsymbol{\mu})$ as $\psi_{{\rm RB},N}^\mathcal{N}(\boldsymbol{\mu})$: their discrete forms are given by $[\mathbf{u}_{RB,N^{\rm pr}}^\mathcal{N}(\boldsymbol{\mu})] = [\mathbf{Z}^{\rm pr}_{N^{\rm pr}}][\mathbf{u}_{N^{\rm pr}}(\boldsymbol{\mu})]$ and $[\boldsymbol{\psi}_{RB,N^{\rm du}}^\mathcal{N}(\boldsymbol{\mu})] = [\mathbf{Z}^{\rm du}_{N^{\rm du}}][\boldsymbol{\psi}_{N^{\rm du}}(\boldsymbol{\mu})]$, respectively.
We then apply a Galerkin projection (note that in this case a Petrov--Galerkin projection is also possible \cite{rozza08:ARCME, patera07:book,benner2015}): given a $\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}$, we evaluate the RB output
\begin{equation*}
s_{N^{\rm pr},N^{\rm du}}(\boldsymbol{\mu}) = [\mathbf{L}^\mathcal{N}(\boldsymbol{\mu})]^T[\mathbf{u}^\mathcal{N}_{{\rm RB},N^{\rm pr}}(\boldsymbol{\mu})] - [\mathbf{r}^\mathcal{N}_{\rm pr}(\boldsymbol{\mu})]^T[\boldsymbol{\psi}^\mathcal{N}_{{\rm RB},N^{\rm du}}(\boldsymbol{\mu})],
\end{equation*}
where $[\mathbf{r}^\mathcal{N}_{\rm pr}(\boldsymbol{\mu})]$ is the discrete form of the RB primal residual defined in \refeq{eqn:residual}. The primal and dual RB coefficient vectors are given by
\begin{eqnarray}\label{eqn:semifull_du}
\sum_{q=1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})[\mathbf{K}_{qN^{\rm pr}N^{\rm pr}}][\mathbf{u}_{N^{\rm pr}}(\boldsymbol{\mu})] &=& \sum_{q=1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})[\mathbf{F}_{qN^{\rm pr}}], \nonumber \\
\sum_{q=1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})[\mathbf{K}_{qN^{\rm du}N^{\rm du}}][\boldsymbol{\psi}_{N^{\rm du}}(\boldsymbol{\mu})] &=& -\sum_{q=1}^{Q^\ell}\Theta_q^\ell(\boldsymbol{\mu})[\mathbf{L}_{qN^{\rm du}}].
\end{eqnarray}
Note that the two systems \refeq{eqn:semifull_du} are also of small size: their sizes are of $N^{\rm pr}$ and $N^{\rm du}$, respectively. We can now evaluate our output as
\begin{eqnarray}\label{eqn:RB_outsemifull_du}
s_{N^{\rm pr},N^{\rm du}}(\boldsymbol{\mu}) &=& \sum_{q=1}^{Q^\ell}\Theta_q^\ell(\boldsymbol{\mu})[\mathbf{L}_{qN^{\rm pr}}]^T[\mathbf{u}_{N^{\rm pr}}(\boldsymbol{\mu})] - \sum_{q=1}^{Q^f}\Theta_q^f(\boldsymbol{\mu})[\mathbf{F}_{qN^{\rm du}}]^T[\boldsymbol{\psi}_{N^{\rm du}}(\boldsymbol{\mu})] \nonumber\\
&&+\sum_{q=1}^{Q^a}\Theta_q^a(\boldsymbol{\mu})[\boldsymbol{\psi}_{N^{\rm du}}(\boldsymbol{\mu})]^T[\mathbf{K}_{qN^{\rm du}N^{\rm pr}}][\mathbf{u}_{N^{\rm pr}}(\boldsymbol{\mu})].
\end{eqnarray}
All the quantities in \refeq{eqn:semifull_du} and \refeq{eqn:RB_outsemifull_du} are given by
\begin{eqnarray*}
[\mathbf{K}_{qN^{\rm pr}N^{\rm pr}}] &=& [\mathbf{Z}^{\rm pr}_{N^{\rm pr}}]^T[\mathbf{K}_q][\mathbf{Z}^{\rm pr}_{N^{\rm pr}}], \quad 1 \leq q \leq Q^a, \ 1 \leq N^{\rm pr} \leq N^{\rm pr}_{\max},\\
\left[\mathbf{K}_{qN^{\rm du}N^{\rm du}}\right] &=& [\mathbf{Z}^{\rm du}_{N^{\rm du}}]^T[\mathbf{K}_q][\mathbf{Z}^{\rm du}_{N^{\rm du}}], \quad 1 \leq q \leq Q^a, \ 1 \leq N^{\rm du} \leq N^{\rm du}_{\max}, \\
\left[\mathbf{K}_{qN^{\rm du}N^{\rm pr}}\right] &=& [\mathbf{Z}^{\rm du}_{N^{\rm du}}]^T[\mathbf{K}_q][\mathbf{Z}^{\rm pr}_{N^{\rm pr}}], \quad 1 \leq q \leq Q^a, \ 1 \leq N^{\rm pr} \leq N^{\rm pr}_{\max}, \ 1 \leq N^{\rm du} \leq N^{\rm du}_{\max} \\
\left[\mathbf{F}_{qN^{\rm pr}}\right] &=& [\mathbf{Z}^{\rm pr}_{N^{\rm pr}}]^T[\mathbf{F}_q], \quad 1 \leq q \leq Q^f, \ 1 \leq N^{\rm pr} \leq N^{\rm pr}_{\max}, \\
\left[\mathbf{F}_{qN^{\rm du}}\right] &=& [\mathbf{Z}^{\rm du}_{N^{\rm du}}]^T[\mathbf{F}_q], \quad 1 \leq q \leq Q^f, \ 1 \leq N^{\rm du} \leq N^{\rm du}_{\max}, \\
\left[\mathbf{L}_{qN^{\rm pr}}\right] &=& [\mathbf{Z}^{\rm pr}_{N^{\rm pr}}]^T[\mathbf{L}_q], \quad 1 \leq q \leq Q^\ell, 1 \leq N^{\rm pr} \leq N^{\rm pr}_{\max}, \\
\left[\mathbf{L}_{qN^{\rm du}}\right] &=& [\mathbf{Z}^{\rm du}_{N^{\rm du}}]^T[\mathbf{L}_q], \quad 1 \leq q \leq Q^\ell, 1 \leq N^{\rm du} \leq N^{\rm du}_{\max}.
\end{eqnarray*}
The computation of the output $s_{N^{\rm pr},N^{\rm du}}(\boldsymbol{\mu})$ clearly admits an Offline-Online computational strategy similar to the one we discuss previously in Section~3.
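A compact sketch of the Online evaluation \refeq{eqn:RB_outsemifull_du}, with all reduced quantities assumed pre-computed in the Offline stage (names and argument layout are illustrative), reads:
\begin{verbatim}
def rb_output_noncompliant(theta_a, theta_f, theta_l,
                           KqNdNp, FqNd, LqNp, uNp, psiNd, mu):
    """RB non-compliant output with dual correction; LqNp are N^pr-vectors,
    FqNd are N^du-vectors and KqNdNp are (N^du x N^pr) matrices."""
    s = sum(t(mu) * (L @ uNp) for t, L in zip(theta_l, LqNp))
    s -= sum(t(mu) * (F @ psiNd) for t, F in zip(theta_f, FqNd))
    s += sum(t(mu) * (psiNd @ (K @ uNp)) for t, K in zip(theta_a, KqNdNp))
    return s
\end{verbatim}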
\subsection{\emph{A posteriori} error estimation}
We now introduce the dual residual $r_{\rm du}^\mathcal{N}(v;\boldsymbol{\mu})$,
\begin{equation*}
r_{\rm du}^\mathcal{N}(v;\boldsymbol{\mu}) = -\ell(v;\boldsymbol{\mu}) - a(v,\psi_{{\rm RB},N^{\rm du}}^\mathcal{N}(\boldsymbol{\mu});\boldsymbol{\mu}), \quad \forall v \in X^\mathcal{N},
\end{equation*}
and its Riesz representation of $r_{\rm du}^\mathcal{N}(v;\boldsymbol{\mu})$: $\hat{e}^{\rm du}(\boldsymbol{\mu}) \in X^\mathcal{N}$ defined by $(\hat{e}^{\rm du}(\boldsymbol{\mu}),v)_{X^\mathcal{N}} = r^\mathcal{N}_{\rm du}(v;\boldsymbol{\mu})$, $\forall v \in X^\mathcal{N}$.
We may now define our error estimator for our output as
\begin{equation}
\Delta_{N^{\rm pr}N^{\rm du}}^s(\boldsymbol{\mu}) \equiv \frac{\|\hat{e}^{\rm pr}(\boldsymbol{\mu})\|_{X^\mathcal{N}}}{(\alpha^\mathcal{N}_{\rm LB})^{1/2}}\frac{\|\hat{e}^{\rm du}(\boldsymbol{\mu})\|_{X^\mathcal{N}}}{(\alpha^\mathcal{N}_{\rm LB})^{1/2}},
\end{equation}
where $\hat{e}^{\rm pr}(\boldsymbol{\mu})$ is the Riesz representation of the primal residual. We then define the effectivity associated with our error bound
\begin{equation}
\eta_{N^{\rm pr}N^{\rm du}}^s(\boldsymbol{\mu}) \equiv \frac{\Delta_{N^{\rm pr}N^{\rm du}}^s(\boldsymbol{\mu})}{|s^\mathcal{N}(\boldsymbol{\mu})-s_{N^{\rm pr}N^{\rm du}}(\boldsymbol{\mu})|}.
\end{equation}
We can readily demonstrate \cite{rozza08:ARCME, patera07:book, grepl04:_reduc_basis_approx_time_depen} that
\begin{equation*}
1 \leq \eta_{N^{\rm pr}N^{\rm du}}^s(\boldsymbol{\mu}), \quad \forall \boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$};
\end{equation*}
note that the error estimator is still \emph{rigorous}; however, it is less \emph{sharp} than in the ``compliant'' case since here we cannot provide an upper bound for $\eta_{N^{\rm pr}N^{\rm du}}^s(\boldsymbol{\mu})$.
The computation of the dual norm of the primal/dual residual also follows an Offline-Online computation strategy: the dual norm of the primal residual is in fact, the same as in Section~4.2; the same procedure can be applied to compute the dual norm of the dual residual.
\section{Numerical results}
In this section we consider several ``model problems'' to demonstrate the feasibility of our methodology. We note that in all cases these model problems are presented in non-dimensional form unless stated otherwise. In all problems below, the displacement is, in fact, the non-dimensional form $u = {\tilde{u}\tilde{E}}/{\tilde{\sigma}_0}$, where $\tilde{u}$, $\tilde{E}$, $\tilde{\sigma}_0$ are the dimensional displacement, Young's modulus and load strength, respectively, while $E$ and $\sigma_0$ are our non-dimensional Young's modulus and load strength and are usually around unity.
We shall not provide details of $\Theta_q^a(\boldsymbol{\mu})$, $\Theta_q^f(\boldsymbol{\mu})$ and $\Theta_q^\ell(\boldsymbol{\mu})$ and their associated bilinear and linear forms $a_q(\cdot,\cdot)$, $f_q(\cdot)$ and $\ell_q(\cdot)$ for any of the examples below, as they are usually quite complex, due to the complicated structure of the effective elastic tensor and our symbolic manipulation technique. We refer the reader to \cite{huynh07:ijnme, patera07:book, veroy03:_phd_thesis, milani08:RB_LE}, in which all the above terms are provided in detail for some simple model problems.
In the below, the timing $t_{\rm FE}$ for an evaluation of the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ is the computation time taken by solving \refeq{eqn:FE_stiff} and evaluating \refeq{eqn:FE_out} by using \refeq{eqn:affine_FE} and \refeq{eqn:affine_FE_out}, in which all the stiffness matrix components, $[\mathbf{K}_q]$, $1\leq q\leq Q^a$, load and output vector components, $[\mathbf{F}_q]$, $1\leq q\leq Q^f$ and $[\mathbf{L}_q]$, $1\leq q\leq Q^\ell$, respectively, are pre-computed and pre-stored. We do not include the computation time of forming those components (or alternatively, calculate the stiffness matrix, load and output vector directly) in $t_{\rm FE}$.
Finally, for the sake of simplicity, we shall use a single number of basis functions $N$, defined as $N = N^{\rm pr} = N^{\rm du}$, in all of our model problems in this Section.
\subsection{The arc-cantilever beam}
We consider a thick arc cantilever beam corresponding to the domain $\Omega^{\rm o}(\boldsymbol{\mu})$, representing the shape of a quarter of an annulus as shown in Figure~\ref{fig:ex1_model}. We apply (clamped) homogeneous Dirichlet conditions on $\Gamma^{\rm o}_D$ and non-homogeneous Neumann boundary conditions corresponding to a unit tension on $\Gamma^{\rm o}_N$. The width of the cantilever beam is $2d$, and the material is isotropic with $(E,\nu) = (1,0.3)$ under the plane stress assumption. Our output of interest is the integral of the tangential displacement ($u_2$) over $\Gamma^{\rm o}_N$, which can be interpreted as the average tangential displacement on $\Gamma^{\rm o}_N$\footnote{The average tangential displacement on $\Gamma^{\rm o}_N$ is not exactly $s(\boldsymbol{\mu})$ but rather $s(\boldsymbol{\mu})/l_{\Gamma^{\rm o}_N}$, where $l_{\Gamma^{\rm o}_N}$ is the length of ${\Gamma^{\rm o}_N}$. Obviously, the two descriptions of the output, ``integral of'' and ``average of'', are essentially equivalent.}. Note that our output of interest is ``non-compliant''.
\begin{figure}
\caption{The arc-cantilever beam}
\label{fig:ex1_model}
\end{figure}
The parameter is the half-width of the cantilever beam, $\boldsymbol{\mu} = [\mu_1] \equiv [d]$. The parameter domain is chosen as $\mbox{\boldmath$\mathcal{D}$} = [0.3, 0.9]$, which can model a moderately thick beam to a very thick beam. We then choose $\boldsymbol{\mu}_{\rm ref} = 0.3$, apply the domain decomposition and obtain $L_{\rm reg} = 9$ subdomains as shown in Figure~\ref{fig:ex1_mesh}, in which three subdomains are general ``curvy triangles'', generated by our automatic computational procedure \cite{rozza08:ARCME}. Note that the geometric transformations are relatively complicated, due to the appearance of the ``curvy triangles'', and all subdomain transformations are classified as the ``general transformation case'' \cite{patera07:book, huynh07:_phd_thesis}. We then recover our affine forms with $Q^a = 54$, $Q^f = 1$ and $Q^\ell = 1$.
We next consider a FE approximation where the mesh contains $n_{\rm node} = 2747$ nodes and $n_{\rm elem} = 5322$ $P_1$ elements, which corresponds to $\mathcal{N} = 5426$ degrees of freedom\footnote{Note that $\mathcal{N} \neq 2n_{\rm node}$ since Dirichlet boundary nodes are eliminated from the FE system.} as shown in Figure~\ref{fig:ex1_mesh}. To verify our FE approximation, we compare our FE results with the approximate solution for a thick arc cantilever beam by Roark \cite{roark01:roark_formula} for $100$ uniformly distributed test points in $\mbox{\boldmath$\mathcal{D}$}$: the maximum difference between our results and Roark's is just $2.9\%$.
\begin{figure}
\caption{The arc-cantilever beam problem: Domain composition and FE mesh}
\label{fig:ex1_mesh}
\end{figure}
We then apply our RB approximation. We present in Table~\ref{tab:ex1_tab} our convergence results: the RB error bounds and effectivities as a function of $N (=N^{\rm pr} = N^{\rm du})$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$, is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 100$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that the average effectivity is of order $O(20$--$90)$: not very \emph{sharp}, but this is expected since the output is ``non-compliant''.
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
2 & 3.57\texttt{E}+00 & 86.37 \\
4 & 3.70\texttt{E}-03 & 18.82 \\
6 & 4.07\texttt{E}-05 & 35.72 \\
8 & 6.55\texttt{E}-07 & 41.58 \\
10 & 1.99\texttt{E}-08 & 40.99 \\
\hline
\end{tabular}
\caption{The arc-cantilever beam: RB convergence}
\label{tab:ex1_tab}
\end{table}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$ requires just $t_{\rm RB} = 115$(ms) for $N = 10$; while the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ requires $t_{\rm FE} = 9$(s): thus our RB online evaluation is just $1.28\%$ of the FEM computational cost.
\subsection{The center crack problem}
We next consider a fracture model corresponding to a center crack in a plate under tension at both sides as shown in Figure~\ref{fig:ex2_modelfull}.
\begin{figure}
\caption{The center crack problem}
\label{fig:ex2_modelfull}
\end{figure}
Due to the symmetry of the geometry and loading, we only consider one quarter of the physical domain, as shown in Figure~\ref{fig:ex2_modelfull}; note that the crack corresponds to the boundary segment $\Gamma^{\rm o}_C$. The crack (in our ``quarter'' model) is of size $d$, and the plate is of height $h$ (and of fixed width $w = 1$). We consider a plane strain isotropic material with $(E,\nu) = (1,0.3)$. We consider (symmetric about the $x^{\rm o}_1$ and $x^{\rm o}_2$ directions) Dirichlet boundary conditions on the left and bottom boundaries $\Gamma^{\rm o}_L$ and $\Gamma^{\rm o}_B$, respectively; and non-homogeneous Neumann boundary conditions (tension) on the top boundary $\Gamma^{\rm o}_T$. Our ultimate output of interest is the stress intensity factor (SIF) for the crack, which will be derived from an intermediate (compliant) energy output by application of the virtual crack extension approach \cite{parks77:a_stiff_sif}. The SIF plays an important role in the field of fracture mechanics, for example when estimating the propagation path of cracks in structures \cite{hutchingson79:fracture}. We further note that the analytical result for the SIF of a center crack in a plate under tension is only available for the infinite plate \cite{murakami01:SIFhandbook}, which can be compared with our solutions for small crack length $d$ and large plate height $h$.
\begin{figure}
\caption{The center crack problem}
\label{fig:ex2_model}
\end{figure}
Our parameters are the crack length and the plate height, $\boldsymbol{\mu} = [\mu_1,\mu_2] \equiv [d, h]$, and the parameter domain is given by $\mbox{\boldmath$\mathcal{D}$} = [0.3,0.7] \times [0.5,2.0]$. We then choose $\boldsymbol{\mu}_{\rm ref} = [0.5,1.0]$ and apply a domain decomposition: the final setting contains $L_{\rm reg} = 3$ subdomains, which in turn gives us $Q^a = 10$ and $Q^f = 1$. Note that our ``compliant'' output $s(\boldsymbol{\mu})$ is just an intermediate result for the calculation of the SIF. In particular, the virtual crack extension method (VCE) \cite{parks77:a_stiff_sif} allows us to extract the ``Mode-I'' SIF from the energy $s(\boldsymbol{\mu})$ through the Energy Release Rate (ERR), $G(\boldsymbol{\mu})$, defined by
\begin{equation*}
G(\boldsymbol{\mu}) = -\bigg(\frac{\partial s(\boldsymbol{\mu})}{\partial \mu_1}\bigg).
\end{equation*}
In practice, the ERR is approximated by a finite-difference (FD) approach for a suitable small value $\delta\mu_1$ as
\begin{equation*}
\widehat{G}(\boldsymbol{\mu}) = -\bigg(\frac{s(\boldsymbol{\mu}+\delta\mu_1)-s(\boldsymbol{\mu})}{\delta\mu_1}\bigg),
\end{equation*}
which then give the SIF approximation $\widehat{\rm SIF}(\boldsymbol{\mu}) = \sqrt{\widehat{G}(\boldsymbol{\mu})/(1-\nu^2)}$.
We then consider a FE approximation with a mesh containing $n_{\rm node} = 3257$ nodes and $n_{\rm elem} = 6276$ $P_1$ elements, which corresponds to $\mathcal{N} = 6422$ degrees of freedom; the mesh is refined around the crack tip in order to give a good approximation of the (singular) solution near this region, as shown in Figure~\ref{fig:ex2_mesh}.
\begin{figure}
\caption{The center crack problem: Domain composition and FE mesh}
\label{fig:ex2_mesh}
\end{figure}
We present in Table~\ref{tab:ex2_tab} the convergence results for the ``compliant'' output $s(\boldsymbol{\mu})$: the RB error bounds and effectivities as a function of $N$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$ is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 200$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that the effectivity average is very sharp, and of order $O(10)$.
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
5 & 2.73\texttt{E}-02 & 6.16 \\
10 & 9.48\texttt{E}-04 & 8.47 \\
20 & 5.71\texttt{E}-06 & 7.39 \\
30 & 5.59\texttt{E}-08 & 7.01 \\
40 & 8.91\texttt{E}-10 & 7.54 \\
50 & 6.26\texttt{E}-11 & 8.32 \\
\hline
\end{tabular}
\caption{The center crack problem: RB convergence}
\label{tab:ex2_tab}
\end{table}
We next define the ERR RB approximation $\widehat{G}_N(\boldsymbol{\mu})$ to our ``truth'' (FE) $\widehat{G}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})$ and its associated ERR RB error $\Delta^{\widehat{G}}_N(\boldsymbol{\mu})$ by
\begin{eqnarray}\label{eqn:RB_SIF_err}
\widehat{G}_N(\boldsymbol{\mu}) &=& \frac{s_N(\boldsymbol{\mu}) - s_N(\boldsymbol{\mu}+\delta\mu_1)}{\delta\mu_1}, \nonumber \\
\Delta^{\widehat{G}}_N(\boldsymbol{\mu}) &=& \frac{\Delta_N^s(\boldsymbol{\mu} + \delta\mu_1) + \Delta_N^s(\boldsymbol{\mu})}{\delta\mu_1}.
\end{eqnarray}
It can be readily proven \cite{rozza08:ARCME} that our ERR RB error is a rigorous bound for the ERR RB prediction $\widehat{G}_N(\boldsymbol{\mu})$: $|\widehat{G}_N(\boldsymbol{\mu}) - \widehat{G}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})| \leq \Delta^{\widehat{G}}_N(\boldsymbol{\mu})$. Note that the choice of $\delta\mu_1$ is not arbitrary: $\delta\mu_1$ needs to be small enough to provide a good FD approximation, while still providing a good ERR RB error bound \refeq{eqn:RB_SIF_err}. Here we choose $\delta\mu_1 = 1\texttt{E}-03$.
We then can define the SIF RB approximation $\widehat{\rm SIF}_N(\boldsymbol{\mu})$ to our ``truth'' (FE) $\widehat{\rm SIF}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})$ and its associated SIF RB error estimation $\Delta^{\widehat{\rm SIF}}_N(\boldsymbol{\mu})$ as
\begin{eqnarray*}
\widehat{\rm SIF}_N(\boldsymbol{\mu}) &=& \frac{1}{2\sqrt{1-\nu^2}}\bigg\{\sqrt{\widehat{G}_N(\boldsymbol{\mu}) + \Delta^{\widehat{G}}_N(\boldsymbol{\mu})}+\sqrt{\widehat{G}_N(\boldsymbol{\mu}) - \Delta^{\widehat{G}}_N(\boldsymbol{\mu})}\bigg\}, \\
\Delta^{\widehat{\rm SIF}}_N(\boldsymbol{\mu}) &=& \frac{1}{2\sqrt{1-\nu^2}}\bigg\{\sqrt{\widehat{G}_N(\boldsymbol{\mu}) + \Delta^{\widehat{G}}_N(\boldsymbol{\mu})}-\sqrt{\widehat{G}_N(\boldsymbol{\mu}) - \Delta^{\widehat{G}}_N(\boldsymbol{\mu})}\bigg\}.
\end{eqnarray*}
It is readily proven in \cite{huynh07:ijnme} that $|\widehat{\rm SIF}_N(\boldsymbol{\mu}) - \widehat{\rm SIF}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})| \leq \Delta^{\widehat{\rm SIF}}_N(\boldsymbol{\mu})$.
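For illustration, the SIF RB evaluation can be wrapped around two calls to the RB output/error-bound routine, as in the sketch below; \texttt{rb\_output} is a placeholder for the Online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$, and the formulas follow the definitions above (assuming $\widehat{G}_N(\boldsymbol{\mu}) \geq \Delta^{\widehat{G}}_N(\boldsymbol{\mu})$ so that the square roots are real):
\begin{verbatim}
import numpy as np

def sif_from_rb(rb_output, mu, d_mu1=1e-3, nu=0.3):
    """Finite-difference ERR and SIF from two RB output/error-bound
    evaluations; rb_output(mu) returns (s_N(mu), Delta_N^s(mu))."""
    s0, d0 = rb_output(mu)
    s1, d1 = rb_output((mu[0] + d_mu1, mu[1]))
    G_N  = (s0 - s1) / d_mu1                 # ERR RB approximation
    dG_N = (d0 + d1) / d_mu1                 # ERR RB error bound
    c = 0.5 / np.sqrt(1.0 - nu**2)
    sif   = c * (np.sqrt(G_N + dG_N) + np.sqrt(G_N - dG_N))
    d_sif = c * (np.sqrt(G_N + dG_N) - np.sqrt(G_N - dG_N))
    return sif, d_sif
\end{verbatim}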
We plot the SIF RB results $\widehat{\rm SIF}_N(\boldsymbol{\mu})$ with error bars corresponding to $\Delta^{\widehat{\rm SIF}}_N(\boldsymbol{\mu})$, and the analytical results $\widehat{\rm SIF}(\boldsymbol{\mu})$ \cite{murakami01:SIFhandbook},
in Figure~\ref{fig:ex2_SIF15} for the case $\mu_1 \in [0.3,0.7]$, $\mu_2 = 2.0$ and $N = 15$. It is observed that the RB error is large since the small number of basis functions, $N = 15$, does not compensate for the small value $\delta\mu_1 = 1\texttt{E}-03$.
\begin{figure}
\caption{The center crack problem: SIF solution for $N=15$}
\label{fig:ex2_SIF15}
\end{figure}
We next plot, in Figure~\ref{fig:ex2_SIF30}, the SIF RB results and error for the same $\boldsymbol{\mu}$ range as in Figure~\ref{fig:ex2_SIF15}, but for $N = 30$. It is observed that the SIF RB error is now significantly improved -- thanks to the better RB approximation that compensates for the small value $\delta\mu_1$. We also want to point out that, in both Figure~\ref{fig:ex2_SIF15} and Figure~\ref{fig:ex2_SIF30}, it is clearly shown that our RB SIF error is not a \emph{rigorous} bound for the \emph{exact} SIF values $\widehat{\rm SIF}(\boldsymbol{\mu})$ but rather a \emph{rigorous} bound for the ``truth'' (FE) approximation $\widehat{\rm SIF}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})$. It is shown, however, that the FE SIF approximation (which is visible in Figure~\ref{fig:ex2_SIF30} thanks to the negligible RB error) is of good quality compared with the exact SIF. The VCE approach works quite well in this case; however, it is not suitable for complicated crack settings. In such cases, other SIF calculation methods and appropriate RB approximations might be preferable \cite{huynh07:ijnme, huynh07:_phd_thesis}.
\begin{figure}
\caption{The center crack problem: SIF solution for $N=30$}
\label{fig:ex2_SIF30}
\end{figure}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (\widehat{\rm SIF}_N(\boldsymbol{\mu}),\Delta_N^{\widehat{\rm SIF}}(\boldsymbol{\mu}))$ requires just $t_{\rm RB} (= 25 \times 2) = 50$(ms) for $N = 40$, since two RB output evaluations (at $\boldsymbol{\mu}$ and $\boldsymbol{\mu}+\delta\mu_1$) are needed; while the FE solution $\boldsymbol{\mu} \rightarrow \widehat{\rm SIF}^\mathcal{N}_{\rm FE}(\boldsymbol{\mu})$ requires $t_{\rm FE} (= 7 \times 2) = 14$(s): thus our RB online evaluation takes only $0.36\%$ of the FEM computational cost.
\subsection{The composite unit cell problem}
We consider a unit cell containing an ellipse region as shown in Figure~\ref{fig:ex3_model}. We apply (clamped) Dirichlet boundary conditions on the bottom of the cell $\Gamma^{\rm o}_B$ and (unit tension) non-homogeneous Neumann boundary conditions on $\Gamma^{\rm o}_T$. We denote the semimajor and semiminor axes of the ellipse region by $d_1$ and $d_2$, respectively. We assume plane stress isotropic materials: the material properties of the matrix (outside the region) are given by $(E_m,\nu_m) = (1,0.3)$, and those of the ellipse region by $(E_f,\nu_f) = (E_f,0.3)$. Our output of interest is the integral of the normal displacement ($u_1$) over $\Gamma^{\rm o}_T$. We note our output of interest is thus ``compliant''.
\begin{figure}
\caption{The composite unit cell problem}
\label{fig:ex3_model}
\end{figure}
We consider $P=3$ parameters $\boldsymbol{\mu} = [\mu_1,\mu_2,\mu_3] \equiv [d_1,d_2,E_f]$. The parameter domain is chosen as $\mbox{\boldmath$\mathcal{D}$} = [0.8, 1.2] \times [0.8,1.2] \times [0.2,5]$. Note that the third parameter (the Young's modulus of the ellipse region) allows the ellipse region to range from an ``inclusion'' (with softer Young's modulus $E_f<E_m (= 1)$) to a ``fiber'' (with stiffer Young's modulus $E_f>E_m (= 1)$).
We then choose $\boldsymbol{\mu}_{\rm ref} = [1.0,1.0,1.0]$ and apply the domain decomposition \cite{rozza08:ARCME}, obtaining $L_{\rm reg} = 34$ subdomains, of which $16$ are general ``curvy triangles'' ($8$ inward and $8$ outward ``curvy triangles'') as shown in Figure~\ref{fig:ex3_mesh}. However, despite the large number of ``curvy triangles'' in the domain decomposition, almost all transformations are congruent; hence we expect a smaller $Q^a$ than, say, that of the ``arc-cantilever beam'' example, in which all the subdomain transformations are different. Indeed, we recover our affine forms with $Q^a = 30$ and $Q^f = 1$; note that $Q^a$ is relatively small for such a complex domain decomposition thanks to our efficient symbolic manipulation ``collapsing'' technique and to those congruent ``curvy triangles''.
We next consider a FE approximation where the mesh contains $n_{\rm node} = 3906$ nodes and $n_{\rm elem} = 7650$ $P_1$ elements, which corresponds to $\mathcal{N} = 7730$ degrees of freedom. The mesh is refined around the interface of the matrix and the inclusion/fiber.
\begin{figure}
\caption{The composite unit cell problem: Domain decomposition and FE mesh}
\label{fig:ex3_mesh}
\end{figure}
We then apply the RB approximation. We present in Table~\ref{tab:ex3_tab} our convergence results: the RB error bounds and effectivities as a function of $N$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$ is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 200$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that our effectivity average is of order $O(10)$.
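The quantities reported in Table~\ref{tab:ex3_tab} can be assembled as in the following Python sketch, in which \texttt{rb\_evaluate} and \texttt{fe\_solve} are hypothetical routines returning $(s_N(\boldsymbol{\mu}),\Delta^s_N(\boldsymbol{\mu}))$ and $s^\mathcal{N}(\boldsymbol{\mu})$, respectively; the sketch only illustrates the definitions of $\mathcal{E}_N$ and $\overline{\eta}_N^s$:
\begin{verbatim}
def convergence_stats(rb_evaluate, fe_solve, test_sample):
    """Max relative error bound and mean effectivity over Xi_test."""
    rel_bounds, effectivities = [], []
    for mu in test_sample:
        s_N, Delta_N = rb_evaluate(mu)          # RB output and error bound
        s_FE = fe_solve(mu)                     # "truth" FE output
        rel_bounds.append(Delta_N / abs(s_N))               # relative bound
        effectivities.append(Delta_N / abs(s_FE - s_N))     # eta_N^s(mu), s_FE != s_N
    return max(rel_bounds), sum(effectivities) / len(effectivities)
\end{verbatim}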
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
5 & 9.38\texttt{E}-03 & 8.86 \\
10 & 2.54\texttt{E}-04 & 7.18 \\
15 & 1.37\texttt{E}-05 & 5.11 \\
20 & 3.91\texttt{E}-06 & 9.74 \\
25 & 9.09\texttt{E}-07 & 6.05 \\
30 & 2.73\texttt{E}-07 & 10.64 \\
35 & 9.00\texttt{E}-08 & 10.17 \\
40 & 2.66\texttt{E}-08 & 10.35 \\
\hline
\end{tabular}
\caption{The composite unit cell problem: RB convergence}
\label{tab:ex3_tab}
\end{table}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$ requires just $t_{\rm RB} = 66$(ms) for $N = 30$; while the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ requires approximately $t_{\rm FE} = 8$(s): thus our RB online evaluation is just $0.83\%$ of the FEM computational cost.
\subsection{The multi-material plate problem}
We consider a unit cell divided into $9$ square subdomains of equal size as shown in Figure~\ref{fig:ex4_model}. We apply (clamped) Dirichlet boundary conditions on the bottom of the cell $\Gamma^{\rm o}_B$ and (unit tension) non-homogeneous Neumann boundary conditions on $\Gamma^{\rm o}_T$. We consider orthotropic plane stress materials: the Young's moduli for all $9$ subdomains are given in Figure~\ref{fig:ex4_model}, the Poisson's ratio is chosen as $\nu_{12,i} =0.3$, $i = 1,\ldots,9$, and $\nu_{21,i}$ is determined by \refeq{eqn:ortho_plane_stress}. The shear modulus is chosen as a function of the two Young's moduli as in \refeq{eqn:ortho_shear_modulus} for all $9$ subdomains. All material axes are aligned with the coordinate system (and loading). Our output of interest is the integral of the normal displacement ($u_1$) over $\Gamma^{\rm o}_T$, which represents the average normal displacement on $\Gamma^{\rm o}_T$. We note our output of interest is thus ``compliant''.
\begin{figure}
\caption{The multi-material problem}
\label{fig:ex4_model}
\end{figure}
We consider $P=6$ parameters $\boldsymbol{\mu} = [\mu_1,\ldots,\mu_6]$, corresponding to the six Young's moduli shown in Figure~\ref{fig:ex4_model} (the two Young's moduli for each subdomain are shown in brackets). The parameter domain is chosen as $\mbox{\boldmath$\mathcal{D}$} = [0.5, 2.0]^6$.
We then apply the domain decomposition \cite{rozza08:ARCME} and obtain $L_{\rm reg} = 18$ subdomains. Despite the large number $L_{\rm reg}$ of subdomains, there is no geometric transformation in this case. We recover our affine forms with $Q^a = 12$ and $Q^f = 1$; note that all $Q^a$ terms come from the Young's moduli, since no geometric transformation is involved. Moreover, it is observed that the bilinear form can, in fact, be classified as ``parametrically coercive'' \cite{patera07:book}.
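A minimal sketch of the online RB assembly exploiting this affine structure (with hypothetical NumPy arrays \texttt{Aq} standing for the precomputed reduced matrices $A_N^q$ and \texttt{theta} for the coefficient functions $\Theta^q(\boldsymbol{\mu})$ evaluated at the current parameter; this is an illustration only, not our actual implementation) reads:
\begin{verbatim}
import numpy as np

def assemble_online(Aq, theta):
    """Online RB assembly A_N(mu) = sum_q Theta^q(mu) * A_N^q.
    Aq    : list of Q^a precomputed (N x N) reduced float matrices
    theta : list of Q^a coefficient values Theta^q(mu)
    """
    A = np.zeros_like(Aq[0])
    for Aq_q, theta_q in zip(Aq, theta):
        A += theta_q * Aq_q
    return A
\end{verbatim}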
We next consider a FE approximation where the mesh contains $n_{\rm node} = 4098$ nodes and $n_{\rm elem} = 8032$ $P_1$ elements, which corresponds to $\mathcal{N} = 8112$ degrees of freedom. The mesh is refined around all the interfaces between different subdomains as shown in Figure~\ref{fig:ex4_mesh}.
\begin{figure}
\caption{The multi-material problem: Domain decomposition and FE mesh}
\label{fig:ex4_mesh}
\end{figure}
We then apply the RB approximation. We present in Table~\ref{tab:ex4_tab} our convergence results: the RB error bounds and effectivities as a function of $N$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$ is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 200$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that our effectivity average is of order $O(10)$.
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
5 & 1.01\texttt{E}-02 & 8.11 \\
10 & 1.45\texttt{E}-03 & 11.16 \\
20 & 3.30\texttt{E}-04 & 11.47 \\
30 & 1.12\texttt{E}-04 & 12.59 \\
40 & 2.34\texttt{E}-05 & 11.33 \\
50 & 9.85\texttt{E}-06 & 12.90 \\
\hline
\end{tabular}
\caption{The multi-material problem: RB convergence}
\label{tab:ex4_tab}
\end{table}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$ requires just $t_{\rm RB} = 33$(ms) for $N = 40$; while the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ requires $t_{\rm FE} = 8.1$(s): thus the RB online evaluation is just $0.41\%$ of the FEM computational cost.
\subsection{The woven composite beam problem}
We consider a composite cantilever beam as shown in Figure~\ref{fig:ex5_model}. The beam is divided into two regions, each with a square hole of (equal) size $2w$ in its center. We apply (clamped) Dirichlet boundary conditions on the left side of the beam $\Gamma^{\rm o}_L$, (symmetric about the $x^{\rm o}_1$ direction) Dirichlet boundary conditions on the right side of the beam $\Gamma^{\rm o}_R$, and (unit tension) non-homogeneous Neumann boundary conditions on the top side $\Gamma^{\rm o}_T$. We consider the same orthotropic plane stress material for both regions: $(E_1, E_2) = (1, E_2)$, $\nu_{12} = 0.3$, $\nu_{21}$ is determined by \refeq{eqn:ortho_plane_stress} and the shear modulus $G_{12}$ is given by \refeq{eqn:ortho_shear_modulus}. The material axes of the two regions are not aligned with the coordinate system and loading: the angles between the material axes and the coordinate system in the first and second regions are $\theta$ and $-\theta$, respectively. The setting represents a ``woven'' composite material across the beam horizontally. Our output of interest is the integral of the normal displacement ($u_1$) over the boundary $\Gamma^{\rm o}_O$. We note our output of interest is thus ``non-compliant''.
\begin{figure}
\caption{The woven composite beam problem}
\label{fig:ex5_model}
\end{figure}
We consider $P=3$ parameters $\boldsymbol{\mu} = [\mu_1,\mu_2,\mu_3] \equiv [w, E_2, \theta]$. The parameter domain is chosen as $\mbox{\boldmath$\mathcal{D}$} = [1/12, 1/6] \times [1/2, 2] \times [-\pi/4, \pi/4]$.
We then apply the domain decomposition \cite{rozza08:ARCME} and obtain $L_{\rm reg} = 32$ subdomains; note that all subdomain transformations are simply translations thanks to the ``added control points'' along the external (and interface) boundaries strategy \cite{rozza08:ARCME}. We recover the affine forms with $Q^a = 19$, $Q^f = 2$, and $Q^\ell = 1$.
We next consider a FE approximation where the mesh contains $n_{\rm node} = 3569$ nodes and $n_{\rm elem} = 6607$ $P_1$ elements, which corresponds to $\mathcal{N} = 6865$ degrees of freedom. The mesh is refined around the holes, the interface between the two regions, and the clamped boundary as shown in Figure~\ref{fig:ex5_mesh}.
\begin{figure}
\caption{The woven composite beam problem: Domain decomposition and FE mesh}
\label{fig:ex5_mesh}
\end{figure}
We then apply the RB approximation. We present in Table~\ref{tab:ex5_tab} our convergence results: the RB error bounds and effectivities as a function of $N$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$ is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 200$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that our effectivity average is of order $O(5-25)$.
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
4 & 4.64\texttt{E}-02 & 22.66 \\
8 & 1.47\texttt{E}-03 & 7.39 \\
12 & 2.35\texttt{E}-04 & 9.44 \\
16 & 6.69\texttt{E}-05 & 14.29 \\
20 & 1.31\texttt{E}-05 & 11.41 \\
\hline
\end{tabular}
\caption{The woven composite beam problem: RB convergence}
\label{tab:ex5_tab}
\end{table}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$ requires just $t_{\rm RB} = 40$(ms) for $N = 20$; while the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ requires $t_{\rm FE} = 7.5$(s): thus our RB online evaluation is just $0.53\%$ of the FEM computational cost.
\subsection{The closed vessel problem}
We consider a closed vessel under tension at both ends as shown in Figure~\ref{fig:ex6_modelfull}.
\begin{figure}
\caption{The closed vessel problem}
\label{fig:ex6_modelfull}
\end{figure}
The vessel is axially symmetric about the $x^{\rm o}_2$ axis, and symmetric about the $x^{\rm o}_1$ axis, hence we only consider a representative ``slice'' in our axisymmetric formulation, as shown in Figure~\ref{fig:ex6_model}. The vessel consists of two layers: the outer layer is of fixed width $w^{\rm out} = 1$, while the inner layer is of width $w^{\rm in} = w$. The material properties of the inner layer and outer layer are given by $(E^{\rm in},\nu) = (E^{\rm in},0.3)$ and $(E^{\rm out},\nu) = (1,0.3)$, respectively. We apply (symmetric about the $x^{\rm o}_2$ direction) Dirichlet boundary conditions on the bottom boundary of the model $\Gamma^{\rm o}_B$, (symmetric about the $x^{\rm o}_1$ direction) Dirichlet boundary conditions on the left boundary of the model $\Gamma^{\rm o}_L$, and (unit tension) non-homogeneous Neumann boundary conditions on the top boundary $\Gamma^{\rm o}_T$. Our output of interest is the integral of the radial displacement ($u_r$) over the right boundary $\Gamma^{\rm o}_R$. We note our output of interest is thus ``non-compliant''.
\begin{figure}
\caption{The closed vessel problem}
\label{fig:ex6_model}
\end{figure}
We consider $P=2$ parameters $\boldsymbol{\mu} = [\mu_1,\mu_2] \equiv [w, E^{\rm in}]$. The parameter domain is chosen as $\mbox{\boldmath$\mathcal{D}$} = [0.1, 1.9] \times [0.1, 10]$.
We then apply the domain decomposition \cite{rozza08:ARCME} and obtain $L_{\rm reg} = 12$ subdomains as shown in Figure~\ref{fig:ex6_mesh}. We recover our affine forms with $Q^a = 47$, $Q^f = 1$, and $Q^\ell = 1$. Despite the small number of parameters (and seemingly simple transformations), $Q^a$ is large in this case. A major contribution to $Q^a$ comes from the expansion of the $x^{\rm o}_1$ terms in the effective elastic tensor $[\mathbf{S}]$, which appear due to the geometric transformation of the inner layer.
We next consider a FE approximation where the mesh contains $n_{\rm node} = 3737$ nodes and $n_{\rm elem} = 7285$ $P_1$ elements, which corresponds to $\mathcal{N} = 7423$ degrees of freedom. The mesh is refined around the interface between the two layers.
\begin{figure}
\caption{The closed vessel problem: Domain decomposition and FE mesh}
\label{fig:ex6_mesh}
\end{figure}
We then apply the RB approximation. We present in Table~\ref{tab:ex6_tab} convergence results: the RB error bounds and effectivities as a function of $N$. The error bound reported, $\mathcal{E}_N = \Delta^s_N(\boldsymbol{\mu})/|s_N(\boldsymbol{\mu})|$ is the maximum of the relative error bound over a random test sample $\Xi_{\rm test}$ of size $n_{\rm test} = 200$. We denote by $\overline{\eta}_N^s$ the average of the effectivity $\eta_N^s(\boldsymbol{\mu})$ over $\Xi_{\rm test}$. We observe that our effectivity average is of order $O(50-120)$, which is quite large; however, this is not surprising since our output is ``non-compliant''.
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$N$ & $\mathcal{E}_N$ & $\overline{\eta}_N^s$ \\
\hline
10 & 7.12\texttt{E}-02 & 56.01 \\
20 & 1.20\texttt{E}-03 & 111.28 \\
30 & 3.96\texttt{E}-05 & 49.62 \\
40 & 2.55\texttt{E}-06 & 59.96 \\
50 & 5.70\texttt{E}-07 & 113.86 \\
60 & 5.90\texttt{E}-08 & 111.23 \\
70 & 6.95\texttt{E}-09 & 77.12 \\
\hline
\end{tabular}
\caption{The closed vessel problem: RB convergence}
\label{tab:ex6_tab}
\end{table}
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow (s_N(\boldsymbol{\mu}),\Delta_N^s(\boldsymbol{\mu}))$ requires just $t_{\rm RB} = 167$(ms) for $N = 40$; while the FE solution $\boldsymbol{\mu} \rightarrow s^\mathcal{N}(\boldsymbol{\mu})$ requires $t_{\rm FE} = 8.2$(s): thus our RB online evaluation is just $2.04\%$ of the FEM computational cost.
\subsection{The Von {K{\'a}rm{\'a}n} plate problem}
We now consider a different problem that can be derived from the classical elasticity equations \cite{ciarlet1988three,ciarlet1997mathematical}. It turns out to be nonlinear and brings with it a number of technical difficulties. Let us consider an elastic, two-dimensional, rectangular plate $\Omega = [0,l] \times [0,1]$ in its undeformed state, subjected to a $\mu$-parametrized external load acting on its edges. Then the \emph{Airy stress potential} $\phi$ and the deformation $u$ from the flat state are defined by the Von {K{\'a}rm{\'a}n} equations
\begin{equation}
\label{karm}
\begin{cases}
\Delta^2u + \mu u_{xx} = \left[\phi, u\right] + f \ , \quad &\text{in}\ \Omega \\
\Delta^2\phi = -\left[u, u\right] \ , \quad &\text{in}\ \Omega
\end{cases}
\end{equation}
where $$\Delta^2 := \Delta\Delta = \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)^2$$ is the biharmonic operator and
$$[u,\phi] := \frac{\partial^2u}{\partial x^2}\frac{\partial^2\phi}{\partial y^2} -2\frac{\partial^2u}{\partial x\partial y}\frac{\partial^2\phi}{\partial x\partial y} + \frac{\partial^2u}{\partial y^2}\frac{\partial^2\phi}{\partial x^2}$$
is the \emph{Monge-Amp\`ere bracket}. We thus have a system of two nonlinear, parametrized, fourth-order equations, with $\mu$ the parameter that measures the compression along the sides of the plate.
\begin{figure}
\caption{A rectangular bidimensional elastic plate compressed on its edges}
\label{piastra}
\end{figure}
From the mathematical point of view, we suppose the plate is simply supported, i.e.\ that the boundary conditions $$u = \Delta u = 0, \qquad \phi = \Delta \phi = 0, \qquad \text{on} \ \partial\Omega ,$$ hold.
In this model problem we are interested in the stability and uniqueness of the solution for a given parameter. In fact, due to the nonlinearity of the bracket we obtain the so-called \emph{buckling phenomenon} \cite{PAMM:PAMM201310213}, which is the main feature studied in bifurcation theory.
What we seek is the critical value of $\mu$ at which the stable solution (the initial configuration) becomes unstable while two new stable, symmetric solutions appear.
To detect this value we need a fairly complex algorithm that combines a \emph{continuation method}, a nonlinear solver and a full-order method to find the buckled state. In the end, for every $\mu \in \mbox{\boldmath$\mathcal{D}$}_{train}$ (a fine discretization of the parameter domain $\mbox{\boldmath$\mathcal{D}$}$) we have a loop due to the nonlinearity, in which at each iteration we solve the Finite Element problem associated with the weak formulation of the problem; a schematic sketch of this loop is given below.
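In the sketch, \texttt{residual}, \texttt{jacobian} and \texttt{solve} are hypothetical Python callables standing for the assembled FE residual, its Jacobian and a linear solver; it illustrates the strategy only, not our actual implementation:
\begin{verbatim}
import numpy as np

def continuation(residual, jacobian, solve, mu_train, u0,
                 tol=1e-8, max_it=25):
    """March through D_train, reusing the previous solution as the
    initial guess (continuation) and solving each nonlinear problem
    with Newton's method."""
    u = u0.copy()
    snapshots = []
    for mu in mu_train:                  # fine discretization of D
        for _ in range(max_it):          # Newton loop at fixed mu
            r = residual(u, mu)
            if np.linalg.norm(r) < tol:
                break
            u = u + solve(jacobian(u, mu), -r)   # Newton update
        snapshots.append(u.copy())
    return snapshots
\end{verbatim}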
Here we consider $P=1$ parameter $\mu$ and its domain is suitably chosen\footnote{It is possible to show that the bifurcation point is related to the eigenvalue of the linearized model \cite{berger}, so we are able to set in a proper way the range of the parameter domain.} as $\mbox{\boldmath$\mathcal{D}$} = [30, 70]$.
Also in this case we can simply recover the affine forms with $Q^a = 3$. For the rectangular plate test case with $l=2$ we applied the Finite Element method, with $n_{node} = 441$ nodes and $n_{elem} = 800$ $P_2$ elements, which corresponds to $\mathcal{N} = 6724$ degrees of freedom. We stress that the linear system obtained by the Galerkin projection has to be solved at each step of the nonlinear solver; here we chose the classic \emph{Newton method} \cite{QuateroniValli97}.
Moreover, for a given parameter, we have to solve a FE system at each Newton iteration until convergence just to obtain one of the possible solutions of our model, keeping in mind that we do not know a priori where the bifurcation point is and we have to investigate the whole parameter domain. It is clear that, despite the simple geometry and the quite coarse mesh, reduction strategies are fundamental in this kind of application.
For example, in order to plot a \emph{bifurcation diagram} like the one in Figure~\ref{fig:bifdia}, the full order code running on a standard computer takes approximately one hour.
\begin{figure}
\caption{Bifurcation diagram for a square plate and different initial guesses for the Newton method; the $y$-axis shows the infinity norm of the solution}
\label{fig:bifdia}
\end{figure}
Once a specific parameter is selected, $\lambda = 70$, we can see in Figure~\ref{fig:bifurc1} the two solutions that belong to the different branches of the plot reported in Figure~\ref{fig:bifdia}.
\begin{figure}
\caption{Contour plot of the two solutions belonging to the green and red branches of the bifurcation diagram for $\lambda = 70$, respectively}
\label{fig:bifurc1}
\end{figure}
We then applied the RB approximation and present in Table~\ref{tab:ex7_tab} our convergence results: the error between the truth approximation and the reduced one as a function of $N$. The error reported, $\mathcal{E}_N = \max_{\boldsymbol{\mu} \in \mbox{\boldmath$\mathcal{D}$}} ||\mathbf{u}^\mathcal{N}(\boldsymbol{\mu}) - \mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})||_{X}$, is the maximum of the approximation error over a uniformly chosen test sample.
\begin{table}
\centering
\begin{tabular}{|c||c|}
\hline
$N$ & $\mathcal{E}_N$ \\
\hline
1 & 6.61\texttt{E}+00 \\
2 & 6.90\texttt{E}-01 \\
3 & 7.81\texttt{E}-02 \\
4 & 2.53\texttt{E}-02 \\
5 & 1.88\texttt{E}-02 \\
6 & 1.24\texttt{E}-02 \\
7 & 9.02\texttt{E}-03 \\
8 & 8.46\texttt{E}-03 \\
\hline
\end{tabular}
\caption{The Von {K{\'a}rm{\'a}n} plate problem : RB convergence}
\label{tab:ex7_tab}
\end{table}
As we can see in Figure~\ref{fig:1cell1}, we obtain very good results with a low number of snapshots due to the strong properties of the underlying biharmonic operator.
\begin{figure}
\caption{Comparison between the full order solution (left) and reduced order one (right) for $\lambda = 65$ }
\label{fig:1cell1}
\end{figure}
A suitable extension for the a posteriori error estimate of the solution can be obtained by applying Brezzi-Rappaz-Raviart (BRR) theory on the numerical approximation of nonlinear problems \cite{Brezzi1980,Brezzi1981,Brezzi1982,Grepl2007,canuto2009}. However, the adaptation of BRR theory to RB methods in bifurcating problems is not straightforward, and we leave it for further future investigation \cite{pichirozza}.
As regards computational times, a RB online evaluation $\boldsymbol{\mu} \rightarrow \mathbf{u}^\mathcal{N}_{{\rm RB},N}(\boldsymbol{\mu})$ requires just $t_{\rm RB} = 100$(ms) for $N = 8$; while the FE solution $\boldsymbol{\mu} \rightarrow \mathbf{u}^\mathcal{N}(\boldsymbol{\mu})$ requires $t_{\rm FE} = 8.17$(s): thus our RB online evaluation is just $1.22\%$ of the FEM computational cost.
\section{Conclusions}
We have provided several examples of applications of reduced basis methods to linear elasticity problems depending on many parameters of different kinds (geometrical, physical, engineering), using different linear elasticity approximations, a 2D Cartesian setting or a 3D axisymmetric one, and different material models (isotropic and orthotropic), as well as an overview of nonlinear problems. Reduced basis methods have confirmed a very good computational performance with respect to a classical finite element formulation, which is not well suited to solving parametrized problems in the real-time and many-query contexts.
We have extended and generalized previous work \cite{milani08:RB_LE} with the possibility to treat more complex outputs by introducing a dual problem \cite{rozza08:ARCME}. Another very important aspect addressed in this work is the certification of the errors in the reduced basis approximation by means of a posteriori error estimators, see for example \cite{huynh07:cras}. This work also points to more complex 3D parametrized applications (not only in the special axisymmetric case) as quite promising problems to be solved with the same certified methodology \cite{chinesta2014separated,zanon:_phd_thesis}.
\section*{Appendix}
\addcontentsline{toc}{section}{Appendix}
\section{Stress-strain matrices}
In this section, we denote by $E_i$, $i = 1,2,3$, the Young's moduli; by $\nu_{ij}$, $i,j = 1,2,3$, the Poisson ratios; and by $G_{12}$ the shear modulus of the material.
\subsection{Isotropic cases}
For both of the following cases, $E = E_1 = E_2$, and $\nu = \nu_{12} = \nu_{21}$.
Isotropic plane stress:
\begin{equation*}
[\mathbf{E}] = \frac{E}{(1-\nu^2)}\left[
\begin{array}{ccc}
1 & \nu & 0 \\
\nu & 1 & 0 \\
0 & 0 & \frac{1-\nu}{2} \\
\end{array}
\right].
\end{equation*}
Isotropic plane strain:
\begin{equation*}
[\mathbf{E}] = \frac{E}{(1+\nu)(1-2\nu)}\left[
\begin{array}{ccc}
1-\nu & \nu & 0 \\
\nu & 1-\nu & 0 \\
0 & 0 & \frac{1-2\nu}{2} \\
\end{array}
\right].
\end{equation*}
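As a consistency check, these matrices are precisely the isotropic specializations of the orthotropic matrices given in the next subsection: setting $E_1=E_2=E$, $\nu_{12}=\nu_{21}=\nu$ and $G_{12}=\frac{E}{2(1+\nu)}$ in the orthotropic plane stress matrix, the shear entry becomes
\begin{equation*}
\frac{(1-\nu_{12}\nu_{21})G_{12}}{1-\nu_{12}\nu_{21}} = G_{12} = \frac{E}{2(1+\nu)} = \frac{E}{1-\nu^2}\cdot\frac{1-\nu}{2},
\end{equation*}
in agreement with the $(3,3)$ entry of the plane stress matrix above; the plane strain case is analogous.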
\subsection{Orthotropic cases}
Here we assume that the orthotropic material axes are aligned with the axes used for the analysis of the structure. If the structural axes are not aligned with the orthotropic material axes, the orthotropic stress-strain matrix must be rotated with respect to the structural axes. Assuming the angle between the orthotropic material axes and the structural axes is $\theta$, the stress-strain matrix is given by $[\mathbf{E}] = [\mbox{\boldmath$T$}(\theta)][\hat{\mathbf{E}}][\mbox{\boldmath$T$}(\theta)]^T$, where
\begin{equation*}
[\mbox{\boldmath$T$}(\theta)] = \left[
\begin{array}{ccc}
\cos^2\theta & \sin^2\theta & -2\sin\theta\cos\theta \\
\sin^2\theta & \cos^2\theta & 2\sin\theta\cos\theta \\
\sin\theta\cos\theta & -\sin\theta\cos\theta & \cos^2\theta-\sin^2\theta\\
\end{array}
\right].
\end{equation*}
Orthotropic plane stress:
\begin{equation*}
[\hat{\mathbf{E}}] = \frac{1}{(1-\nu_{12}\nu_{21})}\left[
\begin{array}{ccc}
E_1 & \nu_{12}E_1 & 0 \\
\nu_{21}E_2 & E_2 & 0 \\
0 & 0 & (1-\nu_{12}\nu_{21})G_{12} \\
\end{array}
\right].
\end{equation*}
Note here that the condition
\begin{equation}\label{eqn:ortho_plane_stress}
\nu_{12}E_1 = \nu_{21}E_2
\end{equation}
must hold in order to yield a symmetric $[\mathbf{E}]$.
Orthotropic plane strain:
\begin{equation*}
[\hat{\mathbf{E}}] = \frac{1}{\Lambda}\left[
\begin{array}{ccc}
(1-\nu_{23}\nu_{32})E_1 & (\nu_{12}+\nu_{13}\nu_{32})E_1 & 0 \\
(\nu_{21}+\nu_{23}\nu_{31})E_2 &(1-\nu_{13}\nu_{31})E_2 & 0 \\
0 & 0 & \Lambda G_{12} \\
\end{array}
\right].
\end{equation*}
Here $\Lambda = (1-\nu_{13}\nu_{31})(1-\nu_{23}\nu_{32})-(\nu_{12}+\nu_{13}\nu_{32})(\nu_{21}+\nu_{23}\nu_{31})$. Furthermore, the following conditions,
$$\nu_{12}E_1 = \nu_{21}E_2, \quad \nu_{13}E_1 = \nu_{31}E_3, \quad \nu_{23}E_2 = \nu_{32}E_3,$$
must be satisfied, which leads to a symmetric $[\mathbf{E}]$.
A reasonably good approximation for the shear modulus $G_{12}$ in the orthotropic case is given in \cite{carroll98:fem} as
\begin{equation}\label{eqn:ortho_shear_modulus}
\frac{1}{G_{12}} \approx \frac{(1+\nu_{21})}{E_1} + \frac{(1+\nu_{12})}{E_2}.
\end{equation}
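For reference, the matrices of this appendix can be evaluated numerically as in the following Python sketch (NumPy assumed; given purely for illustration, with function and variable names of our own choosing):
\begin{verbatim}
import numpy as np

def ortho_plane_stress_matrix(E1, E2, nu12, theta=0.0):
    """Rotated orthotropic plane-stress matrix [E] = T Ehat T^T."""
    nu21 = nu12 * E1 / E2                        # symmetry condition
    G12 = 1.0 / ((1.0 + nu21) / E1 + (1.0 + nu12) / E2)  # shear modulus approx.
    d = 1.0 - nu12 * nu21
    Ehat = np.array([[E1,        nu12 * E1, 0.0],
                     [nu21 * E2, E2,        0.0],
                     [0.0,       0.0,       d * G12]]) / d
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c,  s * s, -2 * s * c],
                  [s * s,  c * c,  2 * s * c],
                  [s * c, -s * c,  c * c - s * s]])
    return T @ Ehat @ T.T
\end{verbatim}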
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Long-time existence of nonlinear inhomogeneous compressible elastic waves}
\author[a]{Silu Yin\corref{mycorrespondingauthor}}
\cortext[mycorrespondingauthor]{Corresponding author}
\ead{yins11@shu.edu.cn}
\author[b]{Xiufang Cui}
\ead{xfcui16@fudan.edu.cn}
\address[a]{Department of Mathematics, Shanghai University, Shanghai, 200444, P. R. China}
\address[b]{School of Mathematical Sciences, Fudan University, Shanghai, 200433, P. R. China}
\begin{abstract}
In this paper, we consider nonlinear inhomogeneous compressible elastic waves in three spatial dimensions when the density is a small disturbance around a constant state. In the homogeneous case, almost global existence was established by Klainerman-Sideris \cite{KS}, and global existence was obtained by Agemi \cite{Ag} and Sideris \cite{S1,S2} independently. Here we establish the corresponding almost global and global existence theory in the inhomogeneous case.
\end{abstract}
\begin{keyword}
inhomogeneous elastic waves \sep long-time existence\sep generalized energy estimate
\end{keyword}
\end{frontmatter}
\linenumbers
\setcounter{section}{0}
\numberwithin{equation}{section}
\section{Introduction}\label{s1}
The motion of an elastic body in three spatial dimensions is described by a time-dependent family of orientation-preserving diffeomorphisms, written as $\varphi=\varphi(t,x),\ 0\leq t<T$, where $\varphi(0,x)=x$. Here $x=(x^1,x^2,x^3)$ is called the material point and $\varphi=(\varphi^1,\varphi^2,\varphi^3)$ is called the spatial point. The deformation gradient tensor $F$ is defined as
$$F=\frac{\partial \varphi(t, x)}{\partial x}\Big|_{x=x(t,\varphi)},$$
where $F^j_i=\frac{\partial \varphi^j}{\partial x^i}$.
We concentrate on three dimensional inhomogeneous compressible hyperelastic materials whose potential energy $W(F)$ is determined only by the deformation tensor $F$. Denoting the initial density by $\rho(x)$, the action functional governing the elastic material takes the form
\begin{align}\label{1.1}
\mathcal{A}(\varphi)=\int\int_{\mathbb{R}^3}\Big(\frac12\rho(x)|\partial_t\varphi(t,x)|^2-W(F)\Big)dxdt.
\end{align}
Taking the first variation of \eqref{1.1}, we obtain the nonlinear system
\begin{align}\label{11}
\rho(x)\partial_t^2\varphi-\nabla_x\cdot\frac{\partial W(\nabla_x \varphi)}{\partial F}=0.
\end{align}
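For the reader's convenience, here is a brief formal sketch of how \eqref{11} follows from \eqref{1.1}: for any smooth, compactly supported variation $\psi$, the first variation of \eqref{1.1} gives
\begin{align*}
0=\int\int_{\mathbb{R}^3}\Big(\rho(x)\,\partial_t\varphi\cdot\partial_t\psi-\frac{\partial W}{\partial F}(\nabla_x\varphi):\nabla_x\psi\Big)\,dxdt,
\end{align*}
and integrating by parts in $t$ and in $x$, together with the arbitrariness of $\psi$, yields \eqref{11}.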
From the groundbreaking works on quasilinear wave equations and elastodynamics in three dimensions, both the smallness of the initial data and a null condition on the nonlinearities are indispensable to ensure that the solution exists globally. It is therefore reasonable to consider the elastodynamics around the equilibrium state. Let $u(t,x)=\varphi(t,x)-x$; by the isotropy of the hyperelastic material and the inherent frame-indifference, the displacement $u$ satisfies
\begin{align}\label{1.2}
\rho(x)\partial_t^2 u-c_2^2\Delta u-(c_1^2-c_2^2)\nabla( \nabla\cdot u )= N(u,u)
\end{align}
deduced from \eqref{11}, where $c_1^2>\frac43c_2^2$, and
$$N^i(u,v)=B_{lmn}^{ijk}\partial_l(\partial_mu^j\partial_nu^k)$$
with symmetry
\begin{align}\label{1.4}
B_{lmn}^{ijk}=B_{mln}^{jik}=B_{lnm}^{ikj}.
\end{align}
Here we use the null condition that appeared in \cite{S2}, which means that the self-interaction of each homogeneous elastic wave family is nonresonant:
\begin{equation}\label{1.5}
B_{lmn}^{ijk}\omega_i\omega_j\omega_k\omega_l\omega_m\omega_n=0\quad \textup{for all}\ \omega\in S^2,
\end{equation}
\begin{equation}\label{1.6}
B_{lmn}^{ijk}\eta_i\eta_j\eta_k\omega_l\omega_m\omega_n=0 \quad \textup{for all}\ \eta, \omega \in S^2 \ \textup{with} \ \eta\bot\omega.
\end{equation}
For three dimensional homogeneous compressible elastic waves, in which the density is a constant, John \cite{J2} first showed that a small-displacement solution exists almost globally via an $L^1-L^\infty$ estimate (see also \cite{KS} for an independent proof by the generalized energy method). Sideris \cite{S1} discovered a null condition within the class of physically meaningful nonlinearities and established the global existence theory via an enhanced decay estimate. Refinements of the proof of global existence were presented by Sideris \cite{S2} and Agemi \cite{Ag} independently. Blow-up for elasticity with large initial data was given by Tahvildar-Zadeh \cite{T}. The global well-posedness of compressible elastic waves in two dimensions is more delicate and still remains open.
In this paper, we consider the long-time existence of inhomogeneous compressible elastic waves, in which the density is no longer a constant. By adapting the generalized vector field method of Klainerman \cite{K2} and inspired by Sideris \cite{S2}, we show that a unique solution of \eqref{1.2} exists almost globally for non-degenerate elastodynamics near the equilibrium state; see Theorem \ref{thm 2.1}. We also prove that if the elastodynamics is degenerate in the nonlinearities, that is, if the system satisfies the null condition \eqref{1.5} and \eqref{1.6}, then the solution exists globally; see Theorem \ref{thm 2.2}. In the inhomogeneous case, not only is the Lorentz invariance absent, but there are also some extra linear terms to be controlled. One difference and difficulty we meet here is that the traditional weighted norm $\mathcal{X}_{|\alpha|+2}$ does not include $\|\langle c_at-r\rangle P_a\partial_t^2 \Gamma^\alpha u\|_{L^2}$. Utilizing the equation, we show that $\|\langle c_at-r\rangle P_a\partial_t^2 \Gamma^\alpha u\|_{L^2}$ can also be controlled by the classical generalized energy $\mathcal{E}_{|\alpha|+2}$; see Lemma \ref{lem43}.
Next we highlight some closely related results. For homogeneous incompressible elastodynamics, the equations are inherently linearly degenerate in the isotropic case and satisfy a null condition that is necessary for global existence in three dimensions (see \cite{STh1, STh2}). In those papers the authors showed that the lower-order energy is bounded while the higher-order energy grows at most at a slow rate in time. Recently, Lei-Wang \cite{LW} proved that the higher-order generalized energy is still uniformly bounded and provided an improved proof of the global well-posedness.
The two dimensional case is essentially more difficult. The first nontrivial long-time existence result concerning this problem was established by Lei-Sideris-Zhou \cite{LSZ}. They proved almost global existence by formulating the system in Eulerian coordinates. The breakthrough on global well-posedness was due to Lei \cite{L}. He introduced the notion of ``strong null condition'' and discovered that two dimensional incompressible elastodynamics inherently satisfies this extremely important structure in Lagrangian coordinates. Based on these discoveries, Lei proved that the incompressible isotropic elastodynamics in two dimensions admits a unique global classical solution. The method can even be used to prove the vanishing viscosity limit of the viscoelastic system, see \cite{Ca}. Wang \cite{W} built the global existence theory of incompressible elastodynamics in two dimensions in frequency space in Eulerian coordinates. For the inhomogeneous case, Yin \cite{yin} built a model of two-dimensional incompressible elastodynamics and showed global existence. The global well-posedness theory of the inhomogeneous incompressible case in three dimensions will appear in another paper of ours.
For three dimensional wave equations, John \cite{JF} first proved a blow-up phenomenon for the Cauchy problem with sufficiently small initial data violating the null condition. John-Klainerman \cite{JK} showed almost global existence for nonlinear scalar wave equations. Then Klainerman \cite{K2} proved the global existence of classical solutions under the null condition. This crucial result was also obtained independently by Christodoulou \cite{C} using a conformal mapping method. The multiple-speed case in three dimensions was achieved by Sideris-Tu \cite{ST} under a null condition. In the two-dimensional scalar case, Alinhac established a series of results: a ``blow-up solution of cusp type'' with sufficiently small initial data but without the null condition in \cite{Alin1, Alin2}, and global existence with null bilinear forms in \cite{Alin3, Alin4}. However, the well-posedness of nonlinear wave systems with multiple speeds in two dimensions is still unknown.
Before ending this section, we outline the rest of this paper. We first make some preparations and state our main results in Section 2. In Section 3, the detailed commutation relations for the inhomogeneous compressible elastodynamics are derived. In Section 4 we present some useful $L^\infty-L^2$ estimates and weighted $L^2-L^2$ estimates based on the classical Sobolev inequalities, without Lorentz operators; in particular, we control the weighted generalized $L^2$ norm of $\partial_t^2 u$ by the classical generalized energy. In Section 5, the almost global existence of inhomogeneous compressible elastodynamics is established by a higher-order weighted generalized energy estimate. The corresponding lower-order energy estimate is presented in Section 6 under the null condition; combined with the higher-order energy estimate, it yields the global existence theory for the inhomogeneous compressible elastodynamics.
\section{Notations and Main Results}
In this work, we concentrate on the long-time existence of the solutions of inhomogeneous elastodynamics where the density is a small disturbance around a constant state. Without loss of generality, let this constant state be $1$. Thus, we assume
$$\rho(x)=1+\tilde{\rho}(x).$$
Partial derivatives will be presented as
\begin{align*}
\partial = (\partial_0,\partial_1,\partial_2,\partial_3)=(\partial_t,\partial_1,\partial_2,\partial_3),\quad \nabla=(\partial_1,\partial_2,\partial_3).
\end{align*}
The angular momentum operators are the vector fields
$$ \Omega=(\Omega_1,\Omega_2,\Omega_3)=x\wedge\nabla, $$
$$\nabla=\frac{x}{r}\partial_r-\frac{x}{r^2}\wedge \Omega , \quad \textup{where} \quad r=|x| \quad \textup{and} \quad \partial_r=\frac{x}{r}\cdot\nabla .$$
Following Sideris \cite{S1,S2}, the generators of simultaneous rotations and the scaling operator are given as
\begin{align*}
\tilde{\Omega}_l=\Omega_lI+U_l,\quad l=1,2,3,
\end{align*}
with
\begin{equation*}
U_1=\left[
\begin{matrix}
0 &0 & 0\\
0 & 0 & 1 \\
0&-1&0
\end{matrix}
\right],
\quad
U_2=\left[
\begin{matrix}
0 &0 & -1\\
0 & 0 & 0 \\
1 &0 &0
\end{matrix}
\right],
\quad
U_3=\left[
\begin{matrix}
0 &1 & 0 \\
-1 & 0 & 0 \\
0 &0 &0
\end{matrix}
\right],
\end{equation*}
and
\begin{align*}
\tilde{S}=t\partial_t +r\partial_r-1.
\end{align*}
Let
\begin{align*}
\Gamma = (\Gamma_0,\cdots,\Gamma_7)=(\partial,\tilde{\Omega},\tilde{S}),\quad \textup{where} \quad \tilde{\Omega}=\{\tilde{\Omega}_1,\tilde{\Omega}_2,\tilde{\Omega}_3\}.
\end{align*}
As in \cite{S2}, the standard generalized energy is defined as
\begin{equation*}
\mathcal{E}_k(u(t)) = \frac{1}{2} \sum\limits_{|\alpha|\leq k-1}\int_{\mathbb{R}^3}[ \ |\partial_t \Gamma ^ \alpha u(t)|^2 + c_2^2|\nabla \Gamma ^ \alpha u(t)|^2+(c_1^2-c_2^2)(\nabla \cdot \Gamma ^ \alpha u(t))^2 \ ]dx.
\end{equation*}
To estimate the different family of elastic waves, orthogonal projections onto radial and transverse directions are introduced as follows
\begin{align*}
P_1u(x)=\frac{x}{r}\otimes\frac{x}{r} u(x)=\frac{x}{r}\langle\frac{x}{r},u(x)\rangle\quad
\textup{and}\quad
P_2u(x)=-\frac{x}{r}\wedge(\frac{x}{r}\wedge u(x)).
\end{align*}
And the weighted generalized energy is given by
\begin{equation*}
\mathcal{X}_k(u(t)) =\sum_{a =1}^{2}\sum_{\beta=0}^{3}\sum_{l=1}^{3}\sum_{|\alpha|\leq k-2} ||\langle c_a t-r \rangle P_a \partial_\beta \partial_l\Gamma^\alpha u(t)||_{L^2}
\end{equation*}
with the notation of $\langle \cdot \rangle = (1+|\cdot|^2)^{\frac{1}{2}}$.
We characterize the space of initial data in $H_\Lambda^k$, which is defined as
$$H_\Lambda^k=\{(f,g):\sum\limits_{|\alpha|\leq k-1}(\|\Lambda^\alpha f\|_{L^2}+\|\nabla\Lambda^\alpha f\|_{L^2}+\|\Lambda^\alpha g\|_{L^2})<\infty\},$$
where $$\Lambda=\{\nabla,\tilde{\Omega},r\partial_r-1\}.$$
For simplicity of presentation, throughout this paper, we utilize $A \lesssim B $ to denote $A\leq CB$ for some positive absolute constant $C$.
Now we are ready to state our main results, which generalize the work of Klainerman-Sideris \cite{KS}, Agemi \cite{Ag} and Sideris \cite{S1,S2} to the inhomogeneous case.
\begin{thm}[Almost Global Existence]\label{thm 2.1}
Let $k\geq 9$. Suppose $U_0=(u(0),u_t(0))$ is the initial data of \eqref{1.2} and satisfies
\begin{align*}
||U_0||_{H_\Lambda^k}\leq M, \quad \textup {and}\quad ||U_0||_{H_\Lambda^{k-2}}\leq \epsilon,
\end{align*}
where $M,\epsilon >0$ are two given constants.
If
\begin{equation}\label{21}
\|\langle r\rangle\Lambda^\alpha\tilde{\rho}(x)\|_{L^2}\leq\delta<\frac12,\quad\textup{for}\ |\alpha|\leq k+2
\end{equation}
is small enough, then there exists an $\epsilon_0$ sufficiently small, depending only on $M, k, \delta$, such that for any $\epsilon \leq \epsilon_0$, the Cauchy problem of \eqref{1.2} has a unique almost global solution satisfying
\begin{equation}
\mathcal{E}_{k}^\frac12(u(t))\leq C M\langle t \rangle^{(\varepsilon+\delta)/2}
\end{equation}
for some positive constant $C$ uniformly in $t$.
\end{thm}
\begin{thm}[Global Existence]\label{thm 2.2}
Under the assumption of Theorem \ref{thm 2.1}, if \eqref{1.5} and \eqref{1.6} are satisfied, then the Cauchy problem of \eqref{1.2} admits a unique global classical solution satisfying
\begin{equation}
\mathcal{E}_{k-2}^\frac12(u(t))\leq C_1(\varepsilon\exp\{C_2M\}+\delta \exp\{C_3M\})
\end{equation}
for some positive constants $C_1$, $C_2$, $C_3$, uniformly in $t$.
\end{thm}
\begin{rem}
In Theorem \ref{thm 2.1} and Theorem \ref{thm 2.2}, we need some decay in $\langle r\rangle$ for the disturbance $\tilde{\rho}$ of the density. In fact, this is not a necessary requirement. If the density is a small disturbance with compact support, Theorem \ref{thm 2.1} and Theorem \ref{thm 2.2} are also valid. Looking for weaker conditions on the density that still guarantee the long-time existence of the inhomogeneous elastic waves, one finds that assumption \eqref{21} can be replaced by
$$\|\langle r\rangle\Lambda^\alpha\tilde{\rho}(x)\|_{L^2}\leq\delta<\frac12,\quad\textup{for}\ |\alpha|\leq 4$$
and
$$\|\Lambda^\alpha\tilde{\rho}(x)\|_{L^2}\leq\delta<\frac12,\quad\textup{for}\ 5\leq|\alpha|\leq k$$
as can be checked from the proofs.
\end{rem}
\section{Commutation}
To carry out the generalized energy estimate, it is necessary to analyze what happens when the $\Gamma$ derivatives act on \eqref{1.2}. We rewrite \eqref{1.2} as
\begin{align}\label{31}
\partial_t^2 u-c_2^2\Delta u -(c_1^2-c_2^2)\nabla (\nabla \cdot u)=N(u,u)-\tilde{\rho}\partial_t^2 u.
\end{align}
Define
\begin{equation}
\mathcal{L}u\triangleq\partial_t^2 u-\mathcal{A}u\triangleq\partial_t^2 u -c_2^2\Delta u -(c_1^2-c_2^2)\nabla(\nabla \cdot u).
\end{equation}
By direct calculation, we obtain the following commutation relations
\begin{align*}
\partial\mathcal{L}u=\mathcal{L}\partial u,\quad \tilde{\Omega}\mathcal{L}u=\mathcal{L}\tilde{\Omega} u,\quad \tilde{S}\mathcal{L}u=\mathcal{L}\tilde{S} u-2\mathcal{L}u,\quad \tilde{S}\partial_t^2u=\partial_t^2\tilde{S} u-2\partial_t^2u,
\end{align*}
and
\begin{align*}
\partial N(u,v)=&N(\partial u,v)+N( u,\partial v),& \partial(\tilde{\rho}\partial_t^2 u)&=\nabla\tilde{\rho}\partial_t^2 u+\tilde{\rho}\partial_t^2 \partial u,\\
\tilde{\Omega}N(u,v)=&N(\tilde{\Omega} u,v)+ N(u,\tilde{\Omega} v),& \tilde{\Omega}(\tilde{\rho}\partial_t^2 u)&=\tilde{\Omega}\tilde{\rho}\partial_t^2 u+\tilde{\rho}\partial_t^2 \tilde{\Omega} u,\\
\tilde{S}N(u,v)=&N(\tilde{S} u,v)+N( u,\tilde{S}v)-2N(u,v),&\tilde{S}(\tilde{\rho}\partial_t^2 u)&=(r\partial_r-1)\tilde{\rho}\partial_t^2 u+\tilde{\rho}\partial_t^2 \tilde{S} u-2\tilde{\rho}\partial_t^2 u.
\end{align*}
Then we have
\begin{align*}
\mathcal{L}\partial u=&2N(\partial u,u)-\nabla\tilde{\rho}\partial_t^2 u-\tilde{\rho}\partial_t^2 \partial u,\\
\mathcal{L}\tilde{\Omega} u=&2N(\tilde{\Omega} u,u)-\tilde{\Omega}\tilde{\rho}\partial_t^2 u-\tilde{\rho}\partial_t^2 \tilde{\Omega} u,\\
\mathcal{L}\tilde{S}u=&2N(\tilde{S} u,u)-(r\partial_r-1)\tilde{\rho}\partial_t^2 u-\tilde{\rho}\partial_t^2 \tilde{S} u
\end{align*}
according to the symmetry \eqref{1.4}. For any multi-index $\alpha=(\alpha_1,\cdots,\alpha_8)\in\mathbb{N}^8$, we then obtain
\begin{equation}\label{3.1}
\mathcal{L}\Gamma^\alpha u
\backsimeq\sum_{|\beta+\gamma|= |\alpha|}N(\Gamma^\beta u,\Gamma^\gamma u)-\sum_{|\beta+\gamma|= |\alpha|}\Lambda^\beta\tilde{\rho}\partial_t^2\Gamma^\gamma u.
\end{equation}
Here the notation $A\backsimeq E+F$ means that $A=k_1 E+k_2F$ for some finite positive numbers $k_1$ and $k_2$ depending only on $\alpha$.
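As a sample of the direct calculations behind the commutation relations above, consider the scaling operator acting on $\partial_t^2u$:
\begin{align*}
\tilde{S}\,\partial_t^2u=(t\partial_t+r\partial_r-1)\partial_t^2u
=\partial_t^2(t\partial_tu+r\partial_ru-u)-2\partial_t^2u
=\partial_t^2\tilde{S}u-2\partial_t^2u,
\end{align*}
since $\partial_t^2(t\partial_tu)=t\partial_t^3u+2\partial_t^2u$; the remaining relations are obtained in the same manner.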
\section{$L^\infty$ and weighted $L^2$ estimates}
From the theory of nonlinear wave equations, we know that a priori estimates play the key role in global existence. Klainerman \cite{K} obtained pointwise bounds for the unknown that decay as $t\rightarrow\infty$ by establishing Klainerman-Sobolev estimates with a larger collection of vector fields that preserve the linear wave equation. Armed with these estimates, it is not hard to adapt the proof of the local existence theorem to obtain global existence in certain cases. But for elastodynamics, the Lorentz invariance is absent. The following improved Sobolev-type estimates appeared partially in \cite{S2} (see also \cite{KS}).
\begin{lem}[\cite{S2}]\label{lem4.1}
Let $u\in H_\Gamma^k(T)$ and $\mathcal{X}_k (u(t))<\infty, $ then
\begin{align}
\langle r \rangle^{\frac{1}{2}}| \Gamma^\alpha u(t,x)|&\leq C\mathcal{E}_k^\frac{1}{2}(u(t)), \quad|\alpha|+2 \leq k,\label{4.1}\\
\langle r \rangle|\partial \Gamma^\alpha u(t,x)|&\leq C\mathcal{E}_k^\frac{1}{2}(u(t)), \quad|\alpha|+3 \leq k,\label{4.2-1}\\
\langle r \rangle {\langle c_a t -r\rangle}^{\frac{1}{2}}|P_a \partial\Gamma^\alpha u(t,x)|&\leq C[\mathcal{E}_k^{\frac{1}{2}}(u(t))+\mathcal{X}_k (u(t))],\quad |\alpha|+3\leq k,\label{4.2}\\
\langle r \rangle\langle c_a t -r \rangle |P_a \partial \nabla \Gamma ^\alpha u(t,x)|&\leq C \mathcal{X}_k(u(t)), \quad |\alpha |+4 \leq k.\label{4.3}
\end{align}
\end{lem}
For small solutions of the inhomogeneous elastodynamics, the weighted norm $\mathcal{X}_k$ can be controlled by the energy $\mathcal{E}_k^\frac12$. Before doing this, we first state a basic estimate that appeared in \cite{S2}.
\begin{lem}\label{lem41}
Let $ u\in H_\Gamma^2(T) $, then
\begin{align*}
\mathcal{X}_2(u(t))\lesssim \mathcal{E}_2^\frac12+t\|\mathcal{L} u(t)\|_{L^2}.
\end{align*}
\end{lem}
The following lemma shows that $\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma ^\alpha u\|_{L^2}$ can be controlled by the weighted norm $\mathcal{X}_{|\alpha|+2}$.
\begin{lem}\label{lem43}
Suppose $ u\in H_\Gamma^k(T) $ is a solution of \eqref{1.2}, then there holds
\begin{equation}\label{40}
\sum_{|\alpha|\leq k-2}\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma ^\alpha u\|_{L^2}\lesssim \mathcal{X}_k+\mathcal{X}_{[\frac{k-1}2]+3}\mathcal{E}_{k-1}^\frac12+\mathcal{X}_{k}\mathcal{E}_{[\frac{k-1}2]+3}^\frac12.
\end{equation}
\end{lem}
{\bf Proof.} From \eqref{3.1}, we have
\begin{equation}\label{41}
\begin{split}
\sum_{|\alpha|\leq k-2}\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma ^\alpha u\|_{L^2}\leq&\sum_{|\alpha|\leq k-2}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\alpha u\|_{L^2}\\
&+C\sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|=|\alpha|}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\nabla\Gamma^\gamma u\|_{L^2}\\
&+\sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|=|\alpha|}\|\langle c_a t-r\rangle P_a\Lambda ^\beta \tilde{\rho}\partial_t^2\Gamma^\gamma u\|_{L^2}.
\end{split}
\end{equation}
If $\|\tilde{\rho}\|_{H_\Lambda^k}\leq\frac12$, then the last term of \eqref{41} can be absorbed into the left-hand side, and we have
\begin{equation}\label{42}
\sum_{|\alpha|\leq k-2}\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma ^\alpha u\|_{L^2}\lesssim \mathcal{X}_k+\sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|=|\alpha|}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\nabla\Gamma^\gamma u\|_{L^2}
\end{equation}
Since $|\beta+\gamma|\leq k-2$, either $|\beta|\leq [\frac{k-1}2]-1$ or $|\gamma|\leq [\frac{k-1}2]$ holds. If $|\beta|\leq [\frac{k-1}2]-1$, then by \eqref{4.3} in Lemma \ref{lem4.1},
\begin{equation}\label{43}
\begin{split}
&\sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|=|\alpha|}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\nabla\Gamma^\gamma u\|_{L^2}\\
\leq& \sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|=|\alpha|}\|\langle r\rangle\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\|_{L^\infty}\|\nabla\Gamma^\gamma u\|_{L^2}\\
\lesssim &\mathcal{X}_{[\frac{k-1}2]+3}\mathcal{E}_{k-1}^\frac12.
\end{split} \end{equation}
If $|\gamma|\leq [\frac{k-1}2]$, then by \eqref{4.1} in Lemma \ref{lem4.1},
\begin{equation}\label{44}
\begin{split}
&\sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|\leq|\alpha|}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\nabla\Gamma^\gamma u\|_{L^2}\\
\leq& \sum_{|\alpha|\leq k-2}\sum_{|\beta+\gamma|\leq|\alpha|}\|\langle c_a t-r\rangle P_a\nabla^2\Gamma ^\beta u\|_{L^2}\|\nabla\Gamma^\gamma u\|_{L^\infty}\\
\lesssim &\mathcal{X}_{k}\mathcal{E}_{[\frac{k-1}2]+3}^\frac12.
\end{split} \end{equation}
Finally, \eqref{40} follows from \eqref{42}-\eqref{44}.
$\Box$
\begin{lem}\label{lem4.2}
Suppose $ u\in H_\Gamma^k(T) $ is a solution of the equation \eqref{1.2}, then
\begin{equation}\label{4.4}
\mathcal{X}_k\lesssim\mathcal{E}_k^\frac12+ \mathcal{X}_{[\frac{k-1}2]+3} \mathcal{E}_{k-1}^\frac12+ \mathcal{X}_k \mathcal{E}_{[\frac{k-1}2]+3}^\frac12.
\end{equation}
\end{lem}
{\bf Proof.} Applying Lemma \ref{lem41}, we have that
\begin{align*}
\mathcal{X}_k(u(t))\lesssim \mathcal{E}_k^\frac12+\sum_{|\alpha|\leq k-2}t\|\mathcal{L}\Gamma^\alpha u(t)\|_{L^2}.
\end{align*}
By \eqref{3.1}, we obtain that
\begin{equation}\label{4.6}
\mathcal{X}_k(u(t))\lesssim \mathcal{E}_k^\frac12+\sum_{|\beta+\gamma|\leq k-2}t\|\nabla^2\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}
+\sum_{|\beta+\gamma|\leq k-2}t\|\Lambda^\beta\tilde{\rho} \partial_t^2\Gamma^\gamma u\|_{L^2}.
\end{equation}
For $t\|\nabla^2\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}$, either $|\beta|\leq [\frac{k-1}2]-1$ or $|\gamma|\leq [\frac{k-1}2]$ holds. Note that
\begin{align}\label{4.7}
\langle t\rangle^{-1}\langle r\rangle\sum_a\langle c_a t-r\rangle P_a \gtrsim I,
\end{align}
thus we can control it by
\begin{align*}
t\|\nabla^2\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\lesssim\sum_a\|\langle r\rangle\langle c_a t-r\rangle P_a\nabla^2\Gamma^\beta u\|_{L^\infty}\|\nabla \Gamma^\gamma u\|_{L^2}
\end{align*}
when $|\beta|\leq [\frac{k-1}2]-1$, and by
\begin{align*}
t\|\nabla^2\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\lesssim\sum_a\|\langle c_a t-r\rangle P_a\nabla^2\Gamma^\beta u\|_{L^2}\|\langle r\rangle\nabla \Gamma^\gamma u\|_{L^\infty}
\end{align*}
when $|\gamma|\leq [\frac{k-1}2]$.
By Lemma \ref{lem4.1}, we obtain
\begin{align}\label{4.8}
\sum_{|\beta+\gamma|\leq k-2} t\|\nabla^2\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\lesssim \mathcal{X}_{[\frac{k-1}2]+3} \mathcal{E}_{k-1}^\frac12+ \mathcal{X}_k \mathcal{E}_{[\frac{k-1}2]+3}^\frac12.
\end{align}
For the last term of \eqref{4.6}, noting \eqref{4.7}, we have
\begin{equation*}
\begin{split}
\sum_{|\beta+\gamma|\leq k-2}t\|\Lambda^\beta\tilde{\rho} \partial_t^2\Gamma^\gamma u\|_{L^2}&\lesssim \sum_a\sum_{|\beta+\gamma|\leq k-2}\|\langle r\rangle\langle c_a t-r\rangle P_a\Lambda^\beta\tilde{\rho}\partial_t^2\Gamma^\gamma u\|_{L^2}\\
&\lesssim\sum_a\sum_{|\beta+\gamma|\leq k-2}\|\langle r\rangle\Lambda^\beta\tilde{\rho}\|_{L^\infty}\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma^\gamma u\|_{L^2},
\end{split}
\end{equation*}
then by \eqref{21} and Lemma \ref{lem43},
\begin{equation}\label{4.14}
\sum_{|\beta+\gamma|\leq k-2}t\|\Lambda^\beta\tilde{\rho} \partial_t^2\Gamma^\gamma u\|_{L^2}\lesssim\delta\big( \mathcal{X}_k+\mathcal{X}_{k}\mathcal{E}_{[\frac{k-1}2]+3}^\frac12+\mathcal{X}_{[\frac{k-1}2]+3}\mathcal{E}_{k-1}^\frac12\big).
\end{equation}
Combining \eqref{4.6}, \eqref{4.8} and \eqref{4.14}, there exists a uniform constant $C$ such that
\begin{equation}\label{4.15}
\mathcal{X}_k(u(t))\leq C \Big(\mathcal{E}_k^\frac12+ \mathcal{X}_{[\frac{k-1}2]+3} \mathcal{E}_{k-1}^\frac12+ \mathcal{X}_k \mathcal{E}_{[\frac{k-1}2]+3}^\frac12+\delta( \mathcal{X}_k+\mathcal{X}_{k}\mathcal{E}_{[\frac{k-1}2]+3}^\frac12+\mathcal{X}_{[\frac{k-1}2]+3}\mathcal{E}_{k-1}^\frac12)\Big).
\end{equation}
If $\delta\leq\frac1{2C} $, then we have
\begin{equation*}
\mathcal{X}_k(u(t))\leq (2C+1) \Big(\mathcal{E}_k^\frac12+ \mathcal{X}_{[\frac{k-1}2]+3} \mathcal{E}_{k-1}^\frac12+ \mathcal{X}_k \mathcal{E}_{[\frac{k-1}2]+3}^\frac12\Big).
\end{equation*}
$\Box$
\begin{lem}\label{lem4.3}
Let $k\geq 9$. Suppose $u\in H_\Gamma^k(T)$ is a solution of equation \eqref{1.2}. If $\mathcal{E}_{k-2}$ is small enough for $0\leq t\leq T$, then
\begin{equation}\label{4.4}
\mathcal{X}_{k-2} (u(t))\lesssim{\mathcal{E}}_{k-2}^{\frac{1}{2}}(u(t)),
\end{equation}
\begin{equation}\label{4.5}
\mathcal{X}_k(u(t))\lesssim\mathcal{E}_k^{\frac{1}{2}}(u(t))[1+\mathcal{E}_{k-2} ^{\frac{1}{2}}(u(t))],
\end{equation}
and
\begin{equation}\label{4.9}
\sum_{|\alpha|\leq k-2}\|\langle c_a t-r\rangle P_a\partial_t^2\Gamma ^\alpha u\|_{L^2}\lesssim\mathcal{X}_k(u(t))\lesssim\mathcal{E}_k^{\frac{1}{2}}(u(t)).
\end{equation}
\end{lem}
The proof of Lemma \ref{lem4.3} follows directly from Lemma \ref{lem4.2} and Lemma \ref{lem43}.
\section{Almost global existence}
In this section, we will show the higher-order energy estimate
\begin{equation}\label{51}
\mathcal{E}'_{k}(u(t))\lesssim {\langle t \rangle}^{-1}\mathcal{E}_{k}(u(t))\mathcal{E}_{k-2}(u(t))^{\frac{1}{2}}+\langle t \rangle^{-1}\delta \mathcal{E}_{k}(u(t)),
\end{equation}
which can be adapted to prove that the solution of \eqref{1.2} exists almost globally.
Suppose that $u(t)\in H_\Gamma^k(T)$ is a local solution of \eqref{1.2}. Taking the inner product of both sides of \eqref{3.1} with $\partial_t \Gamma ^\alpha u$, we get
\begin{equation}\label{52}
\begin{split}
&\frac{1}{2} \frac{d}{dt}\int_{\mathbb{R}^3}|\partial_t\Gamma^\alpha u|^2+c_2^2|\nabla \Gamma^\alpha u|^2 + (c_1^2-c_2^2)(\nabla\cdot\Gamma^\alpha u)^2dx\\
\backsimeq&\sum_{|\beta+\gamma|=|\alpha|}\int_{\mathbb{R}^3}(\partial_t\Gamma^\alpha u,N(\Gamma^\beta u,\Gamma^\gamma u))dx -\sum_{|\beta+\gamma|= |\alpha|}\int_{\mathbb{R}^3}\Lambda^\beta\tilde{\rho} (\partial_t\Gamma^\alpha u,\partial_t^2\Gamma^\gamma u) dx.
\end{split}
\end{equation}
For the first term on the right side of \eqref{52}, if $\beta=\alpha $ or $\gamma= \alpha$, we have
\begin{equation*}
\begin{split}
\int_{\mathbb{R}^3}(\partial_t\Gamma^\alpha u,N(\Gamma^\alpha u,u))dx=&B_{lmn}^{ijk}\int_{\mathbb{R}^3}\partial_t\Gamma^\alpha u^i \partial_l(\partial_m\Gamma^\alpha u^j\partial_n u^k)dx\\
=&-B_{lmn}^{ijk}\int_{\mathbb{R}^3}\partial_t\partial_l\Gamma^\alpha u^i \partial_m\Gamma^\alpha u^j\partial_n u^k dx\\
=&-\frac{1}{2}\frac{d}{dt}B_{lmn}^{ijk}\int_{\mathbb{R}^3}\partial_l\Gamma^\alpha u^i \partial_m \Gamma^\alpha u^j \partial_n u^k dx\\
&+\frac{1}{2}B_{lmn}^{ijk}\int_{\mathbb{R}^3}\partial_l\Gamma^\alpha u^i\partial_m\Gamma^\alpha u^j\partial_t\partial_n u^k dx.
\end{split}
\end{equation*}
We set
$$\tilde{N}(u,v,w)=B_{lmn}^{ijk}\partial_l u^i \partial_m v^j \partial_n w^k ,$$
and
\begin{equation}\label{5.1}
\tilde{\mathcal{E}_{k}}(u(t))=\mathcal{E}_{k}(u(t))+\sum\limits_{|\alpha|\leq k-1}\int_{\mathbb{R}^3}\tilde{N}(\Gamma^\alpha u,\Gamma^\alpha u,u)dx.
\end{equation}
Thus
\begin{equation}\label{5.2}
\begin{split}
\tilde{\mathcal{E}}_{k}'(u(t))\backsimeq&\sum_{|\alpha|\leq k-1}\sum\limits_{|\beta|+|\gamma|= |\alpha|\atop |\beta|,|\gamma|\neq |\alpha|}\|\partial \Gamma^\alpha u\|_{L^2}\|\partial\nabla\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\\
&+\sum_{|\alpha|\leq k-1}\sum_{|\beta+\gamma|= |\alpha|}\int_{\mathbb{R}^3}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx.
\end{split}
\end{equation}
Since $|\beta|+|\gamma|\leq |\alpha|$ and $|\beta|,|\gamma|\neq |\alpha|$, we may assume that either $|\beta|+1\leq [\frac k2]$ or $|\gamma|\leq [\frac k2]$. Arguing as in the proof of Lemma \ref{lem4.2}, and by Lemma \ref{lem4.1} and Lemma \ref{lem4.3}, we have
\begin{equation}\label{5.5}
\sum_{|\alpha|\leq k-1}\sum\limits_{|\beta|+|\gamma|= |\alpha|\atop |\beta|,|\gamma|\neq |\alpha|}\|\partial \Gamma^\alpha u\|_{L^2}\|\partial\nabla\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\lesssim \langle t \rangle^{-1}\mathcal{E}_{k-2}^{\frac{1}{2}}\mathcal{E}_k.
\end{equation}
To estimate the second term on the right side of \eqref{5.2}, we rewrite it as
\begin{equation}\label{5.6}
\begin{split}
\sum_{|\alpha|\leq k-1}\sum_{|\beta+\gamma|= |\alpha|}\int_{\mathbb{R}^3}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx=&\sum_{|\alpha|\leq k-1}\int_{\mathbb{R}^3}\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\alpha u dx\\
&+\sum_{|\alpha|\leq k-1}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-2}\int_{\mathbb{R}^3}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx.
\end{split}
\end{equation}
Since
\begin{equation*}
\sum_{|\alpha|\leq k-1}\int_{\mathbb{R}^3}\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\alpha u dx=- \sum_{|\alpha|\leq k-1}\frac12\frac d{dt}\int_{\mathbb{R}^3}\tilde{\rho} |\partial_t\Gamma^\alpha u|^2 dx,
\end{equation*}
we set
\begin{equation}\label{5.8}
\hat{\mathcal{E}_{k}}(u(t))=\tilde{\mathcal{E}_{k}}(u(t))+\sum_{|\alpha|\leq k-1}\int_{\mathbb{R}^3}\tilde{\rho} |\partial_t\Gamma^\alpha u|^2 dx,
\end{equation}
then we have that
\begin{equation}\label{5.9}
\begin{split}
\hat{\mathcal{E}_{k}}'(u(t)) \lesssim \langle t \rangle^{-1}\mathcal{E}_{k-2}^{\frac{1}{2}}\mathcal{E}_k+\sum_{|\alpha|\leq k-1}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-2}\int_{\mathbb{R}^3}\big|\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u \big|dx
\end{split}
\end{equation}
By \eqref{21}, \eqref{4.7} and \eqref{4.9}, we have
\begin{equation}
\begin{split}
&\sum_{|\alpha|\leq k-1}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-2}\int_{\mathbb{R}^3}\big|\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u \big|dx\\
\lesssim&\sum_a\sum_{|\alpha|\leq k-1}\sum_{|\gamma|\leq k-2} \langle t\rangle^{-1}\delta\|\langle c_at-r\rangle P_a\partial_t^2\Gamma^\gamma u \|_{L^2}\|\partial_t\Gamma^\alpha u\|_{L^2}\\
\lesssim&\langle t \rangle^{-1}\delta \mathcal{X}_{k}\mathcal{E}_{k}^{\frac12}.
\end{split}
\end{equation}
Consequently, one has
\begin{equation}
\hat{\mathcal{E}_{k}}'(u(t)) \lesssim \langle t \rangle^{-1}\mathcal{E}_{k-2}^{\frac{1}{2}}\mathcal{E}_k+\langle t \rangle^{-1}\delta \mathcal{E}_{k},
\end{equation}
which gives that
\begin{equation}\label{511}
\mathcal{E}_{k}(u(t))\lesssim M^2\langle t \rangle^{\varepsilon+\delta},
\end{equation}
since the modified energy $\hat{\mathcal{E}}_{k}(u(t))$ is equivalent to the standard one $\mathcal{E}_{k}(u(t))$, provided that $\mathcal{E}_{k-2}(u(t))\lesssim\mathcal{E}_{k-2}(u(0))$. This implies the almost global existence result of John \cite{JF} (see also \cite{KS}, \cite{S2}).
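For the reader's convenience, here is a sketch of the Gronwall-type step behind \eqref{511}, under the assumptions just stated, namely the equivalence $\hat{\mathcal{E}}_{k}\asymp\mathcal{E}_{k}$ and the bound $\mathcal{E}_{k-2}(u(t))\lesssim\mathcal{E}_{k-2}(u(0))\lesssim\varepsilon^2$ (the latter reflecting the smallness of the data). These give
\begin{equation*}
\hat{\mathcal{E}}_{k}'(u(t))\lesssim \langle t \rangle^{-1}(\varepsilon+\delta)\,\hat{\mathcal{E}}_{k}(u(t)),
\end{equation*}
so that, integrating,
\begin{equation*}
\hat{\mathcal{E}}_{k}(u(t))\leq \hat{\mathcal{E}}_{k}(u(0))\exp\Big(C(\varepsilon+\delta)\int_0^t\frac{ds}{\langle s\rangle}\Big)\lesssim M^2\langle t\rangle^{C(\varepsilon+\delta)},
\end{equation*}
which is \eqref{511} up to a harmless constant in the exponent.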
\section{Global existence}
This section is devoted to proving the existence of the global solution of \eqref{1.2} under the assumptions of Theorem \ref{thm 2.2}. Combined with the higher-order energy estimate, we will establish the lower-order energy estimate
\begin{equation}\label{6.1}
\mathcal{E}'_{k-2}(u(t))\lesssim{\langle t \rangle}^{-\frac32}\mathcal{E}_{k}^{\frac{1}{2}}(u(t))\mathcal{E}_{k-2}(u(t))+ {\langle t \rangle}^{-\frac32}\delta\mathcal{E}_{k}^{\frac{1}{2}}(u(t))\mathcal{E}_{k-2}^{\frac{1}{2}}(u(t)).
\end{equation}
From \eqref{52}, \eqref{5.1} and \eqref{5.8}, for $|\alpha|\leq k-3$ we have
\begin{equation}\label{6.2}
\begin{split}
\hat{\mathcal{E}}_{k-2}'(u(t))\backsimeq&\sum_{|\alpha|\leq k-3}\sum\limits_{|\beta|+|\gamma|= |\alpha|\atop |\beta|,|\gamma|\neq |\alpha|}\|\partial \Gamma^\alpha u\|_{L^2}\|\partial\nabla\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2}\\
&+\sum_{|\alpha|\leq k-3}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-4}\int_{\mathbb{R}^3}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx.
\end{split}
\end{equation}
Following the discussion of Sideris, away from the inner light cone the first term on the right side of \eqref{6.2} can be controlled by $\langle t \rangle ^{-\frac32}\mathcal{E}_{k-2}\mathcal{E}_k^\frac12$ directly by Lemma \ref{lem4.1}. When $r$ is comparable to $\langle t\rangle$, the same bound can be obtained thanks to the null conditions \eqref{1.5} and \eqref{1.6}. We omit the details here; the reader may refer to \cite{S2} to get
\begin{align}\label{6.3}
\sum_{|\alpha|\leq k-3}\sum\limits_{|\beta|+|\gamma|= |\alpha|\atop |\beta|,|\gamma|\neq |\alpha|}\|\partial \Gamma^\alpha u\|_{L^2}\|\partial\nabla\Gamma^\beta u\nabla\Gamma^\gamma u\|_{L^2} \lesssim \langle t \rangle ^{-\frac{3}{2}}\mathcal{E}_k^{\frac{1}{2}}\mathcal{E}_{k-2}.
\end{align}
We focus on the last term of \eqref{6.2} in two cases: $\langle c_2 t\rangle\leq r$ and $\langle c_2 t\rangle\geq r$. Note that \eqref{21}, \eqref{4.7}, \eqref{4.9} and \eqref{4.2-1}, we have
\begin{equation}\label{6.3-1}
\begin{split}
&\sum_{|\alpha|\leq k-3}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-4}\int_{r\geq\langle c_2 t\rangle}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx\\
\lesssim& \sum_a\sum_{|\alpha|\leq k-3}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-4}\langle t\rangle^{-1}\|\langle r\rangle\Lambda^\beta\tilde{\rho}\|_{L^2}\| \partial_t\Gamma^\alpha u\|_{L^\infty( r\geq\langle c_2 t\rangle)}\|\langle c_at-r\rangle P_a\partial_t^2\Gamma^\gamma u\|_{L^2}\\
\lesssim&\sum_{|\alpha|\leq k-3}\langle t\rangle^{-1}\delta\| \partial_t\Gamma^\alpha u\|_{L^\infty( r\geq\langle c_2 t\rangle)}\mathcal{E}_{k-2}^\frac12\\
\lesssim&\sum_{|\alpha|\leq k-3}\langle t\rangle^{-2}\delta\mathcal{E}_{k}^\frac12\mathcal{E}_{k-2}^\frac12.
\end{split}
\end{equation}
When $r\leq\langle c_2 t\rangle$, the ratios $\langle t\rangle^{-1}\langle c_at-r\rangle$ and $\langle t\rangle^{-\frac12}\langle c_1t-r\rangle^\frac12$ are bounded from below; combining this with \eqref{4.2} in Lemma \ref{lem4.1}, we thus obtain
\begin{equation}\label{6.4}
\begin{split}
&\sum_{|\alpha|\leq k-3}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-4}\int_{r\leq\langle c_2 t\rangle}\Lambda^\beta\tilde{\rho} \partial_t\Gamma^\alpha u\partial_t^2\Gamma^\gamma u dx\\
\lesssim& \sum_a\sum_{|\alpha|\leq k-3}\sum_{|\beta+\gamma|= |\alpha|\atop |\gamma|\leq k-4}\langle t\rangle^{-1}\|\Lambda^\beta\tilde{\rho}\|_{L^2}\| \partial_t\Gamma^\alpha u\|_{L^\infty( r\leq\langle c_2 t\rangle)}\|\langle c_at-r\rangle P_a\partial_t^2\Gamma^\gamma u\|_{L^2}\\
\lesssim&\sum_{|\alpha|\leq k-3}\langle t\rangle^{-\frac32}\delta\|\langle c_1t-r\rangle^\frac12 \partial_t\Gamma^\alpha u\|_{L^\infty}\mathcal{E}_{k-2}^\frac12\\
\lesssim&\sum_{|\alpha|\leq k-3}\langle t\rangle^{-\frac32}\delta\mathcal{E}_{k}^\frac12\mathcal{E}_{k-2}^\frac12.
\end{split}
\end{equation}
Consequently, one has
\begin{equation*}
\mathcal{E}'_{k-2}(u(t))\lesssim{\langle t \rangle}^{-\frac32}\mathcal{E}_{k}^{\frac{1}{2}}(u(t))\mathcal{E}_{k-2}(u(t))+ {\langle t \rangle}^{-\frac32}\delta\mathcal{E}_{k}^{\frac{1}{2}}(u(t))\mathcal{E}_{k-2}^{\frac{1}{2}}(u(t)).
\end{equation*}
Combining this with \eqref{511}, we have
\begin{equation*}
\frac{d\mathcal{E}_{k-2}^\frac12(u(t))}{dt}\lesssim\frac12{\langle t \rangle}^{\frac{-3+\varepsilon+\delta}2}M\mathcal{E}_{k-2}^\frac12(u(t))+ \frac12\delta M{\langle t \rangle}^{\frac{-3+\varepsilon+\delta}2}.
\end{equation*}
Multiplying both sides of the above by $e^{-\frac M{-1+\varepsilon+\delta}\langle t\rangle^{\frac{-1+\varepsilon+\delta}2}}$ and solving the resulting ordinary differential inequality, we obtain
\begin{align*}
\mathcal{E}_{k-2}^\frac12(u(t))&\lesssim e^{\frac M{-1+\varepsilon+\delta}\langle t\rangle^{\frac{-1+\varepsilon+\delta}2}}\varepsilon e^{C_2M}+\delta M\int_0^t e^{-\frac M{-1+\varepsilon+\delta}\langle s\rangle^{\frac{-1+\varepsilon+\delta}2}}\langle s\rangle^{\frac{-3+\varepsilon+\delta}2}ds\\
&\lesssim\varepsilon e^{C_2M}+\delta e^{C_3M},
\end{align*}
provided $\varepsilon+\delta\ll1$, where $C_2$ and $C_3$ are two constants independent of $t$; here we used that the exponential factors are bounded by $e^{C_3M}$, since $0<\langle s\rangle^{\frac{-1+\varepsilon+\delta}2}\leq 1$, and that $\int_0^\infty \langle s\rangle^{\frac{-3+\varepsilon+\delta}2}ds<\infty$. The proof of Theorem \ref{thm 2.2} is completed.
\section*{Acknowledgements}
The first author was supported by China Postdoctoral Science Foundation funded project (D.10-0101-17-B02).
\end{document} |
\begin{document}
\title{\bf Capacity of the range of random walk on $\mathbb{Z}^d$}
\author{Amine Asselah \thanks{Aix-Marseille Universit\'e \&
Universit\'e Paris-Est Cr\'eteil; amine.asselah@u-pec.fr} \and
Bruno Schapira\thanks{Aix-Marseille Universit\'e, CNRS, Centrale Marseille, I2M, UMR 7373, 13453 Marseille, France; bruno.schapira@univ-amu.fr} \and Perla Sousi\thanks{University of Cambridge, Cambridge, UK; p.sousi@statslab.cam.ac.uk}
}
\date{}
\maketitle
\begin{abstract}
We study the capacity of the range of a transient
simple random walk on $\mathbb{Z}^d$.
Our main result is a
central limit theorem for the capacity of the range for~$d\ge 6$.
We present a few open questions in lower dimensions.
\newline
\newline
\emph{Keywords and phrases.} Capacity, Green kernel, Lindeberg-Feller central limit theorem.
\newline
MSC 2010 \emph{subject classifications.} Primary 60F05, 60G50.
\end{abstract}
\section{Introduction}\label{sec:intro}
This paper is devoted to the study of the capacity of the range of
a transient random walk on $\mathbb{Z}^d$.
Let $\{S_k\}_{k\ge 0}$ be a simple random walk in dimension $d\geq 3$.
For any integers $m$ and $n$,
we define the range $\mathbb{R}R[m,n]$ to be the set of visited sites
during the interval $[m,n]$, i.e.
\[
\mathbb{R}R[m,n]= \{ S_m,\ldots, S_n\}.
\]
We write simply $\mathbb{R}R_n= \mathbb{R}R[0,n]$.
We recall that the capacity of a finite set $A\subseteq \mathbb{Z}^d$
is defined to be
\[
\cc{A} = \sum_{x\in A} \Psirstart{T_A^+=\infty}{x},
\]
where $T_A^+=\inf\{t\geq 1: S_t\in A\}$ is the first return time to $A$.
The capacity of the range of a walk has a long history.
Jain and Orey~\cite{JainOrey} proved, some fifty years ago,
that $\cc{\mathbb{R}R_n}$ satisfies a law of large numbers for all $d\geq 3$, i.e.\ almost surely
\[
\lim_{n\to\infty}
\frac{\cc{\mathbb{R}R_n}}{n} = \alpha_d.
\]
Moreover, they showed that $\alpha_d>0$ if and only if $d\geq 5$.
In the eighties, Lawler established estimates on intersection
probabilities for random walks, which are relevant tools for estimating
the expected capacity of the range (see \cite{Lawlerinter}).
Recently, the study of random interlacements by Sznitman \cite{S10}
has given some momentum to the study of the capacity of the union
of the ranges of a collection of independent walks.
In order to obtain bounds on the capacity of such union of ranges,
R\'ath and Sapozhnikov in~\cite{RathSap} have obtained
bounds on the capacity of the range of a simple transient walk. The capacity of the range is a natural object to probe the geometry of the walk under
localisation constraints. For instance, the first two authors have used the capacity
of the range in~\cite{AS2} to characterise the walk conditioned on
having a small range.
In the present paper, we establish a central limit theorem for $\cc{\mathbb{R}R_n}$ when $d\geq 6$.
\begin{theorem}\label{thm:clt}
For all $d\geq 6$, there is a positive constant $\sigma_d$ such that
\[
\frac{\cc{\mathbb{R}R_n} - \E{\cc{\mathbb{R}R_n}}}{\sqrt{n}} \Longrightarrow\sigma_d
\mathbb{N}N(0,1),\quad \text{as } n\to \infty,
\]
where $\Longrightarrow$ denotes convergence in distribution,
and $\mathbb{N}N(0,1)$ denotes a standard normal random variable.
\end{theorem}
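Let us indicate informally where the restriction $d\geq 6$ comes from. The decomposition of $\cc{\mathbb{R}R_n}$ used in the proof (see Sections~\ref{sec-two} and~\ref{sec-three}) produces error terms whose moments are governed by the function $f_d(n)$ of \eqref{def-f}, equal to $\sqrt{n}$ for $d=5$, to $\log n$ for $d=6$, and to $1$ for $d\geq 7$; the argument requires these errors to be of smaller order than the fluctuations $\sqrt{n}$, which holds precisely when $d\geq 6$. For $d=5$ see the conjecture \eqref{conj-2} in Section~\ref{sec-six}.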
A key tool in the proof of Theorem~\ref{thm:clt} is the following inequality.
\begin{proposition}\label{prop:capdec}
Let $A$ and $B$ be finite subsets of $\mathbb{Z}^d$. Then,
\be\label{main-lower}
\cc{A\cup B}\ge
\cc{A} + \cc{B} - 2\sum_{x\in A} \sum_{y\in B}G(x,y),
\ee
where $G$ is Green's kernel for a simple random walk in $\mathbb{Z}^d$
\[
G(x,y) = \estart{\sum_{t=0}^{\infty}{\text{\Large $\mathfrak 1$}}(S_t=y)}{x}.
\]
\end{proposition}
Note in comparison the well-known upper bound (see for instance~\cite[Proposition 2.2.1]{Lawlerinter})
\be\label{key-lawler}
\cc{A\cup B}\le \cc{A}+\cc{B}-\cc{A\cap B}.
\ee
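As a simple illustration of these two bounds (it is not needed in what follows), take $A=\{x\}$ and $B=\{y\}$ with $x\neq y$, and write $g=G(0,0)$, $h=G(x,y)$. Then $\cc{\{x\}}=\cc{\{y\}}=1/g$, since $\Psirstart{T_{\{x\}}^+=\infty}{x}=1/G(0,0)$. Moreover, decomposing according to the last visit to $\{x,y\}$ (which exists almost surely by transience), and using that by symmetry the escape probability is the same from $x$ and from $y$, we get $(g+h)\,\Psirstart{T_{\{x,y\}}^+=\infty}{x}=1$, so that
\[
\cc{\{x,y\}}=\frac{2}{g+h}.
\]
This value indeed lies between the lower bound $\frac2g-2h$ given by \eqref{main-lower} (using that $g\geq 1$, so that $g(g+h)\geq1$) and the upper bound $\frac2g$ given by \eqref{key-lawler}.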
In dimension four, asymptotics of $\E{\cc{\mathbb{R}R_n}}$
can be obtained from Lawler's estimates on
non-intersection probabilities for three
random walks, that we recall here for convenience.
\begin{theorem}{\rm (\cite[Corollary 4.2.5]{Lawlerinter})}\label{thm:lawler}
Let $\mathbb{R}R^1,\mathbb{R}R^2$ and $\mathbb{R}R^3$ be the ranges of
three independent random walks in $\mathbb{Z}^4$ starting at 0. Then,
\be\label{lawler-key}
\lim_{n\to\infty} \ \log n\times
\Psir{ \mathbb{R}R^1[1,n]\cap( \mathbb{R}R^2[0,n]\cup \mathbb{R}R^3[0,n])=\varnothing,\
0\not\in \mathbb{R}R^3[1,n]}=\frac{\pi^2}{8},
\ee
and
\be\label{lawler-key2}
\lim_{n\to\infty} \ \log n\times
\Psir{\mathbb{R}R^1[1,\infty)\cap( \mathbb{R}R^2[0,n]\cup \mathbb{R}R^3[0,n])=\varnothing,\
0\not\in \mathbb{R}R^3[1,n]}=\frac{\pi^2}{8}.
\ee
\end{theorem}
Actually \eqref{lawler-key2} is not stated exactly in this form in~\cite{Lawlerinter}, but it can be proved using exactly the same proof as for equation (4.11) in~\cite{Lawlerinter}.
As mentioned above, we deduce from this result the following estimate for the mean of the capacity.
\begin{corollary}\label{lem:d4}
Assume that $d=4$. Then,
\be\label{bounds-d4}
\lim_{n\to\infty}\ \frac{\log n}{n}\ \E{\cc{\mathbb{R}R_n}}\ = \ \frac{\pi^2}{8}.
\ee
\end{corollary}
In dimension three, we use the following representation of capacity (see \cite[Lemma~2.3]{JainOrey-properties})
\be\label{variation-capa}
\cc{A} = \frac{1}{\inf_{\nu} \sum_{x\in A}\sum_{y\in A} G(x,y)\nu(x)\nu(y)},
\ee
where the infimum is taken over all probability measures~$\nu$
supported on $A$. We obtain the following bounds:
\begin{proposition}\label{lem:d3}
Assume that $d=3$. There are positive constants $c$ and
$C$, such that
\be\label{bounds-d3}
c \sqrt{n}\ \le\ \E{\cc{\mathbb{R}R_n}}\ \le\ C\sqrt{n}.
\ee
\end{proposition}
The rest of the paper is organised as follows.
In Section~\ref{sec-two} we present the decomposition of the range,
which is at the heart of our central limit theorem. The capacity
of the range is cut into a {\it self-similar} part and an {\it error
term} that we bound in Section~\ref{sec-three}. In Section~\ref{sec-four}
we check Lindeberg-Feller's conditions. We deal with dimension three
and four in Section~\ref{sec-five}. Finally, we present some open
questions in Section~\ref{sec-six}.
\textbf{Notation:}
When $0\leq a\leq b$ are real numbers, we write $\mathbb{R}R[a,b]$ to denote $\mathbb{R}R[[a],[b]]$, where $[x]$ stands for the integer part of $x$.
We also write $\mathbb{R}R_a$ for $\mathbb{R}R[0,[a]]$, and $S_{n/2}$ for $S_{[n/2]}$.
For positive functions $f,g$ we write $f(n) \lesssim g(n)$ if there exists a constant $c > 0$ such that $f(n) \leq c g(n)$ for all $n$. We write $f(n) \gtrsim g(n)$ if $g(n) \lesssim f(n)$. Finally, we write $f(n) \asymp g(n)$ if both $f(n) \lesssim g(n)$ and $f(n) \gtrsim g(n)$.
\section{Decomposition for capacities}\label{sec-two}
\begin{proof}[\bf Proof of Proposition~\ref{prop:capdec}]
Note first that by definition,
\begin{align*}
\cc{A\cup B}&=\cc{A} + \cc{B} - \sum_{x\in A\setminus B} \Psirstart{T_A^+=\infty, T_B^+<\infty}{x}
\\&- \sum_{x\in A\cap B} \Psirstart{T_A^+=\infty, T_B^+<\infty}{x} -\sum_{x\in B\setminus A} \Psirstart{T_A^+<\infty, T_B^+=\infty}{x} \\&- \sum_{x\in A\cap B} \Psirstart{T_A^+<\infty, T_B^+=\infty}{x} - \sum_{x\in A\cap B} \Psirstart{T_A^+=\infty, T_B^+=\infty}{x}\\
&\geq \cc{A} + \cc{B} - \sum_{x\in A\setminus B} \Psirstart{T_B^+<\infty}{x} - \sum_{x\in B\setminus A} \Psirstart{T_A^+<\infty}{x}
- |A\cap B|.
\end{align*}
For any finite set $K$ and all $x\notin K$, by considering the last visit to $K$ we get
\begin{align*}
\Psirstart{T_K^+<\infty}{x} =\sum_{y\in K} G(x,y) \Psirstart{T_K^+=\infty}{y}.
\end{align*}
This way we obtain
\begin{align*}
\sum_{x\in A\setminus B} \Psirstart{T_B^+<\infty}{x} \leq \sum_{x\in A\setminus B} \sum_{y\in B} G(x,y) \quad \text{and} \quad \sum_{x\in B\setminus A} \Psirstart{T_A^+<\infty}{x} \leq \sum_{x\in B\setminus A} \sum_{y\in A} G(x,y).
\end{align*}
Hence we get
\begin{align*}
\cc{A\cup B} \geq \cc{A} + \cc{B} - 2\sum_{x\in A} \sum_{y\in B} G(x,y) + \sum_{x\in A\cap B} \sum_{y\in A} G(x,y) \\+ \sum_{x\in A\cap B} \sum_{y\in B} G(x,y) - |A\cap B|.
\end{align*}
Since $G(x,x)\geq 1$ for all $x$ we get
\[
\sum_{x\in A\cap B} \sum_{y\in A} G(x,y) \geq |A\cap B|
\]
and this concludes the proof of the lower bound, and hence of the proposition.
\end{proof}
The decomposition of $\cc{\mathbb{R}R_n}$
stated in the following corollary is crucial in the rest of the paper.
\begin{corollary}\label{cor:decomposition}
For all $L$ and $n$, with $2^L\le n$, we have
\begin{align*}
\sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}} - 2\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}\leq \cc{\mathbb{R}R_n}\leq \sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}},
\end{align*}
where $(\cc{\mathbb{R}R^{(i)}_{n/2^L}},\ i=1,\dots,2^L)$ are independent
and $\mathbb{R}R^{(i)}_{n/2^L}$ has the same law as $\mathbb{R}R_{[n/2^L]}$ or~$\mathbb{R}R_{[n/2^L+1]}$, and for each $\ell$ the random variables $(\mathcal{E}_{\ell}^{(i)})_i$ are independent and have the same law as $\sum_{x\in \mathbb{R}R_{n/2^\ell}} \sum_{y\in \widetilde{\mathbb{R}R}_{n/2^\ell}} G(x,y)$, with $\widetilde{\mathbb{R}R}$ an independent copy of $\mathbb{R}R$.
\end{corollary}
\begin{proof}[\bf Proof]
Since we work on~$\mathbb{Z}^d$, the capacity is translation invariant, i.e.\ $\cc{A} = \cc{A+x}$ for all $x$, and hence it follows that
\[
\cc{\mathbb{R}R_n} = \cc{\left(\mathbb{R}R_{n/2}-S_{n/2}\right) \cup \left(\mathbb{R}R[n/2,n]-S_{n/2}\right)}.
\]
The advantage of doing this is that now, by the Markov property, the random sets $\mathbb{R}R_{n/2}^{(1)}=\mathbb{R}R_{n/2}-S_{n/2}$ and $\mathbb{R}R_{n/2}^{(2)}=\mathbb{R}R[n/2,n] -S_{n/2}$ are independent. Moreover, by reversibility, each of them has the same law as the range of a simple random walk started from $0$ and run up to time $n/2$. Applying Proposition~\ref{prop:capdec} we get
\begin{align}\label{eq:key}
\cc{\mathbb{R}R_n}\geq \cc{\mathbb{R}R^{(1)}_{n/2}} + \cc{\mathbb{R}R^{(2)}_{n/2}} - 2\sum_{x\in \mathbb{R}R^{(1)}_{n/2}}\sum_{y\in \mathbb{R}R^{(2)}_{n/2}} G(x,y).
\end{align}
Applying the same subdivision to each of the terms $\mathbb{R}R^{(1)}$ and~$\mathbb{R}R^{(2)}$ and iterating $L$ times, we obtain
\begin{align*}
\cc{\mathbb{R}R_n} \geq \sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}} - 2\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}.
\end{align*}
Here $\mathcal{E}_\ell^{(i)}$ has the same
law as $\sum_{x\in \mathbb{R}R_{n/2^\ell} }\sum_{y\in \mathbb{R}R'_{n/2^\ell}}G(x,y)$, with $\mathbb{R}R'$ independent of $\mathbb{R}R$, and for each $\ell$ the random variables
$(\mathcal{E}_\ell^{(i)},\ i=1,\dots,2^{\ell-1})$ are independent. Moreover,
the random variables $(\mathbb{R}R_{n/2^L}^{(i)},\ i=1,\dots,2^L)$ are independent.
Using~\eqref{key-lawler} for the upper bound on $\cc{\mathbb{R}R_n}$ we get overall
\begin{align*}
\sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}} - 2\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}\leq \cc{\mathbb{R}R_n} \leq \sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}}
\end{align*}
and this concludes the proof.
\end{proof}
\section{Variance of $\cc{\mathbb{R}R_n}$ and error term}\label{sec-three}
As outlined in the Introduction, we want to apply the Lindeberg-Feller theorem to obtain the central limit theorem. In order to do so, we need to control the
{\it error term} appearing in the decomposition of $\cc{\mathbb{R}R_n}$ in Corollary~\ref{cor:decomposition}. Moreover, we need to show that the variance of $\cc{\mathbb{R}R_n}/n$ converges to a strictly positive constant as $n$
tends to infinity. This is the goal of this section.
\subsection{On the {\it error term}}
We write $G_n(x,y)$ for the Green kernel up to time $n$, i.e.\
\[
G_n(x,y) = \estart{\sum_{k=0}^{n-1}{\text{\Large $\mathfrak 1$}}(S_k=y)}{x}.
\]
We now recall a well-known bound
(see for instance~\cite[Theorem~4.3.1]{LawlerLimic})
\begin{align}\label{eq:wellknownbound}
G(0,x) \leq \frac{C}{1+\norm{x}^{d-2}},
\end{align}
where $C$ is a positive constant. We start with a preliminary result.
\begin{lemma}\label{lem:rearrange}
For all $a\in \mathbb{Z}^d$ we have
\begin{align*}\label{eq:rearrange}
\sum_{x\in \mathbb{Z}^d}\sum_{y\in \mathbb{Z}^d} G_n(0,x) G_n(0,y) G(0,x-y-a) \leq \sum_{x\in \mathbb{Z}^d}\sum_{y\in \mathbb{Z}^d} G_n(0,x) G_n(0,y) G(0,x-y). \end{align*}
Moreover,
\begin{align*}
\sum_{x\in \mathbb{Z}^d}\sum_{y\in \mathbb{Z}^d} G_n(0,x) G_n(0,y) G(0,x-y)\lesssim f_d(n),
\end{align*}
where
\be\label{def-f}
f_5(n) = \sqrt{n}, \qquad
f_6(n) = \log n,\quad\text{ and}\quad f_d(n) = 1 \quad\forall d\geq 7.
\ee
\end{lemma}
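Before giving the proof, let us record the elementary consequence of \eqref{eq:wellknownbound} that for every $R\geq 1$
\[
\sum_{\norm{y}\leq R} G(0,y)\ \lesssim\ \sum_{r=1}^{R}\frac{r^{d-1}}{r^{d-2}}\ \asymp\ R^2;
\]
this estimate will be used repeatedly in the proofs below.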
\begin{proof}[\bf Proof]
To prove the first inequality, note first that by translation invariance and symmetry of the walk, $G(0,x-y-a)=\sum_{t\geq 0}p_t(x,y+a)$; the reduction to transition kernels of the form $p_{2k}$ is explained at the end of the proof. Let $S_a = \sum_{x,y} G_n(0,x) p_{2k}(x,y+a) G_n(0,y)$. Since
\[
p_{2k}(x,y+a) = \sum_z p_{k}(x,z) p_k(z,y+a) = \sum_{z} p_k(z,x) p_k(z,y+a)
\]
letting $F_a(z) = \sum_y G_n(0,y) p_k(z,y+a)$ we have
\begin{align}\label{eq:zya}
F_a(z) = \sum_y G_n(0,y) p_k(z-a,y),\quad\text{and}\quad
S_a = \sum_z F_0(z) F_a(z).
\end{align}
By the Cauchy--Schwarz inequality, we obtain
\[
S_a^2\leq \sum_z F_0^2(z) \cdot \sum_z F_a^2(z).
\]
Notice that a change of variables together with~\eqref{eq:zya} yields
$$\sum_z F_a^2(z) = \sum_w F_a^2(w-a) = \sum_w F_0^2(w),$$ and hence we deduce
\[
S_a^2\leq S_0^2 \quad \forall \, a.
\]
We now note that if $X$ is a lazy simple random walk, then the sums in the statement of the lemma will only be affected by a multiplicative constant. So it suffices to prove the result for a lazy walk. It is a standard fact (see for instance~\cite[Proposition~10.18]{LevPerWil}) that the transition matrix of a lazy chain can be written as the square of another transition matrix. This now concludes the proof of the first inequality.
To simplify notation we write $G_n(x)=G_n(0,x)$ and $G(x)= G(0,x)$.
To prove the second inequality we split the second sum appearing in the statement of the lemma into three parts as follows
\begin{align}\label{eq:sums}
\begin{split}
\sum_{x}\sum_{y} &G_n(x) G_n(y) G(x-y) \leq \sum_{\substack{\norm{x}\leq \sqrt{n}\\ \norm{y}\leq \sqrt{n}}}G_n(x) G_n(y) G(x-y) \\+ &2\sum_{\substack{\norm{x}\geq \sqrt{n}\\ \frac{\sqrt{n}}{2}\leq \norm{y}\leq \sqrt{n}}}G_n(x)G_n(y)G(x-y) +2 \sum_{\substack{\norm{x}\geq \sqrt{n}\\ \norm{y}\leq \frac{\sqrt{n}}{2}}}G_n(x)G_n(y)G(x-y)\\ &\qquad \qquad \qquad\quad \qquad =:I_1 + I_2 + I_3,
\end{split}
\end{align}
where $I_k$ is the $k$-th sum appearing on the right hand side of the inequality above.
The first sum~$I_1$ is upper bounded by
\begin{align*}
2\sum_{k=0}^{\frac{\log_2(n)}{2}}\sum_{\frac{\sqrt{n}}{2^{k+1}}\leq \norm{x}\leq \frac{\sqrt{n}}{2^k}}\left( \sum_{\norm{y}\leq \frac{\sqrt{n}}{2^{k+2}}} G_n(x)G_n(y)G(x-y) + \sum_{r=0}^{\frac{\sqrt{n}}{2^k}}\sum_{\substack{y:\,\norm{y-x}=r\\ \norm{x}\geq \norm{y}\geq \frac{\sqrt{n}}{2^{k+2}}}} G_n(x) G_n(y)G(x-y) \right).
\end{align*}
For any fixed $k\leq \log_2(n)/2$, using~\eqref{eq:wellknownbound} we get
\begin{align}\label{eq:i1first}
\begin{split}
\sum_{\frac{\sqrt{n}}{2^{k+1}}\leq \norm{x}\leq \frac{\sqrt{n}}{2^k}} \sum_{\norm{y}\leq \frac{\sqrt{n}}{2^{k+2}}} G_n(x)G_n(y)G(x-y) &\lesssim
\left(\frac{\sqrt{n}}{2^{k}} \right)^d \left(\frac{\sqrt{n}}{2^k}\right)^{4-2d}\sum_{\norm{y}\leq \frac{\sqrt{n}}{2^{k+2}}}G_n(y) \\
& \lesssim \left(\frac{\sqrt{n}}{2^{k}} \right)^{4-d}\cdot \sum_{r=1}^{\frac{\sqrt{n}}{2^{k+2}}} \frac{r^{d-1}}{r^{d-2}} \asymp \left(\frac{\sqrt{n}}{2^{k}} \right)^{6-d}.
\end{split}
\end{align}
Similarly using~\eqref{eq:wellknownbound} again for any fixed $k\leq \log_2(n)/2$ we can bound
\begin{align}\label{eq:i1second}
\sum_{\frac{\sqrt{n}}{2^{k+1}}\leq \norm{x}\leq \frac{\sqrt{n}}{2^k}}\sum_{r=0}^{\frac{\sqrt{n}}{2^k}}\sum_{\substack{y:\,\norm{y-x}=r\\ \norm{x}\geq \norm{y}\geq \frac{\sqrt{n}}{2^{k+2}}}} G_n(x) G_n(y)G(x-y) \lesssim \left(\frac{\sqrt{n}}{2^{k}} \right)^{4-d} \sum_{r=1}^{\frac{\sqrt{n}}{2^k}} \frac{r^{d-1}}{r^{d-2}}\asymp \left(\frac{\sqrt{n}}{2^{k}} \right)^{6-d}.
\end{align}
Therefore using~\eqref{eq:i1first} and~\eqref{eq:i1second} and summing over all $k$ yields
\begin{align*}
I_1\lesssim f_d(n).
\end{align*}
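Indeed, summing the bound $(\sqrt{n}/2^k)^{6-d}$ over $0\leq k\leq \log_2(n)/2$ gives, for $d=5$, at most $\sqrt{n}\sum_{k\geq 0}2^{-k}\lesssim\sqrt{n}$; for $d=6$, a sum of order $\log n$ many terms all equal to $1$; and for $d\geq 7$, a geometric sum whose largest term, attained when $2^k\asymp \sqrt{n}$, is of order $1$. In all cases the total is $\lesssim f_d(n)$.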
We now turn to bound $I_2$. From~\eqref{eq:wellknownbound} we have
\begin{align*}
I_2 &\lesssim \sum_{\substack{\norm{x}\geq 2\sqrt{n} \\ \frac{\sqrt{n}}{2}\leq \norm{y}\leq \sqrt{n}} } G_n(x) G_n(y) G(x-y) + \sum_{\substack{\sqrt{n}\leq \norm{x}\leq 2\sqrt{n} \\ \frac{\sqrt{n}}{2}\leq \norm{y}\leq \sqrt{n}} } G_n(x) G_n(y) G(x-y) \\
&\lesssim n^2 \cdot \frac{1}{(\sqrt{n})^{d-2}} + (\sqrt{n})^{4-d} \sum_{r=1}^{\sqrt{n}} \frac{r^{d-1}}{r^{d-2}} \asymp f_d(n),\end{align*}
where for the first sum we used that
$\sum_x G_n(x) =n$. Finally, $I_3$ is treated similarly as above to
yield
\begin{align*}
I_3\lesssim n^2 \cdot\frac{1}{(\sqrt{n})^{d-2}} \asymp f_d(n).
\end{align*}
Putting all these bounds together concludes the proof.
\end{proof}
\begin{lemma}\label{lem:powers}
For all $n$, let $\mathbb{R}R_n$ and $\widetilde{\mathbb{R}R}_n$ be the ranges up to time $n$ of two independent simple random walks in $\mathbb{Z}^d$ started from~$0$.
For all $k,n\in \mathbb{N}$ we have
\[
\E{\left( \sum_{x\in \mathbb{R}R_n}\sum_{y\in \widetilde{\mathbb{R}R}_n} G(x,y)\right)^k} \leq C(k)(f_d(n))^k,
\]
where $f_d(n)$ is the function defined in the statement of Lemma~\ref{lem:rearrange} and $C(k)$ is a constant that depends only on $k$.
\end{lemma}
\begin{proof}[\bf Proof]
Let $L_\ell(x)$ denote the local time at $x$ up to time $\ell$ for the random walk $S$, i.e.\
\[
L_\ell(x) = \sum_{i=0}^{\ell-1}{\text{\Large $\mathfrak 1$}}(S_i=x).
\]
Let $\widetilde{S}$ be an independent walk and $\widetilde{L}$ denote its local times.
Then, we get
\begin{align*}
\sum_{x\in \mathbb{R}R_n} \sum_{y\in \widetilde{\mathbb{R}R}_n} G(x,y)
\le \sum_{x\in \mathbb{Z}^d}\sum_{y\in \mathbb{Z}^d} L_n(x) \widetilde{L}_n(y) G(x,y).
\end{align*}
So, for $k=1$, by independence and Lemma~\ref{lem:rearrange} we get
\begin{align*}
\E{\sum_{x\in \mathbb{R}R_n} \sum_{y\in \widetilde{\mathbb{R}R}_n} G(x,y)} \le
\sum_{x\in \mathbb{Z}^d} \sum_{y\in \mathbb{Z}^d} G_n(0,x) G_n(0,y) G(0,x-y) \lesssim f_d(n).
\end{align*}
As in Lemma~\ref{lem:rearrange}, to simplify notation we write $G_n(x) = G_n(0,x)$.
For the $k$-th moment we have
\begin{align}\label{kthmoment}
\E{\left( \sum_{x\in \mathbb{R}R_n} \sum_{y\in \widetilde{\mathbb{R}R}_n} G(x,y)\right)^k}
\le
\sum_{x_1,\ldots, x_k} \sum_{y_1,\ldots, y_k} \E{\prod_{i=1}^{k}L_n(x_i)} \E{\prod_{i=1}^{k}L_n(y_i)} \prod_{i=1}^{k}G(x_i-y_i).
\end{align}
For any $k$-tuples $x_1,\ldots, x_k$ and $y_1,\ldots,y_k$, we have
\begin{align*}
&\E{\prod_{i=1}^{k} L_n(x_i)} \leq \sum_{\sigma:\,\text{permutation of } \{1,\ldots,k\}}G_n(x_{\sigma(1)})\prod_{i=2}^{k}G_n(x_{\sigma(i)}- x_{\sigma(i-1)}) \quad \text{and}
\\ &\E{\prod_{i=1}^{k} L_n(y_i)} \leq \sum_{\pi:\,\text{permutation of } \{1,\ldots,k\}}G_n(y_{\pi(1)})\prod_{i=2}^{k}G_n(y_{\pi(i)}- y_{\pi(i-1)}).
\end{align*}
Without loss of generality, we consider the term corresponding to the identity permutation for $x$ and a permutation $\pi$ for $y$.
Then, the right hand side of \reff{kthmoment} is a sum
of terms of the form
\begin{align*}
G_n(x_1) G_n(x_2-x_1) \ldots G_n(x_k-x_{k-1})G_n(y_{\pi(1)})G_n(y_{\pi(2)}-y_{\pi(1)})\ldots G_n(y_{\pi(k)}-y_{\pi(k-1)})\prod_{i=1}^{k}G(x_i-y_i).
\end{align*}
Suppose now that $y_k$ appears in two factors of the above product, say
\[
G_n(y_k - y_{\pi(i)}) G_n(y_k-y_{\pi(j)}).
\]
By the triangle inequality we have that one of the following two inequalities has to be true
\[
\|y_k - y_{\pi(i)}\| \geq \frac{1}{2} \| y_{\pi(i)} - y_{\pi(j)}\|
\quad \text{or}\quad
\|y_k - y_{\pi(j)}\| \geq \frac{1}{2} \| y_{\pi(i)} - y_{\pi(j)}\|.
\]
Since Green's kernel is radially decreasing and satisfies $G(x)\asymp |x|^{2-d}$ for $\|x\|>1$ we get
\[
G_n(y_k - y_{\pi(i)}) G_n(y_k-y_{\pi(j)}) \lesssim G_n(y_{\pi(j)} - y_{\pi(i)})\left( G_n(y_k-y_{\pi(j)}) + G_n(y_k-y_{\pi(i)})\right).
\]
Plugging this upper bound into the product and summing only over $x_k$ and $y_k$, while fixing the other terms, we obtain
\begin{align*}
&\sum_{x_k, y_k} G_n(x_k-x_{k-1}) G_n(y_k-y_{\pi(i)}) G(x_k-y_k) \\
&= \sum_{x,y} G_n(x) G_n(y) G((x-y)-(x_{k-1}-y_{\pi(i)}))\lesssim f_d(n),
\end{align*}
where the equality follows from the change of variables $x=x_{k-1}-x_k$, $y=y_{\pi(i)}-y_k$ together with the symmetry of the walk, and the last inequality follows from both parts of Lemma~\ref{lem:rearrange}.
Continuing by induction completes the proof.
\end{proof}
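Let us record for later use that, since $f_6(n)=\log n$ and $f_d(n)=1$ for $d\geq 7$, Lemma~\ref{lem:powers} implies that for every $d\geq 6$, every fixed $k\geq 1$ and all $n\geq 2$,
\[
\E{\left( \sum_{x\in \mathbb{R}R_n}\sum_{y\in \widetilde{\mathbb{R}R}_n} G(x,y)\right)^k} \lesssim (\log n)^k;
\]
this is the form in which Lemma~\ref{lem:powers} will be used in the sequel.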
\subsection{On the variance of $\cc{\mathbb{R}R_n}$}
\begin{lemma}\label{lem:variance}
For $d\geq 6$ there exists a strictly positive constant $\gamma_d$ so that
\[
\lim_{n\to\infty} \frac{\vr{\cc{\mathbb{R}R_n}}}{n} =\gamma_d>0.
\]
\end{lemma}
We split the proof of the lemma above in two parts. First we establish the existence of the limit and then we show it is strictly positive. For the existence, we need to use Hammersley's lemma~\cite{Hammersley}, which we recall here.
\begin{lemma}[Hammersley]\label{lem:ham}
Let $(a_n), (b_n), (c_n)$ be three sequences of real numbers satisfying for all~$n,m$
\[
a_n+a_m - c_{n+m}\leq a_{n+m}\leq a_n+a_m + b_{n+m}.
\]
If the sequences $(b_n), (c_n)$ are positive and non-decreasing and additionally satisfy
\[
\sum_{n=1}^{\infty} \frac{b_n+c_n}{n(n+1)} <\infty,
\]
then the limit as $n\to \infty$ of $a_n/n$ exists.
\end{lemma}
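In our application of Lemma~\ref{lem:ham} below, $a_n$ will be the variance of $\cc{\mathbb{R}R_n}$, while $b_n$ and $c_n$ will be non-decreasing sequences of order $\sqrt{n}(\log n)^2$, for which
\[
\sum_{n=1}^{\infty}\frac{b_n+c_n}{n(n+1)}\ \lesssim\ \sum_{n=1}^{\infty}\frac{(\log n)^2}{n^{3/2}}\ <\ \infty.
\]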
For a random variable $X$ we will write $\overline{X}=X-\E{X}$.
\begin{lemma}\label{lem:exis}
For $d\geq 6$, the limit as $n$ tends to infinity
of $\vr{\cc{\mathbb{R}R_n}}/n$ exists.
\end{lemma}
\begin{proof}[\bf Proof]
We follow closely the proof of Lemma~6.2 of Le Gall~\cite{Le-Gall}.
To simplify notation we write $X_n=\cc{\mathbb{R}R_n}$,
and we set for all $k\geq 1$
\[
a_k=\sup\left\{ \sqrt{\E{\overline{X}_n^2}}: \, 2^k\leq n< 2^{k+1}\right\}.
\]
For $k\geq 2$, take $n$ such that $2^k\leq n<2^{k+1}$
and write $\ell=[n/2]$ and $m=n-\ell$.
Then, from Corollary~\ref{cor:decomposition} for $L=1$ we get
\begin{align*}
X^{(1)}_{\ell} + X^{(2)}_{m} - 2 \mathcal{E}_\ell\leq X_n \leq X^{(1)}_{\ell} + X^{(2)}_{m},
\end{align*}
where $X^{(1)}$ and $X^{(2)}$ are independent and $\mathcal{E}_\ell$ has the same law as $\sum_{x\in \mathbb{R}R_\ell}\sum_{y\in\widetilde{\mathbb{R}R}_m} G(x,y)$ with~$\widetilde{\mathbb{R}R}$ an independent copy of $\mathbb{R}R$.
Taking expectations and subtracting we obtain
\begin{align*}
|\overline{X}_n - (\overline{X}^{(1)}_{\ell} + \overline{X}^{(2)}_{m} )|\leq 2\max\left(\mathcal{E}_\ell, \E{\mathcal{E}_\ell}\right).
\end{align*}
Since $\overline{X}^{(1)}$ and $\overline{X}^{(2)}$ are independent, we get
\[
\norm{\overline{X}^{(1)}_{\ell} + \overline{X}^{(2)}_{m}}_2 = \left( \norm{\overline{X}^{(1)}_{\ell}}_2^2
+\norm{\overline{X}^{(2)}_{m}}_2^2\right)^{1/2}.
\]
By the triangle inequality we now obtain
\begin{align*}
\|\overline{X}_n\|_2&\leq \| \overline{X}^{(1)}_{\ell}+ \overline{X}^{(2)}_{m}\|_2 + \|2\max(\mathcal{E}_\ell,\E{\mathcal{E}_\ell})\|_2 \\
&\leq \left( \norm{\overline{X}^{(1)}_{\ell}}_2^2
+\norm{\overline{X}^{(2)}_{m}}_2^2\right)^{1/2} + 2\left(\norm{\mathcal{E}_\ell}_2 + \E{\mathcal{E}_\ell} \right) \leq
\left( \norm{\overline{X}^{(1)}_{\ell}}_2^2
+\norm{\overline{X}^{(2)}_{m}}_2^2\right)^{1/2} + c_1 f_d(n) \\ &\leq \left( \norm{\overline{X}^{(1)}_{\ell}}_2^2
+\norm{\overline{X}^{(2)}_{m}}_2^2\right)^{1/2} + c_1 \log n,
\end{align*}
where $c_1$ is a positive constant. The penultimate inequality follows from Lemma~\ref{lem:powers},
and for the last inequality we used that $f_d(n)\leq \log n$
for all $d\geq 6$. From the definition of $a_k$, we deduce that
\begin{align*}
a_k\leq 2^{1/2} a_{k-1} +c_2 k,
\end{align*}
for another positive constant $c_2$.
Setting $b_k=a_k k^{-1}$ gives for all $k$ that
\begin{align*}
b_{k}\leq 2^{1/2} b_{k-1} + c_2,
\end{align*}
and hence, iterating, $b_k\leq 2^{k/2}b_1+c_2\sum_{j=0}^{k-1}2^{j/2}\lesssim 2^{k/2}$, which implies that $a_k\lesssim k \cdot 2^{k/2}$ for all $k$. This gives that for all $n$
\begin{align}\label{eq:roughbound}
\vr{\overline{X}_n} \lesssim n(\log n)^2.
\end{align}
Proposition~\ref{prop:capdec} and~\eqref{key-lawler} give that for all $n,m$
\begin{align*}
X^{(1)}_n + X_{m}^{(2)} - 2\mathcal{E}(n,m)\leq X_{n+m}\leq X^{(1)}_n + X_{m}^{(2)},
\end{align*}
where again $X^{(1)}$ and $X^{(2)}$ are independent and
\begin{align}\label{eq:useful}
\mathcal{E}(n,m)=\sum_{x\in \mathbb{R}R_n}\sum_{y\in \widetilde{\mathbb{R}R}_m} G(x,y) \leq\sum_{x\in \mathbb{R}R_{n+m}}\sum_{y\in \widetilde{\mathbb{R}R}_{n+m}} G(x,y)
\end{align}
with $\mathbb{R}R$ and $\widetilde{\mathbb{R}R}$ independent. As above we get
\begin{align*}
\left|\overline{X}_{n+m} - \left( \overline{X}^{(1)}_n + \overline{X}_{m}^{(2)} \right)\right| \leq 2\max(\mathcal{E}(n,m), \E{\mathcal{E}(n,m)})
\end{align*}
and by the triangle inequality again
\begin{align*}
\left|\norm{\overline{X}_{n+m}}_2 - \norm{\overline{X}^{(1)}_n + \overline{X}_{m}^{(2)} }_2 \right| \leq 4\norm{\mathcal{E}(n,m)}_2.
\end{align*}
Taking the square of the above inequality and using that $\overline{X}^{(1)}_n$ and $\overline{X}^{(2)}_m$ are independent we obtain
\begin{align*}
\norm{\overline{X}_{n+m}}_2^2 &\leq \norm{\overline{X}_n}_2^2 + \norm{\overline{X}_m}_2^2+ 8\sqrt{\norm{\overline{X}_n}_2^2 +\norm{\overline{X}_m}_2^2} \norm{\mathcal{E}(n,m)}_2 + 16\norm{\mathcal{E}(n,m)}_2^2\\
\norm{\overline{X}_n}_2^2 + \norm{\overline{X}_m}_2^2 &\leq \norm{\overline{X}_{n+m}}_2^2 + 8\norm{\overline{X}_{n+m}}_2 \norm{\mathcal{E}(n,m)}_2+
16\norm{\mathcal{E}(n,m)}_2^2.
\end{align*}
We set $\gamma_n=\norm{\overline{X}_n}_2^2$, $d_n=c_1 \sqrt{n}
(\log n)^2$ and $d_n'=c_2\sqrt{n}(\log n)^2$, where $c_1$ and $c_2$ are two positive constants.
Using the bound from~\eqref{eq:roughbound} together with~\eqref{eq:useful} and Lemma~\ref{lem:powers} in the inequalities above yields
\begin{align*}
\gamma_n + \gamma_m - d'_{n+m}\leq \gamma_{n+m} \leq \gamma_n +\gamma_m + d_{n+m}.
\end{align*}
We can now apply Hammersley's result, Lemma~\ref{lem:ham}, to deduce that the limit of $\gamma_n/n$ as $n\to\infty$ exists, i.e.
\[
\lim_{n\to \infty}\frac{\vr{\overline{X}_n}}{n} = \gamma_d\geq 0
\]
and this finishes the proof on the existence of the limit.
\end{proof}
\subsection{Non-degeneracy: $\gamma_d>0$}\label{sec:gammapos}
To complete the proof of Lemma~\ref{lem:variance} we need to show that the limit $\gamma$ is strictly positive. We will achieve this by using the same trick of not allowing double-backtracks at even times (defined below) as in~\cite[Section~4]{AS1}.
As in~\cite{AS1} we consider a walk with no double backtracks at even times. A walk makes a double backtrack at time~$n$ if $S_{n-1}=S_{n-3}$ and $S_n=S_{n-2}$. Let $\widetilde{S}$ be a walk with no double backtracks at even times constructed as follows: we set $\widetilde{S}_0=0$ and let $\widetilde{S}_1$ be a random neighbour of $0$ and $\widetilde{S}_2$ a random neighbour of $\widetilde{S}_1$. Suppose we have constructed $\widetilde{S}$ for all times $k\leq 2n$, then we let $(\widetilde{S}_{2n+1}, \widetilde{S}_{2n+2})$ be uniform in the set
\[
\{(x,y): \quad \|x-y\| = \|\widetilde{S}_{2n}-x\| =1 \,\text{ and }\, (x,y)\neq (\widetilde{S}_{2n-1}, \widetilde{S}_{2n})\}.
\]
Having constructed $\widetilde{S}$ we can construct a simple random walk in $\mathbb{Z}^d$ by adding a geometric number of double backtracks to $\widetilde{S}$ at even times. More formally, let $(\xi_i)_{i=2,4,\ldots}$ be i.i.d.\ geometric random variables with mean $p/(1-p)$ and
\[
\Psir{\xi=k} = (1-p) p^k \quad \forall\, k\geq 0,
\]
where $p=1/(2d)^2$. Setting
\[
N_k = \sum_{\substack{i=2 \\ i\text{ even}}}^{k} \xi_i,
\]
we construct $S$ from $\widetilde{S}$ as follows. First we set $S_i=\widetilde{S}_i$ for all $i\leq 2$ and for all $k\geq 1$ we set $I_k=[2k+ 2N_{2(k-1)} +1, 2k+2N_{2k}]$. If $I_k\neq \varnothing$, then if $i\in I_k$ is odd, we set $S_i = \widetilde{S}_{2k-1}$, while if $i$ is even, we set $S_i = \widetilde{S}_{2k}$. Afterwards, for the next two time steps, we follow the path of $\widetilde{S}$, i.e.,
\[
S_{2k+2N_{2k}+1} = \widetilde{S}_{2k+1} \quad \text{ and } \quad S_{2k+2N_{2k}+2} = \widetilde{S}_{2k+2}.
\]
From this construction, it is immediate that $S$ is a simple random walk on~$\mathbb{Z}^d$. Let $\widetilde{\mathbb{R}R}$ be the range of $\widetilde{S}$. From the construction of $S$ from $\widetilde{S}$ we immediately get that
\begin{align}\label{eq:tilderr}
\widetilde{\mathbb{R}R}_{n}= \mathbb{R}R_{n+2N_{n}} = \mathbb{R}R_{n+2N_{n-1}},
\end{align}
where the second equality follows, since adding the double backtracks does not change the range.
\begin{lemma}
Let $\widetilde{S}$ be a random walk on $\mathbb{Z}^d$ starting from $0$ with no double backtracks at even times. If $\widetilde{\mathbb{R}R}$ stands for its range, then for any positive constants $c$ and $c'$ we have
\[
\Psir{\sum_{x\in \widetilde{\mathbb{R}R}_{2n}}\sum_{y\in \widetilde{\mathbb{R}R}[2n,(2+c')n]} G(x,y) \geq c\sqrt{n}} \to 0\text{ as } n\to\infty.
\]
\end{lemma}
\begin{proof}[\bf Proof]
Let $M$ be the number of double backtracks added during the interval $[2n,(2+c')n]$, i.e.,
\begin{align}\label{eq:evenmsum}
M=\sum_{\substack{i=2n\\ i\text{ even}}}^{(2+c')n}\xi_i.
\end{align}
Then, we have that
\begin{align*}
\widetilde{\mathbb{R}R}[2n,(2+c')n] \subseteq \mathbb{R}R[2n+2N_{2(n-1)}, (2+c')n+ 2N_{2(n-1)} + 2M].
\end{align*}
Note that the inclusion above could be strict,
since $\widetilde{S}$ does not allow double backtracks, while $S$ does.
We now can write
\begin{align*}
&\Psir{\sum_{x\in \widetilde{\mathbb{R}R}_{2n}}\sum_{y\in \widetilde{\mathbb{R}R}[2n,(2+c')n]} G(x,y) \geq c\sqrt{n}}
\\&\leq \Psir{\sum_{x\in \mathbb{R}R[0,2n+2N_{2(n-1)}]}\sum_{y\in \mathbb{R}R[2n+2N_{2(n-1)},(2+c')n+2N_{2(n-1)}+2M]} G(x,y) \geq c\sqrt{n}} \\
&\leq \Psir{\sum_{x\in \mathbb{R}R[0,2n+2N_{2(n-1)}]}\sum_{y\in \mathbb{R}R[2n+2N_{2(n-1)},(2+2C+c')n+2N_{2(n-1)}]} G(x,y) \geq c\sqrt{n}} + \Psir{M\geq Cn}.
\end{align*}
By~\eqref{eq:evenmsum} and Chebyshev's inequality
we obtain that for some positive $C$, $\Psir{M\geq Cn}$ vanishes
as~$n$ tends to infinity. Since $G(x-a,y-a) = G(x,y)$
for all~$x,y,a$, it follows that
\begin{align*}
\Psir{\sum_{x\in \mathbb{R}R[0,2n+2N_{2(n-1)}]}\sum_{y\in \mathbb{R}R[2n+2N_{2(n-1)},(2+2C+c')n+2N_{2(n-1)}]} G(x,y) \geq c\sqrt{n}}\\ = \Psir{\sum_{x\in \mathbb{R}R_1}\sum_{y\in \mathbb{R}R_2} G(x,y) \geq c\sqrt{n}},
\end{align*}
where $\mathbb{R}R_1 = \mathbb{R}R[0,2n+2N_{2(n-1)}] - S_{2n+2N_{2(n-1)}}$ and $\mathbb{R}R_2 = \mathbb{R}R[2n+2N_{2(n-1)},(2+2C+c')n+2N_{2(n-1)}]-S_{2n+2N_{2(n-1)}}$. The importance of considering $\mathbb{R}R_1$ up to time $2n+2N_{2(n-1)}$ and not up to time $2n+2N_{2n}$ is in order to make~$\mathbb{R}R_1$ and $\mathbb{R}R_2$ independent. Indeed, this follows since after time~$2n+2N_{2(n-1)}$ the walk $S$ behaves as a simple random walk in $\mathbb{Z}^d$ independent of the past. Hence we can replace $\mathbb{R}R_2$ by $\mathbb{R}R'_{(2+2C+c')n}$, where $\mathbb{R}R'$ is the range of a simple random walk independent of~$\mathbb{R}R_1$. Therefore we obtain
\begin{align*}
\Psir{\sum_{x\in \mathbb{R}R_1}\sum_{y\in \mathbb{R}R'_{(2+2C+c')n}} G(x,y) \geq c\sqrt{n}} \leq \Psir{\sum_{x\in \mathbb{R}R_{(2C'+2)n}}\sum_{y\in \mathbb{R}R'_{(2+2C+c')n}} G(x,y) \geq c\sqrt{n}} \\+ \Psir{N_{2(n-1)}\geq C'n}.
\end{align*}
As before, by Chebyshev's inequality for $C'$ large enough $\Psir{N_{2(n-1)}\geq C'n} \to 0$ as $n\to \infty$ and by Markov's inequality and Lemma~\ref{lem:rearrange}
\begin{align*}
\Psir{\sum_{x\in \mathbb{R}R_{(2C'+2)n}}\sum_{y\in \mathbb{R}R'_{(2+2C+c')n}} G(x,y) \geq c\sqrt{n}} &\leq \frac{\E{\sum_{x\in \mathbb{R}R_{(2C'+2)n}}\sum_{y\in \mathbb{R}R'_{(2+2C+c')n}} G(x,y)}}{c\sqrt{n}}\\ &\lesssim \frac{\log n}{\sqrt{n}},
\end{align*}
and this concludes the proof.
\end{proof}
\begin{claim}\label{cl:sllntil}
Let $\widetilde{\mathbb{R}R}$ be the range of $\widetilde{S}$. Then, almost surely
\[
\frac{\cc{\widetilde{\mathbb{R}R}[2k,2k+n]}}{n} \to \alpha_d \cdot \left(1+\frac{p}{1-p} \right)\quad \text{ as } \quad n\to\infty.
\]
\end{claim}
\begin{proof}[\bf Proof]
As mentioned already in the Introduction, Jain and Orey~\cite{JainOrey} proved that
\begin{align}\label{eq:lawoflargenumbers}
\lim_{n\to\infty}
\frac{\cc{\mathbb{R}R_n}}{n} = \alpha_d=\inf_{m}\frac{\E{\cc{\mathbb{R}R_m}}}{m}.
\end{align}
with the limit $\alpha_d$ being strictly positive for $d\geq 5$.
Clearly the range of $\widetilde{S}$ in $[2k,2k+n]$ satisfies
\begin{align*}
\mathbb{R}R[2k+2N_{2k-1}, 2k+n+2N_{2k-1}+2N'_n]\setminus \{S_{2k+2N_{2k-1}+1}, S_{2k+2N_{2k-1}+2}\}\subseteq \widetilde{\mathbb{R}R}[2k,2k+n]\\ \widetilde{\mathbb{R}R}[2k,2k+n]\subseteq \mathbb{R}R[2k+2N_{2k-1}, 2k+n+2N_{2k-1}+2N'_n],
\end{align*}
where $N'_n$ is the number of double backtracks added between times $2k$ and $2k+n$. We now note that after time $2k+2N_{2k-1}$ the walk $S$ behaves as a simple random walk in $\mathbb{Z}^d$. Hence using~\eqref{eq:lawoflargenumbers} and the fact that $N'_n/n\to p/(2(1-p))$ as $n\to \infty$ almost surely it follows that almost surely
\begin{align*}
\lim_{n\to\infty} \frac{\cc{\mathbb{R}R[2k+2N_{2k-1}, 2k+n+2N_{2k-1}+2N'_n]}}{n}= \alpha_d \cdot \left(1+\frac{p}{1-p} \right),
\end{align*}
and this concludes the proof.
\end{proof}
\begin{proof}[\bf Proof of Lemma~\ref{lem:variance}]
Let $\widetilde{S}$ be a random walk with no double backtracks at even times and $S$ a simple random walk constructed from $\widetilde{S}$ as described at the beginning of Section~\ref{sec:gammapos}. We thus have $\widetilde{\mathbb{R}R}_n = \mathbb{R}R_{n+2N_n}$ for all $n$. Let $k_n = [(1-p)n]$, $i_n=[(1-p)(n+A\sqrt{n})]$ and $\ell_n = [(1-p)(n-A\sqrt{n})]$ for a constant $A$ to be determined later.
Then, by Claim~\ref{cl:sllntil} and the lemma above, for all $n$ sufficiently large such that $k_n$ and $\ell_n$ are even numbers we have
\begin{align}\label{eq:78}
\Psir{\cc{\widetilde{\mathbb{R}R}[k_n,i_n]}\geq \frac{3}{4}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}}&\geq\frac{7}{8} \qquad \text{and} \\
\label{eq:18} \Psir{\sum_{x\in \widetilde{\mathbb{R}R}[0,k_n]}\sum_{y\in \widetilde{\mathbb{R}R}[k_n,i_n]}G(x,y) \leq \frac{1}{8}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}}&\geq \frac{7}{8}
\end{align}
and
\begin{align}
\label{eq:118}
\Psir{\cc{\widetilde{\mathbb{R}R}[\ell_n,k_n]}\geq \frac{3}{4}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}}&\geq\frac{7}{8} \qquad \text{and} \\
\label{eq:128} \Psir{\sum_{x\in \widetilde{\mathbb{R}R}[0,\ell_n]}\sum_{y\in \widetilde{\mathbb{R}R}[\ell_n,k_n]}G(x,y) \leq \frac{1}{8}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}}&\geq \frac{7}{8}
\end{align}
We now define the events
\begin{align*}
B_n = \left\{ \frac{2N_{\ell_n} - 2\E{N_{\ell_n}}}{\sqrt{n}} \,\,\in \,\,[A+1,A+2]\right\} \quad \text{and} \quad D_n = \left\{ \frac{2N_{i_n} - 2\E{N_{i_n}}}{\sqrt{n}} \,\,\in \,\,[1-A,2-A]\right\}.
\end{align*}
Then, for all $n$ sufficiently large we have for a constant $c_A>0$ that depends on $A$
\begin{align}\label{eq:bndn}
\Psir{B_n} \geq c_A \quad \text{and}\quad \Psir{D_n}\geq c_A.
\end{align}
Since we have already shown the existence of the limit of $\vr{\cc{\mathbb{R}R_n}}/n$ as $n$ tends to infinity, it suffices to prove that the limit is strictly positive along a subsequence. So we are only going to take~$n$ such that~$k_n$ is even. Take $n$ sufficiently large so that~\eqref{eq:78} holds and $k_n$ is even. We then consider two cases:
\begin{align*}
\rm{(i)}\,\, \Psir{\cc{\widetilde{\mathbb{R}R}[0,k_n]}\geq \E{\cc{\mathbb{R}R_n}}}\geq \frac{1}{2} \quad \text{or} \quad \rm{(ii)} \,\, \Psir{\cc{\widetilde{\mathbb{R}R}[0,k_n]}\leq \E{\cc{\mathbb{R}R_n}}}\geq \frac{1}{2}.
\end{align*}
We start with case (i). Using Proposition~\ref{prop:capdec} we have
\begin{align*}
\cc{\widetilde{\mathbb{R}R}[0,i_n]} \geq \cc{\widetilde{\mathbb{R}R}[0,k_n]} + \cc{\widetilde{\mathbb{R}R}[k_n,i_n]} - 2\sum_{x\in \widetilde{\mathbb{R}R}[0,k_n]}\sum_{y\in \widetilde{\mathbb{R}R}[k_n,i_n]} G(x,y).
\end{align*}
From this, we deduce that
\begin{align}\label{eq:allcc}
\begin{split}
\Psir{\cc{\widetilde{\mathbb{R}R}[0,i_n]}\geq \E{\cc{\mathbb{R}R_n}} + \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) \sqrt{n}} \\
\geq \Psir{\cc{\widetilde{\mathbb{R}R}[0,k_n]}\geq \E{\cc{\mathbb{R}R_n}} , \cc{\widetilde{\mathbb{R}R}[k_n,i_n]} \geq \frac{3}{4}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}} \\-
\Psir{\sum_{x\in \widetilde{\mathbb{R}R}[0,k_n]}\sum_{y\in \widetilde{\mathbb{R}R}[k_n,i_n]}G(x,y) > \frac{1}{8}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}}.
\end{split}
\end{align}
The assumption of case (i) and~\eqref{eq:78} give that
\begin{align*}
\Psir{\cc{\widetilde{\mathbb{R}R}[0,k_n]}\geq \E{\cc{\mathbb{R}R_n}} , \cc{\widetilde{\mathbb{R}R}[k_n,i_n]} \geq \frac{3}{4}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p}\right)\cdot \sqrt{n}} \geq \frac{3}{8}.
\end{align*}
Plugging this lower bound together with~\eqref{eq:18} into~\eqref{eq:allcc} yields
\begin{align*}
\Psir{\cc{\widetilde{\mathbb{R}R}[0,i_n]}\geq \E{\cc{\mathbb{R}R_n}} + \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) \cdot \sqrt{n}} \geq \frac{1}{4}.
\end{align*}
Since $N$ is independent of $\widetilde{S}$, using~\eqref{eq:bndn} it follows that
\begin{align*}
\Psir{\cc{\widetilde{\mathbb{R}R}[0,i_n]}\geq \E{\cc{\mathbb{R}R_n}} + \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right)\cdot \sqrt{n}, D_n}\geq \frac{c_A}{4}.
\end{align*}
It is not hard to see that on the event $D_n$ we have $i_n+2N_{i_n}\in [n,n+3\sqrt{n}]$. Therefore, since $\widetilde{\mathbb{R}R}[0,k] = \mathbb{R}R[0,k+2N_k]$ we deduce
\begin{align*}
\Psir{\exists\,\, m\leq 3\sqrt{n}: \, \cc{\mathbb{R}R[0,n+m]} \geq \E{\cc{\mathbb{R}R_n}} + \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right)\cdot \sqrt{n} }\geq \frac{c_A}{4}.
\end{align*}
Since $\cc{\mathbb{R}R[0,\ell]}$ is increasing in $\ell$, we obtain
\begin{align*}
\Psir{\cc{\mathbb{R}R[0,n+3\sqrt{n}]} \geq \E{\cc{\mathbb{R}R_n}} + \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right)\cdot \sqrt{n} }\geq \frac{c_A}{4}.
\end{align*}
Using now the deterministic bound $\cc{
\mathbb{R}R[0,n+3\sqrt{n}]} \leq \cc{\mathbb{R}R[0,n]} + 3\sqrt{n}$ gives
\begin{align*}
\Psir{\cc{\mathbb{R}R[0,n]} \geq \E{\cc{\mathbb{R}R_n}} + \left(\frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) - 3\right)\cdot \sqrt{n}}\geq \frac{c_A}{4},
\end{align*}
and hence choosing $A$ sufficiently large so that
\[
\frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) - 3>0
\]
and using Chebyshev's inequality shows that in case (i), for a strictly positive constant $c$, we have
\[
\vr{\cc{\mathbb{R}R_n}} \geq c\cdot n.
\]
We now treat case (ii). We are only going to consider $n$ so that $\ell_n$ is even.
Using Proposition~\ref{prop:capdec} again we have
\[
\cc{\widetilde{\mathbb{R}R}[0,\ell_n]}\leq \cc{\widetilde{\mathbb{R}R}[0,k_n]} - \cc{\widetilde{\mathbb{R}R}[\ell_n,k_n]} + 2\sum_{x\in \widetilde{\mathbb{R}R}[0,\ell_n]} \sum_{y\in \widetilde{\mathbb{R}R}[\ell_n,k_n]} G(x,y).
\]
Then, similarly as before using~\eqref{eq:118}, \eqref{eq:128} and~\eqref{eq:bndn} we obtain
\begin{align*}
\Psir{\cc{\widetilde{\mathbb{R}R}[0,\ell_n]}\leq \E{\cc{\mathbb{R}R_n}} - \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right)\cdot \sqrt{n},\, B_n}\geq \frac{c_A}{4}.
\end{align*}
Since on $B_n$ we have $\ell_n + 2N_{\ell_n}\in [n,n+3\sqrt{n}]$, it follows that
\begin{align*}
\Psir{\exists \,\, m\leq 3\sqrt{n}: \, \cc{\mathbb{R}R[0,n+m]} \leq \E{\cc{\mathbb{R}R_n}} - \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) \cdot \sqrt{n}} \geq \frac{c_A}{4}.
\end{align*}
Using the monotonicity of $\cc{\mathbb{R}R_\ell}$ in $\ell$ we finally conclude that
\[
\Psir{\cc{\mathbb{R}R[0,n]} \leq \E{\cc{\mathbb{R}R_n}} - \frac{1}{2}\cdot \left(\frac{A\cdot \alpha_d \cdot p}{1-p} \right) \cdot \sqrt{n}} \geq \frac{c_A}{4},
\]
and hence Chebyshev's inequality again finishes the proof in case (ii).
\end{proof}
\section{Central limit theorem}\label{sec-four}
We start this section by recalling the Lindeberg-Feller theorem.
Then, we give the proof of Theorem~\ref{thm:clt}.
\begin{theorem}[Lindeberg-Feller]\label{thm:lind}
For each $n$ let $(X_{n,i}: \, 1\leq i\leq n)$ be a collection of independent random variables with zero mean. Suppose that the following two conditions are satisfied
\newline
{\rm{(i)}} $\sum_{i=1}^{n}\E{X_{n,i}^2} \to \sigma^2>0$ as $n\to \infty$ and
\newline
{\rm{(ii)}} $\sum_{i=1}^{n}\E{(X_{n,i})^2{\text{\Large $\mathfrak 1$}}(|X_{n,i}|>\varepsilon)} \to 0$ as $n\to \infty$ for all $\varepsilon>0$.
\newline
Then, $S_n=X_{n,1}+\ldots + X_{n,n} \Longrightarrow \sigma \mathbb{N}N(0,1)$ as $n\to \infty$.
\end{theorem}
For a proof we refer the reader to~\cite[Theorem~3.4.5]{Durrett}.
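In the proof of Theorem~\ref{thm:clt} below, Theorem~\ref{thm:lind} will be applied to the triangular array formed by the $2^L$ (centred and normalised by $\sqrt{n}$) capacities of the sub-ranges appearing in Corollary~\ref{cor:decomposition}, with $L=L(n)$ chosen so that $2^{L}=n^{1/4}$; the fact that the number of summands is $2^L$ rather than $n$ is of course immaterial.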
Before proving Theorem~\ref{thm:clt}, we upper bound the fourth moment of~$\overline{\cc{\mathbb{R}R_n}}$. Recall that for a random variable $X$ we write $\overline{X}= X-\E{X}$.
\begin{lemma}\label{lem:fourth}
For all $d\geq 6$ and for all $n$ we have
\[
\E{(\overline{\cc{\mathbb{R}R_n}})^4}\lesssim n^2.
\]
\end{lemma}
\begin{proof}[\bf Proof]
This proof is similar to the proof of Lemma~\ref{lem:exis}. We only emphasize the points where they differ. Again we write $X_n=\cc{\mathbb{R}R_n}$ and we set for all $k\geq 1$
\[
a_k=\sup\left\{ \left(\E{\overline{X}_n^4}\right)^{1/4}: \, 2^k\leq n< 2^{k+1}\right\}.
\]
For $k\geq 2$ take $n$ such that $2^k\leq n< 2^{k+1}$ and write $n_1=[n/2]$ and $n_2=n-n_1$. Then, Corollary~\ref{cor:decomposition} and the triangle inequality give
\begin{align*}
\|\overline{X}_n\|_4\leq \| \overline{X}_{n_1}+ \overline{X}_{n_2}\|_4 + 4\|\mathcal{E}(n_1,n_2)\|_4 \leq \left( \E{\overline{X}_{n_1}^4} + \E{\overline{X}_{n_2}^4} + 6\E{\overline{X}_{n_1}^2} \E{\overline{X}_{n_2}^2} \right)^{1/4}+c_1\log n,
\end{align*}
where the last inequality follows from Lemma~\ref{lem:powers} and the fact that $\overline{X}_{n_1}$ and $\overline{X}_{n_2}$ are independent. Using Lemma~\ref{lem:variance} we get that
\[
\E{\overline{X}_{n_1}^2} \E{\overline{X}_{n_2}^2} \asymp n^2.
\]
Also using the obvious inequality for $a,b>0$ that $(a+b)^{1/4}\leq a^{1/4}+ b^{1/4}$ we obtain
\begin{align*}
\|\overline{X}_n\|_4\leq \left( \E{\overline{X}_{n_1}^4} + \E{\overline{X}_{n_2}^4} \right)^{1/4} + c_2\sqrt{n}.
\end{align*}
We deduce that
\begin{align*}
a_k\leq 2^{1/4} a_{k-1} +c_3 2^{k/2}.
\end{align*}
Setting $b_k= 2^{-k/2} a_k$ we get
\[
b_{k}\leq \frac{1}{2^{1/4}}b_{k-1} + c_3.
\]
This implies that $(b_k,k\in \mathbb{N})$
is a bounded sequence, and hence $a_k\leq C 2^{k/2}$ for a positive constant $C$, or in other words,
\[
\left(\E{\overline{X}_n^4}\right)^{1/4} \lesssim \sqrt{n}
\]
and this concludes the proof.
\end{proof}
\begin{proof}[\bf Proof of Theorem~\ref{thm:clt}]
From Corollary~\ref{cor:decomposition} we have
\begin{align}\label{eq:bigeq}
\sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}} - 2\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}\leq \cc{\mathbb{R}R_n} \leq \sum_{i=1}^{2^L} \cc{\mathbb{R}R^{(i)}_{n/2^L}},
\end{align}
where $(\cc{\mathbb{R}R^{(i)}_{n/2^L}})$ are independent for different $i$'s and $\mathbb{R}R^{(i)}_{n/2^L}$ has the same law as $\mathbb{R}R_{[n/2^L]}$ or~$\mathbb{R}R_{[n/2^L+1]}$, and for each $\ell$ the random variables $(\mathcal{E}_{\ell}^{(i)})$ are independent and have the same law as $\sum_{x\in \mathbb{R}R_{n/2^\ell}} \sum_{y\in \widetilde{\mathbb{R}R}_{n/2^\ell}} G(x,y)$, with $\widetilde{\mathbb{R}R}$ an independent copy of $\mathbb{R}R$.
To simplify notation we set $X_{i,L} = \cc{\mathbb{R}R^{(i)}_{n/2^L}}$ and $X_n=\cc{\mathbb{R}R_n}$ and for convenience we rewrite~\eqref{eq:bigeq} as
\begin{align}\label{eq:keyeq}
\sum_{i=1}^{2^L} X_{i,L} - 2\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}\leq X_n \leq \sum_{i=1}^{2^L} X_{i,L}.
\end{align}
We now let
\begin{align*}
\mathcal{E}(n) = \sum_{i=1}^{2^L}\overline{X}_{i,L} - \overline{X}_n.
\end{align*}
Using inequality~\eqref{eq:keyeq} we get
\begin{align*}
\E{|\mathcal{E}(n)|} \leq 4\E{\sum_{\ell=1}^{L}\sum_{i=1}^{2^{\ell-1}} \mathcal{E}_{\ell}^{(i)}}\lesssim\sum_{\ell=1}^{L} 2^\ell \log n \lesssim 2^L \log n,
\end{align*}
where the penultimate inequality follows from Lemma~\ref{lem:powers} for $k=1$ and the fact that $f_d(n)\leq \log n$ for all $d\geq 6$.
Choosing $L$ so that $2^L= n^{1/4}$ gives $\E{|\mathcal{E}(n)|}/\sqrt{n}\to 0$ as $n\to \infty$. We can thus reduce the problem of showing that $\overline{X}_n/{\sqrt{n}}$ converges in distribution to showing that $\sum_{i=1}^{2^L}\overline{X}_{i,L}/\sqrt{n}$ converges to a normal random variable.
We now focus on proving that
\begin{align}\label{eq:goal}
\frac{\sum_{i=1}^{2^L} \overline{X}_{i,L}}{\sqrt{n}} \Longrightarrow \sigma \mathbb{N}N(0,1) \quad \text{as } n\to \infty.
\end{align}
We do so by invoking the Lindeberg-Feller Theorem~\ref{thm:lind}. From Lemma~\ref{lem:variance} we immediately get that, as $n$ tends to infinity,
\[
\sum_{i=1}^{2^L} \frac{1}{n}\cdot\vr{\overline{X}_{i,L}} \sim \frac{2^L}{n} \cdot \gamma_d\cdot \frac{n}{2^L} = \gamma_d>0,
\]
which means that the first condition of Lindeberg-Feller is satisfied. It remains to check the second one, i.e.,
\begin{align*}
\lim_{n\to\infty}
\sum_{i=1}^{2^L} \frac{1}{n}\cdot \E{\overline{X}_{i,L}^2{\text{\Large $\mathfrak 1$}}(|\overline{X}_{i,L}|>\varepsilon \sqrt{n})} = 0.
\end{align*}
By the Cauchy--Schwarz inequality, we have
\begin{align*}
\E{\overline{X}_{i,L}^2{\text{\Large $\mathfrak 1$}}(|\overline{X}_{i,L}|>\varepsilon \sqrt{n})}\leq \sqrt{\E{(\overline{X}_{i,L})^4}\Psir{|\overline{X}_{i,L}|>\varepsilon \sqrt{n}}}.
\end{align*}
By Chebyshev's inequality and using that $\vr{\overline{X}_{i,L}}\sim \gamma_d \cdot n/2^L$ from Lemma~\ref{lem:variance} we get
\begin{align*}
\Psir{|\overline{X}_{i,L}|>\varepsilon \sqrt{n}} \leq \frac{1}{\varepsilon^2 2^L}.
\end{align*}
Using Lemma~\ref{lem:fourth} we now get
\begin{align*}
\sum_{i=1}^{2^L} \frac{1}{n}\cdot \E{\overline{X}_{i,L}^2{\text{\Large $\mathfrak 1$}}(|\overline{X}_{i,L}|>\varepsilon \sqrt{n})} \lesssim \sum_{i=1}^{2^L} \frac{1}{n} \cdot \frac{n}{2^L} \frac{1}{\varepsilon 2^{L/2}} = \frac{1}{\varepsilon 2^{L/2}} \to 0,
\end{align*}
since $L=\log n/4$. Therefore, the second condition of Lindeberg-Feller Theorem~\ref{thm:lind} is satisfied and this finishes the proof.
\end{proof}
\section{Rough estimates in $d=4$ and $d=3$}
\label{sec-five}
\begin{proof}[\bf Proof of Corollary~\ref{lem:d4}]
In order to use Lawler's Theorem~\ref{thm:lawler}, we
introduce a random walk $\widetilde S$ starting at the origin
and independent from $S$,
with distribution denoted $\widetilde {\mathbb{P}}$. Then, as noticed already by Jain and Orey~\cite[Section~2]{JainOrey}, the capacity
of the range reads (with the convention $\mathbb{R}R_{-1}=\varnothing$)
\be\label{new-1}
\cc{\mathbb{R}R_n}=\sum_{k=0}^n {\text{\Large $\mathfrak 1$}}(S_k\notin\mathbb{R}R_{k-1})\times
\widetilde {\mathbb{P}}_{S_k}\big((S_k+\widetilde{\mathbb{R}R}_\infty)
\cap \mathbb{R}R_n=\varnothing\big),
\ee
where~$\widetilde{\mathbb{R}R}_\infty= \widetilde{\mathbb{R}R}[1,\infty)$.
Thus, for $k$ fixed we can consider three independent walks.
The first is $S^1:[0,k]\to\mathbb{Z}^d$ with $S^1_i=S_k-S_{k-i}$,
the second is $S^2:[0,n-k]\to\mathbb{Z}^d$ with $S^2_i:=S_{k+i}-S_k$, and
the third is $S^3\equiv \widetilde S$. With this notation,
equality \eqref{new-1} reads
$$
\cc{\mathbb{R}R_n}=\sum_{k=0}^n {\text{\Large $\mathfrak 1$}} (0\notin \mathbb{R}R^1[1,k] )\times
\widetilde {\mathbb{P}}\big(\mathbb{R}R^3[1,\infty)\cap(\mathbb{R}R^1[0,k]\cup
\mathbb{R}R^2[0,n-k])=\varnothing\big).
$$
Then, taking expectation with respect to $S^1$, $S^2$
and $S^3$, we get
\be\label{new-3}
\E{\cc{\mathbb{R}R_n}}=\sum_{k=0}^n
\Psir{0\not\in \mathbb{R}R^1[1,k],\ \mathbb{R}R^3[1,\infty)\cap
(\mathbb{R}R^1[0,k]\cup\mathbb{R}R^2[0,n-k])=\varnothing}.
\ee
Now, fix $\varepsilon \in(0,1/2)$ and define $\varepsilon_n:=\varepsilon n/\log n$. We
divide the above sum into two parts: the terms with $k$ smaller than $\varepsilon_n$ or larger than $n-\varepsilon_n$, and the terms with $k$ in between. The terms in the first part can be bounded simply by one,
and we obtain in this way the following upper bound:
$$
\E{\cc{\mathbb{R}R_n}}\ \le\ 2\varepsilon_n
+ n \Psir{0\not\in \mathbb{R}R^1[1,\varepsilon_n],\ \mathbb{R}R^3[1,\varepsilon_n]\cap
(\mathbb{R}R^1[0,\varepsilon_n]\cup\mathbb{R}R^2[0,\varepsilon_n])=\varnothing}.
$$
Since this holds for any $\varepsilon>0$, and $\log \varepsilon_n \sim \log n$, we conclude using \eqref{lawler-key} that
\be\label{new-5}
\limsup_{n\to\infty}\ \frac{\log n}{n} \times \E{\cc{\mathbb{R}R_n}}\ \le\
\frac{\pi^2}{8}.
\ee
For the lower bound, we first observe that \eqref{new-3} gives
$$\E{\cc{\mathbb{R}R_n}}\ \ge \ n \,
\Psir{0\not\in \mathbb{R}R^1[1,n],\ \mathbb{R}R^3[1,\infty)\cap
(\mathbb{R}R^1[0,n]\cup\mathbb{R}R^2[0,n])=\varnothing},
$$
and we conclude the proof using \eqref{lawler-key2}.
\end{proof}
\begin{proof}[\bf Proof of Proposition~\ref{lem:d3}]
We recall that $L_n(x)$ is the local time at $x$, i.e.,
\[
L_n(x) = \sum_{i=0}^{n-1}{\text{\Large $\mathfrak 1$}}(S_i=x).
\]
The lower bound is obtained using the representation \eqref{variation-capa}, where we choose $\nu(x)=L_n(x)/n$. This gives
\be\label{new-6}
\cc{\mathbb{R}R_n}\ge \frac{n}{\frac{1}{n}\sum_{x,y\in \mathbb{Z}^d}
G(x,y)L_n(x)L_n(y)},
\ee
and using Jensen's inequality, we deduce
\be\label{new-7}
\E{\cc{\mathbb{R}R_n}}\ge \frac{n}{\frac{1}{n}\sum_{x,y\in \mathbb{Z}^d}
G(x,y)\E{L_n(x)L_n(y)}}.
\ee
Note that
\be\label{new-8}
\sum_{x,y\in \mathbb{Z}^d}G(x,y)\E{L_n(x)L_n(y)}=
\sum_{0\le k\leq n} \sum_{0\le k'\leq n} \E{G(S_k,S_{k'})}.
\ee
We now observe that, by \eqref{eq:wellknownbound} and the local CLT, in dimension three $\E{G(0,S_m)}\lesssim \E{\frac{1}{1+\|S_m\|}}\lesssim \frac{1}{1+\sqrt{m}}$ for all $m\geq 0$, and therefore
\begin{align*}
\sum_{0\le k\leq n} \sum_{0\le k'\leq n} \E{G(S_k,S_{k'})} = \sum_{0\le k\leq n} \sum_{0\le k'\leq n} \E{G(0,S_{|k'-k|})}\lesssim \sum_{0\le k\leq n} \sum_{0\le k'\leq n} \frac{1}{1+\sqrt{|k-k'|}} \lesssim n\sqrt{n},
\end{align*}
and this gives the desired lower bound.
For the upper bound one can use that in dimension $3$,
$$\cc{A} \ \lesssim \ \textrm{rad}(A),$$
where $\textrm{rad}(A)=\sup_{x\in A} \| x \|$ (see \cite[Proposition~2.2.1(a) and (2.16)]{Lawlerinter}).
Therefore Doob's inequality gives
$$\E{\cc{\mathbb{R}R_n}} \ \lesssim\ \E{\sup_{k\le n}\ \|S_k\|}\ \lesssim \ \sqrt n$$
and this completes the proof.
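
Here the second inequality is the standard consequence of Doob's $L^{2}$ maximal inequality applied to the nonnegative submartingale $(\|S_k\|)_{k\le n}$ (a sketch, valid since the increments of the walk are centred and square integrable):
\[
\E{\sup_{k\le n}\ \|S_k\|}\ \le\ \Big(\E{\sup_{k\le n}\ \|S_k\|^2}\Big)^{1/2}\ \le\ 2\,\big(\E{\|S_n\|^2}\big)^{1/2}\ \lesssim\ \sqrt n .
\]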
\end{proof}
\section{Open Questions}\label{sec-six}
We focus on open questions
concerning the typical behaviour
of the capacity of the range.
Our main inequality \reff{main-lower} is reminiscent of
the equality for the range
\be\label{main-range}
|\mathbb{R}R[0,2n]|=|\mathbb{R}R[0,n]|+|\mathbb{R}R[n,2n]|-|\mathbb{R}R[0,n]\cap \mathbb{R}R[n,2n]|.
\ee
However, the {\it intersection term} $|\mathbb{R}R[0,n]\cap \mathbb{R}R[n,2n]|$ has
different asymptotics for $d\ge 3$:
\be\label{conj-0}
\E{|\mathbb{R}R[0,n]\cap \mathbb{R}R[n,2n]|}\asymp f_{d+2}(n).
\ee
This leads us to {\it add two dimensions} when comparing
the volume of the range with respect to the capacity of the range.
It is striking that the volume of the range in $d=1$ is typically
of order $\sqrt n$, as is the capacity
of the range in $d=3$. It is equally striking that the volume
of the range in $d=2$ is typically of order
$n/\log n$, like the capacity of the range in $d=4$.
Thus, based on these analogies,
we conjecture that the variance in dimension five behaves as follows.
\be\label{conj-2}
\vr{\cc{\mathbb{R}R_n}}\asymp n\log n.
\ee
Note that an upper bound of a similar nature
to \reff{main-lower} is lacking, and that \reff{key-lawler}
is of a different order of magnitude. Indeed,
\[
\E{\cc{\mathbb{R}R[0,n]\cap \mathbb{R}R[n,2n]}}\, \le\, \E{|\mathbb{R}R[0,n]\cap \mathbb{R}R[n,2n]|}\, \lesssim\,
f_{d+2}(n).
\]
Another question would be to show a concentration result in dimension 4, i.e.,
\be\label{conj-1}
\frac{\cc{\mathbb{R}R_n}}{\E{\cc{\mathbb{R}R_n}}}\quad \stackrel{\text{(P)}}{\longrightarrow}
\quad 1.
\ee
We do not expect \reff{conj-1} to hold in dimension three, but rather
that the limit would be random.
\end{document}
\begin{document}
\title{Non-stabilizer Clifford codes with qupits}
\author{
HAGIWARA Manabu
\thanks{
Institute of Industrial Science, University of Tokyo,
4-6-1 Komaba, Meguro-ku Tokyo, Japan
E-mail: {\tt \{manau, imai\}@\allowbreak
\{imailab.\allowbreak
iis, iis\}.\allowbreak
u-tokyo.\allowbreak
ac.\allowbreak
jp}
}\\
\and
Hideki IMAI $^{*}$
}
\date{}
\maketitle
\abstract{
We present a method to construct a non-stabilizer Clifford code which encodes a single qupit, i.e.\ a state described as a vector in a $p$-dimensional Hilbert space, into a pair consisting of a single qupit and a single qubit, for any odd prime $p$.
Thus we obtain infinitely many non-stabilizer Clifford codes.
}
\section{Introduction}
There are well-known classes of quantum error correcting codes, such as CSS codes, stabilizer codes, and Clifford codes.
Stabilizer codes can be seen as a generalization of a class of CSS codes.
In the same way, Clifford codes can be understood as a generalization of stabilizer codes.
To show the existence of a true Clifford code which is better than any stabilizer code is a well-known open problem in the theory of Clifford codes.
One of the main difficulties in solving this problem is that only about 110 examples of codes which are Clifford but not stabilizer codes are known.
In this abstract, we obtain infinitely many examples of Clifford codes which are not stabilizer codes.
We expect our examples to be useful in the study of Clifford codes.
A Clifford code is constructed from a quadruple of parameters $(G, \rho, N, \chi)$: a finite group $G$, an irreducible, faithful unitary representation $\rho$ of $G$ of degree $(G: Z(G))^{1/2}$, a normal subgroup $N$, and an irreducible character $\chi$ of $N$.
Let $\sigma$ be a standard cyclic permutation matrix with degree $p$.
Let $\lambda$ be a primitive $p$-th root of $1$ and let $i$ be a primitive $4$-th root of $1$, that is, $i^{2}=-1$.
Denote the diagonal matrix $\mathrm{diag}(1, \lambda, \lambda^{2}, \dots , \lambda^{p-1} )$ by $\tau$.
Define the $2p \times 2p$ matrices $A:=\mathrm{diag}(\sigma, \sigma^{-1})$, $B:=\mathrm{diag}(i \tau, i^{-1} \tau^{-1})$, and $C:= \left(
\begin{array}{cc}
& I_{p} \\
-I_{p} & \\
\end{array}
\right) $ where $I_{p}$ is the identity matrix.
Let $G$ be the group generated by $A$, $B$, and $C$, and let $\rho$ be its natural matrix representation, i.e.\ $\rho(g)=g$.
We show in \S\ref{orderG} that the order of $G$ is $2^3 p^3$ and that the order of its center $Z(G)$ is $2p$.
Thus we have $\deg \rho = (G : Z(G))^{1/2}$.
Let $N$ be the subgroup generated by $A$, $B^{4}$, and $C$. In \S\ref{orderN}, $N$ is shown to be normal, and a $p$-dimensional $N$-invariant space $V$ is exhibited.
The space $V$ thus carries a character $\chi$ of $N$,
which is shown to be irreducible.
Hence $V$ is a Clifford code with parameters $(G, \rho, N, \chi)$.
Moreover, we show that this code is not a stabilizer code in \S\ref{nonstab}.
\section{Notations, Definitions, and Preliminaries}
Let $G$ be a finite group having an irreducible, faithful unitary representation $\rho$ of degree $(G: Z(G))^{1/2}$ where $Z(G)$ is the center of $G$, and let $\phi$ be a character with respect to $\rho$.
Let $N$ be a normal subgroup of $G$, and let $\chi$ be an irreducible character of $N$ such that $( \chi , \phi_{N} ) \neq 0$, where $(,)$ is the inner product on characters.
A \textbf{Clifford code} with parameters $(G, \rho, N , \chi)$ is a representation with respect to $\chi$.
If the normal subgroup $N$ is abelian, then the Clifford code is called a \textbf{stabilizer code}.
If $N$ is not abelian, then the representation might still be a stabilizer code, but with respect to another normal abelian subgroup of $G$.
In \cite{KL2}, A. Klappenecker and M. R\"{o}tteler showed the following.
\begin{thm}[\cite{KL2} Cor. 6]\label{thmKL}
Assume $N$ contains $Z(G)$.
A Clifford code $V$ with parameters $(G, \rho, N, \chi)$ is a stabilizer code if and only if $\deg \rho = |N|/|H|$ for some abelian subgroup $H$ such that $H$ contains $Z(G)$ and $H$ is a subgroup of the \textbf{quasikernel} of $\chi$ in $G$, i.e.\ every element of $H$ acts on $V$ as a scalar mapping.
\end{thm}
Let $p$ be an odd prime.
Let $\lambda$ be a primitive $p$-th root of $1$ and let $i$ be a primitive $4$-th root of $1$.
Let $\omega $ be a primitive $4p$-th root of $1$.
Let $\sigma$ be a standard cyclic permutation matrix with degree $p$, i.e.
$$ \sigma = \left(
\begin{array}{ccccc}
0 & & & & 1 \\
1 & 0 & & & \\
& 1 & \ddots & & \\
& & 1 & 0 & \\
0 & & & 1 & 0 \\
\end{array}
\right).
$$
Let $\tau$ be a matrix with degree $p$ such that
$$ \tau = \left(
\begin{array}{ccccc}
1 & & & & \\
& \lambda & & & \\
& & \lambda^{2} & & \\
& & & \ddots & \\
& & & & \lambda^{p-1} \\
\end{array}
\right).
$$
Let $I_{p}$ be the identity matrix with degree $p$.
Define three matrices $A$, $B$, and $C$ of degree $2p$ as follows:
$$A := \left(
\begin{array}{cc}
\sigma & \\
& \sigma^{-1} \\
\end{array}
\right), B := \left(
\begin{array}{cc}
i \tau & \\
& i^{-1} \tau^{-1} \\
\end{array}
\right),
C := \left(
\begin{array}{cc}
& I_{p} \\
-I_{p} & \\
\end{array}
\right).
$$
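
As a quick numerical sanity check on these definitions (an illustration only, not part of the construction), one may verify for a concrete odd prime that $C^{2}=-I_{2p}$ and that $ABA^{-1}B^{-1}$ is a scalar matrix whose scalar is a primitive $p$-th root of unity; whether that scalar equals $\lambda$ or $\lambda^{-1}$ depends only on the orientation convention chosen for $\sigma$. The following NumPy sketch performs this check.
\begin{verbatim}
import numpy as np

p = 5                                   # any odd prime (illustrative choice)
lam = np.exp(2j * np.pi / p)            # a primitive p-th root of unity

# standard cyclic permutation matrix: sigma maps e_j to e_{j+1 (mod p)}
sigma = np.roll(np.eye(p), 1, axis=0)
tau = np.diag(lam ** np.arange(p))      # diag(1, lam, ..., lam^{p-1})
Ip, Z = np.eye(p), np.zeros((p, p))

A = np.block([[sigma, Z], [Z, np.linalg.inv(sigma)]])
B = np.block([[1j * tau, Z], [Z, (1 / 1j) * np.linalg.inv(tau)]])
C = np.block([[Z, Ip], [-Ip, Z]])

assert np.allclose(C @ C, -np.eye(2 * p))     # C^2 = -I_{2p}

# A B A^{-1} B^{-1} is a scalar matrix whose scalar is a primitive p-th
# root of unity (the conjugation relation used in the next section).
comm = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
s = comm[0, 0]
assert np.allclose(comm, s * np.eye(2 * p))
assert np.isclose(abs(s ** p - 1), 0) and not np.isclose(abs(s - 1), 0)
print("relations verified for p =", p)
\end{verbatim}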
\section{The order and the center of $G$}\label{orderG}
By the definition of $A$ and $B$, we have the following.
\begin{lem}\label{conjAB}
$$ABA^{-1} = \lambda B$$
\end{lem}
\begin{lem}\label{subG}
$\langle A , B \rangle = \{\lambda^{a}
\left(
\begin{array}{cc}
i^{b} \tau^{b} \sigma^{c} & \\
& i^{-b } \tau^{-b} \sigma^{-c} \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}$.
In particular, we have $\# \langle A , B \rangle = 2^{2} p^{3}$
\end{lem}
\begin{proof}
By lemma \ref{conjAB}, we have $\langle A , B \rangle = \langle A , B , \lambda I_{2p}\rangle$.
Since $A$ and $B$ are block-diagonal matrices with two blocks of size $p$, we have $\langle A , B \rangle \simeq \langle \sigma , i \tau , \lambda I_{p}\rangle$, where $\simeq$ means isomorphic as groups.
$\langle \sigma , i \tau , \lambda I_{p}\rangle$ has a normal subgroup $\langle i \tau, \lambda I_{p}\rangle$.
Furthermore, $\langle \sigma , i \tau , \lambda I_{p}\rangle$ is a semidirect product of $\langle \sigma \rangle$ and $\langle i \tau, \lambda I_{p}\rangle$.
The structure of $\langle i \tau, \lambda I_{p}\rangle$ is easily obtained as follows,
$$\langle i \tau , \lambda I_{p}\rangle = \{ \lambda^{a} (i \tau)^{b} | 0 \le a < p , 0 \le b < 4p\} = \{ \omega^{a} \tau^{b} | 0 \le a < 4p , 0 \le b < p\} .$$
On the other hand, the order of $\sigma$ is $p$.
Hence we have
$$ \# \langle A , B \rangle = 2^{2} p^{3}.$$
Furthermore, we have
$$ \langle A , B \rangle = \{
\lambda^{a}
\left(
\begin{array}{cc}
(i \tau)^{b} \sigma^{c} & \\
& (i \tau)^{-b} \sigma^{-c} \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}
$$
\end{proof}
\begin{prop}\label{stG}
$ G =
\{
\lambda^{a}
\left(
\begin{array}{cc}
(i \tau)^{b} \sigma^{c} & \\
& (i \tau)^{-b} \sigma^{-c} \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}
\cup$$
$$
\{
\lambda^{a}
\left(
\begin{array}{cc}
& (i \tau)^{b} \sigma^{c} \\
-(i \tau)^{-b} \sigma^{-c} & \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}
.
$
In particular, we have $\# G = 2^{3} p^{3}$
\end{prop}
\begin{proof}
Now we have $C^{2} = -I_{2p}$, $CAC^{-1}=A^{-1}$, and $CBC^{-1}=B^{-1}$, so that $\langle A , B \rangle$ is normal in $G$ and $G = \langle A , B \rangle \cup C\,\langle A , B \rangle$.
By lemma \ref{subG}, we can therefore describe the elements of $G$ as follows,
$$ G =
\{
\lambda^{a}
\left(
\begin{array}{cc}
(i \tau)^{b} \sigma^{c} & \\
& (i \tau)^{-b} \sigma^{-c} \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}
\cup$$
$$
\{
\lambda^{a}
\left(
\begin{array}{cc}
& (i \tau)^{b} \sigma^{c} \\
-(i \tau)^{-b} \sigma^{-c} & \\
\end{array}
\right)
| 0 \le a < p , 0 \le b < 4p, 0 \le c < p \}
.
$$
\end{proof}
Let $\rho$ be the matrix representation of $G$, in other words, $\rho(g) = g$ for any $g \in G$.
\begin{cor}\label{irrG}
$\rho$ is irreducible.
\end{cor}
\begin{proof}
Only the scalar matrices $\omega^{2a} I_{2p}$, $0 \le a < 2p$, have nonzero trace, hence
$$
\frac{1}{|G|} \sum_{g \in G} | \mathrm{tr}(g) |^{2}
= \frac{1}{2^{3} p^{3}} \sum_{0 \le a < 2p} |2p\, \omega^{2a}|^{2} = 1.
$$
\end{proof}
Denote the center of $G$ by $Z(G)$.
\begin{cor}
$\# Z(G) = 2p$
\end{cor}
\begin{proof}
Since $\rho$ is irreducible, $Z(G)$ consists of the scalar matrices in $G$.
By proposition \ref{stG}, we have $\# Z(G) = 2p$.
\end{proof}
\begin{cor}
$\deg \rho = ( G : Z(G) )^{1/2}$
\end{cor}
\begin{proof}
We have $\deg \rho = 2p$ and $(G : Z(G) )^{1/2} = (2^{3} p^{3} / 2p)^{1/2} = 2p$.
\end{proof}
\section{Properties of $N$}\label{orderN}
\begin{prop}\label{stN}
$N = \{
\omega^{2 a}
\left(
\begin{array}{cc}
\tau^{b} \sigma^{c} & \\
& \tau^{-b} \sigma^{-c} \\
\end{array}
\right),
\omega^{2 a}
\left(
\begin{array}{cc}
& \tau^{b} \sigma^{c} \\
- \tau^{-b} \sigma^{-c} & \\
\end{array}
\right)
| 0 \le a < 2p , 0 \le b,c < p \}
$.
In particular, we have $\# N = 2^{2} p^{3}$.
\end{prop}
\begin{proof}
We note that $\langle A , B^{4} \rangle = \langle A , B^{4} , \lambda I_{2p} \rangle$.
It is easy to show that
$$\langle A , B^{4} , \lambda I_{2p} \rangle \simeq \langle \sigma , \tau^{4} , \lambda I_{p} \rangle.$$
By an argument similar to that of lemma \ref{subG}, we have
$$\# \langle A , B^{4} \rangle = p^{3}.$$
Furthermore, we have
$$\langle A , B^{4} \rangle = \{
\lambda^{a}
\left(
\begin{array}{cc}
\tau^{b} \sigma^{c} & \\
& \tau^{-b} \sigma^{-c} \\
\end{array}
\right)
| 0 \le a,b,c < p \}.
$$
Hence it follows that
$N = \langle A , B^{4} , C \rangle$
$$ = \{
\omega^{2 a}
\left(
\begin{array}{cc}
\tau^{b} \sigma^{c} & \\
& \tau^{-b} \sigma^{-c} \\
\end{array}
\right),
\omega^{2 a}
\left(
\begin{array}{cc}
& \tau^{b} \sigma^{c} \\
- \tau^{-b} \sigma^{-c} & \\
\end{array}
\right)
| 0 \le a < 2p , 0 \le b,c < p \}.
$$
\end{proof}
\begin{cor}
$N$ is a normal subgroup of $G$ and contains $Z(G)$.
\end{cor}
\begin{proof}
Since $(G:N) = 2$, $N$ is a normal subgroup.
By proposition \ref{stN}, $N$ contains $Z(G)$.
\end{proof}
Denote the standard basis of $\mathbb{C}^{2p}$ by $\{e_{h} \mid 1 \le h \le 2p\}$, and denote by $\mathbf{c}$ the cyclic permutation $(1,2, \dots, p)$, i.e.\ $\mathbf{c}(h) = h+1 \pmod{p}$ for any $1 \le h \le p$. Put
$$ V_{1} := \langle e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p} \rangle_{ 0 \le x < p },$$
$$ V_{2} := \langle e_{\mathbf{c}^{x}(1)} +i e_{\mathbf{c}^{-x}(1)+p} \rangle_{ 0 \le x < p }.$$
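
As a further numerical illustration (again not part of the proof), the $N$-invariance of $V_{1}$ established in the next proposition can be checked by continuing the NumPy sketch given after the definitions of $A$, $B$, and $C$; the 0-based column indices below encode $e_{\mathbf{c}^{x}(1)} - i\, e_{\mathbf{c}^{-x}(1)+p}$.
\begin{verbatim}
# columns of V1 are v_x = e_{c^x(1)} - i * e_{c^{-x}(1)+p}  (0-based indices)
V1 = np.zeros((2 * p, p), dtype=complex)
for x in range(p):
    V1[x % p, x] = 1.0
    V1[(-x) % p + p, x] = -1j

B4 = np.linalg.matrix_power(B, 4)
P = V1 @ np.linalg.pinv(V1)            # orthogonal projector onto span(V1)
for M in (A, B4, C):
    assert np.allclose(P @ (M @ V1), M @ V1)   # M maps V1 into itself
print("V1 is invariant under A, B^4 and C for p =", p)
\end{verbatim}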
\begin{prop}\label{v12_inv}
$V_{1}, V_{2}$ are $N$-invariant.
\end{prop}
\begin{proof}
First, we show that $V_{1}$ is $N$-invariant.
We have that
$$A (e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p}) = e_{\mathbf{c}^{x+1}(1)} -i e_{\mathbf{c}^{-(x+1)}(1)+p},$$
$$B^{4} (e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p}) = \lambda^{4x}(e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p}),$$
$$C (e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p}) = -i (e_{\mathbf{c}^{-x}(1)} -i e_{\mathbf{c}^{x}(1)+p}),$$
for any $0 \le x < p$.
Thus $V_{1}$ is $N$-invariant.
By a similar argument, $V_{2}$ can be proved to be $N$-invariant.
\end{proof}
Denote the character of the action of $N$ on $V_{h}$ by $\chi_{h}$ for $h=1,2$.
\begin{prop}
$\chi_{1}$ and $\chi_{2}$ are irreducible as characters of $N$.
\end{prop}
\begin{proof}
By a calculation similar to that of corollary \ref{irrG}, we have
$$\frac{1}{|N|} \sum_{n \in N}|\mathrm{tr}(n)|^{2} = 2.$$
This implies that $\rho_{|N}$, the restriction of the representation $\rho$ to $N$, is the sum of two irreducible characters.
By proposition \ref{v12_inv}, we know two $N$-invariant subspaces $V_{1}, V_{2}$.
It is easy to verify $V_{1} \cap V_{2} = \{ 0 \}$.
Thus $\chi_{1}$ and $\chi_{2}$ are irreducible.
\end{proof}
Therefore we have the following.
\begin{thm}
$V_{1}$ is a Clifford code with parameters $(G, \rho, N, \chi_{1})$.
\end{thm}
\section{Non-stabilizer property}\label{nonstab}
\begin{prop}
The quasikernel of $\chi$ in $G$ is $Z(G)$.
\end{prop}
\begin{proof}
Let $0 \le a < 2p, 0 \le b, c < p$.
Put $X := \omega^{2 a}
\left(
\begin{array}{cc}
(i \tau)^{b} \sigma^{c} & \\
& (i \tau)^{-b} \sigma^{-c} \\
\end{array}
\right)$.
It is easy to verify that $X$ is not in the quasikernel if $c \neq 0$.
Thus we assume $c = 0$.
Then we have
$$ X (e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p})
$$
$$
= \omega^{2a + 4 (x-1)b}
\left(
\begin{array}{cc}
i^{b} & \\
& i^{-b}\\
\end{array}
\right)
( e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p} )
$$
$$
= \omega^{2a + 4 (x-1)b}
( i^{b} e_{\mathbf{c}^{x}(1)} -i^{-b+1} e_{\mathbf{c}^{-x}(1)+p} )
$$
$$
= \omega^{2a + 4 (x-1)b + bp}
( e_{\mathbf{c}^{x}(1)} -i^{-2b+1} e_{\mathbf{c}^{-x}(1)+p} ).
$$
Thus $X$ lies in the quasikernel only if $b = 0$, in which case $X = \omega^{2a} I_{2p}$ is a scalar matrix.
Put $Y := \omega^{2 a}
\left(
\begin{array}{cc}
& (i \tau)^{b} \\
- (i \tau)^{-b} & \\
\end{array}
\right).$
Then we have
$$
Y (e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p})
$$
$$
=\omega^{2 a}
\left(
\begin{array}{cc}
(i \tau)^{b} & \\
& (i \tau)^{-b} \\
\end{array}
\right)
\left(
\begin{array}{cc}
& I_{p} \\
-I_{p} & \\
\end{array}
\right)
(e_{\mathbf{c}^{x}(1)} -i e_{\mathbf{c}^{-x}(1)+p})
$$
$$
=\omega^{2 a}
\left(
\begin{array}{cc}
(i \tau)^{b} & \\
& (i \tau)^{-b} \\
\end{array}
\right)
(-e_{\mathbf{c}^{x}(1)+p} -i e_{\mathbf{c}^{-x}(1)})
$$
$$
=-i\omega^{2 a}
\left(
\begin{array}{cc}
(i \tau)^{b} & \\
& (i \tau)^{-b} \\
\end{array}
\right)
(e_{\mathbf{c}^{-x}(1)} -ie_{\mathbf{c}^{x}(1)+p})
$$
$$
=-i \omega^{2 a}
(\omega^{4 (-x-1)b} i^{b} e_{\mathbf{c}^{-x}(1)} -i \omega^{-4 (-x-1) b} i^{b} e_{\mathbf{c}^{x}(1)+p}).
$$
Thus $Y$ is not an element of the quasikernel.
Therefore the quasikernel of $\chi$ in $G$ is $Z(G)$.
\end{proof}
\begin{prop}
$V_{1}, V_{2}$ are non-stabilizer Clifford codes.
\end{prop}
\begin{proof}
A subgroup $H$ which contains $Z(G)$ and which is contained in the quasikernel of $\chi_{i}$ in $G$ must be $Z(G)$, for $i=1,2$.
Thus we have
$$ \chi(1)^{2} = p^{2} \neq 2^{2} p^{3} / 2p = 2p^{2} = |N|/|H|.$$
By Theorem \ref{thmKL}, this Clifford code is non-stabilizer.
\end{proof}
\begin{acknowledge}
This work was supported by the project on ``Research and Development on Quantum Cryptography'' of Telecommunications Advancement Organization as part of the programme ``Research and Development on Quantum Communication Technology'' of the Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.
\end{acknowledge}
\end{document}
\begin{document}
\title{Sudden death of particle-pair Bloch oscillation and unidirectional
propagation in a one-dimensional driven optical lattice}
\author{S. Lin${}^1$, X. Z. Zhang${}^2$ and Z. Song${}^1$}
\email{songtc@nankai.edu.cn}
\affiliation{${\ }^1$School of Physics, Nankai University, Tianjin 300071, China \\
${\ }^2$College of Physics and Materials Science, Tianjin Normal University, Tianjin 300387, China}
\begin{abstract}
We study the dynamics of bound pairs in the extended Hubbard model driven by
a linear external field. It is shown that the two interacting bosons or
singlet fermions with nonzero on-site and nearest-neighbor interaction
strengths can always form bound pairs in the absence of the external field.
There are two bound-pair bands, one of which may cover an incomplete set of wave
vectors when it overlaps with the scattering band; we refer to this as an
imperfect band. In the presence of the external field, the dynamics of the
bound pair in the perfect band exhibits distinct Bloch-Zener oscillation
(BZO), while in the imperfect band the oscillation presents a sudden death.
The pair becomes uncorrelated after the sudden death and the BZO never comes
back. Such dynamical behaviors are robust even for the weak coupling regime
and thus can be used to characterize the phase diagram of the bound states.
\end{abstract}
\pacs{03.65.Ge, 05.30.Jp, 03.65.Nk, 03.67.-a}
\maketitle
\section{Introduction}
The dynamics of particle pairs in lattice systems has received a lot of
interest, due to rapid experimental developments. Ultra-cold atoms have
turned out to be an ideal playground for testing few-particle fundamental
physics since optical lattices provide clean realizations of a range of
many-body Hamiltonians. It stimulates many experimental \cite{Winkler,Folling,Gustavsson}\ and theoretical investigations in strongly correlated systems, which mainly focus on the
bound-pair formation \cite{Mahajan,Valiente2,MVExtendB,MVTB,Javanainen,Wang}, detection \cite{Kuklov}, dynamics \cite{Petrosyan,Zollner,ChenS,Valiente1,Wang,JLNJP,Corrielli},
collision between single particle and pair \cite{JLBP,JLTrans}, and bound-pair condensate \cite{Rosch}.
The essential physics of the proposed bound
pair (BP) is that the periodic potential suppresses single-particle
tunneling across the barrier, a process that would otherwise lead to a decay of the
pair. This situation does not change in the general case when a weak linear
potential is applied. Then a BP acts as a single particle, sharing the
single-particle dynamical features, such as Bloch oscillation (BO),
Bloch-Zener oscillation (BZO) \cite{MUGA, Poladian, Greenberg, Kulishov,
Ruschhaupt, Longhi2012}.
The aim of this paper is to show that the nearest-neighbor (NN) interaction
can not only lead to distinct BO and BZO, but also induce the sudden death
of the oscillations within a Bloch period.\ We study the dynamics of BPs
in the extended Hubbard model driven by a linear external field. It is
shown that two interacting bosons or singlet fermions with nonzero on-site
and nearest-neighbor interaction strengths can always form BPs in
the absence of the external field. There are two kinds of BPs, which
form two bound bands. We find that the nearest-neighbor
interaction can lead to an overlap between the single-particle scattering band
and a bound band, which can spoil the completeness of the bound
band; we refer to such a band as an imperfect band. In the presence of the external field, the
dynamics of the BP in the perfect band exhibits perfect BO and BZO,
while in the imperfect band\ the oscillation presents a sudden death. The
pair becomes uncorrelated after the sudden death of the oscillation and the
correlation never comes back. This behavior is of interest in both
fundamental and application aspects. It can be utilized to control the
unidirectional propagation of the BP wavepacket by imposing a single
pulse, which is of great interest for applications in cold atom physics.
Numerical simulations show that this scheme achieves very high efficiency
and a wide spectral band. Such a unidirectional filter for cold-atom pairs may
be realized experimentally in a shaken optical lattice.
This paper is organized as follows. In Section \ref{sec_model}, we present
the model Hamiltonian, and the two-particle band structures. In Section \ref
{sec_BP dynamics}, we investigate the BP dynamics in the
presence of linear field. Section \ref{sec_Unidirectional propagation} is
devoted to the application of our finding, the realization of unidirectional
propagation induced by a pulsed field. Finally, we give a summary and
discussion in Section \ref{sec_Summary}.
\section{Model Hamiltonian and band structure}
\label{sec_model}
\begin{figure}
\caption{(Color online) (a) Phase diagram for the BP states. The
solid black (blue) line is the hyperbolic function in Eq. (\protect\ref
{boundary}
\label{fig1}
\end{figure}
We consider an extended Hubbard model describing interacting particles in
the lowest Bloch band of a one dimensional lattice driven by an external
force, which can be employed to describe ultracold atoms or molecules with
magnetic or electric dipole-dipole interactions in optical lattices. We
focus on the dynamics of the BP states. The pair can be two
identical bosons, or equivalently, spin-$1/2$ fermions in singlet state. For
simplicity we will only consider bosonic systems, but it is straightforward
to extend the conclusion to singlet fermionic pair. We consider the
Hamiltonian
\begin{equation}
H=H_{0}+F\sum_{j=1}jn_{j}, \label{H}
\end{equation}
where the second term describes the linear external field while $H_{0}$\ is
the one-dimensional Hamiltonian of the extended Bose-Hubbard model on an $N$-site
lattice
\begin{equation}
H_{0}=-\kappa \sum_{j=1}\left( a_{j}^{\dagger }a_{j+1}+\textrm{H.c.}\right) +
\frac{U}{2}\sum_{j=1}n_{j}\left( n_{j}-1\right) +V\sum_{j=1}n_{j}n_{j+1}.
\label{H_0}
\end{equation}
where $a_{i}^{\dag }$ is the creation operator of the boson at the $i$th
site, and the tunneling strength, on-site and NN interactions between bosons are
denoted by $\kappa $, $U$ and $V$, respectively.
Let us start by analyzing in detail the two-boson problem in this model. As
in Refs. \cite{JLNJP,JLBP,JLTrans}, a state in the two-particle Hilbert
space, can be expanded in the basis set $\left\{ \left\vert \phi
_{r}^{k}\right\rangle,r=0,1,2,...\right\} $, with
\begin{eqnarray}
\left\vert \phi _{0}^{k}\right\rangle &=&\frac{1}{\sqrt{2N}}\sum_{j} e ^{
i kj}\left( a_{j}^{\dag }\right) ^{2}\left\vert \textrm{vac}\right\rangle ,
\\
\left\vert \phi _{r}^{k}\right\rangle &=&\frac{1}{\sqrt{N}}e ^{ i
kr/2}\sum_{j} e ^{ i kj}a_{j}^{\dag }a_{j+r}^{\dag }\left\vert \textrm{
vac}\right\rangle ,
\end{eqnarray}
where $\left\vert \textrm{vac}\right\rangle $\ is the vacuum state for the
boson operator $a_{i}$. Here $k$ denotes the momentum, and $r$ is the
distance between the two particles. Due to the translational symmetry of the
present system, we have the following equivalent Hamiltonian
\begin{eqnarray}
H_{\mathrm{eq}}^{k}=-J_{k}(\sqrt{2}\left\vert \phi
_{0}^{k}\right\rangle\left\langle \phi _{1}^{k}\right\vert
+\sum_{j=1}\left\vert \phi_{j}^{k}\right\rangle\left\langle \phi
_{j+1}^{k}\right\vert +\textrm{H.c.)}+U\left\vert \phi _{0}^{k}\right\rangle
\left\langle \phi _{0}^{k}\right\vert
+V\left\vert \phi _{1}^{k}\right\rangle \left\langle \phi _{1}^{k}\right\vert
\label{H_eq}
\end{eqnarray}
in each invariant subspace indexed by $k$. In its present form, $H_{\mathrm{
eq}}^{k}$ is formally analogous to a tight-binding model describing
single-particle dynamics on a semi-infinite chain with the $k$-dependent
hopping integral $J_{k}=2\kappa \cos \left( k/2\right) $ in the thermodynamic
limit $N\rightarrow \infty $. In this paper, we are interested in the
BP states, which correspond to the bound-state solutions of the
single-particle Schr\"{o}dinger equation
\begin{equation}
H_{\mathrm{eq}}^{k}\left\vert \psi _{k}\right\rangle =\epsilon
_{k}\left\vert \psi _{k}\right\rangle . \label{S_eq}
\end{equation}
For a given $J_{k}$, the Hamiltonian $H_{\mathrm{eq}}^{k}$ possesses one or
two bound states, which are denoted as $\left\vert \psi
_{k}^{+}\right\rangle $\ and $\left\vert \psi _{k}^{-}\right\rangle $,
respectively. Here the Bethe-ansatz wavefunctions have the form
\begin{equation}
\left\vert \psi _{k}^{\pm }\right\rangle =C_{0}^{k}\left\vert \phi
_{0}^{k}\right\rangle +\sum_{r=1}\left( \pm 1\right) ^{r}C_{r}^{k}e^{-\beta
r}\left\vert \phi _{r}^{k}\right\rangle ,
\end{equation}
with $\beta >0$. For these two bound states $\left\vert \psi _{k}^{\pm
}\right\rangle $ the Schr\"{o}dinger equation in Eq. (\ref{S_eq}) yields
\begin{equation}
\pm e^{3\beta }+\left( u_{k}+v_{k}\right) e^{2\beta }\pm \left(
u_{k}v_{k}-1\right) e^{\beta }+v_{k}=0,
\end{equation}
where $u_{k}=U/J_{k}$ and $v_{k}=V/J_{k}$\ are respectively the reduced
interaction strengths. The corresponding bound-state energy of $\left\vert
\psi _{k}^{\pm }\right\rangle $ can be expressed as\
\begin{equation}
\epsilon _{k}^{\pm }=\pm J_{k}\cosh \beta . \label{BS_energy}
\end{equation}
The transition from bound to scattering states occurs at $\beta =0$. Then
the boundary, at which the bound state $\left\vert \psi _{k}^{\pm
}\right\rangle $ disappears, is described by the hyperbolic function
\begin{equation}
u_{k}=-\frac{2v_{k}}{1\pm v_{k}}, \label{boundary}
\end{equation}
which is plotted in Fig. \ref{fig1}. It shows that the boundary lines divide
the $u_{k}-v_{k}$ plane into six regions, from $\mathrm{I}$ to $\mathrm{VI}$
. The type of the bound states in each region can be foreseen from the
extreme situations where $\left\vert u_{k}\right\vert ,$\ $\left\vert
v_{k}\right\vert \gg 1$.\ Under this condition, it is easy to check that
there are two bound states\ with the eigen energies
\begin{equation}
\epsilon _{k}^{\pm }\approx U\textrm{ and }V, \label{UV}
\end{equation}
\ in each invariant $k$-subspace. Comparing Eqs. (\ref{UV}) and (\ref
{BS_energy}), we get the conclusion that there are two bound states in the
regions $\mathrm{I}$,\textrm{\ }$\mathrm{II}$,\textrm{\ }$\mathrm{III}$, and
$\mathrm{IV}$: two $\left\vert \psi _{k}^{-}\right\rangle $ in $\mathrm{I}$,
one $\left\vert \psi _{k}^{+}\right\rangle $ and $\left\vert \psi
_{k}^{-}\right\rangle $\ in $\mathrm{II}$ and $\mathrm{IV}$, two $\left\vert
\psi _{k}^{+}\right\rangle $ in $\mathrm{III}$, while there is a bound state
$\left\vert \psi _{k}^{-}\right\rangle $ in $\mathrm{V}$ and $\left\vert
\psi _{k}^{+}\right\rangle $\ in $\mathrm{VI}$, respectively.
On the other hand, we know that the scattering band of $H_{\mathrm{eq}}^{k}$
\ ranges from $-4\kappa \cos \left( k/2\right) $ to $4\kappa \cos \left(
k/2\right) $, which reaches the widest bandwidth\ at $k=0$. Therefore, when
we take $J_{0}=\pm 2\kappa $, this diagram can characterize the bound-state
number distribution $N_{\mathrm{b}}(U,V)$: we have $N_{\mathrm{b}}=2N$ in
the regions $\mathrm{I}$,\textrm{\ }$\mathrm{II}$,\textrm{\ }$\mathrm{III}$,
and $\mathrm{IV}$, where all the $2N$ bound states indexed by $k$\
constitute a complete BP band.\ In contrast, we have $N_{\mathrm{b}
}<2N$ in $\mathrm{V}$ and $\mathrm{VI}$, where the $N_{\mathrm{b}}$ bound
states are indexed by the surviving values of $k$, which do not cover the whole range of
momenta in the Brillouin zone, from $-\pi $ to $\pi $.\ We refer to this
property as incomplete BP band.\ Therefore, the phase diagram also
indicates the boundary $U=-2V/\left( 1\pm V\right) $, for the transition
from the complete to incomplete BP bands, which agrees with the
results reported in Refs. \cite{Valiente2009, Dias, Khomeriki}. For given $U$
and $V$, the complete spectrum of $H_{0}$ can be computed by diagonalizing
the Hamiltonian $H_{\mathrm{eq}}^{k}$ numerically. In Fig. \ref{fig2} and
\ref{fig3}, we plot the band structures for several typical cases,
which are marked in Fig. 1. We do not cover all the typical points in every
region due to the following fact. The spectrum of $H_{0}$ obeys the relation
\begin{equation}
E_{k}\left( U,V\right) =-E_{k}\left( -U,-V\right) ,
\end{equation}
in view of
\begin{equation}
H_{0}(U,V)=-RH_{0}(-U,-V)R^{-1},
\end{equation}
where the transformation $R$ is defined as $Ra_{j}R^{-1}=\left( -1\right)
^{j}a_{j}$. It is a rigorous result over the whole range of the
parameters. As expected, we observe that the
two-particle spectrum comprises three Bloch bands: two BP
bands formed by the two kinds of BP states, and one scattering band
formed by uncorrelated states.\ We can see from Fig. \ref{fig2} that the
two bound bands are separated from the scattering band whenever the system
is in the regions $\mathrm{I}$,\textrm{\ }$
\mathrm{II}$,\textrm{\ }$\mathrm{III}$, and $\mathrm{IV}$ (points $a$, $b$, $c$ and $d$). In contrast,
whenever the points ($e$, $f$, $g$ and $h$) lie in the regions\ $\mathrm{V}$
and $\mathrm{VI}$, the pseudo gap between BP and scattering bands
around $k=0$ vanishes, resulting in the formation of an incomplete band. What is
quite unexpected and remarkable is that, if we apply a linear field, the
dynamics of the BP exhibits some peculiar behaviors, which will be
investigated in the following section.
\section{BP dynamics}
\label{sec_BP dynamics}
Before starting the investigation of the BP dynamics, we would like to study
the relation between the center path of a wavepacket driven by the linear
field and\ dispersion of the Hamiltonian $H_{0}$. Consider a general
one-dimensional tight-binding system, which has the dispersion relation $
E(k) $ being an arbitrary smooth periodic function $E(2\pi +k)=E(k)$. The
dynamics of wavepacket can be simply understood in terms of the
semiclassical picture: A wavepacket centered around $k_{\mathrm{c}}$ can be
regarded as a classical particle with momentum $k_{\mathrm{c}}$ \cite{Bloch,
Ashcroft, Kittel}. When the wavepacket is subjected to a homogeneous force
of strength $F$, the acceleration theorem $\partial k_{\mathrm{c}}\left(
t\right) /\partial t=F$ tells us
\begin{eqnarray}
k_{\mathrm{c}}\left( t\right) &=&k_{\mathrm{c}}\left( 0\right)
+\int_{0}^{t}Fdt \nonumber \\
&=&k_{\mathrm{c}}\left( 0\right) +Ft \label{acceleration theorem}
\end{eqnarray}
for constant field. The central position of the wavepacket is
\begin{eqnarray}
x_{\mathrm{c}}\left( t\right) &=&x_{\mathrm{c}}\left( 0\right)
+\int_{0}^{t}\upsilon _{\mathrm{g}}dt \nonumber \\
&=&x_{\mathrm{c}}\left( 0\right) +\frac{1}{F}\left[ E_{k}\left( k_{\mathrm{c}
}\left( 0\right) +Ft\right) -E_{k}\left( k_{\mathrm{c}}\left( 0\right)
\right) \right] \label{CP}
\end{eqnarray}
where $\upsilon _{\mathrm{g}}=\partial E_{k}/\partial k$ is the group
velocity. Notice that the trajectory of a wavepacket is essentially
identical with the dispersion relation for the field-free system\ under the
semi-classical approximation. This observation provides a fairly clear
picture for the dynamics of a wavepacket in the presence of the linear
field. As a simple example we consider the single-particle case for
illustration. The single-particle BO with Bloch frequency $\omega _{\mathrm{B
}}=F$ for $H$ can be simply understood from its cosinusoidal (rather than
quadratic) dispersion relation $E(k)=-2\kappa \cos k$ in momentum $k$.
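
As a simple illustration of Eqs. (\ref{acceleration theorem}) and (\ref{CP}), the semiclassical trajectory for this cosinusoidal dispersion can be evaluated directly. The short sketch below (with purely illustrative parameter values) reproduces the familiar Bloch-oscillation trajectory $x_{\mathrm{c}}\left( t\right) -x_{\mathrm{c}}\left( 0\right) =\left( 2\kappa /F\right) \left[ \cos k_{\mathrm{c}}\left( 0\right) -\cos \left( k_{\mathrm{c}}\left( 0\right) +Ft\right) \right] $, which is periodic with the Bloch period $2\pi /F$.
\begin{verbatim}
import numpy as np

# Semiclassical trajectory of Eq. (CP) for E(k) = -2*kappa*cos(k).
# All parameter values are illustrative only.
kappa, F, k0, x0 = 1.0, 0.05, 0.0, 0.0

def E(k):
    return -2.0 * kappa * np.cos(k)

def x_c(t):
    return x0 + (E(k0 + F * t) - E(k0)) / F   # Eq. (CP)

T_B = 2.0 * np.pi / F                          # Bloch period
for t in np.linspace(0.0, 2.0 * T_B, 9):
    print(f"t/T_B = {t / T_B:4.2f}   x_c = {x_c(t):8.3f}")
\end{verbatim}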
Now, we switch gears to the two-particle case. We note that the
bandwidth of the BP band is comparable to that of the scattering band,
which leads to the conclusion that a BP wavepacket has a distinct
group velocity. This indicates that the dynamics of the BP state is
similar to that of a single particle. The BO-like behavior of the
BP wavepacket emerges in the presence of the linear external field.
In order to demonstrate these points, we consider an example for the
Hamiltonian $H$ in Eq. (\ref{H}) with $U$, $V\gg \left\vert U-V\right\vert $
, $\kappa $. As studied in Ref. \cite{JLNJP}, in the absence of the external
field, the BP lies in the quasi-invariant subspace spanned by the
basis $\{\underline{\left\vert l\right\rangle }\}$, which is defined as
\begin{equation}
\underline{\left\vert l\right\rangle }\equiv \left\{
\begin{array}{c}
\left( a_{l/2}^{\dag }\right) ^{2}/\sqrt{2}\left\vert \textrm{vac}
\right\rangle \textrm{, }(\textrm{even } l) \\
a_{\left( l-1\right) /2}^{\dag }a_{\left( l+1\right) /2}^{\dag }\left\vert
\textrm{vac}\right\rangle \textrm{,\ }(\textrm{odd } l)
\end{array}
\right. . \label{BP_basis}
\end{equation}
In the presence of the external field, the bound-pair can be described by
the following effective Hamiltonian
\begin{equation}
H_{\mathrm{eff}}=-\sqrt{2}\kappa \sum\limits_{l}\left( \underline{\left\vert
l\right\rangle }\underline{\left\langle l+1\right\vert }+\textrm{H.c.}\right)
+\sum\limits_{l}\left[ Fl+\frac{\delta }{2}\left( -1\right) ^{l}\right]
\underline{\left\vert l\right\rangle }\underline{\left\langle l\right\vert },
\label{H_eff}
\end{equation}
where we neglect a constant term $\left( U+V\right) /2\sum_{l}\underline{
\left\vert l\right\rangle }\underline{\left\langle l\right\vert }$\ and$\ $
take $\delta =U-V$ to present the unbalanced on-site and nearest neighbor
interactions. $H_{\mathrm{eff}}$\ is nothing but the tight-binding
Hamiltonian to describe a single particle subjected to a staggered linear
potential, which has been well studied in previous literature \cite{Breid}.
Unlike the fractional BO \cite{Buchleitner, Kolovsky, Kudo2009, Kudo2011} in
the case of $V=0$, $H_{\mathrm{eff}}$ can support wide bandwidth \cite{JLNJP}
, which is responsible for the large amplitude oscillations. In the
situation with $\delta =0$, it turns out that the particle undergoes BO with
frequency $\omega _{\mathrm{B}}=F$. In the case of a nonzero unbalance $\delta
\neq 0$, it has been reported that the dynamics of the wavepacket shows a
BZO,\ a coherent superposition of Bloch oscillations and Zener tunnelling
between the sub-bands. The Zener tunnelling takes place almost
exclusively when the momentum of the wavepacket reaches $ \pm \pi$. We thus
conclude that the BPs behave as a composite particle,
exhibiting BO and BZO in the strong-coupling regime.
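
To make this concrete, the BO/BZO dynamics generated by $H_{\mathrm{eff}}$ in Eq. (\ref{H_eff}) can be simulated directly. The sketch below (with illustrative parameter values, not those used in the figures) builds the chain with hopping $-\sqrt{2}\kappa $ and on-site potential $Fl+\frac{\delta }{2}\left( -1\right) ^{l}$, evolves a Gaussian wavepacket exactly through the spectral decomposition of the Hamiltonian, and prints the centre of mass over a few Bloch periods.
\begin{verbatim}
import numpy as np

# Tight-binding chain of Eq. (H_eff): hopping -sqrt(2)*kappa and on-site
# potential F*l + (delta/2)*(-1)^l.  Parameter values are illustrative only.
L, kappa, F, delta = 200, 1.0, 0.05, 0.4
l = np.arange(L)
H = np.diag(F * l + 0.5 * delta * (-1.0) ** l).astype(complex)
H += np.diag(-np.sqrt(2) * kappa * np.ones(L - 1), 1)
H += np.diag(-np.sqrt(2) * kappa * np.ones(L - 1), -1)

# initial Gaussian wavepacket centred at site l0 with momentum k0
l0, k0, alpha = L // 2, 0.0, 0.1
psi0 = np.exp(-0.5 * (alpha * (l - l0)) ** 2 + 1j * k0 * l)
psi0 /= np.linalg.norm(psi0)

E, U = np.linalg.eigh(H)                 # exact evolution via the eigenbasis
def center_of_mass(t):
    psi_t = U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))
    return float(np.real(np.sum(l * np.abs(psi_t) ** 2)))

T_B = 2.0 * np.pi / F                    # single-particle Bloch period
for t in np.linspace(0.0, 2.0 * T_B, 9):
    print(f"t/T_B = {t / T_B:4.2f}   x_c = {center_of_mass(t):7.2f}")
\end{verbatim}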
\begin{figure}
\caption{(Color online) (a1-d1) The band structures for the systems with
parameters fixed at the typical points ($a$-$d$)\ labelled in Fig. 1. All
the BP bands are complete. (a2-d2) The profiles and the average
distances $\bar{r}
\label{fig2}
\end{figure}
In this paper we are interested in what happens if the initial state is
placed in an incomplete band. It is plausible that the semi-classical theory
still holds while the wavepacket stays within the interior of the incomplete band,
because the nonzero pseudogap can protect the BP wavepacket from the
scattering band. However, when the wavepacket reaches the band edge, a
transition from the bound band to the scattering band occurs. The wavepacket then diffuses
into the continuous spectrum instead of repeating the cycle of
acceleration and Bragg reflection. We refer to this phenomenon as the sudden
death of the BO. In the case of an incomplete BP band with an edge $
k_{\mathrm{m}}>0$, the lifetime $\tau $\ of an initial wavepacket with $k_{
\mathrm{c}}\left( 0\right) $\ satisfies
\begin{equation}
k_{\mathrm{m}}=\left\vert k_{\mathrm{c}}\left( 0\right) +\tau F\right\vert .
\label{tau}
\end{equation}
When this occurs, the correlation between two particles breaks down and the
wavepacket spreads out in space, irreversibly.
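
For instance (with purely illustrative numbers), suppose the incomplete band occupies $k_{\mathrm{m}}\leq \left\vert k\right\vert \leq \pi $ with edge $k_{\mathrm{m}}=\pi /2$, the wavepacket is prepared inside the band at $k_{\mathrm{c}}\left( 0\right) =-3\pi /4$, and $F>0$. The smallest positive solution of Eq. (\ref{tau}) is then $\tau =\pi /\left( 4F\right) $: the packet is accelerated until it reaches the band edge at $k=-k_{\mathrm{m}}$ after one eighth of the would-be Bloch period $2\pi /F$, at which point the oscillation dies.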
To verify and demonstrate the above analysis, numerical simulations are
performed to investigate the dynamical behavior. We compute the time
evolution of the wavepacket by diagonalizing the Hamiltonian $H$
numerically. Throughout this paper, we investigate the dynamics of the
initial Gaussian wavepacket in the form
\begin{equation}
\left\vert \Psi \left( 0\right) \right\rangle =\Lambda \sum_{k}\exp \left[ -
\frac{\left( k-k_{0}\right) ^{2}}{2\alpha ^{2}}-i N_{\mathrm{A}}\left( k-k_{0}\right)\right]
\left\vert \psi _{k}\right\rangle , \label{Psi_0}
\end{equation}
where $\Lambda $ is the normalization factor, $k_{0}$ and $N_{\mathrm{A}}$
denote the central momentum and position of the initial wavepacket,
respectively. The evolved state under the Hamiltonian $H$ is $\left\vert \Psi \left( t\right) \right\rangle$ = $ e^{-i Ht}\left\vert\Psi \left( 0\right) \right\rangle \nonumber$.
We would like to stress that the initial wavepacket involves solely one
BP band, either upper or lower one. However, the evolved state may
involve two BP bands, even the scattering band,\ when the Zener
tunnelling occurs. We plot the probability profile of the wavepacket
evolution in several typical cases in Fig. \ref{fig2} and \ref{fig3}. In
Fig. \ref{fig2}, the simulation is performed in the systems, where two bound
bands are well separated from the scattering band. As the external field is
turned on, several dynamical behaviors occur: when two bound bands are well
separated ($a1$ and $b1$), the BOs in both bands are observed ($a2$ and $b2$
). In the case ($d1$, $d2$), two bound bands are very close at $\pm \pi $,
which induces the BZO as expected. For these three cases, the BO
frequency is doubled compared with that of the single-particle case. Case ($c1$,
$c2$) fixes $U=V$, so the two bound bands merge into a single bound band. As
predicted above, a simple BO rather than a BZO is observed, with a period
equal to that of the single-particle BO.
\begin{figure}
\caption{(Color online) The same as that in \protect\ref{fig2}
\label{fig3}
\end{figure}
In Fig. \ref{fig3}, the systems have a common feature: one of the BP
bands is incomplete due to the pseudo gap vanishing. In the three cases ($e2$
, $f2$ and $g2$), the BOs persist in one band, whereas in the incomplete band
they break down at its edges. For the case ($h1$, $h2$) with $U=V$,\ the two
bound bands merge into a single incomplete band. The BP wavepackets
in both bands cannot survive the irreversible spreading.
Furthermore, the correlation between the two particles is measured by their
average distance
\begin{equation}
\overline{r}\left( t\right) =\sum_{i,r}r\left\langle \Psi \left( t\right)
\right\vert n_{i}n_{i+r}\left\vert \Psi \left( t\right) \right\rangle ,
\end{equation}
which can be used to characterize the sudden death of the BO for an
evolved state $\left\vert \Psi \left( t\right) \right\rangle $. For
comparison, the average distance $\overline{r}\left( t\right) $ as a function
of time for several typical cases is plotted in Figs. \ref{fig2} and \ref{fig3}.
We find that the sudden death of the BO is always accompanied by an
irreversible increase of $\overline{r}\left( t\right) $, which accords
with our analytical predictions.
Finally, we also plot the BP dispersion relation $E\left( k\right) $
and the central position $x_{\mathrm{c}}\left( t\right) $\ of the wavepacket
under the driving force together in one figure. For several typical cases,
the plots in Fig. \ref{fig4} indicate that the shape of the function $x_{
\mathrm{c}}\left( t\right) $\ coincides with that of the dispersion relation
$E(k)$. We also find that the semi-classical analysis in Eq. (\ref{CP}) is
valid if $\delta $\ is not too small. Remarkably, one can see that such a
relation still holds even for the incomplete BP band. These results
are in agreement with the theoretical prediction based on the spectral
structures.
\begin{figure}
\caption{(Color online) The comparison between the BP dispersion
relations and the central positions for the cases plotted in Fig. \protect
\ref{fig2}
\label{fig4}
\end{figure}
\section{Unidirectional propagation}
\label{sec_Unidirectional propagation}
\begin{figure}
\caption{(Color online) Schematic illustration of the process of realizing
unidirectional propagation of BP. The dashed $\infty $\ represents
the correlation between two particles. The shaking lattice induces the
inertial force $F\left( t\right) $. (a) for $t{<-T/2}
\label{fig5}
\end{figure}
We now investigate the effect of the time-dependent driving force on the
dynamics of a BP wavepacket. The acceleration theorem (\ref
{acceleration theorem}) tells us that a pulsed field can shift the central
momentum of the wavepacket in the case of a complete band. However, it is easy
to see that a pulsed field may destroy a BP wavepacket in an
incomplete band, which we similarly refer to as the sudden death of uniform motion. The
death or survival of a propagating wavepacket strongly depends on the
difference between the initial central momentum and the edge of the
incomplete band. Of course, a surviving wavepacket can recover its original
motion state by a subsequent compensating pulsed field. This
gives rise to a scheme for destroying a pair wavepacket propagating in one
direction, while retaining the one moving in the opposite direction. Such a
scheme can be carried out by two pulsed fields in a pair of adjacent
intervals, which provides two opposite impulses to the wavepacket.
To illustrate the scheme, we propose two concrete examples. The first one is
a square-wave pulse driving force in the form
\begin{equation}
F\left( t\right) =\left\{
\begin{array}{cc}
F_{0}, & -T/2<t\leq 0, \\
-F_{0}, & 0<t\leq T/2, \\
0, & \textrm{otherwise,}
\end{array}
\right. . \label{Square}
\end{equation}
According to the acceleration theorem, an initial wavepacket with momentum $
k_{c}\left( 0\right) $\ will acquire a momentum shift $F_{0}T/2$ at instant $
t=0$ if $k_{\mathrm{c}}\left( 0\right) +F_{0}T/2$\ is within the band. The
action of the subsequent force $-F_{0}$ can return the group velocity to its
initial value, allowing the wavepacket to continue its motion in the same direction. However, if $k_{\mathrm{c}}\left( 0\right) +F_{0}T/2$\ lies beyond the incomplete
band, the BP wavepacket breaks down before $t=0$ and the subsequent
force cannot restore the correlation. Therefore, for two BP
wavepackets with opposite momenta $\pm k_{\mathrm{c}}\left( 0\right) $\ or
group velocities $\pm \upsilon _{\mathrm{g}}\left( 0\right) $, one can
always choose a proper $F\left( t\right) $\ to destroy one of them and
maintain the other. This feature can be used to control the direction of
wavepacket propagation\ on demand. Alternatively, one can also consider the
sine-wave pulse driving force
\begin{equation}
F\left( t\right) =\left\{
\begin{array}{cc}
\left( -F_{0}\pi /2\right) \sin \left( 2\pi t/T\right) , & t\leq \left\vert
T/2\right\vert , \\
0, & \textrm{otherwise,}
\end{array}
\right. , \label{Sine}
\end{equation}
to achieve the same effect from Eq. (\ref{acceleration theorem}). To examine
how these schemes work in practice, we apply them to a wavepacket of the form
(\ref{Psi_0}). Fig. \ref{fig6} shows a numerical propagation of the Gaussian
wavepacket under the action of the two kinds of pulsed driving forces. It shows
that wavepackets with opposite group velocities exhibit entirely different
behaviors: one retains its original motion state, while the other spreads
out in space.\ Remarkably, the probability of the broken wavepacket is
reflected by the pulsed field, which indicates that the unidirectional
effect in the scheme works not only for the two-particle correlation but
also for the probability flow. In addition, one can see that a slight portion
separates from the moving wavepacket under the action of the
square-wave pulsed field.
\begin{figure}
\caption{(Color online) The profiles and the average distances $\bar{r}
\label{fig6}
\end{figure}
The probability flow of two particles, no matter correlated or not, can be
depicted by the center of mass (COM)
\begin{equation}
x_{\mathrm{c}}\left( t\right) =\sum_{j}\left\langle jn_{j}\right\rangle _{t},
\label{x_c}
\end{equation}
where $\left\langle ...\right\rangle _{t}$ denotes the average over an evolved
state. On the other hand, to characterize the efficiency of the schemes, we
introduce the fidelity defined as
\begin{eqnarray}
\mathcal{F} &=&\textrm{max}\left[ f\left( t\right) \right] , \label{f(t)} \\
f\left( t\right) &=&\left\vert \left\langle \Psi _{t}\left( t\right)
\right\vert \left. \Psi _{0}\left( t_{0}\right) \right\rangle \right\vert ,
\nonumber
\end{eqnarray}
where
\begin{equation}
\left\vert \Psi _{0}\left( t_{0}\right) \right\rangle =\exp (-i
H_{0}t_{0})\left\vert \Psi \left( 0\right) \right\rangle
\end{equation}
is the target state and
\begin{equation}
\left\vert \Psi _{t}\left( t\right) \right\rangle =\mathcal{T}\exp (-i
\int_{0}^{t}H\left( t\right) \mathrm{d}t)\left\vert \Psi \left( 0\right)
\right\rangle
\end{equation}
is the transferred state subjected to the pulsed field. Here we have
omitted a shift of the time scale compared to the expression for $F\left(
t\right) $. The computation is performed using a uniform mesh in the time
discretization for the time-dependent Hamiltonian $H\left( t\right) $. As an
example, in Fig. \ref{fig7}, we show the evolution of the COM $x_{\mathrm{c}
}\left( t\right) \ $and the fidelity $f\left( t\right) $ for the same
parameter values as the four processes simulated in Fig. \ref{fig6}. The
plot in (a) shows the behavior of the two-particle transmission and
reflection induced by the pulsed field, while in (b) the fidelities of the
state transfer. It indicates that a sine-wave pulse has a higher fidelity ($
\mathcal{F}=0.994$) than the square-wave pulse ($\mathcal{F}=0.940$).
These results clearly demonstrate the power of the mechanism
proposed in this paper for inducing unidirectional propagation
by means of a pulsed field.
\begin{figure}
\caption{(Color online) Plots of the COM $x_{\mathrm{c}
\label{fig7}
\end{figure}
It is easy to estimate the spectral band of the unidirectional
filter by neglecting the width of the wavepacket. We find that three factors
determine whether a propagating wavepacket dies: (i) the central
momentum of the initial wavepacket, (ii) the edge of the incomplete band,
which is determined by the values of $U/\kappa $ and $V/\kappa $,
and (iii) the impulse of the single pulsed field. We consider a
lattice with one bound band just touching the scattering band at the central
momentum $k=0$. We note, but do not prove rigorously, that the
dispersion relation is monotone on each of the left region $\left[ -\pi ,0\right] $
and the right region $\left[ 0,\pi \right] $.
Then, if we apply a pulsed field with impulse $\pi $, a moving
wavepacket with momentum in the left region is pushed into the
scattering band and is not recovered by the subsequent $-\pi $
impulse. In contrast, a wavepacket in the right region still keeps its
initial state after this process. Therefore, roughly speaking, the
proposed unidirectional filter works for wavepackets with any possible
group velocity.
\section{Summary}
\label{sec_Summary}
In this paper, the coherent dynamics of two correlated particles in the
one-dimensional extended Hubbard model with on-site $U$ and nearest-neighbor
$V$ interactions, driven by a linear field, has been theoretically
investigated. The analysis shows that in the field-free case there always
exist BP states for any nonzero $U$ and $V$, whose bandwidth may be
comparable with that of a single particle. This results in the onset
of distinct BO and BZO for the correlated pair in the presence of the external
field. We found that the incompleteness of the BP band spoils the
correlation of the pair and leads to the sudden death of the BO and
BZO.\ Based on this mechanism, we propose a scheme to control the
unidirectional propagation of the BP wavepacket by imposing a single
pulse. As a simple application of this scheme, we investigate the
effect of two kinds of pulsed fields. Numerical simulations indicate that a
sine-wave pulse has a higher fidelity than a square-wave pulse.\ In
experiment, it has been proposed that ultracold atomic gases in optical
lattices with sinusoidal shaking can be an attractive testing ground to
explore the dynamical control of quantum states \cite
{Madison,Gemelke,Lignier,Chen}. The sudden death of the BO
predicted in this paper is an exclusive signature of a correlated particle
pair and could be applied to quantum and optical device design.
\acknowledgments We acknowledge the support of the National Basic Research Program (973
Program) of China under Grant No. 2012CB921900 and CNSF (Grant No. 11374163).
\section*{References}
\end{document}
\begin{document}
\title{Computing and Proving Well-founded Orderings through Finite Abstractions}
\begin{abstract}
A common technique for checking properties of complex state machines is to build a finite
abstraction and then check the property on the abstract system --- where a passing check on the
abstract system is only transferred to the original system if the abstraction is proven to be
representative. This approach does require the derivation or definition of the finite
abstraction, but can avoid the need for complex invariant definition. For our work in checking
progress of memory transactions in microprocessors, we need to prove that transactions in complex
state machines always make progress to completion. As a part of this effort, we developed a
process for computing a finite abstract graph of the target state machine along with annotations
on whether certain measures decrease or not on arcs in the abstract graph. We then iteratively
divide the abstract graph by splitting it into strongly connected components and then build a
measure for every node in the abstract graph which is ensured to decrease on every transition
of the original system, guaranteeing progress. For finite-state target systems (e.g.\ hardware
designs), we present approaches for extracting the abstract graph efficiently using incremental
SAT through GL and then the application of our process to check for progress. We present an
implementation of the Bakery algorithm as an example application.
\end{abstract}
\section{Introduction} \label{sec:intro}
In order to admit a recursive function to ACL2, the user must prove that the function terminates by
showing that a function of the inputs exists which returns an ordinal (recognized by \texttt{o-p})
and which strictly decreases (by \texttt{o<}) on every recursive call of the function. The
epsilon-$0$ ordinals recognized by \texttt{o-p} and ordered by \texttt{o<} are axiomatized in this
way to be well-founded in ACL2. Our goal in this work is to present a new way to prove that certain
relations are well-founded. Referring to Figure~\ref{fig:wellfound} and given a relation \texttt{(r
x y)}, we produce a measure function \texttt{(m x)} and proofs of the properties
\texttt{m-is-an-ordinal} and {\tt m-is-o<-when-r}. This entails that the given relation \texttt{r}
is well-founded.
\begin{figure}
\caption{Proving a relation is well-founded}
\label{fig:wellfound}
\end{figure}
In order to admit recursive functions, ACL2 has built-in heuristics for guessing appropriate
measures and attempts to prove them. These heuristics often work for functions with common recursive
patterns with user specification of measures covering the remaining cases. The theorem prover
ACL2s~\cite{acl2s} (the sedan) has a built-in procedure which builds so-called Calling Context Graphs
(or CCGs)~\cite{ccg} and checks that along every infinite path through the CCG some
measure decreases infinitely often while never increasing. The CCG checker in ACL2s
significantly increases the number of functions which can be admitted without user specification of
measures. Our work shares some similarities at a high level to the work on CCGs in ACL2s but the
approach and target applications are quite different --- we will cover these differences in greater
detail in Section~\ref{sec:concl}.
Our primary focus is proving well-founded relations \texttt{(r x y)} derived from software and
hardware systems comprised of interacting state machines. In particular, for the work presented in
this paper, we focus on systems defined as the composition of finite state machines. We present a
procedure in this paper which takes the definition of \texttt{(r x y)}, a finite domain
specification for \texttt{x} and \texttt{y} and a mapping from the concrete finite domain for
\texttt{x} and \texttt{y} to an abstract domain. The procedure leverages existing bit-blasting tools
in ACL2 to construct the abstraction of \texttt{r}, builds an abstract measure descriptor, and then
translates this back to a proven measure on the concrete domain. In Section~\ref{sec:bakery}, we
cover a version of the Bakery algorithm which will be the primary example for this paper. In
Section~\ref{sec:overview}, we present an overview of the procedure for generating and proving the
needed measures and in Sections~\ref{sec:gl},~\ref{sec:scc},~\ref{sec:proof}, we go into the details
of the steps of the procedure along with demonstration via application to the example. We conclude
the paper in Section~\ref{sec:concl} with a discussion of related work and future considerations.
\section{Example: Bakery Algorithm} \label{sec:bakery}
We will use a finite version of the Bakery algorithm as an example application throughout this
paper. The Bakery algorithm was developed by Lamport~\cite{Lamport} as a solution to mutual
exclusion with the additional assurance that every task would eventually gain access to its
exclusive section. The Bakery algorithm has also been a focus of previous ACL2 proof
efforts~\cite{RaySumners, somenerd}.
The Bakery algorithm operates by allowing each process that wants exclusive access to first choose a
number to get its position in line and then later compares the number against the numbers chosen by
the other processes to determine who should have access to the exclusive section. The Bakery
algorithm definition we will use for presenting this work is defined in Figure~\ref{fig:bakeimpl}
where the macros \texttt{(tr+ ..)} and \texttt{(sh+ ..)} are shorthand for functions which update the
specified fields of the \texttt{bake-tr-p} and \texttt{bake-sh-p} data structures.
The function \texttt{(bake-tr-next a sh)} takes a local bakery transaction state ``\texttt{a}'' and
a shared state ``\texttt{sh}'' and updates the local bakery state. The function
\texttt{(bake-sh-next sh a)} takes the same ``\texttt{sh}'' and ``\texttt{a}'' and produces the
updated shared state (a single variable \texttt{sh.max} storing the next position in line). The
function \texttt{(bake-tr-blok a b)} defines a blocking relation which denotes when one bakery
process ``\texttt{a}'' is blocked by another bakery process ``\texttt{b}''.
Each task will start in program location $0$ in which it starts its \texttt{a.choosing}
phase. During the \texttt{a.choosing} phase, the task will grab the current shared max variable
\texttt{sh.max} and then set its own position \texttt{a.pos} to be $1$ more than \texttt{sh.max} ---
possibly wrapping around to a position of $0$. If we do wrap the position around to $0$, then the
local Bakery process will cycle through the other Bakery processes that are completing to allow them
to flush out before proceeding. After this check, the process will perform an atomic
compare-and-update at program location (or {\tt a.loc}) $6$ to the shared \texttt{sh.max} variable
and ends its \texttt{a.choosing} phase.
After the \texttt{a.choosing} phase, the process at \texttt{a.loc} $7$ will enter another loop to
check if it can proceed to the critical section. In locations $8$, $9$, and $10$, the process checks
if the current process we are checking (at index \texttt{a.loop}) is either still choosing a
position in line or is ahead of us in line (with ties broken by checks on the order of process indexes
at location $10$). Finally, after the process enters and exits the critical section at location
$13$, the process will decrement the \texttt{a.runs} outer loop count and either branch back to
location $0$ or complete and set \texttt{a.done} at location $17$.
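
For readers unfamiliar with the algorithm, the following Python-style sketch shows the structure of the classic Bakery protocol from the point of view of a single process with shared arrays \texttt{choosing} and \texttt{pos}. It is an illustration of the protocol only, and is not the ACL2 encoding of Figure~\ref{fig:bakeimpl}, which in addition wraps the ticket values and bounds the number of runs so that the state space stays finite.
\begin{verbatim}
# Schematic single-process view of the classic Bakery protocol.
# `choosing` and `pos` are shared arrays indexed by process id; this is an
# illustration of the protocol's structure, not the ACL2 encoding above.

def bakery_acquire(i, choosing, pos):
    choosing[i] = True                 # begin the "choosing" phase
    pos[i] = max(pos) + 1              # take a ticket past the current max
    choosing[i] = False
    for j in range(len(pos)):          # comparison loop over all processes
        while choosing[j]:
            pass                       # wait until j finishes choosing
        while pos[j] != 0 and (pos[j], j) < (pos[i], i):
            pass                       # j is ahead of us in line
    # ... critical section ...

def bakery_release(i, pos):
    pos[i] = 0                         # leave the line
\end{verbatim}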
\begin{figure}
\caption{Bakery Process Definitions}
\label{fig:bakeimpl}
\end{figure}
Our goal is to prove that a system comprised of some number of bakery processes updating
asynchronously will eventually reach a state where all bakery processes are done. This is codified
by admitting the function \texttt{(bake-run st)} defined in Figure~\ref{fig:bakerun}. The {\tt
bake-run} function takes a state \texttt{st} consisting of a list \texttt{st.trs} of transaction states
(one for each bakery process) and a shared variable state \texttt{st.sh}. The function checks if all
bakery processes are done (i.e.\ \texttt{(bake-all-done st.trs)}) and simply returns if so. Otherwise,
the function chooses a process which is ready and updates the state for that process along with the shared
state and recurs. The function \texttt{(choose-ready st.trs st.sh oracle)} is constrained (via {\tt
encapsulate}) to return an index for a bakery process state which is not done and is not blocked
by any other process state (via \texttt{bake-tr-blok}). Given that the function \texttt{choose-ready} is
constrained to represent any legal input selection, \texttt{bake-run} represents all legal bakery
runs, and its termination ensures that all runs end with all bakery processes done. We note that this
does not prove the Bakery algorithm ensures mutual exclusion and it does not prove that the Bakery
algorithm avoids livelock or starvation --- these issues were covered in ~\cite{somenerd} but require
more complex specifications involving infinite runs which are not closed-form in ACL2. The work
presented in this paper to generate proven measures for well-founded relations has been applied to
the more complete proof framework presented in ~\cite{somenerd}.
\begin{figure}
\caption{Bakery System Run Function}
\label{fig:bakerun}
\end{figure}
\section{Overview} \label{sec:overview}
Our goal is to define \texttt{bake-run} and admit it by proving its termination. In support of this
goal, we need to prove that two relations are well-founded orderings. First, we need to build a
measure showing that on updates with \texttt{bake-tr-next}, each bakery process makes progress to a
done state. The other measure we need to define and prove is a little more subtle. The function
\texttt{choose-ready} must return a process index with a state which is not done and is not
blocked. In the case of a deadlock between some number of process states (a cycle of the
\texttt{bake-tr-blok} relation), choosing an unblocked process may not be possible. We need to
build a measure showing that no blocking cycles exist between states; this measure will allow us
to define a function which always finds an unblocked process state. In each of these cases, we begin
with a relation \texttt{(r x y)} that we want to show is well-founded requiring the definition of a
measure \texttt{(m x)} which preserves the properties \texttt{m-is-an-ordinal} and
\texttt{m-is-o<-when-r} from Figure~\ref{fig:wellfound}.
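For reference, these two properties correspond to the standard shape of an ACL2
\texttt{:well-founded-relation} theorem. The following is the generic pattern only (with placeholder
names \texttt{mp}, \texttt{fn} and \texttt{rel}), not the exact statement from
Figure~\ref{fig:wellfound}:
\begin{verbatim}
;; Generic ACL2 pattern for showing a relation rel well-founded via an
;; ordinal-valued measure fn on a domain recognized by mp.
(defthm rel-is-well-founded
  (and (implies (mp x) (o-p (fn x)))             ; fn maps into the ordinals
       (implies (and (mp x) (mp y) (rel x y))
                (o< (fn x) (fn y))))             ; fn respects rel
  :rule-classes :well-founded-relation)
\end{verbatim}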
\begin{figure}
\caption{Overview of the Procedure}
\label{fig:overview}
\end{figure}
A standard ACL2 proof that these relations are well-founded would not only require defining a
decreasing measure \texttt{(m x)}, but, almost invariably, also proving invariant properties of the
reachable process states. In order to prove these invariants, the user will need to strengthen the
invariant definitions until they are inductive. For complex systems, defining inductive invariants
can be prohibitively expensive. In addition, inductive invariant definitions are fragile and require
maintenance when the system definition changes. The same is true of defining measures for proving
well-founded relations in complex systems --- these measure definitions can also be extensive and
brittle to changes in the system definition. Our goal is to build a procedure which allows the
generation of inductive invariants as well as decreasing measures which prove our target relations
to be well-founded. Computing invariant and measure definitions not only has the benefit of
requiring less human effort and tracking design changes, but can also provide more direct debugging
output from failed attempts.
pointers to where these steps are defined in the supporting materials) in
Figure~\ref{fig:overview}. We cover each of these steps in greater detail in the remaining sections
of the paper as well as covering their application to our Bakery algorithm example.
\section{Building Models with GL and Incremental SAT} \label{sec:gl}
The tool \texttt{GL}~\cite{gl} is an extension to ACL2 (primarily an untrusted clause processor)
which targets proving theorems on finite domains by translating the theorems to boolean formulas
using symbolic simulation and then checking the boolean formulas through BDDs or SAT with boolean AIG
transformations and simplifications. There are different ways to direct \texttt{GL} to translate a
term to a boolean formula, but the most basic form is to take a \texttt{hyp} term and \texttt{concl}
term along with a shape specification \texttt{g-bindings} for the free variables in the \texttt{hyp}
and \texttt{concl} terms. \texttt{GL} uses the shape specification to provide a symbolic value for
the free variables and then symbolically evaluates the \texttt{concl} and checks if the resulting
boolean formula is valid. If the boolean check passes, then \texttt{GL} checks that the \texttt{hyp}
implies the constraints specified by the shape specification. The first step in our procedure uses
the setup for \texttt{GL}, but instead of proving that a term is always true, we compute the set of
values that a term can return under evaluation with variable bindings consistent with
\texttt{g-bindings}.
We define the function \texttt{(compute-finite-values trm hyp g-b num)} which takes terms
\texttt{trm} and \texttt{hyp}, shape specification \texttt{g-b}, and natural number \texttt{num} and
attempts to return (up-to \texttt{num}) values from the set of possible values for \texttt{trm} under
the assumption of \texttt{hyp} with free variables consistent with \texttt{g-b}. The function
\texttt{compute-finite-values} will also return a boolean \texttt{is-total} which is true if the
list of values returned is the entire set of possible values. We use the \texttt{GL} symbolic
evaluation functions to provide a translation from terms (with shape-specifications) to boolean CNF
formula and then iterate through the set of possible boolean values discovered using incremental SAT
via the \texttt{IPASIR} library~\cite{ipasir}. The resulting boolean valuations from repeated IPASIR
tests are then translated back to ACL2 objects and returned.
This inner IPASIR Incremental SAT loop begins with installation of the CNF formula (from the GL
translation) into the IPASIR clause database. The literals in the CNF formula corresponding to the
output of \texttt{trm} are also recorded. Then, within each iteration of the loop, we first call
IPASIR to find a satisfying assignment. If it is unsatisfiable then there are no more values and we
return. Otherwise, we retrieve the boolean values for the \texttt{trm} literals from IPASIR and add
this boolean valuation to our accumulated return set. We then add the negation of the equality of
the \texttt{trm} literals to the retrieved boolean values as a new clause in the IPASIR database and
iterate through the loop. We terminate the loop by either reaching an unsatisfiable IPASIR instance
or exceeding the user-specified maximum number \texttt{num} of values. The chief benefit of using
incremental SAT is the amortization of the translation and installation of the CNF formula along
with (and more importantly) the incremental benefits of any learned clauses that the SAT solver
determines through each iteration of this loop. \begin{footnote}{We note that independent of the
work presented in this paper, a new revision of \texttt{GL}, named \texttt{FGL}, was added to
ACL2 and integrates incremental SAT in addition to other features. In particular,
\texttt{FGL} makes it much easier and direct to define exploration functions like
\texttt{compute-finite-values} using rewrite rules, but we decided to present the approach in
\texttt{GL} given that \texttt{GL} is more familiar to the ACL2 community at the time of the
writing of this paper.}\end{footnote}
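The loop can be pictured with the following schematic ACL2-style sketch. The solver wrappers
\texttt{sat-solve}, \texttt{sat-get-vals} and \texttt{sat-block-valuation} are hypothetical
stand-ins for the actual \texttt{IPASIR} interface used in the supporting books, and the final
argument mirrors the user bound \texttt{num}:
\begin{verbatim}
;; Schematic sketch of the value-enumeration loop (hypothetical solver
;; wrappers; the real code lives in the supporting materials).
(defun enum-values (trm-lits solver acc num)
  (if (zp num)
      (mv acc nil)                      ; hit the user bound: may not be total
    (mv-let (status solver)
      (sat-solve solver)
      (if (eq status :unsat)
          (mv acc t)                    ; no further values: acc is total
        (let* ((vals   (sat-get-vals trm-lits solver))
               ;; block this valuation so the next solve must find a new one
               (solver (sat-block-valuation trm-lits vals solver)))
          (enum-values trm-lits solver (cons vals acc) (1- num)))))))
\end{verbatim}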
\begin{figure}
\caption{Bakery state mapping and reachable graph construction}
\label{fig:rankmap}
\end{figure}
Using the \texttt{compute-finite-values} function, we construct an abstract graph from the concrete
system definition. The function \texttt{(comp-map-reach init-hyp init-trm step-hyp step-trm)} in
\texttt{gen-models.lisp} builds the reachable graph beginning with the set of values from
\texttt{init-trm} and iteratively reached by steps in \texttt{step-trm}. Returning to the Bakery
example, Figure~\ref{fig:rankmap} defines the mapping \texttt{(bake-rank-map a)} taking a bakery
process state and returning the state information needed to build a measure of progress to a done
state --- or, intuitively, the mapping \texttt{bake-rank-map} includes enough state information to
ensure a bakery process makes progress to a done state. This includes the current location
\texttt{a.loc}, whether or not we are done \texttt{a.done}, whether or not the inner loop or outer
loop variables have counted down to $0$, and then a predicate ensuring that the \texttt{a.loop} and
\texttt{a.runs} counters are natural. This last \texttt{:inv} predicate field is actually an
inductive invariant attached to the abstract state and we include it in the abstract state to
effectively prove and use this inductive invariant during the building of the abstract graph and
later, when we add ordering information to the graph.
The \texttt{comp-map-reach} function in Figure~\ref{fig:rankmap} builds the abstract reachable graph
by setting up calls to \texttt{compute-finite-values}. It first calls \texttt{compute-finite-values}
to return the set of values for \texttt{(bake-rank-map (bake-tr-init n r))} where \texttt{n} is
defined to be an index for the process state and \texttt{r} defines the number of runs or number of
iterations of the outer \texttt{bake-tr-next} loop. The function \texttt{comp-map-reach} builds a shape
specification based on the variables in the terms and then computes the initial states. The function
\texttt{comp-map-reach} will then iterate by computing the values for the step term
\texttt{(bake-rank-map (bake-tr-next a sh))} at each node with the hypothesis \texttt{(equal
(bake-rank-map a) ,*src-var*)} --- during each step, the special variable \texttt{*src-var*} is
bound to the value of the current node in the reachable graph exploration (i.e.\ a reached result of
\texttt{bake-rank-map}).
\begin{figure}
\caption{Nodes in reachable abstract graph from \texttt{comp-map-reach}}
\label{fig:reachnodes}
\end{figure}
The result of \texttt{comp-map-reach} is a graph defined as an alist where each pair in the alist
associates a node to a list of nodes which form the directed arcs --- the nodes are the results of
\texttt{bake-rank-map} for reachable bakery process states. The nodes in the
\texttt{*bake-rank-reach*} graph are included in Figure~\ref{fig:reachnodes}. There are $21$ nodes
consisting of $1$ node per location and $2$ nodes for locations $5$, $12$, and $15$. The extra nodes
for these locations are due to a split based on whether \texttt{a.loop} is equal to $0$ in locations $5$
and $12$ and whether \texttt{a.runs} is equal to $0$ in location $15$.
\begin{figure}
\caption{Bakery component measures and ordering tag construction}
\label{fig:ordmap}
\end{figure}
The reachable abstract graph for \texttt{bake-rank-map} is not sufficient to build a measure of
progress to a done state --- there are 3 backward arcs at locations $5$, $12$, and $15$. We could
solve this by adding the full values for \texttt{a.loop} and \texttt{a.runs} to
\texttt{bake-rank-map} but this would dramatically increase the number of nodes in the resulting
abstract graph and is clearly not viable in general. The better approach is to tag arcs in the
abstract graph with whether or not certain measures strictly decrease or possibly increase. This
ordering information is defined by the function \texttt{(bake-rank-ord a o)} and added via the
function \texttt{comp-map-order} in Figure~\ref{fig:ordmap}. The function \texttt{bake-rank-ord}
takes a bakery state and symbol identifying a component measure and returns the measure value
(natural values in this case but in general can be a list of natural numbers). The function
\texttt{comp-map-order} takes the reachable abstract graph and computes (using
\texttt{compute-finite-values}) tags for each arc in the reachable graph encoding whether the
specified component measure is either strictly-decreasing, not-increasing, or possibly-increasing
along that arc. The \texttt{runs} measure strictly decreases on the arcs from the node at location
$14$ to the nodes at location $15$ and is non-increasing on all arcs. The \texttt{loop} measure
strictly decreases on arcs $4$ to $5$ and from $11$ to $12$, increases on arcs $2$ to $3$ and $7$ to
$8$ and is non-increasing on all other arcs. This tagged reachable graph is used in the next section
to compute a measure descriptor covering the concrete relation used to build the graph --- in this
case, the next-state bakery function \texttt{bake-tr-next}.
\section{Building Measures with SCC decomposition} \label{sec:scc}
In the previous section, we used \texttt{GL} and \texttt{IPASIR} to construct abstract reachable
graphs with arcs tagged based on which component measures decreased or increased. The next step in
our procedure is to use an algorithm based on the decomposition of strongly connected components (SCCs) to
build an object describing how to build a full measure across the concrete relation represented by
the abstract graph. The algorithm consists of two alternating phases operating on subgraphs of the
original graph (starting with the original graph itself) and produces a mapping of the nodes in the
graph to a measure descriptor which is a list comprised of symbols (representing a component
measure) or natural numbers.
\begin{itemize}
\item if the current subgraph is an SCC:
\begin{enumerate}
\item search for a component measure which never increases and decreases at least once.
\item if no such component measure is found, then find the minimal non-decreasing cycle and fail.
\item otherwise, remove the component measure's decreasing arcs from the graph and recur.
\item \texttt{cons} the component measure onto the measure descriptors from the recursive calls.
\end{enumerate}
\item otherwise:
\begin{enumerate}
\item partition the graph into SCCs using a standard algorithm.
\item recursively build measure descriptors for each of the SCCs.
\item build the directed acyclic graph of SCCs and enumerate the SCCs in the graph.
\item \texttt{cons} the enumeration of each SCC onto the measure descriptors from the recursive calls.
\end{enumerate}
\end{itemize}
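The SCC phase of this alternation can be sketched as follows. The helper functions are hypothetical
names, an explicit fuel argument stands in for the termination argument on the shrinking arc set,
and \texttt{build-descr} represents the full alternating procedure applied to the reduced graph; the
real implementation is in the supporting books.
\begin{verbatim}
;; Schematic sketch of the SCC phase only (hypothetical helpers).
(defun scc-phase (graph ords fuel)
  (if (zp fuel)
      (mv :fail nil)
    (let ((ord (pick-ord graph ords)))  ; never increases, decreases at least once
      (if (not ord)
          (mv :fail (min-nondecr-cycle graph))   ; report the offending cycle
        (mv-let (flag descrs)
          (build-descr (remove-decr-arcs graph ord) ords (1- fuel))
          (mv flag (cons-ord-onto-descrs ord descrs)))))))
\end{verbatim}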
\begin{figure}
\caption{Result of computing measure descriptors for \texttt{bake-rank-map}}
\label{fig:msrdescr}
\end{figure}
Returning to the Bakery algorithm example, the resulting alist associating abstract graph nodes to
measure descriptors produced by this algorithm is given in Figure~\ref{fig:msrdescr}. The measure
descriptors follow the control flow through the locations of the bakery process state. The outer
loop forms an SCC with the \texttt{runs} component measure breaking the arc from location $14$ to
$15$. Within the outer loop, the inner loops form SCCs with the \texttt{loop} component measure
decreasing to break the SCCs further. The measure descriptor that results allows us to build a
measure showing that the relation \texttt{(and (equal y (bake-tr-next x sh)) (not (bake-tr-done x)))}
is well-founded. In the next section, we cover the theory and its instantiation which allows the
transfer of these measure descriptor results into actual proofs of well-founded relations.
\section{Proving the Generated Measures and Wrapping Up} \label{sec:proof}
\begin{figure}
\caption{``Theory'' for proving generated measures}
\label{fig:wfothry}
\end{figure}
In the book \texttt{wfo-thry.lisp} from the supporting materials, a ``theory'' is developed
connecting the successful computation of a measure descriptor from the abstract graph to the
definition of a measure proving that the concrete relation is well-founded. The structure of this
book is essentially the definition of two macros --- one macro codifies the assumptions made of a
set of definitions and a second macro generates the conclusions and results derived from these
assumptions. Each macro takes a name prefix parameter which is prepended to all of the definition
and theorem names generated in the macro. \begin{footnote}{It is worth noting that this would be
better carried out with functional instantiation in ACL2, but due to technical issues, that has
not worked in all cases --- we are working on rectifying this.}\end{footnote} The end of the
\texttt{wfo-thry} book concludes with the forms in Figure~\ref{fig:wfothry}. The function
\texttt{(rel-p x y)} is the relation we want to prove is well-founded. The function \texttt{(map-e
x)} is essentially equivalent to the \texttt{bake-rank-map} function from our example and
\texttt{(map-o x o)} is the \texttt{bake-rank-ord} function. The constant \texttt{(a-dom)} is the
set of nodes in the abstract graph (as in Figure~\ref{fig:reachnodes}) and \texttt{(nexts x)} is a
function taking a node in \texttt{(a-dom)} and returning a list of successor nodes pulled from the
abstract graph. The function \texttt{(chk-ord-arc x y o)} takes two nodes in the abstract graph
\texttt{x} and \texttt{y} and a component measure name \texttt{o} and returns \texttt{:<<} if
the measure is strictly-decreasing, \texttt{t} if it is non-increasing, or \texttt{nil} if it is
possibly-increasing.
\begin{figure}
\caption{Assumptions for proving generated measures}
\label{fig:msrassume}
\end{figure}
The main theorems assumed about these functions are provided in Figure~\ref{fig:msrassume}. These
assumed properties provide the correlation between \texttt{(rel-p x y)} and the abstract graph
defined by \texttt{(nexts x)} with the tagging of component measure ordering on the arcs defined by
\texttt{chk-ord-arc}. The relation \texttt{(bnl< m n o)} is defined in the book \texttt{bounded-ords.lisp}
and orders lists of naturals \texttt{m} and \texttt{n} of length \texttt{o} as the lexicographic
product of the naturals in the list in order. The \texttt{bounded-ords} book defines functions and
relations for building and ordering lists of naturals (of the same length) and lists of natural
lists (potentially differing lengths). These are recognized as \texttt{bnlp} and \texttt{bnllp} and
ordered with \texttt{bnl<} and \texttt{bnll<} respectively. As an aside, we note that in the supporting
books for the paper, we use \texttt{bplp} instead of \texttt{bnlp} where \texttt{bplp} is the same
as \texttt{bnlp} but requires the last natural in the list to not be $0$. The reason to use
\texttt{bplp} is to allow a list of all $0$s to be used as a bottom element in certain
constructions. We use \texttt{bnl} and \texttt{bnll} throughout the paper for clarity.
There are also conversion functions \texttt{bnl->o} and \texttt{bnll->o} for converting
\texttt{bnl}s and \texttt{bnll}s to ACL2 ordinals preserving the well-founded ordering \texttt{o<}
on ACL2 ordinals. We use the bounded ordinals from \texttt{bounded-ords} instead of ACL2 ordinals
because these bounded ordinals are closed under lexicographic product while ACL2 ordinals are
not. This allows us to build constructions using these bounded ordinals that would be far more
difficult (and in some cases, not even possible) with ACL2 ordinals.
\begin{figure}
\caption{Derivations for proving generated measures}
\label{fig:msrderive}
\end{figure}
Given the assumptions from Figure~\ref{fig:msrassume} (and some additional typing assumptions), we
generate a measure function \texttt{(msr x m)} which takes an ACL2 object \texttt{x} and a measure
descriptor mapping produced from the SCC decomposition (as in Figure~\ref{fig:msrdescr} and termed
an \texttt{omap}) and produces an ACL2 ordinal. If the mapping \texttt{m} satisfies the generated check
\texttt{(valid-omap m)}, then the \texttt{(msr x m)} returns a strictly decreasing ordinal which shows
that \texttt{(rel-p x y)} is well-founded. The key derived properties generated from the instantiation
of this theory are provided in Figure~\ref{fig:msrderive}. In addition to generating the definition
of a measure function returning ACL2 ordinals, we also generate a definition producing a bounded
ordinal \texttt{bnlp} and related properties. The intent is to use the \texttt{mk-bnl} function when one
wants to use the generated ordinal in a composition to build larger ordinals --- even potentially
using the procedure in this paper hierarchically where the component measure at one level is a proven
generated measure at a lower level in the hierarchy.
Returning to the Bakery example, the book \texttt{bake-proofs} includes the generated abstract
graphs and measure descriptor mappings (or \texttt{omap}s) from \texttt{bake-models} and sets up
instantiations of the ``theory'' from the \texttt{wfo-thry} book. The result is the generated
measures for proving our target two relations are well-founded: the relation defined by the step
function \texttt{bake-tr-next} and the blocking relation \texttt{bake-tr-blok}. In the book
\texttt{top.lisp}, we use these results to reach our goal of defining and admitting the
\texttt{bake-run} function from Figure~\ref{fig:bakerun}. We use the generated measure function
\texttt{bake-rank-mk-bnl} to define the function \texttt{(bake-rank-bnll l sh)} which conses the
\texttt{bnl}s for each bakery process state in the list \texttt{l} and returns a \texttt{bnll}. The
measure we use for admitting \texttt{(bake-run st orcl)} is the conversion of the resulting {\tt
bnll} to ACL2 ordinals:
\begin{verbatim}
(bnll->o (len (bake-st->trs st))
         (bake-rank-bnll (bake-st->trs st)
                         (bake-st->sh st))
         (bake-rank-bnl-bnd))
\end{verbatim}
\normalsize
Additionally, as we noted earlier in Section~\ref{sec:bakery}, we need the generated measure for
proving\\
\texttt{bake-tr-blok} is well-founded in order to define \texttt{choose-ready} correctly. In
particular,\\
\texttt{(choose-ready l sh o)} is a constrained function which ensures that if there is
a bakery state which is not done in \texttt{l}, then \texttt{(choose-ready l sh o)} will return the
index of a state which is not done and not blocked. The local witness in the encapsulation for
\texttt{choose-ready} is:
\begin{verbatim}
(local (defun choose-ready (l sh o)
         (find-unblok (find-undone l) l sh)))
\end{verbatim}
\normalsize
Where \texttt{(find-undone l)} returns an index in \texttt{l} for a bakery process which is not done
(if one exists) and the function \texttt{(find-unblok n l sh)} takes an index \texttt{n} and finds
an index which is not blocked in \texttt{l}. The function \texttt{find-unblok} is (essentially)
defined as:
\begin{verbatim}
(define find-unblok ((n natp) (l bake-tr-lst-p) (sh bake-sh-p))
  (if (bake-blok (nth n l) l)
      (find-unblok (pick-blok (nth n l) l) l sh)
    n))
\end{verbatim}
\normalsize
Where \texttt{(bake-blok a l)} returns true if any state in \texttt{l} blocks \texttt{a} and
\texttt{(pick-blok a l)} finds an index in \texttt{l} for a bakery state which blocks \texttt{a} (and
thus if \texttt{(bake-blok a l)} then \texttt{(bake-tr-blok a (nth (pick-blok a l) l))}). The measure
used to admit \texttt{find-unblok} is defined using the generated measure \texttt{bake-nlock-msr} for
proving the \texttt{bake-tr-blok} relation is well-founded.
An additional important property of \texttt{(find-unblok n l sh)} is that it either returns \texttt{n} (if
\texttt{(nth n l)} is not blocked) or it returns an index for a process state which is blocking another
state. We prove separately that no process in a done state can block another process and thus, if
the state at index \texttt{n} passed to \texttt{find-unblok} is not in a done state, then the state at the
index returned by \texttt{find-unblok} is also not in a done state.
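Schematically, and eliding the index-range and type hypotheses of the actual theorem, this property
reads as follows (the theorem name here is a placeholder):
\begin{verbatim}
;; Sketch of the property; hypotheses on n, l and sh are elided.
(defthm find-unblok-preserves-not-done
  (implies (not (bake-tr-done (nth n l)))
           (not (bake-tr-done (nth (find-unblok n l sh) l)))))
\end{verbatim}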
\section{Related Work and Future Work} \label{sec:concl}
The analysis of abstract reachable graphs with ordering tags is similar in some ways to analysis of
automata on infinite words~\cite{buchi}, but our search for a measure construction is not the same as
language emptiness or other checks typically considered for infinite word or tree automata. As we
mentioned in Section~\ref{sec:intro}, our work does share similarity to the work on CCGs in ACL2s in
that both build and analyze graphs with the goal of showing that no ``bad'' infinite paths exist
through the graph --- but the focus and approach of each work is significantly different. CCG
termination analysis in ACL2s is used to determine if a user-specified function always terminates. A
significant component of the CCG analysis is unwinding and transforming CCGs (and CCMs) until one
can no longer find bad paths and thus ensure termination. The problem that CCG analysis attempts to
tackle is intrinsically more difficult than the problems we target. We attempt to prove a given
relation is well-founded and rely on mapping function definitions to build a closed model sufficient
to then find a proven measure. Our procedure aggressively builds the model as specified by the
mapping functions and proceeds assuming it is sufficient without further refinement -- the user or
outside heuristics are responsible for any further refinements. The AProVE program analysis
tool~\cite{aprove} provides mechanisms for automatically checking program termination. The approach
taken in AProVE is to translate the source program into a term rewriting system and apply a
variety of analysis engines from direct analysis of the term rewriting to translation into checks
for SAT or SMT. Similar to CCG analysis in ACL2s, the primary difference between AProVE and our work
is an issue of focus. The contexts we target benefit from the assumption of mapping functions and
(in the case of this paper) finite-state systems which can be processed by {\tt GL}. This allows us
to build the tagged abstract reachable graph directly with less reliance on having sufficient
rewrite rules and term-level analysis and heuristics.
There are many ways to extend the work presented. We would like to add an interface into
\texttt{SMTLINK} \cite{smtlink} for either building the abstract graphs and/or the addition of
ordering tags to the arcs in the graphs. \texttt{SMTLINK} is more limited than \texttt{GL} in what
ACL2 definitions it can support, but \texttt{SMTLINK} would be a nice option to have in the cases
where the definitions were viable for \texttt{SMTLINK}. While we assumed the definition of mapping
functions and component measures for the sake of this paper, it is not difficult to write heuristics
to generate candidate mappings and measures either from datatype specifications in the source
definition or from static analysis results. Further, the results and steps in the process can be
analyzed to determine which refinements to apply to the mapping and component measures in an ``outer
loop'' to our procedure. From our limited experience, this is best addressed with guidance from the
domain of application and the types of relations and systems that the user wants to analyze.
\nocite{*}
\end{document} |
\begin{document}
\title{Integrals of polylogarithms and infinite series involving generalized harmonic numbers}
\author{
Rusen Li
\\
\small School of Mathematics\\
\small Shandong University\\
\small Jinan 250100 China\\
\small \texttt{limanjiashe@163.com}
}
\date{
\small 2020 MR Subject Classifications: 33E20, 11B83
}
\maketitle
\def\stf#1#2{\left[#1\atop#2\right]}
\def\sts#1#2{\left\{#1\atop#2\right\}}
\def\e{\mathfrak e}
\def\f{\mathfrak f}
\newtheorem{theorem}{Theorem}
\newtheorem{Prop}{Proposition}
\newtheorem{Cor}{Corollary}
\newtheorem{Lem}{Lemma}
\newtheorem{Example}{Example}
\newtheorem{Remark}{Remark}
\newtheorem{Definition}{Definition}
\newtheorem{Conjecture}{Conjecture}
\newtheorem{Problem}{Problem}
\begin{abstract}
In this paper, we give explicit evaluation for some infinite series involving generalized (alternating) harmonic numbers. In addition, some formulas for generalized (alternating) harmonic numbers will also be derived.
\\
{\bf Keywords:} polylogarithm function, generalized harmonic numbers
\end{abstract}
\section{Introduction and preliminaries}
Let $\mathbb Z$, $\mathbb N$, $\mathbb N_{0}$ and $\mathbb C$ denote the set of integers, positive integers, nonnegative integers and complex numbers, respectively. The well-known polylogarithm function is defined as
$$
Li_{p}(x):=\sum_{n=1}^\infty \frac{x^n}{n^p}\quad (\lvert x \rvert \leq 1,\quad p \in \mathbb N_{0})\,.
$$
Note that when $p=1$, $-Li_{1}(x)$ is the logarithm function $\log(1-x)$. Here, and throughout this paper, we use the natural logarithm (to base $e$). Furthermore, $Li_{n}(1)=\zeta(n)$, where $\zeta(s):=\sum_{n=1}^\infty n^{-s}$ denotes the well-known Riemann zeta function. The classical generalized harmonic numbers of order $m$ are defined as the partial sums of the Riemann zeta function $\zeta(m)$:
$$
H_n^{(m)}:=\sum_{j=1}^n \frac{1}{j^m} \quad (n, m \in \mathbb N)\,.
$$
For convenience, we recall the classical generalized alternating harmonic numbers $\overline{H}_n^{(p)}=\sum_{j=1}^n (-1)^{j-1}/{j^p}$.
It is interesting that infinite series containing harmonic numbers $H_n$ can be expressed explicitly in terms of logarithms and polylogarithm functions. For instance, De Doelder \cite{Doelder} used the integrals
\begin{align*}
&\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t, \int_{0}^{x}\frac{\log(t)\log^{2}(1-t)}{t}\mathrm{d}t, \int_{0}^{1}\frac{\log^{2}(1+t)\log(1-t)}{t^{2}}\mathrm{d}t
\end{align*}
and
\begin{align*}
\int_{0}^{1}\frac{\log^{2}(1+t)\log(1-t)}{t}\mathrm{d}t
\end{align*}
to evaluate infinite series containing harmonic numbers of types $\sum_{n=1}^\infty \frac{H_{n-1}}{n^{2}}x^{n}$, $\sum_{n=1}^\infty \frac{H_{n-1}}{n^{3}}x^{n}$, $\sum_{n=1}^\infty \frac{(H_{n-1})^{2}}{n}x^{n}$ and $\sum_{n=1}^\infty \frac{(H_{n-1})^{2}}{n^{2}}x^{n}$.
When $p=1$, De Doelder \cite{Doelder} gave the following formulas:
\begin{align*}
&\sum_{n=1}^\infty \frac{H_{n-1}}{n}x^n=\frac{1}{2}\log^{2}(1-x)\quad (\lvert x \rvert \leq 1)\,,\\
&\sum_{n=1}^\infty \frac{H_{n-1}}{n^{2}}x^n=\frac{1}{2}\log(x)\log^{2}(1-x)+\log(1-x)Li_{2}(1-x)
-Li_{3}(1-x)\\
&\qquad \qquad \qquad \quad +Li_{3}(1)\quad (0 \leq x \leq 1)\,,\\
&\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}}{n^{2}}x^n=\frac{1}{2}\log(x)\log^{2}(1+x)
-\frac{1}{3}\log^{3}(1+x)-Li_{3}\bigg(\frac{1}{1+x}\bigg)\\
&\qquad \qquad \qquad \qquad \quad -\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)+Li_{3}(1)\quad (0 \leq x \leq 1)\,,\\
&\sum_{n=1}^\infty \frac{H_{n-1}}{n^{3}}=\frac{1}{360}\pi^{4}\,,\\
&\sum_{n=1}^\infty \frac{H_{n-1}(-1)^{n-1}}{n^{3}}=\frac{1}{48}\pi^{4}-2 Li_{4}\bigg(\frac{1}{2}\bigg)-\frac{7}{4}\log(2)\zeta(3)+\frac{1}{12}\pi^{2}\log(2)
-\frac{1}{12}\log^{4}(2)\,.
\end{align*}
In this paper, we give explicit evaluation for infinite series involving generalized (alternating) harmonic numbers of types
$\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}x^{n}$,
$\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}(-x)^{n}$,
$\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}x^{n}$,
$\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}(-x)^{n}$,
$\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}x^{n}$,
$\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}(-x)^{n}$,
$\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n}x^{n}$,
$\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}x^{n}$,
$\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}(-x)^{n}$.
In addition, some formulas for generalized (alternating) harmonic numbers will also be derived.
\section{Infinite series containing generalized harmonic numbers}
Now we establish more explicit formulas for infinite series $\sum_{n=1}^\infty \frac{H_{n}^{(p)}}{n^{m}}x^{n}$.
\begin{theorem}\label{thm1}
Let $p \in \mathbb N$ with $p$ odd and $\lvert x \rvert \leq 1$, then we have
\begin{align*}
\sum_{n=1}^\infty \frac{H_{n}^{(p)}}{n}x^{n}
=\frac{1}{2}\sum_{j=1}^{p}(-1)^{j-1}Li_{j}(x)Li_{p+1-j}(x)+Li_{p+1}(x)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $H_{n}^{(p)}$, we can write
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(p)}}{n+1}x^{n+1}=\int_{0}^{x}\frac{Li_{p}(t)}{1-t}\mathrm{d}t\\
&=Li_{1}(x)Li_{p}(x)-\int_{0}^{x}\frac{Li_{1}(t)Li_{p-1}(t)}{t}\mathrm{d}t\\
&=Li_{1}(x)Li_{p}(x)-Li_{2}(x)Li_{p-1}(x)+\int_{0}^{x}\frac{Li_{2}(t)Li_{p-2}(t)}{t}\mathrm{d}t\\
&=\sum_{j=1}^{k}(-1)^{j-1}Li_{j}(x)Li_{p+1-j}(x)+(-1)^{k}\int_{0}^{x}\frac{Li_{k}(t)Li_{p-k}(t)}{t}\mathrm{d}t \quad (0 \leq k \leq p)\\
&=\sum_{j=1}^{p}(-1)^{j-1}Li_{j}(x)Li_{p+1-j}(x)
+(-1)^{p}\int_{0}^{x}\frac{Li_{p}(t)}{1-t}\mathrm{d}t\,.
\end{align*}
Note that
$$
\sum_{n=1}^\infty \frac{H_{n}^{(p)}}{n}x^{n}=\sum_{n=1}^\infty \frac{H_{n-1}^{(p)}}{n}x^{n}+\sum_{n=1}^\infty \frac{x^{n}}{n^{p+1}}\,,
$$
thus we get the desired result.
\end{proof}
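As a quick check, taking $p=1$ in Theorem \ref{thm1} gives
\begin{align*}
\sum_{n=1}^\infty \frac{H_{n}}{n}x^{n}=\frac{1}{2}Li_{1}(x)^{2}+Li_{2}(x)=\frac{1}{2}\log^{2}(1-x)+Li_{2}(x)\,,
\end{align*}
which is consistent with De Doelder's first formula above, since $\sum_{n=1}^\infty H_{n}x^{n}/n-\sum_{n=1}^\infty H_{n-1}x^{n}/n=Li_{2}(x)$.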
\begin{Cor}
Let $p, n \in \mathbb N$ and $k \in \mathbb N_{0}$ with $0 \leq k \leq p$, then we have
\begin{align*}
H_{n}^{(p)}
=\sum_{j=1}^{k}(-1)^{j-1}\sum_{\ell=1}^{n}\frac{n+1}{\ell^{j}(n+1-\ell)^{p+1-j}}
+(-1)^{k}\sum_{\ell=1}^{n}\frac{1}{\ell^{k}(n+1-\ell)^{p-k}}\,.
\end{align*}
In particular, if $m \in \mathbb N_{0}$, then we can obtain that
\begin{align*}
&\quad H_{n}^{(2m+1)}\\
&=\sum_{j=1}^{m}(-1)^{j-1}\sum_{k=1}^{n-1}\frac{n}{k^{j}(n-k)^{2m+2-j}}
+\frac{(-1)^{m}}{2}\sum_{k=1}^{n-1}\frac{n}{k^{m+1}(n-k)^{m+1}}+\frac{1}{n^{2m+1}}\,,\\
&H_{n}^{(2m)}
=\sum_{j=1}^{m}(-1)^{j-1}\sum_{k=1}^{n-1}\frac{n}{k^{j}(n-k)^{2m+1-j}}
+(-1)^{m}\sum_{j=1}^{n-1}\frac{1}{j^{m}(n-j)^{m}}+\frac{1}{n^{2m}}\,.
\end{align*}
\end{Cor}
\begin{proof}
The following formula is obtained in the proof of Theorem \ref{thm1},
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(p)}}{n+1}x^{n+1}\\
&=\sum_{j=1}^{k}(-1)^{j-1}Li_{j}(x)Li_{p+1-j}(x)+(-1)^{k}\int_{0}^{x}\frac{Li_{k}(t)Li_{p-k}(t)}{t}\mathrm{d}t \quad (0 \leq k \leq p)\,,
\end{align*}
comparing the coefficients on both sides gives the desired result.
\end{proof}
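For instance, taking $m=1$ and $n=2$ in the second of the two special cases, the right-hand side is
\begin{align*}
\sum_{k=1}^{1}\frac{2}{k(2-k)^{2}}-\sum_{j=1}^{1}\frac{1}{j(2-j)}+\frac{1}{2^{2}}=2-1+\frac{1}{4}=\frac{5}{4}=H_{2}^{(2)}\,,
\end{align*}
as expected.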
Before going further, we provide some lemmas.
\begin{Lem}(\cite[p.204]{Lewin})\label{lem0}
Let $0 \leq x < 1$, then we have
\begin{align*}
&\quad \int_{0}^{x}\frac{\log^{2}(t)\log(1-t)}{1-t}\mathrm{d}t\\
&=-2\bigg(Li_{4}(x)+Li_{4}\bigg(\frac{-x}{1-x}\bigg)-Li_{4}(1-x)+Li_{4}(1)
-Li_{3}(x)\log(1-x)\bigg)\\
&\quad -2 Li_{3}(1-x)\log(x)+2Li_{2}(1-x)\log(x)\log(1-x)-\frac{1}{6}\pi^{2}\log^{2}(1-x)\\
&\quad +\frac{1}{2}\log^{2}(x)\log^{2}(1-x)+\frac{1}{3}\log(x)\log^{3}(1-x)-\frac{1}{12}\log^{4}(1-x)\\
&\quad +2Li_{3}(1)\bigg(\log(x)-\log(1-x)\bigg)\,.
\end{align*}
\end{Lem}
\begin{Lem}(\cite[p.310]{Lewin})\label{lem00}
Let $0 \leq x < 1$, then we have
\begin{align*}
&\int_{0}^{x}\frac{\log^{2}(t)\log(1-t)}{t}\mathrm{d}t
=-2 Li_{4}(x)+2 Li_{3}(x)\log(x)-Li_{2}(x)\log^{2}(x)\,.
\end{align*}
\end{Lem}
\begin{Lem}(\cite{Doelder,Lewin})\label{lem000}
Let $0 \leq x \leq 1$, then we have
\begin{align}
\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t
&=\log(x)\log^{2}(1-x)+2\log(1-x)Li_{2}(1-x)\notag\\
&\quad -2Li_{3}(1-x)+2Li_{3}(1)\,,\label{Lewin1}\\
\int_{0}^{x}\frac{\log^{2}(1+t)}{t}\mathrm{d}t
&=\log(x)\log^{2}(1+x)-\frac{2}{3}\log^{3}(1+x)-2Li_{3}\bigg(\frac{1}{1+x}\bigg)\notag\\
&\quad -2\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)+2Li_{3}(1)\,.\label{Lewin2}
\end{align}
\end{Lem}
De Doelder \cite{Doelder} only calculated the integral $\int_{0}^{1}\frac{\log^{2}(t)\log(1+t)}{1+t}\mathrm{d}t$; we now give an explicit expression for the integral $\int_{0}^{x}\frac{\log^{2}(t)\log(1+t)}{1+t}\mathrm{d}t$.
\begin{Lem}\label{lem0000}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \int_{0}^{x}\frac{\log^{2}(t)\log(1+t)}{1+t}\mathrm{d}t\\
&=2 \bigg(Li_{4}(-x)+Li_{4}\bigg(\frac{x}{1+x}\bigg)+Li_{4}\bigg(\frac{1}{1+x}\bigg)-Li_{4}(1)
-Li_{3}(1)\log(x)\\
&\quad +Li_{3}\bigg(\frac{x}{1+x}\bigg)\log(1+x)+Li_{3}\bigg(\frac{1}{1+x}\bigg)\log(1+x)
+Li_{3}\bigg(\frac{1}{1+x}\bigg)\log(x)\\
&\quad +Li_{2}\bigg(\frac{1}{1+x}\bigg)\log(x)\log(1+x)\bigg)+\frac{1}{6}\pi^{2}\log^{2}(1+x)
-\frac{1}{2}\log^{2}(x)\log^{2}(1+x)\\
&\quad +\frac{4}{3}\log(x)\log^{3}(1+x)-\frac{1}{2}\log^{4}(1+x)\,.
\end{align*}
\end{Lem}
\begin{proof}
Following De Doelder's paper \cite{Doelder}, we make the substitution $t=\frac{1}{u}-1$ and it yields that
\begin{align*}
&\quad \int_{0}^{x}\frac{\log^{2}(t)\log(1+t)}{1+t}\mathrm{d}t\\
&=-\int_{\frac{1}{1+x}}^{1}\frac{\big(\log(1-u)-\log(u)\big)^{2}\log(u)}{u}\mathrm{d}u\\
&=-\int_{\frac{1}{1+x}}^{1}\frac{\log^{2}(1-u)\log(u)}{u}\mathrm{d}u
+2\int_{\frac{1}{1+x}}^{1}\frac{\log(1-u)\log^{2}(u)}{u}\mathrm{d}u\\
&\quad -\int_{\frac{1}{1+x}}^{1}\frac{\log^{3}(u)}{u}\mathrm{d}u\\
&=-\int_{0}^{\frac{x}{1+x}}\frac{\log^{2}(y)\log(1-y)}{1-y}\mathrm{d}y
+2\int_{\frac{1}{1+x}}^{1}\frac{\log(1-u)\log^{2}(u)}{u}\mathrm{d}u\\
&\quad +\frac{1}{4}\log^{4}(1+x)\,.
\end{align*}
With the help of Lemmata \ref{lem0} and \ref{lem00}, we get the desired result.
\end{proof}
\begin{theorem}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}}{n^{3}}x^{n}\\
&=2Li_{4}(x)+Li_{4}\bigg(\frac{-x}{1-x}\bigg)-Li_{4}(1-x)+Li_{4}(1)-Li_{3}(x)\log(1-x)\\
&\quad +Li_{3}(1)\log(1-x)+\frac{1}{24}\log^{4}(1-x)-\frac{1}{6}\log(x)\log^{3}(1-x)\\
&\quad +\frac{1}{12}\pi^{2}\log^{2}(1-x)\,,\\
&\quad \sum_{n=1}^\infty \frac{H_{n}}{n^{3}}(-x)^{n}\\
&=2Li_{4}(-x)+Li_{4}\bigg(\frac{1}{1+x}\bigg)+Li_{4}\bigg(\frac{x}{1+x}\bigg)-Li_{4}(1)\\
&\quad +\log(1+x)Li_{3}\bigg(\frac{1}{1+x}\bigg)+\log(1+x)Li_{3}\bigg(\frac{x}{1+x}\bigg)\\
&\quad +\frac{1}{12}\pi^{2}\log^{2}(1+x)+\frac{1}{3}\log(x)\log^{3}(1+x)-\frac{1}{4}\log^{4}(1+x)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $H_{n}$, we can write
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}}{(n+1)^{3}}x^{n+1}\\
&=\frac{1}{2}\int_{0}^{x}\frac{\mathrm{d}u}{u}\int_{0}^{u}\frac{\log^{2}(1-t)}{t}\mathrm{d}t\\
&=\frac{1}{2}\log(x)\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t
-\frac{1}{2}\int_{0}^{x}\frac{\log(t)\log^{2}(1-t)}{t}\mathrm{d}t\\
&=\frac{1}{2}\log(x)\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t
-\frac{1}{4}\log^{2}(x)\log^{2}(1-x)-\frac{1}{2}\int_{0}^{x}\frac{\log^{2}(t)\log(1-t)}{1-t}\mathrm{d}t\,,\\
&\quad \sum_{n=1}^\infty \frac{H_{n}}{(n+1)^{3}}(-x)^{n+1}\\
&=\frac{1}{2}\int_{0}^{x}\frac{\mathrm{d}u}{u}\int_{0}^{u}\frac{\log^{2}(1+t)}{t}\mathrm{d}t\\
&=\frac{1}{2}\log(x)\int_{0}^{x}\frac{\log^{2}(1+t)}{t}\mathrm{d}t
-\frac{1}{4}\log^{2}(x)\log^{2}(1+x)+\frac{1}{2}\int_{0}^{x}\frac{\log^{2}(t)\log(1+t)}{1+t}\mathrm{d}t\,.
\end{align*}
Note that
$$
\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}x^{n}
=\sum_{n=1}^\infty \frac{H_{n}}{(n+1)^{3}}x^{n+1}+\sum_{n=1}^\infty \frac{x^{n}}{n^{4}}\,,
$$
and
$$
\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}(-x)^{n}
=\sum_{n=1}^\infty \frac{H_{n}}{(n+1)^{3}}(-x)^{n+1}+\sum_{n=1}^\infty \frac{(-x)^{n}}{n^{4}}\,,
$$
with the help of Lemmata \ref{lem0}, \ref{lem000} and \ref{lem0000}, we get the desired result.
\end{proof}
\begin{theorem}\label{thm2}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}x^{n}\\
&=-Li_{2}(x)\log(1-x)-\log(x)\log^{2}(1-x)-2\log(1-x)Li_{2}(1-x)\\
&\quad +2Li_{3}(1-x)-2Li_{3}(1)+Li_{3}(x)\,,\\
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}(-x)^{n}\\
&=-Li_{2}(-x)\log(1+x)-\log(x)\log^{2}(1+x)+\frac{2}{3}\log^{3}(1+x)\\
&\quad +2\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)+2Li_{3}\bigg(\frac{1}{1+x}\bigg)-2Li_{3}(1)+Li_{3}(-x)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $H_{n}^{(2)}$, we can write
\begin{align*}
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n+1}x^{n+1}
=-\log(1-x)Li_{2}(x)-\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n+1}(-x)^{n+1}
=-\log(1+x)Li_{2}(-x)-\int_{0}^{x}\frac{\log^{2}(1+t)}{t}\mathrm{d}t\,.
\end{align*}
With the help of Lemma \ref{lem000}, we get the desired result.
\end{proof}
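For example, setting $x=1$ in the second identity and using $Li_{2}\big(\frac{1}{2}\big)=\frac{1}{12}\pi^{2}-\frac{1}{2}\log^{2}(2)$, $Li_{3}\big(\frac{1}{2}\big)=\frac{7}{8}\zeta(3)-\frac{1}{12}\pi^{2}\log(2)+\frac{1}{6}\log^{3}(2)$ and $Li_{3}(-1)=-\frac{3}{4}\zeta(3)$ gives
\begin{align*}
\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}(-1)^{n}=\frac{1}{12}\pi^{2}\log(2)-\zeta(3)\,,
\end{align*}
which also appears in the Example at the end of this section.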
\begin{theorem}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}x^{n}\\
&=-Li_{4}(x)-2 Li_{4}\bigg(\frac{-x}{1-x}\bigg)+2 Li_{4}(1-x)-2 Li_{4}(1)
+2 Li_{3}(x)\log(1-x)\\
&\quad -2 Li_{3}(1)\log(1-x)+\frac{1}{2}Li_{2}(x)^{2}-\frac{1}{6}\pi^{2}\log^{2}(1-x)\\
&\quad +\frac{1}{3}\log(x)\log^{3}(1-x)-\frac{1}{12}\log^{4}(1-x)\,,\\
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}}(-x)^{n}\\
&=-Li_{4}(-x)-2 Li_{4}\bigg(\frac{x}{1+x}\bigg)-2 Li_{4}\bigg(\frac{1}{1+x}\bigg)+2 Li_{4}(1)\\
&\quad -2 Li_{3}\bigg(\frac{x}{1+x}\bigg)\log(1+x)-2 Li_{3}\bigg(\frac{1}{1+x}\bigg)\log(1+x)+\frac{1}{2}Li_{2}(-x)^{2}\\
&\quad -\frac{1}{6}\pi^{2}\log^{2}(1+x)
-\frac{2}{3}\log(x)\log^{3}(1+x)+\frac{1}{2}\log^{4}(1+x)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $H_{n}^{(2)}$, we can write
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{(n+1)^{2}}x^{n+1}\\
&=\int_{0}^{x}\frac{\mathrm{d}u}{u}\int_{0}^{u}\frac{Li_{2}(t)}{1-t}\mathrm{d}t\\
&=\log(x)\int_{0}^{x}\frac{Li_{2}(t)}{1-t}\mathrm{d}t-\int_{0}^{x}\frac{\log(t)Li_{2}(t)}{1-t}\mathrm{d}t\\
&=\log(x)\int_{0}^{x}\frac{Li_{2}(t)}{1-t}\mathrm{d}t+\log(x)\log(1-x)Li_{2}(x)+\frac{1}{2}Li_{2}(x)^{2}\\
&\quad +\int_{0}^{x}\frac{\log(t)\log^{2}(1-t)}{t}\mathrm{d}t\,,\\
&\quad \sum_{n=1}^\infty \frac{H_{n}^{(2)}}{(n+1)^{2}}(-x)^{n+1}\\
&=\log(x)\int_{0}^{-x}\frac{Li_{2}(t)}{1-t}\mathrm{d}t+\int_{0}^{x}\frac{\log(t)Li_{2}(-t)}{1+t}\mathrm{d}t\\
&=\log(x)\int_{0}^{-x}\frac{Li_{2}(t)}{1-t}\mathrm{d}t+\log(x)\log(1+x)Li_{2}(-x)
+\frac{1}{2}Li_{2}(-x)^{2}\\
&\quad +\int_{0}^{x}\frac{\log(t)\log^{2}(1+t)}{t}\mathrm{d}t\,.
\end{align*}
With the help of Lemmata \ref{lem0}, \ref{lem0000} and Theorem \ref{thm2}, we get the desired result.
\end{proof}
\begin{Remark}
It seems difficult to give explicit expressions for infinite series of types
$\sum_{n=1}^\infty \frac{H_{n}}{n^{4}}x^{n}$ and
$\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{3}}x^{n}$, since the integrals $\int_{0}^{x}\frac{\log^{2}(t)\log^{2}(1-t)}{t}\mathrm{d}t$ and $\int_{0}^{x}\frac{\log^{2}(t)Li_{2}(t)}{1-t}\mathrm{d}t$ are not known to be related to the polylogarithm functions.
\end{Remark}
\begin{Example}
Some illustrative examples are as follows:
\begin{align*}
&\sum_{n=1}^\infty \frac{H_{n}}{n^{3}\cdot2^{n}}
=Li_{4}\bigg(\frac{1}{2}\bigg)+\frac{1}{720}\pi^{4}-\frac{1}{8}\log(2)\zeta(3)
+\frac{1}{24}\log^{4}(2)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}}{n^{3}\cdot2^{n}}(-1)^{n}
=2Li_{4}\bigg(-\frac{1}{2}\bigg)+Li_{4}\bigg(\frac{1}{3}\bigg)+Li_{4}\bigg(\frac{2}{3}\bigg)
-\frac{1}{90}\pi^{4}\\
&\qquad \qquad \qquad \quad +\log\bigg(\frac{3}{2}\bigg)Li_{3}\bigg(\frac{2}{3}\bigg)+\log\bigg(\frac{3}{2}\bigg)Li_{3}\bigg(\frac{1}{3}\bigg)\\
&\qquad \qquad \qquad \quad +\frac{1}{12}\pi^{2}\log^{2}\bigg(\frac{3}{2}\bigg)
-\frac{1}{12}\log^{4}\bigg(\frac{3}{2}\bigg)-\frac{1}{6}\log(6)\log^{3}\bigg(\frac{3}{2}\bigg)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}}{n^{3}}(-1)^{n}
=2Li_{4}\bigg(\frac{1}{2}\bigg)-\frac{11}{360}\pi^{4}+\frac{7}{4}\log(2)\zeta(3)\\
&\qquad \qquad \qquad \quad -\frac{1}{12}\pi^{2}\log^{2}(2)+\frac{1}{12}\log^{4}(2)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{(n+1)2^{n+1}}
=-\frac{1}{4}\zeta(3)+\frac{1}{12}\pi^{2}\log(2)-\frac{1}{6}\log^{3}(2)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n\cdot2^{n}}
=\frac{5}{8}\zeta(3)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}(\sqrt{5}-1)^{n+1}}{(n+1)2^{n+1}}
=-\frac{2}{5}\zeta(3)-\frac{1}{5}\pi^{2}\log\bigg(\frac{\sqrt{5}-1}{2}\bigg)+\frac{2}{3}\log^{3}\bigg(\frac{\sqrt{5}-1}{2}\bigg)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}(\sqrt{5}-1)^{n}}{n\cdot2^{n}}
=-\frac{2}{5}\zeta(3)-\frac{1}{5}\pi^{2}\log\bigg(\frac{\sqrt{5}-1}{2}\bigg)+\frac{2}{3}\log^{3}\bigg(\frac{\sqrt{5}-1}{2}\bigg)\\
&\qquad \qquad \qquad\qquad\quad +Li_{3}\bigg(\frac{\sqrt{5}-1}{2}\bigg)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n}(-1)^{n}=\frac{1}{12}\pi^{2}\log(2)-\zeta(3)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}\cdot2^{n}}
=Li_{4}\bigg(\frac{1}{2}\bigg)+\frac{1}{1440}\pi^{4}+\frac{1}{4}\log(2)\zeta(3)
-\frac{1}{24}\pi^{2}\log^{2}(2)+\frac{1}{24}\log^{4}(2)\,,\\
&\sum_{n=1}^\infty \frac{H_{n}^{(2)}}{n^{2}\cdot2^{n}}(-1)^{n}=
-Li_{4}\bigg(-\frac{1}{2}\bigg)-2\bigg(Li_{4}\bigg(\frac{1}{3}\bigg)+Li_{4}\bigg(\frac{2}{3}\bigg)
-\frac{1}{90}\pi^{4}\bigg)\\
&\qquad \qquad \qquad\qquad\quad -2\bigg(\log\bigg(\frac{3}{2}\bigg)Li_{3}\bigg(\frac{2}{3}\bigg)
+\log\bigg(\frac{3}{2}\bigg)Li_{3}\bigg(\frac{1}{3}\bigg)\bigg)\\
&\qquad \qquad \qquad\qquad\quad +\frac{1}{2}Li_{2}\bigg(-\frac{1}{2}\bigg)^{2}-\frac{1}{6}\pi^{2}\log^{2}\bigg(\frac{3}{2}\bigg)
+\frac{1}{2}\log^{4}\bigg(\frac{3}{2}\bigg)\\
&\qquad \qquad \qquad\qquad\quad +\frac{2}{3}\log(2)\log^{3}\bigg(\frac{3}{2}\bigg)\,.
\end{align*}
\end{Example}
\section{Infinite series containing generalized alternating harmonic numbers}
\begin{Prop}\label{prop}
Let $p, n \in \mathbb N$ and $k \in \mathbb N_{0}$ with $0 \leq k \leq p$, then we have
\begin{align*}
&\overline{H}_{n}^{(p)}
=\sum_{j=1}^{k}(-1)^{j}\sum_{\ell=1}^{n}\frac{(-1)^{n+1-\ell}(n+1)}{\ell^{j}(n+1-\ell)^{p+1-j}}
+(-1)^{k+1}\sum_{\ell=1}^{n}\frac{(-1)^{n+1-\ell}}{\ell^{k}(n+1-\ell)^{p-k}}\,.
\end{align*}
In particular, we have
\begin{align*}
&\overline{H}_{n}^{(p)}
=\frac{n+1}{2}\sum_{j=1}^{p}(-1)^{j}\sum_{\ell=1}^{n}\frac{(-1)^{n+1-\ell}}{\ell^{j}(n+1-\ell)^{p+1-j}}\quad (p+n\quad\hbox{even})\,,\\
&\sum_{j=1}^{p}(-1)^{j}\sum_{\ell=1}^{n}\frac{(-1)^{n+1-\ell}}{\ell^{j}(n+1-\ell)^{p+1-j}}
=0\quad (p+n\quad\hbox{odd})\,.
\end{align*}
\end{Prop}
\begin{proof}
Integrating the generating function of $\overline{H}_{n}^{(p)}$, we can write
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{\overline{H}_{n}^{(p)}}{n+1}x^{n+1}=\int_{0}^{x}\frac{-Li_{p}(-t)}{1-t}\mathrm{d}t\\
&=-Li_{1}(x)Li_{p}(-x)+\int_{0}^{x}\frac{Li_{1}(t)Li_{p-1}(-t)}{t}\mathrm{d}t\\
&=-Li_{1}(x)Li_{p}(-x)+Li_{2}(x)Li_{p-1}(-x)-\int_{0}^{x}\frac{Li_{2}(t)Li_{p-2}(-t)}{t}\mathrm{d}t\\
&=\sum_{j=1}^{k}(-1)^{j}Li_{j}(x)Li_{p+1-j}(-x)+(-1)^{k+1}\int_{0}^{x}\frac{Li_{k}(t)Li_{p-k}(-t)}{t}\mathrm{d}t \quad (0 \leq k \leq p)\\
&=\sum_{j=1}^{p}(-1)^{j}Li_{j}(x)Li_{p+1-j}(-x)
+(-1)^{p}\int_{0}^{x}\frac{Li_{p}(t)}{1+t}\mathrm{d}t\,.
\end{align*}
Comparing the coefficients on both sides gives the desired result.
\end{proof}
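For instance, take $p=2$ and $n=2$ (so that $p+n$ is even) in the first special case. The inner sums are
\begin{align*}
\sum_{\ell=1}^{2}\frac{(-1)^{3-\ell}}{\ell(3-\ell)^{2}}=\frac{1}{4}-\frac{1}{2}=-\frac{1}{4}\,,\qquad
\sum_{\ell=1}^{2}\frac{(-1)^{3-\ell}}{\ell^{2}(3-\ell)}=\frac{1}{2}-\frac{1}{4}=\frac{1}{4}\,,
\end{align*}
so the right-hand side equals $\frac{3}{2}\big(-(-\frac{1}{4})+\frac{1}{4}\big)=\frac{3}{4}=1-\frac{1}{4}=\overline{H}_{2}^{(2)}$, as expected.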
\begin{Lem}(\cite[p.303-p.304]{Lewin})\label{lem1}
The following formulas are known:
\begin{align*}
&\quad \int_{0}^{t}\frac{\log(a+bt)}{c+et}\mathrm{d}t\\
&=\frac{1}{e}\log\bigg(\frac{ae-bc}{e}\bigg)\log\bigg(\frac{c+et}{c}\bigg)-\frac{1}{e}Li_{2}\bigg(\frac{b(c+et)}{bc-ae}\bigg)
+\frac{1}{e}Li_{2}\bigg(\frac{bc}{bc-ae}\bigg)\,\\
&=\frac{1}{2e}\log^{2}\bigg(\frac{b}{e}(c+et)\bigg)-\frac{1}{2e}\log^{2}\bigg(\frac{bc}{e}\bigg)
+\frac{1}{e}Li_{2}\bigg(\frac{bc-ae}{b(c+et)}\bigg)-\frac{1}{e}Li_{2}\bigg(\frac{bc-ae}{bc}\bigg)\,.
\end{align*}
\end{Lem}
\begin{theorem}\label{thm3}
Let $\lvert x \rvert \leq 1$, then we have
\begin{align*}
\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n}x^{n}
&=-\log(2)\log(1-x)+Li_{2}\bigg(\frac{1}{2}(1-x)\bigg)-Li_{2}\bigg(\frac{1}{2}\bigg)-Li_{2}(-x)\\
&=-\log(1-x)\log(1+x)+\log(2)\log(1+x)-Li_{2}\bigg(\frac{1}{2}(1+x)\bigg)\\
&\quad +Li_{2}\bigg(\frac{1}{2}\bigg)-Li_{2}(-x)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $\overline{H}_{n}$, we can write
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{\overline{H}_{n}}{n+1}x^{n+1}=\int_{0}^{x}\frac{\log(1+t)}{1-t}\mathrm{d}t\\
&=-\log(1-x)\log(1+x)+\int_{0}^{x}\frac{\log(1-t)}{1+t}\mathrm{d}t\,.
\end{align*}
Setting $a=b=c=1$, $e=-1$ and $a=c=e=1$, $b=-1$ in Lemma \ref{lem1}, respectively, we have
\begin{align*}
&\int_{0}^{x}\frac{\log(1+t)}{1-t}\mathrm{d}t
=-\log(2)\log(1-x)+Li_{2}\bigg(\frac{1}{2}(1-x)\bigg)-Li_{2}\bigg(\frac{1}{2}\bigg)\,,\\
&\int_{0}^{x}\frac{\log(1-t)}{1+t}\mathrm{d}t=\log(2)\log(1+x)-Li_{2}\bigg(\frac{1}{2}(1+x)\bigg)+Li_{2}\bigg(\frac{1}{2}\bigg)\,.
\end{align*}
Note that
$$
\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n}x^{n}=\sum_{n=1}^\infty \frac{\overline{H}_{n-1}}{n}x^{n}+\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n^{2}}\,,
$$
thus we get the desired result.
\end{proof}
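For example, setting $x=-1$ in the first expression and using $Li_{2}(1)=\frac{1}{6}\pi^{2}$ and $Li_{2}\big(\frac{1}{2}\big)=\frac{1}{12}\pi^{2}-\frac{1}{2}\log^{2}(2)$ gives
\begin{align*}
\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n}(-1)^{n}=-\log^{2}(2)+Li_{2}(1)-Li_{2}\bigg(\frac{1}{2}\bigg)-Li_{2}(1)=-\frac{1}{12}\pi^{2}-\frac{1}{2}\log^{2}(2)\,,
\end{align*}
in agreement with the example given at the end of this section.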
\begin{Lem}\label{lem2}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \int_{0}^{x}\frac{\log(1-t)\log(1+t)}{t}\mathrm{d}t\\
&=\frac{1}{2}\log(x)\log^{2}(1-x^{2})-\frac{1}{2}\log(x)\log^{2}(1-x)-\frac{1}{2}\log(x)\log^{2}(1+x)\\
&\quad +\frac{1}{3}\log^{3}(1+x)+\frac{1}{2}\log(1-x^{2})Li_{2}(1-x^{2})-\log(1-x)Li_{2}(1-x)\\
&\quad +\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)-\frac{1}{2}Li_{3}(1-x^{2})+Li_{3}(1-x)-\frac{3}{2}Li_{3}(1)\\
&\quad +Li_{3}\bigg(\frac{1}{1+x}\bigg)\,.
\end{align*}
\end{Lem}
\begin{proof}
We start from
$$
\int_{0}^{x}\frac{\log^{2}(1-t^{2})}{t}\mathrm{d}t\,.
$$
This integral equals
$$
\int_{0}^{x}\frac{\log^{2}(1-t)}{t}\mathrm{d}t+2\int_{0}^{x}\frac{\log(1-t)\log(1+t)}{t}\mathrm{d}t
+\int_{0}^{x}\frac{\log^{2}(1+t)}{t}\mathrm{d}t\,.
$$
On the other hand, the substitution $u=t^{2}$ together with (\ref{Lewin1}) gives
\begin{align*}
&\quad \int_{0}^{x}\frac{\log^{2}(1-t^{2})}{t}\mathrm{d}t
=\frac{1}{2}\int_{0}^{x^{2}}\frac{\log^{2}(1-u)}{u}\mathrm{d}u\\
&=\log(x)\log^{2}(1-x^{2})+\log(1-x^{2})Li_{2}(1-x^{2})-Li_{3}(1-x^{2})+Li_{3}(1)\,.
\end{align*}
With the help of (\ref{Lewin1}) and (\ref{Lewin2}), we get the desired result.
\end{proof}
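In particular, letting $x\to1$ in the lemma and using $Li_{2}\big(\frac{1}{2}\big)=\frac{1}{12}\pi^{2}-\frac{1}{2}\log^{2}(2)$ and $Li_{3}\big(\frac{1}{2}\big)=\frac{7}{8}\zeta(3)-\frac{1}{12}\pi^{2}\log(2)+\frac{1}{6}\log^{3}(2)$ recovers the well-known evaluation
\begin{align*}
\int_{0}^{1}\frac{\log(1-t)\log(1+t)}{t}\mathrm{d}t=-\frac{5}{8}\zeta(3)\,.
\end{align*}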
\begin{theorem}\label{thm4}
Let $0 \leq x \leq 1$, then we have
\begin{align*}
&\quad \sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}x^{n}\\
&=Li_{2}(-x)\log(1-x)-Li_{3}(-x)+\frac{1}{2}\log(x)\log^{2}(1-x^{2})-\frac{1}{2}\log(x)\log^{2}(1-x)\\
&\quad -\frac{1}{2}\log(x)\log^{2}(1+x)
+\frac{1}{3}\log^{3}(1+x)+\frac{1}{2}\log(1-x^{2})Li_{2}(1-x^{2})\\
&\quad -\log(1-x)Li_{2}(1-x) +\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)-\frac{1}{2}Li_{3}(1-x^{2})\\
&\quad +Li_{3}(1-x)+Li_{3}\bigg(\frac{1}{1+x}\bigg)-\frac{3}{2}Li_{3}(1)\,,\\
&\quad \sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}(-x)^{n}\\
&=Li_{2}(x)\log(1+x)-Li_{3}(x)+\frac{1}{2}\log(x)\log^{2}(1-x^{2})-\frac{1}{2}\log(x)\log^{2}(1-x)\\
&\quad -\frac{1}{2}\log(x)\log^{2}(1+x)
+\frac{1}{3}\log^{3}(1+x)+\frac{1}{2}\log(1-x^{2})Li_{2}(1-x^{2})\\
&\quad -\log(1-x)Li_{2}(1-x) +\log(1+x)Li_{2}\bigg(\frac{1}{1+x}\bigg)-\frac{1}{2}Li_{3}(1-x^{2})\\
&\quad +Li_{3}(1-x)+Li_{3}\bigg(\frac{1}{1+x}\bigg)-\frac{3}{2}Li_{3}(1)\,.
\end{align*}
\end{theorem}
\begin{proof}
Integrating the generating function of $\overline{H}_{n}^{(2)}$, we can write
\begin{align*}
&\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n+1}x^{n+1}
=\log(1-x)Li_{2}(-x)+\int_{0}^{x}\frac{\log(1-t)\log(1+t)}{t}\mathrm{d}t\,,\\
&\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n+1}(-x)^{n+1}
=\log(1+x)Li_{2}(x)+\int_{0}^{x}\frac{\log(1-t)\log(1+t)}{t}\mathrm{d}t\,.
\end{align*}
Note that
\begin{align*}
\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}x^{n}=\sum_{n=1}^\infty \frac{\overline{H}_{n-1}^{(2)}}{n}x^{n}+\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n^{3}}\,,\\
\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}}{n}(-x)^{n}=\sum_{n=1}^\infty \frac{\overline{H}_{n-1}^{(2)}}{n}(-x)^{n}-\sum_{n=1}^\infty \frac{x^{n}}{n^{3}}\,,
\end{align*}
with the help of Lemma \ref{lem2}, we get the desired result.
\end{proof}
\begin{Example}
Some illustrative examples are as follows:
\begin{align*}
&\sum_{n=1}^\infty \frac{\overline{H}_{n}(-1)^{n}}{n}
=-\frac{1}{12}\pi^{2}-\frac{1}{2}\log^{2}(2)\,,\\
&\sum_{n=1}^\infty \frac{\overline{H}_{n}^{(2)}(-1)^{n}}{n}
=-\frac{13}{8}\zeta(3)+\frac{1}{6}\pi^{2}\log(2)\,.
\end{align*}
\end{Example}
\begin{thebibliography}{99}
\bibitem{Doelder}
De Doelder PJ. {\em
On some series containing $\psi(x)-\psi(y)$ and $(\psi(x)-\psi(y))^2$ for certain values of $x$ and $y$}.
J. Comput. Appl. Math. 37 (1991), no. 1-3, 125--141.
\bibitem{Lewin}
Lewin L. {\em
Polylogarithms and associated functions}.
North-Holland, Amsterdam, 1981.
\end{thebibliography}
\end{document}
\begin{document}
\title{Convoluted Fractional Poisson Process}
\author[Kuldeep Kumar Kataria]{K. K. Kataria}
\address{Kuldeep Kumar Kataria, Department of Mathematics,
Indian Institute of Technology Bhilai, Raipur-492015, India.}
\email{kuldeepk@iitbhilai.ac.in}
\author[Mostafizar Khandakar]{M. Khandakar}
\address{Mostafizar Khandakar, Department of Mathematics,
Indian Institute of Technology Bhilai, Raipur-492015, India.}
\email{mostafizark@iitbhilai.ac.in}
\date{May 21, 2020.}
\subjclass[2010]{Primary : 60G22; 60G55; Secondary: 60G51; 60J75.}
\keywords{time fractional Poisson process; discrete convolution; subordination; long-range dependence property; short-range dependence property.}
\begin{abstract}
In this paper, we introduce and study a convoluted version of the time fractional Poisson process by taking the discrete convolution with respect to the space variable in the system of fractional differential equations that governs its state probabilities. We call the introduced process the convoluted fractional Poisson process (CFPP). The explicit expression for the Laplace transform of its state probabilities is obtained, whose inversion yields its one-dimensional distribution. Some of its statistical properties such as the probability generating function, moment generating function, moments {\it etc.} are obtained. A special case of the CFPP, namely, the convoluted Poisson process (CPP), is studied and its time-changed subordination relationships with the CFPP are discussed. It is shown that the CPP is a L\'evy process, a fact which is used to establish the long-range dependence property of the CFPP. Moreover, we show that the increments of the CFPP exhibit the short-range dependence property.
\end{abstract}
\title{Convoluted Fractional Poisson Process}
\section{Introduction}
The Poisson process is a renewal process with exponentially distributed waiting times. This L\'evy process is often used to model the counting phenomenon. Empirically, it is observed that the process with heavy-tailed distributed waiting times offers a better model than the ones with light-tailed distributed waiting times. For this purpose several fractional generalizations of the homogeneous Poisson process are introduced and studied by researchers in the past two decades. These generalizations give rise to some interesting connections between the theory of stochastic subordination, fractional calculus and renewal processes. These fractional processes can be broadly categorized into two types: the time fractional types and the space fractional types.
The time fractional versions of the Poisson process are obtained by replacing the time derivative in the governing difference-differential equations of the state probabilities of the Poisson process by certain fractional derivatives. These include the Riemann-Liouville fractional derivative (see Laskin (2003)), the Caputo fractional derivative (see Beghin and Orsingher (2009)), the Prabhakar derivative (see Polito and Scalas (2016)), the Saigo fractional derivative (see Kataria and Vellaisamy (2017a)) {\it etc}. These time fractional models are further generalized to the state-dependent fractional Poisson processes (see Garra {\it et al.} (2015)) and the mixed fractional Poisson process (see Beghin (2012) and Aletti {\it et al.} (2018)). The governing difference-differential equations of the state-dependent fractional Poisson processes depend on the number of events that have occurred up to any given time $t$. The properties related to the notion of long memory, such as the long-range dependence (LRD) property and the short-range dependence (SRD) property, are obtained for such fractional processes by Biard and Saussereau (2014), Maheshwari and Vellaisamy (2016), and Kataria and Khandakar (2020).
Orsingher and Polito (2012) introduced a space fractional version of the Poisson process, namely, the space fractional Poisson process (SFPP). It is characterized as a stochastic process obtained by time-changing the Poisson process by an independent stable subordinator. Orsingher and Toaldo (2015) studied a class of generalized space fractional Poisson processes associated with Bern\v stein functions. A particular choice of Bern\v stein function leads to a specific point process. Besides the SFPP, this class includes the relativistic Poisson process and the gamma-Poisson process as particular cases. Beghin and Vellaisamy (2018) introduced and studied a process obtained by time-changing the SFPP by a gamma subordinator. A specific characteristic of these generalized space fractional processes is that their jumps can take any positive value.
The time fractional Poisson process (TFPP), denoted by $\{N^\alpha(t)\}_{t\geq0}$, $0<\alpha\leq 1$, is a time fractional version of the homogeneous Poisson process whose state probabilities $p^\alpha(n,t)=\mathrm{Pr}\{N^\alpha(t)=n\}$ satisfy (see Laskin (2003), Beghin and Orsingher (2009))
\begin{equation}\label{qqawq112}
\partial_t^\alpha p^\alpha(n,t)=-\lambda p^\alpha(n,t)+\lambda p^\alpha(n-1,t),\ \ n\geq 0,
\end{equation}
with $p^\alpha(-1,t)=0$, $t\geq 0$ and the initial conditions $p^\alpha(0,0)=1$ and $p^\alpha(n,0)=0$, $n\geq 1$. Here, $\lambda>0$ is the intensity parameter.
The derivative $\partial_t^\alpha$ involved in (\ref{qqawq112}) is the Dzhrbashyan--Caputo fractional derivative defined as
\begin{equation}\label{plm1}
\partial^{\alpha}_{t}f(t)\coloneqq\begin{cases}
\dfrac{1}{\Gamma\left( 1-\alpha \right)}\displaystyle\int_{0}^{t}(t-s)^{-\alpha}f^{\prime}(s)\,\mathrm{d}s,\ \ 0<\alpha<1,\vspace*{.2cm}\\
f^{\prime}(t), \ \ \alpha=1,
\end{cases}
\end{equation}
whose Laplace transform is given by (see Kilbas {\it et al.} (2006), Eq. (5.3.3))
\begin{equation}\label{lc}
\mathcal{L}\left(\partial^{\alpha}_{t}f(t);s\right)=s^{\alpha}\tilde{f}(s)-s^{\alpha-1}f(0),\ \ s>0.
\end{equation}
For $\alpha=1$, the TFPP reduces to the Poisson process.
The TFPP is characterized as the Poisson process $\{N(t)\}_{t\geq0}$ time-changed by an inverse $\alpha$-stable subordinator $\{H^{\alpha}(t)\}_{t\geq0}$ (see Meerschaert {\it et al.} (2011)), that is,
\begin{equation}\label{1.1df}
N^\alpha(t)\overset{d}{=}N(H^\alpha(t)),
\end{equation}
where $\overset{d}{=}$ denotes equality in distribution.
In this paper, we introduce a counting process by letting the intensity vary as a function of the states and by taking the discrete convolution in (\ref{qqawq112}). The discrete convolution used is defined in (\ref{def}). We call the introduced process the convoluted fractional Poisson process (CFPP) and denote it by $\{\mathcal{N}^{\alpha}_{c}(t)\}_{t\ge0}$, $0<\alpha\le1$. It is defined as the stochastic process whose state probabilities $p^\alpha_{c}(n,t)=\mathrm{Pr}\{\mathcal{N}^{\alpha}_{c}(t)=n\}$ satisfy
\begin{equation}\label{modellq}
\partial^{\alpha}_{t}p^\alpha_{c}(n,t)=-\lambda_{n}*p^\alpha_{c}(n,t)+\lambda_{n-1}*p^\alpha_{c}(n-1,t),\ \ \ n\ge0,
\end{equation}
with initial conditions
\begin{equation*}
p^\alpha_{c}(n,0)=\begin{cases}
1,\ \ n=0,\\
0,\ \ n\ge1,
\end{cases}
\end{equation*}
and $p^\alpha_{c}(-n,t)=0$ for all $n\ge1$, $t\ge0$. Also, $\{\lambda_{j},\ j\in\mathbb{Z}\}$ is a non-increasing sequence of intensity parameters such that $\lambda_{j}=0$ for all $j<0$, $\lambda_{0}>0$ and $\lambda_{j}\ge0$ for all $j>0$ with $\lim\limits_{j\to\infty}\lambda_{j+1}/\lambda_{j}<1$.
The Laplace transform of the state probabilities of the CFPP is inverted to obtain its one-dimensional distribution in terms of the Mittag-Leffler function, defined in (\ref{mit}), as
\begin{equation*}
p^\alpha_{c}(n,t)=\begin{cases}
E_{\alpha,1}(-\lambda_{0}t^{\alpha}),\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Theta_{n}^{k}}k!\prod_{j=1}^{n}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k\alpha}E_{\alpha,k\alpha+1}^{k+1}(-\lambda_{0}t^{\alpha}),\ \ n\ge1,
\end{cases}
\end{equation*}
where the sum is taken over the set $\Theta_{n}^{k}=\{(k_{1},k_{2},\dots,k_{n}):\sum_{j=1}^{n}k_{j}=k, \ \ \sum_{j=1}^{n}jk_{j}=n,\ k_{j}\in\mathbb{N}_0\}$. An alternate expression for $p^\alpha_{c}(n,t)$ is obtained where the sum is taken over a slightly simplified set. It is observed that the CFPP is not a renewal process. The TFPP is obtained as a particular case of the CFPP by taking $\lambda_{n}=0$ for all $n\ge1$. Further, $\alpha=1$ gives the homogeneous Poisson process.
The paper is organized as follows: In Section \ref{Section2}, we set some notation and give some preliminary results related to the Mittag-Leffler function, its generalizations, Bell polynomials {\it etc}. In Section \ref{Section3}, we introduce the CFPP and obtain its state probabilities. It is shown that the CFPP is a limiting case of a fractional counting process introduced and studied by Di Crescenzo {\it et al.} (2016). Its probability generating function (pgf), factorial moments, moment generating function (mgf) and moments, including the mean and variance, are derived. Also, it is shown that the CFPP is a fractional compound Poisson process. In Section \ref{Section4}, we study a particular case of the CFPP, namely, the convoluted Poisson process (CPP). It is shown that the CPP is a L\'evy process. Some subordination results relating the CPP, the CFPP and the inverse stable subordinator are obtained. In Section \ref{Section5}, we show that the CFPP exhibits the LRD property whereas its increments have the SRD property.
\section{Preliminaries}\label{Section2}
The set of integers is denoted by $\mathbb{Z}$, the set of positive integers by $\mathbb{N}$ and the set of non-negative integers by $\mathbb{N}_0$. The following definitions and known results related to discrete convolution, Bell polynomials, Mittag-Leffler function and its generalizations will be used.
\subsection{Discrete Convolution}
The discrete convolution of two real-valued functions $f$ and $g$ whose support is the set of integers is defined as (see Damelin and Miller (2012), p. 232)
\begin{equation}\label{def}
(f*g)(n)\coloneqq\sum_{j=-\infty}^{\infty}f(j)g(n-j).
\end{equation}
Here, $\sum_{j=-\infty}^{\infty}|f(j)|<\infty$ and $\sum_{j=-\infty}^{\infty}|g(j)|<\infty$, that is, $f,g\in\ell^1$ space.
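For concreteness, the following short Python sketch (an illustration added here, not part of the original development; the two sequences are chosen arbitrarily) evaluates the discrete convolution (\ref{def}) for finitely supported sequences stored as index-to-value dictionaries.
\begin{verbatim}
# Minimal sketch: discrete convolution (f*g)(n) of two finitely supported
# real sequences on the integers, stored as index -> value dictionaries.
def discrete_convolution(f, g):
    out = {}
    for i, fi in f.items():
        for j, gj in g.items():
            out[i + j] = out.get(i + j, 0.0) + fi * gj
    return out

# Illustrative sequences (an arbitrary choice, not taken from the paper).
f = {0: 1.0, 1: 0.5, 2: 0.25}
g = {0: 2.0, 1: 1.0}
print(discrete_convolution(f, g))   # {0: 2.0, 1: 2.0, 2: 1.0, 3: 0.25}
\end{verbatim}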
\subsection{Bell polynomials}
The ordinary Bell polynomials $\hat{B}_{n,k}$ in $n-k+1$ variables are defined as
\begin{equation*}
\hat{B}_{n,k}(u_{1},u_{2},\dots,u_{n-k+1})\coloneqq\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{u_{j}^{k_{j}}}{k_{j}!},
\end{equation*}
where
\begin{equation}\label{lambdaee11}
\Lambda_{n}^{k}=\left\{(k_{1},k_{2},\dots,k_{n-k+1}):\sum_{j=1}^{n-k+1}k_{j}=k, \ \ \sum_{j=1}^{n-k+1}jk_{j}=n, \ k_{j}\in\mathbb{N}_0\right\}.
\end{equation}
The following results hold (see Comtet (1974), pp. 133-137):
\begin{equation}\label{fm1}
\exp\left(x\sum_{j=1}^{\infty}u_{j}t^{j}\right)=1+\sum_{n=1}^{\infty}t^{n}\sum_{k=1}^{n}\hat{B}_{n,k}(u_{1},u_{2},\dots,u_{n-k+1})\frac{x^{k}}{k!}
\end{equation}
and
\begin{equation}\label{fm2}
\left(\sum_{j=1}^{\infty}u_{j}t^{j}\right)^{k}=\sum_{n=k}^{\infty}\hat{B}_{n,k}(u_{1},u_{2},\dots,u_{n-k+1})t^{n}.
\end{equation}
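The next Python sketch (an illustration, not part of the paper; the values of $u_j$, $n$ and $k$ are arbitrary) computes $\hat{B}_{n,k}$ by brute-force enumeration of the index set $\Lambda_{n}^{k}$ in (\ref{lambdaee11}) and checks identity (\ref{fm2}) against a direct expansion of the $k$th power of the series.
\begin{verbatim}
# Sketch: ordinary Bell polynomials via enumeration of Lambda_n^k, and a
# numerical check of identity (fm2) for one arbitrary choice of u, n, k.
from itertools import product
from math import factorial
import numpy as np

def bell_hat(n, k, u):
    # u[0], u[1], ... play the role of u_1, u_2, ...; only u_1..u_{n-k+1} enter.
    total = 0.0
    m = n - k + 1
    for ks in product(range(k + 1), repeat=m):
        if sum(ks) == k and sum((j + 1) * kj for j, kj in enumerate(ks)) == n:
            term = float(factorial(k))
            for kj, uj in zip(ks, u):
                term *= uj**kj / factorial(kj)
            total += term
    return total

u = [1.0, 2.0, 0.5, 3.0]                 # u_1, u_2, u_3, u_4
n, k = 4, 2
coeffs = np.array([0.0] + u)             # coefficients of u_1 t + u_2 t^2 + ...
power = np.array([1.0])
for _ in range(k):
    power = np.convolve(power, coeffs)   # coefficients of (sum_j u_j t^j)^k
print(bell_hat(n, k, u), power[n])       # both equal 2*u_1*u_3 + u_2^2 = 5.0
\end{verbatim}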
\subsection{Mittag-Leffler function and its generalizations}
The Mellin-Barnes representation of the exponential function is given by (see Paris and Kaminski (2001), Eq. (3.3.2))
\begin{equation}\label{me}
e^{x}=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma(z)(-x)^{-z}\mathrm{d}z, \ \ x\neq0,
\end{equation}
where $i=\sqrt{-1}$.
The one-parameter Mittag-Leffler function is a generalization of the exponential function. It is defined as (see Mathai and Haubold (2008))
\begin{equation*}
E_{\alpha, 1}(x)\coloneqq\sum_{k=0}^{\infty} \frac{x^{k}}{\Gamma(k\alpha+1)},\ \ x\in\mathbb{R},
\end{equation*}
where $\alpha>0$. For $\alpha=1$, it reduces to the exponential function. It is further generalized to two-parameter and three-parameter Mittag-Leffler functions.
The three-parameter Mittag-Leffler function is defined as
\begin{equation}\label{mit}
E_{\alpha, \beta}^{\gamma}(x)\coloneqq\frac{1}{\Gamma(\gamma)}\sum_{k=0}^{\infty} \frac{\Gamma(\gamma+k)x^{k}}{k!\Gamma(k\alpha+\beta)},\ \ x\in\mathbb{R},
\end{equation}
where $\alpha>0$, $\beta>0$ and $\gamma>0$.
For $x\neq0$, its Mellin-Barnes representation is given by (see Mathai and Haubold (2008), Eq. (2.3.5))
\begin{equation}\label{m3}
E_{\alpha,\beta}^{\gamma}(x)=\frac{1}{2\pi i\Gamma(\gamma)}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma(z)\Gamma(\gamma-z)}{\Gamma(\beta-\alpha z)}(-x)^{-z}\mathrm{d}z,
\end{equation}
where $0<c<\gamma$. For $\gamma=1$, it reduces to the two-parameter Mittag-Leffler function. Further, $\beta=\gamma=1$ reduces it to the one-parameter Mittag-Leffler function. Note that (\ref{m3}) reduces to (\ref{me}) for $\alpha=\beta=\gamma=1$.
Let $\alpha>0$, $\beta>0$, $\gamma>0$, $t>0$ and let $x,y$ be any two reals. The Laplace transform of the function $t^{\beta-1}E^{\gamma}_{\alpha,\beta}(xt^{\alpha})$ is given by (see Kilbas {\it et al.} (2006), Eq. (1.9.13)):
\begin{equation}\label{mi}
\mathcal{L}\{t^{\beta-1}E^{\gamma}_{\alpha,\beta}(xt^{\alpha});s\}=\frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}-x)^{\gamma}},\ s>|x|^{1/\alpha}.
\end{equation}
The following result holds for the three-parameter Mittag-Leffler function (see Oliveira {\it et al.} (2016), Theorem 3.2):
\begin{equation}\label{formula}
\sum_{k=0}^{\infty} (yt^{\alpha})^{k}E_{\alpha,k\alpha+\beta}^{k+1}(xt^{\alpha})=E_{\alpha,\beta}((x+y)t^{\alpha}).
\end{equation}
Let $E_{\alpha, \beta}^{(n)}(\cdot)$ denote the $n$th derivative of the two-parameter Mittag-Leffler function. Then (see Kataria and Vellaisamy (2019), Eq. (7)):
\begin{equation}\label{re}
E_{\alpha, \beta}^{(n)}(x)=n!E_{\alpha, n\alpha+\beta}^{n+1}(x),\ \ n\ge0.
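As a numerical sanity check (not part of the paper), the truncated series below evaluates $E_{\alpha,\beta}^{\gamma}$ from (\ref{mit}) and verifies (\ref{formula}) for one arbitrary choice of $\alpha$, $\beta$, $t$, $x$ and $y$; the truncation lengths are assumptions adequate only for arguments of moderate size.
\begin{verbatim}
# Sketch: truncated-series evaluation of E_{alpha,beta}^{gamma}(x)
# and a check of the summation identity (formula).
from math import gamma, factorial

def ml3(x, alpha, beta, gam, terms=60):
    s = sum(gamma(gam + k) * x**k / (factorial(k) * gamma(k * alpha + beta))
            for k in range(terms))
    return s / gamma(gam)

alpha, beta, t = 0.7, 1.0, 1.3
x, y = -0.9, 0.4
lhs = sum((y * t**alpha)**k * ml3(x * t**alpha, alpha, k * alpha + beta, k + 1)
          for k in range(40))
rhs = ml3((x + y) * t**alpha, alpha, beta, 1.0)
print(lhs, rhs)   # the two values agree to several decimal places
\end{verbatim}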
\section{Convoluted Fractional Poisson Process}\label{Section3}
Here, we introduce and study a counting process by varying the intensity as a function of the states and by taking the discrete convolution in the governing difference-differential equation (\ref{qqawq112}) of the TFPP. The introduced process is called the convoluted fractional Poisson process (CFPP), which we denote by $\{\mathcal{N}^{\alpha}_{c}(t)\}_{t\ge0}$, $0<\alpha\le1$. We define it as the stochastic process whose state probabilities $p^\alpha_{c}(n,t)=\mathrm{Pr}\{\mathcal{N}^{\alpha}_{c}(t)=n\}$ satisfy
\begin{equation}\label{con}
\partial^{\alpha}_{t}p^\alpha_{c}(n,t)=-\lambda_{n}*p^\alpha_{c}(n,t)+\lambda_{n-1}*p^\alpha_{c}(n-1,t),\ \ \ n\ge0,
\end{equation}
with initial conditions
\begin{equation*}
p^\alpha_{c}(n,0)=\begin{cases}
1,\ \ n=0,\\
0,\ \ n\ge1,
\end{cases}
\end{equation*}
and $p^\alpha_{c}(-n,t)=0$ for all $n\ge1$, $t\ge0$.
Also, $\{\lambda_{j},\ j\in\mathbb{Z}\}$ is a non-increasing sequence of intensity parameters such that $\lambda_{j}=0$ for all $j<0$, $\lambda_{0}>0$ and $\lambda_{j}\ge0$ for all $j>0$ with $\lim\limits_{j\to\infty}\lambda_{j+1}/\lambda_{j}<1$. Thus, it follows that
\begin{equation}\label{asdesa11}
\sum_{j=0}^{\infty}(\lambda_{j-1}-\lambda_{j})=0,
\end{equation}
as $\sum_{j=0}^{\infty}\lambda_{j}<\infty$ implies $\lambda_{j}\to0$ as $j\to\infty$.
Using (\ref{def}), the system of fractional differential equations (\ref{con}) can be rewritten as
\begin{align}\label{model}
\partial^{\alpha}_{t}p^\alpha_{c}(n,t)&=-\sum_{j=0}^{n}\lambda_{j}p^\alpha_{c}(n-j,t)+\sum_{j=0}^{n-1}\lambda_{j}p^\alpha_{c}(n-j-1,t)\nonumber\\
&=-\sum_{j=0}^{n}\lambda_{j}p^\alpha_{c}(n-j,t)+\sum_{j=1}^{n}\lambda_{j-1}p^\alpha_{c}(n-j,t)\nonumber\\
&=-\lambda_{0}p^\alpha_{c}(n,t)+\sum_{j=1}^{n}(\lambda_{j-1}-\lambda_{j})p^\alpha_{c}(n-j,t),\ \ n\ge0.
\end{align}
Note that for $\lambda_{n}=0$ for all $n\ge1$, the CFPP reduces to the TFPP with intensity parameter $\lambda_{0}>0$.
\begin{remark}
Di Crescenzo {\it et al.} (2016) studied a fractional counting process $\{M^\alpha(t)\}_{t\geq0}$, $0<\alpha\leq 1$, which performs $k$ kinds of jumps of amplitude $1,2,\dots,k$ with positive rates $\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{k}$, where $k\in\mathbb{N}$ is fixed. Its state probabilities $q^\alpha(n,t)=\mathrm{Pr}\{M^\alpha(t)=n\}$ satisfy (see Di Crescenzo {\it et al.} (2016), Eq. (2.3))
\begin{equation}\label{cre}
\partial^{\alpha}_{t}q^\alpha(n,t)=-(\Lambda_{1}+\Lambda_{2}+\dots+\Lambda_{k})q^\alpha(n,t)+ \sum_{j=1}^{\min\{n,k\}}\Lambda_{j}q^\alpha(n-j,t),\ \ n\ge0,
\end{equation}
with
\begin{equation*}
q^\alpha(n,0)=\begin{cases}
1,\ \ n=0,\\
0,\ \ n\ge1.
\end{cases}
\end{equation*}
For $k=1$, the system of fractional differential equations (\ref{cre}) reduces to (\ref{qqawq112}). Thus, the TFPP follows as a particular case of $\{M^\alpha(t)\}_{t\geq0}$. It is important to note that the CFPP is not a particular case of $\{M^\alpha(t)\}_{t\geq0}$ for any choice of $k\in\mathbb{N}$. However, if we choose $\Lambda_{j}=\lambda_{j-1}-\lambda_{j}$ for all $j\ge1$, then $\Lambda_{1}+\Lambda_{2}+\dots+\Lambda_{k}=\lambda_{0}-\lambda_{k}$. As $\sum_{k=0}^{\infty}\lambda_{k}<\infty$ implies $\lambda_{k}\to0$ as $k\to\infty$, the system (\ref{cre}) reduces to (\ref{model}). Thus, the CFPP is obtained as a limiting process of $\{M^\alpha(t)\}_{t\geq0}$ by letting $k\to\infty$.
\end{remark}
The following result gives the Laplace transform of the state probabilities of CFPP.
\begin{proposition}
The Laplace transform of the state probabilities $\tilde{p}^\alpha_{c}(n,s)$, $s>0$, of CFPP is given by
\begin{equation}\label{lap}
\tilde{p}^\alpha_{c}(n,s)=\begin{cases}
\dfrac{s^{\alpha-1}}{s^{\alpha}+\lambda_{0}},\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Theta_{n}^{k}}k!\prod_{j=1}^{n}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alpha-1}}{(s^{\alpha}+\lambda_{0})^{k+1}},\ \ n\ge1,
\end{cases}
\end{equation}
where $\Theta_{n}^{k}=\{(k_{1},k_{2},\dots,k_{n}):\sum_{j=1}^{n}k_{j}=k,\ \sum_{j=1}^{n}jk_{j}=n,\ k_{j}\in\mathbb{N}_0\}$.
\end{proposition}
\begin{proof}
On applying the Laplace transform in (\rangleef{model}) and using (\rangleef{lc}), we get
\begin{equation*}
s^{\alphalpha}\tauilde{p}^\alphalpha_{c}(n,s)-s^{\alphalpha-1}p^\alphalpha_{c}(n,0)=-{\lambdangle}ambda_{0}\tauilde{p}^\alphalpha_{c}(n,s)+\sum_{m=1}^{n}({\lambdangle}ambda_{m-1}-{\lambdangle}ambda_{m})\tauilde{p}^\alphalpha_{c}(n-m,s).
\end{equation*}
Thus,
\begin{equation}{\lambdangle}abel{pns}
\tauilde{p}^\alphalpha_{c}(n,s)=(s^{\alphalpha}+{\lambdangle}ambda_{0})^{-1}{\lambdangle}eft(\sum_{m=1}^{n}({\lambdangle}ambda_{m-1}-{\lambdangle}ambda_{m})\tauilde{p}^\alphalpha_{c}(n-m,s)+s^{\alphalpha-1}p^\alphalpha_{c}(n,0)\rangleight).
\end{equation}
Put $n=0$ in the above equation and use the initial conditions given in (\rangleef{con}) to obtain
\begin{equation}{\lambdangle}abel{l0}
\tauilde{p}^\alphalpha_{c}(0,s)=\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}}.
\end{equation}
So, the result holds for $n=0$. Next, we put $n=1$ in (\ref{pns}) to get
\begin{equation*}
\tilde{p}^\alpha_{c}(1,s)=\dfrac{(\lambda_{0}-\lambda_{1})\tilde{p}^\alpha_{c}(0,s)}{s^{\alpha}+\lambda_{0}}=\dfrac{(\lambda_{0}-\lambda_{1})s^{\alpha-1}}{(s^{\alpha}+\lambda_{0})^{2}},
\end{equation*}
and the result holds for $n=1$. Now put $n=2$ in (\rangleef{pns}) to get
\begin{equation*}
\tauilde{p}^\alphalpha_{c}(2,s)=\dfrac{({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})^{2}s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{3}}+\dfrac{({\lambdangle}ambda_{1}-{\lambdangle}ambda_{2})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{2}}.
\end{equation*}
Substituting $n=3$ in (\rangleef{pns}), we get
\begin{equation*}
\tauilde{p}^\alphalpha_{c}(3,s)=\dfrac{({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})^{3}s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{4}}+\dfrac{2({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})({\lambdangle}ambda_{1}-{\lambdangle}ambda_{2})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{3}}+\dfrac{({\lambdangle}ambda_{2}-{\lambdangle}ambda_{3})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{2}}.
\end{equation*}
Now put $n=4$ in (\rangleef{pns}), we get
\begin{align*}
\tauilde{p}^\alphalpha_{c}(4,s)&=\dfrac{({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})^{4}s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{5}}+\dfrac{3({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})^{2}({\lambdangle}ambda_{1}-{\lambdangle}ambda_{2})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{4}}\\
&\ \ +\dfrac{{\lambdangle}eft(2({\lambdangle}ambda_{0}-{\lambdangle}ambda_{1})({\lambdangle}ambda_{2}-{\lambdangle}ambda_{3})+({\lambdangle}ambda_{1}-{\lambdangle}ambda_{2})^{2}\rangleight)s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{3}} +\dfrac{({\lambdangle}ambda_{3}-{\lambdangle}ambda_{4})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{2}}.
\end{align*}
Equivalently,
\begin{equation*}
\tauilde{p}^\alphalpha_{c}(4,s)=\sum_{k=1}^{4}\sum_{\Theta_{4}^{k}}k!\prod_{j=1}^{4}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k+1}} ,
\end{equation*}
where $\Theta_{4}^{k}=\{(k_{1},k_{2},k_{3},k_{4}):\ k_{1}+k_{2}+k_{3}+k_{4}=k,\ k_{1}+2k_{2}+3k_{3}+4k_{4}=4,\ k_{i}\in\mathbb{N}_0\}$.
Assume the result (\rangleef{lap}) holds for $n=l$. From (\rangleef{pns}), we have
\begin{align*}
\tauilde{p}^\alphalpha_{c}(l+1,s)&=(s^{\alphalpha}+{\lambdangle}ambda_{0})^{-1}{\lambdangle}eft(\sum_{m=1}^{l+1}({\lambdangle}ambda_{m-1}-{\lambdangle}ambda_{m})\tauilde{p}^\alphalpha_{c}(l+1-m,s)\rangleight)\\
&=(s^{\alphalpha}+{\lambdangle}ambda_{0})^{-1}{\lambdangle}eft(\sum_{m=1}^{l}({\lambdangle}ambda_{m-1}-{\lambdangle}ambda_{m})\tauilde{p}^\alphalpha_{c}(l+1-m,s)\rangleight)+\frac{({\lambdangle}ambda_{l}-{\lambdangle}ambda_{l+1})\tauilde{p}^\alphalpha_{c}(0,s)}{(s^{\alphalpha}+{\lambdangle}ambda_{0})}\\
&=\sum_{m=1}^{l}({\lambdangle}ambda_{m-1}-{\lambdangle}ambda_{m})\sum_{k=1}^{l+1-m}\sum_{\Theta_{l+1-m}^{k}}k!\prod_{j=1}^{l+1-m}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}s^{\alphalpha-1}}{k_{j}!(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k+2}}+\frac{({\lambdangle}ambda_{l}-{\lambdangle}ambda_{l+1})s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{2}}\\
&=\sum_{k=1}^{l+1}\sum_{\Theta_{l+1}^{k}}k!\prod_{j=1}^{l+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k+1}}.
\end{align*}
Thus, by the method of mathematical induction, the result (\ref{lap}) holds true for all $n\ge0$.
\end{proof}
The above result can be written in a different form by using the following result due to Kataria and Vellaisamy (2017b).
\begin{lemma}\label{ll1}
Let $e^n_j$ denote the $n$-tuple vector with unity at the $j$-th place and zero elsewhere. Then
\begin{equation*}\label{2.0}
\Theta^k_n=\left\{\sum_{j=1}^{n-k+1}k_je^n_j:\left(k_1,k_2,\ldots,k_{n-k+1}\right)\in\Lambda^k_n\right\},
\end{equation*}
where $\Lambda_{n}^{k}$ is given in (\ref{lambdaee11}).
\end{lemma}
Using the above result, an equivalent expression for $\tilde{p}^\alpha_{c}(n,s)$ is given by
\begin{equation*}
\tilde{p}^\alpha_{c}(n,s)=\begin{cases}
\dfrac{s^{\alpha-1}}{s^{\alpha}+\lambda_{0}},\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alpha-1}}{(s^{\alpha}+\lambda_{0})^{k+1}},\ \ n\ge1.
\end{cases}
\end{equation*}
\begin{theorem}
The one-dimensional distribution of the CFPP is given by
\begin{equation}\label{dist}
p^\alpha_{c}(n,t)=\begin{cases}
E_{\alpha,1}(-\lambda_{0}t^{\alpha}),\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Theta_{n}^{k}}k!\prod_{j=1}^{n}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k\alpha}E_{\alpha,k\alpha+1}^{k+1}(-\lambda_{0}t^{\alpha}),\ \ n\ge1,
\end{cases}
\end{equation}
where $\Theta_{n}^{k}=\{(k_{1},k_{2},\dots,k_{n}):\sum_{j=1}^{n}k_{j}=k, \ \ \sum_{j=1}^{n}jk_{j}=n,\ k_{j}\in\mathbb{N}_0\}$.
\end{theorem}
\begin{proof}
Taking inverse Laplace transform in (\rangleef{lap}), we get
\begin{equation*}
\mathcal{L}^{-1}{\lambdangle}eft(\tauilde{p}^\alphalpha_{c}(n,s);t\rangleight)=\begin{cases}
\mathcal{L}^{-1}{\lambdangle}eft(\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}};t\rangleight),\ \ n=0,\vspace*{.2cm}\\
\mathcal{L}^{-1}{\lambdangle}eft(\displaystyle\sum_{k=1}^{n}\sum_{\Theta_{n}^{k}}k!\prod_{j=1}^{n}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k+1}};t\rangleight),\ \ n\ge1.
\end{cases}
\end{equation*}
Using (\rangleef{mi}), the above equation reduces to (\rangleef{dist}).
\end{proof}
The distribution of TFPP follows as a particular case of CFPP.
\begin{corollary}
Let ${\lambdangle}ambda_{0}={\lambdangle}ambda$ and ${\lambdangle}ambda_{n}=0$ for all $n\ge1$. Then,
\begin{equation*}{\lambdangle}abel{dist1qa}
p^\alphalpha(n,t)=\frac{({\lambdangle}ambda t^{\alphalpha})^{n}}{n!}\sum_{j=0}^{\infty}\frac{(j+n)!}{j!}\frac{(-{\lambdangle}ambda t^{\alphalpha})^{j}}{\Gammamma((j+n)\alphalpha+1)},\ \ n\geq0,
\end{equation*}
which is the distribution of TFPP.
\end{corollary}
\begin{proof}
On substituting ${\lambdangle}ambda_{0}={\lambdangle}ambda$ and ${\lambdangle}ambda_{n}=0$ for $n\ge1$ in (\rangleef{dist}), we get
\begin{equation*}
p^{\alphalpha}(n,t)=({\lambdangle}ambda t^{\alphalpha})^nE_{\alphalpha,n\alphalpha+1}^{n+1}(-{\lambdangle}ambda t^{\alphalpha}),\ \ n\ge0.
\end{equation*}
On using (\rangleef{mit}), the result follows.
\end{proof}
Using Lemma \rangleef{ll1}, we can rewrite (\rangleef{dist}) as
\begin{equation}{\lambdangle}abel{dssdew1}
p^\alphalpha_{c}(n,t)=\begin{cases}
E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),\ \ n\ge1.
\end{cases}
\end{equation}
Note that,
\begin{align*}
\sum_{n=0}^{\infty}p^\alphalpha_{c}(n,t)&=p^\alphalpha_{c}(0,t)+\sum_{n=1}^{\infty}p^\alphalpha_{c}(n,t)\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{n=1}^{\infty}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{k=1}^{\infty}t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\sum_{n=k}^{\infty}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{k=1}^{\infty}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{k} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),\ \ \tauext{(using\ (\rangleef{fm2}))}\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{k=1}^{\infty} ({\lambdangle}ambda_{0}t^{\alphalpha})^{k}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\\
&=\sum_{k=0}^{\infty} ({\lambdangle}ambda_{0}t^{\alphalpha})^{k}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})=1,
\end{align*}
where in the last step we have used (\ref{formula}). Thus, it follows that (\ref{dist}) is a valid distribution.
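To illustrate the above numerically (this sketch is not part of the paper), the following Python code evaluates $p^\alpha_{c}(n,t)$ from the $\Lambda_{n}^{k}$ form of (\ref{dist}) for the assumed geometric intensities $\lambda_{j}=\lambda_{0}q^{j}$, $0<q<1$, which satisfy the stated conditions, and checks that the probabilities over small $n$ sum to nearly one.
\begin{verbatim}
# Sketch: CFPP state probabilities via enumeration over Lambda_n^k,
# for the illustrative choice lambda_j = lambda_0 * q^j (an assumption).
from itertools import product
from math import gamma, factorial

def ml3(x, alpha, beta, gam, terms=60):
    return sum(gamma(gam + k) * x**k / (factorial(k) * gamma(k * alpha + beta))
               for k in range(terms)) / gamma(gam)

def p_cfpp(n, t, lam, alpha):
    if n == 0:
        return ml3(-lam(0) * t**alpha, alpha, 1.0, 1.0)
    total = 0.0
    for k in range(1, n + 1):
        for ks in product(range(k + 1), repeat=n - k + 1):
            if sum(ks) == k and sum((j + 1) * kj for j, kj in enumerate(ks)) == n:
                coeff = float(factorial(k))
                for j, kj in enumerate(ks, start=1):
                    coeff *= (lam(j - 1) - lam(j))**kj / factorial(kj)
                total += coeff * t**(k * alpha) * ml3(-lam(0) * t**alpha,
                                                      alpha, k * alpha + 1, k + 1)
    return total

lam0, q, alpha, t = 0.5, 0.4, 0.8, 1.0
lam = lambda j: lam0 * q**j
probs = [p_cfpp(n, t, lam, alpha) for n in range(11)]
print(sum(probs))   # close to 1 (the omitted tail n > 10 is negligible here)
\end{verbatim}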
\begin{remark}
Let the random variable $W^{\alphalpha}_{c}$ be the waiting time of the first convoluted fractional Poisson event. Then, the distribution of $W^{\alphalpha}_{c}$ is given by
\begin{equation*}
\mathrm{Pr}\{W^{\alphalpha}_{c}>t\}=\mathrm{Pr}\{\mathcal{N}^{\alphalpha}_{c}(t)=0\}=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha}), \ t>0,
\end{equation*}
which coincides with the first waiting time of TFPP (see Beghin and Orsingher (2009)). However, the one-dimensional distributions of TFPP and CFPP differ. Thus, the fact that the TFPP is a renewal process (see Meerschaert {\it et al.} (2011)) implies that the CFPP is not a renewal process.
\end{remark}
The next result gives the probability generating function (pgf) of CFPP.
\begin{proposition}
The pgf $G^{\alpha}_{c}(u,t)=\mathbb{E}(u^{\mathcal{N}^{\alpha}_{c}(t)})$ of CFPP is given by
\begin{equation}\label{pgfa}
G^{\alpha}_{c}(u,t)=E_{\alpha,1}\left(\sum_{j=0}^{\infty}u^{j}(\lambda_{j-1}-\lambda_{j})t^{\alpha}\right),\ \ |u|\le1.
\end{equation}
\end{proposition}
\begin{proof}
The Laplace transform of the pgf of CFPP can be obtained as follows:
\begin{align}
\tauilde{G}^{\alphalpha}_{c}(u,s)&=\int_{0}^{\infty}e^{-st}G^{\alphalpha}_{c}(u,t)\mathrm{d}t,\ \ s>0 {\nonumber}number\\
&=\int_{0}^{\infty}e^{-st}\sum_{n=0}^{\infty}u^{n}p^\alphalpha_{c}(n,t)\mathrm{d}t{\nonumber}number\\
&=\int_{0}^{\infty}e^{-st}{\lambdangle}eft(E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\rangleight)\mathrm{d}t{\nonumber}number\\
&=\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}}+\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alphalpha-1}}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k+1}},\ \ (\mathrm{using}\ (\rangleef{mi})){\lambdangle}abel{mjhgqq1}\\
&=\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}}{\lambdangle}eft(1+\sum_{k=1}^{\infty}\frac{1}{(s^{\alphalpha}+{\lambdangle}ambda_{0})^{k}}\sum_{n=k}^{\infty}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!} u^{n}\rangleight){\nonumber}number\\
&=\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}}{\lambdangle}eft(1+\sum_{k=1}^{\infty}{\lambdangle}eft(\frac{\sum_{j=1}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) }{s^{\alphalpha}+{\lambdangle}ambda_{0}}\rangleight)^{k}\rangleight),\ \ (\mathrm{using}\ (\rangleef{fm2})){\nonumber}number\\
&=\dfrac{s^{\alphalpha-1}}{s^{\alphalpha}+{\lambdangle}ambda_{0}}\sum_{k=0}^{\infty}{\lambdangle}eft(\frac{\sum_{j=1}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) }{s^{\alphalpha}+{\lambdangle}ambda_{0}}\rangleight)^{k}{\nonumber}number\\
&=s^{\alphalpha-1}{\lambdangle}eft(s^{\alphalpha}-\sum_{j=0}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{-1},{\lambdangle}abel{pgghg1}
\end{align}
which on using (\rangleef{mi}) gives (\rangleef{pgfa}).
\end{proof}
Next, we show that the pgf of CFPP solves the following differential equation:
\begin{equation}\label{pgf}
\partial^{\alpha}_{t}G^{\alpha}_{c}(u,t)=G^{\alpha}_{c}(u,t)\sum_{j=0}^{\infty}u^{j}(\lambda_{j-1}-\lambda_{j}),\ \ G^{\alpha}_{c}(u,0)=1.
\end{equation}
On taking Caputo derivative in $G^{\alphalpha}_{c}(u,t)=\sum_{n=0}^{\infty}u^{n}p^\alphalpha_{c}(n,t)$, we get
\begin{align*}
\partial^{\alphalpha}_{t}G^{\alphalpha}_{c}(u,t)&=\sum_{n=0}^{\infty}u^{n}\partial^{\alphalpha}_{t}p^\alphalpha_{c}(n,t)\\
&=\sum_{n=0}^{\infty}u^{n}{\lambdangle}eft(-{\lambdangle}ambda_{0}p^\alphalpha_{c}(n,t)+\sum_{j=1}^{n}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})p^\alphalpha_{c}(n-j,t)\rangleight),\ \ (\mathrm{using}\ (\rangleef{model}))\\
&=-{\lambdangle}ambda_{0}G^{\alphalpha}_{c}(u,t)+\sum_{n=0}^{\infty}\sum_{j=1}^{n}u^{n}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})p^\alphalpha_{c}(n-j,t)\\
&=-{\lambdangle}ambda_{0}G^{\alphalpha}_{c}(u,t)+\sum_{j=1}^{\infty}\sum_{n=j}^{\infty}u^{n}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})p^\alphalpha_{c}(n-j,t)\\
&=-{\lambdangle}ambda_{0}G^{\alphalpha}_{c}(u,t)+\sum_{j=1}^{\infty}\sum_{n=0}^{\infty}u^{n+j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})p^\alphalpha_{c}(n,t)\\
&=-{\lambdangle}ambda_{0}G^{\alphalpha}_{c}(u,t)+G^{\alphalpha}_{c}(u,t)\sum_{j=1}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\\
&=G^{\alphalpha}_{c}(u,t)\sum_{j=0}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}).
\end{align*}
Note that the Laplace transform of the pgf of CFPP can also be obtained from the above result as follows: By taking the Laplace transform in (\ref{pgf}), we get
\begin{equation*}
s^{\alphalpha}\tauilde{G}^{\alphalpha}_{c}(u,s)-s^{\alphalpha-1}G^{\alphalpha}_{c}(u,0)=\tauilde{G}^{\alphalpha}_{c}(u,s)\sum_{j=0}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}).
\end{equation*}
Thus,
\begin{equation}{\lambdangle}abel{ltpgf}
\tauilde{G}^{\alphalpha}_{c}(u,s)=s^{\alphalpha-1}{\lambdangle}eft(s^{\alphalpha}-\sum_{j=0}^{\infty}u^{j}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{-1},
\end{equation}
which coincides with (\rangleef{pgghg1}).
\begin{remark}
If the difference of consecutive intensities is a constant, {\it i.e.}, $\lambda_{j-1}-\lambda_{j}=\delta$ for all $j\ge1$, then (\ref{pgf}) reduces to
\begin{equation*}
\partial^{\alpha}_{t}G^{\alpha}_{c}(u,t)=G^{\alpha}_{c}(u,t)\left(\frac{\delta u}{1-u}-\lambda_{0}\right).
\end{equation*}
\end{remark}
Next, we obtain the mean and variance of CFPP using its pgf. From (\rangleef{pgfa}), we have
\begin{equation}{\lambdangle}abel{smpgf}
G^{\alphalpha}_{c}(u,t)=\sum_{k=0}^{\infty}\frac{t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) u^{j}\rangleight)^{k}.
\end{equation}
On taking the derivatives, we get
\begin{equation*}
\frac{\partial G^{\alphalpha}_{c}(u,t)}{\partial u} =\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) u^{j}\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j u^{j-1}\rangleight).
\end{equation*}
and
\begin{align*}
\frac{\partial^{2}G^{\alphalpha}_{c}(u,t)}{\partial u^{2}} &=\sum_{k=2}^{\infty}\frac{k(k-1)t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) u^{j}\rangleight)^{^{k-2}}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j u^{j-1}\rangleight)^{2}\\
&\ \ \ \ +\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) u^{j}\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=2}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j(j-1)u^{j-2}\rangleight).
\end{align*}
Now, the mean of CFPP is given by
\begin{align}{\lambdangle}abel{wswee11}
\mathbb{E}{\lambdangle}eft(\mathcal{N}^{\alphalpha}_{c}(t)\rangleight)&=\frac{\partial G^{\alphalpha}_{c}(u,t)}{\partial u}\bigg|_{u=1}{\nonumber}number\\
&=\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j\rangleight){\nonumber}number\\
&=\frac{t^{\alphalpha}}{\Gammamma(\alphalpha+1)}\sum_{j=0}^{\infty}{\lambdangle}ambda_{j},
\end{align}
using (\rangleef{asdesa11}) in the last step. Also, its variance can be obtained as follows:
\begin{align*}
\mathbb{E}{\lambdangle}eft(\mathcal{N}^{\alphalpha}_{c}(t)(\mathcal{N}^{\alphalpha}_{c}(t)-1)\rangleight)&=\frac{\partial^{2} G^{\alphalpha}_{c}(u,t)}{\partial u^{2}}\bigg|_{u=1}\\
&=\sum_{k=2}^{\infty}\frac{k(k-1)t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) \rangleight)^{^{k-2}}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j \rangleight)^{2}\\
&\ \ \ \ +\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) \rangleight)^{k-1}{\lambdangle}eft(\sum_{j=2}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j(j-1) \rangleight)\\
&=\frac{2t^{2\alphalpha}}{\Gammamma(2\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}{\lambdangle}ambda_{j}\rangleight)^{2}+\frac{2t^{\alphalpha}}{\Gammamma(\alphalpha+1)}\sum_{j=1}^{\infty}j{\lambdangle}ambda_{j}.
\end{align*}
Thus,
\begin{equation*}
\mathbb{E}{\lambdangle}eft(\mathcal{N}^{\alphalpha}_{c}(t)^2\rangleight)=\frac{2t^{2\alphalpha}}{\Gammamma(2\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}{\lambdangle}ambda_{j}\rangleight)^{2}+\frac{2t^{\alphalpha}}{\Gammamma(\alphalpha+1)}\sum_{j=1}^{\infty}j{\lambdangle}ambda_{j}+\frac{t^{\alphalpha}}{\Gammamma(\alphalpha+1)}\sum_{j=0}^{\infty}{\lambdangle}ambda_{j}.
\end{equation*}
Hence, the variance of CFPP is given by
\begin{equation}\label{var}
\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(t)\right)=\frac{t^{\alpha}\sum_{j=0}^{\infty}\lambda_{j}}{\Gamma(\alpha+1)}+\frac{2t^{\alpha}\sum_{j=1}^{\infty}j\lambda_{j}}{\Gamma(\alpha+1)}+\frac{2\left(t^{\alpha}\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}}{\Gamma(2\alpha+1)}-\frac{\left(t^{\alpha}\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}}{\Gamma^2(\alpha+1)}.
\end{equation}
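As a quick numerical illustration (not part of the paper), the sketch below evaluates the mean $\mathbb{E}\left(\mathcal{N}^{\alpha}_{c}(t)\right)=t^{\alpha}\sum_{j=0}^{\infty}\lambda_{j}/\Gamma(\alpha+1)$ and the variance (\ref{var}) for the assumed geometric intensities $\lambda_{j}=0.5\,(0.4)^{j}$ used earlier; the values can be cross-checked against the moments of the distribution computed with the previous sketch.
\begin{verbatim}
# Sketch: closed-form mean and variance of the CFPP for the illustrative
# geometric intensities lambda_j = 0.5 * 0.4^j (an assumption).
from math import gamma

alpha, t = 0.8, 1.0
lam = [0.5 * 0.4**j for j in range(200)]        # truncated intensity sequence
S0 = sum(lam)                                   # sum_j lambda_j
S1 = sum(j * lam[j] for j in range(len(lam)))   # sum_j j * lambda_j

mean = t**alpha * S0 / gamma(alpha + 1)
var = (t**alpha * S0 / gamma(alpha + 1)
       + 2 * t**alpha * S1 / gamma(alpha + 1)
       + 2 * (t**alpha * S0)**2 / gamma(2 * alpha + 1)
       - (t**alpha * S0)**2 / gamma(alpha + 1)**2)
print(mean, var)
\end{verbatim}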
The pgf of CFPP can also be utilized to obtain its factorial moments as follows:
\begin{proposition}
The $r$th factorial moment of the CFPP $\psi^\alphalpha_c(r,t)= \mathbb{E}(\mathcal{N}^{\alphalpha}_{c}(t)(\mathcal{N}^{\alphalpha}_{c}(t)-1)\dots(\mathcal{N}^{\alphalpha}_{c}(t)-r+1))$, $r\ge1$, is given by
\begin{equation*}
\psi^\alphalpha_c(r,t)=r!\sum_{k=1}^{r}\frac{t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}{\lambdangle}eft(\frac{1}{m_\ell!}\sum_{j=0}^{\infty}(j)_{m_\ell}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight),
\end{equation*}
where $(j)_{m_\ell}=j(j-1)\dots(j-m_\ell+1)$ denotes the falling factorial.
\end{proposition}
\begin{proof}
From (\rangleef{pgfa}), we get
\begin{align}{\lambdangle}abel{ttt}
\psi^\alphalpha_c(r,t)&=\frac{\partial^{r}G^{\alphalpha}_{c}(u,t)}{\partial u^{r}}\bigg|_{u=1}{\nonumber}number\\
&=\sum_{k=0}^{r}\frac{1}{k!}E^{(k)}_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight) {\lambdangle}eft.A_{r,k}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)\rangleight|_{u=1},
\end{align}
where we have used the $r$th derivative of composition of two functions (see Johnson (2002), Eq. (3.3)). Here,
\begin{align}{\lambdangle}abel{mkgtrr4543t}
A_{r,k}&{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)\Bigg|_{u=1}{\nonumber}number\\
&=\sum_{m=0}^{k}\frac{k!}{m!(k-m)!}{\lambdangle}eft(-t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)^{k-m}\frac{\mathrm{d}^{r}}{\mathrm{d}u
^{^{r}}}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)^{m}\Bigg|_{u=1}{\nonumber}number\\
&=t^{k\alphalpha }\frac{\mathrm{d}^{r}}{\mathrm{d}u^{^{r}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)^{k}\Bigg|_{u=1},
\end{align}
where the last step follows by using (\rangleef{asdesa11}). From (\rangleef{re}), we get
\begin{align}{\lambdangle}abel{ppp}
E^{(k)}_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)\Bigg|_{u=1}&=k!E^{k+1}_{\alphalpha,k\alphalpha+1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)\Bigg|_{u=1}{\nonumber}number\\
&=\frac{k!}{\Gammamma(k\alphalpha+1)}.
\end{align}
Now, by using the following result (see Johnson (2002), Eq. (3.6))
\begin{equation}{\lambdangle}abel{qlkju76}
\frac{\mathrm{d}^{r}}{\mathrm{d}w^{^{r}}}(f(w))^{k}=\underset{m_j\in\mathbb{N}_0}{\underset{m_{1}+m_{2}+\dots+m_{k}=r}{\sum}}\frac{r!}{m_1!m_2!{\lambdangle}dots m_k!}f^{(m_{1})}(w)f^{(m_{2})}(w)\dots f^{(m_{k})}(w),
\end{equation}
we get
\begin{align}{\lambdangle}abel{ccc}
\frac{\mathrm{d}^{r}}{\mathrm{d}u^{^{r}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)^{k}\Bigg|_{u=1}&=r!\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}\frac{1}{m_\ell!}\frac{\mathrm{d}^{m_{\ell}}}{\mathrm{d}u^{{m_{\ell}}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})u^{j}\rangleight)\Bigg|_{u=1}{\nonumber}number\\
&=r!\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}\frac{1}{m_\ell!}\sum_{j=0}^{\infty}(j)_{m_\ell}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}).
\end{align}
Note that the expression in right hand side of (\rangleef{ccc}) vanishes for $k=0$. Finally, on substituting (\rangleef{mkgtrr4543t}), (\rangleef{ppp}) and (\rangleef{ccc}) in (\rangleef{ttt}), we get the required result.
\end{proof}
Next, we obtain the moment generating function (mgf) of CFPP on non-positive support.
\begin{proposition}
The mgf $m^\alpha_c(w,t)=\mathbb{E}(e^{-w\mathcal{N}^{\alpha}_{c}(t)})$, $w\geq0$, of CFPP is given by
\begin{equation}\label{mjhgss}
m^\alpha_c(w,t)=E_{\alpha,1}\left(t^{\alpha}\sum_{j=0}^{\infty}(\lambda_{j-1}-\lambda_{j})e^{-wj}\right).
\end{equation}
\end{proposition}
\begin{proof} Using (\rangleef{dssdew1}), we have
\begin{align*}
m^\alphalpha_c(w,t)&=p^\alphalpha_{c}(0,t)+\sum_{n=1}^{\infty}e^{-wn}p^\alphalpha_{c}(n,t)\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{n=1}^{\infty}e^{-wn}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{k=1}^{\infty}t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\sum_{n=k}^{\infty}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}e^{-wn}\\
&=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha})+\sum_{k=1}^{\infty}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{k} t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),\ \ \tauext{(using\ (\rangleef{fm2}))}\\
&=\sum_{k=0}^{\infty}{\lambdangle}eft(t^{\alphalpha}\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{k}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha})\\
&=E_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight),
\end{align*}
where we have used (\rangleef{formula}) in the last step.
\end{proof}
The mgf of CFPP solves the following fractional differential equation:
\begin{equation*}
\partial^{\alphalpha}_{t}m^{\alphalpha}_c(w,t)=m^{\alphalpha}_c(w,t)\sum_{j=0}^{\infty}e^{-wj}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}),\ \ m^{\alphalpha}_c(w,0)=1.
\end{equation*}
The above equation can be solved by using the Laplace transform method to obtain the mgf (\rangleef{mjhgss}). The proof follows similar lines to that of the related result for the pgf of CFPP.
The mean and variance of the CFPP can also be obtained from its mgf. From (\rangleef{mjhgss}), we have
\begin{equation*}
m^\alphalpha_c(w,t)=\sum_{k=0}^{\infty}\frac{t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) e^{-wj}\rangleight)^{k}.
\end{equation*}
On taking the derivatives, we get
\begin{equation*}
\frac{\partial m^\alphalpha_c(w,t)}{\partial w} =\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) e^{-wj}\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})(-j)e^{-wj}\rangleight),
\end{equation*}
and
\begin{align*}
\frac{\partial^{2}m^\alphalpha_c(w,t)}{\partial w^{2}} &=\sum_{k=2}^{\infty}\frac{k(k-1)t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) e^{-wj}\rangleight)^{^{k-2}}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j e^{-wj}\rangleight)^{2}\\
&\ \ \ \ +\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}) e^{-wj}\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j^{2} e^{-wj}\rangleight).
\end{align*}
Now, the mean of CFPP is given by
\begin{align*}
\mathbb{E}{\lambdangle}eft(\mathcal{N}^{\alphalpha}_{c}(t)\rangleight)&=-\frac{\partial m^\alphalpha_c(w,t)}{\partial w}\bigg|_{w=0}\\
&=\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j\rangleight)\\
&=\frac{t^{\alphalpha}}{\Gammamma(\alphalpha+1)}\sum_{j=0}^{\infty}{\lambdangle}ambda_{j},
\end{align*}
which agrees with (\rangleef{wswee11}). Its second order moment is given by
\begin{align*}
\mathbb{E}{\lambdangle}eft((\mathcal{N}^{\alphalpha}_{c}(t))^2\rangleight)&=\frac{\partial^{2}m^\alphalpha_c(w,t)}{\partial w^{2}}\bigg|_{w=0}\\
&=\sum_{k=2}^{\infty}\frac{k(k-1)t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{^{k-2}}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})j \rangleight)^{2}\\
&\ \ \ \ +\sum_{k=1}^{\infty}\frac{kt^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight)^{k-1}{\lambdangle}eft(\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})(j(j-1)+j)\rangleight)\\
&=\frac{2t^{2\alphalpha}}{\Gammamma(2\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}{\lambdangle}ambda_{j}\rangleight)^{2}+\frac{t^{\alphalpha}}{\Gammamma(\alphalpha+1)}{\lambdangle}eft(\sum_{j=0}^{\infty}{\lambdangle}ambda_{j}+2\sum_{j=1}^{\infty}j{\lambdangle}ambda_{j}\rangleight).
\end{align*}
The variance (\rangleef{var}) can now be obtained by computing $\mathbb{E}{\lambdangle}eft((\mathcal{N}^{\alphalpha}_{c}(t))^2\rangleight)-{\lambdangle}eft(\mathbb{E}{\lambdangle}eft(\mathcal{N}^{\alphalpha}_{c}(t)\rangleight)\rangleight)^{2}$.
\begin{proposition}
The $r$th moment $\mu^\alphalpha_c(r,t)=\mathbb{E}{\lambdangle}eft((\mathcal{N}^{\alphalpha}_{c}(t))^r\rangleight)$, $r\ge1$, of CFPP is given by
\begin{equation*}
\mu^\alphalpha_c(r,t)=r!\sum_{k=1}^{r}\frac{t^{k\alphalpha}}{\Gammamma(k\alphalpha+1)}\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}{\lambdangle}eft(\frac{1}{m_\ell!}\sum_{j=0}^{\infty}j^{m_\ell}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})\rangleight).
\end{equation*}
\end{proposition}
\begin{proof}
Taking the $r$th derivative of composition of two functions (see Johnson (2002), Eq. (3.3)), we get
\begin{align}{\lambdangle}abel{tt}
\mu^\alphalpha_c(r,t)&=(-1)^{r}\frac{\partial^{r} m^\alphalpha_c(w,t)}{\partial w^{r}}\bigg|_{w=0}{\nonumber}number\\
&=\sum_{k=0}^{r}\frac{(-1)^{r}}{k!}E^{(k)}_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight) {\lambdangle}eft.B_{r,k}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)\rangleight|_{w=0},
\end{align}
where
\begin{align}{\lambdangle}abel{mkgtrr4543}
B_{r,k}&{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)\Bigg|_{w=0}{\nonumber}number\\
&=\sum_{m=0}^{k}\frac{k!}{m!(k-m)!}{\lambdangle}eft(-t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{k-m}\frac{\mathrm{d}^{r}}{\mathrm{d}w^{^{r}}}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{m}\Bigg|_{w=0}{\nonumber}number\\
&=t^{k\alphalpha }\frac{\mathrm{d}^{r}}{\mathrm{d}w^{^{r}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{k}\Bigg|_{w=0}.
\end{align}
The last equality follows by using (\rangleef{asdesa11}). Now, from (\rangleef{re}), we get
\begin{align}{\lambdangle}abel{pp}
E^{(k)}_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)\Bigg|_{w=0}&=k!E^{k+1}_{\alphalpha,k\alphalpha+1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)\Bigg|_{w=0}{\nonumber}number\\
&=\frac{k!}{\Gammamma(k\alphalpha+1)}.
\end{align}
Using (\rangleef{qlkju76}), we have
\begin{align}{\lambdangle}abel{cc}
\frac{\mathrm{d}^{r}}{\mathrm{d}w^{^{r}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)^{k}\Bigg|_{w=0}&=r!\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}\frac{1}{m_\ell!}\frac{\mathrm{d}^{m_{\ell}}}{\mathrm{d}w^{{m_{\ell}}}}{\lambdangle}eft(\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)\Bigg|_{w=0}{\nonumber}number\\
&=(-1)^rr!\underset{m_j\in\mathbb{N}_0}{\underset{\sum_{j=1}^km_j=r}{\sum}}\prod_{\ell=1}^{k}\frac{1}{m_\ell!}\sum_{j=0}^{\infty}j^{m_\ell}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j}).
\end{align}
The result follows on substituting (\rangleef{mkgtrr4543})-(\rangleef{cc}) in (\rangleef{tt}).
\end{proof}
In the following result it is shown that the CFPP is equal in distribution to a compound fractional Poisson process. Thus, it is neither Markovian nor a L\'evy process.
\begin{theorem}
Let $\{N^\alpha(t)\}_{t\ge0}$, $0<\alpha\le1$, be the TFPP with intensity parameter $\lambda_{0}>0$ and $\{X_{i}\}_{i\ge1}$ be a sequence of independent and identically distributed (iid) random variables with the following distribution:
\begin{equation}\label{ngftt4}
\mathrm{Pr}\{X_{i}=j\}=\frac{\lambda_{j-1}-\lambda_{j}}{\lambda_{0}}, \ \ j\ge1.
\end{equation}
Then,
\begin{equation}\label{cd}
\mathcal{N}^{\alpha}_{c}(t)\overset{d}{=}\sum_{i=1}^{N^\alpha(t)}X_{i}, \ \ \ t\ge0,
\end{equation}
where $\{X_{i}\}_{i\ge1}$ is independent of $\{N^\alpha(t)\}_{t\ge0}$.
\end{theorem}
\begin{proof}
The mgf of $N^\alphalpha(t)$, $t\ge0$, is given by (see Laskin (2003), Eq. (35))
\begin{equation}{\lambdangle}abel{mgft}
\mathbb{E}{\lambdangle}eft(e^{-wN^\alphalpha(t)}\rangleight)=E_{\alphalpha,1}({\lambdangle}ambda_{0}t^{\alphalpha}(e^{-w}-1)),\ \ w\ge0.
\end{equation}
Also, the mgf of $X_{i}$, $i\ge1$, can be obtained as
\begin{equation}{\lambdangle}abel{plo876}
\mathbb{E}{\lambdangle}eft(e^{-wX_{i}}\rangleight)=\frac{1}{{\lambdangle}ambda_{0}}\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-jw}.
\end{equation}
Now,
\begin{align*}
\mathbb{E}{\lambdangle}eft(e^{-w\sum_{i=1}^{N^\alphalpha(t)}X_{i}}\rangleight)&=\mathbb{E}{\lambdangle}eft(\mathbb{E}{\lambdangle}eft(e^{-w\sum_{i=1}^{N^\alphalpha(t)}X_{i}}\big|N^\alphalpha(t)\rangleight)\rangleight)\\
&=\mathbb{E}{\lambdangle}eft({\lambdangle}eft(\mathbb{E}{\lambdangle}eft(e^{-wX_{1}}\rangleight)\rangleight)^{N^\alphalpha(t)}\rangleight)\\
&=\mathbb{E}{\lambdangle}eft(\exp{\lambdangle}eft(N^\alphalpha(t){\lambdangle}n{\lambdangle}eft(\frac{1}{{\lambdangle}ambda_{0}}\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-jw}\rangleight)\rangleight)\rangleight),\ \ (\tauext{using (\rangleef{plo876})})\\
&=E_{\alphalpha,1}{\lambdangle}eft({\lambdangle}ambda_{0}t^{\alphalpha}{\lambdangle}eft(\exp{\lambdangle}eft({\lambdangle}n\frac{1}{{\lambdangle}ambda_{0}}\sum_{j=1}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight)-1\rangleight)\rangleight),\ \ (\tauext{using (\rangleef{mgft})})\\
&=E_{\alphalpha,1}{\lambdangle}eft(t^{\alphalpha}\sum_{j=0}^{\infty}({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})e^{-wj}\rangleight),
\end{align*}
which agrees with the mgf of CFPP (\rangleef{mjhgss}). This completes the proof.
\end{proof}
\begin{remark}
The one-dimensional distribution of TFPP is given by
\begin{equation*}
\mathrm{Pr}\{N^\alphalpha(t)=n\}=({\lambdangle}ambda_{0}t^{\alphalpha})^{n}E_{\alphalpha,n\alphalpha+1}^{n+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),\ \ n\ge0.
\end{equation*}
For $n=0$, using (\rangleef{cd}) we have
\begin{equation*}
\mathrm{Pr}\{\mathcal{N}^\alphalpha_c(t)=0\}=\mathrm{Pr}\{N^\alphalpha(t)=0\}=E_{\alphalpha,1}(-{\lambdangle}ambda_{0}t^{\alphalpha}).
\end{equation*}
As $X_i$'s are independent of $N^\alphalpha(t)$, for $n\ge1$, we get
\begin{align}{\lambdangle}abel{rdee32}
\mathrm{Pr}\{\mathcal{N}^\alphalpha_c(t)=n\}&=\sum_{k=1}^{n}\mathrm{Pr}\{X_{1}+X_{2}+\dots+X_{k}=n\}\mathrm{Pr}\{N^\alphalpha(t)=k\}\\
&=\sum_{k=1}^{n}\sum_{\Theta_{n}^{k}}k!\prod_{j=1}^{n}\frac{({\lambdangle}ambda_{j-1}-{\lambdangle}ambda_{j})^{k_{j}}}{k_{j}!}t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}),{\nonumber}number
\end{align}
where $k_j$ denotes the number of jumps of size $j$ units. The above expression agrees with (\ref{dist}).
As $X_{i}$'s are iid, we have
\begin{align}{\lambdangle}abel{sweee32}
\mathrm{Pr}\{X_{1}+X_{2}+\dots+X_{k}=n\}&=\underset{m_j\in\mathbb{N}}{\underset{m_{1}+m_{2}+\dots+m_{k}=n}{\sum}}\mathrm{Pr}\{X_{1}=m_1,X_{2}=m_2,{\lambdangle}dots,X_{k}=m_k\}{\nonumber}number\\
&=\underset{m_j\in\mathbb{N}}{\underset{m_{1}+m_{2}+\dots+m_{k}=n}{\sum}}\prod_{j=1}^{k}\mathrm{Pr}\{X_{j}=m_j\}{\nonumber}number\\
&=\underset{m_j\in\mathbb{N}}{\underset{m_{1}+m_{2}+\dots+m_{k}=n}{\sum}}\frac{1}{{\lambdangle}ambda_{0}^{k}}\prod_{j=1}^{k}({\lambdangle}ambda_{m_j-1}-{\lambdangle}ambda_{m_j}),
\end{align}
where we have used (\rangleef{ngftt4}). Substituting (\rangleef{sweee32}) in (\rangleef{rdee32}), we get an equivalent expression for the one-dimensional distribution of the CFPP as
\begin{equation*}
\mathrm{Pr}\{\mathcal{N}^\alphalpha_c(t)=n\}=\sum_{k=1}^{n}\underset{m_j\in\mathbb{N}}{\underset{m_{1}+m_{2}+\dots+m_{k}=n}{\sum}}\prod_{j=1}^{k}({\lambdangle}ambda_{m_j-1}-{\lambdangle}ambda_{m_j})t^{k\alphalpha}E_{\alphalpha,k\alphalpha+1}^{k+1}(-{\lambdangle}ambda_{0}t^{\alphalpha}).
\end{equation*}
\end{remark}
\section{Convoluted Poisson process: A Special case of CFPP}{\lambdangle}abel{Section4}
Here, we discuss a particular case of the CFPP, namely, the convoluted Poisson process (CPP) which we denote by $\{\mathcal{N}_{c}(t)\}_{t\ge 0}$.
The CPP is defined as a stochastic process whose state probabilities satisfy
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}p_{c}(n,t)=-\lambda_{n}*p_{c}(n,t)+\lambda_{n-1}*p_{c}(n-1,t),
\end{equation*}
with $p_{c}(n,0)=\delta_{0}(n)$, $n\ge0$. The conditions on the $\lambda_{n}$'s are the same as in the case of the CFPP.
For $\alpha=1$, the CFPP reduces to the CPP. Thus, its state probabilities are given by
\begin{equation}\label{key1q112}
p_{c}(n,t)=\begin{cases}
e^{-\lambda_{0}t},\ \ n=0,\vspace*{.2cm}\\
\displaystyle\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k}e^{-\lambda_{0}t},\ \ n\ge1.
\end{cases}
\end{equation}
On substituting $\lambda_{0}=\lambda$ and $\lambda_{n}=0$ for $n\ge1$ in (\ref{key1q112}), we get
\begin{equation*}
p(n,t)=\frac{(\lambda t)^ne^{-\lambda t}}{n!},\ \ n\ge0,
\end{equation*}
which is the distribution of the Poisson process. Thus, the Poisson process is a particular case of the CPP.
\begin{remark}
Let $W_{c}$ be the first waiting time of CPP. Then,
\begin{equation*}
\mathrm{Pr}\{W_{c}>t\}=\mathrm{Pr}\{\mathcal{N}_{c}(t)=0\}=e^{-\lambda_{0}t}, \ t>0,
\end{equation*}
which coincides with the first waiting time of the Poisson process. It is known that the Poisson process is a renewal process. This implies that the CPP is not a renewal process, as the one-dimensional distributions of the Poisson process and the CPP are different.
\end{remark}
A direct method to obtain the pgf of the CPP is as follows:
\begin{align}\label{kjhgt21}
G_c(u,t)&=p_{c}(0,t)+\sum_{n=1}^{\infty}u^{n}p_{c}(n,t)\nonumber\\
&=e^{-t\lambda_{0}} +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k}e^{-t\lambda_{0}}\\
&=e^{-t\lambda_{0}} \left(1+\sum_{n=1}^{\infty}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}u^{n}t^{k} \right)\nonumber\\
&=e^{-t\lambda_{0}}\exp\left(t\sum_{j=1}^{\infty}u^{j}(\lambda_{j-1}-\lambda_{j})\right),\ \ \text{(using\ (\ref{fm1}))}\nonumber\\
&=\exp\left(t\sum_{j=0}^{\infty}u^{j}(\lambda_{j-1}-\lambda_{j})\right).\nonumber
\end{align}
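As a simple illustration of (\ref{kjhgt21}) (the particular rates below are ours and are chosen only for illustration; they are positive, non-increasing and summable), take $\lambda_{j}=\lambda q^{j}$ with $\lambda>0$ and $0<q<1$, so that $\lambda_{j-1}-\lambda_{j}=\lambda q^{j-1}(1-q)$ for $j\ge1$. Then
\begin{equation*}
G_c(u,t)=e^{-\lambda t}\exp\left(\lambda(1-q)t\sum_{j=1}^{\infty}u^{j}q^{j-1}\right)=\exp\left(\lambda t\left(\frac{(1-q)u}{1-qu}-1\right)\right),\ \ |u|\le1,
\end{equation*}
which is the pgf of a compound Poisson process with jump rate $\lambda$ and geometrically distributed jump sizes, in line with the compound Poisson structure of the CPP noted in the remark following Theorem \ref{thwe22}.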
Also, its mean and variance are given by
\begin{equation}\label{4.3ew}
\mathbb{E}\left(\mathcal{N}_{c}(t)\right)=t\sum_{j=0}^{\infty}\lambda_{j}
\end{equation}
and
\begin{equation}\label{4.4er}
\operatorname{Var}\left(\mathcal{N}_{c}(t)\right)=t\left(\sum_{j=0}^{\infty}\lambda_{j}+2\sum_{j=1}^{\infty}j\lambda_{j}\right),
\end{equation}
respectively.
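Formula (\ref{4.3ew}) can also be checked directly against the pgf (\ref{kjhgt21}): since $G_c(1,t)=1$, differentiating at $u=1$ gives
\begin{equation*}
\mathbb{E}\left(\mathcal{N}_{c}(t)\right)=\frac{\partial}{\partial u}G_c(u,t)\Big|_{u=1}=t\sum_{j=1}^{\infty}j(\lambda_{j-1}-\lambda_{j})=t\sum_{j=0}^{\infty}\lambda_{j},
\end{equation*}
where the last step uses the telescoping identity $\sum_{j=1}^{N}j(\lambda_{j-1}-\lambda_{j})=\sum_{j=0}^{N-1}\lambda_{j}-N\lambda_{N}$ together with $N\lambda_{N}\to0$, which holds whenever $\sum_{j\ge1}j\lambda_{j}<\infty$.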
\begin{remark}
The CPP exhibits overdispersion as $\operatorname{Var}\left(\mathcal{N}_{c}(t)\right)-\mathbb{E}\left(\mathcal{N}_{c}(t)\right)=2t\sum_{j=1}^{\infty}j\lambda_{j}>0$ for $t>0$ provided $\lambda_{n}\neq0$ for some $n\ge1$.
\end{remark}
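For the illustrative choice $\lambda_{j}=\lambda q^{j}$ considered after (\ref{kjhgt21}), one has $\sum_{j\ge0}\lambda_{j}=\lambda/(1-q)$ and $\sum_{j\ge1}j\lambda_{j}=\lambda q/(1-q)^{2}$, so the index of dispersion is
\begin{equation*}
\frac{\operatorname{Var}\left(\mathcal{N}_{c}(t)\right)}{\mathbb{E}\left(\mathcal{N}_{c}(t)\right)}=1+\frac{2\sum_{j\ge1}j\lambda_{j}}{\sum_{j\ge0}\lambda_{j}}=\frac{1+q}{1-q}>1.
\end{equation*}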
\begin{theorem}\label{thwe22}
The CPP is a L\'evy process.
\end{theorem}
\begin{proof}
The characteristic function $\phi_c(\xi,t)$ of the CPP is given by
\begin{align*}
\phi_c(\xi,t)=\mathbb{E}\left(e^{i\xi\mathcal{N}_{c}(t)}\right)&=\sum_{n=0}^{\infty}e^{i\xi n}p_{c}(n,t),\ \ \xi\in\mathbb{R}\\
&=e^{-t\lambda_{0}}\left(1 +\sum_{n=1}^{\infty}e^{i\xi n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k}\right)\\
&=e^{-t\lambda_{0}}\exp\left(t\sum_{j=1}^{\infty}e^{i\xi j}(\lambda_{j-1}-\lambda_{j})\right),\ \ \text{(using\ (\ref{fm1}))}\\
&=\exp\left(-t\sum_{j=0}^{\infty}e^{i\xi j}(\lambda_{j}-\lambda_{j-1})\right).
\end{align*}
So, its characteristic exponent is
\begin{equation*}
\psi(\xi)=\sum_{j=0}^{\infty}(\lambda_{j}-\lambda_{j-1})e^{i\xi j}
=\sum_{j=0}^{\infty}(\lambda_{j-1}-\lambda_{j})(1-e^{i\xi j}).
\end{equation*}
Thus, the CPP is a L\'evy process with L\'evy measure $\Pi(\mathrm{d}x)=\sum_{j=1}^{\infty}(\lambda_{j-1}-\lambda_{j})\delta_{j}(\mathrm{d}x)$, where the $\delta_{j}$'s are Dirac measures at $j$.
\end{proof}
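Observe that the L\'evy measure obtained above has finite total mass,
\begin{equation*}
\Pi(\mathbb{R})=\sum_{j=1}^{\infty}(\lambda_{j-1}-\lambda_{j})=\lambda_{0}-\lim_{N\to\infty}\lambda_{N}=\lambda_{0},
\end{equation*}
where the last equality uses $\lambda_{N}\to0$ (which is implicit in $G_c(1,t)=1$). Hence the CPP is a finite activity L\'evy process whose jump epochs arrive at rate $\lambda_{0}$, in agreement with the compound Poisson representation recorded in the next remark.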
\begin{remark}
Substituting $\alpha=1$ in (\ref{cd}), we get
\begin{equation*}
\mathcal{N}_{c}(t)\overset{d}{=}\sum_{i=1}^{N(t)}X_{i}, \ \ \ t\ge0,
\end{equation*}
where $\{X_{i}\}_{i\ge1}$ is a sequence of i.i.d. random variables independent of the Poisson process $\{N(t)\}_{t\ge0}$, {\it i.e.}, the CPP is a compound Poisson process. Hence, it is a L\'evy process.
\end{remark}
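This representation is consistent with (\ref{kjhgt21}): taking $\{N(t)\}_{t\ge0}$ to be a Poisson process with rate $\lambda_{0}$ and the $X_{i}$'s i.i.d. with $\mathrm{Pr}\{X_{1}=m\}=(\lambda_{m-1}-\lambda_{m})/\lambda_{0}$, $m\ge1$, as in (\ref{sweee32}), conditioning on $N(t)$ gives
\begin{equation*}
\mathbb{E}\left(u^{\sum_{i=1}^{N(t)}X_{i}}\right)=\exp\left(\lambda_{0}t\left(\mathbb{E}\left(u^{X_{1}}\right)-1\right)\right)=e^{-t\lambda_{0}}\exp\left(t\sum_{j=1}^{\infty}u^{j}(\lambda_{j-1}-\lambda_{j})\right)=G_c(u,t).
\end{equation*}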
Let $\{T_{2\alpha}(t)\}_{t>0}$ be a random process whose distribution is given by the folded solution of the following fractional diffusion equation (see Orsingher and Beghin (2004)):
\begin{equation}\label{diff}
\partial^{2\alpha}_{t}u(x,t)=\frac{\partial^{2}}{\partial x^{2}}u(x,t),\ \ x\in\mathbb{R},\ t>0,
\end{equation}
with $u(x,0)=\delta(x)$ for $0<\alpha\le 1$ and $\frac{\partial}{\partial t}u(x,0)=0$ for $1/2<\alpha\le1$.
The Laplace transform of the folded solution $f_{T_{2\alpha}}(x,t)$ of (\ref{diff}) is given by (see Orsingher and Polito (2010), Eq. (2.29))
\begin{equation}\label{lta}
\int_{0}^{\infty}e^{-st}f_{T_{2\alpha}}(x,t)\mathrm{d}t=s^{\alpha-1}e^{-x s^{\alpha}}, \ \ x>0.
\end{equation}
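As a quick consistency check of (\ref{lta}), note that for $\alpha=1$ equation (\ref{diff}) becomes the wave equation with initial data $u(x,0)=\delta(x)$ and $\partial_{t}u(x,0)=0$, whose folded solution is $f_{T_{2}}(x,t)=\delta(x-t)$, $x>0$; accordingly,
\begin{equation*}
\int_{0}^{\infty}e^{-st}\delta(x-t)\,\mathrm{d}t=e^{-xs}=s^{\alpha-1}e^{-xs^{\alpha}}\Big|_{\alpha=1},
\end{equation*}
and the time change in the next theorem is trivial in this case.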
The next result establishes a time-change relationship between the CPP and the CFPP.
\begin{theorem}
Let the process $\{T_{2\alpha}(t)\}_{t>0}$, $0<\alpha\le 1$, be independent of the CPP $\{\mathcal{N}_{c}(t)\}_{t>0}$. Then,
\begin{equation}\label{sub}
\mathcal{N}^{\alpha}_{c}(t)\overset{d}{=}\mathcal{N}_{c}(T_{2\alpha}(t)).
\end{equation}
\end{theorem}
\begin{proof}
From (\ref{mjhgqq1}), we have
\begin{align}\label{sin}
\tilde{G}^{\alpha}_{c}(u,s)&=\dfrac{s^{\alpha-1}}{s^{\alpha}+\lambda_{0}}+\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}\frac{s^{\alpha-1}}{(s^{\alpha}+\lambda_{0})^{k+1}},\ \ s>0\nonumber\\
&=\int_{0}^{\infty}s^{\alpha-1}\left(e^{-\mu(s^{\alpha}+\lambda_{0})} +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} e^{-\mu(s^{\alpha}+\lambda_{0})}\mu^{k}\right)\mathrm{d}\mu \nonumber\\
&=\int_{0}^{\infty}s^{\alpha-1}e^{-\mu s^{\alpha}}\left(e^{-\mu\lambda_{0}} +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} e^{-\mu\lambda_{0}}\mu^{k}\right)\mathrm{d}\mu \nonumber\\
&=\int_{0}^{\infty}s^{\alpha-1}e^{-\mu s^{\alpha}}G_c(u,\mu)\mathrm{d}\mu,\ \ \text{(using\ (\ref{kjhgt21}))}\nonumber\\
&=\int_{0}^{\infty}G_c(u,\mu)\int_{0}^{\infty}e^{-st}f_{T_{2\alpha}}(\mu,t)\mathrm{d}t\,\mathrm{d}\mu,\ \ \text{(using\ (\ref{lta}))}\nonumber\\
&=\int_{0}^{\infty}e^{-st}\left(\int_{0}^{\infty}G_c(u,\mu)f_{T_{2\alpha}}(\mu,t)\mathrm{d}\mu\right)\mathrm{d}t \nonumber.
\end{align}
By the uniqueness of Laplace transforms, we get
\begin{equation*}
G^{\alpha}_{c}(u,t)=\int_{0}^{\infty}G_c(u,\mu)f_{T_{2\alpha}}(\mu,t)\mathrm{d}\mu,
\end{equation*}
which is precisely the pgf of $\mathcal{N}_{c}(T_{2\alpha}(t))$. Since the pgfs coincide for every $t>0$, (\ref{sub}) follows. This completes the proof.
\end{proof}
\begin{remark}
For $\alpha=1/2$, the process $\{T_{2\alpha}(t)\}_{t>0}$ is equal in distribution to the reflecting Brownian motion $\{|B(t)|\}_{t>0}$ as the diffusion equation (\ref{diff}) reduces to the heat equation
\begin{equation*}
\begin{cases*}
\frac{\partial}{\partial t}u(x,t)=\frac{\partial^{2}}{\partial x^{2}}u(x,t),\ \ x\in\mathbb{R},\ t>0,\\
u(x,0)=\delta(x).
\end{cases*}
\end{equation*}
So, $\mathcal{N}^{1/2}_{c}(t)$ coincides in distribution with the CPP evaluated at a Brownian time, that is, with $\mathcal{N}_{c}(|B(t)|)$, $t>0$.
\end{remark}
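This identification can also be verified from (\ref{lta}): the folded solution of the above heat equation is $f_{T_{1}}(x,t)=(\pi t)^{-1/2}e^{-x^{2}/(4t)}$, $x>0$, and the classical Laplace transform formula
\begin{equation*}
\int_{0}^{\infty}e^{-st}\frac{1}{\sqrt{\pi t}}e^{-x^{2}/(4t)}\,\mathrm{d}t=\frac{e^{-x\sqrt{s}}}{\sqrt{s}}
\end{equation*}
is precisely (\ref{lta}) with $\alpha=1/2$.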
\begin{remark}
Let $\{H^{\alpha}(t)\}_{t>0}$, $0<\alpha\le 1$, be an inverse $\alpha$-stable subordinator. The density functions of $H^{\alpha}(t)$ and $T_{2\alpha}(t)$ coincide (see Meerschaert {\it et al.} (2011)). Thus, we have
\begin{equation}\label{keyyekk}
\mathcal{N}^{\alpha}_{c}(t)\overset{d}{=}\mathcal{N}_{c}(H^{\alpha}(t)),\ \ t>0,
\end{equation}
where the inverse $\alpha$-stable subordinator is independent of the CPP.
\end{remark}
Let $\{\mathcal{H}^{\alpha}(t)\}_{t>0}$, $0<\alpha\le1$, be a random time process whose density function $f_{\mathcal{H}^{\alpha}(t)}(x,t)$, $x>0$, has the following Mellin transform (see Cahoy and Polito (2012), Eq. (3.12)):
\begin{equation}\label{mhts}
\int_{0}^{\infty}x^{\nu-1}f_{\mathcal{H}^{\alpha}(t)}(x,t)\mathrm{d}x=\frac{\Gamma(\nu)t^{\frac{\nu-1}{\alpha}}}{\Gamma(1-1/\alpha+\nu/\alpha)},\ \ t>0,\ \nu\in\mathbb{R}.
\end{equation}
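Note that for $\alpha=1$ the right-hand side of (\ref{mhts}) reduces to
\begin{equation*}
\frac{\Gamma(\nu)t^{\nu-1}}{\Gamma(\nu)}=t^{\nu-1},
\end{equation*}
which is the Mellin transform of the point mass at $t$. Hence $\mathcal{H}^{1}(t)$ is degenerate at $t$ and the next proposition reduces to a tautology when $\alpha=1$.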
\begin{proposition}
Let the process $\{\mathcal{H}^{\alpha}(t)\}_{t>0}$, $0<\alpha\le 1$, be independent of the CFPP $\{\mathcal{N}^\alpha_{c}(t)\}_{t>0}$. Then,
\begin{equation*}
\mathcal{N}_{c}(t)\overset{d}{=}\mathcal{N}^{\alpha}_{c}(\mathcal{H}^{\alpha}(t)),\ \ t>0.
\end{equation*}
\end{proposition}
\begin{proof}
Using (\ref{dssdew1}), we have
\scriptsize
\begin{align*}
\int_{0}^{\infty}G^{\alpha}_{c}(u,s)&f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s\\
&=\int_{0}^{\infty}\sum_{n=0}^{\infty}u^np^{\alpha}_{c}(n,s)f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s\\
&=\int_{0}^{\infty}\left(E_{\alpha,1}(-\lambda_{0}s^{\alpha})+\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}k!\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} s^{k\alpha}E_{\alpha,k\alpha+1}^{k+1}(-\lambda_{0}s^{\alpha})\right)f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s\\
&=\int_{0}^{\infty}\Bigg(\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma (z)\Gamma(1-z)}{\Gamma(1-\alpha z)}(\lambda_{0}s^{\alpha})^{-z}\mathrm{d}z\\
&\ \ +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma (z)\Gamma(k+1-z)}{\Gamma(\alpha(k-z)+1)}(\lambda_{0}s^{\alpha})^{-z}s^{k\alpha}\mathrm{d}z\Bigg)f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s,\\
&\hspace*{12cm} (\mathrm{using}\ (\ref{m3}))\\
&=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma (z)\Gamma(1-z)\lambda_{0}^{-z}}{\Gamma(1-\alpha z)}\left(\int_{0}^{\infty}s^{-\alpha z}f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s\right)\mathrm{d}z\\
&\ \ +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\Gamma (z)\Gamma(k+1-z)\lambda_{0}^{-z}}{\Gamma(\alpha(k-z)+1)}\left(\int_{0}^{\infty}s^{k\alpha-\alpha z}f_{\mathcal{H}^{\alpha}(t)}(s,t)\mathrm{d}s\right)\mathrm{d}z\\
&=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma (z)(\lambda_{0}t)^{-z}\mathrm{d}z +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!}
\frac{t^{k}}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Gamma (z)(\lambda_{0}t)^{-z}\mathrm{d}z,\\
&\hspace*{12cm} (\mathrm{using}\ (\ref{mhts}))\\
&=e^{-t\lambda_{0}} +\sum_{n=1}^{\infty}u^{n}\sum_{k=1}^{n}\sum_{\Lambda_{n}^{k}}\prod_{j=1}^{n-k+1}\frac{(\lambda_{j-1}-\lambda_{j})^{k_{j}}}{k_{j}!} t^{k}e^{-t\lambda_{0}},\ \ (\mathrm{using}\ (\ref{me}))\\
&=G_c(u,t),
\end{align*}
\normalsize
where in the last step we have used (\ref{kjhgt21}).
\end{proof}
\section{The dependence structure of the CFPP and its increments}\label{Section5}
In this section, we show that the CFPP has the LRD property whereas its increment process exhibits the SRD property.
For a non-stationary stochastic process $\{X(t)\}_{t\geq0}$ the LRD and SRD properties are defined as follows (see D'Ovidio and Nane (2014), Maheshwari and Vellaisamy (2016)):
\begin{definition}
Let $s>0$ be fixed and $\{X(t)\}_{t\ge0}$ be a stochastic process whose correlation function satisfies
\begin{equation}\label{lrd}
\operatorname{Corr}(X(s),X(t))\sim c(s)t^{-\gamma},\ \text{as}\ t\rightarrow\infty,
\end{equation}
for some $c(s)>0$. The process $\{X(t)\}_{t\ge0}$ has the LRD property if $\gamma\in(0,1)$ and the SRD property if $\gamma\in(1,2)$.
\end{definition}
First, we obtain the covariance of the CFPP. Using Theorem 2.1 of Leonenko {\it et al.} (2014) and the subordination result (\ref{keyyekk}), the covariance of the CFPP can be obtained as follows:
\begin{align}\label{covfrd11}
\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)&=\operatorname{Cov}\left(\mathcal{N}_{c}(H^{\alpha}(s)),\mathcal{N}_{c}(H^{\alpha}(t))\right)\nonumber\\
&=\operatorname{Var}\left(\mathcal{N}_{c}(1)\right)\mathbb{E}(H^{\alpha}(\min\{s,t\}))+\left(\mathbb{E}(\mathcal{N}_{c}(1))\right)^{2}\operatorname{Cov}\left(H^{\alpha}(s),H^{\alpha}(t)\right),
\end{align}
where we used Theorem \ref{thwe22} and the fact that the inverse stable subordinator $\{H^{\alpha}(t)\}_{t\ge0}$ is a non-decreasing process.
On using Theorem 2.1 of Leonenko {\it et al.} (2014), the mean and variance of the CFPP can alternatively be obtained as follows:
\begin{equation*}
\mathbb{E}\left(\mathcal{N}^{\alpha}_{c}(t)\right)=\mathbb{E}\left(\mathcal{N}_{c}(1)\right)\mathbb{E}\left(H^{\alpha}(t)\right)
\end{equation*}
and
\begin{equation*}
\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(t)\right)=\left(\mathbb{E}\left(\mathcal{N}_{c}(1)\right)\right)^{2}\operatorname{Var}\left(H^{\alpha}(t)\right)+\operatorname{Var}\left(\mathcal{N}_{c}(1)\right)\mathbb{E}\left(H^{\alpha}(t)\right),
\end{equation*}
where the mean and variance of the inverse $\alpha$-stable subordinator are given by (see Leonenko {\it et al.} (2014), Eq. (8) and Eq. (11))
\begin{equation*}
\mathbb{E}\left(H^{\alpha}(t)\right)=\frac{t^{\alpha}}{\Gamma(\alpha+1)}
\end{equation*}
and
\begin{equation}\label{xswe33}
\operatorname{Var}\left(H^{\alpha}(t)\right)=t^{2\alpha}\left(\frac{2}{\Gamma(2\alpha+1)}-\frac{1}{\Gamma^{2}(\alpha+1)}\right),
\end{equation}
respectively.
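In particular, for $\alpha=1$ these expressions give $\mathbb{E}\left(H^{1}(t)\right)=t$ and
\begin{equation*}
\operatorname{Var}\left(H^{1}(t)\right)=t^{2}\left(\frac{2}{\Gamma(3)}-\frac{1}{\Gamma^{2}(2)}\right)=0,
\end{equation*}
consistent with the inverse $1$-stable subordinator being the deterministic process $H^{1}(t)=t$; in that case the mean and variance of the CFPP displayed above reduce to (\ref{4.3ew}) and (\ref{4.4er}).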
\begin{remark}
From (\ref{wswee11}), (\ref{var}) and (\ref{xswe33}), we get
\begin{equation*}
\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(t)\right)-\mathbb{E}\left(\mathcal{N}^{\alpha}_{c}(t)\right)=\frac{2t^{\alpha}\sum_{j=1}^{\infty}j\lambda_{j}}{\Gamma(\alpha+1)}+\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2} \operatorname{Var}\left(H^{\alpha}(t)\right).
\end{equation*}
Thus, the CFPP exhibits overdispersion as $\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(t)\right)-\mathbb{E}\left(\mathcal{N}^{\alpha}_{c}(t)\right)>0$ for $t>0$.
\end{remark}
Let
\begin{equation*}
R=\frac{1}{\Gamma(\alpha+1)}\sum_{j=0}^{\infty}\lambda_{j},\ \ S=\left(\frac{2}{\Gamma(2\alpha+1)}-\frac{1}{\Gamma^2(\alpha+1)}\right)\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}
\end{equation*}
and
\begin{equation*}
T=\frac{1}{\Gamma(\alpha+1)}\left(\sum_{j=0}^{\infty}\lambda_{j}+2\sum_{j=1}^{\infty}j\lambda_{j}\right).
\end{equation*}
For $0<s\le t$ in (\ref{covfrd11}), we get
\begin{equation}\label{qazxsa22}
\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)=Ts^{\alpha}+\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}\operatorname{Cov}\left(H^{\alpha}(s),H^{\alpha}(t)\right).
\end{equation}
For large $t$, we use the following result due to Leonenko {\it et al.} (2014):
\begin{equation*}\label{asi}
\operatorname{Cov}\left(H^{\alpha}(s),H^{\alpha}(t)\right)\sim\frac{s^{2\alpha}}{\Gamma(2\alpha+1)}
\end{equation*}
in (\ref{qazxsa22}) to obtain
\begin{equation}\label{wdseq16}
\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)\sim Ts^{\alpha}+\frac{\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}s^{2\alpha}}{\Gamma(2\alpha+1)}\ \ \mathrm{as}\ \ t\to\infty.
\end{equation}
We now show that the CFPP has the LRD property.
\begin{theorem}
The CFPP exhibits the LRD property.
\end{theorem}
\begin{proof}
Using (\ref{var}) and (\ref{wdseq16}), we get the following for fixed $s>0$ and large $t$:
\begin{align*}
\operatorname{Corr}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)&=\frac{\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)}{\sqrt{\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(s)\right)}\sqrt{\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(t)\right)}}\\
&\sim\frac{\Gamma(2\alpha+1)Ts^{\alpha}+\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}s^{2\alpha}}{\Gamma(2\alpha+1)\sqrt{\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(s)\right)}\sqrt{St^{2\alpha}+Tt^{\alpha}}}\\
&\sim c_0(s)t^{-\alpha},
\end{align*}
where
\begin{equation*}
c_0(s)=\frac{\Gamma(2\alpha+1)Ts^{\alpha}+\left(\sum_{j=0}^{\infty}\lambda_{j}\right)^{2}s^{2\alpha}}{\Gamma(2\alpha+1)\sqrt{\operatorname{Var}\left(\mathcal{N}^{\alpha}_{c}(s)\right)}\sqrt{S}}.
\end{equation*}
As $0<\alpha<1$, the result follows.
\end{proof}
For a fixed $\delta>0$, we define the convoluted fractional Poissonian noise (CFPN), denoted by $\{Z^{\alpha}_{c,\delta}(t)\}_{t\ge0}$, as the increment process of the CFPP, that is,
\begin{equation}\label{qlq1}
Z^{\alpha}_{c,\delta}(t)\coloneqq\mathcal{N}^{\alpha}_{c}(t+\delta)-\mathcal{N}^{\alpha}_{c}(t).
\end{equation}
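For comparison with the variance asymptotics obtained in the proof below, we note that the mean formulas above immediately give
\begin{equation*}
\mathbb{E}\left(Z^{\alpha}_{c,\delta}(t)\right)=\frac{\sum_{j=0}^{\infty}\lambda_{j}}{\Gamma(\alpha+1)}\left((t+\delta)^{\alpha}-t^{\alpha}\right)\sim\alpha\delta R\,t^{\alpha-1},\ \ \mathrm{as}\ t\to\infty.
\end{equation*}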
Next, we show that the CFPN exhibits the SRD property.
\begin{theorem}\label{varbgfff}
The CFPN has the SRD property.
\end{theorem}
\begin{proof}
Let $s\ge0$ be fixed such that $0\le s+\delta\le t$. We have,
\begin{align}\label{covz}
\operatorname{Cov}(Z^{\alpha}_{c,\delta}(s),Z^{\alpha}_{c,\delta}(t))&=\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s+\delta)-\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t+\delta)-\mathcal{N}^{\alpha}_{c}(t)\right)\nonumber\\
&=\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s+\delta),\mathcal{N}^{\alpha}_{c}(t+\delta)\right)+\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)\nonumber\\
&\ \ \ \ -\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s+\delta),\mathcal{N}^{\alpha}_{c}(t)\right)-\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t+\delta)\right).
\end{align}
Leonenko {\it et al.} (2014) obtained the following expression for the covariance of the inverse $\alpha$-stable subordinator:
\begin{equation}\label{mhjgf44}
\operatorname{Cov}\left(H^{\alpha}(s),H^{\alpha}(t)\right)=\frac{1}{\Gamma^2(\alpha+1)}\left( \alpha s^{2\alpha}B(\alpha,\alpha+1)+F(\alpha;s,t)\right),
\end{equation}
where $F(\alpha;s,t)=\alpha t^{2\alpha}B(\alpha,\alpha+1;s/t)-(ts)^{\alpha}$. Here, $B(\alpha,\alpha+1)$ is the beta function whereas $B(\alpha,\alpha+1;s/t)$ is the incomplete beta function.
In (\ref{qazxsa22}), we use the following asymptotic result (see Maheshwari and Vellaisamy (2016), Eq. (8)):
\begin{equation*}
F(\alpha;s,t)\sim \frac{-\alpha^{2}}{(\alpha+1)}\frac{s^{\alpha+1}}{t^{1-\alpha}},\ \ \mathrm{as}\ \ t\to\infty,
\end{equation*}
to obtain
\begin{equation}\label{covzt}
\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(s),\mathcal{N}^{\alpha}_{c}(t)\right)\sim Ts^{\alpha}+ R^{2}\left(\alpha s^{2\alpha}B(\alpha,\alpha+1)-\frac{\alpha^{2}}{(\alpha+1)}\frac{s^{\alpha+1}}{t^{1-\alpha}}\right) \ \ \mathrm{as}\ t\to\infty.
\end{equation}
From (\ref{covz}) and (\ref{covzt}), we get the following for large $t$:
\begin{align}\label{covzi}
\operatorname{Cov}(Z^{\alpha}_{c,\delta}(s),Z^{\alpha}_{c,\delta}(t))&\sim \frac{R^{2}\alpha^{2}}{\alpha+1}\left(\frac{s^{\alpha+1}}{(t+\delta)^{1-\alpha}}+\frac{(s+\delta)^{\alpha+1}}{t^{1-\alpha}}-\frac{s^{\alpha+1}}{t^{1-\alpha}}-\frac{(s+\delta)^{\alpha+1}}{(t+\delta)^{1-\alpha}}\right)\nonumber\\
&=\frac{R^{2}\alpha^{2}}{\alpha+1}\left((t+\delta)^{\alpha-1}-t^{\alpha-1}\right)\left(s^{\alpha+1}-(s+\delta)^{\alpha+1}\right)\nonumber\\
&\sim\frac{\alpha^{2}\delta(1-\alpha)}{\alpha+1}\left((s+\delta)^{\alpha+1}-s^{\alpha+1}\right)R^{2}t^{\alpha-2}.
\end{align}
Now,
\begin{equation}\label{rfcdee1}
\operatorname{Var}(Z^{\alpha}_{c,\delta}(t))=\operatorname{Var}(\mathcal{N}^{\alpha}_{c}(t+\delta))+\operatorname{Var}(\mathcal{N}^{\alpha}_{c}(t))-2\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(t),\mathcal{N}^{\alpha}_{c}(t+\delta)\right).
\end{equation}
From (\ref{qazxsa22}) and (\ref{mhjgf44}), we have
\begin{equation}\label{sxeww12}
\operatorname{Cov}\left(\mathcal{N}^{\alpha}_{c}(t),\mathcal{N}^{\alpha}_{c}(t+\delta)\right)=Tt^{\alpha}+R^{2}\left(\alpha t^{2\alpha}B(\alpha,\alpha+1)+F(\alpha;t,t+\delta)\right),
\end{equation}
where $F(\alpha;t,t+\delta)=\alpha (t+\delta)^{2\alpha}B(\alpha,\alpha+1;t/(t+\delta))-(t(t+\delta))^{\alpha}$.
For large $t$, we have
\begin{equation*}
B(\alpha,\alpha+1;t/(t+\delta))\sim B(\alpha,\alpha+1)=\frac{\Gamma(\alpha)\Gamma(\alpha+1)}{\Gamma(2\alpha+1)}.
\end{equation*}
Substituting (\ref{var}) and (\ref{sxeww12}) in (\ref{rfcdee1}), we get
\begin{align}\label{varzi}
\operatorname{Var}(Z^{\alpha}_{c,\delta}(t))&\sim (S-2R^{2}\alpha B(\alpha,\alpha+1))t^{2\alpha}+(S-2R^{2}\alpha B(\alpha,\alpha+1))(t+\delta)^{2\alpha}\nonumber\\
&\hspace*{4cm}+T\left((t+\delta)^{\alpha}-t^{\alpha}\right)+2R^{2}(t(t+\delta))^{\alpha}\nonumber\\
&= Tt^{\alpha}\left(\left(1+\frac{\delta}{t}\right)^{\alpha}-1\right)-R^{2}t^{2\alpha}\left(\left(1+\frac{\delta}{t}\right)^{\alpha}-1\right)^2\nonumber\\
&\sim T\alpha\delta t^{\alpha-1}-R^{2}\alpha^{2}\delta^{2}t^{2\alpha-2}\nonumber\\
&\sim \alpha\delta Tt^{\alpha-1},\ \ \mathrm{as}\ \ t\to\infty.
\end{align}
From (\ref{covzi}) and (\ref{varzi}), we have
\begin{align*}
\operatorname{Corr}(Z^{\alpha}_{c,\delta}(s),Z^{\alpha}_{c,\delta}(t))&=\dfrac{\operatorname{Cov}\left(Z^{\alpha}_{c,\delta}(s),Z^{\alpha}_{c,\delta}(t)\right)}{\sqrt{\operatorname{Var}(Z^{\alpha}_{c,\delta}(s))}\sqrt{\operatorname{Var}(Z^{\alpha}_{c,\delta}(t))}}\\
&\sim \frac{\alpha^{2}\delta(1-\alpha)\left((s+\delta)^{\alpha+1}-s^{\alpha+1}\right)R^{2}t^{\alpha-2}}{(\alpha+1)\sqrt{\operatorname{Var}(Z^{\alpha}_{c,\delta}(s))}\sqrt{\alpha\delta T t^{\alpha-1}}}\\
&=c_1(s)t^{-(3-\alpha)/2},\ \ \mathrm{as}\ t\rightarrow\infty,
\end{align*}
where
\begin{equation*}
c_1(s)=\frac{\alpha^{2}\delta(1-\alpha)\left((s+\delta)^{\alpha+1}-s^{\alpha+1}\right)R^{2}}{(\alpha+1)\sqrt{\operatorname{Var}(Z^{\alpha}_{c,\delta}(s))}\sqrt{\alpha\delta T}}.
\end{equation*}
Thus, the CFPN exhibits the SRD property as $1<(3-\alpha)/2<3/2$.
\end{proof}
\printbibliography
\end{document}
\begin{document}
\title{The Inverse Problem for Canonically Bounded Rank-one Transformations}
\author{Aaron Hill}
\address{Department of Mathematics\\ University of Louisville\\ Louisville, KY 40292}
\email{aaron.hill@louisville.edu}
\thanks{The author acknowledges the US NSF grant DMS-0943870 for the support of his research.}
\subjclass[2010]{Primary 37A05, 37A35}
\date{July 3, 2015 and, in revised form, December 11, 2015.}
\keywords{rank-one transformation, isomorphic, canonically bounded}
\begin{abstract}
Given the cutting and spacer parameters for a rank-1 transformation, there is a simple condition which is easily seen to be sufficient to guarantee that the transformation under consideration is isomorphic to its inverse. Here we show that if the cutting and spacer parameters are canonically bounded, that condition is also necessary, thus giving a simple characterization of the canonically bounded rank-1 transformations that are isomorphic to their inverse.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Background}
A measure-preserving transformation is an automorphism of a standard Lebesgue space. Formally, it is a quadruple $(X, \mathcal{B}, \mu, T)$, where
\begin{enumerate}
\item $(X, \mathcal{B}, \mu)$ is a measure space isomorphic to the unit interval with the Lebesgue measure on all Borel sets,
\item $T$ is a bijection from $X$ to $X$ such that $T$ and $T^{-1}$ are both $\mu$-measurable and preserve the measure $\mu$.
\end{enumerate}
When the algebra of measurable sets is clear, we will refer to the transformation $(X, \mathcal{B}, \mu, T)$ by $(X, \mu, T)$. If $(X, \mathcal{B}, \mu, T)$ is a measure-preserving transformation, then so is its inverse, $(X, \mathcal{B}, \mu, T^{-1})$.
Two measure-preserving transformations $(X, \mathcal{B}, \mu, T)$ and $(X^\prime, \mathcal{B}^\prime, \mu^\prime, T^\prime)$ are isomorphic if there is a measure isomorphism $\phi$ from $(X, \mathcal{B}, \mu)$ to $(X^\prime, \mathcal{B}^\prime, \mu^\prime)$ such that $\mu$ almost everywhere, $\phi \circ T = T^\prime \circ \phi $.
One of the central problems of ergodic theory, originally posed by von Neumann, is the isomorphism problem: How can one determine whether two measure-preserving transformations are isomorphic? The inverse problem is one of its natural restrictions: How can one determine whether a measure-preserving transformation is isomorphic to its inverse?
In the early 1940s, Halmos and von Neumann \cite{HalmosvonNeumann} showed that ergodic measure-preserving transformations with pure point spectrum are isomorphic iff they have the same spectrum. It immediately follows from this that every ergodic measure-preserving transformation with pure point spectrum is isomorphic to its inverse.
About a decade later, Anzai \cite{Anzai} gave the first example of a measure-preserving transformation not isomorphic to its inverse. Later, Fieldsteel \cite{Fieldsteel} and del Junco, Rahe, and Swanson \cite{delJuncoRaheSwanson} independently showed that Chacon's transformation--one of the earliest examples of what we now call rank-1 transformations--is not isomorphic to its inverse. In the late 1980s, Ageev \cite{Ageev3} showed that a generic measure-preserving transformation is not isomorphic to its inverse.
In 2011, Foreman, Rudolph, and Weiss \cite{ForemanRudolphWeiss} showed that the set of ergodic measure-preserving transformations of a fixed standard Lebesgue space that are isomorphic to their inverse is a complete analytic subset of all measure-preserving transformations on that space. In essence, this result shows that there is no simple (i.e., Borel) condition which is satisfied if and only if an ergodic measure-preserving transformation is isomorphic to its inverse. However, in the same paper they show that the isomorphism relation becomes much simpler when restricted to the generic class of rank-1 transformations. It follows from their work that there exists a simple (i.e., Borel) condition which is satisfied if and only if a rank-1 measure-preserving transformation is isomorphic to its inverse. Currently, however, no such condition is known. In this paper we give a simple condition that is sufficient for a rank-1 transformation to be isomorphic to its inverse and show that for canonically bounded rank-1 transformations, the condition is also necessary.
\subsection{Rank-1 transformations}
\label{comments}
In this subsection we state the definitions and basic facts pertaining to rank-1 transformations that will be used in our main arguments.
We mostly follow the symbolic presentation in \cite{GaoHill1} and \cite{GaoHill2}, but also provide comments that hopefully will be helpful to those more familiar with a different approach to rank-1 transformations. Additional information about the connections between different approaches to rank-1 transformations can be found in the survey article \cite{Ferenczi}.
We first remark that by $\N$ we mean the set of all finite ordinals, including zero: $\{0, 1, 2, \ldots \}$.
Our main objects of study are symbolic rank-1 measure-preserving transformations. Each such transformation is a measure-preserving transformation $(X, \mathcal{B}, \mu, \sigma)$, where $X$ is a closed, shift-invariant subset of $\{0,1\}^\Z$, $\mathcal{B}$ is the collection of Borel sets that $X$ inherits from the product topology on $\{0,1\}^\Z$, $\mu$ is an atomless, shift-invariant (Borel) probability measure on $X$, and $\sigma$ is the shift. To be precise, the shift $\sigma$ is the bijection from $\{0,1\}^\Z$ to $\{0,1\}^\Z$, where $\sigma (x) (i) = x (i+1)$. Since the measure algebra of a symbolic measure-preserving transformation comes from the topology on $\{0,1\}^\Z$, we will omit the reference to that measure algebra and simply refer to a symbolic measure-preserving transformation as $(X, \mu, \sigma)$.
Symbolic rank-1 measure-preserving transformations are usually described by {\em cutting and spacer parameters}. The cutting parameter is a sequence $(r_n : n \in \N)$ of integers greater than 1. The spacer parameter is a sequence of tuples $(s_n : n \in \N)$, where formally $s_n$ is a function from $\{1, 2, \ldots, r_n -1\}$ to $\N$ (note that $s_n$ is allowed to take the value zero). Given
such cutting and spacer parameters, one defines the symbolic rank-1 system $(X, \sigma)$ as follows. First define a sequence of finite words $(v_n : n \in \N)$ by $v_0 =0$ and $$v_{n+1} = v_n 1^{s_n(1)} v_n 1^{s_n(2)}v_n \ldots v_n 1^{s_n(r_n-1)} v_n.$$ The sequence $(v_n: n \in \N)$ is called a {\em generating sequence}. Then let $$X = \{x \in \{0,1\}^\Z: \text{ every finite subword of $x$ is a subword of some $v_n$}\}.$$ It is straightforward to check that $X$ is a closed, shift-invariant subset of $\{0,1\}^\Z$. These symbolic rank-1 systems are treated extensively--as topological dynamical systems--in \cite{GaoHill1}. In order to introduce a nice measure $\mu$ and thus obtain a measurable dynamical system, we make two additional assumptions on the cutting and spacer parameters.
\begin{enumerate}
\item For every $N \in \N$ there exist $n, n^\prime \geq N$ and $0 < i < r_n$ and $0 < i^\prime < r_{n^\prime}$ such that $$s_n(i) \neq s_{n^\prime} (i^\prime).$$
\item $\displaystyle \sup_{n \in \N} \frac{ \text{\# of 1s in $v_n$}}{|v_n|} < 1 $
\end{enumerate}
It is straightforward to show that there is a unique shift-invariant measure on $X$ which assigns measure 1 to the set $\{x \in X: x (0) = 0\}$. As long as the first condition above is satisfied, that measure is atomless. As long as the second condition above is satisfied, that measure is finite. Assuming both conditions are satisfied, the normalization of that measure is called $\mu$ and then $(X, \mu, \sigma)$ is a measure-preserving transformation. We call such an $(X, \mu, \sigma)$ a symbolic rank-1 measure-preserving transformation.
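To fix ideas, here is a small example of this construction (the particular parameter values are ours, chosen only for illustration): if $r_0 = 2$ with $s_0(1)=1$, and $r_1 = 3$ with $s_1(1)=0$ and $s_1(2)=2$, then
$$v_0 = 0, \qquad v_1 = v_0 1^{s_0(1)} v_0 = 010, \qquad v_2 = v_1 1^{s_1(1)} v_1 1^{s_1(2)} v_1 = 010\,010\,11\,010,$$
so that $|v_2| = 11$ and $v_2$ contains $r_1 \cdot r_0 = 6$ expected occurrences of $v_0$.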
Below are several important remarks about symbolic rank-1 measure-preserving transformations that will be helpful in understanding the arguments in Section 2.
\begin{itemize}
\item {\em Bounded rank-1 transformations:} Suppose $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are cutting and spacer parameters for $(X, \mu, \sigma)$. We say the cutting parameter is bounded if there is some $R \in \N$ such that for all $n \in \N$, $r_n \leq R$. We say that the spacer parameter is bounded if there is some $S \in \N$ such that for all $n \in \N$ and all $0<i < r_n$, $s_n(i) \leq S$.
Let $(X, \mu, \sigma)$ be a symbolic rank-1 measure-preserving transformation. We say that $(X, \mu, \sigma)$ is bounded if there are cutting and spacer parameters $(r_n: n \in \N)$ and $(s_n: n \in \N)$ that give rise to $(X, \mu, \sigma)$ that are both bounded.
\item {\em Canonical cutting and spacer parameters:} There is an obvious bijective correspondence between cutting and spacer parameters and generating sequences, but there are many different generating sequences that give rise to the same symbolic rank-1 system. For example, any proper subsequence of a generating sequence will be a different generating sequence that gives rise to the same symbolic rank-1 system. There is a way, however, described in \cite{GaoHill1}, to associate to each symbolic rank-1 system a unique canonical generating sequence, which in turn gives rise to the canonical cutting and spacer parameters of that symbolic system. The canonical generating sequence was used in \cite{GaoHill1} to fully understand topological isomorphisms between symbolic rank-1 systems; it was also used in \cite{GaoHill2} to explicitly describe when a bounded rank-1 measure-preserving transformation has trivial centralizer.
There is only one fact about canonical generating sequences that is used in our argument. It is this: If $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are the canonical cutting and spacer parameters for $(X, \mu, \sigma)$, then for all $n \in \N$, there is $0<i<r_n$ and $0<j<r_{n+1}$ such that $s_n(i) \neq s_{n+1}(j)$. (See the definition of canonical generating sequence in sections 2.3.2 and 2.3.3 of \cite{GaoHill1}.)
\item {\em Expected occurrences:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic system $(X, \sigma)$. Then for each $n \in \N$, there is a unique way to view each $x \in X$ as a disjoint collection of occurrences of $v_n$ separated only by 1s. Such occurrences of $v_n$ in $x$ are called {\em expected} and the following all hold.
\begin{enumerate}
\item For all $x \in X$ and $n \in \N$, every occurrence of $0$ in $x$ is contained in a unique expected occurrence of $v_n$.
\item For all $x \in X$ and $n \in \N$, $x$ has an expected occurrence of $v_n$ beginning at position $i$ iff $\sigma (x)$ has an expected occurrence of $v_n$ beginning at position $(i-1)$.
\item If $x \in X$ has an expected occurrence of $v_n$ beginning at position $i$, and $n^\prime > n$, then the unique expected occurrence of $v_{n^\prime}$ that contains the 0 at position $i$ completely contains the expected occurrence of $v_n$ that begins at $i$.
\item If $x \in X$ has expected occurrences of $v_n$ beginning at positions $i$ and $j$, with $|i - j| < |v_n|$, then $i=j$. In other words, distinct expected occurrences of $v_n$ cannot overlap.
\item If $n>m$ and $x\in X$ has an expected occurrence of $v_n$ beginning at $i$ which completely contains an expected occurrence of $v_m$ beginning at $i + l$, then whenever $j$ is such that $x$ has an expected occurrence of $v_n$ beginning at $j$, that occurrence completely contains an expected occurrence of $v_m$ beginning at $j + l$.
\end{enumerate}
For $n \in \N$ and $i \in \Z$ we define $E_{v_n,i}$ to be the set of all $x \in X$ that have an expected occurrence of $v_n$ beginning at position $i$.
\item {\em Relation to cutting and stacking constructions:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic rank-1 measure-preserving system $(X, \mu, \sigma)$. One can take the cutting and spacer parameters associated to $(v_n: n \in \N)$ and build, using a cutting and stacking construction, a rank-1 measure-preserving transformation. This construction involves a sequence of Rokhlin towers. There is a direct correspondence between the base of the $n$th tower in the cutting and stacking construction and the set $E_{v_n, 0}$ in the symbolic system. The height of the $n$th tower in the cutting and stacking construction then corresponds to (i.e., is equal to) the length of the word $v_n$. If the reader is more familiar with rank-1 transformations as cutting and stacking constructions, one can use this correspondence to translate the arguments in Section 2 to that setting.
\item {\em Expectedness and the measure algebra:} Let $(v_n: n \in \N)$ be a generating sequence giving rise to the symbolic rank-1 measure-preserving system $(X, \mu, \sigma)$. If $\mathbb{M}$ is any infinite subset of $\N$, then the collection of sets $\{E_{v_n, i}: n \in \mathbb{M}, i \in \Z\}$ is dense in the measure algebra of $(X, \mu)$. Thus if $A$ is any positive measure set and $\epsilon >0$, there is some $n \in \mathbb{M}$ and $i \in \Z$ such that $$\frac{\mu (E_{v_n, i} \cap A)}{\mu(E_{v_n, i}) } > 1 - \epsilon$$
\item {\em Rank-1 Inverses:} Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. It is straightforward to check that a simple modification of the parameters results in a symbolic rank-1 measure-preserving transformation that is isomorphic to $(X, \mu, \sigma^{-1})$. For each tuple $s_n$ in the spacer parameter, let $\overline{s_n}$ be the reverse tuple, i.e., for $0 < i < r_n$, $\overline{s_n}(i) = s_n (r_n -i)$. It is easy to check that the cutting and spacer parameters $(r_n: n \in \N)$ and $(\overline{s_n}: n \in \N)$ satisfy the two measure conditions necessary to produce a symbolic rank-1 measure-preserving transformation. If one denotes that transformation by $(\overline{X}, \overline{\mu}, \sigma)$ and defines $\psi : X \rightarrow \overline{X}$ by $\psi (x) (i) = x(-i)$, then it is straightforward to check that $\psi$ is an isomorphism between $(X, \mu, \sigma^{-1})$ and $(\overline{X}, \overline{\mu}, \sigma)$. Thus to check whether a given symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$ is isomorphic to its inverse, one need only check whether it is isomorphic to the symbolic rank-1 measure-preserving transformation $(\overline{X}, \overline{\mu}, \sigma)$.
\end{itemize}
\subsection{The condition for isomorphism and the statement of the theorem}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. Suppose that there is an $N \in \N$ such that for all $n \geq N$, $s_n = \overline{s_n}$. Let $\phi : X \rightarrow \overline{X}$ be defined so that $\phi (x)$ is obtained from $x$ by replacing every expected occurrence of $v_N$ by $\overline{v_N}$ (the reverse of $v_N$). It is straightforward to check that $\phi$ is an isomorphism between $(X, \mu, \sigma)$ and $(\overline{X}, \overline{\mu}, \sigma)$, thus showing that $(X, \mu, \sigma)$ is isomorphic to its inverse $(X, \mu, \sigma^{-1})$.
As an example, Chacon2 is the rank-one transformation that can be defined by $v_{n+1} = v_n 1^n v_n$. (In the cutting and stacking setting, Chacon2 is usually described by $B_{n+1} = B_n B_n 1$, but that is easily seen to be equivalent to $B_{n+1} = B_n 1^n B_n$.) In this case $r_n = 2$ and $s_n(1)=n$, for all $n$. Since $s_n = \overline{s_n}$ for all $n$, Chacon2 is isomorphic to its inverse.
\begin{theorem}
\label{theorem}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be the canonical cutting and spacer parameters for the symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. If those parameters are bounded, then $(X, \mu, \sigma)$ is isomorphic to $(X, \mu, \sigma^{-1})$ if and only if there is an $N \in \N$ such that for all $n \geq N$, $s_n = \overline{s_n}$.
\end{theorem}
We remark that in \cite{GaoHill1}, the author and Su Gao have completely characterized when two symbolic rank-1 systems are {\em topologically} isomorphic, and as a corollary have a complete characterization of when a symbolic rank-1 system is {\em topologically} isomorphic to its inverse. A topological isomorphism between symbolic rank-1 systems is a homeomorphism between the underlying spaces that commutes with the shift. Since the underlying space of each symbolic rank-1 system admits at most one atomless, shift-invariant probability measure, every topological isomorphism between symbolic rank-1 systems is also a measure-theoretic isomorphism. On the other hand, there are symbolic rank-1 systems that are measure-theoretically isomorphic, but not topologically isomorphic.
We note here the main difference between these two settings. Suppose $\phi$ is an isomorphism--either a measure-theoretic isomorphism or a topological isomorphism--between two symbolic rank-1 systems $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$.
Let $(v_n : n \in \N)$ and $(w_n : n \in \N)$ be generating sequences that give rise to $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$, respectively.
One can consider a set $E_{w_m, 0} \subseteq Y$ and its pre-image, call it $A$, under $\phi$. If $\phi$ is a measure-theoretic isomorphism then one can find some $E_{v_n, i}$ so that $$\frac{\mu (E_{v_n, i} \cap A)}{\mu(E_{v_n, i}) } > 1 - \epsilon.$$
However, if $\phi$ is in fact a topological isomorphism, then one can find some $E_{v_n, i}$ so that $$E_{v_n, i} \subseteq A.$$
The stronger condition in the case of a topological isomorphism is what makes possible the analysis done by the author and Gao in \cite{GaoHill1}. In this paper, we are able to use the weaker condition, together with certain ``bounded'' conditions on the generating sequences $(v_n : n \in \N)$ and $(w_n: n \in \N)$, to achieve our results.
\section{Arguments}
We begin with a short subsection introducing two new pieces of notation. Then we prove a general proposition that can be used to show that certain symbolic rank-1 measure-preserving transformations are not isomorphic. Finally, we show how to use the general proposition to prove the non-trivial direction of Theorem \ref{theorem}.
\subsection{New notation}
The first new piece of notation is $*$, a binary operation on all finite sequences of natural numbers. The second is $\perp$, a relation (signifying incompatibility) between finite sequences of natural numbers that have the same length.
{\bf The notation $*$:} We will first describe the reason for introducing this new notation. We will then give the formal definition of $*$ and then illustrate that definition with an example. Suppose $(r_n :n \in \N)$ and $(s_n: n \in \N)$ are cutting and spacer parameters for the symbolic system $(X, \mu, \sigma)$ and that $(v_n: n \in \N)$ is the generating sequence corresponding to those parameters. Fix $n_0 > 0$ and consider the generating sequence $(w_n : n \in \N)$, defined as follows.
$$w_n =
\begin{cases}
v_n, \quad &\text{ if } n < n_0 \\
v_{n+1}, \quad &\text{ if } n \geq n_0\\
\end{cases}
$$
It is clear that $(w_n: n \in \N)$ is a subsequence of $(v_n: n \in \N)$, missing only the element $v_{n_0}$; thus, $(w_n: n \in \N)$ gives rise to the same symbolic system $(X, \mu, \sigma)$. We would like to be able to easily describe the cutting and spacer parameters that correspond to the generating sequence $(w_n:n \in \N)$. Let $(r_n^\prime :n \in \N)$ and $(s_n^\prime : n \in \N)$ be those cutting and spacer parameters. It is clear that for $n< n_0$ we have $r_n^\prime = r_n$ and $s_n^\prime = s_n$. It is also clear that for $n>n_0$ we have $r_n^\prime = r_{n+1}$ and $s_n^\prime = s_{n+1}$. It is straightforward to check that $r_{n_0}^\prime = r_{n_0+1} \cdot r_{n_0}$. The definition below for $*$ is precisely what is needed so that $s_{n_0}^\prime = s_{n_0+1} * s_{n_0}$.
Here is the definition. Let $s_1$ be any function from $\{1, 2, \ldots, r_1 -1\}$ to $\N$ and let $s_2$ be any function from $\{1, 2, \ldots, r_2 -1\}$ to $\N$. We define $s_2 * s_1$, a function from $\{1, 2, \ldots, r_2 \cdot r_1 -1\}$ to $\N$, as follows.
$$(s_2 *s_1 )(i) =
\begin{cases}
s_1(k), \quad &\text{ if } 0 < k < r_1 \text{ and } i \equiv k \mod r_1 \\
s_2(i/r_1), \quad &\text{ if } i \equiv 0 \mod r_1 \\
\end{cases}
$$
It is important to note, and straightforward to check, that the operation $*$ is associative.
To illustrate, suppose that $s_1$ is the function from $\{1,2,3\}$ to $\N$ with $s_1 (1) = 0$, $s_1 (2) = 1$, and $s_1 (3) = 0$ and that $s_2$ is the function from $\{1,2\}$ to $\N$ such that $s_2 (1) = 5$ and $s_2(2) = 6$; we abbreviate this by simply saying that $s_1 = (0,1,0)$ and $s_2 = (5,6)$. Then $s_2 * s_1 = (0,1,0, 5, 0,1,0,6,0,1,0)$.
{\bf The notation $\perp$:} Suppose $s$ and $s^\prime$ are both functions from $\{1, 2, \ldots, r-1\}$ to $\N$. We say that $s$ is {\em compatible} with $s^\prime$ if there exists a function $c$ from $\{1\}$ to $\N$ so that $s$ is a subsequence of $c*s^\prime$. Otherwise we say that $s$ is incompatible with $s^\prime$ and write $s\perp s^\prime$.
To illustrate, consider $s = (0,1,0)$ and $s^\prime = (0,0,1)$. Then $s$ is compatible with $s^\prime$ because if $c = 0$, then $c * s^\prime = (0,0,1,0,0,0,1)$ and $(0,1,0)$ does occur as a subsequence of $(0,\underline{0,1,0},0,0,1)$. If $s^{\prime\prime} = (0,1,2)$, then $s^{\prime}$ is compatible with $s^{\prime\prime}$ (again let $c=0$), but $s \perp s^{\prime\prime} $, because $(0,1,0)$ can never be a subsequence of $(0,1,2,c,0,1,2)$.
Though not used in our arguments, it is worth noting, and is straightforward to check, that $s \perp s^\prime$ iff $s^\prime \perp s$. (It is important here that $s$ and $s^\prime$ have the same length.)
We now state the main point of this definition of incompatibility. This fact will be crucial in the proof of Proposition 2.1. Suppose $(r_n^\prime : n \in \N)$ and $(s_n^\prime : n \in \N)$ are cutting and spacer parameters associated to the symbolic rank-1 measure-preserving transformation $(Y, \nu, \sigma)$ and that $(w_n : n \in \N)$ is the generating sequence associated to those parameters.
If $n$ is such that $r_n = r_n^\prime$ and $s_n \perp s_n^\prime$, then no element $y \in Y$ contains an occurrence of $$w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$$ where each of the demonstrated occurrences of $w_n$ is expected.
Indeed, suppose that beginning at position $i$, some $y \in Y$ did have such an occurrence of $w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$. The expected occurrence of $w_n$ beginning at $i$ must be completely contained in some expected occurrence of $w_{n+1}$, say it begins at position $j$. We know that the expected occurrence of $w_{n+1}$ beginning at position $j$ contains exactly $r_n$-many expected occurrences of $w_n$. Let $1 \leq l \leq r_n$ be such that the expected occurrence of $w_n$ beginning at position $i$ is the $l$th expected occurrence of $w_n$ in the expected occurrence of $w_{n+1}$ beginning at position $j$. If $l=1$, then $s_n = s_n^\prime$, which implies that $s_n$ is a subsequence of $c*s_n^\prime$ for any $c$. If, on the other hand, $1< l \leq r_n$, then letting $c = s_n (r_n - l +1)$, we have that $s_n$ is a subsequence of $c*s_n^\prime$. In either case $s_n$ would be compatible with $s_n^\prime$, a contradiction.
\subsection{A general proposition guaranteeing non-isomorphism}
\begin{proposition}
\label{prop}
Let $(r_n: n \in \N)$ and $(s_n: n \in \N)$ be the cutting and spacer parameters for a symbolic rank-1 system $(X, \mu, \sigma)$ and let $(r^\prime_n: n \in \N)$ and $(s^\prime_n: n \in \N)$ be the cutting and spacer parameters for a symbolic rank-1 system $(Y, \nu, \sigma)$. Suppose the following hold.
\begin{enumerate}
\item For all $n$, $r_n = r^\prime_n$ and $\displaystyle \sum_{0 < i < r_n} s_{n}(i) = \sum_{0 < i < r_n} s^\prime_{n}(i)$.
\item There is an $S \in \N$ such that for all $n$ and all $0 < i < r_n$, $$s_n(i) \leq S \textnormal{ and } s^\prime_n(i) \leq S.$$
\item There is an $R \in \N$ such that for infinitely many $n$, $$r_n \leq R \textnormal{ and } s_n \perp s^\prime_n.$$
\end{enumerate}
Then $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not measure-theoretically isomorphic.
\end{proposition}
\begin{proof}
Let $(v_n: n \in \N)$ be the generating sequence associated to the cutting and spacer parameters $(r_n: n \in \N)$ and $(s_n: n \in \N)$. Let $(w_n: n \in \N)$ be the generating sequence associated to the cutting and spacer parameters $(r^\prime_n: n \in \N)$ and $(s^\prime_n: n \in \N)$. Condition (1) implies that for all $n$, $|v_n| = |w_n|$. Now suppose, towards a contradiction, that $\phi$ is an isomorphism between $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$.
First, choose $m \in \N$ so that $|v_m| = |w_m|$ is greater than the $S$ from condition (2). Next, consider the positive $\mu$-measure set $$\phi^{-1} (E_{w_m, 0}) = \{x \in X: \phi (x) \text{ has an expected occurrence of $w_m$ at 0}\}.$$ Let $\mathbb{M} = \{n \in \N: \textnormal{$r_n \leq R $ and $s_n \perp s^\prime_n$}\}$, where $R$ is from condition (3), and note that $\mathbb{M}$ is infinite. We can then find $n \in \mathbb{M}$ and $k \in \N$ such that $$\displaystyle \frac{\mu (E_{v_n, k} \cap \phi^{-1} (E_{w_m, 0}))}{\mu(E_{v_n, k}) } > 1- \frac{1}{R}.$$
One can loosely describe the above inequality by saying: Most $x \in X$ that have an expected occurrence of $v_n$ beginning at position $k$ are such that $\phi (x)$ has an expected occurrence of $w_m$ beginning at position 0.
We say that an expected occurrence of $v_n$ in any $x \in X$ (say it begins at $i$) is a {\em good} occurrence of $v_n$ if $\phi (x)$ has an expected occurrence of $w_m$ beginning at position $i-k$. In this case we say the good occurrence of $v_n$ beginning at $i$ {\em forces} the expected occurrence of $w_m$ beginning at position $i-k$. Note that an expected occurrence of $v_n$ beginning at position $i$ in $x \in X$ is good iff $\sigma^{i} (x) \in E_{v_n, k} \cap \phi^{-1} (E_{w_m, 0})$, since $\phi$ commutes with $\sigma$. A simple application of the ergodic theorem shows that $\mu$ almost every $x \in X$ satisfies $$\displaystyle \lim_{N \rightarrow \infty} \frac{|\{i \in [-N,N] : \textnormal{ $x$ has a good occurrence of $v_n$ at $i$}\}|}{|\{i \in [-N,N] : \textnormal{ $x$ has an expected occurrence of $v_n$ at $i$}\}|} > 1- \frac{1}{R}.$$
Since $ r_n \leq R$, this implies that $\mu$ almost every $x \in X$ contains an expected occurrence of $v_{n+1}$ such that each of the $r_n$-many expected occurrences of $v_n$ that it contains is good. We say that such an occurrence of $v_{n+1}$ is {\em totally good}.
Let $x\in X$ and $i \in \Z$ be such that $x$ has a totally good occurrence of $v_{n+1}$ beginning at $i$. There are $r_n$-many expected occurrences of $v_n$ in the expected occurrence of $v_{n+1}$ beginning at $i$ and each of them forces an expected occurrence of $w_m$ in $\phi(x)$. The first of these forced expected occurrences of $w_m$ in $\phi (x)$ begins at position $i - k$ and must be part of some expected occurrence of $w_n$, say it begins at position $i^\prime$. We claim that, in fact, $\phi (x)$ must have an occurrence of $$w_n 1^{s_n (1)} w_n 1^{s_n(2)} \ldots 1^{s_n(r_n -1)} w_n$$ beginning at $i^\prime$, where each of the demonstrated occurrences of $w_n$ is expected. This will contradict the fact that $s_n \perp s_n^\prime$.
Proving the claim involves an argument that is repeated $r_n -1$ times. The next paragraph contains the first instance of that argument, showing that the expected occurrence of $w_n$ beginning at $i^\prime$ in $\phi (x)$ is immediately followed by $1^{s_n(1)}$ and then another expected occurrence of $w_n$, this one containing the second forced occurrence of $w_m$. The next instance of the argument would show that the expected occurrence of $w_n$ beginning at $i^\prime + |w_n| + s_n(1)$ is immediately followed by $1^{s_n(2)}$ and then another expected occurrence of $w_n$, this one containing the third forced occurrence of $w_m$. After the $r_n -1$ instances of that argument, the claim would be proven.
Here is the first instance of the argument: We know that $\phi (x)$ has an expected occurrence of $w_n$ beginning at $i^\prime$ and that this expected occurrence of $w_n$ contains the expected occurrence of $w_m$ beginning at position $$i-k = (i^\prime) + (i - k - i^\prime).$$ Thus, by point (5) of the remark about expected occurrences in Section \ref{comments}, if $j \in \Z$ is such that $\phi (x)$ has an expected occurrence of $w_n$ beginning at position $j$, that occurrence completely contains an expected occurrence of $w_m$ beginning at position $j + (i - k - i^\prime)$.
The expected occurrence of $w_n$ beginning at position $i^\prime$ must be followed by $1^t$ and then another expected occurrence of $w_n$, for some $0 \leq t \leq S$. The expected occurrence of $w_n$ beginning at position $i^\prime + |w_n| + t$ must contain an expected occurrence of $w_m$ beginning at position $$(i -k) + |w_n| + t = (i^\prime + |w_n| + t) + (i - k - i^\prime).$$
But we also know that the expected occurrence of $v_n$ beginning at position $i + |v_n| + s_n(1)$ in $x$ forces an expected occurrence of $w_m$ beginning at $i - k + |w_n| + s_n(1)$ in $\phi (x)$. Since $0 \leq s_n(1),t \leq S$ it must be that $0 \leq |t - s_n(1)| \leq S$ and thus, since $|w_m| > S$, the expected occurrences of $w_m$ beginning at positions $i - k + |w_n| + s_n(1)$ and $i - k + |w_n| + t$ must overlap. Since distinct expected occurrences of $w_m$ cannot overlap, it must be the case that $s_n(1)=t$ (see point (4) of the remark about expected occurrences in Section \ref{comments}). Thus, the expected occurrence of $w_n$ beginning at $i^\prime$ in $\phi (x)$ is immediately followed by $1^{s_n(1)}$ and then another expected occurrence of $w_n$, this one containing the second forced occurrence of $w_m$.
\end{proof}
\subsection{Proving the theorem}
We start this subsection with a comment and a simple lemma. The comment is that if $(r_n: n \in \N)$ and $(s_n: n \in \N)$ are the canonical cutting and spacer parameters for a symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$, then for all $n \in \N$, $s_{n+1}*s_n$ is not constant. This is simply a restatement, using the notation $*$, of the last sentence in the remark about canonical generating sequences in Section \ref{comments}.
\begin{lemma}
\label{lemma}
Let $s_1$ be a function from $\{1, 2, \ldots, r_1-1\}$ to $\N$ such that $s_1 \neq \overline{s_1}$. Let $s_2$ be function from $\{1, 2, \ldots, r_2-1\}$ to $\N$ that is not constant. Then $s_2 * s_1 \perp \overline{s_2 * s_1}$.
\end{lemma}
\begin{proof}
Suppose, towards a contradiction, that $s_2 * s_1$ is compatible with $\overline{s_2 * s_1}$. Then there is a function $c$ from $\{1\}$ to $\N$ so that $s_2 * s_1$ is a subsequence of $$c * (\overline{s_2 * s_1}) = c * (\overline{s_2} * \overline{s_1}) = (c * \overline{s_2}) * \overline{s_1}.$$ In other words, there is some $0 \leq k \leq r_2 \cdot r_1$ such that for all $0 < l < r_2 \cdot r_1$, $(s_2 * s_1)(l) = ((c * \overline{s_2}) * \overline{s_1})(k+l) $. We now have two cases.
Case 1: $k \equiv 0 \mod r_1$. Then for all $0<i<r_1$, $$s_1(i) = (s_2 * s_1) (i) = ((c * \overline{s_2}) * \overline{s_1}) (k+i) = \overline{s_1} (i).$$ Thus $s_1 = \overline{s_1}$, which is a contradiction.
Case 2: There is some $0<m<r_1$ such that $k+m \equiv 0 \mod r_1$. For all $0 \leq d < r_2$, we have $(s_2 * s_1)(m + dr_1) = s_1 (m) $. But also, $$(s_2 * s_1)(m + dr_1) = ((c * \overline{s_2}) * \overline{s_1}) (k + m + dr_1) = (c * \overline{s_2}) \left(\frac{k + m}{r_1} + d\right) .$$ This implies that the function $c*\overline{s_2}$ is constant (taking the value $s_1 (m)$) for $r_2$-many consecutive inputs. This implies that $\overline{s_2}$ must be constant, which is a contradiction.
\end{proof}
We will now prove the non-trivial direction of the theorem.
\begin{proof}
Let $(\tilde{r}_n: n \in \N)$ and $(\tilde{s}_n: n \in \N)$ be the canonical cutting and spacer parameters for a symbolic rank-1 measure-preserving transformation $(X, \mu, \sigma)$. Suppose both parameters are bounded; let $\tilde{R}$ be such that for all $n \in \N$, $\tilde{r}_n \leq \tilde{R}$ and let $S$ be such that for all $n \in \N$ and all $0<i<\tilde{r}_n$, $\tilde{s}_n(i) \leq S$. Also, assume that for infinitely many $n$, $\tilde{s}_n \neq \overline{\tilde{s}_n}$. To prove the non-trivial direction of the theorem, we need to show that $(X, \mu, \sigma)$ is not isomorphic to its inverse.
Let $(u_n :n \in \N)$ be the generating sequence corresponding to the parameters $(\tilde{r}_n: n \in \N)$ and $(\tilde{s}_n: n \in \N)$. We will now describe a subsequence $(v_n: n \in \N)$ of $(u_n:n \in \N)$ and let $(r_n:n \in \N)$ and $(s_n: n \in \N)$ be the cutting and spacer parameters corresponding to the generating sequence $(v_n: n \in \N)$ (which also gives rise to $(X, \mu, \sigma)$). First, let $v_0 = u_0 =0$. Now, suppose $v_{2n}$ has been defined as $u_{k}$. Let $m>k$ be as small as possible so that $\tilde{s}_m \neq \overline{\tilde{s}_m}$, and define $v_{2n+1} = u_m$ and $v_{2n+2} = u_{m+3}$. It is very important to note here that $$r_{2n+1} = \tilde{r}_{m+2} \cdot \tilde{r}_{m+1} \cdot \tilde{r}_{m} $$
and that $$s_{2n+1} = \tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m}. $$ This has two important consequences. First, we have that for $n \in \N$, $r_{2n+1} \leq \tilde{R}^3$. By the remark before Lemma \ref{lemma}, we also have that $\tilde{s}_{m+2} * \tilde{s}_{m+1}$ is not constant and thus, by Lemma \ref{lemma}, $\tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m} \perp \overline{\tilde{s}_{m+2} * \tilde{s}_{m+1} * \tilde{s}_{m}}$; put another way, $s_{2n+1} \perp \overline{s_{2n+1}}$.
Now for each $n$, let $r_n^\prime = r_n$ and $s_n^\prime = \overline{s_n}$. Let $(Y, \nu, \sigma)$ be the symbolic rank-1 transformation corresponding to the cutting and spacer parameters $(r_n^\prime : n \in \N)$ and $(s_n^\prime : n \in \N)$. As mentioned in the remark on rank-1 inverses at the end of Section \ref{comments}, the transformation $(Y, \nu, \sigma)$ is isomorphic to the inverse of $(X, \mu, \sigma)$. Thus to show that $(X, \mu, \sigma)$ is not isomorphic to its inverse, we can show that $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not isomorphic.
To do this we will apply Proposition \ref{prop}. We need to check that the following three conditions hold.
\begin{enumerate}
\item For all $n$, $r_n = r^\prime_n$ and $\displaystyle \sum_{0 < i < r_n} s_{n}(i) = \sum_{0 < i < r_n} s^\prime_{n}(i)$.
\item There is an $S \in \N$ such that for all $n$ and all $0 < i < r_n$, $$s_n(i) \leq S \textnormal{ and } s^\prime_n(i) \leq S.$$
\item There is an $R \in \N$ such that for infinitely many $n$, $$r_n \leq R \textnormal{ and } s_n \perp s^\prime_n.$$
\end{enumerate}
Condition (1) above follows immediately from the fact that for all $n \in \N$, $r_n^\prime = r_n$ and $s_n^\prime = \overline{s_n}$. Condition (2) follows from the fact that each $s_n(i)$ is equal to some $\tilde{s}_m (j)$ which is less than or equal to $S$. Finally, to verify condition (3), let $R=\tilde{R}^3$ and note that, as remarked above, for all $n \in \N$ we have that $r_{2n+1} \leq \tilde{R}^3$ and $s_{2n+1} \perp \overline{s_{2n+1}}$. We now apply Proposition \ref{prop} and conclude that $(X, \mu, \sigma)$ and $(Y, \nu, \sigma)$ are not isomorphic. Thus $(X, \mu, \sigma)$ is not isomorphic to its inverse.
\end{proof}
\end{document}
\begin{document}
\title[ An invariant for $\mathbb{Z} HS^3$ via skein algebras]
{Construction of an invariant for integral homology 3-spheres via completed Kauffman
bracket skein algebras }
\author{Shunsuke Tsuji}
\date{}
\maketitle
\begin{abstract}
We construct an invariant $z (M) =1+a_1(A^4-1)+
a_2(A^4-1)^2+a_3(A^4-1)^3 + \cdots \in \mathbb{Q} [[A^4-1]]=\mathbb{Q} [[A+1]]$
for an integral homology $3$-sphere $M$ using
a completed skein algebra and a Heegaard splitting.
The invariant $z(M) \bmod (A+1)^{n+1}$
is a finite type invariant of order $n$.
In particular, $-a_1/6$ equals the
Casson invariant.
If $M$ is the Poincar\'{e} homology 3-sphere,
$(z(M))_{|A^4 =q} \mod (q+1)^{14} $ is the Ohtsuki series \cite{Ohtsuki1995} for $M$.
\end{abstract}
\section{Introduction}
Heegaard splitting theory clarifies the
relationship between mapping class groups
of surfaces and closed oriented 3-manifolds.
In particular, there is an equivalence relation $\sim$
on the Torelli groups of the surfaces $\Sigma_{g,1}$
of genus $g$ with non-empty connected boundary,
and the well-defined bijective map
\begin{equation*}
\lim_{g \to \infty} \mathcal{I}(\Sigma_{g,1})/ {\sim} \to \mathcal{H} (3)
\end{equation*}
plays an important role, where we denote by $\mathcal{I}(\Sigma_{g,1})$
the Torelli group of $\Sigma_{g,1}$
and by $\mathcal{H}(3)$ the set of integral homology
$3$-spheres, i.e. closed oriented $3$-manifolds
whose homology groups are isomorphic to
the homology group of $S^3$.
For details, see Fact \ref{fact_map_3_manifold} in this paper.
This bijective map makes it possible to study
integral homology $3$-spheres using the structure of Torelli groups.
See, for example, Morita \cite{Morita1989} and
Pitsch \cite{Pitsch2008} \cite{Pitsch2009}.
On the other hand, in
our previous papers \cite{TsujiCSAI} \cite{Tsujipurebraid} \cite{TsujiTorelli},
we studied a new relationship between the Kauffman bracket skein algebra and
the mapping class group of a surface.
It gives us a new way
of studying the mapping class group.
For example, in \cite{TsujiTorelli}
we reconstructed the first Johnson homomorphism
in terms of the skein algebra.
Since the Kauffman bracket skein algebra comes from
link theory, we expect that this relationship
will bring us new information
about $3$-manifolds.
The aim of this paper is to construct an invariant $z(M)$
for an integral homology
$3$-sphere $M$ using completed skein algebras
and the above bijective map.
In other words, the aim of this paper
is to prove the following main theorem.
\begin{thm}[Theorem \ref{thm_main}]
The map $Z: \mathcal{I}(\Sigma_{g,1}) \to \mathbb{Q}[[A+1]]$
defined by
\begin{equation*}
Z(\xi) \defeq \sum_{i=0}^\infty \frac{1}{(-A+\gyaku{A})^i i!}e_*
((\zeta (\xi))^i)
\end{equation*}
induces an invariant
\begin{equation*}
z:\mathcal{H} (3) \to \mathbb{Q}[[A+1]], M(\xi) \mapsto Z(\xi),
\end{equation*}
where $e_*$ is the $\mathbb{Q} [[A+1]]$-module homomorphism
induced by a standard embedding.
Here $\zeta :\mathcal{I} (\Sigma_{g,1}) \to
\widehat{\skein{\Sigma_{g,1}}}$ is an embedding
defined in Theorem \ref{thm_zeta}.
\end{thm}
We remark that we do not rely on number theory
to construct the invariant.
Let $V$ be a $\mathbb{Q}$-vector space.
In our paper, a map $z':\mathcal{H}(3) \to V$
is called a finite type invariant of
order $n$ if and only if
the $\mathbb{Q}$-linear map
$z':\mathbb{Q} \mathcal{H}(3) \to V$
induced by $z':\mathcal{H}(3) \to V$
satisfies the condition that
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}z'(M(\prod_{i=1}^{2n+2} {\xi_i}^{\epsilon_i}))=0
\end{equation*}
for any $\xi_1,\xi_2, \cdots, \xi_{2n+2} \in
\mathcal{I}(\Sigma_{g,1})$.
The above condition and the condition
that
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}z'(M(\prod_{i=1}^{n+1} {\xi_i}^{\epsilon_i}))=0
\end{equation*}
for any $\xi_1,\xi_2, \cdots, \xi_{n+1} \in
\mathcal{K}(\Sigma_{g,1})$ (the Johnson kernel)
are equivalent
to each other.
This follows from
\cite{GL1997} Theorem 1 and \cite{GGP2001} subsection 1.8.
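For orientation, we spell out the smallest instance of the first displayed condition, namely $n=0$ (so that $2n+2=2$); it is just a direct expansion of the alternating sum, using $M(\mathrm{id})=S^3$:
\begin{equation*}
z'(M(\xi_1 \xi_2)) - z'(M(\xi_1)) - z'(M(\xi_2)) + z'(S^3)=0
\end{equation*}
for any $\xi_1, \xi_2 \in \mathcal{I}(\Sigma_{g,1})$.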
Furthermore, in our paper,
a finite type invariant
$z': \mathcal{H}(3) \to V$
of order $n$
is called nontrivial
if and only if
the $\mathbb{Q}$-linear map
$z':\mathbb{Q} \mathcal{H}(3) \to V$
induced by $z':\mathcal{H}(3) \to V$
satisfies the condition that
there exists $\xi_1, \xi_2, \cdots, \xi_{2n} \in
\mathcal{I}(\Sigma_{g,1})$ such that
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}z'(M(\prod_{i=1}^{2n} {\xi_i}^{\epsilon_i})) \neq 0.
\end{equation*}
By \cite{GL1997} Theorem 1 and \cite{GGP2001} subsection 1.8,
the above condition and the condition
that there exists $\xi_1, \xi_2, \cdots, \xi_{n} \in
\mathcal{K}(\Sigma_{g,1})$ such that
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}z'(M(\prod_{i=1}^{n} {\xi_i}^{\epsilon_i})) \neq 0.
\end{equation*}
are equivalent to each other.
The invariant $z: \mathcal{H}(3) \to \mathbb{Q}[[A+1]]$
defined in this paper induces
a finite type invariant
$z(M) \in \mathbb{Q}[[A+1]]/((A+1)^{n+1})$ of order $n$
for $M \in \mathcal{H} (3)$ (Corollary \ref{cor_finite_type}).
In Proposition \ref{prop_z_finite_nontrivial},
we prove that
this finite type invariant
$z(M) \in \mathbb{Q}[[A+1]]/((A+1)^{n+1})$ of order $n$
is nontrivial,
where we use
a connected sum of the Poincar\'{e} spheres.
Furthermore,
we give some computations of this invariant $z$ for some
integral homology 3-spheres.
As a corollary of this computation, the coefficient of $(A^4-1)$ in $z$ is $(-6)$ times
the Casson invariant.
On the other hand, Ohtsuki \cite{Ohtsuki1995}
defined the Ohtsuki series $\tau:\mathcal{H}(3) \to
\mathbb{Z} [[q]]$.
If $M$ is the Poincar\'{e} homology $3$-sphere,
$z(M) \mod ((A+1)^{14})$ is equal to
$ \tau (M)_{|q=A^4} \mod ((A+1)^{14})$.
This leads us to the following.
\begin{expectation}
Using the change of variables $A^4=q$,
the invariant $z$ induces
the Ohtsuki series $ \tau$,
in other words, we have $z(M) = \tau (M)_{|q=A^4}$ for any $M \in
\mathcal{H}(3)$.
\end{expectation}
\tableofcontents
\section{Mapping class groups and closed 3-manifolds}
Let $\Sigma_g$ denote a closed oriented surface of genus
$g$ standardly embedded in the oriented $3$-sphere $S^3$.
The embedded surface $\Sigma_g$ separates $S^3$ into two
handlebodies of genus $g$, $S^3 =H_g^+ \cup_\varphi H_g^-$
where $\varphi : \Sigma_g =\partial H_g^+ \to \partial H_g^-$
is a diffeomorphism. We fix a closed disk $D$ in $\Sigma_g$
and denote by $\Sigma_{g,1}$ the closure of $ \Sigma_g \backslash D$.
The embedding $\Sigma_g \hookrightarrow S^3$
determines two natural subgroups of
\begin{equation*}
\mathcal{M}(\Sigma_{g,1})
\defeq \mathrm{Diff}^+ (\Sigma_{g,1}, \partial \Sigma_{g,1})/
\mathrm{Diff}_0(\Sigma_{g,1}, \partial \Sigma_{g,1}),
\end{equation*}
namely
\begin{equation*}
\mathcal{M}(H_{g,1}^\epsilon)
\defeq \mathrm{Diff}^+ (H_{g}^\epsilon, D)/
\mathrm{Diff}_0(H_{g}^\epsilon, D).
\end{equation*}
for $\epsilon \in \shuugou{+,-}$.
For $\xi \in \mathcal{M}(\Sigma_{g,1})$, we denote $M(\xi) \defeq H_g^+ \cup_{\varphi \circ \xi} H_g^-$.
Let $\mathcal{I} (\Sigma_{g,1}) $ be the Torelli group of the surface $\Sigma_{g,1}$,
which is the set consisting of all elements of $\mathcal{M}(\Sigma_{g,1})$
acting trivially on $H_1(\Sigma_{g,1})$.
We remark that
there is a natural injective stabilization map $\mathcal{M}(\Sigma_{g,1}) \hookrightarrow
\mathcal{M}(\Sigma_{g+1,1})$, which is
compatible with the definitions of the above two subgroups.
\begin{df}
For $\xi_1$ and $\xi_2 \in \mathcal{I}(\Sigma_{g,1})$, we define
$\xi_1 \sim \xi_2$ if there exist
$\eta^+ \in \mathcal{M}(H_{g,1}^+)$ and $\eta^- \in \mathcal{M}(H_{g,1}^-)$
satisfying $\xi_1 =\eta^- \xi_2 \eta^+$.
\end{df}
\begin{fact}[For example, see \cite{Morita1989} \cite{Pitsch2008}\cite{Pitsch2009}]
\label{fact_map_3_manifold}
The map
\begin{equation*}
\underrightarrow{\lim}_{g \rightarrow \infty} (\mathcal{I}(\Sigma_{g,1})/\sim )
\to \mathcal{H}(3),\xi \mapsto M(\xi)
\end{equation*}
is bijective, where $\mathcal{H}(3)$ is the set of integral homology
$3$-spheres, i.e., closed oriented $3$-manifolds
whose homology group is isomorphic to
the homology group of $S^3$.
\end{fact}
We denote $\mathcal{IM} (H_{g,1}^\epsilon)
\defeq \mathcal{I} (\Sigma_{g,1}) \cap \mathcal{M}(H_{g,1}^\epsilon)$
for $\epsilon \in \shuugou{+,-}$.
\begin{lemm}[ Pitsch \cite{Pitsch2009}, Theorem 9, P.295, Omori \cite{Omori2016}]
\label{lemm_torelli_handle_generator}
For $\epsilon \in \shuugou{+,-}$, the subgroup $\mathcal{IM} (H_{g,1}^\epsilon)$ is generated by
\begin{equation*}
\shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^\epsilon)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^\epsilon)},
\end{equation*}
where the simple closed curves $c_a ,c'_a,c_b$ and $c'_b$
are as in Figure \ref{figure_handle_torelli_generator}.
\end{lemm}
\begin{figure}
\caption{$c_a ,c'_a,c_b$ and $c'_b$}
\label{figure_handle_torelli_generator}
\end{figure}
\begin{proof}
We prove the lemma in the case $\epsilon$ is $+$.
Let $I \mathrm{Aut} \pi_1(H_g^+,*)$ be the kernel of
$\mathrm{Aut} \pi_1(H_g^+,*) \twoheadrightarrow \mathrm{Aut} (H_1 (H_g))$.
By \cite{MKS}, Theorem N4, p.168,
$I \mathrm{Aut} \pi_1 (H_g^+,*)$ is generated by
\begin{equation*}
\shuugou{x_* \in I \mathrm{Aut} \pi_1(H_g^+,*)
|x \in \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^+)}},
\end{equation*}
where we denote by $x_*$
the element of $I \mathrm{Aut} \pi_1(H_g^+,*)$ induced by $x
\in \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^+)}$.
We denote by $\mathcal{LIM} (H_{g,1}^+)$ the Luft-Torelli group
which is the kernel of
$\mathcal{IM} (H_{g,1}^+) \to I \mathrm{Aut} \pi_1(H_g^+,*)$.
Pitsch \cite{Pitsch2009}, Theorem 9, P.295 proves that
$\mathcal{LIM} (H_{g,1}^+)$ is generated by
\begin{equation*}
\shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^+)}.
\end{equation*}
This proves the case that $\epsilon$ is $+$.
If $\epsilon$ is $-$, the same proof works
after exchanging the roles of $a$ and $b$.
This finishes the proof.
\end{proof}
\begin{lemm}[\cite{Pitsch2009}, Lemma 4, p.285]
Let $G$ be a subgroup of $\mathcal{M}(H_{g,1}^+)
\cap \mathcal{M}(H_{g,1}^-)$ such that
the natural map $G \to \mathrm{Aut} (H_1(H_{g,1}^+))$ is onto.
For two elements $\xi_1$ and $\xi_2 \in \mathcal{I} (\Sigma_{g,1})$,
$\xi_1 \sim \xi_2$ if and only if
there exist $\eta_G \in G$, $\eta^+ \in \mathcal{IM} (H_{g,1}^+)$
and $\eta^- \in \mathcal{IM} (H_{g,1}^-)$ satisfying
$\eta^- \eta_G \xi_1 {\eta_G}^{-1} \eta^+ =\xi_2$.
\end{lemm}
\begin{proof}
Pitsch proved the above claim in the case
$G=\mathcal{M}(H^{+}_{g,1}) \cap
\mathcal{M}(H^{-}_{g,1}) $.
The proof is based on the fact
that the natural map $\mathcal{M}(H^{+}_{g,1}) \cap
\mathcal{M}(H^{-}_{g,1}) \to \mathrm{Aut} (H_1 (H^+_{g,1}))$ is onto.
Therefore, the proof of \cite{Pitsch2009} Lemma 4 works for this lemma.
\end{proof}
We construct a subgroup of $\mathcal{M}(H^{+}_{g,1}) \cap
\mathcal{M}(H^{-}_{g,1}) $ satisfying the above condition.
Let $G \subset \mathcal{M}(H_{g,1}^+)
\cap \mathcal{M}(H_{g,1}^-)$ be the subgroup generated by
\begin{equation*}
\shuugou{h_i|i \in \shuugou{1,2, \cdots, g}} \cup
\shuugou{s_{ij}|i \neq j}
\end{equation*}
where we denote by $h_i$ and $s_{ij}$
the half twist along $c_{h,i}$ as in Figure \ref{figure_handle_G_h_i}
and the element $t_{c_{i,j}}{t_{c_{a,i}}}^{-1}{t_{c_{b,j}}}^{-1}$
as in Figure \ref{figure_handle_G_S_j_i}
and Figure \ref{figure_handle_G_S_i_j}.
Since this subgroup $G$ satisfies the condition in the above lemma,
we have the following.
\begin{figure}
\caption{$c_{h,i}$}
\label{figure_handle_G_h_i}
\end{figure}
\begin{figure}
\caption{$c_{a,i}$, $c_{b,j}$ and $c_{i,j}$}
\label{figure_handle_G_S_j_i}
\end{figure}
\begin{figure}
\caption{$c_{a,i}$, $c_{b,j}$ and $c_{i,j}$}
\label{figure_handle_G_S_i_j}
\end{figure}
\begin{lemm}
\label{lemm_torelli_check}
The equivalence relation $\sim$ in $\mathcal{I}(\Sigma_{g,1})$ is generated by
$\xi \sim \eta_{G} \xi {\eta_{G}}^{-1}$ for $\eta_G \in \shuugou{h_i, s_{i,j}}$,
$\xi \sim \xi \eta^+$ for $\eta^+ \in
\shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^+)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^+)}$
and
$\xi \sim \eta^- \xi$ for $\eta^- \in
\shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^-)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^-)}$
\end{lemm}
\section{Proof of main theorem}
\subsection{Completed Kauffman bracket skein algebras and
Torelli groups}
Let $\Sigma$ be a compact connected oriented surface.
We denote by $\mathcal{T}(\Sigma)$ the set of unoriented framed tangles in
$\Sigma \times I$.
Let $\skein{\Sigma}$ be the quotient of $\mathbb{Q} [A,\gyaku{A}] \mathcal{T}(\Sigma)$
by the skein relation and the trivial knot relation as in Figure
\ref{figure_def_skein}. We consider the product of $\skein{\Sigma}$ as in Figure
\ref{figure_def_product} and the Lie bracket $[x,y] \defeq
\frac{1}{-A+\gyaku{A}} (xy-yx)$ for $x,y \in \skein{\Sigma}$.
The completed Kauffman bracket skein algebra
is defined by
\begin{equation*}
\widehat{\skein{\Sigma}} \defeq \comp{i}{\skein{\Sigma}/(\ker \varepsilon)^i}
\end{equation*}
where the augmentation map $ \varepsilon:\skein{\Sigma} \to \mathbb{Q}$ is defined by
$A+1 \mapsto 0$ and $[L]- (-2)^{\zettaiti{L}}\mapsto 0$ for $L \in \mathcal{T}(\Sigma)$.
In \cite{Tsujipurebraid}, we define the filtration
$\filtn{F^n \widehat{\skein{\Sigma}}}$ satisfying
\begin{align*}
&F^n \widehat{\skein{\Sigma}} F^m \widehat{\skein{\Sigma}} \subset
F^{n+m} \widehat{\skein{\Sigma}}, \\
&[F^n \widehat{\skein{\Sigma}}, F^m \widehat{\skein{\Sigma}}] \subset
F^{n+m-2} \widehat{\skein{\Sigma}}, \\
&F^{2n} \widehat{\skein{\Sigma}} = (\ker \varepsilon)^n.
\end{align*}
By the second equation, we can consider the Baker Campbell Hausdorff
series
\begin{equation*}
\mathrm{bch}(x,y) \defeq (-A+\gyaku{A}) \log ( \exp (\frac{x}{-A+\gyaku{A}})
\exp(\frac{y}{-A+\gyaku{A}}))
\end{equation*}
on $F^3 \widehat{\skein{\Sigma}}$.
As elements of the associated Lie algebra
$(\widehat{\skein{\Sigma}},[ \ \ , \ \ ])$,
$\mathrm{bch}$ has the usual expression.
For example,
\begin{equation*}
\mathrm{bch}(x,y) = x+y+\frac{1}{2}[x,y]+\frac{1}{12}([x,[x,y]]+[y,[y,x]])+ \cdots.
\end{equation*}
Furthermore, we have the following.
\begin{prop}[\cite{Tsujipurebraid} Corollary 5.7.]
\label{prop_embed_filtration}
For any embedding $i : \Sigma \times I \to S^3$ inducing $i_*
: \widehat{\skein{\Sigma}} \to \mathbb{Q} [[A+1]]$,
we have $i_* (F^n \widehat{\skein{\Sigma}}) \subset ((A+1)^{\gauss{\frac{n+1}{2}}})$,
where $\gauss{x}$ is the greatest integer not greater than
$x$ for $x \in \mathbb{Q}$.
\end{prop}
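In particular, since $F^{2n} \widehat{\skein{\Sigma}} = (\ker \varepsilon)^n$, Proposition \ref{prop_embed_filtration} specializes to
\begin{equation*}
i_* \left( (\ker \varepsilon)^n \right) \subset ((A+1)^{n})
\end{equation*}
for every $n$; this is just a restatement of the case of even filtration degree.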
\begin{figure}
\caption{Definition of Kauffman bracket skein module}
\label{figure_def_skein}
\end{figure}
\begin{figure}
\caption{Definition of the product}
\label{figure_def_product}
\end{figure}
In our previous papers \cite{TsujiCSAI} \cite{Tsujipurebraid} \cite{TsujiTorelli},
we study a relationship between the Kauffman bracket skein algebra and
the mapping class group on a surface $\Sigma$.
Let $\widehat{\skein{\Sigma}}$ be the completed Kauffman
bracket skein algebra on $\Sigma$ and
$\widehat{\skein{\Sigma,J}}$ the completed Kauffman
bracket skein module with base point set $J \times
\shuugou{\frac{1}{2}}$ for a finite subset $J \subset
\partial \Sigma$.
In \cite{TsujiCSAI}, we prove the formula of the Dehn twist $t_c$ of a simple closed
curve $c$
\begin{equation*}
t_c ( \cdot) = \exp(\sigma(L(c)))(\cdot) \defeq
\sum_{i=0}^\infty \frac{1}{i!}(\sigma(L(c)))^i(\cdot) \in
\mathrm{Aut} (\widehat{\skein{\Sigma,J}})
\end{equation*}
where
\begin{equation*}
L(c) \defeq \frac{-A+\gyaku{A}}{4 \log(-A)}
(\mathrm{arccosh} (\frac{-c}{2}))^2-(-A+\gyaku{A})
\log(-A).
\end{equation*}
We obtain the formula by analogy with
the corresponding formula for the
completed Goldman Lie algebra
\cite{Kawazumi} \cite{KK} \cite{MT}.
We define the filtration
$\filtn{F^n \widehat{\skein{\Sigma}}}$
in \cite{Tsujipurebraid}.
For $g>1$, we consider $F^3 \widehat{\skein{\Sigma_{g,1}}}$ as
a group using the Baker Campbell Hausdorff series $\mathrm{bch}$.
We remark that $\mathcal{I}(\Sigma_{g,1})$ is generated by
$\shuugou{t_{c_1c_2} \defeq t_{c_1}{t_{c_2}}^{-1}|(c_1,c_2) :\mathrm{BP}}$
where a BP (bounding pair) is a pair of simple closed curves bounding
a submanifold of $\Sigma_{g,1}$.
By analogy of \cite{KK} 6.3, we have the following.
\begin{thm}[\cite{TsujiTorelli} Theorem 3.13. Corollary 3.14.]
\label{thm_zeta}
The group homomorphism $\zeta :\mathcal{I}(\Sigma_{g,1}) \to
(F^3 \widehat{\skein{\Sigma_{g,1}}}, \mathrm{bch})$ defined by
$\zeta (t_{c_1c_2}) =L(c_1)-L(c_2)$ for a
BP $(c_1, c_2)$ is injective where $g>1$.
Furthermore, we have
\begin{equation*}
\xi (\cdot) =\exp (\sigma (\zeta(\xi))(\cdot) \in \mathrm{Aut}(\widehat{\skein{\Sigma_{g,1},J}})
\end{equation*}
for any $\xi \in \mathcal{I}(\Sigma_{g,1})$ and any finite subset $J \subset
\partial \Sigma_{g,1}$.
\end{thm}
We remark that $\zeta (t_c) =L(c)$ for a separating simple closed curve $c$.
Let $e$ be an embedding $\Sigma_{g,1} \times [0,1] \to S^3$ satisfying
the following conditions:
\begin{align*}
&e_{|\Sigma_{g,1} \times \shuugou{\frac{1}{2}}}:\Sigma_{g,1} \times \shuugou{\frac{1}{2}}
\to \Sigma_{g,1}, \ (x,\frac{1}{2}) \mapsto x, \\
&e(\Sigma_{g,1} \times [0,\frac{1}{2}]) \subset H_g^+, \\
&e(\Sigma_{g,1} \times [\frac{1}{2},1]) \subset H_g^-.
\end{align*}
We call this embedding a standard embedding.
We denote by $e_*$ the $\mathbb{Q}[[A+1]]$-module homomorphism
$\widehat{\skein{\Sigma_{g,1}}} \to \mathbb{Q}[[A+1]]$ induced by $e$.
The following is our main theorem.
\begin{thm}
\label{thm_main}
The map $Z: \mathcal{I}(\Sigma_{g,1}) \to \mathbb{Q}[[A+1]]$
defined by
\begin{equation*}
Z(\xi) \defeq \sum_{i=0}^\infty \frac{1}{(-A+\gyaku{A})^i i!}e_*
((\zeta (\xi))^i)
\end{equation*}
induces
\begin{equation*}
z:\mathcal{H} (3) \to \mathbb{Q}[[A+1]], M(\xi) \mapsto Z(\xi).
\end{equation*}
\end{thm}
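We record an equivalent compact form of $Z$, obtained simply by rewriting the defining sum (using that $e_*$ is a $\mathbb{Q}[[A+1]]$-module homomorphism compatible with the filtrations), with the same formal conventions as in the definition of $\mathrm{bch}$:
\begin{equation*}
Z(\xi) = e_* \left( \exp \left( \frac{\zeta(\xi)}{-A+\gyaku{A}} \right) \right).
\end{equation*}
This is the form in which $Z$ appears in several computations below.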
\subsection{Main theorem and its proof}
The aim of this subsection is to prove Theorem \ref{thm_main}.
By Proposition \ref{prop_embed_filtration}, the map
$Z: \mathcal{I}(\Sigma_{g,1}) \to \mathbb{Q}[[A+1]]$ is well-defined.
For $\epsilon \in \shuugou{+,-}$,
let $\skein{H_g^\epsilon}$ be the quotient of
$\mathbb{Q} [A,\gyaku{A}] \mathcal{T} (H_g^\epsilon)$
by the skein relation and the trivial knot relation,
where $\mathcal{T}(H_g^\epsilon)$
is the set of unoriented framed link
in $H_g^\epsilon$. We can consider its completion
$
\widehat{\skein{H_g^\epsilon}}$,
for details see \cite{TsujiCSAI} Theorem 5.1.
We denote by $\iota'^+ :
\Sigma_{g,1} \times [0,\frac{1}{2}] \to{H_g^+}$
and $\iota'^- :
\Sigma_{g,1} \times [\frac{1}{2},1] \to {H_g^-}$
the embeddings obtained by restricting the standard embedding $e$.
The embeddings
\begin{align*}
&\iota^+:\Sigma_{g,1} \times I \to H_g^+, (x,t) \mapsto \iota'^+(x,t/2), \\
&\iota^-:\Sigma_{g,1} \times I \to H_g^-, (x,t) \mapsto \iota'^-(x,(t+1)/2) \\
\end{align*}
induce
\begin{align*}
&\iota^+:\widehat{\skein{\Sigma_{g,1}}} \to \widehat{\skein{H_g^+}}, \ \
\iota^-:\widehat{\skein{\Sigma_{g,1}}} \to \widehat{\skein{H_g^-}}. \\
\end{align*}
By definition, we have the following.
\begin{prop}
\label{prop_ideal}
\begin{enumerate}
\item The kernel of $\iota^+$ is a right ideal
of $\widehat{\skein{\Sigma_{g,1}}}$.
\item The kernel of $\iota^-$ is a left ideal
of $\widehat{\skein{\Sigma_{g,1}}}$.
\item We have $e_* (\ker \iota^\epsilon) =\shuugou{0}$
for $\epsilon \in \shuugou{+,-}$.
\end{enumerate}
\end{prop}
\begin{prop}
\label{prop_bch_Z}
We have $Z(\xi_1 \xi_2) =\sum_{i,j \geq 0}\dfrac{1}{(-A+A^{-1})^{i+j} i! j!}e_* ((\zeta(\xi_1))^i
(\zeta(\xi_2))^j)$.
\end{prop}
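For the reader's convenience, here is a sketch of why Proposition \ref{prop_bch_Z} holds. Write $u=-A+\gyaku{A}$. Since $\zeta$ is a group homomorphism into $(F^3 \widehat{\skein{\Sigma_{g,1}}}, \mathrm{bch})$, we have $\zeta(\xi_1 \xi_2)=\mathrm{bch}(\zeta(\xi_1),\zeta(\xi_2))$, and the definition of $\mathrm{bch}$ gives
\begin{equation*}
\exp \left( \frac{\zeta(\xi_1 \xi_2)}{u} \right)
=\exp \left( \frac{\zeta(\xi_1)}{u} \right) \exp \left( \frac{\zeta(\xi_2)}{u} \right).
\end{equation*}
Applying $e_*$ and expanding both exponentials term by term yields the double sum of Proposition \ref{prop_bch_Z}.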
\begin{lemm}
\label{lemm_proof_key}
\begin{enumerate}
\item
Let $c_a$, $c'_a$, $c_b$ and $c'_b$ be
simple closed curves as in Figure \ref{figure_handle_torelli_generator}.
For $\epsilon \in \shuugou{+,-}$,
we have
\begin{align*}
\iota^\epsilon (\xi(L(c_a)-L(c'_a)))=0,
\iota^\epsilon (\xi(L(c_b)-L(c'_b)))=0
\end{align*}
for $\xi \in \mathcal{M}(H_{g,1}^\epsilon)$.
\item
Let $c_{a,i}$, $c_{b,j}$ and $c_{i,j}$
be simple closed curves as in
Figure \ref{figure_handle_G_S_j_i} or Figure \ref{figure_handle_G_S_i_j}.
For $\epsilon \in \shuugou{+,-}$,
we have
\begin{align*}
\iota^\epsilon (L(c_{i,j})-L(c_{a,i})-L(c_{b,j}))=0
\end{align*}for $i \neq j$.
\end{enumerate}
\end{lemm}
By Lemma \ref{lemm_torelli_check},
in order to prove Theorem \ref{thm_main},
it is enough to check the following lemmas.
\begin{lemm}
For any $i$, we have $e_* \circ h_i =e_*$.
Furthermore, we have $Z (h_i \xi h_i^{-1}) =Z (\xi)$.
\end{lemm}
\begin{proof}
The embeddings $e \circ h_i $ and $e$ are
isotopic. This proves the first claim.
Using this, we have $e_* (\zeta (h_i \xi h_i^{-1}))=e_* \circ h_i (\zeta(\xi))$.
This proves the second claim.
This proves the lemma.
\end{proof}
\begin{lemm}
For any $i \neq j$, we have $e_* \circ s_{ij} =e_*$.
Furthermore, we have $Z (s_{ij }\xi s_{ij}^{-1}) =Z (\xi)$.
\end{lemm}
\begin{proof}
We fix an element $x$ of $\widehat{\skein{\Sigma_{g,1}}}$.
We have $s_{ij} (x) =\exp (\sigma(L(c_{i,j})-L(c_{a,i})-L(c_{b,j})))(x)$.
Using Lemma \ref{lemm_proof_key}(2) and Proposition
\ref{prop_ideal} (1)(2)(3),
we have $e_*(\exp (\sigma(L(c_{i,j})-L(c_{a,i})-L(c_{b,j})))(x))=e_*(x)$.
This proves the first claim.
Using this, we have $e_* (\zeta (s_{ij} \xi s_{ij}^{-1}))=e_* \circ s_{ij} (\zeta(\xi))$.
This proves the second claim.
This proves the lemma.
\end{proof}
\begin{lemm}
\begin{enumerate}
\item We have $Z(\xi \eta^+) =Z(\xi)$ for any
$\xi \in \mathcal{I}(\Sigma_{g,1})$ and
any $\eta^+ \in \shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^+)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^+)}$.
\item We have $Z(\eta^- \xi ) =Z(\xi)$ for any
$\xi \in \mathcal{I}(\Sigma_{g,1})$ and
any $\eta^- \in \shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^-)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^-)}$.
\end{enumerate}
\end{lemm}
\begin{proof}
We prove only (1), because the proof of (2) is almost the same.
By Proposition \ref{prop_bch_Z}, we have
$Z(\xi \eta^+) =\sum_{i,j \geq 0}\dfrac{1}{(-A+A^{-1})^{i+j} i! j!}e_* ((\zeta(\xi))^i
(\zeta(\eta^+))^j)$.
Using Lemma \ref{lemm_proof_key} and Proposition \ref{prop_ideal} (1)(3),
we obtain
\begin{equation*}
Z(\xi \eta^+) =\sum_{i,j \geq 0}\frac{1}{(-A+A^{-1})^{i+j} i! j!}e_* ((\zeta(\xi))^i
(\zeta(\eta^+))^j)=\sum_{i \geq 0}\frac{1}{(-A+A^{-1})^{i} i!}e_* ((\zeta(\xi))^i)=Z(\xi).
\end{equation*}
This proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_main}]
By Fact \ref{fact_map_3_manifold} and
Lemma \ref{lemm_torelli_check}, it is enough to check the following
\begin{align*}
Z(h_i \xi {h_i}^{-1}) =Z(\xi), \\
Z(s_{ij} \xi {s_{ij}}^{-1}) =Z(\xi), \\
Z(\xi \eta^+) =Z(\xi), \\
Z(\eta^- \xi)=Z(\xi),
\end{align*}
for any $\xi \in \mathcal{I}(\Sigma_{g,1})$,
any $\eta^+ \in \shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^+)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^+)}$,
any $\eta^- \in \shuugou{t_{\xi(c_a) \xi( c'_a)}|\xi \in \mathcal{M} (H_{g,1}^-)}
\cup \shuugou{t_{\xi(c_b) \xi(c'_b)}|\xi \in \mathcal{M} (H_{g,1}^-)}$
and any $i \neq j$.
The above lemmas prove
these equations.
This proves the Theorem.
\end{proof}
This invariant satisfies the following conditions.
\begin{prop}
\label{prop_z_disjoint}
For $M_1, M_2 \in \mathcal{H}(3)$, we have
\begin{equation*}
z(M_1 \sharp M_2) =z(M_1)z(M_2)
\end{equation*}
where $M_1 \sharp M_2 $ is the connected sum of $M_1$ and $M_2$.
\end{prop}
\begin{proof}
Let $\iota_1:\Sigma^1 \to \Sigma_{g,1}$ and $\iota_2:\Sigma^2 \to \Sigma_{g,1}$
be the embedding maps as in Figure \ref{figure_Sigma_disjoint}.
The embedding maps induce
\begin{align*}
&\iota_1:\mathcal{M}(\Sigma^1) \to \mathcal{M}(\Sigma_{g,1}),
&\iota_2:\mathcal{M}(\Sigma^2) \to \mathcal{M}(\Sigma_{g,1}), \\
&\iota_1:\skein{\Sigma^1} \to \skein{\Sigma_{g,1}},
&\iota_2:\skein{\Sigma^2} \to \skein{\Sigma_{g,1}}.
\end{align*}
We remark
$e_* (\iota_1(x_1) \iota_2(x_2))=e_*(\iota_1(x_1))e_*(\iota_2(x_2))$
for $x_1 \in \skein{\Sigma^1}$ and $x_2 \in \skein{\Sigma^2}$.
For $\xi_1 \in \iota_1(\mathcal{M}(\Sigma^1))$
and $\xi_2 \in \iota_2(\mathcal{M}(\Sigma^2))$
we have
\begin{align*}
&z(M(\xi_1)\sharp M(\xi_2))=z(M(\xi_1 \circ \xi_2))
=Z(\xi_1 \circ \xi_2)=\sum_{i=0}^\infty \frac{1}{(-A+A^{-1})^i i!}e_*((\zeta(\xi_1 \circ \xi_2))^i) \\
&=\sum_{i,j \geq 0} \frac{1}{(-A+A^{-1})^{i+j} i!j!}e_*((\zeta(\xi_1))^i (\zeta(\xi_2))^j)
=\sum_{i,j \geq 0} \frac{1}{(-A+A^{-1})^{i+j} i!j!}e_*((\zeta(\xi_1))^i )e_*((\zeta(\xi_2))^j) \\
&=(\sum_{i=0}^\infty\frac{1}{(-A+A^{-1})^i i!}e_*((\zeta(\xi_1))^i))(\sum_{j=0}^\infty\frac{1}{(-A+A^{-1})^j j!}e_*((\zeta(\xi_2))^j))=z(M(\xi_1))z(M(\xi_2)).
\end{align*}
This proves the proposition.
\begin{figure}
\caption{$\Sigma^1$ and $\Sigma^2$}
\label{figure_Sigma_disjoint}
\end{figure}
\end{proof}
\begin{prop}
For $\xi_1 \in \zeta^{-1} (F^{n_1+2}\widehat{\skein{\Sigma_{g,1}}}),
\cdots, \xi_k \in \zeta^{-1}(F^{n_k+2} \widehat{\skein{\Sigma_{g,1}}})$,
we have
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{1,0}}(-1)^{\sum \epsilon_i}
z (M(\xi_1^{\epsilon_1} \cdots \xi_k^{\epsilon_k}))
\in (A+1)^{\gauss{(n_1 + \cdots +n_k+1)/{2}}}\mathbb{Q} [[A+1]].
\end{equation*}
We remark that $\zeta^{-1} (F^3 \widehat{\skein{\Sigma_{g,1}}})$
equals $\mathcal{I} (\Sigma_{g,1})$
and
that $\zeta^{-1}(F^4 \widehat{\skein{\Sigma_{g,1}}})$
equals the Johnson kernel.
\end{prop}
\begin{proof}
We have
\begin{align*}
&\sum_{\epsilon_i \in \shuugou{1,0}}(-1)^{\sum \epsilon_i}
z (M(\xi_1^{\epsilon_1} \cdots \xi_k^{\epsilon_k})) \\
&=e_*((1-\exp (\frac{\zeta(\xi_1)}{-A+\gyaku{A}}))\cdots(1-\exp (\frac{\zeta(\xi_k)}{-A+\gyaku{A}}))).
\end{align*}
By Proposition \ref{prop_embed_filtration}, we have
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{1,0}}(-1)^{\sum \epsilon_i}
z (M(\xi_1^{\epsilon_1} \cdots \xi_k^{\epsilon_k}))
\in (A+1)^{\gauss{({n_1 + \cdots +n_k+1})/{2}}}\mathbb{Q} [[A+1]].
\end{equation*}
This proves the proposition.
\end{proof}
\begin{cor}
\label{cor_finite_type}
The invariant $z(M) \in \mathbb{Q}[[A+1]]/((A+1)^{n+1})$
is a finite type invariant of order $n$
for $M \in \mathcal{H} (3)$.
\end{cor}
\begin{prop}
\label{prop_z_finite_nontrivial}
The invariant $z(M) \in \mathbb{Q}[[A+1]]/((A+1)^{n+1})$
is a nontrivial finite type invariant of order $n$
for $M \in \mathcal{H} (3)$.
\end{prop}
\begin{proof}
It is enough to show that
there exist $\xi_1, \cdots , \xi_n \in \mathcal{K}(\Sigma_{g,1})$
such that
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}
z(M(\prod_{i=1}^n {\xi_i}^{\epsilon_i})) \not\equiv 0 \mod ((A+1)^{n+1}).
\end{equation*}
Let $c^1_1,c^1_2,c^1_L, \cdots
c^n_1, c^n_2, c^n_L$
be simple closed curves as in Figure \ref{figure_nontrivial_Sigma}.
We denote $c_i \defeq t_{c^i_1} \circ t_{c^i_2} (c^i_L)$ and
$t_i \defeq t_{c_i}$
for $i =1, \cdots, n$.
We remark that $M(t_{i_1}\circ \cdots \circ t_{i_k})
=\sharp^k M(t_1)$ for $1 \leq i_1 <\cdots <i_k \leq n$
and that $M(t_1)$ is the Poincar\'{e} homology 3-sphere.
By the computation in section
\ref{section_example}
and Proposition \ref{prop_z_disjoint},
\begin{equation*}
\sum_{\epsilon_i \in \shuugou{0,1}} (-1)^{\sum \epsilon_i}
z(M(\prod_{i=1}^n {t_i}^{\epsilon_i}))
=(1-z(M(t_1)))^n \equiv 6^n(A^4-1)^n \mod ((A+1)^{n+1}).
\end{equation*}
This proves the proposition.
\begin{figure}
\caption{$c^1_1,c^1_2,c^1_L, \cdots
c^n_1, c^n_2, c^n_L$}
\label{figure_nontrivial_Sigma}
\end{figure}
\end{proof}
By the computation in section
\ref{section_example}, the coefficient of $A^4-1$ in
the invariant $z$ for the Poincar\'{e} homology $3$-sphere
is $-6$.
Since the Casson invariant is the unique
nontrivial finite type invariant of order $1$
up to a scalar, we have the following.
\begin{cor}
For any $M \in \mathcal{H}(3)$,
the coefficient of $A^4-1$ in $z(M)$
is $(-6)$ times the Casson invariant of $M$.
\end{cor}
\section{Example}
\label{section_example}
Let $c_1$, $c_2$ and $c_L$ be simple closed curves in $\Sigma_{g,1}$ as in
Figure \ref{figure_handle_example}.
We consider integral homology 3-spheres
$M(\epsilon_1,\epsilon_2,\epsilon_3) \defeq
M ((t_{{t_1}^{\epsilon_1} \circ {t_2}^{\epsilon_2} (c_L)})^{\epsilon_3})$
for $\epsilon_1, \epsilon_2, \epsilon_3 \in \shuugou{\pm 1}$,
where $t_1 \defeq t_{c_1}$ and $t_2 \defeq t_{c_2}$.
For $\epsilon \in \shuugou{\pm 1}$,
the manifold $M(1,-1,\epsilon) \simeq M(-1,1,\epsilon)$ is
the integral homology 3-sphere
obtained from $S^3$ by performing the $\epsilon$-surgery on the figure eight knot $4_1$,
which is $e (t_1^{-1} \circ t_2 (c_L))$.
For $\epsilon \in \shuugou{\pm 1}$,
the manifold $M(1,1,\epsilon)$ is
the integral homology 3-sphere
obtained from $S^3$ by performing the $\epsilon$-surgery on the trefoil knot $3_1$,
which is $e (t_1 \circ t_2 (c_L))$.
For $\epsilon \in \shuugou{\pm 1}$,
the manifold $M(-1,-1,\epsilon)$ is
the integral homology 3-sphere
obtained from $S^3$ by performing the $\epsilon$-surgery on the mirror $-3_1$
of the trefoil knot,
which is $e (t_1^{-1} \circ t_2^{-1} (c_L))$.
In particular, $M(1,1,1)$ is the Poincar\'{e} homology sphere.
We remark that $M(1,-1,1)$ and $M(-1,-1,-1)$
are the same $3$-manifold.
By straightforward computations using Habiro's formula \cite{Habiro2000}
for colored Jones polynomials of the trefoil knot and the figure eight knot,
we have the following.
We remark that we compute
$z(M(1,-1,1))=z(M(1,1,-1))$ (resp.
$z(M(1,-1,-1))=z(M(-1,-1,1))$) in two ways, using
$Z(t_{{t_1} \circ {t_2}^{-1} (c_L)})=
Z((t_{{t_1} \circ {t_2} (c_L)})^{-1})$
(resp. $Z((t_{{t_1} \circ {t_2}^{-1} (c_L)})^{-1})
=Z(t_{{t_1}^{-1} \circ {t_2}^{-1} (c_L)})$).
\begin{prop}
We have
\begin{align*}
z(M(1,1,1))
&=[ 1, -6,45,-464,6224,-102816,2015237, \\
&-45679349,1175123730,-33819053477, \\
&1076447743008, -37544249290614, \\
&1423851232935885,-58335380481272491], \\
z(M(1,-1,1))=z(M(1,1,-1))
&=[ 1,6,63,932,17779,415086,11461591,365340318, \\
&13201925372,533298919166,23814078531737, \\
&1164804017792623,61932740213389942, \\
&3556638330023177088], \\
z(M(-1,-1,-1))
&=[1, 6, 39, 380, 4961, 80530, 1558976, 35012383, \\
&894298109, 25591093351, 810785122236, \\
&28169720107881, 1064856557864671, \\
&43506118030443092], \\
z(M(1,-1,-1))=z(M(-1,-1,1))
&=[ 1, -6, 69, -1064, 20770, -492052, 13724452, \\
&-440706098, 16015171303, -649815778392, 29121224693198, \\
&-1428607184648931, 76147883907835312, \\
&-4382222160786508572].
\end{align*}
Here we denote
\begin{equation*}
1+a_1 (A^4-1) +a_2(A^4-1)^2 + \cdots+a_{13}(A^4-1)^{13}+o(14)
=[1,a_1, a_2, \cdots,a_{13}]
\end{equation*}
where $o(14) \in (A+1)^{14} \mathbb{Q} [[A+1]]$.
\end{prop}
\begin{figure}
\caption{simple closed curves $c_1$, $c_2$ and $c_L$}
\label{figure_handle_example}
\end{figure}
\end{document}
\begin{document}
\sloppy
\title[Automorphisms of affine toric varieties]
{On orbits of the automorphism group on \\ an affine toric variety}
\author{Ivan Arzhantsev}
\thanks{The first author was partially supported by the Simons Grant and the Dynasty Foundation.}
\address{Department of Higher Algebra, Faculty of Mechanics and Mathematics,
Moscow State University, Leninskie Gory 1, GSP-1, Moscow, 119991,
Russia } \email{arjantse@mccme.ru}
\author{Ivan Bazhov}
\address{Department of Higher Algebra, Faculty of Mechanics and Mathematics,
Moscow State University, Leninskie Gory 1, GSP-1, Moscow, 119991,
Russia } \email{ibazhov@gmail.com}
\date{\today}
\begin{abstract}
Let $X$ be an affine toric variety. The total coordinates on $X$
provide a canonical presentation $\overline{X} \to X$ of $X$
as a quotient of a vector space $\overline{X}$
by a linear action of a quasitorus.
We prove that the orbits of the connected component
of the automorphism group $\mathop{\rm Aut}(X)$ on $X$ coincide with the Luna strata
defined by the canonical quotient presentation.
\end{abstract}
\subjclass[2010]{Primary 14M25, 14R20; \ Secondary 14J50, 14L30}
\keywords{Toric variety, Cox ring, automorphism, quotient, Luna stratification}
\maketitle
\section*{Introduction}
Every algebraic variety $X$ carries a canonical stratification by orbits of the automorphism
group $\mathop{\rm Aut}(X)$. The aim of this paper is to give several characterizations of this stratification
when $X$ is an affine toric variety.
In the case of a complete toric variety $X$ the group $\mathop{\rm Aut}(X)$ is a linear algebraic group.
It admits an explicit description in terms of combinatorial data defining $X$; see~\cite{De}
and~\cite{Cox}. The orbits of the connected component $\mathop{\rm Aut}(X)^0$ on the variety $X$
are described in~\cite{Ba}. It is proved there that two points $x,x'\in X$ are in the same
$\mathop{\rm Aut}(X)^0$-orbit if and only if the semigroups in the divisor class group $\mathop{\rm Cl}(X)$
generated by classes of prime torus invariant divisors that do not contain $x$ and $x'$
respectively, coincide.
We obtain an analogue of this result for affine toric
varieties. It turns out that in the affine case one may replace the semigroups mentioned above
by the groups generated by the same classes. This relates $\mathop{\rm Aut}(X)^0$-orbits on $X$ with
stabilizers of points in fibres of the canonical quotient presentation
$\pi: \overline{X} \to \overline{X}/\!/H_X \cong X$, where $H_X$ is a quasitorus
with the character group identified with $\mathop{\rm Cl}(X)$ and $\overline{X}$ is a
finite-dimensional $H_X$-module whose coordinate ring is the total coordinate ring
(or the Cox ring) of $X$; see~\cite{Cox}, \cite[Chapter~5]{CLS}, or Section~\ref{sec4}
for details. More precisely, our main result (Theorem~\ref{tmain}) states that
the collection of $\mathop{\rm Aut}(X)^0$-orbits on $X$ coincides with the Luna stratification of $X$
as the quotient space of the linear $H_X$-action on $\overline{X}$.
In particular, in our settings the Luna stratification is intrinsic in the sense of~\cite{KR}.
For connections between quotient presentations of an arbitrary affine variety,
the Luna stratification, and Cox rings, see~\cite{Ar}.
In contrast to the complete case, the automorphism group of an affine toric variety
is usually infinite-dimensional. An explicit description of the automorphism group of an affine toric surface in terms of free amalgamated products is given in~\cite{AZ}.
Starting from dimension three such a description is unknown. Another difference is that
in the affine case the open orbit of $\mathop{\rm Aut}(X)^0$ coincides with the smooth locus of $X$
\cite[Theorem~2.1]{AKZ}, while for a smooth complete toric variety $X$ the group $\mathop{\rm Aut}(X)^0$
acts on $X$ transitively if and only if $X$ is a product of projective spaces
\cite[Theorem~2.8]{Ba}.
In Section~\ref{sec1} we recall basic facts on automorphisms of algebraic varieties and define the connected component $\mathop{\rm Aut}(X)^0$. Section~\ref{sec2} contains a background on affine toric varieties. We consider
one-parameter unipotent subgroups in $\mathop{\rm Aut}(X)$ normalized by the acting torus (root subgroups).
They are in one-to-one correspondence with the so-called Demazure roots of the cone of the
variety $X$. Also we recall the technique developed in~\cite{AKZ}, which will be used later.
In Section~\ref{sec3} we discuss the Luna stratification on the quotient space of a rational
$G$-module, where $G$ is a reductive group. Necessary facts on Cox rings and canonical quotient
presentations of affine toric varieties are collected in Section~\ref{sec4}. We define the
Luna stratification of an arbitrary affine toric variety and give a characterization
of strata in terms of some groups of classes of divisors (Proposition~\ref{prop23}).
Our main result is proved in Section~\ref{sec5}. We illustrate it by an example. It
shows that although the group $\mathop{\rm Aut}(X)^0$ acts (infinitely)
transitively on the smooth locus $X^{\mathop{\rm reg}}$, it may be non-transitive
on the set of smooth points of the singular locus $X^{\mathop{\rm sing}}$, even when $X^{\mathop{\rm sing}}$
is irreducible. Finally, in Section~\ref{sec7} we prove collective infinite
transitivity on $X$ along the orbits of the subgroup of $\mathop{\rm Aut}(X)$ generated by
root subgroups and their replicas (Theorem~\ref{thcit}). Here we follow the approach developed in~\cite{AFKKZ}.
\section{Automorphisms of algebraic varieties}
\label{sec1}
Let $X$ be a normal algebraic variety over an algebraically closed
field ${\mathbb K}$ of characteristic zero and $\mathop{\rm Aut}(X)$ be the automorphism group.
At the moment we consider $\mathop{\rm Aut}(X)$ as an abstract group and our aim
is to define the connected component of $\mathop{\rm Aut}(X)$ following \cite{Ra}.
\begin{definition}
A family $\{\phi_b\}_{b\in B}$ of automorphisms of a variety $X$, where
the parametrizing set $B$ is an algebraic variety, is an {\it algebraic family}
if the map $B\times X \to X$ given by $(b,x)\to\phi_b(x)$ is a morphism.
\end{definition}
If $G$ is an algebraic group and $G\times X \to X$ is a regular action, then
we may take $B=G$ and consider the algebraic family $\{\phi_g\}_{g\in G}$, where
$\phi_g(x)=gx$. So any automorphism defined by an element of $G$ is included in
an algebraic family.
\begin{definition} \label{def2}
The {\it connected component} $\mathop{\rm Aut}(X)^0$ of the group $\mathop{\rm Aut}(X)$ is the subgroup of
automorphisms that may be included in an algebraic family $\{\phi_b\}_{b\in B}$
with an (irreducible) rational curve as a base $B$ such that $\phi_{b_0}=\mathop{\rm id}_X$ for some
$b_0\in B$.
\end{definition}
\begin{remark}
It is also natural to consider arbitrary irreducible base $B$ in Definition~\ref{def2}.
But for our purposes related to toric varieties rational curves as bases are more suitable.
\end{remark}
It is easy to check that $\mathop{\rm Aut}(X)^0$ is indeed a subgroup; see~\cite{Ra}.
\begin{lemma}
Let $G$ be a connected linear algebraic group and $G\times X \to X$ be a regular action.
Then the image of $G$ in $\mathop{\rm Aut}(X)$ is contained in $\mathop{\rm Aut}(X)^0$.
\end{lemma}
\begin{proof}
We have to prove that every $g\in G$ can be connected with
the unit by a rational curve. By~\cite[Theorem~22.2]{Hu},
an element $g$ is contained in a Borel subgroup of $G$.
As any connected solvable linear algebraic group, a Borel subgroup is isomorphic (as a variety) to
$({\mathbb K}^{\times})^r\times{\mathbb K}^m$ with some non-negative integers $r$ and $m$.
The assertion follows.
\end{proof}
Denote by $\mathop{\rm WDiv}(X)$ the group of Weil divisors on a variety $X$ and by
$\mathop{\rm PDiv}(X)$ the subgroup of principal divisors, i.e.,
$$
\mathop{\rm PDiv}(X)=\{\mathop{\rm div}(f) \ ; \ f \in {\mathbb K}(X)^{\times}\} \cup \{0\}.
$$
Then the divisor class group of $X$ is defined as $\mathop{\rm Cl}(X):=\mathop{\rm WDiv}(X)/\mathop{\rm PDiv}(X)$.
The image of a divisor $D$ in $\mathop{\rm Cl}(X)$ is denoted by $[D]$ and is called the {\it class}
of $D$. Any automorphism $\phi\in\mathop{\rm Aut}(X)$ acts naturally on the set of prime divisors
and thus on the group $\mathop{\rm WDiv}(X)$. Under this action a principal divisor goes
to a principal one and we obtain an action of $\mathop{\rm Aut}(X)$ on $\mathop{\rm Cl}(X)$.
Recall that the local class group of $X$ in a point $x$ is the factor group
$$
\mathop{\rm Cl}(X,x) := \mathop{\rm WDiv}(X) / \mathop{\rm PDiv}(X, x),
$$
where $\mathop{\rm PDiv}(X,x)$ is the group of Weil divisors on $X$ that are principal in some
neighbourhood of the point $x$. We have a natural surjection $\mathop{\rm Cl}(X) \to \mathop{\rm Cl}(X,x)$.
Let us denote by $\mathop{\rm Cl}_x(X)$ the kernel of this homomorphism, i.e., $\mathop{\rm Cl}(X,x)=\mathop{\rm Cl}(X)/\mathop{\rm Cl}_x(X)$.
Equivalently, $\mathop{\rm Cl}_x(X)$ consists of classes that have a representative whose support
does not contain $x$.
We obtain the following result.
\begin{lemma} \label{lemlcg}
Assume that an automorphism $\phi\in\mathop{\rm Aut}(X)$ acts on $\mathop{\rm Cl}(X)$ trivially. Then $\mathop{\rm Cl}_x(X)=\mathop{\rm Cl}_{\phi(x)}(X)$ for any $x\in X$.
\end{lemma}
\section{Affine toric varieties and Demazure roots}
\label{sec2}
A {\it toric variety} is a normal algebraic variety $X$ containing
an algebraic torus $T$ as a dense open subset such that the action
of $T$ on itself extends to a regular action of $T$ on $X$.
Let $N$ be the lattice of one-parameter subgroups $\lambda: {\mathbb K}^{\times} \to T$ and
$M=\text{Hom}(N,{\mathbb Z})$ be the dual lattice. We identify $M$
with the lattice of characters $\chi: T\to{\mathbb K}^{\times}$, and
the pairing $N\times M \to {\mathbb Z}$ is given by
$$
(\lambda,\chi) \to
\langle \lambda,\chi\rangle,\quad \text{where} \quad \chi(\lambda(t)):=
t^{\langle \lambda,\chi\rangle}.
$$
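Concretely, if one fixes coordinates identifying $T$ with $({\mathbb K}^{\times})^n$ and $N\cong M\cong{\mathbb Z}^n$, then a one-parameter subgroup and a character are given by
$$
\lambda_u(t)=(t^{u_1},\ldots,t^{u_n}), \qquad \chi^m(t_1,\ldots,t_n)=t_1^{m_1}\cdots t_n^{m_n},
\qquad u,m\in{\mathbb Z}^n,
$$
and the pairing is the standard one, $\langle \lambda_u,\chi^m\rangle=u_1m_1+\ldots+u_nm_n$.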
Let us recall a correspondence between affine toric varieties and
rational polyhedral cones. Let $\sigma$ be a polyhedral cone in the rational
vector space $N_{\mathbb Q}:=N\otimes_{{\mathbb Z}} {\mathbb Q}$ and $\sigma^{\vee}$ be the dual cone
in $M_{\mathbb Q}$. Then the affine variety
$$
X_{\sigma} := \mathop{\rm Spec} {\mathbb K}[\sigma^{\vee}\cap M]
$$
is toric and any affine toric variety arises this way, see~\cite{CLS}, \cite{Fu}.
The $T$-orbits on $X_{\sigma}$ are in order-reversing bijection with faces
of the cone $\sigma$.
If $\sigma_0\preceq\sigma$ is a face, then we denote the corresponding $T$-orbit
by ${\mathcal{O}}_{\sigma_0}$. In particular, ${\mathcal{O}}_{\sigma}$ is
a closed $T$-orbit and, if $o$ is the minimal face of $\sigma$, then
${\mathcal{O}}_{o}$ is the open $T$-orbit on $X_{\sigma}$.
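For example, if $N={\mathbb Z}^2$ and $\sigma$ is generated by the standard basis vectors $e_1,e_2$, then $\sigma^{\vee}$ is generated by the dual basis, ${\mathbb K}[\sigma^{\vee}\cap M]={\mathbb K}[x_1,x_2]$ and $X_{\sigma}={\mathbb K}^2$. Under the order-reversing bijection above, the minimal face $\{0\}$ corresponds to the open orbit $({\mathbb K}^{\times})^2$, the two rays correspond to the two punctured coordinate axes, and $\sigma$ itself corresponds to the $T$-fixed point ${\mathcal{O}}_{\sigma}=\{(0,0)\}$.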
An affine variety $X$ is called {\it non-degenerate} if any regular invertible function on
$X$ is constant. If $X$ is toric, this condition means that there are no non-trivial
decompositions $T=T_1\times T_2$ and $X=X_0\times T_2$, where $X_0$ is an affine toric variety
with acting torus $T_1$. If $X=X_{\sigma}$, then $X$ is non-degenerate if and only if
the cone $\sigma$ spans $N_{\mathbb Q}$ or, equivalently, the cone $\sigma^{\vee}$ is pointed.
Let us consider for a moment the case of an arbitrary toric variety $X$.
Recall that $X$ contains a finite number of prime $T$-invariant divisors
$D_1,\ldots,D_m$. (In the affine case they are in bijection
with rays of the cone $\sigma$.) It is well known that the group $\mathop{\rm Cl}(X)$ is
generated by classes of these divisors. Let us associate with a $T$-orbit ${\mathcal{O}}$
on $X$ the set of all prime $T$-invariant divisors
$D({\mathcal{O}}) := \{D_{i_1},\ldots,D_{i_k}\}$
that do not contain ${\mathcal{O}}$. Denote by $G({\mathcal{O}})$ the subgroup of $\mathop{\rm Cl}(X)$
generated by the classes of divisors from $D({\mathcal{O}})$.
\begin{proposition} \label{pr1}
Let $X$ be a toric variety and $x\in X$. Then
$\mathop{\rm Cl}_x(X)=G(T\cdot x)$.
\end{proposition}
\begin{proof}
Take a class $[D]\in\mathop{\rm Cl}(X)$ with a $T$-invariant representative $D$.
By definition, $[D]$ is contained in $\mathop{\rm Cl}_x(X)$ if and only if it contains
a representative $D'\in\mathop{\rm WDiv}(X)$ whose support does not pass through $x$.
In particular, $D' = D + \mathop{\rm div}(h)$ for some $h\in{\mathbb K}(X)$.
Consider the decomposition $D'=D'_+-D'_-$, where $D'_+$ and $D'_-$ are effective.
This allows us to deal only with the case where $D'$ is effective.
Suppose that $[D]\in\mathop{\rm Cl}_x(X)$.
We claim that there exists a $T$-invariant effective divisor $D''\in [D]$
whose support does not pass through $x$. Assume this is not the case and
consider the vector space
$$
\Gamma(X, D') = \{ f\in{\mathbb K}(X)^{\times} \ ; \ D' + \mathop{\rm div}(f) \ge 0\} \cup \{0\}.
$$
Then $\Gamma(X,D) = h\Gamma(X,D')$ and the subspace $\Gamma(X,D)$ in ${\mathbb K}(X)$
is invariant with respect to the action $(t\cdot f)(x):=f(t^{-1}\cdot x)$.
It is well known that $\Gamma(X,D)$ is a rational $T$-module. We can transfer
the structure of rational $T$-module to $\Gamma(X,D')$ by the formula
$$
t\circ f : = h^{-1} t\cdot (hf) \quad \text{for \ any} \quad t\in T, \ f\in\Gamma(X,D').
$$
Then a function $f\in\Gamma(X,D')$ is $T$-semiinvariant if and only if the divisor
$D'+\mathop{\rm div}(f)$ is $T$-invariant. Since $D'$ is effective, the support of $D'+\mathop{\rm div}(f)$
passes through $x$ if and only if $f(x)=0$. By our assumption, this is the case for
all $T$-semiinvariants in $\Gamma(X,D')$. But any vector in $\Gamma(X,D')$
is a sum of semiinvariants. Thus the support of any effective divisor in $[D]$
contains $x$. This is a contradiction, because the support of $D'$ does not
pass through $x$.
Since $D''$ is a sum of prime $T$-invariant divisors not passing through $x$,
the class $[D]$, and thus the group $\mathop{\rm Cl}_x(X)$, is contained in the group
$G(T\cdot x)$. The opposite inclusion is obvious.
\end{proof}
\begin{lemma}
Let $X$ be an affine toric variety.
Then $\mathop{\rm Aut}(X)^0$ is in the kernel of the action of $\mathop{\rm Aut}(X)$ on $\mathop{\rm Cl}(X)$.
\end{lemma}
\begin{proof}
Let $\{\phi_b\}_{b\in B}$ be an algebraic family of automorphisms with
$\phi_{b_0}=\mathop{\rm id}_X$ for some $b_0\in B$. We may assume that $B$ is an affine
rational curve. In particular, $\mathop{\rm Cl}(B)=0$. Consider the morphism
$\Phi\colon B\times X \to X$ given by $(b,x)\to\phi_b(x)$.
We have to show that for any divisor $D$ in $X$ the intersections of $\Phi^{-1}(D)$
with fibres $\{b\}\times X$ are linearly equivalent to each other. The torus $T$ acts
on the variety $B\times X$ via the action on the second component. It is well known that
every divisor on $B\times X$ is linearly equivalent to a $T$-invariant one.
Every prime $T$-invariant divisor on $B\times X$ is either vertical, i.e., is a product
of a prime divisor in $B$ and $X$, or horizontal, i.e., a product of $B$ and a prime
$T$-invariant divisor in $X$. Every vertical divisor is principal. So the divisor
$\Phi^{-1}(D)$ plus some principal divisor $\mathop{\rm div}(f)$ is a sum of horizontal divisors.
Restricting the rational function $f$ to every fibre $\{b\}\times X$ we obtain that
the intersections of $\Phi^{-1}(D)$ with fibres are linearly equivalent to each other.
\end{proof}
Our next aim is to present several facts on automorphisms of affine toric varieties.
Denote by $\sigma(1)$ the set of rays of a cone $\sigma$ and by $v_\tau$ the primitive
lattice vector on a ray $\tau$.
\begin{definition}
An element $e\in M$ is called a {\it Demazure root} of a polyhedral
cone $\sigma$ in $N_{\mathbb Q}$ if there is $\tau\in\sigma(1)$ such that
$\langle v_\tau, e\rangle =-1$ and $\langle v_{\tau'}, e\rangle \ge 0$
for all $\tau'\in\sigma(1)\setminus \{\tau\}$.
\end{definition}
Let $\mathcal{R}=\mathcal{R}(\sigma)$ be the set of all Demazure roots of a cone $\sigma$.
For any root $e\in\mathcal{R}$ denote by $\tau_e$ (resp. $v_e$)
the ray $\tau$ (resp. primitive vector $v_\tau$) with $\langle v_\tau, e\rangle =-1$.
Let $\mathcal{R}_\tau$ be the set of roots $e$ with $\tau_e=\tau$.
Then
$$
\mathcal{R} = \cup_{\tau\in\sigma(1)} \mathcal{R}_\tau.
$$
One can easily check that every set $\mathcal{R}_\tau$ is infinite.
With any root $e$ one associates a one-parameter subgroup $H_e$ in
the group $\mathop{\rm Aut}(X)$ such that $H_e \cong ({\mathbb K}, +)$ and $H_e$ is normalized
by $T$, see~\cite{De}, \cite{Oda} or \cite[Section~2]{AKZ} for an explicit
form of $H_e$. Moreover, every one-parameter unipotent subgroup of $\mathop{\rm Aut}(X)$
normalized by $T$ has the form $H_e$ for some root $e$.
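To illustrate the definition in the example $X_{\sigma}={\mathbb K}^2$ with $\sigma$ generated by $e_1,e_2$ considered above: the Demazure roots attached to the ray $\tau_1$ through $e_1$ are
$$
\mathcal{R}_{\tau_1}=\{(-1,b) \ ; \ b\in{\mathbb Z}_{\ge 0}\},
$$
and, with the explicit form of $H_e$ from~\cite{AKZ}, the root subgroup $H_{(-1,b)}$ acts (up to reparametrization) by the triangular automorphisms $(x_1,x_2)\mapsto (x_1+t x_2^{b},x_2)$, $t\in{\mathbb K}$, of ${\mathbb K}^2$. In particular, each set $\mathcal{R}_{\tau}$ is visibly infinite in this example.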
The following result is obtained in \cite[Proposition~2.1]{AKZ}.
\begin{proposition} \label{prcon}
Let $e\in\mathcal{R}$. For every point $x\in X\setminus X^{H_e}$ the orbit
$H_e\cdot x$ meets exactly two $T$-orbits ${\mathcal{O}}_1$ and ${\mathcal{O}}_2$. Moreover, ${\mathcal{O}}_2\subseteq\overline{{\mathcal{O}}_1}$ and $\dim{\mathcal{O}}_1 = 1 + \dim{\mathcal{O}}_2$.
\end{proposition}
A pair of $T$-orbits $({\mathcal{O}}_1,{\mathcal{O}}_2)$
as in Proposition~\ref{prcon} is called {\it $H_e$-connected}.
The next result is Lemma~2.2 from~\cite{AKZ}.
\begin{lemma} \label{lemcon}
Let ${\mathcal{O}}_{\sigma_1}$ and ${\mathcal{O}}_{\sigma_2}$ be two $T$-orbits
corresponding to faces $\sigma_1,\sigma_2\preceq\sigma$.
Then the pair $({\mathcal{O}}_{\sigma_1}, {\mathcal{O}}_{\sigma_2})$ is $H_e$-connected
if and only if
$$
e|_{\sigma_2} \le 0 \quad \text{and} \quad \sigma_1=\sigma_2\cap e^{\perp} \ \
\text{is a facet of the cone} \ \sigma_2.
$$
\end{lemma}
Let $\mathop{\rm AT}(X)$ be the subgroup of $\mathop{\rm Aut}(X)$ generated by subgroups $T$
and $H_e$, $e\in\mathcal{R}$.
Clearly, two $T$-orbits ${\mathcal{O}}$ and ${\mathcal{O}}'$ on $X$ are contained in the same $\mathop{\rm AT}(X)$-orbit
if and only if there is a sequence ${\mathcal{O}}={\mathcal{O}}_1, {\mathcal{O}}_2,\ldots,{\mathcal{O}}_k={\mathcal{O}}'$ such that
for any $i$ either the pair $({\mathcal{O}}_i,{\mathcal{O}}_{i+1})$ or the pair $({\mathcal{O}}_{i+1},{\mathcal{O}}_i)$
is $H_e$-connected for some $e\in\mathcal{R}$.
This statement admits a purely combinatorial reformulation. Let $\Gamma({\mathcal{O}})$ be
the semigroup in $\mathop{\rm Cl}(X)$ generated by the classes of the elements of $D({\mathcal{O}})$.
The following result is given in Lemmas~2.2-4 of~\cite{Ba}.
\begin{proposition} \label{prat}
Two $T$-orbits ${\mathcal{O}}$ and ${\mathcal{O}}'$ on $X$ lie in the same $\mathop{\rm AT}(X)$-orbit
if and only if $\Gamma({\mathcal{O}})=\Gamma({\mathcal{O}}')$.
\end{proposition}
\section{The Luna stratification}
\label{sec3}
In this section we recall basic facts on the Luna stratification
introduced in~\cite{Lu73}, see also \cite[Section~6]{PV}.
Let $G$ be a reductive affine algebraic group over
an algebraically closed field ${\mathbb K}$ of characteristic zero and
$V$ be a rational finite-dimensional $G$-module. Denote
by ${\mathbb K}[V]$ the algebra of polynomial functions on $V$ and
by ${\mathbb K}[V]^G$ the subalgebra of $G$-invariants. Let $V/\!/G$
be the spectrum of the algebra ${\mathbb K}[V]^G$. The inclusion
${\mathbb K}[V]^G\subseteq{\mathbb K}[V]$ gives rise to a morphism $\pi \colon
V\to V/\!/G$ called the {\it quotient morphism} for the $G$-module $V$.
It is well known that the morphism $\pi$ is a categorical quotient
for the action of the group $G$ on $V$ in the category of
algebraic varieties, see~\cite[4.6]{PV}. In particular, $\pi$ is surjective.
The affine variety $X:=V/\!/G$ is irreducible and normal.
It is smooth if and only if the point $\pi(0)$ is smooth
on $X$. In the latter case the variety $X$ is an affine
space. Every fibre of the morphism $\pi$ contains a unique closed $G$-orbit.
For any closed $G$-invariant subset $A\subseteq V$ its image $\pi(A)$
is closed in $X$. These and other properties of the quotient morphism
may be found in~\cite[4.6]{PV}.
By Matsushima's criterion, if an orbit $G\cdot v$ is closed
in $V$, then the stabilizer $\mathop{\rm Stab}(v)$ is reductive,
see~\cite[4.7]{PV}. Moreover,
there exists a finite collection $\{H_1,\ldots,H_r\}$
of reductive subgroups in $G$ such that if an orbit $G\cdot v$
is closed in $V$, then $\mathop{\rm Stab}(v)$ is conjugate to one of these
subgroups. This implies that every fibre of
the morphism $\pi$ contains a point whose stabilizer coincides
with some $H_i$.
For every stabilizer $H$ of a point in a closed $G$-orbit in $V$ the subset
$$
V_H := \{ w\in V \, ; \, \text{there exists}\,
v\in V \, \text{such that} \
\overline{G\cdot w} \supset G\cdot v =\overline{G\cdot v} \
\text{and} \, \mathop{\rm Stab}(v)=H\}
$$
is $G$-invariant and locally closed in $V$.
The image $X_H:=\pi(V_H)$ turns out to be a smooth
locally closed subset of $X$. In particular, $X_H$ is
a smooth quasiaffine variety.
\begin{definition} \label{defls}
The stratification
$$
X \, = \, \bigsqcup_{i=1}^r \, X_{H_i}
$$
is called the {\it Luna stratification} of the quotient space $X$.
\end{definition}
Thus two points $x_1,x_2\in X$ are in the same Luna stratum if and only if
the stabilizers of points from closed $G$-orbits in $\pi^{-1}(x_1)$ and
$\pi^{-1}(x_2)$ are conjugate. In particular, if $G$ is a quasitorus,
these stabilizers must coincide.
There is a unique open dense stratum called the {\it principal
stratum} of $X$. The closure of any stratum is a union of strata.
Moreover, a stratum $X_{H_i}$ is contained in the closure of a stratum
$X_{H_j}$ if and only if the subgroup $H_i$ contains a subgroup conjugate to $H_j$.
This induces a partial ordering on the set of strata compatible with the (reverse)
ordering on the set of conjugacy classes of stabilizers.
\section{Cox rings and quotient presentations}
\label{sec4}
Let $X$ be a normal algebraic variety with finitely generated divisor class group $\mathop{\rm Cl}(X)$.
Assume that any regular invertible function $f\in {\mathbb K}[X]^{\times}$ is constant.
Roughly speaking, the {\it Cox ring} of $X$ may be defined as
$$
R(X) \, := \, \bigoplus_{[D]\in \mathop{\rm Cl}(X)} \Gamma(X, D).
$$
In order to obtain a multiplicative structure on $R(X)$ some technical work is needed,
especially when the group $\mathop{\rm Cl}(X)$ has torsion. We refer to \cite[Section~4]{ADHL} for details.
It is well known that if $X$ is toric and non-degenerate, then $R(X)$ is a polynomial ring
${\mathbb K}[Y_1,\ldots,Y_m]$, where the variables $Y_i$ are indexed by $T$-invariant
prime divisors $D_i$ on $X$ and the $\mathop{\rm Cl}(X)$-grading on $R(X)$ is given by
$\deg(Y_i)=[D_i]$; see~\cite{Cox} and \cite[Chapter~5]{CLS}.
The affine space
$\overline{X}:=\mathop{\rm Spec} R(X)$ comes with a linear action of a quasitorus $H_X:=\mathop{\rm Spec}({\mathbb K}[\mathop{\rm Cl}(X)])$
given by the $\mathop{\rm Cl}(X)$-grading on $R(X)$. The algebra of $H_X$-invariants on $R(X)$
coincides with the zero weight component $R(X)_0=\Gamma(X,0)={\mathbb K}[X]$.
Assume that $X$ is a non-degenerate affine toric variety. Then we obtain a
quotient presentation
$$
\pi: \overline{X} \to \overline{X}/\!/H_X \cong X.
$$
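As a simple example (recorded only for illustration), consider the quadric cone $X=\{xy=z^2\}\subseteq {\mathbb K}^3$, an affine toric surface with an $A_1$-singularity at the origin. Here $\mathop{\rm Cl}(X)\cong{\mathbb Z}/2{\mathbb Z}$, the Cox ring is $R(X)={\mathbb K}[Y_1,Y_2]$ with both variables of non-trivial degree, $H_X\cong\{\pm 1\}$ acts by simultaneous change of sign, and the canonical quotient presentation is
$$
\pi\colon {\mathbb K}^2 \to {\mathbb K}^2/\!/\{\pm 1\}\cong X, \qquad (y_1,y_2)\mapsto (y_1^2,\,y_2^2,\,y_1y_2).
$$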
\begin{definition}
Let $X$ be a non-degenerate affine toric variety and $V$ be a rational module of
a quasitorus $H$. The quotient morphism $\pi': V\to V/\!/ H$ is called a
{\it canonical quotient presentation} of $X$, if there are an isomorphism
$\varphi: H_X \to H$ and a linear isomorphism $\psi: \overline{X}\to V$
such that $\psi(h\cdot y)=\varphi(h)\cdot\psi(y)$ for any $h\in H_X$ and
$y\in\overline{X}$.
\end{definition}
A canonical quotient presentation may be characterized in terms of the quasitorus action.
\begin{definition}
An action of a reductive group $F$ on an affine variety $Z$ is said to be
{\it strongly stable} if there exists an open dense invariant subset $U\subseteq Z$ such that
\begin{enumerate}
\item
the complement $Z\setminus U$ is of codimension at least two in $Z$;
\item
the group $F$ acts freely on $U$ ;
\item
for every $z\in U$ the orbit $F\cdot z$ is closed in $Z$.
\end{enumerate}
\end{definition}
The following proposition may be found in \cite[Remark~6.4.2 and Theorem~6.4.3]{ADHL}.
\begin{proposition} \label{propstst}
\begin{enumerate}
\item
Let $X$ be a non-degenerate toric variety. Then the action of $H_X=\mathop{\rm Spec}({\mathbb K}[\mathop{\rm Cl}(X)])$ on
$\overline{X}$ is strongly stable.
\item
Let $H$ be a quasitorus acting linearly on a vector space $V$. Then
the quotient space $X:=V/\!/H$ is a non-degenerate affine toric variety.
If the action of $H$ on $V$ is strongly stable, then the quotient morphism
$\pi: V \to V/\!/H$ is a canonical quotient presentation of~$X$. In particular,
the group $\mathop{\rm Cl}(X)$ is isomorphic to the character group of $H$.
\end{enumerate}
\end{proposition}
A canonical quotient presentation allows one to define a canonical stratification on $X$.
\begin{definition}
The {\it Luna stratification} of a non-degenerate affine toric variety $X$ is the Luna
stratification of Definition~\ref{defls} induced on $X$ by the canonical
quotient presentation $\pi:\overline{X} \to X$.
\end{definition}
\begin{proposition}
Let $X$ be a non-degenerate affine toric variety. Then
the principal stratum of the Luna stratification on $X$
coincides with the smooth locus $X^{\mathop{\rm reg}}$.
\end{proposition}
\begin{proof}
As was pointed out above, points of the principal stratum are smooth points of $X$.
Conversely, the fibre $\pi^{-1}(x)$ over a smooth point $x\in X$ consists
of one $H_X$-orbit and $H_X$ acts on $\pi^{-1}(x)$ freely; see~\cite[Proposition~6.1.6]{ADHL}.
This shows that $x$ is contained in the principal stratum.
\end{proof}
Now we assume that $X$ is a degenerate affine toric variety. Let us fix a point $x_0$
in the open $T$-orbit on $X$ and consider a closed subvariety $X_0=\{x\in X ; f(x)=f(x_0)\}$,
where $f$ runs through all invertible regular functions on $X$. Then $X_0$ is a non-degenerate
affine toric variety with respect to a subtorus $T_1\subseteq T$, and $X_0$ depends on the
choice of $x_0$ only up to shift by an element of $T$. Moreover, $X\cong X_0 \times T_2$
for a subtorus $T_2\subset T$ with $T=T_1\times T_2$. We define a {\it Luna stratum}
on $X$ as $T\cdot Y$, where $Y$ is a Luna stratum on $X_0$. This way we obtain
a canonical stratification of $X$ with open stratum being the smooth locus.
The following lemma is straightforward.
\begin{lemma} \label{lemred}
With the notation as above, every Luna stratum on $X$ is isomorphic to
$Y\times T_2$, where $Y$ is a Luna stratum on $X_0$.
\end{lemma}
Now we present the first characterization of the Luna stratification.
\begin{proposition} \label{prop23}
Let $X$ be an affine toric variety. Then two points $x,x'\in X$
are in the same Luna stratum if and only if $\mathop{\rm Cl}_x(X)=\mathop{\rm Cl}_{x'}(X)$.
\end{proposition}
\begin{proof}
By Lemma~\ref{lemred} we may assume that $X$ is non-degenerate.
Let $\pi:\overline{X} \to X$ be the canonical quotient presentation.
For any point $v\in\overline{X}$ such that the orbit $H_X\cdot v$ is closed in $\overline{X}$
the stabilizer $\mathop{\rm Stab}(v)$ in $H_X$ is defined by the surjection of the character groups
${\mathbb X}(H_X) \to {\mathbb X}(\mathop{\rm Stab}(v))$. By~\cite[Proposition~6.2.2]{ADHL} we may identify
${\mathbb X}(H_X)$ with $\mathop{\rm Cl}(X)$, ${\mathbb X}(\mathop{\rm Stab}(v))$ with $\mathop{\rm Cl}(X,x)$, and the homomorphism
with the projection $\mathop{\rm Cl}(X)\to\mathop{\rm Cl}(X,x)$, where $x=\pi(v)$.
Thus two points $v,v'\in \overline{X}$
with closed $H_X$-orbits have the same stabilizers in $H_X$, or, equivalently,
the points $x=\pi(v)$ and $x'=\pi(v')$ lie in the same Luna stratum on $X$ if and only if
$\mathop{\rm Cl}_x(X)=\mathop{\rm Cl}_{x'}(X)$.
\end{proof}
\section{Orbits of the automorphism group}
\label{sec5}
The following theorem describes orbits of the group
$\mathop{\rm Aut}(X)^0$ in terms of local divisor class groups and the Luna stratification
of an affine toric variety $X$.
\begin{theorem} \label{tmain}
Let $X$ be an affine toric variety and $x,x'\in X$. Then
the following conditions are equivalent.
\begin{enumerate}
\item
The $\mathop{\rm Aut}(X)^0$-orbits of the points $x$ and $x'$ coincide.
\item
$G(T\cdot x) =G(T\cdot x')$.
\item
$\mathop{\rm Cl}_x(X)=\mathop{\rm Cl}_{x'}(X)$.
\item
The points $x$ and $x'$ lie in the same Luna stratum on $X$.
\end{enumerate}
\end{theorem}
\begin{proof}
Implication $1 \Rightarrow 3$ follows from Lemma~\ref{lemlcg}.
Conditions $3$ and $4$ are equivalent by Proposition~\ref{prop23}
and conditions $2$ and $3$ are equivalent by Proposition~\ref{pr1}.
So it remains to prove implication $2 \Rightarrow 1$.
\begin{proposition} \label{pr2}
Let $X$ be an affine toric variety and $x\in X$. Then $G(T\cdot x) = \Gamma(T\cdot x)$.
\end{proposition}
\begin{proof}
We begin with some generalities on quasitorus representations.
Let $K$ be a finitely generated abelian group.
Consider a diagonal linear action of the quasitorus $H=\mathop{\rm Spec}({\mathbb K}[K])$ on a vector space
$V$ of dimension $m$ given by characters $\chi_1,\ldots,\chi_m\in K$. Then we have a weight
decomposition $V=\oplus_{i=1}^m {\mathbb K} e_i$, where $h\cdot e_i=\chi_i(h)e_i$ for any $h\in H$.
With any vector $v=x_1e_1+\ldots+x_me_m$ one associates the set of characters
$\Delta(v)=\{\chi_{i_1},\ldots,\chi_{i_k}\}$ such that $x_{i_1}\ne 0,\ldots,x_{i_k}\ne 0$.
It is well known that the orbit $H\cdot v$ is closed in $V$ if and only if
the cone generated by $\chi_{i_1}\otimes 1,\ldots,\chi_{i_k}\otimes 1$
in $K_{\mathbb Q}=K\otimes_{\mathbb Z}{\mathbb Q}$ is a subspace.
Below we make use of the following elementary lemma.
\begin{lemma} \label{Lemel}
Let $\chi_1,\dots,\chi_m$ be elements of a finitely generated abelian group $K$.
If the cone generated by $\chi_1\otimes 1,\dots,\chi_m\otimes 1$ in
$K_{\mathbb Q}$ is a subspace, then the semigroup generated
by $\chi_1,\dots,\chi_m$ in $K$ is a group.
\end{lemma}
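For instance, if $K={\mathbb Z}$, $\chi_1=1$ and $\chi_2=-1$, then the cone generated by $\chi_1\otimes 1$ and $\chi_2\otimes 1$ is the whole line $K_{\mathbb Q}={\mathbb Q}$ and the semigroup generated by $\chi_1,\chi_2$ is the group ${\mathbb Z}$, whereas for $\chi_1=\chi_2=1$ the cone is only a ray and the semigroup generated by $\chi_1,\chi_2$ is not a group.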
Let us return to the canonical quotient presentation $\pi:\overline{X}\to X$.
By construction, the $H_X$-weights of the linear $H_X$-action on $\overline{X}$
are the classes $-[D_1],\ldots,-[D_m]$ in $\mathop{\rm Cl}(X)$. Moreover,
for any point $v\in\overline{X}$ such that $H_X\cdot v$ is closed in $\overline{X}$
the set of weights $\Delta(v)$ coincides with the classes
of the divisors from $-D(T\cdot x)=\{-D_{i_1},\ldots,-D_{i_k}\}$, where $x=\pi(v)$.
Since the orbit $H_X\cdot v$ is closed in $\overline{X}$, by Lemma~\ref{Lemel}
the semigroup $\Gamma(T\cdot x)$ generated by $[D_{i_1}],\ldots,[D_{i_k}]$ coincides
with the group $G(T\cdot x)$ generated by $[D_{i_1}],\ldots,[D_{i_k}]$, and we
obtain Proposition~\ref{pr2}.
\end{proof}
By Proposition~\ref{pr2}, we have $\Gamma(T\cdot x)=\Gamma(T\cdot x')$.
Further, Proposition~\ref{prat} implies that $x$ and $x'$ lie in the same
$\mathop{\rm AT}(X)$-orbit and thus in the same $\mathop{\rm Aut}(X)^0$-orbit. This completes
the proof of Theorem~\ref{tmain}.
\end{proof}
\begin{remark}
Condition 2 of Theorem~\ref{tmain} is the most effective in practice.
It would be interesting to know for which wider classes of varieties the equivalence
of conditions 1 and 3 holds.
\end{remark}
\begin{remark}
It follows from properties of the Luna stratification that the $\mathop{\rm Aut}(X)^0$-orbit
of a point $x$ is contained in the closure of the $\mathop{\rm Aut}(X)^0$-orbit of a point $x'$
if and only if $\mathop{\rm Cl}_x(X)$ is a subgroup of $\mathop{\rm Cl}_{x'}(X)$.
\end{remark}
Let us finish this section with a description of $\mathop{\rm Aut}(X)$-orbits on an affine
toric variety $X$. Denote by $S(X)$ the image of the group $\mathop{\rm Aut}(X)$ in the
automorphism group $\mathop{\rm Aut}(\mathop{\rm Cl}(X))$ of the abelian group $\mathop{\rm Cl}(X)$. The group
$S(X)$ preserves the semigroup generated by the classes $[D_1],\ldots,[D_m]$
of prime $T$-invariant divisors. Indeed, this is the semigroup of
classes containing an effective divisor.
In particular, the group $S(X)$ preserves the cone in $\mathop{\rm Cl}(X)\otimes_{{\mathbb Z}}{\mathbb Q}$
generated by $[D_1]\otimes 1,\ldots,[D_m]\otimes 1$. This shows that
$S(X)$ is finite.
The following proposition is a direct corollary of Theorem~\ref{tmain}.
\begin{proposition}
Let $X$ be an affine toric variety. Two points $x,x'\in X$
are in the same $\mathop{\rm Aut}(X)$-orbit if and only if there exists
$s\in S(X)$ such that $s(\mathop{\rm Cl}_x(X))=\mathop{\rm Cl}_{x'}(X)$.
\end{proposition}
If $X$ is an affine toric variety, then the group $\mathop{\rm Aut}(X)^0$ acts
on the smooth locus $X^{\mathop{\rm reg}}$ transitively; see~\cite[Theorem~2.1]{AKZ}.
Let $X^{\mathop{\rm sing}}=X_1\cup\ldots\cup X_r$ be the decomposition of the singular
locus into irreducible components. One may expect that the group $\mathop{\rm Aut}(X)^0$
acts transitively on every subset $X^{\mathop{\rm reg}}_i\setminus \cup_{j\ne i} X_j$ too.
The following example shows that this is not the case.
\begin{example}
Consider a two-dimensional torus $T^2$ acting linearly on the vector space ${\mathbb K}^7$:
$$
(t_1,t_2)\cdot(z_1,z_2,z_3,z_4,z_5,z_6,z_7)=
(t_1z_1,t_1z_2,t_1^{-1}z_3,t_2z_4,t_2z_5,t_1^{-1}t_2^{-1}z_6,t_1^{-1}t_2^{-1}z_7).
$$
One can easily check that this action is strongly stable, and by Proposition~\ref{propstst}
the quotient morphism $\pi: {\mathbb K}^7 \to {\mathbb K}^7/\!/T^2$ is the canonical quotient
presentation of a five-dimensional non-degenerate affine toric variety $X:={\mathbb K}^7/\!/T^2$.
Looking at closed $T^2$-orbits on ${\mathbb K}^7$, one obtains that there are three
Luna strata
$$
X = (X \setminus Z) \ \cup \ (Z \setminus \{0\}) \ \cup \ \{0\},
$$
where $Z=\pi(W)$ and $W$ is a subspace in ${\mathbb K}^7$ given by $z_4=z_5=z_6=z_7=0$.
In particular, $Z$ is the singular locus of $X$. Clearly,
$Z$ is isomorphic to an affine plane with coordinates $z_1z_3$ and $z_2z_3$. So, $Z$
is irreducible and smooth, but the groups $\mathop{\rm Aut}(X)^0$ (and $\mathop{\rm Aut}(X)$) have two orbits
on $Z$, namely, $Z \setminus \{0\}$ and $\{0\}$.
\end{example}
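The case analysis behind this example can also be carried out mechanically. The following Python sketch (an illustration only, not part of the argument; it assumes NumPy and SciPy are available) enumerates the possible coordinate supports of a point of ${\mathbb K}^7$, keeps those for which the cone generated by the corresponding $T^2$-weights is a subspace, that is, the supports occurring for points with closed orbit, and records the dimension of the corresponding stabilizer. The output $[0,1,2]$ is consistent with the three Luna strata found above.
\begin{verbatim}
# Illustrative sketch: supports of points of K^7 with closed T^2-orbit
# and the dimensions of their stabilizers, for the weights of the example.
import itertools
import numpy as np
from scipy.optimize import linprog

weights = [(1, 0), (1, 0), (-1, 0), (0, 1), (0, 1), (-1, -1), (-1, -1)]

def in_cone(gens, target):
    # Is `target` a non-negative combination of `gens`?  (LP feasibility)
    A = np.array(gens, dtype=float).T
    res = linprog(c=np.zeros(len(gens)), A_eq=A, b_eq=np.array(target, dtype=float),
                  bounds=[(0, None)] * len(gens), method="highs")
    return res.status == 0

def cone_is_subspace(gens):
    # The cone generated by `gens` is a linear subspace iff it contains -v
    # for every generator v (the closed-orbit criterion recalled above).
    return all(in_cone(gens, [-a for a in v]) for v in gens)

stab_dims = set()
for r in range(len(weights) + 1):
    for support in itertools.combinations(range(len(weights)), r):
        gens = [weights[i] for i in support]
        if not gens or cone_is_subspace(gens):
            rank = np.linalg.matrix_rank(np.array(gens)) if gens else 0
            stab_dims.add(2 - rank)        # dim Stab(v) = 2 - rank of the weights
print(sorted(stab_dims))                   # expected: [0, 1, 2]
\end{verbatim}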
\section{Collective infinite transitivity}
\label{sec7}
Let $X$ be a non-degenerate affine toric variety of dimension $\ge 2$.
It is shown in \cite[Theorem~2.1]{AKZ} that for any positive integer $s$
and any two tuples of smooth pairwise distinct points $x_1,\ldots,x_s$
and $x_1',\ldots,x_s'$ on $X$ there is an automorphism $\phi\in\mathop{\rm Aut}(X)^0$
such that $\phi(x_i)=x_i'$ for $i=1,\ldots,s$. In other words, the
action of $\mathop{\rm Aut}(X)^0$ on the smooth locus $X^{\mathop{\rm reg}}$ is {\it infinitely
transitive}.
Our aim is to generalize this result and to prove collective infinite transitivity
along different orbits of some subgroup of the automorphism group.
Let us recall some general notions from~\cite{AFKKZ}.
Consider a one-dimensional algebraic group $H\cong({\mathbb K},+)$ and a regular action
$H\times X \to X$ on an affine variety $X=\mathop{\rm Spec} A$. Then the associated derivation
$\partial$ of $A$ is locally nilpotent, i.e., for every $a\in A$ we can find
$n\in{\mathbb N}$ such that $\partial^n(a)=0$. Any derivation of $A$ may be viewed as
a vector field on $X$. So we may speak about locally nilpotent vector fields~$\partial$.
We use the notation $H=H(\partial)=\exp({\mathbb K}\partial)$.
It is immediate that for every $f\in\mathop{\rm Ker}\partial$
the derivation $f\partial$ is again locally nilpotent \cite[1.4, Principle~7]{Fr}.
A one-parameter subgroup of the form $H(f\partial)$ for some $f\in\mathop{\rm Ker}\partial$
is called a {\it replica} of $H(\partial)$.
A set ${\mathcal{N}}$ of locally nilpotent vector fields on $X$ is said to be {\it saturated}
if it satisfies the following two conditions.
\begin{enumerate}
\item
${\mathcal{N}}$ is closed under conjugation by elements in $G$, where $G$ is the subgroup
of $\mathop{\rm Aut}(X)$ generated by all subgroups $H(\partial)$, $\partial\in{\mathcal{N}}$.
\item
${\mathcal{N}}$ is closed under taking replicas, i.e., for all $\partial\in{\mathcal{N}}$ and
$f\in\mathop{\rm Ker}\partial$ we have $f\partial\in{\mathcal{N}}$.
\end{enumerate}
If $X=X_{\sigma}$ is toric, we define ${\mathcal{A}}(X)$ as the subgroup of $\mathop{\rm Aut}(X)$
generated by $H_e$, $e\in\mathcal{R}$, and all their replicas. As ${\mathcal{N}}$ one can take
locally nilpotent vector fields corresponding to replicas of $H_e$ and all their conjugates.
\begin{theorem} \label{thcit}
Let $X$ be a non-degenerate affine toric variety. Suppose that
$x_1,\ldots,x_s$ and $x_1',\ldots,x_s'$ are points on $X$ with
$x_i\ne x_j$ and $x_i'\ne x_j'$ for $i\ne j$ such that for each $i$
the orbits ${\mathcal{A}}(X)\cdot x_i$ and ${\mathcal{A}}(X)\cdot x_i'$ are equal
and of dimension $\ge 2$. Then there exists an element $\phi\in{\mathcal{A}}(X)$
such that $\phi(x_i)=x_i'$ for $i=1,\ldots,s$.
\end{theorem}
The proof of Theorem~\ref{thcit} is based on the following results.
Let $G$ be a subgroup of $\mathop{\rm Aut}(X)$ generated by subgroups $H(\partial)$, $\partial\in{\mathcal{N}}$,
for some set ${\mathcal{N}}$ of locally nilpotent vector fields and $\Omega\subseteq X$ be a
$G$-invariant subset.
We say that a locally nilpotent vector field $\partial$ satisfies the
{\it orbit separation property} on $\Omega$ if there is an $H(\partial)$-stable subset
$U(H)\subseteq\Omega$ such that
\begin{enumerate}
\item
for each $G$-orbit $O$ contained in $\Omega$, the intersection $U(H)\cap O$
is open and dense in $O$;
\item
the global $H$-invariants ${\mathbb K}[X]^H$ separate all one-dimensional $H$-orbits in $U(H)$.
\end{enumerate}
Similarly we say that a set of locally nilpotent vector fields ${\mathcal{N}}$
satisfies the {\it orbit separation property} on $\Omega$ if it holds
for every $\partial\in{\mathcal{N}}$.
\begin{theorem} (\cite[Theorem~3.1]{AFKKZ}) \label{thAFKKZ}
Let $X$ be an irreducible affine variety and
$G\subseteq\mathop{\rm Aut}(X)$ be a subgroup
generated by a saturated set ${\mathcal{N}}$ of locally nilpotent vector
fields, which has the orbit separation property on a $G$-invariant
subset $\Omega\subseteq X$. Suppose that $x_1,\ldots ,x_s$ and
$x_1',\ldots, x_s'$ are points in $\Omega$ with $x_i\ne x_j$ and
$x_i'\ne x_j'$ for $i\ne j$ such that for each $i$ the orbits
$G\cdot x_i$ and $G\cdot x_i'$ are equal and of dimension $\ge 2$. Then
there exists an element $g\in G$ such that $g\cdot x_i=x'_i$ for
$i=1,\ldots, s$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thcit}.]
Let us take $G={\mathcal{A}}(X)$ and $\Omega=X$. In order to apply Theorem~\ref{thAFKKZ},
we have to check the
orbit separation property for one-parameter subgroups in ${\mathcal{A}}(X)$.
By~\cite[Lemma~2.8]{AFKKZ}, it suffices to check it for the subgroups
$H_e$, $e\in\mathcal{R}$.
\begin{proposition}
Let $e$ be a root of a cone $\sigma\subseteq N_{\mathbb Q}$ of full dimension
and $X=X_{\sigma}$ be the corresponding affine toric variety. Then for any
two one-dimensional $H_e$-orbits $C_1$ and $C_2$ there is an invariant
$f\in{\mathbb K}[X]^{H_e}$ with $f|_{C_1} =0$ and $f|_{C_2} =1$.
\end{proposition}
\begin{proof}
Let $R_e$ be a one-parameter subgroup of $T$ represented by the vector
$v_e\in N$. Then ${\mathbb K}[X]^{H_e}={\mathbb K}[X]^{R_e}$; see~\cite[Section~2.4]{AKZ}.
Moreover, the subgroup $R_e$ normalizes, but does not centralize, $H_e$ in $\mathop{\rm Aut}(X)$
and every one-dimensional $H_e$-orbit $C\cong{\mathbb A}^1$ is the closure of an $R_e$-orbit;
see~\cite[Proposition~2.1]{AKZ}. In particular, every one-dimensional $H_e$-orbit
contains a unique $R_e$-fixed point. Since the group $R_e$ is reductive, every two $R_e$-fixed
points can be separated by an invariant from ${\mathbb K}[X]^{R_e}$. This shows that any two
one-dimensional $H_e$-orbits can be separated by an invariant from ${\mathbb K}[X]^{H_e}$.
\end{proof}
This completes the proof of Theorem~\ref{thcit}.
\end{proof}
It follows from the proof of~\cite[Theorem~2.1]{AKZ} that the group ${\mathcal{A}}(X)$
acts (infinitely) transitively on the smooth locus of a non-degenerate affine
toric variety $X$. In particular, the open orbits of ${\mathcal{A}}(X)$ and
$\mathop{\rm Aut}(X)^0$ on $X$ coincide. The example below shows that this is not the case for
smaller orbits.
\begin{example}
Let $X_{\sigma}$ be the affine toric threefold defined by the cone
$$
\sigma=\mathop{\rm cone}(\tau_1,\tau_2,\tau_3), \quad v_{\tau_1}=(1,0,0),
\quad v_{\tau_2}=(1,2,0), \quad v_{\tau_3}=(0,1,2).
$$
We claim that all points on one-dimensional $T$-orbits of $X$ are ${\mathcal{A}}(X)$-fixed.
Indeed, suppose that a point $x$ on a one-dimensional $T$-orbit is moved by
the subgroup $H_e$ for some root $e$. Then $x$ belongs to a union of two
$H_e$-connected $T$-orbits. Assume, for example, that the $T$-orbit of $x$ corresponds
to the face $\mathop{\rm cone}(\tau_1,\tau_2)$ and the pair
of $H_e$-connected $T$-orbits includes the $T$-fixed point on $X$. By Lemma~\ref{lemcon},
we have
$$
\langle (1,0,0), e\rangle=0, \quad \langle (1,2,0), e\rangle=0,
\quad \langle (0,1,2), e\rangle=-1.
$$
These conditions imply $\langle (0,0,1), e\rangle=-1/2$, a contradiction.
If the pair of $H_e$-connected $T$-orbits includes the two-dimensional orbit, then
either
$$
\langle (1,0,0), e\rangle=0, \quad \langle (1,2,0), e\rangle=-1, \quad \text{or} \quad
\langle (1,0,0), e\rangle=-1, \quad \langle (1,2,0), e\rangle=0.
$$
In both cases we have $\langle (0,1,0), e\rangle=\pm 1/2$, a contradiction.
Other possibilities may be considered in the same way.
\end{example}
\end{document} |
\begin{document}
\title[Irreducible representations of the Braid Group]{IRREDUCIBLE REPRESENTATIONS OF THE BRAID GROUP $B_3$ in dimension 6}
\author{Taher I. Mayassi \and Mohammad N. Abdulrahim }
\address{Taher I. Mayassi\\
Department of Mathematics and Computer Science\\
Beirut Arab University\\
P.O. Box 11-5020, Beirut, Lebanon}
\email{tim187@student.bau.edu.lb}
\address{Mohammad N. Abdulrahim\\
Department of Mathematics and Computer Science\\
Beirut Arab University\\
P.O. Box 11-5020, Beirut, Lebanon}
\email{mna@bau.edu.lb}
\begin{abstract}
We use $q$-Pascal's triangle to define a family of representations of dimension 6 of the braid group $B_3$ on three strings. Then we give a necessary and sufficient condition for these representations to be irreducible.
\end{abstract}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{\textit{Key words and phrases.} Braid group, irreducibility}
\footnote{\textit{Mathematics Subject Classification.} Primary: 20F36.}
\vskip 0.1in
\section{Introduction}
Braid groups play an important role in many branches of mathematics, such as knot theory and cryptography. In this work, we study the irreducibility of representations of dimension 6 of the braid group $B_3$.
In \cite{Al}, a family of representations of $B_3$ of dimension $n+1$ is constructed using $q$-deformed Pascal's triangle.
This family of representations of $B_3$ is a generalization of the representations given by Humphries \cite{Hu} as well as the representations given by I. Tuba and H. Wenzl \cite{TuW}.
For more details, see \cite[Theorem~3]{Al} and \cite{AR}.
Kosyak mentioned in \cite{Ko} that the irreducibility of the representations constructed by $q$-Pascal's
triangle is still an open problem for dimensions $\geq6$, although some sufficient conditions are given in \cite{Al}.
In our work, we consider these representations and we determine a necessary and sufficient condition for the irreducibility in the case where the dimension is precisely 6.\\
In Section 2, we introduce the notation needed to define a family of representations of $B_3$ via $q$-Pascal's triangle (see Theorem 2.1).
In Section 3, we specialize the representations of Theorem 2.1 to $n=5$, that is, to dimension 6, by fixing specific values of some of the parameters.
We obtain a subfamily of representations of $B_3$.
Proposition 3.1 shows that these representations have no invariant subspaces of dimension 1.
Propositions 3.2, 3.3 and 3.4 state necessary and sufficient conditions for the non-existence of invariant subspaces of dimensions 2, 3 and 4, respectively.
Proposition 3.5 gives a sufficient condition for these representations to have no invariant subspaces of dimension 5.
Our main result is Theorem 3.6, which determines a necessary and sufficient condition for the irreducibility of this family of representations of $B_3$.
In Section 4, we consider the cases where the representations are reducible.
We then restrict one of these reducible representations to a sub-representation of dimension 4 and prove that this sub-representation is irreducible (Theorem 4.1).
\section{Notations, Definitions and Basic Theorems}
\begin{defn}\cite{Bi}
The braid group on $n$ strings, $B_n$, is the abstract group with $n-1$ generators $\sigma_1,\sigma_2,\cdots,\sigma_{n-1}$ satisfying the following relations
$$\sigma_i\sigma_j=\sigma_j\sigma_i\text{ for }|i-j|>1\text{ and }\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_{i}\sigma_{i+1}\\
\text{ for }i=1,\cdots,n-2.$$
\end{defn}
\noindent In order to use $q$-Pascal's triangle, we need the following notations.\\
\noindent
\textbf{Notations.} \cite{Al}
For every $(n\times n)$-matrix $M=(m_{ij})$, we set the matrices $M^\sharp=(m_{ij}^\sharp)$ and $M^s=(m_{ij}^s)$ where $m_{ij}^\sharp=m_{n-i,n-j}$ and
$m_{ij}^s=m_{n-j,n-i}$.\\\\
For $q\in\mathbb{C}\setminus\{0\}$, $n\in\mathbb{N}$, and for all integers $j$ and $r$ such that $j>0$ and $r\geqslant0$ we define the following terms.
$$\begin{array}{l}
(j)_q=1+q+\cdots+q^{j-1},\\\\
(j)!_q=(1)_q(2)_q\cdots(j)_q\text{ and }(0)!_q=1,\\\\
\displaystyle{n \choose r}_q=\dfrac{(n)!_q}{(r)!_q(n-r)!_q},\text{ for all integers }r\text{ and }n\text{ such that } 0\leqslant r\leqslant n,\\\\
q_{r}=q^{\dfrac{(r-1)r}{2}}.
\end{array}
$$
\noindent $(j)_q$ and ${n \choose r}_q$ are called $q$-natural numbers and $q$-binomial coefficients respectively.\\
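As a quick illustration of these conventions (a sketch only, not part of the paper), the following Python/SymPy snippet computes $(j)_q$, $(j)!_q$ and ${n\choose r}_q$ symbolically; for example, it confirms that ${4\choose 2}_q=(1+q^2)(3)_q$, an expression that reappears below among the entries of the matrix $A_5(q)$.
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')

def q_int(j):                  # (j)_q = 1 + q + ... + q^(j-1)
    return sum(q**i for i in range(j))

def q_factorial(j):            # (j)!_q, with (0)!_q = 1
    f = sp.Integer(1)
    for i in range(1, j + 1):
        f *= q_int(i)
    return f

def q_binom(n, r):             # the q-binomial coefficient (n choose r)_q
    return sp.factor(sp.cancel(q_factorial(n) / (q_factorial(r) * q_factorial(n - r))))

print(q_binom(4, 2))                                       # (q^2 + 1)*(q^2 + q + 1)
print(sp.simplify(q_binom(4, 2) - (1 + q**2) * q_int(3)))  # 0
\end{verbatim}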
\begin{defn}(\cite{Al},\cite{Ko})
Let $n$ be a non-negative integer. For all non-zero complex numbers $q$, $\lambda_0,\;\lambda_1,\cdots,\; \lambda_n$, consider the matrices
$$D_n(q)=\operatorname{diag}\diagfences{q_{r}}_{r=0}^n,\;\; \Lambda_n=\operatorname{diag}\diagfences{\lambda_0,\lambda_1,\cdots,\lambda_n}\text{ and }A_n(q)=\left(a_{km}\right)_{0\leqslant k,m\leqslant n},$$
where $a_{km}={n-k\choose n-m}_q=\frac{(n-k)!_q}{(n-m)!_q(m-k)!_q},$ for $k\leqslant m$ and $a_{km}=0$ for $k>m$.\\\\
We define the following family of $(n+1)\times(n+1)$-matrices
$$\sigma_1^{\Lambda_n}(q,n)=A_n(q)D_n^\sharp(q)\Lambda_n \text{ and } \sigma_2^{\Lambda_n}(q,n)=\Lambda_n^\sharp D_n(q)\left(\left(A_n\left(q^{-1}\right)\right)^{-1}\right)^\sharp.$$
\end{defn}
Using the definitions and notations above we state the following theorem.
\begin{thm}\cite{Al}
The mapping $B_3\to GL(n+1,\mathbb{C})$ defined by
$$\sigma_1\mapsto \sigma_1^{\Lambda_n}(q,n) \text{ and } \sigma_2\mapsto \sigma_2^{\Lambda_n}(q,n)$$
is a representation of dimension $n+1$ of the braid group $B_3$ provided that $\lambda_i\lambda_{n-i}=c$ for $0\leqslant i\leqslant n$, where $c$ is a constant non-zero complex number.
\end{thm}
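As an illustration of this theorem (a sketch only, not part of any proof), the Python/SymPy code below implements the matrices $D_n(q)$, $\Lambda_n$, $A_n(q)$ and the operation $M\mapsto M^\sharp$ exactly as defined above, and verifies the braid relation $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$ for the sample case $n=2$, $q=2$ and $(\lambda_0,\lambda_1,\lambda_2)=(4,2,1)$, which satisfies $\lambda_i\lambda_{n-i}=4$.
\begin{verbatim}
import sympy as sp

def q_int(j, q):                     # (j)_q
    return sum(q**i for i in range(j))

def q_fact(j, q):                    # (j)!_q
    f = sp.Integer(1)
    for i in range(1, j + 1):
        f *= q_int(i, q)
    return f

def q_binom(n, r, q):                # (n choose r)_q
    return sp.cancel(q_fact(n, q) / (q_fact(r, q) * q_fact(n - r, q)))

def sharp(M, n):                     # (M^#)_{ij} = M_{n-i, n-j}
    return sp.Matrix(n + 1, n + 1, lambda i, j: M[n - i, n - j])

def sigma_pair(n, q, lam):
    D = sp.diag(*[q**(r * (r - 1) // 2) for r in range(n + 1)])            # D_n(q)
    L = sp.diag(*lam)                                                      # Lambda_n
    A = sp.Matrix(n + 1, n + 1,
                  lambda k, m: q_binom(n - k, n - m, q) if k <= m else 0)      # A_n(q)
    Aqi = sp.Matrix(n + 1, n + 1,
                  lambda k, m: q_binom(n - k, n - m, 1 / q) if k <= m else 0)  # A_n(1/q)
    s1 = A * sharp(D, n) * L                     # sigma_1^{Lambda_n}(q, n)
    s2 = sharp(L, n) * D * sharp(Aqi.inv(), n)   # sigma_2^{Lambda_n}(q, n)
    return s1, s2

s1, s2 = sigma_pair(2, sp.Integer(2), [4, 2, 1])   # lambda_i * lambda_{n-i} = 4
print(s1 * s2 * s1 == s2 * s1 * s2)                # expected: True
\end{verbatim}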
\begin{defn}
A representation is called \textit{subspace irreducible}, or simply \textit{irreducible}, if there is no non-trivial subspace that is invariant under all operators of the representation.
A representation is called \textit{operator irreducible} if the only bounded operators commuting with all operators of the representation are the scalar ones.
\end{defn}
For the next theorem, we need to introduce the following operators. For $n,r\in\mathbb{N}$ such that $0\leqslant r\leqslant n$, and for $\lambda=(\lambda_0,\dots,\lambda_n)\in\mathbb{C}^{n+1}$ and $q\in\mathbb{C}$, we define
$$F_{r,n}(q,\lambda)=\exp_{(q)}\left(\sum_{k=0}^{n-1}(k+1)_{q}E_{k\;k+1}\right)-q_{n-r}\lambda_r\left(D_n(q)\Lambda_n^\sharp\right)^{-1},$$
where $E_{km}$ is the matrix with 1 in the $(k,m)$ entry and zeros elsewhere ($k,m\in\mathbb{Z}$), and $\exp_{(q)}(X)=\sum_{m=0}^{\infty}(X^m/(m)!_q)$.
For an $(n+1)\times(n+1)$-matrix $C$ over $\mathbb{C}$ and for $0\leqslant i_0<i_1<\cdots<i_r\leqslant n$, $0\leqslant j_0<j_1<\cdots<j_r\leqslant n$, we denote the minor of $C$ with rows $i_0,i_1,\dots,i_r$ and columns $j_0,j_1,\dots,j_r$ by
$$M_{j_0j_1\dots j_r}^{i_0i_1\dots i_r}(C).$$
\begin{thm}\cite{Al}
The representation of the group $B_3$ defined in Theorem 2.1 has the following properties:
\begin{enumerate}
\item for $q=1$, $\Lambda_n=I_{n+1}$ (the identity matrix), it is subspace irreducible in arbitrary dimension $n\in\mathbb{N}$;
\item for $q=1$, $\Lambda_n=\operatorname{diag}\diagfences{\lambda_0,\lambda_1,\cdots,\lambda_n}\neq I_{n+1}$, it is operator irreducible if and only if for
any $0\leqslant r \leqslant [\frac{n}{2}]$, there exists
$0\leqslant i_0<i_1<\dots<i_r\leqslant n$ such that
$$M_{r+1\:r+2\dots n}^{i_0i_1\dots i_{n-r-1}}\left(F_{r,n}^{s}(q,\lambda)\right)\neq0;$$
\item for $q\neq1$, $\Lambda_n=I_{n+1}$, it is subspace irreducible if and only if $(n)_q\neq0$.\\
The representation has $[\frac{n+1}{2}]+1$ free parameters.
\end{enumerate}
\end{thm}
\begin{thm}\cite{Al}
All representations of the braid group $B_3$ of dimension $\leqslant5$ are the representations defined in Theorem 2.1.
\end{thm}
Note that the irreducibility of the representations defined in Theorem 2.1 that are of dimension $\leqslant5$ is discussed in \cite{Tu} and \cite{TuW}.
Also, Theorem 2.1 gives a family of the representations of $B_3$ that are of dimension $\geqslant6$ ($n\geqslant5$).
The question of the irreducibility of these representations is still under investigation.
In this work, we study the irreducibility of some of these representations of dimension 6.\\
Suppose, in what follows, that $n=5$. Then, the matrix $A_5(q)$ is given by
$$A_5(q)=\begin{pmatrix}
1&(5)_q&(1+q^2)(5)_q&(1+q^2)(5)_q&(5)_q&1\\
0&1&(1+q)(1+q^2)&(1+q^2)(3)_q&(1+q)(1+q^2)&1\\
0&0&1&(3)_q&(3)_q&1\\
0&0&0&1&1+q&1\\
0&0&0&0&1&1\\
0&0&0&0&0&1\\
\end{pmatrix}$$
and the representation of $B_3$ defined in Theorem 2.1 is of dimension 6. Moreover, the matrices representing the generators $\sigma_1$ and $\sigma_2$ of $B_3$ are given by
$$\sigma_1\mapsto \begin{pmatrix}
\lambda_0q^{10}&\lambda_1q^6(5)_q&\lambda_2q^3(1+q^2)(5)_q&\lambda_3q(1+q^2)(5)_q&\lambda_4(5)_q&\lambda_5\\
0&\lambda_1q^6&\lambda_2q^3(1+q)(1+q^2)&\lambda_3q(1+q^2)(3)_q&\lambda_4(1+q)(1+q^2)&\lambda_5\\
0&0&\lambda_2q^3&\lambda_3q(3)_q&\lambda_4(3)_q&\lambda_5\\
0&0&0&\lambda_3q&\lambda_4(1+q)&\lambda_5\\
0&0&0&0&\lambda_4&\lambda_5\\
0&0&0&0&0&\lambda_5\\
\end{pmatrix}$$
and
$$\sigma_2\mapsto \begin{pmatrix}
\lambda_5&0&0&0&0&0\\
-\lambda_4&\lambda_4&0&0&0&0\\
\lambda_3&-\lambda_3(1+q)&\lambda_3q&0&0&0\\
-\lambda_2&\lambda_2(3)_q&-\lambda_2q(3)_q&\lambda_2q^3&0&0\\
\lambda_1&-\lambda_1(4)_q&\lambda_1q(1+q^2)(3)_q&-\lambda_1q^3(4)_q&\lambda_1q^6&0\\
-\lambda_0&\lambda_0(5)_q&-\lambda_0q(1+q^2)(5)_q&\lambda_0q^3(1+q^2)(5)_q&-\lambda_0q^6(5)_q&\lambda_0q^{10}\\
\end{pmatrix}.$$
\section{Irreducibility of Representations of $B_3$ of dimension 6}
In this section, let $q$ be a primitive third root of unity ($q^3=1$ and $q\neq1$).
By taking $c=1$, $\lambda_0=1$ and $\lambda_2=q^2$, we get $\lambda_3=\frac{1}{q^2}=q$, $\lambda_4=\lambda_1^{-1}$ and $\lambda_5=\frac{c}{\lambda_0}=1$. Under these conditions and for $n=5$, we substitute these values in the matrices above to get the following definition.
\begin{defn}
Let $\rho:B_3\to GL(6,\mathbb{C})$ be the family of representations of $B_3$ of dimension 6 that is defined by
$$\sigma_1\mapsto
\begin{pmatrix}
q&-q^2\lambda_1&q^2&q^2&-q^2\lambda_1^{-1}&1\\
0&\lambda_1 &q^2&0&\lambda_1^{-1}&1\\
0&0&q^2&0&0&1\\
0&0&0&q^2&-q^2\lambda_1^{-1}&1\\
0&0&0&0&\lambda_1^{-1}&1\\
0&0&0&0&0&1\\
\end{pmatrix}$$
and
$$\sigma_2\mapsto
\begin{pmatrix}
1&0 &0&0&0&0\\
-\lambda_1^{-1}&\lambda_1^{-1} &0&0&0&0\\
q&1&q^2&0&0&0\\
-q^2&0&0&q^2&0&0\\
\lambda_1&-\lambda_1&0&-\lambda_1&\lambda_1&0\\
-1&-q^2&-q&1&q^2&q\\
\end{pmatrix}.$$
\end{defn}
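For readers who wish to experiment, the short NumPy sketch below (an illustration only, not part of any proof) enters the two matrices defining $\rho$ for the sample values $q=e^{2\pi i/3}$ and $\lambda_1=5$, and checks numerically the braid relation as well as the eigenvalues listed in the next paragraph.
\begin{verbatim}
import numpy as np

q  = np.exp(2j * np.pi / 3)   # a primitive third root of unity
l1 = 5.0                      # sample lambda_1, outside {-1, 1, q, q^2}

S1 = np.array([
    [q, -q**2*l1, q**2, q**2, -q**2/l1, 1],
    [0,  l1,      q**2, 0,     1/l1,    1],
    [0,  0,       q**2, 0,     0,       1],
    [0,  0,       0,    q**2, -q**2/l1, 1],
    [0,  0,       0,    0,     1/l1,    1],
    [0,  0,       0,    0,     0,       1]], dtype=complex)

S2 = np.array([
    [ 1,     0,     0,    0,    0,    0],
    [-1/l1,  1/l1,  0,    0,    0,    0],
    [ q,     1,     q**2, 0,    0,    0],
    [-q**2,  0,     0,    q**2, 0,    0],
    [ l1,   -l1,    0,   -l1,   l1,   0],
    [-1,    -q**2, -q,    1,    q**2, q]], dtype=complex)

print(np.allclose(S1 @ S2 @ S1, S2 @ S1 @ S2))   # expected: True
print(np.sort_complex(np.linalg.eigvals(S1)))    # q, lambda_1, q^2 (twice), 1/lambda_1, 1
\end{verbatim}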
Note that $\rho(\sigma_1)$ and $\rho(\sigma_2)$ have the same eigenvalues, which are $q,\lambda_1,q^2$ (of multiplicity 2), $\lambda_1^{-1}$ and $1$.
The corresponding eigenvectors of $\rho(\sigma_1)$ are
$$u_1=\begin{pmatrix}
1\\
0\\
0\\
0\\
0\\
0\\
\end{pmatrix},\;
u_2=\begin{pmatrix}
-\lambda_1(1+q)\\
-\lambda_1+q\\
0\\
0\\
0\\
0\\
\end{pmatrix},\;
u_3=\begin{pmatrix}
\lambda_1+q\\
q-1\\
(\lambda_1+q^2)(-q+q^2)\\
0\\
0\\
0\\
\end{pmatrix},\;
u_4=\begin{pmatrix}
-1\\
0\\
0\\
q^2-1\\
0\\
0\\
\end{pmatrix},$$
$$u_5=\begin{pmatrix}
q-\lambda_1^3\\
(1-\lambda_1q)(\lambda_1q-q^2)\\
0\\
-q(1-\lambda_1^2)(-1+\lambda_1q)\\
(-1+\lambda_1^2)(-1+\lambda_1q)(\lambda_1q-q^2)\\
0\\
\end{pmatrix} \text{ and }
u_6=\begin{pmatrix}
\lambda_1q^2-2\lambda_1+3+3\lambda_1^2+\lambda_1q\\
3q(\lambda_1q-1)\\
-3q^2(1-\lambda_1)^2\\
-3(-1+\lambda_1)(1+\lambda_1q^2)\\
3q(1-q)\lambda_1(-1+\lambda_1)\\
3q(1-q)(-1+\lambda_1)^2\\
\end{pmatrix}.$$
Assume that $\lambda_1\not\in\{-1,1,q,q^2\}$. Then the vectors $u_i$ $(i=1,2,\dots,6)$ are linearly independent and the transition matrix $P=(u_1\;u_2\;u_3\;u_4\;u_5\;u_6)$ is invertible. Conjugating the representation by $P$, we get an equivalent representation given by
$$\rho(\sigma_1)\mapsto X=P^{-1}\rho(\sigma_1)P=\begin{pmatrix}
q&0&0&0&0&0\\
0&\lambda_1&0&0&0&0\\
0&0&q^2&0&0&0\\
0&0&0&q^2&0&0\\
0&0&0&0&\lambda_1^{-1}&0\\
0&0&0&0&0&1\\
\end{pmatrix}$$
and $\rho(\sigma_2)\mapsto Y= P^{-1}\rho(\sigma_2)P=\left(K_1\;K_2\;K_3\;K_4\;K_5\;K_6\right)$, where
$$K_1=\begin{pmatrix}
\frac{\lambda_1q}{(q^2-1)(-\lambda_1+q)(-1+\lambda_1q)}\\
\frac{\lambda_1^3-q^2}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
-\frac{1}{3(-\lambda_1+q^2)}\\
-\frac{q^2(\lambda_1+q^2)}{3(-\lambda_1+q)}\\
\frac{\lambda_1^2}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)q(\lambda_1q-1)}\\
\frac{1}{3(\lambda_1-1)^2(q^2-q)}
\end{pmatrix},\;\;
K_2=\begin{pmatrix}
\frac{q[2+q-(1+2q)\lambda_1^2]}{3(\lambda_1-q)(-1+\lambda_1q)}\\
\frac{1-\lambda_1^3q^2}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
-\frac{1}{3(-\lambda_1+q^2)}\\
\frac{1+\lambda_1}{3(\lambda_1-q)}\\
-\frac{\lambda_1(\lambda_1^2+q)}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1q-1)}\\
\frac{1}{3(\lambda_1-1)^2(q^2-q)}
\end{pmatrix},
$$
$$K_3=\begin{pmatrix}
\frac{(1+\lambda_1)(-3q^2+(1-q^2)\lambda_1+3\lambda_1^2)}{-3(\lambda_1-q)(-1+\lambda_1q)}\\
\frac{q^2(-1-\lambda_1+(q-q^2)\lambda_1^2+\lambda_1^3+\lambda_1^4)}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
-\frac{\lambda_1+q}{3q(-\lambda_1+q^2)}\\
-\frac{q(2+\lambda_1+2\lambda_1^2)}{3(\lambda_1-q)}\\
\frac{q\lambda_1^2(1+q\lambda_1)}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1q-1)}\\
\frac{q(q+\lambda_1)}{3(\lambda_1-1)^2(q-1)}
\end{pmatrix},\;\;
K_4=\begin{pmatrix}
\frac{-3+2(q-1)\lambda_1+3q\lambda_1^2}{-3(\lambda_1-q)(-1+\lambda_1q)}\\
\frac{q^2+(q-q^2)\lambda_1+(q-q^2)\lambda_1^2-q\lambda_1^3}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
\frac{-2}{3q(-\lambda_1+q^2)}\\
\frac{q(\lambda_1+q^2)}{3(-\lambda_1+q)}\\
\frac{-q\lambda_1^2}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1q-1)}\\
-\frac{q}{3(\lambda_1-1)^2(q-1)}
\end{pmatrix},
$$
$$K_5=\begin{pmatrix}
\frac{\lambda_1(-2q-1+(2+q)\lambda_1^2+(2+q^2)\lambda_1^3+(2q+1)\lambda_1^5)}{3(\lambda_1-q)(-1+\lambda_1q)}\\
\frac{q^2+\lambda_1^2+q^2\lambda_1^3+\lambda_1^5+q^2\lambda_1^6+\lambda_1^8}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
\frac{\lambda_1(1+q+(2+q)\lambda_1-(1+2q)\lambda_1^2+q^2\lambda_1^3)}{3q^2(-\lambda_1+q^2)}\\
-\frac{q^2\lambda_1(1+q^2-\lambda_1-(2+q)\lambda_1^2+q^2\lambda_1^3+q\lambda_1^4)}{3(\lambda_1-q)}\\
\frac{\lambda_1^3(-1+q\lambda_1^3)}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1q-1)}\\
-\frac{\lambda_1(-q+\lambda_1^3)}{3(\lambda_1-1)^2(q-1)}
\end{pmatrix},\;\;
K_6=\begin{pmatrix}
\frac{(q^2-1)(1-\lambda_1^2+\lambda_1^4)}{(\lambda_1-q)(-1+\lambda_1q)}\\
\frac{3q+3q^2\lambda_1^2-3q^2\lambda_1^3-3\lambda_1^5}{\lambda_1(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1-q^2)}\\
\frac{2+q+2(q^2-q)\lambda_1+(-1+q)\lambda_1^2}{(1-q)(-\lambda_1+q^2)}\\
\frac{q-q^2-3\lambda_1+3\lambda_1^2+(1+2q)\lambda_1^3}{(1-q)(\lambda_1-q)}\\
\frac{-3q^2\lambda_1^2-3q\lambda_1^4}{(\lambda_1-1)^2(\lambda_1+1)(\lambda_1-q)(\lambda_1q-1)}\\
\frac{-2-q+(1-q^2)\lambda_1+(-1+q^2)\lambda_1^2}{3(\lambda_1-1)^2}
\end{pmatrix}.
$$\\
\begin{prop}
The representation $\rho$ has no invariant subspaces of dimension 1.
\end{prop}
\begin{proof}
The possible subspaces of dimension 1 that are invariant under $X$ are $\langle e_i\rangle$ for $i=1,2,3,4,5,6$, and $\langle\alpha e_3+e_4\rangle$ for $\alpha\in\mathbb{C}$. Here $e_i$ are the standard unit vectors in $\mathbb{C}^6$, and are considered as column vectors.\\
It is easy to see that
$Y(e_i)\not\in\langle e_i\rangle$ for $i=1,2,3,4$.
Assume that $Y(e_5)=K_5\in\langle e_5\rangle$. Then all components, except the fifth, of $K_5$ are zeros.
In particular the third and sixth components of $K_5$ are zeros. So, $\lambda_1^3=q$ and $1+q+(2+q)\lambda_1-(1+2q)\lambda_1^2+q^2\lambda_1^3=0$. Then, by direct computation, we get $\lambda_1=q$ or $q^2$ and $\lambda_1^3=q$ which is impossible as $q\neq1$. So, $Y(e_5)\not\in\langle e_5\rangle$.
Therefore, $\langle e_5\rangle$ is not invariant under $Y$.\\
Suppose that $Y(e_6)\in\langle e_6\rangle$. Then the $5^{\text{th}}$ and $2^{\text{nd}}$ components of $K_6=Y(e_6)$ are zeros.
Then $\lambda_1^2=-q$ and $q+q^2\lambda_1^2-q^2\lambda_1^3-\lambda_1^5=0$.
By direct computation, we get $\lambda_1=-q$.
Therefore, $q=-1$, contradiction. So, $Y(e_6)\not\in\langle e_6\rangle$ and $\langle e_6\rangle$ is not invariant under $Y$.\\
It remains to show that any subspace of the form $\langle\alpha e_3+e_4\rangle$, where $\alpha $ is a non-zero complex number, is not invariant under $Y$.
Note that $Y(\alpha e_3+e_4)=\alpha K_3+K_4$. If $\alpha K_3+K_4\in\langle\alpha e_3+e_4\rangle$ then
the fifth and sixth components of $\alpha K_3+K_4$ are zeros. So, $\alpha q\lambda_1^2(1+q\lambda_1)+q\lambda_1^2=0$ and $\alpha q(q+1)-q=0$. This implies that $\alpha =-q$ and $\lambda_1=q-q^2$. Substitute the obtained values of $\alpha $ and $\lambda_1$ in the numerator of the $2^{\text{nd}}$ component of $ \alpha K_3+K_4$ to get
$$
-(q-q^2)^3-(q-q^2)^4+q^2+(q-q^2)^2-q(q-q^2)^3=2(q-8)\neq 0
\text{ when }q^3=1, q\neq1.$$
Therefore, the second component of $ \alpha K_3+K_4$ is not zero, contradiction.
\end{proof}
\begin{prop}
The representation $\rho$ has no invariant subspaces of dimension 2 if and only if $\lambda_1^3\neq q$.
\end{prop}
\begin{proof}
The possible subspaces of dimension 2 that are invariant under $X$ are: $S_{ij}=\langle e_i,e_j\rangle$, and $S^{\alpha}_k=\langle e_k,\alpha e_3+e_4\rangle$ for $\alpha\in\mathbb{C}$, $1\leqslant i<j\leqslant6$ and $k=1,2,5,6$.\\
We can easily see that $Y(e_1)=K_1\not\in S_{1i}$ for all $i=2,3,4,5,6$. So, the subspaces $S_{1i}$ $(i=2,3,4,5,6)$ are not invariant under $Y$.\\
Also $Y(e_2)=K_2\not\in S_{2i}$ for $i=3,4,5,6$ since the third and sixth components of $K_2$ are not zeros. Thus, the subspaces $S_{2i}$ $(i=3,4,5,6)$ are not invariant under $Y$.\\
The fourth, fifth and sixth components of $Y(e_3)=K_3$ cannot be zeros at the same time for all values of $\lambda_1$. So, $Y(e_3)=K_3\not\in S_{3i}$ for $i=4,5,6$. So, $S_{3i}$ is not invariant under $Y$ for $i=4,5,6$.\\
$Y(e_4)\not\in S_{4i}$ for $i=5,6$ because the third component of $Y(e_4)=K_4$ is not zero. So, $S_{4i}$ is not invariant under $Y$ for $i=5,6$.\\
If the third and fourth components of $Y(e_5)=K_5$ are zeros and since $\lambda_1\neq0$ we then have
$$\left\{\begin{array}{c}
1+q+(2+q)\lambda_1-(1+2q)\lambda_1^2+q^2\lambda_1^3=0\\
1+q^2-\lambda_1-(2+q)\lambda_1^2 +q^2\lambda_1^3 +q \lambda_1^4 =0\end{array}\right.$$
By using Mathematica, we show that there is no complex solution in terms of $q$ satisfying this system of equations.
So, $Y(e_5)\not\in S_{56}$. Therefore, $S_{56}$ is not invariant under $Y$.\\
It remains to discuss the subspaces $S^\alpha_{k}$ for $k=1,2,5,6$ and $\alpha\in\mathbb{C}$. Since the sixth components of $Y(e_1)$ and $Y(e_2)$ are not zeros, it follows that $Y(e_1)\not\in\langle\alpha e_3+e_4,e_1\rangle$ and $Y(e_2)\not\in\langle\alpha e_3+e_4,e_2\rangle$ for all $\alpha\in\mathbb{C}$. Therefore, $S^\alpha_{1}$ and $S^\alpha_{2}$ are not invariant under $Y$.
Now, if $Y(e_6)=K_6\in\langle\alpha e_3+e_4,e_6\rangle$ then the second and fifth components of $K_6$ are zeros.
This yields the following system of equations:
$$
\left\{\begin{array}{c} q+q^2\lambda_1^2-q^2\lambda_1^3-\lambda_1^5=0\\
-3q\lambda_1^2(q+\lambda_1^2)=0
\end{array}
\right..$$
Using the fact that $q$ is a primitive third root of unity, we get $\lambda_1=\lambda_1^2=-q$, and hence $\lambda_1=0$ or $\lambda_1=1$, a contradiction.
Hence, $Y(e_6)\not\in S^\alpha_{6}$ for all $\alpha\in\mathbb{C}$. Therefore, $S^\alpha_{6}$ is not invariant under $Y$.\\
Finally, if a subspace $S^\alpha_{5}$ is invariant under $Y$ then, $Y(e_5)=K_5\in\langle S^\alpha_{5}\rangle$. Then, $\lambda_1^3=q$. Conversely, if $\lambda_1^3=q$
then, by direct computation and using Mathematica, we show that
$Y(e_5)=K_5\in S^\alpha_{5}$ and
$Y(\alpha e_3+e_4)\in S^\alpha_{5}$ for $\alpha=(-1-\lambda_1)(1+q\lambda_1)$.
Therefore, $\langle\alpha e_3+e_4,e_5\rangle$ is invariant under $Y$ if and only if $\lambda_1^3=q$. The invariant subspaces corresponding to these values of $\lambda_1$ are of the form
$$\langle\alpha e_3+e_4,e_5\rangle \text{ where, }\alpha=(-1-\lambda_1)(1+q\lambda_1).$$
\end{proof}
\begin{prop}
The representation $\rho$ has no invariant subspaces of dimension 3 if and only if $\lambda_1^2\neq-q$, $-q^2$.
\end{prop}
\begin{proof}
The subspaces of dimension 3 that are invariant under $X$ are $\langle e_i,e_j,e_k\rangle$ and $\langle e_s,\alpha e_3+e_4,e_t\rangle$ for $\alpha\in\mathbb{C}$, $1\leqslant i<j<k\leqslant6$ and $s,t\in\{1,2,5,6\}$ with $s<t$.\\
Since the third, fifth and sixth components of $Y(e_1)$ are not zeros, it follows that all the subspaces of the form $\langle e_1,e_j,e_k\rangle$ together with the subspaces of the form $\langle e_1,\alpha e_3+e_4,e_t\rangle$ are not invariant under $Y$ for $1<j<k\leqslant 6$ and $t=2,5,6$.\\
The third and sixth components of $K_2=Y(e_2)$ are not zeros. So, $Y(e_2)\not\in\langle e_2,e_j,e_k\rangle$ and $Y(e_2)\not\in\langle e_2,\alpha e_3+e_4,e_5\rangle$ for all $2<j<k\leqslant 6$ such that $\{j,k\}\neq\{3,6\}$ and for all $\alpha\in\mathbb{C}$. Then the subspaces of the form $\langle e_2,e_j,e_k\rangle$ and $\langle e_2,\alpha e_3+e_4,e_5\rangle$ are not invariant under $Y$ for all $2<j<k\leqslant 6$ such that $\{j,k\}\neq\{3,6\}$.\\
Assume that the subspace $S=\langle e_2,e_3,e_6\rangle$ is invariant under $Y$ then, $Y(e_3)\in S$. So, $\lambda_1=-q$ (as the sixth component of $Y(e_3)$ is zero). Substitute the value of $\lambda_1$ in the first component of $Y(e_3)$ to get $\frac{(q-1)(-q^2+1)}{6q}\neq0$, contradiction. So, $S$ is not invariant under $Y$.\\
Consider the subspace $S^\alpha=\langle e_2,\alpha e_3+e_4,e_6\rangle$, where $\alpha\in\mathbb{C}$.
Suppose $S^\alpha$ is invariant under $Y$ then, $Y(e_2)\in S^\alpha$. So, the fifth component of $K_2$ is zero. Thus, $\lambda_1^2=-q$.
Conversely, assume that $\lambda_1^2=-q$. Then, by direct computation and using Mathematica, we show that $Y(e_r)\in S^\alpha$ for $r=2,6$ and $Y(\alpha e_3+e_4)\in S^\alpha$
and in this case $\alpha=\frac{1}{2}\pm\frac{1}{2}i$. Therefore, the subspace $\langle e_2,\alpha e_3+e_4,e_6\rangle$ is invariant under $Y$ if and only if $\lambda_1^2=-q$.\\
Since the sixth and fifth components of $Y(e_4)$ are not zeros, it follows that $Y(e_4)\not\in\langle e_3,e_4,e_5\rangle$ and $Y(e_4)\not\in\langle e_3,e_4,e_6\rangle$. Thus, the subspaces $\langle e_3,e_4,e_5\rangle$ and $\langle e_3,e_4,e_6\rangle$ are not invariant under $Y$.\\
Assume that $Y(e_3)\in\langle e_3,e_5,e_6\rangle$. Then the fourth component of $K_3$ is zero. So, $\lambda_1=\frac{-1\pm i\sqrt{15}}{4}$. But for this value of $\lambda_1$ and by direct calculation, the first component of $K_3$ is $-\frac{9}{8}\pm\frac{21\sqrt{5}}{16}+i\left(\pm\frac{9\sqrt{3}}{16}\pm\frac{3\sqrt{15}}{8}\right)\neq0$. This is a contradiction. Thus the subspace $\langle e_3,e_5,e_6\rangle$ is not invariant under $Y$.\\
Since the third component of $K_4$ is not zero it follows that $Y(e_4)\not\in\langle e_4,e_5,e_6\rangle$. So, the subspace $\langle e_4,e_5,e_6\rangle$ is not invariant under $Y$.\\
Consider the subspace $V=\langle e_5,\alpha e_3+e_4,e_6\rangle$. By using Mathematica, we show that $Y(e_6)\in V$ if and only if $\lambda_1^2=-q^2$.
Moreover, $$\alpha=\left\{\begin{array}{c}\frac{2}{11}(4+i-3\lambda_1-\lambda_1^2)\;\;\text{ for } \lambda_1=iq \\
\frac{2}{11}(4-i-3\lambda_1-\lambda_1^2)\;\;\text{ for } \lambda_1=-iq
\end{array}
\right.$$
Also, we show that if $\lambda_1^2=-q^2$ then $Y(e_5)\in V$ and $Y(\alpha e_3+e_4)\in V$ for
$$\alpha=\left\{\begin{array}{c}\frac{2}{11}(4+i-3\lambda_1-\lambda_1^2)\;\;\text{ for } \lambda_1=iq \\
\frac{2}{11}(4-i-3\lambda_1-\lambda_1^2)\;\;\text{ for } \lambda_1=-iq
\end{array}
\right..$$
Therefore, $\langle e_5,\alpha e_3+e_4,e_6\rangle$ is invariant under $Y$ if and only if $\lambda_1^2=-q^2$.
\end{proof}
\begin{prop}
The representation $\rho$ has no invariant subspaces of dimension 4 if and only if $\lambda_1^3\neq q^2$.
\end{prop}
\begin{proof}
The subspaces of dimension 4 that are invariant under $X$ are $\langle e_i,e_j,e_k,e_r\rangle$ and $\langle \alpha e_3+e_4,e_s,e_t,e_h\rangle$ for $\alpha\in\mathbb{C}$, $1\leqslant i<j<k<r\leqslant6$ and $s,t,h\in\{1,2,5,6\}$ with $s<t<h$.\\
Since the fifth and sixth components of $Y(e_1)$ are not zeros, it follows that the subspaces of the form $\langle e_1,e_2,e_3,e_i\rangle$, $\langle e_1,e_2,e_4,e_j\rangle$, $\langle e_1,e_3,e_4,e_j\rangle$, $\langle \alpha e_3+e_4, e_1,e_2,e_5\rangle$ and $\langle\alpha e_3+e_4, e_1,e_2,e_6\rangle$ are not invariant under $Y$ for $i=4,5,6$, $j=5,6$ and all $\alpha\in\mathbb{C}$.\\
The subspace $\langle e_1,e_4,e_5,e_6\rangle$ is not invariant under $Y$ because the third component of $Y(e_4)$ is not zero.\\
Since the third component of $Y(e_2)$ is not zero, it follows that the subspaces $\langle e_1,e_2,e_5,e_6\rangle$ and $\langle e_2,e_4,e_5,e_6\rangle$ are not invariant under $Y$.\\
Assume that the subspace $\langle e_1,e_3,e_5,e_6\rangle$ is invariant under $Y$ then, $Y(e_1)\in\langle e_1,e_3,e_5,e_6\rangle$. So, the second and fourth components of $Y(e_1)$ are zeros. Hence, $\lambda_1^3=q^2$ and $\lambda_1=-q^2$. Thus, $-q^6=q^2$. But, this contradicts the fact that $q$ is a third root of unity. Thus, $\langle e_1,e_3,e_5,e_6\rangle$ is not invariant under $Y$.\\
Note that the sixth and fifth components of $Y(e_4)$ are not zeros. So, the subspaces $\langle e_2,e_3,e_4,e_r\rangle$ are not invariant under $Y$ for $r=5,6$.\\
Since $\lambda_1\neq-1$ it follows that the fourth component of $K_2$ is not zero and $Y(e_2)\not\in\langle e_2,e_3,e_5,e_6\rangle$. Therefore, the subspace $\langle e_2,e_3,e_5,e_6\rangle$ is not invariant under $Y$.\\
Suppose that the subspace $\langle e_3,e_4,e_5,e_6\rangle$ is invariant under $Y$. Then the first and second components of $Y(e_3)$ are zeros. This implies that
$$\left\{\begin{array}{c}2+q-(1+2q)\lambda_1^2=0\\
1-\lambda_1^3q^2=0
\end{array}
\right.,$$
Thus, $\lambda_1^2=-q$ and $\lambda_1^3=q$ but, this contradicts the fact that $q$ is a primitive third root of unity. Therefore, $\langle e_3,e_4,e_5,e_6\rangle$ is not invariant under $Y$.\\
Consider the subspace $S=\langle\alpha e_3+e_4, e_1,e_5,e_6\rangle$, where $\alpha\in\mathbb{C}$. Suppose that $S$ is invariant under $Y$, then $Y(e_1)\in S$. Then, the second component of $K_1$ is zero. So $\lambda_1^3=q^2$. Conversely, if $\lambda_1^3=q^2$, then the second component of $K_1$ is zero and
$$\frac{\text{The third component of }K_1}{\text{The fourth component of }K_1}=\frac{-\lambda_1 + q}{-\lambda_1^2 q^2 + 1}=\frac{(-\lambda_1 + q)\lambda_1}{-\lambda_1^3q^2+\lambda_1}=\frac{(-\lambda_1 + q)\lambda_1}{-q+\lambda_1}=-\lambda_1.$$
Thus, $Y(e_1)\in S$ and $\alpha=-\lambda_1$. Also, by direct computation and using Mathematica, we show that the second component of each of $Y(e_5)$, $Y(e_6)$ and $Y(-\lambda_1 e_3+e_4)$ is zero. We also show that the ratio of the third component to the fourth one of each of these vectors is $-\lambda_1$. This means that $Y(e_5)$, $Y(e_6)$ and $Y(-\lambda_1 e_3+e_4)$ are in $S$.
Therefore, $S$ is invariant under $Y$ if and only if $\lambda_1^3=q^2$ and in this case $\alpha=-\lambda_1$
and $S=\langle-\lambda_1e_3+e_4,e_1,e_5,e_6\rangle$.\\
It remains to prove that the subspace $S'=\langle\alpha e_3+e_4,e_2,e_5,e_6\rangle$ is not invariant under $Y$ for all $\alpha\in\mathbb{C}$.
Suppose that $S'=\langle\alpha e_3+e_4,e_2,e_5,e_6\rangle$ is invariant under $Y$ for some $\alpha\in\mathbb{C}$, then $Y(e_2)\in S'$.
Then the first component of $K_2$ is zero. Thus, $\lambda_1^2=\frac{2+q}{1+2q}=-q$ (as $q$ is a third root of unity).
Substitute the obtained value of $\lambda_1^2$ in the numerator of the first component of $K_5$ to get
$$ \lambda_1(-2q-1+(2+q)(-q)+(2+q^2)(-q)\lambda_1+(2q+1)q^2\lambda_1)=-3q\lambda_1(1+\lambda_1)\neq0.$$
Hence, $Y(e_5)=K_5\not\in S'$, a contradiction.
\end{proof}
\begin{prop}
If $\lambda_1^3\neq q^2$ then the representation $\rho$ has no invariant subspaces of dimension 5.
\end{prop}
\begin{proof}
The possible subspaces of dimension 5 that are invariant under $X$ are: $S_6=\langle e_1,e_2,e_3,e_4,e_5\rangle$, $S_5=\langle e_1,e_2,e_3,e_4,e_6\rangle$, $S_4=\langle e_1,e_2,e_3,e_5,e_6\rangle$,
$S_3=\langle e_1,e_2,e_4,e_5,e_6\rangle$, $S_2=\langle e_1,e_3,e_4,e_5,e_6\rangle$, $S_1=\langle e_2,e_3,e_4,e_5,e_6\rangle$ and $S^{\alpha}=\langle e_1,e_2,\alpha e_3+e_4,e_5,e_6\rangle$ for $\alpha\in\mathbb{C}$.\\
Since the third, fifth and sixth components of $K_1$ are not zeros, it follows that $Y(e_1)\not\in S_i$ for $i=3,5,6$. So, the subspaces $S_i$ are not invariant under $Y$ for $i=3,5,6$.\\
Assume that $S_4$ is invariant under $Y$, then $Y(e_1)\in S_4$ and $Y(e_2)\in S_4$. So the fourth components of $K_1$ and $K_2$ are zeros. So, $\lambda_1=-q^2$ and $\lambda_1=-1$ which is impossible because $q$ is a primitive third root of unity. So, $S_4$ is not invariant under $Y$.\\
Since $\lambda_1^3\neq q^2$, the second component of $K_1$ is not zero. So, $Y(e_1)\not\in S_2$. Thus, $S_2$ is not invariant under $Y$.\\
Assume that $S_1$ is invariant under $Y$. Then $Y(e_2)\in S_1$, so the first component of $K_2$ is zero. Thus, $\lambda_1^2=\frac{2+q}{1+2q}=-q$ (since $q$ is a primitive third root of unity). Substituting this value into the numerator of the first component of $K_5$ gives $\lambda_1(-3q-3q\lambda_1)$, which is not zero since $\lambda_1\neq-1$. So $Y(e_5)\not\in S_1$. Therefore, $S_1$ is not invariant under $Y$. \\
It remains to show that $S^\alpha$ is not invariant under $Y$ for all $\alpha\in\mathbb{C}$.
Assume, for some $\alpha\in\mathbb{C}$, that $S^\alpha$ is invariant then, $Y(e_1)$ and $Y(e_2)$ belong to $S^\alpha$. So,
$$\frac{\text{The third component of }K_1}{\text{The fourth component of }K_1}=\frac{\text{The third component of }K_2}{\text{The fourth component of }K_2}=\alpha.$$
This implies that
$$\frac{-\lambda_1+q}{-\lambda_1^2 q^2 + 1}=\frac{\lambda_1 - q}{(1 + \lambda_1) (\lambda_1 - q^2)}.$$
So, $$(q^2-1)(\lambda_1^2+\lambda_1+1)=0.$$
Hence, $\lambda_1=q\text{ or }q^2$, a primitive third root of unity. This is a contradiction because $\lambda_1\neq q\text{ and }\lambda_1\neq q^2$. Therefore, $S^\alpha$ is not invariant under $Y$ for all $\alpha\in\mathbb{C}$.
\end{proof}
\begin{thm}
For $\lambda_1\in\mathbb{C}\setminus\{-1,1,q,q^2\}$, the representation $\rho$ is irreducible if and only if $\lambda_1^2\neq -q$, $\lambda_1^2\neq -q^2$, $\lambda_1^3\neq q$ and $\lambda_1^3\neq q^2$.
\end{thm}
\begin{proof}
It follows directly from Proposition 3.1, Proposition 3.2, Proposition 3.3, Proposition 3.4 and Proposition 3.5.
\end{proof}
\section{Reducible Representation}
By Theorem 3.6, the representation $\rho$ is reducible if and only if $\lambda_1^2=-q$, $\lambda_1^2=-q^2$, $\lambda_1^3=q$ or $\lambda_1^3=q^2$. For $\lambda_1^3=q^2$, $\rho$ is reducible and the subspace $V$, that is generated by the vectors $-\lambda_1e_3+e_4,\;e_1,\;e_5,\;e_6$, is invariant under $\rho$. Let us write the matrices representing $\rho(\sigma_1)$ and $\rho(\sigma_2)$ relative to the basis $\{-\lambda_1e_3+e_4,e_1,e_5,e_6,e_4,e_2\}$.
Let $A=(-\lambda_1e_3+e_4,e_1,e_5,e_6,e_4,e_2)$ be the transition matrix. Then,
$$A=\begin{pmatrix}
0&1&0&0&0&0\\
0&0&0&0&0&1\\
-\lambda_1&0&0&0&0&0\\
1&0&0&0&1&0\\
0&0&1&0&0&0\\
0&0&0&1&0&0\\
\end{pmatrix}
$$
The matrix representing $\sigma_1$ in this basis is
$$A^{-1}XA=\begin{pmatrix}
q^2&0&0&0&0&0\\
0&q&0&0&0&0\\
0&0&\lambda_1^{-1}&0&0&0\\
0&0&0&1&0&0\\
0&0&0&0&q^2&0\\
0&0&0&0&0&\lambda_1\\
\end{pmatrix}
$$
and the matrix representing $\sigma_2$ in the same basis is
$$A^{-1}YA=\left(C_1\;C_2\;C_3\;C_4\;C_5\;C_6\right),$$
where $$C_1=\begin{pmatrix}
\frac{(1+2q^{1/3}+2q^{2/3} + 2 q +2 q^{4/3})}{3 q (1 + q^{1/3} + q^{2/3} + q)} \\
-\frac{q (2 + q^{2/3}+q +2 q^{5/3}+ 3 q^{7/3})}{3 (-1 + q^{5/3})}\\
\frac{q+q^{5/3} + q^{7/3}}{(-1 + q^{1/3})^3 (1 + q^{1/3})^2 (1 + q^{2/3}) (-1+q^{5/3})}\\
-\frac{q (1 - q^{1/3} + q)}{3 (-1 + q^{1/3})^3 (1 + q^{1/3})^2}\\
0\\
0
\end{pmatrix},\;
C_2=\begin{pmatrix}
\frac{1}{3 q^{4/3} (-1 + q^{4/3})} \\
\frac{q}{-1 + q^{1/3} +q + q^{5/3} - q^{7/3} - q^{2/3}}\\
\frac{q^{1/3}}{(-1 + q^{2/3})^2 (1 + q^{2/3}) (q^{2/3} - q) (-1 + q^{5/3})}\\
\frac{1}{3 (-1 + q^{2/3})^2 (-1 + q) q}\\
0\\
0
\end{pmatrix},$$
$$C_3=\begin{pmatrix}
\frac{(1+2q)(1-q^{4/3})+(2+q)q^{2/3}}{3(1+q)(-q^{2/3}+q^2)} \\
\frac{q^2(1+q^{2/3})}{(q^{2/3}-q)(-1 + q^{5/3})}\\
0\\
-\frac{q^{2/3} (1 + 2 q^2)}{3 (-1 + q^{2/3})^2 (-1 + q)}\\
0\\
0
\end{pmatrix},\;
C_4=\begin{pmatrix}
-\frac{2 + q + (q-1)(q^{4/3} + 2q^{5/3})}{(q^2-1)q^{5/3}(-q^{2/3}+q^2)}\\
\frac{3(q^{4/3}- q^{8/3} + q)}{-1 + q^{1/3} + q^{5/3} - q^{7/3} - q^{2/3} + q}\\
\frac{(-1 + q) q^{7/3} (q^{4/3} (2 + q)-q^2(1+2 q))}{(-1+q^{2/3})^2 (1 + q^{2/3}) (q^{2/3} -q) (-1 + q^{5/3})}\\
\frac{q^{2/3} - q^{8/3} - (2 + q) + q^{4/3} (-1 + q^2)}{3(-1 + q^{2/3})^2}\\
0\\
0
\end{pmatrix},$$
$$C_5=\begin{pmatrix}
\frac{2 q^{2/3}}{3 (-1 + q^{4/3})}\\
-\frac{q (2 + 2 q^{1/3} + 5 q^{2/3} + 5 q + 2 q^{4/3} + 2 q^{5/3})}{3 (-1 + q^{5/3})}\\
-\frac{q^{7/3}}{(-1 + q^{2/3})^2 (1 + q^{2/3}) (q^{2/3} - q) (-1 + q^{5/3})}\\
-\frac{q}{3(-1 + q^{2/3})^2 (-1 + q)}\\
\frac{q^{2/3}}{1-q^{4/3}}\\
\frac{1 + q^{2/3} + q^{4/3}}{1 - q^{1/3} + q^{2/3} -q}
\end{pmatrix} \text{ and }
C_6=\begin{pmatrix}
\frac{1}{3 q^{4/3} (-1 + q^{4/3})}\\
\frac{q (2 + q - q^{4/3} (1 + 2 q))}{3 (q^{2/3} - q) (-1 + q^{5/3})}\\
\frac{q}{(-1 + q^{1/3})^2 (q^{4/3}-1)(1-q^{5/3})}\\
\frac{1}{3 (-1 + q^{2/3})^2 (-1 + q)q}\\
-\frac{q + q^{4/3}+2q^2 - q^{1/3}}{3 - 3 q^{4/3}}\\
-\frac{1 + q^{4/3} + q^{8/3}}{(-1 + q^{1/3})^3 (1 + q^{2/3}) (q + q^{4/3})^2}
\end{pmatrix}.
$$
The restriction $\rho_V$ of $\rho$ to the subspace $V$ is given by:
$\sigma_1\mapsto X'=\begin{pmatrix}
q^2&0&0&0\\
0&q&0&0\\
0&0&\lambda_1^{-1}&0\\
0&0&0&1\\
\end{pmatrix}
$ and
$\sigma_2\mapsto Y'=\left(E_1\;E_2\;E_3\;E_4\right)$, where
$$E_1=\begin{pmatrix}
\frac{1+2q^{1/3}+2q^{2/3} + 2 q +2 q^{4/3}}{3 q (1 + q^{1/3} + q^{2/3} + q)} \\
-\frac{q (2 + q^{2/3}+q +2 q^{5/3}+ 3 q^{7/3})}{3 (-1 + q^{5/3})}\\
\frac{q+q^{5/3} + q^{7/3}}{(-1 + q^{1/3})^3 (1 + q^{1/3})^2 (1 + q^{2/3}) (-1+q^{5/3})}\\
-\frac{q (1 - q^{1/3} + q)}{3 (-1 + q^{1/3})^3 (1 + q^{1/3})^2}
\end{pmatrix},\;
E_2=\begin{pmatrix}
\frac{1}{3 q^{4/3} (-1 + q^{4/3})} \\
\frac{q}{-1 + q^{1/3} +q + q^{5/3} - q^{7/3} - q^{2/3}}\\
\frac{q^{1/3}}{(-1 + q^{2/3})^2 (1 + q^{2/3}) (q^{2/3} - q) (-1 + q^{5/3})}\\
\frac{1}{3 (-1 + q^{2/3})^2 (-1+q)q}
\end{pmatrix},
$$
$$E_3=\begin{pmatrix}
\frac{(1+2q)(1-q^{4/3})+(2+q)q^{2/3}}{3(1+q)(-q^{2/3}+q^2)}\\
\frac{q^2(1+q^{2/3})}{(q^{2/3}-q)(-1 + q^{5/3})}\\
0\\
-\frac{q^{2/3} (1 + 2 q^2)}{3 (-1 + q^{2/3})^2 (-1 + q)}
\end{pmatrix}\text{ and }
E_4=\begin{pmatrix}
-\frac{2 + q + (q-1)(q^{4/3} + 2q^{5/3})}{(q^2-1)q^{5/3}(-q^{2/3}+q^2)}\\
\frac{3(q^{4/3}- q^{8/3} + q)}{-1 + q^{1/3} + q^{5/3} - q^{7/3} - q^{2/3} + q}\\
\frac{(-1 + q) q^{7/3} (q^{4/3} (2 + q)-q^2(1+2 q))}{(-1+q^{2/3})^2 (1 + q^{2/3}) (q^{2/3} -q) (-1 + q^{5/3})}\\
\frac{q^{2/3} - q^{8/3} - (2 + q) + q^{4/3} (-1 + q^2)}{3(-1 + q^{2/3})^2}
\end{pmatrix}.$$
\begin{thm}
The representation $\rho_V$ is an irreducible representation of $B_3$ of dimension 4.
\end{thm}
\begin{proof}
The vectors $f_1=(1,0,0,0)^T$, $f_2=(0,1,0,0)^T$, $f_3=(0,0,1,0)^T$ and $f_4=(0,0,0,1)^T$ are the eigenvectors of $\sigma_1$ corresponding to the eigenvalues $q^2$, $q$, $\lambda_1^{-1}$ and 1, respectively. Note that every component of $Y'(f_i)=E_i$ is non-zero for $i=1,2,3,4$, except the third component of $E_3$. This implies that no proper subspace spanned by a subset of $\{f_1,f_2,f_3,f_4\}$ is invariant under $Y'$. Hence, $\rho_V$ has no proper invariant subspaces, and so $\rho_V$ is irreducible.
\end{proof}
\section{Conclusion}
We consider a family of representations of $B_3$ constructed via $q$-Pascal's triangle \cite{Al}. We then specialize the parameters used in defining these
representations to particular non-zero complex values.
Kosyak mentioned in \cite{Ko} that the irreducibility of these representations is an open problem for dimensions $\geq6$.
We determine a necessary and sufficient condition for the irreducibility of representations when their dimension is 6.
In addition, we present an irreducible representation of the braid group $B_3$ of dimension 4,
obtained by restricting one of the reducible representations considered in this work to an invariant subspace.
\end{document} |
\begin{document}
\begin{abstract}
We study a class of globally coupled maps in the continuum limit, where the individual maps are expanding maps of the circle. The circle maps in question are such that the uncoupled system admits a unique absolutely continuous invariant measure (acim), which is furthermore mixing. Interaction arises in the form of diffusive coupling, which involves a function that is discontinuous on the circle. We show that for sufficiently small coupling strength the coupled map system admits a unique absolutely continuous invariant distribution, which depends on the coupling strength $\varepsilon$. Furthermore, the invariant density exponentially attracts all initial distributions considered in our framework. We also show that the dependence of the invariant density on the coupling strength $\varepsilon$ is Lipschitz continuous in the BV norm.
When the coupling is sufficiently strong, the limit behavior of the system is more complex. We prove that all initial measures in a wide class approach a point mass whose support moves chaotically on the circle. This can be interpreted as synchronization in a chaotic state.
\end{abstract}
\maketitle
\let\thefootnote\relax\footnotetext{\emph{AMS subject classification.} 37D50, 37L60, 82C20}
\let\thefootnote\relax\footnotetext{\emph{Key words and phrases.} coupled map systems, synchronization, unique invariant density, mean field models.}
\section{Introduction}
In this paper we investigate a model of globally coupled maps. The precise definition is given in Section~\ref{s:results}; nonetheless, let us summarize here that by a globally coupled map we mean the following setup:
\begin{itemize}
\item The phase space $M$ is a compact metric space, and the state of the system is described by a Borel probability measure $\mu$ on $M$.
\item The dynamics, to be denoted by $F_{\varepsilon,\mu}$, is a composition of two maps $T \circ \Phi_{\varepsilon,\mu}$, where $T:M\to M$ describes the evolution of the individual sites, while $\Phi_{\varepsilon,\mu}:M\to M$ describes the coupling. Here the parameter $\varepsilon\ge 0$ is the coupling strength, and thus $\Phi_{0,\mu}$ is the identity for any $\mu$.
\item If the initial state of the system is given by some probability measure $\mu_0$, then for later times $n\ge 1$ the state of the system is given by $\mu_n$ which is
obtained by pushing forward $\mu_{n-1}$ by the map $F_{\varepsilon,\mu_{n-1}}$.
\end{itemize}
This framework allows a wide range of examples. To narrow down our analysis, let us assume that $M$ is either the interval $[0,1]$ or the circle $\mathbb{T}=\mathbb{R} / \mathbb{Z}$, and that the map $T$ has positive Lyapunov exponent and good ergodic properties with respect to an absolutely continuous invariant measure. Furthermore, we assume that the coupling map is of the form
\begin{equation} \label{couplingG}
\Phi_{\varepsilon,\mu}(x)=x+\varepsilon\cdot G(x,\mu)
\end{equation}
for some fixed function $G$ such that the dynamics preserves certain classes of measures. In particular, for any $N\ge 1$ the class of measures of the form $\mu=\frac{1}{N} \sum_{i=1}^N \delta_{x_i}$ for points $x_1,\dots,x_N\in M$ is preserved, a case that is regarded as a finite system of mean field coupled maps. In turn, the situation when $\mu$ is (Lebesgue-)absolutely continuous can be thought of as the infinite -- more precisely, continuum -- version of the model.
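To make the finite mean-field version of the update rule concrete, here is a minimal Python sketch (an illustration only: the expanding map $T$ and, in particular, the discontinuous coupling function $G$ studied in the paper are the specific ones fixed in Section~\ref{s:results}; the doubling map and the signed-distance coupling used below are merely stand-ins).
\begin{verbatim}
import numpy as np

def T(x):
    # stand-in expanding circle map: the doubling map on R/Z
    return (2.0 * x) % 1.0

def G(x, sites):
    # stand-in mean-field coupling (NOT the coupling function of the paper):
    # average signed circle distance from x to the other sites
    d = sites[None, :] - x[:, None]
    d = (d + 0.5) % 1.0 - 0.5
    return d.mean(axis=1)

def step(sites, eps):
    # one iteration of F_{eps,mu} = T o Phi_{eps,mu} for mu = (1/N) sum delta_{x_i}
    return T(sites + eps * G(sites, sites))

rng = np.random.default_rng(0)
sites = rng.random(1000)          # N = 1000 sites
for _ in range(50):
    sites = step(sites, eps=0.05)
\end{verbatim}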
Coupled dynamical systems in general, and globally coupled maps in particular, have been extensively studied in the literature. Below we mention the papers that directly motivated our work; more complete lists of references can be found for instance in \cite{koiller2010coupled}, \cite{fernandez2014breaking}, \cite{keller2006uniqueness} and \cite{chazottes2005dynamics}. Our main interest here is to understand how different types of asymptotic phenomena arise in the system depending on the coupling strength $\varepsilon$. In particular, we would like to address the following questions:
\begin{itemize}
\item \textbf{Weak coupling.}
\begin{itemize}
\item Is there some positive $\varepsilon_0$ such that for $\varepsilon\in[0,\varepsilon_0]$ there exists a unique absolutely continuous invariant probability measure $\mu_*=\mu_*(\varepsilon)$?
\item Are $\mu_*(\varepsilon)$ and $\mu_*(0)$ close in some sense?
\item Do we have convergence of $\mu_n$ to $\mu_*$, at least for sufficiently regular initial distributions $\mu_0$?
\end{itemize}
\item \textbf{Strong coupling.}
\begin{itemize}
\item Is it true that for larger values of $\varepsilon$ the asymptotic behavior can be strikingly different?
\item In particular, at least for a substantial class of initial distributions $\mu_0$, does $\mu_n$ approach a point mass that is evolved by the chaotic map $T$?
\end{itemize}
\end{itemize}
As by assumption $T$ has good ergodic properties, the weak coupling phenomena mean that for small enough $\varepsilon$ the behavior of the coupled system is analogous to that of the uncoupled system ($\varepsilon=0$). Hence from now on we will refer to this shortly as \textit{stability}. On the other hand, the limit behavior associated to a point mass moving chaotically on $\mathbb{T}$, which is expected to arise for a class of initial distributions when the coupling is sufficiently strong, will be referred to as \textit{synchronization in a chaotic state}. This terminology is borrowed from the literature of coupled map lattices (see e.g.~\cite{chazottes2005dynamics}), where synchronization is defined as the analogous phenomenon in which the diagonal attracts all orbits from an open set of the phase space, which has a direct product structure.
The short summary of our paper is that for the class of models studied here the answer to all of the questions stated above is yes, in the sense formulated in Theorems~\ref{theomain},~\ref{stability} and~\ref{largeeps} below.
Before describing the works that provided a direct motivation for our research, we would like to comment briefly on how the above questions can be regarded in the general context of coupled dynamical systems. As already mentioned, this area has an enormous literature, which we do not aim to survey here. In particular, there is a wide range of asymptotic phenomena that may arise depending on the specifics of the coupled dynamical system.
Yet, the two extremes of stability for weak coupling, and synchronization for strong coupling, are a common feature of many of the examples. For instance, in the context of coupled map lattices, the case of small coupling strength can often be treated as a perturbation of the uncoupled system. As a consequence, the sites remain weakly correlated, typically resulting in a unique space-time chaotic phase, which is analogous to the unique absolutely continuous invariant measure in our setting. On the other hand, if the coupling is strong enough, synchronization occurs in the sense that initial conditions are attracted by some constant configurations, in many cases, by the diagonal. The dynamics along the diagonal is then governed by the local map $T$, which has a positive Lyapunov exponent, justifying the terminology of chaotic synchronization. For further discussion of coupled map lattices, we refer to \cite{chazottes2005dynamics}. Although the specifics are quite different, the two extremities of uncorrelated behavior versus synchronization are highly relevant in another popular paradigm for applied dynamics, the Kuramoto model of coupled oscillators (see \cite{dietert2018mathematics} for a recent survey). We emphasize that the asymptotic behavior in coupled map lattices or in the Kuramoto model is much more complex, and a full discussion of these models is beyond the scope of the present paper. However, it is worth mentioning that our findings are analogous to some key features of these important models.
The question of stability for small $\varepsilon$ in globally coupled maps received some attention in the 1990's when unexpected behavior, often referred to as the violation of the law of large numbers, was observed even for arbitrarily small values of $\varepsilon$, when $T$ was chosen as a quadratic map or a tent map of the interval (\cite{kaneko1990globally}, \cite{kaneko1995remarks}, \cite{ershov1995mean}, \cite{nakagawa1996dominant}). For the description of the violation of the law of large numbers we refer to \cite{keller2000ergodic}; here we only mention that these complicated phenomena definitely rule out the stability for weak coupling scenario as defined above. However, in \cite{keller2000ergodic} it was shown that there is no violation of the law of large numbers, and in particular, there is stability for weak coupling when
\begin{enumerate}
\item[(I)] the map $T$ is a $C^3$ expanding map of the circle $\mathbb{T}=\mathbb{R} / \mathbb{Z}$;
\item[(II)] the coupling $\Phi_{\varepsilon,\mu}$ defined by \eqref{couplingG} is such that
\[
G(x,\mu)=\gamma(x,\bar{\mu}), \quad \bar{\mu}=\int_{\mathbb{T}} F\circ T \text{ d}\mu,
\]
where $\gamma \in C^{2}(\mathbb{T} \times \mathbb{R},\mathbb{R})$ and $F\in C^{2}(\mathbb{T},\mathbb{R})$.
\end{enumerate}
Motivated in part by these results, the authors in \cite{bardet2009stochastically} investigated a class of models where $T$ is the doubling map, which is deformed by a coupling factor $G(x,\mu)=\gamma(x,\bar\mu)$ to a nonlinear piecewise fractional linear map. For this class of examples, which do not literally fit the structure described by \eqref{couplingG}, \cite{bardet2009stochastically} proved that while there is stability for weak coupling, a phase transition analogous to that of the Curie-Weiss model takes place for stronger coupling strength.
A parallel line of investigation was initiated when a class of models was introduced in \cite{koiller2010coupled}, and later studied from the ergodic theory point of view in \cite{fernandez2014breaking} and \cite{selley2016mean}. In these papers $T$ is the \textit{doubling map} of $\mathbb{T}$, while $\Phi_{\varepsilon,\mu}$ represents
\textit{a diffusive coupling on the circle} among the particles. As such, $\Phi_{\varepsilon,\mu}$ is given by the formula (\ref{couplingG}), where
\[
G(x,\mu)=\int_{\mathbb{T}}g(x-y)\text{ d}\mu(y)
\]
for the function $g$ depicted on Figure~\ref{g}, which is \textit{discontinuous on the circle}. As discussed in \cite{fernandez2014breaking} and \cite{selley2016mean}, the discontinuity of $g$ has some important consequences for the behavior of the finite system. In particular it is shown, mathematically for $N=3$ (\cite{fernandez2014breaking} and \cite{selley2016mean}) and $N=4$ (\cite{selley2016}), and numerically for higher values of $N$ (\cite{fernandez2014breaking}), that a loss of ergodicity takes place for finite system size when the coupling strength is increased beyond a critical value. From a different perspective, in \cite{selley2016mean} we studied the continuum version of this model as well. In that context we showed that while there is stability for weak coupling (\cite[Theorem 4]{selley2016mean}), when $\varepsilon$ is sufficiently large, for a substantial class of initial distributions, $\mu_n$ approaches a point mass with support moving chaotically on $\mathbb{T}$ (as formulated in \cite[Theorem 5]{selley2016mean}).
In this paper we go beyond the doubling map discussed in \cite{selley2016mean}. The direct motivation
for this is that in that model, stability was, to some extent, prebuilt into the system: the very same (i.e. Lebesgue) measure
was invariant for every coupling strength.
Without such a symmetry, it was not at all clear whether the invariant measure could be stable under
perturbation by the discontinuous $g$. In other words, we are looking for the regularity class of systems where
the phase transition (or possibly a sequence of phase transitions) between stable behavior and synchronization occurs at positive coupling strength -- of which \cite{selley2016mean} only gave an example. Now we are able to demonstrate that the symmetry of the doubling map is not needed, and much less regularity is enough.
In particular, we consider the model of
\cite{selley2016mean}, yet, instead of the doubling map, $T$ is now essentially an arbitrary $C^2$ expanding map of the circle (see Section~\ref{s:results} below for the precise formulation). Our main results are stability for weak coupling as formulated in Theorems~\ref{theomain} and~\ref{stability}, and synchronization in a chaotic state for strong coupling as formulated in Theorem~\ref{largeeps}. Theorem~\ref{theomain} can be regarded as a generalization of \cite[Theorem 4]{selley2016mean}, while Theorem~\ref{largeeps} is a generalization of \cite[Theorem 5]{selley2016mean} to this context.
In \cite{selley2016mean} the $\varepsilon$-dependence of the (unique absolutely continuous) invariant measure was not a question, since the measure was always Lebesgue by construction. In the present model this is not the case. Instead,
we prove in Theorem~\ref{stability} that the unique invariant density depends Lipschitz continuously on the coupling strength.
It is important to emphasize that our methods here are quite different from those of \cite{selley2016mean}. There, our arguments were somewhat ad hoc, based on computations that exploited some specific features, in particular the linearity of the doubling map. Here, a more general approach is needed; we apply spectral tools developed in the literature. In that respect, the proof of our Theorem~\ref{theomain} is strongly inspired by \cite{keller2000ergodic}. Yet, we would like to point out an important difference.
The assumptions of \cite{keller2000ergodic}, in particular \cite[Theorem 4]{keller2000ergodic} are summarized in (I) and (II) above. For the model discussed here, the map $T$ is not much different, though it is slightly less regular than assumed in (I). On the other hand, the coupling $\Phi_{\varepsilon,\mu}$ differs considerably from the one defined in assumption (II). It does not depend only on an integral average of $\mu$, but on a more complicated expression involving the function $g$, which \textit{is discontinuous on} $\mathbb{T}$. As mentioned above, the discontinuity of $g$ has some important consequences for the finite system size. In the continuum version of the model, it turns out that we have stability for weak coupling (Theorem~\ref{theomain}) as in the setting of \cite[Theorem 4]{keller2000ergodic}. Yet, $g$ causes several subtle technical challenges in the proof of Theorem~\ref{theomain}, since the discontinuities imply that not an integral expression, but a certain value of the density $f$ will play a role in $F_{\varepsilon,\mu}'$ -- and the same can be said for $f'$ and $F_{\varepsilon,\mu}''$. A related comment we would like to make concerns Theorem~\ref{stability} which proves that the invariant density depends on $\varepsilon$ Lipschitz continuously. We think that this result is remarkable as in the presence of singularities typically only a weaker, log-Lipschitz continuous dependence can be expected (\cite{baladi2007susceptibility}, \cite{bonetto2000properties}, \cite{keller2008continuity}).
The remainder of the paper is organized as follows: in Section~\ref{s:results} we introduce our model, and state our three theorems concerning stability and synchronization. In Section~\ref{s:theo1proof} we prove our first theorem on stability: we show that there exists an invariant absolutely continuous distribution, which is unique in our setting. We then show that densities close enough to the invariant density converge to it with exponential speed. In Section~\ref{s:theo2proof} we prove our second theorem concerning the Lipschitz continuity of the invariant density in $\varepsilon$. In Section~\ref{s:theo3proof} we prove our theorem on synchronization, namely we show that for large enough $\varepsilon$, sufficiently well-concentrated initial distributions tend to a point mass moving on $\mathbb{T}$ according to $T$.
In the proof of our statements, especially those about stability, the choice of a suitable space of densities plays an important role. This choice is discussed in Section~\ref{s:remarks}, along with some further open problems.
\section{The model and main results}
\label{s:results}
Let $\mathbb{T}=\mathbb{R} / \mathbb{Z}$, and denote the Lebesgue measure on $\mathbb{T}$ by $\lambda$. Consider the Lebesgue-absolutely continuous probability measure $\text{d}\mu=f\text{d}\lambda$ on $\mathbb{T}$ and a self-map of $\mathbb{T}$ denoted by $T$. We make the following initial assumptions on them:
\begin{enumerate}
\item[(F)] $f \in C^1(\mathbb{T},\mathbb{R}_{\geq 0})$, $f'$ is furthermore Lipschitz continuous and $\int f \text{ d}\lambda=1$,
\item[(T)] $T \in C^2(\mathbb{T}, \mathbb{T})$, $T''$ is Lipschitz continuous and $T$ is strictly expanding: that is, $\min|T'| = \omega > 1$. We further suppose that $T$
is an $N$-fold covering of $\mathbb{T}$ and
\[
N < \omega^2.
\]
\end{enumerate}
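For orientation, a concrete family of maps satisfying assumption (T) is
\[
T_{\alpha}(x)=2x+\alpha\sin(2\pi x) \pmod 1, \qquad 0 \leq \alpha < \frac{2-\sqrt{2}}{2\pi},
\]
which is a $C^{\infty}$, $2$-fold covering of $\mathbb{T}$ with $\omega=2-2\pi\alpha>\sqrt{2}$, so that indeed $N=2<\omega^2$. A member of this family will be used in the purely illustrative numerical sketches below; it plays no role in the proofs.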
Define $\Phi_{\mu}: \mathbb{T} \to \mathbb{T}$ as
\[
\Phi_{\mu}(x)=x+\varepsilon \int_{\mathbb{T}} g(y-x)\text{d}\mu(y) \qquad x \in \mathbb{T},
\]
where $0 \leq \varepsilon < 1$ and $g: \mathbb{T} \to \mathbb{R}$ is defined as
\begin{equation} \label{gg}
g(u)=
\begin{cases}
u & \text{if } u \in \left(-\frac{1}{2},\frac{1}{2} \right), \\
0 & \text{if } u =\pm \frac{1}{2}.
\end{cases}
\end{equation}
The graph of the natural lift of this function to $\mathbb{R}$ is depicted
in Figure \ref{g}.
\begin{figure}
\caption{The function $g$.}
\label{g}
\end{figure}
Define $F_{\mu}: \mathbb{T} \to \mathbb{T}$ as
\[
F_{\mu}= T \circ \Phi_{\mu}.
\]
This can be regarded as a coupled map dynamics in the following way: $\Phi_{\mu}$ accounts for the interaction between the sites (distributed according to the measure $\mu$) via the interaction function $g$. The map $T$ is the individual site dynamics. We call the parameter $\varepsilon$ the coupling strength.
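As an illustration only -- it is not used anywhere in the proofs -- the finite mean field version of this dynamics, where $\mu$ is the empirical measure of finitely many sites, can be simulated directly. The following Python sketch uses the interaction function $g$ of \eqref{gg} and the map $T_{\alpha}$ with $\alpha=0.05$ from the example family mentioned after assumptions (F) and (T); the coupling strength, the number of sites and the number of iterations are arbitrary illustrative choices of ours.
\begin{verbatim}
import numpy as np

def g(u):
    # the sawtooth interaction: identity on (-1/2,1/2), zero at +-1/2
    v = (u + 0.5) % 1.0 - 0.5
    return np.where(np.isclose(np.abs(v), 0.5), 0.0, v)

def T(x):
    # illustrative expanding circle map T_alpha with alpha = 0.05
    return (2.0 * x + 0.05 * np.sin(2.0 * np.pi * x)) % 1.0

def step(x, eps):
    # one step of F_mu = T o Phi_mu for the empirical measure of the sites x
    drift = np.array([np.mean(g(x - xi)) for xi in x])
    return T((x + eps * drift) % 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1000)    # 1000 sites, mu_0 = their empirical measure
for n in range(50):
    x = step(x, eps=0.05)          # weak coupling
print(np.histogram(x, bins=10, range=(0.0, 1.0))[0])
\end{verbatim}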
Let $\mu_0$ be the initial distribution and define
\begin{equation} \label{pforward}
\mu_{n+1}=(F_{\mu_n})_{\ast}\mu_n, \quad n=0,1,\dots
\end{equation}
Our assumptions guarantee that if $\mu_0 \ll \lambda$ with density $f_0$ of property (F) then $\mu_n \ll \lambda$ with density of property (F) for all $n \in \mathbb{N}$ (this will be proved later). We are going to use the notation $\text{d}\mu_n=f_n\text{d}\lambda$ for the densities. Also, we are going to index $F_{\cdot}$ and $\Phi_{\cdot}$ with the density instead of the measure. Now $f_{n+1}$ can be calculated with the help of the transfer operator $P_{F_{f_n}}$ in the following way:
\[
f_{n+1}(y)=P_{F_{f_n}}f_n(y)=\sum_{x: \thinspace F_{f_n}(x)=y}\frac{f_n(x)}{|F_{f_n}'(x)|}, \qquad y \in \mathbb{T}.
\]
To simplify notation, we are going to write
\[
P_{F_f}f=\mathcal{F}_{\varepsilon} (f),
\]
so $f_{n+1}=\mathcal{F}_{\varepsilon} (f_n)$.
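To make the operator $\mathcal{F}_{\varepsilon}$ more tangible, the following sketch (an ad hoc discretization of our own, not used in the analysis) approximates one step $f_n \mapsto f_{n+1}$ numerically: the density is stored on a uniform grid, $\Phi_{f}$ is evaluated from its defining integral, and the push-forward is approximated by re-histogramming the transported mass.
\begin{verbatim}
# reuses numpy (np) and the functions g, T from the previous sketch
M = 512
grid = (np.arange(M) + 0.5) / M

def apply_F(f, eps):
    f = f / np.mean(f)   # keep the integral of f equal to 1 (grid mean = integral)
    # Phi_f on the grid: x + eps * int g(y - x) f(y) dy
    drift = np.array([np.mean(g(grid - x) * f) for x in grid])
    images = T((grid + eps * drift) % 1.0)
    # push forward f dlambda and turn the binned mass back into a density
    hist, _ = np.histogram(images, bins=M, range=(0.0, 1.0), weights=f / M)
    return hist * M

f = 1.0 + 0.3 * np.cos(2.0 * np.pi * grid)   # a smooth initial density
for n in range(40):
    f = apply_F(f, eps=0.05)
print(f.min(), f.max())
\end{verbatim}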
Our main goal is to show that depending on $\varepsilon$, there exist two different limit behaviors of the sequence $(\mu_n)$.
The first type of limit behavior occurs when the coupling is sufficiently weak in terms of the regularity of the initial distribution. In this case, we claim that there exists an invariant distribution with density of property (F), and initial distributions sufficiently close to it converge to this distribution exponentially. More precisely, let
\[
\mathcal{C}_{R,S}^c=\{ f \text{ is of property (F)}, \text{var}(f) \leq R, |f'| \leq S, \text{Lip}(f') \leq c \}
\]
where var$(f)$ denotes the total variation of the function $f$, Lip$(f)$ its Lipschitz constant, and $R,S,c > 0$ are constants.
The choice of the set of densities $\mathcal{C}_{R,S}^c$ plays a central role in the arguments -- see the discussion in Section~\ref{s:discussion}. It is a subset of the space of functions of bounded variation, so we can endow it with the usual bounded variation norm:
\[
\|f\|_{BV}=\|f\|_1+\text{var}(f),
\]
where
\[
\|f\|_1=\int |f| \text{ d}\lambda \quad \text{and} \quad \text{var}(f)=\int |f'| \text{ d}\lambda.
\]
Note that the total variation $\text{var}(f)$ can be calculated indeed with this simple formula, since $f$ is continuously differentiable.
\begin{theo} \label{theomain}
There exist $R^*,S^*$ and $c^*>0$ such that for all
$R>R^*$, $S>S^*$ and $c>c^*$ there exists an $\varepsilon^*=\varepsilon^*(R,S,c)>0$, for which the following holds:
For all $0 \leq \varepsilon < \varepsilon^*$, there exists a density $f_*^{\varepsilon} \in \mathcal{C}_{R,S}^c$ for which $\mathcal{F}_{\varepsilon}f_*^{\varepsilon}=f_*^{\varepsilon}$. Furthermore,
\[
\lim_{n \to \infty}\mathcal{F}_{\varepsilon}^n(f_0)=f_{*}^{\varepsilon} \quad \text{ exponentially for all } f_0 \in \mathcal{C}_{R,S}^c
\]
in the sense that there exist $C > 0$ and $\gamma\in(0,1)$ such that
\[
\|\mathcal{F}_{\varepsilon}^n(f_0)-f_*^{\varepsilon}\|_{BV} \leq C \gamma^n \|f_0-f_*^{\varepsilon}\|_{BV} \quad \text{ for all } n \in \mathbb{N}
\text{ and } 0\leq \varepsilon<\varepsilon^*.
\]
\end{theo}
In this case we can also show that the fixed density of $\mathcal{F}_{\varepsilon}$ is Lipschitz continuous in the variable $\varepsilon$.
\begin{theo} \label{stability}
Let $R,S,c$ and $\varepsilon^{*}$ be chosen as in Theorem~\ref{theomain}. Then there exists a $K(R,S,c)=K > 0$ such that for any $0 \leq \varepsilon, \varepsilon' < \varepsilon^{*}$
\[
\|f_*^{\varepsilon}-f_*^{\varepsilon'}\|_{BV} \leq K|\varepsilon-\varepsilon'|
\]
holds for the densities $f_*^{\varepsilon}, f_*^{\varepsilon'} \in \mathcal{C}_{R,S}^c$ for which
$\mathcal{F}_{\varepsilon}f_*^{\varepsilon}=f_*^{\varepsilon}$ and $\mathcal{F}_{\varepsilon'}f_*^{\varepsilon'}=f_*^{\varepsilon'}$.
\end{theo}
However, when the coupling is strong, we expect to see synchronization in some sense. To be able to prove such behavior we need an initial distribution which is `sufficiently well concentrated' -- by this we mean the following:
\begin{enumerate}
\item[(F')] $f$ is of property (F), furthermore, there exists an interval $I \subset \mathbb{T}$, $|I| \geq \frac{1}{2}$ such that $\textnormal{supp}(f) \cap I = \emptyset$.
\end{enumerate}
In this case we can define $\textnormal{supp}^* (f)$ as the smallest closed interval on $\mathbb{T}$ containing the support of $f$.
Before stating our theorem, we recall the definition of the Wasserstein metric. Let $(S,d)$ be a metric space and let $\mathcal{P}_1(S)$ denote the set of Borel probability measures $P$ for which $\int d(x,z)\text{ d}P(x) < \infty$ for every $z \in S$. Let $M(P,Q)$ be the set of probability measures on $S \times S$ with marginals $P$ and $Q$. Then the Wasserstein distance of $P$ and $Q$ is
\[
W_1(P,Q)=\inf \left\{\int d(x,y) \text{ d}\mu(x,y), \text{ } \mu \in M(P,Q) \right\}.
\]
By the Kantorovich-Rubinstein theorem {\cite[Theorem 11.8.2]{dudley2002real}} it holds that
\[
W_1(P,Q)=\sup \left\{\left|\int f \text{ d}(P-Q)\right|, \text{ } \text{Lip}(f) \leq 1 \right\}.
\]
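One special case is worth keeping in mind: if $Q=\delta_z$, then $M(P,\delta_z)$ contains only the product measure $P \otimes \delta_z$, so $W_1(P,\delta_z)=\int d(x,z)\text{ d}P(x)$. For empirical measures on $\mathbb{T}$ with the arc-length metric this quantity is computed by the following small helper (illustrative only, reusing numpy from the earlier sketches); it will be convenient in the numerical illustration of Theorem~\ref{largeeps} below.
\begin{verbatim}
def circle_dist(x, z):
    # d(x, z) on T = R/Z
    d = np.abs(x - z) % 1.0
    return np.minimum(d, 1.0 - d)

def w1_to_dirac(samples, z):
    # W_1(P, delta_z) = int d(x, z) dP(x) for the empirical measure of `samples`
    return np.mean(circle_dist(samples, z))
\end{verbatim}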
\begin{theo} \label{largeeps}
Suppose $1-\frac{1}{\max |T'|} < \varepsilon < 1$. Then
\[
|\textnormal{supp}^* (f_n)| \underset{n \to \infty}{\to} 0 \quad \text{exponentially}
\]
in the sense that
\[
|\textnormal{supp}^* (f_n)| \leq [\max |T'|(1-\varepsilon)]^n |\textnormal{supp}^* (f_0)| \quad \text{for all} \quad n \in \mathbb{N}.
\]
Furthermore, there exists an $x^* \in \textnormal{supp}^*(f_0)$ such that
\[
W_1(\mu_n,\delta_{T^n(x^*)}) \underset{n \to \infty}{\to} 0 \quad \text{exponentially}
\]
in the sense that
\[
W_1(\mu_n,\delta_{T^n(x^*)}) \leq [\max |T'|(1-\varepsilon)]^n W_1(\mu_0,\delta_{x^*}) \quad \text{for all} \quad n \in \mathbb{N}.
\]
\end{theo}
So we claim that when the coupling is sufficiently strong, the support of a well-concentrated initial density eventually shrinks to a single point, hence complete synchronization is achieved.
We have chosen the Wasserstein metric to state our theorem because on the compact space $\mathbb{T}$ convergence in this metric is equivalent to weak convergence of measures. This is about the best that can be expected, since, for example, a similar statement for the total variation distance cannot hold -- a sequence of absolutely continuous measures cannot converge to a point measure in total variation distance.
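The following illustrative computation (with the map $T_{0.05}$ and the helper functions \texttt{step} and \texttt{w1\_to\_dirac} from the earlier sketches; all concrete parameters are arbitrary choices of ours) shows this contraction of the support in the finite, empirical-measure version of the system. Here $\max|T'|=2+0.1\pi$, so any $\varepsilon$ above $1-1/(2+0.1\pi)\approx 0.57$ qualifies as strong coupling.
\begin{verbatim}
def support_diameter(x):
    # length of the smallest closed arc of the circle containing all points of x
    s = np.sort(x % 1.0)
    gaps = np.diff(np.concatenate([s, [s[0] + 1.0]]))
    return 1.0 - gaps.max()

rng = np.random.default_rng(1)
x = 0.1 + 0.2 * rng.random(1000)   # initial cloud inside an arc of length 0.2 < 1/2
for n in range(1, 31):
    x = step(x, eps=0.8)           # strong coupling
    if n % 5 == 0:
        # diameter of the cloud and W_1-distance to a Dirac mass at one of the sites
        print(n, support_diameter(x), w1_to_dirac(x, x[0]))
\end{verbatim}
Both printed quantities should decay roughly like $[\max|T'|(1-\varepsilon)]^n \approx 0.46^n$ in this setting.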
In the subsequent three sections we are going to give the proofs of Theorems~\ref{theomain}, \ref{stability} and~\ref{largeeps}.
\section{Proof of Theorem~\ref{theomain}}
\label{s:theo1proof}
\subsection{Existence of an invariant density} In this section we prove the following proposition:
\begin{prop} \label{propfix}
There exist $\varepsilon^*_0,R^*,S^*,c^* > 0$ such that if $0 \leq \varepsilon < \varepsilon^*_0$, the operator $\mathcal{F}_{\varepsilon}$ has a fixed point $f_*^{\varepsilon}$ in $\mathcal{C}_{R^*,S^*}^{c^*}$.
\end{prop}
\begin{proof}
The structure of the proof is as follows: we first show that $\mathcal{C}_{R,S}^c$ is invariant under the action of $\mathcal{F}_{\varepsilon}$ if $R,S$ and $c$ are chosen large enough. Then we prove that $\mathcal{F}_{\varepsilon}$ restricted to $\mathcal{C}_{R,S}^c$ is continuous in the $L^1$ norm. We then argue that the set $\mathcal{C}_{R,S}^c$ is a compact, convex subset of $L^1$ for any values of $R,S$ and $c$. Finally, we conclude by Schauder's fixed point theorem that $\mathcal{F}_{\varepsilon}$ has a fixed point in $\mathcal{C}_{R,S}^c$.
\begin{lem} \label{lem:1}
There exists $R^*>0$ such that for all $R\geq R^*$ there are $\varepsilon_0^*=\varepsilon_0^*(R)>0$ and $S^*=S^*(R)>0$ with the following properties:
For each $S\geq S^*$ there is $c^*=c^*(R,S)$
such that
$\mathcal{F}_{\varepsilon}(\mathcal{C}_{R,S}^c) \subseteq \mathcal{C}_{R,S}^c$ for $c \geq c^*$ and $0 \leq \varepsilon < \varepsilon^*_0$.
\end{lem}
\begin{proof}
The proof consists of the following steps: we first prove that if $f$ is of property (F), then $\mathcal{F}_{\varepsilon}(f)$ is also of property (F) -- as indicated in the introduction. Then let $f \in \mathcal{C}_{R,S}^c$ for some $R,S,c > 0$. We prove that we can choose $R$ large enough such that var$(\mathcal{F}_{\varepsilon}(f)) \leq R$. Then we prove that we can choose $S$ large enough such that $|(\mathcal{F}_{\varepsilon}(f))'| \leq S$ and $c$ large enough such that $\text{Lip}(\mathcal{F}_{\varepsilon}(f)') \leq c$ (provided that $\varepsilon$ is small enough).
\textbf{$\mathcal{F}_{\varepsilon}(f)$ is also of property (F).} First notice that
\[
\Phi'_{f}(x)=1+\varepsilon\left(f\left(x+\frac{1}{2}\right)-1 \right)
\quad \text{and} \quad \Phi''_{f}(x)=\varepsilon f'\left(x+\frac{1}{2}\right).
\]
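Indeed, since $g(y-x)=y-x$ for $y \in (x-\frac{1}{2},x+\frac{1}{2})$, and the single point where $g$ is set to zero does not affect the integral, we may write (extending $f$ periodically to $\mathbb{R}$)
\[
\int_{\mathbb{T}} g(y-x)f(y)\text{ d}y=\int_{x-1/2}^{x+1/2}(y-x)f(y)\text{ d}y,
\]
and differentiating in $x$ by the Leibniz rule, using the periodicity of $f$ and $\int_{\mathbb{T}}f\text{ d}\lambda=1$, gives
\[
\frac{1}{2} f\left(x+\frac{1}{2}\right)+\frac{1}{2} f\left(x-\frac{1}{2}\right)-\int_{x-1/2}^{x+1/2}f(y)\text{ d}y=f\left(x+\frac{1}{2}\right)-1.
\]
This is where the discontinuity of $g$ enters: a pointwise value of $f$, rather than an integral average, appears in $\Phi_{f}'$ (and, after differentiating once more, a pointwise value of $f'$ appears in $\Phi_{f}''$). In particular, $\Phi_{f}'(x)=1+\varepsilon\left(f\left(x+\frac{1}{2}\right)-1\right)$ as claimed.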
This implies that $\Phi_{f}$ is monotone increasing:
\begin{equation} \label{phibound}
\Phi_{f}' \geq 1+\varepsilon (\inf_{\mathbb{T}} f-1) \geq 1-\varepsilon > 0 \qquad \text{if } 0 \leq \varepsilon < 1,
\end{equation}
and onto, since denoting the lift of $\Phi_f$ to $\mathbb{R}$ by $\Phi_{f}^{\mathbb{R}}$ we can see that
\[
\Phi_{f}^{\mathbb{R}}(0)=\varepsilon \int_{\mathbb{T}} g(y)f(y)\text{d}y
\quad \text{and} \quad
\Phi_{f}^{\mathbb{R}}(1)=1+\varepsilon \int_{\mathbb{T}} g(y-1)f(y)\text{d}y,
\]
implying that $\Phi_{f}^{\mathbb{R}}(1)=\Phi_{f}^{\mathbb{R}}(0)+1$. Bearing in mind the differentiability properties of $f$, we can observe that $\Phi_f$ is a $C^2$ diffeomorphism of $\mathbb{T}$. We are going to use the change of variables formula with respect to this diffeomorphism repeatedly in the calculations to come.
A further note on $\Phi_f$: we are going to denote the transfer operator associated to it by $P_{\Phi_f}$. Recall that this depends on $\varepsilon$. Denoting the transfer operator associated to $T$ by $P_T$, we have $P_{F_f}=P_TP_{\Phi_f}$ since $F_f=T \circ \Phi_f$.
Now we can see that $F_f$ is an $N$-fold covering of $\mathbb{T}$. Let us denote the inverse branches by $F_f^{-1,k}$, $k=1,\dots,N$. Then
\[
\mathcal{F}_{\varepsilon}(f)=P_{F_{f}}f=\sum_{k=1}^{N}\frac{f \circ F_{f}^{-1,k}}{|F_{f}' \circ F_{f}^{-1,k}|},
\]
and we see that $\mathcal{F}_{\varepsilon}(f)$ is also $C^1$. It is also easy to see that the derivative
\begin{align} \label{der}
\mathcal{F}_{\varepsilon}(f)' = \sum_{k=1}^{N} \frac{f'}{(F_{f}')^2} \circ F_{f}^{-1,k} +\text{sign}(T')\,\cdot\sum_{k=1}^{N} \frac{f\cdot F_{f}''}{(F_{f}')^3} \circ F_{f}^{-1,k}.
\end{align}
is also Lipschitz continuous.
\textbf{Choice of $\mathbf{R}^*$.} Fix $0< b < \omega-1$. Then choose $R_0$ such that $\max|T''|/|T'| < b \cdot R_0$. First we note that
\begin{align*}
\textnormal{var}(P_{T}f) & \leq \int \left|\frac{f'}{T'}\right|\text{ d}\lambda +\int \left|\frac{f\cdot T''}{(T')^2} \right|\text{ d}\lambda \\
& \leq \max \left| \frac{1}{T'} \right |\textnormal{var}(f)+ \|f\|_1 \max \left( \frac{T''}{(T')^2} \right) \\
& \leq \max \frac{1}{|T'|}(\textnormal{var}(f)+bR_0).
\end{align*}
Let $\eta = \max \frac{1}{|T'|}(1+b)$ (note that $\eta < 1$ by our choice of $b$). Choose $\rho > 0$ such that $\delta=(1-\rho)^2-\eta > 0$. Let $R^*=\eta\cdot\max \left\{R_0,\frac{\rho}{\delta} \right\}$. Now given $R>R^*$, choose $\varepsilon_0^*\leqslant\frac{\rho}{R}$.
Later in the proof we will require further smallness properties of $\varepsilon_0^*$.
Fix $R\geqslant R^*$, $\varepsilon\in[0,\varepsilon_0^*)$, and
let $\textnormal{var}(f) \leq R$. As $\Phi_f'\geq 1-\varepsilon\textnormal{var}(f)\geq 1-\rho$ and $\int|\Phi_f''|=\varepsilon\textnormal{var}(f)\leq\rho$, we have
\begin{align*}
\text{var}(P_{\Phi_{f}}f)
& \leq
\textnormal{var}(f) \left( \max \left| \frac{1}{\Phi_{f}'} \right | + \textnormal{var} \left( \frac{1}{\Phi_{f}'} \right) \right)+\|f\|_1 \textnormal{var} \left( \frac{1}{\Phi_{f}'} \right) \\
& \leq
\textnormal{var}(f) \cdot\left( \frac{1}{1-\rho}+ \frac{\rho}{(1-\rho)^2} \right)+\|f\|_1 \frac{\rho }{(1-\rho)^2} \\
&=
\textnormal{var}(f) \frac{1}{(1-\rho)^2}+\frac{\rho}{(1-\rho)^2} \leq
\frac{1}{(1-\rho)^2}(R+\delta\eta^{-1} R^*)
\leqslant \frac{1}{\eta(1-\rho)^2}(\eta R+\delta R)\\
&=
\frac{R}{\eta}
\end{align*}
Finally, if $h=P_{\Phi_{f}}f$ then
\[
\textnormal{var}(P_Th) \leq \max \frac{1}{|T'|}(\eta^{-1}R+bR_0)
\leq \max \frac{1}{\eta\,|T'|}(R+bR^*)
\leq R.
\]
\textbf{Choice of $\mathbf{S}^*$.}
We are going to estimate $\mathcal{F}_{\varepsilon}(f)'$ using \eqref{der}. Since
\begin{align*}
F_{f}' \circ F_{f}^{-1,k}&=(T' \circ T^{-1,k}) \cdot (\Phi_{f}' \circ F_{f}^{-1,k})\\
F_{f}'' \circ F_{f}^{-1,k}&=(T'' \circ T^{-1,k}) \cdot (\Phi_f' \circ F_{f}^{-1,k})^2+(T' \circ T^{-1,k}) \cdot (\Phi_{f}'' \circ F_{f}^{-1,k}), \qquad k=1,\dots,N
\end{align*}
we get the following expression:
\begin{align*}
|\mathcal{F}_{\textnormal{var}epsilon}(f)'| &\leq \sum_{k=1}^{N} \left| \frac{f' \circ F_{f}^{-1,k}}{[(T' \circ T^{-1,k}) \cdot (\Phi_{f}' \circ F_{f}^{-1,k})]^2} \right| \\
&+\sum_{k=1}^{N} \left|\frac{(f \circ F_{f}^{-1,k}) \cdot [(T'' \circ T^{-1,k}) \cdot (\Phi_f' \circ F_{f}^{-1,k})^2+(T' \circ T^{-1,k}) \cdot (\Phi_{f}'' \circ F_{f}^{-1,k})]}{[(T' \circ T^{-1,k}) \cdot (\Phi_{f}' \circ F_{f}^{-1,k})]^3} \right|
\end{align*}
Remember that $\omega = \min|T'|$ and let $D=\max|T''|$. As $|f'|\leq S$
and $\varepsilon\in[0,\varepsilon_0^*)$, this implies
\begin{align*}
|\mathcal{F}_{\varepsilon}(f)'| &\leq \frac{N}{\omega^2(1-\varepsilon)^2}S+N(1+R)\left(\frac{D}{\omega^3(1-\varepsilon)}+\frac{\varepsilon S}{(1-\varepsilon)^3\omega^2} \right) \\
&\leq
\left(\frac{N}{\omega^2(1-\varepsilon_0^*)^2}+\frac{\varepsilon_0^* N(1+R)}{(1-\varepsilon_0^*)^3\omega^2}\right)S+\frac{DN(1+R)}{\omega^3(1-\varepsilon_0^*)} \\
&=: q_0(\varepsilon_0^*,R)\cdot S+K_0(\varepsilon_0^*,R).
\end{align*}
As $N<\omega^2$ by assumption (T), one can choose $\varepsilon_0^*=\varepsilon_0^*(R)$ so small that $q_0(\varepsilon_0^*,R)<1$.
Let
\[
S^*=S^*(R):=\frac{K_0(\varepsilon_0^*(R),R)}{1-q_0(\varepsilon_0^*(R),R)},
\]
and suppose that $S\geq S^*$. Then $|\mathcal{F}_{\varepsilon}(f)'|\leq S$.
\textbf{Choice of $\mathbf{c}^*$.} We are going to estimate $\text{Lip}(\mathcal{F}_{\varepsilon}(f)')$ with the help of \eqref{der}.
\begin{align*}
\text{Lip}(\mathcal{F}_{\textnormal{var}epsilon}(f)') & \leq \sum_{k=1}^{N} \text{Lip} \left( \frac{f'}{(F_{f}')^2} \circ F_{f}^{-1,k}\right) +\sum_{k=1}^{N} \text{Lip}\left(\frac{f\cdot F_{f}''}{(F_{f}')^3} \circ F_{f}^{-1,k} \right), \\
& \leq \frac{N}{\omega(1-\textnormal{var}epsilon_0^*)} \cdot \left( \text{Lip} \left( \frac{f'}{(F_{f}')^2} \right)+ \text{Lip}\left(\frac{f\cdot F_{f}''}{(F_{f}')^3} \right) \right),
\end{align*}
since $\max|(F_{f}^{-1})'| \leq \frac{1}{\omega(1-\varepsilon)} \leq \frac{1}{\omega(1-\varepsilon_0^*)}$.
Simple calculations yield that
\begin{align*}
\text{Lip} \left( \frac{f'}{(F_{f}')^2} \right) & \leq \frac{1}{\omega^2(1-\textnormal{var}epsilon_0^*)^2}c+K'(\textnormal{var}epsilon_0^*,R,S),
\end{align*}
and
\begin{align*}
\text{Lip}\left(\frac{f\cdot F_{f}''}{(F_{f}')^3} \right) \leq \frac{(1+R)\max|T'|\textnormal{var}epsilon_0^*}{\omega^3(1-\textnormal{var}epsilon_0^*)^3}c+K''(\textnormal{var}epsilon_0^*,R,S).
\end{align*}
In conclusion,
\begin{align*}
\text{Lip}(\mathcal{F}_{\textnormal{var}epsilon}(f)')
&\leq
N \left(\frac{1}{\omega^3(1-\textnormal{var}epsilon_0^*)^3}+\frac{(1+R)\max|T'|\textnormal{var}epsilon_0^*}{\omega^4(1-\textnormal{var}epsilon_0^*)^4} \right)c+\frac{N}{\omega(1-\textnormal{var}epsilon_0^*)}(K'+K'') \\
&:=
q_1(\textnormal{var}epsilon_0^*,R)\cdot c+K_1(\textnormal{var}epsilon_0^*,R,S).
\end{align*}
As in the previous step,
one can choose $\varepsilon_0^*=\varepsilon_0^*(R)$ so small that $q_1(\varepsilon_0^*,R)<1$.
Let
\[
c^*=c^*(R,S):=\frac{K_1(\varepsilon_0^*(R),R,S)}{1-q_1(\varepsilon_0^*(R),R)},
\]
and suppose that $c\geq c^*$. Then $\text{Lip}(\mathcal{F}_{\varepsilon}(f)')\leq c$.
\end{proof}
\begin{lem} \label{lemma_cont}
$\mathcal{F}_{\varepsilon}|_{\mathcal{C}_{R,S}^c}$ is continuous in the $L^1$-norm.
\end{lem}
\begin{proof}
We remind the reader that $\mathcal{F}_{\varepsilon}(f)=P_{F_{f}}f=P_TP_{\Phi_{f}}f$. Continuity of $P_T$ is standard:
\[
\| P_Tf_1-P_Tf_2\|_1 = \|P_T(f_1-f_2)\|_1 \leq \|f_1-f_2\|_1.
\]
So it suffices to prove the continuity of $\tilde{\mathcal{F}}_{\varepsilon}(f)=P_{\Phi_{f}}f$. Let $f_1,f_2 \in \mathcal{C}_{R,S}^c$, and for the sake of brevity we are going to write $\Phi_{f_1}=\Phi_1$ and $\Phi_{f_2}=\Phi_2$. Then
\begin{align*}
\|P_{\Phi_1}f_1-P_{\Phi_2}f_2 \|_1 &\leq \|P_{\Phi_1}(f_1-f_2) \|_1 + \|(P_{\Phi_1}-P_{\Phi_2})f_2 \|_1 \\
&\leq \|f_1-f_2 \|_1 + \|(P_{\Phi_1}-P_{\Phi_2})f_2 \|_1
\end{align*}
\begin{claim} \label{l1}
Let $f_1,f_2$ be of property (F) and $\varphi$ be of bounded variation on $\mathbb{T}$. Denote $\Phi_{f_1}=\Phi_1$ and $\Phi_{f_2}=\Phi_2$. Then there exists a $K > 0$ such that
\[
\|(P_{\Phi_1}-P_{\Phi_2})\varphi \|_1 \leq \varepsilon K \|\varphi\|_{BV} \|f_1-f_2\|_1.
\]
\end{claim}
This Claim implies that
\begin{align*}
\|\tilde{\mathcal{F}}_{\varepsilon}(f_1)-\tilde{\mathcal{F}}_{\varepsilon}(f_2)\|_1 &\leq \left(1+\varepsilon \cdot K\|f_2\|_{BV}
\right)\|f_1-f_2\|_1 \\
& \leq \left(1+\varepsilon \cdot \text{const}(R) \right)\|f_1-f_2\|_1,
\end{align*}
hence the Lemma is proved once we have this claim.
\emph{The proof of Claim~\ref{l1}.}
\begin{align*}
\|(P_{\Phi_1}-P_{\Phi_2})\textnormal{var}phi \|_1&=\int \left|\frac{\textnormal{var}phi}{\Phi'_1} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi}{\Phi'_2} \circ \Phi^{-1}_2 \right|\text{ d}\lambda \\ &\leq \underbrace{\int \left|\frac{\textnormal{var}phi}{\Phi'_1} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi}{\Phi'_2} \circ \Phi^{-1}_1 \right|\text{ d}\lambda}_{(A)}+\underbrace{\int \left|\frac{\textnormal{var}phi}{\Phi'_2} \circ \Phi^{-1}_2-\frac{\textnormal{var}phi}{\Phi'_2} \circ \Phi^{-1}_1 \right|\text{ d}\lambda}_{(B)}
\end{align*}
We first deal with the term $(A)$.
\begin{align*}
\left|\int \left(\frac{\textnormal{var}phi}{\Phi_1'} -\frac{\textnormal{var}phi}{\Phi_2'} \right) \circ \Phi_1^{-1}\right|\text{ d}\lambda&=\int |\textnormal{var}phi| \cdot \left|1 -\frac{\Phi_1'}{\Phi_2'} \right|\text{ d}\lambda \leq \|\textnormal{var}phi\|_{\infty} \int \left|\frac{\Phi_2'-\Phi_1'}{\Phi_2'} \right|\text{ d}\lambda \\
&= \|\textnormal{var}phi\|_{BV} \int_0^1\left|\frac{\textnormal{var}epsilon(f_2(x+1/2)-f_1(x+1/2))}{1+\textnormal{var}epsilon(f_2(x+1/2)-1)} \right| \text{ d}x \\
&\leq \frac{\textnormal{var}epsilon }{1-\textnormal{var}epsilon}\|\textnormal{var}phi\|_{BV} \|f_1-f_2\|_1.
\end{align*}
Now we give an appropriate bound on $(B)$.
\begin{align*}
\int \left|\frac{\textnormal{var}phi}{\Phi_2'} \circ \Phi_2^{-1}-\frac{\textnormal{var}phi}{\Phi_2'} \circ \Phi_1^{-1} \right|\text{ d}\lambda&=\int|\textnormal{var}phi \circ \Phi_1^{-1} \circ \Phi_2-\textnormal{var}phi|\text{ d}\lambda=\int \psi\cdot (\textnormal{var}phi \circ \Phi_1^{-1} \circ \Phi_2-\textnormal{var}phi)\text{ d}\lambda \\
&=\int (P_{\Phi_1^{-1}}P_{\Phi_2}\psi-\psi)\textnormal{var}phi\text{ d}\lambda,
\end{align*}
where $\psi=\text{sign}(\textnormal{var}phi \circ \Phi_1^{-1} \circ \Phi_2-\textnormal{var}phi)$. Now we are going to apply {\cite[Lemma 11]{keller1982stochastic}}, which states that if $\ell \in BV$ and $h \in L^1$, then
\[
\left|\int \ell \cdot h \text{ d}\lambda \right| \leq \text{var}(\ell)\left \| \int (h) \right\|_{\infty}+\left|\int h \text{ d}\lambda \right| \cdot \|\ell\|_{\infty} \leq 2 \|\ell\|_{BV}\cdot\left \| \int (h) \right\|_{\infty},
\]
where $\int (h)=\int_{\{x \leq z\}}h(x)\text{ d}x$. Choosing $\ell=\textnormal{var}phi$ and $h=P_{\Phi_1^{-1}}P_{\Phi_2}\psi-\psi$ we get
\[
\int (P_{\Phi_1^{-1}}P_{\Phi_2}\psi-\psi)\textnormal{var}phi\text{ d}\lambda \leq 2 \|\textnormal{var}phi\|_{BV}\sup_{0 \leq z \leq 1}\left| \int (P_{\Phi_1^{-1}}P_{\Phi_2}\psi-\psi) \mathbf{1}_{[0,z]} \right|\text{ d}\lambda.
\]
So
\begin{align*}
\int (P_{\Phi_1^{-1}}P_{\Phi_2}\psi-\psi)\textnormal{var}phi\text{ d}\lambda &\leq 2\|\textnormal{var}phi\|_{BV} \sup_{0 \leq z \leq 1}\left| \int \psi \mathbf{1}_{[0,z]} \circ \Phi_1^{-1} \circ \Phi_2-\psi \mathbf{1}_{[0,z]} \right|\text{ d}\lambda \\
&\leq 2\|\textnormal{var}phi\|_{BV} \sup_{0 \leq z \leq 1} \int |\psi| |\mathbf{1}_{[0,z]} \circ \Phi_1^{-1} \circ \Phi_2-\mathbf{1}_{[0,z]}|\text{ d}\lambda \\
&= 2\|\textnormal{var}phi\|_{BV} \sup_{0 \leq z \leq 1} \int |\mathbf{1}_{[0,z]} \circ \Phi_1^{-1} -\mathbf{1}_{[0,z]} \circ \Phi_2^{-1}| \cdot \frac{1}{\Phi_2'\circ \Phi_2^{-1}}\text{ d}\lambda \\
& \leq \frac{2}{1-\textnormal{var}epsilon}\|\textnormal{var}phi\|_{BV} \sup_{0 \leq z \leq 1} \int |\mathbf{1}_{[\Phi_1(0),\Phi_1(z)]}-\mathbf{1}_{[\Phi_2(0),\Phi_2(z)]}|\text{ d}\lambda \\
& \leq \frac{2\|\textnormal{var}phi\|_{BV}}{1-\textnormal{var}epsilon} (|\Phi_1(0)-\Phi_2(0)| +\max_{0 \leq t \leq 1}|\Phi_1(t)-\Phi_2(t)|) \\
&\leq \frac{2\|\textnormal{var}phi\|_{BV}}{1-\textnormal{var}epsilon} 2\max_{0 \leq t \leq 1}|\Phi_1(t)-\Phi_2(t)| \\
& = \frac{4\|\textnormal{var}phi\|_{BV}}{1-\textnormal{var}epsilon} \max_{0 \leq t \leq 1} \textnormal{var}epsilon \int g(y-t)(f_1(y)-f_2(y))\text{ d}y \leq \frac{2\textnormal{var}epsilon}{1-\textnormal{var}epsilon}\|\textnormal{var}phi\|_{BV}\|f_1-f_2\|_1.
\end{align*}
\end{proof}
The final lemma, a corollary to the Arzel\`a-Ascoli theorem, is a folklore result on function spaces.
\begin{lem}
The space $\mathcal{C}_{R,S}^c$ is a compact, convex subset of
$C^0$ and a fortiori also of
$L^1$.
\end{lem}
By Schauder's fixed point theorem we can conclude that there exists a fixed point of $\mathcal{F}_{\varepsilon}$ in $\mathcal{C}_{R,S}^c$. This completes our proof of Proposition \ref{propfix}.
\end{proof}
\subsection{Convergence to the invariant density} We prove the following proposition in this section:
\begin{prop} \label{propconv}
Let $R>0$, $S>0$ and $c>0$.
Then there exist $C>0$, $\gamma\in(0,1)$ and an $\varepsilon^*(R,S,c)>0$ such that for $0 \leq \varepsilon < \varepsilon^*(R,S,c)$
\[
\|\mathcal{F}_{\varepsilon}^n(f_0)-f_*^{\varepsilon}\|_{BV} \leq C \gamma^n \|f_0-f_*^{\varepsilon}\|_{BV}\quad\text{for all }f_0 \in \mathcal{C}_{R,S}^c\text{ and }n\in\mathbb{N}.
\]
\end{prop}
\begin{proof}
It is obviously enough to prove this proposition for sufficiently large $R,S$ and $c$.
In particular we can assume that $R>R^*,S>S^*,c>c^*$ and also $\varepsilon<\varepsilon_0^*$, where $R^*,S^*,c^*,\varepsilon_0^*$
are chosen as in Lemma~\ref{lem:1}.
The following proof is strongly inspired by the proof of \cite[Theorem 4]{keller1982stochastic}. We start by proving a lemma similar to \cite[Lemma 8]{keller1982stochastic}.
\begin{lem} \label{lemma8}
There exist $0 < \varepsilon_1^* < \varepsilon^*_0$, $\beta<1$ and $c_1>0$ such that for $0 \leq \varepsilon < \varepsilon_1^*$
\[
\| P_TP_{\Phi_{n}}\dots P_TP_{\Phi_{1}}u\|_{BV} \leq c_1 \cdot \beta^n \|u\|_{BV}
\]
for any $\Phi_{i}=\Phi_{f_i}$ for which $f_i \in \mathcal{C}_{R,S}^c$,
any
$u \in BV_{0}=\{v \in BV, \text{ } \int v\text{ d}\lambda=0 \}$ and any $n \in \mathbb{N}$.
\end{lem}
\begin{proof}
Let $\mathcal{Q}_{\varepsilon,n}=P_TP_{\Phi_{n}}\dots P_TP_{\Phi_{1}}$, and let $\Phi_*$ be the coupling function associated to the invariant density $f_*^{\varepsilon}$. The lemma would be immediate if $f_i=f_*^{\varepsilon}$ held for all of the densities. So what we need to show is that $\mathcal{Q}_{\varepsilon,n}$ is close to $(P_TP_{\Phi_*})^n$ in a suitable sense.
The proof has three main ingredients: we first show that $\mathcal{Q}_{\varepsilon,n}: BV_0 \to BV_0$ is a uniformly bounded operator. The second fact we are going to see is that $(P_TP_{\Phi_*})^N: BV_0 \to L^1$ is a bounded operator and the operator norm can be made suitably small by choosing $N$ large enough. Lastly, referring to Claim~\ref{l1} we argue that the norm of $P_{\Phi_k}-P_{\Phi_*}: BV_0 \to L^1$ is of order $\varepsilon$. These three facts will imply that $\mathcal{Q}_{\varepsilon,n}$ is a contraction on $BV_0$ for $n$ large enough.
Remember that $P_T P_{\Phi_k}=P_{F_k}$, where $F_k$ is a $C^2$ expanding map of $\mathbb{T}$. Hence it is mixing and satisfies a Lasota-Yorke type inequality. More precisely,
\begin{align} \label{ly}
\|P_T P_{\Phi_k} u\|_{BV} &= \|P_{F_k} u\|_{BV} \leq \|u\|_1+\frac{1}{\inf|F_k'|}\text{var}(u)+\max_{ i=1,\dots,N}\sup \frac{|(F^{-1,i}_k)''|}{|(F^{-1,i}_k)'|}\|u\|_1 \nonumber \\
& \leq \frac{1}{\omega(1-\textnormal{var}epsilon)}\|u\|_{BV}+\left(1+\tilde{D} \right)\|u\|_1.
\end{align}
where $\tilde{D}=\max_{ i=1,\dots,N}\sup \frac{|(F^{-1,i}_k)''|}{|(F^{-1,i}_k)'|}$. Let
\begin{equation}\label{eq:GK1}
\alpha=\frac{1}{\omega(1-\varepsilon_0^*)} \quad \text{and} \quad K_0=\frac{1+\tilde{D}}{1-\alpha}.
\end{equation}
Our assumptions on $\varepsilon_0^*$ already guarantee that $\omega(1-\varepsilon_0^*) > 1$, implying $\alpha < 1$. Then by applying \eqref{ly} repeatedly we get
\begin{equation}\label{eq:GK2}
\|\mathcal{Q}_{\varepsilon,n} u\|_{BV} \leq \alpha^n \|u\|_{BV}+K_0\|u\|_1 \leq (\alpha^n+K_0)\|u\|_{BV}.
\end{equation}
Hence
\begin{equation} \label{unif}
\|\mathcal{Q}_{\varepsilon,n}\|_{BV} \leq K_0+1.
\end{equation}
A consequence of the fact that $F_{f_*^{\varepsilon}}$ is mixing and $P_{f_*^{\varepsilon}}=P_TP_{\Phi_*}$ satisfies \eqref{ly} is that the spectrum of $P_TP_{\Phi_*}$ consists of the simple eigenvalue 1 and a part contained in a disc of radius $r < 1$ (see e.g.~\cite{baladi2000positive}). From this it follows that there exists an $N \in \mathbb{N}$ such that
\begin{equation}\label{eq:GK3}
\|(P_TP_{\Phi_*})^Nu\|_1 \leq \frac{1}{8K_0}\|u\|_{BV} \quad \forall u \in BV_0.
\end{equation}
Choose $N$ larger if necessary so that we also have
\[
\alpha^N(K_0+1) < \frac{1}{4}.
\]
Now according to Claim~\ref{l1} from the proof of Lemma~\ref{lemma_cont} we have
\[
\|(P_{\Phi_k}-P_{\Phi_*})u\|_1 \leq \text{const} \cdot \varepsilon \|u\|_{BV}\|f_k-f_*\|_1,
\]
where $f_k$ is the density corresponding to $\Phi_k$. Using this,
\begin{align*}
|\|\mathcal{Q}_{\varepsilon,N} u\|_1-\|(P_TP_{\Phi_{*}})^Nu\|_1| &\leq \sum_{j=1}^N \|P_TP_{\Phi_N}\dots P_TP_{\Phi_{j+1}}P_T(P_{\Phi_{j}}-P_{\Phi_{*}})(P_TP_{\Phi_{*}})^{j-1}u\|_1 \\
& \leq \text{const}_N \varepsilon \|u\|_{BV}.
\end{align*}
Combined with \eqref{eq:GK2} -- \eqref{eq:GK3} this implies for sufficiently small $\varepsilon_1^*$ (depending on $N$) and all $\varepsilon\in[0,\varepsilon_1^*)$
\begin{align*}
\|\mathcal{Q}_{\varepsilon,2N} u\|_{BV} &\leq \alpha^N\|\mathcal{Q}_{\varepsilon,N} u\|_{BV}+K_0\|\mathcal{Q}_{\varepsilon,N} u\|_1 \\
& \leq \alpha^N(K_0+1)\|u\|_{BV}+K_0\left(\frac{1}{8K_0}+\text{const}_N \varepsilon \right)\|u\|_{BV} \\
& \leq \frac{1}{2}\|u\|_{BV}.
\end{align*}
The lemma follows if one observes that $\mathcal{Q}_{\varepsilon,n}(BV_0) \subseteq BV_0$ and $\|\mathcal{Q}_{\varepsilon,n}\|_{BV} \leq K_0+1$ for all $n$ by \eqref{unif}.
\end{proof}
Now we move on to the proof of Proposition \ref{propconv}. We remind the reader of the notation $f_{n}=\mathcal{F}_{\varepsilon}^n(f_0)$. Write
\begin{align*}
f_{n+1}-f_*^{\varepsilon}&=P_TP_{\Phi_n}(f_n-f_*^{\varepsilon})+P_T(P_{\Phi_n}-P_{\Phi_*})f_*^{\varepsilon} \\
&\hspace{0.22cm}\vdots \\
&=P_TP_{\Phi_n}\dots P_TP_{\Phi_0}(f_0-f_*^{\varepsilon})+\sum_{k=0}^n P_TP_{\Phi_n}\dots P_TP_{\Phi_{k+1}}P_T(P_{\Phi_k}-P_{\Phi_*})f_*^{\varepsilon}
\end{align*}
Then Lemma \ref{lemma8} implies that
\begin{equation}\label{eq:GK4}
\|f_{n+1}-f_*^{\varepsilon}\|_{BV} \leq c_1 \beta^{n+1}\|f_0-f_*^{\varepsilon}\|_{BV}+c_1 \sum_{k=0}^n \beta^{n-k}\|P_T(P_{\Phi_k}-P_{\Phi_*})f_*^{\varepsilon}\|_{BV}.
\end{equation}
\begin{claim} \label{claim2} Let $\varphi \in \mathcal{C}_{R,S}^c$ and $f_1,f_2 \in \tilde{\mathcal{C}}_{R,S}^c=\{ \text{var}(f) \leq R, |f'| \leq S, \text{Lip}(f') \leq c \}$. Denote $\Phi_{f_1}=\Phi_1$, $\Phi_{f_2}=\Phi_2$. Then
\[
\|(P_{\Phi_1}-P_{\Phi_2})\varphi\|_{BV} \leq \varepsilon K(R,S,c) \|f_1-f_2\|_{BV}
\]
for some constant $K=K(R,S,c)$.
\end{claim}
Suppose Claim~\ref{claim2} holds (we are going to prove it later). This implies that
\begin{align*}
\|P_T(P_{\Phi_k}-P_{\Phi_*})f_*^{\varepsilon}\|_{BV} \leq \varepsilon K\|P_T\|_{BV}\|f_k-f_*^{\varepsilon}\|_{BV}
\end{align*}
Choose $\gamma\in(\beta,1)$ and $C>c_1$ where $\beta$ and $c_1$ are the constants from \eqref{eq:GK4}. Then, by using induction, we get
\begin{align*}
\|f_{n+1}-f_*^{\varepsilon}\|_{BV} &\leq c_1 \beta^{n+1}\|f_0-f_*^{\varepsilon}\|_{BV}+c_1\varepsilon K(R,S,c) \|P_T\|_{BV} \sum_{k=0}^n \beta^{n-k} \|f_k-f_*^{\varepsilon}\|_{BV} \\
& \leq c_1 \beta^{n+1}\|f_0-f_*^{\varepsilon}\|_{BV}+\varepsilon c_2(R,S,c) \sum_{k=0}^n \beta^{n-k} C\gamma^k \|f_0-f_*^{\varepsilon}\|_{BV} \\
& \leq c_1 \beta^{n+1}\|f_0-f_*^{\varepsilon}\|_{BV}+\varepsilon c_2(R,S,c) C \beta^n \|f_0-f_*^{\varepsilon}\|_{BV} \sum_{k=0}^n \left( \frac{\gamma}{\beta}\right)^k \\
& \leq c_1 \gamma^{n+1}\|f_0-f_*^{\varepsilon}\|_{BV}+\varepsilon c_3(R,S,c) C \gamma^n \|f_0-f_*^{\varepsilon}\|_{BV},
\end{align*}
so if we choose $\varepsilon^*(R,S,c):=\min\left\{\varepsilon_1^*, \frac{\gamma(C-c_1)}{c_3(R,S,c)C}\right\}$, then
\[
\|f_{n+1}-f_*^{\varepsilon}\|_{BV} \leq C \gamma^{n+1} \|f_0-f_*^{\varepsilon}\|_{BV}.
\]
This concludes the proof of Proposition~\ref{propconv} and thus the proof of Theorem~\ref{theomain}. What is left is the proof of Claim~\ref{claim2}.
\emph{Proof of Claim~\ref{claim2}.} First note that as $\int (P_{\Phi_1}-P_{\Phi_2})\varphi\text{ d}\lambda=0$,
\[
\|(P_{\Phi_1}-P_{\Phi_2})\varphi\|_{BV} \leq \frac{3}{2} \textnormal{var}((P_{\Phi_1}-P_{\Phi_2})\varphi),
\]
so we only need to give the appropriate bound on the total variation.
\begin{align*}
\text{var}((P_{\Phi_1}-P_{\Phi_2})\textnormal{var}phi)&=\int \left |
\left( \frac{\textnormal{var}phi}{\Phi'_1} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi}{\Phi'_2} \circ \Phi^{-1}_2 \right)' \right|\text{ d}\lambda
\\
&=\int \left | \frac{\textnormal{var}phi'}{(\Phi'_1)^2} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi'}{(\Phi'_2)^2} \circ \Phi^{-1}_2 +\frac{\textnormal{var}phi \cdot \Phi_2''}{(\Phi_2')^3} \circ \Phi_2^{-1}-\frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \circ \Phi_1^{-1} \right|\text{ d}\lambda \\
& \leq \underbrace{\int \left | \frac{\textnormal{var}phi'}{(\Phi'_1)^2} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi'}{(\Phi'_2)^2} \circ \Phi^{-1}_2 \right|\text{ d}\lambda}_{(C)} + \underbrace{\int \left | \frac{\textnormal{var}phi \cdot \Phi_2''}{(\Phi_2')^3} \circ \Phi_2^{-1}-\frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \circ \Phi_1^{-1} \right|\text{ d}\lambda}_{(D)}
\end{align*}
We start by giving a bound for the term $(C)$.
\[
(C) \leq \int \underbrace{\left | \frac{\textnormal{var}phi'}{(\Phi'_1)^2} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi'}{(\Phi'_2)^2} \circ \Phi^{-1}_1 \right|\text{ d}\lambda}_{(C1)}+\underbrace{\int \left | \frac{\textnormal{var}phi'}{(\Phi'_2)^2} \circ \Phi^{-1}_1-\frac{\textnormal{var}phi'}{(\Phi'_2)^2} \circ \Phi^{-1}_2 \right|\text{ d}\lambda}_{(C2)}
\]
As for $(C1)$,
\begin{align*}
(C1) &=\int |\textnormal{var}phi'| \cdot \left|\frac{1}{\Phi_1'}-\frac{\Phi_1'}{(\Phi_2')^2} \right|\text{ d}\lambda=\int |\textnormal{var}phi'| \cdot \left|\frac{(\Phi_2')^2-(\Phi_1')^2}{(\Phi_2')^2\Phi_1'} \right|\text{ d}\lambda \\
&\leq \max \left| \frac{\textnormal{var}phi'(\Phi_1'+\Phi_2')}{(\Phi_2')^2 \Phi_1'}\right|\int |\Phi_1'-\Phi_2'|\text{ d}\lambda
\leq \frac{2S(1+\textnormal{var}epsilon R)}{(1-\textnormal{var}epsilon R)^3} \cdot \textnormal{var}epsilon \|f_1-f_2\|_1 \leq \textnormal{var}epsilon K_{C1}\|f_1-f_2\|_{BV}.
\end{align*}
Note that $1-\varepsilon R > 0$ by our choice of $R$ and $\varepsilon$. $(C2)$ can be bounded the same way as term (A) in the proof of Claim~\ref{l1}, so we have
\begin{align*}
(C2) & \leq \left \|\frac{\textnormal{var}phi'}{\Phi_2'} \right \|_{BV} \frac{2\textnormal{var}epsilon}{1-\textnormal{var}epsilon R}\|f_1-f_2\|_1 \\
&\leq \left(\left \|\frac{\textnormal{var}phi'}{\Phi_2'} \right \|_{1}+\sup|\textnormal{var}phi'|\textnormal{var}\left(\frac{1}{\Phi_2'}\right)+\textnormal{var}|\textnormal{var}phi'|\sup\left(\frac{1}{\Phi_2'}\right) \right) \frac{2\textnormal{var}epsilon}{1-\textnormal{var}epsilon R}\|f_1-f_2\|_1 \\
& \leq \frac{2\textnormal{var}epsilon(R+S^2\textnormal{var}epsilon/(1-\textnormal{var}epsilon)+c)}{1-\textnormal{var}epsilon R}\|f_1-f_2\|_1 \leq \textnormal{var}epsilon K_{C2}\|f_1-f_2\|_{BV}.
\end{align*}
We now move on to bounding $(D)$.
\begin{align*}
(D) &\leq \underbrace{\int \left | \frac{\textnormal{var}phi \cdot \Phi_2''}{(\Phi_2')^3} \circ \Phi_2^{-1}-\frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \circ \Phi_2^{-1} \right|\text{ d}\lambda}_{(D1)}+\underbrace{\int \left | \frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \circ \Phi_2^{-1}-\frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \circ \Phi_1^{-1} \right|\text{ d}\lambda}_{(D2)}
\end{align*}
We start with $(D1)$.
\begin{align*}
(D1)&=\int \left | \frac{\textnormal{var}phi \cdot \Phi_2''}{(\Phi_2')^2}-\frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^3} \cdot \Phi_2' \right|\text{ d}\lambda=\int |\textnormal{var}phi(y)| \cdot \left| \frac{\textnormal{var}epsilon f_2' \left(y+\frac{1}{2}\right)}{[\Phi_2'(y)]^2}-\frac{\textnormal{var}epsilon f_1' \left(y+\frac{1}{2}\right)\Phi_2'(y)}{[\Phi_1'(y)]^3} \right|\text{ d}y \\
&\leq \textnormal{var}epsilon \max \left|\frac{\textnormal{var}phi}{(\Phi_2')^2(\Phi_1')^3} \right|\int |f_2'\left(y+\frac{1}{2}\right)[\Phi_1'(y)]^3-f_1'\left(y+\frac{1}{2}\right)[\Phi_2'(y)]^3|\text{ d}y \\
& \leq \frac{\textnormal{var}epsilon(1+R)}{(1-\textnormal{var}epsilon R)^5}\bigg(\underbrace{\int|f_2'\left(y+\frac{1}{2}\right)([\Phi_1'(y)]^3-[\Phi_2'(y)]^3)|\text{ d}y}_{(D11)}\\
&+\underbrace{\int |[f_1'\left(y+\frac{1}{2}\right)-f_2'\left(y+\frac{1}{2}\right)][\Phi_2'(y)]^3|\text{ d}y}_{(D12)} \bigg)
\end{align*}
We can bound $(D11)$ and $(D12)$ in the following way:
\begin{align*}
(D11) & \leq \max|f_2'((\Phi_1')^2-2\Phi_1'\Phi_2'+(\Phi_2')^2)| \int |\Phi_1'-\Phi_2'|\text{ d}\lambda \leq 4S(1+\textnormal{var}epsilon R)^2 \textnormal{var}epsilon \|f_1-f_2\|_1 \\
(D12) & \leq \max|(\Phi_2')^3|\textnormal{var}(f_1-f_2) \leq (1+\textnormal{var}epsilon R)^3 \|f_1-f_2\|_{BV}
\end{align*}
Summarizing the bound for $(D1)$ we see that
\[
(D1) \leq \textnormal{var}epsilon K_{D1}\|f_1-f_2\|_{BV}.
\]
The last term left to bound is $(D2)$. This term can be bounded as term (A) in the proof of Claim~\ref{l1}, so we have
\begin{align*}
(D2) &\leq 2\left \| \frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^2} \right \|_{BV} \frac{\textnormal{var}epsilon}{1-\textnormal{var}epsilon R} \|f_1-f_2\|_1 \leq 2\left ( \max \left| \frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^2} \right|+ \text{Lip} \left( \frac{\textnormal{var}phi \cdot \Phi_1''}{(\Phi_1')^2} \right) \right ) \frac{\textnormal{var}epsilon}{1-\textnormal{var}epsilon R} \|f_1-f_2\|_1 \\
& \leq 2 \left ( \frac{\textnormal{var}epsilon (1+R)S}{(1-\textnormal{var}epsilon R)^2}+ \textnormal{var}epsilon K(R,S) \right ) \frac{\textnormal{var}epsilon}{1-\textnormal{var}epsilon R} \|f_1-f_2\|_1 \\
&\leq \textnormal{var}epsilon K_{D2}\|f_1-f_2\|_{BV}
\end{align*}
In conclusion,
\begin{align*}
\|(P_{\Phi_1}-P_{\Phi_2})\varphi\|_{BV} &\leq \varepsilon \cdot \frac{3}{2}(K_{C1}+K_{C2}+K_{D1}+K_{D2})\|f_1-f_2\|_{BV}.
\end{align*}
\end{proof}
\section{Proof of Theorem~\ref{stability}} \label{s:theo2proof}
We remind the reader of Theorem~\ref{stability}: It states that when $R,S,c$ and $\varepsilon^*$ are chosen as in Theorem~\ref{theomain}, then for all $0 \leq \varepsilon,\varepsilon' < \varepsilon^{*}$ we have
\[
\|f_*^{\varepsilon}-f_*^{\varepsilon'}\|_{BV} \leq K|\varepsilon-\varepsilon'|,
\]
for some $K>0$, where $f_*^{\varepsilon}$ and $f_*^{\varepsilon'}$ are the fixed densities of $\mathcal{F}_{\varepsilon}$ and $\mathcal{F}_{\varepsilon'}$, respectively.
We now proceed to prove this. First observe that the choice of $R,S$ and $c$ implies that the fixed density of $\mathcal{F}_{\varepsilon}$ is in $\mathcal{C}_{R,S}^c$ for all $0 \leq \varepsilon < \varepsilon^*$. In particular, $f_*^{\varepsilon'} \in \mathcal{C}_{R,S}^c$.
Let $f$ be an element of $\mathcal{C}_{R,S}^c$. It is a consequence of Theorem~\ref{theomain} that there exists an $N \in \mathbb{N}$ such that
\[
\|\mathcal{F}_{\varepsilon'}^Nf-f_*^{\varepsilon'}\|_{BV} \leq \lambda\|f-f_*^{\varepsilon'}\|_{BV}
\]
for some $0 < \lambda < 1$.
This implies that
\begin{align*}
\|\mathcal{F}_{\varepsilon}^Nf-f_*^{\varepsilon'}\|_{BV} &\leq \|\mathcal{F}_{\varepsilon}^Nf-\mathcal{F}_{\varepsilon'}^N f\|_{BV}+\|\mathcal{F}_{\varepsilon'}^Nf-f_*^{\varepsilon'}\|_{BV} \\
& \leq \|\mathcal{F}_{\varepsilon}^Nf-\mathcal{F}_{\varepsilon'}^N f\|_{BV}+\lambda\|f-f_*^{\varepsilon'}\|_{BV}.
\end{align*}
\begin{lem} \label{iterN_general}
For each $N \in \mathbb{N}$ there exists an $a_N > 0$ such that if $f \in \mathcal{C}_{R,S}^c$ and $0 \leq \varepsilon , \varepsilon' < \varepsilon^*$, then $\|\mathcal{F}_{\varepsilon}^Nf-\mathcal{F}_{\varepsilon'}^N f\|_{BV} \leq a_N |\varepsilon-\varepsilon'|$.
\end{lem}
We are going to give the proof of this lemma later. Using this result we have that
\[
\|\mathcal{F}_{\varepsilon}^Nf-f^{\varepsilon'}_*\|_{BV} \leq a_N |\varepsilon-\varepsilon'|+\lambda\|f-f_*^{\varepsilon'}\|_{BV}.
\]
Let $B(h,r)$ denote the ball in $BV$ with center $h$ and radius $r$. We claim that $\mathcal{F}_{\varepsilon}^N$ leaves the ball $B(f_*^{\varepsilon'},\bar{r})$ invariant, where $\bar{r}=\frac{a_N |\varepsilon-\varepsilon'|}{1-\lambda}$. To see this, suppose that $\|f-f_*^{\varepsilon'}\|_{BV} \leq \bar{r}$. Then indeed,
\[
\|\mathcal{F}_{\varepsilon}^Nf-f_*^{\varepsilon'}\|_{BV} \leq a_N|\varepsilon-\varepsilon'|+\lambda \bar{r}=a_N|\varepsilon-\varepsilon'|+\frac{\lambda a_N|\varepsilon-\varepsilon'|}{1-\lambda}=\frac{a_N|\varepsilon-\varepsilon'|}{1-\lambda}=\bar{r}.
\]
Note that Theorem~\ref{theomain} implies that
$\lim_{k\to\infty}\|\mathcal{F}_{\varepsilon}^{kN}(f_0)-f^{\varepsilon}_*\|_{BV}=0$ for all $f_0\in \mathcal{C}_{R,S}^c$. Note further that $B(f_*^{\varepsilon'},\bar{r}) \cap \mathcal{C}_{R,S}^c \neq \emptyset$: it contains, for instance, $f_*^{\varepsilon'}$ itself. Starting from such an $f_0$, the iterates $\mathcal{F}_{\varepsilon}^{kN}(f_0)$ stay in the closed ball $B(f_*^{\varepsilon'},\bar{r})$ (and in $\mathcal{C}_{R,S}^c$) and converge to $f_*^{\varepsilon}$ in $BV$. Hence $f_*^{\varepsilon} \in B(f_*^{\varepsilon'},\bar{r})$. This means that
\[
\|f_*^{\varepsilon}-f_*^{\varepsilon'}\|_{BV} \leq \frac{a_N |\varepsilon-\varepsilon'|}{1-\lambda} \eqqcolon K|\varepsilon-\varepsilon'|,
\]
which is exactly the statement to prove, so we are only left to prove Lemma~\ref{iterN_general}.
\emph{Proof of Lemma~\ref{iterN_general}.} Suppose (without loss of generality) that $0 \leq \varepsilon' \leq \varepsilon$. Notice that if we use the notation
\[
\Phi^{\varepsilon}_f(x)=x+\varepsilon \int_{\mathbb{T}} g(y-x)f(y)\text{ d}y
\]
then
\[
\Phi^{\varepsilon'}_f=\Phi^{\varepsilon}_{\varepsilon'f/\varepsilon}.
\]
By Claim~\ref{claim2},
\begin{equation} \label{lenyeg_general}
\|(P_{\Phi_f^{\varepsilon}}-P_{\Phi_{\varepsilon'f/\varepsilon}^{\varepsilon}})\varphi\|_{BV} \leq K_1\varepsilon\|f-\varepsilon'f/\varepsilon\|_{BV}=K_1 |\varepsilon-\varepsilon'|\|f\|_{BV}
\end{equation}
holds. (Here we have used the fact that $\varepsilon'/\varepsilon \leq 1$, so $\varepsilon'f/\varepsilon \in \tilde{\mathcal{C}}^c_{R,S}$, hence Claim~\ref{claim2} can indeed be applied.) $P_T$ is a bounded operator on $BV$, let $\|P_T\|_{BV} \leq K_2$. This implies that
\begin{align} \label{N1_general}
\|\mathcal{F}_{\varepsilon}f-\mathcal{F}_{\varepsilon'}f\|_{BV}
&=
\|P_TP_{\Phi_f^\varepsilon}f-P_TP_{\Phi_{\varepsilon'f/\varepsilon}^\varepsilon}f\|_{BV} \leq \bar K|\varepsilon-\varepsilon'|\|f\|_{BV},
\end{align}
where $\bar{K}=K_1K_2$.
Now we prove the lemma by induction. Since $\|f\|_{BV} \leq 1+R$, the case $N=1$ holds by \eqref{N1_general} with the choice of $a_1=\bar{K}(1+R)$. Assume that
\begin{equation} \label{Nmin1_general}
\|\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}^{N-1}f\|_{BV} \leq a_{N-1}|\varepsilon-\varepsilon'|.
\end{equation}
Then using the $N=1$ case
\begin{align*}
\|\mathcal{F}_{\varepsilon}^{N}f-\mathcal{F}_{\varepsilon'}^{N}f\|_{BV}
&\leq
\|\mathcal{F}_{\varepsilon}\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon}^{N-1}f\|_{BV}+\|\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon'}^{N-1}f\|_{BV} \\
& \leq a_1 |\varepsilon-\varepsilon'| + \|\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon'}^{N-1}f\|_{BV}.
\end{align*}
Let $\varphi=\mathcal{F}_{\varepsilon}^{N-1}f$ and $\psi=\mathcal{F}_{\varepsilon'}^{N-1}f$. Then
\begin{align*}
\|\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}\mathcal{F}_{\varepsilon'}^{N-1}f\|_{BV}
&=\|P_TP_{\Phi_{\varphi}^{\varepsilon'}}\varphi-P_TP_{\Phi_{\psi}^{\varepsilon'}}\psi\|_{BV} \\
& \leq \|P_TP_{\Phi_{\varphi}^{\varepsilon'}}(\varphi-\psi)\|_{BV}+\|P_T(P_{\Phi_{\varphi}^{\varepsilon'}}-P_{\Phi_{\psi}^{\varepsilon'}})\psi\|_{BV} \\
& \leq (c_1 \cdot \beta+\varepsilon' \bar{K}) \|\varphi-\psi\|_{BV}
\end{align*}
by using Lemma~\ref{lemma8} with $n=1$ for the first term and Claim~\ref{claim2} for the second term in line two. Let $\bar{A}= c_1 \cdot \beta+\varepsilon^* \bar{K}$. Now we can proceed in the following way:
\begin{align*}
\|\mathcal{F}_{\varepsilon}^{N}f-\mathcal{F}_{\varepsilon'}^{N}f\|_{BV}
& \leq a_1 |\varepsilon-\varepsilon'| + \bar{A} \|\mathcal{F}_{\varepsilon}^{N-1}f-\mathcal{F}_{\varepsilon'}^{N-1}f\|_{BV} \\
& \leq a_1 |\varepsilon-\varepsilon'| + \bar{A} a_{N-1} |\varepsilon-\varepsilon'|
\end{align*}
by using \eqref{Nmin1_general}. So $a_N=a_1+ \bar{A} a_{N-1}$ is an appropriate choice.
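For completeness, this recursion can be unwound explicitly: since $a_1=\bar{K}(1+R)$,
\[
a_N=a_1\left(1+\bar{A}+\dots+\bar{A}^{N-1}\right)=\bar{K}(1+R)\sum_{j=0}^{N-1}\bar{A}^{\,j},
\]
which is finite for every fixed $N$, and this is all that the argument above requires.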
Now Lemma~\ref{iterN_general} and thus Theorem~\ref{stability} are proved.
\section{Proof of Theorem \ref{largeeps}}
\label{s:theo3proof}
We start by proving the statement about the support of our initial density shrinking to a single point. Remember that we denoted the smallest closed interval containing the support of $f$ by $\textnormal{supp}^*(f)$. The assumption (F') implies that this interval has length less than 1/2.
\begin{prop} \label{propshrink}
Suppose $f_0$ has property (F'). Let $\Omega=\max |T'|$ and $f_n=\mathcal{F}_{\varepsilon}^n(f_0)$. Then
\[
1-\frac{1}{\Omega} < \varepsilon < 1 \qquad \Rightarrow \qquad |\textnormal{supp}^* (f_n)| \underset{n \to \infty}{\to} 0 .
\]
\end{prop}
\begin{proof}
We are going to denote the length of an interval $I$ by $|I|$. First notice that $\textnormal{supp}^*(f_1)=F_{f_0}(\textnormal{supp}^*(f_0))$. Since $F_{f_0}$ is $C^2$, we have that
\begin{align*}
|\textnormal{supp}^*(f_1)|=|F_{f_0}(\textnormal{supp}^*(f_0))| &\leq \max_{x \in \textnormal{supp}^*(f_0)}|F'_{f_0}(x)||\textnormal{supp}^*(f_0)|, \\
&\leq \max|T'| \max_{x \in \textnormal{supp}^*(f_0)}|\Phi_{f_0}'(x)||\textnormal{supp}^*(f_0)|, \\
&\leq \Omega \max_{x \in \textnormal{supp}^*(f_0)}|\Phi_{f_0}'(x)||\textnormal{supp}^*(f_0)|, \\
& \leq \Omega(1-\varepsilon)|\textnormal{supp}^*(f_0)|,
\end{align*}
since $\Phi'_{f_0}(x)=1+\varepsilon\left(f_0(x+1/2)-1 \right)=1-\varepsilon$ for $x \in \textnormal{supp}^*(f_0)$; here $f_0(x+1/2)=0$ because $|\textnormal{supp}^*(f_0)|<1/2$.
Let $q=\Omega(1-\varepsilon)$. With this notation we have
\[
|\textnormal{supp}^*(f_1)| \leq q |\textnormal{supp}^*(f_0)|,
\]
and by iteration we get
\[
|\textnormal{supp}^*(f_n)| \leq q^n |\textnormal{supp}^*(f_0)|,
\]
implying $|\textnormal{supp}^* (f_n)| \underset{n \to \infty}{\to} 0$ if $q<1$, which holds if $1-\frac{1}{\Omega} < \varepsilon$.
\end{proof}
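As a quick numerical illustration of the constants (assuming, purely for the sake of example, a piecewise expanding map with $\Omega=\max|T'|=2$): the proposition then applies for $1/2<\varepsilon<1$, and for, say, $\varepsilon=0.9$ the contraction rate is
\[
q=\Omega(1-\varepsilon)=2\cdot 0.1=0.2,
\]
so that $|\textnormal{supp}^*(f_n)| \leq 0.2^{\,n}\,|\textnormal{supp}^*(f_0)|$.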
\begin{rem}
We required that the support of $f_0$ should fit in an interval of length less than 1/2. The significance of 1/2 comes from the fact that the distance function $g$ used in the coupling $\Phi$ has singularities at $\pm 1/2$. This is why $f_n(x+1/2)$ plays a role in $\Phi_{f_n}'(x)$.
However, singularities in the function $g$ are not vital to this phenomenon. It can be shown for some special continuous functions $g$ that if the support of the initial density $f_0$ is small in terms of some properties of $g$ (bounds on the supremum or the derivative), the supports of the densities $f_n$ shrink exponentially.
\end{rem}
Since the corresponding measures $\mu_n$ are all probability measures, this proposition implies that if the sequence $(\mu_n)$ converges, it can only converge to a Dirac measure supported on some $x^* \in \mathbb{T}$. But typically this is not the case: the sequence $\mu_n$ is in fact divergent and approaches a Dirac measure moving along the $T$-trajectory of some $x^* \in \textnormal{supp}^*(f_0)$. More precisely:
\begin{prop} \label{wasser}
Suppose $f_0$ has property (F') and $1-\frac{1}{\Omega} < \varepsilon$. Then there exists an $x^* \in \textnormal{supp}^*(f_0)$ such that
\[
\lim_{n \to \infty}W_1(\mu_n,\delta_{T^n(x^*)})=0,
\]
where $W_1(\cdot,\cdot)$ is the Wasserstein metric.
\end{prop}
\begin{proof} We start by proving a short lemma.
\begin{lem}
We claim that $\textnormal{supp}^* P_{\Phi_{f}}f \subset \textnormal{supp}^* f$.
\end{lem}
\begin{proof}
Consider the lifted coupling dynamics $\Phi_{f}^{\mathbb{R}}: \mathbb{R} \to \mathbb{R}$ which is defined as
\[
\Phi_{f}^{\mathbb{R}}(x)=x+\varepsilon \int_0^1 g(y-x)f(y)\text{d}y \qquad x \in \mathbb{R}.
\]
We have to study two cases separately. First, let $\textnormal{supp}^*f=[a,b]$, $0 \leq a < b \leq 1$. In this case, notice that
\[
\int_0^1 yf(y) \text{ d}y=M \in [a,b].
\]
Now
\begin{align*}
\Phi_{f}^{\mathbb{R}}(a)&=a+\varepsilon\int_0^1(y-a)f(y)\text{ d}y =a+\varepsilon(M-a), \\
\Phi_{f}^{\mathbb{R}}(b)&=b+\varepsilon\int_0^1(y-b)f(y)\text{ d}y =b-\varepsilon(b-M).
\end{align*}
Hence $[\Phi_{f}^{\mathbb{R}}(a), \Phi_{f}^{\mathbb{R}}(b)] \subset [a,b]$, and this implies $\textnormal{supp}^* P_{\Phi_{f}}f \subset \textnormal{supp}^* f$.
In the second case, $\textnormal{supp}^*f=[a,1]\cup[0,b]$, $0 < b < a < 1$. Now
\[
\int_0^b (y+1)f(y) \text{ d}y+\int_a^1 yf(y) \text{ d}y=\tilde{M} \in [a,1+b].
\]
On the one hand, we see that
\begin{align*}
\Phi_{f}^{\mathbb{R}}(a)&=a+\varepsilon\left(\int_0^b g(y-a)f(y) \text{ d}y+\int_a^1 g(y-a)f(y) \text{ d}y \right), \\
&= a+\varepsilon\left(\int_0^b (y-a+1)f(y) \text{ d}y+\int_a^1 (y-a)f(y) \text{ d}y \right), \\
&=a+\varepsilon(\tilde{M}-a),
\end{align*}
and
\begin{align*}
\Phi_{f}^{\mathbb{R}}(a)&\leq a+\int_0^b (y-a+1)f(y) \text{ d}y+\int_a^1 (y-a)f(y) \text{ d}y, \\
&=\tilde{M}.
\end{align*}
Furthermore,
\begin{align*}
\Phi_{f}^{\mathbb{R}}(1+b)&=(1+b)+\varepsilon\left(\int_0^b g(y-(1+b))f(y) \text{ d}y+\int_a^1 g(y-(1+b))f(y) \text{ d}y \right), \\
&=(1+b)+\varepsilon\left(\int_0^b (y-(1+b)+1)f(y) \text{ d}y+\int_a^1 (y-(1+b))f(y) \text{ d}y \right), \\
&=(1+b)-\varepsilon((1+b)-\tilde{M}),
\end{align*}
and
\begin{align*}
\Phi_{f}^{\mathbb{R}}(1+b)& \geq (1+b)-1 \cdot ((1+b)-\tilde{M})=\tilde{M}.
\end{align*}
Hence $[\Phi_{f}^{\mathbb{R}}(a), \Phi_{f}^{\mathbb{R}}(1+b)] \subset [a,1+b]$, and this implies $\textnormal{supp}^* P_{\Phi_{f}}f \subset \textnormal{supp}^* f$.
\end{proof}
The lemma implies that $\textnormal{supp}^* f_1=F_{f_0}(\textnormal{supp}^* f_0) \subset T(\textnormal{supp}^* f_0)$. Now the $T$-preimage of $\textnormal{supp}^* f_1$ is a finite collection of intervals; consider its component that coincides with $\Phi_{f_0}(\textnormal{supp}^* f_0)$. By the lemma above, this is a closed interval strictly contained in $\textnormal{supp}^* f_0$ (see Figure \ref{xstar} for an illustration). Arguing similarly for $\textnormal{supp}^* f_1$ and $\textnormal{supp}^* f_2$ and so on, we get a nested sequence of intervals with lengths shrinking to zero, since $|\Phi_{f_n}(\textnormal{supp}^* f_n)| \leq (1-\varepsilon)^n|\textnormal{supp}^* f_0|$ -- as calculated during the proof of Proposition \ref{propshrink}. By taking their intersection we get a point $x^*$ for which $T^n(x^*) \in \textnormal{supp}^* f_n$ holds.
\begin{figure}
\caption{Construction of $x^*$, first step.}
\label{xstar}
\end{figure}
Formally, by using the notation $F_{f_{n-1}}\dots F_{f_0}(\textnormal{supp}^* f_0)=F^n (\textnormal{supp}^* f_0)$,
\[
\{x^*\}=\bigcap_{n=0}^{\infty}T^{-n}F^n(\textnormal{supp}^* f_0).
\]
By the Kantorovich-Rubinstein theorem mentioned in Section \ref{s:results},
\begin{align*}
W_1(\mu_n,\delta_{T^n(x^*)})&=\sup \left|\int_0^1\ell(x)f_n(x) \text{d}x-\ell(T^n(x^*)) \right|,
\end{align*}
where the supremum is taken over all continuous functions $\ell: \mathbb{T} \to \mathbb{R}$ with Lipschitz constant $\leq 1$. Using that $T^n(x^*) \in \textnormal{supp}^* f_n$, we have that
\begin{align*}
\left|\int_0^1\ell(x)f_n(x) \text{d}x-\ell(T^n(x^*)) \right|&=\left|\int_0^1(\ell(x)-\ell(T^n(x^*)) )f_n(x) \text{d}x\right| \leq 1 \cdot |\textnormal{supp}^* f_n| \left| \int_0^1 f_n(x) \text{d}x \right| \\
& \leq [\Omega(1-\textnormal{var}epsilon)]^n.
\end{align*}
This implies $W_1(\mu_n,\delta_{T^n(x^*)}) \to 0$.
\end{proof}
\section{Concluding remarks} \label{s:remarks}
\subsection{Choice of the space $\mathcal{C}_{R,S}^c$}
\label{s:discussion}
As emphasized before, the space $\mathcal{C}_{R,S}^c$ is carefully chosen for the proof to work. In particular, the key statements that quantify the weak-coupling behavior of the transfer operators are Claim~\ref{l1} in the proof of Lemma~\ref{lemma_cont} and Claim~\ref{claim2} in the proof of Proposition~\ref{propconv}. In the proof of Claim~\ref{l1} regularity is not essential, since Lemma~\ref{lemma_cont} is needed in the course of proving the \emph{existence} of the invariant density, and we believe this is true in a much more general context. On the other hand, Proposition~\ref{propconv} concerns the \emph{stability} (and consequently the \emph{uniqueness}) of the invariant density. This is where the regularity of the densities plays a crucial role, more specifically in the proof of Claim~\ref{claim2}. The uniform bound on the total variation and the derivative of the densities is needed recurrently, while the uniform Lipschitz continuity of the derivative of the densities is essential in bounding the term (C2). This explains the definition of $\mathcal{C}_{R,S}^c$. In turn, we need to assume the smoothness of $T$ to ensure that $P_T$ preserves $\mathcal{C}_{R,S}^c$.
\subsection{Relation to other works and open problems}
The goal of this paper was to show that the results of \cite{selley2016mean} on stability and synchronization in an infinite system can be generalized to a wider class of coupled map systems. The concept was to consider the widest class of systems possible, but for the sake of clarity and brevity we refrained from some superficial generalizations which pose no mathematical complications. For example, $g|_{(-1/2,1/2)}$ need not be the identity: a very similar version of our results is most likely to hold when $g|_{(-1/2,1/2)}$ has sufficient smoothness properties and bounds on the derivative. On the other hand, our result is somewhat less explicit than the stability result obtained in \cite{selley2016mean}, since we do not have an expression for $f_*^{\varepsilon}$; we only managed to show that it is within order-$\varepsilon$ distance of $f_*^0$. A special case when it can be made explicit is when the unique acim of $T$ is Lebesgue. In fact, in this case the proof in \cite{selley2016mean} can be applied with minor modifications, showing that the constant density is a stable invariant distribution of the coupled map system for sufficiently small coupling strength.
In the introduction we remarked that if we choose the initial measure to be an average of $N$ point masses, we get a coupled map system of finite size. This can more conveniently be represented by a dynamical system on $\mathbb{T}^N=\mathbb{T}\times \dots \times \mathbb{T}$ with piecewise $C^2$ dynamics (the specific smoothness depending on the smoothness of $T$). Our papers \cite{selley2016mean} and \cite{selley2016} suggest that the analysis in the case of finite system size is particularly complicated and necessitates a thorough geometrical understanding. A direct consequence of this is that little can be proved when the dimension is large. If, for example, $g$ were a smooth function on $\mathbb{T}$, it could be shown that there exists a unique absolutely continuous invariant measure $\mu_{N}^{\varepsilon}$ for any system size $N$, once $\varepsilon$ is smaller than some $\varepsilon^*$ which does not depend on $N$. Simulations suggest that this is likely to be the case also when $g$ is defined by \eqref{gg}. If this could be verified, one could even aim to prove an analogue of part 1) of Theorem 3 in \cite{bardet2009stochastically}. Namely, along the same lines as in \cite{bardet2009stochastically}, one could show that for sufficiently small $\varepsilon$, the sequence $\mu_{N}^{\varepsilon} \circ \epsilon_N^{-1}$ converges weakly to $\delta_{\mu_*^{\varepsilon}}$, where
\[
\epsilon_N(x_1,\dots,x_N)=\frac{1}{N}\sum_{i=1}^N \delta_{x_i},
\]
and $\mu_*^{\varepsilon}$ is the unique invariant measure for the infinite system with coupling strength $\varepsilon$.
Finally, in our opinion the truly interesting question is the spectrum of possible limit behaviors in our class of systems. Are stability and synchronization, as shown in this paper to exist, the only possibilities? For example, one can easily imagine that for stronger coupling than the one producing stability, multiple locally attracting or repelling fixed densities can arise -- as seen in the case of the model introduced in \cite{bardet2009stochastically}. However, one would have to develop completely new analytical tools to prove such statements in our class of models. So on this end there is plenty of room for innovation.
\end{document} |
\begin{document}
\draft \title{Lowest-order relativistic corrections to the fundamental limits of nonlinear-optical coefficients}
\author{Nathan J. Dawson}
\address{Department of Physics, Case Western Reserve University, Cleveland, OH 44106, USA}
\email{dawsphys@hotmail.com}
\date{\today}
\begin{abstract}The effects of small relativistic corrections to the off-resonant polarizability, hyperpolarizability, and second hyperpolarizability are investigated. Corrections to the linear and nonlinear optical coefficients are demonstrated within the three-level ansatz, including corrections to the Kuzyk limits at semi-relativistic energies. It is also shown that the maximum value of the hyperpolarizability is more sensitive to lowest-order relativistic corrections than the maximum polarizability or second hyperpolarizability. These corrections illustrate how the intrinsic nonlinear-optical response is affected at semi-relativistic energies.\end{abstract}
\pacs{42.65.-k, 42.70.Mp, 42.70.Nq, 31.30.jx, 31.30.jc}
\maketitle
\section{Introduction}
Over a decade ago, Kuzyk \cite{kuzyk00.01} showed that there are fundamental limits to the off-resonant, electronic, nonlinear-optical response. This was discovered by manipulating both the on- and off-diagonal elements of the Thomas-Reiche-Kuhn (TRK) sum rule,\cite{thoma25.01,reich25.01,kuhn25.01} which limits the oscillator strengths of a quantum system with respect to fundamental constants in the non-relativistic regime. The oscillator strength is limited by the non-relativistic kinetic energy of a free particle, where field interactions from a four-potential do not contribute to the maximum oscillator strength. The intrinsic values of the hyperpolarizability and second hyperpolarizability in the non-relativistic limit have been studied in great detail,\cite{kuzyk00.02,kuzyk01.01,clays01.01,kuzyk03.03,clays03.01,Tripa04.01,perez05.01,kuzyk05.02,kuzyk06.01,kuzyk06.03,perez07.01,perez07.02,perez01.08,zhou08.01,kuzyk09.01,dawson11.02,dawson11.03,shafe13.01,kuzyk13.01} where there is a looming gap between the measured/calculated values and the fundamental limits in the non-relativistic regime.
There have been several approaches to reduce this gap using optimization routines on one-dimensional potentials, which have resulted in the confirmation of the apparent gap.\cite{zhou06.01,zhou07.02,wigge07.01,kuzyk08.01,watkins09.01,shafe10.01,watkins11.01,ather12.01,burke13.01} Another approach to breach the gap involves a systematic search for new classes of organic nonlinear optical molecules with multipolar charge-density analysis from crystallographic data.\cite{cole02.01,cole03.01,higgi12.01} New abstract methods of calculating large nonlinear responses have also been studied for low-dimensional quantum graphs.\cite{shafe12.01,lytel13.01,lytel13.02} All of these approaches focus on breaching the gap between the fundamental limit and the largest calculated (or directly measured) intrinsic values.
Although a four-potential does not contribute to the non-relativistic TRK sum rule, there may be other ways to change the limiting value on the oscillator strength, and thereby adjust the fundamental limits of the nonlinear optical response. Instead of focusing on optimizing the intrinsic value based on a specific potential, I will discuss the changes in the limiting constant of the TRK sum rules for a specific type of quantum system that is not properly represented by a closed Schr\"{o}dinger equation. Specifically, a relativistic system is examined which no longer has a simple $p^2/2m$ kinetic energy approximation, and therefore the energy-momentum relationship directly affects the fundamental limits. Thus, this paper is dedicated to the study of the fundamental limits of the hyperpolarizability and second hyperpolarizability for systems that have non-negligible relativistic energies.
\section{Theory}
In the far off-resonant limit (frequency approaches zero), the respective one-dimensional polarizability, hyperpolarizability, and second hyperpolarizability are \cite{orr71.01}
\begin{eqnarray}
\alpha &=& 2 e^2 \left. \displaystyle \sum_{n}^{\infty} \right.^{\prime} \frac{x_{0n} x_{n0}} {E_{n0}} , \label{eq:polar} \\
\beta &=& 3 e^3 \left. \displaystyle \sum_{n,l}^{\infty} \right.^{\prime} \frac{x_{0n} \bar{x}_{nl} x_{l0}} {E_{n0} E_{l0}} , \label{eq:hyper}
\end{eqnarray}
and
\begin{equation}
\gamma = 4 e^4 \left( \left.\displaystyle \sum_{n,l,k}^{\infty} \right.^{\prime} \frac{x_{0n} \bar{x}_{nl} \bar{x}_{lk} x_{k0}} {E_{n0} E_{l0} E_{k0}} - \left.\displaystyle \sum_{n,l}^{\infty} \right.^{\prime} \frac{x_{0n} x_{n0} x_{0l} x_{l0}} {E_{n0}^2 E_{l0}} \right) ,
\label{eq:sechyper}
\end{equation}
where $x$ is the position operator in one dimension, $e$ is the magnitude of an electron's charge, $E_{i}$ is the $i$th energy eigenvalue, and the prime restricts the summation by excluding the ground state. The shorthand notation, $x_{ij} = \left\langle i \left| x \right| j \right\rangle$ and $E_{ij} = E_{i} - E_{j}$, was introduced in Eqs. \ref{eq:polar}-\ref{eq:sechyper}. Note that the barred operator presented in the expressions for the nonlinear coefficients is the origin-specific expectation value, which is given as $\bar{x}_{ii} = x_{ii} - x_{00}$ when the indices are matched and as $\bar{x}_{ij} = x_{ij}$ otherwise.
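As a quick numerical illustration of Eqs. \ref{eq:polar}-\ref{eq:sechyper}, the following sketch evaluates the off-resonant sum-over-states expressions from a given (truncated) set of transition moments and energies; the function name, the use of NumPy, and the truncation to a finite number of states are illustrative choices rather than part of the derivation.
\begin{verbatim}
import numpy as np

def sos_coefficients(x, E, e=1.0):
    # x: (n, n) symmetric array of transition moments x_ij
    # E: (n,) array of energies, E[0] = ground state
    n = len(E)
    dE = E - E[0]                   # E_i0
    xbar = x - np.eye(n) * x[0, 0]  # barred operator: x_ii - x_00 on the diagonal
    exc = range(1, n)               # primed sums exclude the ground state
    alpha = 2*e**2 * sum(x[0, i]*x[i, 0]/dE[i] for i in exc)
    beta = 3*e**3 * sum(x[0, i]*xbar[i, j]*x[j, 0]/(dE[i]*dE[j])
                        for i in exc for j in exc)
    gamma = 4*e**4 * (
        sum(x[0, i]*xbar[i, j]*xbar[j, k]*x[k, 0]/(dE[i]*dE[j]*dE[k])
            for i in exc for j in exc for k in exc)
        - sum(x[0, i]*x[i, 0]*x[0, j]*x[j, 0]/(dE[i]**2*dE[j])
              for i in exc for j in exc))
    return alpha, beta, gamma
\end{verbatim}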
The TRK sum rules for the Dirac equation give the well-known result that all states sum to zero, where the zero value is due to the cancellation between the positive and the corresponding negative energy states.\cite{levin57.01} We wish to observe only the positive energy states of an electron in an atom or molecule, and therefore we must project onto the positive energy states. For a single electron system, the \textit{positive energy} TRK sum rules to lowest-relativistic order (ordered in $1/c$) have previously been derived \cite{leung86.01,cohen98.01,sinky06.01} using a Foldy-Wouthuysen (FW) transformation,\cite{foldy50.01}
\begin{eqnarray}
& & \displaystyle\sum_{l = 0}^{\infty} \left\langle k \left| \boldsymbol{r} \right| l \right\rangle \left\langle l \left| \boldsymbol{r} \right| n \right\rangle \left[E_l - \frac{1}{2}\left(E_k + E_n\right)\right] \nonumber \\
&=& \left\langle k' \left|\left(\frac{3\hbar^2}{2m} + \frac{5\hbar^4}{4m^3 c^2}\nabla^2\right)\right| n' \right\rangle ,
\label{eq:3DrelTRK}
\end{eqnarray}
where $\left| n' \right\rangle = e^{i{\cal S}} \left| n \right\rangle$ with $e^{i{\cal S}}$ a unitary operator. Equation \ref{eq:3DrelTRK} differs slightly from the results of Ref. \cite{cohen98.01}, as we have derived the relation for arbitrary eigenstates because both the on- and off-diagonal components of the sum rules are essential in determining the off-resonant, nonlinear-optical responses.\cite{kuzyk00.01} In the FW approach $p = p'$, where an operator, $A$, in the FW approximation is defined as $A' = e^{i{\cal S}} A e^{-i{\cal S}}$. Thus, the momentum operator commutes with $e^{i{\cal S}}$, and therefore $\left\langle k' \left| p^2 \right| n' \right\rangle = \left\langle k \left| e^{-i{\cal S}} p^2 e^{i{\cal S}} \right|n \right\rangle = \left\langle k \left| p^2 \right|n \right\rangle$. Note that while transforming the Hamiltonian for an electron interacting with fields, ${\cal S}$ is chosen at every iteration to remove all odd operators.
Since the $\nabla$ operator is related to the momentum operator, and the momentum operator in the Schr\"{o}dinger equation is equivalent to the momentum in the FW transformation, the right-hand side (RHS) of Eq. \ref{eq:3DrelTRK} may be written as
\begin{equation}
\left\langle k \left|\left(\frac{3\hbar^2}{2m} - \frac{5 \hbar^2 p^2}{4m^3 c^2}\right)\right| n \right\rangle . \nonumber
\label{eq:RHSofTRK}
\end{equation}
Thus, to the lowest-order relativistic correction, the RHS of the TRK sum rules given in Eq. \ref{eq:3DrelTRK} decreases for any real value of the momentum.
The lowest-order relativistic approximation to the Hamiltonian (for an electron in the presence of a scalar potential only) is given as
\begin{eqnarray}
H &=& H_{0} - \frac{p^4}{8m^3 c^2} + \frac{1}{4 m^2 c^2} \left(\boldsymbol{\sigma}\cdot \boldsymbol{p}\right) V\left(\boldsymbol{r}\right)\, \left(\boldsymbol{\sigma}\cdot \boldsymbol{p}\right) \nonumber \\
&-& \frac{1}{8 m^2 c^2} \left(p^2 V\left(\boldsymbol{r}\right) + V\left(\boldsymbol{r}\right)\,p^2 \right) , \label{eq:lowestHbefore}
\end{eqnarray}
where
\begin{equation}
H_{0} = \frac{p^2}{2m} + V\left(\boldsymbol{r}\right)
\label{eq:Hschrod}
\end{equation}
with $V\left(\boldsymbol{r}\right)$ denoting a spatially dependent scalar potential and $\boldsymbol{\sigma}$ representing the Pauli spin matrices. We may rewrite Eq. \ref{eq:lowestHbefore} as the well-known result \cite{grein00.01}
\begin{eqnarray}
H &=& \frac{p^2}{2m} - \frac{p^4}{8m^3 c^2} + V\left(\boldsymbol{r}\right) + \frac{\hbar}{4 m^2 c^2}\, \boldsymbol{\sigma}\cdot \left\{\left[\nabla V\left(\boldsymbol{r}\right)\right]\times \boldsymbol{p}\right\} \nonumber \\
&+& \frac{\hbar^2}{8 m^2 c^2} \nabla^2 V\left(\boldsymbol{r}\right) . \label{eq:lowestH}
\end{eqnarray}
The Hamiltonian with lowest-order relativistic corrections is quartic in the momentum. For central potentials, the spin-orbit term may be recast in terms of the angular momentum operator, which reduces the Hamiltonian to a quadratic in $p^2$.
There is an alternative method of reducing Eq. \ref{eq:lowestH} to a quadratic that does not require one to collapse the parameter space to the centrosymmetric limit: restricting the system to one dimension. In one dimension there is no orbital angular momentum, $\nabla V \times \boldsymbol{p} = 0$, and therefore the spin-orbit term vanishes.
Thus, Eq. \ref{eq:lowestH} reduces to a simplified quadratic equation in $p_{x}^2$, where
\begin{eqnarray}
H = \frac{p_{x}^2}{2m} - \frac{p_{x}^4}{8m^3 c^2} + V\left(x\right)
+ \frac{\hbar^2}{8 m^2 c^2} \nabla^2 V\left(x\right) . \label{eq:lowestH1D}
\end{eqnarray}
Note that the Darwin term still survives the one-dimensional approximation. Although this approach simplifies the study of generalized semi-relativistic interactions while maintaining a non-centrosymmetric parameter space, one should note that many recent advances in quantum chemistry have been introduced for numerically approximating specific relativistic systems. Most notable are the electrostatic-potential-ordered Douglas-Kroll-Hess method,\cite{dougl74.01,hess86.01,nakaj00.01,reihe12.01} the ordered regular approximations,\cite{lenth93.01,lenth94.01,lenth96.01,filat03.01} and others based on exact decoupling methods.\cite{filat03.02,kutze05.01,kutze06.01,kutze07.01}
By restricting ourselves to one dimension, we may write
\begin{equation}
\left\langle k \left| V + \frac{p_{x}^2}{2m} - \frac{p_{x}^4}{8m^3 c^2} + \frac{\hbar^2}{8 m^2 c^2} \frac{\partial^2 V}{\partial x^2} \right| n \right\rangle = E_n \, \delta_{k,n} . \label{eq:simpleham}
\end{equation}
Solving Eq. \ref{eq:simpleham} for $p_{x}^2$ gives
\begin{eqnarray}
\left\langle k \left| p_{x}^{2} \right| n \right\rangle &=& 2m^2c^2\delta_{k,n} - 2m^2c^2 \label{eq:12345} \\
&\times& \left\langle k \left|\sqrt{1 - \frac{2 \left(E_n- V\right)}{ m c^2} + \frac{\hbar^2}{4 m^3 c^4}\frac{\partial^2 V}{\partial x^2} } \right| n \right\rangle , \nonumber
\end{eqnarray}
where $\delta$ is the Kronecker delta function, and the negative root was chosen which reduces Eq. \ref{eq:12345} to the non-relativistic TRK sum rules as $1/c \rightarrow 0$. Using Eq. \ref{eq:12345}, the one-dimensional TRK sum rule with lowest-order relativistic corrections,
\begin{eqnarray}
& & \displaystyle\sum_{l = 0}^{\infty} \left\langle k \left| x \right| l \right\rangle \left\langle l \left| x \right| n \right\rangle \left[E_l - \frac{1}{2}\left(E_k + E_n\right)\right] \nonumber \\
&=& \left\langle k \left|\left(\frac{\hbar^2}{2m} - \frac{3 \hbar^2 p_{x}^2}{4m^3 c^2}\right)\right| n \right\rangle ,
\label{eq:1DrelTRK}
\end{eqnarray}
may be rewritten as
\begin{eqnarray}
& & \displaystyle\sum_{l = 0}^{\infty} \left\langle k \left| x \right| l \right\rangle \left\langle l \left| x \right| n \right\rangle \left[E_l - \frac{1}{2}\left(E_k + E_n\right)\right] \nonumber \\
&=& \frac{\hbar^2}{m}\left(\frac{3}{2}\lambda_{kn} - \delta_{k,n}\right),
\label{eq:1DrelTRKsubbed}
\end{eqnarray}
where
\begin{equation}
\lambda_{kn} = \left\langle k \left|\sqrt{1 - \frac{2}{ m c^2} \left(E_n- V\right) + \frac{\hbar^2}{4 m^3 c^4}\frac{\partial^2 V}{\partial x^2} } \right| n \right\rangle .
\label{eq:lambdakn}
\end{equation}
Under the current set of approximations, we take the element ($k=0$, $n=0$), or (0,0), which gives
\begin{equation}
\left|x_{10}\right|^2 E_{10} = \frac{\hbar^2}{m} \left(\frac{3}{2}\lambda_{00} - 1\right) - \displaystyle\sum_{l=2}^{\infty} \left|x_{l0}\right|^2 E_{l0} .
\label{eq:00eqfull}
\end{equation}
Considering the diagonal components and neglecting the Darwin term, there are two regimes that adjust the fundamental limit. If $E_{n} > V_{n,n}\left(x\right)$, then the electron is moving inside a potential and $\lambda_{nn}$ is real. This causes a decrease in the maximum oscillator strength. If the electron is expected to be outside a potential such that $E_{n} < V_{n,n}\left(x\right)$, then $\lambda_{nn}$ becomes imaginary, which cannot occur for bound states with positive energies. Therefore, the oscillator strength of a one-dimensional semi-relativistic system decreases with respect to the non-relativistic approximation; however, a competing parameter may increase the final numerical value of the off-resonant response for some systems (though not the intrinsic value), because the relativistic corrections reduce the transition energies with respect to those mapped from the non-relativistic Hamiltonian.
In prior studies that began with a Hamiltonian in the non-relativistic limit, it was shown that the largest nonlinear-optical responses occur when all other transition energies become much larger than $E_{10}$. In other words, the sum-over-states (SOS) expressions are dominated by the first excited-state transition. This is also true for relativistically corrected systems, as is evident from Eq. \ref{eq:00eqfull}. Thus, we adopt the same method as Kuzyk \cite{kuzyk00.01} and assume a three-level model. Then, Eq. \ref{eq:00eqfull} reduces to
\begin{equation}
\left|x_{10}\right|^2 E_{10} + \left|x_{20}\right|^2 E_{20} = \frac{\hbar^2}{m} \left(\frac{3}{2}\lambda_{00} - 1\right) .
\label{eq:00eq}
\end{equation}
Likewise, (1,1) produces the resultant equation
\begin{equation}
\left|x_{12}\right|^2 E_{21} - \left|x_{10}\right|^2 E_{10} = \frac{\hbar^2}{m} \left(\frac{3}{2}\lambda_{11} - 1\right) .
\label{eq:11eq}
\end{equation}
In the same manner as Eqs. \ref{eq:00eq} and \ref{eq:11eq} we take (1,0), which gives
\begin{equation}
x_{10} \bar{x}_{11} E_{10} + x_{12}x_{20} \left(E_{21} + E_{20}\right) = \frac{3 \hbar^2}{2m} \lambda_{10},
\label{eq:10eq}
\end{equation}
Note that the left-hand side of Eq. \ref{eq:10eq} is identical for (1,0) and (0,1) when we assume real transition moments, \textit{i}.\textit{e}., $x_{ij} = x_{ji}$.\cite{kuzyk01.01} Thus, it is of no surprise that the corresponding $\lambda$ parameter must also possess the property $\lambda_{10} = \lambda_{01}$. Finally, we take the matrix elements corresponding to (2,0), or (0,2), which gives
\begin{equation}
x_{20} \bar{x}_{22} E_{20} + x_{10}x_{12} \left(E_{10} - E_{21}\right) = \frac{3 \hbar^2}{2m} \lambda_{20} .
\label{eq:20eq}
\end{equation}
Note that Eqs. \ref{eq:10eq} and \ref{eq:20eq} contain off-diagonal components that are real and positive for well behaved systems.
Solving Eqs. \ref{eq:00eq}-\ref{eq:20eq} for the transition dipole moments, we find
\begin{eqnarray}
\left| x_{10} \right| &=& \frac{\hbar}{\sqrt{m E_{10}}} X \sqrt{\frac{3}{2}\lambda_{00} - 1} , \label{eq:x10fs} \\
\left| x_{12} \right| &=& \frac{\hbar}{\sqrt{m E_{10}}} \sqrt{\frac{E}{1-E}} \, G_{\lambda}\left(X\right) , \label{eq:x12fs} \\
\bar{x}_{11} &=& \frac{\hbar}{\sqrt{m E_{10}}} \left[ \displaystyle \frac{E-2}{\displaystyle \sqrt{1-E}} \frac{\displaystyle \sqrt{1-X^2}}{X} \, G_{\lambda}\left(X\right) \right. \nonumber \\
&+& \left. \frac{3\lambda_{10}}{2X \sqrt{\frac{3}{2}\lambda_{00} - 1}} \right] , \label{eq:x11fs} \\
\bar{x}_{22} &=& \frac{\hbar}{\sqrt{m E_{10}}} \left[ \displaystyle \frac{1-2E}{\displaystyle \sqrt{1-E}} \frac{X}{\displaystyle \sqrt{1-X^2}} \, G_{\lambda}\left(X\right) \right. \nonumber \\
&+& \left. \frac{3 \sqrt{E} \lambda_{20}}{2 \displaystyle \sqrt{1-X^2} \sqrt{\frac{3}{2}\lambda_{00} - 1}} \right] , \label{eq:x22fs}
\end{eqnarray}
and
\begin{equation}
\left| x_{20} \right| = \frac{\hbar}{\sqrt{m E_{10}}}\sqrt{E} \sqrt{1-X^2} \sqrt{\frac{3}{2}\lambda_{00} - 1} .
\label{eq:x20fs}
\end{equation}
where
\begin{equation}
G_{\lambda}\left(X\right) = \sqrt{X^2 \left(\frac{3}{2}\lambda_{00} - 1\right) + \frac{3}{2}\lambda_{11} - 1} .
\label{eq:Glx}
\end{equation}
Here, we used notation in line with previous expressions for the nonlinear-optical limits of non-relativistic systems, such that
\begin{equation}
X = \frac{\left|x_{10}\right|}{\left|x_{10}^{\mathrm{max}}\right|} \qquad \mathrm{and} \qquad E = \frac{E_{10}}{E_{20}} ,
\label{eq:energyfrac}
\end{equation}
where we can see that the maximum value for the $x_{10}$ transition moment is
\begin{equation}
\left|x_{10}^{\mathrm{max}}\right| = \frac{\hbar}{\sqrt{m E_{10}}} \sqrt{\frac{3}{2}\lambda_{00} - 1} .
\label{eq:xmax}
\end{equation}
To find an expression for the off-resonant polarizability, hyperpolarizability, and second hyperpolarizability of a three-level system, we substitute Eqs. \ref{eq:x10fs}-\ref{eq:xmax} into Eqs. \ref{eq:polar}-\ref{eq:sechyper}. The three-level polarizability, hyperpolarizability, and second hyperpolarizability reduce to
\begin{eqnarray}
\alpha_{3L}' &=& \frac{2 e^2 \hbar^2}{m E_{10}^{2}} \left[ X^2 + E^2\left(1 - X^2\right) \right] H_{\lambda}^{2} , \label{eq:alpharel3L}
\end{eqnarray}
\begin{widetext}
\begin{eqnarray}
\beta_{3L}' &=& \frac{6 e^3 \hbar^3}{ \sqrt{m^3 E_{10}^7}} H_\lambda \left[\displaystyle\sqrt{1-X^2} X \left(1 - E\right)^{3/2} \left(1 + \frac{3}{2}E + E^2\right) H_\lambda G_{\lambda}\left(X\right)
- 3 X \lambda_{10} - 3 \displaystyle\sqrt{1 - X^2} E^{7/2} \lambda_{20} \right] ,
\label{eq:betarel3L}
\end{eqnarray}
and
\begin{eqnarray}
\gamma_{3L}' &=& \frac{e^4 \hbar^4}{m^2 E_{10}^{5}} \left\{ 4\left[4 - \left(1 + 2 X^2 + 5 X^4\right) E^5 - \left(1 - 2 X^2 - 5 X^4\right) E^3 - \left(3 - 5 X^4\right) E^2 - 5 X^4\right] \right.\nonumber \\
&-& \left. 9 \left[ \left(1 - 2 X^2 + 5 X^4\right) E^5 + \left(2 X^2 - 5 X^4\right) E^3 + \left(4 X^2 - 5 X^4\right) E^2 + 5X^4 - 4X^2 \right] \lambda_{00}^2 \right. \nonumber \\
&-& \left. 6\left[4 - 4 X^2 + (4 X^2 - 3) E^2 + (4 X^2 - 1) E^3 - 4 X^2 E^5 \right] \lambda_{11} \right. \nonumber \\
&+& \left. 6 \left[ \left(2 + 10 X^4\right) E^5 + \left(1 - 10 X^4\right) E^3 + \left(3 + 4 X^2 - 10 X^4\right) E^2 + 10X^4 - 4X^2 - 4 \right]\lambda_{00} \right. \nonumber \\
&-& \left. 9\left[ 4 X^2 E^5 \left(1- 4X^2\right) E^3 + \left(3 - 4X^2\right) E^2 + 4X^2 - 4\right]\lambda_{00} \lambda_{11} + 9 \left(E^5 \lambda_{20}^2 + \lambda_{10}^2\right) \right. \nonumber \\
&-& \left. 12 H_\lambda G_{\lambda}\left(X\right) \left[ \lambda_{10} \left(2 + E\right)\displaystyle\sqrt{1-E}\displaystyle\sqrt{1-X^2} - \lambda_{20} X \displaystyle\sqrt{E\left(1-E\right)} \left(E^3 + 2 E^4\right) \right] \right\} ,
\label{eq:gammarel3L}
\end{eqnarray}
{}
{}
{}
{}
\end{widetext}
where
\begin{equation}
H_\lambda = \sqrt{\frac{3}{2}\lambda_{00}-1} .
\label{eq:Hlam}
\end{equation}
The primed coefficients in Eqs. \ref{eq:alpharel3L}-\ref{eq:gammarel3L} denote relativistic corrections to the TRK sum rules. Note that the energies in these primed equations for the nonlinear-optical coefficients are also relativistically corrected.
In the non-relativistic limit, \textit{i}.\textit{e}., when $c \rightarrow \infty$, Eqs. \ref{eq:alpharel3L}-\ref{eq:gammarel3L} reduce to the off-resonant, three-level model calculated from the non-relativistic TRK sum rules.\cite{kuzyk09.01,perez01.08} The polarizability, hyperpolarizability and second hyperpolarizability in the non-relativistic limit are given by
\begin{eqnarray}
\alpha_{3L} &=& \frac{e^2 \hbar^2}{m E_{10}^{2}} \left[X^2 + E^2 \left(1 - X^2 \right) \right] \label{eq:alphanonrel} \\
\beta_{3L} &=& \frac{3 e^3 \hbar^3}{2 \sqrt{2 m^3 E_{10}^{7}}} X \displaystyle\sqrt{1-X^4} \nonumber \\
&\times& \left(1-E\right)^{3/2} \left(1+\frac{3}{2}E+E^2\right)
\label{eq:betanonrel}
\end{eqnarray}
and
\begin{eqnarray}
\gamma_{3L} &=& \frac{e^4 \hbar^4}{m^2 E_{10}^{5}} \left[4 - 2(E^2-1)E^3 X^2 \right. \nonumber \\
&-& 5 \left(E-1\right)^2\left(E+1\right)\left(E^2+E+1\right)X^4 \nonumber \\
&-& \left. \left(E^3+E+3\right)E^2 \right] .
\label{eq:gammanonrel}
\end{eqnarray}
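For the polarizability this reduction can be verified directly: in the non-relativistic limit $\lambda_{00} \rightarrow 1$, so $H_{\lambda}^{2} \rightarrow 1/2$ and Eq. \ref{eq:alpharel3L} becomes
\[
\alpha_{3L}' \rightarrow \frac{e^2 \hbar^2}{m E_{10}^{2}} \left[ X^2 + E^2\left(1 - X^2\right) \right] = \alpha_{3L} .
\]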
\section{Discussion}
Transition moments (and expectation values of many types) in addition to diagonal energy/potential differences can appear in the relativistically corrected equation via the $\lambda_{ij}$ terms. If the values of $\lambda_{ij}$ are known for a specific potential, then the second hyperpolarizability can be approximated by Eq. \ref{eq:gammarel3L}. In other words, the inclusion of the momentum term in the TRK sum rules no longer gives a simple relationship between the transition moments and energies.
It is clear that the linear polarizability for all $X$ and $E$ is reduced by the lowest-order relativistic correction. The decrease is due to the presence of the $H_\lambda$ parameter, which satisfies $H_\lambda \leq 1/\sqrt{2}$, with $H_\lambda \rightarrow 1/\sqrt{2}$ in the non-relativistic limit. In $X$ and $E$ parameter space, the limit of the hyperpolarizability is located at $X = 1/\sqrt[4]{3}$ and $E = 0$. The resulting limit corresponds to a two-level system, which is not surprising given the relationships in Eq. \ref{eq:00eqfull}. Because the maximum is located when $1/E_{20} \rightarrow 0$, it seems counterintuitive that the maximum of the nonlinear-optical coefficients occurs when $X\neq1$; however, we can no longer think in terms of simple linear optics. When calculating nonlinear-optical coefficients, the intermediate states and excited-state sum rules are interwoven into Eqs. \ref{eq:polar} and \ref{eq:sechyper}. The limit of the hyperpolarizability of non-relativistic systems calculated using the three-level ansatz is given by
\begin{equation}
\beta_{\mathrm{max}} = \sqrt[4]{3}\frac{e^3 \hbar^3}{\sqrt{m^3 E_{10}^{7}}} .
\label{eq:betanonrelupper}
\end{equation}
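To put Eq. \ref{eq:betanonrelupper} on a numerical footing, the short sketch below evaluates $\beta_{\mathrm{max}}$ in SI units for an assumed transition energy of $1\,$eV; the energy value is an arbitrary illustrative choice.
\begin{verbatim}
import scipy.constants as sc

def beta_max(E10_eV):
    # Non-relativistic three-level limit, in SI units (C^3 m^3 J^-2)
    E10 = E10_eV * sc.e   # transition energy in joules
    return 3**0.25 * sc.e**3 * sc.hbar**3 / (sc.m_e**3 * E10**7)**0.5

print(beta_max(1.0))      # on the order of 1e-48 for a 1 eV gap
\end{verbatim}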
\begin{figure}
\caption{The intrinsic hyperpolarizability as a function of $\lambda_{00}$ and $\lambda_{11}$.}
\label{fig:betamax}
\end{figure}
The effect of the lowest-order relativistic kinetic energy correction on the fundamental limit of the hyperpolarizability may be studied by substituting $X = 1/\sqrt[4]{3}$ and $E = 0$ into Eq. \ref{eq:betarel3L}. The lowest-order relativistic correction to the limit of the hyperpolarizability, $\beta' \left(X,E\right)$, is given by
\begin{eqnarray}
\beta' \left(\sqrt[-4]{3},0\right) &=& \frac{2}{\sqrt[4]{3}} \frac{e^3 \hbar^3}{\sqrt{m^3 E_{10}^{7}}} \, H_\lambda \label{eq:maxrelsubbeta} \\
&\times& \left( \sqrt{6} H_\lambda \sqrt{\frac{3}{2} \lambda_{00} + \frac{3}{2} \lambda_{11} - 1} - 9\lambda_{10} \right) . \nonumber
\end{eqnarray}
Note that when $E=0$, the second excited-state energy is infinitely large; however, $E_{20}$ does not enter into the oscillator-strength corrections, as there is no $\lambda_{22}$ term. The same is true for any truncation, where there is no diagonal $\lambda_{pp}$ term for a system truncated at $p$ states. Thus, we may still assume that $E_{20} \rightarrow \infty$ without any obvious negative consequences.
The limit to the hyperpolarizability for increasingly relativistic systems is shown in Fig. \ref{fig:betamax}, where the intrinsic value, $\beta_{int}' = \beta'/\beta_{max}$, is plotted as a function of $\lambda_{00}$ and $\lambda_{11}$. We must place a lower bound on some parameters due to the low-order approximation. We observe that for real values of the off-resonant hyperpolarizability, $\lambda_{00}$ and $\lambda_{11}$ can have a minimum value of $2/3$. As shown in Fig. \ref{fig:betamax}(a), the lowest-order relativistic correction to the limit of the hyperpolarizability is reduced, or even negative, when $\lambda_{10} = 0$ while $\lambda_{00}$ and $\lambda_{11}$ increase. The hyperpolarizability is further reduced when the off-diagonal relativistic term, $\lambda_{10}$, is increased as illustrated in Fig. \ref{fig:betamax}(b). If we further increase $\lambda_{10}$ away from the non-relativistic limit, there are values of $\lambda_{00}$ and $\lambda_{11}$ that correspond to a negative hyperpolarizability that is greater in magnitude than the fundamental limit. These occurrences where the limit is broken appear for values of $\lambda_{11}$ that deviate from unity, but not for large deviations of $\lambda_{00}$, where the entire function of $\beta'$ is multiplied by $H_{\lambda}$. Thus, large values of $\lambda_{00}$ quickly decrease the effects of an increasing $\lambda_{10}$.
The red region shown in Fig. \ref{fig:betamax}(c) corresponds to the region that is opposite in sign and greater in magnitude to the fundamental limit when $\lambda_{10} = 0.2$, which is still within the stability boundaries of the lowest-order approximation. There is the possibility that higher-order relativistic corrections may lessen the effects of the lowest-order correction; however, introducing higher-order corrections into an analytical framework is quite complicated and beyond the scope of the present study. The lowest-order correction to the ($n$,$n$) sum rules appears to damp the total strength of the transition probabilities by increasing the momentum at semi-relativistic energies. Note that an exotic Hamiltonian with a small momentum correction of opposite sign to that of the lowest-order relativistic correction would instead produce a virtual increase in the total oscillator strength. Relativistic corrections to the ($n$,$k$) TRK sum rules, where $n\neq k$, appear to directly subtract from the total response as opposed to an apparent quadratic damping. These nonzero terms are what appear to allow the non-relativistic fundamental limit to be broken when scaled to semi-relativistic kinetic energies.
\begin{figure}
\caption{The maximum intrinsic value of the second hyperpolarizability as a function of $\lambda_{00}$ and $\lambda_{11}$.}
\label{fig:max}
\end{figure}
To get a general idea of how relativity affects the second hyperpolarizability, we first study the limits of the non-relativistic three-level model, Eq. \ref{eq:gammanonrel}. The upper limit of the non-relativistic second hyperpolarizability, in the reduced parameter space, is located at $E=0$ and $X=0$, which gives
\begin{equation}
\gamma_{\mathrm{max}} = \frac{4 e^4 \hbar^4}{m^2 E_{10}^{5}} .
\label{eq:gammanonrelupper}
\end{equation}
The lower limit is found when either $E = 1$, or when $E = 0$ and $X = 1$. For the non-relativistic case, the lower limit of the second hyperpolarizability is
\begin{equation}
\gamma_{\mathrm{min}} = -\frac{e^4 \hbar^4}{m^2 E_{10}^{5}} .
\label{eq:gammanonrellower}
\end{equation}
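These two bounds can be checked with a quick numerical scan of Eq. \ref{eq:gammanonrel} over the reduced parameter space (the grid resolution below is an arbitrary choice of this sketch); the scan returns $4$ and $-1$ in units of $e^4\hbar^4/m^2E_{10}^5$.
\begin{verbatim}
import numpy as np

X, E = np.meshgrid(np.linspace(0, 1, 401), np.linspace(0, 1, 401))
g = (4 - 2*(E**2 - 1)*E**3*X**2
     - 5*(E - 1)**2*(E + 1)*(E**2 + E + 1)*X**4
     - (E**3 + E + 3)*E**2)
print(g.max(), g.min())   # 4.0 and -1.0
\end{verbatim}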
We may now substitute the corresponding three-level energy and first transition moment fractions, $X$ and $E$, into the lowest-order corrected second hyperpolarizability expression to study the maximum value of semi-relativistic systems. After substituting the parameters associated with the maximum for the non-relativistic limit, Eq. \ref{eq:gammarel3L} reduces to
\begin{eqnarray}
\gamma' \left(0,0\right) &=& \frac{e^4 \hbar^4}{m^2 E_{10}^{5}} \left[16 + 9\lambda_{10} - 24\lambda_{11} + 3 \lambda_{00} \left(12 \lambda_{11} - 8\right) \right. \nonumber \\
&-& \left. 12 \lambda_{10} \sqrt{\left(3\lambda_{00} - 2\right) \left(3\lambda_{11} - 2\right)} \right] .
\label{eq:maxrelsub}
\end{eqnarray}
\begin{figure}
\caption{The minimum intrinsic value of the second hyperpolarizability as a function of $\lambda_{00}$ and $\lambda_{10}$.}
\label{fig:min}
\end{figure}
The lowest-order relativistic corrections to the second hyperpolarizability are shown in Fig. \ref{fig:max}. Note that the maximum intrinsic value, $\gamma_{int}'$, is $1$ and the minimum is $-1/4$.
As shown in Fig. \ref{fig:max}(a), the maximum possible second hyperpolarizability is reduced for a potential with negligible off-diagonal $\lambda$ parameters. The other two plots in Fig. \ref{fig:max} illustrate how a nonzero $\lambda_{10}$ further reduces the second nonlinear response from the non-relativistic maximum. Again, note that even though the intrinsic values are reduced, the net numerical values for the off-resonant response may be affected differently because of relativistic changes in $E_{10}$.
There are two regimes that are associated with the minimum value of the second hyperpolarizability. Focusing only on the minimum at the two-level limit, \textit{i}.\textit{e}. $E \rightarrow 0$, there is an intrinsic value of $-0.25$ when $X = 1$. The lower limit in this regime, with lowest-order relativistic corrections, is given by
\begin{equation}
\gamma' \left(1,0\right) = \frac{e^4 \hbar^4}{m^2 E_{10}^{5}} \left[12 \lambda_{00} - 9 \lambda_{00}^2 + 9 \lambda_{10}^2 - 4 \right] .
\label{eq:minrelsub}
\end{equation}
The minimum value in this regime is only affected by the lowest diagonal term, $\lambda_{00}$ and the first off-diagonal term, $\lambda_{10}$. Thus, it appears that, for well-behaved systems under these approximations, the first excited state does not contribute to the lowest-order relativistic correction at the (1,0) minimum.
The minimum at (1,0) is plotted in Fig. \ref{fig:min} as a function of $\lambda_{00}$ and $\lambda_{10}$. The value of $\lambda_{00}$ is `walked' away from the non-relativistic value of 1, while $\lambda_{10}$ is increased from the non-relativistic limit of zero. Notice how the magnitude of the lower limit in this regime is also reduced, which signifies response damping as the dominant mechanism, as opposed to a subtraction of the net response. Similar to $\beta'$, under more extreme circumstances it appears that a negative value of $\gamma$ may also become zero or even positive. The positive values are due to the off-diagonal $\lambda_{kn}$ subtraction of the response, which is less prominent for the second hyperpolarizability.
The second regime where there exists a minimum is found when $E = 1$, where the minimum also reaches the negative intrinsic limit of $-1/4$. In this regime, the lowest-order relativistic correction gives
\begin{equation}
\gamma' \left(X,1\right) = \frac{e^4 \hbar^4}{m^2 E_{10}^{5}} \left[12 \lambda_{00} - 9 \lambda_{00}^2 + 9 \lambda_{10}^2 + 9 \lambda_{20}^2 - 4 \right] .
\label{eq:minDrelsub}
\end{equation}
Thus, this degenerate minimum is more strongly affected by relativistic corrections with the inclusion of a positive $\lambda_{20}$ parameter that increases the minimum value away from the negative limit.
\begin{center}\textit{Relativistic effects of H-like ions and the 3-level ansatz}\end{center}
It is well known that, unlike in many organic molecules, the continuum states make a significant contribution to the total dipole response of a single hydrogen atom. Thus, these continuum states cause problems with the SOS method for the second hyperpolarizability. Other non-relativistic methods have been developed, such as a time-independent perturbation approach \cite{sewel49.01,boyle01.66} to calculate the zero-frequency response and a method employing Sturmian functions used by Shelton \cite{shelt87.03} to calculate the frequency-dependent coefficients. Because the largest portion of the dipole strength is in the $1$s-$2$p transition, a qualitative study of the lowest-order relativistic corrections to H-like ions can be performed with a simple three-level model. Here, problems with convergence and continuum states are washed away by placing the entire oscillator strength in the first two excited-state transitions, which gives a reasonably approximate description for most systems.
The only nonzero angular contributions from the non-relativistic transitions are either $1/\sqrt{3}$ for $n$s-$n'$p or $2/\sqrt{15}$ for $n$p-$n'$d. The similarity between the two nonzero angular contributions, the low frequency of $n$p-$n'$d transitions in the SOS expression, and the fact that we can limit our study to the $\gamma_{zzzz}$ component allow us to make a one-dimensional approximation; thus, the total response is governed by Eq. \ref{eq:1DrelTRK}. Note that the spin-orbit term will still enter into the calculation, but it will later be introduced as a perturbation in the energy so that we can further simplify the example.
We can treat the lowest-order linear-momentum ($p^4$), spin-orbit, and Darwin terms as first-order perturbations in the energy.\cite{griff95.01} This provides a simpler approach when solving Eq. \ref{eq:simpleham}, where we may write
\begin{eqnarray}
\left(p_{nk}^{\mathrm{H-like}}\right)^2 &\approx& \frac{Z^4 \alpha^2}{\left(n+1\right)^2} \left[\frac{2}{\left(1+2j\right)\left(n+1\right)} \right. \nonumber \\
&-& \left. \frac{3}{4 \left(n+1\right)^2} \right] - \frac{Z^2}{\left(n+1\right)^2} - 2 V_{nk}
\label{eq:pnkHapprox}
\end{eqnarray}
given in atomic units ($\hbar\rightarrow 1$, $m\rightarrow 1$, $e\rightarrow 1$), where $Z$ is the number of protons, $\alpha$ is the fine structure constant, $n=0,1,2,\cdots$, and $V = -Z/r$. We can simplify the example even further by assuming a single energy level from averaging the $j=l\pm 1/2$ splitting for $l\neq 0$. To make this simplification, the transition probabilities for the $1$s$_{1/2}$-$1$p$_{1/2}$ and $1$s$_{1/2}$-$1$p$_{3/2}$ doublet as well as for the second transition's $2$p$_{1/2}$-$3$p$_{3/2}$, $2$p$_{3/2}$-$3$p$_{3/2}$, and $2$p$_{3/2}$-$3$p$_{5/2}$ multiplet are evaluated from the Dirac equation,\cite{Bethe77.01,garst01.71,young75.01} and used to perform the weighted averages for the excited state energies.
The $\lambda_{kn}$ terms are then given by
\begin{equation}
\lambda_{nk}^{\mathrm{H-like}} \approx \delta_{nk} - \alpha^2 \left(p_{nk}^{\mathrm{H-like}}\right)^2 ,
\label{eq:lambdaHapprox}
\end{equation}
where the off-diagonal terms in Eq. \ref{eq:lambdaHapprox} are taken to be zero under the current set of approximations with energy perturbations from a three-dimensional central potential.
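A minimal numerical sketch of Eqs. \ref{eq:pnkHapprox} and \ref{eq:lambdaHapprox} for the diagonal terms is given below; it assumes the standard hydrogenic expectation value $\left\langle V \right\rangle_{n} = -Z^2/(n+1)^2$ in atomic units, which is an input of this illustration rather than something derived in the text.
\begin{verbatim}
from scipy.constants import fine_structure as alpha

def lambda_diag(Z, n, j):
    # Approximate lambda_nn for an H-like ion, atomic units.
    # Assumes the hydrogenic value V_nn = -Z^2/(n+1)^2.
    N = n + 1
    p2 = (Z**4 * alpha**2 / N**2) * (2.0/((1 + 2*j)*N) - 3.0/(4*N**2)) \
         - Z**2/N**2 + 2.0*Z**2/N**2
    return 1.0 - alpha**2 * p2

print(lambda_diag(1, 0, 0.5))    # hydrogen ground state: ~ 1 - (Z*alpha)^2
print(lambda_diag(50, 0, 0.5))   # heavier H-like ion: noticeably below 1
\end{verbatim}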
\begin{figure}
\caption{The second hyperpolarizability as a function of atomic number for an H-like ion using the three-level model with lowest-order relativistic corrections to the TRK sum rules.}
\label{fig:gammaapprox}
\end{figure}
The second hyperpolarizability can now be calculated by substituting the approximate $Z$-dependent coefficients $E_{10}^{\mathrm{H-like}}$, $E^{\mathrm{H-like}}$, $X^{\mathrm{H-like}}$, $\lambda_{01}^{\mathrm{H-like}}$, and $\lambda_{02}^{\mathrm{H-like}}$ into Eq. \ref{eq:gammarel3L}. The ratio of the second hyperpolarizability of H-like atoms for the $z$-diagonal tensor component, $\gamma^{\mathrm{H-like}}_{zzzz}$, divided by the approximate second hyperpolarizability of the hydrogen atom, $\gamma^{\mathrm{hydrogen}}_{zzzz}$, is given in Fig. \ref{fig:gammaapprox} as a function of the atomic number. The total strength of the transitions decreases, causing a drop in the static nonlinear-optical response.
Note that the severity of damping to the second nonlinear response is lessened by a decrease in the first transition energy as the atomic number increases.
\section{Conclusion}
The lowest-order relativistic correction to the TRK sum rules was shown to limit the oscillator strength below the value derived from the non-relativistic Hamiltonian. This correction was applied to both the static linear and first two nonlinear optical responses; the magnitude of this correction is no longer a constant and depends on the potential energy function. This lowest-order relativistic correction has been applied to the three-level ansatz, where in the relativistic regime, the magnitudes of the fundamental limits of the polarizability, hyperpolarizability, and second hyperpolarizability are reduced. Thus, the non-relativistic regime gives the largest values of the fundamental limit for closed quantum systems.
In the regime where the relativistic parameters pull the hyperpolarizability, at the positive fundamental limit, to below the negative bound, we find that it may be possible to break the Kuzyk limit (although with opposite sign). This is a disturbing result, and further studies with higher degrees of accuracy must be performed for this consequence to be supported. Further studies with additional corrections may also help in understanding the peculiar influences of the off-diagonal sum rules on the linear and nonlinear responses. These off-diagonal terms are equal to zero in the non-relativistic limit, yet they are the primary reason that the Kuzyk limit of the hyperpolarizability could possibly be broken when the lowest-order correction is included.
\acknowledgments I would like to thank Prof. Kenneth D. Singer and Prof. Mark G. Kuzyk for useful discussions. I would also like to thank the National Science Foundation grant number OISE-1243313 for partial support of this project.
{}
\begin{thebibliography}{64}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\end{thebibliography}
\end{document}
\begin{document}
\title{Seymour's Second Neighborhood Conjecture\\
for orientations of (pseudo)random graphs}
\author{F\'abio Botler\authormark{1}
\and
Phablo F.\,S. Moura\authormark{2}
\and T\'assio Naia\authormark{3}
}
\maketitle
\newcommand{\showmark}[1]{
\hspace*{-1em}\makebox[1em][r]{\authormark{#1}\,}
\ignorespaces}
\begin{center}
\footnotesize
\showmark{1}
Programa de Engenharia de Sistemas e Computa\c c\~ao \\
Instituto Alberto Luiz Coimbra
de P\'os-Gradua\c c\~ao e Pesquisa em Engenharia \\
Universidade Federal do Rio de Janeiro, Brasil \\
{\texttt{fbotler@cos.ufrj.br}}
\showmark{2}
Departamento de Ci\^encia da Computa\c c\~ao \\
Instituto de Ci\^encias Exatas \\
Universidade Federal de Minas Gerais, Brasil \\
{\texttt{phablo@dcc.ufmg.br}}
\showmark{3}
Departamento de Ci\^encia da Computa\c c\~ao \\
Instituto de Matem\'atica e Estat\'\i stica \\
Universidade de S\~ao Paulo, Brasil \\
{\texttt{tnaia@member.fsf.org}}
\end{center}
\begin{abstract}
Seymour's Second Neighborhood Conjecture (SNC) states that
every oriented graph contains a vertex
whose second neighborhood is as large as its first neighborhood.
We investigate the SNC for orientations of both binomial random graphs and pseudorandom graphs,
verifying the SNC asymptotically almost surely (a.a.s.)
\begin{enumerate}
\item for all orientations of $G(n,p)$ if $\limsup_{n\to\infty} p < 1/4$; and
\item for a uniformly random orientation of each weakly
$(p,A\sqrt{np})$-bijumbled graph of order $n$ and
density~$p$, where $p=\Omega(n^{-1/2})$, $1-p = \Omega(n^{-1/6})$,
and $A>0$ is a universal constant independent of both $n$~and~$p$.
\end{enumerate}
We also show that a.a.s.\ the SNC holds
for almost every orientation of~$G(n,p)$.
More specifically, we prove that a.a.s.
\begin{enumerate}[resume]
\item
for all $\varepsilon > 0$ and $p=p(n)$
with $\limsup_{n\to\infty} p \le 2/3-\varepsilon$,
every orientation of~$G(n,p)$
with minimum outdegree~$\Omega_\varepsilon(\sqrt{n})$
satisfies the SNC; and
\item
for all $p=p(n)$, a random orientation of~$G(n,p)$ satisfies the SNC.
\end{enumerate}
\end{abstract}
\section{Introduction}
An \defi{oriented graph} \(D\)
is a digraph obtained from a simple graph \(G\)
by assigning directions to its edges
(i.e., $D$ contains neither loops, nor parallel arcs,
nor directed cycles of length~\(2\));
we also call \(D\) an \defi{orientation} of \(G\).
Given \(i\in\mathbb{N}\),
the \defi{\(i\)-th neighborhood} of \(u\in V(D)\), denoted by \defi{\(N^i(u)\)},
is the set of vertices \(v\) for which a shortest directed path
from \(u\) to \(v\) has precisely \(i\)~arcs.
A~\defi{Seymour vertex} (see~\cite{2015:Seacrest})
is a vertex~\(u\) for which \(|N^2(u)|\geq |N^1(u)|\).
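For instance, in the directed triangle $u\to v\to w\to u$ every vertex is a Seymour vertex, since $|N^1(x)|=|N^2(x)|=1$ for every vertex~$x$; in a transitive tournament, on the other hand, the unique vertex of outdegree~$0$ is the only Seymour vertex.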
Seymour conjectured
the following (see~\cite{1995:DeanLatka}).
\begin{conjecture}
\label{conj:snc}
Every oriented graph contains a Seymour vertex.
\end{conjecture}
Conjecture~\ref{conj:snc}, known as \emph{Seymour’s Second Neighborhood Conjecture} (SNC),
is a notorious open question
(see, e.g.,~\cite{2003:ChenShenYuster,2007:FidlerYuster,2012:Ghazal,2015:Seacrest}).
In particular, it was confirmed for tournaments
(orientations of~cliques)
by~Fisher~\cite{1996:Fisher}
and (with a purely combinatorial argument)
by~Havet and Thomass\'e~\cite{2000:HavetThomasee}; it was also studied
by Cohn, Godbole, Harkness and Zhang~\cite{2016:Cohn_etal}
for the random digraph model in which each ordered pair of vertices
is picked independently as an arc
with probability \(p<1/2\).
Throughout the paper,
we denote by \defi{$\mathcal{S}$} the set of graphs
$\{G:\text{all orientations of $G$ contain a Seymour vertex}\}$.
Our contribution comes from considering this combinatorial problem in a random
and pseudorandom setting
(see, e.g.,~\cite{ConlonGowers16,Schacht2016}).
More precisely, we
explore Conjecture~\ref{conj:snc} for orientations
of the binomial random graph~\defi{\(G(n,p)\)},
defined as the random graph with vertex set~$\{1,\ldots,n\}$ in which every
pair of vertices appears as an edge independently and with probability~$p$.
We say that an event $\mathcal{E}$ holds \defi{asymptotically almost surely}
(a.a.s.)
if $\Pr[\mathcal{E}]\to 1$ as $n\to\infty$.
If $G=G(n,p)$ is very sparse
(say, if $np \le (1-\varepsilon) \ln n$ for large~$n$ and fixed~$\varepsilon>0$), then
a.a.s.\ $G$ has an isolated vertex, which clearly is a Seymour vertex.
Our first result extends this observation
to much denser random graphs.
\begin{theorem}\label{t:snc-p<1/4}
Let~$p\colon\mathbb{N}\to (0,1)$.
If $\displaystyle\limsup_{n\to\infty} p < 1/4$,
then a.a.s.\ $G(n,p)\in\mathcal{S}$.
\end{theorem}
If we impose restrictions on the orientations,
requiring, for example, somewhat large minimum outdegree,
the range of~$p$ can be further increased.
\begin{theorem}\label{t:gnp-min-outdeg}
For every $\beta >0$, there exists $C=C(\beta)$
such that the following holds for all~$p\colon\mathbb{N}\to (0,1)$.
If $\displaystyle\limsup_{n\to\infty} p \le 2/3 -\beta$,
then a.a.s.\ every orientation of $G(n,p)$
with minimum outdegree at least $Cn^{1/2}$ contains a Seymour vertex.
\end{theorem}
For an even larger range of~$p$, we show that \emph{most} orientations
of~$G(n,p)$ contain a Seymour vertex;
i.e., Conjecture~\ref{conj:snc} holds
for almost every (labeled) oriented graph.
\begin{theorem}\label{t:typical}
Let~$p\colon\mathbb{N}\to (0,1)$
and let~$G=G(n,p)$.
If~\(D\) is chosen
uniformly at random among the $2^{e(G)}$
orientations of~$G$,
then a.a.s.\ \(D\) has a Seymour vertex.
\end{theorem}
In fact, we prove a version of Theorem~\ref{t:typical}
in a more general setting,
namely
orientations of pseudorandom graphs
(see Section~\ref{s:pseudo-random}).
\begin{theorem}\label{t:typical-bij}
There exists an absolute constant $C>1$ such that
the following holds.
Let~$G$ be a weakly~$(p,A\sqrt{np})$-bijumbled graph
of order~$n$, where $\varepsilon^3np^2 \ge A^2C$
and $p < 1-15\sqrt{\varepsilon}$.
If~\(D\) is chosen
uniformly at random among the $2^{e(G)}$
possible orientations of~$G$,
then a.a.s.\ \(D\) has a Seymour vertex.
\end{theorem}
This paper is organized as follows.
In Section~\ref{sec:wheel-free} we prove Conjecture~\ref{conj:snc}
for wheel-free graphs,
which implies the particular case of Theorem~\ref{t:snc-p<1/4}
when $n^2p^3\to 0$.
In Section~\ref{s:p-typical} we complete the proof
of Theorem~\ref{t:snc-p<1/4}
and prove Theorems~\ref{t:gnp-min-outdeg} and~\ref{t:typical}
using a set of standard properties
of \(G(n,p)\). These properties are collected in Definition~\ref{d:p-typical}
and Lemma~\ref{l:gnp-typical} (proved in Appendix~\ref{a:auxiliary}).
In Section~\ref{s:pseudo-random}, we introduce bijumbled graphs
and prove Theorem~\ref{t:typical-bij}.
We make a few further remarks in Section~\ref{sec:concluding-remarks}.
To avoid uninteresting technicalities, we omit floor and ceiling signs.
If $A$ and $B$ are sets of vertices, we denote by~\defi{$\vec e\,(A,B)$}
the number of arcs directed from $A$ to~$B$, by~\defi{$e(A,B)$} the number
of edges or arcs with one vertex in each set, and by~\defi{$e(A)$}
the number of edges or arcs with both vertices in~$A$.
The (underlying) \defi{neighborhood} of a vertex~$u$ is denoted by~\defi{$N(u)$},
and the \defi{codegree} of vertices $u,\,v$
is~$\defi{\ensuremath{\deg(u,v)}}=\bigl|N(u)\cap N(v)\bigr|$.
We remark that Theorem~\ref{t:snc-p<1/4} and a weaker version of Theorem~\ref{t:gnp-min-outdeg}
appeared in the extended abstracts~\cite{botler2022:seymour-dmd,botler2022:seymour-etc}.
\section{Wheel-free graphs}\label{sec:wheel-free}
A \defi{wheel} is a graph obtained from a cycle $C$ by adding a
new vertex adjacent to all vertices in \(C\).
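In particular, the smallest wheel, obtained from a triangle, is the complete graph~$K_4$.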
Firstly, we show that $G(n,p)$ is wheel-free when $p$ is small;
then prove that all wheel-free graphs satisfy Conjecture~\ref{conj:snc}.
\begin{lemma}\label{l:wheel}
If $p\in(0,1)$ and $n^4p^6 < \varepsilon / 16$,
then $\Pr\bigl[\,\text{$G(n,p)$ is wheel-free}\,\bigr]\ge 1-\varepsilon$.
\end{lemma}
\begin{proof}
We can assume $\varepsilon < 1$.
Since $n^4p^6 < \varepsilon/16$, we have that
\begin{align}\label{e:np-wheel-bounds}
np^2 < (\varepsilon p^2/16)^{1/4} < 1/2.
\end{align}
Let $X=\sum_{k=4}^n X_k$, where $X_k$ denotes
the number of wheels of order $k$ in $G(n,p)$.
By the~linearity of~expectation,
\begin{align}
\EE X
& = \sum_{k=4}^n \EE X_k
= \sum_{k=4}^n \binom{n}{k}k\frac{(k-1)!}{2(k-1)}p^{2(k-1)} \nonumber\\
& < n\sum_{k=4}^n (np^2)^{k-1}
= n^4p^6\sum_{k=0}^{n-4} (np^2)^k
\stackrel{\mathrm{G.S.}}{<} \frac{n^4p^6}{1 - np^2}\label{e:GP}
\stackrel{\eqref{e:np-wheel-bounds}}{<} 2n^4p^6 < \frac{\varepsilon}{8} < \varepsilon.
\end{align}
In~\eqref{e:GP} we use the formula $\sum_{i=0}^\infty r^i = (1-r)^{-1}$
for the geometric series (G.S.) of ratio~$r = np^2<1$; the factor
$\binom{n}{k}\,k\,\frac{(k-1)!}{2(k-1)}$ in the first line counts the wheels of
order~$k$: choose the $k$ vertices, pick the hub in $k$ ways, and arrange the
remaining $k-1$ vertices in a cycle in $(k-1)!/\bigl(2(k-1)\bigr)$ ways; each such
wheel has $2(k-1)$ edges, which accounts for the factor $p^{2(k-1)}$.
Markov's inequality then yields
\(
\Pr[X \ge 1] \le \EE X < \varepsilon.
\)
\end{proof}
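For instance, if $p\le n^{-0.7}$, then $n^4p^6\le n^{-0.2}$, and hence for every fixed~$\varepsilon>0$ the graph $G(n,p)$ is wheel-free with probability at least $1-\varepsilon$ once $n$ is sufficiently large.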
To show that every orientation of a wheel-free graph
has a Seymour vertex, we prove a slightly stronger result.
A digraph is \defi{locally cornering}
if the outneighborhood of each vertex induces
a digraph with a \defi{sink} (i.e., a vertex of outdegree~$0$).
The next proposition follows immediately by noting that,
in a locally cornering digraph, each
vertex of minimum outdegree is a
Seymour vertex.
\begin{proposition}\label{p:locally-cornering}
Every locally cornering digraph has a Seymour vertex.
\end{proposition}
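Indeed, let $v$ be a vertex of minimum outdegree and let $w$ be a sink of the digraph induced by $N^1(v)$. Every outneighbor of~$w$ lies outside $N^1(v)\cup\{v\}$ and is at distance at most~$2$ from~$v$, so it belongs to $N^2(v)$; hence $|N^2(v)|\ge\deg^+(w)\ge\deg^+(v)=|N^1(v)|$.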
Lemma~\ref{l:wheel}~and Proposition~\ref{p:locally-cornering} immediately
yield the following corollary.
\begin{corollary}\label{cor:snc-small-p}
If $p\in (0,1)$,
and $n^4p^6 < \varepsilon / 16$, then
$\Pr\bigl[\,G(n,p) \in \mathcal{S}\,\bigr] \ge 1-\varepsilon$.
\end{corollary}
\begin{proof}
Note that every orientation of a wheel-free graph is locally cornering:
the neighborhood of each vertex induces a forest (otherwise a cycle in the
neighborhood, together with the vertex itself, would form a wheel),
and every orientation of a forest has a vertex of outdegree~$0$.
Hence the result follows
by Lemma~\ref{l:wheel} and~Proposition~\ref{p:locally-cornering}.
\end{proof}
\section{Typical graphs}\label{s:p-typical}
In this section we prove
that if $\limsup_{n\to\infty} p< 1/4$, then
a.a.s.\ \(G(n,p)\in\mathcal{S}\).
We use a number of standard properties of~$G(n,p)$,
stated for convenience in Definition~\ref{d:p-typical}.
\begin{definition}\label{d:p-typical}
Let $p\in (0,1)$.
A graph $G$ of order~$n$ is \defi{$p$-typical} if
the following hold.
\begin{enumerate}
\item \label{i:p-typical:1}
For every~$X\subseteq V(G)$, we have
\[
\biggl|e(X) - \binom{|X|}{2}p\biggr|
\le |X|\sqrt{3np(1-p) }
+ 2 n.
\]
\item \label{i:p-typical:sharp-XY}
If $n'\ln n \le n''\le n$ or $n'=n''=n$,
then all~$X,\,Y\subseteq V(G)$ with~$|X|,|Y|\le n'$
satisfy
\[
\bigl|\,e(X,Y) -|X||Y|p\,\bigr|
\le \sqrt{6n''p(1-p) |X||Y|}
+ 2n''.
\]
\item \label{i:p-typical:3}
For every $v\in V(G)$, we have
\[
|\deg(v) - np\,|
\le \sqrt{6np(1-p)\ln n}
+ 2 \ln n.
\]
\item \label{i:p-typical:common-neigh}
For every distinct $u,v\in V(G)$,
we have
\[
\bigl|\,\deg(u,v) - (n-2)p^2\,\bigr|
\le \sqrt{6np^2(1-p^2)\ln n}
+ 2\ln n.
\]
\end{enumerate}
\end{definition}
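To illustrate the definition: when $p$ is bounded away from $0$ and~$1$, item~\ref{i:p-typical:3} says that every vertex of a $p$-typical graph has degree $np+O(\sqrt{n\ln n})=(1+\mathrm{o}(1))np$, and item~\ref{i:p-typical:common-neigh} says that every pair of distinct vertices has $(1+\mathrm{o}(1))np^2$ common neighbors.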
It can be shown, using standard Chernoff-type concentration inequalities,
that $G(n,p)$ is $p$-typical with high probability
(see Appendix~\ref{a:auxiliary}).
\begin{lemma}
\label{l:gnp-typical}
For every $p\colon\mathbb{N} \to (0,1)$,
a.a.s.\ $G=G(n,p)$ is $p$-typical.
\end{lemma}
We also use the following property of graphs satisfying
Definition~\ref{d:p-typical}\,\ref{i:p-typical:1}.
\begin{lemma}
\label{l:bad-degrees:2}
Let $G$ be a graph of order~$n$ which satisfies
Definition~\ref{d:p-typical}\,\ref{i:p-typical:1}, and fix~$a\in\mathbb{N}$.
If $D$ is an orientation of~$G$
and $B=\{v\in V(D):\deg_D^+(v)<a\}$, then
\[
|B|
\le \frac{2}{p}(a-1) + 1 + \sqrt{\frac{12n(1-p)}{p}} + \frac{4n}{|B|p}.
\]
\end{lemma}
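To illustrate the statement: if $p$ is a constant and $a=\mathrm{o}(n)$, then either $|B|<\sqrt{n}$ or the right-hand side is $\mathrm{o}(n)$; in both cases $|B|=\mathrm{o}(n)$, so in every orientation of~$G$ all but $\mathrm{o}(n)$ vertices have outdegree at least~$a$.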
\begin{proof}
Every edge of $G[B]$ is oriented away from one of its endpoints, and every vertex of~$B$ has outdegree at most $a-1$ in~$D$; this gives the first inequality below. The lemma then follows by multiplying all terms by~$2/(|B|p)$ and rearranging.
\begin{equation*}
|B|(a-1) \ge e(G[B])
\stackrel{\text{\ref{d:p-typical}\ref{i:p-typical:1}}}{\geq}
\binom{|B|}{2}p-
|B|\sqrt{3np(1-p)}
- 2n.\qedhere
\end{equation*}
\end{proof}
\subsection{Proof of Theorem~\ref{t:snc-p<1/4}}
Let us outline the proof of Theorem~\ref{t:snc-p<1/4}.
Firstly, we find a vertex~$w$
whose outneighborhood contains many
vertices with large outdegree.
Then, we note that
$|N^1(w)|=O(np)$ and that $N^1(w)\cup N^2(w)$
cannot be too dense.
Finally, since many outneighbors of~$w$
have large outdegree,
we conclude that $N^1(w)\cup N^2(w)$ must contain at least
$2|N^1(w)|$~vertices, completing the proof.
This yields the following.
\begin{lemma}\label{l:snc-p-typical<1/4}
Fix $0<\alpha<1/4$ and~$\varepsilon > 0$.
There is $n_1=n_1(\alpha,\varepsilon)$
such that $\mathcal{S}$ contains every $p$-typical graph of order $n\ge n_1$
whenever~$\varepsilon n^{-2/3}\le p \le 1/4 - \alpha$.
\end{lemma}
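Note that Lemma~\ref{l:snc-p-typical<1/4} is a deterministic statement: it applies to every $p$-typical graph, irrespective of how the graph is generated.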
Lemma~\ref{l:snc-p-typical<1/4} is our last ingredient
for proving Theorem~\ref{t:snc-p<1/4}.
Indeed,
fix $\varepsilon >0$, set~$\alpha=\tfrac{1}{2}\bigl(1/4-\limsup_{n\to\infty} p(n)\bigr)$ and let
$n_0$ be large enough so that $p(n)\le 1/4-\alpha$
and so that $G(n,p)$ is $p$-typical with probability
at least~$1-\varepsilon$ for all $n\ge n_0$ (this is Lemma~\ref{l:gnp-typical}).
Now either $p < \varepsilon n^{-2/3}$ or $\varepsilon n^{-2/3}\le p(n)\le 1/4-\alpha$.
In the former case we use Corollary~\ref{cor:snc-small-p},
and in the latter case Lemma~\ref{l:snc-p-typical<1/4},
concluding either way that
\[\Pr\bigl[\,G(n,p)\in\mathcal{S}\,\bigr] \ge 1-\varepsilon.\]
\begin{proof}[Proof of Lemma~\ref{l:snc-p-typical<1/4}]
We may and shall assume (choosing $n_1$ accordingly) that
$np$~is large enough whenever necessary.
Fix an arbitrary orientation of~$G$.
For simplicity, we write $G$ for both the oriented and
underlying graphs. Let
\[\displaystyle S = \{v\in V(G) : \deg^+(v)< (1-\alpha)np/2\}
\]
and $T=V(G)\setminus S$.
Firstly, we show that $|T|\ge\alpha n/2$.
This is clearly the case if $|S|< \alpha n$ (since $\alpha < 1/4 < 1-\alpha$);
let us show that this also holds if $|S| \ge \alpha n$.
Indeed, since~$ p \ge \varepsilon n^{-2/3}$,
from Lemma~\ref{l:bad-degrees:2}
with \(a = (1-\alpha)np/2\) we obtain
\begin{align*}
|S|
& \le \frac{2(a -1)}{p}
+ 1 + \sqrt{\frac{12n(1-p)}{p}} + \frac{4n}{|S|p}
< (1-\alpha)n +\mathrm{o}(n)
< \left(1-\frac{\alpha}{2}\right)n.
\end{align*}
Therefore $|T| = n - |S|\ge\alpha n/2$
as desired.
Recall that \(np\) is large and \(p\leq 1/4\).
Then \(\sqrt{3np(1-p)} \ge 4/\alpha\),
and hence, from Definition~\ref{d:p-typical}\,\ref{i:p-typical:1}, we get
\begin{align}
e\bigl(T\bigr)
& \ge \binom{|T|}{2}p
- |T|\sqrt{3np(1-p)}
- 2n
> \binom{|T|}{2}p
- 2|T|\sqrt{np}
> \frac{|T|^2p}{3},\label{e:size-S}
\end{align}
and therefore, by averaging, there exists $w\in T$ satisfying
\begin{align}\label{eq:degree-w-lb}
\deg_{T}^+(w)
& \ge \frac{e\bigl(T\bigr)}{|T|}
\stackrel{\eqref{e:size-S}}{\geq} \frac{\alpha n p}{6}.
\end{align}
We next show that $w$ is a Seymour vertex.
Let $X=N_G^1(w)$ and $Y=N_G^{2}(w)$, and suppose, for a contradiction,
that \(|Y| < |X|\).
From Definition~\ref{d:p-typical}\,\ref{i:p-typical:3} and $p+\alpha \le 1/4$, we have
\begin{align}
|X|
& \leq np +\sqrt{6np\ln n} +2\ln n
< n\left(p +\frac{\alpha}{2}\right) < \frac{n}{4} \le \frac{n}{2}(1-2\alpha -2p).
\label{e:size-X}
\end{align}
Moreover,
\begin{align}
|X|
= \deg^+(w) \leq np +\sqrt{6np\ln n} +2\ln n
& < 2np.
\label{e:size-X-2}
\end{align}
Recall that $w\in T$
and let~$N= X\cap T$ be the set of outneighbors of $w$ in~$T$.
By the definition of $N$ and~\eqref{eq:degree-w-lb} we have
\begin{align}
|N|
& \ge \frac{\alpha np}{6}.\label{e:|N|}
\end{align}
Note that $\vec e\,(N,X)$ counts arcs induced by~$N$ precisely once
(as~$N\subseteq X$),
and if the arc $u\to v$ is counted by $\vec e\,(N,X)$,
then $v$ is a common neighbor of $w$
and~$u\in N$.
Hence, by Definition~\ref{d:p-typical}\,\ref{i:p-typical:common-neigh},
we have that
\[\vec e\,(N,X) +e(N) \leq |N|\bigl(np^2 +\sqrt{6np^2\ln n} + 2\ln n\bigr).\]
Since vertices in \(T\) (and hence in \(N\))
have at least \((1-\alpha)np/2\) outneighbors,
we have
\begin{align}
\vec e\,(N,Y)
& \geq |N|\frac{(1-\alpha)np}{2} - \vec e\, (N,X) -e(N)\nonumber \\
&\geq |N|\frac{(1-\alpha)np}{2} - |N|\bigl(np^2+\sqrt{6np^2\ln n} +2\ln n\bigr)
\label{eq:eNY:lb}.
\end{align}
The following estimate will be useful.
\begin{claim}\label{cl:small-terms}
It holds that \(2\ln n + \sqrt{6np^2\ln n} + \sqrt{6|Y|np/|N|} = \mathrm{o}(np)\).
\end{claim}
\begin{claimproof}
We prove that each term in the sum above is \(\mathrm{o}(n)\)
when divided by~\(p\).
Clearly, \(\sqrt{6np^2\ln n}/p = \mathrm{o}(n)\).
Recall that \(p\ge \varepsilon n^{-2/3}\)
and thus \((2\ln n) /p = \mathrm{o}(n)\).
Also,
\begin{equation*}
\sqrt{\frac{|Y|6n}{|N|p}}
\stackrel{\eqref{e:|N|}}{\le} \sqrt{\frac{|Y|36}{\alpha p^2}}
< \sqrt{\frac{|X|36}{\alpha p^2}}
\stackrel{\eqref{e:size-X-2}}{<} \sqrt{\frac{72n}{\alpha p}}
= \mathrm{o}(n).\qedhere
\end{equation*}
\end{claimproof}
We divide the remainder of the proof into two cases. Fix \(\gamma\in(1/2, 2/3)\).
\noindent\textbf{Case 1.}
Suppose first that~\(p > n^{\gamma - 1}/2\).
Using Definition~\ref{d:p-typical}\,\ref{i:p-typical:sharp-XY} we obtain
\begin{align}
\label{eq:eNY:ub}
\vec e\,(N,Y)
\le |N||Y|p + \sqrt{6np |N||Y|} + 2n.
\end{align}
Thus, combining~\eqref{eq:eNY:lb} and \eqref{eq:eNY:ub}, we have
\begin{align}
\label{e:Y}
\frac{(1-\alpha)np}{2} - (np^2+\sqrt{6np^2\ln n} + 2\ln n)
& \le |Y|p + \sqrt{\frac{6np|Y|}{|N|}} +\frac{2n}{|N|}.
\end{align}
Also note that since \(p > n^{\gamma - 1}/2\) and \(\gamma > 1/2\),
we can estimate
\begin{align}
\label{e:bad-term-p-big}
\frac{2n}{|N|p}
\stackrel{\eqref{e:|N|}}{\leq} \frac{12}{\alpha p^2}
< \frac{48}{\alpha n^{2\gamma -2}}
& = \mathrm{o}(n).
\end{align}
Finally, we conclude that $w$ is a Seymour vertex, since~\eqref{e:Y} becomes
\begin{align*}
|Y|
& \ge
\frac{(1-\alpha -2p)n}{2} - \sqrt{6n\ln n} -\sqrt{\frac{6n|Y|}{|N|p}} -\frac{2n}{|N|p} -\frac{2\ln n}{p} \\
& \stackrel{(\star)}{\ge}
\frac{n}{2}\left(1 -2\alpha -2p\right)
\stackrel{\eqref{e:size-X}}{>} |X|,
\end{align*}
where inequality~$(\star)$ follows from Claim~\ref{cl:small-terms} and~\eqref{e:bad-term-p-big}.
\noindent\textbf{Case 2.}
Suppose now that \(p\leq n^{\gamma -1}/2\).
In this case \eqref{e:size-X-2}
implies~\(|X| \leq n^{\gamma}\).
Since \(N\subseteq X\) and \(|Y| < |X|\),
Definition~\ref{d:p-typical}\,\ref{i:p-typical:sharp-XY} (with $n'=n^\gamma$ and $n''=n^\gamma\ln n$) yields
\begin{align}
\vec e\,(N,Y)
& \le |N||Y|p + \sqrt{6 (n^\gamma\ln n) p|N||Y|} + 2n^\gamma\ln n \nonumber\\
& < |N||Y|p + \sqrt{6np |N||Y|} + 2n^\gamma\ln n.
\label{eq:eNY:ub:p-small}
\end{align}
Now, from~\eqref{eq:eNY:lb} and~\eqref{eq:eNY:ub:p-small}, we obtain the following inequality,
which is analogous to~\eqref{e:Y}, but with the term \(2n/|N|\) replaced by \(2n^\gamma\ln n/|N|\).
\begin{equation}\label{e:Y:p-small}
\frac{(1-\alpha)np}{2} - \bigl(p^2n+\sqrt{6np^2\ln n} + 2\ln n\bigr)
\le |Y|p + \sqrt{\frac{6np|Y|}{|N|}} +\frac{2n^\gamma\ln n}{|N|}.
\end{equation}
We claim that \(2n^\gamma\ln n/|N| = \mathrm{o}(np)\).
Indeed, since \(p\ge \varepsilon n^{-2/3}\) and $\gamma < 2/3$, we have
\begin{equation}\label{e:last-term-estimate}
\frac{2n^\gamma\ln n}{|N|p}
\stackrel{\eqref{e:|N|}}{\leq}
\frac{12n^\gamma\ln n}{\alpha np^2}
= \frac{12n^{\gamma-1}\ln n}{\alpha p^2}
\le \frac{12n^{\gamma+1/3}\ln n}{\alpha \varepsilon^2}
= \mathrm{o}(n).
\end{equation}
We complete the proof of Case~2 by solving~\eqref{e:Y:p-small}
for~\(|Y|\) as in~Case~1 (using Claim~\ref{cl:small-terms}
and~\eqref{e:last-term-estimate} to estimate~\(2n^\gamma\ln n/|N|\)\,).
\end{proof}
\subsection{Proof of Theorem~\ref{t:typical}}
We are now in a position to prove Theorem~\ref{t:typical},
which we restate for convenience.
\begin{unnumtheorem}[Theorem~\ref{t:typical}]
Let~$p\colon\mathbb{N}\to (0,1)$,
and let~$G=G(n,p)$.
If~\(D\) is chosen
uniformly at random among the $2^{e(G)}$
orientations of~$G$,
then a.a.s.\ \(D\) has a Seymour vertex.
\end{unnumtheorem}
\begin{proof}[Proof of Theorem~\ref{t:typical}]
Let $G=G(n,p)$.
If $p<1/5$, then $\Pr[\,G\in \mathcal{S}\,]= 1-\mathrm{o}(1)$ by Theorem~\ref{t:snc-p<1/4}.
On the other hand, if $p\ge 1/5$,
then standard concentration results for binomial
random variables (e.g., Chernoff-type bounds)
yield that, with probability $1-\mathrm{o}(1)$, every ordered pair $(u,v)$ of distinct vertices
of $G$ satisfies, say, $\deg(u,v)\ge n/50$.
Moreover, building a random orientation of $G(n,p)$ is equivalent to first choosing
which edges are present and then choosing the orientation of each edge uniformly at random,
with choices mutually independent for each edge;
hence each common neighbor $x$ of $u$ and~$v$ yields the directed path $u\to x\to v$
with probability $1/4$, independently over the common neighbors of $u$ and~$v$.
Consequently, the probability that some ordered pair of distinct vertices is joined by
no directed path of length~$2$ is at most $n^2(3/4)^{n/50}=\mathrm{o}(1)$.
In other words, with probability $1-\mathrm{o}(1)$,
for all $u\in V(G)$ we have $V(G)=\{u\}\cup N^1(u)\cup N^2(u)$.
Finally, by averaging outdegrees,
we can find a vertex~$z\in V(D)$ with outdegree at most~$(n-1)/2$,
because~$\sum_{v\in V(D)}\deg^+(v)=e(G) \le n(n-1)/2$.
Such a~\(z\) is a Seymour vertex, as desired, since then $|N^2(z)|\ge n-1-|N^1(z)|\ge|N^1(z)|$.
\end{proof}
\subsection{Orientations with large minimum outdegree}
\label{s:minimum-degree}
Our last result in this section yields yet another class of
orientations of $p$-typical graphs which must always contain
a Seymour vertex.
In fact, we consider a larger class of underlying graphs,
showing that
if a graph~$G$ satisfies
items
\ref{i:p-typical:1}~and~\ref{i:p-typical:sharp-XY}
of Definition~\ref{d:p-typical}, then
every orientation~$D$ of $G$
with minimum outdegree~$\delta^+(D)=\Omega(n^{1/2})$
contains a Seymour vertex.
This may be useful towards extending the range of~$p$
for which a.a.s.\ $G(n,p)\in\mathcal{S}$.
\begin{lemma}\label{l:min-degree}
Fix $\beta > 0$.
There exist a constant~$C=C(\beta)$
and $n_0=n_0(\beta)$
such that the following holds for all $n\ge n_0$ and $p\le 2/3 -\beta$.
If $G$ is a graph of order $n$
that satisfies items~\ref{i:p-typical:1}
and~\ref{i:p-typical:sharp-XY}
of Definition~\ref{d:p-typical},
then every orientation $D$ of~$G$
for which~$\delta^+(D)\ge C n^{1/2}$ has a Seymour vertex.
\end{lemma}
Note that
Lemma~\ref{l:min-degree} and Lemma~\ref{l:gnp-typical}
immediately imply Theorem~\ref{t:gnp-min-outdeg}.
\begin{proof}[Proof of Lemma~\ref{l:min-degree}]
Since $(1-3p/2)\ge 3\beta/2$, we may fix $C \ge 4$ so that
\[
\Bigl(1-\frac{3p}{2}\Bigr)C - \Bigl(\sqrt{3p(1-p)} + \sqrt{6p(1-p)}\Bigr)
\geq \frac{3\beta C}{2} - 4 \ge 1.
\]
Fix \(v\in V(D)\) with $\deg^+(v)=\delta^+(D)$,
let \(X = N^1(v)\) and \(Y = N^2(v)\).
We shall prove that \(|X| \leq |Y|\).
Suppose to the contrary that \(|Y| < |X|\).
By Definition~\ref{d:p-typical}\,\ref{i:p-typical:1},
\begin{align}\label{e:e(X,Y)-lower}
\vec e\,(X,Y)
= \sum_{a\in X}\deg^+(a) - e(X)
& \ge |X|^2 - \biggl(\frac{|X|^2p}{2} + |X|\sqrt{3np(1-p)} + 2n\biggr) \nonumber\\
& = \left(1-\frac{p}{2}\right)|X|^2 - \Bigl(|X|\sqrt{3np(1-p)} + 2n\Bigr),
\end{align}
and by Definition~\ref{d:p-typical}\,\ref{i:p-typical:sharp-XY} (with $n'=n''=n$)
we have
\begin{align}\label{e:e(X,Y)-upper}
\vec e\,(X,Y)
\le e(X,Y)
& \le |X||Y|p + \sqrt{6np(1-p)|X||Y|} + 2n \nonumber\\
& < |X|^2p + |X|\sqrt{6np(1-p)} + 2n.
\end{align}
Since $|X|\ge Cn^{1/2} \ge n^{1/2}$,
combining \eqref{e:e(X,Y)-lower}~and~\eqref{e:e(X,Y)-upper}
yields the following contradiction.
\begin{align*}
4n & > \Bigl(1-\frac{3p}{2}\Bigr)|X|^2 - |X|\Bigl(\sqrt{3np(1-p)} + \sqrt{6np(1-p)}\Bigr) \\
& \ge Cn\left( \Bigl(1-\frac{3p}{2}\Bigr)C - \Bigl(\sqrt{3p(1-p)} + \sqrt{6p(1-p)}\Bigr)\right) \geq 4n.
\qedhere
\end{align*}
\end{proof}
\section{Typical orientations of bijumbled graphs}
\label{s:pseudo-random}
In this section, we focus on a well-known class of pseudorandom
graphs (that is, deterministic graphs that share many of the
properties of $G(n,p)$\,), and argue that almost all of their orientations
contain a Seymour vertex.
The following results concern graphs of order $n$ and density~$p$,
where $C n^{-1/2}\le p\le 1- \varepsilon$, and $C = C(\varepsilon) >0$ depends only on the constant~$\varepsilon >0$.
\begin{definition}[$(p,\alpha)$-bijumbled]
\label{def:(p,alpha)-bijumbled}
Let~$p$ and~$\alpha$ be given. We say that a graph~$G$ of order~$n$
is \defi{weakly $(p,\alpha)$-bijumbled} if, for all~$U$,
$W\subset V(G)$ with $U\cap W=\emptyset$
and~$1\leq|U|\leq|W|\leq np|U|$, we have
\begin{equation}
\label{eq:(p,alpha)-bijumbled_def}
\big|e(U,W)-p|U||W|\big|\leq\alpha\sqrt{|U||W|}.
\end{equation}
If~\eqref{eq:(p,alpha)-bijumbled_def}
holds for all disjoint~$U$, $W\subset V(G)$, then we say that~$G$ is
\defi{$(p,\alpha)$-bijumbled}.
\end{definition}
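Informally, \eqref{eq:(p,alpha)-bijumbled_def} requires the number of edges between the two sets to deviate from its density-based prediction $p|U||W|$ by at most $\alpha\sqrt{|U||W|}$; the smaller~$\alpha$, the more random-like the graph. For example, if $G$ is $(p,\alpha)$-bijumbled (not merely weakly), then taking $U=\{v\}$ and $W=V(G)\setminus\{v\}$ shows that every vertex satisfies $|\deg(v)-p(n-1)|\le\alpha\sqrt{n-1}$.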
We note that the binomial random graph is a.a.s.\ weakly bijumbled.
\begin{theorem}[Lemma 3.8 in \cite{haxell95ramsey}]
\label{t:gnp-is-bijumled}
For any $p:\mathbb{N}\to (0,1]$,
the random graph $G(n,p)$ is a.a.s.\
weakly $(p, A\sqrt{np})$-bijumbled for a certain absolute
constant~$A\le \mathrm{e}^2\sqrt{6}$.
\end{theorem}
In what follows, $A$ shall always denote the constant from Theorem~\ref{t:gnp-is-bijumled}.
A simple double-counting argument shows the following.
\begin{fact}
\label{fact:jumbled}
If~$G$ is weakly $(p,\alpha)$-bijumbled, then for
every~$U\subset V(G)$ we have
\begin{equation}
\label{eq:(p,alpha)-single}
\left|
e\bigl(G[U]\bigr) - p \binom{|U|}{2}
\right| \leq \alpha|U|.
\end{equation}
\end{fact}
We also use the following result,
whose simple proof we include for completeness.
\begin{lemma}\label{l:wbij-props}
There exists a universal constant~$C>1$ such that
if $A\ge 2$ and $\varepsilon,\,p\in(0,1)$ are such that $\varepsilon^3np^2 \ge A^2C$,
then every weakly~$(p,A\sqrt{np})$-bijumbled graph~$G$ of order~$n$ satisfies
the following properties.
\begin{enumerate}
\item\label{l:bij-props:deg}
$\bigl|\,\{v\in V(G) :
|\deg(v)-np| > \varepsilon np\}\,\bigr| \le \varepsilon n$.
\item\label{l:bij-props:codeg}
$\bigl|\,\{(u,v)\in V(G)^2 :
\deg(u,v) \le (1 - \varepsilon)np^2\}\,\bigr| \le \varepsilon n^2$.
\item\label{l:bij-props:orient}
For every orientation of $G$ and every integer $d$,
we have
\[
\bigl|\,\{v\in V(G) : \deg^+(v) < d\}\,\bigr|
\leq 2\frac{d-1}{p} + 2A\sqrt{\frac{n}{p}} +1
\]
\end{enumerate}
\end{lemma}
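In words, \ref{l:bij-props:deg} says that all but at most $\varepsilon n$ vertices have degree within $\varepsilon np$ of~$np$, \ref{l:bij-props:codeg} says that all but at most $\varepsilon n^2$ ordered pairs of vertices have more than $(1-\varepsilon)np^2$ common neighbors, and \ref{l:bij-props:orient} is a bijumbled analogue of Lemma~\ref{l:bad-degrees:2}.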
\begin{proof}
Let \(G\) be as in the statement.
We may and shall assume that $C$ is large enough so that the required inequalities
hold.
Throughout this proof, $W$
denotes the set of vertices with degree strictly below~$(1-2\varepsilon/3)np$.
Firstly, we prove~\ref{l:bij-props:deg}.
We claim that $|W|<\varepsilon n/2$.
Indeed, suppose the contrary and
consider a subset \(W'\subseteq W\) of size precisely \(\varepsilon n/2\).
By Fact~\ref{fact:jumbled}, we have
\begin{align}
e(W')
& \ge p\frac{(\varepsilon n/2)^2}{3} - A\sqrt{np}(\varepsilon n/2)
= p\frac{(\varepsilon n/2)^2}{3}\left(1 - \sqrt{\frac{36A^2}{\varepsilon^2np}}\right)
\nonumber\\
& > p\frac{(\varepsilon n/2)^2}{4}
= \frac{\sqrt{2}}{16}
\, \sqrt{\frac{\varepsilon^3np}{1-\varepsilon/2}}
\, \sqrt{\vphantom{\frac{\varepsilon^3np}{1-\varepsilon/2}}\frac{\varepsilon}{2} (1-\varepsilon/2)n^3p}
\nonumber\\
& \ge A\sqrt{np\frac{\varepsilon n}{2}(1-\varepsilon/2)n}
= A\sqrt{np|W'|(n-|W'|)}.
\label{e:e(W')}
\shortintertext{Now, note that $|V(G)\setminus W'| < n \le A^2Cn/(2\varepsilon^2p) \le \varepsilon n^2p/2 = np|W'|$, but}
e\bigl(W', V(G)\setminus W'\bigr)
& < |W'| \cdot (1-2\varepsilon/3)np -2e(W')\nonumber\\
& < |W'| \cdot (1-\varepsilon/2)np -2e(W') \nonumber\\
& = p|W'|(n-|W'|) -2e(W') \nonumber\\
& \stackrel{\eqref{e:e(W')}}{\le} p|W'|(n-|W'|) - A\sqrt{np|W'|(n-|W'|)}, \nonumber
\end{align}
which contradicts the weak bijumbledness of \(G\).
Similarly, we show that the set $Z$ of vertices
having degree strictly greater than $(1+2\varepsilon/3)pn$
satisfies $|Z| < \varepsilon n/2$, which together with the argument
above proves~\ref{l:bij-props:deg}.
More precisely,
suppose that $|Z|\ge \varepsilon n / 2$ and
fix $Z'\subseteq Z$ with $|Z'|=\varepsilon n/2$.
We claim that
$A\sqrt{np}|Z'|$ and~$A\sqrt{np|Z'|(n-|Z'|)}$
are both small (constant) fractions of~$p|Z'|^2$.
Indeed, as $|Z'|^2 < |Z'|(n-|Z'|) < |Z'|n $, it follows that
\begin{align*}
\frac{A\sqrt{np}|Z'|}{p|Z'|^2}
& < \frac{A\sqrt{np|Z'|(n-|Z'|)}}{p|Z'|^2}
< \frac{A\sqrt{n^2p|Z'|}}{p|Z'|^2}\\
& = \sqrt{\frac{A^2n^2}{p|Z'|^3}}
= \sqrt{\frac{A^2n^2}{p(\varepsilon n/2)^3}}
\stackrel{(\sharp)}{\le} \sqrt{\frac{8p}{C}},
\end{align*}
where \((\sharp)\) is due to~$\varepsilon^3np^2\ge CA^2$.
Fact~\ref{fact:jumbled} and the previous inequalities imply
\begin{align*}
e(Z')
& < \frac{p|Z'|^2}{2} + A\sqrt{np}|Z'|
\\
& < p|Z'|^2\left(\frac{1}{2}+\sqrt{\frac{8p}{C}}\right) \\
& < p|Z'|^2\left(\frac{1}{2}+\sqrt{\frac{32p}{C}}\right) -A\sqrt{np|Z'|(n-|Z'|)}
\\
& < p|Z'|^2 -A\sqrt{np|Z'|(n-|Z'|)}.
\end{align*}
Analogously, we have $|V(G)\setminus Z'| < np|Z'|$, but
\begin{align*}
e(Z',V(G)\setminus Z')
& \ge (1+2\varepsilon/3)np|Z'| - 2e(Z') \\
& \ge p|Z'|\bigl(n-|Z'|\bigr)
+ \left(\frac{1}{2}+\frac{2}{3}\right)\varepsilon np|Z'| - 2e(Z') \\
& > p|Z'|\bigl(n-|Z'|\bigr) + 2p|Z'|^2 - 2e(Z') \\
& > p|Z'|\bigl(n-|Z'|\bigr) + A\sqrt{np|Z'|(n-|Z'|)},
\end{align*}
which is again a contradiction to
Definition~\ref{def:(p,alpha)-bijumbled}.
This concludes the proof of~\ref{l:bij-props:deg}.
We next prove~\ref{l:bij-props:codeg}.
For each $u\in V(G)$, let $B(u)$ be the set of vertices
that have at most $(1-\varepsilon)np^2$ common neighbors with~$u$.
By definition,
for any vertex \(u\) and set \(B'\subseteq B(u)\) we have
\(e\bigl(N(u), B'\bigr) \le (1-\varepsilon)np^2\bigl|B'\bigr|\).
We shall prove that $\bigl|B(u)\bigr|< \varepsilon n/2$ for all~$u\in V(G)\setminus W$.
Indeed, suppose
for a contradiction, that~$u\in V(G)\setminus W$ and~$|B(u)|\ge \varepsilon n/2$.
Let \(N'\subset N(u)\) be a set of size precisely \((1-2\varepsilon/3)np\),
and let \(B'\subseteq B(u)\) be a set of size precisely \(\varepsilon n/2\).
Since $\varepsilon^3np^2 \ge A^2C$, we have
\begin{align}
\frac{\varepsilon n p^2|B'|}{3}
= \frac{\varepsilon^2 n^2 p^2}{6}
> \frac{1}{6}\sqrt{\frac{\varepsilon^4n^4p^4(1-2\varepsilon/3)}{2}}
& > A\sqrt{np |N'||B'|}.\label{e:aux-2}
\end{align}
We claim that $|N'|\le np|B'|$ and $|B'|\le np|N'|$.
Indeed, $|N'|\le np\le \varepsilon n^2p/2= np |B'|$
because~$\varepsilon n/2 > 1$,
and~$|B'|=\varepsilon n/2 \le A^2Cn/(3\varepsilon^3)\le n^2p^2/3 \le np|N'|$
because~$\varepsilon^3np^2 \ge A^2C$ and~$\varepsilon <1$.
Hence, since $G$ is weakly bijumbled,
we reach the following contradiction
\begin{align*}
p|N'||B'| - A\sqrt{np |N'||B'|}
\le e\bigl(N', B'\bigr)
& \le (1-\varepsilon)np^2\bigl|B'\bigr| \\
& = \left(1-\frac{2\varepsilon}{3}\right)np^2|B'| -\frac{\varepsilon n p^2|B'|}{3}\\
& \stackrel{\eqref{e:aux-2}}{<} p|N'||B'| - A\sqrt{np |N'||B'|}.
\end{align*}
Hence~$\bigl|B(u)\bigr|<\varepsilon n/2$ for all~$u\in V(G)\setminus W$.
Note that if~$\deg(u,v) \le (1 - \varepsilon)np^2$
for distinct~$u,\,v\in V(G)$,
then either $u\in W$ or $v\in B(u)$.
We conclude that there are at most $|W|n + n(\varepsilon n/2) < \varepsilon n^2$ such pairs,
as desired.
To prove~\ref{l:bij-props:orient},
fix an orientation~$D$ of $G$
and put~\(X=\{v\in V(G):\deg_{D}^+(v)< d\}\).
Since every vertex of $X$ has outdegree at most $d-1$, we have $e\bigl(G[X]\bigr)\le |X|(d-1)$,
and so Fact~\ref{fact:jumbled} yields
\[
|X|(d-1) \ge e\bigl(G[X]\bigr)
\ge
\binom{|X|}{2}p-
A\sqrt{np}|X|;
\]
rearranging gives the desired bound on $|X|$.\qedhere
\end{proof}
\subsection{Almost all orientations of bijumbled graphs}
In this section we show that
almost every orientation of a weakly bijumbled graph
contains a Seymour vertex.
\begin{unnumtheorem}[Theorem~\ref{t:typical-bij}]
There exists an absolute constant $C>1$ such that
the following holds.
Let~$G$ be a weakly~$(p,A\sqrt{np})$-bijumbled graph
of order~$n$, where $\varepsilon,\,p\in(0,1)$ satisfy $\varepsilon^3np^2 \ge A^2C$
and $p < 1-15\sqrt{\varepsilon}$.
If~\(D\) is chosen
uniformly at random among the $2^{e(G)}$
possible orientations of~$G$,
then a.a.s.\ \(D\) has a Seymour vertex.
\end{unnumtheorem}
\begin{proof}
We may and shall assume that $A^2C$
is larger than any given absolute constant.
Let $V=V(G)$.
For each $u\in V$,
let $B(u)=\{v\in V:\deg(u,v)\le (1-\varepsilon)np^2\}$.
Also, let $\bad_1=\bigl\{u\in V:|B(u)|\ge \sqrt{\varepsilon}n\bigr\}$.
Lemma~\ref{l:wbij-props}\,\ref{l:bij-props:codeg} guarantees that $|\bad_1|\le \sqrt{\varepsilon}n$
and, by definition, $|B(u)|<\sqrt{\varepsilon}n$
for each $u\notin \bad_1$.
Fix an arbitrary orientation of~$G$.
For simplicity, we write~$G$ for both the oriented and underlying graphs.
Let $\bad_2=\{v\in V(G): \deg^+(v) < 2\sqrt{\varepsilon}np\}$.
By~Lemma~\ref{l:wbij-props}\,\ref{l:bij-props:orient}, we must have
\[
|\bad_2|\le \frac{2(2\sqrt{\varepsilon}np -1)}{p}+2A\sqrt{\frac{n}{p}} + 1 < 5\sqrt{\varepsilon}n.
\]
Let $\bad=\bad_1\cup\bad_2$
and put~$U=V \setminus \bad$,
and note that~$|\bad|\le 6\sqrt{\varepsilon}n$.
\begin{claim}\label{cl:deg(w)}
There exists $w\in U$
such that
\begin{equation*}\label{e:deg(w)}
\deg_G^+(w) < n/2 - \sqrt{\varepsilon}n.
\end{equation*}
\end{claim}
\begin{subproof}{Proof}
Recall that $p<1 -15\sqrt{\varepsilon}$.
Hence~$\varepsilon < 15^{-2}<1$ and
\begin{equation}\label{e:aux}
\frac{(1+\varepsilon)p}{2} + 6\sqrt{\varepsilon}
< \frac{(1+\varepsilon)(1-15\sqrt{\varepsilon})}{2} + 6\sqrt{\varepsilon}
< \frac{1-2\sqrt{\varepsilon}}{2}.
\end{equation}
Note also that $\varepsilon^3np^2 \ge A^2C$ yields $A \le \sqrt{\varepsilon^3np^2/C}$. Hence,
\begin{equation}\label{e:anp-bound}
A\sqrt{np}
\le \varepsilon np\sqrt{\frac{\varepsilon p}{C}}
< \frac{\varepsilon np}{2}.
\end{equation}
By Fact~\ref{fact:jumbled}, we have
\begin{align}\label{e:avg-deg}
\frac{e(G[U])}{|U|}
& \le \frac{p}{|U|}\binom{|U|}{2}+A\sqrt{np}
\le \frac{p|U|}{2} + A\sqrt{np}
\stackrel{\eqref{e:anp-bound}}{\le} (1+\varepsilon)\frac{np}{2}.
\end{align}
Owing to~\eqref{e:avg-deg}, averaging the outdegrees of vertices in $U$
yields that some $w\in U$
satisfies $\deg_{G[U]}^+(w)\le e\bigl(G[U]\bigr)/|U| < (1+\varepsilon)np/2$.
Hence,
\begin{align}
\deg_G^+(w)
& \le \deg_{G[U]}^+(w) +|\bad|\nonumber\\
& < \frac{(1+\varepsilon)np}{2} + 6\sqrt{\varepsilon}n
\stackrel{\eqref{e:aux}}{\le} \frac{(1-2\sqrt{\varepsilon})n}{2}.\qedhere
\end{align}
\end{subproof}
Note that since we picked an arbitrary orientation of \(G\),
the vertex \(w\) given by Claim~\ref{cl:deg(w)} exists for any such orientation.
To conclude the proof, we next show that
in a random orientation of~$G$
almost surely every vertex in~$U$
is a~$(1-2\sqrt{\varepsilon})$-king,
where a vertex $v$ is said to be a \defi{$\lambda$-king}
if the number of vertices $z$ for which
there exists a directed path of length $2$ from~$v$ to $z$ is at least~$\lambda n$.
\begin{claim}\label{cl:large-neigh}
In a random orientation of $G$,
a.a.s.\ for each $X\subseteq V(G)$
with $|X|=2\sqrt{\varepsilon}np$ we have $\bigl|N^1(X)\bigr|\ge (1-2\sqrt{\varepsilon})n$,
where $N^1(X)=\bigcup_{x\in X} N^1(x)$.
\end{claim}
\begin{claimproof}
Note that for all $X,Y\subseteq V(G)$, there exist $X'\subseteq X$ and $Y'\subseteq Y$
such that $X'\cap Y'=\emptyset$ and $|X'|= |X|/2$ and $|Y'|= |Y|/2$.
Fix $X\subseteq V(G)$ with $|X|=2\sqrt{\varepsilon}np$.
If we choose $Y$ such that $|Y|=2\sqrt{\varepsilon}n$,
then~$|X'| \le |Y'| = \sqrt{\varepsilon}n \leq \sqrt{\varepsilon}n^2p^2 = np |X'|$
because \(np^2 \geq A^2C/\varepsilon^3 \geq 1\). Hence, as $G$ is weakly bijumbled,
\begin{equation}\label{e:nedges}
e(X,Y)\ge e(X',Y')\ge \frac{p|X||Y|}{4} -\frac{A\sqrt{np|X||Y|}}{2}.
\end{equation}
Let $\mathcal{E}_X$ denote the `bad' event that $\bigl|N^1(X)\bigr|< (1-2\sqrt{\varepsilon})n$.
If $\mathcal{E}_X$ occurs, then
there exists $Y\subseteq V(G)$ with $|Y|=2\sqrt{\varepsilon}n$ such that~$\vec e\, (X,Y)=0$,
where $\vec e\,(X,Y)$ denotes the number of edges oriented from~$X$ towards~$Y$.
For any $X$ such that~$|X|=2\sqrt{\varepsilon}np$, summing over all $Y$ of size~$2\sqrt{\varepsilon}n$
yields
\begin{align}
\Pr[\mathcal{E}_X]
& \le \sum_{Y}2^{-e(X,Y)}
\stackrel{\eqref{e:nedges}}{\le}
\binom{n}{2\sqrt{\varepsilon}n}
\exp\Bigl( -(\ln 2)\bigl(\varepsilon n^2p^2 -A\sqrt{\varepsilon n^3p^2}\bigr)\Bigr)
\nonumber\\
& \le
\exp\left(
2n\sqrt{\varepsilon}\ln\left( \frac{\mathrm{e}}{2\sqrt{\varepsilon}} \right)
-(\ln 2)\varepsilon n^2p^2\Bigl(1-\frac{\varepsilon}{\sqrt{C}}\Bigr)
\right)
\nonumber\\
&\le
\exp\left( 2n\sqrt{\varepsilon}\left(\frac{\mathrm{e}}{2\sqrt{\varepsilon}} \right)
-(\ln 2)\varepsilon n^2p^2\Bigl( 1-\frac{\varepsilon}{\sqrt{C}} \Bigr)\right)
\nonumber\\
&\le \exp\bigl(-2(\ln 2)n\bigr)
\label{e:Pr(cale_X)}
\end{align}
using that~$\varepsilon np^2\ge A^2C\varepsilon^{-2}\geq 12$ and that $\varepsilon/\sqrt{C} \le C^{-1/2} < 1/2$
because $\varepsilon < 1$ and $C$ is a large constant.
Taking a union bound over all~$X$ of size~$2\sqrt{\varepsilon}np$,
we see that with high probability no bad event occurs,
since
\[
\sum_X \Pr\bigl[ \mathcal{E}_X \bigr]
\stackrel{\eqref{e:Pr(cale_X)}}{\le}
2^n\exp\bigl(-2(\ln 2)n\bigr)
= \mathrm{o}(1),
\]
and the claim holds as required.
\end{claimproof}
We conclude by showing that $w$ is a Seymour vertex.
Indeed, since \(w\notin \bad_2\), we have $\deg^+(w)\ge 2\sqrt{\varepsilon}np$,
so we may fix $X\subseteq N^1(w)$ with $|X|=2\sqrt{\varepsilon}np$.
Claim~\ref{cl:large-neigh} gives $\bigl|N^1(X)\bigr|\ge (1-2\sqrt{\varepsilon})n$.
Moreover, since $G$ is an orientation of a simple graph, no outneighbor of $w$ sends an edge back to $w$,
and hence $N^1(X)\subseteq N^1_G(w)\cup N_G^2(w)$.
Together with Claim~\ref{cl:deg(w)}, which gives \(\deg_G^+(w) < (1-2\sqrt{\varepsilon})n/2\),
this implies
\begin{align*}
\bigl|N_G^2(w)\bigr|
& \ge (1-2\sqrt{\varepsilon})n - \deg_G^+(w)
> \frac{(1 - 2\sqrt{\varepsilon})n}{2}
> \deg_G^+(w).\qedhere
\end{align*}
\end{proof}
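The following small experiment (illustrative only, with ad hoc parameters) mirrors the statement just proved: it orients a sample of $G(n,p)$ uniformly at random and searches for a Seymour vertex.
\begin{verbatim}
# Orient each edge of a G(n,p) sample uniformly at random and look for a
# Seymour vertex, i.e. a vertex v with |N^2(v)| >= |N^+(v)|, where N^2(v)
# is the set of vertices at out-distance exactly 2 from v.
import itertools, random

def random_oriented_gnp(n, p, rng):
    out = [set() for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            if rng.random() < 0.5:
                out[u].add(v)
            else:
                out[v].add(u)
    return out

def has_seymour_vertex(out):
    for v in range(len(out)):
        first = out[v]
        second = set()
        for u in first:
            second |= out[u]
        second -= first
        second.discard(v)
        if len(second) >= len(first):
            return True
    return False

rng = random.Random(1)
trials, n, p = 20, 200, 0.3
hits = sum(has_seymour_vertex(random_oriented_gnp(n, p, rng))
           for _ in range(trials))
print(hits, "of", trials, "random orientations contained a Seymour vertex")
\end{verbatim}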
\section{Concluding remarks}\label{sec:concluding-remarks}
In this paper we confirmed Seymour's Second Neighborhood Conjecture (SNC)
for a large family of graphs,
including almost all orientations of (pseudo)random graphs.
We also prove that this conjecture holds a.a.s.\ for
arbitrary orientations of the random graph~$G(n,p)$, where
$p=p(n)$ lies below $1/4$. Interestingly, this range of $p$ encompasses both
sparse and dense random graphs.
The main arguments in our proofs lie in finding a vertex~$w$ of relatively low
outdegree whose outneighborhood contains many vertices of somewhat large outdegree.
Since outneighbors of $w$ cannot have small common outneighborhood,
we conclude that $\bigl|N^2(w)\bigr|$ must be large.
Naturally, it would be interesting to extend further the range of densities for
which arbitrary orientations of $G(n,p)$ satisfy the SNC.
It seems likely that other classes of graphs,
such as $(n,d,\lambda)$-graphs,
are susceptible to attack using this approach.
Theorem~\ref{t:typical} is also a small step
towards the following weaker
version of Conjecture~\ref{conj:snc}.
\begin{question}
Do most orientations of an arbitrary graph $G$
satisfy the SNC?
\end{question}
\phantomsection
\addcontentsline{toc}{section}{Acknowledgments}
\section*{Acknowledgments}
The authors thank Yoshiharu Kohayakawa for useful discussions,
in particular for suggesting we consider bijumbled graphs.
\noindent{\par\small\linespread{1}
This research has been partially supported by Coordena\c c\~ao de Aperfei\c coamento
de Pessoal de N\'\i vel Superior -- Brasil -- CAPES -- Finance Code 001.
F.~Botler is supported by CNPq {(\small 423395/2018-1)}
and by FAPERJ {(\small 211.305/2019 and 201.334/2022)}.
P.\,Moura is supported by FAPEMIG {(\small APQ-01040-21)}.
T.~Naia is supported by CNPq {(\small 201114/2014-3)} and FAPESP {(\small 2019/04375-5, 2019/13364-7, 2020/16570-4)}.
FAPEMIG, FAPERJ and FAPESP are, respectively, Research Foundations of Minas Gerais,
Rio de~Janeiro and S\~ao Paulo. CNPq is the National
Council for Scientific and Technological Development of Brazil.
}
\phantomsection
\addcontentsline{toc}{section}{References}
\begin{adjustwidth}{-0em}{-0.1em}
{\footnotesize
}
\end{adjustwidth}
\appendix
\section[Appendix: G(n,p) is p-typical]{Proof that $G(n,p)$ is $p$-typical (Lemma~\ref{l:gnp-typical})}
\label{a:auxiliary}
In this section, we show that $G(n,p)$ satisfies the standard properties
of Definition~\ref{d:p-typical}.
To simplify this exposition, we make use of Lemma~\ref{l:chernoff-2term} below.
Let \defi{$B\sim \mathcal{B}(N,p)$} denote that $B$ is a binomial
random variable corresponding to the number of successes in
$N$ mutually independent trials,
each with success probability~$p$.
\begin{lemma}\label{l:chernoff-2term}
For all $N\in\mathbb{N}$, all $p\in (0,1)$ and all positive $x$,
if $B\sim \mathcal{B}(N,p)$
then
\begin{equation*}
\Pr\bigl[\, |B - Np| > \sqrt{6Np(1-p) x} + 2x \,\bigr] < 2\exp(-3x).
\end{equation*}
\end{lemma}
Lemma~\ref{l:chernoff-2term}
follows from the following Chernoff inequality
(see~\cite[Lemma 2.1]{2000:Janson_etal:RG}).
\begin{lemma}\label{l:chernoff}
Let $X\sim\mathcal{B}(N,p)$
and $\sigma^2 = Np(1-p)$.
For all $t > 0$ we have
\[
\Pr\bigl[|X - \EE X | > t\bigr] < 2\exp\left(-\frac{t^2}{2(\sigma^2 + t/3)}\right).
\]
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l:chernoff-2term} using Lemma~\ref{l:chernoff}]
Let $\sigma^2 = Np(1-p)$
and~\(t = \sqrt{x^2 + 6x\sigma^2} + x\).
Since~\((t-x)^2 = x^2 + 6x\sigma^2\),
we have \(t^2 = 2tx + 6x\sigma^2 = 6x(\sigma^2 +t/3)\).
By Lemma~\ref{l:chernoff},
\begin{equation}\label{eq:simple-chernoff}
\Pr\bigl[| B-\EE\, B| > t\bigr]
< 2\exp\left(-\frac{t^2}{2(\sigma^2 + t/3)}\right) = 2\exp(-3x).
\end{equation}
Since
$t \le \sqrt{6\sigma^2x} + 2x$,
we have
\begin{align*}
\Pr\bigl[| B-\EE\, B| > \sqrt{6\sigma^2x} + 2x\bigr]
& \le \Pr\bigl[| B-\EE\, B| > t\bigr]
\stackrel{\eqref{eq:simple-chernoff}}{<}
2\exp(-3x). \qedhere
\end{align*}
\end{proof}
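For a quick numerical illustration of Lemma~\ref{l:chernoff-2term} (not needed in what follows), one may estimate the tail probability by simulation and compare it with the bound $2\exp(-3x)$; the parameters in the Python sketch below are arbitrary.
\begin{verbatim}
# Estimate P[|B - Np| > sqrt(6*N*p*(1-p)*x) + 2*x] for B ~ Bin(N, p) by
# simulation and compare it with the bound 2*exp(-3*x) of the lemma above.
import math, random

rng = random.Random(2)
N, p, x, trials = 500, 0.3, 2.0, 20000
threshold = math.sqrt(6 * N * p * (1 - p) * x) + 2 * x
exceed = 0
for _ in range(trials):
    b = sum(rng.random() < p for _ in range(N))
    if abs(b - N * p) > threshold:
        exceed += 1
print("empirical tail:", exceed / trials,
      "  bound:", round(2 * math.exp(-3 * x), 5))
\end{verbatim}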
We next show that $G(n,p)$ is $p$-typical.
The properties in Definition~\ref{d:p-typical}
follow by choosing \(x\) in Lemma~\ref{l:chernoff-2term}
so as to make the appropriate union bound small.
\begin{unnumlemma}[Lemma~\ref{l:gnp-typical}]
For every $p\colon\mathbb{N} \to (0,1)$,
a.a.s.\ $G=G(n,p)$ is $p$-typical.
\end{unnumlemma}
\begin{proof}
We will show that a.a.s.~\ref{i:p-typical:1}--\ref{i:p-typical:common-neigh}
of Definition~\ref{d:p-typical}
hold.
Given a random variable~$Z$ and~$x > 0$, let~$\mathbbold{1}(Z,x)$ be
the indicator variable of the `bad' event
\[
|Z - \EE Z| > \sqrt{6x\Var(Z)} + 2x,
\]
where~$\Var (Z)$ is the variance of~$Z$.
By Lemma~\ref{l:chernoff-2term}, if $Z\sim\mathcal{B}(N,p)$
then
\begin{equation}\label{e:exp-Z}
\EE\bigl( \mathbbold{1}(Z,x)\bigr) = \Pr\bigl[ \mathbbold{1}(Z,x) = 1\bigr] < 2\exp(-3x).
\end{equation}
Firstly, we show that a.a.s.~\ref{i:p-typical:1}
holds.
For each $X\subseteq V(G)$, let $Z_X=e(X)$ and let
\[
Z^\star=\sum_{X\subseteq V(G)} \mathbbold{1}\left(Z_X,n\right),
\]
taking~$x=n$.
Note that $Z_X\sim\mathcal{B}\bigl(\binom{|X|}{2},p\bigr)$ for all $X$.
By linearity of expectation,
\begin{align*}
\EE\, Z^\star
& = \sum_{X\subseteq V(G)} \EE \bigl(\mathbbold{1}(Z_X,n)\bigr)
\stackrel{\eqref{e:exp-Z}}{<}
\sum_{X\subseteq V(G)} 2\exp\left(-3n\right)
<
2^{n+1}\exp\left(-3n\right) = \mathrm{o}(1).
\end{align*}
Since $Z^\star\ge 0$ (it is the sum of indicator random variables),
we may use Markov's inequality,
obtaining~$\Pr[Z^\star\ge 1]\le \EE\,Z^\star = \mathrm{o}(1)$.
A similar calculation, considering in turn
$\deg(v)$ or~$N(u)\cap N(v)$ instead of~$e(X)$,
proves that each of the items~\ref{i:p-typical:3} and~\ref{i:p-typical:common-neigh}
fails to hold with probability~$\mathrm{o}(1)$,
taking~$x$ as~$\ln n$ in both cases, and taking
union bounds over $n$ or~$\binom{n}{2}$~events respectively.
Hence $G(n,p)$ satisfies properties~\ref{i:p-typical:1},
\ref{i:p-typical:3} and~\ref{i:p-typical:common-neigh}
with probability~$1-\mathrm{o}(1)$.
The strategy to prove~\ref{i:p-typical:sharp-XY}
is similar to the above, but calculating the number of~events
in the union bound is slightly more involved.
If $n'=n''=n$, then (as~above) we consider~$e(X,Y)$ in place of~$e(X)$,
let $x=n$ and take a union bound over $2^{2n}$ events.
Otherwise, if $1\le n'\ln n\le n''\le n$,
then let $\Omega$ be the set of pairs $\{X,Y\}$ with $X,Y\subseteq V(G)$
and $|X|, |Y| \leq n'$, and
note that \(|\Omega| \leq 1+\bigl(\sum_{i=1}^{n'} \binom{n}{i}\bigr)^2\).
Since $i\le n'<n/2$ for sufficiently large $n$,
we have \(\binom{n}{i} \leq \binom{n}{n'}\le \left(\frac{en}{n'}\right)^{n'}\)
and therefore
\[
|\Omega|
\le 1 + \biggl(\:\sum_{i=1}^{n'} \binom{n}{n'}\biggr)^2
\le 2\left(n'\binom{n}{n'}\right)^2
\le 2\biggl(\,n' \left(\frac{\mathrm{e} n}{n'}\right)^{n'}\,\biggr)^2
\le 2\exp\bigl(2n'(1+\ln n)\bigr).
\]
By Lemma~\ref{l:chernoff-2term},
for each $\{X,Y\}\in \Omega$
we have~$\Pr\bigl[\mathbbold{1}\bigl(e(X,Y),n''\bigr)=1\bigr] < 2\exp(-3n'')$.
Applying Markov's inequality
to~$Z^\star = \sum_{\{X,Y\}\in \Omega}\mathbbold{1}\bigl(e(X,Y),n''\bigr)$,
we obtain
\begin{align*}
\Pr[Z^\star\ge 1]
& \le \EE\, Z^\star
< 2\exp\bigl(2n'(1+\ln n)\bigr)\cdot 2\exp(-3n'')
\le 2\exp(-n''/2)
= \mathrm{o}(1),
\end{align*}
where we use that $\ln n \le n'\ln n \le n''$ and that $n$ is sufficiently large.
\end{proof}
\end{document}
\newtheorem{Thm}{Theorem}[section]
\newtheorem{Cor}[Thm]{Corollary}
\newtheorem{Pro}[Thm]{Proposition}
\newtheorem{Lem}[Thm]{Lemma}
\theoremstyle{definition}
\newtheorem{Def}[Thm]{Definition}
\theoremstyle{remark}
\newtheorem{Rem}[Thm]{Remark}
\newtheorem{Exa}[Thm]{Example}
\numberwithin{equation}{section}
\begin{document}
\title{A Schwarz-Type Lemma for Squeezing Function on Planar Domains}
\author{Ahmed Yekta Ökten}
\address{Department of Mathematics, Middle East Technical University, 06800 Ankara, Turkey}
\email{ayokten@metu.edu.tr}
\begin{abstract} With an easy application of the maximum principle, we establish a Schwarz-type lemma for the squeezing function on finitely connected planar domains that directly yields the explicit formula for the squeezing function on doubly connected domains obtained in \cite{NTT} by Ng, Tang and Tsai.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Let $\Omega$ be a domain in $\mathbb{C}^n$ such that the set of univalent maps from $\Omega$ into the unit ball $\mathbb{B}^n \subset \mathbb{C}^n$ is non-empty. Denote the set of such maps by $\mathscr{U}(\Omega)$ and let $\operatorname{dist}(\cdot,\cdot)$ denote the Euclidean distance. The squeezing function of $\Omega$, $S_{\Omega}:\Omega \longrightarrow (0,1]$, is defined by
\[ S_{\Omega}(z)=\sup_{\substack{f\in\mathscr{U}(\Omega)\\f(z)=0}}\bigl\{r\in(0,1] : r\,\mathbb{B}^n \subset f(\Omega)\bigr\}
=\sup_{\substack{f\in\mathscr{U}(\Omega)\\f(z)=0}}\operatorname{dist}\bigl(0,\partial f(\Omega)\bigr). \]
The squeezing function, which is a biholomorphic invariant, arose from the study of invariant metrics on Teichmüller spaces of Riemann surfaces; see \cite{LSY1,LSY2}. Its properties have been investigated by many authors and many interesting applications have been demonstrated. The references \cite{YSK,DGZ1,DGZ2} provide an introduction to the topic, whereas \cite{DZ,F2,F4,F3,F1,NTT2,RY,ZIM} are suggested for further study.

We follow the terminology of Solynin. By the \textit{squeezing problem on $\Omega$} we mean the problem of determining $S_{\Omega}$ explicitly. A domain $\Omega$ is said to be \textit{extremal} for the squeezing problem for $0\in\Omega$ if $S_{\Omega}(0)=\operatorname{dist}(0,\partial \Omega)$, and a map $f\in\mathscr{U}(\Omega)$ is said to be \textit{extremal} for the squeezing problem for $z\in\Omega$ if $f(z)=0$ and $S_{\Omega}(z)=\operatorname{dist}(0,\partial f(\Omega))$. A standard normal families argument shows that the supremum in the definition of the squeezing function is attained; in other words, there exist an extremal map and an extremal domain for every squeezing problem.

Let $\Delta_r\subset \mathbb{C}$ denote the disc of radius $r$ centered at the origin; for convenience we write $\Delta:=\Delta_1$. A domain $D\subset\Delta$ is said to be a circular slit disc if the connected components of $\Delta \setminus D$ are closed proper arcs of circles centered at the origin. Circular slit discs are a canonical type of domain in the following sense: for every point of a finitely connected planar domain whose boundary contains no isolated points (in this case we say that the domain has non-degenerate boundary), we can find a conformal map that takes the point to the origin and maps the domain onto a circular slit disc.

Determining the squeezing function explicitly is a hard problem, even in the case of planar domains. Recently, a non-trivial example was given in \cite{NTT}, where the authors used Löwner's differential equation to obtain the explicit formula for the squeezing functions of doubly connected domains. In \cite{GR} and \cite{S} the same result is proven with different techniques, and it is also established that circular slit maps are the only extremal maps for squeezing problems on doubly connected domains. Considering domains with non-degenerate boundary components is sufficient to settle the squeezing problem on doubly connected domains, since there are no conformal maps from $\mathbb{C}^{*}:=\mathbb{C}\setminus\{0\}$ into $\Delta$ and the squeezing function of $\Delta^{*}:=\Delta\setminus\{0\}$ is easily calculated in \cite{DGZ1}.
\begin{Thm}[Ng, Tang, Tsai]
Let $\Omega$ be a doubly connected domain with non-degenerate boundary components and let $z\in\Omega$. Let $C^1(z,\Omega)$ and $C^2(z,\Omega)$ be the circular slit discs onto which $\Omega$ is mapped with $z$ sent to $0$ and with the inner, respectively outer, boundary component of $\Omega$ corresponding to $\partial \Delta$. Let $r^1(z,\Omega)$ and $r^2(z,\Omega)$ denote the radii of the slits in $C^1(z,\Omega)$ and $C^2(z,\Omega)$ respectively. Then \[S_{\Omega}(z)=\max\{r^1(z,\Omega), r^2(z,\Omega)\}. \tag{1}\] In particular, the squeezing function on the annulus $\mathbb{A}_{r}=\Delta\setminus\overline{\Delta_{r}}$ is given explicitly by \[ S_{\mathbb{A}_{r}}(z)=\max\Bigl\{ |z|,\frac{\sqrt{r}}{|z|}\Bigr\}. \tag{2}\]
Moreover, up to rotation, either one or both of these circular slit discs are the unique extremal domains for this problem.
\end{Thm}
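For illustration, formula (2) is easy to evaluate numerically; the short Python sketch below (with an arbitrary choice of $r$) prints $S_{\mathbb{A}_r}$ at a few moduli and shows that it is smallest on the circle $|z|=r^{1/4}$, where it equals $r^{1/4}$.
\begin{verbatim}
# Evaluate formula (2): the squeezing function of the annulus
# A_r = {r < |z| < 1} is S(z) = max(|z|, sqrt(r)/|z|).  It is minimised on
# the circle |z| = r**0.25, where its value equals r**0.25.
import math

def squeezing_annulus(z, r):
    return max(abs(z), math.sqrt(r) / abs(z))

r = 0.1
for modulus in (0.35, r ** 0.25, 0.9):   # points with these moduli lie in A_r
    print(round(modulus, 4), "->", round(squeezing_annulus(modulus, r), 4))
\end{verbatim}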
Now, let $\Omega$ be a planar domain of connectivity $n>2$ with non-degenerate boundary and let $z\in\Omega$. Also let $\{\Gamma_i\}_{i \in \{1,2,\ldots,n\}}$ be the set of boundary components of $\Omega$. We have exactly $n$ conformal maps $\{\psi_{i,z}\}_{i \in \{1,2,\ldots,n\}}$ that satisfy $\psi_{i,z}(z)=0$, normalized so that $\psi_{i,z}'(z)>0$ and $\psi_{i,z}(\Gamma_i)=\partial \Delta$ in the sense of boundary correspondence. Chapter 5 of \cite{JEN} provides more information about these canonical conformal maps. Denote $\psi_{i,z}(\Omega)=C_i(z,\Omega)$, where $C_i(z,\Omega)$ is a circular slit disc. Denote the slits of $C_i(z,\Omega)$ by $s^i_j(z,\Omega)=\psi_{i,z}(\Gamma_j)$, $j \neq i$, and let $r^i_j(z,\Omega)=\operatorname{dist}(0,s^i_j(z,\Omega))$. The work of Ng, Tang and Tsai led them to conjecture that circular slit maps are extremal mappings for the squeezing problems on domains of higher connectivity as well. More explicitly, they conjectured that, in the setting above, the following equality holds: \[S_{\Omega}(z)=\max_{i}\min_{j\neq i}r^i_j(z,\Omega).\]
This conjecture was disproven by Gumenyuk and Roth in \cite{GR}, where they proved that for every $n>2$ there exist an $n$-connected domain $\Omega$ and a point $z \in \Omega$ such that $S_{\Omega}(z)>\max_{i}\min_{j\neq i}r^i_j(z,\Omega)$. Very recently, Solynin \cite{S} showed that for every $n>2$ there exists a circular slit disc of connectivity $n$ that is an extremal domain for the squeezing problem at $0$.

In this short paper, with an easy application of the maximum principle we obtain Theorem 1.2 below for the squeezing function on multiply connected planar domains with non-degenerate boundary. We note that, when specialized to doubly connected domains, the inequality directly yields Theorem 1.1.
\begin{Thm} Let $\Omega$ be an $n$-connected domain with non-degenerate boundary and let $z\in\Omega$.
With the notation above, the squeezing function of $\Omega$ satisfies \[\max_{i}\min_{j\neq i}r^i_j(z,\Omega) \leq S_{\Omega}(z)\leq \max_{i}\max_{j\neq i}r^i_j(z,\Omega).\tag{4}\]
\end{Thm}
\section{Preliminaries on Circular Slit Maps}\label{sec:prelim}
Circular slit maps are canonical maps for multiply connected domains with non-degenerate boundary. If $\Omega$ is a finitely connected domain with boundary components $\Gamma_i$, $i \in \{1,2,\ldots,n\}$, where no $\Gamma_j$ reduces to a point, we can find conformal maps $\psi_{j,z}$ taking $z\in\Omega$ to $0$, $\Omega$ onto a circular slit disc, and $\Gamma_j$ onto $\partial \Delta$ in the sense of boundary correspondence \cite{JEN}. Moreover, if $\psi:C_1\longrightarrow C_2$ is a conformal map between two circular slit discs, fixing $0$ and taking $\partial \Delta$ onto itself, then $\psi$ must be a rotation \cite{NTT}. Therefore, if $\Omega$ is an $n$-connected domain with non-degenerate boundary, then for every $z \in \Omega$ there are, up to rotation, exactly $n$ canonical conformal maps taking $\Omega$ onto a circular slit disc.

Moreover, circular slit mappings are extremal in the following sense.
\begin{Pro}[\cite{GR}, Proposition 2.3]
Let $D\subset\Delta$ be a finitely connected
domain with outer boundary $\partial \Delta$ and let $\varphi$ be the conformal mapping of $D$ onto a circular
slit disc normalized by $\varphi(0)=0$, $\varphi'(0)>0$ and $\varphi(\partial\Delta)=\partial\Delta$. Then $\varphi'(0)\geq1$, and $\varphi'(0)=1$ if and only if $D$ is a circular slit disc, in which case $\varphi=\operatorname{id}_{D}$.
\end{Pro}
The Schottky--Klein prime function on the annulus $\mathbb{A}_{r}$ is defined by \[\omega(z,y)=(z-y)\prod_{n=1}^{\infty} \frac{(z-r^{2n}y)(y-r^{2n}z)}{(z-r^{2n}z)(y-r^{2n}y)}.\]
Moreover, it satisfies the following properties: \[\overline{\omega(\overline{z}^{-1},\overline{y}^{-1})}=\frac{-\omega(z,y)}{zy}, \tag{5}\]
\[\omega(r^{-2}z,y)=\frac{rz\,\omega(z,y)}{y}.\tag{6}\]
The following proposition states that circular slit maps of annuli can be written in terms of the Schottky--Klein prime function.
\begin{Pro}[\cite{NTT}, Theorem 4]
Let $y\in\mathbb{A}_{r}$ and define \[f(z,y)=\frac{\omega(z,y)}{|y|\,\omega(z,\overline{y}^{-1})}.\] Then $f(\cdot,y)$ is a circular slit map taking $\mathbb{A}_{r}$ onto a circular slit disc, $f(\partial\Delta)=\partial\Delta$, and $y\in\mathbb{A}_{r}$ is mapped to $0$.
\end{Pro}
\begin{Rem}
The Schottky--Klein prime function allows us to compute the radius of the slit in the image. We do not repeat the calculation in \cite{NTT} and leave it to the interested reader to verify, using \textit{(5)} and \textit{(6)}, that the radius of the slit of $f(\mathbb{A}_{r},y)$ is equal to $|y|$.
\end{Rem}
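As a numerical illustration of Remark 2.3 (not a proof), one may truncate the infinite product defining $\omega$ and evaluate $|f(\cdot,y)|$ on the two boundary circles of $\mathbb{A}_r$: the values should be close to $1$ on $|z|=1$ and close to $|y|$ on $|z|=r$. The parameters in the Python sketch below are arbitrary.
\begin{verbatim}
# Truncate the Schottky-Klein prime function and evaluate the circular slit
# map f(., y) of Proposition 2.2 on both boundary circles of A_r; per
# Remark 2.3, |f| should be close to 1 on |z| = 1 and close to |y| on |z| = r.
import cmath, math

def omega(z, y, r, terms=60):
    w = z - y
    for n in range(1, terms + 1):
        q = r ** (2 * n)
        w *= (z - q * y) * (y - q * z) / ((z - q * z) * (y - q * y))
    return w

def slit_map(z, y, r):
    return omega(z, y, r) / (abs(y) * omega(z, 1 / y.conjugate(), r))

r, y = 0.3, 0.55 * cmath.exp(0.7j)
inner = [abs(slit_map(r * cmath.exp(2j * math.pi * t / 8), y, r)) for t in range(8)]
outer = [abs(slit_map(cmath.exp(2j * math.pi * t / 8), y, r)) for t in range(8)]
print("abs(y)      =", round(abs(y), 4))
print("|f| on |z|=r:", [round(v, 4) for v in inner])
print("|f| on |z|=1:", [round(v, 4) for v in outer])
\end{verbatim}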
\section{Main Result}\label{sec:main}
The following proposition simplifies the squeezing problem in one dimension by showing that, for planar domains, the only extremal domains are those whose complement in the unit disc is relatively compact. We include the proof to emphasize that domains whose outer boundary does not correspond to $\partial\Delta$ cannot be extremal; compare \cite{NTT}.
\begin{Pro}[\cite{DGZ1}, Proposition 6.1]
Let $\Omega$ be an $n$-connected planar domain with non-degenerate boundary and let $z\in\Omega$. Let $\tilde{\mathscr{U}}(\Omega)\subset\mathscr{U}(\Omega)$ be the set of injective holomorphic maps from $\Omega$ into $\Delta$ such that the complement of the image of $\Omega$ is relatively compact in the unit disc. Then \[ S_{\Omega}(z)=\sup_{\substack{f\in\tilde{\mathscr{U}}(\Omega) \\ f(z)=0}}\operatorname{dist}\bigl(0,\partial f(\Omega)\bigr).\]
\end{Pro}
\begin{proof}
Let $f:\Omega\longrightarrow\Delta$ be a conformal map such that $f(z)=0$ and $\Delta\setminus f(\Omega)$ is not relatively compact. Let $f(\Omega)'$ be the union of $f(\Omega)$ with the compact components of its complement, and observe that $f(\Omega)'$ is simply connected. Now let $g:f(\Omega)'\longrightarrow\Delta$ be the Riemann mapping fixing $0$, and consider the map $g \circ f \in \tilde{\mathscr{U}}(\Omega)$. We will show that $\operatorname{dist}(0,\partial f(\Omega))<\operatorname{dist}(0,\partial g(f(\Omega)))$.

Let $\mathnormal{C}_{D}(\cdot,\cdot)$ denote the Carathéodory distance on a bounded domain $D$ and, for $r\in(0,\infty)$, define $t(r)=\tanh(r/2)$. The function $t$ is strictly increasing and it is well known that $\operatorname{dist}(0,z)=|z|=t(\mathnormal{C}_{\Delta}(0,z))$ for $z\in\Delta$. By conformal invariance and monotonicity of the Carathéodory metric we have \[\mathnormal{C}_{\Delta}\bigl(0,\Delta \setminus g(f(\Omega))\bigr)=
\mathnormal{C}_{f(\Omega)'}\bigl(0,f(\Omega)'\setminus f(\Omega)\bigr)>\mathnormal{C}_{\Delta}\bigl(0,f(\Omega)'\setminus f(\Omega)\bigr).\tag{7}\] We note that the inequality on the right-hand side of \textit{(7)} is indeed strict since, near $0$, by the Schwarz lemma the infinitesimal Carathéodory metric of $f(\Omega)'$ is strictly greater than the infinitesimal Carathéodory metric of $\Delta$. As the function $t$ is strictly increasing, we obtain \[\operatorname{dist}\bigl(0,\partial g(f(\Omega))\bigr)=t\bigl(\mathnormal{C}_{\Delta}(0,\Delta \setminus g(f(\Omega)))\bigr)>t\bigl(\mathnormal{C}_{\Delta}(0,f(\Omega)'\setminus f(\Omega))\bigr)=\operatorname{dist}\bigl(0,f(\Omega)'\setminus f(\Omega)\bigr)\ge\operatorname{dist}\bigl(0,\partial f(\Omega)\bigr). \tag{8} \] This shows that any extremal map must belong to the family $\tilde{\mathscr{U}}(\Omega)$, and hence \[S_{\Omega}(z)=\sup_{\substack{f\in\tilde{\mathscr{U}}(\Omega)\\f(z)=0}}\operatorname{dist}\bigl(0,\partial f(\Omega)\bigr).\] \end{proof}
We set our notation. Throughout this section, $\Omega$ is an $n$-connected planar domain with non-degenerate smooth boundary. For $i\in\{1,2,\ldots,n\}$, let $\tilde{\mathscr{U}}_{\Gamma_i}(\Omega)\subset\mathscr{U}(\Omega)$ be the subset consisting of those maps that send $\Gamma_i$ onto $\partial\Delta$ in the sense of boundary correspondence. The squeezing function of $\Omega$ towards the boundary component $\Gamma_i$, denoted $S_{\Omega,i}$, is defined by \[S_{\Omega,i}(z)=\sup_{\substack{f\in\tilde{\mathscr{U}}_{\Gamma_i}(\Omega)\\ f(z)=0}}\operatorname{dist}\bigl(0,\partial f(\Omega)\bigr).\]
We clarify the notion of boundary correspondence. Let $f:D_1\longrightarrow D_2$ be a conformal map between two planar domains and let $\Gamma_1$, $\Gamma_2$ be boundary components of $D_1$, $D_2$ respectively. When we write $f(\Gamma_1)=\Gamma_2$ we mean the following: if $\{z_i\}$ is a sequence in $D_1$ converging to $\Gamma_1$, then every limit point of the sequence $\{f(z_i)\}$ belongs to $\Gamma_2$.

We have to ensure boundary continuity of holomorphic maps in order to apply the maximum principle. We will use the following theorem of Carathéodory; as the author remarks after its proof, Theorem 5.1.1 of \cite{KR} extends to finitely connected domains bounded by Jordan curves. Hence we obtain the following lemma.
\begin{Lem}[Carathéodory's Theorem]
Let $\Omega_1,\Omega_2$ be planar domains bounded by finitely many Jordan curves. If $f:\Omega_1\longrightarrow\Omega_2$ is a conformal map, then $f$ extends continuously and injectively to $\overline{\Omega_1}$; that is, there is a continuous injective map $\tilde{f}:\overline{\Omega_1}\longrightarrow\overline{\Omega_2}$ such that $\tilde{f}|_{\Omega_1}=f$.
\end{Lem}
We now state the lemma that leads to our main result.
\begin{Lem} With the above notation, \[\min_{j\neq i} r^{i}_j(z,\Omega)\leq S_{\Omega,i}(z)\leq \max_{j\neq i} r^{i}_j(z,\Omega).\tag{9}\]
\end{Lem}
\begin{proof}
The left-hand side of the inequality follows trivially by observing that the circular slit maps are candidates for the squeezing problem with respect to the boundary component $\Gamma_i$.

For the right-hand side, without loss of generality we assume that $\Omega$ is an extremal domain for the squeezing problem for $0\in\Omega$ with respect to $\Gamma_i$. By the discussion in the proof of Proposition 3.1, the outer boundary of $\Omega$ must correspond to $\partial\Delta$. We will show that $\operatorname{dist}(0,\partial \Omega)$ is at most $\max_{j\neq i}r^{i}_j(0,\Omega)$. Let $\psi$ be the conformal map of $\Omega$ onto the circular slit disc $C:=C_{i}(0,\Omega)$ fixing $0$ and $\partial\Delta$; without loss of generality assume that $\psi'(0)>0$. Consider the function $g:=\dfrac{\operatorname{id}}{\psi^{-1}}$ on $C$. As $\psi^{-1}(z)=(\psi^{-1})'(0)z+O(z^2)$ about $0$, we have that
\[\lim_{z \to 0} g(z)=\lim_{z \to 0} \frac{z}{\psi^{-1}(z)}=\frac{1}{(\psi^{-1})'(0)}=\psi'(0).\]
As $g$ is bounded near $0$, it extends analytically to all of $C$. With an abuse of notation, from now on we use $g$ to denote this analytic extension.

By Proposition 2.1, $\psi'(0)\geq1$; therefore $g(0)\geq1$. If $g(0)=1$, then the mapping $\psi$ is the identity and $\Omega$ is a circular slit disc, so that \[S_{\Omega,i}(0)=\min_{j\neq i}r^{i}_j(0,\Omega)\leq\max_{j\neq i}r^{i}_j(0,\Omega).\]
Now suppose $g(0)>1$; in this case $\Omega$ is not a circular slit disc. We will show that $\operatorname{dist}(0,\partial \Omega)\leq \max_{j\neq i}r^i_j(0,\Omega)$.

Let $\varepsilon>0$ be small enough that, for every $k\in\mathbb{N}$, the curves $\gamma_{j,k}:=\bigl\{z\in C : \operatorname{dist}\bigl(z,s^i_j(0,\Omega)\bigr)=\tfrac{\varepsilon}{k+1}\bigr\}$, $j\neq i$, are disjoint Jordan curves in $C$. As $\psi^{-1}$ is injective, their images under $\psi^{-1}$ are Jordan curves as well. For $k\in\mathbb{N}$, let $C_k$ be the domain bounded by $\partial\Delta$ and the curves $\gamma_{j,k}$, $j\neq i$. By Carathéodory's theorem, $\psi^{-1}$ extends continuously to $\overline{C_k}$, and hence so does $g$. The maximum principle asserts that \[\sup_{\overline{C_k}}|g|=\sup_{\partial C_k}|g|.\]
As $g(0)>1$, there exists $z_k \in \partial C_k$ with $|g(z_k)|>1$. Noting that $\lim_{x\to z}|g(x)| = 1$ for all $z\in\partial\Delta$, we see that $z_k\in\partial C_k \setminus \partial \Delta$. Moreover, \[1 < |g(z_k)| \leq \frac{\max_{\partial C_k \setminus \partial \Delta}|z|}{|\psi^{-1}(z_k)|}. \tag{10}\]
From \textit{(10)} we directly see that \[|\psi^{-1}(z_k)|<\max_{\partial C_k \setminus \partial \Delta}|z|\le\frac{\varepsilon}{k+1}+\max_{j\neq i}r^i_j(0,\Omega). \tag{11}\]
It is clear from the definition of $C_k$ that any convergent subsequence of $\{z_k\}$ must converge to a slit of $C$. From \textit{(11)} we obtain
\[\limsup_{k\to\infty} |\psi^{-1}(z_k)|\leq \lim_{k \to \infty} \Bigl(\frac{\varepsilon}{k+1}+\max_{j\neq i}r^i_j(0,\Omega)\Bigr)= \max_{j\neq i}r^i_j(0,\Omega). \tag{12}\]
By passing to a subsequence if necessary, we may assume that $\{z_k\}$ converges to a point on a slit of $C$; since $\psi^{-1}$ is a homeomorphism of $C$ onto $\Omega$, every limit point of $\{\psi^{-1}(z_k)\}$ then lies on $\partial\Omega$ and, by \textit{(12)}, also in
$\overline{\Delta_{\max_{j\neq i}r^i_j(0,\Omega)}}$.
This shows that $\operatorname{dist}(0,\partial\Omega)\leq\max_{j\neq i}r^i_j(0,\Omega)$, and since $\Omega$ is extremal we obtain \[S_{\Omega,i}(0)\leq\max_{j\neq i}r^i_j(0,\Omega), \] establishing the lemma.
\end{proof}
\begin{Rem}
If $\min_{j\neq i}r^i_j(0,\Omega)\neq\max_{j\neq i}r^i_j(0,\Omega)$, then the inequality on the right-hand side of \textit{(9)} is strict. To see this, suppose that $\min_{j\neq i}r^i_j(0,\Omega)\neq\max_{j\neq i}r^i_j(0,\Omega)$ and that $\Omega$ is an extremal domain for the squeezing problem for $0\in\Omega$ towards the boundary component $\Gamma_i$, with $\Gamma_i$ corresponding to $\partial\Delta$. If $\Omega$ is the circular slit disc, then clearly the inequality is strict. If not, then the derivative at $0$ of the inverse $\psi^{-1}$ of the circular slit mapping $\psi$ fixing $\partial\Delta$ has modulus strictly less than $1$. As in the proof of the lemma, write $$\left\lvert \frac{1}{(\psi^{-1})'(0)} \right\rvert=\alpha>1.$$ Defining $g(z)=\dfrac{2z}{(\alpha+1)\,\psi^{-1}(z)}$ and applying the same argument as in the proof of the previous lemma shows that \[\operatorname{dist}(0,\partial\Omega)\leq\Bigl(\frac{2}{\alpha+1}\Bigr)\max_{j\neq i}r^i_j(0,\Omega)<\max_{j\neq i}r^i_j(0,\Omega).\]
\end{Rem}
\begin{Rem}
If $\Omega$ is a circular slit disc with all slits lying on the same circle, the method of the previous proof shows that $\Omega$ is an extremal domain for the squeezing problem for $0\in\Omega$ towards the boundary component $\partial \Delta$. Following the argument in the last remark, we see that, excluding rotations of $\Omega$ of course, there are no other extremal domains; therefore we recover Lemma 1 in \cite{S}.
\end{Rem}
\textbf{Proof of Theorem 1.2.}
It is clear that if $f\in\tilde{\mathscr{U}}(\Omega)$, then $f$ must belong to some $\tilde{\mathscr{U}}_{\Gamma_i}(\Omega)$. In other words, \[ \tilde{\mathscr{U}}(\Omega)=\bigcup_{i\in\{1,2,\ldots,n\}}\tilde{\mathscr{U}}_{\Gamma_i}(\Omega).\] Together with Proposition 3.1, it follows that $S_{\Omega}(z)=\max_{i}S_{\Omega,i}(z)$. Therefore, by Lemma 3.3, we have \[ S_{\Omega}(z)=\max_{i} S_{\Omega,i}(z) \leq \max_{i}\max_{j\neq i}r^i_j(z,\Omega). \] The left-hand side of the inequality follows trivially from Lemma 3.3. $\square$

\textbf{Proof of Theorem 1.1.}
As doubly connected domains with non-degenerate boundary are biholomorphic to annuli, we may restrict attention to the annulus $\mathbb{A}_{r}$.
Set $\Gamma_1=\partial\Delta$ and $\Gamma_2=\partial\Delta_r$. Remark 2.3 shows that $r^1_2(z,\mathbb{A}_{r})=|z|$ for all $z\in \mathbb{A}_{r}$. To compute $r^2_1(z,\mathbb{A}_{r})$ we apply $f(z)=\dfrac{\sqrt{r}}{z}\in \operatorname{Aut}(\mathbb{A}_{r})$. As $z$ is mapped to $\dfrac{\sqrt{r}}{z}$ and $\Gamma_2$ is mapped to $\Gamma_1$, it again follows from Remark 2.3 that $r^2_1(z,\mathbb{A}_{r})=\left\lvert\dfrac{\sqrt{r}}{z}\right\rvert$. Since for doubly connected domains the lower and upper bounds of Theorem 1.2 coincide, we obtain \[S_{\mathbb{A}_{r}}(z)=\max\bigl\{ r^1_2(z,\mathbb{A}_{r}),r^2_1(z,\mathbb{A}_{r}) \bigr\}=\max\Bigl\{|z|,\Bigl\lvert\frac{\sqrt{r}}{z}\Bigr\rvert\Bigr\}. \]
Remark 3.5 shows that the circular slit discs considered are, up to rotation, the only possible extremal domains for this problem, proving the uniqueness part as well. $\square$
\begin{thebibliography}{999}
\bibitem{DGZ2} Deng, F., Guan, Q., and Zhang, L. (2016). Properties of squeezing functions and global transformations of bounded domains. Transactions of the American Mathematical Society, 368(4), 2679-2696.
\bibitem{DGZ1} Deng, F., Guan, Q., and Zhang, L. (2012). Some properties of squeezing functions on bounded domains. Pacific Journal of Mathematics, 257(2), 319-341.
\bibitem{DZ} Deng, F., and Zhang, X. J. (2019). Fridman’s invariant, squeezing functions, and exhausting domains. Acta Mathematica Sinica, English Series, 35(10), 1723-1728.
\bibitem{F4} Fornæss, J. E., and Rong, F. (2018). Estimate of the squeezing function for a class of bounded domains. Mathematische Annalen, 371(3), 1087-1094.
\bibitem{F2} Diederich, K., Fornæss, J. E., and Wold, E. F. (2016). A characterization of the ball in $\mathbb{C}^n$. International Journal of Mathematics, 27(09), 1650078.
\bibitem{F3} Fornæss, J. E., and Shcherbina, N. (2018). A domain with non-plurisubharmonic squeezing function. The Journal of Geometric Analysis, 28(1), 13-21.
\bibitem{F1} Fornæss, J. E., and Wold, E. F. (2015). An estimate for the squeezing function and estimates of invariant metrics. In Complex analysis and geometry (pp. 135-147). Springer, Tokyo.
\bibitem{GR} Gumenyuk, P., and Roth, O. (2020). On the squeezing function for finitely connected planar domains. arXiv preprint arXiv:2011.13734.
\bibitem {JEN} Jenkins, J. A. (2013). Univalent functions and conformal mapping (Vol. 18). Springer-Verlag.
\bibitem{KR} Krantz, S. G., and Epstein, C. L. (2006). Geometric function theory: explorations in complex analysis. Springer Science and Business Media.
\bibitem{LSY1} Liu, K., Sun, X., and Yau, S. T. (2004). Canonical metrics on the moduli space of Riemann surfaces. I. J. Differential Geom, 68(3), 571-637.
\bibitem{LSY2} Liu, K., Sun, X., and Yau, S. T. (2005). Canonical metrics on the moduli space of Riemann surfaces II. Journal of Differential Geometry, 69(1), 163-216.
\bibitem{NTT2}Ng, T. W., Tang, C. C., and Tsai, J. (2020). Fridman Function, Injectivity Radius Function and Squeezing Function. arXiv preprint arXiv:2012.13159.
\bibitem {NTT} Ng, T. W., Tang, C. C., and Tsai, J. (2020). The squeezing function on doubly-connected domains via the Loewner differential equation. Mathematische Annalen, 1-26.
\bibitem{RY}Rong, F., and Yang, S. (2020). On the comparison of the Fridman invariant and the squeezing function. Complex Variables and Elliptic Equations, 1-6.
\bibitem{S} Solynin, A. Y. (2021). A note on the squeezing function. arXiv preprint arXiv:2101.03361.
\bibitem{YSK} Yeung, S. K. (2009). Geometry of domains with the uniform squeezing property. Advances in Mathematics, 221(2), 547-569.
\bibitem{ZIM}Zimmer, A. (2018). A gap theorem for the complex geometry of convex domains. Transactions of the American Mathematical Society, 370(10), 7489-7509.
\end{thebibliography}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
For a positive integer $m$ and a subgroup $\Lambda$ of the unit group $({\mathbb{Z}}/m{\mathbb{Z}})^\times$,
the corresponding \emph{generalized Kloosterman sum} is the function
$K(a,b,m,\Lambda) = \sum_{u \in \Lambda}e(\frac{au + bu^{-1}}{m})$.
Unlike classical Kloosterman sums, which are real valued, generalized Kloosterman sums
display a surprising array of visual features when their values
are plotted in the complex plane. In a variety of instances,
we identify the precise number-theoretic conditions that give rise to particular phenomena.
\end{abstract}
\section{Introduction}
For a positive integer $m$ and a subgroup $\Lambda$ of the unit group $({\mathbb{Z}}/m{\mathbb{Z}})^\times$,
the corresponding \emph{generalized Kloosterman sum} is the function
\begin{equation}\label{eq:Kabmg}
K(a,b,m,\Lambda) = \sum_{u \in \Lambda}e\left(\frac{au + bu^{-1}}{m}\right),
\end{equation}
in which $e(x)=\exp (2\pi i x)$ and $u^{-1}$ denotes the multiplicative inverse of $u$ modulo $m$.
The classical Kloosterman sum arises when $\Lambda = ({\mathbb{Z}}/m{\mathbb{Z}})^{\times}$ \cite{iwaniec2004analytic}.
Unlike their classical counterparts, which are real valued, generalized Kloosterman sums
display a surprising array of visual features when their values
are plotted in the complex plane; see Figure \ref{fig:various1}. Our aim here is
to initiate the investigation of these sums from a graphical perspective. In a variety of instances,
we identify the precise number-theoretic conditions that give rise to particular phenomena.
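The plotted values can be computed directly from the definition \eqref{eq:Kabmg}; for instance, the following short Python sketch (with the modulus and generator taken from one of the examples in Figure \ref{fig:various1}, and $b=1$ fixed as an arbitrary choice) computes the values $K(a,b,m,\Lambda)$ for all residues $a$, which can then be plotted in the complex plane.
\begin{verbatim}
# Evaluate K(a, b, m, <g>) directly from the definition; m and g are taken
# from one of the plotted examples above, and b = 1 is an arbitrary choice.
import cmath

def subgroup(g, m):
    """Elements of the cyclic subgroup of (Z/mZ)^* generated by g."""
    elems, u = [], 1
    while True:
        u = u * g % m
        elems.append(u)
        if u == 1:
            return elems

def kloosterman(a, b, m, units):
    return sum(cmath.exp(2j * cmath.pi * (a * u + b * pow(u, -1, m)) / m)
               for u in units)

m, g, b = 890, 479, 1
units = subgroup(g, m)
values = [kloosterman(a, b, m, units) for a in range(m)]
print(len(units), "summands per value;", len(values), "values computed")
\end{verbatim}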
\begin{figure}
\caption{$m=4820$, $\Lambda=\langle 1209 \rangle$}
\caption{$m=9015$, $\Lambda=\langle 596 \rangle$}
\caption{$m=4820$, $\Lambda = \langle 257 \rangle$}
\caption{$m=4820$, $\Lambda = \langle 497 \rangle$}
\caption{$m=3087$, $\Lambda = \langle 1010 \rangle$}
\caption{$m=890$, $\Lambda = \langle 479 \rangle$}
\caption{$m=9015$, $\Lambda=\langle 2284 \rangle$}
\caption{$m=1413$, $\Lambda=\langle 13 \rangle$}
\caption{$m=9015$, $\Lambda=\langle 577 \rangle$}
\caption{Plots in ${\mathbb{C}}$ of the values of various generalized Kloosterman sums $K(a,b,m,\Lambda)$.}
\label{fig:various1}
\end{figure}
Like classical Kloosterman sums, generalized Kloosterman sums enjoy a certain multiplicative property.
If $m = m_1m_2$, in which $(m_1,m_2)=1$,
$r_1 \equiv m_1^{-1} \pmod{m_2}$, $r_2 \equiv m_2^{-1} \pmod{m_1}$,
$\omega_1 = \omega \pmod {m_1}$, and $\omega_2 = \omega \pmod {m_2}$, then
\begin{equation}\label{eq:Multiplicative}
K\big(a,b,m,\langle \omega \rangle \big) = K\big(r_2a, r_2b, m_1, \langle \omega_1 \rangle\big)K\big(r_1a, r_1b, m_2, \langle \omega_2 \rangle\big).
\end{equation}
This follows immediately from the Chinese Remainder Theorem.
Consequently, we tend to focus on prime or prime power moduli;
see Figure \ref{fig:decomp}. Since the group of units modulo an odd prime power is cyclic, most of our attention is restricted to the case
where $\Lambda = \langle \omega \rangle$ is a cyclic group of units.
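As a numerical sanity check of \eqref{eq:Multiplicative} (illustrative only), one can compare both sides for the decomposition $4378 = 199\cdot 22$ and $\omega=291$ used in Figure \ref{fig:decomp}.
\begin{verbatim}
# Check the multiplicative property numerically for m = 4378 = 199*22 and
# omega = 291 (the example of the decomposition figure); both sides should
# agree up to floating-point error.
import cmath

def subgroup(g, m):
    elems, u = [], 1
    while True:
        u = u * g % m
        elems.append(u)
        if u == 1:
            return elems

def K(a, b, m, g):
    return sum(cmath.exp(2j * cmath.pi * (a * u + b * pow(u, -1, m)) / m)
               for u in subgroup(g, m))

m1, m2, omega = 199, 22, 291
m = m1 * m2
r1, r2 = pow(m1, -1, m2), pow(m2, -1, m1)   # r1 = m1^{-1} mod m2, etc.
w1, w2 = omega % m1, omega % m2             # here w1 = 92 and w2 = 5
a, b = 7, 3                                 # arbitrary test values
lhs = K(a, b, m, omega)
rhs = K(r2 * a, r2 * b, m1, w1) * K(r1 * a, r1 * b, m2, w2)
print("difference |lhs - rhs| =", abs(lhs - rhs))
\end{verbatim}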
\begin{figure}
\caption{$p=199$, $d=3$}
\caption{$m=22$, $\Lambda=\langle 5 \rangle$}
\caption{$m=4378$, $\Lambda=\langle 291 \rangle$}
\caption{The values $K(a,b,4378,\langle 291 \rangle)$ are obtained from the values of
$K(\,\cdot\,,\,\cdot\,,199,\langle 92 \rangle)$ and $K(\,\cdot\,,\,\cdot\,,22,\langle 5 \rangle)$
using \eqref{eq:Multiplicative}.}
\label{fig:decomp:a}
\label{fig:decomp:c}
\label{fig:decomp}
\end{figure}
Additional motivation for our work stems from the fact that generalized Kloosterman sums
are examples of supercharacters.
The theory of supercharacters, introduced in 2008 by P.~Diaconis and I.M.~Isaacs \cite{diaconis2008supercharacters}, has emerged as
a powerful tool in combinatorial representation theory.
Certain exponential sums of interest in number theory, such as
Ramanujan, Gauss, Heilbronn, and classical Kloosterman sums, arise as supercharacter values on abelian groups
\cite{brumbaugh2012supercharacters,duke,gausscyclotomy, fowler, heilbronn}.
In the terminology of \cite{brumbaugh2012supercharacters}, the functions \eqref{eq:Kabmg} arise
by letting $n =m$, $d= 2$, and $\Gamma = \{ \operatorname{diag}(u,u^{-1}) : u \in \Lambda\}$.
\section{Hypocycloids}
In what follows, we let $\phi$ denote the Euler totient function.
If $q = p^{\alpha}$ is an odd prime power, then $({\mathbb{Z}}/q{\mathbb{Z}})^{\times}$ is cyclic. Thus, for each divisor $d$ of $\phi(q) = p^{\alpha-1}(p - 1)$,
there is a unique subgroup $\Lambda$ of $({\mathbb{Z}}/q{\mathbb{Z}})^\times$ of order $d$. In this case, we write
\begin{equation*}
K(a,b,q,d)=\sum_{u^d=1} e\left( \frac{au+bu^{-1}}{q} \right)
\end{equation*}
instead of $K(a,b,q,\Lambda)$.
Under certain conditions, these generalized Kloosterman sums display remarkable asymptotic behavior.
If $d$ is a fixed odd prime and $q=p^\alpha$ is an odd prime power with $p \equiv 1 \pmod{d}$, then
the values $K(a,b,q,d)$ for $0 \leq a,b < q$ are contained in the closure $\mathbb{H}_d$ of the bounded region determined by the
$d$-cusped hypocycloid given by
\begin{equation}\label{eq:Parameter}
\theta\mapsto (d-1)e^{i\theta}+e^{(1-d)i\theta};
\end{equation}
see Figure \ref{Figure:Hypocycloid}.
As the prime power $q=p^\alpha$ for $p \equiv 1 \pmod{d}$ tends to infinity, the values $K(a,b,q,d)$
``fill out'' $\mathbb{H}_d$; see Figure \ref{fig:hypocycloid}.
Similar asymptotic behavior has been observed in Gaussian periods
\cite{gausscyclotomy, duke} and certain exponential sums related to the symmetric group \cite{brumbaugh2013graphic}.
\begin{figure}
\caption{$d=3$\\a deltoid}
\caption{$d=4$\\an astroid}
\caption{$d=5$}
\caption{The curve \eqref{eq:Parameter} for $d=3,4,5$.}
\label{Figure:Hypocycloid}
\end{figure}
To be more precise, we require a few words about uniformly distributed sets.
A sequence $S_n$ of finite subsets of $[0,1)^k$ is \emph{uniformly distributed} if
\begin{equation*}
\lim_{n\to\infty}\sup_B \left| \frac{ |B\cap S_n|}{|S_n|} - \mu(B)\right| = 0,
\end{equation*}
where the supremum runs over all boxes $B = [a_1,b_1)\times\cdots \times [a_k,b_k)$ in $[0,1)^k$
and $\mu$ denotes $k$-dimensional Lebesgue measure.
If $S_n$ is a sequence of finite subsets of ${\mathbb{R}}^k$, then
$S_n$ is \emph{uniformly distributed mod $1$} if the sets
\begin{equation*}
\big\{ \big( \{x_1\},\{x_2\},\ldots,\{x_k\}\big) : (x_1,x_2,\ldots,x_k) \in S_n\big\}
\end{equation*}
are uniformly distributed in $[0,1)^k$. Here $\{x\}$ denotes the fractional part $x-\lfloor x \rfloor$
of a real number $x$. The following result tells us that a certain sequence of sets closely
related to generalized Kloosterman sums is uniformly distributed modulo $1$.
\begin{figure}
\caption{$p=67$, $d=3$}
\caption{$p=193$, $d=3$}
\caption{$p=1279$, $d=3$}
\caption{$p=151$, $d=5$}
\caption{$p=431$, $d=5$}
\caption{$p=2221$, $d=5$}
\caption{$p=491$, $d=7$}
\caption{$p=1597$, $d=7$}
\caption{$p=2969$, $d=7$}
\caption{For the primes $d = 3,5,7$, the values $K(a,b,p,d)$ with $0\leq a,b \leq p-1$ ``fill out'' the closure $\mathbb{H}_d$ of the corresponding hypocycloid as $p\to\infty$.}
\label{fig:hypocycloid}
\end{figure}
\begin{lemma}\label{Lemma:UniformDistribution}
Fix a positive integer $d$.
For each odd prime power $q=p^\alpha$ with $p \equiv 1 \pmod{d}$, let $\omega_q$ denote a
primitive $d$th root of unity in ${\mathbb{Z}}/q{\mathbb{Z}}$. Let $\omega_q^{-1}$ denote the
inverse of $\omega_q$ modulo $q$. For each fixed $b \in \{0,1,\ldots,q-1\}$, the sets
\begin{equation}\label{eq:Sq}
S_q=\Big\{\Big(\frac{a+b}{q},\frac{a\omega_q+b\omega_q^{-1}}{q},\ldots,
\frac{a\omega_q^{\phi(d)-1}+b\omega_q^{-\phi(d)+1}}{q}\Big) : 0\leq a \leq q-1\Big\}
\end{equation}
in ${\mathbb{R}}^{\phi(d)}$ are uniformly distributed modulo $1$ as $q \to \infty$.
\end{lemma}
\begin{proof}
Fix a positive integer $d$.
Weyl's criterion asserts that the sets $S_q$ are uniformly distributed modulo
$1$ if and only if $$\lim_{q \to \infty} \frac{1}{|S_q|}\sum_{\vec{x} \in S_q} e(\vec{x} \cdot \vec{y})=0$$ for every nonzero $\vec{y} \in {\mathbb{Z}}^{\phi(d)}$ \cite{weyl}.
Fix a nonzero $\vec{y} \in {\mathbb{Z}}^{\phi(d)}$ and let $0 \leq b \leq q-1$. For $\vec{x} \in S_q$, write $\vec{x}= \vec{x}_1+\vec{x}_2$, in which
\begin{equation*}
\vec{x}_1=\Big(\frac{a}{q},\frac{a\omega_q}{q},\ldots,\frac{a\omega_q^{\phi(d)-1}}{q}\Big),
\qquad \vec{x}_2=\Big(\frac{b}{q},\frac{b\omega_q^{-1}}{q},\ldots,\frac{b\omega_q^{-\phi(d)+1}}{q}\Big).
\end{equation*}
Note that $\vec{x}_1$ depends on $a$ whereas $\vec{x}_2$ is fixed since we regard $b$ as constant.
A result of Myerson \cite[Thm.~12]{myerson} (see also \cite[Lem.~6.2]{duke}) asserts that the sets
\begin{equation*}
\Big\{ \frac{a}{q}\big(1,\omega_q, \omega_q^2,\dots, \omega_q^{\phi(d)-1}\big) : 0 \leq a \leq q-1 \Big\}
\subseteq [0,1)^{\phi(d)}
\end{equation*}
are uniformly distributed modulo $1$ as $q=p^\alpha$ tends to infinity; this requires the assumption that $p \equiv 1 \pmod{d}$. Thus,
\begin{equation*}
\frac{1}{|S_q|} \sum_{\vec{x} \in S_q} \e{\vec{x} \cdot \vec{y}}
= \frac{1}{|S_q|} \sum_{a=0}^{q-1} e\big((\vec{x}_1+\vec{x}_2) \cdot \vec{y} \big)
= e(\vec{x}_2 \cdot \vec{y}) \frac{1}{|S_q|}\sum_{a=0}^{q-1} e(\vec{x}_1 \cdot \vec{y})
\end{equation*}
tends to zero, so Weyl's criterion ensures that the sets $S_q$ are uniformly distributed modulo $1$ as $q \to \infty$.
\end{proof}
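The decay demanded by Weyl's criterion can be observed directly. The following sketch (illustrative only; the choices $d=3$, $b=1$, $\vec{y}=(1,2)$, the helper names, and the list of primes are arbitrary) computes the normalized Weyl sum attached to $S_q$ for several primes $q \equiv 1 \pmod{3}$.
\begin{verbatim}
# Normalized Weyl sums for the sets S_q of the lemma, with d = 3 (phi(3) = 2).
# Requires Python 3.8+ for pow with a negative exponent.
import cmath

def cube_root_of_unity(q):
    """Some element of multiplicative order 3 modulo a prime q = 1 (mod 3)."""
    for x in range(2, q):
        if pow(x, 3, q) == 1:
            return x
    raise ValueError("q must be congruent to 1 mod 3")

def normalized_weyl_sum(q, b, y):
    w = cube_root_of_unity(q)
    winv = pow(w, -1, q)
    total = 0j
    for a in range(q):
        x1 = ((a + b) % q) / q              # first coordinate of a point of S_q
        x2 = ((a * w + b * winv) % q) / q   # second coordinate
        total += cmath.exp(2j * cmath.pi * (y[0] * x1 + y[1] * x2))
    return abs(total) / q                   # should tend to 0 as q grows

for q in (7, 67, 193, 1279, 9973):          # primes congruent to 1 mod 3
    print(q, round(normalized_weyl_sum(q, b=1, y=(1, 2)), 4))
\end{verbatim}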
The following theorem explains the asymptotic behavior exhibited in Figure \ref{fig:hypocycloid}.
\begin{theorem}\label{thm:hypocycloid}
Fix an odd prime $d$.
\begin{enumerate}\addtolength{\itemsep}{0.5\baselineskip}
\item[(a)] For each odd prime power $q=p^\alpha$ with $p \equiv 1 \pmod{d}$, the values of $K(a,b,q,d)$ are contained in $\mathbb{H}_d$, the closure of the region
bounded by the $d$-cusped hypocycloid centered at $0$ and with a cusp at $d$.
\item[(b)] Fix $\epsilon > 0$, $b \in {\mathbb{Z}}$, and let $B_{\epsilon}(w)$ be an open ball of radius $\epsilon$ centered at $w\in \mathbb{H}_d$.
For every sufficiently large odd prime power $q=p^\alpha$ with $p \equiv 1 \pmod{d}$,
there exists $a \in {\mathbb{Z}}/q{\mathbb{Z}}$ so that $K(a,b,q,d) \in B_{\epsilon}(w)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) Suppose that $q = p^{\alpha}$ is an odd prime power and $p \equiv 1 \pmod d$.
Let $g$ be a primitive root modulo $q$ and define
$u = g^{\phi(q)/d}$, so that $u$ has multiplicative order $d$ modulo $q$.
Since $p$ and $p-1$ are relatively prime and $0 < \frac{p-1}{d} < p-1$, we have
$p-1 \nmid p^{\alpha-1}(\tfrac{p-1}{d})=\phi(q)/d$, and hence $u \not \equiv 1 \pmod{p}$. Thus, $u-1$ is a unit modulo $q$, from which it follows that
\begin{equation}\label{eq:cyclotomic}
1+u+ \cdots +u^{d-1} \equiv 0 \pmod{q}.
\end{equation}
Let ${\mathbb{T}}$ denote the unit circle in ${\mathbb{C}}$ and define $f:{\mathbb{T}}^{d-1}\to{\mathbb{C}}$ by
\begin{equation}\label{eq:hypo}
f(z_1,z_2,\ldots ,z_{d-1})=z_1+z_2+\cdots +z_{d-1}+\frac{1}{z_1z_2\cdots z_{d-1}};
\end{equation}
it is well-known that the image of this function is the filled hypocycloid defined by \eqref{eq:Parameter};
see \cite{cooper2007almost,gausscyclotomy,kaiser}.
For $k=1,2,\ldots,d-1$, let
\begin{equation*}
\zeta_k=e\Big(\frac{au^{k-1}+bu^{-(k-1)}}{q} \Big)
\end{equation*}
and use \eqref{eq:cyclotomic} to conclude that
\begin{align*}
K(a,b,q,d)
&= \sum_{k=0}^{d-2} e \Big(\frac{au^k+bu^{-k}}{q} \Big) + e\Big(\frac{au^{d-1}+bu^{-(d-1)}}{q} \Big) \\
&= \sum_{k=0}^{d-2} e \Big(\frac{au^k+bu^{-k}}{q} \Big) + e\Bigg(\frac{-a\sum_{k=0}^{d-2}u^k - b\sum_{k=0}^{d-2} u^{-k}}{q} \Bigg) \\
&= \sum_{k=1}^{d-1}\zeta_k + \frac{1}{\zeta_1\zeta_2 \cdots \zeta_{d-1}}.
\end{align*}
Thus $K(a,b,q,d)$ is contained in $\mathbb{H}_d$.
\noindent (b) Fix $\epsilon > 0$, $b \in {\mathbb{Z}}$, and let $B_{\epsilon}(w)$ be an open ball of radius $\epsilon$ centered at $w\in \mathbb{H}_d$.
Let $f:{\mathbb{T}}^{d-1}\to{\mathbb{C}}$ denote the function defined by \eqref{eq:hypo} and let $\vec{z} \in {\mathbb{T}}^{d-1}$ satisfy $f(\vec{z}) = w$.
The compactness of ${\mathbb{T}}^{d-1}$ ensures that $f$ is uniformly continuous, so there exists $\delta > 0$ so that
$|f(\vec{z}) - f(\vec{x})| < \epsilon$ whenever $\| \vec{x} - \vec{z} \| < \delta$ (here we use the norm induced by the standard
embedding of the torus ${\mathbb{T}}^{d-1}$ into ${\mathbb{C}}^{d-1} \cong {\mathbb{R}}^{2(d-1)}$).
Since $d$ is prime, $\phi(d)=d-1$ and hence
Lemma \ref{Lemma:UniformDistribution} ensures that for each fixed $b$, the sets $S_q$ in ${\mathbb{R}}^{d-1}$ defined by \eqref{eq:Sq}
are uniformly distributed mod $1$. So for $q$ sufficiently large, there exists
\begin{equation*}
\vec{x} = \Big(\frac{a+b}{q},\frac{a\omega_q+b\omega_q^{-1}}{q},\ldots,
\frac{a\omega_q^{\phi(d)-1}+b\omega_q^{-\phi(d)+1}}{q}\Big) \in S_q
\end{equation*}
whose image $\big(e(x_1),\ldots,e(x_{\phi(d)})\big)$ on the torus satisfies $\big\| \big(e(x_1),\ldots,e(x_{\phi(d)})\big) - \vec{z}\big\| < \delta$. Then $K(a,b,q,d) = f\big(e(x_1),\ldots,e(x_{\phi(d)})\big)$ belongs to $B_{\epsilon}(w)$.
\end{proof}
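The computation in part (a) is easy to test numerically. The following sketch (illustrative; the parameters $q=67$, $d=3$, $a=5$, $b=11$ are arbitrary) evaluates $K(a,b,q,d)$ directly as a sum over the order-$d$ subgroup, as in the proof, and compares it with the hypocycloid form $\zeta_1+\cdots+\zeta_{d-1}+(\zeta_1\cdots\zeta_{d-1})^{-1}$.
\begin{verbatim}
# Compare the direct definition of K(a,b,q,d) -- a sum over the order-d
# subgroup of (Z/qZ)* -- with the hypocycloid form from the proof of (a).
import cmath

def K(a, b, q, d):
    e = lambda t: cmath.exp(2j * cmath.pi * t / q)
    return sum(e(a * u + b * pow(u, -1, q))
               for u in range(1, q) if pow(u, d, q) == 1)

q, d, a, b = 67, 3, 5, 11                    # q prime, q = 1 (mod d), d prime
u = next(x for x in range(2, q) if pow(x, d, q) == 1)  # has order d since d is prime
zetas = [cmath.exp(2j * cmath.pi * ((a * pow(u, k, q) + b * pow(u, -k, q)) % q) / q)
         for k in range(d - 1)]
prod = 1
for z in zetas:
    prod *= z
print(K(a, b, q, d))                         # direct definition
print(sum(zetas) + 1 / prod)                 # hypocycloid form; should agree
\end{verbatim}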
\section{Variants of hypocycloids}
A glance at Figure \ref{fig:various1} suggests that hypocycloids are but one of many shapes
that the values of generalized Kloosterman sums ``fill out.'' With additional work, a variety of
results similar to Theorem \ref{thm:hypocycloid} can be obtained. The following theorem is illustrated in Figure
\ref{fig:d=9}.
\begin{figure}
\caption{The values $\{ K(a,b,p,9) : 0 \leq a,b \leq p-1\}$ for $p=523$, $1621$, and $3511$ ``fill out'' the threefold sum $\mathbb{H}_3+\mathbb{H}_3+\mathbb{H}_3$ of the filled deltoid; see Theorem \ref{theorem:tiled}.}
\label{fig:d=9}
\end{figure}
\begin{theorem}\label{theorem:tiled}
Let $p \equiv 1 \pmod{9}$ be an odd prime.
If $q=p^\alpha$ with $\alpha \geq 1$, then the values $\{K(a,b,q,9) : 0 \leq a,b \leq q-1\}$ are contained in the threefold sum
\begin{equation}\label{eq:3sum}
\mathbb{H}_3 + \mathbb{H}_3 + \mathbb{H}_3 = \{ w_1+ w_2 + w_3 : w_1,w_2,w_3 \in \mathbb{H}_3\}
\end{equation}
of the filled deltoid $\mathbb{H}_3$. Moreover,
as $q \to \infty$, this shape is ``filled out'' in the sense of Theorem \ref{thm:hypocycloid}.
\end{theorem}
\begin{proof}
Let $g$ be a primitive root modulo $q$ and define $u=g^{\phi(q)/9}$, so that $u$ has multiplicative order $9$
modulo $q$. Since $p$ and $p-1$ are relatively prime, $p-1 \nmid p^{\alpha-1}(\frac{p-1}{3})=\phi(q)/3$, so $u^3 \not\equiv 1 \pmod{p}$. Thus,
\begin{equation*}
u^6 + u^3 + 1 \equiv (u^9 - 1)(u^3 - 1)^{-1} \equiv 0 \pmod{q},
\end{equation*}
so that $u^{6+j} \equiv - u^{3+j} - u^{j} \pmod{q}$ for $j=1,2,3$. Along similar lines, we have
$u^{-(6+j)} \equiv -u^{-(3+j)} - u^{-j} \pmod{q}$ for $j=1,2,3$.
For $k=1,2,\ldots,6$, let
\begin{equation*}
\zeta_k = e\Big( \frac{au^k+bu^{-k}}{q} \Big)
\end{equation*}
and observe that
\begin{align*}
K(a,b,q,9)
&=\sum_{k=1}^9 \eqsmall{au^k+bu^{-k}} \\
&= \sum_{k=1}^6 \eqsmall{au^k+bu^{-k}} + \sum_{j=1}^3 \eqsmall{a(-u^{3+j}-u^j)+b(-u^{-(3+j)}-u^{-j})} \\
&= \zeta_1 + \zeta_2+\cdots + \zeta_6 + \frac{1}{\zeta_1 \zeta_4} + \frac{1}{\zeta_2 \zeta_5} + \frac{1}{\zeta_3 \zeta_6} \\
&= \Big( \zeta_1 + \zeta_4 + \frac{1}{\zeta_1 \zeta_4} \Big) +
\Big( \zeta_2 + \zeta_5 + \frac{1}{\zeta_2 \zeta_5} \Big) +
\Big( \zeta_3 + \zeta_6 + \frac{1}{\zeta_3 \zeta_6} \Big).
\end{align*}
Thus, $K(a,b,q,9)$ belongs to $\mathbb{H}_3 + \mathbb{H}_3 + \mathbb{H}_3$.
Since $\phi(9)=6$, Lemma \ref{Lemma:UniformDistribution}, applied with $\omega_q = u$ and with $b$ replaced by $bu^{-1}$, ensures that the sets
\begin{equation*}
T_q = \left\{\left(\frac{au+bu^{-1}}{q}, \ldots, \frac{au^6+bu^{-6}}{q}\right) : 0\leq a \leq q-1\right\}
\end{equation*}
are uniformly distributed modulo $1$ for any fixed $b$. Thus, the values
$\{K(a,b,q,9) : 0 \leq a,b \leq q-1\}$ ``fill out'' $\mathbb{H}_3 + \mathbb{H}_3 + \mathbb{H}_3$ as $q \to \infty$.
\end{proof}
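The threefold-deltoid decomposition derived above can likewise be checked by machine; in the following sketch the prime $p=523$ is taken from Figure \ref{fig:d=9} and the pair $(a,b)$ is an arbitrary choice.
\begin{verbatim}
# Check the threefold-deltoid decomposition of K(a,b,p,9) for p = 523 = 9*58+1.
import cmath

def K(a, b, q, d):
    e = lambda t: cmath.exp(2j * cmath.pi * t / q)
    return sum(e(a * u + b * pow(u, -1, q))
               for u in range(1, q) if pow(u, d, q) == 1)

p, a, b = 523, 17, 42
u = next(x for x in range(2, p)              # an element of order exactly 9
         if pow(x, 9, p) == 1 and pow(x, 3, p) != 1)
zeta = [cmath.exp(2j * cmath.pi * ((a * pow(u, k, p) + b * pow(u, -k, p)) % p) / p)
        for k in range(7)]                   # zeta[1], ..., zeta[6] are used below
print(K(a, b, p, 9))
print(sum(zeta[j] + zeta[j + 3] + 1 / (zeta[j] * zeta[j + 3]) for j in (1, 2, 3)))
\end{verbatim}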
\section{Of squares and Sali\'{e} sums}\label{sec:h=2}
The preceding subsections concerned the asymptotic behavior of generalized Kloosterman sums
$K(a,b,p^{\alpha},d)$, in which $d|(p-1)$ and $p^{\alpha}$ tends to infinity. Here
we turn the tables somewhat and consider the sums $K(a,b,p,\frac{p-1}{d})$ for $d = 2^n$. In general, we take
$d$ to be the largest power of two that divides $p-1$; otherwise the cyclic subgroup of $({\mathbb{Z}}/p{\mathbb{Z}})^\times$ of order $\frac{p-1}{d}$
has even order and hence contains $-1$. This forces $K(a,b,p,\frac{p-1}{d})$ to be real valued, which is uninteresting from our perspective.
For a fixed odd prime $p$,
\begin{equation*}
T(a,b,p)=\sum_{u=1}^{p-1}\leg{u}{p}\ep{au+bu^{-1}}
\end{equation*}
is called a \emph{Sali\'{e} sum}; here $(\frac{u}{p})$ denotes the Legendre symbol.
Although they bear a close resemblance to classical Kloosterman sums,
the values of Sali\'e sums can be explicitly determined \cite{iwaniec2004analytic}.
If $p$ is an odd prime and $(a,p)=(b,p)=1$, then
\begin{equation}\label{eq:SalieExplicit}
T(a,b,p)=
\begin{cases}
2\tau_p\cos\Big(\dfrac{2\pi k}{p} \Big) & \text{if } \leg{a}{p}=\leg{b}{p}=1,\\
-2\tau_p\cos\Big(\dfrac{2\pi k}{p} \Big) & \text{if } \leg{a}{p}=\leg{b}{p}=-1, \\
0 & \text{otherwise},
\end{cases}
\end{equation}
where $k$ is a square root of $4ab$ modulo $p$ (that is, $k^2 \equiv 4ab \pmod{p}$) and
\begin{equation*}
\tau_n=\begin{cases} \sqrt{n} & \mbox{if } n \equiv 1 \pmod{4},\\ i\sqrt{n} & \mbox{if } n \equiv 3 \pmod{4}. \end{cases}
\end{equation*}
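The evaluation \eqref{eq:SalieExplicit} is easy to compare with the defining sum. The sketch below (illustrative; the prime $p=43$, the pair $(a,b)$, and the helper names are our own choices) prints both sides.
\begin{verbatim}
# Compare the Salie sum T(a,b,p) with its explicit evaluation.
import cmath

def legendre(u, p):
    s = pow(u % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def salie_direct(a, b, p):
    e = lambda t: cmath.exp(2j * cmath.pi * t / p)
    return sum(legendre(u, p) * e(a * u + b * pow(u, -1, p)) for u in range(1, p))

def salie_explicit(a, b, p):
    if legendre(a, p) != legendre(b, p):
        return 0
    tau = cmath.sqrt(p) if p % 4 == 1 else 1j * cmath.sqrt(p)
    k = next(x for x in range(p) if (x * x - 4 * a * b) % p == 0)
    return legendre(a, p) * 2 * tau * cmath.cos(2 * cmath.pi * k / p)

p, a, b = 43, 6, 10
print(salie_direct(a, b, p))
print(salie_explicit(a, b, p))    # either square root k of 4ab gives the same value
\end{verbatim}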
\begin{figure}
\caption{Plots of $K(a,b,p,\frac{p-1}{2})$ with $0 \leq a,b \leq p-1$ for $p=1907$, $6007$, and $11447$; see Theorem \ref{h2}.}
\label{fig:square}
\end{figure}
The following result explains the phenomenon observed in Figure \ref{fig:square}.
\begin{theorem}\label{h2}
Let $p \equiv 3 \pmod{4}$ be an odd prime. If $p \nmid ab$, then
\begin{equation}\label{eq:ReIm}
|\operatorname{Re} K(a,b,p,\tfrac{p-1}{2})| \leq \sqrt{p}, \qquad
|\operatorname{Im} K(a,b,p,\tfrac{p-1}{2})| \leq \sqrt{p}.
\end{equation}
If $p|ab$, then
\begin{equation*}
K(a,b,p,\tfrac{p-1}{2}) =
\begin{cases}
\frac{1}{2}\big( ( \frac{b}{p})\tau_p - 1\big) & \text{if $a=0$ and $b \neq 0$}, \\
\frac{1}{2}\big( ( \frac{a}{p})\tau_p - 1\big) & \text{if $b=0$ and $a \neq 0$}, \\
\frac{p-1}{2} & \text{if $a=b=0$}.
\end{cases}
\end{equation*}
\end{theorem}
\begin{proof}
Since
\begin{align*}
T(a,b,p) &=\sum_{u^{\frac{p-1}{2}}=1} \epsmall{au+bu^{-1}} - \sum_{u^{\frac{p-1}{2}}=-1}\epsmall{au+bu^{-1}}, \\
K(a,b,p) &=\sum_{u^{\frac{p-1}{2}}=1} \epsmall{au+bu^{-1}} + \sum_{u^{\frac{p-1}{2}}=-1}\epsmall{au+bu^{-1}},
\end{align*}
it follows that
\begin{equation}
K(a,b,p,\tfrac{p-1}{2})=\tfrac{1}{2}\big(T(a,b,p)+K(a,b,p)\big);
\label{eq:h2}
\end{equation}
we thank Bill Duke for pointing this out to us.
By \eqref{eq:SalieExplicit} we have $|T(a,b,p)|\leq 2\sqrt{p}$, so $\frac{1}{2}|T(a,b,p)| \leq \sqrt{p}$.
On the other hand,
the Weil bound for classical Kloosterman sums ensures that $\frac{1}{2}|K(a,b,p)| \leq \sqrt{p}$ if $p \nmid ab$ \cite{weil}.
Since $p \equiv 3 \pmod{4}$, the Sali\'{e} sums $T(a,b,p)$ are purely imaginary (or zero), which yields
\eqref{eq:ReIm}. The evaluation of $K(a,b,p,\tfrac{p-1}{2})$ when $p|ab$ is straightforward and omitted.
\end{proof}
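Both the identity \eqref{eq:h2} and the bounds \eqref{eq:ReIm} can be spot-checked numerically. In the sketch below the prime $p=1907$ is taken from Figure \ref{fig:square}, the pairs $(a,b)$ are arbitrary, and $K(a,b,p,\frac{p-1}{2})$ is computed as a sum over the quadratic residues.
\begin{verbatim}
# Spot-check K(a,b,p,(p-1)/2) = (T(a,b,p) + K(a,b,p))/2 and the bounds above.
import cmath, math

def legendre(u, p):
    s = pow(u % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

p = 1907                                     # p = 3 (mod 4)
e = lambda t: cmath.exp(2j * cmath.pi * t / p)
for a, b in [(1, 1), (2, 5), (123, 456)]:
    terms = [(u, e(a * u + b * pow(u, -1, p))) for u in range(1, p)]
    qr_sum = sum(t for u, t in terms if legendre(u, p) == 1)   # K(a,b,p,(p-1)/2)
    salie = sum(legendre(u, p) * t for u, t in terms)          # T(a,b,p)
    kloos = sum(t for u, t in terms)                           # classical K(a,b,p)
    assert abs(qr_sum - (salie + kloos) / 2) < 1e-6
    assert abs(qr_sum.real) <= math.sqrt(p) + 1e-9
    assert abs(qr_sum.imag) <= math.sqrt(p) + 1e-9
print("identity and bounds verified for p =", p)
\end{verbatim}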
\begin{figure}
\caption{Generalized Kloosterman sums $K(a,b,p,\frac{p-1}{4})$ with $0 \leq a,b \leq p-1$ for $p=6053$ and $p=13309$; note the ``box-in-a-box'' structure.}
\label{fig:h=4}
\end{figure}
\section{Boxes in boxes}
The images for $d=\frac{p-1}{4}$ resemble a rectangle inside a larger square; see Figure \ref{fig:h=4}. This
differs significantly from the $d=\frac{p-1}{2}$ case.
The following lemma and theorem partially explain the ``box-in-a-box'' behavior of $d=\frac{p-1}{4}$ plots.
\begin{lemma}\label{hreal}
Let $p$ be an odd prime of the form $p=2^nd+1$, with $d$ odd and $n \geq 1$. Then $K(a,b,p,\frac{p-1}{2^{n-1}})=2\operatorname{Re} K(a,b,p,\frac{p-1}{2^n})$.
\end{lemma}
\begin{proof}
Note that $p=2^nd+1=2^{n-1}(2d)+1$, so the subgroup underlying $K(a,b,p,\frac{p-1}{2^{n-1}})$ has even order $2d$ and contains $-1$; hence $K(a,b,p,\frac{p-1}{2^{n-1}})$ is real-valued. Therefore, it suffices to show that $2K(a,b,p,\frac{p-1}{2^{n}})-K(a,b,p,\frac{p-1}{2^{n-1}})$ is purely imaginary.
\begin{align*}
&\overline{2K(a,b,p,\tfrac{p-1}{2^{n}})-K(a,b,p,\tfrac{p-1}{2^{n-1}})} \\
&\qquad =2\sum_{u^d=1} \epsmall{-au-bu^{-1}} - \sum_{v^{2d}=1}\epsmall{-av-bv^{-1}} \\
&\qquad =\sum_{u^d=1} \epsmall{-au-bu^{-1}} - \sum_{v^d=-1} \epsmall{-av-bv^{-1}} \\
&\qquad =\sum_{u^d=(-1)^d} \epsmall{au+bu^{-1}} - \sum_{v^d=-(-1)^d} \epsmall{av+bv^{-1}} \\
&\qquad =\sum_{u^{2d}=1} \epsmall{au+bu^{-1}} - 2\sum_{v^d=-(-1)^d} \epsmall{av+bv^{-1}} .
\end{align*}
Since $d$ is odd, $-(-1)^d=1$, so the above term simplifies to
$K(a,b,p,\frac{p-1}{2^{n-1}})-2K(a,b,p,\frac{p-1}{2^{n}})$. Then $2K(a,b,p,\frac{p-1}{2^{n}})-K(a,b,p,\frac{p-1}{2^{n-1}})$ is purely imaginary.
\end{proof}
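A quick numerical check of the lemma, with $p=13309$ from Figure \ref{fig:h=4} (so that $p-1=2^2\cdot 3327$ with $3327$ odd) and an arbitrary pair $(a,b)$:
\begin{verbatim}
# Check K(a,b,p,(p-1)/2) = 2 Re K(a,b,p,(p-1)/4) for p - 1 = 2^2 * (odd).
import cmath

def K(a, b, p, d):
    e = lambda t: cmath.exp(2j * cmath.pi * t / p)
    return sum(e(a * u + b * pow(u, -1, p))
               for u in range(1, p) if pow(u, d, p) == 1)

p, a, b = 13309, 7, 31                       # 13308 = 4 * 3327 with 3327 odd
lhs = K(a, b, p, (p - 1) // 2)               # real valued (the subgroup contains -1)
rhs = 2 * K(a, b, p, (p - 1) // 4).real
assert abs(lhs - rhs) < 1e-6
print(lhs.real, rhs)
\end{verbatim}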
\begin{theorem}\label{thm:h4_bound}
Let $p \equiv 5 \pmod{8}$ be prime (so that $\frac{p-1}{4}$ is odd) and let $1 \leq a,b \leq p-1$.
Let $g$ be a primitive root of $({\mathbb{Z}}/p{\mathbb{Z}})^\times$ and write $a=g^r,b=g^s$. Then
\begin{equation*}
|\operatorname{Re} K(a,b,p,\tfrac{p-1}{4})| \le \begin{cases}
\sqrt{p} &\mbox{if } r-s \equiv 0,2 \pmod{4}, \\[3pt] \frac{\sqrt{p}}{2} &\mbox{if } r - s \equiv 1,3 \pmod{4}. \end{cases}
\end{equation*}
Furthermore, if $r-s \equiv 2 \pmod{4}$, then $\operatorname{Im} K(a,b,p,\frac{p-1}{4})=0$.
\end{theorem}
\begin{proof}
By \eqref{eq:h2}, which holds for every odd prime $p$, we know that $$K(a,b,p,\tfrac{p-1}{2}) = \frac{T(a,b,p) + K(a,b,p)}{2}.$$ Using Lemma \ref{hreal}
with $n=2$ (note that $\tfrac{p-1}{4}$ is odd because $p \equiv 5 \pmod{8}$), we write $$\operatorname{Re} K(a,b,p,\tfrac{p-1}{4}) = \frac{T(a,b,p) + K(a,b,p)}{4}.$$ The summand $K(a,b,p)$ is a classical
Kloosterman sum, which is real valued and, by the Weil bound, at most $2\sqrt{p}$ in absolute value. Since $p \equiv 1\pmod{4}$,
$T(a,b,p)$ is also real valued and at most $2\sqrt{p}$ in absolute value. Note that $4ab$ is a quadratic residue modulo
$p$ if and only if $r+s$, and therefore $r-s$, is even. Thus, when $r - s \equiv 0\pmod{4}$ or $r-s \equiv 2 \pmod{4}$, we can say
\begin{align*}
|\operatorname{Re} K(a,b,p,\tfrac{p-1}{4})| &=\frac{|K(a,b,p) + T(a,b,p)|}{4}\le\frac{2\sqrt{p}+2\sqrt{p}}{4} = \sqrt{p}.
\end{align*}
Alternatively, if $r - s \equiv 1\pmod{4}$ or $r-s \equiv 3\pmod{4}$, then $T(a,b,p) = 0$ and
\begin{align*}
|\operatorname{Re} K(a,b,p,\tfrac{p-1}{4})| &=\frac{|K(a,b,p)|}{4}\le\frac{2\sqrt{p}}{4} = \frac{\sqrt{p}}{2}.
\end{align*}
Now suppose that $r-s \equiv 2 \pmod{4}$. Writing $d = \frac{p-1}{4}$ and using Lemma \ref{hreal}, we compute
\begin{align*}
i\operatorname{Im} K(a,b,p,\tfrac{p-1}{4})&=K(a,b,p,\tfrac{p-1}{4})-\operatorname{Re} K(a,b,p,\tfrac{p-1}{4}) \\
&=K(a,b,p,\tfrac{p-1}{4})-\frac{1}{2}K(a,b,p,\tfrac{p-1}{2}) \\
&=\sum_{k=1}^{d}\epsmall{ag^{4k}+bg^{-4k}}-\frac{1}{2}\sum_{k=1}^{2d}\epsmall{ag^{2k}+bg^{-2k}} \\
&=\frac{1}{2}\sum_{k=1}^{d}\epsmall{ag^{4k}+bg^{-4k}} - \frac{1}{2}\sum_{k=1}^d\epsmall{ag^{4k+2}+bg^{-4k-2}} \\
&=\frac{1}{2}\Big(K(a,b,p,\tfrac{p-1}{4})-K(ag^2,bg^{-2},p,\tfrac{p-1}{4}) \Big).
\end{align*}
Since $r-s \equiv 2 \pmod{4}$, we have $r \equiv 0 \pmod{4}$ and $s \equiv 2 \pmod{4}$,
or $r \equiv 1 \pmod{4}$ and $s \equiv 3 \pmod{4}$, up to permutation. Suppose the first case holds.
Then we can write $a=g^{4j},b=g^{4k+2}$ for some integers $j,k$. It is easy to check that
$K(a,b,p,d)=K(av,bv^{-1},p,d)$ whenever $v^d \equiv 1 \pmod{p}$, and that $K(a,b,p,d)=K(b,a,p,d)$ (substitute $u \mapsto u^{-1}$ in the defining sum). Using these facts, we obtain
\begin{align*}
&K(a,b,p,\tfrac{p-1}{4})-K(ag^2,bg^{-2},p,\tfrac{p-1}{4})\\
&\qquad =K(g^{4j},g^{4k+2},p,\tfrac{p-1}{4})-K(g^{4j+2},g^{4k},p,\tfrac{p-1}{4})\\
&\qquad =K(g^{4k},g^{4j+2},p,\tfrac{p-1}{4})-K(g^{4j+2},g^{4k},p,\tfrac{p-1}{4}) = 0.
\end{align*}
The second case is similar.
\end{proof}
It is apparent from Figure \ref{fig:h=4} that different bounds are obeyed by $K(g^r,g^s,p,\frac{p-1}{4})$
depending on the value of $r-s \pmod{4}$. Theorem \ref{thm:h4_bound} confirms this observation for the real
part of the plot. As seen in Figure \ref{fig:h=4}, the bound obeyed by
$\operatorname{Im} K(a,b,p,\frac{p-1}{4})$ also appears to depend on the value of $r-s$ modulo $4$. So far we have established the behavior of the imaginary
part only for the class $r-s \equiv 2 \pmod{4}$, where it vanishes. We have the following conjecture.
\noindent\textbf{Conjecture}:
Let $p=4d+1$ be a prime with $d$ odd, and let $g$ be a primitive root modulo $p$. Then
\begin{equation*}
|\operatorname{Im} K(g^r,g^s,p,\tfrac{p-1}{4})| \leq \begin{cases} \frac{\sqrt{2p}}{2}
&\mbox{if } r-s \equiv 1,3 \pmod{4}, \\ \sqrt{p} &\mbox{if } r-s \equiv 0 \pmod{4}. \end{cases}
\end{equation*}
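The conjecture can be probed numerically. The sketch below (our own experiment; the prime $p=1013 \equiv 5 \pmod{8}$, the restriction to $s=0$, and the helper names are arbitrary choices) records, for each residue class of $r-s$ modulo $4$, the largest value of $|\operatorname{Im} K(g^r,g^s,p,\frac{p-1}{4})|$ encountered, for comparison with the conjectured bounds.
\begin{verbatim}
# Record max |Im K(g^r, g^s, p, (p-1)/4)| by residue class of r - s (mod 4),
# with s = 0, for the single prime p = 1013 (p - 1 = 4 * 253 with 253 odd).
import cmath, math

p = 1013
g = min(x for x in range(2, p)               # smallest primitive root mod p
        if all(pow(x, (p - 1) // f, p) != 1 for f in (2, 11, 23)))
                                             # 2, 11, 23 are the prime factors of 1012
subgroup = [pow(g, 4 * k, p) for k in range((p - 1) // 4)]
inverses = [pow(u, -1, p) for u in subgroup]

def K4(a, b):
    return sum(cmath.exp(2j * cmath.pi * ((a * u + b * v) % p) / p)
               for u, v in zip(subgroup, inverses))

worst = [0.0, 0.0, 0.0, 0.0]
for r in range(p - 1):
    worst[r % 4] = max(worst[r % 4], abs(K4(pow(g, r, p), 1).imag))
print("sqrt(2p)/2 =", round(math.sqrt(2 * p) / 2, 1), " sqrt(p) =", round(math.sqrt(p), 1))
for cls in range(4):
    print("r - s =", cls, "(mod 4):", round(worst[cls], 2))
\end{verbatim}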
\section{Sporadic spiders}\label{sec:bugs}
\begin{figure}
\caption{Generalized Kloosterman sums $K(a,b,m,\Lambda)$ for a subgroup $\Lambda \leq ({\mathbb{Z}}/m{\mathbb{Z}})^\times$ of odd prime order $d$, with $(m,d)=(11,5)$, $(29,7)$, $(199,11)$, $(521,13)$, $(3571,17)$, and $(9349,19)$; each modulus is the Lucas number $L_d$.}
\label{fig:bugs}
\end{figure}
We conclude this note with an investigation of a peculiar and intriguing phenomenon.
Numerical evidence suggests that for a fixed odd prime $d$, the spider-like image depicted in Figure \ref{fig:bugs}
appears abruptly for only one specific modulus.
Figure \ref{fig:fleeting} illustrates the swift coming and going of the ephemeral spider.
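The data behind such plots can be generated in a few lines. The sketch below assumes, in line with the sums used throughout, that $K(a,b,m,\Lambda)=\sum_{u \in \Lambda} e\big((au+bu^{-1})/m\big)$, where $\Lambda$ is the subgroup generated by an element of multiplicative order $d$ in $({\mathbb{Z}}/m{\mathbb{Z}})^\times$; the helper name and the sample modulus are our own choices.
\begin{verbatim}
# Generate the values K(a,b,m,Lambda) for a subgroup Lambda of order d,
# shown here for m = L_5 = 11 and d = 5.
import cmath, math

def subgroup_of_order(m, d):
    """Subgroup of (Z/mZ)* generated by some element of multiplicative order d."""
    for x in range(2, m):
        if math.gcd(x, m) == 1 and pow(x, d, m) == 1 and \
           all(pow(x, d // r, m) != 1 for r in range(2, d + 1) if d % r == 0):
            return [pow(x, k, m) for k in range(d)]
    raise ValueError("no element of order d")

m, d = 11, 5
Lam = subgroup_of_order(m, d)
values = [sum(cmath.exp(2j * cmath.pi * ((a * u + b * pow(u, -1, m)) % m) / m)
              for u in Lam)
          for a in range(m) for b in range(m)]
print(Lam)
print(values[:3])                            # plotting all m^2 points gives one panel
\end{verbatim}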
\begin{figure}
\caption{Generalized Kloosterman sums for $d=17$ and the three consecutive primes $p=3469$, $3571$, and $3673$ congruent to $1 \pmod{17}$; the spider appears only for the middle modulus $3571 = L_{17}$.}
\label{fig:fleeting}
\end{figure}
The moduli that generate the spiders are all of the form $L_{p(n)}$,
where $p(n)$ is the $n$th prime and $L_k$ is the $k$th Lucas number; see Table \ref{table:bugs}.
Recall that the Lucas numbers (sequence
\href{https://oeis.org/A000032}{\texttt{A000032}} in the OEIS) are defined by the initial conditions $L_0=2$, $L_1=1$,
and the recurrence relation $L_n=L_{n-1}+L_{n-2}$ for $n>1$.
It is not immediately clear that our pattern can continue indefinitely since for prime $d$ we require
$({\mathbb{Z}}/L_d{\mathbb{Z}})^\times$ to have a subgroup of order $d$. This is addressed by Theorem \ref{Theorem:Lucas}
below. We first need the following lemma.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\
\hline\hline
$p(n)$ & 5 & 7 & 11 & 13 & 17 & 19 & 23 & 29 & 31 \\
\hline
$L_{p(n)}$ & 11 & 29 & 199 & 521 & 3571 & 9349 & 64079 & 1149851 & 3010349\\
\hline
$\phi(L_{p(n)})$ & 10 & 28 & 198 & 520 & 3570 & 9348 & 63480 & 1130304 & 3010348 \\
\hline
\end{tabular}
\caption{The sequence $L_{p(n)}$, in which $p(n)$ is the $n$th prime and $L_k$ is the $k$th Lucas number;
see \href{https://oeis.org/A180363}{\texttt{A180363}} in the OEIS. Although the initial terms of this sequence are prime,
not all of them are. Theorem \ref{Theorem:Lucas} ensures that $p(n)$ divides $\phi(L_{p(n)})$ for $n \geq 3$.}
\label{table:bugs}
\end{center}
\end{table}
\begin{lemma}\label{qlem}
If $p \geq 5$ is an odd prime, then there is an odd prime $q$ so that $q |L_p$.
\end{lemma}
\begin{proof}
If we observe $L_0,L_1,\ldots,L_{13}$ modulo $8$, we get $2,1,3,4,7,3,2,5,7,4,3,7,2,1$.
Because the last two terms agree with the first two and $L_n = L_{n-1} + L_{n-2}$, the Lucas numbers are periodic modulo $8$ with period $12$.
Thus, $L_n$ is never divisible by $8$; moreover, $L_n > 8$ for all $n \geq 5$.
Any integer greater than $8$ and not divisible by $8$ cannot be a power of two. Thus, there exists an odd prime $q$ such that $q|L_p$.
\end{proof}
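The period-$12$ pattern used above is easy to verify by machine; a minimal sketch:
\begin{verbatim}
# The Lucas numbers modulo 8 repeat with period 12 and never vanish.
def lucas_mod(n, m):
    a, b = 2 % m, 1 % m
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

residues = [lucas_mod(n, 8) for n in range(14)]
print(residues)                              # 2,1,3,4,7,3,2,5,7,4,3,7,2,1
assert residues[12:14] == residues[0:2] and 0 not in residues
\end{verbatim}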
\begin{theorem}\label{Theorem:Lucas}
If $p\geq 5$ is an odd prime, then $p | \phi(L_p)$.
\end{theorem}
\begin{proof}
Let $F_n$ denote the $n$th Fibonacci number and let $z(n)$ denote the order of appearance of $n$ \cite[p.~89]{vajda1989fibonacci}.
By Lemma \ref{qlem} there is an odd prime $q$ such that $q| L_p$.
Using the fact that $F_{2p} = L_pF_p$ \cite[p.~25]{vajda1989fibonacci}, we know that $q| F_{2p}$.
Furthermore, $q| F_{z(q)}$ by \cite[p.~89]{vajda1989fibonacci}. Consequently,
$$q| \gcd(F_{2p},F_{z(q)}) = F_{\gcd(2p,z(q))},$$
where we have used that $\gcd(F_a,F_b) = F_{\gcd(a,b)}$ for all $a,b \in {\mathbb{Z}}^+$ \cite[Theorem 16.3]{koshy2011fibonacci}.
Now, set $d = \gcd(2p,z(q))$, and observe that since $p$ is prime, $d = 1$, $2$, $p$ or $2p$.
However, $q$ is an odd prime and $q| F_d$. If $d = 1$ or $2$, this implies $q| 1$ because $F_1 = F_2 = 1$,
which is impossible because $q$ is an odd prime. Thus, $d = p$ or $2p$.
Now, consider the case $d = p$, which implies that $q| F_p$. However, by \cite[p.~29]{vajda1989fibonacci} we know
$$L_p^2 - 5F_p^2 = 4(-1)^p,$$ and since $q$ divides both $L_p$ and $F_p$, it follows that $q| 4$, which is impossible because $q$ is an odd prime.
Thus, $d = \gcd(2p,z(q)) = 2p$ and therefore $2p| z(q)$. Furthermore, we know $z(q)| q - (\frac{q}{5})$.
Now, $q$ cannot be 5, because the Lucas numbers are always coprime to 5 \cite[p.~89]{vajda1989fibonacci}.
We would like to show that $(\frac{q}{5}) = 1$, because then $$p \big| 2p \big| z(q) \big| (q-1) = \phi(q)\big| \phi(L_p).$$
Thus we must show that $q$ is a quadratic residue modulo $5$. For this, we must again use the fact that $L_p^2 - 5F_p^2 = 4(-1)^p$.
Reducing this modulo $q$, we get that $$-5F_p^2 \equiv -4 \pmod{q}.$$ Thus $(\frac{5}{q}) = 1$,
and furthermore $(\frac{q}{5})=1$ by quadratic reciprocity.
\end{proof}
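The divisibility $p \mid \phi(L_p)$ can also be confirmed directly for the primes appearing in Table \ref{table:bugs}; the sketch below uses a naive trial-division totient, which is adequate at these sizes.
\begin{verbatim}
# Confirm p | phi(L_p) for the primes in the table.
def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def phi(n):
    """Euler's totient by trial division (fine for n up to a few million)."""
    result, m, f = n, n, 2
    while f * f <= m:
        if m % f == 0:
            while m % f == 0:
                m //= f
            result -= result // f
        f += 1
    if m > 1:
        result -= result // m
    return result

for p in (5, 7, 11, 13, 17, 19, 23, 29, 31):
    Lp = lucas(p)
    print(p, Lp, phi(Lp), phi(Lp) % p == 0)  # last column should always be True
\end{verbatim}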
For $n \geq 3$, Theorem \ref{Theorem:Lucas} guarantees the existence of a subgroup $\Lambda$ of order $p(n)$ in
$({\mathbb{Z}}/L_{p(n)}{\mathbb{Z}})^\times$. In principle, this permits the patterns hinted at in
Figure \ref{fig:fleeting} to continue indefinitely. However,
many questions remain. How can the spider phenomenon be formalized?
One immediately recognizes a spider upon seeing one, but it is harder to express the irregularity in precise mathematical terms.
Further, how does the structure of Lucas numbers with prime indices influence the spider-like images?
These are questions we hope to return to at a later time.
\label{Bibliography}
\end{document} |