\begin{document}
\title{Loschmidt echo and fidelity decay near an exceptional point}
\section{Introduction}
Exceptional points (EPs) are special spectral degeneracies
of non-Hermitian Hamiltonians governing
the dynamics of open classical and quantum systems \cite{R1,R2,R3,R4,R5}.
Recently, the dynamical behavior of non-Hermitian systems near an EP has sparked great interest, with a wealth of
applications in several areas of physics, notably in integrated photonic systems \cite{R6,R7,R8,R9,R10,R11,R12,R13}, acoustics \cite{R14,R15,R16} and optomechanics \cite{R17,R18,R19,R20,R21,R22}, to mention a few (for recent reviews and more extended references see \cite{R23,R24,R25,R26,R26bis,R26tris}).
At the EP, two or more eigenvalues and the corresponding
eigenstates of the Hamiltonian $H$ coalesce. A ubiquitous property of a non-Hermitian system is its extreme sensitivity to perturbations when operated close to an EP.
Such a result stems from the fact that, if a Hamiltonian $H=H(h)$ that depends on a control parameter $h$ shows an EP at $h=h_0$, then the energy spectrum $E=E(h)$ and the corresponding eigenfunctions are non-analytic and show a branch point at $h_0$, with $(d E/ d h)_{h_0}=\infty$ \cite{R1,R27,R28}. The strong sensitivity to perturbations is at the heart of several phenomena studied in recent works, such as sensing enhancement in optical microcavities \cite{R11,R12,R29,R30,R31}, cavity-assisted enhanced metrology and quantum sensing \cite{R32,R32bis}, ultrasensitive micro-scale gyroscopes \cite{R33,R34,R35}, quantum and photonic catastrophes \cite{R36,R37}, and critical phenomena and dynamical quantum phase transitions \cite{R38,R38bis,R38uff,R38tris}. A related phenomenon, observed as the parameter $h$ is slowly cycled around an EP, is the asymmetric breakdown of the adiabatic theorem and unidirectional transitions \cite{R18,R36,R37,R38,R39,R40,R41,R42,R43,R44,R45}, resulting in topological energy transport \cite{R18} and asymmetric mode switching \cite{R43}.\\
In the study of complex quantum systems, the stability of quantum evolution in the presence of perturbations is rather generally measured by quantities such as the Loschmidt echo and the fidelity after imperfect time reversal or quench dynamics \cite{R46,R47,R48,R48bis}. For example, in Loschmidt echo setups an initial
state is propagated for a given time and then reversed. The comparison of the resulting state with the initial one provides a measure of the irreversibility suffered by the system during its evolution, generated by differences between the forward and backward dynamics. Similarly, in fidelity setups an initial state $\psi_0$ evolves, after a time interval $t$, into the two states $\psi_1(t)$ and $\psi_2(t)$ under the Hamiltonians ${H}_1$ and ${H}_2={H}_1+ \epsilon P$, where $\epsilon P$ is a perturbation: the overlap $\mathcal{F}(t) = |\langle \psi_2 (t) | \psi_1 (t)
\rangle|^2$, referred to as the fidelity, provides a measure of the stability of the dynamics under the perturbation. When such concepts are extended to non-Hermitian dynamical systems, the effect of a perturbation on the dynamical behavior is expected to be strongly enhanced near an EP, resulting in a degraded fidelity at short times. In this work it is shown that, rather counterintuitively, in certain perturbed non-Hermitian models the fidelity decay can be {\it decelerated} (rather than accelerated) as the system operates {\it closer} to (rather than far from) an EP. We illustrate such an intriguing behavior by considering a paradigmatic model of non-Hermitian transport in tight-binding lattices with asymmetric hopping, namely the Hatano-Nelson model \cite{R49,R50,R51}. This model shows rich physics and has seen a renewed interest very recently \cite{R52,R53,R54,R55,R56,R57,R58,R59,R60,R61,R62,R63,R64,R65,R66,R66bis,R67,R68,R69,R70}.
\section{Model and non-Hermitian stationary perturbation theory}
Let ${H}_0$ be a Hermitian $N \times N$ matrix that rather generally describes the coherent hopping dynamics of a single particle on a finite-dimensional tight-binding network, and let us indicate by $E_1$, $E_2$,..., $E_N$ and ${\bf u}_1$, ${\bf u}_2$,..., ${\bf u}_N$ the eigenenergies and corresponding eigenvectors of ${H}_0$, i.e.
\begin{equation}
{H}_0 {\bf u}_n = E_n { \bf u}_n
\end{equation}
($n=1,2,3,...,N$). For the sake of simplicity, we assume that the eigenvalues are not degenerate, take the normalization $\langle {\bf u}_m | {\bf u}_n \rangle = \delta_{n,m}$ for the eigenvectors, and
assume short-range hopping so that $({H}_0)_{n,m}=0$ for $|n-m| >L$ for some integer $L \geq 1$. Indicating by ${X}$ the $N \times N$ non-unitary diagonal matrix given by
\begin{equation}
{X}_{n,m}= \exp(-hn) \delta_{n,m}
\end{equation}
with $h \geq 0$,
we can introduce the {\it pseudo-Hermitian} Hamiltonian ${H}_1$ via the similarity transformation
\begin{equation}
{H}_1 ={X} {H}_0 {X}^{-1},
\end{equation}
i.e.
\begin{equation}
\left( {H}_1 \right)_{n,m}=\left( {H}_0 \right)_{n,m} \exp [ h(m-n) ].
\end{equation}
The similarity transformation basically corresponds to a non-Hermitian gauge transformation of the wave function, which arises by application of a synthetic imaginary gauge field $h$. Such an imaginary gauge phase could be realized experimentally in photonic microring structures and in ultracold atomic systems, as proposed in some recent works \cite{R53,R54,R64,R66}.
For example, if ${H}_0$ describes the hopping dynamics in a uniform one-dimensional chain with nearest-neighbor hopping amplitude $\kappa$ and open boundary conditions, i.e.
\begin{equation}
{H}_0=\left(
\begin{array}{ccccccc}
0 & \kappa & 0 & 0 & ... & 0 & 0 \\
\kappa & 0 & \kappa& 0 & ... & 0 & 0 \\
0 & \kappa & 0 & \kappa & ... & 0 & 0 \\
... & ... & ... & ...& ...& ... & ... \\
0 & 0 & 0 & 0 & ... & 0 & \kappa \\
0 & 0 & 0 & 0 & ... & \kappa & 0
\end{array}
\right)
\end{equation}
a non-vanishing imaginary gauge phase $h$ introduces asymmetric forward/backward hopping amplitudes $\kappa_1= \kappa \exp(h)$ and $\kappa_2= \kappa \exp(-h)$ in the pseudo-Hermitian Hamiltonian ${H}_1$, namely one has
\begin{equation}
{H}_1=\left(
\begin{array}{cccccc}
0 & \kappa \exp(h) & 0 & ... & 0 & 0 \\
\kappa \exp(-h)& 0 & \kappa \exp(h)& ... & 0 & 0 \\
0 & \kappa \exp(-h) & 0 & ... & 0 & 0 \\
... & ... & ...& ...& ... & ... \\
0 & 0 & 0 & ... & 0 & \kappa \exp(h) \\
0 & 0 & 0 & ... & \kappa \exp(-h) & 0
\end{array}
\right).
\end{equation}
This pseudo-Hermitian Hamiltonian reproduces the Hatano-Nelson model without disorder \cite{R49} and displays interesting topological properties, as shown in recent works \cite{R64}.
Clearly, the Hamiltonians ${H}_0$ and ${H}_1$ are isospectral, and the eigenvectors ${\bf v}_n$ of ${H}_1$ are simply given by ${\bf v}_n=X {\bf u}_n$, i.e. ${H}_1 {\bf v}_n =E_n {\bf v}_n$ with
\begin{equation}
({\bf{v}}_n)_l=({\bf{u}}_n)_l \exp(-lh)
\end{equation}
($l,n=1,2,...,N$). Note that the imaginary gauge field squeezes the eigenstates toward the left edge (for $h>0$), i.e. all eigenstates become {\em left-edge states}. This effect has been referred to in some recent works as the {\em non-Hermitian skin effect}.
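The isospectrality of $H_0$ and $H_1$ and the edge squeezing of the eigenvectors of $H_1$ can be readily checked numerically. The following minimal Python sketch (the parameter values are merely illustrative and are not those used in the figures) builds both matrices for a small chain and compares their spectra and eigenvector localization:
\begin{verbatim}
import numpy as np

N, kappa, h = 20, 1.0, 0.3   # illustrative values (assumptions)

# Hermitian chain H0 [Eq. (5)] and Hatano-Nelson Hamiltonian H1 [Eq. (6)]
H0 = kappa * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
H1 = (kappa * np.exp(h) * np.diag(np.ones(N - 1), 1)
      + kappa * np.exp(-h) * np.diag(np.ones(N - 1), -1))

# Isospectrality, Eq. (3): H1 = X H0 X^{-1}
print(np.allclose(np.sort(np.linalg.eigvals(H0).real),
                  np.sort(np.linalg.eigvals(H1).real)))   # True

# Non-Hermitian skin effect: the center of mass of every eigenvector of H1
# is pushed toward the left edge by the exp(-l h) envelope of Eq. (7)
_, V = np.linalg.eig(H1)
sites = np.arange(1, N + 1)[:, None]
com = np.sum(sites * np.abs(V)**2, axis=0) / np.sum(np.abs(V)**2, axis=0)
print(com.mean(), (N + 1) / 2)   # mean center of mass well below (N+1)/2
\end{verbatim}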
Since ${H}_1$ is not Hermitian, the left and right eigenvectors of ${H}_1$ do not coincide. Indicating by $\bf{v}^{\dag}_n$ the eigenvector of the adjoint ${H}_1^{\dag}=X^{-1} {H}_0 X$ with energy $E_n$, one has
\begin{equation}
({\bf{v}^{\dag}}_n)_l=({\bf{u}}_n)_l \exp(lh).
\end{equation}
Note that the ratio
\begin{eqnarray}
\frac{\langle {\bf v}_n | {\bf v}_n \rangle \langle {\bf v}_n^{\dag} | {\bf v}_n^{\dag} \rangle}{| \langle {\bf v}^{\dag}_n | {\bf v}_n \rangle |^2} & = & \left( \sum_{l=1}^{N} \left| ({\bf u}_n)_l \right|^2 \exp(2hl) \right) \nonumber \\
& \times &
\left( \sum_{l=1}^{N} \left| ({\bf u}_n)_l \right|^2 \exp(-2hl) \right)
\end{eqnarray}
is one in the Hermitian limit $h=0$, while it increases like $\sim 1/ \alpha^{2N}$ and diverges as $h \rightarrow \infty$, where we have set $\alpha \equiv \exp(-h)$: this is the signature that an EP is approached as $h$ is increased. This can also be shown by direct computation of the matrix ${H}_{1}$ in the large $h$ limit. In this case, the dominant elements of ${H}_{1}$ are those on the diagonal ${m=n+L}$, which scale like $\sim \exp(hL)$ according to Eq.(4). Hence, in the large $h$ limit, at leading order in the small parameter $\alpha$ one has ${H}_1 = \exp(hL) \left[ {A}+ O(\alpha) \right]$ with $({A})_{n,m}= ({H}_0)_{n,m} \delta_{n,m-L}$. Clearly, since the matrix ${A}$ has an EP at zero energy of order $(N-L)$, in the large $h$ limit such an EP is approached by the matrix ${H}_1$ as well.\par
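The growth of the ratio in Eq.(10) with increasing $h$ can also be verified with a few lines of code. The following sketch (illustrative parameters, chosen here for the check and not taken from the text) evaluates the ratio for one eigenstate of the uniform chain:
\begin{verbatim}
import numpy as np

N, kappa = 20, 1.0
H0 = kappa * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
_, U = np.linalg.eigh(H0)        # columns are the orthonormal eigenvectors u_n
u2 = np.abs(U[:, 0])**2          # one eigenstate is enough for the check
l = np.arange(1, N + 1)

for h in (0.0, 0.2, 0.4):
    ratio = np.sum(u2 * np.exp(2 * h * l)) * np.sum(u2 * np.exp(-2 * h * l))
    print(h, ratio)              # equals 1 at h = 0 and keeps growing with h,
                                 # approaching the ~exp[2h(N-1)] scaling at large h
\end{verbatim}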
Let us now consider how a perturbation affects the spectrum of the pseudo-Hermitian Hamiltonian ${H}_1$. Let us consider the perturbed Hamiltonian
\begin{equation}
{H}_2={H}_1+ \epsilon P
\end{equation}
where the elements of the matrix perturbation $P$ are of the same order as the elements of ${H}_0$, say of order $\sim 1$, while $\epsilon$ is a small parameter that measures the strength of the perturbation. Clearly, $H_2$ is isospectral to the matrix
\begin{equation}
H_0^{\prime}=X^{-1} H_2 X=H_0+\epsilon P^{\prime}
\end{equation}
where we have set
\begin{equation}
P^{\prime} \equiv X^{-1} P X.
\end{equation}
Hence we can compute the energy spectrum of $H_0^{\prime}$, which differs from the Hermitian Hamiltonian $H_0$ by the (generally non-Hermitian) perturbation term $\epsilon P^{\prime}$. Applying standard Rayleigh-Schr\"odinger perturbation theory for non-degenerate eigenvalues, at first order in $\epsilon$ the perturbed eigenvalues of $H_2$ are thus given by
\begin{equation}
E^{\prime}_n \simeq E_n+ \epsilon \langle {\bf u}_n| P^{\prime} | {\bf u}_n \rangle.
\end{equation}
Clearly, while Eq.(13) holds in the Hermitian limit $h=0$, it rapidly fails to predict the correction to the eigenvalues in the non-Hermitian case when the perturbation $P$ is long-range and the number $N$ of sites of the network is large enough. In fact, from Eq.(12) it readily follows that, for a long-range perturbation such that, for example, the element $P_{N,1}$ does not vanish, one has $P^{\prime}_{N,1}=P_{N,1} \exp[h(N-1)]$; thus for $h \neq 0$ and large $N$, or more generally for $hN \gg 1$, the perturbation matrix element $P^{\prime}_{N,1}$ takes extremely large values, and Eq.(13) becomes invalid even for extremely small values of $\epsilon$, of order $\sim \exp(-hN)$. This result agrees with previous studies showing the strong dependence of the spectrum of the Hatano-Nelson Hamiltonian on boundary conditions \cite{R57,R63,R64} and is a clear signature that, in a non-Hermitian Hamiltonian near an EP, a small change of a control parameter can induce a comparatively much larger change in its energy spectrum. For example, let us consider a uniform tight-binding chain with nearest-neighbor hopping rate $\kappa$ and open boundary conditions, defined by the Hamiltonian $H_0$ given in Eq.(5), and let us assume that the perturbation $\epsilon P$ describes a small (Hermitian) coupling between the edge sites $n=1$ and $n=N$ of the chain, i.e. let us assume
\begin{equation}
P_{n,m}= \delta_{n,1}\delta_{m,N}+\delta_{n,N}\delta_{m,1}.
\end{equation}
This can be readily obtained by deforming a linear chain so as to weakly couple the edge sites, as shown in Fig.1. The perturbed Hamiltonian reads explicitly
\begin{equation}
{H}_2=\left(
\begin{array}{ccccccc}
0 & \kappa \exp(h) & 0 & ... & 0 & \epsilon \\
\kappa \exp(-h) & 0 & \kappa \exp(h) & ... & 0 & 0 \\
0 & \kappa \exp(-h) & 0 & ... & 0 & 0 \\
... & ... & ... & ...& ... & ... \\
\epsilon & 0 & 0 & ... & \kappa \exp(-h)& 0
\end{array}
\right).
\end{equation}
For $\epsilon=0$, i.e. in the absence of the perturbation, the energy spectrum of $H_2$ is real, independent of $h$ and given by
\begin{equation}
E_n=2 \kappa \cos[ \pi n/(N+1)]
\end{equation}
($n=1,2,...,N$), with corresponding eigenfunctions
\begin{equation}
({\bf v}_n)_l=\sqrt{\frac{2}{N+1}} \sin \left( \frac{nl \pi}{N+1}\right) \exp(-lh).
\end{equation}
In the presence of the long-range perturbation (14), the energies are modified according to Eq.(13) as follows
\begin{eqnarray}
E^{\prime}_n & \simeq & E_n+{\frac{2 \epsilon}{N+1}} \sin^2 \left( \frac{n \pi }{N+1} \right) \nonumber \\
& \times & \left\{ \exp[(N-1)h]+\exp[-(N-1) h] \right\} .
\end{eqnarray}
The perturbative analysis would predict the energy spectrum to remain real; however, it is clear that in long chains, even for small $\epsilon$ [of the order of $\sim \kappa \exp(-hN)$], the correction to the energy ceases to be small and the perturbative analysis is expected to fail even for $\epsilon$ much smaller than the smallest hopping rate $\kappa_2=\kappa \exp(-h)$. {The exact eigenvalues $E^{\prime}_n$ (energy spectrum) of the matrix $H_2$ can be computed from the roots of a self-inversive polynomial, as shown in Appendix A. In particular, as $\epsilon$ is increased above a critical value $\epsilon_c$, the energy spectrum ceases to be real and pairs of real energies coalesce and bifurcate into complex-conjugate energies via an EP. For $\cosh[(N-1)h] \gg 1$, the critical value $\epsilon_c$ of the perturbation strength takes the simple form $\epsilon_{c} \simeq \kappa / \{ 2 \cosh [(N-1) h] \}$.} As an example, Fig.2 shows the numerically-computed exact energy spectrum of the perturbed Hamiltonian $H_2$ in a lattice comprising $N=50$ sites for a few increasing values of $\epsilon$ and for $h=0$ [Hermitian limit, Fig.2(a)], $h=0.1$ [Fig.2(b)] and $h=0.2$ [Fig.2(c)]. As one can see, for a non-vanishing imaginary gauge field the energy spectrum of the perturbed Hamiltonian $H_2$ is strongly modified, even for small values of $\epsilon$, as compared to the spectrum of the unperturbed one $H_1$, and it rapidly bifurcates into complex energies as $\epsilon$ is increased {above $\epsilon_c$. A typical bifurcation scenario is shown in Fig.2(d), where the loci of the eigenvalues $E^{\prime}$ in the complex plane are depicted for increasing perturbation strength, from below to above the critical value $\epsilon_c$. As one can see, as $\epsilon$ is increased pairs of real energies coalesce via an EP and bifurcate into complex-conjugate ones.}
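The bifurcation scenario and the estimate of $\epsilon_c$ can be cross-checked by direct diagonalization of $H_2$ for increasing $\epsilon$. A minimal Python sketch (the parameter values are illustrative assumptions) is:
\begin{verbatim}
import numpy as np

def H2(N, kappa, h, eps):
    """Perturbed Hatano-Nelson Hamiltonian of Eq. (15)."""
    H = (kappa * np.exp(h) * np.diag(np.ones(N - 1), 1)
         + kappa * np.exp(-h) * np.diag(np.ones(N - 1), -1))
    H[0, N - 1] += eps          # weak coupling between the two edge sites
    H[N - 1, 0] += eps
    return H

N, kappa, h = 50, 1.0, 0.1                    # illustrative values
eps_c = kappa / (2 * np.cosh((N - 1) * h))    # Eq. (A.11)

for eps in (0.5 * eps_c, 0.9 * eps_c, 1.5 * eps_c):
    E = np.linalg.eigvals(H2(N, kappa, h, eps))
    print(f"eps/eps_c = {eps/eps_c:.2f}  max|Im E| = {np.max(np.abs(E.imag)):.2e}")
# Below eps_c the imaginary parts should stay at the numerical-noise level,
# while clearly complex-conjugate pairs are expected above eps_c.
\end{verbatim}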
\begin{figure}
\caption{\label{fig1} Schematic of a linear tight-binding chain deformed so as to weakly couple the two edge sites $n=1$ and $n=N$ (perturbation $\epsilon P$).}
\end{figure}
\begin{figure}
\caption{\label{fig2} Numerically-computed energy spectrum of the perturbed Hamiltonian $H_2$ for a lattice of $N=50$ sites and a few increasing values of $\epsilon$: (a) $h=0$ (Hermitian limit), (b) $h=0.1$, (c) $h=0.2$. (d) Loci of the eigenvalues $E^{\prime}$ in the complex plane as $\epsilon$ is increased across the critical value $\epsilon_c$.}
\end{figure}
\section{Dynamical stability under perturbations: fidelity decay}
The enhanced sensitivity of the energy spectrum of the pseudo-Hermitian Hamiltonian $H_1$ to a perturbation, as compared to the Hermitian case, rather generally corresponds to a faster deviation of the system temporal dynamics, as indicated by a faster decay of the fidelity. Let us consider an initial state ${\boldsymbol \psi}_0$ at time $t=0$, and let us consider its temporal evolution under the unperturbed Hamiltonian $H_1$ and the perturbed one $H_2=H_1+\epsilon P$. After setting ${\boldsymbol \psi}_1(t)= \exp(-i H_1 t) {\boldsymbol \psi}_0$ and ${\boldsymbol \psi}_2(t)= \exp(-i H_2 t) {\boldsymbol \psi}_0$, the deviation of the dynamics induced by the perturbation is measured by the fidelity \cite{R46}
\begin{equation}
\mathcal{F}(t)=\frac{| \langle {\boldsymbol \psi}_2(t) | {\boldsymbol \psi}_1(t) \rangle |^2}{\langle {\boldsymbol \psi}_1(t) | {\boldsymbol \psi}_1(t) \rangle \langle {\boldsymbol \psi}_2(t) | {\boldsymbol \psi}_2(t) \rangle}
\end{equation}
where the denominator is introduced because of the non-unitary dynamics. Note that $\mathcal{F}(t) \leq 1$ and $\mathcal{F}(t)=1$ if and only if the state vectors ${\boldsymbol \psi}_1(t)$ and ${\boldsymbol \psi}_2(t)$ are parallel. A special case corresponds to the initial state ${\boldsymbol \psi}_0$ being prepared in an eigenstate of the unperturbed Hamiltonian, such as the ground (equilibrium) state: in this case the fidelity measures the deviations of the dynamics after a sudden quench of the Hamiltonian from $H_1$ to $H_2$.
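For reference, the fidelity of Eq.(19) can be evaluated numerically by direct matrix exponentiation. The following short Python sketch (a generic helper with arbitrary test matrices, not tied to the model of the previous section) implements it and checks that $\mathcal{F}(0)=1$ and $\mathcal{F}(t)\leq 1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def fidelity(H1, H2, psi0, t):
    """Fidelity of Eq. (19) between evolutions under H1 and H2."""
    psi1 = expm(-1j * H1 * t) @ psi0
    psi2 = expm(-1j * H2 * t) @ psi0
    return (abs(np.vdot(psi2, psi1))**2
            / (np.vdot(psi1, psi1).real * np.vdot(psi2, psi2).real))

# quick sanity check on a small random (non-Hermitian) pair of matrices
rng = np.random.default_rng(0)
H1 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H2 = H1 + 0.05 * rng.normal(size=(4, 4))
psi0 = rng.normal(size=4) + 0j
print(fidelity(H1, H2, psi0, 0.0))   # 1.0
print(fidelity(H1, H2, psi0, 2.0))   # <= 1 by the Cauchy-Schwarz inequality
\end{verbatim}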
Intuitively, one would expect the fidelity decay to be faster as the non-Hermitian gauge phase $h$ is increased, because of the stronger deformation of the energy spectrum for increasing values of $h$ (see Fig.2). Such a behavior is observed, for instance, in quench dynamics, where the initial state ${\boldsymbol \psi}_0$ is an eigenstate of the unperturbed Hamiltonian [see Fig.4(a), to be commented on below]. However, what happens when the system is initially prepared in a state which is not an equilibrium state, i.e. in a superposition of eigenstates of $H_1$? We show here that, for long-range perturbations and for initial excitations confined very far from the region of the skin edge eigenstates, a very counterintuitive behavior can be observed as a result of asymmetric hopping and the non-Hermitian skin effect: the fidelity decay can be {\em decelerated} (rather than accelerated), at least transiently, by increasing the imaginary gauge field $h$, i.e. by bringing the system closer to an EP. A simple physical picture of such a counterintuitive behavior can be gained as follows. Let us consider the time-dependent Schr\"odinger equations for the unperturbed and perturbed Hamiltonians
\begin{eqnarray}
i \partial_t {\boldsymbol \psi}_1 & = & H_1 {\boldsymbol \psi}_1 \\
i \partial_t {\boldsymbol \psi}_2 & = & H_2 {\boldsymbol \psi}_2
\end{eqnarray}
with ${\boldsymbol \psi}_1(0)={\boldsymbol \psi}_2(0)={\boldsymbol \psi}_0$. After setting
\begin{equation}
{\boldsymbol \psi}_2(t)={\boldsymbol \psi}_1(t)+ \delta {\boldsymbol \psi}(t)
\end{equation}
the evolution equation for the variation $\delta {\boldsymbol \psi}$ reads
\begin{equation}
i \partial_t \delta {\boldsymbol \psi}=H_2 \delta {\boldsymbol \psi} +\epsilon P {\boldsymbol \psi}_1
\end{equation}
with $\delta {\boldsymbol \psi}(0)=0$. Expanding the fidelity in a power series of the variation $\delta {\boldsymbol \psi}(t)$, up to second order one obtains
\begin{equation}
\mathcal{F}(t) \simeq 1-\frac{\langle \delta {\boldsymbol \psi }| \delta {\boldsymbol \psi } \rangle}{\langle {\boldsymbol \psi }_1 | {\boldsymbol \psi } _1 \rangle}+\frac{| \langle \delta {\boldsymbol \psi }| {\boldsymbol \psi }_1 \rangle |^2}{\langle {\boldsymbol \psi }_1 | {\boldsymbol \psi } _1 \rangle ^2 }.
\end{equation}
\begin{figure}
\caption{\label{fig3} Schematic of the dynamics: (a) the initial excitation ${\boldsymbol \psi}_0$, localized near the right edge of the chain, spreads and propagates backward; (b) the perturbation acts as two sources, $\epsilon ({\boldsymbol \psi}_1)_N$ and $\epsilon ({\boldsymbol \psi}_1)_1$, placed at the edge sites $n=1$ and $n=N$, which generate the correction $\delta {\boldsymbol \psi}(t)$.}
\end{figure}
For the sake of definiteness, let us assume the long-range perturbation defined by Eq.(14); a similar analysis, however, holds for any perturbation with matrix elements $P_{n,m}$ nonvanishing only for $|n-m|$ large enough. Let us also assume that the initial state ${\boldsymbol \psi }_0$ is localized on the right side of the chain, so that $({\boldsymbol \psi }_0)_n=0$ for small values of the index $n$; see Fig.3(a) for a schematic. To provide an estimate of the fidelity $\mathcal{F}$, it is sufficient to determine the qualitative behavior of the solutions ${\boldsymbol \psi}_1$ and $ \delta {\boldsymbol \psi}$ of Eqs.(20) and (23).
Clearly, in the unperturbed system with Hamiltonian $H_1$ the initial excitation ${\boldsymbol \psi}_0$, localized on the few right edge sites of the chain, spreads and propagates backward with a speed which is ultimately limited by the largest hopping rate; see Fig.3(a) for a schematic. To provide a qualitative behavior of the solution $\delta {\boldsymbol \psi}(t)$, note that, since $\delta {\boldsymbol \psi}(0)=0$, in the early stage of the dynamics one can take $H_{2}=H_1+ \epsilon P \simeq H_1$ on the right-hand side of Eq.(23), i.e. one can assume
\begin{equation}
i \partial_t \delta {\boldsymbol \psi} \simeq H_1 \delta {\boldsymbol \psi} +\epsilon P {\boldsymbol \psi}_1
\end{equation}
Since $(P {\boldsymbol \psi}_1)_n=({\boldsymbol \psi}_1)_N \delta_{n,1}+({\boldsymbol \psi}_1)_1 \delta_{n,N}$, the solution $\delta {\boldsymbol \psi}(t)$ is basically determined by the propagation in the same chain described by $H_1$, but with two sources $\epsilon ({\boldsymbol \psi}_1)_N$ and $\epsilon ( {\boldsymbol \psi}_1)_1 $ that create the excitation. The two sources are placed at the left and right edge sites $n=1$ and $n=N$, respectively; see Fig.3(b) for a schematic. Since the wave packet ${\boldsymbol \psi}_1(t)$ is initially localized near the $n=N$ edge of the chain and propagates at a finite speed along the chain, in the early stage of the dynamics one can assume $({\boldsymbol \psi}_1)_1 \simeq 0$, so that the only source for $\delta {\boldsymbol \psi}(t)$ is located at the left edge. This means that in the early stage of the dynamics, i.e. as long as the initial excitation ${\boldsymbol \psi}_1(t)$ has not yet spread from the right to the left boundary of the chain and the correction $ \delta {\boldsymbol \psi}(t)$ has not yet spread from the left to the right boundary, one can assume $\langle {\boldsymbol \psi}_1(t) | \delta {\boldsymbol \psi}(t) \rangle \simeq 0$ in Eq.(24), so that
\begin{equation}
\mathcal{F}(t) \simeq 1-\frac{\langle \delta {\boldsymbol \psi }| \delta {\boldsymbol \psi } \rangle}{\langle {\boldsymbol \psi }_1 | {\boldsymbol \psi } _1 \rangle}.
\end{equation}
In the Hermitian limit $h=0$, the norm $\langle {\boldsymbol \psi }_1 | {\boldsymbol \psi } _1 \rangle=\langle {\boldsymbol \psi }_0 | {\boldsymbol \psi } _0 \rangle=1$ is conserved, whereas $\langle \delta {\boldsymbol \psi }| \delta {\boldsymbol \psi } \rangle$ increases from zero because of the source term in Eq.(25): hence the fidelity decays like $\mathcal{F}(t)=1-{\langle \delta {\boldsymbol \psi }| \delta {\boldsymbol \psi } \rangle}$. For a non-vanishing imaginary gauge field $h$, the wave packet ${\boldsymbol \psi}_1(t)$ propagates faster and is exponentially {\em amplified} while propagating backward along the chain \cite{R53,R54}. Hence the norm $\langle {\boldsymbol \psi }_1 | {\boldsymbol \psi } _1 \rangle$ is not conserved and turns out to be an almost exponentially-increasing function of time. Likewise, the wave packet $\delta {\boldsymbol \psi}(t)$ created by the source on the left edge is exponentially {\em attenuated} while propagating forward along the chain \cite{R53,R54}, so that the norm $\langle \delta {\boldsymbol \psi }| \delta {\boldsymbol \psi } \rangle$ for $h>0$ takes smaller values as compared to the ones for $h=0$. This means that the fidelity $\mathcal{F}(t)$ is expected to {\em increase} when $h$ is increased from zero, i.e. an increase of the imaginary gauge field leads to a {\em deceleration} (rather than an acceleration) of the fidelity decay, even though the energy spectrum of the Hamiltonian $H_2$ undergoes a stronger deformation as $h$ is increased.\\ We checked the predictions of the theoretical analysis by numerically computing the fidelity decay for the nearest-neighbor tight-binding Hamiltonians $H_1$ and $H_2$, defined by Eqs.(6) and (15) with $N=50$ sites, for two different initial conditions ${\boldsymbol \psi}_0$. The numerical results are shown in Fig.4 for a vanishing ($h=0$) and non-vanishing ($h=0.3$) gauge field. In Fig.4(a) the initial state ${\boldsymbol \psi}_0$ is the eigenstate ${\bf v}_1$ of $H_1$, given by Eq.(17) with $n=1$. Clearly, a non-vanishing imaginary gauge field accelerates the decay of the fidelity [compare left and right panels in Fig.4(a)]. This is an expected result, because the gauge field effectively enhances the strength of the perturbation and greatly modifies the eigenenergies of the perturbed Hamiltonian $H_2$ as compared to the spectrum of $H_1$, as discussed in the previous section. Figure 4(b) shows the typical decay behavior of the fidelity for the initial state $({\boldsymbol \psi}_0)_n=\delta_{n,N-1}$, corresponding to the initial excitation of a site close to the right edge site of the chain. In this case, one can see that in the early stage the fidelity decay is {\em decelerated} (rather than accelerated) by a non-vanishing imaginary gauge field $h$, until an abrupt drop of the fidelity is observed at the time $ t_1 \simeq 24 / \kappa \simeq N/v_g$, corresponding to the transit time of the excitation along the chain (the group velocity being given by $v_g= 2 \kappa \cosh (h)$ \cite{R53}). The time $t_1$ of the fidelity drop can be increased by increasing the length $N$ of the chain.
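A self-contained numerical sketch of this experiment is given below (in Python; the value of $\epsilon$ is an assumption, since the value used in Fig.4 is not restated here). According to the analysis above, the printed fidelities should stay closer to one for $h=0.3$ than for $h=0$ at times below the transit time $t_1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, kappa, eps = 50, 1.0, 0.02     # illustrative values; eps is an assumption

def pair(h):
    """Unperturbed H1 [Eq. (6)] and perturbed H2 [Eq. (15)]."""
    H1 = (kappa * np.exp(h) * np.diag(np.ones(N - 1), 1)
          + kappa * np.exp(-h) * np.diag(np.ones(N - 1), -1))
    H2 = H1.copy()
    H2[0, N - 1] = H2[N - 1, 0] = eps
    return H1, H2

psi0 = np.zeros(N, dtype=complex)
psi0[N - 2] = 1.0                 # single-site excitation at n = N-1

for h in (0.0, 0.3):
    H1, H2 = pair(h)
    for t in (5.0, 15.0, 30.0):
        p1 = expm(-1j * H1 * t) @ psi0
        p2 = expm(-1j * H2 * t) @ psi0
        F = (abs(np.vdot(p2, p1))**2
             / (np.vdot(p1, p1).real * np.vdot(p2, p2).real))
        print(f"h={h}  t={t:4.1f}  F={F:.4f}")
# The analysis above predicts a slower early-time decay for h = 0.3,
# with an abrupt drop only after the transit time t1 ~ N/(2 kappa cosh h).
\end{verbatim}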
\begin{figure}
\caption{\label{fig4} Decay of the fidelity $\mathcal{F}(t)$ for vanishing ($h=0$, left panels) and non-vanishing ($h=0.3$, right panels) imaginary gauge field: (a) initial state ${\boldsymbol \psi}_0={\bf v}_1$ [Eq.(17) with $n=1$]; (b) initial single-site excitation $({\boldsymbol \psi}_0)_n=\delta_{n,N-1}$.}
\end{figure}
\begin{figure}
\caption{\label{fig5} Loschmidt echo $M(t)$ in the Hermitian ($h=0$, left panels) and pseudo-Hermitian ($h>0$, right panels) cases for the initial single-site excitation $({\boldsymbol \psi}_0)_n=\delta_{n,1}$, with the phase defect placed at (a) the right edge site $n_0=N$ and (b) the left edge site $n_0=1$.}
\end{figure}
\begin{figure}
\caption{\label{fig6} Loschmidt echo $M(t)$ for the initial single-site excitation at the right edge site, $({\boldsymbol \psi}_0)_n=\delta_{n,N}$: in this case the imaginary gauge field degrades the Loschmidt echo.}
\end{figure}
\section{Imperfect time reversal and Loschmidt echo}
The reversibility of dynamical classical and quantum systems is generally captured by the Loschmidt echo, which provides a quantitative measure of the sensitivity to perturbations of the backward temporal evolution
of the system \cite{R46,R48}. In typical Loschmidt echo setups, an initial state ${\boldsymbol \psi}_0$ is forward propagated for a given time $t$ with the Hamiltonian $H_1$ and then imperfectly time reversed and propagated backward by the Hamiltonian $H_2 \simeq H_1$, resulting in a final state
\begin{equation}
{\boldsymbol \psi}_f=\exp(iH_2 t) \exp(-i H_1 t) {\boldsymbol \psi}_0.
\end{equation}
The Loschmidt echo $M(t)$ is defined as the overlap between the original state ${\boldsymbol \psi}_0$ and the final state ${\boldsymbol \psi}_f$ after imperfect time reversal, i.e.
\begin{equation}
M(t)=\frac{\left| \langle {\boldsymbol \psi}_0 | {\boldsymbol \psi}_f \rangle \right|^2}{ \langle {\boldsymbol \psi}_f | {\boldsymbol \psi}_f \rangle \langle {\boldsymbol \psi}_0 | {\boldsymbol \psi}_0 \rangle}
\end{equation}
with $M(t) \leq 1$ and $M(t)=1$ only for perfect time reversal. { Clearly, in the Hermitian case the norm is conserved and $H_2^{\dag}=H_2$, so that from Eqs.(19), (27) and (28) it follows that $\mathcal{F}(t)=M(t)$, i.e. the Loschmidt echo and the fidelity coincide. This means that the sensitivity of the dynamical evolution to perturbations of the Hamiltonian can be obtained either from the imperfect time reversal of the dynamics of a single system or from a comparison of the different dynamical evolutions of two systems prepared in the same initial state but evolving under different (unperturbed and perturbed) Hamiltonians. In the non-Hermitian case, the two quantities $\mathcal{F}(t)$ and $M(t)$ do not coincide anymore, since time reversal requires exchanging `gain' and `loss' terms in the Hamiltonian. Nonetheless, both quantities provide a measure of the sensitivity to perturbations of the dynamical evolution of a non-Hermitian Hamiltonian in the two different physical settings.} A main result that we wish to highlight in this section is that, despite the stronger sensitivity to perturbations, for certain initial states an imaginary gauge field $h$ can result in an {\em increase} (rather
than a decrease) of the Loschmidt echo after imperfect time reversal, as compared to the Hermitian limit $h=0$. To illustrate such a behavior, let us focus our attention on a tight-binding chain with nearest-neighbor hopping rate $\kappa$, defined by the Hatano-Nelson Hamiltonian (6). In this case time reversal, after forward propagation with the Hamiltonian $H_1$ for a time interval $t$, is simply obtained by flipping the sign of the hopping rate $\kappa$ at time $t$. Note that, since the gauge field is imaginary, time reversal does not require flipping the sign of $h$ in the Hamiltonian. Time reversal for this kind of Hamiltonian in the Hermitian limit $h=0$ has been suggested and experimentally demonstrated in a series of recent works for cold atoms in optical lattices and for photons in evanescently-coupled optical waveguide arrays \cite{R71,R72,R73,R74,R75,R76,R77}. As discussed in such previous works, the sign flip of the hopping amplitude $\kappa$ is obtained by fast half-cycle Bloch oscillations or by imprinting a sudden $\pi$ phase shift on alternating (adjacent) amplitudes. In the presence of an imaginary gauge field, time reversal can be obtained using the same method. Let us assume, for example, that imperfect time reversal is obtained by imprinting a sudden site-dependent phase shift $\phi_n$ to the amplitudes $({\boldsymbol \psi})_n$, described by the operator $\mathcal{P}$ as follows
\begin{equation}
({\mathcal P} {\boldsymbol \psi})_n=({\boldsymbol \psi})_n \exp(i \phi_n).
\end{equation}
The final state of the system is then given by
\begin{equation}
{\boldsymbol \psi}_f= \exp(-i H_1 t) {\mathcal{P}} \exp(-i H_1 t) {\boldsymbol \psi}_0
\end{equation}
For the staggered phase $\phi_n= n \pi$, one has perfect time reversal: in this case ${\mathcal P} H_1 {\mathcal P}^{-1}=-H_1$, so that $ \exp(-i H_1 t) {\mathcal{P}} \exp(-i H_1 t)= {\mathcal P} \exp(i H_1 t) \exp(-i H_1 t)={\mathcal P}$, which acts trivially (up to an inessential phase) on a single-site initial state. In practice, especially for a large number of sites $N$, the phase shifts imprinted on the amplitudes $({\boldsymbol \psi})_n$ can deviate from their target values, resulting in an effectively imperfect time reversal and a lowering of the Loschmidt echo. For example, let us assume that a wrong phase, off by $\pi/2$ from the target value, is imprinted at the lattice site $n=n_0$, i.e. let us assume
\begin{equation}
\phi_n= n \pi -(\pi/2) \delta_{n,n_0}.
\end{equation}
Let us consider an initial excitation ${\boldsymbol \psi}_0$ in a single site of the chain located at the left edge, i.e. $({\boldsymbol \psi}_0)_n=\delta_{n,1}$. Figure 5 shows a typical behavior of the Loschmidt echo $M(t)$ in the Hermitian ($h=0$) and pseudo-Hermitian ($h>0$) cases for two different values of $n_0$, i.e. of the position of the phase-shift defect in the chain. Clearly, in both cases the Loschmidt echo turns out to be larger for a non-vanishing imaginary gauge field. Such a result can be physically explained on the basis of the non-Hermitian skin effect in the Hatano-Nelson model with open boundary conditions. Let us consider, for instance, the case of Fig.5(a), corresponding to the phase defect at the right edge site $n_0=N$ of the chain. In the Hermitian limit $h=0$ [left panel of Fig.5(a)] the Loschmidt echo remains very close to one for times $t$ smaller than the characteristic time $t \simeq t_1$ required for the initial excitation to reach the right boundary of the chain: in fact, for $t<t_1$ the initial excitation spreads along the chain but it is refocused before it can reach the right edge site, and thus the presence of the phase defect at the right edge site is not probed. On the other hand, for times $t$ larger than $t_1$ the excitation reaches the right edge site and the time reversal is thus imperfect, as one can see from the rather abrupt drop of the Loschmidt echo. Let us consider now the non-Hermitian case [right panel in Fig.5(a)].
Owing to the non-Hermitian skin effect, a positive imaginary gauge field $h$ pushes the excitation toward the left edge site, so that even for times larger than $t_1$ the right edge site with the phase defect is only weakly probed by the excitation, and thus the time reversal process is less sensitive to the phase defect. This is clearly shown in the right panel of Fig.5(a), where the drop of the Loschmidt echo near the time $t \sim t_1$ is much smaller than in the Hermitian case and the interference (oscillatory) fringes observed in the Hermitian case are washed out. A similar simple physical explanation can be given for the increase of the Loschmidt echo observed for a non-vanishing imaginary gauge field in the case of Fig.5(b), where the phase defect is placed at the left edge site of the chain. It should be noted that for a different initial condition the Loschmidt echo can be degraded by the imaginary gauge phase. For example, if the system is initially excited in a single site (as in Fig.5) but {\em on the right edge}, i.e. $({\boldsymbol \psi}_0)_n=\delta_{n,N}$, the imaginary gauge field degrades the Loschmidt echo (see Fig.6). The reason thereof is that in this case the non-Hermitian skin effect amplifies the excitation that probes the phase-defect site, so that the imperfection of the time reversal process is more effective when $h>0$. In some sense, one can say that the imaginary gauge field introduces a kind of {\em squeezing} in the dynamical stability of the system: the imaginary gauge field enhances time reversibility for some initial conditions (those with stronger localization at the left edge of the lattice, where the gauge field pushes the excitation) but degrades time reversibility for other initial conditions (those with stronger localization at the right edge).
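The Loschmidt echo protocol of Eqs.(30) and (31) can be simulated along the same lines. The following Python sketch (illustrative parameters; the staggered target phases $\phi_n=n\pi$ and the defect of Eq.(31) are implemented literally, with the defect placed at the right edge) computes $M(t)$ for a few times:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, kappa, n0 = 50, 1.0, 50          # illustrative values; defect at the right edge
psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                        # single-site excitation at the left edge

def loschmidt(h, t):
    H1 = (kappa * np.exp(h) * np.diag(np.ones(N - 1), 1)
          + kappa * np.exp(-h) * np.diag(np.ones(N - 1), -1))
    phi = np.pi * np.arange(1, N + 1)       # staggered 'perfect' phases
    phi[n0 - 1] -= np.pi / 2                # imperfect imprint at site n0, Eq. (31)
    P = np.diag(np.exp(1j * phi))
    U = expm(-1j * H1 * t)
    psif = U @ (P @ (U @ psi0))             # Eq. (30) with the phase defect
    return (abs(np.vdot(psi0, psif))**2
            / (np.vdot(psif, psif).real * np.vdot(psi0, psi0).real))

for h in (0.0, 0.3):
    print(h, [round(loschmidt(h, t), 3) for t in (10.0, 20.0, 30.0)])
# With the defect at the right edge, the echo is expected to stay higher
# for h > 0, since the skin effect keeps the excitation away from site n0.
\end{verbatim}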
\section{Conclusion} The dynamical behavior of non-Hermitian classical and open quantum systems near exceptional points has attracted great interest in recent years, featuring some rather unique effects with no counterpart in Hermitian systems. A rather general result is that, for a non-Hermitian Hamiltonian that depends on a control parameter $h$ and shows an EP, the dynamical behavior of the system becomes much more sensitive to perturbations and/or initial conditions as the system is operated closer to the EP. The stability of the dynamical behavior of Hermitian systems is usually described by the Loschmidt echo and the fidelity decay. Such quantities are commonly introduced in problems related to quantum chaos, ergodicity, decoherence, non-equilibrium dynamics of many-body systems, etc. However, so far they
have been rarely considered as a measure of dynamical stability in non-Hermitian systems \cite{R38uff,R38tris}. Here we have used the fidelity and the Loschmidt echo to investigate the dynamical stability of certain non-Hermitian models and
disclosed a rather surprising result. Owing to the strong sensitivity of non-Hermitian Hamiltonians near an EP, one would naively expect a less stable dynamical behavior, signaled by a degradation of the Loschmidt echo and a faster fidelity decay. Contrary to such an expectation, in this work we have shown that, for a class of pseudo-Hermitian Hamiltonians, a system operated closer to an EP can show a {\em deceleration} (rather than an acceleration) of the fidelity decay and an enhanced Loschmidt echo under a broad class of perturbations and initial excitations. We have illustrated such a counterintuitive effect by considering non-Hermitian transport in the Hatano-Nelson model, where an imaginary gauge field introduces asymmetric transport in the lattice, and provided a simple physical explanation of the main results on the basis of the so-called non-Hermitian skin effect. The present study discloses unexpected features in the dynamical behavior of certain non-Hermitian systems, and should motivate further investigations on the dynamical stability of non-Hermitian classical and quantum systems. For example, it would be interesting to investigate dynamical stability in other non-Hermitian models, such as $\mathcal{PT}$-symmetric models and non-Hermitian many-particle systems, where effects such as particle statistics and correlations should play a major role \cite{fine}, as well as dynamical stability in the presence of two or more EPs in parameter space, and the interplay between topology and dynamical stability \cite{R38uff}.
\appendix
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\section{Appendix}
{ In this Appendix we calculate the exact form of the eigenvalues and corresponding eigenvectors of the matrix $H_2$ given by Eq.(15) in the main text. Let $\mathbf{v}^{\prime}=(c_1,c_2,...,c_N)^T$ be an eigenvector of $H_2$ with eigenvalue $E^{\prime}$, i.e.
\begin{equation}
H_2 \mathbf{v}^{\prime}=E^{\prime} \mathbf{v}^{\prime}.
\end{equation}
We look for a solution of the eigenvector elements $c_n$ in the form of counter-propagating waves, i.e. we make the Ansatz
\begin{equation}
c_n=\{ A \exp [iq(n-1)]+B \exp[-iq(n-1)] \} \exp[-h(n-1)]
\end{equation}
($n=1,2,3,...,N$). The corresponding eigenvalue is given by
\begin{equation}
E^{\prime}=2 \kappa \cos (q)
\end{equation}
as one can readily prove after substitution of Eq.(A.2) into Eq.(A.1) and taking $n=2,3,...,N-1$. In the above equations, $q$ is a complex parameter that should be determined from a solvability condition, and $A$,$B$ are the amplitudes of the two counter-propagating waves in the lattice. By writing down Eq.(A.1) for $n=1$ and $n=N$ and using Eqs.(A.2) and (A.3), the following homogeneous equations for the amplitudes $A$ and $B$ are obtained
\begin{eqnarray}
\left( \frac{\kappa}{y} - {\epsilon \rho}{ y^{N-1}} \right) A +\left( \kappa y- \frac{\epsilon \rho}{y^{N-1}} \right) B=0\\
\left( \kappa y^N - \frac{\epsilon}{\rho} \right) A +\left( \frac{\kappa}{y^{N}} -\frac{\epsilon}{\rho} \right) B=0
\end{eqnarray}
where we have set
\begin{eqnarray}
y & \equiv & \exp(iq) \\
\rho & \equiv & \exp[-h(N-1)].
\end{eqnarray}
}
A non-vanishing solution for $A$ and $B$ requires the vanishing of the determinant in Eqs.(A.4) and (A.5). This solvability condition implies that $y$ is a root of the following polynomial $P(y)$ of order $2(N+1)$:
\begin{eqnarray}
P(y) & = & y^{(2N+2)}-\frac{\epsilon^2}{\kappa^2}y^{2N}-\frac{2 \epsilon}{\kappa}\cosh[(N-1)h] y^{N+2}+ \nonumber \\
& + & \frac{2 \epsilon}{\kappa} \cosh[(N-1)h] y^N+\frac{\epsilon^2}{\kappa^2}y^2-1 \equiv \sum_{n=0}^{2N+2} a_ny^n.
\end{eqnarray}
Note that $P(y)$ given by Eq.(A.8) belongs to the class of self-inversive (anti-palindromic) polynomials \cite{ruff1,ruff2}, i.e. one has $a_{2N+2-n}=-a_{n}$ for $n=0,1,...,N+1$. Two roots of $P(y)$ are given by $y= \pm 1$, as can be readily checked from the form of Eq.(A.8), so that $P(y)$ can be factorized as $P(y)=(y^2-1)Q(y)$, where $Q(y)$ is a self-inversive (palindromic) polynomial of order $2N$. The roots $y= \pm 1$ cannot be accepted, since they would correspond to a vanishing solution ($A=B=0$) of the eigenvector. The eigenvalues $E^{\prime}$ are thus obtained from Eq.(A.3), i.e. from the relation
\begin{equation}
E^{\prime}= \kappa \left(y+\frac{1}{y} \right)
\end{equation}
where $y$ is one of the $2N$ roots of the self-inversive polynomial $Q(y)$. Note that, if $y$ is a root of $Q$, then $1/y$ is also a root of $Q$, so that according to Eq.(A.9) the number of distinct eigenvalues $E^{\prime}$ of $H_2$ is $N$, as it should be. Note also that the energy $E^{\prime}$ is real when $|y|=1$, so that the spectrum of $H_2$ is entirely real whenever $|y|=1$ for every root of $Q(y)$. For $\epsilon=0$, one readily obtains for $E^{\prime}$ the unperturbed values $E_n$ given by Eq.(16) in the main text. All such eigenvalues correspond to $|y|=1$, i.e. all the roots of the self-inversive polynomial $Q(y)$ lie on the unit circle (they are unimodular). As $\epsilon$ is increased, the roots of $Q(y)$ initially remain on the unit circle, implying that the energy spectrum $E^{\prime}$ remains entirely real. However, the positions of the roots $y$ on the unit circle vary as $\epsilon$ is increased from zero, indicating that the perturbation changes (mixes) all the unperturbed energies. The condition for the
self-inversive polynomial $Q(y)$ [or equivalently $P(y)$] to have all of its roots on the unit circle, i.e. for the spectrum of $H_2$ to remain real, is given by certain general theorems of number theory (see, for instance, \cite{ruff1,ruff2}). According to the theorem stated in Ref.\cite{ruff2}, an upper bound on $\epsilon$ is obtained from the inequality
\begin{equation}
\left( \frac{\epsilon}{\kappa} \right)^2+2 \left( \frac{\epsilon}{\kappa} \right) \cosh [(N-1)h] < 1.
\end{equation}
Note that, for a sufficiently long chain such that $\cosh[(N-1)h] \gg 1$, the first term on the left-hand side of Eq.(A.10) can be neglected, so that one obtains the following bound on the perturbation strength $\epsilon$ ensuring an entirely real energy spectrum of $H_2$
\begin{equation}
\epsilon < \frac{\kappa}{2 \cosh [(N-1)h]} \equiv \epsilon_c.
\end{equation}
Numerical computation of the eigenvalues shows that, as $\epsilon$ approaches the critical value $\epsilon_c$ from below, two energies $E^{\prime}$ on the real axis coalesce and, as $\epsilon$ is further increased above $\epsilon_c$, the real energies bifurcate into complex-conjugate energies via an EP. As $\epsilon$ is further increased, other pairs of real energies coalesce and bifurcate into complex-conjugate energies, until the entire energy spectrum becomes complex.
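As a numerical cross-check, the roots of the polynomial of Eq.(A.8), filtered of the spurious roots $y=\pm 1$ and mapped through Eq.(A.9), can be compared with the eigenvalues obtained by direct diagonalization of $H_2$. A minimal Python sketch (the parameter values are illustrative assumptions) is:
\begin{verbatim}
import numpy as np

N, kappa, h, eps = 12, 1.0, 0.2, 0.05   # illustrative values (assumptions)

# Coefficients a_n of P(y), Eq. (A.8), stored from degree 0 up to degree 2N+2
a = np.zeros(2 * N + 3)
a[2 * N + 2] = 1.0
a[2 * N] = -(eps / kappa) ** 2
a[N + 2] = -(2 * eps / kappa) * np.cosh((N - 1) * h)
a[N] = (2 * eps / kappa) * np.cosh((N - 1) * h)
a[2] = (eps / kappa) ** 2
a[0] = -1.0

y = np.roots(a[::-1])                   # np.roots expects highest degree first
y = y[(np.abs(y - 1) > 1e-6) & (np.abs(y + 1) > 1e-6)]   # drop y = +/-1
E_poly = kappa * (y + 1 / y)            # Eq. (A.9); each eigenvalue appears twice

# Direct diagonalization of the perturbed Hamiltonian H2 of Eq. (15)
H2 = (kappa * np.exp(h) * np.diag(np.ones(N - 1), 1)
      + kappa * np.exp(-h) * np.diag(np.ones(N - 1), -1))
H2[0, N - 1] = H2[N - 1, 0] = eps
E_diag = np.linalg.eigvals(H2)

print(all(np.min(np.abs(E_poly - E)) < 1e-6 for E in E_diag))   # True
\end{verbatim}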
\end{document}
\begin{document}
\title[Left or right centralizers on $ \star $-algebras]{Left or right centralizers on $ \star $-algebras through orthogonal elements }
\author{ Hamid Farhadi}
\thanks{{\scriptsize
\hskip -0.4 true cm \emph{MSC(2020)}: 15A86; 47B49; 47L10; 16W10.
\newline \emph{Keywords}: Left centralizer, right centralizer, $ \star $-algebra, orthogonal element, zero product determined, standard operator algebra.\\}}
\address{Department of Mathematics, Faculty of Science, University of Kurdistan, P.O. Box 416, Sanandaj, Kurdistan, Iran}
\email{h.farhadi@uok.ac.ir}
\begin{abstract}
In this paper we consider the problem of characterizing linear maps on special $ \star $-algebras behaving like left or right centralizers at orthogonal elements and obtain some results in this regard.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this paper all algebras and vector spaces will be over the complex field $ \mathbb{C} $. Let $ \mathcal{A} $ be an algebra. Recall that a linear (additive) map $ \varphi : \mathcal{A} \to \mathcal{A} $ is said to be a \textit{right $($left$)$ centralizer} if $ \varphi (ab) = a \varphi(b)
(\varphi(ab) = \varphi(a)b) $ for each $a, b \in \mathcal{A}$. The map $ \varphi $ is called a \textit{centralizer} if it is both a left centralizer and a right centralizer. In the case that $ \mathcal{A} $ has a unity $1$, $ \varphi : \mathcal{A} \to \mathcal{A} $ is a right (left) centralizer if and only if $ \varphi $ is of the form $ \varphi (a) = a \varphi(1) ( \varphi(a) = \varphi(1)a)$ for all $a \in \mathcal{A}$. Also $ \varphi $ is a centralizer if and only if $ \varphi (a) = a \varphi(1) = \varphi(1)a$ for each $a \in \mathcal{A}$. The notion of centralizer appears naturally in $C^{*}$-algebras. In ring theory it is more common to work with module homomorphisms. We refer the reader to \cite{gh1, gh2, vuk} and references therein for results concerning centralizers on rings and algebras.
In recent years, several authors have studied linear (additive) maps that behave like homomorphisms, derivations or right (left) centralizers when acting on special products (for instance, see \cite{barar, bre, fad0, fad1, fad2} and the references therein). An algebra $ \mathcal{A} $ is called \textit{zero product determined} if for every linear space $\mathcal{X}$ and every bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{X}$ the following holds: if $\phi(a,b)=0$ whenever $ab=0$, then there exists a linear map $T : \mathcal{A}^{2}\rightarrow \mathcal{X}$ such that $\phi(a,b)=T(ab)$ for each $a,b\in \mathcal{A}$. If $\mathcal{A}$ has unity $1$, then $\mathcal{A}$ is zero product determined if and only if for every linear space $\mathcal{X}$ and every bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{X}$ the following holds: if $\phi(a,b)=0$ whenever $ab=0$, then $\phi(a,b)=\phi(ab,1)$ for each $a,b\in \mathcal{A}$. Also, in this case $\phi(a,1)=\phi(1,a)$ for all $a\in \mathcal{A}$. The question of characterizing linear maps through zero products, Jordan products, etc. on algebras can sometimes be effectively solved by considering bilinear maps that preserve certain zero product properties (for instance, see \cite{al, al1, fos, gh3,gh4,gh5,gh6, gh7, gh8}). Motivated by these works, Bre\v{s}ar et al. \cite{bre2} introduced the concept of zero product (Jordan product) determined algebras, which can be used to study linear maps preserving zero products (Jordan products) and derivable (Jordan derivable) maps at the zero point.
\par
Let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear mapping on algebra $ \mathcal{A} $. A tempting
challenge for researchers is to determine conditions on a certain set $ \mathcal{S} \subseteq \mathcal{A} \times \mathcal{A} $ to guarantee that the property
\begin{equation} \label{1}
\varphi (ab) = a \varphi(b)\quad \big (\varphi(ab) = \varphi(a)b\big), \text{ for every } (a, b) \in \mathcal{S} ,
\end{equation}
implies that $ \varphi $ is a (right, left) centralizer. Some particular subsets $ \mathcal{S} $ give rise to precise
notions studied in the literature. For example, given a fixed element $z \in \mathcal{A}$, a
linear map $ \varphi : \mathcal{A} \to \mathcal{A} $ satisfying \eqref{1} for the set $\mathcal{S}_{z} = \{ (a, b) \in \mathcal{A} \times \mathcal{A} : ab = z \} $ is called a \textit{centralizer at $z$}. Motivated by \cite{barar, fad1, fad2, gh7, gh8}, in this paper we consider the problem of characterizing linear maps on special $ \star $-algebras behaving like left or right centralizers at orthogonal elements, for several types of orthogonality conditions.
\par
In particular, we consider the following conditions on a linear map $ \varphi : \mathcal{A} \to \mathcal{A} $, where $ \mathcal{A} $ is a zero product determined $ \star $-algebra with unity, or $ \mathcal{A} $ is a unital standard operator algebra on a Hilbert space $H$ that is closed under the adjoint operation:
\[ a, b \in \mathcal{A} , a b^\star =0 \Longrightarrow a \varphi(b)^\star = 0 ; \]
\[ a, b \in \mathcal{A} , a^\star b =0 \Longrightarrow \varphi(a)^\star b = 0. \]
Let $H$ be a Hilbert space. We denote by $B(H)$ the algebra of all bounded linear operators on $H$, and by $F(H)$ the algebra of all finite-rank operators in $B(H)$. Recall that a \textit{standard operator algebra} is any subalgebra of $B(H)$ which contains $F(H)$. We shall denote the identity operator of $B(H)$ by $I$.
\section{Main results}
We first characterize the centralizers at orthogonal elements on unital zero product determined $ \star $-algebras.
\begin{thm} \label{tc}
Let $ \mathcal{A} $ be a zero product determined $ \star $-algebra with unity $1$ and let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear map. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)]
$\varphi$ is a left centralizer;
\item[(ii)]
$ a, b \in \mathcal{A} , a b^\star =0 \Longrightarrow a \varphi(b)^\star = 0 $.
\end{enumerate}
\end{thm}
\begin{proof}
$ (i) \Rightarrow (ii) $ Since $\mathcal{A}$ is unital, it follows that $\varphi(a) = \varphi(1)a$ for each $a\in \mathcal{A}$. If $ a b^\star =0$, then
\[a \varphi(b)^\star=a(\varphi(1)b)^{\star}=a b^\star\varphi(1)^{\star} =0. \]
So (ii) holds. \\
$ (ii) \Rightarrow (i) $ Define $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ by $\phi(a,b)=a\varphi(b^{\star})^{\star}$. It is easily checked that $\phi$ is a bilinear map. If $a,b\in \mathcal{A}$ such that $ab=0$, then $a(b^{\star})^{\star}=0$. It follows from hypothesis that $a\varphi(b^{\star})^{\star}=0$. Hence $\phi(a,b)=0$. Since $\mathcal{A}$ is a zero product determined algebra, it follows that $\phi(a,b)=\phi(ab,1)$ for each $a,b\in \mathcal{A}$. Now we have
\[ a\varphi(b^{\star})^{\star}=ab\varphi(1)^{\star}\]
for each $a,b\in \mathcal{A}$. By letting $a=1$ we get
\[\varphi(b^{\star})^{\star}=b\varphi(1)^{\star} \]
for each $b\in \mathcal{A}$. Thus $\varphi(b^{\star})=\varphi(1)b^{\star}$ for all $b\in \mathcal{A}$, and hence $\varphi(a)=\varphi(1)a$ for all $a\in \mathcal{A}$, i.e. $\varphi$ is a left centralizer.
\end{proof}
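As a concrete illustration of the implication (i) $\Rightarrow$ (ii) in the theorem just proved, one may take $\mathcal{A}=M_3(\mathbb{C})$ with the usual adjoint and $\varphi(a)=\varphi(1)a$ for a fixed matrix $\varphi(1)$. The following Python sketch (an illustrative numerical check, not part of the proof; the specific matrices are chosen here only for illustration) verifies the left centralizer identity and the orthogonality condition:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dim = 3
c = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

def phi(a):
    """Left centralizer on M_3(C): phi(a) = c a = phi(1) a."""
    return c @ a

# phi(ab) = phi(a) b for random a, b
a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
b = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
print(np.allclose(phi(a @ b), phi(a) @ b))          # True

# Orthogonality condition (ii): a b* = 0  =>  a phi(b)* = 0
a = np.diag([1.0, 0.0, 0.0]).astype(complex)
b = np.diag([0.0, 1.0, 1.0]).astype(complex)
print(np.allclose(a @ b.conj().T, 0),                # a b* = 0
      np.allclose(a @ phi(b).conj().T, 0))           # a phi(b)* = 0
\end{verbatim}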
\begin{thm} \label{tc2}
Let $ \mathcal{A} $ be a zero product determined $ \star $-algebra with unity $1$ and let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear map. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)]
$\varphi$ is a right centralizer;
\item[(ii)]
$ a, b \in \mathcal{A} , a^\star b =0 \Longrightarrow \varphi(a)^\star b = 0 $.
\end{enumerate}
\end{thm}
\begin{proof}
$ (i) \Rightarrow (ii) $ Since $\mathcal{A}$ is unital, it follows that $\varphi(a) = a\varphi(1)$ for each $a\in \mathcal{A}$. If $ a^\star b=0$, then
\[ \varphi(a)^\star b=(a\varphi(1))^{\star}b= \varphi(1)^{\star}a^\star b=0. \]
So (ii) holds. \\
$ (ii) \Rightarrow (i) $ Define the bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ by $\phi(a,b)=\varphi(a^{\star})^{\star}b$. If $a,b\in \mathcal{A}$ such that $ab=0$, then $(a^{\star})^{\star}b=0$. By hypothesis $\varphi(a^{\star})^{\star}b=0$. So $\phi(a,b)=0$. Since $\mathcal{A}$ is a zero product determined algebra, it follows that $\phi(a,b)=\phi(ab,1)=\phi(1,ab)$ for each $a,b\in \mathcal{A}$. Now
\[ \varphi(a^{\star})^{\star}b=\varphi(1)^{\star}ab\]
for each $a,b\in \mathcal{A}$. By letting $b=1$ we arrive at
\[\varphi(a^{\star})^{\star}=\varphi(1)^{\star}a \]
for each $a\in \mathcal{A}$. Thus $\varphi(a^{\star})=a^{\star}\varphi(1)$ for all $a\in \mathcal{A}$ and hence $\varphi(a)=a\varphi(1)$ for all $a\in \mathcal{A}$, giving us $\varphi$ is a right centralizer.
\end{proof}
\begin{rem}
Every algebra which is generated by its idempotents is zero product determined \cite{bre3}. So the following algebras are zero product determined:
\begin{itemize}
\item[(i)] Any algebra which is linearly spanned by its idempotents.
By \cite[Lemma 3.2]{hou} and \cite[Theorem 1]{pe}, $B(H)$ is linearly spanned by its idempotents. By \cite[Theorem 4]{pe}, every element in a properly infinite $W^*$-algebra $\mathcal{A}$ is a sum of at most five idempotents. In \cite{mar}, several classes of simple $C^*$-algebras are given which are linearly spanned by their projections.
\item[(ii)] Any simple unital algebra containing a non-trivial idempotent, since these algebras are generated by their idempotents \cite{bre}.
\end{itemize}
Therefore Theorems \ref{tc} and \ref{tc2} hold for $\star$-algebras that satisfy one of the above conditions.
\end{rem}
In the following, we characterize the centralizers at orthogonal elements on unital standard operator
algebras on Hilbert spaces that are closed under the adjoint operation.
\begin{thm}\label{s1}
Let $\mathcal{A}$ be a unital standard operator algebra on a Hilbert space $H$ with $\dim H \geq 2$, such that $\mathcal{A}$ is closed under the adjoint operation. Suppose that $ \varphi : \mathcal{A} \to \mathcal{A} $ is a linear map. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)]
$\varphi$ is a left centralizer;
\item[(ii)]
$ A,B \in \mathcal{A} , AB^\star =0 \Longrightarrow A \varphi(B)^\star = 0 $.
\end{enumerate}
\end{thm}
\begin{proof}
$ (i) \Rightarrow (ii) $ is similar to proof of Theorem \ref{tc}.\\
$ (ii) \Rightarrow (i) $ Define $\psi :\mathcal{A} \rightarrow \mathcal{A}$ by $\psi(A)=\varphi(A^{\star})^{\star}$. Then $\psi$ is a linear map such that
\[ A,B \in \mathcal{A} , AB=0 \Longrightarrow A \psi(B) = 0. \]
Let $P\in \mathcal{A}$ be an idempotent operator of rank one and let $A\in \mathcal{A}$. Then $P (I-P)A = 0$ and $(I-P)P A= 0$, and by assumption, we have
\[P\psi(A)=P\psi(PA) \quad \text{and} \quad \psi(PA)=P\psi(PA) \]
So $\psi(PA)=P\psi(A)$ for all $A\in \mathcal{A}$. By \cite[Lemma 1.1]{bur}, every element $X \in F(H )$ is a linear combination of rank-one idempotents, and so
\begin{equation}\label{e1}
\psi(XA)=X\psi(A)
\end{equation}
for all $X \in F(H )$ and $A\in \mathcal{A}$. By letting $A=I$ in \eqref{e1} we get $\psi(X)=X\psi(I)$ for all $X \in F(H )$. Since $F(H)$ is an ideal in $\mathcal{A}$, it follows that
\begin{equation}\label{e2}
\psi(XA)=XA\psi(I)
\end{equation}
for all $X \in F(H)$ and $A\in \mathcal{A}$. By comparing \eqref{e1} and \eqref{e2}, we see that $X\psi(A)=XA\psi(I)$ for all $X \in F(H)$ and $A\in \mathcal{A}$. Since $F(H)$ is an essential ideal in $B(H)$, it follows that $\psi(A)=A\psi(I)$ for all $A\in \mathcal{A}$. From the definition of $\psi$ we have $\varphi(A^{\star})^{\star}=A\varphi(I)^{\star}$ for all $A\in \mathcal{A}$. Thus $\varphi(A^{\star})=\varphi(I)A^{\star}$ for all $A\in \mathcal{A}$ and hence $\varphi(A)=\varphi(I)A$ for all $A\in \mathcal{A}$. Thus $\varphi$ is a left centralizer.
\end{proof}
\begin{thm}\label{s2}
Let $\mathcal{A}$ be a unital standard operator algebra on a Hilbert space $H$ with $\dim H \geq 2$, such that $\mathcal{A}$ is closed under the adjoint operation. Suppose that $ \varphi : \mathcal{A} \to \mathcal{A} $ is a linear map. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)]
$\varphi$ is a right centralizer;
\item[(ii)]
$ A,B \in \mathcal{A} , A^\star B =0 \Longrightarrow \varphi(A)^\star B = 0 $.
\end{enumerate}
\end{thm}
\begin{proof}
$ (i) \Rightarrow (ii) $ is similar to proof of Theorem \ref{tc2}.\\
$ (ii) \Rightarrow (i) $ Define $\psi :\mathcal{A} \rightarrow \mathcal{A}$ by $\psi(A)=\varphi(A^{\star})^{\star}$. Then $\psi$ is a linear map such that
\[ A,B \in \mathcal{A} , AB=0 \Longrightarrow \psi(A)B = 0. \]
Let $P\in \mathcal{A}$ be an idempotent operator of rank one and let $A\in \mathcal{A}$. Then $AP (I-P) = 0$ and $A(I-P)P = 0$, and by assumption, we arrive at $\psi(AP)=\psi(A)P$ for all $A\in \mathcal{A}$. So
\begin{equation}\label{e3}
\psi(AX)=\psi(A)X
\end{equation}
for all $X \in F(H )$ and $A\in \mathcal{A}$. By letting $A=I$ in \eqref{e3} we have $\psi(X)=\psi(I)X$ for all $X \in F(H )$. Since $F(H)$ is an ideal in $\mathcal{A}$, it follows that
\begin{equation}\label{e4}
\psi(AX)=\psi(I)AX
\end{equation}
for all $X \in F(H)$ and $A\in \mathcal{A}$. By comparing \eqref{e3} and \eqref{e4}, we get $\psi(A)X=\psi(I)AX$ for all $X \in F(H)$ and $A\in \mathcal{A}$. Since $F(H)$ is an essential ideal in $B(H)$, it follows that $\psi(A)=\psi(I)A$ for all $A\in \mathcal{A}$. From the definition of $\psi$ we have $\varphi(A^{\star})^{\star}=\varphi(I)^{\star}A$ for all $A\in \mathcal{A}$. Thus $\varphi(A^{\star})=A^{\star}\varphi(I)$ for all $A\in \mathcal{A}$ and hence $\varphi(A)=A\varphi(I)$ for all $A\in \mathcal{A}$, implying that $\varphi$ is a right centralizer.
\end{proof}
Finally, we note that the characterization of left or right centralizers through orthogonal elements can be used to study local left or right centralizers.
\end{document}
\begin{document}
\begin{abstract}
We classify the upper ramification breaks of totally ramified
nonabelian extensions of degree $p^3$ over a local field of
characteristic $p>0$. We find that nonintegral upper ramification
breaks can occur for each nonabelian Galois group of order $p^3$, except the dihedral group of
order $8$.
\end{abstract}
\maketitle
\section{Introduction}
Let $K$ be a local field and let $L/K$ be a finite Galois extension.
The Hasse-Arf theorem states that if $\mathrm{Gal}(L/K)$ is abelian, the
upper ramification breaks of $L/K$ are integers. But what if $\mathrm{Gal}(L/K)$ is
nonabelian? At least for small nonabelian extensions, the status of such basic
invariants should be well-understood.
In this paper, we give a complete description of the sequence of upper
ramification breaks for totally ramified nonabelian extensions of
degree $p^3$ over a local field of characteristic $p$. We find that
the upper ramification breaks are integers for dihedral extensions
but that for the other nonabelian groups there are extensions with
nonintegral upper ramification breaks. In \cite{elder:hooper}, the
author and J. Hooper addressed this classification for quaternion
extensions over a local field of characteristic $0$ and residue
characteristic $2$ that contains the fourth roots of unity.
For abelian extensions,
the sequence of ramification breaks is well understood due
to the work of several authors, including Miki and Thomas \cite{miki,thomas}.
\subsection{Nonabelian groups of order $p^3$}
\begin{equation}\label{gps}\arraycolsep=1.4pt\def\arraystretch{1.5}
\begin{array}{lrcl}
p=2:&Q_8&=&\langle\sigma_1,\sigma_2:\sigma_1^4=1,\sigma_1^2=\sigma_2^2=[\sigma_1,\sigma_2]\rangle,\\
&D_8&=&\langle\sigma_1,\sigma_2:\sigma_1^4=\sigma_2^2=1,\sigma_1^2=[\sigma_1,\sigma_2]\rangle,\\
p>2:&H(p^3)&=&\langle\sigma_1,\sigma_2:\sigma_1^p=\sigma_2^p=[\sigma_1,\sigma_2]^p=1,[\sigma_1,\sigma_2]\in Z(H(p^3))\rangle,\\
&M(p^3)&=&\langle\sigma_1,\sigma_2:\sigma_1^{p^2}=\sigma_2^p=1,
\sigma_1^p=[\sigma_1,\sigma_2]\rangle.
\end{array}\end{equation}
In all cases, the nonabelian group $G$ of order $p^3$ is
generated by two elements denoted by $\sigma_1,\sigma_2$, whose
commutator
$\sigma_3=[\sigma_1,\sigma_2]=\sigma_1^{-1}\sigma_2^{-1}\sigma_1\sigma_2$
generates the group's center $Z(G)$.
\begin{remark}\label{group-pres}
Replacing $\sigma_1$ with $\sigma_1'=\sigma_1\sigma_2^{-a}$, $a\in \mathbb Z$
does not change any of these group presentations.
\end{remark}
The first two groups are recognizable as the quaternion and dihedral group, respectively. The third group is the Heisenberg group modulo $p$ and can be expressed in terms of matrices
\[H(p^3)\cong\left\{\begin{bmatrix}
1&a&b\\
0&1&c\\
0&0&1\end{bmatrix}: a,b,c\in \mathbb F_p\right\}\]
with entries in the field of $p$ elements. The fourth group can also be expressed in terms of matrices
\[M(p^3)\cong\left\{\begin{pmatrix} a&b\\0&1\end{pmatrix}:a,b\in \mathbb Z/(p^2), a\equiv 1\bmod p\right\}.\]
When $p=2$, note that the presentations of $M(p^3)$ and $D_8$ agree.
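The defining relations of $M(p^3)$ can be verified directly on such matrices. The following Python sketch (the particular generator matrices below are a choice made here for illustration, not taken from the text) checks the relations for $p=3$:
\begin{verbatim}
import numpy as np

p = 3
mod = p * p

def mat(a, b):
    return np.array([[a % mod, b % mod], [0, 1]], dtype=int)

def mul(x, y):
    return (x @ y) % mod

def power(x, n):
    r = np.eye(2, dtype=int)
    for _ in range(n):
        r = mul(r, x)
    return r

def inv(x):
    # inverse of [[a, b], [0, 1]] mod p^2 is [[a^{-1}, -a^{-1} b], [0, 1]]
    a, b = int(x[0, 0]), int(x[0, 1])
    ainv = pow(a, -1, mod)
    return mat(ainv, (-ainv * b) % mod)

# one possible choice of generators (an assumption made for this check)
s1 = mat(1, 1)        # order p^2
s2 = mat(1 - p, 0)    # order p, congruent to 1 mod p

comm = mul(mul(inv(s1), inv(s2)), mul(s1, s2))   # [s1, s2]
print(np.array_equal(power(s1, p * p), np.eye(2, dtype=int)))  # True
print(np.array_equal(power(s2, p), np.eye(2, dtype=int)))      # True
print(np.array_equal(comm, power(s1, p)))                      # True
\end{verbatim}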
\subsection{Local fields}
Throughout this paper, $K$ is a field of characteristic $p$
that is complete with respect to a discrete valuation and has a perfect residue field. Let $K^{\mathrm{sep}}$ be a separable closure of $K$, and for
each finite subextension $L/K$ of $K^{\mathrm{sep}}/K$ let $v_L$ be the
valuation on $K^{\mathrm{sep}}$ normalized so that $v_L(L^{\times})=\mathbb Z$. Let
$\mathcal O_L$ denote the ring of integers of $L$, let $\mathcal M_L$ denote the
maximal ideal of $\mathcal O_L$, and let $\pi_L$ be a uniformizer for $L$.
Let $\mathbb F_p$ be the field with $p$ elements.
\subsection{Ramification breaks}
We specialize the material in \cite[Chapter IV]{serre:local} to
our situation where $L/K$ is a totally ramified, Galois
extension of degree $p^3$. Define the lower ramification subgroups
$G_i\leq G=\mathrm{Gal}(L/K)$ by
\[G_i=\{\sigma\in \mathrm{Gal}(L/K): v_L(\sigma(\pi_L)-\pi_L)\geq i+1\}.\]
A lower ramification break occurs at $b$ if $G_b\supsetneq G_{b+1}$.
[Other authors may use ``jump,'' ``jump number'' or ``break number''
rather than ``break''.] Since the extension is totally ramified and
$G$ is a $p$-group, the lower ramification breaks are positive
integers coprime to $p$. If $[G_b:G_{b+1}]=p^m$, $m\geq 1$, we say
that $b$ occurs with multiplicity $m$. Thus there are 3 breaks
$l_1\leq l_2\leq l_3$ in the lower ramification sequence. The upper
ramification breaks $u_1\leq u_2\leq u_3$ are related to the lower
breaks by $u_1=l_1$ and $u_i-u_{i-1}=(l_i-l_{i-1})/p^{i-1}$ for
$i\in\{2,3\}$. The upper ramification groups $G^x$ for $0<x$ are
defined by setting $G^x=G_{l_1}=G$ for $x\leq u_1$, $G^x=G_{l_i}$ for
$u_{i-1}<x\leq u_i$ for $i=2,3$, and $G^x=\{\mathrm{id}\}$ for $u_3<x$. When
passing from the ramification filtration of a Galois group $G$ to the
ramification filtration of a subgroup $H$, one uses the lower
ramification breaks; namely, $H_i=G_i\cap H$. The upper breaks are
used when passing to the ramification filtration of a quotient group
$G/H$; namely, $(G/H)^i=(G^iH)/H$.
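
For readers who wish to experiment, the following small Python helper (a sketch of ours, not used elsewhere in the paper) converts lower breaks into upper breaks using the relation just stated.
\begin{verbatim}
from fractions import Fraction

def upper_breaks(p, l1, l2, l3):
    """Upper ramification breaks from the lower breaks l1 <= l2 <= l3
    of a totally ramified Galois extension of degree p^3."""
    u1 = Fraction(l1)
    u2 = u1 + Fraction(l2 - l1, p)
    u3 = u2 + Fraction(l3 - l2, p**2)
    return u1, u2, u3

# Example: p = 3 and lower breaks 1 <= 1 <= 10 give upper breaks 1, 1, 2.
assert upper_breaks(3, 1, 1, 10) == (1, 1, 2)
\end{verbatim}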
\subsection{Special polynomials}
The Artin-Schreier polynomial $\wp(X)=X^p-X\in \mathbb Z[X]$ defines an $\mathbb F_p$-linear map in characteristic $p$. Recall
from the theory of Witt vectors, the Witt polynomial:
\[S(X_1,X_2)=\frac{X_1^p+X_2^p-(X_1+X_2)^p}{p}\in\mathbb Z[X_1,X_2].\]
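
For small $p$ the Witt polynomial is easy to expand explicitly; the following illustrative snippet (ours) prints it for $p=2,3,5$. For instance, $S(X_1,X_2)=-X_1X_2$ when $p=2$ and $S(X_1,X_2)=-X_1^2X_2-X_1X_2^2$ when $p=3$.
\begin{verbatim}
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
for p in (2, 3, 5):
    S = sp.expand((X1**p + X2**p - (X1 + X2)**p) / p)
    print(p, S)
# p = 2:  -X1*X2
# p = 3:  -X1**2*X2 - X1*X2**2
\end{verbatim}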
\subsection{Artin-Schreier extensions}\label{AS}
In characteristic $p$, cyclic extensions $L/K$ of
degree $p$ are Artin-Schreier. Thus $L=K(x)$
for some $x\in K^\mathrm{sep}$ such that $x^p-x=\kappa$ where $\kappa \in K$. We will refer to $\kappa$ as the
{\em
Artin-Schreier generator} (AS-generator). As
explained in Remark \ref{technicalities}, we may replace $\kappa$ with
any $\kappa'\in \kappa+K^\wp$ where $K^\wp=\{\wp(k):k\in K\}$ without
changing the extension $L/K$. Thus, since $K$ is a local field, we may
assume, as we do in \S\ref{arith}, that the AS-generator $\kappa$ is
{\em reduced}; that is, $v_K(\kappa)=\max\{v_K(k):k\in\kappa+K^\wp\}$.
If $L/K$ is totally ramified, then $b:=-v_K(\kappa)=-v_L(x)>0$
is the ramification break of $L/K$, which is coprime to $p$.
For cyclic extensions of degree $p$ the upper and lower ramification breaks agree.
\subsection{Main Results}\label{results}
First, we describe the
extensions in terms of their AS-generators. This result does not require $K$ to be a local field.
\begin{theorem}\label{as-gen}
Let $K$ be a field of characteristic $p>0$. An extension $L/K$ is a
Galois extension with $\mathrm{Gal}(L/K)\cong Q_8, D_8,H(p^3)$
or $M(p^3)$ if and only if $L=K(x_1,x_2,x_3)$ where
\[
x_1^p-x_1=\kappa_1,\qquad
x_2^p-x_2=\kappa_2,\qquad
x_3^p-x_3=\mathfrak s(x_1,x_2)+\kappa_3,\]
for some $\kappa_i\in K$ such that $\kappa_1,\kappa_2$
represent $\mathbb F_p$-linearly independent cosets of $K/K^\wp$, and
\[\mathfrak s(x_1,x_2)=-\kappa_2x_1+\begin{cases}
\kappa_1x_1+\kappa_2x_2 & \mathrm{Gal}(L/K)\cong Q_8,\\
\kappa_1x_1 & \mathrm{Gal}(L/K)\cong D_8,\\
0& \mathrm{Gal}(L/K)\cong H(p^3),\\
S(x_1,\kappa_1)& \mathrm{Gal}(L/K)\cong M(p^3).
\end{cases}\]
Note that $S(x_1,\kappa_1)=\kappa_1x_1$ for $p=2$.
Furthermore, the Galois group
$\mathrm{Gal}(L/K)=\langle\sigma_1,\sigma_2,\sigma_3\rangle$ is determined by
\[(\sigma_i-1)x_j=\delta_{ij},\]
where $\delta_{ij}$ is the Kronecker delta,
for all pairs $(i,j)$ with $1\leq i,j\leq 3$ except two:
$(i,j)=(1,3)$ and $(2,3)$.
For $(i,j)=(1,3)$, we have
\[(\sigma_1-1)x_3=-x_2+\begin{cases}
x_1 & \mathrm{Gal}(L/K)\cong Q_8,\\
x_1 & \mathrm{Gal}(L/K)\cong D_8,\\
0& \mathrm{Gal}(L/K)\cong H(p^3),\\
S(x_1,1)& \mathrm{Gal}(L/K)\cong M(p^3).
\end{cases}\]
For
$(i,j)=(2,3)$, we have
\[(\sigma_2-1)x_3=\begin{cases}
x_2& \mathrm{Gal}(L/K)\cong Q_8,\\
0 & \mathrm{Gal}(L/K)\cong D_8,H(p^3),M(p^3).
\end{cases}\]
\end{theorem}
\begin{remark}\label{Witt/Saltman}
That a theorem like Theorem \ref{as-gen} might exist is not a
surprise. Saltman has proven that such descriptions exist for all
Galois extensions with Galois group of order $p^n$, $n\geq 1$
\cite[Corollary 2.5]{saltman}. What Theorem \ref{as-gen} does that
is not done in \cite{saltman} is explicitly describe the term
$\mathfrak s(x_1,x_2)$.
\end{remark}
\begin{remark}\label{relations}
Observe that for $p=2$, $S(x_1,\kappa_1)=\kappa_1x_1$ and
$S(x_1,1)=x_1$. Thus, unsurprisingly, the descriptions of
$\mathfrak s(x_1,x_2)$ for $D_8$- and $M(p^3)$-extensions agree. Observe that
for $p>2$, $\wp(x_i^2)=2x_i\kappa_i+\kappa_i^2$ and thus
$x_i\kappa_i\in K(x_i)^\wp+K$ for $i=1,2$. This means that we can
choose to set $\mathfrak s(x_1,x_2)=-\kappa_2x_1+\kappa_1x_1+\kappa_2x_2$
for $\mathrm{Gal}(L/K)\cong H(p^3)$, which would make the descriptions of
$\mathfrak s(x_1,x_2)$ for $Q_8$- and $H(p^3)$-extensions agree. Finally,
observe that $\wp(x_1x_2)=\kappa_2x_1+\kappa_1x_2+\kappa_1\kappa_2$.
As a result, $-\kappa_2x_1\equiv
\kappa_1x_2\pmod{K(x_1,x_2)^\wp+K}$. Replacing $x_3$ by $-x_3$, we
can choose $\mathfrak s(x_1,x_2)=-\kappa_1x_2+\kappa_1x_1+\kappa_2x_2$ for
$\mathrm{Gal}(L/K)\cong Q_8$ and choose $\mathfrak s(x_1,x_2)=-\kappa_1x_2$ for
$\mathrm{Gal}(L/K)\cong H(p^3)$. These replacements show that the Galois
extensions described in Theorem \ref{as-gen} for $Q_8, H(p^3)$ remain
invariant under the transposition $(1\;2)$ acting on subscripts, just
as the presentations of these two groups are invariant. This is a
comforting rather than a surprising observation.
\end{remark}
So that we may use Theorem \ref{as-gen} to determine ramification
breaks, we record the result specialized to local fields.
\begin{corollary}\label{ram-as-gen}
Let $K$ be a local field of characteristic $p>0$. An extension
$L/K$ is a totally ramified Galois extension with $\mathrm{Gal}(L/K)\cong
Q_8, D_8,H(p^3)$ or $M(p^3)$ if and only if the content of Theorem
\ref{as-gen} holds, except that we replace the statement
\begin{itemize}
\item such that ``$\kappa_1,\kappa_2$
represent $\mathbb F_p$-linearly independent cosets of $K/K^\wp$''
\end{itemize}
with the alternate statement
\begin{itemize}
\item ``satisfying $v_K(\kappa_i)=-b_i<0$ with $p\nmid b_i$, and if
$b_1=b_2$ then, without loss of generality, $\kappa_1,\kappa_2$
represent $\mathbb F_p$-linearly independent cosets in
$\kappa_2\mathcal O_K/\kappa_2\mathcal M_K$,''
\end{itemize}
With this replacement, the
conclusions of Theorem \ref{as-gen} hold.
\end{corollary}
\begin{theorem} \label{sharp-bound}
Let $K$ be a local field of characteristic $p>0$ with perfect
residue field. Let $M/K$ be a totally ramified $C_p^2$-extension
with upper ramification breaks $u_1\leq u_2$. Then $M=K(y_1,y_2)$
for some $y_1,y_2\in K^\mathrm{sep}$
such that $\wp(y_i)=\beta_i$ with $v_K(\beta_i)=-u_i$.
Embed $M/K$ in a Galois extension $N/K$ with
Galois group $\mathrm{Gal}(N/K)=G\cong D_8,H(p^3),M(p^3)$ or $Q_8$, as described in Theorem \ref{as-gen}.
\begin{quote}
{\rm [}Note: The $x_i$ in Theorem \ref{as-gen} are determined by the generators of the Galois groups, as described in \eqref{gps}. It is not necessarily the case that $y_i=x_i$. See Remark \ref{x-y-switch}.{\rm ]}
\end{quote}
For each group $G$, there is a lower bound $B_G$ such that the upper ramification breaks of
$N/K$ are $u_1\leq u_2\leq u_3$ with $B_G\leq u_3$, and if $B_G<u_3$ then $u_3$ is an integer coprime to $p$.
It only remains to describe these lower bounds $B_G$:
\[B_{D_8}=\begin{cases}
u_1+u_2 &\mbox{for } u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle,\\
2u_2 &\mbox{for } u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle.
\end{cases}
\]
By Lemma \ref{decomp}
\[\beta_2=\mu_0^p+\sum_{i=1}^{p-1}\mu_i^p\beta_1^i\]
for some $\mu_i\in K$ satisfying certain technical conditions stated there. Set
\[r=-v_K\left(\sum_{i=1}^{p-2}\mu_i^p\beta_1^i \right),\qquad s=-v_K(\mu_{p-1}^p\beta_1^{p-1}).\]
Observe
$s\equiv -u_1\bmod p$, $r\not\equiv 0,-u_1 \bmod p$ and
$u_2=\max\{r,s\}$.
Then
\[B_{H(p^3)}=\max\left\{s+u_1,r+\frac{u_1}{p}\right\}.\]
If $\mu_{p-1}$, which is used to define $s$, satisfies $\mu_{p-1}\in -1+\mathcal M_K$, set $\epsilon=\mu_{p-1}+1$ and
\[t=-v_K(\epsilon^p\beta_1^{p-1}).\]
Then
\[B_{M(p^3)}=\begin{cases}
\max\left\{pu_1,s+u_1,r+\frac{u_1}{p}\right\} &\mbox{for }\mu_{p-1}\not\equiv -1\bmod \mathcal M_K,\mbox{ and }\\
&\hspace*{.75cm}u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle,\\
\max\left\{(p-1)u_1+\frac{u_1}{p},t+u_1,r+\frac{u_1}{p}\right\}
&\mbox{for }\mu_{p-1}\equiv -1\bmod \mathcal M_K,\mbox{ and }\\
&\hspace*{.75cm}u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle,\\
pu_2&\mbox{for } u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle.
\end{cases}\]
Using Lemma \ref{decomp}, $\beta_2\equiv \mu^p\beta_1\bmod \mathcal O_K$. If $u_2=u_1$ then $\mu=\omega+\epsilon$ for some root of unity $\omega\in\mathcal O_K/\mathcal M_K$ and $\epsilon\in\mathcal M_K$ satisfying $v_K(\epsilon)=e>0$.
Let
\[B_{Q_8}=\begin{cases}
2u_2 & \mbox{for } u_2\neq u_1,\mbox{ or }u_2= u_1,\omega^3\neq 1,\\
\max\left\{\frac{3u_1}{2},2u_1-2e\right\} & \mbox{for } u_2=u_1, \omega^3= 1.
\end{cases}
\]
\end{theorem}
\begin{remark}
The conclusion of the Theorem of Hasse-Arf is the statement that the upper ramification breaks are integers. Because $B_{D_8}$ is an integer, the conclusion continues to hold for $G\cong D_8$. Because $B_G$ for $G\cong Q_8,H(p^3),M(p^3)$ can fail to be an integer, the conclusion of Hasse-Arf can fail to hold for $G\cong Q_8,H(p^3),M(p^3)$.
\end{remark}
\subsection{Outline}
The link between reduced AS-generators and ramification breaks of
$C_p$-extensions, as described in \S\ref{AS}, is the tool we use to
determine the upper ramification breaks of our extensions. Thus we
begin in \S\ref{embed} by deriving the AS-generators described in
Theorem \ref{as-gen} and Corollary \ref{ram-as-gen}. Recall that
$l_1\leq l_2\leq l_3$ denote the lower ramification breaks of $N/K$.
Using \cite[Chapter IV, Proposition 10]{serre:local}, we will show that
$G_{l_3}=Z(G)=\langle\sigma_3\rangle$. Since $\langle\sigma_3\rangle$
is a ramification subgroup, the ramification break for $N/M$ is the
third lower ramification break $l_3$ of $N/K$. This means that to determine $l_3$, it is sufficient to reduce the AS-generator $\mathfrak s(x_1,x_2)+\kappa_3\in M$, except that the
notion of ``reduced,'' as given in \S\ref{AS},
is a little too simple for our purpose. Thus in
\S\ref{arith}, we generalize this notion
and apply it to determine the ramification breaks of certain auxiliary $C_p$-extensions. Then
in \S\ref{linking}, we use ramification theory to pull all these results together to determine
the ramification break of $M(x_3)/M$.
The upper ramification breaks $u_1\leq u_2\leq u_3$ follow.
We close in \S\ref{notting} by pointing out applications to the Nottingham group.
\section{Artin-Schreier generators}\label{embed} Let $K$ be a field of
characteristic $p>0$. Notice: The results of this section do not require $K$ to be a local field.
Let $N/K$ be a Galois extension with
$\mathrm{Gal}(N/K)=\langle\sigma_1,\sigma_2\rangle \cong Q_8,
D_8,H(p^3)$ or $M(p^3)$, adopting the notation of \eqref{gps}. Let $M=N^{\sigma_3}$ with
$\sigma_3=[\sigma_1,\sigma_2]$ denote the
fixed field of the center of $\mathrm{Gal}(N/K)$. In every case,
$\langle\sigma_1,\sigma_2\rangle/\langle\sigma_3\rangle\cong
C_p^2$. Thus we may assume, without loss of generality, that $M=K(x_1,x_2)$ for
some $x_i\in N$ such that $\wp(x_i)=\kappa_i$ for some
$\kappa_i\in K$, $i=1,2$ that
represent $\mathbb F_p$-linearly independent cosets of $K/K^\wp$, and
$(\sigma_i-1)x_j=\delta_{ij}$ for
$1\leq i,j\leq 2$.
\begin{remark}\label{technicalities}
When we apply the results of this section in \S\ref{arith}, we will
assume that in a preparatory step the AS-generators
$\kappa_1,\kappa_2$ were adjusted in the two ways listed here. But
since this preparatory step is not required in the remainder of this
section, we do not yet assume that it has been done.
\begin{enumerate}
\item Observe $K(x_i)=K(x_i+\kappa)$ for all $\kappa\in
K$. Thus $K(x_i)=K(x_i')$ for all $x_i'\in N$ satisfying
$\wp(x_i')\in \kappa_i+K^\wp$ where $K^\wp=\{\wp(\kappa):\kappa\in
K\}$. We may replace $\kappa_i$ with any element
in $\kappa_i+K^\wp$ without changing
$M=K(x_1,x_2)$.
\item Observe
that $M=K(x_1,x_2)=K(x_1,x_2')$ for $x_2'=ax_1+ x_2$,
$a\in\mathbb Z$. We may replace $x_2$ with $x_2'$ while replacing
$\kappa_2$ with $a\kappa_1+\kappa_2$ without changing the
description of $M=K(x_1,x_2)$. Furthermore, if we replace
$\sigma_1$ with $\sigma_1'=\sigma_1\sigma_2^{-a}$, then
$\sigma_1',\sigma_2$ act on $x_1,x_2'$ as $\sigma_1,\sigma_2$ acted
on $x_1,x_2$. By Remark \ref{group-pres}, replacing
$\sigma_1$ with $\sigma_1'=\sigma_1\sigma_2^{-a}$ does not change
the presentation of the Galois group.
\end{enumerate}
\end{remark}
Our description of the remaining part of the Galois extension, namely
$N/M$, depends upon the particular Galois group.
\subsection{$Q_8$}
Since $N/M$ is a quadratic extension, $N=M(x_3)$ for some $x_3\in N$ such that $\wp(x_3)\in M$ and $(\sigma_3-1)x_3=1$.
For $i=1,2$ there exist $A_i\in N$ such that $(\sigma_i-1)x_3=A_i$.
Since $[\sigma_i,\sigma_3]=1$ for $i=1,2$ we find that $A_i$ lies in the fixed field of $\sigma_3$. Thus $A_i\in M$.
Since $(\sigma_i-1)^2=(\sigma_3-1)$ for $i=1,2$ we find that
$(\sigma_i-1)A_i=1$. Thus
$A_i-x_i$ lies in the fixed field of $\langle\sigma_3,\sigma_i\rangle$. In other words,
\[A_1-x_1\in K(x_2),\qquad A_2-x_2\in K(x_1).\]
There exist $a,b,c,d\in K$ such that $A_1=a+bx_2+x_1$ and $A_2=c+dx_1+x_2$.
Apply $\sigma_1\sigma_2=\sigma_2\sigma_1\sigma_3$ to $x_3$. The result is
$x_3+A_1+\sigma_1(A_2)=x_3+A_2+\sigma_2(A_1)+1$. Thus
\[(\sigma_1-1)A_2=(\sigma_2-1)A_1+1.\]
From this, we determine that $d=b+1$. Observe that $x_3'=x_3+dx_1x_2+ax_1+cx_2$
satisfies $\wp(x_3')\in M$, $(\sigma_3-1)x_3'=1$. Thus we may replace $x_3$ with $x_3'$ so that, without loss of generality,
\[(\sigma_1-1)x_3=x_1+x_2,\qquad (\sigma_2-1)x_3=x_2.\]
Set $T=\wp(x_3)\in M$. Since $(\sigma_1-1)x_3=x_1+x_2$, we find that $(\sigma_1-1)T=\kappa_1+\kappa_2$. Since $(\sigma_2-1)x_3=x_2$, $(\sigma_2-1)T=\kappa_2$.
Thus $T-(\kappa_1+\kappa_2)x_1-\kappa_2x_2$ is fixed by both $\sigma_1$ and $\sigma_2$. Thus $T=(\kappa_1+\kappa_2)x_1+\kappa_2x_2+\kappa_3$ for some $\kappa_3\in K$.
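
The normalization of $x_3$ carried out above is easily checked by machine. The following sketch (ours, not part of the argument) models $\sigma_1,\sigma_2$ as ring maps determined by their action on $x_1,x_2,x_3$ and confirms, modulo $2$, that $x_3'=x_3+dx_1x_2+ax_1+cx_2$ satisfies $(\sigma_1-1)x_3'=x_1+x_2$ and $(\sigma_2-1)x_3'=x_2$.
\begin{verbatim}
import sympy as sp

x1, x2, x3, a, b, c = sp.symbols('x1 x2 x3 a b c')
d = b + 1                    # the relation d = b + 1 derived above
A1 = a + b*x2 + x1           # (sigma_1 - 1) x_3
A2 = c + d*x1 + x2           # (sigma_2 - 1) x_3

# sigma_i acts as a ring map determined on the generators (p = 2):
def sigma1(f): return f.subs({x1: x1 + 1, x3: x3 + A1}, simultaneous=True)
def sigma2(f): return f.subs({x2: x2 + 1, x3: x3 + A2}, simultaneous=True)

x3p = x3 + d*x1*x2 + a*x1 + c*x2

def even_coeffs(expr):
    # every integer coefficient is even, i.e. the expression vanishes mod 2
    return all(co % 2 == 0
               for co in sp.Poly(sp.expand(expr), x1, x2, x3, a, b, c).coeffs())

assert even_coeffs(sigma1(x3p) - x3p - (x1 + x2))   # (sigma_1 - 1) x_3' = x_1 + x_2
assert even_coeffs(sigma2(x3p) - x3p - x2)          # (sigma_2 - 1) x_3' = x_2
\end{verbatim}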
\subsection{$D_8, H(p^3), M(p^3)$}
Let $L=N^{\sigma_2,\sigma_3}$ be
the fixed field of
$\langle\sigma_2,\sigma_3\rangle$, which is a normal subgroup of order $p^2$. Thus $L=K(x_1)$ with $\wp(x_1)=\kappa_1$.
Observe that
$N/L$ is a $C_p^2$ extension and consider
$N^{\sigma_2}/L$, a cyclic extension of degree $p$. The image of
$\sigma_3$ generates $\mathrm{Gal}(N^{\sigma_2}/L)$. So, without loss of generality,
$N^{\sigma_2}=L(x_3)$ for some $x_3\in N$ such that $\wp(x_3)\in L$
and $(\sigma_3-1)x_3=1$. Since $x_3\in N^{\sigma_2}$, $(\sigma_2-1)x_3=0$.
It remains for us to determine $(\sigma_1-1)x_3=A\in N$. Since
$[\sigma_1,\sigma_3]=1$, $A\in M$. Apply
$\sigma_1\sigma_2=\sigma_2\sigma_1\sigma_3$ to $x_3$ to find that
$(\sigma_2-1)A=-1$. This means that $A+x_2$ is fixed by $\sigma_2$ and
thus $A+x_2\in L$. Further considerations depend upon the group.
\subsubsection{$H(p^3)$}
Apply the trace $\mathrm{Tr}_{L/K}$ to $A+x_2\in L$ and find
$\mathrm{Tr}_{L/K}(A+x_2)=(\sigma_1^p-1)x_3+px_2=0$. Thus, by the additive
version of Hilbert's Theorem 90, there is an $\ell\in L$ such that
$(\sigma_1-1)\ell=A+x_2$. Let $x_3'=x_3-\ell$. Observe that
$\wp(x_3')\in L$, $(\sigma_3-1)x_3'=1$, $(\sigma_2-1)x_3'=0$ and
$(\sigma_1-1)x_3'=-x_2$. Thus without loss of generality, we relabel
so that $x_3$ has these properties:
\[(\sigma_2-1)x_3=0, \qquad (\sigma_1-1)x_3=-x_2.\]
Set $T=\wp(x_3)\in L$. Since $(\sigma_1-1)x_3=-x_2$, we find that
$(\sigma_1-1)T=-\kappa_2$. Thus $T+\kappa_2x_1\in L$ is fixed by
$\sigma_1$, which means $T+\kappa_2x_1\in K$ and
$T=-\kappa_2x_1+\kappa_3$ for some $\kappa_3\in K$.
\subsubsection{$D_8, M(p^3)$}
Observe that $S(x_1,\kappa_1)=\kappa_1x_1$ when $p=2$. Thus $D_8$ is not a separate case.
It is well-known that
$S(x_1,1),S(x_1,\kappa_1)\in L$ satisfy $\mathrm{Tr}_{L/K}S(x_1,1)=1$ and
$(\sigma_1-1)S(x_1,\kappa_1)=\wp(S(x_1,1))$; we leave the verification of these identities to the reader, namely
\[\mathrm{Tr}_{L/K}\left(\frac{x_1^p+1-(x_1+1)^p}{p}\right)=1,\]
\[(\sigma_1-1)\left(\frac{x_1^p+\kappa_1^p-(x_1+\kappa_1)^p}{p}\right)=\wp\left(\frac{x_1^p+1-(x_1+1)^p}{p}\right).\]
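
These identities can also be confirmed symbolically. The following sketch (ours) checks them for $p=3$, with $x$ playing the role of $x_1$ and $k$ of $\kappa_1$, reducing modulo $p$ and modulo the Artin-Schreier relation $x_1^p=x_1+\kappa_1$; analogous checks work for other small primes.
\begin{verbatim}
import sympy as sp

p = 3
x, k = sp.symbols('x k')
S = lambda u, v: sp.expand((u**p + v**p - (u + v)**p) / p)

def reduce_AS(expr):
    """Rewrite an element of Z[x,k] using the relation x**p = x + k."""
    e = sp.expand(expr)
    P = sp.Poly(e, x)
    while P.degree() >= p:
        c, d = P.LC(), P.degree()
        e = sp.expand(e - c*x**d + c*x**(d - p)*(x + k))
        P = sp.Poly(e, x)
    return e

def zero_mod_p(expr):
    return all(c % p == 0 for c in sp.Poly(sp.expand(expr), x, k).coeffs())

# Tr_{L/K} S(x_1, 1) = 1: the trace is the sum over sigma_1^i, sigma_1(x_1) = x_1 + 1.
trace = sum(S(x + i, 1) for i in range(p))
assert zero_mod_p(trace - 1)

# (sigma_1 - 1) S(x_1, kappa_1) = wp(S(x_1, 1)) in L:
lhs = S(x + 1, k) - S(x, k)
rhs = S(x, 1)**p - S(x, 1)
assert zero_mod_p(reduce_AS(lhs - rhs))
\end{verbatim}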
We are now ready to proceed. Apply the trace $\mathrm{Tr}_{L/K}$ to $A+x_2-S(x_1,1)\in L$ and find
$\mathrm{Tr}_{L/K}(A+x_2-S(x_1,1))=(\sigma_1^p-1)x_3+px_2-1=0$. Thus, by the
additive version of Hilbert's Theorem 90, there is an $\ell\in L$ such
that $(\sigma_1-1)\ell=A+x_2-S(x_1,1)$. Let $x_3'=x_3-\ell$. Observe
that $\wp(x_3')\in L$, $(\sigma_3-1)x_3'=1$, $(\sigma_2-1)x_3'=0$ and
$(\sigma_1-1)x_3'=-x_2+S(x_1,1)$. Thus without loss of generality, we
relabel so that $x_3$ has these properties:
\[(\sigma_2-1)x_3=0, \qquad (\sigma_1-1)x_3=-x_2+S(x_1,1).\]
Again, set $T=\wp(x_3)\in L$. Since $(\sigma_1-1)x_3=-x_2+S(x_1,1)$, we
find that $(\sigma_1-1)T=-\kappa_2+(\sigma_1-1)S(x_1,\kappa_1)$. Thus
$T+\kappa_2x_1-S(x_1,\kappa_1)\in L$ is fixed by $\sigma_1$. We conclude that
$T=-\kappa_2x_1+S(x_1,\kappa_1)+\kappa_3$ for some $\kappa_3\in
K$.
The converse follows from \cite[Corollary 2.5]{saltman}. However, for these small extensions one might want to see the details of the converse worked out, so we provide a sketch. First, we introduce a lemma:
\begin{lemma}\label{converse}
Let $K$ be a local field of characteristic $p>0$, $M/K$ be a $C_p^2$-extension with $\mathrm{Gal}(M/K)=\langle\bar{\sigma}_1,\bar{\sigma}_2\rangle$, and let
$N/M$ be a $C_p$-extension with $N=M(x)$ for some $x\in K^\mathrm{sep}$
such that $x^p-x=\mu$ with $\mu\in M$. Suppose that
\[(\bar{\sigma}_i-1)\mu\in M^\wp\]
for $i=1,2$. Then $N/K$ is Galois.
\end{lemma}
\begin{proof}
By assumption, for $i=1,2$ there exist $\mu_i\in M$ such that
$\bar{\sigma}_i(\mu)=\mu+\wp(\mu_i)$. Both of
$\bar{\sigma}_1,\bar{\sigma}_2$ can be extended to isomorphisms
$\sigma_1,\sigma_2$ from $N$ into $K^\mathrm{sep}$. Observe that
$\sigma_i(x)\in K^\mathrm{sep}$ is a root of $X^p-X=\bar{\sigma}_i(\mu)\in
M[X]$. Thus $\sigma_i(x)-\mu_i$ is a root of $X^p-X=\mu$, which
means that $\sigma_i(x)\in N$. The result follows.
\end{proof}
Consider the case where $p>2$, $\wp(x_1)=\kappa_1$,
$\wp(x_2)=\kappa_2$, and
$\wp(x_3)=-\kappa_2x_1+S(x_1,\kappa_1)+\kappa_3$. To prove that $M(x_3)/K$ is Galois we use Lemma \ref{converse}. Apply $(\sigma_1-1)$ to
the AS-generator
$-\kappa_2x_1+S(x_1,\kappa_1)+\kappa_3$ and the result is $-\kappa_2+(\sigma_1-1)S(x_1,\kappa_1)=\wp(-x_2+S(x_1,1))\in M^\wp$.
Apply $(\sigma_2-1)$ and the result is $0\in M^\wp$.
Now that $N=M(x_3)$ is Galois over $K$, we identify its Galois group from the relations that $\sigma_1,\sigma_2$ satisfy on $x_3$.
Since
\[\wp((\sigma_2-1)x_3)=(\sigma_2-1)\wp(x_3)=0,\]
we determine that $(\sigma_2-1)x_3$ satisfies $X^p-X=0$ and thus $(\sigma_2-1)x_3=c\in\mathbb F_p$. Therefore $(\sigma_2^p-1)x_3=0$ and $(\sigma_1-1)(\sigma_2-1)x_3=0$. We have proven $\lvert \sigma_2\rvert=p$.
Since
\[\wp((\sigma_1-1)x_3)=(\sigma_1-1)\wp(x_3)=-\kappa_2+\wp(S(x_1,1)),\]
we find that
$(\sigma_1-1)x_3+x_2-S(x_1,1)=d\in\mathbb F_p$. Thus
$(\sigma_1^p-1)x_3=(1+\sigma_1+\cdots+\sigma_1^{p-1})S(x_1,1)=1$ and $(\sigma_2-1)(\sigma_1-1)x_3=-1$. Since $(\sigma_1^{p}-1)x_3=1\neq 0$, we have proven that
$\lvert \sigma_1\rvert=p^2$.
Furthermore, putting together
\begin{align*}
(\sigma_1-1)(\sigma_2-1)x_3&=0,\\
(\sigma_2-1)(\sigma_1-1)x_3&=-1,
\end{align*}
we find that $\sigma_1\sigma_2(x_3)-\sigma_2\sigma_1(x_3)=1$, which means that $([\sigma_1,\sigma_2]-1)x_3=1$ and thus $[\sigma_1,\sigma_2]=\sigma_1^p$.
The other three cases are left for the reader.
\section{Reducing the Artin-Schreier generators}\label{arith}
Let $K$ be a local field of characteristic $p>0$ and let $N/K$ be a
totally ramified nonabelian extension of degree $p^3$ determined by
the Artin-Schreier equations given in Theorem \ref{as-gen} and
Corollary \ref{ram-as-gen}. Thus the Galois group $G=\mathrm{Gal}(N/K)$ is
generated by $\sigma_1,\sigma_2$ where $\sigma_3=[\sigma_1,\sigma_2]$
generates the center $Z(G)$ of order $p$ and fixes a subfield
$M$, which is a
$C_p^2$-extension of $K$. Recall the notation $G_i$ for the lower
ramification groups. Let $u_1\leq u_2\leq u_3$ denote the upper and
$l_1\leq l_2\leq l_3$ denote the lower ramification breaks of $N/K$.
In \cite[Chapter IV, Proposition 10]{serre:local}, one sees that if
$\sigma\in G_i\setminus G_{i+1}$ and $\sigma'\in G_j\setminus
G_{j+1}$, then $[\sigma,\sigma']\in G_{i+j+1}$. As a result, the
elements of $G_{l_3}$ lie in the center $Z(G)$, and since $G_{l_3}$ is
nontrivial while the center $Z(G)$ has order $p$,
\[G_{l_3}=Z(G)=\langle \sigma_3\rangle.\] Now that we have
proven that $M$ is the fixed field of a ramification group, we use
\cite[Chapter IV, Proposition 3, Corollary]{serre:local} to conclude
that the lower and upper ramification breaks of $M/K$ are $l_1\leq
l_2$ and $u_1\leq u_2$, respectively. The ramification break of $N/M$
is $l_3$, and $u_3$ is determined by $l_3-l_2=p^2(u_3-u_2)$.
As a result, to determine the upper ramification sequence it only
remains to determine the ramification break of the
$C_p$-extension $M(x_3)/M$ where \[\wp(x_3)=\mathfrak s(x_1,x_2)+\kappa_3\]
with $\kappa_3\in K$ and $\mathfrak s(x_1,x_2)$ described in Theorem \ref{as-gen}, and Corollary \ref{ram-as-gen}.
Thus the main object of this section is to
``reduce'' this Artin-Schreier generator.
Define the $K$-{\em
group valuation}\footnote{This is the additive analog of the {\em defect} of
a $1$-unit \cite[page 141]{wyman}. } of an element $\kappa\in K$ to be the
maximal valuation attained by the elements in the coset $\kappa+K^\wp$,
$K^\wp=\{\wp(\kappa):\kappa\in K\}$; namely,
\[g\nu_K(\kappa)=\max\{v_K(x):x\in\kappa+K^\wp\}.\]
Clearly, $g\nu_K$ is well-defined on the additive group $K/K^\wp$, and
$g\nu_K(\kappa)=\infty$ if and only if $\kappa\in K^\wp$,
while \[g\nu_K(\kappa_1+\kappa_2)\geq
\min\{g\nu_K(\kappa_1),g\nu_K(\kappa_2)\}\] with equality when
$g\nu_K(\kappa_1)\neq g\nu_K(\kappa_2)$. Thus, once we compose with the
exponential function $\exp\circ g\nu_K: K/K^\wp\rightarrow
\mathbb R_{>0}\cup\{\infty\}$, we have a function that satisfies the
conditions of a group valuation, a notion that Larson attributes to
Zassenhaus \cite{larson}. While there are four conditions
required of a group valuation, the remaining two hold vacuously since
addition is commutative and $\mathrm{char}(K)=p$.
It is well-known that the $\mathbb F_p$-linear map
\begin{equation}\label{wp}
\wp: \mathcal M_K^i/\mathcal M_K^{i+1}\longrightarrow \begin{cases}
\mathcal M_K^i/\mathcal M_K^{i+1}&\mbox{for }i\geq 0,\\
\mathcal M_K^{pi}/\mathcal M_K^{pi+1}&\mbox{for }i< 0,\\
\end{cases}\end{equation}
is an isomorphism for $i\neq 0$, and for $i=0$ the kernel is
$\ker\wp=\mathbb F_p$. One consequence of this is that $\mathcal M_K\subseteq
K^\wp$. Thus $g\nu_K(\kappa)=\infty$ for $\kappa\in\mathcal M_K$.
Another consequence is that for $\kappa\not\in \mathcal M_K$,
$g\nu_K(\kappa)$ is either zero or equal to a negative integer coprime
to $p$. This is used to prove that either $K(x)/K$ with
$\wp(x)=\kappa$ is unramified, or is ramified with $p\nmid
v_K(\kappa)<0$ and $b=-v_K(\kappa)$ the ramification break of
$K(x)/K$.
Recall that Remark \ref{technicalities} (1) states that
we may replace $\kappa_i$ with any $x\in \kappa_i+K^\wp$.
Thus we assume that this was done in \S\ref{embed}
so that each $\kappa_i$
is $K$-{\em reduced}\footnote{This is standard terminology. {\em e.g.} Reduced Witt vectors in \cite[\S4]{thomas}}; namely, $v_K(\kappa_i)=g\nu_K(\kappa_i)$.
Since $N/K$ is totally ramified, the subextensions $K(x_i)/K$ are
ramified with ramification breaks $b_i=-v_K(\kappa_i)$.
Remark \ref{technicalities} (2) states that we may also
replace $x_2$ with $x_2'=ax_1+x_2$ and $\sigma_1$ with
$\sigma_1'=\sigma_1\sigma_2^{-a}$ without changing our description of
$M$ or the presentation of the group. We may then relabel so that not
only are $\kappa_1,\kappa_2$ reduced and $(\sigma_i-1)x_j=\delta_{ij}$
for $1\leq i,j\leq 2$, but if $v_K(\kappa_1)=v_K(\kappa_2)$,
equivalently $b_1=b_2$, then $\kappa_1,\kappa_2$ represent
$\mathbb F_p$-linearly independent elements in
$\kappa_1\mathcal O_K/\kappa_1\mathcal M_K$. Thus we may record that
\begin{equation}
\label{one-break}
u_1=\min\{b_1,b_2\}\mbox{ and }u_2=\max\{b_1,b_2\}
\end{equation}
Our current notation describes $M$ as $K(x_1,x_2)$ with subscripts
determined by the Galois group. In
\S\ref{embed}, the Galois group took center stage and this choice was natural.
The fixed field of $\langle\sigma_2,\sigma_3\rangle$
was $K(x_1)$ where $\wp(x_1)=\kappa_1$ and $v_K(\kappa_1)=-b_1$. The
fixed field of $\langle\sigma_1,\sigma_3\rangle$ was $K(x_2)$ where
$\wp(x_2)=\kappa_2$ and $v_K(\kappa_2)=-b_2$. In what follows,
ramification takes center stage, which makes this notation inconvenient.
To address this,
we set $\{x_1,x_2\}=\{y_1,y_2\}$ such that
$\wp(y_i)=\beta_i\in K$ so that $K(y_1)/K$ and $K(y_2)/K$ have
ramification breaks $u_1=-v_K(\beta_1),u_2=-v_K(\beta_2)$, respectively.
\begin{remark} \label{x-y-switch}
Since the group presentations for $Q_8$ and $H(p^3)$ are symmetric under
the transposition $(1\;2)$, we are able to assume that the subscripts
for the group presentations for $Q_8$ and $H(p^3)$ were chosen from the
outset based upon the ramification filtration. Thus for these two
groups, $y_1=x_1$ and $y_2=x_2$. Only for the groups $D_8$ and $M(p^3)$
does the introduction of $y_1,y_2$ matter. For these two groups,
because of \eqref{one-break}, we have $y_i=x_i$ when $b_1\leq b_2$,
and $y_1=x_2$, $y_2=x_1$ when $b_1>b_2$. Presenting this statement
another way, we have $y_i=x_i$ for $i=1,2$ when $u_2=u_1$ or
$G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle$. We have $y_1=x_2$,
$y_2=x_1$ when $u_2\neq u_1$ and $G_{l_2}=\langle \sigma_1\rangle$.
\end{remark}
Using Remark \ref{x-y-switch},
we translate
the formula for
$\mathfrak s(x_1,x_2)$ from
Theorem \ref{as-gen} into expressions in $y_1,y_2,\beta_1,\beta_2$.
\begin{equation}\label{s(y,y)}
\mathfrak s(x_1,x_2)=\begin{cases}
-\beta_2y_1+\beta_1y_1+\beta_2y_2 & \mathrm{Gal}(N/K)\cong Q_8,\\
-\beta_2y_1& \mathrm{Gal}(N/K)\cong H(p^3),\\
-\beta_2y_1+S(y_1,\beta_1)& \mathrm{Gal}(N/K)\cong D_8,M(p^3)\mbox{ and } \\ &\hspace*{2cm} u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle,\\
-\beta_1y_2+S(y_2,\beta_2)& \mathrm{Gal}(N/K)\cong D_8,M(p^3)\mbox{ and } \\ &\hspace*{2cm} u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle.
\end{cases}
\end{equation}
Recall that for $p=2$, $\beta_iy_i=S(y_i,\beta_i)$ for $i=1,2$.
Furthermore, for
$\mathrm{Gal}(N/K)\cong D_8,M(p^3)$, $u_2\neq u_1$ and $G_{l_2}=\langle \sigma_1\rangle$,
Remark \ref{relations} explains that
\[-\beta_1y_2=-\kappa_2x_1\equiv \kappa_1x_2=\beta_2y_1\pmod{M^\wp+K}.\]
Thus we record the following adjustment of \eqref{s(y,y)}: that
for $\mathrm{Gal}(N/K)\cong D_8,M(p^3)$, $u_2\neq u_1$ and $G_{l_2}=\langle \sigma_1\rangle$, we may instead use
\begin{equation}\label{gD,gM-adjust}
\mathfrak s(x_1,x_2)=\beta_2y_1+S(y_2,\beta_2).
\end{equation}
Now we use the description of $H(p^3)$-extensions
to motivate our next definition.
There we see that $N=M(x_3)$ where $x_3^p-x_3=-\beta_2y_1+\kappa_3$ for
some $\kappa_3\in K$. Furthermore, as $\kappa_3$ varies over all of $K$,
the field $N=M(x_3)$ varies over all $H(p^3)$-extensions $N/K$
that contain $M=K(x_1,x_2)=K(y_1,y_2)$.
Any determination of the lower/upper ramification breaks of $N/K$, together with the ramification groups associated with them, determines the ramification break of
the $C_p$-extension $N/M$, and thus also determines
the value of
$\max\{v_M(\tau):\tau\in -\beta_2y_1+M^{\wp}+K\}$.
However $-\beta_2y_1\in K(y_1)$, so we begin by working in the subfield $K(y_1)$, computing
\[\max\{v_{K(y_1)}(\tau):\tau\in -\beta_2y_1+K(y_1)^{\wp}+K\}.\]
This leads us to define, for a given ramified $C_p$-extension $L/K$,
the $L/K$-{\em group valuation} of an element $\ell\in L$:
\[g\nu_{L/K}(\ell)=\max\{v_L(x):x\in \ell+L^\wp+K\}.\]
Observe that $\ell\in L^\wp+K$ if and only if $g\nu_{L/K}(\ell)=\infty$ and
\begin{equation}\label{val}
g\nu_{L/K}(\ell+\ell')\geq \min\{g\nu_{L/K}(\ell),g\nu_{L/K}(\ell')\}
\end{equation}
with equality when $g\nu_{L/K}(\ell)\neq g\nu_{L/K}(\ell')$.
Notice that if $g\nu_{L/K}(\ell)<\infty$ then $g\nu_{L/K}(\ell)<0$.
Given $\ell\in L$ with finite $L/K$-group valuation $g\nu_{L/K}(\ell)$, there exist $l\in L,k\in K$ such that
$v_L(\ell+\wp(l)+k)=g\nu_{L/K}(\ell)$. We will refer to this element
\[\URA{\ell}{L/K}=\ell+\wp(l)+k\]
as an $L/K$-{\em reduction} of $\ell$.
Of course, while the valuation of the reduction $\URA{\ell}{L/K}$
is determined uniquely, the particular element $\URA{\ell}{L/K}$ that carries this valuation
is not. We will say that $\ell\in L$ is $L/K$-reduced if
$v_L(\ell)=v_L(\URA{\ell}{L/K})=g\nu_{L/K}(\ell)$.
The next result is a generalization of \eqref{wp}.
\begin{lemma}\label{image}
Let $L/K$ be a ramified $C_p$-extension with ramification break $b$. Thus $L=K(y)$ for some $y\in K^\mathrm{sep}$ with $\wp(y)=\beta\in K$, $v_K(\beta)=-b$.
Let $\phi_{L/K}:[0,\infty)\rightarrow [0,\infty)$ be the Hasse-Herbrand function
\[\phi_{L/K}(x)=\begin{cases}
x &\mbox{ for }0\leq x\leq b,\\
b+(x-b)/p&\mbox{ for }b< x\end{cases}\]
with inverse $\psi_{L/K}$.
Then for positive integers $n$ coprime to $p$,
\[\wp:\frac{\mathcal M_L^{-n}+K}{\mathcal M_L^{-n+1}+K}\longrightarrow\frac{\mathcal M_L^{-\psi_{L/K}(n)}+K}{\mathcal M_L^{-\psi_{L/K}(n)+1}+K}\]
is an isomorphism for $n\neq b$. If $n=b$ then $\ker\wp=\mathbb F_py+y\mathcal M_L+K$.
\end{lemma}
\begin{proof}
Since $p\nmid n$, there exists a unique pair $(i,m)$ with $1\leq i\leq p-1$, $m\in \mathbb Z$
such that $n=bi+pm$. Every element of $(\mathcal M_L^{-n}+K)/(\mathcal M_L^{-n+1}+K)$ can be represented by $\mu y^i$ for some $\mu\in K$ with $v_K(\mu)=-m$.
Since
$\wp(\mu y^i)=\mu^p(y+\beta)^i-\mu y^i\equiv\mu^p\binom{i}{1}\beta^{i-1}y+(\mu^p-\mu)y^i\pmod{(\mu^p\beta^{i-1}y+\mu y)\mathcal M_K+K}$, we find that
\[\wp: \frac{\mathcal M_L^{-pm-ib}+K}{\mathcal M_L^{-pm-ib+1}+K}\longrightarrow
\begin{cases}
\frac{\mathcal M_L^{-pm-ib}+K}{\mathcal M_L^{-pm-ib+1}+K}&\mbox{for }-pm-ib\geq -b,\\
\frac{\mathcal M_L^{-p^2m-(i-1)pb-b}+K}{\mathcal M_L^{-p^2m-(i-1)pb-b+1}+K}&\mbox{for }-pm-ib< -b,
\end{cases}\]
from this the result follows.
\end{proof}
\begin{corollary}\label{>-b}
If $\ell\in L$ satisfies $v_L(\ell)>-b$ then $\ell\in L^\wp+K$ and $g\nu_{L/K}(\ell)=\infty$.
\end{corollary}
\begin{corollary}\label{-b}
If $\ell\in L$ satisfies $v_L(\ell)=-b$ then $\ell\in \omega y+y\mathcal M_L$ for some $\omega\in \mathcal O_K/\mathcal M_K$. In this situation,\begin{center}
$\ell\in L^\wp+K$ and $g\nu_{L/K}(\ell)=\infty$ if and only if $\omega\in (\mathcal O_K/\mathcal M_K)^\wp$.\end{center}
\end{corollary}
\begin{corollary}\label{complement}
If $\ell\in L$ satisfies $v_L(\ell)<-b$ and either
\[v_L(\ell)\not\equiv -b \bmod p\quad
\mbox{ or }\quad
v_L(\ell)\equiv (p-1)b\bmod p^2,\]
then
$g\nu_{L/K}(\ell)= v_L(\ell)$.
\end{corollary}
\subsection{$L/K$-reductions of the terms in $\mathfrak s(x_1,x_2)$ for $D_8,H(p^3),M(p^3)$}\label{subsect}
Since $S(y_1,\beta_1)$ and $S(y_2,\beta_2)$ are associated with
cyclic extensions of degree $p^2$ and ramification in cyclic
extensions is well-understood, our focus in this section will be the
$L/K$-reduction of $\pm\beta_2y_1$. Our first result decomposes
$\beta_2$ into powers of $\beta_1$.
\begin{lemma}\label{decomp}
Without loss of generality, the AS-generator $\beta_2$, which satisfies $p\nmid v_K(\beta_2)$ and $v_K(\beta_2)\leq v_K(\beta_1)<0$, can be expressed as
\[
\beta_2=\mu_0^p+\sum_{i=1}^{p-1}\mu_i^p\beta_1^i
\]
for some $\mu_i\in K$ such that
\begin{enumerate}[label=\alph*)]
\item $\mu_0\in\mathcal O_K/\mathcal M_K$ and either $\mu_0\not\in\{ \wp(\omega):\omega\in \mathcal O_K/\mathcal M_K\}$ or $\mu_0=0$, and
\item for $1\leq i\leq p-1$, either $v_K(\mu_i^p\beta_1^i)<0$ or $\mu_i=0$.
\end{enumerate}
Additionally, if $v_K(\beta_2)=v_K(\beta_1)$, we may suppose
$\mu_1\in\mathcal O_K\setminus(\mathbb F_p+\mathcal M_K)$.
\end{lemma}
\begin{proof}
Observe that $K^p=\{\mu^p:\mu\in K\}$ is a subfield of $K$. Furthermore,
$K=K^p(\beta_1)$ is a field extension of $K^p$ of degree $p$.
Thus $1,\beta_1,\ldots, \beta_1^{p-1}$ is a basis for $K/K^p$ and there exists $\mu_i\in K$ such that
\[\beta_2=\sum_{i=0}^{p-1}\mu_i^p\beta_1^i.\]
However
since $\beta_2$ is an AS-generator, we are only concerned with this statement as a congruence
\begin{equation}\label{mu_0}
\beta_2\equiv\mu_0^p+\sum_{i=1}^{p-1}\mu_i^p\beta_1^i\pmod{K^{\wp}}.\end{equation}
Consider the term $\mu_0^p$. If $v_K(\mu_0^p)<0$, then since
$\mu_0^p=\mu_0+\wp(\mu_0)$, we may express
$\mu_0=\sum_{i=0}^{p-1}(\mu_i')^p\beta_1^i$ for some $\mu'_i\in K$,
and find that
$\beta_2\equiv(\mu'_0)^p+\sum_{i=1}^{p-1}(\mu_i+\mu_i')^p\beta_1^i\pmod{K^{\wp}}$
where $v_K(\mu_0')>v_K(\mu_0)$. Repeat this process and relabel, until $v_K(\mu_0^p)\geq 0$.
Now if $\mu_0^p\equiv \wp(\omega)\pmod{\mathcal M_K}$ for some $\omega\in \mathcal O_K/\mathcal M_K$, we may set $\mu_0=0$. At this point, \eqref{mu_0} holds with $\mu_0\in \mathcal O_K/\mathcal M_K$
and either
$\mu_0\not\in\{ \wp(\omega):\omega\in \mathcal O_K/\mathcal M_K\}$ or $\mu_0=0$.
To finish up, we observe that
since $\mathcal M_K\subset K^\wp$, if
$v_K(\mu_i^p\beta_1^i)>0$ for some $1\leq i\leq p-1$, we may set $\mu_i=0$.
\end{proof}
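
To make the first step of the proof concrete, here is a small illustration (ours, not part of the argument) with $K=\mathbb F_3((t))$ and $\beta_1=t^{-1}$: each term $ct^n$ of a sample $\beta_2$ is routed to the unique index $i$ with $n\equiv -i\pmod p$, using that the Frobenius fixes $\mathbb F_p$. The snippet only illustrates the basis expansion \eqref{mu_0}; the normalizations (a) and (b) are not performed.
\begin{verbatim}
import sympy as sp

p = 3
t = sp.symbols('t')
beta1 = t**-1
beta2 = t**-5 + 2*t**-4 + t**-2     # a sample AS-generator with v_K(beta2) = -5

mu = [sp.Integer(0)] * p
for term in sp.Add.make_args(sp.expand(beta2)):
    c, n = term.as_coeff_exponent(t)
    i = int((-n) % p)                # write n = p*m - i with 0 <= i < p
    m = (n + i) // p
    mu[i] += c * t**m                # c is its own cube root in F_3

check = sp.expand(sum(mu[i]**p * beta1**i for i in range(p)) - beta2)
# the difference vanishes mod 3 (clear denominators before reading coefficients)
assert all(co % p == 0
           for co in sp.Poly(sp.expand(check * t**15), t).all_coeffs())
\end{verbatim}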
Our approach towards determining $g\nu_{K(y_1)/K}(\beta_2y_1)$
depends upon Lemma \ref{decomp} and \eqref{val}.
First we address the easy case when $p=2$.
\begin{proposition}\label{p=2}
Assume $p=2$, and $y_1,y_2,\beta_1,\beta_2$ as above with
$\beta_2$ as in Lemma \ref{decomp}.
Then $\beta_2 y_1$ and $(\beta_1+\beta_2)y_1$ are $K(y_1)/K$-reduced with
\[v_{K(y_1)}(\beta_2 y_1)=v_{K(y_1)}((\beta_1+\beta_2)y_1)=-2u_2-u_1.\]
Similarly, $\beta_2 y_2$ is $K(y_2)/K$-reduced with $v_{K(y_2)}(\beta_2 y_2)=-3u_2$.
\end{proposition}
\begin{proof}
The results follow from Corollary \ref{complement} once it is observed that if
$v_K(\beta_1)= v_K(\beta_2)$ then
$\mu_1\not\in \mathbb F_2+\mathcal M_K$, and thus $v_K(\beta_1+\beta_2)=v_K(\beta_2)$.
\end{proof}
Now we address the general case.
\begin{proposition}\label{p>2}
Assume $p>2$, and $y_1,y_2,\beta_1,\beta_2$ as above with
$\beta_2$ as in Lemma \ref{decomp}.
Set
\[r=-v_K\left(\sum_{i=1}^{p-2}\mu_i^p\beta_1^i \right),\qquad s=-v_K(\mu_{p-1}^p\beta_1^{p-1}).\]
Observe
$s\equiv -u_1\bmod p$, $r\not\equiv 0,-u_1 \bmod p$ and
$u_2=\max\{r,s\}$. Then
\[v_{K(y_1)}(\URA{\beta_2y_1}{K(y_1)/K})=g\nu_{K(y_1)/K}(\beta_2 y_1)=-\max\{ps+u_1,pr-(p-2)u_1\}.\]
\end{proposition}
\begin{proof}
Let $L=K(y_1)$.
Based upon Lemma \ref{decomp}, summands $\{\mu_i^p\beta_1^i\}_{i\neq 0}$
in $\beta_2$ are either zero or have valuation $v_L(\mu_i^p\beta_1^i)< 0$.
Decompose the sum
$\sum_{i=1}^{p-2}\mu_i^p\beta_1^i=A+B$ where $A$ includes those summands
satisfying $v_K(\mu_i^p\beta_1^i)\leq v_K(\beta_1)$ and $B$ the nonzero summands satisfying $v_K(\beta_1)< v_K(\mu_i^p\beta_1^i)<0 $.
Thus
\[\beta_2=A+\mu_{p-1}^p\beta_1^{p-1}+B+\mu_0^p.\]
Since $v_K(\beta_2)\leq v_K(\beta_1)=-u_1$,
\[v_K(\beta_2)=\min\{v_K(A),v_K(\mu_{p-1}^p\beta_1^{p-1})\}.\]
At least one of $v_K(A),v_K(\mu_{p-1}^p\beta_1^{p-1})$ must be $\leq -u_1$.
At least one of $A,\mu_{p-1}\in K$ is nonzero.
To determine $g\nu_{L/K}(\beta_2y_1)$ we use
\eqref{val} and consider the following $L/K$-group valuations:
\[g\nu_{L/K}(Ay_1),\quad g\nu_{L/K}(\mu_{p-1}^p\beta_1^{p-1}y_1), \quad g\nu_{L/K}(By_1)\quad\mbox{ and }\quad
g\nu_{L/K}(\mu_0^py_1).\] Two are easy to analyze.
\begin{itemize}
\item Consider $g\nu_{L/K}(\mu_{p-1}^p\beta_1^{p-1}y_1)$ and suppose $\mu_{p-1}\neq 0$. Then
$v_L(\mu_{p-1}^p\beta_1^{p-1}y_1)\equiv
(p-1)u_1 \bmod p^2$. Thus
by
Corollary
\ref{complement}, $\mu_{p-1}^p\beta_1^{p-1}y_1$ is $L/K$-reduced and
\[g\nu_{L/K}(\mu_{p-1}^p\beta_1^{p-1}y_1)=v_L(\mu_{p-1}^p\beta_1^{p-1}y_1)
\leq -(p+1)u_1.\]
\item Consider $g\nu_{L/K}(\mu_0^py_1)$.
Since either $\mu_0\not\in\{ \wp(\omega):\omega\in \mathcal O_K/\mathcal M_K\}$ or $\mu_0=0$, Corollary \ref{-b} states that
\[-u_1\leq g\nu_{L/K}(\mu_{0}^py_1).\]
\end{itemize}
The remaining two $L/K$-group valuations are more involved. However, once we prove that
if $A\neq 0$, then
\[g\nu_{L/K}(Ay_1)\equiv -2u_1\bmod p\quad\mbox{ and }\quad g\nu_{L/K}(Ay_1)\leq -2u_1,\]
while
$-2u_1< g\nu_{L/K}(By_1)$, we will be able to conclude that
\begin{equation}\label{goal}
g\nu_{L/K}(\beta_2y_1)=\min\{g\nu_{L/K}(\mu_{p-1}^p\beta_1^{p-1}y_1),g\nu_{L/K}(Ay_1)\}.\end{equation}
We start by supposing that
$\mu\neq 0$ and $1\leq i\leq p-2$, and then expanding $\wp(\mu y_1^{i+1})=\mu^p(y_1+\beta_1)^{i+1}-\mu y_1^{i+1}$ to find that
\[\wp(\mu y_1^{i+1})=\mu^p\beta_1^{i+1}+\mu^p(i+1)\beta_1^iy_1+
\mu^p\sum_{j=2}^{i+1}\binom{i+1}{j}y_1^j\beta_1^{i+1-j}-\mu y_1^{i+1}.\]
Since $2\leq i+1<p$, we may solve for $\mu^p\beta_1^iy_1$ finding that
\[\mu^p\beta_1^iy_1\equiv \frac{i}{2}\mu^py_1^2\beta_1^{i-1}-\frac{\mu y_1^{i+1}}{i+1}\pmod{\mu^py_1^2\beta_1^{i-1}\mathcal M_L+L^\wp+K}.\]
Observe that
\[v_L\left(\frac{i}{2}\mu^py_1^2\beta_1^{i-1}\right)<v_L\left(\frac{\mu y_1^{i+1}}{i+1}\right)\iff v_K(\mu^p\beta_1^i)< v_K(\beta_1).\]
As a result, for $1\leq i\leq p-2$ and $\mu\neq 0$, while setting
$\mathcal L=L^\wp+K$
\begin{equation}\label{relate}
\mu^p\beta_1^iy_1\equiv\begin{cases} \frac{i}{2}\mu^py_1^2\beta_1^{i-1}\pmod{\mu^py_1^2\beta_1^{i-1}\mathcal M_L+\mathcal L} & v_K(\mu^p\beta_1^i)< v_K(\beta_1),\\
\frac{\wp(\mu)}{2}y_1^2 \pmod{y_1^2\mathcal M_L+\mathcal L}& v_K(\mu^p\beta_1^i)= v_K(\beta_1),\\
-\frac{\mu y_1^{i+1}}{i+1}\pmod{\mu y_1^{i+1}\mathcal M_L+\mathcal L} & v_K(\mu^p\beta_1^i)> v_K(\beta_1).\end{cases}\end{equation}
Now we apply
\eqref{relate}. Suppose that $A\neq 0$
and separate the cases: $v_K(A)<v_K(\beta_1)$ vs.~$v_K(A)=v_K(\beta_1)$.
In the first case $v_K(A)<v_K(\beta_1)$, let
$\mu_i^p\beta_1^iy_1$ be any
nonzero summand of
$Ay_1$ such that $v_L(\mu_i^p\beta_1^i)<v_L(\beta_1)$,
then by \eqref{relate} it
is congruent modulo $L^\wp+K$ to a term of valuation
$v_L(\mu_i^p\beta_1^{i-1}y_1^2)<-2u_1$. Moreover, using Corollary
\ref{complement} $\mu_i^p\beta_1^{i-1}y_1^2$ has largest valuation in the
coset of $L^\wp+K$ that it represents. Thus
\[g\nu_{L/K}(\mu_i^p\beta_1^iy_1)=v_L(\mu_i^p\beta_1^i)+(p-2)u_1<-2u_1.\]
If there is a summand such that $v_L(\mu_i^p\beta_1^i)=v_L(\beta_1)$, then $i=1$ and $v_K(\mu_i)=0$ and
$\mu_i^p\beta_1^iy_1$ is
congruent modulo $L^\wp+K$ to a term of valuation
$-2u_1\leq v_L(\wp(\mu_i)y_1^2)$. Using \eqref{val}, we conclude that
\[g\nu_{L/K}(Ay_1)=v_L(A)+(p-2)u_1<-2u_1.\]
The second case $v_K(A)=v_K(\beta_1)$ occurs when $A$ has only one
summand $\mu_i^p\beta_1^i$ with $i=1$ and $v_K(\mu_i)=0$. This means that
$v_K(A)=v_K(\beta_2)$ and
\[\beta_2\equiv \mu_1^p\beta_1\bmod \beta_1\mathcal M_K.\]
Since $u_1=-v_K(\beta_1)=-v_K(\beta_2)=u_2$, there is only one ramification break in the $C_p^2$-extension $M/K$ and every nontrivial $\mathbb F_p$-linear combination of $\beta_1$ and $\beta_2$ has the same valuation; hence $\mu_1\in\mathcal O_K\setminus(\mathbb F_p+\mathcal M_K)$, and thus $v_K(\wp(\mu_1))=0$. In this case,
\[g\nu_{L/K}(Ay_1)=v_L(A)+(p-2)u_1=-2u_1.\]
Finally, we apply \eqref{relate} to $By_1$.
The condition $v_K(\mu^p\beta_1^i)> v_K(\beta_1)$ is equivalent to
$v_L(\mu y_1^i)> v_K(y_1)=-u_1$. Thus
each summand
$\mu_i^p\beta_1^iy_1$ of $By_1$, is congruent modulo $L^\wp+K$ to a
term of valuation $v_L(-\mu y_1^{i+1})$, which satisfies $-2u_1<v_L(-\mu y_1^{i+1})$. So
\[-2u_1< g\nu_{L/K}(By_1).\]
The result now follows from \eqref{goal} and $g\nu_{L/K}(Ay_1)=v_L(A)+(p-2)u_1$.
\end{proof}
We now record results that involve $S(y_1,\beta_1)$ or $S(y_2,\beta_2)$.
\begin{proposition}\label{S(x_2)}
Assume $p>2$, and $y_1,y_2,\beta_1,\beta_2$ as above with
$\beta_2$ as in Lemma \ref{decomp}. Then
$S(y_2,\beta_2)$ is $K(y_2)/K$-reduced with
$v_{K(y_2)}(S(y_2,\beta_2))=-(p^2-p+1)u_2$. Set $r,s$ as in Proposition
\ref{p>2}. Then
\[g\nu_{K(y_1)/K}(-\beta_2 y_1+S(y_1,\beta_1))=-\max\{(p^2-p+1)u_1,ps+u_1,pr-(p-2)u_1\},\]
except when $\mu_{p-1}+1\in \mathcal M_K$.
When $\mu_{p-1}+1\in \mathcal M_K$,
$s=(p-1)u_1$ is fixed. Set
\[t=-v_K((\mu_{p-1}+1)^p\beta_1^{p-1})<s.\]
Note that $t\equiv -u_1\bmod p$.
Then
\[g\nu_{K(y_1)/K}(-\beta_2 y_1+S(y_1,\beta_1))=-\max\{(p^2-2p+2)u_1,pt+u_1,pr-(p-2)u_1\}.\]
\end{proposition}
\begin{proof}
Since $v_{K(y_i)}(S(y_i,\beta_i))=-(p^2-p+1)u_i$, we conclude from
Corollary \ref{complement} that $g\nu_{K(y_i)/K}(S(y_i,\beta_i))=-(p^2-p+1)u_i$.
Now consider
$-\beta_2y_1+S(y_1,\beta_1)=C-D$
where
\[C=-\mu_{p-1}^p\beta_1^{p-1}y_1+S(y_1,\beta_1)=-(\mu_{p-1}^p+1)\beta_1^{p-1}y_1-\sum_{i=1}^{p-2}\frac{1}{p}\binom{p}{i}\beta_1^iy_1^{p-i}\]
and $D=Ay_1+By_1+\mu_0^py_1$ is known from Proposition
\ref{p>2} to satisfy $g\nu_{K(y_1)/K}(D)=-(pr-(p-2)u_1)$.
Suppose $\mu_{p-1}+1\not\in\mathcal M_K$. Then
based upon Corollary \ref{complement},
\[g\nu_{K(y_1)/K}(C)=\min\{v_{K(y_1)}(-\mu_{p-1}^p\beta_1^{p-1}y_1),v_{K(y_1)}(S(y_1,\beta_1))\}\equiv (p-1)u_1\bmod p^2,\]
and
thus
$g\nu_{K(y_1)/K}(C)\neq
g\nu_{K(y_1)/K}(D)$. The first statement follows.
On the other hand, if $\mu_{p-1}+1=\tau\in\mathcal M_K$ then
$v_L(C)=\min\{v_L(\tau^p\beta_1^{p-1}y_1), v_L(\beta_1^{p-2}y_1^2)\}\equiv (p-1)u_1$ or $(2p-2)u_1\bmod p^2$. Observe that
$-(pr-(p-2)u_1)\equiv -2u_1\bmod p$ and
since $r\not\equiv (p-1)u_1\bmod p$,
$-(pr-(p-2)u_1)\not\equiv (2p-2)u_1\bmod p^2$.
Again
$g\nu_{K(y_1)/K}(C)\neq
g\nu_{K(y_1)/K}(D)$. The second statement follows.
\end{proof}
\begin{remark}\label{=same-as>}
When $p=2$ the sum that produces $r$ is an empty sum. Thus $s=u_2$
and the value of $g\nu_{K(y_1)/K}(\beta_2y_1)$ given in Proposition
\ref{p>2} agrees with the value in Proposition \ref{p=2}. The same
can be said for the values of
$g\nu_{K(y_1)/K}(-\beta_2y_1+S(y_1,\beta_1))$ and
$g\nu_{K(y_1)/K}(S(y_2,\beta_2))$ given in Proposition \ref{S(x_2)}.
They also agree with the values in Proposition \ref{p=2}.
\end{remark}
\subsection{Decomposition and reduction of $\mathfrak s(x_1,x_2)$ for $Q_8$}\label{Q-subsect}
In this case, $\mathrm{char}(K)=2$ and since $x_i=y_i$ for $i=1,2$,
we also have $u_i=-v_K(\kappa_i)$. We will continue to use $x_i$ and $\kappa_i$
(rather than $y_i$ and $\beta_i$). Recall that
$N=M(x_3)$ where $\wp(x_3)=(\kappa_1+\kappa_2)x_1+\kappa_2x_2+\kappa_3$ for some $\kappa_3$. In fact, as $\kappa_3$ ranges over all of $K$, $M(x_3)$ ranges over all totally ramified quaternion extensions of $K$ that contain $M$. We are interested in determining a lower bound on the ramification break of $M(x_3)/M$. Thus we are interested in
$\max\{v_M(t):t\in \mathfrak s'+M^{\wp}+K\}$ for
\[\mathfrak s'=\mathfrak s(x_1,x_2)=(\kappa_1+\kappa_2)x_1+\kappa_2x_2.\]
Using Lemma \ref{decomp}, we have
\begin{equation}\label{kappa2}
\kappa_2=\mu^2\kappa_1+\mu_0^2
\end{equation}
where $-v_K(\mu)=m\geq 0$ and $\mu_0\in \mathcal O_K/\mathcal M_K$. Notice that
\[u_2=u_1+2m.\]
If $m=0$ then since
$\kappa_1,\kappa_2$ are linearly independent in
$\kappa_1\mathcal O_K/\kappa_1\mathcal M_K$, we have $\wp(\mu)\neq 0$, which means that $\wp(\mu)$ is a unit
in $\mathcal O_K/\mathcal M_K$.
Set \begin{equation}\label{X}
X=x_2-\mu x_1
\end{equation}
and observe that
$\wp(X)=-\wp(\mu) x_1+\mu_0^2$.
This means that $X$ satisfies an Artin-Schreier polynomial over $L=K(x_1)$.
Since $v_L(-\wp(\mu) x_1+\mu_0^2)=pv_K(\wp(\mu))-u_1\not\equiv 0\bmod 2$, we
see that $L(X)/L$ is a $C_2$-extension with ramification break
$b=u_1+4m$ and
\[v_M(X)=-(u_1+4m).\]
Using \eqref{kappa2} and \eqref{X},
replace $\kappa_2$ and $x_2$ in $\mathfrak s'$ so that
$\mathfrak s'=\mathfrak s_1'+\mathfrak s_2'$ with
\begin{align*}
\mathfrak s_1'&=(1+\mu^2+\mu^3)\kappa_1 x_1+\mu_0^2(1+\mu) x_1\in L=K(x_1),\\
\mathfrak s_2'&=(\mu^2\kappa_1+\mu_0^2)X\in M=K(x_1,x_2).
\end{align*}
Observe that
\[v_M(\mathfrak s_2')=-(5u_1+12m), \quad\mbox{and if $m>0$ then }
v_L(\mathfrak s_1')=-(3u_1+6m),\]
but that the determination of $v_L(\mathfrak s_1')$ is not so clear when $m=0$.
To clarify matters when $m=0$, we
replace $\mu^3\kappa_1x_1$ in the expression for $\mathfrak s_1'$, by expanding
$\wp(\mu x_1X+\mu X)$ to find that
\[\mu^3\kappa_1x_1\equiv \mu^4\kappa_1x_1+\mu^2\mu_0^2x_1+\mu^2\kappa_1X+\wp(\mu)x_1X+\wp(\mu)X\pmod{M^\wp+K}.\]
Thus $\mathfrak s'\equiv \mathfrak s\pmod{M^\wp+K}$
where $\mathfrak s=\mathfrak s_1+\mathfrak s_2$ with
\begin{align*}
\mathfrak s_1&=(1+\mu^2+\mu^4)\kappa_1 x_1+\mu_0^2(1+\mu+\mu^2) x_1\in L=K(x_1),\\
\mathfrak s_2&=\wp(\mu)x_1X+\wp(\mu)X+\mu_0^2X\in M=K(x_1,x_2).
\end{align*}
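
The congruence used above to replace $\mu^3\kappa_1x_1$ can be verified symbolically. The sketch below (ours, not part of the argument) writes $x=x_1$, $k=\kappa_1$ and $m_0=\mu_0^2$, works with the relations $x_1^2=x_1+\kappa_1$ and $X^2=X+\wp(\mu)x_1+\mu_0^2$, and checks that $\mathfrak s'+\mathfrak s+\wp(\mu x_1X+\mu X)$ reduces, modulo $2$, to an element of $K$; this is exactly the statement $\mathfrak s'\equiv\mathfrak s\pmod{M^\wp+K}$.
\begin{verbatim}
import sympy as sp

x, X, k, mu, m0 = sp.symbols('x X k mu m0')   # x = x_1, k = kappa_1, m0 = mu_0^2
w = mu**2 + mu                                 # wp(mu)

def reduce_rel(e):
    """Rewrite e using x^2 = x + k and X^2 = X + w*x + m0 (identities in M)."""
    e = sp.expand(e)
    while True:
        P, new, changed = sp.Poly(e, x, X), sp.Integer(0), False
        for (i, j), c in P.terms():
            if i >= 2:
                new += c * x**(i - 2) * (x + k) * X**j; changed = True
            elif j >= 2:
                new += c * x**i * (X + w*x + m0) * X**(j - 2); changed = True
            else:
                new += c * x**i * X**j
        e = sp.expand(new)
        if not changed:
            return e

kappa2 = mu**2*k + m0
s1p = (1 + mu**2 + mu**3)*k*x + m0*(1 + mu)*x          # s_1'
s2p = kappa2*X                                         # s_2'
s1  = (1 + mu**2 + mu**4)*k*x + m0*(1 + mu + mu**2)*x  # s_1
s2  = w*x*X + w*X + m0*X                               # s_2
wp_elt = (mu*x*X + mu*X)**2 + (mu*x*X + mu*X)          # wp(mu x_1 X + mu X), char 2

E = reduce_rel((s1p + s2p) + (s1 + s2) + wp_elt)
# modulo 2 the reduced expression involves neither x nor X, i.e. it lies in K:
P = sp.Poly(E, x, X)
assert all(all(co % 2 == 0 for co in sp.Poly(c, k, mu, m0).coeffs())
           for (i, j), c in P.terms() if (i, j) != (0, 0))
\end{verbatim}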
When $m=0$, express $\mu=\omega+\epsilon$ for some $\omega\in (\mathcal O_K/\mathcal M_K)\setminus\mathbb F_2$ and some $\epsilon\in\mathcal M_K$ with $v_K(\epsilon)=e>0$.
Since $\wp(\omega)$ is a unit, $v_M(\mathfrak s_2)=-3u_1$.
Observe that if $e\geq u_1/2$ then $(\omega+\epsilon)^2\kappa_1\equiv \omega^2\kappa_1\bmod \mathcal M_K\subseteq K^\wp$ and thus, since we are only interested in $\kappa_2=\mu^2\kappa_1+\mu_0^2\bmod K^\wp$, we may set $\epsilon=0$. Without loss of generality, we conclude that either $0<e<u_1/2$ or $\epsilon=0$. Replace $\mu$ in the expression for $\mathfrak s_1$:
\[
\mathfrak s_1=(1+\omega+\omega^2)^2\kappa_1x_1+(1+\omega+\omega^2)\mu_0^2x_1+(\epsilon+\epsilon^2)^2\kappa_1x_1+(\epsilon+\epsilon^2)\mu_0^2x_1.
\]
It is clear that $v_L(\mathfrak s_1)$ depends upon whether $\omega^3\neq 1$
or $\omega^3=1$. If $\omega^3\neq 1$, then $v_L(\mathfrak s_1)= -3u_1$ as
$m=0$. If $\omega^3=1$, then
\[
\mathfrak s_1=(\epsilon+\epsilon^2)^2\kappa_1x_1+(\epsilon+\epsilon^2)\mu_0^2x_1
\]
and $v_L(\mathfrak s_1)=-3u_1+4e$. Note that $\mathfrak s_1=0$ if $\omega^3=1$ and $v_K(\epsilon)=e>u_1/2$.
Altogether, this means that when $m=0$,
\[v_M(\mathfrak s_2)=-3u_1, \quad\mbox{and}\quad
v_L(\mathfrak s_1)=\begin{cases}
-3u_1 &\omega^3\neq 1,\\
-3u_1+4e&\omega^3= 1\mbox{ and } \mathfrak s_1\neq0.\end{cases}\]
We consolidate this information in a proposition.
\begin{proposition}\label{Q8-s1-s2}
Let $p=2$. Let $M/K$ be a totally ramified $C_p^2$ extension with upper ramification numbers $u_1\leq u_2$. Set $M=K(x_1,x_2)$
where $\wp(x_i)=\kappa_i$ with
$v_K(\kappa_i)=-u_i$. Since $u_1,u_2$ are odd, $\kappa_2\equiv\mu^2\kappa_1\bmod\mathcal O_K$ for some $\mu\in K$ with $v_K(\mu)=-m\leq 0$. If $m=0$ then
$\mu=\omega+\epsilon$ for some $\omega\in \mathcal O_K/\mathcal M_K$ and $\epsilon\in \mathcal M_K$ with $v_K(\epsilon)=e$. Let $L=K(x_1)$. Then there exist $\mathfrak s_1\in L$ and $\mathfrak s_2\in M$ such that for $u_2\neq u_1$ or equivalently $m>0$,
\[v_L(\mathfrak s_1)=-3u_2\mbox{ and }v_M(\mathfrak s_2)=-(6u_2-u_1).\]
For $u_2=u_1$ or equivalently $m=0$, we have $v_M(\mathfrak s_2)=-3u_1$. And
unless $\mathfrak s_1=0$, we have
\[v_L(\mathfrak s_1)=\begin{cases}-3u_1 &\mbox{ if }\omega^3\neq 1,\\
-3u_1+4e &\mbox{ if $\omega^3= 1$ and $0<e<u_1/2$}.
\end{cases}\]
Note that for $\mathfrak s_1\neq 0$, the inequality
$v_L(\mathfrak s_1)<-u_1$ holds.
Finally,
every $Q_8$-extension $N/K$ that contains $M$ is expressible as $N=M(x_3)$ where $\wp(x_3)=\mathfrak s_1+\mathfrak s_2+\kappa_3$ for some $\kappa_3\in K$.
\end{proposition}
\section{Ramification theory}\label{linking}
Recall the notation thus far: $N/K$ is a totally ramified nonabelian
extension with Galois group $G=\mathrm{Gal}(N/K)$ generated by
$\sigma_1,\sigma_2$ where $\sigma_3=[\sigma_1,\sigma_2]$ fixes a
subfield $M$. The fixed field $M=N^{\sigma_3}$ was initially
expressed as $M=K(x_1,x_2)$ when we were solely concerned with Galois
action. Now that we need the ramification breaks to be involved, we
set $M=K(y_1,y_2)$
such that $\wp(y_i)=\beta_i$ and $u_i=-v_K(\beta_i)$ for $i=1,2$.
Recall that we proved that if
$u_1\leq u_2\leq u_3$ denote the upper and
$l_1\leq l_2\leq l_3$ the lower ramification breaks of $N/K$, then
the lower and upper ramification breaks of $M/K$ are
$l_1\leq l_2$ and $u_1\leq u_2$, respectively.
To determine $l_3$, the
ramification break of $N/M$ (so that $u_3$ is determined by
$l_3-l_2=p^2(u_3-u_2)$), we examine
the ramification break of the
$C_p$-extension $M(x_3)/M$ where \[\wp(x_3)=\mathfrak s(x_1,x_2)+\kappa_3\]
with $\mathfrak s(x_1,x_2)$ first described in Theorem \ref{as-gen} and Corollary \ref{ram-as-gen}, then translated/adjusted into \eqref{s(y,y)} and
\eqref{gD,gM-adjust}. Finally,
using the results of
\S\ref{subsect} and \S\ref{Q-subsect}, namely Propositions \ref{p=2},
\ref{p>2}, \ref{S(x_2)} and \ref{Q8-s1-s2}, we find that $\mathfrak s(x_1,x_2)$ can be replaced modulo $M^\wp+K$ by
\[
\mathfrak s(x_1,x_2)\equiv\begin{cases}
-\URA{\beta_2y_1}{L/K}& H(p^3),\\
\URA{-\beta_2y_1+S(y_1,\beta_1)}{L/K}& D_8,M(p^3), (u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle),\\
\URA{\beta_2y_1}{L/K}+S(y_2,\beta_2)& D_8,M(p^3), u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle,\\
\mathfrak s_1+\mathfrak s_2& Q_8.
\end{cases}\]
This leads naturally to an interest in the ramification breaks of the following auxiliary $C_p$-extensions:
\begin{itemize}
\item $L(z_0)/L$ for $z_0\in K^\mathrm{sep}$ such that $\wp(z_0)=\URA{\beta_2y_1}{L/K}$,
\item $L(z_1)/L$ for $z_1\in K^\mathrm{sep}$ such that $\wp(z_1)=\URA{-\beta_2y_1+S(y_1,\beta_1)}{L/K}$,
\item $K(y_2,z_2)/K(y_2)$ for $z_2\in K^\mathrm{sep}$ such that $\wp(z_2)=S(y_2,\beta_2)$,
\item $L(z_3)/L$ for $z_3\in K^\mathrm{sep}$ such that $\wp(z_3)=\mathfrak s_1$ if $\mathfrak s_1\neq 0$,
\item $M(z_4)/M$ for $z_4\in K^\mathrm{sep}$ such that $\wp(z_4)=\mathfrak s_2$,
\end{itemize}
which we attach
to a diagram of $M/K$, and where
for easy reference, we label each $C_p$-subextension with its ramification break. The purpose of this diagram is to help us determine the ramification break of $M(x_3)/M$ with $x_3$ expressed as in \eqref{x_3}. Thus we add the extension $M(x_3)/M$ to our diagram to remind us of the ``target'' in this exercise.
\begin{center}
\begin{tikzpicture}
\node (Q1) at (0,0) {$\mathbf{K}$};
\node (Q2) at (2,2) {$\mathbf{K(y_2)}$};
\node (Q3) at (0,4) {\hspace*{2cm}$\mathbf{M=K(y_1,y_2)}$};
\node (Q4) at (-2,2) {$\mathbf{L=K(y_1)}$};
\node (Q6) at (0,6) {$N=M(x_3)$};
\node (Q7) at (-3,4) {$L(z_1)$};
\node (Q8) at (-4.5,4) {$L(z_0)$};
\node (Q9) at (4,4) {$K(y_2,z_2)$};
\node (Q10) at (-1.8,4) {$L(z_3)$};
\node (Q11) at (2.25,6) {$M(z_4)$};
\draw (Q1)--(Q2) node [pos=0.5, above,inner sep=0.25cm] {$\scriptstyle{u_2}$};
\draw (Q1)--(Q4) node [pos=0.7, below,inner sep=0.25cm] {$\scriptstyle{u_1}$};
\draw (Q4)--(Q3) node [pos=0.7, below,inner sep=0.25cm] {$\scriptstyle{l_2}$};
\draw (Q2)--(Q3) node [pos=0.7, below,inner sep=0.25cm] {$\scriptstyle{u_1}$};
\draw[dotted] (Q3)--(Q6) node [pos=0.6, right,inner sep=0.15cm] {$\scriptstyle{l_3}$};
\draw (Q4)--(Q7) node [pos=0.35, above,inner sep=0.35cm] {$\scriptstyle{t_1}$};
\draw (Q4)--(Q8) node [pos=0.5, above,inner sep=0.15cm] {$\scriptstyle{t_0}$};
\draw (Q2)--(Q9) node [pos=0.7, below,inner sep=0.25cm] {$\scriptstyle{t_2}$};
\draw (Q4)--(Q10) node [pos=2.6, below,inner sep=2.65cm] {$\scriptstyle{t_3}$};
\draw (Q3)--(Q11) node [pos=0.7, below,inner sep=0.25cm] {$\scriptstyle{t_4}$};
\end{tikzpicture}
\end{center}
Using
Propositions \ref{p=2},
\ref{p>2} and \ref{S(x_2)},
the ramification breaks are determined and recorded below. Note that by
Remark \ref{=same-as>}, these expressions for the ramification breaks hold for $p=2$ as well as for $p>2$.
\begin{align*}
t_0&=\max\{ps+u_1,pr-(p-2)u_1\},\\
t_1&=\max\{(p^2-p+1)u_1,ps+u_1,pr-(p-2)u_1\},\mbox{ or}\\
&\max\{(p^2-2p+2)u_1,pt+u_1,pr-(p-2)u_1\}\mbox{ and }s=(p-1)u_1,\\
t_2&=(p^2-p+1)u_2,\\
t_3&=3u_2\mbox{ if }u_2\neq u_1,\mbox{ otherwise }t_3>u_1,\\
t_4&=6u_2-u_1 \mbox{ if }u_2\neq u_1,\mbox{ otherwise }t_4=3u_1.
\end{align*}
Because the elements $\URA{\beta_2y_1}{L/K}$,
$\URA{-\beta_2y_1+S(y_1,\beta_1)}{L/K}$, $\mathfrak s_1$ and
$S(y_2,\beta_2)$ all lie within proper subfields of $M$, while we
are interested in the $M$-group valuation of these elements, we record the following:
\begin{lemma}
The inequalities $t_0,t_1,t_3>l_2$ and $t_2>u_1$ hold.
\end{lemma}
\begin{proof}
Prove $t_0>l_2=pu_2-(p-1)u_1$ by checking the cases: $u_2=r$ and $u_2=s$. Next we prove $t_1>l_2$, which is clear when $t_1=\max\{(p^2-p+1)u_1,ps+u_1,pr-(p-2)u_1\}$. So consider the case when
$t_1=\max\{(p^2-2p+2)u_1,pt+u_1,pr-(p-2)u_1\}$ and $s=(p-1)u_1$.
If $u_2=r$, then $t_1>l_2$ follows as before.
On the other hand, if $u_2=s=(p-1)u_1$, then $l_2=p(p-1)u_1-(p-1)u_1<(p^2-2p+2)u_1$, which also gives $t_1>l_2$.
Prove $t_3>l_2$ by checking the cases $u_2=u_1$ and $u_2>u_1$.
Finally, $t_2>u_1$ because $t_2=(p^2-p+1)u_2\geq 3u_2$.
\end{proof}
Now that we have established these inequalities,
we need a well-known
result.
\begin{lemma}\label{C_p^2-breaks}
Let $M/L$ be a ramified $C_p$-extension with the ramification break $l$. Let $\alpha\in L$ be $L$-reduced: $g\nu_{L}(\alpha)=v_L(\alpha)=-a<0$ and $p\nmid a$. Suppose
\[a>l.\] Let $z\in L^\mathrm{sep}$ such that
$\wp(z)=\alpha$. Then $M(z)/M$ is a ramified $C_p$-extension with
ramification break $pa-(p-1)l$ and \[g\nu_{M}(\alpha)=-(pa-(p-1)l).\]
Otherwise if $a\leq l$ and
$M(z)/M$ is nontrivial, the ramification
break of $M(z)/M$ is less than or equal to $l$.
Sharper upper bounds than this exist, but this is enough for our purpose.
\end{lemma}
\begin{proof} Suppose $l<a$.
Since $M(z)/L$ is a totally ramified $C_p^2$-extension with upper
ramification breaks $l<a$, the lower ramification breaks are
$l_1=l$ and $l_2=l+p(a-l)$. Passing to the ramification
filtration of $\mathrm{Gal}(M(z)/M)$ yields the result. More generally, when
$M(z)/M$ is nontrivial, the ramification break of
$M(z)/M$ is at most $\max\{a,pa-(p-1)l\}$, with equality when $a\neq l$.
\end{proof}
We may express $x_3=\bar{x}_3+z_5$ where $\wp(z_5)=\kappa_3$ and
\begin{equation}\label{x_3}
\bar{x}_3=\begin{cases}
-z_0 &H(p^3),\\
z_1& D_8,M(p^3), ( u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle),\\
z_0+z_2
& D_8,M(p^3),u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle,\\
z_3+z_4& Q_8.
\end{cases}
\end{equation}
Letting $s_i$ denote the ramification break for $M(z_i)/M$ we find,
based upon Lemma \ref{C_p^2-breaks}, that $s_i=pt_i-(p-1)l_2$ for
$i=0,1,3$, $s_2=pt_2-(p-1)u_1$ and of course, $s_4=t_4$.
At this point, we have collected all the information we need to
determine the ramification break $\bar{l}_3$ of $M(\bar{x}_3)/M$ where $\bar{x}_3$ is
expressed as in \eqref{x_3}.
Consider the cases when $G\cong H(p^3)$,
or $G\cong D_8,M(p^3)$ with $u_2=u_1$ or $G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle$.
In these cases $\bar{l}_3=s_i$ for $i=0,1$, respectively. The upper ramification numbers for $M(\bar{x}_3)/K$ are $u_1,u_2,\bar{u}_3$ where
$\bar{u}_3-u_2=(\bar{l}_3-l_2)/p^2=(t_i-l_2)/p$. Separating $D_8$ off from $M(p^3)$ for clarity, this establishes the fact that
\begin{equation}\label{bar{u}-1}
\bar{u}_3=\begin{cases}
\max\left\{s+u_1,r+\frac{u_1}{p}\right\} & H(p^3),\\
u_2+u_1 & D_8, (u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle),\\
\max\left\{pu_1,s+u_1,r+\frac{u_1}{p}\right\} &M(p^3), (u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle),\\
&\hspace*{1.5cm}\mbox{ and }\mu_{p-1}\not\equiv -1\bmod \mathcal M_K,\\
\max\left\{(p-1)u_1+\frac{u_1}{p},t+u_1,r+\frac{u_1}{p}\right\} &
M(p^3), (u_2=u_1\mbox{ or }G_{l_2}=\langle \sigma_1^p,\sigma_2\rangle),\\
&\hspace*{1.5cm}\mbox{ and } \mu_{p-1}\equiv -1\bmod \mathcal M_K.\\
\end{cases}\end{equation}
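
The bookkeeping behind \eqref{bar{u}-1}, namely $\bar{u}_3=u_2+(t_i-l_2)/p$ with $l_2=pu_2-(p-1)u_1$, can be double-checked symbolically; the following sketch (ours) verifies each branch.
\begin{verbatim}
import sympy as sp

p, u1, u2, r, s, t = sp.symbols('p u1 u2 r s t', positive=True)
l2 = p*u2 - (p - 1)*u1                 # second lower break of M/K

def u3_from(t_i):                      # \bar u_3 = u_2 + (t_i - l_2)/p
    return u2 + (t_i - l2)/p

# H(p^3): the branches of t_0 give the branches of \bar u_3
assert sp.simplify(u3_from(p*s + u1) - (s + u1)) == 0
assert sp.simplify(u3_from(p*r - (p - 2)*u1) - (r + u1/p)) == 0
# the extra branches of t_1 for M(p^3)
assert sp.simplify(u3_from((p**2 - p + 1)*u1) - p*u1) == 0
assert sp.simplify(u3_from((p**2 - 2*p + 2)*u1) - ((p - 1)*u1 + u1/p)) == 0
assert sp.simplify(u3_from(p*t + u1) - (t + u1)) == 0
\end{verbatim}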
Now consider the case where
$G\cong Q_8$, or
$G\cong D_8,M(p^3)$ and $u_2\neq u_1$, $G_{l_2}=\langle \sigma_1\rangle$. In these cases, $\bar{x}_3$ is the sum of two terms. Indeed, it is an element of $M(z_0,z_2)$ or $M(z_3,z_4)$.
\begin{lemma}
$M(z_0,z_2)/M$ is a $C_p^2$-extension with two upper ramification breaks $s_0<s_2$. If $u_2\neq u_1$,
then $M(z_3,z_4)/M$ is a $C_p^2$-extension with
upper ramification breaks $s_3<s_4$.
Otherwise, if $\mathfrak s_1\neq 0$, the upper ramification breaks are
$s_4 <s_3$.
\end{lemma}
\begin{proof}
That $s_2>s_0$ follows from the fact that $pu_2+u_1\geq t_0$ and $u_2>u_1$. For $u_2\neq u_1$, $s_3<s_4$ reduces to $u_1<u_2$. For
$u_2= u_1$
and $\mathfrak s_1\neq 0$, $s_4<s_3$ reduces to $e<u_1/2$.
\end{proof}
We use this lemma for the case when $G\cong D_8,M(p^3)$, $u_2\neq u_1$
and $G_{l_2}=\langle \sigma_1\rangle$, and also when $G\cong Q_8$.
The ramification break of $M(\bar{x}_3)/M$ where $\bar{x}_3=z_0+z_2$
is $s_2$, which is also the third lower ramification break for
$M(\bar{x}_3)/K$. The ramification break of $M(\bar{x}_3)/M$ where
$\bar{x}_3=z_3+z_4$ is $s_4$ if $u_2\neq u_1$. On the other hand, when
$u_2=u_1$ and $\mathfrak s_1\neq 0$, it is $s_3$. And if $u_2=u_1$ and
$\mathfrak s_1= 0$ (so $M(z_3,z_4)$ is a $C_p$-extension), it is $s_4$. These
are also the third lower ramification breaks for $M(\bar{x}_3)/K$.
Thus we determine that:
\begin{equation}\label{bar{u}-2}
\bar{u}_3=\begin{cases}
pu_2&D_8,M(p^3), u_2\neq u_1\mbox{ and }G_{l_2}=\langle \sigma_1\rangle,\\
2u_2 & Q_8, u_2\neq u_1,\mbox{ or }u_2= u_1,\omega^3\neq 1,\\
\max\left\{\frac{3u_1}{2},2u_1-2e\right\} & Q_8, u_2=u_1, \omega^3= 1.
\end{cases}\end{equation}
It is an easy exercise to determine from \eqref{bar{u}-1} and
\eqref{bar{u}-2} that either $\bar{u}_3$ is not an integer (because
$p\nmid u_1$) or $\bar{u}_3$ is an integer congruent to zero modulo
$p$. Now recall that regardless of the Galois group, $N=M(x_3)$ with
$x_3=\bar{x}_3+z_5$ where $\wp(z_5)=\kappa_3$. Without loss of
generality, $\kappa_3$ is $K$-reduced. This means that unless
$\kappa_3\in\mathfrak O_K$, $v_K(\kappa_3)=-b_3<0$ with $p\nmid b_3$, in
which case $b_3$ is the ramification break of $K(z_5)/K$ and an upper
ramification break for the Galois extension $M(\bar{x}_3,z_5)/K$,
which contains $M(x_3)/K$. The upper ramification breaks of
$M(\bar{x}_3,z_5)/K$ include $u_1\leq u_2<\bar{u}_3$ as well. And, since
$\bar{u}_3$ and $b_3$ can never be equal ($b_3$ is prime to $p$, while $\bar{u}_3$ is either not an integer or is divisible by $p$),
the upper ramification breaks of $M(x_3)/K$ are $u_1\leq u_2<\max\{\bar{u}_3,b_3\}$. This together with the expressions in \eqref{bar{u}-1} and
\eqref{bar{u}-2} appear in
Theorem \ref{sharp-bound}.
\section{Application}\label{notting}
As explained in \cite{nottingham}, there is interest in explicit
constructions of finite nonabelian subgroups of the Nottingham group.
The authors describe a process that uses the Witt vector description
of cyclic extensions of degree $p^n$ in characteristic $p$ to produce
elements of order $p^n$. It would be interesting to follow this
process and use the Artin-Schreier descriptions in Theorem
\ref{as-gen} to identify in the Nottingham group some nonabelian subgroups of
order $p^3$. Furthermore, one might then use Theorem \ref{sharp-bound}
to determine the upper ramification sequences for these subgroups and
thus address one of the open problems listed in
\cite[\S1.5]{nottingham}.
\end{document}
\begin{document}
\title{A classical analog for the electron spin state}
\author{K.B. Wharton, R.A. Linck and C.H. Salazar-Lazaro}
\affiliation{San Jos\'e State University, Department of Physics and Astronomy, San Jos\'e, CA 95192-0106}
\email{wharton@science.sjsu.edu}
\date{\today}
\begin{abstract}
Despite conventional wisdom that spin-1/2 systems have no classical analog, we introduce a set of classical coupled oscillators with solutions that exactly map onto the dynamics of an unmeasured electron spin state in an arbitrary, time-varying, magnetic field. While not addressing the quantum measurement problem (discrete outcomes and their associated probabilities), this new classical analog yields a classical, physical interpretation of Zeeman splitting, geometric phase, the electron's doubled gyromagnetic ratio, and other quantum phenomena. This Lagrangian-based model can be used to clarify the division between classical and quantum systems, and might also serve as a guidepost for certain approaches to quantum foundations.
\end{abstract}
\maketitle
\section{Introduction}
Despite the conventional view of quantum spin as being an inherently non-classical phenomenon\cite{LL}, there is a rich history of exploring classical analogs for spin-1/2 systems in particular. For example, there exists a well-developed classical analog to a two-level quantum system, based upon the classical polarization (CP) of a plane electromagnetic wave\cite{Mcmaster,HnS,Klyshko,Malykin,Zap}. Although this CP-analog has been used to motivate introductory quantum mechanics texts\cite{Baym, Sakurai}, the power and depth of the analogy are not widely appreciated. For example, the CP-analog contains a straightforward classical picture for a $\pi$ geometric phase shift resulting from a full $2\pi$ rotation of the spin angular momentum, but this fact is rarely given more than a casual mention (with at least one notable exception\cite{Klyshko}). Still, the CP-analog contains certain drawbacks, especially when the analogy is applied to an electron spin state in an arbitrary, time-varying magnetic field. These drawbacks, along with complications involving quantum measurement outcomes, have prevented a general agreement on exactly which aspects of quantum spin are inherently non-classical.
In this paper, we extend the CP-analog to a system of four coupled oscillators, and prove that this classical system exactly reproduces the quantum dynamics of an unmeasured electron spin state in an arbitrary magnetic field. This result demonstrates, by explicit construction, that if there are any aspects of an electron spin state that cannot be described in a classical context, those aspects must lie entirely in the domain of quantum measurement theory, not the dynamics. In order to accomplish this feat, it turns out there must necessarily be a many-to-one map from the classical system to the quantum state. In other words, the classical system contains a natural set of ``hidden variables'', accessible to the classical analog, but hidden to a complete specification of the quantum state.
Some might argue that no classical analog is needed to discuss quantum spin dynamics because an unmeasured quantum state governed by the Schr\"odinger-Pauli Equation (SPE) could be interpreted as a set of classical quantities coupled by first-order differential equations. One can even analyze the classical Dirac field and deduce quantities which map nicely onto quantum spin concepts \cite{Ohanian}. But simply reinterpreting quantum wave equations as classical fields is not a very enlightening ``analog'', especially if the spin state is considered separately from the spatial state. For example, the use of complex numbers in these equations is significantly different from how they are used to encode phases in classical physics, and therefore has no obvious classical interpretation. And if a system of first-order differential equations cannot be directly transformed into a set of second-order differential equations, it is unclear how certain classical physics concepts (\textit{e.g.} generalized forces) can be applied. As we will demonstrate below, the full SPE \textit{can} be expanded to a system of second-order equations, but only by adding additional ``hidden variables'' along with new overall constraints. The classical analog presented here arrives at this result from a different direction, starting with a simple Lagrangian.
Apart from clarifying how quantum spin might best be presented to students, the question of which aspects of quantum theory are truly ``non-classical'' is of deep importance for framing our understanding of quantum foundations. For example, Spekkens has recently demonstrated a simple classical toy theory that very roughly maps onto two-level quantum systems, showing several examples of purportedly-quantum phenomena that have a strong classical analog\cite{Spekkens}. Still, neither Spekkens nor other prominent classical-hidden-variable approaches to two-level quantum systems\cite{Bell, KS} have concerned themselves with classical analogies to the curious \textit{dynamics} of such systems. Our result demonstrates that starting with the dynamics can naturally motivate particular foundational approaches, such as a natural hidden variable space on which classical analogies to quantum measurement theory might be pursued. And because this classical analog derives from a simple Lagrangian, it is potentially a useful test bed for approaches where the action governs the transition probabilities, as in quantum field theory.
The plan for the paper is as follows: After summarizing the CP-analog in Section II, a related two-oscillator analog (similar to a Foucault Pendulum) is presented in Section III. This two-oscillator analog is shown to be identical to a quantum spin state in a one-dimensional magnetic field; a three-dimensional field requires an extension to four oscillators, as shown in Section IV. The final section discusses and summarizes these results -- the most notable of which is that a classical system can encompass all the dynamics of a quantum spin-1/2 state.
\section{The Classical Polarization Analog}
For a classical plane electromagnetic (EM) wave moving in the z-direction with frequency $\omega_0$, the transverse electric fields $E_x(t)$ and $E_y(t)$ in the $z=0$ plane can always be expressed in two-vector notation as the real part of
\begin{equation}
\label{eq:cp}
\spi{E_x}{E_y}= \spi{a}{b} e^{i \omega_0 t}.
\end{equation}
Here $a$ and $b$ are complex coefficients, encoding the amplitude and phase of two orthogonal polarizations.
A strong analogy can be drawn between the two-vector $(a,b)$ on the right side of (\ref{eq:cp}) -- the well-known ``Jones vector'' -- and the spinor $\ket{\chi}$ that defines a spin-1/2 state in quantum mechanics. The quantum normalization condition $\langle \chi |\chi \rangle=1$ maps to a normalization of the energy density of the EM wave, and the global phase transformation $\ket{\chi} \to \ket{\chi} \, exp(i \theta)$ is analogous to changing the EM wave's phase.
This equivalence between a spinor and a Jones vector can be made more explicit by projecting them both onto the surface of a unit sphere in an abstract space (the ``Bloch sphere'' and the ``Poincar\'e sphere'' respectively). Each spinor/Jones vector maps to a unit vector in the angular direction $(\theta, \phi)$, according to the usual convention $a=cos(\theta/2)$, $b=sin(\theta/2)exp(i\phi)$. This is more familiarly described in terms of the sphere's six intersections with centered cartesian axes
\begin{align}
\label{eq:defs}
\ket{z_+} &= \spi{1}{0} & \ket{z_-} &= \spi{0}{1} \notag \\
\ket {x_+} &= \frac{1}{\sqrt{2}} \spi{1}{1} & \ket{x_-} &= \frac{1}{\sqrt{2}} \spi{1}{-1} \\
\ket{y_+} &= \frac{1}{\sqrt{2}} \spi{1}{i} & \ket{y_-} &= \frac{1}{\sqrt{2}} \spi{1}{-i}. \notag
\end{align}
The CP-analog therefore maps linear x-polarized light to a spin-up electron $\ket{z_+}$ and linear y-polarized light to a spin-down electron $\ket{z_-}$. Electrons with measured spins $\ket{x_\pm}$ correspond to xy-diagonal linear polarizations, while $\ket{y_\pm}$ correspond to the two circular-polarization modes. In this framework, note that quantum superpositions are no different than ordinary EM superpositions.
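As a quick consistency check of this dictionary (our addition, not part of the original argument), the following short numerical sketch computes the Bloch vector $\langle \bm{\sigma}\rangle$ for each ket in (\ref{eq:defs}), confirming that the six states land on the $\pm x$, $\pm y$ and $\pm z$ axes of the Bloch/Poincar\'e sphere; it assumes only numpy.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

kets = {'z+': [1, 0], 'z-': [0, 1],
        'x+': [1, 1], 'x-': [1, -1],
        'y+': [1, 1j], 'y-': [1, -1j]}

for name, k in kets.items():
    chi = np.array(k, dtype=complex)
    chi /= np.linalg.norm(chi)
    # Bloch vector (<sigma_x>, <sigma_y>, <sigma_z>) for this ket
    bloch = [float(np.real(chi.conj() @ s @ chi)) for s in (sx, sy, sz)]
    print(name, np.round(bloch, 12))
\end{verbatim}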
The analogy extends further, but this is already sufficient background to classically motivate some of the strange features of spin-1/2 systems. Consider the rotation of a Jones vector around the equator of a Poincar\'e sphere, corresponding to a continuous rotation of the direction of linear polarization -- from horizontal, through vertical, and back to the original horizontal state. Any transformation that leads to this rotation (say, a physical rotation of the wave medium) will then be analogous to a magnetic-field induced precession of a spin state around the corresponding equator of the Bloch sphere.
The key point is that the above-described $2\pi$ rotation around the Poincar\'e sphere merely corresponds to a $\pi$ rotation of the EM polarization in physical space. And this is equivalent to a $\pi$ phase shift in the resulting wave; it would now interfere destructively with an identical unrotated wave. Of course, this is also the observed effect for a $2\pi$ rotation of a quantum spin state around the Bloch sphere, although in the latter case the net geometric phase shift is generally thought to be inexplicable from a classical perspective.
What the CP-analog accomplishes is to demonstrate that such behavior does indeed have a straightforward classical interpretation, because the geometrical phase of the spin state is directly analogous to the overall phase of the physical EM wave \cite{Klyshko}. The key is that the Poincar\'e sphere does not map to physical space, so a $2\pi$ rotation need not return the EM wave to its original state. The CP-analog therefore advocates the viewpoint that the Bloch sphere should not map to physical space, even for an electron spin state. This viewpoint will be implemented below in a fully consistent fashion.
To our knowledge, it has not been explicitly noted that this classical analogy naturally motivates an apparently-doubled gyromagnetic ratio for the electron. In the above-described Poincar\'e sphere rotation, as the EM wave is being rotated around its propagation axis, suppose an observer had reference to another system (say, a gyroscope) that truly recorded rotation in physical space. As compared to the gyroscope, the Jones vector would seem to complete a full rotation in half the time. If one interpreted the Poincar\'e sphere as corresponding to real space, the natural conclusion would be that the Jones vector was coupled to the physical rotation at double its ``classical'' value. Misinterpreting the Bloch sphere as corresponding to physical space would therefore lead to exactly the same conclusion for the electron's gyromagnetic ratio.
The classical polarization analog can be pursued much further than is summarized here, mapping the quantum dynamics induced by a magnetic field to the effects of different birefringent materials \cite{Klyshko,Malykin,Zap,Baym,Kubo}. The two EM modes in such materials then map to the two energy eigenstates, and generic rotations around the Poincar\'e sphere can be given a physical implementation. Still, this analog becomes quite abstract; there is no easy-to-describe vector quantity of a birefringent material that corresponds to the magnetic field, and the situation is even more convoluted for time-dependent field analogs.
Another disanalogy is the relation between the magnitude of the Zeeman energy splitting and the difference in wavenumber of the two EM modes. A more natural analogy would relate energy to a classical frequency, but the two EM modes always have identical frequencies. And of course, an electromagnetic plane wave cannot be pictured as a confined system with internal spin-like properties. In the next sections, we develop a novel classical analog that alleviates all of these problems.
\section{A Foucault-Pendulum-Like Analog}
The central success of the CP-analog stems from its use of two physical oscillators, which need not be electromagnetic. For any two uncoupled classical oscillators with the same natural frequency $\omega_0$, their solution can also be encoded by two complex numbers $a$ and $b$, representing the amplitude and phase of each oscillator. Therefore the use of Jones vectors and the Poincar\'e sphere does not only pertain to EM waves.
As an intermediate step towards our proposed classical analog for an electron spin state, consider this classical Lagrangian:
\begin{eqnarray}
L_1(x_1,x_2,\dot{x}_1,\dot{x}_2,t)=\frac{1}{2}\left[ p_1^2+p_2^2 - \omega_0^2(x_1^2+x_2^2)\right], \label{eq:L1}\\
\spi{p_1}{p_2} \equiv \spi{\dot{x_1}}{\dot{x_2}} + \left[ \begin{array}{cc} 0 & -\beta \\ \beta & 0 \end{array} \right] \spi{x_1}{x_2}. \label{eq:p1}
\end{eqnarray}
Here the $x$'s are all purely real quantities, and $\beta$ is some coupling constant that may be time-dependent. (As $\beta\to 0$, this becomes two uncoupled classical oscillators). Equation (\ref{eq:p1}) can be rewritten as $\bm{p}=\dot{\bm{x}}+\bm{B} \bm{x}$, where the conjugate momenta $p_1$ and $p_2$ form the column vector $\bm{p}$, etc. Note that squaring the $\bm{B}$ matrix yields $\bm{B}^2=-\beta^2 \bm{I}$. In this notation, $L_1=(\bm{p\cdot p}-\omega_0^2 \,\bm{x\cdot x})/2$.
First, consider the case of a constant $\beta$. The Euler-Lagrange equations of motion for $L_1$ are then
\begin{eqnarray}
\ddot{x_1}+(\omega_0^2-\beta^2) x_1 = 2\beta \dot{x}_2, \nonumber \\
\ddot{x_2}+(\omega_0^2-\beta^2) x_2 = -2\beta \dot{x}_1. \label{eq:2d1}
\end{eqnarray}
These equations happen to describe the projection of a Foucault pendulum into a horizontal plane (with orthogonal axes $x_1$ and $x_2$) in the small-angle limit. Specifically, $\beta=\Omega sin(\phi)$, where $\Omega$ is the rotation frequency of the Earth and $\phi$ is the latitude of the pendulum. (The natural frequency of such a pendulum is actually $\sqrt{\omega_0^2-\beta^2}$, because of a $\beta^2$ term in $L_1$ that does not appear in the Foucault pendulum Lagrangian, but for a constant $\beta$ this is just a renormalization of $\omega_0$).
The precession of the Foucault pendulum therefore provides a qualitative way to understand the effect of a constant $\beta$ on the unnormalized Jones vector $\bm{x}$. Given a non-zero $\beta$, it is well-known that linear oscillation in $x_1$ (mapping to $\ket{z_+}$ on the Poincar\'e sphere) precesses into a linear oscillation in $x_2$ (mapping to $\ket{z_-}$) and then back to $x_1$ ($\ket{z_+}$). But this $2\pi$ rotation of $\bm{x}$ around the Poincar\'e sphere merely corresponds to a $\pi$ rotation of the pendulum's oscillation axis in physical space, leaving the overall phase of the pendulum shifted by $\pi$, exactly as was described for the CP-analog.
Quantitatively, solutions to (\ref{eq:2d1}) are of the form $exp(-i\omega_\pm t)$, where $\omega_\pm =\omega_0 \pm \beta$. The generic solution can always be expressed as the real component of
\begin{equation}
\label{eq:2ds}
\spi{x_1}{x_2}= a\spi{1}{i} e^{-i (\omega_0- \beta ) t} + b\spi{1}{-i} e^{-i (\omega_0 + \beta )t}.
\end{equation}
Here $a$ and $b$ are arbitrary complex parameters (although again note that $x_1$ and $x_2$ are purely real).
One notable feature of this result is that the coupling constant $\beta$ has the effect of producing two solutions with well-defined frequencies equally spaced above and below the natural frequency -- just like Zeeman splitting of an electron's energy levels in a magnetic field. Furthermore, the modes that correspond to these two pure frequencies happen to be right- and left-hand circular motion of the pendulum, directly analogous to $\ket{y_+}$ and $\ket{y_-}$. A comparison of (\ref{eq:2ds}) with standard results from quantum mechanics reveals that $\beta$ produces exactly the same dynamics on $\bm{x}$ as does a constant magnetic field in the $y$ direction on an electron spin state (apart from an overall $exp(-i\omega_0 t)$ global phase).
Given the strong analogy between a constant $\beta$ and a constant (one-component) magnetic field, one can ask whether this correspondence continues to hold for a time-varying $\beta$. In this case the strict analogy with the Foucault pendulum fails (thanks to the $\beta^2$ terms in $L_1$) and comparing the exact solutions becomes quite difficult. But starting from the Euler-Lagrange equations for a time-varying $\beta$,
\begin{eqnarray}
\ddot{x_1}+(\omega_0^2-\beta^2) x_1 &=& 2\beta \dot{x}_2+\dot{\beta} x_2, \nonumber \\
\ddot{x_2}+(\omega_0^2-\beta^2) x_2 &=& -2\beta \dot{x}_1-\dot{\beta} x_1, \label{eq:2d2}
\end{eqnarray}
one can compare them directly to the relevant Schr\"odinger-Pauli Equation (SPE). Using a $y$-directed magnetic field $B_y=2\beta/\gamma$ (where $\gamma=-e/m$ is the gyromagnetic ratio) and an overall phase oscillation corresponding to a rest mass $mc^2=\hbar\omega_0$, this yields
\begin{equation}
\label{eq:sey}
i\hbar \frac{\partial}{\partial t} \spi{a}{b} = \hbar \left[ \begin{array}{cc} \omega_0 & i\beta \\ -i\beta & \omega_0 \end{array} \right] \spi{a}{b}.
\end{equation}
Taking an additional time-derivative of (\ref{eq:sey}), and simplifying the result using (\ref{eq:sey}) itself, it is possible to derive the following second-order differential equations:
\begin{eqnarray}
\ddot{a}+(\omega_0^2-\beta^2) a &=& 2\beta \dot{b}+\dot{\beta} b, \nonumber \\
\ddot{b}+(\omega_0^2-\beta^2) b &=& -2\beta \dot{a}-\dot{\beta} a. \label{eq:sey2}
\end{eqnarray}
While $a$ and $b$ are still complex, the real and imaginary parts have naturally split into separate coupled equations that are formally identical to (\ref{eq:2d2}). So every solution to the SPE (\ref{eq:sey}) must therefore have a real component which solves (\ref{eq:2d2}).
At first glance it may seem that the imaginary part of (\ref{eq:sey2}) contains another set of solutions not encoded in the real part of (\ref{eq:sey2}), but these solutions are not independent because they also solve (\ref{eq:sey}). The solution space of (\ref{eq:sey}) is a complex vector space of dimension 2 over the complex numbers. It can be verified that the SPE with a rest-mass oscillation cannot admit purely real solutions. Also, it is an elementary exercise to show that if a vector space over the complex numbers has a function basis given by $\{ z_1, z_2 \}$ and there is no complex linear combination of $z_1, z_2$ that yields a purely real function, then the set $\{ Re(z_1), Re(z_2), Im(z_1), Im(z_2) \}$ is a linearly independent set of real functions where linear independence is taken over the reals instead of the complex numbers. From this elementary result, it follows that if $\{z_1, z_2\}$ is a basis for the solution space of (\ref{eq:sey}) over the complex numbers, then the set of functions $X= \{Re(z_1), Re(z_2), Im(z_1), Im(z_2) \}$ spans a 4-d real subspace of the solution space of (\ref{eq:2d2}). Since (\ref{eq:2d2}) indeed has a 4-d solution space over the reals, it follows that the subspace spanned by $X$ is indeed the full solution space of (\ref{eq:2d2}). In summary, the solutions to the real, second-order differential equations (\ref{eq:2d2}) exactly correspond to the solutions to the complex, first-order differential equations (\ref{eq:sey}).
For a one-dimensional magnetic field, these results explicitly contradict the conventional wisdom concerning the inherent complexity of the spin-1/2 algebra. By moving to real second-order differential equations -- a natural fit for classical systems -- it is possible to retain exactly the same dynamics, even for a time-varying magnetic field. The resulting equations not only account for a Zeeman-like frequency splitting, but demonstrate that the quantum geometric phase can be accounted for as the classical phase of an underlying, high-frequency oscillation (a strict analog to the usually-ignored rest mass oscillation at the Compton frequency).
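This equivalence is easy to check numerically. The following minimal sketch (our illustration only; it assumes numpy and scipy and uses arbitrary parameter values) integrates the first-order system (\ref{eq:sey}) and the second-order system (\ref{eq:2d2}) for the same time-varying $\beta(t)$, starting the oscillators from the real part of the spinor with velocities supplied by (\ref{eq:sey}); the two trajectories should agree to within the integration tolerance.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

w0 = 20.0                                 # rest-mass frequency omega_0
beta = lambda t: 2.0 + np.sin(0.7*t)      # illustrative time-varying coupling
betadot = lambda t: 0.7*np.cos(0.7*t)

def spe(t, y):        # Eq. (eq:sey): i d/dt (a,b) = [[w0, i*b],[-i*b, w0]](a,b)
    a, b = y[0] + 1j*y[1], y[2] + 1j*y[3]
    da = -1j*(w0*a + 1j*beta(t)*b)
    db = -1j*(-1j*beta(t)*a + w0*b)
    return [da.real, da.imag, db.real, db.imag]

def osc(t, y):        # Eq. (eq:2d2), written as a first-order system
    x1, x2, v1, v2 = y
    bt = beta(t)
    a1 = -(w0**2 - bt**2)*x1 + 2*bt*v2 + betadot(t)*x2
    a2 = -(w0**2 - bt**2)*x2 - 2*bt*v1 - betadot(t)*x1
    return [v1, v2, a1, a2]

a0, b0 = 0.8 + 0.1j, 0.3 - 0.5j           # arbitrary initial spinor (a,b)
y0_spe = [a0.real, a0.imag, b0.real, b0.imag]
d0 = spe(0.0, y0_spe)
y0_osc = [a0.real, b0.real, d0[0], d0[2]]  # x = Re(a,b), xdot from the SPE

T = np.linspace(0.0, 5.0, 2001)
s1 = solve_ivp(spe, (0, 5), y0_spe, t_eval=T, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(osc, (0, 5), y0_osc, t_eval=T, rtol=1e-10, atol=1e-12)

# Re(a), Re(b) from the SPE should track x1, x2 of the classical oscillators
print(np.max(np.abs(s1.y[0] - s2.y[0])), np.max(np.abs(s1.y[2] - s2.y[1])))
\end{verbatim}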
Despite the breadth of the above conclusions, this coupled-oscillator analog has a major drawback as an analog to an electron spin state. It is limited by the lack of coupling parameters that correspond to magnetic fields in the $x$ or $z$ directions, associated with the appropriate rotations around the Poincar\'e sphere. The classical model in the next section solves this problem, although it comes at the expense of the Foucault pendulum's easily-visualized oscillations.
\section{The Full Analog: Four Coupled Oscillators}
In order to expand the above example to contain an analog of an arbitrarily-directed magnetic field, two more coupling parameters must enter the classical Lagrangian. But with only two oscillators, there are no more terms to couple. With this in mind, one might be tempted to extend the above example to three coupled oscillators, but in that case the odd number of eigenmodes makes the dynamics unlike that of a spin-1/2 system.
It turns out that four coupled oscillators can solve this problem, so long as the eigenmodes come in degenerate pairs. By extending $\bm{x}$ to a real 4-component vector (as opposed to the 2-component vector in the previous section), one can retain the same general form of the earlier Lagrangian:
\begin{equation}
\label{L2}
L_2(\bm{x},\dot{\bm{x}},t)=\frac{1}{2}(\bm{p\cdot p}-\omega_0^2 \,\bm{x\cdot x}).
\end{equation}
Here we are still using the definition $\bm{p}\equiv\dot{\bm{x}}+\bm{B} \bm{x}$, but now with a $4\times4$ matrix encoding three independent coupling coefficients,
\begin{equation}
\label{eq:Bdef}
\bm{B} = \left[ \begin{array}{cccc}
0 & -\beta_z & \beta_y & -\beta_x \\
\beta_z & 0 & \beta_x & \beta_y \\
-\beta_y & -\beta_x & 0 & \beta_z \\
\beta_x & -\beta_y & -\beta_z & 0 \end{array} \right].
\end{equation}
Again, note that squaring the matrix $\bm{B}$ yields $\bm{B}^2=-\beta^2 \bm{I}$, where now $\beta \equiv \sqrt{\beta_x^2 + \beta_y^2 + \beta_z^2}$.
\subsection{Constant Magnetic Fields}
The four corresponding Euler-Lagrange equations of motion (for constant $\beta$'s) can be written as
\begin{equation}
\label{eq:modes}
\left[ 2\bm{B}\frac{\partial}{\partial t}+\bm{I}\left(\omega_0^2-\beta^2+\frac{\partial^2}{\partial t^2} \right) \right]\bm{x}=0.
\end{equation}
Solving (\ref{eq:modes}) for the eigenmodes via the replacement $\partial/\partial t \to -i\omega$ yields only two solutions, as the eigenvalues are doubly degenerate. They are of the same form as in the previous section: $\omega_\pm = \omega_0 \pm \beta$.
Because of the degeneracy, the full classical solutions can be expressed in a variety of ways. It is convenient to consider a vector with the cartesian components $\bm{\beta}=(\beta_x,\beta_y,\beta_z)$, and then to transform it into spherical coordinates $(\beta,\theta,\phi)$. Using the two-spinors $\ket{y_+}$ and $\ket{y_-}$ defined in (\ref{eq:defs}), the general solutions to (\ref{eq:modes}) can then be written as the real part of
\begin{align}
\label{eq:full}
\fspi{x_1(t)}{x_2(t)}{x_3(t)}{x_4(t)}= & \, a\spi{cos(\theta/2)\ket{y_-}}{sin(\theta/2)e^{i\phi}\ket{y_-}} e^{-i \beta t} + b\spi{sin(\theta/2)\ket{y_-}}{-cos(\theta/2)e^{i\phi}\ket{y_-}} e^{i \beta t} \notag \\
& + c\spi{-sin(\theta/2)e^{i\phi}\ket{y_+}}{cos(\theta/2)\ket{y_+}} e^{-i \beta t} + d\spi{cos(\theta/2)e^{i\phi}\ket{y_+}}{sin(\theta/2)\ket{y_+}} e^{i \beta t}.
\end{align}
Here the global $exp(-i\omega_0 t)$ dependence has been suppressed; one multiplies by this factor and takes the real part to get the actual coordinate values. Having doubled the number of classical oscillators, the solution here is parameterized by {\it four} complex numbers ($a,b,c,d$).
This solution bears a striking similarity to the known dynamics of an electron spin state in an arbitrary uniform magnetic field $\vec{B}$ with components $(B,\theta,\phi)$. In the basis defined above in (\ref{eq:defs}), those solutions to the SPE are known to be
\begin{equation}
\label{eq:qmev}
\spi{\chi_+ (t)}{\chi_- (t)} = f \spi{cos(\theta/2)}{sin(\theta/2)e^{i\phi}} e^{-ie Bt/2m} + g \spi{sin(\theta/2)}{-cos(\theta/2)e^{i\phi}} e^{ie Bt/2m},
\end{equation}
where the left side of this equation is the spinor $\ket{\chi (t)}$. Here $f$ and $g$ are two complex constants subject to the normalization condition $|f|^2+|g|^2=1$.
It is not difficult to see how all possible SPE solutions (\ref{eq:qmev}) have corresponding classical solutions (\ref{eq:full}). Equating $\bm{\beta}=\vec{B}e/(2m)$, adding the quantum-irrelevant global phase dependence $exp(-i\omega_0 t)$ to $\ket{\chi (t)}$, and setting $c=d=0$ in (\ref{eq:full}) makes the two expressions appear almost identical if $a=\sqrt{2}f$ and $b=\sqrt{2}g$. (The $\sqrt{2}$'s appear in the definition of $\ket{y_-}$). The final step is to map the fully-real $x$'s to the complex $\chi$'s according to
\begin{equation}
\label{eq:map1}
\chi_+=x_1+ix_2 \, \, , \, \, \chi_- = x_3 + ix_4.
\end{equation}
This mapping turns out to be just one of many possible ways to convert a solution of the form (\ref{eq:qmev}) into the form (\ref{eq:full}). For example, setting $a=b=0$, $c=\sqrt{2}f$ and $d=\sqrt{2}g$ corresponds to the alternate map
\begin{equation}
\label{eq:map2}
\chi_+=x_3-ix_4 \, \, , \, \, \chi_- = -x_1 + ix_2.
\end{equation}
More generally, one can linearly combine the above two maps by introducing two complex parameters $A$ and $B$. Under the assignment $a=\sqrt{2}Af$, $b=\sqrt{2}Ag$, $c=\sqrt{2}Bf$ and $d=\sqrt{2}Bg$ (which can always be done if $ad\!=\!bc$), the connection between the above equations (\ref{eq:full}) and (\ref{eq:qmev}) corresponds to
\begin{eqnarray}
\label{eq:map3}
\chi_+&=& \frac{(x_1+ix_2)A^*+(x_3-ix_4)B^*}{|A|^2+|B|^2}, \nonumber \\
\chi_- &=& \frac{(-x_1 + ix_2)B^*+(x_3 + ix_4)A^*}{|A|^2+|B|^2}.
\end{eqnarray}
This shows that for any solution (\ref{eq:full}) that obeys the $ad\!=\!bc$ condition, it will always encode a particular quantum solution to (\ref{eq:qmev}) via the map (\ref{eq:map3}), albeit with extra parameters $A$, $B$, and a specified global phase. Remarkably, this $ad\!=\!bc$ condition happens to be equivalent to the simple classical constraint $L_2(t)=0$. Imposing such a constraint on (\ref{eq:modes}) therefore yields a classical system where all solutions can be mapped to the dynamics of a spin-1/2 quantum state in an arbitrary, constant, magnetic field -- along with a number of ``hidden variables'' not encoded in the quantum state.
\subsection{Time-varying Magnetic Fields}
As in Section III, a generalization to time-varying magnetic fields is best accomplished at the level of differential equations, not solutions. Allowing $\bm{\beta}(t)$ to vary with time again adds a new term to the Euler-Lagrange equations, such that they now read:
\begin{equation}
\label{eq:ELx}
\left[ 2\bm{B}\frac{\partial}{\partial t}+\frac{\partial \bm{B}}{\partial t}+\bm{B}^2+\bm{I}\left(\omega_0^2+\frac{\partial^2}{\partial t^2} \right) \right]\bm{x}=0.
\end{equation}
Here $\bm{B}$ is given by (\ref{eq:Bdef}) with time-dependent $\beta_x$, $\beta_y$, and $\beta_z$. This must be again compared with the SPE with an explicit rest mass oscillation $mc^2=\hbar\omega_0$:
\begin{equation}
\label{eq:SE}
i\hbar \frac{\partial}{\partial t} \spi{\chi_+}{\chi_-} = \hbar \left( \omega_0 \bm{I}+\bm{\beta}\cdot\bm{\sigma} \right) \spi{\chi_+}{\chi_-},
\end{equation}
where again we have used $\bm{\beta}(t)=\vec{B}(t)e/(2m)$ to relate the coupling parameters in $L_2$ with the magnetic field $\vec{B}(t)$. (Here $\bm{\sigma}$ is the standard vector of Pauli matrices).
While it is possible to use the map (\ref{eq:map3}) to derive (\ref{eq:ELx}) from (\ref{eq:SE}) (and its time-derivative) via brute force, it is more elegant to use the quaternion algebra, as it is closely linked to both of the above equations. Defining the two quaternions $\mathfrak{q}=x_1+ix_2+jx_3+kx_4$, and $\mathfrak{b}=0+i\beta_z-j\beta_y+k\beta_x$, allows one to rewrite (\ref{eq:ELx}) as the quaternionic equation
\begin{equation}
\label{eq:ELq}
\ddot{\mathfrak{q}}+2\dot{\mathfrak{q}}\mathfrak{b}+\mathfrak{q}(\mathfrak{b}^2+\dot{\mathfrak{b}}+\omega_0^2)=0.
\end{equation}
Note that while $\bm{B}$ operates on $\bm{x}$ from the left, the $\mathfrak{b}$ acts as a right-multiplication on $\mathfrak{q}$ because (\ref{eq:Bdef}) is of the form of a right-isoclinic rotation in SO(4).
While it is well-known that the components of $i\bm{\sigma}$ act like purely imaginary quaternions, the precise mapping of $i\bm{\beta}\cdot\bm{\sigma}$ to $\mathfrak{b}$ depends on how one maps $\ket{\chi}$ to a quaternion $\mathfrak{s}$. Using the above map (\ref{eq:map1}), combined with the above definition of $\mathfrak{q}$, it happens that $i(\bm{\beta}\cdot\bm{\sigma})\ket{\chi}=\mathfrak{s}\mathfrak{b}$, where $\mathfrak{s}$ is the quaternionic version of $\ket{\chi}$ (as defined by the combination of (\ref{eq:map1}) and $\mathfrak{q}=\mathfrak{s}$). This allows one to write the SPE (\ref{eq:SE}) as
\begin{equation}
\label{eq:SEq}
\dot{\mathfrak{s}}+\mathfrak{s}\mathfrak{b}=-i\omega_0\mathfrak{s}.
\end{equation}
This equation uses a quaternionic $i$, not a complex $i$, acting as a left-multiplication (again because of the particular mapping from $\ket{\chi}$ to $\mathfrak{s}$ defined by (\ref{eq:map1})). While the SPE would look more complicated under the more general map (\ref{eq:map3}) as applied to $\mathfrak{q}=\mathfrak{s}$, this is equivalent to applying the simpler map (\ref{eq:map1}) along with
\begin{equation}
\label{eq:qtos}
\mathfrak{q}=[Re(A)+iIm(A)+jRe(B)-kIm(B)]\mathfrak{s}\equiv \mathfrak{u}\mathfrak{s},
\end{equation}
so long as $\mathfrak{u}$ is a constant unit quaternion (linking the normalization of $\mathfrak{q}$ and $\mathfrak{s}$).
Keeping the SPE in the form (\ref{eq:SEq}), we want to show that for any solution $\mathfrak{s}$ to (\ref{eq:SEq}), there is a family of solutions $\mathfrak{q=us}$ to the classical oscillators (\ref{eq:ELq}). The time-derivative of (\ref{eq:SEq}) can be expanded as
\begin{equation}
\label{eq:SEq2a}
\ddot{\mathfrak{s}}+2\dot{\mathfrak{s}}\mathfrak{b}+\mathfrak{s}\dot{\mathfrak{b}} = \dot{\mathfrak{s}}\mathfrak{b}-i\omega_0\dot{\mathfrak{s}}.
\end{equation}
Using (\ref{eq:SEq}) to eliminate the $\dot{\mathfrak{s}}$'s on the right side of (\ref{eq:SEq2a}) then yields
\begin{equation}
\label{eq:SEq2}
\ddot{\mathfrak{s}}+2\dot{\mathfrak{s}}\mathfrak{b}+\mathfrak{s}(\mathfrak{b}^2+\dot{\mathfrak{b}}+\omega_0^2)=0.
\end{equation}
If $\mathfrak{s}$ solves (\ref{eq:SEq}), it must solve (\ref{eq:SEq2}), but this is exactly the same equation as (\ref{eq:ELq}). And because $\mathfrak{u}$ is multiplied from the left, $\mathfrak{q=us}$ must then also solve (\ref{eq:ELq}). This concludes the proof that all solutions to the SPE (\ref{eq:SE}) -- even for a time-varying magnetic field -- have an exact classical analog in the solutions to (\ref{eq:ELx}).
The question remains as to which subset of solutions to (\ref{eq:ELx}) has this quantum analog. If the above connection exists between $\mathfrak{q}$ and $\mathfrak{s}$, then by definition $\mathfrak{s=u^*q}$, where $\mathfrak{u}$ is a unit quaternion. This substitution transforms the left side of (\ref{eq:SEq}) into $\mathfrak{u^*p}$, where $\mathfrak{p}=\dot{\mathfrak{q}}+\mathfrak{qb}$ is the quaternionic version of the canonical momentum. Therefore, from (\ref{eq:SEq}), $\mathfrak{p}=-\mathfrak{u}i\omega_0\mathfrak{u^*q}$. As $\mathfrak{u}$ is a unit quaternion, this yields a zero Lagrangian density $L_2=(|\mathfrak{p}|^2-\omega_0^2|\mathfrak{q}|^2)/2=0$, consistent with the constant-field case.
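The proof above can also be illustrated numerically. The sketch below (again ours, using numpy/scipy and arbitrary illustrative parameters, not the authors' code) integrates the SPE (\ref{eq:SE}) for a time-varying field, maps the spinor onto the four oscillators via (\ref{eq:map1}), and compares against a direct integration of (\ref{eq:ELx}); it also monitors the constraint $L_2\approx 0$ along the classical trajectory.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

w0 = 30.0                                  # rest-mass frequency (illustrative)
bvec = lambda t: np.array([0.8*np.sin(t), 1.1, 0.5*np.cos(2*t)])   # beta(t)
bdot = lambda t: np.array([0.8*np.cos(t), 0.0, -1.0*np.sin(2*t)])

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def Bmat(b):                               # the matrix of Eq. (eq:Bdef)
    bx, by, bz = b
    return np.array([[0., -bz,  by, -bx],
                     [bz,  0.,  bx,  by],
                     [-by, -bx, 0.,  bz],
                     [bx, -by, -bz,  0.]])

def spe(t, y):                             # i dchi/dt = (w0 I + beta.sigma) chi
    chi = y[:2] + 1j*y[2:]
    b = bvec(t)
    dchi = -1j*((w0*np.eye(2) + b[0]*sx + b[1]*sy + b[2]*sz) @ chi)
    return np.concatenate([dchi.real, dchi.imag])

def osc(t, y):                             # second-order system (eq:ELx)
    x, v = y[:4], y[4:]
    B = Bmat(bvec(t))
    a = -2*B @ v - (Bmat(bdot(t)) + B @ B + w0**2*np.eye(4)) @ x
    return np.concatenate([v, a])

chi0 = np.array([0.7 + 0.3j, -0.2 + 0.62j])
chi0 /= np.linalg.norm(chi0)
y0 = np.concatenate([chi0.real, chi0.imag])
d0 = spe(0.0, y0)
x0 = np.array([chi0[0].real, chi0[0].imag, chi0[1].real, chi0[1].imag])
v0 = np.array([d0[0], d0[2], d0[1], d0[3]])   # velocities supplied by the SPE

T = np.linspace(0.0, 4.0, 1601)
s1 = solve_ivp(spe, (0, 4), y0, t_eval=T, rtol=1e-10, atol=1e-12)
s2 = solve_ivp(osc, (0, 4), np.concatenate([x0, v0]), t_eval=T,
               rtol=1e-10, atol=1e-12)

x_spe = np.array([s1.y[0], s1.y[2], s1.y[1], s1.y[3]])   # map (eq:map1)
print('map error :', np.max(np.abs(x_spe - s2.y[:4])))

p = s2.y[4:] + np.stack([Bmat(bvec(t)) @ s2.y[:4, i]
                         for i, t in enumerate(T)], axis=1)
L2 = 0.5*(np.sum(p**2, axis=0) - w0**2*np.sum(s2.y[:4]**2, axis=0))
print('max |L_2| :', np.max(np.abs(L2)))      # should remain near zero
\end{verbatim}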
\section{Discussion}
The Foucault pendulum is often discussed in the context of classical analogs to quantum spin states\cite{Klyshko}, but the discussion is typically restricted to geometric phase. Section III demonstrated that the analogy runs much deeper, as the Coriolis coupling between the two oscillatory modes is exactly analogous to a one-dimensional magnetic field acting on an electron spin state. The analog also extends to the dynamics, and provides a classical description of Zeeman energy splitting, geometric phase shifts, and the appearance of a doubled gyromagnetic ratio. Apart from a global phase, there were no additional classical parameters needed to complete the Section III analog.
In Section IV, we demonstrated that it is possible to take four classical oscillators and physically couple them together in a particular manner (where the three coupling coefficients correspond to the three components of a magnetic field), yielding the equations of motion given in (\ref{eq:ELx}). Imposing a global physical constraint ($L_2=0$) on this equation forces the solutions to have an exact map (\ref{eq:map3}) to solutions of the Schr\"odinger-Pauli equation for a two-level quantum system with a rest-mass oscillation. This is a many-to-one map, in that there are additional parameters in the solution to (\ref{eq:ELx}) that can be altered without affecting the corresponding quantum solution, including an overall phase. From a quantum perspective, these additional parameters would be ``hidden variables''.
Perhaps one reason this analog has not been noticed before is that many prior efforts to find classical analogs for the spin-1/2 state have started with a physical angular momentum vector, in real space. Rotating such a physical vector by $2\pi$, it is impossible to explain a $\pi$ geometric phase shift without reference to additional elements outside the state itself, such as in Feynman's coffee cup demonstration \cite{Feynman}. In the four-oscillator analog, however, the expectation value of the spin angular momentum merely corresponds to an unusual combination of physical oscillator parameters:
\begin{equation}
\label{eq:expS}
\left< \bm{S\cdot\hat{e}} \right> = -\frac{\hbar}{2\omega_0} \bm{p\cdot B(\hat{e}) x}.
\end{equation}
Here $\bm{\hat{e}}$ is an arbitrary unit vector, and the above definition of $\bm{B(\beta)}$ in (\ref{eq:Bdef}) is used to define $\bm{B(\hat{e})}$. Note, for example, that a sign change of both $\bm{x}$ and $\bm{p}$ leaves $\left<\bm{S}\right>$ unchanged. This is indicative of the fact that the overall phase of the oscillators is shifted by $\pi$ under a $2\pi$ rotation of $\left<\bm{S}\right>$, exactly as in the CP-analog and the Foucault pendulum.
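For the record, (\ref{eq:expS}) can be checked directly against the quantum expectation value $\tfrac{\hbar}{2}\langle\chi|\bm{\sigma}\cdot\bm{\hat{e}}|\chi\rangle$. The short sketch below (ours, with $\hbar=1$, numpy, a constant field along $z$, and arbitrary illustrative numbers) uses the closed-form SPE solution, builds $\bm{x}$ and $\bm{p}$ from the map (\ref{eq:map1}), and should find agreement to machine precision.
\begin{verbatim}
import numpy as np

w0, bz = 20.0, 1.3                         # rest-mass frequency, field along z
f = 0.6 + 0.2j
g = np.sqrt(1 - abs(f)**2) * np.exp(0.4j)  # enforce |f|^2 + |g|^2 = 1

t = np.linspace(0.0, 3.0, 7)
chi_p = f*np.exp(-1j*(w0 + bz)*t)          # exact SPE solution for B along z
chi_m = g*np.exp(-1j*(w0 - bz)*t)

x = np.array([chi_p.real, chi_p.imag, chi_m.real, chi_m.imag])  # map (eq:map1)
xdot = np.array([(-1j*(w0 + bz)*chi_p).real, (-1j*(w0 + bz)*chi_p).imag,
                 (-1j*(w0 - bz)*chi_m).real, (-1j*(w0 - bz)*chi_m).imag])

def Bmat(bx, by, bz_):                     # the matrix of Eq. (eq:Bdef)
    return np.array([[0., -bz_,  by, -bx],
                     [bz_,  0.,  bx,  by],
                     [-by, -bx,  0.,  bz_],
                     [bx, -by, -bz_,  0.]])

p = xdot + Bmat(0.0, 0.0, bz) @ x          # canonical momentum p = xdot + B x

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
chi = np.array([chi_p, chi_m])

for e, sig in (((1., 0., 0.), sx), ((0., 1., 0.), sy), ((0., 0., 1.), sz)):
    classical = -np.einsum('it,ij,jt->t', p, Bmat(*e), x) / (2*w0)
    quantum = 0.5*np.real(np.einsum('it,ij,jt->t', chi.conj(), sig, chi))
    print(np.max(np.abs(classical - quantum)))   # near machine precision
\end{verbatim}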
This result explicitly demonstrates that if there is any inherently non-classical aspect to a quantum spin-1/2 state, such an aspect need not reside in the dynamics. On the other hand, if the system is measured, this classical analog cannot explain why superpositions of eigenmodes are never observed, or indeed what the probability distribution of measurements should be. That analysis resides in the domain of quantum measurement theory, and these results do not indicate whether or not that domain can be considered to have a classical analog.
With this in mind, these results should still be of interest to approaches where the usual quantum state is not treated as a complete description of reality. The hidden variables that naturally emerge from the above analysis are the complex parameters $A$ and $B$ (or equivalently, the unit quaternion $\mathfrak{u}$). These parameters effectively resulted from the doubling of the parameter space (from two to four oscillators), but do not seem to have any quantitative links to prior hidden-variable approaches. Still, they are loosely aligned with the doubling of the ontological state space in Spekkens's toy model \cite{Spekkens}, as well as with the doubling of the parameter space introduced when moving from the first-order Schr\"odinger equation to the second-order Klein Gordon equation \cite{KGE}. Another point of interest is that this analog stems from a relatively simple Lagrangian, $L_2$, and there is good reason to believe that any realistic model of quantum phenomena should have the same symmetries as a Lagrangian density \cite{WMP}.
One final question raised by these results is whether or not it is possible to construct a working mechanical or electrical version of the classical oscillators described in Section IV. If this were possible, it would make a valuable demonstration concerning the dynamics of an unmeasured electron spin state. Even if it were not possible, some discussion of these results in a quantum mechanics course might enable students to utilize some of their classical intuition in a quantum context.
\begin{acknowledgments}
The authors are indebted to Patrick Hamill for recognizing (\ref{eq:L1}) as the Foucault pendulum Lagrangian; further thanks are due to Ian Durham, David Miller, and William Wharton. An early version of this work was completed when KW was a Visiting Fellow at the Centre for Time in the Sydney Centre for Foundations of Science, University of Sydney.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Higher order A-stable schemes for the wave equation using a recursive convolution approach}
\date{}
\begin{abstract}
In several recent works \cite{Causley2013a}, \cite{Causley2013}, we developed a new second order, A-stable approach to wave propagation problems based on the method of lines transpose (MOL$^T$) formulation combined with alternating direction implicit (ADI) schemes. Because our method is based on an integral solution of the ADI splitting of the MOL$^T$ formulation, we are able to easily embed non-Cartesian boundaries and include point sources with exact spatial resolution. Further, we developed an efficient $O(N)$ convolution algorithm for rapid evaluation of the solution, which makes our method competitive with explicit finite difference (e.g., FDTD) solvers, both in terms of accuracy and time to solution, even for Courant numbers slightly larger than 1. We have demonstrated the utility of this method by applying it to a range of problems with complex geometry, including cavities with cusps.
In this work, we present several important modifications to our recently developed wave solver. We obtain a family of wave solvers which are unconditionally stable, accurate to order $2P$, and require $O(P^d N)$ operations per time step, where $N$ is the number of spatial points, and $d$ the number of spatial dimensions. We obtain these schemes by including higher derivatives of the solution, rather than increasing the number of time levels. The novel aspect of our approach is that the higher derivatives are constructed using successive applications of the convolution operator.
We develop these schemes in one spatial dimension, and then extend the results to higher dimensions by reformulating the ADI scheme to include recursive convolution. Thus, we retain a fast, unconditionally stable scheme, which does not suffer from the large dispersion errors characteristic of the ADI method. We demonstrate the utility of the method by applying it to a host of wave propagation problems. This method holds great promise for developing higher order, parallelizable algorithms for solving hyperbolic PDEs, and can also be extended to parabolic PDEs.
\end{abstract}
\section{Introduction}
In recent works \cite{Causley2013a}, \cite{Causley2013}, the method of lines transpose (MOL$^T$) has been utilized to solve the wave equation, resulting in a second order accurate, A-stable numerical scheme. The solution is constructed using a boundary-corrected integral equation, which is derived in a semi-discrete setting. Thus, the solution at time $t^{n+1}$ is found by convolving the solution at previous time levels against the semi-discrete Green's function. Upon spatial discretization, traditional convolution requires $O(N^2)$ operations for a total of $N$ spatial points per time step. However, we have developed a fast convolution algorithm for the one-dimensional problem, which reduces the computational cost to $O(N)$ operations \cite{Causley2013a}. This efficiency was additionally extended to the wave equation in higher dimensions by applying alternating direction implicit (ADI) splitting to the semi-discrete elliptic differential operator.
Our solver is intended to act as a field solver for Maxwell's equations in plasma simulations. In this regard, the MOL$^T$ approach has three distinct advantages: it can capture time-dependent point sources (particles) with exact spatial resolution; it is $O(N)$, A-stable, and second order accurate; and it can incorporate complex geometries by embedding boundaries in a Cartesian mesh. However, we also point out that our methods are quite suitable for general electromagnetic and acoustic problems, and are competitive with standard explicit finite difference methods (for example, the traditional FDTD scheme of Yee), both in terms of computational complexity and accuracy, but without the CFL time step restriction.
While there is no stability restriction placed on our A-stable scheme, considerations of accuracy present themselves when the CFL number becomes large. Indeed, when large time steps are taken, the anisotropies introduced by the dimensional splitting are very pronounced. This problem has also been observed in the FDTD-ADI implementation of Maxwell's equations, which was introduced simultaneously by Namiki \cite{Namiki1999} and Zheng et al. \cite{Zheng1999}. The splitting error can be understood as a dispersive term in the leading order of the truncation error \cite{Namiki2000}. Fornberg et al. \cite{Fornberg2001,Fornberg2007} have studied the problem in great detail, and shown that if the dispersion error can be suppressed with higher order spatial resolution, the resulting scheme for Maxwell's equations can be lifted to higher order in time using Richardson extrapolation, thus removing the ADI anisotropy.
In this work, rather than working with the first order Maxwell formulation, we shall apply ADI splitting directly to the wave equation. This is not a new idea, and in fact was first proposed by Lees \cite{Lees1962} shortly after Peaceman and Rachford applied it to the heat equation. Lees built upon the pioneering work of von Neumann, who first proposed an implicit finite difference solution for the 1d wave equation, which when viewed in a semi-discrete sense is essentially identical to our equation \eqref{eqn:Semi} below.
The notion of obtaining higher order ADI algorithms is also not novel, having first been presented by Fairweather \cite{Fairweather1965}. But Fairweather's higher order approach, unlike the second order scheme of Lees, did not remain A-stable. The work on higher order ADI implementations for second order hyperbolic PDEs continues to this day, with recent emphasis placed on the use of compact finite differences \cite{Deng2012}. But what of higher order schemes which are also A-stable?
In the pioneering work by Dahlquist \cite{Dahlquist1963}, it is stated that no linear multistep scheme applied to the problem $y' = f(x,y)$ can simultaneously achieve A-stability and order of accuracy greater than 2. Slightly less well known is that a decade later, the same result was proven again by Dahlquist \cite{Dahlquist1978} for periodic initial value problems of the form $y'' = f(x,y)$. But the Dahlquist barrier can be broken by not limiting the ODE solver to a linear multistep scheme, a fact pointed out by Ehle \cite{Ehle1968}, in his study of multistage implicit Runge-Kutta, as well as multiderivative schemes.
As such, a linear multiderivative scheme can achieve higher orders of accuracy, and remain A-stable. This result has been known for decades in the solution of periodic initial value problems, beginning with the work of Numerov, and Lambert and Watson \cite{Lambert1976}, and remains active to this day \cite{Stavroyiannis2009}, \cite{Stavroyiannis2010}.
On the other hand, multiderivative methods for solving hyperbolic PDEs have been considered much more sparsely in the literature. This could be attributed to the fact that, in contrast to ODEs, the introduction of spatial dependence creates several complications. In particular, we now must consider boundary conditions, and how the inclusion of higher derivatives affects the solution near the boundaries. Additionally, there is the issue of computational complexity. For instance, the traditional tridiagonal solves used in finite difference algorithms will now be replaced by banded matrices of growing bandwidth, which must be inverted at each time step. These issues were addressed very recently in \cite{Luo2013} for the telegraph equation, where an implicit Hermite interpolation in time was used to achieve a fourth order, A-stable numerical scheme. The 3-point spatial stencil was maintained by using fourth order compact finite differences, and consistent endpoint corrections were derived for Dirichlet boundary conditions.
In addition to enhancing the stability properties of higher order methods, the use of multiderivative schemes also holds great promise for computational efficiency in parallel codes. This case was made recently in \cite{Seal2013}, where multiderivative methods are developed for hyperbolic conservation laws. Since multiderivative schemes require more function evaluations but smaller memory footprints to achieve greater accuracy, they fit perfectly into the computing paradigm of GPUs.
In this article, we obtain A-stable schemes of arbitrary order, using a MOL$^T$ formulation of the wave equation, and implicitly including higher order derivatives. However, we construct the derivatives using a novel approach: recursive convolution. The cornerstone of our method lies in the fact that, using recursive applications of the convolution operators introduced in \cite{Causley2013a}, the inversions of higher derivatives can be performed analytically, so that the resulting scheme is made explicit, even at the semi-discrete level. In constructing the analytical convolution operators, we incorporate the boundary conditions directly, and as a result Dirichlet and periodic boundary conditions can be implemented to higher order with no additional complexity.
Furthermore, the convolution operator can be applied in $O(N)$ operations, and so the schemes we find in $d = 1,2$ and 3 dimensions will achieve accuracy $2P$ in $O(P^d N)$ operations per time step. Finally, the convolution algorithm utilizes exponential recurrence relations, which effectively localize contributions to the spatial integrals, making it suitable for domain decomposition. Thus, our algorithm will scale to multiple processors much more efficiently than traditional ADI solvers, which utilize global, algebraic solvers. A parallel implementation of our algorithm is the subject of future work.
The rest of this paper is laid out as follows. In Section \ref{sec:Background}, we briefly describe the main features of the MOL$^T$ algorithm. In Section \ref{sec:Scheme1d}, we derive a family of schemes of order $2P$, and prove their order of accuracy, and unconditional stability. In Section \ref{sec:SchemeADI}, we generalize the first order results to higher spatial dimensions, producing ADI methods of order $2P$ which will be A-stable. We conclude with a brief discussion in Section \ref{sec:Conclusion}.
\section{Background and notation}
\label{sec:Background}
We begin with a review of the relevant details of our method. More details can be found in \cite{Causley2013a}, \cite{Causley2013}. Consider the one-dimensional wave equation
\begin{equation}
\label{eqn:wave}
\frac{1}{c^2 }u_{tt} =u_{xx}, \quad x \in \mathbb{R}
\end{equation}
with prescribed initial conditions.
Using the method of lines transpose (MOL$^T$), we introduce the semi-discrete solution $u^{n} =u^{n}(x)$, which approximates $u(x,t)$ at $t = t_n = n\Delta t$. We then replace the second time derivative with the standard second order finite difference stencil, so that
\begin{equation}
\label{eqn:Semi}
\frac{u^{n+1}-2u^n+u^{n-1}}{(c\Delta t)^2} = \frac{\partial^2}{\partial x^2} \left(u^n+\frac{u^{n+1}-2u^n+u^{n-1}}{\beta^2}\right),
\end{equation}
where we also introduce the averaging parameter $\beta>0$, to ensure that the second spatial derivative appears implicitly, and also that the scheme remains symmetric about $t^n$.
After some rearranging of terms, we arrive at the modified Helmholtz equation, which can be written in the form
\begin{equation}
\label{eqn:L_eq}
\mathcal{L}\left[ u^{n+1}-2u^n+u^{n-1} \right] = \beta^2\left( u^n - \mathcal{L}\left[ u^n\right] \right)
\end{equation}
where
\begin{equation}
\label{eqn:L_def}
\mathcal{L}: = 1- \frac{1}{\alpha^2}\frac{\partial^2}{\partial x^2} , \quad \alpha = \frac{\beta}{c\Delta t}, \quad 0<\beta \leq 2.
\end{equation}
The differential equation can be solved by convolution with the free space Green's function, which in 1d means that
\begin{equation}
\label{eqn:L_Inverse}
\mathcal{L}^{-1}[u(x)]:= \frac{\alpha}{2}\int_{-\infty}^\infty e^{-\alpha|x-y|}u(y)dy.
\end{equation}
The definition can additionally be modified to include boundary corrections on a finite domain (see \cite{Causley2013a}). We also introduce a new operator related to \eqref{eqn:L_Inverse}
\begin{equation}
\label{eqn:D_def_1d}
u^{n+1}-2u^n+u^{n-1} = -\beta^2\mathcal{D}[u^n], \quad \mathcal{D}[u](x):= u(x) - \mathcal{L}^{-1}[u](x),
\end{equation}
which will be used extensively in the ensuing discussion. The semi-discrete solution \eqref{eqn:D_def_1d} is therefore defined in terms of a convolution integral, and as mentioned, traditional methods of discretization in space will bear a cost of $O(N^2)$ operations per time step to evaluate $u^{n+1}(x)$ at $N$ spatial points. However, we have developed a fast convolution algorithm for \eqref{eqn:D_def_1d}, so that the numerical solution is obtained in $O(N)$ operations per time step. This is accomplished by first performing a ``characteristic'' decomposition $\mathcal{L}^{-1}[u](x) = I^L(x) + I^R(x)$, where
\begin{align*}
I^L(x) = \frac{\alpha}{2}\int_{-\infty}^x u(y)e^{-\alpha(x-y)}dy, \quad I^R(x) = \frac{\alpha}{2}\int_x^{\infty} u(y)e^{-\alpha(y-x)}dy,
\end{align*}
so that both integrands decay exponentially away from $x$. Additionally, they satisfy exponential recurrence relations, which means that
\[
I^L(x) = e^{-\alpha \delta}I^L(x-\delta) + \frac{\alpha}{2}\int_0^\delta e^{-\alpha y} u(x-y)dy, \quad I^R(x) = e^{-\alpha \delta}I^R(x+\delta) + \frac{\alpha}{2}\int_0^\delta e^{-\alpha y} u(x+y)dy.
\]
These expressions are exact, and upon discretization, the integrals are approximated with $O(1)$ operations at each of the $N$ points, hence resulting in an $O(N)$ scheme. We have also proven that the resulting fully discrete solution is second order accurate in time and space, and unconditionally stable (i.e., A-stable) for $0<\beta\leq 2$ \cite{Causley2013a}.
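To make the recurrence concrete, the following minimal sketch (our illustration, not the implementation of \cite{Causley2013a}; it assumes numpy, and the piecewise-linear local quadrature is our choice for demonstration) evaluates $\mathcal{L}^{-1}[u]$ on a uniform grid in $O(N)$ operations and compares against the exact free-space result $\sin(kx)/(1+(k/\alpha)^2)$ away from the domain ends; halving the mesh spacing should reduce the interior error by roughly a factor of four, consistent with second order accuracy.
\begin{verbatim}
import numpy as np

def fast_conv(u, x, alpha):
    """O(N) evaluation of (alpha/2) * int exp(-alpha|x-y|) u(y) dy on a
    uniform grid, via the exponential recurrences for I^L and I^R.
    Local integrals use a piecewise-linear approximation of u."""
    N = len(x)
    d = x[1] - x[0]                                # grid spacing delta
    E = np.exp(-alpha * d)
    P = (1.0 - E) / alpha                          # int_0^d e^{-a y} dy
    Q = (1.0 - E*(1.0 + alpha*d)) / alpha**2       # int_0^d y e^{-a y} dy
    IL = np.zeros(N)
    IR = np.zeros(N)
    for j in range(1, N):                          # sweep left to right
        IL[j] = E*IL[j-1] + 0.5*alpha*(u[j]*P + (u[j-1] - u[j])*Q/d)
    for j in range(N - 2, -1, -1):                 # sweep right to left
        IR[j] = E*IR[j+1] + 0.5*alpha*(u[j]*P + (u[j+1] - u[j])*Q/d)
    return IL + IR

alpha, k = 30.0, 2.0*np.pi
for N in (401, 801):
    x = np.linspace(-1.0, 1.0, N)
    w = fast_conv(np.sin(k*x), x, alpha)
    exact = np.sin(k*x) / (1.0 + (k/alpha)**2)     # free-space symbol of L^{-1}
    mask = np.abs(x) < 0.3                         # ignore points near the ends
    print(N, np.max(np.abs(w[mask] - exact[mask])))
\end{verbatim}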
\begin{remark}
While it is not obvious from the update scheme \eqref{eqn:D_def_1d}, the solution $u^{n+1}(x)$ is the solution of an implicit scheme. This is more apparent from viewing equation \eqref{eqn:L_eq}, where $\mathcal{L}$ is acting on the unknown solution $u^{n+1}$. The fact that $u^{n+1}$ is given explicitly by \eqref{eqn:D_def_1d} is a feature of the MOL$^T$ formulation, which provides a means to analytically invert the semi-discrete Helmholtz operator $\mathcal{L}$. This is in contrast to the MOL formulation, which inverts an approximate (algebraic) spatial operator.
\end{remark}
\section{An A-stable family of schemes of order 2p}
\label{sec:Scheme1d}
As mentioned in our introductory remarks, we can achieve a higher order scheme by including more spatial derivatives in the numerical scheme. We shall continue to perform our analysis at the semi-discrete level, and make comments about the spatial discretization where appropriate. To motivate our discussion, let us first apply the Lax-Wendroff procedure to the semi-discrete wave equation, exchanging time derivatives for spatial derivatives in the Taylor expansion
\begin{align}
\label{eqn:exact}
u^{n+1}-2u^{n}+u^{n-1} &= 2\sum_{m=1}^{\infty} \frac{\Delta t^{2m}}{(2m)!}\left(\partial_{tt}\right)^{m}u^n \\
&= 2\sum_{m=1}^{\infty} \frac{\beta^{2m}}{(2m)!}\left(\frac{c\Delta t}{\beta}\right)^{2m}\left(\partial_{xx}\right)^{m}u^n \nonumber \\
&= 2\sum_{m=1}^{\infty} \frac{\beta^{2m}}{(2m)!} \left(\frac{\partial_{xx}}{\alpha^2}\right)^{m}u^n \nonumber.
\end{align}
In the second step of this expansion, we have used the fact that
\[
\left(\partial_{tt}\right)^m u = \left(c^2\partial_{xx}\right)^m u, \quad m \geq 1.
\]
Our next goal is to approximate the differential operators $(\partial_{xx})^m$ using the compositions of the convolution operator $\mathcal{D}$ from \eqref{eqn:D_def_1d}. We begin this process by observing
\[
\mathcal{F}\left[\left(\frac{\partial_{xx}}{\alpha^2}\right)^{m}\right] = (-1)^m\left(\frac{k}{\alpha}\right)^{2m}
\]
and
\[
\hat{D} := \mathcal{F}\left[ \mathcal{D} \right] = 1-\mathcal{F}\left[\mathcal{L}^{-1}\right] = 1 - \frac{1}{1+\left(\frac{k}{\alpha}\right)^2} = \frac{\left(\frac{k}{\alpha}\right)^2}{1+\left(\frac{k}{\alpha}\right)^2}.
\]
From this final expression for $\hat{D}$ we solve for the quantity $(k/\alpha)^2$, finding
\begin{align*}
&\left(\frac{k}{\alpha}\right)^2 = \frac{\hat{D}}{1-\hat{D}} = \sum_{p=1}^\infty \hat{D}^{p} \quad \text{and}\\
&\left(\frac{k}{\alpha}\right)^{2m} = \left(\frac{\hat{D}}{1-\hat{D}}\right)^{m} = \sum_{p=m}^\infty \binom{p-1}{m-1}\hat{D}^{p},
\end{align*}
which now gives an exact expression for all even derivatives, defined solely in terms of $\mathcal{D}$. Inserting these into the Taylor expansion \eqref{eqn:exact}
\[
u^{n+1}-2u^{n}+u^{n-1} = \sum_{m=1}^{\infty} (-1)^m\frac{2\beta^{2m}}{(2m)!} \sum_{p=m}^\infty \binom{p-1}{m-1}\mathcal{D}^{p}[u^n].
\]
While this expression is interesting from a theoretical standpoint, it holds little appeal in practice. However, if we reverse the order of summation, then the inner sum can be collapsed, and we have
\begin{equation}
\label{eqn:update_P}
u^{n+1}-2u^{n}+u^{n-1} = \sum_{p=1}^{\infty} A_p(\beta)\mathcal{D}^{p}[u^n],
\end{equation}
where the coefficients are polynomials in successively higher orders of $\beta^2$, and are given by
\begin{equation}
A_p(\beta) = 2\sum_{m=1}^p (-1)^m\frac{\beta^{2m} }{(2m)!}\binom{p-1}{m-1}.
\end{equation}
Since
\[
\mathcal{D}^p[u^n] \approx \left(-\frac{\partial_{xx}}{\alpha^2}\right)^p u^n + O\left(\frac{1}{\alpha^{2p+2}}\right)
\]
and $\alpha^{-1} = O(\Delta t)$, the series converges to a solution of the wave equation. Furthermore, once truncated, the $P$-term approximation will be accurate to order $2P$. For clarity, the first three schemes are
\begin{align}
\label{eqn:update_1}
u^{n+1}-2u^{n}+u^{n-1} &= -\beta^2 \mathcal{D}[u^n] \\
\label{eqn:update_2}
u^{n+1}-2u^{n}+u^{n-1} &= -\beta^2 \mathcal{D}[u^n]-\left(\beta^2-\frac{\beta^4}{12}\right)\mathcal{D}^2[u^n] \\
\label{eqn:update_3}
u^{n+1}-2u^{n}+u^{n-1} &= -\beta^2 \mathcal{D}[u^n]-\left(\beta^2-\frac{\beta^4}{12}\right)\mathcal{D}^2[u^n] - \left(\beta^2-\frac{\beta^4}{6}+\frac{\beta^6}{360}\right)\mathcal{D}^3[u^n].
\end{align}
The application of the operator $\mathcal{D}^p$ will be computed recursively, and so a scheme of order $P$ will have a cost of $O(PN)$ per time step for $N$ spatial discretization points. Notice that equation \eqref{eqn:update_1} is in fact identical to the original second order scheme \eqref{eqn:D_def_1d}, while the schemes \eqref{eqn:update_2} and \eqref{eqn:update_3} are fourth and sixth order, respectively. The local truncation errors will be $O((c\Delta t/\beta)^{2P+2})$, which means that the error constant will decrease with increasing $\beta$.
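To make the recursion concrete, a minimal sketch of one time step is given below. For illustration only, $\mathcal{D}$ is applied through its Fourier symbol $\hat{D}$ on a periodic grid; the actual method uses the $O(N)$ spatial convolution of \cite{Causley2013a}, and the function and variable names here are purely illustrative.
\begin{verbatim}
import numpy as np
from math import comb, factorial

def step_order_2P(u, u_prev, beta, c, dt, P, L=1.0):
    # One step of u^{n+1} = 2u^n - u^{n-1} + sum_p A_p(beta) D^p[u^n],
    # with D applied via its symbol D_hat = (k/alpha)^2/(1+(k/alpha)^2)
    # on a periodic grid (illustration only, not the fast convolution).
    N = u.size
    alpha = beta/(c*dt)
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    D_hat = (k/alpha)**2/(1.0 + (k/alpha)**2)
    A = [2*sum((-1)**m * beta**(2*m)/factorial(2*m) * comb(p-1, m-1)
               for m in range(1, p+1)) for p in range(1, P+1)]
    Dp_u = np.fft.fft(u)                 # will hold D^p[u^n] in Fourier space
    rhs = np.zeros_like(Dp_u)
    for p in range(1, P+1):              # D^p built recursively: P applications
        Dp_u = D_hat*Dp_u
        rhs += A[p-1]*Dp_u
    return 2*u - u_prev + np.fft.ifft(rhs).real
\end{verbatim}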
\subsection{Stability}
We now prove that the schemes of order $2P$ given by truncating \eqref{eqn:update_P} are unconditionally stable, for some range of the parameter $\beta$. Our main result is that, while the truncation error will decrease with increasing $\beta$, there is a maximal value $\beta_{max}$, which depends on $P$, for which the scheme remains stable. In this respect, the value $\beta_{max}$ is the optimal choice for the $P$th scheme.
We shall prove stability in the free-space case, using von Neumann analysis. The case of a bounded domain is similar, but specific boundary conditions must be considered and handled individually (the Dirichlet case was shown in \cite{Causley2013} for the second order method). Upon taking the Fourier transform in space, and introducing the amplification factor
\[
\hat{u}^n = \rho^n \hat{u}^0
\]
the scheme will be A-stable provided that $|\rho|\leq 1$. Substituting into \eqref{eqn:update_P} and cancelling the common terms, we form the characteristic polynomial satisfied by the amplification factor
\begin{equation}
\label{eqn:P_Stab}
\rho^2-2\rho+1 = \rho S(\beta,\hat{D}),
\end{equation}
where
\begin{equation}
\label{eqn:Stab_sum}
S(\beta,\hat{D}) = -\left(\sum_{p=1}^P A_p(\beta) \hat{D}^p\right), \quad \beta>0, \quad 0\leq \hat{D} \leq 1.
\end{equation}
Upon applying the Cohn-Schur criterion \cite{Strikwerda1989}, we find that the scheme will be A-stable, provided that
\[
0\leq S(\beta,\hat{D}) \leq 4.
\]
We proceed to analyze this inequality by first proving that $S$ is strictly increasing as a function of $\hat{D}$ for some interval $0<\beta \leq \beta^*$, and then by finding a maximal value $\beta_{max}$ for which stability of the scheme is ensured for any $\Delta t$. We will make use of the following lemma.
\begin{lemma}
For each $P\geq 1$, there exists $\beta^* >0$ such that for $0<\beta \leq \beta^*$, $A_p<0$ for each $1 \leq p \leq P$.
\end{lemma}
{\em Proof}. Our proof is by induction. The case $p=1$ is trivial, since $A_1 = -\beta^2$, and so $A_1<0$ for any choice $\beta^*>0$. For $p>1$, we first choose $\beta^*$ for which $A_p$ is strictly negative, and then show that the same is automatically true for $A_k$, for all $k<p$. To choose $\beta^*$, first note that
\[
A_p = -\frac{2}{0!2!}\beta^2\left(1-\frac{p-1}{1\cdot 3 \cdot 4}\beta^2\right)-\frac{2(p-1)(p-2)}{2! 6!}\beta^6\left(1-\frac{p-3}{3\cdot 7\cdot 8}\beta^2\right)-\ldots
\]
Thus, whenever $\beta\leq \beta^*= \sqrt{12/(p-1)}$, the first term inside the parentheses is strictly positive, and in fact all remaining terms will also be positive. Now, notice that $\beta^*$ will decrease monotonically with increasing $p$, and so if we choose $\beta^*$ to ensure that $A_p<0$, then it immediately follows that $A_k<0$ for all $k\leq p$.
\endproof
We are now prepared to state our main result.
\begin{theorem}
The semi-discrete scheme, given by truncating the sum in \eqref{eqn:update_P} after $P$ terms, will be unconditionally stable, provided that $0< \beta \leq \beta_{max}$, where
\begin{equation}
\label{eqn:Stab_cond}
-\sum_{p=1}^P A_p(\beta_{max}) = 4
\end{equation}
\end{theorem}
{\em Proof}.
As a result of the lemma, we are guaranteed an interval $(0,\beta^*)$ on which, for fixed $P$, $A_p<0$ for all $1\leq p \leq P$. Therefore, the sum \eqref{eqn:Stab_sum} is strictly positive, and increasing in both $\beta$ and $\hat{D}$. Thus, we need only study the extremal value $\hat{D} \to 1$; the maximal value $\beta_{max}$ is then found by solving the equality \eqref{eqn:Stab_cond}.
\endproof
\begin{figure}
\caption{A plot of the sum \eqref{eqn:Stab_sum} evaluated at $\hat{D}=1$, for the first few values of $P$; the intersection with the dotted line $S=4$ determines $\beta_{max}$.}
\label{fig:Stab_plot}
\end{figure}
Hence the stability condition amounts to finding the first positive root of a polynomial of order $P$ in $\beta^2$. A plot of the stability polynomial \eqref{eqn:Stab_sum} with $\hat{D}=1$ is shown for the first few values of $P$ in Figure \ref{fig:Stab_plot}. The value $\beta_{max}$ is taken as the intersection of the curve with the dotted line $S = 4$. The values for these first few schemes are also shown in Table \ref{tab:Stab_table}.
\begin{remark}
While $\beta_{max}$ decreases as $P$ increases, the double root at $\beta=0$ will guarantee the existence of some region $(0,\beta^*)$ for which stability can be achieved. Additionally, methods which include more than $P$ terms can be derived, which achieve only order $2P$, but produce a smaller error constant by increasing $\beta_{max}$. A rigorous investigation of their construction has not yet been undertaken.
\end{remark}
\begin{table}
\begin{center}
\begin{tabular}{ | l ||c|c|c|c|c |}
\hline
P & 1 & 2 & 3 & 4 & 5 \\
\hline
Order & 2 & 4 & 6 & 8 & 10 \\
\hline
$\beta_{max}$ & 2 & 1.4840 & 1.2345 & 1.0795 & 0.9715 \\
\hline
\end{tabular}
\caption{The maximum values $\beta_{max}$ for which the $P$-th scheme remains A-stable are found by solving $S(\beta_{max},1) = 4$.}
\label{tab:Stab_table}
\end{center}
\end{table}
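The entries of Table \ref{tab:Stab_table} can be reproduced by locating the first positive root of $S(\beta,1)=4$; a short root-finding sketch (using SciPy only for the final bracketed solve) is given below.
\begin{verbatim}
from math import comb, factorial
from scipy.optimize import brentq

def S1(beta, P):
    # S(beta, 1) = -sum_{p=1}^P A_p(beta); A-stability requires 0 <= S <= 4.
    A = lambda p: 2*sum((-1)**m * beta**(2*m)/factorial(2*m) * comb(p-1, m-1)
                        for m in range(1, p+1))
    return -sum(A(p) for p in range(1, P+1))

def beta_max(P, db=0.05):
    # Scan for the first grid point where S >= 4, then refine with brentq.
    b = db
    while S1(b, P) < 4.0:
        b += db
    return brentq(lambda x: S1(x, P) - 4.0, b - db, b)

for P in range(1, 6):
    print(P, round(beta_max(P), 4))   # 2.0, 1.484, 1.2345, 1.0795, 0.9715
\end{verbatim}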
\subsection{High order initial and boundary conditions}
Since the scheme \eqref{eqn:update_P} is a 3-step method, it will require two initial starting values, which must be computed to $O(\Delta t^{2P})$ in order for the numerical solution to achieve the expected order. While the initial condition $u^0 = f(x)$ is imposed exactly, the value $u^{1} = u(x,\Delta t)$ must be approximated. In analogy to the derivation above, we shall proceed with a Taylor expansion, and using the Lax-Wendroff procedure, convert all even time derivatives into spatial derivatives. However, the odd time derivatives will instead make use of the initial velocity, $u_{t}(x,0) = g(x)$. Thus
\begin{align}
u^1 &= \sum_{m=0}^\infty \frac{\Delta t^m}{m!}\partial_t^mu^0 \nonumber \\
&= \sum_{m=0}^\infty \left(\frac{\Delta t^{2m}}{(2m)!}\partial_ t^{2m}u^0+ \frac{\Delta t^{2m+1}}{(2m+1)!}\partial_t^{2m+1}u^0\right) \nonumber \\
&= \sum_{m=0}^\infty \left(\frac{(c\Delta t)^{2m}}{(2m)!}\partial_ x^{2m}u^0+ \frac{(c\Delta t)^{2m+1}}{(2m+1)!}\partial_x^{2m}\frac{1}{c}\partial_t u^0\right) \nonumber \\
&= \sum_{m=0}^\infty \left(\frac{(c\Delta t)^{2m}}{(2m)!}\partial_ x^{2m}f(x)+ \frac{(c\Delta t)^{2m+1}}{(2m+1)!}\partial_x^{2m}\frac{1}{c}g(x)\right).
\end{align}
This expansion can now be truncated at $O(\Delta t^{2P})$, and since all spatial derivatives are even, they can be approximated using convolution \eqref{eqn:D_def_1d}. Alternatively they can be computed analytically, since $f$ and $g$ are known.
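When $f$ and $g$ are given in closed form, the truncated expansion for $u^1$ can also be evaluated symbolically; a minimal sketch (with illustrative data) is
\begin{verbatim}
import sympy as sp

x, c, dt = sp.symbols('x c dt', positive=True)
f = sp.sin(2*sp.pi*x)   # u(x,0), illustrative
g = sp.S(0)             # u_t(x,0), illustrative

def u1(P):
    # Partial sum of the expansion for u(x, dt): terms m = 0, ..., P
    return sum((c*dt)**(2*m)/sp.factorial(2*m)*sp.diff(f, x, 2*m)
               + (c*dt)**(2*m+1)/sp.factorial(2*m+1)*sp.diff(g, x, 2*m)/c
               for m in range(P+1))

print(sp.simplify(u1(2)))
\end{verbatim}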
We also consider the application of boundary conditions for recursive applications of the convolution operator \eqref{eqn:D_def_1d}. We demonstrate the approach for Dirichlet boundary conditions; suppose $u(a,t) = u_L(t)$, and $u(b,t) = u_R(t)$ are prescribed. Since $\mathcal{D}^m[u] \approx (\partial_{xx}/\alpha^2)^m u$, we require even spatial derivatives of $u$ at $x = a$ and $b$. But, since $u$ satisfies the wave equation for $a<x<b$, we can use the inverse Lax-Wendroff procedure, and upon taking the appropriate limit, find
\begin{equation}
\lim_{x\to a}\left(\partial_{xx}\right)^m u = \lim_{x \to a} \left(\frac{\partial_{tt}}{c^2}\right)^m u = \left(\frac{\partial_{tt}}{c^2}\right)^m u_L(t),
\end{equation}
with the corresponding result holding for $x=b$. The result for Neumann boundary conditions will also follow from a similar procedure, but upon considering odd derivatives. Additionally, periodic boundary conditions can be implemented in a straightforward manner, as was shown in the basic algorithm in \cite{Causley2013a}.
\subsection{Numerical results}
Before moving onward to multi-dimensional schemes, we first illustrate the accuracy of our method for a 1d example. We perform time marching for a standing wave $u(x,0) = \sin(2\pi x)$, for $x \in [0,1]$, up to time $T = 1$. Since the fast convolution algorithm from \cite{Causley2013a} is second order accurate in space, we fix $\Delta x = 0.0001$ to ensure that the dominant error in the solution is temporal. The $L^2$-norm of the error is plotted in Figure \ref{fig:Conv_plot} for several values of $\Delta t$, with varying order $P$. For each $P$, we used $\beta = \beta_{max}$ according to Table \ref{tab:Stab_table}. In the 10th order scheme, the spatial error can be seen to dominate the error for the smallest value of $\Delta t$. Thus, refining further in time would produce no further improvement.
\begin{figure}
\caption{Convergence in the $L^2$-norm of a 1d standing wave, with Dirichlet boundary conditions. The spatial resolution is held fixed at $\Delta x = 0.0001$.}
\label{fig:Conv_plot}
\end{figure}
\section{Higher dimensions and higher accuracy via ADI splitting}
\label{sec:SchemeADI}
We next address the solution of the wave equation \eqref{eqn:wave} in higher dimensions using alternating direction implicit (ADI) splitting. To achieve schemes of higher order, we again begin with the expansion \eqref{eqn:exact}, but now the Lax-Wendroff procedure introduces the Laplacian
\[
u^{n+1}-2u^n+u^{n-1}= 2\sum_{m=1}^\infty \frac{\Delta t^{2m}}{(2m)!}\left(\frac{\partial^{2m}}{\partial t^{2m}}\right)u^n= 2\sum_{m=1}^\infty \frac{\beta^{2m}}{(2m)!}\left(\frac{\nabla^2}{\alpha^2}\right)^m u^n.
\]
In order to approximate higher order powers of the Laplacian operator using ADI splitting, we first define univariate modified Helmholtz operators, and their corresponding $\mathcal{D}$ operators as
\begin{equation}
\label{eqn:LD_gam}
\mathcal{L}_\gamma:= 1-\frac{\partial_\gamma^2}{\alpha^2}, \quad \mathcal{D}_\gamma := 1 - \mathcal{L}_\gamma^{-1} , \quad \gamma = x,y,z.
\end{equation}
Notice that these operators satisfy the following identity
\begin{equation}
\label{eqn:LD_identity}
\mathcal{L}_\gamma\mathcal{D}_\gamma[u] = \mathcal{L}_\gamma[u] - u = -\frac{\partial_{\gamma\gamma}}{\alpha^2}u, \quad \gamma = x,y,z.
\end{equation}
Thus the Laplacian is given by
\[
-\frac{\nabla^2}{\alpha^2} = \mathcal{L}_x\mathcal{D}_x+\mathcal{L}_y\mathcal{D}_y+\mathcal{L}_z\mathcal{D}_z = \mathcal{L}_x\mathcal{L}_y\mathcal{L}_z [\mathcal{C}_{xyz}],
\]
where the new operator is
\begin{equation}
\label{eqn:C_def_ADI_3}
\mathcal{C}_{xyz} := \mathcal{L}_y^{-1}\mathcal{L}_z^{-1}\mathcal{D}_x+\mathcal{L}_z^{-1}\mathcal{L}_x^{-1}\mathcal{D}_y+\mathcal{L}_x^{-1}\mathcal{L}_y^{-1}\mathcal{D}_z.
\end{equation}
The corresponding 2d operator is
\begin{equation}
\label{eqn:C_def_ADI}
\mathcal{C}_{xy} := \mathcal{L}_y^{-1}\mathcal{D}_x+\mathcal{L}_x^{-1}\mathcal{D}_y.
\end{equation}
This result, while interesting, is not quite satisfactory. The first issue is that the Laplacian is now given in terms of $\mathcal{L}_\gamma$, and thus requires approximations of spatial derivatives, which we are trying to avoid. Secondly, the directional sweeps of the ADI convolutions are represented by a composition of the operators $\mathcal{L}_\gamma^{-1}$, not a sum of them. In two dimensions, the ADI scheme is defined by
\begin{equation}
\label{eqn:D_def_ADI}
\mathcal{D}_{xy} := 1-\mathcal{L}_x^{-1}\mathcal{L}_y^{-1},
\end{equation}
while in three dimensions it is
\begin{equation}
\label{eqn:D_def_ADI_3}
\mathcal{D}_{xyz} := 1-\mathcal{L}_x^{-1}\mathcal{L}_y^{-1}\mathcal{L}_z^{-1}.
\end{equation}
Motivated by this form we appeal to one final identity, obtained by rearranging and inverting \eqref{eqn:D_def_ADI_3},
\begin{align*}
\mathcal{L}_x\mathcal{L}_y\mathcal{L}_z =\left(1-\mathcal{D}_{xyz}\right)^{-1},
\end{align*}
which means that
\[
-\frac{\nabla^2}{\alpha^2} =\left(1-\mathcal{D}_{xyz}\right)^{-1} \mathcal{C}_{xyz}.
\]
Thus, all integer powers of the Laplacian can be constructed by expanding the symbol $(1-\mathcal{D})^{-m}$ as a power series (which is now in terms of the ADI operator!), and the result is
\begin{equation}
\label{eqn:Laplacian_2D_ADI_3}
\left(\frac{\nabla^2}{\alpha^2}\right)^m
= (-1)^m\mathcal{C}_{xyz}^m \sum_{p=m}^\infty \binom{p-1}{m-1}\mathcal{D}_{xyz}^{p-m}.
\end{equation}
The corresponding 2d result is
\begin{equation}
\label{eqn:Laplacian_2D_ADI}
\left(\frac{\nabla^2}{\alpha^2}\right)^m
= (-1)^m\mathcal{C}_{xy}^m \sum_{p=m}^\infty \binom{p-1}{m-1}\mathcal{D}_{xy}^{p-m},
\end{equation}
which can be seen to have the identical form in this notation, except for the spatial subscripts. Upon omitting the subscripts, the semi-discrete scheme for 2d and 3d is
\begin{align}
u^{n+1}-2u^n+u^{n-1} &= \sum_{m=1}^\infty \frac{2\beta^{2m}}{(2m)!}\left(\frac{\nabla^2}{\alpha^2}\right)^m u^n
\nonumber \\
&= \sum_{m=1}^\infty (-1)^m\frac{2\beta^{2m}}{(2m)!} \mathcal{C}^m \sum_{p=m}^\infty \binom{p-1}{m-1}\mathcal{D}^{p-m} [u^n]
\nonumber \\
\label{eqn:Full_2D}
&= \sum_{p=1}^\infty \sum_{m=1}^p (-1)^m\frac{2\beta^{2m}}{(2m)!}\binom{p-1}{m-1} \mathcal{C}^m\mathcal{D}^{p-m} [u^n].
\end{align}
It is interesting to compare to the 1d scheme \eqref{eqn:update_P}, which can in fact be recovered by setting $\mathcal{C} = \mathcal{D}$.
Upon truncation at $p = P$, we obtain a scheme of order $2P$, the first few of which are
\begin{align*}
u^{n+1}-2u^n+u^{n-1}&= -\beta^2\mathcal{C}[u^n] \\
u^{n+1}-2u^n+u^{n-1}&= -\beta^2\mathcal{C}[u^n] - \left(\beta^2\mathcal{D} - \frac{\beta^4}{12}\mathcal{C}\right)\mathcal{C}[u^n] \\
u^{n+1}-2u^n+u^{n-1}&= -\beta^2\mathcal{C}[u^n] - \left(\beta^2\mathcal{D} - \frac{\beta^4}{12}\mathcal{C}\right)\mathcal{C}[u^n]-\left(\beta^2\mathcal{D}^2 - \frac{\beta^4}{6}\mathcal{C}\mathcal{D}+\frac{\beta^6}{360}\mathcal{C}^2\right)\mathcal{C}[u^n].
\end{align*}
As in the 1d case, these schemes will be unconditionally stable for all $\Delta t$, for the same range of $\beta$ as shown in Table \ref{tab:Stab_table}.
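As a consistency check of the splitting (illustrative, in two spatial dimensions), the Fourier symbols of the operators defined above satisfy $-\nabla^2/\alpha^2 = (1-\mathcal{D}_{xy})^{-1}\mathcal{C}_{xy}$ exactly, which the following short sympy computation verifies.
\begin{verbatim}
import sympy as sp

kx, ky, a = sp.symbols('k_x k_y alpha', positive=True)
Lx, Ly = 1 + kx**2/a**2, 1 + ky**2/a**2    # symbols of L_x, L_y
Dx, Dy = 1 - 1/Lx, 1 - 1/Ly                # symbols of D_x, D_y
C = Dx/Ly + Dy/Lx                          # symbol of C_{xy}
D = 1 - 1/(Lx*Ly)                          # symbol of D_{xy}
# The symbol of -Laplacian/alpha^2 is (k_x^2 + k_y^2)/alpha^2:
print(sp.simplify(C/(1 - D) - (kx**2 + ky**2)/a**2))   # prints 0
\end{verbatim}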
\subsection{Inclusion of source terms}
Until now we have considered raising the order of approximations only for the homogeneous wave equation. We now consider the inclusion of general source terms $S(u,x,t)$, and the higher order schemes generated by including higher time derivatives. To do so, we consider
\begin{equation}
\frac{1}{c^2}u_{tt} = \nabla^2 u + S,
\end{equation}
and upon taking even order time derivatives, find
\begin{align*}
\left(\frac{\partial_{tt}}{c^2}\right)^m u &= \left(\frac{\partial_{tt}}{c^2}\right)^{m-1} \left( \nabla^2 u + S\right) \\
&= \left(\frac{\partial_{tt}}{c^2}\right)^{m-2}\nabla^2 \left( \nabla^2 u + S\right)+\left(\frac{\partial_{tt}}{c^2}\right)^{m-1}S \\
&= \vdots \\
&= \left(\nabla^2\right)^m u + \left[ \left(\frac{\partial_{tt}}{c^2}\right)^{m-1} +\nabla^2\left(\frac{\partial_{tt}}{c^2}\right)^{m-2}+\ldots+ \left(\nabla^2\right)^{m-1}\right] S.
\end{align*}
Thus, in order for the source to be included to high accuracy, we shall require the computation of terms of the form
\[
\left[\sum_{k=0}^{m} \left(\nabla^2\right)^k\left(\frac{\partial_{tt}}{c^2}\right)^{m-k}\right]S, \quad m=0, 1, \ldots P-1.
\]
The method of construction of such terms depends on the specific nature of the source term, specifically whether the time derivatives are approximated using finite differences, or by replacing them with other known information (i.e. from a constitutive relation). For these reasons, we only briefly consider the fourth order accurate implementation. Upon discretization of the second time derivative, and approximation of the Laplacian using the convolution operators \eqref{eqn:C_def_ADI}, we have
\[
u^{n+1}-2u^n+u^{n-1} = -\beta^2\mathcal{C}[u^n] - \left(\beta^2\mathcal{D} - \frac{\beta^4}{12}\mathcal{C}\right)\mathcal{C}[u^n] +\frac{\beta^2}{12\alpha^2}\left(S^{n+1}+10S^n+S^{n-1}\right)+\frac{\beta^2}{12\alpha^4}\mathcal{C}[S^n].
\]
If $S$ depends either on $u$, or on another dependent variable which is coupled to $u$, then this equation must now be solved iteratively, due to the appearance of source terms at time level $t_{n+1}$.
\subsection{Efficient computation and operation count of higher order methods}
One potential pitfall to the numerical implementation of the schemes \eqref{eqn:Full_2D} is the unnecessary duplication of work in constructing additional terms, which are all defined by multiple convolutions. Here we estimate the operation count per time step for the $P$th scheme, and in doing so consider two competing forces: the economical re-use of computed quantities, and the symmetrization to remove anisotropy.
\begin{table}[hb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
P&Input &Output & Operations & Total operations\\
\hline
1&$u^n$ &$v_1:=\mathcal{C}[u^n]$ & $O(N)$ & $O(N)$ \\
\hline
2&$v_1$ &$v_2:=\mathcal{D}[v_1], \quad v_1:=\mathcal{C}[v_1]$ & $2O(N)$ & $3O(N)$ \\
\hline
3&$v_1, \quad v_2$ &$v_3:=\mathcal{D}[v_2], \quad v_2:=\mathcal{C}[v_2], \quad v_1:=\mathcal{C}[v_1]$ & $3O(N)$ & $6O(N)$ \\
\hline
\end{tabular}
\caption{Operation count based on efficient re-use of computed quantities for schemes of order $2P$.}
\label{tab:Efficiency}
\end{center}
\end{table}
The first of these considerations comes from the binomial-like structure of higher order terms in the expansions. Notice that in the $P$th scheme an additional $P$ terms appear when compared to the $(P-1)$st scheme, all of which are of the form $\mathcal{C}^m\mathcal{D}^{P-m}[u^n]$. Furthermore, they can be constructed solely in terms of the $P-1$ terms of the previous stage, without use of any subsequent terms. Each new term requires one application of either $\mathcal{C}$ or $\mathcal{D}$, both of which can be computed in $O(N)$ operations. Thus, we obtain a simple estimate of $O(P^2N)$ for the operation count. The procedure is demonstrated in Table \ref{tab:Efficiency} for the first few values of $P$, which also reveals that $P$ auxiliary variables $v_p$ are required, in addition to $u^n$ and $u^{n-1}$.
The second important consideration is a practical aspect inherent to ADI schemes, which is the introduction of numerical anisotropy. To this end, it is prudent to apply the spatial convolution operators $\mathcal{L}_\gamma$ for $\gamma = x,y,z$ in each possible ordering and average over all permutations to reduce the anisotropy. This inherently trades computational efficiency for accuracy, and as such is not included in the efficiency estimates of Table \ref{tab:Efficiency}.
\subsection{Numerical Results}
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c||c|c|c|}
\hline
& \multicolumn{3}{ c|| }{$P=1$} &\multicolumn{3}{ c|| }{$P=2$} &\multicolumn{3}{ c| }{$P=3$} \\
\hline
$\Delta t$ &Error & Rate & Time (s) &Error & Rate & Time (s) &Error & Rate & Time (s) \\
\hline
$0.4$ &$7.81$E-1 & * & 1.5 &$7.19$E-1 & * & 4.1 &$8.16$E-1 & * & $8.2$ \\
\hline
$0.2$ &$2.46$E-1 & 1.67 & 3.9 &$1.07$E-1 & 2.74 & 11.2 &$7.80$E-2 & 3.39 & $19.7$ \\
\hline
$0.1$ &$7.15$E-2 & 1.78 & 7.1 &$1.03$E-2 & 3.38 & 23.3 &$2.83$E-3 & 4.78 & $44.1$ \\
\hline
$0.05$ &$1.89$E-2 & 1.92 & 15.1 &$7.36$E-4 & 3.81 & 48.3 &$5.74$E-5 & 5.63 & $90.0$ \\
\hline
$0.025$ &$4.84$E-3 & 1.96 & 30.0 &$4.97$E-5 & 3.89 & 94.2 &$2.29$E-6 & 4.64 & $186.2$ \\
\hline
\end{tabular}
\caption{Refinement and computational efficiency for a 2d rectangular mode $u(x,y,0) = \sin(\pi x)\sin(\pi y)$. The mesh is held fixed at $\Delta x = \Delta y = 0.003125$.}
\label{tab:ADI}
\end{center}
\end{table}
\begin{figure}
\caption{Propagation due to a point source in 2d, on an $80\times 80$ mesh, with CFL number 2. The improvement due to the higher order corrections is quite apparent.}
\label{fig:ADI}
\end{figure}
We first show that our method behaves as expected by performing a refinement study on a square domain $\Omega = [-1,1]\times[-1,1]$, with a standing mode $u(x,y,0) = \sin(\pi x)\sin(\pi y)$, up to time $T=1.2$, with the spatial resolution held fixed at $641\times 641$ points. The discrete $L^2$-norm of the error is constructed at each time step, and we report the maximum over all time steps in Table \ref{tab:ADI}. We additionally record the computation time required for each scheme for $P=1,2$ and 3, confirming the predicted scaling of the method from Table \ref{tab:Efficiency}.
As stated in the introduction, one major drawback of using an ADI method is the anisotropy introduced in the leading order truncation error. In Figure \ref{fig:ADI}, a sinusoidal point source located at the center of the domain is smoothly turned on, and propagated using the second order and fourth order schemes. The anisotropy is quite evident at the wavefront in the second order scheme, but is removed by implementing the fourth order scheme.
Finally, we demonstrate the advantages of a higher order method in an elliptical geometry, which is of interest in antennae design, among other applications \cite{Boriskin2008}. Currently, the most common method for simulating wave propagation in elliptical cavities is the conformal finite-difference time-domain (CFDTD) method \cite{Dey1997}, which uses a conformal mapping to accurately represent curvilinear geometries, thus avoiding the reduction to first order due to the stair-step approximation in traditional FDTD algorithms.
\begin{figure}
\caption{Discretization of the ellipse, showing the regular Cartesian points (blue), and the additional boundary points (red) for the $x$ and $y$ sweeps.}
\label{fig:Ellipse_points}
\end{figure}
Our approach is different; rather than using conformal geometry, we embed the boundary in a regular Cartesian mesh, including a boundary point for each intersection of the ADI lines $x = x_j$, and $y = y_k$ with the boundary curve, as illustrated in Figure \ref{fig:Ellipse_points}. The $x$ and $y$ convolutions then operate on line objects, which are defined by a collection of interior points, and two boundary points, one at either end. The boundary points can be arbitrarily close to the interior points (a Taylor expansion of the coefficients is used when the spacing is small enough). See \cite{Causley2013a} for more details.
In Figure \ref{fig:Ellipse}, a Gaussian initial condition is placed inside the ellipse, whose boundary is given by
\[
C = \left\{ (x,y): \left(\frac{x+y}{4}\right)^2+(x-y)^2 =1 \right\}.
\]
The scattering will therefore be a superposition of the natural modes of the ellipse, which are Mathieu functions. This is a great test of the algorithm, not only due to the curved boundaries, but also because the principal axes of the ellipse do not coincide with the horizontal and vertical ADI lines.
\begin{figure}
\caption{Time evolution of a Gaussian field through an elliptical cavity.}
\label{fig:Ellipse}
\end{figure}
\section{Conclusions}
\label{sec:Conclusion}
In this paper we have proposed a family of schemes for the wave equation, of order $2P$, which remain A-stable by using a multi-derivative scheme in time. To maintain efficiency, we utilize a Lax-Wendroff approach to replace even order time derivatives with powers of the Laplacian, which is then constructed using recursive applications of the fast convolution operators previously developed for the base scheme in \cite{Causley2013a}. The resulting schemes therefore scale as $O(P^dN)$, where $d$ is the number of spatial dimensions. The expected algorithmic efficiency and convergence properties have been demonstrated with 1d and 2d examples. This method holds great promise for developing higher order, parallelizable algorithms for solving hyperbolic PDEs, and can also be extended to parabolic PDEs.
\end{document} |
\begin{document}
\title[Generalized surface quasi-geostrophic equations]
{Dissipative models generalizing the 2D Navier-Stokes and the surface quasi-geostrophic equations}
\author[Dongho Chae, Peter Constantin and Jiahong Wu]{Dongho Chae$^{1}$, Peter Constantin$^{2}$ and Jiahong Wu$^{3}$}
\address{$^1$ Department of Mathematics,
Sungkyunkwan University,
Suwon 440-746, Korea}
\address{$^2$Department of Mathematics,
University of Chicago,
5734 S. University Avenue,
Chicago, IL 60637, USA.}
\address{$^3$Department of Mathematics,
Oklahoma State University,
401 Mathematical Sciences,
Stillwater, OK 74078, USA.}
\email{chae@skku.edu}
\email{const@cs.uchicago.edu}
\email{jiahong@math.okstate.edu}
\begin{abstract}
This paper is devoted to the global (in time) regularity problem for a family
of active scalar equations with fractional dissipation. Each component of the velocity field $u$ is determined by the active scalar $\theta$ through $\mathcal{R} \Lambda^{-1} P(\Lambda) \theta$ where $\mathcal{R}$ denotes a Riesz transform, $\Lambda=(-\Delta)^{1/2}$ and $P(\Lambda)$ represents a family of Fourier multiplier operators. The 2D Navier-Stokes vorticity equations correspond to the special case $P(\Lambda)=I$ while the surface quasi-geostrophic (SQG) equation to $P(\Lambda) =\Lambda$. We obtain the global regularity for a class of equations for which $P(\Lambda)$ and the fractional power of the dissipative Laplacian are required to satisfy an explicit condition. In particular, the active scalar equations with any fractional dissipation and with $P(\Lambda)= (\log(I-\Delta))^\gamma$ for any $\gamma>0$ are globally regular.
\end{abstract}
\maketitle
\section{Introduction}
\label{intr}
\setcounter{equation}{0}
This paper is devoted to the dissipative active scalar equation
\begin{equation} \label{general}
\left\{
\begin{array}{l}
\partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha \theta=0, \quad x\in \mathbb{R}^d, \, t>0, \\
u = (u_j), \quad u_j = \mathcal{R}_l \Lambda^{-1} P(\Lambda)\, \theta,\quad 1\le j, \,l\le d,
\end{array}
\right.
\end{equation}
where $\kappa>0$ and $\alpha>0$ are parameters, $\theta =\theta(x,t)$ is a scalar function of $x\in \mathbb{R}^{d}$ and $t\ge 0$, $u$ denotes a velocity field with each of its components $u_j$ ($1\le j\le d$) given by a Riesz transform $\mathcal{R}_l$ applied to $\Lambda^{-1} P(\Lambda)\, \theta$.
Here the operators $\Lambda = (-\Delta)^{\frac12}$, $P(\Lambda)$ and $\mathcal{R}_l$ are defined through their Fourier transforms,
$$
\widehat{\Lambda f}(\xi) = |\xi| \widehat{f}(\xi), \quad \widehat{P(\Lambda) f}(\xi) = P(|\xi|) \widehat{f}(\xi), \quad \widehat{\mathcal{R}_l f}(\xi)= \frac{i\,\xi_l}{|\xi|}\, \widehat{f}(\xi),
$$
where $1\le l\le d$ is an integer, $\widehat{f}$ or $\mathcal{F}(f)$ denotes the Fourier transform,
$$
\widehat{f}(\xi) = \mathcal{F}(f)(\xi) =\frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{-i x\cdot \xi} f(x)\,dx.
$$
We are primarily concerned with the global (in time) regularity issue concerning solutions of (\ref{general}) with a given initial data
\begin{equation} \label{IC}
\theta(x,0) =\theta_0(x), \quad x\in \mathbb{R}^d.
\end{equation}
\vskip .1in
A special example of (\ref{general}) is the 2D active scalar equation
\begin{equation} \label{general2d}
\left\{
\begin{array}{l}
\partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha \theta=0, \quad x\in \mathbb{R}^2, \, t>0, \\
u = \nabla^\perp \psi\equiv (-\partial_{x_2}\psi, \partial_{x_1} \psi), \quad
\Delta \psi = P(\Lambda)\, \theta
\end{array}
\right.
\end{equation}
which includes as special cases the 2D Navier-Stokes vorticity equation
\begin{equation}\label{euler}
\left\{
\begin{array}{l}
\partial_t \omega + u \cdot \nabla \omega-\nu \Delta \omega =0,\\
u =\nabla^\perp \psi, \quad \Delta\psi=\omega
\end{array}
\right.
\end{equation}
and the dissipative surface quasi-geostrophic (SQG) equation
\begin{equation}\label{SQG}
\left\{
\begin{array}{l}
\partial_t \theta + u \cdot \nabla \theta + \kappa (-\Delta)^\alpha \theta= 0,\\u=\nabla^\perp \psi, \quad -\Lambda\psi = \theta.
\end{array}
\right.
\end{equation}
There are numerous studies on the Navier-Stokes equations and the global regularity in the 2D case has long been established (see e.g. \cite{ConF}, \cite{DoGi} and \cite{MaBe}). The SQG equation models the dynamics of the potential temperature $\theta$ of the 3D quasi-geostrophic equations on the 2D horizontal boundaries and is useful in modeling atmospheric phenomena such as frontogenesis (see e.g. \cite{CMT}, \cite{MaTa} and \cite{Pe}). The SQG equation (inviscid or dissipative) is
also mathematically important. As detailed in \cite{CMT}, the behavior of its strongly nonlinear solutions is strikingly analogous to that of the potentially singular solutions of the 3D incompressible Navier-Stokes
and the Euler equations. The global regularity issue concerning the SQG equation has recently been studied very extensively and much important progress has been made (see e.g. \cite{AbHm}, \cite{Bae}, \cite{Bar}, \cite{Blu}, \cite{CaS}, \cite{CV}, \cite{CaFe}, \cite{Ch}, \cite{ChJDE}, \cite{Cha}, \cite{Cha2}, \cite{Cha4}, \cite{CCCF}, \cite{ChL}, \cite{Cham}, \cite{CMZ1}, \cite{Chen}, \cite{Con}, \cite{CCW}, \cite{CIW}, \cite{CLS}, \cite{CMT}, \cite{CNS}, \cite{CWnew1}, \cite{CWnew2}, \cite{Cor}, \cite{CC}, \cite{CoFe1}, \cite{CoFe2}, \cite{CoFe3}, \cite{CFMR}, \cite{Dab}, \cite{DHLY}, \cite{DoCh}, \cite{Dong}, \cite{DoDu}, \cite{DoLi0}, \cite{DoLi}, \cite{DoPo}, \cite{DoPo2}, \cite{FPV}, \cite{FrVi}, \cite{Gil}, \cite{HPGS}, \cite{HmKe}, \cite{HmKe2}, \cite{Ju}, \cite{Ju2}, \cite{KhTi}, \cite{Ki1}, \cite{Ki2}, \cite{Kinew1}, \cite{KN1}, \cite{KN2}, \cite{KNV}, \cite{Li}, \cite{LiRo}, \cite{Maj}, \cite{MaBe}, \cite{MaTa}, \cite{Mar1}, \cite{Mar2}, \cite{Mar3}, \cite{MarLr}, \cite{May}, \cite{MayZ}, \cite{MiXu}, \cite{Mi}, \cite{NiSc}, \cite{OhYa1}, \cite{Pe}, \cite{ReDr}, \cite{Res}, \cite{Ro1}, \cite{Ro2}, \cite{Sch}, \cite{Sch2}, \cite{Si}, \cite{Si2}, \cite{Sta}, \cite{WaJi}, \cite{WaZh}, \cite{Wu97}, \cite{Wu2}, \cite{Wu01}, \cite{Wu02}, \cite{Wu3}, \cite{Wu4}, \cite{Wu41}, \cite{Wu31}, \cite{Wu77}, \cite{Yu}, \cite{Yuan}, \cite{YuanJ}, \cite{Zha0}, \cite{Zha}, \cite{Zhou}, \cite{Zhou2}). In particular, the global regularity for the critical case $\alpha=1/2$
has been successfully established (\cite{CV}, \cite{KNV}). The situation in the supercritical case $\alpha<1/2$ is only partially understood at the time of writing. The results in \cite{CWnew1}, \cite{CWnew2} and \cite{DoPo} imply that any solution of the supercritical SQG equation can develop a potential finite time singularity only in the regularity window between $L^\infty$ and $C^\delta$ with $\delta<1-2\alpha$. Several very recent preprints on the supercritical case have also revealed some very interesting properties of the supercritical dissipation (\cite{Bar}, \cite{Dab}, \cite{Kinew1}, \cite{Si}).
\vskip .1in
Our goal here is to establish the global regularity of (\ref{general}) for more general operators $P$. In particular, we are interested in the global regularity of the intermediate equations between the 2D Navier-Stokes equation and the supercritical SQG equation. This paper is a continuation of our previous study on the inviscid counterpart of (\ref{general}) (\cite{ChCW}). The consideration here is restricted to $P$ satisfying the following condition.
\begin{assume} \label{P_con}
The symbol $P=P(|\xi|)$ assumes the following properties:
\begin{enumerate}
\item $P$ is continuous on $\mathbb{R}^d$ and $P\in C^\infty(\mathbb{R}^d\setminus\{0\})$;
\item $P$ is radially symmetric;
\item $P=P(|\xi|)$ is nondecreasing in $|\xi|$;
\item There exist two constants $C$ and $C_0$ such that
\begin{equation*}
\sup_{2^{-1} \le |\eta| \le 2}\, \left|(I-\Delta_\eta)^n \,P(2^j |\eta|)\right| \le C\, P(C_0 \, 2^j)
\end{equation*}
for any integer $j$ and $n=1,2,\cdots, 1+ \left[\frac{d}{2}\right]$.
\end{enumerate}
\end{assume}
We remark that (4) in Condition \ref{P_con} is a very natural condition on symbols of Fourier multiplier operators and is similar to the main condition in the Mihlin-H\"{o}rmander Multiplier Theorem (see e.g. \cite[p.96]{St}). For notational convenience, we also assume that $P\ge 0$. Some special examples of $P$ are
\begin{eqnarray*}
&& P(\xi) = \left(\log(1 +|\xi|^2)\right)^\gamma \quad\mbox{with $\gamma\ge 0$}, \\
&& P(\xi) = \left(\log(1+\log(1 +|\xi|^2))\right)^\gamma \quad\mbox{with $\gamma\ge 0$}, \\
&& P(\xi) = |\xi|^\beta \quad\mbox{with $\beta\ge 0$},\\
&& P(\xi) = (\log(1 +|\xi|^2))^\gamma\,|\xi|^\beta \quad\mbox{with $\gamma\ge 0$ and $\beta\ge 0$}.
\end{eqnarray*}
\vskip .1in
As in the study of the Navier-Stokes and the Euler equations, the quantity $\|\nabla u\|_{L^\infty}$ plays a crucial role in the global regularity issue. In our previous work on the inviscid counterpart of (\ref{general}), we established bounds for the building blocks $\|\nabla \Delta_j u\|_{L^q}$ and $\|\nabla S_N u\|_{L^q}$ for $1\le q\le \infty$. More precisely, the following theorem is proven in \cite{ChCW}.
\begin{thm} \label{nablau}
Let $u: \mathbb{R}^d\to \mathbb{R}^d$ be a vector field. Assume that $u$ is related to a scalar $\theta$ by
$$
(\nabla u)_{jk} = \mathcal{R}_l \mathcal{R}_m\, P(\Lambda) \, \theta,
$$
where $1\le j,k,l,m\le d$, $(\nabla u)_{jk}$ denotes the $(j,k)$-th entry of $\nabla u$, $\mathcal{R}_l$ denotes the Riesz transform, and $P$ obeys Condition \ref{P_con}. Then, for any integers $j\ge 0$ and $N\ge 0$,
\begin{eqnarray}
\|S_N \nabla u\|_{L^p} &\le& C_{p,d}\, P(C_0 2^N)\,\|S_{N} \theta\|_{L^p}, \quad 1<p<\infty,
\label{bound1} \\
\|\Delta_j \nabla u\|_{L^q} &\le& C_d\, P(C_0 2^j)\,\|\Delta_j \theta\|_{L^q}, \quad 1\le q\le \infty,
\label{bound2} \\
\|S_N \nabla u\|_{L^\infty} &\le& C_d\,\|\theta\|_{L^1\cap L^\infty} + C_d\, N\,P(C_0 2^N)\,\|S_{N+1}\theta\|_{L^\infty}, \label{bound3}
\end{eqnarray}
where $C_{p,d}$ is a constant depending only on $p$ and $d$, and the constants $C_d$ depend only on $d$.
\end{thm}
With the aid of these bounds, we were able to show in \cite{ChCW}
that (\ref{general}) with $\kappa=0$ and $P(\Lambda)=\left(\log(1+\log(1 -\Delta))\right)^\gamma$ for
$0\le \gamma\le 1$ has a unique global (in time) solution in the Besov space
$B^s_{q,\infty}(\mathbb{R}^d)$ with $d<q\le \infty$ and $s>1$. In addition, a
regularity criterion is also provided in \cite{ChCW} for (\ref{general}) with
$P(\Lambda) = \Lambda^\beta$ for $0\le \beta\le 1$. Our goal here is to extend our
study to cover more general operators when we turn on the dissipation. Indeed we
are able to establish the global existence and uniqueness for a very general family
of symbols. Before stating the result, we introduce the extended Besov spaces. Here $\mathcal{S}'$ denotes the class of tempered distributions and $\Delta_j$ with $j\ge -1$ denotes the standard Fourier localization operator. The notation $\Delta_j$, $S_N$ and Besov spaces are now quite standard and can be found in several books and many papers (see e.g. \cite{BL}, \cite{Che}, \cite{RuSi}, \cite{Tr}). They can also be found in Appendix A of \cite{ChCW}.
\begin{define}
Let $s\in \mathbb{R}$ and $1\le q,r\le \infty$. Let $A=\{A_j\}_{j\ge -1}$ with $A_j\ge 0$ be a nondecreasing sequence. The extended Besov space $B^{s,A}_{q,r}$ consists of $f\in \mathcal{S}'(\mathbb{R}^d)$ satisfying
$$
\|f\|_{B^{s,A}_{q,r}} \equiv \left\|2^{s A_j}\, \|\Delta_j f\|_{L^q(\mathbb{R}^d)} \right\|_{l^r} \,< \infty.
$$
\end{define}
Obviously, when $A_j = j+1$, $B^{s,A}_{q,r}$ becomes the standard inhomogeneous Besov space $B^s_{q,r}$. When $A_j =o(j+1)$ as $j\to \infty$, $B^{s,A}_{q,r}$ is a less regular class than the corresponding Besov space $B^s_{q,r}$; we will refer to these spaces as sub-Besov spaces. When $j=o(A_j)$ as $j\to\infty$, we will refer to the spaces $B^{s,A}_{q,r}$ as super-Besov spaces.
\vskip .1in
With these definitions at our disposal, our main theorem can be stated as follows.
\begin{thm} \label{main1}
Consider the dissipative active scalar equation (\ref{general}) with $\kappa>0$, $\alpha>0$ and $P(\xi)$ satisfying Condition \ref{P_con}. Let $s>1$, $2\le q\le \infty$ and $A=\{A_j\}_{j\ge -1}$ be a nondecreasing sequence with $A_j\ge 0$. Let $\theta_0 \in L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d) \cap B^{s,A}_{q,\infty}(\mathbb{R}^d)$. Assume either that the velocity $u$ is divergence-free or that the solution $\theta$ is bounded in $L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ for all time. If there exists a constant $C$ such that for all $j\ge -1$,
\begin{equation}\label{sb}
\sum_{k\ge j-1, k\ge -1} \frac{2^{s A_{j-2}}\,P(2^{k+1})}{2^{s A_k}\,P(2^{j+1})} < C
\end{equation}
and
\begin{equation}\label{decay}
\kappa^{-1}\, 2^{s(A_j-A_{j-2})}\, (j+2) P(2^{j+2})\,2^{-2\alpha j} \to 0\quad \mbox{as} \quad j\to \infty,
\end{equation}
then (\ref{general}) has a unique global solution $\theta$ satisfying
$$
\theta \in L^\infty\left([0,\infty); B^{s,A}_{q,\infty}(\mathbb{R}^d)\right).
$$
\end{thm}
We single out two special consequences of Theorem \ref{main1}. In the case when
\begin{equation}\label{PA}
P(|\xi|) = \left(\log(1+|\xi|^2)\right)^\gamma,\,\, \gamma\ge 0\quad\mbox{and}\quad A_j=(j+1)^b\quad\mbox{for some $b\le 1$},
\end{equation}
(\ref{sb}) is trivially satisfied and the condition in (\ref{decay}) reduces to
\begin{equation}\label{ed}
2^{s((j+1)^b - j^b)}\, (j+2)^{1+\gamma} 2^{-2\alpha j} \to 0 \quad \mbox{as} \quad j\to \infty,
\end{equation}
which is obviously satisfied for any $\alpha>0$. We thus obtain the following corollary.
\begin{cor}
Consider the dissipative Log-Euler equation
\begin{equation} \label{Log-Euler}
\left\{
\begin{array}{l}
\partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta =0, \\
u = \nabla^\perp \psi, \quad
\Delta \psi = \left(\log(1 -\Delta)\right)^\gamma\, \theta
\end{array}
\right.
\end{equation}
with $\kappa>0$, $\alpha>0$ and $\gamma\ge 0$. Assume that $\theta_0$ satisfies
$$
\theta_0 \in Y \equiv L^1(\mathbb{R}^2)\cap L^\infty(\mathbb{R}^2) \cap B^{s,A}_{q,\infty}(\mathbb{R}^2)
$$
with $s>1$, $2 \le q \le \infty$ and $A$ given in (\ref{PA}). Then (\ref{Log-Euler}) has a unique global solution $\theta$ satisfying
$$
\theta \in L^\infty\left([0,\infty); Y\right).
$$
\end{cor}
The assumption that $A_j =(j+1)^b$ with $b\le 1$ corresponds to the Besov and the sub-Besov spaces. We can also consider the solutions of (\ref{Log-Euler}) in super-Besov spaces by taking $A_j = (j+1)^b$ for $b>1$. It is easy to see that (\ref{ed}) remains valid if $s\, b <2\alpha$. Therefore (\ref{Log-Euler}) with $2\alpha >s\, b$ has a global solution in the super-Besov space $B^{s,A}_{q,\infty}$ with $A_j=(j+1)^b$ for $b>1$.
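Although the analysis above is entirely rigorous, the decay requirement is also easy to check numerically; the following Python sketch (with sample parameter values that play no role in the proof) evaluates the left-hand side of (\ref{ed}) and exhibits the decay.
\begin{verbatim}
# Illustrative numerical check of condition (ed); s, b, gamma, alpha are
# sample choices only.
s, b, gamma, alpha = 1.5, 1.0, 2.0, 0.25
for j in [5, 10, 20, 40, 80]:
    val = 2.0**(s*((j+1)**b - j**b)) * (j+2)**(1+gamma) * 2.0**(-2*alpha*j)
    print(j, val)   # decays to 0 as j grows, for any alpha > 0
\end{verbatim}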
\vskip .1in
Another very important special case is when
\begin{equation}\label{ajj}
A_j =j+1, \quad
P(\xi) = |\xi|^\beta (\log(1 +|\xi|^2))^\gamma\, \quad\mbox{with $\gamma\ge 0$ and $0\le \beta < 2\alpha\le 1$}.
\end{equation}
Then again (\ref{sb}) is obviously satisfied and (\ref{decay}) is reduced to
$$
2^{s((j+1)^b - j^b)} (j+2)^{1+\gamma}\, 2^{(\beta-2\alpha)j} \to 0 \quad \mbox{as}\,\,j\to \infty,
$$
which is clearly true. That is, the following corollary holds.
\begin{cor} \label{sss}
Consider the active scalar equation
\begin{equation} \label{BG}
\left\{
\begin{array}{l}
\partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta=0, \\
u = \nabla^\perp \psi, \quad
\Delta \psi = \Lambda^\beta\,\left(\log(1 -\Delta)\right)^\gamma\, \theta
\end{array}
\right.
\end{equation}
with $\kappa>0$, $\alpha>0$, $0\le \beta< 2\alpha\le 1$ and $\gamma\ge 0$. Assume the initial data $\theta_0\in Y\equiv L^1(\mathbb{R}^2)\cap L^\infty(\mathbb{R}^2) \cap B^{s,A}_{q,\infty}(\mathbb{R}^2)$ with $s>1$, $2\le q\le \infty$ and $A_j$ given by (\ref{ajj}). Then (\ref{BG}) has a unique global solution $\theta$ satisfying
$$
\theta \in L^\infty\left([0,\infty); Y\right).
$$
\end{cor}
Again we could have studied the global solutions of (\ref{BG}) in a super-Besov space $B^{s,A}_{q,\infty}$ with, say $A_j =(j+1)^b$ for $b>1$. Of course we need to put more restrictions on $\alpha$. When $\gamma=0$, (\ref{BG}) becomes
\begin{equation} \label{GBG}
\left\{
\begin{array}{l}
\partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta=0, \\
u = \nabla^\perp \psi, \quad
\Delta \psi = \Lambda^\beta\, \theta,
\end{array}
\right.
\end{equation}
which we call the generalized SQG equation. Corollary \ref{sss} does not cover the case when $\beta=2\alpha$, namely the modified SQG equation. The global regularity of the modified SQG equation with any $L^2$ initial data has previously been obtained in \cite{CIW}. In the supercritical case when $\beta>2\alpha$, the global regularity issue for (\ref{GBG}) is open. In particular, the global regularity issue for the supercritical SQG equation ($\beta=1$ and $2\alpha<1$) remains an outstanding open problem.
\vskip .1in
Following the ideas in \cite{Cha} and \cite{CMT}, we approach the global regularity issue of (\ref{GBG}) in the supercritical case $\beta>2\alpha$ by considering the geometry of the level curves of its solution. We present a geometric-type criterion for the
regularity of solutions of (\ref{GBG}). This sufficient condition controls the
regularity of solutions in terms of the space-time integrability of
$|\nabla^\perp \theta|$ and the regularity of the direction field
$\xi=\nabla^\perp\theta/|\nabla^\perp\theta|$ (unit tangent vector to a level curve
of $\theta$).
\begin{thm} \label{crit3}
Consider (\ref{GBG}) with $\kappa > 0$, $\alpha>0$ and $0\le
\beta\le 1$. Let $\theta$ be the solution of (\ref{GBG})
corresponding to the initial data $\theta_0 \in H^m(\mathbb{R}^2)$
with $m>2$. Let $T>0$. Suppose there exists $\sigma \in (0,1)$,
$q_1\in (\frac{2}{1+\beta-\sigma} , \infty]$, $p_1\in (1, \infty]$,
$p_2 \in (1, \frac{2}{1+\sigma-\beta} )$ and $r_1, r_2 \in [1, \infty]$
such that the following holds:
\begin{eqnarray}\label{con220}
\xi\in L^{r_1}(0,T; \dot{\mathcal{F}}^\sigma_{p_1,q} (\mathbb R^2)) \quad \mbox{and}
\quad \nabla^\perp\theta \in L^{r_2} (0, T; L^{p_2} (\mathbb R^2 ))\\
\mbox{with}\qquad
\frac{1}{p_1} + \frac{1}{p_2} + \frac{\alpha}{r_1} +\frac{\alpha}{r_2}
\leq \alpha+\frac12(1+\sigma-\beta) .\nonumber
\end{eqnarray}
Then $\theta$ remains in $H^m (\mathbb R^2)$ on $[0,T]$. In particular,
when $p_1=r_1=q=\infty$, (\ref{con220}) becomes
\begin{eqnarray*}
\xi \in
L^\infty(0,T; C^\sigma (\mathbb R^2)) \quad\mbox{and}\quad \nabla^\perp
\theta
\in L^{r_2} (0, T; L^{p_2} (\mathbb R^2 ))\quad \\
\mbox{with}\qquad
\frac{1}{p_2} +\frac{\alpha}{r_2}
\leq \alpha+\frac12(1+\sigma-\beta).
\end{eqnarray*}
\end{thm}
Here $\dot{\mathcal{F}}^s_{p,q} (\mathbb{R}^2)$ denotes a homogeneous
Triebel-Lizorkin type space. For $0\le s\le 1$, $1\le p\le \infty$ and $1\le q\le \infty$, $\dot{\mathcal{F}}^s_{p,q}$ contains functions such that the following semi-norm is finite,
$$
\|f\|_{\dot{\mathcal{F}}^s_{p,q}} = \left\{
\begin{array}{ll}
\displaystyle \left\|\left(\int \frac{|f(x+y)-f(x)|^q}{|y|^{n+sq}}\, dy\right)^{\frac1q}\right\|_{L^p}, \quad & \mbox{if $q<\infty$},\\\\
\displaystyle \left\|\sup_{y\not =0} \frac{|f(x+y)-f(x)|}{|y|^s}\right\|_{L^p}, \quad & \mbox{if $q=\infty$}
\end{array}
\right.
$$
We note that if we set $\beta=1$ in Theorem \ref{crit3}, then it reduces to Theorem 1.2 of \cite{Cha}.
\vskip .1in
The rest of this paper is divided into two sections. Section \ref{proofmain} proves Theorem \ref{main1} while Section \ref{geocri} derives the geometric regularity criterion stated in Theorem \ref{crit3}.
\vskip .4in
\section{Proof of Theorem \ref{main1}}
\label{proofmain}
This section is devoted to the proof of Theorem \ref{main1}, which involves Besov space techniques and the bounds stated in Theorem \ref{nablau}. In addition, lower bound estimates associated with the fractional dissipation are also used.
\vskip .1in
\begin{proof}[Proof of Theorem \ref{main1}]
The proof is divided into two main parts. The first part establishes the global (in time) {\it a priori} bound on solutions of (\ref{general}) while the second part briefly describes the construction of a unique local (in time) solution.
\vskip .1in
For notational convenience, we write $Y= L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d) \cap B^{s,A}_{q,\infty}(\mathbb{R}^d)$. The first part derives the global bound, for any $T>0$,
\begin{equation}\label{bdd}
\|\theta(\cdot,t)\|_{B^{s,A}_{q,\infty}} \le C(T, \|\theta_0\|_{Y}) \quad\mbox{for}\quad t\le T
\end{equation}
and we distinguish between two cases: $q<\infty$ and $q=\infty$. The dissipative term is handled differently in these two cases.
\vskip .1in
We start with the case when $q<\infty$. When the velocity field $u$ is divergence-free, $\theta_0\in L^1 \cap L^\infty$ implies the corresponding solution $\theta$ of (\ref{general}) satisfies the {\it a priori} bound
\begin{equation} \label{mmm}
\|\theta(\cdot,t)\|_{L^1\cap L^\infty} \le \|\theta_0\|_{L^1\cap L^\infty},
\quad t \ge 0.
\end{equation}
When $u$ is not divergence-free, (\ref{mmm}) is assumed. The divergence-free condition is not used in the rest of the proof.
\vskip .1in
Let $j\ge -1$ be an integer. Applying $\Delta_j$ to (\ref{general}) and following a standard decomposition, we have
\begin{equation}\label{base1}
\partial_t \Delta_j \theta + \kappa (-\Delta)^\alpha \Delta_j \theta = J_1 + J_2 + J_3 +J_4 +J_5,
\end{equation}
where
\begin{eqnarray}
J_{1} &=& - \sum_{|j-k|\le 2}
[\Delta_j, S_{k-1}(u)\cdot\nabla] \Delta_k \theta, \label{j1t}\\
J_{2} &=& - \sum_{|j-k|\le 2}
(S_{k-1}(u) - S_j(u)) \cdot \nabla \Delta_j\Delta_k \theta, \label{j2t}\\
J_3 &=& - S_j(u) \cdot\nabla \Delta_j\theta, \label{j3t}\\
J_{4} &=& - \sum_{|j-k|\le 2}
\Delta_j (\Delta_k u \cdot \nabla S_{k-1}
(\theta)), \label{j4t}\\
J_{5} &=& -\sum_{k\ge j-1}\Delta_j (\widetilde{\Delta}_k u\cdot\nabla
\Delta_k \theta) \label{j5t}
\end{eqnarray}
with $\widetilde{\Delta}_k = \Delta_{k-1} + \Delta_k + \Delta_{k+1}$. We multiply (\ref{base1}) by $\Delta_j\theta |\Delta_j \theta|^{q-2}$ and integrate in space. Integrating by parts in the term associated with $J_3$, we obtain
\begin{eqnarray*}
-\int_{\mathbb{R}^d} \left(S_j (u) \cdot\nabla \Delta_j\theta\right) \,\Delta_j\theta |\Delta_j \theta|^{q-2} \,dx &=& \frac1q \, \int_{\mathbb{R}^d} (\nabla\cdot S_j u) |\Delta_j \theta|^q \,dx\\
&=& \int_{\mathbb{R}^d} \widetilde{J_3}\, |\Delta_j \theta|^{q-1}\,dx,
\end{eqnarray*}
where $\widetilde{J_3}$ is given by
$$
\widetilde{J_3} = \frac1q (\nabla\cdot S_j u) |\Delta_j \theta|.
$$
Applying H\"{o}lder's inequality, we have
\begin{eqnarray}
&& \frac1q\,\frac{d}{dt} \|\Delta_j \theta\|^q_{L^q} + \kappa \int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta\,dx \label{root1}\\
&& \qquad\qquad \qquad \le \left(\|J_1\|_{L^q} + \|J_2\|_{L^q} + \|\widetilde{J_3}\|_{L^q} + \|J_4\|_{L^q} + \|J_5\|_{L^q}\right) \|\Delta_j \theta\|_{L^q}^{q-1}. \nonumber
\end{eqnarray}
For $j\ge 0$, we have the lower bound (see \cite{CMZ1} and \cite{Wu31})
\begin{equation}\label{low}
\int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta \ge C\, 2^{2\alpha j}\,\|\Delta_j \theta\|_{L^q}^q.
\end{equation}
For $j=-1$, this lower bound is invalid. Still we have
\begin{equation}\label{pos}
\int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta \ge 0.
\end{equation}
We first consider the case $j\ge 0$. Inserting (\ref{low}) in (\ref{root1}) leads to
$$
\frac{d}{dt} \|\Delta_j \theta\|_{L^q} + \kappa \, 2^{2\alpha j}\, \|\Delta_j \theta\|_{L^q} \le \|J_1\|_{L^q} + \|J_2\|_{L^q} + \|\widetilde{J_3}\|_{L^q} + \|J_4\|_{L^q} + \|J_5\|_{L^q}.
$$
By a standard commutator estimate,
$$
\|J_1\|_{L^q} \le C \sum_{|j-k|\le 2} \|\nabla S_{k-1} u\|_{L^\infty} \|\Delta_k \theta\|_{L^q}.
$$
By H\"{o}lder's and Bernstein's inequalities,
$$
\|J_2\|_{L^q} \le C\, \|\nabla \widetilde{\Delta}_j u\|_{L^\infty} \, \|\Delta_j \theta\|_{L^q}.
$$
Clearly,
$$
\|\widetilde{J_3}\|_{L^q} \le C\,\|\nabla\cdot S_j u\|_{L^\infty} \, \|\Delta_j \theta\|_{L^q}.
$$
For $J_4$ and $J_5$, we have
\begin{eqnarray*}
\|J_4\|_{L^q} &\le& \sum_{|j-k|\le 2} \|\Delta_k u\|_{L^\infty} \, \|\nabla S_{k-1} \theta\|_{L^q},\\
\|J_5\|_{L^q} &\le& \sum_{k\ge j-1} \,\|\widetilde{\Delta}_k u\|_{L^\infty} \| \Delta_k \nabla \theta\|_{L^q} \\
&\le& C\, \sum_{k\ge j-1} \|\nabla \widetilde{\Delta}_k u\|_{L^\infty}\, \|\Delta_k \theta\|_{L^q}.
\end{eqnarray*}
These terms can be further bounded as follows. By Theorem \ref{nablau},
\begin{eqnarray*}
\|\nabla S_k u\|_{L^\infty} & \le & \|\theta_0\|_{L^1\cap L^\infty} + C k\,P(2^{k+1})\|S_{k+1} \theta\|_{L^\infty}\\
&\le& \|\theta_0\|_{L^1\cap L^\infty} + C k\,P(2^{k+1}) \|\theta_0\|_{L^\infty}.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\|J_1\|_{L^q} &\le& C\, \|\theta_0\|_{L^1\cap L^\infty} \sum_{|j-k|\le 2} (1+ C k\,P(2^{k+1})) 2^{-s A_k} \, 2^{s A_k} \|\Delta_k \theta\|_{L^q}\\
&\le& C\, 2^{-s A_j}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\sum_{|j-k|\le 2}(1+ C k\,P(2^{k+1})) 2^{s(A_j-A_k)}.
\end{eqnarray*}
Since $A_j$ is a nondecreasing function of $j$,
\begin{equation}\label{ajk}
2^{s (A_j -A_k)} \le 2^{s (A_j-A_{j-2})} \quad\mbox{for}\quad |k-j|\le 2,
\end{equation}
where we have adopted the convention that $A_l\equiv 0$ for $l<-1$. Consequently,
$$
\|J_1\|_{L^q} \le C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right).
$$
Clearly, $\|J_2\|_{L^q}$ and $\|J_3\|_{L^q}$ admit the same bound
as $\|J_1\|_{L^q}$. By Bernstein's inequality and Theorem \ref{nablau},
\begin{eqnarray*}
\|J_4\|_{L^q} &\le& C\,\sum_{|j-k| \le 2} \|\nabla \Delta_k u\|_{L^q}\, \|S_{k-1} \theta\|_{L^\infty} \\
&\le& C\,\|\theta\|_{L^\infty} \sum_{|j-k| \le 2} P(2^{k+1}) \|\Delta_k \theta\|_{L^q}.
\end{eqnarray*}
By (\ref{ajk}), we have
$$
\|J_4\|_{L^q} \le C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^\infty}\, \|\theta\|_{B^{s,A}_{q,\infty}}\,P(2^{j+2}).
$$
By Theorem \ref{nablau},
\begin{eqnarray*}
\|J_5\|_{L^q} &\le& C\, \sum_{k\ge j-1} P(2^{k+1}) \|\widetilde{\Delta}_k \theta\|_{L^\infty}\|\Delta_k \theta\|_{L^q}\\
&\le& C\, \|\theta_0\|_{L^\infty} \sum_{k\ge j-1} P(2^{k+1}) \|\Delta_k \theta\|_{L^q}\\
&\le& C\, \|\theta_0\|_{L^\infty} 2^{-s A_{j-2}}\, P(2^{j+1}) \|\theta\|_{B^{s,A}_{q,\infty}}\, \sum_{k\ge j-1} \frac{2^{s A_{j-2}}}{P(2^{j+1})}\, \frac{P(2^{k+1})}{2^{s A_k}}
\end{eqnarray*}
By (\ref{sb}),
$$
\|J_5\|_{L^q} \le C\, \|\theta_0\|_{L^\infty} 2^{-s A_{j-2}}\, P(2^{j+1}) \|\theta\|_{B^{s,A}_{q,\infty}}.
$$
Collecting all the estimates, we have, for $j\ge 0$,
\begin{eqnarray*}
\frac{d}{dt} \|\Delta_j \theta\|_{L^q} + \kappa \,2^{2\alpha j}\, \|\Delta_j \theta\|_{L^q} &\le& C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty}\\
&& \,\times \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right).
\end{eqnarray*}
That is,
$$
\frac{d}{dt} \left(e^{\kappa 2^{2\alpha j} t} \|\Delta_j \theta\|_{L^q}\right) \le C\, e^{\kappa 2^{2\alpha j} t}2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right).
$$
Integrating in time and multiplying by $2^{s A_j}\cdot e^{ -\kappa 2^{2\alpha j} t}$, we obtain, for $j\ge 0$,
\begin{equation}\label{aj1}
2^{s A_j}\,\|\Delta_j \theta\|_{L^q} \le 2^{s A_j}\,e^{ -\kappa 2^{2\alpha j} t}\|\Delta_j \theta_0\|_{L^q} + K_j,
\end{equation}
where
\begin{eqnarray*}
K_j = C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j+2)P(2^{j+2})\right) 2^{s (A_j-A_{j-2})}\int_0^t e^{-\kappa 2^{2\alpha j} (t-\tau)} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau.
\end{eqnarray*}
To further the estimates, we fix $t_0\le T$ and let $t\le t_0$. It is easy to see that $K_j$ admits the upper bound
\begin{eqnarray*}
K_j &\le & C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j+2)P(2^{j+2})\right) 2^{s (A_j-A_{j-2})} \\
&& \quad \times \frac{1}{\kappa 2^{2\alpha j}} \left(1-e^{-\kappa 2^{2\alpha j} t} \right)\, \sup_{0\le \tau \le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}.
\end{eqnarray*}
According to (\ref{decay}), there exists an integer $j_0$
such that, for $j\ge j_0$,
\begin{equation}\label{bdd1}
K_j\le \frac12 \sup_{0\le \tau \le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}.
\end{equation}
For $0\le j\le j_0$,
\begin{equation}\label{bdd2}
K_j \le C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j_0+2)P(2^{j_0+2})\right) \max_{0\le j\le j_0} 2^{s(A_j-A_{j-2})} \int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau.
\end{equation}
We now turn to the case when $j=-1$. By combining (\ref{base1}) and (\ref{pos}) and estimating $\|J_1\|_{L^q}$ through $\|J_5\|_{L^q}$ in a similar fashion as for the case $j\ge 0$, we obtain
\begin{equation}\label{aj2}
\|\Delta_{-1} \theta(t)\|_{L^q} \le \|\Delta_{-1} \theta(0)\|_{L^q} + C\,\|\theta_0\|_{L^1\cap L^\infty}\,\int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\, d\tau.
\end{equation}
Putting (\ref{aj1}) and (\ref{aj2}) together, we find, for any $j\ge -1$,
\begin{equation}\label{aj3}
2^{s A_j}\,\|\Delta_j \theta\|_{L^q} \le \|\theta_0\|_{B^{s,A}_{q,\infty}} + K_j,
\end{equation}
where $K_j$ obeys the bound (\ref{bdd1}) for $j\ge j_0$ and the bound in (\ref{bdd2}) for $-1\le j<j_0$. Applying $\sup_{j\ge -1}$ to (\ref{aj3}) and using the simple fact that
$$
\sup_{j\ge -1} K_j \le \sup_{j\ge j_0} K_j + \sup_{-1 \le j < j_0} K_j,
$$
we obtain
\begin{eqnarray*}
\|\theta(t)\|_{B^{s,A}_{q,\infty}} &\le& \|\theta_0\|_{B^{s,A}_{q,\infty}} + \frac12 \sup_{0\le \tau\le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}} + C(\theta_0, j_0) \int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau,
\end{eqnarray*}
where
$$
C(\theta_0, j_0) = C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j_0+2)P(2^{j_0+2})\right) \max_{0\le j\le j_0} 2^{s(A_j-A_{j-2})}.
$$
Now taking the supremum over $t\in[0,t_0]$, we obtain
$$
\sup_{0\le \tau\le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}} \le 2\,\|\theta_0\|_{B^{s,A}_{q,\infty}} + C(\theta_0, j_0) \int_0^{t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau,
$$
Gronwall's inequality then implies (\ref{bdd}) for any $t\le t_0\le T$. This finishes the case when $q<\infty$.
\vskip .1in
We now turn to the case when $q=\infty$. For $j\ge 0$, applying $\Delta_j$ yields
$$
\partial_t \Delta_j \theta + S_j u \cdot \nabla (\Delta_j \theta) + \kappa (-\Delta)^\alpha \Delta_j \theta = J_1 + J_2 + J_4 +J_5
$$
where $J_1$, $J_2$, $J_4$ and $J_5$ are as defined in (\ref{j1t}), (\ref{j2t}), (\ref{j4t}) and (\ref{j5t}), respectively. According to
Lemma \ref{localm} below, we have
\begin{equation} \label{mmmjjj}
\partial_t \|\Delta_j \theta\|_{L^\infty} + C\, 2^{2\alpha j} \|\Delta_j\theta\|_{L^\infty} \le \|J_1\|_{L^\infty} + \|J_2\|_{L^\infty} + \|J_4\|_{L^\infty} + \|J_5\|_{L^\infty}.
\end{equation}
The terms on the right can be estimated similarly as in the case when $q<\infty$. For $j=-1$, (\ref{mmmjjj}) is replaced by
$$
\partial_t \|\Delta_{-1} \thetaeta\|_{L^\infty} \le \|J_1\|_{L^\infty} + \|J_2\|_{L^\infty} + \|J_4\|_{L^\infty} + \|J_5\|_{L^\infty}.
$$
The rest of the proof for this case is then very similar to the case $q<\infty$ and we thus omit further details.
\vskip .1in
We briefly describe the construction of a local solution of (\ref{general}) and prove its uniqueness. The solution is constructed through the method of successive approximation. More precisely, we consider a successive approximation sequence $\{\thetaeta^{(n)}\}$ satisfying
\begin{equation}\label{succ}
\left\{
\begin{array}{l}
\theta^{(1)} = S_2 \theta_0, \\ \\
u^{(n)} = (u^{(n)}_j), \quad u^{(n)}_j = \mathcal{R}_l \Lambda^{-1} P(\Lambda) \theta^{(n)},\\ \\
\partial_t \theta^{(n+1)} + u^{(n)} \cdot\nabla \theta^{(n+1)} + \kappa (-\Delta)^\alpha\theta^{(n+1)} = 0,\\ \\
\theta^{(n+1)}(x,0) = S_{n+2} \theta_0
\end{array}
\right.
\end{equation}
and show that $\{\theta^{(n)}\}$ converges to a solution of (\ref{general}). It suffices to prove the following properties of $\{\theta^{(n)}\}$:
\begin{enumerate}
\item[i)] There exists $T_1>0$ such that $\theta^{(n)}$ is bounded uniformly in $B^{s,A}_{q,\infty}$ for any $t\in[0,T_1]$, namely
$$
\|\theta^{(n)}(\cdot,t)\|_{B^{s,A}_{q,\infty}} \le C_1 \|\theta_0\|_{Y}, \quad t\in [0,T_1],
$$
where $C_1$ is a constant independent of $n$.
\item[ii)] There exists $T_2>0$ such that $\eta^{(n+1)} = \theta^{(n+1)}- \theta^{(n)}$ is a Cauchy sequence in $B^{s-1,A}_{q,\infty}$,
$$
\|\eta^{(n)}(\cdot,t)\|_{B^{s-1,A}_{q,\infty}} \le C_2\, 2^{-n}, \quad t\in [0, T_2],
$$
where $C_2$ is independent of $n$ and depends on $T_2$ and $\|\theta_0\|_Y$ only.
\end{enumerate}
Since the essential ingredients in the proof of i) and ii) have appeared in proving the {\it a priori} bound, we omit the details. The uniqueness can be established by estimating the difference of any two solutions in $B^{s-1,A}_{q,\infty}$. A similar argument as in the proof of ii) would yield the desired result. This completes the proof of Theorem \ref{main1}.
\end{proof}
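To make the successive-approximation scheme (\ref{succ}) concrete, the following numerical sketch iterates it for one specific choice of the velocity law, an SQG-type relation $u=\nabla^\bot\Lambda^{-1}\theta$ on the periodic box $[0,2\pi)^2$; this particular velocity, the grid size, the time step and the number of iterates are illustrative assumptions only, and the Littlewood--Paley truncations $S_{n+2}\theta_0$ of the initial data are omitted for brevity. Each pass solves a linear advection--diffusion equation driven by the velocity computed from the previous iterate, exactly as in (\ref{succ}).
\begin{verbatim}
# Numerical sketch of the successive-approximation scheme: theta^{(n+1)} solves a
# linear advection-diffusion equation whose velocity comes from theta^{(n)}.
# Assumptions (not from the paper): SQG-type velocity u = grad^perp Lambda^{-1} theta,
# periodic box [0, 2*pi)^2, explicit advection + exact (spectral) fractional dissipation.
import numpy as np

N, kappa, alpha, dt, T = 128, 1e-3, 0.5, 1e-3, 0.1
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers on [0, 2*pi)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
Lam = np.sqrt(K2); Lam[0, 0] = 1.0             # avoid dividing by zero at the mean mode

def velocity(theta_hat):
    """u = (-d/dy, d/dx) Lambda^{-1} theta (one common SQG sign convention)."""
    psi_hat = theta_hat / Lam
    return (np.real(np.fft.ifft2(-1j * KY * psi_hat)),
            np.real(np.fft.ifft2( 1j * KX * psi_hat)))

def linear_step(theta_hat, ux, uy):
    """One step of d_t theta + u.grad theta + kappa (-Delta)^alpha theta = 0."""
    gx = np.real(np.fft.ifft2(1j * KX * theta_hat))
    gy = np.real(np.fft.ifft2(1j * KY * theta_hat))
    adv_hat = np.fft.fft2(ux * gx + uy * gy)
    return (theta_hat - dt * adv_hat) * np.exp(-kappa * K2**alpha * dt)

x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
theta0_hat = np.fft.fft2(np.sin(X) * np.cos(Y))
steps = int(T / dt)

prev_traj = [theta0_hat] * steps               # crude first iterate: theta^{(1)}(t) = theta_0
for n in range(4):
    theta_hat, traj = theta0_hat.copy(), []
    for m in range(steps):
        ux, uy = velocity(prev_traj[m])        # u^{(n)}(t_m) from the previous iterate
        theta_hat = linear_step(theta_hat, ux, uy)
        traj.append(theta_hat)
    diff = np.max(np.abs(np.fft.ifft2(traj[-1] - prev_traj[-1])))
    print(f"iterate {n + 2}: sup |theta^(n+1) - theta^(n)| at t = T is {diff:.3e}")
    prev_traj = traj
\end{verbatim}
The successive differences printed at the final time shrink as the iteration proceeds, which is the numerical counterpart of the Cauchy property ii) above.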
\vskip .1in
We have used the following lemma in the proof of Theorem \ref{main1}. It is obtained in \cite{WaZh}.
\begin{lemma} \label{localm}
Let $j\ge 0$ be an integer. Let $\theta$, $u$ and $f$ be smooth functions solving the equation
$$
\partial_t \Delta_j \theta + u\cdot\nabla \Delta_j \theta + \kappa (-\Delta)^\alpha \Delta_j \theta =f,
$$
where $\kappa>0$ is a parameter. Assume that $\Delta_j \theta$ vanishes at infinity. Then, there exists a constant $C$ independent of $\theta$, $u$, $f$ and $j$ such that
$$
\partial_t \|\Delta_j \theta\|_{L^\infty} + C\, 2^{2\alpha j} \|\Delta_j \theta\|_{L^\infty} \le \|f\|_{L^\infty}.
$$
\end{lemma}
\vskip .3in
\section{Geometric regularity criterion}
\label{geocri}
\vskip .06in
In this section we prove Theorem \ref{crit3}. For this we recall the
following Serrin type of criterion, which is proved for $\beta=1$ in
\cite[Theorem 1.1]{Cha},
and obviously holds true for our case of $\beta\in [0, 1]$.
\begin{thm}\label{crit30} Let $\theta (x,t)$ be a solution of (\ref{GBG}) with
initial data $\theta_0\in H^m (\mathbb{R}^2)$ with $m>2$. Let $T>0$. If
there are indices $p,r$ with $\frac{1}{\alpha}<p<\infty$ and
$1<r<\infty$ respectively such that
\begin{equation}\label{thm1}
\nabla^\bot \theta \in L^r (0,T; L^p (\mathbb R^2 )) \quad \mbox{ with}\quad
\frac{1}{p} +\frac{\alpha}{r}\leq \alpha,
\end{equation}
then $\theta$ remains in $H^m (\mathbb R^2)$ on $[0,T]$.
\end{thm}
\vskip .1in
\begin{proof}[Proof of Theorem \ref{crit3}] Since the proof is
similar to that of Theorem 1.2 in \cite{Cha}, we will be brief here,
mostly pointing out the essential changes.
Let $p$ be an integer of the form $p=2^k$, where $k$ is a positive
integer, satisfying
\begin{equation}\label{first}
\frac{1}{\alpha} \leq p <\infty.
\end{equation}
We apply $\nabla^\bot$ to (\ref{GBG}), take the $L^2(\mathbb R^2)$ inner product of the resulting equation with
$\nabla^\bot \theta (x,t) |\nabla^\bot \theta (x,t)|^{p-2}$, and then
substitute $u=-\nabla^\bot \Lambda^{-2+\beta} \theta$ to obtain
\begin{eqnarray}\label{ve}
\lefteqn{\frac{1}{p} \frac{d}{dt} \|\nabla^\bot \theta(t)\|_{L^p} ^p +\kappa\int
(\Lambda ^{2\alpha}\nabla^\bot\theta )\cdot \nabla^\bot\theta\, |\nabla^\bot\theta |^{p-2} dx}\hspace{0.0in}\nonumber \\
&&=\int (\nabla^\bot \theta \cdot \nabla )u \cdot \nabla^\bot \theta\, |\nabla^\bot\theta |^{p-2}dx\nonumber
\\
&& = \int \int [\nabla \theta (x,t)\cdot \hat{y} ] [\nabla^\bot \theta
(x+y,t)\cdot
\nabla \theta (x,t )]\frac{dy}{|y|^{1+\beta}} |\nabla^\bot\theta (x,t)|^{p-2}dx \nonumber \\
&&:=I,
\end{eqnarray}
where the integral with respect to $y$ on the right-hand side is understood in
the sense of the principal value. The dissipation term can be estimated as follows:
\begin{eqnarray}\label{dis}
\lefteqn{\kappa\int
(\Lambda ^{2\alpha}\nabla^\bot\theta )\cdot \nabla^\bot\theta\, |\nabla^\bot\theta |^{p-2} dx
\geq \frac{\kappa}{p} \int \left|\Lambda ^{\alpha} |\nabla^\bot\theta
|^{\frac{p}{2}} \right|^2 dx }\hspace{.2in}\nonumber \\
&\geq& \frac{\kappa C_\alpha}{p} \left(\int |\nabla^\bot\theta |^{\frac{p}{1-\alpha}} dx
\right)^{1-\alpha}=\frac{\kappa C_\alpha}{p}\|\nabla^\bot\theta
\|_{L^{\frac{p}{1-\alpha}}} ^p,
\end{eqnarray}
where we used Lemma 2.4 of \cite{CC} and the embedding
$L^2_{\alpha} (\mathbb R^2)\hookrightarrow L^{\frac{2}{1-\alpha}}
(\mathbb R^2)$.
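The first inequality above is the scalar positivity estimate of C\'ordoba--C\'ordoba type,
$\int |f|^{p-2} f\,\Lambda^{2\alpha} f\,dx \ge \frac1p \int \big|\Lambda^{\alpha}|f|^{p/2}\big|^2 dx$,
applied componentwise. The following one-dimensional spectral sketch is only a numerical sanity check of this scalar inequality (not a proof, and not part of the argument); the test function, $p$ and $\alpha$ are arbitrary choices.
\begin{verbatim}
# Numerical sanity check (1D, periodic) of the positivity estimate
#   int |f|^{p-2} f Lambda^{2 alpha} f dx  >=  (1/p) int |Lambda^alpha (|f|^{p/2})|^2 dx,
# with Lambda = (-Delta)^{1/2} computed spectrally. Illustrative parameters only.
import numpy as np

N, alpha, p = 1024, 0.5, 4                       # p of the form 2^k, as in the text
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.abs(np.fft.fftfreq(N, d=1.0 / N))         # |k| for the multiplier of Lambda

def frac_lap(g, power):
    """Lambda^{power} g via the Fourier multiplier |k|^{power}."""
    return np.real(np.fft.ifft(k**power * np.fft.fft(g)))

f = np.sin(x) + 0.5 * np.cos(3 * x) + 0.2 * np.sin(7 * x + 1.0)
dx = 2 * np.pi / N
lhs = np.sum(np.abs(f)**(p - 2) * f * frac_lap(f, 2 * alpha)) * dx
rhs = np.sum(frac_lap(np.abs(f)**(p / 2), alpha)**2) * dx / p
print(f"lhs = {lhs:.6f}  >=  rhs = {rhs:.6f}  :  {lhs >= rhs}")
\end{verbatim}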
Next, we estimate $I$ as follows.
\begin{eqnarray*}
\lefteqn{I=\int\int (\xi ^ \bot (x,t)\cdot \hat{y} ) [\xi(x+y,t) \cdot \xi^\bot (x,t)]|\nabla^\bot \theta (x+y, t)|
\frac{dy}{|y|^{1+\beta}} |\nabla^\bot \theta (x,t ) |^p dx}\nonumber \hspace{.0in}\\
&&=\int\int (\xi ^\bot (x,t)\cdot \hat{y} ) [\xi(x+y,t) -\xi(x,t) ]\cdot\xi^\bot (x,t)|\nabla^\bot \theta (x+y, t)|
\frac{dy}{|y|^{1+\beta}} |\nabla^\bot\theta (x,t ) |^p dx \nonumber\\
&&\leq \int\int |\xi(x+y,t) -\xi(x,t) | |\nabla^\bot \theta
(x+y,t)|\frac{dy}{|y|^{\frac{2+(\beta-1 +s)q}{q} +\frac{2-sq'}{q'}}}
|\nabla^\bot \theta (x,t) |^p dx \nonumber\\
&&\leq
\int \left(\int \frac{|\xi(x+y,t) -\xi(x,t) |^q}{|y|^{2+(\beta-1+s) q}} dy\right)^{\frac{1}{q}}
\left( \int \frac{|\nabla^\bot \theta (x+y,t)|^{q'}}{ |y|^{2-s q'}} dy \right)^{\frac{1}{q'}} |\nabla^\bot \theta |^p dx \nonumber \\
&&\leq \|\xi \|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} \left\|\{I_{s q'} ( |\nabla^\bot \theta |^{q'}) \}^{\frac{1}{q'}}
\right\|_{L^{\tilde{p}_2}} \|\nabla^\bot \theta\|^p _{L^{p_3}},\nonumber \\
\end{eqnarray*}
where we used the fact $\xi(x,t) \cdot\xi^\bot (x,t)=0$ in the second
equality, and H\"{o}lder's inequality in the second and the
third inequalities with the exponents satisfying
\begin{equation}\label{pcon}
\frac{1}{p_1} +\frac{1}{\tilde{p}_2} +\frac{p}{p_3}=1, \qquad \frac{1}{q} +\frac{1}{q'} =1,
\end{equation}
and $I_{a} (\cdot ) $, $0<a <2$, is the operator defined by the Riesz
potential. We also set
\begin{equation}\label{sigma}
\sigma=\beta-1 +s
\end{equation}
in the last inequality. After this, we apply the Hardy-Littlewood-Sobolev inequality and Young's
inequality to estimate $I$, similarly to the proof of Theorem 1.2 of \cite{Cha}, and deduce
\begin{equation}\label{last1}
\frac{d}{dt} \|\nabla^\bot \theta(t)\|_{L^p} ^p +\frac{\kappa C_\alpha }{2} \| \nabla^\bot
\theta(t)
\|_{L^{\frac{p}{1-\alpha}}} ^p \leq
C\|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^Q\|\nabla^\bot \theta (t)\|_{L^{p_2}}
^Q \|\nabla^\bot \theta (t)\|_{L^p} ^p,
\end{equation}
where we set
\begin{equation}\label{Q} Q=\frac{2\alpha p_1p_2}{(2\alpha+s )p_1 p_2 -2p_1 -2p_2},
\end{equation}
which needs to satisfy
\begin{equation}\label{indices}
\frac{1}{r_1}+\frac{1}{r_2}\leq \frac{1}{Q}.
\end{equation}
We note that (\ref{indices}) is equivalent to
$$
\frac{1}{p_1} + \frac{1}{p_2} + \frac{\alpha}{r_1} +\frac{\alpha}{r_2}
\leq \alpha+\frac12(1+\sigma-\beta)
$$
after substituting $Q$ and $s$ from (\ref{Q}) and (\ref{sigma})
respectively
into (\ref{indices}).
Since
$$
\int_0 ^T \|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^Q\|\nabla^\bot \theta (t)\|_{L^{p_2}} ^Q dt
\leq \left(\int_0 ^T\|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^{r_1} dt\right)^{\frac{Q}{r_1}}
\left(\int_0 ^T\|\nabla^\bot \theta (t)\|_{L^{p_2}} ^{r_2}
dt\right)^{\frac{Q}{r_2}} <\infty
$$
by our hypothesis, the inequality (\ref{last1}) leads us to
$$\int_0 ^T \| \nabla^\bot \theta
\|_{L^{\frac{p}{1-\alpha}}} ^p dt <\infty.
$$
Now applying Theorem \ref{crit30}, we conclude the proof.
\end{proof}
\vskip .4in
\begin{thebibliography}{99}
\bibitem{AbHm} H. Abidi and T. Hmidi, On the global well-posedness of the critical quasi-geostrophic equation, {\it SIAM J. Math. Anal. \bf 40} (2008), 167--185.
\bibitem{Bae} H. Bae, Global well-posedness of dissipative quasi-geostrophic equations in critical spaces. {\it Proc. Amer. Math. Soc. \bf 136} (2008), 257--261.
\bibitem{Bar} B. Barrios, Regularization for the supercritical quasi-geostrophic equation, arXiv:1007.4889v1 [math.AP] 28 Jul 2010.
\bibitem{BL} J. Bergh and J. L\"{o}fstr\"{o}m, {\it Interpolation Spaces,
An Introduction}, Springer-Verlag, Berlin-Heidelberg-New York, 1976.
\bibitem{Blu} W. Blumen, Uniform potential vorticity flow, Part I.
Theory of wave interactions and two-dimensional turbulence, {\it J.
Atmos. Sci.} {\bf 35} (1978), 774-783.
\bibitem{CaS} L. Caffarelli and L. Silvestre, An extension problem
related to the fractional Laplacian, {\it Comm. Partial Differential Equations \bf 32} (2007), 1245--1260.
\bibitem{CV} L. Caffarelli and A. Vasseur, Drift diffusion
equations with fractional diffusion and the quasi-geostrophic
equation, {\it Ann. of Math.}, in press.
\bibitem{CaFe} J. Carrillo and L. Ferreira, The asymptotic behaviour of subcritical dissipative quasi-geostrophic equations, {\it Nonlinearity \bf 21} (2008), 1001-1018.
\bibitem{Ch} D. Chae, The quasi-geostrophic equation in the Triebel-Lizorkin spaces,
{\it Nonlinearity} {\bf 16} (2003), 479-495.
\bibitem{ChJDE} D. Chae, On the continuation principles for the Euler equations and the quasi-geostrophic equation, {\it J. Differential Equations \bf 227} (2006), 640--651.
\bibitem{Cha}D. Chae, On the regularity conditions for the
dissipative quasi-geostrophic equations, {\it SIAM J. Math. Anal.}
{\bf 37} (2006), 1649-1656.
\bibitem{Cha2}D. Chae, The geometric approaches to the possible singularities in the inviscid fluid flows, {\it J. Phys. A \bf 41} (2008), 365501, 11 pp.
\bibitem{Cha4} D. Chae, On the a priori estimates for the Euler, the Navier-Stokes and the quasi-geostrophic equations, {\it Adv. Math. \bf 221} (2009), 1678--1702.
\bibitem{ChCW} D. Chae, P. Constantin and J. Wu, Inviscid models generalizing the 2D Euler and the surface quasi-geostrophic equations, arXiv:1010.1506v1 [math.AP] 7 Oct 2010.
\bibitem{CCCF} D. Chae, A. C\'{o}rdoba, D. C\'{o}rdoba and M. Fontelos, Finite time singularities in a 1D model of the quasi-geostrophic equation, {\it Adv. Math. \bf 194} (2005), 203--223.
\bibitem{ChL} D. Chae and J. Lee, Global well-posedness in the super-critical
dissipative quasi-geostrophic equations, {\it Commun. Math. Phys.}
{\bf 233} (2003), 297-311.
\bibitem{Cham} D. Chamorro, Remarks on a fractional diffusion transport equation with applications to the critical dissipative quasi-geostrophic equation, arXiv:1007.3919v2 [math.AP] 22 Oct 2010.
\bibitem{Che} J.-Y. Chemin, {\it Perfect Incompressible Fluids},
Oxford science publications, Oxford University Press, 1998.
\bibitem{CMZ1} Q. Chen, C. Miao and Z. Zhang,
A new Bernstein's inequality and the 2D dissipative
quasi-geostrophic equation, {\it Commun. Math. Phys. \bf 271}
(2007), 821-838.
\bibitem{Chen} Q. Chen and Z. Zhang,
Global well-posedness of the 2D critical dissipative
quasi-geostrophic equation in the Triebel-Lizorkin spaces, {\it
Nonlinear Anal. \bf 67} (2007), 1715-1725.
\bibitem{Con} P. Constantin, Euler equations, Navier-Stokes equations and turbulence.
Mathematical foundation of turbulent viscous flows, 1--43, Lecture
Notes in Math., 1871, Springer, Berlin, 2006.
\bibitem{CCW} P. Constantin, D. C\'{o}rdoba and J. Wu, On the critical dissipative
quasi-geostrophic equation, {\it Indiana Univ. Math. J.} {\bf 50}
(2001), 97-107.
\bibitem{ConF} P. Constantin and C. Foias, {\it Navier-Stokes Equations}, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, 1988.
\bibitem{CIW} P. Constantin, G. Iyer and J. Wu, Global regularity for a modified critical dissipative quasi-geostrophic equation, {\it Indiana Univ. Math. J.}
{\bf 57} (2008), 2681-2692.
\bibitem{CLS}P. Constantin, M.-C. Lai, R. Sharma, Y.-H. Tseng and J. Wu, New numerical results for the surface quasi-geostrophic equation, submitted for publication.
\bibitem{CMT} P. Constantin, A. Majda, and E. Tabak,
Formation of strong fronts in the 2-D quasi-geostrophic thermal
active scalar, {\it Nonlinearity} {\bf 7} (1994), 1495-1533.
\bibitem{CNS} P. Constantin, Q. Nie and N. Schorghofer, Nonsingular
surface quasi-geostrophic flow, {\it Phys. Lett. A } {\bf 241}
(1998), 168--172.
\bibitem{CW5} P. Constantin and J. Wu, Behavior of solutions of 2D
quasi-geostrophic equations, {\it SIAM J. Math. Anal.} {\bf 30}
(1999), 937-948.
\bibitem{CWnew1} P. Constantin and J. Wu, Regularity of H\"{o}lder continuous
solutions of the supercritical quasi-geostrophic equation, {\it Ann.
Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire} {\bf 25} (2008), 1103-1110.
\bibitem{CWnew2} P. Constantin and J. Wu, H\"{o}lder continuity of solutions
of supercritical dissipative hydrodynamic transport equation, {\it
Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire} {\bf 26} (2009), 159-180.
\bibitem{Cor} D. C\'{o}rdoba, Nonexistence of simple hyperbolic blow-up for the quasi-geostrophic equation, {\it Ann. of Math. \bf 148} (1998), 1135--1152.
\bibitem{CC} A. C\'{o}rdoba and D. C\'{o}rdoba, A maximum principle
applied to quasi-geostrophic equations, {\it Commun. Math. Phys.}
{\bf 249} (2004), 511-528.
\bibitem{CoFe1} D. C\'{o}rdoba and Ch. Fefferman, Behavior of several
two-dimensional fluid equations in singular scenarios, {\it Proc.
Natl. Acad. Sci. USA} {\bf 98} (2001), 4311--4312.
\bibitem{CoFe2} D. C\'{o}rdoba and Ch. Fefferman, Scalars convected by
a two-dimensional incompressible flow, {\it Comm. Pure Appl. Math.}
{\bf 55} (2002), 255--260.
\bibitem{CoFe3} D. C\'{o}rdoba and Ch. Fefferman, Growth of solutions for QG and 2D
Euler equations, {\it J. Amer. Math. Soc.} {\bf 15} (2002),
665--670.
\bibitem{CFMR} D. C\'{o}rdoba, M. Fontelos, A. Mancho and J. Rodrigo, Evidence of singularities for a family of contour dynamics equations, {\it Proc. Natl. Acad. Sci. USA \bf 102} (2005), 5949--5952.
\bibitem{Dab} M. Dabkowski, Eventual regularity of the solutions to the
supercritical dissipative quasi-geostrophic equation, arXiv:1007.2970v1 [math.AP] 18 Jul 2010.
\bibitem{DHLY} J. Deng, T. Y. Hou, R. Li and X. Yu, Level set dynamics and the non-blowup of the 2D quasi-geostrophic equation, {\it Methods Appl. Anal. \bf 13} (2006), 157--180.
\bibitem{DoGi} C. Doering and J.D. Gibbon, {\it Applied Analysis of the Navier-Stokes Equations}, Cambridge Texts in Applied Mathematics, Cambridge University Press, Cambridge, 1995.
\bibitem{DoCh} B. Dong and Z. Chen, Asymptotic stability of the critical and super-critical dissipative quasi-geostrophic equation, {\it Nonlinearity \bf 19} (2006), 2919-2928.
\bibitem{Dong}H. Dong, Dissipative quasi-geostrophic equations in critical Sobolev spaces: smoothing effect and global well-posedness, {\it Discrete Contin. Dyn. Syst. \bf 26} (2010), 1197--1211.
\bibitem{DoDu} H. Dong and D. Du, Global well-posedness and a decay estimate for the critical dissipative quasi-geostrophic equation in the whole space, {\it Discrete Contin. Dyn. Syst. \bf 21} (2008), 1095--1101.
\bibitem{DoLi0} H. Dong and D. Li, Finite time singularities for a class of generalized surface quasi-geostrophic equations, {\it Proc. Amer. Math. Soc. \bf 136}(2008), 2555--2563.
\bibitem{DoLi} H. Dong and D. Li, Spatial analyticity of the solutions to the subcritical dissipative quasi-geostrophic equations, {\it Arch. Ration. Mech. Anal. \bf 189} (2008), 131--158.
\bibitem{DoPo} H. Dong and N. Pavlovic, A regularity criterion for the dissipation quasi-geostrophic equation, {\it Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire} {\bf 26} (2009), 1607--1619.
\bibitem{DoPo2} H. Dong and N. Pavlovic, Regularity criteria for the dissipative quasi-geostrophic equations in H\"{o}lder spaces, {\it Comm. Math. Phys. \bf 290} (2009), 801--812.
\bibitem{FPV} S. Friedlander, N. Pavlovic and V. Vicol, Nonlinear instability for the critically dissipative quasi-geostrophic equation, {\it Comm. Math. Phys. \bf 292} (2009), 797--810.
\bibitem{FrVi}S. Friedlander and V. Vicol, Global well-posedness for an advection-diffusion equation arising in magneto-geostrophic dynamics, arXiv:1007.1211v1 [math.AP] 12 Jul 2010.
\bibitem{Gil} A.E. Gill, {\it Atmosphere-Ocean Dynamics}, Academic Press,
New York, 1982.
\bibitem{HPGS} I. Held, R. Pierrehumbert, S. Garner, and K. Swanson,
Surface quasi-geostrophic dynamics, {\it J. Fluid Mech.} {\bf 282}
(1995), 1-20.
\bibitem{HmKe} T. Hmidi and S. Keraani, Global solutions of the super-critical 2D quasi-geostrophic equation in Besov spaces, {\it Adv. Math. \bf 214} (2007), 618--638.
\bibitem{HmKe2} T. Hmidi and S. Keraani, On the global well-posedness of the critical quasi-geostrophic equation, {\it SIAM J. Math. Anal. \bf 40} (2008), 167--185.
\bibitem{Ju} N. Ju, The maximum principle and the global attractor for the dissipative
2D quasi-geostrophic equations, {\it Commun. Math. Phys. \bf 255}
(2005), 161-181.
\bibitem{Ju2} N. Ju, Geometric constrains for global regularity of 2D quasi-geostrophic flows, {\it J. Differential Equations \bf 226} (2006), 54--79.
\bibitem{KhTi}B. Khouider and E. Titi, An inviscid regularization for the surface quasi-geostrophic equation, {\it Comm. Pure Appl. Math. \bf 61} (2008), 1331--1346.
\bibitem{Ki1} A. Kiselev, Some recent results on the critical surface quasi-geostrophic equation: a review, {\it Hyperbolic problems: theory, numerics and applications}, 105--122, Proc. Sympos. Appl. Math., 67, Part 1, AMS, Providence, RI, 2009.
\bibitem{Ki2} A. Kiselev, Regularity and blow up for active scalars, {\it Math. Model. Math. Phenom. \bf 5} (2010), 225--255.
\bibitem{Kinew1} A. Kiselev, Nonlocal maximum principles for active scalars, arXiv: 1009.0542v1 [math.AP] 2 Sep 2010.
\bibitem{KN1}A. Kiselev and F. Nazarov, Global regularity for the critical dispersive dissipative surface quasi-geostrophic equation, {\it Nonlinearity \bf 23} (2010), 549--554.
\bibitem{KN2}A. Kiselev and F. Nazarov, A variation on a theme of Caffarelli and Vasseur, {\it Zap. Nauchn. Sem. POMI \bf 370} (2010), 58--72.
\bibitem{KNV}A. Kiselev, F. Nazarov and A. Volberg, Global well-posedness for the
critical 2D dissipative quasi-geostrophic equation, {\it Invent.
Math. \bf 167} (2007), 445-453.
\bibitem{Li} D. Li, Existence theorems for the 2D quasi-geostrophic equation with plane wave initial conditions, {\it Nonlinearity \bf 22} (2009), 1639--1651.
\bibitem{LiRo} D. Li and J. Rodrigo, Blow up for the generalized surface quasi-geostrophic equation with supercritical dissipation, {\it Comm. Math. Phys. \bf 286} (2009), 111--124.
\bibitem{Maj} A. Majda, {\it Introduction to PDEs and Waves for the
Atmosphere and Ocean}, Courant Lecture Notes {\bf 9}, Courant
Institute of Mathematical Sciences and American Mathematical
Society, 2003.
\bibitem{MaBe}A. Majda and A. Bertozzi, {\it Vorticity and Incompressible Flow}, Cambridge University Press, 2002.
\bibitem{MaTa} A. Majda and E. Tabak, A two-dimensional model for quasigeostrophic flow: comparison with the two-dimensional Euler flow, {\it Phys. D \bf 98} (1996), 515--522.
\bibitem{Mar1}F. Marchand, Propagation of Sobolev regularity for the critical dissipative quasi-geostrophic equation, {\it Asymptot. Anal. \bf 49} (2006), 275--293.
\bibitem{Mar2}F. Marchand, Existence and regularity of weak solutions to the quasi-geostrophic equations in the spaces $L^p$ or $\dot H^{-1/2}$, {\it Comm. Math. Phys. \bf 277} (2008), 45--67.
\bibitem{Mar3}F. Marchand, Weak-strong uniqueness criteria for the critical quasi-geostrophic equation, {\it Phys. D \bf 237} (2008), 1346--1351.
\bibitem{MarLr} F. Marchand and P.G. Lemari\'{e}-Rieusset, Solutions
auto-similaires non radiales pour l'\'{e}quation
quasi-g\'{e}ostrophique dissipative critique, {\it C. R. Math.
Acad. Sci. Paris} {\bf 341} (2005), 535--538.
\bibitem{May} R. May, Global well-posedness for a modified 2D dissipative quasi-geostrophic equation with initial data in the critical Sobolev space $H^1$,
arXiv:0910.0998v1 [math.AP] 6 Oct 2009.
\bibitem{MayZ}R. May and E. Zahrouni, Global existence of solutions for subcritical quasi-geostrophic equations, {\it Commun. Pure Appl. Anal. \bf 7} (2008), 1179--1191.
\bibitem{MiXu} C. Miao and L. Xue,
Global wellposedness for a modified critical dissipative quasi-geostrophic equation,
arXiv:0901.1368v4 [math.AP] 18 Sep 2010.
\bibitem{Mi} H. Miura, Dissipative quasi-geostrophic equation for large initial data in the critical sobolev space, {\it Commun. Math. Phys. \bf 267} (2006), 141--157.
\bibitem{NiSc} C. Niche and M. Schonbek, Decay of weak solutions to the 2D dissipative quasi-geostrophic equation, {\it Comm. Math. Phys. \bf 276} (2007), 93--115.
\bibitem{OhYa1} K. Ohkitani and M. Yamada, Inviscid and inviscid-limit behavior
of a surface quasigeostrophic flow, {\it Phys. Fluids } {\bf 9}
(1997), 876--882.
\bibitem{Pe} J. Pedlosky, {\it Geophysical Fluid Dynamics}, Springer, New York,
1987.
\bibitem{ReDr}J. Reinaud and D. Dritschel, Destructive interactions between two counter-rotating quasi-geostrophic vortices, {\it J. Fluid Mech. \bf 639} (2009), 195--211.
\bibitem{Res} S. Resnick, Dynamical problems in nonlinear advective
partial differential equations, Ph.D. thesis, University of Chicago,
1995.
\bibitem{Ro1} J. Rodrigo, The vortex patch problem for the surface quasi-geostrophic equation, {\it Proc. Natl. Acad. Sci. USA \bf 101} (2004), 2684--2686.
\bibitem{Ro2} J. Rodrigo, On the evolution of sharp fronts for the quasi-geostrophic equation, {\it Comm. Pure Appl. Math. \bf 58} (2005), 821--866.
\bibitem{RuSi} T. Runst and W. Sickel, {\it Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations}, Walter de Gruyter \& Co., Berlin, 1996.
\bibitem{Sch} M. Schonbek and T. Schonbek, Asymptotic behavior to
dissipative quasi-geostrophic flows, {\it SIAM J. Math. Anal.} {\bf
35} (2003), 357-375.
\bibitem{Sch2} M. Schonbek and T. Schonbek, Moments and lower
bounds in the far-field of solutions to quasi-geostrophic flows,
{\it Discrete Contin. Dyn. Syst.} {\bf 13} (2005), 1277-1304.
\bibitem{Si} L. Silvestre, Eventual regularization for the slightly supercritical quasi-geostrophic equation, {\it Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire \bf 27} (2010), no. 2, 693--704.
\bibitem{Si2} L. Silvestre, H\"{o}lder estimates for advection fractional-diffusion equations, arXiv:1009.5723v1 [math.AP] 29 Sep 2010.
\bibitem{Sta} A. Stefanov, Global well-posedness for the 2D quasi-geostrophic equation in a critical Besov space, {\it Electron. J. Differential Equations \bf 2007} (2007), 9 pp.
\bibitem{St} E. Stein, {\it Singular Integrals and Differentiability Properties of Functions}, Princeton University Press, Princeton, NJ, 1970.
\bibitem{Tr}H. Triebel, {\it Theory of Function Spaces}, Monographs in Mathematics {\bf 78}, Birkh\"{a}user Verlag, Basel, 1983. 284 pp.
\bibitem{WaJi} H. Wang and H. Jia, Local well-posedness for the 2D non-dissipative quasi-geostrophic equation in Besov spaces, {\it Nonlinear Anal. \bf 70} (2009), 3791--3798.
\bibitem{WaZh} H. Wang and Z. Zhang, A frequency localized maximum principle applied to the 2D quasi-geostrophic equation, {\it Comm. Math. Phys.}, in press.
\bibitem{Wu97}J. Wu, Quasi-geostrophic-type equations with initial data in Morrey spaces, {\it Nonlinearity} {\bf 10} (1997), 1409--1420.
\bibitem{Wu2} J. Wu, Inviscid limits and regularity estimates
for the solutions of the 2-D dissipative quasi-geostrophic
equations, {\it Indiana Univ. Math. J.} {\bf 46} (1997), 1113-1124.
\bibitem{Wu01} J. Wu, Dissipative quasi-geostrophic equations with $L^p$ data, {\it Electron. J. Differential Equations \bf 2001} (2001), 1-13.
\bibitem{Wu02} J. Wu, The quasi-geostrophic equation and its two regularizations, {\it Comm. Partial Differential Equations \bf 27} (2002), 1161--1181.
\bibitem{Wu3} J. Wu, Global solutions of the 2D dissipative
quasi-geostrophic equation in
Besov spaces, {\it SIAM J. Math. Anal.}\,\, {\bf 36} (2004/2005),
1014-1030.
\bibitem{Wu4} J. Wu, The quasi-geostrophic equation with critical or supercritical
dissipation, {\it Nonlinearity} \,\,{\bf 18} (2005), 139-154.
\bibitem{Wu41} J. Wu, Solutions of the 2-D quasi-geostrophic equation in H\"{o}lder
spaces, {\it Nonlinear Analysis}\,\, {\bf 62} (2005), 579-594.
\bibitem{Wu31} J. Wu, Lower bounds for an integral involving
fractional Laplacians and the generalized Navier-Stokes equations in
Besov spaces, {\it Comm. Math. Phys.} {\bf 263} (2006), 803-831.
\bibitem{Wu77} J. Wu, Existence and uniqueness results for the 2-D dissipative
quasi-geostrophic equation, {\it Nonlinear Anal.} {\bf 67} (2007),
3013-3036.
\bibitem{Yu} X. Yu, Remarks on the global regularity for the super-critical 2D dissipative quasi-geostrophic equation, {\it J. Math. Anal. Appl. \bf 339} (2008), 359--371.
\bibitem{Yuan} B. Yuan, The dissipative quasi-geostrophic equation in weak Morrey spaces, {\it Acta Math. Sin. (Engl. Ser.) \bf 24} (2008), 253--266.
\bibitem{YuanJ} J. Yuan, On regularity criterion for the dissipative quasi-geostrophic equations, {\it J. Math. Anal. Appl. \bf 340} (2008), 334--339.
\bibitem{Zha0} Z. Zhang,
Well-posedness for the 2D dissipative quasi-geostrophic equations in
the Besov space, {\it Sci. China Ser. A \bf 48} (2005), 1646-1655.
\bibitem{Zha} Z. Zhang,
Global well-posedness for the 2D critical dissipative
quasi-geostrophic equation, {\it Sci. China Ser. A \bf 50} (2007),
485-494.
\bibitem{Zhou} Y. Zhou,
Decay rate of higher order derivatives for solutions to the 2-D
dissipative quasi-geostrophic flows, {\it Discrete Contin. Dyn.
Syst. \bf 14} (2006), 525-532.
\bibitem{Zhou2} Y. Zhou, Asymptotic behaviour of the solutions to the 2D dissipative quasi-geostrophic flows, {\it Nonlinearity \bf 21} (2008), 2061--2071.
\varepsilonnd{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We introduce first-time sensitivity for a homeomorphism of a compact metric space, that is, a condition on the first increasing times of open balls of the space. Continuum-wise expansive homeomorphisms, the shift map on the Hilbert cube, and also some partially hyperbolic diffeomorphisms
satisfy this condition. We prove the existence of local unstable continua satisfying properties similar to those of the local unstable continua of cw-expansive homeomorphisms, but assuming only first-time sensitivity. As a consequence we prove that first-time sensitivity (with some additional technical assumptions) implies positive topological entropy.
\end{abstract}
\maketitle
\section{Introduction}
In the study of chaotic systems, the hyperbolic ones play a central role. Hyperbolicity appeared as a source of chaos \cite{Anosov}, \cite{S} and was seen to be such a strong notion that several chaotic systems simply do not satisfy it. Indeed, works of Pugh and Shub \cite{PughShub} indicate that a little hyperbolicity is sufficient to obtain chaotic dynamics. The existence of unstable manifolds with hyperbolic behavior is enough to prove, for example, sensitivity to initial conditions and positive topological entropy, so partially hyperbolic diffeomorphisms are important examples of non-hyperbolic chaotic systems.
A general idea that we explore in this work is to understand how several features of hyperbolic systems can be present in chaotic systems or, in other words, how we can prove parts of the hyperbolic dynamics using chaotic properties. Assuming only sensitivity to initial conditions there is not much we can prove, even when the space is as regular as a closed surface, since there exist examples of sensitive surface homeomorphisms that do not satisfy several features of hyperbolic systems. Indeed, we discuss one example in Proposition \ref{notft} that is sensitive, has zero topological entropy, has only one periodic point, which is a fixed point, and whose local stable (unstable) sets are segments of regular flow orbits and, hence, do not increase when iterated backward (forward).
A classical and much stronger property concerning the separation of distinct orbits is expansiveness. The study of expansive surface homeomorphisms goes back to works of Hiraide \cite{Hiraide1} and Lewowicz \cite{L}, where a complete characterization of expansiveness was given: surface expansive homeomorphisms are exactly the pseudo-Anosov ones. An important step of the proof is that expansiveness implies that stable and unstable sets form a pair of transversal singular foliations with a finite number of singularities. Indeed, both works study in detail properties of local stable/unstable sets of expansive homeomorphisms and obtain properties similar to those of the hyperbolic local stable/unstable manifolds.
The idea of considering dynamical properties that are stronger than sensitivity and weaker than expansiveness, and understanding how we can obtain part of the hyperbolic dynamics for these properties, is what motivates the definition of the main property we consider in this paper, first-time sensitivity. Before defining it precisely, it is important to observe that a few generalizations of expansiveness have already been considered in the literature \cites{Art,ArCa,APV,CC,CC2,CDZ,Kato1,Kato2,LZ,Morales,PaVi}, and among these the most general one is continuum-wise expansiveness, introduced by Kato in \cite{Kato1}. It is known that cw-expansive homeomorphisms of Peano continua are sensitive \cite{Hertz} and, thus, cw-expansiveness generalizes expansiveness and is stronger than sensitivity at the same time. Moreover, cw-expansive homeomorphisms of Peano continua have local stable/unstable continua with uniform diameter at every point of the space \cite{Kato2}, with properties that resemble the expansive and hyperbolic cases, and this is enough to prove positive topological entropy \cite{Kato1}. This makes cw-expansiveness an example of a dynamical property that fits the idea of this paper explained above. Now we proceed to the definition of first-time sensitivity and, for that, we first define and explain sensitivity.
\begin{definition}
A map $f\colon X\rightarrow X$ defined in a compact metric space $(X,d)$ is \emph{sensitive} if there exists $\varepsilon>0$ such that for every $x\in X$ and every $\delta>0$ there exist $y\in X$ with $d(x,y)<\delta$ and $n\in\mathbb{N}$ satisfying $d(f^{n}(x),f^{n}(y))>\varepsilon.$ The number $\varepsilon$ is called the \emph{sensitivity constant} of $f$.
\end{definition}
Sensitivity means that for each initial condition there are arbitrarily close distinct initial conditions with separated future iterates. We can also explain sensitivity as follows. Denoting by $B(x,\delta)=\{y\in X; d(y,x)<\delta\}$ the ball centered at $x$ with radius $\delta$, sensitivity implies the existence of $\varepsilon>0$ such that for every ball $B(x,\delta)$ there exists $n\in\mathbb{N}$ such that
$$\operatorname{diam}(f^n(B(x,\delta)))>\varepsilon,$$
where $\operatorname{diam}(A)=\sup\{d(a,b); a,b\in A\}$ denotes the diameter of $A$. Thus, sensitivity increases the diameter of non-trivial balls of the space. Now we define the first increasing time of balls of the space.
\begin{definition}[First-increasing time]
Let $f:X\rightarrow X$ be a sensitive homeomorphism, with sensitivity constant $\varepsilon>0$, of a compact metric space $(X,d)$. Given $x\in X$ and $r>0$, let $n_1(x,r,\varepsilon)\in\mathbb{N}$ be the first time at which the iterate of $B(x,r)$ has diameter greater than $\varepsilon$, that is, $n_1(x,r,\varepsilon)$ satisfies:
$$\mbox{diam}\;f^{n_1(x,r,\varepsilon)}(B(x,r))>\varepsilon \,\,\,\,\,\, \text{and}$$
$$\mbox{diam}\;f^j(B(x,r))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in[0,n_1(x,r,\varepsilon))\cap\mathbb{N}.$$
We call the number $n_1(x,r,\varepsilon)$ the \emph{first increasing time} (with respect to $\varepsilon$) of the ball $B(x,r)$.
\end{definition}
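As a toy illustration (not from the text), the first increasing time can be computed numerically for a concrete map. The sketch below uses the doubling map on the circle $\mathbb{R}/\mathbb{Z}$ with the circle metric; the base point, the radii $r_k=2^{-k}$ and the value $\varepsilon=0.3$ are arbitrary choices, and the diameter is estimated by sampling the ball.
\begin{verbatim}
# Minimal sketch (assumption: doubling map t -> 2t (mod 1) on the circle R/Z with the
# circle metric) of the first increasing time n_1(x, r, eps): the first iterate at
# which the ball B(x, r) reaches diameter greater than eps.
import numpy as np

def circle_dist(a, b):
    d = np.abs(a - b) % 1.0
    return np.minimum(d, 1.0 - d)

def diam(points):
    return np.max(circle_dist(points[:, None], points[None, :]))

def first_increasing_time(f, x, r, eps, n_samples=400, n_max=200):
    """Smallest n with diam(f^n(B(x, r))) > eps, estimated by sampling the ball."""
    pts = (x + np.linspace(-r, r, n_samples)) % 1.0
    for n in range(n_max + 1):
        if diam(pts) > eps:
            return n
        pts = f(pts)
    return None

doubling = lambda t: (2.0 * t) % 1.0
eps = 0.3
for k in range(2, 8):
    r_k = 2.0 ** (-k)
    n1 = first_increasing_time(doubling, x=0.1234, r=r_k, eps=eps)
    print(f"r_k = 2^-{k}:  n_1(x, r_k, eps) = {n1}")
# For the doubling map, halving the radius increases n_1 by exactly one, so the
# differences appearing in condition (F1) below are uniformly bounded (here by 1).
\end{verbatim}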
\begin{definition}[First-time sensitivity]
We say that $f$ is \emph{first-time sensitive} (or simply \emph{ft-sensitive}) if there is a sequence of functions $(r_k)_{k\in\mathbb{N}}\colon X\to\mathbb{R}^*_+$, starting with a constant function $r_1$ and decreasing monotonically to $0$, such that for each $\gamma\in(0,\varepsilon]$ there is $m_\gamma>0$ satisfying the following inequalities:
\begin{enumerate}
\item[(F1)] $|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_{k}(x),\gamma)|\leq m_\gamma$
\item[(F2)] $|n_1(x,r_k(x),\gamma) - n_1(x,r_k(x),\varepsilon)|\leq m_\gamma$
\end{enumerate}
for every $x\in X$ and for every $k\in\mathbb{N}$ such that $r_k(x)\leq\gamma$.
\end{definition}
Condition (F1) means the following: if we start decreasing the radius of the ball centered at $x$ (the sequence $(r_k(x))_{k\in\mathbb{N}}$) and keep checking the first increasing times of the balls $B(x,r_k(x))$ with respect to $\gamma$ (the numbers $n_1(x,r_k(x),\gamma)$), we obtain that when $r_k(x)$ changes to $r_{k+1}(x)$, the difference between the first increasing times $n_1(x,r_k(x),\gamma)$ and $n_1(x,r_{k+1}(x),\gamma)$ is bounded by the constant $m_{\gamma}$, which does not depend on $k\in\mathbb{N}$ or on $x\in X$ (see Figure \ref{figura:F1}). Condition (F2) means the following: if we decrease the sensitivity constant $\varepsilon$ to $\gamma$ and check the first increasing times of the ball $B(x,r_k(x))$ with respect to $\gamma$ and $\varepsilon$ (the numbers $n_1(x,r_k(x),\gamma)$ and $n_1(x,r_k(x),\varepsilon)$), we obtain that their difference is bounded by the constant $m_{\gamma}$, which does not depend on $k\in\mathbb{N}$ nor on $x\in X$ (see Figure \ref{figura:F2}).
\begin{center}
\begin{figure}
\caption{Property F1}
\label{figura:F1}
\end{figure}
\end{center}
\begin{figure}
\caption{Property F2}
\label{figura:F2}
\end{figure}
Ft-sensitivity can be defined in any metric space, but for our purposes we impose additional hypotheses on the space. We assume that $(X,d)$ is a compact and connected metric space satisfying:
\begin{itemize}
\item[(P1)] there exists $r>0$ such that $B(x,r')$ is connected for every $r'\in(0,r)$ and every $x\in X$;
\item[(P2)] the map $(x,s)\to \overline{B(x,s)}$ is continuous in the Hausdorff topology;
\end{itemize}
where $\overline{A}$ denotes the closure of a set $A$. Properties (P1) and (P2) mean that balls with sufficiently small radius are connected and that these balls vary continuously with their centers and radii. These are mild conditions on the topology of the space and are satisfied, for example, by all closed manifolds, the Hilbert cube $[0,1]^\mathbb{Z}$ and, more generally, by Peano continua, that is, compact, connected and locally connected metric spaces, when they are endowed with a convex metric (see \cite{Nadler}).
Now we explain the structure of this paper. In Section 2 we prove that first-time sensitivity implies the existence of local unstable continua with uniform diameter at every point of the space, satisfying properties similar to those of the local unstable continua of cw-expansive homeomorphisms. We call them local cw-unstable continua, and Section 2 is devoted to proving their existence and main properties.
In Section 3 we discuss our main examples of first-time sensitive homeomorphisms: the cw-expansive homeomorphisms, the full shift on the Hilbert cube $[0,1]^{\mathbb{Z}}$, and some partially hyperbolic diffeomorphisms. We also briefly discuss how to find the local cw-unstable continua in each case. In Section 4 we present our attempts to prove that first-time sensitivity implies positive topological entropy, explain the difficulties and how to circumvent them with some additional technical hypotheses.
\section{Local cw-unstable continua}
Let $f\colon X\to X$ be a homeomorphism of a compact metric space $(X,d)$. We consider the \emph{$c$-stable set} of $x\in X$ as the set
$$W^s_{c}(x):=\{y\in X; \,\, d(f^k(y),f^k(x))\leq c \,\,\,\, \textrm{for every} \,\,\,\, k\geq 0\}$$
and the \emph{$c$-unstable set} of $x$ as the set
$$W^u_{c}(x):=\{y\in X; \,\, d(f^k(y),f^k(x))\leq c \,\,\,\, \textrm{for every} \,\,\,\, k\leq 0\}.$$
We consider the \emph{stable set} of $x\in X$ as the set
$$W^s(x):=\{y\in X; \,\, d(f^k(y),f^k(x))\to0 \,\,\,\, \textrm{when} \,\,\,\, k\to\infty\}$$
and the \emph{unstable set} of $x$ as the set
$$W^u(x):=\{y\in X; \,\, d(f^k(y),f^k(x))\to0 \,\,\,\, \textrm{when} \,\,\,\, k\to-\infty\}.$$
The dynamical ball of $x$ with radius $c$ is the set $$\Gamma_{c}(x)=W^u_{c}(x)\cap W^s_{c}(x).$$
We say that $f$ is \emph{expansive} if there exists $c>0$ such that $$\Gamma_c(x)=\{x\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$ We say that $f$ is \emph{continuum-wise expansive} if there exists $c>0$ such that $\Gamma_{c}(x)$ is totally disconnected for every $x\in X$.
We denote by $C^s_c(x)$ the $c$-stable continuum of $x$, that is, the connected component of $x$ in $W^s_{c}(x)$, and by $C^u_c(x)$ the $c$-unstable continuum of $x$, that is, the connected component of $x$ in $W^u_{c}(x)$.
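For orientation (this example is not part of the text), for a hyperbolic toral automorphism these local sets are segments of the stable and unstable eigendirections. The quick numerical sketch below uses the cat map $A=\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$ on $\mathbb{T}^2$ and its fixed point $x=0$ as an illustrative assumption: a nearby point on the unstable line satisfies the defining condition of $W^u_c(0)$ (its backward iterates stay $c$-close to $0$), while a nearby point on the stable line does not.
\begin{verbatim}
# Sketch (assumption: cat map A = [[2,1],[1,1]] on the 2-torus, fixed point x = 0):
# a point on the unstable eigendirection stays c-close to 0 under backward iterates,
# i.e. it belongs to W^u_c(0), while a point on the stable eigendirection escapes.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
A_inv = np.array([[1.0, -1.0], [-1.0, 2.0]])      # det A = 1, so A_inv has integer entries

def torus_dist(p, q):
    d = np.abs((p - q) % 1.0)
    return np.linalg.norm(np.minimum(d, 1.0 - d))

eigvals, eigvecs = np.linalg.eigh(A)               # A is symmetric: real spectrum
v_s, v_u = eigvecs[:, 0], eigvecs[:, 1]            # contracting / expanding directions

c, t, K = 0.05, 0.03, 10
for name, v in [("unstable direction", v_u), ("stable direction", v_s)]:
    q, worst = t * v, 0.0
    for _ in range(K + 1):
        worst = max(worst, torus_dist(q % 1.0, np.zeros(2)))
        q = A_inv @ q                              # backward iteration f^{-1} = A^{-1}
    print(f"{name}: max_k d(f^-k(y), 0) over k <= {K} is {worst:.3f} (c = {c})")
\end{verbatim}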
\hspace{-0.4cm}\textbf{Existence of local unstable/stable continua:}
\hspace{-0.4cm}It is proved in \cite{Kato2} that for a cw-expansive homeomorphism the following holds:
\begin{theorem}\label{cw}\emph{[}Theorem 1.6 in \cite{Kato2}\emph{]}
If $f\colon X\to X$ is a cw-expansive homeomorphism of a Peano continuum $(X,d)$, with cw-expansivity constant $c>0$, then for every $\varepsilon>0$ there exists $\delta>0$ such that $$\operatorname{diam}(C^s_{\varepsilon}(x))\geq\delta \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{diam}(C^u_{\varepsilon}(x))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$
\end{theorem}
This means that the $\varepsilon$-stable and $\varepsilon$-unstable sets of any point $x\in X$ contain continua with uniform diameter intersecting at $x$. In this subsection we prove a similar result using only first-time sensitivity.
\begin{theorem}\label{teoremacontinuosinst}
Let $f:X\rightarrow X$ be a homeomorphism defined on a compact and connected metric space satisfying the Properties (P1) and (P2).
\begin{itemize}
\item[(a)] If $f$ is ft-sensitive, then for each $\varepsilon>0$
there exists $\delta>0$ such that
$$\operatorname{diam}(C^u_\varepsilon(x))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\,x\in X.$$
\item[(b)] If $f^{-1}$ is ft-sensitive, then for each $\varepsilon>0$
there exists $\delta>0$ such that
$$\operatorname{diam}(C^s_\varepsilon(x))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$
\end{itemize}
\end{theorem}
We remark that in the proof of this theorem we only use property (F1) in the definition of ft-sensitivity; property (F2) will be important to prove the main properties of these continua later in this section. To prove this result, we first note that, for a fixed sensitivity constant $\varepsilon$, the first increasing time $n_1(x,r,\varepsilon)$ depends essentially on the radius $r$ and not on the point $x\in X$.
\begin{lemma}\label{n1}
If $f\colon X\to X$ is sensitive, with sensitivity constant $\varepsilon$, and $X$ is a compact metric space satisfying hypothesis \emph{(P2)}, then for each $r>0$ there exists $N\in\mathbb{N}$ such that
\[n_1(x,r,\varepsilon)\leq N \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.\]
\end{lemma}
\begin{proof}
If the conclusion is not true, then there exists $r>0$ such that for each $n\in\mathbb{N}$ there exists $x_n\in X$ such that $n_1(x_n,r,\varepsilon)\geq n$. This means that
\[\operatorname{diam}(f^j(B(x_n,r)))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\{0,\dots,n-1\}.\] If $x=\lim_{k\to\infty}x_{n_k}$, then uniform continuity of $f$ and property (P2) on the space $X$ assure that
\[\operatorname{diam}(f^j(B(x,r)))=\lim_{k\to\infty}\operatorname{diam}(f^j(B(x_{n_k},r)))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{N},\]
contradicting sensitivity.
\end{proof}
\begin{proof}[Proof of Theorem \ref{teoremacontinuosinst}]
Assume that $f$ is a sensitive homeomorphism with sensitivity constant $c>0$ and choose $r\in(0,c)$, given by Property (P1) on the space $X$, such that $B(x,r')$ is connected for every $r'\in(0,r)$. Let $\varepsilon\in(0,r)$ be arbitrary and note that $\varepsilon$ is also a sensitivity constant of $f$. By hypothesis (F1), there exist $(r_k)_{k\in\mathbb{N}}\colon X\to\mathbb{R}^*_+$ and $m_\varepsilon\in\mathbb{N}$ satisfying
$$n_1(x,r_{k+1}(x),\varepsilon)-n_1(x,r_{k}(x),\varepsilon)\leq m_\varepsilon.$$
For each $m\in\mathbb{N}$, let $x_m=f^{-m}(x)$ and, for each $k\in\mathbb{N}$, consider
$$r_{k,m}=r_k(x_m) \,\,\,\,\,\, \text{and} \,\,\,\,\,\, n_{k,m}=n_1(x_m,r_{k,m},\varepsilon).$$
Lemma \ref{n1} assures the existence of $N\in\mathbb{N}$ such that
$$n_1(x,r_1(x),\varepsilon)\leq N \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$ Then, (F1) assures that for each $m\geq N$, we can choose $k_m\in\mathbb{N}$ such that $$n_{k_m-1,m}< m\leq n_{k_m,m}.$$
It follows that
$$|n_{k_m,m}-m|<|n_{k_m,m}-n_{k_m-1,m}|<m_\varepsilon.$$
The definitions of $n_{k_m,m}$ and $r_{k_m,m}$ guarantee that
$$\operatorname{diam}(f^j(B(x_m, r_{k_m,m})))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in[0,n_{k_m,m})\cap\mathbb{N}$$
$$\text{and} \,\,\,\,\,\, \operatorname{diam}(f^{n_{k_m,m}}(B(x_m,r_{k_m,m})))>\varepsilon.$$
Since $f^{-1}$ is uniformly continuous, there exists $\delta>0$ such that
$$\mbox{diam}(A)\geq\varepsilon \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, \mbox{diam}(f^{-n}(A))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in[0,m_\varepsilon].$$
This assures that
$$\operatorname{diam}(f^{m}(B(x_m,r_{k_m,m})))=\operatorname{diam}(f^{m-n_{k_m,m}}(f^{n_{k_m,m}}(B(x_m,r_{k_m,m}))))\geq\delta.$$
For each $m\geq N$, let $C_m=f^{m}(\overline{B(x_m,r_{k_m,m})})$ and notice that $C_m$ is a continuum satisfying:
\begin{itemize}
\item[(1)] $x\in C_m$;
\item[(2)] $\mbox{diam}(C_m)\geq\delta$;
\item[(3)] $\mbox{diam}(f^{-j}(C_m))\leq\varepsilon$ when $0\leq j\leq m$.
\end{itemize}
\begin{figure}
\caption{The choice of $k_m$ and $C_m$.}
\label{figura:construcaocontinuo}
\end{figure}
Thus, if $C_x$ is an accumulation continuum of the sequence $(C_m)_{m\in\mathbb{N}}$ in the Hausdorff metric, that is,
$$C_x=\lim_{l\rightarrow \infty}C_{m_l},$$ then $C_x$ satisfies:
\begin{itemize}
\item[(1)] $C_x$ is a continuum, as a Hausdorff limit of continua;
\item[(2)] $\mbox{diam}(C_x)\geq\delta$, since $\mbox{diam}(C_{m_l})\geq\delta$ for every $m_l\geq N$;
\item[(3)] $x\in C_x$, since $x\in C_{m_l}$ for every $m_l\geq N$;
\item[(4)] $C_x\subset W^u_\varepsilon(x)$, since for each $j\in\mathbb{N}$ we have
$$\mbox{diam}(f^{-j}(C_x))=\lim_{l\rightarrow\infty}\mbox{diam}(f^{-j}(C_{m_l}))\leq\varepsilon.$$
\end{itemize}
This proves that $\operatorname{diam}(C^u_{\varepsilon}(x))\geq\delta$ for every $x\in X$ and completes the proof of the first item. A similar argument deals with item (b), where $f^{-1}$ is ft-sensitive, and proves, in this case, that $\operatorname{diam}(C^s_{\varepsilon}(x))\geq\delta$ for every $x\in X$.
\end{proof}
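The construction in this proof can also be visualized numerically. The sketch below (again using the cat map on $\mathbb{T}^2$ as an illustrative assumption, with arbitrary base point, $\varepsilon$ and $m$) picks $r_m$ so that the ball $B(f^{-m}(x),r_m)$ needs roughly $m$ iterates to reach diameter $\varepsilon$, and then pushes its boundary forward $m$ times to measure the extent of $C_m$: the resulting set has diameter of order $\varepsilon$ and is almost entirely stretched along the unstable direction, in line with items (1)--(3) above.
\begin{verbatim}
# Sketch (assumption: cat map A = [[2,1],[1,1]] on the 2-torus) of the continua
# C_m = f^m( B(f^{-m}(x), r_m) ): with r_m ~ eps * lambda_u^{-m}, the ball needs about
# m forward iterates to reach diameter eps, and its m-th image is a thin continuum
# through x stretched along the unstable eigendirection.
import numpy as np

A = np.array([[2, 1], [1, 1]]); A_inv = np.array([[1, -1], [-1, 2]])
eigvals, eigvecs = np.linalg.eigh(A.astype(float))
v_s, v_u, lam_u = eigvecs[:, 0], eigvecs[:, 1], eigvals[1]

x, eps, m = np.array([0.2, 0.7]), 0.3, 10
r_m = 0.5 * eps * lam_u ** (-m)                    # radius chosen so diam(f^m(ball)) ~ eps

x_back = x.copy()
for _ in range(m):
    x_back = (A_inv @ x_back) % 1.0                # x_back = f^{-m}(x) on the torus

angles = np.linspace(0.0, 2.0 * np.pi, 500)
pts = x_back + r_m * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # ball boundary
for _ in range(m):
    pts = (pts @ A.T) % 1.0                        # push the ball forward m times

disp = (pts - x + 0.5) % 1.0 - 0.5                 # displacement from x, wrapped to [-1/2, 1/2)
print(f"extent along unstable direction: {np.ptp(disp @ v_u):.3f}   (target eps = {eps})")
print(f"extent along stable direction:   {np.ptp(disp @ v_s):.2e}  (collapses as m grows)")
\end{verbatim}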
This actually generalizes Theorem \ref{cw}, since we can prove it assuming Theorem \ref{teoremacontinuosinst} as follows. First, we observe that Peano continua do not necessarily satisfy hypotheses (P1) and (P2), but every Peano continuum can be endowed with a convex metric and, in this case, hypotheses (P1) and (P2) are satisfied. A metric $D$ for a continuum $X$ is called \emph{convex} if for each $x,y\in X$ there exists $z\in X$ such that
$$D(x,z)=\frac{D(x,y)}{2}=D(y,z).$$
This assures that the closure of the open ball equals the closed ball, i.e.,
$$\overline{B_D(x,\delta)}=\{y\in X\;;\;D(x,y)\leq\delta\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \delta>0.$$
Then, Theorem 3.3 in \cite{N2} ensures that (P2) is satisfied. See \cite[Proposition 10.6]{Nadler} for a proof that balls with a convex metric satisfy (P1). We will prove in Proposition \ref{cwft} that cw-expansivity implies first-time sensitivity when defined on spaces satisfying (P1) and (P2) and, in particular, on Peano continua endowed with a convex metric. Thus, Theorem \ref{cw} is a particular case of Theorem \ref{teoremacontinuosinst} if we assume the space is endowed with a convex metric. For a general metric, we can argue as follows.
\begin{lemma}\label{topology}
If $d$ and $D$ are compact metrics on the same space $X$ generating the same topology, then for every $\varepsilon>0$ there exists $\rho>0$ such that
$$D(x,y)<\rho \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, d(x,y)<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, (x,y)\in X\times X.$$
\end{lemma}
\begin{proof}
If this is not the case, there exists $\varepsilon>0$ such that for each $n\in\mathbb{N}$ there exists $(x_n,y_n)\in X\times X$ such that
$$D(x_n,y_n)<\frac{1}{n} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, d(x_n,y_n)\geq\varepsilon.$$
Thus, $(x_n)_{n\in\mathbb{N}}$ and $(y_n)_{n\in\mathbb{N}}$ are sequences in $X$ that have the same accumulation points in the metric $D$ but are at least $\varepsilon$-distant from each other in the metric $d$. Thus, if $(x_{n_k})_{k\in\mathbb{N}}$ converges to $z$ in the metric $D$, then $(y_{n_k})_{k\in\mathbb{N}}$ also does. But in the metric $d$ they cannot converge to $z$ simultaneously, and we obtain a sequence that converges to $z$ in the metric $D$ but does not converge to $z$ in the metric $d$, contradicting the fact that the two metrics generate the same topology.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cw}]
Let $\operatorname{diam}_d$ and $\operatorname{diam}_D$ denote the diameter in the metrics $d$ and $D$, respectively. For each $\varepsilon>0$ choose $\varepsilon'\in(0,\varepsilon)$, given by Lemma \ref{topology}, such that $$D(x,y)<\varepsilon' \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, d(x,y)<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, (x,y)\in X\times X.$$
If $x\in X$ and $y\in C^u_{\varepsilon'}(x)$, that is,
$$D(f^{-n}(x),f^{-n}(y))<\varepsilon' \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N},$$ then the choice of $\varepsilon'$ assures that
$$d(f^{-n}(x),f^{-n}(y))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.$$ Hence, $C^u_{\varepsilon'}(x)$ is an $\varepsilon$-unstable continuum in the metric $d$.
Now let $\rho\in(0,\varepsilon')$, given by Theorem \ref{teoremacontinuosinst}, be such that
$$\operatorname{diam}_D(C^u_{\varepsilon'}(x))\geq\rho \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$ The previous lemma assures the existence of $\delta\in(0,\rho)$ such that $$d(x,y)<\delta \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, D(x,y)<\rho \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, (x,y)\in X\times X.$$ It follows that
$$\operatorname{diam}_d(C^u_{\varepsilon'}(x))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X$$ since $\operatorname{diam}_D(C^u_{\varepsilon'}(x))\geq\rho$. Thus, $C^u_{\varepsilon'}(x)$ is an $\varepsilon$-unstable continuum in the metric $d$ with diameter at least $\delta$ for every $x\in X$. A similar argument proves that $C^s_{\varepsilon'}(x)$ is an $\varepsilon$-stable continuum in the metric $d$ with diameter at least $\delta$ for every $x\in X$.
\end{proof}
\hspace{-0.4cm}\textbf{Properties of local cw-unstable continua:}
\hspace{-0.4cm}The proof of Theorem \ref{teoremacontinuosinst} is actually more important than the statement of the result itself, since it gives an alternative way of creating local unstable continua that we will use in this paper and that can be summarized as follows. For each $x\in X$ and each $m\in\mathbb{N}$ we choose an appropriate radius $r_m>0$ such that $$n_1(f^{-m}(x),r_m,\varepsilon)\in[m,m+m_{\varepsilon}]$$ and this implies that any accumulation continuum of the sequence $$(f^m(B(f^{-m}(x),r_m)))_{m\in\mathbb{N}}$$ is an $\varepsilon$-unstable continuum, with diameter at least $\delta$, where $\delta$ comes from the uniform continuity of $f^{m_{\varepsilon}}$. In this subsection we will discuss the main properties of continua that can be constructed in this way and compare them with properties of the local unstable continua of cw-expansive homeomorphisms. For that, we define the set of such continua as follows:
\[\mathcal{F}^u=\left\{C=\displaystyle \lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})})\left|\begin{array}{l}
x\in X,\; n_k\to\infty, r_{n_k}\to0, \gamma\in(0,\varepsilon],\\
n_1(f^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma]\;
\end{array}\right.\right\}.\]
Elements of $\mathcal{F}^u$ are called local cw-unstable continua and are the main object of discussion of this section. We proved in Theorem \ref{teoremacontinuosinst} that there exist local cw-unstable continua passing through each $x\in X$. We will prove that local cw-unstable continua are unstable, that is, every $C\in\mathcal{F}^u$ satisfies
$$\operatorname{diam}(f^k(C))\to0 \,\,\,\,\,\, \text{when} \,\,\,\,\,\, k\to-\infty,$$ and that their diameter increases uniformly (depending only on the sensitivity constant $\gamma$) when they are iterated forward. These properties are similar to the properties satisfied by the local stable/unstable continua of cw-expansive homeomorphisms, and this is the reason why we call continua in $\mathcal{F}^u$ local cw-unstable.
The sensitivity constant $\gamma$ in the definition of $\mathcal{F}^u$ will determine the increasing and decreasing times of local cw-unstable continua. Thus, we separate continua in $\mathcal{F}^u$ that are associated with distinct sensitivity constants as follows: for each $\gamma\in(0,\varepsilon]$, let
$$\mathcal{F}^u_{\gamma}=\left\{C=\displaystyle \lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})})\left|\begin{array}{l}
x\in X,\; n_k\to\infty, r_{n_k}\to0,\\
n_1(f^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma]\;
\end{array}\right.\right\}.$$
We note that $\mathcal{F}^u$ and $\mathcal{F}^u_{\gamma}$ depend on the sensitivity constant $\varepsilon$ and during this whole section we will choose $\varepsilon$ as in the beginning of the proof of Theorem \ref{teoremacontinuosinst}.
In the next result we prove that the diameter of continua in $\mathcal{F}^u_{\gamma}$ becomes at least $\varepsilon$ within at most $2m_{\gamma}$ iterates. In the proof we use the following notation: if $A\subset X$, then $n_1(A,\varepsilon)$ denotes the first increasing time of the set $A$ with respect to $\varepsilon$.
\begin{prop}\label{crescimentouniforme}
If $C\in\mathcal{F}^u_\gamma$, then there exists $\ell_\gamma\in\{0,1,\ldots, 2m_\gamma\}$ such that \[\operatorname{diam}(f^{\ell_\gamma}(C))\geq\varepsilon.\]
\end{prop}
\begin{proof}
If $C\in\mathcal{F}^u_\gamma$, then there exist $x\in X,\;n_k\to\infty$ and $r_{n_k}\to0$ such that
\[C=\lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})}) \,\,\,\,\,\, \text{and}\]
\[n_1(f^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\]
Property (F2) says that
$$|n_1(f^{-n_k}(x),r_{n_k},\gamma) - n_1(f^{-n_k}(x),r_{n_k},\varepsilon)|\leq m_\gamma$$ and this assures that
\[n_1(f^{-n_k}(x),r_{n_k},\varepsilon) \in(n_k,n_k+2m_\gamma].\]
Consequently,
\[n_1(f^{n_k}(B(f^{-n_k}(x),r_{n_k})),\varepsilon)\in\{1,2,\ldots,2m_\gamma\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}\] and, thus, there exist $\ell_\gamma\in\{1,2,\ldots,2m_\gamma\}$ and an infinite subset $K\subset\mathbb{N}$ such that \[n_1(f^{n_k}(B(f^{-n_k}(x),r_{n_k})),\varepsilon)=\ell_\gamma \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in K.\]
Therefore,
$$\operatorname{diam}(f^{\ell_\gamma}(C))=\lim_{k\to\infty}\operatorname{diam}(f^{\ell_\gamma}(f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})})))\geq\varepsilon$$ and the proof is complete.
\end{proof}
In the next proposition we prove that local cw-unstable continua increase regularly in the future.
\begin{prop}\label{crescimentoregularcont}
If $C\in\mathcal{F}^u$, then for each $n\in\mathbb{N}$ there is $n'\in\{n,\ldots,n+m_\varepsilon\}$ such that $\operatorname{diam}(f^{n'}(C))\geq\varepsilon$.
\end{prop}
\begin{proof}
If $C\in\mathcal{F}^u$, then $C\in\mathcal{F}^u_{\gamma}$ for some $\gamma\in(0,\varepsilon)$, and, hence, there exist
$x\in X,\;n_k\to\infty$ and $r_{n_k}\to0$ such that
\[C=\lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})}) \,\,\,\,\,\, \text{and}\]
\[n_1(f^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\]
As in the proof of the previous proposition, property (F2) assures that
\[n_1(f^{-n_k}(x),r_{n_k},\varepsilon) \in(n_k,n_k+2m_\gamma] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\]
For each $n\in\mathbb{N}$ we use property (F1) to reduce, if necessary, for each $k\in\mathbb{N}$ the radius $r_{n_k}$ to $r_{t_k}$ so that
\[n_{1}(f^{-n_k}(x),r_{t_k},\varepsilon)\in\{n_k+n,\ldots,n_k+n+m_\varepsilon\}.\]
This implies that
\[n_{1}(f^{n_k}(B(f^{-n_k}(x),r_{t_k})),\varepsilon)\in\{n,\ldots,n+m_\varepsilon\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}\] and consequently, for each $k\in\mathbb{N}$ there is $\ell_k\in\{n,\ldots,n+m_\varepsilon\}$ such that
\[\operatorname{diam}\;(f^{\ell_k}(f^{n_k}(B(f^{-n_k}(x),r_{n_k}))))\geq \operatorname{diam}\;(f^{\ell_k}(f^{n_k}(B(f^{-n_k}(x),r_{t_k}))))>\varepsilon.\]
Thus, there exists $n'\in\{n,\ldots,n+m_\varepsilon\}$ and an infinite subset $K\subset\mathbb{N}$ such that
\[\operatorname{diam}\;(f^{n'}(f^{n_k}(B(f^{-n_k}(x),r_{n_k}))))>\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in K\]
and, hence,
$$\operatorname{diam}(f^{n'}(C))=\lim_{k\to\infty}\operatorname{diam}(f^{n'}(f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})})))\geq\varepsilon.$$ This completes the proof.
\end{proof}
This regularity ensures that the set of the increasing times of a local cw-unstable continuum is syndetic. Recall that a subset $S\subset\mathbb{N}$ is \emph{syndetic} if there is $p(S)\in\mathbb{N}$ such that
\[\{n,n+1,\ldots, n+p(S)\}\cap S\neq\emptyset \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.\]
The set of increasing times of a subset $C\subset X$ with respect to a sensitivity constant $c>0$ is the set $$S_{C,c}=\{n\in\mathbb{N}\;;\;\operatorname{diam}(f^{n}(C))\geq c\}.$$
\begin{corollary}\label{tempodecrescimentosindetico}
If $C\in\mathcal{F}^u$, then $S_{C,\varepsilon}$ is syndetic.
\end{corollary}
\begin{proof}
Proposition \ref{crescimentoregularcont} assures that
\[\{n,n+1,\ldots, n+m_{\varepsilon}\}\cap S_{C,\varepsilon}\neq\emptyset \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N},\] that is, $S_{C,\varepsilon}$ is syndetic with $p(C)=m_{\varepsilon}$ for every $C\in\mathcal{F}^u$.
\end{proof}
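As a concrete check of this corollary (an illustration only, with the cat map on $\mathbb{T}^2$ as the assumed example and arbitrary choices of the segment, of $\varepsilon$ and of the time horizon), one can track the increasing times of a short unstable segment, which is a local cw-unstable continuum for this map: once its forward diameter exceeds $\varepsilon$ it never drops below it again, so the gaps in $S_{C,\varepsilon}$ are bounded.
\begin{verbatim}
# Quick check (illustrative assumption: cat map A = [[2,1],[1,1]] on T^2) that the
# increasing times S_{C,eps} of a short unstable segment C have bounded gaps (syndetic).
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
_, eigvecs = np.linalg.eigh(A)
v_u = eigvecs[:, 1]                                   # unstable eigendirection

def torus_diam(pts):
    d = np.abs(pts[:, None, :] - pts[None, :, :]) % 1.0
    d = np.minimum(d, 1.0 - d)
    return np.max(np.sqrt((d ** 2).sum(axis=-1)))

x0, eps, N = np.array([0.2, 0.7]), 0.3, 30
C = x0 + np.linspace(0.0, 0.01, 300)[:, None] * v_u   # short segment along v_u through x0
S, pts = [], C % 1.0
for n in range(N + 1):
    if torus_diam(pts) >= eps:
        S.append(n)
    pts = (pts @ A.T) % 1.0
gaps = np.diff(np.array([0] + S))
print(f"increasing times in [0,{N}] start at n = {S[0]}; largest gap = {gaps.max()}")
\end{verbatim}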
These results imply that every first-time sensitive homeomorphism is syndetically sensitive. Recall that a homeomorphism $f\colon X\to X$ of a compact metric space $(X,d)$ is \emph{syndetically sensitive} if there exists $c>0$ such
that $S_{U,c}$ is syndetic for every non-empty open subset $U\subset X$.
\begin{corollary}
If $f$ is first-time sensitive, then it is syndetically sensitive.
\end{corollary}
\begin{proof}
Let $U$ be a non-empty open subset of $X$, $x\in U$, and $\gamma\in(0,\varepsilon)$ be such that $\overline{B(x,2\gamma)}\subset U$, and choose $C\in\mathcal{F}^u$, given by Theorem \ref{teoremacontinuosinst}, such that $C\subset C^u_{\gamma}(x)$. Since $\operatorname{diam}(C)\leq2\gamma$ and $x\in C$, it follows that $C\subset U$ and this implies that $S_{C,\varepsilon}\subset S_{U,\varepsilon}$.
Proposition \ref{crescimentoregularcont} assures that $S_{C,\varepsilon}$ is syndetic and the previous inclusion assures that $S_{U,\varepsilon}$ is syndetic.
\end{proof}
Another immediate corollary of Proposition \ref{crescimentoregularcont} is that the diameter of future iterates of a local cw-unstable continuum cannot become arbitrarily small after it reaches size $\varepsilon$.
\begin{corollary}\label{continuonaodecresce}
There exists $\delta>0$ such that if $C\in\mathcal{F}^u_{\gamma}$, then $$\operatorname{diam}(f^{n}(C))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq 2m_\gamma.$$
\end{corollary}
\begin{proof}
The proof of Corollary \ref{tempodecrescimentosindetico} assures that for each $n\geq2m_{\gamma}$ there exists $m\in S_{C,\varepsilon}$ such that $|m-n|\leq m_\varepsilon$. Let $\delta>0$ be given by the uniform continuity of $f$ and $f^{-1}$ such that if $\operatorname{diam}(A)\geq\varepsilon$, then $$\operatorname{diam}(f^{k}(A))\geq \delta \,\,\,\,\,\, \text{whenever} \,\,\,\,\,\, |k|\leq m_\varepsilon.$$ Since $\operatorname{diam}(f^m(C))\geq\varepsilon$ and $|m-n|\leq m_\varepsilon$, it follows that $\operatorname{diam}(f^n(C))\geq\delta$.
\end{proof}
This corollary is the first-time sensitive counterpart of the following important property of cw-expansive homeomorphisms:
\begin{proposition}\cite[Proposition 2.2]{Kato1}.
There exists $\delta\in (0,\varepsilon)$ such that if $A$ is a subcontinuum of $X$ with $\mbox{diam}(A)\leq\delta$ and \[\mbox{diam}(f^n(A))\geq\varepsilon\;\;\; \mbox{ for some }\;\;\;n\in\mathbb{N},\] then
\[\mbox{diam}(f^j(A))\geq \delta\;\;\;\mbox{ for every }\;\;\;j\geq n.\]
\end{proposition}
In the last result of this subsection we prove that local cw-unstable continua are (globally) unstable. We also recall that in the case of cw-expansive homeomorphisms, local stable and local unstable continua are respectively stable and unstable (see \cite{Kato1}).
\begin{prop}
If $C\in\mathcal{F}^u$, then $\lim_{n\rightarrow\infty}\operatorname{diam}(f^{-n}(C))=0$.
\end{prop}
\begin{proof}
If $C\in\mathcal{F}^u_\gamma$, then there exist $x\in X,\;n_k\to\infty$ and $r_{n_k}\to0$ such that
\[C=\lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})}) \,\,\,\,\,\, \text{and}\]
\[n_1(f^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\]
It is enough to prove that for each $\alpha\in(0,\gamma)$ there exists $\ell_\alpha\in\mathbb{N}$ such that
\[\operatorname{diam}(f^{-n}(C))\leq \alpha \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq \ell_\alpha.\]
Since
\[n_k<n_1(f^{-n_k}(x),r_{n_k},\gamma)\leq n_1(f^{-n_k}(x),r_{n_k},\varepsilon),\]
it follows from property (F2) that
\[n_k - n_{1}(f^{-n_k}(x),r_{n_k},\alpha)< n_{1}(f^{-n_k}(x),r_{n_k},\varepsilon)-n_{1}(f^{-n_k}(x),r_{n_k},\alpha)\leq m_\alpha.
\] Let $\ell_\alpha=m_\alpha+1$ and note that if $n\geq \ell_\alpha$, then the previous inequality assures that
\[ n_k-n_{1}(f^{-n_k}(x),r_{n_k},\alpha)< n.\] For each $n\geq \ell_\alpha$ consider $k_n\in\mathbb{N}$ such that
\[n\leq n_k \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\geq k_n,\] recall that $\lim_{k\rightarrow\infty}n_k=\infty$.
This implies that
\[0\leq n_k-n< n_{1}(f^{-n_k}(x),r_{n_k},\alpha) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\geq k_n\]
and, hence,
\[\operatorname{diam}(f^{-n}(f^{n_k}(B(f^{-n_k}(x),r_{n_k})))) = \operatorname{diam}(f^{n_k-n}(B(f^{-n_k}(x),r_{n_k})))\leq \alpha\] for every $k\geq k_n$. This assures that \[\lim_{k\rightarrow\infty}\operatorname{diam}(f^{-n}(f^{n_k}(B(f^{-n_k}(x),r_{n_k}))))\leq\alpha\] for every $n\geq \ell_\alpha$. Therefore
\begin{eqnarray*}
\operatorname{diam}(f^{-n}(C))&=&\operatorname{diam} \left(f^{-n}\left(\lim_{k\rightarrow\infty}f^{n_k}(\overline{B(f^{-n_k}(x),r_{n_k})})\right)\right)\\
&=&\lim_{k\rightarrow\infty}\operatorname{diam}\left(f^{-n}(f^{n_k}(B(f^{-n_k}(x),r_{n_k})))\right)\\
&\leq&\alpha
\end{eqnarray*}
for every $n\geq\ell_\alpha$, which finishes the proof.
\end{proof}
At the end of this section we note that the definition of first-time sensitivity says nothing about the map $\gamma\mapsto m_{\gamma}$. In the following proposition we choose numbers $m_{\gamma}$ satisfying (F1) and (F2) and such that $\gamma\mapsto m_{\gamma}$ is a non-increasing function. This will be used later in Section 4.
\begin{proposition}\label{decreasing}
If $f$ is a first-time sensitive homeomorphism with a sensitivity constant $\varepsilon>0$, then for each $\gamma\in(0,\varepsilon)$, there exists $m_{\gamma}>0$ satisfying (F1), (F2), and: if $\delta<\gamma$, then $m_{\gamma}\leq m_{\delta}$.
\end{proposition}
\begin{proof}
For each $\gamma\in(0,\varepsilon)$ consider $m'_{\gamma}>0$, given by the definition of first-time sensitivity, satisfying
\begin{enumerate}
\item[(F1)] $|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_{k}(x),\gamma)|\leq m'_\gamma$
\item[(F2)] $|n_1(x,r_k(x),\gamma) - n_1(x,r_k(x),\varepsilon)|\leq m'_\gamma$
\end{enumerate}
for every $x\in X$ and for every $k\in\mathbb{N}$ such that $r_k(x)\leq\gamma$. We first prove that if $\delta\leq\gamma$, then $3m'_{\delta}$ also bounds the differences above. Indeed, if $\delta\leq\gamma$, then $$n_1(x,r_k(x),\gamma)\geq n_1(x,r_k(x),\delta)$$
and, hence,
\begin{equation}\label{rmk1}
|n_1(x,r_k(x),\varepsilon)-n_1(x,r_k(x),\gamma)|\leq |n_1(x,r_k(x),\varepsilon)-n_1(x,r_k(x),\delta)|\leq m'_\delta, \end{equation}
where the second inequality is ensured by (F2).
Also, using the triangle inequality, (F1), (F2), and (\ref{rmk1}) we obtain:
\begin{equation*}\label{rmk2}\begin{array}{rcl}
|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_k(x),\gamma)|&\leq&|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_{k+1}(x),\delta)| +\\
&&|n_1(x,r_{k+1}(x),\delta)-n_1(x,r_k(x),\delta)|+\\
&&|n_1(x,r_k(x),\delta)-n_1(x,r_k(x),\gamma)|\\
&&\\
&\leq &|n_1(x,r_{k+1}(x),\varepsilon)-n_1(x,r_{k+1}(x),\delta)| +\\
&&|n_1(x,r_{k+1}(x),\delta)-n_1(x,r_k(x),\delta)|+\\
&&|n_1(x,r_k(x),\delta)-n_1(x,r_k(x),\varepsilon)|\\
&&\\
&\leq &3m'_\delta.
\end{array}\end{equation*}
To define $(m_\gamma)_{\gamma\in(0,\varepsilon)}$, we define a sequence $(m_n)_{n\in\mathbb{N}}$ as follows: let $m_1=3m'_{\frac{\varepsilon}{2}}$, and inductively define for each $n\geq2$,
$$m_n=\max\{3m'_{\frac{\varepsilon}{n}},m_{n-1}+1\}.$$
For each $n\geq 2$ and $\gamma\in \left[\frac{\varepsilon}{n},\frac{\varepsilon}{n-1}\right)$, let $m_\gamma=m_n$. Since the sequence $(m_n)_{n\in\mathbb{N}}$ is increasing, it follows that $\gamma\mapsto m_{\gamma}$ is non-increasing.
Finally, we prove that for each $\gamma\in(0,\varepsilon)$, $m_\gamma$ satisfies (F1) and (F2). Given $\gamma\in(0,\varepsilon]$, there is $n\geq2$ such that $\gamma\in\left[\frac{\varepsilon}{n},\frac{\varepsilon}{n-1}\right)$. Since $\frac{\varepsilon}{n}\leq\gamma$, it follows that
$$|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_k(x),\gamma)|\leq 3m'_{\frac{\varepsilon}{n}}\leq m_n=m_\gamma \,\,\,\,\,\, \text{and}$$
$$|n_1(x,r_k(x),\varepsilon)-n_1(x,r_k(x),\gamma)|\leq m'_{\frac{\varepsilon}{n}}\leq m_n=m_\gamma,$$
for every $x\in X$ and for every $k\in\mathbb{N}$ such that $r_k(x)\leq\gamma$.
\end{proof}
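The monotonization above is easy to implement; the following sketch (illustrative only) receives an arbitrary bound $\gamma\mapsto m'_\gamma$ and produces the sequence $(m_n)_{n\in\mathbb{N}}$ and the non-increasing map $\gamma\mapsto m_\gamma$ exactly as in the proof. The particular bound $m'$ and the value of $\varepsilon$ used in the example are hypothetical.
\begin{verbatim}
# Sketch (illustration only) of the monotonization used in the proof:
# m_1 = 3 m'(eps/2),  m_n = max(3 m'(eps/n), m_{n-1} + 1)  for n >= 2,
# and m(gamma) = m_n whenever gamma lies in [eps/n, eps/(n-1)).
import math

def make_monotone_bound(m_prime, eps, n_max=1000):
    m = {1: 3 * m_prime(eps / 2)}
    for n in range(2, n_max + 1):
        m[n] = max(3 * m_prime(eps / n), m[n - 1] + 1)

    def m_gamma(gamma):
        assert 0 < gamma < eps
        n = 2
        while not (eps / n <= gamma < eps / (n - 1)):
            n += 1                      # assumes n_max is large enough
        return m[n]

    return m_gamma

# hypothetical original bound m'(gamma) = ceil(1/gamma), only to run the code
m_gamma = make_monotone_bound(lambda g: math.ceil(1.0 / g), eps=0.25)
print([m_gamma(g) for g in (0.2, 0.1, 0.05, 0.01)])
# non-decreasing values, i.e. gamma -> m(gamma) is non-increasing
\end{verbatim}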
\section{Examples of first-time sensitive homeomorphisms}
In this section we discuss three distinct classes of systems satisfying first-time sensitivity: the continuum-wise expansive homeomorphisms, the shift map on the Hilbert cube $[0,1]^{\mathbb{Z}}$, and some partially hyperbolic diffeomorphisms. We will discuss them in three separate subsections.
\hspace{-0.4cm}\textbf{Continuum-wise expansive homeomorphisms:}
\hspace{-0.5cm} We start this subsection recalling the definition of cw-expansiveness.
\begin{definition}
We say that $f$ is \emph{continuum-wise expansive} if there exists $c>0$ such that $W^u_c(x)\cap W^s_c(x)$ is totally disconnected for every $x\in X$. Equivalently, for each non-trivial continuum $C\subset X$, that is, $C$ is not a singleton, there exists $n\in\mathbb{Z}$ such that
\[\operatorname{diam}(f^n(C))>c.\] The number $c>0$ is called a cw-expansivity constant of $f$.
\end{definition}
We will prove that cw-expansiveness implies ft-sensitivity on spaces satisfying (P1) and (P2). To prove this we will need the following lemma, which obtains further consequences for the first increasing times of sensitive homeomorphisms defined on spaces satisfying hypothesis (P2).
\begin{lemma}\label{sequenceonlysensitive}
If $f\colon X\to X$ is sensitive, with a sensitivity constant $\varepsilon>0$, and $X$ satisfies hypothesis \emph{(P2)}, then there is a sequence $(r_k)_{k\in\mathbb{N}}\colon X\to \mathbb{R}^*_+$ starting at $r_1=\frac{\varepsilon}{2}$ and decreasing monotonically to $0$ such that $(n_1(x,r_k(x),\varepsilon))_{k\in\mathbb{N}}$ is strictly increasing and
\[\operatorname{diam}(f^{n_1(x,r_k(x),\varepsilon)}(B(x,r_{k+1}(x))))=\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X \,\,\,\,\,\, \text{and} \,\,\,\,\,\, k\in\mathbb{N}.\]
\end{lemma}
\begin{proof}
For each $x\in X$, let $r_1(x)=\frac{\varepsilon}{2}$ and note that the continuity of $f^{n_1(x,r_1(x),\varepsilon)}$ and hypothesis (P2) assure that if $r$ is sufficiently close to $r_1$, then \[\operatorname{diam}(f^{n_1(x,r_1(x),\varepsilon)}(B(x,r)))>\varepsilon.\] Also, if $r$ is sufficiently small, then uniform continuity of $f$ assures that \[\operatorname{diam}(f^{n_1(x,r_1(x),\varepsilon)}(B(x,r)))<\varepsilon.\] It follows from hypothesis (P2) that there exists $r_2(x)\in(0,r_1(x))$ such that \[\operatorname{diam}(f^{n_1(x,r_1(x),\varepsilon)}(B(x,r_2(x))))=\varepsilon.\] The first increasing time $n_1(x,r_2(x),\varepsilon)$ of $B(x,r_2(x))$ with respect to $\varepsilon$ satisfies
\[\mbox{diam}(f^{n_1(x,r_2(x),\varepsilon)}(B(x,r_2(x))))>\varepsilon \,\,\,\,\,\, \text{and}\]
\[\mbox{diam}(f^j(B(x,r_2(x))))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\{0,\dots,n_1(x,r_2(x),\varepsilon)-1\}.\] This implies that $n_1(x,r_2(x),\varepsilon)>n_1(x,r_1(x),\varepsilon)$ since
\[\operatorname{diam}(f^j(B(x,r_2(x))))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\{0,\dots,n_1(x,r_1(x),\varepsilon)\}\] and $\operatorname{diam}(f^{n_1(x,r_2(x),\varepsilon)}(B(x,r_2(x))))>\varepsilon$.
By induction we can define a decreasing sequence of real numbers $(r_k(x))_{k\in\mathbb{N}}$ such that $(n_1(x,r_k(x),\varepsilon))_{k\in\mathbb{N}}$ is an increasing sequence of positive integer numbers
and that \[\operatorname{diam}(f^{n_1(x,r_k(x),\varepsilon)}(B(x,r_{k+1}(x))))=\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\] Since this can be done for every $x\in X$, the proof is complete.
\end{proof}
\begin{remark}\label{0}
We note that if $(n_1(x,r_k(x),\varepsilon))_{k\in\mathbb{N}}$ is strictly increasing, then \[ \lim_{k\rightarrow\infty}r_k(x)=0.\] Indeed, if this is not the case, there exist $r>0$ and a subsequence $(r_{k_n})_{n\in\mathbb{N}}$ such that $k_n\to\infty$ and
\[r_{k_n}(x)>r \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.\] Thus,
\[n_1(x,r_{k_n},\varepsilon)\leq n_1(x,r,\varepsilon) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}\]
and this implies that the subsequence $(n_1(x,r_{k_n}(x),\varepsilon))_{n\in\mathbb{N}}$ is bounded. But this contradicts the hypothesis that $(n_1(x,r_k(x),\varepsilon))_{k\in\mathbb{N}}$ is strictly increasing, since strict monotonicity implies that $\lim_{k\to\infty}n_1(x,r_k(x),\varepsilon)=\infty$.
\end{remark}
\begin{theorem}\label{cwft}
Cw-expansive homeomorphisms defined on compact and connected metric spaces satisfying hypotheses (P1) and (P2) are first-time sensitive.
\end{theorem}
\begin{proof}
First, note that, since $X$ satisfies Property (P1), it is locally connected and, in particular, a Peano continuum. Every cw-expansive homeomorphism defined on a Peano continuum is sensitive. This is a consequence of \cite[Theorem 1.1]{Hertz}, where it is proved that cw-expansive homeomorphisms defined on a Peano continuum do not have stable points, that is, points $x\in X$ satisfying: for each $\varepsilon>0$ there exists $\delta>0$ such that
\[B(x,\delta)\subset W^s_\varepsilon(x).\] Thus, $f$ is sensitive, and we consider a sensitivity constant $\varepsilon>0$ of $f$. Let $(r_k)_{k\in\mathbb{N}}$ be the sequence given by Lemma \ref{sequenceonlysensitive} such that $(n_1(x,r_k(x),\varepsilon))_{k\in\mathbb{N}}$ is strictly increasing and
\[\operatorname{diam}(f^{n_1(x,r_k(x),\varepsilon)}(B(x,r_{k+1}(x))))=\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X \,\,\,\,\,\, \text{and} \,\,\,\,\,\, k\in\mathbb{N}.\]
We will first prove property (F2) of the definition of first-time sensitivity. Suppose that (F2) is not valid, that is, for some constant $\gamma\in (0,\varepsilon]$, there are sequences $(x_m)_{m\in\mathbb{N}}\subset X$ and $(k_m)_{m\in\mathbb{N}}\subset\mathbb{N}$ such that $k_m\to\infty$ when $m\to\infty$ and
\[\lim_{m\rightarrow\infty}(n_1(x_m,r_{k_m}(x_m),\varepsilon) - n_1(x_m,r_{k_m}(x_m),\gamma))=\infty.\] We can assume that
$$n_1(x_m, r_{k_m}(x_m),\gamma)<n_1(x_m,r_{k_m}(x_m),\varepsilon) \,\,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N}$$ and, hence, that \[\gamma<\operatorname{diam}\;f^{n_1(x_m,r_{k_m}(x_m),\gamma)}(B(x_m,r_{k_m}(x_m)))\leq \varepsilon \,\,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N}.\] For each $m\in\mathbb{N}$, the continuum \[C_m = f^{n_1(x_m,r_{k_m}(x_m),\gamma)}(\overline{B(x_m,r_{k_m}(x_m))})\]
satisfies the following conditions:
\begin{enumerate}
\item $\operatorname{diam}(C_m)\geq\gamma$;
\item $\operatorname{diam}(f^{-j}(C_m))\leq \varepsilon, \,\,\, \forall \, j\in[0,n_1(x_m,r_{k_m}(x_m),\gamma)]$;
\item $\operatorname{diam}(f^{j}(C_m))\leq \varepsilon, \,\,\, \forall \, j\in[0,n_1(x_m,r_{k_m}(x_m),\varepsilon)-n_1(x_m,r_{k_m}(x_m),\gamma)-1]$.
\end{enumerate}
Let $C$ be an accumulation continuum of the sequence $(C_m)_{m\in\mathbb{N}}$ in the Hausdorff topology, that is, \[C=\lim_{l\rightarrow\infty}C_{m_l}.\] Property (1) assures that $\operatorname{diam}(C)\geq\gamma$. Since
\[\lim_{m\rightarrow\infty}(n_1(x_m,r_{k_m}(x_m),\varepsilon) - n_1(x_m,r_{k_m}(x_m),\gamma))=\infty\]
and \[n_1(x_m,r_{k_m}(x_m),\varepsilon)\geq n_1(x_m,r_{k_m}(x_m),\varepsilon)-n_1(x_m,r_{k_m}(x_m),\gamma)\] it follows that $$\lim_{m\rightarrow\infty}n_1(x_m,r_{k_m}(x_m),\varepsilon)=\infty.$$ Lemma \ref{n1} assures that $\lim_{m\rightarrow\infty}r_{k_m}(x_m)=0$, since otherwise $$(n_1(x_m,r_{k_m}(x_m),\varepsilon))_{m\in\mathbb{N}}$$ would have a bounded subsequence. This implies that $$\lim_{m\rightarrow\infty}n_1(x_m,r_{k_m}(x_m),\gamma)=\infty.$$ Thus, Properties (2) and (3) assure that
\[\operatorname{diam}(f^j(C)) = \lim_{l\rightarrow\infty}\operatorname{diam}(f^j(C_{m_l})) \leq \varepsilon\,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{Z}\]
and $C$ is a non-trivial $\varepsilon$-stable and $\varepsilon$-unstable continuum, contradicting cw-expansiveness. This proves property (F2). Now, we prove property (F1).
Suppose that (F1) is not valid, that is, for some constant $\gamma\in (0,\varepsilon]$, there are sequences $(x_m)_{m\in\mathbb{N}}\subset X$ and $(k_m)_{m\in\mathbb{N}}\subset\mathbb{N}$ such that $k_m\to\infty$ when $m\to\infty$ and
\[\lim_{m\rightarrow\infty}|n_1(x_m,r_{k_{m}+1}(x_m),\gamma)-n_1(x_m,r_{k_m}(x_m),\gamma)|=\infty.\] Using property (F2), which was proved for the sequence $(r_k)_{k\in\mathbb{N}}$, there is $m_\gamma\in\mathbb{N}$ such that
\[|n_1(x,r_k(x),\varepsilon)-n_1(x,r_k(x),\gamma)|<m_{\gamma}\] for every $x\in X$ and for every $k\in\mathbb{N}$ such that $r_k(x)<\gamma$. A simple triangle inequality assures that
\[\lim_{m\rightarrow\infty}|n_1(x_m,r_{k_{m}+1}(x_m),\varepsilon)-n_1(x_m,r_{k_m}(x_m),\varepsilon)|=\infty.\]
Choose a sequence $(\ell_m)_{m\in\mathbb{N}}$ of positive numbers satisfying $\lim_{m\rightarrow\infty}\ell_m=\infty$ and
\[|n_1(x_m,r_{k_{m}+1}(x_m),\varepsilon)-n_1(x_m,r_{k_m}(x_m),\varepsilon)|\geq 2\ell_m\;\;\;\mbox{ for every }\;\;\;m\in\mathbb{N}.\]
Let $\delta\in (0,\varepsilon)$ be given by \cite[Proposition 2.2]{Kato1} such that if $A$ is a subcontinuum of $X$ with $\mbox{diam}(A)\leq\delta$ and \[\mbox{diam}(f^n(A))\geq\varepsilon\;\;\; \mbox{ for some }\;\;\;n\in\mathbb{N},\] then
\[\mbox{diam}(f^j(A))\geq \delta\;\;\;\mbox{ for every }\;\;\;j\geq n.\]
Since for each $m\in\mathbb{N}$ we have
$$n_1(x_m,r_{k_{m}+1}(x_m),\varepsilon)\geq|n_1(x_m,r_{k_{m}+1}(x_m),\varepsilon)-n_1(x_m,r_{k_m}(x_m),\varepsilon)|$$
we obtain
$$\lim_{m\rightarrow\infty}n_1(x_m,r_{k_{m}+1}(x_m),\varepsilon)=\infty$$
and using Lemma \ref{n1}, as in the proof of (F2), we obtain
$$\lim_{m\rightarrow\infty}r_{k_m+1}(x_m)=0.$$
Thus, we can assume that
\[r_{k_m+1}(x_m)<\delta/2 \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N}.\]
For each $m\in\mathbb{N}$, let
\[C_m=f^{n_1(x_m,r_{{k_m}}(x_m),\varepsilon)+\ell_m}(\overline{B(x_m, r_{{k_m}+1}(x_m))}).\]
Recall that the sequence $(r_k)_{k\in\mathbb{N}}$ was chosen so that
\[\operatorname{diam}(f^{n_1(x_m,r_{k_m}(x_m),\varepsilon)}(\overline{B(x_m, r_{{k_m}+1}(x_m))}))=\varepsilon\]
for every $m\in\mathbb{N}$. Since $\overline{B(x_m, r_{{k_m}+1}(x_m))}$ is a subcontinuum of $X$, by property (P1) on the space, and it has diameter smaller than $\delta$, we obtain \[\operatorname{diam}(f^{j}(\overline{B(x_m, r_{{k_m}+1}(x_m))}))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\geq n_1(x_m,r_{k_m}(x_m),\varepsilon).\] In particular,
\[\operatorname{diam}(C_m)=\operatorname{diam}(f^{n_{1}(x_m,r_{{k_m}}(x_m),\varepsilon)+\ell_m}(\overline{B(x_m, r_{{k_m}+1}(x_m))}))\geq \delta.\]
Thus, the following conditions hold for every $m\in\mathbb{N}$:
\begin{itemize}
\item[(4)] $\mbox{diam}(C_m)\geq\delta$,
\item[(5)] $\mbox{diam}(f^{-j}(C_m))\leq \varepsilon$ for every $j\in\{0,\dots,\ell_m\}$ and
\item[(6)] $\mbox{diam}(f^{j}(C_m))\leq \varepsilon$ for every $j\in\{0,\dots,\ell_m\}$.
\end{itemize}
Considering an accumulation continuum \[\displaystyle C=\lim_{i\rightarrow\infty}C_{m_i}\] in the Hausdorff topology, we have that $C$ is a continuum, being a limit of continua, that $\operatorname{diam} (C)\geq\delta$, since
$$\mbox{diam}(C_m)\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\,m\in\mathbb{N},$$ and that
\[\operatorname{diam}(f^j(C))=\lim_{i\to\infty}\operatorname{diam}(f^j(C_{m_i}))\leq \varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{Z}\]
since $\ell_m\to\infty$. Thus, $C$ is a non-trivial $\varepsilon$-stable and $\varepsilon$-unstable continuum contradicting cw-expansiveness. This proves property (F1) and completes the proof.
\end{proof}
Theorem \ref{teoremacontinuosinst} ensures the existence of cw-unstable continua with uniform diameter at every point of the space $x\in X$. Since cw-unstable continua are indeed locally unstable, they are contained in the local unstable continua $C^u_{\varepsilon}(x)$. For some time we tried to prove that the local unstable continua are cw-unstable, i.e., belong to $\mathcal{F}^u$. We were only able to prove this under the following additional hypothesis:
\begin{definition}
Let $f$ be a homeomorphism of a compact metric space $X$ and $0<r<c$. We say that the first increasing time with respect to $c$ of a ball $B(x,r)$ is controlled by a subset $C\subset B(x,r)$ if $$n_1(x,r,c)=n_1(C,c).$$
Let $f$ be a cw-expansive homeomorphism with cw-expansivity constant $c=2\varepsilon>0$. We say that the local unstable continua control the increasing time of the balls of the space if the first increasing time of every ball $B(x,r)$ of radius $r<\varepsilon$ is controlled by the connected component of $x$ in $C^u_{\varepsilon}(x)\cap B(x,r)$.
\end{definition}
\begin{proposition}
If $f$ is a cw-expansive homeomorphism with cw-expansivity constant $2\varepsilon>0$ and the local unstable continua control the increasing time of the balls of the space, then $C^u_{\varepsilon}(x)\in\mathcal{F}^u$ for every $x\in X$.
\end{proposition}
\begin{proof}
Let $\delta\in(0,\varepsilon)$ be given by Theorem \ref{cw} such that $$\operatorname{diam}(C^u_{\varepsilon}(x))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X,$$ and choose $m_{2\varepsilon}\in\mathbb{N}$ such that for each $x\in X$ there exists $k\in\{0,\dots,m_{2\varepsilon}\}$ such that $$\operatorname{diam}(f^k(C^u_{\varepsilon}(x)))>\varepsilon.$$
For each $x\in X$ and $m\in\mathbb{N}$ let $$r_m(x)=\operatorname{diam}(f^{-m}(C^u_{\varepsilon}(x)))$$
and note that $$r_m(x)\leq2\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N} \,\,\,\,\,\, \text{and}$$
$$r_m(x)\to0 \,\,\,\,\,\, \text{when} \,\,\,\,\,\, m\to\infty,$$ recall from \cite{Kato1} that
$$C^u_{\varepsilon}(x)\subset W^u(x) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X.$$
We will prove that
\begin{enumerate}
\item $f^m(B(f^{-m}(x),r_m(x)))\to C^u_{\varepsilon}(x)$ \,\, when \,\, $m\to\infty$, and
\item $n_1(f^{-m}(x),r_m(x),2\varepsilon)\in(m,m+m_{2\varepsilon}]$ \,\, for every \,\, $m\in\mathbb{N}$.
\end{enumerate}
Note that the choice of $r_m(x)$ ensures that
$$C^u_{\varepsilon}(x)\subset f^m(B(f^{-m}(x),r_m(x))) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N}.$$
Thus, if $C$ is an accumulation continuum of the sequence $$(f^m(B(f^{-m}(x),r_m(x))))_{m\in\mathbb{N}},$$ then $C^u_{\varepsilon}(x)\subset C$.
It follows from the choice of $(r_m(x))_{m\in\mathbb{N}}$ and $m_{2\varepsilon}$ that $$n_1(f^{-m}(x),r_m(x),2\varepsilon)\leq m+m_{2\varepsilon} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N}.$$ The hypothesis that the local unstable continua control the increasing time of the balls of the space ensures that
$$n_1(f^{-m}(x),r_m(x),2\varepsilon)>m \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, m\in\mathbb{N},$$
since in this case we have
$$n_1(f^{-m}(x),r_m(x),2\varepsilon)=n_1(f^{-m}(C^u_{\varepsilon}(x)),2\varepsilon)>m.$$ This proves (2) and also ensures that $C$ is an $\varepsilon$-unstable continuum, since $C$ would be the limit of a sequence $(f^{m_i}(B(f^{-m_i}(x),r_{m_i})))_{i\in\mathbb{N}}$ with $m_i\to\infty$ and $$\operatorname{diam}(f^j(B(f^{-m_i}(x),r_{m_i})))\leq2\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\{0,\dots,m_i\} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, i\in\mathbb{N}.$$
Then it follows that $C\subset C^u_{\varepsilon}(x)$, since $C^u_{\varepsilon}(x)$ is the connected component of $x$ in $W^u_{\varepsilon}(x)$, and we conclude (1) and the proof.
\end{proof}
\hspace{-0.4cm}\textbf{Shift on the Hilbert cube $[0,1]^{\mathbb{Z}}$:}
Let $X=[0,1]^{\mathbb{Z}}$ and consider the following metric on $X$: for each $\underline{x}=(x_i)_{i\in\mathbb{Z}}$ and $\underline{y}=(y_i)_{i\in\mathbb{Z}}$ in $X$, let
\[d(\underline{x},\underline{y}) = \sup_{i\in\mathbb{Z}}\frac{|x_i-y_i|}{2^{|i|}}.\] Consider the bilateral backward shift
\[\begin{array}{rcl}
\sigma : [0,1]^{\mathbb{Z}} &\rightarrow & [0,1]^{\mathbb{Z}}\\
(x_i)_{i\in\mathbb{Z}} & \mapsto & (x_{i+1})_{i\in\mathbb{Z}}\\
\end{array}.\]
In this subsection we prove that $\sigma$ is first-time sensitive and characterize its local cw-unstable continua.
\begin{theorem}\label{shiftsens}
The shift map $\sigma\colon [0,1]^{\mathbb{Z}}\to[0,1]^{\mathbb{Z}}$ is first-time sensitive.
\end{theorem}
\begin{proof}
We first prove that $\sigma$ is sensitive (this can be found in \cite{AIL}). We prove that any $\varepsilon<c=\frac{1}{4}$ is a sensitivity constant of $\sigma$. Given $\delta>0$ and $\underline{x}=(x_i)_{i\in\mathbb{Z}}\in X$, choose $i_0\in\mathbb{Z}$ such that ${c}/{2^{i_0}}<\delta$. Let
$$y_{i_0} = \left\{\begin{array}{rr}
x_{i_0}+c,& \mbox{if } x_{i_0}\in [0,1/2]\\
x_{i_0}-c,& \mbox{if } x_{i_0}\in (1/2,1].\\
\end{array}\right.$$ Then, the sequence $\underline{y}=(\ldots,x_{-1},x_0, x_1,\ldots, x_{i_0-1},y_{i_0}, x_{i_0+1}\ldots)$, that is, $\underline{x}$ with only the $i_0$-th coordinate $x_{i_0}$ changed to $y_{i_0}$, belongs to $X$ and is contained in the ball centered at $\underline{x}$ with radius $\delta$, since $$d(\underline{y},\underline{x})=\sup_{i\in\mathbb{Z}}\left(\frac{|y_i-x_i|}{2^{|i|}}\right) = \frac{|x_{i_0} \pm c - x_{i_0}|}{2^{i_0}} = \frac{c}{2^{i_0}}<\delta.$$
Also, note that
$$d(\sigma^{i_0}(\underline{x}),\sigma^{i_0}(\underline{y}))= \sup_{i\in\mathbb{Z}}\left(\frac{|x_{i+i_0} - y_{i+i_0}|}{2^{|i|}}\right) = |x_{i_0}- x_{i_0} \pm c|=c>\varepsilon$$ and this is enough to prove that $\sigma$ is sensitive.
Now we prove that $\sigma$ is first-time sensitive.
For each $\underline{x}=(x_i)_{i\in\mathbb{Z}}\in X$ and $\underline{y}=(y_i)_{i\in\mathbb{Z}}\in X$ we have \[\begin{array}{rcl}\displaystyle\underline{y}=(y_i)_{i\in\mathbb{Z}}\in B(\underline{x},\varepsilon)
&\Leftrightarrow&\displaystyle\sup_{i\in\mathbb{Z}}\left(\frac{|x_i-y_i|}{2^{|i|}}\right)<\varepsilon
\\
&\Leftrightarrow&\displaystyle |x_i-y_i|< 2^{|i|}\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\mathbb{Z}
\\
&\Leftrightarrow&\displaystyle y_i\in (x_i-2^{|i|}\varepsilon, x_i+2^{|i|}\varepsilon) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\mathbb{Z}.
\end{array}\]
A similar argument proves that
\[\underline{y}=(y_i)_{i\in\mathbb{Z}}\in \sigma^j\left(B\left(\underline{x},\frac{\varepsilon}{2^{n}}\right)\right)\]
if, and only if,
\[y_i\in \left(x_{i+j}-2^{|i+j|}\frac{\varepsilon}{2^n}, x_{i+j}+2^{|i+j|}\frac{\varepsilon}{2^n}\right)\cap[0,1] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\mathbb{Z}.\]
For each $\underline{x}\in X$, $n\in\mathbb{N}$ and $j\in\mathbb{N}$ we have
\begin{equation}\label{desigualdeftcshift}
2^{j-n}\varepsilon\leq\operatorname{diam}\;\left(\sigma^j\left(B\left(\underline{x},\frac{\varepsilon}{2^{n}}\right)\right)\right)\leq 2^{j-n+1}\varepsilon.\end{equation}
Indeed, letting for each $i\in\mathbb{Z}$ and $j\in\mathbb{N}$
$$I_{i,j} = \left(x_{i+j}-2^{|i+j|}\dfrac{\varepsilon}{2^n}, x_{i+j}+2^{|i+j|}\dfrac{\varepsilon}{2^n}\right)\cap[0,1],$$ we have
$$\operatorname{diam}\left(\sigma^j\left(B\left(\underline{x},\frac{\varepsilon}{2^{n}}\right)\right)\right)=\sup_{i\in\mathbb{Z}}\;\frac{\operatorname{diam}(I_{i,j})}{2^{|i|}},$$
and since
$$2^{j-n}\varepsilon\leq\frac{\operatorname{diam}(I_{i,j})}{2^{|i|}}\leq2^{j-n+1}\varepsilon$$
for every $i\in\mathbb{Z}$, we obtain the desired inequalities.
For each $\gamma\in(0,\varepsilon]$, choose $k_\gamma\in\mathbb{N}$ such that
\begin{equation}\label{m_gammashift}
\displaystyle \dfrac{\varepsilon}{2^{k_\gamma+1}}\leq\gamma <\dfrac{\varepsilon}{2^{k_\gamma}}.
\end{equation}
From inequality (\ref{desigualdeftcshift}) we obtain
\[\operatorname{diam}\;\left(\sigma^j\left(B\left(\underline{x},\frac{\varepsilon}{2^{n}}\right)\right)\right)\leq\dfrac{\varepsilon}{2^{k_\gamma+1}}\leq\gamma \,\,\,\,\,\, \text{when} \,\,\,\,\,\, 0\leq j\leq n-k_\gamma-2,\]
and \[ \operatorname{diam}\;\left(\sigma^j\left(B\left(\underline{x},\frac{\varepsilon}{2^{n}}\right)\right)\right)\geq\frac{\varepsilon}{2^{k_\gamma}}>\gamma \,\,\,\,\,\, \text{if} \,\,\,\,\,\, j\geq n-k_\gamma.\]
This implies that $n_1\left(\underline{x},\frac{\varepsilon}{2^n},\gamma\right)$ is either $n-k_\gamma-1$ or $n-k_\gamma$ for every $n\in\mathbb{N}$.
Thus,
\[n_1\left(\underline{x},\frac{\varepsilon}{2^{n+1}},\gamma\right)-n_1\left(\underline{x},\frac{\varepsilon}{2^n},\gamma\right)\leq 2\] and \[n_1\left(\underline{x},\frac{\varepsilon}{2^n},\varepsilon\right) - n_1\left(\underline{x},\frac{\varepsilon}{2^n},\gamma\right) \leq k_\gamma+2\]
(in the last inequality we used that $n_1\left(\underline{x},\frac{\varepsilon}{2^n},\varepsilon\right)$ is either $n$ or $n+1$).
Since this holds for every $n\in\mathbb{N}$, considering $m_\gamma=k_\gamma+2$, we have that $(m_\gamma)_{\gamma\in(0,\varepsilon]}$ satisfies Properties (F1) and (F2) for the sequence of radii $\left(\frac{\varepsilon}{2^n}\right)_{n\in\mathbb{N}}$. Since $\left(\frac{\varepsilon}{2^n}\right)_{n\in\mathbb{N}}$ decreases monotonically to 0, we conclude that $\sigma$ is first-time sensitive. Notice that $\gamma\mapsto m_\gamma$ is non-increasing, since $m_\gamma=k_\gamma+2$ where $k_\gamma$ satisfies (\ref{m_gammashift}).
\end{proof}
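The estimates above can be checked numerically. The following sketch (illustrative only) computes $\operatorname{diam}(\sigma^j(B(\underline{x},\varepsilon/2^n)))$ from the interval description in the proof, truncating the index set $\mathbb{Z}$ to $|i|\leq 60$, and prints the first increasing times with respect to $\varepsilon$ and to a smaller $\gamma$ for a constant sequence $\underline{x}$; the choices of $\varepsilon$, $\gamma$ and $\underline{x}$ are arbitrary.
\begin{verbatim}
# Numerical sketch (illustration only): first increasing times of the shift on
# [0,1]^Z, using diam(sigma^j(B(x, r))) = sup_i diam(I_{i,j}) / 2^{|i|} with the
# intervals I_{i,j} described in the proof, truncated to |i| <= I_MAX.
EPS, GAMMA, I_MAX = 0.2, 0.2 / 8, 60     # eps < 1/4, gamma in (0, eps)

def interval_len(center, radius):
    lo, hi = max(0.0, center - radius), min(1.0, center + radius)
    return max(0.0, hi - lo)

def diam_image(x, r, j):
    return max(interval_len(x(i + j), (2.0 ** abs(i + j)) * r) / (2.0 ** abs(i))
               for i in range(-I_MAX, I_MAX + 1))

def n1(x, r, c, horizon=200):
    # first time j with diam(sigma^j(B(x, r))) > c
    for j in range(horizon):
        if diam_image(x, r, j) > c:
            return j
    return None

x = lambda i: 0.5                        # the constant sequence (..., 1/2, 1/2, ...)
for n in range(4, 12):
    print(n, n1(x, EPS / 2 ** n, EPS), n1(x, EPS / 2 ** n, GAMMA))
# the first column of times stays within {n, n+1}; the second is shifted by about k_gamma
\end{verbatim}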
\begin{remark}
We remark that there are no cw-expansive homeomorphisms on infinite dimensional compact metric spaces. This was proved by Kato in \cite{Kato1} generalizing a result of Mañé in the case of expansive homeomorphisms \cite{Ma}. Even though first-time sensitive homeomorphisms share important properties with cw-expansive homeomorphisms, as proved in Section 2, it is not possible to adapt the proof of Kato/Mañé to the case of first-time sensitive homeomorphisms.
\end{remark}
A direct consequence of Theorem \ref{shiftsens} is the following:
\begin{corollary}
There exist first-time sensitive homeomorphisms on infinite dimensional compact metric spaces.
\end{corollary}
\begin{remark}\label{rmkshift1}
We remark that the theorem of Kato assures that $\sigma\colon [0,1]^{\mathbb{Z}}\to[0,1]^{\mathbb{Z}}$ is not cw-expansive, but it is easy to choose non-trivial continua in arbitrarily small dynamical balls. For each $r>0$, the continuum
$$C_r= \prod_{i<0}\{0\}\times [0,r]\times \prod_{i>0}\{0\}$$ is non-degenerate and
$$\operatorname{diam}(\sigma^n(C_r))\leq\operatorname{diam}(C_r)=r \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}.$$ We note that $C_r$ is both an $r$-stable and $r$-unstable continuum that is not cw-stable nor cw-unstable since its diameter does not increase in the future or in the past. We also note that $C_r$ is both stable and unstable, since
$$\operatorname{diam}(\sigma^n(C_r))\leq\frac{r}{2^{|n|}} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}.$$
\end{remark}
\begin{remark}\label{rmkshift2}
We remark that $\sigma$ contains local stable continua that are not stable, and local unstable continua that are not unstable, at every point of the space. For each $\varepsilon\in(0,c)$ and $\underline{x}=(x_i)_{i\in\mathbb{Z}}\in X$, the non-trivial continuum \[C_{\underline{x}}=\prod_{i\in\mathbb{Z}}\{[x_i-\varepsilon,x_i+\varepsilon]\cap [0,1]\}\] is contained in $W^s_\varepsilon(\underline{x})\cap W^u_\varepsilon(\underline{x})$. Indeed, if $\underline{y}=(y_i)_{i\in\mathbb{Z}}\in C_{\underline{x}}$, then
\[y_i\in [x_i-\varepsilon,x_i+\varepsilon] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\mathbb{Z}\] and this implies that
\begin{eqnarray*}
d(\sigma^n(\underline{x}), \sigma^n(\underline{y})) &=& \sup_{i\in\mathbb{Z}}\frac{|x_{i+n}-y_{i+n}|}{2^{|i|}}\\
&\leq&\sup_{i\in\mathbb{Z}}\frac{\varepsilon}{2^{|i|}}\\
&\leq& \varepsilon
\end{eqnarray*}
for every $n\in\mathbb{Z}$. Moreover, $C_{\underline{x}}$ is not stable. Indeed, for each $\alpha\in(0,\varepsilon]$, the sequence $\underline{y}=(y_i)_{i\in\mathbb{Z}}$ defined as follows \[y_i=\left\{\begin{array}{ll}
x_i,&i<0\\
x_i+\alpha,&i\geq 0 \mbox{ and } x_i\in [0,1/2]\\
x_i-\alpha,&i\geq 0 \mbox{ and } x_i\in (1/2,1]\\
\end{array}\right.\] belongs to $C_{\underline{x}}$, but
\begin{eqnarray*}
d(\sigma^n(\underline{y}), \sigma^n(\underline{x}))&=&\sup_{i\in\mathbb{Z}}\frac{|y_{i+n}-x_{i+n}|}{2^{|i|}}\\
&=& \sup_{i\geq -n}\frac{|x_{i+n}\pm\alpha-x_{i+n}|}{2^{|i|}}\\
&=&\alpha
\end{eqnarray*}
for every $n\in\mathbb{N}$, that is, $\underline{y}\notin W^s(\underline{x})$. This assures that
\[\operatorname{diam}(\sigma^n(C_{\underline{x}}))\geq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq 0\]
and that $C_{\underline{x}}$ is not stable. A similar argument proves that $C_{\underline{x}}$ is a local unstable continuum that is not unstable.
\end{remark}
The next proposition characterizes the local cw-unstable continua of the shift map.
\begin{proposition}\label{caracterizationfushift}
A continuum $C$ belongs to $\mathcal{F}^u$ if, and only if, there are $\underline{x}=(x_i)_{i\in\mathbb{Z}}\in[0,1]^{\mathbb{Z}}$ and $k\in\mathbb{N}\cup\{0\}$ such that
\[C=\prod_{i\in\mathbb{Z}}\{[x_i-2^{i-k}\varepsilon,x_i+2^{i-k}\varepsilon]\cap[0,1]\}.\]
\end{proposition}
\begin{proof}
According to the proof of Theorem \ref{shiftsens}, any $\varepsilon<c=\frac{1}{4}$ is a sensitivity constant of $\sigma$, the sequence $(r_n)_{n\in\mathbb{N}}$ in the definition of first-time sensitivity is $\left(\dfrac{\varepsilon}{2^n}\right)_{n\in\mathbb{N}}$, and for each $\gamma\in(0,\varepsilon]$ there exists $k_\gamma\in\mathbb{N}$ such that $$\displaystyle \dfrac{\varepsilon}{2^{k_\gamma+1}}\leq\gamma <\dfrac{\varepsilon}{2^{k_\gamma}}, \,\,\,\,\,\, m_{\gamma}=k_{\gamma}+2, \,\,\,\,\,\, \text{and}$$
\begin{equation}\label{eq11}n_1\left(\underline{y}, \dfrac{\varepsilon}{2^n},\gamma\right)\in \{n-k_\gamma-1,n-k_\gamma\} \mbox{ for every }\underline{y}\in[0,1]^{\mathbb{Z}} \mbox{ and } n\in\mathbb{N}.\end{equation}
Thus, if $C\in\mathcal{F}^u$, then $C\in\mathcal{F}^u_{\gamma}$ for some $\gamma\in(0,\varepsilon)$, and, hence, there exist $\underline{x}\in[0,1]^{\mathbb{Z}}$, and increasing sequences $(l_j)_{j\in\mathbb{N}}$ and $(n_j)_{j\in\mathbb{N}}\subset \mathbb{N}$ such that
\[C=\lim_{j\rightarrow\infty}\sigma^{n_j}\left(\overline{B\left(\sigma^{-n_j}(\underline{x}),\dfrac{\varepsilon}{2^{l_j}}\right)}\right)\;\;\;\mbox{ and}\]
\begin{equation}\label{eq12} n_1\left(\sigma^{-n_j}(\underline{x}),\dfrac{\varepsilon}{2^{l_j}},\gamma\right)\in (n_j, n_j+m_\gamma]\;\;\;\mbox{for every}\;\;\;j\in\mathbb{N}.\end{equation}
As in the proof of Theorem \ref{shiftsens}, we have
\[\overline{B\left(\sigma^{-n_j}(\underline{x}),\dfrac{\varepsilon}{2^{l_j}}\right)} = \prod_{i\in\mathbb{Z}}\left\{\left[x_{i-n_j}-2^{|i|}\dfrac{\varepsilon}{2^{l_j}},x_{i-n_j}+2^{|i|}\dfrac{\varepsilon}{2^{l_j}}\right]\cap[0,1]\right\}\] and, consequently, \[\sigma^{n_j}\left(\overline{B\left(\sigma^{-n_j}(\underline{x}),\dfrac{\varepsilon}{2^{l_j}}\right)}\right) = \prod_{i\in\mathbb{Z}}\left\{\left[x_i-2^{|n_j+i|}\dfrac{\varepsilon}{2^{l_j}},x_i+2^{|n_j+i|}\dfrac{\varepsilon}{2^{l_j}}\right]\cap[0,1]\right\}\] for every $j\in\mathbb{N}$.
Thus,
\[\begin{array}{rcl}
C&=&\displaystyle \lim_{j\rightarrow\infty}\prod_{i\in\mathbb{Z}}\left\{\left[x_i-2^{|n_j+i|}\dfrac{\varepsilon}{2^{l_j}},x_i+2^{|n_j+i|}\dfrac{\varepsilon}{2^{l_j}}\right]\cap[0,1]\right\}\\
\\
&=&\displaystyle \prod_{i\in\mathbb{Z}}\left\{\left[x_i-\lim_{j\rightarrow\infty}2^{|n_j+i|-l_j}{\varepsilon},x_i+\lim_{j\rightarrow\infty}2^{|n_j+i|-l_j}{\varepsilon}\right]\cap[0,1]\right\}.
\end{array}\]
Note that the limit $\lim_{j\rightarrow\infty}2^{|n_j+i|-l_j}$ exists since the iterates of the previous closed balls converge to $C$ (by hypothesis), so each of their coordinates, and hence the radii of the intervals, also converge.
Moreover, (\ref{eq11}) ensures that
$$n_1\left(\sigma^{-n_j}(\underline{x}),\dfrac{\varepsilon}{2^{l_j}},\gamma\right)\in \{l_j-k_\gamma-1,l_j-k_\gamma\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{N},$$
and this together with (\ref{eq12}) ensures that $$n_j\in\{l_j-k_\gamma-m_\gamma-1,\dots,l_j-k_\gamma-1\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{N},$$
that is,
$$n_j-l_j\in\{-k_\gamma-m_\gamma-1,\dots,-k_\gamma-1\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\in\mathbb{N}.$$
Thus, for each $i\in\mathbb{Z}$ there exists $j_0\in\mathbb{N}$ such that
$$|n_j+i|=n_j+i \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\geq j_0,$$ and, hence, $$|n_j+i|-l_j\in\{-k_\gamma-m_{\gamma}-1+i,\ldots, -k_\gamma-1+i\} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\geq j_0.$$ Since the limit $\lim_{j\rightarrow\infty}2^{|n_j+i|-l_j}$ exists, there exists $$-k\in\{-m_\gamma-k_\gamma,\ldots,-k_\gamma-1\}$$ and $j_1\geq j_0$ such that
$$2^{|n_j+i|-l_j}=2^{i-k} \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, j\geq j_1.$$
So, $\displaystyle\lim_{j\rightarrow\infty}2^{|n_j+i|-l_j}=2^{i-k}$ and, hence,
\[C=\prod_{i\in\mathbb{Z}}\{[x_i-2^{i-k}\varepsilon,x_i+2^{i-k}\varepsilon]\cap[0,1]\}.\]
Now, suppose that there exists $k\in\mathbb{N}$ such that $$\displaystyle C=\prod_{i\in\mathbb{Z}}\{[x_i-2^{i-k}\varepsilon,x_i+2^{i-k}\varepsilon]\cap[0,1]\}.$$ We will prove that
\[C=\lim_{j\rightarrow \infty}\sigma^j\left(\overline{B\left(\sigma^{-j}(\underline{x}),\frac{\varepsilon}{2^{j+k}}\right)}\right).\]
As above,
\[\sigma^j\left(\overline{B\left(\sigma^{-j}(\underline{x}),\frac{\varepsilon}{2^{j+k}}\right)}\right) = \prod_{i\in\mathbb{Z}}\{[x_i-2^{|i+j|-j-k}\varepsilon,x_i+2^{|i+j|-j-k}\varepsilon]\cap[0,1]\}\]
for every $j\in\mathbb{N}$.
Thus, if $i\in\mathbb{N}$ and $j\geq|i|$, then
$$|i+j|=i+j, \,\,\,\,\,\, 2^{|i+j|-j-k}=2^{i-k},$$
$$[x_i-2^{|i+j|-j-k}\varepsilon,x_i+2^{|i+j|-j-k}\varepsilon]=[x_i-2^{i-k}\varepsilon,x_i+2^{i-k}\varepsilon],$$ and, hence,
$$C \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \sigma^j\left(\overline{B\left(\sigma^{-j}(\underline{x}),\frac{\varepsilon}{2^{j+k}}\right)}\right)$$
have the same coordinates between $-i$ and $i$. Since this holds for every $i\in\mathbb{N}$ and $j>|i|$, this is enough to conclude the desired limit and finish the proof.
\end{proof}
\begin{remark}
We note that all objects in the above proofs depend on the metric chosen for the space, from the sequence of radii $(\frac{\varepsilon}{2^n})_{n\in\mathbb{N}}$ to the numbers $m_{\gamma}$ and the local cw-unstable continua in $\mathcal{F}^u$. We invite the reader to prove similar results with a different metric on the Hilbert cube and see how these objects change.
\end{remark}
\hspace{-0.4cm}\textbf{Partially Hyperbolic Diffeomorphisms:}
In this subsection we discuss first-time sensitivity in the context of partially hyperbolic diffeomorphisms. The ideas and techniques of this paper are from topological dynamics and we will try to stay in the world of topological dynamics even though we need to talk about differentiability to define partial hyperbolicity.
\begin{definition}
A diffeomorphism $f\colon M\to M$ of a closed smooth manifold is called partially hyperbolic if the tangent bundle splits into three $Df$-invariant sub-bundles $TM=E^s\oplus E^c\oplus E^u$ where $E^s$ is uniformly contracted, $E^u$ is uniformly expanded, one of them is non-trivial, and the splitting is dominated (see \cite{CP} for more details on this definition).
\end{definition}
Classical and important examples of partially hyperbolic diffeomorphisms are obtained from direct products of an Anosov diffeomorphism $f\colon M\to M$ of a closed smooth manifold $M$ with the identity map or with a rotation of the unit circle $\mathbb{S}^1$. These examples are first-time sensitive, and this is a consequence of the following more general proposition. Recall that a homeomorphism $g$ is \emph{equicontinuous} if the family of iterates $(g^n)_{n\in\mathbb{N}}$ is equicontinuous.
\begin{prop}\label{product}
If $f$ is a first-time sensitive homeomorphism and $g$ is an equicontinuous homeomorphism, then $f\times g$ is first-time sensitive.
\end{prop}
\begin{proof}
Let $f\colon X\to X$ be a first-time sensitive homeomorphism and $g\colon Y\to Y$ be an equicontinuous homeomorphism of compact metric spaces $X$ and $Y$. We consider the product metric on the space $X\times Y$. Let $\varepsilon>0$ be a sensitivity constant of $f$ and $(r_k)_{k\in\mathbb{N}}$ be the sequence of functions, given by first-time sensitivity, such that for each $\gamma\in(0,\varepsilon]$ there is $m_\gamma>0$ satisfying properties (F1) and (F2).
Since $g$ is equicontinuous and $Y$ is compact, there exists $\delta_\gamma>0$ such that
\[B_Y(y,\delta_\gamma)\subset W^s_{\gamma,g}(y)\cap W^u_{\gamma,g}(y) \;\;\;\mbox{ for every }y\in Y.\] Defining $s_k\colon X\times Y\to\mathbb{R}^*_+$ by $s_k(x,y)=r_k(x)$, this implies that \[n_{1,f\times g}((x,y),s_k(x,y),\gamma) = n_{1,f}(x,r_k(x),\gamma)\] for every $(x,y)\in X\times Y$ and $k\in\mathbb{N}$ such that $r_k(x)\leq\delta_\gamma$.
Since the sequences $(r_k)_{k\in\mathbb{N}}$ and $(n_{1,f}(x,r_k(x),\gamma))_{k\in\mathbb{N}}$ satisfy Properties (F1) and (F2), it follows that $(s_k)_{k\in\mathbb{N}}$ and $(n_{1,f\times g}((x,y),s_k(x,y),\gamma))_{k\in\mathbb{N}}$ also satisfy them, with the same $m_{\gamma}$. Since this holds for every $\gamma>0$, we conclude that $f\times g$ is ft-sensitive.
\end{proof}
This is also true in the case of time-1 maps of Anosov flows and the proof is basically the same, with the direction of the flow acting as the equicontinuous homeomorphism. We also prove that the existence of continua with hyperbolic behavior and controlling the first increasing time of balls of the space implies first-time sensitivity.
\begin{theorem}
Let $f:X\rightarrow X$ be a sensitive homeomorphism of a compact metric space $(X,d)$. If there are $0<\lambda_1\leq\lambda_2<1$ such that for each ball $B(x,r)$ there is a continuum $C_{x,r}\subset B(x,r)$ that controls the first increasing time of $B(x,r)$, with $\operatorname{diam}(C_{x,r})\geq r$ and satisfying:
\begin{equation}\label{hiperbolicocontrolado}
\lambda_1^{-n}d(y,z)\leq d(f^n(y),f^n(z))\leq\lambda_2^{-n}d(y,z)
\end{equation}
for every $n\in\mathbb{N}$ and $y,z\in C_{x,r}$, then $f$ is first-time sensitive.
\end{theorem}
\begin{proof}
Let $c>0$ be a sensitivity constant of $f$, $\varepsilon\in(0,c]$, and define
$$r_k(x)=2\lambda_1^{k}\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in X \,\,\,\,\,\, \text{and every} \,\,\,\,\,\, k\in\mathbb{N}.$$ Choose $a\in\mathbb{N}$ such that $\lambda_2^a<\frac{1}{4}$ and let $m_\varepsilon=a+1$.
For each ball $B(x,r_k(x))$ consider a continuum $C_{x,r_k(x)}\subset B(x,r_k(x))$ as in the hypothesis. Since $\operatorname{diam}(C_{x,r_k(x)})\geq r_k(x)$, there are $y,z\in C_{x,r_k(x)}$ such that $d(y,z) = r_k(x)$. Thus,
\[
d(f^k(y), f^k(z))\;\geq\; \lambda_1^{-k}d(y,z)
\;= \;2\lambda_1^{-k}\lambda_1^k\varepsilon
\;=\;2\varepsilon
\;>\;\varepsilon,
\] and this implies that
$$n_1(x,r_k(x),\varepsilon)\leq k \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.$$ Also, for each $k\geq a$,
\[
d(f^{k-a}(y),f^{k-a}(z))\;\leq\;\lambda_2^{-k+a}d(y,z)
\;=\;2\lambda_1^k\lambda_2^{-k+a}
\varepsilon\;\leq\; 2\lambda_2^a\varepsilon
\;<\;\dfrac{\varepsilon}{2}
\;<\;\varepsilon
\] (the second inequality is ensured by $\lambda_1^k\lambda_2^{-k}\leq 1$, since by hypothesis $\lambda_1\leq\lambda_2$). This implies that
$$n_1(C_{x,r_k(x)},\varepsilon)\geq k-a,$$ and since $C_{x,r_k(x)}$ controls the first increasing time of $B(x,r_k(x))$, it follows that
$$n_1(x,r_k(x),\varepsilon)\geq k-a.$$
Thus,
\[k-a\leq n_1(x,r_k(x),\varepsilon)\leq k\] and, then
$$|n_1(x,r_{k+1}(x),\varepsilon)-n_1(x,r_k(x),\varepsilon)|\leq |k+1 - (k-a)|=a+1$$ for every $k\geq a$ and every $x\in X$.
Now, for each $\gamma\in(0,\varepsilon)$ consider $\ell_\gamma, k_\gamma\in\mathbb{N}$ satisfying \[2\lambda_2^{k_\gamma}\varepsilon\leq 2\lambda_1^{\ell_\gamma+1}\varepsilon\leq\gamma<2\lambda_1^{\ell_\gamma}\varepsilon,\] and let $$m_\gamma=\max\{k_\gamma,|k_\gamma-\ell_\gamma+1|\}.$$
Thus, for each $k\geq\max\{\ell_\gamma,k_\gamma\}$,
\[d(f^{k-\ell_\gamma}(y),f^{k-\ell_\gamma}(z))\geq \lambda_1^{\ell_\gamma-k}d(y,z) =2\lambda_1^{\ell_\gamma}\varepsilon>\gamma\] and
\[d(f^{k-k_\gamma}(y),f^{k-k_\gamma}(z))\leq2\lambda_2^{k_\gamma-k}\lambda_1^k\varepsilon\leq2\lambda_2^{k_\gamma}\varepsilon<\gamma\] and, hence,
\[k-k_\gamma\leq n_1(x,r_k(x),\gamma)\leq k-\ell_\gamma.\]
Therefore,
\[\begin{array}{rcl}
|n_1(x,r_{k+1}(x),\gamma)-n_1(x,r_{k}(x),\gamma)|
&\leq & |k+1-\ell_\gamma - (k-k_\gamma)|\\
&=& |k_\gamma-\ell_\gamma+1|
\end{array}\]
and
\[|n_1(x,r_k(x),\varepsilon)-n_1(x,r_k(x),\gamma)|\leq k-(k-k_\gamma) = k_\gamma\] for every $k$ such that $r_k(x)\leq \gamma$, ensuring Properties (F1) and (F2).
\end{proof}
Recall that for a partially hyperbolic diffeomorphism, the local strong unstable manifold of $x\in M$ is the submanifold tangent to $E^u(x)$ and is denoted by $W^{uu}_{\varepsilon}(x)$. The Stable Manifold Theorem ensures that the strong unstable manifolds satisfy the estimates (\ref{hiperbolicocontrolado}) of the previous theorem. Thus, partially hyperbolic diffeomorphisms for which the strong unstable manifolds (or some sub-manifolds of them) control the increasing times of the balls of the space are first-time sensitive.
In the discussion about local cw-unstable continua of partially hyperbolic diffeomorphisms, the strong unstable manifolds seem to play a central role, and the following question seems natural to consider:
\begin{question}
Are local cw-unstable continua of partially hyperbolic diffeomorphisms necessarily strong unstable manifolds?
\end{question}
We prove that this question can be answered affirmatively in the case of the product of a linear Anosov diffeomorphism of $\mathbb{T}^n$ with the identity $id$ of $\mathbb{S}^1$.
\begin{proposition}
If $f_A$ is a linear Anosov diffeomorphism of the Torus $\mathbb{T}^n$ and $id$ is the identity map of $\mathbb{S}^1$, then for the product $f_A\times id$ on $\mathbb{T}^{n+1}$, the continua in $\mathcal{F}^u$ are strong unstable manifolds.
\end{proposition}
\begin{proof}
Let $g=f_A\times id$ and $C\in\mathcal{F}^u$. Then $C\in\mathcal{F}^u_{\gamma}$ for some $\gamma\in(0,\varepsilon)$, and, hence, there exist
$x=(y,z)\in \mathbb{T}^n\times\mathbb{S}^1,\;n_k\to\infty$, and $r_{n_k}\to0$ such that
\[C=\lim_{k\rightarrow\infty}g^{n_k}(\overline{B(g^{-n_k}(x),r_{n_k})}) \,\,\,\,\,\, \text{and}\]
\[n_1(g^{-n_k}(x),r_{n_k},\gamma)\in (n_k,n_k+m_\gamma] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{N}.\]
We will prove that $C\subset\mathbb{T}^n\times\{z\}$. Indeed, in the product metric,
$$B_{\mathbb{T}^n\times\mathbb{S}^1}(x,r)=B_{\mathbb{T}^n}(y,r)\times(z-r,z+r)$$ and, hence, for each $k\in\mathbb{N}$,
$$B(g^{-n_k}(x),r_{n_k})=B_{\mathbb{T}^n}(f_A^{-n_k}(y),r_{n_k})\times(z-r_{n_k},z+r_{n_k}).$$ Since $r_{n_k}\to0$ and $g^{n_k}$ acts as the identity on $(z-r_{n_k},z+r_{n_k})$, it follows that
$$C=\lim_{k\rightarrow\infty}g^{n_k}(\overline{B(g^{-n_k}(x),r_{n_k})})\subset \lim_{k\rightarrow\infty}f_A^{n_k}(\overline{B(f_A^{-n_k}(y),r_{n_k})})\times\{z\}\subset\mathbb{T}^n\times\{z\}.$$
Since $g=f_A$ on $\mathbb{T}^n\times\{z\}$, it follows by the local product structure of $f_A$ that $C\subset W^{uu}_{\varepsilon}(x)$.
\end{proof}
The case of partially hyperbolic diffeomorphisms that are the time-1 map of an Anosov flow seems to be similar to the above proposition, but we believe this might not be the case for skew-product partially hyperbolic diffeomorphisms. This goes beyond the scope of this paper, though.
\hspace{-0.4cm}\textbf{Sensitive but not first-time sensitive:}
In this subsection we describe precisely the example, briefly discussed in the introduction, of a sensitive homeomorphism of $\mathbb{T}^2$ that is not first-time sensitive and does not satisfy several important features of hyperbolic systems. We begin with an irrational flow on the torus generated by a constant vector field $F$ (every orbit of which is dense in $\mathbb{T}^2$) and multiply $F$ by a non-negative smooth function $g\colon\mathbb{T}^2\to\mathbb{R}$ with a single zero at a point $p\in\mathbb{T}^2$. The flow $\varphi$ generated by the vector field $gF$ has a fixed point at $p$, with one stable orbit (which is dense in the past), one unstable orbit (which is dense in the future), and any orbit distinct from these three is dense both in the future and in the past (see Figure \ref{figura:fluxo}).
\begin{figure}
\caption{Modified irrational flow.}
\label{figura:fluxo}
\end{figure}
\begin{proposition}\label{notft}
If $f\colon\mathbb{T}^2\to\mathbb{T}^2$ is the time-1 map of the flow generated by the vector field $gF$, then $f$ is sensitive but not first-time sensitive.
\end{proposition}
\begin{proof}
To prove that $f$ is sensitive we just note that in every open ball of the space $B(x,r)$ there are points in the stable orbit of $p$ and points that are not in the stable orbit of $p$. Recall that both the backward part of the stable orbit of $p$ and the forward orbit of a point that is not in the stable orbit of $p$ are dense on $\mathbb{T}^2$. Thus, we can find $y,z\in B(x,r)$ such that $y\in W^s(p)$ and the future orbit of $z$ is dense on $\mathbb{T}^2$ and, hence, there exists $k\in\mathbb{N}$ such that $d(f^k(y),f^k(z))>\frac{1}{2}$.
To prove that $f$ is not first-time sensitive, we use techniques from \cite{art}, where it is proved that $\varphi$ is not geometric expansive but is kinematic expansive, meaning that the separation of orbits is not geometric, since generic orbits are parallel straight lines, but locally distinct orbits are separated in time. We formalize this argument as follows.
For each $x\in\mathbb{T}^2$ and $\varepsilon>0$ let $C_{\varepsilon}(x)$ be the connected component of $x$ in the flow orbit of $x$ contained in $B(x,\varepsilon)$. We will prove the existence of $\varepsilon>0$ such that
\begin{equation}\label{Cuflow}
W^u_{\varepsilon}(x)\subset C_{\varepsilon}(x) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\neq p.
\end{equation}
If $x$ belongs to the stable orbit of $p$, then (\ref{Cuflow}) contradicts the existence of cw-unstable continua containing $x$, which should increase in the future (see Theorem \ref{teoremacontinuosinst} and Proposition \ref{crescimentouniforme}): being a small segment of flow orbit contained in the stable orbit of $p$, such a continuum cannot increase in the future.
Let $\varepsilon>0$ be such that if $y\in W^u_{\varepsilon}(x)$ and the segments of the orbits of $x$ and $y$ with length $\varepsilon$ are $\alpha$-distant from each other, then the segments of the orbits of $f^{-k}(x)$ and $f^{-k}(y)$ with length $\varepsilon$ are also $\alpha$-distant from each other for every $k\in\mathbb{N}$. The existence of such $\varepsilon$ follows from the fact that the orbits of the irrational flow are parallel lines and that the orbits of $\varphi$ are contained in the orbits of the irrational flow. If $y\in W^u_{\varepsilon}(x)\setminus C_{\varepsilon}(x)$, then it is in a different local orbit but its past orbit follows the past orbit of $x$. Let $\alpha>0$ be the distance between the segments of orbits of length $\varepsilon$ of $x$ and $y$. Choose times $(n_k)_{k\in\mathbb{N}}$ such that $f^{-n_k}(x)$ converges to $p$. The choice of $\varepsilon$ ensures that $f^{-n_k}(y)$ remains at a distance greater than $\frac{\alpha}{2}$ from $p$, and since $p$ is the only fixed point of $\varphi$, there is $n\in\mathbb{N}$, independent of $k$, such that $n$ iterates of $f$ suffice for the orbit of $y$ to become $2\varepsilon$-distant from $p$, while the number of iterates needed for the orbit of $x$ goes to infinity, since $f^{-n_k}(x)$ converges to $p$. This ensures the existence of $k\in\mathbb{N}$ such that
$$d(f^{-n_k-n}(x),f^{-n_k-n}(y))>\varepsilon$$
contradicting that $y\in W^u_{\varepsilon}(x)$.
\end{proof}
\section{Positive topological entropy}
In the study of chaotic systems, the topological entropy is an important invariant that measures the complexity of the dynamics. Positivity of topological entropy is strongly related with chaotic properties of such systems. It is known that positive topological entropy implies distinct notions of chaos (see \cite{Down} for an example and the references therein). Let us define topological entropy precisely. During this whole section $f\colon X\to X$ denotes a homeomorphism of a compact metric space. Given $n\in\mathbb{N}$ and $\delta>0$, we say that $E\subset X$ is $(n,\delta)$-\emph{separated}
if for each $x,y\in E$, $x\neq y$, there is $k\in \{0,\dots,n\}$ such that
$d(f^k(x),f^k(y))>\delta$.
Let $s(n,\delta)$ denote the maximal cardinality of an $(n,\delta)$-separated subset $E\subset X$ (since $X$ is compact, $s(n,\delta)$ is finite).
Let\[
h(f,\delta)=\limsup_{n\to\infty}\frac 1n\log s(n,\delta).
\]
Note that $h(f,\delta)$ increases as $\delta$ decreases to $0$ and define $$h(f)=\lim_{\delta\to 0}h(f,\delta).$$
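A crude numerical illustration of these quantities (not used anywhere in this paper) is sketched below: a greedy construction of an $(n,\delta)$-separated set gives a lower bound for $s(n,\delta)$ and hence a lower estimate for $\frac{1}{n}\log s(n,\delta)$. The map (the cat map on the $2$-torus, whose topological entropy is $\log\frac{3+\sqrt{5}}{2}$), the parameters and the number of random trials are arbitrary, and the quality of the estimate is limited by the number of trials.
\begin{verbatim}
# Rough sketch (illustration only): greedy lower bound for s(n, delta) and for
# (1/n) log s(n, delta), for the cat map on the 2-torus.
import numpy as np

def torus_dist(p, q):
    d = np.abs(p - q)
    d = np.minimum(d, 1.0 - d)
    return np.sqrt((d ** 2).sum(axis=-1))

def orbit_segment(p, n):
    A = np.array([[2.0, 1.0], [1.0, 1.0]])
    seg = [np.array(p, float)]
    for _ in range(n):
        seg.append((A @ seg[-1]) % 1.0)
    return np.array(seg)                        # points f^0(p), ..., f^n(p)

def greedy_separated(n, delta, trials=800, seed=0):
    rng = np.random.default_rng(seed)
    kept = []                                   # orbit segments of the kept points
    for _ in range(trials):
        seg = orbit_segment(rng.random(2), n)
        # keep the candidate if some iterate separates it from every kept point
        if all(torus_dist(seg, other).max() > delta for other in kept):
            kept.append(seg)
    return len(kept)

n, delta = 8, 0.1
s_lower = greedy_separated(n, delta)
print(s_lower, np.log(s_lower) / n)             # lower estimate, limited by `trials`
\end{verbatim}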
The example in Proposition \ref{notft} is a sensitive homeomorphism of $\mathbb{T}^2$ with zero topological entropy. Indeed, it is proved in \cite{Young} that every continuous flow on a compact two-manifold has zero topological entropy. Kato proved that cw-expansive homeomorphisms have positive topological entropy when defined on compact metric spaces with positive topological dimension \cite{Kato2}. The existence of local unstable continua with several properties that resemble hyperbolic unstable manifolds assures the existence of large $(n,\delta)$-separated sets (see Theorem 4.1 in \cite{Kato1} for more details).
In this section we obtain similar results in the case of first-time sensitive homeomorphisms. It is important to note that we were not able to prove that first-time sensitivity always implies positive topological entropy. The idea we explore here follows the proof of Kato for cw-expansive homeomorphisms, exchanging local unstable continua for local cw-unstable continua. This presented some difficulties that we were only able to circumvent with some additional hypotheses. We explain them in what follows.
The first difference to note is that for ft-sensitive homeomorphisms, the existence of local unstable continua that are also stable, and hence do not increase in the future (see Remarks \ref{rmkshift1} and \ref{rmkshift2}), does not allow us to start the proof with an arbitrary local unstable continuum. Choosing a local cw-unstable continuum is enough to deal with this problem, since such continua increase in the future. The second difference, illustrated by the following example, is that in the proof of Kato, after iterating an unstable continuum to increase its diameter, one can take a pair of distinct unstable subcontinua that can again be iterated to increase, and this can be done indefinitely in the future. But this is not necessarily true in the case of ft-sensitive homeomorphisms, since local cw-unstable continua can contain several proper stable subcontinua.
\begin{example}
Let $c=\frac{1}{4}$ and $\varepsilon\in(0,c)$. Consider the cw-unstable continuum as in Proposition \ref{caracterizationfushift}:
\[D=\prod_{i\in\mathbb{Z}}([x_i-2^i\varepsilon,x_i+2^i\varepsilon]\cap [0,1]).\]
Choose $M\in\mathbb{N}$ such that $2^M\varepsilon>c$ and, hence, $$\operatorname{diam}(\sigma^M(D))>c.$$
For each $m\geq M$, let $$y_m=\min([x_m-2^m\varepsilon,x_m+2^m\varepsilon]\cap [0,1]) \mbox{ and }$$
$$z_m=\max([x_m-2^m\varepsilon,x_m+2^m\varepsilon]\cap [0,1]).$$
Define continua $C_1$ and $C_2$ as follows:
\[C_1 = \prod_{i<0}\{x_{i+M}\}\times [y_M,y_M+1/12]\times\prod_{i>0}[y_{i+M},y_{i+M}+1/12]\] and
\[C_2=\prod_{i<0}\{x_{i+M}\}\times [z_M-1/12,z_M]\times\prod_{i>0}[z_{i+M}-1/12,z_{i+M}].\]
Note that $C_1$ and $C_2$ are subcontinua of $\sigma^M(D)$ satisfying
\begin{enumerate}
\item $d(C_1,C_2)>1/12$,
\item $\operatorname{diam}(\sigma^n(C_1))=1/12$ \,\,for every \,\,$n\in\mathbb{N}$, and
\item $\operatorname{diam}(\sigma^n(C_2))= 1/12$ \,\,for every \,\,$n\in\mathbb{N}$.
\end{enumerate}
\qed
\end{example}
This property of indefinitely splitting unstable continua that increase in the future is the core of Kato's proof of positive topological entropy for cw-expansive homeomorphisms. In the following we state it as a definition for arbitrary homeomorphisms and adapt Kato's proof to show that it implies positive topological entropy. We denote by $\mathcal{C}(X)$ the set of all subcontinua of $X$ and by $d_H$ the Hausdorff distance on $\mathcal{C}(X)$ defined as
$$d_H(C,C')=\inf\{\varepsilonsilon>0; \,\,\, C\operatorname{Supp}bset B(C',\varepsilonsilon) \,\,\,\,\,\, \text{and} \,\,\,\,\,\, C'\operatorname{Supp}bset B(C,\varepsilonsilon)\}.$$
\begin{definition}\label{splittoincrease}
We say that $C\in\mathcal{C}(X)$ can be \textit{indefinitely split to increase} if there exist $\delta>0$ and $M\in\mathbb{N}$ such that $\operatorname{diam}\;f^M(C)\geq \delta$ and for each $n\in\mathbb{N}$ and $(i_1,\dots,i_n)\in\{0,1\}^n$ there exists $C_{i_1i_2\cdots i_n}\in\mathcal{C}(X)$ satisfying:
\begin{enumerate}
\item $\operatorname{diam}\;f^{M}(C_{i_1i_2\cdots i_n})\geq \delta$,
\item $C_{i_1i_2\cdots i_{n-1}i_n}\subset f^M(C_{i_1i_2\cdots i_{n-1}})$, and
\item $d_H(C_{i_1i_2\cdots i_{n-1}0},C_{i_1i_2\cdots i_{n-1}1})\geq \frac{\delta}{3}$.
\end{enumerate}
\end{definition}
\begin{theorem}\label{posent}
Let $f\colon X\to X$ be a homeomorphism of a compact metric space. If there exists a continuum that can be indefinitely split to increase, then $f$ has positive topological entropy.
\end{theorem}
\begin{proof}
Let $C\in\mathcal{C}(X)$, $\delta>0$, $M\in\mathbb{N}$ and $(C_{i_1i_2\cdots i_n})_{i_1,\dots,i_n,n}$ be as in the previous definition.
For each $n\in\mathbb{N}$ and $(i_1,\dots,i_n)\in\{0,1\}^n$ choose $$y_{i_1i_2\ldots i_{n}}\in C_{i_1i_2\cdots i_n},$$ and let $$x_{i_1i_2\cdots i_n}=f^{-nM}(y_{i_1i_2\ldots i_{n}}).$$
Applying condition (2) of the above definition inductively we obtain that $$x_{i_1i_2\cdots i_n}\in C \,\,\,\,\,\, \text{for every}\,\,\,\,\,\, (i_1,\dots,i_n)\in\{0,1\}^n \,\,\,\,\,\, \text{and} \,\,\,\,\,\, n\in\mathbb{N}.$$
We prove that for each $n\in\mathbb{N}$ the set
\[A_n = \{x_{i_1i_2\cdots i_n} \;|\; (i_1,\dots,i_n)\in\{0,1\}^n \}\] is $(nM,\delta/3)$-separated. Indeed, if $x_{i_1\cdots i_{n}},x_{j_1\cdots j_{n}}\in A_n$ are distinct, then there exists $k\in\{1,2,\ldots, n\}$ such that
$$j_l=i_l \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, l\in\{1,\dots,k-1\}, \,\,\,\,\,\, \text{and} \,\,\,\,\,\, j_k\neq i_k.$$ Assume without loss of generality that $i_k=0$ and $j_k=1$. Condition (3) ensures that
$$d_H(C_{i_1i_2\cdots i_{k-1}0}, C_{i_1i_2\cdots i_{k-1}1})\geq \delta/3,$$
and since condition (2) ensures that
$$f^{kM}(x_{i_1i_2\cdots i_{k-1}0i_{k+1}\cdots i_{n}})\in C_{i_1i_2\cdots i_{k-1}0} \,\,\,\,\,\, \text{and}$$
$$f^{kM}(x_{i_1i_2\cdots i_{k-1}1j_{k+1}\cdots j_{n}})\in C_{i_1i_2\cdots i_{k-1}1},$$
it follows that
\[d(f^{kM}(x_{i_1\cdots i_{n}}),f^{kM}(x_{j_1\cdots j_{n}}))\geq\delta/3.\]
Since for each $n\in\mathbb{N}$, $A_n$ has $2^n$ elements and is $(nM,\delta/3)$-separated, it follows that
$$s(nM,\delta/3)\geq2^n \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.$$
Thus,
\[\begin{array}{rcl}
h(f,\delta/3) & = &\limsup_{n\rightarrow\infty}\frac{1}{n}\cdot\log s(n,\delta/3)\\
\\
&\geq&\limsup_{n\rightarrow\infty}\left(\frac{1}{nM}\cdot\log s(nM,\delta/3)\right)\\
\\
&\geq &\limsup_{n\rightarrow\infty} \frac{1}{nM}\cdot\log 2^n\\
\\
&\geq &\limsup_{n\rightarrow\infty} \frac{n}{nM}\cdot\log 2\\
\\
&=&\frac{1}{M}\cdot \log 2\;>\;0
\end{array}\]
and, hence, $h(f)>0$.
\end{proof}
\begin{proposition}
Every local unstable continuum of a cw-expansive homeomorphism of a Peano continuum can be indefinitely split to increase.
\end{proposition}
\begin{proof}
Let $c>0$ be a cw-expansivity constant of $f$, and $\varepsilon\in(0,c)$. The following was proved by Kato in \cite{Kato1}:
\begin{enumerate}
\item for each $\gamma\in(0,\varepsilon)$ there exists $m_\gamma\in\mathbb{N}$ such that $n_1(C,\varepsilon)\leq m_\gamma$ for every $\varepsilon$-unstable continuum $C$ with $\operatorname{diam}(C)\geq\gamma$, and
\item there exists $\delta>0$ such that $\operatorname{diam}(f^n(C))\geq \delta$ for every $n\geq n_1(C,\varepsilon)$ and every $\varepsilon$-unstable continuum $C$.
\end{enumerate}
Let $C$ be an $\varepsilon$-unstable continuum, choose $\gamma\in(0,\operatorname{diam}(C))$, consider $m_{\gamma}$ and $\delta$ given by (1) and (2) above and let $M=\max\{m_\gamma,m_{\delta/3}\}$. Thus, $\operatorname{diam} (f^M(C))\geq \delta$ and we can choose $C_0$ and $C_1$ subcontinua of $f^M(C)$ such that
$$\operatorname{diam}(C_i)\geq \delta/3 \,\,\,\,\,\, \text{for} \,\,\,\,\,\, i\in\{0,1\}, \,\,\,\,\,\, \text{and} \,\,\,\,\,\, d_H(C_0, C_1)\geq \delta/3.$$ Since $M\geq m_{\delta/3}$, it follows that $\operatorname{diam}(f^M(C_i))\geq\delta$, for each $i\in\{0,1\}$ and, again, we can choose continua $C_{i0}$ and $C_{i1}$ with diameter larger than $\delta/3$ and $d_H(C_{i0},C_{i1})\geq\delta/3$. Inductively, we can define the family $(C_{i_1i_2\cdots i_n})_{i_1,\dots,i_n,n}$ satisfying items (1), (2), and (3) in Definition \ref{splittoincrease}.
\end{proof}
In the case of first-time sensitive homeomorphisms, the local cw-unstable continua increase when iterated forward. Thus, to prove that they can be split to increase it is enough to prove that they can be split by continua in $\mathcal{F}^u$. The next definition is a formalization of this idea.
\begin{definition}\label{splitinfu}
We say that $C\in\mathcal{F}^u$ can be \textit{indefinitely split in $\mathcal{F}^u$} if there exists $\delta>0$ such that $C\in\mathcal{F}^u_{\delta}$ and for each $n\in\mathbb{N}$ and $(i_1,\dots,i_n)\in\{0,1\}^n$ there exists $C_{i_1i_2\cdots i_n}\in\mathcal{C}(X)$ satisfying:
\begin{enumerate}
\item $C_{i_1i_2\cdots i_n}\in\mathcal{F}^u_{\gamma}$ for some $\gamma\geq \delta$,
\item $C_{i_1i_2\cdots i_{n-1}i_n}\subset f^{2m_{\delta}}(C_{i_1i_2\cdots i_{n-1}})$, and
\item $d_H(C_{i_1i_2\cdots i_{n-1}0},C_{i_1i_2\cdots i_{n-1}1})\geq \frac{\delta}{3}$.
\end{enumerate}
\end{definition}
\begin{comment}
\begin{definition}\label{splitinfu}
We say that $C\in\mathcal{F}^u_\alpha$ can be \textit{indefinitely split in $\mathcal{F}^u$} if there exists $\delta>0$ such that for each $n\in\mathbb{N}$ and $(i_1,\dots,i_n)\in\{0,1\}^n$ there exists $C_{i_1i_2\cdots i_n}\in\mathcal{C}(X)$ satisfying:
\begin{enumerate}
\item $C_{i_1i_2\cdots i_n}\in\mathcal{F}^u_{\gamma}$ for some $\gamma\geq \delta$
\item $C_{i_1i_2\cdots i_{n-1}i_n}\operatorname{Supp}bset f^{2m}(C_{i_1i_2\cdots i_{n-1}})$ for $m=\max\{m_\alpha,m_\delta\}$, and
\item $d_H(C_{i_1i_2\cdots i_{n-1}0},C_{i_1i_2\cdots i_{n-1}1})\geq \delta$.
\end{enumerate}
\end{definition}
\end{comment}
\begin{proposition}
For a first-time sensitive homeomorphism, if a continuum can be indefinitely split in $\mathcal{F}^u$, then it can be indefinitely split to increase.
\end{proposition}
\begin{proof}
If $C\in\mathcal{F}^u$ can be indefinitely split in $\mathcal{F}^u$, then there exist $\delta>0$ and a family $(C_{i_1i_2\cdots i_n})_{i_1i_2\cdots i_n,n}$ satisfying conditions (1), (2) and (3) in Definition \ref{splitinfu}. Let $M=m_{2\delta}$ and consider $\delta'\in(0,\delta)$ given by Lemma \ref{continuonaodecresce} such that
$$\operatorname{diam}(f^{M}(C))\geq\delta' \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{diam}(f^{M}(C_{i_1i_2\cdots i_n}))\geq\delta'$$
for every $(i_1,\dots,i_n)\in\{0,1\}^n$ and every $n\in\mathbb{N}$. Conditions (1), (2) and (3) of Definition \ref{splittoincrease} follow easily from that.
\end{proof}
\begin{comment}
\begin{proposition}
For a first-time sensitive homeomorphism, if a continuum can be indefinitely split in $\mathcal{F}^u$, then it can be indefinitely split to increase.
\end{proposition}
\begin{proof}
If $C\in\mathcal{F}^u_\alpha$ can be indefinitely split in $\mathcal{F}^u$, then there exists $\delta>0$ and a family $(C_{i_1i_2\cdots i_n})_{i_1i_2\cdots i_n,n}$ satisfying conditions (1), (2) and (3) in Definition \mathbb{R}f{splitinfu}. Let $M=2\max\{m_{\delta},m_\alpha\}$ and consider $\delta'\in(0,\delta)$ given by Lemma \mathbb{R}f{continuonaodecresce} such that
$$\operatorname{dim}am(f^{M}(C))\geq\delta' \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{dim}am(f^{M}(C_{i_1i_2\cdots i_n}))\geq\delta'$$
for every $(i_1,\dots,i_n)\in\{0,1\}^n$ and every $n\in\mathbb{N}$. Conditions (1), (2) and (3) of Definition \mathbb{R}f{splittoincrease} follow easily from that.
\end{proof}
\end{comment}
A consequence of this is that the existence of a continuum in $\mathcal{F}^u$ that can be indefinitely split in $\mathcal{F}^u$ would imply positive topological entropy for a first-time sensitive homeomorphism. A difficulty that appears is that for $C\in\mathcal{F}^u$ we can choose $x,y\in f^{2m_\delta}(C)$ such that $d(x,y)\geq\frac{\delta'}{3}$ and Theorem \ref{teoremacontinuosinst} ensures the existence of continua $C_0,C_1\in\mathcal{F}^u_{\delta}$ containing $x$ and $y$, respectively, but we could not prove that $C_0$ and $C_1$ are contained in $f^{2m_{\delta}}(C)$. Thus, the following is still an open question:
\begin{question}\label{3}
Can continua in $\mathcal{F}^u$ of a first-time sensitive homeomorphism be indefinitely split in $\mathcal{F}^u$?
\end{question}
\begin{comment}
\begin{proposition}
Every $C\in\mathcal{F}^u$ of the shift map $\sigma:[0,1]^{\mathbb{Z}}\rightarrow[0,1]^{\mathbb{Z}} $ can be indefinitely split in $\mathcal{F}^u$.
\end{proposition}
\begin{proof}
Recall that any $\varepsilonsilon<c=\frac{1}{4}$ is a sensitivity constant of $\sigma$, the sequence $(r_n)_{n\in\mathbb{N}}$ in the definition of first-time sensitivity is $\left(\dfrac{\varepsilonsilon}{2^n}\right)_{n\in\mathbb{N}}$, and for each $\gamma\in(0,\varepsilonsilon]$ there exists $k_\gamma\in\mathbb{N}$ such that $$\operatorname{dim}splaystyle \dfrac{\varepsilonsilon}{2^{k_\gamma+1}}\leq\gamma <\dfrac{\varepsilonsilon}{2^{k_\gamma}}, \,\,\,\,\,\, m_{\gamma}=k_{\gamma}+2, \,\,\,\,\,\, \text{and}$$
\begin{equation*}\label{eq11}n_1\left(\underline{y}, \dfrac{\varepsilonsilon}{2^n},\gamma\right)\in \{n-k_\gamma-1,n-k_\gamma\} \mbox{ for every }\underline{y}\in[0,1]^{\mathbb{Z}} \mbox{ and } n\in\mathbb{N}.\end{equation*}
Proposition \mathbb{R}f{caracterizationfushift} ensures that if $C\in\mathcal{F}^u$, then there are $\gamma\in(0,\varepsilonsilon)$, $\underline{x} = (x_i)_{i\in\mathbb{Z}}$ and $k\in\{k_\gamma+1,\ldots,k_\gamma+m_\gamma\}$ such that
\[C=\partialrod_{i\in\mathbb{Z}}\{[x_i-2^{i-k}\varepsilonsilon,x_i+2^{i-k}\varepsilonsilon]\cap[0,1]\}.\] To simplify the notation, let
\[I(x,r) = \{[x-r,x+r]\cap[0,1]\}. \]
Let $m=2\max\{m_\gamma,m_{\varepsilonsilon/6}\}$ and note that
\[\begin{array}{rcl}
\sigma^{m}(C) &= &\operatorname{dim}splaystyle\partialrod_{i\in\mathbb{Z}}\{[x_{i+m}-2^{i-k+{m}}\varepsilonsilon, x_{i+2m}+2^{i-k+{m}}\varepsilonsilon]\cap[0,1]\}\\
&=&\operatorname{dim}splaystyle\partialrod_{i\in\mathbb{Z}} I(x_{i+m},2^{i-k+{m}}\varepsilonsilon).\\
\end{array}\]
So the $0$-th coordinate of $\sigma^{m}(C)$ has length greater than or equal to $\varepsilonsilon$ since
$$m-k\geq2m_{\gamma}-k_{\gamma}-m_{\gamma}=2$$
and, hence,
\[2^{m-k}\varepsilonsilon\geq\varepsilonsilon.\]
mma}+2^{-k+2{m_\gamma}}\varepsilonsilon]\cap[0,1]\}\right|\geq \delta/6\] and \[\left|p-\max\{[x_{0+2m_\gamma}-2^{-k+2{m_\gamma}}\varepsilonsilon, x_{0+2m_\gamma}+2^{-k+2{m_\gamma}}\varepsilonsilon]\cap[0,1]\}\right|\geq\delta/6.\] Now, consider $l\in\mathbb{N}$ satisfying $2^{-l+1}\varepsilonsilon<\delta/3\leq 2^{-l+2}\varepsilonsilon$ and let
If we let
$$p_1=\min\{ I(x_{m},2^{-k+{m}}\varepsilonsilon)\}+\varepsilonsilon/4 \,\,\,\,\,\, \text{and}$$
$$q_1=\max\{I(x_{m},2^{-k+{m}}\varepsilonsilon)\}-\varepsilonsilon/4,$$ then $I(p_1,2^{-3}\varepsilonsilon)$ and $I(q_1,2^{-3}\varepsilonsilon)$ are contained in $I(x_{m},2^{-k+{m}}\varepsilonsilon)$, have diameter greater than $\varepsilonsilon/6$ and $$d(I(p_1,2^{-3}\varepsilonsilon),I(q_1,2^{-3}\varepsilonsilon))>\frac{\varepsilonsilon}{6}.$$ Consider
\[C_0 = \partialrod_{i<0} I(x_{i+m},2^{i-3}\varepsilonsilon) \times I(p_1,2^{-3}\varepsilonsilon)\times \partialrod_{i>0} I(x_{i+m},2^{i-3}\varepsilonsilon)\]
and
\[C_1 = \partialrod_{i<0} I(x_{i+m},2^{i-3}\varepsilonsilon) \times I(q_1,2^{-3}\varepsilonsilon)\times \partialrod_{i>0} I(x_{i+m},2^{i-3}\varepsilonsilon)\]
and note that Proposition \mathbb{R}f{caracterizationfushift} ensures that $C_0,C_1\in\mathcal{F}^u$. More precisely,
$$C_0\in \mathcal{F}^u_{\gamma_0} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, C_1\in\mathcal{F}^u_{\gamma_1}$$ with $\min\{\gamma_0,\gamma_1\}\geq\varepsilonsilon/6$, since $$\min\{\operatorname{dim}am(C_0),\operatorname{dim}am(C_1)\}>\frac{\varepsilonsilon}{6}$$ (recall that continua $A\in\mathcal{F}^u_{\frac{\varepsilonsilon}{6}}$ have $\operatorname{dim}am(A)\leq\frac{\varepsilonsilon}{6}$).
Moreover,
\[i-k+m\geq i+2>i-3\;\;\;\mbox{ for every } \;\;\;i\in\mathbb{Z},\] and this ensures that \[I(x_{i+m},2^{i-3}\varepsilonsilon)\operatorname{Supp}bset I(x_{i+m},2^{i-k+m}\varepsilonsilon)\;\;\mbox{ for every }\;\;i\in\mathbb{Z}\] which, in turn, ensures that $C_0\operatorname{Supp}bset\sigma^{m}(C)$ (the proof that $C_1\operatorname{Supp}bset \sigma^{m}(C)$ is analogous).
Also, $m\geq2m_{{\varepsilonsilon}/6}$ (as in Proposition \mathbb{R}f{decreasing} the map $\alpha\to m_{\alpha}$ is non-increasing) $$\operatorname{dim}am(\sigma^{m}(C_0))\geq\varepsilonsilon \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{dim}am(\sigma^{m}(C_1))\geq\varepsilonsilon.$$
The $0$-th coordinate of $\sigma^{m}(C_0)$ and $\sigma^{m}(C_1)$ are equal to \[I(x_{2m},2^{m-3}\varepsilonsilon) = \{[x_{2m}-2^{m-3}\varepsilonsilon, x_{2m}+2^{m-3}\varepsilonsilon]\cap[0,1]\},\] whose diameter is larger than $\varepsilonsilon$, since $2^{m - 4}\varepsilonsilon\geq\varepsilonsilon$.
Thus, if we let \[p_2 = \min\{I(x_{2m},2^{m-3}\varepsilonsilon)\}+\varepsilonsilon/4 \,\,\,\,\,\, \text{and}\] \[q_2=\max\{I(x_{2m},2^{m-3}\varepsilonsilon)\}-\varepsilonsilon/4\] then the sub intervals
$I(p_2,2^{-3}\varepsilonsilon)$ and $I(q_2,2^{-3}\varepsilonsilon)$ of $I(x_{2m},2^{m-3}\varepsilonsilon)$ has diameter at least $\varepsilonsilon/6$ and \[d(I(p_2,2^{-3}\varepsilonsilon), I(q_2,2^{-3}\varepsilonsilon))\geq \dfrac{\varepsilonsilon}{6}.\]
Now, consider the follows continua
\[C_{00} = \partialrod_{i\in\mathbb{Z}}\{[y_i^{00}-2^{i-3}\varepsilonsilon,y_i^{00}+2^{i-3}\varepsilonsilon]\cap[0,1]\},\] where
$y_i^{00} = \left\{\begin{array}{ll}
x_{i+2m}, & \mbox{ if }i\neq -m \mbox{ and } i\neq 0\\
p_1,&\mbox{ if } i=-m\\
p_2, &\mbox{ if }i=0
\end{array}\right.$
\[C_{01} = \partialrod_{i\in\mathbb{Z}}\{[y_i^{01}-2^{i-3}\varepsilonsilon,y_i^{01}+2^{i-3}\varepsilonsilon]\cap[0,1]\},\] where
$y_i^{01} = \left\{\begin{array}{ll}
x_{i+2m}, & \mbox{ if }i\neq -m \mbox{ and } i\neq 0\\
p_1,&\mbox{ if } i=-m\\
q_2, &\mbox{ if }i=0
\end{array}\right.$
\[C_{10} = \partialrod_{i\in\mathbb{Z}}\{[y_i^{10}-2^{i-3}\varepsilonsilon,y_i^{10}+2^{i-3}\varepsilonsilon]\cap[0,1]\},\] where
$y_i^{10} = \left\{\begin{array}{ll}
x_{i+2m}, & \mbox{ if }i\neq -m \mbox{ and } i\neq 0\\
q_1,&\mbox{ if } i=-m\\
p_2, &\mbox{ if }i=0
\end{array}\right.$
\[C_{11} = \partialrod_{i\in\mathbb{Z}}\{[y_i^{11}-2^{i-3}\varepsilonsilon,y_i^{11}+2^{i-3}\varepsilonsilon]\cap[0,1]\},\] where
$y_i^{11} = \left\{\begin{array}{ll}
x_{i+2m}, & \mbox{ if }i\neq -m \mbox{ and } i\neq 0\\
q_1,&\mbox{ if } i=-m\\
q_2, &\mbox{ if }i=0.
\end{array}\right.$\\
We have that, by choice of $p_2$ and $q_2$, \[d_H(C_{00},C_{01})\geq\dfrac{\varepsilonsilon}{6}\;\;\;\mbox{ and }\;\;\;d_H(C_{10},C_{11})\geq\dfrac{\varepsilonsilon}{6}.\] Moreover, the $0$-th coordinate of $C_{ij}$ is the interval $[y_0^{ij}-2^{-3}\varepsilonsilon,y_0^{ij}+2^{-3}\varepsilonsilon]\cap[0,1]$ whose diameter is larger than ${\varepsilonsilon}/{6}$. By Proposition \mathbb{R}f{caracterizationfushift}, $C_{ij}\in\mathcal{F}^u$, more precisely
\[C_{ij}\in\mathcal{F}^u_{\gamma_{ij}}\;\;\;\;\mbox{ with }\gamma_{ij}\geq\dfrac{\varepsilonsilon}{6},\] for each $(i,j)\in \{0,1\}^2$, since $\operatorname{dim}am (C_{ij})\geq \varepsilonsilon/6$ for every $(i,j)\in \{0,1\}^2$.
$\sigma^{m}(C_{ij})$ is $I(x_{3m},2^{m-2}\varepsilonsilon)$, for each $i,j\in\{0,1\}$, and $\operatorname{dim}am I(x_{3m},2^{m-2}\varepsilonsilon)\geq \varepsilonsilon$,
which implies that $\operatorname{dim}am\;\sigma^{m}(C_{ij})\geq \varepsilonsilon$.
Following previous argument, we can obtain a family of continua \[(C_{i_1i_2\cdots i_n})_{i_1i_2\cdots i_n,n}\] satisfying the properties of the Definition \mathbb{R}f{splittoincrease} in the following way: we choose $p_n$ and $q_n$ in the $0$-th coordinate of $\sigma^{m}(C_{i_1i_2\cdots i_{n-1}})$, $I(x_{{nm}}, 2^{m-1}\varepsilonsilon)$ --- whose diameter is larger than $2\varepsilonsilon$ --- such that $p_n = \min\{I(x_{{nm}}, 2^{m-2}\varepsilonsilon)\}+\varepsilonsilon/4$ and $q_n=\max\{I(x_{{nm}}, 2^{m-2}\varepsilonsilon)\}-\varepsilonsilon/4$.
The continuum $C_{i_1i_2\cdots i_{n-1}}$ is the type \[\operatorname{dim}splaystyle\partialrod_{j\in\mathbb{Z}}\{[y_j-2^{j-2}\varepsilonsilon, y_j+2^{j-2}\varepsilonsilon]\cap [0,1]\} =\partialrod_{j\in\mathbb{Z}} I(y_j, 2^{j-2}\varepsilonsilon)\] and, consequently, \[\operatorname{dim}splaystyle \sigma^{m}(C_{i_1i_2\cdots i_{n-1}})=\partialrod_{j\in\mathbb{Z}} I(y_{j+m},2^{j-2+m}\varepsilonsilon)\] and we consider
\[C_{i_1i_2\cdots i_n} = \partialrod_{j<0}I(y_{j+m},2^{j-2}\varepsilonsilon)\times I(z,2^{-2}\varepsilonsilon)\times \partialrod_{j>0}I(y_{j+m},2^{j-2}\varepsilonsilon),\] where
$z=p_n$ if $i_n=0$ or $z=q_n$ if $i_n=1$. By Proposition \mathbb{R}f{caracterizationfushift} each element of the previous family is in $\mathcal{F}^u$, moreover each $C_{i_1i_2\cdots i_n}$ belongs to $F^u_\gamma$ for some $\gamma\geq \delta/3$, since $\operatorname{dim}am(C_{i_1i_2\cdots i_n})\geq \delta/3$, and \[d_H(C_{i_1i_2\cdots i_{n-1}0},C_{i_1i_2\cdots i_{n-1}1})=\left|\left(p_n+\dfrac{\varepsilonsilon}{4}\right)-\left(q_n+\dfrac{\varepsilonsilon}{4}\right)\right|\geq |p_n-q_n|-\dfrac{\varepsilonsilon}{2}>\dfrac{\varepsilonsilon}{3}.\] Therefore, we have the conclusion this proposition.
\end{proof}
\end{comment}
Since the answer to Question \ref{3} could be negative, we tried a distinct approach to prove positive topological entropy, which we explain below. In the case of cw-expansive homeomorphisms, the existence of a hyperbolic cw-metric is proved in \cite{ACCV3}, generalizing the hyperbolic metric for expansive homeomorphisms from \cite{Fa}. We explain the hyperbolic cw-metric below and discuss the existence of a hyperbolic ft-metric for first-time sensitive homeomorphisms under additional assumptions on $\mathcal{F}^u$. After that, we explain how the existence of a hyperbolic ft-metric is enough to prove positive topological entropy. Let
$$E=\{(p,q,C): C\in \mathcal C(X),\, p,q\in C\}.$$
For $p,q\in C$ denote $C_{(p,q)}=(p,q,C)$.
The notation $C_{(p,q)}$ implies that $p,q\in C$ and that $C\in\mathcal C(X)$.
Define
$$f(C_{(p,q)})=f(C)_{(f(p),f(q))}$$
and consider the sets
\[
\mathcal C^s_\varepsilon(X)=\{C\in\mathcal C(X)\;:\;\operatorname{diam}(f^n(C))\leq\varepsilon\, \text{ for every }\, n\geq 0\} \,\,\,\,\,\, \text{and}
\]
\[
\mathcal C^u_\varepsilon(X)=\{C\in\mathcal C(X)\;:\;\operatorname{diam}(f^{-n}(C))\leq\varepsilon\, \text{ for every }\, n\geq 0\}.
\]
These sets contain exactly the $\varepsilon$-stable and $\varepsilon$-unstable continua of $f$, respectively.
\begin{theorem}[Hyperbolic $cw$-metric \cite{ACCV}]
\label{teoCwHyp}
If $f\colon X\to X$ is a cw-expansive homeomorphism of a compact metric space $X$, then there is a function $D\colon E\to\mathbb{R}$ satisfying the following conditions.
\begin{enumerate}
\item Metric properties:
\begin{enumerate}
\item $D(C_{(p,q)})\geq 0$ with equality if, and only if, $C$ is a singleton,
\item $D(C_{(p,q)})=D(C_{(q,p)})$,
\item $D([A\cup B]_{(a,c)})\leq D(A_{(a,b)})+D(B_{(b,c)})$, $a\in A, b\in A\cap B, c\in B$.
\end{enumerate}
\item Hyperbolicity: there exist constants $\lambda\in(0,1)$ and $\varepsilon>0$ satisfying
\begin{enumerate}
\item if $C\in\mathcal C^s_\varepsilon(X)$ then $D(f^n(C_{(p,q)}))\leq 4\lambda^nD(C_{(p,q)})$ for every $n\geq 0$,
\item if $C\in\mathcal C^u_\varepsilon(X)$ then $D(f^{-n}(C_{(p,q)}))\leq 4\lambda^nD(C_{(p,q)})$ for every $n\geq 0$.
\end{enumerate}
\item Compatibility: for each $\delta>0$ there is $\gamma>0$ such that
\begin{enumerate}
\item if $\operatorname{diam}(C)<\gamma$, then $D(C_{(p,q)})<\delta$\,\, for every $p,q\in C$,
\item if there exist $p,q\in C$ such that $D(C_{(p,q)})<\gamma$, then $\operatorname{diam}(C)<\delta$.
\end{enumerate}
\end{enumerate}
\end{theorem}
In the case of first-time sensitive homeomorphisms, we cannot expect to obtain a function on the whole set $E$, since there could be continua in arbitrarily small dynamical balls. We therefore restrict the set $E$, considering only continua in $\mathcal{F}^u$, as follows
$$E^u = \{(p,q,C)\;:\;C\in\mathcal{F}^u,\;p,q\in C\}$$
and obtain a similar result. We will, however, need to add two hypotheses on $\mathcal{F}^u$ so that the function $D$ and its properties can be stated precisely. In the first we ask that $\mathcal{F}^u$ is invariant by $f^{-1}$, that is,
$$\text{if} \,\,\,\,\,\, C\in\mathcal{F}^u, \,\,\,\,\,\, \text{then} \,\,\,\,\,\, f^{-1}(C)\in \mathcal{F}^u.$$ In the second we ask that $\mathcal{F}^u$ is closed by connected unions, that is,
$$\text{if} \,\,\,\,\,\, A,B\in\mathcal{F}^u \,\,\,\,\,\, \text{and} \,\,\,\,\,\, A\cap B\neq\emptyset, \,\,\,\,\,\, \text{then} \,\,\,\,\,\, A\cup B\in\mathcal{F}^u.$$
We tried to prove that these hypotheses are always satisfied for first-time sensitive homeomorphisms, but there were some technical details that we could not circumvent. The following is the ft-metric that we were able to obtain in the case of first-time sensitive homeomorphisms. The proof is an adaptation of the proof of the cw-metric, exchanging cw-expansiveness and the properties of the local unstable continua proved by Kato for ft-sensitivity and the properties of the local cw-unstable continua proved in Section 2.
\begin{theorem}[Hyperbolic ft-metric]\label{ft-metric}
Let $f\colon X\to X$ be a first-time sensitive homeomorphism of a compact and connected metric space $X$ satisfying hypotheses (P1) and (P2), with a sensitivity constant $\varepsilon>0$. If $\mathcal{F}^u$ is invariant by $f^{-1}$ and closed by connected unions, then there is a function $D\colon E^u\to\mathbb{R}$ satisfying the following conditions.
\begin{enumerate}
\item Metric properties:
\begin{enumerate}
\item $D(C_{(p,q)})> 0$ for every $C_{(p,q)}\in E^u$,
\item $D(C_{(p,q)})=D(C_{(q,p)})$,
\item $D([A\cup B]_{(a,c)})\leq D(A_{(a,b)})+D(B_{(b,c)})$, $a\in A, b\in A\cap B, c\in B$.
\end{enumerate}
\item Hyperbolicity: there exists $\lambda\in(0,1)$ satisfying
\begin{enumerate}
\item if $C_{(p,q)}\in E^u$, then $D(f^{-n}(C_{(p,q)}))\leq 4\lambda^nD(C_{(p,q)})$ for every $n\geq 0$.
\end{enumerate}
\item Compatibility: for each $\delta>0$ there is $\gamma>0$ such that
\begin{enumerate}
\item if $\operatorname{diam}(C)<\gamma$, then $D(C_{(p,q)})<\delta$ for every $p,q\in C$,
\item if there exist $p,q\in C$ such that $D(C_{(p,q)})<\gamma$, then $\operatorname{diam}(C)<\delta$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{ft-metric}]
For each $\gamma\in(0,\varepsilon)$, consider $m_{\gamma}>0$, given by Proposition \ref{decreasing}. Let $m=m_{\frac{\varepsilon}{2}}$, $\lambda=2^{-1/m}$, and define the function
\[\begin{array}{lccl}
\rho: & \mathcal{F}^u & \rightarrow &\mathbb{R}\\
&C&\mapsto &\lambda^{n_1(C,\varepsilon)}.\\
\end{array}\]
Consider the map $D:E^u\rightarrow \mathbb{R}$ given by
\[D(C_{(p,q)}) = \inf\sum_{i=1}^n\rho(A^i_{(a_{i-1},a_i)})\]
where the infimum is taken over all $n\geq 1$, $a_0=p, a_1, \dots, a_n=q$, and $A^1, \dots, A^n\in\mathcal{F}^u$ such that $C=\bigcup_{i=1}^nA^i$.
The proof of this theorem is based on the following inequalities
\begin{equation}\label{Drho}
D(C_{(p,q)})\leq \rho(C)\leq 4D(C_{(p,q)})
\end{equation} for every $C_{(p,q)}\in E^u$.
\begin{enumerate}
\item Metric properties: items (b) and (c) are direct consequences of the definition of the function $D$, while item (a) is a consequence of the fact that if $C\in\mathcal{F}^u$, then $n_1(C,\varepsilon)$ is a finite positive number, so $\rho(C)>0$, and then inequalities (\ref{Drho}) ensure that
$$D(C_{(p,q)})\geq\frac{1}{4}\rho(C)>0.$$
\item Hyperbolicity: If $C\in\mathcal{F}^u$, then $$\operatorname{diam}(f^{-n}(C))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq 0,$$ $$n_1(C, \varepsilon)<+\infty \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{diam}(f^{n_1(C,\varepsilon)}(C))>\varepsilon.$$ This implies that $$n_1(f^{-n}(C),\varepsilon)=n+n_1(C,\varepsilon) \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq 0,$$ and, hence, the following holds for every $n\in\mathbb{N}$:
\begin{eqnarray*}
\rho(f^{-n}(C))&=&\lambda^{n_1(f^{-n}(C),\varepsilon)}=\lambda^{n+n_1(C,\varepsilon)}\\
&=&\lambda^n\lambda^{n_1(C,\varepsilon)}=\lambda^n\rho(C).
\end{eqnarray*}
Inequalities (\ref{Drho}) ensure the following holds for every $n\in\mathbb{N}$:
$$D(f^{-n}(C_{(p,q)}))\leq\rho(f^{-n}(C))=\lambda^n\rho(C)\leq4\lambda^n D(C_{(p,q)}).$$
Recall the hypothesis that $\mathcal{F}^u$ is invariant by $f^{-1}$, so $$f^{-n}(C)\in\mathcal{F}^u\,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.$$
\item Compatibility: Inequalities (\ref{Drho}) ensure that the compatibility between $\rho$ and $\operatorname{diam}$ is enough to obtain compatibility between $D$ and $\operatorname{diam}$. The compatibility between $\rho$ and $\operatorname{diam}$ is proved as follows:
\noindent {(a)} Given $\delta>0$, choose $n\in\mathbb{N}$ such that $\lambda^n<\delta$. Let $\gamma>0$, given by continuity of $f$, be such that if $C\in \mathcal{C}(X)$ satisfies $\operatorname{diam}(C)<\gamma$, then $$\operatorname{diam}(f^i(C))<\varepsilon \,\,\,\,\,\, \text{whenever} \,\,\,\,\,\, |i|\leq n.$$ This implies that $n_1(C,\varepsilon)>n$ and, hence, that $$\rho(C)=\lambda^{n_1(C,\varepsilon)}<\lambda^n<\delta.$$
\noindent {(b)}
Given $\delta>0$, let $\gamma=\lambda^{2m_{\delta}}$. If $\rho(C)<\gamma$, then $$\lambda^{n_1(C,\varepsilon)}<\lambda^{2m_{\delta}} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, n_1(C,\varepsilon)>2m_{\delta}.$$
Proposition \ref{crescimentouniforme} assures
that $C\notin\mathcal{F}^u_\delta$. Proposition \ref{decreasing} ensures that
$$C\notin\mathcal{F}^u_\alpha \,\,\,\,\,\, \text{for any} \,\,\,\,\,\, \alpha\in(\delta,\varepsilon).$$ Indeed, if $\alpha>\delta$, then $m_{\alpha}\leq m_{\delta}$ and, hence, $$n_1(C,\varepsilon)>2m_{\delta}\geq2m_{\alpha},$$
and again Proposition \ref{crescimentouniforme} ensures that $C\notin\mathcal{F}^u_\alpha$. Since $C\in\mathcal{F}^u$, it follows that $C\in\mathcal{F}^u_\alpha$ for some $\alpha\in(0,\delta)$, and, hence,
$$\operatorname{diam}(C)\leq\alpha<\delta.$$
\end{enumerate}
\end{proof}
The first inequality in (\ref{Drho}) is assured by the definition of $D$, while the following result ensures the other inequality. Its proof is an adaptation of Lemma 2.4 of \cite{ACCV3}.
\begin{lemma}
The function $\rho$ satisfies:
\begin{equation}\label{desiglemma}\rho\left(\bigcup_{i=1}^nC_i\right)\leq2\rho(C_1)+4\rho(C_2)+\cdots +4\rho(C_{n-1})+2\rho(C_n)\end{equation} for all $C_1,\ldots, C_n\in\mathcal{F}^u$ such that $C_i\cap C_{i+1}\neq\emptyset$ for every $i\in\{1,\dots,n-1\}$.
\end{lemma}
\begin{proof}
First, we will prove this result for $n=2$. Consider $C=A\cup B$ with $A,B\in\mathcal{F}^u$ and $A\cap B\neq\emptyset$.
We claim that either
\begin{equation}\label{tempocrescimentoAouB}n_1(A,\varepsilon)\leq m+n_1(C,\varepsilon)\;\;\;\;\mbox{ or }\;\;\;\;n_1(B,\varepsilon)\leq m+n_1(C,\varepsilon).\end{equation}
Indeed, we know that $\operatorname{diam}\;f^{n_1(C,\varepsilon)}(C)>\varepsilon$, so either
\[\operatorname{diam}\;f^{n_1(C,\varepsilon)}(A)>\dfrac{\varepsilon}{2}\;\;\;\;\mbox{ or }\;\;\;\;\operatorname{diam}\;f^{n_1(C,\varepsilon)}(B)>\dfrac{\varepsilon}{2}.\]
Assume we are in the first case (the second is analogous). Since $A\in\mathcal{F}^u$, property (F2) ensures that
$$|n_1(A,\varepsilon/2) - n_1(A,\varepsilon)|\leq m,$$
and since $$n_1(A,\varepsilon/2)\leq n_1(C,\varepsilon)\leq n_1(A,\varepsilon)$$
it follows that
$$n_1(A,\varepsilon) - n_1(C,\varepsilon)\leq m,$$
so the first inequality in
(\ref{tempocrescimentoAouB}) holds and the claim is proved. If $n_1(A,\varepsilon)\leq m+n_1(C,\varepsilon)$, then
\[\rho(A)=\lambda^{n_1(A,\varepsilon)}\geq \lambda^{m+n_1(C,\varepsilon)}=\lambda^m\lambda^{n_1(C,\varepsilon)} = \dfrac{1}{2}\rho(C),\] since $0<\lambda<1$ and $\lambda^m=1/2$, and this implies $2\rho(A)\geq\rho(C)$. Similarly, if $n_1(B,\varepsilon)\leq m+n_1(C,\varepsilon)$ we obtain $2\rho(B)\geq \rho(C)$ and conclude that
\begin{equation}\label{rhoCigual2max}
\rho(C)\leq 2\max\{\rho(A),\rho(B)\}.
\end{equation} This completes the proof in the case $n=2$.
Arguing by induction, suppose that given $n\geq 3$, the conclusion of the lemma holds for every $k<n$ and let $C=\bigcup_{i=1}^nC_i$ with $$C_i\cap C_{i+1}\neq\emptyset \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\{1,\ldots, n-1\} \,\,\,\,\,\, \text{and}$$ $$C_i\in\mathcal{F}^u \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\{1,\ldots, n\}.$$ In what follows the hypothesis that $\mathcal{F}^u$ is closed by connected unions is used in a few steps, though we will not mention it explicitly. Consider the following inequalities
\begin{equation}\label{desigualdades}
\rho(C_1)\leq \rho(C_1\cup C_2)\leq\cdots\leq \rho(C_1\cup\cdots\cup C_{n-1})\leq \rho(C).
\end{equation}
If $\rho(C)\leq 2\rho(C_1)$, then (\ref{desiglemma}) is proved and if $2\rho(C_1\cup\cdots\cup C_{n-1})<\rho(C)$, then (\ref{rhoCigual2max}) implies that $\rho(C)\leq 2\rho(C_n)$, which also implies (\ref{desiglemma}). Thus, we assume that
\[2\rho(C_1)<\rho(C)\leq2\rho(C_1\cup\cdots\cup C_{n-1}).\] This and (\ref{desigualdades}) imply that there is $1<r<n$ such that
\[2\rho(C_1\cup\cdots\cup C_{r-1})<\rho(C)\leq 2\rho(C_1\cup\cdots\cup C_r).\] The first inequality and (\ref{rhoCigual2max}) imply that
\[\rho(C)\leq 2\rho(C_r\cup\cdots \cup C_n).\] Thus,
\[\rho(C)=\dfrac{\rho(C)}{2}+\dfrac{\rho(C)}{2}\leq \rho(C_1\cup\cdots\cup C_r)+\rho(C_r\cup\cdots\cup C_n).\] Since (\ref{desiglemma}) holds for these two terms, by the induction assumption, the proof ends.
\end{proof}
\begin{theorem}\label{posentft}
Let $f\colon X\to X$ be a first-time sensitive homeomorphism of a compact and connected metric space $X$ satisfying hypotheses (P1) and (P2). If $\mathcal{F}^u$ is invariant by $f^{-1}$ and closed by connected unions, then $f$ has positive topological entropy.
\end{theorem}
\begin{proof}
We will prove that there exist $M\in\mathbb{N}$, $\delta>0$, and $C\in\mathcal{F}^u$ such that $\operatorname{diam}(f^M(C))\geq\delta$, and for each $n\in\mathbb{N}$ and $(i_1,\dots,i_n)\in\{0,1\}^n$, there exists $C_{i_1i_2\cdots i_n}\in\mathcal{F}^u$ satisfying:
\begin{enumerate}
\item $\operatorname{diam} (f^M(C_{i_1i_2\cdots i_n}))\geq\delta$;
\item \begin{enumerate}
\item $C_0\cap f^M(C)\neq \emptyset$ and $C_1\cap f^M(C)\neq \emptyset$;
\item $C_{i_1i_2\cdots i_{n-1}i_n}\cap f^M(C_{i_1i_2\cdots i_{n-1}})\neq \emptyset$;
\end{enumerate}
\item $d_H(C_{i_1i_2\cdots i_{n-1}0},C_{i_1i_2\cdots i_{n-1}1})\geq{\delta}/{3}$;
\item for each $k\in\mathbb{N}$, $n\geq k$ and $(i_1,i_2,\ldots, i_n)\in \{0,1\}^n$, \[\operatorname{diam}\left(\bigcup_{j=0}^{n-k} f^{-jM}(C_{i_1\cdots i_{k+j}})\right)<\frac{\delta}{3}.\]
\end{enumerate} We first prove the existence of this family of continua $(C_{i_1i_2\cdots i_n})_{i_1i_2\cdots i_n,n}$ and afterwards we prove that this is enough to obtain positive topological entropy. In Theorem \ref{ft-metric} we proved the existence of an ft-metric $D:E^u\rightarrow \mathbb{R}$ with a hyperbolic constant $\lambda\in(0,1)$. Consider $\delta>0$, given by Corollary \ref{continuonaodecresce}, satisfying: if $C\in\mathcal{F}^u_{\gamma}$, then $$\operatorname{diam}(f^{n}(C))\geq\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq 2m_\gamma.$$
The compatibility between $\operatorname{diam}$ and $D$ ensures the existence of $\alpha\in(0,\delta)$ such that
$$D(C)<\alpha \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, \operatorname{diam}(C)<\frac{\delta}{6}.$$
Consider $M\in\mathbb{N}$ such that $M\geq 2m_{\delta/6}$ and
\[\frac{4\lambda^{M}}{1-\lambda^{M}}<\alpha,\] and choose any $C\in\mathcal{F}^u_{\delta/6}$.
Corollary \ref{continuonaodecresce} ensures that
\[\operatorname{diam}\;f^k(C)\geq\delta\;\;\;\;\mbox{for every} \;\;\;\; k\geq 2m_{\delta/6}.\] Since $M\geq2m_{\delta/6}$, we have $\operatorname{diam}\;f^M(C)\geq\delta$. Thus, we can choose $x_0$ and $x_1$ in $f^M(C)$ with $d(x_0,x_1)\geq\delta$.
Theorem \ref{teoremacontinuosinst} ensures the existence of $C_0, C_1 \in \mathcal{F}^u_{\delta/6}$ with $x_0\in C_0$ and $x_1\in C_1$. Thus,
$$\operatorname{diam}(C_i)\leq\frac{\delta}{6} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, x_i\in C_i\cap f^M(C)$$
for each $i\in\{0,1\}$, so $d_H(C_0,C_1)\geq \delta/3$
(proving items (2) (a), (3) and (4) for $C, C_0$ and $C_1$). Also, Corollary \ref{continuonaodecresce} ensures that
$$\operatorname{diam}(f^M(C_0))\geq\delta \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \operatorname{diam}(f^M(C_1))\geq\delta,$$ since $M\geq2m_{\delta/6}$, implying item (1) for $C_0$ and $C_1$.
Now, for each $i\in\{0,1\}$, consider $x_{i0},x_{i1}\in f^M(C_{i})$ such that $d(x_{i0},x_{i1})\geq\delta$ and
$C_{i0},C_{i1}\in\mathcal{F}_{\delta/6}^u$ with
$$x_{i0}\in C_{i0} \,\,\,\,\,\, \text{and} \,\,\,\,\,\, x_{i1}\in C_{i1}.$$
Thus,
\[d_H(C_{i0},C_{i1})\geq \delta/3\;\;\;\mbox{ for each }\;\;\; i\in\{0,1\}\] and \[\operatorname{diam}\;f^M(C_{ij})\geq \delta\;\;\;\mbox{ for each }\;\;\;(i,j)\in\{0,1\}^2.\] Moreover, the hyperbolicity of $D$ ensures that
\[D(f^{-M}(C_{ij}))\leq 4\lambda^MD(C_{ij})<\alpha \;\;\; \mbox{ for every } \;\;\;(i,j)\in\{0,1\}^2,\] which implies that
\[\operatorname{diam}(f^{-M}(C_{ij}))\leq \frac{\delta}{6}\;\;\; \mbox{ for every } \;\;\;(i,j)\in\{0,1\}^2.\] Thus, for each $(i,j)\in\{0,1\}^2$,
\[\operatorname{diam}(C_i\cup f^{-M}(C_{ij}))\leq \operatorname{diam}(C_i)+\operatorname{diam}(f^{-M}(C_{ij}))<\frac{\delta}{3}.\] Figure \ref{figura:bolae} illustrates these choices and estimates.
\begin{figure}
\caption{Local cw-unstable continua $\delta/3$ distant with past iterates exponentially small.}
\label{figura:bolae}
\end{figure}
Following the same steps inductively, for each $n\geq2$ and $(i_1i_2\cdots i_{n-1})\in\{0,1\}^{n-1}$ we create continua $C_{i_1i_2\cdots i_{n-1}0}$ and $C_{i_1i_2\cdots i_{n-1}1}$ in $\mathcal{F}_{\delta/6}^u$, with $$C_{i_1i_2\cdots i_{n-1}0}\cap f^M(C_{i_1i_2\cdots i_{n-1}})\neq \emptyset \,\,\,\,\,\, \text{and} \,\,\,\,\,\, C_{i_1i_2\cdots i_{n-1}1}\cap f^M(C_{i_1i_2\cdots i_{n-1}})\neq \emptyset$$ and \[d_H(C_{i_1i_2\cdots i_{n-1} 0},C_{i_1i_2\cdots i_{n-1} 1})\geq \delta/3.\] Since $C_{i_1i_2\cdots i_{n-1} i_n}\in\mathcal{F}^u$, by Corollary \ref{continuonaodecresce},
\[\operatorname{diam}\; f^M(C_{i_1i_2\cdots i_{n-1} i_n})\geq \delta.\]
The properties of $D$ (triangular inequality and hyperbolicity on $\mathcal{F}^u$) ensure that for each $k\in\mathbb{N}$, $n\geq k$ and $(i_1,i_2,\cdots,i_n)\in \{0,1\}^n$ we have
\[\begin{array}{rcl}
\displaystyle D\left(\bigcup_{j=1}^{n-k}f^{-jM}(C_{i_1\cdots i_{k+j}})\right) & \leq & \displaystyle \sum_{j=1}^{n-k} D(f^{-jM}(C_{i_1\cdots i_{k+j}}))\\
& \leq & \displaystyle\sum_{j=1}^{n-k} 4\lambda^{jM}\\
& \leq & 4\left(\dfrac{\lambda^{M}}{1-\lambda^M}\right)\\
&<&\alpha.
\end{array}\]
In the second inequality we also use that $D(C_{i_1\cdots i_{k+j}})\leq\rho(C_{i_1\cdots i_{k+j}})\leq 1$, by (\ref{Drho}) and the definition of $\rho$. Here we just write $f^{-jM}(C_{i_1\cdots i_{k+j}})$, omitting the marked points, which are
\[f^{-(j-1)M}(x_{i_1\cdots i_{k+j-1}})\;\;\;\mbox{ and }\;\;\; f^{-(j+1)M}(x_{i_1\cdots i_{k+j+1}})\]
where $x_{i_1\cdots i_{l}}$ is a point of the intersection $C_{i_1\cdots i_l}\cap f^M(C_{i_1\cdots i_{l-1}})$ for each $l\in\mathbb{N}$.
Since, by hypothesis, $$\bigcup_{j=1}^{n-k}f^{-jM}(C_{i_1\cdots i_{k+j}})\in\mathcal{F}^u,$$ the compatibility between $D$ and $\operatorname{diam}$ ensures that $$\operatorname{diam}\left(\bigcup_{j=1}^{n-k}f^{-jM}(C_{i_1\cdots i_{k+j}})\right)<\dfrac{\delta}{6}$$ and, therefore,
\[\begin{array}{rcl}
\displaystyle\operatorname{diam}\left(\bigcup_{j=0}^{n-k}f^{-jM}(C_{i_1\cdots i_{k+j}})\right)&\leq& \displaystyle\operatorname{diam}(C_{i_1\cdots i_{k}})+\operatorname{diam}\left(\bigcup_{j=1}^{n-k}f^{-jM}(C_{i_1\cdots i_{k+j}})\right)\\
&<&\displaystyle \dfrac{\delta}{6}+\dfrac{\delta}{6}\; = \dfrac{\delta}{3}.\;\\
\end{array} \]
This proves the existence of the family $(C_{i_1\dots i_n})_{i_1,\dots,i_n,n}$ satisfying (1) to (4). To prove that this implies positive topological entropy, we use (2) and choose points
\[ x_i\in C_i\cap f^M(C) \;\;\;\mbox{ for each } \;\;\;i\in\{0,1\},\] and for each $n\geq 2$ and $(i_1,i_2,\ldots, i_n)\in\{0,1\}^n$, choose
\[x_{i_1i_2\cdots i_n}\in C_{i_1i_2\cdots i_{n}}\cap f^M(C_{i_1i_2\cdots i_{n-1}}).\]
We will prove that, for each $n\in\mathbb{N}$, the set
\[A_n=\{y_{i_1i_2\cdots i_{n}}=f^{-nM}(x_{i_1i_2\cdots i_{n}});\; (i_1,i_2,\ldots,i_n)\in\{0,1\}^n\}\]
is $(nM,\delta/3)$-separated. Indeed, if $y
_{i_1\cdots i_n},y_{j_1\cdots j_n}\in A_n$ are distinct, then there exists $k\in\{1,2,\ldots, n\}$ such that $j_k\neq i_k$ and
\[j_l=i_l\;\;\;\mbox{ for every }\;\;\;l\in\{1,\ldots, k-1\}.\] Assume, without loss of generality, that $i_k=0$ and $j_k=1$. Since \[y_{i_1i_2\cdots i_n}=f^{-nM}(x_{i_1i_2\cdots i_n})\in f^{-nM}(C_{i_1i_2\cdots i_n})\] and \[y_{j_1j_2\cdots j_n}=f^{-nM}(x_{j_1j_2\cdots j_n})\in f^{-nM}(C_{j_1j_2\cdots j_n}),\] we have that
\begin{equation*}\label{eqentropia1}
f^{kM}(y_{i_1\cdots i_n})\in f^{(-n+k)M}(C_{i_1i_2\cdots i_n}) \subset \bigcup_{j=0}^{n-k} f^{-jM}(C_{i_1i_2\cdots i_{k+j}})
\end{equation*}
and
\begin{equation*}\label{eqentropia3}f^{kM}(y_{j_1\cdots j_n})\in f^{(-n+k)M}(C_{j_1j_2\cdots j_n}) \subset \bigcup_{j=0}^{n-k} f^{-jM}(C_{j_1j_2\cdots j_{k+j}}).
\end{equation*}
Item (4) ensures that
\[f^{kM}(y_{i_1\cdots i_n})\in B(x_{i_1\cdots i_{k-1}0},\delta/3) \;\; \mbox{ and }\;\;f^{kM}(y_{j_1\cdots j_n})\in B(x_{i_1\cdots i_{k-1}1},\delta/3),\]
and this implies that \[d(f^{kM}(y_{i_1\cdots i_n}), f^{kM}(y_{j_1\cdots j_n}))\geq \delta/3,\] since $d(x_{i_1\cdots i_{k-1}0},x_{i_1\cdots i_{k-1}1})\geq \delta$ by construction (recall that $x_{i_1\cdots i_{k-1}0}$ and $x_{i_1\cdots i_{k-1}1}$ were chosen in $f^M(C_{i_1\cdots i_{k-1}})$ at distance at least $\delta$ from each other). Since for each $n\in\mathbb{N}$, $A_n$ has $2^n$ elements and is $(nM,\delta/3)$-separated, it follows that
\[s(nM,\delta/3)\geq 2^n\;\;\;\mbox{ for every }\;\;\;n\in\mathbb{N}.\]
Thus,
\[\begin{array}{rcl}
h(f,\delta/3) & = &\limsup_{n\rightarrow\infty}\frac{1}{n}\cdot\log s(n,\delta/3)\\
\\
&\geq&\limsup_{n\rightarrow\infty}\left(\frac{1}{nM}\cdot\log s(nM,\delta/3)\right)\\
\\
&\geq &\limsup_{n\rightarrow\infty} \frac{1}{nM}\cdot\log 2^n\\
\\
&\geq &\limsup_{n\rightarrow\infty} \frac{n}{nM}\cdot\log 2\\
\\
&=&\frac{1}{M}\cdot \log 2\;>\;0
\end{array}\]
and, hence, $h(f)>0$.
\end{proof}
\begin{question}
Are the hypotheses on $\mathcal{F}^u$ of being invariant by $f^{-1}$ and closed by connected unions satisfied by all first-time sensitive homeomorphisms?
\end{question}
\noindent
{\em B. Carvalho}
\noindent
Dipartimento di Matematica,
Università degli Studi di Roma Tor Vergata
Via Cracovia n.50 - 00133
Roma - RM, Italy
\email{mldbnr01@uniroma2.it}
{\em M. B. Antunes}
\noindent
Escola de Engenharia Indústrial e Metalúrgica de Volta Redonda,
Universidade Federal Fluminense - UFF
Avenida dos Trabalhadores, 420, Vila Santa Cecília
Volta Redonda - RJ, Brasil.
\email{mayaraantunes@id.uff.br}
\end{document} |
\begin{document}
\title{A Machine Checked Model of Idempotent MGU Axioms For Lists of Equational Constraints}
\author{Sunil Kothari, James Caldwell\\
Department of Computer Science,\\
University of Wyoming, USA
}
\date{}
\maketitle
\begin{abstract}
We present formalized proofs verifying that the first-order
unification algorithm defined over lists of satisfiable constraints generates
a most general unifier (MGU), which also happens to be idempotent. All of our
proofs have been formalized in the Coq theorem prover. Our proofs show that
finite maps produced by the unification algorithm provide a model of the
axioms characterizing idempotent MGUs of lists of constraints. The axioms that
serve as the basis for our verification are derived from a standard set by
extending them to lists of constraints. For us, constraints are equalities
between terms in the language of simple types. Substitutions are formally
modeled as finite maps using the Coq library {\it
Coq.FSets.FMapInterface}. Coq's method of functional induction is the main
proof technique used in proving many of the axioms.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
We present formalized proofs verifying that the first-order
unification algorithm defined over lists of satisfiable constraints generates a
most general unifier (MGU), which also happens to be idempotent. All of our
proofs have been formalized in the Coq theorem prover \cite{book:coq}. Our
proofs show that substitutions produced by the unification algorithm provide a
model of the axioms characterizing the idempotent MGUs of lists of constraints.
The formalization and verification presented here were motivated by our work on
verifying Wand's constraint-based type inference algorithm \cite{paper:wand}
(and on verifying our extension of Wand's algorithm to include polymorphic let
\cite{paper:kotcal1}).
In the recent literature on machine-certified proofs of
correctness of type inference algorithms
\cite{paper:wcoq,paper:wisabelle,bookchapter:urbannipkow}, most general
unifiers are characterized by four axioms.
Recall that $\tau$ and $\tau'$ (in some language) are {\em unifiable} if there
exists a substitution $\rho$ mapping variables to terms in the language such
that $\rho(\tau) = \rho(\tau')$. In such a case, $\rho$ is called a {\em
unifier}. A unifier $\rho$ is a {\em most general unifier} if for any other
unifier $\rho''$ there is a substitution $\rho'$ such that $\rho \circ \rho' =
\rho''$.
We consider the MGU axioms given by
Nipkow and Urban \cite{bookchapter:urbannipkow}. Let $\rho, \rho', \rho''$
denote substitutions, {\em i.e.}, functions mapping type variables to terms;
constraints are of the form $\tau \cequal \tau'$, where $\tau$ and $\tau'$ are
simple types; and the symbol ${\mathsf{FTV}}$ is overloaded to denote the free
type variables of substitutions, constraints and types. Composition of
substitutions\footnote{The reader should note that in
this paper, composition of functions is characterized by the equation $(\rho \circ\rho')(x)
= \rho'(\rho(x))$.} is denoted $\rho\circ\rho'$. With these notational
conventions in mind, the MGU axioms are presented as follows:
\begin{small}
\[\begin{array}{cl}
(i) & {\mathit{mgu}\;} \rho \: (\tau_1 \cequal \tau_2) \Rightarrow \rho(\tau_1)=\rho(\tau_2)\\
(ii) & {\mathit{mgu}\;} \rho \: (\tau_1 \cequal \tau_2) \; \wedge \; \rho'(\tau_1)=\rho'(\tau_2) \Rightarrow \exists \rho''. \rho' = \rho \circ \rho''\\
(iii) & {\mathit{mgu}\;} \rho \: (\tau_1 \cequal \tau_2) \Rightarrow \mathsf{FTV}(\rho) \subseteq \mathsf{FTV}(\tau_1 \cequal \tau_2)\\
(iv) & \rho(\tau_1) = \rho(\tau_2) \Rightarrow \exists \rho'. \;{\mathit{mgu}\;} \rho'\: (\tau_1 \cequal \tau_2)
\end{array}\]
\end{small}
These axioms, modeling MGUs, have proved useful in verifying
substitution-based type inference algorithms where the constraints are solved
as they are generated, one at a time. In constraint-based type inference
algorithms like Wand's, the constraints are generated before they are
solved. Thus, for use in the constraint-based setting, we lift the MGU axioms
to lists of constraints. To do so, we restate the standard axioms to apply to
constraint lists, and add two new axioms which characterize MGUs of lists of
constraints: one axiom for the empty list and another for lists constructed by
appends. Also, reasoning about Wand's type inference algorithm requires that the
MGUs be idempotent, so we add another axiom for idempotency. Idempotent MGUs
have the nice property that their domain and range elements are disjoint.
We proceed by characterizing idempotent MGUs for lists of equational
constraints by presenting seven axioms. Then we show that the first order
unification algorithm models those axioms. The theorems and supporting lemmas
mentioned in this paper have been formalized and verified in Coq
\cite{manual:coq} - a theorem prover based on calculus of inductive
constructions \cite{paper:coc}. In the formalization, we represent
substitutions using Coq's finite map library \cite{manual:coq_fmapinterface}.
To start, we generalize the standard MGU axioms to constraint lists. In
addition to the notations introduced above, if $C$ is a list of constraints,
$\rho\models{}C$ (read $\rho$ {\em{satisfies}} $C$) means that $\rho$ unifies
all constraints in $C$. Let $C$ denote a constraint list; then the MGU axioms
(for a list of constraints) are:\\
\begin{tabular}{cl}
$\mbox{\hspace{1cm}}(i)$ & ${\mathit{mgu}\;} \rho \; C \; \Rightarrow \rho \models C$\\
$\mbox{\hspace{1cm}}(ii)$ & ${\mathit{mgu}\;} \rho \;C \; \wedge\; \rho' \models C \Rightarrow \exists \rho''. \;\rho' = \rho \circ \rho''$\\
$\mbox{\hspace{1cm}}(iii)$ & ${\mathit{mgu}\;} \rho \;C \; \Rightarrow \mathsf{FTV}\;(\rho) \subseteq \mathsf{FTV}\;(C)$\\
$\mbox{\hspace{1cm}}(iv)$ & $\rho \models \;C \;\Rightarrow \exists \rho'. \;{\mathit{mgu}\;} \rho'\: C$
\end{tabular}\\\\
\noindent{}To the axioms just mentioned we add three more axioms that characterize
idempotent MGUs for a list of equational constraints. List append is denoted
by $\mathit{++}$.
\begin{tabular}{cl}
$\mbox{\hspace{1cm}}(v)$ & ${\mathit{mgu}\;} \rho \; C \; \Rightarrow \;\rho \circ \rho = \rho$\\
$\mbox{\hspace{1cm}}(vi)$ & ${\mathit{mgu}\;} \rho \;[\;] \Rightarrow \rho = Id$\\
$\mbox{\hspace{1cm}}(vii)$ & ${\mathit{mgu}\;} \rho' \;C'\; \wedge \;{\mathit{mgu}\;} \rho'' \: (\rho'(C'')) \wedge {\mathit{mgu}\;} \rho \;(C' \; \mathit{++}\; C'')\Rightarrow \rho = \rho' \circ \rho''$\\\\
\end{tabular}
\noindent
These additional axioms are mentioned elsewhere in the unification literature,
namely \cite{paper:elmar, paper:LMM}. The statement of axiom {\it{vii}} is
convenient in proofs where constraint lists are constructed by combining lists
of constraints rather than adding them one at a time. A lemma characterizing
lists constructed by conses is easily proved from this axiom.
Formalizing substitutions as finite maps in Coq, we show that first-order
unification (${\mathsf{unify}}$) is a model of the MGU axioms. To distinguish
the formal representation of substitutions as finite maps from mathematical
functions, we denote finite maps by $\sigma$, $\sigma'$, $\sigma_1$,
etc. Mathematical functions enjoy extensional equality while finite maps do not
(more about this later). We write $\sigma \approx \sigma'$ to denote extensional
equality for finite maps; {\em{i.e.}}, that under application they agree
pointwise on all inputs. With these considerations in mind, we have proved the
following in Coq:\\
\begin{tabular}{cl}
$\mbox{\hspace{1cm}}(i)$ & $\mathsf{unify}(C) = \sigma \; \Rightarrow \sigma \models C$\\
$\mbox{\hspace{1cm}}(ii)$ & $(\mathsf{unify}(C) = \sigma \; \wedge \sigma' \models C) \Rightarrow \exists \sigma''. \;\sigma' \approx \sigma \circ \sigma''$\\
$\mbox{\hspace{1cm}}(iii)$ & $\mathsf{unify}(C) = \sigma \; \Rightarrow \mathsf{FTV}(\sigma) \subseteq \mathsf{FTV}(C)$\\
$\mbox{\hspace{1cm}}(iv)$ & $\sigma \models C \; \Rightarrow \exists \sigma'. \;\mathsf{unify}(C) = \sigma'$\\
$\mbox{\hspace{1cm}}(v)$ & $\mathsf{unify}(C) = \sigma \; \Rightarrow \sigma \circ \sigma \approx \sigma$\\
$\mbox{\hspace{1cm}}(vi)$ & $\mathsf{unify}([\;]) = \sigma \; \Rightarrow \sigma = \sigma_\mathbb{E}$\\
$\mbox{\hspace{1cm}}(vii)$ & $(\mathsf{unify}(C') = \sigma' \; \wedge \mathsf{unify}(\sigma'(C'')) = \sigma'' \wedge \mathsf{unify}(C' \; {++}\; C'') = \sigma) \Rightarrow \;\sigma \approx \sigma' \circ \sigma''$\\\\
\end{tabular}
The rest of this paper is organized as follows: Section \ref{sec:typesandsubst}
introduces a number of formal definitions and terminologies needed for this
paper. It also includes more discussion about substitutions represented as
finite functions. Section \ref{sec:unification} describes the formalization of
a first-order unification algorithm and the termination argument. Section
\ref{sec:mgus} describes the functional induction tactic and the theorems and
lemmas proved in the verification that {\sf{unify}} models the idempotent MGU
axioms. Finally, Section \ref{sec:conclusions} mentions related work and also
summarizes our current work.
\section{Types and Substitutions}
\label{sec:typesandsubst}
Unification is implemented here over a language of simple types
given by the following grammar:\\
\begin{small}
\begin {tabular}{lcl}
\mbox{\hspace{2cm}} $\tau$ ::= & $\alpha$ $\mid$ & $ \tau_1 \rightarrow \tau_2 $ \\
\end{tabular}\\
{\mbox{\hspace{8em}}} where $\alpha$ is a type variable, and $\tau_1,\tau_2\in\tau$ are type terms. \\
\end{small}
\noindent
Thus, a type is either a type variable or a function type.
We define the list of \emph{free\footnote{Strictly speaking, since we have no
binding operators in the language of simple types the modifier ``free'' is
unnecessary, we include it here anticipating a more complex language of
types in future developments.} variables of a type} ($\mathsf{FTV}$) as:\\
\begin{small}
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{FTV}(\alpha)$ & $\definedAs$ & $[\alpha]$\\
$\mbox{\hspace{2cm}}\mathsf{FTV}(\tau \rightarrow \tau')$ & $\definedAs$ & $\mathsf{FTV}(\tau)\,\, \mathrm{++}\,\, \mathsf{FTV}(\tau')$
\end{tabular}
\end{small}
We also have equational constraints of the form $\tau \cequal \tau'$, where $\tau, \tau'$ are types.
The list of \emph{free variables of a constraint list}, also denoted by $\mathsf{FTV}$, is given as: \\
\begin{small}
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{FTV}({[ \;]})$ & $\definedAs$ & ${[ \;]}$\\
$\mbox{\hspace{2cm}}\mathsf{FTV}((\tau_1 \cequal \tau_2)::C)$ &$\definedAs$ & $\mathsf{FTV}(\tau_1)\,\, \mathit{++}\,\,\mathsf{FTV}(\tau_2) \,\,\mathit{++}\,\, \mathsf{FTV}(C)$
\end{tabular}
\end{small}
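\noindent
To make the data definitions concrete, the following is a minimal Coq sketch (our illustration, not the development verified in this paper) of simple types over natural-number type variables, together with $\mathsf{FTV}$ for types and for constraint lists; the identifiers {\tt ty}, {\tt FTV} and {\tt constr} are our own choices.
\begin{small}
\begin{verbatim}
(* Illustrative sketch only; identifiers are hypothetical. *)
Require Import List.
Import ListNotations.

Inductive ty : Set :=
  | TVar   : nat -> ty            (* type variable *)
  | TArrow : ty -> ty -> ty.      (* function type tau1 -> tau2 *)

Fixpoint FTV (t : ty) : list nat :=
  match t with
  | TVar a       => [a]
  | TArrow t1 t2 => FTV t1 ++ FTV t2
  end.

(* An equational constraint tau1 = tau2 as a pair of types. *)
Definition constr := (ty * ty)%type.

Fixpoint FTV_constraints (C : list constr) : list nat :=
  match C with
  | []             => []
  | (t1, t2) :: C' => FTV t1 ++ FTV t2 ++ FTV_constraints C'
  end.
\end{verbatim}
\end{small}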
Substitutions are formally represented as finite maps where the domain of the
map is the collection of type variables and the codomain is the simple
types. Application of a finite map to a type is defined as:\\
$\begin{array}{lcl} \mbox{\hspace{2cm}}\sigma(\alpha) &\definedAs & \left\{
\begin{array}{cl} \tau& \mathit{if}\; \langle{}\alpha,\tau \rangle \;\in\;
\sigma\; \\ \alpha & \mathit{otherwise}
\end{array} \right.\\
\mbox{\hspace{2cm}}\sigma(\tau_1 \rightarrow \tau_2) & \definedAs & \sigma (\tau_1) \rightarrow \sigma (\tau_2)
\end{array}$
\noindent
Application of a finite map to a constraint is defined similarly as:\\
\begin{small}
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\sigma (\tau_1 \cequal \tau_2)$ &$\definedAs$& $\sigma(\tau_1) \cequal \sigma(\tau_2)$\\
\end{tabular}
\end{small}
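\noindent
One possible realization of these definitions, sketched in Coq with the standard finite-map functor (again an illustration of the idea, not the verified development): substitutions are maps from {\tt nat} to {\tt ty}, and application follows the two clauses above.
\begin{small}
\begin{verbatim}
(* Illustrative sketch; builds on the ty type from the previous sketch. *)
Require Import FMapList OrderedTypeEx.
Module M := FMapList.Make(Nat_as_OT).

Definition subst := M.t ty.

Fixpoint apply_subst (s : subst) (t : ty) : ty :=
  match t with
  | TVar a =>
      match M.find a s with
      | Some tau => tau        (* <alpha, tau> is in sigma *)
      | None     => TVar a     (* alpha is left unchanged  *)
      end
  | TArrow t1 t2 => TArrow (apply_subst s t1) (apply_subst s t2)
  end.

(* Application to a constraint applies the map to both sides. *)
Definition apply_subst_constr (s : subst) (c : ty * ty) : ty * ty :=
  (apply_subst s (fst c), apply_subst s (snd c)).
\end{verbatim}
\end{small}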
\noindent
Since Coq's finite maps are not extensional, we define extensionality
($\approx$) as a relation on finite maps as follows:\\
\begin{small}
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\sigma \approx \sigma'$ &$\definedAs$& $\forall{}\alpha.\: \sigma(\alpha) = \sigma'(\alpha)$
\end{tabular}
\end{small}
\noindent
Moreover, the equality can be extended to all types as given by the following lemma:
\begin{lemma}\label{lemma:squiggle_ext_lift}
$\forall{}\alpha.\: \sigma(\alpha) = \sigma'(\alpha) \Leftrightarrow \forall{}\tau.\: \sigma(\tau) = \sigma'(\tau) $
\end{lemma}
\subsection{Implementing Substitutions as Finite Maps}
The representation of substitutions and the libraries available to a user play
a very important role in the formalization. In the verification literature,
substitutions have been represented as functions
\cite{bookchapter:urbannipkow}, as lists of pairs \cite{paper:wcoq}, and as
sets of pairs \cite{paper:paulsonLCF}. We represent substitutions as finite
functions (a.k.a finite maps in Coq). We use the Coq finite map library
{\it{Coq.FSets.FMapInterface}} \cite{manual:coq_fmapinterface}, which provides
an axiomatic presentation of finite maps and a number of supporting
implementations. However, it does not provide an induction principle for
finite maps, and forward reasoning is often needed to use the library. We found
we did not need induction to reason on finite maps, though there are natural
induction principles we might have proved
\cite{paper:collins_syme_95,
book:manna_waldinger_85}. The fact that the library
does not provide for extensional equality of finite maps means that, for
example, the following simple lemma does not hold:
\begin{lemma}
$\sigma_\mathbb{E} \circ \sigma_\mathbb{E} = \sigma_\mathbb{E}$
\end{lemma}
\noindent
But the following is easily proved:
\begin{lemma}
$\forall \tau. (\sigma_\mathbb{E} \circ \sigma_\mathbb{E})(\tau)= \sigma_\mathbb{E}(\tau)$
\end{lemma}
To give a feel for Coq's finite map library, we define the free type variables
of a substitution, and the substitution composition operator using the finite
map library functions. In the definitions below, we follow Coq's namespace
conventions; every library function has a qualifier which denotes the library
it belongs to. For example, $\mathit{M.map}$ is a function from the finite maps
library (${\mathit{M}}$) which maps a function over the range elements of a
finite map, whereas $\mathit{List.map}$ is a function from the list library.
\noindent
First, we define the list of free type variables of a substitution:\\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{FTV}(\sigma)$ &$\definedAs$ & $\mathsf{dom\_subst}(\sigma) \,\,\mathit{++} \,\,\mathsf{range\_subst}( \sigma)$\\
\end{tabular}
To consider the {\em domain} and {\em range} elements of a finite function (and this is the key
feature of the function being finite), we use the finite map library function
$\mathsf{M.elements}$. $\mathsf{M.elements}(\sigma)$ returns a list of pairs (key-value pairs)
corresponding to the finite map $\sigma$. The domain and range elements of a
substitution are defined as:\\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{dom\_subst}(\sigma)$ & $\definedAs$& ${\mathsf{List.map}\; (\lambda t. \mathsf{fst}\;(t)) \;(\mathsf{M.elements} (\: \sigma))}$\\
$\mbox{\hspace{2cm}}\mathsf{range\_subst}(\sigma)$ & $\definedAs$& ${\mathsf{List.flat\_map}\; (\lambda t.\mathsf{FTV}\; (\mathsf{snd}\; (t)))\; (\mathsf{M.elements} (\: \sigma))}$\\
\end{tabular}
\noindent
The function $\mathsf{List.flat\_map}$ is also known as $\mathsf{mapcan}$ in LISP and $\mathsf{concatMap}$ in Haskell.
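\noindent
As a sketch (ours, with hypothetical names), these two definitions and $\mathsf{FTV}$ of a substitution can be written with the library functions just mentioned as follows.
\begin{small}
\begin{verbatim}
(* Illustrative sketch; M.elements returns the key-value pairs of the map. *)
Definition dom_subst (s : subst) : list nat :=
  List.map (fun p => fst p) (M.elements s).

Definition range_subst (s : subst) : list nat :=
  List.flat_map (fun p => FTV (snd p)) (M.elements s).

Definition FTV_subst (s : subst) : list nat :=
  dom_subst s ++ range_subst s.
\end{verbatim}
\end{small}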
Next, we define a few utility functions to help us define the composition operator $\circ$. Applying a substitution $\sigma'$ to a substitution $\sigma$ means applying $\sigma'$ to the range elements of $\sigma$.\\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\sigma'(\sigma)$ & $\definedAs$ & ${\mathsf{M.map}}\; (\lambda \tau. \sigma'(\tau)) \; \sigma$
\end{tabular}
\noindent
The function $\mathsf{subst\_diff}$ is used to define composition of finite maps, and is defined as:\\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{subst\_diff}\: \sigma\: \sigma'$ & $\definedAs$& ${\mathsf{M.map2}}\; \mathsf{choose\_subst}\; \sigma \; \sigma'$\\
\end{tabular}
\noindent
In this definition, $\mathsf{M.map2}$ is defined in the Coq library as the function
that takes two maps $\sigma$ and $\sigma'$, and creates a map whose bindings
are taken from either $\sigma$ or $\sigma'$, based on the function
$\mathsf{choose\_subst}$, which determines the presence and value for a key
(absence of a value is denoted by $\mathsf{None}$). The values in the first map
are preferred over the values in the second map for a particular key. The
function $\mathsf{choose\_subst}$ is defined as:\\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\mathsf{choose\_subst}\; (\mathsf{Some}\,\tau_1) \;\; (\mathsf{Some} \,\tau_2)$& $\definedAs$ &$\mathsf{Some}\; \tau_1$\\
$\mbox{\hspace{2cm}}\mathsf{choose\_subst}\; (\mathsf{Some}\,\tau_1)\;\; \mathsf{None}$ &$\definedAs$& $\mathsf{Some}\; \tau_1$\\
$\mbox{\hspace{2cm}}\mathsf{choose\_subst}\; \mathsf{None} \;\; (\mathsf{Some}\, \tau_2)$&$\definedAs$& $\mathsf{Some}\, \tau_2$\\
$\mbox{\hspace{2cm}}\mathsf{choose\_subst}\; \mathsf{None}\;\; \mathsf{None}$ &$\definedAs$& $\mathsf{None}$
\end{tabular}
\noindent
Finally, the composition of finite maps ($\circ$) is defined as: \\
\begin{tabular}{lcl}
$\mbox{\hspace{2cm}}\sigma \circ \sigma'$ & $\definedAs$ & $\mathsf{subst\_diff}\,\, \sigma'(\sigma) \,\,\,\sigma'$
\end{tabular}
\noindent
Applying a composed substitution to a type has the following property:
\begin{theorem}\label{thm:composition_apply}
$\forall \sigma.\; \forall \sigma'. \; \forall{}\tau.(\sigma\circ\sigma')(\tau) = \sigma'(\sigma(\tau))$
\end{theorem}
\begin{proof}
By induction on the type $\tau$ followed by case analysis on the binding's occurrence in the composed substitution and in the individual substitutions.
\end{proof}
\noindent{}Interestingly, the base case (when $\tau$ is a type variable) is
more difficult than the inductive case (when $\tau$ is a compound
type). Incidentally, the same theorem has been formalized in Coq
\cite{paper:wcoq}, where substitutions are represented as lists of pairs, but
the proof there required 600 proof steps. We proved Theorem
\ref{thm:composition_apply} in about 100 proof steps.
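To make the composition operator concrete, the following OCaml sketch (again purely illustrative, reusing \texttt{ty}, \texttt{subst} and \texttt{IM} from the sketch above) implements substitution application and composition; here \texttt{IM.union} with a left-preferring combining function plays the role of $\mathsf{M.map2}$ with $\mathsf{choose\_subst}$, and the final assertion spot-checks Theorem~\ref{thm:composition_apply} on one small instance.
\begin{small}
\begin{verbatim}
(* Apply a substitution to a type. *)
let rec apply_subst (s : subst) (t : ty) : ty =
  match t with
  | TVar a -> (match IM.find_opt a s with Some t' -> t' | None -> TVar a)
  | Arrow (t1, t2) -> Arrow (apply_subst s t1, apply_subst s t2)

(* Apply s' to every type in the range of s: sigma'(sigma) in the text. *)
let apply_subst_to_subst (s' : subst) (s : subst) : subst =
  IM.map (apply_subst s') s

(* Composition: bindings of s'(s) are preferred over those of s', as with
   choose_subst; IM.union keeps the value from its first argument. *)
let compose (s : subst) (s' : subst) : subst =
  IM.union (fun _key v _v' -> Some v) (apply_subst_to_subst s' s) s'

(* Spot check of (s o s')(t) = s'(s(t)) on one example. *)
let () =
  let s  = IM.singleton 0 (Arrow (TVar 1, TVar 2)) in
  let s' = IM.singleton 1 (TVar 3) in
  let t  = Arrow (TVar 0, TVar 1) in
  assert (apply_subst (compose s s') t = apply_subst s' (apply_subst s t))
\end{verbatim}
\end{small}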
\section{First-Order Unification}
\label{sec:unification}
We use the following standard presentation of the first-order unification algorithm:\\
\begin{small}
\begin{tabular}{lll}
$\mathsf{unify} \: {[ \;]}$ & $\definedAs$ & $Id$ \\
$\mathsf{unify}\:((\alpha \cequal \beta):: C)$& $\definedAs$ & \sf{if} $\alpha = \beta$ \sf{then} $\mathsf{unify}(C)$ \sf{else} $\{\alpha \mapsto \;\beta \} \circ \mathsf{unify}\:(\{\alpha \mapsto \beta\}(C))$ \\
$\mathsf{unify}\:((\alpha \cequal \tau) :: C)$ & $\definedAs$ & \sf{if} $\alpha$ occurs in $\tau$ \sf{then} Fail \sf{else}
$\{\alpha \mapsto \;\tau \} \circ \mathsf{unify}\; (\{\alpha\mapsto \tau\}(C))$\\
$\mathsf{unify}((\tau \cequal \alpha) :: C)$& $\definedAs$ & \sf{if} $\alpha$ occurs in $\tau$ \sf{then} Fail \sf{else}
$\{\alpha \mapsto \;\tau \} \circ \mathsf{unify}\: (\{\alpha\mapsto \tau\}(C))$\\
$\mathsf{unify}\:((\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4):: C)$ & $\definedAs$ & $\mathsf{unify}((\tau_1 \cequal \tau_3):: (\tau_2 \cequal \tau_4) :: C)$ \\\\
\end{tabular}
\end{small}
\noindent
This specification is written in a functional style. It would also have been
possible to formalize {\sf{unify}} in a relational style. A discussion of the
trade-offs between these two styles of formalization in Coq can be found in
\cite{paper:func_ind_orig}. Since Coq's type theory requires functions to be
total, the functional style carries an overhead; we need a value to represent
failure. We used Coq's {\tt option} type to make first-order unification total.
The {\em option} type ({\em Maybe} in Haskell) is defined in Coq as follows:\\
$\mbox{\hspace{2cm}}\mathsf{Inductive\; option\; (A:Set)\; : \; Set \; := \;
Some \; (\_:A) \; |\; None.}$
\noindent
The constructor $\mathsf{None}$ indicates failure and the term
$\mathsf{Some}(\sigma)$ indicates success (with $\sigma$ as the result). In
the presentation here, we omit the $\mathsf{None}$ and $\mathsf{Some}$
constructors. In virtually all theorems proved here, the $\mathsf{None}$ case
is trivial.
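To illustrate the specification and the role of the \texttt{option} type, the following OCaml sketch (not the Coq development itself; it reuses the definitions from the earlier sketches, and the helper names \texttt{occurs} and \texttt{apply\_subst\_constr} are our own) implements the algorithm above, with \texttt{None} playing the role of Fail. Like the Coq definition, it is general recursive.
\begin{small}
\begin{verbatim}
type constr = ty * ty              (* a single equation  tau1 =?= tau2 *)

(* Occurs check: does variable a occur in type t? *)
let occurs (a : int) (t : ty) : bool = List.mem a (ftv t)

(* Apply a substitution to both sides of every constraint. *)
let apply_subst_constr (s : subst) (c : constr list) : constr list =
  List.map (fun (t1, t2) -> (apply_subst s t1, apply_subst s t2)) c

let rec unify (c : constr list) : subst option =
  match c with
  | [] -> Some IM.empty                                   (* Id *)
  | (TVar a, TVar b) :: rest when a = b -> unify rest
  | (TVar a, t) :: rest | (t, TVar a) :: rest ->
      if occurs a t then None                             (* Fail *)
      else begin
        match unify (apply_subst_constr (IM.singleton a t) rest) with
        | None -> None
        | Some s' -> Some (compose (IM.singleton a t) s')
      end
  | (Arrow (t1, t2), Arrow (t3, t4)) :: rest ->
      unify ((t1, t3) :: (t2, t4) :: rest)

(* Example: unify [ alpha =?= beta -> gamma ] yields
   { alpha |-> beta -> gamma }. *)
let _ = unify [ (TVar 0, Arrow (TVar 1, TVar 2)) ]
\end{verbatim}
\end{small}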
The presentation of the unification algorithm given here is general recursive,
{\em i.e.}, the recursive call is not necessarily on a structurally smaller
argument. Various papers have discussed the non-structural recursion used in
the standard first-order unification algorithm. McBride has given a
structurally recursive unification algorithm \cite{paper:mcbride}. Bove
\cite{paper:bove} gives an algorithm similar to ours and proves termination in
Alf \cite{manual:alf}. We believe our presentation of the algorithm is more
perspicuous than Bove's, although a similar termination argument works here. To
allow Coq to accept our definition of unification, we have to either give a
measure showing that the recursive argument is smaller or give a well-founded
ordering relation. We chose the latter. We use the standard lexicographic
ordering on the triple: $< \mid\! C_{FVC} \!\mid, \mid\! C_\rightarrow \!\mid,
\mid\! C \!\mid>$, where\\
\begin{small}
$\mbox{\hspace{1cm}}\mid\! C_{FVC}\! \mid$ is the number of {\em unique} free variables in a constraint list;\\
$\mbox{\hspace{1cm}}\mid \!C_\rightarrow \!\mid$ is the total number of arrows in the constraint list;\\
$\mbox{\hspace{1cm}}\mid\!C \!\mid$ is the length of the constraint list.\\
\end{small}
Our triple is similar to the triple proposed by others \cite{paper:bove, paper:baadersnyder, book:apt}, but a little simpler.
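For concreteness, the three components can be computed as in the following OCaml sketch (illustrative only, reusing \texttt{ftv} and \texttt{constr} from the earlier sketches); it merely shows what is being measured, not the measure term used in the Coq development.
\begin{small}
\begin{verbatim}
(* Free type variables of a constraint list. *)
let ftv_constr (c : constr list) : int list =
  List.concat_map (fun (t1, t2) -> ftv t1 @ ftv t2) c

(* |C_FVC|: number of unique free variables in the constraint list. *)
let n_unique_vars (c : constr list) : int =
  List.length (List.sort_uniq compare (ftv_constr c))

(* |C_->|: total number of arrows in the constraint list. *)
let rec n_arrows_ty (t : ty) : int =
  match t with
  | TVar _ -> 0
  | Arrow (t1, t2) -> 1 + n_arrows_ty t1 + n_arrows_ty t2
let n_arrows (c : constr list) : int =
  List.fold_left (fun acc (t1, t2) -> acc + n_arrows_ty t1 + n_arrows_ty t2) 0 c

(* The lexicographic termination measure  < |C_FVC|, |C_->|, |C| >. *)
let measure (c : constr list) : int * int * int =
  (n_unique_vars c, n_arrows c, List.length c)
\end{verbatim}
\end{small}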
\begin{table}
\begin{small}
\begin{tabular}{|l|l|l|c|c|c|}
\hline
Original call & Recursive call & Conditions, if any & $\mid\! C_{FVC}\! \mid$ & $\mid \!C_\rightarrow\! \mid$ & $\mid\! C \!\mid$ \\
\hline
$(\alpha \cequal \alpha)::C$ & $C$ &$\alpha \in \mathsf{FVC}(C)$ & - & - & $\downarrow$ \\
$(\alpha \cequal \alpha)::C$ & $C$&$\alpha \notin \mathsf{FVC}(C)$ & $\downarrow$ & - & $\downarrow$ \\
$(\alpha \cequal \beta)::C$ & \; $\{\alpha\mapsto \beta\}(C)$&$ \alpha \neq \beta $ & $\downarrow$ & - & $\downarrow$ \\
$(\alpha \cequal \tau)::C$ & \; $\{\alpha\mapsto \tau\}(C)$&$ \alpha \notin \mathsf{FTV}(\tau)$ $\wedge$ $ \alpha \notin \mathsf{FVC}(C)$ & $\downarrow$ & $\downarrow$ & $\downarrow$ \\
$(\alpha \cequal \tau)::C$ & \; $\{\alpha\mapsto \tau\}(C)$&$ \alpha \notin \mathsf{FTV}(\tau)$ $\wedge$ $ \alpha \in \mathsf{FVC}(C)$ & $\downarrow$ & $\uparrow$ & $\downarrow$ \\
$(\tau \cequal \alpha)::C$ & \; $\{\alpha\mapsto\tau\}(C)$&$ \alpha \notin \mathsf{FTV}(\tau)$ $\wedge$ $ \alpha \notin \mathsf{FVC}(C)$ & $\downarrow$ & $\downarrow$ & $\downarrow$ \\
$(\tau \cequal \alpha)::C$ & \; $\{\alpha\mapsto\tau\}(C)$&$ \alpha \notin \mathsf{FTV}(\tau )$ $\wedge$ $ \alpha \in \mathsf{FVC}(C)$ & $\downarrow$ & $\uparrow$ & $\downarrow$ \\
$((\tau_1 \rightarrow \tau_2)$ & $((\tau_1 \cequal \tau_3)$& None & - &$\downarrow$ &$\uparrow$ \\
$\mbox{\hspace{0.25cm}}\cequal (\tau_3 \rightarrow \tau_4))::C$& {\mbox{\hspace{1em}}}$::( \tau_2 \cequal \tau_4) :: C)$ &&&&\\
\hline
\end{tabular}
\end{small}
\caption{Properties of the termination measure components on the recursive call}
\label{table:unifitermination}
\end{table}
Table \ref{table:unifitermination} shows how these components vary depending on
the constraint at the head of the constraint list. The table closely follows
the reasoning we used to satisfy the proof obligations generated by the above
specification \cite{paper:unif09}. We use -, $\uparrow$, $\downarrow$ to
denote whether the component is unchanged, increased or decreased,
respectively. We might have used finite sets here (for counting the unique
free variables of a constraint list), but we used lists because of our
familiarity with the list library. We found that the existing Coq list library offers
excellent support for reasoning about lists in general, and unique lists in
particular. Coq also provides a library to reason about sets as lists modulo
permutation.
We found the following lemma mentioned in the formalization of Sudoku puzzles
by Laurent Th\'ery \cite{paper:sudoku} very useful in our termination proofs.
\begin{lemma}\label{lemma:ulist_incl_length_strict}
\[\forall{} l, l' :list\; D,\; \mathsf{NoDup}\; l\; \Rightarrow \mathsf{NoDup}\; l' \Rightarrow \mathsf{List.incl}\; l\; l' \Rightarrow \neg \mathsf{List.incl}\; l'\; l \Rightarrow (\mathsf{List.length}\; l)\;< (\mathsf{List.length}\; l')\]
\end{lemma}
\noindent{}This lemma nicely relates list inclusion to length.
\section{Verification of the Model}
\label{sec:mgus}
Now we present the proofs of the theorems verifying our model of the idempotent
MGU axioms. The underlying theme in almost all of the proofs presented below is
the use of the $\mathsf{functional\; induction}$ tactic
\cite{paper:func_ind_orig} in Coq. This tactic is available to us because we
have specified first-order unification in a functional style rather than the
relational style. The functional induction technique generates an induction
principle for definitions defined using the $\mathsf{Function}$ keyword. Given
a general recursive algorithm known to terminate (termination requires a
separate proof), the induction principle generated for that particular algorithm
allows a symbolic unfolding of the computation with induction hypotheses for all
recursive calls.
This technique is featured in other theorem provers and was pioneered in
Nqthm by Boyer and Moore \cite{book:boyermoore}.
Functional induction is stronger than ordinary list induction: it
closely follows the syntax of the definition and tends to generate induction
hypotheses of exactly the form needed. The actual induction principle is
available in \cite{paper:unif09}. The induction principle for the unification
algorithm itself is rather long because of the number of cases involved; there
are five cases, three of which have three sub-cases each.
In the next few subsections, we present the formal statements of the most
important lemmas involved in the proofs of each of the axioms. For many of
these lemmas, we describe the main technique involved in the proofs. Due to
limitations on space, lemmas stated without comment on their proofs should be
assumed to follow by structural induction on a constraint list or type.
\subsection{Axiom i}
\begin{lemma}\label{lemma:satisfy_and_compose_subst}
$\forall{} \alpha. \: \forall{}C.\;\forall{} \sigma.\; \forall \tau. \; \sigma \models \{\alpha \mapsto \tau\}(C) \Rightarrow
(\{\alpha \mapsto \tau\}\circ \sigma) \models C $
\end{lemma}
\begin{theorem}
$\forall{} C. \;\forall{}\sigma.\; \mathsf{unify}(C) = \sigma \: \Rightarrow \sigma \models C$
\end{theorem}
\begin{proof}
Choose an arbitrary $C$. By functional induction on $\mathsf{unify}\; C$, there are two main cases:
\begin{description}
\item Case $C = [\;]$. Follows trivially since any substitution satisfies an empty constraint list.
\item Case $C \neq [\;]$.
We consider the various cases based on the constraint at the head of the constraint list.
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. This case follows from the induction hypothesis.
\item Case $(\alpha \cequal \beta)::C'$ and $ \alpha \neq \beta$. The reasoning is similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
We know $\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma'$ and
the induction hypothesis is\\
$\forall \sigma.\: \mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma $
$\Rightarrow \sigma \models \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')$.\\
We have to show \\
$\forall \sigma. \sigma = (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma')\Rightarrow \sigma \models (\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$.
Pick an arbitrary $\sigma$. Assume $\sigma = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma'$. We must show
$(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma') \models (\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$.
Since we know $\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma'$, the induction hypothesis gives
$\sigma' \models \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')$. By the definition of satisfiability, it then suffices to show:
\begin{enumerate}
\item $(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma') \models (\alpha \cequal \tau_1 \rightarrow \tau_2)$.\\
By Theorem \ref{thm:composition_apply} and the definition of satisfiability, we must show $\sigma' (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\alpha))$
= $\sigma' (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\tau_1 \rightarrow \tau_2))$.
Since we know $\alpha \notin \mathsf{FTV} (\tau_1 \rightarrow \tau_2)$, we have
$\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\tau_1 \rightarrow \tau_2) = \tau_1 \rightarrow \tau_2$ and the proof follows.
\item $(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma') \models C'$. \\
Since we know $\sigma' \models \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')$, by Lemma \ref{lemma:satisfy_and_compose_subst} we know $(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma') \models C'$ as was to be shown.
\end{enumerate}
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
Same as case 3 above.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4 )::C'$. The induction hypothesis is \\
$\forall \sigma.\; \mathsf{unify}((\tau_1 \cequal \tau_3) :: (\tau_2 \cequal \tau_4)::C') = \sigma \Rightarrow
\sigma \models ((\tau_1 \cequal \tau_3) :: (\tau_2 \cequal \tau_4)::C')$.
We have to show \\
$\forall \sigma'.\; \mathsf{unify}((\tau_1 \cequal \tau_3) :: (\tau_2 \cequal \tau_4)::C') = \sigma' \Rightarrow
\sigma' \models ((\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4 )::C')$. \\
Pick an arbitrary $\sigma'$ and assume $\mathsf{unify} \;((\tau_1 \cequal \tau_3) :: (\tau_2 \cequal \tau_4)::C') = \sigma'$. By the induction hypothesis, we then know \\
$\sigma' \models ((\tau_1 \cequal \tau_3) :: (\tau_2 \cequal \tau_4)::C')$. But by the definition of satisfiability, we know \\
$\sigma'(\tau_1) =\sigma'(\tau_3)$, $\sigma'(\tau_2) = \sigma'(\tau_4)$ and $\sigma' \models C'$. \\
To show $\sigma' \models ((\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4 )::C')$, we must show:
\begin{enumerate}
\item $\sigma' \models \tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4$. By the definition of satisfiability, we must show \\
$\sigma' (\tau_1 \rightarrow \tau_2) = \sigma' (\tau_3 \rightarrow \tau_4)$. But we already know $\sigma'(\tau_1) =\sigma'(\tau_3)$ and $\sigma'(\tau_2) = \sigma'(\tau_4)$, so this case holds.
\item $\sigma' \models C'$. But that we already know.
\end{enumerate}
\end{enumerate}
\end{description}
\end{proof}
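\noindent As an informal sanity check of this axiom on concrete instances (outside the Coq development), constraint satisfaction can be tested directly with the following OCaml sketch, which reuses \texttt{unify} and \texttt{apply\_subst} from the earlier sketches.
\begin{small}
\begin{verbatim}
(* sigma |= C : the substitution equates both sides of every constraint. *)
let satisfies (s : subst) (c : constr list) : bool =
  List.for_all (fun (t1, t2) -> apply_subst s t1 = apply_subst s t2) c

(* Whenever unify succeeds, its result satisfies the constraints. *)
let check_axiom_i (c : constr list) : bool =
  match unify c with None -> true | Some s -> satisfies s c

let () = assert (check_axiom_i [ (TVar 0, Arrow (TVar 1, TVar 2));
                                 (TVar 1, TVar 3) ])
\end{verbatim}
\end{small}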
\subsection{Axiom ii}
\begin{lemma}\label{lemma:constraint_satisfaction_and_substitution_instance}
$\forall C. \:\forall{}\sigma.\: \forall{}\alpha.\:\forall \tau.\: ( \sigma \models C\; \wedge \;\alpha \notin \mathsf{FTV}(\tau) \; \wedge\; \sigma(\alpha) = \sigma(\tau)) \:\Rightarrow \: \sigma \models \{\alpha \mapsto \tau\}(C)$
\end{lemma}
\begin{proof}
By induction on the constraint list $C$, followed by induction on the structure of the type $\tau$.
\end{proof}
\begin{theorem}
$\forall{} C.\;\forall{}\sigma. \forall \sigma'.\; (\mathsf{unify}(C) = \sigma \: \wedge \: \sigma' \models C) \Rightarrow \exists \sigma''. \;\sigma' \approx \sigma \circ \sigma''$
\end{theorem}
\begin{proof}
Choose an arbitrary constraint list $C$. By the definition of extensional
equality on finite maps, we must show $\forall{}\sigma. \forall \sigma'.\;
(\mathsf{unify}(C) = \sigma \: \wedge \: \sigma' \models C) \Rightarrow \exists
\sigma''. \;\forall \alpha.\; \sigma' (\alpha) = (\sigma \circ
\sigma'')(\alpha)$. \\By functional induction on $\mathsf{unify}(C)$, there are
two main cases:
\begin{description}
\item Case $C = [\;]$. Choose an arbitrary $\sigma$ and $\sigma'$. Assume $\mathsf{unify}([\;]) = \sigma$ and $\sigma' \models [ \;]$. By the definition of $\mathsf{unify}$, we know $\sigma = \sigma_\mathbb{E}$. So we must show $\exists \sigma''.\forall \alpha. \sigma' (\alpha) = (\sigma_\mathbb{E} \circ \sigma'')(\alpha)$.
Let $\sigma'$ be the witness for $\sigma''$ in $\exists \sigma''.\forall \alpha. \sigma' (\alpha) = (\sigma_\mathbb{E} \circ \sigma'')(\alpha)$. Choose an arbitrary $\alpha$. Then we must show $\sigma' (\alpha) = (\sigma_\mathbb{E} \circ \sigma')(\alpha)$. But by Theorem \ref{thm:composition_apply}, we have $(\sigma_\mathbb{E} \circ \sigma')(\alpha) = \sigma' (\sigma_\mathbb{E}(\alpha))$. So we must show
$\sigma' (\sigma_\mathbb{E}(\alpha)) = \sigma' (\alpha)$. But that follows since $\sigma_\mathbb{E}(\alpha) = \alpha$.
\item Case $C \neq [\;]$. We consider the various cases based on the constraint at the head of the constraint list:
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. Apply the induction hypothesis and then this case is trivial.
\item Case $(\alpha \cequal \beta)::C'$ and $\alpha \neq \beta$. Reasoning is similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
We know
$\mathsf{unify}( \{ \alpha \mapsto \tau_1 \rightarrow \tau_2 \}(C')) = \sigma_1$ and
the induction hypothesis is \\
$\forall \sigma. \; \forall \sigma'.\; (\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \sigma\; \wedge \;\sigma' \models (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')))$\\
$ \mbox{\hspace{1cm}}\Rightarrow \exists \sigma''. \;\forall \alpha'.\; \sigma' (\alpha') = (\sigma \circ \sigma'')(\alpha')$. \\
We must show \\
$\forall \sigma_p.\; \forall \sigma_2.\; \sigma_p = (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1) \wedge \sigma_2 \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$\\
$\mbox{\hspace{1cm}} \Rightarrow \exists \sigma_3. \;\forall \alpha''.\; \sigma_2(\alpha'')=
(\sigma_p \circ \sigma_3)(\alpha'')$.\\
Pick an arbitrary $\sigma_p$ and $\sigma_2$. \\Assume $\sigma_p = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1$
and $\sigma_2 \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$. We must show \\
$\exists \sigma_3. \;\forall \alpha''.\; \sigma_2(\alpha'')=
((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1) \circ \sigma_3)(\alpha'')$. Since $\sigma_2 \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$, by the definition of constraint satisfiability we know $\sigma_2 (\alpha) = \sigma_2(\tau_1 \rightarrow \tau_2)$ and
$\sigma_2 \models C'$. Then, by Lemma \ref{lemma:constraint_satisfaction_and_substitution_instance} and by our assumptions, we know
$\sigma_2 \models (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C'))$.
Since we also know
$\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \sigma_1$, by the induction hypothesis we know
$\exists \sigma''. \;\forall \alpha'.\; \sigma_2 (\alpha') = (\sigma_1 \circ \sigma'')(\alpha')$.
Eliminating the existential with a fresh witness $\sigma_4$, we have $\forall \alpha'.\; \sigma_2 (\alpha') = (\sigma_1 \circ \sigma_4)(\alpha')$.
Then, to show $\exists \sigma_3. \forall \alpha''. \sigma_2(\alpha'')=
((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma_1)\circ \sigma_3)(\alpha'')$, we choose the witness $\sigma_4$ and show $\forall \alpha''. \sigma_2(\alpha'')=
((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma_1)\circ \sigma_4)(\alpha'')$. Pick an arbitrary $\alpha''$ and show
$\sigma_2(\alpha'')=
((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma_1)\circ \sigma_4)(\alpha'')$. By Theorem \ref{thm:composition_apply}, we must show \\
$\sigma_2(\alpha'')= \sigma_4(\sigma_1(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\alpha'')))$.
There are two cases to consider:
\begin{enumerate}
\item Case $\alpha \neq \alpha''$. Then we must show $\sigma_2(\alpha'')=
\sigma_4(\sigma_1(\alpha''))$. But that follows from our assumptions and Theorem \ref{thm:composition_apply}.
\item Case $\alpha = \alpha''$. Then we must show $\sigma_2(\alpha)=
\sigma_4(\sigma_1(\tau_1 \rightarrow \tau_2))$. Since we know \\ $\sigma_2(\alpha) = \sigma_2(\tau_1 \rightarrow \tau_2)$, it suffices to show $\sigma_2(\tau_1 \rightarrow \tau_2)=
\sigma_4(\sigma_1(\tau_1 \rightarrow \tau_2))$. But that follows from our assumptions and Lemma \ref{lemma:squiggle_ext_lift} and Theorem \ref{thm:composition_apply}.
\end{enumerate}
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$. Same as case 3 above.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4)::C$. Apply the induction hypothesis and then this case is trivial.
\end{enumerate}
\end{description}
\end{proof}
\subsection{Axiom iii}
\begin{lemma}\label{lemma:compose_and_domain_membership}
$\forall{}\alpha, \alpha'. \:\forall{}\tau. \forall{} \sigma.\: \alpha' \in \mathsf{dom\_subst}(\{\alpha \mapsto \tau\} \circ \sigma)
\Rightarrow $\\
$\mbox{\hspace{2.5cm}} \alpha' \in \mathsf{dom\_subst}(\{\alpha \mapsto \tau\})\; \vee \; \alpha' \in \mathsf{dom\_subst}(\sigma)$
\end{lemma}
\begin{lemma}\label{lemma:compose_and_range_membership}
$\forall{}\alpha, \alpha'. \:\forall{}\tau. \:\forall{} \sigma. \: (\alpha \notin \mathsf{FTV}(\tau)\; \wedge\; \alpha' \in \mathsf{range\_subst}( \{\alpha \mapsto \tau\} \circ \sigma)) \Rightarrow$\\
$\mbox{\hspace{2.5cm}} \alpha' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau\})\; \vee\; \alpha' \in \mathsf{range\_subst}(\sigma)$
\end{lemma}
\noindent
Without going into the details, the following lemma helps us prove Lemma \ref{lemma:compose_and_range_membership}. Note that the definition of $\circ$ refers to the higher-order function $\mathsf{M.map2}$; this lemma spares us from reasoning about $\mathsf{M.map2}$ directly and lets us use Theorem \ref{thm:composition_apply} to reason about substitution composition instead.
\begin{lemma}\label{lemma:alt_range_def}
$\forall{}\alpha .\: \forall{} \sigma.\; \alpha \in \mathsf{range\_subst}(\sigma) \Leftrightarrow \exists \alpha'. \alpha' \in \mathsf{dom\_subst} (\sigma)\; \wedge\; \alpha \in \mathsf{FTV}(\sigma(\alpha')) $
\end{lemma}
\begin{lemma}\label{lemma:termination_helper12}
$\forall \alpha, \alpha'. \; \forall \tau. \; \forall C.\; (\alpha' \notin \mathsf{FTV}(\tau)
\wedge \alpha' \in \mathsf{FTV}(\{\alpha \mapsto \tau\}(C))) \Rightarrow \alpha' \in \mathsf{FTV}(C)$.
\end{lemma}
\begin{lemma}\label{lemma:mgu_axiom3_helper1}
$\forall{} C.\;\forall{}\sigma. \;\mathsf{unify}(C) = \sigma \: \Rightarrow \mathsf{dom\_subst}(\sigma) \subseteq \mathsf{FTV}(C)$
\end{lemma}
\begin{proof}
By functional induction on $\mathsf{unify}(C)$ and Lemma \ref{lemma:compose_and_domain_membership}.
\end{proof}
\noindent
We focus on the proof of the most involved lemma.
\begin{lemma}\label{lemma:mgu_axiom3_helper2}
$\forall{} C.\;\forall{}\sigma. \;\mathsf{unify}(C) = \sigma \: \Rightarrow \mathsf{range\_subst}(\sigma) \subseteq \mathsf{FTV}(C)$
\end{lemma}
\begin{proof}
Choose an arbitrary $C$. Unfolding the definition of $\subseteq$, we must
show\\ $\forall{}\sigma. \;\mathsf{unify}(C) = \sigma \: \Rightarrow \forall
\alpha'. \;\alpha' \in \mathsf{range\_subst}(\sigma) \Rightarrow \alpha' \in
\mathsf{FTV}(C)$. By functional induction on $\mathsf{unify}(C)$, there are two
main cases:
\begin{description}
\item Case $C = [\;]$. Then, by the definition of $\mathsf{unify}$, we know $\sigma = \sigma_\mathbb{E}$. So we must show \\
$\mathsf{range\_subst}(\sigma_\mathbb{E}) \subseteq \mathsf{FTV}({[ \;]})$. The proof follows from the definition of $\mathsf{range\_subst}$ and the definition of $\mathsf{FTV}$.
\item Case $C \neq [\;]$. We consider the various cases based on the constraint at the head of the constraint list:
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. The induction hypothesis is:\\
$ \forall{}\sigma. \;\mathsf{unify}(C') = \sigma \: \Rightarrow \forall \alpha''. \alpha'' \in \mathsf{range\_subst}(\sigma) \Rightarrow \alpha'' \in \mathsf{FTV}(C')$\\
and we must show \\
$\forall{}\sigma. \;\mathsf{unify}(C') = \sigma \: \Rightarrow \forall \alpha'. \alpha' \in \mathsf{range\_subst}(\sigma) \Rightarrow \alpha' \in \mathsf{FTV}((\alpha \cequal \alpha)::C')$. \\
Pick an arbitrary $\sigma$ and assume $\mathsf{unify}(C') = \sigma$. Pick an arbitrary $\alpha'$. \\
Assume $\alpha' \in \mathsf{range\_subst}(\sigma)$ and show $\alpha' \in \mathsf{FTV}((\alpha \cequal \alpha)::C')$.\\
Since we know $\mathsf{unify}(C') = \sigma$, by the induction hypothesis we know \\
$\forall \alpha''. \; \alpha'' \in \mathsf{range\_subst}(\sigma) \Rightarrow \alpha'' \in \mathsf{FTV}(C')$. Since we also know $\alpha' \in \mathsf{range\_subst}(\sigma)$, we know $\alpha' \in \mathsf{FTV}(C')$. That also means $\alpha' \in \mathsf{FTV}((\alpha \cequal \alpha)::C')$ as was to be shown.
\item Case $(\alpha \cequal \beta)::C'$ and $\alpha \neq \beta$. Reasoning is similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
We know
$\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \sigma_1$, and
the induction hypothesis is \\
$\; \forall \sigma'.\; \mathsf{unify}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma' \Rightarrow$\\
$\mbox{\hspace{1cm}}\forall \alpha'. \alpha' \in \mathsf{range\_subst}(\sigma') \Rightarrow \alpha' \in \mathsf{FTV}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) $.\\
We must show \\
$ \forall \alpha''. \alpha'' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1) \Rightarrow \alpha'' \in \mathsf{FTV}((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$.\\
Pick an arbitrary $\alpha''$ and assume $\alpha'' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1)$. We must show $\alpha'' \in \mathsf{FTV}((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$. There are two cases:
\begin{enumerate}
\item Case $\alpha'' = \alpha$. Then clearly $\alpha'' \in \mathsf{FTV}((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$ as was to be shown.
\item Case $\alpha'' \neq \alpha$. Then we have two cases:
\begin{enumerate}
\item $\alpha'' \in \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$. Then clearly $\alpha'' \in \mathsf{FTV}((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$ as was to be shown.
\item $\alpha'' \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$. Then we must show $\alpha'' \in \mathsf{FTV}(C')$. Since we know\\ $\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \sigma_1$, by the induction hypothesis we know \\
$\forall \alpha'. \alpha' \in \mathsf{range\_subst}(\sigma_1) \Rightarrow \alpha' \in \mathsf{FTV}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) $.\\ Since $\alpha'' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_1)$, by Lemma \ref{lemma:compose_and_range_membership} we know either $\alpha'' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\})$ or $\alpha'' \in \mathsf{range\_subst}(\sigma_1)$. Again, there are two cases:
\begin{enumerate}
\item Case $\alpha'' \in \mathsf{range\_subst}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\})$. Then $\alpha'' \in \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$, a contradiction.
\item Case $\alpha'' \in \mathsf{range\_subst}(\sigma_1)$. Then from the induction hypothesis we know\\ $\alpha'' \in \mathsf{FTV}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) $. Then by Lemma \ref{lemma:termination_helper12}, $\alpha'' \in \mathsf{FTV}(C')$ as was to be shown.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
Same as case 3 above.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4)::C$. Apply the induction hypothesis and then this case is trivial.
\end{enumerate}
\end{description}
\end{proof}
\begin{theorem}\label{thm:mgu_axiom3}
$\forall{} C.\;\forall{}\sigma. \;\mathsf{unify}\: C = \sigma \: \Rightarrow \mathsf{FTV}(\sigma) \subseteq \mathsf{FTV}(C)$
\end{theorem}
\begin{proof}
By the definition of $\mathsf{FTV}$ and by Lemma \ref{lemma:mgu_axiom3_helper1} and Lemma \ref{lemma:mgu_axiom3_helper2}.
\end{proof}
\subsection{Axiom iv}
This axiom requires the notion of subterms, which we define below:\\
\begin{tabular}{lcl}
$\mbox{\hspace{1cm}}\mathsf{subterms}(\alpha)$ & $\definedAs$ & [\;]\\
$\mbox{\hspace{1cm}}\mathsf{subterms}(\tau_1 \rightarrow \tau_2)$ &$\definedAs$ &$\tau_1::\tau_2::(\mathsf{subterms} \; \tau_1)\,\, \mathrm{++}\,\, (\mathsf{subterms} \; \tau_2)$ \\\\
\end{tabular}
\noindent
Then we can define what it means for a term to be contained in another term.
\begin{lemma}\label{lemma:containment}
$\forall{}\tau,\tau'.\:\tau \in \mathsf{subterms}(\tau') \:\Rightarrow \: \forall{}\tau''.\: \tau'' \in \mathsf{subterms}(\tau) \Rightarrow \tau'' \in \mathsf{subterms}(\tau') $
\end{lemma}
\noindent
A somewhat related lemma is used to show well-foundedness of types.
\begin{lemma}\label{lemma:well_foundedness_for_types}
$\forall{}\tau.\:\neg\: \tau \in \mathsf{subterms}(\tau) $
\end{lemma}
\begin{proof}
By induction on the structure of the type $\tau$ and by Lemma \ref{lemma:containment}.
\end{proof}
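\noindent The following OCaml sketch (illustrative only, reusing \texttt{ty} from the earlier sketches) mirrors the definition of subterms and spot-checks Lemma~\ref{lemma:well_foundedness_for_types} on a small example.
\begin{small}
\begin{verbatim}
(* Proper subterms of a type. *)
let rec subterms (t : ty) : ty list =
  match t with
  | TVar _ -> []
  | Arrow (t1, t2) -> t1 :: t2 :: (subterms t1 @ subterms t2)

(* Spot check: a type is never among its own subterms. *)
let () =
  let t = Arrow (TVar 0, TVar 1) in
  assert (not (List.mem t (subterms t)))
\end{verbatim}
\end{small}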
\noindent
The following obvious but powerful lemma helps in proving the axiom:
\begin{lemma}\label{lemma:member_subterms_and_apply_subst}
$\forall{} \sigma.\: \forall{} \alpha.\: \forall{}\tau.\: \alpha \in \mathsf{subterms}(\tau) \Rightarrow \sigma(\alpha) \neq \sigma(\tau) $
\end{lemma}
\begin{proof}
By induction on the structure of the type $\tau$ and by Lemma \ref{lemma:well_foundedness_for_types}.
\end{proof}
\begin{lemma}\label{lemma:member_arrow_and_subterms}
$\forall{} \sigma.\: \forall{}\alpha.\: \forall{}\tau_1, \tau_2.\: \alpha \in \mathsf{FTV}(\tau_1) \vee \alpha \in \mathsf{FTV}(\tau_2) \Rightarrow \alpha \in \mathsf{subterms}(\tau_1 \rightarrow \tau_2) $
\end{lemma}
\begin{proof}
By induction on $\tau_1$, followed by induction on $\tau_2$.
\end{proof}
\noindent
A corollary of the above two lemmas gives us the required result.
\begin{corollary}\label{lemma:member_apply_subst_unequal}
$\forall{} \sigma.\: \forall{} \alpha.\: \forall{}\tau_1, \tau_2.\:\alpha \in \mathsf{FTV}(\tau_1) \vee \alpha \in \mathsf{FTV}(\tau_2) \Rightarrow \sigma (\alpha) \neq \sigma(\tau_1 \rightarrow \tau_2) $
\end{corollary}
\begin{proof}
By Lemma \ref{lemma:member_subterms_and_apply_subst} and \ref{lemma:member_arrow_and_subterms}.
\end{proof}
This is the only theorem where the failure cases are interesting, so in the following theorem we carry along the constructor that indicates success or failure of the $\mathsf{unify}$ function call.
\begin{theorem}
$ \forall{} C. \; \forall{}\sigma.\; \sigma \models C \: \Rightarrow \exists \sigma'.\; \mathsf{unify}(C) = \mathsf{Some} \;\sigma'$
\end{theorem}
\begin{proof}
Choose an arbitrary $C$ and $\sigma$. By functional induction on $\mathsf{unify}(C)$, there are two main cases:
\begin{description}
\item Case $C = [\;]$. Assume $\sigma \models {[ \;]}$. Then we must show $\exists \sigma'.\; \mathsf{unify}({[ \;]}) = \mathsf{Some} \;\sigma'$. Let $\sigma_\mathbb{E}$ be the witness for $\sigma'$ in $\exists \sigma'.\; \mathsf{unify}({[ \;]}) = \mathsf{Some} \;\sigma'$. So we must show $\mathsf{unify}\; {[ \;]} = \mathsf{Some}\; \sigma_\mathbb{E}$ but that follows from the definition of $\mathsf{unify}$.
\item Case $C \neq [\;]$. We consider the various cases based on the constraint at the head of the constraint list:
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. Apply the induction hypothesis and then this case is trivial.
\item Case $(\alpha \cequal \beta)::C'$ and $\alpha \neq \beta$. Reasoning is similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
We know \\
$\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \mathsf{None}$ and
the induction hypothesis is: \\
$\sigma' \models (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) \Rightarrow \exists \sigma''.\; \mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C')) = \mathsf{Some}\;\sigma''$.\\
We must show \\
$\sigma' \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C') \Rightarrow
\exists \sigma_3.\; \mathsf{None} = \mathsf{Some}\; \sigma_3$.\\
Assume $\sigma' \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$, {\em i.e.}, $\sigma'(\alpha) = \sigma'(\tau_1 \rightarrow \tau_2)$ and
$\sigma' \models C'$. \\
We must show $\exists \sigma_3.\; \mathsf{None} = \mathsf{Some}\; \sigma_3$.
By Lemma \ref{lemma:constraint_satisfaction_and_substitution_instance} and by our assumptions, we know\\
$\sigma' \models (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (C'))$. So, by the induction hypothesis, we know $\exists \sigma''. \; \mathsf{None} = \mathsf{Some}\;\sigma''$. Eliminating the existential with a fresh witness $\sigma'''$ gives
$\mathsf{None} = \mathsf{Some}\;\sigma'''$, which is a contradiction, and so this case holds.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \in \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.\\
Then, we must show $\sigma' \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C') \Rightarrow
\exists \sigma_3. \mathsf{None} = \mathsf{Some}\; \sigma_3$.\\
Assume $\sigma' \models ((\alpha \cequal \tau_1 \rightarrow \tau_2)::C')$, {\em i.e.}, $\sigma' (\alpha) = \sigma'(\tau_1 \rightarrow \tau_2)$ and $\sigma' \models C'$. Since we know
$\alpha \in \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$, {\em i.e.}, either $\alpha \in \mathsf{FTV}(\tau_1)$ or $\alpha \in \mathsf{FTV}(\tau_2)$,
Corollary \ref{lemma:member_apply_subst_unequal} gives\\ $\sigma' (\alpha) \neq \sigma'(\tau_1 \rightarrow \tau_2)$, which is a contradiction. Thus this case holds.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
Similar to case 3.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C$ and $\alpha \in \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
Similar to case 4.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4)::C$. Apply the induction hypothesis and then this case is trivial.
\end{enumerate}
\end{description}
\end{proof}
\subsection{Axiom v}
The following lemmas are needed for the main proof; the first two follow by induction on the structure of the type $\tau$, and the third by induction on $C$.
\begin{lemma}\label{idempotent_helper_helper1_gen}
$\forall \sigma.\; \forall \alpha. \;\forall \tau.\; \alpha \notin \mathsf{FTV}(\tau) \wedge \alpha \notin \mathsf{FTV}(\sigma) \Rightarrow \alpha \notin \mathsf{FTV}\;(\sigma (\tau))$
\end{lemma}
\begin{lemma}\label{fresh_type_and_member_converse}
$\forall \alpha.\; \forall \tau, \tau'. \;\alpha \notin \mathsf{FTV}(\tau) \Rightarrow \{\alpha \mapsto \tau'\} (\tau)= \tau$
\end{lemma}
\begin{lemma}\label{lemma:apply_subst_constr_general}
$\forall \alpha.\; \forall{}\tau.\; \forall{} C. \;\alpha \notin \mathsf{FTV}(\tau) \Rightarrow \alpha \notin \mathsf{FTV} (\{\alpha \mapsto \tau \}(C))$
\end{lemma}
\noindent
The theorem we must prove is:
\begin{theorem}
$\forall{} C.\; \forall{}\sigma.\; \mathsf{unify}(C)= \sigma \: \Rightarrow (\sigma \circ \sigma) \approx \sigma$.
\end{theorem}
\begin{proof}
Pick an arbitrary $C$. Unfolding the definition of $\approx$, and by Theorem \ref{thm:composition_apply}, we must show:\\
$\forall{}\sigma.\; \mathsf{unify}(C) = \sigma \Rightarrow \forall \alpha.\; \sigma (\sigma (\alpha))= \sigma (\alpha)$.\\
By functional induction on $\mathsf{unify}\; C$, there are two main cases:
\begin{description}
\item Case $C = [\;]$.
This case follows since $\forall \alpha. \;\sigma_\mathbb{E} (\alpha) = \alpha$.
\item Case $C \neq [\;]$. We consider the various cases based on the constraint at the head of the constraint list:
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. Apply the induction hypothesis and then this case is trivial.
\item Case $(\alpha \cequal \beta)::C'$. Reasoning is similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$. We know $\mathsf{unify}\; (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C'))= \sigma$ and the induction hypothesis is:\\
$\forall \sigma'. \mathsf{unify}\; \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')= \sigma' \Rightarrow \forall \alpha'. \sigma' (\alpha') = \sigma'(\sigma' (\alpha'))$\\
And we must show, for an arbitrary $\alpha''$:\\
$ \sigma (\{\alpha \mapsto \tau_1 \rightarrow \tau_2 \}(\alpha'')) = \sigma(\{\alpha \mapsto \tau_1 \rightarrow \tau_2 \}(\sigma(\{\alpha \mapsto \tau_1 \rightarrow \tau_2 \}(\alpha''))))$.\\
There are two cases:
\begin{enumerate}
\item Case $\alpha = \alpha''$. Then we must show $\sigma(\tau_1 \rightarrow \tau_2) = \sigma(\{\alpha \mapsto \tau_1 \rightarrow \tau_2 \}(\sigma(\tau_1 \rightarrow \tau_2)))$. From Lemma \ref{lemma:apply_subst_constr_general} and Theorem \ref{thm:mgu_axiom3}, we know that $\alpha \notin \mathsf{FTV}(\sigma)$.
Since $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$ and $\alpha \notin \mathsf{FTV}(\sigma)$, by Lemma \ref{idempotent_helper_helper1_gen},
$\alpha \notin \mathsf{FTV}(\sigma(\tau_1 \rightarrow \tau_2))$. By Lemma \ref{fresh_type_and_member_converse} (choosing $\tau'$ to be $\tau_1 \rightarrow \tau_2$), we get $\sigma(\tau_1 \rightarrow \tau_2) = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\sigma (\tau_1 \rightarrow \tau_2))$. So now we must show
$\sigma (\tau_1 \rightarrow \tau_2) = \sigma(\sigma (\tau_1 \rightarrow \tau_2))$. Then, by Lemma \ref{lemma:squiggle_ext_lift}, we must show
$\forall \beta. \;\sigma (\beta) = \sigma(\sigma (\beta))$ . Choose an arbitrary $\beta$ and show $\sigma (\beta) = \sigma(\sigma (\beta))$, but that follows from the induction hypothesis (by choosing $\sigma'$ to be $\sigma$ and $\alpha'$ to be $\beta$) and our assumptions.
\item Case $\alpha \neq \alpha''$. Then we must show $\sigma(\alpha'') = \sigma(\{\alpha \mapsto \tau_1 \rightarrow \tau_2 \}(\sigma(\alpha'')))$. From Lemma \ref{lemma:apply_subst_constr_general} and Theorem \ref{thm:mgu_axiom3}, we know that $\alpha \notin \mathsf{FTV}(\sigma)$.
Since $\alpha \notin \mathsf{FTV}(\alpha'')$ and $\alpha \notin \mathsf{FTV}(\sigma)$, by Lemma \ref{idempotent_helper_helper1_gen},
$\alpha \notin \mathsf{FTV}(\sigma(\alpha''))$. By Lemma \ref{fresh_type_and_member_converse} (choosing $\tau'$ to be $\tau_1 \rightarrow \tau_2$), we get $\sigma(\alpha'') = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\sigma (\alpha''))$. So now we must show
$\sigma (\alpha'') = \sigma(\sigma (\alpha''))$, but that follows from the induction hypothesis (by choosing $\sigma'$ to be $\sigma$ and $\alpha'$ to be $\alpha''$) and our assumptions.
\end{enumerate}
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$. Same as Case 3.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4)::C'$. Apply the induction hypothesis and then this case is trivial.
\end{enumerate}
\end{description}
\end{proof}
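\noindent As an informal spot check of idempotence on a concrete instance (again outside the Coq development), one can apply the unifier computed by the earlier OCaml sketch twice and compare with applying it once.
\begin{small}
\begin{verbatim}
(* Idempotence on an example: s(s(t)) = s(t) for the computed unifier s. *)
let () =
  match unify [ (TVar 0, Arrow (TVar 1, TVar 2)); (TVar 1, TVar 3) ] with
  | None -> assert false
  | Some s ->
      let t = Arrow (TVar 0, Arrow (TVar 1, TVar 3)) in
      assert (apply_subst s (apply_subst s t) = apply_subst s t)
\end{verbatim}
\end{small}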
\subsection{Axiom vi}
The theorem we must prove is:
\begin{theorem}\label{thm:empty_map_axiom}
$\forall \sigma. \mathsf{unify}\;[\;] = \sigma \Rightarrow \;\sigma = \sigma_\mathbb{E} $
\end{theorem}
\begin{proof}
Choose an arbitrary $\sigma$. Assume $\mathsf{unify}\;[\;] = \sigma$. Unfold the definition of $\mathsf{unify}$. Then we know $\sigma = \sigma_\mathbb{E}$ as was to be shown.
\end{proof}
\subsection{Axiom vii}
The main proof requires a lemma, which we mention next.
\begin{lemma}\label{lemma:unify_and_append_helper2}
$\forall C, C'. \;\forall \alpha.\; \forall \tau.\; \{\alpha \mapsto \tau\}(C)\; {++}\; \{\alpha \mapsto \tau\}(C') = \{\alpha \mapsto \tau\} (C \;{++}\; C')$
\end{lemma}
\noindent
The theorem we must prove is:
\begin{theorem}
$\forall C, C_2. \;\forall \sigma', \sigma'', \sigma'''. \;(\mathsf{unify}(C) = \sigma'\: \wedge \: \mathsf{unify}(\sigma'(C_2)) = \sigma''\; \wedge\; \mathsf{unify}(C\; {++}\; C_2) = \sigma''')$\\
\mbox{\hspace{2cm}}$\Rightarrow \;\sigma''' \approx (\sigma' \circ \sigma'')$
\end{theorem}
\begin{proof}
Pick an arbitrary $C$. By Theorem \ref{thm:composition_apply} and unfolding the definition of $\approx$, we must show:\\
$\forall C_2. \;\forall \sigma', \sigma'', \sigma'''. \;(\mathsf{unify}(C) = \sigma'\: \wedge \: \mathsf{unify}(\sigma'(C_2)) = \sigma'' \: \wedge \: \mathsf{unify}(C\: \mathit{++}\: C_2) = \sigma''')$\\
$\mbox{\hspace{3cm}} \Rightarrow \forall \alpha'. \sigma'''(\alpha') = \sigma'' (\sigma' (\alpha'))$.\\
By functional induction on $\mathsf{unify}(C)$, there are two main cases:
\begin{description}
\item Case $C = [\;]$. Follows from Theorem \ref{thm:empty_map_axiom} and the assumptions.
\item Case $C \neq [\;]$.
Consider the various cases based on the constraint at the head of the constraint list.
\begin{enumerate}
\item Case $(\alpha \cequal \alpha)::C'$. This case follows from the induction hypothesis and the definition of append.
\item Case $(\alpha \cequal \beta)::C'$ and $ \alpha \neq \beta$. Similar to case 3 below.
\item Case $(\alpha \cequal \tau_1 \rightarrow \tau_2)::C'$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
We know $\mathsf{unify}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma$.
The induction hypothesis is:\\
$\forall C_1.\; \forall \sigma_1, \sigma_2, \sigma_3.$\\
$\mbox{\hspace{1cm}} \;(\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) = \sigma_1 \; \wedge$\\
$\mbox{\hspace{1.25cm}} \mathsf{unify}(\sigma_1(C_1)) = \sigma_2 \; \wedge $\\
$\mbox{\hspace{1.25cm}}\mathsf{unify} ((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C')) {++} C_1) = \sigma_3)$\\
$\mbox{\hspace{2cm}}\Rightarrow \forall \alpha''.\; \sigma_3 (\alpha'') = \sigma_2 (\sigma_1 (\alpha''))$.\\
We must show:\\
$\forall C_2.\; \forall \sigma', \sigma'', \sigma'''.$\\
$\mbox{\hspace{1cm}}(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma = \sigma' \wedge$\\
$\mbox{\hspace{1.25cm}}\mathsf{unify} (\sigma' ( C_2)) = \sigma'' \wedge$\\
$\mbox{\hspace{1.25cm}}\mathsf{unify} (((\alpha \cequal \tau_1 \rightarrow \tau_2)::C') {++} C_2) = \sigma''')$\\
$\mbox{\hspace{2cm}}\Rightarrow \forall \alpha'. \; \sigma'''(\alpha') = \sigma'' (\sigma' (\alpha'))$.\\
Pick an arbitrary $C_2, \sigma', \sigma''$ and $\sigma'''$.
Assume
$\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma = \sigma'$ and \\
$\mathsf{unify}(\sigma'( C_2)) = \sigma''$ and
$\mathsf{unify}(((\alpha \cequal \tau_1 \rightarrow \tau_2)::C') {++} C_2) = \sigma'''$.
By the definition of append, the last assumption is $\mathsf{unify} ((\alpha \cequal \tau_1 \rightarrow \tau_2)::(C' {++} C_2)) = \sigma'''$.\\ Unfolding the $\mathsf{unify}$ definition once, we know
$\mathsf{unify} (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C'\:\: {++}\:\: C_2)) = {\sigma}_T$, where
$\sigma''' = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_T$. Also, since $\sigma' = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma$,
we know $\mathsf{unify}((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma)(C_2)) = \sigma''$.
Since we know $\sigma''' = \{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_T$,
we must show $\forall \alpha'. \; (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} \circ \sigma_T)(\alpha') = \sigma''((\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}\circ \sigma)(\alpha'))$.
Pick an arbitrary $\alpha'$. By Theorem \ref{thm:composition_apply}, we must show \\
$\sigma_T(\{\alpha \mapsto \tau_1 \rightarrow \tau_2\} (\alpha')) = \sigma'' (\sigma (\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(\alpha')))$.
There are two cases:
\begin{enumerate}
\item Case $\alpha = \alpha'$. Then we must show $\sigma_T(\tau_1 \rightarrow \tau_2) = \sigma''(\sigma(\tau_1 \rightarrow \tau_2))$. But by Lemma \ref{lemma:squiggle_ext_lift}, we must show
$\forall \alpha'''.\;\sigma_T(\alpha''') = \sigma''(\sigma(\alpha'''))$. Pick an arbitrary $\alpha'''$ and so we must show
$\sigma_T(\alpha''') = \sigma''(\sigma(\alpha'''))$.
But that follows from the induction hypothesis (by choosing $C_1$ to be $\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C_2)$, $\sigma_1$ to be $\sigma$, $\sigma_2$ to be $\sigma''$ and $\sigma_3$ to be $\sigma_T$) and the definition of substitution composition
and Lemma \ref{lemma:unify_and_append_helper2} and the assumptions.
\item Case $\alpha \neq \alpha' $.
Then we must show $\sigma_T(\alpha') = \sigma''(\sigma(\alpha'))$.
But that follows from the induction hypothesis (by choosing $C_1$ to be $\{\alpha \mapsto \tau_1 \rightarrow \tau_2\}(C_2)$, $\sigma_1$ to be $\sigma$, $\sigma_2$ to be $\sigma''$ and $\sigma_3$ to be $\sigma_T$) and the definition of substitution composition
and Lemma \ref{lemma:unify_and_append_helper2} and the assumptions.
\end{enumerate}
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \alpha)::C$ and $\alpha \notin \mathsf{FTV}(\tau_1 \rightarrow \tau_2)$.
Same as the above case.
\item Case $(\tau_1 \rightarrow \tau_2 \cequal \tau_3 \rightarrow \tau_4 )::C$.
Apply the induction hypothesis.
\end{enumerate}
\end{description}
\end{proof}
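\noindent Likewise, the compositionality expressed by this axiom can be spot-checked on a small instance with the earlier OCaml sketch; the two sides are compared extensionally on the variables occurring in the problem.
\begin{small}
\begin{verbatim}
(* unify (C ++ C2) agrees extensionally with
   (unify C) composed with (unify (sigma'(C2))). *)
let () =
  let c  = [ (TVar 0, Arrow (TVar 1, TVar 2)) ] in
  let c2 = [ (TVar 1, TVar 3) ] in
  match unify c with
  | None -> assert false
  | Some s ->
      (match unify (apply_subst_constr s c2), unify (c @ c2) with
       | Some s', Some s_all ->
           List.iter
             (fun a -> assert (apply_subst s_all (TVar a)
                               = apply_subst (compose s s') (TVar a)))
             [ 0; 1; 2; 3 ]
       | _ -> assert false)
\end{verbatim}
\end{small}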
\section{Related Work and Conclusions}
\label{sec:conclusions}
Unification is a fundamental component of type inference. There are formalizations of the unification algorithm in a number of different
theorem provers \cite{Blanqui_unification, paper:paulsonLCF, phdthesis:rouyer}. We comment on the implementation in the CoLoR library \cite{paper:color}. CoLoR
is an extensive and very successful library supporting reasoning about
termination and rewriting. A Coq implementation of the unification
algorithm was recently released \cite{Blanqui_unification}. Our implementation
differs from theirs in a number of ways. Perhaps the most significant
difference is that we represent substitutions as finite maps, whereas in CoLoR the substitutions are represented
by functions from type variables to a
generalized term structure. The axioms verified here are not explicitly
verified in CoLoR; however, their library could serve as a basis for doing
so. We believe that the lemmas supporting our verification could be translated
into their more general framework but that the proofs would be significantly
different because we use functional induction, which follows the structure of our
algorithm. The unification algorithm in CoLoR is specified in a significantly different
style (as an iterated step function).
Though many lemmas were simple, many others required generalization in order
for the proofs to go through. Our choice of the finite maps library to represent
substitutions helped us significantly. Coq's finite maps library is expressive
enough to specify complicated definitions (substitution composition, range
elements), yet reasoning with them remains simple if we abstract away from the
actual definition and look at the extensional behavior instead. Since we worked
through an interface, we could not reason in terms of ordinary substitution
equality, and used extensional equality instead.
Our specification of unification was in a functional style, but the definition
was general recursive. This meant that we had to prove termination using a
well-founded ordering. Once termination was established, the {\tt functional
induction} tactic helped us immensely in reasoning about the first-order
unification algorithm.
The entire formalization (all seven axioms) is
done in Coq version 8.1.pl3, in around 5000 lines of specifications and tactics,
and is available online at \url{http://www.cs.uwyo.edu/~skothari}.
We would like to thank Santiago Zanella (INRIA - Sophia Antipolis) for showing
us how to encode lexicographic ordering for 3-tuples in Coq. We thank
Frederic Blanqui for answering our queries regarding the new release of CoLoR
library, Laurent Th\'ery for making his Coq
formulation of Sudoku \cite{paper:sudoku} available on the web, St\'ephane Lescuyer and other Coq-club members
for answering our queries on the Coq-club mailing list, and Christian Urban (TU Munich) for discussing at length the MGU axioms used in their verification of Algorithm W \cite{bookchapter:urbannipkow}. Finally, we want to thank the anonymous referees for their detailed comments and
suggestions on an earlier draft of this paper, which greatly improved its presentation.
\end{document}
\begin{document}
\title{Conditional coherent control with superconducting artificial atoms}
\author{Chang-Kang Hu}
\email{huck@sustech.edu.cn}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Jiahao Yuan}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Bruno A. Veloso}
\affiliation{Departamento de F\'{i}sica, Universidade Federal de S\~ao Carlos, Rodovia Washington Lu\'{i}s, km 235 - SP-310, 13565-905 S\~ao Carlos, SP, Brazil}
\author{Jiawei Qiu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Yuxuan Zhou}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Libo Zhang}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{\\Ji Chu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Orkesh Nurbolat}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing, Jiangsu 210093, China}
\author{Ling Hu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Jian Li}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Yuan Xu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Youpeng Zhong}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Song Liu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{\\Fei Yan}
\email{yanf7@sustech.edu.cn}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{Dian Tan}
\email{tand@sustech.edu.cn}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\author{R. Bachelard~\orcidlink{0000-0002-6026-509X}}
\affiliation{Departamento de F\'{i}sica, Universidade Federal de S\~ao Carlos, Rodovia Washington Lu\'{i}s, km 235 - SP-310, 13565-905 S\~ao Carlos, SP, Brazil}
\affiliation{Universit\'e C\^ote d'Azur, CNRS, Institut de Physique de Nice, 06560 Valbonne, France}
\author{Alan C. Santos~\orcidlink{0000-0002-6989-7958}}
\email{ac\_santos@df.ufscar.br}
\affiliation{Departamento de F\'{i}sica, Universidade Federal de S\~ao Carlos, Rodovia Washington Lu\'{i}s, km 235 - SP-310, 13565-905 S\~ao Carlos, SP, Brazil}
\affiliation{Department of Physics, Stockholm University, AlbaNova University Center 106 91 Stockholm, Sweden}
\author{C. J. Villas-Boas~\orcidlink{0000-0001-5622-786X}}
\affiliation{Departamento de F\'{i}sica, Universidade Federal de S\~ao Carlos, Rodovia Washington Lu\'{i}s, km 235 - SP-310, 13565-905 S\~ao Carlos, SP, Brazil}
\author{Dapeng Yu}
\affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{International Quantum Academy, Futian District, Shenzhen, Guangdong 518048, China}\affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}\affiliation{Department of Physics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\maketitle
\onecolumngrid
\noindent\textbf{Controlling the flow of quantum information is a fundamental task for quantum computers, one that is impractical to realize with classical devices. Coherent devices able to process quantum states are thus required to route the states carrying the information.
In this paper we experimentally demonstrate the smallest quantum transistor for superconducting processors, composed of \textit{collector} and \textit{emitter} qubits and a coupler acting as the gate.
The interaction strength between the collector and emitter is controlled by tuning the frequency and the state of the gate qubit, effectively implementing a quantum switch. From the truth-table measurement (open-gate fidelity 93.38\%, closed-gate fidelity 98.77\%), we verify the high performance of the quantum transistor. We also show that taking into account the third energy level of the qubits is critical to achieving a high-fidelity transistor. The presented device has strong potential for quantum information processing in superconducting platforms.
}
\twocolumngrid
Thanks to the advent of semiconductor physics and advances in solid-state physics~\cite{Ridley:Book,Phillips:Book}, devices based on quantum effects have been used to design first-generation quantum technologies.
These devices exploit tunneling and the band structure of complex systems and have been used to build transistors with nanometer-scale physical gates~\cite{Desai:16}, although the information flow obeys classical laws.
Furthermore, spin-based transistors have allowed for the manipulation and engineering of atom-like spins at an elementary level, with applicability to quantum information science~\cite{Igor:04}. Since transistors are fundamental hardware components in classical computers, transistors able to process bits of quantum information have been proposed as quantum analogs of such devices~\cite{Marchukov:16,sun2018,Loft:18,shan2018}. In this sense, quantum transistors are essential components for controlling the flow and processing of information in quantum computation.
Among the physical platforms that are candidates for the construction of quantum processors, superconducting circuit systems have stood out as a cutting-edge technology for the realization of a scalable quantum computer~\cite{arute2019quantum, gong2021quantum, wu2021strong}, in addition to being a promising platform to efficiently study quantum many-body physics~\cite{You:11,Wang:20,ZhangKe:21,Zanner:22}. Although superconducting circuits present a multilevel structure (artificial atoms), they can also be operated as two-level systems (qubits) to implement tunable interactions between the parts of a superconducting processor~\cite{Yan:18, Li:20,Han:20,Feng:20,collodo2020implementation, Qiu:21, Jin:21, stehlik2021tunable, sung2021realization}.
In this paper, using two superconducting qubits together with a frequency-tunable coupler~\cite{Qiu:21}, we demonstrate a coherent, state-switchable transfer of information, effectively realizing a quantum transistor. The effective dynamics of the superconducting qubits with tunable interactions is investigated theoretically, which allows us to identify the critical role of the third energy level. Accounting for the multilevel structure of these artificial atoms turns out to be crucial to achieve high-fidelity operations.
\begin{figure}
\caption{(a) Chip with the superconducting quantum circuits used in our experiment. (b) Schematic diagram of the three interacting components of the circuit, with the tunable control qubit in-between the two target qubits. (c) Schematic representation of the superconducting quantum transistor, where the information is encoded in the collector qubit and sent to the emitter qubit in a coupler-state-dependent way. The reversibility of the quantum evolution allows one to achieve a two-way transistor for quantum information.}
\label{Fig1}
\end{figure}
Our superconducting circuit is schematically presented in Figs.~\ref{Fig1}{\color{blue}a} and~\ref{Fig1}{\color{blue}b}, where the tunable coupler qubit ($C$) lies in-between two Xmon superconducting artificial atoms ($Q_1$ and $Q_2$). Using an external flux (chain control $Z$) across the SQUID loop, the frequencies of the atom $Q_1$ and of the coupler are made tunable, whereas that of the atom $Q_2$ remains fixed.
\begin{table}[b]
\centering
\scriptsize
\caption{Parameters of the superconducting qubits used in our experiment.}
\begin{ruledtabular}
\begin{tabular}{rccccccp{0cm}}
\hspace{0.5cm}
{\bf \small Qubit} &
{\small $\mathrm{\bf Freq}^{\mathrm{\bf Max}}$} &
{\small $\mathrm{\bf Freq}^{\mathrm{ \bf Idling}}$} &
{\bf \small $\alpha$} &
{ \small $T_{1}^{\mathrm{\bf Idling}}$} &
{\bf \small $T_{2}^{\mathrm{ \bf Idling}}$ }&
{\bf \small $T_{\mathrm{\bf 2,Echo}}^{\mathrm{\bf Idling}}$ }& \\
\hline
\hline\\[-2mm]
Qubit 1 & 5.230 GHz & 4.670 GHz & $-222$ MHz & 6.51 $\mu$s & 0.54 $\mu$s & 3.83 $\mu$s &\\
Coupler & 8.831 GHz & 6.183 GHz & $-378$ MHz & 4.06 $\mu$s & 0.27 $\mu$s & 2.42 $\mu$s &\\
Qubit 2 & --- & 4.619 GHz & $-242$ MHz & 6.58 $\mu$s & 7.43 $\mu$s & 13.02 $\mu$s &\\
\end{tabular}
\end{ruledtabular}
\label{tab_para}
\end{table}
As a preliminary step to realize the transistor, let us discuss the nature of our artificial atoms:
they cannot be treated as simple two-level systems, as was done previously~\cite{Yan:18, Li:20,Han:20,Feng:20,collodo2020implementation, Qiu:21, Jin:21, stehlik2021tunable, sung2021realization}.
The system is thus described by a Hamiltonian of the form $H\!=\!H_{0} + V_{0}$, where
\begin{align}
\label{H0}
H_{0} &= \hbar\sum\nolimits_{i=1,2,c} \left(\omega_i \, a_i^\dagger a_i + \frac{\alpha_i}{2} \, a_i^\dagger a_i^\dagger a_i a_i \right) ,
\end{align}
is the bare Hamiltonian of the atoms and the coupler, with $a_i^\dagger$ and $a_i$ their creation and annihilation operators, and $\omega_{i}$ the transition frequency between the ground state and the first excited state. $\alpha_{i}$ is the energy-level anharmonicity: a sufficiently large $|\alpha_{i}|$ corresponds to the two-level limit, while $\alpha_{i}=0$ corresponds to a harmonic oscillator. The term
\begin{align}
V_{0} = \hbar\sum\nolimits_{i = 1,2}\left[g_{i} \left( a_i^\dagger a_{\mathrm{c}} + a_i a_{\mathrm{c}}^\dagger \right)\right] + \hbar g_{12}\left( a_1^\dagger a_2 + a_1 a_2^\dagger \right) ,
\end{align}
describes the interaction between the components of the superconducting circuit, with $g_{i}$ the (capacitive) coupling strength between the $i$th atom and the coupler. Such a design allows for an atom-coupler coupling $g_{i}$ much stronger than the capacitive atom-atom coupling $g_{12}$ ($g_{12}\!\ll\!g_{i}$).
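As a minimal numerical sketch of the Hamiltonians $H_0$ and $V_0$ above (a Python illustration, not the calibration code of the experiment; frequencies and anharmonicities are rounded from Table~\ref{tab_para}, while the couplings $g_1$, $g_2$, $g_{12}$ are assumed values), the circuit can be modeled with each mode truncated to three levels and diagonalized directly:
\begin{verbatim}
# Sketch: three-level model of Q1, coupler and Q2 (hbar = 1, GHz units).
# g, g12 below are assumptions chosen only for illustration.
import numpy as np

dim = 3                                      # levels kept per mode
a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # annihilation operator
n_op = a.T @ a                               # number operator
kerr = a.T @ a.T @ a @ a                     # a^dag a^dag a a

def embed(op, which, nmodes=3):
    """Single-mode operator acting on mode `which` of the 3-mode system."""
    ops = [np.eye(dim)] * nmodes
    ops[which] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

w   = [4.67, 6.18, 4.62]        # omega_1, omega_c, omega_2 (Table I, rounded)
al  = [-0.222, -0.378, -0.242]  # anharmonicities alpha_i (Table I)
g   = [0.10, 0.10]              # atom-coupler couplings g_1, g_2 (assumed)
g12 = 0.005                     # direct atom-atom coupling (assumed)

H0 = sum(w[i]*embed(n_op, i) + 0.5*al[i]*embed(kerr, i) for i in range(3))
V0 = sum(g[k]*(embed(a.T, m) @ embed(a, 1) + embed(a, m) @ embed(a.T, 1))
         for k, m in enumerate([0, 2]))
V0 += g12*(embed(a.T, 0) @ embed(a, 2) + embed(a, 0) @ embed(a.T, 2))

print(np.linalg.eigvalsh(H0 + V0)[:6])       # lowest dressed energies
\end{verbatim}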
As a first result, we show that assuming the superconducting artificial atoms behave as two-level systems leads to inaccurate predictions for the coupled dynamics, which can in turn affect the operation of the circuit.
In the explored regime, each atom must be treated at least as a three-level system, where the anharmonicity of the third energy level plays an important role when the state of the coupler changes. Let us set the two atoms $Q_1$ and $Q_2$ at the same frequency ($\omega_{1}\!=\!\omega_{2}\!=\!\omega$), and tune the coupler frequency toward the dispersive regime $|\omega_{\mathrm{c}} - \omega_{i}|\!\gg\!g_{i}$~\cite{Yan:18,Li:20,Han:20,Feng:20}. The effective Hamiltonian then reads $\tilde{H}_{\text{eff}}^{\ket{n}_{\mathrm{c}}} = \hbar \tilde{g}_{\text{eff}}^{\ket{n}_\mathrm{c}}(\Delta)[\sigma_{1}^{-}\sigma_{2}^{+} + \sigma_{2}^{-}\sigma_{1}^{+}]$, with $\Delta$ the atom-coupler detuning, where the effective coupling coefficients $\tilde{g}_{\text{eff}}^{\ket{n}_\mathrm{c}}(\Delta)$ read:
\begin{align}
\tilde{g}_{\text{eff}}^{\ket{n}_\mathrm{c}}(\Delta) = g_{12} + g_{1}g_{2}\left(\frac{2}{\Delta-\delta_{n1}\alpha_{\mathrm{c}}} - \frac{1}{\Delta}\right), \label{Eq-EffectiveH}
\end{align}
with $\delta_{nm}$ the Kronecker delta symbol. This result is consistent with the two-level approach adopted in Refs.~\cite{Yan:18,Li:20,Han:20,Feng:20} {\it when the coupler is in the ground state}. However, when the coupler is in the first excited state, the anharmonicity $\alpha_{\mathrm{c}}$ comes into play and modifies the effective coupling. This state-dependent coupling has a direct impact on the operation of the transistor, as we shall now see.
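A short sketch (with assumed, illustrative parameter values; not the analysis code of the experiment) makes the state dependence of Eq.~\eqref{Eq-EffectiveH} explicit:
\begin{verbatim}
# Sketch: effective Q1-Q2 coupling of Eq. (3) for the coupler in |0> or |1>.
# g1, g2, g12 are assumptions; alpha_c is taken from Table I (GHz units).
import numpy as np

def g_eff(delta, n, g1=0.10, g2=0.10, g12=0.005, alpha_c=-0.378):
    shift = alpha_c if n == 1 else 0.0      # delta_{n1} * alpha_c
    return g12 + g1 * g2 * (2.0 / (delta - shift) - 1.0 / delta)

for d in np.linspace(-2.0, -0.8, 5):        # detuning Delta (GHz)
    print(f"Delta={d:+.2f}  g|0>={g_eff(d,0):+.5f}  g|1>={g_eff(d,1):+.5f}")
\end{verbatim}
For the coupler in $\ket{0}$ the formula reduces to the familiar $g_{12}+g_{1}g_{2}/\Delta$, while for $\ket{1}$ the $\alpha_{\mathrm{c}}$-shifted term displaces the zero-coupling point, which is precisely the knob exploited below.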
\begin{figure}
\caption{Effective coupling strength between two target qubits under the control of the quantum state and frequency of the coupler. (a) Pulse sequence of the experimental implementation to measure the effective coupling strength. (b) Experimental data for resonant exchange between $\ket{0}
\label{Fig2}
\end{figure}
\begin{figure*}
\caption{(a,d) Sketch of the quantum transistor operation, which allows for the state transfer from $Q_{2}
\label{Fig3}
\end{figure*}
The contribution of the third level is probed in our setup by implementing the procedure sketched in Fig.~\ref{Fig2}{\color{blue}a}.
We use a power splitter to combine a DC signal and a pulse signal to control the frequency of the frequency-tunable atom and coupler. The DC signal is a bias used to set the idling frequencies of $Q_1$ and of the coupler, which remain unchanged during the experiment. We then set the coupler idling frequency so as to completely turn off the effective coupling between $Q_1$ and $Q_2$ (white horizontal dotted line in Fig.~\ref{Fig2}{\color{blue}b}) when the coupler is in the ground state, and we set the $Q_1$ idling frequency about $50$~MHz above that of $Q_2$. In this case, the interaction between $Q_1$, $Q_2$ and the coupler can be neglected, and the computational basis approximates the eigenstates of the bare system. Through this procedure, one can then use a single-qubit gate ($\pi$-pulse) to efficiently prepare the system in the state $\ket{0}_{\mathrm{c}}\ket{01}_{12}$ or $\ket{1}_{\mathrm{c}}\ket{01}_{12}$.
The effective interaction is then turned on by putting the atom $Q_1$ at resonance with $Q_2$. After the interaction step, we switch back the frequency of the atoms and coupler to their initial configuration (idling point) to measure the fidelity in the binary state detection (as detailed in the Supplementary Material).
The above sequence is used to monitor the oscillation of population between $Q_1$ and $Q_2$: this only requires measuring the population of the two-qubit state $\ket{0_{1} 1_{2}}$, independently of the coupler state, which we present in Fig.~\ref{Fig2}{\color{blue}b}. The coupling coefficient $\tilde{g}_{12}^{\ket{n}_{\mathrm{c}}}$ is extracted from these oscillations by fitting the oscillating dynamics of this population with a sine function. Its dependence on the detuning $\Delta$ and the coupler state $\ket{n}_{\mathrm{c}}$ is shown in Fig.~\ref{Fig2}{\color{blue}c}. As mentioned before, for the coupler in the ground state, the measured behaviour of the coupling $\tilde{g}_{12}^{\ket{0}_{\mathrm{c}}}$ is in very good agreement with the theoretical prediction even when the atoms are modeled as two-level systems, since, with only one excitation in the whole system, the higher levels do not play a role. On the other hand, the two-level model fails to predict the behavior for an excited coupler, $\tilde{g}_{12}^{\ket{1}_{\mathrm{c}}}$, which may degrade the control of two-qubit operations.
In contrast, our three-level approach leads to an effective coupling for which theory (see Eq.~\eqref{Eq-EffectiveH}) and experiment (see Fig.~\ref{Fig2}{\color{blue}c}) agree very well. This leads to a dramatic increase in the controllability of the system evolution and, consequently, in the robustness and fidelity of the resulting quantum computation processes.
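The extraction of $\tilde{g}_{12}^{\ket{n}_{\mathrm{c}}}$ described above can be summarized by the following sketch (synthetic data standing in for a single cut of Fig.~\ref{Fig2}{\color{blue}b}; the oscillation frequency and noise level are assumptions):
\begin{verbatim}
# Sketch: extract the coupling from the |0_1 1_2> population oscillation
# by fitting a sine, as described in the text.  Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def osc(t, f, A, phi, off):
    return A * np.sin(2*np.pi*f*t + phi) + off

t = np.linspace(0, 400e-9, 81)                        # interaction time (s)
g_true = 4.2e6                                        # assumed g/2pi (Hz)
p01 = 0.5 + 0.5*np.cos(2*np.pi*(2*g_true)*t)          # P(|0_1 1_2>) = cos^2(gt)
p01 += np.random.normal(0.0, 0.02, t.size)            # measurement noise

popt, _ = curve_fit(osc, t, p01, p0=[8e6, 0.5, np.pi/2, 0.5])
print(f"2g/2pi = {popt[0]/1e6:.2f} MHz  ->  g/2pi = {popt[0]/2e6:.2f} MHz")
\end{verbatim}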
Let us now demonstrate how a quantum transistor can be achieved using our circuit, which illustrates the coherent manipulation of the quantum information flow through the system.
To this end, we exploit the dependence of the coupling on the coupler state, which allows us to realize a state-controlled gate.
The qubits $Q_1$ and $Q_2$ now correspond to the collector and emitter, in which we encode and read out the quantum information, while the coupler acts as the control gate.
First we prepare the information to be transferred in the emitter $Q_2$, while the coupler (transistor base) remains in its ground state, see Fig.~\ref{Fig3}{\color{blue}a}. We set the coupler frequency at $\omega_{\mathrm{c}}\!=\!6.183 $~GHz and we turn on the effective interaction between $Q_1$ and $Q_2$. In Fig.~\ref{Fig3}{\color{blue}b}, we present the population of the different states when the information encoded corresponds to a single excitation $\ket{\psi}_{2}\!=\!\ket{1}_{2}$. The population then remains blockaded in state $\ket{0_{1} 1_{2}}$, that is, the information is not transferred to the collector. The efficiency of this blockade is here limited by the decoherence timescale of the emitter, as shown in Table~\ref{tab_para}.
In order to show that our device truly allows for the coherent manipulation of the information (transfer and blockade mechanism), we reconstruct its truth table in the computational basis, see Fig.~\ref{Fig3}{\color{blue}c}. To this end, we apply the sequence described above to the other states needed to reconstruct the truth table, and the fidelity of the operation is characterized by $F\!=\!(1/4)\mathrm{tr}(M_{\mathrm{exp}}M_{\mathrm{id}})$, which reaches $F_{\mathrm{clos}}\!=\!98.77(81)\%$ for the closed gate. Here $M_{\mathrm{id}}=\mathbbm{1}_{1} \otimes \mathbbm{1}_{2}$ and $M_{\mathrm{id}} = \mathrm{SWAP}_{12}$ when the coupler is in the state $\ket{0}$ and $\ket{1}$, respectively, where $\mathrm{SWAP}_{12}$ is the quantum SWAP gate between $Q_1$ and $Q_2$.
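As an illustration of this figure of merit (the measured truth tables below are made-up examples, not our data), the fidelity can be evaluated as follows:
\begin{verbatim}
# Sketch: truth-table fidelity F = (1/4) tr(M_exp M_id) for the two gate
# settings.  M_exp here is a hypothetical measured table for illustration.
import numpy as np

I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

def fidelity(M_exp, M_id):
    return np.trace(M_exp @ M_id) / 4.0

M_closed = 0.98*I4   + 0.02*np.full((4, 4), 0.25)   # coupler in |0>
M_open   = 0.93*SWAP + 0.07*np.full((4, 4), 0.25)   # coupler in |1>
print("closed gate:", fidelity(M_closed, I4))
print("open gate:  ", fidelity(M_open, SWAP))
\end{verbatim}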
In the case of the open gate (see Fig.~\ref{Fig3}{\color{blue}d}), the coupler is initially excited, which provides an effective coupling between $Q_1$ and $Q_2$ ($2\tilde{g}_{12}/2\pi\!=\!8.45$~MHz). The information then flows from $Q_2$ to $Q_1$, as illustrated in Fig.~\ref{Fig3}{\color{blue}c} by the transfer of population. This shows that our proposal constitutes a two-way transistor, where the information can coherently flow both ways through the system, a signature of the quantumness of our transistor (see details in the SM). From a fundamental point of view, this symmetric behavior is a direct consequence of the unitary nature of the dynamics and of the quantum version of the Poincaré recurrence theorem~\cite{Bocchieri:57}. The corresponding ideal and experimental truth tables for the open gate are shown in Fig.~\ref{Fig3}{\color{blue}f}, for which a fidelity $F_{\mathrm{open}}\!=\!93.38(68)\%$ is obtained.
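For reference, under the exchange Hamiltonian introduced above, the quoted open-gate coupling implies a full-transfer time of roughly $t=\pi/(2\tilde{g}_{12})\approx 59$~ns (a back-of-the-envelope estimate, not a calibrated gate time):
\begin{verbatim}
# Sketch: state-transfer time implied by 2g/2pi = 8.45 MHz under the
# exchange Hamiltonian g(s1- s2+ + s2- s1+); full transfer at g*t = pi/2.
import numpy as np

two_g_over_2pi = 8.45e6                  # Hz, value quoted in the text
g = np.pi * two_g_over_2pi               # angular coupling, g = 2pi*(g/2pi)
print(f"transfer time ~ {np.pi/(2*g)*1e9:.0f} ns")
\end{verbatim}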
In conclusion, we have presented a quantum transistor based on a three-component superconducting circuit, where high-fidelity operations are made possible by accounting for the multi-level structure of the superconducting artificial atoms.
The anharmonicity between the first two excited states of these atoms plays an important role, which manifests itself in the dependence of the dynamics on the coupler state.
In particular, a two-level description is valid only when the coupler remains in its ground state; since operating the transistor requires exciting the coupler, this anharmonicity inevitably affects its operation.
Our implementation of the transistor with Xmon artificial atoms confirms the role of this anharmonicity, with theoretical and experimental results in very good agreement. Our results thus allow us to design a precisely controllable mechanism for the conditional transfer of quantum information between two parts of a superconducting circuit. As sketched in Fig.~\ref{Fig1}{\color{blue}c}, the two atoms act as collector and emitter, while the coupler controls the information flow (i.e., it acts as the control gate), so that coherent control of quantum-state transfer is achieved between the two qubits. This architecture differs from previous proposals in the literature, where the control gate is encoded in the entangled state of two qubits~\cite{Marchukov:16}, so that our study constitutes the first experimental realization of a single-qubit quantum switch, that is, the smallest quantum transistor proposed so far.
In addition, the study of native multi-qubit quantum gates, such as the controlled-SWAP gate, is of great significance for the realization of quantum computation, one of the most challenging yet promising technologies in contemporary science~\cite{fedorov2012implementation, reed2012realization, patel2016quantum, gao2019entanglement}.
The scheme used to implement the quantum transistor can, in principle, be generalized to design a single-shot controlled-SWAP gate in superconducting circuit systems. In such a setup, the high degree of control over the quantum information encoded in the coupler allows it to operate as the control qubit, with $Q_1$ and $Q_2$ as target qubits.
\begin{acknowledgments}
This work was supported by the Key-Area Research and Development Program of Guang-Dong Province (Grant No. 2018B030326001), the National Natural Science Foundation of China (U1801661, 12004167, 11934010), the China Postdoctoral Science Foundation (Grant No. 2020M671861, {2021T140648}), the Guangdong Innovative and Entrepreneurial Research Team Program (2016ZT06D348), the Guangdong Provincial Key Laboratory (Grant No.2019B121203002), the Natural Science Foundation of Guangdong Province (2017B030308003), and the Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20170412152620376, KYTDPT20181011104202253), and the NSF of Beijing (Grants No. Z190012). A.C.S., B. A. V., C.J.V.-B., and R.B. acknowledge the financial support of the São Paulo Research Foundation (FAPESP) (Grants No. 2018/15554-5, No. 2019/22685-1, No. 2019/11999-5, No. 2019/13143-0 and No. 2021/10224-0) and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES/STINT), Grants No.
88881.304807/2018-01, and No. 88887.512104/2020-00. R.B. and C.J.V.-B. benefitted from the support of the National Council for Scientific and Technological Development (CNPq) Grants No. 302981/2017-9, No. 409946/2018-4, and No. 311612/2021-0. C.J.V.-B. is also thankful for the support from the Brazilian National Institute of Science and Technology for Quantum Information (INCTIQ/CNPq) Grant No. 465469/2014-0.
\end{acknowledgments}
C.-K.H., J.Y. and B.A.V. contributed equally to this work.
\begin{thebibliography}{31}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Ridley}(2013)}]{Ridley:Book}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont
{Ridley}},\ }\href {\doibase 10.1093/acprof:oso/9780199677214.001.0001}
{\emph {\bibinfo {title} {Quantum Processes in Semiconductors}}}\ (\bibinfo
{publisher} {Oxford University Press},\ \bibinfo {year} {2013})\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Phillips}(2012)}]{Phillips:Book}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Phillips}},\ }\href {\doibase 10.1017/CBO9781139031066} {\emph {\bibinfo
{title} {Advanced Solid State Physics}}},\ \bibinfo {edition} {2nd}\ ed.\
(\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year}
{2012})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Desai}\ \emph {et~al.}(2016)\citenamefont {Desai},
\citenamefont {Madhvapathy}, \citenamefont {Sachid}, \citenamefont {Llinas},
\citenamefont {Wang}, \citenamefont {Ahn}, \citenamefont {Pitner},
\citenamefont {Kim}, \citenamefont {Bokor}, \citenamefont {Hu}, \citenamefont
{Wong},\ and\ \citenamefont {Javey}}]{Desai:16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~B.}\ \bibnamefont
{Desai}}, \bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont
{Madhvapathy}}, \bibinfo {author} {\bibfnamefont {A.~B.}\ \bibnamefont
{Sachid}}, \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Llinas}},
\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Wang}}, \bibinfo {author}
{\bibfnamefont {G.~H.}\ \bibnamefont {Ahn}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Pitner}}, \bibinfo {author} {\bibfnamefont {M.~J.}\
\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bokor}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hu}}, \bibinfo
{author} {\bibfnamefont {H.-S.~P.}\ \bibnamefont {Wong}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Javey}},\ }\href {\doibase
10.1126/science.aah4698} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {354}},\ \bibinfo {pages} {99}
(\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {\ifmmode \check{Z}\else
\v{Z}\fi{}uti\ifmmode~\acute{c}\else \'{c}\fi{}}\ \emph
{et~al.}(2004)\citenamefont {\ifmmode \check{Z}\else
\v{Z}\fi{}uti\ifmmode~\acute{c}\else \'{c}\fi{}}, \citenamefont {Fabian},\
and\ \citenamefont {Das~Sarma}}]{Igor:04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{\ifmmode \check{Z}\else \v{Z}\fi{}uti\ifmmode~\acute{c}\else \'{c}\fi{}}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Fabian}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Das~Sarma}},\ }\href {\doibase
10.1103/RevModPhys.76.323} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {323}
(\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Marchukov}\ \emph {et~al.}(2016)\citenamefont
{Marchukov}, \citenamefont {Volosniev}, \citenamefont {Valiente},
\citenamefont {Petrosyan},\ and\ \citenamefont {Zinner}}]{Marchukov:16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.~V.}\ \bibnamefont
{Marchukov}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Volosniev}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Valiente}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Petrosyan}}, \ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Zinner}},\ }\href
{\doibase https://doi.org/10.1038/ncomms13070} {\bibfield {journal}
{\bibinfo {journal} {Nature Communications}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {1} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sun}\ \emph {et~al.}(2018)\citenamefont {Sun},
\citenamefont {Kim}, \citenamefont {Luo}, \citenamefont {Solomon},\ and\
\citenamefont {Waks}}]{sun2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Sun}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kim}}, \bibinfo
{author} {\bibfnamefont {Z.}~\bibnamefont {Luo}}, \bibinfo {author}
{\bibfnamefont {G.~S.}\ \bibnamefont {Solomon}}, \ and\ \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Waks}},\ }\href
{https://science.sciencemag.org/content/361/6397/57} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {361}},\ \bibinfo
{pages} {57} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {{Loft}}\ \emph {et~al.}(2018)\citenamefont {{Loft}},
\citenamefont {{Kristensen}}, \citenamefont {{Andersen}},\ and\ \citenamefont
{{Zinner}}}]{Loft:18}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~J.~S.}\
\bibnamefont {{Loft}}}, \bibinfo {author} {\bibfnamefont {L.~B.}\
\bibnamefont {{Kristensen}}}, \bibinfo {author} {\bibfnamefont {C.~K.}\
\bibnamefont {{Andersen}}}, \ and\ \bibinfo {author} {\bibfnamefont {N.~T.}\
\bibnamefont {{Zinner}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv e-prints}\ ,\ \bibinfo {eid} {arXiv:1802.04292}} (\bibinfo
{year} {2018})},\ \Eprint {http://arxiv.org/abs/1802.04292} {arXiv:1802.04292
[quant-ph]} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Shan}\ \emph {et~al.}(2018)\citenamefont {Shan},
\citenamefont {Dai}, \citenamefont {Shen},\ and\ \citenamefont
{Yi}}]{shan2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Shan}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Dai}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Shen}}, \ and\ \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Yi}},\ }\href {\doibase
https://doi.org/10.1038/s41598-018-31552-w} {\bibfield {journal} {\bibinfo
{journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo
{pages} {1} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Arute}\ \emph {et~al.}(2019)\citenamefont {Arute},
\citenamefont {Arya}, \citenamefont {Babbush}, \citenamefont {Bacon},
\citenamefont {Bardin}, \citenamefont {Barends}, \citenamefont {Biswas},
\citenamefont {Boixo}, \citenamefont {Brandao}, \citenamefont {Buell},
\citenamefont {Burkett}, \citenamefont {Chen}, \citenamefont {Chen},
\citenamefont {Chiaro}, \citenamefont {Collins}, \citenamefont {Courtney},
\citenamefont {Dunsworth}, \citenamefont {Farhi}, \citenamefont {Foxen},
\citenamefont {Fowler}, \citenamefont {Gidney}, \citenamefont {Giustina},
\citenamefont {Graff}, \citenamefont {Guerin}, \citenamefont {Habegger},
\citenamefont {Harrigan}, \citenamefont {Hartmann}, \citenamefont {Ho},
\citenamefont {Hoffmann}, \citenamefont {Huang}, \citenamefont {Humble},
\citenamefont {Isakov}, \citenamefont {Jeffrey}, \citenamefont {Jiang},
\citenamefont {Kafri}, \citenamefont {Kechedzhi}, \citenamefont {Kelly},
\citenamefont {Klimov}, \citenamefont {Knysh}, \citenamefont {Korotkov},
\citenamefont {Kostritsa}, \citenamefont {Landhuis}, \citenamefont
{Lindmark}, \citenamefont {Lucero}, \citenamefont {Lyakh}, \citenamefont
{Mandrà}, \citenamefont {McClean}, \citenamefont {McEwen}, \citenamefont
{Megrant}, \citenamefont {Mi}, \citenamefont {Michielsen}, \citenamefont
{Mohseni}, \citenamefont {Mutus}, \citenamefont {Naaman}, \citenamefont
{Neeley}, \citenamefont {Neill}, \citenamefont {Niu}, \citenamefont {Ostby},
\citenamefont {Petukhov}, \citenamefont {Platt}, \citenamefont {Quintana},
\citenamefont {Rieffel}, \citenamefont {Roushan}, \citenamefont {Rubin},
\citenamefont {Sank}, \citenamefont {Satzinger}, \citenamefont {Smelyanskiy},
\citenamefont {Sung}, \citenamefont {Trevithick}, \citenamefont
{Vainsencher}, \citenamefont {Villalonga}, \citenamefont {White},
\citenamefont {Yao}, \citenamefont {Yeh}, \citenamefont {Zalcman},
\citenamefont {Neven},\ and\ \citenamefont {Martinis}}]{arute2019quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Arute}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Arya}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Bacon}}, \bibinfo {author}
{\bibfnamefont {J.~C.}\ \bibnamefont {Bardin}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Biswas}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {F.~G. S.~L.}\
\bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {D.~A.}\
\bibnamefont {Buell}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Burkett}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Chiaro}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Collins}}, \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {Courtney}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Dunsworth}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Foxen}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Fowler}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Gidney}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Giustina}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Graff}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Guerin}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Habegger}}, \bibinfo {author} {\bibfnamefont {M.~P.}\
\bibnamefont {Harrigan}}, \bibinfo {author} {\bibfnamefont {M.~J.}\
\bibnamefont {Hartmann}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Ho}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hoffmann}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Huang}}, \bibinfo
{author} {\bibfnamefont {T.~S.}\ \bibnamefont {Humble}}, \bibinfo {author}
{\bibfnamefont {S.~V.}\ \bibnamefont {Isakov}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Kafri}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Kechedzhi}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont {P.~V.}\
\bibnamefont {Klimov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Knysh}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Korotkov}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Kostritsa}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Landhuis}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Lindmark}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Lucero}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Lyakh}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Mandrà}}, \bibinfo {author} {\bibfnamefont {J.~R.}\
\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{McEwen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}},
\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Mi}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Michielsen}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Mohseni}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Naaman}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Neeley}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {M.~Y.}\
\bibnamefont {Niu}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Ostby}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Petukhov}},
\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Platt}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Quintana}}, \bibinfo {author}
{\bibfnamefont {E.~G.}\ \bibnamefont {Rieffel}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont
{N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {K.~J.}\
\bibnamefont {Satzinger}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Smelyanskiy}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont
{Sung}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont
{Trevithick}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vainsencher}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Villalonga}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {White}},
\bibinfo {author} {\bibfnamefont {Z.~J.}\ \bibnamefont {Yao}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Yeh}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zalcman}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Neven}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\
\bibnamefont {Martinis}},\ }\href {\doibase 10.1038/s41586-019-1666-5}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {574}},\ \bibinfo {pages} {505} (\bibinfo {year}
{2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gong}\ \emph {et~al.}(2021)\citenamefont {Gong},
\citenamefont {Wang}, \citenamefont {Zha}, \citenamefont {Chen},
\citenamefont {Huang}, \citenamefont {Wu}, \citenamefont {Zhu}, \citenamefont
{Zhao}, \citenamefont {Li}, \citenamefont {Guo}, \citenamefont {Qian},
\citenamefont {Ye}, \citenamefont {Chen}, \citenamefont {Ying}, \citenamefont
{Yu}, \citenamefont {Fan}, \citenamefont {Wu}, \citenamefont {Su},
\citenamefont {Deng}, \citenamefont {Rong}, \citenamefont {Zhang},
\citenamefont {Cao}, \citenamefont {Lin}, \citenamefont {Xu}, \citenamefont
{Sun}, \citenamefont {Guo}, \citenamefont {Li}, \citenamefont {Liang},
\citenamefont {Bastidas}, \citenamefont {Nemoto}, \citenamefont {Munro},
\citenamefont {Huo}, \citenamefont {Lu}, \citenamefont {Peng}, \citenamefont
{Zhu},\ and\ \citenamefont {Pan}}]{gong2021quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Gong}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Zha}}, \bibinfo {author}
{\bibfnamefont {M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {H.-L.}\ \bibnamefont {Huang}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont
{Q.}~\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Zhao}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Qian}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ying}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Yu}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Fan}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Su}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Deng}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Rong}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Lin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xu}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Sun}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Liang}}, \bibinfo {author} {\bibfnamefont {V.~M.}\ \bibnamefont {Bastidas}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Nemoto}}, \bibinfo
{author} {\bibfnamefont {W.~J.}\ \bibnamefont {Munro}}, \bibinfo {author}
{\bibfnamefont {Y.-H.}\ \bibnamefont {Huo}}, \bibinfo {author} {\bibfnamefont
{C.-Y.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\
\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Zhu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont
{Pan}},\ }\href {\doibase 10.1126/science.abg7812} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {372}},\ \bibinfo
{pages} {948} (\bibinfo {year} {2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2021)\citenamefont {Wu},
\citenamefont {Bao}, \citenamefont {Cao}, \citenamefont {Chen}, \citenamefont
{Chen}, \citenamefont {Chen}, \citenamefont {Chung}, \citenamefont {Deng},
\citenamefont {Du}, \citenamefont {Fan}, \citenamefont {Gong}, \citenamefont
{Guo}, \citenamefont {Guo}, \citenamefont {Guo}, \citenamefont {Han},
\citenamefont {Hong}, \citenamefont {Huang}, \citenamefont {Huo},
\citenamefont {Li}, \citenamefont {Li}, \citenamefont {Li}, \citenamefont
{Li}, \citenamefont {Liang}, \citenamefont {Lin}, \citenamefont {Lin},
\citenamefont {Qian}, \citenamefont {Qiao}, \citenamefont {Rong},
\citenamefont {Su}, \citenamefont {Sun}, \citenamefont {Wang}, \citenamefont
{Wang}, \citenamefont {Wu}, \citenamefont {Xu}, \citenamefont {Yan},
\citenamefont {Yang}, \citenamefont {Yang}, \citenamefont {Ye}, \citenamefont
{Yin}, \citenamefont {Ying}, \citenamefont {Yu}, \citenamefont {Zha},
\citenamefont {Zhang}, \citenamefont {Zhang}, \citenamefont {Zhang},
\citenamefont {Zhang}, \citenamefont {Zhao}, \citenamefont {Zhao},
\citenamefont {Zhou}, \citenamefont {Zhu}, \citenamefont {Lu}, \citenamefont
{Peng}, \citenamefont {Zhu},\ and\ \citenamefont {Pan}}]{wu2021strong}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Wu}}, \bibinfo {author} {\bibfnamefont {W.-S.}\ \bibnamefont {Bao}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Cao}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{M.-C.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {T.-H.}\
\bibnamefont {Chung}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Deng}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Du}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Fan}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Guo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Han}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Hong}}, \bibinfo {author} {\bibfnamefont
{H.-L.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\
\bibnamefont {Huo}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Liang}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Lin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Lin}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Qian}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Qiao}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Rong}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Su}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Sun}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Wang}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Wang}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Yan}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Yang}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ying}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yu}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Zha}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Zhao}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Zhou}}, \bibinfo
{author} {\bibfnamefont {Q.}~\bibnamefont {Zhu}}, \bibinfo {author}
{\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont
{C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Zhu}}, \ and\ \bibinfo {author} {\bibfnamefont {J.-W.}\
\bibnamefont {Pan}},\ }\href {\doibase 10.1103/PhysRevLett.127.180501}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {127}},\ \bibinfo {pages} {180501} (\bibinfo {year}
{2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {You}\ and\ \citenamefont {Nori}(2011)}]{You:11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~Q.}\ \bibnamefont
{You}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\href {\doibase 10.1038/nature10122} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages}
{589} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2020)\citenamefont {Wang},
\citenamefont {Li}, \citenamefont {Feng}, \citenamefont {Song}, \citenamefont
{Song}, \citenamefont {Liu}, \citenamefont {Guo}, \citenamefont {Zhang},
\citenamefont {Dong}, \citenamefont {Zheng}, \citenamefont {Wang},\ and\
\citenamefont {Wang}}]{Wang:20}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {W.}~\bibnamefont {Feng}}, \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Guo}}, \bibinfo
{author} {\bibfnamefont {X.}~\bibnamefont {Zhang}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Dong}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Zheng}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Wang}}, \ and\ \bibinfo {author} {\bibfnamefont {D.-W.}\
\bibnamefont {Wang}},\ }\href {\doibase 10.1103/PhysRevLett.124.013601}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {124}},\ \bibinfo {pages} {013601} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {{Zhang}}\ \emph {et~al.}(2021)\citenamefont
{{Zhang}}, \citenamefont {{Li}}, \citenamefont {{Zhang}}, \citenamefont
{{Yuan}}, \citenamefont {{Chen}}, \citenamefont {{Ren}}, \citenamefont
{{Wang}}, \citenamefont {{Song}}, \citenamefont {{Wang}}, \citenamefont
{{Wang}}, \citenamefont {{Zhu}}, \citenamefont {{Agarwal}},\ and\
\citenamefont {{Scully}}}]{ZhangKe:21}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{{Zhang}}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {{Li}}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {{Zhang}}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {{Yuan}}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {{Chen}}}, \bibinfo {author} {\bibfnamefont
{W.}~\bibnamefont {{Ren}}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {{Wang}}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {{Song}}}, \bibinfo {author} {\bibfnamefont {D.-W.}\
\bibnamefont {{Wang}}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{{Wang}}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {{Zhu}}},
\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {{Agarwal}}}, \ and\
\bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont {{Scully}}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv e-prints}\
,\ \bibinfo {eid} {arXiv:2109.00964}} (\bibinfo {year} {2021})},\ \Eprint
{http://arxiv.org/abs/2109.00964} {arXiv:2109.00964 [quant-ph]} \BibitemShut
{NoStop}
\bibitem [{\citenamefont {Zanner}\ \emph {et~al.}(2021)\citenamefont {Zanner},
\citenamefont {Orell}, \citenamefont {Schneider}, \citenamefont {Albert},
\citenamefont {Oleschko}, \citenamefont {Juan}, \citenamefont {Silveri},\
and\ \citenamefont {Kirchmair}}]{Zanner:22}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Zanner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Orell}},
\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Schneider}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Albert}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Oleschko}}, \bibinfo {author}
{\bibfnamefont {M.~L.}\ \bibnamefont {Juan}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Silveri}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Kirchmair}},\ }\href {\doibase
https://doi.org/10.1038/s41567-022-01527-w} {\bibfield {journal} {\bibinfo
{journal} {Nature Physics}\ } (\bibinfo {year} {2021}),\
https://doi.org/10.1038/s41567-022-01527-w}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yan}\ \emph {et~al.}(2018)\citenamefont {Yan},
\citenamefont {Krantz}, \citenamefont {Sung}, \citenamefont {Kjaergaard},
\citenamefont {Campbell}, \citenamefont {Orlando}, \citenamefont
{Gustavsson},\ and\ \citenamefont {Oliver}}]{Yan:18}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Yan}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Krantz}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Sung}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author}
{\bibfnamefont {D.~L.}\ \bibnamefont {Campbell}}, \bibinfo {author}
{\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Gustavsson}}, \ and\ \bibinfo {author}
{\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\href {\doibase
10.1103/PhysRevApplied.10.054062} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Applied}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages}
{054062} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Li}\ \emph {et~al.}(2020)\citenamefont {Li},
\citenamefont {Cai}, \citenamefont {Yan}, \citenamefont {Wang}, \citenamefont
{Pan}, \citenamefont {Ma}, \citenamefont {Cai}, \citenamefont {Han},
\citenamefont {Hua}, \citenamefont {Han}, \citenamefont {Wu}, \citenamefont
{Zhang}, \citenamefont {Wang}, \citenamefont {Song}, \citenamefont {Duan},\
and\ \citenamefont {Sun}}]{Li:20}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Cai}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Yan}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Ma}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Cai}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Han}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Hua}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Wu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zhang}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Wang}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Duan}}, \ and\ \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Sun}},\ }\href {\doibase 10.1103/PhysRevApplied.14.024070}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf
{\bibinfo {volume} {14}},\ \bibinfo {pages} {024070} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Han}\ \emph {et~al.}(2020)\citenamefont {Han},
\citenamefont {Cai}, \citenamefont {Li}, \citenamefont {Wu}, \citenamefont
{Ma}, \citenamefont {Ma}, \citenamefont {Wang}, \citenamefont {Zhang},
\citenamefont {Song},\ and\ \citenamefont {Duan}}]{Han:20}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.~Y.}\ \bibnamefont
{Han}}, \bibinfo {author} {\bibfnamefont {T.~Q.}\ \bibnamefont {Cai}},
\bibinfo {author} {\bibfnamefont {X.~G.}\ \bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {Y.~K.}\ \bibnamefont {Wu}}, \bibinfo {author}
{\bibfnamefont {Y.~W.}\ \bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont
{Y.~L.}\ \bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {J.~H.}\
\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {H.~Y.}\ \bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {Y.~P.}\ \bibnamefont {Song}}, \
and\ \bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Duan}},\ }\href
{\doibase 10.1103/PhysRevA.102.022619} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo
{pages} {022619} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Feng}\ and\ \citenamefont {Wang}(2020)}]{Feng:20}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Feng}}\ and\ \bibinfo {author} {\bibfnamefont {D.-w.}\ \bibnamefont
{Wang}},\ }\href {\doibase 10.1103/PhysRevA.101.062312} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {101}},\
\bibinfo {pages} {062312} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Collodo}\ \emph {et~al.}(2020)\citenamefont
{Collodo}, \citenamefont {Herrmann}, \citenamefont {Lacroix}, \citenamefont
{Andersen}, \citenamefont {Remm}, \citenamefont {Lazar}, \citenamefont
{Besse}, \citenamefont {Walter}, \citenamefont {Wallraff},\ and\
\citenamefont {Eichler}}]{collodo2020implementation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont
{Collodo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Herrmann}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Lacroix}}, \bibinfo
{author} {\bibfnamefont {C.~K.}\ \bibnamefont {Andersen}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Remm}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Lazar}}, \bibinfo {author} {\bibfnamefont {J.-C.}\
\bibnamefont {Besse}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Walter}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}}, \
and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Eichler}},\ }\href
{\doibase 10.1103/PhysRevLett.125.240502} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo
{pages} {240502} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Qiu}\ \emph {et~al.}(2021)\citenamefont {Qiu},
\citenamefont {Zhou}, \citenamefont {Hu}, \citenamefont {Yuan}, \citenamefont
{Zhang}, \citenamefont {Chu}, \citenamefont {Huang}, \citenamefont {Liu},
\citenamefont {Luo}, \citenamefont {Ni}, \citenamefont {Pan}, \citenamefont
{Yang}, \citenamefont {Zhang}, \citenamefont {Chen}, \citenamefont {Deng},
\citenamefont {Hu}, \citenamefont {Li}, \citenamefont {Niu}, \citenamefont
{Xu}, \citenamefont {Yan}, \citenamefont {Zhong}, \citenamefont {Liu},
\citenamefont {Yan},\ and\ \citenamefont {Yu}}]{Qiu:21}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Qiu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhou}}, \bibinfo
{author} {\bibfnamefont {C.-K.}\ \bibnamefont {Hu}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Chu}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Huang}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Liu}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Luo}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Ni}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {X.-H.}\ \bibnamefont {Deng}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Niu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Xu}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Yan}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Zhong}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Yan}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yu}},\
}\href {\doibase 10.1103/PhysRevApplied.16.054047} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf {\bibinfo {volume}
{16}},\ \bibinfo {pages} {054047} (\bibinfo {year} {2021})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {{Jin}}(2021)}]{Jin:21}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{{Jin}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv
e-prints}\ ,\ \bibinfo {eid} {arXiv:2105.13306}} (\bibinfo {year} {2021})},\
\Eprint {http://arxiv.org/abs/2105.13306} {arXiv:2105.13306 [quant-ph]}
\BibitemShut {NoStop}
\bibitem [{\citenamefont {Stehlik}\ \emph {et~al.}(2021)\citenamefont
{Stehlik}, \citenamefont {Zajac}, \citenamefont {Underwood}, \citenamefont
{Phung}, \citenamefont {Blair}, \citenamefont {Carnevale}, \citenamefont
{Klaus}, \citenamefont {Keefe}, \citenamefont {Carniol}, \citenamefont
{Kumph}, \citenamefont {Steffen},\ and\ \citenamefont
{Dial}}]{stehlik2021tunable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Stehlik}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Zajac}},
\bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont {Underwood}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Phung}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Blair}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Carnevale}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Klaus}}, \bibinfo {author} {\bibfnamefont {G.~A.}\
\bibnamefont {Keefe}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Carniol}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kumph}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Steffen}}, \ and\
\bibinfo {author} {\bibfnamefont {O.~E.}\ \bibnamefont {Dial}},\ }\href
{\doibase 10.1103/PhysRevLett.127.080505} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo
{pages} {080505} (\bibinfo {year} {2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sung}\ \emph {et~al.}(2021)\citenamefont {Sung},
\citenamefont {Ding}, \citenamefont {Braum\"uller}, \citenamefont
{Veps\"al\"ainen}, \citenamefont {Kannan}, \citenamefont {Kjaergaard},
\citenamefont {Greene}, \citenamefont {Samach}, \citenamefont {McNally},
\citenamefont {Kim}, \citenamefont {Melville}, \citenamefont {Niedzielski},
\citenamefont {Schwartz}, \citenamefont {Yoder}, \citenamefont {Orlando},
\citenamefont {Gustavsson},\ and\ \citenamefont
{Oliver}}]{sung2021realization}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Sung}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ding}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Braum\"uller}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Veps\"al\"ainen}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Kannan}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Greene}}, \bibinfo {author} {\bibfnamefont {G.~O.}\
\bibnamefont {Samach}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{McNally}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kim}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Melville}}, \bibinfo
{author} {\bibfnamefont {B.~M.}\ \bibnamefont {Niedzielski}}, \bibinfo
{author} {\bibfnamefont {M.~E.}\ \bibnamefont {Schwartz}}, \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {Yoder}}, \bibinfo {author}
{\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Gustavsson}}, \ and\ \bibinfo {author}
{\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\href {\doibase
10.1103/PhysRevX.11.021058} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {021058}
(\bibinfo {year} {2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bocchieri}\ and\ \citenamefont
{Loinger}(1957)}]{Bocchieri:57}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bocchieri}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Loinger}},\ }\href {\doibase 10.1103/PhysRev.107.337} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo {volume} {107}},\
\bibinfo {pages} {337} (\bibinfo {year} {1957})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fedorov}\ \emph {et~al.}(2012)\citenamefont
{Fedorov}, \citenamefont {Steffen}, \citenamefont {Baur}, \citenamefont
{da~Silva},\ and\ \citenamefont {Wallraff}}]{fedorov2012implementation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Fedorov}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Steffen}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Baur}}, \bibinfo {author}
{\bibfnamefont {M.~P.}\ \bibnamefont {da~Silva}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Wallraff}},\ }\href
{https://www.nature.com/articles/nature10713} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {481}},\ \bibinfo {pages}
{170} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Reed}\ \emph {et~al.}(2012)\citenamefont {Reed},
\citenamefont {DiCarlo}, \citenamefont {Nigg}, \citenamefont {Sun},
\citenamefont {Frunzio}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{reed2012realization}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont
{Reed}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {DiCarlo}},
\bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Nigg}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Sun}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo {author} {\bibfnamefont
{S.~M.}\ \bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.~J.}\ \bibnamefont {Schoelkopf}},\ }\href
{https://www.nature.com/articles/nature10786} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {482}},\ \bibinfo {pages}
{382} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Patel}\ \emph {et~al.}(2016)\citenamefont {Patel},
\citenamefont {Ho}, \citenamefont {Ferreyrol}, \citenamefont {Ralph},\ and\
\citenamefont {Pryde}}]{patel2016quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont
{Patel}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ho}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Ferreyrol}}, \bibinfo {author}
{\bibfnamefont {T.~C.}\ \bibnamefont {Ralph}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\ }\href
{https://www.science.org/doi/10.1126/sciadv.1501531} {\bibfield {journal}
{\bibinfo {journal} {Science advances}\ }\textbf {\bibinfo {volume} {2}},\
\bibinfo {pages} {1501531} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gao}\ \emph {et~al.}(2019)\citenamefont {Gao},
\citenamefont {Lester}, \citenamefont {Chou}, \citenamefont {Frunzio},
\citenamefont {Devoret}, \citenamefont {Jiang}, \citenamefont {Girvin},\ and\
\citenamefont {Schoelkopf}}]{gao2019entanglement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~Y.}\ \bibnamefont
{Gao}}, \bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont {Lester}},
\bibinfo {author} {\bibfnamefont {K.~S.}\ \bibnamefont {Chou}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo {author}
{\bibfnamefont {M.~H.}\ \bibnamefont {Devoret}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href
{https://www.nature.com/articles/s41586-019-0970-4} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {566}},\ \bibinfo
{pages} {509} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dewes}\ \emph {et~al.}(2012)\citenamefont {Dewes},
\citenamefont {Ong}, \citenamefont {Schmitt}, \citenamefont {Lauro},
\citenamefont {Boulant}, \citenamefont {Bertet}, \citenamefont {Vion},\ and\
\citenamefont {Esteve}}]{Dewes2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Dewes}}, \bibinfo {author} {\bibfnamefont {F.~R.}\ \bibnamefont {Ong}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Schmitt}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Lauro}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Boulant}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Bertet}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Vion}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Esteve}},\ }\href {\doibase
10.1103/PhysRevLett.108.057002} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages}
{057002} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ficheux}\ \emph {et~al.}(2021)\citenamefont
{Ficheux}, \citenamefont {Nguyen}, \citenamefont {Somoroff}, \citenamefont
{Xiong}, \citenamefont {Nesterov}, \citenamefont {Vavilov},\ and\
\citenamefont {Manucharyan}}]{Ficheux2021}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont
{Ficheux}}, \bibinfo {author} {\bibfnamefont {L.~B.}\ \bibnamefont {Nguyen}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Somoroff}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Xiong}}, \bibinfo {author}
{\bibfnamefont {K.~N.}\ \bibnamefont {Nesterov}}, \bibinfo {author}
{\bibfnamefont {M.~G.}\ \bibnamefont {Vavilov}}, \ and\ \bibinfo {author}
{\bibfnamefont {V.~E.}\ \bibnamefont {Manucharyan}},\ }\href {\doibase
10.1103/PhysRevX.11.021026} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {021026}
(\bibinfo {year} {2021})}\BibitemShut {NoStop}
\end{thebibliography}
\onecolumngrid
\begin{center}
{\large{ {\bf Supplemental Material for: \\ Conditional coherent control with superconducting artificial atoms}}}
\vskip0.5\baselineskip{Chang-Kang Hu,$^{1,2,3,{\color{blue}\ast}}$ Jiahao Yuan,$^{1,2,3,4}$ Bruno A. Veloso,$^{5}$ Jiawei Qiu,$^{1,2,3,4}$ Yuxuan Zhou,$^{1,2,3,4}$ Libo Zhang,$^{1,2,3}$ \\
Ji Chu,$^{1,2,3}$ Orkesh Nurbolat,$^{1,2,3,6}$ Ling Hu,$^{1,2,3}$ Jian Li,$^{1,2,3}$ Yuan Xu,$^{1,2,3}$ Youpeng Zhong,$^{1,2,3}$ Song Liu,$^{1,2,3,{\color{blue}\ast}}$ \\
Fei Yan,$^{1,2,3,{\color{blue}\dagger}}$ Dian Tan,$^{1,2,3,{\color{blue}\ddagger}}$ R. Bachelard,$^{7,5}$ Alan C. Santos,$^{5,8,{\color{blue}\S}}$ C. J. Villas-Boas,$^{5}$ Dapeng Yu$^{1,2,3,4}$}
\vskip0.5\baselineskip{$^{1}$\textit{Shenzhen Institute for Quantum Science and Engineering, \\Southern University of Science and Technology, Shenzhen 518055, China}\\
$^{2}$\textit{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, \\Southern University of Science and Technology, Shenzhen 518055, China}
\\
$^{3}$\textit{Shenzhen Key Laboratory of Quantum Science and Engineering, \\Southern University of Science and Technology, Shenzhen 518055, China}
\\
$^{4}$\textit{Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China}
\\
$^{5}$\textit{Departamento de Física, Universidade Federal de São Carlos,\\ Rodovia Washington Luís, km 235 - SP-310, 13565-905 São Carlos, SP, Brazil}
\\
$^{6}$\textit{National Laboratory of Solid State Microstructures and Department of Physics, \\Nanjing University, Nanjing, Jiangsu 210093, China}
\\
$^{7}$\textit{Universit\'e C\^ote d'Azur, CNRS, Institut de Physique de Nice, 06560 Valbonne, France}
\\
$^{8}$\textit{Department of Physics, Stockholm University, AlbaNova University Center 106 91 Stockholm, Sweden}
}
\vskip0.5\baselineskip{$^{\color{blue}\ast}$huck@sustech.edu.cn, ~~~ $^{\color{blue}\dagger}$yanf7@sustech.edu.cn, ~~~ $^{\color{blue}\ddagger}$tand@sustech.edu.cn, ~~~ $^{\color{blue}\S}$ac\_santos@df.ufscar.br}
\end{center}
\twocolumngrid
\section{Experimental setup}
The transmon superconducting chip is installed inside a BlueFors XLD-1000 dilution refrigerator system, and its base temperature is under 10 mK. We magnetically shield the chip with a Cryoperm cylinder.
The electronics and sample diagram are shown in Fig.~\ref{Figure_Setup-SM}.
To perform the standard circuit-QED measurements, we apply a microwave tone to the input port of the readout transmission line. Four microwave isolators are placed before the high-electron-mobility transistor (HEMT) amplifier to prevent noise from higher-temperature stages. After being amplified by the HEMT amplifier at the 4~K stage and by a low-noise amplifier at room temperature, the readout signal is downconverted, and the demodulated IQ signals are digitized by analog-to-digital converters.
All of the control electronics used to apply the XY and Z controls of the qubits and the coupler are at room temperature. The control signals and the readout signals are programmed in Labber software and sent to the QuantumCTek arbitrary waveform generators (AWGs). The microwave pulses generated by the AWGs are then mixed with their respective local oscillators (LOs), which are provided by a commercial multichannel coherent microwave generator (Sinolink SLFS20). In the experiment, extra DC Z control signals are needed to set the idling points of the tunable qubit and the coupler,
so we use bias-tees to combine the DC signals with the fast Z control pulses.
\begin{figure*}
\caption{Electronics and sample schematic diagram of the experimental setup}
\label{Figure_Setup-SM}
\end{figure*}
\section{Readout errors correction}
Before characterizing the performance of our quantum transistor, we need to identify and correct the readout errors of the system. Readout errors consist of single-qubit state-assignment errors and readout cross-talk between qubits~\cite{Dewes2012, Ficheux2021}. Here, a transfer matrix $\mathcal{M}$ is adopted to correct both readout errors simultaneously. The transfer matrix acting on the joint readout populations is written as
\begin{equation}
\begin{pmatrix}
p_{00}'\\
p_{10}'\\
p_{01}'\\
p_{11}'
\end{pmatrix}
=
\begin{pmatrix}
M_{00}&M_{01}&M_{02}&M_{03} \\
M_{10}&M_{11}&M_{12}&M_{13} \\
M_{20}&M_{21}&M_{22}&M_{23} \\
M_{30}&M_{31}&M_{32}&M_{33} \\
\end{pmatrix}
\begin{pmatrix}
p_{00}\\
p_{10}\\
p_{01}\\
p_{11}
\end{pmatrix},
\label{eq:readout_error_correction}
\end{equation}
where $p_{ij}'$ are the measured qubit populations and $p_{ij}$ are the corrected qubit populations. To find $\mathcal{M}$, we prepare the two qubits in the states $\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$ successively, and perform single-shot measurements in the $\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$ basis. As shown in Fig.~\ref{fig:m_table_off}, we obtain $\mathcal{M}$ directly from the measured population matrix. The measured populations can then be corrected via $\vec{p} = \mathcal{M}^{-1} \vec{p}'$.
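For illustration, the correction step can be written in a few lines of Python. The sketch below uses a hypothetical calibration matrix and a hypothetical measured population vector (not the data of Fig.~\ref{fig:m_table_off}); it simply solves $\vec{p} = \mathcal{M}^{-1}\vec{p}'$, followed by clipping and renormalization to suppress small statistical artifacts.
\begin{verbatim}
import numpy as np

# Hypothetical calibration matrix: column j is the measured
# distribution over (00,10,01,11) for prepared state j.
M = np.array([[0.96, 0.03, 0.04, 0.01],
              [0.02, 0.94, 0.01, 0.05],
              [0.01, 0.01, 0.93, 0.04],
              [0.01, 0.02, 0.02, 0.90]])

# p' = (p00', p10', p01', p11'), hypothetical measured values
p_measured = np.array([0.47, 0.06, 0.42, 0.05])

p = np.linalg.solve(M, p_measured)   # p = M^{-1} p'
p = np.clip(p, 0.0, None)            # remove small negative entries
p /= p.sum()                         # renormalize
print(p)
\end{verbatim}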
\begin{figure}
\caption{Readout error characterization. Each column in the graph is the measured population distribution of the prepared initial state. The transfer matrix $\mathcal{M}$ is obtained directly from this measured population matrix.}
\label{fig:m_table_off}
\end{figure}
\section{Two-level system effective Hamiltonian}
As discussed in the main text, deducing the effective Hamiltonian under a two-level-system (TLS) assumption is not adequate to describe the effective dynamics of the system. To verify this point, we first define the raising and lowering operators for each of our atoms $i$ as $\sigma^{+}_{i} = \ket{1}_i\bra{0}$ and $\sigma^{-}_{i}=\ket{0}_i\bra{1}$, respectively, as well as the population operators $ \sigma^{0}_i = \ket{0}_i\bra{0} $ and $ \sigma^{1}_i = \ket{1}_i\bra{1} $. Then, assuming $ \omega_{1} = \omega_{2} = \omega $, we can write the Hamiltonian presented in Eqs. (1) and (2) of the main text as
\begin{align}\label{EQ:TLS_Hamiltonian}
H &= \sum_{i = 1,2} \hbar\omega \sigma_{i}^{1} +\hbar g_{ic}(\sigma_{i}^{+}\sigma_{c}^{-}+h.c.)\nonumber \\ &+ \hbar g_{12}(\sigma_{1}^{+}\sigma_{2}^{-}+h.c.)+\hbar\omega_{c}\sigma_{c}^{1}.
\end{align}
Then, using the unitary transformation $H_{I}=UHU^{\dagger}-H_{0}$, with $H_{0}=\hbar\omega(\sigma_{1}^{1}+\sigma_{2}^{1})+\hbar\omega_{c}\sigma_{c}^{1}$ and $U(t)=e^{-iH_{0}t/\hbar}$, to cast Eq.~(\ref{EQ:TLS_Hamiltonian}) into the interaction picture, we find
\begin{align}\label{EQ:TLS_InteractionHamiltonian}
H_{\mathrm{I}}&=\hbar g_{1}(\sigma_{1}^{+}\sigma_{c}^{-}e^{i\Delta t} + \mathrm{h.c.})+
\hbar g_{2}(\sigma_{2}^{+}\sigma_{c}^{-}e^{i\Delta t} + \mathrm{h.c.})\nonumber\\ &+ \hbar g_{12}(\sigma_{1}^{-}\sigma_{2}^{+}+\sigma_{2}^{-}\sigma_{1}^{+}),
\end{align}
where we have defined $\Delta = \omega - \omega_{c}$. In the interaction picture we can apply the rotating-wave approximation (RWA) to the Hamiltonian,
which here translates into
\begin{align}\label{EQ:EffectiveHamiltonian}
H_{\mathrm{eff}} & \approx \frac{1}{\hbar}\left[-iH_{I}(t)\int_{0}^{t}H_{I}(t')dt'\right]_{RWA},
\end{align}
where $i$ stands for the imaginary unit, and where we have assumed that $ |\Delta|\gg g_{k} $ in order to neglect the fast-oscillating terms. This leads to the TLS effective Hamiltonian
\begin{align}\label{TLS_EffectiveHamiltonian}
H_{\mathrm{eff}}&=\hbar\frac{g_{1}g_{2}}{\Delta}\left[ h_0 + (\sigma_{1}^{-}\sigma_{2}^{+}+\sigma_{2}^{-}\sigma_{1}^{+})(\sigma_{c}^{0}-\sigma_{c}^{1})\right]\nonumber\\
&+\hbar g_{12}(\sigma_{1}^{-}\sigma_{2}^{+}+\sigma_{2}^{-}\sigma_{1}^{+}),
\end{align}
where $h_0\!=\!(\sigma_{1}^{1}+\sigma_{2}^{1})\sigma_{c}^{0}-(\sigma_{1}^{0}+\sigma_{2}^{0})\sigma_{c}^{1}$, with $\sigma_{k}^{n}\!=\!\ket{n}\bra{n}_{k}$, is an energy shift term which does not promote any population transference in the system.
When we consider only the interaction terms of Eq. (\ref{TLS_EffectiveHamiltonian}), we can identify the effective coupling
\begin{align}\label{TLS_EffectiveCoupling}
g_{\mathrm{eff}}^{\ket{n}_{\mathrm{c}}}(\Delta) &= g_{12} + (-1)^n \frac{g_{1}g_{2}}{\Delta},
\end{align}
where the $(-1)^n$ term is a signature that the effective coupling depends on the coupler state.
\section{Three-level system effective Hamiltonian}
To evaluate the effects of the third energy level on the dynamics, we need to consider the anharmonic terms in Eq. (1) of the main text. Let us proceed similarly to the TLS case: first we define $ \tilde{H}_0 = \sum_{j=1,2,c} \left[\omega_j \, a_j^\dagger a_j + \frac{\alpha_j}{2} \, a_j^\dagger a_j^\dagger a_j a_j\right]$ as the unperturbed Hamiltonian and use the unitary operator $ U(t)=e^{\frac{-i\tilde{H}_{0}t}{\hbar}} $ to write the Hamiltonian in the interaction picture. However, in this case, we define the operators $ \Sigma_{j} $ for each artificial atom, and proceed with the transformations
\begin{align}
a_{j} &\rightarrow \Sigma_{j}^{-} = \sum_{k=1}^{2} \sqrt{k}\ket{k-1}\bra{k} ,\nonumber\\ a_{j}^{\dagger} &\rightarrow \Sigma_{j}^{+} = \sum_{k=1}^{2} \sqrt{k}\ket{k}\bra{k-1}.
\end{align}
This allows us to write the Hamiltonian in the interaction picture as
\begin{align}\label{EQ:3lvl_InteratonPicture_1}
\tilde{H}_{I} &= \sum_{k = 1,2}\hbar g_{k} \left( U^{\dagger}(t)\Sigma^{+}_k U(t)U^{\dagger}(t)\Sigma^{-}_{\text{c}} U(t) + h.c. \right) \nonumber \\ &+ \hbar g_{12}\left( U^{\dagger}(t)\Sigma^{+}_1 U(t)U^{\dagger}(t) \Sigma^{-}_2U(t) + h.c. \right).
\end{align}
Defining the operator $P_{nm}^{(j)}\!=\! \ket{n} \bra{m}_j$ and manipulating the Eq.~\eqref{EQ:3lvl_InteratonPicture_1}, the Hamiltonian assumes the form
\begin{align}
\tilde{H}_{I} &= H_{1,c}(t) + H_{2,c}(t) + H_{2}(t)
\end{align}
with
\begin{align}
H_{k,c}(t) &= \hbar g_{k}\left[e^{i (\omega_{k} - \omega_{\text{c}})t} P_{10}^{(k)} P_{01}^{(\text{c})} + e^{i (\omega_{k}-\tilde{\omega}_{\text{c}})t} \sqrt{2}P_{10}^{(k)} P_{12}^{(\text{c})} \right.\nonumber \\ &\left. +
e^{i (\tilde{\omega}_{k}-\omega_{\text{c}})t} \sqrt{2} P_{21}^{(k)} P_{01}^{(\text{c})} + 2e^{i (\tilde{\omega}_{k}-\tilde{\omega}_{\text{c}})t} P_{21}^{(k)} P_{12}^{(\text{c})} + h.c. \right]
\nonumber \\
H_{2}(t) &= \hbar g_{12} \left[ e^{i (\omega_{1} - \omega_{2})t} P_{10}^{(1)} P_{01}^{(2)} + e^{i (\omega_{1}-\tilde{\omega}_{2})t} \sqrt{2}P_{10}^{(1)} P_{12}^{(2)} \right.\nonumber \\ &\left.+ e^{i (\tilde{\omega}_{1}-\omega_{2})t} \sqrt{2} P_{21}^{(1)} P_{01}^{(2)} + 2 e^{i (\tilde{\omega}_{1}-\tilde{\omega}_{2})t} P_{21}^{(1)} P_{12}^{(2)} + h.c. \right]
\end{align}
where $ \tilde{\omega}_i = \omega_i + \alpha_i $. Assuming, for simplicity, that $ \omega_{1}=\omega_{2}=\omega $ and $ \alpha_{1} = \alpha_{2} = \alpha $, we can write the Hamiltonian as
\begin{align}\label{3lvl_InteractionHamiltonian}
\tilde{H}_{I} & =H_{0}+\sqrt{2}\hbar g_{12}\left[e^{it\alpha}\left(P_{21}^{(1)}P_{01}^{(2)}+P_{01}^{(1)}P_{21}^{(2)}\right) \right.\nonumber \\
&\left.\hspace{2.2cm}+e^{-it\alpha}\left(P_{12}^{(1)}P_{10}^{(2)}+P_{10}^{(1)}P_{12}^{(2)}\right)\right]\nonumber \\
& +\sum_{k=1,2}\hbar g_{k}\left[e^{i\Delta t}P_{10}^{(k)}P_{01}^{(\text{c})}+e^{i\tilde{\Delta}t}\sqrt{2}P_{10}^{(k)}P_{12}^{(\text{c})} + \right.\nonumber \\&\left.\hspace{1.4cm}+e^{i\text{\ensuremath{\tilde{\Delta}'}}t}\sqrt{2}P_{21}^{(k)}P_{01}^{(\text{c})}+2e^{i\tilde{\tilde{\Delta}}t}P_{21}^{(k)}P_{12}^{(\text{c})}+h.c.\right] ,
\end{align}
where $ H_{0}=\hbar g_{12}\left(P_{10}^{(1)}P_{01}^{(2)}+2P_{21}^{(1)}P_{12}^{(2)}+h.c.\right)$ is the time-independent part of the Hamiltonian, and $\tilde{\Delta}=\omega-\tilde{\omega}_{c}$,
$\tilde{\Delta}'=\tilde{\omega}-\omega_{c}$ and $\tilde{\tilde{\Delta}}=\tilde{\omega}-\tilde{\omega}_{c}$ are the detunings. In the interaction picture, obtaining the effective Hamiltonian proceeds as in the TLS case: we use Eq.~(\ref{EQ:EffectiveHamiltonian}) and assume that $\Delta,\alpha,\alpha_{c}\gg g_{1},g_{2}$ in order to neglect the fast-oscillating terms. The final result is
\begin{widetext}
\begin{align}\label{EQ:3lvl_EffectiveHamiltonian}
H_{\mathrm{eff}}&= \sum_{k,m=1,2} \hbar g_{k}g_{m}\left[\frac{1}{\omega-\omega_{c}}(P_{10}^{(k)}P_{01}^{(m)}P_{00}^{(c)}-P_{01}^{(k)}P_{10}^{(m)}P_{11}^{(c)})\right.+\frac{2}{\omega-\tilde{\omega}_{c}}(P_{10}^{(k)}P_{01}^{(m)}P_{11}^{(c)}-P_{01}^{(k)}P_{10}^{(m)}P_{22}^{(c)})\nonumber \\
+ & \left.\frac{2}{\tilde{\omega}-\omega_{c}}(P_{21}^{(k)}P_{12}^{(m)}P_{00}^{(\text{c})}-P_{12}^{(k)}P_{21}^{(m)}P_{11}^{(\text{c})})+\frac{4}{\tilde{\omega}-\tilde{\omega}_{c}}(P_{21}^{(k)}P_{12}^{(m)}P_{11}^{(\text{c})}-P_{12}^{(k)}P_{21}^{(m)}P_{22}^{(\text{c})})\right]\nonumber \\
+ & \hbar\frac{2g_{12}^{2}}{\alpha}\left(-P_{11}^{(1)}P_{11}^{(2)}+P_{22}^{(1)}P_{00}^{(2)}+P_{20}^{(1)}P_{02}^{(2)}+h.c.\right)+ \hbar g_{12}\left(P_{10}^{(1)}P_{01}^{(2)}+2P_{21}^{(1)}P_{12}^{(2)}+h.c.\right).
\end{align}
\end{widetext}
The difference between Eq.~(\ref{EQ:3lvl_EffectiveHamiltonian}) and Eq.~(\ref{TLS_EffectiveHamiltonian}) is quite clear: when the effects of the third energy level are neglected, a significant part of the effective dynamics is lost, which leads to a different expression for the effective coupling when the coupler is in the first excited state. To verify this, we calculate the effective Hamiltonians taking into account only the interaction terms, and find
\begin{subequations}
\begin{align}
\tilde{H}_{\mathrm{eff}}^{|0\rangle_{c}}&=\hbar\left(\frac{g_{1}g_{2}}{\Delta}+g_{12}\right)(P_{10}^{(1)}P_{01}^{(2)}+h.c.)\nonumber \\ &+2\hbar\left(g_{12}+\frac{g_{1}g_{2}}{\Delta+\alpha}\right)(P_{21}^{(1)}P_{12}^{(2)}+h.c.) \label{EQ:3lvl_EffectiveHamiltonian_0_c} \\
\tilde{H}_{\mathrm{eff}}^{|1\rangle_{c}}&=
\hbar\left[2g_{12}-2g_{1}g_{2}\left(\frac{1}{\Delta+\alpha}-\frac{2}{\alpha-\alpha_{c}}\right)\right] (P_{12}^{(1)}P_{21}^{(2)}+h.c.)
\nonumber \\
&+ \hbar\left[g_{12}-g_{1}g_{2}\left(\frac{1}{\Delta}-\frac{2}{\Delta-\alpha_{c}}\right)\right](P_{01}^{(1)}P_{10}^{(2)}+h.c.) \label{EQ:3lvl_EffectiveHamiltonian_1_c}.
\end{align}
\end{subequations}
Given that no transition occurs to the second excited state, only the $ (P_{01}^{(1)}P_{10}^{(2)}+h.c.) $ terms contribute to the effective coupling, which leads to the effective two-level-system Hamiltonian
\begin{align}
\tilde{H}_{\mathrm{eff}}^{\ket{n}_{\mathrm{c}}}(\Delta)=\hbar \tilde{g}_{\mathrm{eff}}^{\ket{n}_{\mathrm{c}}}(\Delta)(\sigma_{1}^{-}\sigma_{2}^{+}+\sigma_{2}^{-}\sigma_{1}^{+}),
\end{align}
where
\begin{align}\label{EQ:3lvl_EffectiveCoupling}
\tilde{g}_{\mathrm{eff}}^{\ket{n}_{\mathrm{c}}}(\Delta) = g_{12} + g_1 g_2\left(
\frac{2}{\Delta - \delta_{n1}\alpha_{c}}- \frac{1}{\Delta}\right),
\end{align}
with $ \delta_{n1} $ the Kronecker delta. Comparing the expressions for the effective couplings obtained in Eq.~(\ref{TLS_EffectiveCoupling}) for the TLS and now in Eq.~(\ref{EQ:3lvl_EffectiveCoupling}), we verify that when the coupler is in the ground state both expressions are equivalent; however, when there is an excitation in the coupler, the TLS expression differs from the three-level result by the term $ (2g_1g_2)/(\Delta - \alpha_{c}) $. This term only becomes negligible in the limit of a genuine two-level system, that is, when the anharmonicity is large enough ($ \alpha_{c} \rightarrow \infty $).
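As a numerical illustration of this comparison, the short Python sketch below evaluates Eq.~(\ref{TLS_EffectiveCoupling}) and Eq.~(\ref{EQ:3lvl_EffectiveCoupling}) for illustrative parameter values (not the device parameters): for $n=0$ the two expressions coincide, while for $n=1$ they differ by $2g_1g_2/(\Delta-\alpha_c)$.
\begin{verbatim}
import numpy as np

# Illustrative parameters only (units of 2*pi*MHz), not device values.
g1, g2, g12 = 10.0, 10.0, 1.0
Delta, alpha_c = -500.0, -200.0

def g_eff_tls(n):
    # Eq. (TLS): g12 + (-1)^n g1 g2 / Delta
    return g12 + (-1) ** n * g1 * g2 / Delta

def g_eff_3lvl(n):
    # Eq. (three-level): g12 + g1 g2 (2/(Delta - d_{n1} alpha_c) - 1/Delta)
    return g12 + g1 * g2 * (2.0 / (Delta - (n == 1) * alpha_c) - 1.0 / Delta)

for n in (0, 1):
    # difference equals 2 g1 g2 / (Delta - alpha_c) for n = 1, zero for n = 0
    print(n, g_eff_tls(n), g_eff_3lvl(n), g_eff_3lvl(n) - g_eff_tls(n))
\end{verbatim}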
\subsection{Effective dynamics and Quantumness of the device}
In this section we discuss the theory used to extract the value of $\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}(\Delta)$ from the experimental data. The three-level effective Hamiltonian of Eq.~(\ref{EQ:3lvl_EffectiveHamiltonian_1_c}) can be used to verify the excitation inversion between $ Q_1 $ and $ Q_2 $ when the coupler is prepared in the state $\ket{1}_{\mathrm{c}}$. In this case, the Hamiltonian can be written in the $ \{\ket{10}, \ket{01}\} $ basis as
\begin{align}
\tilde{H}_{\mathrm{eff}} = \hbar\,\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}(\Delta)\left[\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right],
\end{align}
which has the eigenenergies $E_{\pm}\!=\!\pm \hbar\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}(\Delta)$, associated with the eigenstates $\ket{\Psi_{\pm}}\!=\!(\ket{01}_{12}\pm\ket{10}_{12})/\sqrt{2}$. By computing the evolved state $\ket{\Psi(t)} = e^{-iHt/\hbar}\ket{\Psi(0)}$ from the initial state $\ket{\Psi(0)}\!=\!\ket{1}_{\mathrm{c}} \ket{10}_{12}$, we find
\begin{align}
\ket{\Psi(t)}=\cos(\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}t)~\ket{10}_{12}-\text{i}\sin(\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}t)~\ket{01}_{12} .
\end{align}
\begin{figure}
\caption{Fidelity of transfer and blockade when (a) no decoherence acts on the system, and (b) the coupler is affected by decoherence that brings the system into the classical realm. For this simulation we use the same parameters as in the main text, with $\Gamma\!=\!g_{1}$.}
\label{fig:classical}
\end{figure}
Therefore, by measuring the probabilities of finding the system in the states $\ket{10}$ and $\ket{01}$, we find
\begin{align}
P_{1}(t) &= |\interpro{\Psi(t)}{10}|^2 = \cos^2(\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}t) , \\
P_{2}(t) &= |\interpro{\Psi(t)}{01}|^2 = \sin^2(\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}t) .
\end{align}
Thus, we can extract the value of $\tilde{g}_{\mathrm{eff}}^{\ket{1}_{\mathrm{c}}}$ by fitting these expressions to the experimental data for $P_{1}(t)$ and $P_{2}(t)$.
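A minimal sketch of such a fit, using \texttt{scipy.optimize.curve\_fit} and synthetic stand-in data (generated here only to keep the snippet self-contained), could look as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def p1_model(t, g_eff, A, B):
    # cos^2 oscillation; A and B absorb residual readout imperfections
    return A * np.cos(g_eff * t) ** 2 + B

# Synthetic stand-in for the measured population of |10> versus time (us).
rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 1.0, 101)
p1_data = (np.cos(2 * np.pi * 3.1 * t_data) ** 2
           + 0.02 * rng.standard_normal(t_data.size))

p0 = [2 * np.pi * 3.0, 1.0, 0.0]                  # initial guess
popt, pcov = curve_fit(p1_model, t_data, p1_data, p0=p0)
g_eff, g_err = popt[0], np.sqrt(pcov[0, 0])
print(g_eff / (2 * np.pi), g_err / (2 * np.pi))   # coupling in MHz
\end{verbatim}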
We now briefly explore the quantumness of the device proposed here by investigating the performance of the system under strong decoherence, which brings the system into the classical realm. We focus on the simplest case, where the noise is assumed to affect only the coupler. In this case the system is governed by the master equation
\begin{align}
\dot{\rho}(t) = -\frac{i}{\hbar} [H,\rho(t)] + \frac{\Gamma}{2} \left[ 2a^{\dagger}_{\mathrm{c}}a_{\mathrm{c}}\, \rho(t)\, a^{\dagger}_{\mathrm{c}}a_{\mathrm{c}} - \{a^{\dagger}_{\mathrm{c}}a_{\mathrm{c}}a^{\dagger}_{\mathrm{c}}a_{\mathrm{c}},\rho(t)\} \right] ,
\end{align}
where the last term describes dephasing of the coupler with rate $\Gamma$. When $\Gamma$ is large enough, the coherent transport of information is drastically degraded, as shown in Fig.~\ref{fig:classical}. This shows that, due to the loss of coherence, the device cannot exploit quantum effects and its performance becomes worse than in the case where no decoherence acts on the system.
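The curves in Fig.~\ref{fig:classical} can be reproduced by integrating this master equation numerically. The following minimal sketch is a generic fixed-step integrator for a Lindblad equation of this form; the full Hamiltonian $H$ of Eq.~(1) of the main text and the dephasing operator $L=a^{\dagger}_{\mathrm{c}}a_{\mathrm{c}}$ (embedded in the full Hilbert space by Kronecker products) are assumed to be assembled beforehand, and the small example at the end uses toy matrices only.
\begin{verbatim}
import numpy as np

def lindblad_rhs(rho, H, L, gamma, hbar=1.0):
    # drho/dt = -i/hbar [H,rho] + gamma/2 (2 L rho L^+ - {L^+ L, rho})
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = 2 * L @ rho @ L.conj().T - LdL @ rho - rho @ LdL
    return -1j / hbar * comm + 0.5 * gamma * diss

def evolve(rho0, H, L, gamma, t_final, steps=2000):
    # fixed-step fourth-order Runge-Kutta integration
    rho, dt = rho0.astype(complex), t_final / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, L, gamma)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, L, gamma)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, L, gamma)
        k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

# Toy example (3x3 matrices, arbitrary units), not the device model:
g = 2 * np.pi
H = g * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
L = np.diag([0.0, 1.0, 2.0])              # number operator of the coupler
rho0 = np.zeros((3, 3), dtype=complex)
rho0[0, 0] = 1.0
print(np.real(np.diag(evolve(rho0, H, L, gamma=1.0, t_final=1.0))))
\end{verbatim}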
\end{document}
\begin{document}
\title{Numerical homogenization of H(curl)-problems}
\author{Dietmar Gallistl\footnotemark[2] \and Patrick Henning\footnotemark[3]\and Barbara Verf\"urth\footnotemark[4]}
\date{}
\maketitle
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Institut f\"ur Angewandte und Numerische Mathematik, Karlsruher Institut f\"ur Technologie, Englerstr. 2, D-76131 Karlsruhe, Germany}
\footnotetext[3]{Department of Mathematics, KTH Royal Institute of Technology, Lindstedtsv\"agen 25, SE-100 44 Stockholm, Sweden}
\footnotetext[4]{Applied Mathematics, Westf\"alische Wilhelms-Uni\-ver\-si\-t\"at M\"unster, Einsteinstr. 62, D-48149 M\"unster, Germany}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{Abstract}
If an elliptic differential operator associated with an
$\mathbf{H}(\mathrm{curl})$-problem involves rough (rapidly varying) coefficients,
then solutions to the corresponding $\mathbf{H}(\mathrm{curl})$-problem admit typically
very low regularity, which leads to arbitrarily bad convergence rates
for conventional numerical schemes.
The goal of this paper is to show that the missing regularity can be
compensated through a corrector operator. More precisely, we consider
the lowest order N{\'e}d{\'e}lec finite element space
and show the existence of a linear corrector operator
with four central properties:
it is computable, $\mathbf{H}(\mathrm{curl})$-stable,
quasi-local and allows for a correction of coarse finite element
functions so that first-order estimates (in terms of the coarse
mesh-size) in the $\mathbf{H}(\mathrm{curl})$ norm are obtained provided
the right-hand side belongs to $\mathbf{H}(\mathrm{div})$.
With these four properties, a practical application is to construct
generalized finite element spaces which can be straightforwardly used
in a Galerkin method. In particular, this characterizes a homogenized
solution and a first order corrector, including corresponding
quantitative error estimates without the requirement of scale separation.
\end{Abstract}
\begin{keywords}
multiscale method, wave propagation, Maxwell's equations, finite element method, a priori error estimates
\end{keywords}
\begin{AMS}
35Q61, 65N12, 65N15, 65N30, 78M10
\end{AMS}
\section{Introduction}
Electromagnetic wave propagation plays an essential role in many physical applications, for instance, in the large field of wave optics.
In the last years, multiscale and heterogeneous materials are studied with great interest, e.g., in the context of photonic crystals \cite{JJWM08phc}.
These materials can exhibit unusual and astonishing (optical) properties, such as band gaps, perfect transmission or negative refraction \cite{CJJP02negrefraction, EP04negphC, LS15negindex}.
These problems are modeled by Maxwell's equations, which involve the curl-operator and the associated Sobolev space $\VH(\curl)$.
Additionally, the coefficients in the problems are rapidly oscillating on a fine scale for the context of photonic crystals and metamaterials.
The numerical simulation and approximation of the solution is then a challenging task for the following three reasons.
1.\ As with all multiscale problems, a direct treatment with standard methods is infeasible in many cases because it requires grids which resolve all discontinuities or oscillations of the material parameters.
2.\ Solutions to $\VH(\curl)$-problems with discontinuous coefficients in Lip\-schitz domains can have arbitrarily low regularity, see \cite{BGL13regularitymaxwell, CDN99maxwellinterface, Cost90regmaxwellremark}.
Hence, standard methods (see e.g., \cite{Monk} for an overview) suffer from bad convergence rates and fine meshes are needed to have a tolerably small error.
3.\ Due to the large kernel of the curl-operator, we cannot expect that the $L^2$-norm is of a lower order than the full $\VH(\curl)$-norm (the energy norm).
Thus, it is necessary to consider dual norms or the Helmholtz decomposition to obtain improved a priori error estimates.
In order to deal with the rapidly oscillating material parameters, we consider multiscale methods and thereby aim at a feasible numerical simulation.
In general, these methods try to decompose the exact solution into a macroscopic contribution (without oscillations), which can be discretized on a coarse mesh, and a fine-scale contribution.
Analytical homogenization for locally periodic $\VH(\curl)$-problems shows that there exists such a decomposition, where the macroscopic part is a good approximation in $H^{-1}$ and an additional fine-scale corrector leads to a good approximation in $L^2$ and $\VH(\curl)$, cf.\ \cite{CH15homerrormaxwell, HOV15maxwellHMM, Well2}.
Based on these analytical results, multiscale methods are developed, e.g., the Heterogeneous Multiscale Method in \cite{HOV15maxwellHMM, CFS17hmmmaxwell} and asymptotic expansion methods in \cite{CZAL10maxwell}.
The question is now in how far such considerations can be extended beyond the (locally) periodic case.
The main contribution of this paper is the numerical homogenization of $\VH(\curl)$-elliptic problems -- beyond the periodic case and without assuming scale separation.
The main findings can be summarized as follows.
We show that the exact solution can indeed be decomposed into a coarse and fine part, using a suitable interpolation operator.
The coarse part gives an optimal approximation in the $H^{-1}$-norm, the best we can hope for in this situation.
In order to obtain optimal $L^2$ and $\VH(\curl)$ approximations, we have to add a so called fine-scale corrector or corrector Green's operator.
This corrector shows exponential decay and can therefore be truncated to local patches of macroscopic elements, so that it can be computed efficiently.
This technique of numerical homogenization is known as Localized Orthogonal Decomposition (LOD) and it was originally proposed by M{\aa}lqvist and Peterseim \cite{MP14LOD} to solve elliptic multiscale problems
through an orthogonalization procedure with a problem-specific \quotes{multiscale} inner product.
The LOD has been extensively studied in the context of Lagrange finite elements \cite{HM14LODbdry, HP13oversampl}, where we particularly refer to the contributions written on wave phenomena \cite{AH17LODwaves, BrG16, bgp2017, GP15scatteringPG, OV16a, P15LODhelmholtz, PeS17}. Aside from Lagrange finite elements, an LOD application in Raviart-Thomas spaces was given in \cite{HHM16LODmixed}.
A crucial ingredient for numerical homogenization procedures in the spirit of LODs is the choice of a suitable interpolation operator.
As we will see later, in our case we require it to be computable, $\VH(\curl)$-stable, (quasi-)local and that it commutes with the curl-operator.
Constructing an operator that enjoys such properties is a very delicate task and a lot of operators have been suggested -- with different backgrounds and applications in mind.
The nodal interpolation operator, see e.g.\ \cite[Thm.\ 5.41]{Monk}, and the interpolation operators introduced in \cite{DB05maxwellpintpol} are not well-defined on $\VH(\curl)$ and hence lack the required stability.
Various (quasi)-interpolation operators are constructed as composition of smoothing and some (nodal) interpolation, such as
\cite{Chr07intpol, CW08intpol, DH14aposteriorimaxwell, EG15intpol, Sch05multilevel,Sch08aposteriori}.
For all of them, the kernel of the operator is practically hard or even impossible to compute and they only fulfill the projection \emph{or} the locality property.
Finally, we mention the interpolation operator of \cite{EG15intpolbestapprox}, which is local and a projection, but which does not commute with the exterior derivative.
A suitable candidate (and to the authors' best knowledge, the only one) that enjoys all required properties was proposed by Falk and Winther in \cite{FalkWinther2014}.
This paper thereby also shows the applicability of the Falk-Winther operator.
In this context, we mention two results which may be of independent interest: a localized regular decomposition of the interpolation error (in the spirit of \cite{Sch08aposteriori}), and the practical implementation of the Falk-Winther operator as a matrix.
The last point admits the efficient implementation of our numerical scheme and we refer to \cite{EHMP16LODimpl} for general considerations.
The paper is organized as follows.
Section \ref{sec:setting} introduces the general curl-curl-problem under consideration and briefly mentions its relation to Maxwell's equations.
In Section \ref{sec:motivation}, we give a short motivation of our approach from periodic homogenization.
Section \ref{sec:intpol} introduces the necessary notation for meshes, finite element spaces, and interpolation operators.
We introduce the Corrector Green's Operator in Section \ref{sec:LODideal} and show its approximation properties.
We localize the corrector operator in Section \ref{sec:LOD} and present the main apriori error estimates.
The proofs of the decay of the correctors are given in Section \ref{sec:decaycorrectors}.
Details on the definition of the interpolation operator and its implementation are given in Section \ref{sec:intpolimpl}.
The notation $a\lesssim b$ is used for $a\leq Cb$ with a constant $C$ independent of the mesh size $H$ and the oversampling parameter $m$.
It will be used in (technical) proofs for simplicity and readability.
\section{Model problem}
\label{sec:setting}
Let $\Omega\subset \mathbb{R}^3$ be an open, bounded, contractible
domain with polyhedral Lipschitz boundary.
We consider the following so called curl-curl-problem: Find $\Vu:\Omega\to\mathbb{C}^3$ such that
\begin{equation}
\label{eq:curlcurl}
\begin{split}
\curl(\mu\curl\Vu)+\kappa\Vu&=\Vf\quad\text{in }\Omega,\\
\Vu\times \Vn&=0\quad\text{on }\partial \Omega
\end{split}
\end{equation}
with the outer unit normal $\Vn$ of $\Omega$.
Exact assumptions on the parameters $\mu$ and $\kappa$ and the right-hand side $\Vf$ are given in Assumption~\ref{asspt:sesquiform} below,
but we implicitly assume that the above problem is a multiscale problem, i.e.\ the coefficients $\mu$ and $\kappa$ are rapidly varying on a very fine scale.
Such curl-curl-problems arise in various formulations and reductions of Maxwell's equations, and we briefly give a few examples.
In all cases, our coefficient $\mu$ equals $\tilde{\mu}^{-1}$ with the magnetic permeability $\tilde{\mu}$, a material parameter.
The right-hand side $\Vf$ is related to (source) current densities.
One possible example is given by Maxwell's equations in a linear conductive medium, subject to Ohm's law, together with the so-called time-harmonic ansatz $\hat{\Vpsi}(x,t)=\Vpsi(x)\exp(-i\omega t)$ for all fields.
In this case, one obtains the above curl-curl-problem with $\Vu=\VE$, the electric field, and $\kappa=i\omega\sigma-\omega^2\varepsilon$ related to the electric permittivity $\varepsilon$ and the conductivity $\sigma$ of the material.
Another example is given by implicit time-step discretizations of eddy current simulations, where the above curl-curl-problem has to be solved in each time step.
In that case $\Vu$ is the vector potential associated with the magnetic field and $\kappa\approx\sigma/\tau$, where $\tau$ is the time-step size. Coefficients with multiscale properties can for instance arise in the context of photonic crystals.
Before we define the variational problem associated with our general curl-curl-problem \eqref{eq:curlcurl}, we need to introduce some function spaces.
In the following, bold face letters will indicate vector-valued quantities and all functions are complex-valued, unless explicitly mentioned.
For any bounded subdomain $G\subset \Omega$, we define the space
\[\VH(\curl, G):=\{ \Vv\in L^2(G, \mathbb{C}^3)|\curl\Vv\in L^2(G, \mathbb{C}^3)\}\]
with the inner product $(\Vv, \Vw)_{\VH(\curl, G)}:=(\curl\Vv, \curl\Vw)_{L^2(G)}+(\Vv, \Vw)_{L^2(G)}$ with the complex $L^2$-inner product.
We will omit the domain $G$ if it is equal to the full domain $\Omega$.
The restriction of $\VH(\curl, \Omega)$ to functions with a zero tangential trace
is defined as
\[\VH_0(\curl, \Omega):=\{\Vv\in \VH(\curl, \Omega)|\hspace{3pt} \Vv\times \Vn \vert_{\partial \Omega} =0\}. \]
Similarly, we define the space
\[\VH(\Div, G):=\{\Vv\in L^2(G, \mathbb{C}^3)|\Div \Vv\in L^2(G, \mathbb{C})\}\]
with corresponding inner product $(\cdot, \cdot)_{\VH(\Div, G)}$.
For more details we refer to \cite{Monk}.
We make the following assumptions on the data of our problem.
\begin{assumption}
\label{asspt:sesquiform}
Let $\Vf\in \VH(\Div, \Omega)$ and let $\mu\in L^\infty(\Omega, \mathbb{R}^{3 \times 3})$ and $\kappa\in L^\infty(\Omega, \mathbb{C}^{3 \times 3})$.
For any open subset $G\subset\Omega$,
we define the sesquilinear form
$\CB_{G}: \VH(\curl,G)\times \VH(\curl,G)\to \mathbb{C}$ as
\begin{equation}
\label{eq:sesquiform}
\CB_{G}(\Vv, \Vpsi):=(\mu\curl \Vv, \curl\Vpsi)_{L^2(G)}
+(\kappa\Vv, \Vpsi)_{L^2(G)},
\end{equation}
and set $\CB:=\CB_\Omega$.
The form $\CB_{G}$ is obviously continuous, i.e.\ there is $C_B>0$ such that
\begin{equation*}
|\CB_{G}(\Vv, \Vpsi)|\leq C_B\|\Vv\|_{\VH(\curl,G)}\|\Vpsi\|_{\VH(\curl,G)}
\quad\text{for all }\Vv,\Vpsi\in\VH(\curl,G).
\end{equation*}
We furthermore assume that $\mu$ and $\kappa$ are such that
$\CB: \VH_0(\curl)\times \VH_0(\curl)\to \mathbb{C}$
is $\VH_0(\curl)$-elliptic,
i.e.\ there is $\alpha>0$ such that
\[
|\CB(\Vv, \Vv)|\geq \alpha\|\Vv\|^2_{\VH(\curl)}
\quad\text{for all }\Vv\in\VH_0(\curl) .
\]
\end{assumption}
We now give a precise definition of our model problem for this article.
Let Assumption \ref{asspt:sesquiform} be fulfilled. We look for $\Vu\in \VH_0(\curl, \Omega)$ such that
\begin{equation}
\label{eq:problem}
\CB(\Vu, \Vpsi)=(\Vf, \Vpsi)_{L^2(\Omega)} \quad\text{for all } \Vpsi\in \VH_0(\curl, \Omega).
\end{equation}
Existence and uniqueness of a solution to \eqref{eq:problem} follow from the Lax-Milgram-Babu{\v{s}}ka theorem \cite{Bab70fem}.
Assumption \ref{asspt:sesquiform} is fulfilled in the following two important examples mentioned at the beginning:
(i) a strictly positive real function in the identity term, i.e.\ $\kappa\in L^\infty(\Omega, \mathbb{R})$, as it occurs in the time-step discretization of eddy-current problems;
(ii) a complex $\kappa$ with strictly negative real part and strictly positive imaginary part, as it occurs for time-harmonic Maxwell's equations in a conductive medium.
Further possibilities of $\mu$ and $\kappa$ yielding an $\VH(\curl)$-elliptic problem are described in \cite{FR05maxwell}.
\begin{remark}
The assumption of contractibility of $\Omega$ is only required to ensure the existence of local regular decompositions later used in the proof of Lemma \ref{lem:localregulardecomp}. We note that this assumption can be relaxed by assuming that $\Omega$ is simply connected in certain local subdomains formed by unions of tetrahedra (i.e. in patches of the form $\UN(\Omega_P)$, using the notation from Lemma \ref{lem:localregulardecomp}).
\end{remark}
\section{Motivation of the approach}
\label{sec:motivation}
For the sake of the argument, let us consider model problem \eqref{eq:curlcurl} for the case that the coefficients $\mu$ and $\kappa$ are replaced by parametrized multiscale coefficients $\mu_{\delta}$ and $\kappa_\delta$, respectively.
Here, $0<\delta \ll 1$ is a small parameter that characterizes the roughness of the coefficient or respectively the speed of the variations, i.e.\ the smaller $\delta$, the faster the oscillations of $\mu_{\delta}$ and $\kappa_\delta$.
If we discretize this model problem
in the lowest order N{\'e}d{\'e}lec finite element space $\mathring{\CN}(\CT_H)$, we have the classical error estimate of the form
\begin{align*}
\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu_{\delta} - \mathbf{v}_H \|_{\VH(\curl)} \le C H \left( \| \Vu_{\delta} \|_{H^1(\Omega)} + \| \curl \Vu_{\delta} \|_{H^1(\Omega)} \right).
\end{align*}
However, if the coefficients $\mu_{\delta}$ and $\kappa_\delta$ are discontinuous the necessary regularity for this estimate is not available, see \cite{Cost90regmaxwellremark, CDN99maxwellinterface, BGL13regularitymaxwell}.
On the other hand, if $\mu_{\delta}$ and $\kappa_\delta$ are sufficiently regular but $\delta$ small, then we face the blow-up with $\| \Vu_{\delta} \|_{H^1(\Omega)} + \| \curl \Vu_{\delta} \|_{H^1(\Omega)}\rightarrow \infty$ for $\delta \rightarrow 0$, which makes the estimate useless in practice, unless the mesh size $H$ becomes very small to compensate for the blow-up. This does not change if we replace the $\VH(\curl)$-norm by the $L^2(\Omega)$-norm since both norms are equivalent in our setting.
To understand if there exist any meaningful approximations of $\Vu_{\delta}$ in $\mathring{\CN}(\CT_H)$ (even on coarse meshes), we make a short excursus to classical homogenization theory. For that we assume that the coefficients $\mu_{\delta}(x)=\mu(x/\delta)$ and $\kappa_\delta(x)=\kappa(x/\delta)$ are periodically oscillating with period $\delta$. In this case it is known (cf.\ \cite{CFS17hmmmaxwell, HOV15maxwellHMM, Well2}) that the sequence of exact solutions $\Vu_{\delta}$ converges weakly in $\VH_0(\curl)$ to a \emph{homogenized} function $\Vu_{0}$. Since $\Vu_0 \in \VH_0(\curl)$ is $\delta$-independent and slow, it can be well approximated in $\mathring{\CN}(\CT_H)$. Furthermore, there exists a \emph{corrector} $\mathcal{K}_{\delta}(\Vu_0)$ such that
\[\Vu_{\delta} \approx (\id + \mathcal{K}_{\delta})\Vu_0 \]
is a good approximation in $\VH(\curl)$, i.e.\ the error converges strongly to zero with
\[
\| \Vu_{\delta} -( \Vu_0 + \mathcal{K}_{\delta}(\Vu_0)) \|_{\VH(\curl)} \rightarrow 0
\qquad \mbox{for } \delta \rightarrow 0.
\]
Here the nature of the corrector is revealed by two estimates. In fact, $\mathcal{K}_{\delta}(\Vu_0)$ admits a decomposition into a gradient part and a part with small amplitude (cf. \cite{HOV15maxwellHMM, CH15homerrormaxwell, Well2}) such that
\[
\mathcal{K}_{\delta}(\Vu_0) = \Vz_{\delta} + \nabla \theta_{\delta}
\]
with
\begin{align}
\label{hom-corrector-est-1}
\delta^{-1} \| \Vz_{\delta} \|_{L^2(\Omega)} + \| \Vz_{\delta} \|_{\VH(\curl)} &\le C\| \Vu_0 \|_{\VH(\curl)}\\
\label{hom-corrector-est-2}
\text{and}\qquad\delta^{-1} \| \theta_{\delta} \|_{L^2(\Omega)} + \| \nabla \theta_{\delta} \|_{L^2(\Omega)} &\le C \| \Vu_0 \|_{\VH(\curl)},
\end{align}
where $C=C(\alpha,C_B)$ only depends on the constants appearing in Assumption \ref{asspt:sesquiform}. First, we immediately see that the estimates imply that $\mathcal{K}_{\delta}(\Vu_0)$ is $\VH(\curl)$-stable in the sense that it holds
\begin{align*}
\| \mathcal{K}_{\delta}(\Vu_0) \|_{\VH(\curl)} \le C \| \Vu_0 \|_{\VH(\curl)}.
\end{align*}
Second, and more interestingly, we see that alone from the above properties, we can conclude that $\Vu_0$ \emph{must} be a good approximation of the exact solution in the space $H^{-1}(\Omega,\mathbb{C}^3)$. In fact, using \eqref{hom-corrector-est-1} and \eqref{hom-corrector-est-2} we have for any
$\mathbf{v}\in H^1_0(\Omega,\mathbb{C}^3)$ with $\| \mathbf{v} \|_{H^1(\Omega)}=1$ that
\begin{align*}
\left|\int_{\Omega} \mathcal{K}_{\delta}(\Vu_0) \cdot \mathbf{v} \right|=
\left|\int_{\Omega} \Vz_{\delta} \cdot \mathbf{v} - \int_{\Omega} \theta_{\delta} \hspace{2pt} (\nabla \cdot \mathbf{v}) \right| \le
\| \Vz_{\delta} \|_{L^2(\Omega)} + \| \theta_{\delta} \|_{L^2(\Omega)}
\le C \delta \| \Vu_0 \|_{\VH(\curl)}.
\end{align*}
Consequently we have strong convergence in $H^{-1}(\Omega)$ with
\begin{align*}
\| \Vu_{\delta} - \Vu_0 \|_{H^{-1}(\Omega)}
\le \| \Vu_{\delta} - ( \Vu_0 + \mathcal{K}_{\delta}(\Vu_0))\|_{H^{-1}(\Omega)} + \| \mathcal{K}_{\delta}(\Vu_0) \|_{H^{-1}(\Omega)} \overset{\delta \rightarrow 0}{\longrightarrow} 0.
\end{align*}
We conclude two things. Firstly, even though the coarse space $\mathring{\CN}(\CT_H)$ does not contain good $\VH(\curl)$- or $L^2$-approximations, it still contains meaningful approximations in $H^{-1}(\Omega)$.
Secondly, the fact that the
coarse part
$\Vu_0$
is a good $H^{-1}$-approximation of $\Vu_{\delta}$ is an intrinsic conclusion from the properties of the correction $\mathcal{K}_{\delta}(\Vu_0)$.
In this paper we are concerned with the question if the above considerations can be transferred to a discrete setting beyond the assumption of periodicity. More precisely, defining a coarse level of resolution through the space $\mathring{\CN}(\CT_H)$, we ask if it is possible to find a coarse function $\Vu_H$ and an (efficiently computable) $\VH(\curl)$-stable operator $\mathcal{K}$, such that
\begin{align}
\label{motivation:int-estimates}
\| \Vu_{\delta} - \Vu_H \|_{H^{-1}(\Omega)} \le C H \qquad \mbox{and} \qquad \| \Vu_{\delta} - (\id+\mathcal{K})\Vu_H \|_{\VH(\curl)} \le CH,
\end{align}
with $C$ being independent of the oscillations in terms of $\delta$. A natural ansatz for the coarse part is $\Vu_H=\pi_H( \Vu_{\delta} )$ for a suitable projection $\pi_H : \VH(\curl) \rightarrow \mathring{\CN}(\CT_H)$. However, from the considerations above we know that $\Vu_H=\pi_H( \Vu_{\delta} )$ can only be a good $H^{-1}$-approximation if the error fulfills a discrete analog to the estimates \eqref{hom-corrector-est-1} and \eqref{hom-corrector-est-2}. Since $\Vu_{\delta} - \pi_H( \Vu_{\delta} )$ is nothing but an interpolation error, we can immediately derive a sufficient condition for our choice of $\pi_H$: we need that, for any $\Vv\in \VH_0(\curl, \Omega)$, there are $\Vz\in \VH^1_0(\Omega)$ and $\theta\in H^1_0(\Omega)$ such that
\[\Vv-\pi_H \Vv=\Vz+\nabla \theta\]
and
\begin{equation}
\label{motivation:properties-pi-H}
\begin{split}
H^{-1}\|\Vz\|_{L^2(\Omega)}+\|\nabla \Vz\|_{L^2(\Omega)} &\leq C \|\curl\Vv\|_{L^2(\Omega)},\\
H^{-1}\|\theta\|_{L^2(\Omega)}+\|\nabla \theta\|_{L^2(\Omega)}&\leq C \|\curl\Vv\|_{L^2(\Omega)}.
\end{split}
\end{equation}
This is a sufficient condition for $\pi_H$. Note that the above properties are not fulfilled for, e.g., the $L^2$-projection. This reflects the fact that the $L^2$-projection typically does not yield a good $H^{-1}$-approximation in our setting.
We conclude this paragraph by summarizing that if we have a projection $\pi_H$ fulfilling \eqref{motivation:properties-pi-H}, then we can define a coarse scale numerically through the space
$\mathring{\CN}(\CT_H) = \mbox{im}(\pi_H)$.
On the other hand, to ensure that the corrector inherits the desired decomposition with estimates \eqref{motivation:int-estimates}, it needs to be constructed such that it maps into the kernel of the projection operator, i.e. $\mbox{im}(\mathcal{K})\subset\mbox{ker}(\pi_H)$.
\section{Mesh and interpolation operator}
\label{sec:intpol}
In this section we introduce the basic notation for establishing our coarse scale discretization and we will present a projection operator that fulfills the sufficient conditions derived in the previous section.
Let $\CT_H$ be a regular partition of $\Omega$ into tetrahedra, such that $\cup\CT_H=\overline{\Omega}$ and any two distinct $T, T'\in \CT_H$ are either disjoint or share a common vertex, edge or face.
We assume the partition $\CT_H$ to be shape-regular and quasi-uniform.
The global mesh size is defined as $H:=\max\{ \diam(T)|T\in \CT_{H}\}$.
$\CT_H$ is a coarse mesh in the sense that it does not resolve the fine-scale oscillations of the parameters.
Given any (possibly even disconnected) subdomain $G\subset \overline{\Omega}$, define its neighborhood via
\[\UN(G):=\Int(\cup\{T\in \CT_{H}|T\cap\overline{G}\neq \emptyset\})\]
and for any $m\geq 2$ the patches
\[\UN^1(G):=\UN(G)\qquad \text{and}\qquad\UN^m(G):=\UN(\UN^{m-1}(G)),\]
see Figure \ref{fig:patch} for an example.
The shape regularity implies that there is a uniform bound $C_{\ol, m}$ on the number of elements in the $m$-th order patch
\[\max_{T\in \CT_{H}}\operatorname{card}\{K\in \CT_{H}|K\subset\overline{\UN^m(T)}\}\leq C_{\ol, m}\]
and the quasi-uniformity implies that $C_{\ol, m}$ depends polynomially on $m$.
We abbreviate $C_{\ol}:=C_{\ol, 1}$.
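For completeness, the element patches $\UN^m(T)$ can be computed from the element-vertex connectivity of the mesh. The following Python sketch (where \texttt{elements} is an assumed array of vertex indices per tetrahedron, not part of the notation above) grows the patches $m$ times through shared vertices.
\begin{verbatim}
import numpy as np

def element_patches(elements, m):
    # elements: (n_el, 4) integer array of vertex indices per tetrahedron.
    # Returns, for every element T, the index set of elements in N^m(T).
    n_el = elements.shape[0]
    vertex_to_elements = {}
    for t in range(n_el):
        for v in elements[t]:
            vertex_to_elements.setdefault(int(v), set()).add(t)
    # first-order neighborhood: elements sharing at least one vertex with T
    neighbors = [set().union(*(vertex_to_elements[int(v)]
                               for v in elements[t]))
                 for t in range(n_el)]
    patches = [{t} for t in range(n_el)]
    for _ in range(m):   # N^m(T) = N(N^{m-1}(T))
        patches = [set().union(*(neighbors[s] for s in patch))
                   for patch in patches]
    return patches

# usage: element_patches(elements, 2)[t] contains the indices of all
# tetrahedra in the second-order patch N^2(T) of element t.
\end{verbatim}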
\begin{figure}
\caption{Triangle $T$ (in black) and its first and second order patches (additional elements for $\UN(T)$ in dark gray and additional elements for $\UN^2(T)$ in light gray).}
\label{fig:patch}
\end{figure}
The space of $\CT_H$-piecewise affine and continuous functions is
denoted by $\CS^1(\CT_H)$.
We denote the lowest order N{\'e}d{\'e}lec finite element, cf.\ \cite[Section 5.5]{Monk}, by
\[
\mathring{\CN}(\CT_H):=\{\Vv\in \VH_0(\curl)|\forall T\in \CT_H: \Vv|_T(\Vx)=\Va_T\times\Vx+\Vb_T \text{ with }\Va_T, \Vb_T\in\mathbb{C}^3\}
\]
and the space of Raviart--Thomas fields by
\[
\mathring{\CR\CT}(\CT_H):=\{\Vv\in \VH_0(\Div)|\forall T\in \CT_H: \Vv|_T(\Vx)=\Va_T\cdot\Vx+\Vb_T \text{ with }\Va_T\in \mathbb{C}, \Vb_T\in\mathbb{C}^3\}.
\]
As motivated in Section \ref{sec:motivation}, we require an $\VH(\curl)$-stable interpolation operator $\pi_H^E:\VH_0(\curl)\to \mathring{\CN}(\CT_H)$ that allows for a decomposition with estimates such as \eqref{motivation:properties-pi-H}. However, from the viewpoint of numerical homogenization, where corrector problems should be localized to small subdomains, we also need that $\pi_H^E$ is local and (as we will see later)
that it fits into a commuting diagram with other stable interpolation operators for lowest order $H^1(\Omega)$, $\VH(\Div)$ and $L^2(\Omega)$ elements.
As discussed in the introduction, the only suitable candidate is the Falk-Winther interpolation operator $\pi_H^E$ \cite{FalkWinther2014}.
We postpone a precise definition of $\pi_H^E$ to Section \ref{sec:intpolimpl} and just summarize its most important properties in the following proposition.
\begin{proposition}\label{p:proj-pi-H-E}
There exists a projection $\pi_H^E:\VH_0(\curl)\to \mathring{\CN}(\CT_H)$ with the following local stability properties:
For all $\Vv\in \VH_0(\curl)$ and all $T\in \CT_H$ it holds that
\begin{align}
\label{eq:stabilityL2}
\|\pi_H^E(\Vv)\|_{L^2(T)}&\leq C_\pi \bigl(\|\Vv\|_{L^2(\UN(T))}+H\|\curl\Vv\|_{L^2(\UN(T))}\bigr),\\*
\label{eq:stabilitycurl}
\|\curl\pi_H^E(\Vv)\|_{L^2(T)}&\leq C_\pi \|\curl\Vv\|_{L^2(\UN(T))}.
\end{align}
Furthermore, there exists a projection
$\pi_H^F:\VH_0(\Div)\to \mathring{\mathcal{RT}}(\CT_H)$
to the Raviart-Thomas space such that the following commutation property
holds
\[\curl\pi_H^E(\Vv)=\pi_H^F(\curl \Vv).\]
\end{proposition}
\begin{proof}
See \cite{FalkWinther2014} for a proof, which can be adapted to
the present case of homogeneous boundary values.
\end{proof}
As explained in the motivation in Section \ref{sec:motivation}, we also require that $\pi_H^E$ allows for a regular decomposition in the sense of \eqref{motivation:properties-pi-H}. In general, regular decompositions are an important tool for the study of $\VH(\curl)$-elliptic problems and involve splitting a vector field $\Vv\in \VH_0(\curl)$ -- in a non-unique way -- into a gradient and a (regular) remainder in $\VH^1$, see \cite{Hipt02FEem, PZ02Schwarz}. In contrast to the Helmholtz decomposition, this splitting is not orthogonal with respect to the $L^2$-inner product. If the function $\Vv\in \VH_0(\curl)$ is additionally known to be in the kernel of a suitable quasi-interpolation, a modified decomposition can be derived that is localized and $H$-weighted. In particular, the weighting with $H$ allows for estimates similar to those stated in \eqref{motivation:properties-pi-H}. The first proof of such a modified decomposition was given by Sch\"oberl \cite{Sch08aposteriori}. In the following we shall use his results and the locality of the Falk-Winther operator to recover a similar decomposition for the projection $\pi_H^E$. More precisely, we have the following lemma, which is crucial for our analysis.
\begin{lemma}
\label{lem:localregulardecomp}
Let $\pi_H^E$ denote the projection from Proposition \ref{p:proj-pi-H-E}. For any $\Vv\in \VH_0(\curl, \Omega)$, there are $\Vz\in \VH^1_0(\Omega)$ and $\theta\in H^1_0(\Omega)$ such that
\[\Vv-\pi_H^E(\Vv)=\Vz+\nabla \theta\]
with the local bounds for every $T\in \CT_H$
\begin{equation}
\label{eq:regulardecomp}
\begin{split}
H^{-1}\|\Vz\|_{L^2(T)}+\|\nabla \Vz\|_{L^2(T)}&\leq C_z\|\curl\Vv\|_{L^2(\UN^3(T))},\\
H^{-1}\|\theta\|_{L^2(T)}+\|\nabla \theta\|_{L^2(T)}&\leq C_\theta\bigl(\|\Vv\|_{L^2(\UN^3(T))}+H\|\curl\Vv\|_{L^2(\UN^3(T))}\bigr),
\end{split}
\end{equation}
where $\nabla \Vz$ stands for the Jacobian matrix of $\Vz$.
Here $C_z$ and $C_\theta$ are generic constants that only depend on the regularity of the coarse mesh.
\end{lemma}
Observe that \eqref{eq:regulardecomp} implies the earlier formulated sufficient condition \eqref{motivation:properties-pi-H}.
\begin{proof}
Let $\Vv\in \VH_0(\curl, \Omega)$.
Denote by $I_H^S:\VH_0(\curl,\Omega)\to \mathring{\CN}(\CT_H)$ the quasi-interpolation operator introduced by Sch\"oberl in \cite{Sch08aposteriori}.
It is shown in \cite[Theorem 6]{Sch08aposteriori} that there exists
a decomposition
\begin{equation}
\label{eq:schoeberlstab-p1}
\Vv-I_H^S(\Vv) =
\sum_{\substack{P \text{ vertex}\\ \text{of }\CT_H}} \Vv_P
\end{equation}
where, for any vertex $P$,
$\Vv_P\in \VH_0(\curl, \Omega_P)$ and $\Omega_P$ is the support of the local hat function associated with $P$.
Moreover, \cite[Theorem 6]{Sch08aposteriori} provides the stability
estimates
\begin{equation}\label{eq:schoeberlstab}
\| \Vv_P \|_{L^2(\Omega_P)} \lesssim \|\Vv\|_{L^2(\UN(\Omega_P))}
\quad\text{and}\quad
\|\curl \Vv_P \|_{L^2(\Omega_P)}
\lesssim \|\curl \Vv\|_{L^2(\UN(\Omega_P))}
\end{equation}
for any vertex $P$.
With these results we deduce, since $\pi_H^E$ is a projection onto the finite element space, that
\begin{align*}
\Vv-\pi_H^E(\Vv)
=\Vv-I_H^S(\Vv)-\pi_H^E(\Vv-I_H^S\Vv)
=\sum_{\substack{P \text{ vertex}\\ \text{of }\CT_H}}(\id-\pi_H^E)(\Vv_P).
\end{align*}
Due to the locality of $\pi_H^E$, we have $(\id-\pi_H^E)(\Vv_P)\in \VH_0(\curl, \UN(\Omega_P))$.
The local stability of $\pi_H^E$, \eqref{eq:stabilityL2} and \eqref{eq:stabilitycurl}, and the stability \eqref{eq:schoeberlstab} imply
\begin{align*}
\|(\id-\pi_H^E)(\Vv_P)\|_{L^2(\UN(\Omega_P))}&\lesssim \|\Vv\|_{L^2(\UN(\Omega_P))}+H\|\curl\Vv\|_{L^2(\UN(\Omega_P))},\\*
\|\curl(\id-\pi_H^E)(\Vv_P)\|_{L^2(\UN(\Omega_P))}&\lesssim \|\curl\Vv\|_{L^2(\UN(\Omega_P))}.
\end{align*}
We can now apply the regular splitting to $\Vv_P$ (cf.\ \cite{PZ02Schwarz}), i.e.\ there are $\Vz_P\in \VH^1_0(\UN(\Omega_P))$, $\theta_P\in H^1_0(\UN(\Omega_P))$ such that $\Vv_P=\Vz_P+\nabla \theta_P$ and with the estimates
\begin{align*}
H^{-1}\|\Vz_P\|_{L^2(\UN(\Omega_P))}+\|\nabla \Vz_P\|_{L^2(\UN(\Omega_P))}&\lesssim \|\curl((\id-\pi_H^E)(\Vv_P))\|_{L^2(\UN(\Omega_P))},\\*
H^{-1}\|\theta_P\|_{L^2(\UN(\Omega_P))}+\|\nabla \theta_P\|_{L^2(\UN(\Omega_P))}&\lesssim \|(\id-\pi_H^E)(\Vv_P)\|_{L^2(\UN(\Omega_P))}.
\end{align*}
Set $\Vz=\sum_P\Vz_P$ and $\theta=\sum_P\theta_P$, which yields a regular decomposition of $\Vv-\pi_H^E(\Vv)$.
The local estimates follow from the foregoing estimates for $\Vv_P$ and the decomposition \eqref{eq:schoeberlstab-p1}, which yields
\begin{align*}
H^{-1}\|\Vz\|_{L^2(T)}+\|\nabla \Vz\|_{L^2(T)}&\leq \sum_{\substack{P \text{ vertex}\\ \text{of } T}}
\left(
H^{-1}\| \Vz_P \|_{L^2(\Omega_P)}+\|\nabla \Vz_P \|_{L^2(\Omega_P)}
\right)\\
&\lesssim
\sum_{\substack{P \text{ vertex}\\ \text{of } T}} \|\curl (\id-\pi_H^E)(\Vv_P)\|_{L^2(\UN(\Omega_P))}
\lesssim \|\curl\Vv\|_{L^2(\UN^3(T))}.
\end{align*}
The local estimate for $\theta$ follows analogously.
\end{proof}
\section{The Corrector Green's Operator}
\label{sec:LODideal}
In this section we introduce an ideal \emph{Corrector Green's Operator} that allows us to derive a decomposition of the exact solution into a coarse part (which is a good approximation in $H^{-1}(\Omega,\mathbb{C}^3)$) and two different corrector contributions. For simplicity, we let from now on $\mathcal{L} : \VH_0(\curl) \rightarrow \VH_0(\curl)^{\prime}$ denote the differential operator associated with the sesquilinear form $\CB(\cdot,\cdot)$, i.e. $\mathcal{L}(v)(w)=\CB(v,w)$.
Using the Falk-Winther interpolation operator $\pi_H^E$ for the N{\'e}d{\'e}lec elements, we split the space $\VH_0(\curl)$ into the finite, low-dimensional coarse space $\mathring{\CN}(\CT_H)=\mbox{im}(\pi_H^E)$ and a corrector space given as the kernel of $\pi_H^E$, i.e.\ we set $\VW:=\ker (\pi_H^E)\subset \VH_0(\curl)$. This yields the direct sum splitting $\VH_0(\curl)=\mathring{\CN}(\CT_H)\oplus\VW$. Note that $\VW$ is closed since it is the kernel of a continuous (i.e. $\VH(\curl)$-stable) operator. With this the ideal Corrector Green's Operator is defined as follows.
\begin{definition}[Corrector Green's Operator]
For $\mathbf{F} \in \VH_0(\curl)^\prime$, we define the Corrector Green's Operator
\begin{align}
\label{cor-greens-op}
\mathcal{G}: \VH_0(\curl)^{\prime} \rightarrow \VW
\hspace{40pt}
\mbox{by} \hspace{40pt}
\CB(\mathcal{G}(\mathbf{F}) , \Vw )=\mathbf{F}(\Vw)\qquad \mbox{for all } \Vw\in \VW.
\end{align}
It is well-defined by the Lax-Milgram-Babu{\v{s}}ka theorem, which is applicable since $\CB(\cdot,\cdot)$ is $\VH_0(\curl)$-elliptic and since $\VW$ is a closed subspace of $\VH_0(\curl)$.
\end{definition}
Using the Corrector Green's Operator we obtain the following decomposition of the exact solution.
\begin{lemma}[Ideal decomposition]
\label{lemma:ideal-decompos}
The exact solution $\Vu\in\VH_0(\curl)$ to \eqref{eq:problem}
and $\Vu_H:=\pi_H^E(\Vu)$ admit the decomposition
\[
\Vu = \Vu_H - (\mathcal{G} \circ \mathcal{L})(\Vu_H) + \mathcal{G}(\Vf).
\]
\end{lemma}
\begin{proof}
Since $\VH_0(\curl)=\mathring{\CN}(\CT_H)\oplus\VW$, we can write $\Vu$ uniquely as
\[
\Vu = \pi_H^E(\Vu) + (\id - \pi_H^E)(\Vu) = \Vu_H + (\id - \pi_H^E)(\Vu),
\]
where $(\id - \pi_H^E)(\Vu) \in \VW$ by the projection property
of $\pi_H^E$.
Using the differential equation for test functions $\Vw\in \VW$ yields that
\begin{align*}
\CB( (\id - \pi_H^E)(\Vu) , \Vw )= - \CB( \Vu_H , \Vw ) + (\Vf, \Vw)_{L^2(\Omega)}
= - \CB( (\mathcal{G} \circ \mathcal{L})(\Vu_H) , \Vw ) + \CB( \mathcal{G}(\Vf) , \Vw ).
\end{align*}
Since this holds for all $\Vw\in \VW$ and since $\mathcal{G}(\Vf) - (\mathcal{G} \circ \mathcal{L})(\Vu_H) \in \VW$, we conclude that
\[
(\id - \pi_H^E)(\Vu) = \mathcal{G}(\Vf) - (\mathcal{G} \circ \mathcal{L})(\Vu_H),
\]
which finishes the proof.
\end{proof}
The Corrector Green's Operator has the following approximation and stability properties, which reveal that its contribution is always negligible in the $\VH(\Div)^\prime$-norm and also negligible in the $\VH(\curl)$-norm if applied to a function in $\VH(\Div)$.
\begin{lemma}[Ideal corrector estimates]
\label{lemma:corrector-props}
Any $\mathbf{F} \in \VH_0(\curl)^{\prime}$ satisfies
\begin{align}
\label{green-est-Hcurl-1}
H \| \mathcal{G}(\mathbf{F}) \|_{\VH(\curl)} + \| \mathcal{G}(\mathbf{F}) \|_{\VH(\Div)^{\prime}} \le C H \alpha^{-1} \| \mathbf{F} \|_{\VH_0(\curl)^{\prime}}.
\end{align}
If $\mathbf{F} = \mathbf{f} \in \VH(\Div)$ we even have
\begin{align}
\label{green-est-Hdiv-1}
H \| \mathcal{G}(\mathbf{f}) \|_{\VH(\curl)} + \| \mathcal{G}(\mathbf{f}) \|_{\VH(\Div)^{\prime}} \le C H^2 \alpha^{-1} \| \mathbf{f} \|_{\VH(\Div)}.
\end{align}
Here, the constant $C$ only depends on the maximum number of neighbors of a coarse element and the generic constants appearing in Lemma \ref{lem:localregulardecomp}.
\end{lemma}
Note that this result is still valid if we replace the $\VH(\Div)^{\prime}$-norm by the $H^{-1}(\Omega,\mathbb{C}^3)$-norm.
\begin{proof}
The stability estimate $\| \mathcal{G}(\mathbf{F}) \|_{\VH(\curl)} \le \alpha^{-1} \| \mathbf{F} \|_{\VH_0(\curl)^{\prime}}$ is obvious. Next,
with $\mathcal{G}(\mathbf{F})\in\VW$ and the regular decomposition $\mathcal{G}(\mathbf{F})=\Vz+\nabla\theta$ from Lemma \ref{lem:localregulardecomp} we have
\begin{equation}\label{green-est-Hdiv-1-proof}
\begin{aligned}
\| \mathcal{G}(\mathbf{F}) \|_{\VH(\Div)^{\prime}}
&=
\underset{\| \mathbf{v} \|_{\VH(\Div)}=1}{\sup_{\mathbf{v}\in \VH(\Div)}} \left|\int_{\Omega} \Vz \cdot \mathbf{v} - \int_{\Omega} \theta (\nabla \cdot \mathbf{v}) \right|
\\
&
\le
( \| \Vz \|_{L^2(\Omega)}^2 + \| \theta \|_{L^2(\Omega)}^2 )^{1/2}
\le C H \| \mathcal{G}(\mathbf{F}) \|_{\VH(\curl)} \le C H \alpha^{-1} \| \mathbf{F} \|_{\VH_0(\curl)^{\prime}},
\end{aligned}
\end{equation}
which proves \eqref{green-est-Hcurl-1}.
Note that this estimate exploited $\theta \in H^{1}_0(\Omega)$, which is why we do not require the function $\mathbf{v}$ to have a vanishing normal trace. Let us now consider the case that $\mathbf{F} = \mathbf{f} \in \VH(\Div)$. We have
by \eqref{green-est-Hdiv-1-proof} that
\begin{align*}
\alpha \| \mathcal{G}( \mathbf{f} ) \|_{\VH(\curl)}^2 \le \| \mathcal{G}( \mathbf{f}) \|_{\VH(\Div)^{\prime}}
\| \mathbf{f} \|_{\VH(\Div)}
\le C H
\| \mathcal{G}(\mathbf{f}) \|_{\VH(\curl)} \| \mathbf{f} \|_{\VH(\Div)}.
\end{align*}
We conclude $\| \mathcal{G}( \mathbf{f} ) \|_{\VH(\curl)} \le C H \alpha^{-1} \| \mathbf{f} \|_{\VH(\Div)}$. Finally, we can use this estimate again in \eqref{green-est-Hdiv-1-proof} to obtain
\begin{align*}
\| \mathcal{G}(\Vf) \|_{\VH(\Div)^{\prime}} \le C H \| \mathcal{G}(\Vf) \|_{\VH(\curl)} \le C H^2 \alpha^{-1} \| \mathbf{f} \|_{\VH(\Div)}.
\end{align*}
This finishes the proof.
\end{proof}
An immediate conclusion of Lemmas \ref{lemma:ideal-decompos} and \ref{lemma:corrector-props} is the following.
\begin{conclusion}
\label{conclusion-ideal-corr-est}
Let $\Vu$ denote the exact solution to \eqref{eq:curlcurl} for $ \mathbf{f} \in \VH(\Div)$. Then with the coarse part $\Vu_H:=\pi_H^E(\Vu)$ and corrector operator $\mathcal{K} := - \mathcal{G} \circ \mathcal{L}$ it holds
\begin{align*}
H^{-1}\| \Vu - (\id + \mathcal{K})\Vu_H \|_{\VH(\Div)^{\prime}}
+
\| \Vu - (\id + \mathcal{K})\Vu_H \|_{\VH(\curl)} + \| \Vu - \Vu_H \|_{\VH(\Div)^{\prime}} \le C H \| \mathbf{f} \|_{\VH(\Div)} .
\end{align*}
Here, $C$ only depends on $\alpha$, the mesh regularity and on the constants appearing in Lemma \ref{lem:localregulardecomp}.
\end{conclusion}
\begin{proof}
The estimates for $\Vu - (\id + \mathcal{K})\Vu_H =\mathcal{G}(\Vf)$ directly follow
from \eqref{green-est-Hdiv-1}.
For the estimate of $\Vu - \Vu_H =\mathcal{K}\Vu_H + \mathcal{G} \Vf$, observe that
\eqref{green-est-Hcurl-1} and
Proposition~\ref{p:proj-pi-H-E} imply
\begin{equation*}
\| \mathcal{K}\Vu_H \|_{\VH(\Div)^{\prime}}
\lesssim H
\| \CL\Vu_H \|_{\VH_0(\curl)^{\prime}}
\lesssim
H
\| \Vu_H \|_{\VH(\curl)}
=
H
\| \pi_H^E \Vu \|_{\VH(\curl)}
\lesssim
H
\| \Vu \|_{\VH(\curl)} .
\end{equation*}
Thus, the proof follows from the stability of the problem
and the triangle inequality.
\end{proof}
It only remains to derive an equation that characterizes $(\id + \mathcal{K})\Vu_H$ as the unique solution of a variational problem. This is done in the following theorem.
\begin{theorem}
We consider the setting of Conclusion \ref{conclusion-ideal-corr-est}. Then $\Vu_H=\pi_H^E(\Vu) \in \mathring{\CN}(\CT_H)$ is characterized as the unique solution to
\begin{align}
\label{ideal-lod}
\CB( \hspace{2pt} (\id + \mathcal{K})\Vu_H , (\id + \mathcal{K}^{\ast})\Vv_H \hspace{1pt} ) = ( \Vf, (\id + \mathcal{K}^{\ast})\Vv_H )_{L^2(\Omega)} \qquad \mbox{for all } \Vv_H \in \mathring{\CN}(\CT_H).
\end{align}
Here, $\mathcal{K}^{\ast}$ is the adjoint operator to $\mathcal{K}$. The sesquilinear form $\CB( \hspace{1pt} (\id + \mathcal{K})\hspace{3pt}\cdot \hspace{2pt}, (\id + \mathcal{K}^{\ast})\hspace{2pt}\cdot \hspace{2pt} )$ is $\VH(\curl)$-elliptic on $\mathring{\CN}(\CT_H)$.
\end{theorem}
Observe that we have the simplification $\mathcal{K}^{\ast}=\mathcal{K}$ if the differential operator $\mathcal{L}$ is self-adjoint, as is typically the case for $\VH(\curl)$-problems.
\begin{proof}
Since Lemma \ref{lemma:ideal-decompos} guarantees $\Vu = \Vu_H - (\mathcal{G} \circ \mathcal{L})(\Vu_H) + \mathcal{G}(\Vf)$, the weak formulation \eqref{eq:problem} yields
\begin{align*}
\CB( \Vu_H - (\mathcal{G} \circ \mathcal{L})(\Vu_H) + \mathcal{G}(\Vf) , \Vv_H ) = ( \Vf, \Vv_H )_{L^2(\Omega)} \qquad \mbox{for all } \Vv_H \in \mathring{\CN}(\CT_H).
\end{align*}
We observe that by definition of $\mathcal{G}$ we have
\begin{align*}
\CB( \mathcal{G}(\Vf) , \Vv_H ) = ( \Vf , (\mathcal{G} \circ \mathcal{L})^{\ast}\Vv_H )_{L^2(\Omega)}
\end{align*}
and
\begin{align*}
\CB( \Vu_H - (\mathcal{G} \circ \mathcal{L})(\Vu_H) , (\mathcal{G} \circ \mathcal{L})^{\ast}\Vv_H ) = 0.
\end{align*}
Combining the three equations shows that $(\id + \mathcal{K})\Vu_H$ is a solution to \eqref{ideal-lod}. The uniqueness follows from the
following norm equivalence
\begin{align*}
\| \Vu_H \|_{\VH(\curl)} = \| \pi_H^E((\id + \mathcal{K})\Vu_H) \|_{\VH(\curl)} \le C \| (\id + \mathcal{K})\Vu_H \|_{\VH(\curl)}
\le C \| \Vu_H \|_{\VH(\curl)}.
\end{align*}
This is also the reason why the $\VH(\curl)$-ellipticity of $\CB( \cdot, \cdot)$ implies the $\VH(\curl)$-ellipticity of $\CB( \hspace{1pt} (\id + \mathcal{K})\hspace{3pt}\cdot \hspace{2pt}, (\id + \mathcal{K}^{\ast})\hspace{2pt}\cdot \hspace{2pt} )$ on $\mathring{\CN}(\CT_H)$.
\end{proof}
\textbf{Numerical homogenization}. Let us summarize the most important findings and relate them to (numerical) homogenization. We defined a \emph{homogenization scale} through the coarse FE space $\mathring{\CN}(\CT_H)$. We proved that
there exists a numerically homogenized function $\Vu_H \in \mathring{\CN}(\CT_H)$ which approximates the exact solution well in $\VH(\Div)^{\prime}$ with
\begin{align*}
\| \Vu - \Vu_H \|_{\VH(\Div)^{\prime}} \le C H \| \mathbf{f} \|_{\VH(\Div)}.
\end{align*}
From the periodic homogenization theory (cf. Section \ref{sec:motivation}) we know that this is the best we can expect and that $\Vu_H$ is typically not a good $L^2$-approximation due to the large kernel of the curl-operator.
Furthermore, we showed the existence of an $\VH(\curl)$-stable corrector operator $\mathcal{K}: \mathring{\CN}(\CT_H) \rightarrow \VW$ that corrects the homogenized solution in such a way that the approximation is also accurate in $\VH(\curl)$ with
\begin{align*}
\| \Vu - (\id + \mathcal{K})\Vu_H \|_{\VH(\curl)} \le C H \| \mathbf{f} \|_{\VH(\Div)}.
\end{align*}
Since $\mathcal{K} = - \mathcal{G} \circ \mathcal{L}$, we know that we can characterize $\mathcal{K} (\Vv_H) \in \VW$ as the unique solution to the (ideal) corrector problem
\begin{align}
\label{ideal-corrector-problem}
\CB( \mathcal{K} (\Vv_H) , \Vw )=- \CB( \Vv_H , \Vw ) \qquad \mbox{for all } \Vw\in \VW.
\end{align}
The above result shows that $(\id + \mathcal{K})\Vu_H$ approximates the analytical solution at a linear rate without any assumptions on the regularity of the problem or the structure of the coefficients that define $\CB(\cdot,\cdot)$. Moreover, it does not require the mesh to resolve possible fine-scale features of the coefficient.
On the other hand, the ideal corrector problem \eqref{ideal-corrector-problem} is global, which significantly limits its practical usability in terms of real computations.
However, as we will see next, the corrector Green's function associated with problem \eqref{cor-greens-op} shows an exponential decay measured in units of $H$. This property will allow us to split the global corrector problem \eqref{ideal-corrector-problem} into several smaller problems on subdomains, similar to how we encounter it in classical homogenization theory. We show the exponential decay of the corrector Green's function indirectly through the properties of its corresponding Green's operator $\mathcal{G}$. The localization is established in Section \ref{sec:LOD}, whereas we prove the decay in Section \ref{sec:decaycorrectors}.
\section{Quasi-local numerical homogenization}
\label{sec:LOD}
In this section we describe how the ideal corrector $\mathcal{K}$ can be approximated by a sum of local correctors without destroying the overall approximation order. This is of central importance for efficient computability. Furthermore, it also reveals that the new corrector is a quasi-local operator, which is in line with homogenization theory.
We start with quantifying the decay properties of the Corrector Green's Operator in Section \ref{subsec:idealapprox}. In Section \ref{subsec:LODlocal} we apply the result to our numerical homogenization setting and state the error estimates for the \quotes{localized} corrector operator. We close with a few remarks on a fully discrete realization of the localized corrector operator in Section \ref{subsec:discreteLOD}.
\subsection{Exponential decay and localized corrector}
\label{subsec:idealapprox}
The property that $\mathcal{K}$ can be approximated by local correctors is directly linked to the decay properties of the Green's function associated with problem \eqref{cor-greens-op}. These decay properties can be quantified explicitly by measuring distances between points in units of the coarse mesh size $H$. We have the following result, which states -- loosely speaking -- at which distance from the support of a source term $\mathbf{F}$ the $\VH(\curl)$-norm of $\mathcal{G}(\mathbf{F})$ becomes negligibly small. For that, recall the definition of the element patches from the beginning of Section \ref{sec:intpol}, where $\UN^m(T)$ denotes the patch that consists of a coarse element $T \in \CT_H$ and $m$ layers of coarse elements around it. A proof of the following proposition is given in Section \ref{sec:decaycorrectors}.
\begin{proposition}
\label{prop:decaycorrector1}
Let $T\in \CT_H$ denote a coarse element and $m\in \mathbb{N}$ a number of layers. Furthermore, let $\mathbf{F}_T \in \VH_0(\curl)^{\prime}$ denote a local source functional in the sense that $\mathbf{F}_T(\Vv)=0$ for all $\Vv \in \VH_0(\curl)$ with $\supp(\Vv) \subset \Omega \setminus T$. Then there exists $0<\tilde{\beta}<1$, independent of $H$, $T$, $m$ and $\mathbf{F}_T$, such that
\begin{equation}
\label{eq:decaycorrector1}
\| \mathcal{G}(\mathbf{F}_T) \|_{\VH(\curl, \Omega\setminus \UN^m(T))}\lesssim \tilde{\beta}^m\| \mathbf{F}_T \|_{\VH_0(\curl)^{\prime}}.
\end{equation}
\end{proposition}
In order to use this result to approximate $\mathcal{K}(\Vv_H) = - (\mathcal{G} \circ \mathcal{L})\Vv_H$ (which has a nonlocal argument), we introduce,
for any $T\in\CT_H$, localized differential operators
$\CL_T:\VH(\curl,T)\to\VH(\curl,\Omega)'$
with
\[\langle \mathcal{L}_T(\Vu), \Vv \rangle := \CB_T(\Vu, \Vv ),\]
where $\CB_T(\cdot, \cdot )$ denotes the restriction of $\CB(\cdot, \cdot )$ to the element $T$. By linearity of $\mathcal{G}$ we have that
\[\mathcal{G} \circ \mathcal{L} = \sum_{T \in \CT_H} \mathcal{G} \circ \mathcal{L}_T\]
and consequently we can write
\[
\mathcal{K}( \Vv_H ) = \sum_{T \in \CT_H} \mathcal{G}( \mathbf{F}_T ), \qquad \mbox{with } \mathbf{F}_T:= - \mathcal{L}_T(\Vv_H).
\]
Obviously, $\mathcal{G}( \mathbf{F}_T )$ fits into the setting of Proposition \ref{prop:decaycorrector1}. This suggests truncating the individual computations of $\mathcal{G}( \mathbf{F}_T )$ to a small patch $\UN^m(T)$ and then collecting the results to construct a global approximation of the corrector. Typically, $m$ is referred to as the \emph{oversampling parameter}. The strategy is detailed in the following definition.
\begin{definition}[Localized Corrector Approximation]
\label{de:loc-correctors}
For an element $T\in \CT_H$ we define the element patch $\Omega_T:=\UN^m(T)$ of order $m\in \mathbb{N}$. Let
$\mathbf{F} \in \VH_0(\curl)^{\prime}$ be the sum of local functionals with $\mathbf{F} =\sum_{T\in \CT_H} \mathbf{F}_T$, where $\mathbf{F}_T \in \VH_0(\curl)^{\prime}$ is as in Proposition \ref{prop:decaycorrector1}. Furthermore, let $\VW(\Omega_T)\subset \VW$ denote the space of functions from $\VW$ that vanish outside $\Omega_T$, i.e.
\[\VW(\Omega_T)=\{\Vw\in\VW|\Vw=0 \text{ \textrm{outside} }\Omega_T\}.\]
We call $\mathcal{G}_{T,m}( \mathbf{F}_T ) \in \VW(\Omega_T)$ the \emph{localized corrector} if it solves
\begin{equation}
\label{eq:correctorlocal}
\CB( \mathcal{G}_{T,m}( \mathbf{F}_T ) , \Vw )=\mathbf{F}_T(\Vw)\qquad \mbox{for all } \Vw\in \VW(\Omega_T).
\end{equation}
With this, the global corrector approximation is given by
\begin{align*}
\mathcal{G}_{m}(\mathbf{F}) := \sum_{T\in \CT_H} \mathcal{G}_{T,m}( \mathbf{F}_T ).
\end{align*}
\end{definition}
Observe that problem \eqref{eq:correctorlocal} is only formulated on the patch $\Omega_T$ and that it admits a unique solution by the Lax-Milgram-Babu{\v{s}}ka theorem.
Based on decay properties stated in Proposition \ref{prop:decaycorrector1}, we can derive the following error estimate for the difference between the exact corrector $\mathcal{G}(\mathbf{F})$ and its approximation $\mathcal{G}_{m}(\mathbf{F})$ obtained by an $m$th level truncation. The proof of the following result is again postponed to Section \ref{sec:decaycorrectors}.
\begin{theorem}
\label{thm:errorcorrectors}
We consider the setting of Definition \ref{de:loc-correctors} with ideal Green's Corrector $\mathcal{G}(\mathbf{F})$ and its $m$th level truncated approximation $\mathcal{G}_{m}(\mathbf{F})$. Then there exist constants $C_{d}>0$ and $0<\beta<1$ (both independent of $H$ and $m$) such that
\begin{align}
\label{eq:errorcorrector}
\| \mathcal{G}(\mathbf{F}) - \mathcal{G}_{m}(\mathbf{F}) \|_{\VH(\curl)}&\leq C_{d} \sqrt{C_{\ol,m}}\,\beta^m \left( \sum_{T\in \CT_H} \| \mathbf{F}_T \|_{\VH_0(\curl)^{\prime}}^2 \right)^{1/2}
\end{align}
and
\begin{align}
\label{eq:errorcorrector-2}
\| \mathcal{G}(\mathbf{F}) - \mathcal{G}_{m}(\mathbf{F}) \|_{\VH(\Div)^{\prime}}&\leq C_{d} \sqrt{C_{\ol,m}}\, \beta^m H \left( \sum_{T\in \CT_H} \| \mathbf{F}_T \|_{\VH_0(\curl)^{\prime}}^2 \right)^{1/2}.
\end{align}
\end{theorem}
As a direct conclusion from Theorem \ref{thm:errorcorrectors} we obtain the main result of this paper that we present in the next subsection.
\subsection{The quasi-local corrector and homogenization}
\label{subsec:LODlocal}
Following the above motivation we split the ideal corrector $\mathcal{K}(\Vv_H) =- (\mathcal{G} \circ \mathcal{L})\Vv_H$ into a sum of quasi-local contributions of the form $\sum_{T \in \CT_H} (\mathcal{G} \circ \mathcal{L}_T)\Vv_H$. Applying Theorem \ref{thm:errorcorrectors}, we obtain the following result.
\begin{conclusion}
\label{conclusion-main-result}
Let $\mathcal{K}_m := - \sum_{T \in \CT_H} (\mathcal{G}_{T,m} \circ \mathcal{L}_T): \mathring{\CN}(\CT_H) \rightarrow \VW$ denote the localized corrector operator obtained by truncation of $m$th order. Then it holds
\begin{align}
\label{conclusion-main-result-est}
\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu - (\id + \mathcal{K}_m)\mathbf{v}_H \|_{\VH(\curl)} \le
C \left( H + \sqrt{C_{\ol,m}} \beta^m \right) \| \Vf \|_{\VH(\Div)}.
\end{align}
\end{conclusion}
Note that even though the ideal corrector $\mathcal{K}$ is a non-local operator, we can approximate it by a quasi-local corrector $\mathcal{K}_m$. The non-locality of $\mathcal{K}$ is seen from the fact that, if $\mathcal{K}$ is applied to a function $\Vv_H$ with local support, the image $\mathcal{K}(\Vv_H)$ will typically still have global support in $\Omega$. On the other hand, if $\mathcal{K}_m$ is applied to a locally supported function, the support only increases by a layer of thickness of order $mH$, which is what we mean by quasi-locality.
\begin{proof}[Proof of Conclusion \ref{conclusion-main-result}]
With $\mathcal{K}_m = - \sum_{T \in \CT_H} (\mathcal{G}_{T,m} \circ \mathcal{L}_T)$ we apply
Conclusion~\ref{conclusion-ideal-corr-est}
and Theorem \ref{thm:errorcorrectors} to obtain
\begin{equation*}
\begin{aligned}
&
\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu - (\id + \mathcal{K}_m)\mathbf{v}_H \|_{\VH(\curl)}
\le
\| \Vu - (\id + \mathcal{K})\mathbf{u}_H \|_{\VH(\curl)} + \|(\mathcal{K} - \mathcal{K}_m)\mathbf{u}_H \|_{\VH(\curl)}\\
&
\qquad\qquad\qquad\qquad
\le C H \| \Vf \|_{\VH(\Div)} + C \sqrt{C_{\ol,m}}\, \beta^m \left( \sum_{T\in \CT_H} \| \mathcal{L}_T(\mathbf{u}_H) \|_{\VH_0(\curl)^{\prime}}^2 \right)^{1/2},
\end{aligned}
\end{equation*}
where we observe with $\| \mathcal{L}_T(\mathbf{v}_H) \|_{\VH_0(\curl)^{\prime}} \le C \| \mathbf{v}_H \|_{\VH(\curl,T)}$ that
\begin{align*}
\sum_{T\in \CT_H} \| \mathcal{L}_T(\mathbf{u}_H) \|_{\VH_0(\curl)^{\prime}}^2 \le C \| \mathbf{u}_H \|_{\VH(\curl)}^2
= C \| \pi_H^E(\Vu) \|_{\VH(\curl)}^2 \le C \| \Vu \|_{\VH(\curl)}^2 \le C \| \Vf \|_{\VH(\Div)}^2.
\end{align*}
\end{proof}
Conclusion \ref{conclusion-main-result} has immediate implications from the computational point of view. First, we observe that $\mathcal{K}_m$ can be computed by solving local decoupled problems. Considering a basis $\{ \boldsymbol{\Phi}_k | \hspace{3pt} 1 \le k \le N \}$ of $\mathring{\CN}(\CT_H)$, we need to determine $\mathcal{K}_m(\boldsymbol{\Phi}_k)$. For that, we consider all $T \in \CT_H$ with $T \subset \supp(\boldsymbol{\Phi}_k)$ and solve for $\mathcal{K}_{T,m}(\boldsymbol{\Phi}_k) \in \VW(\hspace{1pt}\UN^m(T)\hspace{1pt})$ with
\begin{align}
\label{loc-corrector-problems}
\CB_{\UN^m(T)}( \mathcal{K}_{T,m}(\boldsymbol{\Phi}_k), \Vw ) = - \CB_{T}( \boldsymbol{\Phi}_k , \Vw ) \qquad \mbox{for all }
\Vw \in \VW(\hspace{1pt}\UN^m(T)\hspace{1pt}).
\end{align}
The global corrector approximation is now given by
\[
\mathcal{K}_m(\boldsymbol{\Phi}_k) = \underset{ T \subset \supp(\boldsymbol{\Phi}_k) }{\sum_{ T \in \CT_H }}
\mathcal{K}_{T,m}(\boldsymbol{\Phi}_k).
\]
Next, we observe that selecting the localization parameter $m$ such that
\[
m\gtrsim \lvert \log H\rvert \big/ \lvert \log \beta\rvert,
\]
we have with Conclusion \ref{conclusion-main-result} that
\begin{align}
\label{curl-est-m-logH}\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu - (\id + \mathcal{K}_m)\mathbf{v}_H \|_{\VH(\curl)} \le
C H \| \Vf \|_{\VH(\Div)},
\end{align}
which is of the same order as for the ideal corrector $\mathcal{K}$. Consequently, we can consider the Galerkin finite element method, where we seek $\Vu_{H,m} \in \mathring{\CN}(\CT_H)$ such that
\begin{align*}
\CB( (\id + \mathcal{K}_m)\Vu_{H,m} , (\id + \mathcal{K}_m)\mathbf{v}_H ) = (\mathbf{f} , (\id + \mathcal{K}_m)\mathbf{v}_H )_{L^2(\Omega)}
\qquad \mbox{for all } \Vv_{H} \in \mathring{\CN}(\CT_H).
\end{align*}
Since a Galerkin method yields the $\VH(\curl)$-quasi-best approximation of $\Vu$ in the space \linebreak[4]$(\id + \mathcal{K}_m)\mathring{\CN}(\CT_H)$ we have with \eqref{curl-est-m-logH} that
\begin{align*}
\| \Vu - (\id + \mathcal{K}_m)\Vu_{H,m} \|_{\VH(\curl)} \le C H \| \Vf \|_{\VH(\Div)}
\end{align*}
and we have with \eqref{green-est-Hcurl-1}, \eqref{eq:errorcorrector-2} and the $\VH(\curl)$-stability of $\pi_H^E$ that
\begin{align*}
\| \Vu - \Vu_{H,m} \|_{\VH(\Div)^{\prime}} \le C H \| \Vf \|_{\VH(\Div)}.
\end{align*}
This result is a homogenization result in the sense that it yields a coarse function $\Vu_{H,m}$ that approximates the exact solution in $\VH(\Div)^{\prime}$. Furthermore, it yields an appropriate (quasi-local) corrector $\mathcal{K}_m(\Vu_{H,m})$ that is required for an accurate approximation in $\VH(\curl)$.
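The choice $m\gtrsim \lvert \log H\rvert/\lvert\log \beta\rvert$ used above can easily be made explicit in an implementation. The following short Python sketch (with an assumed, purely illustrative decay rate $\beta$, which is in general not known explicitly) computes the smallest oversampling parameter $m$ with $\beta^m\le H$ for a sequence of coarse mesh sizes and illustrates that $m$ only grows logarithmically with $H^{-1}$.
\begin{verbatim}
import math

beta = 0.5  # assumed decay rate, purely illustrative
for H in [1/8, 1/16, 1/32, 1/64, 1/128]:
    # smallest integer m with beta^m <= H, i.e. m >= |log H| / |log beta|
    m = math.ceil(abs(math.log(H)) / abs(math.log(beta)))
    print(f"H = {H:.5f}:  m = {m},  beta^m = {beta**m:.2e}")
\end{verbatim}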
\begin{remark}[Refined estimates]
With a more careful proof, the constants in the estimate of Conclusion \ref{conclusion-main-result} can be specified as
\begin{eqnarray*}
\label{eq:errorLOD-refined}
\lefteqn{\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu - (\id + \mathcal{K}_m)\mathbf{v}_H \|_{\VH(\curl)}}\\
\nonumber&\leq& \alpha^{-1}(1+H)\bigl(H\max\{C_z, C_\theta\} \sqrt{C_{\ol,3}}+C_d C_\pi C_B^2\sqrt{C_{\ol,m}C_{\ol}}\, \beta^m\bigr)\|\Vf\|_{\VH(\Div)},
\end{eqnarray*}
where $\alpha$ and $C_B$ are as in Assumption \ref{asspt:sesquiform}, $C_{d}$ is the constant appearing in the decay estimate \eqref{eq:errorcorrector}, $C_\pi$ is as in Proposition \ref{p:proj-pi-H-E}, $C_z$ and $C_\theta$ are from \eqref{eq:regulardecomp} and $C_{\ol,m}$ as detailed at the beginning of Section \ref{sec:intpol}.
Note that if $m$ is large enough so that $\UN^m(T)=\Omega$ for all $T \in \CT_H$, we have as a refinement of Conclusion \ref{conclusion-ideal-corr-est} that
\begin{eqnarray*}
\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu - (\id + \mathcal{K})\mathbf{v}_H \|_{\VH(\curl)} \leq \alpha^{-1}(1+H)\bigl(H\max\{C_z, C_\theta\} \sqrt{C_{\ol,3}} \bigr)\|\Vf\|_{\VH(\Div)}.
\end{eqnarray*}
\end{remark}
\subsection{A fully discrete localized multiscale method}
\label{subsec:discreteLOD}
The procedure described in the previous section is not yet \quotes{ready to use} for a practical computation, as the local corrector problems \eqref{loc-corrector-problems} involve the infinite-dimensional spaces $\VW(\Omega_T)$. Hence, we require an additional fine-scale discretization of the corrector problems (just as the cell problems in periodic homogenization theory can typically not be solved analytically).
For a fully discrete formulation, we introduce a second shape-regular partition $\CT_h$ of $\Omega$ into tetrahedra.
This partition may be non-uniform and is assumed to be obtained from $\CT_H$ by at least one global refinement.
It is a fine discretization in the sense that $h<H$ and that $\CT_h$ resolves all fine-scale features of the coefficients.
Let $\mathring{\CN}(\CT_h)\subset\VH_0(\curl)$ denote the space of N{\'e}d{\'e}lec elements with respect to the partition $\CT_h$.
We then introduce the space
\[\VW_h(\Omega_T):=\VW(\Omega_T)\cap\mathring{\CN}(\CT_h)=\{\Vv_h\in\mathring{\CN}(\CT_h)|\Vv_h=0\text{ outside }\Omega_T, \pi_H^E(\Vv_h)=0\}\]
and discretize the corrector problem
\eqref{loc-corrector-problems} with this new space.
The corresponding correctors are denoted by $\mathcal{K}_{T,m,h}$ and $\mathcal{K}_{m,h}$. With this modification we can prove analogously to the error estimate \eqref{conclusion-main-result-est} that it holds
\begin{align}
\label{conclusion-main-result-est-h}
\inf_{\mathbf{v}_H \in \mathring{\CN}(\CT_H)} \| \Vu_h - (\id + \mathcal{K}_{m,h})\mathbf{v}_H \|_{\VH(\curl)} \le
C \left( H + \sqrt{C_{\ol,m}} \tilde{\beta}^m \right) \| \Vf \|_{\VH(\Div)},
\end{align}
where $\Vu_h$ is the Galerkin approximation of $\Vu$ in the discrete fine space $\mathring{\CN}(\CT_h)$. If $\CT_h$ is fine enough, we can assume that $\Vu_h$ is a good $\VH(\curl)$-approximation to the true solution $\Vu$. Consequently, it is justified to formulate a fully discrete (localized) multiscale method by seeking
$\Vu_{H,h,m}^{\ms}:=(\id+\mathcal{K}_{m,h})\Vu_H$ with $\Vu_H\in \mathring{\CN}(\CT_H)$ such that
\begin{equation}
\label{eq:discreteLOD}
\CB(\Vu_{H,h,m}^{\ms}, (\id+\mathcal{K}_{m,h})\Vv_H)=(\Vf, (\id+\mathcal{K}_{m,h})\Vv_H)_{L^2(\Omega)}\qquad\mbox{for all } \Vv_H\in\mathring{\CN}(\CT_H).
\end{equation}
As before, we can conclude from \eqref{conclusion-main-result-est-h} together with the choice $m\gtrsim \lvert \log H\rvert/\lvert\log \beta\rvert$, that it holds
\begin{align*}
\| \Vu_h - \Vu_{H,h,m}^{\ms} \|_{\VH(\curl)}
+
\| \Vu_h - \pi_H^E \Vu_{H,h,m}^{\ms} \|_{\VH(\Div)^{\prime}}
\le C H \| \Vf \|_{\VH(\Div)}.
\end{align*}
Thus, the additional fine-scale discretization does not affect the overall error estimates and we therefore concentrate in the proofs (for simplicity) on the semi-discrete case as detailed in Sections \ref{subsec:idealapprox} and \ref{subsec:LODlocal}. Compared to the fully-discrete case, only some small modifications are needed in the proofs for the decay of the correctors. These modifications are outlined at the end of Section \ref{sec:decaycorrectors}. Note that $\Vu_h$ is not needed in the practical implementation of the method.
\section{Proof of the decay for the Corrector Green's Operator}
\label{sec:decaycorrectors}
In this section, we prove Proposition \ref{prop:decaycorrector1} and Theorem \ref{thm:errorcorrectors}. Since the latter one is based on the first result, we start with proving
the exponential decay of the Green's function associated with $\mathcal{G}$. Recall that we quantified the decay indirectly through estimates of the form
\begin{equation*}
\| \mathcal{G}(\mathbf{F}_T) \|_{\VH(\curl, \Omega\setminus \UN^m(T))}\lesssim \tilde{\beta}^m\| \mathbf{F}_T \|_{\VH_0(\curl)^{\prime}},
\end{equation*}
where $\mathbf{F}_T$ is a $T$-local functional and $0<\tilde{\beta}<1$.
\begin{proof}[Proof of Proposition \ref{prop:decaycorrector1}]
Let $\eta\in \CS^1(\CT_H)\subset H^1(\Omega)$ be a scalar-valued, piece-wise linear and globally continuous cut-off function with
\begin{equation*}
\eta=0\qquad \text{in}\quad \UN^{m-6}(T)\qquad \qquad\qquad \eta=1\qquad \text{in}\quad \Omega\setminus\UN^{m-5}(T).
\end{equation*}
Denote $\CR=\supp(\nabla \eta)$ and $\Vphi:=\mathcal{G}(\mathbf{F}_T) \in \VW$. In the following we use $\UN^k(\CR)=\UN^{m-5+k}(T)\setminus \UN^{m-6-k}(T)$.
Note that $\|\nabla \eta\|_{L^\infty(\CR)}\sim H^{-1}$.
Furthermore, let $\Vphi=\Vphi-\pi_H^E\Vphi=\Vz+\nabla \theta$ be the splitting from Lemma \ref{lem:localregulardecomp}.
We obtain with $\eta\leq 1$, the coercivity, and the product rule
that
\begin{align*}
\alpha\|\Vphi\|^2_{\VH(\curl, \Omega\setminus\UN^m(T))}&\leq \bigl|(\mu\curl\Vphi, \eta\curl\Vphi)_{L^2(\Omega)}+(\kappa\Vphi, \eta\Vphi)_{L^2(\Omega)}\bigr|\\
&=\bigl|(\mu\curl\Vphi, \eta\curl\Vz)_{L^2(\Omega)}+(\kappa\Vphi, \eta\nabla\theta+\eta\Vz)_{L^2(\Omega)}\bigr|\\
&\leq M_1
+M_2+M_3+M_4+M_5,
\end{align*}
where
\begin{align*}
& M_1:=\Bigl|\bigl(\mu\curl\Vphi, \curl(\id-\pi_H^E)(\eta\Vz)\bigr)_{L^2(\Omega)}
&&\hspace{-23pt}+\enspace
\bigl(\kappa \Vphi, (\id-\pi_H^E)
(\eta\Vz+\nabla(\eta\theta))\bigr)_{L^2(\Omega)}\Bigr|,
\\
&M_2:=\Bigl|\bigl(\mu\curl\Vphi, \curl\pi_H^ E(\eta\Vz)\bigr)_{L^2(\Omega)}\Bigr|,
&&
M_3:=\Bigl|\bigl(\kappa \Vphi,\pi_H^E(\eta\Vz+\nabla(\eta\theta))\bigr)_{L^2(\Omega)}\Bigr|,
\\
&
M_4:=\Bigl|\bigl(\mu\curl \Vphi, \nabla \eta\times \Vz\bigr)_{L^2(\Omega)}\Bigr|,
&&
M_5:=\Bigl|\bigl(\kappa\Vphi, \theta\nabla \eta\bigr)_{L^2(\Omega)}\Bigr|.
\end{align*}
We used the product rule $\curl(\eta\Vz)=\nabla\eta\times \Vz+\eta\curl\Vz$ here.
We now estimate the five terms separately.
Let $\Vw:=(\id-\pi_H^E)(\eta\Vz+\nabla(\eta\theta))$ and note that (i) $\curl\Vw=\curl(\id-\pi_H^E)(\eta\Vz)$, (ii) $\Vw\in \VW$, (iii) $\supp\Vw\subset\Omega\setminus T$.
Using the definition of the Corrector Green's Operator in \eqref{cor-greens-op} and the fact that $\mathbf{F}_T(\Vw)=0$ yields $M_1=0$.
For $M_2$, note that the commuting property of the projections
$\pi^E$ and $\pi^F$ implies
$\curl\pi_H^E(\Vz)=\pi_H^F(\curl \Vz)=\pi_H^F(\curl\Vphi)=\curl\pi_H^E\Vphi=0$
because $\Vphi\in \VW$.
Using the stability of $\pi_H^E$ \eqref{eq:stabilitycurl} and Lemma \ref{lem:localregulardecomp}, we can estimate $M_2$ as
\begin{align*}
M_2&\lesssim \|\curl\Vphi\|_{L^2(\UN(\CR))}\|\curl\pi_H^E(\eta\Vz)\|_{L^2(\UN(\CR))}\lesssim \|\curl\Vphi\|_{L^2(\UN(\CR))}\|\curl(\eta\Vz)\|_{L^2(\UN^2(\CR))}\\
&\lesssim \|\curl\Vphi\|_{L^2(\UN(\CR))}\Bigl(\|\nabla\eta\|_{L^\infty(\CR)}\|\Vz\|_{L^2(\CR)}
+\|\eta\|_{L^\infty(\UN^2(\CR))}\|\curl\Vz\|_{L^2(\UN^{m-3}(T)\setminus \UN^{m-6}(T))}\Bigr)\\
&\lesssim \|\curl\Vphi\|_{L^2(\UN(\CR))}\|\curl\Vphi\|_{L^2(\UN^{m}(T)\setminus \UN^{m-9}(T))}.
\end{align*}
In a similar manner, we obtain for $M_3$ that
\begin{align*}
M_3&\lesssim\|\Vphi\|_{L^2(\UN(\CR))}\Bigl(\|\eta \Vz\|_{L^2(\UN^2(\CR))}+\|\nabla(\eta\theta)\|_{L^2(\UN^2(\CR))}+H\|\curl(\eta\Vz)\|_{L^2(\UN^2(\CR))}\Bigr)\\
&\lesssim \|\Vphi\|_{L^2(\UN(\CR))}\Bigl(\|\Vphi\|_{L^2(\UN^{m}(T)\setminus \UN^{m-9}(T))}+H\|\curl\Vphi\|_{L^2(\UN^{m}(T)\setminus \UN^{m-9}(T))}\Bigr).
\end{align*}
Simply using Lemma \ref{lem:localregulardecomp}, we deduce for $M_4$ and $M_5$
\begin{align*}
M_4&\lesssim \|\curl\Vphi\|_{L^2(\CR)}\|\curl\Vphi\|_{L^2(\UN^3(\CR))},
\\
M_5&\lesssim \|\Vphi\|_{L^2(\CR)}
(\|\Vphi\|_{L^2(\UN^3(\CR))}
+ H\|\curl \Vphi\|_{L^2(\UN^3(\CR))}).
\end{align*}
All in all, this gives
\begin{equation*}
\|\Vphi\|^2_{\VH(\curl, \Omega\setminus \UN^m(T))}\leq
\tilde{C} \|\Vphi\|^2_{\VH(\curl, \UN^{m}(T)\setminus \UN^{m-9}(T) )}
\end{equation*}
for some $\tilde{C}>0$.
Moreover, it holds that
\begin{equation*}
\|\Vphi\|^2_{\VH(\curl, \Omega\setminus \UN^m(T))}
=
\|\Vphi\|^2_{\VH(\curl, \Omega\setminus \UN^{m-9}(T))}
- \|\Vphi\|^2_{\VH(\curl, \UN^m(T)\setminus \UN^{m-9}(T))}.
\end{equation*}
Thus, we obtain finally with $\tilde{\beta}_{\mathrm{pre}}:=(1+\tilde{C}^{-1})^{-1}<1$, a re-iteration of the above argument, and Lemma~\ref{lemma:corrector-props} that
\begin{equation*}
\|\Vphi\|^2_{\VH(\curl, \Omega\setminus \UN^m(T))}\lesssim \tilde{\beta}_{\mathrm{pre}}^{\lfloor m/9\rfloor}\|\Vphi\|^2_{\VH(\curl)}\lesssim \tilde{\beta}_{\mathrm{pre}}^{\lfloor m/9\rfloor}\| \mathbf{F}_T \|^2_{\VH_0(\curl)^\prime}.
\end{equation*}
Algebraic manipulations (setting $\tilde{\beta}:=\tilde{\beta}_{\mathrm{pre}}^{1/9}$ and absorbing the effect of the floor function into the generic constant) give the assertion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:errorcorrectors}]
We start by proving the following local estimate
\begin{align}
\label{eq:errorcorrectorlocal}
\| \mathcal{G}( \mathbf{F}_T )-\mathcal{G}_{T,m}( \mathbf{F}_T ) \|_{\VH(\curl)}&\leq C_1 \tilde{\beta}^m \| \mathbf{F}_T \|_{\VH_0(\curl)^{\prime}}
\end{align}
for some constant $C_1>0$ and $0<\tilde{\beta}<1$.
Let $\eta\in \CS^1(\CT_H)$ be a piece-wise linear and globally continuous cut-off function with
\begin{align*}
\eta=0 \qquad \text{in} \quad \Omega\setminus \UN^{m-1}(T)\qquad\qquad\qquad\eta=1\qquad\text{in}\quad \UN^{m-2}(T).
\end{align*}
Due to C{\'e}a's Lemma we have
\begin{align*}
\| \mathcal{G}( \mathbf{F}_T ) - \mathcal{G}_{T,m}( \mathbf{F}_T ) \|_{\VH(\curl)}\lesssim \inf_{\Vw_{T,m}\in \VW(\Omega_T)}\| \mathcal{G}( \mathbf{F}_T ) -\Vw_{T,m}\|_{\VH(\curl)}.
\end{align*}
We use the splitting of Lemma \ref{lem:localregulardecomp} and write $\mathcal{G}( \mathbf{F}_T )=(\id-\pi_H^E)(\mathcal{G}( \mathbf{F}_T ))=\Vz+\nabla\theta$.
Then we choose $\Vw_{T,m}=(\id-\pi_H^E)(\eta\Vz+\nabla(\eta\theta))\in \VW(\Omega_T)$ and derive
with the stability of $\pi_H^E$ and \eqref{eq:regulardecomp}
\begin{align*}
\|\mathcal{G}( \mathbf{F}_T )-\mathcal{G}_{T,m}( \mathbf{F}_T )\|_{\VH(\curl)}&\lesssim \|(\id-\pi_H^E)(\mathcal{G}( \mathbf{F}_T )-\eta\Vz - \nabla(\eta\theta))\|_{\VH(\curl)}\\
&=\|(\id-\pi_H^E)((1-\eta)\Vz+\nabla((1-\eta)\theta))\|_{\VH(\curl)}\\
&\lesssim \|(1-\eta)\Vz\|_{L^2(\Omega\setminus\{\eta=1\})}+\|\nabla((1-\eta)\theta)\|_{L^2(\Omega\setminus\{\eta=1\})}\\*
&\quad+(1+H)\|\curl((1-\eta)\Vz)\|_{L^2(\Omega\setminus\{\eta=1\})}\\
&\lesssim (1+H)\,\| \mathcal{G}( \mathbf{F}_T )\|_{\VH(\curl, \UN^3(\Omega\setminus\{\eta=1\}))}.
\end{align*}
Combination with Proposition \ref{prop:decaycorrector1} gives estimate \eqref{eq:errorcorrectorlocal}.
To prove the main estimate of Theorem \ref{thm:errorcorrectors}, i.e.\ estimate \eqref{eq:errorcorrector}, we define, for a given simplex $T\in\CT_H$, the piece-wise linear, globally continuous cut-off function $\eta_T\in \CS^1(\CT_H)$ via
\begin{align*}
\eta_T=0\qquad \text{in}\quad \UN^{m+1}(T)\qquad\qquad\qquad \eta_T=1\qquad\text{in}\quad \Omega\setminus\UN^{m+2}(T).
\end{align*}
Denote $\Vw:=(\mathcal{G}-\mathcal{G}_m)(\mathbf{F})=\sum_{T \in \CT_H} \Vw_T$ with $\Vw_T:=(\mathcal{G}-\mathcal{G}_{T,m})(\mathbf{F}_T)$ and split $\Vw$ according to Lemma \ref{lem:localregulardecomp} as $\Vw=\Vw-\pi_H^E(\Vw)=\Vz+\nabla\theta$.
Due to the ellipticity of $\CB$ and its sesquilinearity, we have
\begin{align*}
\alpha\|\Vw\|^2_{\VH(\curl)}
\leq
\Bigl|\sum_{T\in\CT_H}\CB(\Vw_T,\Vw)\Bigr|\leq \sum_{T\in \CT_H}|\CB(\Vw_T,\Vz+\nabla\theta )|
\leq
\sum_{T\in \CT_H} (A_T + B_T)
\end{align*}
where, for any $T\in\CT_H$, we abbreviate
\begin{equation*}
A_T:=|\CB(\Vw_T,(1-\eta_T)\Vz+\nabla((1-\eta_T)\theta))|
\quad\text{and}\quad
B_T:=|\CB(\Vw_T,\eta_T\Vz+\nabla(\eta_T\theta))| .
\end{equation*}
For the term $A_T$, we derive by using the properties of the cut-off function and the regular decomposition \eqref{eq:regulardecomp}
\begin{align*}
A_T&\lesssim\|\Vw_T\|_{\VH(\curl)}\|(1-\eta_T)\Vz+\nabla((1-\eta_T)\theta)\|_{\VH(\curl, \{\eta_T\neq 1\})}\\
&\leq \|\Vw_T\|_{\VH(\curl)}\,(1+H)\,\|\Vw\|_{\VH(\curl, \UN^3(\{\eta_T\neq 1\}))}.
\end{align*}
The term $B_T$ can be split as
\begin{align*}
B_T\leq |\CB(\Vw_T,(\id-\pi_H^E)(\eta_T\Vz+\nabla(\eta_T\theta)))|+|\CB(\Vw_T,\pi_H^E(\eta_T\Vz+\nabla(\eta_T\theta)))|.
\end{align*}
Denoting $\Vphi:=(\id-\pi_H^E)(\eta_T\Vz+\nabla(\eta_T\theta))$, we observe $\Vphi\in \VW$ and $\supp\Vphi\subset\Omega\setminus \UN^m(T)$.
Because $\Vphi\in\VW$ with support outside $T$, we have $\CB(\mathcal{G}(\mathbf{F}_T),\Vphi)=\mathbf{F}_T(\Vphi)=0$.
Since $\Vphi$ has support outside $\UN^m(T)=\Omega_T$, but $\mathcal{G}_{T,m}(\mathbf{F}_T)\in \VW(\Omega_T)$, we also have $\CB(\mathcal{G}_{T,m}(\mathbf{F}_T),\Vphi)=0$.
All in all, this means $\CB(\Vw_T , \Vphi )=0$.
Using the stability of $\pi_H^E$ \eqref{eq:stabilityL2}, \eqref{eq:stabilitycurl} and the regular decomposition \eqref{eq:regulardecomp}, we obtain
\begin{align*}
B_T &\leq |\CB(\Vw_T,\pi_H^E(\eta_T\Vz+\nabla(\eta_T\theta)))|\\*
&\lesssim\|\Vw_T\|_{\VH(\curl)}\bigl(\|\eta_T\Vz+\nabla(\eta_T\theta)\|_{L^2(\UN^2(\{\eta_T\neq 1\}))}+(1+H)\|\curl(\eta_T\Vz)\|_{L^2(\UN^2(\{\eta_T\neq 1\}))}\bigr)\\
&\lesssim \|\Vw_T\|_{\VH(\curl)}(1+H)\,\|\Vw\|_{\VH(\curl, \UN^5(\{\eta_T\neq 1\}))}.
\end{align*}
Combining the estimates for $A_T$ and $B_T$
and observing that $\{\eta_T\neq 1\} =\UN^{m+2}(T)$, we deduce
\begin{align*}
\alpha\|\Vw\|_{\VH(\curl)}^2&\lesssim \sum_{T\in\CT_H}\|\Vw_T\|_{\VH(\curl)}\,\|\Vw\|_{\VH(\curl, \UN^{m+7}(T))}
\lesssim \sqrt{C_{\ol, m}}\, \|\Vw\|_{\VH(\curl)}\sqrt{\sum_{T\in\CT_H}\|\Vw_T\|_{\VH(\curl)}^2}.
\end{align*}
Combination with estimate \eqref{eq:errorcorrectorlocal} finishes the proof of \eqref{eq:errorcorrector}. Finally, estimate \eqref{eq:errorcorrector-2} follows with
\begin{align*}
\|\Vw\|_{\VH(\Div)^{\prime}} \leq C H \|\Vw\|_{\VH(\curl)}.
\end{align*}
\end{proof}
\textbf{Changes for the fully discrete localized method.}\hspace{2pt}
Let us briefly consider the fully-discrete setting described in Section \ref{subsec:discreteLOD}. Here we note that, up to a modification of the constants, Theorem \ref{thm:errorcorrectors} also holds for the difference
$(\mathcal{G}_h - \mathcal{G}_{h,m})(\mathbf{F})$, where $\mathcal{G}_h(\mathbf{F})$ is the Galerkin approximation of $\mathcal{G}(\mathbf{F})$ in the discrete space $\VW_h:=\{\Vv_h\in\mathring{\CN}(\CT_h)|\pi_H^E(\Vv_h)=0\}$ and where $\mathcal{G}_{h,m}(\mathbf{F})$ is defined analogously to $\mathcal{G}_{m}(\mathbf{F})$ but where $\VW_h(\Omega_T):=\{ \Vw_h \in \VW_h| \hspace{3pt} \Vw_h \equiv 0 \mbox{ in } \Omega \setminus \Omega_T \}$ replaces $\VW(\Omega_T)$ in the local problems.
Again, the central observation is a decay result similar to Proposition \ref{prop:decaycorrector1}, but now for $\mathcal{G}_{h}(\mathbf{F}_T)$.
A few modifications to the proof have to be made, though: The product of the cut-off function $\eta$ and the regular decomposition $\Vz+\nabla\theta$ does not lie in $\mathring{\CN}(\CT_h)$.
Therefore, an additional interpolation operator into $\mathring{\CN}(\CT_H)$ has to be applied.
Here it is tempting to just use the nodal interpolation operator and its stability on piece-wise polynomials, since $\eta \hspace{2pt} \mathcal{G}_{h}(\mathbf{F}_T)$ is a piece-wise (quadratic) polynomial. However, the regular decomposition employed is no longer piece-wise polynomial and we hence have to use the Falk-Winther operator $\pi_h^E$ onto the fine space $\mathring{\CN}(\CT_h)$ here.
This means that we have the following modified terms in the proof of Proposition \ref{prop:decaycorrector1}:
\begin{align*}
\tilde{M}_1&:=\Bigl|\bigl(\mu\curl\Vphi, \curl(\id-\pi_H^E)\pi_h^E(\eta\Vz)\bigr)_{L^2(\Omega)}
&&\hspace{-17pt}+\enspace
\bigl(\kappa \Vphi, (\id-\pi_H^E)\pi_h^E(\eta\Vz+\nabla(\eta\theta))\bigr)_{L^2(\Omega)}\Bigr|,
\\
\tilde{M}_2&:=\Bigl|\bigl(\mu\curl\Vphi, \curl\pi_H^ E\pi_h^E(\eta\Vz)\bigr)_{L^2(\Omega)}\Bigr|,
&&
\tilde{M}_3:=\Bigl|\bigl(\kappa \Vphi,\pi_H^E\pi_h^E(\eta\Vz+\nabla(\eta\theta))\bigr)_{L^2(\Omega)}\Bigr|.
\end{align*}
They can be treated similarly to
$M_1$, $M_2$ and $M_3$, using in addition the stability of $\pi_h^E$.
Note that the additional interpolation operator $\pi_h^E$ will enlarge the patches slightly, so that we should define $\eta$ via
\begin{align*}
\eta=0\qquad \text{in}\quad \UN^{m-8}(T)\qquad\qquad\qquad\eta=1\qquad\text{in}\quad \Omega\setminus \UN^{m-7}(T).
\end{align*}
The terms $M_4$ and $M_5$ remain unchanged, and we moreover get the terms
\begin{align*}
\tilde{M}_6:=\Bigl|\bigl(\mu\curl\Vphi, \curl(\id-\pi_h^E)(\eta\Vz)\bigr)_{L^2(\Omega)}\Bigr|,
\qquad
\tilde{M}_7:=\Bigl|\bigl(\kappa \Vphi, (\id-\pi_h^E)(\eta\Vz+\nabla(\eta\theta))\bigr)_{L^2(\Omega)}\Bigr|.
\end{align*}
These can be estimated simply using the stability of $\pi_h^E$, the properties of $\eta$ and the regular decomposition \eqref{eq:regulardecomp}.
\section{Falk--Winther interpolation}
\label{sec:intpolimpl}
This section briefly describes the construction of the
bounded local cochain projection of \cite{FalkWinther2014} for the
present case of $\VH(\curl)$-problems in three space dimensions.
The two-dimensional case is thoroughly described in the gentle
introductory paper \cite{FalkWinther2015}.
After giving the definition of the operator, we describe how it
can be represented as a matrix. This is important because the
interpolation operator is part of the algorithm and not a mere
theoretical tool, and it is therefore
required in a practical realization.
\subsection{Definition of the operator}
Let $\Delta_0$ denote the set of vertices of $\CT_H$ and
let $\mathring{\Delta}_0:=\Delta_0\cap\Omega$ denote the
interior vertices.
Let $\Delta_1$ denote the set of edges and
let $\mathring{\Delta}_1$ denote the interior edges, i.e.,
the elements of $\Delta_1$ that
are not a subset of $\partial\Omega$.
The space $\mathring{\CN}(\CT_H)$ is spanned by the well-known edge-oriented
basis $(\Vpsi_E)_{E\in\mathring{\Delta}_1}$ defined for any $E\in\mathring{\Delta}_1$
through the property
\begin{equation*}
\fint_E \Vpsi_E\cdot \Vt_E\,ds = 1
\quad\text{and}\quad
\fint_{E'} \Vpsi_E\cdot \Vt_E\,ds = 0
\quad\text{for all }E'\in\mathring{\Delta}_1\setminus\{E\}.
\end{equation*}
Here $\Vt_E$ denotes the unit tangent to the edge $E$ with a globally
fixed sign.
Any vertex $z\in\Delta_0$ possesses a nodal patch (sometimes also called
macroelement)
\begin{equation*}
\omega_z:=\Int\Big(\bigcup\{T\in\CT_H : z\in T\}\Big).
\end{equation*}
For any edge $E\in\Delta_1$ shared by two vertices
$z_1,z_2\in\Delta_0$ such that
$E=\operatorname{conv}\{z_1,z_2\}$, the extended edge patch
reads
\begin{equation*}
\omega_E^{\mathit{ext}} := \omega_{z_1}\cup\omega_{z_2}.
\end{equation*}
The restriction of the mesh $\CT_H$ to $\omega_E^{\mathit{ext}}$
is denoted by
$\CT_H(\omega_E^{\mathit{ext}})$.
Let $\CS^1(\CT_H(\omega_E^{\mathit{ext}}))$ denote the (scalar-valued) first-order
Lagrange finite element space with respect to
$\CT_H(\omega_E^{\mathit{ext}})$ and let
$\CN(\CT_H(\omega_E^{\mathit{ext}}))$ denote the
lowest-order N\'ed\'elec finite element space over
$\CT_H(\omega_E^{\mathit{ext}})$.
The operator
\[
Q^1_E:
\VH(\curl, \omega_E^{\mathit{ext}})
\to
\CN(\CT_H(\omega_E^{\mathit{ext}}))
\]
is defined for any $\Vu\in \VH(\curl, \omega_E^{\mathit{ext}})$
via
\begin{equation*}
\begin{aligned}
(\Vu-Q^1_E \Vu, \nabla \tau) &= 0 \quad
&&\text{for all } \tau\in \CS^1(\CT_H(\omega_E^{\mathit{ext}}))
\\
(\curl (\Vu-Q^1_E \Vu),\curl \Vv) &=0
&&\text{for all } \Vv\in \CN(\CT_H(\omega_E^{\mathit{ext}})).
\end{aligned}
\end{equation*}
Given any vertex $y\in\Delta_0$, define the piecewise constant function
$z^0_y$ by
\begin{equation*}
z^0_y = \begin{cases}
(\operatorname{meas}(\omega_y))^{-1} &\text{in } \omega_y \\
0 &\text{in } \Omega\setminus\omega_y
\end{cases}
\end{equation*}
Given any edge $E\in\Delta_1$ shared by vertices
$y_1,y_2\in\Delta_0$ such that $E=\operatorname{conv}\{y_1,y_2\}$,
define
\begin{equation*}
(\delta z^0)_E :=
z^0_{y_2} - z^0_{y_1} .
\end{equation*}
Let $E\in\Delta_1$ and denote by
$\mathring{\mathcal{RT}}(\CT_H(\omega_E^{\mathit{ext}}))$ the lowest-order
Raviart--Thomas space with respect to
$\CT_H(\omega_E^{\mathit{ext}})$ with vanishing normal trace on
the boundary $\partial (\omega_E^{\mathit{ext}})$.
For any $E\in\Delta_1$, let
the field
$\Vz_E^1\in\mathring{\mathcal{RT}}(\CT_H(\omega_E^{\mathit{ext}}))$
be defined by
\begin{equation*}
\begin{aligned}
\Div \Vz_E^1 &=-(\delta z^0)_E \quad &&
\\
(\Vz_E^1,\curl\Vtau) &= 0
&&\text{for all }
\Vtau\in\mathring{\CN}(\CT_H(\omega_E^{\mathit{ext}}))
\end{aligned}
\end{equation*}
where $\mathring{\CN}(\CT_H(\omega_E^{\mathit{ext}}))$ denotes
the N\'ed\'elec finite element functions over
$\CT_H(\omega_E^{\mathit{ext}})$ with vanishing tangential trace
on the boundary $\partial(\omega_E^{\mathit{ext}})$.
The operator
$M^1:L^2(\Omega;\mathbb{C}^3)\to\mathring{\CN}(\CT_H)$ maps any
$\Vu\in L^2(\Omega;\mathbb{C}^3)$ to
\begin{equation*}
M^1\Vu :=
\sum_{E\in\mathring{\Delta}_1}
(\operatorname{length}(E))^{-1}
\int_{\omega_E^{\mathit{ext}}} \Vu\cdot \Vz_E^1\,dx\, \Vpsi_E.
\end{equation*}
The operator
\[
Q^1_{y,-} : \VH(\curl,\omega_E^{\mathit{ext}})
\to
\CS^1(\CT_H(\omega_E^{\mathit{ext}}))
\]
is the solution operator of a local discrete Neumann problem.
For any $\Vu\in \VH(\curl, \omega_E^{\mathit{ext}})$,
the function
$ Q^1_{y,-} \Vu $ solves
\begin{equation*}
\begin{aligned}
(\Vu-\nabla Q^1_{y,-} \Vu,\nabla v) &= 0
\quad&&\text{for all } v\in \CS^1(\CT_H(\omega_E^{\mathit{ext}}))
\\
\int_{\omega_E^{\mathit{ext}}} Q^1_{y,-} \Vu\,dx & = 0. &&
\end{aligned}
\end{equation*}
Define now the operator
$S^1:\VH_0(\curl,\Omega)\to \mathring{\CN}(\CT_H)$
via
\begin{equation}\label{e:S1def1}
S^1 \Vu :=
M^1 \Vu +
\sum_{y\in\mathring{\Delta}_0}
(Q^1_{y,-}\Vu)(y)\nabla \lambda_y .
\end{equation}
The second sum on the right-hand side can be rewritten in terms
of the basis functions $\Vpsi_E$.
The inclusion
$\nabla \mathring{\CS}^1(\CT_H)\subseteq \mathring{\CN}(\CT_H)$
follows from the principles of finite element exterior calculus
\cite{ArnoldFalkWinther2006,ArnoldFalkWinther2010}.
Given an interior vertex
$z\in\mathring{\Delta}_0$, the expansion in terms of the basis
$(\Vpsi_E)_{E\in\mathring{\Delta}_1}$ reads
\begin{equation*}
\nabla\lambda_z
=
\sum_{E\in\mathring{\Delta}_1} \fint_E \nabla\lambda_z\cdot \Vt_E\,ds\,\Vpsi_E
=
\sum_{E\in\Delta_1(z)}
\frac{\operatorname{sign}(\Vt_E\cdot\nabla\lambda_z)}{\operatorname{length}(E)}
\Vpsi_E
\end{equation*}
where $\Delta_1(z)\subseteq\mathring{\Delta}_1$
is the set of all edges that contain $z$.
Thus, $S^1$ from \eqref{e:S1def1} can be rewritten as
\begin{equation}\label{e:S1def2}
S^1 \Vu :=
M^1 \Vu +
\sum_{E\in\mathring{\Delta}_1}
(\operatorname{length}(E))^{-1}
\big((Q^1_{y_2(E),-}\Vu)(y_2(E)) - (Q^1_{y_1(E),-}\Vu)(y_1(E))\big)
\Vpsi_E
\end{equation}
where $y_1(E)$ and $y_2(E)$ denote the endpoints of $E$
(with the orientation convention
$\Vt_E = (y_2(E)-y_1(E))/\operatorname{length}(E)$).
Finally, the Falk-Winther interpolation operator $\pi_H^E:\VH_0(\curl, \Omega)\to\mathring{\CN}(\CT_H)$ is defined as
\begin{equation}\label{e:R1def}
\pi_H^E \Vu
:=
S^1 \Vu
+
\sum_{E\in\mathring{\Delta}_1}
\fint_E
\big((\id-S^1)Q^1_E \Vu\big)\cdot \Vt_E\,ds
\,\Vpsi_E .
\end{equation}
\subsection{Algorithmic aspects}
Given a mesh $\CT_H$ and a refinement $\CT_h$, the linear
projection
$\pi_H : \mathring{\CN}(\CT_h)\to \mathring{\CN}(\CT_H)$ can be represented by a matrix
$\mathsf{P}\in\mathbb R^{\dim \mathring{\CN}(\CT_H)\times\dim \mathring{\CN}(\CT_h)}$.
This subsection briefly sketches the assembling of that matrix.
The procedure involves the solution of local discrete problems
on the macroelements. It is important to note that these problems
are of small size because the mesh $\CT_h$ is a refinement
of $\CT_H$.
Given an interior edge $E\in\mathring{\Delta}_1^H$
of $\CT_H$ and an interior edge
$e\in\mathring{\Delta}_1^h$ of $\CT_h$, the interpolation $\pi_H \Vpsi_e$
has an expansion
\begin{equation*}
\pi_H \Vpsi_e= \sum_{E'\in\mathring{\Delta}_1^H} c_{E'} \Vpsi_{E'}
\end{equation*}
for real coefficients $(c_{E'})_{E'\in\mathring{\Delta}_1^H}$.
The coefficient $c_E$ is zero whenever $e$ is not contained in the
closure of the extended edge patch $\overline{\omega}_E^{\mathit{ext}}$.
The assembling can therefore be organized in a loop over all interior
edges in $\mathring{\Delta}_1^H$.
Given a global numbering of the edges in
$\mathring{\Delta}_1^H$,
each edge $E\in\mathring{\Delta}_1^H$
is equipped with a unique index
$I_H(E)\in\{1,\dots,\operatorname{card}(\mathring{\Delta}_1^H)\}$.
Similarly, the numbering of edges in $\mathring{\Delta}_1^h$ is denoted
by $I_h$.
The matrix $\mathsf{P}=\mathsf{P_1}+\mathsf{P_2}$ will be composed
as the sum of matrices
$\mathsf{P_1}$, $\mathsf{P_2}$ that represent the two summands on the
right-hand side of \eqref{e:R1def}.
Those will be assembled in loops over the interior edges.
Matrices $\mathsf{P_1}$, $\mathsf{P_2}$ are initialized as
empty sparse matrices.
\subsubsection{Operator $\mathsf{P_1}$}
\noindent
\textbf{for} $E\in\mathring{\Delta}_1^H$ \textbf{do}
Let the interior edges in
$\mathring{\Delta}_1^h$ that lie inside
$\overline{\omega}_E^{\mathit{ext}}$
be denoted with
$\{e_1,e_2,\dots,e_N\}$ for some $N\in\mathbb N$.
The entries $\mathsf{P}_1(I_H(E),[I_h(e_1)\dots I_h(e_N)])$ of the matrix
$\mathsf{P}_1$ are now determined as follows.
Compute $\Vz^1_E \in \mathring{\mathcal{RT}}(\CT_H({\omega}_E^{\mathit{ext}}))$.
The matrix
$\mathsf{M}_E\in\mathbb R^{1\times N}$ defined via
\[
\mathsf{M}_E
:=
(\operatorname{length}(E))^{-1}
\left[
\int_{{\omega}_E^{\mathit{ext}}} \Vz^1_E\cdot\Vpsi_{e_j}\,dx
\right]_{j=1}^N
\]
represents the map of the basis functions on the fine mesh
to the coefficient of $M^1$ contributing to $\Vpsi_E$ on the coarse mesh.
Denote by
$\mathsf{A}_{y_j(E)}$ and
$\mathsf{B}_{y_j(E)}$ ($j=1,2$)
the stiffness and right-hand side matrix
representing the system for the operator $Q_{y_j(E),-}$
\begin{align*}
\mathsf{A}_{y_j(E)}
&:=
\left[
\int_{\omega_{y_j(E)}} \nabla \phi_y \cdot\nabla \phi_z\,dx
\right]_{y,z\in\Delta_0(\CT_H(\omega_{y_j(E)}))},
\\
\mathsf{B}_{y_j(E)}
&:=
\left[
\int_{\omega_{y_j(E)}} \nabla \phi_y \cdot\Vpsi_{e_\ell}\,dx
\right]_{\substack{y\in\Delta_0(\CT_H(\omega_{y_j(E)}))\\ \ell=1,\dots,N}}.
\end{align*}
After enhancing the system
to
$\tilde{\mathsf{A}}_{y_j(E)}$ and
$\tilde{\mathsf{B}}_{y_j(E)}$
(with a Lagrange multiplier accounting for the mean constraint),
it is uniquely solvable.
Set
$\tilde{\mathsf{Q}}_{y_j(E)} =
\tilde{\mathsf{A}}_{y_j(E)}^{-1}\tilde{\mathsf{B}}_{y_j(E)}$
and extract the row corresponding to the vertex $y_j(E)$
\[
\mathsf{Q}_j:=
(\operatorname{length}(E))^{-1}
\tilde{\mathsf{Q}}_{y_j(E)}[y_j(E),:]
\in \mathbb R^{1\times N}.
\]
Set
\[
\mathsf{P}_1(I_H(E),[I_h(e_1)\dots I_h(e_N)])
=
\mathsf{M}_E + \mathsf{Q}_2 -\mathsf{Q}_1 .
\]
\noindent
\textbf{end}
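In a concrete implementation, the loop above fills the matrix $\mathsf{P}_1$ row by row. The following Python sketch shows only the sparse bookkeeping of this loop; all sizes and index sets are hypothetical, and the local rows $\mathsf{M}_E$, $\mathsf{Q}_1$, $\mathsf{Q}_2$ are replaced by random stand-ins, since the actual values come from the local Raviart--Thomas and discrete Neumann solves described above.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

n_coarse_edges = 4   # hypothetical number of interior coarse edges
n_fine_edges = 20    # hypothetical number of interior fine edges
rng = np.random.default_rng(0)

P1 = sp.lil_matrix((n_coarse_edges, n_fine_edges))

for E in range(n_coarse_edges):
    # fine edges inside the extended patch of E (stand-in index set)
    patch = np.arange(5 * E, 5 * E + 5)
    N = len(patch)
    # stand-ins for the locally computed rows M_E, Q_1, Q_2
    M_E = rng.standard_normal(N)
    Q_1 = rng.standard_normal(N)
    Q_2 = rng.standard_normal(N)
    # combine as in the algorithm above and scatter into the global matrix
    vals = M_E + Q_2 - Q_1
    for j, col in enumerate(patch):
        P1[E, col] = vals[j]

P1 = P1.tocsr()  # convert once the assembly loop is finished
\end{verbatim}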
\subsubsection{Operator $\mathsf{P_2}$}
\noindent
\textbf{for} $E\in\mathring{\Delta}_1^H$ \textbf{do}
Denote the matrices
-- where indices $j,k$ run from $1$ to
$\operatorname{card}(\Delta_1(\CT_H({\omega}_E^{\mathit{ext}})))$, $y$ through \linebreak[4]$\Delta_0(\CT_H({\omega}_E^{\mathit{ext}}))$, and $\ell=1,\ldots, N$ --
\begin{equation*}
\mathsf{S}_E
:=
\left[
\int_{{\omega}_E^{\mathit{ext}}}
\curl \Vpsi_{E_j} \cdot\curl\Vpsi_{E_k}\,dx
\right]_{j,k} ,
\qquad
\mathsf{T}_E
:=
\left[
\int_{{\omega}_E^{\mathit{ext}}}
\Vpsi_{E_j} \cdot\nabla\lambda_{y}\,dx
\right]_{j,y}
\end{equation*}
and
\begin{equation*}
\mathsf{F}_E
:=
\left[
\int_{{\omega}_E^{\mathit{ext}}}
\curl \Vpsi_{E_j} \cdot\curl\Vpsi_{e_\ell}\,dx
\right]_{j,\ell} ,
\qquad
\mathsf{G}_E
:=
\left[
\int_{{\omega}_E^{\mathit{ext}}}
\Vpsi_{e_\ell} \cdot\nabla\lambda_{y}\,dx
\right]_{y, \ell} .
\end{equation*}
Solve the saddle-point system
\begin{equation*}
\begin{bmatrix}
\mathsf{S} & \mathsf{T}^* \\ \mathsf{T} & 0
\end{bmatrix}
\begin{bmatrix}
\mathsf{U} \\ \mathsf{V}
\end{bmatrix}
=
\begin{bmatrix}
\mathsf{F} \\ \mathsf{G}
\end{bmatrix} .
\end{equation*}
(This requires an additional one-dimensional gauge condition
because the sum of the test functions $\sum_y\nabla\lambda_y$ equals
zero.)
Assemble the operator $S^1$ (locally) as described in the
previous step and denote this matrix by
$\mathsf{P}_1^{\mathit{loc}}$.
Compute $\mathsf{U}- \mathsf{P}_1^{\mathit{loc}} \mathsf{U}$
and extract the row
$\mathsf{X}$ corresponding to the edge $E$
\[
\mathsf{P_2}(I_H(E),[I_h(e_1)\dots I_h(e_N)])
=
\mathsf{X} .
\]
\noindent
\textbf{end}
\section*{Conclusion}
In this paper, we suggested a procedure for the numerical homogenization of $\VH(\curl)$-elliptic problems.
The exact solution is decomposed into a coarse part, which is a good approximation in $\VH(\Div)^\prime$, and a corrector contribution by using the Falk-Winther interpolation operator.
We showed that this decomposition gives an optimal order approximation in $\VH(\curl)$, independent of the regularity of the exact solution.
Furthermore, the corrector operator can be localized to patches of macro elements, which allows for an efficient computation.
This results in a generalized finite element method in the spirit of the Localized Orthogonal Decomposition which utilizes
the bounded local cochain projection of Falk and Winther
as part of the algorithm.
\section*{Acknowledgments}
Main parts of this paper were written while the authors enjoyed the kind hospitality of the Hausdorff Institute for Mathematics in Bonn.
PH and BV acknowledge financial support by the DFG in the project OH 98/6-1 ``Wave propagation in periodic structures and negative refraction mechanisms''.
DG acknowledges support by the DFG through CRC 1173
``Wave phenomena: analysis and numerics'' and by the
Baden-W\"urttemberg Stiftung
(Eliteprogramm f\"ur Postdocs)
through the project
``Mehrskalenmethoden für Wellenausbreitung in heterogenen Materialien und
Metamaterialien''.
\end{document} |
\begin{document}
\title{Mixedness in Bell-violation vs. Entanglement of Formation }
\author{Sibasish Ghosh\protect\( ^{\%}\protect \)\thanks{
res9603@isical.ac.in
} , Guruprasad Kar\protect\( ^{\%}\protect \)\thanks{
gkar@isical.ac.in
} , Aditi Sen(De)\protect\( ^{\#}\protect \)\thanks{
dhom@boseinst.ernet.in
} and Ujjwal Sen\protect\( ^{\#}\protect \)\protect\( ^{\ddagger }\protect \)}
\maketitle
\lyxaddress{\protect\( ^{\%}\protect \)Physics and Applied Mathematics Unit, Indian Statistical
Institute, 203 B.T. Road, Kolkata 700 035, India }
\lyxaddress{\protect\( ^{\#}\protect \)Department of Physics, Bose Institute, 93/1 A.P.C.
Road, Kolkata 700 009, India}
\begin{abstract}
Recently Munro, Nemoto and White (\emph{The Bell Inequality: A measure of Entanglement?},
quant-ph/0102119) tried to indicate that the reason behind a state \( \rho \)
having a higher amount of entanglement (as quantified by the entanglement of formation)
than a state \( \rho ^{\prime } \), while producing the same amount of Bell-violation,
is that the amount of mixedness (as quantified by the linearised
entropy) in \( \rho \) is higher than that in \( \rho ^{\prime } \). We counter
their argument with examples. We extend these considerations to the von Neumann
entropy. Our results suggest that the reason why an equal amount of Bell-violation
requires different amounts of entanglement cannot, at least, be explained by
mixedness alone.
\end{abstract}
Werner\cite{1} (see also Popescu\cite{2}) first demonstrated the existence
of states which are entangled but do not violate any Bell-type inequality\cite{3,4}.
But there exist classes of states (pure states, mixtures of two Bell states)
which violate a Bell inequality whenever they are entangled\cite{5,6}.
This implies that to produce an equal amount of Bell-violation, some states
need to have more entanglement (with respect to some measure) than others.
It would be interesting to find out what property of the first state requires
it to have more entanglement to produce the same Bell-violation. Recently Munro
\emph{et al.}\cite{7} have tried to indicate that this anomalous property of
the first state is due to its being more \emph{mixed} than the second, where
they took the linearised entropy\cite{8} as the measure of mixedness.
As in \cite{7}, we use the entanglement of formation as our measure of entanglement.
For a state \( \rho \) of two qubits, its entanglement of formation \( EoF(\rho ) \)
is given by\cite{9}
\[
EoF(\rho )=h\left( \frac{1+\sqrt{1-\tau }}{2}\right) \]
with
\[
h(x)=-x\log _{2}x-(1-x)\log _{2}(1-x).\]
The tangle \( \tau \) \cite{10} is given by
\[
\tau (\rho )=[\max \{0,\: \lambda _{1}-\lambda _{2}-\lambda _{3}-\lambda _{4}\}]^{2},\]
the \( \lambda _{i} \)'s being the square roots of the eigenvalues, in decreasing order,
of \( \rho \widetilde{\rho } \), where
\[
\widetilde{\rho }=(\sigma _{y}\otimes \sigma _{y})\rho ^{*}(\sigma _{y}\otimes \sigma _{y}),\]
the complex conjugation being taken in the standard product basis \( \left| 00\right\rangle \),
\( \left| 01\right\rangle \), \( \left| 10\right\rangle \), \( \left| 11\right\rangle \)
of two qubits. Note that EoF is monotonically increasing ranging from \( 0 \)
to \( 1 \) as \( \tau \) increases from \( 0 \) to \( 1 \) and hence, like
Munro \emph{et al.}\cite{7}, we take \( \tau \) as our measure of entanglement.
The maximum amount of Bell-violation (\( B \)) of a state \( \rho \) of two
qubits is given by\cite{6}
\[
B(\rho )=2\sqrt{M(\rho )}\]
where \( M(\rho ) \) is the sum of the two largest eigenvalues of \( T_{\rho }T^{\dagger }_{\rho } \),
\( T_{\rho } \) being the \( 3\times 3 \) matrix whose \( (m,n) \)-element
is
\[
t_{mn}=tr(\rho \sigma _{n}\otimes \sigma _{m}).\]
The \( \sigma \)'s are the Pauli matrices.
The linearised entropy \cite{8}
\[
S_{L}(\rho )=\frac{4}{3}(1-tr(\rho ^{2}))\]
is taken as the measure of mixedness.
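All three quantities are straightforward to evaluate numerically. The following Python sketch (using numpy; the helper names are ours and not part of \cite{7}) computes \( \tau \), \( B \) and \( S_{L} \) for an arbitrary two-qubit density matrix and will be reused in the illustrations below.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
yy = np.kron(sy, sy)   # sigma_y tensor sigma_y (spin flip)

def tangle(rho):
    # lambda_i: square roots of the eigenvalues of rho*rho_tilde, decreasing
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]) ** 2

def bell_violation(rho):
    # B = 2 sqrt(M), M = sum of the two largest eigenvalues of T T^dagger,
    # with t_mn = tr(rho sigma_n tensor sigma_m)
    paulis = [sx, sy, sz]
    T = np.array([[np.trace(rho @ np.kron(paulis[n], paulis[m])).real
                   for n in range(3)] for m in range(3)])
    ev = np.sort(np.linalg.eigvalsh(T @ T.T))
    return 2.0 * np.sqrt(ev[-1] + ev[-2])

def lin_entropy(rho):
    # S_L = 4/3 (1 - tr rho^2)
    return 4.0 / 3.0 * (1.0 - np.trace(rho @ rho).real)
\end{verbatim}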
Munro \emph{et al.}\cite{7} proposed that, given two two-qubit states \( \rho \)
and \( \rho ^{\prime } \) with
\[
B(\rho )=B(\rho ^{\prime }),\]
the condition
\[
\tau (\rho )>\tau (\rho ^{\prime })\]
would imply
\[
S_{L}(\rho )>S_{L}(\rho ^{\prime }).\]
To support this proposal, it was shown that it holds for any combination of
states from the following three classes of states:
(1) the class of all pure states
\[
\rho _{pure}=P[a\left| 00\right\rangle +b\left| 11\right\rangle ]\]
with \( a,\: b\geq 0 \), and \( a^{2}+b^{2}=1, \)
(2) the class of all Werner states\cite{1}
\[
\rho _{werner}=xP[\Phi ^{+}]+\frac{1-x}{4}I_{2}\otimes I_{2}\]
with \( 0\leq x\leq 1 \) and \( \Phi ^{+}=\frac{1}{\sqrt{2}}(\left| 00\right\rangle +\left| 11\right\rangle ) \),
and
(3) the class of all maximally entangled mixed states\cite{11}
\[
\rho _{mems}=\frac{1}{2}(2g(\gamma )+\gamma )P[\Phi ^{+}]+\frac{1}{2}(2g(\gamma )-\gamma )P[\Phi ^{-}]+(1-2g(\gamma ))P[\left| 01\right\rangle ]\]
with \( g(\gamma )=1/3 \) for \( 0<\gamma <2/3 \) and \( g(\gamma )=\gamma /2 \)
for \( 2/3\leq \gamma \leq 1 \), and \( \Phi ^{\pm }=\frac{1}{\sqrt{2}}(\left| 00\right\rangle \pm \left| 11\right\rangle ) \).
However, consider the class of all mixtures of two Bell states
\[
\rho _{2}=wP[\Phi ^{+}]+(1-w)P[\Phi ^{-}],\]
with \( 0<w<1 \). \( \rho _{2} \) is entangled whenever \( w\neq \frac{1}{2} \),
and for that entire region, \( \rho _{2} \) is Bell-violating\cite{6}. For
this class it is easy to show that
\[
B=2\sqrt{1+\tau }\]
But the corresponding curve for pure states \( \rho _{pure} \) is also given
by\cite{7}
\[
B=2\sqrt{1+\tau }\]
We see that for any fixed Bell-violation, the corresponding \( \rho _{2} \)
has its tangle equal to that for the corresponding pure state. But the mixedness
of \( \rho _{2} \) is obviously \emph{larger} than that of the pure state (as
the mixedness is always zero for pure states).
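This first counterexample can be checked directly from the definitions given above. The following short Python snippet (ours, not part of the original argument; the weight \( w=0.8 \) is an arbitrary illustrative choice) computes \( \tau \), \( B \) and \( S_{L} \) from the density matrices and confirms that the two-Bell-state mixture and the pure state of equal tangle give the same Bell-violation while the mixture has the larger linearised entropy.
\begin{verbatim}
import numpy as np

def tangle(rho):
    # square roots of the eigenvalues of rho*rho_tilde, in decreasing order
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    rho_tilde = syy @ rho.conj() @ syy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])**2

def bell_violation(rho):
    # B = 2 sqrt(M), M = sum of the two largest eigenvalues of T T^dagger
    s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
    T = np.array([[np.real(np.trace(rho @ np.kron(s[n], s[m])))
                   for n in range(3)] for m in range(3)])
    ev = np.sort(np.linalg.eigvalsh(T @ T.T))[::-1]
    return 2*np.sqrt(ev[0] + ev[1])

def lin_entropy(rho):
    return 4/3*(1 - np.real(np.trace(rho @ rho)))

phi_p = np.array([1, 0, 0, 1])/np.sqrt(2)   # |Phi+>
phi_m = np.array([1, 0, 0, -1])/np.sqrt(2)  # |Phi->

w = 0.8                                     # assumed example weight, w != 1/2
rho2 = w*np.outer(phi_p, phi_p) + (1 - w)*np.outer(phi_m, phi_m)

# pure state a|00> + b|11> with the same tangle, tau = 4 a^2 b^2
tau = tangle(rho2)
a = np.sqrt((1 + np.sqrt(1 - tau))/2)
psi = np.array([a, 0, 0, np.sqrt(1 - a**2)])
rho_pure = np.outer(psi, psi)

print(bell_violation(rho2), bell_violation(rho_pure))  # equal, = 2 sqrt(1+tau)
print(lin_entropy(rho2), lin_entropy(rho_pure))        # mixture larger, pure state 0
\end{verbatim}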
Next consider the following class of mixtures of \emph{three} Bell states
\[
\rho _{3}=w_{1}P[\Phi ^{+}]+w_{2}P[\Phi ^{-}]+w_{3}P[\Psi ^{+}]\]
with \( 1\geq w_{1}\geq w_{2}\geq w_{3}\geq 0 \), \( \sum _{i}w_{i}=1 \) and
\( \Psi ^{+}=\frac{1}{\sqrt{2}}(\left| 01\right\rangle +\left| 10\right\rangle ) \).
We take \( w_{1}>\frac{1}{2} \) so that \( \rho _{3} \) is entangled \cite{12}.
For \( \rho _{3} \), we have (as \( w_{1}\geq w_{2}\geq w_{3} \))
\[
B(\rho _{3})=2\sqrt{2-4w_{2}(1-w_{2})-4w_{3}(1-w_{3})},\]
\[
\tau (\rho _{3})=1-4w_{1}(1-w_{1}),\]
\[
S_{L}(\rho _{3})=\frac{4}{3}\{w_{1}(1-w_{1})+w_{2}(1-w_{2})+w_{3}(1-w_{3})\}.\]
Let
\[
\rho ^{\prime }_{3}=w^{\prime }_{1}P[\Phi ^{+}]+w_{2}^{\prime }P[\Phi ^{-}]+w^{\prime }_{3}P[\Psi ^{+}]\]
with \( 1\geq w^{\prime }_{1}\geq w^{\prime }_{2}\geq w^{\prime }_{3}\geq 0 \),
\( \sum _{i}w^{\prime }_{i}=1 \), \( w^{\prime }_{1}>\frac{1}{2} \) be such
that
\[
B(\rho _{3})=B(\rho _{3}^{\prime })\]
which gives
\[
w_{2}(1-w_{2})+w_{3}(1-w_{3})=w_{2}^{\prime }(1-w^{\prime }_{2})+w^{\prime }_{3}(1-w^{\prime }_{3}).\]
Now if
\[
\tau (\rho _{3})>\tau (\rho _{3}^{\prime }),\]
we have
\[
w_{1}(1-w_{1})<w^{\prime }_{1}(1-w^{\prime }_{1})\]
so that
\[
w_{1}(1-w_{1})+w_{2}(1-w_{2})+w_{3}(1-w_{3})<w^{\prime }_{1}(1-w^{\prime }_{1})+w^{\prime }_{2}(1-w^{\prime }_{2})+w_{3}^{\prime }(1-w_{3}^{\prime })\]
that is
\[
S_{L}(\rho _{3})<S_{L}(\rho _{3}^{\prime }).\]
Thus for a fixed Bell-violation, the order of \( S_{L} \) for \( \rho _{3} \)
and \( \rho _{3}^{\prime } \) is \emph{always} reversed with respect to the
order of their \( \tau \)'s. That is, the indication of \cite{7}, referred
to earlier, is \emph{always} violated for any two states from the class of mixtures
of \emph{three} Bell states.
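A concrete numerical instance of this observation, using the closed-form expressions above, is given by the following snippet (ours; the weights are our own illustrative choice, constructed so that the two Bell-violations coincide and exceed 2).
\begin{verbatim}
import numpy as np

B   = lambda w: 2*np.sqrt(2 - 4*w[1]*(1 - w[1]) - 4*w[2]*(1 - w[2]))
tau = lambda w: 1 - 4*w[0]*(1 - w[0])
SL  = lambda w: 4/3*sum(x*(1 - x) for x in w)

w = (0.80, 0.15, 0.05)   # rho_3: w1 > 1/2 (entangled) and B > 2 (Bell-violating)

# rho_3': fix w1' = 0.78 and solve w2'(1-w2') + w3'(1-w3') = w2(1-w2) + w3(1-w3)
# together with w2' + w3' = 1 - w1', so that B(rho_3) = B(rho_3')
w1p = 0.78
s = 1 - w1p
target = w[1]*(1 - w[1]) + w[2]*(1 - w[2])
prod = (s**2 - (s - target))/2            # w2'*w3'
w2p = (s + np.sqrt(s**2 - 4*prod))/2
wp = (w1p, w2p, s - w2p)

print(B(w), B(wp))      # equal Bell-violation (about 2.28)
print(tau(w), tau(wp))  # tau(rho_3) > tau(rho_3')
print(SL(w), SL(wp))    # nevertheless S_L(rho_3) < S_L(rho_3')
\end{verbatim}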
One might now suspect that if the \emph{entanglement of formation of two states is
equal}, this could imply some ordering between the amounts of Bell-violation and mixedness
of the two states. But even that is not true.
For our first example, if
\[
\tau (\rho _{2})=\tau (\rho _{pure})\]
then
\[
B(\rho _{2})=B(\rho _{pure}),\]
but
\[
S_{L}(\rho _{2})>S_{L}(\rho _{pure}).\]
On the other hand for our second example, if
\[
\tau (\rho _{3})=\tau (\rho ^{\prime }_{3})\]
then
\[
B(\rho _{3})>B(\rho ^{\prime }_{3})\]
implies
\[
S_{L}(\rho _{3})<S_{L}(\rho ^{\prime }_{3}).\]
In Ref.\cite{7}, the linearised entropy was the only measure of mixedness that
was considered. But the von Neumann entropy\cite{13}
\[
S(\rho )=-tr(\rho \log _{4}\rho ),\]
of a state \( \rho \) of two qubits, is a more physical measure of mixedness
than the linearised entropy. We have taken the logarithm to the base \( 4 \)
to normalise the von Neumann entropy of the maximally mixed state \( \frac{1}{2}I_{2}\otimes \frac{1}{2}I_{2} \)
to unity as it is for the linearised entropy. One may now feel that the conjecture
under discussion may turn out to be true if we change our measure of mixedness
from linearised entropy to von Neumann entropy.
But both the von Neumann entropy and the linearised entropy are concave functions,
attaining their maximum for the same state \( \frac{1}{2}I_{2}\otimes \frac{1}{2}I_{2} \),
and each of them is symmetric about the maximum. Thus
\[
S_{L}(\rho )>S_{L}(\rho ^{\prime })\]
would imply
\[
S(\rho )>S(\rho ^{\prime })\]
and vice versa. Thus all our considerations with the linearised entropy as the measure
of mixedness would carry over to von Neumann entropy as the measure of mixedness.
Our results emphasize that the reason why an equal amount of Bell-violation
requires different amounts of entanglement cannot, at least, be explained by
mixedness alone.
We thank Anirban Roy and Debasis Sarkar for helpful discussions. We acknowledge
Frank Verstraete for encouraging us to carry over our considerations to the
von Neumann entropy. A.S. and U.S. thank Dipankar Home for encouragement, and
U.S. acknowledges partial support by the Council of Scientific and Industrial
Research, Government of India, New Delhi.
\end{document}
\begin{document}
\title[Matter-screened
Casimir force and Casimir-Polder force]{Matter-screened
Casimir force and Casimir-Polder force in planar structures}
\author{Christian Raabe and Dirk-Gunnar Welsch}
\address{Theoretisch-Physikalisches Institut,
Friedrich-Schiller-Universit\"at Jena, Max-Wien-Platz 1, D-07743 Jena,
Germany}
\ead{C.Raabe@tpi.uni-jena.de, D.-G.Welsch@tpi.uni-jena.de}
\date{\today}
\begin{abstract}
Using a recently developed theory of the Casimir
force (Raabe C and Welsch D-G 2005 \emph{Phys. Rev.} A
{\bfseries 71} 013814),
we calculate the force that acts on a plate in front
of a planar wall and the force that acts on the plate
in the case where the plate is part of matter that fills
the space in front of the wall. We show that in the
limit of a dielectric plate whose permittivity is close to unity,
the force obtained in the former case reduces to the
ordinary, i.e., unscreened Casimir-Polder force acting
on isolated atoms. In the latter case, the theory
yields the Casimir-Polder force that is screened
by the surrounding matter.
\\[1ex]
{\bfseries Keywords:}
Casimir force, Casimir-Polder force, QED vacuum effects,
screening effect
\end{abstract}
\maketitle
\section{Introduction}
\label{sec1}
From the point of view of statistical physics, the
classical electromagnetic vacuum
can be characterized by the condition that
all moments of the electric and induction
fields vanish identically, which implies the
absence of any interaction with matter.
In quantum electrodynamics, this definition
is precluded by the non-commutativity
of canonically conjugate field quantities;
thus non-vanishing moments must inevitably occur.
The quantum electromagnetic vacuum can be
merely characterized as the state in which
all normally ordered field moments
vanish identically. Clearly, the anti-normally
(or otherwise non-normally) ordered
field moments then cannot do so, due to
virtual photon creation and destruction
-- a signature of the noise of the quantum vacuum.
Since the electromagnetic vacuum cannot be switched
off, its interaction with atomic systems cannot
be switched off either, thereby giving rise to
a number of observable effects.
Both virtual and real photons can be involved in the
interaction. Whereas the interaction of ground-state
atoms with the electromagnetic vacuum takes place
via virtual photon creation and destruction,
the creation of real photons always requires
excited atoms.
A typical example of the
interaction via virtual photons
is the attractive van der Waals force
between two unpolarized ground-state atoms,
which can be regarded as the force between
electric dipoles that are induced by the
fluctuating vacuum field. In the non-retarded
(i.e., short-distance) limit,
the potential associated with the force
was
first calculated by London
\cite{LondonF001930,LondonF001937}. The theory
was later extended by Casimir and Polder
\cite{CasimirHBG021948} to allow for larger separations,
where retardation effects cannot be disregarded.
Forces which are mediated by the electromagnetic
vacuum are not only observed on a microscopic level
but also on macroscopic levels. Typical examples
are the force that an (unpolarized) atom experiences
in the presence of macroscopic (unpolarized) bodies
-- referred to as Casimir-Polder (CP) force
in the following -- or the Casimir force
between macroscopic (unpolarized) bodies
(for a review, see,
for example,
\cite{MilonniQuantumVacuum}).
Since macroscopic bodies consist of a huge number of
atoms, both the CP force and the Casimir force
can be regarded as macroscopic manifestations of
microscopic van der Waals forces, and both types of forces
are intimately related to each other.
Clearly, they cannot be obtained, in general, from
a simple superposition of two-atom van der
Waals forces because such a procedure would
completely ignore
the interaction between the constituent atoms of the
bodies, and thus also their collective influence on the
structure of the body-assisted electromagnetic field
\cite{LifshitzEM001955}.
The aim of the present paper is to study
this problem in more detail, with
special emphasis on the CP force in
planar structures.
In principle, it is certainly possible to calculate
CP and Casimir forces within the framework of
microscopic quantum electrodynamics, by solving
the respective many-particle problem in some approximation.
Alternatively, one can start from a macroscopic description
of the bodies in terms of boundary conditions
or, more generally, in terms of polarization and magnetization
fields together with (phenomenologically introduced)
constitutive relations.
The latter, very powerful
approach will be used throughout this paper.
To be more specific, we will apply the
theory recently developed in
\cite{RaabeC012005},
which renders it possible not only to calculate the
Casimir force that acts on bodies separated by
empty space, but also the force which acts on bodies
whose interspace is filled with matter.
As we will see, the formula for the Casimir force
obtained in this way contains,
as a special case, the well-known formula for the CP force acting on
isolated atoms. Moreover, it can also be used to calculate the
CP force acting on atoms that are constituents of matter,
where the neighbouring atoms give rise to a screening effect
that diminishes the force.
Throughout the paper systems that are at rest are
considered, which implies that the electromagnetic
vacuum forces must be thought of as being
balanced by some other forces.
The paper is organized as follows.
After a
review
in section \ref{sec2} of
the CP force, the theory of the Casimir force as
developed in
\cite{RaabeC012005} is outlined
in section \ref{sec3}, and the Casimir stress in
planar structures is given.
Relations between the Casimir force and the unscreened as well as
the screened CP force are studied in section \ref{sec4}, and the
results are discussed and summarized
in section \ref{sec5}.
\section{Casimir-Polder force}
\label{sec2}
Provided that the broadening of the atomic levels can be
neglected, the CP force is conservative, i.e., expressible as the
(negative) gradient of a potential -- the CP potential.
While this approximation
may be invalid for atoms prepared in an excited state, it is
well justified for ground-state atoms,
in which case the CP potential
can be written in the form of
\begin{equation}
\label{L6}
V^\mathrm{(at)}(\mathbf{r})=\frac{\hbar\mu_{0}}{2\pi}
\int_{0}^{\infty}\mathrm{d}\xi\,
\xi^2 \alpha(i\xi) \mathrm{Tr\,}\tensor{G}^\mathrm{(S)}
(\mathbf{r},\mathbf{r},i\xi)
\end{equation}
(for derivations, see,
for example,
\cite{McLachlanAD001963,AgarwalGS001975,
WylieJM001984,HenkelC002002,BuhmannSY032004}).
Here, $\mathbf{r}$ is the position of the atom,
$\alpha(i\xi)$ is its (ground-state)
polarizability and $\tensor{G}^\mathrm{(S)}(\mathbf{r,r'},i\xi)$
is the scattering part of the classical retarded Green tensor
$\tensor{G}(\mathbf{r,r'},i\xi)$
on the imaginary frequency axis.
Note that the Green tensor takes the presence of arbitrary (locally
responding) magnetodielectric bodies
into account within the framework of macroscopic
linear electrodynamics in causal media.
Only the scattering part
of the Green tensor figures in equation (\ref{L6}); its bulk part,
despite being divergent in the
coincidence limit $\mathbf{r'}$ $\!\to$ $\!\mathbf{r}$,
does not contribute to the force on the atom
and can be thus discarded from equation (\ref{L6}).
The potential (\ref{L6}), which is valid to first order
in $\alpha(i\xi)$, can be derived
using the Green tensor scheme of electromagnetic field
quantization \cite{KnoellL002001}
and treating the atom--field interaction in
the electric dipole approximation and the lowest
(non-vanishing) order of perturbation theory.
In particular, for a ground-state atom
in front of a planar magnetodielectric wall, equation
(\ref{L6}) leads to (see,
for example,
\cite{McLachlanAD001963,AgarwalGS001975,WylieJM001984,KryszewskiS001993,BuhmannSY042005})
\begin{equation}
\label{L38}
V^\mathrm{(at)}(z)
=\frac{\hbar\mu_{0}}{8\pi^2
}
\int_{0}^{\infty}\mathrm{d}\xi\,
\xi^2
\alpha(i\xi)\int_{0}^{\infty}
\mathrm{d} q\,
\frac{q}{\kappa}\,
e^{-2 \kappa z}
\left[
r_{1-}^{s}
-r_{1-}^{p}
\left(1+\frac{2q^2}
{\xi^2/c^2}
\right)
\right]
\end{equation}
($\kappa^2$ $\!=$ $\!\xi^2/c^2$ $\!+$ $\!q^2$), where
the atom is situated at some position $z$ $\!>$ $\!0$, and the
magnetodielectric wall
(which may have
a (1D) internal structure,
for example,
a planarly layered one
)
extends from some negative $z$-value up to $z$ $\!=$ $\!0$.
Note that the effect of the wall is fully
described in terms of the
(generalized) reflection coefficients
$r_{1-}^{s}$ and $r_{1-}^{p}$,
both of which are functions of
the imaginary frequency $i\xi$
and the transverse wave vector projection $q$
($s,p$, polarization indices).
Two comments on equations
(\ref{L6}) and (\ref{L38}) are in order.
First, the neglect of level broadening
might suggest that $\alpha(\omega)$
has poles on the real frequency axis, such as
\begin{equation}
\label{L5a-0}
\alpha(\omega)
\sim
\sum_{k}
\frac{\Omega_{k}^2}{\omega_{k}^2-\omega^2}\,.
\end{equation}
In fact, equation (\ref{L5a-0}) has to be understood as
\begin{equation}
\label{L5a}
\alpha(\omega)
\sim
\lim_{\gamma\to 0+}
\sum_{k}
\frac{\Omega_{k}^2}
{\omega_{k}^2-\omega^2-i\gamma\omega}\,,
\end{equation}
where the limit prescription
$\gamma$ $\!\to$ $\! 0+$ reminds one of
the proper response function properties \cite{LanLif} of $\alpha(\omega)$.
Throughout this paper it will not be necessary to make use of any
particular form of the polarizability. Second, although
the (magnetodielectric) bodies can be quite arbitrary, it
is important that the atom under study is an isolated one,
i.e., equations (\ref{L6}) and (\ref{L38}) do not apply
to atoms in matter.
Needless to say, a convincing treatment of atoms
in matter
must include local-field corrections.
Correspondingly, the reflection
coefficients in equation (\ref{L38}) refer to the
reflection of waves being
incident on the wall from free space.
\section{Casimir force}
\label{sec3}
Let us now
consider
the Casimir force acting on a macroscopic body
in the presence of other bodies.
In the zero-temperature limit, it is just the
ground-state expectation value of the
Lorentz force acting on the charges and currents which constitute
the body on the level of macroscopic electrodynamics, where
the charge density $\hat{\rho}(\mathbf{r})$ is given by
\begin{equation}
\label{L40}
\hat{\rho}(\mathbf{r})=\int_{0}^\infty
\mathrm{d}\omega\,
\fo{\rho}(\mathbf{r},\omega) + \mathrm{H.\,c.}
\end{equation}
and the current density
$\hat{\mathbf{j}}(\mathbf{r})$ accordingly, with
\begin{equation}
\label{L40a}
\fo{\rho}(\mathbf{r},\omega) =
-\varepsilon_{0}
\boldsymbol{\nabla}\cdot\{[\varepsilon(\mathbf{r},\omega)-1]\fo{E}(\mathbf{r},\omega)\}
+ (i\omega)^{-1} \boldsymbol{\nabla}\cdot\fo{j}_\mathrm{N}(\mathbf{r},\omega)
\end{equation}
and
\begin{equation}
\label{L40b}
\fl
\fo{j}(\mathbf{r},\omega)
=
-i\omega \varepsilon_{0}
[\varepsilon(\mathbf{r},\omega)-1]
\fo{E}(\mathbf{r},\omega)
+ \boldsymbol{\nabla}\times \{
\mu_{0}^{-1}
[1-\mu^{-1}(\mathbf{r},\omega)]
\fo{B}(\mathbf{r},\omega)\}
+\fo{j}_\mathrm{N}(\mathbf{r},\omega)
\end{equation}
(for details, see
\cite{RaabeC012005}).
Here, $\fo{j}_{\mathrm{N}}(\mathbf{r},\omega)$
is the current density that acts as a Langevin noise source
in the operator Maxwell equations,
$\varepsilon(\mathbf{r},\omega)$ and
$\mu(\mathbf{r},\omega)$
are the permittivity and permeability,
respectively, and
$\fo{E}(\mathbf{r},\omega)$ and $\fo{B}(\mathbf{r},\omega)$
are the positive-frequency parts of the
electric field and the induction field, respectively,
\begin{equation}
\label{L43}
\fo{E}(\mathbf{r},\omega)
=i\mu_{0}\omega\int \mathrm{d}^3r'\,
\tensor{G}(\mathbf{r,r'},\omega)
\cdot
\fo{j}_{\mathrm{N}}(\mathbf{r'},\omega),
\end{equation}
\begin{equation}
\label{L44}
\fo{B}(\mathbf{r},\omega)
=\mu_{0}\boldsymbol{\nabla}\times \int \mathrm{d}^3r'\,
\tensor{G}(\mathbf{r,r'},\omega)
\cdot
\fo{j}_{\mathrm{N}}(\mathbf{r'},\omega).
\end{equation}
According to
\cite{RaabeC012005},
the Casimir force on a magnetodielectric body of volume $V$
can then be expressed in terms of the Casimir stress as
a surface integral,
\begin{equation}
\label{L44-1}
\mathbf{F}=\int_{\partial V}
\mathrm{d}\mathbf{a}
\cdot \tensor{T}(\mathbf{r,r}),
\end{equation}
where
\begin{equation}
\label{L12}
\tensor{T}(\mathbf{r,r})=
\lim_{\mathbf{r}'\to\mathbf{r}}\left[
\tensor{\theta}(\mathbf{r,r'})
- {\textstyle\frac{1}{2}} \tensor{1}
{\rm Tr}\,\tensor{\theta}(\mathbf{r,r'})
\right],
\end{equation}
\begin{equation}
\label{L13}
\tensor{\theta}(\mathbf{r,r'})
=-\frac{\hbar}{\pi}\int_{0}^{\infty} \mathrm{d}\xi\,
\left[\frac{\xi^2}{c^2}\,\tensor{G}^\mathrm{(S)}(\mathbf{r,r'},i\xi)
+\boldsymbol{\nabla}\times
\tensor{G}^\mathrm{(S)}(\mathbf{r,r'},i\xi)\times\Lnabla{'}
\right]\!,
\end{equation}
with the body under study being taken into account in the
definition of the scattering Green tensor. It is worth noting that
equations (\ref{L12}) and (\ref{L13}) also
apply if the interspace between the bodies is not empty but
filled with magnetodielectric matter
(which has to be homogeneous at least in some small
neighbourhood of the body under consideration). In this case
the Casimir force is expected to be diminished
as compared to the case where the surrounding matter
is absent, because of the screening effect of the matter.
Let us apply equation (\ref{L12})
(
together with equation (\ref{L13})
)
to the $j$th (homogeneous) layer of a planar magnetodielectric
multi-layer structure. For such systems the Green tensor is well
known \cite{TomasM031995,ChewBook}, leading to
($0$ $\!<$ $\!z$ $\!<$ $\!d_j$; $d_j$, thickness of the layer)
\begin{equation}
\label{L201}
T_{zz}(\mathbf{r,r})=
\frac{\hbar}{8\pi^2}\int_{0}^{\infty}\!\!\mathrm{d}\xi\,
\int_{0}^{\infty} \mathrm{d} q\,q\,\frac{\mu_{j}(i\xi)}
{i\beta_{j}(i\xi,q)}\, g_j(z,i\xi,q),
\end{equation}
where
\begin{eqnarray}
\label{L202}
g_j(z,\omega,q)
&=& 2\bigl[\beta_{j}^2 (1+n^{-2}_{j})-q^2
(1-n^{-2}_{j})\bigr]
D_{js}^{-1}r_{j+}^{s}r_{j-}^{s}e^{2i\beta_{j}d_{j}}
\nonumber\\
&& +2\bigl[\beta_{j}^2 (1+n^{-2}_{j})
+q^2 (1-n^{-2}_{j})\bigr]
D_{jp}^{-1}r_{j+}^{p}r_{j-}^{p}e^{2i\beta_{j}d_{j}}
\nonumber\\
&& - (\beta_{j}^2+q^2)(1-n^{-2}_{j})
D_{js}^{-1}\bigl[r_{j-}^{s}
e^{2i\beta_{j} z}+r_{j+}^{s}
e^{2i\beta_{j}(d_{j}-z)}\bigr]
\nonumber\\
&& + (\beta_{j}^2+q^2)(1-n^{-2}_{j})
D_{jp}^{-1}
\bigl[r_{j-}^{p}e^{2i\beta_{j} z}+r_{j+}^{p}
e^{2i\beta_{j}(d_{j}-z)}\bigr],
\end{eqnarray}
with
\begin{eqnarray}
\label{L203}
n^2_{j}
=n^2_{j}(\omega)
=\varepsilon_{j}(\omega)\mu_{j}(\omega),\\
\label{L204}
\beta_{j}=\beta_{j}(\omega,q)=(\omega^2n_{j}^2/c^2
-q^2)^{1/2},\\
\label{L205}
D_{j\sigma}=D_{j\sigma}(\omega,q)=
1-r_{j+}^{\sigma} r_{j-}^{\sigma} e^{2i\beta_{j}d_{j}},
\end{eqnarray}
and $r_{j\pm}^{\sigma}$ $\!=$ $\!r_{j\pm}^{\sigma}(\omega,q)$ being
the generalized reflection coefficients associated with
the $j$th layer ($\sigma$ $\!=$ $\!s,p$).
Since $-i\beta_{j}$ is purely
real and nonnegative at imaginary frequencies,
we will use the notation \mbox{$\kappa_{j}$ $\!=$
$\!-i\beta_{j}(i\xi,q)$} in the remainder of the paper.
\section{Unscreened versus screened Casimir-Polder force}
\label{sec4}
Equation (\ref{L44-1}) together with equations (\ref{L12})
and (\ref{L13}) contains the unscreened
CP force that acts on isolated atoms as limiting case.
Moreover, it enables one to calculate also the
screened CP force acting on atoms that are
constituents of matter. To illustrate this, let us consider planar
systems as sketched in figure \ref{Fig1} and begin with
the unscreened CP force.
\begin{figure}
\caption{\label{Fig1}Planar systems under study: (a) a plate of thickness $d_2=\delta z$ (layer 2) in front of a planar wall, with empty regions 1 and 3; (b) the plate as part of the matter filling the half-space in front of the wall.}
\end{figure}
\subsection{Casimir-Polder force on isolated atoms}
\label{sec4.2}
For this purpose we first calculate the Casimir force
acting on a (homogeneous) plate of thickness
\mbox{$d_2$ $\!=$ $\!\delta z$} in
front of a planar wall according to the four-layer system in
figure \ref{Fig1}(a), with the regions
1 and 3 being empty. In this case we have to set
$n_{1}$ $\!=$ $\!n_{3}$ $\!=$ $\!1$ and $d_{3}$ $\!\to$ $\!\infty$
(and hence \mbox{$r_{3+}^{\sigma}$ $\!\to$ $\!0$)} in
equations (\ref{L201}) and (\ref{L202}), which in particular implies
that the stress in the empty-space region 1 is independent of position
and vanishes in the semi-infinite empty-space region 3.
According to equations (\ref{L44-1}) and (\ref{L201}), the
total Casimir force (per transverse unit area) acting on
the plate (layer $2$) can then be written as
[$\kappa_{1}$ $\!=$ $\!\kappa_{3}$ $\!=$
$\!\kappa$ $\!=$ $\!(\xi^2/c^2+q^2)^{1/2}$]
\begin{equation}
\label{L24}
F=\frac{\hbar}{8\pi^2}\int_{0}^{\infty}\mathrm{d}\xi\int_{0}^{\infty}
\mathrm{d} q\,\frac{q}{\kappa}\, g_{1}(d_{1},i\xi,q),
\end{equation}
where
\begin{equation}
\label{L25}
\fl
g_{1}(d_{1},i\xi,q)
=
-4\kappa^2
\sum_{\sigma=s,p}\frac{r_{1+}^{\sigma}r_{1-}^{\sigma}
e^{-2\kappa d_{1}}}{1-r_{1+}^{\sigma}r_{1-}^{\sigma}
e^{-2\kappa d_{1}}}
=-4\kappa^2
\sum_{\sigma=s,p}\sum_{m=1}^{\infty}[r_{1+}^{\sigma}r_{1-}^{\sigma}
e^{-2\kappa d_{1}}]^{m}.
\end{equation}
Since the reflection coefficients $r_{1-}^{\sigma}$ (containing
the details of the wall structure) are independent of
the properties of the plate, they
need not be further specified.
In
the
case of a homogeneous plate,
the reflection coefficients $r_{1+}^{\sigma}$
read
\begin{equation}
\label{L26}
r_{1+}^{\sigma}
=\frac{r_{1/2}^{\sigma}\,(1-e^{-2\kappa_{2}d_{2}})}
{1-r_{1/2}^{\sigma\,2}e^{-2\kappa_{2}d_{2}}}
\qquad (\sigma = s,p),
\end{equation}
where $r_{1/2}^{\sigma}$ are the usual single-interface
(Fresnel) amplitudes,
\begin{equation}
\label{L27}
r_{1/2}^{s}=
\frac{\kappa\mu_{2}-\kappa_{2}}
{\kappa\mu_{2}+\kappa_{2}}\,,
\qquad
r_{1/2}^{p}
=\frac{\kappa\varepsilon_{2}-\kappa_{2}}
{\kappa\varepsilon_{2}+\kappa_{2}}\,.
\end{equation}
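For later numerical work it may be convenient to have these reflection coefficients in explicit form. The following Python helper (a minimal sketch of ours; the single-resonance permittivity and all parameter values are purely illustrative assumptions) evaluates equations (\ref{L26}) and (\ref{L27}) on the imaginary frequency axis, with $\kappa_2=(\varepsilon_2\mu_2\xi^2/c^2+q^2)^{1/2}$ as in section \ref{sec3}.
\begin{verbatim}
import numpy as np

def eps2(xi, omega0=1.0, omega_p=0.5):
    # assumed toy single-resonance permittivity, real and > 1 on the imaginary axis
    return 1 + omega_p**2/(omega0**2 + xi**2)

def fresnel(xi, q, eps, mu=1.0, c=1.0):
    # single-interface amplitudes r_{1/2}^s, r_{1/2}^p of equation (L27)
    kappa  = np.sqrt(xi**2/c**2 + q**2)
    kappa2 = np.sqrt(eps*mu*xi**2/c**2 + q**2)
    rs = (kappa*mu  - kappa2)/(kappa*mu  + kappa2)
    rp = (kappa*eps - kappa2)/(kappa*eps + kappa2)
    return rs, rp, kappa2

def r_plate(xi, q, d2, eps, mu=1.0, c=1.0):
    # plate coefficients r_{1+}^s, r_{1+}^p of equation (L26) for thickness d2
    rs, rp, kappa2 = fresnel(xi, q, eps, mu, c)
    e = np.exp(-2*kappa2*d2)
    return rs*(1 - e)/(1 - rs**2*e), rp*(1 - e)/(1 - rp**2*e)

# example: a thin, weakly dielectric plate, for which r_{1+}^p should be close
# to the first-order expression derived in the text below
xi, q, d2 = 0.3, 0.7, 0.05          # assumed sample point (units with c = 1)
print(r_plate(xi, q, d2, eps2(xi, omega_p=0.1)))
\end{verbatim}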
To recover the ordinary (unscreened) CP force,
let us consider a nonmagnetic plate ($\mu_{2}$ $\!=$ $\!1$)
consisting of weakly dielectric material
($|\varepsilon_{2}$ $\!-$ $\!1|$ $\!\ll$ $\!1$)
and expand to first order in $\varepsilon_{2}$ $\!-$ $\!1$.
In this approximation, we may set
\begin{equation}
\label{L28a}
\frac{\kappa_{2}}{\kappa} =1+\frac{\xi^2}{2\kappa^2
c^2}\,(\varepsilon_{2}-1)
\end{equation}
and equations (\ref{L27}) approximate to
\begin{equation}
\label{L29}
r_{1/2}^{s}
=-
(\varepsilon_{2}-1)
\,\frac{\xi^2}
{4\kappa^2 c^2}\,,
\end{equation}
\begin{equation}
r_{1/2}^{p}=\frac{
\varepsilon_{2}-1
}{2}\,
\left(1-
\frac{\xi^2}
{2\kappa^2 c^2}
\right),
\end{equation}
so that equations (\ref{L26}) read
\begin{equation}
\label{L31a}
r_{1+}^{s}
=-
(\varepsilon_{2}-1)
\,\frac{\xi^2}
{4\kappa^2 c^2}
\left(1-e^{-2\kappa
\delta z
}\right),
\end{equation}
\begin{equation}
\label{L31b}
r_{1+}^{p}=
\frac{
\varepsilon_{2}-1
}
{2}
\left(1-
\frac{\xi^2}
{2\kappa^2 c^2}
\right)\left(1-e^{-2\kappa
\delta z
}\right)
.
\end{equation}
Inserting equations (\ref{L31a}) and (\ref{L31b}) in
equation (\ref{L25}), we see that $g_{1}(d_{1},i\xi,q)$
approximates to
\begin{eqnarray}
\label{L32}
\fl
g_{1}(d_{1},i\xi,q)=-4\kappa^2
\sum_{\sigma=s,p}
r_{1+}^{\sigma}r_{1-}^{\sigma}
e^{-2\kappa d_{1}}
\nonumber\\\lo
=
(\varepsilon_{2}-1)
\,\kappa^2 e^{-2\kappa d_{1}}
\left(1-e^{-2\kappa
\delta z
}\right)
\left[
r_{1-}^{s}
\left(1-\frac{q^2}{\kappa^2}
\right)
-r_{1-}^{p}
\left(1+\frac{q^2}{\kappa^2}
\right)
\right]
.
\end{eqnarray}
Note that it solely results from the $m$ $\!=$ $\!1$
(`single round-trip') term in the second, expanded form of
equation (\ref{L25}). Substitution of equation (\ref{L32})
into equation (\ref{L24}) yields the
Casimir force (per transverse unit area)
to first order in $\varepsilon_{2}$ $\!-$ $\!1$:
\begin{eqnarray}
\label{L24a}
\fl
F=\frac{\hbar}{8\pi^2}\int_{0}^{\infty}\mathrm{d}\xi\,
(\varepsilon_{2}-1)
\nonumber\\ \times
\int_{0}^{\infty}
\mathrm{d} q\,q\kappa
e^{-2\kappa d_{1}}
\left(1-e^{-2\kappa
\delta z
}\right)
\left[
r_{1-}^{s}
\left(1-\frac{q^2}{\kappa^2}
\right)
-r_{1-}^{p}
\left(1+\frac{q^2}{\kappa^2}
\right)
\right].
\end{eqnarray}
It is not difficult to prove that equation (\ref{L24a})
can be rewritten as
\begin{equation}
\label{L24a-1}
F = \int_{d_1}^{d_1+\delta z} \mathrm{d} z\, f(z),
\end{equation}
where the force density $f(z)$ can be derived from
a potential $V(z)$ as follows:
\begin{equation}
\label{L35a}
f(z) = -
\frac{\partial V(z)}{\partial z}
\,,
\end{equation}
\begin{equation}
\label{L37a}
\fl
V(z)=\frac{\hbar
}
{8\pi^2
}
\int_{0}^{\infty}
\mathrm{d}\xi\,
(\varepsilon_{2}-1)
\int_{0}^{\infty}
\mathrm{d} q\,q\kappa
e^{-2\kappa z}
\left[
r_{1-}^{s}
\left(1-\frac{q^2}{\kappa^2}
\right)
-r_{1-}^{p}
\left(1+\frac{q^2}{\kappa^2}
\right)
\right].
\end{equation}
Let us suppose that the plate consists of atom-like basic
constituents of polarizability $\alpha(i\xi)$
and $\eta$ is the (constant) number density
of the atoms. Using the relation
\begin{equation}
\label{L25c}
\varepsilon_{2}(i\xi)-1
=\frac{\eta}
{\varepsilon_{0}}
\, \alpha(i\xi),
\end{equation}
which is valid in the case of
weakly dielectric material,
from inspection of equations (\ref{L24a-1})--(\ref{L37a})
we see that
\begin{equation}
\label{L25c-1}
F^{({\rm at})}(z) = \eta^{-1}f(z)
\end{equation}
can be regarded as the force acting on an atom at position $z$
in the plate, where the associated potential reads
\begin{equation}
\label{L37}
\fl
V^\mathrm{(at)}
(z)
=\frac{\hbar}{8\pi^2
\varepsilon_{0}
}
\int_{0}^{\infty}\mathrm{d}\xi\,
\alpha(i\xi)
\int_{0}^{\infty}
\mathrm{d} q\, q \kappa e^{-2 \kappa z}
\left[
r_{1-}^{s}
\left(1-\frac{q^2}{\kappa^2}
\right)
-r_{1-}^{p}
\left(1+\frac{q^2}{\kappa^2}
\right)
\right].
\end{equation}
It is straightforwardly checked that equation (\ref{L37}) is
identical with equation (\ref{L38}), i.e., with the standard CP
potential. Hence, $F^{({\rm at})}(z)$ is nothing but the
unscreened CP force that acts on a single (ground-state)
atom at position $z$ in front of the wall.
Clearly, this interpretation presupposes that $\alpha(i\xi)$
is really the polarizability of a single atom. Otherwise
$\alpha(i\xi)$ and $\eta$ are rather formal quantities
defined by equation (\ref{L25c}), so that
the introduction of
$F^{({\rm at})}(z)$ is also rather formal.
Although equations
(\ref{L25c-1}) and (\ref{L37}) are of course correct, equations
(\ref{L35a}) and (\ref{L37a}) may be more appropriate in this case.
\subsection{Screened Casimir-Polder force on medium atoms}
\label{sec4.1}
Let us now consider the case where the plate is part of
matter that fills the space in front of the wall according
to the two-layer system in figure \ref{Fig1}(b). In this case
from equation (\ref{L202}) it follows that ($d_2$ $\!\to$ $\!\infty$
and hence $r^\sigma_{2+}$ $\!=$ $\!0$)
\begin{equation}
\label{L100}
\fl
g_{2}(z+\delta z,\omega,q)-g_{2}(z,\omega,q)=
e^{-2\kappa_{2} z}
(q^2-\kappa_{2}^2)
(1-n_{2}^{-2})
(r_{2-}^{p}-r_{2-}^{s})
\left(e^{-2\kappa_{2}\delta z}-1\right),
\end{equation}
so that, according to equations (\ref{L44-1}) and (\ref{L201})
the force (per transverse unit area) on the plate reads
\begin{equation}
\label{L101}
F=\frac{\hbar}{8\pi^2}\!\int_{0}^{\infty}\mathrm{d}\xi\,
\frac{\xi^2}{c^2}\,\mu_{2}
(n_{2}^2-1)
\!\int_{0}^{\infty}\mathrm{d} q\, \frac{q}{\kappa_{2}}
\,
e^{-2\kappa_{2} z}
(r_{2-}^{p}-
r_{2-}^{s})
\left(e^{-2\kappa_{2}\delta z}-1\right).
\end{equation}
Now we again focus on nonmagnetic
and weakly dielectric matter, i.e.,
$\mu_{2}$ $\!=$ $\!1$,
\mbox{$|\varepsilon_{2}$ $\!-$ $\!1|$ $\!\ll$ $\!1$}.
It is not difficult to see that to
first order in $\varepsilon_{2}$ $\!-$ $\!1$
equation (\ref{L101}) yields
\begin{equation}
\label{L102}
F=\frac{\hbar}{8\pi^2}\!\int_{0}^{\infty}\mathrm{d}\xi\,
\frac{\xi^2}{c^2}\,
(\varepsilon_{2}-1)
\!\int_{0}^{\infty}\mathrm{d} q\,\frac{q}{\kappa}
\,
e^{-2\kappa z}
(r_{2-}^{p}-
r_{2-}^{s})
\left(e^{-2\kappa\delta z}-1\right),
\end{equation}
where the quantities $r_{2-}^{\sigma}$
must be computed for
$\varepsilon_{2}$ $\!=$ $\!1$ and $\kappa_{2}$ $\!=$ $\!\kappa$.
In other words, they are the reflection coefficients
corresponding to the case where
the half-space on the right-hand side
of the wall is empty, and thus they
agree with the reflection coefficients
$r_{1-}^{\sigma}$ in equation (\ref{L24a}).
Obviously, equation (\ref{L102}) can be written
in the form of equation (\ref{L24a-1})
(with $d_{1}\mapsto z$),
where the force density $f(z)$ can be derived,
according to equation (\ref{L35a}), from a potential $V(z)$
which now reads
\begin{equation}
\label{L103a}
V(z)=-\frac{\hbar}{8\pi^2}\,
\int_{0}^{\infty}\mathrm{d}\xi\,
\frac{\xi^2}{c^2}\,
(\varepsilon_{2}-1)
\int_{0}^{\infty}\mathrm{d} q\,
\frac{q}{\kappa}
\,e^{-2\kappa z}
(r_{2-}^{p}-
r_{2-}^{s})
\end{equation}
instead of equation (\ref{L37a}).
Applying equation (\ref{L25c}), we can again use equation
(\ref{L25c-1}) to introduce the force $F^{({\rm at})}(z)$ acting
on an atom at position $z$ in the plate, where the
potential from which the force can be derived is given by
\begin{equation}
\label{L105b}
V^\mathrm{(at)}(z)
=-\frac{\hbar\mu_{0}}{8\pi^2}\,
\int_{0}^{\infty}\mathrm{d}\xi\,
\xi^2\,\alpha(i\xi)
\int_{0}^{\infty}\frac{\mathrm{d} q\,q}{\kappa}\,
(r_{2-}^{p}-r_{2-}^{s})
e^{-2\kappa z}
\end{equation}
instead of equation (\ref{L37}).
Hence, equation (\ref{L105b}) can be interpreted as the potential
that a matter atom is subject to when
the presence of the surrounding
atoms of the (weakly dielectric) matter
is taken into account,
thus being the screened single-atom CP potential.
It differs from
the unscreened CP potential (\ref{L38})
(
or, equivalently, (\ref{L37})
)
in the $p$-polarization contributions.
The result is in agreement with the one recently found
in
\cite{TomasM022005}, in which
equation (\ref{L44-1}) together with equation (\ref{L201})
is applied to a multi-plate cavity-like system
and it is shown that the Casimir force acting on a plate
embedded in such a system can be decomposed into two
parts, where one part can be regarded as screened force.
\section{Discussion and Summary}
\label{sec5}
It is not surprising that the unscreened CP potential
(
equation (\ref{L38}) or, equivalently, equation
(\ref{L37})
)
must differ from the screened one
(
equation (\ref{L105b})
),
as can be seen from the respective
physical meaning of the force (per transverse unit area)
$\mathrm{d} F$ $\!=$ $\!f(z)\mathrm{d} z$ that acts on a slice of
infinitesimal thickness $\mathrm{d} z$
(cf~equation (\ref{L24a-1})).
To obtain the unscreened CP force, a slice in the otherwise
empty right half-space is considered
(
see figure \ref{Fig1}(a)
),
leading -- if equation (\ref{L25c}) holds -- to the force
on a single atom in front of the wall.
In contrast, the screened CP force is obtained if the slice is
unavoidably a part of the medium that fills the right half-space in
figure \ref{Fig1}(b), leading to the force on a medium atom. From
a microscopic point of view, this force results not only
from the van der Waals forces between the medium
atom under consideration
and the atoms of the wall, but also from those between the medium
atom and the other medium atoms.
Since there are
many more
other medium atoms to the right than to
the left of the medium atom under consideration, there will be a net
effect, which is expected to diminish the force as
compared to the single-atom case.
To illustrate this
screening effect,
let us compare the unscreened potential
(
equation (\ref{L37})
)
and the screened potential
(
equation (\ref{L105b})
)
in the idealized limit where the wall can be
regarded as a perfectly reflecting mirror such that
$r_{1-}^{p}$ $\!=$ $\! 1$,
\mbox{$r_{1-}^{s}$ $\!=$ $\!-1$}.
In this case equation (\ref{L37}) greatly simplifies
(the expression in the square bracket equals $-2$),
leading to the unscreened potential in the form of
\begin{equation}
\label{L33}
V^\mathrm{(at)}(z)
=-\frac{\hbar c}{64\pi^2\varepsilon_{0}
}
\frac{1}{z^4}\int_{0}^{\infty} \mathrm{d} y\,
\alpha\!\left(\frac{icy}{2z}\right)h
(y),
\end{equation}
where the dimensionless function $h(y)$ is defined by
\begin{equation}
\label{L24c}
h(y)
=\int_{y}^{\infty}\mathrm{d} x\, x^2 e^{-x}=(y^2+2y+2)e^{-y}.
\end{equation}
In particular, in the large-distance
limit equation (\ref{L33}) reduces to
Casimir's and Polder's well-known formula
\cite{CasimirHBG021948}
$\bigl[\int_{0}^{\infty} \mathrm{d} y \,h(y)$ $\!=$ $\!6\bigr]$
\begin{equation}
\label{L34}
V^\mathrm{(at)}(z)
=
- \frac{3\hbar c\alpha(0)}{32\pi^2\varepsilon_{0}}\,
\frac{1}{z^4}
\,.
\end{equation}
Correspondingly, setting
$r_{2-}^{p}$ $\!=$ $\!1$ and $r_{2-}^{s}$
$\!=$ $\! -1$ in equation (\ref{L105b}),
we may write the screened potential in the form of
equation (\ref{L33}), where the function $h(y)$ changes to
\begin{equation}
\label{L24d-1}
h(y)
=y^2 \int_{y}^{\infty}\mathrm{d} x\, e^{-x}=y^2 e^{-y}.
\end{equation}
Since now $\int_{0}^{\infty} \mathrm{d} y \,h(y)$ $\!=$ $\!2$, it
follows that in the large distance-limit
the screened potential is one third of the unscreened one,
\begin{equation}
\label{L34-1}
V^\mathrm{(at)}(z)
=
- \frac{\hbar c\alpha(0)}{32\pi^2\varepsilon_{0}}\,
\frac{1}{z^4}\,,
\end{equation}
provided that $\alpha(0)$ is the same in both cases.
The result clearly shows that the screening effect
can be fairly large under certain conditions.
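The two weight integrals quoted above are easily verified numerically; the following short snippet (ours) reproduces the values $6$ and $2$, and hence the factor of one third between the large-distance potentials (\ref{L34}) and (\ref{L34-1}).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

h_unscreened = lambda y: (y**2 + 2*y + 2)*np.exp(-y)   # equation (L24c)
h_screened   = lambda y: y**2*np.exp(-y)               # equation (L24d-1)

I_u = quad(h_unscreened, 0, np.inf)[0]
I_s = quad(h_screened, 0, np.inf)[0]
print(I_u, I_s, I_s/I_u)   # 6.0, 2.0, 0.333...
\end{verbatim}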
In the short-distance limit, the
asymptotic behaviour of the unscreened CP potential is commonly
obtained by approximately setting \mbox{$c$ $\!\to$ $\!\infty$},
implying $\kappa$ $\!\simeq$ $\!q$
(non-retarded approximation), so that the leading term behaves like
$z^{-3}$. It is not difficult to see that this term is missing
in the screened potential, and that the same approximation would lead
to a $z^{-1}$ distance law. It is, however, more than questionable
whether this approximation is reasonable in the case of a medium atom.
Moreover, when the atom can be no longer regarded as being
surrounded by sufficiently many other atoms of the same
kind, then the macroscopic description leading to
equation (\ref{L105b}) fails. Hence, even if the strict short-distance limit of
Eq.~(\ref{L105b}) could be found by more elaborate methods, its meaning
might be severely limited.
{F}rom the above we can conclude that electromagnetic vacuum forces
acting on micro-objects that consist of collections of atomic
constituents should be preferably calculated as
Casimir forces, by assigning appropriately chosen permittivities
and/or permeabilities to the micro-objects.
This approach ensures that screening effects are properly taken into account.
In particular, in the case of a weakly magnetodielectric
object the total force can be obtained,
to
leading order, by
superimposing screened CP forces acting on the atomic constituents.
With increasing strength of the magnetodielectric properties,
higher-order corrections (not considered in this paper)
must be included in the calculation.
This can be done in a systematic fashion, by
starting from the exact formula for the Casimir force and
expanding to higher powers in the electric and/or
magnetic susceptibility.
In summary, we have
studied relations between the Casimir force acting on
a macroscopic body and the CP force acting on an atom, with
special emphasis on planar structures.
We have shown that the exact formula for the
Casimir force contains as
a
special case the
ordinary CP force acting
on a single atom in front of a wall.
Further,
we have shown that
the exact formula can also be used to calculate
the CP force acting on an atom
that is
a
constituent
of bulk material that fills the half-space in front of the wall.
In this case the surrounding atoms give rise
to a screening effect that diminishes the
CP force compared with the force that acts
on an isolated atom.
\ack
We thank
Stefan Scheel for discussions.
CR
is grateful for being granted a Th\"u\-rin\-ger
Lan\-des\-gradu\-ier\-ten\-sti\-pen\-dium.
\section*{References}
\end{document}
\begin{document}
\newcommand{\hspace*{1.5em}}{\hspace*{1.5em}}
\newcommand{\ketbra}[2]{\ket{#1}\!\bra{#2}}
\newcommand{\mathbbm{1}}{\mathbbm{1}}
\thispagestyle{plain}
\label{sh}
\begin{center} {\Large \bf
\begin{tabular}{c}
Quantum state identification of qutrits
\\[-1mm]
via a nonlinear protocol
\end{tabular}
} \end{center}
\begin{center} {\bf
P.~V.~Pyshkin$^{1}$, A.~G\'abris$^{2,1}$, O.~K\'alm\'an$^{1}$, I.~Jex$^{2}$ and T.~Kiss$^{1*}$
}\end{center}
\begin{center}
{\it
$^1$Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences\\
P.O. Box 49, H-1525 Budapest, Hungary\\
$^2$Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering \\
B\v rehov\'a 7, 115 19 Praha 1, Star\'e M\v esto, Czech Republic.
}
$^*$Corresponding author e-mail: \texttt{kiss.tamas@wigner.mta.hu}\\
\end{center}
\begin{abstract}\noindent
We propose a probabilistic quantum protocol to realize a nonlinear transformation of qutrit states, which by iterative applications on ensembles can be used to distinguish two types of pure states. The protocol involves single-qutrit and two-qutrit unitary operations as well as post-selection according to the results obtained in intermediate measurements. We utilize the nonlinear transformation in an algorithm to identify a quantum state provided it belongs to an arbitrary known finite set. The algorithm is based on dividing the known set of states into two appropriately designed subsets which can be distinguished by the nonlinear protocol. In most cases this is accompanied by the application of some properly defined physical (unitary) operation on the unknown state. Then, by the application of the nonlinear protocol one can decide which of the two subsets the unknown state belongs to thus reducing the number of possible candidates. By iteratively continuing this procedure until a single possible candidate remains, one can identify the unknown state.
\end{abstract}
\noindent{\bf Keywords:}
quantum measurement, quantum control, quantum state identification.
\section{Introduction}
\hspace*{1.5em}
Measurement on a quantum system inevitably affects its state. One of the questions J\'ozsef Janszky was intrigued by in his last active years was how one can design useful protocols involving post-selection based on measurement results \cite{QuantumScissors, Janszky}. The power of measurement-based protocols can be used in quantum state purification~\cite{Hiromichi, aschauer_multiparticle_2005, cooling_by_feedback_control}, as well as for quantum state engineering~\cite{Piani2014_entanglment_by_measurements, Streltsov2011_entanglment_by_measurements, Wu_entanglement_generation, Pyshkin_compression, Filippov2017}, in particular also to cool down quantum systems to their ground-state~\cite{Li2011, gsc_paper, hertzberg_back-action-evading_2010, rocheleau_preparation_2010}. One can exploit the nonlinear nature of this type of protocols for enhancing initially small differences between quantum states \cite{Gilyen}.
Discrimination of nonorthogonal quantum states is an important task for applications of quantum information and quantum control~\cite{Nielsen2000}. Various protocols have been proposed for efficient quantum state discrimination (QSD) (see reviews~\cite{Barnett09,Kwek2015}). A crucial ingredient of these methods is to have an ensemble of identical quantum systems for implementing QSD~\cite{mack_enhanced_2000, Torres2017, Zhang2018, Kalman2018}. Measurement-induced nonlinear dynamics is experimentally feasible in quantum optics~\cite{Xu2014}, and it has been shown~\cite{Torres2017, Kalman2018} that nonlinear quantum transformations could be a possible way for implementing QSD of two-level quantum systems. In this report we propose a scheme which can be used for QSD of three-level quantum systems. Such systems are studied as candidates for quantum processing also experimentally, see e.g.~\cite{AbdumalikovJr2013}.
Quantum state identification (QSI) is a problem where one has to decide whether an unknown quantum state is identical to one of some reference quantum states. In the original formulation of the problem \cite{Hayashi2005, Hayashi2006} the unknown pure state has to be identified with one of two or more reference pure states, some or all of which are unknown, but a certain number of copies of them are available \cite{Herzog2008, Herzog2016}.
In this paper we design a quantum protocol based on post-selection in which the difference between the absolute values of two coefficients in the expansion of the quantum state of a three-level system (qutrit) is enhanced. The protocol is thus capable of decreasing the overlap of initially nonorthogonal ensembles of systems, according to a specific property of the states. We show that one can build an algorithm around this protocol which solves a quantum-state-identification type of problem where a finite number of reference states is classically given.
\section{Nonlinear transformation of qutrit states}
\hspace*{1.5em}
We consider an ensemble of identically prepared quantum systems in the state parametrized by two complex parameters, $z_1$ and $z_2$ as
\begin{equation}
\ket{\psi_0} = \mathcal{N}\left( \vphantom{\frac{1}{1}}\ket{0} + z_1\ket{1} + z_2\ket{2}\right),
\label{initial_1}
\end{equation}
with $\mathcal{N} = (1 + |z_1|^2 + |z_2|^2)^{-1/2}$ chosen such that $\| \ket{\psi_0} \|=1$. In the following we describe a protocol that allows us to distinguish between the two cases (i) $|z_1|>|z_2|$ and (ii) $|z_1|<|z_2|$ of this parametrization. The core of this procedure is the nonlinear transformation schematically depicted in Fig.~\ref{fig1} as a quantum circuit diagram,
with the single-qutrit unitary operators defined as
\begin{equation} \label{single_Us}
\begin{array}{r@{}l}
&u_1 = \ketbra{0}{2} + \ketbra20 + \ketbra{1}{1}, \\
&u_2 = \ketbra{0}{1} + \ketbra{1}{0} + \ketbra{2}{2}, \\
\end{array}
\end{equation}
and the two-qutrit operators as
\begin{equation} \label{two_Us}
\begin{array}{r@{}l}
&U_1 = \ketbra{01}{11} + \ketbra{11}{01} + (\mathbbm{1} - \ketbra{01}{01} - \ketbra{11}{11}), \\
&U_2 = \ketbra{02}{22} + \ketbra{22}{02} + (\mathbbm{1} - \ketbra{02}{02} - \ketbra{22}{22}), \\
&U = \ketbra{01}{10} + \ketbra{10}{01} + (\mathbbm{1} - \ketbra{01}{01} - \ketbra{10}{10}).
\end{array}
\end{equation}
\begin{figure}
\caption{Scheme of the non-linear transformation $\ket{\psi_0}\to\ket{\psi_1}$ of Eq.~(\ref{psi1}), composed of the unitaries of Eqs.~(\ref{single_Us}) and (\ref{two_Us}) and the selective measurements $P=\ketbra{0}{0}$.}
\label{fig1}
\end{figure}
The procedure starts by taking two pairs of the system in the initial state~$\ket{\psi_0}$. Then one member of each pair is transformed by a single-qutrit unitary $u_j$ ($j=1,2$), after which $U_j$ acts on the pair of qutrits as a whole, followed by performing selective projective measurements $P = \ketbra{0}{0}$ on the first system of each pair. If both results are ``yes'' then we take the unmeasured systems from each pair and apply a joint unitary operator~$U$ on them. Then we perform again a projective measurement $P$ on the first system in the pair, and if the result is again ``yes'' then the unmeasured system transforms into the state
\begin{equation}\label{psi1}
\ket{\psi_1} = \mathcal{N}'\left(\vphantom{1^1}\ket{0} + \frac{z_1}{z_2}z_1\ket{1} + \frac{z_2}{z_1}z_2\ket{2}\right).
\end{equation}
By applying the above procedure to the entire ensemble of qutrits (always taking two pairs at a time) we arrive at a new, albeit smaller, ensemble constituted by identical states. We can interpret this as transforming an ensemble described by the state $\ket{\psi_0}$ into an ensemble described by the state $\ket{\psi_1}$.
The transformation $\ket{\psi_0} \to \ket{\psi_1}$ is a nonlinear vector mapping:
\begin{equation} \label{mapping}
\vec{f}^{(n)} = \{f_1^{(n)}, f_2^{(n)}\}, \quad \vec{f}^{(n)} = \vec{f}(\vec{f}^{(n-1)}), \quad \vec{f}^{(0)} = \{z_1, z_2\},
\end{equation}
where $f_1^{(n)} = {f_1^{(n-1)}}^2 / f_2^{(n-1)}$ and $f_2^{(n)} = {f_2^{(n-1)}}^2 / f_1^{(n-1)}$. Thus, the result of~$M$ iterations will be the state~$\ket{\psi_M} \propto \ket{0}+f_1^{(M)}\ket{1} + f_2^{(M)}\ket{2}$. The map~(\ref{mapping}) has two attractors: $\{\infty, 0\}$ and $\{0, \infty\}$. Therefore, if $|z_1|\neq|z_2|$ we will have for some relatively large~$M$: $\ket{\psi_M}\approx \ket{1}$ in the case of $|z_1| > |z_2|$, and $\ket{\psi_M}\approx \ket{2}$ in the case of $|z_1| < |z_2|$. We can distinguish these two cases, up to a certain error margin, by performing the projective measurement~$\ket{1}\bra{1}$ on the system in the state~$\ket{\psi_M}$, which allows us to draw conclusions regarding the initial state $\ket{\psi_0}$. In Figs.~\ref{fig2}(a) and \ref{fig2}(b) we show the probability of obtaining the state $\ket{1}$ in a measurement after one iteration ($M=1$) and three iterations ($M=3$) of the nonlinear transformation of Eq.~(\ref{psi1}), respectively. The border between regions with high and low probability corresponds to the $|z_1| = |z_2|$ condition, and this border becomes sharper with increasing $M$. Thus, the reliability of QSI increases with increasing $M$. Note that the above-discussed probability describes the precision of the discrimination process at the end of the protocol, yielding an {\em error margin} on making the right conclusion about the given initial state \cite{hayashi_state_2008}. Another relevant quantity is the {\em survival probability} which describes the post-selection process that is based on the intermediate projective measurements shown in Fig.~\ref{fig1}. We will discuss this probability at the end of this Section.
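The convergence towards the attractors can be illustrated with a few lines of Python (a sketch of ours; the initial coefficients $z_1$, $z_2$ are arbitrary assumed values with $|z_1|>|z_2|$), iterating the map of Eq.~(\ref{mapping}) and evaluating the probability of finding $\ket{1}$ in the final measurement.
\begin{verbatim}
import numpy as np

def iterate_map(z1, z2, M):
    f1, f2 = complex(z1), complex(z2)
    for _ in range(M):
        f1, f2 = f1**2/f2, f2**2/f1   # eq. (mapping); both updates use the old values
    return f1, f2

def prob_state1(z1, z2, M):
    # probability of obtaining |1> in a projective measurement on |psi_M>
    f1, f2 = iterate_map(z1, z2, M)
    return abs(f1)**2/(1 + abs(f1)**2 + abs(f2)**2)

z1, z2 = 0.9*np.exp(0.3j), 0.7        # assumed initial coefficients, |z1| > |z2|
for M in (1, 2, 3):
    print(M, prob_state1(z1, z2, M))  # tends to 1; swapping z1 and z2 gives values -> 0
\end{verbatim}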
\begin{figure}
\caption{Probability of obtaining the state $\ket{1}$ as a function of the initial parameters after (a) $M=1$ and (b) $M=3$ iterations of the nonlinear transformation.}
\label{fig2}
\end{figure}
The nonlinear transformation of Eq.~(\ref{psi1}) itself cannot distinguish two states in which $|z_1|=|z_2|$, as can be seen from Fig.~\ref{fig2}. However, a properly chosen single-qutrit unitary operation can be used to make the magnitudes of these coefficients different so that the subsequent nonlinear transformation can distinguish such states. In order to show this, let us consider input states of the form
\begin{equation}
\ket{\psi_0} = \mathcal{N}(\ket{0} + \rho\ket{1} + \rho \exp(i\varphi)\ket{2}),
\label{initial_2}
\end{equation}
where $\rho,\varphi\in\mathbb{R}$. Then, apply the following single-qutrit unitary transformation~$R$
\begin{equation} \label{Rot}
R = \left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & i/\sqrt2 & 1/\sqrt2 \\
0 & -1/\sqrt2 & -i/\sqrt2
\end{array} \right)
\end{equation}
on each initial state. Due to this operation the transformed state will be
\begin{equation}
R\ket{\psi_0} = \mathcal{N}(\ket{0} + z_1'\ket{1} + z_2'\ket{2}),
\label{Rpsi0}
\end{equation}
where
\begin{equation}
|z_1'|= \rho\sqrt{1 + \sin\varphi}, \quad |z_2'|= \rho\sqrt{1 - \sin\varphi}.
\end{equation}
Therefore, we can treat the problem as before, since when $\varphi\neq0$ then $|z_1'|\neq|z_2'|$ and applying the nonlinear transformation on the state of Eq.~(\ref{Rpsi0}) we can distinguish the two different situations: $0<\varphi<\pi $ (corresponding to $|z_1'|>|z_2'|$), and $-\pi<\varphi<0 $ (corresponding to $|z_1'|<|z_2'|$). In Figs.~\ref{fig3}(a) and \ref{fig3}(b) we show the probability of measuring state~$\ket{1}$ as a function of the initial values of $\rho$ and $\varphi$.
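The effect of $R$ on the equal-magnitude states (\ref{initial_2}) is easily verified numerically; in the following check (ours) the values of $\rho$ and $\varphi$ are arbitrary.
\begin{verbatim}
import numpy as np

rho, phi = 0.8, 0.6   # assumed example values
R = np.array([[1, 0, 0],
              [0, 1j/np.sqrt(2), 1/np.sqrt(2)],
              [0, -1/np.sqrt(2), -1j/np.sqrt(2)]])
v = R @ np.array([1, rho, rho*np.exp(1j*phi)])   # unnormalised coefficients of R|psi_0>
print(abs(v[1]), rho*np.sqrt(1 + np.sin(phi)))   # |z1'| = rho sqrt(1+sin(phi))
print(abs(v[2]), rho*np.sqrt(1 - np.sin(phi)))   # |z2'| = rho sqrt(1-sin(phi))
\end{verbatim}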
\begin{figure}
\caption{Probability of measuring state $\ket{1}$ as a function of the initial values of $\rho$ and $\varphi$ for the rotated input state of Eq.~(\ref{Rpsi0}) [panels (a) and (b)].}
\label{fig3}
\end{figure}
In practical situations the necessary number of iterations of the nonlinear protocol is determined by the probabilities depicted in Figs.~\ref{fig2} and \ref{fig3}. However, because our nonlinear protocol is based on selective measurements, we need a relatively large number of identical qutrits in the initial state~$\ket{\psi_0}$. This requirement can be characterized by the survival probability in a single step
\begin{equation}\label{Ps}
P_s = \frac{ |z_1|^2|z_2|^2 + |z_1|^6 + |z_2|^6 }{( 1 +|z_1|^2 + |z_2|^2 )^4},
\end{equation}
which is the product of the three probabilities to measure $\ket{0}$ (see Fig.~\ref{fig1}).
From Eq.~(\ref{Ps}) it can be seen that the survival probability is very small (i.e., the protocol is very resource demanding) when both $z_1, z_2 \rightarrow 0$ or $z_{1(2)}\rightarrow\infty$. However, in cases when only a few iteration steps are expected to give a conclusive answer for the discrimination, the survival probability is also relatively high (see Fig.~\ref{fig4}).
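For reference, the single-step survival probability (\ref{Ps}) and the two-step product mentioned in the caption of Fig.~\ref{fig4} can be evaluated as follows (a small helper of ours; the trial coefficients are arbitrary).
\begin{verbatim}
import numpy as np

def P_s(z1, z2):
    # single-step survival probability, equation (Ps)
    a1, a2 = abs(z1)**2, abs(z2)**2
    return (a1*a2 + a1**3 + a2**3)/(1 + a1 + a2)**4

z1, z2 = 0.9, 0.7                             # assumed trial coefficients
print(P_s(z1, z2))                            # one iteration
print(P_s(z1, z2)*P_s(z1**2/z2, z2**2/z1))    # two iterations: product of the P_s values
\end{verbatim}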
\begin{figure}
\caption{The overall survival probability of the nonlinear protocol for $M=1$ (a), and $M=2$ iterations (b) as a function of the initial parameters. Note that in the latter case the survival probability is given by the product of the $P_{s}$ values of the successive steps.}
\label{fig4}
\end{figure}
\section{Quantum state identification of qutrits}
\hspace*{1.5em}
Let us assume we have a ``black box'' that produces qutrits in the quantum state~$\ket{\psi_{?}}$, which is unknown to us. What we know is that it belongs to a {\em finite set} $S$ of possible qutrit states which we denote by
\begin{equation}
\label{K-set}
S = \left\{ \ket{\psi_i} = \mathcal{N}_i\left(\vphantom{1^1} \ket{0} + z_{i1}\ket{1} + z_{i2}\ket{2}\right) \mathop{\vert} i = 1, \ldots, K \right\},
\end{equation}
with $z_{ij} \in \mathbbm{C}$ being complex numbers. We also introduce the function $N\colon S \rightarrow N(S)$ denoting the number of elements in the set $S$, yielding $N(S)=K$ in the particular case. The problem of quantum state identification~(QSI) is to find $w$ such that $\ket{\psi_{?}} = \ket{\psi_w}$ by applying quantum operations on the ensemble produced by the black box.
For the sake of simplicity let us assume that $|z_{i1}|\neq|z_{i2}|$ for all $i\in(1,2,\dots, K)$. In case we have $|z_{n1}|=|z_{n2}|$ for some $n$ we can apply a unitary rotation similar to the one in Eq.~(\ref{Rot}) to transform the coefficients into new ones of unequal magnitude.
Before discussing the QSI algorithm itself, let us consider the following unitary rotation:
\begin{equation} \label{W}
W(\theta) = \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & - \sin\theta & \cos\theta
\end{array} \right).
\end{equation}
This can transform a quantum state of the form of Eq.~(\ref{initial_1}) with $|z_{1}|\neq|z_{2}|$ into the state with $|z_{1}|=|z_{2}|$ if $\theta$ is chosen in the following way:
\begin{equation}\label{teta_1}
\tan2\theta = \frac{|z_2|^2 - |z_1|^2}{z_1z_2^* + z_1^*z_2}.
\end{equation}
Moreover, by choosing a different angle $\theta'$ in such a way that $\theta < \theta' < \pi/4$ (for $\theta>0$) or $-\pi/4 < \theta' < \theta$ (for $\theta<0$) we can change the sign of $|z_{1}| - |z_{2}|$. We take advantage of such transformations in our quantum state identification algorithm, which is depicted in Fig.~\ref{fig5} by a flowchart.
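The equalizing property of $W(\theta)$ can be checked directly (a short verification of ours, with arbitrary trial coefficients).
\begin{verbatim}
import numpy as np

z1, z2 = 1.3*np.exp(0.4j), 0.6*np.exp(-1.1j)   # assumed trial coefficients
theta = 0.5*np.arctan2(abs(z2)**2 - abs(z1)**2,
                       2*np.real(z1*np.conj(z2)))   # eq. (teta_1)
c, s = np.cos(theta), np.sin(theta)
z1p, z2p = c*z1 + s*z2, -s*z1 + c*z2           # coefficients after W(theta)
print(abs(z1p), abs(z2p))                      # equal magnitudes
\end{verbatim}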
\begin{figure}
\caption{The flow chart of the proposed QSI algorithm employing the non-linear transformation described by the map in Eq.~(\ref{mapping}).}
\label{fig5}
\end{figure}
The QSI algorithm of Fig.~\ref{fig5} is composed of calculational steps (with yellow background), in which the parameters of the necessary physical operations are determined, followed by the actual physical operations on the qutrit ensemble (with blue background). The description of the algorithm is the following (a simplified classical simulation sketch is given after the list):
\begin{description}
\item[Inputs:] Set $S$ as given by Eq.~(\ref{K-set}) with $K\geq 2$ elements. Ensemble of qutrits in an unknown state $\ket{\psi_{?}} \in S$.
\item[Output:] Number $w$ such that $\ket{\psi_w} = \ket{\psi_{?}}$ up to a desired error margin.
\item[Procedure:] ~\par
\begin{enumerate}\itemsep1pt
\item Divide $S$ into $S^+$ (containing states with $|z_{n1}|>|z_{n2}|$) and $S^-$ (containing states with $|z_{m1}|<|z_{m2}|$).
\item If both $N(S^{+})>0$ and $N(S^{-})>0$ then skip to \textbf{step \ref{step:5}}, otherwise continue to the next step.
\item Determine a proper~$\theta$ with which $S$ can be divided into subsets $S^+$ and $S^-$ with $N(S^{\pm})>0$.
\item Apply the unitary rotation $W(\theta)$ of Eq.~(\ref{W}) to the ensemble with the unknown state $\ket{\psi_{?}}$.
\item\label{step:5} Apply iteratively $M$ times the nonlinear protocol of Fig.~\ref{fig1} to a sufficiently large ensemble representing the unknown state $\ket{\psi_{?}}$.
\item Make a projective measurement to decide whether the unknown state belongs to the set $S^{+}$ or to $S^{-}$, i.e.\ whether $|z_{w1}|>|z_{w2}|$ or $|z_{w1}|<|z_{w2}|$ should be satisfied by $w$.
\item If the result is ``true'' (i.e., in the projective measurement the state $\ket{1}$ was found) then we define $S'$ to be equal to $S^+$, if the result is ``false'' (i.e., in the projective measurement the state $\ket{2}$ was found) then we set $S'$ to $S^-$.
\item If the number of elements of the new set is $N(S')=1$ then the single element of the set is equal to the unknown quantum state (apart from possible $W(\theta)$ rotations), thus $w$ has been found. If $N(S')>1$ then repeat the whole procedure from \textbf{step 1} with $S$ set to $S'$.
\end{enumerate}
\end{description}
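To make the loop structure concrete, the following Python sketch (ours) simulates the classical bookkeeping of the algorithm. The nonlinear protocol is idealized: after sufficiently many iterations of the map (\ref{mapping}) the measurement is assumed to reveal, without error, whether $|z_1|>|z_2|$ holds for the (possibly rotated) unknown state. The $\theta$ selection is a simplified midpoint rule rather than the optimized prescription of Table~\ref{td}, and the candidate states are random examples.
\begin{verbatim}
import numpy as np

def theta_eq(z1, z2):
    # equalizing angle of Eq. (teta_1): W(theta) makes |z1| = |z2|
    return 0.5*np.arctan2(abs(z2)**2 - abs(z1)**2, 2*np.real(z1*np.conj(z2)))

def apply_W(theta, z1, z2):
    c, s = np.cos(theta), np.sin(theta)
    return c*z1 + s*z2, -s*z1 + c*z2

def identify(candidates, unknown):
    # candidates: list of (z1, z2); unknown: coefficients of the black-box state
    idx = list(range(len(candidates)))
    state = {i: candidates[i] for i in idx}   # candidates, including rotations so far
    while len(idx) > 1:
        plus  = [i for i in idx if abs(state[i][0]) > abs(state[i][1])]
        minus = [i for i in idx if abs(state[i][0]) < abs(state[i][1])]
        if not plus or not minus:             # all on one side: rotate by W(theta)
            th = sorted(theta_eq(*state[i]) for i in idx)
            theta = 0.5*(th[len(th)//2 - 1] + th[len(th)//2])
            for i in idx:
                state[i] = apply_W(theta, *state[i])
            unknown = apply_W(theta, *unknown)
            plus  = [i for i in idx if abs(state[i][0]) > abs(state[i][1])]
            minus = [i for i in idx if abs(state[i][0]) < abs(state[i][1])]
        # idealized outcome of the nonlinear protocol applied to the unknown ensemble
        idx = plus if abs(unknown[0]) > abs(unknown[1]) else minus
    return idx[0]

rng = np.random.default_rng(1)
S = [tuple(rng.uniform(0.5, 2, 2)*np.exp(1j*rng.uniform(0, 2*np.pi, 2)))
     for _ in range(5)]
w = 3
print(identify(S, S[w]) == w)   # True
\end{verbatim}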
\begin{figure}
\caption{The average probability of correctly identifying the unknown quantum state among the given set (of size $K$) of qutrit states as a function of the number of iterations~$M$ of the nonlinear transformation in step \ref{step:5} of the algorithm.}
\label{fig6}
\end{figure}
The efficiency of our algorithm is illustrated in Fig.~\ref{fig6}. We numerically simulated the QSI algorithm of Fig.~\ref{fig5} by choosing $20000$ random realizations of sets of qutrit states of different sizes from $K=2$ to $K=6$. The coefficients $z_{ij} = \rho_{ij}\exp( i \varphi_{ij})$ of the states were given by choosing $\rho_{ij}$ and $\varphi_{ij}$ randomly from a uniform distribution in the intervals~$ [1/2,2)$ and $ [0,2\pi)$, respectively. The integer parameter $w$ was randomly chosen in the range $1,2,\dots, K$. As can be seen from Fig.~\ref{fig6} the results of our algorithm scale well with increasing set size. In fact, it is comparable to the well-known ``weighing puzzle'', in which someone has to find a ``false coin'' in a given set of coins by using a balance scale \cite{Chudnov2015}. In our numerical simulations, if one of the subsets $S^+$ or $S^-$ was empty, then for the states of the non-empty subset we calculated the ordered set $\theta_1<\theta_2<\dots<\theta_D$ by using Eq.~(\ref{teta_1}) (where $D$ is the size of the non-empty subset). We counted the number $n^{+}$ ($n^{-}$) of positive (negative) $\theta_{i}$'s and calculated their difference $d=n^{+}-n^{-}$. Then, we determined $\theta$ according to Table~\ref{td}. Let us note that if $\theta_l = \theta_m$ for some $1\leq l\neq m \leq K$, then this procedure fails. As can be seen from Eq.~(\ref{teta_1}) this occurs when $z_{mj} = Az_{lj}$, where $j=1,2$, and $A$ is some constant. In order to solve this problem we can apply the $u_2$ single-qutrit unitary transformation of Eq.~(\ref{single_Us}) in step 3 to every member of the set~(\ref{K-set}). Thus, if we initially have two states $\ket{\psi_l} = \mathcal{N}_l (\ket{0} + z_{l1}\ket{1}+z_{l2}\ket{2})$ and $\ket{\psi_m} = \mathcal{N}_m (\ket{0} + Az_{l1}\ket{1}+Az_{l2}\ket{2})$, then after this transformation we will have $\ket{\psi_l'} = \mathcal{N}_l' (\ket{0} + \frac{1}{z_{l1}}\ket{1}+\frac{z_{l2}}{z_{l1}}\ket{2})$ and $\ket{\psi_m'} = \mathcal{N}_m' (\ket{0} + \frac{1}{Az_{l1}}\ket{1}+\frac{z_{l2}}{z_{l1}}\ket{2})$ and the corresponding $\theta$ angles will be different.
\begin{table}[]
\centering
\begin{tabular}{|c|l|}
\hline
$d=n^{+}-n^{-}$ & optimized $\theta$ \\ \hline
$d=-D$ & $\theta = (\theta_{\lfloor D/2\rfloor} + \theta_{\lfloor D/2\rfloor+1})/2$ \\ \hline
$-D<d\leq-2$ & $\theta = (\theta_{\lfloor - d/2 \rfloor} + \theta_{\lfloor -d/2\rfloor+1})/2$ \\ \hline
$-1\leq d \leq 1$ & $\theta = (\theta_1 - \pi/4)/2$ \\ \hline
$2\leq d < D$ & $\theta = (\theta_{\lfloor D - d/2 \rfloor} + \theta_{\lfloor D-d/2\rfloor+1})/2$ \\ \hline
$d=D$ & $\theta = (\theta_{\lfloor D/2\rfloor} + \theta_{\lfloor D/2\rfloor+1})/2$ \\ \hline
\end{tabular}
\caption{Optimized determination of $\theta$ to divide $S$ into $S^{+}$ and $S^{-}$. We have denoted the lower integer part of a real number $x$ by $\lfloor x \rfloor$.}
\label{td}
\end{table}
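For illustration, the selection rule of Table~\ref{td} can be written compactly as the following Python sketch (a sketch only, assuming the angles $\theta_i$ of Eq.~(\ref{teta_1}) have already been computed; the 1-based indices of the table are converted to Python's 0-based indexing):
\begin{verbatim}
import math

def pick_theta(thetas):
    """Choose the dividing angle theta according to Table 2 (sketch only)."""
    th = sorted(thetas)                 # theta_1 < theta_2 < ... < theta_D
    D = len(th)
    d = sum(t > 0 for t in th) - sum(t < 0 for t in th)   # d = n^+ - n^-
    if abs(d) == D:                     # d = D or d = -D
        i = D // 2                      # floor(D/2), 1-based index
    elif d <= -2:                       # -D < d <= -2
        i = (-d) // 2                   # floor(-d/2)
    elif d >= 2:                        # 2 <= d < D
        i = D - (d + 1) // 2            # floor(D - d/2)
    else:                               # -1 <= d <= 1
        return (th[0] - math.pi / 4) / 2
    return (th[i - 1] + th[i]) / 2      # (theta_i + theta_{i+1}) / 2
\end{verbatim}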
Due to the optimization applied in every loop when dividing the set $S$, the average number of loops needed for the QSI algorithm to complete scales as $\ln K$. In Table~\ref{t1} we show the average number of loops and the corresponding standard deviations, based on numerical simulations with~$20000$ random realizations of the set given by Eq.~(\ref{K-set}).
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Set size & $K=2$ & $K=3$ & $K=4$ & $K=5$ & $K=6$ \\ \hline
Average number of loops & $1$ & $1.89$ & $2.58$ & $3.16$ & $3.66$ \\ \hline
Standard deviation & $0$ & $0.31$ & $0.49$ & $0.57$ & $0.63$ \\ \hline
\end{tabular}
\caption{Average number of loops of the QSI algorithm and their standard deviations for different set sizes.}
\label{t1}
\end{table}
\section{Summary}
\hspace*{1.5em}
We have presented a probabilistic scheme to realize a nonlinear transformation of qutrit states with two stable attracting fixed points. The nonlinear transformation is defined on an ensemble of quantum systems prepared in the same quantum state, and in each elementary operation it uses two pairs of systems to probabilistically produce one system in the transformed state. Therefore, the size of the ensemble required by the protocol grows at least exponentially with the number of iterations.
The nonlinear map can be used to find out whether a given unknown pure state belongs to the subset converging to one of the attractive fixed points. We employed this property for QSI by proposing an algorithm that can be used to identify the quantum state from a finite set $S$ of candidates, with an error margin. We have shown that the number of loops the algorithm uses scales logarithmically with the number of elements $K$ in the set $S$. While the error margin can be made arbitrarily small by increasing the number of iterations $M$ of the nonlinear map, the role and impact of the survival probability remains an open question that deserves further studies.
Our results indicate that probabilistic nonlinear schemes may offer a consistent approach towards QSI of higher dimensional systems, the present study on qutrits being the first step towards this direction.
\section*{Acknowledgments}
\hspace*{1.5em}
The authors P.~V.~P., O.~K.\ and T.~K.\ were supported by the National Research, Development and Innovation Office (Project Nos.\ K115624, K124351, PD120975, 2017-1.2.1-NKP-2017-00001). In addition, O.~K.\ by the J.~Bolyai Research Scholarship, and the Lend\"ulet Program of the HAS (project No.~LP2011-016). I.~J.\ and A.~G.\ have been partially supported by M{\v S}MT RVO 68407700, the Czech Science Foundation (GA{\v C}R) under project number 17-00844S, and by the project ``Centre for Advanced Applied Sciences,'' registry No.\ CZ.02.1.01/0.0/0.0/16\_019/0000778, supported by the Operational Programme Research, Development and Education, co-financed by the European Structural and Investment Funds and the state budget of the Czech Republic.
\begin{thebibliography}{30}
\setlength{\itemsep}{1ex}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Pegg et~al.}(1998)\citenamefont{Pegg, Phillips, and
Barnett}}]{QuantumScissors}
\bibinfo{author}{\bibfnamefont{D.~T.} \bibnamefont{Pegg}},
\bibinfo{author}{\bibfnamefont{L.~S.} \bibnamefont{Phillips}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~M.}
\bibnamefont{Barnett}}, \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{1604} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Koniorczyk et~al.}(2000)\citenamefont{Koniorczyk,
Kurucz, G\'{a}bris, and Janszky}}]{Janszky}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Koniorczyk}},
\bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{Kurucz}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{G\'{a}bris}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Janszky}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{62}},
\bibinfo{pages}{013802} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Nakazato et~al.}(2003)\citenamefont{Nakazato, Takazawa,
and Yuasa}}]{Hiromichi}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Nakazato}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Takazawa}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Yuasa}},
\emph{\bibinfo{journal}{Phys. Rev. Lett.}} \textbf{\bibinfo{volume}{90}},
\bibinfo{pages}{060401} (\bibinfo{year}{2003}).
\bibitem[{\citenamefont{Aschauer et~al.}(2005)\citenamefont{Aschauer, D\"ur,
and Briegel}}]{aschauer_multiparticle_2005}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Aschauer}},
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D\"ur}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.-J.} \bibnamefont{Briegel}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{71}},
\bibinfo{pages}{012319} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Combes and Jacobs}(2006)}]{cooling_by_feedback_control}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Combes}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Jacobs}},
\emph{\bibinfo{journal}{Phys. Rev. Lett.}} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{010504} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Coles and
Piani}(2014)}]{Piani2014_entanglment_by_measurements}
\bibinfo{author}{\bibfnamefont{P.~J.} \bibnamefont{Coles}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Piani}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{010302} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Streltsov et~al.}(2011)\citenamefont{Streltsov,
Kampermann, and Bru\ss{}}}]{Streltsov2011_entanglment_by_measurements}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Streltsov}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Kampermann}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Bru\ss{}}},
\emph{\bibinfo{journal}{Phys. Rev. Lett.}} \textbf{\bibinfo{volume}{106}},
\bibinfo{pages}{160401} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Wu et~al.}(2004)\citenamefont{Wu, Lidar, and
Schneider}}]{Wu_entanglement_generation}
\bibinfo{author}{\bibfnamefont{L.-A.} \bibnamefont{Wu}},
\bibinfo{author}{\bibfnamefont{D.~A.} \bibnamefont{Lidar}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Schneider}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{70}},
\bibinfo{pages}{032322} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Pyshkin
et~al.}(2016{\natexlab{a}})\citenamefont{Pyshkin, Sherman, Luo, You, and
Wu}}]{Pyshkin_compression}
\bibinfo{author}{\bibfnamefont{P.~V.} \bibnamefont{Pyshkin}},
\bibinfo{author}{\bibfnamefont{E.~Y.} \bibnamefont{Sherman}},
\bibinfo{author}{\bibfnamefont{D.-W.} \bibnamefont{Luo}},
\bibinfo{author}{\bibfnamefont{J.~Q.} \bibnamefont{You}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Phys. Rev. B}}
\textbf{\bibinfo{volume}{94}}, \bibinfo{pages}{134313}
(\bibinfo{year}{2016}{\natexlab{a}}).
\bibitem[{\citenamefont{Luchnikov and Filippov}(2017)}]{Filippov2017}
\bibinfo{author}{\bibfnamefont{I.~A.} \bibnamefont{Luchnikov}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~N.}
\bibnamefont{Filippov}}, \emph{\bibinfo{journal}{Phys. Rev. A}}
\textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{022113}
(\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Li et~al.}(2011)\citenamefont{Li, Wu, Wang, and
Yang}}]{Li2011}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Li}},
\bibinfo{author}{\bibfnamefont{L.-A.} \bibnamefont{Wu}},
\bibinfo{author}{\bibfnamefont{Y.-D.} \bibnamefont{Wang}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.-P.} \bibnamefont{Yang}},
\emph{\bibinfo{journal}{Phys. Rev. B}} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{094502} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Pyshkin
et~al.}(2016{\natexlab{b}})\citenamefont{Pyshkin, Luo, You, and
Wu}}]{gsc_paper}
\bibinfo{author}{\bibfnamefont{P.~V.} \bibnamefont{Pyshkin}},
\bibinfo{author}{\bibfnamefont{D.-W.} \bibnamefont{Luo}},
\bibinfo{author}{\bibfnamefont{J.~Q.} \bibnamefont{You}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.-A.} \bibnamefont{Wu}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{93}},
\bibinfo{pages}{032120} (\bibinfo{year}{2016}{\natexlab{b}}).
\bibitem[{\citenamefont{Hertzberg et~al.}(2010)\citenamefont{Hertzberg,
Rocheleau, Ndukum, Savva, Clerk, and
Schwab}}]{hertzberg_back-action-evading_2010}
\bibinfo{author}{\bibfnamefont{J.~B.} \bibnamefont{Hertzberg}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rocheleau}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Ndukum}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Savva}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Nature Physics}}
\textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{213} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Rocheleau et~al.}(2010)\citenamefont{Rocheleau, Ndukum,
Macklin, Hertzberg, Clerk, and Schwab}}]{rocheleau_preparation_2010}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rocheleau}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Ndukum}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Macklin}},
\bibinfo{author}{\bibfnamefont{J.~B.} \bibnamefont{Hertzberg}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{463}}, \bibinfo{pages}{72} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Gily\'{e}n et~al.}(2015)\citenamefont{Gily\'{e}n, Kiss,
and Jex}}]{Gilyen}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gily\'{e}n}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Kiss}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Jex}},
\emph{\bibinfo{journal}{Sci. Rep.}} \textbf{\bibinfo{volume}{6}},
\bibinfo{pages}{20076} (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Nielsen and Chuang}(2000)}]{Nielsen2000}
\bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Nielsen}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Chuang}},
\emph{\bibinfo{title}{Quantum Computation and Quantum Information}}
(\bibinfo{publisher}{Cambridge University Press}, \bibinfo{year}{2000}).
\bibitem[{\citenamefont{Barnett and Croke}(2009)}]{Barnett09}
\bibinfo{author}{\bibfnamefont{S.~M.} \bibnamefont{Barnett}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Croke}},
\emph{\bibinfo{journal}{Adv. Opt. Photon.}} \textbf{\bibinfo{volume}{1}},
\bibinfo{pages}{238} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Bae and Kwek}(2015)}]{Kwek2015}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Bae}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.-C.} \bibnamefont{Kwek}},
\emph{\bibinfo{journal}{Journal of Physics A: Mathematical and Theoretical}}
\textbf{\bibinfo{volume}{48}}, \bibinfo{pages}{083001}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Mack et~al.}(2000)\citenamefont{Mack, Fischer, and
Freyberger}}]{mack_enhanced_2000}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Mack}},
\bibinfo{author}{\bibfnamefont{D.~G.} \bibnamefont{Fischer}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Freyberger}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{62}},
\bibinfo{pages}{042301} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Torres et~al.}(2017)\citenamefont{Torres, Bern\'ad,
Alber, K\'alm\'an, and Kiss}}]{Torres2017}
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Torres}},
\bibinfo{author}{\bibfnamefont{J.~Z.} \bibnamefont{Bern\'ad}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Alber}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{K\'alm\'an}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Phys. Rev. A}}
\textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{023828}
(\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Zhang and Ren}(2018)}]{Zhang2018}
\bibinfo{author}{\bibfnamefont{W.-H.} \bibnamefont{Zhang}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Ren}},
\emph{\bibinfo{journal}{Quantum Information Processing}}
\textbf{\bibinfo{volume}{17}}, \bibinfo{pages}{155} (\bibinfo{year}{2018}).
\bibitem[{\citenamefont{K\'alm\'an and Kiss}(2018)}]{Kalman2018}
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{K\'alm\'an}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Kiss}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{97}},
\bibinfo{pages}{032125} (\bibinfo{year}{2018}).
\bibitem[{\citenamefont{{Xu} et~al.}(2014)\citenamefont{{Xu}, {Yung}, {Xu},
{Boixo}, {Zhou}, {Li}, {Aspuru-Guzik}, and {Guo}}}]{Xu2014}
\bibinfo{author}{\bibfnamefont{J.-S.} \bibnamefont{{Xu}}},
\bibinfo{author}{\bibfnamefont{M.-H.} \bibnamefont{{Yung}}},
\bibinfo{author}{\bibfnamefont{X.-Y.} \bibnamefont{{Xu}}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{{Boixo}}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Nat Photon}}
\textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{113} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Abdumalikov~Jr
et~al.}(2013)\citenamefont{Abdumalikov~Jr, Fink, Juliusson, Pechal, Berger,
Wallraff, and Filipp}}]{AbdumalikovJr2013}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Abdumalikov~Jr}},
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Fink}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Juliusson}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Pechal}},
\bibnamefont{et~al.}, \emph{\bibinfo{journal}{Nature}}
\textbf{\bibinfo{volume}{496}}, \bibinfo{pages}{482} (\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Hayashi et~al.}(2005)\citenamefont{Hayashi, Horibe, and
Hashimoto}}]{Hayashi2005}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Hayashi}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horibe}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Hashimoto}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{72}},
\bibinfo{pages}{052306} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Hayashi et~al.}(2006)\citenamefont{Hayashi, Horibe, and
Hashimoto}}]{Hayashi2006}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Hayashi}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horibe}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Hashimoto}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{73}},
\bibinfo{pages}{012328} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Herzog and Bergou}(2008)}]{Herzog2008}
\bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Herzog}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~A.} \bibnamefont{Bergou}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{032320} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Herzog}(2016)}]{Herzog2016}
\bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Herzog}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{94}},
\bibinfo{pages}{062320} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Hayashi et~al.}(2008)\citenamefont{Hayashi, Hashimoto,
and Horibe}}]{hayashi_state_2008}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Hayashi}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Hashimoto}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horibe}},
\emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{78}},
\bibinfo{pages}{012333} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Chudnov}(2015)}]{Chudnov2015}
\bibinfo{author}{\bibfnamefont{A.~M.} \bibnamefont{Chudnov}},
\emph{\bibinfo{journal}{Discrete Math. Appl.}} \textbf{\bibinfo{volume}{25}},
\bibinfo{pages}{69} (\bibinfo{year}{2015}).
\end{thebibliography}
\end{document}
\begin{document}
\title{Consistency of Hill Estimators in a Linear Preferential Attachment Model\thanks{This work was supported by Army MURI grant W911NF-12-1-0385 to Cornell University.}}
\titlerunning{Hill Estimator for Network Data}
\author{Tiandong Wang \and Sidney I. Resnick}
\authorrunning{T. Wang \and S.I. Resnick}
\institute{Tiandong Wang \at
School of Operations Research and Information Engineering, Cornell University,
Ithaca, NY 14853 \\
\email{tw398@cornell.edu}
\and
Sidney I. Resnick \at
School of Operations Research and Information Engineering, Cornell University,
Ithaca, NY 14853 \\
\email{sir1@cornell.edu}
}
\date{Received: November 15, 2017}
\maketitle
\begin{abstract}
Preferential attachment is widely used to model power-law behavior of
degree distributions in both directed and undirected networks.
Practical analyses of the tail exponent of the power-law degree distribution
use the Hill estimator as one of the key summary statistics, although its consistency has been justified mostly for iid data.
The major goal of this paper is to answer the question of whether the Hill estimator remains consistent when applied to non-iid network data.
To do this, we first derive the asymptotic behavior of the degree sequence by embedding the degree growth of a fixed node into
a birth-immigration process. We then show the convergence of the tail empirical measure, from which
the consistency of the Hill estimator is obtained; this step requires checking the concentration of degree counts.
We give a proof for a particular linear preferential attachment model and use
simulation results as an illustration in other choices of models.
\keywords{Hill estimators \and Power laws \and Preferential Attachment\and Continuous Time Branching Processes}
\subclass{60G70 \and 60B10 \and 60G55 \and 60G57 \and 05C80 \and 62E20}
\end{abstract}
\section{Introduction.}
The preferential attachment model gives a random graph in which nodes
and edges are added to the network based on probabilistic rules,
and is used to \sid{mimic} the evolution of networks such as social networks, collaborator and citation networks, as well as recommender networks. The probabilistic rule depends on the node degree and captures the feature
that nodes with larger degrees tend to attract more edges.
Empirical analysis of social network data shows that degree
distributions follow power laws. Theoretically, this is
true for linear preferential attachment models \sid{which} makes
preferential attachment a popular choice for network modeling
\citep{bollobas:borgs:chayes:riordan:2003, durrett:2010b,
krapivsky:2001,krapivsky:redner:2001,vanderHofstad:2017}.
The preferential attachment mechanism has been applied to both directed and undirected graphs.
Limit theory for degree counts can be found in \citet{resnick:samorodnitsky:2016b}, \citet{bhamidi:2007}, \citet{krapivsky:redner:2001} for the undirected case and \citet{wang:resnick:2015}, \citet{resnick:samorodnitsky:towsley:davis:willis:wan:2016}, \citet{resnick:samorodnitsky:2015}, \citet{wang:resnick:2016}, \citet{krapivsky:2001} for the directed case.
This paper only focuses on the undirected case.
One statistical issue is how to estimate the index of the \sid{degree distribution} power-law
tail. In practice, this is often done by combining a minimum distance
method \cite{clauset:shalizi:newman:2009} with the Hill
estimator \sid{\cite{hill:1975}.}
\sid{Data repositories of large network datasets such as KONECT (http://konect.uni-koblenz.de/)
\cite{kunegis:2013} provide for each dataset key summary
statistics including Hill estimates of degree distribution tail
indices. However, there is no theoretical
justification for such estimates and}
consistency of the Hill estimator has been
proved only for data from a stationary sequence of random variables,
which is assumed to be either iid \cite{mason:1982} or satisfy certain
structural or mixing
assumptions, e.g. \cite{resnick:starica:1995,
resnick:starica:1998b, rootzen:leadbetter:dehaan:1990,hsing:1991}.
Therefore, proving/disproving the
consistency of Hill estimators for network data is a major concern in this paper.
\sid{The Hill estimator and other tail descriptors are often analyzed using
the tail empirical estimator. Using standard point measure notation,
let}
$$\epsilon_x(A)=\begin{cases} 1,& \text{ if }x\in A,\\ 0,& \text{ if
}x\notin A \end{cases}.$$
For \sid{positive iid random variables} $\{X_i:i\geq 1\}$ whose
\sid{distribution has a regularly varying tail} with index $-\alpha<0$,
we have the following convergence in the space of Radon measures on
$(0,\infty]$ of the \sid{sequence of} empirical measures
\begin{equation}\label{conv1}
\sum_{i=1}^n\epsilon_{X_i/b(n)}(\cdot)\Rightarrow
\text{PRM}(\nu_\alpha (\cdot)), \quad\text{with }\quad\nu_\alpha(y,\infty] = y^{-\alpha},y>0,
\end{equation}
to the limit Poisson random measure with mean measure
$\nu_\alpha(\cdot).$
From \eqref{conv1} other \sid{extremal properties of $\{X_n\}$}
follow \cite[Chapter 4.4]{resnick:1987}. \sid{See for example the
application
given in this paper after Theorem \ref{thm:tail_meas}.}
Further, for \sid{any} intermediate sequence $k_n\to\infty$,
$k_n/n\to 0$ as $n\to\infty$, the sequence of tail empirical measures also
\sid{converge to a deterministic limit,}
\begin{equation}\label{conv2}
\frac{1}{k_n}\sum_{i=1}^n\epsilon_{X_i/b(n/k_n)}(\cdot) \Rightarrow
\nu_\alpha (\cdot),\end{equation}
which is \sid{one way} to prove consistency of the Hill estimator for iid data \citet[Chapter 4.4]{resnickbook:2007}.
\sid{We seek a similar dual pair as \eqref{conv1} and \eqref{conv2} for
network models that facilitates the study of the Hill estimator and
extremal properties of node degrees.}
With this goal in mind, we first find the limiting distribution for the degree sequence in a linear preferential attachment model,
from which a similar convergence result to \eqref{conv1} follows.
Embedding the network growth model into a continuous time branching process
(cf. \citet{bhamidi:2007, athreya:2007, athreya:ghosh:sethuraman:2008}) is a useful tool in this case.
We model the growth of the degree of each single node as a birth
process with immigration. Whenever a new node is added
\sid{to} the network, a new birth immigration process is
initiated. In this \sid{embedding}, the total number of nodes in the network
growth model also forms a birth immigration process. \sid{Using} results
from the limit theory of continuous time branching processes
(cf. \citet[Chapter~5.11]{resnick:1992}; \citet{tavare:1987}), we
\sid{give} the limiting distribution of the degree of a fixed
node as well as the maximal degree growth.
\sid{Empirical evidence for simulated networks
lead\wtd{s} to the belief that the} Hill estimator is consistent. However,
proving the analogue of \eqref{conv2} is challenging and
requires showing concentration inequalities for \sid{expected} degree
counts. We \sid{have} only succeeded for a particular linear preferential
attachment model, where each new node must attach to one of the
existing nodes in the graph. \sid{We are not sure the concentration
inequalities always hold for preferential attachment} and discussion of
limitations of the Hill estimator for network data must be left for the
future.
\sid{For a more sophisticated model where we could not verify the
concentration inequalities}, we illustrate consistency of the Hill
estimator coupled with \wtd{a minimum distance method (introduced in \cite{clauset:shalizi:newman:2009})} via
simulation for a range of parameter values; however the
asymptotic distribution of the Hill estimator in this case is
confounding and it is not obviously normal.
\sid{Whether this possible non-normality is due to the minimum distance
threshold selection or due to network data (rather than iid data)
being used, we are not sure at this point.}
The rest of the paper is structured as follows. \sid{After giving
background
on the tail empirical measure and Hill estimator in the rest of this section},
Section~\ref{sec:motiv} gives two linear preferential attachment
models.
Section~\ref{sec:prelim} summarizes \sid{key facts about the pure birth
and the birth-immigration processes.}
We analyze \sid{social network} degree growth in Section~\ref{sec:embed} using a sequence of
birth-immigration processes and \sid{give} the limiting
\sid{empirical measures of normalized degrees in the style of \eqref{conv1}}
for both models under consideration. We prove
the consistency of the Hill estimator for the simpler model in
Section~\ref{sec:Hill} and give simulation results in Section~\ref{sec:sim}
that illustrate the behavior of Hill estimators in the
other model.
\sid{Parameter estimation based on maximum likelihood or approximate
MLE for {\it directed\/}
preferential attachment models is studied in
\cite{wan:wang:davis:resnick:2017}. A comparison between MLE model
based methods and asymptotic extreme value methods is forthcoming.}
\subsection{Background}
\sid{Our approach to the Hill estimator considers it as a functional
of the tail empirical measure so we start with necessary background}
and review standard results (cf. \citet[Chapter 3.3.5]{resnickbook:2007}).
For $\mathbb{E} = (0,\infty]$, let $M_+(\mathbb{E})$ be the set
of non-negative Radon measures on $\mathbb{E}$.
A point measure $m$ is an element of $M_+(\mathbb{E})$ of the form
\begin{equation}\label{eq:pm}
m = \sum_i\epsilon_{x_i}.
\end{equation}
The set $M_p(\mathbb{E})$ is the set of all Radon point measures of the form \eqref{eq:pm}
and $M_p(\mathbb{E})$ is a closed subset of $M_+(\mathbb{E})$ in
the vague metric.
For $\{X_n, n\geq 1\}$ iid and non-negative with common
\sid{regularly varying} distribution
\sid{tail} $\overline{F}\in RV_{-\alpha}$, $\alpha>0$, there
exists a sequence $\{b(n)\}$ such that for a limiting Poisson random
measure with mean measure $\nu_\alpha $ and $\nu_\alpha(y,\infty] =
y^{-\alpha}$ for $y>0$, written \wtd{as} PRM($\nu_\alpha$), we have
\begin{equation}\label{eq:bnX}
\sum_{i=1}^n\epsilon_{X_i/b(n)}\Rightarrow \text{PRM}(\nu_\alpha),\quad\text{ in }M_p((0,\infty]),
\end{equation}
and for some $k_n\to\infty$, $k_n/n\to 0$,
\begin{equation}\label{eq:bnkX}
\frac{1}{k_n}\sum_{i=1}^n\epsilon_{X_i/b(n/k_n)}\Rightarrow \nu_\alpha,\quad\text{ in }M_+((0,\infty]).
\end{equation}
\sid{Note the limit in \eqref{eq:bnX} is random while that in
\eqref{eq:bnkX} is deterministic.}
\sid{Define the Hill estimator $H_{k,n}$ based on $k$ upper order statistics of
$\{X_1,\dots,X_n\}$} as in \cite{hill:1975}
\[
H_{k,n} := \frac{1}{k}\sum_{i=1}^{k}\log\frac{X_{(i)}}{X_{(k+1)}},
\]
where $X_{(1)}\ge X_{(2)}\ge \ldots\ge X_{(n)}$ are order statistics of $\{X_i:1\le i\le n\}$.
In the iid case there are many proofs of consistency (cf. \citet{mason:1982,mason:turova:1994,hall:1982,dehaan:resnick:1998,
csorgo:haeusler:mason:1991a}): For
$k=k_n\to\infty,\,k_n/n\to 0$, we have
\begin{equation}\label{e:Hillconv}
H_{k_n,n} \convp 1/\alpha\qquad\text{as }n\to\infty.
\end{equation}
\sid{The treatment in \citet[Theorem~4.2]{resnickbook:2007} approaches
consistency by showing \eqref{e:Hillconv} follows from \eqref{eq:bnkX}
and we follow this approach for the network context where the iid case
is inapplicable.}
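As a point of reference for the iid case, the following minimal Python sketch (assuming NumPy; it is not code from this paper) computes $H_{k,n}$ from the order statistics and illustrates \eqref{e:Hillconv} on iid Pareto data with tail index $\alpha=2.5$:
\begin{verbatim}
import numpy as np

def hill_estimator(data, k):
    """Hill estimator H_{k,n} based on the k largest order statistics."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]   # X_(1) >= ... >= X_(n)
    return np.mean(np.log(x[:k] / x[k]))               # X_(k+1) is x[k]

# Consistency check on iid Pareto(alpha) data: H_{k_n,n} should be near 1/alpha.
rng = np.random.default_rng(0)
alpha = 2.5
sample = rng.pareto(alpha, size=100_000) + 1.0   # classical Pareto, tail index alpha
print(hill_estimator(sample, k=1_000))           # approximately 1/alpha = 0.4
\end{verbatim}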
\sid{The next section constructs two undirected preferential attachment models,
labelled A and B, and gives behavior of $D_i(n)$, the degree of node
$i$ at the $n$th stage of construction.} Theorem~\ref{thm:tail_meas}
shows that for $\delta$, a parameter in the model construction,
the degree sequences in either Model A or B have empirical measures
\begin{equation}\label{emp_meas}
\sum_{i=1}^n \epsilon_{D_i(n)/n^{1/(2+\delta)}}
\end{equation}
that converge weakly to some \sid{random} limit point measure in $M_p((0,\infty])$.
The question then becomes whether there is an analogy to
\eqref{eq:bnkX} in the network
case so that
\begin{equation}\label{eq:bnkD}
\frac{1}{k_n}\sum_{i=1}^n\epsilon_{D_i(n)/b(n/k_n)}\Rightarrow \nu_{2+\delta},\quad\text{ in }M_+((0,\infty]),
\end{equation}
with some function $b(\cdot)$ and intermediate sequence $k_n$. \sid{This
would facilitate proving consistency of the Hill estimator.}
We \sid{successfully} derive \eqref{eq:bnkD} for Model A in
Section~\ref{sec:Hill} and discuss why we failed for Model B.
For Model A, we give the consistency of the Hill estimator.
\section{Preferential Attachment Models.}\label{sec:motiv}
\subsection{Model setup.}\label{subsec:PA}
We consider an undirected preferential attachment model initiated from
the initial graph $G(1)$, which
consists of one node $v_1$ and a self loop. Node $v_1$ then has degree 2 at stage $n=1$. For $n\ge \sid{1}$,
we obtain a new graph $G(n+1)$ by appending a new node $v_{n+1}$ to the existing graph $G(n)$.
The graph $G(n)$ consists of $n$ edges and $n$ nodes. Denote the set
of nodes in $G(n)$ by $V(n) := \{v_1, v_2, \ldots, v_n\}$.
For $v_i\in V(n)$, $D_i(n)$ is the degree of $v_i$ in $G(n)$.
We consider two ways to construct the random graph and refer to them as Model A and B.
\\
\textbf{Model A}:
Given $G(n)$,
the new node $v_{n+1}$ is connected to one of the existing nodes $v_i\in V(n)$ with probability
\begin{equation}\label{eq:prob1}
\frac{f(D_i(n))+\delta}{\sum_{i=1}^n \left(f(D_i(n))+\delta\right)},
\end{equation}
where \sid{{\it the preferential attachment function\/}} $f(j), j\ge 1$ is deterministic and \sid{non-decreasing}.
In this case, the new node $\sid{v_{n+1}}$ for $n\geq 1$, is always born with degree 1.
\\
\textbf{Model B}: In this model,
given graph $G(n)$, the graph $G(n+1)$ is obtained by \sid{either:}
\begin{itemize}
\item \sid{Adding a new node $v_{n+1}$ and a new edge connecting} to \sid{an} existing node $v_i\in V(n)$ with probability
\begin{equation}\label{eq:prob2}
\frac{f(D_i(n))+\delta}{\sum_{i=1}^n \left(f(D_i(n))+\delta\right) + f(1)+\delta},
\end{equation}
where $\delta >-f(1)$ is a parameter;
or
\item \sid{Adding a new node $v_{n+1}$} with a self loop with probability
\begin{equation}\label{eq:prob3}
\frac{f(1)+\delta}{\sum_{i=1}^n \left(f(D_i(n))+\delta\right) + f(1)+\delta}.
\end{equation}
\end{itemize}
\sid{{\it Linear case\/}:} If the preferential attachment function is $f(j) = j$ for
$j= 1,2,\ldots$, then \sid{the model is called} the \emph{linear preferential attachment model}.
Since every added edge increases the total degree by exactly 2 (the degrees of two nodes by 1 each, or the degree of one node by 2 in the case of a self loop), \sid{we have for both Models A and B that}
$
\sum_{i=1}^{n} D_i(n) = 2n, \,n\ge 1.
$
\sid{Therefore,} the attachment probabilities in \eqref{eq:prob1}, \eqref{eq:prob2} and \eqref{eq:prob3} are
\[
\frac{D_i(n)+\delta}{(2+\delta)n},\quad\frac{D_i(n)+\delta}{(2+\delta)n + 1+\delta}, \quad\text{ and }\quad \frac{1+\delta}{(2+\delta)n + 1+\delta},
\]respectively, where $\delta>-1$ is a constant.
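The two constructions are straightforward to simulate. The following Python sketch (assuming NumPy; an illustration only, not the code used for the simulations reported later) grows the degree sequences of Models A and B with $f(j)=j$ using the attachment probabilities displayed above:
\begin{verbatim}
import numpy as np

def simulate_model_A(n_steps, delta, rng=None):
    """Model A with f(j)=j: returns the degree sequence (D_1(n),...,D_n(n))."""
    rng = rng or np.random.default_rng()
    deg = [2.0]                                  # G(1): node v_1 with a self loop
    for n in range(1, n_steps):
        probs = (np.asarray(deg) + delta) / ((2 + delta) * n)
        i = rng.choice(n, p=probs)               # attach v_{n+1} to node v_{i+1}
        deg[i] += 1
        deg.append(1.0)                          # new node born with degree 1
    return np.asarray(deg)

def simulate_model_B(n_steps, delta, rng=None):
    """Model B with f(j)=j: a new node may instead attach to itself."""
    rng = rng or np.random.default_rng()
    deg = [2.0]
    for n in range(1, n_steps):
        total = (2 + delta) * n + 1 + delta
        w = np.append(np.asarray(deg) + delta, 1 + delta)  # last entry: self loop
        i = rng.choice(n + 1, p=w / total)
        if i < n:
            deg[i] += 1
            deg.append(1.0)
        else:
            deg.append(2.0)                      # self loop: new node has degree 2
    return np.asarray(deg)
\end{verbatim}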
\subsection{Power-law tails.}
\sid{Continuing with $f(j)=j$,} suppose $G(n)$ is a random graph generated by either Model A or B after $n$ steps.
Let $N_k(n)$ be the number of nodes in $G(n)$ with degree equal to $k$, i.e.
\begin{equation}\label{eq:defN}
N_k(n) := \sum_{i=1}^n \textbf{1}_{\{D_i(n) = k\}},
\end{equation}
then $N_{>k} (n) := \sum_{j>k} N_j(n)$, $k\ge 1$, is the number of nodes in $G(n)$ with degree strictly greater than $k$.
For $k= 0$, we set $N_{>0}(n) = n$.
\sid{For both models A and B,} it \sid{is}
shown in \citet[Theorem~8.3]{vanderHofstad:2017}
using concentration inequalities and martingale methods
that
for fixed $k\ge \wtd{1}$, as $n\to\infty$,
\begin{equation}\label{eq:pmfk}
\frac{N_k(n)}{n}\convp p_k =
(2+\delta)\frac{\Gamma(k+\delta)\Gamma(3+2\delta)}{\Gamma(k+3+2\delta)\Gamma(1+\delta)}\sid{\sim
(2+\delta) \frac{\Gamma(3+2\delta)}{\Gamma(1+\delta)}k^{-(3+\delta)}; }
\end{equation}
$\left(p_k\right)_{k\ge 0}$ is a pmf \sid{and the asymptotic form, as
$k\to\infty$, follows
from Stirling.}
\sid{Let $p_{>k}=\sum_{j>k}p_j$ be the complementary \wtd{cdf} and}
by Scheff\'e's lemma as well as
\citet[Equation (8.4.6)]{vanderHofstad:2017}, we have
\begin{equation}\label{eq:def_pk}
\frac{N_{>k}(n)}{n}\convp p_{>k} :=
\frac{\Gamma(k+1+\delta)\Gamma(3+2\delta)}{\Gamma(k+3+2\delta)\Gamma(1+\delta)},
\end{equation}
and again by Stirling's formula we get from \eqref{eq:def_pk} as $k\to\infty$,
\[
p_{>k}\sim \sid{c\cdot } k^{-(2+\delta)}, \qquad c=\frac{\Gamma(3+2\delta)}{\Gamma(1+\delta)}.
\]
In other words, the tail distribution of the asymptotic degree
sequence in a linear preferential attachment model is \sid{asymptotic}
to a power law with tail index $2+\delta$.
In practice, the Hill estimator is widely used to estimate this tail
index.
\sid{Absent prior justification for using the Hill estimator on network
data, we investigate its use.}
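As a quick numerical sanity check (a sketch, not part of the paper's analysis), the local log-log slope of $p_{>k}$ computed from the exact Gamma-function expression in \eqref{eq:def_pk} indeed approaches $-(2+\delta)$:
\begin{verbatim}
from math import lgamma, exp, log

def p_gt(k, delta):
    """Complementary cdf p_{>k} from (eq:def_pk), via log-gamma for stability."""
    return exp(lgamma(k + 1 + delta) + lgamma(3 + 2 * delta)
               - lgamma(k + 3 + 2 * delta) - lgamma(1 + delta))

delta = 0.5
# local slope of log p_{>k} vs log k; should approach -(2 + delta) = -2.5
for k in (10, 100, 1000, 10000):
    slope = (log(p_gt(2 * k, delta)) - log(p_gt(k, delta))) / log(2)
    print(k, round(slope, 3))
\end{verbatim}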
\section{Preliminaries: Continuous Time Markov Branching Processes.}\label{sec:prelim}
In this section, we \sid{review} two continuous time Markov branching
processes \sid{needed} in Section~\ref{subsec:embed}, where we
embed the degree sequence of a fixed network node into a
continuous time branching process and derive the asymptotic limit of
the degree growth.
\subsection{Linear birth processes.}
A linear birth process $\{\zeta(t):t\ge 0\}$ is a continuous time Markov process taking values in the set $\mathbb{N}^+ =\{1,2,3,\ldots\}$ and having a transition rate
$$ q_{i,i+1} = \lambda i, \qquad i\in \mathbb{N}^+, \quad\lambda> 0.$$
The linear birth process $\{\zeta(t):t\ge 0\}$ is a mixed Poisson
process; \wtd{see
\citet[Theorem~5.11.4]{resnick:1992},
\citet{kendall:1966} and \citet{waugh:1971} among other sources.} If $\zeta(0)=1$ then the representation is
\begin{equation}
\zeta(t)=1+N_0\bigl(W(e^{\lambda t} -1)\bigr), \,t\geq 0,
\end{equation}
where \wtd{$\{N_0(t): t\geq 0\}$ is a} unit rate homogeneous Poisson process on $\mathbb{R}_+$ with
$N_0(0)=0$, and $W$ is a unit exponential random
variable independent of $N_0(\cdot)$.
Since $N_0(t)/t\to 1$ almost surely \wtd{as $t\to\infty$}, it follows immediately that
\begin{equation} \label{e:PBconv}
\frac{\zeta(t)}{e^{\lambda t}}\convas W,\qquad\text{as }t\to\infty.
\end{equation}
We use these facts in Section~\ref{subsec:asy} to analyze
the asymptotic behavior of the degree growth in a preferential
attachment network.
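The mixed Poisson representation also gives a convenient way to simulate $\zeta(t)$ at a fixed time. The following Python sketch (assuming NumPy; an illustration only) uses it to check that $e^{-\lambda t}\zeta(t)$ has mean close to $1$, consistent with the Exp(1) limit $W$ in \eqref{e:PBconv}:
\begin{verbatim}
import numpy as np

def yule_at(t, lam, rng):
    """Linear birth process with zeta(0)=1 at time t, simulated through
    the representation zeta(t) = 1 + N_0( W * (exp(lam*t) - 1) )."""
    W = rng.exponential(1.0)
    return 1 + rng.poisson(W * (np.exp(lam * t) - 1.0))

rng = np.random.default_rng(1)
lam = 1.0
# e^{-lam t} zeta(t) converges to an Exp(1) variable, so its mean over
# many independent copies should be close to 1 for moderately large t.
for t in (2.0, 4.0, 6.0):
    vals = [yule_at(t, lam, rng) * np.exp(-lam * t) for _ in range(5000)]
    print(t, round(float(np.mean(vals)), 3))
\end{verbatim}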
\subsection{Birth processes with immigration.}
Apart from individuals within the population giving birth to new
individuals, population size can also increase due to
immigration which is assumed independent of births.
The linear birth process with immigration (B.I. process),
$\{BI(t): t\ge 0\}$, having lifetime parameter $\lambda>0$ and
immigration parameter $\theta\ge 0$ is a continuous time Markov
process with state space $\mathbb{N} =\{0,1,2,3,\ldots\}$ and
transition rate
$$q_{i,i+1} = \lambda i + \theta.$$
When $\theta = 0$ there is no immigration and the B.I. process becomes
a pure birth process.
For $\theta>0$, the B.I. process starting from 0 can be
constructed from a Poisson process and an independent family of iid
linear birth processes \cite{tavare:1987}.
Suppose that $N_\theta (t)$ is the counting function of homogeneous
Poisson points $0<\tau_1<\tau_2<\ldots$ with rate
$\theta$ and independent of this Poisson process we have
independent
copies of a linear birth process $\{\zeta_i(t):t\ge 0\}_{i\ge 1}$
with parameter $\lambda>0$ and $\zeta_i(0) = 1$ for $i\ge 1$.
Let $BI(0) = 0$, then the B.I. process is a shot noise process with form
\begin{equation}\label{eq:defBI}
BI(t) := \sum_{i=1}^\infty
\zeta_i(t-\tau_i)\textbf{1}_{\{t\ge\tau_i\}}=\sum_{i=1}^{N_\theta(t) } \zeta_i(t-\tau_i).
\end{equation}
\sid{
Theorem~\ref{thm:tavare} modifies slightly} the statement of
\citet[Theorem~5]{tavare:1987} summarizing the asymptotic behavior of the B.I. process.
\begin{Theorem}\label{thm:tavare}
For $\{BI(t):t\ge 0\}$ as in \eqref{eq:defBI}, we have as $t\to\infty$,
\begin{equation} \label{e:sigma}
e^{-\lambda t}BI(t) \convas \sum_{i=1}^\infty W_i e^{-\lambda\tau_i} \sid{=:\sigma}
\end{equation}
where $\{W_i: i\ge 1\}$ are independent unit exponential random
variables satisfying for each $i\ge 1$,
$$W_i=\lim_{t\to\infty} e^{-\lambda t}\zeta_i(t).$$
The random variable $\sigma$ in \eqref{e:sigma}
is a.s. finite and has a Gamma density given by
\[
f(x) = \frac{1}{\Gamma(\theta/\lambda)}x^{\theta/\lambda-1} e^{-x},\qquad x>0.
\]
\end{Theorem}
The form of $\sigma$ in \eqref{e:sigma} and its Gamma density is justified in
\cite{tavare:1987}. It can be guessed from
\eqref{eq:defBI} and some \wtd{cavalier} interchange of limits and infinite
sums. The density of $\sigma$ comes from transforming Poisson points
$\{(W_i,\tau_i), i \geq 1\}$, summing and recognizing a Gamma L\'evy
process at $t=1$.
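A direct simulation from the shot-noise representation \eqref{eq:defBI} (a Python sketch assuming NumPy; illustration only) can be used to check Theorem~\ref{thm:tavare}: the empirical mean of $e^{-\lambda t}BI(t)$ approaches the mean $\theta/\lambda$ of the Gamma limit $\sigma$.
\begin{verbatim}
import numpy as np

def bi_at(t, lam, theta, rng):
    """Simulate BI(t) from the shot-noise representation (eq:defBI):
    Poisson(theta) immigration times tau_i on [0,t], each starting an
    independent linear birth process with rate lam and initial value 1."""
    n_imm = rng.poisson(theta * t)
    taus = rng.uniform(0.0, t, size=n_imm)   # arrival times given the count
    total = 0
    for tau in taus:
        W = rng.exponential(1.0)
        total += 1 + rng.poisson(W * (np.exp(lam * (t - tau)) - 1.0))
    return total

rng = np.random.default_rng(2)
lam, theta, t = 1.0, 2.0, 6.0
vals = [bi_at(t, lam, theta, rng) * np.exp(-lam * t) for _ in range(3000)]
print(round(float(np.mean(vals)), 3))   # approximately E(sigma) = theta/lam = 2
\end{verbatim}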
\section{Embedding Process.}\label{sec:embed}
\sid{Our approach to the weak convergence of the \sid{sequence of empirical measures} in
\eqref{emp_meas} embeds the degree sequences $\{D_i(n), 1 \leq i
\leq n, n \geq 1\}$
into a B.I. process. The embedding idea is proposed in
\cite{athreya:ghosh:sethuraman:2008} and we tailor it for our setup
finding it
flexible enough to accommodate both linear preferential attachment
Models A and B introduced in Section~\ref{subsec:PA}.}
\subsection{Embedding.}\label{subsec:embed}
\sid{Here is how we embed the network growth model} using a sequence of independent B.I. processes.
\subsubsection{Model A and B.I. processes.}
Model A is the simpler case where \sid{a new node is not allowed to
have self loop.}
Let $\{BI_i(t):t\ge 0\}_{i\ge 1}$ be independent B.I. processes such that
\begin{equation}\label{eq:initial}
BI_1(0) = 2, \quad BI_i(0) = 1,\quad \forall i\ge 2.
\end{equation}
\sid{Each has} transition rate $\sid{q_{j,j+1}}=j+\delta$,
with $\delta>-1$.
For $i\ge 1$, let $\{\tau^{(i)}_k: k\ge 1\}$ be the jump times of
the B.I. process $\{BI_i(t): t\ge 0\}$
\sid{and set} $\tau^{(i)}_0 := 0$ for all $i\ge 1$.
Then for $k\ge 1$,
\[
BI_1(\tau^{(1)}_k) = k+2,\qquad BI_i(\tau^{(i)}_k) = k+1, \, i\ge 2.
\]
Therefore,
\[
\tau^{(1)}_k - \tau^{(1)}_{k-1} \sim \text{Exp}(k+1+\delta),\quad\mbox{and }\quad
\tau^{(i)}_{k}-\tau^{(i)}_{k-1} \sim \text{Exp}(k+\delta), \, i\ge 2.
\]
and $\{\tau^{(i)}_{k}-\tau^{(i)}_{k-1}: i\ge 1, k\ge 1\}$ are independent.
Set $T_1^A=0$ and relative to $BI_1(\cdot)$ define
\begin{equation}\label{eq:defT2}
T^A_2:= \tau^{(1)}_1,
\end{equation}
i.e. the first time that $BI_1(\cdot)$ jumps.
Start \sid{the} new B.I. process $\{BI_2(t-T^A_2):{t\ge T^A_2}\}$
at $T^A_2$ and
let $T^A_3$ be the first time after $T^A_2$ that either $BI_1(\cdot)$
or $BI_2(\cdot)$ jumps, so that
\[
T^A_3 = \text{min}\{T^A_{i}+\tau^{(i)}_k: k\ge 1, T^A_{i}+\tau^{(i)}_k>T^A_2, i=1,2\}.
\]
Start a new, independent B.I. process $\{BI_3(t-T^A_3)\}_{t\ge T^A_3}$
at $T^A_3$. See Figure~\ref{fig:embed}, \sid{which assumes} $
\tau_1^{(2)}+T_2^A< \tau_2^{(1)}$.
Continue in this way. When $n$ lines have been created, define
$T^A_{n+1}$ to be the first time after $T^A_n$ that one of the processes
$\{BI_i(t-T^A_i): t\ge T^A_i\}_{1\le i\le n}$ jumps, i.e.
\begin{equation}\label{eq:defTn}
T^A_{n+1} := \text{min}\{T^A_{i}+\tau^{(i)}_k: k\ge 1, T^A_{i}+\tau^{(i)}_k>T^A_{n}, 1\le i\le n\}.
\end{equation}
At $T^A_{n+1}$, start a new, independent B.I. process
$\{BI_{n+1}(t-T^A_{n+1})\}_{t\ge T^A_{n+1}}$.
\newsavebox{\mytikzpic}
\begin{lrbox}{\mytikzpic}
\begin{tikzpicture}[scale=1.15]
\begin{scope}
\draw [thick,->] (0,0) -- (9,0) node[below right] {$\scriptstyle t$};
\draw [thick,->] (2,-1) -- (9,-1) node[below right] {$\scriptstyle t$};
\draw [thick,->] (4,-2) -- (9,-2) node[below right] {$\scriptstyle t$};
\foreach \x in {0,2,4}
\draw [very thick] (\x cm,3pt) -- (\x cm,-2pt);
\foreach \x in {6}
\draw (\x cm,3pt) -- (\x cm,-2pt);
\draw [thick](2, 4pt) -- (2,-1);
\draw [thick](4, 4pt) -- (4,-2);
\draw [very thick](2, -0.9) -- (2, -1.05);
\draw [very thick](4, -0.9) -- (4, -1.05);
\draw [very thick](4, -1.9) -- (4, -2.05);
\draw (7, -0.9) -- (7, -1.05);
\draw (0,0) node[above = 2pt] {$T^A_1 = 0$}
node[above = 20pt]{$ BI_1(0)=2 $};
\draw (2,0) node[above = 2pt] {$T^A_2 = \tau_1^{(1)}$}node[above= 20pt]{$ BI_1(T^A_2)=3 $};
\draw (4.5,0) node[above = 2pt] {$T^A_3 = \tau_1^{(2)}+T_2^A$};
\draw (6,0) node[below = 2pt]{$\tau_2^{(1)}$};
\draw (8,0) node[below=3pt] {$ \cdots $} node[above=3pt] {$ \cdots $};
\draw (2,-1) node[below = 2pt]{$ BI_2(0)= 1$};
\draw (7,-1) node[below = 2pt]{$\tau_2^{(2)}+T^A_2$};
\draw (4, -2) node[below = 2pt]{$BI_1(T^A_3) = 3$}
node[below = 20pt]{$BI_2(T^A_3-T^A_2) = 2$}
node[below = 38pt]{$BI_3(0) = 1$};
\draw (8,-1) node[below=3pt] {$ \cdots $} node[above=3pt] {$ \cdots $};
\draw (8,-2) node[below=3pt] {$ \cdots $} node[above=3pt] {$ \cdots $};
\end{scope}
\end{tikzpicture}
\end{lrbox}
\begin{figure}
\centering
\usebox{\mytikzpic}
\caption{Embedding procedure for Model A assuming $ \tau_1^{(2)}+T_2^A< \tau_2^{(1)}$.}\label{fig:embed}
\end{figure}
\subsubsection{Model B and BI processes.}
In Model B, \sid{a new node may} be born with a self loop
\sid{but} the B.I. process framework can \sid{still be used.}
We keep the independent sequence of $\{BI_i(t): t\ge 0\}_{i\ge 1}$ initialized as in \eqref{eq:initial},
as well as the definition of $\{\tau^{(i)}_k: k\ge 1\}$ for $i\ge 1$.
\sid{Set $T_0^B=T^B_1=0$} and start \emph{two} B.I. processes
$BI_1(\cdot)$ and $BI_2(\cdot)$ at $T^B_1$.
At time $T^B_n$ with $n\ge 1$, there exist $n+1$ B.I. processes.
We define $T^B_{n+1}$ as the first time after $T^B_n$ that one of the processes
$\{BI_i(t-T^B_{i-1}): t\ge T^B_{i-1}\}_{1\le i\le n+1}$ jumps, i.e.
\begin{equation}\label{eq:defTnb}
T^B_{n+1} := \text{min}\{T^B_{i-1}+\tau^{(i)}_k: k\ge 1, T^B_{i-1}+\tau^{(i)}_k>T^B_{n}, 1\le i\le n+1\},
\end{equation}
and start a new, independent B.I. process $\{BI_{n+2}(t-T^B_{n+1})\}_{t\ge T^B_{n+1}}$ at $T^B_{n+1}$.
\subsubsection{Embedding.}
The following embedding theorem is similar to the one proved in
\cite{athreya:ghosh:sethuraman:2008} and summarizes how to embed in
the B.I. constructions.
\begin{Theorem}\label{thm:embed} \sid{Fix $n\geq 1$.}\\
(a) For Model A, suppose
$$\sid{\boldsymbol{\mathcal{D}}^A(n):=\bigl(D_1^A(n),\dots,D_n^A(n) \bigr)}$$ \sid{is} the degree sequence of
nodes in the graph $G(n)$
and $\{T^A_n\}_{n\ge 1}$ is defined as in \eqref{eq:defTn}. For each fixed $n$, define
\begin{align*}
\widetilde{\boldsymbol{\mathcal{D}}}^A(n) &:= (BI_1(T^A_n), BI_2(T^A_n-T^A_2), \ldots, BI_{n-1}(T^A_n-T^A_{n-1}), BI_n(0) ),
\end{align*}
and then $\boldsymbol{\mathcal{D}}^A(n)$ and $\widetilde{\boldsymbol{\mathcal{D}}}^A(n)$
have the same distribution in $\mathbb{R}^n$.\\
(b) Analogously, for Model B,
the degree sequence $$\boldsymbol{\mathcal{D}}^B(n):=(D_1^B(n), \ldots, D_n^B(n))$$ and
$$\widetilde{\boldsymbol{\mathcal{D}}}^B(n) := (BI_1(T^B_n), BI_2(T^B_n), \ldots, BI_{n-1}(T^B_n-T^B_{n-2}), BI_n(T^B_n-T^B_{n-1}) )$$
have the same distribution in $\mathbb{R}^n$.
\end{Theorem}
\begin{proof}
By the construction of Model A, at each $T^A_n$, $n\ge 2$, we \sid{start} a new
B.I. process \sid{$BI_n(\cdot)$} with initial value equal to 1 and
one of $BI_i$, $1\le i\le n-1$ also increases by 1. This makes the sum of the values of $BI_i$, $1\le i\le n$, increase by 2 so that
\[
\sum_{i=1}^n \left(BI_i(T^A_n-T^A_i)+\delta\right) = (2+\delta)n.
\]
The rest is essentially the proof of
\citet[Theorem~2.1]{athreya:ghosh:sethuraman:2008} \sid{which we now outline.}
Both $\{ \boldsymbol{\mathcal{D}}^A(n),n\geq 1\}$ and $\{\widetilde{\boldsymbol{\mathcal{D}}}^A(n), n \geq 1\}$
are Markov on the state space $\cup_{n\geq 1}\mathbb{R}_+^n$ since
\begin{align*}
\boldsymbol{\mathcal{D}}^A(n+1)=&\bigl(\boldsymbol{\mathcal{D}}^A(n),1\bigr) +\bigl(\boldsymbol e_{J_{n+1}}^{(n)},0\bigr),\\
\widetilde{\boldsymbol{\mathcal{D}}}^A(n+1)=&\bigl(\widetilde{\boldsymbol{\mathcal{D}}}^A(n),1\bigr)+\bigl(\boldsymbol e_{L_{n+1}}^{(n)},0\bigr),
\end{align*}
where for $n
\geq 1,$ $\boldsymbol e^{(n)}_j$ is a vector of length $n$ of $0$'s except for a
$1$ in the $j$-th \wtd{entry} and
$$P[J_{n+1}=j|\boldsymbol{\mathcal{D}}^A(n)]=\frac{D^A_j(n)+\delta}{(2+\delta)n},\qquad 1
\leq j \leq n,$$
and $L_{n+1}$ records which B.I. process in $\{BI_i(t-T_i^A): t\ge T_i^A\}_{1\le i\le n}$ is the first
to have a new birth after $T^A_n$.
When $n =1$,
\[
\widetilde{\boldsymbol{\mathcal{D}}}^A(1) = BI_1(0) = 2 = D_1^A(1) = \boldsymbol{\mathcal{D}}^A(1),
\]
\sid{so to prove equality in distribution for any $n$,}
it suffices to verify that the transition probability from
$\widetilde{\boldsymbol{\mathcal{D}}}^A(n)$ to $\widetilde{\boldsymbol{\mathcal{D}}}^A(n+1)$ is the same as that
from
${\boldsymbol{\mathcal{D}}}^A(n)$ to ${\boldsymbol{\mathcal{D}}}^A(n+1)$.
According to the preferential attachment setup, we have
\begin{align}
\textbf{P} \Big({\boldsymbol{\mathcal{D}}}^A(n+1) &=(d_1, d_2, \ldots, d_i+1, d_{i+1},\ldots, d_n, 1)\Big\vert {\boldsymbol{\mathcal{D}}}^A(n) = (d_1, d_2, \ldots, d_n)\Big)\nonumber\\
&= \frac{d_i+\delta}{(2+\delta)n}, \quad 1\le i \le n.\label{transprob1}
\end{align}
At time $T_n^A$, there are $n$ B.I. processes and each of them has a population size of $BI_i(T_n^A-T_i^A)$, $1\le i\le n$.
Therefore, $T_{n+1}^A- T_n^A$ is the minimum of $n$ independent exponential random variables, $\{E^{(i)}_{n}\}_{1\le i\le n}$, with means
\[
\left(BI_i(T_n^A-T_i^A)+\delta\right)^{-1},\quad 1\le i\le n,
\]
which gives for any $1\le i \le n,$
\begin{align*}
\textbf{P}&\Big(L_{n+1}=i\Big\vert \widetilde{\boldsymbol{\mathcal{D}}}^A(n) = (d_1, d_2, \ldots, d_n)\Big)\\
=&\textbf{P} \Big(\widetilde{\boldsymbol{\mathcal{D}}}^A(n+1) =(d_1, d_2, \ldots, d_i+1, d_{i+1},\ldots, d_n, 1)\Big\vert \widetilde{\boldsymbol{\mathcal{D}}}^A(n)
= (d_1, \ldots, d_n)\Big)\\
=& \textbf{P}\Big(E^{(i)}_{n}<\bigwedge_{j=1, j\neq i}^{n} E^{(j)}_{n}\Big\vert\widetilde{\boldsymbol{\mathcal{D}}}^A(n) = (d_1, d_2, \ldots, d_n)\Big)\\
=& \frac{BI_i(T_n^A-T_i^A)+\delta}{\sum_{i=1}^n \left(BI_i(T^A_n-T^A_i)+\delta\right)}=
\frac{d_i+\delta}{(2+\delta)n}.
\end{align*}
This agrees with the transition probability in \eqref{transprob1}, thus completing the proof for Model A.
For Model B, the proof follows in a similar way except that for each $n\ge 1$, $T_{n+1}^B- T_n^B$ is
the minimum of $n+1$ independent exponential random variables with means
\[
\left(BI_i(T_n^B-T_{i-1}^B)+\delta\right)^{-1},\quad 1\le i\le n+1,
\]
so that for $1\le i\le n$,
\begin{align*}
\textbf{P} \Big(\widetilde{\boldsymbol{\mathcal{D}}}^B(n+1) &=(d_1, d_2, \ldots, d_i+1, d_{i+1},\ldots, d_n, 1)\Big\vert \widetilde{\boldsymbol{\mathcal{D}}}^B(n)
= (d_1, d_2, \ldots, d_n)\Big)\\
&= \frac{BI_i(T_n^B-T_{i-1}^B)+\delta}{\sum_{i=1}^{n+1} \left(BI_i(T_n^B-T_{i-1}^B)+\delta\right)}=
\frac{d_i+\delta}{(2+\delta)n+1+\delta},\\
\intertext{and}
\textbf{P}\Big(\widetilde{\boldsymbol{\mathcal{D}}}^B(n+1) &= (d_1, d_2, \ldots, d_n, 2)\Big\vert \widetilde{\boldsymbol{\mathcal{D}}}^B(n) = (d_1, d_2, \ldots, d_n)\Big)\\
&= \frac{1+\delta}{(2+\delta)n + 1+\delta}.
\end{align*}
\end{proof}
\begin{Remark}{\rm
This B.I. process construction can also be generalized for other choices of the preferential attachment functions $f$.
For example, its applications to the super- and sub-linear preferential attachment models are studied in \cite{athreya:2007}.}
\end{Remark}
\subsection{Asymptotic properties.}\label{subsec:asy}
One important reason to use the embedding technique specified in Section~\ref{subsec:embed} is that asymptotic behavior of the degree growth in a preferential attachment model can be characterized explicitly.
These asymptotic properties then help us derive weak convergence of the empirical measure, which is analogous to \eqref{eq:bnX} in the iid case.
\subsubsection{Branching times.}
We first consider the asymptotic behavior of the branching times $\{T^A_n\}_{n\ge 1}$ and $\{T^B_n\}_{n\ge 1}$, which are defined in Section~\ref{subsec:embed}.
\begin{Proposition}\label{prop:asy_Tn}
For $\{T^A_n\}_{n\ge 1}$ and $\{T^B_n\}_{n\ge 1}$ defined in \eqref{eq:defTn} and \eqref{eq:defTnb} respectively, we have
\begin{align}
\frac{n}{e^{(2+\delta)T^A_n}}\convas W_A,\qquad W_A &\sim \text{Exp}\left(1\right);\label{eq:TAn_asy}\\
\intertext{ and }
\frac{n}{e^{(2+\delta)T^B_n}}\convas W_B,\qquad W_B &\sim \text{Gamma}\left(\frac{3+2\delta}{2+\delta}, 1\right).\label{eq:TBn_asy}
\end{align}
\end{Proposition}
\begin{proof}
Define two counting processes
\[
N_A(t) := \frac{1}{2}\sum_{i=1}^\infty BI_i(t-T^A_{i})\textbf{1}_{\{t\ge T^A_i\}}
\]
in Model A, and
\[
N_B(t) := \frac{1}{2}\sum_{i=1}^\infty BI_i(t-T^B_{i-1})\textbf{1}_{\{t\ge T^B_i\}}
\]
in Model B.
In either case, we have
\[
N_l(t)\textbf{1}_{\big\{t\in[T^l_n, T^l_{n+1})\big\}} = n, \quad l= A,B.
\]
In other words, $\{T^l_n\}_{n\ge 1}$ are the jump times of the counting process $N_l(\cdot)$, for $l=A,B$, with the following structure
\begin{align}
\{T^A_{n+1}- T^A_{n}: n\ge 1\} &\stackrel{d}{=} \left\{\frac{A_i}{(2+\delta)i}, i\ge 1\right\},\label{eq:NA}\\
\{T^B_{n+1}- T^B_{n}: n\ge 1\} &\stackrel{d}{=} \left\{\frac{B_i}{(2+\delta)i+1+\delta}, i\ge 1\right\},\label{eq:NB}
\end{align}
where $\{A_i: i\ge 1\}$ and $\{B_i: i\ge 1\}$ are iid unit exponential random variables.
From \eqref{eq:NA}, we see that $N_A(\cdot)$ is a pure birth process with $N_A(0)=1$ and transition rate
\[
q^A_{i,i+1} = (2+\delta)i ,\quad i\ge 1.
\]
\sid{Replacing $t$ with $T_n^A$} in \eqref{e:PBconv}
gives \eqref{eq:TAn_asy}.
By \eqref{eq:NB}, $N_B(\cdot)$ is a B.I. process with $N_B(0)=1$ and
transition rate $q^B_{i,i+1}=(2+\delta)i+1+\delta$, $i\ge 1$.
In order to apply Theorem~\ref{thm:tavare}, which assumes $N_B(0)=0$, we
define $N_B'(t):= N_B(t)-1$ for all $t\ge 0$. Then $N'_B$ is a
B.I. process with $N_B'(0)=0$ and transition rate
$(2+\delta)i+3+2\delta$, for $i\ge 0$.
Therefore, \eqref{eq:TBn_asy} follows directly from Theorem~\ref{thm:tavare}.
\end{proof}
\subsubsection{Convergence of the measure.}
Using embedding techniques, Theorem~\ref{thm:tail_meas} gives the convergence of the empirical measure, which draws an analogy to \eqref{eq:bnX} in the iid case.
\begin{Theorem}\label{thm:tail_meas}
Suppose that
\begin{enumerate}
\item[(1)]$\{T^l_i: i\ge 1\}$, $l=A,B$ are distributed as in \eqref{eq:NA} and \eqref{eq:NB}.
\item[(2)]$W_l$, $l=A,B$ are limit random variables as given in \eqref{eq:TAn_asy} and \eqref{eq:TBn_asy}.
\item[(3)]$\{\sigma_i\}_{i\ge 1}$ \wtd{is} a sequence of independent Gamma
random variables specified in \eqref{eq:BI_asy} and
\eqref{eq:sigma} below.
\end{enumerate}
Then in $M_p((0,\infty])$, we have for $\delta\ge 0$,
\begin{subequations}\label{eq:meas_asy}
\begin{align}
\sum_{i=1}^n &\epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(\cdot)\Rightarrow
\sum_{i=1}^\infty \epsilon_{\sigma_i
e^{-T^A_i}/W_A^{1/(2+\delta)}}(\cdot),\label{eq:meas_asyA}\\
\sum_{i=1}^n &\epsilon_{D^B_i(n)/n^{1/(2+\delta)}}(\cdot)\Rightarrow \sum_{i=1}^\infty \epsilon_{\sigma_i e^{-T^B_{i-1}}/W_B^{1/(2+\delta)}}(\cdot).\label{eq:meas_asyB}
\end{align}
\end{subequations}
\end{Theorem}
\begin{Remark}\label{rem:applic} {\rm
From \eqref{eq:meas_asyA} we get for any fixed $k\geq 1$, that in $\mathbb{R}_+^k$,
\begin{equation}\label{e:bigly}
\Bigl( \frac{D^A_{(1)}(n)}{n^{1/(2+\delta)}}, \dots,
\frac{D^A_{(k)}(n)}{n^{1/(2+\delta)}}\Bigr) \Rightarrow
W_A^{-1/(2+\delta)}
\Bigl(
(\sigma_\cdot
e^{-T^A_\cdot} )_{(1)},\dots, (\sigma_\cdot
e^{-T^A_\cdot} )_{(k)} \Bigr),
\end{equation}
where a subscript inside parentheses indicates ordering so that
$D^A_{(1)}(n) \geq \dots \geq D^A_{(k)}(n)$ and the limit on the right
side of \eqref{e:bigly} represents the ordered $k$ largest points from the
right side of \eqref{eq:meas_asyA}.
A similar result for Model B follows from \eqref{eq:meas_asyB}.
}
\end{Remark}
To prove Theorem~\ref{thm:tail_meas}, we first need to show the following lemma, which gives the
asymptotic limit of the degree sequence under the B.I. process framework.
\begin{Lemma}\label{lemma:degseq}
Suppose that
\begin{enumerate}
\item[(1)]$\{T^l_i: i\ge 1\}$, $l=A,B$ are distributed as in \eqref{eq:NA} and \eqref{eq:NB}.
\item[(2)]$W_l$, $l=A,B$ are limit random variables as given in \eqref{eq:TAn_asy} and \eqref{eq:TBn_asy}.
\end{enumerate}
Then we have the following convergence results \sid{pertinent to} the degree sequence $\{D_i^l(n):1\le i\le n\}$, for $l=A,B$:
\begin{enumerate}
\item[(i)]
For each $i\ge 1$,
\begin{subequations}\label{eq:BI_asy}
\begin{align}
\frac{BI_i(T^A_n - T^A_i)}{e^{T^A_n}} &\convas \sigma_i e^{-T^A_i}, \label{eq:BI_asyA}\\
\frac{BI_i(T^B_n - T^B_{i-1})}{e^{T^B_n}} &\convas \sigma_i e^{-T^B_{i-1}},\label{eq:BI_asyB}
\end{align}
\end{subequations}
where $\{\sigma_i\}_{i\ge 1}$ are a sequence of independent Gamma random variables with
\begin{equation}\label{eq:sigma}
\sigma_1\sim \text{Gamma}(2+\delta,1),\quad\text{and} \quad \sigma_i\sim \text{Gamma}(1+\delta,1),\quad i\ge 2.
\end{equation}
Furthermore, \sid{for $i\ge
1$,} $\sigma_i$ is independent of $e^{-T^A_i}$ in Model A and $\sigma_i$ is independent of $e^{-T^B_{i-1}}$ in Model B.
\item[(ii)]
For $\delta>-1$,
\begin{subequations}\label{eq:max_asy}
\begin{align}
\max_{i\ge 1}\, &\frac{D^A_i(n)}{n^{1/(2+\delta)}}\,\sid{\convp}\,W_A^{-1/(2+\delta)}\max_{i\ge 1} \sigma_i e^{-T^A_i},\label{eq:max_asyA}\\
\max_{i\ge 1}\, &\frac{D^B_i(n)}{n^{1/(2+\delta)}}\,\sid{\convp}\, W_B^{-1/(2+\delta)}\max_{i\ge 1} \sigma_i e^{-T^B_{i-1}},\label{eq:max_asyB}
\end{align}
\end{subequations}
where we set $T^B_0:=0$ and $D^l_i(n):=0$ for all $i\ge n+1$, $l=A,B$.
\end{enumerate}
\end{Lemma}
\begin{proof}
(i) For the B.I. processes $\{BI_i(\cdot)\}_{i\ge 1}$ defined here, all of them have initial values greater than 0.
Hence, in order to apply the asymptotic results in \cite{tavare:1987}, we need to modify them such that they all start with 0.
To do this, set for all $t\ge 0$,
\[
BI'_1(t) := BI_1(t) - 2, \qquad BI'_i(t) := BI_i(t) - 1, \quad i\ge 2,
\]
and we have $BI'_i(0) = 0$ for all $i\ge 1$. The transition rate needs to be changed accordingly, i.e. the process $BI'_1(\cdot)$
has transition rate $q'_{j,j+1}=j+2+\delta$ and that for $BI'_i(\cdot)$, $i\ge 2$, becomes $ j+1+\delta$, $j\ge 0$.
Throughout the rest of the proof of Lemma~\ref{lemma:degseq}, we only show the case for Model A and the result for Model B follows from the same argument.
Now applying Theorem~\ref{thm:tavare} gives that as $t\to\infty$,
\begin{align*}
\frac{BI_i(t-T^A_{i})}{e^{t-T^A_{i}}} & \stackrel{\text{a.s.}}{\longrightarrow} \sigma_i ,\quad i\ge 1,
\end{align*}
where $\{\sigma_i\}_{i\ge 1}$ are independent Gamma random variables with
\[
\sigma_1\sim \text{Gamma}(2+\delta,1)\quad\text{and} \quad \sigma_i\sim \text{Gamma}(1+\delta,1),\quad i\ge 2.
\]
Thus as $n\to\infty$,
\begin{align*}
\frac{BI_i(T^A_n-T^A_{i})}{e^{T^A_n-T^A_{i}}} & \stackrel{\text{a.s.}}{\longrightarrow} \sigma_i,\quad i\ge 1,
\end{align*}
which gives \eqref{eq:BI_asyA}.
For $i\ge 2$, the independence of $\sigma_i$ and $T^A_i$ follows from the construction and
this completes the proof of (i).\\
(ii) Combining \eqref{eq:BI_asyA} with \eqref{eq:TAn_asy}, we have for fixed $1\le i\le n$,
\[
\frac{BI_i(T^A_n-T^A_i)}{n^{1/(2+\delta)}}\convas \frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}},
\]
and $BI_i(T^A_n-T^A_i) = 0$ for $i\ge n+1$.
By Theorem~\ref{thm:embed}, it suffices to show
\[
\max_{i\ge 1}\frac{BI_i(T^A_n-T^A_i)}{n^{1/(2+\delta)}}\convas \max_{i\ge 1}\frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}},
\]
which is proved in \citet[Theorem~1.1(iii)]{athreya:ghosh:sethuraman:2008}.
\end{proof}
With the preparation in Lemma~\ref{lemma:degseq}, we are ready to prove the convergence result in Theorem~\ref{thm:tail_meas}.\\
{\it Proof of Theorem~\ref{thm:tail_meas}.}
Note that the limit random variables $$\sigma_i e^{-T^A_i}W_A^{-1/(2+\delta)},\quad i\ge 1,$$
have continuous distributions, so for any $y>0$,
\[
\textbf{P}\left(\sum_{i=1}^\infty \epsilon_{\sigma_i e^{-T^A_i}/W_A^{1/(2+\delta)}}(\{y\}) = 0\right) = 1.
\]
Hence, by Kallenberg's theorem for weak convergence to a point process on an interval (see \citet[Theorem~4.18]{kallenberg:2017} and
\citet[Proposition~3.22]{resnick:1987}), proving \eqref{eq:meas_asyA} requires checking
\begin{enumerate}
\item[(a)] For $y>0$, as $n\to\infty$,
\begin{equation}\label{eq:meas_cond1}
\textbf{E} \left(\sum_{i=1}^n \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty]\right)
\to \textbf{E}\left( \sum_{i=1}^\infty \epsilon_{\sigma_i e^{-T^A_i}/W_A^{1/(2+\delta)}}(y,\infty]\right).
\end{equation}
\item[(b)] For $y>0$, as $n\to\infty$,
\begin{align}\label{eq:meas_cond2}
\textbf{P}&\left(\sum_{i=1}^n \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty] = 0\right)\nonumber\\
&\longrightarrow \textbf{P}\left( \sum_{i=1}^\infty \epsilon_{\sigma_i e^{-T^A_i}/W_A^{1/(2+\delta)}}(y,\infty] = 0\right).
\end{align}
\end{enumerate}
To show \eqref{eq:meas_cond1}, first note that for any $M>0$,
\begin{align*}
\textbf{E} \left(\sum_{i=1}^M \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty]\right)
& =\sum_{i=1}^M \textbf{P}\left(\frac{D^A_i(n)}{n^{1/(2+\delta)}}>y\right)\\
&\longrightarrow \sum_{i=1}^M\textbf{P}\left(\sigma_i e^{-T^A_i}W_A^{-1/(2+\delta)}>y\right)\\
& = \textbf{E}\left( \sum_{i=1}^M \epsilon_{\sigma_i e^{-T^A_i}/W_A^{1/(2+\delta)}}(y,\infty]\right),
\end{align*}
as $n\to\infty$.
By Chebyshev's inequality we have for any $k>2+\delta$,
\begin{align}
\textbf{E} \left(\sum_{i=M+1}^n \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty]\right)
& =\sum_{i=M+1}^n \textbf{P}\left(\frac{D^A_i(n)}{n^{1/(2+\delta)}}>y\right)\nonumber\\
&\le y^{-k} \sum_{i=M+1}^n \textbf{E}\left[\left(\frac{D^A_i(n)}{n^{1/(2+\delta)}}\right)^k\right].\label{eq:tail_meansum}
\end{align}
Also, we have for $\delta\ge 0$,
\[
\textbf{E}\left[\left(\frac{D^A_i(n)}{n^{1/(2+\delta)}}\right)^k\right]\le \textbf{E}\left[\left(\frac{D^A_i(n)+\delta}{n^{1/(2+\delta)}}\right)^k\right]
\le \textbf{E}\left[\left(\frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}}\right)^k\right],
\]
where the last inequality follows from the result in \citet[Equation~(8.7.26)]{vanderHofstad:2017}.
From \citet[Equation~(8.7.22)]{vanderHofstad:2017}, we have
\[
\textbf{E}\left[\left(\frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}}\right)^k\right]
= \frac{\Gamma(i-\frac{1}{2+\delta})}{\Gamma(i+\frac{k-1}{2+\delta})}\frac{\Gamma(k+1+\delta)}{\Gamma(1+\delta)}
\sim C_{k,\delta} i^{-\frac{k}{2+\delta}},
\]
for $i$ large and $C_{k,\delta}>0$.
Hence, continuing from \eqref{eq:tail_meansum}, we have
\begin{align*}
\textbf{E} \left(\sum_{i=M+1}^n \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty]\right)
&\le y^{-k} \sum_{i=M+1}^n \textbf{E}\left[\left(\frac{D^A_i(n)}{n^{1/(2+\delta)}}\right)^k\right]\\
&\le y^{-k} \sum_{i=M+1}^\infty \textbf{E}\left[\left(\frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}}\right)^k\right]\\
& = y^{-k} \sum_{i=M+1}^\infty \frac{\Gamma(i-\frac{1}{2+\delta})}{\Gamma(i+\frac{k-1}{2+\delta})}\frac{\Gamma(k+1+\delta)}{\Gamma(1+\delta)}\\
&\stackrel{M\to\infty}{\longrightarrow} 0,
\end{align*}
since $k/(2+\delta)>1$. This verifies Condition~(a).
To see \eqref{eq:meas_cond2}, we have
\begin{align*}
\left\{\sum_{i=1}^n \epsilon_{D^A_i(n)/n^{1/(2+\delta)}}(y,\infty] = 0\right\}
&= \left\{\frac{D^A_i(n)}{n^{1/(2+\delta)}}\le y, 1\le i\le n\right\}\\
&=\left\{\max_{1\le i\le n} \frac{D^A_i(n)}{n^{1/(2+\delta)}}\le y\right\}.
\end{align*}
Since we set $D^A_i(n) = 0$ for all $i\ge n+1$, we have
\[
\left\{\max_{1\le i\le n} \frac{D^A_i(n)}{n^{1/(2+\delta)}}\le y\right\}
= \left\{\max_{i\ge 1} \frac{D^A_i(n)}{n^{1/(2+\delta)}}\le y\right\}.
\]
Similarly,
\[
\left\{\sum_{i=1}^\infty \epsilon_{\sigma_i e^{-T^A_i}/W_A^{1/(2+\delta)}}(y,\infty] = 0\right\}
= \left\{\max_{i\ge 1} \frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}} \le y\right\}.
\]
By \eqref{eq:max_asyA}, we have for $y>0$,
\[
\textbf{P}\left(\max_{i\ge 1} \frac{D^A_i(n)}{n^{1/(2+\delta)}}\le y\right)\to \textbf{P}\left(\max_{i\ge 1} \frac{\sigma_i e^{-T^A_i}}{W_A^{1/(2+\delta)}} \le y\right),\quad \text{as }n\to\infty,
\]
which gives \eqref{eq:meas_cond2} and completes the proof of (iv).
\section{Consistency of Hill Estimator.}\label{sec:Hill}
\sid{We now turn to \eqref{eq:bnkD} as preparation for considering
consistency of the Hill estimator.} We first give a \sid{plausibility argument}
based on the form of the limit point measure in \eqref{eq:meas_asyA}
or \eqref{eq:meas_asyB}. However,
proving \eqref{eq:bnkD} requires showing that $N_{>k}(n)/n$ concentrates on
$p_{>k}$ for all $k\ge 1$; in other words, we must control both the
fluctuation of $N_{>k}(n)/n$ around its mean and the discrepancy between
$\textbf{E}(N_{>k}(n)/n)$ and $p_{>k}$.
Later we will show this is true for our Model A
but we were not successful for Model B. See Remark~\ref{rmk:modelB}.
\subsection{Heuristics.}
Before starting formalities, \sid{here is a} heuristic explanation for
the consistency of the Hill estimator \sid{when applied to preferential
attachment data} from Model A. The heuristic is the same for both
Model A and B so for simplicity, we focus on Model A \wtd{and apply} the Hill
estimator to the limit points in
\eqref{eq:meas_asyA}.
Since the Gamma random variables $\sigma_i$
have light tailed distributions, one may expect that $\{\sigma_i: i\ge
1\}$ will not distort the consistency result and so we pretend the
$\sigma_i$'s are absent; then what remains \wtd{in} the limit points is
monotone in $i$. Set $Y_i := e^{-T^A_i}/W_A^{1/(2+\delta)}$ and apply
the Hill estimator to the $Y_i$'s to get
\begin{align*}
H_{k,n} =&\frac 1k \sum_{i=1}^k \log \Bigl(\frac{Y_i}{Y_{k+1}}\Bigr)
=\frac 1k \sum_{i=1}^k( T^A_{k+1} -T^A_i).
\intertext{
Recall from just after \eqref{transprob1} that
$$T_{n+1}^A-T_n^A \stackrel{d}{=} E_n/(n(2+\delta)),$$
where $E_n, n\geq 1$ are iid unit exponential random variables. Then}
H_{k,n}=&\frac 1k \sum_{i=1}^k \sum_{l=i}^k(T^A_{l+1}-T_l^A)=\frac 1k
\sum_{l=1}^k l(T_{l+1}^A-T^A_l)=\frac 1k \sum_{l=1}^k \frac{E_l}{2+\delta}\convas\frac{1}{2+\delta},
\end{align*}
by the strong law of large numbers, provided that $k\to\infty$.
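As a quick numerical check of this heuristic (not part of the formal argument), the following Python sketch simulates the spacings $T^A_{l+1}-T^A_l\stackrel{d}{=}E_l/(l(2+\delta))$ and evaluates $H_{k,n}$; for large $k$ the printed value should be close to $1/(2+\delta)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta, k = 0.5, 10**5
# spacings T_{l+1} - T_l =_d E_l/(l(2+delta)), E_l iid unit exponentials
spacings = rng.exponential(size=k) / ((2 + delta) * np.arange(1, k + 1))
# T[i] plays the role of T^A_{i+1}, with T^A_1 set to 0;
# only differences T^A_{k+1} - T^A_i enter H_{k,n}
T = np.concatenate(([0.0], np.cumsum(spacings)))
H = T[k] - T[:k].mean()   # (1/k) * sum_{i=1}^k (T_{k+1} - T_i)
print(H, 1.0 / (2.0 + delta))
\end{verbatim}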
There are clear shortcomings to this approach, the most obvious being
that we
only dealt with the points at asymptopia rather than $\{D_i(n), 1 \leq
i \leq n\}$. Furthermore we simplified the limit points by neglecting
the $\sigma_i$'s.
We have not found an effective
way to analyze order statistics of $\{\sigma_i
e^{-T^A_i}/W_A^{1/(2+\delta)}: i\ge 1\}$.
Concentration results for degree counts \sid{provide a traditional tool to}
prove \eqref{eq:bnkD} and we pursue this for \sid{Model A} in the next subsection.
\subsection{Concentration of the degree sequence in Model A}
We begin with considering the sequence of degree counts $\{N_{>k}(n)\}_{k\ge 1}$.
Theorem~\ref{thm:concentrate} shows that $N_{>k}(n)/n$ concentrates on $p_{>k}$, for all $k\ge 1$.
This concentration is what is needed for the consistency of the Hill estimator for network data.
Note that for the linear preferential attachment model, the concentration result for $N_k(n)$ is known from \citet[Theorem~8.3]{vanderHofstad:2017}.
\begin{Theorem}\label{thm:concentrate}
For $\delta>-1$ there exists a constant $C>0$, such that as $n\to\infty$,
\begin{equation}\label{eq:concentrate}
\textbf{P}\left(\max_k |N_{>k}(n) - np_{>k}|\ge C(1+\sqrt{n\log n})\right) = o(1).
\end{equation}
\end{Theorem}
\begin{proof}
Let $\mu_{>k}(n) := \textbf{E}(N_{>k}(n))$.
Following the proof in \citet[Proposition~8.4]{vanderHofstad:2017}, we have for any $C_\mu>2\sqrt{2}$,
\[
\textbf{P}\left(|N_{>k}(n) - \mu_{>k}(n)|\ge C_\mu\sqrt{n\log n}\right) = o(1/n).
\]
Since $N_{>k}(n) = 0$ a.s.\ for all $k> n$, we have
\begin{align}\label{eq:concen1}
\textbf{P} &\left(\max_k |N_{>k}(n) - \mu_{>k}(n)|\ge C_\mu\sqrt{n\log n}\right)\nonumber\\
=& \textbf{P}\left(\max_{0\le k\le n} |N_{>k}(n) - \mu_{>k}(n)|\ge C_\mu\sqrt{n\log n}\right)\nonumber\\
\le & \sum_{k=1}^n \textbf{P}\left(|N_{>k}(n) - \mu_{>k}(n)|\ge C_\mu\sqrt{n\log n}\right) = o(1).
\end{align}
\wtd{Note that \eqref{eq:concen1} also holds for Model B, but we do not succeed in proving
the concentration result later in \eqref{eq:claim} for Model B; see Remark~\ref{rmk:modelB} for details.}
We are now left to show the concentration of $\mu_{>k}(n)$ on $n p_{>k}$ \wtd{in the setup of Model A}.
We claim that
\boldsymbol egin{equation}\label{eq:claim}
|\mu_{>k}(n) - n p_{>k}|\le C',\qquad \wtd{\forall n\ge 1,\quad\forall k\ge 1,}
\end{equation}
for some constant $C'>0$ specified later.
We prove \eqref{eq:claim} by induction.
First, by model construction, $N_{>k} (n)$ satisfies
\begin{align*}
\textbf{E}(N_{>k}(n+1)|G(n)) &= N_{>k}(n) + \frac{k+\delta}{(2+\delta)n} N_k(n) \\
&= N_{>k}(n) + \frac{k+\delta}{(2+\delta)n} (N_{>k-1}(n)-N_{>k}(n)), \qquad k \ge 1.
\end{align*}
Therefore,
\begin{align}\label{eq:mu}
\mu_{>k}(n+1)
&= \mu_{>k}(n) + \frac{k+\delta}{(2+\delta)n} (\mu_{>k-1}(n)-\mu_{>k}(n)), \qquad k \ge 1.
\end{align}
Moreover,
\wtd{it follows from \eqref{eq:pmfk} and \eqref{eq:def_pk} that
\[
p_{>k} = \frac{k+\delta}{2+\delta} p_k, \qquad k \ge 1.
\]
Thus}
$p_{>k}$ satisfies the recursion
\begin{equation}\label{eq:pk}
p_{>k} = \frac{k+\delta}{2+\delta} (p_{>k-1}-p_{>k}), \qquad k \ge 1,
\end{equation}
since $p_k = p_{>k-1}-p_{>k}$.
Let $\varepsilon_{>k}(n) := \mu_{>k}(n) - np_{>k}$, then \eqref{eq:mu} and \eqref{eq:pk} give that for $k\ge 1$,
\begin{equation}\label{eq:diff}
\varepsilon_{>k}(n+1) = \left(1-\frac{k+\delta}{(2+\delta)n}\right) \varepsilon_{>k}(n) + \frac{k+\delta}{(2+\delta)n} \varepsilon_{>k-1}(n).
\end{equation}
\wtd{In order to prove \eqref{eq:claim}, we initiate the induction procedure by
first inducting} on $n$ to prove
\begin{equation}\label{eq:base2}
|\varepsilon_{>1}(n)|\le 1,\qquad\wtd{\forall n\ge 1}.
\end{equation}
When $n = 1$, the graph $G(1)$ consists of one node and $D_1(1) =2$. Since $p_{>k}\le 1$, we have
\begin{equation}\label{eq:base1}
|\varepsilon_{>k}(1)| = |\mu_{>k}(1) - p_{>k}| \le 1,\qquad \wtd{\forall k\ge 1},
\end{equation}
\wtd{which also implies $|\varepsilon_{>1}(1)|\le 1$. Assume $|\varepsilon_{>1}(n)|\le 1$
and we want to show $|\varepsilon_{>1}(n+1)|\le 1$.}
Note that $ \varepsilon_{>0}(n) = \wtd{\mu_{>0}(n) - n p_{>0} = n-n\cdot 1} =0$, then \wtd{when $k=1$,} \eqref{eq:diff} becomes
\[
\varepsilon_{>1}(n+1) = \left(1-\frac{1+\delta}{(2+\delta)n}\right) \varepsilon_{>1}(n),
\]
and $1-\frac{1+\delta}{(2+\delta)n} \ge 0$ for $n\ge 1$.
This gives
\[
|\varepsilon_{>1}(n+1)| \le \left(1-\frac{1+\delta}{(2+\delta)n}\right) |\varepsilon_{>1}(n)| \le 1.
\]
Hence, \eqref{eq:base2} is verified, \wtd{which gives the initialization step of the induction.}
\wtd{Since proving \eqref{eq:claim} requires showing
\begin{equation}\label{eq:hypo2}
\sup_{n\ge 1}\left|\varepsilon_{>k}(n)\right|\le C_p,\qquad \forall k\ge 1,
\end{equation}
for some constant $C_p$ which will be defined later, we verify \eqref{eq:hypo2} by inducting on $k$.
What is proved in \eqref{eq:base2} gives the initialization of the induction ($k=1$) and
we want to verify
$$\left|\varepsilon_{>k}(n)\right|\le C_p,\qquad \forall n\ge 1,$$
assuming}
\begin{equation}\label{eq:inductn2}
\left|\varepsilon_{>k-1}(n)\right|\le C_p,\qquad \forall n\ge 1,
\end{equation}
for some $k\ge 2$.
To do this, we again use induction on $n$, with the result for the base case $n=1$ being verified in \eqref{eq:base1}.
\wtd{We now need to show $\left|\varepsilon_{>k}(n+1)\right|\le C_p$, given both $\left|\varepsilon_{>k}(n)\right|\le C_p$ and \eqref{eq:inductn2}.}
The recursion in \eqref{eq:diff} gives that for $2\le k\le (2+\delta)n - \delta$,
\[
|\varepsilon_{>k}(n+1)| \le \left(1-\frac{k+\delta}{(2+\delta)n}\right) |\varepsilon_{>k}(n)| + \frac{k+\delta}{(2+\delta)n} |\varepsilon_{>k-1}(n)|
\le C_p.
\]
For $k> (2+\delta)n - \delta$, $$|\varepsilon_{>k}(n+1)| = (n+1)p_{>k}.$$
Since $(2+\delta)n - \delta \ge n+1$ for $\delta>-1$ and $n\ge1$, applying \eqref{eq:def_pk} shows that there exists a constant $C_p=C_p(\delta)$ such that
\[
p_{>k} \le C_p (n+1)^{-(2+\delta)},
\]
which gives
\[
|\varepsilon_{>k}(n+1)| = (n+1)p_{>k} \le C_p (n+1)^{-(1+\delta)}\le C_p.
\]
Thus, the claim in \eqref{eq:claim} is verified with
\[
C':= \max\{1, C_p\}.
\]
\wtd{With the result in \eqref{eq:concen1}, the proof of the theorem is complete by choosing
$C = \max\{ C_\mu, C'\}$.}
\end{proof}
\begin{Remark}\label{rmk:modelB} {\rm
The induction argument does not suffice to prove
\eqref{eq:claim} for Model B.
To see this, we re-compute the recursion on the difference term $\varepsilon_{>k}(n)$ for Model B and \eqref{eq:diff} then becomes
\begin{align*}
\varepsilon_{>k}(n+1) =& \left(1-\frac{k+\delta}{(2+\delta)n+1+\delta}\right) \varepsilon_{>k}(n) + \frac{k+\delta}{(2+\delta)n+1+\delta} \varepsilon_{>k-1}(n)\\
&+\left(\frac{1}{2+\delta}-\frac{n}{(2+\delta)n+1+\delta}\right) (k+\delta) (p_{>k-1}-p_{>k})\\
=& \left(1-\frac{k+\delta}{(2+\delta)n+1+\delta}\right) \varepsilon_{>k}(n) + \frac{k+\delta}{(2+\delta)n+1+\delta} \varepsilon_{>k-1}(n)\\
&+\frac{1+\delta}{2+\delta}\frac{(k+\delta)p_k}{(2+\delta)n+1+\delta}.
\end{align*}
By \citet[Exercise 8.19]{vanderHofstad:2017}, $(k+\delta)p_k\le 2+\delta$.
Therefore, even if $|\varepsilon_{>k}(n)|\le 1$ and $|\varepsilon_{>k-1}(n)|\le 1$, we only obtain
\begin{align*}
|\varepsilon_{>k}(n+1)|\le 1+ \frac{1}{n+1},
\end{align*}
so the bound is not preserved and the induction step cannot be closed.
Since the concentration inequality proved in Theorem~\ref{thm:concentrate} cannot be validated for Model B by induction,
we are also not able to verify the consistency of the Hill estimator in Model B, using the proof steps proposed here. This
will be deferred as future research.}
\end{Remark}
\subsection{Convergence of the tail empirical measure for Model A}
We then use the concentration result in \eqref{eq:concentrate} to analyze the convergence of the tail empirical measure.
First consider the degree of each node in $G(n)$,
\[
(D_1(n),D_2(n),\ldots, D_n(n)),
\]
and let
\[
D_{(1)}(n) \ge D_{(2)}(n) \ge \cdots \ge D_{(n)}(n)
\]
be the corresponding order statistics. Then the tail empirical measure becomes
\[
\hat{\nu}_n (\cdot) := \frac{1}{k_n} \sum_{i=1}^n \epsilon_{D_i(n)/D_{(k_n)}(n)}(\cdot),
\]
for some intermediate sequence $\{k_n\}$, i.e. $k_n\to\infty$ and $k_n/n\to 0$ as $n\to\infty$.
\begin{Theorem}
Suppose that $\{k_n\}$ is an intermediate sequence satisfying
\begin{equation}\label{cond:kn}
\liminf_{n\to\infty} k_n/(n\log n)^{1/2}>0\quad \text{and}\quad k_n/n\to 0 \quad \text{as}\quad n\to\infty,
\end{equation}
then
\begin{equation}\label{eq:tailmeas}
\hat{\nu}_n \Rightarrow \nu_{2+\delta},
\end{equation}
in $M_+((0,\infty])$, where $\nu_{2+\delta}(x,\infty] = x^{-(2+\delta)}$, $x>0$.
\end{Theorem}
\begin{proof}
\emph{Step 1.} We first show that for fixed $t>0$,
\begin{equation}\label{eq:step1}
\frac{D_{([k_n t])}(n)}{b(n/k_n)} \convp t^{-\frac{1}{2+\delta}},
\end{equation}
where
$$ b(n/k_n) = \left(\frac{\Gamma(3+2\delta)}{\Gamma(1+\delta)}\right)^{\frac{1}{2+\delta}}(n/k_n)^{\frac{1}{2+\delta}}.$$
Since
\begin{align*}
\textbf{P}&\left(\left|\frac{D_{([k_n t])}(n)}{b(n/k_n)}- t^{-\frac{1}{2+\delta}}\right|>\epsilon\right)\\
\le & \textbf{P}(D_{([k_n t])}(n) > b(n/k_n) (t^{-\frac{1}{2+\delta}}+\epsilon)) + \textbf{P}(D_{([k_n t])}(n)< b(n/k_n)(t^{-\frac{1}{2+\delta}}-\epsilon))\\
=: &\, I+II,
\end{align*}
it suffices to show $I,II\to 0$ as $n\to\infty$.
For the first term, we have, with $ u_t := t^{-\frac{1}{2+\delta}}+\epsilon$,
\begin{align}
I &\le \textbf{P}(N_{>[b(n/k_n)u_t]}(n)\ge [k_n t])\nonumber\\
&= \textbf{P}\left(N_{>[b(n/k_n)u_t]}(n) - n p_{>[b(n/k_n)u_t]}
\ge [k_n t]- n p_{>[b(n/k_n)u_t]}\right).\label{eq:part1}
\end{align}
Using Stirling's formula, \citet[Equation~8.3.9]{vanderHofstad:2017} gives
\[
\frac{\Gamma(t+a)}{\Gamma(t)} = t^a(1+O(1/t)).
\]
Recall the definition of $p_{>k}$ in \eqref{eq:def_pk} for fixed $k$, then we have
\begin{align}\label{eq:conv_poverk}
\frac{n}{k_n} p_{>[b(n/k_n) y]} &= \frac{n}{k_n} \frac{\Gamma(3+2\delta)}{\Gamma(1+\delta)}\frac{\Gamma\left([b(n/k_n) y]+1+\delta\right)}{\Gamma\left([b(n/k_n) y]+3+2\delta\right)}\nonumber\\
&= \frac{\Gamma(3+2\delta)}{\Gamma(1+\delta)} \frac{n}{k_n} \left(b(n/k_n)y\right)^{-(2+\delta)}\left(1+O\left(\frac{1}{b(n/k_n)}\right)\right)\nonumber\\
&= y^{-(2+\delta)} \left(1+O\left(\frac{1}{b(n/k_n)}\right)\right).
\end{align}
Continuing from \eqref{eq:part1} then gives
\begin{align*}
I &\le
\textbf{P}\left(N_{>[b(n/k_n)u_t]}(n) - n p_{>[b(n/k_n)u_t]}
\ge [k_n t]- n p_{>[b(n/k_n)u_t]}\right)\\
\le & \textbf{P}\left(\left|N_{>[b(n/k_n)u_t]}(n) - n p_{>[b(n/k_n)u_t]}\right|
\ge k_n\left(t - u_t^{-(2+\delta)}+O(1/b(n/k_n))\right)\right)\\
\le & \textbf{P}\left(\max_j\left|N_{>j}(n) - n p_{>j}\right|
\ge k_n\left(t - u_t^{-(2+\delta)}+O(1/b(n/k_n))\right)\right).
\end{align*}
By Theorem~\ref{thm:concentrate}, the right hand side goes to 0 as $n\to\infty$, provided that $k_n$ satisfies \eqref{cond:kn}.
Similarly, we can also show $II\to 0$ as $n\to\infty$ for $k_n$ satisfying \eqref{cond:kn}, thus proving \eqref{eq:step1}.
\emph{Step 2.} Note that $D_{([k_n t])}(n)$ is decreasing in $t$ and the limit in \eqref{eq:step1} is continuous on $(0,\infty]$,
which implies
\[
\frac{D_{([k_n t])}(n)}{b(n/k_n)} \convp t^{-\frac{1}{2+\delta}}, \quad \text{in } D(0,\infty].
\]
This gives, by inversion and \citet[Proposition~3.2]{resnickbook:2007},
\begin{equation}\label{eq:step2}
\frac{1}{k_n} \sum_{i=1}^n \epsilon_{D_i(n)/b(n/k_n)}(t, \infty] \convp t^{-(2+\delta)}, \quad t\in (0,\infty],
\end{equation}
in $D(0,\infty]$. Moreover,
\begin{equation}\label{eq:jointconv}
\left(\frac{1}{k_n} \sum_{i=1}^n \epsilon_{D_i(n)/b(n/k_n)}, \frac{D_{([k_n])}(n)}{b(n/k_n)}\right)
\Rightarrow \left(\nu_{2+\delta},1\right)
\end{equation}
in $M_+((0,\infty])\times (0,\infty)$.
\emph{Step 3.} With \eqref{eq:step2}, we use a scaling argument to prove \eqref{eq:tailmeas}. Define the operator
\[
S: M_+((0,\infty])\times (0,\infty) \mapsto M_+((0,\infty])
\]
by
\[
S(\nu, c)(A) = \nu(cA).
\]
By the proof in \citet[Theorem~4.2]{resnickbook:2007}, the mapping $S$ is continuous at $(\nu_{2+\delta},1)$.
Therefore, applying the continuous mapping $S$ to the joint weak convergence in \eqref{eq:jointconv} gives \eqref{eq:tailmeas}.
\end{proof}
\subsection{Consistency of the Hill estimator for Model A}
We are now able to prove the consistency of the Hill estimator applied to $\{D_i(n): 1\le i\le n\}$, i.e.
\[
H_{k_n, n} = \frac{1}{k_n}\sum_{i=1}^{k_n} \log\frac{D_{(i)}(n)}{D_{(k_n+1)}(n)}.
\]
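Before stating the consistency result, here is a small illustrative Python sketch. It simulates a linear preferential attachment graph in the spirit of Model A, using the attachment rule implicit in the conditional expectation appearing in the proof of Theorem~\ref{thm:concentrate} (a new node of degree one attaches to node $i$ with probability proportional to $D_i(n)+\delta$, starting from a single node with a self-loop); the exact construction of Models A and B is given earlier in the paper, so this sketch is only a plausible stand-in. It then applies the Hill estimator with $k_n\approx(n\log n)^{1/2}$; the printed value should be roughly $1/(2+\delta)$, with some bias at moderate $n$ due to the discreteness of the degrees.
\begin{verbatim}
import numpy as np

def simulate_model_A(n, delta, rng):
    # Start from G(1): a single node with a self-loop, so D_1(1) = 2.
    # A new node of degree 1 attaches to an existing node i with probability
    # (D_i + delta) / ((2 + delta) m)   -- cf. the recursion for
    # E(N_{>k}(n+1) | G(n)) used in the concentration proof above.
    deg = np.zeros(n)
    deg[0] = 2.0
    for m in range(1, n):
        w = deg[:m] + delta
        i = rng.choice(m, p=w / w.sum())
        deg[i] += 1.0
        deg[m] = 1.0
    return deg

def hill(x, k):
    x = np.sort(x)[::-1]
    return np.mean(np.log(x[:k] / x[k]))

rng = np.random.default_rng(1)
delta, n = 0.5, 5000
deg = simulate_model_A(n, delta, rng)
kn = int(np.sqrt(n * np.log(n)))   # an intermediate sequence as in (cond:kn)
print(hill(deg, kn), 1.0 / (2.0 + delta))
\end{verbatim}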
\begin{Theorem}
Let $\{k_n\}$ be an intermediate sequence satisfying \eqref{cond:kn}. Then
\[
H_{k_n, n} \convp \frac{1}{2+\delta}.
\]
\end{Theorem}
\begin{proof}
First define a mapping $T: D(0,\infty] \mapsto \mathbb{R}_+$ by
\[
T(f) = \int_1^\infty f(y)\frac{\mathrm{d} y}{y},
\]
and note that
\[
H_{k_n, n} = \int_1^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}.
\]
Therefore, proving the consistency of $H_{k_n, n}$ requires justifying the continuity of the mapping $T$ at $\nu_{2+\delta}$, so that
\[
H_{k_n, n} = \int_1^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}\convp \int_1^\infty \nu_{2+\delta}(y,\infty] \frac{\mathrm{d} y}{y} =\frac{1}{2+\delta}.
\]
Note that for any $M$ we have
\[
\int_1^M \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}\convp \int_1^M \nu_{2+\delta}(y,\infty] \frac{\mathrm{d} y}{y},
\]
so we only need to show
\[
\int_M^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}\convp \int_M^\infty \nu_{2+\delta}(y,\infty] \frac{\mathrm{d} y}{y}.
\]
By the second converging together theorem (see \citet[Theorem~3.5]{resnickbook:2007}),
it suffices to show
\begin{equation}\label{eq:cont_integral}
\lim_{M\to\infty} \limsup_{n\to\infty} \textbf{P}\left(\int_M^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}>\varepsilon\right) = 0.
\end{equation}
Considering the probability in \eqref{eq:cont_integral}, we have
\begin{align*}
\textbf{P} & \left(\int_M^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}>\varepsilon\right)\\
\le\, &\textbf{P}\left(\int_M^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}>\varepsilon, \left|\frac{D_{(k_n)}(n)}{b(n/k_n)}-1\right|<\eta\right)\\
&+ \textbf{P}\left(\int_M^\infty \hat{\nu}_n(y,\infty] \frac{\mathrm{d} y}{y}>\varepsilon, \left|\frac{D_{(k_n)}(n)}{b(n/k_n)}-1\right|\ge\eta\right)\\
\le\, \textbf{P} & \left(\int_M^\infty \frac{1}{k_n}\sum_{i=1}^n \epsilon_{D_i(n)/b(n/k_n)}((1-\eta)y,\infty] \frac{\mathrm{d} y}{y}>\varepsilon\right)\\
+& \textbf{P}\left(\left|\frac{D_{(k_n)}(n)}{b(n/k_n)}-1\right|\ge\eta\right) =: A+B.
\end{align*}
By \eqref{eq:step1}, $B\to 0$ as $n\to\infty$, and using the Markov inequality, $A$ is bounded by
\begin{align*}
\frac{1}{\varepsilon} &\textbf{E}\left(\int_M^\infty \frac{1}{k_n}\sum_{i=1}^n \epsilon_{D_i(n)/b(n/k_n)}((1-\eta)y,\infty] \frac{\mathrm{d} y}{y}\right)\\
&= \frac{1}{\varepsilon}\textbf{E}\left(\int_{M(1-\eta)}^\infty \frac{1}{k_n}\sum_{i=1}^n \epsilon_{D_i(n)/b(n/k_n)}(y,\infty] \frac{\mathrm{d} y}{y}\right)\\
&\le \frac{1}{\varepsilon}\int_{M(1-\eta)}^\infty \frac{1}{k_n}\textbf{E}\left(N_{>[b(n/k_n) y]}(n)\right) \frac{\mathrm{d} y}{y}.
\end{align*}
Furthermore, we also have for $y>0$,
\begin{align*}
\Big|\frac{1}{k_n} &\textbf{E}\left(N_{>[b(n/k_n) y]}(n)\right)-y^{-(2+\delta)}\Big|\\
\le &\frac{1}{k_n}\Big|\textbf{E}\left(N_{>[b(n/k_n) y]}(n)\right)- n p_{>[b(n/k_n) y]}\Big|
+ \left|\frac{n}{k_n} p_{>[b(n/k_n) y]} - y^{-(2+\delta)}\right|.
\end{align*}
According to \eqref{eq:claim}, the first term is bounded above by $C'/k_n\to 0$ as $n\to\infty$.
The second term also goes to 0 by \eqref{eq:conv_poverk} as $n\to\infty$.
Hence, as $n\to\infty$,
\begin{equation}\label{eq:conv_Eoverk}
\frac{1}{k_n}\textbf{E}\left(N_{>[b(n/k_n) y]}(n)\right)\rightarrow y^{-(2+\delta)}.
\end{equation}
Let $U(t) := \textbf{E}(N_{>[t]}(n))$; then \eqref{eq:conv_Eoverk} becomes: for $y>0$,
\[
\frac{1}{k_n} U(b(n/k_n) y) \rightarrow y^{-(2+\delta)},\quad \text{as }n\to\infty.
\]
Since $U(\cdot)$ is a non-increasing function, $U\in RV_{-(2+\delta)}$ by \citet[Proposition 2.3(ii)]{resnickbook:2007}.
Therefore, Karamata's theorem gives
\[
A \le \frac{1}{\varepsilon}\int_{M(1-\eta)}^\infty \frac{1}{k_n}\textbf{E}\left(N_{>[b(n/k_n) y]}(n)\right) \frac{\mathrm{d} y}{y}
\sim C(\delta, \eta) M^{-(2+\delta)},
\]
for some constant $C(\delta, \eta)>0$.
Also, $M^{-(2+\delta)}\to 0$ as $M\to\infty$, and \eqref{eq:cont_integral} follows.
\end{proof}
\section{Simulation Studies.}\label{sec:sim}
As noted in Remark~\ref{rmk:modelB}, we \wtd{fail to} prove the consistency of the Hill estimator in Model B
using the techniques \sid{of} Section~\ref{sec:Hill}.
In this section, however, we present simulation results to assess how well the Hill estimator performs for Model B.
The main problem is to choose a proper $k_n$. We adopt the threshold
selection method proposed in \cite{clauset:shalizi:newman:2009},
which is also widely used in online data sources like KONECT
\cite{kunegis:2013}. \wtd{This
method is encoded in
the \texttt{plfit} script, which can be found at \url{http://tuvalu.santafe.edu/~aaronc/powerlaws/plfit.r}.}
Here is a summary of this method, which we refer to as the ``minimum
distance method''.
Given a sample of $n$ iid observations, $Z_1,\ldots, Z_n$ from a power
law distribution with tail index $\alpha$, the minimum distance method suggests using
the thresholded data consisting of the $k$ upper-order statistics, $Z_{(1)}\ge\ldots \ge Z_{(k)}$, for estimating $\alpha$.
The tail index is estimated by
\[
\hat{\alpha}(k):= \left( \frac{1}{k}\sum_{i=1}^{k} \log\frac{Z_{(i)}}{Z_{(k+1)}} \right)^{-1}, \quad k\ge 1.
\]
To select $k$, we first compute
the Kolmogorov-Smirnov (KS) distance between the
empirical tail distribution of the upper $k$ observations and the power-law
tail with index $\hat{\alpha}(k)$:
\[
d_k:=\sup_{y\ge 1} \left|\frac{1}{k}\sum_{i=1}^n\epsilon_{Z_i/Z_{(k+1)}}(y,\infty]-y^{-\hat{\alpha}(k)}\right|, \quad 1\le k\le n.
\]
Then the optimal $k^*$ is the one that minimizes the KS distance, i.e.
$$
k^* := \operatornamewithlimits{argmin}_{1\le k\le n}\, d_k,
$$
and we estimate the tail index and threshold by $\hat{\alpha}(k^*)$ and \wtd{$Z_{(k^*+1)}$} respectively.
This estimator performs well if the thresholded portion
comes from a Pareto tail and also seems effective in a
variety of non-iid scenarios.
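For concreteness, the following Python sketch encodes the minimum distance recipe just described (a minimal illustration, not the \texttt{plfit} implementation; ties among the rescaled exceedances are ignored when evaluating the KS distance). The final two lines apply it to an iid Pareto sample with tail index $2.5$.
\begin{verbatim}
import numpy as np

def alpha_hat(z_desc, k):
    # Hill-type estimate from the k upper order statistics (display above)
    return 1.0 / np.mean(np.log(z_desc[:k] / z_desc[k]))

def minimum_distance(z, kmax):
    z = np.sort(np.asarray(z, float))[::-1]
    best = (None, np.inf, None)
    for k in range(10, kmax):
        a = alpha_hat(z, k)
        w = np.sort(z[:k] / z[k])                   # rescaled exceedances >= 1
        emp_after = (k - np.arange(1, k + 1)) / k   # empirical tail above points
        emp_before = (k - np.arange(0, k)) / k      # ... and just below them
        d = max(np.max(np.abs(emp_after - w ** (-a))),
                np.max(np.abs(emp_before - w ** (-a))))
        if d < best[1]:
            best = (k, d, a)
    return best   # (k*, d_{k*}, alpha_hat(k*))

rng = np.random.default_rng(2)
z = rng.pareto(2.5, size=5000) + 1.0   # iid Pareto sample, tail index 2.5
print(minimum_distance(z, kmax=2000))
\end{verbatim}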
We chose $\delta = -0.5, 0, 0.5, 1, 2$, so that the theoretical tail indices of the degree distributions from Model B were $\alpha:=2+\delta=1.5, 2, 2.5, 3, 4$, respectively.
For each value of $\delta$, we also varied the number of edges in the network:
$n=5000, 10000, 50000, 100000$.
For each combination of $(\alpha, n)$, we \sid{simulated} 500 independent
replications of the preferential attachment network using
\sid{software discussed in \cite{wan:wang:davis:resnick:2017} and
linked to
\url{http://www.orie.cornell.edu/orie/research/groups/multheavytail/software.cfm}.}
For each replication we
computed $\hat{\alpha}(k^*)$ using the minimum distance method. We
recorded the mean of those 500 estimates in the corresponding entry of
Table~\ref{Table:alpha_star}, based on the combination of $(\alpha,
n)$.
\begin{table}[h]
\centering
\begin{tabular}{l|cccc}
\hline
& \multicolumn{4}{c}{Number of Edges} \\
& 5000 & 10000 & 50000 & 100000 \\
\hline
$\alpha = 1.5$ & 1.481 & 1.484 & 1.484 & 1.488 \\
$\alpha = 2$ & 2.061 & 2.028 & 1.998 & 1.990\\
$\alpha = 2.5$ & 2.602 & 2.557 & 2.507 & 2.494 \\
$\alpha = 3$ & 3.135 & 3.079 & 3.045 & 2.983 \\
$\alpha = 4$ & 3.957 & 3.930 & 3.942 & 3.932 \\
\hline
\end{tabular}
\caption{Mean values of $\hat{\alpha}(k^*)$ over 500 estimates using the minimum distance method, for each combination of $(\alpha, n)$.}
\label{Table:alpha_star}
\end{table}
We see that when $\delta = -0.5 <0$, i.e.\ $\alpha = 1.5$, the minimum distance estimate $\hat{\alpha}(k^*)$
consistently underestimates the tail index, even when the number of edges in the network is increased to $10^5$.
For the cases where $\delta\ge 0$ (i.e.\ $\alpha\ge 2$), the tail estimates have smaller biases as $n$ increases, as long as
the tails are not too ``light''. When $\alpha = 4$, the tail is much lighter; because of the finite-sample bias that arises when applying the minimum distance method to lighter-tailed power laws, increasing the number of
edges in the network does not significantly reduce the bias of the estimates.
\begin{figure}[h]
\centering
\includegraphics[scale=.38]{QQ_alpha_star.pdf}
\caption{QQ plots of $\hat{\alpha}(k^*)$ with $n=10^5$ and $\alpha = 1.5, 2, 2.5, 3, 4$. The fitted lines in red are the traditional qq-lines used to check normality of the estimates.}\label{fig:QQ}
\end{figure}
In Figure~\ref{fig:QQ}, we provide the QQ plots of those 500 minimum distance tail estimates $\hat{\alpha}(k^*)$ while holding $n = 10^5$ and varying $\alpha$ as specified in Table~\ref{Table:alpha_star}. The fitted lines in red are the traditional qq-lines used to check normality of the estimates.
When $\delta\le 0$ (i.e. the cases where $\alpha = 1.5, 2$), QQ plots
\sid{are consistent with} normality of $\hat{\alpha}(k^*)$.
However, as $\delta$ increases ($\alpha = 2.5, 3, 4$), significant
departures from the normal distribution are observed and asymptotic
normality is \sid{not proven theoretically or empirically.}
In conclusion, for Model B, simulation results \sid{suggest} that the Hill estimator is consistent when $\delta\ge 0$ (i.e. the tail index $\alpha\ge 2$),
but the asymptotic normality is not guaranteed. Since we only have QQ plots of the minimum distance estimates in Figure~\ref{fig:QQ}, it is still not clear whether this non-normality is due to the minimum distance method or the dependence in the network data.
We intend to analyze further the consistency for Model B and other
variants, \wtd{as well as the} asymptotic behavior of the Hill estimator.
\end{document} |
\begin{document}
\title{Proposed experiment to test fundamentally binary theories}
\author{Matthias~Kleinmann}
\email{matthias_kleinmann001@ehu.eus}
\affiliation{Department of Theoretical Physics, University of the Basque
Country UPV/EHU, P.O.~Box 644, E-48080 Bilbao, Spain}
\author{Tamás~Vértesi}
\email{tvertesi@atomki.mta.hu}
\affiliation{Institute for Nuclear Research, Hungarian Academy of Sciences,
H-4001 Debrecen, P.O.~Box 51, Hungary}
\author{Adán~Cabello}
\email{adan@us.es}
\affiliation{Departamento de Física Aplicada II, Universidad de Sevilla,
E-41012 Sevilla, Spain}
\begin{abstract}
Fundamentally binary theories are nonsignaling theories in which measurements
of many outcomes are constructed by selecting from binary measurements.
They constitute a sensible alternative to quantum theory and have never been
directly falsified by any experiment.
Here we show that fundamentally binary theories are experimentally testable
with current technology.
For that, we identify a feasible Bell-type experiment on pairs of entangled
qutrits.
In addition, we prove that, for any $n$, quantum $n$-ary correlations are not
fundamentally $(n-1)$-ary.
For that, we introduce a family of inequalities that hold for fundamentally
$(n-1)$-ary theories but are violated by quantum $n$-ary correlations.
\end{abstract}
\maketitle
\section{Introduction}
Quantum theory (QT) is the most successful theory physicists have ever devised.
Still, there is no agreement on which physical reasons force its formalism
\cite{FS16}.
It is therefore important to test ``close-to-quantum'' alternatives, defined as
those which are similar to QT in the sense that they have entangled states,
incompatible measurements, violation of Bell inequalities, and no experiment
has falsified them, and sensible in the sense that they are in some aspects
simpler than QT.
Examples of these alternatives are theories allowing for almost quantum
correlations \cite{NGHA15}, theories in which measurements are fundamentally
binary \cite{KC16}, and theories allowing for a higher degree of
incompatibility between binary measurements \cite{BHSS13}.
Each of these alternatives identifies a particular feature of QT that we do not
fully understand and, as a matter of fact, may or may not be satisfied by
nature.
For example, we still do not know which principle singles out the set of
correlations in QT \cite{Cabello15}.
In contrast, the set of almost quantum correlations satisfies a list of
reasonable principles and is simple to characterize \cite{NGHA15}.
Similarly, we do not know why in QT there are measurements that cannot be
constructed by selecting from binary measurements \cite{KC16}.
However, constructing the set of measurements of the theory would be simpler if
this would not be the case.
Finally, we do not know why the degree of incompatibility of binary
measurements in QT is bounded as it is, while there are theories that are not
submitted to such a limitation \cite{BHSS13}.
Unfortunately, we do not yet have satisfactory answers to these questions.
Therefore, it is important to test whether nature behaves as predicted by QT
also in these particular aspects.
However, this is not an easy task.
Testing almost quantum theories is difficult because we still do not have a
well-defined theory; thus, there is not a clear indication on how we should
aim our experiments.
Another reason, shared by theories with larger binary incompatibility, is that
the only way to test them is by proving that QT is wrong, which is, arguably,
very unlikely.
The case of fundamentally binary theories is different.
We have explicit theories \cite{KC16} and we know that fundamentally binary
theories predict supraquantum correlations for some experiments but subquantum
correlations for others.
That is, if QT is correct, there are experiments that can falsify fundamentally
binary theories \cite{KC16}.
The problem is that all known cases of subquantum correlations require
visibilities that escape the scope of current experiments.
This is particularly unfortunate now that, after years of efforts, we have
loophole-free Bell inequality tests \cite{HBD15,GVW15,SMC15,HKB16,W16}, tests
touching the limits of QT \cite{PJC15,CLBGK15}, and increasingly sophisticated
experiments using high-dimensional two-photon entanglement
\cite{VWZ02,GJVWZ06,DLBPA11}.
Therefore, a fundamental challenge is to identify a feasible experiment
questioning QT beyond the local realistic theories \cite{Bell64}.
The main aim of this work is to present a feasible experiment capable of
excluding fundamentally binary theories.
In addition, the techniques employed to identify that singular experiment will
allow us to answer a question raised in Ref.~\cite{KC16}, namely, whether or
not, for some $n$, quantum $n$-ary correlations are fundamentally $(n-1)$-ary.
\subsection{Device-independent scenario}
Consider a bipartite scenario where two observers, Alice and Bob, perform
independent measurements on a joint physical system.
For a fixed choice of measurements $x$ for Alice and $y$ for Bob, $P(a,b|x,y)$
denotes the joint probability of Alice obtaining outcome $a$ and Bob obtaining
outcome $b$.
We assume that both parties act independently in the sense that the marginal
probability for Alice to obtain outcome $a$ does not depend on the choice of
Bob's measurement $y$, i.e., $\sum_b P(a,b|x,y)\equiv
P(a,\omitted|x,\omitted)$, and analogously $\sum_a P(a,b|x,y)\equiv
P(\omitted,b|\omitted,y)$.
These are the nonsignaling conditions, which are obeyed by QT whenever both
observers act independently, in particular, if the operations of the observers
are spacelike separated.
However, QT does not exhaust all possible correlations subject to these
constraints \cite{PR94}.
The strength of this scenario lies in the fact that the correlations can be
obtained without taking into account the details of the experimental
implementation and hence it is possible to make statements that are
independent of the devices used.
This device-independence allows us to test nature without assuming a particular
theory---such as QT---for describing any of the properties of the measurement
setup.
This way, it is also possible to make theory-independent statements and, in
particular, to analyze the structure of any probabilistic theory that obeys
the nonsignaling conditions.
\subsection{Fundamentally binary theories}
One key element of the structure of any probabilistic theory was identified in
Ref.~\cite{KC16} and concerns how the set of measurements is constructed,
depending on the number of outcomes.
According to Ref.~\cite{KC16}, it is plausible to assume that a theory
describing nature has, on a fundamental level, only measurements with two
outcomes while situations where a measurement has more outcomes are achieved
by classical postprocessing of one or several two-outcome measurements.
To make this a consistent construction, it is also admissible that the
classical postprocessing depends on additional classical information and, in
the bipartite scenario, this classical information might be correlated between
both parties.
The total correlations attainable in such a scenario are the binary nonsignaling
correlations, which are characterized by the convex hull of all nonsignaling
correlations obeying $P(a,\omitted|x,\omitted)= 0$ for all measurements $x$
and all but two outcomes $a$, and $P(\omitted,b|\omitted,y) = 0$ for all
measurements $y$ and all but two outcomes $b$.
The generalization to $n$-ary nonsignaling correlations is straightforward.
In Ref.~\cite{KC16}, it was shown that for no $n$ does the set of $n$-ary nonsignaling
correlations cover the whole set of quantum correlations.
Although this is a general result, the proof in Ref.~\cite{KC16} has two
drawbacks:
(i) It does not provide a test which is experimentally feasible.
(ii) It does not allow us to answer whether or not quantum $n$-ary correlations
are still fundamentally $(n-1)$-ary.
For example, the proof in Ref.~\cite{KC16} requires {10}-outcome quantum
measurements for excluding the binary case.
In this work, we address both problems and provide
(i') an inequality that holds for all binary nonsignaling correlations, but can
be violated using three-level quantum systems (qutrits) with current
technology, and
(ii') a family of inequalities obeyed by $(n-1)$-ary nonsignaling correlations
but violated by quantum measurements with $n$ outcomes.
\section{Results}
\subsection{Feasible experiment to test fundamentally binary theories}
We first consider the case where Alice and Bob both can choose between two
measurements, $x=0,1$ and $y=0,1$, and each measurement has three outcomes
$a,b=0,1,2$.
For a set of correlations $P(a,b|x,y)$, we define
\begin{equation}
I_a=\sum_{k,x,y=0,1} (-1)^{k+x+y}P(k,k|x,y),
\end{equation}
where the outcomes with $k=2$ do not explicitly appear.
With the methods explained in Sec.~\ref{polymeth}, we find that, up to
relabeling of the outcomes,
\begin{equation}\label{ineqa}
I_a\le 1
\end{equation}
holds for nonsignaling correlations if and only if the correlations are
fundamentally binary.
However, according to QT, the inequality in Eq.~\eqref{ineqa} is violated, and
a value of
\begin{equation}\label{qvaluea}
I_a= 2(2/3)^{3/2}\approx 1.0887
\end{equation}
can be achieved by preparing a two-qutrit system in the pure state
\begin{equation}
\ket\psi=\frac{1}{2}(\sqrt{2}\ket{00}+ \ket{11}-\ket{22})
\end{equation}
and choosing the measurements $x,y=0$ as $M_{k|0}= V\proj{k}V^\dag$, and the
measurements $x,y=1$ as $M_{k|1}= U\proj{k}U^\dag$, where, in canonical matrix
representation,
\begin{equation}
V=\frac1{\sqrt{12}}\begin{pmatrix} 2 & 2 & 2 \\
-\sqrt{3}-1 & \sqrt{3}-1 & 2 \\
\sqrt{3}-1 & -\sqrt{3}-1 & 2
\end{pmatrix},
\end{equation}
and $U=\diag(-1,1,1)V$.
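As a numerical sanity check (not part of the derivation), the value in Eq.~\eqref{qvaluea} can be reproduced by directly evaluating $I_a$ for this state and these measurements, for instance with the following Python/NumPy sketch.
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
V = np.array([[2.0, 2.0, 2.0],
              [-s3 - 1, s3 - 1, 2.0],
              [s3 - 1, -s3 - 1, 2.0]]) / np.sqrt(12.0)
U = np.diag([-1.0, 1.0, 1.0]) @ V
# |psi> = (sqrt(2)|00> + |11> - |22>)/2 in the product basis |ab>
psi = np.zeros(9)
psi[0], psi[4], psi[8] = np.sqrt(2.0) / 2, 0.5, -0.5

def M(k, x):
    # M_{k|x} = W |k><k| W^dag with W = V (x=0) or U (x=1)
    w = (V if x == 0 else U)[:, k]
    return np.outer(w, w)

I_a = sum((-1) ** (k + x + y)
          * (psi @ np.kron(M(k, x), M(k, y)) @ psi)
          for k in (0, 1) for x in (0, 1) for y in (0, 1))
print(I_a)   # should be close to 2*(2/3)**1.5 ~ 1.0887
\end{verbatim}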
Using the second level of the Navascués--Pironio--Acín (NPA) hierarchy
\cite{NPA07}, we verify that the value in Eq.~\eqref{qvaluea} is optimal
within our numerical precision of $10^{-6}$.
The visibility required to observe a violation of the inequality in
Eq.~\eqref{ineqa} is $91.7\%$, since the value for the maximally mixed state
is $I_a=0$.
The visibility is defined as the minimal $p$ required to obtain a violation
assuming that the prepared state is a mixture of the target state and a
completely mixed state, $\rho_{\rm prepared} = p \proj\psi + (1-p) \rho_{\rm
mixed}$.
We show in Sec.~\ref{polymeth} that the inequality in Eq.~\eqref{ineqa} holds
already if only one of the measurements of either Alice or Bob is
fundamentally binary.
Therefore, the violation of the inequality in Eq.~\eqref{ineqa} allows us to
make an even stronger statement, namely, that none of the measurements used is
fundamentally binary, thus providing a device-independent certificate of the
genuinely ternary character of all measurements in the experimental setup.
The conclusion at this point is that the violation of the inequality in
Eq.~\eqref{ineqa} predicted by QT could be experimentally observable even
achieving visibilities that have been already attained in previous
Bell-inequality experiments on qutrit--qutrit systems
\cite{VWZ02,GJVWZ06,DLBPA11}.
It is important to point out that, in addition, a compelling experiment
requires that the local measurements are implemented as measurements with
three outcomes rather than measurements that are effectively two-outcome
measurements.
That is, there should be a detector in each of the three possible outcomes of
each party.
The beauty of the inequality in Eq.~\eqref{ineqa} and the simplicity of the
required state and measurements suggest that this experiment could be carried
out in the near future.
\subsection{Quantum $n$-ary correlations are not fundamentally $(n-1)$-ary}
If our purpose is to test whether or not one particular measurement is
fundamentally binary (rather than all of them), then it is enough to consider
a simpler scenario where Alice has a two-outcome measurement $x=0$ and a
three-outcome measurement $x=1$, while Bob has three two-outcome measurements
$y=0,1,2$.
We show in Sec.~\ref{polymeth} that for the combination of correlations
\begin{equation}\label{ieb}
I_b=-P(0,\omitted|0,\omitted)+\sum_{k=0,1,2}[P(0,0|0,k)-P(k,0|1,k)],
\end{equation}
up to relabeling of the outcomes and Bob's measurement settings,
\begin{equation}\label{ineqb}
I_b\le 1
\end{equation}
holds for nonsignaling correlations if and only if the correlations are
fundamentally binary.
According to QT, this bound can be violated with a value of
\begin{equation}\label{qvalueb}
I_b=\sqrt{16/15}\approx 1.0328,
\end{equation}
by preparing the state
\begin{equation}
\ket\psi=\frac1{\sqrt{(3\zeta+1)^2+2}}(\ket{00}+\ket{11}+\ket{22}+
\zeta\ket\phi\!\ket\phi),
\end{equation}
where $\zeta= -\frac13+\frac16\sqrt{10\sqrt{15}-38}\approx -0.19095$,
$\ket\phi=\ket0+\ket1+\ket2$, and choosing Alice's measurement $x=0$ as
$A_{0|0}=\openone-A_{1|0}$, $A_{1|0}=\proj{\phi}/3$, and measurement $x=1$ as
$A_{k|1}=\proj k$, for $k=0,1,2$, and Bob's measurements $y=0,1,2$ as
$B_{0|y}=\openone-B_{1|y}$ and $B_{1|k}=\proj{\eta_k}/\braket{\eta_k|\eta_k}$,
where $\ket{\eta_k}=\ket{k}+\xi\ket\phi$, for $k=0,1,2$, and $\xi =
-\frac13+\frac16\sqrt{6\sqrt{15}+22}\approx 0.78765$.
[Another optimal solution is obtained by flipping the sign before the
$(\frac16\sqrt{\,})$-terms in $\xi$ and $\zeta$, yielding $\xi\approx -1.4543$
and $\zeta\approx -0.47572$.]
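Analogously, the following Python/NumPy sketch (illustrative only) evaluates $I_b$ for the state and measurements just specified; with the printed constants it should reproduce, up to rounding, the value in Eq.~\eqref{qvalueb}.
\begin{verbatim}
import numpy as np

xi = -1/3 + np.sqrt(6 * np.sqrt(15) + 22) / 6
zeta = -1/3 + np.sqrt(10 * np.sqrt(15) - 38) / 6
phi = np.ones(3)            # |phi> = |0>+|1>+|2> (unnormalized)
I3 = np.eye(3)

psi = sum(np.kron(I3[k], I3[k]) for k in range(3))
psi = psi + zeta * np.kron(phi, phi)
psi = psi / np.linalg.norm(psi)    # norm = sqrt((3*zeta+1)**2 + 2)

A1_0 = np.outer(phi, phi) / 3      # A_{1|0};  A_{0|0} = 1 - A_{1|0}
A0_0 = I3 - A1_0
A_1 = [np.outer(I3[k], I3[k]) for k in range(3)]   # A_{k|1} = |k><k|

def B0(k):                         # B_{0|k} = 1 - B_{1|k}
    eta = I3[k] + xi * phi
    return I3 - np.outer(eta, eta) / (eta @ eta)

I_b = -(psi @ np.kron(A0_0, I3) @ psi)         # -P(0,.|0,.)
for k in range(3):
    I_b += psi @ np.kron(A0_0, B0(k)) @ psi    # +P(0,0|0,k)
    I_b -= psi @ np.kron(A_1[k], B0(k)) @ psi  # -P(k,0|1,k)
print(I_b)   # should be close to sqrt(16/15) ~ 1.0328
\end{verbatim}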
We use the third level of the NPA hierarchy to confirm that, within our
numerical precision of $10^{-6}$, the value in Eq.~\eqref{qvalueb} is optimal.
Notice, however, that the visibility required to observe a violation of the
inequality in Eq.~\eqref{ineqb} is $96.9\%$.
This contrasts with the $91.7\%$ required for the inequality in
Eq.~\eqref{ineqa} and shows how a larger number of outcomes allows us to
certify more properties with a smaller visibility.
Nevertheless, what is interesting about the inequality in Eq.~\eqref{ineqb} is
that it is a member of a family of inequalities and this family allows us to
prove that, for any $n$, quantum $n$-ary correlations are not fundamentally
$(n-1)$-ary, a problem left open in Ref.~\cite{KC16}.
For that, we modify the scenario used for the inequality in Eq.~\eqref{ineqb},
so that now Alice's measurement $x=1$ has $n$ outcomes, while Bob has $n$
measurements with two outcomes.
We let $I_b^{(n)}$ be as $I_b$ defined in Eq.~\eqref{ieb}, with the only
modification that in the sum, $k$ takes values from $0$ to $n-1$.
Then,
\begin{equation}\label{ineqc}
I_b^{(n)}\le n-2
\end{equation}
is satisfied for all fundamentally $(n-1)$-ary correlations.
The proof is given in Sec.~\ref{proof}.
Clearly, the value $I_b^{(n)}=n-2$ can already be reached by choosing the fixed
local assignments where all measurements of Alice and Bob always have outcome
$a,b=0$.
According to QT, it is possible to reach values of $I_b^{(n)}> (n-2)+1/(4n^3)$,
as can be found by generalizing the quantum construction from above to
$n$-dimensional quantum systems with $\xi=\sqrt2$ and $\zeta=
-1/n+1/(\sqrt2n^2)$.
Thus, the $(n-1)$-ary bound is violated already by $n$-ary quantum
correlations.
Note that the maximal quantum violation is already very small for $n=4$, as the
bound from the third level of the NPA hierarchy is $I_b^{(4)}<2.00959$.
\section{Methods}
\subsection{Restricted nonsignaling polytopes}\label{polymeth}
We now detail the systematic method that allows us to obtain the inequalities
in Eqs.~\eqref{ineqa}, \eqref{ineqb}, and \eqref{ineqc}.
We write $S=\bisc{a_1, a_2,\dotsc, a_n}{b_1, b_2,\dotsc, b_m}$ for the case
where Alice has $n$ measurements and the first measurement has $a_1$ outcomes,
the second $a_2$ outcomes, etc., and similarly for Bob and his $m$
measurements with $b_1$, $b_2$,\dots, outcomes.
The nonsignaling correlations for such a scenario form a polytope $C(S)$.
For another bipartite scenario $S'$ we consider all correlations $P'\in C(S')$
that can be obtained by local classical postprocessing from any $P\in C(S)$.
The convex hull of these correlations is again a polytope and is denoted by
$C(S\rightarrow S')$.
The simplest nontrivial polytope of fundamentally binary correlations is then
$C(\bisc{2,2}{2,2}\rightarrow \bisc{3,3}{3,3})$.
We construct the vertices of this polytope and compute the {468} facet
inequalities (i.e., tight inequalities for fundamentally binary correlations)
with the help of the Fourier-Motzkin elimination implemented in the software
\texttt{porta} \cite{porta}.
We confirm the results by using the independent software \texttt{ppl}
\cite{ppl}.
Up to relabeling of the outcomes, only the facet $I_a\le 1$ is not a face of
the set of nonsignaling correlations $C(\bisc{3,3}{3,3})$, which concludes
our construction of $I_a$.
In addition, we find that
\begin{equation}\label{coneq}
C(\bisc{2,3}{3,3})= C(\bisc{2,2}{2,2}\rightarrow \bisc{2,3}{3,3}),
\end{equation}
and therefore the inequality in Eq.~\eqref{ineqa} holds for all nonsignaling
correlations where at least one of the measurements is fundamentally binary.
As a complementary question we consider the case where only a single
measurement has three outcomes.
According to Eq.~\eqref{coneq}, the smallest scenarios where such a
verification is possible are $\bisc{2,3}{2,2,2}$ and $\bisc{2,2}{2,2,3}$.
We first find that $C(\bisc{2,2}{3,3,3})= C(\bisc{2,2}{2,2,2}\rightarrow
\bisc{2,2}{3,3,3})$, i.e., even if all of Bob's measurements would be
fundamentally ternary, the correlations are always within the set of
fundamentally binary correlations.
Hence, we investigate the polytope $C(\bisc{2,2}{2,2,2}\rightarrow
\bisc{2,3}{2,2,2})$ and its {126} facets.
Up to symmetries, only the facet $I_b\le 1$ is not a face of
$C(\bisc{2,3}{2,2,2})$.
Our method also covers other scenarios.
As an example we study the polytope $C(\bisc{2,4}{2,4}\rightarrow
\bisc{2,2,2}{2,2,2})$ with its {14052} facets.
In this case, the four-outcome measurements have to be distributed to
two-outcome measurements (or the two-outcome measurement is used twice).
Hence, this scenario is equivalent to the requirement that for each party at
least two of the three measurements are compatible.
The polytope has, up to relabeling, {10} facets that are not a face of
$C(\bisc{2,2,2}{2,2,2})$.
According to the fourth level of the NPA hierarchy, two of the facets may
intersect with the quantum correlations.
While for one of them the required visibility (with respect to correlations
where all outcomes are equally probable) is at least $99.94\%$, the other
requires a visibility of at least $97.88\%$.
This latter facet is $I_c\le 0$, where
\begin{multline}
I_c=-P(10|00)-P(00|01)-P(00|10)-P(00|11)\\
-P(10|12)-P(01|20)-P(01|21)+P(00|22).
\end{multline}
For arbitrary nonsignaling correlations, $I_c\le 1/2$ is tight, while within
QT, $I_c< 0.0324$ must hold.
We can construct a numeric solution for two qutrits which matches the bound
from the third level of the NPA hierarchy up to our numerical precision of
$10^{-6}$.
The required quantum visibility then computes to $97.2\%$.
The quantum optimum is reached for measurements $A_{0|k}=\proj{\alpha_k}$,
$A_{1|k}=\openone -A_{0|k}$, and $B_{0|k}=\proj{\beta_k}$, $B_{1|k}=\openone
-B_{0|k}$, where all $\ket{\alpha_k}$ and $\ket{\beta_k}$ are normalized and
$\braket{\alpha_0|\alpha_1}\approx 0.098$, $\braket{\alpha_0|\alpha_2}\approx
0.630$, $\braket{\alpha_1|\alpha_2}\approx 0.572$, and
$\braket{\beta_k|\beta_\ell}\approx 0.771$ for $k\ne \ell$.
A state achieving the maximal quantum value is $\ket\psi\approx
0.67931\ket{00}+0.67605\ket{11}+0.28548\ket{22}$.
Note that $I_c\approx 0.0318$ can still be reached according to QT when Alice
has only two incompatible measurements, by choosing
$\braket{\alpha_0|\alpha_1}= 0$.
Curiously, the facet $I_c\le 0$ is equal to the inequality $M_{3322}$ in
Ref.~\cite{BGS05} and a violation of it has been observed recently by using
photonic qubits \cite{CLBGK15}.
However, while $M_{3322}$ is the only nontrivial facet of the polytope
investigated in Ref.~\cite{BGS05}, it is just one of several nontrivial facets
in our case.
\subsection{Proof of the inequality in Eq.~\eqref{ineqc}}\label{proof}
Here, we show that for $(n-1)$-ary nonsignaling correlations, the inequality in
Eq.~\eqref{ineqc} holds.
We start by letting for some fixed index $0\le \ell < n$,
\begin{subequations}
\begin{align}
F&=-\sum_b R_{0,b|0,\ell} + \sum_k [ R_{0,0|0,k}-R_{k,0|1,k} ],\\
X_{1;a|x,y}&=\sum_b(R_{a,b|x,y}-R_{a,b|x,\ell}),\\
X_{2;b|x,y}&=\sum_a(R_{a,b|x,y}-R_{a,b|0,y}),
\end{align}
\end{subequations}
where all $R_{a,b|x,y}$ are linearly independent vectors from a real vector
space $V$.
Clearly, for any set of correlations, we can find a linear function $\phi\colon
V\rightarrow {\mathbb R}$ with $\phi(R_{a,b|x,y})= P(a,b|x,y)$.
For such a function, $I_b^{(n)}= \phi(F)$ holds and $\phi(X_\tau)= 0$ are all
the nonsignaling conditions.
The maximal value of $I_b^{(n)}$ for $(n-1)$-ary nonsignaling correlations is
therefore given by
\begin{equation}\label{prim}\begin{split}
\textstyle\max_{\ell'}
\max\{ \phi(F) \mid\; & \phi\colon V\rightarrow {\mathbb R} \text{, linear,}\\
&\phi(X_\tau) = 0, \text{ for all } \tau, \\
& \phi(R_{\ell',b|1,y})= 0, \text{ for all } b,y,\\
& \textstyle\sum_\upsilon \phi(R_\upsilon)= 2n, \text{ and }\\
& \phi(R_\upsilon)\ge 0, \text{ for all } \upsilon\}.
\end{split}\end{equation}
Since the value of the inner maximization does not depend on the choice of
$\ell$, we can choose $\ell=\ell'$.
Equation~\eqref{prim} is a linear program, and the equivalent dual to this
program can be written as
\begin{equation}\label{dual}
\max_\ell
\min_{t,\boldsymbol\xi, \boldsymbol\eta}
\set{ t | t\ge \zeta_\upsilon \text{ for all } \upsilon},
\end{equation}
where $\boldsymbol\zeta$ is the solution of
\begin{equation}
2 n F - \sum_\tau \xi_\tau X_\tau -\sum_{b,y}\eta_{b,y} R_{\ell,b|1,y}=
\sum_\upsilon \zeta_\upsilon R_\upsilon.
\end{equation}
To obtain an upper bound in Eq.~\eqref{dual}, we choose $\boldsymbol\eta\equiv
2n$ and all $\xi_\tau= 0$, but
$\xi_{1;a|0,k}=4$,
$\xi_{1;k|1,k}=-2n$,
$\xi_{2;b|1,\ell}=-3n+2$, and
$\xi_{2;b|1,k}=-(-1)^bn+2$, for $k\ne \ell$.
This yields $\max_\upsilon \zeta_\upsilon= n-2$ for all $\ell$ and hence the
$(n-1)$-ary nonsignaling correlations obey $I_b^{(n)}\le n-2$.
\section{Conclusions}
There was little chance to learn new physics from the recent loophole-free
Bell inequality experiments \cite{HBD15,GVW15,SMC15,HKB16,W16}.
Years of convincing experiments \cite{FC72,ADR82,WJSWZ98} allowed us to
anticipate the conclusions: nature cannot be explained by local realistic
theories \cite{Bell64}, there are measurements for which there is not a joint
probability distribution \cite{Fine82}, and there are states that are not a
convex combination of local states \cite{Werner89}.
Here we have shown how to use Bell-type experiments to gain insights into QT.
In Ref.~\cite{KC16}, it was shown that QT predicts correlations that cannot be
explained by nonsignaling correlations produced by fundamentally binary
measurements (including Popescu--Rohrlich boxes \cite{PR94}).
We proposed a feasible experiment which will allow us to either exclude all
fundamentally binary probabilistic theories or to falsify QT.
If the results of the experiment violate the inequality in Eq.~\eqref{ineqa},
as predicted by QT, then we would learn that no fundamentally binary theory
can possibly describe nature.
In addition, it would prove that all involved measurements are genuine
three-outcome measurements.
If the inequality in Eq.~\eqref{ineqa} is not violated despite visibilities
that would \emph{a priori} lead to such a violation, then we would have
evidence that QT is wrong at a fundamental level (although in a way that is
subtle to detect in experiments).
We have also gone beyond Ref.~\cite{KC16} by showing that, for any $n$, already
$n$-ary quantum correlations are not fundamentally $(n-1)$-ary.
\begin{acknowledgments}
This work is supported by
Project No.~FIS2014-60843-P, ``Advanced Quantum Information'' (MINECO, Spain),
with FEDER funds,
the FQXi Large Grant ``The Observer Observed: A Bayesian Route to the
Reconstruction of Quantum Theory'',
the project ``Photonic Quantum Information'' (Knut and Alice Wallenberg
Foundation, Sweden),
the Hungarian National Research Fund OTKA (Grants No.~K111734 and No.~KH125096),
the EU (ERC Starting Grant GEDENTQOPT),
and the DFG (Forschungsstipendium KL~2726/2-1).
\end{acknowledgments}
\end{document} |
\begin{document}
\title{Quantum thermometry in diffraction-limited systems}
\author{Dong Xie}\email{xiedong@mail.ustc.edu.cn}
\affiliation{College of Science, Guilin University of Aerospace Technology, Guilin, Guangxi 541004, People's Republic of China}
\affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Frontiers Science Center for Nano-Optoelectronics, and
Collaborative Innovation Center of Quantum Matter, Peking University, Beijing 100871, People's Republic of China}
\author{Chunling Xu}
\affiliation{College of Science, Guilin University of Aerospace Technology, Guilin, Guangxi 541004, People's Republic of China}
\author{An Min Wang}
\affiliation{Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China}
\begin{abstract}
We investigate the ultimate quantum limit of resolving the temperatures of two thermal sources affected by diffraction. More quantum Fisher information can be obtained with prior information than without it. We carefully consider two strategies: simultaneous estimation and individual estimation. Simultaneous estimation of the two temperatures is proved to satisfy the saturation condition of the quantum Cram\'{e}r-Rao bound and performs better than individual estimation in the case of a small degree of diffraction, given the same resources. However, in the case of a high degree of diffraction, individual estimation performs better. In particular, at maximum diffraction the simultaneous estimation cannot obtain any information, which is confirmed with a practical measurement, while individual estimation can still obtain information. In addition, we find that for individual estimation, a practical and feasible strategy using the full Hermite-Gauss basis can saturate the quantum Cram\'{e}r-Rao bound without being affected by the attenuation factor at maximum diffraction.
\end{abstract}
\maketitle
\section{Introduction}
In classical optics, imaging resolution is limited by diffraction. For over a century, Rayleigh's criterion has been used as the limit of resolution of two incoherent point sources\cite{lab1,lab2}. In the last decade, this limit has been beaten by a
variety of superresolution techniques, such as fluorescence microscopy\cite{lab3,lab4,lab5}.
Tsang \textit{et al.}\cite{lab6} first investigated the imaging resolution limit with tools from quantum metrology. They obtained the ultimate precision bound for estimating the separation between two incoherent point sources and showed that spatial-mode demultiplexing can approach the optimal measurement, which is superior to direct measurement. This seminal work opened up a wide range of interest in exploring quantum imaging using quantum Fisher information (QFI). Subsequent works mainly extended the superresolution technique to deal with two-dimensional\cite{lab7} and three-dimensional imaging\cite{lab8,lab9,lab10,lab11}, many sources\cite{lab12,lab13,lab14,lab15,lab16}, the effects of noise\cite{lab17,lab18}, and the optimal measurement for practical superresolution imaging\cite{lab19}.
Up to now, very little work has been done to investigate the effect of diffraction on quantum thermometry, which mainly involves improving precision standards for temperature sensing in the quantum regime\cite{lab20}. Improving temperature measurement precision is important in quantum thermodynamics and modern quantum technology\cite{lab21,lab22,lab23}.
The commercially available pyrometer is one of the most common noncontact thermometers; it measures the thermal infrared radiation naturally emitted by heated samples\cite{lab24,lab25}. As in quantum imaging, it is necessary to study the effect of diffraction on temperature measurement precision in order to obtain the optimal temperature measurement.
In this article, we fill in the gaps above. We investigate the ultimate quantum limit of resolving the temperatures of two thermal sources
affected by diffraction.
When one knows a priori that the two temperatures are always the same, maximum diffraction reduces the QFI of the high temperature by
half, while diffraction has little effect on the measurement of the low temperature. We find that this prior information can help to obtain twice as much QFI as without it (i.e., when the two temperatures are treated as independent). More importantly, we find that simultaneous estimation is superior to individual estimation for a small degree of diffraction, whereas for a high degree of diffraction individual estimation can perform better. In addition, we utilize a practical and feasible estimation strategy based on the optimized error transfer formula to obtain the individual temperature estimation uncertainty, which can saturate the quantum Cram\'{e}r-Rao bound (QCRB) at maximum diffraction. Finally, we show that diffraction reduces the precision of
simultaneous estimation with a practical measurement operator, which cannot obtain any information
at maximum diffraction.
This article is organized as follows. In Section II, we introduce the imaging model and the density matrix in which temperature information is encoded. In Section III, we obtain the QFI when the two thermal sources have the same temperature. In Section IV, the simultaneous estimation and the individual estimation are used to obtain the QFI, and compare the merits of the two strategies. In Section V, we investigate a practical and feasible estimation of the single parameter. The simultaneous estimation with a practical measurement operator is studied in Section VI. We make a brief conclusion in Section VII.
\section{the imaging model}
We consider the model of a linear optical imaging system in the far field, as shown in Fig.~\ref{fig.1}.
Two thermal pointlike sources are monochromatic with frequency $\omega$ and located in the object plane, orthogonal
to the optical axis, at positions $-d/2$ and $d/2$. We denote by $T_1$ and $T_2$ the temperatures of the two sources, associated with the
field operators $c_1$ and $c_2$, respectively.
We assume that the two sources emit a total mean photon number equal to $2N$, where $N=\frac12[1/(\chi_1-1)+1/(\chi_2-1)]$ with $\chi_i=e^{\omega/T_i}$ (the reduced Planck constant $\hbar=1$ and the Boltzmann constant $\kappa_B=1$ throughout this article). The sources can be described by the density matrix $\rho_0=\rho_{c_1}[(1-\gamma)N]\otimes\rho_{c_2}[(1+\gamma)N]$, where $\gamma=(\chi_1-\chi_2)/(\chi_1+\chi_2)$ takes into account the possibly different temperatures of the two sources.
In the Glauber-Sudarshan P-representation, the density matrix can be also described by
\begin{align}
\rho_0=\int d^2\alpha_1d^2\alpha_2 P_{c_1,c_2}(\alpha_1,\alpha_2)|\alpha_1,\alpha_2\rangle\langle\alpha_1,\alpha_2|,\label{eq:A01}
\end{align}
where $|\alpha_{1}\rangle$ and $|\alpha_{2}\rangle$ are coherent states of the field operators $c_{1}$ and $c_{2}$ respectively, and the Glauber-Sudarshan
$P$ function is $P_{c_1,c_2}(\alpha_1,\alpha_2)=P_{c_1}(\alpha_1)P_{c_2}(\alpha_2)$, with
\begin{align}
P_{c_1,c_2}(\alpha_1,\alpha_2)=\frac{1}{\pi^2N^2(1-\gamma^2)}e^{-|\alpha_1|^2/[(1-\gamma)N]-|\alpha_2|^2/[(1+\gamma)N]}.
\end{align}
\begin{figure}
\caption{\label{fig.1} Model of the linear optical imaging system in the far field (schematic).}
\end{figure}
The point-spread function $\psi(x)$ determines the field operators on the image plane, which read
\begin{align}
a_1^\dagger=\int dx \psi(x+d/2)a_x^\dagger,\ \
a_2^\dagger=\int dx \psi(x-d/2)a_x^\dagger,
\end{align}
where $a_x^\dagger$ is the canonical creation operator for a field localized at position $x$ on the image screen.
A diffraction-limited optical system transforms the source operators as\cite{lab26}
\begin{align}
c_1\longrightarrow\sqrt{\eta}a_1+\sqrt{1-\eta}v_1,\label{eq:A04}\\
c_2\longrightarrow\sqrt{\eta}a_2+\sqrt{1-\eta}v_2,
\label{eq:A05}
\end{align}
where $\eta$ is an attenuation factor, $v_1$ and $v_2$ are auxiliary environmental modes in the vacuum state.
The image operators $a_1$ and $a_2^\dagger$ do not satisfy the canonical commutation relation, due to the nonzero overlap between the two point-spread functions $\psi(x+d/2)$ and $\psi(x-d/2)$. To obviate this problem, the orthonormal image modes are introduced
\begin{align}
\psi_\pm(x)=\frac{\psi(x+d/2)\pm\psi(x-d/2)}{\sqrt{2(1\pm s)}},
\end{align}
where $s$ is the overlap between the source images
\begin{align}
s=\int d^2x\psi^*(x+d/2)\psi(x-d/2).
\end{align}
The parameter $s$ quantifies the diffraction introduced by the imaging optical system: $s=1$ represents the maximum diffraction, while $s=0$ means that there is no diffraction.
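As an illustration (a minimal numerical sketch added here, not part of the original analysis), the overlap $s$ can be evaluated by direct integration once a point-spread function is assumed; below we take a normalized Gaussian PSF of width $\varpi$, the case considered later in the text, for which one expects $s=\exp[-d^2/(2\varpi^2)]$ with this parametrization.
\begin{verbatim}
import numpy as np

def overlap(d, w=1.0, ngrid=4001, span=20.0):
    # Numerical overlap s = int psi(x+d/2) psi(x-d/2) dx for a Gaussian PSF
    # psi(x) ~ exp(-x^2/w^2), normalized numerically on the grid.
    x, dx = np.linspace(-span, span, ngrid, retstep=True)
    norm = np.sqrt(np.sum(np.exp(-2.0 * x**2 / w**2)) * dx)

    def psi(y):
        return np.exp(-y**2 / w**2) / norm

    return np.sum(psi(x + d / 2.0) * psi(x - d / 2.0)) * dx

for d in (0.0, 1.0, 3.0):
    # s = 1 at d = 0 (maximum diffraction) and decays as the separation grows
    print(d, overlap(d), np.exp(-d**2 / 2.0))
\end{verbatim}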
By taking the sum and difference of the relations in Eq.~(\ref{eq:A04}) and Eq.~(\ref{eq:A05}), one can obtain
\begin{align}
c_\pm:=\frac{c_1\pm c_2}{\sqrt{2}}\rightarrow\sqrt{\eta_\pm}a_\pm+\sqrt{1-\eta_\pm}v_\pm,
\label{eq:A08}
\end{align}
where $a_\pm=(a_1\pm a_2)/\sqrt{2(1\pm s)}$ are the orthogonal symmetric and antisymmetric mode operators associated with the modes $\psi_\pm(x)$, the effective attenuation factors are $\eta_\pm=\eta(1\pm s)$, and $v_\pm$ are auxiliary modes in the vacuum state.
Inverting Eq.~(\ref{eq:A08}), we can write
\begin{align}
a_\pm:=\sqrt{\eta_\pm}c_\pm+\sqrt{1-\eta_\pm}v_\pm.
\label{eq:A09}
\end{align}
The density matrix in the image plane can be obtained by using Eq.~(\ref{eq:A09}) to propagate the quantum state of the source in Eq.~(\ref{eq:A01})\cite{lab27}, as shown in the Appendix:
\begin{align}
\rho=\int d^2\alpha_+d^2\alpha_- P_{a_+,a_-}(\alpha_+,\alpha_-)|\alpha_+,\alpha_-\rangle\langle\alpha_+,\alpha_-|,\label{eq:A10}
\end{align}
where the corresponding $P$ function is
\begin{align}
P_{a_+,a_-}(\alpha_+,\alpha_-)=\frac{1}{\pi^2\textmd{det} V}e^{-{\mathbf{A}}^\dagger V^{-1}\mathbf{A}},
\end{align}
with the definition
$\mathbf{A}=(\alpha_+,\alpha_-)^T$ and
\[
V= \left(
\begin{array}{cc}
N_+ & \gamma\sqrt{N_+N_-}\\
\gamma\sqrt{N_+N_-} & N_-
\end{array}
\right ),
\]
in which,
$N_\pm=N \eta(1\pm s)$.
\section{Two thermal sources with the same temperature}
We first consider the case in which the temperatures of the two sources are always the same, i.e., $T_1=T_2=T$. According to Eq.~(11), the density matrix in the image plane is then a product state, which can be written in terms of number-diagonal states as
\begin{align}
\rho=\rho_+\otimes\rho_-,
\end{align}
with the density matrices associated with the field operators $a_\pm$ given by
\begin{align}
\rho_\pm=\sum_{n=0}^\infty p_\pm(n)|n\rangle_\pm\langle n|
\end{align}
where
\begin{align}
p_\pm(n)=\frac{(M_\pm)^n}{(M_\pm+1)^{n+1}},
\end{align}
$M_\pm=\frac{\eta(1\pm s)}{e^{\omega/T}-1}$, and $|n\rangle_\pm=\frac{1}{\sqrt{n!}}{(a^\dagger_\pm)^n}|0\rangle$ denote Fock states with $n$ photons in the image plane.
Since the state is diagonal in the number basis, the QFI of the temperature $T$ can be calculated directly:
\begin{align}
& \mathcal{F}(T)=\sum_{n=0}^\infty \frac{[\partial_Tp_+(n)]^2}{p_+(n)}+\frac{[\partial_Tp_-(n)]^2}{p_-(n)}\\
&=\frac{2\chi^{2}\omega^2\eta(\chi-1+\eta-s^2\eta)}{(\chi-1)^2T^4(-1+\chi+\eta-s \eta)(-1+\chi+\eta+s \eta)},
\end{align}
where we use the shorthand $\partial_T=\frac{\partial}{\partial T}$ and $\chi=e^{\omega/T}$.
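As a consistency check (a small numerical sketch with arbitrarily chosen parameters, not part of the derivation above), the closed-form expression above can be compared with the classical Fisher information of the photon-number distributions $p_\pm(n)$, summed over a truncated Fock space and with a finite-difference derivative in $T$.
\begin{verbatim}
import numpy as np

def qfi_sum(T, omega=1.0, eta=0.8, s=0.3, nmax=400, dT=1e-6):
    # Classical Fisher information of p_+(n) plus that of p_-(n).
    def probs(Tval):
        chi = np.exp(omega / Tval)
        n = np.arange(nmax)
        out = []
        for sg in (+1.0, -1.0):
            M = eta * (1.0 + sg * s) / (chi - 1.0)
            out.append(M**n / (M + 1.0)**(n + 1))
        return out
    F = 0.0
    for p_lo, p_hi in zip(probs(T - dT), probs(T + dT)):
        dp = (p_hi - p_lo) / (2.0 * dT)
        p = 0.5 * (p_lo + p_hi)
        F += np.sum(dp**2 / p)
    return F

def qfi_closed(T, omega=1.0, eta=0.8, s=0.3):
    chi = np.exp(omega / T)
    num = 2 * chi**2 * omega**2 * eta * (chi - 1 + eta - s**2 * eta)
    den = (chi - 1)**2 * T**4 * (chi - 1 + eta - s * eta) * (chi - 1 + eta + s * eta)
    return num / den

print(qfi_sum(0.7), qfi_closed(0.7))   # the two numbers should agree closely
\end{verbatim}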
At low temperature, $\omega/T\gg1$, we obtain
\begin{align}
\mathcal{F}(T)=\frac{2\omega^2\eta}{T^4e^{\omega/T}}.
\end{align}
This result is independent of the degree of diffraction $s$, which shows that the diffraction has little effect on the measurement of low temperatures.
At high temperature, $\omega/T\ll1$, we obtain
\begin{align}
\mathcal{F}(T)\approx\frac{2\eta[\omega/T+\eta(1-s^2)]}{T^2[\omega/T+\eta(1-s)][\omega/T+\eta(1+s)]}
\end{align}
In this case, we find that $\mathcal{F}(T)|_{s=1}/\mathcal{F}(T)|_{s=0}=1/2$: the maximum diffraction reduces the QFI by half.
\begin{figure}
\caption{\label{fig.2} QFI of the temperature as a function of the degree of diffraction $s$.}
\end{figure}
In the general case, we can see from Fig.~\ref{fig.2} that the diffraction reduces the QFI of the temperature.
At the maximum diffraction we still obtain a finite QFI, which shows that the diffraction does not severely degrade the temperature measurement when the two thermal sources have the same temperature.
\section{estimating two different temperatures}
In this section, we estimate the temperatures $T_1$ and $T_2$ of the two thermal sources; in this case, the two temperatures are independent.
The estimation precision of $(T_1, T_2)$, characterized by the covariance matrix $\textmd{Cov}(T_1, T_2)$, is bounded from below by the QCRB\cite{lab278}
\begin{align}
\textmd{Cov}(T_1, T_2)\geq(\nu \mathcal{H})^{-1},
\end{align}
where $\mathcal{H}$ is the QFI matrix and $\nu$ denotes the classical contribution from repeating the
experiment.
There are two measurement strategies: one is the simultaneous estimation of the two temperatures, the other is the individual estimation. Many works\cite{lab28,lab29,lab30,lab31,lab32,lab33,lab34,lab35,lab36,lab37} have shown that the simultaneous estimation can be more precise than the individual estimation given the same resources. We now examine whether this remains true in the presence of diffraction.
For the simultaneous estimation, the total estimation uncertainty of the two temperatures is given by
\begin{align}
(\delta^2T_1+\delta^2T_2)|_{\textmd{sim}}=\textmd{tr}[\textmd{Cov}(T_1, T_2)]\geq\textmd{tr}(\nu \mathcal{H})^{-1}\nonumber\\
=\frac{1}{\nu}\frac{\mathcal{H}^{11}+\mathcal{H}^{22}}{\mathcal{H}^{11}\mathcal{H}^{22}-|\mathcal{H}^{12}|^2}
\label{eq:A20}
\end{align}
where $\mathcal{H}^{ij}$ ($i,j=1,2$) represent the elements of the QFI matrix $\mathcal{H}$.
For the individual estimation, the estimation uncertainties of the two temperatures are given by
\begin{align}
\delta^2T_1|_{ind}\geq\frac{1}{\nu/2}\frac{1}{\mathcal{H}^{11}},\label{eq:A21}\\
\delta^2T_2|_{ind}\geq\frac{1}{\nu/2}\frac{1}{\mathcal{H}^{22}},\label{eq:A22}
\end{align}
where we consider that $T_1$ and $T_2$ are each measured $\nu/2$ times, so that the total number of measurements is the same as in the simultaneous estimation.
In the case of the individual estimation, the lower bounds in Eqs.~(\ref{eq:A21})-(\ref{eq:A22}) can be saturated in the limit of a large number of repeated measurements ($\nu\gg1$).
In the case of the simultaneous estimation, the lower bound in Eq.~(\ref{eq:A20}) is saturated if, in addition to a large number of repeated measurements, the weak commutation relation is satisfied\cite{lab38}, namely
\begin{align}
\textmd{tr}[\rho[\mathcal{L}_1,\mathcal{L}_2]]=0,
\end{align}
where $\mathcal{L}_i$ ($i=1,2$) are the symmetric logarithmic derivatives, defined as operator solutions of the equations $\partial_i\rho=\frac{1}{2}(\mathcal{L}_i\rho+\rho\mathcal{L}_i)$, with $\partial_i = \partial_{T_i}$ the partial derivative with respect to the $i$-th element of the vector of estimated parameters $(T_1,T_2)$.
The quantum state $\rho$ is Gaussian.
For a Gaussian state,
the QFI matrix $\mathcal{H}$ and the symmetric logarithmic derivatives can be written as\cite{lab39}
\begin{align}
\mathcal{H}^{ij}=1/2\textmd{vec}[\partial_i\sigma]^\dagger \mathcal{R}^{-1}\textmd{vec}[\partial_j\sigma]+2\partial_i\mathbf{d}^\dagger\sigma^{-1}\partial_j\mathbf{d},\\
\mathcal{L}_i=\Delta \mathbf{A}^\dagger \mathcal{R}^{-1}\textmd{vec}[\partial_i\sigma]\Delta \mathbf{A}-\frac{1}{2}\textmd{tr}[\sigma\mathcal{R}^{-1}\textmd{vec}[\partial_i\sigma]]\nonumber\\
+2\Delta \mathbf{A}^\dagger\sigma^{-1}\partial_i\mathbf{d},
\end{align}
where the elements
of the displacement vector $\mathbf{d}$ and of the covariance matrix $\sigma$ are defined as $d_i=\textmd{tr}[\rho A_i]$ and $\sigma_{ij}=\textmd{tr}[\rho\{\Delta A_i,\Delta A_j\}]$, with $\mathbf{A}=(a_+,a_-,a_+^\dagger,a_-^\dagger)^T$, $\Delta A_i=A_i-d_i$, $\mathcal{R}=\bar{\sigma}\otimes\sigma-\mathbf{K}\otimes\mathbf{K}$, and $\mathbf{K}=\textmd{diag}(1,1,-1,-1)$. The bar, as in $\bar{\sigma}$, denotes the complex conjugate, and $\{\cdot,\cdot\}$ denotes the anticommutator; $\textmd{vec}[\cdot]$
denotes the vectorization of a matrix, defined as the column vector constructed from the columns of the matrix. By direct calculation, the covariance matrix is obtained as\\
$\sigma=$
\[
\left(
\begin{array}{cccc}
2N_++1 & -2\gamma\sqrt{N_+N_-} & 0 & 0\\
-2\gamma\sqrt{N_+N_-} & 2N_-+1 & 0 & 0\\
0 & 0 & 2N_++1 & -2\gamma\sqrt{N_+N_-}\\
0 & 0 & -2\gamma\sqrt{N_+N_-} & 2N_-+1
\end{array}
\right ).
\]
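The following short sketch (with arbitrarily chosen values of $\omega$, $\eta$, $s$, $T_1$, $T_2$; it is only a literal numerical transcription of the Gaussian-state formula and of the covariance matrix written above, not an independent derivation) evaluates the QFI matrix $\mathcal{H}^{ij}$ with finite-difference derivatives; the displacement term vanishes because the thermal states considered here have zero mean field.
\begin{verbatim}
import numpy as np

omega, eta, s = 1.0, 0.8, 0.5
K = np.diag([1.0, 1.0, -1.0, -1.0])

def sigma(T1, T2):
    chi1, chi2 = np.exp(omega / T1), np.exp(omega / T2)
    N = 0.5 * (1.0 / (chi1 - 1.0) + 1.0 / (chi2 - 1.0))
    gamma = (chi1 - chi2) / (chi1 + chi2)          # as defined in the text
    Np, Nm = N * eta * (1 + s), N * eta * (1 - s)
    c = -2.0 * gamma * np.sqrt(Np * Nm)
    block = np.array([[2 * Np + 1, c], [c, 2 * Nm + 1]])
    return np.block([[block, np.zeros((2, 2))], [np.zeros((2, 2)), block]])

def qfi_matrix(T1, T2, dT=1e-6):
    sig = sigma(T1, T2)
    R = np.kron(sig.conj(), sig) - np.kron(K, K)
    Rinv = np.linalg.inv(R)
    def dsig(i):
        d1, d2 = (dT, 0.0) if i == 0 else (0.0, dT)
        return (sigma(T1 + d1, T2 + d2) - sigma(T1 - d1, T2 - d2)) / (2 * dT)
    vecs = [dsig(i).flatten(order="F") for i in (0, 1)]   # column-stacking vec[.]
    return np.array([[0.5 * vi @ Rinv @ vj for vj in vecs] for vi in vecs])

print(qfi_matrix(0.8, 1.1))   # 2x2 QFI matrix H^{ij}
\end{verbatim}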
\subsection{Increasing the QFI with prior information}
Suppose we know a priori that the temperatures of the two sources are always equal, i.e., $T_1=T_2$. With this prior information, the saturated uncertainty of $T_1$ is given by
\begin{align}
\delta^2T_1|_{\textmd{pri}}=\frac{1}{\nu/2}\frac{1}{\mathcal{F}(T_1)},
\end{align}
where the QFI $\mathcal{F}(T_1)$ is given in Eq.~(16).
Without the prior information, in the limit $T_2\rightarrow T_1$ we can obtain the analytical result for the QFI matrix from Eq.~(24),
\begin{align}
&\mathcal{H}^{11}(T_2\rightarrow T_1)=\mathcal{H}^{22}(T_2\rightarrow T_1)=\nonumber\\
&\frac{\chi_1^2\omega^2\eta}{T_1^4(\chi_1-1)^2[(1-\chi_1-\eta)^2-s^2\eta^2](-1+\chi_1+\eta-s^2\eta)}\times\nonumber\\
&[(1+\chi_1^2)(s^2-2)-2(s^2-1)^2\eta^2-4(s^2-1)\eta+\nonumber\\
&\chi_1(4-4\eta+s^2(4\eta-2))].
\end{align}
In this case, the uncertainty of the temperature $T_1$ is
\begin{align}
\delta^2T_1|_{ind}=\frac{1}{\nu/2}\frac{1}{\mathcal{H}^{11}(T_2\rightarrow T_1)},
\end{align}
\begin{figure}
\caption{\label{fig.3} Comparison between $\mathcal{F}(T_1)$ and $2\mathcal{H}^{11}(T_2\rightarrow T_1)$ as a function of the degree of diffraction $s$.}
\end{figure}
When there is no diffraction ($s=0$), $\mathcal{F}(T_1)=\mathcal{H}^{11}(T_2\rightarrow T_1)+\mathcal{H}^{22}(T_2\rightarrow T_1)=2\mathcal{H}^{11}(T_2\rightarrow T_1)$. However, when there is diffraction ($s\neq0$), $\mathcal{F}(T_1)>2\mathcal{H}^{11}(T_2\rightarrow T_1)$, as shown in Fig.~\ref{fig.3}. In particular, when $s=1$, $\mathcal{F}(T_1)=2(\mathcal{H}^{11}+\mathcal{H}^{22})=4\mathcal{H}^{11}$. This shows that, in the presence of diffraction, more QFI can be obtained with the prior information $T_1=T_2$ than without it. At the maximum diffraction ($s=1$), the prior information helps to obtain twice as much QFI as in the case without prior information.
\subsection{Simultaneous estimation versus individual estimation}
For the simultaneous estimation, we show that the lower bound in Eq.~(\ref{eq:A20}) can be saturated by deriving analytically that
\begin{align}
\textmd{tr}[\rho[\mathcal{L}_1,\mathcal{L}_2]]=
\textmd{vec}[\partial_1\sigma]^\dagger\mathcal{R}^{-1}(\bar{\sigma}\otimes\mathbf{K}-\mathbf{K}\otimes\sigma)\mathcal{R}^{-1}\textmd{vec}[\partial_2\sigma]\nonumber\\
+4\partial_1\mathbf{d}^\dagger\sigma^{-1}\mathbf{K}\sigma^{-1}\partial_2\mathbf{d}=0.
\end{align}
From now on, we set $\nu = 1$ for convenience, since our results do not depend on the number of repeated measurements.
The QFI matrix can be derived analytically from Eq.~(24); however, the general expression is cumbersome, so the results are presented numerically, as shown in Figs.~\ref{fig.4}-\ref{fig.6}.
We define the factor $\mu$ as the ratio of the simultaneous uncertainty and the individual uncertainty, i.e.,
\begin{align}
\mu=\frac{ (\delta^2T_1+\delta^2T_2)|_{\textmd{sim}}}{\delta^2T_1|_{ind}+\delta^2T_2|_{ind}}
=\frac{\mathcal{H}^{11}\mathcal{H}^{22}}{\mathcal{H}^{11}\mathcal{H}^{22}-|\mathcal{H}^{12}|^2},
\end{align}
where the second equality follows from the saturated QCRB.
From Fig.~\ref{fig.4} we can see that, in the case $s=0.5$, the simultaneous estimation uncertainty is smaller than the individual one given the same resources, i.e., the ratio $\mu<1$: the simultaneous estimation performs better than the individual estimation. When the temperature difference $|T_1-T_2|$ is relatively large, or both temperatures are relatively high ($T_1\gg\omega$ and $T_2\gg\omega$), we find that the ratio $\mu$ is close to $1/2$, indicating that in this regime the simultaneous estimation makes better use of the resources to improve the measurement precision.
\begin{figure}
\caption{\label{fig.4} The ratio $\mu$ between the simultaneous and the individual estimation uncertainties, for $s=0.5$.}
\end{figure}
\begin{figure}
\caption{\label{fig.5} The ratio $1/\mu$ as a function of the degree of diffraction $s$, for different attenuation factors $\eta$.}
\end{figure}
\begin{figure}
\caption{\label{fig.6} The individual estimation uncertainty $\delta^2T_1|_{ind}+\delta^2T_2|_{ind}$ as a function of the degree of diffraction $s$.}
\end{figure}
From Fig.~\ref{fig.5} we can see that the ratio between the individual and the simultaneous uncertainties, $1/\mu$, decreases as $s$ increases; in particular, $1/\mu$ approaches $0$ as the degree of diffraction approaches $1$. This indicates that the advantage of the simultaneous estimation decreases as $s$ increases. At the maximum diffraction the simultaneous estimation uncertainty diverges, which means that the maximum diffraction completely prevents the simultaneous estimation from obtaining any information about the two temperatures. In addition, we can see that the attenuation factor $\eta$ has very little effect on the ratio, especially when $s$ is close to $0$ or $1$.
As shown in Fig.~\ref{fig.6}, although the individual estimation uncertainty ($\delta^2T_1|_{ind}+\delta^2T_2|_{ind}$) also increases with $s$, it always remains finite. This means that the individual estimation can still obtain information about the two temperatures even at the maximum diffraction.
\section{a practical and feasible estimation of the single parameter}
A simple way to measure the individual estimation error of the single parameter $T_i $ is given by the error transfer formula\cite{lab40,lab41}
\begin{align}
(\delta T_i)^2=(\delta {X})^2/(\partial_i\langle{X}\rangle)^2,
\end{align}
where $(\delta {X})^2=\langle X^2\rangle-\langle X\rangle^2$ and $\langle \bullet\rangle=\textmd{tr}[\bullet \rho]$. It only requires measuring the average value of a single observable $X$.
For a single parameter, Gessner \textit{et al.}\cite{lab42} provided an analytical optimization over all possible linear combinations of a given set of measurement observables $\mathbf{X}=(X_1,...,X_K)^T$.
With the optimal linear combinations ${X}_{\textbf{m}}={\textbf{m}}\cdot\textbf{X}\propto\Gamma^{-1}[T_i,\mathbf{X}]D[T_i,\mathbf{X}]\cdot\mathbf{X}$, the corresponding optimized measurement sensitivity can be described as
\begin{align}
M[T_i,\mathbf{X}]=\textmd{max}_{\tilde{m}}(\partial_i\langle{X_{\tilde{m}}}\rangle)^2/(\delta {X_{\tilde{m}}})^2\\
=\textbf{D}[T_i,\mathbf{X}]^T\Gamma^{-1}[T_i,\mathbf{X}]\textbf{D}[T_i,\mathbf{X}],
\label{eq:A33}
\end{align}
where linear combinations $X_{\tilde{\textbf{m}}}=\tilde{\textbf{m}}\cdot\textbf{X}$, $\textbf{D}[T_i,\mathbf{X}]=(\partial_i\langle {X}_1\rangle,...,\partial_i\langle{X}_K\rangle)^T$ and the elements of the covariance matrix are $\Gamma_{k,l}[T_i,\mathbf{X}]=\langle{X}_k{X}_l\rangle-\langle {X}_k\rangle\langle{X}_l\rangle$. The optimized sensitivity given by Eq.~(\ref{eq:A33}) is obtained by the measurement coefficients vector $\tilde{\textbf{m}}=\textbf{m}$.
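The optimization above only uses first and second moments of the chosen observables, so it can be applied directly to measured data. The following self-contained sketch (a toy model with two synthetic observables and an invented parameter $\theta$, not the thermometry model of this paper) illustrates the construction $M[\theta,\mathbf{X}]=\textbf{D}^T\Gamma^{-1}\textbf{D}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta0 = 1.3

def observables(theta, nsamples=200_000):
    # Toy model: two noisy observables whose means depend on theta.
    x1 = theta + rng.normal(0.0, 1.0, nsamples)
    x2 = 0.5 * theta**2 + rng.normal(0.0, 2.0, nsamples)
    return np.vstack([x1, x2])

X = observables(theta0)
Gamma = np.cov(X)                  # covariance matrix Gamma_{kl} of the observables
D = np.array([1.0, theta0])        # analytic derivatives d<X_k>/dtheta of the toy model
M = D @ np.linalg.inv(Gamma) @ D   # optimized moment-based sensitivity
print(M)   # the optimal weights of the linear combination are proportional to
           # Gamma^{-1} D, as stated above
\end{verbatim}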
The measurement sensitivity obeys the chain of inequalities $M[T_i,\mathbf{X}]\leq \mathcal{F}[T_i,X_\textbf{m}]\leq \mathcal{H}^{ii}$.
Here $\mathcal{F}[T_i,X_\textbf{m}]$ denotes the Fisher information (FI) of $T_i$ obtained from the measurement of ${X}_\textbf{m}$; $ \mathcal{H}^{ii}$ denotes the QFI of $T_i$ as shown in Eq.(24).
Photon counting after spatial-mode demultiplexing has been shown to be a measurement that allows one to approach the ultimate limit for the separation estimation\cite{lab6,lab26}. Suppose that we have access to $K$ orthonormal spatial modes $\{\upsilon_k(x)\}$ with associated field operators
$b_k=g_{k+}a_++g_{k-}a_-$, where $g_{k\pm}=\int dx\, \upsilon^*_k(x)\psi_\pm (x)$, and that the photon number in each mode is obtained from the photon counting operator $N_k=b_k^\dagger b_k$. Then, the mean photon number in each mode is
\begin{align}
\langle N_k\rangle=N\eta(|f_{+,k}|^2+|f_{-,k}|^2)-\gamma N\eta(|f_{+,k}|^2-|f_{-,k}|^2),
\end{align}
where $f_{\pm,k}=\int dx \upsilon^*_k(x)\psi(x\pm d/2)$.
Next, we focus on the case of a Gaussian point-spread function $\psi(x)=\sqrt{2/(\pi\varpi^2)}\exp(-x^2/\varpi^2)$.
For a small average number of photons, demultiplexing in Hermite-Gauss (HG) modes can help to approach the QCRB.
Hence, we also consider the orthonormal spatial modes
\begin{align}
\upsilon_k(x)=u_{k}(x)=\mathcal{N}_{k}H_k\Big(\frac{\sqrt{2}x}{\varpi}\Big)e^{-x^2/\varpi^2},
\end{align}
where $H_k(x)$ are the Hermite polynomials and the normalization constant is $\mathcal{N}_{k}=[(\pi/2)\varpi^22^{k}k!]^{-1/2}$.
Letting $X_k=N_k$ and $\mathbf{X}=\mathbf{N}=(N_1,...,N_K)^T$, the measurement sensitivity of $T_i$ can be obtained from Eq.~(\ref{eq:A33}):
\begin{align}
M[T_i,\mathbf{N}]=(2\eta\partial_iN)^2[\frac{\sum_{k=0}^K\beta_{k}^2(d)}{2N\eta}-\frac{A_+}{A_+A_--B^2}\mathcal{S}_1^2\nonumber\\
+\frac{2B}{A_+A_--B^2}\mathcal{S}_1\mathcal{S}_2-\frac{A_-}{A_+A_--B^2}\mathcal{S}_2^2],
\end{align}
where
\begin{align}
A_\pm=\frac{2}{1\pm\gamma^2}+2N\eta{\sum_{k=0}^K\beta_{k}^2(d)},\\
B=2N\eta\sum_{k=0}^K(-1)^{k}\beta_{k}^2(d),\\
\mathcal{S}_1=\sum_{k=0}^K(-1)^{k}\beta_{k}^2(d),\\
\mathcal{S}_2=\sum_{k=0}^K\beta_{k}^2(d).
\end{align}
Here, $\beta_{k}(d)=\frac{1}{\sqrt{k!}}\exp\left[-\frac{d^2}{8\varpi^2}\right]\left(\frac{d}{2\varpi}\right)^k$, so that $f_{\pm,k}=(\pm1)^k\beta_{k}(d)$.
When the number of received photons is low $N\eta\ll1 $, the sensitivity can be simplified as
\begin{align}
M[T_i,\mathbf{N}]&\approx (2\eta\partial_iN)^2\frac{\sum_{k=0}^K\beta_{k}^2(d)}{2N\eta}\nonumber\\
&=\sum_{k=0}^K\frac{(\partial_iN_k)^2}{N_k}
=N_t\sum_{k=0}^K\frac{(\partial_ip_k)^2}{p_k}
\end{align}
where the total number of thermal photons is $N_t=\sum_{k=0}^KN_k$ and the probability is $p_k=N_k/N_t$. The above equation shows that the optimized sensitivity reaches the Fisher information, which means that, in this regime, the estimation strategy based on the optimized error-transfer formula saturates the Cram\'{e}r-Rao bound.
\begin{figure}
\caption{\label{fig.7} The measurement sensitivity $M[T_1,\mathbf{N}]$ compared with the QFI, as a function of the degree of diffraction $s$.}
\end{figure}
When the full HG basis is measured, i.e., $K\longrightarrow\infty$, we can obtain the sensitivity of the single parameter $T_i$
\begin{align}
&M[T_i,\mathbf{N}]=(2\eta\partial_iN)^2\nonumber\\
&[\frac{1}{2N\eta}-\frac{2s^2(1-\gamma^2)+2(1+\gamma^2)+2N\eta(1-\gamma^4)(s-1)^2}{4+8N\eta+4N^2\eta^2(1-s^2)(1-\gamma^4)}],
\end{align}
where $\partial_iN=\frac{\chi_i\omega}{2T_i^2(1-\chi_i)^2}$.
As shown in Fig.~\ref{fig.7}, the measurement sensitivity $M[T_1,\mathbf{N}]$ gradually approaches the QFI as the degree of the diffraction $s$ increases. In other words, as $s$ increases, the estimation strategy based on the optimized error transfer formula tends to be the optimal method by using the full HG basis. At the maximum diffraction $s=1$, the estimation strategy can saturate the QCRB without being affected by the attenuation factor $\eta$.
\section{simultaneous estimation with a practical measurement operator}
In this section, we use a practical measurement to estimate the two parameters $T_1$ and $T_2$ simultaneously.
We consider a simple measurement operator $E=\sum_{k=0}^\infty N_k$, which is the sum of all the photon-counting operators after the HG spatial-mode demultiplexing; it is independent of the estimated parameters $T_i$.
After a simple calculation, we obtain $E=\sum_{k=0}^\infty N_k=a_+^\dag a_++a_-^\dag a_-=\sum_{n_+,n_-}(n_++n_-)|n_+,n_-\rangle\langle n_+,n_-|$. Conditioned on a detection event, the probability of detecting $n_+$ and $n_-$ photons in the modes $a_+$ and $a_-$ is given by $P_{n_+,n_-}=\langle n_+,n_-|\rho|n_+,n_-\rangle$, which can be further expressed as
\begin{align}
&P_{n_+,n_-}=\nonumber\\
&\frac{(1-\gamma^2)^{1+n_++n_-}N_+^{n_+}N_-^{n_-}
\ _2F_1[1+n_+,1+n_-,1,\frac{\gamma^2}{\lambda_+\lambda_-}]}{\lambda_+^{n_++1}\lambda_-^{n_-+1}},
\end{align}
where the parameters in the denominator are $\lambda_\pm=1+N_\pm-N_\pm\gamma^2$, and $\ _2F_1$ denotes the Gauss hypergeometric function, $\ _2F_1[1+n_+,1+n_-,1,z]=\sum_{m=0}^\infty\frac{(1+n_+)_m(1+n_-)_m}{(m!)^2}z^m$, with $(a)_m$ the Pochhammer symbol and $z=\frac{\gamma^2}{\lambda_+\lambda_-}$.
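As a sanity check (a small sketch using SciPy's implementation of the Gauss hypergeometric function, with arbitrarily chosen values of $N_\pm$ and $\gamma$), the probability written above can be evaluated directly and its normalization verified.
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

def P(nplus, nminus, Np, Nm, gamma):
    lp = 1.0 + Np - Np * gamma**2      # lambda_+
    lm = 1.0 + Nm - Nm * gamma**2      # lambda_-
    z = gamma**2 / (lp * lm)
    return ((1 - gamma**2)**(1 + nplus + nminus) * Np**nplus * Nm**nminus
            * hyp2f1(1 + nplus, 1 + nminus, 1, z)
            / (lp**(nplus + 1) * lm**(nminus + 1)))

Np, Nm, gamma = 0.6, 0.2, 0.4
total = sum(P(a, b, Np, Nm, gamma) for a in range(80) for b in range(80))
print(total)   # should be very close to 1
\end{verbatim}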
With this measurement probability, the FI can be calculated by
\begin{align}
\mathcal{F}_C^{ij}=\sum_{n_+,n_-=0}^\infty \frac{\partial_iP_{n_+,n_-}\partial_jP_{n_+,n_-}}{P_{n_+,n_-}}.
\end{align}
\begin{figure}
\caption{\label{fig.8} The reciprocal of the simultaneous estimation uncertainty obtained with the measurement operator $E$, as a function of the degree of diffraction $s$.}
\end{figure}
As shown in Fig.~\ref{fig.8}, the reciprocal of the simultaneous estimation uncertainty, $\frac{1}{(\delta^2T_1+\delta^2T_2)|_{\textmd{sim}}}$, decreases with the degree of diffraction $s$ and vanishes when $s=1$. This shows that the diffraction reduces the precision of the simultaneous estimation with the practical measurement operator $E$, which cannot obtain any information at the maximum diffraction. These results support the previous conclusions, obtained from the saturated QCRB, of Section IV.B.
\section{conclusion}
We have investigated the effect of diffraction on quantum thermometry. When we know a priori that the temperatures of the two thermal sources are always equal, the diffraction reduces the estimation precision, but not by much: at low temperature the diffraction has little effect on the estimation precision, while at high temperature the maximum diffraction yields half as much QFI as no diffraction.
More QFI can be obtained with the prior information (the two temperatures are always equal) than without it (i.e., when the two temperatures are independent). In particular, at the maximum diffraction the prior information helps to obtain twice as much QFI as in the case without prior information. Moreover, we carefully considered the two strategies of simultaneous and individual estimation. The simultaneous estimation of the two temperatures is proved to satisfy the saturation condition of the QCRB. Given the same resources, the simultaneous estimation performs better than the individual estimation for a small degree of diffraction, whereas for a high degree of diffraction the individual estimation performs better. In particular, at the maximum diffraction the simultaneous estimation cannot obtain any information, which is supported by a practical measurement, while the individual estimation still can. In addition, we find that for the individual estimation the practical and feasible strategy based on the optimized error-transfer formula saturates the Cram\'{e}r-Rao bound when the number of received photons is low; at the maximum diffraction, the strategy using the full HG basis saturates the QCRB independently of the attenuation factor.
Our study illustrates the effect of diffraction on the temperature measurement precision and the advantages and disadvantages of the different measurement strategies, laying a foundation for remote precision thermometry.
\section*{Appendix}
We now use Eq.~(\ref{eq:A09}) to propagate the density matrix $\rho_0$ of the sources to the density matrix $\rho$ in the image plane.
When we transform the coherent state $|\alpha_1,\alpha_2\rangle$ of the field operators $c_{1}$ and $c_2$ to the coherent state $|\alpha_+,\alpha_-\rangle$ of the field operators $a_\pm$, we obtain the following mapping relation according to Eq.~(\ref{eq:A09}), with the auxiliary modes in the vacuum state:
\begin{align}
\sqrt{\eta_\pm}c_\pm|\alpha_1,\alpha_2\rangle\langle\alpha_1,\alpha_2|\rightarrow a_\pm|\alpha_+,\alpha_-\rangle\langle\alpha_+,\alpha_-|\Rightarrow\nonumber\\
\sqrt{\eta_\pm/2}(\alpha_1\pm\alpha_2)|\alpha_1,\alpha_2\rangle\langle\alpha_1,\alpha_2|\rightarrow\alpha_\pm|\alpha_+,\alpha_-\rangle\langle\alpha_+,\alpha_-|.
\end{align}
Based on the above equations, we obtain the mapping relations
\begin{align}
\alpha_{1}\rightarrow\alpha_{1+}=\alpha_+/\sqrt{2\eta_+}+\alpha_-/\sqrt{2\eta_-};\\
\alpha_{2}\rightarrow\alpha_{2-}=\alpha_+/\sqrt{2\eta_+}-\alpha_-/\sqrt{2\eta_-}.
\end{align}
Then, with the two equations above we further obtain
\begin{align}
&\int d^2\alpha_1d^2\alpha_2 P_{c_1,c_2}(\alpha_1,\alpha_2)|\alpha_1,\alpha_2\rangle\langle\alpha_1,\alpha_2|\rightarrow \nonumber\\
&\int d^2\alpha_{1+} d^2\alpha_{2-} P_{c_1,c_2}(\alpha_{1\pm},\alpha_{2\pm})|\alpha_+,\alpha_-\rangle\langle\alpha_+,\alpha_-|\\
&=\int d^2\alpha_+d^2\alpha_- P_{a_+,a_-}(\alpha_+,\alpha_-)|\alpha_+,\alpha_-\rangle\langle\alpha_+,\alpha_-|.
\end{align}
At this point, Eq.~(\ref{eq:A10}) has been derived.
\end{document} |
\begin{document}
\title{An invariant region for the collisional dynamics of
two bodies on Keplerian orbits}
\maketitle
\author{Dario Benedetto}
\address{Dipartimento di Matematica, {\it Sapienza} Universit\`a di Roma }
\email{benedetto@mat.uniroma1.it}
\author{Flavia Lenti}
\address{Dipartimento di Scienza e di alta Tecnologia
Universit\`a dell'Insubria}
\email{flavia.lenti@uninsubria.it}
\begin{abstract}
We study the dynamics of two bodies moving on
elliptic Keplerian orbits around a fixed center
of attraction and interacting
only by means of elastic or inelastic collisions. We show that
there exists a bounded invariant region: for
suitable values of the total energy and the total angular
momentum (explicitly computable) the orbits of the bodies remain elliptic,
whatever the number and the details
of the collisions are. We show that there exists a
bounded invariant region even in the case of two bodies interacting by
a short-range potential.
\end{abstract}
\keywords{Planetary systems, Planetary rings,
Elastic collisions, Inelastic collisions}
\subjclass{MSC 70F15, MSC 37N05}
\section{Introduction}
\label{sezione:introduzione}
The interest in the collisional dynamics in a
planetary system goes back to Poincar\'e. In particular,
in \cite{poincare}
he studies
the planetary three-body problem, with
one body of large mass (the Sun)
and two bodies of small mass (the planets).
He indicates how to find periodic
solutions (of {\it deuxi\`eme esp\`ece}),
as perturbations of periodic collisional solutions
he can construct when the masses of the planets are infinitely small
and the distance between them becomes infinitely small.
In this approximation,
the two bodies are on Keplerian ellipses until the ``choc'' (i.e.
the interaction), which moves the bodies on two other Keplerian
ellipses. During the interaction, only the total energy and the total
momentum are conserved, and the choc acts
as an elastic collision. It should be noted that
the collision can also move the bodies onto hyperbolic orbits, but
Poincar\'e is only interested in the elliptic case.
In this work, we prove that, for this collisional dynamics,
there exists an invariant bounded region
of positive measure in the phase space.
More precisely, we consider two
bodies moving on elliptic Keplerian orbits around a center,
interacting only by means of collisions. A collision changes the
orbital parameters of the bodies, and a sequence of collisions can
move one of the bodies out of the system, on parabolic or hyperbolic
orbits. We show that for suitable values of the total energy and
the total angular momentum (easily computable),
the bodies remain on elliptic colliding orbits.
Moreover, we extend this result to the case of two point particles
interacting by means of a bounded short-range potential.
We are neglecting the gravitational interaction between the bodies
and their influence on the attractive center.
These approximations can be justified only for very light
particles and for a few revolution times
(the time between two consecutive
collisions can be very long with respect to the
revolution period of the particles).
Considering this, our result has the following
interpretation for real systems:
the particles cannot leave the system because of the collisions,
unless other perturbations change the
orbital parameters sufficiently.
Our result seems to be unknown in the literature, despite
its simplicity and despite the great interest in research on
planetary systems.
We came across this result while studying numerical models
for the dynamics of the inelastic particles of planetary rings.
It is known (see
\cite{poincare-cosm}, \cite{braich}, and
\cite{bh})
that the inelasticity of the collisions
is sufficient to guarantee the persistence of the rings.
In contrast, in the case of
elastic collisions, almost all the
particles leave the system on hyperbolic orbits.
Here we prove that, in the case of only two particles,
the inelasticity is not needed
to prevent the orbits from becoming hyperbolic or parabolic;
in this sense, two colliding particles are
a stable subsystem of a ring.
This observation and its consequences can be interesting
for the study of various models of planetary rings.
In particular, in some models it is assumed that the
collisions are elastic
for small relative velocities (see \cite{hatzes}
for experimental result on the inelasticity of
ice balls).
From the mathematical point of view, it can be interesting
to study how the particles move within the invariant region.
Preliminary two-dimensional numerical simulations,
in which the impact parameter
is randomly chosen, show that
the two orbits are approximately tangent, for most of the time.
It is difficult to implement a numerical simulation for the more
realistic
three-dimensional case.
The problem is that it is not easy to find efficiently
the values of the anomalies which correspond to collisional
configurations.
Some useful suggestions and technical insights
can be obtained by analyzing
the solutions of the problem
of finding the critical points of the distance between two
elliptic orbits (see \cite{gronchi}).
\vskip.3cm
The paper is organized as follows.
In section \ref{sez:problema}, we establish the mathematical notation
and the exact nature of the problem.
In section \ref{sezione:invariante}, we analyze a simplified
model considering a two-dimensional {\it dynamics of the orbits}
instead of the dynamics of the two bodies.
We consider more general cases in section \ref{sezione:estensioni}
(two bodies in $\mathbb R^2$) and in section
\ref{sezione:dim3}, in which we
analyze the case of two bodies in $\mathbb R^3$; here we also
analyze the case of point particles interacting by means of a
short-range potential.
\section{The problem}
\label{sez:problema}
The system we analyze consists of two spherical bodies, of masses $m_1$ and
$m_2$ and radii $R_1$ and $R_2$ respectively,
which are attracted by a fixed gravitational center and interact
by means of elastic or inelastic collisions.
We indicate with $\ve x_i$ and $\ve v_i$ the position of the center
and the velocity of the body $i=1,2$,
with $M=m_1+m_2$ the total mass
and with $\mu_i = m_i/M$ the fraction of the total mass
carried by the body $i$.
We choose units such that
the gravitational potential energy of the body $i$
is $-m_i/r_i$, where $r_i=|\ve x_i|$ is the distance
from the attracting center.
In the case of hard spheres,
inelastic collisions can be modeled supposing
that only a part of the normal impulse is transferred,
while the tangential one is conserved. Let us denote with
$\ve n = (\ve x_1 - \ve x_2)/(R_1+R_2)$ the direction
of the relative position at the moment of the impact, and with $\ve w = \ve v_1 - \ve v_2$ the
relative velocity. The particles are in the incoming configuration iff $\ve n\cdot \ve w < 0$.
The relative velocity after the collision is
\begin{equation}
\label{w-urto}
\ve w' = (I-\ve n\times \ve n) \ \ve w - ( 1-2\eps)
(\ve n \times \ve n) \ \ve w
\end{equation}
where $I$ is the identity matrix,
$(\ve x \times \ve y)_{ij} = x_i y_j$
is the tensor product, $\ve n\times \ve n$ is the projector on
the direction of $\ve n$, $I-\ve n\times \ve n$ is the projector
on the orthogonal plane to $\ve n$,
and finally $\eps\in [0,0.5]$ is
the parameter of inelasticity, which is $0$ in the case
of the elastic collision.
The outgoing velocities $\ve v_1'$, $\ve v_2'$
can be obtained from $\ve w'$ using the conservation
of the velocity of the center of mass
$$\ve v' = \mu_1 \ve v_1' + \mu_2 \ve v_2' = \mu_1 \ve v_1 + \mu_2 \ve v_2
= \ve v$$
If the collision is inelastic,
the normal component of $\ve w$ is reduced in modulus:
\begin{equation}
\label{w-normale}
\ve w' \cdot \ve n = -(1-2\eps) \ve w \cdot \ve n,\ \ \
\ |\ve w' \cdot \ve n| \le |\ve w \cdot \ve n|,
\end{equation}
where $(1-2\eps)\in[0,1)$ is the {\it coefficient of restitution}
which is $1$ in the elastic case.
The kinetic energy
$T= m_1 |\ve v_1|^2/2 + m_2 |\ve v_2|^2/2$
decreases and becomes
$$T' = \frac 12 \left( m_1 |\ve v_1'|^2 + m_2 |\ve v_2'|^2
\right) = T - 2 M \mu_1 \mu_2 \eps (1-\eps)
(\ve w \cdot \ve n)^2.
$$
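The collision rule can be summarized in a few lines of code; the following sketch (in arbitrary units, with an arbitrarily chosen incoming configuration) only restates Eqs. \eqref{w-urto}-\eqref{w-normale} and the conservation of the center-of-mass velocity, and checks that the kinetic energy does not increase.
\begin{verbatim}
import numpy as np

def collide(v1, v2, n, eps=0.1, mu1=0.5, mu2=0.5):
    # w' = w - 2(1-eps)(w.n)n, which is the collision rule written explicitly;
    # the center-of-mass velocity v = mu1 v1 + mu2 v2 is unchanged.
    w = v1 - v2
    w_new = w - 2.0 * (1.0 - eps) * np.dot(w, n) * n
    v = mu1 * v1 + mu2 * v2
    return v + mu2 * w_new, v - mu1 * w_new

def kinetic(v1, v2, m1=0.5, m2=0.5):
    return 0.5 * m1 * np.dot(v1, v1) + 0.5 * m2 * np.dot(v2, v2)

v1, v2 = np.array([1.0, 0.2]), np.array([-0.4, 0.1])
n = -np.array([1.0, 1.0]) / np.sqrt(2.0)     # incoming configuration: n.w < 0
v1p, v2p = collide(v1, v2, n)
print(kinetic(v1, v2), kinetic(v1p, v2p))    # the kinetic energy decreases
\end{verbatim}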
We remark that in some models
it is assumed that also the tangential component
$(I-\ve n \times \ve n)\ \ve w$ is reduced
(see \cite{bh} and \cite{kawai}).
Moreover the restitution coefficient can depend on
the relative velocity, as shown in a huge number
of theoretical and experimental studies
(see e.g. \cite{djerassi}, \cite{hertzsch}, and references therein).
In all these models the kinetic energy decreases.
\vskip3pt
Let us describe what can happen to the orbits after a collision
of the two bodies. Before the collision
both energies are
negative:
$$\frac {m_i}2 \ve v_i^2 - \frac {m_i}{|\ve x_i|} < 0,
\ \ \ i=1,2.$$
These conditions are equivalent to
\begin{equation}
\label{condw}
|\ve v+ \mu_2 \ve w|^2 < \frac 2{|\ve x_1|}, \ \ \
|\ve v - \mu_1 \ve w|^2 < \frac 2{|\ve x_2|},
\end{equation}
with $|\ve x_1 - \ve x_2|= R_1 + R_2$.
If
\begin{equation}
\label{eq:condw0}
|\ve w| < \min \left( (\sqrt{2/|\ve x_1|} - |\ve v|)/\mu_2,
(\sqrt{2/|\ve x_2|} - |\ve v|)/\mu_1\right)
\end{equation}
the inequalities \eqref{condw} are satisfied; then,
after the collision, the orbits remain elliptic
because
$|\ve w'|\le |\ve w|$, as follows from Eqs.~\eqref{w-urto} and \eqref{w-normale}.
The hypothesis \eqref{eq:condw0} is not sufficient to prevent
one of the particles from leaving the system, because the
next collision can take place at other points
with very different values of $\ve w$, $\ve v$, $\ve x_1$ and $\ve x_2$.
More generally,
a sequence of collisions at different points
can end with a particle which leaves the system on a hyperbolic orbit.
In the next section we show how to control the condition of ellipticity,
regardless of the collision history.
Let us first note that, for bodies with vanishing radii,
the possible points of collision
are at most two, which is the maximum number of intersections
of two non-identical co-focal Keplerian orbits.
As noted by Poincar\'e in \cite{poincare},
if two orbits have two points of intersection,
either the points and the center of gravity are on a line
or the two orbits are in a plane.
We therefore start our analysis in section \ref{sezione:invariante}
with the two-dimensional case,
also simplifying the dynamics by considering two point
particles.
\vskip3pt
\section{The
invariant region for two point-particles in $\mathbb R^2$}
\label{sezione:invariante}
In this section we consider
a simplified two-dimensional mathematical model.
We suppose that the bodies are point particles
moving on co-focal Keplerian orbits in a plane.
In order to allow the
particles to collide, we have to assume
that the orbits intersect (note that this can happen in at most two points).
Even if this condition is satisfied,
the particles may never collide, because the cross section
is zero for dimensionless bodies.
In order to avoid this problem, we consider the
{\it dynamics of the orbits}:
we choose as variables the parameters of two orbits $o_1$ and $o_2$,
and we evolve the system with the following procedure:
we choose one of the points of intersection, we consider
at that point
two fictitious particles on the two orbits,
we choose an impact parameter $\ve n$ and we consider the resulting orbits
$o_1'$ and $o_2'$
after the collision of the two particles. We will show, in
Theorem \ref{teo:inter}, that
for a sufficiently low value of
the total energy,
the new orbits $o_1'$ and $o_2'$ remain intersecting ellipses,
for any choice between the two
possible collision points and for any choice of
the impact parameter $\ve n$. The key points of the proof are
the conservation of the total momentum,
the decrease of the energy, and the fact that
the condition of intersection of the orbits is a feature
preserved by the dynamics.
\vskip 3pt
For the orbit $o_i$ of the particle $i$,
$\omega_i$ is the angle between the $x$ axis and the
position of the periapsis (the point of the orbit
at minimal distance from the attractive center);
$\vartheta_i$ is the true anomaly,
i.e. the angle in the orbital plane between the particle
and the periapsis
of the orbit;
$E_i= \ve v_i^2/2 - 1/r_i$ is the specific energy (i.e. the energy
per unit mass);
$L_i=r_i^2 \dot \vartheta_i$ is the specific angular momentum;
$e_i = \sqrt{1 + 2 E_i L_i^2}$ is the eccentricity.
The position of the particle $i$ on its orbit is given by
\begin{equation}
\label{ri}
\ve x_i = \frac {L_i^2}{1+e_i\cos \vartheta_i}
\binom {\cos(\vartheta_i + \omega_i)}{\sin(\vartheta_i + \omega_i)}
\end{equation}
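For completeness, the orbital elements used throughout the paper can be computed from an instantaneous planar state; the following small helper (a sketch in the units of the paper, i.e. with gravitational parameter equal to one) is sometimes convenient, e.g. to check that a collision has left both orbits elliptic.
\begin{verbatim}
import numpy as np

def orbital_elements(x, v):
    # Specific energy, specific angular momentum (z component) and eccentricity
    # of a planar Keplerian orbit, in units where the gravitational parameter is 1.
    r = np.linalg.norm(x)
    E = 0.5 * np.dot(v, v) - 1.0 / r
    L = x[0] * v[1] - x[1] * v[0]
    e = np.sqrt(max(0.0, 1.0 + 2.0 * E * L**2))
    return E, L, e

# Example: a circular orbit of radius 1 has E = -1/2, L = 1, e = 0.
print(orbital_elements(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
\end{verbatim}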
\vskip3pt
All the quantities
$E_i$ and $L_i$ are conserved during the Keplerian motion,
but only the combinations $m_1E_1+m_2E_2$
(the total energy)
and $m_1L_1+m_2L_2$ (the total angular momentum)
are conserved in elastic collisions; moreover,
$m_1E_1+m_2E_2$ decreases if the collisions are inelastic.
We fix the initial values of the specific energy
and the specific angular momentum of the whole system:
\begin{equation}
\label{medie}
\begin{array}{l}
L = \mu_1 L_1 + \mu_2 L_2 \\
E = \mu_1 E_1 + \mu_2 E_2
\end{array}
\end{equation}
We can assume $L\ge 0$ without loss of generality,
and we consider $E<0$, which is the case
of a couple of elliptic orbits.
We rewrite the orbital parameters in terms of the
differences of the energy and angular momentum:
\begin{equation}
\label{differenze}
\begin{array}{l}
\delta E = E_1 - E_2\\
\delta L = L_1 - L_2
\end{array}
\ \ \ \text{ from which } \ \ \
\begin{array}{l}
E_1 = E +\mu_2 \delta E\\
E_2 = E -\mu_1 \delta E\\
\end{array}
\ \ \ \text{ and } \ \ \
\begin{array}{l}
L_1 = L +\mu_2 \delta L\\
L_2 = L -\mu_1 \delta L.\\
\end{array}
\end{equation}
From these quantities we can obtain the
shapes (i.e. the
eccentricities) and the dimensions of the orbits.
Their positions in the reference frame are specified by the angles $\omega_i$, but
we are only interested in the relative position of the two orbits,
which is given by $\delta \omega = \omega_1 -\omega_2$.
\vskip3pt
\begin{figure*}[h]
\includegraphics[scale=0.3]{esia}
\includegraphics[scale=0.3]{esib}
\caption{$\mu_1 = 0.45$: the region of the admissible
values for $EL^2 = -0.6$ and $EL^2 = -0.4$.}
\label{fig:es}
\end{figure*}
Not all the values of $\delta E$ and $\delta L$ correspond
to a couple of orbits: namely,
the energy $E_i$ and the angular momentum $L_i$ must satisfy the
condition $0 \le e_i^2 = 1 + 2 E_iL_i^2$, i.e.
\begin{equation}
\label{esistenza}
\frac 1{\mu_2} \left( -E -
\frac 1{2L_1^2}\right) \le \delta E \le
\frac 1{\mu_1} \left( E +
\frac 1{2L_2^2}\right)
\end{equation}
These inequalities define, in the space $\delta L,\delta E$, the region
of admissibility
$$A = \{ (\delta L,\delta E)|\, \text{eq.s \eqref{esistenza} hold}\},
$$
which we show in fig. \ref{fig:es}. The boundary of $A$
corresponds to $e_1 = 0$, i.e. $\delta E = -\frac 1{\mu_2} \left( E +
\frac 1{2L_1^2}\right)$, and to
$e_2=0$, i.e.
$\delta E = \frac 1{\mu_1} \left( E +
\frac 1{2L_2^2}\right)$.
As follows from easy calculations, the topology of the set $A$
depends on the value of $EL^2$.
If $EL^2<-1/2$, as in fig. \ref{fig:es} (a), the region is not connected
(this condition is equivalent to the non-existence
of the `mean orbit', i.e. the orbit of energy $E$ and angular momentum $L$,
whose eccentricity would be $\sqrt{1+2EL^2}$).
If $-1/2\le EL^2 < 0$, as in fig. \ref{fig:es} (b),
the region is connected.
If $\delta E \in \left(E/\mu_1,-E/\mu_2\right)$
the orbits are both elliptic,
while if $\delta E>-E/\mu_2$ (i.e. $E_1>0$) the first orbit is hyperbolic
and if $\delta E<E/\mu_1$ (i.e. $E_2 > 0$) the second orbit is hyperbolic.
If $\delta L \in \left( -L/\mu_2, L/\mu_1\right)$ both the particles
move counterclockwise, while if $\delta L = L/\mu_1$
(i.e. $L_2=0$) or $\delta L = -L/\mu_2$ (i.e. $L_1 = 0$)
one of the orbits degenerates.
\vskip3pt
We fix $\mu_1 \le \mu_2$ without loss of generality,
and we consider the case $E<0$.
\begin{theorem}
\label{teo:inter}
Let
\begin{equation}
\label{sigma}
\sigma= \sigma(\mu_1,\mu_2) =
- \frac{(1-e^2)(\mu_1^2 + \mu_2^2e)^2}{2\mu_2 e^2}
\end{equation}
where
\begin{equation}
\label{e-sol}
e=\left(\sqrt{(\mu_1/\mu_2)^4 + 8(\mu_1/\mu_2)^2}-
(\mu_1/\mu_2)^2\right)/4.
\end{equation}
If initially
\begin{equation}
\label{condizione}
EL^2 < \sigma
\end{equation}
the orbits remain elliptic for all times.
Moreover, $|L_1|$ and $|L_2|$ are bounded and
$e_i \le c_i < 1$
for suitable constants $c_1$, $c_2$.
\end{theorem}
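The threshold $\sigma(\mu_1,\mu_2)$ is elementary to evaluate numerically; the following sketch simply restates \eqref{sigma}-\eqref{e-sol} and, for equal masses, reproduces the value $EL^2=-27/64$ quoted in the remarks below.
\begin{verbatim}
import numpy as np

def sigma_critical(mu1, mu2):
    r = mu1 / mu2
    e = (np.sqrt(r**4 + 8.0 * r**2) - r**2) / 4.0      # eccentricity of Eq. (e-sol)
    return -(1.0 - e**2) * (mu1**2 + mu2**2 * e)**2 / (2.0 * mu2 * e**2)

print(sigma_critical(0.5, 0.5), -27.0 / 64.0)   # equal masses: both give -0.421875
print(sigma_critical(0.45, 0.55))               # the mass ratio used in the figures
\end{verbatim}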
\noindent
{\bf Remarks.}
\begin{enumerate}[{ i.}]
\item As we will see in the proof, if the orbits intersect,
the value of $EL^2$ is bounded from below:
$EL^2 \ge - 1/2$.
\item According to the spatial scale invariance of the
problem,
the behavior of the system depends only on
the product $EL^2$ of the two invariant quantities $E$ and $L$.
\item In the proof, we study the condition of intersection
in terms of the orbital parameters;
for a similar analysis see \cite{laskar0}.
\item If $M$ is the mass of the central body, $G$ the Newton constant
and $k=GM$ the standard gravitational parameter of the system,
the semi-major axis of the orbit $i$ is $L_i^2/(k(1-e_i^2))$,
where the
eccentricity is $e_i=\sqrt{ 1 + 2 L_i^2 E_i /k^2}$.
The condition \eqref{condizione} must be rewritten in terms of $EL^2/k^2$.
\item The invariant region is large, from two points of view:
it contains couples of orbits which can be
very different, and the orbits live in a
huge region of the configuration space. For instance,
in the case $\mu_1 = \mu_2=0.5$, the critical value is $EL^2 = -27/64$,
and, for this value,
if one of the particles has the same orbit as the Earth, the
other particle can intersect the orbit of Jupiter,
and can arrive at $6.95$ AU from the Sun.
If we consider two orbits with $L_1=L_2=L$ and $E_1=E_2=E$, their
eccentricity is $\sqrt{5/32} \approx 0.40$ and the
ratio between the apoapsis and the periapsis distances is approximately $2.3$.
\item We have fixed the attractive center, therefore
we are not considering here a three body problem,
in which we can fix only
the center of mass.
The planetary three body system
is usually described as a perturbation of the system obtained
in the canonical heliocentric variables neglecting
the terms of order $m_1m_2$ (see e.g. \cite{laskar}).
This unperturbed system
is a system of two particles moving independently on
Keplerian orbits, with respect to the position of the large body.
The standard gravitational parameters are $G(M+m_i)$, and can be
different for the two particles. Our results also hold in
this case, with minor modifications.
\item We have done some preliminary numerical simulations for this model,
with $n>2$ particles.
For $n=3$, if initially all the particles can
collide with the others, in the elastic case
one of the particles leaves the system after a few collisions;
in the inelastic case one of the particles
stops interacting with the others after a few collisions,
and the orbits of the other two particles converge.
A system of a large amount of colliding particles
exhibits a complex behavior:
a certain number of particles (decreasing with the
parameter of inelasticity $\eps$)
leaves the system, while the other particles
asymptotically separate into non-interacting
clusters of one particle or of two colliding particles.
Let us note that for some different (somewhat artificial) inelastic models,
one can prove the existence of
``ringlets'', i.e. the existence of a state of $n$ particles
which do not cease to interact, and whose
orbits converge (see \cite{kawai}).
\end{enumerate}
\vskip3pt
\proof
\par
\noindent
We prove the theorem in the case of elastic collisions, for which
$E$, $L$ and $EL^2$ are conserved quantities, so that the condition
\eqref{condizione} is invariant under the dynamics.
In the case of inelastic collisions, the thesis
follows from the fact that $L$ is conserved while $E$ and
$EL^2$ can only decrease, so that the condition \eqref{condizione}
is invariant under the dynamics also in this case.
The proof follows from the fact that
if $EL^2$ is sufficiently close
to $-0.5$, and the orbits intersect,
then the two orbits are elliptic, as we now show.
The two orbits intersect if
$\ve x_1 = \ve x_2$ for some value of the anomalies
$\vartheta_1$ and $\vartheta_2$.
Using eq. \eqref{ri}, this condition is expressed by the
equalities
$$\vartheta_2 - \vartheta_1 = \omega_1 - \omega_2 = \delta \omega
\ \ \text{ and } \
L_1^2(1 + e_2 \cos \vartheta_2) = L_2^2 (1 + e_1 \cos \vartheta_1).$$
Inserting $\vartheta_2 = \vartheta_1 + \delta \omega$
in the last equation,
we obtain an equation in the unknown $\vartheta_1$ that can be solved if and
only if
\begin{equation}
\label{interserzione}
e_1 e_2 \cos \delta \omega \le 1 + L_2^2 E_1 + L_1^2 E_2
\end{equation}
Let us define the set of the values of $(\delta L,\delta E)$
for which two orbits of parameters $L_1,E_1$ and $L_2,E_2$
intersect if the angle between the periapsides is $\delta \omega = \eta$:
\begin{equation}
\label{def:I}
I_{\eta } = \{ (\delta L,\delta E)\in A |\,
e_1 e_2 \cos\eta \le
1 + L_2^2 E_1 + L_1^2 E_2 \}
\end{equation}
By definition
$$I_{\eta_1} \subset I_{\eta_2}\ \ \text{ if } \eta_1 < \eta_2,$$
and in particular $I_{\eta} \subset I_{\pi}$ if $\eta \in [0,\pi]$.
This implies that, if we
rotate two intersecting orbits
in such a way that the two periapsides become in opposition
($\delta \omega = \pi$), we obtain two orbits which intersect.
The intersection condition is invariant under the dynamics, so
all the values of $(\delta L,\delta E)$
reached during the evolution are in the set
\begin{equation}
\label{Ipi}
I_{\pi}= \{ (\delta L,\delta E)\in A |\,
e_1 e_2 \ge
- (1 + L_2^2 E_1 + L_1^2 E_2) \}
\end{equation}
Therefore the set $I_{\pi}$ is invariant
under the dynamics.
Now we show that $I_{\pi}$ is contained
in the region in which both the orbits are elliptic,
if $EL^2$ is sufficiently small.
The set $I_{\pi}$, as defined in \eqref{Ipi}, is the union of
the set of values of $\delta L$ and $\delta E$ in $A$ which solve
\begin{equation}
\label{condizione-quadra}
e_1^2 e_2^2 = (1+2E_1L_1^2) (1+2E_2L_2^2) \ge
(1+L_2^2 E_1 + L_1^2 E_2)^2
\end{equation}
provided that
\begin{equation}
\label{provided}
1+L_2^2 E_1 + L_1^2 E_2 \le 0,
\end{equation}
and the set of values which solve
\begin{equation}
\label{pimezzi}
1+L_2^2 E_1 + L_1^2 E_2 \ge 0.
\end{equation}
Note that this last inequality identifies the region $I_{\pi/2}$, i.e.
the region of intersecting orbits with perpendicular major axes,
and it is equivalent to
\begin{equation}
\label{pimezzi-risolta}
\delta E ( \mu_1\mu_2 \delta L + (\mu_1 - \mu_2 ) ) \le
1 + 2 E L^2 - (\mu_1 - \mu_2 ) E \delta L.
\end{equation}
Eq. \eqref{condizione-quadra} is equivalent to
\begin{equation}
\label{condizione-diseq}
\delta E^2 (\mu_1 L_1^2 + \mu_2 L_2^2)^2 - 2 \delta E (L_1^2 - L_2^2)
( 1 + E(\mu_1L_1^2+\mu_2 L_2^2)) + E^2 (L_1^2-L_2^2)^2
\le 0
\end{equation}
which can be solved if
\begin{equation}
\label{condizione-discriminante}
(L_1^2-L_2^2)^2 ( 1 + 2E (\mu_1L_1^2+\mu_2 L_2^2)) \ge 0.
\end{equation}
If $|L_1|\neq |L_2|$,
this condition is equivalent to
$1+2E(L^2+\mu_1\mu_2 \delta L^2) \ge 0$;
then the set $I_{\pi}$ is non-void if and only if
$1+2EL^2 \ge 0$, and it is bounded
by the condition
\begin{equation}
\label{limiti}
\delta L^2 \le \frac { 1 - 2|E| L^2}{2|E|\mu_1\mu_2}.
\end{equation}
The boundary of the region identified by the
inequality \eqref{condizione-diseq} is given by the functions
\begin{equation}
\label{soluzione}
\delta E = \frac {L_1^2 - L_2^2}{(\mu_1L_1^2 + \mu_2 L_2^2)^2}
\left( 1 + E(\mu_1L_1^2 + \mu_2 L_2^2)
\pm \sqrt{ 1 + 2E(\mu_1L_1^2+\mu_2 L_2^2)}\right)
\end{equation}
which have the sign of $L_1^2 - L_2^2$.
\begin{figure*}
\includegraphics[scale=0.3]{intera}
\includegraphics[scale=0.3]{interb}
\caption{$\mu_1 = 0.45$: the set $I_{\pi}$ for
$EL^2=-0.41$ and $EL^2 = -0.445$.
Over the line $E_1 =0$,
the orbit $1$ is hyperbolic. }
\label{fig:inter}
\mathrm{e}nd{figure*}
\vskip3pt
In figure \ref{fig:inter} we
show the region $I_{\pi}$,
identified by
the inequality \eqref{pimezzi-risolta}
(region $I_{\pi/2} \subset I_{\pi}$) and the inequality
\eqref{condizione-diseq}
(region $I_{\pi}\backslash I_{\pi/2}$).
In figure \ref{fig:inter} (a), the value of
$EL^2$ is $-0.41$, and the region $I_{\pi}$
intersects the region $E_1 \ge 0$ (i.e.
$\delta E> |E|/\mu_2$). Then, after a collision, one of the outgoing orbits
can become hyperbolic. In figure \ref{fig:inter} (b),
the value of $EL^2$ is smaller and the invariant
region $I_{\pi}$ is completely contained in the region $\delta E \in
\left(-|E|/\mu_1, |E|/\mu_2\right)$ in which both the
orbits are elliptic.
Then, whatever the details of
the collisions are, the two orbits remain elliptic,
with $e_i\le c_i < 1$ for some constants $c_1,c_2$,
and $|L_1|,\ |L_2|$ are bounded via \eqref{limiti} and \eqref{medie}.
\vskip3pt
Now we will show that the behavior of the system is
driven by $EL^2$: there exists a critical value which separates
the two cases.
Let us define
\begin{equation}
\label{dbar}
\bar d(\delta L, \delta E) = \frac {L_1^2}{1+e_1} -
\frac {L_2^2}{1-e_2}.
\end{equation}
This quantity is the distance between the periapsis of orbit
1 and the apoapsis of orbit 2, in the case $\delta\omega = \pi$.
The value of $\bar d$ is $0$ on the boundary of $I_{\pi}$
in the first quadrant, out of $I_{\pi/2}$ (see fig. \ref{fig:inter}).
Then the critical values of $E$ and $L$ are such that
$$E_1 = 0,\ e_1 = 1,\ \bar d(\delta E, \delta L) = 0,\
\frac {\partial\bar d}{\partial \delta L} (\delta E, \delta L) = 0$$
(the gradient of $\bar d(\delta E, \delta L)$ is vertical
at the point of tangency to the line $E_1 = 0$).
By differentiating $\bar d$ with respect to $\delta L$ we obtain
\begin{equation}
\label{derivata}
\partial_{\delta L} \bar d =
2 \mu_2 \frac {L_1}{1+e_1} + 2 \mu_1 \frac {L_2}{1-e_2}
- 2 \mu_2 E_1 L_1 \frac {L_1^2}{e_1(1+e_1)^2} +
2 \mu_1 E_2 L_2 \frac {L_2^2}{e_2(1-e_2)^2}
\end{equation}
which, using $-2E_iL_i^2 = 1-e^2_i$, becomes
\begin{equation}
\partial_{\delta L} \bar d =
2 \mu_2 \frac {L_1}{1+e_1} + 2 \mu_1 \frac {L_2}{1-e_2}
+ \mu_2 L_1 \frac {1-e_1^2}{e_1(1+e_1)^2}
- \mu_1 L_2 \frac {1-e_2^2}{e_2(1-e_2)^2}
= \frac {\mu_2}{e_1} L_1 - \frac{\mu_1}{e_2} L_2.
\end{equation}
The condition $\partial_{\delta L} \bar d=0$ and the definition of $L$
in eq. \eqref{medie} allow us to express $L_1$, $L_2$
in terms of $e_1$, $e_2$:
\begin{equation}
\label{valoriL}
\begin{array}{l}
L_1 = \mu_1 e_1 L/(\mu_1^2 e_1 + \mu_2^2 e_2)\\
L_2 = \mu_2 e_2 L/(\mu_1^2 e_1 + \mu_2^2 e_2)
\end{array}
\end{equation}
Using these expressions in $\bar d = 0$ with $e_1=1$,
we obtain the following equation
for $e_2$:
$$2\mu_2^2e_2^2 = \mu_1^2 (1-e_2),$$
which has only one solution in $(0,1)$, given
by eq. \eqref{e-sol}.
Substituting this value in the expression of $L_2$,
we obtain the critical value of $EL^2$ as in \eqref{condizione},
imposing $1+ 2 E_2 L_2^2 = e_2^2$, with $E_2 = E/\mu_2$ and $e_1=1$:
$$EL^2 = - (1-e_2^2)(\mu_1^2 + \mu_2^2e_2)^2/(2\mu_2 e_2^2).$$
Note that we can find a similar condition for which
the region $I_{\pi}$ is tangent to the line $E_2 = 0$,
but in the case $\mu_1 \le \mu_2$ this second critical value
of $EL^2$ is greater than the previous one, so it can be ignored.
\qed
\section{The invariant region for
two bodies in $\mathbb R^2$}
\label{sezione:estensioni}
In this section, we analyze the
case of two extended bodies and the case
of point particles which interact by means of a short-range potential.
\vskip3pt
\begin{theorem}
\label{teo:D}
We consider two circular bodies in $\mathbb R^2$
of radii $R_1$ and $R_2$,
interacting by means of elastic or inelastic collisions.
If $EL^2 < \sigma(\mu_1,\mu_2)$, with $\sigma$ as defined in
eq. \eqref{sigma},
and $D=R_1+R_2$ is sufficiently small, then the two bodies remain on
elliptic orbits.
\end{theorem}
\proof
\par\noindent
We show that if two orbits have points whose distance
is less than or equal to $D$, and
$EL^2<\sigma$ and $D$ is sufficiently small, then
the orbits are elliptic.
We remark that the condition on the distance is preserved
by the dynamics.
Having fixed $L_1,L_2,E_1,E_2$ and assuming that the two bodies can collide,
we need to distinguish two situations:
either there exists $\delta \omega$ such that
the orbits intersect, and then $(\delta L,\delta E)\in I_{\pi}$,
or the orbits do not intersect for any $\delta \omega$;
in this case one of them, say orbit $2$,
is contained in the other.
But
$\min_{\delta \omega} \min_{\vartheta_1,\vartheta_2} |\ve x_1 - \ve x_2|\le D,$
and the minimum is reached for $\delta \omega=\pi$, so that
$0 < \bar d(\delta L,\delta E) \le D$
(note that
if the orbit of the particle $1$ is contained in the orbit of
the particle $2$,
we have to define $\bar d$ as in \eqref{dbar}, but exchanging the indexes
$1\leftrightarrow 2$).
If $EL^2< \sigma$, the distance between
the level set $\bar d = 0$ (the boundary of $I_{\pi}$) and
the critical lines $E_1=0$, $E_2=0$, is strictly positive.
Then, if $D$ is sufficiently small, the set
$\bar d\le D$ does not intersect the region in which
$E_1\ge 0$ or $E_2\ge 0$,
and this proves the theorem. \qed
\vskip.3cm
\begin{figure*}
\includegraphics[scale=0.3]{livello2}
\includegraphics[scale=0.3]{livello3}
\caption{$\mu_1 = 0.45$: the values of $\bar d$
in grayscale,
in the case $EL^2= -0.445$
and $EL^2=-0.52$.
}
\label{fig:lev}
\mathrm{e}nd{figure*}
In figure \ref{fig:lev} we show
the level sets of $\bar d$.
In figure \ref{fig:lev} (a), $EL^2 = -0.445$
and the invariant set $I_{\pi}$ and the values of
$\bar d$ in the complementary region are shown: the black color corresponds
to $\bar d = 0$, the
white color corresponds to
$\bar d \ge L^2/10$, while the grays correspond to the values
$\bar d \in (0,L^2/10)$.
The critical value of $D$ is approximately $0.034 L^2$.
Figure \ref{fig:lev} (b)
refers to the case $EL^2 = -0.52$, in which the
set $I_{\pi}$ is empty.
Nevertheless, the set $\bar d \le D$
is invariant and is contained in the region of elliptic orbits
if $D$ is sufficiently small. In this plot,
the black color corresponds
to $\bar d = 0.2 L^2$, the white to $\bar d \ge 0.3 L^2$.
In this case,
the critical value for $D$ is approximately $0.25 L^2$.
\vskip3pt
We are not able to give a simple expression for the critical values of
$EL^2$ and $D$,
except for the case $\mu_1 = \mu_2 = 1/2$, in which
the particles have the same mass.
\begin{theorem}
\label{teo:D2}
If $\mu_1 = \mu_2 = 1/2$, the condition on $EL^2$ and $D$ under which the two bodies remain on elliptic orbits is
$$EL^2 < - \frac{(1-e^2)(1+e)^2}{16e^2}\ \ \text{ where }
e = \gamma - \sqrt{\gamma^2 - \gamma +1}\ \ \text{ with } \gamma = 2L^2/D > 1
$$
\end{theorem}
\proof
\par\noindent
We proceed as in the proof of Theorem \ref{teo:inter}.
Assuming $\mu_1 = \mu_2 = 1/2$ and $EL^2<-27/64$,
the critical condition $\partial_{\delta L} \bar d(\delta L, \delta E) = 0$
allows us to obtain the values of $L_1$ and $L_2$
as in eq. \eqref{valoriL}, which becomes
\begin{equation}
\label{valori2L}
\begin{array}{l}
L_1 = 2e_1 L/(e_1 + e_2)\\
L_2 = 2e_2 L/(e_1 + e_2)
\end{array}
\end{equation}
Using these values in $\bar d = D$ with $e_1=1$,
we obtain the following equation for $e_2$
$$2L^2 (1-2e_2) = D (1-e_2^2),$$
which is solved in $(0,1)$ by
$$e_2 = \gamma - \sqrt{\gamma^2 - \gamma + 1}
\ \ \text{ where } \ \ \gamma = 2L^2/D
\ \ \text{ with } \ \ D<2L^2.
$$
The corresponding value of $EL^2$
is given by
$EL^2 = - (1-e_2^2)(1+e_2)^2/(16e_2^2)$.\qed
\vskip.3cm
In figure \ref{fig:funz} we plot these values as a function
of $D/L^2$. Let us recall that for $EL^2<-0.5$ the
two orbits do not intersect.
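\vskip3pt
\noindent
As a numerical complement (a Python sketch, not part of the paper; the function name is only illustrative), the curve of figure \ref{fig:funz} can be reproduced directly from the formulas in the proof of Theorem \ref{teo:D2}:
\begin{verbatim}
import math

def critical_EL2_equal_masses(D_over_L2):
    # mu1 = mu2 = 1/2; requires D/L^2 < 2 so that gamma > 1
    gamma = 2.0 / D_over_L2
    e2 = gamma - math.sqrt(gamma**2 - gamma + 1)
    return -(1 - e2**2) * (1 + e2)**2 / (16 * e2**2)

for d in (0.01, 0.5, 1.0, 1.5):
    print(d, critical_EL2_equal_masses(d))
\end{verbatim}
As $D/L^2\to 0$ the value tends to $-27/64$, while for larger $D/L^2$ it drops below $-1/2$, the threshold under which the two orbits cannot intersect.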
\begin{figure}
\includegraphics[scale=0.7]{funz}
\caption{$\mu_1 = \mu_2 = 0.5$: the critical value of
$EL^2$ as a function of $D/L^2$.
}
\label{fig:funz}
\end{figure}
\vskip3pt
The last extension we consider in the two dimensional case is that of two
point particles interacting by means of a force of symmetric potential
energy $V(|\ve x_1 - \ve x_2|)$ with compact support.
\begin{theorem}
\label{teo:V}
Let us consider two point particles interacting by means of a force
of potential energy $V=V(|\ve x_1 - \ve x_2|)$, such that
$V(r) = 0$ if $r\ge D$ for some $D>0$
and $V(r)\ge -U$ with $U\ge 0$.
If $EL^2 < \sigma(\mu_1,\mu_2)$, with $\sigma$ as
in eq. \eqref{sigma},
and $D$ and $U$ are sufficiently small,
then the particles remain on bounded orbits.
\end{theorem}
\proof
The specific
angular momentum $L=\mu_1 L_1 + \mu_2 L_2$ is conserved
also in this case, because the potential energy depends only on the distance $|\ve x_1 - \ve x_2|$.
If $|\ve x_1 - \ve x_2| \ge D$ the energy of the interaction is
zero and the specific energy of the system is exactly
$$E= \mu_1 \left( \frac {\ve v_1^2}2 - \frac1{|\ve x_1|} \right) +
\mu_2 \left( \frac {\ve v_2^2}2 - \frac1{|\ve x_2|} \right)$$
Therefore,
when the two particles leave the region of
the interaction, we can apply Theorem \ref{teo:D},
concluding that the particles remain on elliptic orbits until the
next interaction.
To complete the proof, we have to discuss
the motion of the particles during the interaction,
i.e. when their distance is less than $D$.
If $|\ve x_1 - \ve x_2| < D$, we can consider
the two 'osculating' Keplerian orbits, i.e. the Keplerian orbits
which correspond to the two position--velocity pairs $(\ve x_1,\ve v_1)$
and $(\ve x_2,\ve v_2)$.
The specific energies of these two orbits
are
$$E_1 = \ve v_1^2/2 - 1/|\ve x_1|,\ \ \ \ E_2 = \ve v_2^2/2 - 1/|\ve x_2|.$$
These quantities are not the specific energies of the two particles
(because the contribution of the interaction is not zero), but verify
$$\tilde E = \mu_1 E_1 + \mu_2 E_2 = E - V(|\ve x_1 - \ve x_2|)/(m_1+m_2)
\le E + U/(m_1+m_2)
$$
by the conservation of the total energy.
If $EL^2 < \sigma$ and $U$ is sufficiently small, we have also that
$\tilde E L^2 < \sigma$, then
we can apply Theorem \ref{teo:D}, using $\tilde E$ instead of $E$.
Therefore, if $D$ is sufficiently small,
as in the hypothesis of Theorem \ref{teo:D},
the two osculating orbits are elliptic, with bounded value of $L_1$, $L_2$,
and $e_i\le c_i < 1$, as follows from the compactness
of the set
$I_{\pi} \cup \{ \bar d(\delta L,\delta E) \le D\}
\subset \left(-L/\mu_1,L/\mu_2 \right)
\times \left(E/\mu_1, -E/\mu_2\right)$.\qed
\section{The invariant region for two bodies in $\mathbb R^3$}
\label{sezione:dim3}
Here we discuss
the three dimensional case.
We indicate with $\ve L_i=\ve x_i \wedge \ve v_i$
the vector which expresses the specific angular momentum of the
particle $i$ in the position $\ve x_i$ with velocity $\ve v_i$,
and with $\ve L$
the specific angular momentum of the whole system
$\ve L = \mu_1 \ve L_1 + \mu_2 \ve L_2$, which is a conserved vector.
In this case, the analogues of
Theorems \ref{teo:inter}, \ref{teo:D}, \ref{teo:V} hold,
where the role of $L$ is played by $|\ve L|$. We summarize these results
in the following theorem.
\begin{theorem}
\label{teo:3d} \phantom{nullaaaaa}
\begin{enumerate}
\item The invariant region for the orbital dynamics in $\mathbb R^3$, defined
as in section \ref{sezione:invariante}, is given by
$$E|\ve L|^2 < \sigma$$
with $\sigma=\sigma(\mu_1,\mu_2)$ defined as in eq. \eqref{sigma}.
\item The invariant region for the collisional dynamics of
two hard spheres in $\mathbb R^3$, of radii $R_1$ and $R_2$, with
$D=R_1+R_2$,
is given by $E|\ve L|^2 < \sigma$
with $D$ sufficiently small.
\item The invariant region for two point particles interacting by means of a
potential energy $V$
as in Theorem \ref{teo:V}, is given by $E|\ve L|^2<\sigma$ with $D$ and
$U$ sufficiently small.
\end{enumerate}
\end{theorem}
\proof
We prove the theorem starting from the last case, which includes the others
as particular cases.
As in the proof of Theorem \ref{teo:V}, we define
$$E_1= 1/2 \ve v_1^2 - 1/|\ve x_1|, \ \ \
E_2= 1/2 \ve v_2^2 - 1/|\ve x_2|, \ \ \
\tilde E = \mu_1 E_1 + \mu_2 E_2$$
and we note that
$$\begin{array}{lll}
\tilde E = E &\ \ \ \ \text{ if }\ \ \ \ & |\ve x_1 - \ve x_2| \ge D \\
\tilde E \le E + U/(m_1+m_2) &\ \ \ \ \text{ if } \ \ \ \
& |\ve x_1 - \ve x_2| < D.
\end{array}
$$
We indicate with $\tilde L_i = |\ve L_i|$ the modulus of the
angular momentum of the orbit of the particle $i$, and we define
$$\tilde L = \mu_1 \tilde L_1 + \mu_2 \tilde L_2$$
which verifies
$$|\ve L| = |\mu_1 \ve L_1 + \mu_2 \ve L_2| \le \tilde L
\ \
\text{ and } \ \
E\tilde L^2 \le E|\ve L|^2$$
($E$ is negative).
Moreover
$$\tilde E \tilde L^2 \le \left(E + \frac{U}{m_1+m_2} \right) \tilde L^2$$
hence, if $E|\ve L|^2 < \sigma$ as in the hypothesis,
for $U$ sufficiently small,
it also holds
\begin{equation}
\label{condtilde}
\tilde E \tilde L^2 < \sigma
\end{equation}
Let us indicate with $o_i$ the Keplerian orbit
identified by the position $\ve x_i$ and the velocity $\ve v_i$;
its energy is $E_i$ and its angular momentum is $\ve L_i$.
If the particles can interact, $o_1$ and $o_2$ have
points at distance less than $D$.
We consider the orbits $\tilde o_1$ and $\tilde o_2$
obtained by rigidly rotating $o_1$ and $o_2$ in $\mathbb R^3$ around the center,
in such a way that
$\tilde o_1$ and $\tilde o_2$ lie in the same plane,
and the periapsides of $\tilde o_1$ and $\tilde o_2$
are in opposition.
The energy and the eccentricity of the orbits $\tilde o_i$ are
the same as those of the orbits $o_i$, while we can identify the
angular momentum of $\tilde o_i$ with the positive scalar
quantity $\tilde L_i$.
These two planar orbits either intersect or have points at distance less than
$D$. In the first case, we can apply Theorem \ref{teo:inter}
using \eqref{condtilde}, and we can conclude that $\tilde o_1$,
$\tilde o_2$, and hence $o_1$ and $o_2$, are elliptic.
In the second case, we can apply Theorem \ref{teo:D}, and we can again
conclude that, if $D$ is sufficiently small, the orbits are elliptic.
The case of spherical bodies can be considered as a particular
case, in which $V=+\infty$, if $|\ve x_1 - \ve x_2|\le D$. Now $U=0$
and we have only to require a sufficiently small
value of $D$. Finally, the case of the orbital dynamics
can be considered as the particular case in which
$D=0$. The hypothesis
$E|\ve L|^2<\sigma$ is then sufficient to obtain the conclusion. \qed
\vskip3pt
\noindent
{\bf Remarks.}
\begin{enumerate}[{i.}]
\item The values of $D$ in Theorem \ref{teo:D}
can be very large with respect
to the scale $L^2$ of the semi-axes of the orbits, but in that case
the system is in the region $EL^2 < -1/2$,
in which the orbits do not intersect. Therefore, the system is made of
two particles which can interact only by means of
grazing collisions.
\item It can be interesting to analyze the
case of Theorem \ref{teo:V} when $V$ is unbounded from below.
The particles can leave the
system only if their distance remains less than $D$, and, in this case,
we can expect that the center of mass of the two particles
moves on an approximately elliptic orbit.
On the other hand, there are no a priori bounds on the
kinetic energy or on the position of this center of mass.
\item It can be also interesting to analyze the case of $n>2$
particles, with positive radii. In particular it can be expected that
there exist stable ringlets in which the collisions are grazing.
\end{enumerate}
\vskip.5cm
\noindent{\bf Acknowledgments}
\thanks{The authors thank L. Biasco, E. Caglioti, G. F. Gronchi,
P. Negrini, for useful suggestions on this subject.}
\begin{thebibliography}{}
\bibitem{braich}
Brahic A.: {\it Systems of colliding bodies
in a gravitational field. I - Numerical simulation of the standard model},
Astron. Astroph. {\bf 54} no. 3, pp. 895--907 (1977).
\bibitem{bh}
Brahic A., H\'enon M.: {\it Systems of colliding
bodies in a gravitational
field: II. Effect of transversal viscosity},
Astron. Astroph. {\bf 59} no. 1, pp. 1--7 (1977).
\bibitem{djerassi} Djerassi S.:
{\it Collision with friction; Part A:
Newton's hypothesis}, Multibody Syst. Dyn. {\bf 21}, pp. 37--54
(2009) doi:10.1007/s11044-008-9126-2.
\bibitem{gronchi}
Gronchi G.F.: {\it On the stationary points of the
squared distance between two ellipses with a common focus},
SIAM Jour. Sci. Comp. {\bf 24} 1, pp. 61--80 (2002).
\bibitem{hatzes}
Hatzes A.P., Bridges F.G., Lin D.N.C.:
{\it Collisional properties of ice spheres at low impact velocities},
Royal Astr. Soc. {\bf 231}, pp. 1091--1115
(1988).
\bibitem{hertzsch} Hertzsch J.-M., Scholl H., Spahn F.,
Katzorke I.: {\it Simulation of collisions in planetary rings},
Astron. Astrophys. {\bf 320}, pp. 319--324 (1997).
\bibitem{kawai}
Kawai T., Shida K.: {\it An inelastic collision model for the
evolution of ``Planetary Rings''}, Jour. Phys. Soc. Jap. {\bf 59}
no. 1, pp. 381--388 (1990).
\bibitem{laskar0} Laskar J.:
{\it On the spacing of planetary systems},
Phys. Rev. Lett. {\bf 84} no. 15, pp. 3240--3243 (2000).
\bibitem{laskar} Laskar J., Robutel P.:
{\it Stability of the planetary three-body problem I},
Cel. Mech. Dyn. Astr. {\bf 62}, pp. 193--217 (1995).
\bibitem{marsden} Marsden J.E., Ross S.D.:
{\it New methods in celestial mechanics and mission design},
Bull. Am. Math. Soc. {\bf 43} 1, pp. 43--73
(2005).
\bibitem{planetary}
Papaloizou J.C.B.: {\it Planetary system formation}, Science {\bf 321}
pp. 777--778 (2008).
\bibitem{poincare} Poincar\'e H.:
{\it Les m\'ethodes nouvelles de la m\'ecanique c\'eleste} (1889), tome III,
chapitre XXII, Lib. Sci. Tech. A. Blanchard, Paris (1987)
\bibitem{poincare-cosm} Poincar\'e H., Vergne H.:
{\it Le\c{c}ons sur les hypoth\`eses cosmogoniques},
A. Hermann et fils, Paris (1911).
\end{thebibliography}
\end{document}
\begin{document}
\title[On certain localized version of uniform selection principles]{On certain localized version of uniform selection principles}
\author[ N. Alam, D. Chandra ]{ Nur Alam$^*$, Debraj Chandra$^*$ }
\newcommand{\newline\indent}{\newline\indent}
\address{\llap{*\,}Department of Mathematics, University of Gour Banga, Malda-732103, West Bengal, India}
\email{nurrejwana@gmail.com, debrajchandra1986@gmail.com}
\thanks{ The first author
is thankful to University Grants Commission (UGC), New Delhi-110002, India for granting UGC-NET Junior Research Fellowship (1173/(CSIR-UGC NET JUNE 2017)) during the tenure of which this work was done.}
\subjclass{Primary: 54D20, 54E15; Secondary: 54E35, 54E99}
\maketitle
\begin{abstract}
We intend to localize the selection principles in uniform spaces (Ko\v{c}inac, 2003) by introducing their local variations, namely locally $\Upsilon$-bounded spaces (where $\Upsilon$ is Menger, Hurewicz or Rothberger). It has been observed that the difference between the uniform selection principles and the corresponding local variations introduced here is significant enough to justify a study of these new notions. Certain observations using the critical cardinals (on the uniform selection principles, which have not been studied before in this context) as well as preservation-like properties (on the local versions) are presented. The interrelationships between the notions considered in this paper are outlined in an implication diagram. Certain interactions between these local variations are also investigated. We present several examples to illustrate the distinguishable behaviour of the new notions.
\end{abstract}
\noindent{\bf\keywordsname{}:} {Uniform space, selection principles, ${\sf M}$-bounded, ${\sf H}$-bounded, ${\sf R}$-bounded, locally ${\sf M}$-bounded, locally ${\sf H}$-bounded, locally ${\sf R}$-bounded, locally precompact, locally pre-Lindel\"{o}f.}
\section{Introduction}
There is a long and illustrious history of the study of selection principles in set-theoretic topology. This vast field of topology became more popular and has attracted a lot of researchers' attention in the last twenty-five years, after Scheepers' seminal paper \cite{coc1} (see also \cite{coc2}), where a systematic study of this fascinating field was initiated. Since then, various topological notions have been defined or characterized in terms of the classical selection principles. Interested readers may explore the survey papers \cite{SRSP,SCTU,Survey} for more information on this topic.
In 2003, Ko\v{c}inac \cite{SPUS} introduced the study of selection principles in uniform spaces by defining uniform analogues of the Menger, Hurewicz and Rothberger covering properties, namely ${\sf M}b$, ${\sf H}b$ and ${\sf R}b$ respectively, and differentiated these uniform variations from the classical Menger, Hurewicz and Rothberger properties. Interestingly it was observed that these uniform covering properties can also be defined in terms of star selection principles in uniform spaces.
Later in 2013, Ko\v{c}inac and K\"{u}nzi \cite{SPUR} further extended the study to quasi-uniform spaces. For more information about the uniform selection principles, we refer the reader to consult the papers \cite{SRSP,SCTU,Maio,kocqm,BPFS,UBFS} and references therein.
This paper is a continuation of the study of uniform selection principles started in \cite{SPUS} and is organised as follows. In Section 3, we present certain observations on uniform selection principles (that were not at all investigated earlier in uniform structures), which seem to be effective in our context.
In Section 4, we make an effort to extend the concept of uniform selection principles by introducing local variations of these selection principles, namely locally ${\sf M}b$, locally ${\sf H}b$ and locally ${\sf R}b$ spaces (for similar type of investigations, see \cite{dcna21}). Certain situations are described which witness that these local variations behave much differently from the uniform selection principles. Later in this section, preservation like properties of the new notions are investigated carefully and the interrelationships between these new notions are also discussed.
Section 5 is the final portion of this article, which is devoted to presenting illustrative examples. It is shown that the class determined by each of these local variations is strictly larger than the class of spaces with the corresponding uniform property. We also present examples exhibiting their distinctive behaviour.
\section{Preliminaries}
For undefined notions and terminologies, see \cite{Engelking}. We start with some basic information about uniform spaces.
Let $X$ be a set and let $A,B\subseteq X\times X$. We define $A^{-1}=\{(x,y) : (y,x)\in A\}$ and $A\circ B=\{(x,y) : \exists \,z\in X \:\text{such that}\: (x,z)\in A \;\text{and}\; (z,y)\in B\}$. The diagonal of $X\times X$ is the set $\Delta=\{(x,x) : x\in X\}$. A set $U\subseteq X\times X$ is said to be an entourage of the diagonal if $\Delta\subseteq U$ and $U^{-1}=U$. The family of all entourages of the diagonal $\Delta\subseteq X\times X$ will be denoted by $E_X(\Delta)$. If $F\subseteq X$ and $U\in E_X(\Delta)$, then $U[F]=\cup_{x\in F}U[x]$, where $U[x]=\{y\in X : (x,y)\in U\}$. Recall that a uniform space can be described equivalently in terms of either a diagonal uniformity or a covering uniformity \cite{Engelking} (see also \cite{Tukey,Borubaev}). In this paper we use diagonal uniformity to define a uniform space.
A uniformity on a set $X$ is a subfamily $\mathbb{U}$ of $E_X(\Delta)$ which satisfies the following conditions.
(i) If $U\in\mathbb{U}$ and $V\in E_X(\Delta)$ with $U\subseteq V$, then $V\in\mathbb{U}$; (ii) If $U,V\in\mathbb{U}$, then $U\cap V\in\mathbb{U}$; (iii) For every $U\in\mathbb{U}$, there exists a $V\in\mathbb{U}$ such that $V\circ V\subseteq U$; and (iv) $\cap\mathbb{U}=\Delta$. The pair $(X,\mathbb{U})$ is called a uniform space \cite{Engelking}.
Clearly, every uniform space $(X,\mathbb{U})$ is a topological space. The family $\tau_{\mathbb{U}}=\{O\subseteq X : \:\text{for each} \: x\in O\:\text{there exists a}\: U\in\mathbb{U}\:\text{such that}\: U[x]\subseteq O\}$ is the topology on $X$ generated by the uniformity $\mathbb{U}$. It is well known that the topology of a space $X$ can be induced by a uniformity on $X$ if and only if $X$ is Tychonoff (see \cite[Theorem 8.1.20]{Engelking}). If $(X,d)$ is a metric space, then the family $\{U_\varepsilon : \varepsilon>0\}$, where $U_\varepsilon=\{(x,y)\in X\times X : d(x,y)<\varepsilon\}$, is a base for the uniformity $\mathbb{U}$ induced by the metric $d$. Moreover, the topologies induced on $X$ by the uniformity $\mathbb{U}$ and by the metric $d$ coincide.
By a subspace $Y$ of a uniform space $(X,\mathbb{U})$ we mean the uniform space $(Y,\mathbb{U}_Y)$, where $Y\subseteq X$ and $\mathbb{U}_Y=\{(Y\times Y)\cap U : U\in\mathbb{U}\}$ (which is called the relative uniformity on $Y$).
Let $\mathcal{F}$ be a family of subsets of $X$. We say that $\mathcal{F}$ contains arbitrarily small sets if for every $U\in\mathbb{U}$ there exists a $F\in\mathcal{F}$ such that $F\times F\subseteq U$ (see \cite{Engelking}).
We say that $X$ is complete if every family $\mathcal{F}$ of closed subsets of $X$ which has the finite intersection property and contains arbitrarily small sets has nonempty intersection. A uniformity $\mathbb{U}$ on a set $X$ is complete if the space $(X,\mathbb{U})$ is complete \cite{Engelking}. A function $f:(X,\mathbb{U})\to(Y,\mathbb{V})$ between two uniform spaces is uniformly continuous if for every $V\in\mathbb{V}$ there exists a $U\in\mathbb{U}$ such that for all $x,y\in X$ we have $(f(x),f(y))\in V$ whenever $(x,y)\in U$. A bijective mapping $f:(X,\mathbb{U})\to(Y,\mathbb{V})$ is said to be a uniform isomorphism if both $f$ and $f^{-1}$ are uniformly continuous (see \cite{Engelking}). We say that two uniform spaces $X$ and $Y$ are uniformly isomorphic if there exists a uniform isomorphism of $X$ onto $Y$. It is clear that every uniform isomorphism is an open mapping.
$(X,\mathbb{U})$ is said to be precompact or totally bounded (resp. pre-Lindel\"{o}f) if for each $U\in\mathbb{U}$ there exists a finite (resp. countable) $A\subseteq X$ such that $U[A]=X$ \cite{Engelking,Borubaev,preLindelof}.
Every compact (resp. Lindel\"{o}f) uniform space is precompact (resp. pre-Lindel\"{o}f). Moreover, for a complete uniform space precompactness (resp. pre-Lindel\"{o}fness) and compactness (resp. Lindel\"{o}fness) are equivalent. It is easy to observe that if the uniformity $\mathbb{U}$ on a set $X$ is induced by a metric $d$, then $(X,\mathbb{U})$ is complete (resp. precompact, pre-Lindel\"{o}f) if and only if $(X,d)$ is complete (resp. precompact, pre-Lindel\"{o}f).
We now recall some definitions of topological spaces formulated in terms of classical selection principles from \cite{coc1,coc2}.
A topological space $X$ is said to be Menger (resp. Rothberger) if for each sequence $(\mathcal{U}_n)$ of open covers of $X$ there is a sequence $(\mathcal{V}_n)$ (resp. $(U_n)$) such that for each $n$ $\mathcal{V}_n$ is a finite subset of $\mathcal{U}_n$ (resp. $U_n\in\mathcal{U}_n)$ and $\cup_{n\in\mathbb{N}}\mathcal{V}_n$ (resp. $\{U_n : n\in\mathbb{N}\}$) is an open cover of $X$. A topological space $X$ is said to be Hurewicz if for each sequence $(\mathcal{U}_n)$ of open covers of $X$ there is a sequence $(\mathcal{V}_n)$ such that for each $n$ $\mathcal{V}_n$ is a finite subset of $\mathcal{U}_n$ and each $x\in X$ belongs to $\cup\mathcal{V}_n$ for all but finitely many $n$. A topological space $X$ is said to be locally compact (resp. locally Menger, locally Hurewicz, locally Rothberger, locally Lindel\"{o}f) if for each $x\in X$ there exist an open set $U$ and a compact (resp. Menger, Hurewicz, Rothberger, Lindel\"{o}f) subspace $Y$ of $X$ such that $x\in U\subseteq Y$.
In \cite{SPUS} (see also \cite{SRSP}), Ko\v{c}inac introduced the following uniform selection principles. A uniform space $(X,\mathbb{U})$ is Menger-bounded (in short, ${\sf M}b$) if for each sequence $(U_n)$ of members of $\mathbb{U}$ there is a sequence $(F_n)$ of finite subsets of $X$ such that $\cup_{n\in\mathbb{N}}U_n[F_n]=X$. A uniform space $(X,\mathbb{U})$ is said to be Hurewicz-bounded (in short, ${\sf H}b$) if for each sequence $(U_n)$ of members of $\mathbb{U}$ there is a sequence $(F_n)$ of finite subsets of $X$ such that each $x\in X$ belongs to $U_n[F_n]$ for all but finitely many $n$.
Also a uniform space $(X,\mathbb{U})$ is said to be Rothberger-bounded (in short, ${\sf R}b$) if for each sequence $(U_n)$ of members of $\mathbb{U}$ there is a sequence $(x_n)$ of members of $X$ such that $\cup_{n\in\mathbb{N}}U_n[x_n]=X$.
The above properties are also known as uniformly Menger, uniformly Hurewicz and uniformly Rothberger respectively.
We also say that a metric space $(X,d)$ is ${\sf M}$-bounded (resp. ${\sf H}$-bounded, ${\sf R}$-bounded) if $X$ with the induced uniformity is ${\sf M}$-bounded (resp. ${\sf H}$-bounded, ${\sf R}$-bounded).
Throughout the paper $(X,\mathbb{U})$ (or $X$ for short, when $\mathbb U$ is clear from the context) stands for a uniform space, where $\mathbb U$ is a diagonal uniformity on $X$.\\
The following two results are useful in our context.
\begin{Th}[cf. \cite{SPUS}]
Let $(X,\mathbb{U})$ and $(Y,\mathbb{V})$ be two uniform spaces.
\begin{enumerate}[wide=0pt,label={\upshape(\arabic*)},ref={\theTh(\arabic*)},leftmargin=*]
\item \label{LU501} $(X\times Y,\mathbb{U}\times\mathbb{V})$ is ${\sf H}$-bounded if and only if both $(X,\mathbb{U})$ and $(Y,\mathbb{V})$ are ${\sf H}$-bounded.
\item
If $(X,\mathbb{U})$ is ${\sf M}$-bounded and $(Y,\mathbb{V})$ is precompact, then $(X\times Y,\mathbb{U}\times\mathbb{V})$ is ${\sf M}$-bounded.
\end{enumerate}
\end{Th}
\begin{Th}[cf. \cite{SPUS}]
\begin{enumerate}[wide=0pt,label={\upshape(\arabic*)},ref={\theTh(\arabic*)},leftmargin=*]
\item \label{LU404} ${\sf M}b$, ${\sf H}b$ and ${\sf R}b$ properties are hereditary and preserved under uniformly continuous mappings.
\item \label{LU401} Let $(X,\mathbb{U})$ be a uniform space and $Y\subseteq X$. If $(Y,\mathbb{U}_Y)$ is ${\sf H}$-bounded, then $(\overline{Y},\mathbb{U}_{\overline{Y}})$ is also ${\sf H}$-bounded.
\item \label{LU402} If $(X,\mathbb{U})$ is a complete uniform space, then $(X,\mathbb{U})$ is Hurewicz, if and only if it is ${\sf H}$-bounded.
\item \label{LU403} If $(X,\mathbb{U})$ is Menger, Hurewicz, Rothberger, then it is also ${\sf M}$-bounded, ${\sf H}$-bounded and ${\sf R}$-bounded respectively.
\end{enumerate}
\end{Th}
\section{Few observations on uniform selection principles}
We present a few more observations on uniform selection principles that will be useful subsequently. Throughout the paper we use the symbol $\Upsilon$ to denote any of the {Menger}, {Hurewicz} or {Rothberger} properties. Accordingly $\Upsilon\text{-bounded}$ denotes any of the ${\sf M}b$, ${\sf H}b$ or ${\sf R}b$ properties.
We start with two basic observations (without proof) about uniform selection principles.
\begin{Lemma}
Let $(X,\mathbb{U})$ be a uniform space. A subspace $Y$ of $X$ is
\begin{enumerate}[wide=0pt,label={\upshape(\arabic*)},
ref={\theLemma(\arabic*)},leftmargin=*]
\item
${\sf M}$-bounded if and only if for each sequence $(U_n)$ of members of $\mathbb{U}$ there exists a sequence $(F_n)$ of finite subsets of $Y$ such that $Y\subseteq\cup_{n\in\mathbb{N}}U_n[F_n]$.
\item \label{LU603} ${\sf H}$-bounded if and only if for each sequence $(U_n)$ of members of $\mathbb{U}$ there exists a sequence $(F_n)$ of finite subsets of $Y$ such that each $y\in Y$ belongs to $U_n[F_n]$ for all but finitely many $n$.
\item
${\sf R}$-bounded if and only if for each sequence $(U_n)$ of members of $\mathbb{U}$ there exists a sequence $(x_n)$ of members of $Y$ such that $Y\subseteq\cup_{n\in\mathbb{N}}U_n[x_n]$.
\end{enumerate}
\end{Lemma}
\begin{Lemma}
\label{TU2}
Let $\mathbb{U}$ and $\mathbb{V}$ be two uniformities on a set $X$ such that $\mathbb{V}$ is finer than $\mathbb{U}$. If $(X,\mathbb{V})$ is $\Upsilon\text{-bounded}$, then $(X,\mathbb{U})$ is also $\Upsilon\text{-bounded}$.
\end{Lemma}
\begin{Th}
\label{TU4}
Let $(X,\mathbb{U})$ be a uniform space with $X=\cup_{n\in\mathbb{N}}X_n$. Then $X$ is $\Upsilon\text{-bounded}$ if and only if $X_n$ is $\Upsilon\text{-bounded}$ for each $n$.
\end{Th}
\begin{proof}
We only prove sufficiency for the case of ${\sf H}$-bounded.
Let $(U_n)$ be a sequence of members of $\mathbb{U}$. By Lemma~\ref{LU603}, for each $k\in\mathbb{N}$ we can choose a sequence $(F_n^{(k)}:{n\geq k})$ of finite subsets of $X_k$ such that each $x\in X_k$ belongs to $U_n[F_n^{(k)}]$ for all but finitely many $n\geq k$. For each $n$ let $F_n=\cup_{k\leq n}F_n^{(k)}$. Then $(F_n)$ is a sequence of finite subsets of $X$. We show that each $x\in X$ belongs to $U_n[F_n]$ for all but finitely many $n$. Let $x\in X$. Choose $k_0\in\mathbb{N}$ such that $x\in X_{k_0}$. Clearly, $x\in U_n[F_n^{(k_0)}]$ for all but finitely many $n\geq k_0$ and hence $x\in U_n[F_n]$ for all but finitely many $n$ since $F_n^{(k_0)}\subseteq F_n$ for all $n\geq k_0$. Hence the result.
\end{proof}
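The following toy computation (a Python sketch, not part of the paper; the choice $X=\mathbb R$, the radii $\varepsilon_n=1/n$ and the helper \texttt{net} are only illustrative) shows the merging $F_n=\cup_{k\leq n}F_n^{(k)}$ at work in the concrete case $\mathbb R=\cup_{k\in\mathbb{N}}[-k,k]$ with the usual metric uniformity, where each piece $[-k,k]$ is precompact and hence ${\sf H}$-bounded.
\begin{verbatim}
import math, random

def net(k, eps):
    # a finite eps-net of [-k, k]
    m = math.ceil(2 * k / eps)
    return [-k + 2 * k * i / m for i in range(m + 1)]

N = 60
eps = [1.0 / n for n in range(1, N + 1)]
# merged selections: F[n] = union over k <= n+1 of the eps[n]-nets of [-k, k]
F = [sorted({p for k in range(1, n + 2) for p in net(k, eps[n])})
     for n in range(N)]

x = random.uniform(-5, 5)
uncovered = [n for n in range(N)
             if min(abs(x - p) for p in F[n]) > eps[n]]
print(x, uncovered)   # only finitely many initial indices are missed
\end{verbatim}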
Let $(Y,\mathbb{V})$ be a uniform space and $X$ be a set. If $f:X\to Y$ is an injective mapping, then there is a natural uniformity on $X$ induced by $f$ and denoted by $f^{-1}(\mathbb{V})$. This uniformity is generated by the base $\{g^{-1}(V) : V\in\mathbb{V}\}$, where $g:X\times X\to Y\times Y$ is defined by $g(x,y)=(f(x),f(y))$; that is $g=f\times f$.
\begin{Th}
\label{L4}
Let $f:X\to Y$ be an injective mapping from a set $X$ onto a uniform space $(Y,\mathbb{V})$. If $Y$ is $\Upsilon\text{-bounded}$, then $X$ (with the uniformity $f^{-1}(\mathbb{V})$) is also $\Upsilon\text{-bounded}$.
\end{Th}
\begin{proof}
We give a proof for the case of ${\sf H}$-bounded.
Suppose that $Y$ is ${\sf H}$-bounded.
Let $(U_n)$ be a sequence of members of $f^{-1}(\mathbb{V})$. By definition, $\{g^{-1}(V) : V\in\mathbb{V}\}$ is a base for the uniformity $f^{-1}(\mathbb{V})$, where $g=f\times f$. For each $n$ choose $V_n\in\mathbb{V}$ such that $g^{-1}(V_n)\subseteq U_n$.
Next choose a sequence $(F_n)$ of finite subsets of $Y$ such that for each $y\in Y$ there exists a $n_y\in\mathbb{N}$ such that $y\in V_n[F_n]$ for all $n\geq n_y$. For each $n$ set $F_n=\{y_1^{(n)},y_2^{(n)},\cdots,y_{k_n}^{(n)}\}$ and $F_n^\prime=\{x_1^{(n)},x_2^{(n)},\cdots,x_{k_n}^{(n)}\}
\subseteq X$, where $f(x_i^{(n)})=y_i^{(n)}$ for each $1\leq i\leq k_n$.
Let $x\in X$. Then there exists a $n_{f(x)}\in\mathbb{N}$ such that $f(x)\in V_n[F_n]$ for all $n\geq n_{f(x)}$. For each $n\geq n_{f(x)}$ there exists a $i_n$ with $1\leq i_n\leq k_n$ such that $f(x)\in V_n[y_{i_n}^{(n)}]$. Thus, for each $n\geq n_{f(x)}$ we have $g(x,x_{i_n}^{(n)})\in V_n$; that is $x\in U_n[x_{i_n}^{(n)}]\subseteq U_n[F_n^\prime]$ for all but finitely many $n$. This completes the proof.
\end{proof}
The eventual dominance relation $\leq^*$ on the Baire space $\mathbb{N}^\mathbb{N}$ is defined by $f\leq^*g$ if and only if $f(n)\leq g(n)$ for all but finitely many $n$. A subset $A$ of $\mathbb{N}^\mathbb{N}$ is said to be dominating if for each $g\in\mathbb{N}^\mathbb{N}$ there exists a $f\in A$ such that $g\leq^* f$. A subset $A$ of $\mathbb{N}^\mathbb{N}$ is said to be bounded if there is a $g\in\mathbb{N}^\mathbb{N}$ such that $f\leq^*g$ for all $f\in A$. Moreover a set $A\subseteq\mathbb{N}^\mathbb{N}$ is said to be guessed by $g\in\mathbb{N}^\mathbb{N}$ if $\{n\in\mathbb{N} : f(n)=g(n)\}$ is infinite for all $f\in A$. The minimum cardinality of a dominating subset of $\mathbb{N}^\mathbb{N}$ is denoted by $\mathfrak{d}$, and the minimum cardinality of a unbounded subset of $\mathbb{N}^\mathbb{N}$ is denoted by $\mathfrak{b}$. Let $\cov(\mathcal{M})$ be the minimum cardinality of a family of meager subsets of $\mathbb{R}$ that covers $\mathbb{R}$. In \cite{CAMC} (see also \cite[Theorem 2.4.1]{TBHJ}), $\cov(\mathcal{M})$ is described as the minimum cardinality of a subset $F\subseteq\mathbb{N}^\mathbb{N}$ such that for every $g\in\mathbb{N}^\mathbb{N}$ there is $f\in F$ such that $f(n)\neq g(n)$ for all but finitely many $n$. Thus, we can say that if $F\subseteq\mathbb{N}^\mathbb{N}$ and $|F|<\cov(\mathcal{M})$, then $F$ can be guessed by a $g\in\mathbb{N}^\mathbb{N}$. It is to be noted that the Baire space is also a uniform space with the uniformity $\mathbb B$ induced by the Baire metric.
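The following finite-horizon illustration (a Python sketch, not from the paper; the specific functions are made up) may help fix the definitions of $\leq^*$ and of guessing.
\begin{verbatim}
# f <=* g means that f(n) > g(n) holds for at most finitely many n
def violations(f, g, horizon=200):
    return [n for n in range(horizon) if f(n) > g(n)]

f1 = lambda n: n + 3
f2 = lambda n: 2 * n
g = lambda n: n * n + 1

print(violations(f1, g))   # [0, 1]: finitely many exceptions, so f1 <=* g
print(violations(f2, g))   # []: f2 <=* g as well

# g guesses a family A if {n : f(n) = g(n)} is infinite for every f in A;
# the interleaved h below guesses {f1, f2}: it agrees with f1 on the even
# coordinates and with f2 on the odd ones
h = lambda n: f1(n) if n % 2 == 0 else f2(n)
print(all(h(n) == f1(n) for n in range(0, 200, 2)),
      all(h(n) == f2(n) for n in range(1, 200, 2)))   # True True
\end{verbatim}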
\begin{Th}
\label{TN01}
Every pre-Lindel\"{o}f space $(X,\mathbb{U})$ with $|X|<\mathfrak d$ is ${\sf M}b$.
\end{Th}
\begin{proof}
Let $(U_n)$ be a sequence of members of $\mathbb{U}$. Apply the pre-Lindel\"{o}f property to obtain a sequence $(A_n)$ of countable subsets of $X$ such that $U_n[A_n]=X$ for each $n$. Say $A_n=\{x_m^{(n)} : m\in\mathbb{N}\}$ for each $n$. Now for each $x\in X$ define $f_x\in\mathbb{N}^\mathbb{N}$ by $f_x(n)=\min\{m\in\mathbb{N} : x\in U_n[x_m^{(n)}]\}$, $n\in\mathbb{N}$. Since the cardinality of $\{f_x : x\in X\}$ is less than $\mathfrak{d}$, there is a $g\in\mathbb{N}^\mathbb{N}$ and for $x\in X$ a $n_x\in\mathbb{N}$ such that $f_x(n_x)<g(n_x)$. For each $n$ define $F_n=\{x_m^{(n)} : m\leq g(n)\}$. Observe that if $x\in X$, then $x\in U_{n_x}[F_{n_x}]$. Clearly, $\{U_n[F_n] : n\in\mathbb{N}\}$ covers $X$ and hence $X$ is ${\sf M}b$.
\end{proof}
Similarly we obtain the following.
\begin{Th}
\label{TN02}
Every pre-Lindel\"{o}f space $(X,\mathbb{U})$ with $|X|<\mathfrak{b}$ is ${\sf H}b$.
\end{Th}
\begin{Th}
\label{TN03}
Every pre-Lindel\"{o}f space $(X,\mathbb{U})$ with $|X|<\cov(\mathcal{M})$ is ${\sf R}b$.
\end{Th}
\begin{proof}
Let $(U_n)$ be a sequence of members of $\mathbb{U}$. Consider $A_n$ and $f_x$ as in Theorem~\ref{TN01} and proceed as follows.
Since the cardinality of $\{f_x : x\in X\}$ is less than $\cov(\mathcal{M})$, choose $g\in\mathbb{N}^\mathbb{N}$ such that $\{n : f_x(n)=g(n)\}$ is infinite for all $x\in X$.
Observe that if $x\in X$, then $f_x(n_x)=g(n_x)$ for some positive integer $n_x$; that is $x\in U_{n_x}[x_{g(n_x)}^{(n_x)}]$. Thus, $\{U_n[x_{g(n)}^{(n)}] : n\in\mathbb{N}\}$ is a cover of $X$, so that $X$ is ${\sf R}b$.
\end{proof}
\begin{Th}
\label{TU8}
If $(X,\mathbb{U})$ is ${\sf M}$-bounded, then any uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ is non-dominating.
\end{Th}
\begin{proof}
In view of Theorem~\ref{LU404}, we can assume that $X$ is an ${\sf M}b$ subspace of $\mathbb{N}^\mathbb{N}$. For each $n\in\mathbb{N}$ consider $U_n=\{(\varphi,\psi)\in\mathbb{N}^\mathbb{N}\times\mathbb{N}^\mathbb{N} : \varphi(n)=\psi(n)\}\in\mathbb{B}$. Let $\{P_n : n\in\mathbb{N}\}$ be a partition of $\mathbb{N}$ into pairwise disjoint infinite subsets. For each $n\in\mathbb{N}$, apply the ${\sf M}b$ property of $X$ to $(U_k : k\in\ P_n)$ to obtain a sequence $(F_k : k\in P_n)$ of finite subsets of $X$ such that $X\subseteq\cup_{k\in P_n} U_k[F_k]$. Thus, $F_n$ is defined for each $n\in \mathbb N$. Now define $g:\mathbb{N}\to\mathbb{N}$ by $g(n)=1+\max\{f(n) : f\in F_n\}$. To complete the proof we show that $X$ is not dominating (which is witnessed by $g$).
Let $f\in X$. For each $k\in\mathbb{N}$ choose $n_k\in P_k$ such that $f\in U_{n_k}[F_{n_k}]$. Again choose for each $k\in\mathbb{N}$ a $f_k\in F_{n_k}$ such that $(f,f_k)\in U_{n_k}$; that is $f(n_k)=f_k(n_k)$ for all $k\in\mathbb{N}$. Consequently, $f(n_k)<g(n_k)$ for all $k\in\mathbb{N}$ and hence the set $\{n\in\mathbb{N} : g(n)\nleq f(n)\}$ is infinite.
\end{proof}
\begin{Th}
\label{TU9}
If $(X,\mathbb{U})$ is ${\sf H}$-bounded, then any uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ is bounded (with respect to $\leq^*$).
\end{Th}
\begin{proof}
The proof is modelled on the proof of Theorem~\ref{TU8}. It remains to observe that the corresponding map $g$ eventually dominates every element of $X$.
\end{proof}
\begin{Th}
\label{TU10}
If $(X,\mathbb{U})$ is ${\sf R}$-bounded, then any uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ can be guessed.
\end{Th}
\begin{proof}
We closely follow the proof of Theorem~\ref{TU8}.
Choose $U_n$'s as in Theorem~\ref{TU8} and proceed with the following modifications. For each $n\in\mathbb{N}$ choose a sequence $(f_k : k\in P_n)$ of members of $X$ such that $X\subseteq\cup_{k\in P_n}U_k[f_k]$. Thus, $f_n$ is defined for each positive integer $n$. Define $g:\mathbb{N}\to\mathbb{N}$ by $g(n)=f_n(n)$. We now show that $X$ is guessed by $g$. Choose any $f\in X$ and for each $k\in\mathbb{N}$ choose $n_k\in P_k$ such that $(f,f_{n_k})\in U_{n_k}$. Clearly, the set $\{n\in\mathbb N: f(n)=g(n)\}$ is infinite (since it contains all $n_k$'s) and this completes the proof.
\end{proof}
\section{Local variations of uniform selection principles}
\subsection{Locally $\Upsilon\text{-bounded}$ spaces}
We now introduce the main definition of this paper.
\begin{Def}
Let $(X,\mathbb{U})$ be a uniform space. Then $X$ is said to be locally $\Upsilon\text{-bounded}$ if for each $x\in X$ there exists a $U\in\mathbb{U}$ such that $U[x]$ is a $\Upsilon\text{-bounded}$ subspace of $X$.
\end{Def}
\begin{Rem}
Locally precompact and locally pre-Lindel\"{o}f spaces can be similarly defined.
\end{Rem}
From the above definition and \cite{SPUS}, we obtain the following implication diagram (where the abbreviations ${\sf C, \;H, \;L, \;M}$, ${\sf R}$ denote respectively compact, Hurewicz, Lindel\"{o}f, Menger and Rothberger spaces, the prefixes $l$, ${\sf p}$ stand respectively for `locally' and `pre-' and the suffix ${\sf b}$ stands for `-bounded').
\begin{figure}
\caption{Diagram of local properties in uniform spaces}
\label{dig1}
\end{figure}
We now present equivalent formulations of the new notions.
\begin{Th}
If $(X,\mathbb{U})$ is a uniform space, then the following assertions are equivalent.
\begin{enumerate}[label={\upshape(\arabic*)}]
\item $X$ is locally $\Upsilon\text{-bounded}$.
\item For each $x\in X$ and $V\in\mathbb{U}$ there exist a $U\in\mathbb{U}$ and a $\Upsilon\text{-bounded}$ subspace $Y$ of $X$ such that $U[x]\subseteq Y\subseteq V[x]$.
\item For each $x\in X$ there exist a $U\in\mathbb{U}$ and a $\Upsilon\text{-bounded}$ subspace $Y$ of $X$ such that $U[x]\subseteq Y$.
\item For each $x\in X$ there exists a $U\in\mathbb{U}$ such that $\overline{U[x]}$ is a $\Upsilon\text{-bounded}$ subspace of $X$.
\end{enumerate}
\end{Th}
\begin{proof}
We only present the proof of $(1)\Rightarrow (2)$. Let $x\in X$ and $V\in\mathbb{U}$. We can find a $U\in\mathbb{U}$ such that $U[x]$ is a $\Upsilon\text{-bounded}$ subspace of $X$. Clearly, $U\cap V\in\mathbb{U}$ and $Y=U[x]\cap V[x]$ is a $\Upsilon\text{-bounded}$ subspace of $X$ with $(U\cap V)[x]\subseteq Y\subseteq V[x]$. Hence $(2)$ holds.
\end{proof}
We say that a metric space $(X,d)$ is locally $\Upsilon\text{-bounded}$ if for each $x\in X$ there exists an open set $V$ in $X$ such that $x\in V$ and $V$ is a $\Upsilon\text{-bounded}$ subspace of $(X,d)$. Likewise locally precompact and locally pre-Lindel\"{o}f metric spaces can also be defined.
\begin{Rem}
It is easy to observe that if the uniformity $\mathbb{U}$ on a set $X$ is induced by a metric $d$, then the uniform space $(X,\mathbb{U})$ is locally $\Upsilon\text{-bounded}$ if and only if the metric space $(X,d)$ is locally $\Upsilon\text{-bounded}$. A similar assertion holds for locally precompact and locally pre-Lindel\"{o}f metric spaces.
\end{Rem}
Using Theorem~\ref{TU4}, we have the following observation.
\begin{Prop}
\label{PU3}
Let $(X,\mathbb{U})$ be a uniform space such that $X$ is Lindel\"{o}f. Then $X$ is $\Upsilon\text{-bounded}$ if and only if $X$ is locally $\Upsilon\text{-bounded}$.
\end{Prop}
\begin{Prop}
\label{PU2}
Let $(X,\mathbb{U})$ be a uniform space. If $X$ is locally $\Upsilon$, then $X$ is locally $\Upsilon\text{-bounded}$.
\end{Prop}
\begin{proof}
Let $x\in X$. Choose an open set $U$ and a $\Upsilon$ subspace $Y$ of $X$ such that $x\in U\subseteq Y$. By Theorem~\ref{LU403}, $Y$ is a $\Upsilon\text{-bounded}$ subspace of $X$. Since $U$ is open in $X$ and $x\in U$, choose $V\in\mathbb{U}$ such that $V[x]\subseteq U$. We thus obtain a $\Upsilon\text{-bounded}$ subspace $V[x]$ of $X$, which shows that $X$ is locally $\Upsilon\text{-bounded}$.
\end{proof}
It can also be observed that locally compact (resp. locally Lindel\"{o}f) implies locally precompact (resp. locally pre-Lindel\"{o}f).
\begin{Th}
\label{P41}
If $(X,\mathbb{U})$ is a complete uniform space, then $X$ is locally Hurewicz if and only if $X$ is locally ${\sf H}$-bounded.
\end{Th}
\begin{proof}
The necessity follows from Proposition~\ref{PU2}.
Conversely assume that $X$ is locally ${\sf H}$-bounded. Let $x\in X$. Choose $U\in\mathbb{U}$ such that $U[x]$ is ${\sf H}$-bounded. Clearly, $x\in\Int U[x]\subseteq\overline{U[x]}$. By Theorem~\ref{LU401}, $\overline{U[x]}$ is a ${\sf H}$-bounded subspace of $X$. Again by Theorem~\ref{LU402}, $\overline{U[x]}$ is a Hurewicz subspace of $X$ as $\overline{U[x]}$ is complete. Thus, $X$ is locally Hurewicz.
\end{proof}
\begin{Th}
Let $(X,\mathbb{U})$ be locally $\Upsilon\text{-bounded}$. An element $U\in\mathbb{U}$ is open in $X\times X$ if and only if $(Y\times Z)\cap U$ is open in $Y\times Z$ for every two $\Upsilon\text{-bounded}$ subspaces $Y$ and $Z$ of $X$.
\end{Th}
\begin{proof}
We only need to prove sufficiency. Let $U\in\mathbb{U}$ and $(x,y)\in U$. Since $X$ is locally $\Upsilonpsilon\text{-bounded}$, there exist $V,W\in\mathbb{U}$ such that $V[x]$ and $W[y]$ are $\Upsilonpsilon\text{-bounded}$ subspaces of $X$. Clearly, $(\Int V[x]\times \Int W[y])\cap U$ is open in $\Int V[x]\times\Int W[y]$ since $(V[x]\times W[y])\cap U$ is open in $V[x]\times W[y]$. Evidently $(x,y)\in(\Int V[x]\times\Int W[y])\cap U$ is open in $X\times X$ and hence $U$ is open in $X\times X$ as required.
\end{proof}
The following observation is due to Theorems~\ref{TN01}, \ref{TN02} and \ref{TN03} with necessary modifications.
\begin{Prop}
Every locally pre-Lindel\"{o}f space with cardinality less than $\mathfrak{d}$ (resp. $\mathfrak{b}$, $\cov(\mathcal{M})$) is locally ${\sf M}b$ (resp. locally ${\sf H}b$, locally ${\sf R}b$).
\end{Prop}
\subsection{Some observations on locally $\Upsilon\text{-bounded}$ spaces}
We now present some preservation like properties of these local variations under certain topological operations.
\begin{Prop}
\label{P4}
The locally $\Upsilon\text{-bounded}$ property is hereditary.
\end{Prop}
\begin{Prop}
Let $f:X\to Y$ be an injective mapping from a set $X$ onto a uniform space $(Y,\mathbb{V})$. If $Y$ is locally $\Upsilon\text{-bounded}$, then $X$ is also locally $\Upsilon\text{-bounded}$.
\end{Prop}
\begin{proof}
First recall that the uniformity $f^{-1}(\mathbb{V})$ on $X$ is generated by the base $\{g^{-1}(V) : V\in\mathbb{V}\}$, where $g=f\times f$. Now let $x\in X$. Since $Y$ is locally $\Upsilonpsilon\text{-bounded}$, there exists a $U\in\mathbb{V}$ such that $(Z,\mathbb{V}_Z)$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $Y$, where $Z=U[f(x)]$. Let $W=f^{-1}(Z)$ and define $h:W \to Z$ by $h(u)=f(u)$. By Theorem~\ref{L4}, $(W,\mathbb S)$ is $\Upsilonpsilon\text{-bounded}$, where $\mathbb S=h^{-1}(\mathbb{V}_Z)$. Let $\tilde{g}=h\times h$. Now observe that $\mathbb{B}=\{\tilde{g}^{-1}(V) : V\in\mathbb{V}_Z\}$ is a base for the uniformity $\mathbb S$ on $W$ and $\mathbb{B}^\prime=\{(W\times W)\cap g^{-1}(V) : V\in\mathbb{V}\}$ is a base for the uniformity $\mathbb S^\prime = {f^{-1}(\mathbb{V})}_{W}$ on $W$. It is easy to verify that $\mathbb{B}=\mathbb{B}^\prime$ and hence $\mathbb S=\mathbb S^\prime$. Thus, $(W,\mathbb S)$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $X$. Clearly, $g^{-1}(U)\in f^{-1}(\mathbb{V})$ and $g^{-1}(U)[x]=W$. Consequently, $g^{-1}(U)[x]$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $X$ and the proof is now complete.
\end{proof}
We now observe that locally $\Upsilonpsilon\text{-bounded}$ property remains invariant under certain mappings. First we recall the following definitions from \cite{Bi-quotient}.
A surjective continuous mapping $f:X\to Y$ is said to be weakly perfect if $f$ is closed and $f^{-1}(y)$ is Lindel\"{o}f for each $y\in Y$.
Also a surjective continuous mapping $f:X\to Y$ is said to be bi-quotient if whenever $y\in Y$ and $\mathcal{U}$ is a cover of $f^{-1}(y)$ by open sets in $X$, then finitely many $f(U)$ with $U\in\mathcal{U}$ cover some open set containing $y$ in $Y$.
It is immediate that surjective continuous open (and also perfect) mappings are bi-quotient.
\begin{Th}
\begin{enumerate}[wide=0pt, label={\upshape(\arabic*)},ref={\theTh(\arabic*)},leftmargin=*]
\item
If $f:(X,\mathbb{U})\to(Y,\mathbb{V})$ is a uniformly continuous, bi-quotient mapping from a locally $\Upsilon\text{-bounded}$ space $X$ onto $Y$, then $Y$ is also locally $\Upsilon\text{-bounded}$.
\item
If $f:(X,\mathbb{U})\to(Y,\mathbb{V})$ is a uniformly continuous, weakly perfect mapping from a locally $\Upsilon\text{-bounded}$ space $X$ onto $Y$, then $Y$ is also locally $\Upsilon\text{-bounded}$.
\end{enumerate}
\end{Th}
\begin{proof}
$(1)$. For each $x\in X$ choose $U_x\in\mathbb{U}$ such that $U_x[x]$ is $\Upsilonpsilon\text{-bounded}$. Consider the open cover $\{\Int U_x[x] : x\in X\}$ of $X$. Let $y\in Y$. Since $f$ is a bi-quotient mapping, there is a finite set $\{\Int U_{x_i}[x_i] : 1\leq i\leq k\}\subseteq\{\Int U_x[x] : x\in X\}$ and an open set $V$ in $Y$ such that $y\in V\subseteq\cup_{i=1}^kf(\Int U_{x_i}[x_i])$; that is $V\subseteq\cup_{i=1}^kf(U_{x_i}[x_i])$. By Theorem~\ref{LU404} and Theorem~\ref{TU4}, $\cup_{i=1}^kf(U_{x_i}[x_i])$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $Y$. Next we can choose $W\in\mathbb{V}$ with $W[y]\subseteq V$ since $V$ is a neighbourhood of $y$ in $Y$. Consequently, $W[y]$ is the required $\Upsilonpsilon\text{-bounded}$ subspace of $Y$.\\
$(2)$. Let $y\in Y$ and say $A=f^{-1}(y)$. For each $x\in A$ choose $U_x\in\mathbb{U}$ such that $U_x[x]$ is $\Upsilonpsilon\text{-bounded}$. Consider the cover $\{\Int U_x[x] : x\in A\}$ of $A$ by open sets in $X$. Since $A$ is Lindel\"{o}f, there is a countable collection $\{\Int U_{x_n}[x_n] : n\in\mathbb{N}\}$ that covers $A$; that is $A\subseteq\cup_{n\in\mathbb{N}}U_{x_n}[x_n]$. Observe that $y\in Y\setminus f(X\setminus\cup_{n\in\mathbb{N}}\Int U_{x_n}[x_n])\subseteq f(\cup_{n\in\mathbb{N}}U_{x_n}[x_n])$. By Theorem~\ref{TU4} and Theorem~\ref{LU404}, $f(\cup_{n\in\mathbb{N}}U_{x_n}[x_n])$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $Y$. Since $f$ is closed, $Y\setminus f(X\setminus\cup_{n\in\mathbb{N}}\Int U_{x_n}[x_n])$ is an open set in $Y$ containing $y$. Thus, we obtain a $V\in\mathbb{V}$ such that $V[y]\subseteq Y\setminus f(X\setminus\cup_{n\in\mathbb{N}}\Int U_{x_n}[x_n])$. Clearly, $V[y]$ is a $\Upsilonpsilon\text{-bounded}$ subspace of $Y$, which completes the proof.
\end{proof}
\begin{Cor}
\label{CU1}
If $f:(X,\mathbb{U})\to(Y,\mathbb{V})$ is a uniformly continuous, perfect (or a uniformly continuous, open) mapping from a locally $\Upsilon\text{-bounded}$ space $X$ onto $Y$, then $Y$ is also locally $\Upsilon\text{-bounded}$.
\end{Cor}
\begin{Th}
\label{TU5}
Let $\{X_\alpha: \alpha\in\Lambda\}$ be a family of open subspaces of a uniform space $(X,\mathbb{U})$ satisfying $X=\cup_{\alpha\in\Lambda}X_\alpha$. Then $X$ is locally $\Upsilon\text{-bounded}$ if and only if each $X_\alpha$ is locally $\Upsilon\text{-bounded}$.
\end{Th}
\begin{proof}
We only need to prove sufficiency. Choose $x\in X$ and say $x\in X_{\beta}$. Since $(X_{\beta},\mathbb{U}_{X_{\beta}})$ is locally $\Upsilon\text{-bounded}$, choose $V\in\mathbb{U}_{X_{\beta}}$ such that $V[x]$ is a $\Upsilon\text{-bounded}$ subspace of $X_{\beta}$. Since $\Int_{X_{\beta}}V[x]$ is an open subset of $X$ containing $x$, there exists a $U\in\mathbb{U}$ such that $U[x]\subseteq \Int_{X_{\beta}}V[x]$. It follows that $U[x]$ is $\Upsilon\text{-bounded}$ and therefore $X$ is locally $\Upsilon\text{-bounded}$.
\end{proof}
\begin{Cor}
\label{TU3}
Let $\{(X_\alpha,\mathbb{U}_\alpha) : \alpha\in\Lambda\}$ be a family of uniform spaces. The sum $\oplus_{\alpha\in\Lambda}X_\alpha$ is locally $\Upsilon\text{-bounded}$ if and only if each $X_\alpha$ is locally $\Upsilon\text{-bounded}$.
\end{Cor}
\begin{Th}
Let $\{X_\alpha: \alpha\in\Lambda\}$ be a locally finite family of closed subspaces of a uniform space $(X,\mathbb{U})$ satisfying $X=\cup_{\alpha\in\Lambda}X_\alpha$. Then $X$ is locally $\Upsilon\text{-bounded}$ if and only if each $X_\alpha$ is locally $\Upsilon\text{-bounded}$.
\end{Th}
\begin{proof}
We only need to prove sufficiency. Let $\mathbb{V}$ be the uniformity on the sum $Y=\oplus_{\alpha\in\Lambda}X_\alpha$. Define $f:Y\to X$ by $f(x,\alpha)=x$. We claim that $f$ is a uniformly continuous, perfect mapping. Let $U\in\mathbb{U}$. For each $\alpha$ choose $U_\alpha\in\mathbb{U}_{X_\alpha}$ such that $\cup_{\alpha\in\Lambda}U_\alpha\subseteq U$. Also for each $\alpha$ define $V_\alpha=\{((x,\alpha),(y,\alpha)) : (x,y)\in U_\alpha\}$. Clearly, $V=\cup_{\alpha\in\Lambda} V_\alpha\in\mathbb{V}$. Let $(u,v)\in V$ and choose a $\beta$ so that $(u,v)\in V_\beta$. Thus, $u=(x,\beta)$ and $v=(y,\beta)$ for some $(x,y)\in U_\beta$ and hence $(f(u),f(v))\in U_\beta$, so that $(f(u),f(v))\in U$. It follows that $f$ is uniformly continuous.
To show that $f$ is perfect, for each $\alpha$ we define a continuous mapping $\varphi_\alpha:X_\alpha\to Y$ by $\varphi_\alpha(x)=(x,\alpha)$. Let $F$ be closed in $Y$. Since each $\varphi_\alpha^{-1}(F)$ is closed and $X_\alpha$'s are locally finite in $X$, $f(F)=\cup_{\alpha\in\Lambda}\varphi_\alpha^{-1}(F)$ is also closed in $X$. Since $f^{-1}(x)$ is finite for every $x\in X$, it is compact. Thus, $f$ is a perfect mapping, so that by Corollary~\ref{CU1}, $X$ is locally $\Upsilonpsilon\text{-bounded}$.
\end{proof}
The next two results concern respectively the locally ${\sf H}b$ and locally ${\sf M}b$ property in product spaces.
\begin{Th}
Let $(X,\mathbb{U})$ and $(Y,\mathbb{V})$ be two uniform spaces. Then $(X\times Y,\mathbb{U}\times\mathbb{V})$ is locally ${\sf H}$-bounded if and only if both $(X,\mathbb{U})$ and $(Y,\mathbb{V})$ are locally ${\sf H}$-bounded.
\end{Th}
\begin{proof}
By considering the projection mappings, necessity follows from Corollary~\ref{CU1}.
Conversely assume that both $X$ and $Y$ are locally ${\sf H}$-bounded. Let $(x,y)\in X\times Y$. Choose $U\in\mathbb{U}$ and $V\in\mathbb{V}$ such that $U[x]$ and $V[y]$ are ${\sf H}$-bounded subspaces of $X$ and $Y$ respectively. By Theorem~\ref{LU501}, $U[x]\times V[y]$ is a ${\sf H}$-bounded subspace of $X\times Y$. Clearly, $(x,y)\in \Int_X U[x]\times\Int_Y V[y]$ is an open set in $X\times Y$. Thus, there is a $W\in\mathbb{U}\times\mathbb{V}$ such that $W[(x,y)]\subseteq \Int_X U[x]\times\Int_Y V[y]$. Consequently, $W[(x,y)]$ is a ${\sf H}$-bounded subspace of $X\times Y$ and hence $X\times Y$ is locally ${\sf H}$-bounded.
\end{proof}
It is to be noted that the locally ${\sf R}b$ property is not productive (see Example~\ref{EU7}). We are unable to verify whether the above result holds for locally ${\sf M}b$ spaces. Still, we have the following observation, which can be easily verified. Moreover, the same example also demonstrates that `locally ${\sf M}b$' cannot be replaced by `locally ${\sf R}b$' in the following result.
\begin{Th}
Let $(X,\mathbb{U})$ and $(Y,\mathbb{V})$ be two uniform spaces. If $X$ is locally ${\sf M}$-bounded and $Y$ is locally precompact, then $(X\times Y,\mathbb{U}\times\mathbb{V})$ is locally ${\sf M}$-bounded.
\end{Th}
We end this section with some observations on the Baire space.
Combining Theorem~\ref{TU8}, Theorem~\ref{TU9}, Theorem~\ref{TU10}, Corollary~\ref{CU1} and Proposition~\ref{PU3}, we obtain the following result.
\begin{Prop}
Let $(X,\mathbb{U})$ be a uniform space.
\begin{enumerate}[wide=0pt,label={\upshape(\arabic*)},ref={\theProp(\arabic*)},leftmargin=*]
\item
If $X$ is locally ${\sf M}b$, then every open, uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ is non-dominating.
\item
If $X$ is locally ${\sf H}b$, then every open, uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ is bounded (with respect to $\leq^*$).
\item
If $X$ is locally ${\sf R}b$, then every open, uniformly continuous image of $X$ into $\mathbb{N}^\mathbb{N}$ can be guessed.
\end{enumerate}
\end{Prop}
\section{Examples}
We now present examples to illustrate the distinction between the behaviours of the local variations as introduced in this article.
Recall that a collection $\mathcal{A}$ of subsets of $\mathbb{N}$ is said to be an almost disjoint family if each $A\in\mathcal{A}$ is infinite and for every two distinct elements $B,C\in\mathcal{A}$, $|B\cap C|<\aleph_0$. Also $\mathcal{A}$ is said to be a maximal almost disjoint (in short, MAD) family if $\mathcal{A}$ is not contained in any larger almost disjoint family. For an almost disjoint family $\mathcal{A}$, let $\Psi(\mathcal{A})=\mathcal{A}\cup\mathbb{N}$ be the Isbell-Mr\'{o}wka space. It is well known that $\Psi(\mathcal{A})$ is a locally compact zero-dimensional Hausdorff space (and hence is a Tychonoff space) (see \cite{Gillman,Mrowka}).
Note that $\Upsilon\text{-bounded}$ implies locally $\Upsilon\text{-bounded}$. The following example shows that the class of locally $\Upsilon\text{-bounded}$ uniform spaces properly contains the class of $\Upsilon\text{-bounded}$ uniform spaces.
\begin{Ex}[{locally $\Upsilon\text{-bounded}\centernot\implies \Upsilon\text{-bounded}$}]
\label{EU3}
Let $\Psi(\mathcal{A})$ be the Isbell-Mr\'{o}wka space with $|\mathcal{A}|>\aleph_0$. Let $\mathbb{U}$ be the corresponding uniformity on $\Psi(\mathcal{A})$. Since $\mathcal{A}$ is an uncountable discrete subspace of $\Psi(\mathcal{A})$, it follows that $\Psi(\mathcal{A})$ is not pre-Lindel\"{o}f. Thus, $\Psi(\mathcal{A})$ cannot be $\Upsilonpsilon\text{-bounded}$. Since for each member of $\Psi(\mathcal{A})$ we can find a countable basic open set containing it, it follows that $\Psi(\mathcal{A})$ is locally $\Upsilonpsilon\text{-bounded}$ by Proposition~\ref{PU2}.
\end{Ex}
The preceding example can also be used to show the existence of locally pre-Lindel\"{o}f (resp. locally precompact) space which is not pre-Lindel\"{o}f (resp. precompact).
\begin{Rem}
\label{RN010}
The Isbell-Mr\'{o}wka space $\Psi(\mathcal{A})$ is $\Upsilon\text{-bounded}$ if and only if $|\mathcal{A}|\leq\aleph_0$.
\end{Rem}
We now give an example of a locally ${\sf H}$-bounded (and hence locally ${\sf M}$-bounded) space which is not locally ${\sf R}$-bounded. First we recall that a set $A\subseteq\mathbb{R}$ has strong measure zero if for every sequence $(\varepsilon_n)$ of positive reals there exists a sequence $(I_n)$ of intervals such that the length of the interval $I_n$ is less than $\varepsilon_n$ for all $n$ and $A\subseteq\cup_{n\in\mathbb{N}}I_n$.
\begin{Ex}[{locally ${\sf H}$ (or, ${\sf M}$)-bounded$\centernot\implies$ locally ${\sf R}b$}]
\label{EU8}
Consider $\mathbb{R}$ with the uniformity induced by the standard metric $d$. Clearly, $\mathbb{R}$ is locally precompact and hence is locally ${\sf H}b$ and locally ${\sf M}b$ as well. Now if possible suppose that $\mathbb{R}$ is locally ${\sf R}b$. By Proposition~\ref{PU3}, $\mathbb{R}$ is ${\sf R}b$. Let $(\varepsilon_n)$ be a sequence of positive real numbers. Apply ${\sf R}b$ property of $\mathbb{R}$ to the sequence of $\frac{\varepsilon_n}{3}$-entourages to obtain a sequence $(x_n)$ of reals such that $\mathbb{R}=\cup_{n\in\mathbb{N}}B_d(x_n,\frac{\varepsilon_n}{3})$. This implies that $\mathbb{R}$ has strong measure zero, a contradiction. Thus, $\mathbb{R}$ fails to be locally ${\sf R}b$.
\end{Ex}
The following is an example of a locally pre-Lindel\"{o}f space which is not locally $\Upsilon\text{-bounded}$.
\begin{Ex}[{locally pre-Lindel\"{o}f $\centernot\implies$ locally $\Upsilon\text{-bounded}$}]
Let $\mathbb{R}^\omega$ be the Tychonoff product of $\omega$-copies of $\mathbb{R}$. The topology of $\mathbb{R}^\omega$ is induced by the metric $d(x,y)=\sup
\limits_{i\in\mathbb{N}}\frac{1}{i}\min\{|x_i-y_i|,1\}$ on $\mathbb{R}^\omega$, where $x=(x_i)$ and $y=(y_i)$. Let $\mathbb{U}$ be the uniformity on $\mathbb{R}^\omega$ induced by $d$. We claim that $\mathbb{R}^\omega$ is not ${\sf M}$-bounded. On the contrary, assume that $\mathbb{R}^\omega$ is ${\sf M}$-bounded. For each $n\in\mathbb{N}$ set $U_n=\{(x,y)\in\mathbb{R}^\omega\times\mathbb{R}^\omega : x=(x_i),y=(y_i)\;\text{and}\;|x_n-y_n|<n\}$. For any $0<\varepsilon<1$, $U_n$ contains the entourage $U_{\frac{\varepsilon}{n}}=
\{(x,y)\in\mathbb{R}^\omega\times\mathbb{R}^\omega : d(x,y)<\frac{\varepsilon}{n}\}$, so that $U_n\in\mathbb{U}$ for each $n$. Apply ${\sf M}$-bounded property of $\mathbb{R}^\omega$ to $(U_n)$ to obtain a sequence $(F_n)$ of finite subsets of $\mathbb{R}^\omega$ such that $\mathbb{R}^\omega=\cup_{n\in\mathbb{N}}U_n[F_n]$. Say $F_n=\{x^{(n,j)}=(x_i^{(n_,j)}) : 1\leq j\leq k_n\}$ for each $n$. Choose $x=(x_i)\in\mathbb{R}^\omega$ such that $x_n=n+\Sigma_{j=1}^{k_n}|x_n^{(n,j)}|$ for each $n$. Also choose $n_0\in\mathbb{N}$ such that $(x,x^{(n_0,j_0)})\in U_{n_0}$ for some $x^{(n_0,j_0)}\in F_{n_0}$. By the construction of $U_{n_0}$, we obtain $|x_{n_0}-x_{n_0}^{(n_0,j_0)}|<n_0$, which is a contradiction as $x_{n_0}=n_0+\Sigma_{j=1}^{k_{n_0}}|x_{n_0}^{(n_0,j)}|$. Thus, $\mathbb{R}^\omega$ is not ${\sf M}$-bounded. Now apply Proposition~\ref{PU3} to conclude that $\mathbb{R}^\omega$ is not locally ${\sf M}$-bounded (and hence it is not locally $\Upsilonpsilon\text{-bounded}$ by Figure~\ref{dig1}).
\end{Ex}
We now give an example of a locally $\upsilon\text{-bounded}$ space which is not locally precompact.
\begin{Ex} [{locally $\upsilon\text{-bounded}$ $\centernot\implies$ locally precompact}]
Let $X$ be the hedgehog metric space (see \cite{Engelking}) of spininess $\aleph_0$. Then $X$ is a complete $\upsilon\text{-bounded}$ space. Observe that $X$ is not locally compact. Since every complete precompact space is compact, an argument along the lines of Theorem~\ref{P41} shows that a complete locally precompact space is locally compact. Therefore, $X$ is not locally precompact.
\end{Ex}
We now make a quick observation showing that Lemma~\ref{TU2} does not hold for locally $\upsilon\text{-bounded}$ spaces.
\begin{Ex}
Let $\kappa>\aleph_0$. Consider the space $X$ with the hedgehog metric $\rho$ of spininess $\kappa$. Clearly, $X$ is complete, but not locally Lindel\"{o}f. Let $\mathbb{U}$ be the uniformity on $X$ induced by $\rho$. Since every complete locally pre-Lindel\"{o}f space is locally Lindel\"{o}f, it follows that $(X,\mathbb{U})$ is not locally pre-Lindel\"{o}f. Consequently, $(X,\mathbb{U})$ is not locally $\upsilon\text{-bounded}$. But the uniform space $(X,\mathbb{D})$ with the discrete uniformity $\mathbb{D}$ is locally $\upsilon\text{-bounded}$.
\end{Ex}
Recall that the product of an ${\sf M}$-bounded (resp. ${\sf H}$-bounded) space with a precompact space is again ${\sf M}$-bounded (resp. ${\sf H}$-bounded), but if we replace `precompact' by `locally precompact', then the product need not be ${\sf M}$-bounded (resp. ${\sf H}$-bounded).
\begin{Ex}[{${\sf M}$ (resp. ${\sf H}$)-bounded $\times$ locally precompact $\centernot\implies$ ${\sf M}$ (resp. ${\sf H}$)-bounded }]
Let $X$ be the Isbell-Mr\'{o}wka space $\Psi(\mathcal{A})$ as in Example~\ref{EU3}. Let $\mathbb{U}$ be the corresponding uniformity on $X$. By Example~\ref{EU3}, $X$ is locally precompact but not pre-Lindel\"{o}f. Let $Y=\mathbb{R}$ with the standard metric uniformity $\mathbb{V}$. Since $Y$ is $\sigma$-compact, $Y$ is ${\sf M}b$ as well as ${\sf H}b$. Suppose, if possible, that $X\times Y$ is ${\sf M}$-bounded. Using the projection mapping and applying Theorem~\ref{LU404}, we conclude that $X$ is ${\sf M}b$, which is a contradiction. Thus, $X\times Y$ is not ${\sf M}b$ and hence it fails to be ${\sf H}b$.
\end{Ex}
We now observe that the locally ${\sf R}$-bounded property behaves somewhat differently from the locally ${\sf M}$-bounded and locally ${\sf H}$-bounded properties.
\begin{Ex}[locally ${\sf R}$-bounded $\times$ locally precompact $\centernot\implies$ locally ${\sf R}$-bounded]
\label{EU7}
This example is about the product of a locally ${\sf R}$-bounded space and a locally precompact space which fails to be locally ${\sf R}$-bounded.
To prove this, consider an ${\sf R}$-bounded uniform space $X$ and $\mathbb R$ with the standard metric uniformity. Suppose, if possible, that $X\times\mathbb{R}$ is locally ${\sf R}$-bounded. By Corollary~\ref{CU1} and by means of the projection mapping, it follows that $\mathbb{R}$ is locally ${\sf R}$-bounded, which is a contradiction (see Example~\ref{EU8}). Therefore, the product $X\times\mathbb{R}$ fails to be locally ${\sf R}$-bounded.
\end{Ex}
\begin{Rem}
The product of an ${\sf R}$-bounded space and a precompact space need not be ${\sf R}$-bounded. The reason is as follows. Suppose, if possible, that the assertion is true and consider Example~\ref{EU7}. Since $\mathbb R$ is a countable union of its precompact subspaces, our supposition together with Theorem~\ref{TU4} implies that $X\times\mathbb{R}$ is ${\sf R}$-bounded. We then again arrive at a contradiction as in Example~\ref{EU7}. Thus, the product of an ${\sf R}$-bounded space and a precompact space need not be ${\sf R}$-bounded.
\end{Rem}
We now present an example of a $\upsilon\text{-bounded}$ space for which Corollary~\ref{TU3} fails to hold.
\begin{Ex}
Consider the sum $\oplus_{\alpha\in\omega_1}X$ of $\omega_1$ copies of a $\upsilon\text{-bounded}$ space $(X,\mathbb{U})$. The sum is uniformly isomorphic to $(X\times\omega_1,\mathbb{U}\times\mathbb{D})$, where $\mathbb{D}$ is the discrete uniformity on $\omega_1$. By Theorem~\ref{LU404}, $X\times\omega_1$ is not $\upsilon\text{-bounded}$ and consequently, $\oplus_{\alpha\in\omega_1}X$ is not $\upsilon\text{-bounded}$.
\end{Ex}
\begin{Rem}
\mbox{}
\begin{enumerate}[wide=0pt,
label={\upshape(\arabic*)},
ref={\theRem(\arabic*)},leftmargin=*]
\item It is interesting to observe that Theorem~\ref{TU5} does not hold for $\upsilon\text{-bounded}$ spaces. Consider the Isbell-Mr\'{o}wka space $\Psi(\mathcal{A})$ as in Example~\ref{EU3}. As noted in that example, each point of $\Psi(\mathcal{A})$ has a countable basic open neighbourhood. Thus, $\Psi(\mathcal{A})$, which is itself not $\upsilon\text{-bounded}$, is a union of $\upsilon\text{-bounded}$ open subspaces.
\item Also note that Theorems~\ref{TN01}, \ref{TN02} and \ref{TN03} cannot be extended to locally pre-Lindel\"{o}f spaces.
Assume that $\omega_1<\min\{\mathfrak b, \cov(\mathcal M)\}$ (which implies $\omega_1<\mathfrak d$ too). Let $\Psi(\mathcal{A})$ be the Isbell-Mr\'{o}wka space with $|\mathcal{A}|=\omega_1$. By Example~\ref{EU3}, $\Psi(\mathcal{A})$ is locally pre-Lindel\"{o}f. However, by Remark~\ref{RN010}, $\Psi(\mathcal{A})$ is not $\upsilon\text{-bounded}$.
\end{enumerate}
\end{Rem}
\noindent{\bf Acknowledgement:} The authors are thankful to the Referee for several valuable suggestions which significantly improved the presentation of the paper.
\end{document} |
\begin{document}
\title{Is learning for the unit commitment problem \\ a low-hanging fruit?}
\author{S. Pineda and J. M. Morales
\thanks{S. Pineda is with the Dep.
of Electrical Engineering, Univ. of Malaga, Spain. E-mail: spinedamorente@gmail.com. J. M. Morales is with the Dep. of Applied Mathematics, Univ. of Malaga, Spain. E-mail: juan.morales@uma.es.}
\thanks{This work was supported in part by the Spanish Ministry of Science and Innovation through project PID2020-115460GB-I00, by the Andalusian Regional Government through project P20-00153, and by the Research Program for Young Talented Researchers of the University of Málaga under Project B1-2019-11. This project has also received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 755705).}}
\maketitle
\begin{abstract}
The blast wave of machine learning and artificial intelligence has also reached the power systems community, and amid the frenzy of methods and black-box tools that have been left in its wake, it is sometimes difficult to perceive a glimmer of Occam's razor principle. In this letter, we use the unit commitment problem (UCP), an NP-hard mathematical program that is fundamental to power system operations, to show that \emph{simplicity} must guide any strategy to solve it, in particular those that are based on \emph{learning} from past UCP instances. To this end, we apply a naive algorithm to produce candidate solutions to the UCP and show, using a variety of realistically sized power systems, that we are able to find optimal or quasi-optimal solutions with remarkable speedups. Our claim is thus that any sophistication of the learning method must be backed up with a statistically significant improvement of the results in this letter.
\end{abstract}
\begin{IEEEkeywords}
Unit commitment problem, machine learning, computational burden, power system operations.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}\label{sec:intro}
\IEEEPARstart{T}{he} unit commitment problem (UCP) is currently one of the most fundamental mathematical tools to operate power systems. The UCP determines the on/off commitment status and power dispatch of generating units to satisfy electricity demand at a minimum cost while complying with the technical limits of generation and transmission assets \cite{sen1998optimal}. The UCP is usually formulated as a mixed-integer optimization problem that is proven to be NP-hard \cite{bendotti2019complexity} and therefore, the technical literature includes several methods to trim down its computational burden \cite{saravanan2013solution, chen2016improving}. Existing strategies comprise formulation tightening \cite{pandzic2013comparison,tejada2020which}, decomposition techniques \cite{fu2005security} and constraint screening \cite{zhai2010fast}. These methods, however, overlook the fact that slight variations of the same UCP are to be solved everyday and therefore, learning from the past may be a powerful weapon to tackle future UCP instances.
In the same vein, some learning-based methodologies have been recently proposed to reduce the computational burden of the UCP using information about previously solved instances. References \cite{yang2021machine,ruan2021review} review current learning-based methods for power system problems.
In particular, the authors of \cite{dalal2018unit} learn a proxy of the UCP's solution to be used in long-term planning assessments. Despite being simple and fast (with speedups up to 260x), the proposed strategy involves an average optimality error of 3.5\% for the IEEE-96RTS test system, which precludes its use in short-term operation. Using Classification and Regression Trees (CART) or Random Forest (RF), reference \cite{lin2019approximate} presents a methodology to find the relationships between the solutions of the original and relaxed versions of the UCP and reports average optimality errors of between 0.14\% and 0.23\%. A supervised classification procedure is proposed in \cite{pineda2020data} to learn the transmission capacity constraints that can be screened out of the UCP. Using a 2000-bus system, the authors report a speedup of 19x if retrieving the original solutions must be guaranteed, and a speedup of 43x for an average suboptimality error of 0.04\%. The authors of \cite{yang2020integrated} describe a sophisticated methodology to cluster decision variables depending on the difficulty of the UCP instance to be solved. They achieve speedups of between 1.5x and 2x, an average optimality error of 0.1\%, and a maximum optimality error of 1\% for the IEEE 118-bus system. The authors of \cite{chen2020distributed} use the UCP solution of the previous day UC as a warm start strategy and report speedups of 2x for the MISO power system. This speedup can be increased to 2x-12x if additional historical information is considered, as shown in \cite{chen2021high}. In \cite{mohammadi2021machine}, the authors use machine learning to determine unnecessary constraints of the stochastic unit commitment and attain speedups of 14x for a 500-bus power system. Finally, reference \cite{xavier2021learning} proposes different strategies to learn initial solutions with speedups of 4.3x while retrieving the optimal solution of the UCP. The authors also discuss a learning-based procedure to learn the relationships among binary variables, which yields speedups of 10.2x but no optimality guarantees. These methodologies are tested on a set of large-sized networks.
It is apparent that when it comes to learning for the UCP, one may easily get lost in a myriad of methods and approaches, all of which promise reasonable computational savings. The takeaway message is that, despite the high theoretical complexity of the UCP, having access to previous instances may significantly reduce the solution task in practice, since today's commitment decisions are probably very similar to those made yesterday, one week ago, or even one year ago. Given the high potential of using historical data to reduce the computational burden of the UCP, the questions that naturally arise are: Are existing learning methods an actual breakthrough in the solution of the UCP or are they just picking the low-hanging fruit? Do sophisticated learning methods to solve the UCP substantially outperform painless naive learning strategies? How should we benchmark the performance of learning methods to solve the UCP? In this letter, we aim to answer these questions by learning the UCP's solution through a straightforward K-nearest neighbor procedure. The purpose of this letter is, by no means, to propose a learning-based methodology that outperforms existing ones under certain conditions. What we claim is that the performance of existing and upcoming learning-based methods to solve the UCP should be thoroughly compared with some painless naive methods such as the one we suggest, in order to justify the hassle from the increased complexity and sophistication, and the loss of transparency.
\section{A Naive Learning Method}\label{sec:learning}
The unit commitment problem can be generally formulated as the following optimization program:
\begin{subequations}
\begin{align}
\underset{\textbf{u},\textbf{y}}{\min} \enskip & f(\textbf{u},\textbf{y}) \\
\text{s.t.} \enskip & g_j(\textbf{u},\textbf{y},\textbf{d}) \leq 0, \forall j
\end{align} \label{eq:general_uc}
\end{subequations}
\noindent where $\textbf{u}$ and $\textbf{y}$ denote, respectively, the binary and continuous variables, $\textbf{d}$ represents the input parameters such as net demand throughout the network, and $f(\cdot),g_j(\cdot)$ are the objective function and the technical constraints, in that order. Under some mild assumptions, model \eqref{eq:general_uc} becomes a mixed-integer quadratic programming problem that can be solved using optimization solvers at a usually high computational cost. With some abuse of notation, we express the solution of \eqref{eq:general_uc} as a function of the input parameters, namely, $\textbf{u}^{\rm UC}(\textbf{d}),\textbf{y}^{\rm UC}(\textbf{d})$. If binary variables $\textbf{u}$ are fixed to given values $\Tilde{\textbf{u}}$, model \eqref{eq:general_uc} becomes an optimal power flow (OPF) problem, which is easier to solve and whose optimal solution is denoted as $\textbf{y}^{\rm OPF}(\Tilde{\textbf{u}},\textbf{d})$.
Suppose we have access to a sample set of optimal solutions of problem \eqref{eq:general_uc} for different input parameters, referred to as $S=\{(\textbf{d}_i,\textbf{u}^{\rm UC}_i,C^{\rm UC}_i,T^{\rm UC}_i)\}_{i\in\mathcal{I}}$, where $\textbf{u}^{\rm UC}_i = \textbf{u}^{\rm UC}(\textbf{d}_i)$, and $C^{\rm UC}_i,T^{\rm UC}_i$ are, respectively, the objective function and the computational time of problem \eqref{eq:general_uc} for instance $i \in \mathcal{I}$. Intuitively, the naive learning methodology we propose as a benchmark consists in fixing the binary variables to those of close past instances to solve several OPFs in parallel and select the one with the lowest cost. Using a leave-one-out procedure, the learning strategy for each instance $i\in\mathcal{I}$ runs as follows:
\begin{itemize}
\item[1)] Compute the distance between the input parameters $\textbf{d}_i$ and those of the remaining instances using a norm, for example, $||\textbf{d}_i-\textbf{d}_{\Tilde{i}}||_2, \forall \Tilde{i}\in\mathcal{I},\Tilde{i}\neq i$.
\item[2)] Find the set of the $K$-nearest neighbors to $i$ with the lowest distances computed in step 1), denoted as $\mathcal{I}^K_i$.
\item[3)] Solve the optimal power flow for input parameters $\textbf{d}_i$ and binary variables fixed to $\textbf{u}^{\rm UC}_{\Tilde{i}}$. That is, compute $\textbf{y}^{\rm OPF}(\textbf{u}^{\rm UC}_{\Tilde{i}},\textbf{d}_i), \forall \Tilde{i} \in \mathcal{I}^K_i$ and denote the objective function and solving time as $C^{\rm OPF}_{i\Tilde{i}}$ and $T^{\rm OPF}_{i\Tilde{i}}$, respectively.
\item[4)] Among all feasible problems solved in step 3), approximate the cost of the UCP for instance $i$ as $\widehat{C}^{\rm UC}_i = \min_{\Tilde{i}}C^{\rm OPF}_{i\Tilde{i}}$. The lowest suboptimality gap is thus computed as $\Delta_i = (\widehat{C}^{\rm UC}_i-C^{\rm UC}_i)/C^{\rm UC}_i$.
\item[5)] Problems in step 3) are solved in parallel and the speedup factor is thus computed as $S_i = T^{\rm UC}_i/(\max_{\Tilde{i}}(T^{\rm OPF}_{i\Tilde{i}})+T^{L}_i)$, where $T^{L}_i$ is the learning time of steps 1) and 2).
\end{itemize}
Once steps 1)-5) are run for each instance, we determine the average suboptimality gap $\overline{\Delta}$, the maximum suboptimality gap $\Delta^{\max}$, the average speedup $\overline{S}$, and the number of instances for which the $K$ problems solved in step 3) are infeasible $N^{\rm IN}$.
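For concreteness, the whole benchmark fits in a few lines of code. The following Python sketch is only a minimal illustration of steps 1)--5): the function \texttt{solve\_opf\_fixed\_commitment} and the format of the historical data are hypothetical placeholders for the user's own OPF solver and database, and the parallel execution and timing bookkeeping of steps 3) and 5) are omitted for brevity.
\begin{verbatim}
import numpy as np

def naive_knn_ucp(d_query, history, solve_opf_fixed_commitment, K=50):
    """Sketch of the naive learning benchmark (steps 1-5).

    history: list of tuples (d_i, u_i) with past demand vectors and the
             corresponding optimal commitments (hypothetical format).
    solve_opf_fixed_commitment(u, d): placeholder for the user's OPF solver;
             it should return (cost, feasible) with the binaries fixed to u.
    """
    # Step 1: distance between the query demand and each past demand.
    dists = [np.linalg.norm(d_query - d_i) for d_i, _ in history]
    # Step 2: indices of the K nearest past instances.
    neighbors = np.argsort(dists)[:K]
    # Step 3: one OPF per neighbor with its commitment fixed; these K
    # problems are independent and can be solved in parallel.
    costs = []
    for idx in neighbors:
        _, u_i = history[idx]
        cost, feasible = solve_opf_fixed_commitment(u_i, d_query)
        if feasible:
            costs.append(cost)
    # Step 4: the cheapest feasible candidate approximates the UCP cost
    # (None is returned when all K subproblems are infeasible).
    return min(costs) if costs else None
\end{verbatim}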
\section{Numerical results}
In this section, we provide the results obtained by the learning method described in Section \ref{sec:learning} for nine large-scale European test systems used in \cite{xavier2021learning} and available for download at \cite{alison2020dataset}. The numbers of buses, units and lines of each system are collated in columns 2-4 of Table \ref{tab:results}.
\begin{table*}[]
\renewcommand\arraystretch{1.3}
\centering
\begin{tabular}{ccccccccccccc}
\hline
System&Buses&Units&Lines&$\overline{\Delta}$&$\Delta^{\max}$& $< 0.01\%$ &$0.01-0.02\%$&$0.02-0.05\%$&$0.05-0.1\%$&$>0.1\%$&$N^{\rm IN}$&$\overline{S}$ \\
\hline
1888rte&1888&297&2531&0.0174&0.2394&230&131&109&20&9&1&116.5x\\
1951rte&1951&391&2596&0.0382&0.3759&47&116&217&85&27&8&150.4x\\
2848rte&2848&547&3776&0.0186&0.1332&179&138&159&14&8&2&132.6x\\
3012wp&3012&502&3572&0.0485&0.4864&37&78&212&132&36&5&188.8x\\
3375wp&3374&596&4161&0.1256&0.8073&9&14&102&164&198&13&215.9x\\
6468rte&6468&1295&9000&-0.0001&0.0175&498&2&0&0&0&0&41.2x\\
6470rte&6470&1330&9005&-0.0016&0.0187&496&4&0&0&0&0&171.9x\\
6495rte&6495&1372&9019&-0.0001&0.0481&496&3&1&0&0&0&41.0x\\
6515rte&6515&1388&9037&-0.0009&0.0133&497&3&0&0&0&0&101.7x\\
\hline
\end{tabular}
\caption{Numerical results}
\label{tab:results}
\end{table*}
The performance of the naive learning method is illustrated using the solution of 500 UCP instances that differ in the 24-hour load profile. According to the procedure proposed in \cite{xavier2021learning}, each 24-hour load profile is randomly generated as follows:
\begin{itemize}
\item[1)] Let $\overline{P}_g$ denote the capacity of each unit $g$. The peak demand $\overline{D}$ is randomly generated as $\overline{D} = 0.6 \times \left(\sum_g \overline{P}_g\right) \times \text{unif}(0.925,1.075)$, where $\text{unif}(a,b)$ represents a random sample from a uniform distribution in the interval $[a,b]$.
\item[2)] Let $\overline{\beta}_b$ denote the nominal load distribution factor of bus $b$. The distribution factor $\beta_b$ is randomly generated as $\beta_b=\overline{\beta}_b \times \text{unif}(0.9,1.1)$, which is subsequently normalized as $\beta^{\rm N}_b = \beta_b/\sum_b \beta_b$ to satisfy that $\sum \beta^{\rm N}_b = 1$.
\item[3)] Let $t \in \mathcal{T}$ denote the index for the time periods of the UCP, and let $\mu_t$ and $\sigma_t$ represent, respectively, the average and standard deviation of the ratio between the aggregated load level for two consecutive hours $D_{t}/D_{t-1}$. The hourly variation factors $v_t$ are randomly generated as $v_t = \text{normal}(\mu_t,\sigma_t)$, where $\text{normal}(a,b)$ represents a sample from a normal distribution with a mean equal to $a$ and a standard deviation equal to $b$. The temporal factor $\gamma_t$ is computed as $\gamma_t = \prod_{\tau=1}^{t}v_{\tau}$, which is normalized as $\gamma^{\rm N}_t = \gamma_t / \max\{\gamma_t\}_{t\in\mathcal{T}}$ to satisfy that $\max\{\gamma^{\rm N}_t\}_{t\in\mathcal{T}}=1$.
\item[4)] The load demand at each bus $b$ and time period $t$ is computed as $D_{bt}=\overline{D} \times \beta^{\rm N}_b \times \gamma^{\rm N}_t$.
\end{itemize}
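The following Python sketch mirrors steps 1)--4) above for a single load profile; the arrays \texttt{Pbar} (unit capacities), \texttt{beta\_bar} (nominal load distribution factors) and \texttt{mu\_t}, \texttt{sigma\_t} (statistics of the hourly load ratios) are assumed to be given.
\begin{verbatim}
import numpy as np

def sample_load_profile(Pbar, beta_bar, mu_t, sigma_t, rng=None):
    """Sketch of the randomized 24-hour load profiles (steps 1-4)."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: peak demand as a perturbed fraction of the installed capacity.
    D_peak = 0.6 * Pbar.sum() * rng.uniform(0.925, 1.075)
    # Step 2: perturbed and normalized load distribution factors per bus.
    beta = beta_bar * rng.uniform(0.9, 1.1, size=beta_bar.shape)
    beta = beta / beta.sum()
    # Step 3: hourly variation factors and normalized temporal profile.
    v = rng.normal(mu_t, sigma_t)          # one ratio D_t / D_{t-1} per hour
    gamma = np.cumprod(v)
    gamma = gamma / gamma.max()
    # Step 4: bus-by-hour demand matrix D[b, t].
    return D_peak * np.outer(beta, gamma)
\end{verbatim}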
For each load profile, we solve the specific unit commitment formulation provided in the Appendix of \cite{xavier2021learning}. For the sake of simplicity, we consider neither reserves nor security constraints. Due to its high computational burden, all UCP instances are solved using the constraint generation approach proposed in \cite{fu2013modeling} using Gurobi 9.1.2 \cite{gurobi} with a MIP gap set to 0.01\%. Afterwards, the learning method is used for each instance and test system assuming that the number of neighbors is set to 50, that is, 10\% of the total number of instances. A summary of the most relevant results is provided in Table \ref{tab:results}. More specifically, columns 5 and 6 include the average and maximum suboptimality gap for each test system. Columns 7-11 contain the number of instances whose suboptimality gap belongs to given intervals. For instance, column 9 provides the number of instances with a suboptimality gap of between 0.02\% and 0.05\%. The number of instances for which all OPF problems are infeasible is included in column 12. Finally, column 13 provides the average speedup of the naive learning method in relation to solving the original UCP instances using constraint generation.
The results in Table \ref{tab:results} lead to several interesting observations. Firstly, the average suboptimality error for the four systems with more than 6000 buses is negative. This means that, on average, fixing the binary variables to those of the neighbors yields an objective function that is actually lower than that of the original UCP. Indeed, the suboptimality error is below the MIP gap in more than 99\% of the instances. Moreover, these systems do not include any infeasible instances and report speedups between 41x and 172x. According to these results, we can conclude that using complicated learning-based techniques for these four systems seems unnecessary, since naive alternatives significantly reduce the UCP computational burden with negligible suboptimality errors. Secondly, for systems 1888rte and 2848rte, all 50 OPFs solved in step 3) are infeasible for 1 and 2 instances, respectively. For these two systems, the average suboptimality errors are slightly higher than the predefined MIP gap, while the computational times are reduced by two orders of magnitude. Therefore, the naive learning method is a competitive approach for these two systems. Finally, the numbers of infeasible instances for systems 1951rte, 3012wp and 3375wp are 8, 5 and 13, respectively. The average optimality errors exceed the MIP gap and amount to 0.0382\%, 0.0485\%, and 0.1256\%, in the same order, whereas the speedup factor ranges between 150x and 216x for these three systems. Consequently, using complicated learning approaches for these three systems would only be justified when the computational time, the suboptimality error or the number of infeasible cases are drastically lower than those reported by the naive learning method described in this letter.
\section{Conclusion}
A highly relevant topic within the PES scientific community is the reduction of the computational burden of the unit commitment problem. However, assessing the improvements of sophisticated learning-based approaches is sometimes tricky due to the lack of benchmark methods. This letter describes a painless and naive learning method and shows that, for some power systems, learning the solution of the unit commitment problem is indeed a low-hanging fruit that can be effortlessly picked. For other systems, the results of the naive learning approach are less impressive but, at the very least, they should be used to benchmark the performance of existing and upcoming learning methods to solve the unit commitment problem.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document} |
\begin{document}
\selectlanguage{english}
\title{Christoffel functions for multiple orthogonal polynomials}
\begin{abstract}
We study weak asymptotic behaviour of the Christoffel--Darboux kernel on the main diagonal corresponding to multiple orthogonal polynomials. We show that under some hypotheses the weak limit of $\tfrac{1}{n} K_n(x,x) {\: \rm d} \mu$ is the same as the limit of the normalized zero counting measure of type II multiple orthogonal polynomials. We also study an extension of Nevai's operators to our context.
\end{abstract}
\section{Introduction} \label{sec:1}
Let $\{p_n \}_{n=0}^\infty$ be the orthonormal polynomials for a positive
measure $\mu$ on the real line,
\[
\int_{\mathbb{R}} p_n(x)p_m(x) {\: \rm d} \mu(x) = \delta_{m,n},
\]
then the Christoffel--Darboux kernel is given by
\[
K_n(x,y) = \sum_{k=0}^{n-1} p_k(x)p_k(y),
\]
and the Christoffel function is
\[
\lambda_n(x) = \frac{1}{K_n(x,x)}.
\]
The Christoffel--Darboux kernel and Christoffel function play an important role in the theory of orthogonal polynomials, polynomial least squares approximation,
the moment problem, approximation of weight functions, and universality in random matrix theory, see, e.g., the long survey of Nevai \cite{Nevai1986}, and the papers of M\'at\'e-Nevai-Totik \cite{Mate1991}, Van Assche \cite{VanAssche1993}, Inglese \cite{Inglese1995}, Totik \cite{Totik2000, Totik2016}, Simon \cite{Simon2008}, and Lubinsky \cite{Lubinsky2009, Lubinsky2011}.
The Christoffel--Darboux kernel can be expressed in terms of the polynomials $p_n$ and $p_{n-1}$ through the Christoffel--Darboux formula
\[
K_n(x,y) = a_n \frac{p_n(x)p_{n-1}(y)-p_{n-1}(x)p_n(y)}{x-y},
\]
where $a_n$ is one of the coefficients of the three-term recurrence relation for the orthonormal polynomials. The Christoffel function
is a positive function on the real line and satisfies an extremum problem
\[
\lambda_n(x) =
\min_{\substack{p \in \mathbb{P}_{n-1}\\p(x)=1}}
\int_{\mathbb{R}} |p(y)|^2 {\: \rm d} \mu(y),
\]
where $\mathbb{P}_n \subset \mathbb{R}[x]$ is the space of real polynomials with degree less than or equal to $n$.
Furthermore it is clear that the Christoffel--Darboux kernel $K_n(x,y)$ is symmetric in the two variables $x,y$.
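Before passing to multiple orthogonality, let us illustrate these classical objects numerically. The following Python sketch evaluates $K_n(x,x)$ and $\lambda_n(x)$ for the Lebesgue measure on $[-1,1]$ using orthonormal Legendre polynomials; it is only a toy illustration of the definitions above and of the M\'at\'e--Nevai--Totik asymptotics of the Christoffel function in the bulk.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def p(k, x):
    """Orthonormal Legendre polynomial w.r.t. the Lebesgue measure on [-1, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, c)

def christoffel(n, x):
    """lambda_n(x) = 1 / K_n(x, x) with K_n(x, x) = sum_{k < n} p_k(x)^2."""
    return 1.0 / sum(p(k, x) ** 2 for k in range(n))

# For fixed x in (-1, 1) one expects n * lambda_n(x) to be close to
# pi * sqrt(1 - x^2) for large n (Mate-Nevai-Totik asymptotics, w = 1).
x = 0.3
print(50 * christoffel(50, x), np.pi * np.sqrt(1 - x ** 2))
\end{verbatim}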
In this paper we will consider the Christoffel--Darboux kernel for multiple orthogonal polynomials. In this case the symmetry is lost, there is no obvious
extremum problem, and the positivity of the Christoffel function is not immediately visible since it is no longer a sum of squares. Nevertheless we
will be able to give some results about the weak convergence of the Christoffel--Darboux kernel.
Let $r \geq 1$ and let $\vec{\mu} = (\mu_1,\ldots,\mu_r)$ be a vector of positive measures on the real line having all moments finite.
By $\mathbb{N}$ we denote the set of positive integers and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$.
Let $\vec{n} \in \mathbb{N}_0^r$ be a multi-index of size $|\vec{n}| = n_1+\ldots+n_r$. The monic polynomial $P_{\vec{n}} \in \mathbb{R}[x]$ is called the \emph{type II multiple orthogonal polynomial} if it is of degree $|\vec{n}|$ and it satisfies the following simultaneous orthogonality conditions
\begin{equation} \label{eq:46}
\int_\mathbb{R} x^k P_{\vec{n}}(x) {\: \rm d} \mu_j(x) = 0, \qquad
0 \leq k \leq n_j-1; 1 \leq j \leq r.
\end{equation}
The existence of $P_{\vec{n}}$ is not automatic, but it does hold under some additional conditions imposed on the moments of the measures. If for any $\vec{n} \in \mathbb{N}_0^r$ the polynomial $P_{\vec{n}}$ exists, then the vector $\vec{\mu}$ is called \emph{perfect}. The class of perfect systems contains Angelesco systems, Nikishin systems and, more generally, AT systems (see \cite[Chapter 23]{Ismail2009} for more details). In this article we shall assume that $\vec{\mu}$ is perfect.
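Numerically, the orthogonality conditions \eqref{eq:46} are $|\vec{n}|$ linear equations for the $|\vec{n}|$ unknown lower-order coefficients of the monic polynomial $P_{\vec{n}}$, so $P_{\vec{n}}$ can be computed from the moments of the measures. The following Python sketch does this for a toy Angelesco-type example (Lebesgue measures on two disjoint intervals); it solves the moment system directly and is not meant as a numerically stable algorithm.
\begin{verbatim}
import numpy as np

def moment(a, b, k):
    """k-th moment of the Lebesgue measure on [a, b]."""
    return (b ** (k + 1) - a ** (k + 1)) / (k + 1)

def type2_mop(n_vec, intervals):
    """Monic type II MOP for Lebesgue measures on the given intervals.

    n_vec:     multi-index (n_1, ..., n_r)
    intervals: list of r pairs (a_j, b_j)
    Returns the coefficients [c_0, ..., c_{N-1}, 1] of P in the monomial basis.
    """
    N = sum(n_vec)
    rows, rhs = [], []
    for (a, b), nj in zip(intervals, n_vec):
        for k in range(nj):                       # orthogonality against x^k
            rows.append([moment(a, b, k + m) for m in range(N)])
            rhs.append(-moment(a, b, k + N))
    c = np.linalg.solve(np.array(rows), np.array(rhs))
    return np.append(c, 1.0)                      # monic leading coefficient

# Angelesco toy example: mu_1 on [-1, 0], mu_2 on [0, 1], multi-index (2, 2).
print(type2_mop((2, 2), [(-1.0, 0.0), (0.0, 1.0)]))
\end{verbatim}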
Multiple orthogonal polynomials have applications in such fields as approximation theory (Hermite-P\'ade approximation \cite{VanAssche2006}, construction of quadratures \cite{Coussement2005, LubWVA, WVAVuer}), number theory (proving irrationality of numbers \cite{VanAssche1999}), random matrix theory (models with external source \cite{Bleher2004} and products of random matrices \cite{KuijlZhang, KieKuijlStiv}) and more general determinantal point processes (see, e.g., \cite{Kuijlaars2010}).
In the applications to determinantal point processes one is interested in the asymptotic behaviour of the corresponding Christoffel--Darboux kernel which is the main object of study in the present paper. In order to define it we need some definitions. First of all, we need a dual concept to \eqref{eq:46}. A vector $A_{\vec{n}} = (A_{\vec{n},1},\ldots,A_{\vec{n},r})$ contains \emph{type I multiple orthogonal polynomials} if each $A_{\vec{n},j} \in \mathbb{R}[x]$ is a polynomial of degree $\leq n_j-1$ and
\[
\sum_{j=1}^r \int_\mathbb{R} x^k A_{\vec{n},j}(x) {\: \rm d} \mu_j(x) = 0, \qquad 0 \leq k \leq |\vec{n}|-2,
\]
with the normalization
\[
\sum_{j=1}^r \int_\mathbb{R} x^{|\vec{n}|-1} A_{\vec{n},j}(x) {\: \rm d} \mu_j(x) = 1.
\]
It is a basic result that $A_{\vec{n}}$ exists if and only if $P_{\vec{n}}$ exists.
Next, without loss of generality we can assume that the measures $\mu_1,\ldots,\mu_r$ are absolutely continuous with respect to a measure $\mu$ (e.g., one can take $\mu = \mu_1+\mu_2 + \cdots +\mu_r$). Let $w_j$ be the Radon-Nikodym derivative of $\mu_j$ with respect to $\mu$. Then one defines the function
\[
Q_{\vec{n}}(x) = \sum_{j=1}^r A_{\vec{n},j}(x) w_j(x).
\]
Further, let us fix a sequence of multi-indices from $\mathbb{N}_0^r$ such that for any $\ell \in \mathbb{N}_0$
\begin{equation} \label{eq:24}
|\vec{n}_\ell| = \ell \quad \text{and} \quad
\vec{n}_{\ell+1} = \vec{n}_\ell + \vec{e}_{i_\ell}
\end{equation}
for some $i_\ell \in \{1,\ldots,r\}$, where $\vec{e}_j \in \mathbb{N}_0^r$ is equal to $1$ on the $j$th position and $0$ elsewhere. Next, let us define
\begin{equation} \label{eq:25}
p_{\ell} := P_{\vec{n}_{\ell}}, \quad
q_{\ell} := Q_{\vec{n}_{\ell+1}}.
\end{equation}
Then the sequences $(p_{\ell} : \ell \in \mathbb{N}_0), (q_{\ell} : \ell \in \mathbb{N}_0)$ are biorthogonal in $L^2(\mu)$, \cite[\S 23.1.3]{Ismail2009}, i.e.
\begin{equation} \label{eq:1}
\int_\mathbb{R} p_{\ell}(x) {q_{\ell'}(x)} {\: \rm d} \mu(x) =
\begin{cases}
1 & \ell = \ell' \\
0 & \text{otherwise}.
\end{cases}
\end{equation}
Finally, one defines the \emph{Christoffel--Darboux kernel} by the formula
\begin{equation} \label{eq:47}
K_n(x,y) = \sum_{j=0}^{n-1} p_j(x) {q_j(y)}.
\end{equation}
Observe that the kernel $K_n(x,y)$ is usually non-symmetric, i.e., $K_n(x,y) \neq K_n(y,x)$, unless $r=1$ and $\mu:=\mu_1$. This kernel has been studied extensively in the case $r=1$ (and $\mu=\mu_1$) for compactly supported measures $\mu$, see, e.g., the survey \cite{Nevai1986} or \cite{Simon2008} for details. In particular, for any $n \in \mathbb{N}$ and $x \in \operatornamewithlimits{supp}(\mu)$ one has $K_n(x,x) \geq 1$, and
\begin{equation} \label{eq:52}
\lim_{n \to \infty} K_n(x,x) = \frac{1}{\mu(\{x\})},
\end{equation}
and for any $f \in \mathcal{C}_b(\mathbb{R})$ (continuous bounded function on the real line)
\begin{equation} \label{eq:48}
\lim_{n \to \infty} \bigg| \int_\mathbb{R} f {\: \rm d} \nu_n - \int_\mathbb{R} f {\: \rm d} \eta_n \bigg| = 0,
\end{equation}
where
\begin{equation} \label{eq:49}
\nu_n = \frac{1}{n} \sum_{x \in p_n^{-1} (\{0\})} \delta_x \quad \text{and} \quad
{\: \rm d} \eta_n(x) = \frac{1}{n} K_n(x,x) {\: \rm d} \mu(x).
\end{equation}
The measure $\nu_n$ is the normalized zero counting measure of $p_n$. The relation \eqref{eq:48} tells us that the weak accumulation points of the sequences $(\nu_n : n \in \mathbb{N})$ and $(\eta_n : n \in \mathbb{N})$ (by weak compactness they exist) are the same. This property is very important because the behaviour of the sequence $(\nu_n : n \in \mathbb{N})$ is rather well-understood in terms of logarithmic potential theory (see, e.g., \cite{Simon2007}). Let us mention that in the applications to random matrix theory one is usually interested in stronger pointwise limits
\[
\lim_{n \to \infty} \frac{1}{n^\gamma} K_n \bigg(x + \frac{a}{n^\gamma}, x + \frac{b}{n^\gamma} \bigg)
\]
for $a,b \in \mathbb{R}$ and some $\gamma > 0$, see e.g. \cite{Totik2009, Lubinsky2016}, which are out of the scope of the present article.
The idea of the proof of \eqref{eq:48} presented in \cite{Simon2009} is to show that \eqref{eq:48} holds for any $f \in \mathbb{R}[x]$ by expressing both integrals in terms of the corresponding Jacobi matrix. A crucial feature is that in this setup the Jacobi matrix is a bounded operator on $\ell^2$. Since $(\nu_n : n \in \mathbb{N})$ and $(\eta_n : n \in \mathbb{N})$ are sequences of probability measures, then \eqref{eq:48} holds for any $f \in \mathcal{C}_b(\mathbb{R})$ by a density argument.
In Theorem~\ref{thm:A} below we adapt this approach to $r > 1$ to obtain that under some hypotheses \eqref{eq:48} holds true for any $f \in \mathbb{R}[x]$. To do so, we need to define a generalisation of the Jacobi matrix to our setup. More precisely, since the sequence $(p_\ell : \ell \geq 0)$ is an algebraic basis for $\mathbb{R}[x]$ there are real constants $J_{\ell,k}$ such that
\begin{equation} \label{eq:53}
x p_\ell = \sum_{k=0}^\infty J_{\ell,k} p_k.
\end{equation}
We collect these constants into a matrix $J = [J_{\ell,k}]_{\ell,k=0,1,\ldots}$, which is lower Hessenberg, i.e. $J_{\ell,k} = 0$ for $k-\ell > 1$. Let us remark that for a specific choice of the sequence $(\vec{n}_\ell: \ell \geq 0)$, namely the so-called stepline multi-indices, the matrix $J$ is both banded and bounded (see \cite{Aptekarev2006}). This is no longer true in our generality, which makes our analysis more complicated. The idea of using $J$ in the general setup comes from \cite{Duits2021}.
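Numerically, a row of $J$ can be obtained by expanding $x\,p_\ell$ in the basis $(p_k)_{k=0}^{\ell+1}$; since $\deg p_k = k$ and the $p_k$ are monic, this is a triangular change of basis. The following Python sketch illustrates the computation, assuming the polynomials are stored as vectors of monomial coefficients (lowest degree first), for instance as produced by the earlier snippet computing $P_{\vec{n}}$ from moments.
\begin{verbatim}
import numpy as np

def hessenberg_row(p_list, ell):
    """Coefficients J_{ell, 0..ell+1} in  x * p_ell = sum_k J_{ell, k} p_k.

    p_list[k] holds the monomial coefficients [c_0, ..., c_k] of the monic
    polynomial p_k (lowest degree first), for k = 0, ..., ell + 1.
    """
    deg = ell + 1
    # Monomial coefficients of x * p_ell (a shift by one degree).
    xp = np.concatenate(([0.0], p_list[ell]))
    # Change-of-basis matrix: column k holds the coefficients of p_k.
    B = np.zeros((deg + 1, deg + 1))
    for k in range(deg + 1):
        B[:len(p_list[k]), k] = p_list[k]
    # Solve B @ row = xp; B is upper triangular with unit diagonal (monic).
    return np.linalg.solve(B, xp)

# Sanity check with monic Chebyshev-like polynomials p_0 = 1, p_1 = x,
# p_2 = x^2 - 1/2:  x * p_1 = 0.5 * p_0 + p_2, so the row should be [0.5, 0, 1].
p = [np.array([1.0]), np.array([0.0, 1.0]), np.array([-0.5, 0.0, 1.0])]
print(hessenberg_row(p, 1))
\end{verbatim}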
Hardy \cite{Hardy2015,Hardy2018} proved results comparing the distribution of the zeros of $p_n$ with the
distribution of random points of special determinantal point processes known as polynomial ensembles. He used very similar techniques which he calls tracial representations (see Lemma 2.1 and Corollary 2.4 in \cite{Hardy2015}) and his results about the average empirical
distribution of the point process \cite[Thm. 3.1]{Hardy2018} corresponds to \eqref{eq:48} when $f$ is a polynomial.
\begin{theorem} \label{thm:A}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a sequence satisfying \eqref{eq:24}. Assume that the corresponding matrix $J$ satisfies
\begin{equation} \label{eq:44}
\sup_{n \geq N} |J_{n,n-N}| < \infty \quad \text{for any } N \geq 0.
\end{equation}
Then for any $f \in \mathbb{R}[x]$ the formula \eqref{eq:48} holds true.
\end{theorem}
It turns out that the condition \eqref{eq:44} is the right substitute for the boundedness of the Jacobi matrix. In Theorem~\ref{thm:3} we formulate a sufficient condition for \eqref{eq:44} to hold which is satisfied for Angelesco and AT systems. Moreover, in Lemma~\ref{lem:2} we present a sufficient condition for the positivity of $K_n(x,x)$, which allows us to prove that \eqref{eq:48} holds for any $f \in \mathcal{C}_b(\mathbb{R})$. This condition covers compactly supported Angelesco systems and compactly supported AT systems with continuous densities.
Inspired by \cite[Section 6.2]{Nevai1979} we define for every bounded measurable function $f$ and $x \in \operatornamewithlimits{supp}(\mu)$
\[
G_n[f](x) = \frac{1}{K_n(x,x)} \int_\mathbb{R} K_n(x,y) K_n(y,x) f(y) {\: \rm d} \mu(y) ,
\]
provided that $K_n(x,x) \neq 0$.
We are interested in those cases when
\begin{equation} \label{eq:50}
\lim_{n \to \infty}
G_n[f](x) =
f(x), \quad f \in \mathcal{C}_b(\mathbb{R}) .
\end{equation}
In the classical case $r=1$ the condition \eqref{eq:50} has been introduced in \cite{Nevai1979} in order to obtain relative asymptotics of Christoffel functions (see, e.g., \cite[Section 4.5]{Nevai1986}). It has been used also for pointwise convergence of orthogonal expansions with respect to systems of orthonormal polynomials (see \cite[Section 4.12]{Nevai1986}). It has been studied rather extensively in \cite{Breuer2010a} where it was shown that for compactly supported measures $\mu:= \mu_1$ the condition \eqref{eq:50} is equivalent to subexponential growth of the sequence of orthonormal polynomials $(p_n : n \geq 0)$ in $L^2(\mu)$, namely
\begin{equation} \label{eq:51}
\lim_{n \to \infty} \frac{p_n^2(x)}{\sum_{j=0}^n p_j^2(x)} = 0.
\end{equation}
In \cite{Lubinsky2011} the problem of convergence of $G_n[f]$ to $f$ in the $L^p(\!{\: \rm d} x)$ norm was considered. Finally, in \cite{Breuer2014} this concept has been applied to estimation of the variance of linear statistics coming from orthogonal polynomial ensembles.
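In the scalar case $r=1$ the operator $G_n$ is easy to evaluate numerically. The Python sketch below does so for the Lebesgue measure on $[-1,1]$ with orthonormal Legendre polynomials, approximating the integral by Gauss--Legendre quadrature; it merely illustrates the convergence \eqref{eq:50} at a fixed point of $(-1,1)$ and does not involve multiple orthogonality.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def p(k, x):
    """Orthonormal Legendre polynomials for the Lebesgue measure on [-1, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, c)

def K(n, x, y):
    return sum(p(k, x) * p(k, y) for k in range(n))

def G(n, f, x, quad_points=400):
    """G_n[f](x) = (1/K_n(x,x)) * int K_n(x,y)^2 f(y) dmu(y)  (r = 1 case)."""
    y, w = legendre.leggauss(quad_points)     # Gauss-Legendre nodes and weights
    return np.dot(w, K(n, x, y) ** 2 * f(y)) / K(n, x, x)

f = lambda y: np.cos(3 * y)
for n in (5, 20, 80):
    print(n, G(n, f, 0.3))   # expected to approach f(0.3) = cos(0.9) ~ 0.6216
\end{verbatim}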
Our motivation of examining the condition \eqref{eq:50} comes from the desire to understand whether an analogue of \eqref{eq:52} holds true (see Proposition~\ref{prop:6}).
\begin{theorem} \label{thm:B}
Assume that:
\begin{enumerate}[(a)]
\item the matrix $J$ satisfies \eqref{eq:44},
\item $K_n(x,x) > 0$ for large $n$,
\item for any $N \geq 1$ one has
$\begin{aligned}[b]
\lim_{n \to \infty} \frac{q_{\ell}(x) p_{\ell'}(x)}{K_n(x,x)} = 0
\end{aligned}$, where $\ell' \in [n-N, n)$ and $\ell \in [n, n+N]$. \label{thm:B:d}
\end{enumerate}
Then the convergence \eqref{eq:50} holds for any $f \in \mathbb{R}[x]$.
\end{theorem}
Let us observe that \eqref{thm:B:d} is an analogue of \eqref{eq:51}.
Regarding Theorem~\ref{thm:B}, we were not able to extend the convergence \eqref{eq:50} to any $f \in \mathcal{C}_b(\mathbb{R})$. The problem here is that the kernel $K_n(x,y) K_n(y,x)$ might not be of constant sign --- we observed this phenomenon numerically for Jacobi-Pi\~{n}eiro polynomials.
We prove Theorem~\ref{thm:B} by similar means as for Theorem~\ref{thm:A}. Namely, we were able to rewrite $G_n[f](x)$ in terms of the matrix $J$ (see Lemma~\ref{lem:1}), which seems to be a novel approach to this problem even for $r=1$.
The article is organized as follows. In Section~\ref{sec:4} we define and analyse the matrix $J$ from \eqref{eq:53} and its finite truncations. Section~\ref{sec:5} is devoted to analysing the condition~\eqref{eq:44}, its consequences and sufficient conditions. In Section~\ref{sec:6} we prove Theorem~\ref{thm:A}, its corollaries and we discuss conditions under which we can extend convergence to $\mathcal{C}_b(\mathbb{R})$. In Section~\ref{sec:7} we prove Theorem~\ref{thm:B} and discuss some open problems related to it. Finally, in Section~\ref{sec:8} we discuss applications of Theorem~\ref{thm:A} to classes of multiple orthogonal polynomials when one can describe the limit of $(\nu_n : n \in \mathbb{N})$ more explicitly.
\section{The matrix representations of the multiplication operator} \label{sec:4}
Consider the multiplication operator $M_x : \operatorname{Dom}(M_x) \to L^2(\mu)$ given by
\[
(M_x f)(x) := x f(x),
\]
where
\[
\operatorname{Dom}(M_x) = \big\{ f \in L^2(\mu) : x \cdot f \in L^2(\mu) \big\}.
\]
We have that $M_x$ is a (possibly unbounded) self-adjoint operator.
Let us define the linear spaces
\[
\mathcal{P}_n := \big\{ p \in \mathbb{R}[x] : \deg(p) \leq n \big\}, \quad n \geq 0.
\]
We would like to examine the matrix representation of $M_x$ on the space $\mathbb{R}[x]$. Since
$(p_\ell : \ell \in \mathbb{N}_0)$ is an algebraic basis for the space $\mathbb{R}[x]$, it is enough
to examine the action of $M_x$ on each $p_{\ell}$. Because $x p_{\ell} \in \mathcal{P}_{\ell+1}$
there are constants $\{ J_{\ell, k} \}_{k=0}^{\ell+1}$ such that
\begin{equation} \label{eq:2a}
M_x p_{\ell} = \sum_{k=0}^{\ell+1} J_{\ell, k} p_{k} = \sum_{k=0}^{\infty} J_{\ell, k} p_{k},
\end{equation}
where we have defined $J_{\ell, k} = 0$ for $k > \ell+1$.
Let us collect these constants into a matrix $J = [J_{\ell, k}]_{\ell,k=0}^\infty$.
Observe that it is a lower Hessenberg matrix.
Next, we want to examine the spectrum of finite sections of $J$.
\begin{proposition} \label{prop:2}
For any $n \geq 1$ let
\[
J_n := [J]_{0 \ldots n-1;0 \ldots n-1}.
\]
Then $\det(x \operatorname{Id}_n - J_n) = p_n(x)$. In particular, $\sigma(J_n) = p_{n}^{-1}[\{0\}]$, i.e., the eigenvalues of $J_n$ are equal to the
zeros of $p_n$.
\end{proposition}
\begin{proof}
The proof is analogous to \cite[Section 2.2]{Coussement2005}. Namely, observe that
\[
x \operatorname{Id}_n - J_n =
\begin{pmatrix}
x-J_{0,0} & -1 & 0 & 0 & \ldots & 0 & 0 \\
-J_{1,0} & x-J_{1,1} & -1 & 0 & \ldots & 0 & 0 \\
-J_{2,0} & -J_{2,1} & x-J_{2,2} & -1 & \ldots &0 & 0 \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
-J_{n-2,0} & -J_{n-2,1} & -J_{n-2,2} & \ldots & \ldots & x-J_{n-2,n-2} & -1 \\
-J_{n-1,0} & -J_{n-1,1} & -J_{n-1,2} & \ldots & \ldots &-J_{n-1,n-2} & x-J_{n-1,n-1} \\
\end{pmatrix}.
\]
Set
\[
f_n(x) = \det(x \operatorname{Id}_n - J_n), \quad n \geq 0,
\]
where the determinant of an empty matrix is defined to be $1$. Then by expanding this determinant with respect to the last row and repeatedly expanding the resulting determinants with respect to the last columns we get
\[
f_n(x) = (x-J_{n-1,n-1})f_{n-1}(x) -\sum_{j=1}^{n-1} (-1)^{n+j} J_{n-1,j-1} (-1)^{n-j} f_{j-1}(x),
\]
which results in
\begin{equation} \label{eq:23}
x f_{n-1}(x) = \sum_{j=0}^{n} J_{n-1, j} f_j(x).
\end{equation}
Observe that by \eqref{eq:23} and \eqref{eq:2a} $f_n(x)$ and $p_n(x)$ satisfy the same recurrence relation. Since $f_0(x) = p_0(x)$ we obtain that $f_n(x) = p_n(x)$.
\end{proof}
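As a sanity check of Proposition~\ref{prop:2}, consider the classical case $r=1$ with monic Chebyshev polynomials of the first kind, for which the recurrence coefficients and the zeros are known in closed form. The following Python sketch confirms numerically that the eigenvalues of the truncation $J_n$ coincide with the zeros of $p_n$; it is only an illustration and plays no role in the proofs.
\begin{verbatim}
import numpy as np

n = 6
# Hessenberg matrix of the monic Chebyshev recurrence (first kind):
# x p_k = p_{k+1} + gamma_k p_{k-1}, with gamma_1 = 1/2, gamma_k = 1/4 (k >= 2).
J = np.zeros((n, n))
for k in range(n - 1):
    J[k, k + 1] = 1.0
for k in range(1, n):
    J[k, k - 1] = 0.5 if k == 1 else 0.25

eigs = np.sort(np.linalg.eigvals(J).real)
zeros = np.sort(np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)))
print(np.allclose(eigs, zeros))   # expected: True
\end{verbatim}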
Analogously, one can try to consider the matrix representation of $M_x$ on the space $\mathcal{Q}$, where
\[
\mathcal{Q}_n := \operatorname{span} \{ q_{k} : k = 0, 1, \ldots, n \} \quad \text{and} \quad
\mathcal{Q} := \bigcup_{k=0}^\infty \mathcal{Q}_k.
\]
The following proposition provides sufficient conditions under which it is possible.
\begin{proposition} \label{prop:1}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}. Let $(p_\ell : \ell \geq 0)$ and $(q_\ell : \ell \geq 0)$ be defined as in \eqref{eq:25}. If
\begin{equation} \label{eq:11}
\lim_{\ell \to \infty} (\vec{n}_\ell)_j = \infty, \quad j=1, 2, \ldots r,
\end{equation}
then the operator $M_x$ is well-defined on $\mathcal{Q}$. Moreover, for any $\ell \geq 0$ we have
\[
M_x q_\ell = \sum_{k=0}^\infty J_{k, \ell} q_k,
\]
where $J$ is defined in \eqref{eq:2a}, and the above sum contains a finite number of non-zero elements.
\end{proposition}
\begin{proof}
First of all, by \cite[Corollary 23.1.1]{Ismail2009} and induction we can derive that for any $\ell \geq 0$
\begin{equation} \label{eq:12}
\operatorname{span} \big\{ A_{\vec{n}_{k+1}} : 0 \leq k \leq \ell \big\} =
\bigoplus_{j=1}^r \mathcal{P}_{(\vec{n}_N)_j}.
\end{equation}
In particular,
\[
A_{\vec{n}_{\ell+1}} \in \bigoplus_{j=1}^r \mathcal{P}_{(\vec{n}_\ell)_j},
\]
and consequently,
\begin{equation} \label{eq:13}
x A_{\vec{n}_{\ell+1}} \in \bigoplus_{j=1}^r x \mathcal{P}_{(\vec{n}_\ell)_j}
\subseteq
\bigoplus_{j=1}^r \mathcal{P}_{(\vec{n}_\ell)_j+1}.
\end{equation}
Now, by \eqref{eq:11} there exists \emph{minimal} $N_\ell \geq \ell+r$ such that
\[
(\vec{n}_{N_\ell})_j \geq (\vec{n}_\ell)_j + 1, \quad j=1,2,\ldots,r.
\]
Thus, by \eqref{eq:12} and \eqref{eq:13} we get
\[
x A_{\vec{n}_{\ell+1}} \in \operatorname{span} \big\{ A_{\vec{n}_{k+1}} : 0 \leq k \leq N_\ell \big\},
\]
which immediately implies $x q_\ell \in \mathcal{Q}_{N_\ell}$. Thus, there are constants $\{ S_{\ell,k} \}_{k=0}^{N_\ell}$ such that
\begin{equation} \label{eq:2b}
M_x q_\ell = \sum_{k=0}^{N_\ell} S_{\ell,k} q_k =
\sum_{k=0}^\infty S_{\ell,k} q_k,
\end{equation}
where we have defined $S_{\ell,k} = 0$ for $k > N_\ell$. Let us collect these constants into a matrix $S=[S_{\ell,k}]_{\ell,k=0}^\infty$. It remains to prove that $S = J^t$.
Let us take the scalar product on the both sides of \eqref{eq:2a} with $q_{\ell'}$. Then
\[
\langle M_x p_{\ell}, q_{\ell'} \rangle_{L^2(\mu)} =
\sum_{k=0}^{\infty} J_{\ell, k} \langle p_{k}, q_{\ell'} \rangle_{L^2(\mu)} =
J_{\ell, \ell'},
\]
where the last equality follows from \eqref{eq:1}. Next, by self-adjointness of $M_x$, we have
\[
\langle M_x p_{\ell}, q_{\ell'} \rangle_{L^2(\mu)} =
\langle p_{\ell}, M_x q_{\ell'} \rangle_{L^2(\mu)} =
{\langle M_x q_{\ell'}, p_{\ell} \rangle}_{L^2(\mu)}.
\]
Hence, by an analogous reasoning applied to \eqref{eq:2b}, we obtain
\[
\langle M_x q_{\ell'}, p_\ell \rangle_{L^2(\mu)} =
\sum_{k=0}^\infty S_{\ell',k} \langle q_k, p_{\ell} \rangle_{L^2(\mu)} =
S_{\ell', \ell}
\]
Thus, $S = {J^t}$ and the proof is complete.
\end{proof}
\section{Near diagonal boundedness of $J$} \label{sec:5}
Let us denote by $\mathcal{H}$ the set of lower Hessenberg matrices. Specifically,
\[
X \in \mathcal{H} \quad \Leftrightarrow \quad X_{i,j} = 0 \ \text{for any } j > i+1.
\]
In particular, we have $J \in \mathcal{H}$. Let us define the neighbourhood of the diagonal in $\mathbb{N}_0^2$ of radius
$R \geq 0$ by
\[
D_R = \big\{ (i,j) \in \mathbb{N}_0^2 : | i-j | \leq R \big\}.
\]
In what follows, we will need conditions implying that
\begin{equation} \label{eq:NDB}
\tag{NDB}
\sup_{(i,j) \in D_R} |X_{i,j}| < \infty \quad \text{for any } R \geq 0.
\end{equation}
The following proposition implies that if $X \in \mathcal{H}$ satisfies \eqref{eq:NDB}, then every power of $X$ has this property.
\begin{proposition} \label{prop:3}
Let $X \in \mathcal{H}$. Then for any $\ell \geq 1$ and $M \geq 0$ there are constants $c(M,\ell) \geq 1$ and $R(M,\ell) \geq M$ \emph{independent} of $X$ such that
\begin{equation} \label{eq:8}
\sup_{(i,j) \in D_M} | [X^\ell]_{i,j} | \leq
c(M, \ell) \Big( \sup_{(i,j) \in D_{R(M, \ell)}} |X_{i,j}| \Big)^\ell.
\end{equation}
\end{proposition}
\begin{proof}
First observe that by induction one can show that $[X^\ell]_{i,j} = 0$ provided $j > i+\ell$.
Thus for any $\ell \geq 2$
\begin{equation} \label{eq:9}
[X^\ell]_{i,j} =
\sum_{k=0}^\infty [X^{\ell-1}]_{i,k} X_{k,j} =
\sum_{k=j-1}^{i+\ell-1} [X^{\ell-1}]_{i,k} X_{k,j}.
\end{equation}
We shall prove \eqref{eq:8} inductively. For $\ell=1$, the statement holds true for $c(M, \ell) = 1$
and $R(M, \ell) = M$.
Suppose that $\ell \geq 2$. Then by \eqref{eq:9} and for any $(i,j) \in D_M$ we have
\[
| [X^\ell]_{i,j} | \leq (M+\ell+1) \cdot
\sup_{(i,k) \in D_{M+\ell}} | [X^{\ell-1}]_{i,k} | \cdot
\sup_{(k,j) \in D_{M+\ell}} |X_{k,j}|.
\]
By the induction hypothesis
\[
| [X^\ell]_{i,j} | \leq (M+\ell+1) \cdot c(M+\ell, \ell-1)
\Big( \sup_{(i,k) \in D_{R(M+\ell, \ell-1)}} | X_{i,k} | \Big)^{\ell-1} \cdot
\sup_{(k,j) \in D_{M+\ell}} |X_{k,j}|.
\]
Hence by defining
\[
c(M, \ell) := (M+\ell+1) \cdot c(M+\ell, \ell-1) \quad \text{and} \quad
R(M, \ell) := \max \big( R(M+\ell, \ell-1), M+\ell \big)
\]
we obtain
\begin{equation} \label{eq:10}
| [X^\ell]_{i,j} | \leq c(M, \ell)
\Big( \sup_{(i,k) \in D_{R(M, \ell)}} | X_{i,k} | \Big)^{\ell} .
\end{equation}
The conclusion follows by taking the supremum over all $(i,j) \in D_M$
on the left-hand side of \eqref{eq:10}.
\end{proof}
\subsection{Criteria for the boundedness}
The first sufficient condition implying that $J$ satisfies \eqref{eq:NDB} is formulated in terms of the nearest neighbour recurrence relations satisfied by multiple orthogonal polynomials. Let us recall that according to \cite{VanAssche2011} we have for the type II polynomials
\begin{equation} \label{NNRR-P}
xP_{\vec{n}}(x) =
P_{\vec{n}+\vec{e}_k}(x) +
b_{\vec{n},k} P_{\vec{n}}(x) +
\sum_{j=1}^r a_{\vec{n},j} P_{\vec{n}-\vec{e}_j}(x), \qquad 1 \leq k \leq r
\end{equation}
for some real sequences $a_{\vec{n},j}$ and $b_{\vec{n},k}$.
More precisely, we have the following
\begin{proposition} \label{prop:4}
Let $(\vec{n}_{\ell} : \ell \geq 0)$ be a path satisfying \eqref{eq:24}.
Suppose that the nearest neighbour recurrence coefficients for MOPs satisfy
\begin{align}
\label{eq:3a}
&\max_{1 \leq i \leq r} \sup_{\ell \geq 0} |a_{\vec{n}_\ell, i}| < \infty, \\
\label{eq:3b}
&\max_{1 \leq i \leq r} \sup_{\vec{n} \in \mathbb{N}_0^r} |b_{\vec{n},i}| < \infty.
\end{align}
Then the matrix $J$ associated with the sequence $(\vec{n}_{\ell} : \ell \geq 0)$ satisfies \eqref{eq:NDB}.
\end{proposition}
\begin{proof}
In view of the formula \eqref{eq:2a} and \cite[Proposition 2.1]{Duits2021}, the matrix $J$ satisfies
\[
\begin{gathered}
J_{\ell,\ell+1} = 1, \qquad
J_{\ell, \ell} = b_{\vec{n}_\ell, i_{\ell}} \\
J_{\ell, j} =
\sum_{k=1}^r a_{\vec{n}_{\ell},k}
\prod_{m=j+2}^\ell
\big( b_{\vec{n}_{m-1}-\vec{e}_k,k} - b_{\vec{n}_{m-1}-\vec{e}_k,i_{m-1}} \big), \quad 0 \leq j \leq \ell-1.
\end{gathered}
\]
Let $R \geq 0$. Then for $(\ell, j) \in D_R$ we have $|j-\ell| \leq R$. Hence the product above has at most $R-1$ terms which, by \eqref{eq:3b}, are uniformly bounded. Hence, the result readily follows from \eqref{eq:3a}.
\end{proof}
The following theorem gives other conditions implying \eqref{eq:NDB}. Conditions implying the hypotheses of this result are contained in \cite{Haneczok2012}.
\begin{theorem} \label{thm:3}
Let $(\vec{n}_{\ell} : \ell \geq 0)$ be a path satisfying \eqref{eq:24} and let $J$ be the corresponding matrix
defined by \eqref{eq:2a}.
Suppose there is a compact interval $\Delta \subset \mathbb{R}$ such that for any $n \in \mathbb{N}_0$:
\begin{enumerate}[(a)]
\item the polynomial $p_n$ has exactly $n$ simple zeros which lie inside $\Delta$,
\item the zeros of $p_n$ and $p_{n+1}$ are interlacing.
\end{enumerate}
Then the matrix $J$ satisfies \eqref{eq:NDB}.
\end{theorem}
\begin{proof}
It is enough to prove that
\begin{equation} \label{eq:6}
\sup_{n \geq N} |J_{n,n-N}| < \infty \quad \text{for any } N \geq 0.
\end{equation}
We will prove this inductively with respect to $N$.
We follow the argument from \cite[Lemma 2.2]{Aptekarev2006} and \cite[Lemma 2.3.2]{Denisov2022}.
Let the zeros of $p_n$ be denoted by $(x_{k;n} : k =1,\ldots, n)$, where we order them as follows
\[
x_{1;n} < x_{2;n} < \ldots < x_{n;n}.
\]
Let $p_{n+1}(z) = (z - x_{1;n+1}) (z - x_{n+1;n+1}) r_n(z)$. Then by a partial fraction decomposition we obtain
\begin{align*}
\frac{p_{n+1}(z)}{p_n(z)}
&=
(z - x_{1;n+1}) (z - x_{n+1;n+1}) \frac{r_n(z)}{p_{n}(z)} \\
&=
(z - x_{1;n+1}) (z - x_{n+1;n+1})
\sum_{k=1}^n \frac{r_n(x_{k;n})}{p_{n}'(x_{k;n})} \frac{1}{z-x_{k;n}} .
\end{align*}
Let us divide the last identity by $z$. Since the polynomials $p_n$ are monic we get
\begin{align*}
1 &=
\lim_{|z| \to \infty} \frac{p_{n+1}(z)}{z p_n(z)} \\
&=
\lim_{|z| \to \infty} \frac{z - x_{1;n+1}}{z}
\sum_{k=1}^n \frac{r_n(x_{k;n})}{p_{n}'(x_{k;n})}
\lim_{|z| \to \infty} \frac{z - x_{n+1;n+1}}{z-x_{k;n}} \\
&=
\sum_{k=1}^n \frac{r_n(x_{k;n})}{p_{n}'(x_{k;n})}.
\end{align*}
Since the polynomials $r_n$ and $p_n$ are monic and their zeros are interlacing one can prove that
\[
\frac{r_n(x_{k;n})}{p_{n}'(x_{k;n})} > 0.
\]
Hence for any compact $K \subset \mathbb{C} \setminus \Delta$
\[
M_1(K) := \sup_{n \geq 0} \sup_{z \in K} \frac{|p_{n+1}(z)|}{|p_n(z)|} < \infty.
\]
Consequently, for any $j \in \mathbb{N}$
\begin{equation} \label{eq:14}
M_j(K) := \sup_{n \geq 0} \sup_{z \in K} \frac{|p_{n+j}(z)|}{|p_n(z)|} \leq M_1(K)^j < \infty.
\end{equation}
Let us turn to the proof of \eqref{eq:6} for $N=0$. Since $(p_n : n \geq 0)$ are monic, by \eqref{eq:2a} we have
\begin{equation} \label{eq:19}
x p_n = p_{n+1} + \sum_{k=0}^n J_{n,k} p_k.
\end{equation}
Hence, by dividing both sides by $x p_n$ we get
\begin{equation} \label{eq:15}
1 - \frac{p_{n+1}}{x p_n} = \sum_{k=0}^n J_{n,k} \frac{p_k}{x p_n}.
\end{equation}
Let us denote
\begin{equation} \label{eq:16}
f(z) = 1 - \frac{p_{n+1}(z)}{z p_n(z)}.
\end{equation}
Since $(p_n : n \geq 0)$ are monic we get by \eqref{eq:15} and \eqref{eq:16}
\begin{equation} \label{eq:17}
\lim_{|z| \to \infty} f(z) = 0 \quad \text{and} \quad
\lim_{|z| \to \infty} z f(z) = J_{n,n}.
\end{equation}
Thus, we arrive at
\begin{equation} \label{eq:18}
\operatorname{Res}(f, \infty) = -J_{n,n}.
\end{equation}
On the other hand, let $\gamma_r$ be a positively oriented circle around the origin with radius $r$ which contains $\Delta$ in its interior. Then by the residue theorem
\begin{align*}
\operatorname{Res}(f, \infty)
&=
-\frac{1}{2 \pi i} \int_{\gamma_r} f(z) {\: \rm d} z \\
&=
-\frac{1}{2 \pi i} \int_{\gamma_r} \bigg( 1 - \frac{p_{n+1}(z)}{z p_n(z)} \bigg) {\: \rm d} z \\
&=
\frac{1}{2 \pi i} \int_{\gamma_r} \frac{p_{n+1}(z)}{p_n(z)} \frac{{\: \rm d} z}{z}.
\end{align*}
Hence, by \eqref{eq:14} and \eqref{eq:18} we obtain
\[
|J_{n,n}| \leq M_1(\gamma_r) =: C_0.
\]
Now, let $N \geq 1$ and suppose that
\begin{equation} \label{eq:22}
\sup_{n \geq 0} |J_{n,n-j}| \leq C_j, \quad j=0, 1, \ldots N-1.
\end{equation}
We shall prove a similar bound for $|J_{n, n-N}|$. Similarly as before, let us divide both sides of \eqref{eq:19} by $x p_{n-N}$. Then
\[
\frac{p_{n}}{p_{n-N}} -
\frac{p_{n+1}}{x p_{n-N}} -
\sum_{k=n-N+1}^{n} J_{n,k} \frac{p_k}{x p_{n-N}} =
\sum_{k=0}^{n-N} J_{n,k} \frac{p_k}{x p_{n-N}}.
\]
Let us denote
\[
f(z) =
\frac{p_{n}(z)}{p_{n-N}(z)} -
\frac{p_{n+1}(z)}{z p_{n-N}(z)} -
\sum_{k=n-N+1}^{n} J_{n,k} \frac{p_k(z)}{z p_{n-N}(z)}.
\]
Then
\[
\lim_{|z| \to \infty} f(z) = 0 \quad \text{and} \quad
\lim_{|z| \to \infty} z f(z) = J_{n, n-N}.
\]
Hence
\begin{equation} \label{eq:20}
\operatorname{Res}(f, \infty) = -J_{n, n-N}.
\end{equation}
On the other hand, analogously as before,
\begin{equation} \label{eq:21}
\operatorname{Res}(f, \infty) =
-\frac{1}{2 \pi i}
\int_{\gamma_r} z f(z) \frac{{\: \rm d} z}{z}.
\end{equation}
But by \eqref{eq:14} and \eqref{eq:22}
\[
\sup_{z \in \gamma_r} |z f(z)| \leq
r M_N(\gamma_r) + M_{N+1}(\gamma_r) + \sum_{k=0}^{N-1} C_{k} M_{k+1}(\gamma_r) =: C_N.
\]
Hence by \eqref{eq:20} and \eqref{eq:21} we get
\[
\sup_{n \geq N} |J_{n, n-N}| \leq C_N
\]
which ends the proof of \eqref{eq:6} for $N$.
\end{proof}
\section{Density of zeros and weak limit of the Christoffel--Darboux kernel} \label{sec:6}
Let us recall that in \eqref{eq:49} we have defined
\begin{equation} \label{eq:27}
\nu_n = \frac{1}{n} \sum_{y \in p_n^{-1}[\{0\}]} \delta_y \quad \text{and} \quad
{\: \rm d} \eta_n(x) = \frac{1}{n} K_n(x,x) {\: \rm d} \mu(x),
\end{equation}
where in the first sum we take into account the possible multiplicities of the zeros of $p_n$, and
\begin{equation} \label{eq:28}
K_n(x,y) = \sum_{j=0}^{n-1} p_j(x) {q_j(y)}.
\end{equation}
The proof of the following theorem is based on \cite[Proposition 2.3]{Simon2009}.
\begin{theorem} \label{thm:4}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}. Suppose that the corresponding matrix $J$,
defined by \eqref{eq:2a}, satisfies \eqref{eq:NDB}. Then for any $\ell \geq 0$
\[
\lim_{n \to \infty}
\bigg|
\int_\mathbb{R} x^\ell {\: \rm d} \nu_{n} - \int_\mathbb{R} x^\ell {\: \rm d} \eta_n \bigg| = 0.
\]
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:2} we obtain
\[
\int_\mathbb{R} x^\ell {\: \rm d} \nu_{n} =
\frac{1}{n} \operatorname{tr} (J_n^\ell) =
\frac{1}{n} \sum_{j=0}^{n-1} [ J_n^\ell ]_{j,j} .
\]
By \eqref{eq:2a} we have
\begin{align*}
\int_\mathbb{R} x^\ell {\: \rm d} \eta_n &=
\frac{1}{n} \sum_{j=0}^{n-1} \int_\mathbb{R} x^\ell p_j(x) {q_j(x)} {\: \rm d} \mu(x) \\
&=
\frac{1}{n} \sum_{j=0}^{n-1} \big\langle M_x^\ell p_j, q_j \big\rangle_{L^2(\mu)} \\
&=
\frac{1}{n} \sum_{j=0}^{n-1} [ J^\ell ]_{j,j} .
\end{align*}
Hence
\begin{align}
\nonumber
\bigg|
\int_\mathbb{R} x^\ell {\: \rm d} \nu_{n} - \int_\mathbb{R} x^\ell {\: \rm d} \eta_n \bigg| &\leq
\frac{1}{n}
\sum_{j=0}^{n-1}
\big| [ J_n^\ell ]_{j,j} - [ J^\ell ]_{j,j} \big| \\
\nonumber
&=
\frac{1}{n}
\sum_{j=n-\ell}^{n-1}
\big| [ J_n^\ell ]_{j,j} - [ J^\ell ]_{j,j} \big| \\
\label{eq:26}
&\leq
\frac{1}{n}
\sum_{j=n-\ell}^{n-1}
\big( |[ J_n^\ell ]_{j,j}| + |[ J^\ell ]_{j,j}| \big).
\end{align}
By \eqref{eq:NDB} and Proposition~\ref{prop:3} we obtain that for any fixed $\ell$
\[
\sup_{(i,j) \in D_M} |[J^\ell]_{i,j}| < \infty \quad \text{for any } M \geq 0.
\]
Next, since
\[
|[J_n]_{i,j}| \leq |J_{i,j}|,
\]
Proposition~\ref{prop:3} together with \eqref{eq:NDB} implies the existence of constants $c(M,\ell) \geq 1$ and $R(M, \ell) \geq M$
such that
\begin{align*}
\sup_{(i,j) \in D_M} |[J_n^\ell]_{i,j}|
&\leq
c(M, \ell) \Big( \sup_{(i,j) \in D_{R(M, \ell)}} |[J_n]_{i,j}| \Big)^\ell \\
&\leq
c(M, \ell) \Big( \sup_{(i,j) \in D_{R(M, \ell)}} |J_{i,j}| \Big)^\ell .
\end{align*}
Hence
\[
\sup_{n \geq 1} \sup_{(i,j) \in D_M} |[J_n^\ell]_{i,j}| < \infty \quad \text{for any } M \geq 0.
\]
Thus,
\[
\sup_{n \geq 1}
\sup_{n-\ell \leq j < n}
\big( |[ J_n^\ell ]_{j,j}| + |[ J^\ell ]_{j,j}| \big) < \infty.
\]
Hence, together with \eqref{eq:26} this implies the existence of a constant $c > 0$ such that
\[
\bigg|
\int_\mathbb{R} x^\ell {\: \rm d} \nu_{n} - \int_\mathbb{R} x^\ell {\: \rm d} \eta_n \bigg|
\leq \frac{c}{n},
\]
from which the conclusion follows.
\end{proof}
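To illustrate the last estimate, consider again the scalar Chebyshev example: the two $\ell$-th moments differ only through the last $\ell$ diagonal entries, so their difference decays like $1/n$. The Python sketch below (reusing the Hessenberg matrix of the monic Chebyshev recurrence, an $r=1$ example) displays this decay numerically; since this matrix is banded, the diagonal entries of $J^\ell$ up to index $n-1$ are computed exactly from a slightly larger truncation.
\begin{verbatim}
import numpy as np

def chebyshev_hessenberg(size):
    """Hessenberg matrix of the monic Chebyshev recurrence (r = 1 example)."""
    J = np.zeros((size, size))
    for k in range(size - 1):
        J[k, k + 1] = 1.0
    for k in range(1, size):
        J[k, k - 1] = 0.5 if k == 1 else 0.25
    return J

ell = 4
for n in (10, 40, 160):
    # Diagonal of J^ell (exact for indices < n because the matrix is banded).
    big = np.linalg.matrix_power(chebyshev_hessenberg(n + ell), ell)
    small = np.linalg.matrix_power(chebyshev_hessenberg(n), ell)   # J_n^ell
    moment_nu = np.trace(small) / n                 # int x^ell d nu_n
    moment_eta = np.trace(big[:n, :n]) / n          # int x^ell d eta_n
    print(n, abs(moment_nu - moment_eta))           # expected to shrink ~ 1/n
\end{verbatim}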
\begin{corollary} \label{cor:1}
Let the hypotheses of Theorem~\ref{thm:4} be satisfied. Let $(n_k : k \in \mathbb{N}_0)$ be an increasing sequence of positive integers. Suppose that there is a compact set $K \subset \mathbb{R}$ such that $\operatornamewithlimits{supp}(\mu) \subset K$ and $\operatornamewithlimits{supp}(\nu_{n_k}) \subset K$ for any $k \in \mathbb{N}_0$. If
\begin{equation} \label{eq:4}
\sup_{k \in \mathbb{N}_0} \frac{1}{n_k} \int_\mathbb{R} |K_{n_k}(x,x)| {\: \rm d} \mu(x) < \infty,
\end{equation}
then
\begin{equation} \label{eq:5'}
\lim_{k \to \infty} \bigg| \int_\mathbb{R} f {\: \rm d} \nu_{n_k} - \int_\mathbb{R} f {\: \rm d} \eta_{n_k} \bigg| = 0, \quad f \in \mathcal{C}(K).
\end{equation}
In particular, for any probability measure $\nu_\infty$ supported on $K$ one has the equivalence\footnote{By $\mu_{n} \xrightarrow{w} \mu$ we denote the weak convergence of finite measures, i.e. $\int_\mathbb{R} f {\: \rm d} \mu_n \to \int_\mathbb{R} f {\: \rm d} \mu$ for any $f \in \mathcal{C}_b(\mathbb{R})$.}
\begin{equation} \label{eq:5}
\nu_{n_k} \xrightarrow{w} \nu_\infty \quad \Leftrightarrow \quad
\eta_{n_k} \xrightarrow{w} \nu_\infty.
\end{equation}
\end{corollary}
\begin{proof}
First of all, it is immediate from the definition \eqref{eq:27} that $(\nu_{n_k} : k \geq 1)$ is a sequence of probability measures. Let us observe that the sequence $(\eta_{n_k} : k \geq 1)$ is bounded in the total variation norm.
Indeed, in view of \eqref{eq:27}, we have
\[
\| \eta_{n_k} \|_{\mathrm{TV}} =
|\eta_{n_k}|(\mathbb{R}) = \frac{1}{n_k} \int_\mathbb{R} |K_{n_k}(x,x)| {\: \rm d} \mu(x).
\]
Hence, by \eqref{eq:4}, we obtain
\begin{equation} \label{eq:4'}
\sup_{k \geq 0} \| \eta_{n_k} \|_{\mathrm{TV}} =: C < \infty.
\end{equation}
Take $f \in \mathcal{C}(K)$. Then, by the Weierstrass theorem, for any $\epsilon > 0$,
there exists $P_\epsilon \in \mathbb{R}[x]$ such that
\begin{equation} \label{eq:7a}
\sup_{x \in K} |f(x) - P_\epsilon(x)| < \epsilon.
\end{equation}
Hence, by \eqref{eq:7a} and \eqref{eq:4'},
\begin{align*}
\bigg|\int_\mathbb{R} f {\: \rm d} \nu_{n_k} - \int_\mathbb{R} f {\: \rm d} \eta_{n_k} \bigg|
&=
\bigg| \int_\mathbb{R} (f - P_\epsilon) {\: \rm d} \nu_{n_k} - \int_\mathbb{R} (f-P_\epsilon) {\: \rm d} \eta_{n_k} + \int_\mathbb{R} P_\epsilon {\: \rm d} \nu_{n_k} - \int_\mathbb{R} P_\epsilon {\: \rm d} \eta_{n_k} \bigg| \\
&\leq
( 1 + C) \epsilon +
\bigg| \int_\mathbb{R} P_\epsilon {\: \rm d} \nu_{n_k} - \int_\mathbb{R} P_\epsilon {\: \rm d} \eta_{n_k} \bigg|.
\end{align*}
Thus, by Theorem~\ref{thm:4} we get
\[
\limsup_{k \to \infty}
\bigg|\int_\mathbb{R} f {\: \rm d} \nu_{n_k} - \int_\mathbb{R} f {\: \rm d} \eta_{n_k} \bigg| \leq
(1+C) \epsilon .
\]
By letting $\epsilon \to 0$ we obtain \eqref{eq:5'}. The conclusion \eqref{eq:5} follows from this immediately.
\end{proof}
Let us comment that the condition \eqref{eq:4} does not seem to hold automatically.
Observe however that in view of \eqref{eq:27}, \eqref{eq:28} and \eqref{eq:1} we have
\[
\frac{1}{n_k} \int_\mathbb{R} K_{n_k}(x,x) {\: \rm d} \mu(x) = \frac{n_k}{n_k} = 1.
\]
Hence, if
\begin{equation} \label{eq:POS}
\tag{POS}
K_{n_k}(x,x) \geq 0 \quad \text{for a.e. } x \in \operatornamewithlimits{supp}(\mu),\ k \geq 0,
\end{equation}
then \eqref{eq:4} is satisfied.
A direct consequence of \cite[Section 3.2 and 4.1]{Kuijlaars2010} is the following result.
\begin{lemma} \label{lem:2}
Let $n \in \mathbb{N}$ be given and suppose that for any points $\{x_1, \ldots, x_n \} \subset \operatornamewithlimits{supp}(\mu)$ satisfying $x_1 < \ldots < x_n$ one has
\begin{equation} \label{eq:detPOS}
\det \big[ K_n(x_i, x_j) \big]_{i,j=1,\ldots,n} \geq 0.
\end{equation}
Then $K_n(x,x) \geq 0$ for any $x \in \operatornamewithlimits{supp}(\mu)$, and consequently, \eqref{eq:4} is satisfied.
\end{lemma}
According to Kuijlaars \cite{Kuijlaars2010}, the condition \eqref{eq:detPOS} implies the existence of a determinantal point process whose correlation kernel is equal to $K_n(\cdot,\cdot)$. Let us mention that \cite[Section 4]{Kuijlaars2010} is devoted to discussing conditions under which \eqref{eq:detPOS} is satisfied. In particular, this condition is always satisfied for $r=1$ where it leads to the so-called \emph{orthogonal polynomial ensembles}. A survey of some models in probability theory leading to orthogonal polynomial ensembles has been given in \cite{Konig2005}.
\section{The Nevai condition} \label{sec:7}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}. Let $K_n$ be the corresponding Christoffel--Darboux kernel.
Inspired by Nevai \cite[Section 6.2]{Nevai1979} (see also \cite{Breuer2010a}) we define for every bounded measurable function $f$ and $x \in \operatornamewithlimits{supp}(\mu)$
\[
G_n[f](x) = \frac{1}{K_n(x,x)} \int_\mathbb{R} K_n(x,y) K_n(y,x) f(y) {\: \rm d} \mu(y) ,
\]
provided that $K_n(x,x) \neq 0$.
\begin{proposition}
Let $n \in \mathbb{N}$ and $x \in \operatornamewithlimits{supp}(\mu)$. Suppose that $K_n(x,x) \neq 0$. Then the definition of $G_n[f]$ makes sense for any bounded measurable function $f$.
\end{proposition}
\begin{proof}
We have
\[
\big| G_n[f](x) \big| \leq
\frac{1}{|K_n(x,x)|} \sup_{y \in \mathbb{R}} |f(y)|
\int_\mathbb{R} |K_n(x,y) K_n(y,x)| {\: \rm d} \mu(y).
\]
Hence, we need to prove
\begin{equation} \label{eq:45}
\int_\mathbb{R} |K_n(x,y) K_n(y,x)| {\: \rm d} \mu(y) < \infty.
\end{equation}
To do so, by the Cauchy--Schwarz inequality we obtain
\[
\int_\mathbb{R} |K_n(x,y) K_n(y,x)| {\: \rm d} \mu(y) \leq
\bigg( \int_\mathbb{R} |K_n(x,y)|^2 {\: \rm d} \mu(y) \bigg)^{1/2}
\bigg( \int_\mathbb{R} |K_n(y,x)|^2 {\: \rm d} \mu(y) \bigg)^{1/2}.
\]
Next, the Cauchy--Schwarz inequality applied to \eqref{eq:28} gives
\[
|K_n(x,y)|^2 \leq
\bigg( \sum_{j=0}^{n-1} |p_j(x)|^2 \bigg) \cdot
\bigg( \sum_{j=0}^{n-1} |q_j(y)|^2 \bigg).
\]
Hence
\[
\int_\mathbb{R} |K_n(x,y)|^2 {\: \rm d} \mu(y) \leq
\bigg( \sum_{j=0}^{n-1} |p_j(x)|^2 \bigg) \cdot
\bigg( \sum_{j=0}^{n-1} \|q_j\|_{L^2(\mu)}^2 \bigg) < \infty.
\]
Similarly we obtain
\[
\int_\mathbb{R} |K_n(y,x)|^2 {\: \rm d} \mu(y) \leq
\bigg( \sum_{j=0}^{n-1} |q_j(x)|^2 \bigg) \cdot
\bigg( \sum_{j=0}^{n-1} \|p_j\|_{L^2(\mu)}^2 \bigg) < \infty,
\]
which implies \eqref{eq:45}.
\end{proof}
We say that $(K_n : n \geq 0)$ satisfies the \emph{Nevai condition} at $x \in \operatornamewithlimits{supp}(\mu)$
if
\begin{equation} \label{eq:38}
\lim_{n \to \infty}
G_n[f](x) =
f(x), \quad f \in \mathcal{C}_b(\mathbb{R}).
\end{equation}
Our motivation for examining the condition \eqref{eq:38} comes from the desire to understand the pointwise behaviour of $K_n(x,x)$. A very modest result in this direction is the following:
\begin{proposition} \label{prop:6}
Let $x \in \operatornamewithlimits{supp}(\mu)$ be an isolated point. If \eqref{eq:38} is satisfied for $x$, then
\begin{equation} \label{eq:43}
\lim_{n \to \infty} K_n(x,x) = \frac{1}{\mu(\{x\})}.
\end{equation}
\end{proposition}
\begin{proof}
Let $\varepsilon > 0$ be such that $[x-\varepsilon, x+\varepsilon] \cap \operatornamewithlimits{supp}(\mu) = \{x\}$.
Set $f(y) = \frac{1}{\varepsilon} \max(\varepsilon - |y-x|, 0)$. Then $f \in \mathcal{C}_b(\mathbb{R})$, $\operatornamewithlimits{supp}(f) = [x-\varepsilon,x+\varepsilon]$ and $f(x) = 1$. Hence
\[
\frac{1}{K_n(x,x)} \int_\mathbb{R} K_n(x,y) K_n(y,x) f(y) {\: \rm d} \mu(y) =
K_n(x,x) \mu(\{x\}).
\]
On the other hand, by \eqref{eq:38}, we get
\[
\lim_{n \to \infty} \frac{1}{K_n(x,x)} \int_\mathbb{R} K_n(x,y) K_n(y,x) f(y) {\: \rm d} \mu(y) = 1.
\]
By combining the last two formulas the result follows.
\end{proof}
In the classical case $r=1$, the formula \eqref{eq:43} holds for any $x \in \operatornamewithlimits{supp}(\mu)$ (with the convention that $1/0 = +\infty$) provided the measure $\mu$ is determined by its moments, see e.g. \cite[Corollary 2.6]{Shohat1943}. This motivates the following problem.
\begin{problem} \label{prob:1}
Suppose that the support of $\mu$ is compact. Is it true that \eqref{eq:43} holds for any $x \in \operatornamewithlimits{supp}(\mu)$?
\end{problem}
Let us observe that \eqref{eq:43} implies the positivity of $K_n(x,x)$ for large $n$. Hence, the positive answer to Problem~\ref{prob:1} leads to a weaker version of \eqref{eq:POS}.
Our aim is to prove that under some hypotheses \eqref{eq:38} holds for any $f \in \mathbb{R}[x]$. It turns out that in this situation $G_n[f]$ can be rewritten in terms of $J$.
\begin{lemma} \label{lem:1}
For any $k \geq 0$ we have
\[
\int_\mathbb{R} K_n(x,y) y^k K_n(y,x) {\: \rm d} \mu(y) =
\big\langle
\Pi_n \vec{q}(x),
J^k \Pi_n \vec{p}(x)
\big\rangle_{\ell^2},
\]
where the sequence $\Pi_n u$ is defined by
\[
[\Pi_n u]_k =
\begin{cases}
u_k, & k < n, \\
0, & \text{otherwise},
\end{cases}
\]
and $\vec{p}(x)$ and $\vec{q}(x)$ are defined as follows
\[
\vec{p}(x) = (p_0(x),p_1(x), \ldots)^t, \quad
\vec{q}(x) = (q_0(x), q_1(x), \ldots)^t.
\]
\end{lemma}
\begin{proof}
We have
\begin{align*}
\int_\mathbb{R} K_n(x,y) y^k K_n(y,x) {\: \rm d} \mu(y)
&=
\big\langle K_n(x,y), y^k K_n(y,x) \big\rangle_{L^2_y(\mu)} \\
&=
\bigg\langle
\sum_{\ell=0}^{n-1} p_\ell(x) q_{\ell}(y),
y^k \sum_{\ell'=0}^{n-1} p_{\ell'}(y) q_{\ell'}(x)
\bigg\rangle_{L^2_{y}(\mu)} \\
&=
\sum_{\ell=0}^{n-1} \sum_{\ell'=0}^{n-1}
p_{\ell}(x) q_{\ell'}(x)
\big\langle
q_{\ell},
y^k p_{\ell'}
\big\rangle_{L^2(\mu)}.
\end{align*}
Let us recall that $\langle y p_{\ell'}, q_{\ell} \rangle_{L^2(\mu)} = J_{\ell', \ell}$. Hence we arrive at
\begin{align*}
\int_\mathbb{R} K_n(x,y) y^k K_n(y,x) {\: \rm d} \mu(y)
&=
\sum_{\ell=0}^{n-1} \sum_{\ell'=0}^{n-1}
p_{\ell}(x) q_{\ell'}(x) [J^k]_{\ell', \ell} \\
&=
\big\langle
\Pi_n \vec{q}(x),
J^k \Pi_n \vec{p}(x)
\big\rangle_{\ell^2}. \qedhere
\end{align*}
\end{proof}
We are ready to prove our main result of this section.
\begin{theorem} \label{thm:5}
Assume that:
\begin{enumerate}[(a)]
\item the matrix $J$ satisfies \eqref{eq:NDB},
\item $K_n(x,x) > 0$ for large $n$,
\item for any $N \geq 1$ one has
$\begin{aligned}[b]
\lim_{n \to \infty} \frac{q_{\ell}(x) p_{\ell'}(x)}{K_n(x,x)} = 0
\end{aligned}$, where $\ell \in [n-N, n)$ and $\ell' \in [n, n+N]$.
\end{enumerate}
Then the convergence \eqref{eq:38} holds for any polynomial $f \in \mathbb{R}[x]$.
\end{theorem}
\begin{proof}
By linearity it is enough to prove the convergence for $f(x) = x^k$ for any $k \geq 0$.
To do so, let us recall that $J \vec{p}(x) = x \vec{p}(x)$. Thus
\[
x^k K_n(x,x)
=
\big\langle
\Pi_n \vec{q}(x),
\Pi_n x^k \vec{p}(x)
\big\rangle_{\ell^2}
=
\big\langle
\Pi_n \vec{q}(x),
\Pi_n J^k \vec{p}(x)
\big\rangle_{\ell^2}.
\]
Hence, in view of Lemma~\ref{lem:1}, we need to estimate
\[
\int_\mathbb{R} K_n(x,y) y^k K_n(y,x) {\: \rm d} \mu(y) - x^k K_n(x,x) =
\big\langle
\Pi_n \vec{q}(x),
(J^k \Pi_n - \Pi_n J^k) \vec{p}(x)
\big\rangle_{\ell^2}.
\]
Let us observe that since $J \in \mathcal{H}$ we have $[J^k]_{\ell, \ell'} = 0$ for $\ell' > \ell+k$.
Thus,
\begin{align*}
\big\langle
\Pi_n \vec{q}(x),
J^k \Pi_n \vec{p}(x)
\big\rangle_{\ell^2}
&=
\sum_{\ell=0}^{n-1} q_{\ell}(x) \sum_{\ell'=0}^{n-1} [J^k]_{\ell, \ell'} p_{\ell'}(x) \\
&=
\sum_{\ell=0}^{n-1} q_{\ell}(x) \sum_{\ell'=0}^{\min(n-1,\ell+k)} [J^k]_{\ell, \ell'} p_{\ell'}(x),
\end{align*}
and
\[
\big\langle
\Pi_n \vec{q}(x),
\Pi_n J^k \vec{p}(x)
\big\rangle_{\ell^2}
=
\sum_{\ell=0}^{n-1} q_\ell(x) \sum_{\ell'=0}^{\ell+k} [J^k]_{\ell, \ell'} p_{\ell'}(x).
\]
Hence
\[
\big\langle
\Pi_n \vec{q}(x),
(J^k \Pi_n - \Pi_n J^k) \vec{p}(x)
\big\rangle_{\ell^2}
=
-\sum_{\ell=n-k}^{n-1} q_{\ell}(x) \sum_{\ell'=n}^{\ell+k} [J^k]_{\ell, \ell'} p_{\ell'}(x).
\]
Now, in view of Proposition~\ref{prop:3}, the result easily follows.
\end{proof}
If in the setup of Theorem~\ref{thm:5} we additionally assume that $\operatornamewithlimits{supp}(\mu)$ is compact and
\begin{equation} \label{eq:39}
\sup_{n \geq 0}
\frac{1}{K_n(x,x)}
\int_{\mathbb{R}} |K_n(x,y) K_n(y,x)| {\: \rm d} \mu(y) < \infty,
\end{equation}
then by a similar reasoning as in the proof of Corollary~\ref{cor:1} we obtain that \eqref{eq:38}
holds for any $f \in \mathcal{C}_b(\mathbb{R})$. Note that by the reproducing property we have
\begin{equation} \label{eq:41}
\int_{\mathbb{R}} K_n(x,y) K_n(y,x) {\: \rm d} \mu(y) = K_n(x,x).
\end{equation}
Consequently,
\[
\frac{1}{K_n(x,x)} \int_{\mathbb{R}} K_n(x,y) K_n(y,x) {\: \rm d} \mu(y) \equiv 1.
\]
Hence \eqref{eq:39} surely holds if
\begin{equation} \label{eq:40}
K_n(x,y) K_n(y,x) \geq 0 \quad \text{for a.e. } y \in \operatornamewithlimits{supp}(\mu), \ n \geq 0.
\end{equation}
However, numerical experiments show that \eqref{eq:40} is \emph{not} satisfied for
the Jacobi-Pi\~{n}eiro polynomials (at least for some choices of parameters). Nevertheless, it seems that \eqref{eq:39} can still be satisfied.
So we state the following problem.
\begin{problem} \label{prob:2}
Formulate a criterion which implies that \eqref{eq:39} holds true.
\end{problem}
Let us observe that \eqref{eq:40} together with \eqref{eq:41} implies the positivity of $K_n(x,x)$.
\section{Examples} \label{sec:8}
In this section we shall assume that the sequence of multi-indices from \eqref{eq:24} satisfies
\begin{equation} \label{eq:29a}
\lim_{\ell \to \infty}
\frac{\vec{n}_\ell}{|\vec{n}_\ell|} =
(s_1, s_2, \ldots, s_r),
\end{equation}
where
\begin{equation} \label{eq:29b}
\sum_{i=1}^r s_i = 1 \quad \text{and} \quad s_i > 0.
\end{equation}
This means that the multi-indices $\vec{n}_\ell$ tend to infinity in $\mathbb{N}^r$ in the direction $(s_1,s_2,\ldots,s_r)$.
Next, we shall show that our results can be applied for some well-known systems of multiple
orthogonal polynomials on the real line.
\subsection{Angelesco systems}
An Angelesco system $\vec{\mu}=(\mu_1, \ldots, \mu_r)$ consists of $r$ measures such that
the convex hull of the support of each measure $\mu_i$ is a compact interval $\Delta_i \subset \mathbb{R}$
and these intervals are pairwise disjoint. It is a basic fact that
Angelesco systems are perfect provided each $\operatornamewithlimits{supp}(\mu_i)$ contains infinitely many points
(see, e.g., \cite[Section 23.1.1]{Ismail2009}).
In what follows, we need some concepts from potential theory.
Namely, for positive measures $\eta, \nu$ with \emph{compact} supports, we define their mutual energy by
\begin{equation} \label{eq:30}
I(\eta,\nu) = \int_\mathbb{C} \int_\mathbb{C} \log \frac{1}{|z-w|} {\: \rm d} \eta(z) {\: \rm d} \nu(w).
\end{equation}
It was proven in \cite{Gonchar1981} (see also \cite[Chapter 5.6]{Nikishin1991}) that under the assumption that\footnote{For any measure $\mu$ we write ${\: \rm d} \mu = \mu'(x) {\: \rm d} x + {\: \rm d} \mu_{\mathrm{s}}$, where $\mu_{\mathrm{s}}$ is singular with respect to the Lebesgue measure.}
\begin{equation} \label{eq:31}
\mu'_i(x) > 0, \quad a.e.\ x \in \Delta_i,\ i=1,\ldots,r ,
\end{equation}
there exists a unique minimizer $\vec{\omega} = (\omega_1, \ldots, \omega_r)$ of
\begin{equation} \label{eq:32}
E(\eta_1, \ldots, \eta_r) =
\sum_{j=1}^r I(\eta_j, \eta_j) +
\sum_{j=1}^{r-1} \sum_{k=j+1}^r I(\eta_j, \eta_k)
\end{equation}
under the constraint
\begin{equation} \label{eq:33}
\operatornamewithlimits{supp}(\eta_i) \subset \Delta_i \quad \text{and} \quad
\eta_i(\Delta_i) = s_i, \qquad i=1,\ldots,r.
\end{equation}
Moreover, $\nu_n \xrightarrow{w} \nu_\infty$, where $\nu_\infty = \omega_1 + \ldots + \omega_r$.
\begin{theorem}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}, \eqref{eq:29a} and \eqref{eq:29b}.
Let $\vec{\mu}$ be an Angelesco system of measures on the real line satisfying \eqref{eq:31}.
Set $\mu = \mu_1+\ldots+\mu_r$. Then
\[
\frac{1}{n} K_n(x,x) {\: \rm d} \mu(x) \xrightarrow{w} \nu_\infty,
\]
where $\nu_\infty = \omega_1 + \ldots + \omega_r$ and $\vec{\omega}$ is the unique minimizer of the problem
\eqref{eq:32} under the constraint \eqref{eq:33}.
\end{theorem}
\begin{proof}
We only need to verify the hypotheses of Corollary~\ref{cor:1}.
First of all, in view of \cite[Remark A.11]{Aptekarev2020} the hypotheses of Proposition~\ref{prop:4} are satisfied.
Hence, the corresponding matrix $J$ satisfies \eqref{eq:NDB}. Next, $\operatornamewithlimits{supp}(\mu) \subset \bigcup_{i=1}^r \Delta_i$ is compact
and by \cite[Theorem 23.1.3]{Ismail2009} $\operatornamewithlimits{supp}(\nu_n) \subset \bigcup_{i=1}^r \Delta_i$. Next, by \cite[Section 4.2]{Kuijlaars2010} one has that \eqref{eq:detPOS} is satisfied. Hence, by Lemma~\ref{lem:2}, the condition \eqref{eq:4} is satisfied. Therefore the conclusion follows from Corollary~\ref{cor:1}.
\end{proof}
\subsection{AT systems}
Let $\vec{\mu} = (\mu_1,\ldots,\mu_r)$ be a vector of $r$ measures which are absolutely continuous with respect to a fixed measure $\mu$
on some compact interval $[a,b]$. Let ${\: \rm d} \mu_i(x) = w_i(x) {\: \rm d} \mu(x)$. Then $\vec{\mu}$ is an \emph{algebraic Chebyshev system} (or an AT system for short) if for any multi-index $\vec{n} \in \mathbb{N}_0^r$ the set
\[
\Phi_{\vec{n}} = \bigcup_{i=1}^r \bigcup_{k=0}^{n_i - 1} \{ x^k w_i \}
\]
is a Chebyshev system on $[a,b]$, which means that any non-trivial linear combination of
functions from $\Phi_{\vec{n}}$ has at most $|\vec{n}|-1$ zeros on $[a,b]$. An important result (see \cite[Theorem 23.1.4]{Ismail2009}) states that every AT system is perfect.
\begin{proposition} \label{prop:5}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}, \eqref{eq:29a} and \eqref{eq:29b}.
Let $\vec{\mu}$ be an AT system such that ${\: \rm d} \mu_i(x) = w_i(x) {\: \rm d} \mu(x)$ and the functions $w_1, \ldots, w_r$ are continuous. Then the weak limits of the sequences $(\nu_n : n \in \mathbb{N}_0)$ and $(\eta_n : n \in \mathbb{N}_0)$ are the same, i.e. \eqref{eq:5} is satisfied.
\end{proposition}
\begin{proof}
We only need to verify the hypotheses of Corollary~\ref{cor:1}.
First of all, in view of \cite[Theorem 23.1.4]{Ismail2009} and \cite[Theorem 2.1]{Haneczok2012} the hypotheses of Theorem~\ref{thm:3} are satisfied. Hence, the corresponding matrix $J$ satisfies \eqref{eq:NDB}.
Next, by \cite[Section 4.3]{Kuijlaars2010} one has that \eqref{eq:detPOS} is satisfied. Hence, by Lemma~\ref{lem:2}, the condition \eqref{eq:4} is satisfied. Therefore the conclusion follows from Corollary~\ref{cor:1}.
\end{proof}
Proposition~\ref{prop:5} can be applied to Jacobi-Pi\~{n}eiro polynomials (see \cite[Chapter 23.3.2]{Ismail2009} for more details). The description of the limiting distribution of $(\nu_n : n \in \mathbb{N})$ for a very special choice of the path $(\vec{n}_\ell : \ell \geq 0)$ has been considered in \cite{Neuschel2016, Coussement2008}.
\subsubsection{Nikishin systems}
Let $\vec{\sigma} = (\sigma_1, \ldots, \sigma_r)$ be a vector of finite measures such that the convex hull of the support of each measure $\sigma_i$ is a compact interval $\Delta_i \subset \mathbb{R}$ and
\[
\Delta_j \cap \Delta_{j+1} = \emptyset, \quad j=1,\ldots,r-1.
\]
For two measures $\eta, \nu$ let us define a measure $\langle \eta, \nu \rangle$ by
\begin{equation} \label{eq:36}
{\: \rm d} \langle \eta, \nu \rangle (x) = \mathcal{C}(\nu)(x) {\: \rm d} \eta(x),
\end{equation}
where for any measure $\mu$ on the real line its Cauchy transform is defined by
\begin{equation} \label{eq:35}
\mathcal{C}(\mu)(z) = \int_\mathbb{R} \frac{{\: \rm d} \mu(y)}{z-y}, \quad z \notin \operatornamewithlimits{supp}(\mu).
\end{equation}
Then we say that $\vec{\mu} = (\mu_1,\ldots,\mu_r)$ is a Nikishin system generated by $\vec{\sigma}$ if
\begin{equation} \label{eq:34}
\mu_1 = \sigma_1, \quad
\mu_2 = \langle \sigma_1, \sigma_2 \rangle, \quad \ldots \quad
\mu_r = \big\langle \sigma_1, \langle \sigma_2, \ldots, \sigma_r \rangle \big\rangle.
\end{equation}
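As a simple illustration (our own example, not taken from the references above), let $r=2$ and let $\sigma_2$ be the normalized Lebesgue measure on $\Delta_2=[a,b]$. Then, for $x \in \Delta_1$,
\[
\mathcal{C}(\sigma_2)(x) = \frac{1}{b-a} \int_a^b \frac{{\: \rm d} y}{x-y} = \frac{1}{b-a} \log \frac{x-a}{x-b},
\]
so that ${\: \rm d} \mu_2(x) = \frac{1}{b-a} \log \frac{x-a}{x-b} \, {\: \rm d} \sigma_1(x)$. Since $\Delta_1 \cap \Delta_2 = \emptyset$, this density is continuous and of constant sign on $\Delta_1$.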
It has been proven in \cite{Fidalgo2011} that any Nikishin system is an AT system.
It was proven in \cite[Chapter 5.7]{Nikishin1991} that under the assumption that
\begin{equation} \label{eq:31'}
\sigma'_i(x) > 0, \quad a.e.\ x \in \Delta_i,\ i=1,\ldots,r ,
\end{equation}
there exists a unique minimizer $\vec{\omega} = (\omega_1, \ldots, \omega_r)$ of
\begin{equation} \label{eq:32'}
E(\eta_1, \ldots, \eta_r) =
\sum_{j=1}^r I(\eta_j, \eta_j) -
\sum_{j=1}^{r-1} I(\eta_j, \eta_{j+1})
\end{equation}
under the constraint
\begin{equation} \label{eq:33'}
\operatornamewithlimits{supp}(\eta_i) \subset \Delta_i \quad \text{and} \quad
\eta_i(\Delta_i) = \sum_{j=i}^r s_j, \qquad i=1,\ldots,r.
\end{equation}
Moreover, $\nu_n \xrightarrow{w} \nu_\infty$, where $\nu_\infty = \omega_1$.
\begin{theorem}
Let $(\vec{n}_\ell : \ell \geq 0)$ be a path satisfying \eqref{eq:24}, \eqref{eq:29a} and \eqref{eq:29b}.
Let $\vec{\mu}$ be a Nikishin system generated by $\vec{\sigma}$ satisfying \eqref{eq:31'}.
Set $\mu = \mu_1$. Then
\[
\frac{1}{n} K_n(x,x) {\: \rm d} \mu(x) \xrightarrow{w} \nu_\infty,
\]
where $\nu_\infty = \omega_1$ and $\vec{\omega}$ is the unique minimizer of the problem
\eqref{eq:32'} under the constraint \eqref{eq:33'}.
\end{theorem}
\begin{proof}
Observe that by \eqref{eq:34}, \eqref{eq:36} and \eqref{eq:35} the densities of $\mu_i$ with respect to $\mu_1$
are continuous functions. Moreover, by the discussion above $\nu_n \xrightarrow{w} \nu_\infty$.
Hence, by Proposition~\ref{prop:5} the result follows.
\end{proof}
Let us mention that in \cite{Gonchar1997} (see also \cite{Aptekarev2010}) a generalisation of both Angelesco and Nikishin systems has been proposed. For such systems it is known that $\nu_n \xrightarrow{w} \nu_\infty$ for some measure $\nu_\infty$ which also comes from a minimisation problem. It is quite possible that our Corollary~\ref{cor:1} can also be applied here.
\subsection*{Acknowledgment}
The authors would like to thank Guilherme Silva for informing us about \cite{Hardy2015} and anonymous referees for useful suggestions.
The first author was supported by the Methusalem grant \textit{Classification, symmetries and singularities at the frontiers of algebra, analysis and geometry} of the Flemish Government. The second author was supported by FWO grant G0C9819N of the Research Foundation -- Flanders.
\begin{bibliography}{CD-MOP.bib}
\end{bibliography}
\end{document} |
\begin{document}
\title{Sinkhorn Algorithm for Lifted Assignment Problems}
\begin{abstract}
Recently, Sinkhorn's algorithm was applied for approximately solving linear programs emerging from optimal transport very efficiently \cite{cuturi2013sinkhorn}. This was accomplished by formulating a regularized version of the linear program as a Bregman projection problem onto the polytope of doubly-stochastic matrices, and then computing the projection using the efficient Sinkhorn algorithm, which is based on alternating closed-form Bregman projections onto the larger polytopes of row-stochastic and column-stochastic matrices.
In this paper we suggest a generalization of this algorithm for solving a well-known lifted linear programming relaxation of the Quadratic Assignment Problem (QAP), known as the Johnson-Adams (JA) relaxation. First, an efficient algorithm for Bregman projection onto the JA polytope, based on alternating closed-form Bregman projections onto one-sided local polytopes, is devised. The one-sided polytopes can be seen as a high-dimensional, generalized version of the row/column-stochastic polytopes.
Second, a new method for solving the original linear programs using the Bregman projections onto the JA polytope is developed and shown to be more accurate and numerically stable than the standard approach of driving the regularizer to zero.
The resulting algorithm is considerably more scalable than standard linear solvers and is able to solve significantly larger linear programs.
\end{abstract}
\section{Introduction}
The popular Sinkhorn algorithm \cite{kosowsky1994invisible,cuturi2013sinkhorn} solves optimal transport problems extremely efficiently, at the price of a minor modification of the energy to be minimized, which takes the form of an entropic regularization term. The regularized optimal transport problem can be phrased as the problem of computing the Bregman projection of a matrix onto the optimal transport polytope. The Sinkhorn algorithm represents the optimal transport polytope as an intersection of two polytopes for which the Bregman projection has a simple closed-form solution, and then iteratively computes these projections in an alternating fashion. This results in a provably convergent algorithm for regularized optimal transport problems that is significantly more scalable than generic linear programming (LP) solvers.
In this paper we propose a Sinkhorn-type algorithm for the famous Johnson-Adams linear relaxation of the Quadratic Assignment Problem (QAP). The QAP as introduced in Lawler \cite{lawler1963quadratic} is the problem of finding a bijection between the $n$ vertices of two graphs minimizing a quadratic energy. Two well-known subproblems of the QAP are the traveling salesman problem and the Koopmans-Beckmann quadratic assignment problem. Approximately solving either one of these subproblems is known to be NP-hard in general \cite{QAPnpHardToApprox}. The popular Johnson-Adams (JA) relaxation \cite{adams1994improved} for the QAP is an LP relaxation defined in a \emph{lifted} high dimensional variable space with $O(n^4)$ variables and constraints. As a result, these LPs are often too large to solve with generic ({\it e.g.}, interior point) LP solvers. We represent the \emph{Johnson-Adams polytope} (JAP) as an intersection of four polytopes which we call one-sided local polytopes. We show that the Bregman projection onto a one-sided local polytope has an easily computable \emph{closed-form solution}. The time complexity of computing this closed-form solution is \emph{linear} in the size of the data. Based on this observation and the fact that the JAP is the intersection of four one-sided local polytopes, we propose an efficient, provably convergent Sinkhorn-type algorithm for computing Bregman projections onto the JAP, by iteratively solving one-sided problems.
Once we have an efficient algorithm for Bregman projection onto the JAP, we can use this algorithm to optimize linear energies over this polytope. At this point we abandon the standard regularization approach used by the Sinkhorn algorithm, and suggest an alternative process for iteratively using Bregman projections for solving the original LP. The resulting algorithm for solving the original LP is more accurate and numerically robust than the standard entropy regularization approach.
We provide numerical experiments validating our algorithm on the standard QAP benchmark \cite{burkard1997qaplib} achieving slightly inferior results to the best known lower-bounds for these problems. We note that these best lower-bounds were achieved with a plethora of different techniques including combinatorial algorithms with exponential worst-case time complexity. We further apply our algorithm to three "real-life" anatomical datasets of bones \cite{boyer2011algorithms} demonstrating state of the art classification results, improving upon previous works and providing better classification than human experts in all but one (almost comparable) instance.
\section{Related work}
\paragraph*{Quadratic assignment problems}
Convex relaxations are a common strategy for dealing with the hardness of the QAP. Small-medium instances of the QAP ($n<30$) can be solved using branch and bound algorithms which use convex relaxations to obtain lower bounds \cite{QAPsurvey}. For larger problems the non-integer solution obtained from the relaxation is rounded to obtain a feasible (generally suboptimal) solution for the QAP. Examples include spectral relaxations \cite{rendl1992applications,leordeanu2005spectral} and quadratic programming relaxations over the set of doubly stochastic matrices \cite{anstreicher2001new,zaslavskiy2009path,fogel2013convex}. Lifting methods, in which auxiliary variables that represent the quadratic terms are introduced, provide linear programming (LP) relaxations \cite{adams1994improved} or semi-definite programming relaxations \cite{zhao1998semidefinite,Itay} which are often more accurate than the former methods. For example for certain classes of the QAP the worst case error of the LP relaxations can be bounded by a multiplicative constant of $\approx 3.16$ \cite{nagarajan2009maximum}. The disadvantage of lifting methods is that they solve convex problems with $n^4$ variables in contrast with the cheaper spectral and quadratic programming methods that solve problems with $n^2$ variables. As a result, lifting methods cannot be solved using generic convex programming solvers for $n>20$. It is also possible to construct relaxations with $n^{2k}$, $k>2$ variables to achieve even tighter relaxations \cite{laurent2003comparison,adams2007level,hahn2012level} at an increased computational price.
The authors of \cite{adams1994improved,karisch1999dual} suggest dealing with the computational complexity of the large JA linear program by using a greedy coordinate ascent algorithm to solve the dual LP. This algorithm is not guaranteed to converge to the global minimum of the JA relaxation. The authors of \cite{rendl2007bounds} propose a specialized solver for a lifted SDP relaxation of the QAP, and the authors of \cite{burer2006solving} propose a convergent algorithm for the JA and SDP relaxations. However, both algorithms can only handle quadratic assignment instances with up to 30 points. More on the QAP can be found in surveys such as \cite{QAPsurvey}.
\paragraph*{Entropic regularization} The success of entropic regularization for optimal transport linear programs has motivated research aimed at extending this method to other optimization problems. In \cite{rangarajan1996novel,rangarajan1997convergence,Justin} it is shown that regularized quadratic energies over positive matrices with fixed marginal constraints can be solved efficiently by solving a sequence of regularized optimal transport problems. Cuturi et al. \cite{cuturi2014fast} compute Wasserstein barycenters using entropic regularization. Benamou et al. \cite{benamou2015iterative} also consider Wasserstein barycenters as well as several other problems for which entropic regularization can be applied. One of these problems is the multi-marginal optimal transport problem, which is related to the JA linear program, although the latter is more complex as the marginals in the JA linear program are themselves variables constrained by certain marginal constraints.
\section{Approach}
\subsection{Problem statement}
The \emph{quadratic assignment problem} (QAP) is the problem of minimizing a quadratic energy over the set $\Pi=\Pi(n)$ of permutation matrices of dimension $n$:
\begin{equation}\label{e:qap}
\min_{x\in \mathcal{P}i} \quad \sum_{ij} \theta_{ij} x_{ij} + \sum_{ijkl}\tau_{ijkl}x_{ij}x_{kl}.
\end{equation}
One common and powerful approximation to the solution of \eqref{e:qap} is achieved via an LP relaxation in a lifted space. That is, \eqref{e:qap} is relaxed by replacing quadratic terms $x_{ij}x_{kl}$ with new auxiliary variables $y_{ijkl}$ to obtain
\begin{equation}\label{e:perm_lp}
\min_{(x,y)\in C} \quad \sum_{ij} \theta_{ij} x_{ij} + \sum_{ijkl}\tau_{ijkl}\,y_{ijkl},
\end{equation}
where $C$ is the \emph{Johnson-Adams polytope} (JAP), which is a convex relaxation of $\Pi$ in the lifted $(x,y)$ space:
\begin{subequations}\label{e:JA}
\begin{align}\label{e:JA_1}
\sum_j x_{ij} &= 1, \qquad \ \ \ \forall i \\
\sum_i x_{ij} &= 1, \qquad \ \ \ \forall j \\ \label{e:JA_3}
\sum_l y_{ijkl} &= x_{ij}, \qquad \forall i,j,k \\
\sum_k y_{ijkl} &= x_{ij}, \qquad \forall i,j,l \\ \label{e:JA_5}
\sum_j y_{ijkl} &= x_{kl}, \qquad \forall i,k,l \\
\sum_i y_{ijkl} &= x_{kl}, \qquad \forall j,k,l \\ \label{e:JA_geq_0}
x,y &\geq 0.
\end{align}
\end{subequations}
Here $x\in\mathbb R^{n\times n}$, $y\in\mathbb R^{n^2\times n^2}$. It is indeed a relaxation of $\Pi$ since every permutation $x$ satisfies $(x,y)\in \JAP$ for $y_{ijkl}=x_{ij}x_{kl}$.
For notational convenience we let $d=n^2+n^4$ and denote $(x,y)\in\mathbb R^d$.\\
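The inclusion of permutations in the JAP is easy to verify numerically. The following short sketch (our own illustration, with NumPy as an arbitrary choice of tool) checks the constraints \eqref{e:JA} for the lift $y_{ijkl}=x_{ij}x_{kl}$ of a random permutation matrix.
\begin{verbatim}
import numpy as np

n = 5
x = np.eye(n)[np.random.permutation(n)]   # a random n x n permutation matrix
y = np.einsum('ij,kl->ijkl', x, x)        # lifted variables y_{ijkl} = x_{ij} x_{kl}

# row/column sums of x and the four marginal constraints on y
assert np.allclose(x.sum(axis=1), 1) and np.allclose(x.sum(axis=0), 1)
assert np.allclose(y.sum(axis=3), x[:, :, None])   # sum_l y_{ijkl} = x_{ij}
assert np.allclose(y.sum(axis=2), x[:, :, None])   # sum_k y_{ijkl} = x_{ij}
assert np.allclose(y.sum(axis=1), x[None, :, :])   # sum_j y_{ijkl} = x_{kl}
assert np.allclose(y.sum(axis=0), x[None, :, :])   # sum_i y_{ijkl} = x_{kl}
\end{verbatim}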
\subsection{Sinkhorn's algorithm}
Our goal is to construct efficient algorithms for solving the JA relaxation. Our method is motivated by the success of the highly scalable Sinkhorn algorithm \cite{kosowsky1994invisible,cuturi2013sinkhorn} in (approximately) solving optimal transport problems. We begin by reviewing the key ingredients of the Sinkhorn algorithm and then explain how we generalize it to higher-order LP relaxations, and the modifications we suggest for improving convergence.
To solve optimal transport\ (OT) problems efficiently, it is suggested in
\cite{bregman1967relaxation,cuturi2014fast,benamou2015iterative} to add an entropic regularizer to the OT problem:
\begin{equation}\label{e:OT_entropy}
\min_{x\in \DS} \quad \ip{\theta,x} + \beta^{-1} \sum_{ij} x_{ij} \parr{\log x_{ij} - 1},
\end{equation}
where $\beta$ is some large positive number, and $\DS=\DS(\mu,\nu)\subset \mathbb R^{n\times n}_{\scriptscriptstyle{ \geq 0}}$ is the set of non-negative $n\times n$ matrices with specified positive marginals $\mu,\nu \in \mathbb R_{\scriptscriptstyle{ > 0}}^n $:
\begin{subequations}\label{e:DS}
\begin{align} \label{e:DS_RS}
& \quad \sum_j x_{ij}=\mu_i, \quad \forall i\\ \label{e:DS_CS}
& \quad \sum_i x_{ij}=\nu_j, \quad \forall j\\ \label{e:DS_geq}
& \quad x_{ij}\geq 0, \quad \forall i,j
\end{align}
\end{subequations}
Adding the entropy to the energy has several benefits: First, it allows writing the energy as a Kullback-Leibler divergence w.r.t.~some $z\in\mathbb R^{n\times n}_{\scriptscriptstyle{ > 0}}$,
\begin{equation}\label{e:kl}
\min_{x\in \DS} \quad KL(x \vert z),
\end{equation}
where $KL(x\vert z) = \sum_{ij} x_{ij} \parr{\log\frac{x_{ij}}{z_{ij}} -1}$ is the KL divergence. This turns \eqref{e:OT_entropy} into an equivalent KL-projection problem. Secondly, it makes the energy strictly convex. Thirdly, since the entropy's derivative explodes at the boundary of $\DS$, it serves as a barrier function which ensures that the inequality constraints \eqref{e:DS_geq} are never active, resulting in a significant simplification of the KKT equations for \eqref{e:DS}. Finally, due to this simplification, the KL-projections onto the row-stochastic matrices $\RS(\mu)$ defined by \eqref{e:DS_RS} and the column-stochastic matrices $\CS(\nu)$ defined by \eqref{e:DS_CS} have closed-form solutions:
\begin{theorem}\label{thm:kl_projection_rs}
Given $z\in \mathbb R_{\scriptscriptstyle{ > 0}}^{n\times n}$, the minimizer of
\begin{equation}\label{e:kl_rs}
\min_{x\in \RS(\mu)} \quad KL(x \vert z),
\end{equation}
is realized by the equation
\begin{equation}\label{e:KL_projection_RS}
x_{ij}^*=\frac{z_{ij}}{\sum_s z_{is}} \mu_i,
\end{equation}
that is, the row-normalized version of $z$. Similarly, the projection onto $\CS(\nu)$ is the column-normalized version of $z$.
\end{theorem}
The theorem is proved by directly solving the KKT equations of \eqref{e:kl_rs} (see {\it e.g.}~\cite{benamou2015iterative}). These observations are used to construct an efficient algorithm to approximate the solution of the regularized OT problem \eqref{e:OT_entropy} by repeatedly solving KL-projections on $\RS(\mu)$ and $\CS(\nu)$. As proved in \cite{bregman1967relaxation} this converges to the minimizer of \eqref{e:OT_entropy}.
Following \cite{benamou2015iterative}, we note that the Sinkhorn algorithm is an instance of the Bregman iterative projection method that allows solving KL-projection problems over intersection of affine sets $C_1,C_2,\ldots,C_N$,
\begin{subequations}\label{e:KL}
\begin{align} \label{e:KL_e}
\min_{x \geq 0} &\quad KL(x \vert z)\\
\mathrm{s.t.} & \quad x\in C_1\cap C_2\cap\dots\cap C_N
\end{align}
\end{subequations}
via alternate KL-projections on the sets $C_k$, that is
\begin{subequations}\label{e:bergman}
\begin{align}
x_0 & = z\\
x_n & = \textrm{argmin}_{x\in C_{\mathrm{mod}(n-1,N)+1}}KL(x\vert x_{n-1}), \quad n\geq 1
\end{align}
\end{subequations}
In \cite{bregman1967relaxation} it is shown that this procedure is guaranteed to converge under the conditions that: (i) the feasible set of \eqref{e:KL}, $C=\cap_i C_i$, contains a vector whose entries are all strictly positive, {\it i.e.}, it is \emph{strictly feasible}; and (ii) all entries of the minimizer of \eqref{e:KL_e} over each $C_i$ are strictly positive. In fact, in the case of the KL-divergence (in contrast to the general Bregman divergence dealt with in \cite{bregman1967relaxation}), condition (ii) can be deduced from (i) using the fact that the derivatives of the KL-divergence blow up at the boundary of the set defined by $x\geq 0$.
Lastly, condition (i) is satisfied in all the problems we discuss in this paper. For example, $\DS$ contains a feasible interior point $x=\frac{1}{n}\mu \nu^T >0$.
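For concreteness, in the OT case the Bregman iterations \eqref{e:bergman} reduce to the familiar alternating row and column rescalings of Theorem~\ref{thm:kl_projection_rs}. The following is a minimal sketch of this special case (our own illustrative code, not a reference implementation; the value of $\beta$ and the iteration count are arbitrary).
\begin{verbatim}
import numpy as np

def sinkhorn(theta, mu, nu, beta=50.0, iters=500):
    x = np.exp(-beta * theta)                 # x_0 = z, cf. the KL form of the regularized OT problem
    for _ in range(iters):
        x *= (mu / x.sum(axis=1))[:, None]    # KL-projection onto RS(mu): row rescaling
        x *= (nu / x.sum(axis=0))[None, :]    # KL-projection onto CS(nu): column rescaling
    return x

theta = np.random.rand(5, 5)
mu = nu = np.ones(5) / 5
x = sinkhorn(theta, mu, nu)
print(x.sum(axis=1), x.sum(axis=0))           # both close to the prescribed marginals
\end{verbatim}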
\subsection{Approach}
Our approach for solving lifted assignment problems is based on two main components:
The first component is an efficient computation of KL-projections onto the lifted JA polytope using alternating projections. While Bregman iterations can always be used to solve problems of the form \eqref{e:KL}, the performance of the method greatly depends on the chosen splitting of the feasible convex set $C$ into convex subsets $C_i$, $i=1,\ldots, N$. Generally speaking, a good splitting will split $C$ into a small number of sets, where the KL-projection on every set is easy to compute. The success of the optimal transport solver can be attributed to the fact that the feasible set $C=\DS(\mu,\nu)$ is split into only two sets $C_1=\RS(\mu)$, $C_2=\CS(\nu)$, and the projection onto each one of these sets has a closed-form solution. We will use Bregman iterations to approximate the solution of the JA relaxation of the QAP. We split the feasible set of this relaxation into four sets, so that the projection on each one of these sets has a closed-form solution. For comparison, note that the standard alternating-type method for the Johnson-Adams relaxation needs to solve multiple linear programs in $n^2$ variables in each iteration \cite{adams1994improved,karisch1999dual,rostami2014revised} instead of computing a closed-form solution in each iteration, and is not guaranteed to converge. Our algorithm for computing KL-projections onto lifted polytopes is described in Section~\ref{sec:KLontoLifted}.
The second component of our approach is using the KL projections onto the lifted JA polytope for approximating the solution of the linear program \eqref{e:perm_lp}. The approximation provided by the standard Sinkhorn algorithm described above is known to be suboptimal since in practice the parameter $\beta$ in \eqref{e:OT_entropy} cannot be chosen to be very large due to numerical instabilities. We propose an alternative method for approximating the solutions of the linear program by iteratively solving a number of KL-projection problems. We find that this method gives a good approximation of the solution of the linear program in a small number of iterations. This method is discussed in Section~\ref{sec:lin2KL}.
\section{KL-Projections onto lifted polytopes}\label{sec:KLontoLifted}
We consider the problem of minimizing
$$ KL(x\vert z) +KL( y \vert w),$$
where $x,z\in\mathbb R^{n\times n}$ and $y,w\in\mathbb R^{n^2\times n^2}$, $w,z>0$, over the JAP using alternating Bregman iterations. The main building block in this algorithm is defining the
\emph{one-sided local polytope} ($\OLP$):
\begin{subequations}\label{e:olp}
\begin{align}
\sum_j x_{ij} &= 1, \qquad \ \ \ \forall i \\
\sum_l y_{ijkl} &= x_{ij}, \qquad \forall i,j,k \label{e:olp_Yconstraints}\\
x,y & \geq 0
\label{e:local_poly_positivity}
\end{align}
\end{subequations}
and observing that the $\JAP$ is an intersection of four sets, which are, up to permutation of coordinates, $\OLP$ sets:
Denote $y_{ijkl}^\diamond=y_{klij}$ and define $\OLP^\diamond$ as the set of $(x,y)$ satisfying $(x,y^\diamond)\in\OLP$. Next denote $y^T_{ijkl}=y_{jilk} $ and define $\OLP^T $ to be the set of $(x,y)$ satisfying $(x^T,y^T) \in \OLP $. Denote by $\OLP^{T\diamond}$ the set of $(x,y)$ satisfying $(x^T,(y^T)^\diamond) \in \OLP $. Then we obtain
\begin{equation}\label{e:TLPnTLPTn_decomp}
\JAP=\OLP\cap\OLP^\diamond\cap \OLP^T\cap \OLP^{T\diamond}.
\end{equation}
We show that there is a closed form formula for the KL-projection onto the OLP polytope.
The derivation of this formula will be presented in the next subsection. Thus, by applying
Bregman iterations to the four OLP sets as in \eqref{e:bergman}, we are guaranteed to converge to the KL-projection onto the lifted polytope, provided that the JAP is strictly feasible. This is indeed the case; an example of a strictly feasible solution is $x=\frac{1}{n} \mathbbm 1 \mathbbm 1^T, y=\frac{1}{n^2} \mathbbm 1 \mathbbm 1^T$, where $\mathbbm 1$ denotes the vector of all ones in the relevant dimension.
\subsection{KL-Projections onto the one-sided local polytope}
We now compute the closed-form solution for KL-projections over the one-sided local polytope ($\OLP$) defined in \eqref{e:olp}. Namely for given $(z,w)\in\mathbb R^d_{\scriptscriptstyle{ > 0}}$ we seek to solve
\begin{equation}\label{e:kl_olp}
\min_{(x,y)\in \text{OLP}} KL(x \,\vert\, z) + KL(y \, \vert \,w).
\end{equation}
\begin{theorem}\label{thm:KL_projection_LP1}
Given $(z,w)\in \mathbb R^d_{\scriptscriptstyle{ > 0}}$, the minimizer of \eqref{e:kl_olp} is given by the equations:
\begin{subequations}\label{e:kl_olp_solution}
\begin{align}
q_{ij}&=\exp \parr{\frac{\sum_k \log (\sum_s w_{ijks}) + \log z_{ij}}{n+1}} \\
x_{ij} & = \frac{q_{ij}}{\sum_j q_{ij}} \\
y_{ijkl} &= x_{ij}\frac{w_{ijkl}}{\sum_s w_{ijks}}
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
The proof is based on two applications of Theorem \ref{thm:kl_projection_rs}.
First, we will find the optimal $y$ for any fixed $x$. Indeed, fixing $x$ decomposes \eqref{e:kl_olp} into $n\times n$ independent problems, one for each pair of indices $i,j $ in \eqref{e:olp_Yconstraints}. Each independent problem can be solved using the observation that the matrix $u_{kl}=y_{ijkl}$ is in $\RS(\mu)$ where $\mu$ is the constant vector $\mu=x_{ij}\mathbbm 1$, where $\mathbbm 1$ denotes the vector of ones. Thus using Theorem~\ref{thm:kl_projection_rs}
$$y_{ijkl}=u_{kl} = x_{ij}\frac{w_{ijkl}}{\sum_s w_{ijk s}}.$$
Now we can plug this back in \eqref{e:kl_olp} and end up with a problem in the variable $x$ alone. Indeed,
\begin{align*}
KL(x\,\vert\, z)+KL(y\,\vert\, w) &=\sum_{ij} \brac{ KL(x_{ij} \, \vert \, z_{ij} )+ \sum_{kl} \frac{w_{ijkl}}{\sum_s w_{ijk s}} KL\Big(x_{ij} \, \Big\vert \, \sum_s w_{ijk s} \Big)} \\
&=(n+1)\sum_{ij} KL \Bigg( x_{ij} \, \Bigg \vert\, \exp\parr{\frac{
\log z_{ij} + \sum_k \log (\sum_s w_{ijks})}{n+1}} \Bigg),
\end{align*}
where in the second equality we used the following (readily verified) property of the KL-divergence, valid for $a_k,b_k>0$:
$$\sum_{k} a_{k} KL(x\,\vert\, b_{k}) = \big(\sum_{k}a_{k}\big) KL \parr{x \, \Big\vert \exp \parr{\frac{\sum_k a_k \log b_k}{\sum_k a_k}} }.$$
Finally, we are left with a single problem of the form \eqref{e:kl_rs} and applying Theorem \ref{thm:kl_projection_rs} again proves \eqref{e:kl_olp_solution}.
\end{proof}
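The closed form \eqref{e:kl_olp_solution} is straightforward to vectorize. The following sketch is our own illustration (array names, shapes and the sanity checks are ours); $w$ is stored as an array indexed by $(i,j,k,l)$ and $z$ by $(i,j)$.
\begin{verbatim}
import numpy as np

def project_olp(z, w):
    # closed-form KL-projection onto the one-sided local polytope (OLP)
    n = z.shape[0]
    S = w.sum(axis=3)                                     # S_{ijk} = sum_s w_{ijks}
    q = np.exp((np.log(S).sum(axis=2) + np.log(z)) / (n + 1))
    x = q / q.sum(axis=1, keepdims=True)                  # row-normalize q
    y = x[:, :, None, None] * w / S[:, :, :, None]        # y_{ijkl} = x_{ij} w_{ijkl} / S_{ijk}
    return x, y

n = 4
z = np.random.rand(n, n) + 0.1
w = np.random.rand(n, n, n, n) + 0.1
x, y = project_olp(z, w)
print(np.allclose(x.sum(axis=1), 1.0),                    # sum_j x_{ij} = 1
      np.allclose(y.sum(axis=3), x[:, :, None]))          # sum_l y_{ijkl} = x_{ij}
\end{verbatim}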
\paragraph{Incorporating zero constraints} The JA relaxation stated above can be strengthened by noting that for permutations $x\in \Pi$ there exists exactly one non-zero entry in each row and column, and therefore $x_{ij}x_{il}=0$ and $x_{ji}x_{li}=0 $ for all $j\ne l$. In the lifted LP formulation this implies $y_{ijil}=0$ and $y_{jili}=0 $ for all $i$ and $j\ne l$. These constraints (which are sometimes called \emph{gangster constraints}) are part of the standard JA relaxation. They can be incorporated seamlessly in our algorithm, as we will now explain.
Denote multi-indices of $y$ by $\gamma$ and let $\Gamma$ be the set of multi-indices $\gamma$ for which the constraint $y_\gamma=0 $ is to be added. We eliminate the zero valued variables from the objective \eqref{e:perm_lp} and the constraints defining the polytope $C$, and rewrite them as optimization problem in the variables $x$ and $(y_\gamma)_{\gamma \not \in \Gamma} $. We then consider $KL$-projections only with respect to these variables, and use the same Bregman iteration scheme described above for the reduced variables. The only modification needed to the algorithm is a minor modification to the formula \eqref{e:kl_olp_solution}, where $w$ is replaced with $\bar w$ which satisfies $\bar w_\gamma =0$ if $\gamma \in \Gamma$ and $\bar w_\gamma=w_\gamma $ otherwise.
We note that also with respect to the reduced variables the strengthened relaxations are strictly feasible, so that the alternating KL-projection algorithm converges. An example of a strictly feasible solution in the $\JAP$ is
$$\quad (x,y)=\frac{1}{|\Pi|} \sum_{x \in \Pi} (x,y(x)), $$
where $y(x)$ is defined via $y_{ijkl}=x_{ij}x_{kl}$.
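Concretely, with the array conventions of the previous snippet, building $\bar w$ amounts to zeroing the entries indexed by $\Gamma$ before any projection is computed; the loop-based sketch below (ours, for illustration only) does exactly this.
\begin{verbatim}
import numpy as np

def gangster_mask(w):
    # build bar-w: zero out the forbidden entries y_{ijil} and y_{jili}, j != l
    n = w.shape[0]
    w_bar = w.copy()
    for i in range(n):
        for j in range(n):
            for l in range(n):
                if j != l:
                    w_bar[i, j, i, l] = 0.0   # corresponds to y_{ijil} = 0
                    w_bar[j, i, l, i] = 0.0   # corresponds to y_{jili} = 0
    return w_bar
\end{verbatim}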
\section{From linear programs to KL projections}\label{sec:lin2KL}
The JA relaxation of the QAP, and in fact all linear programs, are of the general form
\begin{equation} \label{e:linear}
\min_{v \in \mathcal{P}} \quad c^Tv
\end{equation}
where $\mathcal{P}$ is a standard polytope
$$\mathcal{P}=\{v\ \vert \ v \geq 0, \ Av=b\}$$
containing a strictly feasible solution. We want to approximate a solution of the linear program using KL-projections onto $\mathcal{P}$.
The most common strategy for doing this ({\it e.g.}, \cite{bregman1967relaxation,cuturi2013sinkhorn,benamou2015iterative}) which we already described above, is regularizing \eqref{e:linear} by adding a KL-divergence term with a small coefficient $\beta^{-1}$ and solving
\begin{equation} \label{e:regularized}
v_\beta^*= \mathrm{arg}\min_{v \in \mathcal{P}} c^Tv+\beta^{-1} KL(v|u_0)=\mathrm{arg}\min_{v \in \mathcal{P}}KL(v|u_0 \cdot \exp(-\beta c)) \end{equation}
Here $u_0$ is some constant positive vector, often chosen as $u_0=\mathbbm 1$, and $\cdot$ denotes elementwise multiplications. As our notation suggests, these regularized problems are strictly convex and hence have a unique minimizer $v_\beta^*$, which converges in the limit $\beta\rightarrow \infty$ to the minimizer of \eqref{e:linear} with maximal entropy \cite{benamou2015iterative}. We will call this algorithm for approximating the solution of the linear program \eqref{e:linear} the \emph{regularization method}. This approach encounters two difficulties:
\begin{enumerate}
\item \textbf{Underflow/overflow} occurs for large values of $\beta$.
\item \textbf{Slow convergence rate} of the Bregman iterations for large values of $\beta$. This phenomenon can be explained by the fact that Sinkhorn's algorithm can find an $\varepsilon$-approximate solution in $O(n^2 \log n \, \varepsilon^{-3}) $ time \cite{altschuler2017near}. Thus for a fixed error rate of $\varepsilon$ fast approximation by Sinkhorn's algorithm is possible, but the rate of convergence grows polynomially in $\varepsilon^{-1}$ instead of logarithmically as in the case of interior point algorithms. As a result, taking $\varepsilon$ to be very small can lead to very long computations.
\end{enumerate}
The underflow/overflow encountered for large values of $\beta$, and methods to overcome it, can be understood by considering the KKT equations of \eqref{e:regularized}: Using the fact that the unique minimum of \eqref{e:regularized} can be shown to be strictly positive, the KKT conditions amount to solving the following equations for $v,\lambda$:
\begin{subequations}\label{e:KKT}
\begin{align}
v&=u_0 \cdot \exp(-\beta c)\cdot \exp(-A^T \lambda) \label{e:unstable}\\
Av&=b
\end{align}
\end{subequations}
As $\beta$ increases, the entries of $\exp(-\beta c)$ become either very large or very close to zero (depending on the signs of the entries of $c$), which leads to numerical overflow/underflow.
One natural approach (which we will not use) for approximating the solution of linear programs by KL-projections which avoids underflow/overflow is the \emph{proximal method} approach \cite{censor1992proximal,chen1993convergence}: This method proposes an iterative process beginning with some initial guess $v_0>0$ and then solving
\begin{equation} \label{e:PMD}
v_{k+1}=\underset{v \in \mathcal{P}}{\mathrm{argmin}} \, c^Tv+\beta_0^{-1} KL(v|v_k).
\end{equation}
The advantage of this algorithm over the previous one is that it converges as $k \rightarrow \infty$ even when $\beta=\beta_0$ is held fixed, and as a result the coefficients of the KKT equation
of \eqref{e:PMD} do not explode or vanish. On the other hand, this algorithm requires solving multiple KL-projection problems in order to converge. In our experiments in the context of the linear relaxations of the QAP this method required many hundreds of KL-projections onto the JAP to converge (we call these iterations external iterations). As each KL-projection onto the JAP typically requires several hundred (closed-form) projections onto OLPs (we call these projections internal iterations), this algorithm becomes rather slow.
Accordingly we will propose a new method for approximating linear programs by KL-projections which will only require a small number of external iterations in order to converge, and still avoids underflow/overflow. The inspiration for this method comes from the following observation on the relation between the proximal method and the regularization method:
\begin{lemma}\label{lem:slower}
If the proximal method and the regularization method are initialized from the same point $(u_0=v_0 )$, then the solution obtained from the proximal method with fixed $\beta_0$ after $k$ iterations is equal to the solution of the regularization method when choosing $\beta=k\beta_0$ (that is, $v_k=v_{k\beta_0}^* $).
\end{lemma}
\begin{proof}
By induction on $k$. For $k=1$ the claim is obvious since in this case the equation \eqref{e:PMD} determining $v_1$ and the equation \eqref{e:regularized} determining $v_{\beta_0}^* $ are identical. Now assume correctness for $k$; then, according to \eqref{e:unstable}, we have for some $\lambda_k$,
$$v_k=v_{k \beta_0}^*=v_0 \cdot \exp(-k\beta_0 c)\cdot \exp(-A^T \lambda_k) $$
By replacing $u_0 $ and $\beta$ in \eqref{e:regularized} with $v_k$ and $\beta_0$ we obtain that the KKT equations for \eqref{e:PMD} are of the form
\begin{subequations}
\begin{align}
v&=v_k \cdot \exp(-\beta_0 c)\cdot \exp(-A^T \lambda)=u_0 \cdot \exp(-(k+1)\beta_0 c) \cdot \exp(-A^T(\lambda+\lambda_k)) \label{e:two_formulations} \\
Av&=b
\end{align}
\end{subequations}
and thus the solution $v_{k+1} $ to this equation is identical to the solution of \eqref{e:KKT} with $\beta=(k+1)\beta_0 $.
\end{proof}
The lemma shows that the proximal method can be interpreted as a method for improving the conditioning of the KKT equations of \eqref{e:regularized} for large values of $\beta$ by using solutions for smaller values of $\beta$ ({\it i.e.}, solving $k$ iterations with $\beta_0$ is equivalent to solving one iteration with $k\beta_0\gg \beta_0$). The proof suggests other methods for exploiting solutions for small values of $\beta$ in order to solve \eqref{e:regularized} for large values of $\beta$. For example given $v_1,\ldots, v_k$, we can define $u_0$ to be
\begin{equation}
\label{e:square}
u_0=v_k \cdot v_k.
\end{equation}
Another possible choice, which is the choice we use in practice, is
\begin{equation} \label{e:last}
u_0=v_k\cdot v_{k-1}\cdot \ldots \cdot v_0 .
\end{equation}
We then have
\begin{lemma}\label{lem:faster}
Let $v_k$ be defined as in \eqref{e:square} or \eqref{e:last}. If $u_0=v_0=\mathbbm 1$, then $v_k=v_{2^{k-1}\beta_0}^* $.
\end{lemma}
Thus, as in the previous algorithm, the proposed algorithm computes a solution $v_k$ which is in fact identical to $v_{\beta(k)}^* $ for some monotonically increasing function $\beta$. In the proposed algorithm $\beta(k)$ grows exponentially, while in the previous algorithm $\beta(k)$ grows only linearly. As a result, we can obtain a high-quality solution for the JA relaxation using only a small number of external iterations (around 15 in the experiments we performed).
\begin{proof}
We prove the lemma for the update rule defined in \eqref{e:last}. The proof is similar to the proof of the previous lemma. For $k=1$ it is clear that $v_1$ and $v_{\beta_0}^* $ solve the same equation and hence are equal. Now we assume correctness of the claim for all $j \leq k $ and prove it for $k+1$. By assumption and \eqref{e:unstable}, for all $j \leq k$ there is a vector $\lambda_j$ so that
$$v_j=\mathbbm 1 \cdot \exp(-2^{j-1}\beta_0 c)\cdot \exp(-A^T \lambda_j) $$
and therefore the KKT equations obtained by replacing $u_0 $ with $v_k \cdot v_{k-1} \cdots v_0$ are of the form \begin{subequations}
\begin{align}
v&= \exp(-\beta_0(1+\sum_{j=1}^k 2^{j-1}) c) \cdot \exp(-A^T(\lambda+\sum_{j=1}^k\lambda_j)) \\
Av&=b
\end{align}
\end{subequations}
Thus the solution $v_{k+1}$ to this equation is identical to $v_{2^{k}\beta_0}^*$.
\end{proof}
To summarize, we state our full algorithm for computing the lifted linear relaxation of the QAP: We set $\beta=1$ and $u_0=(x_0,y_0)=(\mathbbm 1,\mathbbm 1) $. We then solve \eqref{e:regularized} using alternating Bregman projections onto the one-sided local polytopes (OLP) as described in Section~\ref{sec:KLontoLifted} and denote the solution by $v_1=(x_1,y_1)$. In general, we obtain $v_{k+1}$ by solving \eqref{e:regularized} with $\beta=1$ and $u_0=v_0 \cdot v_1 \cdot \ldots \cdot v_k $. In each external iteration we perform alternating Bregman projections until we obtain a solution $v_k$ which satisfies all the constraints up to a maximal error of $\varepsilon$. We perform external iterations until the normalized difference between the energies of $v_{k+1}$ and $v_k$ is smaller than $\varepsilon$. In our experiments we use $\varepsilon=10^{-2} $.
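In pseudocode, the outer loop described above reads as follows. This is only a sketch under our own naming conventions: \texttt{kl\_project\_jap} stands for the alternating Bregman projections of Section~\ref{sec:KLontoLifted} and is assumed, not defined here, and $c$ denotes the (flattened) cost vector of \eqref{e:perm_lp}.
\begin{verbatim}
import numpy as np

def solve_lifted_lp(c, kl_project_jap, max_outer=15, eps=1e-2):
    u = np.ones_like(c)                        # u_0 = 1, beta = 1 throughout
    energy_prev = np.inf
    v = u
    for _ in range(max_outer):
        v = kl_project_jap(u * np.exp(-c))     # KL-projection of u * exp(-c) onto the JAP
        energy = float(c.ravel() @ v.ravel())
        if abs(energy_prev - energy) / max(abs(energy), 1.0) < eps:
            break                              # normalized energy change below eps
        energy_prev = energy
        u = u * v                              # u <- v_k * v_{k-1} * ... * v_0, cf. (e:last)
    return v
\end{verbatim}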
\section{Results}
\paragraph{Comparison with interior point solvers}
\begin{wrapfigure}[12]{r}{0.3\columnwidth}
\includegraphics[width=0.3\columnwidth]{newTiming.pdf}{}
\caption{\small Timing comparison of the Sinkhorn-JA algorithm with the Mosek interior point solver.}
\label{fig:timing}
\end{wrapfigure}
We compare the timing of our algorithm for solving the JA relaxation (denoted by Sinkhorn-JA) with Mosek \cite{andersenmosek}, a popular commercial interior point solver. We ran both algorithms on randomly generated quadratic assignment problems, with varying values of $n$, until they required more than ten minutes to solve the relaxations. Both algorithms were run with a single-threaded implementation on an Intel Xeon CPU with two 2.40 GHz processors. As can be seen in Figure~\ref{fig:timing}, solving the JA relaxation with $n=20$ using Mosek takes over ten minutes. In a similar time frame we can approximately solve the JA relaxation for problems with $n=90 $.
\paragraph{Quadratic assignment}
We evaluate our algorithm's performance for the JA relaxation using QAPLIB \cite{burkard1997qaplib}, an online library that contains several data sets of quadratic assignment problems and provides the best known lower and upper bounds for each problem. Many of the problems have been solved to optimality, in which case the lower bound and upper bound are equal.
In Figure~\ref{fig:qaplib} we compare the upper and lower bounds obtained from the proposed algorithm with the best known lower and upper bounds in QAPLIB. The energy of our solution for the JA relaxation provides us with a lower bound for the QAP. We obtain an upper bound by projecting the solution $x$ returned by the algorithm onto the set of permutations using the projection procedure of \cite{maron2018concave} and computing its energy. As can be seen in Figure~\ref{fig:qaplib}, for the \textbf{bur} and \textbf{chr} datasets we achieve a very tight lower bound (3 digits of accuracy), and for the \textbf{lipa} dataset we achieve accurate solutions for the entire set. In total we achieve 19 accurate solutions (zero energy gap) and 36 lower bounds with up to 2 digits of accuracy. For the rest of QAPLIB we achieve reasonable results. We note that the QAPLIB bounds were achieved using a rather large collection of different algorithms that are typically far slower than our own and have worst-case exponential time complexity.
\begin{figure}
\caption{\small Comparison of the upper bounds and lower bounds obtained from the Sinkhorn-JA algorithm with the best known upper bounds and lower bounds in the QAPLIB library. }
\label{fig:qaplib}
\end{figure}
\paragraph{Anatomical datasets} We applied our approach to the task of classification of anatomical surfaces. We considered three datasets, consisting of three different primate bone types \cite{boyer2011algorithms}: (A) 116 molar teeth, (B) 61 metatarsal bones, and (C) 45 radius bones. On each surface we sampled 400 points using farthest point sampling. We first found a correspondence map for the first 50 points using our algorithm, and then used this result as an initialization to \cite{maron2018concave} in order to obtain correspondences for all 400 points. Finally, we used the computed correspondences to calculate the Procrustes distance \cite{schonemann1966generalized} between each pair of shapes as a dissimilarity measure. A representative example is shown in the inset. Note that in this case the teeth are related by an orientation-reversing map which our pipeline recovered.
\begin{wraptable}[11]{r}{0.4\columnwidth}
\includegraphics[width=0.4\columnwidth]{teethFigHaggai.pdf}{}
\end{wraptable} To evaluate our algorithm, we calculate the dissimilarity measure for every two meshes in a set and use a "leave one out" strategy: each bone is assigned to the taxonomic group of its nearest neighbor among all other bones. The table below shows successful classification rates (in \%) for the three bone types and three different classification queries.
For the initial 50 points matching we note that our algorithm is very accurate: the normalized gap ($\frac{\text{energy} - \text{projected energy}} {\text{energy}}$) is less than 0.01 for $90\%$ of the cases. We compared our method with the convex relaxation method of \cite{maron2016point} and the performance of human experts as reported in \cite{boyer2011algorithms}. Note that our algorithm achieves state of the art results on all but one experiment. We also compared our method with an alternative baseline method where we match the 400 points using \cite{maron2018concave} initialized with $\frac{1}{n}\mathbbm 1\mathbbm 1^T$ and found that our algorithm achieves significantly more accurate results.
\begin{tabular}{ |p{2cm}||p{3cm}||p{1cm}|p{1cm}|p{1cm}|p{1cm}||p{1cm}||p{1cm}| }
\hline
\multicolumn{8}{|c|}{data sets} \\
\hline
classification & \multicolumn{1}{|c|} {algorithm} & \multicolumn{2}{|c|} {Teeth}& \multicolumn{2}{|c|}{1st Metatarsal } & \multicolumn{2}{|c|}{Radius}\\
\hline
- &- & No. & \% & No. &\% &No. &\% \\
\hline
\multirow{3}{*}{Genera} &Sinkhorn-JA & 99 & \textbf{93.9} & 59 & 83.0 & 45 &\textbf{84.44} \\
& PM-SDP & 99 & 91.9 & 59 & 76.6 &45 & 82.44 \\
& Human-expert & 99 & 91.9 & 59 & \textbf{88.1} &45 & 77.8 \\
\hline
\multirow{3}{*}{Family} &Sinkhorn-JA & 106 & 94.3 & 61 & 93.4 &\multicolumn{2}{|c|}{-}\\
&PM-SDP & 106 & 94.3 & 61 & 93.4 &\multicolumn{2}{|c|}{-}\\
&Human-expert & 106 & 94.3 & 61 & 93.4 &\multicolumn{2}{|c|}{-}\\
\hline
\multirow{3}{*}{Above Family} &Sinkhorn-JA & 116 & \textbf{99.1} & 61 & 100 &\multicolumn{2}{|c|}{-}\\
&PM-SDP & 116 & 98.2 & 61 & 100 &\multicolumn{2}{|c|}{-}\\
&Human-expert & 116 & 95.7 & 61 & 100 &\multicolumn{2}{|c|}{-}\\
\hline
\end{tabular}
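For concreteness, the following minimal sketch (illustrative only; the variable names are hypothetical and not part of our implementation) shows the leave-one-out nearest-neighbor evaluation described above, assuming a precomputed symmetric matrix of pairwise Procrustes dissimilarities and the ground-truth taxonomic labels.
\begin{verbatim}
# Minimal sketch of the leave-one-out evaluation (illustrative only).
# Assumptions: `dissimilarity` is a precomputed symmetric (n x n) matrix of
# Procrustes distances between shapes; `labels` holds ground-truth groups.
import numpy as np

def leave_one_out_accuracy(dissimilarity, labels):
    n = len(labels)
    correct = 0
    for i in range(n):
        d = dissimilarity[i].copy()
        d[i] = np.inf                  # exclude the query shape itself
        nearest = int(np.argmin(d))    # nearest neighbour among other shapes
        correct += int(labels[nearest] == labels[i])
    return correct / n
\end{verbatim}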
\section{Conclusion}
In this paper, we suggested a new algorithm for approximately solving the JA relaxation by generalizing the Sinkhorn algorithm to higher-dimensional polytopes. This algorithm is significantly more scalable than standard solvers, and as a result the high-quality solutions often obtained by the JA relaxation become available for problems of non-trivial size.
The main drawback of our algorithm is that it only approximates the optimal solution, but as we demonstrate in the results section, it nevertheless achieves state-of-the-art performance on various tasks. We believe that other lifted relaxations can benefit from such Sinkhorn-like solvers and leave this as a possible direction for future work.
\section*{Acknowledgments}
This research was supported in part by the European Research Council (ERC Consolidator Grant, "LiftMatch" 771136) and the Israel Science Foundation (Grant No. 1830/17).
\end{document} |
\begin{document}
\let\realverbatim=\verbatim
\let\realendverbatim=\endverbatim
\renewcommand\verbatim{\par\addvspace{6pt plus 2pt minus 1pt}\realverbatim}
\renewcommand\endverbatim{\realendverbatim\addvspace{6pt plus 2pt minus 1pt}}
\makeatletter
\newcommand\verbsize{\@setfontsize\verbsize{10}\@xiipt}
\renewcommand\verbatim@font{\verbsize\normalfont\ttfamily}
\makeatother
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{conj}[thm]{Conjecture}
\begin{abstract}
For any fixed nonzero integer $h$, we show that a positive proportion of integral binary quartic forms $F$ do locally everywhere represent $h$, but do not globally represent $h$.
We order classes of integral binary quartic forms by the two generators of their ring of ${\rm GL}_{2}({\mathbb Z})$-invariants, classically denoted by $I$ and $J$.
\end{abstract}
\maketitle
\section{Introduction}\label{Intro}
Let $h\in{\mathbb Z}$ be nonzero. We will prove the existence of many integral quartic forms that do not represent $h$. Specifically, we aim to show many quartic {\it Thue equations}
\begin{equation}
F(x,y)=h
\end{equation}
have no solutions in integers $x$ and $y$, where $F(x , y)$ is an irreducible binary quartic form
with coefficients in the integers.
Let
$$
F(x , y) = a_{0}x^{4} + a_{1}x^{3}y + a_{2}x^{2}y^{2} + a_{3}xy^{3} + a_{4}y^{4} \in \mathbb{Z}[x , y].
$$
The discriminant $D$ of $F(x, y)$ is given by
$$
D = D_{F} = a_{0}^{6} (\alpha_{1} - \alpha_{2})^{2} (\alpha_{1} - \alpha_{3})^{2} (\alpha_{1} - \alpha_{4})^{2} (\alpha_{2} - \alpha_{3})^{2} (\alpha_{2} - \alpha_{4})^{2} (\alpha_{3} - \alpha_{4})^{2} ,
$$
where $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$ and $\alpha_{4}$ are the roots of
$$
F(x , 1) = a_{0}x^{4} + a_{1}x^{3} + a_{2}x^{2} + a_{3}x + a_{4} .
$$
Let
$
A = \bigl( \begin{smallmatrix}
a & b \\
c & d \end{smallmatrix} \bigr)$ be a $2 \times 2$ matrix, with $a, b, c, d \in {\mathbb Z}$. We define the integral binary quartic form $F^{A}(x , y)$ by
$$
F^{A}(x , y) : = F(ax + by ,\ cx + dy).
$$
It follows that
\begin{equation}\label{St6}
D_{F^{A}} = (\textrm{det} A)^{12} D_F.
\end{equation}
If $A \in {\rm GL}_{2}(\mathbb{Z})$, then we say that $\pm F^{A}$ is {\it equivalent} to $F$.
The ${\rm GL}_{2}({\mathbb Z})$-invariants of a generic binary quartic form, which will be called \emph{invariants}, form a ring that is generated by two invariants. These two invariants are denoted by $I$ and $J$ and are algebraically independent. For
$F(x , y) = a_{0}x^{4} + a_{1}x^{3}y + a_{2}x^{2}y^{2} + a_{3}xy^{3} + a_{4}y^{4}$, these invariants are defined as follows:
\begin{equation}\label{defofI}
I = I_{F} = a_{2}^{2} - 3a_{1}a_{3} + 12a_{0}a_{4}
\end{equation}
and
\begin{equation}\label{defofJ}
J = J_{F} = 2a_{2}^{3} - 9a_{1}a_{2}a_{3} + 27 a_{1}^{2}a_{4} - 72 a_{0}a_{2}a_{4} + 27a_{0}a_{3}^{2}.
\end{equation}
Every invariant is a polynomial in $I$ and $J$. Indeed, the discriminant $D$, which is an invariant, satisfies
$$
27D = 4I^3 - J^2.
$$
Following \cite{BaShSel}, we define the height $\mathcal{H}(F)$ of an integral binary quartic form $F(x , y)$ as follows,
\begin{equation}\label{Bash}
\mathcal{H}(F) : = \mathcal{H}(I , J) := \max\left\{\left|I^3\right|, \frac{J^2}{4}\right\},
\end{equation}
where $I = I_F$ and $J = J_F$.
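As a simple illustration (not needed in the sequel), for the form $F(x , y) = x^{4} + y^{4}$ we have $I_{F} = 12$ and $J_{F} = 0$ by \eqref{defofI} and \eqref{defofJ}, so that $27 D_{F} = 4 \cdot 12^{3} - 0^{2} = 6912$, i.e., $D_{F} = 256$, and $\mathcal{H}(F) = \max\left\{12^{3}, 0\right\} = 1728$.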
We note that if $F(x,y)=h$ has no solution, and $G$ is a {\it proper subform} of $F$, i.e.,
\begin{equation}\label{defofsubform}
G(x,y)=F(ax+by,cx+dy)
\end{equation}
for some integer matrix $A=\bigl(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\bigr)$ with $|\!\det A|>1$, then clearly $G(x,y)=h$ will also have no integer solutions. We will call a binary form {\it maximal} if it is not a proper subform of another binary form.
Our goal in this paper is to show that many (indeed, a positive proportion) of integral binary quartic forms are not proper subforms, locally represent $h$ at every place, but globally do not represent~$h$. The following is our main result.
\begin{thm}\label{mainquartic}
Let $h$ be any nonzero integer. When maximal integral binary quartic forms $F(x , y) \in \mathbb{Z}[x , y]$ are ordered by their height $\mathcal{H}(I, J)$, a positive proportion of the ${\rm GL}_2({\mathbb Z})$-classes of these forms $F$ have the following properties:
\begin{enumerate}[{\rm (i)}]
\item they locally everywhere represent $h$ $($i.e., $F(x , y) = h$ has a solution in~${\mathbb R}^2$ and in~${\mathbb Z}_p^2$ for all $p);$ and
\item they globally do not represent $h$ $($i.e., $F(x , y) = h$ has no solution in~$\mathbb{Z}^2)$.
\end{enumerate}
\end{thm}
In other words, we show that a positive proportion of quartic Thue equations $F(x,y)=h$ fail the integral Hasse principle, when classes of
integral binary quartic forms $F$ are ordered by the height $\mathcal{H}(I , J)$ defined in \eqref{Bash}. We will construct a family of quartic forms that do not represent a given integer $h$ and obtain a
lower bound $\mu > 0$ for the density of such forms. The value for $\mu$ is expressed explicitly in \eqref{finaldensity}. Moreover, our method yields an explicit construction of this positive density of forms.
It is conjectured that, for any $n \geq 3$, a density of $100\%$ of integral binary forms of degree $n$ that locally represent a fixed integer $h$ do not globally represent $h$. The positive lower bound $\mu$ in \eqref{finaldensity} is much smaller than the conjectured density $1$.
In joint work with Manjul Bhargava \cite{AB}, we proved a result similar to Theorem \ref{mainquartic}. In \cite{AB} we considered integral binary forms of any given degree ordered by na\"ive height (the maximum of the absolute values of their coefficients). Theorem \ref{mainquartic} is new, as we use a different ordering of integral binary quartic forms, which is more interesting for at least two reasons: here integral binary quartic forms are ordered by the two quantities $I$ and $J$, as opposed to their five coefficients, and $I$ and $J$, unlike the coefficients, are ${\rm GL}_{2}({\mathbb Z})$-invariant.
In \cite{AB}, for any fixed integer $h$, we showed that a positive proportion of binary forms of degree $n \geq 3$ do not represent $h$, when binary $n$-ic forms are ordered by their naive heights. Moreover, for $n =3$, we established the same conclusion when cubic forms are ordered by their absolute discriminants. The Davenport-Heilbronn Theorem, which states that the number of equivalence classes of irreducible binary cubic forms per discriminant is a constant on average, was an essential part of our argument in \cite{AB} for cubic forms. More importantly we made crucial use of the asymptotic counts given by the Davenport-Heilbronn Theorem for the number of equivalent integral cubic forms with bounded absolute discriminant (see the original work in \cite{DH}, and \cite{AB} for application and further references). Such results are not available for binary forms of degree larger than $3$. For quartic forms, fortunately we are empowered by beautiful results due to Bhargava and Shankar that give asymptotic formulas for the number of ${\rm GL}_{2}({\mathbb Z})$-equivalence classes of irreducible integral binary quartic forms having bounded invariants. These results will be discussed in Section \ref{BaShsec}.
This paper is organized as follows. In Section \ref{perilim} we discuss some upper bounds for the number of primitive solutions of quartic Thue equations. Section \ref{BaShsec} contains important results, all cited from \cite{BaShSel}, about the height $\mathcal{H}(I, J)$.
In Sections \ref{splitsection} and \ref{localsection} we impose conditions on the splitting behavior of the forms used in our construction modulo different primes to make sure we produce a large enough number of forms (which in fact form a subset of integral quartic forms with positive density) that do not represent $h$, without any local obstruction.
In Section \ref{completesection}, we summarize the assumptions made in Sections \ref{splitsection} and \ref{localsection}, and apply essential results cited in Sections \ref{perilim} and \ref{BaShsec} to conclude that the quartic forms that we construct form a subset of integral binary quartic forms with positive density.
\section{Primitive Solutions of Thue Equations}\label{perilim}
Let $F(x , y) \in {\mathbb Z}[x , y]$ and $m \in {\mathbb Z}$.
A pair $(x_{0} , y_{0}) \in \mathbb{Z}^2$ is called a {\it primitive solution} to the Thue equation $F(x , y) = m$ if $F(x_{0} , y_{0}) = m$ and
$\gcd(x_{0} , y_{0}) = 1$.
We will use the following result from \cite{AkhQuaterly} to obtain upper bounds for the number of primitive solutions of Thue equations.
\begin{prop}[\cite{AkhQuaterly}, Theorem 1.1]\label{maineq4}
Let $F(x , y) \in \mathbb{Z}[x , y]$ be an irreducible binary form of degree $4$ and discriminant $D$. Let $m$ be an integer with
$$
0 < m \leq \frac{|D|^{\frac{1}{6} - \epsilon} } {(3.5)^{2} 4^{ \frac{2}{3 } } },
$$
where $ 0< \epsilon < \frac{1}{6}$.
Then the equation $|F(x , y)| = m$ has at most
\[
36 + \frac{4}{3 \epsilon}
\]
primitive solutions. In addition to the above assumptions, if we assume that the polynomial $F(X , 1)$ has $2 \mathtt{i}$ non-real roots, with $\mathtt{i} \in\{0, 1, 2\}$, then the number of primitive solutions does not exceed
\[
36 -16\mathtt{i} + \frac{4-\mathtt{i}}{3 \epsilon}.
\]
\end{prop}
If the integral binary forms $F_{1}$ and $F_{2}$ are equivalent, as defined in the introduction, then there exists $A \in {\rm GL}_2({\mathbb Z})$ such that
$$
F_2(x , y) = F_1^{A}(x , y) \, \, \textrm{or} \, \, F_2(x , y) = -F_1^{A}(x , y).
$$
Therefore,
$D_{F_{1}} = D_{F_{2}}$, and for every fixed integer $h$, the number of primitive solutions to $F_1(x , y) = \pm h$ equals the number of primitive solutions to $F_2(x , y) = \pm h$.
The invariants $I_F$ and $J_F$ of an integral quartic form $F$ that are defined in \eqref{defofI} and \eqref{defofJ} have weights $4$ and $6$, respectively. This means
\begin{equation}\label{Idet}
I_{F^{A}} = (\textrm{det} A)^{4} I_F,
\end{equation}
and
\begin{equation}\label{Jdet}
J_{F^{A}} = (\textrm{det} A)^{6} J_F.
\end{equation}
Consequently, by definition of the height $\mathcal{H}$ in \eqref{Bash}, we have
\begin{equation}\label{Hdet}
\mathcal{H}(F^{A}) = (\textrm{det} A)^{12} \mathcal{H}(F),
\end{equation}
and
\begin{equation*}
\mathcal{H}(-F^{A}) = (\textrm{det} A)^{12} \mathcal{H}(F).
\end{equation*}
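The following short computation is a purely illustrative sanity check of the weights in \eqref{Idet} and \eqref{Jdet} on a sample form and matrix; it is not used anywhere in the proofs, and it assumes the SymPy library is available.
\begin{verbatim}
# Illustrative check that I and J, as defined above, have weights 4 and 6
# under F^A(x,y) = F(ax+by, cx+dy), and that 4I^3 - J^2 has weight 12.
import sympy as sp

x, y = sp.symbols('x y')

def IJ(G):
    P = sp.Poly(sp.expand(G), x, y)
    a = [P.coeff_monomial(x**(4 - i) * y**i) for i in range(5)]
    I = a[2]**2 - 3*a[1]*a[3] + 12*a[0]*a[4]
    J = (2*a[2]**3 - 9*a[1]*a[2]*a[3] + 27*a[1]**2*a[4]
         - 72*a[0]*a[2]*a[4] + 27*a[0]*a[3]**2)
    return I, J

F = x**4 + 2*x**3*y - 3*x**2*y**2 + x*y**3 + 5*y**4   # an arbitrary quartic
a, b, c, d = 2, 1, 0, 3                                # det A = 6
FA = F.subs({x: a*x + b*y, y: c*x + d*y}, simultaneous=True)

I0, J0 = IJ(F)
I1, J1 = IJ(FA)
assert I1 == 6**4 * I0 and J1 == 6**6 * J0             # weights 4 and 6
assert 4*I1**3 - J1**2 == 6**12 * (4*I0**3 - J0**2)    # 27 D has weight 12
\end{verbatim}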
\section{On the Bhargava--Shankar height $\mathcal{H}(I, J)$}\label{BaShsec}
In \cite{BaShSel} Bhargava and Shankar introduce the height $\mathcal{H}(F)$ (see \eqref{Bash} for the definition) for any integral binary quartic form $F$. In this section we present some of the asymptotic results of \cite{BaShSel} that will be used in our proofs. Indeed these asymptotic formulas are the reason that we are able to order quartic forms with respect to their invariants $I$ and $J$.
One may ask which integer pairs $(I , J)$ can actually occur as the invariants of an integral binary quartic form. The following result of Bhargava and Shankar provides a complete answer to this question.
\begin{thm}[\cite{BaShSel}, Theorem 1.7]\label{BaSh-thm1.7}
A pair $(I , J) \in \mathbb{Z} \times \mathbb{Z}$ occurs as the invariants of an integral binary quartic form if and only if it satisfies one of the following congruence conditions:
\begin{eqnarray*}
(a) \, \, I \equiv 0 \, \, (\textrm{mod}\, \, 3) &\textrm{and}\, & J \equiv 0\, \, (\textrm{mod}\, \, 27),\\
(b)\, \, I \equiv 1 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 2\, \, (\textrm{mod}\, \, 27),\\
(c)\, \, \, I \equiv 4 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 16\, \, (\textrm{mod}\, \, 27),\\
(d)\, \, I \equiv 7 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 7\, \, (\textrm{mod}\, \, 27).
\end{eqnarray*}
\end{thm}
Let $V_{{\mathbb R}}$ denote the vector space of binary quartic forms over the real numbers ${\mathbb R}$. The group ${\rm GL}_{2}({\mathbb R})$ naturally acts on $V_{{\mathbb R}}$. The action of ${\rm GL}_{2}({\mathbb Z})$ on $V_{{\mathbb R}}$ preserves the lattice $V_{{\mathbb Z}}$ consisting of the integral elements of $V_{{\mathbb R}}$.
The elements of $V_{{\mathbb Z}}$ are the forms that we are interested in.
Let $V^{(\mathtt{i})}_{{\mathbb Z}}$ denote the set of elements in $V_{{\mathbb Z}}$ having nonzero discriminant
and $\mathtt{i}$ pairs of complex conjugate roots and $4 -2\mathtt{i}$ real roots.
For any ${\rm GL}_{2}({\mathbb Z})$-invariant set $S \subseteq V_{{\mathbb Z}}$, let $N(S ; X)$ denote the number of ${\rm GL}_{2}({\mathbb Z})$-equivalence classes of irreducible elements $f \in S$ satisfying
$\mathcal{H}(f) < X$.
For any set $S$ in $V_{{\mathbb Z}}$ that is definable by congruence conditions, following \cite{BaShSel}, we denote by $\mu_{p}(S)$ the $p$-adic density of the $p$-adic closure of $S$ in $V_{{\mathbb Z}_p}$, where we normalize the additive measure $\mu_p$ on $V_{{\mathbb Z}_p}$ so that $\mu_p(V_{{\mathbb Z}_p})= 1$. The following is a combination of Theorem 2.11 and Theorem 2.21 of \cite{BaShSel}.
\begin{thm}[Bhargava--Shankar]\label{BaSh-thm2.11}
Suppose $S$ is a subset of $V_{{\mathbb Z}}$ defined by congruence conditions modulo finitely many prime powers, or even a suitable infinite set of prime powers. Then we have
\begin{equation}
N(S \cap V_{{\mathbb Z}}^{(\mathtt{i})}; X) \sim N( V_{{\mathbb Z}}^{(\mathtt{i})}; X) \prod_{p} \mu_{p} (S).
\end{equation}
\end{thm}
The statement of Theorem \ref{BaSh-thm2.11} for a finite number of congruence conditions follows directly from Theorem 2.11 of \cite{BaShSel}.
In Subsection 2.7 of \cite{BaShSel},
some congruence conditions are specified that are suitable for inclusion of infinitely many primes in the statement of Theorem \ref{BaSh-thm2.11} (see Theorem 2.21 of \cite{BaShSel}).
A function $\phi : V_{{\mathbb Z}} \rightarrow [0, 1]$ is said to be \emph{defined by congruence conditions} if, for all primes $p$, there exist functions
$\phi_p : V_{{\mathbb Z}_p} \rightarrow [0, 1]$
satisfying the following conditions:\newline
(1) for all $F \in V_{{\mathbb Z}}$, the product $\prod_{p} \phi_{p}(F)$ converges to $\phi(F)$,\newline
(2) for each prime $p$, the function $\phi_p$ is locally constant outside some closed
set $S_p \subset V_{{\mathbb Z}_p}$ of measure zero.
Such a function $\phi$ is called \emph{acceptable} if, for sufficiently large primes $p$, we have $\phi_p(F) = 1$ whenever $p^2 \nmid D_F$.
For our purpose, particularly in order to impose congruence conditions modulo the infinitely many primes that are discussed in Subsection \ref{largeprimesubsection}, we define the acceptable function $\phi : V_{{\mathbb Z}} \rightarrow \{0, 1\}$ to be the characteristic function of a certain subset of integral binary quartic forms. More specifically, for $p < 49$, we define $\phi_p$ to be the constant function $1$. For $p > 49$, we define $\phi_p : V_{{\mathbb Z}_p} \rightarrow \{0, 1\}$ to be the characteristic function of the set of integral binary quartic forms that are not factored as $c_p M_p (x , y)^2$ modulo $p$, with $c_p \in \mathbb{F}_p$ and $M_p(x , y)$ any quadratic form over $\mathbb{F}_p$. Then
\begin{equation}\label{defofaccept}
\phi(F) = \prod_{p} \phi_{p}(F)
\end{equation} is the characteristic function of the set of integral binary quartic forms that are not factored as $c_p M_p (x , y)^2$ over $\mathbb{F}_p$ for any $p > 49$.
We denote by $\lambda(p)$ the $p$-adic density
$\int_{F \in V_{{\mathbb Z}_p}} \phi_p(F) dF
$.
The value of $\lambda(p)$ will be computed in \eqref{largedensity}. It turns out that in Theorem \ref{mainquartic}, the positive proportion of integral binary quartic forms that do not represent $h$ is bounded below by
$$
\mu = \kappa(h) \prod_{p} \lambda(p),
$$
where $p$ ranges over all primes and $\kappa(h)$ is a constant that only depends on $h$ and can be explicitly determined
from \eqref{finaldensity} in Section 6.
Later in our proofs, in order to construct many inequivalent quartic forms, it will be important to work with quartic forms that have no non-trivial stabilizer in ${\rm GL}_2(\mathbb{Z})$.
We note that the stabilizer in ${\rm GL}_2(\mathbb{Z})$ of an element in $V_{\mathbb{R}}$ always contains the identity matrix and its negative, and has size at least $2$.
We will appeal to another important result due to Bhargava and Shankar, which bounds the number of ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of integral binary quartic forms having large stabilizers inside
${\rm GL}_{2}(\mathbb{Z})$.
\begin{prop}[\cite{BaShSel}, Lemma 2.4]\label{BSL2.4}
The number of $\textrm{GL}_{2}(\mathbb{Z})$-orbits of integral binary quartic forms
$F \in V_{\mathbb{Z}}$ such that $D_F \neq 0$ and $\mathcal{H}(F) < X$ whose stabilizer in ${\rm GL}_{2}(\mathbb{Q})$ has size greater than $2$ is $O(X^{3/4 + \epsilon})$.
\end{prop}
\section{Quartic Forms Splitting Modulo a Prime}\label{splitsection}
\textbf{Definition}. We define the subset $V'_{\mathbb{Z}}$ of integral binary quartic forms $V_{\mathbb{Z}}$ to be those forms $F$ that have trivial stabilizer (of size $2$) in ${\rm GL}_{2}(\mathbb{Q})$.
By Proposition \ref{BSL2.4}, $V'_{\mathbb{Z}}$ contains $100\%$ of the ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of quartic forms when ordered by height, so selecting our forms from $V'_{\mathbb{Z}}$ will not alter the $p$-adic densities that we will present later. From now on we will work only with classes of forms in $V'_{\mathbb{Z}}$.
\textbf{Definition}. Assume that $F(x , y)$ is an irreducible quartic form. We say that $F(x , y)$ \emph{splits completely} modulo a prime number $p$, if either
\begin{equation}\label{splitgI}
F(x , y) \equiv m_{0} (x - b_{1}y)(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p),
\end{equation}
or
\begin{equation}\label{splitgII}
F(x , y) \equiv m_{0} y(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p),
\end{equation}
where $m_{0} \not \equiv 0$ (mod $p$), and $b_{1}, b_{2}, b_{3}, b_{4}$ are distinct integers modulo $p$, and further
\begin{equation}\label{assumemore}
b_{2}, b_{3}, b_{4} \not \equiv 0 \, \, \qquad (\textrm{mod} \, \, p).
\end{equation}
In case \eqref{splitgI}, we
call $b_1$, $b_2$, $b_3$, and $b_4$ the \emph{simple roots} of the binary form $F(x , y)$ modulo $p$. In case \eqref{splitgII}, we
call $\infty$, $b_2$, $b_3$, and $b_4$ the \emph{simple roots} of the binary form $F(x , y)$ modulo $p$.
Let $p \geq 5$ be a prime. The $p$-adic density of binary quartic forms that split completely modulo $p$ is given by
\begin{eqnarray}\label{splitdensity}
\mu_{p} &= & \frac{ (p -1) \left( \frac{p (p-1)(p-2) (p-3) }{4!} + \frac{(p-1)(p-2)(p-3)} {3!} \right) }{p^5}\\ \nonumber
& =& \frac{ (p -1)^2 (p+4) (p-2) (p-3) }{4! \, p^5},
\end{eqnarray}
where in the first identity in \eqref{splitdensity}, the summand $\frac{p (p-1)(p-2) (p-3) }{4!}$ in the numerator counts the corresponding forms in \eqref{splitgI} and the summand
$\frac{(p-1)(p-2)(p-3)} {3!}$ counts the corresponding forms in \eqref{splitgII}.
Clearly the factor $p -1$ in the numerator counts the number of possibilities for $m_{0}$ modulo $p$ and the denominator $p^5$ counts all quartic forms with all choices for their five coefficients modulo $p$.
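As a sanity check of \eqref{splitdensity} (purely illustrative and not needed for the proofs; it assumes a standard Python interpreter), the following brute-force enumeration over all $p^{5}$ coefficient vectors for $p = 5$ returns $36$ completely split forms out of $3125$, in agreement with $\frac{ (p -1)^2 (p+4) (p-2) (p-3) }{4! \, p^5} = \frac{36}{3125}$ for $p = 5$.
\begin{verbatim}
# Brute-force check of the density of completely split quartic forms mod p = 5
# (illustrative only).  A form splits completely if it has four distinct simple
# roots in P^1(F_p); when one root is at infinity, the remaining three roots
# are required to be nonzero, as in the definition above.
from itertools import product

p = 5

def splits_completely(a0, a1, a2, a3, a4):
    roots = [b for b in range(p)
             if (a0*b**4 + a1*b**3 + a2*b**2 + a3*b + a4) % p == 0]
    if a0 % p != 0:
        # case (splitgI): no root at infinity, need four distinct roots in F_p
        return len(roots) == 4
    # case (splitgII): the root at infinity must be simple (a1 nonzero) and
    # the three remaining roots must be distinct and nonzero
    if a1 % p == 0:
        return False
    return len(roots) == 3 and all(b != 0 for b in roots)

count = sum(splits_completely(*c) for c in product(range(p), repeat=5))
print(count, p**5)   # prints: 36 3125
\end{verbatim}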
Now assume $F(x , y)$ is an irreducible integral quartic form that splits completely modulo $p$.
For $j\in \{ 1, 2, 3, 4\}$, we define
\begin{equation}\label{defofFb}
F_{b_{j}}(x , y) : = F(p x + b_{j} y, y),
\end{equation}
and additionally in case \eqref{splitgII},
\begin{equation}\label{defofFinf}
F_{\infty}(x , y) := F (p y , x).
\end{equation}
We claim that
the four forms $F_{b_{1}}(x , y)$ (or $F_{\infty}(x,y)$), $F_{b_{2}}(x , y)$, $F_{b_{3}}(x , y)$, and $F_{b_{4}}(x , y)$ are pairwise inequivalent. Indeed, any transformation $B\in{\rm GL}_2({\mathbb Q})$ taking, say $F_{b_i}(x,y)$ to $F_{b_j}(x,y)$ must be of the form $B=\bigl(\begin{smallmatrix}p&b_i\\ 0& 1\end{smallmatrix}\bigr)^{-1}\!A\bigl(\begin{smallmatrix}p &b_j\\ 0& 1\end{smallmatrix}\bigr)$, where $A\in{\rm GL}_2({\mathbb Q})$ stabilizes $F(x,y)$.
Since we assumed $F \in V'_{\mathbb{Z}}$, the $2 \times 2$ matrix $A$ must be the identity matrix or its negative, and so $B= \pm \bigl(\begin{smallmatrix}p&b_i\\ 0& 1\end{smallmatrix}\bigr)^{-1}\bigl(\begin{smallmatrix}p&b_j\\ 0& 1\end{smallmatrix}\bigr)$. But $B\notin{\rm GL}_2({\mathbb Z})$, as $p \nmid (b_i-b_j)$. Therefore, for $i \neq j$,
the quartic forms $F_{b_i}(x,y)$ and $F_{b_j}(x,y)$ are not ${\rm GL}_2({\mathbb Z})$-equivalent.
Similarly in case \eqref{splitgII}, any transformation $B\in{\rm GL}_2({\mathbb Q})$ taking $F_{\infty}(x,y)$ to $F_{b_j}(x,y)$ must be of the form $B= \bigl(\begin{smallmatrix}0&p\\ 1& 0\end{smallmatrix}\bigr)^{-1}\!A\bigl(\begin{smallmatrix}p &b_j\\ 0& 1\end{smallmatrix}\bigr)$, where $A\in{\rm GL}_2({\mathbb Q})$ stabilizes $F(x,y)$.
This change-of-variable matrix does not belong to ${\rm GL}_{2}(\mathbb{Z})$, unless $b_{j}\equiv 0$ (mod $p$).
Therefore, $F_{\infty}(x , y)$, $F_{b_2}(x , y)$, $F_{b_{3}}(x , y)$, and $F_{b_{4}}(x , y)$ are pairwise inequivalent, as long as none of $b_{2}$, $b_{3}$ and $b_{4}$ are a multiple of $p$ (this motivated the extra assumption \eqref{assumemore} in our definition).
Starting with a form $F$ that belongs to $V'_{\mathbb{Z}}$ and splits completely modulo $p$, we can construct $4$ integral quartic forms that are pairwise inequivalent.
Let
$
F(x , y) = a_{0}x^4 + a_{1} x^{3} y +a_{2} x^{2} y^2 + a_{3} x y^3+ a_{4}y^4 \in \mathbb{Z}[x , y],
$
with content $1$ (i.e., the integers $a_{0}, a_{1}, a_{2}, a_{3}, a_{4}$ have no common prime divisor).
If $F(x , y)$ satisfies \eqref{splitgII} then
\begin{equation}\label{deftildeinf}
\tilde{F}_{\infty}(x , y):= \frac{ F_{\infty}(x , y)}{p} \in \mathbb{Z}[x , y],
\end{equation}
where $F_{\infty}(x , y)$ is defined in \eqref{defofFinf}.
Suppose that
\begin{equation}\label{alessp4}
F (b , 1) \equiv 0 \, \, \, (\textrm{mod}\, \, p), \, \, \, \textrm{with}\, \, b \in \mathbb{Z}.
\end{equation}
By \eqref{defofFb},
\begin{equation*}\label{Faei4}
F_{b}(x , y) = F(p x + by , y) = e_{0} x^4 + e_{1} x^{3} y +e_{2} x^{2} y^2 + e_{3} x y^3+ e_{4}y^4,
\end{equation*}
with
\begin{equation}\label{dotss4}
e_{4-j} = p^j \sum_{i=0}^{4-j} a_{i} \, b^{4-i-j} {4-i \choose j},
\end{equation}
for $j=0, 1, 2, 3, 4$.
If $j \geq 1$, clearly $e_{4-j}$ is divisible by $p$. Since $e_{4} = F(b , 1)$, by \eqref{alessp4}, $e_{4}$ is also divisible by $p$. Therefore,
\begin{equation}\label{deftildeb}
\tilde{F_{b}}(x , y): = \frac{F_{b}(x , y)}{p} \in \mathbb{Z}[x , y].
\end{equation}
Since $e_{3} = p f'(b)$, where $f'(X)$ denotes the derivative of polynomial $f(X) = F(X , 1)$, if $b$ is a simple root modulo $p$ then $f'(b)\not \equiv 0\, \, (\textrm{mod}\, p)$ and
\begin{equation}\label{yL4}
\tilde{F_{b}}(x , y) = y^{3} L(x , y)\, \, (\textrm{mod}\, \, p),
\end{equation}
where $L(x , y)= l_{1}x + l_{2}y$ is a linear form modulo $p$, with $l_{1} \not \equiv 0\pmod p$.
We also note that $\mathcal{H}(F_b)$, defined in \eqref{Bash}, as well as the invariants of the form $F_b$, can be expressed in terms of invariants of the form $F$, as $F_b$ is obtained under the action of a $2 \times 2$ matrix of determinant $\pm p$ on $F$. By \eqref{Idet}, \eqref{Jdet}, and \eqref{Hdet}, we have
\begin{eqnarray*}
D_{F_{b}} & =& p^{12} D_F,\\
I_{{F_{b}}} &= & p^4 I_{{F}}, \\
J_{{F_{b}}} &= & p^6 J_{F}, \\
\mathcal{H}\left({F_{b}}\right) & =& \mathcal{H}\left(I_{{F_{b}}}, J_{{F_{b}}} \right) = p^{12} \mathcal{H}(F).
\end{eqnarray*}
After multiplication of the form $F_{b}(x , y)$ by $p^{-1}$ (note that $I$, $J$ and $D$ are homogeneous of degrees $2$, $3$ and $6$, respectively, in the coefficients of the form), we therefore have
\begin{eqnarray}\nonumber
D_{\tilde{F_{b}}} & =& p^{6} D_F\\ \nonumber
I_{\tilde{F_{b}}} &= & p^2 I_{{F}}, \\ \nonumber
J_{\tilde{F_{b}}} &= & p^3 J_{F}, \\
\mathcal{H}\left(\tilde{F_{b}}\right) & =& \mathcal{H}\left(I_{\tilde{F_{b}}}, J_{\tilde{F_{b}}} \right) = p^{6} \mathcal{H}(F). \label{Hoftilde}
\end{eqnarray}
Now let us consider the quartic Thue equation
$$
F(x , y) = m,
$$
where $m = p_{1} p_{2} p_{3} h$, and $p_{1}$, $p_{2}$, and $p_{3}$ are three distinct primes greater than $4$, and $\gcd(h, p_{k}) = 1$, for $k\in \{ 1, 2, 3\}$. We will further assume that the quartic form $F(x , y)$ splits completely modulo $p_{1}$, $p_{2}$, and $p_{3}$. In Lemma \ref{corresponds-sol}, we will construct $64$ integral binary quartic forms $G_{j}(x , y)$, for $1 \leq j \leq 4^3$, and will make a one-to-one correspondence between the set of primitive solutions of $F(x , y) = m$ and the union of the sets of primitive solutions of $G_{j}(x , y) = h$, for $1 \leq j \leq 4^3$. First we need two auxiliary lemmas.
\begin{lemma}\label{lem1corres}
Let $F(x , y) \in \mathbb{Z}[x , y]$ be a binary quartic form that splits completely modulo $p$ and $m = p m_1$, with $p \nmid m_1$. The primitive solutions of the Thue equation $F(x , y) = m$ are in one-to-one correspondence with the union of the sets of primitive solutions to four Thue equations
$$
\tilde{F}_{i}(x , y) = m_1,
$$
where $\tilde{F}_{i}(x , y)$ are defined in \eqref{deftildeinf} and \eqref{deftildeb}, and $i=1, 2, 3, 4$.
\end{lemma}
\begin{proof}
Assume that $(x_{0}, y_{0}) \in \mathbb{Z}^2$ is a solution to $F(x , y) = m = p m_{1}$.
If
$$
F(x , y) \equiv m_{0} (x - b_{1}y)(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p),
$$
then
since
$
p| F(x_0 , y_0)
$,
we have
$$
p| (x_{0}- b_{i} y_0)
$$
for some $i \in \{1, 2, 3, 4\}$. The value of $i$ is uniquely determined by the solution $(x_{0}, y_{0})$, as $b_{j}$'s are distinct modulo $p$. Therefore,
\begin{equation}\label{x0X}
x_{0} = p X_{0} + b_{i} y_{0},
\end{equation}
for some $ X_{0} \in \mathbb{Z}$, and $(X_{0}, y_{0})$ is a solution to
\begin{equation}\label{redmp}
\tilde{F}_{i}(x , y) = \frac{1}{p} F(p x + b_{i} y , y) = m_{1} = \frac{m}{p}.
\end{equation}
Conversely, assume for a fixed $i \in \{1, 2, 3, 4\}$ that $(X_{0}, y_{0}) \in \mathbb{Z}^2$ is a solution to
$$
\tilde{F}_{i}(x , y) = \frac{1}{p} F(p x + b_{i} y , y) = m_{1} = \frac{m}{p}.
$$
First we observe that $p \nmid y_{0}$: otherwise $p$ would divide both $y_{0}$ and $p X_0 + b_{i} y_0$, so that $p^4$ would divide $F(p X_{0} + b_{i} y_{0} , y_{0}) = m$, contradicting the fact that $p^2 \nmid m$.
Now, by construction of the form $\tilde{F}_{i}(x , y)$, the pair $(x_0 , y_{0})$, with
$$
x_{0} = p X_{0} + b_{i} y_{0},
$$
satisfies the equation $F(x , y) = m$. Further, if $(X_{0}, y_{0})$ is a primitive solution of
$\tilde{F}_{i}(x , y) = \frac{m}{p}$, since $p \nmid y_0$, we have $\gcd(x_0 , y_0) = 1$.
Assume that
$$
F(x , y) \equiv m_{0} y (x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p).
$$
The pair $(x_{0} , y_{0}) \in \mathbb{Z}^2$ with $p \nmid y_{0}$ is a primitive solution
of
$$
F(x , y) = p m_1,
$$
if and only if $p \mid (x_0-b_{2}y_0) (x_0-b_{3}y_0)(x_0- b_{4}y_0)$. In this case, for a unique $i \in \{2, 3, 4\}$, we have \eqref{x0X}, and $(X_0, y_0)$ is a primitive solution to the Thue equation \eqref{redmp}.
Similarly, the pair $(x_{1} , y_{1}) \in \mathbb{Z}^2$ with $p \mid y_{1}$ is a primitive solution
of
$$
F(x , y) = p m_1,
$$
if and only if $(Y_1, x_{1})$, with $Y_1 = \frac{y_1}{p}$, is a primitive solution to
$$\tilde{F}_{\infty}(x , y) = \frac{m}{p}.$$
\end{proof}
\begin{lemma}\label{lem2}
If $F(x, y)$ splits completely modulo $p_1$ and $p_2$, then $\tilde{F}_{b}(x , y)$ will also split completely modulo $p_2$, for any simple root $b$ (possibly $\infty$) of $F(x , y)$ modulo $p_1$.
\end{lemma}
\begin{proof}
If
$$
F(x , y) \equiv m_{0} (x - b_1 y) (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1})
$$
and
\begin{equation}\label{ciroots}
F(x , y) \equiv m'_{0} (x - c_1 y) (x - c_2 y) (x - c_3 y) (x - c_4 y) \, \qquad (\textrm{mod} \, \, p_{2}),
\end{equation}
then for any $b \in \{ b_{1}, b_{2}, b_{3}, b_{4}\}$, we have
$$
\tilde{F_{b}}(x , y) \equiv m''_{0}(x - c'_1 y) (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}),
$$
where
$$
c'_{j} \equiv (c_{j} - b)\, p_{1}^{-1} \, \, (\textrm{mod} \, \, p_{2}).
$$
The integers $c'_1, c'_2 , c'_3 , c'_4$ are indeed distinct modulo $p_{2}$, as $c_1, c_2 , c_3 , c_4$ are so and $p_{1}$ is invertible modulo $p_{2}$. We conclude that the quartic form $ \tilde{F}_{b}(x , y)$ splits completely modulo $p_{2}$, as well.
If
$$
F(x , y) \equiv m_{0} y (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1})
$$
and \eqref{ciroots} holds, then
$$
\tilde{F}_{\infty}(x , y) \equiv m''_{0}(x - c'_1 y) (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}),
$$
with $c'_i = c^{-1}_{i}$ modulo $p_2$, where $0$ and $\infty$ are considered to be the inverse of each other modulo $p_2$. Namely, if $c_1 =0$ modulo $p_2$, we get
$$
\tilde{F}_{\infty}(x , y) \equiv m''_{0} y (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}).
$$
If
$$
F(x , y) \equiv m_{0} y (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1})
$$
and
\begin{equation*}
F(x , y) \equiv m'_{0} y (x - c_2 y) (x - c_3 y) (x - c_4 y) \, \qquad (\textrm{mod} \, \, p_{2}),
\end{equation*}
then
$$
\tilde{F}_{\infty}(x , y) \equiv m''_{0}x (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}),
$$
with $c'_i = c^{-1}_{i}$ modulo $p_2$. Therefore, if $F(x, y)$ splits completely modulo $p_1$ and $p_2$, the $\tilde{F}_{b}(x , y)$ will also split completely modulo $p_2$, for any simple root $b$ of $F(x , y)$ modulo $p_1$.
\end{proof}
\begin{lemma}\label{corresponds-sol}
Let $h$ be an integer, and
$p_1$, $p_{2}$, and $p_{3}$ be three distinct primes greater than $4$ that do not divide $h$. Let $F(x, y) \in \mathbb{Z}[x , y]$ be a binary quartic form that splits completely modulo primes $p_{1}$, $p_{2}$, and $p_{3}$. Then there are $64$ binary quartic forms $G_{i}(x , y) \in \mathbb{Z}[x , y]$, with $1 \leq i \leq 64$, such that every primitive solution $(x_{\mathit{l}}, y_{\mathit{l}})$ of the equation $F(x , y)= h \, p_{1} p_{2} p_{3}$ corresponds uniquely to a triple $(j, x_{l, j}, y_{l, j})$, with
$$
j \in \{1, 2, \ldots, 64\},\, \, x_{\mathit{l}, j}, y_{\mathit{l}, j} \in \mathbb{Z}, \, \, \gcd(x_{\mathit{l}, j} , y_{\mathit{l}, j}) =1,
$$
and
$$
G_{j} (x_{\mathit{l}, j} , y_{\mathit{l}, j}) = h.
$$
Furthermore,
\begin{equation*}
\mathcal{H}\left( G_{j} \right) = \left(p_1 p_2 p_3\right)^{6} \mathcal{H}(F),
\end{equation*}
for $j = 1, \ldots, 64$.
\end{lemma}
\begin{proof}
Let $m = p_1 p_2 p_3 h$.
By Lemma \ref{lem1corres}, we may
reduce the Thue equation $F(x , y) = m$ modulo $p_1$ to obtain $4$ quartic Thue equations
\begin{equation}\label{reduceto4}
\tilde{F}_{i}(x , y) = \frac{m}{p_1},
\end{equation}
with $i = 1, 2, 3, 4$, such that every primitive solution of $F(x , y)= h \, p_{1} p_{2} p_{3} = m$ corresponds uniquely to a primitive solution of exactly one of the equations in \eqref{reduceto4}.
By Lemma \ref{lem2}, every
binary quartic form $\tilde{F}_{i}(x , y)$ in \eqref{reduceto4} splits completely modulo $p_2$.
Applying Lemma \ref{lem1corres} modulo $p_2$ to each equation in \eqref{reduceto4}, we construct $4$ binary quartic forms. Therefore, we obtain $4^2$ Thue equations
\begin{equation}\label{reduceto16}
\tilde{F}_{i, k}(x , y) = \frac{m}{p_1p_2},
\end{equation}
with $i, k =1, 2, 3, 4$,
such that every primitive solution $F(x , y)= h \, p_{1} p_{2} p_{3} = m$ corresponds uniquely to a primitive solution of exactly one of the equations in \eqref{reduceto16}.
By \eqref{Hoftilde},
\begin{equation}\label{HofFij}
\mathcal{H}\left( \tilde{F}_{i, k} \right) = \left(p_1 p_2 \right)^{6} \mathcal{H}(F).
\end{equation}
By Lemma \ref{lem2},
each form $\tilde{F}_{i, k}(x , y)$ splits completely modulo $p_3$. We may apply Lemma \ref{lem1corres} once again to each equation in \eqref{reduceto16}. This way we obtain $4^3$ equations
\begin{equation}\label{reduceto64}
G_{j}(x , y) = \frac{m}{ p_1 p_2 p_3} = h.
\end{equation}
The construction of these equations ensures a one-to-one correspondence between the primitive solutions of the equation $F(x , y) = m$ and the union of the sets of the primitive solutions of Thue equations in \eqref{reduceto64}.
By \eqref{Hoftilde} and \eqref{HofFij},
\begin{equation}\label{HofGj}
\mathcal{H}\left( G_{j} \right) = \left(p_1 p_2 p_3\right)^{6} \mathcal{H}(F),
\end{equation}
for $j = 1, \ldots, 64$.
\end{proof}
We note that if $F(x , y)$ is irreducible over $\mathbb{Q}$, its associated forms $G_{j} (x , y)$, which are constructed in the proof of Lemma \ref{corresponds-sol}, will also be irreducible over $\mathbb{Q}$ as all of the matrix actions are rational. Furthermore, the forms $G_{j}(x , y)$ are not constructed as proper subforms of the binary quartic form $F(x , y)$.
Indeed, they are maximal over~${\mathbb Z}_p$ for all $p\notin \{p_1,p_{2},p_3\}$ (being equivalent, up to a unit constant, to $F(x,y)$ over~${\mathbb Z}_p$ in that case), while for $p\in\{p_1,p_2, p_3\}$, we have $p\nmid D_F$, implying $p^6 || D_{G_j}$, and so $G_j(x , y)$ cannot be a subform over ${\mathbb Z}_p$ of any form by equation~(\ref{St6}) (see the definition of a subform in \eqref{defofsubform}).
We remark that the reduction of Thue equations $F(x , y) = m$ modulo prime divisors of $m$ is a classical approach, and some sophisticated applications of it to bound the number of solutions of Thue equations can be found in \cite{Bom, Ste}.
\section{Avoiding Local Obstructions}\label{localsection}
In the previous section, we constructed $4^3$ binary quartic forms $G_{j}(x , y)$ and established Lemma \ref{corresponds-sol}, which corresponds each primitive solution of $F(x , y) = h p_1 p_2 p_3$ to a primitive solution of one of the equations $G_{j}(x , y) = h$, for $1 \leq j \leq 4^3$.
Using Proposition \ref{maineq4}, we will obtain a small upper bound for the number of integral solutions to the equation $F(x , y) = m = p_{1} p_{2} p_{3} h$, which will lead us to conclude that some of the newly constructed Thue equations $G_{j}(x , y)= h$ cannot have any solutions.
In this section we will work with a proper subset of the set of all quartic forms to construct forms such that the associated Thue equations have no local obstructions to solubility.
We will impose some extra congruence conditions in our choice of forms $F(x , y)$, resulting in construction of $4^3$ forms $G_i(x , y)$ that locally represent $h$. For each prime $p$, we will make some congruence assumptions modulo $p$ and present $p$-adic densities for the subset of quartic forms that satisfy our assumptions to demonstrate that we will be left with a subset of $V_{\mathbb{Z}}$ with positive density.
Before we divide up our discussion modulo different primes, we note that by \eqref{defofsubform},
if a form is non-maximal, then either it is not primitive, or after an ${\rm SL}_2({\mathbb Z})$-transformation it is of the form $a_0x^4+a_1x^{3}y+a_2 x^2 y^2 +a_3 xy^3+a_4 y^4$, where $p^i\mid a_i$, $i=0,1,2, 3, 4$, for some prime~$p$. In particular, integral binary quartic forms that are non-maximal must factor modulo some prime $p$ as a constant times the fourth power of a linear form. It turns out that all integral binary quartic forms that are discussed in this section are indeed maximal.
\subsection{Quartic Forms Modulo $2$}
To ensure that a quartic Thue equation $F(x , y) =h$ has a solution in $\mathbb{Z}_2$, it is sufficient to assume that
$$
F(x , y) \equiv L_1(x,y) L_2(x , y)^{3} \, \, (\textrm{mod}\, \, 2^{4}),
$$
where $L_1(x,y)$ and $L_2(x,y)$ are linearly independent linear forms modulo $2$. The system of two linear equations
$$
L_1(x,y) \equiv h \, \, (\textrm{mod}\, 2^{4})
$$
$$
L_2(x , y) \equiv 1 \, \, (\textrm{mod}\, 2^{4}),
$$
has a solution and therefore, by Hensel's Lemma, $F(x , y) = h$ is soluble in $\mathbb{Z}_2$.
The 2-adic density of quartic forms $F(x , y)$ such that $
F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}$ modulo $2^{4}$ is
\begin{equation}\label{2-adicdensity}
\frac{6}{2^5} = \frac{3}{16},
\end{equation}
where the linear forms $L_1$ and $L_2$ can be chosen from the three linear forms $x$, $y$, or $x + y$.
It is indeed necessary to consider integral quartic forms modulo $16$, as a $2$-adic unit $u$ is a fourth power in $\mathbb{Q}_{2}$ if and only if $u \equiv 1$ modulo $16 \mathbb{Z}_2$.
More specifically, assume that $(x_0: y_0: z_0)$ is a $\mathbb{Z}_2$-point on the projective curve $C: hz^4 = F(x , y)$ and
$u = z_0^4$, with $z_0$ a unit in $\mathbb{Z}_2$. Therefore, $z_0 = 1 +2 t$ for some $t \in \mathbb{Z}_2$ and
$$
z_{0}^{4} = (1 + 2 t)^4 = 1 + 8\, t(3t+1) + 16\left(2t^{3} + t^{4}\right) \equiv 1 \, \, (\textrm{mod}\, \, 16),
$$
since $t(3t+1)$ is even for every $t \in \mathbb{Z}_2$.
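This necessary condition can also be confirmed numerically (a purely illustrative check, assuming a standard Python interpreter): every odd residue raised to the fourth power is congruent to $1$ modulo $16$.
\begin{verbatim}
# Every odd residue to the fourth power is 1 modulo 16 (illustrative check).
assert all(pow(z, 4, 16) == 1 for z in range(1, 16, 2))
\end{verbatim}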
\subsection{Quartic Forms Modulo Large Primes}\label{largeprimesubsection}
Let us consider the curve $C: h z^4 = F(x , y)$ of genus~$g = 3$ over the finite field $\mathbb{F}_{q}$ of order $q$. By the Leep--Yeomans generalization of the Hasse--Weil bound in \cite{LeYe}, the number $N$ of points on the curve $C$ satisfies the inequality
\begin{equation}\label{HWLeYe}
\left| N - (q+1) \right| \leq 2g \sqrt{q}.
\end{equation}
Let $p$ be a prime $p> (2g+1)^2 = 49$, $p \not \in \{p_1, p_2, p_3\}$, $p \nmid h$. Since $p+1 \geq 2g \sqrt{p} +1$, the lower bound in \eqref{HWLeYe} is nontrivial, implying that there must be an $\mathbb{F}_p$-rational point on the curve $h z^4 = F(x , y)$.
If there exists $a \in {\mathbb Z}$ such that
\begin{equation}\label{SimpleRoot}
F(x, y) \equiv (x- ay) A(x , y)\, \, (\textrm{mod}\, p),
\end{equation}
with $A(x , y)$ an integral cubic binary form for which
\begin{equation}\label{SimpleRoota}
A(a , 1) \not \equiv 0 \, \, (\textrm{mod}\, p),
\end{equation}
then
by Hensel's lemma,
the smooth $\mathbb{F}_p$-point $(x_0: y_0 : z_0) = (a : 1 : 0)$ will lift to a
$\mathbb{Z}_p$-point on the curve $h z^4 = F(x , y)$.
Similarly, if
\begin{equation*}
F(x, y) \equiv y A(x , y)\, \, (\textrm{mod}\, p),
\end{equation*}
with $A(x , y)$ an integral cubic binary form for which
\begin{equation*}
A(1 , 0) \not \equiv 0 \, \, (\textrm{mod}\, p),
\end{equation*}
the smooth $\mathbb{F}_p$-point $(x_0: y_0 : z_0) = (1 : 0 : 0)$ will lift to a
$\mathbb{Z}_p$-point on the curve $h z^4 = F(x , y)$.
A quartic form over ${\mathbb F}_p$ that has a triple root must have a simple root, as well.
So we will assume
that $F(x,y)$ does not factor as $cM(x,y)^2$ modulo~$p$ for any quadratic binary form $M(x,y)$ and constant $c$ over ${\mathbb F}_p$. By definition, these forms are maximal over ${\mathbb Z}_p$. It follows from this assumption on $F(x ,y)$ that the curves $hz^4=F(x,y)$ are irreducible over ${\mathbb F}_p$ and there is at least one smooth $\mathbb{F}_p$-rational point on $h z^4 = F(x , y)$, which lifts to a $\mathbb{Z}_p$-point.
We conclude that the integral quartic forms $G_{j}(x, y)$, constructed as described in Section \ref{splitsection} from such a form $F(x , y)$, all represent $h$ in ${\mathbb Z}_p$ for primes $p> (2g+1)^2$ as well.
The $p$-adic density of binary quartic forms that are primitive and not constant multiples
of the second powers of quadratic binary forms modulo~$p$ is
\begin{equation}\label{largedensity}
1 - \frac{ (p-1)(p+1) p }{ 2p^5} - \frac{ (p-1)(p+1) }{p^5},
\end{equation}
where the summand $-\frac{ (p-1)(p+1) p }{ 2p^5}$ eliminates forms of the shape $ c M^2(x, y) = c (x-b_{1}y)^2 (x-b_{2}y)^2 $ or $ c M^2(x, y) = c (x-b_{1}y)^2 y^2$ (mod $p$), and the summand $- \frac{ (p-1)(p+1) }{p^5}$ eliminates forms of the shape $ c L(x , y)^4$ (mod $p$), with $L(x , y)$ a linear form modulo $p$.
\subsection{Quartic Forms Modulo Special Odd Primes}\label{specialoddprime}
For $p \mid h$ we will assume that
$$
F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p),
$$
where $L_1(x,y)$ and $L_2(x,y)$ are two linearly independent linear forms modulo $p$.
To find ${\mathbb Z}_p$-points on the curve $C: hz^4 = F(x , y)$, we consider the equation $F(x , y)=0$ (mod $p$).
Since $L_1(x , y)$ and $L_2(x , y)$ are linearly independent modulo $p$, the system of linear equations
$$
L_1(x,y) \equiv 0 \textrm{ (mod} \, \, p)
$$
and
$$
L_2(x,y) \equiv 0 \textrm{ (mod} \, \, p)
$$
has exactly one solution.
Since the line $L_1(x , y)=0$ contains at least three points over ${\mathbb F}_p$, the congruence $F(x , y)\equiv 0$ (mod $p$) has at least two solutions over ${\mathbb F}_p$ that provide smooth ${\mathbb F}_p$-points on the curve $C: hz^4 = F(x , y)$ (namely, all points with $L_1(x , y) = 0$ other than the intersection point of the two lines defined by $L_1(x , y)$ and $L_2(x , y)$). By Hensel's Lemma, these smooth points lift to ${\mathbb Z}_p$-points. Thus the equations $F(x,y)=h$
and $G_j(x,y)=h$ will be locally soluble modulo $p$.
Similarly, for every odd prime $p \not \in \{ p_{1}, p_2, p_{3}\}$, with $ p < 49$ and $p \nmid h$ (these are the primes not considered in Subsection \ref{largeprimesubsection}), we will assume that
$$
F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p),
$$
where $L_1(x,y)$ and $L_2(x,y)$ are linear forms that are linearly independent modulo~$p$.
This condition implies that $F(x , y) \equiv h \textrm{ (mod} \, \, p)$ has solutions in integers, for $L_1(x,y)$ and $L_2(x,y)$ are linearly independent and therefore we can find $x_{0} , y_{0} \in \mathbb{Z}$ satisfying the following system of linear equations:
$$
L_1(x_0,y_0) \equiv h \textrm{ (mod} \, \, p)
$$
and
$$
L_2(x_0,y_0) \equiv 1 \textrm{ (mod} \, \, p).
$$
The smooth ${\mathbb F}_p$-point $(x_0 : y_0 : 1)$ lifts to a ${\mathbb Z}_p$-point on the curve $C: hz^4 = F(x , y)$.
The $p$-adic density of primitive binary quartic forms of the shape
\begin{equation}\label{modhigherp}
L_1(x,y) L_2(x , y)^{3}\, \, (\textrm{mod} \, \, p)
\end{equation}
where $L_1(x,y)$ and $L_2(x,y)$ are linearly independent linear forms modulo $p$,
is
\begin{equation}\label{specialdensity}
\frac{(p+1)p(p-1)}{p^{5}}.
\end{equation}
The above density is calculated by considering the unique factorization of the form $F$ modulo $p$ as
$$
m_{0} (x - b_{1}y)(x - b_{2} y)^3,
$$
with $m_{0}$ non-zero, and $b_{1}$ and $b_{2}$ distinct roots (possibly $\infty$) modulo $p$.
Such forms are maximal over ${\mathbb Z}_p$.
\section{Completing the proof}\label{completesection}
For $i=1, 2, 3$, let $p_{i}$ be the $i$-th prime greater than $4$ such that $p_{i}\nmid h$ and set
$$
m = h\, p_1 p_2 p_3,
$$
and
$$
\mathcal{P} = \{ p_1, p_2, p_3\}.
$$
For example, if $h=1$, we will choose $p_1 = 5$, $p_2 = 7$, and $p_3 = 11$.
Let $F(x , y)$ be a maximal primitive irreducible integral binary quartic form which has a trivial stabilizer in ${\rm GL}_{2}(\mathbb{Q})$, with
$$
\left|D_F \right| > (3.5)^{24} \, 4^{8} \left( |h| \prod_{i=1}^{3} p_{i} \right)^{12}.
$$
We note that the above assumption on the size of the discriminant of quartic forms excludes only finitely many ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of quartic forms (see \cite{BM, EG1}).
In order to ensure that $h$ is represented by $F$ in $\mathbb{R}$, we assume that the leading coefficient of $F$ is positive if $h$ is positive and negative otherwise. Assume further that $F(x , y)$ splits completely modulo the primes $p_{1}$, $p_2$, $p_{3}$.
Assume that for every prime $p \not \in \{ p_{1}, p_2, p_3\} = \mathcal{P}$, with $ p < 49$, we have
$$
F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p),
$$
where $L_1(x,y)$ and $L_2(x,y)$ are linear forms that are linearly independent modulo~$p$.
Finally, assume, for each prime $p > 49$,
that $F(x,y)$ does not factor as $cM(x,y)^2$ modulo~$p$ for any quadratic binary form $M(x,y)$ and constant $c$ over ${\mathbb F}_p$.
By Proposition \ref{maineq4}, and taking $\epsilon = \frac{1}{12}$, there are at most
\[
36 -16\mathtt{i} + \frac{4-\mathtt{i}}{\frac{1}{4}} = 52 - 20 \mathtt{i}
\]
primitive solutions to the equation
$$
F(x , y) = m = h\, p_1 p_2 p_3,
$$
where $2 \, \mathtt{i}$ is the number of non-real roots of the polynomial $F(X, 1)$.
By Lemma \ref{corresponds-sol}, each primitive solution $(x_{0}, y_{0})$ of $F(x , y) = m$ corresponds uniquely to a solution of $G_{i}(x , y) = h$, where $1 \leq i \leq 4^3$ is also uniquely determined by
$(x_{0}, y_{0})$.
Since
$$
4^3 - 52 + 20 \mathtt{i} = 12 + 20 \mathtt{i} \geq 12
$$
we conclude that at least $12$ of the $64$ equations $G_{i}(x , y) = h$ have no solutions in integers $x, y$.
By \eqref{HofGj}, and Theorems \ref{BaSh-thm1.7} and \ref{BaSh-thm2.11}, and since by \cite{BaShSel} the number of ${\rm GL}_{2}({\mathbb Z})$-classes of integral binary quartic forms of height at most $X$ grows like a constant multiple of $X^{5/6}$ (so that multiplying heights by $\left(p_1 p_2 p_3\right)^{6}$, as in \eqref{HofGj}, costs a factor $\left(p_1 p_2 p_3\right)^{5}$ in the count),
we obtain the following lower bound $\mu$ for the density of integral quartic forms that locally represent $h$, but do not represent it globally:
\begin{equation}\label{finaldensity}
\mu = \frac{12 } {\left(p_1 p_2 p_3\right)^{5}} \, \, \delta_{2} \prod_{p \in \mathcal{P}} \sigma(p) \prod_{p> 49, \, p\not\in \mathcal{P}, \, p\nmid h} \lambda(p) \prod_{p \, \textrm{odd}, \, p\not\in \mathcal{P}, \, p \mid h \, \textrm{or}\, p < 49} \gamma(p),
\end{equation}
where, via \eqref{splitdensity},
\eqref{2-adicdensity},
\eqref{largedensity},
\eqref{specialdensity},
$$\delta_2 = \frac{3}{16},$$
$$\sigma(p)= \frac{ (p -1)^2 (p+4) (p-2) (p-3) }{4! \, p^5},$$
\begin{equation}\label{lambdacal}
\lambda(p) = 1 - \frac{ (p-1)(p+1) p }{ 2p^5} - \frac{ (p-1)(p+1) }{p^5},
\end{equation}
and
$$\gamma(p) = \frac{(p+1)p(p-1)}{p^{5}}.$$
In \eqref{finaldensity} all products are over rational primes.
For all but finitely many primes $p$, the density
$
\lambda(p)$
in \eqref{lambdacal}
contributes to the product in \eqref{finaldensity}. Since
$$
\prod_p \left(1 - \frac{ (p-1)(p+1)(p+2)}{ 2p^5}\right)
$$
is a convergent Euler product, the lower bound $\mu$ is a real number satisfying $0 <\mu <1$.
\section*{Acknowledgements} I am grateful to the anonymous referee for their careful reading of an earlier version of this manuscript and insightful comments. I would like to thank Arul Shankar for very helpful conversations and for answering my questions, especially regarding the height $\mathcal{H}(I , J)$, which is the key tool in the present paper.
I would also like to thank Mike Bennett and Manjul Bhargava for their insights and suggestions.
This project was initiated during my visit to the
Max Planck Institute for Mathematics, in Bonn, in the academic year 2018-2019. I acknowledge the support from the MPIM. In different stages of this project, my research has been partly
supported by the National Science Foundation award DMS-2001281 and by the
Simons Foundation Collaboration Grants, Award Number 635880.
\end{document} |
\begin{document}
\title{Generic flows on $3$-manifolds}
\author{Carlo~\textsc{Petronio}}
\maketitle
\begin{abstract}
\noindent MSC (2010): 57R25 (primary); 57M20, 57N10, 57R15 (secondary).

\noindent
We provide a combinatorial presentation of the
set $\calF$ of 3-dimensional \emph{generic flows}, namely the set
of pairs $(M,v)$ with $M$ a compact oriented $3$-manifold
and $v$ a nowhere-zero vector field on $M$ having generic behaviour along $\partial M$,
with $M$ viewed up to diffeomorphism and $v$ up to homotopy on $M$ fixed on $\partial M$.
To do so we introduce a certain class $\calS$ of finite 2-dimensional polyhedra with
extra combinatorial structures, and some moves on $\calS$, exhibiting
a surjection $\varphi:\calS\to\calF$ such that $\varphi(P_0)=\varphi(P_1)$ if
and only if $P_0$ and $P_1$ are related by the moves. To obtain this
result we first consider the subset $\calF_0$ of $\calF$ consisting of flows
having all orbits homeomorphic to closed segments or points,
constructing a combinatorial counterpart $\calS_0$ for $\calF_0$ and
then adapting it to $\calF$.
\end{abstract}
\noindent
Combinatorial presentations of $3$-dimensional topological categories, such
as the description of closed oriented $3$-manifolds via surgery along framed links in $S^3$,
and many more, have proved crucial for the theory of quantum invariants, initiated
in~\cite{RT} and~\cite{TV} and now one of the main themes of geometric topology.
In this paper we provide one such presentation for the set $\calF$ of pairs $(M,v)$ with
$M$ a $3$-manifold and $v$ a flow
having generic behaviour on $\partial M$, viewed up to homotopy fixed on $\partial M$. This extends the presentation
of closed combed $3$-manifolds contained in~\cite{LNM}, and it is based on a generalization
of the notion of \emph{branched spine}, introduced there as a combination
of the definition of special spine due to Matveev~\cite{Matveev:AAM}
with the concept of branched surface introduced by Williams~\cite{Williams}, already
partially investigated by Ishii~\cite{Ishii} and Christy~\cite{Christy}.
A \emph{presentation} here is as usual meant as a constructive surjection onto $\calF$ from a set
of finite combinatorial objects, together with a finite set of combinatorial
moves on the objects generating the equivalence relation induced by the surjection.
To get our presentation we will initially restrict to generic flows having all orbits
homeomorphic to points or to segments, viewed first up to diffeomorphism and then up to homotopy,
and we will carefully describe their combinatorial counterparts.
A restricted type of generic flows on manifolds with boundary
was actually already considered in~\cite{LNM}, but two such flows could never
be glued together along boundary components. On the contrary, as we will point out in detail in
Remark~\ref{glue:rem}, using the flows we consider here one can develop a theory
of cobordism and hence, hopefully, a TQFT in the spirit of~\cite{Turaev}.
Another reason why we expect that our
encoding of generic flows might have non-trivial applications is that
the notion of branched spine was one of the combinatorial tools
underlying the theory of quantum hyperbolic invariants of
Baseilhac and Benedetti~\cite{BB1,BB2,BB3}.
\textsc{Acknowledgements} The author profited from several inspiring discussions with Riccardo Benedetti.
\section{Generic flows, streams, and stream-spines}
In this section we define the topological objects that we will deal with
in the paper and we introduce the combinatorial objects
that we will use to encode them. We then describe our first representation result,
for manifolds with generic traversing flows (that we call \emph{streams}) viewed up
to diffeomorphism.
\subsection{Generic flows}
Let $M$ be a smooth, compact, and oriented $3$-manifold with non-empty boundary,
and let $v$ be a nowhere-vanishing vector field on $M$. Throughout this paper we will assume
the following genericity condition on the tangency of $v$ to $\partial M$, first discussed by Morin~\cite{Morin}:
\begin{itemize}
\item[(\textbf{G1})]
The field $v$ is tangent to $\partial M$ only along a union $\Gamma$ of circles,
and $v$ is tangent to $\Gamma$ itself at isolated points only;
moreover, at the two sides on $\Gamma$ of each of these points,
$v$ points to opposite sides of $\Gamma$ on $\partial M$.
\end{itemize}
To graphically illustrate the situation, we introduce some terminology
that we will repeatedly employ in the rest of the paper:
\begin{itemize}
\item We call \emph{in-region} (respectively, \emph{out-region})
the union of the components of $(\partial M)\setminus\Gamma$
on which $v$ points towards the interior (respectively, the exterior) of $M$;
\item If $A$ is a point of $\Gamma$ we will say that
$A$ is \emph{concave}
if at $A$ the field $v$ points from the out-region to the in-region,
and \emph{convex} if it points
from the in-region to the out-region; this terminology is borrowed from~\cite{LNM} and
is motivated by the shape of the orbits of $v$ near $A$, see Fig.~\ref{concave/convex:fig};
\begin{figure}\label{concave/convex:fig}
\end{figure}
\item A point $A$ of $\Gamma$ at which $v$ is tangent to $\Gamma$
will be termed \emph{transition} point;
as one easily sees, there are up to
diffeomorphism only $2$ local models for the field $v$ near $A$,
as shown in Fig.~\ref{transition/types:fig}.
\begin{figure}\label{transition/types:fig}
\end{figure}
\end{itemize}
The next result records obvious facts and two less obvious ones:
\begin{prop}\label{trans:fate:prop}
Let $A$ be a point of $\partial M$. Then, depending on where $A$ lies,
the orbit of $v$ through $A$ extends as follows:
\begin{center}
\begin{tabular}{l|l}
$A$ in the in-region & Only forward \\
$A$ in the out-region & Only backward \\
$A$ a concave point & Both forward and backward \\
$A$ a convex point & Neither forward nor backward \\
$A$ a concave-to-convex transition point & Only backward\\
$A$ a convex-to-concave transition point & Only forward
\end{tabular}
\end{center}
\end{prop}
\begin{proof}
The result is evident except for orbits through the transition points.
To deal with them
we first analyze what the orbits would be if $v$ were projected
to $\partial M$, which we do in Fig.~\ref{transition/orbits/first:fig}.
\begin{figure}\label{transition/orbits/first:fig}
\end{figure}
The picture shows that at the concave-to-convex transition points the orbit
of the projection of $v$ lies in the out-region, which implies that the
orbit of $v$ extends backward but not forward, while at the
convex-to-concave transition points the opposite happens.
\end{proof}
From now on an orbit of $v$ reaching a concave-to-convex transition point or
leaving from a convex-to-concave transition point
will be termed \emph{transition orbit}.
\subsection{Streams}
Our main aim in this paper is to provide a combinatorial presentation of the set of generic flows on $3$-manifolds
up to homotopy fixed on the boundary, but to achieve this aim we first need to somewhat restrict the
class of flows we consider and the equivalence relation on them.
Informally, we call \emph{stream} on a $3$-manifold $M$ a vector field $v$
satisfying (G1) such that, in addition, all the orbits of $v$ start and end on $\partial M$, and
the orbits of $v$ tangent to $\partial M$ are generic with respect to each other.
More precisely, $v$ is a stream on $M$ if it satisfies the conditions (G1)-(G4), with:
\begin{itemize}
\item[(\textbf{G2})]
Every orbit of $v$ is either a single point (a convex point of $\Gamma$)
or a closed arc with both ends on $\partial M$;
\item[(\textbf{G3})]
The transition orbits are tangent to $\partial M$ at their transition point only.
\end{itemize}
For the next and last condition we note that if an arc of an orbit of $v$ has
ends $A$ and $B$ contained in the interior of $M$ then the parallel transport
along $v$ defines a linear bijection
from the tangent space to $M$ at $A$ to that at $B$. We then require the following:
\begin{itemize}
\item[(\textbf{G4})]
Each orbit of $v$ is tangent to $\partial M$ at two points at most;
if an orbit of $v$ is tangent to $\partial M$ at two points $A$ and $B$,
that necessarily are concave points of $\Gamma$ by conditions (G2) and (G3),
then the tangent directions to $\Gamma$ at $A$ and at $B$ are transverse to
each other under the bijection defined by the parallel transport along $v$.
\end{itemize}
This last condition is illustrated in Fig.~\ref{transverse:fig}.
\begin{figure}\label{transverse:fig}
\end{figure}
We will henceforth denote by $\calF_0^*$ the set of pairs $(M,v)$ with
$M$ an oriented, compact, connected $3$-manifold and $v$ a stream on $M$,
up to diffeomorphism.
\subsection{Stream-spines}
We now introduce the objects that will eventually be shown to be the combinatorial
counterparts of streams on smooth oriented $3$-manifolds.
As above, stating all the requirements takes some time and involves some new terminology.
We will introduce, one at a time, three conditions
(S1), (S2), (S3) for a compact and connected $2$-dimensional polyhedron $P$, the combination of
which will constitute the definition of a \emph{stream-spine}. We begin with the following:
\begin{itemize}
\item[(\textbf{S1})] A neighbourhood of each point of $P$ is homeomorphic to one of the $5$ models
of Fig.~\ref{local/stream/spine:fig}.
\begin{figure}\label{local/stream/spine:fig}
\end{figure}
\end{itemize}
This condition implies that $P$ consists of:
\begin{enumerate}
\item Some open surfaces, called \emph{regions},
each having a closure in $P$ which is a compact surface with possibly immersed boundary;
\item Some \emph{triple lines}, to which three regions are locally incident;
\item Some \emph{single lines}, to which only one region is locally incident;
\item A finite number of points, called \emph{vertices}, to which six regions are locally incident;
\item A finite number of points, called \emph{spikes}, to which both a triple and a single line are incident.
\end{enumerate}
We note that a polyhedron satisfying condition (S1) is simple according
to Matveev~\cite{Matveev:AAM}, but not almost-special if single lines exist.
Our next condition was first introduced in~\cite{BP:Manuscripta};
to state it we define a \emph{screw-orientation} along an arc of triple line of $P$ as an orientation
of the arc together with a cyclic ordering of the three germs of regions of $P$ incident
to the arc, viewed up to simultaneous reversal of both, as in
Fig.~\ref{screw:fig}-left.
\begin{figure}\label{screw:fig}
\end{figure}
\begin{itemize}
\item[(\textbf{S2})] Along each triple line of $P$ a screw-orientation is defined in such
a way that at each vertex the screw-orientations are as in Fig.~\ref{screw:fig}-right.
\end{itemize}
We now give the last condition of the definition of stream-spine:
\begin{itemize}
\item[(\textbf{S3})]
Each region of $P$ is orientable, and it is endowed with a specific orientation, in such a way that no
triple line is induced the same orientation three times by the regions incident to it.
\end{itemize}
We will say that two stream-spines are \emph{isomorphic} if they are related by a PL homeomorphism
respecting the screw-orientations along triple lines and the orientations of the regions, and
we will denote by $\calS_0$ the set of all stream-spines up to isomorphism.
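
Although all the constructions in this paper are carried out by hand, the combinatorial data of a stream-spine can be encoded directly on a computer. The following Python sketch is a deliberately simplified illustration (the names and the representation of the incidence data are choices made here, not part of the definition); it records the orientations induced by the regions on each triple line and checks condition (S3).
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TripleLine:
    # (S2): a screw-orientation = an orientation of the line plus a cyclic
    # order of the three incident region germs, up to simultaneous reversal.
    incident: Tuple[str, str, str]   # cyclic order of incident region names
    induced: Tuple[int, int, int]    # +1/-1 orientation induced by each region

    def satisfies_S3(self) -> bool:
        # (S3): the three incident regions must not all induce the same
        # orientation on the triple line.
        return len(set(self.induced)) > 1

@dataclass
class StreamSpine:
    regions: List[str]               # oriented regions, listed by name
    triple_lines: List[TripleLine]
    single_lines: List[str] = field(default_factory=list)
    vertices: int = 0
    spikes: int = 0

    def check_S3(self) -> bool:
        return all(t.satisfies_S3() for t in self.triple_lines)

spine = StreamSpine(regions=["R1", "R2", "R3"],
                    triple_lines=[TripleLine(("R1", "R2", "R3"), (+1, +1, -1))])
print(spine.check_S3())   # True
\end{verbatim}
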
\subsection{Stream carried by a stream-spine}
In this subsection we will show that each stream-spine uniquely defines an oriented smooth
manifold and a stream on it. To begin we take a compact polyhedron $P$ satisfying condition
({S1}) of the definition of stream-spine, namely locally appearing as in
Fig.~\ref{local/stream/spine:fig}. We will say that an embedding of $P$ in a
$3$-manifold $M$ is \emph{branched} if the following happens upon identifying $P$ with
its image in $M$ (see Fig.~\ref{branched/embedding:fig}):
\begin{figure}\label{branched/embedding:fig}
\end{figure}
\begin{itemize}
\item Each region of $P$ has a well-defined
tangent plane at every point;
\item If a point $A$ of $P$ lies on a triple line but is neither a vertex nor a spike,
the tangent planes at $A$ to the $3$ regions of $P$ locally incident to $A$ coincide,
and not all the $3$ regions of $P$ locally project to one and the same half-plane of this tangent plane;
\item At a vertex $A$ of $P$ the tangent planes at $A$ to the $6$ regions of $P$ locally incident to $A$ coincide;
\item At a spike $A$ of $P$ the tangent planes at $A$ to the $2$ regions of $P$ locally incident to $A$ coincide.
\end{itemize}
\begin{prop}\label{spine:to:stream:prop}
To any stream-spine $P$ there correspond a smooth compact oriented $3$-manifold $M$ and
a stream $v$ on $M$ such that $P$ embeds in a branched fashion in $M$, the field
$v$ is everywhere positively transversal to $P$, and $M$ is homeomorphic to a regular
neighbourhood of $P$ in $M$; the pair $(M,v)$ is well-defined up to oriented diffeomorphism, therefore
setting $\varphi_0^*(P)=(M,v)$ one gets a well-defined map $\varphi_0^*:\calS_0\to\calF_0^*$.
\end{prop}
\begin{proof}
Our first task is to show that $P$ thickens in a PL sense to a well-defined
oriented manifold $M$ (later we will need to describe a smooth structure for
$M$ and the field $v$). This argument extends that of~\cite{BP:Manuscripta}.
Let us denote by $U$ a regular neighbourhood in $P$ of the union
of the triple lines. We observe that $U$ can be seen as a union of fragments as in
Fig.~\ref{thicken:fig}-top, that we thicken
\begin{figure}\label{thicken:fig}
\end{figure}
as shown in the bottom part of the same figure, giving each block the
orientation such that the screw-orientations along the
portions of triple lines of $P$ within each block are positive.
Note that on the boundary of each block there are some T-shaped regions and that
some rectangles are highlighted. Following the
way $U$ is reassembled from the fragments into which it was decomposed, we can now
assemble the blocks by gluing together the T's on their boundary.
(Note that the gluing between two T's need not identify the vertical legs to each other,
so each T should actually be thought of as a Y: the three legs play symmetric r\^oles.)
Since the gluings automatically reverse the orientation, the result is an
oriented manifold, on the boundary of which we have some highlighted strips,
each having the shape of a rectangle or of an annulus.
Now we turn to the closure in $P$ of the complement of $U$, that we denote by $S$.
Of course $S$ is a surface with boundary, and on $\partial S$ we can highlight the
arcs and circles shared with $U$. (The rest of $\partial S$ consists of arcs lying on single lines of $P$.)
We then take the product $S\times I$ ---this is a crucial choice that will be discussed below---
and note that the highlighted arcs and circles on $\partial S$ give
highlighted rectangles and annuli on $\partial (S\times I)$. We are only left to glue these
rectangles and annuli to those on the boundary of the assembled blocks,
respecting the way $S$ is glued to $U$ and making sure the orientation is reversed.
The result is the required manifold $M$.
We must now explain how to smoothen $M$ and how to choose the stream $v$. Away from the triple and single lines of $P$
the manifold $M$ is the product $S\times I$ with $S$ a surface, so it is sufficient to
smoothen $S$ and to define $v$ to be parallel to the $I$ factor and positively transversal to $S$.
(This justifies our choice of thickening $S$ as a trivial rather than some other $I$-bundle.)
Along the triple and single lines of $P$ we extend this construction as suggested
in a cross-section in Fig.~\ref{smooth/thick:fig}.
\begin{figure}\label{smooth/thick:fig}
\end{figure}
Note that a triple line of $P$ gives rise to a concave tangency
line of $v$ to $\partial M$, and that a single line of $P$ gives rise to a
convex tangency line. To conclude we must illustrate the extension of the construction of $v$ near
vertices and near spikes, which we do in two examples in Fig.~\ref{vert/spike/thick:fig}.
\begin{figure}\label{vert/spike/thick:fig}
\end{figure}
In the figure we represent $v$ by showing some of its orbits. Note that:
\begin{itemize}
\item In both cases the local configurations of $v$ near $\partial M$ are as in condition (G1) of the definition of stream;
\item The orbits of $v$ are closed arcs or points, as in condition (G2);
\item To a vertex of $P$ there corresponds an orbit of $v$ that is tangent to $\partial M$ at two points,
in a concave fashion and respecting the transversality condition (G4);
\item To a spike of $P$ there corresponds a transition orbit of $v$ satisfying condition (G3).
\end{itemize}
This shows that $v$ is indeed a stream on $M$. Since the construction of $(M,v)$ is uniquely
determined by $P$, the proof is complete.
\end{proof}
\subsection{The in-backward and the out-forward\\
stream-spines of a stream}
In this subsection we prove that the construction of Proposition~\ref{spine:to:stream:prop}
can be reversed, namely that the map $\varphi_0^*:\calS_0\to\calF_0^*$ is bijective.
More exactly, we will see that the topological
construction has two inverses that are
equivalent to each other ---but not obviously so. If $v$ is a stream on a $3$-manifold $M$ we define:
\begin{itemize}
\item The \emph{in-backward} polyhedron associated to $(M,v)$
as the closure of the in-region of $\partial M$
union the set of all points $A$ such that
there is an orbit of $v$ going from $A$ to a concave or transition point of $\partial M$;
\item The \emph{out-forward} polyhedron associated to $(M,v)$
as the closure of the out-region of $\partial M$ union the set of all points $A$ such that
there is an orbit of $v$ going from a concave or transition point of $\partial M$ to $A$.
\end{itemize}
\begin{prop}\label{stream:to:spine:prop}\ \\
\begin{itemize}
\item Let $v$ be a stream on $M$. Then the
in-backward and out-forward polyhedra associated to $(M,v)$
satisfy condition (S1) of the definition of stream-spine;
moreover each of their regions shares some point with the in-region or with the out-region of $\partial M$, and
it can be oriented so that at these points the field $v$ is positively transversal to it;
with this orientation on each region, the
in-backward and out-forward polyhedra associated to $(M,v)$ are stream-spines, they
are isomorphic to each other and
via Proposition~\ref{spine:to:stream:prop} they both define
the pair $(M,v)$.
\item If $P$ is a stream-spine and $(M,v)$ is the associated manifold-stream pair
as in Proposition~\ref{spine:to:stream:prop}, then the
in-backward and out-forward polyhedra associated to $(M,v)$
are isomorphic to $P$.
\end{itemize}
\end{prop}
\begin{proof}
Most of the assertions are easy, so we confine ourselves to the main points.
It is first of all obvious that away from the special orbits of $v$ as in conditions
(G3) and (G4) the concave tangency lines of $v$ to $\partial M$ generate
triple lines in the in-backward and out-forward polyhedra associated to $(M,v)$, while convex
tangency lines generate single lines. Moreover, if from a stream-spine $P$
we go to $(M,v)$ and then to the associated
in-backward and out-forward polyhedra, away from the
vertices and spikes of $P$ we see that these polyhedra
are naturally isomorphic to $P$, as shown in a cross-section in Fig.~\ref{P/to/M/to/IO:fig}.
\begin{figure}\label{P/to/M/to/IO:fig}
\end{figure}
The fact that an orbit of $v$ as in condition (G4) generates a vertex in
the in-backward and out-forward polyhedra associated to $(M,v)$
was already shown in~\cite{LNM}, but we reproduce the argument here
for the sake of completeness, showing in Fig.~\ref{G4/to/vertex:fig}-left,
top and bottom,
\begin{figure}\label{G4/to/vertex:fig}
\end{figure}
the in-backward and the out-forward spines near the orbit of Fig.~\ref{transverse:fig}.
Both these spines are locally isomorphic to the stream-spine shown on the right, to
which Proposition~\ref{spine:to:stream:prop} associates precisely an orbit as in Fig.~\ref{transverse:fig}.
We are left to deal with transition points and with spikes.
Let us concentrate on a
concave-to-convex transition point as in Fig.~\ref{transition/types:fig}-left,
but mirrored and rotated in $3$-space for convenience.
In this case the transition orbit
extends backward (and not forward), and the
locally associated in-backward polyhedron is easy to describe,
which we do in Fig.~\ref{trans/spike:fig}-top.
\begin{figure}\label{trans/spike:fig}
\end{figure}
The out-forward polyhedron is instead slightly more complicated to understand, since the orbits
of $v$ starting from the concave line near the transition point finish on points close to the transition one, as
illustrated in Fig.~\ref{trans/spike:fig}-bottom. The picture shows that the spikes thus generated
are indeed locally the same. Moreover, the
concave-to-convex configuration of $v$ near $\partial M$
is precisely that generated by a spike as in Fig.~\ref{vert/spike/thick:fig}-right, which is again of the same type.
This concludes the proof.
\end{proof}
Combining Propositions~\ref{spine:to:stream:prop} and~\ref{stream:to:spine:prop} we get the following
main result of this section:
\begin{thm}\label{stream:homeo:thm}
The map $\varphi_0^*:\calS_0\to\calF_0^*$ from the set of stream-spines up to isomorphism
to the set of streams on $3$-manifolds up to diffeomorphism is a bijection.
\end{thm}
\section{Stream-homotopy and\\ sliding moves on stream-spines}
In this section we consider a natural equivalence relation on streams, and we translate it into combinatorial moves on stream-spines.
\subsection{Elementary homotopy catastrophes}
Let $M$ be an oriented $3$-manifold with non-empty boundary. On the set $\calF_0^*$ of streams on $M$ we define
\emph{stream-homotopy} as the equivalence relation of homotopy through vector fields with
fixed configuration on $\partial M$ and all orbits homeomorphic to closed intervals or
to points. We then define $\calF_0$ as the quotient of $\calF_0^*$ under the
equivalence relation of stream-homotopy.
The next result shows how to factor this relation into easier ones:
\begin{prop}\label{catastrophes:prop}
Stream-homotopy is generated by isotopy and by the
elementary moves shown in
Figs.~\ref{20/catastrophe:fig} to~\ref{catastrophe/trans/in:fig}.
\begin{figure}\label{20/catastrophe:fig}
\end{figure}
\begin{figure}\label{32/catastrophe:fig}
\end{figure}
\begin{figure}\label{catastrophe/trans/in:fig}
\end{figure}
\end{prop}
\begin{proof}
It is evident that taking a generic perturbation of a homotopy one only gets
the elementary catastrophes of the statement, plus perhaps finitely many times at which
an orbit starts and ends at transition points. We then only need to show that this type of catastrophe
can be generically avoided during a homotopy.
To do so we carefully analyze in Fig.~\ref{near/transition/in:fig}
\begin{figure}\label{near/transition/in:fig}
\end{figure}
the initial portions of the orbits close to an incoming transition orbit. In the type of catastrophe we want to avoid
we would have a concave-to-convex transition point $A$ such that the orbit through $A$
traces backward to, say, orbit $1$ just before the catastrophe, to orbit $0$ at the
catastrophe, and to orbit $8$ just after the catastrophe, with numbers as in
Fig.~\ref{near/transition/in:fig}. We can now modify the homotopy so that the orbit through $A$
traces back to either
\begin{itemize}
\item orbit 1, then 2, then 3, then 4, then 8, or
\item orbit 1, then 5, then 6, then 7, then 8.
\end{itemize}
Note that at $A$ with the first choice we obviously create a catastrophe as in
Fig.~\ref{catastrophe/trans/in:fig}, but for an outgoing transition orbit, while with the second
choice we do not create any catastrophe at $A$. On the other hand
at the starting point of orbit $0$ in Fig.~\ref{near/transition/in:fig}
we could create a
catastrophe as in Fig.~\ref{catastrophe/trans/in:fig} with one of the two choices and no
catastrophe with the other choice, but we cannot predict which is which. This shows that we can
always get rid of an orbit starting and ending at transition points
either at no cost or by inserting one catastrophe as in Fig.~\ref{catastrophe/trans/in:fig}.
\end{proof}
\subsection{Sliding moves on stream-spines}
In this subsection we introduce certain combinatorial moves on stream-spines. We do so
by showing pictures, always meaning that the mirror images in $3$-space of the moves that we represent are
also allowed and named in the same way. Here is the list; we call:
\begin{itemize}
\item \emph{Sliding $0\leftrightarrow 2$ move} any move as in Fig.~\ref{slide/20:fig};
\item \emph{Sliding $2\leftrightarrow 3$ move} any move as in Fig.~\ref{slide/32:fig};
\item \emph{Spike-sliding move} any move as in Fig.~\ref{slide/spike:fig};
\item \emph{Sliding move} any move of the types just described.
\end{itemize}
\begin{figure}\label{slide/20:fig}
\end{figure}
\begin{figure}\label{slide/32:fig}
\end{figure}
\begin{figure}\label{slide/spike:fig}
\end{figure}
The following result is evident:
\begin{prop}\label{sliding:ok:prop}
If two stream-spines $P_1$ and $P_2$ in $\calS_0$ are related by a sliding move then the corresponding
streams $\varphi_0^*(P_1)$ and $\varphi_0^*(P_2)$ are stream-homotopic to each other.
\end{prop}
\subsection{Translating catastrophes into moves}
In this subsection we establish the following:
\begin{thm}
Let $\varphi_0:\calS_0\to\calF_0$ be the surjection from the set of
stream-spines to the set of streams on $3$-manifolds up to homotopy.
Then $\varphi_0(P_1)$ and $\varphi_0(P_2)$ coincide in $\calF_0$
if and only if $P_1$ and $P_2$ are related by sliding moves.
\end{thm}
\begin{proof}
We must show that the elementary catastrophes along a generic stream-homotopy, as described in
Proposition~\ref{catastrophes:prop}, correspond at the level of stream-spines to the sliding moves.
Checking that the catastrophes of Figs.~\ref{20/catastrophe:fig} and~\ref{32/catastrophe:fig}
correspond to the $0\leftrightarrow2$ and $2\leftrightarrow3$ sliding moves is easy and
already described in~\cite{LNM}, so we do not reproduce the argument.
We then concentrate on the catastrophes of Fig.~\ref{catastrophe/trans/in:fig}, showing that on the associated out-forward spines
their effect is that of a spike-sliding. This is done in Fig.~\ref{slide/trans/out:fig}
\begin{figure}\label{slide/trans/out:fig}
\end{figure}
for the catastrophe in the top portion of Fig.~\ref{catastrophe/trans/in:fig},
which is then easily recognized to give the first spike-sliding
move of Fig.~\ref{slide/spike:fig}; a very similar picture shows that the bottom portion of
Fig.~\ref{catastrophe/trans/in:fig} gives the second spike-sliding move of
Fig.~\ref{slide/spike:fig}.
The proof is now complete and the isomorphism of the in-backward and out-forward stream-spines implies
that the effect of the catastrophes of Fig.~\ref{catastrophe/trans/in:fig}
is that of a spike-sliding also on the in-backward stream-spine. It is however instructive to
analyze the effect directly on the in-backward stream-spine ---in fact, it is not
even obvious at first sight that the catastrophes of Fig.~\ref{catastrophe/trans/in:fig} have
any impact on the in-backward stream-spine, given that there is no transition orbit to follow backward anyway.
But the catastrophes of Fig.~\ref{catastrophe/trans/in:fig} do have an impact on the
in-backward stream-spine, because at the catastrophe time there is an orbit that from a concave
tangency point traces back to a transition point. To analyze what the impact exactly is,
we restrict to the top portion of Fig.~\ref{catastrophe/trans/in:fig} and we employ
Fig.~\ref{near/transition/in:fig} in a crucial fashion. We do this in
Fig.~\ref{slide/trans/in:fig},
\begin{figure}\label{slide/trans/in:fig}
\end{figure}
where we show the exact time of the catastrophe (top),
the situation before (middle-left) and after (middle-right) the catastrophe, and
the corresponding in-backward stream-spines (bottom). In the middle figures we show how the concave tangency lines trace
back to the in-region, indicating for some points $Q$ the boundary point $Q'$ obtained by following backward the orbit through $Q$; note that
after the catastrophe one point $P$ traces back first to a point $P'$ of the concave tangency line and then to a point $P''$ of
the in-region. Using the information of the middle figures one indeed sees that the corresponding
stream-spines are as in the bottom figures, where one recognizes the first spike-sliding
of Fig.~\ref{slide/spike:fig}.
\end{proof}
\section{Combinatorial presentation of generic flows}
As already anticipated, let us now define $\calF$ as the set of pairs $(M,v)$ where $M$ is a
compact, connected, oriented $3$-manifold (possibly without boundary) and $v$ is a generic flow on $M$,
with $M$ viewed up to diffeomorphism and $v$ viewed up to homotopy on $M$ fixed on $\partial M$.
To provide a combinatorial presentation of $\calF$ we call:
\begin{itemize}
\item \emph{Trivial sphere} on the boundary of some $(N,w)$ one that is split into one in-disc and one out-disc
by one concave tangency circle;
\item \emph{Trivial ball} a ball $(B^3,u)$ with $u$ a stream on $B^3$ and $\partial B^3$
split into one in-disc and one out-disc
by one convex tangency circle.
\end{itemize}
Note that a trivial ball can be glued to a trivial sphere matching the vector fields.
We now define $\calS$ as the subset of $\calS_0$ consisting of stream-spines $P$ such that the boundary of
$\varphi_0(P)$ contains at least one trivial sphere. We will establish the following:
\begin{thm}\label{flow:thm}
For $P\in\calS$ let $\varphi(P)$ be obtained from $\varphi_0(P)$ by
attaching a trivial ball to a trivial sphere in the boundary of $\varphi_0(P)$.
This gives a well-defined surjective map $\varphi:\calS\to\calF$, and
$\varphi(P_0)=\varphi(P_1)$ if and only if $P_0$ and $P_1$ are obtained
from each other by the sliding moves of Figs.~\ref{slide/20:fig} to~\ref{slide/spike:fig}.
\end{thm}
\subsection{Equivalence of trivial balls}
In this subsection we will show that the map $\varphi$ of Theorem~\ref{flow:thm} is well-defined.
To this end choose $P\in\calS$ and set $(N,w)=\varphi_0(P)$. To define $\varphi(P)$ we
must choose one trivial sphere $S\subset\partial N$, a trivial ball $(B^3,u)$ and
a diffeomorphism $f:\partial B^3\to S$ matching $u$ to $w$. The manifold $M$
resulting from the gluing is of course independent of $S$, and the resulting
flow $v$ on $M$ is of course independent of $f$ up to homotopy. However, when
the boundary of $\varphi_0(P)$ contains more than one trivial sphere, it
is not obvious that the pair $(M,v)$ as an element of $\calF$ is independent of $S$.
This will be a consequence of the following:
\begin{prop}
Let $v$ be a generic flow on $M$, and let $B_1$ and $B_2$ be disjoint trivial balls contained in
the interior of $M$. Then there is a flow $v'$ on $M$
homotopic to $v$ relatively to $(\partial M)\cup B_1\cup B_2$
such that $\overline{M\setminus B_1}$ and $\overline{M\setminus B_2}$, endowed with the
restrictions of $v'$, are diffeomorphic to each other.
\end{prop}
\begin{proof}
Choose a smooth path $\alpha:[0,1]\to M$ with $\alpha(j-1)\in\partial B_j$ and
$\dot\alpha(j-1)=v(\alpha(j-1))$ not tangent to $\partial B_j$ for $j=1,2$, and
$\alpha(t)\not\in B_1\cup B_2$ for $0<t<1$.
assume $\dot\alpha(t)\neq -v(\alpha(t))$ for $t\in[0,1]$, and then homotope $v$
on a neighbourhood of $\alpha$ to a flow $v''$ such that
$v''(\alpha(t))=\dot\alpha(t)$ for $t\in[0,1]$. Now we can homotope $v''$ to $v'$
in a neighbourhood of $B_1\cup B_2\cup\alpha$ as suggested in Fig.~\ref{twoballs:fig},
\begin{figure}\label{twoballs:fig}
\end{figure}
which gives the desired conclusion.
\end{proof}
\subsection{Normal sections}
Let us now show that the map $\varphi$ of Theorem~\ref{flow:thm} is surjective. To this end we adapt
a definition from~\cite{LNM, Ishii}, calling \emph{normal section} for a manifold $M$ with generic flow $v$
a smooth disc $\Delta$ in the interior of $M$ such that $v$ is transverse to $\Delta$, every orbit
of $v$ meets $\Delta\cup\partial M$ in both positive and negative time, and the orbits of $v$ tangent
to $\partial M$ or intersecting $\partial\Delta$ are generic with respect to each other, with the obvious meaning.
The existence of normal sections is rather easily established~\cite{LNM}, and Fig.~\ref{cutsection:fig}
\begin{figure}\label{cutsection:fig}
\end{figure}
suggests how, given a normal section $\Delta$ of $(M,v)$,
to remove a trivial ball $B$ from $(M,v)$ so that the restriction $w$ of $v$ to $N=M\setminus B$ is a stream on $N$.
By construction if $P$ is a stream-spine such that $\varphi_0^*(P)=(N,w)$ we have that $\varphi(P)$ represents
$(M,v)$, whence the surjectivity of $\varphi$. Let us also note, since we will need this to prove injectivity,
that $P$ can be directly recovered from $(M,v)$ and $\Delta$, taking $\Delta$ union the in-region of $\partial M$ union
the set of points $A$ such that there exists an orbit of $v$ going from $A$ to
$\partial \Delta$ or to the concave tangency line of $v$ to $\partial M$,
with the obvious branching along triple lines.
\subsection{Homotopy}
We are left to establish injectivity of the
map $\varphi$ of Theorem~\ref{flow:thm}. Recalling that the elements $(M,v)$ of $\calF$ are regarded
up to diffeomorphism of $M$ and homotopy of $v$ on $M$ relative to $\partial M$,
we see that injectivity is a consequence of
the following:
\begin{prop}\label{inj:prop}
Let $(v_t)_{t\in[0,1]}$ be a homotopy of generic flows on $M$, fixed on $\partial M$.
For $j=0,1$ let $\Delta_j$ be a normal section for $(M,v_j)$ and let $P_j$ be the stream-spine
defined by $\Delta_j$ and $v_j$ as at the end of the previous subsection. Then
$P_0$ and $P_1$ are related by the sliding moves of
Figs.~\ref{slide/20:fig} to~\ref{slide/spike:fig}.
\end{prop}
\begin{proof}
The first step is to follow the first normal section
along the homotopy, thus getting a smooth deformation
$(\Sigma_t)_{t\in[0,1]}$ with $\Sigma_0=\Delta_0$ and $\Sigma_t$ a normal section for
$v_t$ for all $t\in[0,1]$. Assuming the deformation is generic, along the deformation
$(\Sigma_t)_{t\in[0,1]}$ and simultaneous homotopy $(v_t)_{t\in[0,1]}$ we will
only have the same catastrophes as in Proposition~\ref{catastrophes:prop},
so $P_0$ and the stream-spine $\widetilde{P_1}$ defined by $\Sigma_1$ and $v_1$
are related by sliding moves.
The next step, as in~\cite{LNM}, consists in constructing normal sections $\Theta$ and $\Xi$ for
$(M,v_1)$ such that $\Sigma_1\cap\Theta=\Theta\cap\Xi=\Xi\cap\Delta_1=\emptyset$, which is easily done.
The conclusion now comes from the fact that given two disjoint normal sections $X$ and $Y$ of
$(M,v)$ one can join them by a small strip constructing a normal section $Z$ that contains $X\cup Y$, and then
one can view the transformation of $X$ into $Y$ as first the smooth expansion of $X$ to $Z$ and then the contraction of $Z$ to $Y$.
At the level of the associated stream-spines this transition again consists of the elementary
sliding moves of Figs.~\ref{slide/20:fig} to~\ref{slide/spike:fig}.
\end{proof}
\begin{rem}\label{glue:rem}
\emph{Suppose for $j=1,2$ that $M_j$ is an oriented $3$-manifold endowed with a generic flow $v_j$, and that $\Sigma_j$ is a boundary
component of $M_j$. Suppose moreover that one is given a diffeomorphism $\Sigma_1\to\Sigma_2$ mapping
the in-region of $\Sigma_1$ to the out-region of $\Sigma_2$ and conversely,
the concave line on $\Sigma_1$ to the convex line on $\Sigma_2$ and conversely,
the concave-to-convex transition points of $\Sigma_1$ to the convex-to-concave transition points of $\Sigma_2$
and conversely.
Then one can glue $M_1$ to $M_2$ along this map, getting on the resulting manifold $M$ a generic flow $v$ well-defined up to homotopy.
This implies that there exists a natural cobordism theory in the set $\calF$ of $3$-manifolds
endowed with a generic flow, and one could hope to use the combinatorial encoding
$\varphi:\calS\to\calF$ described in this paper as a technical tool to develop a TQFT~\cite{Turaev}
for such manifolds.}\end{rem}
\noindent
Dipartimento di Matematica\\
Universit\`a di Pisa\\
Via Filippo Buonarroti, 1C\\
56127 PISA -- Italy\\
petronio@dm.unipi.it
\end{document} |
\begin{document}
\def\langle{\langle}
\def\rangle{\rangle}
\graphicspath{{figures/}}
\begin{frontmatter}
\title{Two-Photon Scattering by a Cavity-Coupled Two-Level Emitter in a One-Dimensional Waveguide}
\author{Zhongling Ji and Shaoyan Gao\corref{cor1}}
\cortext[cor1]{Corresponding Author: gaosy@mail.xjtu.edu.cn}
\address{Department of Applied Physics, MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter, Xi'an Jiaotong University, Xi'an 710049, China}
\begin{abstract}
We show that two-photon transport can be modulated by a two-level emitter coupled to a cavity in a one-dimensional waveguide. In the ordinary case, the transmitted light has a wider frequency spectrum than in the situation without the cavity, because the photons are reflected and scattered many times. But when the two photons are resonant with the cavity resonance reflection frequency, the frequency spectrum of the transmitted light becomes narrower than without the cavity. This means that properly tuning the cavity resonance frequency can enhance the photon-photon interaction. In addition, we show that the two-photon intensity correlation functions on the two sides of the emitter transition frequency are nearly opposite to each other rather than the same, which is exactly the Fano resonance line shape for two photons. Such an effect is important for lowering the power threshold in optical bistable devices and for sensing applications. When the emitter transition frequency equals the cavity resonance frequency for a high-Q cavity, our results agree with recent experiments and theories.
\end{abstract}
\begin{keyword}
one-dimensional waveguide \sep atom-cavity system \sep two-photon scattering \sep Fano resonance line shape
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec:1}
The quantum internet is important for quantum computation, communication and metrology \cite{KimbleNat453}. The main methods to realize quantum networks are the cavity-QED-based protocol \cite{CiracPRL78} and the protocol proposed by Duan, Lukin, Cirac, and Zoller (DLCZ) \cite{DuanNat414, ChoiNat452}. The former uses the interaction between light and $\Lambda$-type three-level atoms in a high-Q cavity to store quantum states \cite{FleischOC179}. The latter involves measurement-induced entanglement \cite{CabrilloPRA59}, replaces single atoms by atomic ensembles so as to relax the requirement on the cavity quality, and utilizes built-in entanglement purification to realize long-distance quantum communication efficiently.
Thus it is of great significance to consider the interaction between a few photons and a cavity-coupled emitter. Here we focus on a one-dimensional (1D) waveguide for the photons because it helps to realize a strong photon-photon interaction \cite{FanPRL98, FanPRA76}. Recently, the interaction between light and an atom-cavity system in a 1D waveguide has been studied both in experiment \cite{WallraffNat431, BirnbaumNat436} and in theory \cite{FanAPL80, FanJOSAA20, FanOL30, FanPRL95, BlaisPRA69, FanPRA79, ShiarXiv1009}. The theoretical approaches are based either on the scattering matrix \cite{FanAPL80, FanJOSAA20, FanOL30, FanPRL95} or on quantum field theory \cite{BlaisPRA69, FanPRA79, ShiarXiv1009}. The two methods give similar results for single photons \cite{FanPRL95, BlaisPRA69}.
In this paper we study the interaction of two photons with a cavity-coupled two-level emitter in a 1D waveguide. By analyzing two-photon wave packets and generalizing the scattering matrix \cite{HausWFO} to handle the frequency-spectrum transformation, we obtain the two-photon correlated momentum distribution, the correlated transmission coefficient, the Fano resonance line shape and other important properties. Compared with most existing works, our results are more general because our approach is not restricted to a high-Q cavity. Because our model includes the decay inherently, our results exactly agree with the experimental results in Ref. \cite{BirnbaumNat436}. In Sec.~\ref{sec:2} we present the model and analyze the two-photon wave packets. In Sec.~\ref{sec:3} we generalize the scattering matrix \cite{HausWFO} to deal with the frequency-spectrum transformation. We show the results in Sec.~\ref{sec:4} and summarize in Sec.~\ref{sec:5}.
\section{The Model and Two-Photon Wave Packets}\label{sec:2}
There are several methods to realize the atom-cavity system in a 1D waveguide \cite{WallraffNat431, BirnbaumNat436, FanAPL80}. Fig.~\ref{fig:1a} shows the schematic experimental setup \cite{FanPRA76}. The incident beam is a monochromatic plane wave. After the two photons are scattered by the whole system, we separate them with a beam splitter (BS) and use two single-photon counters to detect them. We register the probability of concurrent detections. As in the Hanbury Brown--Twiss effect \cite{ScullyQO}, the results reveal the correlation of the two photons. Fig.~\ref{fig:1b} shows the parameters of the two-level emitter.
\begin{figure}
\caption{(a) The schematic experimental setup for the coincidence measurement of the correlated transmission probability density $|t_2(x_1,x_2)|^2$. $D_1$ and $D_2$ are photon detectors with adjustable positions. BS is a beam splitter. The ``='' denotes the two-level emitter in the 1D waveguide. The black parentheses symbolize the cavity, whose length is $2l$. (b) The two-level emitter. $k$ and $p$ are the momenta of the two photons. $\Omega$ is the emitter transition frequency.}
\label{fig:1a}
\label{fig:1b}
\label{fig:1}
\end{figure}
Fig.~\ref{fig:2} shows the wave packets of the two photons. After the two photons impinge on the reference plane, they can be scattered to different sides of the plane. Incident photons can also come from different sides of the system. So we cannot divide these wave packets into left-coming and right-coming parts. However, we find that once one of the two photons runs out of the cavity, the two photons can never be together again. So we first treat this situation as ``loss'' [see the ``uncoupled'' wave packets in Fig.~\ref{fig:2}]. After the calculation of wave packet \textcircled{\small{1}}, we add wave packets \textcircled{\small{2}} and \textcircled{\small{3}} to complete the solution of the problem.
\begin{figure}
\caption{(a) The ellipses are the two-photon wave packets, some of which are together and some separated. The arrows in the ellipses represent the moving directions of the photons. The uncircled numbers, e.g. ``1'' and ``0'', denote the known amplitudes of the respective wave packets. The black, green and purple lines are the reference planes of the cavity, the free space and the emitter, respectively. (b) Symbols used in Eq.~(\ref{eq:1}).}
\label{fig:2}
\label{fig:2b}
\label{fig:2:0}
\end{figure}
\section{Scattering Matrix and Frequency-Spectrum Transformation}\label{sec:3}
In the two-photon case, the emitted light has a frequency-spectrum distribution \cite{FanPRL98, FanPRA76}. So we need to generalize the scattering matrix \cite{HausWFO} to handle this transformation. In fact, we need to solve several integral equations to get the spectra of the outgoing wave packets [wave packets \textcircled{\small{1}}, \textcircled{\small{2}} and \textcircled{\small{3}} in Fig.~\ref{fig:2}] from the incident wave packets [the wave packets denoted by the uncircled numbers ``1'' and ``0'' in Fig.~\ref{fig:2}], whose amplitudes are known. Here we use the scattering matrix to express and solve these equations. We define the scattering matrix as follows
\begin{equation}\label{eq:1}
\begin{bmatrix}
S_{1LL}(k,p) \\
S_{2RR}(k,p) \\
S_{RLout}(k,p) \\
\end{bmatrix}
=
\begin{bmatrix}
\hat{R}_{22} & \hat{T}_{22} & \hat{T}_{12} \\
\hat{T}_{22} & \hat{R}_{22} & \hat{T}_{12} \\
\hat{T}_{21} & \hat{T}_{21} & \hat{T}_{11} \\
\end{bmatrix}
\begin{bmatrix}
S_{1RR}(k,p) \\
S_{2LL}(k,p) \\
S_{RLin}(k,p) \\
\end{bmatrix}.
\end{equation}
Here, $S_{1RR}(k,p)$ etc. are the frequency spectra of the wave packets [Fig.~\ref{fig:2b}]. The operator $\hat{T}_{22}$ ($\hat{R}_{22}$) represents two photons that come in together and are transmitted (reflected) together. $\hat{T}_{12}$ describes two photons that come from different sides of the reference plane and are scattered to the same side, the reverse process being denoted by $\hat{T}_{21}$. $\hat{T}_{11}$ denotes two photons that both come from and go to different sides of the system. According to \cite{FanPRA76} we have
\begin{equation}\label{eq:2}
\begin{gathered}
\hat{T}_{22} f(k,p) = \frac{1}{2} \int_{0}^{\infty} dk_1 dp_1 f(k_1,p_1)\times \\
\{ \bar{t}_{k_{1}} \bar{t}_{p_{1}} [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] + B \delta(E_{1}-E) \},
\end{gathered}
\end{equation}
\begin{equation}\label{eq:3}
\begin{gathered}
\hat{R}_{22} f(k,p) = \frac{1}{2} \int_{0}^{\infty} dk_1 dp_1 f(k_1,p_1)\times \\
\{ \bar{r}_{k_{1}} \bar{r}_{p_{1}} [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] + B \delta(E_{1}-E) \},
\end{gathered}
\end{equation}
\begin{equation}\label{eq:4}
\begin{gathered}
\hat{T}_{12} f(k,p) = \frac{1}{\sqrt{2}} \hat{T}_{21} f(k,p) = \frac{1}{2} \int_{0}^{\infty} dk_1 dp_1 f(k_1,p_1)\times \\
\left\{ \frac{\bar{t}_{k_{1}} \bar{r}_{p_{1}} + \bar{r}_{k_{1}} \bar{t}_{p_{1}}}{2} [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] + B \delta(E_{1}-E) \right\},
\end{gathered}
\end{equation}
\begin{equation}\label{eq:5}
\begin{gathered}
\hat{T}_{11} f(k,p) = \frac{1}{2} \int_{0}^{\infty} dk_1 dp_1 f(k_1,p_1)\times \\
\left\{ \frac{\bar{t}_{k_{1}} \bar{t}_{p_{1}} + \bar{r}_{k_{1}} \bar{r}_{p_{1}}}{\sqrt{2}} [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] + \sqrt{2}B \delta(E_{1}-E) \right\},
\end{gathered}
\end{equation}
where $f(k,p)=f(p,k)$ denotes the frequency spectrum of the incident photons. $\bar{t}_{k}$ and $\bar{r}_{k}$ are the single-photon transmission and reflection amplitudes of the emitter, respectively \cite{FanPRA76}. $B$ is the background fluorescence
\begin{equation}\label{eq:6}
B = \frac {4i \Gamma^2} {\pi} \frac {E_1 - 2 \Omega + i\Gamma} {[4\Delta_1^2 - (E_1 - 2\Omega + i\Gamma)^2] [4\Delta^2 - (E_1 - 2\Omega + i\Gamma)^2]}.
\end{equation}
Here, $\Gamma$ is the coupling strength between the photons and the emitter.
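
For numerical work, Eq.~(\ref{eq:6}) translates directly into a short routine. The following Python sketch is a plain transcription of the formula; the argument names and the sample values in the last line are ours, not taken from the calculations of this paper.
\begin{verbatim}
import numpy as np

def background_B(Delta1, Delta, E1, Omega, Gamma):
    """Background fluorescence B of Eq. (6).

    Delta1 and Delta are the detunings of the incoming and outgoing photon
    pairs, E1 is the total energy of the incoming pair, Omega the emitter
    transition frequency and Gamma the photon-emitter coupling strength.
    """
    z = E1 - 2.0 * Omega + 1j * Gamma
    return (4j * Gamma**2 / np.pi) * z / ((4 * Delta1**2 - z**2)
                                          * (4 * Delta**2 - z**2))

# Illustrative evaluation: on-shell value at zero detunings.
print(background_B(0.0, 0.0, 2 * 0.325, 0.325, 0.004))
\end{verbatim}
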
From Eq.~(\ref{eq:2}) we see that the operator $\hat{T}_{22}$ has two parts: one is multiplied by $[\delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k)]$ and the other by $\delta(E_{1}-E)$. Products of operators can therefore be computed part by part. Tab.~\ref{tab:1} gives the operator-multiplying formula (see \ref{app:1} for details)
\begin{table}[h]
\centering
\caption{Operator-multiplying formula}\label{tab:1}
\begin{tabular}{ccc}
\hline
Operators & $\delta(E_{1}-E)$ & $\delta(k_{1}-k)\delta(p_{1}-p)$ \\ \hline
$\hat{T}$ & $B_0B_1(\Delta_1)B_2(\Delta_2)$ & $tt(\Delta)$ \\
$\hat{R}$ & $C_0C_1(\Delta_1)C_2(\Delta_2)$ & $rr(\Delta)$ \\
$\hat{R}\hat{T}$ & $B_0B_1(\Delta_1)B_2(\Delta_2)rr(\Delta_2)$ & $tt(\Delta)rr(\Delta)$ \\
& + $C_0C_1(\Delta_1)C_2(\Delta_2)tt(\Delta_1)$ & \\
& + $B_0C_0B_1(\Delta_1)C_2(\Delta_2)f_{int}$ & \\
\hline
\end{tabular}
\end{table}
\\ where
\begin{equation}\label{eq:6:1}
f_{int} = \frac{1}{2} \int_{-\infty}^{+\infty} B_2(y) C_1(y) dy.
\end{equation}
For calculation, we need the inverse operator, which satisfies
\begin{equation}\label{eq:7}
\begin{gathered}
\hat{T}^{-1} \hat{T} f(k,p) = \hat{T} \hat{T}^{-1} f(k,p) = f(k,p) = \frac{1}{2} [f(k,p)+f(p,k)] \\
= \frac{1}{2} \int dk_1 dp_1 f(k_1,p_1) [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ].
\end{gathered}
\end{equation}
Then we derive Tab.~\ref{tab:2}, which shows the operator-inversion formula (see \ref{app:2} for details)
\begin{table}[h]
\centering
\caption{Operator-inversion formula}\label{tab:2}
\begin{tabular}{ccc}
\hline
Operators & $\delta(E_{1}-E)$ & $\delta(k_{1}-k)\delta(p_{1}-p)$ \\ \hline
$\hat{T}$ & $B_0B_1(\Delta_1)B_2(\Delta_2)$ & $tt(\Delta)$ \\
$\hat{T}^{-1}$ & $-\frac{B_0}{1+B_0g_{int}} \frac{B_1(\Delta_1)}{tt(\Delta_1)} \frac{B_2(\Delta_2)}{tt(\Delta_2)}$ & $\frac{1}{tt(\Delta)}$ \\
\hline
\end{tabular}
\end{table}
\\ where
\begin{equation}\label{eq:8}
g_{int} = \frac{1}{2} \int_{-\infty}^{+\infty} \frac{B_1(y) B_2(y)}{tt(y)} dy.
\end{equation}
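
The one-dimensional integrals $f_{int}$ and $g_{int}$ can be evaluated by standard quadrature, integrating the real and imaginary parts separately. The sketch below illustrates this for Eq.~(\ref{eq:8}); the functions $B_1$, $B_2$ and $tt$ used here are placeholders with the expected analytic structure, and all parameter values are illustrative assumptions rather than values used in our calculations.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (in units of 2*pi*c/l); not taken from the paper.
E1, Omega, Gamma = 0.85, 0.325, 0.1
z = E1 - 2.0 * Omega + 1j * Gamma

def B1(y): return 1.0 / (4 * y**2 - z**2)
def B2(y): return 1.0 / (4 * y**2 - z**2)
def tt(y):
    # Placeholder elastic amplitude of unit modulus; replace by the product
    # tbar_k * tbar_p of the text for a real calculation.
    return (y + 0.5j * Gamma) / (y - 0.5j * Gamma)

integrand = lambda y: B1(y) * B2(y) / tt(y)
re, _ = quad(lambda y: integrand(y).real, -np.inf, np.inf)
im, _ = quad(lambda y: integrand(y).imag, -np.inf, np.inf)
g_int = 0.5 * (re + 1j * im)
print(g_int)
\end{verbatim}
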
Having the formulae in Tabs.~\ref{tab:1} and \ref{tab:2}, we can write the reduced scattering matrix of the emitter as follows (see \ref{app:3} for details)
\begin{equation}\label{eq:9}
\begin{bmatrix}
S_{1LL}(k,p) \\
S_{2RR}(k,p) \\
\end{bmatrix}
=
\begin{bmatrix}
\hat{R} & \hat{T} \\
\hat{T} & \hat{R} \\
\end{bmatrix}
\begin{bmatrix}
S_{1RR}(k,p) \\
S_{2LL}(k,p) \\
\end{bmatrix},
\end{equation}
where
\begin{equation}\label{eq:10}
\begin{aligned}
\hat{T} & = \hat{T}_{22} + r'^2 \hat{T}_{12} (1-r'^2\hat{T}_{11})^{-1} \hat{T}_{21}, \\
\hat{R} & = \hat{R}_{22} + r'^2 \hat{T}_{12} (1-r'^2\hat{T}_{11})^{-1} \hat{T}_{21}.
\end{aligned}
\end{equation}
Here we make the substitution
\begin{equation}\label{eq:11}
r'^2 = \sqrt{2}e^{2iEl} \times r^2,
\end{equation}
where $r$ is the reflectivity of the cavity and $2l$ is the length of the cavity.
Following the approach for single photons in Ref.~\cite{FanAPL80}, we rewrite Eq.~(\ref{eq:9}) as follows
\begin{equation}\label{eq:11:0}
\begin{bmatrix}
S_{2RR}(k,p) \\
S_{2LL}(k,p) \\
\end{bmatrix}
=
\begin{bmatrix}
\hat{T}-\hat{R}\hat{T}^{-1}\hat{R} & \hat{R}\hat{T}^{-1} \\
-\hat{T}^{-1}\hat{R} & \hat{T}^{-1} \\
\end{bmatrix}
\begin{bmatrix}
S_{1RR}(k,p) \\
S_{1LL}(k,p) \\
\end{bmatrix}.
\end{equation}
The scattering matrices of the cavity and of the free space are
\begin{equation}\label{eq:11:1}
\begin{bmatrix}
S_{2RR}(k,p) \\
S_{2LL}(k,p) \\
\end{bmatrix}
=
\begin{bmatrix}
t^2-\frac{r^4}{t^2} & \frac{r^2}{t^2} \\
-\frac{r^2}{t^2} & \frac{1}{t^2} \\
\end{bmatrix}
\begin{bmatrix}
S_{1RR}(k,p) \\
S_{1LL}(k,p) \\
\end{bmatrix}
\end{equation}
and
\begin{equation}\label{eq:11:2}
\begin{bmatrix}
S_{2RR}(k,p) \\
S_{2LL}(k,p) \\
\end{bmatrix}
=
\begin{bmatrix}
e^{iEl} & 0 \\
0 & e^{-iEl} \\
\end{bmatrix}
\begin{bmatrix}
S_{1RR}(k,p) \\
S_{1LL}(k,p) \\
\end{bmatrix}.
\end{equation}
We combine the scattering matrices of the cavity, the free space and the emitter in sequence to get the scattering matrix of the whole system
\begin{equation}\label{eq:12}
T_s =
\begin{bmatrix}
t^2-\frac{r^4}{t^2} & \frac{r^2}{t^2} \\
-\frac{r^2}{t^2} & \frac{1}{t^2} \\
\end{bmatrix}
\begin{bmatrix}
e^{iEl} & 0 \\
0 & e^{-iEl} \\
\end{bmatrix}
\begin{bmatrix}
\hat{T}-\hat{R}\hat{T}^{-1}\hat{R} & \hat{R}\hat{T}^{-1} \\
-\hat{T}^{-1}\hat{R} & \hat{T}^{-1} \\
\end{bmatrix}
\begin{bmatrix}
e^{iEl} & 0 \\
0 & e^{-iEl} \\
\end{bmatrix}
\begin{bmatrix}
t^2-\frac{r^4}{t^2} & \frac{r^2}{t^2} \\
-\frac{r^2}{t^2} & \frac{1}{t^2} \\
\end{bmatrix}.
\end{equation}
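
To make the structure of Eq.~(\ref{eq:12}) concrete, the following minimal numerical sketch performs the same composition with the operators $\hat{T}$ and $\hat{R}$ of the emitter replaced by complex numbers, i.e. keeping only an elastic amplitude at fixed total energy $E$; the matrices follow Eqs.~(\ref{eq:11:0})--(\ref{eq:11:2}), and every parameter value below is an illustrative assumption, not a result of this paper.
\begin{verbatim}
import numpy as np

def transfer_cavity(t, r):
    """Two-photon transfer matrix of one cavity mirror (cavity block of the text)."""
    return np.array([[t**2 - r**4 / t**2, r**2 / t**2],
                     [-r**2 / t**2, 1.0 / t**2]], dtype=complex)

def transfer_free(E, l):
    """Free-propagation matrix over length l (free-space block of the text)."""
    return np.array([[np.exp(1j * E * l), 0.0],
                     [0.0, np.exp(-1j * E * l)]], dtype=complex)

def transfer_emitter(T, R):
    """Emitter block of the text with the operators replaced by numbers."""
    return np.array([[T - R**2 / T, R / T],
                     [-R / T, 1.0 / T]], dtype=complex)

# Illustrative numbers: a lossless mirror and a weakly scattering emitter.
r, t = 0.9, 1j * np.sqrt(1 - 0.9**2)
E, l = 0.48 * 2 * np.pi, 1.0
T_e, R_e = 0.8 + 0.1j, 0.1 - 0.2j

Ts = (transfer_cavity(t, r) @ transfer_free(E, l) @ transfer_emitter(T_e, R_e)
      @ transfer_free(E, l) @ transfer_cavity(t, r))

# Overall transmission with photons incident only from the left:
# setting S_2LL = 0 gives t2 = Ts[0,0] - Ts[0,1]*Ts[1,0]/Ts[1,1].
t2 = Ts[0, 0] - Ts[0, 1] * Ts[1, 0] / Ts[1, 1]
print(abs(t2)**2)
\end{verbatim}
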
Next, the procedure of our approach is described as follows [the wave packets referred to below are shown in Fig.~\ref{fig:2}]
\begin{enumerate}
\item By using the formulae in Tab.~\ref{tab:1} and Tab.~\ref{tab:2}, we derive the operators $\hat{T}_{s,22}^{-1}$ and $\hat{T}_{s,12}\hat{T}_{s,22}^{-1}$. So we get the frequency spectra of wave packets \textcircled{\small{1}} and \textcircled{\small{4}}, which are $\hat{T}_{s,22}^{-1} f_{in}(k,p)$ and $\hat{T}_{s,12}\hat{T}_{s,22}^{-1} f_{in}(k,p)$ respectively \cite{FanAPL80}. Here $f_{in}(k,p)$ is the frequency spectrum of the incident two photons, which is a delta function and is denoted by the uncircled number ``1'' in Fig.~\ref{fig:2}.
\item By using the scattering matrix of the cavity [Eq.~(\ref{eq:11:1})], we derive the frequency spectra of wave packets \textcircled{\small{2}} and \textcircled{\small{5}} from \textcircled{\small{1}}, and that of wave packet \textcircled{\small{6}} from \textcircled{\small{4}}.
\item By using the scattering matrix of the emitter [Eq.~(\ref{eq:1})], we derive the frequency spectrum of wave packet \textcircled{\small{7}} from \textcircled{\small{5}} and \textcircled{\small{6}}. When one photon of wave packet \textcircled{\small{7}} is transmitted through the cavity, we get wave packet \textcircled{\small{3}} from \textcircled{\small{7}}.
\item We calculate the two-photon transmitted wave packets associated with \textcircled{\small{2}} and \textcircled{\small{3}}. In this situation the problem reduces to single-photon transport, so we can use the results in Ref.~\cite{FanAPL80} directly.
\item By adding the two-photon transmitted wave packets associated with \textcircled{\small{2}} and \textcircled{\small{3}} to wave packet \textcircled{\small{1}}, we finally derive the frequency spectrum of the two-photon transmitted wave packet. After Fourier transformation, we get the two-photon wave function $t_2(x_1,x_2)$ of the part of the out-state in which both photons are transmitted.
\end{enumerate}
\section{Results}\label{sec:4}
\subsection{Situation for Low-Q Cavity}\label{sec:4:1}
Here we consider a low-Q cavity. We set the cavity reflectivity $r=0.9$, the cavity transmission amplitude $t=i\sqrt{1-r^2}$, and the coupling strength of the emitter $\Gamma=0.004(2\pi c/l)$.
First, we consider the situation in which the two photons are nearly resonant with the emitter transition frequency $\Omega$ [i.e. $\delta E \simeq 0, \Delta \simeq 0$, see Fig.~\ref{fig:1b}]. We find that, in the ordinary case [e.g. $E_1/2=0.240(2\pi c/l)$, see Fig.~\ref{fig:3b}], the correlated transmission probability density $|t_2(x_1,x_2)|^2$ contracts, which indicates that the transmitted light has a wider frequency spectrum than in the situation without the cavity \cite{FanPRL98, FanPRA76}. This is because the photons are reflected and scattered many times. But when the two photons are resonant with the cavity resonance reflection frequency [e.g. $E_1/2=0.125(2\pi c/l)$, see Fig.~\ref{fig:3c}], the two-photon wave packet extends. So the cavity makes the frequency spectrum of the transmitted light narrower than that without the cavity. As a consequence, we achieve a strong photon-photon interaction by properly tuning the resonance frequency of the cavity, which is important for the realization of controlled optical nonlinearity.
\begin{figure}
\caption{Two-photon correlated transmission probability density $|t_2(x_1,x_2)|^2$. The two photons are nearly resonant with the emitter transition frequency $\Omega$. (a) shows the situation without the cavity. (b) and (c) show the influence of the cavity. The abscissa is the detector separation $x=x_1-x_2$ of the two detectors ($D_1$, $D_2$) [Fig.~\ref{fig:1a}].}
\label{fig:3a}
\label{fig:3b}
\label{fig:3c}
\label{fig:3}
\end{figure}
Then we study the non-resonant case, in which the two photons are not resonant with the emitter transition frequency $\Omega=0.325(2\pi c/l)$ (Fig.~\ref{fig:4}). Here we consider the intensity correlation function $g^{(2)}(x_1,x_2)$, which is defined as follows \cite{ScullyQO}
\begin{equation}\label{eq:13}
g^{(2)}(x_1,x_2) = g^{(2)}(x) = \frac {\langle a^+(x_1)a^+(x_2)a(x_2)a(x_1) \rangle} {\langle a^+a \rangle^2} = \frac {|t_2(x_1,x_2)|^2} {D},
\end{equation}
where $x=x_1-x_2$. $D$ is the normalization constant and independent of $x$.
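
Once $|t_2(x_1,x_2)|^2$ has been computed on a grid of $x=x_1-x_2$, $g^{(2)}(x)$ follows from Eq.~(\ref{eq:13}) by a single normalization. In the toy sketch below the profile of $|t_2|^2$ is invented and the constant $D$ is estimated from the large-$|x|$ tail, where the transmitted photons are uncorrelated; both choices are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

x = np.linspace(-200.0, 200.0, 4001)            # x = x1 - x2 (arbitrary units)
t2_sq = 1.0 + 0.8 * np.exp(-np.abs(x) / 20.0)   # toy |t2(x)|^2 profile

D = t2_sq[np.abs(x) > 150].mean()               # tail ~ uncorrelated level
g2 = t2_sq / D
print(g2[np.abs(x).argmin()])                   # g2(0) > 1: bunching in this toy case
\end{verbatim}
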
We find that the intensity correlation functions $g^{(2)}(x_1,x_2)$ on the two sides of the emitter transition frequency $\Omega$ are nearly opposite to each other rather than the same. The transmitted photons exhibit bunching behavior and super-Poissonian statistics [$g^{(2)}(0)>1$] when $E/2<\Omega$ (the upper panels of Fig.~\ref{fig:4}) and anti-bunching behavior and sub-Poissonian statistics [$g^{(2)}(0)<1$] when $E/2>\Omega$ (the lower panels of Fig.~\ref{fig:4}). We already know that, for single photons, a Fano resonance exhibits a sharp asymmetric line shape, with the transmission coefficient varying from 0 to 100\% over a very narrow frequency range \cite{FanJOSAA20}. So here we obtain the Fano resonance line shape for two photons. Because of its sharp asymmetric line shape, such an effect is important for lowering the power threshold in optical bistable devices and for sensing applications \cite{FanAPL80}.
\begin{figure}
\caption{Intensity correlation function $g^{(2)}(x)$ in the non-resonant case discussed in the text.}
\label{fig:4a}
\label{fig:4b}
\label{fig:4c}
\label{fig:4d}
\label{fig:4e}
\label{fig:4f}
\label{fig:4}
\end{figure}
\subsection{Situation for High-Q Cavity}\label{sec:4:2}
For a high-Q cavity we can compare our results with the recent works by Birnbaum et al. \cite{BirnbaumNat436} and Shi et al. \cite{ShiarXiv1009}. We set our parameters as follows to fit their conditions: the cavity reflectivity $r=0.9996$ and transmission amplitude $t=i\sqrt{1-r^2}$, the emitter transition frequency $\Omega=2\pi c/l$ (which is also the cavity resonance frequency when $r$ is a real number), and the coupling strength $\Gamma=0.002(2\pi c/l)$. We again consider the intensity correlation function $g^{(2)}(x_1,x_2) = g^{(2)}(x)$ (Fig.~\ref{fig:0}).
Fig.~\ref{fig:0a} shows the relation between $\log[g^{(2)}(0)]$ and $E/2$, which agrees with the results in Refs.~\cite{BirnbaumNat436, ShiarXiv1009}. When $E/2=0.9966(2\pi c/l)$ or $1.034(2\pi c/l)$, $\log[g^{(2)}(0)]$ reaches its minimum. For comparison with Refs.~\cite{BirnbaumNat436, ShiarXiv1009}, we also consider the situation when $E/2=0.9966(2\pi c/l)$ [Fig.~\ref{fig:0b} and Fig.~\ref{fig:0c}]. Because our model includes the decay of the cavity inherently, our results exactly match Figure~3 of Ref.~\cite{BirnbaumNat436}, except for a modulation arising from the center-of-mass motion of the trapped atom. In Ref.~\cite{ShiarXiv1009}, however, $g^{(2)}(\tau)$ rises to unity only after infinite time, because that work does not include the decay of the cavity or of the two-level emitter.
\begin{figure}
\caption{Intensity correlation function $g^{(2)}(x)$ for the high-Q cavity discussed in the text.}
\label{fig:0a}
\label{fig:0b}
\label{fig:0c}
\label{fig:0}
\end{figure}
\section{Summary}\label{sec:5}
In conclusion, we have studied the two-photon correlated scattering by a cavity-coupled two-level emitter in a 1D waveguide. Our results agree with the recent works \cite{BirnbaumNat436, ShiarXiv1009}. Our research is important for the realization of strong photon-photon interaction in the few-photon regime and for quantum networks. In addition, we propose a general approach to two-photon, cavity-coupled and multi-frequency scattering problems in a 1D waveguide. The same approach can deal with a three-level emitter, which is useful for building a photon transistor \cite{ChangNatphy3, WittNJP12}.
\section{Acknowledgments}
We acknowledge the financial support of Special Prophase Project on the National Basic Research Program of China (No.2011CB311807), the National Natural Science Foundation of China (No.10804092), the Natural Science Basic Research Plan in Shaanxi Province of China (No.2010JQ1004), the Fundamental Research Funds for the Central Universities (No.xjj20100099).
\appendix
\section{Derivations of Operator-Multiplying Formula}\label{app:1}
Here we derive the operator-multiplying formula in Tab.~\ref{tab:1}. If the operators $\hat{T}$ and $\hat{R}$ are defined as follows
\begin{equation}\label{app:eq:1}
\begin{gathered}
g(k_2,p_2) = \hat{T} f(k,p) = \frac{1}{2} \int_{0}^{\infty} dk_1 dp_1 f(k_1,p_1) \times \\
\{ tt(k_1,p_1) [ \delta(k_{1}-k_2) \delta(p_{1}-p_2) + \delta(k_{1}-p_2) \delta(p_{1}-k_2) ] + B(\Delta_1,\Delta_2) \delta(E_{1}-E_2) \},
\end{gathered}
\end{equation}
\begin{equation}\label{app:eq:2}
\begin{gathered}
\hat{R} g(k_2,p_2) = \frac{1}{2} \int_{0}^{\infty} dk_2 dp_2 g(k_2,p_2) \times \\
\{ rr(k_2,p_2) [ \delta(k_{2}-k) \delta(p_{2}-p) + \delta(k_{2}-p) \delta(p_{2}-k) ] + C(\Delta_2,\Delta) \delta(E_{2}-E) \},
\end{gathered}
\end{equation}
then the operator $\hat{R}\hat{T}$ can be derived as follow
\begin{equation*}
\begin{aligned}
\hat{R}\hat{T} f(k,p) & = && \frac{1}{4} \int_{0}^{\infty} \int_{0}^{\infty} dk_1 dp_1dk_2 dp_2 f(k_1,p_1) \times \\
& && \{ tt(k_1,p_1) rr(k_2,p_2) [ \delta(k_{1}-k_2) \delta(p_{1}-p_2) + \delta(k_{1}-p_2) \delta(p_{1}-k_2) ] \times \\
& && [ \delta(k_{2}-k) \delta(p_{2}-p) + \delta(k_{2}-p) \delta(p_{2}-k) ] \\
& && + B(\Delta_1,\Delta_2) rr(k_2,p_2) [ \delta(k_{2}-k) \delta(p_{2}-p) \\
& && + \delta(k_{2}-p) \delta(p_{2}-k) ] \delta(E_{1}-E_2) \\
& && + C(\Delta_2,\Delta) tt(k_1,p_1) [ \delta(k_{1}-k_2) \delta(p_{1}-p_2) \\
& && + \delta(k_{1}-p_2) \delta(p_{1}-k_2)] \delta(E_{2}-E) \\
& && + B(\Delta_1,\Delta_2) C(\Delta_2,\Delta) \delta(E_{1}-E_2) \delta(E_{2}-E) \}
\end{aligned}
\end{equation*}
\begin{equation}\label{app:eq:3}
\begin{aligned}
& = && \frac{1}{2} \int_{0}^{\infty} dk_1dp_1 f(k_1,p_1) \times \\
& && \{ tt(k_1,p_1) rr(k_1,p_1) [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] \\
& && + [B(\Delta_1,\Delta) rr(k,p) + C(\Delta_1,\Delta) rr(k_1,p_1) \\
& && + \int_{-\infty}^{\infty} B(\Delta_1,\Delta_2) C(\Delta_2,\Delta) \delta(E_{2}-E_{1}) d\Delta_2]\delta(E_{1}-E) \} \\
& = && \frac{1}{2} \int_{0}^{\infty} dk_1dp_1 f(k_1,p_1) \times \\
& && \{ tt(k_1,p_1) rr(k_1,p_1) [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ] \\
& && +[B_0B_1(\Delta_1)B_2(\Delta_2) rr(\Delta) + C_0C_1(\Delta_1)C_2(\Delta_2) tt(\Delta_1) \\
& && + B_0C_0B_1(\Delta_1)C_2(\Delta_2)f_{int}]\delta(E_{1}-E) \}.
\end{aligned}
\end{equation}
Here we factorize the background fluorescence $B(\Delta_1,\Delta_2)$ and $C(\Delta_1,\Delta_2)$ as follows
\begin{equation}\label{app:eq:4}
\begin{aligned}
B(\Delta_1,\Delta_2) & = B_0B_1(\Delta_1)B_2(\Delta_2), \\
C(\Delta_1,\Delta_2) & = C_0C_1(\Delta_1)C_2(\Delta_2).
\end{aligned}
\end{equation}
For example, if $\hat{T}=\hat{T}_{22}$ [Eq.~(\ref{eq:2})], then
\begin{equation*}
\begin{gathered}
B(\Delta_1,\Delta_2) = \frac {4i \Gamma^2} {\pi} \frac {E_1 - 2 \Omega + i\Gamma} {[4\Delta_1^2 - (E_1 - 2\Omega + i\Gamma)^2] [4\Delta^2 - (E_1 - 2\Omega + i\Gamma)^2]}, \\
B_0 = \frac {4i \Gamma^2} {\pi} (E_1 - 2 \Omega + i\Gamma), \\
B_1(\Delta_1) = \frac{1}{4\Delta_1^2 - (E_1 - 2\Omega + i\Gamma)^2}, \\
B_2(\Delta_2) = \frac{1}{4\Delta_2^2 - (E_1 - 2\Omega + i\Gamma)^2}.
\end{gathered}
\end{equation*}
In Eq.~(\ref{app:eq:3})
\begin{equation}\label{app:eq:5}
f_{int} = \frac{1}{2} \int_{-\infty}^{+\infty} B_2(y) C_1(y) dy.
\end{equation}
From Eq.~(\ref{app:eq:1}) we see that an operator is determined by the part multiplied by $[ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ]$ (e.g. $tt(k_1,p_1)$ in the operator $\hat{T}$) and the part multiplied by $\delta(E_{1}-E_2)$ (e.g. $B(\Delta_1,\Delta_2)$ in the operator $\hat{T}$). So we summarize the result of Eq.~(\ref{app:eq:3}) in Tab.~\ref{tab:1}.
\section{Derivations of Operator-inversion Formula}\label{app:2}
We use the operator-multiplying formula to derive the operator-inversion formula in Tab.~\ref{tab:2}. We require that $\hat{R}=\hat{T}^{-1}$, so we have
\begin{equation}\label{app:eq:6}
\begin{gathered}
\hat{R} \hat{T} f(k,p) = \hat{T} \hat{R} f(k,p) = f(k,p) = \frac{1}{2} [f(k,p)+f(p,k)] \\
= \frac{1}{2} \int dk_1 dp_1 f(k_1,p_1) [ \delta(k_{1}-k) \delta(p_{1}-p) + \delta(k_{1}-p) \delta(p_{1}-k) ].
\end{gathered}
\end{equation}
Comparing Eq.~(\ref{app:eq:6}) with Eq.~(\ref{app:eq:3}), we get
\begin{equation}\label{app:eq:7}
\begin{gathered}
tt(k_1,p_1) rr(k_1,p_1) = 1, \\
B_0B_1(\Delta_1)B_2(\Delta_2) rr(\Delta) + C_0C_1(\Delta_1)C_2(\Delta_2) tt(\Delta_1) + B_0C_0B_1(\Delta_1)C_2(\Delta_2)f_{int} = 0.
\end{gathered}
\end{equation}
So
\begin{equation}\label{app:eq:7b}
\begin{gathered}
C_0 = -\frac{B_0}{1+B_0f_{int}}, \\
C_1(\Delta_1) = \frac{B_1(\Delta_1)}{tt(\Delta_1)}, \\
C_2(\Delta_2) = \frac{B_2(\Delta_2)}{tt(\Delta_2)}.
\end{gathered}
\end{equation}
Here
\begin{equation}\label{app:eq:8}
f_{int} = \frac{1}{2} \int_{-\infty}^{+\infty} B_2(y) C_1(y) dy = \frac{1}{2} \int_{-\infty}^{+\infty} \frac{B_1(y) B_2(y)}{tt(y)} dy \equiv g_{int}.
\end{equation}
We summarize the result of Eq.~(\ref{app:eq:7b}) in Tab.~\ref{tab:2}.
\section{Derivations of the Reduced Scattering Matrix of the Emitter}\label{app:3}
Below we derive Eqs.~(\ref{eq:9}) and (\ref{eq:10}). From the third row of Eq.~(\ref{eq:1}) we have
\begin{equation}\label{app:eq:9}
S_{RLout}(k,p) = \hat{T}_{21}S_{1RR}(k,p) + \hat{T}_{21}S_{2LL}(k,p) + \hat{T}_{11}S_{RLin}(k,p).
\end{equation}
By using the following relation (see Fig.~\ref{fig:2})
\begin{equation}\label{app:eq:10}
S_{RLin}(k,p) = r'^2 S_{RLout}(k,p),
\end{equation}
we have
\begin{equation}\label{app:eq:11}
S_{RLin}(k,p) = r'^2 (1-r'^2\hat{T}_{11})^{-1} \hat{T}_{21} [S_{1RR}(k,p)+S_{2LL}(k,p)].
\end{equation}
By putting Eq.~(\ref{app:eq:11}) into the first and second equations of Eq.~(\ref{eq:1})
\begin{equation}\label{app:eq:12}
\begin{aligned}
S_{1LL}(k,p) &= \hat{R}_{22}S_{1RR}(k,p) + \hat{T}_{22}S_{2LL}(k,p) + \hat{T}_{12}S_{RLin}(k,p), \\
S_{2RR}(k,p) &= \hat{T}_{22}S_{1RR}(k,p) + \hat{R}_{22}S_{2LL}(k,p) + \hat{T}_{12}S_{RLin}(k,p),
\end{aligned}
\end{equation}
we get
\begin{equation}\label{app:eq:13}
\begin{aligned}
\hat{T} &= \hat{T}_{22} + r'^2 \hat{T}_{12} (1-r'^2\hat{T}_{11})^{-1} \hat{T}_{21}, \\
\hat{R} &= \hat{R}_{22} + r'^2 \hat{T}_{12} (1-r'^2\hat{T}_{11})^{-1} \hat{T}_{21}.
\end{aligned}
\end{equation}
Here we make the substitution
\begin{equation}\label{app:eq:14}
r'^2 = \sqrt{2}e^{2iEl} \times r^2,
\end{equation}
where $r$ is the reflectivity of the cavity.
By using the formulae in Tab.~\ref{tab:1} and Tab.~\ref{tab:2}, we get the expressions of $\hat{T}$ and $\hat{R}$.
\end{document} |
\begin{document}
\input amssym.def
\input amssym.tex
\title{Natural Internal Forcing Schemata extending ZFC. A Crack in the
Armor surrounding $CH?$}
\author{Garvin Melles\thanks{Would like to thank Ehud Hrushovski
for supporting him with funds from NSF Grant DMS 8959511} \\Hebrew University of Jerusalem}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{defi}[theorem]{Definition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{coro}[theorem]{Corollary}
\newtheorem{conj}{Conjecture}
\newcommand{\iopp}{\stackrel{|}{\smile}}
\newcommand{\niopp}{\not\!\!{\stackrel{|}{\smile}}}
\newcommand{\noindep}[1]{\mathop{\niopp}\limits_{\textstyle{#1}}}
\newcommand{\indep}[1]{\mathop{\iopp}\limits_{\textstyle{#1}}}
\mathsurround=.1cm
\maketitle
Mathematicians are one up on the physicists in that they already have a
unified theory of mathematics, namely set theory. Unfortunately the
plethora of independence results since the invention of forcing has
taken away some of the luster of set theory in the eyes of many
mathematicians. Will man's knowledge of mathematical truth
be forever limited to those theorems derivable from the standard
axioms of set theory, $ZFC?$ This author does not think so, and in fact
he feels there is a schema concerning non-constructible sets
which is a very natural candidate for being considered as part of the
axioms of set theory. To
understand the motivation why, let us take a very short look back at
the history of the development of mathematics. Mathematics began
with the study of mathematical objects very physical and concrete in
nature and has progressed to the study of
things completely imaginary and abstract. Most mathematicians now accept these objects
as being as mathematically legitimate as any of their more concrete
counterparts. It is enough that these objects are consistently
imaginable, i.e., exist in the world of
set theory. Applying the same intuition to set theory itself, we
should accept as many sets as we can whose existence is
consistent with $ZFC.$ Of course this is only a vague notion, but
knowledge of set theory so far, namely of the existence of $L$
provides a good starting point. What sets can we consistently imagine
beyond $L?$
Since by forcing one can prove the
consistency of $ZFC$ with the existence of non-constructible sets and as $L$ is
absolute, with these forcing extensions of $L$ you have consistently imagined more sets in
a way which satisfies the vague notion mentioned above. The problem
is which forcing extensions should you consider as part of the
universe? But there is no problem, because if you prove the
consistency of the existence of some $L$ generic subset of a partially
ordered set $P\in L$ with $ZFC,$ then $P$ must be describable
and we can easily prove the consistency of $ZFC$ with the existence of
$L$ generic subsets of $P$ for every $P$ definable in $L.$ Namely, the
axiom schema $IFS_L$ (For internal forcing schema over $L\!$) defined
below is consistent with $ZFC.$
\begin{defi}
$IFS_L$ is the axiom schema which says for every formula $\phi(x),$ if
$L\models$ there is a unique partial order $P$ such that $\phi(P),$
then there is an $L$ generic subset of $P$ in the universe $V.$
\end{defi}
\noindent $IFS_L$ is a natural closure condition on a universe
of set theory. Given a class model of $ZFC$ which has no inner class model of the
form $L[G]$ for some partial order $P$ definable in $L,$ we can (by
forcing) consistently imagine expanding the model to include such a
class. Conversely, no class model of $ZFC+IFS_L$ can
be contained in a class model of $ZFC$ which does not satisfy $IFS_L.$
\begin{theorem}
If there is a sequence $\langle M_n\mid n<\omega\rangle$ of transitive models with
$M_n\models ZFC_n$ where $ZFC=\bigcup\limits_{n\in\omega}ZFC_n,$ then $Con(ZFC+IFS_L)$
\end{theorem}
{\sc proof} \hspace{0.1in} By the compactness theorem and forcing.
\begin{theorem}\label{IFSLCBLG}
If $V$ is a model of $ZFC,$ then $V\models IFS_L$ if and only if
$V\models$ every set definable in $L$ is countable.
\end{theorem}
{\sc proof} \hspace{0.1in} Certainly if every set definable in $L$ is countable, then
every partially ordered set $P$ definable in $L$ is countable,
so the set of dense subsets of $P$ in $L$ is also countable, and hence
$P$ has generic subsets over $L$ in the universe. In the other
direction, if $s$ is a set definable in $L,$ then so is the partially
ordered set of finite partial one to one functions from $s$ to $\omega,$
so an $L$ generic subset of this partial ordering
is a witness to $\big|s\big|=\omega.$
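\noindent To spell out the density argument used above (a routine verification we add for the reader's convenience): for each $a\in s$ the set
$$D_a=\big\{p\mid a\in dom\,p\big\}$$
is dense in the partial order of finite partial one to one functions from $s$ to $\omega,$ since any condition can be extended to $a$ by choosing a value outside its finite range. Each $D_a$ belongs to $L,$ so an $L$ generic subset $G$ meets every $D_a,$ and $\bigcup G$ is then a one to one function from $s$ into $\omega.$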
\noindent Perhaps $IFS_L$ is not surprising since $ZFC+0^{\#}\ \vdash\ IFS_L.$
But the same reasoning as led to $IFS_L$ leads to the following
stronger schema, $IFS_{Ab\,L[r]}$ (For internal forcing schema for absolute
class models of $ZFC$ constructible over an absolutely definable real) which implies that if
$0^{\#}$ exists, then all sets definable in $L[0^{\#}]$ are countable.
\begin{defi}
A subset $r$ of $\omega$ is said to be absolutely definable if for
some $\Pi_1$ formula $\theta(x),$
\begin{enumerate}
\item $V\models \theta(r)$
\item $ZFC\vdash \exists\,x\theta(x)\ \rightarrow\
\exists!\,x\theta(x)$
\end{enumerate}
\end{defi}
\begin{defi}
$IFS_{Ab\,L[r]}$ is the axiom schema of set theory which says if $r$
is an absolutely definable real then all definable elements of $L[r]$
are countable (equivalently, every partial order $P$ definable in
$L[r]$ has an $L[r]$ generic subset.)
\end{defi}
\noindent The following theorem is a formal justification of $IFS_{Ab\,L[r]}.$
\begin{theorem}\label{CONIFSr}
Suppose $V$ is a countable transitive model of $ZFC$ and let $\big\{\theta_i(x)\mid
i<\omega\big\}$ be the list of all formulas defining absolute reals
such that $V\models
\bigwedge\limits_{i<\omega}\exists\,x\theta_i(x).$ Suppose that the
supremum of the ordinals definable in $V$ is in $V.$ Then there is a
countable transitive extension $V'$ of $V$ with the same ordinals such that
$$V'\models ZFC+IFS_{Ab\,L[r]}+\bigwedge\limits_{i<\omega}\exists\,x\theta_i(x)$$
\end{theorem}
{\sc proof} \hspace{0.1in} Let $\alpha^*$ be the sup of all the ordinals definable in $L.$
Let $P$ be the set of finite partial one to one functions from
$\alpha^*$ to $\omega.$ Let $V'=V[G]$ where $G$ is a $V$ generic
subset of $P.$
To finish the proof it is enough to prove the following claim.
\noindent Claim: If $\psi(x)$ defines a real in $V[G]$ then it
is in $V.$\\
{\sc proof} \hspace{0.1in} Since $P$ is separative, if $p\in P$ and $\pi$ is an
automorphism of $P,$ then for every formula $\varphi(v_1,\ldots,v_n)$
and names $x_1,\ldots,x_n$
$$*\ \ \ \ p\Vdash \varphi(x_1,\ldots,x_n)\ \hbox{ iff }\ \pi p\Vdash
\varphi(\pi x_1,\ldots,\pi x_n)$$
Let $\varphi(x)=\exists\,Y(\psi(Y)\ \wedge\ x\in Y).$ Let $n\in
\omega.$ If no $p\in P$ forces
$\varphi(\check n),$ then
$\big|\big|\varphi(\check n)\big|\big|=0.$ So let $p\in P$ be such that $p\Vdash
\varphi(\check n).$ By $*,$ if $\pi$ is an automorphism
of $P$ then $\pi p\Vdash
\varphi(\check n).$ Let $\pi$ be a permutation of
$\omega.$ $\pi$ induces a permutation of $P$ by letting for $p\in P,$
$dom\,\pi p=dom\,p$ and letting $\pi p(\alpha)=\pi(p(\alpha)).$ By
letting $\pi$ vary over the permutations of $\omega$ it follows that
$\big|\big|\varphi(\check n)\big|\big|=1.$ Let $\dot r$ be the name
with domain $\big\{\check n\mid n<\omega\big\}$ and such that
$$\dot r(\check n)=\big|\big|\varphi(\check n)\big|\big|$$
$i_G(\dot r)=r,$ but then
$r=\big\{n\mid \big|\big|\varphi(\check n)\big|\big|=1\big\}$ which
means it is in $V.$
\begin{coro}
$ZFC+IFS_{Ab\,L[r]}+$'there are no absolutely definable
non-constructible reals' is
consistent. (Relative to the assumption of a countable transitive
model of $L$ with its definable ordinals having a supremum in the
model.)
\end{coro}
\noindent Since classes of the form $L[r]$
are absolute if $r$ is an absolutely definable real, they provide
reference points from which to measure the size of the universe.
We can extend the schema $IFS_{Ab\,L[r]}$ by exploiting the similarity
between a
class such as $L({\Bbb R})$ and a class of the form $L[r]$ where $r$
is an absolutely definable real. We can
argue that if $P$ is a partial order definable in $L({\Bbb R}),$ and
if a $V$ generic subset of $P$ cannot add any reals to $V,$ then an
$L({\Bbb R})$ generic subset of $P$ should exist in $V.$ $L({\Bbb R})$ is
concrete in the sense that the interpretation of $L({\Bbb R})$ is absolute
in any class model containing ${\Bbb R},$ and thereby,
like classes of the form $L[r]$ where $r$ is an absolutely definable
real, $L({\Bbb R})$ provides a reference point from which to measure
the size of the universe.
This leads to the following natural strengthening of $IFS_L$ and
$IFS_{Ab\,L[r]}.$
\begin{defi}
$x\in V$ is said to be weakly absolutely definable of the form
$V_{\alpha}$ if for some
formula $\psi(v)$ which provably defines an ordinal and which is
provably $\Delta_1$ from $ZF,$
$$V\models \exists!\alpha\Big(\psi(\alpha)\ \wedge\ \forall y(y\in x\
\leftrightarrow\ \rho(y)\leq\alpha)\Big)$$
Let $\theta(x)$ denote $\exists!\alpha\Big(\psi(\alpha)\ \wedge\ \forall y(y\in x\
\leftrightarrow\ \rho(y)\leq\alpha)\Big)$ and let $ZF_{\theta}$ be a
finite part of $ZF$ which proves $\psi(v)$ is $\Delta_1$ and proves
$\psi(v)$ defines an ordinal. $\theta(x)$
is said to define a weakly absolutely definable set of the form
$V_{\alpha}.$ ($\rho(x)$ denotes the foundation rank.)
\end{defi}
\begin{defi}
$IFS_{W\!Ab\,L(V_{\alpha})}$ is the axiom schema of set theory which says for every
weakly absolutely definable set of the form $V_{\alpha}$ for every
partial order
$P$ definable in
$L(V_{\alpha}),$ if
$$\big|\big|V_{\alpha}^{V[G]}=V_{\alpha}^{V}\big|\big|^{(r.o.P)^V}=1$$
then there exists an $L(V_{\alpha})$ generic subset $G$ of $P.$
\end{defi}
\begin{theorem}
If there is a sequence $\langle M_n\mid n<\omega\rangle$ of transitive models with
$M_n\models ZFC_n$ where $ZFC=\bigcup\limits_{n\in\omega}ZFC_n$ then $Con(ZFC+IFS_{W\!Ab\,L(V_{\alpha})})$
\end{theorem}
{\sc proof} \hspace{0.1in} Let $\langle\theta_i\mid i<n\rangle$ be a list of formulas
defining weakly absolute sets of the form $V_{\alpha}.$ Let
$\big\{\varphi_{ij}(x)\mid i<n, j<m\big\}$ be a set of formulas. It
is enough to show the consistency with $ZFC$ of
$$\bigwedge\limits_{i<n,j<m}\exists
V_{\alpha_i}\Big[\theta_i(V_{\alpha_i})
\ \wedge\ \exists! P_{ij}\big(L(V_{\alpha_i})\models \varphi_{ij}(P_{ij})\big)\
\longrightarrow$$
$$\exists G\big(G\subseteq P_{ij}\ \wedge\ G\hbox{ is }L(V_{\alpha_i})\hbox{ generic}\big)\Big]$$
Let $M$ be a countable transitive model of enough of $ZFC$ (including
$\mathop{\wedge}\limits_{i<n}ZF_{\theta_i}.\!$) Let
$\langle\alpha_0,\ldots,\alpha_{n-1}\rangle$ be the increasing sequence of ordinals such that
$$M\models \theta_i(V_{\alpha_i})$$
for $i<n.$
We define by induction on $(i,j)\in n\times m$ sets $G_{ij}.$ Suppose $P_{ij}$ is a partial order
definable in $L(V_{\alpha_i}^{M[\{G_{h,l}|h\leq
i,l<j\}]})$ by $\varphi_{ij}(x)$ and
there exists a $M[\{G_{h,l}|h\leq
i,l<j\}]$ generic subset of $P_{ij}$ not increasing
$$V_{\alpha_i}^{M[\{G_{h,l}|h\leq
i,l<j\}]}$$
Then let $G_{ij}$ be such an $M[\{G_{h,l}|h\leq i,l<j\}]$ generic
subset of $P_{ij}.$ (If not, let $G_{ij}=\emptyset.$) Let
$$N=M[\{G_{ij}|i<n,j<m\}]$$
$N$ has the property that if $P_{ij}$ is a partial
order definable in $L(V_{\alpha_i})$ by $\varphi_{ij}(x)$ and $G$ is an $N$ generic subset
of $P_{ij}$ such that
$$\big|\big|V_{\alpha_i}^{N[G]}=V_{\alpha_i}^N\big|\big|^{(r.o.P)^N}=1$$
then an $L(V_{\alpha_i}^{N})$ generic subset of $P_{ij}$ exists in $N.$
\begin{theorem}
If $\langle M_n\mid n<\omega\rangle$ is a sequence of transitive models with
$M_n\models ZFC_n$ where $ZFC=\bigcup\limits_{n\in\omega}ZFC_n,$ then $Con(ZFC+IFS_{W\!Ab\,L[V_{\alpha}]}+IFS_{Ab\,L[r]})$
\end{theorem}
{\sc proof} \hspace{0.1in} Same as the last theorem except we start with a model of enough
of $ZFC+IFS_{Ab\,L[r]}.$
\begin{theorem}\label{Jech}
$V[G]$ has no functions $f:\kappa\rightarrow\kappa$ not in the ground
model if and only if $r.o.P$ is $(\kappa,\kappa)\!$-distributive.
\end{theorem}
{\sc proof} \hspace{0.1in} See [Jech1].
\begin{coro}
$IFS_{W\!Ab\,L(V_{\alpha})}$ is equivalent to the axiom schema of set
theory which says for every
weakly absolutely definable set of the form $V_{\alpha},$ for every
partial order
$P$ definable in
$L(V_{\alpha}),$ if
$$(r.o.P)^V \hbox{ is }(\kappa,\kappa)\hbox{-distributive}$$
for each $\kappa$ such that for some $\beta<\alpha,\ \kappa\leq \big|V_{\beta}\big|,$
then there exists an $L(V_{\alpha})$ generic subset $G$ of $P.$
\end{coro}
\begin{theorem}
$ZFC+IFS_{W\!Ab\ L(V_{\alpha})}\vdash\ CH$
\end{theorem}
{\sc proof} \hspace{0.1in} Let $P=$ the set of one to one functions from countable ordinals into
${\Bbb R}.$ Since $P$ is $\sigma$ closed, $\omega_1=\omega_1^{L({\Bbb
R})},$ and $P$ is a definable element of $L({\Bbb R}),$ there is an
$L({\Bbb R})$ generic subset of $P$ in $V.$ If $\alpha$ is an ordinal
less than $\omega_1$ and $r$ is a real, let $D_{\alpha}=\big\{p\in
P\mid \alpha\in dom\,p\big\}$ and $D_r=\big\{p\in
P\mid r\in ran\,p\big\}.$ For each $\alpha<\omega_1,$ $G\cap
D_{\alpha}\neq\emptyset$ and for each $r\in
{\Bbb R},$ $G\cap D_r\neq\emptyset,$ so $\bigcup G$ is a bijection from
$\omega_1$ to ${\Bbb R}.$
Perhaps the following is a better illustration of the kind of result
obtainable from $ZFC+IFS_{W\!Ab\ L(V_{\alpha})}.$
\begin{defi}
A Ramsey ultrafilter on $\omega$ is an ultrafilter on $\omega$ such
that every coloring of the pairs from $\omega$ with two colors has a homogeneous set
in the ultrafilter.
\end{defi}
\begin{theorem}
$ZFC+IFS_{W\!Ab\ L(V_{\alpha})}\vdash $ there is a Ramsey ultrafilter
on $\omega.$
\end{theorem}
{\sc proof} \hspace{0.1in} Let $P$ be the partial order $(P(\omega),\subseteq^*)$ where
$P(\omega)$ is the power set of $\omega$ and $a\subseteq^* b$ means $a$ is
a subset of $b$ except for finitely many elements. $P$ is definable in
$L({\Bbb R})$ and is $\sigma$ closed. The generic object is a Ramsey ultrafilter
over $L({\Bbb R}),$ and since all colorings of the pairs from $\omega$ are in $L({\Bbb R}),$ it is a
Ramsey ultrafilter over $V.$
\noindent One can argue that $IFS_{W\!Ab\ L(V_{\alpha})}$ is not a natural axiom
since among the definable sets $X$ with the property that $L(X)$ is
absolute when not increasing $X,$ why should you choose only
those of the form $V_{\alpha}?$ But it is natural in the sense it is a
way of forcing the universe as large as possible with respect to the
existence of generics by first fixing the
height of the models under consideration and then by fixing more and
more of their widths. In any case we should consider the
strengthenings of
$IFS_{W\!Ab\,L(V_{\alpha})}$ defined below.
\begin{defi}
$x\in V$ is said to be weakly absolutely definable if for some
formula $\psi(x)$ which is provably $\Delta_1$ from $ZF,$
$$V\models \forall y(y\in x\ \leftrightarrow\ \psi(y))$$
\end{defi}
\begin{defi}
$IFS$ is the axiom schema of set theory which says for every
weakly absolutely definable set $X,$ for every partial order $P$ definable in
$L(X),$ if
$$\big|\big|X^{V[G]}=X^{V}\big|\big|^{(r.o.P)^V}=1$$
then there exists an $L(X)$ generic subset $G$ of $P.$
\end{defi}
\noindent If $X$ is a weakly absolutely definable set and $P$ is a partial ordering
definable in $L(X)$ such that
$$\big|\big|X^{V[G]}=X^{V}\big|\big|^{(r.o.P)^V}=1$$
and if there is no $L(X)$ generic subset of $P$ in $V,$
we say that $V$ has a gap. $IFS$ says there are no gaps.
The intuition that such gaps should not
occur in $V$ leads to the following:
\begin{conj}
$ZFC+IFS$ is consistent.
\end{conj}
\noindent If $ZFC+IFS$ is consistent, then this means that it is
consistent that the universe is complete with respect to the
natural yardstick classes (the classes of the form $L(X)$ where $X$
is weakly absolutely definable). In my view, confirming the consistency of
$ZFC+IFS$ would be strong evidence that the universe of set theory
conforms to the axioms of $IFS.$ One reason for this opinion is that there is no a priori
reason for the consistency of $ZFC+IFS,$ so if $ZFC+IFS$ is
consistent, it seems that confirming its
consistency would involve some deep mathematics implying $IFS$
should be taken seriously.
\section{Formalizing the arguments in favor of $IFS_L$ and the other schemata}
\noindent In this section we try to formalize the vague notion that $IFS_L$ is a
natural closure condition on the universe, and that gaps in
general are esthetically undesirable. For simplicity we concentrate on $IFS_L.$
\begin{defi}
Let $T$ be a recursive theory in the language of set theory extending $ZFC.$ Let $P$
be a unary predicate.
If $\varphi$ is a formula of set theory
then $\varphi^*$ is $\varphi$ with all its quantifiers restricted to
$P,$ i.e., if $\exists x$ occurs in $\varphi$ then it is replaced by
$\exists x(P(x)\,\wedge\ldots)$ and $\forall x$ is replaced by
$\forall x(P(x)\ \rightarrow\ldots).$
The theory majorizing $T,$ $T',$ is the recursive theory in the
language $\big\{\varepsilon, P(x)\big\}$ such that
\begin{enumerate}
\item $\varphi\in T\ \rightarrow\ \varphi^*\in T'$
\item ``$P$ is transitive'' $\in T'$
\item $\forall x(x\in ORD\ \rightarrow\ P(x))\in T'$
\item $ZFC\subseteq T'$
\end{enumerate}
\noindent If $\theta(x)=\forall y(y\in x\leftrightarrow \psi(y))$ is a
formula defining a weakly absolutely
definable set then the theory majorizing $T$ with respect to
$\theta(x)$ is $T'$ plus all the axioms of the form
$$\Big(\varphi_1\ \wedge\ \ldots\ \wedge\ \varphi_n\ \rightarrow\
\big(\psi(y)\leftrightarrow \exists z\psi_0(y,z)\leftrightarrow\forall
z\psi_1(y,z)\big)\Big)\ \longrightarrow\ \Big(\forall y(\psi(y)\ \rightarrow\ P(y))\Big)$$
where $\varphi_1,\ldots,\varphi_n\in ZF$ and $\psi_0(y,z)$ and
$\psi_1(y,z)$ are $\Delta_0$ formulas.
\end{defi}
\begin{theorem}
Let $T$ be a recursive extension of $ZFC.$ Let
$T=\bigcup\limits_{n\in\omega}T_n$ where for some recursive function
$F,$ for each $n,$ $F(n)=T_n,$ a finite subset of $T$ and the $T_n$ are increasing.
If there is a sequence $\langle M_n\mid n<\omega\rangle$ of
countable transitive models such that
$$M_n\models T_n$$
then $T'+IFS_L$ ($T'$ is the theory majorizing $T\!$) is consistent
and there is a sequence $\langle N_n\mid n\in\omega\rangle$ of
countable transitive models such that $$N_n\models T_n'$$
where $T'=\bigcup\limits_{n\in\omega}T'_n$ and for some recursive
function $H,$ for each $n\in\omega,$ $H(n)=T'_n$ a
finite subset of $T'.$
\end{theorem}
{\sc proof} \hspace{0.1in} Let $IFS_L=\bigcup\limits_{n\in\omega}(IFS_L)_n$ where for each
$n\in\omega,\ (IFS_L)_n$ is finite. We can find a subsequence
$\langle N_n\mid n\in\omega\rangle$ of the $\langle
M_n\mid n\in\omega\rangle$ and $N_n$-generic sets $G_n$ such that
$N_n[G_n]\models (IFS_L)_n,\ N_n\models T_n.$ Let $N_n[G_n]^*$
be the model in the language $\big\{\varepsilon,P(x)\big\}$ obtained
by letting the interpretation of $P(x)$ be $N_n.$ Let $D$ be a
nonprincipal ultrafilter on $\omega.$ Then
$$\prod N_n[G_n]^*/D$$
is a model for $T'+IFS_L.$
\begin{defi}
A theory $T$ extending $ZFC$ is $\omega-\!$complete if whenever $\varphi(x)$ is a
formula of set theory and if for each natural
number $n,$
$$T\vdash\varphi(n)$$
then $T\vdash \forall n\in\omega\varphi(n).$
\end{defi}
\begin{theorem}
Let $T$ be a recursive extension of $ZFC$ and suppose it has a
consistent, complete and $\omega-\!$complete extension $T^*.$ Then
$T'+IFS_L$ is
consistent.
\end{theorem}
{\sc proof} \hspace{0.1in} By reflection in $T^*,$ by its $\omega-\!$completeness and by the axiom of choice in $T^*,$
$$T^*\vdash\exists\langle N_n\mid n\in\omega\rangle$$
with the $\langle N_n\mid n\in\omega\rangle$ having the same properties as in
the previous theorem. As in the previous theorem, since $ZFC\subset T,$ $T^*\vdash Con(T'+IFS_L).$
Since $T^*$ is $\omega-\!$complete, (by the omitting
types theorem) it has a model $M$
with the standard set of integers. Since $M\models T^*,$
$$M\models Con(T'+IFS_L)$$
and as $Con(T'+IFS_L)$ is an arithmetical statement, it must really be
true.
\noindent Certainly if the hypothesis of the theorem fails, then $T$
cannot be a suitable axiom system for set theory.
\begin{defi}
If $\theta(x)$ is a formula defining a weakly absolutely definable set,
then $IFS\restriction\theta(x)$ is $IFS$ restricted to the set defined by
$\theta(x),$ i.e., it says for all partial orders $P$ definable in
$L(X),$ where $X$ is defined by $\theta(x),$ such that
$$\big|\big|X^{V[G]}=X^{V}\big|\big|^{(r.o.P)^V}=1$$
there is an $L(X)$ generic
subset of $P.$
\end{defi}
\begin{theorem}
Let $\theta(x)$ be a formula defining a weakly absolutely definable set.
Let $T$ be a recursive extension of $ZFC$ and suppose it has a
consistent, complete and $\omega-\!$complete extension $T^*.$ Then
$T_{\theta(x)}'+IFS\restriction\theta(x)$ is
consistent.
\end{theorem}
{\sc proof} \hspace{0.1in} Same as above.
\begin{theorem}
Let $\theta(x)$ be a formula defining a weakly absolutely definable set.
Let $T$ be a recursive extension of $ZFC+IFS\restriction\theta(x)$ and
suppose $T'_{\theta(x)}$ majorizes $T$ with respect to $\theta(x).$
Then $T_{\theta(x)}'\vdash IFS\restriction\theta(x).$
\end{theorem}
{\sc proof} \hspace{0.1in} Working in $T'$ the generics in the inner model are still generic over $L(X)$
since the inner model is a transitive class containing
all the ordinals.
\noindent The theorems in this section are meant as the formalization
of the notion that we can 'consistently imagine' a class model of
$ZFC$ not satisfying $IFS_L$ as being contained in a larger class
satisfying $ZFC+IFS_L,$ and that models of $ZFC$ not satisfying
$IFS$ have a gap.
\section{Conclusion}
\noindent These axiom schemata lead to many questions, among
them
\begin{enumerate}
\item Are there models of $IFS$ or $IFS_{W\!Ab\,L[V_{\alpha}]}$ which are
forcing extensions of $L$?
\item Are there similar natural schemata making the universe large,
but contradicting $IFS$ or
$IFS_{W\!Ab\,L[V_{\alpha}]}?$
\item What are the consequences for ordinary mathematics of these
axioms?
\end{enumerate}
\noindent The conventional view of the history of set theory says that G\"{o}del in
1938 proved that the consistency of $ZF$ implies the consistency of
$ZFC$ and of $ZFC+GCH,$ and that Cohen with the invention of forcing
proved that $Con(ZF)$ implies $Con(ZF+\neg AC)$ and $Con(ZFC+\neg
GCH)$ but from the point of view of $IFS_L$ a better way to state the history
would be to say that G\"{o}del
discovered $L$ and Cohen proved there are many generic sets over $L.$
I think confirming the consistency of $IFS$ with $ZFC$ would be a
vindication of the idea that generics over partial orders definable in
$L(X)$ with $X$ a weakly absolutely definable set exist, and
thereby put a crack in the armor surrounding the continuum hypothesis as
$ZFC+IFS\restriction{\Bbb R}\vdash CH.$ On the other hand, if
$ZFC+IFS$ is not consistent, it would show the universe must have some
gaps, i.e., incomplete with respect to some concrete set, an
esthetically unpleasing result.
It is ironic that although mathematics and especially mathematical
logic is an art noted for its precise and formalized reasoning,
it seems that in order to solve problems at the frontiers of logic's
foundations we must tackle questions of an esthetic nature.
\pagebreak
\begin{center}
REFERENCES
\end{center}
\noindent 1. [CK] C. C. Chang and J. Keisler, {\em Model Theory},
North Holland Publishing Co.
\noindent 2. [Jech1] T. Jech, {\em Multiple Forcing}, Cambridge
University Press.
\noindent 3. [Jech2] T. Jech, {\em Set Theory}, Academic Press.
\end{document} |
\begin{document}
\selectlanguage{english}
\title{Asymptotic Fixed-Speed Reduced Dynamics for Kinetic Equations in Swarming}
\author{
Mihai Bostan
\thanks{Laboratoire d'Analyse, Topologie, Probabilit\'es LATP, Centre de
Math\'ematiques et Informatique CMI, UMR CNRS 7353, 39 rue Fr\'ed\'eric Joliot Curie, 13453 Marseille Cedex 13
France. E-mail : {\tt bostan@cmi.univ-mrs.fr}}
, J. A. Carrillo
\thanks{ICREA (Instituci\'o Catalana de Recerca i Estudis Avan\c{c}ats)
and Departament de Matem\`atiques, Universitat Aut\`onoma de
Barcelona, 08193 Bellaterra Spain. E-mail : {\tt
carrillo@mat.uab.es}. {\it On leave from:} Department of
Mathematics, Imperial College London, London SW7 2AZ, UK.}}
\date{ (\today)}
\maketitle
\begin{abstract}
We perform an asymptotic analysis of general particle systems
arising in collective behavior in the limit of large
self-propulsion and friction forces. These asymptotics impose a
fixed speed in the limit, and thus a reduction of the dynamics to
a sphere in the velocity variables. The limit models are obtained
by averaging with respect to the fast dynamics. We can include all
typical effects in the applications: short-range repulsion,
long-range attraction, and alignment. For instance, we can
rigorously show that the Cucker-Smale model is reduced to the
Vicsek model without noise in this asymptotic limit. Finally, a
formal expansion based on the reduced dynamics allows us to treat
the case of diffusion. This technique follows closely the
gyroaverage method used when studying the magnetic confinement of
charged particles. The main new mathematical difficulty is to deal
with measure solutions in this expansion procedure.
\end{abstract}
\paragraph{Keywords:}
Vlasov-like equations, Measure solutions, Swarming, Cucker-Smale
model, Vicsek model, Laplace-Beltrami operator.
\paragraph{AMS classification:} 92D50, 82C40, 92C10.
\section{Introduction}
\label{Intro}
\indent
This paper is devoted to continuum models for the dynamics of
systems involving living organisms such as flocks of birds, schools
of fish, swarms of insects, myxobacteria... The individuals of
these groups are able to organize in the absence of a leader, even
when starting from disordered configurations \cite{ParEde99}.
Several minimal models describing such self-organizing phenomena
have been derived \cite{VicCziBenCohSho95, GreCha04,
CouKraFraLev05}. Most of these models include three basic effects:
short-range repulsion, long-range attraction, and reorientation or
alignment, in various ways, see \cite{HW} and particular
applications to birds \cite{HCH09} and fish \cite{BTTYB,BEBSVPSS}.
We first focus on populations of individuals driven by
self-propelling forces and pairwise attractive and repulsive
interaction \cite{LevRapCoh00, DorChuBerCha06}. We consider
self-propelled particles with Rayleigh friction
\cite{ChuHuaDorBer07, ChuDorMarBerCha07, CarDorPan09,UAB25},
leading to the Vlasov equation in $d=2,3$ dimensions:
\begin{equation}
\label{Equ1} \partial _t \fe + v \cdot \nabla _x \fe + a ^\eps
(t,x) \cdot \nabla _v \fe + \frac{1}{\eps} \Divv\{\fe \abv\}=
0,\;\;(t,x,v) \in \R_+ \times \R^d \times \R^d
\end{equation}
where $\fe = \fe (t,x,v) \geq 0$ represents the particle density
in the phase space $(x,v) \in \R^d \times \R^d$ at any time $t \in
\R_+$, $a ^\eps $ stands for the acceleration
\[
a^\eps (t,\cdot) = - \nabla _x U \star \rho ^\eps (t, \cdot
),\;\;\rho ^\eps (t, \cdot ) = \intv{\fe (t, \cdot,
v)\;\mathrm{d}v}\, ,
\]
and $U$ is the pairwise interaction potential modelling the
repelling and attractive effects. Here, the propulsion and
friction forces coefficients $\alpha ^\eps =
\frac{\alpha}{\eps}>0$, $\beta ^\eps = \frac{\beta}{\eps}
>0$ are scaled in such a way that for $\eps\to 0$ particles will tend
to move with asymptotic speed $\sqrt{\tfrac{\alpha}\beta}$. These
models have been shown to produce complicated dynamics and
patterns such as mills, double mills, flocks and clumps, see
\cite{DorChuBerCha06}. Assuming that all individuals move with
constant speed also leads to spatial aggregation, patterns, and
collective motion \cite{CziStaVic97, EbeErd03}.
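The scaling of the propulsion and friction terms can be illustrated by a direct numerical experiment. The following short script (added here purely as an illustration; the values of $\alpha$, $\beta$, $\eps$ and of the constant acceleration are arbitrary) integrates the velocity equation $\dot{v} = a + \frac{1}{\eps}(\alpha - \beta |v|^2)v$ for a single particle and shows that the speed relaxes quickly to an $O(\eps)$-neighborhood of $\sqrt{\alpha/\beta}$:
\begin{verbatim}
# Illustrative sketch (not part of the analysis below): relaxation of one velocity
# v' = a + (alpha - beta*|v|^2)*v/eps towards the sphere |v| = r = sqrt(alpha/beta).
import numpy as np

alpha, beta, eps = 1.0, 1.0, 0.05
r = np.sqrt(alpha / beta)
a = np.array([0.2, 0.0])      # arbitrary constant acceleration
v = np.array([1.8, 0.7])      # initial velocity with |v| far from r
dt, T = 1e-3, 2.0
for _ in range(int(T / dt)):
    v = v + dt * (a + (alpha - beta * np.dot(v, v)) * v / eps)
print(np.linalg.norm(v), r)   # final speed is O(eps)-close to r
\end{verbatim}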
Another source of models arises from introducing alignment at the
modelling stage. A popular choice in recent years to include
this effect is the Cucker-Smale reorientation procedure
\cite{CS2}. Each individual in the group adjusts its relative
velocity by averaging with all the others. This velocity averaging
is weighted in such a way that closer individuals in space have
more influence than further ones. The continuum kinetic version of
them leads to Vlasov-like models of the form \eqref{Equ1} in which
the acceleration is of the form
\[
a^\eps (t,\cdot) = - H \star f^\eps (t, \cdot )\, ,
\]
where $\star$ stands for the $(x,v)$-convolution, abusing notation
a bit, with the nonnegative interaction kernel
$H:\R^{2d}\longrightarrow \R^d$. In the original Cucker-Smale
work, the interaction is modelled by $H(x,v)=h(x)v$, with the
weight function $h$ being a decreasing radial nonnegative
function. We refer to the extensive literature in this model for
further details \cite{HT08,HL08,CFRT10,review,MT11}.
In this work, we will consider the Vlasov equation \eqref{Equ1}
where the acceleration includes the three basic effects discussed
above, and then takes the form:
\begin{equation}\label{accel}
a^\eps (t,\cdot) = - \nabla _x U \star \rho ^\eps (t, \cdot ) - H
\star f^\eps (t, \cdot )\, .
\end{equation}
We will assume that the interaction potential $U\in C^2_b(\R^d)$,
$U$ bounded continuous with bounded continuous derivatives up to
second order, and $H(x,v)=h(x)v$ with $h\in C^1_b(\R^d)$ and
nonnegative. Under these assumptions the model
\eqref{Equ1}-\eqref{accel} can be rigorously derived as a
mean-field limit \cite{Neu77, BraHep77, Dob79,CCR10,BCC11} from
the particle systems introduced in \cite{DorChuBerCha06,CS2}.
We will first study in detail the linear problem, assuming that
the acceleration $a = a(t,x)$ is a given global-in-time bounded
smooth field. We investigate the regime $\eps \searrow 0$, that is
the case when the propulsion and friction forces dominate the
potential interaction between particles. At least formally we have
\begin{equation}
\label{EquAnsatz} \fe = f + \eps \fo + \eps ^2 f ^{(2)} + ...
\end{equation}
where
\begin{equation}
\label{Equ2} \Divv\{f \abv \} = 0
\end{equation}
\begin{equation}
\label{Equ3} \partial _t f + \Divx (fv) + \Divv (f a(t,x)) +
\Divv\{\fo \abv \} = 0\,,
\end{equation}
up to first order. Therefore, to characterize the zeroth order
term in the expansion we need naturally to work with solutions
whose support lies on the sphere of radius $r :=
\sqrt{\alpha/\beta}$ denoted by $r\sphere$ with $\sphere = \{v\in
\R^d : |v| = 1\}$. In turn, we need to work with measure solutions
to \eqref{Equ2} which makes natural to set as functional space the
set of nonnegative bounded Radon measures on $\R^d\times\R^d$
denoted by ${\cal M}_b ^+ (\R^d\times\R^d)$. We will be looking at
solutions to \eqref{Equ1} which are typically continuous curves in
the space ${\cal M}_b ^+ (\R^d\times\R^d)$ with a suitable notion
of continuity to be discussed later on. We will denote by
$\fe(t,x,v)\, \mathrm{d}(x,v)$ the integration against the measure
solution $\fe(t,x,v)$ of \eqref{Equ1} at time $t$. For the sake of
clarity, this is done independently of being the measure $\fe(t)$
absolutely continuous with respect to Lebesgue or not, i.e.,
having a $L^1(\R^d\times\R^d)$ density or not.
\begin{pro}\label{Kernel}
Assume that $(1+|v|^2)F \in {\cal M}_b ^+ (\R^d)$. Then $F$ is a
solution to \eqref{Equ2} if and only if $\supp F \subset \{0\}
\cup r \sphere$.
\end{pro}
The condition \eqref{Equ2} appears as a constraint, satisfied at
any time $t \in \R_+$. The time evolution of the dominant term $f$
in the Ansatz \eqref{EquAnsatz} will come by eliminating the
multiplier $\fo$ in \eqref{Equ3}, provided that $f$ verifies the
constraint \eqref{Equ2}. In other words we are allowed to use
those test functions $\psi (x,v)$ which remove the contribution of
the term $\Divv\{ \fo \abv \}$ {\it i.e.,}
\[
\intxv{\abv \cdot \nabla _v \psi \;\fo(t,x,v)\, \mathrm{d}(x,v) }
= 0.
\]
Therefore we need to investigate the invariants of the field $\abv
\cdot \nabla _v$. The admissible test functions are mainly those
depending on $x$ and $v/|v|, v \neq 0$. The characteristic flow
$(s,v) \to {\cal V}(s;v)$ associated to $\tfrac1\eps \abv \cdot
\nabla _v$
\[
\frac{\mathrm{d}{\cal V}}{\mathrm{d}s} = \frac1\eps
\abvs,\;\;{\cal V}(0;v) = v
\]
will play a crucial role in our study. It will be analyzed in
detail in Section \ref{LimMod}. Notice that the elements of $\A$
are the equilibria of $\abv \cdot \nabla _v $. It is easily seen
that the jacobian of this field
\[
\partial _v \{ \abv \} = (\alpha - \beta |v|^2 ) I - 2 \beta v \otimes v
\]
is negative on $r\sphere$, saying that $r\sphere$ are stable
equilibria. The point $0$ is unstable, $\partial _v \{ \abv \}
|_{v = 0}=\alpha I$. When $\eps \searrow 0$ the solutions
$(\fe)_\eps$ concentrate on $\xA$, leading to a limit curve of
measures even if $(\fe)_\eps$ were smooth solutions. We can
characterize the limit curve as solution of certain PDE whenever
our initial measure does not charge the unstable point $0$.
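For concreteness (a computation we add; it is implicit in the discussion above): on $r\sphere$ we have $\alpha - \beta |v|^2 = 0$, so
\[
\partial _v \{ \abv \} \big|_{|v| = r} = - 2 \beta \, v \otimes v,
\]
whose eigenvalues are $-2\beta r^2 = -2\alpha$ in the radial direction $v/|v|$ and $0$ in the $d-1$ tangential directions. The radial component of the fast field is therefore attracting towards $r\sphere$ at rate $2\alpha / \eps$, whereas the tangential component is neutral at this order; at $v=0$ the linearization $\alpha I$ is repelling at rate $\alpha / \eps$.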
\begin{thm} \label{MainResult}
Assume that $a \in \litwoix{}$, $(1 + |v|^2) \fin \in \mbxv{}$,
$\supp \fin \subset \{(x,v) :|v|\geq r_0>0\}$. Then $(\fe)_\eps$
converges weakly $\star$ in $\litmbxv{}$ towards the solution of
the problem
\begin{equation}
\label{Equ22} \partial _t f + \Divx(fv) + \Divv \left \{f \imvv a \right \} = 0
\end{equation}
\begin{equation}
\label{Equ23} \Divv \{f \abv \} = 0
\end{equation}
with initial data $f(0) = \ave{\fin}$ defined by
$$
\intxv{\psi (x,v) \ave{\fin}(x,v)\, \mathrm{d}(x,v)} = \intxv{\psi
\left (x, r \vsv\right ) \fin(x,v)\, \mathrm{d}(x,v)}\,,
$$
for all $\psi \in \czcxv$.
\end{thm}
In the rest of the paper, we will refer to $\ave{\fin}$ as the projected
measure on the sphere of radius $r$ corresponding to $\fin$. Let
us point out that the previous result can be equivalently written
in spherical coordinates by saying that $f(t,x,\omega)$ is the
measure solution to the evolution equation on $(x,\omega)\in\R ^d
\times r \sphere$ given by
\begin{equation*}
\partial _t f + \Divx(f\omega) + \Divo \left \{f
\imoo a \right \} = 0 \,.
\end{equation*}
These results for the linear problem, when $a(t,x,v)$ is given, can
be generalized to the nonlinear counterparts where $a(t,x)$ is
given by \eqref{accel}. The main result of this work is (see Section \ref{MeaSol} for the definition of $\Po$):
\begin{thm} \label{MainResult2}
Assume that $U\in C^2_b(\R^d)$, $H(x,v)=h(x)v$ with $h\in
C^1_b(\R^d)$ nonnegative, $\fin \in \poxv{}$, $\supp \fin \subset
\{(x,v) :|x| \leq L_0, r_0\leq |v| \leq R_0\}$ with $0<r_0<r<R_0<\infty$. Then
for all $\delta>0$, the sequence $(\fe)_\eps$ converges in
$C([\delta,\infty);\poxv)$ towards the measure solution
$f(t,x,\omega)$ on $(x,\omega)\in\R ^d \times r \sphere$ of the
problem
\begin{equation}
\label{Equ22n} \partial _t f + \Divx(f\omega) - \Divo \left \{f
\imoo \left(\nabla_x U\star \rho + H\star f \right) \right \} = 0
\end{equation}
with initial data $f(0) = \ave{\fin}$. Moreover, if the initial
data $\fin$ is already compactly supported on $B_{L_0} \times r \sphere$, then
the convergence holds in $\cztpoxv$.
\end{thm}
Let us mention that the evolution problem \eqref{Equ22n} on $\R ^d
\times r \sphere$ was also proposed in the literature as the
continuum version \cite{DM08} of the Vicsek model
\cite{VicCziBenCohSho95,CouKraFraLev02} without diffusion for the
particular choice $U=0$ and $H(x,v)=h(x) v$ with $h(x)$ some local
averaging kernel. The original model in
\cite{VicCziBenCohSho95,CouKraFraLev02} also includes noise at the
particle level and was derived as the mean field limit of some
stochastic particle systems in \cite{BCC12}. In fact, previous
particle systems have also been studied with noise in \cite{BCC11}
for the mean-field limit, in \cite{HLL09} for studying some
properties of the Cucker-Smale model with noise, and in
\cite{DFL10,FL11} for analyzing the phase transition in the Vicsek
model.
In the case of noise, getting accurate control on the particle
paths of the solutions is a complicated issue and thus, we are not
able to prove the analogues of Theorems
\ref{MainResult} and \ref{MainResult2} rigorously. Nevertheless, we will
present a simplified formalism, which allows us to handle more
complicated problems to formally get the expected limit equations.
This approach was borrowed from the framework of the magnetic
confinement, where leading order charged particle densities have
to be computed after smoothing out the fluctuations which
correspond to the fast motion of particles around the magnetic
lines \cite{BosAsyAna, BosTraEquSin, BosGuiCen3D, BosNeg09}. We
apply this method to the following (linear or nonlinear) problem
\begin{equation}
\label{Equ31} \partial _t \fe + \Divx\{\fe v\} + \Divv \{ \fe a\} + \frac{1}{\eps} \Divv \{ \fe \abv \} = \Delta _v \fe
\end{equation}
with initial data $\fe (0) = \fin$ where the acceleration $a \in
\litwoix{}$ and $\fin \in \mbxv{}$. By applying the projection
operator $\ave{\cdot}$ to \eqref{Equ31}, we will show that the
limiting equation for the evolution of $f(t,x,\omega)$ on
$(x,\omega)\in\R ^d \times r \sphere$ is given by
\begin{equation} \label{Equ22Diff}
\partial _t f + \Divx(f\omega) + \Divo \left \{f \imoo a \right \} =
\Delta_\omega f
\end{equation}
where $\Delta_\omega$ is the Laplace-Beltrami operator on $r
\sphere$.
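In dimension $d=2$ this limit equation takes a completely explicit form (a reformulation we include for orientation; the general computation in spherical coordinates is carried out in Appendix \ref{A}): writing $\omega = r(\cos \theta, \sin \theta)$ and $\omega ^{\perp} = (-\sin \theta, \cos \theta)$, the projection $\imoo a$ reduces to the tangential component $(a \cdot \omega ^{\perp})\, \omega ^{\perp}$ and \eqref{Equ22Diff} becomes
\[
\partial _t f + \omega \cdot \nabla _x f + \frac{1}{r}\, \partial _\theta \big\{ f \, (a \cdot \omega ^{\perp} ) \big\} = \frac{1}{r^2}\, \partial ^2 _\theta f .
\]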
Our paper is organized as follows. In Section \ref{MeaSol} we
investigate the stability of the characteristic flows associated
to the perturbed fields $v \cdot \nabla _x + a \cdot \nabla _v +
\frac{1}{\eps} \abv \cdot \nabla _v $. The first limit result for
the linear problem (cf. Theorem \ref{MainResult}) is derived
rigorously in Section \ref{LimMod}. Section \ref{NLimMod} is
devoted to the proof of the main Theorem \ref{MainResult2}. The
new formalism to deal with the treatment of diffusion models is
presented in Section \ref{DiffMod}. The computations to show that
these models correspond to the Vicsek models, written in spherical
coordinates, are presented in the Appendix \ref{A}.
\section{Measure solutions}
\label{MeaSol}
\subsection{Preliminaries on mass transportation metrics and notations}
\label{prelim}
We recall some notations and results about mass transportation
distances that we will use in the sequel. For more details the
reader can refer to \cite{Vi1,CT}.
We denote by $\Po(\R^d)$ the space of probability measures on
$\R^d$ with finite first moment. We introduce the so-called
\emph{Monge-Kantorovich-Rubinstein distance} in $\Po(\R^d)$
defined by
\begin{equation*}
W_1(f,g) = \sup \left \{ \left |\int_{\R^d} \varphi(u)
(f(u)-g(u))\, \mathrm{d} u \right |, \varphi \in \mathrm{Lip}(\R^d),
\mathrm{Lip}(\varphi)\leq 1 \right \}
\end{equation*}
where $\mathrm{Lip}(\R^d)$ denotes the set of Lipschitz functions on
$\R^d$ and $\mathrm{Lip}(\varphi)$ the Lipschitz constant of a function
$\varphi$. Denoting by $\Lambda$ the set of transference plans
between the measures $f$ and $g$, i.e., probability measures in
the product space $\R^d \times \R^d$ with first and second
marginals $f$ and $g$ respectively
\[
f(y) = \int_{\R^d} \pi (y,z)\,\mathrm{d}z,\;\;g(z) = \int_{\R^d}
\pi (y,z)\,\mathrm{d}y
\]
then we have
\begin{equation*}
W_1(f, g) = \inf_{\pi\in\Lambda} \left\{ \int_{\R^d \times \R^d}
\vert y - z \vert \, \pi(y, z)\,\mathrm{d}(y,z) \right\}
\end{equation*}
by Kantorovich duality. $\Po(\R^d)$ endowed with this distance is
a complete metric space. Its properties are summarized below,
see \cite{Vi1}.
\begin{pro}
\label{w2properties} The following properties of the distance
$W_1$ hold:
\begin{enumerate}
\item[1)] {\bf Optimal transference plan:} The infimum in the
definition of the distance $W_1$ is achieved. Any joint
probability measure $\pi_o$ satisfying:
$$
W_1(f, g) = \int_{\R^d \times \R^d} \vert y - z \vert \,
\mathrm{d}\pi_o(y, z)
$$
is called an optimal transference plan and it is generically non
unique for the $W_1$-distance.
\item[2)] {\bf Convergence of measures:} Given $\{f_k\}_{k\ge 1}$
and $f$ in $\Po(\R^d)$, the following two assertions are
equivalent:
\begin{itemize}
\item[a)] $W_1(f_k, f)$ tends to $0$ as $k$ goes to infinity.
\item[b)] $f_k$ tends to $f$ weakly $\star$ as measures as $k$
goes to infinity and
$$
\sup_{k\ge 1} \int_{\vert v \vert > R} \vert v \vert \, f_k(v) \,
\mathrm{d}v \to 0 \, \mbox{ as } \, R \to +\infty.
$$
\end{itemize}
\end{enumerate}
\end{pro}
Let us point out that if the sequence of measures is supported on
a common compact set, then the convergence in $W_1$-sense is
equivalent to standard weak-$\star$ convergence for bounded Radon
measures.
Finally, let us remark that all the models considered in this
paper preserve the total mass. After normalization we can consider
only solutions with total mass $1$ and therefore use the
Monge-Kantorovich-Rubinstein distance in $\Po (\R ^d \times \R
^d)$. From now on we assume that the initial condition has total
mass $1$.
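As an elementary illustration of the distance $W_1$ (a sanity check we add; it is not used in the proofs), on the real line and for two empirical measures with the same number of equal-weight atoms the optimal transference plan is the monotone matching of the sorted atoms:
\begin{verbatim}
# Illustrative sketch: W_1 between two empirical measures with N equal-weight
# atoms on the real line; the optimal coupling is the sorted (monotone) matching.
import numpy as np

rng = np.random.default_rng(0)
N = 10000
x = rng.normal(0.0, 1.0, N)   # atoms of f
y = rng.normal(0.5, 1.0, N)   # atoms of g: same law translated by 0.5
W1 = np.mean(np.abs(np.sort(x) - np.sort(y)))
print(W1)                     # close to 0.5, the cost of the translation
\end{verbatim}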
\subsection{Estimates on Characteristics}
In this section we investigate the linear Vlasov problem
\begin{equation}
\label{Equ10} \partial _t \fe + \Divx\{\fe v\} + \Divv \{ \fe a\}
+ \frac{1}{\eps} \Divv \{ \fe \abv \} = 0,\;\;(t,x,v) \in \R_+
\times \R^d \times \R^d
\end{equation}
\begin{equation}
\label{Equ11}
\fe (0) = \fin
\end{equation}
where $a \in \litwoix{}$ and $\fin \in \mbxv{}$.
\begin{defi}\label{DefMeaSol}
Assume that $a \in \litwoix{}$ and $\fin \in \mbxv{}$. We say that
$\fe \in \litmbxv{}$ is a measure solution of
\eqref{Equ10}-\eqref{Equ11} if for any test function $\varphi \in
\coctxv{}$ we have
\begin{align*}
\inttxv{\{\partial _t + v \cdot \nabla _x + a \cdot \nabla _v +
\frac{1}{\eps} \abv \cdot & \nabla _v \}\varphi \fe(t,x,v)\,
\mathrm{d}(x,v)} \\
&+ \intxv{\varphi (0,x,v) \fin(x,v) \, \mathrm{d}(x,v) } = 0.
\end{align*}
\end{defi}
We introduce the characteristics of the field $v\cdot \nabla _x + a \cdot \nabla _v + \frac{1}{\eps} \abv \cdot \nabla _v $
\begin{equation*}
\frac{\mathrm{d}\Xe}{\mathrm{d}s} = \Ve(s),\;\;\frac{\mathrm{d}\Ve}{\mathrm{d}s} = a(s, \Xe(s)) + \frac{1}{\eps} \abves
\end{equation*}
\begin{equation*}
\Xe (s=0) = x,\;\;\Ve (s = 0) = v.
\end{equation*}
We will prove that $(\Xe, \Ve)$ are well defined for any $(s,x,v)
\in \R_+ \times \R^d \times \R^d$. Indeed, on any interval $[0,T]$
on which $(\Xe, \Ve)$ is well defined we get a bound
\[
\sup _{s \in [0,T]} \{|\Xe (s) | + |\Ve (s) | \} < +\infty
\]
implying that the characteristics are global in positive time. For
that we write
\begin{equation}\label{charnew}
\frac12\frac{\mathrm{d}|\Ve|^2}{\mathrm{d}s} = a(s, \Xe(s))\cdot
\Ve (s) + \frac{1}{\eps} ( \alpha - \beta |\Ve (s) |^2) |\Ve
(s)|^2.
\end{equation}
and then, we get the differential inequality
\[
\frac{\mathrm{d}|\Ve |^2}{\mathrm{d}s} \leq 2\|a\|_{\linf} |\Ve
(s)| + \frac{2}{\eps} ( \alpha - \beta |\Ve (s) |^2) |\Ve (s)|^2
\]
for all $s\in [0,T]$, so that
\[
\sup _{s \in [0,T]} |\Ve (s) | < +\infty,\;\;\sup _{s\in [0,T]} |\Xe (s) | \leq |x| + T \sup _{s\in [0,T]} |\Ve (s) | < +\infty.
\]
Once the characteristics are constructed, it is easily seen how to
obtain a measure solution for the Vlasov problem
\eqref{Equ10}-\eqref{Equ11}. It reduces to push forward the
initial measure along the characteristics, see \cite{CCR10} for
instance.
\begin{pro}
For any $t \in \R_+$ we denote by $\fe (t)$ the measure given by
\begin{equation}\label{EquDefMea}
\intxv{\psi (x,v) \fe(t,x,v)\,\dxv} = \intxv{\psi((\Xe,
\Ve)(t;0,x,v))\fin(x,v)\,\dxv}\,,
\end{equation}
for all $\psi \in \czcxv$. Then the application $t \to \fe (t)$,
denoted $\fin \#(\Xe, \Ve)(t;0,\cdot,\cdot)$ is the unique measure
solution of \eqref{Equ10}, \eqref{Equ11}, belongs to $\cztmbxv$
and satisfies
$$
\intxv{\fe (t,x,v)\,\dxv} = \intxv{\fin(x,v)\,\dxv}, t \in \R_+.
$$
\end{pro}
\begin{proof}
The arguments are straightforward and are left to the reader. We
only justify that $\fe \in \cztmbxv$ meaning that for any $\psi
\in \czcxv{}$ the application $t \to \intxv{\,\,\psi(x,v) \fe
(t,x,v)\;\dxv}$ is continuous. Choose $\psi\in \czcxv{}$. Then,
for any $0 \leq t_1 < t_2$ we have
\begin{align*}
\intxv{\psi(x,v) \fe (t_2,x,v) &\,\dxv } - \intxv{\psi(x,v) \fe
(t_1,x,v)\,\dxv } \\ &= \intxv{\left[\psi ((\Xe, \Ve )(t_2;t_1, x,
v)) - \psi (x,v)\right]\fe (t_1,x,v)\,\dxv}.
\end{align*}
Taking into account that $(\Xe, \Ve)$ are locally bounded (in
time, position, velocity) it is easily seen that for any compact
set $K \subset \R ^d \times \R^d$ there is a constant $C(K)$ such
that
\[
|\Xe (t_2; t_1, x, v) - x| + |\Ve (t_2; t_1, x, v) - v| \leq |t_2 - t_1 | C(K),\;\;(x,v) \in K.
\]
Our conclusion follows easily using the uniform continuity of $\psi$ and that
$\|\fe (t_1) \|_{{\cal M}_b} = \|\fin \|_{{\cal M}_b}$. Notice
also that the equality \eqref{EquDefMea} holds true for any
bounded continuous function $\psi$.
\end{proof}
We intend to study the behavior of $(\fe)_\eps$ when $\eps $
becomes small. This will require a more detailed analysis of the
characteristic flows $(\Xe, \Ve)$. The behavior of these
characteristics depends on the roots of functions like $A +
\frac{1}{\eps} (\alpha - \beta \rho ^2 ) \rho$, with $\rho \in
\R_+$, $A \in \R$.
\begin{pro}\label{NegA}
Assume that $A < 0$ and $ 0 < \eps < 2\alpha r /(|A|
3 \sqrt{3})$. Then the equation $\lae (\rho) := \eps A + (\alpha -
\beta \rho ^2 ) \rho = 0$ has two zeros on $\R_+$, denoted $\reo
(A), \ret (A)$, satisfying
\[
0 < \reo < \frac{r}{\sqrt{3}} < \ret < r
\]
and
\[
\lime \frac{\reo}{\eps} = \frac{|A|}{\alpha},\;\;\;\;\;\;\lime \frac{r - \ret}{\eps} = \frac{|A|}{2\alpha}
\]
where $r = \sqrt{\alpha/\beta}$.
\end{pro}
\begin{proof}
It is easily seen that the function $\lae$ increases on
$[0,r/\sqrt{3}]$ and decreases on $[r/\sqrt{3}, +\infty[$ with
change of sign on $[0,r/\sqrt{3}]$ and $[r/\sqrt{3}, r]$. We can
prove that $(\reo)_\eps, (\ret)_\eps$ are monotone with respect to
$\eps >0$. Take $0 < \eps < \teps < 2\alpha r /(|A| 3 \sqrt{3})$
and observe that $\lae > \lambda ^{\teps}$. In particular we have
\[
\lambda ^{\teps} (\reo) < \lae (\reo) = 0 = \lambda ^{\teps} (\rho _1 ^{\teps})
\]
implying $\reo < \rho _1 ^{\teps}$, since $\lambda ^{\teps}$ is
strictly increasing on $[0, r/\sqrt{3}]$. Similarly we have
\[
\lambda ^{\teps} (\ret) < \lae (\ret) = 0 < \lambda ^{\teps} (\rho _2 ^{\teps})
\]
and thus $\ret > \rho _2 ^{\teps}$, since $\lambda ^{\teps}$ is
strictly decreasing on $[r/\sqrt{3}, r]$. Passing to the limit in
$\lae (\rho _k ^\eps) = 0, k \in \{1,2\}$ it follows easily that
\[
\lime \reo = 0,\;\;\lime \ret = r.
\]
Moreover we can write
\begin{equation}
\alpha = \frac{\mathrm{d}}{\mathrm{d}\rho }\{(\alpha - \beta \rho ^2 ) \rho \} |_{\rho = 0} = \lime \frac{[\alpha - \beta (\reo)^2]\reo}{\reo} = - \lime \frac{\eps A}{\reo} \nonumber
\end{equation}
and
\begin{equation}
-2 \alpha = \frac{\mathrm{d}}{\mathrm{d}\rho }\{(\alpha - \beta \rho ^2 ) \rho \} |_{\rho = r} = \lime \frac{[\alpha - \beta (\ret)^2]\ret}{\ret - r} = - \lime \frac{\eps A}{\ret - r} \nonumber
\end{equation}
saying that
\[
\lime \frac{\reo}{\eps} = \frac{|A|}{\alpha},\;\;\lime \frac{r - \ret}{\eps} = \frac{|A|}{2\alpha}.
\]
\end{proof}
The case $A>0$ can be treated in a similar way and we obtain
\begin{pro} \label{PosA}
Assume that $A > 0$ and $ \eps >0$. Then the equation $\lae (\rho)
:= \eps A + (\alpha - \beta \rho ^2 ) \rho = 0$ has one zero on
$\R_+$, denoted $\reth (A)$, satisfying
\[
\reth >r,\;\;\lime \frac{\reth - r}{\eps} = \frac{|A|}{2\alpha}.
\]
\end{pro}
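The asymptotics in Propositions \ref{NegA} and \ref{PosA} are easy to check numerically. The following sketch (added only as an illustration, with arbitrary values of $\alpha$, $\beta$, $A$) computes the positive roots of $\lae$ for $A<0$ and compares $\reo / \eps$ and $(r-\ret)/\eps$ with their limits:
\begin{verbatim}
# Illustrative sketch: roots of lambda^eps(rho) = eps*A + (alpha - beta*rho^2)*rho, A < 0.
import numpy as np

alpha, beta, A, eps = 1.0, 1.0, -1.0, 1e-3
r = np.sqrt(alpha / beta)
roots = np.roots([-beta, 0.0, alpha, eps * A])   # -beta*rho^3 + alpha*rho + eps*A
pos = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
pos = pos[pos > 0]
rho1, rho2 = pos[0], pos[1]
print(rho1 / eps, abs(A) / alpha)                # rho1/eps -> |A|/alpha
print((r - rho2) / eps, abs(A) / (2 * alpha))    # (r-rho2)/eps -> |A|/(2*alpha)
\end{verbatim}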
Using the sign of the function $\rho \to \eps \|a\|_{\linf{}} +
(\alpha - \beta \rho ^2 ) \rho$ we obtain the following bound for
the kinetic energy.
\begin{pro}\label{KinBou}
Assume that $a \in \litwoix{}$, $(1 + |v|^2) \fin \in \mbxv{}$ and
let us denote by $\fe$ the unique measure solution of
\eqref{Equ10}, \eqref{Equ11}. Then we have
\[
\left \|\intxv{\,|v|^2 \fe(\cdot,x,v)\,\dxv}\right \|_{\linf
(\R_+)} \leq \intxv{[(\reth)^2 + |v|^2] \fin(x,v)\,\dxv}.
\]
\end{pro}
\begin{proof}
We know that
\[
\frac{\mathrm{d}}{\mathrm{d}t} |\Ve|^2 \leq 2\|a\|_{\linf{}}
|\Ve(t)|+ \frac{2}{\eps} (\alpha - \beta |\Ve (t) |^2 ) |\Ve
(t)|^2=\frac{2}{\eps}|\Ve (t)|\lae (|\Ve (t)| ),\;\;t
\in \R_+.
\]
By comparison with the solutions of the autonomous differential
equation associated to the righthand side, we easily deduce that
\[
|\Ve (t;0,x,v)| \leq \max \{ |v|, \reth(\|a\|_{\linf{}})\}\,,
\]
for any $t \in \R_+$ and $(x,v) \in \R ^d \times \R ^d$. This yields
the following bound for the kinetic energy
\begin{align*}
\intxv{|v|^2\fe (T,x,v)\,\dxv} &= \intxv{|\Ve (T;0,x,v)|^2
\fin(x,v)\,\dxv} \\
&\leq \intxv{[(\reth)^2 + |v|^2]
\fin(x,v)\,\dxv}.
\end{align*}
\end{proof}
The object of the next result is to establish the stability of
$\Ve$ around $|v| = r$. We will show that the characteristics
starting at points with velocities inside an annulus of length
proportional to $\eps$ around the sphere $r\sphere$ get trapped
there for all positive times for small $\eps$.
\begin{pro}
\label{RStab} Assume that $\eps \|a\|_{\linf{}} < 2\alpha r
/(3\sqrt{3})$ and that $\ret (-\|a\|_{\linf{}}) \leq |v| \leq
\reth (\|a\|_{\linf{}})$. Then, for any $(t,x) \in \R_+ \times
\R^d$ we have
\[
\ret (-\|a\|_{\linf{}}) \leq |\Ve(t;0,x,v)| \leq \reth (\|a\|_{\linf{}}).
\]
\end{pro}
\begin{proof}
As in the previous proof, we know that
\[
\frac{\mathrm{d}}{\mathrm{d}t} |\Ve|^2 \leq \frac{2}{\eps}|\Ve
(t)|\lae (|\Ve (t)| ),\;\;t \in \R_+\,.
\]
By comparison with the constant solution $\reth$ to the autonomous
differential equation associated to the righthand side, we get
that $\sup _{t \in \R_+} |\Ve (t;0,x,v)| \leq \reth$. Assume now
that there is $T>0$ such that $|\Ve (T) | < \ret$ and we are done
if we find a contradiction. Since $|\Ve (0) |= |v| \geq \ret$, we
can assume that $\min _{t \in [0,T]} |\Ve (t) | > \reo>0$ by time
continuity. Take now $\ot \in [0,T]$ a minimum point of $t \to
|\Ve (t)|$ on $[0,T]$. Obviously $\ot >0$ since
\[
|\Ve (\ot) | \leq |\Ve (T)| < \ret \leq |v| = |\Ve (0)|.
\]
By estimating from below in \eqref{charnew} and using that $\ot$
is a minimum point of $t \to |\Ve (t)|>0$ on $[0,T]$, we obtain
\[
0 \geq \frac{\mathrm{d}}{\mathrm{d}t} |\Ve (\ot)| \geq - \|a\|_{\linf{}} + \frac{(\alpha - \beta |\Ve (\ot)|^2)|\Ve (\ot)| }{\eps}
=\frac{\lae ( |\Ve (\ot)| )}{\eps}.
\]
But the function $\lae$ has negative sign on $[0,\reo] \cup [\ret,
+\infty[$. Since we know that $\min _{t \in [0,T]} |\Ve (t)| >
\reo$, it remains that
\[
\min _{t \in [0,T]} |\Ve (t)| = |\Ve (\ot)| \geq \ret
\]
which contradicts the assumption $|\Ve (T)| < \ret$.
\end{proof}
Let us see now what happens when the initial velocity is outside
$[\ret (-\|a\|_{\linf{}}), \reth (\|a\|_{\linf{}})]$. In
particular we prove that if initially $v \neq 0$, then $\Ve (t), t
\in \R_+$ remains away from $0$. We actually show that the
characteristics starting away from zero speed but inside the
sphere $r\sphere$ will increase their speed with respect to its
initial value while those starting with a speed outside the sphere
$r\sphere$ will decrease their speed with respect to its initial
value, all for sufficiently small $\eps$.
\begin{pro}
\label{ZeroStab} Consider $\eps >0$ such that $\eps \|a\|_{\linf{}} < 2\alpha r /(3\sqrt{3})$.\\
1. Assume that $\reo (- \|a\|_{\linf{}}) < |v| < \ret (-
\|a\|_{\linf{}})$. Then for any $(t,x) \in \R_+ ^\star \times \R
^d$ we have
\[
\reo (- \|a\|_{\linf{}}) < |v| < |\Ve (t;0,x,v)|\leq\reth ( \|a\|_{\linf{}}).
\]
2. Assume that $\reth ( \|a\|_{\linf{}}) < |v|$. Then for any
$(t,x) \in \R_+ ^\star \times \R^d$ we have
\[
\ret (- \|a\|_{\linf{}}) \leq |\Ve (t;0,x,v) | < |v|.
\]
\end{pro}
\begin{proof}
1. Notice that if $|\Ve (T;0,x,v)| = \ret$ for some $T>0$, then we
deduce by Proposition \ref{RStab} that $\ret \leq |\Ve (t) | \leq
\reth$ for any $t >T$ and thus $|\Ve (t;0,x,v) | \geq \ret > |v|,
t \geq T$. It remains to establish our statement for intervals
$[0,T]$ such that $|\Ve (t) | < \ret$ for any $t \in [0,T]$. We
are done if we prove that $t \to |\Ve (t)|$ is strictly increasing
on $[0,T]$. For any $\tau \in ]0,T]$ let us denote by $\ot$ a
maximum point of $t \to |\Ve (t)|>0$ on $[0,\tau]$. If $\ot \in
[0,\tau[$ we have $\frac{\mathrm{d}}{\mathrm{d}t} |\Ve (\ot)| \leq
0$ and thus
\[
0 \geq \frac{\mathrm{d}}{\mathrm{d}t} |\Ve (\ot)|\geq - \|a\|_{\linf{}} + \frac{(\alpha - \beta |\Ve (\ot)|^2)|\Ve (\ot)| }{\eps}
=\frac{\lae ( |\Ve (\ot)| )}{\eps}.
\]
By construction $|\Ve (\ot)| < \ret$ and moreover,
\[
|\Ve (\ot)| = \max _{[0,\tau]} |\Ve | \geq |v| > \reo\,,
\]
and thus, $\lae ( |\Ve (t)| )>0$ for all $t\in [0,T]$.
Consequently, we infer that $t \to |\Ve (t)|$ is strictly
increasing on $[0,T]$ since
\[
\frac{\mathrm{d}}{\mathrm{d}t} |\Ve (t)|\geq - \|a\|_{\linf{}} +
\frac{(\alpha - \beta |\Ve (t)|^2)|\Ve (t)| }{\eps} =\frac{\lae (
|\Ve (t)| )}{\eps} >0\,.
\]
Therefore we have $\ot = \tau$ saying that $|\Ve (\tau)| \geq |v|$
for any $\tau \in [0,T]$.
2. As before, it is sufficient to work on intervals $[0,T]$ such
that $|\Ve (t) | > \reth (\|a\|_{\linf{}})$ for any $t \in [0,T]$.
We are done if we prove that $t \to |\Ve (t)|$ is strictly
decreasing on $[0,T]$. We have for any $t \in [0,T]$
\[
\frac{\mathrm{d}}{\mathrm{d}t} |\Ve (t)|\leq \|a\|_{\linf{}} + \frac{(\alpha - \beta |\Ve (t)|^2)|\Ve (t)| }{\eps}
=\frac{\lae ( |\Ve (t)| )}{\eps} <0
\]
where for the last inequality we have used $|\Ve (t) | > \reth, t \in [0,T]$.
\end{proof}
\section{The limit model}
\label{LimMod} We investigate now the stability of the family
$(\fe)_\eps$ when $\eps$ becomes small. After extraction of a
sequence $(\eps_k)_k$ converging to $0$ we can assume that
$(\fek)_k$ converges weakly $\star$ in $L^\infty(\R_+;{\cal M}_b
(\R^d \times \R^d))$, meaning that
\[
\limk \inttxv{\varphi (t,x,v) \fek (t,x,v)\,\dxv} =
\inttxv{\varphi (t,x,v) f (t,x,v)\,\dxv}
\]
for any $\varphi \in \lotczcxv{}$. Using the weak formulation of
\eqref{Equ10}-\eqref{Equ11} with test functions $\eta (t) \varphi
(x,v)$, $\eta \in C^1 _c (\R_+)$, $\varphi \in C^1 _c (\R^d \times
\R^d)$ one gets
\begin{align*}
\inttxv{\{\eta ^{\;\prime} (t) \varphi + \eta (t) v \cdot \nabla _x \varphi + \eta (t) a \cdot \nabla _v \varphi \}\fek(t,x,v)\,\dxv&}\\
+ \frac{1}{\eps _k} \inttxv{\eta (t) \abv \cdot \nabla _v \varphi \fek(t,x,v)\,\dxv &} \\
= -\intxv{\eta (0) &\varphi (x,v) \fin(x,v)\,\dxv }.
\end{align*}
Multiplying by $\eps _k$ and passing to the limit for $k \to +\infty$ yields
\[
\inttxv{\eta (t) \abv \cdot \nabla _v \varphi f (t,x,v)\,\dxv} = 0
\]
and therefore one gets for any $t \in \R_+$ and $\varphi \in \cocxv{}$
\[
\intxv{\abv \cdot \nabla _v \varphi f (t,x,v)\,\dxv} = 0.
\]
Under the hypothesis $(1 + |v|^2) \fin \in \mbxv{}$ we deduce by
Proposition \ref{KinBou} that $( 1 + |v|^2) f(t) \in \mbxv{}$ and
therefore, applying the $(x,v)$ version of Proposition
\ref{Kernel} (whose proof is detailed in the sequel), we obtain
\[
\supp f(t) \subset \R^d \times (\A),\;\;t \in \R_+.
\]
The proof of Proposition \ref{Kernel} is based on the resolution of the adjoint problem
\[
- \abv \cdot \nabla _v \varphi = \psi (v),\;\;v \in \R^d
\]
for any smooth righthand side $\psi$ with compact support in $^c(\A)$.
\begin{proof} (of Proposition \ref{Kernel})
It is easily seen that for any $F \in \mbxv{}$, $\supp F \subset \A$ and any $\varphi \in \cocv{}$ we have
\[
\intv{\abv \cdot \nabla _v \varphi (v) F(v)\,\dv} = 0
\]
saying that $\Divv \{F \abv \} = 0$. Assume now that $\Divv \{F
\abv \} = 0$ for some $F \in \mbxv{}$ and let us prove that $\supp
F \subset \A$. We introduce the flow ${\cal V} = {\cal V}(s;v)$
given by
\begin{equation}
\label{Equ4} \frac{\mathrm{d}{\cal V}}{\mathrm{d}s} = ( \alpha - \beta |{\cal V} (s;v) |^2 ) {\cal V } (s;v),\;\;{\cal V}(0;v) = v.
\end{equation}
A direct computation shows that $\vsv$ are left invariant
\[
\abv \cdot \nabla _v \left ( \vsv \right ) = (\alpha - \beta |v|^2 ) \imvv \vsv = 0
\]
and therefore
\[
{\cal V} (s;v) = |{\cal V}(s;v)| \vsv,\;\;v \neq 0.
\]
Multiplying \eqref{Equ4} by ${\cal V}(s;v) / |{\cal V}(s;v)|$ yields
\[
\frac{\mathrm{d}}{\mathrm{d}s}|{\cal V}| = ( \alpha - \beta |{\cal V} (s;v) |^2 ) |{\cal V } (s;v)|
\]
whose solution is given by
\[
|{\cal V}(s;v)| = |v| \frac{r e ^{\alpha s}}{\sqrt{|v|^2 ( e ^{2\alpha s} - 1) + r^2}}
\]
Finally one gets
\begin{equation*}
{\cal V}(s;v) = \frac{r e ^{\alpha s}}{\sqrt{|v|^2 ( e ^{2\alpha s} - 1) + r^2}}\;v,\;\;s \in ]S(v),+\infty[
\end{equation*}
with $S(v) = - \infty$ if $0 \leq |v| \leq r$ and $S(v) =
\frac{1}{2\alpha} \ln \left ( 1 - \frac{r^2}{|v|^2} \right ) < 0$
if $|v| > r$. Notice that the characteristics ${\cal V} (\cdot;v)$
are well defined on $\R_+$ for any $v \in \R^d$ and we have
\[
\lim _ {s \to +\infty} {\cal V}(s;v) = r \vsv\;\mbox{ if } v \neq 0,\;\;\lim _ {s \to +\infty} {\cal V}(s;v) =0\;\mbox{ if } v = 0
\]
and
\[
\lim _{s \searrow S(v)} |{\cal V}(s)| = 0\mbox{ if }0 \leq |v| < r,\;\lim _{s \searrow S(v)} |{\cal V}(s)| =r\mbox{ if } |v| = r,\;\lim _{s \searrow S(v)} |{\cal V}(s)| =+\infty\;\mbox{ if } |v| >r.
\]
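This closed form is readily checked against a direct numerical integration of \eqref{Equ4} (an illustration we add, with arbitrary $\alpha$, $\beta$ and initial datum):
\begin{verbatim}
# Illustrative sketch: closed form for V(s;v) versus explicit Euler integration
# of dV/ds = (alpha - beta*|V|^2) V.
import numpy as np

alpha, beta = 1.0, 1.0
r = np.sqrt(alpha / beta)
v0 = np.array([0.3, 0.1])     # arbitrary initial velocity with 0 < |v0| < r
dt, S = 1e-4, 3.0
V = v0.copy()
for _ in range(int(S / dt)):
    V = V + dt * (alpha - beta * np.dot(V, V)) * V
closed = r * np.exp(alpha * S) / np.sqrt(np.dot(v0, v0) * (np.exp(2 * alpha * S) - 1.0) + r ** 2) * v0
print(V, closed)              # the two agree up to the O(dt) integration error
\end{verbatim}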
Let us consider a $C^1$ function $\psi = \psi (v)$ with compact support in $^c (\A)$. We intend to construct a bounded $C^1$ function $\varphi = \varphi (v)$ such that
\begin{equation*}
- \abv \cdot \nabla _v \varphi = \psi (v),\;\;v \in
\R^d.
\end{equation*}
Obviously, if such a function exists, we may assume that $\varphi (0) = 0$. Motivated by the equality
\[
- \frac{\mathrm{d}}{\mathrm{d}s} \{\varphi ({\cal V}(s;v)) \}= \psi ({\cal V}(s;v)),\;\;0 \leq |v| < r,\;\;- \infty < s \leq 0
\]
and since we know that $\lim _{s \to - \infty} {\cal V} (s;v) = 0$ for any $0 \leq |v| < r$, we define
\begin{equation}
\label{Equ7} \varphi (v) = - \int _{-\infty} ^ 0 \psi ( {\cal V}(\tau; v))\;\mathrm{d}\tau,\;\;0 \leq |v| < r.
\end{equation}
Let us check that the function $\varphi$ in \eqref{Equ7} is well
defined and is $C^1$ in $|v|<r$. The key point is that $\psi $ has
compact support in $^c (\A)$ and therefore there are $0 < r_1 <
r_2 < r < r_3 < r_4 < +\infty$ such that $ \supp \psi \subset \{ v
\in \R ^d \;:\; r_1 \leq |v| \leq r_2 \} \cup \{ v \in \R^d \;:\;
r_3 \leq |v| \leq r_4\}. $ It is easily seen that $\tau \to |{\cal
V} (\tau; v)|$ is strictly increasing for any $0 < |v| < r$.
Therefore, for any $|v| \leq r_1$ we have $ |{\cal V} (\tau; v) |
\leq |{\cal V} (0; v) | = |v| \leq r_1,\;\;\tau \leq 0 $, implying
that
\[
\varphi (v) = - \int _{-\infty} ^ 0 \psi ({\cal V}(\tau; v))\;\mathrm{d}\tau = 0,\;\;0 \leq |v| \leq r_1.
\]
For any $v$ with $r_1 < |v| < r_2$ there are $\tau _1 < 0 < \tau _2$ such that
$
|{\cal V}(\tau _1; v)| = r_1 < r_2 = |{\cal V}(\tau _2; v)|.
$
The time interval between $\tau _1$ and $\tau _2$ comes easily by writing
\[
\frac{\frac{\mathrm{d}}{\mathrm{d}\tau}|{\cal V}(\tau) |}{(\alpha - \beta |{\cal V}(\tau)|^2)|{\cal V}(\tau) |}= 1
\]
implying that
\[
|\tau _2 | + |\tau _1 | = \tau _2 - \tau _1 = \int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2 ) \rho }.
\]
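This integral admits a closed form which, although not needed in the sequel, makes the finiteness used below transparent: writing $\frac{1}{(\alpha - \beta \rho ^2)\rho} = \frac{1}{\alpha \rho} + \frac{\beta \rho}{\alpha (\alpha - \beta \rho ^2)}$ we obtain
\[
\int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2 ) \rho } = \frac{1}{2\alpha} \ln \frac{r_2 ^2 ( r^2 - r_1 ^2)}{r_1 ^2 ( r^2 - r_2 ^2)} < +\infty,\;\;0 < r_1 < r_2 < r.
\]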
From the equality
\[
\varphi (v) = - \int _{-\infty} ^{\tau _1} \psi ({\cal
V}(\tau;v))\;\mathrm{d}\tau - \int _{\tau _1} ^0 \psi ({\cal
V}(\tau;v))\;\mathrm{d}\tau = - \int _{\tau _1} ^0 \psi ({\cal
V}(\tau;v))\;\mathrm{d}\tau\,,
\]
we deduce that
\begin{equation}
\label{Equ8} |\varphi (v) | \leq |\tau _1 | \; \|\psi \|_{C^0} \leq
\int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2 ) \rho }\; \|\psi \|_{C^0}.
\end{equation}
Assume now that $r_2 \leq |v| < r$. There is $\tau _2 \geq 0$ such
that $v = {\cal V} ( \tau_2 ; r_2 \vsv)$ and therefore
\begin{align*}
\varphi (v) & = - \int _{-\infty} ^ 0 \psi ({\cal V}(\tau;v))\;\mathrm{d}\tau = - \int _{-\infty} ^ 0 \psi ({\cal V}(\tau + \tau _2;r_2 \vsv))\;\mathrm{d}\tau \\
& = - \int _{-\infty} ^ {-\tau _2} \psi ({\cal V}(\tau + \tau _2
;r_2 \vsv))\;\mathrm{d}\tau = - \int _{-\infty} ^ {0} \psi ({\cal
V}(\tau ;r_2 \vsv))\;\mathrm{d}\tau = \varphi \left ( r_2 \vsv
\right).
\end{align*}
In particular, the restriction of $\varphi$ on $r_2 \leq |v| < r$
satisfies the same bound as in \eqref{Equ8}
\[
|\varphi (v) | \leq
\int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2 ) \rho }\; \|\psi \|_{C^0},\;\;r_2 \leq |v| < r.
\]
It is easily seen that $\varphi $ is $C^1$ on $0 \leq |v| < r$.
For that it is sufficient to consider $r_1 \leq |v| \leq r_2$.
Notice that
\[
\frac{\partial {\cal V}}{\partial v} (\tau; v) = \frac{|{\cal V}(\tau;v)|}{|v|} \left ( I - \frac{{\cal V}(\tau;v) \otimes {\cal V}(\tau;v)}{r^2} ( 1 - e ^ {-2\alpha \tau } ) \right)
\]
and therefore the gradient of $\varphi$ remains bounded on $r_1
\leq |v| \leq r_2$
\[
\nabla _v \varphi (v) = - \int _{\tau _1} ^ 0 \frac{^ t \partial {\cal V}}{\partial v }(\tau; v) \nabla \psi ({\cal V}(\tau;v))\;\mathrm{d}\tau
\]
since on the interval $\tau \in [\tau _1, 0]$ we have $|{\cal
V}(\tau;v)| \in [r_1, |v|] \subset [r_1, r_2]$. Taking now as
definition for $|v| = r$
\[
\varphi (v) = \varphi \left ( r_2 \vsv \right )\,,
\]
we obtain a bounded $C^1$ function on $|v| \leq r$ satisfying
\[
- \abv \cdot \nabla _v \varphi = \psi (v),\;\;|\varphi (v) | \leq \int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2 ) \rho }\; \|\psi \|_{C^0},\;|v|\leq r.
\]
We proceed similarly in order to extend the above function for
$|v| > r$. We have for any $s>0$
\[
- \varphi ({\cal V}(s;v)) + \varphi (v) = \int _0 ^s \psi ({\cal V}(\tau;v))\;\mathrm{d}\tau,\;\;|v|> r.
\]
As $\lim _{s \to +\infty} {\cal V}(s;v) = r \vsv$ we must take
$$
\varphi (v) = \lim _{s \to +\infty}\left \{\varphi ( {\cal
V}(s;v)) + \int _0 ^s \psi ({\cal V}(\tau;v))\;\mathrm{d}\tau
\right \} = \varphi \left (r\vsv \right ) + \int _0 ^{+\infty}
\psi ({\cal V}(\tau;v))\;\mathrm{d}\tau,\;\;|v| >r.\nonumber
$$
Clearly, for any $|v| > r$ the function $\tau \to |{\cal V}(\tau;v)|$ is strictly decreasing. Therefore, for any $r < |v| \leq r_3$ we have
\[
\varphi (v) = \varphi \left (r\vsv \right )= \varphi \left (r_2\vsv \right )
\]
since $|{\cal V}(\tau;v)|\leq |v| \leq r_3$ and $\psi ({\cal V}(\tau;v)) = 0$, $\tau \geq 0$. If $r_3 < |v| < r_4$ let us consider $\tau _4 < 0 < \tau _3$ such that
$
|{\cal V}(\tau _3;v)| = r_3 < r_4 = |{\cal V}(\tau _4;v)|.
$
The time interval between $\tau _4$ and $\tau _3$ is given by
\[
|\tau _3 | + |\tau _4 | = \tau _3 - \tau _4 = \int _{r_4} ^ {r_3}
\frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2) \rho } < +\infty\,,
\]
and therefore one gets for $r_3 < |v| < r_4$
\begin{align}
|\varphi (v) | &\leq \left | \varphi \left ( r \vsv \right ) \right | + \left |\int _0 ^{\tau _3} \!\!\!\!\psi ({\cal V}(\tau;v))\;\mathrm{d}\tau \right | \nonumber \\
& \leq \left [ \int _{r_1} ^ {r_2} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2) \rho } + \int _{r_4} ^ {r_3} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2) \rho } \right ] \|\psi \|_{C^0}.\label{Equ9}
\end{align}
Consider now $|v|\geq r_4$. There is $\tau _4 \geq 0$ such that $r_4 \vsv = {\cal V} (\tau_4; v)$ implying that
\begin{align*}
\varphi (v) & = \varphi \left ( r \vsv \right ) + \int _0 ^{+\infty} \psi ({\cal V} (\tau; v)) \;\mathrm{d}\tau =
\varphi \left ( r \vsv \right ) + \int _{\tau _4} ^{+\infty} \psi ({\cal V} (\tau; v)) \;\mathrm{d}\tau \\
& = \varphi \left ( r \vsv \right ) + \int _0 ^{+\infty} \psi ({\cal V} (\tau; {\cal V}(\tau _4;v))) \;\mathrm{d}\tau
= \varphi \left ( r \vsv \right ) + \int _0 ^{+\infty} \psi ({\cal V} (\tau; r_4 \vsv)) \;\mathrm{d}\tau \\
& = \varphi \left ( r_4 \vsv \right ).
\end{align*}
We deduce that the restriction of $\varphi $ on $\{v :|v| \geq
r_4\}$ satisfies the same bound as in \eqref{Equ9}. Moreover the
function $\varphi $ is $C^1$ on $\{v:|v|\geq r\}$, with bounded
derivatives. Indeed, it is sufficient to consider only the case
$r_3 \leq |v| \leq r_4$, observing that
\[
\nabla _v \varphi (v) = \frac{r_2}{|v|} \imvv \nabla _v \varphi \left ( r_2 \vsv \right ) + \int _0 ^{\tau _3} \frac{^t \partial {\cal V}}{\partial v }(\tau;v)\nabla \psi ({\cal V}(\tau;v)) \;\mathrm{d}\tau
\]
and that
\[
|{\cal V} (\tau; v)| \in [r_3, |v| ] \subset [r_3, r_4],\;\tau \in [0,\tau_3],\;\;|\tau _3| + |\tau _4| = \int _{r_4} ^ {r_3} \frac{\mathrm{d}\rho}{(\alpha - \beta \rho ^2) \rho } < +\infty.
\]
By construction we have $- \abv \cdot \nabla _v \varphi = \psi
(v)$, $|v| >r$.
Consider a $C^1$ decreasing function $\chi$ on $\R_+$ such that $\chi
|_{[0,1]} = 1$, $\chi |_{[2,+\infty[} = 0$. We know that
\[
\intv{\abv \cdot \nabla _v \left \{ \varphi (v) \chi \left (
\frac{|v|}{R} \right ) \right \}\,F(v)\,\dv} = 0,\;\;R>0\,,
\]
saying that
\[
\intv{\chi \left ( \frac{|v|}{R} \right )\abv \cdot \nabla _v \varphi
\;F(v)\,\dv} + \intv{(\alpha - \beta |v|^2) \varphi (v) \frac{|v|}{R} \chi ^{\;\prime} \left ( \frac{|v|}{R} \right ) \;F(v)\,\dv} = 0.
\]
Since $\varphi$ and $\psi = - \abv \cdot \nabla _v \varphi $ are
bounded and $F$ has finite mass and kinetic energy, we can pass to
the limit for $R \to +\infty$, using the dominated convergence
theorem. We obtain for any $C^1$ function $\psi$, with compact
support in $^c(\A)$
\[
\intv{\psi (v) F(v)\,\dv} = - \intv{\abv \cdot \nabla _v \varphi\,
F(v)\,\dv} = 0.
\]
Actually the previous equality holds true for any continuous
function $\psi$ with compact support in $^c(\A)$, since
$\intv{F(v)\,\dv} < +\infty$, so that $\supp F \subset \A$.
\end{proof}
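Two elementary examples may help to illustrate Proposition \ref{Kernel} (they are not used later on): the Dirac mass at $v = 0$ and the uniform measure on $r\sphere$ both satisfy $\Divv \{F \abv \} = 0$. For the Dirac mass this is immediate, since for any $\varphi \in \cocv{}$
\[
\intv{\abv \cdot \nabla _v \varphi (v) F(v)\,\dv} = (\alpha - \beta |0|^2)\; 0 \cdot \nabla _v \varphi (0) = 0,
\]
while for the uniform measure on $r\sphere$ the factor $\alpha - \beta |v|^2$ vanishes identically on the support. Both measures are supported in $\A$, in agreement with the proposition.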
In order to obtain stability for $(\fek)_k$ we need to avoid the
unstable equilibrium $v = 0$. For that we assume that the initial
support stays away from zero speed: there is $r_0
>0$ (possibly small, say $r_0 < r$) such that
\begin{equation}
\label{Equ20} \supp \fin \subset \{ (x,v)\in \R ^d \times \R^d
\;:\;|v| \geq r_0\}.
\end{equation}
\begin{pro}
\label{UnifSupp} Under the hypothesis \eqref{Equ20} we have for any $\eps >0$ small enough
\[
\supp \fe (t) \subset \{ (x,v)\in \R ^d \times \R^d \;:\;|v| \geq
r_0\},\;\;t \in \R_+.
\]
\end{pro}
\begin{proof}
Take $\eps >0$ such that $\eps \|a\|_{\linf{}} < 2\alpha r /(3
\sqrt{3})$ and $\reo (- \|a\|_{\linf{}}) < r_0$. For any
continuous function $\psi = \psi (x,v)$ with compact support in
$\R ^d \times \{v\;:\; |v| < r_0\}$ we have
\begin{align*}
\intxv{\psi(x,v) \fe (t,x,v)\,\dxv} & = \intxv{\psi (\Xe
(t;0,x,v),
\Ve (t;0,x,v))\fin(x,v)\,\dxv } \\
& = \intxv{\psi (\Xe (t;0,x,v), \Ve (t;0,x,v) ){\bf 1}_{\{|v| \geq
r_0 \}}\fin(x,v)\,\dxv}.
\end{align*}
But for any $|v| \geq r_0 > \reo$ we know by Proposition
\ref{ZeroStab} that $|\Ve (t;0,x,v)| > |v| \geq r_0$, implying
that $\psi (\Xe (t), \Ve (t)) = 0$. Therefore one gets $\int
_{\R^d \times \R^d}{\psi(x,v) \fe (t,x,v)\,\dxv} = 0$ saying that
$\supp \fe (t) \subset \{ (x,v):|v| \geq r_0\}$.
\end{proof}
We are ready now to establish the model satisfied by the limit
measure $f$. The idea is to use the weak formulation of
\eqref{Equ10}, \eqref{Equ11} with test functions which are
constant along the flow of $\abv \cdot \nabla _v$, in order to get
rid of the term in $\frac{1}{\eps}$. These functions are those
depending only on $x$ and $\vsv$. Of course, the invariants $\vsv$ have no
continuous extension at $v = 0$, but we will see that we can still use
them, since our measures $\fe$ vanish around $v = 0$.
\begin{proof} (of Theorem \ref{MainResult})
We already know that $f$ satisfies \eqref{Equ23}. Actually, since
$\supp \fe (t) \subset \{(x,v):|v|\geq r_0\}, t \in \R_+, \eps
>0$, we deduce that $\supp f(t) \subset \{(x,v):|v| \geq r_0\}$
and finally $\supp f(t) \subset \R ^d \times r \sphere, t \in
\R_+$. We have to establish \eqref{Equ22} and find the initial
data. Consider a $C^1$ decreasing function $\chi $ on $\R_+$ such
that $\chi |_{[0,1]} = 1$, $\chi |_{[2,+\infty[} = 0$. For any $\eta
= \eta (t) \in C^1_c (\R_+)$, $\varphi = \varphi (x,v) \in
\cocxv{}$ we construct the test function
\[
\theta (t,x,v) = \eta (t) \left [ 1 - \chi \left ( \frac{2|v|}{r_0}\right ) \right ] \varphi \left ( x, r\vsv \right ).
\]
Notice that $\theta $ is $C^1$ and $\theta = 0$ for $|v| \leq
\frac{r_0}{2}$. When applying the weak formulation of
\eqref{Equ10}-\eqref{Equ11} with $\theta$, the term in
$\frac{1}{\eps}$ vanishes. Indeed, we can write
\begin{align*}
\frac{1}{\eps}\inttxv{\eta (t) & \abv \cdot \nabla _v \left \{\left [ 1 - \chi \left ( \frac{2|v|}{r_0}\right ) \right ]\varphi \left ( x, r\vsv \right ) \right \}\fe(t,x,v)\,\dxv } \nonumber \\
& = \frac{1}{\eps} \int _{\R_+} \eta (t) \int _{|v|\geq r_0} \abv
\cdot \nabla _v \left \{ \varphi \left ( x, r\vsv \right ) \right
\}\fe(t,x,v)\,\dxv \;\mathrm{d}t = 0.\nonumber
\end{align*}
For the term containing $\partial _t \theta$ we obtain the following limit when $k \to +\infty$
\begin{align*}
T_1 ^k := \inttxv{\partial _t \theta \fek(t,x,v)\,\dxv} \to &\inttxv{\partial _t \theta f(t,x,v)\,\dxv} \\
& = \int _{\R_+} \eta ^{\;\prime} (t) \int _{|v|\geq r_0} \varphi \left ( x, r\vsv \right ) f(t,x,v)\,\dxv \;\mathrm{d}t \\
& = \int _{\R_+} \eta ^{\;\prime} (t) \int _{|v| = r} \varphi \left ( x, r\vsv \right ) f(t,x,v)\,\dxv \;\mathrm{d}t \\
& = \int _{\R_+} \eta ^{\;\prime} (t) \int _{|v| = r} \varphi \left ( x, v\right ) f(t,x,v)\,\dxv \;\mathrm{d}t \\
& = \inttxv{\partial _t ( \eta \varphi )f(t,x,v)\,\dxv}.
\end{align*}
Similarly, one gets
\begin{align*}
T_2 ^k := \inttxv{ v \cdot \nabla _x \theta \fek (t,x,v)\,\dxv} \to & \inttxv{v \cdot \nabla _x \theta f (t,x,v)\,\dxv} \\
& = \inttxv{v \cdot \nabla _x ( \eta \varphi )
f(t,x,v)\,\dxv}.\nonumber
\end{align*}
For the term containing $a \cdot \nabla _v \theta$ notice that on the set $|v| \geq r_0$ we have
\[
a \cdot \nabla _v \theta = \eta (t) a \cdot \nabla _v \left \{ \varphi \left ( x, r\vsv \right )\right \} = \eta (t) \frac{r}{|v|}a \cdot \imvv (\nabla _v \varphi ) \left ( x, r\vsv \right )
\]
and therefore we obtain
\begin{align*}
T_3 ^k := \inttxv{& a \cdot \nabla _v \theta \fek (t,x,v)\,\dxv} \to \inttxv{a \cdot \nabla _v \theta f (t,x,v)\,\dxv} \nonumber \\
& = \int _{\R_+} \eta (t) \int _{|v|\geq r_0} \frac{r}{|v|} \imvv a \cdot (\nabla _v \varphi ) \left ( x, r\vsv \right ) f(t,x,v)\,\dxv \;\mathrm{d}t \\
& = \inttxv{\;\;\;\imvv a \cdot \nabla _v (\eta \varphi )
f(t,x,v)\,\dxv}.\nonumber
\end{align*}
For treating the term involving the initial condition, we write
\begin{align*}
T_4 : = \intxv{\theta (0,x,v) \fin(x,v)\,\dxv } &= \intxv{\eta (0)
\varphi \left ( x, r \vsv \right ) \fin(x,v)\,\dxv } \\
&= \intxv{\eta (0) \varphi (x,v) \ave{\fin}(x,v)\,\dxv}.
\end{align*}
Passing to the limit for $k \to +\infty$ in the weak formulation
$T_1 ^ k + T_2 ^ k + T_3 ^ k + T_4 = 0$ yields the problem
\[
\partial _t f + \Divx\{f v \} + \Divv \left \{f \imvv a \right \} = 0,\;\;f(0) = \ave{\fin}
\]
as desired.
\end{proof}
\begin{remark}
\label{ConstraintPropagation} The constraint \eqref{Equ23} is
propagated by the evolution equation \eqref{Equ22}. This follows from
the fact that the flow $(X,V)$ associated to the field $v \cdot
\nabla _x + \imvv a \cdot \nabla _v$ leaves invariant $\R^d \times
r\sphere$. Indeed, if $(X,V)$ solves
\[
\frac{\mathrm{d}X}{\mathrm{d}s} = V(s),\;\;\frac{\mathrm{d}V}{\mathrm{d}s} = \left (I - \frac{V(s) \otimes V(s)}{|V(s)|^2} \right ) a(s, X(s))
\]
\[
X(s;0,x,v) = x,\;\;V(s;0,x,v) = v \neq 0
\]
then
\[
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}s}|V(s)|^2 = \left (I - \frac{V(s) \otimes V(s)}{|V(s)|^2} \right ) a(s, X(s)) \cdot V(s) = 0
\]
saying that $|V(s;0,x,v)| = |v|$ for any $(s,x) \in \R_+ \times
\R^d$. In particular, for any continuous function $\psi = \psi
(x,v)$ with compact support in $^c (\R ^d \times r\sphere)$ we
have
\begin{align*}
\intxv{\psi(x,v) f(s,x,v)\,\dxv} & = \intxv{\psi (X(s;0,x,v),
V(s;0,x,v))
\ave{\fin}(x,v)\,\dxv} \\
& = \int _{|v| = r} \psi (X(s;0,x,v), V(s;0,x,v))
\ave{\fin}(x,v)\,\dxv = 0
\end{align*}
since $\supp \ave{\fin} \subset \R ^d \times r\sphere$. Therefore
for any $s \in \R_+$ we have $\supp f(s) \subset \R ^d \times
r\sphere$ implying that $\Divv \{f(s)\abv \} = 0, s \in \R_+$.
\end{remark}
\begin{remark}
\label{Uni} By the uniqueness of the solution for \eqref{Equ22}
with initial data $\ave{\fin}$, we deduce that the whole family
$(\fe)_\eps$ converges weakly $\star$ in $\litmbxv{}$.
\end{remark}
\section{The non linear problem}
\label{NLimMod}
Up to now we considered the stability of the linear problems
\eqref{Equ10}-\eqref{Equ11} for a given smooth field $a = a(t,x)
\in \litwoix{}$. We concentrate now on the non linear problem
\begin{equation}
\label{Equ41} \partial _t \fe + \Divx\{\fe v\} + \Divv \{ \fe
a^\eps\} + \frac{1}{\eps} \Divv \{ \fe \abv \}= 0,\;\;(t,x,v) \in \R_+
\times \R ^d \times \R ^d
\end{equation}
with $a^\eps = - \nabla _x U \star \rho ^\eps - H \star \fe.$ The
well posedness of the non linear equation \eqref{Equ41} comes by
fixed point arguments in suitable spaces of measures, and it has
been discussed in \cite{CCR10, BCC12} in the measure solution
framework. We summarize next the properties of the solutions
$(\fe)_{\eps >0}$.
\begin{pro}
\label{ExiUniNonLin} Assume $h \in C^1_b (\R^d), U \in C^2 _b (\R
^d)$ and $( 1 + |v|^2 ) \fin \in \mbxv{}$. For all $\eps >0$,
there is a unique solution $(\fe, a ^\eps) \in C(\R_+;\poxv{})
\times \litwoix{}$ to
\begin{equation}
\label{Equ43} \partial _t \fe + \Divx \{\fe v \} + \Divv \{\fe
\aeps\} + \frac{1}{\eps}\Divv \{ \fe \abv \} = 0,\;\;(t,x,v) \in
\R _+ \times \R ^d \times \R ^d
\end{equation}
\begin{equation}
\label{Equ44} \aeps = - \nabla _x U \star \int _{\R ^d} \fe \;\md
v - H \star \fe,\;\;H(x,v) = h(x)v
\end{equation}
with initial data $\fe (0) = \fin$, satisfying the uniform bounds
\[
\sup _{\eps >0, t \in \R_+} \intxv{|v|^2 \fe
(t,x,v)\;\dxv}<+\infty
\]
\[
\sup _{\eps >0} \|\aeps \|_{L^\infty(\R_+;L^\infty(\R^d))} = :A
<+\infty,\;\;\sup _{\eps >0} \|\nabla _x \aeps
\|_{L^\infty(\R_+;L^\infty(\R^d))} = :A_1 <+\infty.
\]
Moreover, if the initial condition satisfies
\begin{equation*}
\supp \fin \subset \{ (x,v) \in \R ^d \times \R ^d
\;:\;|x| \leq L_0, r_0 \leq |v| \leq R_0 \}
\end{equation*}
for some $L_0 >0, 0 < r_0 < r < R_0 < +\infty$, then for any $\eps >0$
small enough we have
\[
\supp \fe (t) \subset \{ (x,v) \in \R ^d \times \R ^d \;:\;|x|\leq L_0 + t R_0, r_0 \leq |v| \leq R_0 \},\;\;t \in \R_+.
\]
\end{pro}
\begin{proof}
Here we only justify the uniform bounds in $\eps$; the rest is a
direct application of the results in \cite{CCR10, BCC12}. The
divergence form of \eqref{Equ43} guarantees the mass conservation
\[
\intxv{\,\fe (t,x,v)\;\dxv } = \intxv{\,\fin (x,v)\;\dxv},\;\;t
\in \R_+.
\]
Notice that the term $- \Divv \{ \fe H \star \fe\}$ does not contribute to the momentum
balance
$$
\int _{\R^{2d}}{\!\! v \Divv \{ \fe H \star \fe\}\;\dxv} = \int
_{\R^{4d}}{\!\! h(x-x^{\prime}) (v ^{\prime}-v) \fe (t, x^\prime,
v^\prime)\fe (t,x,v)\;\md (x^\prime, v^\prime)}\dxv = 0
$$
and decreases the kinetic energy
\begin{align*}
\int _{\R^{2d}}{\!\!\!|v|^2 \Divv \{ \fe H \star \fe\}\;\dxv} &= 2\int _{\R^{4d}}{{\!\!\!\!h(x-x^{\prime}) (v ^{\prime}-v) \cdot v \fe (t, x^\prime, v^\prime)\fe (t,x,v)\;\md (x^\prime, v^\prime)}\dxv} \\
& = - \int _{\R^{4d}}{{\!\!h(x-x^{\prime}) |v - v ^{\prime}|^2
\fe (t, x^\prime, v^\prime)\fe (t,x,v)\;\md (x^\prime,
v^\prime)}\dxv}.
\end{align*}
In particular, since $|v|^2 \fin \in \mbxv{}$, the kinetic
energy $\int _{\R^{2d}}{|v|^2 \fe (t,x,v)\;\dxv}$ remains bounded,
uniformly in time $t \in \R_+$ and in $\eps >0$. Indeed, using the
continuity equation one gets
\[
\intxv{v \cdot (\nabla _x U \star \rho ^\eps ) \fe (t,x,v)\;\dxv}
= \frac{1}{2}\frac{\md}{\md t} \int _{\R^d}{(U \star \rho ^\eps
(t))(x) \rho ^\eps (t,x)\;\md x}
\]
and after multiplying \eqref{Equ43} by $\frac{|v|^2}{2}$ together
with \eqref{Equ44}, we obtain
\begin{align}
\frac{\md }{\md t} \intxv{&\,\,\left (\frac{|v|^2}{2} +\frac{U
\star \rho ^\eps}{2} \right ) \fe (t,x,v) \;\dxv }
- \frac{1}{\eps}\intxv{(\alpha |v|^2 - \beta |v|^4) \fe (t,x,v) \;\dxv} \nonumber \\
& = - \frac{1}{2}\int_{\R^{4d }}h(x-x^{\prime}) |v - v
^{\prime}|^2 \fe (t, x^\prime, v^\prime)\fe (t,x,v)\;\md
(x^\prime, v^\prime)\dxv\leq 0\,.\label{energy}
\end{align}
Consider now $t ^\eps $ a maximum point on $[0,T], T>0$, of the
total energy
\[
W^\eps (t) = \intxv{\left (\frac{|v|^2}{2} + \frac{U \star \rho
^\eps}{2} \right ) \fe (t,x,v) \;\dxv },\;\;t \in [0,T].
\]
If $t^\eps = 0$ then it is easily seen that for any $t \in [0,T]$
\[
\intxv{\frac{|v|^2}{2} \fe (t,x,v) \;\dxv} \leq
\intxv{\frac{|v|^2}{2} \fin (x,v) \;\dxv} + \|U \| _{\linf} \left
( \intxv{\fin \;\dxv}\right ) ^2.
\]
If $t ^\eps \in ]0,T]$ then $\frac{\md }{\md t} W ^\eps (t^\eps)
\geq 0$ implying from \eqref{energy} by moment interpolation in
$v$ that
\[
\sup _{\eps >0, T>0} \intxv{(1 + |v|^4) \fe (t ^\eps ,x,v)
\;\dxv } <+\infty
\]
and thus the inequality $W^\eps (t) \leq W ^\eps (t ^\eps), t \in
[0,T]$ yields
\begin{align*}
\sup _{\eps >0, t \in [0,T]} \intxv{\frac{|v|^2}{2} \fe (t,x,v) \;\dxv} \leq & \sup _{\eps >0, T>0} \intxv{\frac{|v|^2}{2} \fe (t ^\eps ,x,v) \;\dxv } \\
& + \|U \|_{\linf} \left ( \intxv{\fin \;\dxv}\right ) ^2
<+\infty.
\end{align*}
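For completeness, we spell out the moment interpolation invoked above (a standard Cauchy-Schwarz argument). At a maximum point $t ^\eps \in ]0,T]$ we have $\frac{\md }{\md t} W ^\eps (t ^\eps) \geq 0$, so that \eqref{energy} gives $\intxv{(\alpha |v|^2 - \beta |v|^4) \fe (t ^\eps,x,v) \;\dxv} \geq 0$ and therefore
\[
\beta \intxv{|v|^4 \fe (t ^\eps,x,v) \;\dxv} \leq \alpha \intxv{|v|^2 \fe (t ^\eps,x,v) \;\dxv} \leq \alpha \left ( \intxv{\fin \;\dxv} \right ) ^{1/2} \left ( \intxv{|v|^4 \fe (t ^\eps,x,v) \;\dxv} \right ) ^{1/2},
\]
by the Cauchy-Schwarz inequality and the mass conservation, whence $\intxv{|v|^4 \fe (t ^\eps,x,v) \;\dxv} \leq \frac{\alpha ^2}{\beta ^2} \intxv{\fin \;\dxv}$, uniformly with respect to $\eps >0$ and $T>0$.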
Therefore the kinetic energy remains bounded on $[0,T]$, uniformly
with respect to $\eps >0$, and the bound does not depend on $T>0$.
The uniform bounds for $\aeps$ come immediately by convolution
with $\nabla _x U$ and $H$, thanks to the uniform estimate
\[
\sup _{\eps >0, t \in \R_+} \intxv{|v| \fe (t,x,v)\;\dxv} < +\infty.
\]
We analyze the support of $(\fe )_{\eps >0}$. Take $\eps >0$ small
enough such that $\eps A < 2 \alpha r /(3\sqrt{3})$ and $\reo (-A)
< r_0,\; \reth (A) < R_0$. By Proposition \ref{UnifSupp} we already
know that
\[
\supp \fe (t) \subset \{ (x,v) \in \R ^d \times \R ^d \;:\;|v |
\geq r_0\},\;\;t \in \R_+.
\]
For any continuous function $\psi = \psi (x,v)$ with compact
support in $\R ^d \times \{ v\in \R ^d\;:\;|v| > R_0\}$ we have
\begin{align*}
\intxv{\psi(x,v) \fe(t,x,v) \;\dxv} & =
\intxv{\psi (\Xe (t), \Ve (t)) \fin(x,v) \;\dxv } \\
& = \intxv{\psi (\Xe (t), \Ve (t) ) \ind{\{r_0 \leq |v| \leq
R_0\}}\fin(x,v) \;\dxv}.
\end{align*}
We distinguish several cases:\\
1. If $r_0 \leq |v| < \ret (-A)$ we deduce by Proposition
\ref{ZeroStab} that
$
|v| < |\Ve (t;0,x,v)| \leq \reth (A) < R_0,\;\;t \in \R_+, \eps
>0.
$
2. If $\ret (-A) \leq |v| \leq \reth (A)$ we obtain by Proposition
\ref{RStab} that
$
\ret (-A) \leq |\Ve (t;0,x,v)| \leq \reth (A) < R_0,\;\;t \in
\R_+, \eps >0.
$
3. If $\reth (A) < |v| \leq R_0$ one gets thanks to Proposition
\ref{ZeroStab}
$
\ret (-A) \leq |\Ve (t;0,x,v) | < |v| \leq R_0.
$
In all cases $(\Xe, \Ve )(t;0,x,v)$ remains outside the support of
$\psi$, implying that
\[
\intxv{\psi (x,v) \fe (t,x,v)\;\dxv} = 0.
\]
Thus for any $t \in \R_+$ and $\eps >0$ small enough one gets
\[
\supp \fe (t) \subset \{ (x,v) \in \R ^d \times \R ^d \;:\;r_0
\leq |v | \leq R_0\}.
\]
Consider $\theta \in C^1 (\R)$ non decreasing, verifying $\theta
(u) = 0$ if $u \leq 0$, $\theta (u) >0$ if $u>0$. Applying the
weak formulation of \eqref{Equ43}-\eqref{Equ44} with the test
function $\theta (|x| - L_0 - t R_0)$ yields
\begin{align*}
\intxv{\theta (|x| - L_0 -& t R_0) \fe (t,x,v)\;\dxv} = \intxv{\theta (|x| - L_0 )\fin(x,v)\;\dxv} \\
& + \int _0 ^t \intxv{\theta ^{\prime}(|x| - L_0 - s R_0) \left (
v \cdot \frac{x}{|x|} - R_0\right )\fe (s,x,v)\;\dxv} \md s \leq 0
\end{align*}
implying that $\supp \fe (t) \subset \{ (x,v) \in \R ^d \times \R ^d :|x| \leq L_0 + t R_0\}, t \in \R_+$.
\end{proof}
\
The uniform bound for the total mass allows us to extract a
sequence $(\eps _k)_k \subset \R_+ ^\star$ convergent to $0$ such
that $(\fek)_k$ converges weakly $\star$ in $\litmbxv{}$. The
treatment of the non linear term requires a little more, namely
convergence in $C(\R_+;\poxv{})$, or at least in $C([\delta,
+\infty[;\poxv{})$ for any $\delta >0$. The key argument for
establishing this is contained in the following lemma.
\begin{lemma}
\label{TimeEstimate} Consider $\eps >0$ small enough.\\
1. For any $(x,v)\in \R ^d \times \R ^d$ with $r_0 \leq |v| < \ret
(-A) - \eps$, the first time $t^\eps _1 = t ^\eps _1 (x,v)$ such
that $|\Ve (t^\eps _1;0,x,v) | = \ret (-A) - \eps$ satisfies
\[
t ^\eps _1 \leq \frac{\eps}{2\beta r_0 ^2} \ln \left ( \frac{r -
r_0 }{\eps} \right ).
\]
2. For any $(x,v)\in \R ^d \times \R ^d$ with $\reth (A) + \eps <
|v| \leq R_0$, the first time $t^\eps _2 = t ^\eps _2 (x,v)$ such
that $|\Ve (t^\eps _2;0,x,v) | = \reth (A) + \eps$ satisfies
\[
t ^\eps _2 \leq \frac{\eps}{2\beta r ^2} \ln \left ( \frac{R_0 -
r}{\eps} \right ).
\]
\end{lemma}
\begin{proof}
1. During the time interval $[0,t ^\eps _1]$ the velocity modulus $|\Ve
(t)|$ remains in $[r_0, \ret (-A) - \eps] \subset [\reo (-A), \ret
(-A)]$ and we can write for any $t \in [0, t ^\eps _1]$
\[
\frac{\eps \frac{\md |\Ve |}{\md t }}{- \eps A + (\alpha - \beta
|\Ve (t) |^2 ) \;|\Ve (t) |}\geq \frac{\frac{\md |\Ve |}{\md t
}}{\aeps (t, \Xe (t)) \cdot \frac{\Ve (t)}{|\Ve (t)|} +
\frac{1}{\eps}(\alpha - \beta |\Ve (t) |^2 ) \;|\Ve (t) |} = 1
\]
since $- \eps A + (\alpha - \beta u^2)u$ is positive for $u \in
[\reo (-A), \ret (-A)]$. Integrating with respect to $t \in [0, t
^\eps _1]$ yields
\[
t ^\eps _1 (x,v) \leq \eps \int _{|v|} ^{\ret (-A) - \eps }
\frac{\md u }{- \eps A + (\alpha - \beta u ^2 ) u } \leq \eps \int
_{r_0} ^{\ret (-A) - \eps } \frac{\md u }{- \eps A + (\alpha -
\beta u ^2 ) u }.
\]
Recall that $\ret (-A)$ is one of the roots of $u \to - \eps A +
(\alpha - \beta u ^2 ) u$ and therefore a direct computation leads
to
\[
- \eps A + (\alpha - \beta u ^2 ) u = \beta (\ret - u ) [u ^2 + u
\ret + (\ret ) ^2 - r^2] \geq 2 \beta r_0 ^2 ( \ret - u), \;\;u
\in [r_0, \ret],\;\eps \;\mbox{small enough }
\]
implying that
\[
t ^\eps _1 (x,v) \leq \frac{\eps}{2\beta r_0 ^2} \int _{r_0}
^{\ret - \eps} \frac{\md u }{ \ret - u} = \frac{\eps}{2\beta r_0
^2}\ln \left (\frac{\ret - r_0}{\eps} \right ) \leq
\frac{\eps}{2\beta r_0 ^2}\ln \left (\frac{r - r_0}{\eps} \right
).
\]
2. During the time interval $[0,t ^\eps _2]$ the velocity modulus $|\Ve
(t)|$ remains in $[\reth (A) + \eps, R_0] \subset [\reth(A),
+\infty[$ and we can write for any $t \in [0, t ^\eps _2]$
\[
\frac{\eps \frac{\md |\Ve |}{\md t }}{ \eps A + (\alpha - \beta
|\Ve (t) |^2 ) \;|\Ve (t) |}\geq \frac{\frac{\md |\Ve |}{\md t
}}{\aeps (t, \Xe (t)) \cdot \frac{\Ve (t)}{|\Ve (t)|} +
\frac{1}{\eps}(\alpha - \beta |\Ve (t) |^2 ) \;|\Ve (t) |} = 1
\]
since $ \eps A + (\alpha - \beta u^2)u$ is negative for $u \in
[\reth (A), +\infty[$. Integrating with respect to $t \in [0, t
^\eps _2]$ yields
\[
t ^\eps _2 (x,v) \leq \eps \int _{|v|} ^{\reth (A) + \eps }
\frac{\md u }{ \eps A + (\alpha - \beta u ^2 ) u } \leq \eps \int
_{R_0} ^{\reth (A) + \eps } \frac{\md u }{ \eps A + (\alpha - \beta
u ^2 ) u }.
\]
By direct computation we obtain
\[
\eps A + (\alpha - \beta u ^2 ) u = - \beta (u - \reth ) [u ^2 + u \reth + (\reth ) ^2 - r^2] \leq -2 \beta r ^2 ( u - \reth ), \;\;u \geq \reth,\;\eps \;\mbox{small enough }
\]
implying that
\[
t ^\eps _2 (x,v) \leq \frac{\eps}{2\beta r ^2} \int _{\reth +
\eps} ^{R_0} \frac{\md u }{ u - \reth} = \frac{\eps}{2\beta r
^2}\ln \left (\frac{R _0 - \reth }{\eps} \right ) \leq
\frac{\eps}{2\beta r ^2}\ln \left (\frac{R_0 - r}{\eps} \right
).
\]
\end{proof}
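In particular both exit times are of order $\eps \ln (1/\eps)$ and therefore vanish as $\eps \searrow 0$; this is precisely what allows the choice of $\eps _\delta$ below. As a purely illustrative numerical instance (with the sample values $\beta = r_0 = 1$ and $r - r_0 = 1$, which play no role in the proofs), the first bound gives $t ^\eps _1 \leq \frac{\eps}{2} \ln \frac{1}{\eps} \approx 3.5 \cdot 10^{-3}$ for $\eps = 10^{-3}$.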
We intend to apply the Arzel\`a-Ascoli theorem in $C(\R_+;\Po(\R ^d
\times \R ^d))$ in order to extract a convergent sequence
$(\fek)_k$ with $\limk \eps _k = 0$. We need to establish the
uniform equicontinuity of the family $(\fe)_{\eps >0}$. The
argument below is essentially similar to arguments in
\cite{CCR10}.
\begin{pro}
\label{UnifEquiCont} 1. If the initial data is well prepared {\it
i.e.,} $\supp \fin \subset \{ (x,v) \in \R ^d \times \R ^d\;:\;|x|\leq L_0,
|v| = r\}$ then there is a constant $C$ (not depending on $t \in
\R_+, \eps >0$) such that
\[
W_1 (\fe (t), \fe (s)) \leq C | t - s|,\;\;t, s \in \R_+, \eps >0.
\]
2. If $\supp \fin \subset \{ (x,v) \in \R ^d \times \R ^d\;:\;|x|\leq L_0, r_0
\leq |v| \leq R_0\}$ then there is a constant $C$ (not depending
on $t \in \R_+, \eps >0$) such that for any $\delta >0$ we can
find $\eps _\delta$ satisfying
\[
W_1 (\fe (t), \fe (s)) \leq C |t - s|,\;\; t,s \geq \delta,\;\;0 <
\eps < \eps _\delta.
\]
\end{pro}
\begin{proof}
1. Consider $\varphi = \varphi (x,v)$ a Lipschitz function on
$\R^d \times \R ^d$ with $\mathrm{Lip} (\varphi ) \leq 1$. For any $t, s
\in \R_+, \eps >0$ we have
\begin{align*}
\left | \intxv{\!\!\!\varphi ( \fe (t) - \fe (s))\dxv}\right | &= \left | \intxv{\!\!\!\{ \varphi (\Xe (t), \Ve (t)) - \varphi (\Xe (s), \Ve (s))\}\fin (x,v)\dxv}\right | \\
& \leq \intxv{\{ |\Xe (t) - \Xe (s)| + |\Ve (t) - \Ve (s)|\}
\ind{\{|v| = r\}} \fin \dxv}.
\end{align*}
Thanks to Proposition \ref{RStab} we have for any $(\tau, x, v)
\in \R_+ \times \R ^d \times r \sphere $
\[
\frac{\ret (-A) - r}{\eps} \leq \frac{|\Ve (\tau;0,x,v)| -
r}{\eps} \leq \frac{\reth (A) - r}{\eps}
\]
and it is easily seen, integrating the system of characteristics
between $s$ and $t$, that
\[
|\Xe (t;0,x,v) - \Xe (s;0,x,v) | = \left | \int _s ^ t \Ve
(\tau;0,x,v) \;\mathrm{d}\tau\right | \leq R_0 |t-s|
\]
and
\begin{align*}
\left | \Ve (t;0,x,v) - \Ve (s;0,x,v) \right | & \leq \left | \int _s ^t \left \{ |a^\eps(\tau, \Xe (\tau))| + \frac{|\alpha - \beta |\Ve (\tau) | ^2 | \;|\Ve (\tau) |}{\eps} \right \}\md \tau \right |\nonumber \\
& \leq |t -s | \left \{ A + \beta ( r + R_0) R_0 \max \left (
\frac{\reth (A) - r}{\eps}, \frac{r - \ret (-A) }{\eps} \right )
\right \}.
\end{align*}
Our conclusion follows immediately from Propositions \ref{NegA}, \ref{PosA}.\\
2. Consider $\delta >0$ and $\eps _\delta $ small enough such that
$\frac{\eps}{2\beta r_0 ^2} \ln \left ( \frac{r - r_0}{\eps}
\right ) < \delta$, $\frac{\eps}{2\beta r ^2} \ln \left (
\frac{R_0 - r}{\eps} \right ) < \delta$ for $0 < \eps < \eps
_\delta$. For any Lipschitz function $\varphi $ with $\mathrm{Lip}
(\varphi ) \leq 1$ and any $t, s \geq \delta$ we have
\[
\left | \intxv{\!\!\!\!\varphi ( \fe (t) - \fe (s) ) \;\dxv}\right | \leq \!\!\intxv{\{|\Xe
(t) - \Xe (s) | + |\Ve (t) - \Ve (s)| \} \ind{\{r_0 \leq |v| \leq
R_0\}} \fin \;\dxv}.
\]
For any $(\tau, x) \in \R_+ \times \R ^d$, $\ret (-A) - \eps \leq
|v| \leq \reth (A) + \eps$ we have by Propositions \ref{RStab},
\ref{ZeroStab}
\[
\ret (-A) - \eps \leq |\Ve (\tau;0,x,v) | \leq \reth (A) + \eps.
\]
The same conclusion holds true for any $\tau \geq \delta$, $x \in
\R^d$ and $|v| \in [r_0, \ret (-A) - \eps[ \cup ]\reth (A) + \eps,
R_0]$, thanks to Lemma \ref{TimeEstimate}, since $\delta > \max \{
t^\eps _1 (x,v), t^\eps _2 (x,v)\}$ (after a time $\delta$, the
velocity modulus $|\Ve (\tau;0,x,v)|$ is already in the set
$\{w\;:\;\ret (-A) - \eps < |w| < \reth (A) + \eps \}$). Our
statement follows as before, integrating the system of
characteristics between $s$ and $t$.
\end{proof}
Applying the Arzel\`a-Ascoli theorem, we deduce that there is a sequence
$(\eps _k)_k \subset \R _+ ^\star$, convergent to $0$ such that
\[
\limk W_1 (\fek (t), f(t)) = 0 \mbox{ uniformly for } t \in
[0,T],\;\; T>0
\]
for some $f \in C(\R_+;\Po (\R ^d \times \R ^d))$ if $\supp \fin
\subset \{(x,v)\in \R ^d \times \R ^d\;:\;|x| \leq L_0,|v| = r\}$ and
\[
\limk W_1 (\fek (t), f(t)) = 0 \mbox{ uniformly for } t \in
[\delta ,T],\;\; T>\delta >0
\]
for some $f \in C(\R_+ ^\star;\Po (\R ^d \times \R ^d))$ if $\supp
\fin \subset \{(x,v)\in \R ^d \times \R ^d\;:\;|x|\leq L_0, r_0 \leq |v| \leq
R_0\}$. It is easily seen that if the initial condition is well
prepared then there is a constant $C$ (cf. Proposition
\ref{UnifEquiCont}) such that
$
W_1 (f(t), f(s)) \leq C |t -s |,\;\;t, s \in \R_+.
$
The same holds true for initial conditions $\fin$ which are not well
prepared. Take $\delta >0$ and $\eps _\delta$ as in
Proposition \ref{UnifEquiCont}. For any $0 < \eps < \eps _\delta$
we have
$
W_1 (\fe(t), \fe(s)) \leq C |t -s |,\;\;t, s \geq \delta.
$
For $k$ large enough we have $\eps _k < \eps _\delta$ and
therefore
$
W_1 (\fek(t), \fek(s)) \leq C |t -s |,\;\;t, s \geq \delta.
$
Passing to the limit as $k$ goes to infinity yields
$
W_1 (f(t), f(s)) \leq C |t -s |,\;\;t, s \geq \delta.
$
Since the constant $C$ does not depend on $\delta$ one gets
\[
W_1 (f(t), f(s)) \leq C |t -s |,\;\;t, s >0.
\]
In particular we deduce that $f$ has a limit as $t$ goes to $0$
since $( \Po (\R ^d \times \R ^d), W_1)$ is a complete metric
space and therefore we can extend $f$ by continuity at $t = 0$.
The extended function, still denoted by $f$, belongs to
$C(\R_+;\Po (\R ^d \times \R ^d))$ and satisfies
\[
W_1 (f(t), f(s)) \leq C |t -s |,\;\;t, s \in \R_+.
\]
The above convergence allows us to handle the non linear terms. We
use the following standard argument \cite{Dob79,CCR10}.
\begin{lemma}
\label{NonLinTerm} Consider $f,g \in \Po (\R ^d \times \R ^d)$
compactly supported $ \supp f \cup \supp g \subset \{ (x,v) \in \R
^d \times \R ^d \;:\;|x|\leq L, |v| \leq R\}$, and let us consider
\[
a_f = - \nabla _x U \star \int _{\R ^d} f \;\md v - H \star
f,\;\;a_g = - \nabla _x U \star \int _{\R ^d} g \;\md v - H \star
g.
\]
Then we have
\[
\|a_f - a_g \|_{L^\infty ( \R ^d \times B_R)} \leq \left
\{\|\nabla _x ^2 U \|_{\linf} + \left (\|h \|^2 _{\linf} + 4 R ^2
\|\nabla _x h \| ^2 _{\linf} \right ) ^{1/2} \right \}W_1 (f,g)
\]
where $B_R$ stands for the closed ball in $\R ^d$ of center $0$
and radius $R$.
\end{lemma}
\begin{proof}
Take $\pi $ to be an optimal transportation plan between $f$ and
$g$. Then for any $x \in \R^d$ we have, using the marginals of
$\pi$
\begin{align*}
| (\nabla _x U \star f) (x) - (\nabla _x U \star g) (x) | &= \left | \intxv{\nabla _x U ( x - x^\prime) \{ f(\xp, \vp) - g(\xp, \vp)\}\;\dxpvp} \right | \\
& = \left | \intxv{\intxv{ [\nabla _x U (x - \xp) - \nabla _x U (x - \xs)]\md \pi (\xp, \vp, \xs, \vs)}} \right | \\
& \leq \|\nabla _x ^2 U \|_{\linf{}} \intxv{\intxv{|\xp - \xs | \;\md \pi (\xp, \vp, \xs, \vs)}} \\
& \leq \|\nabla _x ^2 U \|_{\linf{}} W_1 (f,g).
\end{align*}
The estimate for $H \star f - H \star g$ follows similarly
observing that on the support of $\pi$, which is included in
$\{(\xp, \vp, \xs, \vs)\in \R ^{4d}\;:\; |\vp|\leq R, |\vs | \leq
R\}$ we have
\begin{align*}
|h(x- \xp) (v- \vp) - h(x- \xs) & (v- \vs)| \\ & \leq |h(x- \xp)( \vs - \vp)| + |h(x- \xp)- h(x- \xs)| \;|v - \vs | \\
& \leq \left ( \|h \|^2 _{\linf} + 4 R ^2 \|\nabla _x h \| ^2
_{\linf} \right ) ^{1/2} \left ( |\xp - \xs | ^2 + |\vp - \vs
|^2 \right ) ^{1/2}.
\end{align*}
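The last inequality is simply the Cauchy-Schwarz inequality in $\R ^2$, applied after the elementary bounds $|h(x- \xp)| \leq \|h \|_{\linf}$, $|h(x- \xp)- h(x- \xs)| \leq \|\nabla _x h \|_{\linf} |\xp - \xs |$ and $|v - \vs | \leq 2R$ on the support of $\pi$:
\[
\|h \|_{\linf} |\vp - \vs | + 2 R \|\nabla _x h \|_{\linf} |\xp - \xs | \leq \left ( \|h \|^2 _{\linf} + 4 R ^2 \|\nabla _x h \| ^2 _{\linf} \right ) ^{1/2} \left ( |\xp - \xs | ^2 + |\vp - \vs |^2 \right ) ^{1/2}.
\]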
\end{proof}
We are ready now to prove Theorem \ref{MainResult2}.
\begin{proof}
(of Theorem \ref{MainResult2}) The arguments are the same as those
in the proof of Theorem \ref{MainResult} except for the treatment
of the non linear terms. We only concentrate on it. Consider
$(\fek )_k$ with $\limk \eps _k = 0$ such that $\limk W_1 (\fek
(t), f(t)) = 0$ uniformly for $t \in [0,T], T>0$ if $\supp \fin
\subset \{ (x,v) \;:\;|x|\leq L_0, |v| = r\}$ and $\limk W_1 (\fek
(t), f(t)) = 0$ uniformly for $t \in [\delta,T], T>\delta >0$ if
$\supp \fin \subset \{ (x,v) \;:\;|x|\leq L_0, r_0 \leq |v| \leq
R_0\}$ for some function $f \in C(\R_+;\Po (\R ^d \times \R ^d))$.
Thanks to Proposition \ref{Kernel} we deduce (for both prepared or
not initial data) that
\[
\supp f(t) \subset \{ (x,v)\in \R ^d \times \R ^d \;:\;|v| =
r\},\;\;t>0.
\]
The previous statement also holds true at $t = 0$, by the
continuity of $f$. The time evolution of the limit $f$ is obtained by
using the particular test functions
\[
\theta (t,x,v) = \eta (t) \left [ 1 - \chi \left (
\frac{2|v|}{r_0}\right ) \right ] \varphi \left ( x, r\vsv \right
)
\]
with $\eta \in C^1_c (\R_+)$, $\varphi \in \cocxv{}$. From now on
we consider only the case of initial data which are not well prepared (the other
case is simpler). We recall the notation $\aeps = - \nabla _x U
\star \int _{\R^d} \fe \;\md v - H \star \fe $ and we introduce $a = - \nabla _x
U \star \int _{\R^d} f \;\md v - H \star f$. Since $f$ satisfies the same
bounds as $(\fe)_\eps$, we deduce that $\|a\|_{\linf{}} \leq A,
\|\nabla _x a\|_{\linf{}} \leq A_1$. For any $\delta >0$ we can
write
\begin{align}
\label{EquBil}
&\left |\inttxv{\left \{ \aek \cdot \nabla _v \theta \;\fek - a \cdot \nabla _v \theta \;f\right \}\dxv\!\!} \right |
\leq \left |\int _0 ^\delta \intxv{\aek \cdot \nabla _v \theta \fek \;\dxv \md t } \right | \nonumber \\
&\qquad + \left |\int _0 ^\delta \intxv{a \cdot \nabla _v \theta \;f \;\dxv \md t} \right | + \left | \int _\delta ^{+\infty} \!\!\intxv{\left \{ \aek \cdot \nabla _v \theta \;\fek - a \cdot \nabla _v \theta \;f \right \}\;\dxv \md t} \right | \nonumber \\
\leq &\, 2 A \delta \|\nabla _v \theta \|_{C^0} \intxv{\fin
\;\dxv} +
\left | \int _\delta ^{+\infty}\!\! \intxv{ (\aek - a) \cdot \nabla _v \theta \;\ind{\{|v|\leq R_0\}}\fek \;\dxv }\md t \right | \nonumber \\
& + \left | \int _\delta ^{+\infty}\!\! \intxv{ a \cdot \nabla _v
\theta \;(\fek - f)\;\dxv}\md t \right |.
\end{align}
We keep $\delta >0$ fixed and we pass to the limit when $k$ goes
to infinity. Lemma \ref{NonLinTerm} implies that the second term
in the last right hand side can be estimated as
\[
\|\aek - a \|_{\linf (\R^d \times B_{R_0})} = \| a_{\fek} - a_f \|_{\linf (\R^d \times B_{R_0})} \leq C(R_0)
W_1 (\fek (t), f(t)) \to 0 \;\mbox{ when } k \to +\infty
\]
uniformly for $t \in [\delta, T]$, implying, for $T$ large enough
\[
\left | \int _\delta ^{+\infty}\!\! \int _{|v| \leq R_0}{ (\aek -
a) \cdot \nabla _v \theta \fek \;\dxv }\md t \right | \leq
C(R_0)\| \theta \|_{C^1} \int _\delta ^T W_1 (\fek (t), f(t))\;\md
t \to 0
\]
when $k$ goes to infinity. For the third term in the right hand
side of \eqref{EquBil} we use the weak $\star$ convergence $\limk
\fek (t) = f(t)$ in $\mbxv{}$ for any $t\geq \delta$, cf. Proposition \ref{w2properties}
\[
\limk \intxv{a \cdot \nabla _v \theta (\fek (t)- f(t)) \;\dxv } =
0,\;\;t\geq \delta
\]
and we conclude by the Lebesgue dominated convergence theorem
\[
\limk \int _\delta ^{+\infty} \intxv{ a \cdot \nabla _v \theta
(\fek (t,x,v) - f(t,x,v) )\;\dxv}\md t = 0\,.
\]
Passing to the limit in \eqref{EquBil} when $k$ goes to infinity,
we obtain
\[
\limsup _{k \to +\infty} \left |\inttxv{\left \{ \aek \cdot \nabla
_v \theta \fek - a \cdot \nabla _v \theta f\right
\}\;\dxv\!\!} \right | \leq 2 A \delta \|\nabla _v \theta \|_{C^0}
\,.
\]
Sending $\delta$ to $0$ we obtain that
\[
\limk \inttxv{ \aek \cdot \nabla _v \theta \; \fek\;\dxv\!\!} =
\inttxv{ a \cdot \nabla _v \theta \; f\;\dxv\!\!}\,.
\]
\end{proof}
\section{Diffusion models}\label{DiffMod}
We intend to introduce a formalism which allows us to
investigate in a simpler manner the asymptotic behavior of
\eqref{Equ10} and \eqref{Equ31}. This method comes from
gyrokinetic models in plasma physics: when studying magnetic
confinement one looks for models averaged with respect to the
fast motion of the particles around the magnetic field lines. The analysis
relies on the notion of gyro-average operator \cite{BosTraEquSin},
which is a projection onto the space of functions depending only on
the slow time variable. In other words, projecting means smoothing out the
fluctuations with respect to the fast time variable, which corresponds
to the high cyclotron frequency. Here the arguments are developed at a
formal level.
We first introduce rigorously the projected measure on the sphere
$r\sphere$ for general measures. Let $f \in \mbxv{}$ be a non
negative bounded measure on $\R^d \times \R^d$. We denote by
$\ave{f}$ the measure corresponding to the linear application
\[
\psi \to \intxv{\psi(x,v)\,\ind{v = 0} f(x,v)\,\dxv } +
\intxv{\psi\left ( x , r \vsv \right ) \ind{v \neq 0} f(x,v)\,\dxv
}\,,
\]
for all $\psi \in \czcxv$, {\it i.e.,}
\[
\intxv{\psi(x,v) \ave{f}(x,v)\,\dxv} = \int _{v = 0} \psi(x,v)
f(x,v)\,\dxv + \int _{v \neq 0} \psi \left ( x , r \vsv \right )
f(x,v)\,\dxv\,,
\]
for all $\psi \in \czcxv$. Observe that $\ave{f}$ is a non
negative bounded measure,
$$
\intxv{\;\;\ave{f}(x,v)\,\dxv} = \intxv{\;\;f(x,v)\,\dxv},
$$
with $\supp \ave{f} \subset \R^d \times (\A)$. We have the
following characterization.
\begin{pro} \label{VarChar}
Assume that $f$ is a non negative bounded measure on $\R^d \times
\R^d$. Then $\ave{f}$ is the unique measure $F$ satisfying $\supp
F \subset \R^d \times (\A)$,
\[
\int _{v\neq 0} \psi \left ( x , r \vsv \right )F(x,v)\,\dxv =
\int _{v \neq 0}\psi \left ( x , r \vsv \right
)f(x,v)\,\dxv,\;\;\psi \in \czcxv{}
\]
and $F = f$ on $\R^d \times \{0\}$.
\end{pro}
\begin{proof}
The measure $\ave{f}$ defined before satisfies the above
characterization. Indeed, $\supp \ave{f} \subset \R^d \times
(\A)$. Taking now $\psi (x,v) = \varphi (x) \chi (|v|/\delta)$
with $\varphi \in C^0 _c (\R^d)$ and $\delta >0$ one gets
\begin{align*}
\intxv{\varphi (x) \chi \left ( \frac{|v|}{\delta} \right )
\ave{f}(x,v)\,\dxv } = &\,\int _{v = 0} \varphi (x) f(x,v)\,\dxv
\\ &+ \int _{v \neq 0} \varphi (x) \chi \left ( \frac{|v|}{\delta}
\right ) f(x,v)\,\dxv.
\end{align*}
Passing to the limit for $\delta \searrow 0$ yields
\[
\int _{v = 0} \varphi (x) \ave{f}(x,v)\,\dxv = \int _{v = 0}
\varphi (x) f(x,v)\,\dxv,\;\;\varphi \in \czc{}
\]
meaning that $\ave{f} = f$ on $\R^d \times \{0\}$. Therefore one
gets for any $\psi \in \czcxv{}$
\begin{align*}
\int_{v \neq 0} \psi \left ( x , r \vsv \right )\ave{f}(x,v)\,\dxv & = \int_{|v| = r} \psi ( x , v )\ave{f}(x,v)\,\dxv \\
& = \int _{v \neq 0} \psi (x,v) \ave{f}(x,v)\,\dxv \\
& = \intxv{\psi \ave{f}}(x,v)\,\dxv- \int _{v = 0}\psi \ave{f}(x,v)\,\dxv \\
& = \intxv{\psi \ave{f}}(x,v)\,\dxv- \int _{v = 0}\psi f(x,v)\,\dxv \\
& = \int _{v \neq 0} \psi \left ( x , r \vsv \right )
f(x,v)\,\dxv.
\end{align*}
Conversely, let us check that the above characterization exactly defines the measure $\ave{f}$. For any $\psi \in \czcxv{}$ we have
\begin{align*}
\intxv{\psi (x,v) F(x,v)\,\dxv} & = \int _{v = 0} \psi F(x,v)\,\dxv + \int _{v \neq 0} \psi F(x,v)\,\dxv \\
& = \int _{v = 0} \psi (x,v) f(x,v)\,\dxv + \int _{v \neq 0} \psi \left ( x, r \vsv \right ) F(x,v)\,\dxv \\
& = \int _{v = 0} \psi (x,v) f(x,v)\,\dxv + \int _{v \neq 0} \psi
\left ( x, r \vsv \right )f(x,v)\,\dxv
\end{align*}
saying that $F = \ave{f}$.
\end{proof}
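As a simple illustration of the projection (it will not be needed in the sequel), consider a Dirac mass $f = \delta _{(x_0, v_0)}$: if $v_0 \neq 0$ then
\[
\ave{f} = \delta _{\left ( x_0,\, r \frac{v_0}{|v_0|} \right )},
\]
while if $v_0 = 0$ then $\ave{f} = f$; in both cases $\ave{f}$ is supported in $\R ^d \times (\A)$, as it should be.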
By Proposition \ref{VarChar} it is clear that $\ave{\cdot}$ leaves
invariant the measures with support in $\R^d \times (\A)$.
Consider $f \in \mbxv{}$. We say that $\Divv \{f \abv \} \in {\cal
M}_b (\R^d \times \R^d)$ if and only if there is a constant $C>0$
such that
\[
\left | \intxv{\abv \cdot \nabla _v \psi f(x,v)\,\dxv } \right | \leq C \|\psi
\|_{\linf{}},\;\;\psi \in \cocxv{}.
\]
In this case there is a bounded measure $\mu$ such that
\[
- \intxv{\abv \cdot \nabla _v \psi f(x,v)\,\dxv } = \intxv{\psi
\mu },\;\;\psi \in \cocxv{}.
\]
By definition we take $\Divv \{f \abv \} = \mu$. The main
motivation for the construction of the projection $\ave{\cdot}$ is
the following result.
\begin{pro}
\label{ZeroAve} For any $f \in \mbxv{}$ such that $ \Divv \{f \abv
\}\in {\cal M}_b (\R^d \times \R^d)$ we have $\ave{\Divv \{f \abv
\}} = 0$.
\end{pro}
\begin{proof}
Let us take $\Divv \{f \abv \} = \mu$. We will check that the zero
measure $0$ satisfies the characterization of $\ave{\mu}$ in
Proposition \ref{VarChar}. Clearly $\supp 0 = \emptyset \subset
\R^d \times (\A)$. For any $\varphi (x) \in \czc{}$ we have
\begin{align*}
\int _{v = 0} \varphi (x) \mu(x,v)\,\dxv & = \limd \intxv{\varphi (x) \chi \left ( \frac{|v|}{\delta} \right ) \mu(x,v)\,\dxv} \\
& = - \limd \intxv{\varphi (x) \chi ^{\;\prime} \left (
\frac{|v|}{\delta} \right )\frac{|v|}{\delta} ( \alpha - \beta
|v|^2) f(x,v)\,\dxv}= 0
\end{align*}
by dominated convergence, since
\[
\left | \chi ^{\;\prime} \left ( \frac{|v|}{\delta} \right )\frac{|v|}{\delta} ( \alpha - \beta |v|^2) \right |\leq \alpha \sup _{u \geq 0} |\chi ^{\;\prime} (u) u | + \beta \delta ^2 \sup _{u \geq 0} |\chi ^{\;\prime} (u) u ^3|.
\]
Therefore we deduce that $\Divv \{f \abv\} = 0$ on $\R ^d \times
\{0\}$. Consider now $\psi \in \cocxv{}$ and let us compute
\begin{align*}
\int _{v \neq 0} \psi \left (x,r \vsv \right ) & \mu(x,v)\,\dxv = \limd \intxv{\psi \left (x,r \vsv \right ) \left ( 1 - \chi \left ( \frac{|v|}{\delta} \right ) \right ) \mu(x,v)\,\dxv} \\
& = \limd \intxv{\psi \left (x,r \vsv \right ) \chi
^{\;\prime}\left ( \frac{|v|}{\delta} \right ) \frac{|v|}{\delta}
(\alpha - \beta |v|^2) f(x,v)\,\dxv} = 0
\end{align*}
since $v \cdot \nabla _v \{ \psi (x, r \vsv)\} = 0$. By density, the same conclusion holds true for any $\psi \in \czcxv{}$ and thus $\ave{\Divv \{f \abv \}} = 0$.
\end{proof}
\begin{remark} \label{SimplerAve}
When $f \in \mbxv{}$ does not charge $\R^d \times \{0\}$,
$\ave{f}$ is given by
\[
\supp \ave{f} \subset \R ^d \times r\sphere,\;\;\int _{v \neq 0}
\psi \left ( x, r \vsv \right ) \ave{f} = \int _{v \neq 0} \psi
\left ( x, r \vsv \right ) f,\;\;\psi \in \czcxv{}
\]
or equivalently
\begin{equation}
\label{Equ34} \intxv{\psi \ave{f} } = \int _{v \neq 0} \psi \left ( x, r \vsv \right ) f,\;\;\psi \in \czcxv{}.
\end{equation}
\end{remark}
\
Using Proposition \ref{ZeroAve} we can obtain, at least formally,
the limit model satisfied by $f = \lime \fe$. By \eqref{Equ2} we
know that $\supp f \subset \R ^d \times (\A)$. The time evolution
of $f$ comes by eliminating $\fo$ in \eqref{Equ3}. For that it is
sufficient to project on the subspace of the measures satisfying
the constraint \eqref{Equ2}, {\it i.e.,} to apply $\ave{\cdot}$. This yields
\begin{equation}
\label{Equ35} \ave{\partial _t f } + \ave{\Divx \{f v\}} + \ave{\Divv \{ f a \}} = 0.
\end{equation}
It is easily seen that $\ave{\partial _t f } = \partial _t \ave{f}
= \partial _t f $ since $\supp f \subset \R ^d \times (\A)$ and
therefore $\ave{f} = f$. We need to compute the last two terms in
\eqref{Equ35}. We have the following result.
\begin{pro}\label{TransportAve}
Assume that $a = a(x)$ is a bounded continuous field. Then we have
the following equalities
\[
\ave{\Divx \{f v \}} = \Divx \{f v \}\;\;\mbox{ if } \;\supp f
\subset \R ^d \times (\A)
\]
\[
\ave{\Divv \{ f a \}} = \Divv \left \{ f \imvv a \right \}
\;\;\mbox{ if } \;\supp f \subset \R ^d \times r\sphere.
\]
As a consequence, \eqref{Equ35} yields the transport equation
\eqref{Equ22} obtained rigorously in Theorems
{\rm\ref{MainResult}} and {\rm\ref{MainResult2}}.
\end{pro}
\begin{proof}
For any $\psi \in \cocxv{}$ we have
\begin{align*}
\intxv{\psi & \ave{\Divx\{f v \}}} = \int _{v = 0} \psi \Divx\{f v \} + \int _{v \neq 0} \psixv \Divx\{f v \}\\
& = \limd \intxv{\psi \chivd \Divx\{f v \}} + \limd \intxv{\psixv \left ( 1 - \chivd \right ) \Divx\{f v \}} \\
& = - \limd \intxv{v \cdot \nabla _x \psi \chivd f} - \limd \intxv{v \cdot \nabla _x \psixv \left ( 1 - \chivd \right ) f} \\
& = - \int _{v = 0} v \cdot \nabla _x \psi f - \int _{v \neq 0} v \cdot \nabla _x \psixv f \\
& = - \intxv{v \cdot \nabla _x \psi f } = \intxv{\psi \Divx\{f v
\}}
\end{align*}
saying that $\ave{\Divx \{f v \}} = \Divx \{f v \}$. Assume now
that $\supp f \subset \R ^d \times r\sphere$. It is easily seen
that $\Divv (fa)$ does not charge $\R ^d \times \{0\}$. Indeed, for
any $\psi \in \czcxv{}$ we have by dominated convergence
\begin{align*}
\int _{v = 0} \psi \Divv (fa) & = \limd \intxv{\psi \chivd \Divv (fa)} \\
& = - \limd \intxv{a \cdot \nabla _v \psi \chivd f} - \limd
\intxv{a \cdot \frac{v}{|v|} \frac{1}{\delta} \chipvd \psi f } =
0.
\end{align*}
Therefore we can use \eqref{Equ34}
\begin{align*}
\intxv{\psi \ave{\Divv (fa)}} & = \int _{v \neq 0} \psixv \Divv (fa) \\
& = \limd \intxv{\left ( 1 - \chivd \right ) \psixv \Divv (fa) }\\
& = - \limd \intxv{\left ( 1 - \chivd \right ) \frac{r}{|v|} \imvv a \cdot (\nabla _v \psi ) \left ( x, r\vsv \right ) f } \\
& \quad + \limd \intxv{\;\;\frac{1}{\delta} \chipvd \frac{v}{|v|} \cdot a \psixv f } \\
& = - \int _{v \neq 0} \imvv a \cdot \nabla _v \psi f =
\intxv{\psi \;\Divv \left \{f \imvv a \right \}}.
\end{align*}
\end{proof}
We investigate now the limit when $\eps \searrow 0$ of the
diffusion model \eqref{Equ31}. We are done if we compute
$\ave{\Delta _v f}$ for a non negative bounded measure with
support contained in $\R^d \times r\sphere$. As before we can
check that $\Delta _v f$ does not charge $\R^d \times \{0\}$ and
therefore, thanks to \eqref{Equ34}, we obtain after some
computations
\begin{equation}
\label{Equ37} \intxv{\psi \ave{\Delta _v f}} = \int _{v \neq 0} \psixv \Delta _v f = \int _{v \neq 0} \Delta _v \left \{ \psixv \right \}f,\;\;\psi \in \ctcxv{}.
\end{equation}
\begin{lemma}
\label{ZeroHom} For any function $\varphi \in C^2 (\R^d \setminus
\{0\})$ and any $r >0$ we have
\[
\Delta _v \left \{ \varphi \left ( r \vsv \right ) \right \} = \left ( \frac{r}{|v|}\right ) ^2 \imvv : \partial ^2 _v \varphi \left ( r \vsv \right ) - 2 \frac{r}{|v|} \frac{v \cdot \nabla _v \varphi \left ( r \vsv \right ) }{|v|^2},\;\;v \neq 0.
\]
\end{lemma}
\
\noindent Combining \eqref{Equ37}, Lemma \ref{ZeroHom} and the
fact that $\supp f \subset \R ^d \times r \sphere$ we obtain
\begin{align*}
\intxv{\psi (x,v) \ave{\Delta _v f} } & = \int _{v \neq 0} \left [ \imvv : \partial _v ^2 \psi (x,v) - 2 \frac{v \cdot \nabla _v \psi (x,v)}{|v|^2} \right]f \nonumber \\
& = \intxv{\psi (x,v) \Divv \left \{ \Divv \left [ f \imvv \right
] + 2 f \frac{v}{|v|^2} \right \}}. \nonumber
\end{align*}
We deduce the formula
\[
\ave{\Delta _v f} = \Divv \left \{ \Divv \left [ f \imvv \right ] + 2 f \frac{v}{|v|^2} \right \}
\]
for any $f$ satisfying $\supp f \subset \R ^d \times r \sphere$
and the limit of the Vicsek model \eqref{Equ31} when $\eps
\searrow 0$ becomes
\begin{equation}\label{equnew}
\partial _t f + \Divx (fv) + \Divv \left \{ f \imvv a \right \} = \Divv \left \{ \Divv \left [ f \imvv \right ] + 2 f \frac{v}{|v|^2} \right \}
\end{equation}
with the initial condition $f(0) = \ave{\fin}$, as stated in
\eqref{Equ22Diff}.
\appendix
\section{Spherical coordinates and the Laplace-Beltrami operator}
\label{A}
In this appendix, we detail the computations relating the equations
written in the original variables $(x,v)$ to the equations in
spherical coordinates $(x,\omega)$. Our limit densities have their
support contained in $\R^d \times r \sphere$ and thus can be identified with
measures on $\R^d \times r\sphere$. For example, let us consider
the measure on $\R^d \times r\sphere$ still denoted by $f$, given
by
\[
\intxo{\psi (x, \omega) f(x,\omega)\,\mathrm{d}(x,\omega)} = \int
_{v \neq 0} \psixv{} f(x,v)\,\dxv
\]
for any function $\psi \in \czcxo{}$. In particular, any $f \in
\mbxv{}$ not charging $\R^d \times \{0\}$ gives rise to a measure $\ave{f}
\in \mbxv{}$, with $\supp \ave{f} \subset \R ^d \times r \sphere$,
characterized by
\[
\intxo{\psi (x, \omega)\ave{f}(x,\omega)\,\mathrm{d}(x,\omega)} =
\int _{v \neq 0}\psixv f(x,v)\,\dxv.
\]
We intend to write the previous limit models (those in Theorems
\ref{MainResult} and \ref{MainResult2}, and equation \eqref{equnew}) in
spherical coordinates.
\begin{pro}
\label{SpherCoord} Assume that $f \in \mbxv{}$, $\supp f \subset
\R^d \times r \sphere$ and let us denote by $F \in \mbxo $ its
corresponding measure on $\R^d \times r \sphere$. Therefore we
have
\[
\ave{\Divx (fv)} = \Divx (F \omega),\;\;\ave{\Divv (fa)} = \Divo
\left \{F \imoo a \right \},\;\;\ave{\Delta _v f } = \Delta
_\omega F.
\]
\end{pro}
\begin{proof}
Thanks to Proposition \ref{TransportAve} we have for any $\psi \in
\cocxo{}$
\begin{align*}
\intxo{\psi (x, \omega) \ave{\Divx (fv)}} & = \int _{v \neq 0} \psixv \Divx (fv) = - \int _{v\neq 0} v \cdot \nabla _x \psixv f\\
& = - \int _{v \neq 0} r \vsv \cdot \nabla _x \psixv f = -
\intxo{\omega \cdot \nabla _x \psi (x, \omega) F}
\end{align*}
and thus $\ave{\Divx (fv)} = \Divx (F \omega)$. Similarly we can
write
\begin{align*}
\intxo{\psi (x, \omega) \ave{\Divv (fa)}} & = \int _{v \neq 0} \psixv \ave{\Divv (fa)}(\mathrm{d}(x,v)) \nonumber \\
& = \int _{v \neq 0} \psixv \Divv \left \{f \imvv a\right \} \nonumber \\
& = - \int _{v \neq 0} \frac{r}{|v|}\imvv a \cdot \imvv \nabla _v\psixv f \nonumber \\
& = - \int _{v \neq 0} \imvv a \cdot \imvv \nabla _v \psixv f \nonumber \\
& = - \intxo{\imoo a \cdot \imoo \nabla _v \psi (x, \omega) F}\nonumber \\
& = - \intxo{\imoo a \cdot \nabla _\omega \psi (x, \omega) F}\nonumber \\
& = \intxo{\psi (x, \omega) \Divo \left \{ F\imoo a\right
\}}\nonumber
\end{align*}
and therefore
\[
\ave{\Divv (fa)} = \Divo \left \{F \imoo a \right \}.
\]
Here $\Divo $ stands for the divergence along $r\sphere$ (notice
that $\imoo a$ is a tangent field of $r\sphere$) and $\nabla
_\omega = \imoo \nabla _v$ is the gradient along $r\sphere$. For
the last assertion we appeal to the following well known result
asserting that the Laplace-Beltrami operator coincides with the
Laplacian of the degree zero homogeneous extension, see also
\cite{BCC12}.
\begin{pro}
\label{LaplaceBeltrami} Consider $\varphi = \varphi (\omega)$ a
$C^2$ function on $r\sphere$ and denote by $\Phi = \Phi (v)$
its degree zero homogeneous extension on $\R^d \setminus \{0\}$
\[
\Phi (v) = \varphi \left ( r \vsv \right ),\;\;v \neq 0.
\]
Then we have for any $\omega \in r \sphere$
\[
\Delta _\omega \varphi (\omega) = \Delta _v \Phi (\omega).
\]
\end{pro}
Let us come back to the proof of Proposition \ref{SpherCoord}. For
any $\psi \in C^2_c (\R ^d \times r \sphere)$ we introduce its
degree zero homogeneous extension $\Psi (x,v) = \psixv$. Thanks to
Proposition \ref{LaplaceBeltrami} we can write
\begin{align*}
\intxo{\psi (x,\omega) \ave{\Delta _v f} }& = \int _{v \neq 0}
\psixv \ave{\Delta _v f} \nonumber = \int _{v \neq 0} \Psi (x,v)
\Delta _v f = \int _{v \neq 0} \Delta _v \Psi f \\
& = \int _{|v| = r} \Delta _\omega \psi (x,v) f = \intxo{\Delta
_\omega \psi (x,\omega) F} = \intxo{\psi (x,\omega) \Delta _\omega
F}
\end{align*} meaning that $\ave{\Delta _v f } = \Delta _\omega
F$.
\end{proof}
\vskip 6pt
For the sake of completeness, we finally write the equations in
spherical coordinates in $\R^3$. We introduce the spherical
coordinates $\omega = r (\cos \theta \cos \varphi, \cos \theta
\sin \varphi, \sin \theta)$ with the angle variables $(\theta,
\varphi ) \in ]-\pi/2, \pi/2[ \times [0,2\pi [$, and the
orthogonal basis of the tangent space to $r\sphere$
\[
e_\theta = (- \sin \theta \cos \varphi, - \sin \theta \sin
\varphi, \cos \theta),\;\;e_\varphi = (- \cos \theta \sin \varphi,
\cos \theta \cos \varphi, 0)
\]
with $|e_\theta| = 1,\;|e_\varphi| = \cos \theta$. For any smooth
function $u$ on $r\sphere$ we have
\[
\nabla _\omega u = (\nabla _\omega u \cdot e_\theta) e_\theta +
(\nabla _\omega u \cdot e _\varphi ) \frac{e_\varphi}{\cos ^2
\theta} = \frac{1}{r} \partial _\theta u \;e _\theta +
\frac{1}{r\cos ^2 \theta} \partial _\varphi u \;e _\varphi
\]
and for any smooth tangent field $\xi = \xi _\theta e _\theta +
\xi _\varphi e _\varphi $ we have
\[
\Divo \xi = \frac{1}{r} \left \{\frac{1}{\cos \theta} \partial
_\theta (\xi _\theta \cos \theta) + \partial _\varphi \xi _\varphi
\right \}.
\]
The coordinates of the tangent field $\xi := F \imoo a$ are
$
\xi _\theta = \xi \cdot e _\theta = F a_\theta,\;\;\xi _\varphi =
\frac{\xi \cdot e _\varphi }{\cos ^2 \theta } = F a_\varphi
$
and we obtain
\[
\ave{\Divv (fa)} = \Divo \left \{ F \imoo a \right \} =
\frac{1}{r} \left \{ \frac{1}{\cos \theta} \partial _\theta (F
a_\theta \cos \theta) + \partial _\varphi ( F a_\varphi ) \right
\}.
\]
The spherical Laplacian is given by
\begin{align*}
\Delta _\omega F & = \Divo (\nabla _\omega F) = \frac{1}{r} \left \{\frac{1}{\cos \theta} \frac{\partial}{\partial \theta} \left ( \frac{\cos \theta}{r} \partial _\theta F\right ) + \frac{\partial}{\partial \varphi } \left ( \frac{1}{r \cos ^2 \theta} \partial _\varphi F \right ) \right \}\\
& = \frac{1}{r^2}\left \{ \frac{1}{\cos \theta}
\frac{\partial}{\partial \theta} ( \cos \theta \;\partial _\theta
F ) + \frac{1}{\cos ^2 \theta} \;\partial ^2 _\varphi F \right
\}.
\end{align*}
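As a quick consistency check (which is not used elsewhere), take the sample function $F = \sin \theta = \omega _3 /r$: the formula above gives
\[
\Delta _\omega F = \frac{1}{r^2} \frac{1}{\cos \theta} \frac{\partial}{\partial \theta} ( \cos \theta \cdot \cos \theta ) = - \frac{2}{r^2} \sin \theta = - \frac{2}{r^2} F,
\]
as expected, since the restriction of $\omega _3$ to $r\sphere$ is a first spherical harmonic, an eigenfunction of $\Delta _\omega$ with eigenvalue $-2/r^2$.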
\begin{pro}
The limit transport equation obtained in \eqref{equnew} for $\R^3$
is
\[
\partial _t F + \omega \cdot \nabla _x F + \frac{1}{r} \left \{\frac{\partial _\theta (F a_\theta \cos \theta )}{\cos \theta} + \partial _\varphi ( F a _\varphi ) \right \} = \frac{1}{r^2} \left \{\frac{1}{\cos \theta} \frac{\partial}{\partial \theta} ( \cos \theta \;\partial _\theta F ) + \frac{1}{\cos ^2 \theta} \;\partial ^2 _\varphi F \right \}.
\]
\end{pro}
We recall here the proof of Proposition \ref{LaplaceBeltrami}. It
is a consequence of a more general result.
\begin{pro}
\label{MoreGenRes} Let us consider a function $\varphi = \varphi
(v) \in C^2 (\R^N)$, $N \geq 2$, written in polar coordinates as
$ \varphi (v) = \tvarphi (\rho, \sigma),\;\;\rho = |v|
>0,\;\;\sigma = \vsv \in S^{N-1}. $ Then for any $v \neq 0$ we
have
\[
\Delta _v \varphi (v) = \frac{1}{\rho ^{N-1}} \frac{\partial}{\partial \rho} ( \rho ^{N-1} \partial _\rho \tvarphi ) + \frac{1}{\rho ^2 } \Delta _\sigma \tvarphi (\rho, \sigma),\;\;\rho = |v| >0,\;\;\sigma = \vsv.
\]
\end{pro}
\begin{proof}
Consider a smooth function $\psi = \psi (v) \in C^2$ with compact
support in $\R ^N \setminus \{0\}$, written in polar
coordinates as $ \psi (v) = \tpsi (\rho, \sigma),\;\;\rho = |v|
>0,\;\;\sigma = \vsv \in S^{N-1}. $ We have
\[
\frac{\partial \tvarphi }{\partial \rho } = \nabla _v \varphi \cdot \sigma,\;\;\nabla _v \varphi = (\nabla _v \varphi \cdot \sigma ) \sigma + (I - \sigma \otimes \sigma) \nabla _v \varphi =
\frac{\partial \tvarphi }{\partial \rho }\;\sigma + \nabla _{\omega = \rho \sigma} \tvarphi
\]
and
\[
\frac{\partial \tpsi }{\partial \rho } = \nabla _v \psi \cdot \sigma,\;\;\nabla _v \psi = (\nabla _v \psi \cdot \sigma ) \sigma + (I - \sigma \otimes \sigma) \nabla _v \psi =
\frac{\partial \tpsi }{\partial \rho }\;\sigma + \nabla _{\omega = \rho \sigma} \tpsi.
\]
Integrating by parts yields
\begin{align*}
- \intvN{\Delta _v \varphi \;\psi (v)} & = \intvN{\nabla _v
\varphi \cdot \nabla _v \psi } = \int_{\R_+} \int _{S^{N-1}} \left
\{ \frac{\partial \tvarphi}{\partial \rho} \frac{\partial
\tpsi}{\partial \rho} + \frac{1}{\rho ^2} \nabla _\sigma \tvarphi
\cdot \nabla _\sigma \tpsi \right \}
\;\mathrm{d}\sigma \rho ^{N-1} \;\mathrm{d}\rho \nonumber \\
& = - \int _{S^{N-1}} \int _{\R_+} \tpsi \frac{\partial}{\partial
\rho } \left ( \rho ^{N-1} \frac{\partial \tvarphi}{\partial \rho}
\right )
\;\mathrm{d}\rho\;\mathrm{d}\sigma - \int _{\R_+} \frac{\rho ^{N-1}}{\rho ^2} \int _{S^{N-1}} \tpsi \;\Delta _\sigma \tvarphi \;\mathrm{d}\sigma \;\mathrm{d}\rho \nonumber \\
& = - \intvN{\psi (v) \left \{ \frac{1}{\rho ^{N-1}}
\frac{\partial}{\partial \rho} ( \rho ^{N-1} \partial _\rho
\tvarphi ) + \frac{1}{\rho ^2 } \Delta _\sigma \tvarphi \right
\}}
\end{align*}
and therefore
\[
\Delta _v \varphi (v) = \frac{1}{\rho ^{N-1}} \frac{\partial}{\partial \rho} ( \rho ^{N-1} \partial _\rho \tvarphi ) + \frac{1}{\rho ^2 } \Delta _\sigma \tvarphi (\rho, \sigma),\;\;\rho = |v| >0,\;\;\sigma = \vsv.
\]
\end{proof}
\begin{proof}[Proof of Proposition \ref{LaplaceBeltrami}]
The degree zero homogeneous extension $\Phi (v) = \varphi \left (
r \vsv \right )$ does not depend on the polar radius $\Phi (v) =
\tilde{\Phi} (\sigma) = \varphi (\omega = r \sigma),\;\;\sigma =
\vsv.$ Thanks to Proposition \ref{MoreGenRes}, we deduce
$
\Delta _v \Phi = \frac{1}{\rho ^2} \Delta _\sigma \tilde{\Phi} = \frac{r^2}{\rho ^2} \Delta _\omega \varphi .
$
Taking $\rho = r $, which means $v = r \sigma = \omega$ we obtain
$
\Delta _v \Phi (\omega) = \Delta _\omega \varphi
(\omega),\;\;\omega \in r \sphere.
$
\end{proof}
\subsection*{Acknowledgments}
JAC was supported by projects MTM2011-27739-C04-02 and
2009-SGR-345 from Ag\`encia de Gesti\'o d'Ajuts Universitaris i de
Recerca-Generalitat de Catalunya.
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
We consider a class of $1D$ NLS perturbed by a steplike potential.
We prove that the nonlinear solutions satisfy the double scattering channels
in the energy space.
The proof is based on the
concentration-compactness/rigidity method. We prove moreover that in dimension higher than one, classical scattering holds if the potential is periodic in all but one direction and is steplike and repulsive in the remaining one.
\end{abstract}
\section{Introduction}\label{intro}
The main motivation of this paper is the analysis of the behavior for large times
of solutions to the following $1D$ Cauchy problem (see below for a
suitable generalization in higher dimensions):
\begin{equation}\label{NLSV}
\left\{ \begin{aligned}
i\partial_{t}u+\partial_{x}^2 u-Vu&=|u|^{\alpha}u, \quad (t,x)\in \mathbb{R}\times
\mathbb{R}, \quad \alpha>4\\
u(0)&=u_0\in H^1(\mathbb{R})
\end{aligned}\right.,
\end{equation}
namely we treat the $L^2$-supercritical defocusing power nonlinearities, and $V:{\mathbb{R}}\to{\mathbb{R}}$ is a real time-independent steplike potential. More precisely we assume that
$V(x)$ has two different asymptotic behaviors at $\pm\infty$:
\begin{equation}\label{differentlimit}
a_+=\lim_{x\rightarrow+\infty} V(x)\neq \lim_{x\rightarrow -\infty} V(x)=a_-.
\end{equation}
In order to simplify the presentation we shall assume in our treatment
\begin{equation*}
a_+=1 \quad\hbox{ and }\quad a_-=0,
\end{equation*}
but of course the arguments and the results below
can be extended to the general case $a_+\neq a_-$.
Roughly speaking the Cauchy problem \eqref{NLSV} behaves like
the
following Cauchy problems,
respectively for $x\ll 0$ and $x\gg 0$:
\begin{equation}\label{NLS}
\left\{\begin{aligned}
i\partial_{t}v+\partial_{x}^2v&=|v|^{\alpha}v\\
v(0)&=v_0\in H^1(\mathbb{R})
\end{aligned} \right.
\end{equation}
and \begin{equation}\label{NLS1}
\left\{ \begin{aligned}
i\partial_{t}v+(\partial_{x}^2-1)v&=|v|^{\alpha}v\\
v(0)&=v_0\in H^1(\mathbb{R})
\end{aligned} \right..
\end{equation}
\noindent We recall that in $1D,$ the long time behavior of solutions to \eqref{NLS}
(and also to \eqref{NLS1})
was first obtained in the work by Nakanishi (see \cite{N}), who proved that
the solutions to \eqref{NLS} (and also \eqref{NLS1})
scatter to a free wave in $H^{1}(\mathbb{R})$
(see \autoref{def-classic} for a precise definition of scattering from nonlinear to linear solutions in a general framework).
The Nakanishi argument combines the induction
on the energy with a suitable version of
Morawetz inequalities with time-dependent weights. Alternative proofs
based on the use of the interaction Morawetz estimates, first introduced in \cite{CKSTT},
have been obtained later (see
\cite{CHVZ, CGT, PV, Vis} and the references therein).
\newline
As far as we know, there are no results available in the literature about the
long time behavior of solutions to NLS perturbed by a steplike potential,
and this is the main motivation of this paper.
We recall that in the physics literature the steplike potentials are called
{\em barrier potentials} and are very useful to study the interactions of particles
with the boundary of a solid (see Gesztesy \cite{G} and Gesztesy, Nowell and P\"otz \cite{GNP} for more details). We also mention the paper
\cite{DaSi} where, among other results,
the long time behavior of the propagator $e^{i t(\partial_x^2 - V)}$, with $V(x)$ steplike,
is studied via the twisting trick
(see below for more details on the definition of the double scattering channels).
For a more complete list of references devoted to the analysis of steplike potentials
we refer to \cite{DS}.
Nevertheless, to the best of our knowledge, no results are available about
the long time behavior of solutions to the nonlinear Cauchy problem \eqref{NLSV} with a steplike potential.
\\
\\
It is worth mentioning that in $1D$,
we can rely on the Sobolev embedding
$H^1(\mathbb{R} )\hookrightarrow L^{\infty}(\mathbb{R})$. Hence it is straightforward
to show that the
Cauchy problem \eqref{NLSV} is locally well posed in the energy space $H^1(\mathbb{R})$. For higher dimensions the local well posedness theory is also well known, see for example Cazenave's book \cite{CZ}, once the good dispersive properties of the linear flow are established. Moreover, thanks to the defocusing character of the nonlinearity,
we can rely on the conservation of the mass and of the energy below, valid in any dimension:
\begin{equation}\label{consmass}
\|u(t)\|_{L^2(\mathbb{R}^d)}=\|u(0)\|_{L^2(\mathbb{R}^d)},
\end{equation}
and
\begin{equation}\label{consen}
E(u(t)):=\frac{1}{2}\int_{\mathbb{R}^d}\bigg(|\nabla u(t)|^2+V|u(t)|^2+\frac{2}{\alpha+2}|u(t)|^{\alpha+2}\bigg) dx=E(u(0)),
\end{equation}
in order to deduce that the solutions are global. Hence for any initial datum $u_0\in H^1(\mathbb{R}^d)$ there exists a
unique global solution $u(t,x)\in {\mathcal C}(\mathbb{R}; H^1(\mathbb{R}^d))$ to
\eqref{NLSV} for $d=1$ (and to \eqref{NLSV-d} below in higher dimensions).
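Indeed, since the potential is always assumed to be nonnegative in our main results and the nonlinear term in \eqref{consen} is nonnegative, the conservation laws yield the a priori bound
\begin{equation*}
\|u(t)\|_{H^1(\mathbb{R}^d)}^2\leq \|u(t)\|_{L^2(\mathbb{R}^d)}^2+2E(u(t))=\|u_0\|_{L^2(\mathbb{R}^d)}^2+2E(u_0),
\end{equation*}
which rules out finite time blow-up of the $H^1$ norm.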
It is well-known that a key point in order to study the long time behavior of nonlinear solutions
is a good knowledge of the dispersive properties
of the linear flow, namely the so called Strichartz estimates.
A lot of works have been written in the literature about the topic, both in $1D$ and in higher dimensions. We briefly mention
\cite{AY, CK, DF, GS, W1, W2, Y}
for the one dimensional case and \cite{BPST-Z, GVV, JK, JSS, R, RS} for the higher dimensional case, referring to the bibliographies contained in these papers for a more detailed list of works on the subject.
It is worth mentioning that in all the papers mentioned above the potential perturbation
is assumed to decay at infinity, hence steplike potentials are not allowed.
Concerning contributions in the literature to NLS perturbed by a decaying potential
we have several results, among which we quote the following
most recent ones: \cite{BV, CA, CGV, GHW, H, La, Li, LZ}, and all the references therein.
To the best of our knowledge, the unique paper where the dispersive properties of the corresponding $1D$
linear flow perturbed by a steplike potential $V(x)$ have been analyzed is \cite{DS},
where the $L^1$--$L^{\infty}$ decay estimate in $1D$ is proved:
\begin{equation}\label{disp}
\|e^{it(\partial_{x}^2-V)}f\|_{L^{\infty}(\mathbb{R})}\lesssim|t|^{-1/2}\|f\|_{L^1(\mathbb{R})}, \quad \forall\, t\neq0\quad \forall\, f\in L^1({\mathbb{R}}).
\end{equation}
We point out that, besides the different spatial behavior of $V(x)$ on the left and on the right of the real line, other assumptions must be satisfied by the potential.
There is a huge literature devoted to those spectral properties; nevertheless
we shall not focus on it, since our main point is to show how
to go from \eqref{disp} to the analysis of the long time
behavior of solutions to \eqref{NLSV}. We will therefore assume the dispersive relation \eqref{disp} as a black-box; for its proof, under further assumptions
on the steplike potential $V(x)$, we refer to Theorem $1.1$ in \cite{DS}.
Our first aim is to provide a nonlinear version of the
{\em double scattering channels} that have been established in the literature in the linear context (see \cite{DaSi}).
\begin{definition}\label{def1}
Let $u_0\in H^1(\mathbb{R})$ be given and $u(t, x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}))$
be the unique global solution to \eqref{NLSV}, where $V(x)$ satisfies
\eqref{differentlimit} with $a_-=0$ and $a_+=1$. Then we say that
$u(t,x)$ satisfies the {\em double scattering channels} provided that
$$\lim_{t\rightarrow \pm \infty} \|u(t,x) - e^{it\partial_x^2} \eta_\pm
- e^{it(\partial_x^2-1)} \gamma_\pm \|_{H^1 (\mathbb{R})}=0,
$$
for suitable $\eta_\pm, \gamma_\pm\in H^1(\mathbb{R})$.
\end{definition}
We can now state our first result in $1D$.
\begin{theorem}\label{1Dthm}
Assume that $V:{\mathbb{R}}\to{\mathbb{R}}$ is a bounded, nonnegative potential satisfying \eqref{differentlimit}
with $a_-=0$ and $a_+=1,$ and \eqref{disp}. Furthermore, suppose that:
\begin{itemize}
\item\label{hhyp0} $|\partial_xV(x)|\overset{|x|\rightarrow\infty}\longrightarrow0$; \\
\item\label{hhyp1} $\lim_{x\rightarrow +\infty}|x|^{1+\varepsilon}|V(x)-1|=0,\,
\lim_{x\rightarrow -\infty}|x|^{1+\varepsilon}|V(x)|=0\,$ for some $\varepsilon>0$;\\
\item\label{hhyp3} $x \cdot \partial_xV (x)\leq 0.$
\end{itemize}
Then for every $u_0\in H^1({\mathbb{R}})$ the corresponding unique solution
$u(t,x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}))$ to \eqref{NLSV}
satisfies the {\em double scattering channels} (according to \autoref{def1}).
\end{theorem}
\begin{remark}\label{per1D}
It is worth mentioning that the assumption \eqref{disp} may look
rather strong.
However we emphasize that the knowledge of the estimate
\eqref{disp} provides for free information on the long time behavior of nonlinear solutions
for small data, while in general it is more complicated to deal with large data, as is the case
in
\autoref{1Dthm}. For instance consider the case of $1D$ NLS perturbed
by a periodic potential. In this situation the
validity of the dispersive estimate for the linear propagator (see \cite{Cu})
and also the small data nonlinear scattering (\cite{CuV}) have been established in the literature. However, to the best of our knowledge,
it is unclear how to deal with the large data scattering.
\end{remark}
The proof of \autoref{1Dthm}
goes in two steps. The first one is to show that solutions
to \eqref{NLSV} scatter to solutions of the linear problem
(see \autoref{def-classic} for a rigorous definition of scattering in a general framework); the second one is the asymptotic description of solutions
to the linear problem associated with \eqref{NLSV} in the energy space $H^1$
(see \autoref{linscat}).
Concerning the first step we use the concentration-compactness/rigidity technique pioneered by Kenig and Merle (see \cite{KM1, KM2}).
Since this argument is rather general, we shall present it
in a more general higher dimensional setting.
\newline
More precisely in higher dimension
we consider the following family of NLS
\begin{equation}\label{NLSV-d}
\left\{ \begin{aligned}
i\partial_{t}u+\Delta u-Vu&=|u|^{\alpha}u, \quad (t,x)\in \mathbb{R}\times
\mathbb{R}^d\\
u(0)&=u_0\in H^1(\mathbb{R}^d)
\end{aligned}\right.,
\end{equation}
where
\begin{equation*}
\begin{cases}
\frac4d<\alpha<\frac{4}{d-2} &\text{if}\qquad d\geq3\\
\frac4d<\alpha &\text{if}\qquad d\leq2
\end{cases}.
\end{equation*}
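For concreteness, the above conditions on the exponent read, in low dimensions,
\begin{equation*}
\alpha>4 \ \ (d=1),\qquad \alpha>2 \ \ (d=2),\qquad \tfrac43<\alpha<4 \ \ (d=3),
\end{equation*}
the one dimensional case being consistent with the restriction $\alpha>4$ in \eqref{NLSV}.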
The potential $V(x)$ is assumed to satisfy, uniformly in $\bar x\in{\mathbb{R}}^{d-1},$
\begin{equation}\label{difflim2}
a_-=\lim_{x_1\rightarrow-\infty} V(x_1,\bar x)\neq \lim_{x_1\rightarrow +\infty} V(x_1, \bar x)=a_+, \quad\hbox{ where }\quad x=(x_1, \bar x).
\end{equation}
Moreover we assume that $V(x)$ is
periodic w.r.t. the variables $\bar x=(x_2,\dots, x_d).$
Namely we assume the
existence of $d-1$ linearly independent vectors $P_2,\dots,P_d\in{\mathbb{R}}^{d-1}$ such that for any fixed
$x_1\in{\mathbb{R}},$ the following holds:
\begin{equation}\label{periods}
\begin{aligned}
V(x_1, \bar x)=V(x_1,\bar x +k_2P_2+\dots +k_dP_d),\\
\forall\,\bar x=(x_2,\dots,x_d)\in{\mathbb{R}}^{d-1},
\quad \forall\, (k_2,\dots,k_d)\in\mathbb{Z}^{d-1}.
\end{aligned}
\end{equation}
Some comments about this choice of assumptions on $V(x)$ are given in \autoref{assVd}.
\newline
Exactly as in the $1D$ case mentioned above,
we assume
as a black-box the dispersive estimate
\begin{equation}\label{disp-d}
\|e^{it(\Delta-V)}f\|_{L^{\infty}(\mathbb{R}^d)}\lesssim|t|^{-d/2}\|f\|_{L^1(\mathbb{R}^d)}, \quad \forall\, t\neq0\quad \forall\, f\in L^1({\mathbb{R}}^d).
\end{equation}
Next we recall the classical definition of scattering from nonlinear to linear solutions
in a general setting. We recall that,
once \eqref{disp-d} is granted, the local (and also the global, since the equation is defocusing) existence and uniqueness of solutions
to \eqref{NLSV-d} follows by standard arguments.
\begin{definition}\label{def-classic}
Let $u_0\in H^1(\mathbb{R}^d)$ be given and $u(t, x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}^d))$
be the unique global solution to \eqref{NLSV-d}. Then we say that
$u(t,x)$ {\em scatters to a linear solution} provided that
$$\lim_{t\rightarrow \pm \infty} \|u(t,x) - e^{it(\Delta-V)}\psi^\pm \|_{H^1 (\mathbb{R}^d)}=0
$$
for suitable $\psi^\pm\in H^1(\mathbb{R}^d).$ \end{definition}
In the sequel we will also use the following auxiliary Cauchy problems
that roughly speaking represent the Cauchy problem
\eqref{NLSV-d} in the regions
$x_1\ll 0$ and $x_1\gg 0$ (provided that we assume
$a_-=0$ and $a_+=1$ in \eqref{difflim2}):
\begin{equation}\label{NLS-d}
\left\{ \begin{aligned}
i\partial_{t}u+\Delta u&=|u|^{\alpha} u, \quad (t,x)\in \mathbb{R}\times
\mathbb{R}^d\\
u(0)&=\psi\in H^1(\mathbb{R}^d)
\end{aligned}\right.,
\end{equation}
and
\begin{equation}\label{NLS1-d}
\left\{ \begin{aligned}
i\partial_{t}u+(\Delta -1)u&=|u|^{\alpha} u, \quad (t,x)\in \mathbb{R}\times
\mathbb{R}^d\\
u(0)&=\psi\in H^1(\mathbb{R}^d)
\end{aligned}\right..
\end{equation}
Notice that those problems are respectively the analogue
of \eqref{NLS} and \eqref{NLS1} in higher dimensional setting.
\newline
We can now state our main result about scattering from nonlinear to
linear solutions in general dimension $d\geq 1$.
\begin{theorem}\label{finalthm}
Let $V\in\mathcal C^1(\mathbb{R}^d;{\mathbb{R}})$ be a bounded, nonnegative potential which satisfies \eqref{difflim2} with $a_-=0,\,a_+=1,$ \eqref{periods} and assume moreover:
\begin{itemize}
\item\label{hyp0} $|\nabla V(x_1,\bar x)|\overset{|x_1|\to\infty}\longrightarrow0$ uniformly in $\bar x\in{\mathbb{R}}^{d-1};$\\
\item\label{hyp2} the decay estimate \eqref{disp-d} is satisfied;\\
\item\label{hyp3} $x_1 \cdot \partial_{x_1}V (x)\leq 0$ for any $x\in{\mathbb{R}}^d$.
\end{itemize}
Then for every $u_0\in H^1(\mathbb{R}^d)$ the unique corresponding global solution
$u(t,x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}^d))$ to \eqref{NLSV-d} scatters.
\end{theorem}
\begin{remark}\label{assVd}
Next we comment on the assumptions made
on the potential $V(x)$ in \autoref{finalthm}. Roughly speaking we assume that the potential $V(x_1,\dots,x_d)$
is steplike and repulsive w.r.t. $x_1$ and is periodic w.r.t. $(x_2,\dots, x_d)$.
The main motivation of this choice is that this situation is reminiscent,
according to \cite{DaSi},
of the higher dimensional version of the $1D$ double scattering channels
mentioned above. Moreover we highlight the fact that the
repulsivity of the potential in one single direction is sufficient to get scattering, in contrast to other situations considered in the literature
where repulsivity is assumed w.r.t. the full set of variables $(x_1,\dots,x_d)$.
Another point is that along the proof of \autoref{finalthm}
we show how to deal with a partially periodic
potential $V(x)$, despite the fact that, to the best of our knowledge,
the large data scattering for potentials periodic w.r.t. the full set of variables
has not been established elsewhere, not even in the $1D$ case (see \autoref{per1D}).
\end{remark}
\begin{remark}\label{repulsively}
Next we discuss the repulsivity assumption on $V(x)$.
As pointed out in \cite{H}, this assumption on the potential plays the same role as the convexity assumption for the obstacle problem studied by Killip, Visan and Zhang in \cite{KVZ}. The author highlights the fact that both the strict convexity of the obstacle and the repulsivity of the potential prevent wave packets from refocusing once they are reflected by the obstacle or by the potential.
From a technical point of view the repulsivity assumption is made
in order to get the right sign in the virial identities, and
hence to conclude the rigidity part of the Kenig and Merle
argument. In this paper, since we assume repulsivity only in one
direction, we use a suitable version of the Nakanishi-Morawetz time-dependent estimates in order to get the rigidity part in the Kenig and Merle road map. Of course
it is a challenging mathematical question to understand
whether or not the repulsivity assumption (partial or global)
on $V(x)$ is a necessary condition in order to get scattering.
\end{remark}
When we specialize to $1D,$ we are able to complete the theory of double scattering channels in the energy space. Concerning the linear part of our work, we give the following result, which, in conjunction with \autoref{finalthm}
with $d=1$, provides the proof of \autoref{1Dthm}.
\begin{theorem}\label{linscat}
Assume that $V(x)\in\mathcal C(\mathbb{R};{\mathbb{R}})$ satisfies the following space decay rate:
\begin{equation}\label{hyp1}
\lim_{x\rightarrow +\infty}|x|^{1+\varepsilon}|V(x)-1|=\lim_{x\rightarrow -\infty}|x|^{1+\varepsilon}|V(x)|=0\quad\text{for some}\quad \varepsilon>0.
\end{equation}
Then for every $\psi\in H^1(\mathbb{R})$ we have
$$\lim_{t\rightarrow \pm \infty} \|e^{it(\partial_x^2-V)} \psi - e^{it\partial_x^2} \eta_\pm
- e^{it(\partial_x^2-1)} \gamma_\pm \|_{H^1 (\mathbb{R})}=0$$
for suitable $\eta_\pm, \gamma_\pm\in H^1(\mathbb{R}).$
\end{theorem}
Notice that \autoref{linscat} is a purely linear statement.
The main point (compared with other results in the literature)
is that the asymptotic convergence
is stated with respect to the $H^1$ topology and not with respect
to the weaker $L^2$ topology.
Indeed we point out that the content of \autoref{linscat} is well-known
and has been proved
in \cite{DaSi} in the $L^2$ setting. However,
it seems natural to us to understand, in view of \autoref{finalthm}, whether or not the
result can be extended to the $H^1$ setting.
In fact, according to \autoref{finalthm}, the asymptotic convergence of the nonlinear dynamics to the linear dynamics occurs in the energy space and not only in $L^2$. As far as we know the issue of $H^1$ linear scattering has not been previously discussed in the literature, not even in the case of a potential which decays in both directions $\pm\infty$.
For this reason we have decided to state \autoref{linscat} as an independent result.
\subsection{Notations}\label{notations}
The spaces $L_I^{p}L^{q}=L^{p}_{t}(I;L^{q}_x(\mathbb{R}^d))$ are the usual time-space Lebesgue mixed spaces endowed with the norm defined by
\begin{equation}\notag
\|u\|_{L^{p}_{t}(I;L^{q}_x(\mathbb{R}^d))}=\bigg(\int_{I}\bigg|\int_{\mathbb{R}^d}|u(t,x)|^q\,dx\bigg|^{p/q}\,dt\bigg)^{1/p}
\end{equation}
and by the context it will be clear which interval $I\subseteq\mathbb{R},$ bounded or unbounded, is considered. If $I=\mathbb{R}$ we will lighten the notation by writing $L^pL^q.$ The operator $\tau_z$ will denote the translation operator $\tau_zf(x):=f(x-z).$ If $z\in\mathbb C,$ $\Re z$ and $\Im z$ are the common notations for the real and imaginary parts of a complex number and $\bar z$ is its complex conjugate.
In what follows, when dealing with a dimension $d\geq2,$ we write ${\mathbb{R}}^d\ni x:=(x_1,\bar x)$ with $\bar x\in {\mathbb{R}}^{d-1}.$ For $x\in{\mathbb{R}}^d$ the quantity $|x|$ will denote the usual norm in ${\mathbb{R}}^d$.
With standard notation, the Hilbert spaces $L^2(\mathbb{R}^d), H^1(\mathbb{R}^d), H^2(\mathbb{R}^d)$ will be denoted simply by $L^2, H^1, H^2$ and likewise for all the Lebesgue $L^p(\mathbb{R}^d)$ spaces. By $(\cdot,\cdot)_{L^2}$ we mean the usual $L^2$-inner product, i.e. $(f,g)_{L^2}=\int_{\mathbb{R}^d}f\bar{g}\,dx,$ $\forall\, f,g\in L^2,$ while the energy norm $\mathcal H$ is the one induced by the inner product $(f,g)_{\mathcal H}:=(f,g)_{\dot H^1}+(Vf,g)_{L^2}.$
Finally, if $d\geq 3,$ $2^*=\frac{2d}{d-2}$ is the Sobolev conjugate of 2 ($2^*$ being $+\infty$ in dimension $d\leq2$), while if $1\leq p\leq\infty$ then $p^\prime$ is the conjugate exponent given by $p^{\prime}=\frac{p}{p-1}.$
\newline
\section{Strichartz Estimates}\label{strichartz}
The well known Strichartz estimates are a basic tool in the study of the nonlinear Schr\"odinger equation and we will assume their validity in our context.
Roughly speaking, we can say that these essential space-time estimates arise from the so-called dispersive estimate for the Schr\"odinger propagator
\begin{equation}\label{disp2}
\|e^{it(\Delta-V)}f\|_{L^{\infty}}\lesssim|t|^{-d/2}\|f\|_{L^1}, \quad \forall\, t\neq0\quad \forall\, f\in L^1,
\end{equation}
which is proved in $1D$ in \cite{DS}, under suitable assumptions
on the steplike potential $V(x)$, and which we take for granted
by hypothesis.
\\
As a first consequence we get the following Strichartz estimates
$$\|e^{it(\Delta-V)}f\|_{L^aL^b}\lesssim \|f\|_{L^2}$$
where $a,b\in [1, \infty]$ are assumed to be Strichartz admissible, namely
\begin{equation}\label{admissible}
\frac 2a=d\left(\frac 12-\frac 1 b\right).
\end{equation}
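For instance, as a purely illustrative example (not used later), in dimension $d=1$ the endpoint choices $b=2$ and $b=\infty$ in \eqref{admissible} yield the admissible pairs
\begin{equation*}
(a,b)=(\infty,2)\qquad\text{and}\qquad (a,b)=(4,\infty),
\end{equation*}
since $\tfrac{2}{\infty}=\tfrac12-\tfrac12$ and $\tfrac24=\tfrac12-0$.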
We recall, as already mentioned in the introduction, that throughout the paper we are assuming the validity of the dispersive estimate \eqref{disp2} also in the higher dimensional setting.
We fix from now on the following Lebesgue exponents
\begin{equation*}
r=\alpha+2,\qquad p=\frac{2\alpha(\alpha+2)}{4-(d-2)\alpha},\qquad q=\frac{2\alpha(\alpha+2)}{d\alpha^2-(d-2)\alpha-4}
\end{equation*}
(where $\alpha$ is given by the nonlinearity in \eqref{NLSV-d}).
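As a sanity check on these formulas (purely illustrative and not needed in the sequel), take $d=1$ and $\alpha=5>4$: then
\begin{equation*}
r=7,\qquad p=\frac{2\cdot5\cdot7}{4+5}=\frac{70}{9},\qquad q=\frac{2\cdot5\cdot7}{25+5-4}=\frac{35}{13}.
\end{equation*}
Note that for these values the pair $(p,r)$ is not Strichartz admissible ($\tfrac2p=\tfrac{9}{35}\neq \tfrac12-\tfrac17=\tfrac{5}{14}$), consistent with the use of the inhomogeneous estimate for non-admissible pairs \eqref{str2.4} below.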
Next, we give the linear estimates that will be fundamental in our study:
\begin{align}\label{fxc1}
\|e^{it(\Delta-V)}f\|_{L^{\frac{4({\alpha}+2)}{d{\alpha}}}L^r}&\lesssim \|f\|_{H^1},\\\label{fxc2}
\|e^{it(\Delta-V)}f\|_{L^\frac{2(d+2)}{d} L^\frac{2(d+2)}{d} }&\lesssim \|f\|_{H^1},\\\label{fxc3}
\|e^{it(\Delta-V)}f\|_{L^pL^r}&\lesssim \|f\|_{H^1}.
\end{align}
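In passing, one can check that the pairs appearing in \eqref{fxc1} and \eqref{fxc2} satisfy the admissibility relation \eqref{admissible}: indeed
\begin{equation*}
\frac{2}{\frac{4(\alpha+2)}{d\alpha}}=\frac{d\alpha}{2(\alpha+2)}=d\Big(\frac12-\frac{1}{\alpha+2}\Big),
\qquad
\frac{2}{\frac{2(d+2)}{d}}=\frac{d}{d+2}=d\Big(\frac12-\frac{d}{2(d+2)}\Big),
\end{equation*}
so that, by the $L^2$ bound recalled above, these two estimates hold with $\|f\|_{L^2}$ in place of $\|f\|_{H^1}$ on the right hand side.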
The last estimate that we need is the so-called inhomogeneous Strichartz estimate for non-admissible pairs:
\begin{align}\label{str2.4}
\bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}g(s)\,ds \bigg\|_{L^pL^r}\lesssim\|g\|_{L^{q'}L^{r'}},
\end{align}
whose proof is contained in \cite{CW}.
\begin{remark}
In the unperturbed framework, i.e. in the absence of the potential, and for general dimensions, we refer to \cite{FXC} for comments and references about Strichartz estimates \eqref{fxc1}, \eqref{fxc2}, \eqref{fxc3} and \eqref{str2.4}.
\end{remark}
\section{Perturbative nonlinear results}\label{perturbative}
The results in this section
are quite standard and hence we skip the complete proofs, which can be found for instance in \cite{BV, CZ, FXC}.
In fact the arguments involved are a combination of dispersive properties of the linear propagator
and a standard perturbation argument.
Throughout this section we assume that the estimate \eqref{disp-d} is satisfied by the propagator associated with the potential $V(x)$. For the moment we do not need the other assumptions made on $V(x)$.
We also specify that in the sequel the couple $(p,r)$ is the one given in \autoref{strichartz}.
\begin{lemma}\label{lemma3.1}
Let $u_0\in H^1$ and assume that the corresponding solution to \eqref{NLSV-d}
satisfies $u(t,x)\in\mathcal{C}(\mathbb{R};H^{1})\cap L^{p}L^r$.
Then $u(t,x)$ {\em scatters} to a linear solution in $H^1.$
\end{lemma}
\begin{proof}
It is a standard consequence of Strichartz estimates.
\end{proof}
\begin{lemma}\label{lemma3.2}
There exists $\varepsilon_{0}>0$ such that for any $u_0\in H^{1}$ with $\|u_0\|_{H^1}\leq\varepsilon_{0},$ the solution $u(t,x)$ to the Cauchy problem \eqref{NLSV-d}
{\em scatters} to a linear solution in $H^1$.
\end{lemma}
\begin{proof}
It is a simple consequence of Strichartz estimates.
\end{proof}
\begin{lemma}\label{lemma3}
For every $M>0$ there exist $\varepsilon=\varepsilon(M)>0$ and $C=C(M)>0$ such that: if $u(t,x)\in\mathcal{C}(\mathbb{R};H^1)$ is the unique global solution to \eqref{NLSV-d} and $w\in\mathcal{C}(\mathbb{R};H^1)\cap L^pL^r$ is a global solution to the perturbed problem
\begin{equation*}
\left\{ \begin{aligned}
i\partial_{t}w+\Delta w-Vw&=|w|^{\alpha}w+e(t,x)\\
w(0,x)&=w_0\in H^1
\end{aligned} \right.
\end{equation*}
satisfying the conditions $\|w\|_{L^pL^r}\leq M$, $\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}e(s)\,ds\|_{L^pL^r}\leq \varepsilon$ and $\|e^{it(\Delta-V)}(u_0-w_0)\|_{L^pL^r}\leq\varepsilon$, then $u\in L^pL^r$ and $\|u-w\|_{L^pL^r}\leq C \varepsilon.$
\end{lemma}
\begin{proof}
The proof is contained in \cite{FXC}, see Proposition $4.7,$ and it relies on \eqref{str2.4}.
\end{proof}
\section{Profile decomposition}\label{profile}
The main content of this section is the following profile decomposition theorem. \begin{theorem}\label{profiledec} Let $V(x)\in L^\infty$ satisfy:
$V\geq 0$, \eqref{periods}, \eqref{difflim2} with $a_{-}=0$ and $a_{+}=1,$
the dispersive relation \eqref{disp-d}, and suppose that $|\nabla V(x_1,\bar x)|\rightarrow0$ as $|x_1|\to\infty$ uniformly in $\bar x\in{\mathbb{R}}^{d-1}.$ Given a bounded sequence $\{v_n\}_{n\in\mathbb{N}}\subset H^1,$ $\forall\, J\in\mathbb{N}$ and $\forall\,1\leq j\leq J$ there exist two sequences $\{t_n^j\}_{n\in\mathbb{N}}\subset {\mathbb{R}},\,\{x_n^j\}_{n\in\mathbb{N}}\subset{\mathbb{R}}^d$ and $\psi^j\in H^1$ such that, up to subsequences,
\begin{equation*}
v_n=\sum_{1\leq j\leq J}e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j+R_n^J
\end{equation*}
with the following properties:
\begin{itemize}
\item for any fixed $j$ we have the following dichotomy for the time parameters $t_n^j$:
\begin{align*}
\text{either}\quad t_n^j=0\quad \forall\, n\in\mathbb{N}\quad &\text{or} \quad t_n^j\overset{n\to \infty}\longrightarrow\pm\infty;
\end{align*}
\item for any fixed $j$ we have the following scenarios for the space parameters $x_n^j=(x_{n,1}^j, \bar x_n^j)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$:
\begin{equation*}
\begin{aligned}
\text{either}& \quad x_n^j=0\quad \forall\, n\in\mathbb{N}\\
\text{or} &\quad |x_{n,1}^j|\overset{n\to \infty}\longrightarrow\infty\\
\text{or} \quad x_{n,1}^j=0, \quad \bar x_{n}^j=\sum_{l=2}^dk_{n,l}^j &P_l
\quad \text{with} \quad k_{n,l}^j\in \mathbb Z \quad \text{and} \quad \sum_{l=2}^d |k_{n,l}^j|
\overset{n\to \infty}\longrightarrow\infty,
\\
\quad\qquad\hbox{ where } P_l \hbox{ are given in \eqref{periods}; }
\end{aligned}
\end{equation*}
\item (orthogonality condition) for any $j\neq k$
\begin{equation*}
|x_n^j-x_n^k|+|t_n^j-t_n^k| \overset{n\rightarrow\infty}\longrightarrow\infty;
\end{equation*}
\item (smallness of the remainder) $\forall\,\varepsilon>0\quad\exists\,J=J(\varepsilon)$ such that
\begin{equation*}
\limsup_{n\rightarrow\infty}\|e^{it(\Delta - V)}R_n^J\|_{L^{p}L^{r}}\leq\varepsilon;
\end{equation*}
\item by defining $\|v\|_{\mathcal H}^2=\int (|\nabla v|^2 + V |v|^2) dx$ we have, as $n\to\infty,$
\begin{align*}
\|v_n\|_{L^2}^2=&\sum_{1\leq j\leq J}\|\psi^j\|_{L^2}^2+\|R_n^J\|_{L^2}^2+o(1), \quad \forall\, J\in\mathbb{N},
\\
\|v_n\|^2_{\mathcal H}=&\sum_{1\leq j\leq J}\|\tau_{x_n^j}\psi^j\|^2_{\mathcal H}+\|R_n^J\|^2_{\mathcal H}+o(1), \quad \forall\, J\in\mathbb{N};
\end{align*}
\item $\forall\, J\in\mathbb{N}$\quad and\quad $\forall\,\,2<q<2^*$ we have, as $n\to\infty,$
\begin{equation*}
\|v_n\|_{L^q}^q=\sum_{1\leq j\leq J}\|e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j\|_{L^q}^q+\|R_n^J\|_{L^q}^q+o(1);
\end{equation*}
\item with $E(v)=\frac{1}{2}\int \big(|\nabla v|^2+V|v|^2+\frac{2}{\alpha+2}|v|^{\alpha+2}\big)dx,$ we have, as $n\to\infty,$
\begin{equation}\label{energypd}
E(v_n)=\sum_{1\leq j\leq J}E(e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j)+E(R_n^J)+o(1),
\quad \forall\, J\in\mathbb{N}.
\end{equation}
\end{itemize}
\end{theorem}
First we prove the following lemma.
\begin{lemma}\label{lemmapreli}
Given a bounded sequence $\{v_n\}_{n\in{\mathbb{N}}}\subset H^1({\mathbb{R}}^d)$ we define
\begin{equation*}
\Lambda =\left\{ w\in L^2\quad |\quad \exists \{x_k\}_{k\in{\mathbb{N}}}\quad \text{and} \quad \{n_k\}_{k\in{\mathbb{N}}}\quad\text{s.t. }\quad \tau_{x_k}v_{n_k}\overset{L^2}\rightharpoonup w\right\}
\end{equation*}
and
\begin{equation*}
\lambda=\sup\{\|w\|_{L^2},\quad w\in\Lambda\}.
\end{equation*}
Then for every $q\in(2,2^*)$ there exist a constant $M=M(\sup_n\|v_n\|_{H^1})>0$ and an exponent $e=e(d,q)>0$ such that
\begin{equation*}
\limsup_{n\to\infty}\|v_n\|_{L^q}\leq M\lambda^e.
\end{equation*}
\end{lemma}
\begin{proof}
We consider a Fourier multiplier $\zeta$ defined as
\begin{equation*}
C^{\infty}_c({\mathbb{R}}^d;{\mathbb{R}})\ni\zeta(\xi)=
\begin{cases}
1 & \text{if}\quad |\xi|\leq1\\
0 & \text{if}\quad |\xi|>2
\end{cases}.
\end{equation*}
By setting $\zeta_R(\xi)=\zeta(\xi/R),$ we define the pseudo-differential operator with symbol $\zeta_R,$ classically given by $\zeta_R(|D|)f=\mathcal F^{-1}(\zeta_R\mathcal Ff)(x)$ and similarly we define the operator $\tilde{\zeta}_R(|D|)$ with the associated symbol given by $\tilde{\zeta}_R(\xi)=1-\zeta_R(\xi).$ Here by $\mathcal F,\mathcal F^{-1}$ we mean the Fourier transform operator and its inverse, respectively. For any $q\in(2,2^*)$ there exists $\varepsilon\in(0,1)$ such that $H^\varepsilon\hookrightarrow L^{\frac{2d}{d-2\varepsilon}}=:L^q.$ Then
\begin{equation*}
\begin{aligned}
\|\tilde{\zeta}_R(|D|)v_n\|_{L^q}&\lesssim \|\langle\xi\rangle^\varepsilon\tilde{\zeta}_R(\xi)\hat{v}_n\|_{L^2_\xi}\\
&= \|\langle\xi\rangle^{\varepsilon-1}\langle\xi\rangle\tilde{\zeta}_R(\xi)\hat{v}_n\|_{L^2_\xi}\\
&\lesssim R^{-(1-\varepsilon)}
\end{aligned}
\end{equation*}
where we have used the boundedness of $\{v_n\}_{n\in\mathbb N}$ in $H^1$ at the last step.
For the localized part we consider instead a sequence $\{y_n\}_{n\in\mathbb N}\subset{\mathbb{R}}^d$ such that
\begin{equation*}
\|\zeta_R(|D|)v_n\|_{L^\infty}\leq2|\zeta_R(|D|)v_n(y_n)|
\end{equation*}
and we have that up to subsequences, by using the well-known properties $\mathcal F^{-1}(fg)=\mathcal F^{-1}f\ast\mathcal F^{-1}g$ and $\mathcal F^{-1}\left(f\left(\frac{\cdot}{r}\right)\right)=r^d(\mathcal F^{-1}f)(r\cdot),$
\begin{equation*}
\limsup_{n\to\infty}|\zeta_R(|D|)v_n(y_n)|=R^d\limsup_{n\to\infty}\left|\int\eta(Rx)v_n(x-y_n)\,dx\right|\lesssim R^{d/2}\lambda
\end{equation*}
where we denoted $\eta=\mathcal F^{-1}\zeta$ and we used the Cauchy-Schwarz inequality. Given $\theta\in(0,1)$ such that $\frac1q=\frac{1-\theta}{2},$ by interpolation it follows that
\begin{equation*}
\|\zeta_R(|D|)v_n\|_{L^q}\leq\|\zeta_R(|D|)v_n\|^{\theta}_{L^\infty}\|\zeta_R(|D|)v_n\|^{1-\theta}_{L^2}\lesssim R^{\frac{d\theta}{2}}\lambda^{\theta},
\end{equation*}
and therefore
\begin{equation*}
\limsup_{n\to\infty}\|v_n\|_{L^q}\lesssim \left(R^{\frac{d\theta}{2}}\lambda^{\theta}+R^{-1+\varepsilon}\right)
\end{equation*}
and the proof is complete provided we select as radius $R=\lambda^{-\beta}$ with $0<\beta=\theta\left(1-\varepsilon+\frac{d\theta}{2}\right)^{-1}$ and so $e=\theta(1-\varepsilon)\left(1-\varepsilon+\frac{d\theta}{2}\right)^{-1}.$
\end{proof}
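For the reader's convenience we record the elementary balancing behind this choice of $R$: with $R=\lambda^{-\beta}$ the two contributions become
\begin{equation*}
R^{\frac{d\theta}{2}}\lambda^{\theta}=\lambda^{\theta-\frac{d\theta}{2}\beta},\qquad R^{-(1-\varepsilon)}=\lambda^{(1-\varepsilon)\beta},
\end{equation*}
and they are of the same order precisely when $\theta-\frac{d\theta}{2}\beta=(1-\varepsilon)\beta$, i.e. $\beta=\theta\big(1-\varepsilon+\frac{d\theta}{2}\big)^{-1}$, which gives the exponent $e=(1-\varepsilon)\beta$ stated above.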
Based on the previous lemma we can prove the following result.
\begin{lemma}
Let $\{v_n\}_{n\in {\mathbb{N}}}$ be a bounded sequence in $H^1({\mathbb{R}}^d).$ There exist, up to subsequences, a function $\psi\in H^1$ and two sequences $\{t_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}},$ $\{x_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}}^d$ such that
\begin{equation}\label{ex}
\tau_{-x_n}e^{it_n(\Delta-V)}v_n=\psi+W_n,
\end{equation}
where the following conditions are satisfied:
\begin{equation*}
W_n\overset{H^1}\rightharpoonup0,
\end{equation*}
\begin{equation*}
\limsup_{n\to\infty}\|e^{it(\Delta-V)}v_n\|_{L^{\infty} L^q}\leq C\left(\sup_n\|v_n\|_{H^1}\right)\|\psi\|_{L^2}^e
\end{equation*}
with the exponent $e>0$ given in \autoref{lemmapreli}. Furthermore, as $n\to \infty,$ $v_n$ fulfills the Pythagorean expansions below:
\begin{equation}\label{gasp1}
\|v_n\|_{L^2}^2=\|\psi\|_{L^2}^2+\|W_n\|_{L^2}^2+o(1);
\end{equation}
\begin{equation}\label{gasp2}
\|v_n\|_{\mathcal H}^2=\|\tau_{x_n}\psi\|_{\mathcal H}^2+\|\tau_{x_n}W_n\|_{\mathcal H}^2+o(1);
\end{equation}
\begin{equation}\label{gasp3}
\|v_n\|_{L^q}^q=\|e^{it_n(\Delta-V)}\tau_{x_n}\psi\|_{L^q}^q+\|e^{it_n(\Delta-V)}\tau_{x_n}W_n\|_{L^q}^q+o(1),\qquad q\in(2,2^*).
\end{equation}
Moreover we have the following dichotomy for the time parameters $t_n$:
\begin{align}\label{parav}
\text{either} \quad t_n=0\quad \forall\, n\in\mathbb{N}\quad &\text{or} \quad t_n\overset{n\to\infty}\longrightarrow\pm\infty.
\end{align}
Concerning the space parameters $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$ we have the following scenarios:
\begin{align}\label{para2v}
& \text{either} \quad x_n=0\quad \forall\, n\in\mathbb{N}\\
\nonumber & \text{or} \quad |x_{n,1}|\overset{n\to \infty}\longrightarrow\infty\\
\nonumber \text{or} \quad x_{n,1}=0, \quad \bar x_{n}=\sum_{l=2}^dk_{n,l} P_l
\quad & \text{with} \quad k_{n,l}\in \mathbb Z \quad \text{and} \quad \sum_{l=2}^d |k_{n,l}|
\overset{n\to \infty}\longrightarrow\infty.
\end{align}
\end{lemma}
\begin{proof}
Let us choose a sequence of times $\{t_n\}_{n\in\mathbb N}$ such that
\begin{equation}\label{time}
\|e^{it_n(\Delta-V)}v_n\|_{L^q}>\frac12\|e^{it(\Delta-V)}v_n\|_{L^{\infty} L^q}.
\end{equation}
According to \autoref{lemmapreli}
we can consider a sequence of space translations such that
\begin{equation*}
\tau_{-x_n}(e^{it_n(\Delta-V)}v_n)\overset{H^1}\rightharpoonup \psi,
\end{equation*}
which yields \eqref{ex}. Let us remark that the choice of the time sequence in \eqref{time} is possible since the norms $H^1$ and $\mathcal H$ are equivalent.
Then
\begin{equation*}
\limsup_{n\to\infty}\|e^{it_n(\Delta-V)}v_n\|_{L^q}\lesssim\|\psi\|_{L^2}^e,
\end{equation*}
which in turn implies by \eqref{time} that
\begin{equation*}
\limsup_{n\to\infty}\|e^{it(\Delta-V)}v_n\|_{L^{\infty} L^q}\lesssim\|\psi\|_{L^2}^e,
\end{equation*}
where the exponent is the one given in \autoref{lemmapreli}. By definition of $\psi$ we can write
\begin{equation}\label{dec2}
\tau_{-x_n}e^{it_n(\Delta-V)}v_n=\psi+W_n,\qquad W_n\overset{H^1}\rightharpoonup 0
\end{equation}
and the Hilbert structure of $L^2$ gives \eqref{gasp1}.\\
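For completeness, the last claim amounts to the following one-line computation, which uses that $e^{it(\Delta-V)}$ is unitary on $L^2$ and that translations are $L^2$-isometries:
\begin{equation*}
\|v_n\|_{L^2}^2=\|\tau_{-x_n}e^{it_n(\Delta-V)}v_n\|_{L^2}^2=\|\psi\|_{L^2}^2+\|W_n\|_{L^2}^2+2\Re(\psi,W_n)_{L^2},
\end{equation*}
and the mixed term is $o(1)$ because $W_n\rightharpoonup0$ in $L^2$.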
Next we prove \eqref{gasp2}. We have
\begin{equation*}
v_n=e^{-it_n(\Delta-V)}\tau_{x_n}\psi+e^{-it_n(\Delta-V)}\tau_{x_n}W_n,\qquad W_n\overset{H^1}\rightharpoonup 0
\end{equation*}
and we conclude provided that we show
\begin{equation}\label{gasp5}(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}\overset{n\rightarrow \infty} \longrightarrow 0.
\end{equation}
Since we have
\begin{align*}
&(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}\\
&=(\psi, W_n)_{\dot{H}^1}+\int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx
\end{align*}
and $W_n\overset{H^1}\rightharpoonup 0$,
it is sufficient to show that
\begin{equation}\label{gasp4}
\int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx
\overset{n\rightarrow \infty} \longrightarrow 0.
\end{equation}
If (up to subsequences) $x_n\overset{n\to \infty}\longrightarrow x^*\in{\mathbb{R}}^d$ or $|x_{n,1}|\overset{n\to \infty}\longrightarrow\infty$,
where we have split $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$,
then the sequence $\tau_{-x_n}V (x)=V(x+x_n)$ converges pointwise to the function $\tilde{V}(x)\in L^{\infty}$ defined by
\begin{equation*}
\tilde{V}(x)=
\left\{ \begin{array}{ll}
1\quad &\text{if}\quad x_{n,1}\overset{n\rightarrow \infty}\longrightarrow+\infty\\
V(x+x^*)\quad &\text{if}\quad x_n\overset{n\rightarrow \infty}\longrightarrow x^*\in{\mathbb{R}}^d\\
0\quad &\text{if}\quad x_{n,1}\overset{n\rightarrow \infty}\longrightarrow-\infty
\end{array} \right.
\end{equation*}
and hence
\begin{equation*}
\begin{aligned}
\int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx&=\int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\\
&\quad+\int \tilde{V}(x)\psi(x)\bar{W}_n(x)\,dx.
\end{aligned}
\end{equation*}
The function $\tilde{V}(x)\psi(x)$ belongs to $L^2$ since $\tilde{V}$ is bounded and $\psi\in H^1$, and since $W_n\rightharpoonup0$ in $H^1$ (and then in $L^2$) we have that
\begin{equation*}
\int \tilde{V}(x)\psi(x)\bar{W}_n(x)\,dx\overset{n\rightarrow \infty}\longrightarrow0.
\end{equation*}
Moreover by using the Cauchy-Schwarz inequality
\begin{align*}
\bigg|\int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\bigg|\leq&\sup_{n}\|W_n\|_{L^2}\|[V(\cdot+x_n)-\tilde{V}(\cdot)]\psi(\cdot)\|_{L^2};
\end{align*}
since $\left|[V(\cdot+x_n)-\tilde{V}(\cdot)]\psi(\cdot) \right|^2\lesssim|\psi(\cdot)|^2\in L^1$ we deduce, by the dominated convergence theorem, that also
\begin{equation*}
\int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\overset{n\rightarrow \infty}\longrightarrow0,
\end{equation*}
and we conclude \eqref{gasp4} and hence \eqref{gasp5}.
It remains to prove \eqref{gasp5} in the case when, up to subsequences, $x_{n,1}
\overset{n\rightarrow \infty}
\longrightarrow x_1^*$ and $|\bar x_n|\overset {n\rightarrow \infty}
\longrightarrow \infty$. Up to subsequences we can assume therefore that $\bar x_{n}= \bar x^*+\sum_{l=2}^d k_{n, l}P_l+o(1)$
with $\bar x^*\in {\mathbb{R}}^{d-1}$, $k_{n, l}\in \mathbb Z$ and
$\sum_{l=2}^d |k_{n,l}|\overset{n\rightarrow \infty} \longrightarrow \infty.$ Then by using the
periodicity of the potential $V$ w.r.t. the $(x_2,\dots, x_d)$ variables we get:
\begin{equation*}
\begin{aligned}
&(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}=\\
&(e^{-it_n(\Delta-V)}\tau_{(x_1^*,\bar x_n)}\psi, e^{-it_n(\Delta-V)}\tau_{(x_1^*,\bar x_n)}W_n)_{\mathcal H}+o(1)=\\
&(\tau_{(x_1^*,\bar x^*)}\psi, \tau_{(x_1^*,\bar x^*)}W_n)_{\mathcal H}+o(1)=\\
&(\psi,W_n)_{\dot H^1}+\int V(x+(x_1^*,\bar x^*))\psi(x)\bar{W}_n\,dx=o(1)
\end{aligned}
\end{equation*}
where we have used the fact that $W_n\overset{ H^1} \rightharpoonup0$.
\newline
We now turn our attention to the orthogonality of the non quadratic term of the energy,
namely \eqref{gasp3}. The proof is almost the same as the one carried out in \cite{BV}, with some modifications. \\
\noindent \emph{Case 1.} Suppose $|t_n|\overset{n\to \infty}\longrightarrow\infty.$ By \eqref{disp2} we have $\|e^{it(\Delta-V)}\|_{L^1\rightarrow L^{\infty}}\lesssim|t|^{-d/2}$ for any $t\neq0.$ We recall that for the evolution operator $e^{it(\Delta-V)}$ the $L^2$ norm is conserved, so the estimate $\|e^{it(\Delta-V)}\|_{L^{p\prime}\rightarrow L^{p}}\lesssim|t|^{-d\left(\frac{1}{2}-\frac{1}{p}\right)}$ follows from the Riesz-Thorin interpolation theorem; thus we have the conclusion
provided that $\psi\in L^1\cap L^2$. If
this is not the case we can conclude by a straightforward approximation argument. This implies that if $|t_n|\to\infty$ as $n\to\infty$ then for any $p\in(2,2^*)$ and for any $\psi\in H^1$
\begin{equation*}
\|e^{it_n(\Delta-V)}\tau_{x_n}\psi\|_{L^p}\overset{n\rightarrow \infty}\longrightarrow 0.
\end{equation*}
Thus we conclude by \eqref{dec2}. \\
\noindent\emph{Case 2.}
Assume now that $t_n\overset{n\to \infty}\longrightarrow t^*\in{\mathbb{R}}$ and $x_n
\overset{n\to \infty}\longrightarrow x^*\in{\mathbb{R}}^d.$ In this case the proof relies on a combination of the Rellich-Kondrachov theorem
and the Brezis-Lieb Lemma contained in \cite{BL}, provided that
\begin{equation}\notag
\|e^{it_n(\Delta-V)}(\tau_{x_n}\psi)-e^{it^*(\Delta-V)}(\tau_{x^*}\psi)\|_{H^1}\overset{n\rightarrow\infty}\longrightarrow0,\qquad\forall\,\psi\in H^1.
\end{equation}
But this is a straightforward consequence of the continuity of the linear propagator (see \cite{BV} for more details).
\newline
\noindent \emph{Case 3.} It remains to consider $t_n\overset{n\to \infty}\longrightarrow t^*\in{\mathbb{R}}$ and $|x_n|\overset{n\to \infty}\longrightarrow\infty.$
Also here we can proceed as in \cite{BV} provided that for any $\psi\in H^1$ there exists $\psi^*\in H^1$ such that
\begin{equation}\notag
\|\tau_{-x_n}(e^{it_n(\Delta-V)}(\tau_{x_n}\psi))-\psi^*\|_{H^1}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation}
Since translations are isometries in $H^1,$ it suffices to show that for some $\psi^*\in H^1$
\begin{equation*}
\|e^{it_n(\Delta-V)}\tau_{x_{n}}\psi-\tau_{x_n}\psi^*\|_{H^1}
\overset{n\rightarrow \infty}\longrightarrow0.
\end{equation*}
We decompose $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$
and
we consider the two scenarios: $|x_{n,1}|\overset{n\rightarrow \infty}\longrightarrow \infty$ and $
\sup_n |x_{n,1}|<\infty$.
\newline
If $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty,$ by continuity in $H^1$ of the flow, it is enough to prove that
\begin{equation*}
\| e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi\|_{H^{1}}
\overset{n\rightarrow \infty}\longrightarrow 0.
\end{equation*}
We observe that
\begin{equation*}
e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi=-i\int_{0}^{t^*}e^{i(t^*-s)(\Delta-V)}\big(Ve^{is \Delta}\tau_{x_{n}}\psi\big)\,ds
\end{equation*}
and hence,
\begin{equation*}
\| e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi\|_{H^1}\leq \int_0^{t^*}
\|(\tau_{-x_n}V)e^{is\Delta}\psi\|_{H^1} ds.
\end{equation*}
We will show that
\begin{equation}\label{s}\int_0^{t^*}
\|(\tau_{-x_n}V)e^{is\Delta}\psi\|_{H^1} ds\overset{n\rightarrow \infty}\longrightarrow 0.
\end{equation}
Since we are assuming $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty,$ for fixed $x\in\mathbb{R}^d$ we get $V(x+x_n)\overset{n\rightarrow \infty}\longrightarrow 0,$ namely $(\tau_{-x_n}V)(x)\overset{n\rightarrow \infty}\longrightarrow 0$ pointwise; since $V\in L^{\infty},$ $|\tau_{-x_n}V|^2|e^{it \Delta}\psi|^2\leq\|V\|^2_{L^{\infty}}|e^{it\Delta}\psi|^2$ and $|e^{it \Delta}\psi|^2\in L^1,$ the dominated convergence theorem yields
\begin{equation*}
\|(\tau_{-x_n}V)e^{it \Delta}\psi\|_{L^2}\overset{n\rightarrow \infty}\longrightarrow 0.
\end{equation*}
Analogously, since $|x_{n,1}|
\overset{n\rightarrow \infty}\longrightarrow\infty$
implies $| \nabla \tau_{-x_n} V(x)|
\overset{n\rightarrow \infty}\longrightarrow 0,$
we obtain
\begin{equation*}
\|\nabla(\tau_{-x_n}Ve ^{it\Delta} \psi)\|_{L^2}\leq\|
(e^{it \Delta} \psi) \nabla
\tau_{-x_n} V \|_{L^2}+\|
(\tau_{-x_n}V) \nabla(e^{it \Delta}\psi)\|_{L^2}\overset{n\rightarrow \infty}\longrightarrow
0.
\end{equation*}
We conclude
\eqref{s} by using
the dominated convergence theorem w.r.t. the measure
$ds$.
For the case $x_{n,1}\overset{n\rightarrow \infty}\longrightarrow \infty$ we proceed similarly.
If $\sup_{n\in {\mathbb{N}}} |x_{n,1}|<\infty,$ then
up to a subsequence
$x_{n,1} \overset{n\rightarrow \infty} \longrightarrow x_1^*\in {\mathbb{R}}$.
The thesis follows by choosing
$\psi^*=e^{it^*(\Delta-V)}\tau_{(x_1^*,\bar x^*)}\psi,$ with $\bar x^*
\in {\mathbb{R}}^{d-1}$ defined as follows
(see above the proof of \eqref{gasp2}):
$\bar x_{n}= \bar x^*+\sum_{l=2}^d k_{n, l}P_l+o(1)$ with
$k_{n, l}\in \mathbb Z$ and
$\sum_{l=2}^d |k_{n,l}|\overset{n\rightarrow \infty} \longrightarrow \infty.$
\newline
Finally, it is straightforward from \cite{BV} that the conditions on the parameters \eqref{parav} and \eqref{para2v} hold.
\end{proof}
\begin{proof}[Proof of \autoref{profiledec}] The proof of the profile decomposition theorem can be carried out as in \cite{BV} by iterating the previous lemma.
\end{proof}
\section{Nonlinear profiles}\label{nonlinearpro}
The results of this section will be crucial in the construction of the minimal element.
We recall that the couple $(p,r)$ is the one given in \autoref{strichartz}.
Moreover for every sequence $\{x_n\}_{n\in\mathbb{N}}\subset {\mathbb{R}}^d$ we use the notation
$x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$.
\begin{lemma}\label{lem5.1}
Let $\psi\in H^1$ and $\{x_n\}_{n\in \mathbb{N}}\subset\mathbb{R}^d$ be such that
$|x_{n,1}| \overset{n\rightarrow \infty}\longrightarrow \infty$. Up to subsequences we have the following estimates:
\begin{equation}\label{eq5.1}
x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty
\implies \|e^{it\Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0,
\end{equation}
\begin{equation}\label{eq5.2}
x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty \implies \|e^{it(\Delta-1)}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0,
\end{equation}
where $\psi_n:=\tau_{x_n}\psi.$
\end{lemma}
\begin{proof}
Assume
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty$
(the case $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty$
can be treated similarly).
We first prove that
\begin{equation}\label{eq5.3}
\sup_{n\in\mathbb{N}}\| e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(T,\infty)}L^{r}}\overset{T\rightarrow\infty}\longrightarrow0.
\end{equation}
Let $\varepsilon>0$. By density there exists $\tilde{\psi}\in C^{\infty}_c$ such that $\|\tilde{\psi}-\psi\|_{H^{1}}\leq\varepsilon,$ then by the estimate \eqref{fxc3}
\begin{equation*}
\|e^{it(\Delta-V)}(\tilde{\psi}_{n}-\psi_{n})\|_{L^{p}L^{r}}\lesssim\|\tilde{\psi}_{n}-\psi_{n}\|_{H^{1}}=\|\tilde{\psi}-\psi\|_{H^{1}}\lesssim\varepsilon.
\end{equation*}
Since $\tilde{\psi}\in L^{r'}$, by interpolation between
the dispersive estimate \eqref{disp2}
and the conservation of the mass along the linear flow,
we have
\begin{equation*}
\| e^{it(\Delta-V)}\tilde{\psi}_{n}\|_{L^{r}}\lesssim|t|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|\tilde{\psi}\|_{L^{r'}},
\end{equation*}
and since $f(t)=|t|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\in L^p(|t|>1),$ there exists $T>0$ such that
\begin{equation*}
\sup_n\| e^{it(\Delta-V)}\tilde{\psi}_{n}\|_{L^{p}_{|t|\geq T}L^{r}}\leq\varepsilon,
\end{equation*}
hence we get \eqref{eq5.3}.
In order to obtain \eqref{eq5.1}, we are reduced to showing that for a fixed $T>0$
\begin{equation*}
\| e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(0,T)}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation*}
Since $w_n=e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}$ is the solution of the following linear Schr\"odinger equation
\begin{equation*}
\left\{\begin{aligned}
i\partial_{t}w_n+ \Delta w_n-Vw_n&=-Ve^{it\Delta}\psi_{n}\\
w_n(0)&=0
\end{aligned}\right.,
\end{equation*}
by combining \eqref{fxc3} with the Duhamel formula we get
\begin{align*}
\|e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(0,T)}L^{r}}&\lesssim
\|(\tau_{-x_n}V)e^{it \Delta}\psi\|_{L^1_{(0,T)}H^1}.
\end{align*}
The thesis follows from the dominated convergence theorem.
\end{proof}
\begin{lemma}\label{lem5.2}
Let $\{x_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty,$
(resp. $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty$)
and $v\in \mathcal{C}(\mathbb{R};H^{1})$
be the unique solution to \eqref{NLS-d} (resp. \eqref{NLS1-d}). Define $v_{n}(t,x):=v(t,x-x_{n})$. Then, up to a subsequence, the following hold:
\begin{equation}\label{eq5.11}
\bigg\|\int_{0}^{t}[e^{i(t-s)\Delta}\left (|v_{n}|^{\alpha}v_{n}
\right )-e^{i(t-s)(\Delta-V)}
\left (|v_{n}|^{\alpha}v_{n}\right )]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0;
\end{equation}
\begin{equation}\label{eq5.12}
\left( \hbox{resp.\,} \bigg\|\int_{0}^{t}[e^{i(t-s)(\Delta-1)}\left (|v_{n}|^{\alpha}v_{n}\right )
-e^{i(t-s)(\Delta-V)}
\left (|v_{n}|^{\alpha}v_{n}\right )]ds\bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\right).
\end{equation}
\end{lemma}
\begin{proof} Assume $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty$
(the case $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty$
can be treated similarly).
Our proof starts with the observation that
\begin{equation}\label{fiseca}
\lim_{T\rightarrow \infty}
\bigg(\sup_n \bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)} \left (|v_{n}|^{\alpha}v_{n}\right )ds\bigg\|_{L^{p}_{(T,\infty)}L^{r}}\bigg)=0.
\end{equation}
By the Minkowski inequality and the interpolation of the dispersive estimate \eqref{disp2}
with the conservation of the mass, we have
\begin{align}\notag
\bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds\bigg\|_{L^r_x}&\lesssim\int_{0}^{t}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\||v_n|^{\alpha}v_n\|_{L^{r'}_x}ds\\\notag
&\lesssim\int_{\mathbb{R}}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\||v|^{\alpha}v\|_{L^{r'}_x} ds=
|t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g
\end{align}
with $g(s)=\||v|^{\alpha}v(s)\|_{L^{r'}_x}.$ We conclude \eqref{fiseca} provided that we show
$|t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g(t)\in L^p_t$.
By using the Hardy-Littlewood-Sobolev inequality (see for instance Stein's book \cite{ST}, p. 119) we get
\begin{equation*}
\big\||t|^{-1+\frac{(2-d){\alpha}+4}{2({\alpha}+2)}}\ast g(t)
\big\|_{L^p_t} \lesssim\||v|^{\alpha}v\|_{L^{\frac{2\alpha(\alpha+2)}{\left((2-d){\alpha}+4\right)({\alpha}+1)}}L^{r'}}=\|v\|^{{\alpha}+1}_{L^pL^r}.
\end{equation*}
Since $v$ scatters, it belongs to $L^pL^r,$ and so we can deduce the validity of \eqref{fiseca}.
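Let us also record, for the reader's convenience, that the kernel exponent in the Hardy-Littlewood-Sobolev bound above is exactly the one produced by the dispersive estimate: since $r=\alpha+2$,
\begin{equation*}
d\Big(\frac12-\frac1r\Big)=\frac{d\alpha}{2(\alpha+2)}=1-\frac{(2-d)\alpha+4}{2(\alpha+2)}.
\end{equation*}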
\newline
Consider now $T$ fixed: we are reduced to showing that
\begin{equation*}
\bigg\|\int_{0}^{t}[e^{i(t-s)\Delta}\left(|v_{n}|^{\alpha}v_{n}\right)-e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)]ds\bigg\|_{L^{p}_{(0,T)} L ^{r}}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation*}
As usual we observe that
\begin{equation*}
w_n(t,x)=\int_{0}^{t}e^{i(t-s) \Delta}\left(|v_{n}|^{\alpha}v_{n}\right) ds - \int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right) ds
\end{equation*}
is the solution of the following linear Schr\"odinger equation
\begin{equation*}
\left\{\begin{aligned}
i\partial_{t}w_n+ \Delta w_n -V w_n&=-V\int_{0}^{t}e^{i(t-s) \Delta}\left(|v_{n}|^{\alpha}v_{n}\right)ds\\
w_n(0)&=0
\end{aligned}\right.,
\end{equation*}
and similarly to \autoref{lem5.1} we estimate
\begin{align*}\notag
\bigg\| \int_{0}^{t}e^{i(t-s) \Delta}&\left(|v_{n}|^{\alpha}v_{n}\right)ds-\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds \bigg\|_{L^{p}_{(0,T)}L^{r}}\\
&\lesssim \|(\tau_{-x_n}V)|v|^{\alpha}v\|_{L^1_{(0,T)}H^1}.
\end{align*}
By using the dominated convergence theorem we conclude the proof.
\end{proof}
The previous results imply the following useful corollaries.
\begin{corollary}\label{cor5.3}
Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty,$
and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be the unique solution to \eqref{NLS-d} with initial datum $v_0\in H^1$. Then
\begin{equation*}
v_n(t,x)=e^{it(\Delta-V)}v_{0,n} -i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds+e_{n}(t,x)
\end{equation*}
where $v_{0,n}(x):=\tau_{x_n}v_0(x),$ $v_{n}(t,x):=v(t,x-x_n)$ and $\|e_n\|_{L^pL^r}
\overset{n\rightarrow\infty}\longrightarrow 0$.
\end{corollary}
\begin{proof}
It is a consequence of \eqref{eq5.1} and \eqref{eq5.11}.
\end{proof}
\begin{corollary}\label{cor5.4}
Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty,$
and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be the
unique solution to \eqref{NLS1-d} with initial datum $v_0\in H^1$.
Then
\begin{equation*}
v_n(t,x)=e^{it(\Delta-V)} v_{0,n}- i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds+e_{n}(t,x)
\end{equation*}
where $v_{0,n}(x):=\tau_{x_n}v_0(x),$ $v_{n}(t,x):=v (t,x-x_n)$ and $\|e_n\|_{L^pL^r}
\overset{n\rightarrow\infty}\longrightarrow 0$.
\end{corollary}
\begin{proof}
It is a consequence of \eqref{eq5.2} and \eqref{eq5.12}.
\end{proof}
\begin{lemma}\label{lem5.5}
Let $v(t,x)\in \mathcal C(\mathbb{R}; H^1)$ be a solution to \eqref{NLS-d}
(resp. \eqref{NLS1-d})
and let $\psi_\pm \in H^{1}$ (resp. $\varphi_\pm \in H^{1}$) be such that
\begin{equation*}
\begin{aligned}
&\|v(t,x)-e^{it \Delta}\psi_\pm \|_{H^1}\overset{t\rightarrow\pm\infty}
{\longrightarrow}0
\\
\bigg(\hbox{ resp. }
&\|v(t,x)-e^{it(\Delta-1)}\varphi_\pm\|_{H^1}\overset{t\rightarrow\pm\infty}
{\longrightarrow}0\bigg).
\end{aligned}
\end{equation*}
Let $\{x_{n}\}_{n\in\mathbb{N}}\subset{\mathbb{R}}^d, \,\{t_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}$ be two sequences such that $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow -\infty$
(resp. $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow +\infty$) and $|t_{n}|\overset{n\rightarrow\infty}\longrightarrow \infty.$ Let us define moreover $v_{n}(t,x):=v(t-t_{n},x-x_{n})$ and $\psi_n^\pm (x):=\tau_{x_n}\psi_\pm(x)$
(resp. $\varphi_n^\pm (x)=\tau_{x_n}\varphi_\pm(x)$).
Then, up to subsequences, we get
\begin{align}\notag
t_n\rightarrow \pm \infty \Longrightarrow
\| e^{i(t-t_{n})\Delta}\psi_{n}^\pm -e^{i(t-t_{n})(\Delta-V)}\psi_{n}^\pm\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\quad\emph{and}
\\\label{eq517}
\bigg\|\int_{0}^{t}[e^{i(t-s) \Delta}\big(|v_{n}|^{\alpha}
v_{n}\big)- e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0
\end{align}
\begin{align*}
\bigg(\hbox{ resp. }
t_n\rightarrow \pm \infty \Longrightarrow
\| e^{i(t-t_{n})(\Delta-1)}\varphi_{n}^\pm -e^{i(t-t_{n})(\Delta-V)}\varphi_{n}^\pm\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\quad\emph{and}
\\\nonumber
\bigg\|\int_{0}^{t}[e^{i(t-s)(\Delta-1)}\big(|v_{n}|^{\alpha}
v_{n}\big)- e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha} v_{n}\big)]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0 \bigg).
\end{align*}
\end{lemma}
\begin{proof} It is a suitable multidimensional version of
Proposition 3.6 in \cite{BV}.
Nevertheless, since in \cite{BV} the details of the proof are not given, we give below the proof
of the most delicate estimate, namely the second estimate
in \eqref{eq517}. After a change of variable in time, proving \eqref{eq517} is clearly equivalent to proving
\begin{equation*}
\bigg\|\int_{-t_n}^{t}e^{i(t-s) \Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-t_n}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation*}
\newline
We can focus on the case $t_n\rightarrow \infty$ and $x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow+ \infty,$ the other cases being similar.
\\
The idea of the proof is to split the estimate above into three different regions, i.e. $(-\infty,-T)\times\mathbb{R}^d, (-T,T)\times\mathbb{R}^d, (T,\infty)\times\mathbb{R}^d$ for some fixed $T$ which will be chosen in an appropriate way below. The strategy is to use the translation in the space variable to gain smallness in the strip $(-T,T)\times\mathbb{R}^d$, while we use the smallness of the Strichartz estimates on the half-spaces $(-T,T)^c\times\mathbb{R}^d$. Actually in $(T,\infty)$ the situation is more delicate and we will also use the dispersive relation. \\
Let us define $g(t)=\|v(t)\|^{{\alpha}+1}_{L^{({\alpha}+1)r'}}$ and for fixed $\varepsilon>0$ let us consider $T=T(\varepsilon)>0$ such that:
\begin{equation}\label{smallness}
\begin{aligned}
\left\{ \begin{array}{ll}
\||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}<\varepsilon\\
\||v|^{\alpha}v\|_{L^{q'}_{(T,+\infty)}L^{r'}}<\varepsilon\\
\||v|^{\alpha}v\|_{L^1_{(-\infty,-T)}H^1
}<\varepsilon\\
\left\||t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g(t)\right \|_{L^p_{(T,+\infty)}}<\varepsilon
\end{array} \right..
\end{aligned}
\end{equation}
The existence of such a $T$ is guaranteed by the integrability properties of $v$ and its decay at infinity (in time). We can assume without loss of generality that $|t_n|>T.$\\
We split the term to be estimated as follows:
\begin{equation*}
\begin{split}
\int_{-t_n}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-t_n}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\\
=e^{it\Delta}\int_{-t_n}^{-T}e^{-is\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- e^{it(\Delta-V)}\int_{-t_n}^{-T}e^{-is(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\\
+\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds.
\end{split}
\end{equation*}
By Strichartz estimate \eqref{fxc3} and the third one of \eqref{smallness}, we have, uniformly in $n,$
\begin{equation*}
\begin{aligned}
\bigg\|e^{it\Delta}\int_{-t_n}^{-T}e^{-is\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^pL^r}&\lesssim\varepsilon,\\
\bigg\|e^{it(\Delta-V)}\int_{-t_n}^{-T}e^{-is(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}&\lesssim\varepsilon.
\end{aligned}
\end{equation*}
Thus, it remains to prove
\begin{equation*}
\bigg\|\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0
\end{equation*}
and we split it by estimating it in the regions mentioned above.
By using \eqref{str2.4} and the first one of \eqref{smallness} we get uniformly in $n$ the following estimates:
\begin{equation*}
\begin{aligned}
\bigg\|\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^{p}_{(-\infty,-T)}L^{r}}&\lesssim\||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}\lesssim\varepsilon,\\
\bigg\|\int_{-T}^{t}e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^{p}_{(-\infty,-T)}L^{r}}&\lesssim\||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}\lesssim\varepsilon.
\end{aligned}
\end{equation*}
The difference $w_n=\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds$ satisfies the following Cauchy problem:
\begin{equation*}
\left\{ \begin{aligned}
i\partial_{t}w_n+(\Delta-V)w_n&=-V\int_{-T}^te^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)\,ds\\
w_n(-T)&=0
\end{aligned}
\right..
\end{equation*}
Then $w_n$ satisfies the integral equation
\begin{equation*}
\begin{aligned}
w_n(t)=\int_{-T}^te^{i(t-s)(\Delta-V)}\bigg(-V\int_{-T}^se^{i(s-\sigma)\Delta}\tau_{x_n}(|v|^{\alpha} v)(\sigma)\,d\sigma\bigg)\,ds
\end{aligned}
\end{equation*}
which we estimate in the region $(-T,T)\times\mathbb{R}^d.$ By the Sobolev embedding $H^1\hookrightarrow L^r$ and the H\"older and Minkowski inequalities we therefore have
\begin{equation*}
\begin{aligned}
\bigg\|\int_{-T}^te^{i(t-s)(\Delta-V)}\bigg(-V\int_{-T}^se^{i(s-\sigma)\Delta}\tau_{x_n}(|v|^{\alpha} v)(\sigma)\,d\sigma\bigg)\,ds\bigg\|_{L^p_{(-T,T)}L^r}\lesssim\\
\lesssim T^{1/p}\int_{-T}^T\bigg\|(\tau_{-x_n}V)\int_{-T}^se^{i(s-\sigma)\Delta}|v|^{\alpha} v(\sigma)\,d\sigma\bigg\|_{H^1}\,ds\lesssim\varepsilon
\end{aligned}
\end{equation*}
by means of Lebesgue's dominated convergence theorem.
\\
What is left is to estimate in the region $(T,\infty)\times{\mathbb{R}}^d$ the terms
$$\int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\hbox{\qquad and \qquad}\int_{-T}^te^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)\,ds.$$
We consider only one of the two terms, the argument for the other being the same. Let us split the estimate as follows:
\begin{equation*}
\begin{aligned}
\bigg\|\int_{-T}^t&e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,\infty)}L^r}\leq\\
&\leq\bigg\|\int_{-T}^{T}e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,\infty)}L^r}\\
&\quad+\bigg\|\int_{T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,\infty)}L^r}.
\end{aligned}
\end{equation*}
The second term is controlled by Strichartz estimates, and it is $\lesssim\varepsilon$ since we are integrating in the region where $\||v|^{\alpha} v\|_{L^{q'}((T,\infty);L^{r'})}<\varepsilon$ (by using the second of \eqref{smallness}), while the first term is estimated by using the dispersive relation. More precisely
\begin{equation*}
\begin{aligned}
\bigg\|\int_{-T}^{T}e^{i(t-s)(\Delta-V)}&\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,\infty)}L^r}\lesssim\\
&\lesssim\bigg\|\int_{-T}^{T}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|v\|^{\alpha+1}_{L^{(\alpha+1)r'}}ds \bigg\|_{L^p_{(T,\infty)}}\\
&\lesssim\bigg\|\int_{{\mathbb{R}}}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|v\|^{\alpha+1}_{L^{(\alpha+1)r'}}
ds \bigg\|_{L^p_{(T,\infty)}}\lesssim\varepsilon
\end{aligned}
\end{equation*}
where in the last step we used the Hardy-Littlewood-Sobolev inequality and the fourth of \eqref{smallness}.
\end{proof}
As consequences of the previous lemma we obtain the following corollaries.
\begin{corollary}\label{cor5.6}
Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow-\infty$
and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be a solution to
\eqref{NLS-d} with initial datum $\psi\in H^1$. Then for a sequence $\{t_n\}_{n\in\mathbb{N}}$ such that $|t_n|\overset{n\rightarrow\infty}\longrightarrow\infty$
\begin{equation*}
v_n(t,x)=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)ds+e_n(t,x)
\end{equation*}
where $\psi_n:=e^{-it_n(\Delta -V)}\tau_{x_n}\psi,$ $v_n:=v(t-t_n,x-x_n)$ and $\|e_n\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0$. \end{corollary}
\begin{corollary}\label{cor5.7}
Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that
$x_{n,1}
\overset{n\rightarrow \infty}\longrightarrow+ \infty$
and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be a solution to
\eqref{NLS1-d} with initial datum $\psi\in H^1$. Then for a sequence $\{t_n\}_{n\in\mathbb{N}}$ such that $|t_n|\overset{n\rightarrow\infty}\longrightarrow \infty$
\begin{equation*}
v_n(t,x)=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)ds+e_n(t,x)
\end{equation*}
where $\psi_n:=e^{-it_n(\Delta -V)}\tau_{x_n}\psi,$ $v_n:=v(t-t_n,x-x_n)$ and $\|e_n\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0$.
\end{corollary}
We shall also need the following results, for whose proof we refer to \cite{BV}.
\begin{prop}\label{prop5.8}
Let $\psi\in H^1.$ There exists $\hat{U}_{\pm}\in\mathcal{C}(\mathbb{R}_{\pm};H^1)\cap L^{p}_{\mathbb{R}_{\pm}}L^r$ solution to \eqref{NLSV-d} such that
\begin{equation*}
\|\hat{U}_{\pm}(t,\cdot)-e^{-it(\Delta-V)}\psi\|_{H^1}\overset{t\rightarrow\pm\infty}\longrightarrow0.
\end{equation*}
Moreover, if $t_n\rightarrow\mp\infty$, then
\begin{equation*}
\hat{U}_{\pm,n}=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|\hat{U}_{\pm,n}|^{\alpha}\hat{U}_{\pm,n}\big)ds+h_{\pm,n}(t,x)
\end{equation*}
where $\psi_n:=e^{-it_n(\Delta-V)}\psi,$ $\hat{U}_{\pm,n}(t,\cdot):=\hat{U}_{\pm}(t-t_n,\cdot)$ and $\|h_{\pm,n}(t,x)\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0.$
\end{prop}
\section{Existence and extinction of the critical element}\label{critical}
In view of the results stated in \autoref{perturbative}, we define
the following quantity belonging to $(0, \infty]$:
\begin{align*}
E_{c}=\sup\bigg\{ &E>0 \textit{ such that if } \varphi\in H^1\, \textit{with } E(\varphi)<E\\\notag
&\textit{then the solution of \eqref{NLSV-d} with initial datum } \varphi \textit{ is in } L^{p}L^{r}\bigg\}.
\end{align*}
Our aim is to show that $E_c=\infty$, and hence we get the large data scattering.
\subsection{Existence of the Minimal Element}
\begin{prop}\label{lemcri}
Suppose $E_{c}<\infty.$ Then there exists $\varphi_{c}\in H^1$,
$\varphi_{c}\not\equiv0$, such that the corresponding global solution $u_{c}(t,x)$
to \eqref{NLSV-d} does not scatter. Moreover, there exists $\bar x(t)\in{\mathbb{R}}^{d-1}$ such that $\left\{ u_{c}(t, x_1,\bar x-\bar x(t))\right\}_{t\in{\mathbb{R}}^+} $
is a relatively compact subset in $H^{1}$.
\end{prop}
\begin{proof}
If $E_{c}<\infty$, there exists a sequence $\varphi_{n}$ of elements of $H^{1}$ such that
\begin{equation*}
E(\varphi_{n})\overset{n\rightarrow\infty}{\longrightarrow}E_{c},
\end{equation*}
and, denoting by $u_{n}\in \mathcal{C}(\mathbb{R};H^{1})$ the corresponding solution to \eqref{NLSV} with initial datum $\varphi_n$,
\begin{equation*}
u_{n}\notin L^{p}L^{r}.
\end{equation*}
We apply the profile decomposition to $\varphi_{n}:$
\begin{equation}\label{profdec}
\varphi_{n}=\sum_{j=1}^{J}e^{-it_{n}^{j}(-\Delta +V)}\tau_{x_{n}^{j}}\psi^{j}+R_{n}^{J}.
\end{equation}
\begin{claim}\label{claim}
There exists only one non-trivial profile, that is $J=1$.
\end{claim}
Assume $J>1$. For $j\in\{1,\dots, J\}$ we associate to each profile $\psi^{j}$ a nonlinear profile $U_n^j$. We can have one of the following situations, where without loss of generality the cases have been reordered in this way:
\begin{enumerate}
\item $(t_{n}^{j},x_{n}^{j})=(0,0)\in {\mathbb{R}}\times {\mathbb{R}}^d$,
\item $t_{n}^{j}=0$ and $x_{n,1}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$
\item $t_{n}^{j}=0,$ and $x_{n,1}^{j}\overset{n \rightarrow \infty}\longrightarrow +\infty,$
\item $t_{n}^{j}=0,$ $x_{n,1}^{j}=0$ and $|\bar x_{n}^{j}|\overset{n \rightarrow \infty}\longrightarrow\infty,$
\item $x_{n}^{j}=\vec 0$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty$,
\item $x_{n}^{j}=\vec 0$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty$,
\item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow-\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$
\item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow-\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty,$
\item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow+\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$
\item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow+\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty,$
\item $x_{n,1}^{j}=0,$ $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty$ and $|\bar x^j_n|\overset{n \rightarrow \infty}\longrightarrow\infty,$
\item $x_{n,1}^{j}=0,$ $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty$ and $|\bar x^j_n|\overset{n \rightarrow \infty}\longrightarrow\infty.$
\end{enumerate}
Notice that, unlike in \cite{BV}, we have twelve cases to consider and not six (this is because we have to take into account a different behavior of $V(x)$ as $|x|\rightarrow \infty$).
Since the argument to deal with the cases above is similar
to the one in \cite{BV} we skip the details. The main point is that,
for instance, in dealing with the cases
$(2)$ and $(3)$ above we have to use respectively
\autoref{cor5.3} and \autoref{cor5.4}.
When instead $|\bar x_n^j|
\overset{n\to \infty}\longrightarrow \infty$ and $x_{1,n}^j=0$ we use the fact that these sequences can be assumed, according to the profile decomposition
\autoref{profiledec},
to have components which are integer multiples of the periods, so the translations and the nonlinear equation commute; if moreover $|t_n|\overset{n\to \infty}\longrightarrow\infty$ we also use \autoref{prop5.8}. We skip the details.
Once it is proved that $J=1$ and
\begin{equation*}
\varphi_n=e^{it_n(\Delta-V)}\psi+R_n
\end{equation*}
with $\psi\in H^1$ and $\underset{n\rightarrow\infty}\limsup\|e^{it(\Delta-V)}R_n\|_{L^pL^r}=0,$ the existence of the critical element follows by \cite{FXC}, ensuring that, up to a subsequence, $\varphi_n$ converges to $\psi$ in $H^1$ and so $\varphi_c=\psi.$ We denote by $u_c$ the solution to \eqref{NLSV-d} with Cauchy datum $\varphi_c,$ and we call it the critical element. This is the minimal (with respect to the energy) non-scattering solution to \eqref{NLSV-d}. We can therefore assume with no loss of generality that $\|u_c\|_{L^{p}((0,+\infty);L^r)}=\infty.$ The precompactness of the trajectory up to translation by a path $\bar x(t)$ follows again by \cite{FXC}.
\end{proof}
\subsection{Extinction of the Minimal Element}
Next we show that the unique solution that satisfies the compactness properties of the minimal element $u_c(t,x)$ (see \autoref{lemcri}) is the trivial solution. Hence we get a contradiction
and we deduce that necessarily $E_c=\infty$.
The tool that we shall use is the following Nakanishi-Morawetz type estimate.
\begin{lemma}
Let $u(t,x)$ be the solution to \eqref{NLSV-d}, where $V(x)$ satisfies
$x_1 \cdot \partial_{x_1}V (x)\leq 0$ for any $x\in{\mathbb{R}}^d,$ then
\begin{equation}\label{pote}
\int_{\mathbb{R}}\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{t^2|u|^{\alpha+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt
<\infty.
\end{equation}
\end{lemma}
\begin{proof}
The proof follows the ideas of \cite{N}; we shall recall it briefly, with the obvious modifications required by our context.
Let us introduce
\begin{equation*}
m(u)=a\partial_{x_1}u+gu
\end{equation*}
with
\begin{equation*}
\begin{aligned}
a=-\frac{2x_1}{\lambda},\quad g=-\frac{t^2}{\lambda^3}-\frac{it}{\lambda}, \quad
\lambda=(t^2+x_1^2)^{1/2}
\end{aligned}
\end{equation*}
and by using the equation solved by $u(t,x)$ we get
\begin{equation}\label{identity}
\begin{aligned}
0&=\Re\{(i\partial_t u+\Delta u-Vu-|u|^{\alpha} u)\bar{m}\}
\\
&=\frac{1}{2}\partial_t\bigg(-\frac{2x_1}{\lambda}\Im\{\bar{u}\partial_{x_1}u\}-\frac{t|u|^2}{\lambda}\bigg)\\
& \quad +\partial_{x_1}\Re\{\partial_{x_1}u\bar{m}-al_V(u)-\partial_{x_1}g\frac{|u|^2}{2}\}\\
& \quad +\frac{t^2G(u)}{\lambda^3}+\frac{|u|^2}{2}\Re\{\partial_{x_1}^2g\}\\
& \quad +\frac{|2it\partial_{x_1}u+{x_1}u|^2}{2\lambda^3}-{x_1}\partial_{x_1}V\frac{|u|^2}{\lambda}\\
& \quad +\mathrm{div}_{\bar x}\Re\{\bar m\nabla_{\bar x}u\}.
\end{aligned}
\end{equation}
Here $G(u)=\frac{\alpha}{\alpha+2}|u|^{\alpha+2},$ $l_V(u)=\frac{1}{2}\left(-\Re\{i\bar{u}\partial_tu\}+|\partial_{x_1}u|^2+\frac{2|u|^{\alpha+2}}{\alpha+2}+V|u|^2\right)$ and $\mathrm{div}_{\bar x}$ is the divergence operator w.r.t. the $(x_2,\dots,x_d)$ variables.
Making use of the repulsivity assumption in the $x_1$ direction, we get \eqref{pote} by integrating \eqref{identity} on $\{1<|t|<T\}\times{\mathbb{R}}^d,$ obtaining
\begin{equation*}
\int_1^T\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{t^2|u|^{\alpha+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt\leq C,
\end{equation*}
where $C=C(M,E)$ depends on the mass and the energy, and then letting $T\to\infty.$
\end{proof}
\begin{lemma}\label{limit-point}
Let $u(t,x)$ be a nontrivial solution to \eqref{NLSV-d} such that
for a suitable choice of $\bar x(t)\in {\mathbb{R}}^{d-1}$ we have that
$\{u(t,x_1, \bar x-\bar x(t))\}\subset H^1$ is a precompact set.
If $\bar{u}\in H^1$ is one of its limit points, then $\bar{u}\neq0.$
\end{lemma}
\begin{proof}
This property simply follows from the conservation of the energy.
\end{proof}
\begin{lemma}\label{lem2}
If $u(t,x)$ is as in \autoref{limit-point}, then for any $\varepsilon>0$ there exists $R>0$ such that
\begin{equation}
\sup_{t\in {\mathbb{R}}} \int_{{\mathbb{R}}^{d-1}} \int_{|x_1|>R} (|u|^2+|\nabla_x u|^2+|u|^{\alpha+2})\,d\bar x\,dx_1<\varepsilon.
\end{equation}
\end{lemma}
\begin{proof}
This is a well-known property implied by the precompactness of the sequence.
\end{proof}
\begin{lemma}\label{lem1}
If $u(t,x)$ is as in \autoref{limit-point}
then there exist $R_0>0$ and $\varepsilon_0>0$ such that
\begin{equation}
\int_{{\mathbb{R}}^{d-1}}\int_{|x_1|<R_0}|u(t,x_1,\bar x-\bar x(t))|^{\alpha+2}\,d\bar x\,dx_1>\varepsilon_0 \qquad \forall\,t\in{\mathbb{R}}^+.
\end{equation}
\end{lemma}
\begin{proof}
It is sufficient to prove that $\inf_{t\in {\mathbb{R}}^+} \|u(t ,x_1,\bar x-\bar x(t))\|_{L^{\alpha+2}}
>0$; the result then follows by combining this fact with
\autoref{lem2}.
If by contradiction this were not true, then
there would exist a sequence $\{t_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}}^+$ such that
$u(t_n ,x_1,\bar x-\bar x(t_n))
\overset{n\rightarrow\infty}\longrightarrow 0$ in $L^{\alpha+2}.$
On the other hand, by the compactness assumption, this implies that
$u(t_n ,x_1,\bar x-\bar x(t_n))
\overset{n\rightarrow\infty}\longrightarrow 0$ in $H^{1}$,
which is in contradiction with \autoref{limit-point}.
\end{proof}
We now conclude the proof of scattering for large data, by showing the extinction
of the minimal element.
Let $R_0>0$ and $\varepsilon_0>0$ be given by \autoref{lem1}; then
\begin{equation*}
\begin{aligned}
\int_{\mathbb{R}}\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{|u|^{\alpha+2}t^2}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt&\geq\int_{\mathbb{R}} \int_{{\mathbb{R}}^{d-1}}\int_{|x_1|<R_0}\frac{t^2|u(t,x_1,\bar x-\bar x(t))|^{\alpha+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt\\
&\geq\varepsilon_0\int_{1}^T\frac{t^2}{(t^2+R_0^2)^{3/2}}\,dt\to\infty\qquad \text{if}\quad T\to\infty.
\end{aligned}
\end{equation*}
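For completeness, the divergence of the last integral can be checked directly (assuming, as we may, $T>R_0\geq 1$): since $t^2+R_0^2\leq 2t^2$ for $t\geq R_0$,
\begin{equation*}
\int_{1}^{T}\frac{t^2}{(t^2+R_0^2)^{3/2}}\,dt\geq\int_{R_0}^{T}\frac{t^2}{(2t^2)^{3/2}}\,dt=\frac{1}{2\sqrt{2}}\log\frac{T}{R_0}\overset{T\rightarrow\infty}\longrightarrow\infty.
\end{equation*}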
Hence we contradict \eqref{pote} and we get that the critical element
cannot exist.
\section{Double scattering channels in $1D$}
This last section is devoted to proving \autoref{linscat}. Following \cite{DaSi} (see \emph{Example 1}, page $283$) we have the following property:
\begin{equation}\label{multi}
\begin{aligned}
\forall\, \psi\in L^2 \quad &\exists\,\eta_\pm, \gamma_\pm\in L^2 \hbox{ \,such that } \\
\|e^{it (\partial_x^2 - V)} \psi &-e^{it\partial_x^2}\eta_\pm-e^{it(\partial_x^2 -1)}\gamma_\pm\|_{L^2}\overset{t \rightarrow\pm \infty}\longrightarrow 0.
\end{aligned}
\end{equation}
Our aim is now to show that \eqref{multi} actually holds in $H^1$ provided that $\psi\in H^1$. We shall prove this property for $t\rightarrow +\infty$ (the case $t\rightarrow -\infty$
is similar).
\subsection{Convergence \eqref{multi} occurs in $H^1$ provided that $\psi\in H^2$}
In order to do that it is sufficient to show that
\begin{equation}\label{firststep}\psi\in H^2 \Longrightarrow
\eta_+, \gamma_+\in H^2.\end{equation}
Once it is proved then we conclude the proof of this first step by using the following interpolation
inequality
$$\|f\|_{H^1}\leq \|f\|_{L^2}^{1/2} \|f\|_{H^2}^{1/2}$$
in conjunction with \eqref{multi} and with the bound
$$
\sup_{t\in \mathbb R} \|e^{it (\partial_x^2 - V)} \psi -e^{it\partial_x^2}\eta_+-e^{it(\partial_x^2 -1)}\gamma_+\|_{H^2}<\infty$$
(in fact this last property follows by the fact that $D(\partial_x^2 - V(x))=H^2$ is preserved along the linear flow and by \eqref{firststep}).
Thus we show \eqref{firststep}.
Notice that by \eqref{multi} we get
\begin{equation*}
\|e^{-it\partial_x^2}e^{it (\partial_x^2 - V)} \psi -\eta_+-e^{-it}\gamma_+\|_{L^2}\overset{t \rightarrow \infty}\longrightarrow 0,
\end{equation*}
and by choosing as subsequence $t_n=2\pi n$ we get
\begin{equation*}
\|e^{-it_n\partial_x^2}e^{it_n (\partial_x^2 - V)} \psi -\eta_+-\gamma_+\|_{L^2}\overset{n \rightarrow \infty}\longrightarrow 0.
\end{equation*}
By combining this fact with the bound
$\sup_n \|e^{-it_n\partial_x^2}e^{it_n (\partial_x^2 - V)} \psi\|_{H^2}<\infty$
we get
$\eta_++\gamma_+\in H^2$.
Arguing as above but by choosing $t_n=(2n+1)\pi$ we also get
$\eta_+-\gamma_+\in H^2$ and hence necessarily $\eta_+, \gamma_+\in H^2$.
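As a side remark, the interpolation inequality used above follows in one line from Plancherel's theorem and the Cauchy-Schwarz inequality (with the equivalent norms $\|f\|_{H^s}^2=\int(1+|\xi|^2)^s|\hat f(\xi)|^2\,d\xi$):
\begin{equation*}
\|f\|_{H^1}^2=\int (1+|\xi|^2)|\hat f(\xi)|^2\,d\xi\leq\Big(\int |\hat f(\xi)|^2\,d\xi\Big)^{1/2}\Big(\int (1+|\xi|^2)^{2}|\hat f(\xi)|^2\,d\xi\Big)^{1/2}=\|f\|_{L^2}\,\|f\|_{H^2}.
\end{equation*}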
\subsection{The map $H^2\ni \psi\mapsto (\eta_+, \gamma_+)\in H^2\times H^2$ satisfies $\|\gamma_+\|_{H^1}+\|\eta_+\|_{H^1}\lesssim \|\psi\|_{H^1}$}
Once this step is proved then we conclude by a straightforward density argument.
By a linear version of the conservation laws \eqref{consmass}, \eqref{consen}
we get
\begin{equation}\label{H1V}
\|e^{it (\partial_x^2 - V)} \psi\|_{H^1_V}=
\|\psi\|_{H^1_V}
\end{equation}
where
$$
\|w\|_{H^1_V}^2
=\int |\partial_x w|^2 dx+\int V|w|^2dx+\int |w|^2 dx.
$$
Notice that this norm is clearly equivalent to the usual norm of $H^1$.
\\
Next notice that by using the conservation of the mass
we get
\begin{equation*}
\|\eta_++\gamma_+\|_{L^2}^2=\|\eta_+ + e^{-2n\pi i}\gamma_+\|_{L^2}^2
=\|e^{i2\pi n\partial_x^2}\eta_+ + e^{i2\pi n(\partial_x^2-1)}\gamma_+\|_{L^2}^2
\end{equation*}
and by using
\eqref{multi} we get
\begin{equation*}\|\eta_++\gamma_+\|_{L^2}^2=\lim_{t\rightarrow\infty}\|e^{it( \partial_x^2 - V)} \psi\|_{L^2}^2
=\|\psi\|_{L^2}^2.
\end{equation*}
Moreover we have
\begin{align}\notag
\|\partial_x(\eta_++\gamma_+)\|^2_{L^2}&=\|\partial_x(\eta_++e^{-2n\pi i}\gamma_+)\|^2_{L^2}=\|\partial_x(e^{i2\pi n\partial_x^2}(\eta_++e^{-i2\pi n}\gamma_+))\|^2_{L^2}\\\notag
&=\|\partial_x(e^{i2\pi n \partial_x^2}\eta_++e^{i2\pi n (\partial_x^2-1)}\gamma_+)\|^2_{L^2}
\end{align}
and by using the previous step and \eqref{H1V} we get
\begin{align*}
\|\partial_x(\eta_++\gamma_+)\|^2_{L^2}=&\lim_{t\rightarrow+\infty}\|\partial_x(e^{it(\partial_x^2 - V)}\psi)\|^2_{L^2}\\
\leq&\lim_{t\rightarrow \infty}\|e^{it(\partial_x^2 - V)}\psi \|^2_{H^1_V}=\|\psi\|^2_{H^1_V}
\lesssim \|\psi\|^2_{H^1}.
\end{align*}
Summarizing we get
$$\|\eta_++\gamma_+\|_{H^1}\lesssim \|\psi\|_{H^1}.$$
By a similar argument and by replacing the sequence $t_n=2\pi n$ by
$t_n=(2n+1)\pi$ we get
$$\|\eta_+-\gamma_+\|_{H^1}\lesssim \|\psi\|_{H^1}.$$
The conclusion follows.
\end{document} |
\begin{document}
\title{Negative group delay for Dirac particles travelling
through a potential well}
\author{Xi Chen}
\email{xchen@mail.shu.edu.cn} \affiliation{Department of Physics,
Shanghai University, 99 Shangda Road, Shanghai 200436, P. R. China}
\author{Chun-Fang Li}
\email{cfli@mail.shu.edu.cn} \affiliation{Department of Physics,
Shanghai University, 99 Shangda Road, Shanghai 200436, P. R.
China} \affiliation{State Key Laboratory of Transient Optics
Technology, Xi'an Institute of Optics and Precision Mechanics,
Academia Sinica, 234 West Youyi Road, Xi'an 710068, P. R. China
}
\begin{abstract}
The properties of group delay for Dirac particles travelling through a potential well are
investigated. A necessary condition is put forward for the group delay to be negative. It
is shown that this negative group delay is closely related to its anomalous dependence on
the width of the potential well. In order to demonstrate the validity of stationary-phase
approach, numerical simulations are made for Gaussian-shaped temporal wave packets. A
restriction to the potential-well's width is obtained that is necessary for the wave
packet to remain distortionless in the travelling. Numerical comparison shows that the
relativistic group delay is larger than its corresponding non-relativistic one.
\end{abstract}
\pacs{03.65.Xp, 73.40.Gk}
\maketitle
\section{Introduction}
The question of how much time it takes quantum particles to tunnel through a potential
barrier has been controversial for decades \cite{Hauge-S,Nimtz,Chiao-S}.
Theoretical investigations \cite{MacColl,Hartman,Buttiker-L,Martin-L,Steinberg-Chiao} and
experimental researches \cite{Steinberg-Kwiat,Carniglia,Enders-1,Enders-2,Sp-R,Balcou}
show that the group delay (also known in the literature as the phase time), which describes the
motion of the wave-packet peak \cite{Wigner}, exhibits the well-known superluminality for some
kinds of barriers. In addition, faster-than-light propagation was also
predicted \cite{Garrett} and experimentally verified \cite{Chu,Segard,Wang-1} for light
pulses through anomalous dispersion media. In a previous paper \cite{Li-Wang}, Li and
Wang have elaborated the superluminal and even negative properties of the group delay
for quantum particles travelling through a potential well, instead of tunnelling through
a potential barrier. This counterintuitive phenomenon resulted from the interference of
multi-reflected waves in the potential well was demonstrated in a microwave analogy
experiment \cite{Vetter}. Recently, the concept of negative group delay has been extended
to microelectronics \cite{Daniel}.
Most of the theoretical works on tunnelling times in quantum mechanics rely on
Schr\"{o}dinger's non-relativistic theory. Such a theory has a potential deficiency in
accurately addressing the question of causality \cite{Krekora-1,Li-Chen}. For this
reason, a few authors \cite{Krekora-1,Li-Chen,Leavens,Petrillo} extended the analysis of
the tunnelling to Dirac's fully relativistic quantum theory. Leavens and Aers
\cite{Leavens} used the stationary-state method to analyze the Larmor-clock transmission
times for single barriers and resonant double-barriers. Krekora {\it et al.}
\cite{Krekora-1} solved numerically the time-dependent Dirac equation for a quantum wave
packet tunnelling through a potential barrier. And Petrillo and Janner \cite{Petrillo}
studied the dynamics of wave-packet tunnelling through a barrier. In a recent work
\cite{Li-Chen}, we discussed an energy-transfer associated traversal time for Dirac
particles tunnelling through a potential barrier.
The purpose of this paper is to investigate the properties of the group delay for Dirac
particles travelling through a potential well. It is shown that it behaves superluminal
and even negative in much the same way as in the non-relativistic quantum mechanics
\cite{Li-Wang}. The negativity of the group delay is closely related to its anomalous
dependence on the width of the potential well around transmission resonances. In order to
demonstrate the validity of the stationary-phase approximation, numerical simulations are
made for Gaussian-shaped temporal wave packets. A restriction to the width of the
potential well is given that is necessary for the wave packet to remain distortionless in
the travelling. Finally, a numerical comparison shows that Dirac's relativistic group
delay is larger than its corresponding non-relativistic one.
\section{Relativistic group delay and its non-relativistic limit}
Consider Dirac particles of precisely defined incident energy $E$
and of helicity $+1$, travelling through a one-dimensional
rectangular potential well $-V_0\Theta(z)\Theta(a-z)$ (with $V_0$
positive, representing the depth of the potential well). Let the
incident wave function be
\begin{equation} \label{incident wave}
\psi_{in}(z)=\left(\begin{array}{c} 1 \\ 0 \\ \frac{\hbar kc}{E +
\mu c^2} \\ 0
\end{array}\right) e^{ikz},(z<0),
\end{equation}
where $k=(E^2-\mu^2c^4)^{1/2}/\hbar c$, $\mu$ is the mass of incident particles, and $c$
is the speed of light in vacuum. Then Dirac equation and boundary conditions give for the
transmitted wave function
\begin{equation}
\label{transmitted wave} \psi_{tr}(z)=F\left(\begin{array}{c} 1 \\ 0 \\
\frac{\hbar kc}{E+\mu c^2} \\ 0
\end{array}\right) e^{ik(z-a)},(z>a),
\end{equation}
where the transmission coefficient $F=e^{i\phi}/f$ is determined by the following complex
number,
$$
fe^{i \phi}=\cos{k'a}+(i/2)(\chi+1/\chi) \sin{k'a},
$$
so that
\begin{equation}
\label{phase} \phi=
\mbox{int}\left(\frac{k'a}{\pi}+\frac{1}{2}\right)\pi+
\tan^{-1}\left[\frac{1}{2}\left(\chi+\frac{1}{\chi}\right)\tan{k'a}\right],
\end{equation}
int($\cdot$) stands for the integer part of the involved number, and
$k'=[(E+V_0)^2-\mu^2c^4]^{1/2}/\hbar c$. Note that the real
parameter $\chi$, defined as
$$\chi \equiv \frac{k}{k'}\frac{E+V_0+\mu
c^2}{E+\mu c^2},$$ has the property that $0<\chi<1$.
Obviously, $\phi$ is the phase shift of transmitted wave
(\ref{transmitted wave}) at $z=a$ with respect to the incident
wave (\ref{incident wave}) at $z=0$. The transmission probability
$T$ is a periodical function of the width $a$ of the potential
well,
\begin{equation}
\label{transmission probability} T=\frac{1}{f^2}=\frac{4 \chi^2}{4
\chi^2+(\chi^2-1)^2 \sin^{2} {k' a}}.
\end{equation}
The group delay, defined as the derivative of the phase shift $\phi$ with respect to
particle's energy $E$ \cite{Chiao-S,Wigner}, is given by
\begin{eqnarray}
\label{group delay-1} \tau_{\phi}&=& \hbar \frac{\partial
\phi}{\partial E}=\frac{T}{2 \chi \hbar k'
c}\left[(1+\chi^2)(E+V_{0}) -(1-\chi^2) \frac{\mu
V_{0}(2E+V_{0})}{\hbar^2 k^2} \frac{\sin {2k'a}} {2k'a}\right]
\frac{a}{c}.
\end{eqnarray}
For the following comparisons, let us first give its non-relativistic limit. By
non-relativistic limit it is meant that the speed of incident particles is much smaller
than $c$, so that their kinetic energy $E'=E-\mu c^2$ is much smaller than their rest
energy $\mu c^2$, $E' \ll \mu c^2$. It is also meant that the interaction energy $V_0$
satisfies $V_0 \ll \mu c^2$. In this limit, we have $k \approx \sqrt{2 \mu E'}/\hbar$,
$k' \approx [2 \mu (E'+V_0)]^{1/2}/\hbar$, and $\chi \approx k/k'$. Collecting all these,
we get
\begin{equation}
\label{group delay-2} \tau_{\phi} \approx \tau'_{\phi}=\frac{2 \mu
a}{\hbar
k}\frac{k^2(k^2+k'^2)/k_0^4-\sin{2k'a}/{2k'a}}{4k^2k'^2/k_0^4+\sin^2{k'a}},
\end{equation}
which is expected from Schr\"{o}dinger's non-relativistic theory
\cite{Li-Wang}, where $k_0 \approx \sqrt{2 \mu V_0}/\hbar$.
\section{Negative property of the group delay}
In this section, we discuss the negative property of the group
delay. It is seen from Eq. (\ref{group delay-1}) that when
inequality
$$
(1+\chi^2)(E+V_0)<(1-\chi^2)\frac{\mu
V_0(2E+V_0)}{\hbar^2k^2}\frac{\sin2k'a}{2k'a}
$$
holds, the group delay is negative, $\tau_\phi<0$. Since
$\sin2k'a/2k'a<1$, the above inequality leads to the following
necessary condition for the group delay to be negative,
\begin{equation} \label{necessary condition}
(1+\chi^2)(E+V_0) < (1-\chi^2) \frac{\mu V_0(2E+V_0)}{\hbar^2k^2},
\end{equation}
which can be expressed as a restriction to the total energy of
incident particles as follows,
\begin{eqnarray} \label{inequality-1}
E<E_t &\equiv& \mu c^2 \left\{\frac{V_0}{2\mu c^2}+\left[\left(\frac{V_0}{2\mu
c^2}\right)^2-\left(\frac{1}{3}\right)^3\right]^{1/2}\right\}^{1/3} \nonumber \\ && + \mu
c^2 \left\{\frac{V_0}{2\mu c^2}-\left[\left(\frac{V_0}{2\mu
c^2}\right)^2-\left(\frac{1}{3}\right)^3\right]^{1/2}\right\}^{1/3}.
\end{eqnarray}
This means that when the energy of incident particles satisfies
Eq. (\ref{inequality-1}), that is to say, the energy of incident
particles $E$ is less than a threshold energy $E_t$, one can
always find a width $a$ of the potential well at which the group
delay is negative. Of course, $E_t$ in Eq. (\ref{inequality-1}) is
always real and larger than $\mu c^2$. In the case of $V_0 < 2\mu
c^2/3\sqrt{3}$, $E_t$ can be rewritten as
\begin{equation}
\label{E-t} E_t = \mu c^2 \left\{\frac{V_0}{2\mu c^2}+i
\left[{\left(\frac{1}{3}\right)}^3-\left({\frac{V_0}{2\mu
c^2}}\right)^2\right]^{1/2}\right\}^{1/3} + \mbox{c.c}.
\end{equation}
In the non-relativistic limit, $V_0 \ll \mu c^2$, we obtain $E_t \approx \mu c^2+ V_0/2$.
Therefore, the necessary condition (\ref{inequality-1}) now reduces to $E'<V_0/2$, as
observed previously in Schr\"{o}dinger's theory \cite{Li-Wang}.
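For the reader's convenience we sketch this reduction. In the non-relativistic limit one has $E+V_0\approx\mu c^2$, $2E+V_0\approx 2\mu c^2$, $\hbar^2k^2\approx 2\mu E'$ and $\chi^2\approx k^2/k'^2=E'/(E'+V_0)$, so that condition (\ref{necessary condition}) becomes
\begin{equation*}
(1+\chi^2)E'<(1-\chi^2)V_0
\;\Longleftrightarrow\;
\frac{(2E'+V_0)E'}{E'+V_0}<\frac{V_0^2}{E'+V_0}
\;\Longleftrightarrow\;
(2E'-V_0)(E'+V_0)<0
\;\Longleftrightarrow\;
E'<\frac{V_0}{2}.
\end{equation*}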
Fig. \ref{fig.1} shows a typical example of the dependence of $\tau_\phi$ on the width
$a$, where the well depth $V_0=0.4 \mu c^2 > 2 \mu c^2/3 \sqrt{3}$ ($E_t=1.16 \mu c^2$),
the total energy $E=1.01 \mu c^2 < E_t$, and $a$ is re-scaled to be $k'a$. For
comparison, Fig. \ref{fig.1} also shows the periodical dependence of transmission
probability $T$ on $a$ under the same conditions. It is interesting to note that the
oscillation of the group delay with respect to $a$ is closely related to the periodical
occurrence of transmission resonances at $k'a=m \pi$ ($m=1,2,3...$).
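As a quick numerical illustration (our own sketch, not part of the original analysis), the two curves of Fig. \ref{fig.1} can be generated directly from Eqs. (\ref{transmission probability}) and (\ref{group delay-1}); natural units $\hbar=c=\mu=1$ are assumed, so that energies are measured in units of $\mu c^2$ and the delay in units of $\tau_0=\hbar/\mu c^2$:
\begin{verbatim}
import numpy as np

def chi(E, V0):
    # parameter chi defined above, in natural units hbar = c = mu = 1
    k  = np.sqrt(E**2 - 1.0)
    kp = np.sqrt((E + V0)**2 - 1.0)
    return (k / kp) * (E + V0 + 1.0) / (E + 1.0)

def transmission(E, V0, a):
    # transmission probability T
    kp = np.sqrt((E + V0)**2 - 1.0)
    x = chi(E, V0)
    return 4*x**2 / (4*x**2 + (x**2 - 1.0)**2 * np.sin(kp*a)**2)

def group_delay(E, V0, a):
    # stationary-phase group delay tau_phi, in units of tau_0
    k  = np.sqrt(E**2 - 1.0)
    kp = np.sqrt((E + V0)**2 - 1.0)
    x  = chi(E, V0)
    bracket = (1 + x**2)*(E + V0) \
        - (1 - x**2)*V0*(2*E + V0)/k**2 * np.sin(2*kp*a)/(2*kp*a)
    return transmission(E, V0, a) * a * bracket / (2*x*kp)

E, V0 = 1.01, 0.4                        # parameters of Fig. 1
kp = np.sqrt((E + V0)**2 - 1.0)
a = np.linspace(0.05, 4*np.pi, 2000)/kp  # widths covering a few resonances
tau = group_delay(E, V0, a)
print("most negative delay:", tau.min(), "at k'a =", kp*a[tau.argmin()])
\end{verbatim}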
\begin{figure}
\caption{Group delay $\tau_\phi$ and transmission probability $T$ as functions of the re-scaled width $k'a$ of the potential well, for $E=1.01\mu c^2$ and $V_0=0.4\mu c^2$.}
\label{fig.1}
\end{figure}
On the one hand, at resonances, the group delay becomes
\begin{equation}
\tau_\phi|_{k'a=m\pi}=\frac{1}{2}\left(\chi+\frac{1}{\chi}\right)
\frac{E+V_0}{ \hbar k'c}\frac{a}{c}>\frac{a}{c},
\end{equation}
which is proportional to $a$. Its corresponding group velocity is less than the velocity
of light in vacuum, $c$. On the other hand, the derivative of group delay $\tau_\phi$
with respect to $a$ is, at resonances,
\begin{eqnarray}
\frac{\partial \tau_\phi}{\partial a}|_{k'a=m\pi}&=& \frac{1}{2
\chi \hbar k'c^2}\left[(1+\chi^2)(E+V_0)
-(1-\chi^2)\frac{\mu V_0(2E+V_0)}{
\hbar^2 k^2}\right].
\end{eqnarray}
When the necessary condition (\ref{necessary condition}) is satisfied, it is negative.
This shows that the group delay depends anomalously on the width around resonance points.
In other words, it decreases with increasing width of the potential well at
resonances, as displayed clearly in Fig. \ref{fig.1}.
In connection with the negative characteristics of the group
delay, the phase shift (\ref{phase}) shows a quantum-like behavior
with respect to the potential-well's width.
Fig. \ref{fig.2} indicates such a behavior, where $
E=1.01\mu c^2$ and $V_0=0.4\mu c^2$. This quantum-like behavior is
also related to the periodical occurrence of the transmission
resonances. It can be seen from Eq. (\ref{phase}) and Fig.
\ref{fig.2} that at resonances, $k'a=m\pi$, the phase shift
becomes $\phi=k'a$ and changes rapidly around here with respect to
$a$. However, when the width is far from the resonance points, as
in the middle between two adjacent resonance points, the phase
shift changes slowly.
\begin{figure}
\caption{Phase shift $\phi$ of the transmitted wave as a function of the re-scaled width $k'a$ of the potential well, for $E=1.01\mu c^2$ and $V_0=0.4\mu c^2$.}
\label{fig.2}
\end{figure}
In addition, it is also indicated from Eq. (\ref{group delay-1})
that the group delay depends not only on the potential-well's
width $a$, but also on the incident energy $E$ and the depth of
the potential well $V_0$. To see the latter more clearly, we
convert it to a dimensionless form. Denoting $\alpha=E/\mu c^2$,
$\beta=V_0/\mu c^2$, and $ \gamma=\hbar/a\mu c$, we have
\begin{eqnarray} \label{dimensionless}
\frac{\tau_\phi}{\tau_0} &=& \frac{T}{2\chi} \frac{k'a}
{(\alpha+\beta)^2-1} \left[(1+\chi^2)( \alpha+\beta) - (1-\chi^2)
\frac{\beta(2\alpha+\beta)}{\alpha^2-1}\frac{\sin
{2k'a}}{2k'a}\right],
\end{eqnarray}
where $\tau_0=\hbar/\mu c^2$, $k'a=[(\alpha+\beta)^2-1]^{1/2}/ \gamma$. When the kinetic
energy of incident particles $E'$ is small enough that $E'/\mu c^2 \rightarrow 0$ with
the depth of the potential well remaining finite, we have $\alpha \rightarrow 1$, so that
$\chi \rightarrow 0$. In this limit, the group delay (\ref{dimensionless}) has the
following form,
$$
\lim_{\alpha\rightarrow 1}\frac{\tau_\phi}{\tau_0}= -\sqrt{\frac{
\beta+2}{(\alpha^2-1)\beta}}\cot{k'a},
$$
which approaches negative infinity when $\cot{k'a}>0$. Fig. \ref{fig.3} shows such a
dependence of the group delay on the incident energy $E$, where $\beta=0.4$ and
$\gamma=0.01$. A strange phenomenon occurs here: for a given potential well, the
absolute value of the negative group delay becomes larger as the incident
kinetic energy decreases. Of course, the transmission probability in this limit tends to zero in
the following way,
\begin{equation} \label{transmission limit-1}
\lim_{\alpha \rightarrow 1}T=\frac{4\chi^2}{4\chi^2+\sin^2{k'a}},
\end{equation}
so that very few particles can travel through the potential well
at this negative group velocity.
\begin{figure}
\caption{Dependence of the group delay on the incident energy $E$, for $\beta=0.4$ and $\gamma=0.01$.}
\label{fig.3}
\end{figure}
In order to demonstrate the validity of the above stationary-phase method in this
problem, we proceed to numerical simulations of the group delay for a Gaussian-shaped
wave packet. The incident wave function of Dirac particles is assumed to be, at $z=0$,
\begin{equation}
\label{wavepacket} \Psi_{in}(t)|_{z=0}=\left(\begin{array}{c} 1 \\ 0 \\
\frac{\hbar k_0c}{E_0 + \mu c^2} \\ 0
\end{array}\right) \exp(-t^2/2 w^2 -iE_0 t /\hbar),
\end{equation}
which has the Fourier integral of the following form,
\begin{equation} \label{incidentintegral}
\Psi_{in}(t)|_{z=0}=\frac{1}{\sqrt{2\pi}}\int
A(E)\psi(E_0)\exp(-iE t/\hbar)dE,
\end{equation}
where $k_0=(E_0^2-\mu^2 c^4)^{1/2}/\hbar c$, the spinor $\psi(E_0)$ is $[1 , 0, \hbar
k_0c/(E_0 + \mu c^2), 0 ]^T$, the energy spectral distribution $A(E)$ with the central
energy $E_0$ is given by
$$A(E)=(w/\hbar)\exp[-(w^2 /2\hbar^2)(E-E_0)^2],$$
and $w$ is the temporal width of the wave packet (\ref{wavepacket}). The transmitted wave
function takes the following form,
\begin{equation}
\label{transmittedintegral} \Psi_{tr}(z,t)=\frac{1}{\sqrt{2\pi}}\int
F(E)A(E)\psi(E_0)\exp\{i[k(z-a)-Et/\hbar]\}dE.
\end{equation}
The numerically calculated group delay, $\tau^N _{\phi}$, is defined here by
\begin{equation}
\label{numerical result} |\Psi_{tr}(z=a,\tau^N _{\phi})|^2=\mbox{max}\{|\Psi_{tr}
(z=a,t)|^2 \}.
\end{equation}
Since the incident wave is of perfect Gaussian shape, the integration limits in Eq.
(\ref{incidentintegral}) and hence in Eq. (\ref{transmittedintegral}) should be from
$-\infty$ to $+\infty$. But the energy of the incident particles must be larger than $\mu
c^2$, so the actual integral in the numerical simulations is taken from $\mu c^2$ to
$+\infty$.
Calculations show that the stationary-phase approximation (\ref{group delay-1}) for the
group delay is in good agreement with the numerical result, especially when the energy
spectral distribution is sharp. In Fig. \ref{fig.4} we show such a comparison between
theoretical and numerical results, where $E_0=1.01 \mu c^2$, $V_0=0.4 \mu c^2$, and the
temporal width $w=300 \tau_0$. For the chosen temporal width, the corresponding energy
spreading $\Delta E=\hbar/2w=\mu c^2/600$ is narrow enough that the integral in Eq.
(\ref{transmittedintegral}) can be performed from $\mu c^2$ to $2 \mu c^2$ without
changing the result significantly.
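A minimal numerical sketch of this procedure (our own illustration; the quadrature grids, time window and helper names are choices of this sketch, not of the original computation; natural units $\hbar=c=\mu=1$ again) reads:
\begin{verbatim}
import numpy as np

E0, V0, w = 1.01, 0.4, 300.0            # parameters used for Fig. 4 (tau_0 = 1)

def F(E, a):
    # transmission amplitude F = exp(i*phi)/f for a well of width a
    k  = np.sqrt(E**2 - 1.0)
    kp = np.sqrt((E + V0)**2 - 1.0)
    x  = (k/kp)*(E + V0 + 1.0)/(E + 1.0)
    return 1.0/(np.cos(kp*a) + 0.5j*(x + 1.0/x)*np.sin(kp*a))

def psi_tr(t, a, nE=4001):
    # upper component of Psi_tr(z = a, t): integral over the spectrum A(E),
    # restricted to [mu c^2, 2 mu c^2] as explained in the text
    E = np.linspace(1.0 + 1e-6, 2.0, nE)
    A = w*np.exp(-0.5*w**2*(E - E0)**2)
    return np.trapz(F(E, a)*A*np.exp(-1j*E*t), E)/np.sqrt(2.0*np.pi)

def numerical_group_delay(a):
    # tau_phi^N: the time at which |Psi_tr(a, t)|^2 is maximal
    ts = np.linspace(-1500.0, 1500.0, 3001)
    p = np.array([abs(psi_tr(t, a))**2 for t in ts])
    return ts[p.argmax()]

print(numerical_group_delay(a=2.0))
\end{verbatim}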
\begin{figure}
\caption{Comparison between the stationary-phase group delay and the numerically calculated one for a Gaussian-shaped wave packet, with $E_0=1.01\mu c^2$, $V_0=0.4\mu c^2$ and $w=300\tau_0$.}
\label{fig.4}
\end{figure}
As pointed out in the optical analog \cite{Huang}, for an incident wave packet of energy
spreading $\Delta E$, the corresponding spreading of $k'a$ should be much smaller than
$\pi$, the period of $|F|$, in order that the stationary-phase approximation is valid.
With the energy spreading $\Delta E=\hbar/2w$, this leads to the following restriction to
the width of the potential well,
\begin{equation}
a\ll 2 \pi w \frac{\partial E}{\hbar \partial k'}.
\end{equation}
It is noted that $\partial E/\hbar \partial k'$ is nothing but the group velocity of
particles in the region of potential well. By introducing a characteristic length $L$
which is defined as $w \partial E/\hbar \partial k'$, the above restriction is simplified
to be $a \ll 2\pi L$. With this restriction, the temporal wave packet can travel through
the potential well with negligible distortion.
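Spelled out, with $\Delta E=\hbar/2w$ the induced spreading of $k'a$ is
\begin{equation*}
\Delta(k'a)=a\,\Delta k'=\frac{a\,\Delta E}{\partial E/\partial k'}=\frac{a}{2w\,(\partial E/\hbar\partial k')}\ll\pi
\quad\Longrightarrow\quad
a\ll 2\pi w\,\frac{\partial E}{\hbar\,\partial k'},
\end{equation*}
which is the restriction quoted above.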
All the above discussions show that the group delay (\ref{group delay-1}) in Dirac's
relativistic theory can be superluminal and even negative in much the same way as in
Schr\"{o}dinger's non-relativistic theory \cite{Li-Wang}. In this sense, the superluminal
and negative properties is not an artifact due to the deficiency of non-relativistic
quantum theory. Rather, it is a real effect. In the next section, we investigate the
difference between the group delays in relativistic and non-relativistic theories.
\section{Relativistic effect on the group delay}
As mentioned in Sec.II, when the incident kinetic energy $E' \ll \mu c^2$ and the
potential-well's depth $V_0 \ll \mu c^2$, the group delay (\ref {group delay-1}) tends to
its non-relativistic limit (\ref{group delay-2}). To show their differences, we rewrite
the non-relativistic group delay in the following dimensionless form,
\begin{eqnarray} \label{dimensionless2}
\frac{\tau'_\phi}{\tau_0} &=& \frac{k'a} {\sqrt{
(\alpha-1)(\alpha+\beta-1)}}\frac{(\alpha -1)(2\alpha+\beta-2)
-\beta^2\sin{2k'a}/{2k'a}} {4(\alpha-1) (\alpha+\beta-1)+ \beta^2
\sin^2{k'a}},
\end{eqnarray}
where $\alpha$, $\beta$, and $\gamma$ are the same as before, $k'a
= [2(\alpha+\beta-1)]^{1/2}/\gamma$ in the non-relativistic
quantum mechanics.
In order to address the relativistic effect on the group delay, we draw in Fig.
\ref{fig.5} the dependence of relativistic and non-relativistic group delays on the width
of the potential well for $\alpha=1.01$ and $\beta=0.2$ ($V_0=0.2 \mu c^2<2 \mu c^2
/3\sqrt{3}$), where the relativistic group delay is shown by solid curve, the
corresponding non-relativistic one is shown by dashed curve, and the width of potential
well is re-scaled by their own $k'$ to be $k'a$.
\begin{figure}
\caption{Relativistic (solid curve) and non-relativistic (dashed curve) group delays as functions of the re-scaled width $k'a$ of the potential well, for $\alpha=1.01$ and $\beta=0.2$.}
\label{fig.5}
\end{figure}
From Fig. \ref{fig.5} we see that the relativistic group delay for travelling through a
potential well is larger than the corresponding non-relativistic one. This is in
agreement with the result of Leavens and Aers \cite{Leavens}, who observed that the local
mean velocity for transmitted particles is reduced due to relativistic effect for
resonant double barriers where the group delay is always larger than zero. The difference
between the two group delays are very small for the low energy and potential-well depth.
It is expected that the agreement between the group delays in relativistic and
non-relativistic theory is obtained when $E' \ll \mu c^2$ and $V_0 \ll \mu c^2$.
\section{Conclusions}
In summary, we have investigated the negative property of the group delay for Dirac
particles travelling through a quantum potential well. A necessary condition
(\ref{necessary condition}) is given for the group delay to be negative, which is a
restriction on the energy of incident particles and reduces to $E'<V_0/2$ in
non-relativistic limit. The relation of the negativity of the group delay with its
anomalous dependence on the width of the potential well around transmission resonances is
discussed. In order to demonstrate the validity of the stationary-phase approach,
numerical simulations are made for a Gaussian-shaped temporal wave packet. A restriction
to the width of the potential well is given that is necessary for the wave packet to
remain distortionless in the travelling. Comparison of the relativistic and
non-relativistic group delays is also made. It is found that the value of the
relativistic group delay is larger than that of the non-relativistic one. It should be
pointed out that the negative group delay discussed here is not at odds with the
principle of causality. When a wave packet travels through a potential well, the boundary
effect leads to a reshaping of the edge of the packet \cite{Steinberg-Kwiat,Japha} which
results in an effective acceleration of the wave packet. This phenomenon is similar to
but different from the negative group-velocity propagation demonstrated by Wang {\it et
al.} \cite{Wang-1}, where gain-assisted anomalous dispersion plays the key role. We hope
that this work may stimulate further experimental researches in electronic domains.
\section*{Acknowledgments}
This work was supported in part by the Science Foundation of Shanghai Municipal
Commission of Education (Grant No. 01SG46), the Science Foundation of Shanghai Municipal
Commission of Science and Technology (Grant No. 03QMH1405), and by Shanghai Leading
Academic Discipline Program.
\end{document} |
\begin{document}
\newcommand{{\mathbb{1}}}{{\mathbb{1}}}
\newcommand{\bra}[1]{{\langle{#1}|}}
\newcommand{\ket}[1]{{|{#1}\rangle}}
\newcommand{{\mathbb{Z}}}{{\mathbb{Z}}}
\title*{Quantum Finite State Transducers}
\toctitle{Quantum Finite State Transducers}
\titlerunning{Quantum Finite State Transducers}
\author{R\= usi\c n\v s Freivalds\inst{1}
\and Andreas Winter\inst{2}}
\authorrunning{R\= usi\c n\v s Freivalds and Andreas Winter}
\institute{Institute of Mathematics and Computer Science, University of Latvia, Rai\c na bulv\= aris 29, LV--1459, Riga, Latvia. Email: \texttt{rusins@paul.cclu.lv}.
\and
Department of Computer Science, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol BS8 1UB, United Kingdom. Email: \texttt{winter@cs.bris.ac.uk}.}
\maketitle
\begin{abstract}
We introduce \emph{quantum finite state transducers (qfst)}, and
study the class of relations which they compute. It turns
out that they share many features with probabilistic finite
state transducers, especially regarding undecidability of
emptiness (at least for low probability of success). However,
like their `little brothers', the quantum finite automata, qfst
are incomparable in power to their probabilistic
counterparts. This we show by discussing a number of characteristic
examples.
\end{abstract}
\section{Introduction and definitions}
\label{sec:intro}
\label{sec:defi}
The issue of this work is to introduce and to study the computational model
of quantum finite state transducers. These can be understood as finite automata
with the addition of an output tape
which compute a relation between strings, instead of a decision (which we read
as a binary valued function). After the necessary definitions, the relation to
quantum finite automata is clarified (section~\ref{sec:qfa:qfst}), then
decidability questions are addressed (section~\ref{sec:empty}):
it is shown that emptiness of the
computed relation is undecidable both for quantum and probabilistic transducers.
However, the membership problem for a specific output is decidable.
Next, the relation between deterministic and probabilistic transducers is explored
(section~\ref{sec:det:prob}), and in section~\ref{sec:qfst}
quantum and probabilistic transducers are compared.
\par
We feel our extension of quantum automata studies to this new model
justified by the following quote from D. Scott~\cite{scott}:
\begin{quote}
{\it
`The author (along with many other people) has come recently
to the conclusion that the functions computed by the various
machines are more important -- or at least more basic -- than
the sets accepted by these devices. (...) In fact by putting the
functions first, the relationship between various classes of sets
becomes much clearer'.}
\end{quote}
\par
We start by reviewing the concept of probabilistic finite state transducer.
For a finite set $X$ we denote by $X^*$ the set of all finite strings formed
from $X$; the empty string is denoted $\epsilon$.
\begin{defi}
\label{defi:pfst}
A \emph{probabilistic finite state transducer (pfst)} is a tuple
$T=(Q,\Sigma_1,\Sigma_2,V,f,q_0,Q_{\rm acc},Q_{\rm rej})$, where $Q$ is a finite
set of states, $\Sigma_1,\Sigma_2$ is the input/output
alphabet, $q_0\in Q$ is the initial state, and
$Q_{\rm acc},Q_{\rm rej}\subset Q$ are (disjoint) sets of
accepting and rejecting states, respectively. (The
other states, forming set $Q_{\rm non}$, are called non--halting).
The transition function $V:\Sigma_1\times Q\rightarrow Q$ is
such that for all $a\in\Sigma_1$ the matrix $(V_a)_{qp}$
is stochastic, and $f_a:Q\rightarrow\Sigma_2^*$ is the
output function. If all matrix entries are either $0$ or $1$
the machine is called a \emph{deterministic finite state
transducer (dfst)}.
\end{defi}
The meaning of this definition is that, being in state $q$, and reading
input symbol $a$, the transducer prints $f_a(q)$ on the output
tape, and changes to state $p$ with probability $(V_a)_{qp}$, moving
input and output head to the right. After each such step,
if the machine is found in a halting state, the computation
stops, accepting or rejecting the input, respectively.
\par
To capture this formally, we introduce the \emph{total state}
of the machine, which is an element
$$(P_{\rm NON},P_{\rm ACC}, p_{\rm rej})\in
\ell^1(Q\times\Sigma_2^*)\oplus \ell^1(\Sigma_2^*)\oplus \ell^1(\{{\rm REJ}\}),$$
with the natural norm
$$\|(P_{\rm NON},P_{\rm ACC},p_{\rm rej})\|=
\|P_{\rm NON}\|_1+\|P_{\rm ACC}\|_1+|p_{\rm rej}|.$$
At the beginning, the total state is $((q_0,\epsilon),{\bf 0},0)$
(where we identify an element of $Q\times\Sigma_2^*$ with its characteristic
function). The computation is represented by the (linear extensions
of the) transformations
$$T_a:((q,w),P_{\rm ACC},p_{\rm rej})\mapsto
\left(\left(\sum_{p\in Q_{\rm non}} (V_a)_{qp}p,w f_a(q)\right),
P_{\rm ACC}',p_{\rm rej}'\right),$$
of the total state, for $a\in\Sigma_1$, with
\begin{equation*}
P_{\rm ACC}'(x)=\begin{cases}
P_{\rm ACC}(x)+\sum_{p\in Q_{\rm acc}} (V_a)_{qp}
& \text{ if } x=w f_a(q),\\
P_{\rm ACC}(x) & \text{ else},
\end{cases}
\end{equation*}
and $p_{\rm rej}'=p_{\rm rej}+\sum_{p\in Q_{\rm rej}} (V_a)_{qp}$.
\par
For a string $x_1\ldots x_n$ the map $T_x$ is just the concatenation of the
$T_{x_i}$. Observe that all the $T_a$ conserve the probability.
\par
Implicitly, we add initial and end marker symbols ($\ddag,\$ $)
at the input, with additional stochastic matrices $V_\ddag$ and
$V_\$ $, executed only at the very beginning, and at the very
end. We assume that $V_\$ $ puts no probability outside
$Q_{\rm acc}\cup Q_{\rm rej}$.
\par
By virtue of the computation, to each input string $v\in\Sigma_1^*$
there corresponds a probability distribution $T(\cdot|v)$
on the set $\Sigma_2^*\cup\{{\rm REJ}\}$:
$$T({\rm REJ}|v):=T_{\ddag v \$}((q_0,\epsilon),{\bf 0},0)[{\rm REJ}]$$
is the probability to reject the input $v$, whereas
$$T(w|v):=T_{\ddag v \$}((q_0,\epsilon),{\bf 0},0)[w]$$
is the probability to accept, after having produced the
output $w$.
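To make these definitions concrete, the following small script (our own illustration, not part of the original text; the toy machine at the end is ad hoc) propagates the total state of a pfst and returns the distribution $T(\cdot|v)$, with the markers simply included in the processed word (here \texttt{\#} plays the role of $\ddag$):
\begin{verbatim}
from collections import defaultdict

def run_pfst(V, f, q0, acc, rej, word):
    # V[a][q][p]: probability to move from q to p on symbol a (rows sum to 1)
    # f[a][q]  : string printed on the output tape in state q reading a
    n = len(V[word[0]])
    P_non = defaultdict(float); P_non[(q0, "")] = 1.0   # prob. on (state, tape)
    P_acc, p_rej = defaultdict(float), 0.0
    for a in word:
        nxt = defaultdict(float)
        for (q, tape), prob in P_non.items():
            out = tape + f[a][q]
            for p in range(n):
                pr = prob * V[a][q][p]
                if pr == 0.0:
                    continue
                if p in acc:
                    P_acc[out] += pr
                elif p in rej:
                    p_rej += pr
                else:
                    nxt[(p, out)] += pr
        P_non = nxt
    return dict(P_acc), p_rej        # T(w|v) for accepted outputs, T(REJ|v)

# toy machine: state 0 copies an 'a' per input letter; the end marker '$'
# sends everything to the accepting state 1
V = {"0": [[1, 0], [0, 1]], "1": [[1, 0], [0, 1]],
     "#": [[1, 0], [0, 1]], "$": [[0, 1], [0, 1]]}
f = {"0": ["a", ""], "1": ["a", ""], "#": ["", ""], "$": ["", ""]}
print(run_pfst(V, f, q0=0, acc={1}, rej=set(), word="#011$"))  # {'aaa': 1.0}, 0.0
\end{verbatim}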
\begin{defi}
\label{defi:compute}
Let ${\cal R}\subset\Sigma_1^*\times\Sigma_2^*$.
\par
For $\alpha>1/2$ we say that $T$ \emph{computes the relation ${\cal R}$
with probability $\alpha$} if for all $v$, whenever $(v,w)\in{\cal R}$,
then $T(w|v)\geq\alpha$, and whenever $(v,w)\not\in{\cal R}$,
then $T(w|v)\leq 1-\alpha$
\par
For $0<\alpha<1$ we say that $T$ \emph{computes the relation ${\cal R}$
with isolated cutpoint $\alpha$} if there exists $\varepsilon>0$ such
that for all $v$, whenever $(v,w)\in{\cal R}$, then
$T(w|v)\geq\alpha+\varepsilon$, but whenever
$(v,w)\not\in{\cal R}$, then $T(w|v)\leq\alpha-\varepsilon$.
\end{defi}
The following definition is modelled after the ones for pfst
for quantum finite state automata~\cite{kondacs:watrous}:
\begin{defi}
\label{defi:qfst}
A \emph{quantum finite state transducer (qfst)} is a tuple
$T=(Q,\Sigma_1,\Sigma_2, V,f,q_0,Q_{\rm acc},Q_{\rm rej})$,
where $Q$ is a finite
set of states, $\Sigma_1,\Sigma_2$ is the input/output
alphabet, $q_0\in Q$ is the initial state, and
$Q_{\rm acc},Q_{\rm rej}\subset Q$ are (disjoint) sets of
accepting and rejecting states, respectively.
The transition function $V:\Sigma_1\times Q\rightarrow Q$ is
such that for all $a\in\Sigma_1$ the matrix $(V_a)_{qp}$
is unitary, and $f_a:Q\rightarrow\Sigma_2^*$ is the
output function.
\end{defi}
Like before, matrices $V_\ddag$ and $V_\$ $ are implicitly assumed,
$V_\$ $ carrying no amplitude from $Q_{\rm non}$ to outside
$Q_{\rm acc}\cup Q_{\rm rej}$.
The computation proceeds as follows: being in state $q$, and reading
$a$, the machine prints $f_a(q)$ on the output tape, and moves
to the superposition $V_a\ket{q}=\sum_p (V_a)_{qp}\ket{p}$ of internal states.
Then a measurement of the orthogonal
decomposition $E_{\rm non}\oplus E_{\rm acc}\oplus E_{\rm rej}$
(with the subspaces $E_i={\rm span}\ Q_i\subset \ell^2(Q)$, which we identify
with their respective projections) is performed, stopping
the computation with accepting the input on the second outcome
(while observing the output),
with rejecting it on the third.
\par
Here, too, we define total states: these are elements
$$(\ket{\psi_{\rm NON}},P_{\rm ACC},p_{\rm rej})\in
\ell^2(Q\times\Sigma_2^*)\oplus \ell^1(\Sigma_2^*)\oplus \ell^1(\{{\rm REJ}\}),$$
with norm
$$\|(\ket{\psi_{\rm NON}},P_{\rm ACC},p_{\rm rej})\|=
\|\ket{\psi_{\rm NON}}\|_2+\|P_{\rm ACC}\|_1+|p_{\rm rej}|.$$
At the beginning the total state is $(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)$,
the total state transformations, for
$$\ket{\psi}=\sum_{q\in Q} \ket{q}\otimes\ket{\omega_q},\qquad\text{with }
\ket{\omega_q}=\sum_{w\in\Sigma_2^*} \alpha_{qw}\ket{w},$$
are (for $a\in\Sigma_1$)
$$T_a:(\ket{\psi},P_{\rm ACC},p_{\rm rej})\mapsto
\left(E_{\rm non}\sum_q V_a\ket{q}\otimes\ket{\omega_q f_a(q)},
P_{\rm ACC}',
p_{\rm rej}'\right),$$
where $\ket{\omega_q f_a(q)}=\sum_w \alpha_{qw}\ket{wf_a(q)}$, and
\begin{align*}
P_{\rm ACC}'(x) &=P_{\rm ACC}(x)
+\left\|E_{\rm acc}\sum_{q,w\text{ s.t. }x=wf_a(q)}
\alpha_{qw}V_a\ket{q}\right\|_2^2\ ,\\
p_{\rm rej}' &=p_{\rm rej}
+\left\|E_{\rm rej}
\sum_q V_a\ket{q}\otimes\ket{\omega_q f_a(q)}\right\|_2^2\ .
\end{align*}
Observe that the $T_a$ do not exactly preserve the norm, but that there
is a constant $\gamma$ such that $\|T_a(X)\|\leq\gamma\|X\|$ for any total
state $X$.
Quite straightforwardly, the distributions $T(\cdot|v)$ are defined, and so are the
concepts of computation with probability $\alpha$ or with isolated
cutpoint $\alpha$.
\par
Observe also that we defined our model in closest possible analogy to
quantum finite automata~\cite{kondacs:watrous}. This is of course to
be able to compare qfst to the latter. In principle however other
definitions are conceivable, e.g. a mixed state computation where
the $T_a$ are any completely positive, trace preserving, linear maps
(the same of course applies to quantum finite automata!).
We defer the study of such a model to another occasion.
\par
Notice the physical benefits of having the output tape: whereas for
finite automata a superposition of states means that the amplitudes
of the various transitions are to be added, this is no longer true
for transducers if we face a superposition of states \emph{with different
output tape content}. I.e. the entanglement of the internal state with
the output may prohibit certain interferences. This will be a crucial
feature in some of our later constructions.
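To illustrate this point, here is a small sketch (again our own, with an ad hoc single step and a Hadamard-like matrix) that tracks qfst amplitudes as a dictionary over pairs (state, tape content); branches carrying the same tape content interfere, while branches carrying different contents do not:
\begin{verbatim}
import numpy as np
from collections import defaultdict

def qfst_step(amp, V, f, acc, rej):
    # one step: print f[q], apply the unitary V (amplitude V[q][p] from q to p),
    # then measure the decomposition E_non + E_acc + E_rej
    new = defaultdict(complex)
    for (q, tape), a_qw in amp.items():
        for p in range(len(V)):
            new[(p, tape + f[q])] += a_qw * V[q][p]
    non, p_acc, p_rej = defaultdict(complex), defaultdict(float), 0.0
    for (p, tape), a in new.items():
        if p in acc:
            p_acc[tape] += abs(a)**2
        elif p in rej:
            p_rej += abs(a)**2
        else:
            non[(p, tape)] += a
    return dict(non), dict(p_acc), p_rej

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard-like unitary
f = ["", ""]                                   # no output printed in this step
# equal tape contents: amplitudes interfere and concentrate on state 0
print(qfst_step({(0, "x"): 2**-0.5, (1, "x"): 2**-0.5}, H, f, set(), set())[0])
# different tape contents: no interference, each state keeps weight 1/2
print(qfst_step({(0, "0"): 2**-0.5, (1, "1"): 2**-0.5}, H, f, set(), set())[0])
\end{verbatim}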
\section{Quantum Finite Automata and Quantum Transducers}
\label{sec:qfa:qfst}
The definition of qfst is taylored in such a way that by excluding
the output tape and the output function, we get a quantum finite
automaton. One, however, with distinct acceptance and rejection
properties, as compared to the qfst.
Nevertheless, the decision capabilities
of qfst equal those of quantum finite automata:
\begin{thm}
\label{thm:qfa-c-qfst}
A language $L$ is accepted by a 1--way quantum finite automaton
with probability bounded away from 1/2 if and only if the relation
$L\times\{0\}\cup\overline{L}\times\{1\}$ is computed by a qfst with isolated
cutpoint.
\end{thm}
{\it Proof}:
First observe that for finite automata (probabilistic and quantum), recognizability
with an isolated cutpoint is equivalent to recognizability with probability
bounded away from $1/2$ (by ``shifting the cutpoint'': just add
in the $\ddag$--step possibilities to accept or reject right away with
certain probabilities).
We have to exhibit two constructions:
\par
Let there be given a quantum finite automaton. We may assume that it is such
that $V_\$ $ is a permutation on $Q$.
\par
This can be forced by duplicating
each $q\in Q_{\rm acc}\cup Q_{\rm rej}$ by a new state $q'$, and modifying the transition
function as follows: denote by $\sigma$ the map interchanging $q$ with $q'$
for $q\not\in Q_{\rm non}$, and being the identity on $q,q'$ for $q\in Q_{\rm non}$.
Define a unitary $U$ such that for $q\in Q_{\rm non}$
$$U\ket{q}=\sum_p (V_\$)_{qp}\ket{\sigma p},$$
and $U\ket{q}=\ket{q}$ for $q\in Q_{\rm acc}\cup Q_{\rm rej}$.
Now let
$$V_\ddag' :=UV_\ddag,\qquad
V_\$' :=\sigma,\qquad
V_a' :=UV_a U^{-1}.$$
It is easily checked that this automaton behaves exactly like the
initial one.
\par
Construct a qfst as follows: its states are
$Q\cup\widehat{Q}$,
with $\widehat{Q}=\{\hat{q}:q\in Q_{\rm acc}\cup Q_{\rm rej}\}$ being
the accepting states, and no rejecting states.
Let the transition function be $W$ with
\begin{align*}
W_a\ket{q} &=V_a\ket{q}\text{ for }q\in Q_{\rm non},\text{ but}\\
W_a\ket{q} &=\ket{\hat{q}}\text{ for }q\in Q_{\rm acc}\cup Q_{\rm rej}.
\end{align*}
Since $V_\$ $ is the permutation $\sigma$ on $Q$, we may define
\begin{equation*}
W_\$\ket{q}=\begin{cases}
\ket{\widehat{\sigma q}} & \text{for }\sigma q\in Q_{\rm acc}\cup Q_{\rm rej}, \\
\ket{\sigma q} & \text{for }\sigma q\in Q_{\rm non}.
\end{cases}
\end{equation*}
Finally, let the output function be (for $q\in Q$)
\begin{equation*}
\begin{array}{lcr}
{
f_a(q)=\begin{cases}
0 & \text{for }q\in Q_{\rm acc},\\
1 & \text{for }q\in Q_{\rm rej},
\end{cases}
} & {\phantom{===}} &
{
f_\$(q)=\begin{cases}
0 & \text{for }\sigma q\in Q_{\rm acc},\\
1 & \text{for }\sigma q\in Q_{\rm rej},
\end{cases}
}
\end{array}
\end{equation*}
and $\epsilon$ in all other cases. It can be checked that it
behaves in the desired way.
\par
Given a qfst, construct a quantum finite automaton as follows:
its states are
$Q\times\Sigma_2^{\leq t}$, where the second
component represents the tape content up to
$t=1+\max_{a,q} |f_a(q)|$ many symbols. Initial state is $(q_0,\epsilon)$.
Observe that, by the definition of the $T_a$, amplitude that is once shifted onto
output tapes of length larger than $1$ is never recovered for smaller
lengths. Hence we may as well cut such branches by immediate rejection:
the states in $Q\times\Sigma_2^{\geq 2}$ are all rejecting, and
so are $(Q_{\rm acc}\cup Q_{\rm rej})\times\{1\}$.
The accepting states are $Q_{\rm acc}\times\{0\}$.
\par
The transition function is partially defined by
$$W_a\ket{q,x}:=\sum_{p\in Q} (V_a)_{qp}\ket{p,xf_a(q)},\quad x\in\Sigma_2\cup\{\epsilon\},$$
(for $a=\$ $ this is followed by mapping $\ket{p,\epsilon}$
to a rejecting state, while leaving the other halting states
alone), i.e. the automaton performs like the qfst on the
elements of $Q$, and uses
the second component to simulate the output tape. We think of $W_a$
as being extended in an arbitrary way to a unitary map.
One can check that this construction behaves in the desired way.
\qed\par
\section{Decidability questions}
\label{sec:empty}
As is well known, the emptiness problem for the language accepted
by a deterministic (or nondeterministic) finite automaton is decidable.
Since the languages accepted by probabilistic
and quantum finite automata with bounded error
are regular~\cite{rabin,kondacs:watrous},
these problems are decidable, too.
\par
For finite state transducers the situation is more complicated:
In~\cite{whoever} it is shown that the emptiness problem
for deterministic and nondeterministic fst is decidable.
In contrast we have
\begin{thm}
\label{thm:empty1}
The emptiness problem for pfst computing a relation with
probability $2/3$ is undecidable.
\par
Likewise, the emptiness problem for qfst computing a relation with
probability $2/3$ is undecidable.
\end{thm}
{\it Proof}:
By reduction from the Post Correspondence Problem (PCP): let an instance
$(v_1,\ldots,v_k)$, $(w_1,\ldots,w_k)$ of PCP be given (i.e.
$v_i,w_i\in\Sigma^+$). It is to be decided whether there exists
a sequence $i_1,\ldots,i_n$ ($n>0$) such that
$$v_{i_1}\cdots v_{i_n}=w_{i_1}\cdots w_{i_n}.$$
Construct the following qfst with input alphabet
$\{1,\ldots,k\}$: it has states $q_0,q_v,q_w$, and $q_{\rm rej}$.
The initial transformation produces a superposition of
$q_v,q_w,q_{\rm rej}$, each with amplitude $1/\sqrt{3}$. The unitaries
$U_i$ are all identity, but the output function is defined as
$f_i(q_x)=x_i$, for $x\in\{v,w\}$. The endmarker maps $q_v,q_w$ to accepting
states. It is clear that $i_1,\ldots,i_n$ is a solution iff
$(i_1\ldots i_n,v_{i_1}\cdots v_{i_n})$ is in the relation computed
with probability $2/3$ (the automaton is easily modified so that
it rejects when the input is the empty word; in this way we
force $n>0$).
\par
By replacing the unitaries by stochastic matrices (with entries
the squared moduli of the corresponding amplitudes) the same applies
to pfst.
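The reduction can be made quite concrete. The following sketch (Python; the encoding of an instance as two tuples of strings and the name \texttt{T} are illustrative assumptions) computes the probability that the constructed pfst assigns to a candidate input/output pair, so that the value $2/3$ witnesses a solution of the PCP instance.
\begin{verbatim}
def T(x, y, v, w):
    # x: a nonempty tuple of indices i_1 ... i_n (0-based here),
    # y: a candidate output word; (v, w): the PCP instance
    if len(x) == 0:
        return 0.0                      # the empty input is rejected
    pv = ''.join(v[i] for i in x)       # branch q_v writes v_{i_1}...v_{i_n}
    pw = ''.join(w[i] for i in x)       # branch q_w writes w_{i_1}...w_{i_n}
    return ((pv == y) + (pw == y)) / 3  # the third branch always rejects

# e.g. for v = ('ab', 'b') and w = ('a', 'bb') the input (0, 1) gives
# pv = pw = 'abb', hence T((0, 1), 'abb', v, w) == 2/3: a PCP solution.
\end{verbatim}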
\par
Since it is well known that PCP is undecidable, it follows that
there can be no decision procedure for emptiness of the relation
computed by the constructed pfst, or qfst, respectively.
\qed
\begin{rem}
Undecidable questions for quantum finite automata were noted first
for ``$1\frac{1}{2}$--way'' automata, i.e.
ones which move only to the right on their input, but may also
keep their position on the tape. In~\cite{amano:iwama} it is
shown that the equivalence problem for these is undecidable.
The same was proved for 1--way--2--tape quantum finite automata
in~\cite{b:f:g}.
\end{rem}
\begin{conj}
\label{conj:decidable}
The emptiness problem for probabilistic and quantum fst computing a relation
with probability $0.99$ is decidable.
\par
The emptiness problem for probabilistic and quantum fst computing a relation
with a single--letter input alphabet, with probability $1/2+\varepsilon$ is decidable.
\end{conj}
To prove this, we would like to apply a packing argument in the space of
all total states, equipped with the above metric. However, this fails
because of the infinite volume of this space (for finite automata it
is finite, see~\cite{rabin} and~\cite{kondacs:watrous}). In any case,
a proof must involve the size of the gap between the upper and the lower
probability point, as the above theorem shows that it cannot possibly
work with gap $1/3$.
\par
Still, we can prove:
\begin{thm}
If the relation ${\cal R}$ is computed by a pfst or a qfst with
an isolated cutpoint, then
${\rm Range}({\cal R})=\{y:\exists x\ (x,y)\in{\cal R}\}$
is a recursive set (so, for each specific output, it is decidable
if it is ever produced above the threshold probability).
\end{thm}
{\it Proof}:
Let the cutpoint be $\alpha$, with isolation radius $\delta$, and
let $y=y_1\ldots y_n\in\Sigma_2^*$.\par
Define
$Y=\{y_1\ldots y_i:0\leq i\leq n\}$, the set of prefixes of $y$.
Consider the \emph{output--$y$--truncated total state}, which is an
element
\begin{equation*}\begin{split}
(\ket{\widetilde{\psi}},\widetilde{P}_{\rm ACC},\widetilde{p}_{\rm rej})
&\in \ell^2(Q\times Y)\oplus \ell^1(Y)\oplus \ell^1(\{{\rm REJ}\}) \\
&\subseteq \ell^2(Q\times\Sigma_2^*)\oplus \ell^1(\Sigma_2^*)\oplus \ell^1(\{{\rm REJ}\}).
\end{split}\end{equation*}
It is obtained from $(\ket{\psi},P_{\rm ACC},p_{\rm rej})$
-- with $\ket{\psi}=\sum_{q,w} \alpha_{qw}\ket{q}\otimes\ket{w}$ --
by defining
\begin{align*}
\ket{\widetilde{\psi}} &=\sum_{q\in Q,w\in Y} \alpha_{qw}\ket{q}\otimes\ket{w}, \\
\widetilde{P}_{\rm ACC} &=P_{\rm ACC}|_{Y}, \\
\widetilde{p}_{\rm rej} &=p_{\rm rej}+\sum_{q\in Q,w\not\in Y} |\alpha_{qw}|^2
+\sum_{w\not\in Y} P_{\rm ACC}(w).
\end{align*}
Let us denote this transformation by $J$.
Now observe that in the total state evolution of the qfst probability
once put outside $Y$ never returns, and likewise, amplitude once
put outside $Q\times Y$ never returns (compare proof of
theorem~\ref{thm:qfa-c-qfst}). Formally, this is reflected in the
relation
$$J T_{ab}(\ket{\widetilde{\psi}},\widetilde{P}_{\rm ACC},\widetilde{p}_{\rm rej})
=J T_b J T_a(\ket{\widetilde{\psi}},\widetilde{P}_{\rm ACC},\widetilde{p}_{\rm rej}).$$
Hence, if we want to know if $T(y|x)\geq\alpha+\delta$ for some $x$,
we may concentrate on the space of output--$y$--truncated total states,
which is finite dimensional, and its transformation functions
$\widetilde{T}_a=JT_a$.
\par
It is easily seen that there is a constant $\gamma$ such that
for all truncated total states $s,t$ and all $w\in\Sigma_1^*$
$$\|\widetilde{T}_w s-\widetilde{T}_w t\|\leq \gamma\|s-t\|.$$
Hence, for $x,x',w\in\Sigma_1^*$, if
$$\|\widetilde{T}_{\ddag x}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)
-\widetilde{T}_{\ddag x'}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)\|<\delta/\gamma,$$
then
$$\|\widetilde{T}_{\ddag xw\$}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)
-\widetilde{T}_{\ddag x'w\$}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)\|<\delta.$$
Because of the cutpoint isolation we find that either
both or none of $(x,y)$, $(x',y)$ is in ${\cal R}$.
Now, because of compactness of the set of truncated total states
reachable from the starting state, it follows that there is a constant
$c>1$ such that for all $x\in\Sigma_1^*$ of length $|x|\geq c$
one can write $x=v x_0 w$, with $0<|x_0|<c$, such that
$$\|\widetilde{T}_{\ddag v x_0}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)
-\widetilde{T}_{\ddag v}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)\|<\delta/\gamma.$$
Hence
$$\|\widetilde{T}_{\ddag x\$}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)
-\widetilde{T}_{\ddag vw\$}(\ket{q_0}\otimes\ket{\epsilon},{\bf 0},0)\|<\delta,$$
and thus, if $x$ had produced $y$ with probability at least $\alpha+\delta$,
so had the shorter string $vw$. This means that we only have to consider
input strings of length up to $c$ to decide whether $y\in{\rm Range}({\cal R})$.
\par
Obviously, this reasoning applies to pfst, too.
\qed\par
\section{Deterministic vs. Probabilistic Transducers}
\label{sec:det:prob}
Unlike the situation for finite automata, pfst are strictly more powerful than
their deterministic counterparts:
\begin{thm}
\label{thm:mmm}
For arbitrary $\varepsilon>0$ the relation
$${\cal R}_1=\{(0^m1^m,2^m):m\geq 0\}$$
can be computed by a pfst with
probability $1-\varepsilon$.
It cannot be computed by a dfst.
\end{thm}
{\it Proof}:
The idea is essentially from~\cite{frei:1}: for a natural number $k$
choose initially an alternative $j\in\{0,\ldots,k-1\}$, uniformly.
Then do the following: repeatedly read $k$ $0$'s, and output
$j$ $2$'s, until the $1$'s start (remember the remainder modulo $k$), then
repeatedly read $k$ $1$'s, and output $k-j$ $2$'s. Compare the remainder modulo
$k$ with what you remembered: if the two are equal, output this number of
$2$'s and accept, otherwise reject.
\par
It is immediate that on input $0^m1^m$ this machine outputs $2^m$ with
certainty. However, on input $0^m1^{m'}$ with $m\neq m'$ each output $2^n$ receives
probability at most $1/k$, so choosing $k\geq 1/\varepsilon$ suffices.
\par
That this cannot be done deterministically is straightforward: assume that
a dfst has produced $f(m)$ $2$'s after having read $m$ $0$'s. Because of
finiteness there are $k,l$ such that after reading $k$ $1$'s (while $n_0$
$2$'s were output) the internal state is the same as after reading $l$
further $1$'s (while $n$ $2$'s are output). So, the output for input
$0^m1^{k+rl}$ is $2^{f(m)+n_0+rn}$, and these pairs are either all
accepted or all rejected. Since at most one of them lies in ${\cal R}_1$, they must
all be rejected; but $k$ and $l$ can be chosen independently of $m$ (by finiteness
of the state set), so picking $m=k+rl$ makes $(0^m1^m,2^m)$ one of the rejected
pairs, contradicting acceptance.
\qed\par
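The probabilistic strategy is easy to simulate. The sketch below (Python; function names are illustrative assumptions) runs the machine for every choice of $j$ and collects the distribution of accepted outputs, confirming that $0^m1^m$ yields $2^m$ with certainty while for $m\neq m'$ every output carries probability at most $1/k$.
\begin{verbatim}
from collections import Counter

def run(word, k, j):
    # one run of the pfst of theorem thm:mmm for the alternative j
    m = word.count('0'); n = len(word) - m
    if word != '0' * m + '1' * n:          # wrong shape: reject
        return '', False
    out = (m // k) * j + (n // k) * (k - j)
    if m % k != n % k:                     # remainders differ: reject
        return '2' * out, False
    return '2' * (out + m % k), True       # append the remainder, accept

def accepted_outputs(word, k):
    dist = Counter()
    for j in range(k):                     # uniform initial choice of j
        out, acc = run(word, k, j)
        if acc:
            dist[out] += 1.0 / k
    return dict(dist)

# accepted_outputs('0'*7 + '1'*7, 5): all probability on '2'*7;
# accepted_outputs('0'*7 + '1'*12, 5): five outputs, each with prob. 1/5.
\end{verbatim}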
By observing that the random choice at the beginning
can be mimicked quantumly, and that all intermediate computations
are in fact reversible, we immediately get
\begin{thm}
\label{thm:mmm:q}
For arbitrary $\varepsilon>0$ the relation
${\cal R}_1$ can be computed by a qfst with
probability $1-\varepsilon$. \qed
\end{thm}
Note that this puts qfst in contrast to quantum finite automata:
in~\cite{a:f} it was shown that if a language is recognized with probability
strictly exceeding $7/9$ then it is possible to accept it with
probability $1$, i.e. reversibly deterministically.
\begin{thm}
The relation
$${\cal R}_2=\{(w2w,w):w\in\{0,1\}^*\}$$
can be computed by a pfst and by a qfst with probability $2/3$.
\end{thm}
{\it Proof}:
We do this only for qfst (the pfst is obtained by replacing the unitaries
involved by the stochastic matrices obtained by computing the squared
moduli of the entries): let the input be
$x2y$ (other forms are rejected).
With amplitude $1/\sqrt{3}$ each, branch into one of three `subprograms':
either copy $x$ to the output, or copy $y$ (accepting in both cases),
or reject without output. This works by the same reasoning as in the proof
of theorem~\ref{thm:empty1}.
\qed\par
\section{Probabilistic vs. Quantum Transducers}
\label{sec:qfst}
After seeing a few examples one might wonder if everything that
can be done by a qfst can be done by a pfst. That this is not so
is shown as follows:
\begin{thm}
\label{thm:mnk:q}
The relation
$${\cal R}_3=\{(0^m1^n2^k,3^m): n\neq k \wedge (m=k \vee m=n)\}$$
can be computed by a qfst with probability $4/7-\varepsilon$, for
arbitrary $\varepsilon>0$.
\end{thm}
\begin{thm}
\label{thm:mnk}
The relation ${\cal R}_3$ cannot be computed
by a pfst with probability bounded away from $1/2$.
In fact, not even with an isolated cutpoint.
\end{thm}
\par\noindent
{\it Proof (of theorem~\ref{thm:mnk:q})}:
For a natural number $l$ construct the following transducer: from
$q_0$ go to one of the states $q_1$, $q_{j,b}$ ($j\in\{0,\ldots,l-1\}$, $b\in\{1,2\}$),
with amplitude $\sqrt{3/7}$ for $q_1$ and with amplitude
$\sqrt{2/(7l)}$ each, for the others. Then proceed as follows (we assume the
form of the input to be $0^m1^n2^k$, others are rejected):
for $q_1$ output one $3$ for each $0$, and finally accept.
For $q_{j,b}$ repeatedly
read $l$ $0$'s and output $j$ $3$'s (remember the remainder $m\mod l$).
Then repeatedly read $l$ $b$'s and output $l-j$ $3$'s (output nothing
on the $(3-b)$'s). Compare the remainder with the one remembered, and reject
if they are unequal, otherwise output this number of $3$'s.
Reading $\$ $, perform the following unitary on the subspace
spanned by the $q_{j,b}$ and duplicate states $q_{j',b}$:
\begin{equation*}
(j\leftrightarrow j')\otimes\frac{1}{\sqrt{2}}\left(\begin{array}{rr}
1 & 1 \\
1 & -1
\end{array}\right).
\end{equation*}
Accepting are all $q_{j',2}$, rejecting are all $q_{j',1}$.
\par
Now assume that the input does not occur as the left member in the relation:
this means either $m\neq k$ and $m\neq n$, or $m=n=k$. In the first case
all the outputs in each of the $b$--branches of the program
are of different length, so get amplitude $\sqrt{2/(7l)}$. The final
step combines at most two of them, so any output is accepted with
probability at most $4/(7l)$. The second case is more interesting:
in all branches the amplitude is concentrated on the output $3^m$.
The rotation $V_\$ $ however is made such that the amplitude on
$q_{j',2}$ cancels out, so we end up in a rejecting state $q_{j',1}$.
In total, any output is accepted with probability at most $3/7+\varepsilon$.
\par
On the other hand, if the input occurs as the left member in the relation,
exactly one of the two $b$--branches of the program concentrates all amplitude
on output $3^m$, whereas the other spreads it to $l$ different lengths.
This means that the output $3^m$ is accepted with probability
at least $(l-1)\cdot 1/(7l)$, and others are accepted with probability
at most $1/(7l)$ each.
In total, the output $3^m$ is accepted with probability at least
$4/7-\varepsilon$, all others are accepted with probability at most
$3/7+\varepsilon$.
\qed\par
\noindent
{\it Proof (of theorem~\ref{thm:mnk})}:
By contradiction.
Suppose ${\cal R}_3$ is computed by a pfst $T$ with isolated cutpoint $\alpha$.
The following construction computes it with probability bounded away from
$1/2$: assuming $\alpha\leq 1/2$ (the other case is similar),
let $p=\frac{1/2-\alpha}{1-\alpha}$. Run one of the following subprograms
probabilistically: with probability $p$ output one $3$ for each $0$, and ignore
the other symbols (we may assume that the input has the form $0^m1^n2^k$),
with probability $1-p$ run $T$ on the input. It is easily seen that this
new pfst computes the same relation with probability bounded away from $1/2$.
\par
Hence, we may assume that $T$ computes ${\cal R}_3$ with probability
$\varphi>1/2$; from this we shall derive a contradiction.
The state set $Q$ together with any of the stochastic matrices
$V_0,V_1,V_2$ is a Markov chain. We shall use the classification of
states for finite Markov chains (see~\cite{kemeny:snell}): for $V_i$
$Q$ is partitioned into the set $R_i$ of \emph{transient} states
(i.e. the probability to find the process in $R_i$
tends to $0$) and a number of sets $S_{ij}$ of \emph{ergodic} states
(i.e. once in $S_{ij}$ the process does not leave this set, and all
states inside can be reached from each other, though maybe only by
a number of steps). Each $S_{ij}$ is divided further into its
\emph{cyclic} classes $C_{ij\nu}$ ($\nu\in{\mathbb{Z}}_{d_{ij}}$), $V_i$
mapping $C_{ij\nu}$ into $C_{ij\nu+1}$. By considering
sufficiently high powers $V_i^d$ (e.g. product of all the periods
$d_{ij}$) as transition matrices, all these cyclic sets become ergodic,
in fact, $V_i^d$ restricted to each is \emph{regular}.
\par
Using only these powers amounts to concentrating on input
of the form ${\bf 0}^m{\bf 1}^n{\bf 2}^k$, with ${\bf i}=i^d$,
which we will do from now on. Relabelling, the ergodic sets of
$V_{\bf i}=V_i^d$ will be denoted $S_{ij}$. Each has its
unique equilibrium distribution, to which every initial one
converges: denote it by $\pi_{ij}$. Furthermore, there are
limit probabilities $a(j_0)$ to find the process $V_{\bf 0}$
in $S_{0j_0}$ after long time, starting from $q_0$.
Likewise, there are limit probabilities $b(j_1|j_0)$ to
find the process $V_{\bf 1}$ in $S_{1j_1}$ after long time,
starting from $\pi_{0j_0}$, and similarly $c(j_2|j_1)$.
So, by the law of large numbers, for large enough $m,n,k$
the probability that $V_{\bf 0}$ has passed into $S_{0j_0}$
after $\sqrt{m}$ steps, after which $V_{\bf 1}$ has passed into $S_{1j_1}$
after $\sqrt{n}$ steps, after which $V_{\bf 2}$ has passed into $S_{2j_2}$
after $\sqrt{k}$ steps, is arbitrarily close to
$P(j_0,j_1,j_2)=a(j_0)b(j_1|j_0)c(j_2|j_1)$. (Note that these probabilities
sum to one).
\par
As a consequence of the ergodic theorem (or law of large numbers),
see~\cite{kemeny:snell},~ch.~4.2, in each of these events
$J=(j_0,j_1,j_2)$ the probable number of $3$'s written after
the final $\$ $, is linear in $m,n,k$:
$$T(3^{[(1-\delta)\lambda_J(m,n,k),(1+\delta)\lambda_J(m,n,k)]}
|{\bf 0}^m{\bf 1}^n{\bf 2}^k,J)\rightarrow 1,$$
as $m,n,k\rightarrow\infty$, with
$$\lambda_J(m,n,k)=\alpha_J m+\beta_J n+\gamma_J k,$$
and non--negative constants $\alpha_J,\beta_J,\gamma_J$.
\par
Since we require that for $k\neq m$
$$T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^k)\geq\varphi,$$
it is necessary that for a set ${\cal A}$ of events
$J=(j_0,j_1,j_2)$
$$\alpha_J+\beta_J=d,\ \gamma_J=0,\text{ with }P({\cal A})\geq \varphi.$$
In fact, as for $J\not\in{\cal A}$
$$T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^k,J)\rightarrow 0$$
for certain sequences $m,k\rightarrow\infty$, we even have
$$\sum_{J\in{\cal A}} P(J)T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^k,J)\geq\varphi-o(1).$$
\par
For $J\in{\cal A}$ it is obvious that the transducer
outputs \emph{no more} $3$'s, once
in $S_{2j_2}$. But this implies that for $m,k$ large enough,
$T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^k,J)$ is arbitrarily close
to $T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^m,J)$, hence
$$T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^m)\geq \varphi-o(1)>1-\varphi$$
for all sufficiently large $m$ (since $\varphi>1/2$),
contradicting $(0^{dm}1^{dm}2^{dm},3^{dm})\not\in{\cal R}_3$, which would
require $T(3^{dm}|{\bf 0}^m{\bf 1}^m{\bf 2}^m)\leq 1-\varphi$.
\qed\par
In general however, computing with isolated cutpoint is strictly weaker than
with probability bounded away from $1/2$ (observe that for finite automata,
probabilistic and quantum, recognizability
with an isolated cutpoint is equivalent to recognizability with probability
bounded away from $1/2$, see theorem~\ref{thm:qfa-c-qfst}):
\begin{thm}
\label{thm:cutpoint}
The relation
$${\cal R}_4=\{(0^m1^na,4^l):(a=2 \rightarrow l=m) \wedge (a=3 \rightarrow l=n)\}$$
can be computed by a pfst and by a qfst with an isolated cutpoint
(in fact, one arbitrarily close to $1/2$),
but not with a probability bounded away from $1/2$.
\end{thm}
\noindent
{\it Proof}:
First the construction (again, only for qfst):
initially branch into two possibilities
$c_0,c_1$, each with amplitude $1/\sqrt{2}$. Assume that the
input is of the correct form (otherwise reject), and in state
$c_i$ output one $4$ for each $i$, ignoring the $(1-i)$'s.
Then, if $a=2+i$, accept; if $a=3-i$, reject.
It is easily seen that $4^l$ is accepted with probability
$1/2$ if $(0^m1^na,4^l)\in{\cal R}_4$, and with probability
$0$ otherwise.
\par
That this cannot be done with probability above $1/2$ is clear
intuitively: the machine has to produce some output (because of
memory limitations), but whether to output $4^m$ or $4^n$
it cannot decide until seeing the last symbol.
Formally, assume that $|m-n|>4t$, with
$t=\max_{a,q} |f_a(q)|$. If
$$T_{\ddag 0^m1^n2 \$}((q_0,\epsilon),{\bf 0},0)[4^m]=
T(4^m|0^m1^n2)\geq 1/2+\delta,$$
necessarily
$$T_{\ddag 0^m1^n}((q_0,\epsilon),{\bf 0},0)[4^m]+
T_{\ddag 0^m1^n}((q_0,\epsilon),{\bf 0},0)[Q_{\rm non}\times 4^{[m-2t,m+2t]}]
\geq 1/2+\delta.$$
But this implies
$$T_{\ddag 0^m1^n}((q_0,\epsilon),{\bf 0},0)[4^n]+
T_{\ddag 0^m1^n}((q_0,\epsilon),{\bf 0},0)[Q_{\rm non}\times 4^{[n-2t,n+2t]}]
\leq 1/2-\delta,$$
hence
$$T_{\ddag 0^m1^n3 \$}((q_0,\epsilon),{\bf 0},0)[4^n]=
T(4^n|0^m1^n3)\leq 1/2-\delta,$$
contradicting $(0^m1^n3,4^n)\in{\cal R}_4$.
\qed\par
To conclude from these examples, however, that quantum is even better
than probabilistic, would be premature:
\begin{thm}
\label{thm:lastsymbol}
The relation
$${\cal R}_5=\{(wx,x):w\in\{0,1\}^*,x\in\{0,1\}\}$$
cannot be computed by a qfst with an isolated cutpoint.
(Obviously it is computed by a pfst with probability $1$,
i.e. a dfst).
\end{thm}
{\it Proof}:
The construction of a dfst computing the relation is straightforward.
To show that no qfst doing this job exists, we recall from~\cite{kondacs:watrous}
that $\{0,1\}^*0$ is not recognized by a $1$--way quantum finite automaton
with probability bounded away from $1/2$, and use theorem~\ref{thm:qfa-c-qfst}
for this language.
\qed
\section{Conclusion}
\label{sec:concl}
We introduced quantum finite state transducers, and showed some of their
unique properties: undecidability of the emptiness problem, as opposed to
deterministic finite state transducers and finite automata, and incomparability
of their power to that of probabilistic and deterministic finite state transducers.
As open questions we would like to point out primarily our
conjecture~\ref{conj:decidable}. Another interesting question is whether a relation
computed by a qfst with probability \emph{sufficiently close to $1$}
can be computed by a pfst.
This would be the closest possible analog to the ``$7/9$--theorem'' from~\cite{a:f}.
\end{document}
\begin{document}
\begin{tikzpicture}
\draw (-2,-0.5) node{};
\draw (3,7.5) node{};
\begin{scope}[shift={(0.5,2.5)}]
\draw (0,0) ellipse (0.2 and 0.1);
\draw (0,2) ellipse (0.2 and 0.1);
\draw (0.2,0)--(0.2,2);
\draw (-0.2,0)--(-0.2,2);
\draw[thick, red, bend left = 30, <->] (-0.8,1.8) to (0.8,1.8);
\draw[thick, red, bend left = 30, <->] (0.8,0.2) to (-0.8,0.2);
\draw[thick, blue, ->] (0,-1)--(0,-0.2);
\draw[thick, blue, ->] (0,3)--(0,2.2);
\draw[thick, blue, ->] (1,1)--(1.8,1);
\draw[thick, blue, ->] (-1,1)--(-1.8,1);
\end{scope}
\end{tikzpicture}
\end{document}
\begin{document}
\title{Gaussian classical-quantum channels: \\ gain of entanglement-assistance}
\author{A. S. Holevo \\
Steklov Mathematical Institute, Moscow}
\date{}
\maketitle
\begin{abstract}
In the present paper we introduce and study Bosonic Gaussian
classical-quantum (c-q) channels; the embedding of the classical input into
a quantum one is always possible, and therefore the classical entanglement-assisted capacity $C_{ea}$ under an appropriate input
constraint is well defined. We prove a general property of entropy increase
for the weak complementary channel, which implies the equality $C=C_{ea}$ (where $C$ is the unassisted capacity) for
a certain class of c-q Gaussian channels under an appropriate energy-type
constraint. On the other hand, we show by an explicit example that the
inequality $C<C_{ea}$ is not unusual for constrained c-q Gaussian channels.
\end{abstract}
\section{Introduction}
In finite dimensions a classical-quantum or quantum-classical channel can
always be represented as a quantum channel, by embedding the classical input
or output into a quantum system. Then it makes sense to speak about
the entanglement-assisted capacity $C_{ea}$ \cite{BSST}, \cite{E} of such a
channel, in particular, to compare it with the unentangled classical
capacity $C$. An interesting observation in \cite{BSST} was that
entanglement-assisted communication may be advantageous even for \emph{
entanglement-breaking} channels such as the depolarizing channel with
sufficiently high error probability. In the paper \cite{H2} we considered the case of
quantum-classical (measurement) channels, showing that generically $C<C_{ea}$
for such channels. For infinite dimensional (in particular, continuous
variable) systems an embedding of the classical output into a quantum system is not
always possible; however, entanglement-assisted transmission still makes
sense \cite{H2}; in particular this is the case for Bosonic Gaussian q-c
channels. The measurement channels demonstrate the gain of entanglement
assistance in the most spectacular way.
On the contrary, as shown in \cite{Sh-19}, finite dimensional c-q channels
(preparations) are \emph{essentially} characterized by the property of
having no gain of entanglement assistance, in this sense being ``more
classical'' than measurements. In the present paper we study Bosonic
Gaussian c-q channels; we observe that the embedding of the classical input
into a quantum one is always possible and $C_{ea}$ under the input constraint is
thus well defined. We prove a general property of entropy increase for the
weak complementary channel, which implies the equality $C=C_{ea}$ for a certain
class of c-q Gaussian channels under an appropriate energy-type constraint. On
the other hand, we show by an explicit example that the inequality $C<C_{ea}$
is not unusual for \emph{constrained} c-q Gaussian channels.
\section{Bosonic Gaussian Systems}
The main applications of infinite-dimensional quantum information theory are
related to Bosonic systems, for detailed description of which we refer to
Ch. 12 in \cite{H-SSQT}. Let $\mathcal{H}_{A}$ be the representation space
of the Canonical Commutation Relations (CCR)
\begin{equation}
W(z_{A})W(z_{A}^{\prime })=\exp \left( -\frac{i}{2}z_{A}^{t}\Delta
_{A}z_{A}^{\prime }\right) W(z_{A}^{\prime }+z_{A}) \label{CCR}
\end{equation}
with a coordinate symplectic space $(Z_{A},\Delta _{A})$ and the Weyl system
$W_{A}(z)=\exp (iR_{A}\cdot z_{A});\,z_{A}\in Z_{A}$. Here $R_{A}$ is the
row-vector of the canonical variables in $\mathcal{H}_{A}$, and $\Delta _{A}$
is the canonical skew-symmetric commutation matrix of the components of $
R_{A}$,
\begin{equation}
\Delta =\mathrm{diag}\left[
\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}
\right] _{j=1,\dots ,s}. \label{cf}
\end{equation}
Let $(Z_{A},\Delta _{A}),(Z_{B},\Delta _{B})$ be the symplectic spaces of
dimensions $2s_{A},2s_{B},$ which will describe the input and the
output of the channel (here $\Delta _{A},\Delta _{B}$ have the canonical
form (\ref{cf})), and let $W_{A}(z_{A}),W_{B}(z_{B})$ be the Weyl operators
in the Hilbert spaces $\mathcal{H}_{A},\mathcal{H}_{B}$ of the corresponding
Bosonic systems. A centered Gaussian channel $\Phi :\mathfrak{T}(\mathcal{H}
_{A})\rightarrow \mathfrak{T}(\mathcal{H}_{B})$ is defined via the action of
its dual $\Phi ^{\ast }$ on the Weyl operators:
\begin{equation}
\Phi ^{\ast }[W_{B}(z_{B})]=W(Kz_{B})\exp \left[ -\frac{1}{2}z_{B}^{t}\alpha
z_{B}\right] , \label{gaus-ch}
\end{equation}
where $K$ is the matrix of a linear operator $Z_{B}\rightarrow Z_{A}$, and $\alpha $ is a real
symmetric matrix satisfying
\begin{equation}
\alpha \geq \pm \frac{i}{2}\left( \Delta _{B}-K^{t}\Delta _{A}K\right),
\label{nid}
\end{equation}
where $\Delta _{B}-K^{t}\Delta _{A}K \equiv \Delta _{K}$ is a real
skew-symmetric matrix.
We will make use of the unitary dilation of the channel $\Phi $ constructed
in \cite{cegh1} (see also \cite{H-SSQT}). Consider the composite Bosonic
system $AD=BE$ with the Hilbert space $\mathcal{H}_{A}\otimes \mathcal{H}
_{D}\simeq \mathcal{H}_{B}\otimes \mathcal{\ H}_{E}$ corresponding to the
symplectic space $Z=Z_{A}\oplus Z_{D}=Z_{B}\oplus Z_{E},$ where $
(Z_{E},\Delta _{E})\simeq (Z_{A},\Delta _{A})$. Thus $[R_{A}\,R_{D}]=[R_{B}
\,R_{E}]$ describe two different splits of the set of canonical observables
for the composite system. Here $A$ and $B$ refer to input and output,
while $D$ and $E$ to input and output environments.
The channel $\Phi $ is then described by the
linear input-output relation (preserving the commutators)
\begin{equation}
R_{B}^{\prime }=R_{A}K+R_{D}K_{D}, \label{ior}
\end{equation}
where the system $D$ is in a centered Gaussian state $\rho _{D}$ with the
covariance matrix $\alpha _{D}$ such that
\begin{equation*}
\alpha =K_{D}^t\alpha _{D}K_{D}.
\end{equation*}
(for simplicity of notations we write $R_{A},\dots $ instead of $
R_{A}\otimes I_{D},\dots $). It is shown that the commutator-preserving
relation (\ref{ior}) can be complemented to the full linear canonical
transformation by putting
\begin{equation}
R_{E}^{\prime }=R_{A}L+R_{D}L_{D}, \label{iocomp}
\end{equation}
where the $\left( 2s_{A}\right) \times \left( 2s_{E}\right) -$matrix $L$ and the $
\left( 2s_{D}\right) \times \left( 2s_{E}\right) -$matrix $L_{D}$ are such
that the square $2\left( s_{A}+s_{D}\right) \times 2\left(
s_{B}+s_{E}\right) -$ matrix
\begin{equation}
T=\left[
\begin{array}{cc}
K & L \\
K_{D} & L_{D}
\end{array}
\right] \label{bltr}
\end{equation}
is symplectic, i.e. satisfies the relation
\begin{equation*}
T^{t }\left[
\begin{array}{cc}
\Delta _{A} & 0 \\
0 & \Delta _{D}
\end{array}
\right] T=\left[
\begin{array}{cc}
\Delta _{B} & 0 \\
0 & \Delta _{E}
\end{array}
\right] ,
\end{equation*}
which is equivalent to
\begin{eqnarray}
\Delta _{B} &=&K^{t}\Delta _{A}K+K_{D}^{t}\Delta _{D}K_{D},\quad \label{com}
\\
0 &=&K^{t}\Delta _{A}L+K_{D}^{t }\Delta _{D}L_{D}, \label{com1} \\
\Delta _{E} &=&L^{t}\Delta _{A}L+L_{D}^{t }\Delta _{D}L_{D}. \label{com2}
\end{eqnarray}
Denote by $U_{T}$ the unitary operator in $\mathcal{H}_{A}\otimes
\mathcal{H}_{D}\simeq \mathcal{H}_{B}\otimes \mathcal{H}_{E}$ implementing
the symplectic transformation $T$ so that
\begin{equation}
\lbrack R_{B}^{\prime }\,R_{E}^{\prime }]=U_{T}^{\ast
}[R_{B}\,R_{E}]U_{T}=[R_{A}\,R_{D}]T. \label{deistvo}
\end{equation}
Then we have the unitary dilation
\begin{equation}
\Phi ^{\ast }[W_{B}(z_{B})]=\mathrm{Tr}_{D}\left( I_{A}\otimes \rho
_{D}\right) U_{T}^{\ast }\left( W_{B}(z_{B})\otimes I_{E}\right) U_{T}.
\label{udi1}
\end{equation}
The \emph{weakly complementary} channel \cite{cegh1} is then
\begin{equation*}
\left( \tilde{\Phi}^{w}\right) ^{\ast }[W_{E}(z_{E})]=\mathrm{Tr}_{D}\left(
I_{A}\otimes \rho _{D}\right) U_{T}^{\ast }\left( I_{B}\otimes
W_{E}(z_{E})\right) U_{T}.
\end{equation*}
The equation (\ref{iocomp}) is nothing but the input-output relation for the
weakly complementary channel which thus acts as
\begin{equation}
\left( \tilde{\Phi}^{w}\right) ^{\ast }[W_{E}(z_{E})]=W_{A}(Lz_{E})\exp
\left[ -\frac{1}{2}z_{E}^{t}L_{D}^{t}\alpha _{D}L_{D}z_{E}\right] .
\label{Gc}
\end{equation}
In the case of pure state $\rho _{D}=|\psi _{D}\rangle \langle \psi _{D}|$
the relation (\ref{udi1}) amounts to the Stinespring representation for the
channel $\Phi $ with the isometry $V=U_{T}|\psi _{D}\rangle ,$ implying that
$\tilde{\Phi}^{w}$ is the complementary channel $\tilde{\Phi}$ (see e.g.
\cite{H-SSQT}).
\section{A property of Gaussian classical-quantum channels}
\label{2}
Usually a classical-quantum (c-q) channel is understood as a mapping $x\rightarrow \rho _{x}$ of the classical alphabet
$\mathcal{X}=\{x\}$ into density operators in a Hilbert space. In the case of a continuous alphabet there is
no problem with embedding a c-q channel into a quantum channel (as distinct from a q-c channel, see \cite{H2}).
Intuitively, let $\mathcal{X}$ be a continuous domain with measure $dx$; then the required embedding is
\begin{equation*}
\Phi [\rho ]=\int_{\mathcal{X}}\langle x|\rho |x\rangle \rho _{x}dx,
\end{equation*}
where $\left\{ |x\rangle ;x\in \mathcal{X}\right\} $ is a Dirac system
satisfying $\langle x|x^{\prime }\rangle =\delta (x-x^{\prime }).$ Here
$\Phi$ maps density operators into density operators. Notice that the
range of the dual channel $\Phi ^{\ast }$ consists of bounded operators diagonal
in the $x$-representation.
In general, we call a quantum channel $\Phi $ \emph{classical-quantum} (c-q) if the
range of $\Phi ^{\ast }$ consists of commuting operators. By using a structure
theorem for Abelian algebras of operators in a Hilbert space, it is then not
difficult to see that such a definition is essentially equivalent to the usual understanding.
It follows from (\ref{CCR}) that the necessary and sufficient condition for a Bosonic Gaussian
channel (\ref{gaus-ch}) to be c-q is
\begin{equation}
K^{t}\Delta _{A}K=0. \label{cl-qu}
\end{equation}
Thus $\Delta _{K}=\Delta _{B}$ and therefore $\det \Delta _{K}\neq 0.$ Under
this condition it was shown in \cite{H3} that in the unitary dilation
described above one can take $s_{E}=s_{A},\,s_{D}=s_{B}$ (and in fact $
E=A,D=B$). We call such a dilation ``minimal'' as it is indeed such at least
in the case of the pure state $\rho _{D}$, as follows from \cite{cegh1}. The
condition (\ref{nid}) then amounts to
\begin{equation}
\alpha \geq \pm \frac{i}{2}\Delta _{B}, \label{unc}
\end{equation}
saying that $\alpha $ is a covariance matrix of a centered Gaussian state $
\rho _{D}$. We say that the channel has \emph{minimal noise} if $\rho _{D}$
is a pure state, which is equivalent to the fact that $\alpha $ is a minimal
solution of the inequality (\ref{unc}). In quantum optics such channels are
called quantum-limited.
Let us explain how this notion of c-q channel agrees with the usual one
in the case of Bosonic Gaussian channels. The condition (
\ref{cl-qu}) means that the components of the operator $R_{A}K$ all commute,
hence their joint spectral measure is a sharp observable, and their
probability distribution $\mu _{\rho }(d^{2n}z)$ can be arbitrarily sharply
peaked around any point $z=\mathsf{E}_{\rho }(R_{A}K)^{t}=K^{t}m$ in the
support $\mathcal{X}$ of this measure by appropriate choice of the state $
\rho $. Here $\mathsf{E}_{\rho }$ denotes expectation with respect to $\rho $
and $m=\mathsf{E}_{\rho }(R_{A})^{t}$, hence $\mathcal{X}=\mathbf{Ran}
K^{t}\subseteq Z_{B}$. Thus in this case it is natural to identify $\Phi $
as a c-q channel determined by the family of states $z\rightarrow W(z)\rho
_{B}W(z)^{\ast };z\in \mathcal{X}$.
\textbf{Proposition 1.} \emph{Let }$\Phi $ \emph{be a Gaussian c-q channel,
then the weak complementary} $\tilde{\Phi}^{w}$ \emph{in the minimal unitary
dilation has nonnegative entropy gain:}
\begin{equation*}
S(\tilde{\Phi}^{w}[\rho ])-S(\rho )\geq 0\quad \text{\textrm{for all \ \ \ }}
\rho .
\end{equation*}
\emph{In particular if} $\Phi $ \emph{has minimal noise, then this holds for
the complementary channel $\tilde{\Phi}$, implying}
\begin{equation}\label{IleS}
I(\rho ,\Phi )\leq S(\Phi \lbrack \rho ]),
\end{equation}
\emph{where}
\begin{equation*}
I(\rho ,\Phi )=S(\rho )+S(\Phi \lbrack \rho ])-S(\tilde{\Phi}[\rho ])
\end{equation*}
\emph{is the quantum mutual information.}
\emph{Proof.} Taking into account (\ref{cl-qu}), the relation (\ref{com})
becomes
\begin{equation}
\quad \Delta _{B}=K_{D}^{t}\Delta _{D}K_{D}. \label{iskom2}
\end{equation}
We consider the minimal dilation for which $\Delta _{D}=\Delta _{B}$, $
\Delta _{E}=\Delta _{A}$, hence $K_{D}$ is a symplectic $2s_{B}\times 2s_{B}-
$ matrix. Then (\ref{com1}) implies
\begin{equation*}
L_{D}=-\left( K_{D}^{t}\Delta _{D}\right) ^{-1}K^{t}\Delta _{A}L.
\end{equation*}
Substituting this into (\ref{com2}) gives $\Delta _{E}=L^{t}ML,$ where
\begin{eqnarray*}
M &=&\Delta _{A}+\Delta _{A}K\left( \Delta _{D}K_{D}\right) ^{-1}\Delta
_{D}\left( K_{D}^{t}\Delta _{D}\right) ^{-1}K^{t}\Delta _{A} \\
&=&\Delta _{A}+\Delta _{A}KK_{D}^{-1}\Delta _{D}^{-1}\left( K_{D}^{t}\right)
^{-1}K^{t}\Delta _{A} \\
&=&\Delta _{A}+\Delta _{A}K\Delta _{B}^{-1}K^{t}\Delta _{A}.
\end{eqnarray*}
Therefore $1=\left( \det L\right) ^{2}\det M,$ where
\begin{eqnarray*}
\det M &=&\det \left( \Delta _{A}+\Delta _{A}K\Delta _{B}^{-1}K^{t}\Delta
_{A}\right) \\
&=&\det \left( I_{2s_{A}\times 2s_{A}}+K\Delta _{B}^{-1}K^{t}\Delta
_{A}\right) .
\end{eqnarray*}
Due to (\ref{cl-qu}) the matrix $N=K\Delta _{B}^{-1}K^{t}\Delta _{A}$
satisfies $N^{2}=0,$ hence it has only zero eigenvalues. Therefore $
I_{2s_{A}\times 2s_{A}}+N$ \ has only unit eigenvalues, implying $\det M=1$
and hence $\left\vert \det L\right\vert =1.$
By relation (\ref{Gc}), the channel $\tilde{\Phi}^{w}$ is the Gaussian
channel with the operator $L$ playing the role of $K.$ By using a result of
\cite{H1}, we have
\begin{equation*}
S(\tilde{\Phi}^{w}[\rho ])-S(\rho )\geq \log |\det L|=0.\qquad \square
\end{equation*}
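The key step of the proof, $\det M=1$, is easy to check numerically on a concrete c-q channel. The following sketch (Python with NumPy; the matrices are those of the two-mode-input example treated later in Sec.~\ref{3}, and the variable names are ours, purely for illustration) verifies the c-q condition (\ref{cl-qu}), the nilpotency of $N=K\Delta _{B}^{-1}K^{t}\Delta _{A}$, and the resulting $\det (I+N)=1$.
\begin{verbatim}
import numpy as np

Delta2  = np.array([[0., 1.], [-1., 0.]])    # one-mode commutation matrix
Delta_A = np.kron(np.eye(2), Delta2)         # two input modes
Delta_B = Delta2                             # one output mode
K = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])

print(np.allclose(K.T @ Delta_A @ K, 0))     # c-q condition (cl-qu): True
N = K @ np.linalg.inv(Delta_B) @ K.T @ Delta_A
print(np.allclose(N @ N, 0))                 # N is nilpotent: True
print(np.isclose(np.linalg.det(np.eye(4) + N), 1))  # hence det M = 1: True
\end{verbatim}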
\textbf{Proposition 2}. \emph{Let} $\Phi $ \emph{be a Gaussian c-q channel
with minimal noise } $\alpha $, \emph{such that } $\mathbf{Ran}K^{t}=Z_{B}$,
\emph{satisfying the input constraint\footnote{The trace here is understood in the sense of extended expectation, as in \cite{H1}.} }
\begin{equation}
\mathrm{Tr}\rho H\leq E, \label{constraint}
\end{equation}
\emph{where} $H=RK\epsilon K^{t}R^{t}$ \emph{and} $\epsilon $ \emph{is a real
symmetric, strictly positive definite matrix.}
\emph{Then denoting }$C(E)$ (\emph{resp.} $C_{ea}(E)$) \emph{the classical
(resp. entanglement-assisted) capacity of the channel under the constraint (
\ref{constraint}), }
\begin{equation}
C(E)=C_{ea}(E)=\sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]).
\label{main}
\end{equation}
An important condition here is $\mathbf{Ran}K^{t}=Z_{B}$, as we shall see in
the next Section. The form of the operator $H=RK\epsilon K^{t}R^{t}$ is such
that the constraint is expressed only in terms of the input observables of
the c-q channel. Without it one could hardly expect the equality (\ref{main}), although this requires further investigation. On the other hand,
the assumption of minimality of the noise seems to be related to the method of
the proof and could probably be relaxed, with the last expression in (\ref{main})
replaced by the supremum of the $\chi$-function.
\textbf{Lemma.} \emph{Under the assumption }(\ref{cl-qu})\emph{\
there exists a sequence of real symmetric} $\left( 2s_{A}\right) \times
\left( 2s_{A}\right) -$\emph{matrices} $\gamma _{n}$ \emph{satisfying the
conditions:}
\begin{enumerate}
\item $\gamma _{n}\geq \pm \frac{i}{2}\Delta _{A};$
\item $K^{t}\gamma _{n}K\rightarrow 0.$
\end{enumerate}
\emph{Proof.} The assumption (\ref{cl-qu}) means that the subspace $
\mathcal{N}=\mathrm{Ran}K\subseteq Z_{A}$ is isotropic, i.e. such that the form $
\Delta _{A}$ vanishes on it. From linear algebra it is known that
there is a symplectic basis in $Z_{A}$ of the form $\left\{ e_{1},\dots
,e_{k},h_{1},\dots ,h_{k},g_{1},\dots \right\} ,$ where $\left\{ e_{1},\dots
,e_{k}\right\} $ is a basis in $\mathcal{N},\left\{ h_{1},\dots
,h_{k}\right\} $ span the isotropic subspace $\mathcal{N}^{\prime }$ and
are such that $e_{i}^{t}\Delta _{A}h_{j}=\delta _{ij},$ and $\left\{
g_{1},\dots \right\} $ span the symplectic orthogonal complement of $
\mathcal{N}+\mathcal{N}^{\prime }.$ Then $\Delta _{A}$ has the block matrix
form in this basis
\begin{equation*}
\Delta _{A}=\left[
\begin{array}{ccc}
0 & I_{k} & 0 \\
-I_{k} & 0 & 0 \\
0 & 0 & \Delta _{g}
\end{array}
\right] .
\end{equation*}
Let $\varepsilon _{n}$ be a sequence of positive numbers converging to zero,
then
\begin{equation*}
\gamma _{n}=\left[
\begin{array}{ccc}
\varepsilon _{n}I_{k} & 0 & 0 \\
0 & \frac{1}{4\varepsilon _{n}}I_{k} & 0 \\
0 & 0 & \gamma _{g}
\end{array}
\right] ,
\end{equation*}
where $\gamma _{g}\geq \pm \frac{i}{2}\Delta _{g},$ satisfies the condition
1, and $K^{t}\gamma _{n}K=\varepsilon _{n}K^{t}K\rightarrow 0.\quad \square $
\emph{Proof of Proposition 2.} According to the general version of the finite-dimensional
result of \cite{E} proven in \cite{H-Sh},
\begin{equation}
C_{ea}(E)=\sup_{\rho :\mathrm{Tr}\rho H\leq E}I(\rho, \Phi ).
\end{equation}
This version makes the only assumption that $H$ is a positive self-adjoint operator,
allowing the constraint set to be non-compact, which is important for our considerations in Sec. \ref{3}.
Due to (\ref{IleS}), it is then sufficient to show that
\begin{equation*}
C(E)\geq \sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]).
\end{equation*}
We first consider the supremum on the right-hand side. Since the constraint
operator $H=RK\epsilon K^{t}R^{t}$ is quadratic in the canonical variables $
R,$ the supremum can be taken over (centered) Gaussian states. Since the
entropy of Gaussian state with covariance matrix $\alpha $ is equal to
\begin{equation}
\frac{1}{2}\mathrm{Sp}g\left( \mathrm{abs}\left( \Delta ^{-1}\alpha \right)
-I/2\right) =\frac{1}{2}\sum_{j=1}^{2s}g(|\lambda _{j}|-\frac{1}{2}),
\label{entropy}
\end{equation}
where $g(x)=(x+1)\log (x+1)-x\log x$, Sp denotes trace of the matrices as
distinct from that of operators in $\mathcal{H}$, and $\lambda _{j}$ are the
eigenvalues of $\Delta ^{-1}\alpha $ (see e.g. \cite{H-SSQT}, Sec. 12.3.4),
we have
\begin{eqnarray}
\sup_{\rho :\mathrm{Tr}\rho H\leq E}S(\Phi \lbrack \rho ]) &=&\frac{1}{2}
\sup_{\beta :\mathrm{Sp}K\epsilon K^{t}\beta \leq E}\mathrm{Sp}g\left(
\mathrm{abs}\left( \Delta _{B}^{-1}\left( K^{t}\beta K+\alpha \right)
\right) -I/2\right) \notag \\
&=&\frac{1}{2}\max_{\mu :\mathrm{Sp}\epsilon \mu \leq E}\mathrm{Sp}g\left(
\mathrm{abs}\left( \Delta _{B}^{-1}\left( \mu +\alpha \right) \right)
-I/2\right) . \label{max}
\end{eqnarray}
Here in the first equality we used the formula (\ref{entropy}) for the
output state with the covariance matrix $K^{t}\beta K+\alpha ,$ and in the
second we denoted $\mu =K^{t}\beta K$ and used the fact that for every $\mu $
such a $\beta $ exists due to the condition $\mathbf{Ran}K^{t}=Z_{B}$. In
the second expression the supremum is attained on some $\mu _{0}$ due to
nondegeneracy of $\epsilon$ (see \cite{H-SSQT}, Sec. 12.5). Denote by $\beta _{0}$ a solution of the
equation $\mu _{0}=K^{t}\beta _{0}K.$
We construct a sequence of suboptimal ensembles as follows. Using the
condition 1 of the Lemma, we let $\rho _{n}$ be a centered Gaussian state in $\mathcal{H}_{A}
$ with the covariance matrices $\gamma _{n}$ and $\rho
_{n}(z)=D(z)\rho _{n}D(z)^{\ast },z\in Z_{A},$ be the family of the
displaced states, where $D(z)$ are the displacement operators obtained by
re-parametrization of the Weyl operators $W(z)$. Define the Gaussian
probability density $p_{n}(z)$ with zero mean and the covariance matrix $
k_{n}\beta _{0},$ where $k_{n}=1-\mathrm{Sp}\gamma _{n}K\epsilon K^{t}/E>0$
for large enough $n$ by the condition 2$.$ The average state of this
ensemble is centered Gaussian with the covariance matrix $\gamma
_{n}+k_{n}\beta _{0}.$ Taking into account that $S(\rho _{n}(z))=S(\rho
_{n}),$ the $\chi -$quantity of this ensemble is equal to
\begin{equation*}
\chi _{n}=\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta
_{B}^{-1}\left( K^{t}\gamma _{n}K+k_{n}K^{t}\beta _{0}K+\alpha \right)
\right) -I/2\right)
\end{equation*}
\begin{equation*}
-\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left(
K^{t}\gamma _{n}K+\alpha \right) \right) -I/2\right) .
\end{equation*}
By the condition 2 this converges to
\begin{equation*}
\frac{1}{2}\mathrm{Sp}\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\left(
K^{t}\beta _{0}K+\alpha \right) \right) -I/2\right) -\frac{1}{2}\mathrm{Sp}
\,g\left( \mathrm{abs}\left( \Delta _{B}^{-1}\alpha \right) -I/2\right) .
\end{equation*}
By minimality of the noise the second term is entropy of a pure state, equal
to zero, and the first term is just the maximum in (\ref{max}). Thus
\begin{equation*}
C(E)\geq \limsup_{n\rightarrow \infty }\chi _{n}=\sup_{\rho :\mathrm{Tr}\rho
H\leq E}S(\Phi \lbrack \rho ]).\qquad \square
\end{equation*}
\section{One mode}\label{3}
Let $q,p$ be a Bosonic mode, $W(z)=\exp i(xq+yp)$ the corresponding Weyl
operator and $D(z)=\exp i(yq-xp)$ the displacement operator. We give two
examples where the channel describes classical signal with additive Gaussian
(minimal) quantum noise, in the first case the signal being two-dimensional
while in the second -- one-dimensional. As we have seen, a c-q channel can
be described in two equivalent ways: as a mapping $m\rightarrow \rho _{m},$
where $m$ is the classical signal, and as an extended quantum channel
satisfying (\ref{cl-qu}).
1. We first consider the minimal noise c-q channel with a two-dimensional real
signal and show the coincidence of the classical entanglement-assisted and
unassisted capacities of this channel under an appropriate input constraint, by
using the result of Sec. \ref{2}. Such a coincidence is generic for
unconstrained finite-dimensional channels \cite{E}, but in infinite
dimensions, as we will see in the second example, the situation is different.
Some sufficient conditions for the equality $C=C_{ea}$ were given in \cite
{Sh-19}; however, they do not apply here.
Let $m=(m_{q},m_{p})\in \mathbf{R}^{2}$ and consider the mapping $
m\rightarrow \rho _{m}$, where $\rho _{m}$ is the state with the
characteristic function
\begin{equation}
\mathrm{Tr}\rho _{m}W(z)=\exp \left[ i(m_{q}x+m_{p}y)-\frac{\left( N+\frac{1
}{2}\right) }{2}(x^{2}+y^{2})\right] , \label{component}
\end{equation}
so that
\begin{equation*}
\rho _{m}=D(m)\rho _{0}D(m)^{\ast }.
\end{equation*}
The mapping $m\rightarrow \rho _{m}$ can be considered as transmission of
the two-dimensional classical signal $m=(m_{q},m_{p})$ with the additive
quantum Gaussian noise $q,p$ with the average number of quanta $N$. The
minimal noise corresponds to $N=0$.
The classical capacity of this channel with the input constraint
\begin{equation}
\frac{1}{2}\int \left\Vert m\right\Vert ^{2}\,\,p(m)d^{2}m\leq E \label{3-5}
\end{equation}
is given by the expression (see e.g. \cite{H-SSQT}, Sec. 12.1.4)
\begin{equation*}
C(E)=g(N+E)-g(N),
\end{equation*}
with the optimal distribution
\begin{equation}
p(m)=\frac{1}{2\pi E}\,\mbox{exp}\left( -\frac{\left\Vert m\right\Vert ^{2}}{
2E}\right) \label{3-9}
\end{equation}
in the ensemble of coherent states $|m\rangle\langle m|$.
In particular, for the minimal noise channel ($N=0$),
\begin{equation}
C(E)=g(E)=S(\bar{\rho}), \label{cge}
\end{equation}
where $\bar{\rho}$ is the Gaussian state with
\begin{equation*}
\mathrm{Tr}\bar{\rho}W(z)=\exp \left[ -\frac{\left( E+\frac{1}{2}\right) }{2}
(x^{2}+y^{2})\right] .
\end{equation*}
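As a small numerical illustration (Python with NumPy; the helper names are ours and purely illustrative), the entropy formula (\ref{entropy}) applied to the covariance matrix $\left( E+\frac{1}{2}\right) I_{2}$ of $\bar{\rho}$ indeed reproduces $g(E)$:
\begin{verbatim}
import numpy as np

def g(x):
    # g(x) = (x+1) log(x+1) - x log x, with g(0) = 0
    return (x + 1) * np.log(x + 1) - x * np.log(x) if x > 0 else 0.0

def gaussian_entropy(alpha, Delta):
    # formula (entropy): (1/2) Sp g(abs(Delta^{-1} alpha) - I/2)
    lam = np.abs(np.linalg.eigvals(np.linalg.inv(Delta) @ alpha))
    return 0.5 * sum(g(l - 0.5) for l in lam)

E = 2.0
Delta = np.array([[0., 1.], [-1., 0.]])
alpha_bar = (E + 0.5) * np.eye(2)      # covariance matrix of \bar\rho
print(gaussian_entropy(alpha_bar, Delta), g(E))   # both equal g(E)
\end{verbatim}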
Let us now embed this channel into a quantum
Gaussian channel $\Phi $ in the spirit of the previous Section. Since the input $
m=(m_{q},m_{p})$ is two-dimensional classical, one has to use two Bosonic
input modes $q_{1},p_{1},q_{2},p_{2}$ to describe it quantum-mechanically,
so that e.g. $m_{q}=q_{1},m_{p}=q_{2}.$ The environment is one mode $q,p$ in
the Gaussian state $\rho _{0}$ so the output is given by the equations
\begin{eqnarray}
q^{\prime } &=&q+q_{1}=q+m_{q}; \label{eqcha} \\
p^{\prime } &=&p+q_{2}=p+m_{p}, \notag
\end{eqnarray}
and the channel $\Phi $ parameters are
\begin{equation*}
K=\left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
0 & 1 \\
0 & 0
\end{array}
\right] ,\quad \alpha =\left( N+\frac{1}{2}\right) I_{2}.
\end{equation*}
The equations for the environment modes describing the weakly complementary
channel $\tilde{\Phi}^{w}$ are
\begin{eqnarray}
q_{1}^{\prime } &=&q_{1}, \label{eqen} \\
p_{1}^{\prime } &=&p_{1}-p-q_{2}/2, \notag \\
q_{2}^{\prime } &=&q_{2}, \notag \\
p_{2}^{\prime } &=&p_{2}+q+q_{1}/2. \notag
\end{eqnarray}
In fact, the set of equations (\ref{eqcha}), (\ref{eqen}) is the same as for
the quantum channel with additive classical Gaussian noise (see \cite{H-SSQT}
, Ex. 12.42), but in the latter case the input variables are $q,p$ while in
the former -- $q_{1},p_{1},q_{2},p_{2}$ (in both cases the output is $
q^{\prime },p^{\prime }$). If $N=0$ so that $\rho _{0}$ is pure, these
equations describe the complementary channel $\tilde{\Phi}$.
Having realized the c-q channel as a quantum one (i.e. a channel with
quantum input and output), it makes sense to speak of its
entanglement-assisted capacity. Under the same constraint it is given by the
expression
\begin{equation}
C_{ea}(E)=\sup_{\rho _{12}\in \mathfrak{S}_{E}}I(\rho _{12},\Phi ),
\label{cea}
\end{equation}
where
\begin{equation*}
\mathfrak{S}_{E}=\left\{ \rho _{12}:\mathrm{Tr}\rho _{12}\left( \frac{
q_{1}^{2}+q_{2}^{2}}{2}\right) \leq E\right\}
\end{equation*}
corresponds to the constraint (\ref{3-5}). Notice that the constraint
operator $H=\frac{q_{1}^{2}+q_{2}^{2}}{2}$ is unusual in that it is given by
a \emph{degenerate} quadratic form in the input variables $
q_{1},p_{1},q_{2},p_{2}$. In this case the set $\mathfrak{S}_{E}$ is not
compact, the supremum in (\ref{cea}) is not attained and
to obtain this formula we need to use a result from \cite{H-Sh}.
Now assume the minimal noise $N=0$ and let us show that
\begin{equation}
C_{ea}(E)=C(E)=g(E). \label{ccea}
\end{equation}
Proposition 1 of Sec. \ref{2} implies
\begin{equation*}
C_{ea}(E)\leq \sup_{\rho _{12}\in \mathfrak{S}_{E}}S(\Phi \lbrack \rho
_{12}]).
\end{equation*}
But
\begin{equation*}
\Phi \lbrack \mathfrak{S}_{E}]=\left\{ \bar{\rho}_{p}:p\in \mathcal{P}
_{E}\right\},
\end{equation*}
where $\mathcal{P}_{E}$ is defined by (\ref{3-9}), as can be seen from the
equations of the channel (\ref{eqcha}) and the identification of the
probability density $p(m_{q}\,,m_{p})$ with that of observables $q_{1},q_{2}$
in the state $\rho _{12}.$ Invoking (\ref{cge}) gives $\sup_{\rho _{12}\in
\mathfrak{S}_{E}}S(\Phi \lbrack \rho _{12}])=g(E)$ and hence the equality (
\ref{ccea}). This example is a special case of Proposition 2 in Sec. \ref
{2}, all the conditions of which are fulfilled with $\mathbf{Ran}K^t=Z_{B}=
\mathbf{R}^2$ and
\begin{equation*}
\gamma _{n}=\left[
\begin{array}{cccc}
\varepsilon _{n} & 0 & 0 & 0 \\
0 & \frac{1}{4\varepsilon _{n}} & 0 & 0 \\
0 & 0 & \varepsilon _{n} & 0 \\
0 & 0 & 0 & \frac{1}{4\varepsilon _{n}}
\end{array}
\right] .
\end{equation*}
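For this $\gamma _{n}$ the two conditions of the Lemma are immediate to verify numerically; a short sketch (Python with NumPy, taking $\varepsilon _{n}=10^{-3}$ for concreteness; variable names are ours) checks that $\gamma _{n}+\frac{i}{2}\Delta _{A}\geq 0$ and that $K^{t}\gamma _{n}K=\varepsilon _{n}I_{2}$, which vanishes as $\varepsilon _{n}\rightarrow 0$:
\begin{verbatim}
import numpy as np

eps     = 1e-3
gamma_n = np.diag([eps, 1 / (4 * eps), eps, 1 / (4 * eps)])
Delta_A = np.kron(np.eye(2), np.array([[0., 1.], [-1., 0.]]))
K = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])

# condition 1: gamma_n >= +/- (i/2) Delta_A, i.e. gamma_n + (i/2) Delta_A >= 0
print(np.all(np.linalg.eigvalsh(gamma_n + 0.5j * Delta_A) >= -1e-12))  # True
# condition 2: K^t gamma_n K = eps * I_2 -> 0 as eps -> 0
print(K.T @ gamma_n @ K)
\end{verbatim}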
2. Now we give an example with $C(E)<C_{ea}(E).$ Let $m\in \mathbf{R}$ be a
real one-dimensional signal and the channel is $m\rightarrow \rho _{m}$,
where $\rho _{m}$ is the state with the characteristic function
\begin{equation}
\mathrm{Tr}\rho _{m}W(z)=\exp \left[ imx-\frac{1}{2}(\sigma ^{2}x^{2}+\frac{1
}{4\sigma ^{2}}y^{2})\right] , \label{component1}
\end{equation}
so that
\begin{equation*}
\rho _{m}=D(m,0)\rho _{0}D(m,0)^{\ast }.
\end{equation*}
The mapping $m\rightarrow \rho _{m}$ can be considered as transmission of
the classical signal $m$ with the additive noise arising from the $q$-component of quantum Gaussian
mode $q,p$ with
the variances $\mathsf{D}q=\sigma ^{2},\mathsf{D}p=\frac{1}{4\sigma ^{2}}$
and zero covariance between $q$ and $p$. The state $\rho _{0}$ is pure
(squeezed vacuum) corresponding to a minimal noise.
The constraint on the input probability distribution $p(m)$ is defined as
\begin{equation}
\int m^{2}\,\,p(m)dm\leq E, \label{3-51}
\end{equation}
where $E$ is a positive constant. As the component $p$ is not affected by
the signal, from information-theoretic point of view this channel is
equivalent to the classical additive Gaussian noise channel $m\rightarrow
m+q,$ and its capacity under the constraint (\ref{3-51}) is given by the
Shannon formula
\begin{equation}
C(E)=\frac{1}{2}\log \left( 1+r\right) , \label{gacapa1}
\end{equation}
where $r=E/\sigma ^{2}$ is the \emph{signal-to-noise ratio}.
A different way to describe this channel is to represent it as a quantum
Gaussian channel $\Phi $. Introducing the input mode $q_{1},p_{1},$ so that $
m=q_{1},$ with the environment mode $q,p$ in the state $\rho _{0}$, the
output is given by the equations
\begin{eqnarray}
q_{1}^{\prime } &=&q_{1}+q; \label{eqcha1} \\
p_{1}^{\prime } &=&\quad \quad p, \notag
\end{eqnarray}
and the channel $\Phi $ parameters are
\begin{equation*}
K=\left[
\begin{array}{cc}
1 & 0 \\
0 & 0
\end{array}
\right] ,\quad \alpha =\left[
\begin{array}{cc}
\sigma ^{2} & 0 \\
0 & \frac{1}{4\sigma ^{2}}
\end{array}
\right] .
\end{equation*}
The equations for the environment mode describing the complementary channel $
\tilde{\Phi}$ are (see \cite{H-SSQT})
\begin{eqnarray}
q^{\prime } &=&q_{1}, \label{eqen1} \\
p^{\prime } &=&p_{1}-p, \notag
\end{eqnarray}
and the set of equations (\ref{eqcha1}), (\ref{eqen1}) describes the
canonical transformation of the composite system = system+environment.
The classical entanglement-assisted capacity of this channel under the same
constraint is given by the expression
\begin{equation}
C_{ea}(E)=\sup_{\rho _{1}\in \mathfrak{S}_{E}^{(1)}}I(\rho _{1},\Phi ),
\label{cea1}
\end{equation}
where $\mathfrak{S}_{E}^{(1)}=\left\{ \rho _{1}:\mathrm{Tr}\rho
_{1}q_{1}^{2}\leq E\right\} .$ As in the first example, the constraint
operator $q_{1}^{2}$ is given by degenerate quadratic form in the input
variables $q_{1},p_{1}$, the set $\mathfrak{S}_{E}^{(1)}$ is not compact and
the supremum in (\ref{cea}) is not attained.
Let us compute the entanglement-assisted capacity. For this consider the
values of $I(\rho _{A},\Phi )$ for centered Gaussian states $\rho _{A}=\rho
_{1}$ with covariance matrices
\begin{equation*}
\alpha _{1}=\left[
\begin{array}{cc}
E & 0 \\
0 & E_{1}
\end{array}
\right] ,
\end{equation*}
satisfying the uncertainty relation $EE_{1}\geq \frac{1}{4}$ and belonging
to the set $\mathfrak{S}_{E}^{(1)}$ with the equality.
We use the formula (\ref{entropy}) implying
\begin{equation*}
S(\rho _{A})=g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) .
\end{equation*}
According to (\ref{eqcha1}), the output state $\rho _{B}=\Phi \lbrack \rho
_{A}]$ has the covariance matrix
\begin{equation*}
\alpha _{B}=\left[
\begin{array}{cc}
E+\sigma ^{2} & 0 \\
0 & \frac{1}{4\sigma ^{2}}
\end{array}
\right] ,
\end{equation*}
with the entropy
\begin{equation*}
S(\rho _{B})=g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}
\right) .
\end{equation*}
Similarly, according to (\ref{eqen1}) the state $\rho _{E}=\tilde{\Phi}[\rho _{A}]$ of
the environment has the covariance matrix
\begin{equation*}
\alpha _{E}=\left[
\begin{array}{cc}
E & 0 \\
0 & E_{1}+\frac{1}{4\sigma ^{2}}
\end{array}
\right] ,
\end{equation*}
with the entropy
\begin{equation*}
S(\rho _{E})=g\left( \sqrt{EE_{1}+\frac{E}{4\sigma ^{2}}}-\frac{1}{2}\right)
.
\end{equation*}
Summing up,
\begin{eqnarray*}
I(\rho _{A},\Phi ) &=&S(\rho _{A})+S(\rho _{B})-S(\rho _{E}) \\
&=&g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right)
-\delta _{1}(E_{1}),
\end{eqnarray*}
where
\begin{equation*}
\delta _{1}(E_{1})=g\left( \sqrt{EE_{1}+\frac{E}{4\sigma ^{2}}}-\frac{1}{2}
\right) -g\left( \sqrt{EE_{1}}-\frac{1}{2}\right)
\end{equation*}
is a positive function in the range $[\frac{1}{4E},\infty ),$ decreasing
from $g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}\right) $
to 0 for $E_{1}\rightarrow \infty $ (this follows from the asymptotic $
g\left( x\right) =\log x+1+o(1)$). Thus
\begin{equation*}
C_{ea}(E)\geq g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}
\right) .
\end{equation*}
Let us show that in fact there is equality here, by using the concavity of
the quantum mutual information (see \cite{H-SSQT}, Sec. 12.5). For a given input state $\rho $ with finite
second moments consider the state
\begin{equation*}
\tilde{\rho}=\frac{1}{2}\left( \rho +\rho ^{\top }\right) ,
\end{equation*}
where the transposition $^{\top }$ corresponds to the antiunitary
conjugation $q,p\rightarrow q,-p.$ The state $\tilde{\rho}$ has the same
variances $\mathsf{D}q, \mathsf{D}p$ as $\rho$, and zero covariance between $q$ and $p$.
The channel (\ref{eqcha1}) is covariant with respect to the transposition;
by the aforementioned concavity, $I(\tilde{\rho},\Phi )\geq I(\rho ,\Phi ),$
moreover, $I(\tilde{\rho}_{G},\Phi )\geq I(\tilde{\rho},\Phi ),$ where $
\tilde{\rho}_{G}$ is the Gaussian state with the same first and second
moments as $\tilde{\rho}.$ Thus
\begin{eqnarray*}
C_{ea}(E) &=&g\left( \sqrt{\frac{E}{4\sigma ^{2}}+\frac{1}{4}}-\frac{1}{2}
\right) =g\left( \frac{\sqrt{1+r}-1}{2}\right) \\
&=&\frac{\sqrt{1+r}+1}{2}\log \frac{\sqrt{1+r}+1}{2}-\frac{\sqrt{1+r}-1}{2}
\log \frac{\sqrt{1+r}-1}{2},
\end{eqnarray*}
where $r=E/\sigma ^{2}$ is the signal-to-noise ratio. Comparing this with
(\ref{gacapa1}), one has $C_{ea}(E)>C(E)$ for $E>0$ (see Appendix), with the
entanglement-assistance gain $C_{ea}(E)/C(E)\sim -\frac{1}{2}\log r$ as
$r\rightarrow 0$ and $C_{ea}(E)/C(E)\rightarrow 1$ as $r\rightarrow \infty $
(see Figures).
As is to be expected, Proposition 2 is not applicable, as $\mathrm{rank}\,
K^{t}=1<\dim Z_{B}$ here, while
\begin{equation*}
\gamma _{n}=\left[
\begin{array}{cc}
\varepsilon _{n} & 0 \\
0 & \frac{1}{4\varepsilon _{n}}
\end{array}
\right]
\end{equation*}
still satisfies conditions 1 and 2 of the Lemma.
\section{Appendix}
1. Consider the channel (\ref{eqcha}). It is instructive to compare its unassisted classical capacity $C(E)$
given by (\ref{ccea}) with the values of $I(\rho _{12},\Phi )$ for centered Gaussian states $\rho
_{12}=\rho _{A}$ with the covariance matrices
\begin{equation*}
\alpha _{12}=\left[
\begin{array}{cccc}
E & 0 & 0 & 0 \\
0 & E_{1} & 0 & 0 \\
0 & 0 & E & 0 \\
0 & 0 & 0 & E_{1}
\end{array}
\right] ,
\end{equation*}
satisfying the uncertainty relation $EE_{1}\geq \frac{1}{4}$ and belonging
to the set $\mathfrak{S}_{E}$ with equality.
We then find
\begin{equation*}
S(\rho _{12})=2g\left( \sqrt{EE_{1}}-\frac{1}{2}\right) .
\end{equation*}
According to (\ref{eqcha}), $\rho _{B}=\Phi \lbrack \rho _{A}]$ has the
covariance matrix
\begin{equation*}
\alpha _{B}=\left[
\begin{array}{cc}
E+\frac{1}{2} & 0 \\
0 & E+\frac{1}{2}
\end{array}
\right] ,
\end{equation*}
with the entropy $g(E),$ and according to (\ref{eqen}) the state $\rho _{E}$
of the environment has the covariance matrix
\begin{equation*}
\alpha _{E}=\left[
\begin{array}{cccc}
E & 0 & 0 & E/2 \\
0 & \tilde{E}_{1} & -E/2 & 0 \\
0 & -E/2 & E & 0 \\
E/2 & 0 & 0 & \tilde{E}_{1}
\end{array}
\right] ,
\end{equation*}
where $\tilde{E}_{1}=E_{1}+\frac{1}{2}+\frac{E}{4}.$ The eigenvalues of $
\Delta _{E}^{-1}\alpha _{E}$ are $\sqrt{E}\left( \sqrt{\tilde{E}_{1}}\pm
\frac{1}{2}\sqrt{E}\right) $ and have multiplicity 2. Thus
\begin{equation*}
S(\rho _{E})=S(\tilde{\Phi}[\rho _{12}])=g\left( \sqrt{E}\left( \sqrt{\tilde{
E}_{1}}+\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right)
\end{equation*}
\begin{equation*}
+g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}-\frac{1}{2}\sqrt{E}\right) -
\frac{1}{2}\right) .
\end{equation*}
Summing up,
\begin{equation*}
I(\rho _{12},\Phi )=g(E)-\delta (E_{1}),
\end{equation*}
where
\begin{eqnarray*}
\delta (E_{1}) &=&g\left( \sqrt{E}\left( \sqrt{\tilde{E}_{1}}+\frac{1}{2}
\sqrt{E}\right) -\frac{1}{2}\right) +g\left( \sqrt{E}\left( \sqrt{\tilde{E}
_{1}}-\frac{1}{2}\sqrt{E}\right) -\frac{1}{2}\right) \\
&&-2g\left( \sqrt{EE_{1}}-\frac{1}{2}\right)
\end{eqnarray*}
is a positive function of $E_{1}$ on $[\frac{1}{4E},\infty )$, varying from $
g(E)$ to 0. Hence the value (\ref{ccea}) is attained only asymptotically for
the input states $\rho _{12}$ with momentum variance $E_{1}\rightarrow
\infty .$
2. Introducing the new variable $x=\sqrt{1+r}\geq 1,$ we have
\begin{equation*}
C(E)=\log x\equiv f_{1}(x),\quad C_{ea}(E)=\frac{x+1}{2}\log \frac{x+1}{2}-
\frac{x-1}{2}\log \frac{x-1}{2}\equiv f_{2}(x).
\end{equation*}
Then $f_{1}(1)=f_{2}(1)$, $f_{1}^{\prime }(\infty )=f_{2}^{\prime }(\infty )=0$
and $f_{1}^{\prime \prime }(x)>f_{2}^{\prime \prime }(x)$ for $x>1$. Hence
$f_{1}^{\prime }-f_{2}^{\prime }$ is increasing and tends to 0 at infinity, so
$f_{1}^{\prime }(x)<f_{2}^{\prime }(x)$ for $x>1$, and consequently
$f_{1}(x)<f_{2}(x)$ for $x>1.$
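As a quick numerical sanity check of this comparison (our own illustration, not part of the original argument; the grid of values of $x$ is arbitrary), the following Python sketch evaluates $f_1$ and $f_2$ and confirms $f_2(x)>f_1(x)$ for $x>1$:
\begin{verbatim}
import math

def f1(x):
    # unassisted capacity C(E) = log x, with x = sqrt(1 + r)
    return math.log(x)

def f2(x):
    # entanglement-assisted capacity C_ea(E) in the same variable
    return 0.5 * ((x + 1) * math.log((x + 1) / 2)
                  - (x - 1) * math.log((x - 1) / 2))

for x in [1.001, 1.1, 2.0, 10.0, 1e3, 1e6]:
    assert f2(x) > f1(x)   # entanglement assistance strictly helps
\end{verbatim}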
\textbf{Acknowledgments.} This work was partly supported by RFBR grant N
12-01-00319-a, Fundamental Research Programs of RAS and by the Cariplo
Fellowship under the auspices of the Landau Network - Centro Volta. The
author is grateful to G. M. D'Ariano for the hospitality at the QUIT group
of the University of Pavia, and to A. Barchielli, L. Maccone, P.
Perinotti, M.F. Sacchi and M.E. Shirokov for stimulating discussions.
Special thanks are due to L. Maccone for the help with Latex graphics.
\begin{figure}
\caption{\emph{Ex.2: The classical capacities (nats) as functions of the
signal-to-noise ratio $r$: $C_{ea}(E)$ and $C(E)$.}}
\end{figure}
\begin{figure}
\caption{\emph{Ex.2: The gain of entanglement assistance $C_{ea}(E)/C(E)$
as a function of the signal-to-noise ratio $r$.}}
\end{figure}
\end{document}
\begin{document}
\title{On customer flows in Jackson queuing networks}
\author{Sen Tan\ and\
Aihua Xia\footnote{Corresponding author.
E-mail: xia@ms.unimelb.edu.au}\\
Department of Mathematics and Statistics\\
The University of
Melbourne\\
Parkville, VIC 3052\\
Australia}
\date{19 January, 2010}
\maketitle
\begin{abstract} Melamed's theorem states that for a Jackson queuing network, the equilibrium flow along a link
follows a Poisson distribution if and only if no customers
can travel along the link more than once. Barbour \& Brown~(1996) considered the Poisson approximate version of Melamed's theorem by
allowing the customers a small probability $p$ of travelling along the link more than once. In this paper, we prove that the customer flow process is a Poisson cluster process and then establish a general approximate version of Melamed's theorem accommodating all possible cases of
$0\le p<1$.
\vskip12pt
\noindent\textit{Key words and phrases.} Jackson queuing network, Palm distribution, Poisson cluster process, over-dispersion, Stein's method, negative binomial.
\vskip12pt
\noindent\textit{AMS 2000 subject classifications.} Primary 60G55; secondary 60F05, 60E15.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
We consider a Jackson queuing network
with $J$ queues and the following specifications [see Barbour \& Brown~(1996) for more details]. First, we assume that customers can move from one queue to another and can enter and leave the network from any queue. We assume that the exogenous arrival processes are independent Poisson processes with rates $\nu_j$, $1\le j\le J$. Service requirements are assumed to be exponential random variables with parameter 1 and when there are $m$ customers in queue $j$,
the service effort for queue $j$ is $\phi_j(m)$, where $\phi_j(0)=0$, $\phi_j(1)>0$ and $\phi_j(m)$ is a non-decreasing function of $m$.
Second, we define the switching process as follows.
Let $\lambda_{ij}$ be the probability that an individual moves from queue $i$ to queue $j$,
$\mu_i$ be the exit probability from queue $i$ and it is natural to assume
$$\sum_{j=1}^J\lambda_{ij}+\mu_i=1, \ 1\le i\le J.$$
Without loss of generality, we may assume that the network is irreducible in the sense that all customers can access any queue with a positive probability.
Set $\alpha_j$ as the total rate of arriving customers (including both exogenous and endogenous arrivals) to queue $j$, then the rates $\{\alpha_j\}$ satisfy
the equations
$$\alpha_j=\nu_j+\sum_{i=1}^J\alpha_i\lambda_{ij},\ 1\le j\le J$$
and they are the unique solution of the equations with $\alpha_j>0$ for all $j$.
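As an illustration only (the rates below are hypothetical and not taken from this paper), the traffic equations can be solved numerically as the linear system $(I-\Lambda^{\top})\alpha=\nu$, where $\Lambda=(\lambda_{ij})$:
\begin{verbatim}
import numpy as np

# Hypothetical 3-queue network: lam[i, j] is the routing probability
# from queue i to queue j, nu[j] is the exogenous arrival rate at queue j.
lam = np.array([[0.0, 0.5, 0.2],
                [0.3, 0.0, 0.4],
                [0.1, 0.2, 0.0]])
nu = np.array([1.0, 0.5, 0.0])

# alpha_j = nu_j + sum_i alpha_i * lam[i, j]  <=>  (I - lam^T) alpha = nu
alpha = np.linalg.solve(np.eye(3) - lam.T, nu)
print(alpha)   # total arrival rates; all components are positive
\end{verbatim}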
For convenience, we define state 0 as the outside of the network, that is, the point of arrival and departure of an individual into and from the system.
We write ${\cal S}:=\{(j,k):\ 0\le j,k\le J\}$ as the set of all possible direct links and use
$\mbox{\boldmath$\Xi$}^{jk}$ to record the transitions of individuals moving from queue $j$ to queue $k$, then
$\mbox{\boldmath$\Xi$}=\{\mbox{\boldmath$\Xi$}^{jk},\ 0\le j,k\le J\}$ gives a full account of customer flows in the network, where departures are transitions to 0 and arrivals are transitions from 0. If $\rho_{jk}$ is the rate of equilibrium flow along the link $(j,k)$, then $\rho_{jk}=\alpha_j\lambda_{jk}$ and the mean measure of $\mbox{\boldmath$\Xi$}$
is
$$\mbox{\boldmath$\lambda$}(ds,(j,k))=\rho_{jk}ds,\ s\in\mathbb{R},\ (j,k)\in {\cal S}.$$
Our interest is in the customer flows along the links in $C\subset{\cal S}$ for the time interval $[0,t]$, so we set the carrier space as $\Gamma_{C,t}=[0,t]\times C$ and use $\boldXi_{C,t}$ to stand for the transitions along the links in $C$ for the period $[0,t]$. Then the mean measure of $\boldXi_{C,t}$
is
$$\mbox{\boldmath$\lambda$}_{C,t}(ds,(j,k))=\rho_{jk}ds,\ 0\le s\le t,\ (j,k)\in C.$$
Melamed's theorem states that $\boldXi_{C,t}$ is a Poisson process if and only if no customers travel along the links in $C$ more than once [Melamed~(1979) proved the theorem when $\phi_j(m)=c_j$ for $m\ge 1$ and $1\le j\le J$, and the general case was completed by Walrand \& Varaiya~(1981)]. Barbour \& Brown~(1996) considered the Poisson approximate version of Melamed's theorem by
allowing the customers a small probability of traveling along the links more than once. For convenience, we refer to the probability of customers traveling along the links in $C$ more than once as the {\it loop probability}. The bounds for the errors of Poisson process approximation were sharpened by Brown, Weinberg \& Xia~(2000) and Brown, Fackrell \& Xia~(2005), and it is concluded in these studies that the accuracy of Poisson approximation depends on how small the loop probability is.
In section~2 of the paper, we use the Palm theory, Barbour-Brown Lemma [Barbour \& Brown~(1996)] and infinite divisibility of point processes to prove that
$\boldXi_{C,t}$ is a Poisson cluster process [Daley \& Vere-Jones~(1988), p.~243]. The characterization involves a few quantities which are generally intractable, so in section~3, we prove that $\mbox{\boldmath$\Xi$}$ is over-dispersed [see Brown, Hamza \& Xia~(1998)], i.e., its variance is greater than its mean, and conclude that suitable approximations should be those with the same property, such as the compound Poisson or negative binomial distributions. We then establish a general approximate version of Melamed's theorem for the total number of customers traveling along the links in $C$ for the period $[0,t]$ based on a suitably chosen negative binomial distribution. The approximation error is measured in terms of the total variation distance and the error bound is small when the loop probability is small [cf. the Poisson approximation error bound in Barbour \& Brown~(1996)] and/or $t$ is large with order $\frac{1}{\sqrt{t}}$ [cf. Berry--Esseen bound for normal approximation, Petrov~(1995)].
\section{A characterization of the customer flow process}
\setcounter{equation}{0}
The customer flows are directly linked to the changes of the states of the queue lengths, so we define $N_i(t)$ as the number of customers, including those in service, at queue $i$ and time $t$, $1\le i\le J$. Then
$\{{\bf N}(t):=(N_1(t),\dots,N_J(t)):\ t\in\mathbb{R}\}$ is a pure Markov jump process on the state space $\{0,1,2,\dots\}^J$ with the following transition rates from state ${\bf n}=(n_1,\dots,n_J)$:
$$q_{{\bf n},{\bf n}+e_j}=\nu_j,\ q_{{\bf n},{\bf n}-e_j+e_k}=\phi_j(n_j)\lambda_{jk},\ q_{{\bf n},{\bf n}-e_j}=\mu_j\phi_j(n_j),\ 1\le j,k\le J,$$
where $e_j$ is the $j$th coordinate vector in $\{0,1,2,\dots\}^J$.
The stationary Markov queue length process has a unique stationary distribution: for each $t>0$, $N_j(t)$, $j=1,\dots,J$ are independent with
$$\mathbb{P}(N_j(t)=k)=\frac{\alpha_j^k/\prod_{r=1}^k\phi_j(r)}{\sum_{l=0}^\infty\alpha_j^l/\prod_{r=1}^l\phi_j(r)}.$$
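This stationary marginal is easy to evaluate numerically; the sketch below (our illustration, with a hypothetical service-effort function and a truncated normalizing sum) recovers the geometric distribution in the single-server case $\phi_j(m)\equiv 1$, $m\ge 1$:
\begin{verbatim}
import numpy as np

def stationary_pmf(alpha_j, phi_j, kmax=500):
    """Truncated stationary distribution of queue j:
    P(N_j = k) is proportional to alpha_j**k / prod_{r=1}^k phi_j(r)."""
    w = np.empty(kmax + 1)
    w[0] = 1.0
    for k in range(1, kmax + 1):
        w[k] = w[k - 1] * alpha_j / phi_j(k)
    return w / w.sum()

# single-server queue: phi_j(m) = 1 for m >= 1, alpha_j = 0.6
pmf = stationary_pmf(0.6, lambda m: 1.0)
# pmf[k] is (1 - 0.6) * 0.6**k up to truncation, i.e. a geometric law
\end{verbatim}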
Let $X_i$ be the $i$th queue visited by a given customer, then
$\{X_i:\ i=1,2,\dots\}$ is called the {\it forward customer chain} and it is a homogeneous finite Markov chain with transition probabilities
\begin{eqnarray*}
&&p_{00}=0,\ p_{0k}=\frac{\nu_k}{\sum_{j=1}^J\nu_j};\\
&&p_{j0}=\mu_j,\ p_{jk}=\lambda_{jk},\ j,k=1,\dots,J.
\end{eqnarray*}
The backward customer chain $X^\ast$ is the forward customer chain for the time-reversed process of $\{{\bf N}(t):=(N_1(t),\dots,N_J(t)):\ t\in\mathbb{R}\}$ [Barbour \& Brown~(1996), p.~475] and it can be viewed as the time-reversal of the forward customer chain $\{X_i:\ i=1,2,\dots\}$ with transition probabilities
\begin{eqnarray*}
&&p_{00}^\ast=0,\ p_{0j}^\ast=\frac{\mu_j\alpha_j}{\sum_{l=1}^J\mu_l\alpha_l};\\
&&p_{k0}^\ast=\frac{\nu_k}{\alpha_k},\ p_{kj}^\ast=\frac{\alpha_j\lambda_{jk}}{\alpha_k},\ j,k=1,\dots,J.
\end{eqnarray*}
We will use the Palm distributions to characterize the distribution of $\boldXi_{C,t}$, prove its properties and establish
a general approximate version of Melamed's theorem for $\boldXi_{C,t}(\Gamma_{C,t})$.
For the point process $\mbox{\boldmath$\Xi$}$ with locally finite mean measure $\mbox{\boldmath$\lambda$}(ds,(j,k))=\rho_{jk}ds,\ s\in\mathbb{R},\ (j,k)\in{\cal S}$, we may consider it as a random measure on the metric space $\mathbb{R}\times {\cal S}$ equipped with the metric
$$d((u_1,(j_1,j_2)),(u_2,(k_1,k_2)))=|u_1-u_2|{\bf 1}_{(j_1,j_2)\ne(k_1,k_2)},\ u_1,u_2\in\mathbb{R}\mbox{ and }(j_1,j_2),\ (k_1,k_2)\in{\cal S},$$ so that we can define the {\it Palm distribution} at $\alpha\in \mathbb{R}\times {\cal S}$ as the distribution of $\mbox{\boldmath$\Xi$}$ conditional on the presence of a point at $\alpha$, that is,
$$P^\alpha(\cdot)=
\frac{\mathbb{E} \left[1_{[{\scriptsize \mbox{\boldmath$\Xi$}}\in\cdot]}\mbox{\boldmath$\Xi$}(d\alpha)\right]}{\mbox{\boldmath$\lambda$}(d\alpha)},\
\alpha\in\Gamma_{C,t}\
\mbox{\boldmath$\lambda$}-\mbox{almost surely},$$
see Kallenberg~(1983), p.~83 for more details.
A process $\mbox{\boldmath$\Xi$}^\alpha$ is called the {\it Palm process} of $\mbox{\boldmath$\Xi$}$ at $\alpha$ if its distribution is $P^\alpha$. In applications, it is often more convenient to work with the {\it reduced Palm process} $\mbox{\boldmath$\Xi$}^\alpha-\delta_\alpha$
[Kallenberg~(1983), p.~84], where $\delta_\alpha$ is the Dirac measure at $\alpha$.
The Palm distributions are closely related to the {\it size-biasing} in
sampling contexts [Cochran~(1977)]. More precisely, if $X$ is a non-negative integer-valued random variable, one may consider it as a point process with the carrier space having only one point so its Palm distribution becomes
$$\mathbb{P}(X_s=i):=\frac{i\mathbb{P}(X=i)}{\mathbb{E} X}.$$
However, this is exactly the definition of the {\it size
biased} distribution of $X$ [see Goldstein \& Xia~(2006)].
\begin{Lemma}\label{BBlemma}
[Barbour \& Brown (1996)]
For the open queuing network, the reduced Palm distribution for the network given a transition at link
$(j,k)$ at time $0$ is the same as that for the original network, save that the network on $(0,\infty)$ behaves as if there were an extra individual at queue $k$ at time 0 and the network on $(-\infty,0)$ behaves as if there were an extra individual in queue $j$ at time 0.
\end{Lemma}
For two random elements $\eta_1$ and $\eta_2$ having the same distribution, we write for brevity $\eta_1\stackrel{\mbox{\scriptsize{{\rm d}}}}{=}\eta_2$.
\begin{Lemma}\label{XiaLemma1} For each $(j,k)\in{\cal S}$, there is a point process $\mbox{\boldmath$\xi$}^{(0,(j,k))}$ on $\mathbb{R}\times{\cal S}$ independent of $\mbox{\boldmath$\Xi$}$ such that
$$\mbox{\boldmath$\xi$}^{(0,(j,k))}+\mbox{\boldmath$\Xi$} \stackrel{\mbox{\scriptsize{{\rm d}}}}{=}\mbox{\boldmath$\Xi$}^{(0,(j,k))}.$$
\end{Lemma}
\noindent{\bf Proof.} The proof is adapted from Barbour \& Brown~(1996), p.~480. By Lemma~\ref{BBlemma}, the reduced Palm process $\mbox{\boldmath$\Xi$}^{(0,(j,k))}-\delta_{(0,(j,k))}$ has the same distribution as that of $\mbox{\boldmath$\Xi$}$ except that the network on $(0,\infty)$ behaves as if there were an extra individual at queue $k$ at time 0 and the network on $(-\infty,0)$ behaves as if there were an extra individual in queue $j$ at time 0. Let $\tilde X^{(0)}$ and $X^{(0)}$ be the routes taken by the extra individual on $(-\infty,0)$ and $(0,\infty)$ respectively. Whenever the extra customer is at queue $i$ together with $m$ other customers, we use independently sampled exponential service requirements with instantaneous service rate $\phi_i(m+1)-\phi_i(m)$. Noting that this construction ensures that the extra customer uses the ``spare" service effort and never ``interferes" with the flow of the main traffic, one can see that its transitions are independent of $\mbox{\boldmath$\Xi$}$. The same procedure applies to the construction of the backward route. Let
$\mbox{\boldmath$\xi$}^{(0,(j,k))}$ be the transitions taken by the extra customer on $(-\infty,0)\cup(0,\infty)$ plus the Dirac measure $\delta_{(0,(j,k))}$, then $\mbox{\boldmath$\xi$}^{(0,(j,k))}$ is independent of $\mbox{\boldmath$\Xi$}$ and the conclusion of the lemma follows from the construction. \hbox{\vrule width 5pt height 5pt depth 0pt}
Let $\theta_s, \ s\in\mathbb{R}$, denote the shift operator on
$\mathbb{R}\times{\cal S}$ which translates each point in $\mathbb{R}\times{\cal S}$ by $s$ to the left, i.e. $\theta_s((u,(j,k)))=(u-s,(j,k))$ and we use
$\mbox{\boldmath$\xi$}^{(s,(j,k))}$ to stand for a copy of $\mbox{\boldmath$\xi$}^{(0,(j,k))}\circ\theta_s$, $s\in\mathbb{R}$.
From now on, we focus on the point process $\boldXi_{C,t}$. With metric $d$, $\Gamma_{C,t}$ is a Polish space and we use ${\cal B}\left(\Gamma_{C,t}\right)$ to stand for the Borel $\sigma$-algebra in $\Gamma_{C,t}$. Let $H_{C,t}$ denote the class of all configurations (finite nonnegative integer-valued measures) on $\Gamma_{C,t}$
with ${\cal H}_{C,t}$ the $\sigma$-algebra in $H_{C,t}$ generated by the sets
$$\{\xi\in H_{C,t}: \xi(B)=i\},\ i\in\mathbb{Z}_+:=\{0,1,2,\dots\},
\ B\in{\cal B}\left(\Gamma_{C,t}\right),$$
see Kallenberg~(1983), p.~12.
\begin{Theorem}\label{Th1} Let $\{\mbox{\boldmath$\eta$}_i,\ i\ge 0\}$ be independent and identically distributed random measures on $\Gamma_{C,t}$ having the distribution
\begin{equation}\mathbb{P}\left[\mbox{\boldmath$\eta$}_0\left(\Gamma_{C,t}\right)\ge 1\right]=1,\ \mathbb{P}(\mbox{\boldmath$\eta$}_0\in A)=\mathbb{E} \sum_{(j,k)\in C}\int_0^t
\frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\xi$}}^{(s,(j,k))}\in A]}}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\cdot\frac{\rho_{jk}}{\theta_{C,t}}ds,\ A\in {\cal H}_{C,t},
\label{Xia2.1}
\end{equation}
where
\begin{equation}\theta_{C,t}=\mathbb{E} \sum_{(j,k)\in C}\int_0^t
\frac{1}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\rho_{jk}ds.\label{Xia2.2}\end{equation}
Let $M$ be a Poisson random variable with mean $\theta_{C,t}$ and independent of $\{\mbox{\boldmath$\eta$}_i,\ i\ge 0\}$, then
$$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \sum_{i=1}^M\mbox{\boldmath$\eta$}_i.$$
\end{Theorem}
\noindent{\bf Proof.} By Lemma~\ref{XiaLemma1} and Theorem~11.2 of [Kallenberg~(1983)], we can conclude that $\boldXi_{C,t}$ is infinitely divisible, hence we obtain from Lemma~6.6 and Theorem~6.1 of [Kallenberg~(1983)] that $\boldXi_{C,t}$ is a Poisson cluster process, that is,
$$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \sum_{i=1}^M\mbox{\boldmath$\eta$}_i,$$
where $\mbox{\boldmath$\eta$}_i,\ i\ge 0$ are independent and identically distributed random measures on $\Gamma_{C,t}$ such that $\mathbb{P}\left(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1\right)=1$, $M$ is a Poisson random variable with mean $\theta_{C,t}$ and independent of $\{\mbox{\boldmath$\eta$}_i,\ i\ge 1\}$.
Direct verification ensures that the Palm process of $\sum_{i=1}^M\mbox{\boldmath$\eta$}_i$ at $\alpha\in\Gamma_{C,t}$ is
$\sum_{i=1}^M\mbox{\boldmath$\eta$}_i+\mbox{\boldmath$\eta$}_0^{\alpha}$, where $\mbox{\boldmath$\eta$}_0^{\alpha}$ is the Palm process of $\mbox{\boldmath$\eta$}_0$ at $\alpha$ and is independent of $\{M,\mbox{\boldmath$\eta$}_i,\ i\ge 1\}$. This in turn implies that $\mbox{\boldmath$\xi$}^{(s,(j,k))}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \mbox{\boldmath$\eta$}_0^{(s,(j,k))}.$
Let $\mbox{\boldmath$\mu$}(ds,(j,k))$ denote the mean measure of the point process $\mbox{\boldmath$\eta$}_0$, then some elementary computation ensures that the mean measure of $\sum_{i=1}^M\mbox{\boldmath$\eta$}_i$ is $\theta_{C,t}\mbox{\boldmath$\mu$}(ds,(j,k))$ for $(j,k)\in C$ and $0\le s\le t$. On the other hand, the mean measure of $\boldXi_{C,t}$ is $\mbox{\boldmath$\lambda$}_{C,t}(ds,(j,k))=\rho_{jk}ds$, $(j,k)\in C$ and $s\in[0,t]$, so we obtain
\begin{equation}\mbox{\boldmath$\mu$}(ds,(j,k))=\frac{\rho_{jk}}{\theta_{C,t}}ds,\ (j,k)\in C,\ s\in[0,t].\label{Xia2.3}\end{equation}
The representation \Ref{Xia2.1} follows from the fact that
$\mathbb{P}\left(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1\right)=1$ and
$$\mathbb{P}\left(\mbox{\boldmath$\eta$}_0\in A\right)=\mathbb{E}\int_{\Gamma_{C,t}}\frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\eta$}_0}\in A]}}{\mbox{\boldmath$\eta$}_0\left(\Gamma_{C,t}\right)}\mbox{\boldmath$\eta$}_0(d\alpha)=\mathbb{E} \sum_{(j,k)\in C}\int_0^t
\frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\xi$}}^{(s,(j,k))}\in A]}}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\frac{\rho_{jk}}{\theta_{C,t}}ds.$$
In particular, if we take $A=H_{C,t}$, then the left hand side becomes 1, so \Ref{Xia2.2} follows.
\hbox{\vrule width 5pt height 5pt depth 0pt}
Despite the fact that $\theta_{C,t}$ is specified by \Ref{Xia2.2}, since the Palm process $\mbox{\boldmath$\xi$}^{(s,(j,k))}$ is generally intractable, it is virtually impossible to express $\theta_{C,t}$ explicitly in terms of the specifications of the Jackson queuing network. On the other hand, writing $\rho_C:=\sum_{(j,k)\in C}\rho_{jk}$, the relationship \Ref{Xia2.3} yields
$$\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=\frac{\rho_C}{\theta_{C,t}}t.$$
The following proposition tells us the range of values that $\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})$ and $\theta_{C,t}$ may take. To this end, we define
\begin{equation}\epsilon_C(j,k)=\mathbb{E}\mbox{\boldmath$\xi$}^{(0,(j,k))}(\mathbb{R}\times C)-1\mbox{ and }\epsilon_C=\sum_{(j,k)\in C}\frac{\rho_{jk}}{\rho_C}\epsilon_C(j,k).\label{Xia2.6}\end{equation}
In other words, $\epsilon_C(j,k)$ is the average number of visits in $C$ by the extra customer crossing the link $(j,k)$ and $\epsilon_C$ is the weighted average number of visits by an extra customer crossing links in $C$.
\begin{Proposition} We have
\begin{equation}1\le \mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\le 1+\epsilon_C\label{Xia2.4}\end{equation}
and
\begin{equation}\frac{\rho_C}{1+\epsilon_C}t\le\theta_{C,t}\le\rho_C t.\label{Xia2.5}\end{equation}
\end{Proposition}
\noindent{\bf Proof.} The first inequality of \Ref{Xia2.4} follows immediately from the fact that \\
$\mathbb{P}(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1)=1$. For the second inequality of \Ref{Xia2.4}, noting that the mean measure of $\mbox{\boldmath$\eta$}_0$ is
$$\mbox{\boldmath$\mu$}(ds,(j,k))=\frac{\rho_{jk}}{\theta_{C,t}}ds,\ (j,k)\in C,\ s\in[0,t],$$
we have
\begin{eqnarray*}
\left\{\frac{\rho_C}{\theta_{C,t}}t\right\}^2&=&[\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})]^2\le\mathbb{E}\left[\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})^2\right]\\
&=&\mathbb{E}\int_{\Gamma_{C,t}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\mbox{\boldmath$\eta$}_0(d\alpha)\\
&=&\sum_{(j,k)\in C}\int_0^t\mathbb{E} \mbox{\boldmath$\eta$}_0^{(s,(j,k))}(\Gamma_{C,t})\mbox{\boldmath$\mu$}(ds,(j,k))\\
&\le&\sum_{(j,k)\in C}(1+\epsilon_C(j,k))\frac{\rho_{jk}}{\theta_{C,t}}t\\
&=&\frac{\rho_C}{\theta_{C,t}}t+\frac{\sum_{(j,k)\in C}\epsilon_C(j,k)\rho_{jk}}{\theta_{C,t}}t\\
&=&\frac{\rho_C}{\theta_{C,t}}t(1+\epsilon_C).
\end{eqnarray*}
We divide both sides by $\frac{\rho_C}{\theta_{C,t}}t$ to get
$$\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=\frac{\rho_C}{\theta_{C,t}}t\le 1+\epsilon_C.$$
Finally, \Ref{Xia2.5} is an immediate consequence of \Ref{Xia2.4} and the equation $\theta_{C,t}=\frac{\rho_C}{\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})}t$. \hbox{\vrule width 5pt height 5pt depth 0pt}
\begin{Remark}{\rm If the loop probability in $C$ is 0, then $\epsilon_C=0$ and $\mathbb{E}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=1$, so there is only one customer on $\Gamma_{C,t}$. This customer is crossing the link $(j,k)$ with probability $\frac{\rho_{jk}}{\rho_C}$ at a time uniformly distributed on $[0,t].$}
\end{Remark}
\section{A discrete central limit theorem for the customer flow process}
\setcounter{equation}{0}
A random variable is said to be {\it over-dispersed} (resp. {\it under-dispersed}) if its variance to mean ratio is greater (resp. less) than one.
A random measure $\chi$ on a Polish space is said to be {\it over-dispersed} (resp. {\it under-dispersed}) if $\chi(B)$ is over-dispersed (resp. under-dispersed) for every bounded Borel subset $B$ of the Polish space.
It is concluded in Brown, Hamza \& Xia~(1998) that point processes arising from
irreducible, time-reversible Markov chains with finitely many states are always
over-dispersed. As our process $\mbox{\boldmath$\Xi$}$ is virtually a multivariate version of point processes studied in Brown, Hamza \& Xia~(1998), the following property can be viewed as a natural extension of the study in Brown, Hamza \& Xia~(1998).
\begin{Proposition} The point process $\mbox{\boldmath$\Xi$}$ is over-dispersed.
\end{Proposition}
\noindent{\bf Proof.} The space $(\mathbb{R}\times{\cal S},d)$ is a Polish space and for each bounded Borel subset $B$ of $\mathbb{R}\times {\cal S}$, it follows from the definition of the Palm processes [see Kallenberg~(1983), p.~84, equation~(10.4)] that
\begin{eqnarray*}
&&\mathbb{E}\left[\mbox{\boldmath$\Xi$}(B)\right]^2=\mathbb{E}\int_B\mbox{\boldmath$\Xi$}(B)\mbox{\boldmath$\Xi$}(d\alpha)=\mathbb{E} \int_B\mbox{\boldmath$\Xi$}^\alpha(B)\mbox{\boldmath$\lambda$}(d\alpha)
\ge\mathbb{E} \int_B\left(\mbox{\boldmath$\Xi$}(B)+1\right)\mbox{\boldmath$\lambda$}(d\alpha),
\end{eqnarray*}
that is,
\begin{equation}{\rm Var}\left[\mbox{\boldmath$\Xi$}(B)\right]\ge\mathbb{E} \mbox{\boldmath$\Xi$}(B),\label{xia3.1}
\end{equation}
completing the proof. \hbox{\vrule width 5pt height 5pt depth 0pt}
The inequality in \Ref{xia3.1} is generally strict unless the loop probability is 0,
i.e. $\mbox{\boldmath$\Xi$}$ is a Poisson process.
Hence, suitable approximate models for the distribution of $\Xi_{C,t}:=\boldXi_{C,t}(\Gamma_{C,t})$
are necessarily over-dispersed. One potential candidate for approximating the distribution of $\Xi_{C,t}$ is the compound Poisson distribution. However, as it is virtually impossible to extract the distribution of
$\mbox{\boldmath$\xi$}^{(s,(j,k))}(\Gamma_{C,t})$ for $0\le s\le t$, we face the same difficulty in specifying and estimating the approximating distribution if a general compound Poisson is used. On the other hand, as a special family of the compound Poisson distributions [Johnson, Kemp \& Kotz (2005), pp.~212--213 and p.~346], the negative binomial distribution has been well documented as a natural model for many over-dispersed random phenomena [see Bliss \& Fisher~(1953), Wang \& Xia~(2008) and Xia \& Zhang~(2009)]. The negative binomial distribution
${\rm NB}(r,q)$, $r>0$, $0<q<1$, is defined as
$$\pi_i=\frac{\Gamma(r+i)}{\Gamma(r)i!}q^r(1-q)^i,\ i\in\mathbb{Z}_+.$$
The advantage of using negative binomial approximation is that it suffices to estimate the mean and variance of the approximating distribution, like what we often do in applying the central limit theorem based on the normal approximation.
We will use the total variation distance between the distributions of nonnegative integer-valued random variables $Y_1$ and $Y_2$
$$d_{TV}(Y_1,Y_2):=\sup_{A\subset \mathbb{Z}_+}|\mathbb{P}(Y_1\in A)-\mathbb{P}(Y_2\in A)|$$
to measure the approximation errors in negative binomial approximation.
The discrete central limit theorem is valid under the assumption that the loop probability
is less than 1. More precisely, let $w_C(jk)$ be the probability that a customer crossing the link $(j,k)$ crosses the links in $C$ only once, i.e., the only time the customer crosses the links in $C$ is the current crossing. Define
$$w_C=\sum_{(j,k)\in C}w_C(jk)\rho_{jk}/\rho_C,$$
the weighted probability of customers crossing links in $C$ only once. Clearly, we have $w_C(jk)\ge\mu_k$, so
$$w_C\ge\sum_{(j,k)\in C}\rho_{jk}\mu_k/\rho_C.$$
The following lemma plays a crucial role in the estimation of the negative binomial approximation error.
\begin{Lemma}\label{keylemma1}
$d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)\le \frac1{\sqrt{2e w_C\rho_Ct}}.$
\end{Lemma}
\noindent{\bf Proof.} We prove the claim by a coupling based on the ``priority principle" [cf. the proof of Lemma~\ref{XiaLemma1}]. We define a customer as a {\it single crossing} ({\it sc} for brevity) customer if the customer crosses links in $C$ only once, otherwise, the customer is labeled as {\it multiple crossing}, or {\it mc} for short. We ``manage" the network by regrouping the customers at each queue into {\it sc} customers and {\it mc} customers. Whenever there are $m_2$ {\it mc} customers together with $m_1$ {\it sc} customers at queue $j$, we use independently sampled exponential service requirements with instantaneous service rate $\phi_j(m_1+m_2)-\phi_j(m_1)$ for all of the {\it mc} customers while the service for the {\it sc} customers is carried out with instantaneous service rate $\phi_j(m_1)$, that is, as if no {\it mc} customers present at the queue. Since the {\it sc} customers take priority over the {\it mc} customers and the {\it mc} customers use the ``spare" service effort and never interrupt the traffic flow of the {\it sc} ones, we can see that its transitions are independent of the transitions of the {\it sc} customers. Let $Z_1^{jk}$ (resp. $Z_2^{jk}$) denote the transitions of {\it sc} (resp. {\it mc}) customers moving from queue $j$ to queue $k$ in the period $[0,t]$, then ${\bf Z}_1:=\{Z_1^{jk},\ (j,k)\in C\}$ and
${\bf Z}_2:=\{Z_2^{jk},\ (j,k)\in C\}$ are independent and
$$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} {\bf Z}_1+{\bf Z}_2.$$
By Melamed's theorem, the point process ${\bf Z}_1$ is a Poisson process with mean measure
$$\mbox{\boldmath$\lambda$}_{{\bf Z}_1}(ds,(j,k))=w_C(jk)\rho_{jk}ds,\ (j,k)\in C,\ 0\le s\le t,$$
so ${\bf Z}_1(\Gamma_{C,t})$ follows a Poisson distribution with mean $w_C\rho_Ct$ and
$$d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)\le d_{TV}\left({\bf Z}_1(\Gamma_{C,t}),{\bf Z}_1(\Gamma_{C,t})+1\right)\le \frac1{\sqrt{2e w_C\rho_Ct}},$$
where the last inequality follows from the unimodality of the Poisson distribution and Proposition~A.2.7 of [Barbour, Holst \& Janson~(1992), p.~262].
\hbox{\vrule width 5pt height 5pt depth 0pt}
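As an aside (a numerical check of ours, not part of the proof), for an integer-valued random variable $Z$ the distance $d_{TV}(Z,Z+1)$ equals half the sum of the absolute differences of consecutive point probabilities, and for $Z$ Poisson it indeed stays below $1/\sqrt{2e\lambda}$; the chosen values of $\lambda$ below are arbitrary:
\begin{verbatim}
import math

def poisson_pmf(k, lam):
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def dtv_shift(lam, kmax=2000):
    # d_TV(Z, Z+1) = (1/2) * sum_k |P(Z=k) - P(Z=k-1)| for Z ~ Poisson(lam)
    p = [poisson_pmf(k, lam) for k in range(kmax)]
    return 0.5 * (p[0] + sum(abs(p[k] - p[k - 1]) for k in range(1, kmax)))

for lam in [1.0, 5.0, 25.0, 100.0]:
    assert dtv_shift(lam) <= 1.0 / math.sqrt(2 * math.e * lam)
\end{verbatim}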
To state the discrete central limit theorem, we set
$$\sigma_C(j,k)=\mathbb{E}[\mbox{\boldmath$\xi$}^{(0,(j,k))}(\mathbb{R}\times C)(\mbox{\boldmath$\xi$}^{(0,(j,k))}(\mathbb{R}\times C)-1)]\mbox{ and }\sigma_C=\sum_{(j,k)\in C}\frac{\rho_{jk}}{\rho_C}\sigma_C(j,k).$$
That is, $\sigma_C(j,k)$ is the second factorial moment of the number of visits in $C$ by the extra customer crossing the link $(j,k)$ and $\sigma_C$ is the weighted average of the second factorial moments of the number of visits by an extra customer crossing links in $C$ [cf. \Ref{Xia2.6}].
\begin{Theorem}\label{Xia3.2} Let
$$r=\frac{(\rho_Ct)^2}{{\rm Var}(\Xi_{C,t})-\rho_Ct},\ q=\frac{\rho_Ct}{{\rm Var}(\Xi_{C,t})},$$
then
\begin{eqnarray}
d_{TV}\left(\Xi_{C,t},{\rm NB}(r,q)\right)&\le&\frac1{(\rho_Ct)^2\sqrt{2e w_C\rho_Ct}}
\left\{2({\rm Var}(\Xi_{C,t})-\rho_Ct)^2\right.\nonumber\\
&&\mbox{\hskip0cm}\left.+\rho_Ct(\Xi_{C,t}[3]-\rho_Ct\Xi_{C,t}[2]-2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct))\right\}\label{Xia3.2.1}\\
&\le&\frac1{\sqrt{2e w_C\rho_Ct}}(2\epsilon_C^2+\sigma_C),\label{Xia3.2.2}
\end{eqnarray}
where $\Xi_{C,t}[n]$ stands for the $n$th factorial moment of $\Xi_{C,t}$ defined as
$$\Xi_{C,t}[n]=\mathbb{E}[\Xi_{C,t}(\Xi_{C,t}-1)\cdots(\Xi_{C,t}-n+1)].$$
\end{Theorem}
\begin{Remark}{\rm The parameters of the approximating negative binomial distribution are chosen so that it matches the mean and variance of $\Xi_{C,t}$.}
\end{Remark}
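To spell the choice out (a small illustration of ours, not part of the theorem; the numerical values are hypothetical), the standard formulas for the parametrization of ${\rm NB}(r,q)$ used here, mean $r(1-q)/q$ and variance $r(1-q)/q^2$, show that the $r$ and $q$ of Theorem~\ref{Xia3.2} reproduce the mean $\rho_Ct$ and the variance ${\rm Var}(\Xi_{C,t})$:
\begin{verbatim}
def nb_parameters(mean, var):
    # moment-matching choice from the theorem above; requires var > mean
    r = mean**2 / (var - mean)
    q = mean / var
    return r, q

mean, var = 12.0, 20.0          # hypothetical values of rho_C*t and Var
r, q = nb_parameters(mean, var)
assert abs(r * (1 - q) / q - mean) < 1e-9       # NB(r, q) mean
assert abs(r * (1 - q) / q**2 - var) < 1e-9     # NB(r, q) variance
\end{verbatim}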
\begin{Remark}{\rm If
the loop probability in $C$ is 0, then the negative binomial reduces to the Poisson distribution and the upper bound in Theorem~\ref{Xia3.2} becomes 0. This implies one direction of Melamed's theorem~(1979).}
\end{Remark}
\begin{Remark}{\rm
If the loop probability
is between 0 and 1, then both $\epsilon_C$ and $\sigma_C$ are finite, so the negative binomial approximation error bound is of order $O(1/\sqrt{t})$. Furthermore, if the loop probability is small, then both $\epsilon_C$ and $\sigma_C$ are small, so the negative binomial approximation to the distribution of $\Xi_{C,t}$ is even more accurate.}
\end{Remark}
\noindent{\bf Proof of Theorem~\ref{Xia3.2}.} The essence of Stein's method is to find a generator that characterizes the approximating distribution and to establish a Stein identity that transforms the problem of estimating the approximation errors into the study of the structure of the object under investigation.
In the context of negative binomial approximation,
let $a=r(1-q)$, $b=1-q$, then a generator which characterizes ${\rm NB}(r,q)$ is defined as
$${\cal B} g(i)=(a+bi)g(i+1)-ig(i),\ i\in\mathbb{Z}_+,$$
for all bounded functions $g$ on $\mathbb{Z}_+$ [see Brown \& Phillips~(1999) and Brown \& Xia~(2001)]. The Stein identity is naturally established as
\begin{equation}{\cal B} g(i)=f(i)-\pi(f)\label{Steinidentity}\end{equation}
for $f\in{\cal F}:=\{f:\ \mathbb{Z}_+\to[0,1]\}$, where $\pi(f)=\sum_{i=0}^\infty f(i)\pi_i$. It was shown in Brown \& Xia~(2001) that, for each $f\in{\cal F}$, the solution $g_f$ to the Stein equation \Ref{Steinidentity} satisfies
\begin{equation}\|\Delta g_f\|\le\frac1a,\label{Steinconstant}\end{equation}
where $\Delta g_f(\cdot)=g_f(\cdot+1)-g_f(\cdot).$ The Stein identity \Ref{Steinidentity} ensures that
$$\sup_{f\in{\cal F}}\left |\mathbb{E} f(\Xi_{C,t})-\pi(f)\right|=\sup_{f\in{\cal F}}\left|\mathbb{E} {\cal B} g_f(\Xi_{C,t})\right|,$$
hence, it suffices to estimate $\mathbb{E} {\cal B} g_f(\Xi_{C,t})$ for all $f\in{\cal F}$. For convenience, we drop $f$ from the subscript of $g_f$.
By Lemma~\ref{XiaLemma1}, we can take a point process $\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}$ on $\Gamma_{C,t}$ independent of $\boldXi_{C,t}$ such that
$$\boldXi_{C,t}^{(s,(j,k))}=\boldXi_{C,t}+\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}.$$
Therefore, if we write $\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}(\Gamma_{C,t})=1+\xi^{(s,(j,k))}$, then
\begin{eqnarray}
\mathbb{E}{\cal B} g(\Xi_{C,t})&=&\mathbb{E}[(a+b\Xi_{C,t})g(\Xi_{C,t}+1)-\Xi_{C,t} g(\Xi_{C,t})]\nonumber\\
&=&a\mathbb{E} g(\Xi_{C,t}+1)+b\mathbb{E}\sum_{(j,k)\in C}\int_0^tg(\Xi_{C,t}+2+\xi^{(s,(j,k))})\rho_{jk}ds\nonumber\\
&&-\mathbb{E}\sum_{(j,k)\in C}\int_0^tg(\Xi_{C,t}+1+\xi^{(s,(j,k))})\rho_{jk}ds.\label{proof01}
\end{eqnarray}
Let
\begin{equation} a+(b-1)\sum_{(j,k)\in C}\rho_{jk}t=0,\label{coefficient1}\end{equation}
and $\tilde\Xict=\Xi_{C,t}+1$, then it follows from \Ref{proof01} that
\begin{eqnarray}
&&\mathbb{E}{\cal B} g(\Xi_{C,t})\nonumber\\
&&=\mathbb{E} \sum_{(j,k)\in C}\int_0^t\left[b\left(g\left(\tilde\Xict+1+\xi^{(s,(j,k))}\right)-g\left(\tilde\Xict\right)\right)-\left(g\left(\tilde\Xict+\xi^{(s,(j,k))}\right)-g\left(\tilde\Xict\right)\right)\right]\rho_{jk}ds\nonumber\\
&&=\mathbb{E} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b\Delta g\left(\tilde\Xict+r+1\right)
-\Delta g\left(\tilde\Xict+r\right)\right]+b\Delta g\left(\tilde\Xict\right)\right\}\rho_{jk}ds.
\label{proof02}
\end{eqnarray}
Now, set
\begin{equation} b=\frac{\sum_{(j,k)\in C}\int_0^t\mathbb{E} \xi^{(s,(j,k))}\rho_{jk}ds}{\sum_{(j,k)\in C}\int_0^t\mathbb{E} \xi^{(s,(j,k))}\rho_{jk}ds+\rho_C t}
=\frac{{\rm Var}\left(\Xi_{C,t}\right)-\rho_Ct}{{\rm Var}\left(\Xi_{C,t}\right)},\label{coefficient2}\end{equation}
where the last equality is due to the following observation:
\begin{eqnarray*}
\mathbb{E}\Xi_{C,t}^2&=&\mathbb{E}\int_{\Gamma_{C,t}}\Xi_{C,t}\boldXi_{C,t}(d\alpha)\\
&=&\sum_{(j,k)\in C}\mathbb{E}\int_0^t\left(\Xi_{C,t}+1+\xi^{(s,(j,k))}\right)\rho_{jk}ds\\
&=&(\mathbb{E}\Xi_{C,t})^2+\rho_Ct+\mathbb{E}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds,
\end{eqnarray*}
and so
\begin{equation}\mathbb{E}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds={\rm Var}(\Xi_{C,t})-\rho_Ct.\label{proof04}\end{equation}
We then obtain from \Ref{proof02} that
\begin{eqnarray}
&&\mathbb{E}{\cal B} g(\Xi_{C,t})\nonumber\\
&&=\mathbb{E} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b\Delta^2 g\left(\tilde\Xict+r\right)-(1-b)\sum_{l=0}^{r-1}\Delta^2 g\left(\tilde\Xict+l\right)\right]\right\}\rho_{jk}ds\nonumber\\
&&=\mathbb{E} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b\mathbb{E}\Delta^2 g\left(\tilde\Xict+r\right)-(1-b)\sum_{l=0}^{r-1}\mathbb{E}\Delta^2 g\left(\tilde\Xict+l\right)\right]\right\}\rho_{jk}ds,
\label{proof03}
\end{eqnarray}
where the last equality is due to the fact that $\xi^{(s,(j,k))}$ is independent of $\tilde\Xict$. On the other hand, using \Ref{Steinconstant}, we have
$$\left|\mathbb{E} \Delta^2 g(\tilde\Xict+l)\right|\le 2\|\Delta g\|d_{TV}(\Xi_{C,t},\Xi_{C,t}+1)\le \frac{2d_{TV}(\Xi_{C,t},\Xi_{C,t}+1)}{a},$$
so it follows from \Ref{proof03} that
\begin{eqnarray}&&\left|\mathbb{E}{\cal B} g\left(\Xi_{C,t}\right)\right|\nonumber\\
&&\le\frac{d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)}{a}\sum_{(j,k)\in C}\int_0^t\left[2b\mathbb{E}\xi^{(s,(j,k))}+(1-b)\mathbb{E} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\right]\rho_{jk}ds.
\label{proof05}\end{eqnarray}
Using the Palm distributions of $\boldXi_{C,t}$ together with \Ref{proof04}, we get
\begin{eqnarray*}
\Xi_{C,t}[3]&=&\mathbb{E}\int_{\Gamma_{C,t}}(\Xi_{C,t}-1)(\Xi_{C,t}-2)\boldXi_{C,t}(d\alpha)\\
&=&\sum_{(j,k)\in C}\int_0^t\mathbb{E}\left[\left(\Xi_{C,t}+\xi^{(s,(j,k))}\right)\left(\Xi_{C,t}+\xi^{(s,(j,k))}-1\right)\right]\rho_{jk}ds\\
&=&\rho_Ct\Xi_{C,t}[2]+2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct)+\sum_{(j,k)\in C}\int_0^t\mathbb{E} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds.
\end{eqnarray*}
This in turn ensures
\begin{equation}\sum_{(j,k)\in C}\int_0^t\mathbb{E} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds=\Xi_{C,t}[3]-\rho_Ct\Xi_{C,t}[2]-2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct).\label{proof06}\end{equation}
Consequently, combining \Ref{proof04}, \Ref{proof06} with \Ref{proof05} gives \Ref{Xia3.2.1}.
Finally, by the definitions of $\epsilon_C$ and $\sigma_C$, we have
$$\mathbb{E}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds\le \epsilon_C\rho_Ct$$
and
$$\sum_{(j,k)\in C}\int_0^t\mathbb{E} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds\le
\sigma_C\rho_Ct.$$
Therefore, \Ref{Xia3.2.2} follows from \Ref{Xia3.2.1}, \Ref{proof04} and \Ref{proof06}.
\hbox{\vrule width 5pt height 5pt depth 0pt}
\begin{thebibliography}{Dillo 83}
\typeout{References...}
\bibitem{BB96} {\sc Barbour, A. D. \& Brown, T. C.~(1996)} Approximate Versions of Melamed's Theorem. J. Appl. Probab.~\textbf{33}, 472--489.
\bibitem{BHJ} {\sc Barbour, A. D., Holst, L. \& Janson, S.~(1992)} {\em Poisson Approximation.\/} Oxford Univ. Press.
\bibitem{BF53} {\sc Bliss, C. \& Fisher, R. A.~(1953)} Fitting the negative binomial distribution to biological data. Biometrics~\textbf{9}, 174--200.
\bibitem{BFX05} {\sc Brown, T. C., Fackrell, M. \& Xia, A.~(2005)} Improved results on Poisson process approximation in Jackson networks. {\em COSMOS Journal}~\textbf{1}, 47--55.
\bibitem{BHX98} {\sc Brown, T. C., Hamza, K. \& Xia, A.~(1998)} On the Variance to Mean Ratio for Random Variables from Markov Chains and Point Processes. J. Appl. Probab.~\textbf{35}, 303--312.
\bibitem{BrP} {\sc Brown, T. C. \& Phillips, M. J.~(1999)} Negative Binomial Approximation with Stein's Method. Method. Comput. Appl. Probab.~\textbf{1:4}, 407--421.
\bibitem{BWX00} {\sc Brown, T. C., Weinberg, G.~V. \& Xia, A.~(2000)} Removing Logarithms from Poisson Process Error Bounds. Stochastic Processes Appl.~\textbf{87}, 149--165.
\bibitem{BX01} {\sc Brown, T. C. \& Xia, A.~(2001)} Stein's method and birth-death processes. Ann. Probab.~\textbf{29}, 1373--1403.
\bibitem{Cochran} {\sc Cochran, W.~(1977)} {\em Sampling techniques.\/} Wiley.
\bibitem{DVJ} {\sc Daley, D. J. \& Vere-Jones, D.~(1988)} {\em An Introduction to the Theory of Point Processes.\/} Springer-Verlag, New York.
\bibitem{GX06} {\sc Goldstein, L. \& Xia, A.~(2006)} Zero Biasing and a Discrete Central Limit Theorem. Ann. Probab.~\textbf{34}, 1782--1806.
\bibitem{JKK05} {\sc Johnson, N. L., Kemp, A. \& Kotz, S.~(2005)} {\em Univariate discrete distributions.\/} Third Edition. Wiley, New York.
\bibitem{Kallenberg83} {\sc Kallenberg, O.~(1983)} {\em Random Measures.\/} Academic Press.
\bibitem{Mel79} {\sc Melamed, B.~(1979)} Characterizations of Poisson traffic streams in Jackson queueing networks. Adv. Appl. Prob.~\textbf{11}, 422--438.
\bibitem{P95} {\sc Petrov, V. V.~(1995)} {\em Limit Theorems of Probability Theory: Sequences of Independent Random Variables.\/} Clarendon Press, Oxford.
\bibitem{WV81} {\sc Walrand, J. \& Varaiya, P.~(1981)} Flows in queueing networks: a martingale approach. Math. Operat. Res.~\textbf{6}, 387--404.
\bibitem{WX08} {\sc Wang, X. \& Xia, A.~(2008)} On negative binomial approximation to $k$-runs. J. Appl. Probab.~\textbf{45}, 456--471.
\bibitem{XZ09} {\sc Xia, A. \& Zhang, M.~(2009)} On approximation of Markov binomial distributions. Bernoulli~(accepted).
\end{thebibliography}
\end{document}
\begin{document}
\title{Isomorphism of Hilbert modules over stably finite C$^*$-algebras}
\author{Nathanial P. Brown}
\address{Department of Mathematics, Penn State University,
State College, PA, 16802, USA}
\email{nbrown@math.psu.edu}
\author{Alin Ciuperca}
\address{Fields Institute, 222 College Street, Toronto, Ontario, Canada, M5T 3J1}
\email{ciuperca@math.toronto.edu}
\keywords{$C^*$-algebras, Hilbert modules, Cuntz semigroup, compact}
\subjclass[2000]{Primary 46L08, Secondary 46L80}
\thanks{N.B. was partially supported by DMS-0554870; A.C. was partially supported by Fields Institute.}
\begin{abstract} It is shown that if $A$ is a stably finite C$^*$-algebra and $E$ is a countably generated Hilbert $A$-module, then $E$ gives rise to a compact element of the Cuntz semigroup if and only if $E$ is algebraically finitely generated and projective. It follows that if $E$ and $F$ are equivalent in the sense of Coward, Elliott and Ivanescu (CEI) and $E$ is algebraically finitely generated and projective, then $E$ and $F$ are isomorphic. In contrast to this, we exhibit two CEI-equivalent Hilbert modules over a stably finite C$^*$-algebra that are not isomorphic.
\end{abstract}
\maketitle
\section{Introduction}
In \cite{cowelliottiv} a new equivalence relation -- we'll call it \emph{CEI equivalence} -- on Hilbert modules was introduced. In general CEI equivalence is weaker than isomorphism, but it was shown that if $A$ has stable rank one, then it is the same as isomorphism (\cite[Theorem 3]{cowelliottiv}). Quite naturally, the authors wondered whether their result could be extended to the stably finite case. Unfortunately, it can't. In Section \ref{sec:counterexample}, we give examples of Hilbert modules over a stably finite C$^*$-algebra which are CEI-equivalent, but not isomorphic. On the other hand, we show in Section \ref{sec:main} that CEI equivalence amounts to isomorphism when restricted to ``compact" elements of the Cuntz semigroup, in the stably finite case.
\noindent\textbf{Acknowledgments:} We thank George Elliott, Francesc Perera, Leonel Robert, Luis Santiago, Andrew Toms and Wilhelm Winter for valuable conversations on topics related to this work.
\section{Definitions and Preliminaries}
Throughout this note all C$^*$-algebras are assumed to be separable and all Hilbert modules are assumed to be right modules and countably generated. We will follow standard terminology and notation in the theory of Hilbert modules (see, for example, \cite{lance}). In particular, $\mathcal{K}$ denotes the compact operators on $\ell^2(\mathbb{N})$, while $\mathcal{K}(E)$ will denote the ``compact" operators on a Hilbert module $E$.
For the reader's convenience, we recall a few definitions that are scattered throughout \cite{cowelliottiv}.
\begin{defn}
\label{defn:compactcontain}
If $E \subset F$ are Hilbert $A$-modules, we say $E$ is \emph{compactly contained in} $F$
if there exists a self-adjoint $T \in \mathcal{K}(F)$ such that $T|_E = \operatorname{id}_E$. In this situation we write $E \subset \subset F$.
\end{defn}
Note that $E \subset \subset E$ if and only if $\mathcal{K}(E)$ is unital; it can be shown that this is also equivalent to $E$ being algebraically finitely generated and projective (in the purely algebraic category of right $A$-modules) -- see the proof of \cite[Corollary 5]{cowelliottiv} (this part of the proof did not require the assumption of stable rank one).
\begin{defn} We say a Hilbert $A$-module $E$ is \emph{CEI subequivalent} to another Hilbert $A$-module $F$ if every compactly contained submodule of $E$ is isomorphic to a compactly contained submodule of $F$.
We say $E$ and $F$ are \emph{CEI equivalent} if they are CEI subequivalent to each other -- i.e., a third Hilbert $A$-module $X$ is isomorphic to a compactly contained submodule of $E$ if and only if $X$ is isomorphic to a compactly contained submodule of $F$.
\end{defn}
\begin{defn} We let $Cu(A)$ denote the set of Hilbert $A$-modules, modulo CEI equivalence. The class of a module $E$ in $Cu(A)$ will be denoted $[E]$.
\end{defn}
It turns out that $Cu(A)$ is an abelian semigroup with $[E] + [F] := [E\oplus F]$. (Note: it isn't even obvious that this is well defined!) Moreover $Cu(A)$ is partially ordered -- $[E] \leq [F] \Longleftrightarrow$ $E$ is CEI subequivalent to $F$ -- and every increasing sequence has a supremum (i.e., least upper bound). See \cite[Theorem 1]{cowelliottiv} for proofs of these facts.
\begin{defn} An element $x \in Cu(A)$ is \emph{compact} (in the order-theoretic sense) if for every increasing sequence $\{ x_n \} \subset Cu(A)$ with $x \leq \sup_n x_n$ there exists $n_0 \in \mathbb{N}$ such that $x \leq x_{n_0}$.
\end{defn}
For a unital C$^*$-algebra $A$, \emph{stable finiteness} means that for every $n \in \mathbb{N}$, $M_n(A)$ contains no infinite projections. In the nonunital case there are competing definitions, but it seems most popular to say $A$ is stably finite if the unitization $\tilde{A}$ is stably finite, so this is the definition we will use.
\section{Main Results}
\label{sec:main}
The proof of our first lemma is essentially contained in the proof of \cite[Corollary 5]{cowelliottiv}.
\begin{lem}
\label{lem:equality}
Assume $E\subset \subset F$ is a compact inclusion of Hilbert $A$-modules. If $E \cong F$ then either $E = F$ or $A\otimes \mathcal{K}$ contains a scaling element (in the sense of \cite{BC}). If $A$ is stably finite, then $A\otimes \mathcal{K}$ cannot contain a scaling element; hence, in this case, $E \cong F$ if and only if $E = F$.
\end{lem}
\begin{proof} Assume $E$ is properly contained in $F$; we'll show $A\otimes \mathcal{K}$ contains a scaling element. Let $v\colon F \to E$ be an isomorphism and $T \in \mathcal{K}(F)$ be a positive operator such that $T|_E = \operatorname{id}_E$. As observed in \cite{cowelliottiv}, the map $vT$ is adjointable -- i.e.\ defines an element of $\mathcal{L}(F)$ -- and, in fact, is compact. (This assertion is readily checked whenever $T$ is a ``finite-rank" operator). Moreover, a calculation shows that $(vT)^*|_E = Tv^{-1}$. It is also worth noting that $T(vT) = vT$, since $T|_E = \operatorname{id}_E$ and $vT(F) \subset E$.
The scaling element we are after is $x = vT$. Indeed, one checks that $x^* x = T^2$; hence, $(x^* x)(xx^*) = T^2(vT)(vT)^* = (vT)(vT)^* = xx^*$. Finally, we must see why $xx^* \neq x^* x$. But if $xx^* = x^* x$, then $T^2 = (vT)(vT)^*$ and thus $T^2(F) \subset vT(F) \subset E$. It follows that $T^2$ is a self-adjoint projection onto $E$ (since $T^2|_E = \operatorname{id}_E$, too), and hence $x = vT$ is a partial isometry whose support and range coincide with $E$. But this is impossible because $T = T^2$ (since $T \geq 0$), so $vT(F) \subsetneqq E$ (since $T(F) = E \subsetneqq F$).
We've shown that if $E \subsetneqq F$, then $\mathcal{K}(F)$ contains a scaling element. But Kasparov's stabilization theorem provides us with an inclusion $\mathcal{K}(F) \subset A\otimes \mathcal{K}$, so the proof of the first part is complete.
In the case that $A$ is stably finite, it is well known to the experts that $A\otimes \mathcal{K}$ cannot contain a scaling element. Indeed, if it did, then \cite[Corollary 4.4]{BC} implies that $M_n(A)$ contains a scaling element, for some $n \in \mathbb{N}$. But it was shown in \cite{BC} that the unitization $\widetilde{M_n(A)}$ would then have an infinite projection. However, there is a natural embedding $\widetilde{M_n(A)} \subset M_n(\tilde{A})$, which contradicts the assumption of stable finiteness.
\end{proof}
Note that the canonical Hilbert module $\ell^2(A)$ is isomorphic to lots of (non-compactly contained) proper submodules.
\begin{prop}
\label{prop}
Let $E$ be a Hilbert $A$-module such that $[E]$ is compact in $Cu(A)$. Then either $E \subset \subset E$ or $A\otimes \mathcal{K}$ contains a scaling element.
\end{prop}
\begin{proof} Let $h \in \mathcal{K}(E)$ be strictly positive. If $0$ is an isolated point in the spectrum $\sigma(h)$, then functional calculus provides a projection $p \in \mathcal{K}(E)$ such that $p = \operatorname{id}_E$; so $E \subset \subset E$, in this case. If $0 \in \sigma(h)$ is not isolated, then, again using functional calculus, we can find $E_1 \subset \subset E_2 \subset \subset E_3 \cdots \subset \subset E$ such that $\cup_i E_i$ is dense in $E$ and $E_i \subsetneqq E_{i+1}$ for all $i \in \mathbb{N}$.
Since $[E]$ is compact, there exists $i$ such that $[E_i] = [E]$. Since $E_{i+1} \subset \subset E$, $E_{i+1}$ is isomorphic to a compactly contained submodule of $E_i$ and this isomorphism restricted to $E_i$ maps onto a \emph{proper} submodule of $E_i$ (since $E_i \subsetneqq E_{i+1}$). Thus $E_i$ is isomorphic to a proper compactly contained submodule of itself. Hence, by Lemma \ref{lem:equality}, $A\otimes \mathcal{K}$ contains a scaling element.
\end{proof}
\begin{cor} Let $A$ be stably finite and $E$ be a Hilbert $A$-module. Then $[E] \in Cu(A)$ is compact if and only if $E \subset \subset E$. In particular, if $[E]$ is compact and $[E] \leq [F]$, then $E$ is isomorphic to a compactly contained submodule of $F$.
\end{cor}
\begin{proof} The ``only if" direction is immediate from the previous proposition. So assume $E \subset \subset E$ and let $[F_n] \in Cu(A)$ be an increasing sequence such that $[E] \leq [F] := \sup [F_n]$. By definition, $E$ is then isomorphic to a compactly contained submodule $E' \subset \subset F$. In the proof of \cite[Theorem 1]{cowelliottiv} it is shown that if $E' \subset \subset F$ and $[F] = \sup [F_n]$, then there is some $n \in \mathbb{N}$ such that $[E'] \leq [F_n]$. Since $[E] = [E']$, the proof is complete.
\end{proof}
\begin{cor}
\label{cor:isom}
Let $A$ be stably finite and $E, F$ be Hilbert $A$-modules. If $[E]= [F] \in Cu(A)$ is compact, then $E \cong F$. In particular, if $[E]= [F]$ and $E$ is algebraically finitely generated and projective, then $[E] \in Cu(A)$ is compact; hence, $E \cong F$.
\end{cor}
\begin{proof} Assume $[E] = [F]$ is compact. Then $E \subset \subset E$ and $F \subset \subset F$, by the previous corollary. Hence there exist isomorphisms $v\colon F \to F' \subset \subset E$ and $u\colon E \to E' \subset \subset F$. It follows that $F \cong u(v(F)) \subset \subset F$, which, by Lemma \ref{lem:equality}, implies that $u(v(F)) = F$. Hence $u$ is surjective, as desired.
As mentioned after Definition \ref{defn:compactcontain}, if $E$ is algebraically finitely generated and projective, then $E \subset \subset E$, which implies $[E]$ is compact (as we've seen).
\end{proof}
In the appendix of \cite{cowelliottiv} it is shown that $Cu(A)$ is isomorphic to the classical Cuntz semigroup $W(A\otimes \mathcal{K})$. When $A$ is stable, the isomorphism $W(A) \to Cu(A)$ is very easy to describe: the Cuntz class of $a \in A_+$ is sent to $H_a := \overline{aA}$ (with its canonical Hilbert $A$-module structure).
\begin{thm}
\label{thm:main}
Let $A$ be a stable, finite C$^*$-algebra, $a \in A_+$ and $H_a = \overline{aA}$. The following are equivalent:
\begin{enumerate}
\item $H_a$ is algebraically finitely generated and projective;
\item $[H_a] \in Cu(A)$ is compact;
\item $\sigma(a) \subset \{0\} \cup [\varepsilon, \infty)$ for some $\varepsilon > 0$;
\item $\langle a \rangle = \langle p \rangle \in W(A)$ for some projection $p\in A$.
\end{enumerate}
\end{thm}
\begin{proof} The implication $(1) \Longrightarrow (2)$ was explained above.
$(2) \Longrightarrow (3)$: Let $a_\varepsilon = (a-\varepsilon)_+$. Then $H_{a_\varepsilon} \subset \subset H_a$ and $\cup_\varepsilon H_{a_\varepsilon}$ is dense in $H_a$. Since $[H_a] \in Cu(A)$ is compact, there exists $\varepsilon > 0$ such that $[H_a] = [H_{a_\varepsilon}]$. Corollary \ref{cor:isom} implies that $H_a \cong H_{a_\varepsilon}$; thus $H_a = H_{a_\varepsilon}$, by Lemma \ref{lem:equality}. It follows that $\sigma(a) \subset \{0\} \cup [\varepsilon, \infty)$, because otherwise functional calculus would provide a nonzero element $b \in C^*(a)$ such that $0 \leq b \leq a$ (so $b \in H_a$) and $a_\varepsilon b = 0$ (so $b \notin H_{a_\varepsilon}$), which would contradict the equality $H_a = H_{a_\varepsilon}$.
$(3) \Longrightarrow (4)$ is a routine functional calculus exercise.
$(4) \Longrightarrow (1)$: Assume $\langle a \rangle = \langle p \rangle \in W(A)$. Since $pA$ is singly generated and algebraically projective, Corollary \ref{cor:isom} implies $H_a$ is isomorphic to $pA$.
\end{proof}
The equivalence of $(3)$ and $(4)$ above generalizes Proposition 2.8 in \cite{PT}.
\begin{cor} If $A$ is stably finite, then $A\otimes \mathcal{K}$ has no nonzero projections if and only if $Cu(A)$ contains no nonzero compact element.
\end{cor}
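(A minimal sketch, using the identification $W(A\otimes\mathcal{K})\cong Cu(A)$ mentioned above: if $p\in A\otimes\mathcal{K}$ is a nonzero projection, then $[\overline{p(A\otimes\mathcal{K})}]$ is a nonzero compact element of $Cu(A)$ by Theorem \ref{thm:main}; conversely, if $x\in Cu(A)$ is a nonzero compact element, then Theorem \ref{thm:main}, applied to the stable and finite algebra $A\otimes\mathcal{K}$, provides a projection $p\in A\otimes\mathcal{K}$ with $x=\langle p\rangle$, and $p\neq 0$ since $x\neq 0$.)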
\section{A Counterexample}
\label{sec:counterexample}
Now let us show that if $A$ is stably finite and $E,F$ are Hilbert $A$-modules such that $[E] = [F]$, then it need not be true that $E$ and $F$ are isomorphic. Let $A = C_0(0,1] \otimes \mathcal{O}_3 \otimes \mathcal{K}$, where $\mathcal{O}_3$ is the Cuntz algebra with three generators. Voiculescu's homotopy invariance theorem (cf.\ \cite{dvv}) implies that $A$ is quasidiagonal, hence stably finite. Let $p, q \in \mathcal{O}_3 \otimes \mathcal{K}$ be two nonzero projections which are \emph{not} Murray-von Neumann equivalent. If $x \in C_0(0,1]$ denotes the function $t \mapsto t$, then we define $f_p = x \otimes p$ and $f_q = x \otimes q$ in $A$. Since $A$ is purely infinite in the sense of \cite{KR} and the ideals generated by $f_p$ and $f_q$ coincide, it follows that $[\overline{f_p A}] = [\overline{f_q A}] \in Cu(A)$. We claim that the modules $\overline{f_p A}$ and $\overline{f_q A}$ are not isomorphic.
Indeed, if they were isomorphic, then we could find $v \in A$ such that $v^* v = f_p$ and $\overline{vv^*A} = \overline{f_q A}$. (See \cite[Lemma 3.4.2]{ciuperca}; if $T\colon \overline{f_p A}\to\overline{f_q A}$ is an isomorphism, then $v = T(f_p^{1/2})$ has the asserted properties.) Letting $\pi\colon A \to \mathcal{O}_3 \otimes \mathcal{K}$ be the quotient map corresponding to evaluation at $1 \in (0,1]$, it follows that $\pi(v)^* \pi(v) = p$ and $\overline{\pi(v) \pi(v)^* (\mathcal{O}_3 \otimes \mathcal{K})} = \overline{q (\mathcal{O}_3 \otimes \mathcal{K})}$. Since $\pi(v) \pi(v)^*$ is a projection whose associated hereditary subalgebra agrees with the hereditary subalgebra generated by $q$, it follows that $\pi(v) \pi(v)^* = q$ (since both projections are units for the same algebra). This contradicts the assumption that $p$ and $q$ are not Murray-von Neumann equivalent, so $\overline{f_p A}$ and $\overline{f_q A}$ cannot be isomorphic.
\section{Questions and Related Results}
If the following question has an affirmative answer, then the proof of \cite[Corollary 5]{cowelliottiv} would show that $A$ has real rank zero if and only if the compacts are ``dense" in $Cu(A)$.
\begin{question} Can Corollary \ref{cor:isom} be extended to the ``closure" of the compact elements? That is, if $A$ is stably finite and $E$ and $F$ are Hilbert A-modules such that $[E]=[F] = \sup [C_n]$ for an increasing sequence of compact elements $[C_n]$, does it follow that $E\cong F$?
\end{question}
The next question was raised in \cite{cowelliottiv}, but we repeat it because the modules in Section \ref{sec:counterexample} are not counterexamples -- they mutually embed into each other. (To prove this, use the fact that $p$ is Murray-von Neumann equivalent to a subprojection of $q$, and vice versa.)
\begin{question} Are there two Hilbert modules $E$ and $F$ such that $[E] = [F]$, but $F$ is not isomorphic to a submodule of $E$?
\end{question}
\begin{question} If $x \in Cu(A)$ is compact, is there a projection $p \in A\otimes \mathcal{K}$ such that $x = \langle p \rangle$?
\end{question}
Of course, in the stably finite case the results of Section \ref{sec:main} tell us that much more is true, but for general C$^*$-algebras we don't know the answer to this question. However, we can give an affirmative answer in some interesting cases, as demonstrated below. First, a definition.
\begin{defn} An element $x \in Cu(A)$ will be called \emph{infinite} if $x+y=x$ for some non-zero $y\in Cu(A)$. Otherwise, $x$ will be called \emph{finite}.
\end{defn}
Note that $[\ell^2(A)] \in Cu(A)$ is always infinite.
\begin{lem}
\label{lem:unique}
If $A$ is simple, then $[\ell^2(A)] \in Cu(A)$ is the unique infinite element.
\end{lem}
\begin{proof} Assume $[E] + [F] = [E]$ for some nonzero Hilbert $A$-module $F$. Adding $[F]$ to both sides, we see that $[E] + 2[F] = [E]$; repeating this, we have that $[E] + k[F] = [E]$ for all $k \in \mathbb{N}$. Since addition in $Cu(A)$ is compatible with suprema of increasing sequences (cf.\ \cite[Theorem 1]{cowelliottiv}), it follows that $[E] + [\ell^2(F)] = [E] + \sup_k k[F] = \sup_k \left([E] + k[F]\right) = [E]$. Since $A$ is simple, $F$ is necessarily full and hence $\ell^2(F) \cong \ell^2(A)$ (\cite[Proposition 7.4]{lance}). Thus $$[E] = [E] + [\ell^2(F)] = [E \oplus \ell^2(A)] = [\ell^2(A)],$$ by Kasparov's stabilization theorem.
\end{proof}
In the proof of the following lemma, we use the operator inequality $$xbx^*+ y^*by\geq xby + y^*bx^*,$$ valid for any $b$ in $A^+$ and $x, y\in A$, which follows from the fact that $(x-y^*)b(x-y^*)^*\geq0$.
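Indeed, expanding the positive element $(x-y^*)b(x-y^*)^*$ gives
$$0\leq (x-y^*)b(x-y^*)^* = xbx^*-xby-y^*bx^*+y^*by,$$
which is exactly the asserted inequality.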
\begin{lem}\label{algsimple} Let $A$ be a stable algebraically simple C*-algebra.
\begin{enumerate}
\item For any non-zero $x\in Cu(A)$ there exists $n\in \mathbb{N}$ such that $nx=[A]$.
\item There exists a projection $q\in A$ such that $[A]=[qA]$. In particular, $[A]$ is a compact element of the Cuntz semigroup $Cu(A)$.
\end{enumerate}
\end{lem}
\begin{proof} It will be convenient to work in the original positive-element picture of the Cuntz semigroup. Our notation is by now standard (cf.\ \cite{PT}).
Proof of (1): Let $x=[\overline{bA}]$ for some $0\neq b\in A^+$ and let $a\in A$ be a strictly positive element.
(Stability implies that every right Hilbert $A$-module is isomorphic to a closed right ideal of $A$.) Since $A$
is algebraically simple, one can find $x_1,\ldots, x_k, y_1, \ldots, y_k \in A$ such that $a=\sum_{i=1}^k
x_iby_i$. Thus,
\begin{align*}
a\sim 2a=a+a^* &=\sum_{i=1}^k (x_iby_i + y_i^*bx_i^*)\\
& \leq \sum_{i=1}^k (x_ibx_i^*+y_i^*by_i)\\
& \lesssim x_1bx_1^*\oplus y_1^*by_1 \oplus \cdots \oplus x_kbx_k^*\oplus y_k^*by_k\\
& \lesssim b\oplus b\oplus \cdots \oplus b,
\varepsilonnd{align*}
where the last sum has $n=2k$ summands.
Since $A$ is stable, one can embed the Cuntz algebra $\mathcal{O}_n$ in the multiplier algebra $M(A)$. This gives us isometries $s_1,\ldots, s_n\in M(A)$ with orthogonal ranges. Set $b_i'=s_ibs_i^*$ and note that $b_i'\sim b$ and $b_i'\perp b_j'$ for $i\neq j$. Moreover, $a\lesssim b_1'+\cdots +b_n'\lesssim a$ (since $a$ is strictly positive, it Cuntz-dominates any element of $A$). Therefore, $\langle a \rangle = n\langle b\rangle = nx$, or equivalently, $[A] = nx$.
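Here we have used two standard facts, which we recall for convenience: first, $b_i'=(s_ib^{1/2})(s_ib^{1/2})^*\sim (s_ib^{1/2})^*(s_ib^{1/2})=b$, since $s_i^*s_i=1$; second, for pairwise orthogonal positive elements one has $\langle b_1'+\cdots+b_n'\rangle=\langle b_1'\rangle+\cdots+\langle b_n'\rangle$ in $W(A)$.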
Proof of (2): Since $A$ is stable and algebraically simple, \cite[Theorem 3.1]{BC} implies $A$ has a non-zero projection $p$. As above, we can find orthogonal projections $p_1,\ldots, p_n \in A$ such that $p_i\sim p$ and $\langle p_1+\cdots +p_n \rangle = n\langle p \rangle = [A]$. Defining $q = p_1+\cdots +p_n$, we are done.
\end{proof}
We'll also need a consequence of the work in Section \ref{sec:main}.
\begin{prop}
\label{stablyfinite}
If $A$ is stable, $\langle a \rangle \in W(A) = Cu(A)$ is compact and $0 \in \sigma(a)$ is not an isolated point, then $A$ contains a scaling element and $\langle a \rangle$ is infinite.
\end{prop}
\begin{proof} Assume $A$ contains no scaling element. Since $\langle a \rangle$ is compact, Proposition \ref{prop} implies that $H_a \subset \subset H_a$. As in the proof of $(2) \Longrightarrow (3)$ in Theorem \ref{thm:main}, there exists $\varepsilon > 0$ such that $[H_a] = [H_{a_\varepsilon}]$ and hence $H_a$ is isomorphic to a compactly contained submodule $E$ of $H_{a_\varepsilon}$. Lemma \ref{lem:equality} implies $E = H_a$, so $H_{a_\varepsilon} = H_a$ too. As we've seen, this implies $\sigma(a) \subset \{0\} \cup [\varepsilon, \infty)$, contradicting our hypothesis; hence, $A$ contains a scaling element.
To prove the second assertion, choose $\varepsilon > 0$ such that $[H_a] = [H_{a_\varepsilon}]$. Since $0 \in \sigma(a)$ is not isolated, we can find a nonzero positive function $f \in C_0(0,\|a\|]$ such that $f(t) = 0$ for all $t \geq \varepsilon$. Thus $f(a) + (a-\varepsilon)_+ \precsim a$ and $f(a) (a-\varepsilon)_+ = 0$. It follows that $$[H_{f(a)}] + [H_a] = [H_{f(a)}] + [H_{a_\varepsilon}] \leq [H_a]$$ and thus $[H_a]$ is infinite.
\end{proof}
\begin{thm} Let $x \in Cu(A)$ be compact.
\begin{enumerate}
\item If $A$ is simple, then there exists a projection $p \in A\otimes \mathcal{K}$ such that $x = \langle p \rangle$.
\item If $x$ is finite, then there exists a projection $p \in A\otimes \mathcal{K}$ such that $x = \langle p \rangle$.
\end{enumerate}
\end{thm}
\begin{proof} In both cases we may assume $A$ is stable.
Proof of (1): Fix a nonzero positive element $a\in A$ such that $x = [H_a]$. If $0 \in \sigma(a)$ is an isolated point, then functional calculus provides us with a Cuntz equivalent projection, and we're done. Otherwise Proposition \ref{stablyfinite} tells us that $x$ is infinite and $A$ contains a scaling element. By simplicity and Lemma \ref{lem:unique}, we have that $x = [\ell^2(A)] = [A]$ (by stability). Moreover, the existence of a scaling element ensures that $A$ is algebraically simple (see \cite[Theorem 1.2]{BC}). Hence part (2) of Lemma \ref{algsimple} provides the desired projection.
Proof of (2): Choose $a \in A_+$ such that $x = \langle a \rangle$. Since $x$ is finite, Proposition \ref{stablyfinite} implies $0 \in \sigma(a)$ is an isolated point, so we're done.
\end{proof}
\begin{rem} It is possible to improve part (2) of the theorem above. Namely, it is shown in \cite{ciuperca} that if $x \in Cu(A)$ is compact and there is no \emph{compact} element $y \in Cu(A)$ such that $x = x + y$, then there exists a projection $p \in A\otimes \mathcal{K}$ such that $x = \langle p \rangle$.
\end{rem}
\begin{thebibliography}{999}
\bibitem{BC} B. Blackadar and J. Cuntz, \emph{The structure of stable algebraically simple $C\sp{*}$-algebras}, Amer. J. Math. \textbf{104} (1982), 813--822.
\bibitem{ciuperca} A. Ciuperca, \emph{Some properties of the Cuntz semigroup and an isomorphism theorem for a certain class of non-simple C$^*$-algebras}, PhD Thesis, University of Toronto, 2008.
\bibitem{cowelliottiv} K. T. Coward, G. A. Elliott and C. Ivanescu, \emph{The Cuntz semigroup as an invariant for C$^*$-algebras}, J. Reine Angew. Math., to appear.
\bibitem{KR} E. Kirchberg and M. R{\o}rdam, \emph{Non-simple purely infinite $C\sp *$-algebras}, Amer. J. Math. \textbf{122} (2000), 637--666.
\bibitem{lance} E. C. Lance, \emph{Hilbert $C\sp *$-modules. A toolkit for operator algebraists}, London Mathematical Society Lecture Note Series, 210, Cambridge University Press, Cambridge, 1995.
\bibitem{PT} F. Perera and A. S. Toms, \emph{Recasting the Elliott conjecture}, Math. Ann. \textbf{338} (2007), 669--702.
\bibitem{dvv} D. V. Voiculescu, \emph{A note on quasi-diagonal $C\sp *$-algebras and homotopy}, Duke Math. J. \textbf{62} (1991), 267--271.
\end{thebibliography}
\end{document}
\begin{document}
\title[\fontsize{7}{9}\selectfont ]{Stability results of locally coupled wave equations with local Kelvin-Voigt damping: Cases when the supports of damping and coupling coefficients are disjoint}
\author{Mohammad Akil$^{1}$, Haidar Badawi$^{1}$, and Serge Nicaise$^1$}
\address{$^1$ Universit\'e Polytechnique Hauts-de-France, CERAMATHS/DEMAV,
Valenciennes, France}
\email{Mohammad.Akil@uphf.fr, Haidar.Badawi@uphf.fr, Serge.Nicaise@uphf.fr}
\keywords{Coupled wave equations, Kelvin-Voigt damping, strong stability, polynomial stability }
\setcounter{equation}{0}
\begin{abstract}
In this paper, we study the direct/indirect stability of locally coupled wave equations with local Kelvin-Voigt dampings/damping, under the assumption that the supports of the dampings and of the coupling coefficients are disjoint. First, we prove the well-posedness, strong stability, and polynomial stability of some one-dimensional coupled systems. Moreover, under some geometric control condition, we prove the well-posedness and strong stability in the multi-dimensional case.
\end{abstract}
\maketitle
\pagenumbering{roman}
\tableofcontents
\pagenumbering{arabic}
\setcounter{page}{1}
\section{Introduction}
\noindent
The direct and indirect stability of locally coupled wave equations with local damping has attracted much interest in recent years. The study of coupled systems is also motivated by several physical models, such as the Timoshenko and Bresse systems (see for instance \cite{BASSAM20151177,Akil2020,Akil2021,ABBresse,Wehbe08,Fatori01,FATORI2012600}).
The exponential or polynomial stability of the wave equation with a local Kelvin-Voigt damping is considered, for instance, in \cite{Liu-Rao:06,Tebou:16,BurqSun:22}. On the other hand, the direct and indirect stability of locally coupled wave equations with local viscous dampings is analyzed in \cite{Alabau-Leautaud:13,Kassemetal:19,Gerbietal:21}. In this paper, we are interested in locally coupled wave equations with local Kelvin-Voigt dampings. Before stating our main contributions, let us mention similar results for such systems. In 2019, Hayek {\it et al.} in \cite{Hayek} studied the stabilization of a multi-dimensional system
of weakly coupled wave equations with one or two local Kelvin-Voigt dampings and
a non-smooth coefficient at the interface, and established several stability results. In 2021, Akil {\it et al.} in \cite{Wehbe2021} studied the stability of an elastic/viscoelastic transmission problem of locally coupled waves with non-smooth coefficients, by considering:
\begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left(au_x +{\color{black}b_0 \chi_{(\alpha_1,\alpha_3)}} {\color{black}u_{tx}}\right)_x +{\color{black}c_0 \chi_{(\alpha_2,\alpha_4)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha_2,\alpha_4)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right.
\end{equation*}
where $a, b_0, L >0$, $c_0 \neq 0$, and $0<\alpha_1<\alpha_2<\alpha_3<\alpha_4<L$. They established a polynomial energy decay rate of type $t^{-1}$. In the same year, Akil {\it et al.} in \cite{ABWdelay}, studied the stability of a singular local interaction elastic/viscoelastic coupled wave equations with time delay, by considering:
\begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left[au_x +{\color{black} \chi_{(0,\beta)}}(\kappa_1 {\color{black}u_{tx}}+\kappa_2 u_{tx}(t-\tau))\right]_x +{\color{black}c_0 \chi_{(\alpha,\gamma)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha,\gamma)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right.
\end{equation*}
where $a, \kappa_1, L>0$, $\kappa_2, c_0 \neq 0$, and $0<\alpha <\beta <\gamma <L$. They proved that the energy of their system decays polynomially with rate $t^{-1}$. In 2021, Akil {\it et al.} in \cite{ABNWmemory} studied the stability of coupled wave models with local memory in a past history framework via non-smooth coefficients on the interface, by considering:
\begin{equation*} \left\{ \begin{array}{llll}
\displaystyle u_{tt}-\left(au_x +{\color{black} b_0 \chi_{(0,\beta)}} {\color{black}\int_0^{\infty}g(s)u_{x}(t-s)ds}\right)_x +{\color{black}c_0 \chi_{(\alpha,\gamma)}}y_t =0,& \text{in}\ (0,L)\times (0,\infty) ,&\\
y_{tt}-y_{xx}-{\color{black}c_0 \chi_{(\alpha,\gamma)}}u_t =0, &\text{in} \ (0,L)\times (0,\infty) ,&\\
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,& \text{in} \ (0,\infty) ,&
\end{array}\right.
\end{equation*}
where $a, b_0, L >0$, $c_0 \neq 0$, $0<\alpha<\beta<\gamma<L$, and $g:[0,\infty) \to (0,\infty)$ is the convolution kernel. They established an exponential energy decay rate when the two waves have the same speed of propagation; in the case of different speeds of propagation, they proved that the energy of their system decays polynomially with rate $t^{-1}$. In the same year, Akil {\it et al.} in \cite{akil2021ndimensional} studied the stability of a multi-dimensional elastic/viscoelastic transmission problem with Kelvin-Voigt damping and a non-smooth coefficient at the interface, and established some polynomial stability results under some geometric control condition. In the works cited above, the authors deal with locally coupled wave equations with local damping under the assumption that the damping and coupling regions intersect. The aim of this paper is to study the direct/indirect stability of locally coupled wave equations with Kelvin-Voigt dampings/damping localized via non-smooth coefficients/coefficient, under the assumption that the supports of the dampings and of the coupling coefficients are disjoint. In the first part of this paper, we consider the following one dimensional coupled system:
\begin{eqnarray}
u_{tt}-\left(au_x+bu_{tx}\right)_x+c y_t&=&0,\quad (x,t)\in (0,L)\times (0,\infty),\label{eq1}\\
y_{tt}-\left(y_x+dy_{tx}\right)_x-cu_t&=&0,\quad (x,t)\in (0,L)\times (0,\infty),\label{eq2}
\end{eqnarray}
with full Dirichlet boundary conditions,
\begin{equation}\label{eq3}
u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,\ t\in (0,\infty),
\end{equation}
and the following initial conditions
\begin{equation}\label{eq4}
u(\cdot,0)=u_0(\cdot),\ u_t(\cdot,0)=u_1(\cdot),\ y(\cdot,0)=y_0(\cdot)\quad \text{and}\quad y_t(\cdot,0)=y_1(\cdot), \ x \in (0,L).
\end{equation}
In this part, for all $b_0, d_0 >0$ and $c_0 \neq 0$, we treat the following three cases:\\[0.1in]
\textbf{Case 1 (See Figure \ref{p7-Fig1}):}
\begin{equation}\tag{${\rm C1}$}\label{C1}
\left\{\begin{array}{l}
b(x)=b_0 \chi_{(b_1,b_2)}(x)
,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad
d(x)=d_0\chi_{(d_1,d_2)}(x),\\[0.1in]
\text{where}\ 0<b_1<b_2<c_1<c_2<d_1<d_2<L.
\end{array}
\right.
\end{equation}
\textbf{Case 2 (See Figure \ref{p7-Fig2}):}
\begin{equation}\tag{${\rm C2}$}\label{C2}
\left\{\begin{array}{l}
b(x)=b_0 \chi_{(b_1,b_2)}(x)
,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad
d(x)=d_0\chi_{(d_1,d_2)}(x),\\[0.1in]
\text{where}\ 0<b_1<b_2<d_1<d_2<c_1<c_2<L.
\end{array}
\right.
\end{equation}
\textbf{Case 3 (See Figure \ref{p7-Fig3}):}
\begin{equation}\tag{${\rm C3}$}\label{C3}
\left\{\begin{array}{l}
b(x)=b_0 \chi_{(b_1,b_2)}(x)
,\ \quad c(x)=c_0\chi_{(c_1,c_2)}(x),\ \quad
d(x)=0,\\[0.1in]
\text{where}\ 0<b_1<b_2<c_1<c_2<L.
\end{array}
\right.
\end{equation}
\begin{figure}
\caption{Geometric description of the functions $b, c$ and $d$ in Case 1.}
\label{p7-Fig1}
\end{figure}
\begin{figure}
\caption{Geometric description of the functions $b,c$ and $d$ in Case 2.}
\label{p7-Fig2}
\end{figure}
\begin{figure}
\caption{Geometric description of the functions $b$ and $c$ in Case 3.}
\label{p7-Fig3}
\end{figure}
\noindent In the second part, we consider the following multi-dimensional coupled system:
\begin{eqnarray}\label{ND-1}
u_{tt}-\divv (\nabla u+b\nabla u_t)+cy_t&=&0\quad \text{in}\ \Omega\times (0,\infty),\\
y_{tt}-\Delta y-cu_t&=&0\quad \text{in}\ \Omega\times (0,\infty),
\end{eqnarray}
with full Dirichlet boundary condition
\begin{equation}\label{ND-2}
u=y=0\quad \text{on}\quad \Gamma\times (0,\infty),
\end{equation}
and the following initial condition
\begin{equation}\label{ND-5}
u(\cdot,0)=u_0(\cdot),\ u_t(\cdot,0)=u_1(\cdot),\ y(\cdot,0)=y_0(\cdot)\ \text{and}\ y_t(\cdot,0)=y_1(\cdot) \ \text{in} \ \Omega,
\end{equation}
where $\Omega \subset \mathbb R^d$, $d\geq 2$ is an open and bounded set with boundary $\Gamma$ of class $C^2$. Here, $b,c\in L^{\infty}(\Omega)$ are such that $b:\Omega\to \mathbb R_+$ is the viscoelastic damping coefficient, $c:\Omega\to \mathbb R$ is the coupling function and
\begin{equation}\label{ND-3}
b(x)\geq b_0>0\ \ \text{in}\ \ \omega_b\subset \Omega, \quad c(x)\geq c_0\neq 0\ \ \text{in}\ \ \omega_c\subset \Omega\quad \text{and}\quad c(x)=0\ \ \text{on}\ \ \Omega\backslash \omega_c
\end{equation}
and
\begin{equation}\label{ND-4}
\meas\left(\overline{\omega_c}\cap \Gamma\right)>0\quad \text{and}\quad \overline{\omega_b}\cap \overline{\omega_c}=\emptyset.
\end{equation}
In the first part of this paper, we study the direct and indirect stability of system \eqref{eq1}-\eqref{eq4} by considering the three cases \eqref{C1}, \eqref{C2}, and \eqref{C3}. In Subsection \ref{WP}, we prove the well-posedness of our system by using a semigroup approach. In Subsection \ref{subss}, by using a general criterion of Arendt-Batty, we prove the strong stability of our system in the absence of compactness of the resolvent. Finally, in Subsection \ref{secps}, by using a frequency domain approach combined with a specific multiplier method, we prove that the energy of our system decays polynomially as $t^{-4}$ or $t^{-1}$.\\\linebreak
In the second part of this paper, we study the indirect stability of system \eqref{ND-1}-\eqref{ND-5}. In Subsection \ref{wpnd}, we prove the well-posedness of our system by using a semigroup approach. Finally, in Subsection \ref{Strong Stability-ND}, under some geometric control condition, we prove the strong stability of this system.
\section{Direct and Indirect Stability in the one dimensional case}\label{sec1}
In this section, we study the well-posedness, strong stability, and polynomial stability of system \eqref{eq1}-\eqref{eq4}. The main results of this section are presented in the following three subsections.
\subsection{Well-Posedness}\label{WP}
\noindent In this subsection, we will establish the well-posedness of system \eqref{eq1}-\eqref{eq4} by using a semigroup approach. The energy of system \eqref{eq1}-\eqref{eq4} is given by
\begin{equation*}
E(t)=\frac{1}{2}\int_0^L \left(|u_t|^2+a|u_x|^2+|y_t|^2+|y_x|^2\right)dx.
\end{equation*}
Let $\left(u,u_{t},y,y_{t}\right)$ be a regular solution of \eqref{eq1}-\eqref{eq4}. Multiplying \eqref{eq1} and \eqref{eq2} by $\overline{u_t}$ and $\overline{y_t}$ respectively, integrating over $(0,L)$, taking the real part, and using the boundary conditions \eqref{eq3}, we get
\begin{equation*}
E^\prime(t)=- \int_0^L \left(b|u_{tx}|^2+d|y_{tx}|^2\right)dx.
\end{equation*}
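Let us note that the coupling terms cancel in this computation: since $c$ is real-valued, we have
$$\mathbb Re\left(\int_0^L c\,y_t\overline{u_t}\,dx\right)=\mathbb Re\left(\int_0^L c\,u_t\overline{y_t}\,dx\right),$$
so the contribution of the term $+cy_t$ in \eqref{eq1} is exactly compensated by that of $-cu_t$ in \eqref{eq2}.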
Thus, if \eqref{C1} or \eqref{C2} or \eqref{C3} holds, we get $E^\prime(t)\leq0$. Therefore, system \eqref{eq1}-\eqref{eq4} is dissipative in the sense that its energy is non-increasing with respect to time $t$. Let us define the energy space $\mathcal{H}$ by
\begin{equation*}
\mathcal{H}=(H_0^1(0,L)\times L^2(0,L))^2.
\end{equation*}
\noindent The energy space $\mathcal{H}$ is equipped with the following inner product
$$
\left(U,U_1\right)_\mathcal{H}=\int_{0}^Lv\overline{{v}}_1dx+a\int_{0}^Lu_x(\overline{{u}}_1)_xdx+\int_{0}^Lz\overline{{z}}_1dx+\int_{0}^Ly_x(\overline{{y}}_1)_xdx,
$$
for all $U=\left(u,v,y,z\right)^\top$ and $U_1=\left(u_1,v_1,y_1,z_1\right)^\top$ in $\mathcal{H}$.
We define the unbounded linear operator $\mathcal{A}: D\left(\mathcal{A}\right)\subset \mathcal{H}\longrightarrow \mathcal{H}$ by
\begin{equation*}
D(\mathcal{A})=\left\{\begin{array}{l}
\displaystyle
U=(u,v,y,z)^\top \in\mathcal{H};\ v,z\in H_0^1(0,L), \ (au_{x}+bv_{x})_{x}\in L^2(0,L), \ (y_{x}+dz_x)_x\in L^2(0,L)
\end{array}\right\}
\end{equation*}
and
$$
\mathcal{A}\left(u, v,y, z\right)^\top=\left(v,(au_{x}+bv_{x})_{x}-cz, z, (y_x+dz_x)_x+cv \right)^{\top}, \ \forall U=\left(u, v,y, z\right)^\top \in D\left(\mathcal{A}\right).
$$
\noindent Now, if $U=(u,u_t,y,y_t)^\top$ is the state of system \eqref{eq1}-\eqref{eq4}, then it is transformed into the following first order evolution equation
\begin{equation}\label{eq-2.9}
U_t=\mathcal{A}U,\quad
U(0)=U_0,
\end{equation}
where $U_0=(u_0,u_1,y_0,y_1)^\top \in \mathcal H$.
\begin{pro}\label{mdissipative}
{\rm
If \eqref{C1}, \eqref{C2}, or \eqref{C3} holds, then the unbounded linear operator $\mathcal A$ is m-dissipative in the Hilbert space $\mathcal H$.}
\end{pro}
\begin{proof}
For all $U=(u,v,y,z)^{\top}\in D(\mathcal{A})$, we have
\begin{equation*}
\mathbb Re\left<\mathcal{A}U,U\right>_{\mathcal{H}}=-\int_0^Lb\abs{v_x}^2dx-\int_0^Ld\abs{z_x}^2dx\leq 0,
\end{equation*}
which implies that $\mathcal{A}$ is dissipative. Now, similarly to Proposition 2.1 in \cite{Wehbe2021} (see also \cite{ABWdelay} and \cite{ABNWmemory}), we can prove that there exists a unique solution $U=(u,v,y,z)^{\top}\in D(\mathcal{A})$ of
\begin{equation*}
-\mathcal{A}U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top\in \mathcal{H}.
\end{equation*}
Then $0\in \rho(\mathcal{A})$ and $\mathcal{A}$ is an isomorphism. Since $\rho(\mathcal{A})$ is open in $\mathbb{C}$ (see Theorem 6.7 (Chapter III) in \cite{Kato01}), we easily get $R(\lambda I -\mathcal{A}) = {\mathcal{H}}$ for a sufficiently small $\lambda>0$. This, together with the dissipativeness of $\mathcal{A}$, implies that $D\left(\mathcal{A}\right)$ is dense in ${\mathcal{H}}$ and that $\mathcal{A}$ is m-dissipative in ${\mathcal{H}}$ (see Theorems 4.5, 4.6 in \cite{Pazy01}).
\end{proof}
According to the Lumer-Phillips theorem (see \cite{Pazy01}), the operator $\mathcal A$ generates a $C_{0}$-semigroup of contractions $e^{t\mathcal A}$ in $\mathcal H$, which gives the well-posedness of \eqref{eq-2.9}.
Then, we have the following result:
\begin{theoreme}{\rm
For all $U_0 \in \mathcal H$, system \eqref{eq-2.9} admits a unique weak solution $$U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,\mathcal H).
$$ Moreover, if $U_0 \in D(\mathcal A)$, then the system \eqref{eq-2.9} admits a unique strong solution $$U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A))\cap C^1 (\mathbb R_+ ,\mathcal H).$$}
\end{theoreme}
\subsection{Strong Stability}\label{subss}
In this subsection, we will prove the strong stability of system \eqref{eq1}-\eqref{eq4}. We define the following conditions:
\begin{equation}\label{SSC1}\tag{${\rm SSC1}$}
\eqref{C1} \ \text{holds}\quad \text{and} \quad \abs{c_0}<\min\left(\frac{\sqrt{a}}{c_2-c_1},\frac{1}{c_2-c_1}\right),
\end{equation}
\begin{equation}\label{SSC2}\tag{${\rm SSC2}$}
\eqref{C3} \ \text{holds},\quad a=1\quad \text{and}\quad \abs{c_0}<\frac{1}{c_2-c_1}.
\end{equation}
The main result of this section is the following theorem.
\begin{theoreme}\label{Th-SS1}
{\rm Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, the $C_0$-semigroup of contractions $\left(e^{t\mathcal{A}}\right)_{t\geq 0}$ is strongly stable in $\mathcal{H}$; i.e. for all $U_0\in \mathcal{H}$, the solution of \eqref{eq-2.9} satisfies
$$
\lim_{t\to +\infty}\|e^{t\mathcal{A}}U_0\|_{\mathcal{H}}=0.
$$}
\end{theoreme}
\noindent According to Theorem \ref{App-Theorem-A.2}, to prove Theorem \ref{Th-SS1}, we need to prove that the operator $\mathcal A$ has no pure imaginary eigenvalues and that $\sigma(\mathcal A)\cap i\mathbb R $ is countable. The proof is divided into the following lemmas.
\begin{lemma}\label{ker-SS123}
{\rm
Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, for all ${\lambda}\in \mathbb{R}$, $i{\lambda}I-\mathcal{A}$ is injective, i.e.
$$
\ker\left(i{\lambda}I-\mathcal{A}\right)=\left\{0\right\}.
$$}
\end{lemma}
\begin{proof}
From Proposition \ref{mdissipative}, we have $0\in \rho(\mathcal{A})$. We still need to show the result for $\la\in \mathbb R^{\ast}$. For this aim, suppose that there exists a real number $\la\neq 0$ and $U=\left(u,v,y,z\right)^\top\in D(\mathcal A)$ such that
\begin{equation*}
\mathcal A U=i{\lambda}U.
\end{equation*}
Equivalently, we have
\begin{eqnarray}
v&=&i{\lambda}u,\label{eq-2.20}\\
(au_{x}+bv_{x})_{x}-cz&=&i{\lambda}v,\label{eq-2.21}\\
z&=&i{\lambda}y,\label{eq-2.22}\\
(y_{x}+dz_x)_x+cv&=&i{\lambda}z.\label{eq-2.23}
\end{eqnarray}
Next, a straightforward computation gives
\begin{equation}\label{Re}
0=\mathbb Re\left<i{\lambda}U,U\right>_{\mathcal H}=\mathbb Re\left<\mathcal A U,U\right>_{\mathcal H}=-\int_0^L b|v_x|^2dx-\int_0^L d|z_x|^2dx.
\end{equation}
Inserting \eqref{eq-2.20} and \eqref{eq-2.22} in \eqref{eq-2.21} and \eqref{eq-2.23}, we get
\begin{eqnarray}
\la^2u+(au_{x}+i{\lambda}bu_x)_x-i{\lambda}cy&=&0\quad \text{in}\quad (0,L),\label{eq-2.27}\\
\la^2y+(y_{x}+i{\lambda}dy_x)_x+i{\lambda}cu&=&0\quad \text{in}\quad (0,L),\label{eq-2.2.8}
\end{eqnarray}
with the boundary conditions
\begin{equation}\label{boundaryconditionker}
u(0)=u(L)=y(0)=y(L)=0.
\end{equation}
$\bullet$ \textbf{Case 1:} Assume that \eqref{SSC1} holds.
From \eqref{eq-2.20}, \eqref{eq-2.22} and \eqref{Re}, we deduce that
\begin{equation}\label{2.10}
u_x= v_x=0 \ \ \text{in} \ \ (b_1,b_2) \ \ \text{and} \ \ y_x=z_x =0 \ \ \text{in} \ \ (d_1,d_2).
\end{equation}
Using \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{2.10}, we obtain
\begin{equation}\label{2interval}
\la^2u+au_{xx}=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad \la^2y+y_{xx}=0\ \ \text{in}\ \ (c_2,L).
\end{equation}
Differentiating the above equations with respect to $x$ and using \eqref{2.10}, we get
\begin{equation}\label{2interval1}
\left\{\begin{array}{lll}
\la^2u_x+au_{xxx}=0&\text{in}&(0,c_1),\\[0.1in]
u_x=0&\text{in}&(b_1,b_2)\subset (0,c_1),
\end{array}
\right.\quad \text{and}\quad
\left\{\begin{array}{lll}
\la^2y_x+y_{xxx}=0&\text{in}&(c_2,L),\\[0.1in]
y_x=0&\text{in}&(d_1,d_2)\subset (c_2,L).
\end{array}
\right.
\end{equation}
Using the unique continuation theorem, we get
\begin{equation}\label{2interval2}
u_x=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y_x=0\ \ \text{in}\ \ (c_2,L).
\end{equation}
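Indeed, since ${\lambda}\neq 0$, the derivatives $u_x$ and $y_x$ solve the second order equations $\la^2w+aw_{xx}=0$ and $\la^2w+w_{xx}=0$, respectively, whose solutions are linear combinations of $\cos$ and $\sin$ and hence real-analytic; vanishing on a nonempty open subinterval therefore forces them to vanish identically on the whole interval under consideration.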
Using \eqref{2interval2} and the fact that $u(0)=y(L)=0$, we get
\begin{equation}\label{2interval3}
u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y=0\ \ \text{in}\ \ (c_2,L).
\end{equation}
Now, our aim is to prove that $u=y=0 \ \text{in} \ (c_1,c_2)$. For this aim, using \eqref{2interval3} and the fact that $u, y\in C^1([0,L])$, we obtain the following boundary conditions
\begin{equation}\label{1c1c2}
u(c_1)=u_x(c_1)=y(c_2)=y_x(c_2)=0.
\end{equation}
Multiplying \eqref{eq-2.27} by $-2(x-c_2)\overline{u}_x$, integrating over $(c_1,c_2)$ and taking the real part, we get
\begin{equation}\label{ST1step2}
-\int_{c_1}^{c_2}\la^2(x-c_2)(\abs{u}^2)_xdx-a\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx+2\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0,
\end{equation}
using integration by parts and \eqref{1c1c2}, we get
\begin{equation}\label{ST2step2}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+2\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0.
\end{equation}
Multiplying \eqref{eq-2.2.8} by $-2(x-c_1)\overline{y}_x$, integrating over $(c_1,c_2)$, taking the real part, and using the same argument as above, we get
\begin{equation}\label{ST3step2}
\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx+2\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y}_x dx\right)=0.
\end{equation}
Adding \eqref{ST2step2} and \eqref{ST3step2}, we get
\begin{equation}\label{ST4step2}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx\leq 2\abs{\la}\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\left(\abs{y}\abs{u_x}+\abs{u}\abs{y_x}\right)dx.
\end{equation}
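More precisely, the two cross terms in \eqref{ST4step2} are handled by Young's inequality in the form
$$2\abs{\la}\abs{c_0}(c_2-c_1)\abs{y}\abs{u_x}\leq \frac{c_0^2(c_2-c_1)^2}{a}\abs{{\lambda}y}^2+a\abs{u_x}^2\quad \text{and}\quad 2\abs{\la}\abs{c_0}(c_2-c_1)\abs{u}\abs{y_x}\leq c_0^2(c_2-c_1)^2\abs{{\lambda}u}^2+\abs{y_x}^2.$$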
Using Young's inequality in \eqref{ST4step2}, we get
\begin{equation}\label{ST5step2}
\begin{array}{c}
\displaystyle
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx\leq \frac{c_0^2(c_2-c_1)^2}{a}\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx
\\
\displaystyle
+\, a\int_{c_1}^{c_2}\abs{u_x}^2dx+c_0^2(c_2-c_1)^2\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+\int_{c_1}^{c_2}\abs{y_x}^2dx,
\end{array}
\end{equation}
consequently, we get
\begin{equation}\label{ST6step2}
\left(1-\frac{c_0^2(c_2-c_1)^2}{a}\right)\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+\left(1-c_0^2(c_2-c_1)^2\right)\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx\leq 0.
\end{equation}
Thus, from the above inequality and \eqref{SSC1}, we get
\begin{equation}\label{0c1c2}
u=y=0 \ \ \text{in} \ \ (c_1,c_2).
\end{equation}
Next, we need to prove that $u=0$ in $(c_2,L)$ and $y=0$ in $(0,c_1)$. For this aim, from \eqref{0c1c2} and the fact that $u,y \in C^1([0,L])$, we obtain
\begin{equation}\label{ST1step3}
u(c_2)=u_x(c_2)=0\quad \text{and}\quad y(c_1)=y_x(c_1)=0.
\end{equation}
It follows from \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{ST1step3} that
\begin{equation}\label{ST2step3}
\left\{\begin{array}{lll}
\la^2u+au_{xx}=0\ \ \text{in}\ \ (c_2,L),\\[0.1in]
u(c_2)=u_x(c_2)=u(L)=0,
\end{array}
\right.\quad \text{and}\quad
\left\{\begin{array}{rcc}
\la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1),\\[0.1in]
y(0)=y(c_1)=y_x(c_1)=0.
\end{array}
\right.
\end{equation}
The Holmgren uniqueness theorem yields
\begin{equation}\label{2.25}
u=0 \ \ \text{in} \ \ (c_2,L) \ \ \text{and} \ \ y=0 \ \ \text{in} \ \ (0,c_1).
\end{equation}
Therefore, from \eqref{eq-2.20}, \eqref{eq-2.22}, \eqref{2interval3}, \eqref{0c1c2} and \eqref{2.25}, we deduce that
$$
U=0.
$$
$\bullet$ \textbf{Case 2:} Assume that \eqref{C2} holds. From \eqref{eq-2.20}, \eqref{eq-2.22} and \eqref{Re}, we deduce that
\begin{equation}\label{2.10*}
u_x= v_x=0 \ \ \text{in} \ \ (b_1,b_2) \ \ \text{and} \ \ y_x=z_x =0 \ \ \text{in} \ \ (d_1,d_2).
\end{equation}
Using \eqref{eq-2.27}, \eqref{eq-2.2.8} and \eqref{2.10*}, we obtain
\begin{equation}\label{C22interval}
\la^2u+au_{xx}=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad \la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1).
\end{equation}
Differentiating the above equations with respect to $x$ and using \eqref{2.10*}, we get
\begin{equation}\label{C22interval1}
\left\{\begin{array}{lll}
\la^2u_x+au_{xxx}=0&\text{in}&(0,c_1),\\[0.1in]
u_x=0&\text{in}&(b_1,b_2)\subset (0,c_1),
\end{array}
\right.\quad \text{and}\quad
\left\{\begin{array}{lll}
\la^2y_x+y_{xxx}=0&\text{in}&(0,c_1),\\[0.1in]
y_x=0&\text{in}&(d_1,d_2)\subset (0,c_1).
\end{array}
\right.
\end{equation}
Using the unique continuation theorem, we get
\begin{equation}\label{C22interval2}
u_x=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y_x=0\ \ \text{in}\ \ (0,c_1).
\end{equation}
From \eqref{C22interval2} and the fact that $u(0)=y(0)=0$, we get
\begin{equation}\label{C22interval3}
u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad y=0\ \ \text{in}\ \ (0,c_1).
\end{equation}
Using the fact that $u,y\in C^1([0,L])$ and \eqref{C22interval3}, we get
\begin{equation}\label{C21}
u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=0.
\end{equation}
Now, using the definition of $c(x)$ in \eqref{eq-2.27}-\eqref{eq-2.2.8}, \eqref{2.10*}, \eqref{C21} and the Holmgren theorem, we get
$$u=y=0\ \text{ in} \ (c_1,c_2).$$
Again, using the fact that $u,y\in C^1([0,L])$, we get
\begin{equation}\label{C22}
u(c_2)=u_x(c_2)=y(c_2)=y_x(c_2)=0.
\end{equation}
Now, using the same argument as in Case 1, we obtain
$$u=y=0 \ \text{in} \ (c_2,L),$$
consequently, we deduce that
$$
U=0.
$$
$\bullet$ \textbf{Case 3:} Assume that \eqref{SSC2} holds.
\noindent Using the same argument as in Cases 1 and 2, we obtain
\begin{equation}\label{C3-SST1}
u=0\ \ \text{in}\ \ (0,c_1)\quad \text{and}\quad u(c_1)=u_x(c_1)=0.
\end{equation}
\textbf{Step 1.} The aim of this step is to prove that
\begin{equation}\label{2c1c2}
\int_{c_1}^{c_2}\abs{u}^2dx=\int_{c_1}^{c_2}\abs{y}^2dx.
\end{equation}
For this aim, multiplying \eqref{eq-2.27} by $\overline{y}$ and \eqref{eq-2.2.8} by $\overline{u}$ and using integration by parts, we get
\begin{eqnarray}
\int_{0}^{L}\la^2u\overline{y}dx-\int_{0}^{L}u_x\overline{y_x}dx-i{\lambda}c_0\int_{c_1}^{c_2}\abs{y}^2dx&=&0,\label{3c1c2}\\[0.1in]
\int_{0}^{L}\la^2y\overline{u}dx-\int_{0}^{L}y_x\overline{u_x}dx+i{\lambda}c_0\int_{c_1}^{c_2}\abs{u}^2dx&=&0.\label{4c1c2}
\end{eqnarray}
Adding \eqref{3c1c2} and \eqref{4c1c2}, taking the imaginary part, we get \eqref{2c1c2}.\\[0.1in]
\textbf{Step 2.}
Multiplying \eqref{eq-2.27} by $-2(x-c_2)\overline{u}_x$, integrating over $(c_1,c_2)$ and taking the real part, we get
\begin{equation}\label{C3ST3step2}
-\mathbb Re\left(\int_{c_1}^{c_2}\la^2(x-c_2)(\abs{u}^2)_xdx\right)-\mathbb Re\left(\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx\right)+2\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0,
\end{equation}
using integration by parts in \eqref{C3ST3step2} and \eqref{C3-SST1}, we get
\begin{equation}\label{C3ST4step2}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+a\int_{c_1}^{c_2}\abs{u_x}^2dx+2\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u}_xdx\right)=0.
\end{equation}
Using Young's inequality in \eqref{C3ST4step2}, we obtain
\begin{equation}\label{C3ST5step2}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq \abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u_x}^2dx.
\end{equation}
Inserting \eqref{2c1c2} in \eqref{C3ST5step2}, we get
\begin{equation}\label{C3ST6step2}
\left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}\left(\abs{{\lambda}u}^2+\abs{u_x}^2\right)dx\leq 0.
\end{equation}
According to \eqref{SSC2} and \eqref{2c1c2}, we get
\begin{equation}\label{C3ST7step2}
u=y=0\quad \text{in}\quad (c_1,c_2).
\end{equation}
\textbf{Step 3.} Using the fact that $u,y\in C^1([0,L])$ together with \eqref{C3ST7step2}, we get
\begin{equation}\label{C3ST1step3}
u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0.
\end{equation}
Now, from \eqref{eq-2.27}, \eqref{eq-2.2.8} and the definition of $c$, we get
\begin{equation*}
\left\{\begin{array}{lll}
\la^2u+u_{xx}=0\ \ \text{in} \ \ (c_2,L),\\
u(c_2)=u_x(c_2)=0,
\end{array}
\right.\quad \text{and}\quad
\left\{\begin{array}{lll}
\la^2y+y_{xx}=0\ \ \text{in}\ \ (0,c_1)\cup (c_2,L),\\
y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0.
\end{array}
\right.
\end{equation*}
From the above systems and Holmgren uniqueness Theorem, we get
\begin{equation}\label{C3ST2step3}
u=0\ \ \text{in}\ \ (c_2,L)\quad \text{and}\quad y=0\ \ \text{in}\ \ (0,c_1)\cup (c_2,L).
\end{equation}
\noindent Consequently, using \eqref{C3-SST1}, \eqref{C3ST7step2} and \eqref{C3ST2step3}, we get $U=0$. The proof is thus completed.
\end{proof}
\begin{lemma}\label{surjectivity}
{\rm
Assume that \eqref{SSC1} or \eqref{C2} or \eqref{SSC2} holds. Then, for all ${\lambda}\in \mathbb{R}$, we have
$$
R\left(i{\lambda}I-\mathcal{A}\right)=\mathcal{H}.
$$}
\end{lemma}
\begin{proof}
See Lemma 2.5 in \cite{Wehbe2021} (see also \cite{ABNWmemory}).
\end{proof}
\noindent \textbf{Proof of Theorem \ref{Th-SS1}}. From Lemma \ref{ker-SS123}, we obtain that the operator $\mathcal{A}$ has no pure imaginary eigenvalues (i.e. $\sigma_p(\mathcal A)\cap i\mathbb R=\emptyset$). Moreover, from Lemma \ref{surjectivity} and with the help of the closed graph theorem of Banach, we deduce that $\sigma(\mathcal A)\cap i\mathbb R=\emptyset$. Therefore, according to Theorem \ref{App-Theorem-A.2}, we get that the $C_0$-semigroup $(e^{t\mathcal A})_{t\geq0}$ is strongly stable. The proof is thus complete. \xqed{$\square$}
\subsection{Polynomial Stability}\label{secps}
\noindent In this subsection, we study the polynomial stability of system \eqref{eq1}-\eqref{eq4}. Our main results in this subsection are the following two theorems.
\begin{theoreme}\label{1pol}
{\rm
Assume that \eqref{SSC1} holds. Then, for all $U_0 \in D(\mathcal A)$, there exists a constant $C>0$ independent of $U_0$ such that
\begin{equation}\label{Energypol1}
E(t)\leq \frac{C}{t^4}\|U_0\|^2_{D(\mathcal A)},\quad t>0.
\end{equation}}
\end{theoreme}
\begin{theoreme}\label{2pol}
{\rm
Assume that \eqref{SSC2} holds. Then, for all $U_0 \in D(\mathcal A)$, there exists a constant $C>0$ independent of $U_0$ such that
\begin{equation}\label{Energypol2}
E(t)\leq \frac{C}{t}\|U_0\|^2_{D(\mathcal A)},\quad t>0.
\end{equation}}
\end{theoreme}
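Let us briefly recall the mechanism behind these rates. For a $C_0$-semigroup of contractions with $i\mathbb R\subset \rho(\mathcal A)$, a resolvent estimate $\left\|(i{\lambda}I-\mathcal A)^{-1}\right\|_{\mathcal{L}(\mathcal{H})}=O\left(|\la|^{\ell}\right)$ as $|\la|\to \infty$ yields, according to Theorem \ref{bt},
$$\|e^{t\mathcal A}U_0\|_{\mathcal H}\leq \frac{C}{t^{1/\ell}}\|U_0\|_{D(\mathcal A)},\quad t>0,\ U_0\in D(\mathcal A),$$
so that $E(t)\leq \frac{C}{t^{2/\ell}}\|U_0\|^2_{D(\mathcal A)}$; the choices $\ell=\frac{1}{2}$ and $\ell=2$ below thus correspond to the rates $t^{-4}$ and $t^{-1}$ in \eqref{Energypol1} and \eqref{Energypol2}.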
\noindent According to Theorem \ref{bt}, the polynomial energy decay estimates \eqref{Energypol1} and \eqref{Energypol2} hold if the following conditions
\begin{equation}\label{H1}\tag{${\rm{H_1}}$}
i\mathbb R\subset \rho(\mathcal{A})
\end{equation}
and
\begin{equation}\label{H2}\tag{${\rm{H_2}}$}
\limsup_{{\lambda}\in \mathbb R, \ |\la| \to \infty}\frac{1}{|\la|^\ell}\left\|(i{\lambda}I-\mathcal A)^{-1}\right\|_{\mathcal{L}(\mathcal{H})}<\infty \ \ \text{with} \ \ \ell=\left\{\begin{array}{lll}
\frac{1}{2} \ \ \text{for Theorem \ref{1pol}},
\\
2 \ \ \text{for Theorem \ref{2pol}},
\end{array}\right.
\end{equation}
are satisfied. Condition \eqref{H1} has already been proved in Subsection \ref{subss}, so it remains to prove \eqref{H2}; we argue by contradiction. Suppose that \eqref{H2} is false; then there exists $\left\{\left(\la_n,U_n:=(u_n,v_n,y_n,z_n)^\top\right)\right\}_{n\geq 1}\subset \mathbb R^{\ast}_+\times D(\mathcal A)$ with
\begin{equation}\label{pol1}
\la_n\to \infty \ \text{as} \ n\to \infty \quad \text{and}\quad \|U_n\|_{\mathcal{H}}=1, \ \forall n\geq1,
\end{equation}
such that
\begin{equation}\label{pol2-w}
\left(\la_n\right)^{\ell}\left(i\la_nI-\mathcal A\right)U_n=F_n:=(f_{1,n},f_{2,n},f_{3,n},f_{4,n})^{\top}\to 0 \ \ \text{in}\ \ \mathcal{H}, \ \text{as} \ n\to \infty.
\end{equation}
For simplicity, we drop the index $n$. Equivalently, from \eqref{pol2-w}, we have
\begin{eqnarray}
i{\lambda}u-v&=&\dfrac{f_1}{\la^{\ell}}, \ f_1 \to 0 \ \ \text{in}\ \ H_0^1(0,L),\label{pol3}\\
i{\lambda}v-\left(au_x+bv_x\right)_x+cz&=&\dfrac{f_2}{\la^{\ell}}, \ f_2 \to 0 \ \ \text{in}\ \ L^2(0,L),\label{pol4}\\
i{\lambda}y-z&=&\dfrac{f_3}{\la^{\ell}}, \ f_3 \to 0 \ \ \text{in}\ \ H_0^1(0,L),\label{pol5}\\
i{\lambda}z-(y_x+dz_x)_x-cv&=&\dfrac{f_4}{\la^{\ell}},\ f_4 \to 0 \ \ \text{in} \ \ L^2(0,L).\label{pol6}
\end{eqnarray}
\subsubsection{Proof of Theorem \ref{1pol}} In this subsection, we prove Theorem \ref{1pol} by checking condition \eqref{H2}; that is, we reach a contradiction with \eqref{pol1} by showing that $\|U\|_{\mathcal{H}}=o(1)$. For clarity, we divide the proof into several lemmas.
By taking the inner product of \eqref{pol2-w} with $U$ in $\mathcal{H}$, we remark that
\begin{equation*}
\int _0^L b\left|v_{x}\right|^2dx+\int_0^Ld\abs{z_x}^2dx=-\mathbb Re\left(\left<\mathcal A U,U\right>_{\mathcal H}\right)=\la^{-\frac{1}{2}}\mathbb Re\left(\left<F,U\right>_{\mathcal H}\right)=o\left(\la^{-\frac{1}{2}}\right).
\end{equation*}
Thus, from the definitions of $b$ and $d$, we get
\begin{equation}\label{eq-4.9}
\int _{b_1}^{b_2}\left|v_{x}\right|^2dx=o\left(\la^{-\frac{1}{2}}\right)\quad \text{and}\quad \int _{d_1}^{d_2}\left|z_{x}\right|^2dx=o\left(\la^{-\frac{1}{2}}\right).
\end{equation}
Using \eqref{pol3}, \eqref{pol5}, \eqref{eq-4.9}, and the fact that $f_1,f_3\to 0$ in $H_0^1(0,L)$, we get
\begin{equation}\label{eq-5.0}
\int_{b_1}^{b_2}\abs{u_x}^2dx=\frac{o(1)}{\la^{\frac{5}{2}}}\quad \text{and}\quad \int_{d_1}^{d_2}\abs{y_x}^2dx=\frac{o(1)}{\la^{\frac{5}{2}}}.
\end{equation}
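Indeed, from \eqref{pol3} we may write $u_x=(i{\lambda})^{-1}\left(v_x+\la^{-\frac{1}{2}}(f_1)_x\right)$, so that
$$\int_{b_1}^{b_2}\abs{u_x}^2dx\leq \frac{2}{\la^{2}}\int_{b_1}^{b_2}\abs{v_x}^2dx+\frac{2}{\la^{3}}\int_{b_1}^{b_2}\abs{(f_1)_x}^2dx=\frac{o(1)}{\la^{\frac{5}{2}}},$$
and similarly for $y_x$, using \eqref{pol5}.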
\begin{lemma}\label{F-est}
{\rm
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{F-est1}
\int_{b_1}^{b_2}\abs{v}^2dx=\frac{o(1)}{\la^{\frac{3}{2}}}\quad \text{and}\quad \int_{d_1}^{d_2}\abs{z}^2dx=\frac{o(1)}{\la^{\frac{3}{2}}}.
\end{equation}}
\end{lemma}
\begin{proof}
We give the proof of the first estimation in \eqref{F-est1}; the second one can be done in a similar way. To this end, we fix $g\in C^1\left([b_1,b_2]\right)$ such that
$$
g(b_2)=-g(b_1)=1,\quad \max_{x\in[b_1,b_2]}\abs{g(x)}=m_g\ \ \text{and}\ \ \max_{x\in [b_1,b_2]}\abs{g'(x)}=m_{g'}.
$$
The proof is divided into several steps: \\
\textbf{Step 1}. The goal of this step is to prove that
\begin{equation}\label{Step1-Eq1}
\abs{v(b_1)}^2+\abs{v(b_2)}^2\leq \left(\frac{\la^{\frac{1}{2}}}{2}+2m_{g'}\right)\int_{b_1}^{b_2}\abs{v}^2dx+\frac{o(1)}{\la}.
\end{equation}
From \eqref{pol3}, we deduce that
\begin{equation}\label{Step1-Eq2}
v_x=i{\lambda}u_x-\la^{-\frac{1}{2}}(f_1)_x.
\end{equation}
Multiplying \eqref{Step1-Eq2} by $2g\overline{v}$ and integrating over $(b_1,b_2)$, then taking the real part, we get
\begin{equation*}
\int_{b_1}^{b_2}g\left(\abs{v}^2\right)_xdx=\mathbb Re\left(2i{\lambda}\int_{b_1}^{b_2}gu_x\overline{v}dx\right)-\mathbb Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}g(f_1)_x\overline{v}dx\right).
\end{equation*}
Using integration by parts in the left hand side of the above equation, we get
\begin{equation}\label{Step1-Eq3}
\abs{v(b_1)}^2+\abs{v(b_2)}^2=\int_{b_1}^{b_2}g'\abs{v}^2dx+\mathbb Re\left(2i{\lambda}\int_{b_1}^{b_2}gu_x\overline{v}dx\right)-\mathbb Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}g(f_1)_x\overline{v}dx\right).
\end{equation}
Using Young's inequality, we obtain
\begin{equation*}
2{\lambda}m_g\abs{u_x}\abs{v}\leq \frac{\la^\frac{1}{2}\abs{v}^2}{2}+2\la^{\frac{3}{2}}m_g^2\abs{u_x}^2\ \text{and}\quad 2\la^{-\frac{1}{2}}m_g\abs{(f_1)_x}\abs{v}\leq m_{g'}\abs{v}^2+m_g^2m_{g'}^{-1}\la^{-1}\abs{(f_1)_x}^2.
\end{equation*}
From the above inequalities, \eqref{Step1-Eq3} becomes
\begin{equation}\label{Step1-Eq4}
\abs{v(b_1)}^2+\abs{v(b_2)}^2\leq \left(\frac{\la^{\frac{1}{2}}}{2}+2m_{g'}\right)\int_{b_1}^{b_2}\abs{v}^2dx+2\la^{\frac{3}{2}}m_g^2\int_{b_1}^{b_2}\abs{u_x}^2dx+\frac{m_g^2}{m_{g'}}\la^{-1}\int_{b_1}^{b_2}\abs{(f_1)_x}^2dx.
\end{equation}
Inserting \eqref{eq-5.0} in \eqref{Step1-Eq4} and using the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get \eqref{Step1-Eq1}.\\[0.1in]
\textbf{Step 2}. The aim of this step is to prove that
\begin{equation}\label{Step2-Eq1}
\abs{(au_x+bv_x)(b_1)}^2+\abs{(au_x+bv_x)(b_2)}^2\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+o(1).
\end{equation}
Multiplying \eqref{pol4} by $-2g\left(\overline{au_x+bv_x}\right)$, using integration by parts over $(b_1,b_2)$ and taking the real part, we get
\begin{equation*}
\begin{array}{l}
\displaystyle
\abs{\left(au_x+bv_x\right)(b_1)}^2+\abs{\left(au_x+bv_x\right)(b_2)}^2=\int_{b_1}^{b_2}g'\abs{au_x+bv_x}^2dx+\\[0.1in]
\displaystyle
\mathbb Re\left(2i{\lambda}\int_{b_1}^{b_2}gv(\overline{au_x+bv_x})dx\right)-\mathbb Re\left(2\la^{-\frac{1}{2}}\int_{b_1}^{b_2}gf_2(\overline{au_x+bv_x})dx\right),
\end{array}
\end{equation*}
consequently, we get
\begin{equation}\label{Step2-Eq2}
\begin{array}{lll}
\displaystyle
\abs{\left(au_x+bv_x\right)(b_1)}^2+\abs{\left(au_x+bv_x\right)(b_2)}^2\leq m_{g'}\int_{b_1}^{b_2}\abs{au_x+bv_x}^2dx\\[0.1in]
\displaystyle
+2{\lambda}m_g\int_{b_1}^{b_2}\abs{v}\abs{au_x+bv_x}dx+2m_g\la^{-\frac{1}{2}}\int_{b_1}^{b_2}\abs{f_2}\abs{au_x+bv_x}dx.
\end{array}
\end{equation}
By Young's inequality, \eqref{eq-4.9}, and \eqref{eq-5.0}, we have
\begin{equation}\label{Step2-Eq3}
2{\lambda}m_g\int_{b_1}^{b_2}\abs{v}\abs{au_x+bv_x}dx\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+2m_g^2\la^{\frac{1}{2}}\int_{b_1}^{b_2}\abs{au_x+bv_x}^2dx\leq \frac{\la^{\frac{3}{2}}}{2}\int_{b_1}^{b_2}\abs{v}^2dx+o(1).\\[0.1in]
\end{equation}
Inserting \eqref{Step2-Eq3} in \eqref{Step2-Eq2}, then using \eqref{eq-4.9}, \eqref{eq-5.0} and the fact that $f_2 \to 0$ in $L^2(0,L)$, we get \eqref{Step2-Eq1}.\\[0.1in]
\textbf{Step 3.} The aim of this step is to prove the first estimation in \eqref{F-est1}. For this aim, multiplying \eqref{pol4} by $-i\la^{-1}\overline{v}$, integrating over $(b_1,b_2)$ and taking the real part, we get
\begin{equation}\label{Step3-Eq1}
\int_{b_1}^{b_2}\abs{v}^2dx=\mathbb Re\left(i\la^{-1}\int_{b_1}^{b_2}(au_x+bv_x)\overline{v}_xdx-\left[i\la^{-1}\left(au_x+bv_x\right)\overline{v}\right]_{b_1}^{b_2}+i\la^{-\frac{3}{2}}\int_{b_1}^{b_2}f_2\overline{v}dx\right).
\end{equation}
Using \eqref{eq-4.9}, \eqref{eq-5.0}, the fact that $v$ is uniformly bounded in $L^2(0,L)$ and $f_2\to 0$ in $L^2(0,L)$, and Young's inequality, we get
\begin{equation}\label{Step3-Eq2}
\int_{b_1}^{b_2}\abs{v}^2dx\leq \frac{\la^{-\frac{1}{2}}}{2}[\abs{v(b_1)}^2+\abs{v(b_2)}^2]+\frac{\la^{-\frac{3}{2}}}{2}[\abs{(au_x+bv_x)(b_1)}^2+\abs{(au_x+bv_x)(b_2)}^2]+\frac{o(1)}{\la^{\frac{3}{2}}}.
\end{equation}
Inserting \eqref{Step1-Eq1} and \eqref{Step2-Eq1} in \eqref{Step3-Eq2}, we get
\begin{equation*}
\int_{b_1}^{b_2}\abs{v}^2dx\leq \left(\frac{1}{2}+m_{g'}\la^{-\frac{1}{2}}\right)\int_{b_1}^{b_2}\abs{v}^2dx+\frac{o(1)}{\la^{\frac{3}{2}}},
\end{equation*}
which implies that
\begin{equation}\label{Step3-Eq3}
\left(\frac{1}{2}-m_{g'}\la^{-\frac{1}{2}}\right)\int_{b_1}^{b_2}\abs{v}^2dx\leq \frac{o(1)}{\la^{\frac{3}{2}}}.
\end{equation}
Using the fact that ${\lambda}\to \infty$, we can take ${\lambda}> 4m_{g'}^2$. Then, we obtain the first estimation in \eqref{F-est1}. Similarly, we can obtain the second estimation in \eqref{F-est1}. The proof has been completed.
\end{proof}
\begin{lemma}\label{Sec-est}
{\rm
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{Sec-est1}
\int_0^{c_1}\left(\abs{v}^2+a\abs{u_x}^2\right)dx=o(1)\quad \text{and}\quad \int_{c_2}^L\left(\abs{z}^2+\abs{y_x}^2\right)dx=o(1).
\end{equation}}
\end{lemma}
\begin{proof}
First, let $h\in C^1([0,c_1])$ such that $h(0)=h(c_1)=0$. Multiplying \eqref{pol4} by $2a^{-1}h\overline{(au_x+bv_x)}$, integrating over $(0,c_1)$, using integration by parts and taking the real part, then using \eqref{eq-4.9} and the fact that $u_x$ is uniformly bounded in $L^2(0,L)$ and $f_2 \to 0$ in $L^2(0,L)$, we get
\begin{equation}\label{Sec-est2}
\mathbb Re\left(2i{\lambda}a^{-1}\int_0^{c_1}vh\overline{(au_x+bv_x)}dx\right)+a^{-1}\int_0^{c_1}h'\abs{au_x+bv_x}^2dx=\frac{o(1)}{\la^{\frac{1}{2}}}.
\end{equation}
From \eqref{pol3}, we have
\begin{equation}\label{Sec-est3}
i{\lambda}\overline{u}_x=-\overline{v}_x-\la^{-\frac{1}{2}}(\overline{f_1})_x.
\end{equation}
Inserting \eqref{Sec-est3} in \eqref{Sec-est2}, using integration by parts, then using \eqref{eq-4.9}, \eqref{F-est1}, and the fact that $f_1 \to 0 $ in $H^1_0 (0,L)$ and $v$ is uniformly bounded in $L^2 (0,L)$, we get
\begin{equation}\label{Sec-est4}
\begin{array}{c}
\displaystyle
\int_0^{c_1}h'\abs{v}^2dx+a^{-1}\int_0^{c_1}h'\abs{au_x+bv_x}^2dx=\underbrace{2\mathbb Re\left(\la^{-\frac{1}{2}}\int_{0}^{c_1}vh(\overline{f_1})_xdx\right)}_{=o(\la^{-\frac{1}{2}})}\\[0.1in]
\displaystyle
+\underbrace{\mathbb Re\left(2i{\lambda}a^{-1}b_0\int_{b_1}^{b_2}hv\overline{v}_xdx\right)}_{=o(1)}+\frac{o(1)}{\la^{\frac{1}{2}}}.
\end{array}
\end{equation}
Now, we fix the following cut-off functions
$$
p_1(x):=\left\{\begin{array}{ccc}
1&\text{in}&(0,b_1),\\
0&\text{in}&(b_2,c_1),\\
0\leq p_1\leq 1&\text{in}&(b_1,b_2),
\end{array}
\right. \quad\text{and}\quad
p_2(x):=\left\{\begin{array}{ccc}
1&\text{in}&(b_2,c_1),\\
0&\text{in}&(0,b_1),\\
0\leq p_2\leq 1&\text{in}&(b_1,b_2).
\end{array}
\right.
$$
Finally, taking $h(x)=xp_1(x)+(x-c_1)p_2(x)$ in \eqref{Sec-est4} and using \eqref{eq-4.9}, \eqref{eq-5.0}, and \eqref{F-est1}, we get the first estimation in \eqref{Sec-est1}; indeed, with this choice $h'=1$ on $(0,b_1)\cup(b_2,c_1)$, while the contribution of $(b_1,b_2)$ to the left hand side of \eqref{Sec-est4} is $o(1)$ by \eqref{eq-4.9}, \eqref{eq-5.0} and \eqref{F-est1}. By using the same argument, we can obtain the second estimation in \eqref{Sec-est1}. The proof is thus completed.
\end{proof}
\begin{lemma}\label{Third-est}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{Third-est1}
\abs{{\lambda}u(c_1)}=o(1),\ \abs{u_x(c_1)}=o(1),\ \abs{{\lambda}y(c_2)}=o(1)\quad \text{and}\quad \abs{y_x(c_2)}=o(1).
\end{equation}
\end{lemma}
\begin{proof}
First, from \eqref{pol3} and \eqref{pol4}, we deduce that
\begin{equation}\label{Th-est1}
\la^2u+au_{xx}=-\frac{f_2}{\la^{\frac{1}{2}}}-i\la^{\frac{1}{2}}f_1 \ \ \text{in} \ \ (b_2,c_1).
\end{equation}
Multiplying \eqref{Th-est1} by $2(x-b_2)\bar{u}_x$, integrating over $(b_2,c_1)$ and taking the real part, then using the fact that $u_x$ is uniformly bounded in $L^2(0,L)$ and $f_2 \to 0$ in $L^2(0,L)$, we get
\begin{equation}\label{Th-est2}
\int_{b_2}^{c_1}\la^2 (x-b_2)\left(\abs{u}^2\right)_xdx+a\int_{b_2}^{c_1}(x-b_2)\left(\abs{u_x}^2\right)_xdx=-\mathbb Re\left(2i\la^{\frac{1}{2}}\int_{b_2}^{c_1}(x-b_2)f_1\overline{u}_xdx\right)+\frac{o(1)}{\la^{\frac{1}{2}}}.
\end{equation}
Using integration by parts in \eqref{Th-est2}, then using \eqref{Sec-est1}, and the fact that $f_1\to 0$ in $H_0^1(0,L)$ and ${\lambda}u$ is uniformly bounded in $L^2(0,L)$, we get
\begin{equation}\label{Th-est3}
0\leq (c_1-b_2)\left(\abs{{\lambda}u(c_1)}^2+a\abs{u_x(c_1)}^2\right)=\mathbb Re\left(2i\la^{\frac{1}{2}}(c_1-b_2)f_1(c_1)\overline{u}(c_1)\right)+o(1),
\end{equation}
consequently, by using Young's inequality, we get
\begin{equation*}
\begin{array}{lll}
\displaystyle\abs{{\lambda}u(c_1)}^2+\abs{u_x(c_1)}^2 &\leq& \displaystyle 2\la^{\frac{1}{2}}|f_1(c_1)||u(c_1)|+o(1)\\[0.1in]
&\leq &\displaystyle\frac{1}{2}\abs{{\lambda}u(c_1)}^2+\frac{2}{\la}\abs{f_1(c_1)}^2 +o(1).
\end{array}
\end{equation*}
Then, we get
\begin{equation}
\frac{1}{2}\abs{{\lambda}u(c_1)}^2+\abs{u_x(c_1)}^2\leq \frac{2}{\la}\abs{f_1(c_1)}^2+o(1).
\end{equation}
Finally, from the above estimation and the fact that $f_1 \to 0$ in $H^1_0 (0,L)$, we get the first two estimations in \eqref{Third-est1}. By using the same argument, we can obtain the last two estimations in \eqref{Third-est1}. The proof has been completed.
\end{proof}
\begin{lemma}\label{Fourth-est}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimation
\begin{equation}\label{4-est1}
\int_{c_1}^{c_2} \left(|{\lambda}u|^2 +a |u_x|^2 +|{\lambda}y|^2 +|y_x|^2\right) dx =o(1).
\end{equation}
\end{lemma}
\begin{proof}
Inserting \eqref{pol3} and \eqref{pol5} in \eqref{pol4} and \eqref{pol6}, we get
\begin{eqnarray}
-\la^2u-au_{xx}+i{\lambda}c_0y&=&\frac{f_2}{\la^{\frac{1}{2}}}+i\la^{\frac{1}{2}}f_1+\frac{c_0f_3}{\la^{\frac{1}{2}}} \ \ \text{in} \ \ (c_1,c_2),\label{4-est2}\\
-\la^2y-y_{xx}-i{\lambda}c_0u&=&\frac{f_4}{\la^{\frac{1}{2}}}+i\la^{\frac{1}{2}}f_3-\frac{c_0f_1}{\la^{\frac{1}{2}}} \ \ \ \text{in} \ \ (c_1,c_2)\label{4-est3}.
\end{eqnarray}
Multiplying \eqref{4-est2} by $2(x-c_2)\overline{u_x}$ and \eqref{4-est3} by $2(x-c_1)\overline{y_x}$, integrating over $(c_1,c_2)$ and taking the real part, then using the fact that $\|F\|_\mathcal H =o(1)$ and $\|U\|_\mathcal H =1$, we obtain
\begin{equation}\label{4-est4}
\begin{array}{l}
\displaystyle
-\la^2\int_{c_1}^{c_2}(x-c_2)\left(\abs{u}^2\right)_xdx-a\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx+\mathbb Re\left(2i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)=
\\
\displaystyle
\mathbb Re\left(2i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right)+\frac{o(1)}{\la^{\frac{1}{2}}}
\end{array}
\end{equation}
and
\begin{equation}\label{4-est5}
\begin{array}{l}
\displaystyle
-\la^2\int_{c_1}^{c_2}(x-c_1)\left(\abs{y}^2\right)_xdx-\int_{c_1}^{c_2}(x-c_1)\left(\abs{y_x}^2\right)_xdx-\mathbb Re\left(2i{\lambda}c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)=
\\
\displaystyle
\mathbb Re\left(2i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right)+\frac{o(1)}{\la^{\frac{1}{2}}}.
\end{array}
\end{equation}
Using integration by parts, \eqref{Third-est1}, and the fact that $f_1, f_3 \to 0$ in $H^1_0(0,L)$, $\|u\|_{L^2(0,L)}=O(\la^{-1})$, $\|y\|_{L^2(0,L)}=O(\la^{-1})$, we deduce that
\begin{equation}\label{4-est6}
\mathbb Re\left(i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right)=\frac{o(1)}{\la^{\frac{1}{2}}}\quad \text{and}\quad \mathbb Re\left(i\la^{\frac{1}{2}}\int_{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right)=\frac{o(1)}{\la^{\frac{1}{2}}}.
\end{equation}
Inserting \eqref{4-est6} in \eqref{4-est4} and \eqref{4-est5}, then using integration by parts and \eqref{Third-est1}, we get
\begin{eqnarray}
\int_{c_1}^{c_2}\left(\abs{{\lambda}u}^2+a\abs{u_x}^2\right)dx+\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)&=&o(1),\label{4-est7}\\
\int_{c_1}^{c_2}\left(\abs{{\lambda}y}^2+\abs{y_x}^2\right)dx-\mathbb Re\left(i{\lambda}c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)&=&o(1).\label{4-est8}
\end{eqnarray}
Adding \eqref{4-est7} and \eqref{4-est8}, we get
$$
\begin{array}{lll}
\displaystyle
\int_{c_1}^{c_2}\left(\abs{{\lambda}u}^2+a\abs{u_x}^2+\abs{{\lambda}y}^2+\abs{y_x}^2\right)dx&=&\displaystyle
\mathbb Re\left(2i{\lambda}c_0\int_{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)-\mathbb Re\left(2i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)+o(1)\\[0.in]
&\leq &\displaystyle
2{\lambda}\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u}\abs{y_x}dx+2\la\frac{\abs{c_0}}{a^{\frac{1}{4}}}(c_2-c_1)a^{\frac{1}{4}}\int_{c_1}^{c_2}\abs{y}\abs{u_x}dx+o(1).
\end{array}
$$
Applying Young's inequalities, we get
\begin{equation}\label{4-est9}
\left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}(\abs{{\lambda}u}^2+\abs{y_x}^2)dx+\left(1-\frac{1}{\sqrt{a}}\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}(a\abs{u_x}^2+\abs{{\lambda}y}^2)dx\leq o(1).
\end{equation}
Finally, using \eqref{SSC1}, we get the desired result. The proof has been completed.
\end{proof}
\begin{lemma}\label{5-est}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{5-est1}
\int_0^{c_1}\left(\abs{z}^2+\abs{y_x}^2\right)dx=o(1)\quad \text{and}\quad \int_{c_2}^L\left(\abs{v}^2+a\abs{u_x}^2\right)dx=o(1).
\end{equation}
\end{lemma}
\begin{proof}
Using the same arguments as in Lemma \ref{Sec-est}, we obtain \eqref{5-est1}.
\end{proof}
\noindent \textbf{Proof of Theorem \ref{1pol}.} Using \eqref{eq-5.0}, Lemmas \ref{F-est}, \ref{Sec-est}, \ref{Fourth-est}, \ref{5-est}, we get $\|U\|_{\mathcal{H}}=o(1)$, which contradicts \eqref{pol1}. Consequently, condition ${\rm (H2)}$ holds. This implies the energy decay estimation \eqref{Energypol1}.
\subsubsection{Proof of Theorem \ref{2pol}} In this subsection, we prove Theorem \ref{2pol} by checking condition \eqref{H2}; that is, we reach a contradiction with \eqref{pol1} by showing that $\|U\|_{\mathcal{H}}=o(1)$. For clarity, we divide the proof into several lemmas. Taking the inner product of \eqref{pol2-w} with $U$ in $\mathcal{H}$, we remark that
\begin{equation*}
\int_0^L b\abs{v_x}^2dx=-\mathbb Re\left(\left<\mathcal{A}U,U\right>_{\mathcal{H}}\right)=\la^{-2}\mathbb Re\left(\left<F,U\right>_{\mathcal{H}}\right)=o(\la^{-2}).
\end{equation*}
Then,
\begin{equation}\label{C2-dissipation}
\int_{b_1}^{b_2}\abs{v_x}^2dx=o(\la^{-2}).
\end{equation}
Using \eqref{pol3} and \eqref{C2-dissipation}, and the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get
\begin{equation}\label{C2-dissipation1}
\int_{b_1}^{b_2}\abs{u_x}^2dx=o(\la^{-4}).
\end{equation}
\begin{lemma}\label{C2-Fest}
Let $0<\varepsilon<\frac{b_2-b_1}{2}$. The solution $U\in D(\mathcal{A})$ of the system \eqref{pol3}-\eqref{pol6} satisfies the following estimation
\begin{equation}\label{C2-Fest1}
\int_{b_1+\varepsilon}^{b_2-\varepsilon}\abs{v}^2dx=o(\la^{-2}).
\end{equation}
\end{lemma}
\begin{proof}
First, we fix a cut-off function $\theta_1\in C^{1}([0,c_1])$ such that
\begin{equation}\label{C2-theta1}
\theta_1(x)=\left\{\begin{array}{clc}
1&\text{if}&x\in (b_1+\varepsilon,b_2-\varepsilon),\\
0&\text{if}&x\in (0,b_1)\cup (b_2,c_1),\\
0\leq \theta_1\leq 1&&\text{elsewhere}.
\end{array}
\right.
\end{equation}
Multiplying \eqref{pol4} by $\la^{-1}\theta_1 \overline{v}$, integrating over $(0,c_1)$, using integration by parts, and the fact that $f_2 \to 0$ in $L^2(0,L)$ and $v$ is uniformly bounded in $L^2(0,L)$, we get
\begin{equation}\label{C2-Fest2}
i\int_0^{c_1}\theta_1\abs{v}^2dx+\frac{1}{\la}\int_0^{c_1}(u_x+bv_x)(\theta_1'\overline{v}+\theta_1 \overline{v_x})dx=o(\la^{-3}).
\end{equation}
Using \eqref{C2-dissipation} and the fact that $\|U\|_{\mathcal{H}}=1$, we get
\begin{equation*}
\frac{1}{\la}\int_0^{c_1}(u_x+bv_x)(\theta_1'\overline{v}+\theta_1 \overline{v_x})dx=o(\la^{-2}).
\end{equation*}
Inserting the above estimation in \eqref{C2-Fest2}, we get the desired result \eqref{C2-Fest1}. The proof has been completed.
\end{proof}
\begin{lemma}\label{C2-Secest}
The solution $U\in D(\mathcal{A})$ of the system \eqref{pol3}-\eqref{pol6} satisfies the following estimation
\begin{equation}\label{C2-Secest1}
\int_{0}^{c_1}(\abs{v}^2+\abs{u_x}^2)dx=o(1).
\end{equation}
\end{lemma}
\begin{proof}
Let $h\in C^1([0,c_1])$ such that $h(0)=h(c_1)=0$. Multiplying \eqref{pol4} by $2h\overline{(u_x+bv_x)}$, integrating over $(0,c_1)$ and taking the real part, then using integration by parts and the fact that $f_2 \to 0$ in $L^2(0,L)$, we get
\begin{equation}\label{C2-Secest2}
\mathbb Re\left(2\int_0^{c_1}i{\lambda}vh\overline{(u_x+bv_x)}dx\right)+\int_0^{c_1}h'\abs{u_x+bv_x}^2dx=o(\la^{-2}).
\end{equation}
Using \eqref{C2-dissipation} and the fact that $v$ is uniformly bounded in $L^2(0,L)$, we get
\begin{equation}\label{C2-Secest3}
\mathbb Re\left(2\int_0^{c_1}i{\lambda}vh\overline{(u_x+bv_x)}dx\right)=\mathbb Re\left(2\int_0^{c_1}i{\lambda}vh\overline{u_x}dx\right)+o(1).
\end{equation}
From \eqref{pol3}, we have
\begin{equation}\label{C2-Secest4}
i\la\overline{u}_x=-\overline{v}_x-\frac{\left(\overline{f_1}\right)_x}{\la^2}.
\end{equation}
Inserting \eqref{C2-Secest4} in \eqref{C2-Secest3}, using integration by parts and the fact that $f_1 \to 0$ in $H^1_0(0,L)$, we get
\begin{equation}\label{C2-Secest5}
\mathbb Re\left(2\int_0^{c_1}i{\lambda}vh\overline{(u_x+bv_x)}dx\right)=\int_0^{c_1}h'\abs{v}^2dx+o(1).
\end{equation}
Inserting \eqref{C2-Secest5} in \eqref{C2-Secest2}, we obtain
\begin{equation}\label{C2-Secest6}
\int_0^{c_1}h'\left(\abs{v}^2+\abs{u_x+bv_x}^2\right)dx=o(1).
\end{equation}
Now, we fix the following cut-off functions
$$
\theta_2(x):=\left\{\begin{array}{ccc}
1&\text{in}&(0,b_1+\varepsilon),\\
0&\text{in}&(b_2-\varepsilon,c_1),\\
0\leq \theta_2\leq 1&\text{in}&(b_1+\varepsilon,b_2-\varepsilon),
\end{array}
\right. \quad\text{and}\quad
\theta_3(x):=\left\{\begin{array}{ccc}
1&\text{in}&(b_2-\varepsilon,c_1),\\
0&\text{in}&(0,b_1+\varepsilon),\\
0\leq \theta_3\leq 1&\text{in}&(b_1+\varepsilon,b_2-\varepsilon).
\end{array}
\right.
$$
Taking $h(x)=x\theta_2(x)+(x-c_1)\theta_3(x)$ in \eqref{C2-Secest6}, then using \eqref{C2-dissipation} and \eqref{C2-dissipation1}, we get
\begin{equation}\label{C2-Secest7}
\int_{(0,b_1+\varepsilon)\cup (b_2-\varepsilon,c_1)}\abs{v}^2dx+\int_{(0,b_1)\cup (b_2,c_1)}|u_x|^2dx=o(1).
\end{equation}
Finally, from \eqref{C2-dissipation1}, \eqref{C2-Fest1} and \eqref{C2-Secest7}, we get the desired result \eqref{C2-Secest1}. The proof has been completed.
\end{proof}
\noindent
\begin{lemma}\label{C2-Fourthest}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{C2-Thirest1}
\abs{{\lambda}u(c_1)}=o(1)\quad \text{and}\quad \abs{u_x(c_1)}=o(1),
\end{equation}
\begin{equation}\label{C2-Fourthest1}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx=\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+o(1).
\end{equation}
\end{lemma}
\begin{proof}
First, using the same arguments as in Lemma \ref{Third-est}, we obtain \eqref{C2-Thirest1}.
Inserting \eqref{pol3}, \eqref{pol5} in \eqref{pol4} and \eqref{pol6}, we get
\begin{eqnarray}
\la^2u+\left(u_x+bv_x\right)_x-i{\lambda}cy&=&-\frac{f_2}{\la^{2}}-i\frac{f_1}{\la}-c\frac{f_3}{\la^2},\label{Combination1}\\
\la^2y+y_{xx}+i{\lambda}cu&=&-\frac{f_4}{\la^2}-\frac{if_3}{\la}+c\frac{f_1}{\la^2}.\label{Combination2}
\end{eqnarray}
Multiplying \eqref{Combination1} and \eqref{Combination2} by ${\lambda}\overline{y}$ and ${\lambda}\overline{u}$ respectively, integrating over $(0,L)$, then using integration by parts, \eqref{C2-dissipation}, and the fact that $\|U\|_\mathcal H=1$ and $\|F\|_\mathcal H =o(1)$, we get
\begin{eqnarray}
\la^{3}\int_0^Lu\bar{y}dx-\la\int_0^Lu_x\bar{y}_xdx-i c_0\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx=o(1),\label{C2-Fourthest2}\\
\la^{3}\int_0^Ly\bar{u}dx-{\lambda}\int_0^Ly_x\bar{u}_xdx+i c_0\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx=\frac{o(1)}{\la}\label{C2-Fourthest3}.
\end{eqnarray}
Adding \eqref{C2-Fourthest2} and \eqref{C2-Fourthest3} and taking the imaginary parts, we get the desired result \eqref{C2-Fourthest1}. The proof is thus completed.
\end{proof}
\begin{lemma}\label{C2-Fifthest}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following asymptotic behavior
\begin{equation}\label{C2-Fifthest1}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx=o(1),\quad \int_{c_1}^{c_2}\abs{{\lambda}y}^2dx=o(1)\quad \text{and}\quad \int_{c_1}^{c_2}\abs{u_x}^2dx=o(1).
\end{equation}
\end{lemma}
\begin{proof}
First, multiplying \eqref{Combination1} by $2(x-c_2)\bar{u}_x$, integrating over $(c_1,c_2)$, taking the real part, and using the fact that $\|U\|_\mathcal H=1$ and $\|F\|_\mathcal H =o(1)$, we get
\begin{equation}\label{C2-Fifthest2}
\la^2\int_{c_1}^{c_2}(x-c_2)\left(\abs{u}^2\right)_xdx+\int_{c_1}^{c_2}(x-c_2)\left(\abs{u_x}^2\right)_xdx=\mathbb Re\left(2i{\lambda}c_0\int_{c_1}^{c_2}(x-c_2)y\bar{u}_xdx\right)+o(1).
\end{equation}
Using integration by parts in \eqref{C2-Fifthest2} with the help of \eqref{C2-Thirest1}, we get
\begin{equation}\label{C2-Fifthest3}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq 2\la\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{y}\abs{u_x}dx+o(1).
\end{equation}
Applying Young's inequality in \eqref{C2-Fifthest3}, we get
\begin{equation}\label{C2-Fifthest4}
\int_{c_1}^{c_2}\abs{{\lambda}u}^2dx+\int_{c_1}^{c_2}\abs{u_x}^2dx\leq \abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{u_x}^2dx+\abs{c_0}(c_2-c_1)\int_{c_1}^{c_2}\abs{{\lambda}y}^2dx+o(1).
\end{equation}
Using \eqref{C2-Fourthest1} in \eqref{C2-Fifthest4}, we get
\begin{equation}\label{C2-Fifthest5}
\left(1-\abs{c_0}(c_2-c_1)\right)\int_{c_1}^{c_2}\left(\abs{{\lambda}u}^2+|u_x|^2\right)dx\leq o(1).
\end{equation}
Finally, from the above estimation, \eqref{SSC2} and \eqref{C2-Fourthest1}, we get the desired result \eqref{C2-Fifthest1}. The proof has been completed.
\end{proof}
\begin{lemma}\label{C2-sixthest}
Let $0<\delta<\frac{c_2-c_1}{2}$. The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{C2-sixthest1}
\int_{c_1+\delta}^{c_2-\delta}\abs{y_x}^2dx=o(1).
\end{equation}
\end{lemma}
\begin{proof}
First, we fix a cut-off function $\theta_4\in C^1([0,L])$ such that
\begin{equation}\label{C2-theta4}
\theta_4(x):=\left\{\begin{array}{clc}
1&\text{if}&x\in (c_1+\delta,c_2-\delta),\\
0&\text{if}&x\in (0,c_1)\cup (c_2,L),\\
0\leq \theta_4\leq 1&&\text{elsewhere}.
\end{array}
\right.
\end{equation}
Multiplying \eqref{Combination2} by $\theta_4\bar{y}$, integrating over $(0,L)$ and using integration by parts, we get
\begin{equation}\label{C2-sixthest2*}
\int_{c_1}^{c_2}\theta_4\abs{{\lambda}y}^2dx-\int_{0}^{L}\theta_4\abs{y_x}^2dx-\int_0^L\theta_4'y_x\bar{y}dx+i{\lambda}c_0\int_{c_1}^{c_2}\theta_4u\bar{y}dx=\frac{o(1)}{\la^2}.
\end{equation}
Using \eqref{C2-Fifthest1} and the definition of $\theta_4$, we get
\begin{equation}\label{C2-sixthest3}
\int_{c_1}^{c_2}\theta_4\abs{{\lambda}y}^2dx=o(1),\quad \int_0^L\theta_4'y_x\bar{y}dx=o(\la^{-1}),\quad i{\lambda}c_0\int_{c_1}^{c_2}\theta_4u\bar{y}dx=o(\la^{-1}).
\end{equation}
Finally, inserting \eqref{C2-sixthest3} in \eqref{C2-sixthest2*}, we get the desired result \eqref{C2-sixthest1}. The proof has been completed.
\end{proof}
\begin{lemma}\label{C2-seventhest}
The solution $U\in D(\mathcal A)$ of system \eqref{pol3}-\eqref{pol6} satisfies the following estimations
\begin{equation}\label{C2-seventhest1}
\int_0^{c_1+\varepsilon}\abs{{\lambda}y}^2dx,\int_{0}^{c_1+\varepsilon}\abs{y_x}^2dx,\int_{c_2-\varepsilon}^L\abs{{\lambda}y}^2dx,\int_{c_2-\varepsilon}^L\abs{y_x}^2dx,\int_{c_2}^{L}\abs{{\lambda}u}^2dx,\int_{c_2}^{L}\abs{u_x}^2dx=o(1).
\end{equation}
\end{lemma}
\begin{proof}
Let $q\in C^1([0,L])$ be such that $q(0)=q(L)=0$. Multiplying \eqref{Combination2} by $2q\bar{y}_x$, integrating over $(0,L)$, taking the real part, and using \eqref{C2-Fifthest1}, the fact that $y_x$ is uniformly bounded in $L^2(0,L)$, and $\|F\|_{\mathcal{H}}=o(1)$, we get
\begin{equation}\label{C2-sixthest2}
\int_0^{L}q'\left(\abs{{\lambda}y}^2+\abs{y_x}^2\right)dx=o(1).
\end{equation}
Now, take $q(x)=x\theta_5(x)+(x-L)\theta_6(x)$ in \eqref{C2-sixthest2}, such that
$$
\theta_5(x):=\left\{\begin{array}{ccc}
1&\text{in}&(0,c_1+\varepsilon),\\
0&\text{in}&(c_2-\varepsilon,L),\\
0\leq \theta_5\leq 1&\text{in}&(c_1+\varepsilon,c_2-\varepsilon),
\end{array}
\right. \quad\text{and}\quad
\theta_6(x):=\left\{\begin{array}{ccc}
1&\text{in}&(c_2-\varepsilon,L),\\
0&\text{in}&(0,c_1+\varepsilon),\\
0\leq \theta_6\leq 1&\text{in}&(c_1+\varepsilon,c_2-\varepsilon).
\end{array}
\right.
$$
Then, we obtain the first four estimations in \eqref{C2-seventhest1}. Now, multiplying \eqref{Combination1} by $2q\left(\overline{u_x+bv_x}\right)$, integrating over $(0,L)$, taking the real part, and using the fact that $u_x$ is uniformly bounded in $L^2(0,L)$, we get
\begin{equation}
\int_0^Lq'\left(\abs{{\lambda}u}^2+\abs{u_x}^2\right)dx=o(1).
\end{equation}
By taking $q(x)=(x-L)\theta_7(x)$, such that
$$
\theta_7(x)=\left\{\begin{array}{ccc}
1&\text{in}&(c_2,L),\\
0&\text{in}&(0,c_1),\\
0\leq \theta_7\leq 1&\text{in}&(c_1,c_2),
\end{array}
\right.
$$
we get the last two estimations in \eqref{C2-seventhest1}. The proof has been completed.
\end{proof}
\noindent \textbf{Proof of Theorem \ref{2pol}.} Using \eqref{C2-dissipation1} and Lemmas \ref{C2-Secest}, \ref{C2-Fifthest}, \ref{C2-sixthest} and \ref{C2-seventhest}, we get $\|U\|_{\mathcal{H}}=o(1)$, which contradicts \eqref{pol1}. Consequently, condition ${\rm (H2)}$ holds. This implies the energy decay estimation \eqref{Energypol2}.
\section{Indirect Stability in the multi-dimensional case}\label{secnd}
\noindent In this section, we study the well-posedness and the strong stability of system \eqref{ND-1}-\eqref{ND-5}.
\subsection{Well-posedness}\label{wpnd} In this subsection, we will establish the well-posedness of \eqref{ND-1}-\eqref{ND-5} by using a semigroup approach. The energy of system \eqref{ND-1}-\eqref{ND-5} is given by
\begin{equation}\label{ND-energy}
E(t)=\frac{1}{2}\int_{\Omega}\left(\abs{u_t}^2+\abs{\nabla u}^2+\abs{y_t}^2+\abs{\nabla y}^2\right)dx.
\end{equation}
Let $(u,u_t,y,y_t)$ be a regular solution of \eqref{ND-1}-\eqref{ND-5}. Multiplying \eqref{ND-1} and \eqref{ND-2} by $\overline{u_t}$ and $\overline{y_t}$ respectively, integrating over $\Omega$, and using the boundary conditions \eqref{ND-3}, we get
\begin{equation}\label{ND-denergy}
E'(t)=-\int_{\Omega}b|\nabla u_{t}|^2dx,
\end{equation}
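For the reader's convenience, let us sketch the computation behind \eqref{ND-denergy}; here we use the form $u_{tt}=\divv(\nabla u+b\nabla u_t)-cy_t$ and $y_{tt}=\Delta y+cu_t$ of \eqref{ND-1}-\eqref{ND-2} suggested by the definition of the operator $\mathcal{A}_d$ below. Multiplying these two equations by $\overline{u_t}$ and $\overline{y_t}$ respectively, integrating over $\Omega$, taking the real part, and using Green's formula together with the Dirichlet boundary conditions, we obtain
$$
\frac{1}{2}\frac{d}{dt}\int_{\Omega}\left(\abs{u_t}^2+\abs{\nabla u}^2\right)dx=-\int_{\Omega}b\abs{\nabla u_t}^2dx-\mathbb Re\left(\int_{\Omega}c\,y_t\overline{u_t}\,dx\right)
$$
and
$$
\frac{1}{2}\frac{d}{dt}\int_{\Omega}\left(\abs{y_t}^2+\abs{\nabla y}^2\right)dx=\mathbb Re\left(\int_{\Omega}c\,u_t\overline{y_t}\,dx\right).
$$
Since $c$ is real-valued, $\mathbb Re\int_{\Omega}c\,u_t\overline{y_t}\,dx=\mathbb Re\int_{\Omega}c\,y_t\overline{u_t}\,dx$, so the coupling terms cancel upon addition and \eqref{ND-denergy} follows.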
Using the definition of $b$, we get $E'(t)\leq 0$. Thus, system \eqref{ND-1}-\eqref{ND-5} is dissipative in the sense that its energy is non-increasing with respect to time $t$. Let us define the energy space $\mathcal{H}$ by
$$
\mathcal{H}=\left(H_0^1(\Omega)\times L^2(\Omega)\right)^2.
$$
The energy space $\mathcal{H}$ is equipped with the inner product defined by
$$
\left<U,U_1\right>_{\mathcal{H}}=\int_{\Omega}v\overline{v_1}dx+\int_{\Omega}\nabla{u}\cdot\nabla{\overline{u_1}}dx+\int_{\Omega}z\overline{z_1}dx+\int_{\Omega}\nabla{y}\cdot \nabla{\overline{y_1}}dx,
$$
for all $U=(u,v,y,z)^\top$ and $U_1=(u_1,v_1,y_1,z_1)^\top$ in $\mathcal{H}$. We define the unbounded linear operator $\mathcal{A}_d:D\left(\mathcal{A}_d\right)\subset \mathcal{H}\longrightarrow \mathcal{H}$ by
$$
D(\mathcal{A}_d)=\left\{
U=(u,v,y,z)^\top\in \mathcal{H};\ v,z\in H_0^1(\Omega),\ \divv(\nabla u+b\nabla v)\in L^2(\Omega),\ \Delta y \in L^2 (\Omega)
\right\}
$$
and
$$
\mathcal{A}_d U=\begin{pmatrix}
v\\[0.1in] \divv(\nabla u+b\nabla v)-cz\\[0.1in] z\\ \Delta y+cv
\end{pmatrix}, \ \forall U=(u,v,y,z)^\top \in D(\mathcal{A}_d).
$$
If $U=(u,u_t,y,y_t)$ is a regular solution of system \eqref{ND-1}-\eqref{ND-5}, then we rewrite this system as the following first order evolution equation
\begin{equation}\label{ND-evolution}
U_t=\mathcal{A}_dU,\quad U(0)=U_0,
\end{equation}
where $U_0=(u_0,u_1,y_0,y_1)^{\top}\in \mathcal H$. For all $U=(u,v,y,z)^{\top}\in D(\mathcal{A}_d )$, we have
$$
\mathbb Re\left<\mathcal{A}_d U,U\right>_{\mathcal{H}}=-\int_{\Omega}b\abs{\nabla v}^2dx\leq 0,
$$
which implies that $\mathcal{A}_d$ is dissipative. Now, similar to Proposition 2.1 in \cite{akil2021ndimensional}, we can prove that there exists a unique solution $U=(u,v,y,z)^{\top}\in D(\mathcal{A}_d)$ of
$$
-\mathcal A_d U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top\in \mathcal{H}.
$$
Then $0\in \rho(\mathcal{A}_d)$ and $\mathcal{A}_d$ is an isomorphism. Since $\rho(\mathcal{A}_d)$ is open in $\mathbb{C}$ (see Theorem 6.7 (Chapter III) in \cite{Kato01}), we easily get $R(\lambda I -\mathcal{A}_d) = {\mathcal{H}}$ for sufficiently small $\lambda>0$. This, together with the dissipativeness of $\mathcal{A}_d$, implies that $D\left(\mathcal{A}_d\right)$ is dense in ${\mathcal{H}}$ and that $\mathcal{A}_d$ is m-dissipative in ${\mathcal{H}}$ (see Theorems 4.5 and 4.6 in \cite{Pazy01}).
According to the Lumer-Phillips theorem (see \cite{Pazy01}), the operator $\mathcal A_d$ generates a $C_{0}$-semigroup of contractions $e^{t\mathcal A_d}$ in $\mathcal H$, which gives the well-posedness of \eqref{ND-evolution}. Then, we have the following result:
\begin{theoreme}{\rm
For all $U_0 \in \mathcal H$, system \eqref{ND-evolution} admits a unique weak solution $$U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,\mathcal H).
$$ Moreover, if $U_0 \in D(\mathcal A_d)$, then system \eqref{ND-evolution} admits a unique strong solution $$U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A_d))\cap C^1 (\mathbb R_+ ,\mathcal H).$$}
\end{theoreme}
\subsection{Strong Stability }\label{Strong Stability-ND}
In this subsection, we will prove the strong stability of system \eqref{ND-1}-\eqref{ND-5}. First, we fix the following notations
$$
\widetilde{\Omega}=\Omega-\overline{\omega_c},\quad \Gamma_1=\partial \omega_c-\partial \Omega\quad \text{and}\quad \Gamma_0=\partial\omega_c-\Gamma_1.
$$
\begin{figure}
\caption{Geometric description of the sets $\omega_b$ and $\omega_c$}
\label{p7-Fig4}
\end{figure}
\begin{comment}
\begin{definition}\label{Gammacondition}
Saying that $\omega$ satisfies the \textbf{$\Gamma-$condition} if it contains a neighborhood in $\Omega$ of the set
$$
\left\{x\in \Gamma;\ (x-x_0)\cdot \nu(x)>0\right\},
$$
for some $x_0\in \mathbb R^n$, where $\nu$ is the outward unit normal vector to $\Gamma=\partial \Omega$.
\end{definition}
\end{comment}
\noindent Let $x_0\in \mathbb{R}^{d}$, set $m(x)=x-x_0$, and suppose that (see Figure \ref{p7-Fig4})
\begin{equation}\tag{${\rm GC}$}\label{Geometric Condition}
m\cdot \nu\leq 0\quad \text{on}\quad \Gamma_0=\left(\partial\omega_c\right)-\Gamma_1.
\end{equation}
The main result of this section is the following theorem
\begin{theoreme}\label{Strong-Stability-ND}
Assume that \eqref{Geometric Condition} holds and
\begin{equation}\label{GC-Condition}\tag{${\rm SSC}$}
\|c\|_{\infty}\leq \min\left\{\frac{1}{\|m\|_{\infty}+\frac{d-1}{2}},\frac{1}{\|m\|_{\infty}+\frac{(d-1)C_{p,\omega_c}}{2}}\right\},
\end{equation}
where $C_{p,\omega_c}$ is the Poincar\'e constant on $\omega_c$. Then, the $C_0-$semigroup of contractions $\left(e^{t\mathcal{A}_d}\right)$ is strongly stable in $\mathcal{H}$; i.e., for all $U_0\in \mathcal{H}$, the solution of \eqref{ND-evolution} satisfies
$$
\lim_{t\to +\infty}\|e^{t\mathcal{A}_d}U_0\|_{\mathcal{H}}=0.
$$
\end{theoreme}
\begin{proof}
First, let us prove that \begin{equation}\label{ker}\ker (i{\lambda}I-\mathcal A_d)=\{0\},\ \forall {\lambda}\in \mathbb R.\end{equation} Since $0\in \rho(\mathcal{A}_d)$, it remains to show the result for $\lambda\in \mathbb{R}^{\ast}$. Suppose that there exist a real number $\lambda\neq 0$ and $U=(u,v,y,z)^\top\in D(\mathcal{A}_d)$ such that
$$
\mathcal{A}_dU=i{\lambda}U.
$$
Equivalently, we have
\begin{eqnarray}
v&=&i{\lambda}u,\label{ND-ST1}\\
\divv(\nabla u+b\nabla v)-cz&=&i{\lambda}v,\label{ND-ST2}\\
z&=&i{\lambda}y, \label{ND-ST3}\\
\Delta y+cv&=&i{\lambda}z.\label{ND-ST4}
\end{eqnarray}
Next, a straightforward computation gives
$$
0=\mathbb Re\left<i{\lambda}U,U\right>_{\mathcal{H}}=\mathbb Re\left<\mathcal{A}_dU,U\right>_{\mathcal{H}}=-\int_{\Omega}b\abs{\nabla v}^2dx,
$$
consequently, we deduce that
\begin{equation}\label{ND-ST5}
b\nabla v=0\ \ \text{in}\ \ \Omega \quad \text{and}\quad \nabla v= \nabla u=0 \quad \text{in}\quad \omega_b.
\end{equation}
Inserting \eqref{ND-ST1} in \eqref{ND-ST2}, then using the definition of $c$, we get
\begin{equation}\label{ND-ST6}
\Delta u=-\la^2 u\quad \text{in}\quad \omega_b.
\end{equation}
From \eqref{ND-ST5} we get $\Delta u=0$ in $\omega_b$ and from \eqref{ND-ST6} and the fact that $\la\neq 0$, we get
\begin{equation}\label{ND-ST7}
u=0\quad \text{in}\quad \omega_b.
\end{equation}
Now, inserting \eqref{ND-ST1} in \eqref{ND-ST2}, then using \eqref{ND-ST5}, \eqref{ND-ST7} and the definition of $c$, we get
\begin{equation}\label{ND-ST8}
\begin{array}{rll}
\la^2u+\Delta u&=&0\ \ \text{in}\ \ \widetilde{\Omega},\\
u&=&0\ \ \text{in}\ \ \omega_b\subset \widetilde{\Omega}.
\end{array}
\end{equation}
Using Holmgren's uniqueness theorem, we get
\begin{equation}\label{ND-ST9}
u=0\quad \text{in}\quad \widetilde{\Omega}.
\end{equation}
It follows that
\begin{equation}\label{ND-ST10}
u=\frac{\partial u}{\partial\nu}=0\quad \text{on}\quad \Gamma_1.
\end{equation}
Now, our aim is to show that $u=y=0$ in $\omega_c$. For this aim, inserting \eqref{ND-ST1} and \eqref{ND-ST3} in \eqref{ND-ST2} and \eqref{ND-ST4}, then using \eqref{ND-ST5}, we get the following system
\begin{eqnarray}
\la^2u+\Delta u-i{\lambda}cy&=&0\quad \text{in}\ \Omega,\label{ND-ST11}\\
\la^2y+\Delta y+i{\lambda}cu&=&0\quad \text{in}\ \Omega,\label{ND-ST12}\\
u&=&0\quad \text{on}\ \partial\omega_c,\label{ND-ST13}\\
y&=&0\quad \text{on}\ \Gamma_0,\label{ND-ST14}\\
\frac{\partial u}{\partial \nu}&=&0\quad \text{on}\ \Gamma_1.\label{ND-ST15}
\end{eqnarray}
Let us prove \eqref{ker} by the following three steps:\\\linebreak
\textbf{Step 1.} The aim of this step is to show that
\begin{equation}\label{ND-Step1-1}
\int_{\Omega}c\abs{u}^2dx=\int_{\Omega}c\abs{y}^2dx.
\end{equation}
For this aim, multiplying \eqref{ND-ST11} and \eqref{ND-ST12} by $\bar{y}$ and $\bar{u}$ respectively, integrating over $\Omega$ and using Green's formula, we get
\begin{eqnarray}
\la^2\int_{\Omega}u\bar{y}dx-\int_{\Omega}\nabla u\cdot \nabla{\bar{y}}dx-i{\lambda}\int_{\Omega}c\abs{y}^2dx&=&0,\label{ND-Step1-2}\\
\la^2\int_{\Omega}y\bar{u}dx-\int_{\Omega}\nabla y\cdot \nabla{\bar{u}}dx+i{\lambda}\int_{\Omega}c\abs{u}^2dx&=&0.\label{ND-Step1-3}
\end{eqnarray}
Adding \eqref{ND-Step1-2} and \eqref{ND-Step1-3}, then taking the imaginary part, we get \eqref{ND-Step1-1}.\\
\noindent \textbf{Step 2.} The aim of this step is to prove the following identity
\begin{equation}\label{ND-Stpe2-1}
-d\int_{\omega_c}\abs{{\lambda}u}^2dx+(d-2)\int_{\omega_c}\abs{\nabla u}^2dx+\int_{\Gamma_0}(m\cdot \nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma -2\mathbb Re\left(i{\lambda}\int_{\omega_c}cy\left(m\cdot \nabla{\bar{u}}\right)dx\right)=0.
\end{equation}
For this aim, multiplying \eqref{ND-ST11} by $2(m\cdot\nabla\bar{u})$, integrating over $\omega_c$ and taking the real part, we get
\begin{equation}\label{ND-Stpe2-2}
2\mathbb Re\left(\la^2\int_{\omega_c}u(m\cdot \nabla\bar{u})dx\right)+2\mathbb Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)-2\mathbb Re\left(i\la\int_{\omega_c}cy(m\cdot\nabla\bar{u})dx\right)=0.
\end{equation}
Now, using the fact that $u=0$ on $\partial\omega_c$, we get
\begin{equation}\label{ND-Stpe2-3}
\mathbb Re\left(2\la^2\int_{\omega_c}u(m\cdot\nabla\bar{u})dx\right)=-d\int_{\omega_c}\abs{{\lambda}u}^2dx.
\end{equation}
Using Green's formula, we obtain
\begin{equation}\label{ND-Stpe2-4}
\begin{array}{ll}
\displaystyle
2\mathbb Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)=\displaystyle
-2\mathbb Re\left(\int_{\omega_c}\nabla u\cdot\nabla\left(m\cdot\nabla\bar{u}\right)dx\right)+2\mathbb Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right)\\[0.1in]
\hspace{3.85cm}=\displaystyle
(d-2)\int_{\omega_c}\abs{\nabla u}^2dx-\int_{\partial\omega_c}(m\cdot \nu)\abs{\nabla u}^2d\Gamma+2\mathbb Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right).
\end{array}
\end{equation}
Using \eqref{ND-ST13} and \eqref{ND-ST15}, we get
\begin{equation}\label{ND-Stpe2-5}
\int_{\partial\omega_c}(m\cdot \nu)\abs{\nabla u}^2d\Gamma=\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma\ \ \text{and}\ \ \mathbb Re\left(\int_{\Gamma_0}\frac{\partial u}{\partial\nu}\left(m\cdot\nabla\bar{u}\right)d\Gamma\right)=\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma.
\end{equation}
Inserting \eqref{ND-Stpe2-5} in \eqref{ND-Stpe2-4}, we get
\begin{equation}\label{ND-Stpe2-6}
2\mathbb Re\left(\int_{\omega_c}\Delta u(m\cdot \nabla\bar{u})dx\right)=(d-2)\int_{\omega_c}\abs{\nabla u}^2dx+\int_{\Gamma_0}(m\cdot\nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma.
\end{equation}
Inserting \eqref{ND-Stpe2-3} and \eqref{ND-Stpe2-6} in \eqref{ND-Stpe2-2}, we get \eqref{ND-Stpe2-1}. \\\linebreak
\noindent \textbf{Step 3}. In this step, we prove \eqref{ker}. Multiplying \eqref{ND-ST11} by $(d-1)\overline{u}$, integrating over $\omega_c$ and using \eqref{ND-ST13}, we get
\begin{equation}\label{ND-Stpe2-7}
(d-1)\int_{\omega_c}\abs{{\lambda}u}^2dx+(1-d)\int_{\omega_c}\abs{\nabla u}^2dx-\mathbb Re\left(i{\lambda}(d-1)\int_{\omega_c}cy\bar{u}dx\right)=0.
\end{equation}
Adding \eqref{ND-Stpe2-1} and \eqref{ND-Stpe2-7}, we get
\begin{equation*}
\int_{\omega_c}\abs{{\lambda}u}^2dx+\int_{\omega_c}\abs{\nabla u}^2dx=\int_{\Gamma_0}(m\cdot \nu)\left|\frac{\partial u}{\partial\nu}\right|^2d\Gamma-2\mathbb Re\left(i{\lambda}\int_{\omega_c}cy\left(m\cdot \nabla{\bar{u}}\right)dx\right)-\mathbb Re\left(i{\lambda}(d-1)\int_{\omega_c}cy\bar{u}dx\right).
\end{equation*}
Using \eqref{Geometric Condition}, we get
\begin{equation}\label{ND-Stpe2-8}
\int_{\omega_c}\abs{{\lambda}u}^2dx+\int_{\omega_c}\abs{\nabla u}^2dx\leq 2\abs{\la}\int_{\omega_c}\abs{c}\abs{y}\abs{m\cdot \nabla u}dx+\abs{\la}(d-1)\int_{\omega_c}\abs{c}\abs{y}\abs{u}dx.
\end{equation}
Using Young's inequality and \eqref{ND-Step1-1}, we get
\begin{equation}\label{ND-Stpe2-9}
2\abs{\la}\int_{\omega_c}\abs{c}\abs{y}\abs{m\cdot \nabla u}dx\leq \|m\|_{\infty}\|c\|_{\infty}\int_{\omega_c}\left(\abs{{\lambda}u}^2+\abs{\nabla u}^2\right)dx
\end{equation}
and
\begin{equation}\label{ND-Stpe2-10}
\abs{\la}(d-1)\int_{\omega_c}\abs{c(x)}\abs{y}\abs{u}dx\leq \frac{(d-1)\|c\|_{\infty}}{2}\int_{\omega_c}\abs{{\lambda}u}^2dx+\frac{(d-1)\|c\|_{\infty}C_{p,\omega_c}}{2}\int_{\omega_c}\abs{\nabla u}^2dx.
\end{equation}
Inserting \eqref{ND-Stpe2-9} and \eqref{ND-Stpe2-10} in \eqref{ND-Stpe2-8}, we get
\begin{equation*}
\left(1-\|c\|_{\infty}\left(\|m\|_{\infty}+\frac{d-1}{2}\right)\right)\int_{\omega_c}\abs{{\lambda}u}^2dx+\left(1-\|c\|_{\infty}\left(\|m\|_{\infty}+\frac{(d-1)C_{p,\omega_c}}{2}\right)\right)\int_{\omega_c}\abs{\nabla u}^2dx\leq 0.
\end{equation*}
Using \eqref{GC-Condition} and \eqref{ND-Step1-1} in the above estimation, we get
\begin{equation}\label{ND-Stpe2-11}
u=0\quad \text{and}\quad y=0\quad \text{in}\quad \omega_c.
\end{equation}
In order to complete this proof, we need to show that $y=0$ in $\widetilde{\Omega}$. For this aim, using the definition of the function $c$ in $\widetilde{\Omega}$ and using the fact that $y=0$ in $\omega_c$, we get
\begin{equation}\label{ND-Stpe2-12}
\begin{array}{rll}
\displaystyle \la^2y+\Delta y&=&0\ \ \text{in}\ \ \widetilde{\Omega},\\[0.1in]
\displaystyle y&=&0 \ \ \text{on}\ \ \partial\widetilde{\Omega},\\[0.1in]
\displaystyle \frac{\partial y}{\partial \nu}&=&0\ \ \text{on}\ \ \Gamma_1.
\end{array}
\end{equation}
Now, using Holmgren's uniqueness theorem, we obtain $y=0$ in $\widetilde{\Omega}$, and consequently \eqref{ker} holds true. Moreover, similar to Lemma 2.5 in \cite{akil2021ndimensional}, we can prove that $R(i{\lambda}I-\mathcal A_d)=\mathcal H, \ \forall {\lambda}\in \mathbb R$. Finally, using the closed graph theorem of Banach and Theorem \ref{App-Theorem-A.2}, we conclude the proof of this theorem.
\end{proof}
\noindent Let us notice that, under the sole assumptions \eqref{Geometric Condition}
and \eqref{GC-Condition}, the polynomial stability of system \eqref{ND-1}-\eqref{ND-5}
is an open problem.
\appendix
\section{Some notions and stability theorems}\label{p2-appendix}
\noindent In order to make this paper more self-contained, we recall in this short appendix some notions and stability results used in this work.
\begin{definition}
\label{App-Definition-A.1}{\rm
Assume that $A$ is the generator of $C_0-$semigroup of contractions $\left(e^{tA}\right)_{t\geq0}$ on a Hilbert space $H$. The $C_0-$semigroup $\left(e^{tA}\right)_{t\geq0}$ is said to be
\begin{enumerate}
\item[$(1)$] Strongly stable if
$$
\lim_{t\to +\infty} \|e^{tA}x_0\|_H=0,\quad \forall\, x_0\in H.
$$
\item[$(2)$] Exponentially (or uniformly) stable if there exist two positive constants $M$ and $\varepsilon$ such that
$$
\|e^{tA}x_0\|_{H}\leq Me^{-\varepsilon t}\|x_0\|_{H},\quad \forall\, t>0,\ \forall\, x_0\in H.
$$
\item[$(3)$] Polynomially stable if there exist two positive constants $C$ and $\alpha$ such that
$$
\|e^{tA}x_0\|_{H}\leq Ct^{-\alpha}\|A x_0\|_{H},\quad \forall\, t>0,\ \forall\, x_0\in D(A).
$$
\xqed{$\square$}
\end{enumerate}}
\end{definition}
\noindent To show the strong stability of the $C_0$-semigroup $\left(e^{tA}\right)_{t\geq0}$ we rely on the following result due to Arendt-Batty \cite{Arendt01}.
\begin{theoreme}\label{App-Theorem-A.2}{\rm
{Assume that $A$ is the generator of a $C_0-$semigroup of contractions $\left(e^{tA}\right)_{t\geq0}$ on a Hilbert space $H$. If $A$ has no pure imaginary eigenvalues and $\sigma\left(A\right)\cap i\mathbb{R}$ is countable,
where $\sigma\left(A\right)$ denotes the spectrum of $A$, then the $C_0$-semigroup $\left(e^{tA}\right)_{t\geq0}$ is strongly stable.}\xqed{$\square$}}
\end{theoreme}
\noindent Concerning the characterization of the polynomial stability of a $C_0-$semigroup of contractions $\left(e^{t\mathcal A}\right)_{t\geq 0}$, we rely on the following result due to Borichev and Tomilov \cite{Borichev01} (see also \cite{Batty01} and \cite{RaoLiu01}).
\begin{theoreme}\label{bt}
{\rm
Assume that $\mathcal{A}$ is the generator of a strongly continuous semigroup of contractions $\left(e^{t\mathcal{A}}\right)_{t\geq0}$ on $\mathcal{H}$. If $i\mathbb{R}\subset \rho(\mathcal{A})$, then for a fixed $\ell>0$ the following conditions are equivalent:
\begin{equation}\label{h1}
\limsup_{{\lambda}\in \mathbb R, \ |\la| \to \infty}\frac{1}{|\la|^\ell}\left\|(i{\lambda}I-\mathcal A)^{-1}\right\|_{\mathcal{L}(\mathcal{H})}<\infty,
\end{equation}
\begin{equation}\label{h2}
\|e^{t\mathcal{A}}U_{0}\|^2_{\mathcal H} \leq \frac{C}{t^{\frac{2}{\ell}}}\|U_0\|^2_{D(\mathcal A)},\hspace{0.1cm}\forall t>0,\hspace{0.1cm} U_0\in D(\mathcal A),\hspace{0.1cm} \text{for some}\hspace{0.1cm} C>0.
\end{equation}\xqed{$\square$}}
\end{theoreme}
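\noindent For instance, if \eqref{h1} holds with $\ell=2$, then \eqref{h2} gives
$$
\|e^{t\mathcal{A}}U_{0}\|^2_{\mathcal H} \leq \frac{C}{t}\|U_0\|^2_{D(\mathcal A)},\quad \forall t>0,\ U_0\in D(\mathcal A),
$$
that is, the energy of strong solutions decays at least like $t^{-1}$.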
\end{document} |
\begin{document}
\setlength{\unitlength}{1mm}
\begin{abstract}
Recently, the dynamical and spectral properties of square-free
integers, visible lattice points and various generalisations have
received increased attention. One reason is the connection of
one-dimensional examples such as $\mathscr B$-free numbers with
Sarnak's conjecture on the `randomness' of the M\"obius function,
another the explicit computability of correlation functions as well as
eigenfunctions for these systems together with intrinsic ergodicity
properties. Here, we summarise some of the results, with focus on
spectral and dynamical aspects, and expand a little on the
implications for mathematical diffraction theory.
\end{abstract}
\maketitle
\centerline{Dedicated to Nikolai P.~Dolbilin on the occasion of his 70th birthday}
\section{Introduction}
Delone sets are important mathematical descriptions of atomic
arrangements, and Meyer sets are special cases with a rich spectral
structure~\cite{TAO,DD,Dolbilin}. Particularly well-studied are cut and
project sets or \emph{model}\/ sets, which underly the structure of
perfect quasicrystals~\cite{Shechtman,Steurer,TAO}. It is fair to say
that such structures are rather well understood. This is much less so
if one keeps uniform discreteness but relaxes relative denseness. In
fact, one might expect to leave the realm of pure point spectrum, at
least as soon as one has entropy~\cite{BLR}. However, there are
interesting examples such as $k$-free numbers or visible lattice
points that have positive topological entropy but nevertheless pure
point dynamical and diffraction spectrum. They are examples of
\emph{weak} model sets, and deserve a better understanding. Here, we
summarise some of the known results and add to the structure of their
topological and spectral properties.
The paper is organised as follows. In Section~\ref{visible}, we use the visible points of $\mathbb Z^2$ as a
paradigm to formulate the results for this case in a geometrically
oriented
manner and to develop our notation and methods while we go along. Section~\ref{kfree} extends the findings to $k$-free points of
$n$-dimensional lattices, while Section~\ref{bfree} looks into the
setting of $\mathscr B$-free systems as introduced
in~\cite{ELD,KLW}. Finally, in Section~\ref{number}, we analyse an
example from the number field generalisation of~\cite{CV} in our more
geometric setting of diffraction analysis. For convenience and better readability, the more technical issues are
presented in
two appendices.
\section{Visible square lattice points}\label{visible}
Two classic examples for the structure we are after are provided
by the \emph{square-free integers}\/ (the elements of $\mathbb Z$ that are
not divisible by any nontrivial square) and the \emph{visible
points}\/ of a lattice (the points with coprime coordinates in a
lattice basis). Since our focus is on higher-dimensional cases, let us
take a closer look at the visible points of $\mathbb Z^2$,
$$
V\,=\,V_{\mathbb Z^2}\,=\,\mathbb
Z^2\setminus\bigcup_{\text{\scriptsize $p$ prime}} p\mathbb
Z^2\,=\,\{x\in\mathbb Z^2|\operatorname{gcd}(x)=1\},
$$
where $\operatorname{gcd}(x)=\operatorname{gcd}(x_1,x_2)$ for
$x=(x_1,x_2)$. Note that, throughout the text, $p$ will denote a prime
number. The set is illustrated in Fig.~\ref{fig: visible}, and
can also be found in many textbooks including~\cite{Apostol}, where it
is shown on the cover, and~\cite[Sec.\ 10.4]{TAO}. The following
result is standard; see~\cite[Prop.\ 10.4]{TAO} and references
therein for details.
\begin{prop}\label{propbasic}
The set\/ $V$\/ has the following properties.
\begin{itemize}
\item[(a)]
The set\/ $V$ is uniformly discrete, but not relatively
dense. In particular,\/ $V$ contains holes of arbitrary size
that repeat lattice-periodically. More precisely, given an inradius
$\rho>0$, there is a sublattice of\/
$\mathbb Z^2$ depending on $\rho$ such that a suitable translate of this
sublattice consists of centres of holes of inradius at least $\rho$.
\item[(b)]
The group\/ $\operatorname{GL}(2,\mathbb Z)$ acts transitively on
$V$, and one has the partition\/ $\mathbb
Z^2=\dot\bigcup_{m\in\mathbb N_0}mV$ of\/ $\mathbb Z^2$ into
$\operatorname{GL}(2,\mathbb Z)$-invariant sets.
\item[(c)]
The difference set is\/ $V-V=\mathbb Z^2$.
\item[(d)]
The natural density of\/ $V$ exists and is given by
$\operatorname{dens}(V)=\frac{1}{\zeta(2)}=\frac{6}{\pi^2}$,
where\/ $\zeta$ denotes Riemann's zeta function.
\qed
\end{itemize}
\end{prop}
\begin{center}
\begin{figure}
\caption{A central patch of the visible points $V$ of the square lattice
$\mathbb Z^2$. Note the invariance of $V$ with respect to $\operatorname{GL}(2,\mathbb Z)$.}
\label{fig: visible}
\end{figure}
\end{center}
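As a quick numerical illustration of Proposition~\ref{propbasic}(d), the following short Python sketch counts the visible points in a finite square window and compares the resulting empirical density with $1/\zeta(2)=6/\pi^2\approx 0.6079$. The snippet and the window size \texttt{N} are ours and only serve as a sanity check; they are not part of the analysis of~\cite{BMP,PH}.
\begin{verbatim}
# Empirical density of the visible points of Z^2 in the window
# [-N, N]^2, compared with dens(V) = 1/zeta(2) = 6/pi^2.
from math import gcd, pi

N = 500                                    # arbitrary window size
total, visible = 0, 0
for x in range(-N, N + 1):
    for y in range(-N, N + 1):
        total += 1
        if gcd(abs(x), abs(y)) == 1:       # coprime coordinates: visible
            visible += 1

print("empirical density  :", visible / total)   # ~ 0.608
print("1/zeta(2) = 6/pi^2 :", 6 / pi**2)         # 0.60792...
\end{verbatim}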
Note that big holes are rare, but important; see~\cite[Rem.\
10.6]{TAO} for some examples. The interesting fact is
that they do not destroy the existence of patch frequencies, though
the latter clearly cannot exist uniformly. For the natural pair correlation
(or autocorrelation) coefficients,
$$
\eta(x)\,:=\,\lim_{R\to\infty}\frac{1}{\pi
R^2}\big |V\cap (-x+V)\cap B_R(0)\big |,
$$
one finds the following result; compare~\cite[Lemma 10.6]{TAO},
\cite[Thm.\ 2]{BMP}, as well as \cite[Thm.\ 7]{PH}.
\begin{lemma}
For each\/ $x\in\mathbb Z^2$, the natural autocorrelation coefficient
$\eta(x)$ of\/ $V$ exists, and is given by
$$
\eta(x)\,=\,\xi\!\prod_{p\mid\operatorname{gcd}(x)}\left(1+\frac{1}{p^2-2} \right),
$$
where\/ $\xi=\prod_p(1-2p^{-2})\approx 0.3226$. In particular, with
$\operatorname{gcd}(0)=0$, this also gives the density\/ $\eta(0)=\prod_p(1-p^{-2})=1/\zeta(2)$.\qed
\end{lemma}
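For readers who wish to experiment with these coefficients, the following Python sketch evaluates the product formula for $\eta(x)$, with the infinite product over primes truncated at an arbitrary bound \texttt{PMAX}, and compares it with a direct count over a finite window of side $2N+1$. Both parameters are illustration choices, so the printed values only approximate the exact limits.
\begin{verbatim}
# Autocorrelation coefficients eta(x) of V: truncated product formula
# versus a direct count in a finite window (both approximate).
from math import gcd

PMAX, N = 10_000, 300          # prime cut-off and window size (arbitrary)

def primes_below(n):
    is_prime = [False, False] + [True] * (n - 2)
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n, i):
                is_prime[j] = False
    return [i for i in range(n) if is_prime[i]]

PRIMES = primes_below(PMAX)
xi = 1.0
for p in PRIMES:
    xi *= 1 - 2 / p**2                         # xi ~ 0.3226

def eta_formula(x):
    # eta(x) = xi * prod_{p | gcd(x)} (1 + 1/(p^2 - 2)), for x != 0
    g, val = gcd(abs(x[0]), abs(x[1])), xi
    for p in PRIMES:
        if p > g:
            break
        if g % p == 0:
            val *= 1 + 1 / (p**2 - 2)
    return val

def eta_count(x):
    # density of y in [-N, N]^2 such that y and y + x are both visible
    hits = 0
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            if (gcd(abs(a), abs(b)) == 1
                    and gcd(abs(a + x[0]), abs(b + x[1])) == 1):
                hits += 1
    return hits / (2 * N + 1) ** 2

for x in [(1, 0), (2, 0), (6, 0)]:
    print(x, round(eta_formula(x), 4), round(eta_count(x), 4))
\end{verbatim}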
The autocorrelation measure of $V$,
$$
\gamma\,=\,\sum_{x\in\mathbb Z^2}\eta(x)\delta_x,
$$
is thus well-defined, and is a translation bounded, positive definite measure on $\mathbb R^2$
by construction. Its Fourier transform $\widehat\gamma$ exists by
general arguments, compare~\cite[Ch.\ I.4]{BF} or~\cite[Rem.~8.7 and Prop.\ 8.6]{TAO}, and leads to the following result;
see~\cite[Thm.\ 10.5]{TAO} and~\cite[Thm.\ 3]{BMP}.
\begin{center}
\begin{figure}
\caption{Diffraction measure $\widehat{\gamma}$ of the visible points $V$ of the square lattice $\mathbb Z^2$.}
\label{fig: vispodiff}
\end{figure}
\end{center}
\begin{theorem}\label{diff}
The natural diffraction measure\/ $\widehat \gamma$ of the visible points\/ $V$ of
the square lattice\/ $\mathbb Z^2$ exists. It is a positive pure point
measure which is translation bounded and supported on the points of\/ $\mathbb Q^2$ with
square-free denominator, the Fourier-Bohr spectrum of $\gamma$, so
$$
\widehat{\gamma}=\sum_{{\substack{k\in\mathbb
Q^2\\\operatorname{den}(k) \text{\scriptsize \,square-free}}}}I(k)\delta_k,
$$
where\/ $\operatorname{den}(k):=\operatorname{gcd}\{n\in\mathbb
N\,|\,nk\in\mathbb Z^2\}$. In particular,
$I(0)=(1/\zeta(2))^2=36/\pi^4$, and when\/ $0\neq k\in\mathbb Q^2$ has
square-free denominator
$\operatorname{den}(k)$, the corresponding intensity is given by
$$
I(k)=\left(\frac{6}{\pi^2}\prod_{p\mid\operatorname{den}(k)}\frac{1}{p^2-1}\right)^2.
$$
\qed
\end{theorem}
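Since the product in the intensity formula runs only over the finitely many primes dividing $\operatorname{den}(k)$, the intensities are straightforward to evaluate (up to floating-point arithmetic). The following Python sketch does this for a few $k\in\mathbb Q^2$ with square-free denominator; the helper functions are ad hoc and only serve this illustration.
\begin{verbatim}
# Intensities I(k) of the diffraction measure of V for a few k with
# square-free denominator, via the (finite) product formula above.
from fractions import Fraction
from math import pi, lcm          # math.lcm requires Python >= 3.9

def den(k):
    # den(k) = gcd{ n in N : n k in Z^2 } = lcm of the two denominators
    return lcm(k[0].denominator, k[1].denominator)

def prime_divisors(n):
    divs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            divs.add(p)
            n //= p
        p += 1
    if n > 1:
        divs.add(n)
    return divs

def intensity(k):
    # I(k) = ( (6/pi^2) * prod_{p | den(k)} 1/(p^2 - 1) )^2
    val = 6 / pi**2
    for p in prime_divisors(den(k)):
        val /= p**2 - 1
    return val**2

for k in [(Fraction(0), Fraction(0)),        # I(0) = (6/pi^2)^2
          (Fraction(1, 2), Fraction(0)),
          (Fraction(1, 2), Fraction(1, 3))]:
    print(k, intensity(k))
\end{verbatim}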
The essence of this result is the pure point nature of
$\widehat\gamma$ together with its explicit computability via an
intensity formula in the form of a {\em finite}\/ product for any given
$k\in\mathbb Q^2$ with square-free denominator. Fig.~\ref{fig:
vispodiff} illustrates the diffraction measure. Note that
$\widehat\gamma$ has the symmetry group $\mathbb Z^2\rtimes
\operatorname{GL}(2,\mathbb Z)$.
An alternative view is possible by means of the Herglotz--Bochner
theorem as follows. The autocorrelation measure $\gamma$ is positive
definite on $\mathbb R^2$ if and only if the function $\eta\!:\, \mathbb
Z^2\longrightarrow \mathbb R$ is positive definite on
$\mathbb Z^2$; see~\cite[Lemma 8.4]{TAO}. The latter property is
equivalent to the existence of a positive measure $\varrho$ on $\mathbb
T^2=\mathbb R^2/\mathbb Z^2\simeq [0,1)^2$ such that
$$
\eta(x)\,=\int_{\mathbb
T^2} e^{2\pi i xy}\,\,{\rm d}\varrho(y),
$$
where the connection to $\widehat\gamma$ is established by
$\varrho=\widehat\gamma\!\mid_{[0,1)^2}$, so that $\widehat\gamma=\varrho\hspace{0.5pt} * \delta_{\mathbb
Z^2}$, where $\delta_{S} := \sum_{x\in S} \delta_{x}$ denotes
the Dirac comb of a discrete point set $S$. The finite positive measure $\varrho$ is a spectral measure
in the sense of dynamical system theory. It is related to the
diffraction measure by convolution; for background, we refer
to~\cite{BL,BLvE} and references therein. We shall return to the
dynamical point of view shortly.
Let us pause to comment on the history and the development of this problem. The arithmetic
properties of $V$ are classic and can be found in many
places, including~\cite{Apostol,Hua}. The investigation of spectral
aspects was advertised by Schroeder in~\cite{Schroeder1}, see also~\cite{Schroeder2}, by means of a numerical
approach via FFT techniques. These results suffered from insufficient
resolution in the numerical treatment, and seemed to point towards
continuous diffraction components, perhaps in line with the idea that
the distribution of primes is sufficiently `random'.
Ten years later, on the basis of a formal M\"obius inversion
calculation for the amplitudes, Mosseri argued in~\cite{Mosseri} that the diffraction should be pure point rather than
continuous, thus contradicting the earlier numerical findings of
Schroeder. This was corroborated in~\cite{BGW} with
further calculations on the diffraction intensities (still without
proof), which gave the formula of Thm.~\ref{diff} above. Also, a
rather convincing comparison with an optical diffraction experiment
was shown, which clearly indicated the correctness of the formal
calculation. The first complete proof, with a detailed convergence
result with precise error estimates, appeared in~\cite{BMP}, and was
recently improved and extended in~\cite{PH}, on the basis of
number-theoretic results
due to Mirsky~\cite{Mirsky1,Mirsky2}.
Simultaneously, due to the renewed interest in the square-free integers
in connection with Sarnak's conjecture, the dynamical systems point of
view became more important, as is obvious
from~\cite{CS,CV,PH,HB}. Here, the focus is more on the dynamical
spectrum, which is closely related to the diffraction
measure as indicated above, and explained in detail in~\cite{BL,BLvE}.
To explain this, let us define the (discrete) hull of $V$ as
$$
\mathbb X_{V}=\overline{\{t+V\,|\,t\in\mathbb Z^2\}},
$$
where the closure is taken in the product topology induced on
$\{0,1\}^{\mathbb Z^2}$ by the discrete topology on $\{0,1\}$. This
topology is metric~\cite{Sol,PH} and is
also called the \emph{local topology}, because two elements of
$\{0,1\}^{\mathbb Z^2}$ are close if they agree on a large ball around
the origin. Clearly, $\mathbb X_{V}$ is then compact, where
here and below we simultaneously view subsets of $\mathbb Z^2$ as configurations. In
particular, the empty set is identified with the configuration
$\underline 0$, and $\mathbb Z^2$ with $\underline 1$ this way. Since
$V$ contains holes of arbitrary size, the empty set is an element of
$\mathbb X_{V}$.
For
a natural number $m$, let $\cdot\,_m\!:\,\mathbb
Z^2\longrightarrow \mathbb
Z^2/m\mathbb
Z^2$ denote the canonical projection $x\mapsto [x]_m$,
where $[x]_m=x+m\mathbb Z^2$. For a subset $X\subset \mathbb Z^2$, we
denote by $X_m$ its image under this projection map. It
should however be borne in mind that, for
elements $x\in\mathbb Z^2$, their images under the above map will always be written as $[x]_m$
rather than $x_m$. This is due to the occasional need of regarding $m$
in the last expression as an
index. Let us recall the
following result from~\cite{BMP}.
\begin{prop}[Chinese Remainder Theorem]{\rm \cite[Prop.\ 2]{BMP}}\label{crt}
For pairwise coprime positive integers\/ $m_1,m_2,\ldots,m_r$, the
natural group homomorphism
$$
(\mathbb Z^2)_{m_1m_2\cdot\ldots \cdot m_r} \,\,\longrightarrow \,\,\prod_{i=1}^r
(\mathbb Z^2)_{m_i}$$
is an isomorphism. In particular, for\/ $x_1,x_2,\ldots,x_r\in\mathbb
Z^2$, the simultaneous solutions\/ $t\in\mathbb Z^2$ of
$$
[t]_{m_i}= [x_i]_{m_i}, \quad 1\le i\le r,
$$
comprise precisely one coset of\/ $m_1m_2\cdot\ldots\cdot m_r\mathbb Z^2$ in
$\mathbb Z^2$. \qed
\end{prop}
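As an illustration of how Proposition~\ref{crt} produces the lattice-periodic holes of Proposition~\ref{propbasic}(a), consider the following standard (and deliberately crude) construction. Fix a finite patch, say $\{0,1,\ldots,m\}^2$, choose pairwise distinct primes $p_z$, one for each point $z$ of the patch, and consider the congruences $[t]_{p_z}=[-z]_{p_z}$. By Proposition~\ref{crt}, the simultaneous solutions $t$ form a coset of $\big(\prod_z p_z\big)\mathbb Z^2$ in $\mathbb Z^2$. For every such $t$ and every $z$ in the patch, one has $t+z\in p_z\mathbb Z^2$, so $\operatorname{gcd}(t+z)>1$ and $t+z\notin V$. Consequently, the translate $t+\{0,1,\ldots,m\}^2$ contains no visible point, and these `holes' repeat with period $\big(\prod_z p_z\big)\mathbb Z^2$.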
Next, let us come to a characterisation of $\mathbb X_{V}$. Let
$\mathbb A$ denote
the set of \emph{admissible} subsets $A$
of $\mathbb Z^2$, i.e.\ subsets $A\subset\mathbb Z^2$ with the property that,
for every prime $p$, $A$ does \emph{not} contain a full set of
representatives modulo $p\mathbb Z^2$. In other words, $A$ is
admissible if and only if
$$|A_p|<p^2$$ for any prime $p$. Since $V\in\mathbb A$ (otherwise some point
of $V$ would be in $p\mathbb Z^2$
for some prime $p$, a contradiction) and since $\mathbb A$ is a
$\mathbb Z^2$-invariant and
closed subset of $\{0,1\}^{\mathbb Z^2}$, it is clear that $\mathbb
X_{V}$
is a subset of $\mathbb A$. By~\cite[Thm.~2]{PH}, the other inclusion is also
true. This was first shown by Herzog and Stewart~\cite{HS} for visible
lattice points and by Sarnak~\cite{Sarnak} for the analogous case of
the square-free integers. In fact, similar statements hold true for
various generalisations discussed below; cf.~\cite[Thm.~6]{PH} for the
case of $k$-free lattice points.
\begin{theorem}\label{charachull}
One has\/ $\mathbb X_{V}=\mathbb A$.\qed
\end{theorem}
It follows that $\mathbb X_{V}$ is \emph{hereditary}, i.e.\
$$
\forall\, X\in\mathbb A\ \,\forall\, Y\subset X:\quad
Y\in\mathbb A,
$$
and in particular contains
\emph{all} subsets of $V$. In other
words, $V$
is an \emph{interpolating set}\/ for $\mathbb X_{V}$ in the sense
of~\cite{W}, which means that $$\mathbb X_{V}|^{}_V\,\,:=\{X\cap
V\,\mid\,
X\in\mathbb X_{V}\}=\{0,1\}^{V}.$$
Given a radius $\rho>0$ and a point $t\in\mathbb Z^2$,
the \emph{$\rho$-patch} of $V$ at $t$ is
\[(V-t)\cap B_\rho(0),\]
the translation to the origin of the part of $V$ within a distance
$\rho$ of $t$. We denote by $\mathcal A(\rho)$ the (finite) set of all
$\rho$-patches of $V$, and by $N(\rho)=|\mathcal A(\rho)|$ the number of
distinct $\rho$-patches of $V$. For a $\rho$-patch $\mathcal P$ of
$V$, denote by $C_{\mathcal P}$ the set of elements of
$\mathbb X_{V}$ whose $\rho$-patch at
$0$ is $\mathcal P$, the so-called \emph{cylinder set} defined by the
$\rho$-patch $\mathcal P$; compare~\cite{Denker}. Note that these cylinder sets form a
basis of the topology of $\mathbb X_{V}$.
The \emph{patch counting entropy} of
$V$ is defined as
$$
h_{\rm pc}(V):=\lim_{\rho\to\infty}\frac{\log N(\rho)}{\pi\rho^2}.
$$
Note that this differs from the definition in~\cite{PH,HB}, where, in
view of the binary configuration space interpretation, we
used the base $2$ logarithm. It can be shown by a classic subadditivity argument that this limit
exists. Since $\mathbb X_{V}$ is hereditary, it follows that $V$ has patch counting entropy $h_{\rm
pc}(V)$ at least
$\operatorname{dens}(V)\log (2)=(6/\pi^2)\log (2)$. In fact, one has more.
\begin{theorem}{\rm \cite[Thm.~3]{PH}}\label{hpc}
One has\/ $h_{\rm pc}(V)=(6/\pi^2)\log (2)$. \qed
\end{theorem}
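\noindent Numerically, this gives $h_{\rm pc}(V)=(6/\pi^2)\log(2)\approx 0.4214$ per unit area (with the natural logarithm, in line with the normalisation chosen above).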
The natural translational
action of the group $\mathbb Z^2$
on $\mathbb X_{V}$ is continuous and $(\mathbb
X_{V},\mathbb Z^2)$ thus is a \emph{topological dynamical
system}. By construction, $(\mathbb X_{V},\mathbb Z^2)$ is topologically
transitive~\cite{A,G,W}, as it is the orbit closure of one of its
elements (namely $V$). Equivalently, for any two non-empty open subsets $U$
and $W$ of $\mathbb X_{V}$, there is an element $t\in\mathbb Z^2$ such
that
$$
U\cap (W+t)\neq\varnothing.
$$
In accordance with Sarnak's findings~\cite{Sarnak} for
square-free integers, one has the following result.
\begin{theorem}\label{c1}
The topological dynamical system\/ $(\mathbb X_{V},\mathbb Z^2)$ has the following properties.
\begin{itemize}
\item[\rm (a)]
$(\mathbb X_{V},\mathbb Z^2)$ is topologically ergodic with positive topological
entropy equal to\/ $(6/\pi^2)\log (2)$.
\item[\rm (b)]
$(\mathbb X_{V},\mathbb Z^2)$ is proximal, and\/ $\{\varnothing\}$ is
the unique\/ $\mathbb Z^2$-minimal subset of\/ $\mathbb X_{V}$.
\item[\rm (c)]
$(\mathbb X_{V},\mathbb Z^2)$ has no non-trivial topological Kronecker
factor\/ $($i.e., minimal equicontinuous factor\/$)$. In particular,\/ $(\mathbb
X_{V},\mathbb Z^2)$ has trivial topological point spectrum.
\item[\rm (d)]
$(\mathbb X_{V},\mathbb Z^2)$ has a non-trivial joining with the
Kronecker system given by\/ $K=(G,\mathbb Z^2)$, where\/ $G$ is the compact
Abelian group\/ $\prod_p (\mathbb Z^2)_p$ and\/ $\mathbb Z^2$ acts on\/ $G$
via addition of $\iota(x)=([x]_p)$, i.e.\ $g\mapsto
g+\iota(x)$, with\/ $g\in G$ and $x\in\mathbb Z^2$. In
particular,\/ $(\mathbb X_{V},\mathbb Z^2)$ fails to be topologically weakly mixing.
\end{itemize}
\end{theorem}
\begin{proof}
The topological entropy of the dynamical system
$(\mathbb X_{V},\mathbb Z^2)$ is just $h_{\rm pc}(V)$, so the assertion
follows from Theorem~\ref{hpc}; cf.~\cite[Thm.~1]{BLR}.
The topological ergodicity~\cite{A,G} will follow from the existence
of an ergodic full (non-empty open subsets have positive measure) $\mathbb Z^2$-invariant Borel measure on $\mathbb X_{V}$;
see~Theorems~\ref{freq} and~\ref{c2}(b) below.
For part (b), recall from Theorem~\ref{charachull} that the hull
contains many more elements than the translates of $V$. Nevertheless, one can derive from
Proposition~\ref{propbasic}(a) that every element of $\mathbb
X_{V}$ contains holes of arbitrary size
that repeat lattice-periodically. This follows by standard compactness
arguments from considering a
sequence of the form $(t_n+V)^{}_{n\in\mathbb N}$ that converges in the local
topology, via selecting suitable subsequences. In particular, let
$X,Y\in\mathbb X_V$ and a radius $\rho$ be fixed. Let $t_\rho+\varGamma$ be
positions of holes of inradius $\rho$ in $X$. Choose $\rho'$ large
enough such that $B_{\rho'}(0)$ covers $B_{\rho}(0)+F$, where $F$ is
a fundamental domain of $\varGamma$. Then, any $\rho'$-hole of $Y$
(which exists) contains a $\rho$-hole of $X$. Hence, for any $\rho>0$
and any two
elements $X,Y\in\mathbb
X_{V}$, there is a translation $t\in\mathbb Z^2$ such that
$$(X+t)\cap B_\rho(0)=(Y+t)\cap B_\rho(0)=\varnothing,$$ meaning that both $X$ and $Y$
have the empty $\rho$-patch at $-t$. In terms of the metric $d$ on
$\mathbb X_V$~\cite{Sol,PH,HB} this means
that $d(X+t,Y+t)\le
1/\rho$ and the proximality of the system follows. Similarly, the assertion on the unique $\mathbb Z^2$-minimal
subset $\{\varnothing\}$ follows from the fact that any element of $\mathbb
X_{V}$ contains arbitrarily large holes and thus any non-empty closed
subsystem contains $\varnothing$.
Since Kronecker systems are distal, the first assertion of part (c) is an immediate consequence of the
proximality of $(\mathbb X_{V},\mathbb Z^2)$. This also
implies that $(\mathbb X_{V},\mathbb Z^2)$ has trivial
topological point spectrum; see~\cite{HB} for an alternative argument
that the non-zero constant function is the only continuous eigenfunction of the
translation action.
For part (d), one can verify that a non-trivial (topological) joining~\cite{G}
of $(\mathbb X_{V},\mathbb Z^2)$ with the Kronecker system $K$ is given
by
$$
W:=\bigcup_{X\in\mathbb X_{V}}\Big(\{X\}\times \prod_p
\bigl(\mathbb Z^2\setminus X\bigr)_p\Big).
$$
Since the Kronecker system $K$ is minimal and distal, a well-known
disjointness theorem by Furstenberg~\cite[Thm.~II.3]{F} implies that
$(\mathbb X_{V},\mathbb Z^2)$ fails to be topologically weakly mixing.
\end{proof}
Following~\cite{BMP,PH}, the natural \emph{frequency} $\nu(\mathcal P)$
of a $\rho$-patch $\mathcal P$ of $V$ is defined as
\begin{equation}\label{freqdef}
\nu(\mathcal
P):=\operatorname{dens}\big(\{t\in\mathbb Z^2\,\mid\,(V-t)\cap B_\rho(0)=\mathcal P\}\big),
\end{equation}
which can indeed be seen to exist.
\begin{theorem}{\rm \cite[Thms.~1 and~2]{PH}}\label{freq}
Any\/ $\rho$-patch\/ $\mathcal P$ of\/ $V$ occurs with positive
frequency, which is given by
\[\nu(\mathcal P)=\sum_{\mathcal F\subset (\mathbb Z^2\cap B_{\rho}(0))\setminus \mathcal P}(-1)^{|\mathcal F|}
\prod_p\left(1-\frac{|(\mathcal P\cup\mathcal
F)_p|}{p^{2}}\right).\]
\qed
\end{theorem}
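Since the formula combines an inclusion-exclusion sum over subsets with an infinite product over primes, a direct numerical evaluation can be instructive. The Python sketch below evaluates $\nu(\mathcal P)$ for the $\rho$-patch of $V$ at $t=0$ with $\rho=1.5$; the prime product is truncated at an arbitrary bound \texttt{PMAX}, so the printed value is only an approximation, and the radius is deliberately small because the subset sum grows exponentially in the number of points of $\mathbb Z^2\cap B_\rho(0)$ outside the patch. The snippet is ours and only meant as an illustration.
\begin{verbatim}
# Approximate frequency of the rho-patch of V at t = 0, evaluated from
# the formula of the theorem above with the prime product truncated.
from itertools import combinations
from math import gcd

RHO, PMAX = 1.5, 1000              # illustration parameters

def primes_below(n):
    is_prime = [False, False] + [True] * (n - 2)
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n, i):
                is_prime[j] = False
    return [i for i in range(n) if is_prime[i]]

PRIMES = primes_below(PMAX)
R = int(RHO) + 1
ball = [(x, y) for x in range(-R, R + 1) for y in range(-R, R + 1)
        if x * x + y * y <= RHO**2]
patch = [z for z in ball if gcd(abs(z[0]), abs(z[1])) == 1]  # (V-0) in B_rho(0)
rest = [z for z in ball if z not in patch]

def truncated_product(points):
    # prod_{p < PMAX} (1 - |points mod p| / p^2)
    val = 1.0
    for p in PRIMES:
        residues = {(x % p, y % p) for (x, y) in points}
        val *= 1 - len(residues) / p**2
    return val

freq = 0.0
for r in range(len(rest) + 1):
    for F in combinations(rest, r):
        freq += (-1)**r * truncated_product(patch + list(F))

print(freq)     # roughly 0.012 for this patch, with this truncation
\end{verbatim}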
The frequency function $\nu$ from~\eqref{freqdef}, regarded as a function on the
cylinder sets by setting $\nu(C_\mathcal
P):=\nu(\mathcal P)$, is finitely additive on the
cylinder sets with $$\sum_{\mathcal P\in\mathcal
A(\rho)}\nu(C_{\mathcal P})=\frac{1}{|\det(\mathbb Z^2)|}=1.$$ Since the
family of cylinder sets is a (countable) semi-algebra that generates the
Borel $\sigma$-algebra on $\mathbb X_{V}$ (i.e.\ the smallest
$\sigma$-algebra on $\mathbb X_{V}$ which contains the open subsets of
$\mathbb X_{V}$), $\nu$ extends uniquely to a probability measure on
$\mathbb X_{V}$; compare~\cite[Prop. 8.2]{Denker} and references given
there. Moreover, this probability measure is
$\mathbb Z^2$-invariant by construction wherefore we have a
measure-theoretic dynamical system $(\mathbb X_{V},\mathbb Z^2,\nu)$. For part (b) of the following
claim, note that
the Fourier--Bohr spectrum of $V$ is itself a group and
compare~\cite[Prop. 17]{BLvE}.
\begin{theorem} \label{c2}
The measure-theoretic
dynamical system\/ $(\mathbb X_{V},\mathbb Z^2,\nu)$ has the following properties.
\begin{itemize}
\item[\rm (a)]
The\/ $\mathbb Z^2$-orbit of\/ $V$ in\/ $\mathbb X_{V}$ is\/ $\nu$-equidistributed,
which means that for any function\/ $f\in C(\mathbb X_{V})$, one has
\[
\lim_{R\to\infty}\frac{1}{\pi R^2}\sum_{x\in\mathbb Z^2\cap
B_R(0)}f(V+x)=\int_{\mathbb X_{V}}f(X)\,\,{\rm d}\nu(X).
\]
In other words,\/ $V$ is\/ $\nu$-generic.
\item[\rm (b)]
$(\mathbb X_{V},\mathbb Z^2,\nu)$ is ergodic, deterministic
$($i.e., it is of zero measure entropy\/$)$ and has pure point
dynamical spectrum. The latter is given by
the Fourier--Bohr spectrum of the autocorrelation\/ $\gamma$, as
described in Theorem~$\ref{diff}$.
\item[\rm (c)]
$(\mathbb X_{V},\mathbb Z^2,\nu)$ is metrically
isomorphic to the Kronecker system\/ $K_{\mu}=(G,\mathbb Z^2,\mu)$, where\/ $G$ is the
compact Abelian
group\/ $\prod_p (\mathbb Z^2)_p$, the lattice\/ $\mathbb Z^2$ acts on\/ $G$
as
above and
$\mu$ is the normalised
Haar measure on\/ $G$.
\end{itemize}
\end{theorem}
\begin{proof}
For part (a), it suffices to show this for the characteristic
functions of cylinder sets of finite patches, as their span is dense
in $C(\mathbb X_{V})$. But for such functions, the claim is clear as
the left hand side is the patch frequency as used for the definition
of the measure $\nu$.
For the ergodicity of $(\mathbb X_{V},\mathbb Z^2,\nu)$, one has to show
that
$$
\lim_{R\rightarrow\infty}\frac{1}{\pi R^2}\sum_{x\in\mathbb Z^2\cap
B_R(0)}\nu\big((C_\mathcal P+x)\cap C_\mathcal Q\big)=\nu(C_\mathcal P)\nu(C_\mathcal Q)
$$
for arbitrary cylinder sets $C_\mathcal P$ and $C_\mathcal Q$;
compare~\cite[Thm.~1.17]{Walters}. The latter in turn follows from a
straightforward (but lengthy) calculation using
Theorem~\ref{freq} and the definition of the measure $\nu$ together
with the Chinese Remainder Theorem. In fact, for technical
reasons, it is better to work with a different semi-algebra that also
generates the Borel $\sigma$-algebra on $\mathbb X_{V}$; see Appendix~\ref{appa} for the
details.
Vanishing measure-theoretical entropy, i.e.\
$$
h_{\rm meas}(V)=\lim_{\rho\to\infty}
\frac{1}{\pi\rho^2}\sum_{\mathcal P\in\mathcal
A(\rho)}\!\!\!-\nu(C_\mathcal P)\log \nu(C_\mathcal P)=0,
$$
was shown
in~\cite[Thm.~4]{PH}, which is in line with the results
of~\cite{BLR}. Alternatively, it is an immediate consequence of part
(c) above. As a consequence of part (a), the individual
diffraction measure of $V$ according to Theorem~\ref{diff} coincides
with the diffraction measure of the system $(\mathbb
X_V,\mathbb Z^2,\nu)$ in the sense of~\cite{BL}. Then, pure point
diffraction means pure point dynamical spectrum~\cite[Thm.~7]{BL},
and the latter is the group generated by the Fourier--Bohr spectrum;
compare~\cite[Thm.~8]{BL} and~\cite[Prop. 17]{BLvE}. Since the intensity
formula of Theorem~\ref{diff} shows that there are no extinctions,
the Fourier--Bohr spectrum here is itself a group, which completes
part (b).
It is well known that $K_{\mu}=(G,\mathbb Z^2,\mu)$ has the same pure
point
dynamical spectrum as $(\mathbb
X_V,\mathbb Z^2,\nu)$; compare~\cite{CS} for the details in the case
of the square-free integers. In particular, the subgroup of $\mathbb
T^2$ given by the points of
$\mathbb Q^2\cap [0,1)^2$ with square-free denominator is easily seen to be isomorphic
to the direct sum $\bigoplus_p \mathbb Z^2/p\mathbb Z^2$, wherefore it is
the Pontryagin dual of the
direct product
$G=\prod_p \mathbb Z^2/p\mathbb Z^2$; cf.~\cite[Sec.\ 2.2]{Rudin}. By a theorem of von Neumann~\cite{vNeumann}, two ergodic measure-preserving transformations
with pure point dynamical spectrum are isomorphic if and only if
they have the same dynamical spectrum. This proves part
(c), which is a particular instance of the Halmos--von Neumann
theorem; cf.~\cite{HvN}.
Alternatively, the Kronecker system can be read off from the model set
description, which also provides the compact Abelian group. The general formalism is developed in~\cite{BLM}, though
the torus parametrisation does not immediately apply. Some extra work
is required here to establish the precise properties of the
measure-theoretic
homomorphism onto the compact Abelian group. Diagrammatically,
the construction looks like this:
\[\begin{array}{ccccc}
\mathbb Z^2&\longleftarrow&\mathbb Z^2\times \prod_p (\mathbb Z^2)_p &\longrightarrow&\prod_p (\mathbb Z^2)_p\\
\cup&&\cup&&\cup\\ M&&L&&W\\ \uparrow&&\uparrow&&\uparrow\\
x&\longleftrightarrow&(x,\iota(x))&\longrightarrow&\iota(x)
\end{array}\]
Here $$L:=\big\{(x,\iota(x))\,\big|\,x\in\mathbb Z^2\big\}=\big\{(x,([x]_p))\,\big|\,x\in\mathbb Z^2\big\}$$ is the natural (diagonal) embedding of $\mathbb Z^2$ into $\mathbb
Z^2\times \prod_p (\mathbb Z^2)_p$ and
$$
W:=\prod_p \big((\mathbb Z^2)_p\setminus\{[0]_p\}\big)
$$
satisfies $W=\partial W$ and has measure $\mu(W)=\prod_p(1-\tfrac{1}{p^2})=1/\zeta(2)$ with respect to the
normalised Haar measure $\mu$ on the compact group $\prod_p (\mathbb
Z^2)_p$. Clearly, one has
$$
M:=M(W):=\{x\in\mathbb Z^2\,|\,\iota(x)\in W\}=V.
$$
The above diagram is in fact a \emph{cut and project scheme}\/: $L$ is a
lattice (a discrete co-compact subgroup) in $\mathbb Z^2\times \prod_p (\mathbb Z^2)_p$
with one-to-one projection onto the first
factor and dense projection onto the second factor. This means that $V$ is a \emph{weak model
set}~\cite{TAO}. The corresponding `torus' is
$$
\mathbb T\,:=\,\big(\mathbb Z^2\times \prod_p (\mathbb Z^2)_p\big)\big/L\,\,\simeq \,\,\prod_p(\mathbb Z^2)_p,
$$
with the $\mathbb Z^2$-action given by addition of
$\iota(x)=([x]_p)$. A similar construction with translations from the group
$\mathbb R^2$ in mind is
given in~\cite{BMP}; see also~\cite[Ch.\ 5a]{Sing}.
The so-called \emph{torus parametrisation}~\cite{Moody} is the Borel map
$$
\varphi\!:\, \mathbb T\rightarrow \mathbb X_V,
$$
given by
$$
([y_p]_p)\mapsto M\big(([y_p]_p)+W\big).
$$
Clearly, $\varphi$ intertwines the $\mathbb Z^2$-actions. Note that, since $\mathbb X_V=\mathbb A$, one indeed has
$$
M(([y_p]_p)+W)=\mathbb Z^2\setminus\bigcup_p (y_p+p\mathbb Z^2)\in\mathbb X_V.
$$
The map $\varphi$ fails to be injective. For example, the fibre $\varphi^{-1}(\varnothing)$ over
the empty set $\varnothing$ is easily seen to be
uncountable. Furthermore, $\varphi$ is not continuous, since, e.g.,
$\varphi(([(0,0)]_p))=V$ but $$V\owns (1,0)\not\in\varphi\big(([(0,0)]_{p_1},\dots,[(0,0)]_{p_{n-1}},[(1,0)]_{p_n},[(0,0)]_{p_{n+1}},\dots)\big).$$ Nevertheless,
employing the ergodicity of the measure $\nu$, one can show that
$\varphi$ is in fact a measure-theoretic isomorphism between the two systems; see
Appendix~\ref{appb} for details.
\end{proof}
Let us mention that our approach is complementary to that
of~\cite{CS}. There, ergodicity and pure point spectrum are consequences of
determining all eigenfunctions, then concluding via $1$ being a simple
eigenvalue and via the basis property of the eigenfunctions. Here, we
establish the $\nu$-genericity of $V$ and the ergodicity of the measure $\nu$ and afterwards use the equivalence
theorem between pure point dynamical and diffraction spectrum~\cite[Thm.~7]{BL},
hence employing the diffraction measure of $V$ calculated in~\cite{BMP,PH}.
\section{$k$-free lattice points}\label{kfree}
The square-free integers and the visible points of the square lattice are
particular cases of the following natural generalisation. Let
$\varLambda\subset\mathbb R^n$ be a lattice. The \emph{$k$-free
points} $V=V(\varLambda,k)$ of $\varLambda$ are then defined by
$$
V\,=\,\varLambda\setminus\bigcup_{\text{$p$ \scriptsize prime}} p^k\varLambda.
$$
They are the points with the property that the
greatest common divisor of their
coordinates (in an arbitrary lattice basis) is not divisible by any non-trivial
$k$th power of an integer. Without restriction, we shall assume that
$\varLambda$ is
\emph{unimodular}, i.e.\ $|\det(\varLambda)|=1$. Moreover, we exclude the trivial
case $n=k=1$, where $V$ consists of just the two points of $\varLambda$ closest to $0$ on either side. On the basis of the results
in~\cite{BMP,PH}, one can then show analogous versions of any of the above
findings. In particular, one has~\cite[Cor.~1]{PH}
$$
\operatorname{dens}(V)=\frac{1}{\zeta(nk)}
$$
and the result for the diffraction measure $\widehat\gamma$ of $V$ looks
as follows. Recall that the \emph{dual}\/ or \emph{reciprocal
lattice}\/ $\varLambda^*$ of $\varLambda$ is
\[
\varLambda^*:=\{y \in\mathbb{R}^n\,\mid\, y\cdot x\in\mathbb Z
\mbox{ for all } x\in\varLambda\}.
\]
Further, the
\emph{denominator} of a point $\ell$ in the $\mathbb Q$-span $\mathbb
Q\varLambda^*$ of $\varLambda^*$ is defined as
$$
\operatorname{den}(\ell):=\operatorname{gcd}\{m\in\mathbb N\,\mid\,m \ell\in\varLambda^*\}.
$$
\begin{theorem}\cite[Thms.~3 and 5]{BMP} \cite[Thm.~8]{PH}\label{thdiff}
The natural diffraction measure $\widehat{\gamma}$ of the autocorrelation
$\gamma$ of\/ $V$ exists. It is a positive pure point measure which is
translation bounded and supported on the set of points in $\mathbb Q\varLambda^*$
with $(k+1)$-free denominator, so
$$
\widehat{\gamma}=\sum_{{\substack{\ell\in\mathbb
Q\varLambda^*\\\text{\scriptsize $\operatorname{den}(\ell)$ $(k+1)$-free}}}}I(\ell)\,\delta_\ell.
$$
In particular,
$I(0)=(1/\zeta(nk))^2$ and when\/ $0\neq \ell\in\mathbb Q\varLambda^*$ has
$(k+1)$-free denominator
$\operatorname{den}(\ell)$, the corresponding intensity is given by
$$
I(\ell)=\Bigg(\frac{1}{\zeta(nk)}\prod_{p\mid \operatorname{den}(\ell)}\frac{1}{p^{nk}-1}\Bigg)^2.
$$
\qed
\end{theorem}
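For illustration, in the special case of the visible points of the square lattice ($\varLambda=\mathbb Z^2$, $n=2$, $k=1$), this formula gives $I(0)=\zeta(2)^{-2}=36/\pi^4$ and, for instance, $I(\ell)=\big(\tfrac{6}{\pi^2}\cdot\tfrac{1}{2^2-1}\big)^2=4/\pi^4$ for any $\ell$ with $\operatorname{den}(\ell)=2$, such as $\ell=(\tfrac12,0)$.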
Again, the hull $$\mathbb X_V=\overline{\{t+V\,|\,t\in\varLambda\}}$$ of $V$ turns out to be just the set of
\emph{admissible}\/ subsets of $\varLambda$, i.e.\ subsets $A$ of $\varLambda$
with
$$
|A_p|<p^{nk}
$$
for any prime $p$, where $A_p$ denotes the reduction of $A$ modulo
$p^k\varLambda$; see~\cite[Thm.~6]{PH}. The natural topological dynamical system $(\mathbb
X_V,\varLambda)$ has properties analogous to those of the special case
discussed above in Theorem~\ref{c1}. In particular, it has positive topological entropy
equal to the patch counting entropy of $V$, i.e.\
$$h_{\rm pc}(V)=\frac{\log (2)}{\zeta(nk)}$$
by~\cite[Thm.~3]{PH}. For the patch frequencies, one has the following
result.
\begin{theorem}{\rm \cite[Thms.~1 and~2]{PH}}\label{freq2}
Any\/ $\rho$-patch\/ $\mathcal P$ of\/ $V$ occurs with positive
frequency, which is given by
\[\nu(\mathcal P)=\sum_{\mathcal F\subset (\varLambda\cap B_{\rho}(0))\setminus \mathcal P}(-1)^{|\mathcal F|}
\prod_p\left(1-\frac{|(\mathcal P\cup\mathcal
F)_p|}{p^{nk}}\right).\]
\qed
\end{theorem}
This gives rise to a measure-theoretic dynamical
system $(\mathbb X_{V},\varLambda,\nu)$ which can be seen, as above,
to be ergodic and metrically isomorphic to $(G,\varLambda,\mu)$, where $G$ is the
compact Abelian group
$$
G=\prod_p\varLambda_p=\prod_p\varLambda/p^k\varLambda
$$
on which the lattice $\varLambda$ acts via addition of
$\iota(x)=([x]_p)$. As before, $\mu$ is the normalised Haar measure on $G$. It follows that $(\mathbb X_{V},\varLambda,\nu)$ has
zero measure entropy; see also~\cite[Thm.~4]{PH}. Again, $V$ turns out
to be $\nu$-generic. We thus get the analogous result to
Theorem~\ref{c2} also in this more general setting.
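For the visible points of the square lattice ($n=2$, $k=1$), for instance, these results specialise to $\operatorname{dens}(V)=1/\zeta(2)=6/\pi^2\approx 0.608$ and $h_{\rm pc}(V)=6\log(2)/\pi^2\approx 0.421$.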
\section{$\mathscr B$-free lattice points}\label{bfree}
One further generalisation step seems possible as follows. In~\cite{ELD},
Lema\'nczyk et al.\ studied the dynamical properties of $\mathscr B$-free systems: here,
$\mathscr B\subset\{2,3,\dots\}$ is a set of
pairwise coprime integers satisfying
$$
\sum_{b\in\mathscr B}\frac{1}{b}<\infty
$$
and the hull $\mathbb X_{\mathscr
B}=\overline{\{t+V\,|\,t\in\mathbb Z\}}$ is the orbit closure of
the set
$$
V\,=\,\mathbb Z\setminus\bigcup_{b\in\mathscr B} b\mathbb Z
$$
of \emph{$\mathscr B$-free numbers}\/ (integers with no factor from
$\mathscr B$). Replacing the one-dimensional lattice $\mathbb Z\subset \mathbb
R$ by other unimodular
lattices $\varLambda\subset\mathbb R^n$ in the above definitions and
requiring that $\sum_{b\in\mathscr B}\frac{1}{b^n}<\infty$, one arrives
at \emph{$\mathscr B$-free lattice points}\/ $V$ and the associated
topological dynamical systems $(\mathbb X_V,\varLambda)$. The $k$-free lattice points from the previous
section then arise from the particular choice $\mathscr B=\{p^k\,|\,\text{$p$ prime}\}$. Since the proofs in~\cite{BMP,PH} do not use
special properties of $k$th powers of prime numbers except their
pairwise coprimality, the above results carry over to the case of $\mathscr
B$-free lattice points with almost identical proofs. In particular,
the density and topological (patch counting) entropy of $V$ are given by
$$
\operatorname{dens}(V)=\prod_{b\in\mathscr B}\left(1-\frac{1}{b^n}\right)
$$
and
$
h_{\rm pc}(V)=\log (2)\operatorname{dens}(V),
$
respectively. Again, $\mathbb X_V$ contains the \emph{admissible}\/
subsets of $\varLambda$, i.e.\ the subsets $A$ with
$$
|A_b|<b^{n}
$$
for any $b\in\mathscr B$, where $A_b$ denotes the reduction of $A$ modulo
$b\varLambda$. Moreover, the diffraction measure $\widehat \gamma$
of $V$ exists. It is a pure point measure that is supported on the set of points $\ell\in\mathbb Q\varLambda^*$
with the property that the denominator $\operatorname{den}(\ell)$ divides
a finite product of distinct $b$'s, the intensity at such a point
being given by
$$
I(\ell)=\Bigg(\operatorname{dens}(V)\prod_{\substack{ b\in\mathscr B\\\operatorname{gcd}(\operatorname{den}(\ell),b)\neq 1}}\frac{1}{b^{n}-1}\Bigg)^2.
$$
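As a simple consistency check (with a finite, hence crystallographic, choice of $\mathscr B$): for $n=1$ and $\mathscr B=\{2,3,5\}$, one gets $\operatorname{dens}(V)=\tfrac12\cdot\tfrac23\cdot\tfrac45=\tfrac{4}{15}$ and, for instance, $I(\tfrac16)=\big(\tfrac{4}{15}\cdot\tfrac{1}{2-1}\cdot\tfrac{1}{3-1}\big)^2=\big(\tfrac{2}{15}\big)^2$, in agreement with a direct Fourier computation for the $30$-periodic point set $V=\{x\in\mathbb Z\,|\,\gcd(x,30)=1\}$.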
Both the pure point nature and the intensity formula can be shown by
writing $\widehat \gamma$ as a vague limit of diffraction measures of
approximating crystallographic systems. A Weierstra{\ss} M-test, as
in~\cite{BMP}, implies that one also has the stronger norm
convergence, which preserves the pure point nature of approximating
measures in the limit via~\cite[Thm.\ 8.4]{TAO}.
Further, the frequency of a $\rho$-patch $\mathcal P$ of $V$ is
positive and given by the expression
$$
\nu(\mathcal P)=\sum_{\mathcal F\subset (\varLambda\cap B_{\rho}(0))\setminus \mathcal P}(-1)^{|\mathcal F|}
\prod_{b\in\mathscr B}\left(1-\frac{|(\mathcal P\cup\mathcal
F)_b|}{b^{n}}\right).
$$
The associated measure-theoretic dynamical
system $(\mathbb X_{V},\varLambda,\nu)$ can be seen, as above,
to be ergodic and metrically isomorphic to $(G,\varLambda,\mu)$, where $G$ is the
compact Abelian group
$$
G=\prod_{b\in\mathscr B}\varLambda_b=\prod_{b\in\mathscr B}\varLambda/b\varLambda
$$
on which the lattice $\varLambda$ acts via addition of
$\iota(x)=([x]_b)$
and $\mu$ is the normalised Haar measure on $G$. Again, $V$ turns out
to be $\nu$-generic. Due to the similarity of the structures, we leave
all details to the reader.
\section{$k$-free integers in number fields}\label{number}
Another possible extension is given by the number-theoretic setting that was studied
by Cellarosi and Vinogradov in~\cite{CV}. Let $K$ be an algebraic
number field, i.e.\ a finite extension of $\mathbb Q$, say of degree
$[K:\mathbb Q]=d$; see~\cite{BS,Neukirch} for background material on
algebraic number theory. Further, consider the Dedekind domain $\mathcal O_K$ of
integers in $K$ (in particular, any ideal $0\neq\mathfrak a\neq
\mathcal O_K$
of $\mathcal O_K$
factors uniquely as a product of prime ideals) and define, for $k\ge
2$, the set $V$ of \emph{$k$-free integers}\/ of $K$ as the nonzero elements $a\in
\mathcal O_K$ with the property that, if $a$ is not a
unit, then the prime ideal factorisation of
the corresponding principal ideal
$0\neq(a)\neq\mathcal O_K$ contains no $k$th powers of prime ideals, i.e.\ $(a)\not\subseteq
\mathfrak p^k$ for any prime ideal $\mathfrak p\subset \mathcal
O_K$. In other words, one has
$$
V\,=\,V(\mathcal O_K,k)\,=\,\mathcal O_K\setminus\bigcup_{\substack{\mathfrak p\subset \mathcal
O_K\\\text{\scriptsize $\mathfrak p$ prime ideal}}} \mathfrak p^k.
$$
It is well known that $\mathcal
O_K$ is a free $\mathbb Z$-module of rank $d$ and is thus isomorphic to the lattice
$\mathbb Z^d$ as a group. In particular, there is a natural isomorphism from $\mathcal
O_K$ to a lattice in $\mathbb R^d$, namely the \emph{Minkowski
embedding}; see~\cite{BS,Neukirch} and~\cite[Ch.\ 3.4]{TAO}. In order to illustrate this, we prefer to discuss the specific real quadratic
number field
$K=\mathbb Q(\sqrt 2)$ with $\mathcal
O_K=\mathbb Z[\sqrt 2]$. It is well known that $K$ is a Galois
extension and thus has precisely two field automorphisms, namely the
identity and the non-trivial automorphism determined by $\sqrt
2\mapsto -\sqrt 2$. We denote the latter automorphism by $x\mapsto
x'$. One can then easily check that $\mathcal O_K$ corresponds
under the \emph{Minkowski embedding}\/ $j\!:\, K\rightarrow
\mathbb R^2$, given by
$$
x\mapsto (x,x'),
$$
to a non-unimodular lattice
$\mathcal L$ in $\mathbb R^2$ with area $|\det \mathcal L|=2\sqrt
2=\sqrt{ |d_K|}$, where $d_K=8$ is the discriminant of $K$. In fact, since $\mathcal O_K=\mathbb Z\oplus \mathbb Z\sqrt 2$, one has $\mathcal L=\mathbb
Z(1,1)\oplus\mathbb Z(\sqrt 2,-\sqrt 2)$; see Figure~\ref{fig: minkowski}.
\begin{center}
\begin{figure}
\caption{The Minkowski embedding of $\mathbb Z[\sqrt 2]$ in $\mathbb R^2$.}
\label{fig: minkowski}
\end{figure}
\end{center}
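To illustrate the definition of $V$ in this setting, say for $k=2$: since $(2)=(\sqrt 2\,)^2$, the element $2$ is not square-free in $\mathcal O_K$, whereas $3$ generates a prime ideal (the rational prime $3\equiv 3\bmod 8$ is inert in $K$) and hence belongs to $V$; units such as $1+\sqrt 2$ belong to $V$ by definition.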
Moreover, the image
$j(\mathfrak a)$ of any ideal
$0\neq\mathfrak a\subset \mathcal O_K$ is a lattice in $\mathbb
R^2$ with area
\begin{equation}\label{normindex}
|\det j(\mathfrak a)|=2\sqrt 2\, (\mathcal O_K:\mathfrak a),
\end{equation}
where $(\mathcal O_K:\mathfrak a)$ denotes the (finite) subgroup index of
$\mathfrak a$ in $\mathcal O_K$, i.e.\ the absolute norm $N(\mathfrak a)$
of $\mathfrak a$. Note that the absolute norm is a totally multiplicative function on the set of
non-zero ideals of $\mathcal O_K$. We are thus led to consider
the familiar object
$$
j(V)\,=\,\mathcal L\setminus\bigcup_{\mathfrak p} j(\mathfrak p^k),
$$
where here and below $\mathfrak p$ runs through the prime ideals of
$\mathcal O_K$; see Fig.~\ref{fig: squarefreenumber} for an illustration. Again, the
proofs in~\cite{BMP,PH} can be adjusted to obtain similar results as above.
\begin{center}
\begin{figure}
\caption{A central patch of the Minkowski embedding of the square-free
integers $V$ of $\mathbb Q(\sqrt 2)$.}
\label{fig: squarefreenumber}
\end{figure}
\end{center}
Denote by
$$
\zeta_K(s)=\zeta_{\mathbb
Q(\sqrt 2)}(s)=\sum_{\substack{\mathfrak a\subset \mathcal
O_K\\\text{\scriptsize $\mathfrak a\neq 0$
ideal}}}\frac{1}{N(\mathfrak a)^s}=\prod_{\mathfrak p}\left(1-\frac{1}{N(\mathfrak p)^s}\right)^{-1}
$$
the \emph{Dedekind $\zeta$-function}\/ of $K$, which converges for all $s$ with
$\operatorname{Re}(s)>1$. Employing Eq.~\eqref{normindex}, a similar reasoning as in the
previous sections now shows that the density and topological (patch
counting) entropy of $j(V)$ are given by
$$
\operatorname{dens}\big(j(V)\big)=\frac{1}{2\sqrt
2}\,\frac{1}{\zeta_{K}(k)}=\frac{1}{2\sqrt 2}\prod_{\mathfrak p}\left(1-\frac{1}{N(\mathfrak p)^k}\right)
$$
and $\log (2) \operatorname{dens}(j(V))$, respectively. Note that the Chinese Remainder Theorem in its general form says that, given
pairwise coprime ideals $\mathfrak a_1,\dots,\mathfrak a_r$ in a ring $\mathcal
O$ ($\mathfrak a_s+\mathfrak a_t=\mathcal O$ for $s\neq t$), one has
$\prod_{i=1}^r\mathfrak a_i=\mathfrak a_1\cap\dots\cap \mathfrak a_r$
and
$$
\mathcal O\big/\prod_{i=1}^r\mathfrak a_i\,\simeq\,
\prod_{i=1}^r \mathcal O/\mathfrak a_i.
$$
Recall that the \emph{dual}\/ or \emph{reciprocal
module}\/ $\mathcal O_K^*$ of $\mathcal O_K$ is the fractional ideal
\[
\mathcal O_K^*:=\{y \in K\,\mid\, \operatorname{Tr}_{K/\mathbb Q}(yx)\in\mathbb Z
\mbox{ for all } x\in\mathcal O_K\}
\]
of $K$ containing $\mathcal O_K$, where $\operatorname{Tr}_{K/\mathbb Q}(yx)=yx+y'x'$ is the \emph{trace}\/ of $yx$. Then $j(\mathcal O_K^*)=\mathcal L^*$ and hence $j(\mathbb
Q\mathcal O_K^*)=\mathbb Q
\mathcal L^*$. Here, one calculates that $\mathcal O_K^*=\mathbb Z
\frac{1}{2}\oplus\mathbb Z\frac{\sqrt 2}{4}$ and thus $\mathbb Q\mathcal
O_K=\mathbb Q \mathcal O_K^*=K$ as well as $\mathbb Q\mathcal
L^*=\mathbb Q\mathcal L$. Further, the
\emph{denominator} of a point $\ell$ in $\mathbb
Q\mathcal L^*$ is defined as the non-zero ideal
$$
\operatorname{den}(\ell):=\{x\in \mathcal O_K \,\mid\,x
j^{-1}(\ell)\in\mathcal O_K^*\}\subset\mathcal O_K.
$$
Then, the diffraction measure $\widehat \gamma$
of $j(V)$ is pure point (for the same reason as above) and is supported on the set of points $\ell\in j(\mathbb
Q\mathcal O_K^*)=\mathbb Q
\mathcal L^*$
with $(k+1)$-free denominator $\operatorname{den}(\ell)$ (i.e.,
either $\operatorname{den}(\ell)=\mathcal O_K$ or the unique
prime ideal factorization of $\operatorname{den}(\ell)$ contains no
$(k+1)$th powers), the intensity
at such a point
being given by
$$
I(\ell)=\Bigg(\frac{1}{2\sqrt 2}\,\frac{1}{\zeta_{K}(k)}\prod_{\substack{\mathfrak p\\ \operatorname{den}(\ell)\subset\mathfrak p}}\frac{1}{N(\mathfrak p)^{k}-1}\Bigg)^2.
$$
See Fig.~\ref{fig: squarefreenumberdiff} for an illustration, where
the restriction of $\widehat \gamma$ to a compact region is shown.
\begin{center}
\begin{figure}
\caption{Central part of the diffraction measure $\widehat{\gamma}$ of the Minkowski embedding of the square-free integers of $\mathbb Q(\sqrt 2)$.}
\label{fig: squarefreenumberdiff}
\end{figure}
\end{center}
With a view to the general case of an algebraic number field $K$, the
above did not make use of the additional fact that $\mathbb Z[\sqrt
2]$ is in fact a \emph{Euclidean domain}\/ and thus a \emph{principal
ideal domain}\/ (in particular, a \emph{unique factorisation domain}). Here, one has
$N(\mathfrak a)=|N_{K/\mathbb Q}(a)|$ for $\mathfrak a=(a)$, where
$N_{K/\mathbb Q}(a)=aa'$ is the \emph{norm} of $a$. Note that $\mathbb Z[\sqrt
2]$ is a Euclidean domain with respect to the norm function
$x\mapsto |N_{K/\mathbb Q}(x)|$, i.e. $a+b\sqrt 2\mapsto |a^2-2b^2|$. Furthermore, the
Dedekind $\zeta$-function of $K$ can
be written more explicitly in terms of the usual rational
primes, i.e.\
$$
\zeta_K(s)=\frac{1}{1-2^{-s}}\prod_{p\equiv\pm 1 (8)}\frac{1}{(1-p^{-s})^2}\prod_{p\equiv\pm 3 (8)}\frac{1}{1-p^{-2s}};
$$
see~\cite[Eq.\ (7)]{BM}. For instance, one has
$\zeta_K(2)=\frac{\pi^4}{48\sqrt 2}$; see~\cite[Eq.\ (58)]{BM}. Hence, the intensity $I(\ell)$ can be
computed explicitly in terms of the prime elements of $\mathcal O_K$
dividing any generator of the (principal) ideal
$\operatorname{den}(\ell)$.
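For the square-free case $k=2$, for instance, these formulas become completely explicit: $\operatorname{dens}\big(j(V)\big)=\frac{1}{2\sqrt 2}\cdot\frac{48\sqrt 2}{\pi^4}=\frac{24}{\pi^4}\approx 0.246$, the central intensity is $I(0)=\big(\tfrac{24}{\pi^4}\big)^2$, and $I(\ell)=\big(\tfrac{24}{\pi^4}\cdot\tfrac{1}{3}\big)^2=\big(\tfrac{8}{\pi^4}\big)^2$ whenever $\operatorname{den}(\ell)=(\sqrt 2\,)$, since then $N(\mathfrak p)^k-1=2^2-1=3$ for the only prime ideal $\mathfrak p\supset\operatorname{den}(\ell)$.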
Let us finally turn to the associated topological dynamical system
$(\mathbb X_V,\mathcal O_K)\simeq(\mathbb
X_{j(V)},\mathcal L)$, where $\mathbb X_V$ resp.\ $\mathbb
X_{j(V)}$ are given as the translation orbit closure of $V$ resp.\ $j(V)$ with
respect to the product topology on $\{0,1\}^{\mathcal O_K}$ resp.\
$\{0,1\}^{\mathcal L}$. Again, $\mathbb X_V$ resp.\ $\mathbb
X_{j(V)}$ can be characterised as the \emph{admissible}\/ subsets $A$ of
$\mathcal O_K$ resp.\ $\mathcal L$, i.e.\
$$
|A_{\mathfrak p}|<N(\mathfrak p)^k
$$
for any prime ideal $\mathfrak p$ of $\mathcal O_K$, where
$A_{\mathfrak p}$ denotes the reduction of $A$ modulo $\mathfrak p^k$
resp.\ $j(\mathfrak p^k)$. Further, the frequency of a $\rho$-patch $\mathcal P$ of $j(V)$ is
positive and given by the expression
$$
\nu(\mathcal P)=\sum_{\mathcal F\subset (\mathcal L\cap B_{\rho}(0))\setminus \mathcal P}(-1)^{|\mathcal F|}
\prod_{\mathfrak p}\left(1-\frac{|(\mathcal P\cup\mathcal
F)_{\mathfrak p}|}{N(\mathfrak p)^k}\right).
$$
The associated measure-theoretic dynamical
system $(\mathbb X_{j(V)},\mathcal L,\nu)\simeq (\mathbb X_V,\mathcal O_K,\nu)$ can be seen, as above,
to be ergodic and metrically isomorphic to $(G,\mathcal L,\mu)$, where $G$ is the
compact Abelian group
$$
G=\prod_{\mathfrak p}\mathcal L_{\mathfrak p}=\prod_{\mathfrak
p}\mathcal L/j(\mathfrak p^k)\simeq \prod_{\mathfrak p}\mathcal
O_K/\mathfrak p^k=\prod_{\mathfrak p}(\mathcal O_K)_{\mathfrak p}
$$
on which the lattice $\mathcal L$ (resp.\ the group $\mathcal O_K$) acts via
addition of $\iota(j(x))=([j(x)]_{\mathfrak p})$
(resp.\ $\iota(x)=([x]_{\mathfrak p})$), where $x\in\mathcal O_K$,
and $\mu$ is the normalised Haar measure on $G$. As above, $V$ turns out
to be $\nu$-generic.
Since none of the above uses special properties of the quadratic field
$\mathbb Q(\sqrt 2)$, similar results hold for the general case of an arbitrary
algebraic number field $K$. Moreover, even the extension to $\mathscr
B$-free integers in $K$, i.e.\
$$
V\,=\,V_{\mathcal O_K}\,=\,\mathcal O_K\setminus\bigcup_{\mathfrak
b\in \mathscr B} \mathfrak b,
$$
where $\mathscr B$ is a set of pairwise coprime ideals $\mathfrak
b\subsetneq \mathcal O_K$ satisfying
$$
\sum_{\mathfrak
b\in \mathscr B}\frac{1}{N(\mathfrak b)}<\infty,
$$
should be possible.
\appendix
\renewcommand{\theequation}{A\arabic{equation}}
\setcounter{equation}{0}
\section{Ergodicity of the patch frequency measure}\label{appa}
Below, we shall only treat the paradigmatic case $V=V_{\mathbb Z^2}$ from Section~\ref{visible}. For a $\rho$-patch $\mathcal P\in\mathcal A(\rho)$ of $V$ (i.e.\ $\mathcal P\subset
B_{\rho}(0)\cap \mathbb Z^2$ with $\mathcal P\in\mathbb A$), denote by
$B_\mathcal P$ the set of elements of $\mathbb X_V=\mathbb A$ whose
$\rho$-patch at $0$ contains $\mathcal P$. One readily checks that the
sets of type
$B_\mathcal P$ form a semi-algebra that also generates the Borel
$\sigma$-algebra on $\mathbb X_V$. In fact, one has $$C_\mathcal
P=B_\mathcal P\setminus {\bigcup_{\substack{\mathcal Q\in \mathcal
A(\rho)\\\mathcal P\subsetneq\mathcal Q}}} B_\mathcal Q,$$ and
\begin{equation}\label{bc}
B_\mathcal P=\dot{\bigcup_{\substack{\mathcal Q\in \mathcal
A(\rho)\\\mathcal P\subset\mathcal Q}}} C_\mathcal Q.
\end{equation}
\begin{corollary}\label{nub}
For any\/ $\rho$-patch\/ $\mathcal P$ of $V$, one has
$$
\nu(B_\mathcal P)=\prod_p\left(1-\frac{|\mathcal
P_p|}{p^{2}}\right).
$$
\end{corollary}
\begin{proof}
Let $\mathcal P$ be a $\rho$-patch of $V$. In this proof, we indicate summation variables by a dot under
the symbol. By Theorem~\ref{freq} and the definition of $\nu$, relation~\eqref{bc} implies that
\begin{eqnarray*}
\nu(B_\mathcal P)
&=& \sum_{\substack{\mathcal Q\in \mathcal A(\rho)\\\mathcal P\subset\mathcal Q}} \nu(C_\mathcal Q)\\
&=&\sum_{\substack{\mathcal Q\in \mathcal A(\rho)\\\mathcal P\subset\mathcal Q}}\hspace{1mm}\sum_{\mathcal Q\subset\mathcal F\subset \mathbb Z^2\cap B_{\rho}(0)}(-1)^{|\mathcal F\setminus \mathcal Q|}\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\\
&=& \sum_{\mathcal P\subset \udo{\mathcal Q}\subset\udo{\mathcal F}\subset \mathbb Z^2\cap B_{\rho}(0)}(-1)^{|\mathcal F\setminus \mathcal Q|}\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\\
&=& \prod_p\left(1-\frac{|\mathcal P_p|}{p^{2}}\right)+\sum_{\substack{\mathcal P\subset \udo{\mathcal Q}\subset\udo{\mathcal F}\subset \mathbb Z^2\cap B_{\rho}(0)\\\mathcal F\neq\mathcal P}}(-1)^{|\mathcal F\setminus \mathcal Q|}\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\\
&=& \prod_p\left(1-\frac{|\mathcal P_p|}{p^{2}}\right),
\end{eqnarray*}
since, for fixed $\mathcal F \subset \mathbb Z^2\cap B_{\rho}(0)$ with
$\mathcal P\subsetneq \mathcal F$, one indeed has
\begin{eqnarray*}
\sum_{\mathcal P\subset \udo{\mathcal Q}\subset\mathcal F}(-1)^{|\mathcal F\setminus \mathcal Q|}\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)
&=& \prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\sum_{\mathcal P\subset \udo{\mathcal Q}\subset\mathcal F}(-1)^{|\mathcal F\setminus \mathcal Q|}\\
&=&\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\sum_{\udo{\mathcal R}\subset \mathcal F\setminus\mathcal P}(-1)^{|\mathcal R|}\\
&=&\prod_p\left(1-\frac{|\mathcal F_p|}{p^{2}}\right)\sum_{i=0}^{|\mathcal F\setminus\mathcal P|} {|\mathcal F\setminus\mathcal P|\choose i}(-1)^i\\
&=& 0,
\end{eqnarray*}
where the last equality follows from the binomial theorem since $\mathcal F\setminus\mathcal P\neq \varnothing$.
\end{proof}
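For instance, for the one-point patch $\mathcal P=\{0\}$ (any $\rho<1$), Corollary~\ref{nub} gives $\nu(B_{\mathcal P})=\prod_p\big(1-\tfrac{1}{p^2}\big)=1/\zeta(2)$, which is precisely the density of $V$, as it should be.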
For a natural number $m$, finite subsets $\mathcal P$ and $\mathcal Q$ of
$\mathbb Z^2$ and $S\subset \mathcal P_m$, we set
$$
\mathcal Q_{S,\mathcal P}^m=\Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus\Big(\bigcup_{s\in\mathcal P_m\setminus S}\mathcal Q_m-s\Big),
$$
i.e.\ the set of elements of $(\mathbb Z^2)_m$ that lie
in $\mathcal Q_m-s$ precisely for those $s\in \mathcal P_m$ with $s\in
S\subset\mathcal P_m$, in particular $\mathcal
Q_{\varnothing,\mathcal P}^m=(\mathbb Z^2)_m \setminus (\mathcal
Q_m-\mathcal P_m)$. With $q_{S,\mathcal P}^m=|\mathcal Q_{S,\mathcal P}^m|$, one then has
$q_{\varnothing,\mathcal P}^m=m^{2}-|\mathcal Q_m-\mathcal P_m|$ and, since
the difference set $\mathcal Q_m-\mathcal P_m$
is the disjoint union of the various $\mathcal Q_{S,\mathcal P}^m$,
where one has
$\varnothing\neq S\subset \mathcal P_m$,
\begin{equation}\label{eq1}
\sum_{S\subset \mathcal P_m}q_{S,\mathcal P}^m=m^{2}.
\end{equation}
Note that the following two lemmas also hold for any
finite
subsets $\mathcal P$ and $\mathcal Q$ of an arbitrary finite group $G$ instead of
$G=(\mathbb Z^2)_m$.
\begin{lemma}\label{lambda}
For any finite subsets\/ $\mathcal P$ and\/ $\mathcal Q$ of\/
$\mathbb Z^2$ and any natural number\/ $m$, one has
$$
\sum_{S\subset \mathcal P_m}|S|q_{S,\mathcal P}^m=|\mathcal
P_m||\mathcal
Q_m|.
$$
\end{lemma}
\begin{proof}
We use induction on $|\mathcal
P_m|$. For the induction basis $|\mathcal
P_m|=0$, the assertion is trivially true. For the induction step, consider $\mathcal P$ with $|\mathcal P_m|>0$
and fix an element, say $*$, of $\mathcal P_m$. It follows that
\begin{eqnarray*}
\sum_{S\subset \mathcal P_m}|S|q_{S,\mathcal P}^m
&=&\sum_{S\subset \mathcal P_m\setminus\{*\}}|S|q_{S,\mathcal
P}^m+\sum_{S\subset \mathcal
P_m\setminus\{*\}}(|S|+1)q_{S\cup\{*\},\mathcal P}^m\\
&=&\sum_{S\subset \mathcal
P_m\setminus\{*\}}|S|(q_{S,\mathcal
P}^m+q_{S\cup\{*\},\mathcal P}^m)+\sum_{S\subset
\mathcal P_m\setminus\{*\}}q_{S\cup\{*\},\mathcal P}^m\\
&=&\sum_{S\subset \mathcal
P_m\setminus\{*\}}|S|(q_{S,\mathcal
P}^m+q_{S\cup\{*\},\mathcal P}^m)+|\mathcal Q_m|,
\end{eqnarray*}
since
\begin{eqnarray*}
\sum_{S\subset
\mathcal P_m\setminus\{*\}}q_{S\cup\{*\},\mathcal P}^m
&=&\sum_{S\subset
\mathcal P_m\setminus\{*\}}\left|\left((\mathcal Q_m-\{*\})\cap \Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\right)\setminus\Big(\bigcup_{s\in\mathcal P_m\setminus
(S\cup\{*\})}\mathcal Q_m-s\Big)\right |\\
&=&\sum_{S\subset
\mathcal P_m\setminus\{*\}}\left|(\mathcal Q_m-\{*\})\cap \left(\Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus\Big(\bigcup_{s\in (\mathcal P_m\setminus\{*\})\setminus
S}\mathcal Q_m-s\Big)\right)\right |\\
&=&|\mathcal Q_m-\{*\}|\\
&=&|\mathcal Q_m|.
\end{eqnarray*}
Furthermore, for $S\subset \mathcal P_m\setminus\{*\}$, one has
\begin{eqnarray*}
&&\hspace{-2em}
\mathcal Q_{S\cup\{*\},\mathcal P}^m\cup \mathcal Q_{S,\mathcal P}^m\\
&=&\left[(\mathcal Q_m-\{*\})\cap \left(\Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus\Big(\bigcup_{s\in (\mathcal P_m\setminus\{*\})\setminus
S}\mathcal Q_m-s\Big)\right)\right]\dot\bigcup \\
&\hphantom{=}&\left[\left(\Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus (\mathcal Q_m-\{*\})\right)\cap \left(\Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus\Big(\bigcup_{s\in (\mathcal P_m\setminus\{*\})\setminus
S}\mathcal Q_m-s\Big)\right)\right]\\
&=& \Big(\bigcap_{s\in S}\mathcal
Q_m-s\Big)\setminus\Big(\bigcup_{s\in (\mathcal P_m\setminus\{*\})\setminus
S}\mathcal Q_m-s\Big)\\
&=&\mathcal Q_{S,\mathcal P\setminus \{*\}}^m
\end{eqnarray*}
wherefore, by the induction hypothesis,
$$
\sum_{S\subset \mathcal
P_m\setminus\{*\}}|S|(q_{S,\mathcal P}^m+q_{S\cup\{*\},\mathcal P}^m)=\sum_{S\subset \mathcal
P_m\setminus\{*\}}|S|q_{S,\mathcal P\setminus \{*\}}^m=(|\mathcal P_m|-1)|\mathcal
Q_m|.
$$
This completes the proof.
\end{proof}
\begin{lemma}\label{lambda2}
For any finite subsets\/ $\mathcal P$ and\/ $\mathcal Q$ of
$\mathbb Z^2$ and square-free
$d$, one has
$$
\sum_{\substack{(\nu_p)_{p\mid
d}\\0\le\nu_p\le|\mathcal P_p|}}\prod_{p\mid
d}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big)\,=\,\prod_{p\mid d}\left(p^{2}|\mathcal P_p|+ p^{2}|\mathcal
Q_p|-|\mathcal P_p| |\mathcal Q_p|\right).
$$
\end{lemma}
\begin{proof}
This is proved by induction on the number $\omega(d)$ of prime
factors of $d$. For the induction basis $\omega(d)=1$, say $d=p$, note that by~\eqref{eq1}
and Lemma~\ref{lambda} one indeed has
\begin{eqnarray*}
\sum_{0\le\nu_p\le|\mathcal P_p|}\Big((\nu_p+|\mathcal
Q_p|)\sum_{\substack{S\subset \mathcal P_p\\|S|=|\mathcal
P_p|-\nu_p}}q_{S,\mathcal P}^p\Big)
&=& \Big(|\mathcal
Q_p| \sum_{S\subset \mathcal P_p}q_{S,\mathcal P}^p \Big)+\sum_{S\subset \mathcal P_p}(|\mathcal
P_p|-|S|)q_{S,\mathcal P}^p\\
&=& |\mathcal
Q_p|p^{2}+|\mathcal
P_p|p^{2}-|\mathcal
P_p||\mathcal
Q_p|.
\end{eqnarray*}
For the induction step, consider $d$ with $\omega(d)=r+1$, where $r\ge
1$, say $d=p_1\ldots p_rp_{r+1}$. Then
\begin{eqnarray*}
&&\hspace{-2em}
\sum_{\substack{(\nu_p)_{p\mid
d}\\0\le\nu_p\le|\mathcal P_p|}}\prod_{p\mid
d}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big)\\
&=&\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{(\nu_p)_{p\mid
d}\\0\le\nu_p\le|\mathcal P_p|\\ \nu_{p_{r+1}}=i}}(i+|\mathcal Q_{p_{r+1}}|)\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}q_{S,\mathcal P}^{p_{r+1}}\prod_{p\mid
\frac{d}{p_{r+1}}}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big)\\
&=&\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}(i+|\mathcal Q_{p_{r+1}}|)\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}q_{S,\mathcal P}^{p_{r+1}}\sum_{\substack{(\nu_p)_{p\mid
\frac{d}{p_{r+1}}}\\0\le\nu_p\le|\mathcal P_p|}}\prod_{p\mid
\frac{d}{p_{r+1}}}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big),
\end{eqnarray*}
which is, by the induction hypothesis,
\begin{eqnarray*}
&&\hspace{-2em}
\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}(i+|\mathcal Q_{p_{r+1}}|)\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}q_{S,\mathcal P}^{p_{r+1}}\prod_{p\mid \frac{d}{p_{r+1}}}\left(p^{2}|\mathcal P_p|+ p^{2}|\mathcal
Q_p|-|\mathcal P_p| |\mathcal Q_p|\right)\\
&=&\prod_{p\mid \frac{d}{p_{r+1}}}\left(p^{2}|\mathcal P_p|+ p^{2}|\mathcal
Q_p|-|\mathcal P_p| |\mathcal Q_p|\right)\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}(i+|\mathcal
Q_{p_{r+1}}|)q_{S,\mathcal P}^{p_{r+1}}.
\end{eqnarray*}
It thus remains to show that
$$
\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}(i+|\mathcal
Q_{p_{r+1}}|)q_{S,\mathcal P}^{p_{r+1}}\,=\,p_{r+1}^{2}|\mathcal P_{p_{r+1}}|+ p_{r+1}^{2}|\mathcal
Q_{p_{r+1}}|-|\mathcal P_{p_{r+1}}| |\mathcal Q_{p_{r+1}}|,
$$
which is clear since, by~\eqref{eq1}
and Lemma~\ref{lambda} again, one has
\begin{eqnarray*}
&&\hspace{-2em}
\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}(i+|\mathcal
Q_{p_{r+1}}|)q_{S,\mathcal P}^{p_{r+1}}\\
&=&\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}iq_{S,\mathcal P}^{p_{r+1}}+\sum_{i=0}^{|\mathcal P_{p_{r+1}}|}\sum_{\substack{S\subset \mathcal
P_{p_{r+1}}\\|S|=|\mathcal P_{p_{r+1}}|-i}}|\mathcal
Q_{p_{r+1}}|q_{S,\mathcal P}^{p_{r+1}}\\
&=&\sum_{S\subset \mathcal
P_{p_{r+1}}}(|\mathcal P_{p_{r+1}}|-|S|)q_{S,\mathcal P}^{p_{r+1}}+|\mathcal
Q_{p_{r+1}}|\sum_{S\subset \mathcal
P_{p_{r+1}}}q_{S,\mathcal P}^{p_{r+1}}\\
&=&|\mathcal P_{p_{r+1}}|p_{r+1}^{2}-|\mathcal P_{p_{r+1}}| |\mathcal Q_{p_{r+1}}|+ |\mathcal
Q_{p_{r+1}}|p_{r+1}^{2}.
\end{eqnarray*}
This completes the proof.
\end{proof}
The proof of the main result below needs the well-known estimate
\begin{equation}\label{LambdaCount}
|B_\rho({ x})\cap\mathbb Z^2|=\pi\rho^2+
O(\rho)+O(1),
\end{equation}
approximating the number of points of
$\mathbb Z^2$ in a large ball $B_\rho({ x})$ (the last error term being
required only when $\rho^2<1$). This is
obtained by dividing $B_\rho({ x})$ into fundamental regions for
$\mathbb Z^2$, each of volume $1$ and containing one point of
$\mathbb Z^2$, with the error terms arising from fundamental regions that
overlap the boundary of $B_\rho({ x})$. A more precise version is
given as~\cite[Prop.~1]{BMP}.
\begin{theorem}
The measure\/ $\nu$ is ergodic.
\end{theorem}
\begin{proof}
We have to show that
$$
\nu(B_\mathcal P)\nu(B_\mathcal Q)=\lim_{R\to \infty}\frac{1}{\pi R^2}\sum_{x\in B_R(0)\cap \mathbb Z^2}\nu\big((x+B_\mathcal P)\cap B_\mathcal Q\big)
$$
for any finite admissible subsets $\mathcal P$ and $\mathcal Q$ of
$\mathbb Z^2$; cf.~\cite[Thm.~1.17(i)]{Walters}. By Corollary~\ref{nub}, the latter is equivalent to
$$
\prod_p\left(1-\frac{|\mathcal
P_p|}{p^{2}}\right)\prod_p\left(1-\frac{|\mathcal
Q_p|}{p^{2}}\right)=\lim_{R\to \infty}\frac{1}{\pi R^2}\sum_{x\in B_R(0)\cap \mathbb Z^2}\nu\big((x+B_\mathcal P)\cap B_\mathcal Q\big).
$$
Let $\mu$ denote the M\"obius function for the rest of this proof. The left-hand side is equal to
\begin{eqnarray*}
\prod_p\left(1-\frac{|\mathcal P_p|}{p^{2}}- \frac{|\mathcal
Q_p|}{p^{2}}+\frac{|\mathcal P_p| |\mathcal Q_p|}{p^{4}} \right)&=&\prod_p\left(1-\left(\frac{|\mathcal P_p|}{p^{2}}+ \frac{|\mathcal
Q_p|}{p^{2}}-\frac{|\mathcal P_p||\mathcal Q_p|}{p^{4}} \right)\right)\\
&=&\sum_{\text{\scriptsize $d$ square-free}}\mu(d)\prod_{p\mid d}\left(\frac{|\mathcal P_p|}{p^{2}}+ \frac{|\mathcal
Q_p|}{p^{2}}-\frac{|\mathcal P_p| |\mathcal Q_p|}{p^{4}} \right)\\
&=&\sum_{\text{\scriptsize $d$ square-free}}\frac{\mu(d)}{d^{2}}\prod_{p\mid d}\left(|\mathcal P_p|+ |\mathcal
Q_p|-\frac{|\mathcal P_p| |\mathcal Q_p|}{p^{2}} \right).
\end{eqnarray*}
For a fixed $R>0$, the right-hand side is
\begin{eqnarray*}
&&\hspace{-2em}\frac{1}{\pi R^2}\sum_{x\in B_R(0)\cap
\mathbb Z^2}\prod_p\left(1-\frac{|(\mathcal Q\cup x+\mathcal
P)_p|}{p^{2}}\right)\\
&=& \frac{1}{\pi R^2}\sum_{x\in B_R(0)\cap
\mathbb Z^2}\hspace{1mm}\sum_{\text{\scriptsize $d$
square-free}}\mu(d)\prod_{p\mid d}\frac{|(\mathcal Q\cup x+\mathcal
P)_p|}{p^{2}}\\
&=& \frac{1}{\pi R^2}\sum_{\text{\scriptsize $d$
square-free}}\mu(d)\sum_{x\in B_R(0)\cap
\mathbb Z^2}\prod_{p\mid d}\frac{|(\mathcal Q\cup x+\mathcal
P)_p|}{p^{2}}\\
&=& \frac{1}{\pi R^2}\sum_{\text{\scriptsize $d$
square-free}}\mu(d)\sum_{\substack{(\nu_p)_{p\mid d}\\|\mathcal
Q_p|\le\nu_p\le|\mathcal Q_p|+|\mathcal P_p|}}\,\sum_{\substack{x\in B_R(0)\cap
\mathbb Z^2\\\forall p\mid d: |(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p}}\prod_{p\mid d}\frac{\nu_p}{p^{2}}\\
&=& \frac{1}{\pi R^2}\sum_{\text{\scriptsize $d$
square-free}}\mu(d)\sum_{\substack{(\nu_p)_{p\mid d}\\|\mathcal
Q_p|\le\nu_p\le|\mathcal Q_p|+|\mathcal P_p|}}\prod_{p\mid d}\frac{\nu_p}{p^{2}}\sum_{\substack{x\in B_R(0)\cap
\mathbb Z^2\\\forall p\mid d: |(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p}}1\\
&=& \frac{1}{\pi R^2}\sum_{\text{\scriptsize $d$
square-free}}\frac{\mu(d)}{d^{2}}\sum_{\substack{(\nu_p)_{p\mid d}\\|\mathcal
Q_p|\le\nu_p\le|\mathcal Q_p|+|\mathcal P_p|}}\prod_{p\mid d}\nu_p\sum_{\substack{x\in B_R(0)\cap
\mathbb Z^2\\\forall p\mid d: |(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p}}1.\end{eqnarray*}
By~\cite[Prop.~1]{BMP} and Proposition~\ref{crt}
with $\mathbb Z^2$ replaced by the lattices $p\mathbb Z^2$, where
$p\mid d$, one obtains the estimate
$$
\left(\frac{\pi R^2}{d^{2}}+
O\left(\left(\frac{R^2}{d^{2}}\right)^{1-1/2}\right)+O(1)\right)\prod_{p\mid
d}\big|\{x_p\in (\mathbb Z^2)_p\,:\,|(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p\}\big|
$$
for the inner sum. Substituting this in the above expression and
letting $R$ tend to infinity, one obtains
\begin{eqnarray*}
&&\hspace{-2em}
\sum_{\text{\scriptsize $d$
square-free}}\frac{\mu(d)}{d^{4}}\sum_{\substack{(\nu_p)_{p\mid d}\\|\mathcal
Q_p|\le\nu_p\le|\mathcal Q_p|+|\mathcal P_p|}}\prod_{p\mid d}\left(\nu_p\,\big|\{x_p\in (\mathbb Z^2)_p\,:\,|(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p\}\big|\right)\\
&=&\sum_{\text{\scriptsize $d$
square-free}}\frac{\mu(d)}{d^{4}}\sum_{\substack{(\nu_p)_{p\mid
d}\\0\le\nu_p\le|\mathcal P_p|}}\prod_{p\mid
d}\left((\nu_p+|\mathcal Q_p|)\,\big|\{x_p\in (\mathbb Z^2)_p\,:\,|(\mathcal Q\cup x+\mathcal
P)_p|=\nu_p+|\mathcal Q_p|\}\big|\right)
\end{eqnarray*}
and the inner product can be rewritten as
\begin{eqnarray*}
&&\hspace{-2em}
\prod_{p\mid
d}\left((\nu_p+|\mathcal Q_p|)\,\big|\{x_p\in (\mathbb Z^2)_p\,:\,x_p\in \mathcal Q_{S,\mathcal P}^p\mbox{ for
$S\subset\mathcal P_p$ with $|S|=|\mathcal P_p|-\nu_p$}\}\big|\right)
\\
&=&
\prod_{p\mid
d}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big).
\end{eqnarray*}
Thus, in order to prove the claim, it suffices to show, for square-free
$d$, the identity
$$
\sum_{\substack{(\nu_p)_{p\mid
d}\\0\le\nu_p\le|\mathcal P_p|}}\prod_{p\mid
d}\Big((\nu_p+|\mathcal Q_p|)\sum_{\substack{S\subset \mathcal
P_p\\|S|=|\mathcal P_p|-\nu_p}}q_{S,\mathcal P}^p\Big)\,=\,\prod_{p\mid d}\left(p^{2}|\mathcal P_p|+ p^{2}|\mathcal
Q_p|-|\mathcal P_p| |\mathcal Q_p|\right),
$$
which is just the content of Lemma~\ref{lambda2}.
\end{proof}
\renewcommand{\theequation}{B\arabic{equation}}
\setcounter{equation}{0}
\section{Isomorphism between $(\mathbb X_{V},\mathbb Z^2,\nu)$ and $(\prod_p (\mathbb Z^2)_p,\mathbb Z^2,\mu)$}\label{appb}
Let $\mathbb A_1$ be the Borel subset of $\mathbb X_V=\mathbb A$ consisting
of the elements $X\in\mathbb X_V$ that satisfy
$$
|X_p|=p^2-1
$$
for any prime $p$, i.e.\ $X$ misses exactly one coset of $p\mathbb
Z^2$ in $\mathbb Z^2$. There is a natural Borel map
$$
\theta\!:\,\mathbb A_1\rightarrow \mathbb T,
$$
given by
$X\mapsto ([y_p]_p)$, where $[y_p]_p$ is uniquely determined by $[y_p]_p\not\in
X_p$. Note that $\theta$ fails to be continuous or injective. Clearly, $\mathbb A_1$ is $\mathbb Z^2$-invariant and
$\theta$ intertwines the $\mathbb Z^2$-actions. Note also that, for
$X\in\mathbb A_1$, one clearly has
\begin{equation}\label{phitheta}
X\subset\varphi(\theta(X)).
\end{equation}
\begin{lemma}\label{va1}
One has\/ $V\in\mathbb A_1$.
\end{lemma}
\begin{proof}
Fix a prime number $p$ and choose a set $A$ of $p^2-1$ elements of $\mathbb
Z^2$ such that $|A_p|=p^2-1$. By the Chinese Remainder
Theorem (Prop.\ \ref{crt}), we may assume that $A_{p'}=\{[(0,0)]_{p'}\}$ for the
finitely many primes $p'<p$. Then, $A\in\mathbb A$ and, by
Theorem~\ref{charachull}, there is a translation $t\in \mathbb Z^2$ such
that $t+A\subset V$. Since
$$
p^2-1= |(t+A)_p|\le |V_p|\le p^2-1,
$$
the assertion follows.
\end{proof}
\begin{lemma}
One has\/ $\nu(\mathbb A_1)=1$.
\end{lemma}
\begin{proof}
The ergodicity of the full measure
$\nu$ (Thms.~\ref{freq} and~\ref{c2}(b)) implies that $\nu$-almost
every $X\in\mathbb X_V$ has a dense $\mathbb Z^2$-orbit;
compare~\cite[Thm.~1.7]{Walters}. It follows from Lemma~\ref{va1} that
$$
\{X\in\mathbb X_V\,|\,X \mbox{ has a dense $\mathbb Z^2$-orbit}\}\subset
\mathbb A_1.
$$
This inclusion is due to the fact that, if $X$ has a dense
orbit, then in particular $V\in\mathbb A_1$ is
an element of the orbit closure of $X$. Since, for any prime $p$, representatives of the
$p^2-1$ different elements of $V_p$ in $V$ can be chosen within a finite
distance from the origin, it is clear from the definition of the
topology on $\mathbb X_V$ that there is a translation $t\in\mathbb
Z^2$ such that $t+X$ contains these representatives. Thus
$$p^2-1\ge|X_p|=|(t+X)_p|\ge|V_p|=p^2-1$$ and the assertion follows.
\end{proof}
Hence the push-forward measure of $\nu$ to $\mathbb T$ by
the map $\theta$ is a $\mathbb
Z^2$-invariant probability measure and thus is the normalised Haar
measure $\mu$. Set
$$
\mathbb T_1:=\varphi^{-1}(\mathbb A_1),
$$
which is a $\mathbb Z^2$-invariant Borel
set with $\varphi(\mathbb T_1)\subset \mathbb A_1$. In
particular, this shows that $\theta\circ\varphi|_{\mathbb
T_1}=\operatorname{id}_{\mathbb T_1}$ and thus the restriction of
$\varphi$ to $\mathbb T_1$ and the restriction of $\theta$ to
$\varphi(\mathbb T_1)$ are
injective.
\begin{lemma}\label{a1t1}
One has\/ $\theta(\mathbb
A_1)=\mathbb T_1$.
\end{lemma}
\begin{proof}
It suffices to prove the inclusion $\theta(\mathbb
A_1)\subset\mathbb T_1$, since this immediately yields the assertion due to $\mathbb T_1=\theta(\varphi(\mathbb
T_1))\subset \theta(\mathbb A_1)$. So let us assume the existence of an
$X\in\mathbb A_1$ with $\theta(X)\not\in \mathbb T_1$, i.e.\
$\varphi(\theta(X))\not\in\mathbb
A_1$. Then there is a prime number $p$ such that
$$|(\varphi(\theta(X)))_p|<p^2-1.$$ Using~\eqref{phitheta},
this implies that also $|X_p|<p^2-1$, a contradiction.
\end{proof}
In particular, this shows that $\theta^{-1}(\mathbb T_1)=\mathbb
A_1$ and thus
$$
\mu(\mathbb T_1)=\nu(\mathbb A_1)=1.
$$
It follows that $\theta$ is
a factor map from $(\mathbb X_{V},\mathbb Z^2,\nu)$ to $(\prod_p
(\mathbb Z^2)_p,\mathbb Z^2,\mu)$.
In order to see that $\theta$ is in fact an isomorphism, let us first
note that the Borel map $\varphi\!:\,\mathbb T\rightarrow \mathbb
X_V$ is measure-preserving since, for any $\rho$-patch
$\mathcal P$, one has
\begin{eqnarray*}
\mu(\varphi^{-1}(B_\mathcal P)) &=&\mu\big(\{([y_p]_p)\in\mathbb
T\,|\,[y_p]_p\not\in\mathcal P_p\mbox{ for all } p
\}\big)\\
&=&\prod_p\left(\frac{p^2-|\mathcal
P_p|}{p^2}\right)\\
&=&\nu(B_\mathcal P)
\end{eqnarray*}
by Corollary~\ref{nub}. Next, consider the
subset $\mathbb A_1^*$ of elements $X\in\mathbb A_1$ that are maximal
elements of $\mathbb A$ with respect to inclusion, i.e.
$$
\forall\, Y\in\mathbb A:\,(X\subset Y\Rightarrow
X=Y).
$$
Clearly, $V$ and
every translate of $V$ are elements of $\mathbb A_1^*$; see
Lemma~\ref{va1}. Using Lemma~\ref{a1t1}, one can
verify that $\mathbb
A_1^*$ contains precisely the elements $X\in\mathbb A_1$ with
\begin{equation}\label{phitheta2}
X=\varphi(\theta(X)).
\end{equation}
Employing~\eqref{phitheta2}, one further
verifies that $$\mathbb
A_1^*=\mathbb A_1\cap \varphi(\mathbb T).$$ Since $\varphi(\mathbb
T)$ can be seen to be a Borel set,
$\mathbb A_1^*$ is thus a Borel
set with measure
$$
\nu(\mathbb A_1^*)=\nu(\mathbb A_1\cap \varphi(\mathbb T))=\nu(\mathbb A_1)+\nu(\varphi(\mathbb T))-\nu(\mathbb A_1\cup \varphi(\mathbb T))=1,
$$
where the last equality follows from $\nu(\mathbb A_1)=1$, from $\nu(\varphi(\mathbb T))=\mu\big(\varphi^{-1}(\varphi(\mathbb T))\big)=\mu(\mathbb T)=1$ (the map $\varphi$ being measure-preserving), and from $\nu$ being a probability measure.
Setting
$$
\mathbb T_1^*:=\varphi^{-1}(\mathbb A_1^*),
$$
the restrictions $\varphi\!:\,\mathbb T_1^*\rightarrow \mathbb A_1^*$ and
$\theta\!:\,\mathbb A_1^*\rightarrow\mathbb T_1^*$ are well-defined
($X\in\mathbb A_1^*\Rightarrow \varphi(\theta(X))=X\in\mathbb A_1^*$)
and can now be
shown to be bijective and
inverses to each other. Hence $\theta$ and
$\varphi$ are isomorphisms.
\end{document} |
\begin{document}
\title{Randomised Buffer Management with Bounded Delay against Adaptive Adversary}
\section{Introduction}
We study the Buffer Management with Bounded Delay problem, introduced by Kesselman~et~al.~\cite{DBLP:journals/siamcomp/KesselmanLMPSS04},
or, in the standard scheduling terminology, the problem of online scheduling of unit jobs to maximise weighted throughput. The adaptive-online
adversary model for this problem has recently been studied by Bie{\'n}kowski~et~al.~\cite{DBLP:conf/waoa/BienkowskiCJ08}, who proved a lower
bound of \(\frac{4}{3}\) on the competitive ratio and provided a matching upper bound for \(2\)-bounded sequences. In particular, the authors
of~\cite{DBLP:conf/waoa/BienkowskiCJ08} claim that the algorithm $\mathcal{R}MIX$~\cite{DBLP:journals/jda/ChinCFJST06} is \(\frac{\mathrm{e}}{\mathrm{e}-1}\)-competitive
against an adaptive-online adversary. However, the original proof of Chin~et~al.~\cite{DBLP:journals/jda/ChinCFJST06} holds only in the oblivious
adversary model. The reason is as follows. First, the potential function used in the proof depends on the adversary's future schedule, and second,
it assumes that the adversary follows the earliest-deadline-first policy. Neither assumption is valid in the adaptive-online adversary model,
as the whole schedule of such an adversary depends on the random choices of the algorithm. We give an alternative proof that $\mathcal{R}MIX$ indeed is
\(\frac{\mathrm{e}}{\mathrm{e}-1}\)-competitive against an adaptive-online adversary.
A similar claim about $\mathcal{R}MIX$ was made in another paper by Bie{\'n}kowski~et~al.~\cite{DBLP:conf/soda/BienkowskiCDHJJS09} studying a slightly
more general problem. There it is assumed that the algorithm does not know the exact deadlines of the packets, and instead knows only the order of their
expirations. However, any prefix of the deadline-ordered sequence of packets can expire in any step. The new proof that we provide holds even
in this more general model, as both the algorithm and its analysis rely only on the relative order of packets' deadlines.
\section{$\mathcal{R}MIX$ and its new analysis}
The algorithm $\mathcal{R}MIX$ works as follows.
In each step, let $h$ be the heaviest pending job.
Select a real $x \in [-1,0]$ uniformly at random. Transmit $f$, the earliest-deadline
pending packet with $w_f \geq \mathrm{e}^x \cdot w_h$.
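To illustrate the transmission rule on a made-up instance: suppose that only two packets are pending, a packet $a$ of weight $w_a=1$ with the earlier deadline and the heaviest packet $h$ with $w_h=2$. Then $f=a$ precisely when $\mathrm{e}^x\cdot 2\leq 1$, i.e.\ when $x\leq-\ln 2$, which happens with probability $1-\ln 2\approx 0.31$; otherwise $f=h$. The expected weight transmitted in such a step is thus $(1-\ln 2)\cdot 1+\ln 2\cdot 2=1+\ln 2$.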
We write $a \lhd b$ ($a \unlhd b$) to denote that the deadline of packet $a$ is earlier
(not later) than the deadline of packet $b$. This is consistent with the convention
of~\cite{DBLP:conf/soda/BienkowskiCDHJJS09} for the more general problem studied therein.
\begin{theorem}
$\mathcal{R}MIX$ is $\mathrm{e} / (\mathrm{e}-1)$-competitive against an adaptive-online adversary.
\end{theorem}
\begin{proof}
We use the paradigm of modifying the adversary's buffer used in the paper of Li et al.~\cite{DBLP:conf/soda/LiSS05}.
Namely, we maintain the invariant that at the beginning of each time step $\mathcal{R}MIX$ and the adversary $\textsc{Adv}$ have the same buffers.
Both $\mathcal{R}MIX$ and $\textsc{Adv}$ then transmit a packet. If, after doing so, the contents of their buffers become
different, we modify the adversary's buffer to make it identical with that of $\mathcal{R}MIX$.
To do so, we may have to let the adversary transmit another packet and keep the one originally transmitted in the buffer,
or upgrade one of the packets in its buffer by increasing its weight and deadline. We show that in each step
the expected gain of $\mathcal{R}MIX$ is at least $\frac{\mathrm{e}-1}{\mathrm{e}}$ times the expected \emph{amortized gain} of the adversary,
denoted $\textsc{Adv}'$. The latter is defined as the sum of weights of the packets that $\textsc{Adv}$ eventually transmitted in the step.
Both expected values are taken over possible random choices of $\mathcal{R}MIX$.
First, we compute the expected gain of $\mathcal{R}MIX$ in a single step.
\[
\mathbf{E}[\mathcal{R}MIX] = \mathbf{E}[w_f] = \int_{-1}^0 w_f \; dx
\enspace.
\]
Assume now that $\textsc{Adv}$ transmits a packet $j$.
Without loss of generality, we may assume that for each packet $k$ from the buffer, either $w_j \geq w_k$ or
$j \unlhd k$. We call this the \emph{greediness property}.
We consider two cases.
\begin{enumerate}
\item $f \lhd j$. By the greediness property, $w_j \geq w_f$. After both $\textsc{Adv}$ and $\mathcal{R}MIX$ transmit their
packets, we replace $f$ in the buffer of $\textsc{Adv}$ by $j$.
\item $j \unlhd f$. After both $\textsc{Adv}$ and $\mathcal{R}MIX$ transmit their packets, we let $\textsc{Adv}$ transmit additionally
$f$ in this round and we reinsert $j$ into its buffer.
\end{enumerate}
Therefore the amortized gain of $\textsc{Adv}$ is $w_j$ and additionally $w_f$ if $j \unlhd f$.
By the definition of the algorithm, $j \unlhd f$ only if $w_f \geq w_j$.
Let $y = \ln (w_j/w_h)$. Then,
\[
\mathbf{E}[\textsc{Adv}'] = w_j + \mathbf{E}[w_f | w_f \geq w_j] =
w_j + \int_{y}^0 w_f \; dx
\enspace.
\]
Finally, we compare the gains, obtaining
\begin{align*}
\frac{\mathbf{E}[\mathcal{R}MIX]}{\mathbf{E}[\textsc{Adv}']} \;= &\;
\frac{\int_{-1}^y w_f \; dx + \int_y^0 w_f \; dx}{w_j + \int_{y}^0 w_f \; dx} \geq
\frac{\int_{-1}^y \mathrm{e}^x w_h \; dx + \int_y^0 \mathrm{e}^x w_h \; dx}{w_j + \int_{y}^0 \mathrm{e}^x w_h \; dx} \\
= &\; \frac{\int_{-1}^0 \mathrm{e}^x w_h \; dx}{w_j + \int_{y}^0 \mathrm{e}^x w_h \; dx}
= \frac{w_h \cdot (1-1/\mathrm{e})}{w_j + w_h \cdot (1-w_j/w_h)} \\
= &\; 1 - 1/\mathrm{e}
\enspace,
\end{align*}
Here, the inequality uses $w_f \geq \mathrm{e}^x w_h$ to bound the numerator term over $[-1,y]$ from below by $C := \int_{-1}^y \mathrm{e}^x w_h \; dx = w_j - w_h/\mathrm{e}$, together with the fact that $t \mapsto (C+t)/(w_j+t)$ is non-decreasing because $C \leq w_j$, applied with $t=\int_{y}^0 w_f \; dx \geq \int_{y}^0 \mathrm{e}^x w_h \; dx$. This concludes the proof.
\qed
\end{proof}
\end{document}
\begin{document}
\begin{abstract}
We consider the defocusing energy-critical nonlinear Schr\"odinger equation in the exterior of a smooth compact strictly convex obstacle in three dimensions. For the initial-value problem with Dirichlet boundary condition we prove global well-posedness and scattering for all initial data in the energy space.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{S:Introduction}
We consider the defocusing energy-critical NLS in the exterior domain $\Omega$ of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$
with Dirichlet boundary conditions:
\begin{align}\label{nls}
\begin{cases}
i u_t+\Delta u=|u|^4 u, \\
u(0,x)=u_0(x),\\
u(t,x)|_{x\in \partial\Omega}=0.
\end{cases}
\end{align}
Here $u:{\mathbb{R}}\times\Omega\to{\mathbb{C}}$ and the initial data $u_0(x)$ will only be required to belong to the energy space,
which we will describe shortly.
The proper interpretation of the \emph{linear} Schr\"odinger equation with such boundary conditions was an early difficulty in mathematical quantum mechanics, but is now well understood. Let us first whisk through these matters very quickly; see \cite{Kato:pert,RS1,RS2} for further information.
We write $-\Delta_\Omega$ for the Dirichlet Laplacian on $\Omega$. This is the unique self-adjoint operator acting on $L^2(\Omega)$ associated with the closed quadratic form
$$
Q: H^1_0(\Omega) \to [0,\infty) \qtq{via} Q(f):=\int_\Omega \overline{\nabla f(x)} \cdot \nabla f(x) \,dx.
$$
The operator $-\Delta_\Omega$ is unbounded and positive semi-definite. All functions of this operator will be interpreted via the Hilbert-space functional calculus. In particular, $e^{it\Delta_\Omega}$ is unitary and provides the fundamental solution to the linear Schr\"odinger equation $i u_t+\Delta_\Omega u=0$, even when the naive notion of the boundary condition $u(t,x)|_{x\in\partial\Omega}=0$ no longer makes sense.
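Concretely, writing $\{E_\lambda\}_{\lambda\geq0}$ for the spectral resolution of the self-adjoint operator $-\Delta_\Omega$, one has
$$
e^{it\Delta_\Omega}f=\int_{[0,\infty)}e^{-it\lambda}\,dE_\lambda f \qtq{for all} f\in L^2(\Omega),
$$
and all other functions of $-\Delta_\Omega$ appearing below are defined in the same manner.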
We now define the natural family of homogeneous Sobolev spaces associated to the operator $-\Delta_\Omega$ via the functional calculus:
\begin{defn}[Sobolev spaces]
For $s\geq 0$ and $1<p<\infty$, let $\dot H_D^{s,p}(\Omega)$ denote the completion of $C^\infty_c(\Omega)$ with respect to the norm
$$
\| f \|_{\dot H_D^{s,p}(\Omega)} := \| (-\Delta_\Omega)^{s/2} f \|_{L^p(\Omega)}.
$$
Omission of the index $p$ indicates $p=2$.
\end{defn}
For $p=2$ and $s=1$, this coincides exactly with the definition of $\dot H^1_0(\Omega)$. For other values of parameters, the definition of $\dot H^{s,p}_D(\Omega)$ deviates quite sharply from the classical definitions of Sobolev spaces on domains, such as $\dot H^{s,p}(\Omega)$, $\dot H^{s,p}_0(\Omega)$, and the Lions--Magenes spaces $\dot H^{s,p}_{00}(\Omega)$. Recall that all of these spaces are defined via the Laplacian in the whole space and its fractional powers.
For bounded domains ${\mathcal O}\subseteq{\mathbb{R}}^d$, the relation of $\dot H^{s,p}_D({\mathcal O})$ to the classical Sobolev spaces has been thoroughly investigated. See, for instance, the review \cite{Seeley:ICM} and the references therein. The case of exterior domains is much less understood; moreover, new subtleties appear. For example, for bounded domains $\dot H^{1,p}_D({\mathcal O})$ is equivalent to the completion of $C^\infty_c({\mathcal O})$ in the space $\dot H^{1,p}({\mathbb{R}}^d)$. However, this is no longer true in the case of exterior domains; indeed, it was observed in \cite{LSZ} that this equivalence fails for $p>3$ in the exterior of the unit ball in ${\mathbb{R}}^3$, even in the case of spherically symmetric functions.
As the reader will quickly appreciate, little can be said about the problem \eqref{nls} without some fairly thorough understanding of the mapping properties of functions of $-\Delta_\Omega$ and of the Sobolev spaces $\dot H_D^{s,p}(\Omega)$, in particular. The analogue of the Mikhlin multiplier theorem is known for this operator and it is possible to develop a Littlewood--Paley theory on this basis; see \cite{IvanPlanch:square,KVZ:HA} for further discussion. To obtain nonlinear estimates, such as product and chain rules in $\dot H_D^{s,p}(\Omega)$, we use the main result of \cite{KVZ:HA}, which we record as Theorem~\ref{T:Sob equiv} below. By proving an equivalence between $\dot H_D^{s,p}(\Omega)$ and the classical Sobolev spaces (for a restricted range of exponents), Theorem~\ref{T:Sob equiv} allows us to import such nonlinear estimates directly from the Euclidean setting.
After this slight detour, let us return to the question of the proper interpretation of a solution to \eqref{nls} and the energy space. For the linear Schr\"odinger equation with Dirichlet boundary conditions, the energy space is the domain of the quadratic form associated to the Dirichlet Laplacian, namely, $\dot H^1_D(\Omega)$. For the nonlinear problem \eqref{nls}, the energy space is again $\dot H^1_D(\Omega)$ and the energy functional is given by
\begin{align}\label{energy}
E(u(t)):=\int_{\Omega} \tfrac12 |\nabla u(t,x)|^2 + \tfrac16 |u(t,x)|^6\, dx.
\end{align}
Note that the second summand here, which is known as the potential energy, does not alter the energy space by virtue of Sobolev embedding, more precisely, the embedding $\dot H^1_D(\Omega)\hookrightarrow L^6(\Omega)$.
The PDE \eqref{nls} is the natural Hamiltonian flow associated with the energy functional \eqref{energy}. Correspondingly, one would expect this energy to be conserved by the flow. This is indeed the case, provided we restrict ourselves to a proper notion of solution.
\begin{defn}[Solution]\label{D:solution}
Let $I$ be a time interval containing the origin. A function $u: I \times \Omega \to {\mathbb{C}}$ is called a (strong) \emph{solution}
to \eqref{nls} if it lies in the class $C_t(I'; \dot H^1_D(\Omega)) \cap L_t^{5}L_x^{30}(I'\times\Omega)$ for every compact subinterval $I'\subseteq I$
and it satisfies the Duhamel formula
\begin{equation}\label{E:duhamel}
u(t) = e^{it\Delta_\Omega} u_0 - i \int_0^t e^{i(t-s)\Delta_\Omega} |u(s)|^4 u(s)\, ds,
\end{equation}
for all $t \in I$.
\end{defn}
For brevity we will sometimes refer to such functions as solutions to $\text{NLS}_\Omega$. It is not difficult to verify that strong solutions conserve energy.
We now have sufficient preliminaries to state the main result of this paper.
\begin{thm}\label{T:main}
Let $u_0\in \dot H^1_D(\Omega)$. Then there exists a unique strong solution $u$ to \eqref{nls} which is global in time and satisfies
\begin{align}\label{E:T:main}
\iint_{{\mathbb{R}}\times\Omega} |u(t,x)|^{10} \,dx\, dt\le C(E(u)).
\end{align}
Moreover, $u$ scatters in both time directions, that is, there exist asymptotic states $u_\pm\in\dot H^1_D(\Omega)$ such that
\begin{align*}
\|u(t) - e^{it\Delta_\Omega}u_\pm\|_{\dot H^1_D(\Omega)}\to 0 \qtq{as} t\to\pm\infty.
\end{align*}
\end{thm}
There is much to be said in order to give a proper context for this result. In particular, we would like to discuss the defocusing NLS in ${\mathbb{R}}^3$ with general power nonlinearity:
\begin{equation}\label{GNLS}
i u_t+\Delta u=|u|^p u.
\end{equation}
A key indicator for the local behaviour of solutions to this equation is the scaling symmetry
\begin{align}\label{GNLSrescale}
u(t,x)\mapsto u^\lambda(t,x):=\lambda^{\frac2p} u(\lambda^2 t, \lambda x) \qtq{for any} \lambda>0,
\end{align}
which leaves the class of solutions to \eqref{GNLS} invariant. Notice that when $p=4$ this rescaling
also preserves the energy associated with \eqref{GNLS}, namely,
$$
E(u(t)) = \int_{{\mathbb{R}}^3} \tfrac12|\nabla u(t,x)|^2+\tfrac1{p+2}|u(t,x)|^{p+2}\,dx.
$$
For this reason, the quintic NLS in three spatial dimensions is termed energy-critical. The energy is the \emph{highest regularity} conservation law that is known for NLS; this has major consequences for the local and global theories for this equation when $p\geq 4$. When $p>4$, the equation is ill-posed in the energy space; see \cite{CCT}. For $p=4$, which is the focus of this paper, well-posedness in the energy space is delicate, as will be discussed below.
For $0\leq p<4$, the equation is called energy-subcritical. Indeed, the energy strongly suppresses the short-scale behaviour of solutions, as can be read off from its transformation under the rescaling \eqref{GNLSrescale}:
$$
E(u^\lambda) = \lambda^{\frac4p - 1} E(u).
$$
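Indeed, substituting $y=\lambda x$ gives
$$
\int_{{\mathbb{R}}^3}|\nabla u^\lambda(t,x)|^2\,dx=\lambda^{\frac4p+2-3}\int_{{\mathbb{R}}^3}|\nabla u(\lambda^2 t,y)|^2\,dy=\lambda^{\frac4p-1}\int_{{\mathbb{R}}^3}|\nabla u(\lambda^2 t,y)|^2\,dy,
$$
while the potential term picks up the factor $\lambda^{\frac{2(p+2)}{p}-3}=\lambda^{\frac4p-1}$ as well; the exponent vanishes exactly when $p=4$.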
Accordingly, it is not very difficult to prove local well-posedness for initial data in $H^1({\mathbb{R}}^3)$. This follows by contraction mapping in Strichartz spaces and yields a local existence time that depends on the $H^1_x$ norm of the initial data. Using the conservation of mass (= $L^2_x$-norm) and energy, global well-posedness follows immediately by iteration. Notice that this procedure gives almost no information about the long-time behaviour of the solution.
The argument just described does not extend to $p=4$. In this case, the local existence time cannot depend solely on the energy, which is a scale-invariant quantity. Nevertheless, a different form of local well-posedness was proved by Cazenave and Weissler \cite{cw0,cw1}, in which the local existence time depends upon the \emph{profile} of the initial data, rather than solely on its norm. Therefore, the iteration procedure described above cannot be used to deduce global existence. In fact, as the energy is the highest regularity conservation law that is known, global existence is non-trivial \emph{even} for Schwartz initial data. In \cite{cw0, cw1}, the time of existence is shown to be positive via the monotone convergence theorem; on the basis of subsequent developments, we now understand that this time is determined by the spread of energy on the Fourier side. In the case of the \emph{focusing} equation, the existence time obtained in these arguments is not fictitious; there are solutions with a fixed energy that blow up arbitrarily quickly.
The Cazenave--Weissler arguments also yield global well-posedness and scattering for initial data with \emph{small} energy, for both the focusing and defocusing equations. Indeed, in this regime the nonlinearity can be treated perturbatively.
The first key breakthrough for the treatment of the large-data energy-critical NLS was the paper \cite{borg:scatter}, which proved global well-posedness and scattering for spherically symmetric solutions in ${\mathbb{R}}^3$ and ${\mathbb{R}}^4$. This paper introduced the induction on energy argument, which has subsequently become extremely influential in the treatment of dispersive equations at the critical regularity. We will also be using this argument, so we postpone a further description until later. The induction on energy method was further advanced by Colliander, Keel, Staffilani, Takaoka, and Tao in their proof \cite{CKSTT:gwp} of global well-posedness and scattering for the quintic NLS in ${\mathbb{R}}^3$, for all initial data in the energy space. This result, which is the direct analogue of Theorem~\ref{T:main} for NLS in the whole space, will play a key role in the analysis of this paper. Let us state it explicitly:
\begin{thm}[\cite{CKSTT:gwp}]\label{T:gopher}
Let $u_0\in \dot H^1({\mathbb{R}}^3)$. Then there exists a unique strong solution $u$ to the quintic NLS in ${\mathbb{R}}^3$ which is global in time and satisfies
\begin{align*}
\iint_{{\mathbb{R}}\times{\mathbb{R}}^3} |u(t,x)|^{10} \,dx\, dt\le C(E(u)).
\end{align*}
Moreover, $u$ scatters in both time directions, that is, there exist asymptotic states $u_\pm\in\dot H^1({\mathbb{R}}^3)$ such that
\begin{align*}
\|u(t) - e^{it\Delta_{{\mathbb{R}}^3}}u_\pm\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} t\to\pm\infty.
\end{align*}
\end{thm}
We will also be employing the induction on energy argument, but in the style pioneered by Kenig and Merle \cite{KenigMerle}. The main result of this paper of Kenig and Merle was the proof of global well-posedness and scattering for the focusing energy-critical equation and data smaller than the soliton threshold. This result was for spherically symmetric data and dimensions $3\leq d \leq 5$; currently, the analogous result for general data is only known in dimensions five and higher \cite{Berbec}. The proof of Theorem~\ref{T:gopher} was revisited within this framework in \cite{KV:gopher}, which also incorporates innovations of Dodson \cite{Dodson:3+}.
Let us now turn our attention to the problem of NLS on exterior domains. This is a very popular and challenging family of problems. While we will discuss many contributions below, to get a proper sense of the effort expended in this direction one should also consult the references within the works we cite. In the Euclidean setting, the problem is invariant under space translations; this means that one may employ the full power of harmonic analytic tools. Indeed, much of the recent surge of progress in the analysis of dispersive equations is based on the incorporation of this powerful technology. Working on exterior domains breaks space translation invariance and, with it, many of the tools that one could rely on in the Euclidean setting. The companion paper \cite{KVZ:HA} allows us to transfer many basic harmonic analytic results from the Euclidean setting to that of exterior domains. Many more subtle results, particularly related to the long-time behaviour of the propagator, require a completely new analysis; we will discuss examples of this below.
Working on exterior domains also destroys the scaling symmetry. Due to the presence of a boundary, suitable scaling and space translations lead to the study of NLS in \emph{different} geometries. While equations with broken symmetries have been analyzed before, the boundary causes the geometric changes in this paper to be of a more severe nature than those treated previously. An additional new difficulty is that we must proceed without a dispersive estimate, which is currently unknown in this setting.
Before we delve into the difficulties of the energy-critical problem in exterior domains, let us first discuss the energy-subcritical case. The principal difficulty in this case has been to obtain Strichartz estimates (cf. Theorem~\ref{T:Strichartz}). The first results in this direction hold equally well in interior and exterior domains. There is a strong parallel between compact manifolds and interior domains, so we will also include some works focused on that case.
For both compact manifolds and bounded domains, one cannot expect estimates of the same form as for the Euclidean space. Finiteness of the volume means that there can be no long-time dispersion of wave packets; there is simply nowhere for them to disperse to. Indeed, in the case of the torus ${\mathbb{R}}^d/{\mathbb{Z}}^d$, solutions to the linear Schr\"odinger equation are periodic in time. Because of this, all Strichartz estimates must be local in time. Further, due to the existence of conjugate points for the geodesic flow, high frequency waves can reconcentrate; moreover, they can do so arbitrarily quickly. Correspondingly, Strichartz estimates in the finite domain/compact manifold setting lose derivatives relative to the Euclidean case. Nevertheless, the resulting Strichartz estimates are still strong enough to prove local (and so global) well-posedness, at least for a range of energy-subcritical nonlinearity exponents $p$. See the papers \cite{BFHM:Polygon,BSS:PAMS,BSS:schrodinger,borg:torus,BGT:compact} and references therein for further information.
For exterior domains, the obstructions just identified no longer apply, at least in the case of non-trapping obstacles (we do not wish to discuss resonator cavities, or similar geometries). Thus one may reasonably expect all Strichartz estimates to hold, just as in the Euclidean case. There are many positive results in this direction, as will be discussed below; however, the full answer remains unknown, even for the exterior of a convex obstacle (for which there are no conjugate points).
In the Euclidean case, the explicit form of the propagator guarantees the following dispersive estimate:
\begin{equation}\label{E:EuclidDisp}
\| e^{it\Delta_{{\mathbb{R}}^d}} f \|_{L^\infty({\mathbb{R}}^d)} \lesssim |t|^{-\frac d2} \| f\|_{L^1({\mathbb{R}}^d)}, \qtq{for all} t\neq0.
\end{equation}
This and the unitarity of the propagator on $L^2({\mathbb{R}}^d)$ are all that is required to obtain all known Strichartz estimates. For the basic estimates the argument is elementary; see, for example, \cite{gv:strichartz}. The endpoint cases and exotic retarded estimates are more delicate; see \cite{Foschi,KeelTao, Vilela}.
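To sketch the elementary argument (a schematic outline only; the cited references contain the complete proofs): interpolating \eqref{E:EuclidDisp} with the conservation of the $L^2$ norm yields the fixed-time decay estimates
$$
\| e^{it\Delta_{{\mathbb{R}}^d}} f \|_{L^r({\mathbb{R}}^d)} \lesssim |t|^{-d(\frac12-\frac1r)} \| f\|_{L^{r'}({\mathbb{R}}^d)} \qtq{for} 2\le r\le\infty,
$$
and a $TT^*$ argument combined with the Hardy--Littlewood--Sobolev inequality in the time variable then gives $\| e^{it\Delta_{{\mathbb{R}}^d}} f \|_{L^q_t L^r_x({\mathbb{R}}\times{\mathbb{R}}^d)} \lesssim \|f\|_{L^2({\mathbb{R}}^d)}$ for non-endpoint admissible pairs, that is, $\frac2q+\frac dr=\frac d2$ with $q>2$.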
It is currently unknown whether or not the dispersive estimate holds outside a convex obstacle, indeed, even for the exterior of a sphere. The only positive result in this direction belongs to Li, Smith, and Zhang, \cite{LSZ}, who prove the dispersive estimate for spherically symmetric functions in the exterior of a sphere in ${\mathbb{R}}^3$. Relying on this dispersive estimate and employing an argument of Bourgain \cite{borg:scatter} and Tao \cite{tao:radial}, these authors proved Theorem~\ref{T:main} for spherically symmetric initial data when $\Omega$ is the exterior of a sphere in ${\mathbb{R}}^3$.
In due course, we will explain how the lack of a dispersive estimate outside convex obstacles is one of the major hurdles we needed to overcome in order to prove Theorem~\ref{T:main}. Note that the dispersive estimate will not hold outside a generic non-trapping obstacle, since concave portions of the boundary can act as mirrors and refocus wave packets.
Even though the question of dispersive estimates outside convex obstacles is open, global in time Strichartz estimates are known to hold. Indeed, in \cite{Ivanovici:Strichartz}, Ivanovici proves all classical Strichartz estimates except the endpoint cases. Her result will be crucial in what follows and is reproduced below as Theorem~\ref{T:Strichartz}. We also draw the reader's attention to the related papers \cite{Anton08,BSS:schrodinger,BGT04,HassellTaoWunsch,PlanchVega,RobZuily,StaffTataru,Tataru:Strichartz}, as well as the references therein.
The key input for the proof of Strichartz estimates in exterior domains is the local smoothing estimate; one variant is given as Lemma~\ref{L:local smoothing} below. In the Euclidean setting, this result can be proved via harmonic analysis methods (cf. \cite{ConsSaut,Sjolin87,Vega88}). For the exterior of a convex obstacle, the usual approach is the method of positive commutators, which connects it to both Kato smoothing (cf. \cite[\S XIII.7]{RS4}) and the Morawetz identity; this is the argument used to prove Lemma~\ref{L:local smoothing} here. Local smoothing is also known to hold in the exterior of a non-trapping obstacle; see \cite{BGT04}.
The local smoothing estimate guarantees that wave packets only spend a bounded amount of time next to the obstacle. This fact together with the fact that Strichartz estimates hold in the whole space can be used to reduce the problem of proving Strichartz inequalities to the local behaviour near the obstacle, locally in time. Using this argument, Strichartz estimates have been proved for merely non-trapping obstacles; for further discussion see \cite{BSS:schrodinger, BGT04, IvanPlanch:IHP, PlanchVega, StaffTataru}.
While both local smoothing and Strichartz estimates guarantee that wave packets can only concentrate for a bounded amount of time, they do not guarantee that this period of time is one contiguous interval. In the context of a large-data nonlinear problem, this is a severe handicap when compared to the dispersive estimate: Once a wave packet begins to disperse, the nonlinear effects are reduced and the evolution is dominated by the linear part of the equation. If this evolution causes the wave packet to refocus, then nonlinear effects will become strong again. These nonlinear effects are very hard to control and one must fear the possibility that when the wave packet finally breaks up again we find ourselves back at the beginning of the scenario we have just been describing. Such an infinite loop is inconsistent with scattering and global spacetime bounds. In Section~\ref{S:Linear flow convergence} we will prove a new kind of convergence result that plays the role of a dispersive estimate in precluding such periodic behaviour.
The next order of business is to describe what direct information the existing Strichartz estimates give us toward the proof of Theorem~\ref{T:main}. This is how we shall begin the outline of the proof.
\subsection{Outline of the proof}
For small initial data, the nonlinearity can be treated perturbatively, provided one has the right linear estimates, of course! In this way, both \cite{Ivanovici:Strichartz} and \cite{BSS:schrodinger} use the Strichartz inequalities they prove to obtain small energy global well-posedness and scattering for $\text{NLS}_\Omega$. Actually, there is one additional difficulty that we have glossed over here, namely, estimating the derivative of the nonlinearity. Notice that in order to commute with the free propagator, the derivative in question must be the square root of the Dirichlet Laplacian (rather than simply the gradient). In \cite{BSS:schrodinger} an $L^4_t L^\infty_x$ Strichartz inequality is proved, which allows the authors to use the equivalence of $\dot H^1_0$ and $\dot H^1_D$. In \cite{IvanPlanch:square} a Littlewood--Paley theory is developed, which allows the use of Besov space arguments (cf. \cite{IvanPlanch:IHP}). Indeed, the paper \cite{IvanPlanch:IHP} of Ivanovici and Planchon goes further, proving small data global well-posedness in the exterior of non-trapping obstacles.
The main result of \cite{KVZ:HA}, which is repeated as Theorem~\ref{T:Sob equiv} below, allows us to transfer the existing local well-posedness arguments directly from the Euclidean case. Actually, a little care is required to ensure all exponents used lie within the regime where norms are equivalent; nevertheless, this can be done as documented in \cite{KVZ:HA}. Indeed, this paper shows that our problem enjoys a strong form of continuous dependence, known under the rubric `stability theory'; see Theorem~\ref{T:stability}. Colloquially, this says that every function that almost solves \eqref{nls} and has bounded spacetime norm lies very close to an actual solution to \eqref{nls}. This is an essential ingredient in any induction on energy argument.
All the results just discussed are perturbative, in particular, they are blind to the sign of the nonlinearity. As blowup can occur for the focusing problem, any large-data global theory must incorporate some deeply nonlinear ingredient which captures the dynamical effects of the sign of the nonlinearity. At present, the only candidates for this role are the identities of Morawetz/virial type and their multi-particle (or interaction) counterparts.
Historically, the Morawetz identity was first introduced for the linear wave equation and soon found application in proving energy decay in exterior domain problems and in the study of the nonlinear wave equation; see \cite{Morawetz75}. As noticed first by Struwe, this type of tool also provides the key non-concentration result to prove global well-posedness for the energy-critical wave equation in Euclidean spaces. See the book \cite{ShatahStruwe} for further discussion and complete references. More recently, this result (plus scattering) has been shown to hold outside convex obstacles \cite{SS10} and (without scattering) in interior domains \cite{BLP08}. In both instances, the Morawetz identity provides the crucial non-concentration result.
There is a significant difference between the Morawetz identities for the nonlinear wave equation and the nonlinear Schr\"odinger equation, which explains why the solution of the well-posedness problem for the energy-critical NLS did not follow closely on the heels of that for the wave equation: \emph{scaling}. In the wave equation case, the Morawetz identity has energy-critical scaling. This ensures that the right-hand side of the inequality can be controlled in terms of the energy alone; it also underscores why it can be used to guarantee non-concentration of solutions.
The basic Morawetz inequality for solutions $u$ to the defocusing quintic NLS in ${\mathbb{R}}^3$ (see \cite{LinStrauss}) reads as follows:
$$
\frac{d\ }{dt} \int_{{\mathbb{R}}^3} \frac{x}{|x|} \cdot 2 \Im\bigl\{ \bar u(t,x) \nabla u(t,x)\bigr\} \,dx \geq \int _{{\mathbb{R}}^3} \frac{8|u|^6}{3|x|}\,dx.
$$
The utility of this inequality is best seen by integrating both sides over some time interval~$I$; together with Cauchy--Schwarz, this leads directly to
\begin{equation}\label{GNLSmor}
\int_I \int _{{\mathbb{R}}^3} \frac{|u|^6}{|x|}\,dx \,dt \lesssim \| u \|_{L^\infty_t L^2_x (I\times{\mathbb{R}}^3)} \| \nabla u \|_{L^\infty_t L^2_x (I\times{\mathbb{R}}^3)}.
\end{equation}
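To spell out the intermediate step: the fundamental theorem of calculus bounds LHS\eqref{GNLSmor} by twice the supremum over $t\in I$ of the momentum-type boundary term, and this is controlled pointwise in time by Cauchy--Schwarz, since $|x/|x||=1$:
$$
\Bigl| \int_{{\mathbb{R}}^3} \frac{x}{|x|} \cdot 2 \Im\bigl\{ \bar u(t,x) \nabla u(t,x)\bigr\} \,dx \Bigr| \le 2\, \| u(t) \|_{L^2_x({\mathbb{R}}^3)}\, \| \nabla u(t) \|_{L^2_x({\mathbb{R}}^3)}.
$$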
Obviously the right-hand side cannot be controlled solely by the energy; indeed, the inequality has the scaling of $\dot H^{1/2}$. Nevertheless, the right-hand side can be controlled by the conservation of both mass and energy; this was one of the key ingredients in the proof of scattering for the inter-critical problem (i.e. $\frac43<p<4$) in \cite{GinibreVelo}. However, at both the mass-critical endpoint $p=\frac43$ and energy-critical endpoint $p=4$, solutions can undergo dramatic changes of scale without causing the mass or energy to diverge. In particular, by simply rescaling an energy-critical solution as in \eqref{GNLSrescale} one may make the mass as small as one wishes.
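Concretely, with the same normalization of the rescaling as assumed above, at the energy-critical exponent $p=4$ the mass transforms as
$$
M(u^\lambda) = \int_{{\mathbb{R}}^3} |u^\lambda(t,x)|^2\,dx = \lambda^{\frac4p-3} M(u) = \lambda^{-2} M(u),
$$
so sending $\lambda\to\infty$ leaves the energy invariant while making the mass as small as one wishes.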
Our comments so far have concentrated on RHS\eqref{GNLSmor}, but these concerns apply equally well to LHS\eqref{GNLSmor}. Ultimately, the Morawetz identity together with mass and energy conservation are each consistent with a solution that blows up by focusing \emph{part} of its energy at a point, even at the origin. A scenario where \emph{all} of the energy focuses at a single point would not be consistent with the conservation of mass.
The key innovation of Bourgain \cite{borg:scatter} was the induction on energy procedure, which allowed him to reduce the analysis of general solutions to $\text{NLS}_{{\mathbb{R}}^3}$ to those which have a clear intrinsic characteristic length scale (at least for the middle third of their evolution). This length scale is time dependent. In this paper we write $N(t)$ for the reciprocal of this length, which represents the characteristic frequency scale of the solution. The fact that the solution lives at a single scale precludes the scenario described in the previous paragraph. By using suitably truncated versions of the Morawetz identity (cf. Lemma~\ref{L:morawetz} below) and the mass conservation law, Bourgain succeeded in proving not only global well-posedness for the defocusing energy-critical NLS in ${\mathbb{R}}^3$, but also global $L^{10}_{t,x}$ spacetime bounds for the solution.
As noted earlier, the paper \cite{borg:scatter} treated the case of spherically symmetric solutions only. The general case was treated in \cite{CKSTT:gwp}, which also dramatically advanced the induction on energy method, including reducing treatment of the problem to the study of solutions that not only live at a single scale $1/N(t)$, but are even well localized in space around a single point $x(t)$. The dispersive estimate is needed to prove this strong form of localization. Another key ingredient in \cite{CKSTT:gwp} was the newly introduced interaction Morawetz identity; see \cite{CKSTT:interact}. As documented in \cite{CKSTT:gwp}, there are major hurdles to be overcome in frequency localizing this identity in the three dimensional setting. In particular, the double Duhamel trick is needed to handle one of the error terms. This relies \emph{crucially} on the dispersive estimate; thus, we are unable to employ the interaction Morawetz identity as a tool with which to tackle our Theorem~\ref{T:main}.
In four or more spatial dimensions, strong spatial localization is not needed to employ the interaction Morawetz identity. This was first observed in \cite{RV, thesis:art}. Building upon this, Dodson \cite{Dodson:obstacle} has shown how the interaction Morawetz identity can be applied to the energy-critical problem in the exterior of a convex obstacle in four dimensions. He relies solely on frequency localization; one of the key tools that makes this possible is the long-time Strichartz estimates developed by him in the mass-critical Euclidean setting \cite{Dodson:3+} and adapted to the energy-critical setting in \cite{Visan:IMRN}. For the three dimensional problem, these innovations do not suffice to obviate the need for a dispersive estimate, even in the Euclidean setting; see \cite{KV:gopher}.
The variant of the induction on energy technique that we will use in this paper was introduced by Kenig and Merle in \cite{KenigMerle}. This new approach has significantly streamlined the induction on energy paradigm; in particular, it has made it modular by completely separating the induction on energy portion from the rest of the argument. It has also sparked a rapid and fruitful development of the method, which has now been applied successfully to numerous diverse PDE problems, including wave maps and the Navier--Stokes system.
Before we can discuss the new difficulties associated with implementing the induction on energy method to prove Theorem~\ref{T:main}, we must first explain what it is. We will do so rather quickly; readers not already familiar with this technique may benefit from the introduction to the subject given in the lecture notes \cite{ClayNotes}. The argument is by contradiction.
Suppose Theorem~\ref{T:main} were to fail, which is to say that there is no function $C:[0,\infty)\to[0,\infty)$ so that \eqref{E:T:main} holds. Then there must be some sequence of solutions $u_n:I_n\times\Omega\to{\mathbb{C}}$ so that $E(u_n)$ is bounded, but $S_{I_n}(u_n)$ diverges. Here we introduce the notation
\begin{align*}
S_I(u):=\iint_{I\times\Omega}|u(t,x)|^{10} dx\, dt,
\end{align*}
which is known as the \emph{scattering size} of $u$ on the time interval $I$.
By passing to a subsequence, we may assume that $E(u_n)$ converges. Moreover, without loss of generality, we may assume that the limit $E_c$ is the smallest number that can arise as a limit of $E(u_n)$ for solutions with $S_{I_n}(u_n)$ diverging. This number is known as the \emph{critical energy}. It has the following equivalent interpretation: If
\begin{align*}
L(E):=\sup\{S_I(u) : \, u:I\times\Omega\to {\mathbb{C}}\mbox{ such that } E(u)\le E\},
\end{align*}
where the supremum is taken over all solutions $u$ to \eqref{nls} defined on some spacetime slab $I\times\Omega$ and having energy $E(u)\le E$, then
\begin{align}\label{E:induct hyp}
L(E)<\infty \qtq{for} E<E_c \quad \qtq{and} \quad L(E)=\infty \qtq{for} E\ge E_c.
\end{align}
(The fact that we can write $E\geq E_c$ here rather than merely $E>E_c$ relies on the stability result Theorem~\ref{T:stability}.) This plays the role of the inductive hypothesis; it says that Theorem~\ref{T:main} is true for energies less than $E_c$. The argument is called induction on energy precisely because this is then used (via an extensive argument) to show that $L(E_c)$ is finite and so obtain the sought-after contradiction.
Note that by the small-data theory mentioned earlier, we know that $E_c>0$. Indeed, in the small-data regime, one obtains very good quantitative bounds on $S_{\mathbb{R}}(u)$. As one might expect given the perturbative nature of the argument, the bounds are comparable to those for the linear flow; see \eqref{SbyE}.
One would like to pass to the limit of the sequence of solutions $u_n$ to exhibit a solution $u_\infty$ that has energy $E_c$ and infinite scattering size. Notice that by virtue of \eqref{E:induct hyp}, such a function would be a \emph{minimal energy blowup solution}. This is a point of departure of the Kenig--Merle approach from \cite{borg:scatter,CKSTT:gwp}, which worked with merely almost minimal almost blowup solutions, in essence, the sequence $u_n$.
Proving the existence of such a minimal energy blowup solution will be the key difficulty in this paper; even in the Euclidean setting it is highly non-trivial. In the Euclidean setting, existence was first proved by Keraani \cite{keraani-l2} for the (particularly difficult) mass-critical NLS; see also \cite{BegoutVargas,CarlesKeraani}. Existence of a minimal blowup solution for the Euclidean energy-critical problem was proved by Kenig--Merle \cite{KenigMerle} (see also \cite{BahouriGerard,keraani-h1} for some ingredients), who were also the first to realize the value of this result for well-posedness arguments.
Let us first describe how the construction of minimal blowup solutions proceeds in the Euclidean setting. We will then discuss the difficulties encountered on exterior domains and how we overcome these. As $\text{NLS}_{{\mathbb{R}}^3}$ has the \emph{non-compact} symmetries of rescaling and spacetime translations, we cannot expect any subsequence of the sequence $u_n$ of almost minimal almost blowup solutions to converge. This is a well-known dilemma in the calculus of variations and led to the development of \emph{concentration compactness}. In its original form, concentration compactness presents us with three possibilities: a subsequence converges after applying symmetry operations (the desired \emph{compactness} outcome); a subsequence splits into one or more bubbles (this is called \emph{dichotomy}); or the sequence is completely devoid of concentration (this is called \emph{vanishing}).
The vanishing scenario is easily precluded. If the solutions $u_n$ concentrate at no point in spacetime (at any scale), then we expect the nonlinear effects to be weak and so expect spacetime bounds to follow from perturbation theory and the Strichartz inequality (which provides spacetime bounds for linear solutions). As uniform spacetime bounds for the solutions $u_n$ would contradict how these were chosen in the first place, this rules out the vanishing scenario. Actually, this discussion is slightly too naive; one needs to show that failure to concentrate actually guarantees that the linear solution has small spacetime bounds, which then allows us to treat the nonlinearity perturbatively.
The tool that allows us to complete the argument just described is an inverse Strichartz inequality (cf. Proposition~\ref{P:inverse Strichartz}), which says that linear flows can only have non-trivial spacetime norm if they contain at least one bubble of concentration. Applying this result inductively to the functions $e^{it\Delta_{{\mathbb{R}}^3}}u_n(0)$, one finds all the bubbles of concentration in a subsequence of these linear solutions together with a remainder term. This is expressed in the form of a \emph{linear profile decomposition} (cf. Theorem~\ref{T:LPD}). Two regions of concentration are determined to be separate bubbles if their relative characteristic length scales diverge as $n\to\infty$, or if their spatial/temporal separation diverges relative to their characteristic scale; see~\eqref{E:LP5}.
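Schematically (the precise form of \eqref{E:LP5} may differ in detail; this is meant only to indicate the general shape), if the $j$-th bubble is described by a scale $\lambda^j_n$, a spatial centre $x^j_n$, and a temporal centre $t^j_n$, then two bubbles $j\neq k$ are regarded as separate when
$$
\frac{\lambda^j_n}{\lambda^k_n}+\frac{\lambda^k_n}{\lambda^j_n}
+\frac{|x^j_n-x^k_n|^2}{\lambda^j_n\lambda^k_n}
+\frac{\bigl|t^j_n(\lambda^j_n)^2-t^k_n(\lambda^k_n)^2\bigr|}{\lambda^j_n\lambda^k_n}\to\infty \qtq{as} n\to\infty.
$$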
If there is only one bubble and no remainder term, then (after a little untangling) we find ourselves in the desired compactness regime, namely, that after applying symmetry operations to $u_n(0)$ we obtain a subsequence that converges strongly in $\dot H^1({\mathbb{R}}^3)$. Moreover this limit gives initial data for the needed minimal blowup solution (cf. Theorem~\ref{T:mmbs}). But what if we find ourselves in the unwanted dichotomy scenario where there is more than one bubble? This is where the inductive hypothesis comes to the rescue, as we will now explain.
To each profile in the linear profile decomposition, we associate a nonlinear profile, which is a solution to $\text{NLS}_{{\mathbb{R}}^3}$. For bubbles of concentration that overlap time $t=0$, these are simply the nonlinear solutions with initial data given by the bubble. For bubbles of concentration that are temporally well separated from $t=0$, they are nonlinear solutions that have matching long-time behaviour (i.e. matching scattering state). If there is more than one bubble (or a single bubble but non-zero remainder), all bubbles have energy strictly less than $E_c$. (Note that energies are additive due to the strong separation of distinct profiles.) But then by the inductive hypothesis \eqref{E:induct hyp}, each one of the nonlinear profiles will be global in time and obey spacetime bounds. Adding the nonlinear profiles together (and incorporating the linear flow of the remainder term) we obtain an approximate solution to $\text{NLS}_{{\mathbb{R}}^3}$ with finite global spacetime bounds. The fact that the sum of the nonlinear profiles is an approximate solution relies on the separation property of the profiles (this is, after all, a \emph{nonlinear} problem). Thus by perturbation theory, for $n$ sufficiently large there is a true solution to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $u_n(0)$ and bounded global spacetime norms. This contradicts the criterion by which $u_n$ were chosen in the first place and so precludes the dichotomy scenario.
This completes the discussion of how one proves the existence of minimal energy blowup solutions for the energy-critical problem in the Euclidean setting. The argument gives slightly more, something we call (by analogy with the calculus of variations) a \emph{Palais--Smale condition} (cf. Proposition~\ref{P:PS}). This says the following: Given an optimizing sequence of solutions for the scattering size with the energy converging to $E_c$, this sequence has a convergent subsequence (modulo the symmetries of the problem). Note that by the definition of $E_c$, such optimizing sequences have diverging scattering size.
Recall that one of the key discoveries of \cite{borg:scatter,CKSTT:gwp} was that it was only necessary to consider solutions that have a well-defined (time-dependent) location and characteristic length scale. Mere existence of minimal blowup solutions is not sufficient; they need to have this additional property in order to overcome the intrinsic limitations of non-scale-invariant conservation/monotonicity laws.
Fortunately, this additional property follows neatly from the Palais--Smale condition. If $u(t)$ is a minimal energy blowup solution and $t_n$ is a sequence of times, then $u_n(t)=u(t+t_n)$ is a sequence to which we may apply the Palais--Smale result. Thus, applying symmetry operations to $u(t_n)$ one may find a subsequence that is convergent in $\dot H^1({\mathbb{R}}^3)$. This is precisely the statement that the solution $u$ is \emph{almost periodic}, which is to say, the orbit is cocompact modulo spatial translations and rescaling. This compactness guarantees that the orbit is tight in both the physical and Fourier variables (uniformly in time).
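One common way of expressing this tightness quantitatively (stated in the Euclidean setting purely for illustration; we do not claim this is the exact formulation used later) is that there exists a function $C:(0,\infty)\to(0,\infty)$ so that for all $t$ and all $\eta>0$,
$$
\int_{|x-x(t)|\ge C(\eta)/N(t)} |\nabla u(t,x)|^2\,dx
+\int_{|\xi|\ge C(\eta)N(t)} |\xi|^2\,|\hat u(t,\xi)|^2\,d\xi \le \eta,
$$
where $x(t)$ denotes the spatial centre and $N(t)$ the frequency scale of the solution.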
Let us now turn to the problem on exterior domains. Adapting the concentration compactness argument to this setting will cause us a great deal of trouble. Naturally, NLS in the exterior domain $\Omega$ does not enjoy scaling or translation invariance. Nevertheless, both the linear and nonlinear profile decompositions must acknowledge the possibility of solutions living at any scale and in any possible location. It is important to realize that in certain limiting cases, these profiles obey \emph{different} equations. Here are the three main examples:
\begin{CI}
\item Solutions with a characteristic scale much larger than that of the obstacle evolve as if in ${\mathbb{R}}^3$.
\item Solutions very far from the obstacle (relative to their own characteristic scale) also evolve as if in ${\mathbb{R}}^3$.
\item Very narrowly concentrated solutions lying very close to the obstacle evolve as if in a halfspace.
\end{CI}
This picture is both an essential idea that we will develop in what follows and, as it stands, extremely naive. In each of the three scenarios just described, there are serious omissions from this superficial picture, as we will discuss below.
Nevertheless, the Palais--Smale condition we obtain in this paper (see Proposition~\ref{P:PS}) is so strong, that it proves the existence of minimal counterexamples in the following form:
\begin{thm}[Minimal counterexamples]\label{T:mincrim}
\hskip 0em plus 1em Suppose Theorem \ref{T:main} failed. Then there exist a critical energy\/ $0<E_c<\infty$ and a global solution $u$ to \eqref{nls} with
$E(u)=E_c$, infinite scattering size both in the future and in the past
$$
S_{\ge 0}(u)=S_{\le 0}(u)=\infty,
$$
and whose orbit $\{ u(t):\, t\in {\mathbb{R}}\}$ is precompact in $\dot H_D^1(\Omega)$.
\end{thm}
As evidence of the strength of this theorem, we note that it allows us to complete the proof of Theorem~\ref{T:main} very quickly indeed (see the last half-page of this paper).
Induction on energy has been adapted to scenarios with broken symmetries before and we would like to give a brief discussion of some of these works. Our efforts here diverge from these works in the difficulty of connecting the limiting cases to the original model. The lack of a dispersive estimate is a particular facet of this.
In \cite{KVZ:quadpot}, the authors proved global well-posedness and scattering for the energy-critical NLS with confining or repelling quadratic potentials. The argument was modelled on that of Bourgain \cite{borg:scatter} and Tao \cite{tao:radial}, and correspondingly considered only spherically symmetric data. Radiality helps by taming the lack of translation invariance; the key issue was to handle the broken scaling symmetry. This problem has dispersive estimates, albeit only for short times in the confining (i.e. harmonic oscillator) case.
In \cite{LSZ}, the Bourgain--Tao style of argument is adapted to spherically symmetric data in the exterior of a sphere in ${\mathbb{R}}^3$. A key part of their argument is to prove that a dispersive estimate holds in this setting.
The paper \cite{KKSV:gKdV} considers the mass-critical generalized Korteweg--de Vries equation, using the concentration compactness variant of induction on energy. This paper proves a minimal counterexample theorem in the style of Theorem~\ref{T:mincrim}. Dispersive estimates hold; the main obstruction was to overcome the broken Galilei invariance. In the limit of highly oscillatory solutions (at a fixed scale) the gKdV equation is shown to resemble a \emph{different} equation, namely, the mass-critical NLS. This means that both the linear and nonlinear profile decompositions contain profiles that are embeddings of solutions to the linear/nonlinear Schr\"odinger equations, carefully embedded to mimic solutions to Airy/gKdV.
An analogous scenario arises in the treatment of the cubic Klein--Gordon equation in two spatial dimensions, \cite{KSV:2DKG}. Dispersive estimates hold for this problem. Here the scaling symmetry is broken and strongly non-relativistic profiles evolve according to the mass-critical Schr\"odinger equation, which also breaks the Lorentz symmetry. Linear and nonlinear profile decompositions that incorporate Lorentz boosts were one of the novelties of this work.
In the last two examples, the broken symmetries have led to dramatic changes in the equation, though the geometry has remained the same (all of Euclidean space). Next, we describe some instances where the geometry changes, but the equation is essentially the same.
The paper \cite{IPS:H3} treats the energy-critical NLS on three-dimensional hyperbolic space. Theorem~\ref{T:gopher} is used to treat highly concentrated profiles, which are embedded in hyperbolic space using the strongly Euclidean structure at small scales. Some helpful ingredients in hyperbolic space are the mass gap for the Laplacian and its very strong dispersive and Morawetz estimates.
More recently, dramatic progress has been made on the energy-critical problem on the three dimensional flat torus. Global well-posedness for small data was proved in \cite{HerrTataruTz:torus} and the large-data problem was treated in \cite{IonPaus}. (See also \cite{HaniPaus,Herr:Zoll,HerrTataruTz:mixed,IonPaus1} for results in related geometries.) While the manifold in question may be perfectly flat, the presence of closed geodesics and corresponding paucity of Strichartz estimates made this a very challenging problem. The large data problem was treated via induction on energy, using the result for Euclidean space (i.e. Theorem~\ref{T:gopher}) as a black box to control highly concentrated profiles. The local-in-time frequency localized dispersive estimate proved by Bourgain \cite{borg:torus} plays a key role in ensuring the decoupling of profiles.
While the methods employed in the many papers we have discussed so far inform our work here, they do not suffice for the treatment of Theorem~\ref{T:main}. Indeed, even the form of perturbation theory needed here spawned the separate paper \cite{KVZ:HA}. Moreover, in this paper we encounter not only changes in geometry, but also changes in the equation; after all, the Dirichlet Laplacian on exterior domains is very different from the Laplacian on ${\mathbb{R}}^3$.
We have emphasized the dispersive estimate because it has been an essential ingredient in the concentration compactness variant of induction on energy; it is the tool that guarantees that profiles contain a single bubble of concentration and so underwrites the decoupling of different profiles. Up to now, no one has succeeded in doing this without the aid of a dispersive-type estimate. Moreover, as emphasized earlier, the dispersive estimate plays a seemingly irreplaceable role in the treatment of the energy-critical problem in ${\mathbb{R}}^3$. Thus, we are confronted with the problem of finding and then proving a suitable substitute for the dispersive estimate. One of the key messages of this paper is the manner in which this issue is handled, in particular, that the weakened form of dispersive estimate we prove, namely Theorem~\ref{T:LF}, is strong enough to complete the construction of minimal blowup solutions. The result we prove is too strong to hold outside merely non-trapping obstacles; convexity plays an essential role here.
Section~\ref{S:Linear flow convergence} is devoted entirely to the proof of Theorem~\ref{T:LF}.
Three different methods are used depending on the exact geometric setting, but in all cases, the key result is an \emph{infinite-time} parametrix that captures the action of $e^{it\Delta_\Omega}$ up to a \emph{vanishing fraction} of the mass/energy. Both this level of accuracy and the fact that it holds for all time are essential features for the rest of the argument.
The most difficult regime in the proof of Theorem~\ref{T:LF} is when the initial data is highly concentrated, say at scale ${\varepsilon}$, at a distance $\delta$
from the obstacle with ${\varepsilon}\lesssim \delta\lesssim 1$. To treat this regime, we subdivide into two cases: ${\varepsilon}\lesssim \delta\lesssim{\varepsilon}^{\frac67}$ and ${\varepsilon}^{\frac67}\lesssim\delta\lesssim 1$, which are called Cases~(iv) and~(v), respectively.
In Case~(iv), the initial data sees the obstacle as a (possibly retreating) halfspace. To handle this case, we first approximate the initial data by a linear combination of Gaussian wave packets (with characteristic scale ${\varepsilon}$). Next we use the halfspace evolution of these wave packets (for which there is an exact formula) to approximate their linear evolution in $\Omega$. As the halfspace evolution does not match the Dirichlet boundary condition, we have to introduce a correction term $w$. Moreover, we have to choose the parameters in the definition of $w$ carefully, so that the resulting error terms can be controlled for the full range of $\delta$.
In Case~(v), the obstacle is far from the initial data relative to the data's own scale, but close relative to the scale of the obstacle. We decompose the initial data into a linear combination of Gaussian wave packets, whose characteristic scale $\sigma$ is chosen carefully to allow reflection off the obstacle to be treated by means of geometric optics. In particular, $\sigma$ is chosen so that the wave packets do not disperse prior to their collision with the obstacle, but do disperse shortly thereafter. We divide these wave packets into three categories: those that miss the obstacle, those that are near-grazing, and those that collide non-tangentially with the obstacle. Wave packets in the last category are the most difficult to treat. For these, we build a Gaussian parametrix for the reflected wave. To achieve the needed degree of accuracy, this parametrix must be very precisely constructed; in particular, it must be matched to the principal curvatures of the obstacle at the collision point. This parametrix does not match the Dirichlet boundary condition perfectly, and it is essential to wring the last drops of cancellation from this construction in order to ensure that it is not overwhelmed by the resulting errors. Further, the term $w$ that we introduce to match the boundary condition is carefully chosen so that it is non-resonant; note the additional phase factor in the definition of $w^{(3)}$. This is needed so that the error terms are manageable.
An example of how the results of Section~\ref{S:Linear flow convergence} play a role can be seen in the case of profiles that are highly concentrated at a bounded distance from the obstacle. These live far from the obstacle relative to their own scale, and so we may attempt to approximate them by solutions to $\text{NLS}_{{\mathbb{R}}^3}$ whose existence is guaranteed by Theorem~\ref{T:gopher}. Such solutions scatter and so eventually dissolve into outward propagating radiation. However, the obstacle blocks a positive fraction of directions and so a non-trivial fraction of the energy of the wave packet will reflect off the obstacle. Theorem~\ref{T:LF3} guarantees that this reflected energy will not refocus. Only with this additional input can we truly say that such profiles behave as if in Euclidean space.
Now consider the case when the profile is much larger than the obstacle. In this case the equivalence of the linear flows follows from Theorem~\ref{T:LF1}. However, the argument does not carry over to the nonlinear case. Embedding the nonlinear profiles requires a special argument; one of the error terms is simply not small. Nevertheless, we are able to control it by proving that it is non-resonant; see Step~2 in the proof of Theorem~\ref{T:embed2}.
The third limiting scenario identified above was when the profile concentrates very close to the obstacle. In this regime the limiting geometry is the halfspace ${\mathbb{H}}$. Note that spacetime bounds for $\text{NLS}_{\mathbb{H}}$ follow from Theorem~\ref{T:gopher} by considering solutions that are odd under reflection in $\partial{\mathbb{H}}$. The linear flow is treated in Theorem~\ref{T:LF2} and the embedding of nonlinear profiles is the subject of Theorem~\ref{T:embed4}. Note that in this regime, the spacetime region where the evolution is highly nonlinear coincides with the region of collision with the boundary. In the far-field regime, the finite size of the obstacle affects the radiation pattern; thus it is essential to patch the halfspace linear evolution together with that in $\Omega$.
Our discussion so far has emphasized how to connect the free propagator in the limiting geometries with that in $\Omega$. The complexity of energy-critical arguments is such that we also need to understand the relations between other spectral multipliers, such as Littlewood--Paley projectors and fractional powers. This is the subject of Section~\ref{S:Domain Convergence}.
After much toil, we show that nonlinear profiles arising from all limiting geometries obey spacetime bounds, which plays an analogous role to the induction on energy hypothesis. Thus, when the nonlinear profile decomposition is applied to a Palais--Smale sequence, we can show that there can be only one profile and it cannot belong to either of the limiting geometries ${\mathbb{R}}^3$ or ${\mathbb{H}}$; it must live at approximately unit scale and at approximately unit distance from the obstacle. This is how we obtain Theorem~\ref{T:mincrim}. The proof of this theorem occupies most of Section~\ref{S:Proof}. The last part of that section deduces Theorem~\ref{T:main} from this result.
To close this introduction, let us quickly recount the contents of this paper by order of presentation.
Section~\ref{S:Preliminaries} mostly reviews existing material that is needed for the analysis: equivalence of Sobolev spaces and the product rule for the Dirichlet Laplacian; Littlewood--Paley theory and Bernstein inequalities; Strichartz estimates; local and stability theories for $\text{NLS}_\Omega$; persistence of regularity for solutions of NLS that obey spacetime bounds (this is important for the embedding of profiles); the Bourgain-style Morawetz identity; and local smoothing.
Section~\ref{S:Domain Convergence} proves results related to the convergence of functions of the Dirichlet Laplacian as the underlying domains converge. Convergence of Green's functions at negative energies is proved via direct analysis making use of the maximum principle. This is extended to complex energies via analytic continuation and the Phragmen--Lindel\"of principle. General functions of the operator are represented in terms of the resolvent via the Helffer--Sj\"ostrand formula.
Section~\ref{S:Linear flow convergence} analyses the behaviour of the linear propagator under domain convergence. In all cases, high-accuracy infinite-time parametrices are constructed. When the geometry guarantees that a vanishing fraction of the wave actually hits the obstacle, a simple truncation argument is used (Theorem~\ref{T:LF1}). For disturbances close to the obstacle, we base our approximation off the exact solution of the halfspace linear problem with Gaussian initial data; see Theorem~\ref{T:LF2}. For highly concentrated wave packets a bounded distance from the obstacle, we build a parametrix based on a Gaussian beam technique; see Theorem~\ref{T:LF3}. The fact that Gaussian beams are exact linear solutions in Euclidean space prevents the accumulation of errors at large times.
Section~\ref{S:LPD} first proves refined and inverse Strichartz inequalities (Lemma~\ref{lm:refs} and Proposition~\ref{P:inverse Strichartz}). These show that linear evolutions with non-trivial spacetime norms must contain a bubble of concentration. This is then used to obtain the linear profile decomposition, Theorem~\ref{T:LPD}. The middle part of this section contains additional results related to the convergence of domains, which combine the tools from Sections~\ref{S:Domain Convergence} and~\ref{S:Linear flow convergence}.
Section~\ref{S:Nonlinear Embedding} shows how nonlinear solutions in the limiting geometries can be embedded in $\Omega$. As nonlinear solutions in the limiting geometries admit global spacetime bounds (this is how Theorem~\ref{T:gopher} enters our analysis), we deduce that solutions to $\text{NLS}_\Omega$ whose characteristic length scale and location conform closely to one of these limiting cases inherit these spacetime bounds. These solutions to $\text{NLS}_\Omega$ appear again as nonlinear profiles in Section~\ref{S:Proof}.
Section~\ref{S:Proof} contains the proofs of the Palais--Smale condition (Proposition~\ref{P:PS}), as well as the existence and almost periodicity of minimal blowup solutions (Theorem~\ref{T:mmbs}). Because of all the ground work laid in the previous sections, the nonlinear profile decomposition, decoupling, and induction on energy arguments all run very smoothly. This section closes with the proof of Theorem~\ref{T:main}; the needed contradiction is obtained by combining the space-localized Morawetz identity introduced in Lemma~\ref{L:morawetz} with the almost periodicity of minimal blowup solutions.
\section{Preliminaries}\label{S:Preliminaries}
\subsection{Some notation}
We write $X \lesssim Y$ or $Y \gtrsim X$ to indicate $X \leq CY$ for some absolute constant $C>0$, which may change from line
to line. When the implicit constant depends on additional quantities, this will be indicated with subscripts. We use $O(Y)$ to
denote any quantity $X$ such that $|X| \lesssim Y$. We use the notation $X \sim Y$ whenever $X \lesssim Y \lesssim X$. We write $o(1)$
to indicate a quantity that converges to zero.
Throughout this paper, $\Omega$ will denote the exterior domain of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$.
Without loss of generality, we assume that $0\in \Omega^c$. We use $\diam:=\diam(\Omega^c)$ to denote the diameter of the obstacle
and $d(x):=\dist(x,\Omega^c)$ to denote the distance of a point $x\in{\mathbb{R}}^3$ to the obstacle.
In order to prove decoupling of profiles in $L^p$ spaces (when $p\neq 2$) in Section~\ref{S:LPD}, we will make use of the
following refinement of Fatou's Lemma, due to Br\'ezis and Lieb:
\begin{lem}[Refined Fatou, \cite{BrezisLieb}]\label{lm:rf}
Let $0<p<\infty$. Suppose $\{f_n\}\subseteq L^p({\mathbb{R}}^d)$ with $\limsup\|f_n\|_{L^p}<\infty$. If $f_n\to f$ almost everywhere, then
\begin{align*}
\int_{{\mathbb{R}}^d}\Bigl||f_n|^p-|f_n-f|^p-|f|^p \Bigr| \,dx\to 0.
\end{align*}
In particular, $\|f_n\|_{L^p}^p-\|f_n-f\|_{L^p}^p \to \|f\|_{L^p}^p$.
\end{lem}
As described in the introduction, we need adaptations of a wide variety of harmonic analysis tools to the setting
of exterior domains. Most of these were discussed in our paper \cite{KVZ:HA}. One of the key inputs for that
paper is the following (essentially sharp) estimate for the heat kernel:
\begin{thm}[Heat kernel bounds, \cite{qizhang}]\label{T:heat}
Let $\Omega$ denote the exterior of a smooth compact convex obstacle in ${\mathbb{R}}^d$ for $d\geq 3$. Then there exists $c>0$ such that
\begin{align*}
|e^{t\Delta_{\Omega}}(x,y)|\lesssim \Bigl(\frac{d(x)}{\sqrt t\wedge \diam}\wedge 1\Bigr)\Bigl(\frac{d(y)}{\sqrt t\wedge \diam}\wedge 1\Bigr) e^{-\frac{c|x-y|^2}t} t^{-\frac d 2},
\end{align*}
uniformly in $x, y\in \Omega$ and $t\geq 0$; recall that $A\wedge B = \min\{A,B\}$. Moreover, the reverse inequality holds after suitable modification
of $c$ and the implicit constant.
\end{thm}
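As a quick sanity check on the shape of this bound (an observation only, not a result we rely on): both prefactors are at most one, so the estimate is never worse than the Euclidean Gaussian bound; moreover, when $d(x)$ and $d(y)$ are both $\gtrsim \sqrt t\wedge\diam$, the prefactors are comparable to one and the bound reduces to
$$
|e^{t\Delta_{\Omega}}(x,y)| \lesssim t^{-\frac d2}\, e^{-\frac{c|x-y|^2}{t}},
$$
matching the whole-space heat kernel up to constants.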
The most important result from \cite{KVZ:HA} for our applications here is the following, which identifies Sobolev spaces defined with respect
to the Dirichlet Laplacian with those defined via the usual Fourier multipliers. Note that the restrictions on the regularity $s$ are necessary,
as demonstrated by the counterexamples discussed in~\cite{KVZ:HA}.
\begin{thm}[Equivalence of Sobolev spaces, \cite{KVZ:HA}]\lambdaabel{T:Sob equiv}
Let $d\geq 3$ and let $\Omega$ denote the complement of a compact convex body $\Omega^c\subset{\mathbb{R}}^d$ with smooth boundary. Let $1<p<\infty$. If $0\leq s<\min\{1+\frac1p,\frac dp\}$ then
\begin{equation}\label{E:equiv norms}
\bigl\| (-\Delta_{{\mathbb{R}}^d})^{s/2} f \bigr\|_{L^p} \sim_{d,p,s} \bigl\| (-\Delta_\Omega)^{s/2} f \bigr\|_{L^p} \qtq{for all} f\in C^\infty_c(\Omega).
\end{equation}
\end{thm}
This result allows us to transfer several key results directly from the Euclidean setting, provided we respect the restrictions on $s$ and $p$. This includes such basic facts as the $L^p$-Leibniz (or product) rule for first derivatives. Indeed, the product rule for the operator
$(-\Delta_\Omega)^{1/2}$ is non-trivial; there is certainly no pointwise product rule for this operator.
We also need to consider derivatives of non-integer order. The $L^p$-product rule for fractional derivatives in Euclidean spaces was
first proved by Christ and Weinstein \cite{ChW:fractional chain rule}. Combining their result with Theorem~\ref{T:Sob equiv}
yields the following:
\begin{lem}[Fractional product rule]\label{lm:product}
For all $f, g\in C_c^{\infty}(\Omega)$, we have
\begin{align}\lambdaabel{fp}
\| (-\Delta_\Omega)^{\frac s2}(fg)\|_{L^p} \lesssim \| (-\Delta_\Omega)^{\frac s2} f\|_{L^{p_1}}\|g\|_{L^{p_2}}+
\|f\|_{L^{q_1}}\| (-\Delta_\Omega)^{\frac s2} g\|_{L^{q_2}}
\end{align}
with the exponents satisfying $1<p, p_1, q_2<\infty$, $1<p_2,q_1\le \infty$,
\begin{align*}
\tfrac1p=\tfrac1{p_1}+\tfrac1{p_2}=\tfrac1{q_1}+\tfrac1{q_2}, \qtq{and} 0<s<\min\bigl\{ 1+\tfrac1{p_1}, 1+\tfrac1{q_2},\tfrac3{p_1},\tfrac3{q_2} \bigr\}.
\end{align*}
\end{lem}
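As an illustration of how this lemma is typically deployed (a model computation with representative exponents, not necessarily those used later in the paper): writing $|u|^4u=u^3\bar u^2$ and iterating \eqref{fp} across the five factors, using also that $(-\Delta_\Omega)^{\frac s2}$ commutes with complex conjugation, one obtains for example
$$
\bigl\|(-\Delta_\Omega)^{\frac12}\bigl(|u|^4u\bigr)\bigr\|_{L^{\frac65}(\Omega)}
\lesssim \|u\|_{L^{10}(\Omega)}^4\,\bigl\|(-\Delta_\Omega)^{\frac12}u\bigr\|_{L^{\frac{30}{13}}(\Omega)},
$$
the H\"older exponents matching since $\frac56=\frac4{10}+\frac{13}{30}$, while $s=1$ lies below the thresholds in the lemma for these exponents.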
\subsection{Littlewood--Paley theory on exterior domains}
Fix $\phi:[0,\infty)\to[0,1]$ a smooth non-negative function obeying
\begin{align*}
\phi(\lambda)=1 \qtq{for} 0\le\lambda\le 1 \qtq{and} \phi(\lambda)=0\qtq{for} \lambda\ge 2.
\end{align*}
For each dyadic number $N\in 2^{\mathbb{Z}}$, we then define
\begin{align*}
\phi_N(\lambda):=\phi(\lambda/N) \qtq{and} \psi_N(\lambda):=\phi_N(\lambda)-\phi_{N/2}(\lambda);
\end{align*}
notice that $\{\psi_N(\lambda)\}_{N\in 2^{\Z}}$ forms a partition of unity for $(0,\infty)$.
With these functions in place, we can now introduce the Littlewood--Paley projections adapted to the Dirichlet Laplacian on $\Omega$
and defined via the functional calculus for self-adjoint operators:
\begin{align*}
P^{\Omega}_{\le N} :=\phi_N\bigl(\sqrt{-\Delta_\Omega}\,\bigr), \quad P^{\Omega}_N :=\psi_N\bigl(\sqrt{-\Delta_\Omega}\,\bigr), \qtq{and} P^{\Omega}_{>N} :=I-P^{\Omega}_{\le N}.
\end{align*}
For brevity we will often write $f_N := P^{\Omega}_N f$ and similarly for the other projections.
We will write $P_N^{{\mathbb{R}}^3}$, and so forth, to represent the analogous operators associated to the usual Laplacian in the full Euclidean space.
We will also need the analogous operators on the halfspace ${\mathbb{H}}=\{x\in{\mathbb{R}}^3 : x \cdot e_3 >0\}$ where $e_3=(0,0,1)$, which we denote by $P^{{\mathbb{H}}}_N$, and so forth.
Just like their Euclidean counterparts, these Littlewood--Paley projections obey Bernstein estimates. Indeed, these follow quickly
from heat kernel bounds and the analogue of the Mikhlin multiplier theorem for the Dirichlet Laplacian. See \cite{KVZ:HA} for further details.
\begin{lem}[Bernstein estimates]
Let $1<p<q\le \infty$ and $-\infty<s<\infty$. Then for any $f\in C_c^{\infty}(\Omega)$, we have
\begin{align*}
\|P^{\Omega}_{\le N} f \|_{L^p(\Omega)}+\|P^{\Omega}_N f\|_{L^p(\Omega)}&\lesssim \|f\|_{L^p(\Omega)},\\
\|P^{\Omega}_{\le N} f\|_{L^q(\Omega)}+\|P^{\Omega}_N f\|_{L^q(\Omega)}&\lesssim N^{d(\frac 1p-\frac1q)}\|f\|_{L^p(\Omega)},\\
N^s\|P^{\Omega}_N f\|_{L^p(\Omega)}&\sim \|(-\Delta_{\Omega})^{\frac s2}P^{\Omega}_N f\|_{L^p(\Omega)}.
\end{align*}
\end{lem}
A deeper application of the multiplier theorem for the Dirichlet Laplacian is the proof of the square function inequalities.
Both are discussed in \cite{IvanPlanch:square}, as well as \cite{KVZ:HA}, and further references can be found therein.
\begin{lem}[Square function estimate]\label{sq}
Fix $1<p<\infty$. For all $f\in C_c^{\infty}(\Omega)$,
\begin{align*}
\|f\|_{L^p(\Omega)}\sim \Bigl\|\Bigl(\sum_{N\in 2^{\Z}}|P^{\Omega}_{N} f|^2\Bigr)^{\frac12}\Bigr\|_{L^p(\Omega)}.
\end{align*}
\end{lem}
Implicit in this lemma is the fact that each $f$ coincides with $\sum f_N$ in $L^p(\Omega)$ sense for $1<p<\infty$. This relies on the
fact that $0$ is not an eigenvalue of $-\Delta_\Omega$, as follows from Lemma~\ref{L:local smoothing}.
\subsection{Strichartz estimates and the local theory}
As the endpoint Strichartz inequality is not known for exterior domains, some care needs to be taken when defining the natural Strichartz spaces.
For any time interval $I$, we define
\begin{align*}
S^0(I)&:=L_t^{\infty} L_x^2(I\times\Omega)\cap L_t^{2+{\varepsilon}}L_x^{\frac{6(2+{\varepsilon})}{2+3{\varepsilon}}}(I\times\Omega)\\
\dot S^1(I) &:= \{u:I\times\Omega\to {\mathbb{C}} :\, (-\Delta_\Omega)^{1/2}u\in S^0(I)\}.
\end{align*}
By interpolation,
\begin{align}\label{Sspaces}
\|u\|_{L_t^q L_x^r(I\times\Omega)}\leq \|u\|_{S^0(I)} \qtq{for all} \tfrac2q+\tfrac3r=\tfrac32 \qtq{with} 2+{\varepsilon}\leq q\leq \infty.
\end{align}
Here ${\varepsilon}>0$ is chosen sufficiently small so that all Strichartz pairs of exponents used in this paper are covered. For example, combining
\eqref{Sspaces} with Sobolev embedding and the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}, we obtain the following lemma.
\begin{lem}[Sample spaces]
We have
\begin{align*}
\|u\|_{L_t^\infty \dot H^1_D} &+ \|(-\Delta_{\Omega})^{\frac12}u\|_{L_t^{10} L_x^{\frac{30}{13}}} + \|(-\Delta_{\Omega})^{\frac12}u\|_{L_t^5L_x^{\frac{30}{11}}} + \|(-\Delta_{\Omega})^{\frac 12}u\|_{L_{t,x}^{\frac{10}3}}\\
& + \|(-\Delta_{\Omega})^{\frac 12}u\|_{L_t^{\frac83} L_x^4} +\|u\|_{L_t^\infty L_x^6}+\|u\|_{L^{10}_{t,x}}+\|u\|_{L_t^5 L_x^{30}}\lesssim \|u\|_{\dot S^1(I)},
\end{align*}
where all spacetime norms are over $I\times \Omega$.
\end{lem}
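For instance, the $L^{10}_{t,x}$ bound in this list follows from the $L^{10}_tL^{\frac{30}{13}}_x$ bound on $(-\Delta_\Omega)^{\frac12}u$ together with Theorem~\ref{T:Sob equiv} and the Sobolev embedding $\dot W^{1,\frac{30}{13}}({\mathbb{R}}^3)\hookrightarrow L^{10}({\mathbb{R}}^3)$; the relevant numerology is
$$
\tfrac2{10}+\tfrac{3\cdot 13}{30}=\tfrac32 \qtq{and} \tfrac{13}{30}-\tfrac13=\tfrac1{10},
$$
so $(10,\tfrac{30}{13})$ is an admissible pair covered by \eqref{Sspaces} and one derivative in $L^{\frac{30}{13}}_x$ controls the $L^{10}_x$ norm.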
We define $N^0(I)$ to be the dual Strichartz space and
$$
\dot N^1(I):=\{F:I\times\Omega\to {\mathbb{C}}:\, (-\Delta_\Omega)^{1/2} F\in N^0(I)\}.
$$
For the case of exterior domains, Strichartz estimates were proved by Ivanovici \cite{Ivanovici:Strichartz}; see also \cite{BSS:schrodinger}. These estimates form an essential foundation for all the analysis carried out in this paper.
\begin{thm}[Strichartz estimates]\label{T:Strichartz}
Let $I$ be a time interval and let $\Omega$ be the exterior of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$. Then
the solution $u$ to the forced Schr\"odinger equation $i u_t + \Delta_\Omega u = F$ satisfies the estimate
\begin{align*}
\|u\|_{S^0(I)}\lesssim \|u(t_0)\|_{L^2(\Omega)}+\|F\|_{N^0(I)}
\end{align*}
for any $t_0\in I$. In particular, as $(-\Delta_\Omega)^{1/2}$ commutes with the free propagator $e^{it\Delta_\Omega}$,
\begin{align*}
\|u\|_{\dot S^1(I)}\lesssim \|u(t_0)\|_{\dot H^1_D(\Omega)}+\|F\|_{\dot N^1(I)}
\end{align*}
for any $t_0\in I$.
\end{thm}
When $\Omega$ is the whole Euclidean space ${\mathbb{R}}^3$, we may take ${\varepsilon}=0$ in the definition of Strichartz spaces; indeed, for the linear propagator
$e^{it\Delta_{{\mathbb{R}}^3}}$, Strichartz estimates for the endpoint pair of exponents $(q,r)=(2,6)$ were proved by Keel and Tao \cite{KeelTao}. Embedding functions on the halfspace ${\mathbb{H}}$ as functions on ${\mathbb{R}}^3$ that are odd under reflection in $\partial{\mathbb{H}}$, we immediately see that the whole range of Strichartz estimates, including the endpoint, also hold for the free propagator $e^{it\Delta_{{\mathbb{H}}}}$.
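Concretely, writing $\tilde f$ for the odd extension of $f\in C^\infty_c({\mathbb{H}})$, that is, $\tilde f(x_1,x_2,x_3)=f(x_1,x_2,x_3)$ for $x_3>0$ and $\tilde f(x_1,x_2,x_3)=-f(x_1,x_2,-x_3)$ for $x_3<0$, one has
$$
\bigl(e^{it\Delta_{{\mathbb{H}}}}f\bigr)(x)=\bigl(e^{it\Delta_{{\mathbb{R}}^3}}\tilde f\bigr)(x) \qtq{for} x\in{\mathbb{H}},
$$
since $e^{it\Delta_{{\mathbb{R}}^3}}\tilde f$ remains odd under reflection in $\partial{\mathbb{H}}$ and hence vanishes on the boundary; as $\|\tilde f\|_{L^p({\mathbb{R}}^3)}^p=2\|f\|_{L^p({\mathbb{H}})}^p$, every Strichartz estimate transfers with at most a factor of two.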
The local theory for \eqref{nls} is built on contraction mapping arguments combined with Theorem~\ref{T:Strichartz} and the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}. We record below a stability result for \eqref{nls}, which is essential in extracting a minimal counterexample to Theorem~\ref{T:main}. Its predecessor in the Euclidean case can be found in \cite{CKSTT:gwp}; for versions in higher dimensional Euclidean spaces see \cite{ClayNotes, RV, TaoVisan}.
\begin{thm}[Stability for $\text{NLS}_{\Omega}$, \cite{KVZ:HA}]\label{T:stability}
Let $\Omega$ be the exterior of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$. Let $I$ be a compact time interval and let $\tilde u$ be an approximate solution to \eqref{nls} on $I\times \Omega$ in the sense that
$$
i\tilde u_t + \Delta_\Omega \tilde u = |\tilde u|^4\tilde u + e
$$
for some function $e$. Assume that
\begin{align*}
\|\tilde u\|_{L_t^\infty \dot H_D^1(I\times \Omega)}\le E \qtq{and} \|\tilde u\|_{L_{t,x}^{10}(I\times \Omega)} \le L
\end{align*}
for some positive constants $E$ and $L$. Let $t_0 \in I$ and let $u_0\in \dot H_D^1(\Omega)$ satisfy
\begin{align*}
\|u_0-\tilde u(t_0)\|_{\dot H_D^1}\le E'
\end{align*}
for some positive constant $E'$. Assume also the smallness condition
\begin{align}\label{E:stab small}
\bigl\|\sqrt{-\Delta_\Omega}\; e^{i(t-t_0)\Delta_\Omega}\bigl[u_0-\tilde u(t_0)\bigr]\bigr\|_{L_t^{10}L_x^{\frac{30}{13}}(I\times \Omega)}
+\bigl\|\sqrt{-\Delta_\Omega}\; e\bigr\|_{N^0(I)}&\le{\varepsilon}
\end{align}
for some $0<{\varepsilon}<{\varepsilon}_1={\varepsilon}_1(E,E',L)$. Then, there exists a unique strong solution $u:I\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with initial data $u_0$ at time $t=t_0$ satisfying
\begin{align*}
\|u-\tilde u\|_{L_{t,x}^{10}(I\times \Omega)} &\lambdaeq C(E,E',L){\varepsilon}\\
\bigl\|\sqrt{-\Delta_\Omega}\; (u-\tilde u)\bigr\|_{S^0(I\times\Omega)} &\lambdaeq C(E,E',L)\, E'\\
\bigl\|\sqrt{-\Delta_\Omega}\; u\bigr\|_{S^0(I\times\Omega)} &\lambdaeq C(E,E',L).
\end{align*}
\end{thm}
There is an analogue of this theorem for $\Omega$ an exterior domain in ${\mathbb{R}}^d$ with $d=4,5,6$; see \cite{KVZ:HA}. For dimensions $d\geq 7$, this is an open question. The proof of the stability result in ${\mathbb{R}}^d$ with $d\geq 7$ relies on fractional chain rules for H\"older continuous functions and `exotic' Strichartz estimates; see \cite{ClayNotes,TaoVisan}. The equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv} guarantees that the fractional chain rule can be imported directly from the Euclidean setting. However, the `exotic' Strichartz estimates are derived from the dispersive estimate \eqref{E:EuclidDisp} and it is not known whether they hold in exterior domains.
Applying Theorem~\ref{T:stability} with $\tilde u\equiv0$, we recover the standard local well-posedness theory for \eqref{nls}. Indeed, for an arbitrary (large) initial data $u_0\in \dot H^1_D(\Omega)$, the existence of some small time interval $I$ on which the smallness hypothesis
\eqref{E:stab small} holds is guaranteed by the monotone convergence theorem combined with Theorem~\ref{T:Strichartz}. Moreover, if the initial data $u_0$ has small norm in $\dot H^1_D(\Omega)$ (that is, $E'$ is small), then Theorem~\ref{T:Strichartz} yields \eqref{E:stab small} with $I={\mathbb{R}}$.
Therefore, both local well-posedness for large data and global well-posedness for small data follow from Theorem~\ref{T:stability}. These special
cases of Theorem~\ref{T:stability} have appeared before, \cite{BSS:schrodinger, IvanPlanch:IHP}; induction on energy, however, requires the full strength of
Theorem~\ref{T:stability}.
In Section~\ref{S:Nonlinear Embedding}, we will embed solutions to NLS in various limiting geometries back inside $\Omega$. To embed solutions to $\text{NLS}_{{\mathbb{R}}^3}$ in $\Omega$, we will make use of the following persistence of regularity result for this equation:
\begin{lem}[Persistence of regularity for $\text{NLS}_{{\mathbb{R}}^3}$, \cite{CKSTT:gwp}]\lambdaabel{lm:persistencer3} Fix $s\ge 0$ and let $I$ be a compact time interval and $u:I\times{\mathbb{R}}^3\to {\mathbb{C}}$ be a solution to $\text{NLS}_{{\mathbb{R}}^3}$ satisfying
\begin{align*}
E(u)\lambdaeq E<\infty \qtq{and} \|u\|_{L_{t,x}^{10}(I\times{\mathbb{R}}^3)}\lambdaeq L<\infty.
\end{align*}
If $u(t_0)\in \dot H^s({\mathbb{R}}^3)$ for some $t_0\in I$, then
\begin{align*}
\|(-\Delta_{{\mathbb{R}}^3})^{\frac s2}u\|_{S^0(I)}\lambdaeq C(E,L) \|u(t_0)\|_{\dot H^s({\mathbb{R}}^3)}.
\end{align*}
\end{lem}
We will also need a persistence of regularity result for $\text{NLS}_{{\mathbb{H}}}$. This follows by embedding solutions on the halfspace as solutions on ${\mathbb{R}}^3$ that are odd under reflection in $\partial{\mathbb{H}}$. In particular, one may regard $-\Delta_{\mathbb{H}}$ as the restriction of $-\Delta_{{\mathbb{R}}^3}$ to odd functions. For example, one can see this equivalence in the exact formula for the heat kernel in ${\mathbb{H}}$.
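Explicitly, the method of images gives
$$
e^{t\Delta_{{\mathbb{H}}}}(x,y)=(4\pi t)^{-\frac32}\Bigl(e^{-\frac{|x-y|^2}{4t}}-e^{-\frac{|x-\bar y|^2}{4t}}\Bigr) \qtq{for} x,y\in{\mathbb{H}} \qtq{and} t>0,
$$
where $\bar y$ denotes the reflection of $y$ in $\partial{\mathbb{H}}$; subtracting the reflected Euclidean kernel enforces the Dirichlet condition on $\partial{\mathbb{H}}$.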
\begin{lem} [Persistence of regularity for $\text{NLS}_{{\mathbb{H}}}$]\lambdaabel{lm:persistenceh} Fix $s\ge 0$ and let $I$ be a compact time interval and
$u:I\times{\mathbb{H}}\to {\mathbb{C}}$ be a solution to $\text{NLS}_{{\mathbb{H}}}$ satisfying
\begin{align*}
E(u)\lambdaeq E<\infty \qtq{and} \|u\|_{L_{t,x}^{10}(I\times{\mathbb{H}})}\lambdaeq L<\infty.
\end{align*}
If $u(t_0)\in \dot H^s_D({\mathbb{H}})$ for some $t_0\in I$, then
\begin{align*}
\|(-\Delta_{{\mathbb{H}}})^{\frac s2}u\|_{S^0(I)}\lambdaeq C(E,L) \|u(t_0)\|_{\dot H^s_D({\mathbb{H}})}.
\end{align*}
\end{lem}
\subsection{Morawetz and local smoothing}
We preclude the minimal counterexample to Theorem~\ref{T:main} in Section~\ref{S:Proof} with the use of the following one-particle Morawetz inequality; cf. \cite{borg:scatter, LinStrauss}.
\begin{lem}[Morawetz inequality]\lambdaabel{L:morawetz}
Let $I$ be a time interval and let $u$ be a solution to \eqref{nls} on $I$. Then for any $A\ge 1$ with $A|I|^{1/2}\geq \diam(\Omega^c)$ we have
\begin{align}\lambdaabel{mora}
\int_I\int_{|x|\lambdae A|I|^{\frac 12}, x\in \Omega}\frac{|u(t,x)|^6}{|x|} \,dx\,dt\lambdaesssim A|I|^{\frac 12},
\end{align}
where the implicit constant depends only on the energy of $u$.
\end{lem}
\begin{proof}
Let $\phi(x)$ be a smooth radial bump function such that $\phi(x)=1$ for $|x|\lambdae 1$ and $\phi(x)=0$ for $|x|>2$. Let $R\geq \diam(\Omega^c)$ and define
$a(x):=|x|\phi\bigl(\frac x R\bigr)$. Then for $|x|\lambdae R$ we have
\begin{align}\lambdaabel{cd1}
\partial_j\partial_k a(x) \text{ is positive semi-definite,} \quad \nabla a(x)=\frac x{|x|}, \qtq{and} \Delta \Delta a(x)\le 0,
\end{align}
while for $|x|>R$ we have the following rough estimates:
\begin{align}\lambdaabel{cd2}
|\partial_k a(x)|\lambdaesssim 1, \quad |\partial_j\partial_k a(x)|\lambdaesssim \frac 1R,\qtq{and} |\Delta\Delta a(x)|\lambdaesssim \frac 1{R^3}.
\end{align}
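For the convenience of the reader, we record the elementary computations behind \eqref{cd1}: on the region $|x|\le R$, where $a(x)=|x|$,
$$
\nabla a(x)=\frac{x}{|x|},\qquad \partial_j\partial_k a(x)=\frac{1}{|x|}\Bigl(\delta_{jk}-\frac{x_jx_k}{|x|^2}\Bigr),\qquad
\Delta\Delta a(x)=\Delta\frac{2}{|x|}=-8\pi\delta_0(x).
$$
In particular, the Hessian is positive semi-definite and $\Delta\Delta a$ vanishes on $\Omega\cap\{|x|\le R\}$, since the origin lies in the interior of $\Omega^c$; this is all that is used below.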
To continue, we use the local momentum conservation law
\begin{align}\lambdaabel{lmc}
\partial_t \Im(\bar u \partial_k u)=-2\partial_j {\mathbb{R}}e(\partial_k u\partial_j\bar u)+\frac 12\partial_k\Delta(|u|^2)-\frac 23\partial_k(|u|^6).
\end{align}
Multiplying both sides by $\partial_k a$ and integrating over $\Omega$ we obtain
\begin{align}
\partial_t \Im\int_\Omega \bar u \partial_k u \partial_k a \,dx
&=-2 {\mathbb{R}}e\int_\Omega\partial_j(\partial_k u\partial_j \bar u)\partial_k a\,dx\notag\\
&\quad+\frac 12\int_\Omega\partial_k\Delta(|u|^2)\partial_k a \,dx-\frac 23\int_\Omega\partial_k(|u|^6)\partial_k a \,dx.\lambdaabel{17}
\end{align}
The desired estimate \eqref{mora} will follow from an application of the fundamental theorem of calculus combined with an upper bound on $\text{LHS}\eqref{17}$ and a lower bound on $\text{RHS}\eqref{17}$.
The desired upper bound follows immediately from H\"older followed by Sobolev embedding:
\begin{align}\lambdaabel{upper bound}
\Im\int_\Omega \bar u \partial_k u \partial_k a dx
\lambdaesssim \|u\|_{L^6(\Omega)} \|\nabla u\|_{L^2(\Omega)} \|\nabla a\|_{L^3(\Omega)}\lambdaesssim R\|\nabla u\|_{L^2(\Omega)}^2.
\end{align}
Next we seek a lower bound on $\text{RHS}\eqref{17}$. From the divergence theorem, we obtain
\begin{align*}
-2{\mathbb{R}}e\int_\Omega\partial_j(\partial_k u\partial_j \bar u)\partial_k a\,dx
&=-2{\mathbb{R}}e\int_\Omega\partial_j(\partial_k u\partial_j \bar u\partial_k a) \,dx+ 2{\mathbb{R}}e\int_\Omega\partial_k u\partial_j\bar u\partial_j\partial_k a \,dx\\
&=2{\mathbb{R}}e\int_{\partial\Omega}\partial_ku\partial_k a\partial_j \bar u\vec n_j d\sigma(x)+2{\mathbb{R}}e\int_{|x|\lambdae R}\partial_k u\partial_j\bar u\partial_j\partial_k a\,dx\\
&\qquad + 2\int_{|x|\ge R}\partial_k u\partial_j\bar u\partial_j\partial_k a \,dx,
\end{align*}
where $\vec n$ denotes the outer normal to $\Omega^c$. We write
\begin{align*}
\partial_j \bar u\vec n_j=\nabla \bar u\cdot\vec n=\bar u_n.
\end{align*}
Moreover, from the Dirichlet boundary condition, the tangential
derivative of $u$ vanishes on the boundary; thus,
\begin{align*}
\nabla u=(\nabla u\cdot {\vec n})\vec n=u_n\vec n \qtq{and} \partial_k u\partial_k a=u_n a_n.
\end{align*}
Using this, \eqref{cd1}, and \eqref{cd2} we obtain
\begin{align*}
-2{\mathbb{R}}e\int_\Omega\partial_j(\partial_k u\partial_j\bar u)\partial_k a\,dx
&\ge 2\int_{\partial\Omega} a_n|u_n|^2 d\sigma(x)+2\int_{|x|\ge R}\partial_k u\partial_j \bar u\partial_j\partial_k a \,dx\\
&\ge 2\int_{\partial\Omega} a_n|u_n|^2 d\sigma(x)-\frac CR\|\nabla u\|_{L^2(\Omega)}^2.
\end{align*}
Similarly, we can estimate the second term on $\text{RHS}\eqref{17}$ as follows:
\begin{align*}
\frac 12\int_\Omega \partial_k\Delta(|u|^2)\partial_k a \,dx
&=\frac 12\int_\Omega\partial_k\bigl[\Delta(|u|^2)\partial_k a\bigr] \,dx-\frac12\int_{\Omega}\Delta(|u|^2)\Delta a \,dx\\
&=-\frac 12\int_{\partial\Omega}\Delta(|u|^2)\partial_k a\vec n_k d\sigma(x)-\frac 12\int_{\Omega}|u|^2\Delta\Delta a \,dx\\
&=-\int_{\partial\Omega}|\nabla u|^2 a_n d\sigma(x)-\frac12\int_{|x|\lambdae R}|u|^2 \Delta\Delta a \,dx\\
&\quad -\frac 12\int_{|x|\geq R}|u|^2 \Delta\Delta a \,dx\\
&\ge-\int_{\partial\Omega}|u_n|^2 a_n d\sigma(x)-\frac CR \|\nabla u\|_{L^2(\Omega)}^2;
\end{align*}
to obtain the last inequality we have used \eqref{cd1}, \eqref{cd2}, H\"older, and Sobolev embedding.
Finally, to estimate the third term on $\text{RHS}\eqref{17}$
we use \eqref{cd1} and \eqref{cd2}:
\begin{align*}
-\frac 23\int_{\Omega}\partial_k(|u|^6)\partial_k a \,dx
&=\frac23\int_\Omega|u|^6 \Delta a \,dx\ge\frac 43\int_{|x|\le R}\frac{|u|^6}{|x|} \,dx-\frac CR\|u\|_{L^6(\Omega)}^6.
\end{align*}
Collecting all these bounds and using the fact that $a_n\geq 0$ on $\partial \Omega$, we obtain
\begin{align}
\text{RHS}\eqref{17}\gtrsim \int_{|x|\le R}\frac{|u|^6}{|x|} \,dx - R^{-1} \bigl[ \|\nabla u\|_{L^2(\Omega)}^2 +\|u\|_{L^6(\Omega)}^6 \bigr].\label{lower bound}
\end{align}
Integrating \eqref{17} over $I$ and using \eqref{upper bound} and \eqref{lower bound} we derive
\begin{align*}
\int_I\int_{|x|\lambdae R, x\in \Omega}\frac{|u|^6}{|x|} \,dx\,dt\lambdaesssim R+\frac{|I|}{R}.
\end{align*}
Taking $R=A|I|^{\frac 12}$ yields \eqref{mora}. This completes the proof of the lemma.
\end{proof}
We record next a local smoothing result. While the local smoothing estimate does guarantee local energy decay, it falls short of fulfilling the role of a dispersive estimate. In particular, local smoothing does not preclude the possibility that energy refocuses finitely many times; indeed, it is known to hold in geometries that are merely non-trapping. Nevertheless, it does play a key role in the proof of the Strichartz estimates. The version we need requires uniformity under translations and dilations; this necessitates some mild modifications of the usual argument.
\begin{lem}[Local smoothing]\lambdaabel{L:local smoothing}
Let $u=e^{it\Delta_\Omega} u_0$. Then
\begin{align*}
\int_{\mathbb{R}} \int_\Omega |\nabla u(t,x)|^2 \bigl\lambdaangle R^{-1} (x-z)\bigr\rangle^{-3} \,dx\,dt \lambdaesssim R \| u_0 \|_{L^2(\Omega)} \|\nabla u_0 \|_{L^2(\Omega)},
\end{align*}
uniformly for $z\in {\mathbb{R}}^3$ and $R>0$.
\end{lem}
\begin{proof}
We adapt the proof of local smoothing using the Morawetz identity
from the Euclidean setting. For the level of generality needed
here, we need to combine two Morawetz identities: one adapted to the
obstacle and a second adapted to the $R$ ball around $z$.
Recall that the origin is an interior point of $\Omega^c$. Given $x\in\partial\Omega$, let $\vec n(x)$ denote the outward normal to the obstacle at this point. As $\Omega^c$ is convex, there is a constant $C>0$ independent of $z$ and $R$ so that
\begin{align}\lambdaabel{E:ls geom}
\bigl| \tfrac{R^{-1}(x-z)}{\lambdaangle R^{-1}(x-z)\rangle} \cdot \vec n(x) \bigr| \lambdaeq C \tfrac{x}{|x|}\cdot \vec n(x) \qtq{for all} x\in\partial\Omega.
\end{align}
Indeed, the right-hand side is bounded away from zero uniformly for $x\in\partial\Omega$, while the vectors $\frac{R^{-1}(x-z)}{\langle R^{-1}(x-z)\rangle}$ all lie in the closed unit ball.
For $C>0$ as above, let
$$
F(t) := \int_\Omega \Im( \bar u \nabla u) \cdot \nabla a\, dx \qtq{with} a(x) := C |x| + R \lambdaangle R^{-1} (x-z) \bigr\rangle.
$$
After integrating by parts several times (cf. Lemma~\ref{L:morawetz}) and using that
$$
-\Delta\Delta a(x) \geq 0 \qtq{and} \partial_j\partial_k a(x) \geq \tfrac{1}{R} \lambdaangle R^{-1} (x-z) \bigr\rangle^{-3} \delta_{jk}
\quad \text{(as symmetric matrices)}
$$
one obtains
$$
\partial_t F(t) \geq 2 \int_\Omega \frac{|\nabla u(t,x)|^2 \,dx}{R \lambdaangle R^{-1} (x-z) \rangle^{3}}
+ \int_{\partial\Omega} |\nabla u(t,x)|^2 \bigl[ \nabla a(x)\cdot \vec n(x) \bigr] \,d\sigma(x).
$$
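The Hessian bound invoked above can be checked directly (a brief sketch): writing $b(x):=\langle x\rangle$, one has
$$
\partial_j\partial_k b(x)=\frac{\delta_{jk}}{\langle x\rangle}-\frac{x_jx_k}{\langle x\rangle^{3}}\geq \frac{\delta_{jk}}{\langle x\rangle^{3}}
\quad\text{as symmetric matrices},
$$
with eigenvalue $\langle x\rangle^{-3}$ in the radial direction and $\langle x\rangle^{-1}$ on its orthogonal complement. By scaling, $\partial_j\partial_k\bigl[R\langle R^{-1}(x-z)\rangle\bigr]=R^{-1}(\partial_j\partial_k b)\bigl(R^{-1}(x-z)\bigr)$, while the term $C|x|$ contributes a further positive semi-definite matrix.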
Moreover, by \eqref{E:ls geom} the integral over $\partial\Omega$ is positive since
$$
\nabla a(x)\cdot \vec n(x) = \bigl[C \tfrac{x}{|x|} + \tfrac{R^{-1}(x-z)}{\lambdaangle R^{-1}(x-z)\rangle} \bigr]\cdot \vec n(x) \geq 0
\qtq{for} x\in\partial\Omega.
$$
Noting that $|F(t)|\lambdaeq (C+1) \| u(t) \|_{L^2(\Omega)} \| \nabla u(t) \|_{L^2(\Omega)}$, the lemma now follows by applying the
fundamental theorem of calculus.
\end{proof}
The remainder term in the linear profile decomposition Theorem~\ref{T:LPD} goes to zero in $L^{10}_{t,x}$; however, in order
to prove the approximate solution property (cf. Claim~3 in the proof of Proposition~\ref{P:PS}), we need to show
smallness in Strichartz spaces with one derivative. This is achieved via local smoothing (cf. Lemma~3.7 from
\cite{keraani-h1}); the uniformity in Lemma~\ref{L:local smoothing} is essential for this application.
\begin{cor}\lambdaabel{C:Keraani3.7}
Given $w_0\in \dot H^1_D(\Omega)$,
$$
\| \nabla e^{it\Delta_\Omega} w_0 \|_{L^{\frac52}_{t,x}([\tau-T,\tau+T]\times\{|x-z|\lambdaeq R\})} \lambdaesssim
T^{\frac{31}{180}} R^{\frac7{45}} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}({\mathbb{R}}\times\Omega)}^{\frac1{18}}
\| w_0 \|_{\dot H^1_D(\Omega)}^{\frac{17}{18}},
$$
uniformly in $w_0$ and the parameters $R,T > 0$, $\tau\in{\mathbb{R}}$, and $z\in{\mathbb{R}}^3$.
\end{cor}
\begin{proof}
Replacing $w_0$ by $e^{i\tau\Delta_\Omega} w_0$, we see that it suffices to treat the case $\tau=0$.
By H\"older's inequality,
\begin{align*}
\| \nabla e^{it\Delta_\Omega} & w_0
\|_{L^{\frac52}_{t,x}([-T,T]\times\{|x-z|\lambdaeq R\})} \\
&\lambdaesssim \| \nabla e^{it\Delta_\Omega} w_0
\|_{L^2_{t,x}([-T,T]\times\{|x-z|\lambdaeq R\})}^{\frac13}
\|\nabla e^{it\Delta_\Omega} w_0
\|_{L^{\frac{20}7}_{t,x}([-T,T]\times\Omega)}^{\frac23}.
\end{align*}
We will estimate the two factors on the right-hand side separately. By the H\"older and Strichartz inequalities, as well as the equivalence
of Sobolev spaces, we estimate
\begin{align*}
\|\nabla e^{it\Delta_\Omega} w_0 \|_{L^{\frac{20}7}_{t,x}([-T,T]\times\Omega)}
&\lambdaesssim T^{\frac 18} \| (-\Delta_\Omega)^{\frac12}
e^{it\Delta_\Omega} w_0 \|_{L^{\frac{40}9}_t L^{\frac{20}7}_x}
\lambdaesssim T^{\frac 18} \|w_0\|_{\dot H^1_D(\Omega)} .
\end{align*}
In this way, the proof of the corollary reduces to showing
\begin{align}\lambdaabel{E:LS1022}
\| \nabla e^{it\Delta_\Omega} w_0
\|_{L^2_{t,x}([-T,T]\times\{|x-z|\lambdaeq R\})}
\lambdaesssim T^{\frac4{15}} R^{\frac7{15}} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac16} \| w_0 \|_{\dot H^1_D(\Omega)}^{\frac56}.
\end{align}
Given $N>0$, using the H\"older, Bernstein, and Strichartz inequalities, as well as the equivalence of Sobolev spaces, we have
\begin{align*}
\bigl\| \nabla e^{it\Delta_\Omega} P_{< N}^\Omega & w_0 \bigr\|_{L^2_{t,x}([-T,T]\times\{|x-z|\lambdaeq R\})} \\
&\lambdaesssim T^{\frac25} R^{\frac9{20}} \bigl\| \nabla e^{it\Delta_\Omega} P_{< N}^\Omega w_0 \bigr\|_{L^{10}_tL^{\frac{20}7}_x} \\
&\lambdaesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| (-\Delta_\Omega)^{\frac38} e^{it\Delta_\Omega} P_{< N}^\Omega w_0 \|_{L^{10}_t L^{\frac{20}7}_x} \\
&\lambdaesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac14}
\| (-\Delta_\Omega)^{\frac12} e^{it\Delta_\Omega} w_0 \|_{L^{10}_t L^{\frac{30}{13}}_x}^{\frac34} \\
&\lambdaesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac14} \| w_0 \|_{\dot H^1_D(\Omega)}^{\frac34}.
\end{align*}
We estimate the high frequencies using Lemma~\ref{L:local smoothing} and the Bernstein inequality:
\begin{align*}
\bigl\| \nabla e^{it\Delta_\Omega} P_{\geq N}^\Omega w_0 \bigr\|_{L^2_{t,x}([-T,T]\times\{|x-z|\lambdaeq R\})} ^2
&\lambdaesssim R \| P_{\geq N}^\Omega w_0 \|_{L^2_x} \| \nabla P_{\geq N}^\Omega w_0 \|_{L^2_x} \\
&\lambdaesssim R N^{-1} \| w_0 \|_{\dot H^1_D(\Omega)}^2.
\end{align*}
The estimate \eqref{E:LS1022} now follows by optimizing in the choice of $N$.
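Concretely, one admissible choice (up to rounding $N$ to the nearest dyadic value) is
$$
N\sim R^{\frac1{15}}\,T^{-\frac8{15}}\,\|e^{it\Delta_\Omega}w_0\|_{L^{10}_{t,x}}^{-\frac13}\,\|w_0\|_{\dot H^1_D(\Omega)}^{\frac13},
$$
which balances the low- and high-frequency estimates; both contributions are then bounded by the right-hand side of \eqref{E:LS1022}.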
\end{proof}
\section{Convergence of domains}\lambdaabel{S:Domain Convergence}
The region $\Omega$ is not invariant under scaling or translation; indeed, under suitable choices of such operations,
the obstacle may shrink to a point, march off to infinity, or even expand to fill a halfspace. The objective of this
section is to prove some rudimentary statements about the behaviour of functions of the Dirichlet Laplacian under
such circumstances. In the next section, we address the much more subtle question of the convergence of propagators
in Strichartz spaces.
We begin by defining a notion of convergence of domains that is general enough to cover the scenarios discussed in this
paper, without being so general as to make the arguments unnecessarily complicated. Throughout, we write
$$
G_{\mathcal O}(x,y;z) := (-\Delta_{\mathcal O} - z)^{-1}(x,y)
$$
for the Green's function of the Dirichlet Laplacian in a general open set ${\mathcal O}$. This function is symmetric under
the interchange of $x$ and $y$.
\begin{defn}\lambdaabel{D:converg}
Given a sequence ${\mathcal O}_n$ of open subsets of ${\mathbb{R}}^3$ we define
$$
\tlim {\mathcal O}_n := \{ x\in {\mathbb{R}}^3 :\, \lambdaiminf_{n\to\infty} \dist(x,{\mathcal O}_n^c) > 0\}.
$$
Writing $\tilde{\mathcal O}=\tlim {\mathcal O}_n$, we say ${\mathcal O}_n\to{\mathcal O}$ if the following two conditions hold: ${\mathcal O}\triangle\tilde{\mathcal O}$ is a finite set and
\begin{align}\lambdaabel{cr2}
G_{{\mathcal O}_n}(x,y;z)\to G_{{\mathcal O}}(x,y;z)
\end{align}
for all $z\in (-2,-1)$, all $x\in \tilde{\mathcal O}$, and uniformly for $y$ in compact subsets of $\tilde{\mathcal O}\setminus \{x\}$.
\end{defn}
The arguments that follow adapt immediately to allow the symmetric difference ${\mathcal O}\triangle\tilde{\mathcal O}$ to be a set
of vanishing Minkowski $1$-content, rather than being merely finite. The role of this hypothesis is to guarantee that this
set is removable for $\dot H^1_D({\mathcal O})$; see Lemma~\ref{L:dense} below. We restrict $z$ to the interval $(-2,-1)$
in \eqref{cr2} for simplicity and because it allows us to invoke the maximum principle when checking this
hypothesis. Nevertheless, this implies convergence for all $z\in{\mathbb{C}}\setminus[0,\infty)$, as we will show in Lemma~\ref{lm:allz}.
\begin{lem}\lambdaabel{L:dense} If ${\mathcal O}_n\to {\mathcal O}$, then $C^\infty_c(\tilde{\mathcal O})$ is dense in $\dot H^1_D({\mathcal O})$.
\end{lem}
\begin{proof}
By definition, $C^\infty_c({\mathcal O})$ is dense in $\dot H^1_D({\mathcal O})$. Given $f\in C^\infty_c({\mathcal O})$ and ${\varepsilon}>0$ define
$
f_{\varepsilon}(x) := f(x) \prod_{k=1}^m \theta\bigl(\tfrac{x-x_k}{{\varepsilon}}\bigr)
$
where $\{x_k\}_{k=1}^m$ enumerates ${\mathcal O}\triangle\tilde{\mathcal O}$ and $\theta:{\mathbb{R}}^3\to[0,1]$ is a smooth function that vanishes when $|x|\lambdaeq1$
and equals one when $|x|\geq2$.
Then $f_{\varepsilon} \in C^\infty_c(\tilde{\mathcal O})\cap C^\infty_c({\mathcal O})$ and
$$
\| f - f_{\varepsilon} \|_{\dot H^1({\mathbb{R}}^3)} \lambdaesssim \sqrt{m{\varepsilon}^3} \; \|\nabla f\|_{L^\infty} + \sqrt{m{\varepsilon}}\; \|f\|_{L^\infty}.
$$
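Indeed, the gradient splits as
$$
\nabla(f-f_{\varepsilon})=\Bigl(1-\prod_{k=1}^m\theta\bigl(\tfrac{x-x_k}{{\varepsilon}}\bigr)\Bigr)\nabla f-f\,\nabla\prod_{k=1}^m\theta\bigl(\tfrac{x-x_k}{{\varepsilon}}\bigr),
$$
and both terms are supported in the union of the balls $\{|x-x_k|\le 2{\varepsilon}\}$, which has measure $O(m{\varepsilon}^3)$; as the gradient of the product of cutoffs is $O({\varepsilon}^{-1})$ there, this yields the two terms in the displayed estimate.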
As ${\varepsilon}$ can be chosen arbitrarily small, the proof is complete.
\end{proof}
In what follows, we will need some crude bounds on the Green's function that hold uniformly for the rescaled domains we consider.
While several existing methods could be used to obtain more precise results (cf. \cite{Hislop}), we prefer to give a simple
argument that gives satisfactory bounds and for which the needed uniformity is manifest.
\begin{lem}\lambdaabel{L:G bnds}
For all open sets ${\mathcal O}\subseteq{\mathbb{R}}^3$ and $z\in{\mathbb{C}}\setminus[0,\infty)$,
\begin{align}\lambdaabel{moron}
\bigl|G_{\mathcal O}(x,y;z)\bigr| \lambdaesssim \frac{|z|^2}{(\Im z)^2} e^{-\frac12{\mathbb{R}}e \sqrt{-z}|x-y|} \Bigl(\frac1{|x-y|} + \sqrt{\Im z}\Bigr).
\end{align}
Moreover, if\/ ${\mathbb{R}}e z\lambdaeq 0$, then
\begin{align}\lambdaabel{moron'}
\bigl|G_{\mathcal O}(x,y;z)\bigr| \lambdaesssim e^{-\frac12{\mathbb{R}}e \sqrt{-z}|x-y|} \Bigl(\frac1{|x-y|} + \sqrt{\Im z}\Bigr).
\end{align}
\end{lem}
\begin{proof}
By the parabolic maximum principle, $0\lambdaeq e^{t\Delta_{{\mathcal O}}}(x,y) \lambdaeq e^{t\Delta_{{\mathbb{R}}^3}}(x,y)$. Thus,
$$
|G_{{\mathcal O}} (x,y;z)| = \bigg| \int_0^\infty e^{tz + t\Delta_{{\mathcal O}}}(x,y) \,dt \biggr| \lambdaeq \int_0^\infty e^{t{\mathbb{R}}e(z) + t\Delta_{{\mathbb{R}}^3}}(x,y) \,dt
= G_{{\mathbb{R}}^3} (x,y;{\mathbb{R}}e z)
$$
for all ${\mathbb{R}}e z \lambdaeq 0$. Using the explicit formula for the Green's function in ${\mathbb{R}}^3$, we deduce
\begin{equation}\lambdaabel{Go bound}
| G_{{\mathcal O}} (x,y;z) | \lambdaeq \frac{e^{-\sqrt{-{\mathbb{R}}e z}|x-y|}}{4\pi|x-y|} \qtq{whenever} {\mathbb{R}}e z\lambdaeq0.
\end{equation}
(When $z\in(-\infty,0]$ this follows more simply from the elliptic maximum principle.)
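For reference, the explicit formula invoked here is
$$
G_{{\mathbb{R}}^3}(x,y;z)=\frac{e^{-\sqrt{-z}\,|x-y|}}{4\pi|x-y|} \qtq{for} z\in{\mathbb{C}}\setminus[0,\infty),
$$
with the branch of the square root chosen so that $\Re\sqrt{-z}>0$.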
Note that the inequality \eqref{Go bound} implies \eqref{moron'} in the sector ${\mathbb{R}}e z < -|\Im z|$. Indeed, in this region we have
${\mathbb{R}}e \sqrt{-z} \lambdaeq \sqrt{|z|} \lambdaeq 2^{\frac14} \sqrt{-{\mathbb{R}}e z}$. In the remaining cases of \eqref{moron'}, namely, $-|\Im z|\lambdaeq {\mathbb{R}}e z\lambdaeq0$, we have
$1\lambdaeq\frac{|z|^2}{(\Im z)^2}\lambdaeq 2$ and so in this case \eqref{moron'} follows from \eqref{moron}. Thus, it remains to establish \eqref{moron}.
To obtain the result for general $z\in{\mathbb{C}}\setminus[0,\infty)$, we combine \eqref{Go bound} with a crude bound elsewhere and the Phragm\'en--Lindel\"of principle.
From \eqref{Go bound} and duality, we have
$$
\bigl\| (-\Delta_{{\mathcal O}}+|z|)^{-1} \bigr\|_{L^1\to L^2} = \bigl\| (-\Delta_{{\mathcal O}}+|z|)^{-1} \bigr\|_{L^2\to L^\infty} \lambdaesssim |z|^{-1/4}.
$$
Combining this information with the identity
$$
(-\Delta_{{\mathcal O}}-z)^{-1} = (-\Delta_{{\mathcal O}}+|z|)^{-1} +
(-\Delta_{{\mathcal O}}+|z|)^{-1}\biggl[\frac{(z+|z|)(-\Delta_{{\mathcal O}}+|z|)}{-\Delta_{{\mathcal O}}-z}\biggr](-\Delta_{{\mathcal O}}+|z|)^{-1}
$$
and elementary estimations of the $L^2$-norm of the operator in square brackets, we deduce that
\begin{equation}\lambdaabel{Go bound'}
|G_{{\mathcal O}} (x,y;z)| \lambdaesssim \frac{1}{|x-y|} + \frac{|z|^{3/2}}{|\Im z|} \qtq{for all} z\in{\mathbb{C}}\setminus[0,\infty).
\end{equation}
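For the record, the estimation alluded to amounts to the spectral-theorem bound (a sketch)
$$
\Bigl\|\frac{(z+|z|)(-\Delta_{{\mathcal O}}+|z|)}{-\Delta_{{\mathcal O}}-z}\Bigr\|_{L^2\to L^2}
\le \sup_{\lambda\ge 0}\frac{\bigl|z+|z|\bigr|\,(\lambda+|z|)}{|\lambda-z|}\lesssim\frac{|z|^2}{|\Im z|},
$$
which follows since $|\lambda-z|\ge|\Im z|$ for all $\lambda\ge0$ and $|\lambda-z|\ge\tfrac\lambda2$ once $\lambda\ge2|z|$; combined with the two $|z|^{-1/4}$ bounds above, this gives the second term in \eqref{Go bound'}.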
Using \eqref{Go bound} when ${\mathbb{R}}e z \lambdaeq 0$ and \eqref{Go bound'} when ${\mathbb{R}}e z >0$, we see that for given ${\varepsilon}>0$ we have
\begin{equation}\lambdaabel{log Go}
\lambdaog \biggl|\frac{G_{{\mathcal O}} (x,y;z)}{(z+i{\varepsilon})^2}\biggr|
\lambdaeq -|x-y|{\mathbb{R}}e\bigl(\sqrt{i{\varepsilon}-z}\bigr) + \lambdaog\bigl(\tfrac1{|x-y|}+\sqrt{\varepsilon}\bigr) + 2\lambdaog\bigl(\tfrac1{\varepsilon}\bigr) + C
\end{equation}
for a universal constant $C$ and all $z$ with $\Im z ={\varepsilon}$. By the Phragm\'en--Lindel\"of principle, this inequality extends to the
entire halfspace $\Im z \geq {\varepsilon}$. Indeed, LHS\eqref{log Go} is subharmonic and converges to $-\infty$ at infinity, while RHS\eqref{log Go}
is harmonic and grows sublinearly. To obtain the lemma at a fixed $z$ in the upper halfplane we apply \eqref{log Go} with ${\varepsilon}=\frac12\Im z$
and use the elementary inequality
$$
{\mathbb{R}}e \sqrt{-u-\smash[b]{\tfrac i2 v}} \geq \tfrac12 {\mathbb{R}}e \sqrt{-u-iv} \qtq{for all} u\in{\mathbb{R}} \qtq{and} v>0.
$$
The result for the lower halfplane follows by complex conjugation symmetry.
\end{proof}
\begin{lem}\lambdaabel{lm:allz} If ${\mathcal O}_n\to {\mathcal O}$, then \eqref{cr2} holds uniformly for $z$ in compact subsets of ${\mathbb{C}}\setminus[0,\infty)$,
$x\in \tilde{\mathcal O}$, and $y$ in compact subsets of $\tilde{\mathcal O}\setminus \{x\}$.
\end{lem}
\begin{proof} We argue by contradiction. Suppose not. Then there exist an $x\in \tilde{\mathcal O}$ and a sequence $y_n\to y_\infty\in \tilde {\mathcal O}\setminus \{x\}$ so that
\begin{align*}
f_n(z):=G_{{\mathcal O}_n}(x, y_n;z)
\end{align*}
does not converge uniformly to $G_{\mathcal O}(x,y_\infty;z)$ on some compact subset of ${\mathbb{C}}\setminus[0,\infty)$.
By Lemma~\ref{L:G bnds}, the functions $\{f_n\}$ form a normal family and so, after passing to a subsequence, they converge
uniformly on compact sets to some $f(z)$. As $G_{{\mathcal O}_n}(x, y_n;z)\to G_{\mathcal O}(x,y_\infty;z)$ whenever $z\in (-2,-1)$,
the limit must be $f(z)=G_{\mathcal O}(x,y_\infty;z)$. As every subsequence thus admits a further subsequence converging locally uniformly to $G_{\mathcal O}(x,y_\infty;z)$, the full sequence converges, which contradicts our assumption.
\end{proof}
Given sequences of scaling and translation parameters $N_n\in 2^{\mathbb{Z}}$ and $x_n\in\Omega$, we wish to consider the domains $N_n(\Omega-\{x_n\})$.
Writing $d(x_n):=\dist(x_n,\Omega^c)$ and passing to a subsequence, we identify four specific scenarios:
\begin{CI}
\item Case 1: $N_n\equiv N_\infty$ and $x_n\to x_\infty\in \Omega$. Here we set $\Omega_n:=\Omega$.
\item Case 2: $N_n\to 0$ and $-N_n x_n \to x_\infty\in{\mathbb{R}}^3$. Here $\Omega_n:=N_n(\Omega-\{x_n\})$.
\item Case 3: $N_nd(x_n)\to \infty$. Here $\Omega_n:=N_n(\Omega-\{x_n\})$.
\item Case 4: $N_n\to\infty$ and $N_n d(x_n)\to d_\infty>0$. Here $\Omega_n:=N_nR_n^{-1}(\Omega-\{x_n^*\})$, where $x_n^*\in\partial\Omega$ and $R_n\in SO(3)$ are chosen so that $d(x_n)=|x_n-x_n^*|$ and $R_n e_3 = \frac{x_n-x_n^*}{|x_n-x_n^*|}$.
\end{CI}
The seemingly missing possibility, namely, $N_n \gtrsim 1$ and $N_n d(x_n)\to 0$ will be precluded in the proof of Proposition~\ref{P:inverse Strichartz}.
In Case~1, the domain modifications are so tame as to not require further analysis, as is reflected by the choice of $\Omega_n$. The definition of
$\Omega_n$ in Case~4 incorporates additional translations and rotations to normalize the limiting halfspace to be
$$
{\mathbb{H}} := \{ x\in{\mathbb{R}}^3 : e_3 \cdot x >0 \} \qtq{where} e_3:=(0,0,1).
$$
In Cases~2 and~3, the limiting domain is ${\mathbb{R}}^3$, as we now show.
\begin{prop}\lambdaabel{P:convdomain}
In Cases~2 and~3, $\Omega_n\to {\mathbb{R}}^3;$ in Case 4, $\Omega_n\to {\mathbb{H}}$.
\end{prop}
\begin{proof} In Case 2, we have $\tlim \Omega_n = {\mathbb{R}}^3\setminus\{x_\infty\}$. It remains to show convergence of the Green's functions.
Let $C_0>0$ be a constant to be chosen later. We will show that for $z\in(-2,-1)$ and $n$ sufficiently large,
\begin{align}\lambdaabel{G to R lb}
G_{\Omega_n}(x,y;z)\ge G_{{\mathbb{R}}^3}(x,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x,-x_nN_n;z)
\end{align}
for $x\in {\mathbb{R}}^3\setminus\{ x_\infty\}$ fixed and $y$ in any compact subset $K$ of ${\mathbb{R}}^3\setminus\{x,x_\infty\}$. Indeed, for $n$ large enough we
have $x\in \Omega_n$ and $K\subseteq\Omega_n$. Also, for $x_0\in \partial\Omega_n$ we have $|x_0+x_n N_n|\lambdae \diam(\Omega^c)N_n$. Thus,
for $z\in(-2,-1)$ we estimate
\begin{align*}
G_{{\mathbb{R}}^3}(x_0,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x_0,-x_nN_n;z)&=\frac{e^{-\sqrt{-z}|x_0-y|}}{4\pi|x_0-y|}
-C_0N_n\frac{e^{-\sqrt{-z}|x_0+x_nN_n|}}{4\pi|x_0+x_nN_n|}\\
&\le \frac{e^{-\sqrt{-z}|x_0-y|}}{4\pi|x_0-y|}-C_0\frac{e^{-\sqrt{-z}\,\diam(\Omega^c)N_n}}{4\pi \diam(\Omega^c)}<0,
\end{align*}
provided $C_0>\sup_{y\in K}\frac {\diam(\Omega^c)}{|x_0-y|}$ and $n$ is sufficiently large. Thus \eqref{G to R lb} follows from the maximum principle.
The maximum principle also implies $G_{{\mathbb{R}}^3}(x,y;z)\ge G_{\Omega_n}(x,y;z) \geq 0$. Combining this with \eqref{G to R lb},
we obtain
\begin{align*}
G_{{\mathbb{R}}^3}(x,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x,-x_nN_n;z)\lambdae G_{\Omega_n}(x,y;z)\lambdae G_{{\mathbb{R}}^3}(x,y;z)
\end{align*}
for $n$ sufficiently large. As $N_n\to0$ and $-x_nN_n\to x_\infty$, this proves the claim in Case~2.
Next we consider Case 3. From the condition $N_nd(x_n)\to\infty$ it follows easily that $\tlim\Omega_n={\mathbb{R}}^3$.
It remains to show the convergence of the Green's functions. By the maximum principle, $G_{{\mathbb{R}}^3}(x,y;z)\ge G_{\Omega_n}(x,y;z)$; thus, it suffices to prove a suitable lower bound. To this end, let ${\mathbb{H}}_n$ denote the halfspace containing $0$ for which $\partial{\mathbb{H}}_n$ is the hyperplane
perpendicularly bisecting the line segment from $0$ to the nearest point on $\partial\Omega_n$. Note that $\dist(0,\partial{\mathbb{H}}_n)\to\infty$
as $n\to\infty$. Given $x\in{\mathbb{R}}^3$ and a compact set $K\subset{\mathbb{R}}^3\setminus\{x\}$, the maximum principle guarantees that
\begin{align*}
G_{\Omega_n}(x,y;z)\ge G_{{\mathbb{H}}_n}(x,y;z) \qtq{for all} y\in K \qtq{and} z\in(-2,-1),
\end{align*}
as long as $n$ is large enough that $x\in {\mathbb{H}}_n$ and $K\subset {\mathbb{H}}_n$. Now
$$
G_{{\mathbb{H}}_n}(x,y;z)=G_{{\mathbb{R}}^3}(x,y;z)-G_{{\mathbb{R}}^3}(x,y_n;z),
$$
where $y_n$ is the reflection of $y$ across $\partial{\mathbb{H}}_n$. Thus,
\begin{align*}
G_{{\mathbb{H}}_n}(x,y;z)=\frac{e^{-\sqrt{-z}|x-y|}}{4\pi|x-y|} - \frac{e^{-\sqrt{-z}|x-y_n|}}{4\pi|x-y_n|}\to G_{{\mathbb{R}}^3}(x,y;z) \qtq{as} n\to \infty.
\end{align*}
This completes the treatment of Case 3.
It remains to prove the convergence in Case 4, where $\Omega_n=N_nR_n^{-1}(\Omega-\{x_n^*\})$,
$N_n\to\infty$, and $N_n d(x_n)\to d_\infty>0$. It is elementary to see that $\tlim \Omega_n = {\mathbb{H}}$; in particular, ${\mathbb{H}}\subset \Omega_n$ for all $n$.
We need to verify that
\begin{align*}
G_{\Omega_n}(x,y;z)\to G_{{\mathbb{H}}}(x,y;z)\qtq{for} z\in (-2,-1), \quad x\in {\mathbb{H}},
\end{align*}
and uniformly for $y$ in a compact set $K\subset{\mathbb{H}}\setminus\{x\}$. By the maximum principle, $G_{\Omega_n}(x,y;z)\ge G_{\mathbb{H}}(x,y;z)$.
On the other hand, we will show that
\begin{align}\lambdaabel{s21}
G_{\Omega_n}(x,y;z)\lambdae G_{\mathbb{H}}(x,y;z)+ C{N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3},
\end{align}
for any $0<{\varepsilon}<\frac 13$ and a large constant $C$ depending on $K$. As $N_n\to\infty$, these two bounds together
immediately imply the convergence of the Green's functions.
We now prove the upper bound \eqref{s21}. From the maximum principle it suffices to show that this holds just for $x\in \partial\Omega_n$,
which amounts to
\begin{align}\lambdaabel{s22}
|G_{{\mathbb{H}}}(x,y;z)| \lambdae C {N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3} \quad \text{for all } z\in (-2,-1), \ x\in \partial\Omega_n,\text{ and } y\in K. \!\!
\end{align}
Note that $G_{{\mathbb{H}}}$ is negative for such $x$. Recall also that
\begin{align*}
G_{{\mathbb{H}}}(x,y;z)=\frac 1{4\pi}\biggl(\frac1{|x-y|}e^{-\sqrt{-z}|x-y|}-\frac 1{|x-\bar y|}e^{-\sqrt{-z}|x-\bar y|}\biggr),
\end{align*}
where $\bar y=(y^{\perp},-y_3)$ denotes the reflection of $y$ across $\partial{\mathbb{H}}$.
If $x\in\partial\Omega_n$ with $|x|\ge N_n^{\varepsilon}$ then we have $|x-y|\sim|x-\bar y|\gtrsim N_n^{\varepsilon}$ for $n$ large and so
\begin{align*}
|G_{{\mathbb{H}}}(x,y;z)|\lambdae CN_n^{-{\varepsilon}}e^{-\sqrt{-z}x_3},
\end{align*}
provided we choose $C \gtrsim \sup_{y\in K} \exp\{\sqrt{2}\,|y_3|\}$.
Now suppose $x\in\partial\Omega_n$ with $|x|\lambdae N_n^{{\varepsilon}}$. As the curvature of $\partial\Omega_n$ is $O(N_n^{-1})$, for such
points we have $|x_3| \lambdaesssim N_n^{2{\varepsilon}-1}$. Correspondingly,
$$
0 \lambdaeq |x-y| - |x-\bar y| = \frac{|x-y|^2 - |x-\bar y|^2}{|x-y|+|x-\bar y|} = \frac{4|x_3||y_3|}{|x-y|+|x-\bar y|} \lambdaesssim_K N_n^{2{\varepsilon}-1}.
$$
Thus, by the Lipschitz character of $r\mapsto e^{-\sqrt{-z}r}/r$ on compact subsets of $(0,\infty)$,
$$
|G_{{\mathbb{H}}}(x,y;z)| \lambdaesssim_K N_n^{2{\varepsilon}-1}.
$$
On the other hand, since $|x_3| \lambdaesssim N_n^{2{\varepsilon}-1}\to 0$ as $n\to\infty$,
$$
{N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3} \gtrsim N_n^{-{\varepsilon}}.
$$
As $0<{\varepsilon}<\frac13$, this completes the justification of \eqref{s22} and so the proof of the proposition in Case~4.
\end{proof}
We conclude this section with two results we will need in Sections~\ref{S:LPD} and~\ref{S:Nonlinear Embedding}.
\begin{prop}\lambdaabel{P:converg}
Assume $\Omega_n\to \Omega_\infty$ in the sense of Definition~\ref{D:converg} and let $\Theta\in C_c^{\infty}((0,\infty))$. Then
\begin{align}\lambdaabel{E:P converg1}
\|[\Theta(-\Delta_{\Omega_n})-\Theta(-\Delta_{\Omega_\infty})]\delta_y\|_{\dot H^{-1}({\mathbb{R}}^3)} \to 0
\end{align}
uniformly for $y$ in compact subsets of $\,\tlim \Omega_n$. Moreover, for any fixed $t\in {\mathbb{R}}$ and $h\in C_c^{\infty}(\tlim \Omega_n)$, we have
\begin{align}\lambdaabel{E:P converg2}
\lambdaim_{n\to\infty}\|e^{it\Delta_{\Omega_n}}h-e^{it\Delta_{\Omega_\infty}} h\|_{\dot H^{-1}({\mathbb{R}}^3)}=0.
\end{align}
\end{prop}
\begin{proof}
By the Helffer-Sj\"ostrand formula (cf. \cite[p. 172]{HelfferSjostrand}), we may write
\begin{align*}
\Theta(-\Delta_{{\mathcal O}})(x,y)=\int_{{\mathbb{C}}} G_{{\mathcal O}}(x,y;z)\rho_\Theta(z) \, d{\hbox{Area}},
\end{align*}
where $\rho_\Theta\in C_c^\infty({\mathbb{C}})$ with $|\rho_{\Theta}(z)|\lesssim |\Im z|^{20}$. Note that by Lemma~\ref{L:G bnds}
this integral is absolutely convergent; moreover, we obtain the following bounds:
\begin{align}\lambdaabel{s18}
|\Theta(-\Delta_{{\mathcal O}})(x,y)|\lambdaesssim |x-y|^{-1}\lambdaangle x-y\rangle^{-10},
\end{align}
uniformly for any domain ${\mathcal O}$.
As $\Omega_n\to \Omega_\infty$, applying dominated convergence in the Helffer-Sj\"ostrand formula also guarantees that
$\Theta(-\Delta_{\Omega_n})(x,y)\to \Theta(-\Delta_{\Omega_\infty})(x,y)$ for each
$x\in \tilde\Omega_\infty:=\tlim \Omega_n$ fixed and uniformly for $y$ in
compact subsets of $\tilde\Omega_\infty\setminus\{x\}$. Combining this with \eqref{s18} and applying
the dominated convergence theorem again yields
\begin{align*}
\|\Theta(-\Delta_{\Omega_n})\delta_y-\Theta(-\Delta_{\Omega_\infty})\delta_y\|_{L_x^{\frac 65}}\to 0,
\end{align*}
which proves \eqref{E:P converg1} since by Sobolev embedding $L_x^{6/5}({\mathbb{R}}^3)\subseteq \dot H^{-1}_x({\mathbb{R}}^3)$.
We turn now to \eqref{E:P converg2}. From the $L^{6/5}_x$-convergence of Littlewood--Paley expansions (cf. \cite[\S4]{KVZ:HA}),
we see that given ${\varepsilon}>0$ and $h\in C_c^{\infty}(\tlim \Omega_n)$, there is a smooth function $\Theta:(0,\infty)\to[0,1]$
of compact support so that
$$
\| [1 - \Theta(-\Delta_{\Omega_\infty})] h \|_{\dot H^{-1}({\mathbb{R}}^3)} \lambdaeq {\varepsilon}.
$$
Combining this with \eqref{E:P converg1} we deduce that
$$
\lambdaimsup_{n\to\infty} \| [1 - \Theta(-\Delta_{\Omega_n})] h \|_{\dot H^{-1}({\mathbb{R}}^3)} \lambdaeq {\varepsilon}.
$$
In this way, the proof of \eqref{E:P converg2} reduces to showing
$$
\lim_{n\to\infty}\bigl\|e^{it\Delta_{\Omega_n}}\Theta(-\Delta_{\Omega_n}) h-e^{it\Delta_{\Omega_\infty}} \Theta(-\Delta_{\Omega_\infty}) h\bigr\|_{\dot H^{-1}({\mathbb{R}}^3)}=0,
$$
which follows immediately from \eqref{E:P converg1}.
\end{proof}
\begin{lem}[Convergence of $\dot H^1_D$ spaces]\lambdaabel{L:n3}
Let $\Omega_n\to \Omega_\infty$ in the sense of Definition~\ref{D:converg}.
Then we have
\begin{align}\lambdaabel{n4}
\|(-\Delta_{\Omega_n})^{\frac 12} f-(-\Delta_{\Omega_\infty})^{\frac 12} f\|_{L^2({\mathbb{R}}^3)}\to 0 \qtq{for all} f\in C^\infty_c(\tlim\Omega_n).
\end{align}
\end{lem}
\begin{proof}
By the definition of $\tlim\Omega_n$, any $f\in C_c^{\infty}(\tlim \Omega_n)$ obeys $\supp(f)\subseteq\Omega_n$
for all sufficiently large $n$, and for such $n$ we have
\begin{align}\lambdaabel{normequal}
\| (-\Delta_{\Omega_n})^{\frac 12} f \|_{L^2({\mathbb{R}}^3)} =\|\nabla f\|_{L^2({\mathbb{R}}^3)} = \| (-\Delta_{\Omega_\infty})^{\frac 12} f \|_{L^2({\mathbb{R}}^3)}.
\end{align}
Given ${\varepsilon}>0$, there exists $\Theta_{\varepsilon}\in C_c^\infty((0,\infty))$ such that
\begin{align*}
\sup_{\lambdaambda\in[0,\infty)}\ \biggl|\frac{\sqrt\lambdaambda}{1+\lambdaambda}-\Theta_{\varepsilon}(\lambdaambda) \biggr| <{\varepsilon}.
\end{align*}
Thus for any $g\in C_c^{\infty}({\mathbb{R}}^3)$, we have
\begin{align*}
\lambdaangle g, (-\Delta_{\Omega_n})^{\frac 12} f\rangle &=\bigl\lambdaangle g, \frac{(-\Delta_{\Omega_n})^{\frac12}}{1-\Delta_{\Omega_n}}(1-\Delta) f\bigr\rangle
=\lambdaangle g, \Theta_{\varepsilon}(-\Delta_{\Omega_n})(1-\Delta)f\rangle +O({\varepsilon}).
\end{align*}
Using Proposition \ref{P:converg} and the same reasoning, we obtain
\begin{align*}
\lambdaim_{n\to\infty} \lambdaangle g, \Theta_{\varepsilon}(-\Delta_{\Omega_n})&(1-\Delta) f\rangle
= \lambdaangle g, \Theta_{\varepsilon}(-\Delta_{\Omega_\infty})(1-\Delta)f\rangle
= \lambdaangle g, (-\Delta_{\Omega_\infty})^{\frac 12} f\rangle + O({\varepsilon}).
\end{align*}
Putting these two equalities together and using the fact that ${\varepsilon}>0$ was arbitrary, we deduce that
$(-\Delta_{\Omega_n})^{\frac 12} f \rightharpoonup (-\Delta_{\Omega_\infty})^{\frac 12} f$ weakly in $L^2({\mathbb{R}}^3)$.
Combining this with \eqref{normequal} gives strong convergence in $L^2({\mathbb{R}}^3)$ and so proves the lemma.
\end{proof}
\section{Convergence of linear flows}\lambdaabel{S:Linear flow convergence}
In this section we prove convergence of free propagators in Strichartz spaces, as we rescale and translate the domain $\Omega$ by parameters $N_n\in 2^{\mathbb{Z}}$ and $x_n\in \Omega$ conforming to one of the following three scenarios:
\begin{equation}\lambdaabel{scenarios}
\lambdaeft\{ \ \begin{aligned}
&\text{(i) $N_n\to 0$ and $-N_n x_n \to x_\infty\in {\mathbb{R}}^3$}\\
&\text{(ii) $N_n d(x_n)\to \infty$}\\
&\text{(iii) $N_n\to\infty$ and $N_n d(x_n) \to d_\infty>0$.}
\end{aligned}\right.
\end{equation}
Here we use the shorthand $d(x_n)=\dist(x_n, \Omega^c)$. Notice that these are Cases~2--4 discussed in Section~\ref{S:Domain Convergence}. We will not discuss Case~1 of Section~\ref{S:Domain Convergence} here; there is no change in geometry in Case~1, which renders the results of this section self-evident.
As seen in Section~\ref{S:Domain Convergence}, the limiting geometry in the first and second scenarios is the whole space ${\mathbb{R}}^3$, while in the third scenario the limiting geometry is the halfspace ${\mathbb{H}}$ (after a suitable normalization). More precisely, in the first and second scenarios writing
$\Omega_n=N_n(\Omega-\{x_n\})$, Proposition~\ref{P:convdomain} gives $\Omega_n\to {\mathbb{R}}^3$. In the third scenario, we define
$\Omega_n=N_nR_n^{-1}(\Omega-\{x_n^*\})$, where $x_n^*\in\partial\Omega$ and $R_n\in SO(3)$ are chosen so that $d(x_n)=|x_n-x_n^*|$ and $R_n e_3 = \frac{x_n-x_n^*}{|x_n-x_n^*|}$; in this scenario, Proposition~\ref{P:convdomain} gives $\Omega_n\to {\mathbb{H}}=\{x\in{\mathbb{R}}^3:x\cdot e_3>0\}$.
The main result in this section is the following:
\begin{thm}[Convergence of linear flows in Strichartz spaces]\lambdaabel{T:LF}\hskip 0em plus 1em
Let $\Omega_n$ be as above and let $\Omega_\infty$ be such that
$\Omega_n\to \Omega_\infty$. Then
\begin{align*}
\lambdaim_{n\to \infty}\| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}=0
\end{align*}
for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$ and all pairs $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$.
\end{thm}
In this paper we are considering an energy-critical problem and so need an analogue of this theorem with the corresponding scaling. To this end, we prove the following corollary, which will be used to obtain a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ in the following section.
\begin{cor}[Convergence of linear flows in $L_{t,x}^{10}$]\lambdaabel{C:LF} Let $\Omega_n$ be as above and let $\Omega_\infty$ be such that
$\Omega_n\to \Omega_\infty$. Then
\begin{align*}
\lambdaim_{n\to \infty}\| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}=0
\end{align*}
for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$.
\end{cor}
\begin{proof}
By H\"older's inequality,
\begin{align*}
\| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10}}
&\lambdaesssim \| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10/3}}^{1/3} \| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{\infty}}^{2/3}.
\end{align*}
The corollary then follows from Theorem~\ref{T:LF} and the following consequence of Sobolev embedding:
$$
\| e^{it\Delta_{\Omega_n}} \psi\|_{L_{t,x}^{\infty}} + \| e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{\infty}} \lambdaesssim \| (1-\Delta_{\Omega_n}) \psi \|_{L_t^{\infty} L_x^2} + \| (1-\Delta_{\Omega_\infty}) \psi \|_{L_t^{\infty} L_x^2} \lambdaesssim_\psi 1.
$$
Note that the implicit constant here does not depend on $n$, because the domains obey the interior cone condition with uniform constants.
\end{proof}
The proof of Theorem~\ref{T:LF} will occupy the remainder of this lengthy section. We will consider three different regimes of behaviour for $N_n$ and $x_n$. These do not exactly correspond to the three scenarios above, but rather are dictated by the method of proof. The first such case is when $N_n\to 0$ or $d(x_n)\to \infty$. The limiting geometry in this case is the whole of ${\mathbb{R}}^3$.
\begin{thm} \lambdaabel{T:LF1} Let $\Omega_n=N_n(\Omega-\{x_n\})$ and assume that $N_n\to 0$ or $d(x_n)\to\infty$. Then
\begin{align*}
\lambdaim_{n\to\infty}\|e^{it\Delta_{\Omega_n}}\psi-e^{it\Delta_{{\mathbb{R}}^3}}\psi\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}=0
\end{align*}
for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$ and all pairs $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$.
\end{thm}
\begin{proof}
By interpolation and the Strichartz inequality, it suffices to prove convergence in the symmetric Strichartz space $q=r=\frac{10}3$. To ease notation, we will simply write $-\Delta$ for $-\Delta_{{\mathbb{R}}^3}$.
Let $\Theta$ be a smooth radial cutoff such that
\begin{align*}
\Theta(x)=\begin{cases}0, &|x|\lambdae \frac 14\\1, &|x|\ge \frac12\end{cases}
\end{align*}
and let $\chi_n(x):=\Theta\bigl(\frac{\dist(x,\Omega_n^c)}{\diam(\Omega_n^c)}\bigr)$.
Note that if $N_n\to 0$ then $\diam(\Omega_n^c)\to 0$ and so $\supp(1-\chi_n)$ is a collapsing neighbourhood of the point $-N_n x_n$.
On the other hand, if $d(x_n)\to\infty$ then we have $\frac{\dist(0,\Omega_n^c)}{\diam (\Omega_n^c)}\to\infty$. As for $x\in \supp(1-\chi_n)$ we have
$\dist(x,\Omega_n^c)\lambdae \frac 12\diam(\Omega_n^c)$, this gives
\begin{align*}
|x|\ge\dist(0,\Omega_n^c)-\dist(x,\Omega_n^c)\ge\dist(0,\Omega_n^c)-\tfrac 12\diam(\Omega_n^c)\to\infty \qtq{as} n\to \infty.
\end{align*}
Now fix $\psi\in C_c^{\infty}(\tlim \Omega_n)$. From the considerations above, for $n$ sufficiently large we have $\supp \psi\subseteq\{x\in {\mathbb{R}}^3:\, \chi_n(x)=1\}$.
Moreover, if $N_n\to 0$ or $d(x_n)\to\infty$, the monotone convergence theorem together with the Strichartz inequality give
\begin{align*}
\lambdaim_{n\to\infty}\bigl\|(1-\chi_n)e^{it\Delta}\psi\bigr\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times{\mathbb{R}}^3)}=0.
\end{align*}
We are thus left to estimate $e^{it\Delta_{\Omega_n}}\psi-\chi_n e^{it\Delta}\psi$. From the Duhamel formula,
\begin{align*}
e^{it\Delta_{\Omega_n}}\psi=\chi_n e^{it\Delta}\psi+i\int_0^t e^{i(t-s)\Delta_{\Omega_n}}[\Delta, \chi_n] e^{is\Delta}\psi \,ds.
\end{align*}
Using the Strichartz inequality, we thus obtain
\begin{align}\lambdaabel{1:43}
\|e^{it\Delta_{\Omega_n}}\psi-\chi_n e^{it\Delta}\psi\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}
\lambdaesssim\bigl\|[\Delta,\chi_n] e^{it\Delta}\psi\bigr\|_{(L_{t,x}^{\frac{10}7}+L_t^1L_x^2)({\mathbb{R}}\times{\mathbb{R}}^3)}.
\end{align}
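Throughout, we use the pointwise identity
$$
[\Delta,\chi_n]f=(\Delta\chi_n)f+2\nabla\chi_n\cdot\nabla f,
$$
which is the source of the two types of terms estimated below.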
To estimate the right-hand side of \eqref{1:43}, we discuss separately the cases: $(1)$ $N_n\to 0$ and $(2)$ $d(x_n)\to \infty$ with $N_n\gtrsim 1$. In the first case, we estimate
\begin{align*}
\|&[\Delta, \chi_n]e^{it\Delta}\psi\bigr\|_{L_{t,x}^{\frac{10}7}}
\lesssim \Bigl[\|\Delta\chi_n\|_{L_x^{\frac{10}7}}+\|\nabla \chi_n\|_{L_x^{\frac{10}7}}\Bigr]
\Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr]\\
&\lambdaesssim\Bigl[\diam(\Omega_n^c)^{-2}+\diam(\Omega_n^c)^{-1}\Bigr]\diam(\Omega_n^c)^{\frac{21}{10}}
\Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr] \\
&\lambdaesssim\Bigl[N_n^{\frac 1{10}}+N_n^{\frac{11}{10}}\Bigr]
\Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr].
\end{align*}
From the dispersive estimate and Sobolev embedding,
\begin{align}\lambdaabel{E:4disp}
\|e^{it\Delta}\psi\|_{L_x^{\infty}}\lambdaesssim \lambdaangle t\rangle^{-\frac32}\bigl[\|\psi\|_{L_x^1}+\|\psi\|_{H^2_x}\bigr]\lambdaesssim_\psi\lambdaangle t\rangle^{-\frac 32},
\end{align}
and similarly with $\psi$ replaced by $\nabla\psi$. Thus we obtain
\begin{align*}
\lambdaim_{n\to\infty}\|[\Delta,\chi_n] e^{it\Delta} \psi\|_{L_{t,x}^{\frac{10}7}({\mathbb{R}}\times{\mathbb{R}}^3)}=0.
\end{align*}
Consider now the case $N_n\gtrsim 1$ and $d(x_n)\to\infty$. Then
\begin{align*}
\|[\Delta,\chi_n]e^{it\Delta}\psi\|_{L_t^1L_x^2}
&\lambdaesssim \bigl[\|\Delta\chi_n\|_{L_x^{\infty}}+\|\nabla\chi_n\|_{L_x^{\infty}}\bigr]\|e^{it\Delta}\lambdaangle\nabla\rangle \psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)}\\
&\lambdaesssim \bigl[N_n^{-2}+N_n^{-1}\bigr]\|e^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)}.
\end{align*}
Using H\"older's inequality and \eqref{E:4disp}, we obtain
\begin{align*}
\|e^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_x^2(\dist(x,\Omega_n^c)\sim N_n)}
&\lambdaesssim N_n^{\frac 32}\|e^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_x^\infty}
\lambdaesssim_{\psi} N_n^{\frac 32}\lambdaangle t\rangle^{-\frac 32}.
\end{align*}
On the other hand, from the virial identity,
\begin{align*}
\|xe^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_x^2}\lambdaesssim_\psi\lambdaangle t\rangle
\end{align*}
and so,
\begin{align*}
\|e^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_x^2(\dist(x,\Omega_n^c)\sim N_n)}
&\lambdaesssim\frac 1{\dist(0,\Omega_n^c)}\|xe^{it\Delta}\lambdaangle \nabla\rangle\psi\|_{L_x^2}\lambdaesssim_\psi\frac{\lambdaangle t\rangle}{N_nd(x_n)}.
\end{align*}
Collecting these estimates we obtain
\begin{align*}
\|e^{it\Delta}\lambdaangle\nabla\rangle\psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)}
&\lesssim_\psi\int_0^{\infty}\min\biggl\{\frac{N_n^{\frac 32}}{\langle t\rangle^{\frac 32}}, \ \frac{\langle t\rangle}{N_nd(x_n)}\biggr\}\,dt\\
&\lambdaesssim _\psi N_nd(x_n)^{-\frac 15}+\min\bigl\{N_n^{\frac 32}, N_n^{-1}d(x_n)^{-1}\bigr\}
\end{align*}
and so,
\begin{align*}
\|[\Delta,\chi_n]e^{it\Delta}\psi\|_{L_t^1L_x^2}\lambdaesssim_\psi d(x_n)^{-\frac 15}+N_n^{-2}d(x_n)^{-1}\to 0 \qtq{as} n\to \infty.
\end{align*}
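For completeness, we indicate how the time integral appearing above was evaluated (a sketch). When $N_n^{5/2}d(x_n)\geq1$, one splits at $\langle t\rangle\sim N_nd(x_n)^{2/5}$, where the two bounds balance; each piece is then $O\bigl(N_nd(x_n)^{-\frac15}\bigr)$. When $N_n^{5/2}d(x_n)<1$, the first bound is the smaller one for every $t$ and
$$
\int_0^\infty \frac{N_n^{\frac32}}{\langle t\rangle^{\frac32}}\,dt \lesssim N_n^{\frac32}=\min\bigl\{N_n^{\frac32},\,N_n^{-1}d(x_n)^{-1}\bigr\}.
$$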
This completes the proof of the theorem.
\end{proof}
Theorem~\ref{T:LF1} settles Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (i) in \eqref{scenarios}, as well as part of scenario (ii). The missing part of the second scenario is $N_n d(x_n)\to \infty$ with $N_n\to \infty$ and $d(x_n)\lambdaesssim 1$. Of course, we also have to prove Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (iii), namely, $N_n d(x_n) \to d_\infty>0$ and $N_n\to\infty$. We will cover these remaining cases in two parts:
\begin{SL}\addtocounter{smalllist}{3}
\item $N_n\to \infty$ and $1\lambdaesssim N_nd(x_n) \lambdaeq N_n^{1/7}$
\item $N_n\to \infty$ and $N_n^{1/7}\lambdaeq N_nd(x_n) \lambdaesssim N_n$.
\end{SL}
Note that in case (iv) the obstacle $\Omega_n^c$ grows in diameter much faster than its distance to the origin. As seen from the origin, the obstacle is turning into a (possibly retreating) halfspace. By comparison, case (v) includes the possibility that the obstacle grows at a rate comparable to its distance to the origin.
The two cases will receive different treatments. In Case~(iv), we use a parametrix construction adapted to the halfspace evolution. We also prove that when the halfspace is retreating, the halfspace propagators converge to $e^{it\Delta_{{\mathbb{R}}^3}}$; see Proposition~\ref{P:HtoR}. In Case~(v), the parametrix construction will be inspired by geometric optics considerations and will require a very fine analysis.
We now turn to the details of the proof of Theorem~\ref{T:LF} in Case~(iv).
\subsection{Case (iv)} After rescaling we find ourselves in the setting shown schematically in Figure~\ref{F:case4} below, with
${\varepsilon} = N_n^{-1}$. This restores the obstacle to its original size. We further rotate and translate the problem
so that the origin lies on the boundary of the obstacle, the outward normal is $e_3$ at this point, and the wave packet $\psi$ is centered
around the point $\delta e_3$. Abusing notation, we will write $\Omega$ for this new rotated/translated domain. As before, we write
${\mathbb{H}}=\{(x_1, x_2,x_3)\in{\mathbb{R}}^3:\, x_3>0\}$; by construction, $\partial{\mathbb{H}}$ is the tangent plane to $\Omega^c$ at the origin. Throughout this subsection, we write $x^{\perp}:=(x_1,x_2)$; also $\bar x:=(x_1,x_2,-x_3)$ denotes the reflection of $x$ in $\partial{\mathbb{H}}$.
\begin{figure}
\caption{Depiction of Case~(iv); here ${\varepsilon}=N_n^{-1}$.}\label{F:case4}
\end{figure}
This subsection will primarily be devoted to the proof of the following
\begin{thm}\lambdaabel{T:LF2} Fix $\psi\in C_c^{\infty}({\mathbb{H}} - \{e_3\})$ and let
\begin{align*}
\psi_{{\varepsilon},\delta}(x):={\varepsilon}^{-\frac 32}\psi\biggl(\frac{ x-\delta e_3}{\varepsilon}\biggr).
\end{align*}
Then for any pair $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$ we have
\begin{align}\lambdaabel{cas4}
\|e^{it\Delta_{\Omega({\varepsilon})}}\psi_{{\varepsilon},\delta}-e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)} \to 0
\end{align}
as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\lambdaeq\delta\lambdaeq{\varepsilon}^{6/7}$. Here $\Omega({\varepsilon})$ is any family of affine images (i.e. rotations and translations) of $\Omega$ for which ${\mathbb{H}}\subseteq\Omega({\varepsilon})$ and $\partial{\mathbb{H}}$ is the tangent plane to $\Omega({\varepsilon})$ at the origin.
\end{thm}
Theorem~\ref{T:LF2} gives Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (iii) in \eqref{scenarios}. Indeed, one applies Theorem~\ref{T:LF2} to the function $\tilde\psi(x)=\psi(x+e_3)$ with $\delta={\varepsilon}=N_n^{-1}$ and $\Omega({\varepsilon})=R_n^{-1}(\Omega-\{x_n^*\})$.
With the aid of Proposition~\ref{P:HtoR} below, Theorem~\ref{T:LF2} also implies Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (ii) with the additional restriction that $N_n^{1/7}\geq N_nd(x_n) \to \infty$. In this case, we apply Theorem~\ref{T:LF2} to the function $\tilde\psi(x)=\psi_\infty(x-\rho e_3)$ with $\rho=\sup\{|x|:x\in\supp(\psi)\}$, ${\varepsilon}=N_n^{-1}$, $\delta=d(x_n)-{\varepsilon}\rho$, $\Omega({\varepsilon})=R_n^{-1}(\Omega-\{x_n^*\})$, and $\psi_\infty$ being any subsequential limit of $\psi\circ R_n$. As $\psi\circ R_n\to\psi_\infty$ in $L^2$ sense, the Strichartz inequality controls the resulting errors.
\begin{prop}\lambdaabel{P:HtoR} Fix $\psi\in C_c^{\infty}({\mathbb{H}} - \{e_3\})$ and let
\begin{align*}
\psi_{{\varepsilon},\delta}(x):={\varepsilon}^{-\frac 32}\psi\biggl(\frac{ x-\delta e_3}{\varepsilon}\biggr).
\end{align*}
Then for any pair $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$ we have
\begin{align*}
\|e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}-e^{it\Delta_{{\mathbb{R}}^3}}\psi_{{\varepsilon},\delta}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)} \to 0
\end{align*}
as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying $\frac{\delta}{{\varepsilon}}\to \infty$.
\end{prop}
\begin{proof}
We will prove the proposition in the special case $q=r=\frac{10}3$. The result for general exponents follows from
the Strichartz inequality and interpolation, or by a simple modification of the arguments that follow.
Using the exact formulas for the propagator in ${\mathbb{R}}^3$ and ${\mathbb{H}}$ and rescaling reduces the question to
\begin{align}\lambdaabel{E:H2R1}
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}({\mathbb{R}}\times{\mathbb{H}})} \to 0 \qtq{where}
\tilde\psi_{{\varepsilon},\delta}(y) = \psi( \bar y - \tfrac\delta{\varepsilon} e_3 ).
\end{align}
Notice that $\tilde\psi_{{\varepsilon},\delta}$ is supported deeply inside the complementary halfspace ${\mathbb{R}}^3\setminus{\mathbb{H}}$.
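In more detail, one route to this reduction is as follows (a sketch). For data $f\in L^2$ supported in ${\mathbb{H}}$ and $x\in{\mathbb{H}}$, the method of images gives
$$
\bigl(e^{it\Delta_{{\mathbb{H}}}}f-e^{it\Delta_{{\mathbb{R}}^3}}f\bigr)(x)=-\bigl(e^{it\Delta_{{\mathbb{R}}^3}}\tilde f\bigr)(x) \qtq{with} \tilde f(y):=f(\bar y).
$$
Applying this with $f=\psi_{{\varepsilon},\delta}$, treating the region ${\mathbb{R}}^3\setminus{\mathbb{H}}$ (where $e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}$ vanishes) via the reflection symmetry of the free propagator, and rescaling $(t,x)\mapsto({\varepsilon}^2t,{\varepsilon} x)$ leads to \eqref{E:H2R1}.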
For large values of $t$ we estimate as follows: Combining the $L^1_x\to L^\infty_x$ dispersive estimate with mass conservation gives
$$
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{x}({\mathbb{R}}^3)}
\lambdaesssim |t|^{-3/5} \|\tilde\psi_{{\varepsilon},\delta}\|_{L^1_x}^{\frac25} \|\tilde\psi_{{\varepsilon},\delta}\|_{L^2_x}^{\frac35} \lambdaesssim_\psi |t|^{-3/5}.
$$
We use this bound when $|t| \geq T:=\sqrt{\delta/{\varepsilon}}$ to obtain
\begin{align*}
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}(\{|t|\geq T\}\times{\mathbb{H}})}
&\lambdaesssim_\psi \Bigl( \int_{T}^\infty t^{-2}\,dt\Bigr)^{\frac{3}{10}} \to 0 \qtq{as} {\varepsilon}\to 0.
\end{align*}
For $|t|\lambdaeq T$, we use the virial estimate
$$
\bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2
\lambdaesssim \bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2
+ t^2 \bigl\| \nabla \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2 \lambdaesssim_\psi \tfrac{\delta}{{\varepsilon}}.
$$
This together with the H\"older and Strichartz inequalities gives
\begin{align*}
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}(\{|t|\lambdaeq T\}\times{\mathbb{H}})}
&\lambdaesssim \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^\infty_t L^2_x(\{|t|\lambdaeq T\}\times{\mathbb{H}})}^{\frac25}
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^2_t L^6_x({\mathbb{R}}\times{\mathbb{H}})}^{\frac35} \\
&\lambdaesssim \bigl(\tfrac{{\varepsilon}}{\delta}\bigr)^{\frac25}\bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^\infty_t L^2_x(\{|t|\lambdaeq T\}\times{\mathbb{H}})}^{\frac25} \|\psi\|_{L_x^2}^{\frac35}\\
&\lambdaesssim_\psi \bigl(\tfrac{\varepsilon}\delta\bigr)^{\frac15} \to 0 \qtq{as} {\varepsilon}\to 0.
\end{align*}
This completes the proof of the proposition.
\end{proof}
We begin the proof of Theorem~\ref{T:LF2} by showing that we can approximate $\psi_{{\varepsilon},\delta}$ by Gaussians.
\begin{lem}[Approximation by Gaussians] \lambdaabel{lm:exp4}
Let $\psi\in C_c^{\infty}({\mathbb{H}}-\{e_3\})$. Then for any $\eta>0$, $0<{\varepsilon}\lambdaeq 1$, and $\delta\geq{\varepsilon}$ there exist $N>0$, points
$\{y^{(n)}\}_{n=1}^N \subset {\mathbb{H}}$, and constants $\{c_n\}_{n=1}^N\subset {\mathbb{C}}$ such that
\begin{align*}
\biggl\|\psi_{{\varepsilon},\delta}(x)-\sum_{n=1}^N c_n (2\pi{\varepsilon}^2)^{-\frac34}\Bigl[\exp\bigl\{-\tfrac {|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}\bigr\}
-\exp\bigl\{-\tfrac {|x-\delta\bar{ y}^{(n)}|^2}{4{\varepsilon}^2}\bigr\}\Bigr]\biggr\|_{L_x^2({\mathbb{H}})}<\eta.
\end{align*}
Here, $\bar y^{(n)}$ denotes the reflection of $y^{(n)}$ in $\partial{\mathbb{H}}$. Moreover, we may ensure that
$$
\sum_n |c_n| \lambdaesssim_\eta 1 \qtq{and} \sup_n |y^{(n)}-e_3| \lambdaesssim_\eta {\varepsilon}\delta^{-1},
$$
uniformly in ${\varepsilon}$ and $\delta$.
\end{lem}
\begin{proof}
Wiener showed that linear combinations of translates of a fixed function in $L^2({\mathbb{R}}^d)$ are dense in this space if and only if
the Fourier transform of this function is a.e. non-vanishing. (Note that his Tauberian theorem is the analogous statement for $L^1$.)
In this way, we see that we can choose vectors $z^{(n)}\in {\mathbb{R}}^3$ and numbers $\tilde c_n$ so that
\begin{align*}
\biggl\| \psi(x) - \sum_{n=1}^N \tilde c_n (2\pi)^{-\frac 34} e^{-\frac{|x-z^{(n)}|^2}4}\biggr\|_{L_x^2({\mathbb{R}}^3)}<\tfrac12 \eta.
\end{align*}
Rescaling, translating, and combining with the reflected formula, we deduce immediately that
\begin{align*}
\biggl\| \psi_{{\varepsilon},\delta}(x) - \psi_{{\varepsilon},\delta}(\bar x) - \sum_{n=1}^N c_n (2\pi{\varepsilon}^2)^{-\frac34} \Bigl[
e^{-\frac {|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}} - e^{-\frac {|x-\delta\bar{ y}^{(n)}|^2}{4{\varepsilon}^2}}\Bigr]
\biggr\|_{L_x^2({\mathbb{R}}^3)}<\eta,
\end{align*}
where $y^{(n)} = {\varepsilon}\delta^{-1} z^{(n)} + e_3$ and $c_n=\tilde c_n$ when $y^{(n)}\in {\mathbb{H}}$; otherwise
we set $\bar y^{(n)} = {\varepsilon}\delta^{-1} z^{(n)} + e_3$ and $c_n= - \tilde c_n$, which ensures $y^{(n)} \in {\mathbb{H}}$.
As $\psi_{{\varepsilon},\delta}(x)$ is supported wholly inside ${\mathbb{H}}$, the function $\psi_{{\varepsilon},\delta}(\bar x)$ vanishes there. The lemma now follows.
\end{proof}
By interpolation and the Strichartz inequality, it suffices to prove Theorem~\ref{T:LF2} for the symmetric Strichartz pair $q=r=\frac{10}3$. Also, to ease notation, we simply write $\Omega$ for $\Omega({\varepsilon})$ in what follows.
Combining Lemma~\ref{lm:exp4} with the Strichartz inequality for both propagators $e^{it\Delta_{\Omega}}$ and $e^{it\Delta_{{\mathbb{H}}}}$, we obtain
\begin{align*}
&\|e^{it\Delta_{\Omega}}\psi_{{\varepsilon}, \delta}-e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}\\
&\leq \sum_{n=1}^N |c_n|\Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr]
(2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac{|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y^{(n)}|^2}{4{\varepsilon}^2}}\bigr]\Bigl\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times\Omega)}\\
&\qquad+C\biggl\|\psi_{{\varepsilon},\delta} -\sum_{n=1}^N c_n(2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac{|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta \bar y^ {(n)}|^2}{4{\varepsilon}^2}}\bigr]\biggr\|_{L^2({\mathbb{H}})}.
\end{align*}
Therefore, Theorem~\ref{T:LF2} is reduced to showing
\begin{align}\label{fr}
\Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr](2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}}
-e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1)
\end{align}
as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, and $y$ as in Lemma~\ref{lm:exp4}.
Next we show that we can further simplify our task to considering only $y\in{\mathbb{H}}$ of the form $y=(0,0,y_3)$ in the estimate \eqref{fr}.
Given $y\in{\mathbb{H}}$ with $|y-e_3|\lesssim{\varepsilon}\delta^{-1}$ that is not of this form, let ${\mathbb{H}}_y$ denote the halfspace containing $\delta y$ with $\partial{\mathbb{H}}_y$ being the tangent plane to $\partial\Omega$ at the point nearest $\delta y$. Moreover, let $\delta\tilde y$ be the reflection of $\delta y$ in $\partial{\mathbb{H}}_y$. Elementary geometric considerations show that the angle between $\partial{\mathbb{H}}$ and $\partial{\mathbb{H}}_y$ is $O({\varepsilon})$. Correspondingly, $|\delta\tilde y - \delta\bar y|\lesssim\delta{\varepsilon}$ and so
\begin{align}\label{619}
{\varepsilon}^{-\frac 32}\bigl\|e^{-\frac {|x-\delta \bar y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\tilde y|^2}{4{\varepsilon}^2}}\bigr\|_{L^2({\mathbb{R}}^3)}\to 0 \qtq{as} {\varepsilon} \to 0.
\end{align}
As we will explain, \eqref{fr} (and so Theorem~\ref{T:LF2}) follows by combining \eqref{619} with the Strichartz inequality and Proposition~\ref{P:LF2} below. Indeed, the only missing ingredient is the observation that
$$
{\varepsilon}^{-\frac32} \Bigl\| e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}} \bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr]
- e^{it\Delta_{{\mathbb{H}}_y}}\chi_{{\mathbb{H}}_y}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\tilde{y}|^2}{4{\varepsilon}^2}}\bigr] \Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}
$$
is $o(1)$ as ${\varepsilon}\to0$, which follows from \eqref{619} and the exact formula for the propagator in halfspaces.
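For the reader's convenience, we recall the image-method formula alluded to here, for the Dirichlet propagator on a halfspace ${\mathbb{H}}_*$, with $z^*$ denoting the reflection of $z$ in $\partial{\mathbb{H}}_*$ (this is consistent with the explicit Gaussian formula displayed at the start of the proof of Proposition~\ref{P:LF2} below):
$$
\bigl[e^{it\Delta_{{\mathbb{H}}_*}}\chi_{{\mathbb{H}}_*}f\bigr](x)
=\int_{{\mathbb{H}}_*}(4\pi it)^{-\frac32}\Bigl[e^{\frac{i|x-z|^2}{4t}}-e^{\frac{i|x-z^*|^2}{4t}}\Bigr]f(z)\,dz,
\qquad x\in{\mathbb{H}}_*.
$$
With this formula, both evolutions appearing in the display above admit closed-form Gaussian expressions of the type used in the proof of Proposition~\ref{P:LF2}.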
Therefore, it remains to justify the following proposition, whose proof will occupy the remainder of this subsection.
\begin{prop} \label{P:LF2} We have
\begin{align}\label{fn4}
\Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr](2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}}
-e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1)
\end{align}
as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, uniformly for $y=(0,0,y_3)$ and $y_3$ in a compact subset of $(0,\infty)$.
\end{prop}
\begin{proof}
To prove \eqref{fn4}, we will build a parametrix for the evolution in $\Omega$ and show that this differs little from the evolution in ${\mathbb{H}}$, for which we have an exact formula:
\begin{align*}
&(2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac {|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr]
=(2\pi)^{-\frac 34}\bigl(\tfrac{{\varepsilon}}{{\varepsilon}^2+it}\bigr)^{\frac 32} \bigl[e^{-\frac {|x-\delta y|^2}{4({\varepsilon}^2+it)}}
-e^{-\frac {|x-\delta\bar y|^2}{4({\varepsilon}^2+it)}}\bigr],
\end{align*}
for all $t\in {\mathbb{R}}$ and $x\in {\mathbb{H}}$. We write
\begin{align*}
u(t,x):=(2\pi)^{-\frac 34}\biggl(\frac {\varepsilon}{{\varepsilon}^2+it}\biggr)^{\frac 32}e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}},
\end{align*}
and so for all $t\in{\mathbb{R}}$ and $x\in{\mathbb{H}}$,
\begin{align*}
(2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac {|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr]
=u(t,x)-u(t,\bar x).
\end{align*}
We start by showing that a part of the halfspace evolution does not contribute to the $L_{t,x}^{10/3}$ norm. Let $\phi:[0, \infty)\to {\mathbb{R}}$ and
$\theta:{\mathbb{R}}\to {\mathbb{R}}$ be smooth functions such that
\begin{align*}
\phi(r)=\begin{cases} 0, & 0\le r\le \frac 12\\ 1, & r\geq1\end{cases} \quad \qtq{and}\quad
\theta(r)=\begin{cases} 1, & r\le 0 \\ 0, &r\geq 1. \end{cases}
\end{align*}
We define
\begin{align*}
v(t,x):=\bigl[u(t,x)-u(t, \bar x)\bigr]\Bigl[1-\phi\Bigl(\frac{x_1^2+x_2^2}{\varepsilon}\Bigr)\theta\Bigl(\frac{x_3}{\varepsilon}\Bigr)\Bigr]\chi_{\{x_3\ge-\frac 12\}}.
\end{align*}
We will prove that $v$ is a good approximation for the halfspace evolution.
\begin{figure}
\caption{The role of the cutoffs defining $v(t,x)$. The cutoff
function takes values between $0$ and $1$ in the shaded region. We
depict only one half of a cross-section; one obtains the full 3D
figure by rotating about the~$x_3$-axis.}\label{F:v}
\end{figure}
\begin{lem} \label{L:v matters}
We have
\begin{align*}
\|u(t,x)-u(t,\bar x)-v(t,x)\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{H}})}=o(1)
\end{align*}
as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, uniformly for $y=(0,0,y_3)$ and $y_3$ in a compact subset of $(0,\infty)$.
\end{lem}
\begin{proof}
By the definition of $v$, we have to prove
\begin{align*}
\biggl\|\bigl[u(t,x)-u(t,\bar x)\bigr]\phi\biggl(\frac{x_1^2+x_2^2}{\varepsilon}\biggr)\theta\biggl(\frac{x_3}{\varepsilon}\biggr)\biggr\|_ {L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{H}})}=o(1) \qtq{as} {\varepsilon}\to 0,
\end{align*}
which, considering the supports of $\phi$ and $\theta$, amounts to showing
\begin{align}\label{pts}
&\|u(t,x)\|_{L_{t,x}^{\frac{10}3}(|x^{\perp}|\ge \sqrt{{\varepsilon}/2},\ 0\le x_3\le {\varepsilon})}+\| u(t,\bar x)\|_{L_{t,x}^{\frac{10}3}(|x^{\perp}|\ge \sqrt{{\varepsilon}/2},\ 0\le
x_3\le {\varepsilon})}=o(1).
\end{align}
We only prove \eqref{pts} for $u(t,x)$ with $t\in[0,\infty)$; the proof for negative times and for $u(t,\bar x)$ is similar.
Let $T:={\varepsilon}^2\log(\frac 1{\varepsilon})$. We will consider separately the short time contribution $[0,T]$ and the long time contribution $[T, \infty)$.
The intuition is that for short times the wave packet does not reach the cutoff, while for large times the wave packet has already disintegrated.
Thus, we do not need to take advantage of the cancelation between $u(t,x)$ and $u(t,\bar x)$.
We start with the long time contribution. A simple change of variables yields
\begin{align*}
\int_T^\infty\int_{{\mathbb{R}}^3} |u(t,x)|^{\frac{10}3} \,dx\,dt
&\lesssim\int_T^\infty\int_{{\mathbb{R}}^3}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52} e^{-\frac{5{\varepsilon}^2|x-\delta y|^2}{6({\varepsilon}^4+t^2)}} \,dx \,dt\\
&\lesssim \int_T^\infty \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52-\frac 32} \,dt\lesssim{\varepsilon}^2\int_T^\infty t^{-2} \,dt\lesssim \log^{-1}(\tfrac 1{\varepsilon}).
\end{align*}
For short times, we estimate
\begin{align*}
\int_0^T\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2},0\le x_3\le{\varepsilon}}|u(t,x)|^{\frac{10}3} \,dx\,dt
&\lesssim {\varepsilon}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52}\int_{\sqrt{{\varepsilon}/2}}^\infty e^{-\frac{5{\varepsilon}^2r^2}{6({\varepsilon}^4+t^2)}}r \,dr\,dt\\
&\lesssim {\varepsilon}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52-1}e^{-\frac{5{\varepsilon}^3}{12({\varepsilon}^4+t^2)}} \,dt\\
&\lesssim {\varepsilon} e^{-\frac{5 {\varepsilon}^3}{24 {\varepsilon}^4\log^2(\frac1{\varepsilon})}}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac 32} \,dt\\
&\le {\varepsilon}^{100}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
In view of Lemma~\ref{L:v matters}, Proposition~\ref{P:LF2} reduces to showing
\begin{align}\label{compl}
\Bigl\| (2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_\Omega}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr]-v(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1).
\end{align}
To achieve this, we write
\begin{align*}
(2\pi{\varepsilon}^2)^{-\frac 34}e^{it\Delta_\Omega}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr]
=v(t,x)-w(t,x)-r(t,x),
\end{align*}
where $w$ is essentially $v$ evaluated on the boundary of $\Omega$ and $r(t,x)$ is the remainder term. More precisely,
\begin{align*}
w(t,x):=\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\biggl[1-\phi\biggl(\frac{x_1^2+x_2^2}{\varepsilon}\biggr)\theta\biggl(\frac{x_3}{\varepsilon}\biggr)\biggr]
\theta\biggl(\frac{\dist(x,\Omega^c)}{\varepsilon}\biggr)\chi_{\{x_3\ge -\frac 12\}},
\end{align*}
where $x_*$ denotes the point on $\partial\Omega$ such that $x_*^{\perp}=x^\perp$ and $\bar x_*$ denotes the reflection of $x_*$ in $\partial{\mathbb{H}}$. Note that for $x\in\partial\Omega$, we have $w(t,x)=v(t,x)$. Thus, on ${\mathbb{R}}\times\Omega$ the remainder $r(t,x)$ satisfies
\begin{align*}
(i\partial_t+\Delta_\Omega )r=(i\partial_t+\Delta )(v-w).
\end{align*}
Therefore, by the Strichartz inequality, \eqref{compl} will follow from
\begin{align}\label{E:case4 estimates}
\|w\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)} + \|(i\partial_t+\Delta)v\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}
+ \|(i\partial_t+\Delta)w\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}=o(1).
\end{align}
\begin{figure}
\caption{The shaded area indicates the support of $w(t,x)$. As in Figure~\ref{F:v}, we depict only one half of a cross-section.}
\end{figure}
To prove \eqref{E:case4 estimates}, we will make repeated use of the following
\begin{lem}
For $\alpha\geq 0$,
\begin{align}
\int_{|x^\perp|\le \sqrt {\varepsilon}}|x^\perp|^{\alpha}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}} \,dx^\perp
&\lesssim \min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2},{\varepsilon}^{\frac{\alpha+2}2}\biggr\}\label{estsmall}\\
\int_{|x^{\perp}|\ge \sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}}|x^{\perp}|^\alpha \,dx^{\perp}
&\lesssim\biggl(\frac{\eps^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\min\biggl\{1,\biggl(\frac{\eps^4+t^2}{{\varepsilon}^3}\biggr)^{20}\biggr\}.\label{estbig}
\end{align}
In particular, for $\alpha\geq0$, $\beta>\frac12$, and $\gamma=\min\{3-4\beta+\frac\alpha2,2-3\beta+\frac\alpha4\}$,
\begin{align}\label{512}
\int_0^{\infty}({\varepsilon}^4+t^2)^{-\beta}\biggl(\int_{|x^\perp|\le\sqrt{{\varepsilon}}}|x^\perp|^{\alpha}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}}\,d x^\perp\biggr)^{\frac 12} \,dt\lesssim {\varepsilon}^\gamma.
\end{align}
Moreover, for $\frac12<\beta<10$,
\begin{align} \label{514}
\int_0^\infty({\varepsilon}^4+t^2)^{-\beta}\min\biggl\{1, \biggl(\frac {{\varepsilon}^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\}\,dt\lesssim {\varepsilon}^{\frac32-3\beta}.
\end{align}
\end{lem}
\begin{proof}
Passing to polar coordinates, we estimate
\begin{align*}
\text{LHS}\eqref{estsmall}=\int_0^{\sqrt{\varepsilon}}e^{-\frac{{\varepsilon}^2r^2}{2({\varepsilon}^4+t^2)}} r^{\alpha+1} \,dr
&\lesssim \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\int_0^{\frac{{\varepsilon}^{\frac32}}{\sqrt{{\varepsilon}^4+t^2}}} e^{-\frac {\rho^2}2} \rho^{\alpha+1} \,d\rho\\
&\lesssim \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\min\biggl\{1,\biggl(\frac{{\varepsilon}^{\frac32}}{\sqrt{{\varepsilon}^4+t^2}}\biggr)^{\alpha+2}\biggr\},
\end{align*}
which settles \eqref{estsmall}. The proof of \eqref{estbig} follows along similar lines.
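For completeness, the analogous computation for \eqref{estbig} reads
\begin{align*}
\text{LHS}\eqref{estbig}=\int_{\sqrt{{\varepsilon}/2}}^\infty e^{-\frac{{\varepsilon}^2r^2}{2({\varepsilon}^4+t^2)}} r^{\alpha+1} \,dr
&\lesssim \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\int_{\frac{{\varepsilon}^{\frac32}}{\sqrt{2({\varepsilon}^4+t^2)}}}^{\infty} e^{-\frac {\rho^2}2} \rho^{\alpha+1} \,d\rho\\
&\lesssim_\alpha \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\min\biggl\{1,\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^3}\biggr)^{20}\biggr\},
\end{align*}
since $\int_a^\infty e^{-\rho^2/2}\rho^{\alpha+1}\,d\rho\lesssim_{\alpha} \min\{1,a^{-40}\}$ for all $a>0$.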
Using \eqref{estsmall}, we estimate
\begin{align*}
\text{LHS}\eqref{512}
&\lesssim \int_0^\infty({\varepsilon}^4+t^2)^{-\beta}\min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}4},\,{\varepsilon}^{\frac{\alpha+2}4}\biggr\} \,dt\\
&\lesssim \int_0^{{\varepsilon}^{\frac 32}}({\varepsilon}^4+t^2)^{-\beta+\frac{\alpha+2}4}{\varepsilon}^{-\frac{\alpha+2}2} \,dt
+\int_{{\varepsilon}^{\frac 32}}^\infty({\varepsilon}^4+t^2)^{-\beta}{\varepsilon}^{\frac{\alpha+2}4} \,dt\\
&\lesssim {\varepsilon}^{-\frac{\alpha+2}2}\int_0^{{\varepsilon}^2} {\varepsilon}^{-4\beta+\alpha+2}\,dt + {\varepsilon}^{-\frac{\alpha+2}2}\int_{{\varepsilon}^2}^{{\varepsilon}^{\frac32}}t^{-2\beta+\frac{\alpha+2}2}\, dt +{\varepsilon}^{\frac{\alpha+2}4}{\varepsilon}^{\frac 32(1-2\beta)}\\
&\lesssim {\varepsilon}^{\frac{\alpha+2}2} {\varepsilon}^{2-4\beta}+ {\varepsilon}^{-\frac{\alpha+2}2}{\varepsilon}^{\frac 32(-2\beta+\frac{\alpha+2}2+1)}+{\varepsilon}^{\frac{\alpha+2}4}{\varepsilon}^{\frac 32(1-2\beta)}\\
&\lesssim {\varepsilon}^{3-4\beta+\frac\alpha2} +{\varepsilon}^{2-3\beta+\frac\alpha4}.
\end{align*}
To establish \eqref{514} one argues as for \eqref{512}; we omit the details.
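For the reader's convenience, splitting the integral at $t={\varepsilon}^{\frac32}$ gives
\begin{align*}
\text{LHS}\eqref{514}
&\lesssim {\varepsilon}^{-30}\int_0^{{\varepsilon}^{\frac32}}({\varepsilon}^4+t^2)^{10-\beta}\,dt+\int_{{\varepsilon}^{\frac32}}^\infty t^{-2\beta}\,dt
\lesssim {\varepsilon}^{-30}{\varepsilon}^{3(10-\beta)}{\varepsilon}^{\frac32}+{\varepsilon}^{\frac32(1-2\beta)}\lesssim{\varepsilon}^{\frac32-3\beta},
\end{align*}
using $\frac12<\beta<10$ in the last two steps.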
\end{proof}
We are now ready to prove \eqref{E:case4 estimates}, which will complete the proof of Proposition~\ref{P:LF2}. We will estimate each of the three summands appearing on the left-hand side of \eqref{E:case4 estimates}. We start with the first one.
\begin{lem}[Estimate for $w$]\label{L:we}
We have
\begin{align}\label{we}
\|w\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}\lesssim \delta{\varepsilon}^{-\frac15}.
\end{align}
\end{lem}
\begin{proof}
We first obtain a pointwise bound for $w$. Note that on the support of $w$,
\begin{align*}
|x^\perp|\le {\varepsilon}^{\frac 12}, \quad |x_3|\lesssim {\varepsilon}, \qtq{and} |x_{*3}|\lesssim |x^\perp|^2\lesssim {\varepsilon},
\end{align*}
where the last two estimates follow from the finite curvature assumption. Here we use the notation $x_{*3}:=x_*\cdot e_3$. Thus, using the fact that
$|\bar x_* -\delta y|=|x_*+\delta y|$ and the mean value theorem, on the support of $w$ we have
\begin{align}\label{dif}
\biggl| e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr|
&=\biggl| e^{-\frac{|x^\perp|^2}{4({\varepsilon}^2+it)}}\biggl(e^{-\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|x_{*3}+\delta y_3|^2}{4({\varepsilon}^2+it)}}\biggr)\biggr|\notag\\
&\lesssim e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4({\varepsilon}^4+t^2)}}\frac\delta{\sqrt{{\varepsilon}^4+t^2}}|x_{*3}|\notag\\
&\lesssim \delta {({\varepsilon}^4+t^2)}^{-\frac12} |x^\perp|^2 e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.
\end{align}
Therefore,
\begin{align}\label{ptw}
|w(t,x)|\le |u(t,x_*)-u(t,\bar x_*)|\lesssim \delta{\varepsilon}^{\frac32}({\varepsilon}^4+t^2)^{-\frac 54} |x^\perp|^2e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.
\end{align}
To control the $L_{t,x}^{\frac {10}3}$ norm of $w$ we use \eqref{ptw} together with \eqref{estsmall}, as follows:
\begin{align*}
\int_{\mathbb{R}}\int_\Omega|w(t,x)|^{\frac{10}3} \,dx\,dt
&\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac73}\int _0^{\infty}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac{25}6} \!\! \int_{|x^\perp|\le{\varepsilon}^{\frac 12}} e^{-\frac{5{\varepsilon}^2|x^\perp|^2}{6({\varepsilon}^4+t^2)}}|x^\perp|^{\frac {20}3} \,dx^\perp dt\\
&\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac73}\int_0^{\infty}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac{25}6}\min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{13}3},{\varepsilon}^{\frac{13}3}\biggr\}\,dt\\
&\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac 83}\int_0^{{\varepsilon}^{\frac 32}}({\varepsilon}^4+t^2)^{\frac 16}\,dt+\delta^{\frac{10}3}{\varepsilon}^{\frac{31}3}\int_{{\varepsilon}^{\frac 32}}^\infty({\varepsilon}^4+t^2)^{-\frac{25}6}\,dt\\
&\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac 23}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
\begin{lem}[Estimate for $(i\partial_t+\Delta)v$]\label{L:ve}
We have
\begin{align}\label{ve}
\|(i\partial_t+\Delta)v\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta {\varepsilon}^{-\frac34}.
\end{align}
\end{lem}
\begin{proof}
Using the definition of $v(t,x)$, we compute
\begin{align}
(i\partial_t+\Delta)v(t,x)
&=(i\partial_t+\Delta)\Bigl\{\bigl[u(t,x)-u(t,\bar x)\bigr]\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\notag\\
&=\bigl[u(t,x)-u(t,\bar x)\bigr]\Delta\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge-\frac 12\}}\Bigr\}\label{1v}\\
&\quad+2\nabla\bigl[u(t,x)-u(t,\bar x)\bigr]\cdot \nabla\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}.\label{2v}
\end{align}
We first consider the contribution of \eqref{1v}. A direct analysis yields that for $x\in \Omega$ in the support of \eqref{1v},
\begin{align*}
|x_3|\lesssim {\varepsilon}, \quad |x^{\perp}|\ge \sqrt{{\varepsilon}/2}, \qtq{and} \Bigl|\Delta\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\Bigr|\lesssim {\varepsilon}^{-2}.
\end{align*}
Thus, by the mean value theorem,
\begin{align}
\biggl| e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr|
&\lesssim e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)} }\frac{\delta|x_3|}{\sqrt{{\varepsilon}^4+t^2}}
\lesssim \delta{\varepsilon}({\varepsilon}^4+t^2)^{-\frac12}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.\label{7c}
\end{align}
This yields the pointwise bound
\begin{align}
|\eqref{1v}|\lesssim {\varepsilon}^{-2}|u(t,x)-u(t,\bar x)|\lesssim \delta{\varepsilon}^{\frac 12}({\varepsilon}^4+t^2)^{-\frac54}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.\label{p1v}
\end{align}
Using \eqref{p1v} followed by \eqref{estbig} (with $\alpha=0$) and \eqref{514} (with $\beta=\frac 34$), we obtain
\begin{align*}
\|\eqref{1v}\|_{L_t^1 L_x^2({\mathbb{R}}\times\Omega)}
& \lesssim {\varepsilon}^{\frac12}\delta{\varepsilon}^{\frac 12} \int_0^{\infty}({\varepsilon}^4+t^2)^{-\frac 54}\biggl(\int_{|x^\perp|\ge \sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}}\,dx^\perp\biggr)^{\frac 12} \,dt\\
&\lesssim \delta\int_0^{\infty}({\varepsilon}^4+t^2)^{-\frac 54+\frac12}\min\biggl\{1,\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\}\,dt\\
&\lesssim \delta{\varepsilon}^{-\frac 34}.
\end{align*}
We now consider the contribution of \eqref{2v}. For $x\in \Omega$ in the support of \eqref{2v}, we have
\begin{align*}
|x_3|\lesssim {\varepsilon}, \quad |x^{\perp}|\ge \sqrt{{\varepsilon}/2}, \qtq{and}
\Bigl|\nabla\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\Bigr|\lesssim {\varepsilon}^{-1}.
\end{align*}
Using that $|x-\delta \bar y|=|\bar x-\delta y|$, we compute
\begin{align*}
\nabla \biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x- \delta y|^2}{4({\varepsilon}^2+it)}}\biggr)
&=-\frac{x-\delta y}{2({\varepsilon}^2+it)}e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}+ \frac{x-\delta\bar y}{2({\varepsilon}^2+it)}e^{-\frac{|x-\delta\bar y|^2}{4({\varepsilon}^2+it)}}\\
&=-\frac x{2({\varepsilon}^2+it)}\biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|x-\delta \bar y|^2}{4({\varepsilon}^2+it)}}\biggr)\\
&\quad+ \frac{\delta y_3 e_3}{2({\varepsilon}^2+it)}\biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}+e^{-\frac{|x-\delta \bar y|^2}{4({\varepsilon}^2+it)}}\biggr).
\end{align*}
Thus, for $x\in \Omega$ in the support of \eqref{2v} we have
\begin{align*}
\bigl|\nabla\bigl[ & u(t,x)-u(t,\bar x)\bigr]\bigr|\\
&\lesssim\biggl(\frac{{\varepsilon}^2}{\eps^4+t^2}\biggr)^{\frac34}\biggl\{\frac{|x|}{\sqrt{\eps^4+t^2}}\frac{{\varepsilon}\delta}{\sqrt{\eps^4+t^2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}+\frac{\delta}{\sqrt{\eps^4+t^2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\biggr\} \\
&\lesssim\Bigl\{{\varepsilon}^{\frac 72}\delta(\eps^4+t^2)^{-\frac 74}+{\varepsilon}^{\frac 52}\delta(\eps^4+t^2)^{-\frac 74}|x^{\perp}|+{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac 54}\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\\
&\lesssim \Bigl\{{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac 54}+{\varepsilon}^{\frac52}\delta(\eps^4+t^2)^{-\frac74}|x^{\perp}|\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}},
\end{align*}
which yields the pointwise bound
\begin{align*}
|\eqref{2v}|\lesssim \Bigl\{{\varepsilon}^{\frac 12}\delta(\eps^4+t^2)^{-\frac54}+{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac74}|x^{\perp}|\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
Using \eqref{estbig} followed by \eqref{514}, we estimate the contribution of \eqref{2v} as follows:
\begin{align*}
\|\eqref{2v}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}
&\lesssim {\varepsilon}^{\frac12}{\varepsilon}^{\frac 12}\delta\int_0^\infty(\eps^4+t^2)^{-\frac 54}\biggl(\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac12}\,dt\\
&\quad+{\varepsilon}^{\frac 12}{\varepsilon}^{\frac32}\delta\int_0^\infty(\eps^4+t^2)^{-\frac74}\biggl(\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2}}|x^{\perp}|^2e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12}\,dt\\
&\lesssim \delta\int_0^{\infty}(\eps^4+t^2)^{-\frac 34}\min\biggl\{1,\biggl(\frac{\eps^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\} \,dt\\
&\lesssim \delta{\varepsilon}^{-\frac 34}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
\begin{lem}[Estimate for $(i\partial_t+\Delta)w$]\label{L:we1}
We have
\begin{align}\label{we1}
\|(i\partial_t+\Delta)w\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta {\varepsilon}^{-\frac34} + \delta^3{\varepsilon}^{-2}.
\end{align}
\end{lem}
\begin{proof}
We compute
\begin{align}
(i\partial_t + \Delta)w\!\!&\notag\\
&=\Bigl\{(i\partial_t+\Delta)\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\label{w1}\\
&\quad+2\nabla\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\cdot \nabla\label{w2}\\
&\quad+\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\ \Delta \Bigr\}\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\theta\bigl(\tfrac{\dist(x,\Omega^c)}{{\varepsilon}}\bigr)\chi_{\{x_3\ge-\frac 12\}}\label{w3}.
\end{align}
We first consider the contribution of \eqref{w3}. Using \eqref{ptw}, we obtain the pointwise bound
\begin{align*}
|\eqref{w3}|\lesssim \delta{\varepsilon}^{-\frac 12}(\eps^4+t^2)^{-\frac 54}|x^{\perp}|^2 e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
Thus using \eqref{512} and the fact that $|x_3|\lesssim {\varepsilon}$ for $x\in\supp w$, we obtain
\begin{align*}
\|\eqref{w3}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}
&\lesssim \delta\int_0^\infty(\eps^4+t^2)^{-\frac 54} \biggl(\int_{|x^{\perp}|\le\sqrt {\varepsilon}}|x^{\perp}|^4e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt
\lesssim\delta{\varepsilon}^{-\frac 34}.
\end{align*}
Next we consider the contribution of \eqref{w2}. As $\frac{\partial x_*}{\partial x_3}=0$, $\nabla[u(t,x_*)-u(t,\bar x_*)]$ has no component in the $e_3$ direction. For the remaining directions we have
\begin{equation*}
\begin{aligned}
\nabla_{\perp}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]
&= \tfrac{-x^{\perp}}{2({\varepsilon}^2+it)}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr] \\
&\quad - (\nabla_\perp x_{*3}) \bigl[\tfrac{x_{*3}-\delta y_3}{2({\varepsilon}^2+it)} u(t,x_*)- \tfrac{x_{*3}+\delta y_3}{2({\varepsilon}^2+it)} u(t,\bar x_*)\bigr].
\end{aligned}
\end{equation*}
Using \eqref{ptw}, $|\nabla_\perp x_{*3}|\lesssim |x^{\perp}|$, and $|x_{*3}|\lesssim {\varepsilon}$, we deduce
\begin{align*}
\bigl|\nabla_{\perp}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\bigr|
&\lesssim \bigl[ \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^3 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54}|x^{\perp}| \bigr] e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
This gives the pointwise bound
\begin{align*}
|\eqref{w2}|&\lesssim \bigl[\delta{\varepsilon}^{\frac 12} (\eps^4+t^2)^{-\frac74}|x^{\perp}|^3 + \delta{\varepsilon}^{\frac 12} (\eps^4+t^2)^{-\frac54}|x^{\perp}| \bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
Using \eqref{512}, we thus obtain
\begin{align*}
\|\eqref{w2}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}
&\lesssim {\varepsilon}\delta\int_0^\infty (\eps^4+t^2)^{-\frac74}\biggl(\int_{|x^{\perp}|\le\sqrt{\varepsilon}}|x^{\perp}|^6e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt\\
&\quad + {\varepsilon}\delta\int_0^\infty (\eps^4+t^2)^{-\frac54}\biggl(\int_{|x^{\perp}|\le\sqrt{\varepsilon}}|x^{\perp}|^2e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt\\
&\lesssim \delta{\varepsilon}^{-\frac34} + \delta{\varepsilon}^{-\frac14}\lesssim \delta{\varepsilon}^{-\frac34}.
\end{align*}
Lastly, we consider \eqref{w1}. We begin with the contribution coming from the term $\partial_t\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]$, which we denote by $\eqref{w1}_{\partial_t}$. We start by deriving a pointwise bound on this term. A straightforward computation using \eqref{dif} yields
\begin{align*}
&\bigl|\partial_t\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\bigr|\\
&\lesssim \frac{{\varepsilon}^{\frac32}}{(\eps^4+t^2)^{\frac 54}}\Bigl| e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\Bigr|\\
&\quad +\biggl(\frac {{\varepsilon}^2}{\eps^4+t^2}\biggr)^{\frac 34}\frac 1{\eps^4+t^2}\biggl||x_*-\delta y|^2 e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-|\bar x_*-\delta y|^2 e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr|\\
&\lesssim\bigl[{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac54}+{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^2\bigr]\Bigl|e^{-\frac{|x_*-\delta
y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\Bigr|\\
&\quad +{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 74}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\Bigl||x_{*3}-\delta y_3|^2e^{-\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)}}-|x_{*3}+\delta y_3|^2 e^{-\frac{|x_{*3}+\delta y_3|^2}{4({\varepsilon}^2+it)}}\Bigr|\\
&\lesssim \bigl[{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 54}+{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac 74}|x^{\perp}|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\delta(\eps^4+t^2)^{-\frac12}|x^{\perp}|^2\\
&\quad +{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 74}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\bigl[\delta|x_{*3}|+(|x_{*3}|^2+\delta^2)\delta (\eps^4+t^2)^{-\frac12}|x^\perp|^2\bigr]\\
&\lesssim e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\Bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac 74}|x^{\perp}|^2+
\delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac94}|x^{\perp}|^4+{\varepsilon}^{\frac 32}\delta^3(\eps^4+t^2)^{-\frac94}|x^{\perp}|^2\Bigr],
\end{align*}
where in order to obtain the third inequality we have used the identity $2(ab-cd)=(a-c)(b+d)+(a+c)(b-d)$. Using \eqref{512} as before, we obtain
\begin{align*}
\|\eqref{w1}_{\partial_t}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}
&\lesssim \delta{\varepsilon}^{-\frac 14} + \delta{\varepsilon}^{-\frac 34} + \delta^3{\varepsilon}^{-2}\lesssim \delta{\varepsilon}^{-\frac 34} + \delta^3{\varepsilon}^{-2}.
\end{align*}
We now turn to the Laplacian term in \eqref{w1}, which we denote by $\eqref{w1}_\Delta$. For a generic function $f:{\mathbb{R}}^3\to{\mathbb{C}}$,
\begin{equation}\label{bits&pieces}
\Delta f(x_*) = [\Delta_\perp f + 2 (\nabla_\perp x_{*3})\cdot (\nabla_\perp \partial_3 f) + (\Delta_\perp x_{*3})(\partial_3 f)
+ |\nabla_\perp x_{*3}|^2(\partial_3^2 f) ](x_*).
\end{equation}
Using this formula with $f(x) := u(t,x) - u(t,\bar x)$, we first derive a pointwise bound on $\eqref{w1}_\Delta$.
A direct computation gives
\begin{align*}
(\Delta_{\perp}f)(x_*)=\biggl[-\frac{1}{{\varepsilon}^2+it} + \frac{|x^{\perp}|^2}{4({\varepsilon}^2+it)^2}\biggr] \bigl[u(t,x_*)-u(t,\bar x_*)\bigr].
\end{align*}
Therefore, using \eqref{ptw} we obtain the pointwise bound
\begin{align*}
\bigl|(\Delta_{\perp}f)(x_*)\bigr|
&\lesssim \bigl[\delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^2+\delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 94}|x^{\perp}|^4\bigr] e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
Next, we combine $|\nabla_\perp x_{*3}|\lesssim |x^\perp|$ with
$$
(\nabla_\perp \partial_3 f)(x_*) = \frac{(x_{*3}-\delta y_3)x^\perp}{4({\varepsilon}^2+it)^2} u(t,x_*)
- \frac{(x_{*3}+\delta y_3)x^\perp}{4({\varepsilon}^2+it)^2} u(t,\bar x_*),
$$
$|x_{*3}|\lesssim |x^\perp|^2$, \eqref{ptw}, and the crude bound
\begin{equation}\label{u size}
|u(t,x_*)|+|u(t,\bar x_*)| \lesssim {\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac34} e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}
\end{equation}
to obtain
\begin{align*}
\bigl|(\nabla_\perp x_{*3})\cdot (\nabla_\perp \partial_3 f)\bigr|
&\lesssim \bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^6 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
Next we use $|\Delta_\perp x_{*3}|\lesssim 1$ and $|x_{*3}\pm\delta y_3|\lesssim \delta$ together with elementary computations to find
$$
\bigl|(\Delta_\perp x_{*3})(\partial_3 f)\bigr| \lesssim \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54} e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
$$
$$
Toward our last pointwise bound, we compute
$$
(\partial_3^2 f)(x_*) = \biggl[\frac{-1}{2({\varepsilon}^2+it)}+\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)^2}\biggr] u(t,x_*)
- \biggl[\frac{-1}{2({\varepsilon}^2+it)}+\frac{|x_{*3}+\delta y_3|^2}{4({\varepsilon}^2+it)^2}\biggr] u(t,\bar x_*).
$$
Combining this with \eqref{ptw}, \eqref{u size}, $|\nabla_\perp x_{*3}|\lesssim |x^\perp|$, and $|x_{*3}|\lesssim |x^\perp|^2\lesssim{\varepsilon}\lesssim\delta$ yields
\begin{align*}
&\bigl||\nabla_\perp x_{*3}|^2(\partial_3^2 f) (x_*)\bigr|\\
&\lesssim \bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^4 + \delta^3{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^4
+ \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
\end{align*}
We now put together all the pieces from \eqref{bits&pieces}. Using $|x^\perp|^2 \lesssim \delta \lesssim 1$ so as to keep only the largest terms, we obtain
$$
\bigl| \Delta f (x_*) \bigr| \lesssim \bigl[
\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54} + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^4\bigr]
e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
$$
Using \eqref{512} as before, we thus obtain
\begin{align*}
\|\eqref{w1}_\Delta\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta+\delta{\varepsilon}^{-\frac 14}+\delta{\varepsilon}^{-\frac 34}\lesssim \delta{\varepsilon}^{-\frac 34}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
Collecting Lemmas~\ref{L:we}, \ref{L:ve}, and \ref{L:we1} and recalling that ${\varepsilon}\leq \delta\leq {\varepsilon}^{6/7}$, we derive \eqref{E:case4 estimates}.
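Indeed, since ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, the right-hand sides of \eqref{we}, \eqref{ve}, and \eqref{we1} obey
$$
\delta{\varepsilon}^{-\frac15}\leq{\varepsilon}^{\frac{23}{35}},\qquad
\delta{\varepsilon}^{-\frac34}\leq{\varepsilon}^{\frac{3}{28}},\qquad
\delta^3{\varepsilon}^{-2}\leq{\varepsilon}^{\frac47},
$$
all of which are $o(1)$ as ${\varepsilon}\to0$.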
This in turn yields \eqref{compl}, which combined with Lemma~\ref{L:v matters} proves Proposition~\ref{P:LF2}.
\end{proof}
This completes the proof of Theorem~\ref{T:LF2} and so the discussion of Case~(iv).
\subsection{Case (v)} In this case we have $N_n\to \infty$ and $N_n^{-6/7}\leq d(x_n) \lesssim 1$. As in Case~(iv), we rescale so that the
obstacle is restored to its original (unit) size. Correspondingly, the initial data has characteristic scale ${\varepsilon}:= N_n^{-1}\to 0$ and is supported within a distance $O({\varepsilon})$ of the origin, which is at a distance $\delta:=d(x_n)$ from the obstacle. A schematic representation is given in Figure~\ref{F:case5}. There and below,
\begin{equation}\label{E:psi eps defn}
\psi_{\varepsilon}(x) := {\varepsilon}^{-3/2} \psi\bigl(\tfrac x{{\varepsilon}}\bigr).
\end{equation}
In this way, the treatment of Case~(v) reduces to the following assertion:
\begin{thm}\label{T:LF3} Fix $\psi\in C^\infty_c({\mathbb{R}}^3)$ and let $\psi_{\varepsilon}$ be as in \eqref{E:psi eps defn}. Then for any pair $(q,r)$ satisfying
$\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$, we have
\begin{align}\label{main}
\|e^{it\Delta_{\Omega({\varepsilon})}}\psi_{\varepsilon}-e^{it\Delta}\psi_{\varepsilon}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}\to 0 \qtq{as} {\varepsilon}\to 0,
\end{align}
for any ${\varepsilon}$-dependent family of domains $\Omega({\varepsilon})$ that are affine images of $\Omega$
with the property that $\delta:=\dist(0,\Omega({\varepsilon})^c) \geq {\varepsilon}^{6/7}$ and $\delta\lesssim 1$.
\end{thm}
\begin{figure}
\caption{Depiction of Case~(v); here ${\varepsilon}:=N_n^{-1}$ and $\delta:=d(x_n)$.}\label{F:case5}
\end{figure}
We now begin the proof of Theorem~\ref{T:LF3}. By interpolation and the Strichartz inequality, it suffices to treat the case $q=r=\frac{10}3$. By time-reversal symmetry, it suffices to consider positive times only, which is what we will do below. To ease notation, we write $\Omega$ for $\Omega({\varepsilon})$ for the remainder of this subsection.
The first step in the proof is to write $\psi_{\varepsilon}$ as a superposition of Gaussian wave packets; we will then investigate the evolution of the individual wave packets. The basic decomposition is given by the following lemma. The parameter $\sigma$ denotes the initial width of the Gaussian wave packets. It is chosen large enough so that the wave packets hold together until they collide with the obstacle. This ensures that they reflect in an almost particle-like manner and allows us to treat the reflected wave in the realm of geometric optics. Indeed, the particle-like regime lasts for time $\sim\sigma^2$, while the velocity of the wave packet is $2\xi=\tfrac{2n}{L}$, which is $\sim{\varepsilon}^{-1}$ for the dominant terms, up to double logarithmic factors (cf. \eqref{auto}). As the obstacle is $\delta$ away from the origin, it takes the dominant wave packets time $\sim\delta{\varepsilon}$ to reach the obstacle (up to $\log\log(\frac1{\varepsilon})$ factors), which is much smaller than $\sigma^{2}=\delta{\varepsilon}\log^2(\frac1{\varepsilon})$. Moreover, $\sigma$ is chosen small enough that the individual wave packets disperse shortly after this collision. In addition to the main geometric parameters ${\varepsilon}$ and $\delta$, we also need two degrees of small parameters; these are $[\log(\frac1{\varepsilon})]^{-1}$ and $[\log\log(\frac1{\varepsilon})]^{-1}$.
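In symbols, writing $t_{\mathrm{coll}}$ for the time of first incidence of a dominant packet (one with $|\xi_n|\sim{\varepsilon}^{-1}$, up to the $\log\log(\frac1{\varepsilon})$ factors recorded in \eqref{auto} below), the relevant time scales compare as
$$
t_{\mathrm{coll}}\sim\frac{\delta}{2|\xi_n|}\sim\delta{\varepsilon}
\ll{\varepsilon}\delta\log^2(\tfrac1{\varepsilon})=\sigma^2,
$$
so such a packet is still coherent when it first meets the obstacle, while for times much larger than $\sigma^2$ it has already spread out.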
\begin{lem}[Wave packet decomposition] \label{decomposition}
Fix $\psi\in C_c^{\infty}({\mathbb{R}}^3)$ and let $0<{\varepsilon}\ll 1$,
$$
\sigma:=\sqrt{{\varepsilon}\delta}\log(\tfrac 1{{\varepsilon}}) \qquad\text{and}\qquad L:=\sigma \log\log(\tfrac 1{{\varepsilon}}).
$$
Then there exist coefficients $\{c_n^{{\varepsilon}}\}_{n\in{\mathbb{Z}}^3}$ so that
\begin{equation}\label{expand}
\biggl\|\psi_{\varepsilon}(x)-\sum_{n\in{\mathbb{Z}}^3}c_n^{{\varepsilon}}(2\pi\sigma^2)^{-\frac34}\exp\Bigl\{-\frac {|x|^2}{4\sigma^2}+in\cdot\frac
xL\Bigr\}\biggr\|_{L^2({\mathbb{R}}^3)}=o(1)
\end{equation}
as ${\varepsilon}\to 0$. Moreover,
\begin{equation}\label{bdforc}
|c_n^{{\varepsilon}}|\lesssim_{k,\psi} \frac {(\sigma{\varepsilon})^{\frac 32}}{L^3}\min\biggl\{1,\Bigl(\frac L{{\varepsilon}|n|}\Bigr)^k\biggr\} \qtq{for all} k\in {\mathbb{N}}
\end{equation}
and \eqref{expand} remains true if the summation is only taken over those $n$ belonging to
\begin{align}\label{auto}
\mathcal S:=\biggl\{n\in {\mathbb{Z}}^3: \,\frac 1{{\varepsilon}\log\log(\frac 1{{\varepsilon}})}\leq \frac {|n|}L \leq \frac {\log\log(\frac1{{\varepsilon}})}{{\varepsilon}}\biggr\}.
\end{align}
\end{lem}
\begin{proof}
For $n\in{\mathbb{Z}}^3$, let
$$
\gamma_n(x):=(2\pi\sigma^2)^{-\frac 34} \exp\bigl\{-\tfrac{|x|^2}{4\sigma^2}+in\cdot\tfrac xL\bigr\}.
$$
Note that $\|\gamma_n\|_{L^2({\mathbb{R}}^3)}=1$. We define
$$
c_n^{{\varepsilon}}:=(2\pi L)^{-3}\int_{[-\pi L,\pi L]^3}\psi_{\varepsilon}(x)(2\pi\sigma^2)^{\frac 34} \exp\bigl\{\tfrac{|x|^2}{4\sigma^2}-in\cdot\tfrac xL\bigr\} \, dx.
$$
Then by the convergence of Fourier series we have
\begin{equation}\label{eq1}
\psi_{\varepsilon}(x)=\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}} \gamma_n(x) \quad\text{for all}\quad x\in[-\pi L,\pi L]^3.
\end{equation}
Taking ${\varepsilon}$ sufficiently small, we can guarantee that $\supp \psi_{\varepsilon}\subseteq[-\frac{\pi L}2, \frac{\pi L}2]^3$. Thus,
to establish \eqref{expand} we need to show that outside the cube $[-\pi L,\pi L]^3$, the series only contributes a small error.
Indeed, let $k\in {\mathbb{Z}}^3\setminus \{ 0 \}$ and $Q_k:=2\pi kL+[-\pi L,\pi L]^3$; using the periodicity of Fourier series, we obtain
$$
\Bigl\|\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}^2=\int_{[-\pi L,\pi L]^3}|\psi_{\varepsilon}(x)|^2\exp\bigl\{\tfrac{|x|^2}{2\sigma^2} - \tfrac{|x+2\pi kL|^2}{2\sigma^2}\bigr\} \, dx.
$$
As on the support of $\psi_{\varepsilon}$ we have $|x|\le \tfrac12 \pi L\leq \tfrac 14 |2\pi kL|$, we get
$$
\Bigl\|\sum_{n\in {\mathbb{Z}}^3} c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}^2
\lesssim \|\psi\|_{L^2({\mathbb{R}}^3)}^2 \exp\bigl\{ -\tfrac{\pi^2 k^2L^2}{2\sigma^2} \bigr\}.
$$
Summing in $k$ and using \eqref{eq1}, we obtain
\begin{align*}
\Bigl\|\psi_{\varepsilon}-\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2({\mathbb{R}}^3)}
&\lesssim \sum_{k\in {\mathbb{Z}}^3\setminus\{0\}} \Bigl\|\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}\\
&\lesssim \|\psi\|_{L^2({\mathbb{R}}^3)}\sum_{k\in{\mathbb{Z}}^3\setminus \{0\}}\exp \bigl\{ -\tfrac {\pi^2 k^2L^2}{4\sigma^2} \bigr\}\\
&\lesssim_{\psi} e^{-\frac {\pi^2 L^2}{4\sigma^2}}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
This proves \eqref{expand}.
Next we prove the upper bound \eqref{bdforc}. From the definition of $c_n^{{\varepsilon}}$, we immediately obtain
\begin{align*}
|c_n^{{\varepsilon}}|&\lesssim \frac {\sigma^{\frac32}}{L^3}\bigl\|\psi_{\varepsilon}(x)e^{\frac{|x|^2}{4\sigma^2}}\bigr\|_{L^1({\mathbb{R}}^3)} \lesssim\frac{(\sigma{\varepsilon})^{\frac 32}}{L^3}.
\end{align*}
To derive the other upper bound, we use integration by parts. Let $\mathbb D:= i\frac {Ln}{|n|^2}\cdot\nabla$; note that
$\mathbb D^k e^{-in\frac xL}=e^{-in\frac xL}$. The adjoint of $\mathbb D$ is given by $\mathbb D^t=-i\nabla\cdot\frac{Ln}{|n|^2}$.
We thus obtain
\begin{align*}
|c_n^{{\varepsilon}}|
&=(2\pi L)^{-3}\biggl|\int_{{\mathbb{R}}^3} \mathbb D^k e^{-in\frac xL}\psi_{\varepsilon}(x)(2\pi\sigma^2)^{\frac 34} e^{\frac{|x|^2}{4\sigma^2}}dx\biggr|\\
&=(2\pi L)^{-3}\biggl|\int_{{\mathbb{R}}^3} e^{-in\frac xL}(\mathbb D^t)^k\Bigl[ {\varepsilon}^{-\frac 32}\psi\Bigl(\frac x{\varepsilon}\Bigr)(2\pi\sigma^2)^{\frac 34}e^{\frac{|x|^2}{4\sigma^2}}\Bigr]\,dx\biggr|\\
&\lesssim L^{-3}\Bigl(\frac L{|n|}\Bigr)^k\Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^{\frac 32}\sum_{|\alpha|\leq k}\Bigl\|\partial^\alpha\Bigl[\psi\Bigl(\frac x{{\varepsilon}}\Bigr)e^{\frac {|x|^2}{4\sigma^2}}\Bigr]\Bigr\|_{L^1({\mathbb{R}}^3)}\\
&\lesssim_{k,\psi}L^{-3}\Bigl(\frac L{|n|}\Bigr)^k\Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^{\frac 32}{\varepsilon}^{3-k}\\
&\lesssim_{k,\psi}\frac {({\varepsilon}\sigma)^{\frac 32}}{L^3}\Bigl(\frac L{{\varepsilon}|n|}\Bigr)^k.
\end{align*}
This proves \eqref{bdforc}.
To derive the last claim, we first note that
\begin{equation}\label{E:gamma inner prod}
\int_{{\mathbb{R}}^3} \gamma_n(x)\overline{\gamma_m(x)}\,dx=e^{-\frac{\sigma^2}{2L^2}|n-m|^2}.
\end{equation}
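This identity is the standard Gaussian Fourier transform computation:
\begin{align*}
\int_{{\mathbb{R}}^3} \gamma_n(x)\overline{\gamma_m(x)}\,dx
=(2\pi\sigma^2)^{-\frac32}\int_{{\mathbb{R}}^3} e^{-\frac{|x|^2}{2\sigma^2}} e^{i(n-m)\cdot\frac xL}\,dx
=e^{-\frac{\sigma^2}{2L^2}|n-m|^2}.
\end{align*}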
Now fix $N\in {\mathbb{N}}$. For $|n|\le N$, we use the first upper bound for $c_n^{{\varepsilon}}$ to estimate
\begin{align*}
\Bigl\| \sum_{|n|\le N}c_n^{{\varepsilon}}\gamma_n \Bigr\|_{L^2({\mathbb{R}}^3)}^2
&\lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}\sum_{|n|,|m|\le N} e^{-\frac{\sigma^2}{2L^2}|n-m|^2}
\lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}N^3\Bigl(\frac L{\sigma}\Bigr)^3 \lesssim_\psi \Bigl(\frac {{\varepsilon} N}L\Bigr)^3.
\end{align*}
For $|n|\geq N$, we use the second upper bound for $c_n^{{\varepsilon}}$ (with $k=3$) to estimate
\begin{align*}
\biggl\|\sum_{|n|\ge N}c_n^{{\varepsilon}}\gamma_n \biggr\|_{L^2}^2
&\lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}\Bigl(\frac L{{\varepsilon}}\Bigr)^6
\sum_{|n|\ge |m|\ge N} \frac 1{|n|^3} \frac 1{|m|^3} e^{-\frac {\sigma^2}{2L^2}|n-m|^2}\\
&\lesssim_{\psi} \Bigl(\frac\sigma{{\varepsilon}}\Bigr)^3\sum_{|n|\ge|m|\ge N}\frac 1{|m|^6} e^{-\frac {\sigma^2}{2L^2}|n-m|^2}\\
&\lesssim_\psi \Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^3\Bigl(\frac L{\sigma}\Bigr)^3 \sum_{|m|\ge N}\frac 1{|m|^6}\\
&\lesssim_\psi \Bigl(\frac L{{\varepsilon} N}\Bigr)^3.
\end{align*}
Thus,
\begin{align}\label{error}
\biggl\|&\sum_{|n|\le \frac L{{\varepsilon}\log\log(\frac 1{{\varepsilon}})}}c_n^{{\varepsilon}}\gamma_n\biggr\|_{L^2_x}^2+\biggl\|\sum_{|n|\ge {\frac L{\varepsilon}\log\log(\frac 1{{\varepsilon}})}}c_n^{{\varepsilon}}\gamma_n\biggr\|_{L^2_x}^2\lesssim_{\psi}[\log\log(\tfrac 1{{\varepsilon}})]^{-3}=o(1)
\end{align}
as ${\varepsilon}\to 0$. This completes the proof of Lemma~\ref{decomposition}.
\end{proof}
Combining the Strichartz inequality with Lemma~\ref{decomposition}, proving Theorem~\ref{T:LF3} reduces to showing
\begin{align*}
\Bigl\|\sum_{n\in \mathcal S} c_n^{{\varepsilon}}\bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-e^{it\Delta_{{\mathbb{R}}^3}}\gamma_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to 0.
\end{align*}
Recall that the linear Schr\"odinger evolution of a Gaussian wave packet in the whole space has a simple explicit expression:
\begin{align*}
u_n(t,x):=[e^{it\Delta_{{\mathbb{R}}^3}}\gamma_n](x)=\frac 1{(2\pi)^{\frac34}}\biggl(\frac {\sigma}{\sigma^2+it}\biggr)^{\frac32}\exp\biggl\{ix\cdot \xi_n-it|\xi_n|^2-\frac {|x-2\xi_nt|^2}{4(\sigma^2+it)}\biggr\},
\end{align*}
where $\xi_n:=\frac nL$.
\begin{defn}[Missing, near-grazing, and entering rays] \label{D:MEG}
Fix $n\in \mathcal S$. We say $u_n$ \emph{misses the obstacle} if
\begin{align*}
\dist(2t\xi_n, \Omega^c)\ge \frac{|2t\xi_n|}{[\log\log(\tfrac 1\eps)]^4} \qtq{for all} t\geq 0.
\end{align*}
Let $$\mathcal M=\{n\in \mathcal S : u_n\mbox{ misses the obstacle}\}.$$
If the ray $2t\xi_n$ intersects the obstacle, let $t_c\geq0$ and $x_c=2t_c\xi_n\in \partial\Omega$ denote the time and location of
first incidence, respectively. We say $u_n$ \emph{enters the obstacle} if in addition
$$
\frac{|\xi_n\cdot\nu|}{|\xi_n|} \geq [\log\log(\tfrac 1\eps)]^{-4},
$$
where $\nu$ denotes the unit normal to the obstacle at the point $x_c$. Let
\begin{align*}
\mathcal E=\{n\in \mathcal S : u_n\mbox{ enters the obstacle}\}.
\end{align*}
Finally, we say $u_n$ is \emph{near-grazing} if it neither misses nor enters the obstacle. Let
\begin{align*}
\mathcal G=\{n\in \mathcal S : u_n \mbox{ is near-grazing}\}.
\end{align*}
\end{defn}
We first control the contribution of the near-grazing directions.
\begin{lem}[Counting $\mathcal G$] \label{L:counting G} The set of near-grazing directions constitutes a
vanishing fraction of the total directions. More precisely,
$$
\# \mathcal G \lesssim [\log\log(\tfrac 1\eps)]^{-4} \# \mathcal S \lesssim \Bigl(\frac L{\varepsilon}\Bigr)^3 [\log\log(\tfrac 1\eps)]^{-1}.
$$
\end{lem}
\begin{proof}
We claim that the near-grazing directions are contained in a
neighbourhood of width $O( [\log\log(\frac1{\varepsilon})]^{-4} )$ around the
set of grazing rays, that is, rays that are tangent to
$\partial\Omega$. We will first verify this claim and then explain
how the lemma follows. The objects of interest are depicted in
Figure~\ref{Fig.NG}. Two rays are shown, one which collides with the
obstacle and another that does not. The horizontal line
represents the nearest grazing ray. The origin, from which the
rays emanate, is marked $O$.
\begin{figure}
\caption{Near-grazing rays.}\label{Fig.NG}
\end{figure}
For rays that collide with the obstacle, the condition to be near-grazing is that $\sin(\phi) \leq [\log\log(\frac1{\varepsilon})]^{-4}$.
Here $\phi$ is the angle between the ray and the tangent plane to the obstacle at the point of collision. Convexity of the obstacle
guarantees that $\phi \geq \theta$. From this we deduce $\theta\lesssim [\log\log(\frac1{\varepsilon})]^{-4}$, in accordance with the claim made above.
Let us now consider rays that do not collide with the obstacle. We recall that to be near-grazing in this case there must be some time $t>0$ so that
$X=2\xi t$ is within a distance $2|\xi|t[\log\log(\frac1{\varepsilon})]^{-4}$ of a point $P$ on the obstacle. Then
$\theta\leq\tan\theta \leq \frac{|XP|}{|OX|} \leq [\log\log(\frac1{\varepsilon})]^{-4}$. This finishes the proof of the claim.
The set of directions corresponding to grazing rays is a smooth
curve whose length is uniformly bounded in terms of
the geometry of $\Omega$ alone. Moreover, we have shown that all
near-grazing directions lie within a neighbourhood of this curve of
thickness $O( [\log\log(\frac1{\varepsilon})]^{-4} )$. Noting that the
directions $\{\frac n{|n|} : n\in\mathcal S\}$ are uniformly
distributed on the sphere and much more tightly packed than the width of this neighbourhood,
the lemma follows from a simple area estimate.
\end{proof}
\begin{prop}[The near-grazing contribution]\label{P:ng} We have
$$
\Bigl\|\sum_{n\in \mathcal G} c_n^{{\varepsilon}}\bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to 0.
$$
\end{prop}
\begin{proof}
From the Strichartz inequality, it suffices to prove
\begin{align*}
\Bigl\| \sum_{n\in \mathcal G} c_n^{{\varepsilon}}\gamma_n \Bigr\|_{L^2({\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
Using \eqref{bdforc}, \eqref{E:gamma inner prod}, and Lemma~\ref{L:counting G}, we estimate
\begin{align*}
\Bigl\|\sum_{n\in \mathcal G} c_n^{{\varepsilon}} \gamma_n\Bigr\|_{L^2({\mathbb{R}}^3)}^2
&\lesssim \sum_{n,m\in \mathcal G}\frac {(\sigma {\varepsilon})^3}{L^6} e^{-\frac {\sigma^2}{2L^2}|n-m|^2}
\lesssim\sum_{n\in \mathcal G} \frac {(\sigma{\varepsilon})^3}{L^6}\Bigl(\frac L{\sigma}\Bigr)^3\lesssim [\log\log(\tfrac 1{{\varepsilon}})]^{-1},
\end{align*}
which converges to $0$ as ${\varepsilon}\to 0$.
\end{proof}
We now consider the contribution of rays that miss the obstacle in the sense of Definition~\ref{D:MEG}.
\begin{prop}[Contribution of rays that miss the obstacle]\label{P:missing}
Assume $n\in \mathcal M$. Then
\begin{equation}\label{432}
\|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-u_n\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times {\mathbb{R}}^3)}\lesssim {\varepsilon}^{100}
\end{equation}
for sufficiently small ${\varepsilon}$. Furthermore, we have
\begin{align}\label{249}
\Bigl\|\sum_{n\in \mathcal M} c_n^{\varepsilon} \bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align}
\end{prop}
\begin{proof} We first notice that \eqref{249} is an immediate consequence of \eqref{432}. Indeed, using the upper bound \eqref{bdforc} for
$c_n^{\varepsilon}$, we estimate
\begin{align*}
\Bigl\|\sum_{n\in \mathcal M} c_n^{\varepsilon}\bigl[e^{it\Delta_\Omega}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}
&\lesssim\sum_{|n|\le \frac L{\varepsilon}\log\log(\tfrac 1\eps)}\frac{(\sigma{\varepsilon})^{\frac32}}{L^3}{\varepsilon}^{100}\\
&\lesssim (\sigma{\varepsilon})^{\frac 32}{\varepsilon}^{97}[\log\log(\tfrac 1\eps)]^3=o(1).
\end{align*}
We are thus left to prove \eqref{432}.
As $u_n$ misses the obstacle, we have
\begin{align}\label{845}
\dist(2t\xi_n,\Omega^c)\ge \tfrac 12\delta[\log\log(\tfrac 1\eps)]^{-4} \qtq{for all} t\geq 0.
\end{align}
Indeed, when $|2t\xi_n|<\frac \delta 2$, the triangle inequality gives $\dist(2t\xi_n,\Omega^c)\ge \frac \delta 2$; when
$|2t\xi_n|\geq\frac \delta 2$, this bound follows immediately from Definition~\ref{D:MEG}.
Now let $\chi$ be a smooth cutoff that vanishes on the obstacle and equals $1$ when
\begin{align*}
\dist(x,\Omega^c)\ge \delta \log^{-1}(\tfrac 1{\varepsilon}).
\end{align*}
This cutoff can be chosen to also obey the following:
\begin{equation}\label{533}
|\nabla \chi|\lesssim \delta^{-1}\log(\tfrac 1\eps), \quad |\Delta \chi|\lesssim \delta^{-2}\log^2(\tfrac 1{\varepsilon}),\quad |\supp(\Delta\chi)|\lesssim \delta\log^{-1}(\tfrac 1{\varepsilon}).
\end{equation}
From \eqref{845} and the triangle inequality, we obtain
\begin{align}\label{1001}
\dist(2t\xi_n, \supp(1-\chi))&\ge\dist(2t\xi_n,\Omega^c)-\delta\log^{-1}(\tfrac 1{\varepsilon})\notag\\
&\ge \tfrac12 \delta[\log\log(\tfrac 1\eps)]^{-4}-\delta\log^{-1}(\tfrac 1{\varepsilon})\ge \tfrac14\delta[\log\log(\tfrac 1\eps)]^{-4}.
\end{align}
Moreover, when $|t|\ge \sigma^2$, we observe that
\begin{align}\label{1002}
\dist(2t\xi_n, \supp(1-\chi))&\ge \dist(2t\xi_n,\Omega^c)-\delta\log^{-1}(\tfrac 1{\varepsilon})\notag\\
&\ge \frac{|2t\xi_n|}{[\log\log(\tfrac 1\eps)]^4}-\frac\delta{\log(\tfrac 1\eps)}\ge\frac{|t\xi_n|}{[\log\log(\tfrac 1\eps)]^4}.
\end{align}
Here we have used the fact that $\delta\ll |2t\xi_n|$ for $t\ge\sigma^2$.
With these preliminaries out of the way, we are ready to begin proving \eqref{432}. By the triangle inequality,
\begin{align}
\text{LHS}\eqref{432}\le\|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|\chi
u_n-u_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}.\label{E:M}
\end{align}
We begin with the first term on the right-hand side of \eqref{E:M}. Using the Duhamel formula, we write
\begin{align*}
e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n
&=e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-e^{it\Delta_{\Omega}}(\chi\gamma_n)+e^{it\Delta_{\Omega}}(\chi \gamma_n)-\chi u_n\\
&=e^{it\Delta_{\Omega}}[(1_{\Omega}-\chi)\gamma_n]+i\int_0^te^{i(t-s)\Delta_{\Omega}}\bigl[\Delta\chi u_n+2\nabla \chi \cdot \nabla u_n\bigr]\,ds.
\end{align*}
Similarly, for the second term on the right-hand side of \eqref{E:M} we have
\begin{align*}
(1-\chi)u_n&=e^{it\Delta}(1-\chi)\gamma_n+i\int_0^te^{i(t-s)\Delta}\bigl[\Delta \chi u_n+2\nabla \chi \cdot \nabla u_n\bigr](s)\,ds.
\end{align*}
Thus, using the Strichartz inequality we obtain
\begin{align}
\text{LHS}\eqref{432}
&\lesssim\|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}+\|\Delta \chi u_n\|_{L^1_tL_x^2({\mathbb{R}}\times {\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2({\mathbb{R}}\times {\mathbb{R}}^3)}.\label{530}
\end{align}
The first term on the right-hand side of \eqref{530} can be easily controlled:
\begin{align*}
\|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}^2
&\lesssim \sigma^{-3}\int_{\supp(1-\chi)}e^{-\frac {|x|^2}{2\sigma^2}}dx \\
&\lesssim \sigma^{-3}\sigma^3 \exp\Bigl\{-\frac {\dist^2(0,\supp(1-\chi))}{4 \sigma^2}\Bigr\}\\
&\lesssim \exp\Bigl\{-\frac{\delta^2}{8{\varepsilon}\delta\log^2(\tfrac1{\varepsilon})}\Bigr\}\le{\varepsilon}^{200}.
\end{align*}
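The final inequality holds for ${\varepsilon}$ sufficiently small because $\delta\geq{\varepsilon}^{6/7}$; indeed,
$$
\frac{\delta^2}{8{\varepsilon}\delta\log^2(\tfrac1{\varepsilon})}=\frac{\delta}{8{\varepsilon}\log^2(\tfrac1{\varepsilon})}
\geq\frac{{\varepsilon}^{-\frac17}}{8\log^2(\tfrac1{\varepsilon})}\geq 200\log(\tfrac1{\varepsilon})
$$
once ${\varepsilon}$ is small enough, and $e^{-200\log(1/{\varepsilon})}={\varepsilon}^{200}$.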
To estimate the remaining terms on the right-hand side of \eqref{530}, we first observe that
\begin{align*}
|\nabla u_n|\lesssim |\xi_n||u_n|+\frac {|x-2\xi_n t|}{\sqrt{\sigma^4+t^2}}|u_n|
&\lesssim \bigl[|\xi_n |+\sigma^{-1}\bigr]\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}e^{-\frac{{\sigma^2}|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}.
\end{align*}
As $\sigma^{-1}\leq |\xi_n|\le \frac {\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}$, we obtain
\begin{align}\label{1244}
|u_n|+ |\nabla u_n| \lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}e^{-\frac{{\sigma^2}|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}.
\end{align}
To estimate the contribution of these terms, we discuss short and long times separately. For $0\leq t\le \sigma^2$, we use \eqref{1001} to estimate
\begin{align*}
\|u_n&\|_{L_t^1L_x^2(t\le \sigma^2, \ x\in \supp(1-\chi))} + \|\nabla u_n\|_{L_t^1L_x^2(t\le \sigma^2, \ x\in \supp(1-\chi))}\\
&\lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}} \sigma^2\sup_{0\leq t\le\sigma^2}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}\biggl\|\exp\Bigl\{-\frac{\sigma^2|x-2t\xi_n|^2}{8(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_x^2(\supp(1-\chi))}\\
&\lesssim\frac{\log\log(\tfrac 1\eps)}{{\varepsilon}} \sigma^2 \sup_{0\leq t\le\sigma^2}\biggl\|\exp\Bigl\{-\frac{\sigma^2|x-2t\xi_n|^2}{16(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_x^{\infty}
(\supp(1-\chi))}\\
&\lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}\sigma^2 \exp\biggl\{-\frac\delta{{\varepsilon}\log^3(\tfrac 1{\varepsilon})}\biggr\}\\
&\le{\varepsilon}^{110}.
\end{align*}
For $|t|>\sigma^2$, we use \eqref{533} and \eqref{1002} to obtain
\begin{align*}
\|u_n&\|_{L_t^1L_x^2(t>\sigma^2, \ x\in \supp(1-\chi))} + \|\nabla u_n\|_{L_t^1L_x^2(t> \sigma^2, \ x\in \supp(1-\chi))}\\
&\lesssim\bigl[\delta\log^{-1}(\tfrac 1{\varepsilon})\bigr]^{\frac 12}\bigl\||u_n|+|\nabla u_n|\bigr\|_{L_t^1L_x^{\infty}(t>\sigma^2, \ x\in\supp(1- \chi))}\\
&\lesssim \frac{\delta^{\frac 12}\sigma^{\frac 32}\log\log(\tfrac 1\eps)}{{\varepsilon}\log^{\frac 12}(\tfrac 1{\varepsilon})}\biggl\|t^{-\frac 32}\exp\Bigl\{-\frac{\sigma^2\dist^2(2t\xi_n,
\supp(1-\chi))}{8(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_t^1(t>\sigma^2)}\\
&\lesssim \frac{\delta^{\frac 12}\sigma^{\frac 12}\log\log(\tfrac 1\eps)}{{\varepsilon}\log^{\frac 12}(\tfrac 1{\varepsilon})} \exp\Bigl\{-\frac\delta{\varepsilon}\Bigr\}\\
&\le {\varepsilon}^{110}.
\end{align*}
Putting these two pieces together, we find
\begin{align*}
\|\Delta \chi u_n\|_{L_t^1L_x^2({\mathbb{R}}\times{\mathbb{R}}^3)}+\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2({\mathbb{R}}\times {\mathbb{R}}^3)}
\lesssim\delta^{-2}\log^2(\tfrac 1{\varepsilon}){\varepsilon}^{110}\le {\varepsilon}^{100}.
\end{align*}
This completes the proof of Proposition~\ref{P:missing}.
\end{proof}
In order to complete the proof of Theorem~\ref{T:LF3}, we need to estimate the contribution from the Gaussian wave packets $\gamma_n$ that
collide non-tangentially with the obstacle, that is, for $n\in\mathcal E$. This part of the argument is far more subtle than the treatment of
$n\in \mathcal G$ or $n\in \mathcal M$. Naturally, the entering wave packets reflect off the obstacle and we will need to build a careful parametrix to capture this reflection. Moreover, the convexity of the obstacle enters in a crucial way --- it ensures that the reflected waves do not refocus.
The treatment of the entering rays will occupy the remainder of this subsection. We begin with the simplest part of the analysis, namely,
the short time contribution. Here, short times means well before the wave packets have reached the obstacle. The estimate applies equally
well to all wave packets, irrespective of whether $n\in\mathcal E$ or not.
\begin{prop}[The contribution of short times]\label{P:short times}
Let $T:=\frac{{\varepsilon}\delta}{10\log\log(\tfrac 1\eps)}$. Then
\begin{align}\label{st}
\sum_{n\in \mathcal S} |c_n^{\varepsilon}|\bigl\|e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr\|_{L_{t,x}^{\frac{10}3} ([0,T]\times{\mathbb{R}}^3)}=o(1)
\qtq{as} {\varepsilon}\to 0.
\end{align}
\end{prop}
\begin{proof}
Let $\chi $ be a smooth cutoff that vanishes on the obstacle and equals $1$ when $\dist(x,\Omega^c)>\frac \delta{10}$. This cutoff can be chosen to also satisfy
\begin{align}\label{deta}
|\nabla \chi|\lesssim \delta^{-1} \qtq{and} |\Delta \chi|\lesssim \delta^{-2}.
\end{align}
Moreover, for $t\in[0,T]$ we have
\begin{align*}
|2t\xi_n |\le 2\frac{{\varepsilon}\delta}{10\log\log(\tfrac 1\eps)}\cdot\frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}=\frac15 \delta
\end{align*}
and so
\begin{align}\label{615}
\dist(2t\xi_n, \supp(1-\chi))\ge \tfrac12 \delta \qtq{for all} t\in[0, T].
\end{align}
The proof of this proposition is almost identical to that of Proposition~\ref{P:missing}, with the roles of \eqref{1001} and \eqref{1002} being played by \eqref{615}. Indeed, using the Duhamel formula and the Strichartz inequality as in the proof of Proposition~\ref{P:missing},
\begin{align*}
\|e^{it\Delta_{\Omega}}(&1_{\Omega}\gamma_n)-u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}\\
&\le\|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}+\|\chi u_n-u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}\\
&\lesssim \|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}+\|\Delta \chi u_n\|_{L^1_tL_x^2([0,T]\times{\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2([0,T]\times{\mathbb{R}}^3)}.
\end{align*}
The first term is estimated straightforwardly:
\begin{align*}
\|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}^2
&\lesssim\sigma^{-3}\int_{\supp(1-\chi)} e^{-\frac{|x|^2}{2\sigma^2}} \,dx
\lesssim e^{-\frac{\dist^2(0,\supp(1-\chi))}{4\sigma^2}}
\lesssim e^{-\frac{\delta^2}{16\sigma^2}}\le {\varepsilon}^{200}.
\end{align*}
For the remaining two terms, we use \eqref{1244} and \eqref{615} to estimate
\begin{align*}
\|u_n\|_{L_t^1L_x^2([0,T]\times\supp(1-\chi))} & + \|\nabla u_n\|_{L_t^1L_x^2([0,T]\times\supp(1-\chi))}\\
&\lesssim\delta\sup_{t\in[0,T]} \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\Bigl\|e^{-\frac{\sigma^2|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}\Bigr\|_{L_x^2(\supp(1-\chi))}\\
&\lesssim \delta \sup_{t\in[0,T]}\Bigl\|e^{-\frac{\sigma^2|x-2\xi_nt|^2}{16(\sigma^4+t^2)}}\Bigr\|_{L_x^{\infty}(\supp(1-\chi))}\\
&\lesssim \delta \sup_{t\in[0,T]} \exp\Bigl\{-\frac{\sigma^2\dist^2(2t\xi_n,\supp(1-\chi))}{32\sigma^4}\Bigr\} \\
&\lesssim \delta e^{-\frac{\delta^2}{128\sigma^2}}\le {\varepsilon}^{110}.
\end{align*}
This implies
\begin{align*}
\|\Delta \chi u_n\|_{L^1_tL_x^2([0,T]\times {\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2([0,T]\times {\mathbb{R}}^3)}
\lesssim \delta^{-2}{\varepsilon}^{110}\le {\varepsilon}^{100}.
\end{align*}
Collecting these estimates and using \eqref{bdforc} we obtain
\begin{align*}
\text{LHS}\eqref{st}\lesssim \sum_{n\in \mathcal S} \frac{(\sigma{\varepsilon})^{\frac32}}{L^3} {\varepsilon}^{100}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
This finishes the proof of Proposition~\ref{P:short times}.
\end{proof}
Now take $n\in \mathcal E$, which means that the wave packet $u_n(t,x)$ enters the obstacle. We write $t_c$ for the first time of intersection and $x_c=2t_c\xi_n$ for the location of this collision. Naturally both $t_c$ and $x_c$ depend on $n$; however, as most of the analysis will focus on one wave packet at a time, we suppress this dependence in the notation.
We approximate the wave generated by $u_n$ reflecting off $\partial\Omega$ by a Gaussian wave packet $v_n$ (or more accurately
by $-v_n$ since the Dirichlet boundary condition inverts the profile), which we define as follows:
\begin{align}\label{forv}
v_n(t,x):=&\Bigl(\frac {\sigma^2}{2\pi}\Bigr)^{\frac 34}\frac {(\det\Sigma)^{\frac 12}}{(\sigma^2+it_c)^{\frac 32}} [\det(\Sigma+i(t-t_c))]^{-\frac12}
\exp\Bigl\{i(x-x_c)\eta-it|\eta|^2\notag\\
&\qquad\qquad\qquad\qquad+ix_c\cdot \xi-\tfrac14(x-x(t))^T(\Sigma+i(t-t_c))^{-1}(x-x(t))\Bigr\},
\end{align}
where for simplicity we write $\xi=\xi_n$. The parameters $\eta$, which represents the momentum of the reflected wave packet, and $\Sigma$, which gives its covariance structure, will be specified shortly. Correspondingly, $x(t):=x_c+2\eta(t-t_c)$ represents the center of the reflected wave packet.
We define an orthonormal frame $(\vec\tau,\vec \gamma,\vec \nu)$ at the point $x_c\in\partial\Omega$, where $\vec \tau,\vec \gamma$ are
two tangent vectors to $\partial \Omega$ in the directions of the principal curvatures $\frac 1{R_1},\ \frac 1{R_2}$ and $\vec \nu$ is the
unit outward normal to the obstacle. Note that the obstacle being strictly convex amounts to $1\lesssim R_1,R_2<\infty$. Without loss
of generality, we may assume $R_1\le R_2$.
With this frame, we define $\eta:=\xi-2(\xi\cdot\vec\nu)\vec\nu$ as the reflection of $\xi$, in accordance with the basic law of reflection,
namely, the angle of incidence equals the angle of reflection. In this frame, $\Sigma^{-1}$ is defined as follows:
\begin{align}\label{E:Sigma defn}
\Sigma^{-1}=\frac 1{\sigma^2+it_c}\mathrm{Id}+iB,
\end{align}
where
\begin{align*}
B=\begin{pmatrix}
\frac {4\xi_3}{R_1} & 0 &\frac {4\xi_1}{R_1}\\
0 &\frac {4\xi_3}{R_2} &\frac {4\xi_2}{R_2}\\
\frac {4\xi_1}{R_1} &\frac {4\xi_2}{R_2} &\frac{4\xi_1^2}{R_1\xi_3}+\frac {4\xi_2^2}{R_2\xi_3}
\end{pmatrix}
\end{align*}
and
\begin{align*}
\eta_1:=\eta\cdot\vec\tau&=\xi\cdot\vec\tau=:\xi_1 \\
\eta_2:=\eta\cdot \vec\gamma&=\xi\cdot \vec\gamma=:\xi_2\\
\eta_3:=\eta\cdot\vec\nu=-\xi\cdot\vec\nu&=:-\xi_3=\tfrac12 |\xi-\eta|.
\end{align*}
The matrix $B$ encodes the additional spreading of the reflected wave packet induced by the curvature of the obstacle; incorporating this subtle effect is essential for the analysis that follows. The structure of the matrix $B$ captures the basic rule of mirror manufacture: the radius of curvature equals twice the focal length.
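As a quick check, which will be used in the proof of Lemma~\ref{L:matrix} below, the reflected momentum $\eta$ spans the kernel of $B$: in the frame $(\vec\tau,\vec\gamma,\vec\nu)$ one has $\eta=(\xi_1,\xi_2,-\xi_3)$ and
\begin{align*}
B\eta=\begin{pmatrix}
\tfrac{4\xi_3\xi_1}{R_1}-\tfrac{4\xi_1\xi_3}{R_1}\\[1mm]
\tfrac{4\xi_3\xi_2}{R_2}-\tfrac{4\xi_2\xi_3}{R_2}\\[1mm]
\tfrac{4\xi_1^2}{R_1}+\tfrac{4\xi_2^2}{R_2}-\xi_3\bigl(\tfrac{4\xi_1^2}{R_1\xi_3}+\tfrac{4\xi_2^2}{R_2\xi_3}\bigr)
\end{pmatrix}=0.
\end{align*}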
\begin{lem}[Bounds for collision times and locations]\label{L:xc}
For rays that enter the obstacle, we have
$$
\xi_3 < 0, \quad |\xi_3| \geq |\xi| [\log\log(\tfrac 1\eps)]^{-4}, \qtq{and} \delta \leq |x_c| \lesssim \delta[\log\log(\tfrac1{\varepsilon})]^8.
$$
In particular, $\delta|\xi|^{-1}\leq 2t_c\lesssim \delta|\xi|^{-1}[\log\log(\tfrac 1\eps)]^8$.
\end{lem}
\begin{proof}
The first inequality simply expresses the fact that the ray approaches the obstacle from without. The second inequality is an exact repetition of
$n\in \mathcal E$ as given in Definition~\ref{D:MEG}. The lower bound on $|x_c|$ follows directly from the fact that $\delta=\dist(0,\Omega^c)$.
The proof of the upper bound on $|x_c|$ divides into two cases. When $\delta\gtrsim[\log\log(\tfrac1{\varepsilon})]^{-8}$, the result follows from
$|x_c|\leq \dist(0,\Omega^c) + \diam(\Omega^c) \lesssim1$.
It remains to consider the case when $\delta\leq \tfrac{1}{8C} [\log\log(\tfrac1{\varepsilon})]^{-8}$ for some fixed large $C=C(\Omega)$. By
approximating $\partial\Omega$ from within by a paraboloid, this case reduces to the analysis of the following system of equations:
\begin{align*}
y &= m x \quad \text{and} \quad y = Cx^2 + \delta \quad \text{with} \quad m\geq [\log\log(\tfrac1{\varepsilon})]^{-4}.
\end{align*}
The first equation represents the ray, whose slope is restricted by that permitted for an entering ray. (Note that the convexity of the obstacle implies that the angle between the ray and $\partial\Omega$ is larger than the angle between the ray and the axis $y=0$.) Using the quadratic formula, we see that the solution obeys
$$
|x_c|\leq \sqrt{ x^2 + y^2 } = \frac{2\delta \sqrt{1+m^2}}{m + \sqrt{m^2 - 4C\delta}} \sim \frac{\delta\sqrt{1+m^2}}{m},
$$
where we used the restriction on $\delta$ in the last step.
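In more detail (a brief elaboration of the last two displays): the first intersection corresponds to the smaller root of $Cx^2-mx+\delta=0$, namely
\begin{align*}
x=\frac{m-\sqrt{m^2-4C\delta}}{2C}=\frac{2\delta}{m+\sqrt{m^2-4C\delta}},
\end{align*}
and since $\delta\le\tfrac1{8C}[\log\log(\tfrac1{\varepsilon})]^{-8}\le\tfrac{m^2}{8C}$, we have $m^2-4C\delta\ge\tfrac{m^2}2$, so the denominator above is comparable to $m$.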
\end{proof}
\begin{lem}[Reflected waves diverge]\label{L:diverging rays}
For $j=1,2$, let $x^{(j)}(t)$ denote the broken ray beginning at the origin, moving with velocity $2\xi^{(j)}$ and reflecting off the convex body $\Omega^c$. Then
$$
| x^{(1)}(t) - x^{(2)}(t) | \geq 2| \xi^{(1)} - \xi^{(2)} | \, t
$$
whenever $t \geq \max\{t_c^{(1)},t_c^{(2)}\}$, that is, greater than the larger collision time.
\end{lem}
\begin{proof}
In the two dimensional case, this result follows from elementary planar geometry. A particularly simple argument is to reflect the
outgoing rays across the line joining the two collision points. By convexity, the continuations of the incoming rays will both lie
between the reflected outgoing rays. Note that the geometry involved is dictated solely by the two tangent lines at the collision points, not by the
shape of the convex body in between.
We note that given two vectors $v^{(j)}\in{\mathbb{R}}^2$ and two points $y^{(j)}\in{\mathbb{R}}^2$, there is a convex curve passing through these points
and having these vectors as outward normals at these points if and only if
\begin{equation}\label{Convex position}
v^{(1)} \cdot \bigl(y^{(1)}-y^{(2)}\bigr) \geq 0\quad\text{and}\quad v^{(2)} \cdot \bigl(y^{(2)}-y^{(1)}\bigr) \geq0.
\end{equation}
Indeed, by convexity, $\Omega^c\subseteq\{x:\, (x-y^{(j)})\cdot v^{(j)}\leq 0\}$ for $j=1,2$.
We will use this two dimensional case as a stepping stone to treat three dimensions. (The argument carries over to higher dimensions
also.) If $\xi^{(1)}$ and $\xi^{(2)}$ are parallel, then the analysis is one-dimensional and totally elementary. In what
follows, we assume that these vectors are not parallel.
Let $\nu^{(1)}$ and $\nu^{(2)}$ denote the unit outward normals to $\partial\Omega$ at the collision points. These are linearly
independent. We write $P$ for the orthogonal projection into the plane that they span and $Q=\mathrm{Id}-P$ for the complementary projection.
By the law of reflection, $Q [ x^{(j)}(t) ] = Q [2\xi^{(j)} t]$ and the broken rays $P [ x^{(j)}(t) ]$ make equal angles of incidence
and reflection with the projected normals $P[\nu^{(j)}]=\nu^{(j)}$ at the projected collision points. We now apply the two-dimensional
result. To do this, we need to see that the projected collision points and the projected normals obey the chord/normal condition \eqref{Convex position};
this follows immediately from the convexity of the original obstacle.
Using the two-dimensional result, we get
\begin{align*}
\bigl| x^{(1)}(t) - x^{(2)}(t) \bigr|^2 &= \bigl| P[x^{(1)}(t)] - P[x^{(2)}(t)] \bigr|^2 + \bigl| Q[x^{(1)}(t)] - Q[x^{(2)}(t)]\bigr|^2\\
&\geq 4 \bigl| P [ \xi^{(1)} ] - P [ \xi^{(2)} ] \bigr|^2 t^2 +4\bigl| Q [ \xi^{(1)} ] - Q [ \xi^{(2)} ] \bigr|^2 t^2 \\
&= 4 | \xi^{(1)}- \xi^{(2)} |^2 t^2,
\end{align*}
which proves the lemma.
\end{proof}
Next we investigate in more detail the properties of the matrix $\Sigma$.
\begin{lem}[Bounds for the covariance matrix]\label{L:matrix}
Let $n\in \mathcal E$. Then
\begin{align}
\Re \vec v^{\;\!T}(\Sigma+i(t-t_c))^{-1}\vec v&\ge\frac{\sigma^2}{[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+\log^4(\tfrac 1{\varepsilon})t^2]} |\vec v|^2\label{sig41}\\
\|(\Sigma+i(t-t_c))^{-1}\|_{\max}&\leq\frac{\log^{5}(\frac1{\varepsilon})}{\sqrt{\sigma^4+t^2}}\label{sig42}\\
|\det( \mathrm{Id} +i(t-t_c)\Sigma^{-1})|^{-\frac 12}&\leq \log^{\frac52}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^4+t_c^2}{\sigma^4+t^2}\biggr)^{\frac34}\label{sig3}
\end{align}
for all $t\geq 0$ and $\vec v\in {\mathbb{R}}^3$. If in addition $|t-t_c|\le 4\frac{\sigma\log(\tfrac 1\eps)}{|\xi|}$, then
\begin{align}
\|(\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}&\le{\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^3(\tfrac 1{\varepsilon})\label{sig1}\\
\bigl|1-\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-\frac12}\bigr|&\le{\varepsilon}^{\frac12}\delta^{-\frac12}\log^3(\tfrac 1{\varepsilon})\label{sig2}.
\end{align}
Here $\|\cdot\|_{HS}$ denotes the Hilbert--Schmidt norm: for a matrix $A=(a_{ij})$, this is given by $\|A\|_{HS}=(\sum_{i,j}|a_{ij}|^2)^{\frac 12}$.
Also, $\|A\|_{\max}$ denotes the operator norm of $A$.
\end{lem}
\begin{proof} We first prove \eqref{sig1}. Using Lemma~\ref{L:xc}, we get
\begin{align}\label{sig}
\|\Sigma^{-1}\|_{HS}\leq \frac{\sqrt{3}}{|\sigma^2+it_c|}+\|B\|_{HS}
\le \sqrt{3}\sigma^{-2}+\frac {4\sqrt{10}|\xi|^2}{ R_1|\xi_3|}
&\lesssim\sigma^{-2}+|\xi|[\log\log(\tfrac 1{{\varepsilon}})]^4 \notag\\
&\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1{{\varepsilon}})]^5.
\end{align}
Thus, for $|t-t_c|\le 4\frac{\sigma\log(\tfrac 1\eps)}{|\xi|}$ we obtain
\begin{align}\label{306}
\|(t-t_c)\Sigma^{-1}\|_{HS}&\lesssim {\varepsilon}^{\frac12}\delta^{-\frac12}\log^2(\tfrac 1{\varepsilon})[\log\log(\tfrac 1\eps)]^6\ll1.
\end{align}
Combining this with the resolvent formula
\begin{align*}
(\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}&=-i(t-t_c)\Sigma^{-1}(\Sigma+i(t-t_c))^{-1}\\
&=-i(t-t_c)\Sigma^{-2}(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-1}
\end{align*}
and using \eqref{306}, we estimate
\begin{align*}
\|(\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}&\le
|t-t_c|\|\Sigma^{-1}\|_{HS}^2\|(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-1}\|_{HS}\\
&\lesssim \frac{4\sigma\log(\tfrac 1\eps)}{|\xi|}{\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^{10}\\
&\lesssim {\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^2(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{11}\\
&\le {\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^3(\tfrac 1{\varepsilon}).
\end{align*}
This settles \eqref{sig1}. The estimate \eqref{sig2} follows from \eqref{306} and the fact that the determinant function is Lipschitz on a small neighborhood of the identity.
We now turn to the remaining estimates; the key is to understand the real symmetric matrix $B$. A direct computation gives
\begin{align*}
\det(\lambda \mathrm{Id}-B)=\lambda\biggl[\lambda^2-4\biggl(\frac{\xi_1^2+\xi_3^2}{R_1\xi_3}+\frac{\xi_2^2+\xi_3^2}{R_2\xi_3}\biggr)\lambda
+\frac{16|\xi|^2}{R_1R_2}\biggr].
\end{align*}
Hence one eigenvalue is $0$ and it is easy to check that $\eta$ is the corresponding eigenvector. We write $-\infty<\lambda_2\le\lambda_1<0$ for the remaining eigenvalues. Moreover, as
\begin{align*}
\lambda_1\lambda_2=\frac{16|\xi|^2}{R_1R_2}\qtq{and}
|\lambda_1|+|\lambda_2|=4\Bigl(\frac{\xi_1^2+\xi_3^2}{R_1|\xi_3|}+\frac{\xi_2^2+\xi_3^2}{R_2|\xi_3|}\Bigr),
\end{align*}
using Lemma~\ref{L:xc} we get
\begin{align*}
[\log\log(\tfrac 1\eps)]^{-4}|\xi|\lesssim |\lambda_1|\le |\lambda_2|\lesssim\frac{|\xi|^2}{|\xi_3|}\lesssim |\xi|[\log\log(\tfrac 1\eps)]^4.
\end{align*}
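Indeed, to spell out the elementary step: since $R_1$ and $R_2$ are comparable to $1$ for our fixed compact obstacle,
\begin{align*}
|\lambda_2|\le|\lambda_1|+|\lambda_2|\lesssim\frac{|\xi|^2}{|\xi_3|}
\qtq{and}
|\lambda_1|=\frac{\lambda_1\lambda_2}{|\lambda_2|}\gtrsim\frac{|\xi|^2}{|\xi|^2/|\xi_3|}=|\xi_3|\ge|\xi|[\log\log(\tfrac 1\eps)]^{-4},
\end{align*}
where the last inequality uses Lemma~\ref{L:xc}.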
In particular,
\begin{align}\label{B norm}
\| B \|_{\max} \lesssim |\xi|[\log\log(\tfrac 1\eps)]^4 \lesssim {\varepsilon}^{-1} [\log\log(\tfrac 1\eps)]^5.
\end{align}
The orthonormal eigenbasis for $B$ is also an eigenbasis for $\Sigma^{-1}$ with eigenvalues
\begin{align*}
\frac 1{\sigma^2+it_c}, \quad \frac 1{\sigma^2+it_c}+i\lambda_1,\qtq{and} \frac1{\sigma^2+it_c}+i\lambda_2.
\end{align*}
In this basis, $(\Sigma+i(t-t_c))^{-1}$ is diagonal with diagonal entries
\begin{align*}
\frac 1{\sigma^2+it}, \ \Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_1\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1},\
\Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_2\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}.
\end{align*}
An exact computation gives
\begin{align*}
\Re \Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}=\frac{\sigma^2}{\sigma^4[1-\lambda_j(t-t_c)]^2 +[t-\lambda_jt_c(t-t_c)]^2}.
\end{align*}
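For the reader's convenience, we record the algebra behind this identity. Putting the bracketed quantity over a common denominator,
\begin{align*}
\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)
=\frac{\sigma^2[1-\lambda_j(t-t_c)]+i[t-\lambda_jt_c(t-t_c)]}{1+i\lambda_j(\sigma^2+it_c)},
\end{align*}
and taking the reciprocal and multiplying out, the real part has numerator
\begin{align*}
(1-\lambda_jt_c)\,\sigma^2[1-\lambda_j(t-t_c)]+\lambda_j\sigma^2\,[t-\lambda_jt_c(t-t_c)]=\sigma^2,
\end{align*}
while the denominator is $\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2$.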
Using $\delta\lesssim 1$, the upper bound for $t_c$ given by Lemma~\ref{L:xc}, and the upper bound for $\lambda_j$ obtained above, we get
\begin{align*}
|\lambda_j t_c|&\lesssim |\xi|[\log\log(\tfrac 1\eps)]^4\frac{\delta[\log\log(\tfrac 1\eps)]^8}{|\xi|}\lesssim [\log\log(\tfrac 1\eps)]^{12}\\
\lambda_j^2 t_c^4&\lesssim |\xi|^2[\log\log(\tfrac 1\eps)]^8\frac{\delta^4[\log\log(\tfrac 1\eps)]^{32}}{|\xi|^4}\lesssim\delta^4{\varepsilon}^2[\log\log(\tfrac 1\eps)]^{42}\le \sigma^4\\
\sigma^4\lambda_j^2&\lesssim {\varepsilon}^2\delta^2\log^4(\tfrac 1{\varepsilon})|\xi|^2[\log\log(\tfrac 1\eps)]^8 \lesssim \log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10}.
\end{align*}
Therefore,
\begin{align*}
\sigma^4[1-\lambda_j(t-t_c)]^2 &+[t-\lambda_jt_c(t-t_c)]^2\\
&\lesssim \sigma^4(1+\lambda_j^2t^2+\lambda_j^2t_c^2)+t^2+\lambda_j^2t_c^2t^2+\lambda_j^2t_c^4\\
&\lesssim\sigma^4(1+\lambda_j^2t_c^2)+\lambda_j^2t_c^4+t^2(1+\lambda_j^2t_c^2+\sigma^4\lambda_j^2)\\
&\lesssim \sigma^4[\log\log(\tfrac 1\eps)]^{24}+t^2\log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10}\\
&\le [\log\log(\tfrac 1\eps)]^{25}[\sigma^4+\log^4(\tfrac 1{\varepsilon}) t^2].
\end{align*}
Thus,
\begin{align*}
\Re \Bigl[\Bigl(\frac 1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\ge\frac{ \sigma^2}{[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\frac 1{\varepsilon})]}.
\end{align*}
As $\Re \frac 1{\sigma^2+it}$ admits the same lower bound, we derive \eqref{sig41}.
We now turn to \eqref{sig42}. Our analysis is based on the identity
\begin{align*}
\biggl|\Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\biggr|^2
&=\biggl|\frac{1-\lambda_jt_c+i\lambda_j\sigma^2}{\sigma^2[1-\lambda_j(t-t_c)]+i[t-\lambda_jt_c(t-t_c)]}\biggr|^2\\
&=\frac{(1-\lambda_jt_c)^2+(\lambda_j\sigma^2)^2}{\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2}.
\end{align*}
We have
\begin{align*}
(1-\lambda_jt_c)^2+(\lambda_j\sigma^2)^2\lesssim 1+ [\log\log(\tfrac 1\eps)]^{24} + \log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10} \leq \log^5(\tfrac1{\varepsilon}).
\end{align*}
To estimate the denominator we use Lemma~\ref{L:xc} to see that $t_c\ll \sigma^2$ and so
\begin{align}\label{823}
\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2
&\geq t_c^2\Bigl\{[1-\lambda_j(t-t_c)]^2 + \bigl[ \tfrac{t}{t_c} -\lambda_j(t-t_c)\bigr]^2\Bigr\}\notag\\
&=2\bigl[\tfrac{t+t_c}2-\lambda_j t_c (t-t_c)\bigr]^2 + \tfrac12(t-t_c)^2\notag\\
&\gtrsim [\log\log(\tfrac 1\eps)]^{-24} (t+t_c)^2\notag\\
&\gtrsim \frac{\sigma^4+t^2}{\log^4(\frac1{{\varepsilon}}) [\log\log(\tfrac 1\eps)]^{26}},
\end{align}
where we have used the bound $|\lambda_j t_c|\lesssim [\log\log(\tfrac 1\eps)]^{12}$ to derive the penultimate inequality: either $|\lambda_j t_c(t-t_c)|\le\tfrac14(t+t_c)$, in which case the first square alone gives the claim, or $|t-t_c|\gtrsim(t+t_c)[\log\log(\tfrac 1\eps)]^{-12}$, in which case the term $\tfrac12(t-t_c)^2$ does. Combining these bounds we obtain
\begin{align*}
\biggl|\Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\biggr|^2
&\lesssim \frac{\log^9(\frac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{26}}{\sigma^4+t^2} \le \frac{\log^{10}(\frac1{{\varepsilon}})}{\sigma^4+t^2}.
\end{align*}
As $(\Sigma+i(t-t_c))^{-1}$ is orthogonally diagonalizable, this bound on its eigenvalues yields \eqref{sig42}.
Finally, we compute
\begin{align*}
|\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})|
&=\biggl|\Bigl(1+\frac{i(t-t_c)}{\sigma^2+it_c}\Bigr)\prod_{j=1,2}\Bigl[1+i(t-t_c)\Bigl(\frac 1{\sigma^2+it_c}+i\lambda_j\Bigr)\Bigr]\biggr|\\
&=\biggl|\frac{\sigma^2+it}{(\sigma^2+it_c)^3}\prod_{j=1,2}\Bigl\{\sigma^2[1-\lambda_j(t-t_c)]+i[t-\lambda_jt_c(t-t_c)]\Bigr\}\biggr|.
\end{align*}
Using \eqref{823} we obtain
\begin{align*}
&|\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})|^{-1}\\
&\quad\le \frac{(\sigma^4+t_c^2)^{\frac 32}}{(\sigma^4+t^2)^{\frac12}}
\prod_{j=1,2}\Bigl\{\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2\Bigr\}^{-\frac12}\\
&\quad\leq \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^4+t_c^2}{\sigma^4+t^2}\biggr)^{\frac32}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
Using this lemma, we will see that the reflected wave $v_n$ agrees with $u_n$ to high order on $\partial\Omega$, at least for $x$ near $x_c^{(n)}$ and $t$ near $t_c^{(n)}$; compare \eqref{A} with $|u_n(t_c^{(n)},x_c^{(n)})|\sim \sigma^{-3/2}$. Indeed, requiring this level of agreement can be used to derive the matrix $B$ given above. Without this level of accuracy we would not be able to show that the contribution of entering rays is $o(1)$ as ${\varepsilon}\to0$.
\begin{lem} \label{L:uv match} Fix $n\in \mathcal E$. For each $x\in \Omega$, let $x_*=x_*(x)$ denote the nearest point to $x$ in $\partial\Omega$.
Let
$$
A_n(t,x):=\exp\{it|\xi_n|^2-i\xi_n\cdot(x_*-x_c^{(n)})\}\bigl[u_n(t,x_*)-v_n(t,x_*)\bigr].
$$
Then for each $(t,x)\in{\mathbb{R}}\times\Omega$ such that $|x_*-x_c^{(n)}|\le\sigma \log(\frac 1{{\varepsilon}})$ and $|t-t_c^{(n)}|\le \frac {4\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|}$ we have
\begin{align}
|A_n(t,x)|&\lesssim{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{{\varepsilon}}) \label{A}\\
|\nabla A_n(t,x)|&\lesssim {\varepsilon}^{-\frac 34}\delta^{-\frac 74}\log^{12}(\tfrac 1{{\varepsilon}}) \label{deriv A}\\
|\partial_t A_n(t,x)| + |\Delta A_n(t,x)|&\lesssim {\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^9(\tfrac 1{{\varepsilon}}) \label{laplace A}.
\end{align}
\end{lem}
\begin{proof}
Throughout the proof, we will suppress the dependence on $n\in\mathcal E$; indeed, all estimates will be uniform in $n$. Let
\begin{align*}
F(t,x) &:= \biggl( \frac{\sigma^2+it_c}{\sigma^2+it}\biggr)^{\frac32} e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)} } - \det (1+i(t-t_c)\Sigma^{-1})^{-\frac12} e^{\Phi(t,x)}
\end{align*}
with
\begin{align*}
\Phi(t,x) &:= i(x-x_c)(\eta-\xi) -\tfrac14(x-x(t))^T(\Sigma +i(t-t_c))^{-1}(x-x(t)),
\end{align*}
so that
\begin{equation}\label{AfromF}
A(t,x) = \Bigl(\frac {\sigma^2}{2\pi}\Bigr)^{\frac 34}(\sigma^2+it_c)^{-\frac 32} e^{ix_c \xi} F(t,x_*).
\end{equation}
We further decompose
\begin{align*}
F(t,x)=F_1(t,x)+F_2(t,x)+F_3(t,x),
\end{align*}
where
\begin{align*}
F_1(t,x)&:= \biggl[\biggl(\frac{\sigma^2+it_c}{\sigma^2+it}\biggr)^{\frac32} -1\biggr]e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)} }\\
F_2(t,x)&:=\bigl[1-\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr] e^{-\frac {|x-2\xi t|^2}{4(\sigma^2+it)}}\\
F_3(t,x)&:= \det (1+i(t-t_c)\Sigma^{-1})^{-\frac12}\Bigl\{e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}-e^{\Phi(t,x)}\Bigr\}.
\end{align*}
We begin by estimating the time derivative of $F$ on $\partial\Omega$. We will make repeated use of the following bounds:
\begin{align}\label{E:bounds1}
t\sim t_c \ll \sigma^2 \qtq{and} |x-2\xi t| + |x-x(t)|\lesssim\sigma\log(\tfrac1{{\varepsilon}}),
\end{align}
for all $|x-x_c|\le\sigma \log(\frac 1{{\varepsilon}})$ and $|t-t_c|\le \frac {4\sigma\log(\frac 1{{\varepsilon}})}{|\xi|}$. Moreover, from \eqref{sig1}, \eqref{sig2}, and \eqref{sig},
we obtain
\begin{align}\label{E:bounds2}
\bigl|\partial_t\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr| &=\tfrac12 \bigl|\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr| \bigl| \Tr (\Sigma+i(t-t_c))^{-1}\bigr|\notag\\
&\lesssim \|(\Sigma+i(t-t_c))^{-1}\|_{HS}\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5.
\end{align}
Lastly, as $\xi-\eta$ is normal to $\partial\Omega$ at $x_c$, we see that
\begin{align}\label{E:bounds3}
|(\xi-\eta)\cdot(x-x_c)| \lesssim |\xi| \, |x-x_c|^2 \lesssim \delta \log^5(\tfrac1{\varepsilon}),
\end{align}
for all $x\in \partial\Omega$ with $|x-x_c|\lesssim \sigma\log(\tfrac 1\eps)$.
A straightforward computation using \eqref{E:bounds1} gives
\begin{align*}
|\partial_tF_1(t,x)|&\lesssim \sigma^{-2} + |t-t_c|\sigma^{-2}\bigl[\sigma^{-2}|\xi||x-2\xi t| + \sigma^{-4}|x-2\xi t|^2\bigr]\lesssim {\varepsilon}^{-1}\delta^{-1}.
\end{align*}
Using also \eqref{sig2} and \eqref{E:bounds2} we obtain
\begin{align*}
|\partial_tF_2(t,x)|&\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5 + {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon})\bigl[\sigma^{-2}|\xi||x-2\xi t| + \sigma^{-4}|x-2\xi t|^2\bigr]\\
&\lesssim {\varepsilon}^{-1}\delta^{-1} \log^{4}(\tfrac1{\varepsilon}).
\end{align*}
As $|\partial_t A| \lesssim \sigma^{-3/2} |\partial_t F|$, the contributions of $\partial_t F_1$ and $\partial_t F_2$ are consistent with \eqref{laplace A}.
We now turn to $F_3$. In view of \eqref{sig41}, \eqref{sig2}, and \eqref{E:bounds2},
\begin{align*}
&|\partial_t F_3(t,x)|
\lesssim{\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5+\biggl|\Bigl[\frac{\xi(x-2\xi t)}{\sigma^2+it}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it)^2}\Bigr]e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}\\
&- \Bigl[ \eta^T(\Sigma + i(t-t_c))^{-1}(x-x(t)) +\tfrac i4(x-x(t))^T(\Sigma + i(t-t_c))^{-2}(x-x(t)) \Bigr] e^{\Phi(t,x)}\biggr|.
\end{align*}
To simplify this expression we use the following estimates
\begin{align*}
\biggl| \frac{\xi(x-2\xi t)}{\sigma^2+it}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it)^2} - \frac{\xi(x-2\xi t)}{\sigma^2+it_c} - \frac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} \biggr|
&\lesssim {\varepsilon}^{-1}\delta^{-1} \\
\bigl| \eta^T \bigl[ (\Sigma +i(t-t_c))^{-1} - \Sigma^{-1} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{-1}\delta^{-1} \log^6(\tfrac1{\varepsilon}) \\
\bigl| (x-x(t))^T\bigl[ (\Sigma +i(t-t_c))^{-2} - \Sigma^{-2} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log^8(\tfrac1{\varepsilon}) \\
\biggl| \frac{|x-2\xi t|^2}{4(\sigma^2+it)} - \frac{|x-2\xi t|^2}{4(\sigma^2+it_c)} \biggr| &\lesssim {\varepsilon}^{\frac12} \delta^{-\frac12} \log^3(\tfrac1{\varepsilon}) \\
\bigl| (x-x(t))^T\bigl[ (\Sigma +i(t-t_c))^{-1} - \Sigma^{-1} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{\frac12} \delta^{-\frac12} \log^7(\tfrac1{\varepsilon}),
\end{align*}
which follow from \eqref{sig1}, \eqref{sig}, and \eqref{E:bounds1}. Combining these estimates with the fact that $z\mapsto e^{z}$ is $1$-Lipschitz
on the region $\Re z <0$ yields
\begin{align*}
&|\partial_t F_3(t,x)|\lesssim{\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon}) \\
&\quad+\biggl| \frac{\xi(x-2\xi t)}{\sigma^2+it_c}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} - \eta^T \Sigma^{-1}(x-x(t)) - \tfrac i4(x-x(t))^T\Sigma^{-2}(x-x(t)) \biggr| \\
&\quad+ {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps) \biggl|\frac{|x-2\xi t|^2}{4(\sigma^2+it_c)} + i(x-x_c)(\eta-\xi)-\tfrac14(x-x(t))^T \Sigma^{-1}(x-x(t)) \biggr|.
\end{align*}
As $|\partial_t A| \lesssim \sigma^{-3/2} |\partial_t F|$, the first term on the right-hand side is consistent with \eqref{laplace A}. Thus to complete our analysis of $ |\partial_t F|$, it remains only to bound the second and third lines in the display above. Recalling \eqref{E:Sigma defn}, the fact that $\eta$ belongs to the kernel of the symmetric matrix $B$, and $x(t)=x_c+2\eta(t-t_c)$, we can simplify these expressions considerably. First, we have
\begin{align*}
\Bigl| \tfrac{\xi(x-2\xi t)}{\sigma^2+it_c} & +\tfrac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} - \eta^T \Sigma^{-1}(x-x(t)) - \tfrac i4(x-x(t))^T\Sigma^{-2}(x-x(t)) \Bigr| \\
={}& \Bigl|\tfrac{(\xi-\eta)(x-x_c)}{\sigma^2+it_c} - i \tfrac{(t-t_c)(\xi-\eta)(x-x_c)}{(\sigma^2+it_c)^2} + \tfrac{(x-x_c)^T B (x-x_c)}{2(\sigma^2+it_c)}
+ i\tfrac{(x-x_c)^T B^2(x-x_c)}{4} \Bigr| \\
\lesssim{}& {\varepsilon}^{-1} \log^5(\tfrac1{\varepsilon}),
\end{align*}
where we used \eqref{E:bounds1}, \eqref{E:bounds3}, and \eqref{B norm} to obtain the inequality. This shows that the second line in the estimate on $\partial_t F_3$ is acceptable for \eqref{laplace A}.
For the last line of our estimate on $\partial_t F_3$ above, we use the same tools to obtain
\begin{align*}
{\varepsilon}^{-\frac32}\delta^{-\frac12} &\log(\tfrac 1\eps)\Bigl|\tfrac{|x-2\xi t|^2}{4(\sigma^2+it_c)} + i(x-x_c)(\eta-\xi)-\tfrac14(x-x(t))^T \Sigma^{-1}(x-x(t)) \Bigr| \\
={}& {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps)\Bigl| \tfrac{(t-t_c)(\xi-\eta)(x-x_c)}{\sigma^2+it_c} + i(x-x_c)(\xi-\eta) + \tfrac{i}4 (x-x_c)^T B (x-x_c) \Bigr| \\
\lesssim {}& {\varepsilon}^{-1} \log^7(\tfrac1{\varepsilon}) + {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps)\Bigl| (x-x_c)(\xi-\eta) + \tfrac14 (x-x_c)^T B (x-x_c) \Bigr|.
\end{align*}
The first summand is acceptable. To bound the second summand, we need to delve deeper.
Using the orthonormal frame introduced earlier, we write
\begin{align}\label{y1}
y_1:=(x-x_c)\cdot \vec{\tau},\quad y_2:=(x-x_c) \cdot \vec{\gamma},\qtq{and} y_3:=(x-x_c) \cdot \vec{\nu}.
\end{align}
Then
\begin{align}\label{xi3y3}
(x-x_c)\cdot(\xi-\eta)=2\xi_3(x-x_c)\cdot{\vec{\nu}}=2\xi_3y_3.
\end{align}
For $x\in\partial \Omega$ near $x_c$, we have
\begin{align}\label{y3}
y_3=-\frac {y_1^2}{2R_1}-\frac {y_2^2}{2R_2}+O(|y_1|^3+|y_2|^3).
\end{align}
On the other hand, for any $z\in {\mathbb{R}}^3$ a direct computation shows that
$$
\frac 14 z^TBz=\frac 1{\xi_3R_1}(\xi_3z_1+\xi_1z_3)^2+\frac1{\xi_3R_2}(\xi_3z_2+\xi_2z_3)^2.
$$
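Indeed, expanding the quadratic form and completing the squares in $z_3$,
\begin{align*}
z^TBz&=\tfrac{4\xi_3}{R_1}z_1^2+\tfrac{4\xi_3}{R_2}z_2^2+\Bigl(\tfrac{4\xi_1^2}{R_1\xi_3}+\tfrac{4\xi_2^2}{R_2\xi_3}\Bigr)z_3^2+\tfrac{8\xi_1}{R_1}z_1z_3+\tfrac{8\xi_2}{R_2}z_2z_3\\
&=\tfrac{4}{\xi_3R_1}(\xi_3z_1+\xi_1z_3)^2+\tfrac{4}{\xi_3R_2}(\xi_3z_2+\xi_2z_3)^2,
\end{align*}
and dividing by $4$ gives the display above.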
Applying this to
\begin{align*}
z=x-x(t)&=x-x_c-2\eta(t-t_c)
=\begin{pmatrix}
y_1-2\xi_1(t-t_c)\\
y_2-2\xi_2(t-t_c)\\
2\xi_3(t-t_c)\\
\end{pmatrix} + \begin{pmatrix} 0\\ 0\\ y_3\\ \end{pmatrix}
\end{align*}
and noting that $|y_3|\lesssim |y_1|^2+|y_2|^2\lesssim \sigma^2\log^2(\frac 1{\varepsilon})$, we get
\begin{align*}
\frac 14(x-x(t))^T B(x-x(t))
&=\frac{(y_1\xi_3+y_3\xi_1)^2}{\xi_3R_1}+\frac{(y_2\xi_3+y_3\xi_2)^2}{\xi_3R_2}\\
&=\xi_3\frac {y_1^2}{R_1}+\xi_3\frac {y_2^2}{R_2}+O\Bigl(\sigma^3\log^3(\tfrac 1{{\varepsilon}})\cdot \frac{|\xi|^2}{|\xi_3|}\Bigr).
\end{align*}
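The key point in this computation is that the $(t-t_c)$-dependence cancels exactly inside the two squares:
\begin{align*}
\xi_3z_1+\xi_1z_3=\xi_3\bigl[y_1-2\xi_1(t-t_c)\bigr]+\xi_1\bigl[2\xi_3(t-t_c)+y_3\bigr]=\xi_3y_1+\xi_1y_3,
\end{align*}
and likewise $\xi_3z_2+\xi_2z_3=\xi_3y_2+\xi_2y_3$.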
Combining this with \eqref{xi3y3}, \eqref{y3}, and Lemma~\ref{L:xc}, we deduce
\begin{align}\label{B cancel}
\Bigl|(x-x_c)\cdot(\xi-\eta)+\tfrac14 (x-x(t))^T B(x-x(t))\Bigr|
&\lesssim\sigma^3\log^3(\tfrac 1{\varepsilon})\frac{|\xi|^2}{|\xi_3|} \notag \\
&\lesssim {\varepsilon}^{\frac 12}\delta^{\frac 32}\log^7(\tfrac 1{\varepsilon}).
\end{align}
This is the missing piece in our estimate of $|\partial_t F_3|$. Putting everything together yields
$$
|\partial_t A(t,x)| \lesssim \sigma^{-\frac32} |\partial_t F(t,x)| \lesssim \sigma^{-\frac32} {\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon})
\lesssim {\varepsilon}^{-\frac74}\delta^{-\frac74} \log^{9}(\tfrac1{\varepsilon}),
$$
which proves the first half of \eqref{laplace A}.
This bound on the time derivative of $A$ allows us to deduce \eqref{A} by just checking its validity at $t=t_c$. Note that
both $F_1$ and $F_2$ vanish at this point and so, by the fundamental theorem of calculus, we have
\begin{align*}
|A(t,x)| &\lesssim |t-t_c| {\varepsilon}^{-\frac74}\delta^{-\frac74} \log^{9}(\tfrac1{\varepsilon}) + \sigma^{-\frac32} |F_3(t_c,x)| \\
&\lesssim {\varepsilon}^{-\frac14}\delta^{-\frac54} \log^{12}(\tfrac1{\varepsilon}) + \sigma^{-\frac32} \bigl| (x-x_c)(\eta-\xi) + \tfrac14(x-x_c)^T B (x-x_c) \bigr|.
\end{align*}
Combining this with \eqref{B cancel} yields \eqref{A}.
It remains to estimate the spatial derivatives of $A$. Notice that this corresponds to derivatives of $F$ in directions parallel to $\partial\Omega$.
To compute these, we need to determine the unit normal $\vec\nu_x$ to $\partial\Omega$ at a point $x\in\partial\Omega$; indeed, the projection matrix onto the tangent space at $x$ is given by $\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T$. Writing $y=x-x_c$ as in \eqref{y1} and \eqref{y3}, we have
\begin{equation}\label{nu_x}
\vec\nu_x = \begin{pmatrix} y_1/R_1 \\ y_2/R_2 \\ 1 \end{pmatrix} + |y|^2 \vec \psi(y),
\end{equation}
where $\vec \psi$ is a smooth function with all derivatives bounded.
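This expression can be read off from \eqref{y3}: near $x_c$ the boundary is the graph $y_3=-\tfrac{y_1^2}{2R_1}-\tfrac{y_2^2}{2R_2}+O(|y|^3)$, so the gradient of $y\mapsto y_3+\tfrac{y_1^2}{2R_1}+\tfrac{y_2^2}{2R_2}+O(|y|^3)$, which points out of the obstacle and has unit length up to $O(|y|^2)$ corrections, is
\begin{equation*}
\begin{pmatrix} y_1/R_1 \\ y_2/R_2 \\ 1 \end{pmatrix} + O(|y|^2).
\end{equation*}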
However, the Laplacian of $A$ does not involve only the tangential derivatives of $F$; due to the curvature of the obstacle, the normal derivative of $F$ also enters:
$$
|\Delta A| \lesssim \sigma^{-\frac32} \Bigl\{ |\nabla F|_{{\mathbb{R}}^3} + |\partial^2 F|_{T_x\partial\Omega} \Bigr\}.
$$
Here $\partial^2 F$ denotes the full matrix of second derivatives of $F$, while the subscript $T_x\partial\Omega$ indicates that only the tangential components are considered; no subscript or ${\mathbb{R}}^3$ will be used to indicate that all components are considered. In this way, verifying \eqref{deriv A} and the remaining part of \eqref{laplace A} reduces to proving
\begin{equation}\label{E:lap A needs}
\begin{gathered}
|\nabla F|_{T_x\partial\Omega} \lesssim \delta^{-1} \log^{13}(\tfrac1{\varepsilon})
\qtq{and} |\nabla F| + |\partial^2 F|_{T_x\partial\Omega} \lesssim {\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon}).
\end{gathered}
\end{equation}
Again we decompose $F$ into the three parts $F_1$, $F_2$, and $F_3$. The first two are easy to estimate; indeed, we do not even need to
consider normal and tangential components separately:
\begin{align*}
|\nabla F_1(t,x)| \lesssim \frac{|x-2\xi t|}{\sigma^2} \frac{|t-t_c|}{\sigma^2} \lesssim \delta^{-1} \log\log(\tfrac 1\eps)
\end{align*}
and similarly, using \eqref{sig2},
\begin{align*}
|\nabla F_2(t,x)| \lesssim \frac{|x-2\xi t|}{\sigma^2} {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon}) \lesssim \delta^{-1} \log^3(\tfrac1{\varepsilon}).
\end{align*}
These are both consistent with the needs of \eqref{E:lap A needs}.
We can bound the second derivatives of $F_1$ and $F_2$ in a similar manner:
\begin{align*}
|\partial^2 F_1(t,x)| &\lesssim \Bigl[ \sigma^{-2} + \frac{|x-2\xi t|^2}{\sigma^4} \Bigr] \frac{|t-t_c|}{\sigma^2} \lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log\log(\tfrac 1\eps) \\
|\partial^2 F_2(t,x)| &\lesssim \Bigl[ \sigma^{-2} + \frac{|x-2\xi t|^2}{\sigma^4} \Bigr] {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon})
\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log^3(\tfrac1{\varepsilon}).
\end{align*}
Both are acceptable for \eqref{E:lap A needs}.
This leaves us to estimate the derivatives of $F_3$; now it will be important to consider tangential derivatives separately. We have
\begin{align*}
|\nabla F_3(t,x)|_{T_x\partial\Omega} &\lesssim \frac{|x-2\xi t|}{\sigma^2} |F_3(t,x)| \\
& \quad + \biggl| \frac{x-2\xi t}{2(\sigma^2+it)} - i(\xi-\eta) - \tfrac12 (\Sigma +i(t-t_c))^{-1}(x-x(t)) \biggr|_{T_x\partial\Omega}.
\end{align*}
From the proof of \eqref{A}, we know that $|F_3| \lesssim {\varepsilon}^{1/2}\delta^{-1/2}\log^{13}(\frac1{\varepsilon})$. To estimate the second line we begin by
simplifying it. Using \eqref{E:bounds1} and \eqref{sig1}, we have
\begin{align}
|\nabla F_3(t,x)|_{T_x\partial\Omega} &\lesssim \frac{\sigma\log(\frac1{\varepsilon})}{\sigma^2} {\varepsilon}^{\frac12}\delta^{-\frac12}\log^{13}(\tfrac1{\varepsilon}) \notag\\
&\quad + \frac{|t-t_c|}{\sigma^{4}} |x-2\xi t| + \|(\Sigma +i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}|x-x(t)| \notag\\
& \quad + \biggl| \frac{x-2\xi t}{2(\sigma^2+it_c)}-i(\xi-\eta) - \tfrac12 \Sigma^{-1}(x-x(t)) \biggr|_{T_x\partial\Omega} \notag\\
& \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon})+ \biggl| \frac{(\xi-\eta)(t-t_c)}{\sigma^2+it_c} \biggr|_{T_x\partial\Omega} + \biggl| \xi-\eta + \tfrac12 B (x-x_c) \biggr|_{T_x\partial\Omega}. \label{nab F3}
\end{align}
Thus far, we have not used the restriction to tangential directions. Thus, using \eqref{B norm} we may pause to deduce
$$
|\nabla F_3(t,x)|_{{\mathbb{R}}^3} \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}) + \sigma^{-1}\log(\tfrac 1\eps) + |\xi| \lesssim {\varepsilon}^{-1}\log\log(\tfrac 1\eps),
$$
which is consistent with \eqref{E:lap A needs}.
We now return to \eqref{nab F3}. To estimate the last two summands we write $x-x_c=y$ and use \eqref{nu_x} to obtain
\begin{equation*}
\bigl(\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T\bigr) (\xi-\eta) = - \begin{pmatrix} 2\xi_3y_1/R_1\\ 2\xi_3y_2/R_2 \\ 0 \end{pmatrix} + O(|\xi| \, |y|^2).
\end{equation*}
Similarly,
\begin{equation*}
\tfrac12 \bigl(\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T\bigr) B (x-x_c)= \begin{pmatrix} 2\xi_3y_1/R_1\\ 2\xi_3y_2/R_2 \\ 0 \end{pmatrix} + O(\|B\|_{\max} |y|^2).
\end{equation*}
Using \eqref{B norm}, this allows us to deduce that
\begin{equation}\label{E:T cancel}
\bigl| \xi-\eta + \tfrac12 B (x-x_c) \bigr|_{T_x\partial\Omega} \lesssim \|B\|_{\max} |y|^2 + |\xi| \, |y|^2 \lesssim \delta\log^5(\tfrac1{\varepsilon}).
\end{equation}
Therefore,
$$
|\nabla F_3(t,x)|_{T_x\partial\Omega} \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}) + \log^{2}(\tfrac1{\varepsilon}) + \delta\log^{5}(\tfrac1{\varepsilon})
\lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}).
$$
This is consistent with \eqref{E:lap A needs}, thereby completing the proof of \eqref{deriv A}.
Estimating the second order derivatives of $F_3$ is very messy. We get
\begin{align*}
\partial_k \partial_l e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}
&=\biggl[ \frac{-\delta_{kl}}{2(\sigma^2+it)} + \frac{(x-2\xi t)_k(x-2\xi t)_l}{4(\sigma^2+it)^2}\biggr] e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}} \\
&=\biggl[ \frac{-\delta_{kl}}{2(\sigma^2+it_c)} + \frac{(x-2\xi t)_k(x-2\xi t)_l}{4(\sigma^2+it_c)^2}\biggr] e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}
+ O\biggl(\frac{\log\log(\frac1{\varepsilon})}{{\varepsilon}^{\frac12}\delta^{\frac32}}\biggr).
\end{align*}
Proceeding similarly and using \eqref{sig1} and \eqref{sig} yields
\begin{align*}
& \partial_k \partial_l e^{\Phi(t,x)} = \partial_k \partial_l e^{i(x-x_c)(\eta-\xi)-\frac14(x-x(t))^T(\Sigma +i(t-t_c))^{-1}(x-x(t))} \\
&=\biggl[ -\tfrac12\Sigma^{-1}_{kl} + \Bigl\{ i(\eta-\xi) - \tfrac12\Sigma^{-1}(x-x(t))\Bigr\}_k \Bigl\{ i(\eta-\xi) - \tfrac12\Sigma^{-1}(x-x(t))\Bigr\}_l\biggr] e^{\Phi(t,x)}\\
&\quad + O\bigl({\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\bigr).
\end{align*}
We now combine these formulas, using \eqref{B norm}, \eqref{E:T cancel}, and the definition of $\Sigma^{-1}$ in the process. This yields
\begin{align*}
|\partial^2 F_3|_{T_x\partial\Omega} &\lesssim \frac{\log^2(\frac1{\varepsilon})}{\sigma^2} |F_3| + |B|_{T_x\partial\Omega}
+ {\varepsilon}^{-\frac12}\delta^{\frac12}\log^5(\tfrac1{\varepsilon}) +{\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\\
&\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32}\log^{13}(\tfrac1{\varepsilon}) + {\varepsilon}^{-1} \log(\tfrac1{\varepsilon}) + {\varepsilon}^{-\frac12}\delta^{\frac12}\log^5(\tfrac1{\varepsilon}) +{\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\\
&\lesssim {\varepsilon}^{-1}\delta^{-1} \log^{6}(\tfrac1{\varepsilon}).
\end{align*}
This completes the proof of \eqref{E:lap A needs} and so that of \eqref{laplace A}.
\end{proof}
With all these preparations, we are ready to begin estimating the contribution of the wave packets that enter the obstacle. In view of
Proposition~\ref{P:short times}, it suffices to prove the following
\begin{prop}[The long time contribution of entering rays]\label{P:long times}
We have
\begin{equation}\label{enter}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0,
\end{equation}
where $T=\frac1{10}{\varepsilon}\delta[\log\log(\tfrac 1\eps)]^{-1}$.
\end{prop}
Now fix $n\in \mathcal E$. We denote by $\chi^{u_n}(t)$ a smooth time cutoff such that
$$
\chi^{u_n}(t)=1 \qtq{for} t\in \bigl[0, t_c+2\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}\bigr] \qtq{and} \chi^{u_n}(t)=0 \qtq{for} t\ge t_c+4\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}.
$$
Denote by $\chi^{v_n}(t)$ a smooth time cutoff such that
$$
\chi^{v_n}(t)=1 \qtq{for} t\ge t_c-2\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|} \qtq{and} \chi^{v_n}(t)=0 \qtq{for} t\in\bigl[0,t_c- 4\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}\bigr].
$$
We then define
\begin{align*}
\tilde u_n(t,x) :=\chi^{u_n}(t) u_n(t,x) \qtq{and} \tilde v_n(t,x):=\chi^{v_n}(t)v_n(t,x).
\end{align*}
The cutoff $\chi^{u_n}$ kills $u_n$ shortly after it enters the obstacle; the additional time delay (relative to $t_c$) guarantees that the bulk of the wave packet is deep inside $\Omega^c$ when the truncation occurs. Note that the cutoff also ensures that $u_n$ does not exit the obstacle. Analogously,
$\chi^{v_n}$ turns on the reflected wave packet shortly before it leaves the obstacle.
By the triangle inequality,
\begin{align}
\text{LHS}\eqref{enter}
&\lesssim \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times{\mathbb{R}}^3)}
+ \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times\Omega)}\notag\\
&\quad + \Bigl\|\sum_{ n\in \mathcal E} c_n^{{\varepsilon}}\bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-(\tilde u_n -\tilde v_n)\bigr]\Bigr\|_{L_{t,x}^{\frac
{10}3}([T,\infty)\times\Omega)}.\label{E:control 462}
\end{align}
We prove that the first two summands are $o(1)$ in Lemmas~\ref{L:small u} and \ref{L:bdfv}, respectively. Controlling the last summand is a much lengthier enterprise and follows from Lemma~\ref{L:rem}. This will complete the proof of Proposition~\ref{P:long times}, which together with Propositions~\ref{P:ng}, \ref{P:missing}, and \ref{P:short times} yields Theorem~\ref{T:LF3}.
\begin{lem}\label{L:small u}
We have
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
\end{lem}
\begin{proof}
By the triangle inequality and H\"older,
\begin{align}
\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}}
&\lesssim\Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_{t,x}^{\frac{10}3}}\notag\\
&\lesssim \Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}| |u_n|\Bigr\|_{L_t^\infty L_x^2}^{\frac15}
\Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^{\frac83}L_x^4}^{\frac 45},\label{interp}
\end{align}
where all spacetime norms are over $[T,\infty)\times{\mathbb{R}}^3$. To estimate the first factor on the right-hand side of \eqref{interp}, we use
\begin{align*}
|x-2t\xi_n|^2+|x-2t\xi_m|^2=2|x-(\xi_n+\xi_m)t|^2+2t^2|\xi_n-\xi_m|^2
\end{align*}
together with \eqref{bdforc} to get
\begin{align*}
\Bigl\|\sum_{n\in \mathcal S}& |c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^\infty L_x^2([T,\infty)\times{\mathbb{R}}^3)}^2\\
&\lesssim \biggl\|\sum_{n,m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac32}
\int_{{\mathbb{R}}^3} e^{-\frac{\sigma^2|x-2t\xi_n|^2}{4(\sigma^4+t^2)}-\frac{\sigma^2|x-2t\xi_m|^2}{4(\sigma^4+t^2)}}\,dx\biggr\|_{L_t^{\infty}([T,\infty))}\\
&\lesssim \biggl\|\sum_{n, m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\exp\Bigl\{-\frac{\sigma^2 t^2|\xi_n-\xi_m|^2}{2(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_t^{\infty}([T,\infty))}.
\end{align*}
For $t\geq T$ we have
\begin{align*}
\frac{\sigma^2 t^2|\xi_n-\xi_m|^2}{2(\sigma^4+t^2)}
&\ge\frac{\sigma^2|n-m|^2 T^2}{2L^2(\sigma^4+T^2)}\ge\frac{|n-m|^2T^2}{4\sigma^4[\log\log(\tfrac 1\eps)]^2}\ge\frac{|n-m|^2}{\log^5(\tfrac 1{\varepsilon})},
\end{align*}
and so derive
\begin{align*}
\Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^\infty L_x^2([T,\infty)\times{\mathbb{R}}^3)}^2
&\lesssim \sum_{n, m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\exp\Bigl\{-\frac{|n-m|^2}{\log^5(\tfrac 1{\varepsilon})}\Bigr\}\\
&\lesssim \sum_{n\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\log^{\frac{15}2}(\tfrac 1{\varepsilon})\\
&\lesssim\frac{(\sigma{\varepsilon})^3}{L^6}\log^{\frac {15}2}(\tfrac1{\varepsilon})\cdot\biggl(\frac L{\varepsilon}\log\log(\tfrac 1\eps)\biggr)^3\\
&\lesssim \log^{\frac{15}2}(\tfrac 1{\varepsilon}).
\end{align*}
We now turn to the second factor on the right-hand side of \eqref{interp}. As
\begin{align*}
\sum_{j=1}^4\frac 14 |x-2\xi_{n_j} t|^2&= \biggl|x-\frac {\sum_{j=1}^4\xi_{n_j}}2 t\biggr|^2+\frac {t^2}4\sum_{j<k}|\xi_{n_j} -\xi_{n_k}|^2,
\end{align*}
we have
\begin{align}\label{129}
\Bigl\|\sum_{n\in \mathcal S}&|c_n^{{\varepsilon}}| |u_n|\Bigr\|_{L_x^4({\mathbb{R}}^3)}^4\notag\\
&\lesssim \sum_{n_1,\cdots,n_4\in \mathcal S}\bigl|c_{n_1}^{{\varepsilon}}c_{n_2}^{{\varepsilon}} c_{n_3}^{{\varepsilon}}c_{n_4}^{\varepsilon}\bigr|
\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^3\int_{{\mathbb{R}}^3} \exp\Bigl\{-\sum_{j=1}^4 \frac{\sigma^2|x-2\xi_{n_j} t|^2}{4(\sigma^4+t^2)}\Bigr\}\,dx\notag\\
&\lesssim \sum_{n_1,\cdots,n_4\in \mathcal S} \frac{(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32} \exp\Bigl\{-\frac
{\sigma^2t^2}{4(\sigma^4+t^2)}\sum_{j<k}|\xi_{n_j}-\xi_{n_k}|^2\Bigr\}.
\end{align}
To estimate the sum in \eqref{129} we divide it into two parts. Let $N:=\log^3(\frac1{{\varepsilon}}).$
\textbf{Part 1:} $|n_j-n_k|\ge N$ for some $1\le j\neq k\le 4$. We estimate the contribution of the summands conforming to this case to LHS\eqref{129} by
\begin{align*}
\frac{(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac L{{\varepsilon}}\log\log(\tfrac 1{{\varepsilon}})\biggr)^{12}& \biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32}
\exp\Bigl\{-\frac {\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)} \Bigr\}\\
&\lesssim t^{-3} \sigma^9{\varepsilon}^{-6}[\log\log(\tfrac 1\eps)]^{12}\exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}.
\end{align*}
For $T\le t\le \sigma^2$, we estimate
\begin{align*}
\exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}\le\exp\Bigl\{-\frac{T^2N^2}{8\sigma^2L^2}\Bigr\}\le{\varepsilon}^{100}
\end{align*}
while for $t\ge \sigma^2$,
\begin{align*}
\exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}\le\exp\Bigl\{-\frac{\sigma^2N^2}{8L^2}\Bigr\}\le {\varepsilon}^{100}.
\end{align*}
Thus the contribution of Part 1 is $O( {\varepsilon}^{80}t^{-3})$.
\textbf{Part 2:} $|n_j-n_k|\le N$ for all $1\le j\neq k\le 4$. We estimate the contribution of the summands conforming to this case to LHS\eqref{129} by
\begin{align*}
\frac {(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32}N^9 \biggl(\frac L{{\varepsilon}}\log\log(\tfrac 1{{\varepsilon}})\biggr)^3
\lesssim \frac{\sigma^9N^9{\varepsilon}^3}{L^9t^3}[\log\log(\tfrac 1\eps)]^3\lesssim \frac{{\varepsilon}^3\log^{27}(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^6} t^{-3}.
\end{align*}
Collecting the two parts and integrating in time, we obtain
\begin{align*}
\Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^{\frac83}L_x^4([T,\infty)\times{\mathbb{R}}^3)}
\lesssim \frac{{\varepsilon}^{\frac34}\log^{\frac{27}4}(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{\frac 32}}\cdot T^{-\frac 38}
\lesssim {\varepsilon}^{\frac38}\delta^{-\frac38}\log^8(\tfrac 1{\varepsilon}).
\end{align*}
Putting everything together and invoking \eqref{interp} we get
$$
\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T, \infty)\times{\mathbb{R}}^3)}
\lesssim {\varepsilon}^{\frac3{10}}\delta^{-\frac3{10}}\log^{\frac34+\frac{32}5}(\tfrac1{\varepsilon})=o(1) \qtq{as}{\varepsilon}\to0.
$$
This completes the proof of the lemma.
\end{proof}
\begin{lem}\label{L:bdfv}
We have
$$
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} \tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to0.
$$
\end{lem}
\begin{proof}
By H\"older's inequality,
\begin{align}\label{E:interp}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} \tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}}
\lesssim \Bigl\|\sum_{n\in \mathcal E}c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^\infty L_x^2}^{\frac15}
\Bigl\|\sum_{n\in \mathcal E}c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^{\frac83}L_x^4}^{\frac 45},
\end{align}
where all spacetime norms are over $[T,\infty)\times{\mathbb{R}}^3$.
First we note that from \eqref{sig41} and \eqref{sig3}, we can bound
\begin{align}\label{bdfv}
|v_n(t,x)|
&\lesssim \log^{\frac52}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\exp\Bigl\{-\frac{\sigma^2|x-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\frac1{\varepsilon})]}\Bigr\}.
\end{align}
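Indeed, taking absolute values in the definition \eqref{forv} (here $x_n(t)$ denotes the center of the reflected packet $v_n$, written $x(t)$ there),
\begin{align*}
|v_n(t,x)|=\Bigl(\tfrac{\sigma^2}{2\pi}\Bigr)^{\frac 34}(\sigma^4+t_c^2)^{-\frac34}\,
\bigl|\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})\bigr|^{-\frac12}\,
e^{-\frac14\Re\,(x-x_n(t))^T(\Sigma+i(t-t_c))^{-1}(x-x_n(t))},
\end{align*}
so \eqref{sig3} controls the prefactor by $\log^{\frac52}(\tfrac1{{\varepsilon}})\bigl(\tfrac{\sigma^2}{\sigma^4+t^2}\bigr)^{\frac34}$, while \eqref{sig41} controls the exponent.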
Using this bound, we estimate
\begin{align*}
&\int_{{\mathbb{R}}^3} \! |\tilde v_{n_1}||\tilde v_{n_2}| \,dx
\lesssim \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32} \!\!\int_{{\mathbb{R}}^3}\!\! \exp\Bigl\{-\frac{\sigma^2[|x-x_1(t)|^2+|x-x_2(t)|^2]}{4[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]} \Bigr\} \,dx\\
&\lesssim\log^5(\tfrac1{{\varepsilon}})\biggl[\frac {[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac1{{\varepsilon}})]}{\sigma^4+t^2}\biggr]^{\frac 32} \exp\Bigl\{-\frac{\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\}\\
&\lesssim \log^{12}(\tfrac 1{{\varepsilon}})\exp\Bigl\{-\frac{\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\},
\end{align*}
where $x_j(t)$ denotes the trajectory of $v_{n_j}$, that is,
$$
x_j(t):=2\xi^{(j)} t_c^{(j)}+2\eta^{(j)} (t-t_c^{(j)})
$$
with $\xi^{(j)}:=\xi_{n_j}$, $\eta^{(j)}:=\eta_{n_j}$, and $t_c^{(j)}$ representing the corresponding collision times.
Therefore,
\begin{align}\label{E:tilde v}
\Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_t^{\infty}L_x^2}^2
&\lesssim \sup_t \sum_{ n_1, n_2\in \mathcal E}|c_{n_1}^{{\varepsilon}}||c_{n_2}^{{\varepsilon}}|\log^{12}(\tfrac 1{{\varepsilon}}) e^{-\frac {\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}},
\end{align}
where the supremum in $t$ is taken over the region
\begin{align*}
t\ge \max\biggl\{t_c^{(1)}-4\frac {\sigma \log(\frac1{{\varepsilon}})}{|\xi^{(1)}|},\ t_c^{(2)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(2)}|}\biggr\}.
\end{align*}
Next we show that for $|n_1-n_2|\geq \log^4(\frac1{\varepsilon})$ and all such $t$,
\begin{equation}\label{difray}
|x_1(t)-x_2(t)|\ge |\xi^{(1)}-\xi^{(2)}|t.
\end{equation}
We discuss two cases. When $t\ge \max\{t_c^{(1)},t_c^{(2)}\}$, this follows immediately from Lemma~\ref{L:diverging rays}. It remains to prove \eqref{difray} for
$$
\max\biggl\{t_c^{(1)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(1)}|},\ t_c^{(2)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(2)}|}\biggr\}\le t\le \max\bigl\{t_c^{(1)}, \ t_c^{(2)}\bigr\}.
$$
Without loss of generality, we may assume $t_c^{(1)}\ge t_c^{(2)}$. Using Lemmas~\ref{L:xc} and~\ref{L:diverging rays} and the fact that
$|n_1-n_2|\ge \log^4(\frac 1{\varepsilon})$, we estimate
\begin{align*}
|x_1(t)-x_2(t)|&\ge |x_1(t_c^{(1)})-x_2(t_c^{(1)})|-|x_1(t)-x_1(t_c^{(1)})|-|x_2(t)-x_2(t_c^{(1)})|\\
&\ge 2|\xi^{(1)}-\xi^{(2)}|t_c^{(1)}-2|\xi^{(1)}||t-t_c^{(1)}|-2|\xi^{(2)}||t-t_c^{(1)}|\\
&\ge 2\frac {|n_1-n_2|}L t_c^{(1)}-8\sigma \log(\tfrac 1{{\varepsilon}})-
8\frac {|\xi^{(2)}|\sigma\log(\tfrac 1\eps)}{|\xi^{(1)}|}\\
&\ge 2\frac {|n_1-n_2|}{L}t_c^{(1)} -16\sigma\log(\tfrac 1{{\varepsilon}})[\log\log(\tfrac 1{{\varepsilon}})]^2\\
&\ge\frac {|n_1-n_2|}{L} t=|\xi^{(1)}-\xi^{(2)}| t.
\end{align*}
This completes the verification of \eqref{difray}.
Using \eqref{bdforc} and \eqref{difray}, \eqref{E:tilde v} implies
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_x^2}^2
&\lesssim \sum_{|n_1-n_2|\ge \log^4(\frac1{{\varepsilon}}),\ n_i\in \mathcal E}\frac {(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac1{{\varepsilon}}) e^{-\frac{\sigma^2|n_1-n_2|^2t^2}{8L^2[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}\\
&\quad +\sum_{|n_1-n_2|\le\log^4(\frac 1{{\varepsilon}}),\ n_i\in \mathcal E}\frac{(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac 1{{\varepsilon}})\\
&\lesssim \frac {(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac 1{{\varepsilon}})\biggl(\frac {L\log\log(\tfrac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3 \biggl(\frac{[\log\log(\tfrac 1{{\varepsilon}})]^{27}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}{t^2}\biggr)^{\frac 32}\\
&\quad+\frac{(\sigma{\varepsilon})^3}{L^6}\log^{24}(\tfrac 1{{\varepsilon}})\biggl(\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3\\
&\lesssim \log^{13}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^6}{t^3}+\log^6(\tfrac 1{\varepsilon})\biggr) + \log^{25}(\tfrac 1{{\varepsilon}}).
\end{align*}
Thus,
\begin{align}\label{E:536}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_t^{\infty}L_x^2([T,\infty)\times{\mathbb{R}}^3)}\lesssim \log^{\frac{25}2}(\tfrac1{{\varepsilon}}).
\end{align}
We now turn to estimating the second factor on the right-hand side of \eqref{E:interp}. Combining \eqref{bdfv} with
\begin{align*}
\frac 14\sum_{j=1}^4|x-x_j(t)|^2=\biggl|x-\frac 14\sum_{j=1}^4 x_j(t)\biggr|^2+\frac 1{16}\sum_{j<l}|x_j(t)-x_l(t)|^2,
\end{align*}
we get
\begin{align*}
&\int_{{\mathbb{R}}^3}|\tilde v_{n_1}||\tilde v_{n_2}||\tilde v_{n_3}||\tilde v_{n_4}|\,dx\\
&\lesssim \log^{10}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^3\!\!\int_{{\mathbb{R}}^3}\!\!\exp\Bigl\{ -\frac {\sigma^2}{4[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac1{{\varepsilon}})]}\sum_{j=1}^4|x-x_j(t)|^2\Bigr\}\,dx\\
&\lesssim \log^{10}(\tfrac1{{\varepsilon}})\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^3\biggl(\frac{[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}{\sigma^2}\biggr)^{\frac 32}
e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}\\
&\lesssim\sigma^3t^{-3}\log^{17}(\tfrac 1{{\varepsilon}})e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}.
\end{align*}
Combining this with \eqref{bdforc}, we obtain
\begin{align}\label{359}
\Bigl\|\sum_{n\in \mathcal E} &c_n^{{\varepsilon}} \tilde v_n(t)\Bigr\|_{L_x^4}^4
\lesssim \!\sum_{n_1,\dots,n_4\in \mathcal E}\! \frac{(\sigma{\varepsilon})^6}{L^{12}}\sigma^3t^{-3}\log^{17}(\tfrac1{{\varepsilon}}) e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}.
\end{align}
To estimate the sum above we break it into two parts. Let $N:= \log^4(\frac 1{{\varepsilon}})$.
\textbf{Part 1:} $|n_j-n_k|\ge N$ for some $1\le j\neq k\le 4$. By \eqref{difray}, we have
$$
|x_j(t)-x_k(t)|\ge \frac {|n_j-n_k|}{L} t \qtq{for all} t\in \supp \prod_{l=1}^4 \tilde v_{n_l}.
$$
As $t\geq T$, we estimate the contribution of the summands conforming to this case to LHS\eqref{359} by
\begin{align*}
\frac {(\sigma{\varepsilon})^6}{L^{12}}\sigma^3t^{-3}& \log^{17}(\tfrac1{{\varepsilon}})\biggl[\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr]^{12}
\exp\Bigl\{-\frac{\sigma^2t^2N^2}{16L^2[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\}\\
&\lesssim \frac {\sigma^9}{{\varepsilon}^6}t^{-3}\log^{18}(\tfrac 1{{\varepsilon}})\exp\Bigl\{-\frac{N^2}{\log^5(\frac 1{{\varepsilon}})}\Bigr\}
\le{\varepsilon}^{100}t^{-3}.
\end{align*}
\textbf{Part 2:} $|n_j-n_k|\le N$ for all $1\le j<k \le 4$. We estimate the contribution of the summands conforming to this case to LHS\eqref{359} by
\begin{align*}
\frac {(\sigma {\varepsilon})^6}{L^{12}}&\sigma^3 t^{-3} \log^{17}(\tfrac1{{\varepsilon}}) N^9\biggl(\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3
\lesssim \biggl(\frac {\sigma N}L\biggr)^9{\varepsilon}^3 t^{-3}\log^{18}(\tfrac1{{\varepsilon}})\le {\varepsilon}^3 t^{-3}\log^{56}(\tfrac 1{{\varepsilon}}).
\end{align*}
Combining the estimates from the two cases, we obtain
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^{\frac83}L_x^4([T,\infty)\times{\mathbb{R}}^3)}
&\lesssim \bigl[{\varepsilon}^{25} + {\varepsilon}^{\frac 34}\log^{14}(\tfrac 1{{\varepsilon}})\bigr] T^{-\frac 38}
\lesssim {\varepsilon}^{\frac 38}\delta^{-\frac 38}\log^{15}(\tfrac1{\varepsilon}).
\end{align*}
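Indeed, the power $T^{-\frac 38}$ above arises simply by integrating the bounds from Parts 1 and 2 in time: taking fourth roots and then the $L_t^{\frac 83}$ norm over $[T,\infty)$ gives
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n(t)\Bigr\|_{L_x^4}\lesssim \bigl[{\varepsilon}^{25}+{\varepsilon}^{\frac 34}\log^{14}(\tfrac 1{{\varepsilon}})\bigr]t^{-\frac 34} \qtq{and} \Bigl(\int_T^\infty t^{-\frac 34\cdot\frac 83}\,dt\Bigr)^{\frac 38}=T^{-\frac 38}.
\end{align*}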
Combining the estimate just established with \eqref{E:interp} and \eqref{E:536} completes the proof of Lemma~\ref{L:bdfv}.
\end{proof}
To complete the proof of Proposition~\ref{P:long times}, we are left to estimate the last term on RHS\eqref{E:control 462}. For each $n\in \mathcal E$,
we write
\begin{equation}\label{E:259}
e^{it\Delta_\Omega} (1_\Omega\gamma_n)-(\tilde u_n -\tilde v_n)=-(w_n+r_n),
\end{equation}
where $w_n$ is chosen to agree with $\tilde u_n-\tilde v_n$ on $\partial \Omega$ and $r_n$ is the remainder. More precisely, let
$\phi\in C_c^{\infty}([0,\infty))$ with $\phi\equiv 1$ on $[0,\frac 12]$ and $\phi\equiv 0$ on $[1,\infty)$. For each $x\in \Omega$, let $x_*\in\partial\Omega$ denote the point obeying $|x-x_*|=\dist(x,\Omega^c)$. Now we define
$$
w_n(t,x):=w_n^{(1)}(t,x)+w_n^{(2)}(t,x)+w_n^{(3)}(t,x)
$$
and
\begin{align*}
w_n^{(j)}(t,x):=(\tilde u_n-\tilde v_n)(t,x_*)\phi\bigl(\tfrac{|x-x_*|}{\sigma}\bigr)
\!\begin{cases}
(1-\phi)\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr), & j=1,\\[2mm]
\phi\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr)(1-\phi)\bigl(\frac {|t-t_c^{(n)}||\xi_n|}{2\sigma\log(\frac 1{{\varepsilon}})}\bigr), &j=2,\\[2mm]
\phi\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr) \phi\bigl(\frac {|t-t_c^{(n)}||\xi_n|}{2\sigma\log(\frac 1{{\varepsilon}})}\bigr)e^{i\xi_n\cdot (x-x_*)},\!\! &j=3.
\end{cases}
\end{align*}
We will estimate $w_n$ by estimating each $w_n^{(j)}$ separately. Note that $w_n^{(3)}$ is the most significant of the three;
spatial oscillation has been introduced into this term to ameliorate the temporal oscillation of $\tilde u_n-\tilde v_n$. This subtle modification is essential to achieve satisfactory estimates.
To estimate $r_n$, we use \eqref{E:259} to write
$$
0=(i\partial_t +\Delta_\Omega)(\tilde u_n-\tilde v_n-w_n-r_n)=(i\partial_t+\Delta)(\tilde u_n-\tilde v_n-w_n)-(i\partial_t+\Delta_\Omega) r_n,
$$
which implies
$$
(i\partial_t+\Delta_\Omega) r_n=iu_n\partial_t \chi^{u_n} -iv_n\partial_t \chi^{v_n} -(i\partial_t+\Delta)w_n.
$$
Using the Strichartz inequality, we estimate
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}r_n\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}
&\lesssim \Bigl\|\sum_{n\in \mathcal E}c_n^{{\varepsilon}}\bigl[u_n\partial_t\chi^{u_n}-v_n\partial_t\chi^{v_n}\bigr]\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}\\
&\quad + \Bigl\|\sum_{n\in \mathcal E}c_n^{{\varepsilon}}(i\partial_t+\Delta)w_n\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}.
\end{align*}
Putting everything together, we are thus left to prove the following
\begin{lem}\label{L:rem} As ${\varepsilon}\to0$, we have
\begin{align}
&\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} u_n\partial_t \chi^{u_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times \Omega)}
+\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} v_n \partial_t \chi^{v_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times \Omega)}=o(1)\label{rem1}\\
&\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}w_n^{(j)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}
+\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(i\partial_t+\Delta)w_n^{(j)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}=o(1),\label{rem2}
\end{align}
for each $j=1,2,3$. As previously, $T=\frac1{10}{\varepsilon}\delta[\log\log(\frac1{\varepsilon})]^{-1}$.
\end{lem}
\begin{proof}
We first prove the estimate \eqref{rem1} for $u_n$. Recall the following bound for $u_n$:
\begin{align*}
|u_n(t,x)|\lesssim \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34} \exp\Bigl\{-\frac{\sigma^2|x-2\xi_n t|^2}{4(\sigma^4+t^2)}\Bigr\}.
\end{align*}
Also, for $t\in \supp \partial_t \chi^{u_n}=[t_c^{(n)}+\frac{2\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|},t_c^{(n)}+\frac{4\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|}]$ we have
$t\le \sigma^2$ and, by the definition of $\mathcal E$,
\begin{align*}
\dist(2\xi_n t,\Omega)\gtrsim \frac{|\xi_n||t-t_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge\frac {\sigma\log(\frac1{{\varepsilon}})}{[\log\log(\frac 1{{\varepsilon}})]^5}.
\end{align*}
Thus,
\begin{align*}
|\partial_t\chi^{u_n}|^2\int_{\Omega}|u_n(t,x)|^2\,dx
&\lesssim\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32}|\partial_t\chi^{u_n}|^2\int_{\Omega}\exp\Bigl\{-\frac {\sigma^2|x-2\xi_{n}t|^2}{2(\sigma^4+t^2)}\Bigr\}\,dx\\
&\lesssim \sigma^{-3}\biggl(\frac{|\xi_n|}{\sigma\log(\tfrac 1\eps)}\biggr)^2 \int_{|y|\ge\frac{\sigma\log(\frac1{{\varepsilon}})}{[\log\log(\frac 1{{\varepsilon}})]^5}}e^{-\frac {|y|^2}{4\sigma^2}}\,dy\\
&\lesssim {\varepsilon}^{200}.
\end{align*}
Summing in $n$ and using \eqref{bdforc}, we obtain
\begin{align*}
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}u_n\partial_t\chi^{u_n}\Bigr\|_ {L_t^1L_x^2([T,\infty)\times\Omega)}
&\lesssim \sum_{n\in \mathcal E} \frac {(\sigma{\varepsilon})^{\frac 32}}{L^3}{\varepsilon}^{100} \frac{\sigma\log(\frac1{{\varepsilon}})}{|\xi_n|}\leq {\varepsilon}^{90}.
\end{align*}
The estimate for $v_n$ is similar. Note that by the definition of $\mathcal E$, for $t\in\supp \partial_t \chi^{v_n}=[t_c^{(n)}-4\frac {\sigma \log(\frac1{{\varepsilon}})}{|\xi_n|},t_c^{(n)}-2\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi_n|}]$ we have
\begin{align}\label{124}
\dist(x_n(t), \Omega)\gtrsim\frac{|\xi_n||t-t_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge \frac {\sigma\log(\frac1{{\varepsilon}})} {[\log\log(\frac 1{{\varepsilon}})]^5}
\end{align}
and, by Lemma \ref{L:xc},
\begin{align*}
t\leq t_c^{(n)}\lesssim \frac{\delta}{|\xi_n|}[\log\log(\tfrac 1\eps)]^8 \qtq{and} t^2\log^4(\tfrac1{\varepsilon})\le\sigma^4[\log\log(\tfrac 1\eps)]^{19}.
\end{align*}
Therefore, using \eqref{bdfv} we get
\begin{align*}
|(\partial_t\chi^{v_n})v_n(t,x)|^2
&\lesssim \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac32}\exp\biggl\{-\frac{\sigma^2|x-x_n(t)|^2}{2[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac 1{\varepsilon})]}\biggr\}\cdot\frac{|\xi_n|^2}{[\sigma\log(\tfrac 1\eps)]^2}\\
&\lesssim \sigma^{-5}|\xi_n|^2\log^{3}(\tfrac1{\varepsilon})\exp\biggl\{-\frac{|x-x_n(t)|^2}{4\sigma^2[\log\log(\tfrac 1\eps)]^{44}}\biggr\}.
\end{align*}
Using \eqref{124} and computing as for $u_n$, we obtain
\begin{align*}
\int_{\Omega} |\partial _t\chi^{v_n} v_n(t,x)|^2 \,dx\lesssim {\varepsilon}^{200}
\end{align*}
and then
$$
\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}v_n\partial_t\chi^{v_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)} \le {\varepsilon}^{90}.
$$
This completes the proof of \eqref{rem1}.
We now turn to estimating \eqref{rem2}. We begin with the contribution from $w_n^{(1)}$. Using the definitions of $\tilde u_n(t,x)$ and $\tilde v_n(t,x)$, as well as \eqref{sig42}, \eqref{sig3}, and the fact that $\partial_t[\det M(t)]^{-1/2} = -\frac12 [\det M(t)]^{-1/2} \Tr[M(t)^{-1}\partial_t M(t)]$, we estimate
\begin{align}
|w_n^{(1)}(t,x)|&+|(i\partial_t+\Delta)w_n^{(1)}(t,x)|\notag\\
&\lesssim \Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]| u_n(t,x_*)| \chi_1(t,x)\label{cun}\\
&\quad+\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}{(\frac1{\varepsilon})}}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|v_n(t,x_*)| \chi_2(t,x)\label{cvn},
\end{align}
where $\chi_1(t,x)$ is a cutoff to the spacetime region
\begin{align*}
\biggl\{(t,x)\in [0,\infty) \times\Omega: \, |x-x_*|\le \sigma, \ |x_*-x_c^{(n)}|\geq\tfrac12 \sigma\log(\tfrac 1\eps),\ t\leq t_c^{(n)}+4\frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\}
\end{align*}
and $\chi_2(t,x)$ is a cutoff to the spacetime region
\begin{align*}
\biggl\{(t,x)\in [0,\infty) \times\Omega: \, |x-x_*|\le \sigma, \ |x_*-x_c^{(n)}|\geq\tfrac12\sigma\log(\tfrac 1\eps),\ t\ge t_c^{(n)}-4\frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\}.
\end{align*}
Note that
\begin{align}\label{E:chi}
\int_{\Omega} \chi_1(t,x) + \chi_2(t,x) \, dx\lesssim \sigma \qtq{for all} t\geq 0.
\end{align}
To estimate the contribution from \eqref{cun}, we note that on the spacetime support of this term we have $t\le \sigma^2$ and,
by the definition of $\mathcal E$,
\begin{align*}
\frac{|x_*-x_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}&\lesssim |x_*-2\xi_n t|\leq |x_*| + |x_c^{(n)}| + 2|\xi_n(t-t_c^{(n)})|\lesssim 1.
\end{align*}
Thus we can estimate
\begin{align}
\eqref{cun}
&\lesssim\bigl[\sigma^{-2}+|\xi_n|^2+\sigma^{-4}\bigr]|u_n(t,x_*)|\chi_1(t,x)\notag\\
&\lesssim {\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^2 \sigma^{-\frac32}\exp\Bigl\{-\frac{c\sigma^2|x_*-x_c^{(n)}|^2}{\sigma^4[\log\log(\tfrac 1\eps)]^8}\Bigr\}\chi_1(t,x)\notag\\
&\lesssim {\varepsilon}^{-2}\delta^{-2}\sigma^{-\frac32}[\log\log(\tfrac 1\eps)]^2\exp\Bigl\{-\frac{\log^2(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^9}\Bigr\}\chi_1(t,x)\notag\\
&\le {\varepsilon}^{100}\chi_1(t,x).\label{E:cun}
\end{align}
To estimate the contribution from \eqref{cvn}, we discuss long and short times separately. If $t\geq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$, then $t\gg t_c^{(n)}$ and so $2 |t-t_c^{(n)}|\geq t$. Using the definition of $\mathcal E$, we thus obtain
\begin{align*}
|x_*-x_n(t)|\ge \dist(x_n(t), \partial\Omega)\gtrsim \frac{|2\xi_n(t-t_c^{(n)})|}{[\log\log(\tfrac 1\eps)]^4}\geq \frac{|\xi_nt|}{[\log\log(\tfrac 1\eps)]^5}.
\end{align*}
Noting also that $\sigma^4\le t^2\log^4(\tfrac 1{\varepsilon})$, we estimate
\begin{align}\label{l2}
\frac{\sigma^2|x_*-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac1{\varepsilon})]}
&\ge \frac{\sigma^2 |\xi_n|^2t^2}{8[\log\log(\tfrac 1\eps)]^{35}t^2\log^4(\tfrac 1{\varepsilon})}\ge \frac\delta{{\varepsilon}\log^3(\frac 1{\varepsilon})}.
\end{align}
Using the crude upper bound
$$
|x_*-x_n(t)|\leq |x_*|+|x_c^{(n)}| +2|\xi_n|(t-t_c^{(n)})\lesssim 1 + |\xi_n|t,
$$
together with \eqref{bdfv} and \eqref{l2}, we obtain
\begin{align*}
\eqref{cvn}
&\lesssim \log^{10}{(\tfrac1{\varepsilon})}{\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^2 \log^{\frac52}{(\tfrac1{\varepsilon})}\sigma^{\frac 32}t^{-\frac32}\exp\Bigl\{-\frac \delta{{\varepsilon}\log^3(\tfrac1{\varepsilon})}\Bigr\}\chi_2(t,x)\\
&\le t^{-\frac 32}{\varepsilon}^{100}\chi_2(t,x)
\end{align*}
for $t\geq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$.
Now consider the regime $t_c^{(n)}-4\sigma\log(\tfrac 1\eps)|\xi_n|^{-1}\leq t\leq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$. By the definition of $\mathcal E$, we have
\begin{align}\label{E:515}
|x_*-x_n(t)|\gtrsim \frac{|x_*-x_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge\frac{\sigma\log(\tfrac 1\eps)}{[\log\log(\tfrac 1\eps)]^5}.
\end{align}
For the times under consideration,
\begin{align*}
\sigma^4+t^2\log^4(\tfrac 1{\varepsilon})\le \sigma^4+\delta^2{\varepsilon}^2\log^4(\tfrac 1{\varepsilon})[\log\log(\tfrac 1\eps)]^{22}\le \sigma^4[\log\log(\tfrac 1\eps)]^{23},
\end{align*}
and so we obtain
\begin{align}\label{l1}
\frac{\sigma^2|x_*-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac1{\varepsilon})]}&\geq\frac{\log^2(\tfrac 1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{60}}.
\end{align}
Using the crude upper bound
\begin{align*}
|x_*-x_n(t)|\lesssim |x_*|+|x_c^{(n)}|+|\xi_n t|\lesssim [\log\log(\tfrac 1\eps)]^{10}
\end{align*}
together with \eqref{bdfv} and \eqref{l1}, we obtain
\begin{align*}
\eqref{cvn}
&\lesssim {\varepsilon}^{-2}\delta^{-2}\log^6(\tfrac1{{\varepsilon}})[\log\log(\tfrac 1\eps)]^{20} \log^{\frac52}{(\tfrac1{\varepsilon})}\sigma^{-\frac 32}\exp\Bigl\{-\frac{\log^2(\tfrac 1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{60}}\Bigr\}\chi_2(t,x)\\
&\le {\varepsilon}^{100}\chi_2(t,x)
\end{align*}
in the short time regime.
Collecting our estimates for long and short times, we get
\begin{align*}
\eqref{cvn}\lesssim \langle t\rangle^{-\frac32}{\varepsilon}^{100} \chi_2(t,x).
\end{align*}
Combining this with \eqref{bdforc}, \eqref{E:chi}, and the bound \eqref{E:cun} for \eqref{cun}, we obtain
\begin{align*}
\Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}w_n^{(1)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}+\Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}(i\partial_t+\Delta) w_n^{(1)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}=o(1).
\end{align*}
This proves \eqref{rem2} for $w_n^{(1)}$.
Next we consider the term $w_n^{(2)}$. Just as for $w_n^{(1)}$, we have the following pointwise bound:
\begin{align*}
|w_n^{(2)}(t,x)| & +|(i\partial_t+\Delta )w_n^{(2)}(t,x)|\lesssim \biggl\{\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]|\tilde u_n(t,x_*)|\\
&\qquad+\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}(\frac1{\varepsilon})}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|\tilde v_n(t,x_*)|\biggr\}\cdot \chi(t,x),
\end{align*}
where $\chi(t,x)$ is a cutoff to the spacetime region
\begin{align*}
\biggl\{(t,x)\in [0,\infty)\times\Omega: \, |x-x_*|\le \sigma, \ |x_*-x_c^{(n)}|\le \sigma\log(\tfrac 1\eps),\ |t-t_c^{(n)}|\ge \frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\}.
\end{align*}
On the support of $\tilde u_n(t,x_*) \chi(t,x)$ we have $t\le \sigma^2$ and
\begin{align*}
|x_*-2\xi_n t|\ge |2\xi_n(t-t_c^{(n)})|-|x_*-x_c^{(n)}|\ge \sigma\log(\tfrac 1\eps).
\end{align*}
Hence
\begin{align*}
|\tilde u_n(t,x_*)|\chi(t,x)&\lesssim \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\exp\Bigl\{-\frac{\sigma^2|x_*-2\xi_nt|^2}{4(\sigma^4+t^2)}\Bigr\}
\lesssim \sigma^{-\frac 32}\exp\bigl\{-\tfrac 18 \log^2(\tfrac 1{\varepsilon})\bigr\}\\
&\le{\varepsilon}^{100}.
\end{align*}
As before, this estimate is good enough to deduce
\begin{align*}
\Bigl\|\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]|\tilde u_n(t,x_*)|\chi(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}\cap L_t^1L_x^2([T,\infty)\times\Omega)}\le {\varepsilon}^{90}.
\end{align*}
To estimate the contribution of the $\tilde v_n$ term, we split into short and long times as in the treatment of the corresponding term in $w_n^{(1)}$. Indeed, the treatment of the regime $t\ge \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$ follows verbatim as there. For the complementary set of times $t_c^{(n)}-4\sigma\log(\tfrac 1\eps)|\xi_n|^{-1}\leq t\leq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$, we estimate
\begin{align*}
|x_*-x_n(t)|&\ge|x_c^{(n)}-x_n(t)|-|x_c^{(n)}-x_*|=2|\xi_n(t-t_c^{(n)})|-|x_c^{(n)}-x_*|\ge \sigma\log(\tfrac 1\eps).
\end{align*}
This plays the role of \eqref{E:515}; indeed, it is a stronger bound. With this in place, arguing as for $w_n^{(1)}$ we obtain
\begin{align*}
\Bigl\|\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}(\frac1{\varepsilon})}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|\tilde v_n(t,x_*)|\chi(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}\cap L_t^1L_x^2([T,\infty)\times\Omega)}\le {\varepsilon}^{90}.
\end{align*}
Combining the two estimates and using \eqref{bdforc} yields \eqref{rem2} for $w_n^{(2)}$.
It remains to prove \eqref{rem2} for $w_n^{(3)}$, which is the most subtle of all.
\begin{lem}[Almost disjointness of the $w^{(3)}_n$]\label{L:disjoint w3} Fix $(t,x)\in{\mathbb{R}}\times\Omega$. Then
\begin{equation}\label{E:disjoint w3}
\# \bigl\{ n\in \mathcal E : w^{(3)}_n(t,x) \neq 0 \bigr\} \lesssim
[\log(\tfrac 1\eps)]^{12}.
\end{equation}
\end{lem}
\begin{proof}
From the definition of $w^{(3)}_n$ we see that if $w^{(3)}_n(t,x) \cdot w^{(3)}_m(t,x) \neq 0$, then
\begin{align*}
|t_c^{(n)}-t_c^{(m)}| \leq |t_c^{(n)}-t| + |t-t_c^{(m)}| \le 2\bigl( |\xi_n|^{-1}+ |\xi_m|^{-1} \bigr)\sigma\log(\tfrac 1\eps)
\end{align*}
and
\begin{align*}
|x_c^{(n)}-x_c^{(m)}| &\leq |x_c^{(n)}-x_*| + |x_*-x_c^{(m)}| \le 2\sigma\log(\tfrac 1\eps) .
\end{align*}
Combining these with
\begin{align*}
|x_c^{(n)}-x_c^{(m)}| &= 2 |\xi_n t_c^{(n)}-\xi_m t_c^{(m)} | \\
&= \bigl| (\xi_n+\xi_m) (t_c^{(n)}-t_c^{(m)}) + (\xi_n-\xi_m)(t_c^{(n)}+t_c^{(m)}) \bigr|
\end{align*}
and ${\varepsilon}^{-1} [\log\log(\tfrac 1\eps)]^{-1} \leq |\xi_n|,|\xi_m|\leq {\varepsilon}^{-1}\log\log(\tfrac 1\eps)$ yields
\begin{align*}
|\xi_n-\xi_m| \, (t_c^{(n)}+t_c^{(m)}) \lesssim \sigma\log(\tfrac 1\eps) +
\sigma\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^{2} \lesssim \sigma\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^{2}.
\end{align*}
From Lemma~\ref{L:xc} we have $t_c^{(n)}+t_c^{(m)} \geq \delta{\varepsilon}[\log\log(\tfrac 1\eps)]^{-1}$ and so
$$
|n-m| = L |\xi_n-\xi_m| \lesssim \frac{L\sigma}{\delta{\varepsilon}}\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^3 = [\log(\tfrac 1\eps)]^3[\log\log(\tfrac 1\eps)]^4 \leq [\log(\tfrac 1\eps)]^4.
$$
The lemma now follows; RHS\eqref{E:disjoint w3} bounds the number of lattice points in a ball of this radius.
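To make the final count explicit: assuming, as the relation $|n-m|=L|\xi_n-\xi_m|$ suggests, that the indices range over (a subset of) the lattice ${\mathbb{Z}}^3$, for each fixed $n$ the number of admissible $m$ is
\begin{align*}
\#\bigl\{m\in{\mathbb{Z}}^3 : |n-m|\leq [\log(\tfrac 1\eps)]^4\bigr\}\lesssim \bigl([\log(\tfrac 1\eps)]^4\bigr)^3=[\log(\tfrac 1\eps)]^{12},
\end{align*}
which is precisely RHS\eqref{E:disjoint w3}.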
\end{proof}
To continue, we note that on the support of $w_n^{(3)}$ we have $\tilde u_n(t,x)=u_n(t,x)$ and $\tilde v_n(t,x)=v_n(t,x)$.
We rewrite $w_n^{(3)}$ as follows:
\begin{align*}
w^{(3)}_n(t,x)&=\exp\{it|\xi_n|^2-i\xi_n\cdot(x_*-x_c^{(n)})\}\bigl[u_n(t,x_*)-v_n(t,x_*)\bigr]\\
&\qquad\quad \cdot
\phi\biggl(\frac{|x-x_*|}{\sigma}\biggr)\phi\biggl(\frac{|x_*-x_c^{(n)}|}{\sigma\log(\tfrac 1\eps)}\biggr) \phi\biggl(\frac{|t-t_c^{(n)}| |\xi_n|}{2\sigma\log(\tfrac 1\eps)}\biggr)\\
&\qquad\quad \cdot\exp\{-it|\xi_n|^2+i\xi_n\cdot(x-x_c^{(n)})\}\\
&=:A_n(t,x)\cdot B_n(t,x)\cdot C_n(t,x).
\end{align*}
We have the following pointwise bounds on $A_n,B_n,C_n$, and their derivatives that are uniform in $n$:
\begin{align*}
&\begin{cases}
|C_n(t,x)|\le1,\quad |\nabla C_n(t,x)|\le |\xi_n|\lesssim {\varepsilon}^{-1}\log\log(\tfrac 1\eps),\\
(i\partial_t+\Delta) C_n(t,x)=0,
\end{cases}\\
&\begin{cases}
|B_n(t,x)|\le 1, \ |\nabla B_n(t,x)|\lesssim \sigma^{-1}+[\sigma\log(\tfrac 1{\varepsilon})]^{-1}\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac12}, \\
|(i\partial_t+\Delta)B_n(t,x)|\lesssim \sigma^{-2} +[\sigma\log(\tfrac 1\eps)]^{-2}+\frac{|\xi_n|}{\sigma\log(\frac1{\varepsilon})}\lesssim {\varepsilon}^{-\frac32}\delta^{-\frac12},
\end{cases}\\
&\begin{cases}
|A_n(t,x)|\lesssim {\varepsilon}^{-\frac 14}\delta^{-\frac54}\log^{12}(\tfrac 1{\varepsilon}), \ |\nabla A_n(t,x)|\lesssim {\varepsilon}^{-\frac34}\delta^{-\frac 74}\log^{12}(\tfrac 1{\varepsilon}),\\
|(i\partial_t+\Delta) A_n(t,x)|\lesssim {\varepsilon}^{-\frac 74}\delta^{-\frac74}\log^9(\tfrac 1{\varepsilon}),
\end{cases}
\end{align*}
on the support of $w_n^{(3)}$. Indeed, the bounds on $C_n$ and $B_n$ follow from direct computations, while the bounds on $A_n$ were proved in Lemma~\ref{L:uv match}. Using these bounds we immediately get
\begin{align}
\bigl\|w_n^{(3)}\bigr\|_{L_{t,x}^\infty([T,\infty)\times\Omega)}&\lesssim{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{\varepsilon})\label{E:w3}\\
\bigl\|(i\partial_t+\Delta)w_n^{(3)}\bigr\|_{L_{t,x}^\infty([T,\infty)\times\Omega)}&\lesssim{\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^{13}(\tfrac 1{\varepsilon}),\label{E:laplace w3}
\end{align}
uniformly for $n\in \mathcal E$. Additionally, the spacetime support of $w_n^{(3)}$ has measure
$$
\bigl|\supp w_n^{(3)}\bigr| \lesssim \bigl[\sigma \log(\tfrac1{\varepsilon})\, {\varepsilon}\log\log(\tfrac 1\eps)\bigr]\, \sigma\, \bigl[\sigma\log(\tfrac1{\varepsilon})\bigr]^2 \lesssim \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps).
$$
Using this together with \eqref{bdforc}, Lemma~\ref{L:disjoint w3}, \eqref{E:w3}, and H\"older's inequality, we estimate
\begin{align*}
&\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} w_n^{(3)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}^{\frac{10}3}\\
&\lesssim \sum_{n_1, \dots, n_4\in \mathcal E} |c_{n_1}^{\varepsilon}|^{\frac56} \cdots |c_{n_4}^{\varepsilon}|^{\frac56} \int_T^\infty\int_\Omega |w_{n_1}^{(3)}(t,x)|^{\frac56}\cdots |w_{n_4}^{(3)}(t,x)|^{\frac56}\, dx\, dt\\
&\lesssim \frac{(\sigma {\varepsilon})^5}{L^{10}}\log^{36}(\tfrac1{\varepsilon})\Bigl[\frac L{\varepsilon}\log\log(\tfrac 1\eps)\Bigr]^3 \bigl[{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{\varepsilon})\bigr]^{\frac{10}3} \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps)\\
&\lesssim {\varepsilon}^{\frac{19}6}\delta^{-\frac{19}6}\log^{82}(\tfrac1{\varepsilon}) = o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
Arguing similarly and using \eqref{E:laplace w3} in place of \eqref{E:w3}, we obtain
\begin{align*}
&\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta)w_n^{(3)}\Bigr\|_{L_{t,x}^2([T,\infty)\times\Omega)}^2\\
&\lesssim \sum_{n_1,n_2\in \mathcal E} |c_{n_1}^{\varepsilon}||c_{n_2}^{\varepsilon}|\int_T^\infty\int_\Omega \bigl| (i\partial_t+\Delta)w_{n_1}^{(3)}(t,x)\bigr|\bigl| (i\partial_t+\Delta)w_{n_2}^{(3)}(t,x)\bigr|\, dx\, dt\\
&\lesssim \frac{(\sigma {\varepsilon})^3}{L^6}\log^{12}(\tfrac1{\varepsilon})\Bigl[\frac L{\varepsilon}\log\log(\tfrac 1\eps)\Bigr]^3 \bigl[{\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^{13}(\tfrac 1{\varepsilon})\bigr]^2 \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps)\\
&\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32}\log^{46}(\tfrac1{\varepsilon}).
\end{align*}
To convert this to a bound in $L^1_tL^2_x$, we need the following consequence of Lemma~\ref{L:xc}:
\begin{align*}
\bigl| \bigl\{ t : {\textstyle\sum_{n\in \mathcal E}} c_n^{\varepsilon} w_n^{(3)}(t,x)\not\equiv 0\bigr\} \bigr|
&\leq \max_{n,m\in\mathcal E}\Bigl\{|t_c^{(n)}- t_c^{(m)}| +\tfrac{2\sigma\log(\frac1{\varepsilon})}{|\xi_n|} + \tfrac{2\sigma\log(\frac1{\varepsilon})}{|\xi_m|}\Bigr\}\\
&\lesssim {\varepsilon}\delta[\log\log(\tfrac 1\eps)]^9.
\end{align*}
Applying H\"older's inequality in the time variable, we get
\begin{align*}
\Bigl\|\sum_{n\in\mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta) & w_n^{(3)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}\\
&\lesssim \bigl[{\varepsilon}\delta[\log\log(\tfrac1{\varepsilon})]^9\bigr]^{\frac12}\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta)w_n^{(3)}\Bigr\|_{L_{t,x}^2([T,\infty)\times\Omega)}\\
&\lesssim {\varepsilon}^{\frac14}\delta^{-\frac 14}\log^{24}(\tfrac 1{\varepsilon}) = o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*}
This proves \eqref{rem2} for $w_n^{(3)}$ and so completes the proof of Lemma~\ref{L:rem}.
\end{proof}
Combining Lemmas~\ref{L:small u}, \ref{L:bdfv}, and \ref{L:rem} yields Proposition~\ref{P:long times}, which controls the contribution for large times of rays that enter the obstacle. The contribution from short times was estimated in Proposition~\ref{P:short times}, while the contributions of near-grazing rays and rays that miss the obstacle were estimated in Propositions~\ref{P:ng} and \ref{P:missing}, respectively. Putting everything together completes the proof of Theorem~\ref{T:LF3} and so the discussion of Case~(v).
\section{Linear profile decomposition}\label{S:LPD}
The purpose of this section is to prove a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$
for data in $\dot H^1_D(\Omega)$; see Theorem~\ref{T:LPD}. As we will see below, the profiles can live
in different limiting geometries; this is one of the principal differences relative to previous analyses.
Throughout this section, $\Theta:{\mathbb{R}}^3\to[0,1]$ denotes a smooth function such that
\begin{align*}
\Theta(x)=\begin{cases} 0, & |x|\le \frac 14,\\ 1, & |x| \ge \frac 12.\end{cases}
\end{align*}
We also write $\Theta^c(x):=1-\Theta(x)$ and $d(x):=\dist(x,\Omega^c)$.
\begin{lem}[Refined Strichartz estimate]\label{lm:refs}
Let $f\in \dot H^1_D(\Omega)$. Then we have
\begin{align*}
\|e^{it\Delta_\Omega} f\|_{L_{t,x}^{10}(\R\times\Omega)}\lesssim \|f\|_{\dot H^1_D(\Omega)}^{\frac 15} \sup_{N\in 2^{\Z}}\|e^{it\Delta_\Omega} f_N\|_{L_{t,x}^{10}(\R\times\Omega)}^{\frac45}.
\end{align*}
\end{lem}
\begin{proof}
From the square function estimate Lemma~\ref{sq}, Bernstein, and Strichartz inequalities,
\begin{align*}
\|e^{it\Delta_\Omega} & f\|_{L^{10}_{t,x}}^{10} \lesssim \iint_{{\mathbb{R}}\times\Omega} \Bigl(\sum_{N\in 2^{\Z}}|e^{it\Delta_\Omega} f_N|^2 \Bigr)^5 \,dx \,dt\\
&\lesssim \sum_{N_1\le \cdots\le N_5}\iint_{{\mathbb{R}}\times\Omega} |e^{it\Delta_\Omega} f_{N_1}|^2 \cdots |e^{it\Delta_\Omega} f_{N_5}|^2 \,dx\,dt\\
&\lesssim \sum_{N_1\le\cdots\le N_5} \|e^{it\Delta_\Omega} f_{N_1}\|_{L^\infty_{t,x}}\|e^{it\Delta_\Omega} f_{N_1}\|_{L^{10}_{t,x}}
\prod_{j=2}^4\|e^{it\Delta_\Omega} f_{N_j}\|_{L^{10}_{t,x}}^2 \\
&\qquad\qquad \qquad\cdot\|e^{it\Delta_\Omega} f_{N_5}\|_{L^{10}_{t,x}}\|e^{it\Delta_\Omega} f_{N_5}\|_{L^5_{t,x}}\\
&\lesssim \sup_{N\in 2^{\Z}}\|e^{it\Delta_\Omega} f_N\|_{L^{10}_{t,x}}^8 \sum_{N_1\le N_5} \bigl[1+\log\bigl(\tfrac {N_5}{N_1}\bigr)\bigr]^3 N_1^{\frac32}\| e^{it\Delta_\Omega} f_{N_1}\|_{L^\infty_t L^2_x}\\
&\qquad\qquad\qquad\cdot N_5^{\frac 12}\|e^{it\Delta_\Omega} f_{N_5}\|_{L^5_t L^{\frac{30}{11}}_x} \\
&\lesssim \sup_{N\in 2^{\Z}}\|e^{it\Delta_\Omega} f_N\|_{L^{10}_{t,x}}^8 \sum_{N_1\le N_5} \bigl[1+\log\bigl(\tfrac {N_5}{N_1}\bigr)\bigr]^3 \bigl(\tfrac{N_1}{N_5}\bigr)^{\frac 12}
\|f_{N_1}\|_{\dot H^1_D(\Omega)} \|f_{N_5}\|_{\dot H^1_D(\Omega)}\\
&\lesssim \sup_{N\in 2^{\Z}}\|e^{it\Delta_\Omega} f_N\|_{L^{10}_{t,x}}^8 \|f\|_{\dot H^1_D(\Omega)}^2,
\end{align*}
where all spacetime norms are over ${\mathbb{R}}\times\Omega$. Raising this to the power $\frac 1{10}$ yields the lemma.
\end{proof}
The refined Strichartz inequality shows that linear solutions with non-trivial spacetime norm must concentrate on at least one frequency annulus.
The next proposition goes one step further and shows that they contain a bubble of concentration around some point in spacetime. A novelty in our setting is that the bubbles of concentration may live in one of the limiting geometries identified earlier.
\begin{prop}[Inverse Strichartz inequality]\label{P:inverse Strichartz}
Let $\{f_n\}\subset \dot H^1_D(\Omega)$. Suppose that
\begin{align*}
\lim_{n\to \infty}\|f_n\|_{\dot H^1_D(\Omega)}=A < \infty \qtq{and} \lim_{n\to\infty}\|e^{it\Delta_\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}={\varepsilon} >0.
\end{align*}
Then there exist a subsequence in $n$, $\{\phi_n\}\subset \dot H^1_D(\Omega)$, $\{N_n\}\subset 2^{\Z}$, $\{(t_n, x_n)\}\subset {\mathbb{R}}\times\Omega$
conforming to one of the four cases listed below such that
\begin{gather}
\liminf_{n\to\infty}\|\phi_n\|_{\dot H^1_D(\Omega)}\gtrsim {\varepsilon}(\tfrac{{\varepsilon}}A)^{\frac 78}, \label{nontri}\\
\liminf_{n\to\infty}\Bigl\{ \|f_n\|_{\dot H^1_D(\Omega)}^2-\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2\Bigr\} \gtrsim A^2 (\tfrac{\varepsilon} A)^{\frac{15}4},\label{dech}\\
\liminf_{n\to\infty}\Bigl\{ \|e^{it\Delta_\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}-\|e^{it\Delta_\Omega} (f_n-\phi_n)\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\Bigr\} \gtrsim {\varepsilon}^{10}(\tfrac{\varepsilon} A)^{\frac{35}4}.\label{dect}
\end{gather}
The four cases are:
\begin{CI}
\item Case 1: $N_n\equiv N_\infty \in 2^{\mathbb{Z}}$ and $x_n\to x_{\infty}\in \Omega$. In this case, we choose $\phi\in \dot H^1_D(\Omega)$ and the subsequence
so that $e^{it_n\Delta_{\Omega}}f_n\rightharpoonup \phi$ weakly in $\dot H^1_D(\Omega)$ and we set $\phi_n:=e^{-it_n\Delta_{\Omega}}\phi$.
\item Case 2: $N_n\to 0$ and $-N_n x_n\to x_\infty\in {\mathbb{R}}^3$. In this case, we choose ${\tilde\phi}\in \dot H^1(\R^3)$ and the subsequence so that
$$
g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n) \rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1({\mathbb{R}}^3)
$$
and we set
$$
\phi_n(x):=N_n^{\frac 12} e^{-it_n\Delta_{\Omega}}[(\chi_n\tilde\phi)(N_n(x-x_n))],
$$
where $\chi_n(x)=\chi(N_n^{-1}x+x_n)$ and $\chi(x)=\Theta(\frac{d(x)}{\diam (\Omega^c)})$.
\item Case 3: $N_n d(x_n)\to\infty$. In this case, we choose $\tilde\phi\in \dot H^1(\R^3)$ and the subsequence so that
$$
g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n) \rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1({\mathbb{R}}^3)
$$
and we set
$$
\phi_n(x) :=N_n^{\frac12}e^{-it_n\Delta_{\Omega}}[(\chi_n\tilde\phi)(N_n(x-x_n))],
$$
where $\chi_n(x)=1-\Theta(\frac{|x|}{N_n d(x_n)})$.
\item Case 4: $N_n\to \infty$ and $N_n d(x_n)\to d_{\infty}>0$. In this case, we choose $ \tilde\phi \in \dot H^1_D({\mathbb{H}})$ and the subsequence so that
$$
g_n(x) := N_n^{-\frac12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}R_nx+x^*_n)\rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1(\R^3)
$$
and we set
$$
\phi_n(x) :=N_n^{\frac 12} e^{-it_n\Delta_{\Omega}}[\tilde\phi(N_nR_n^{-1}(\cdot-x^*_n))],
$$
where $R_n\in SO(3)$ satisfies $R_n e_3 = \frac{x_n-x^*_n}{|x_n-x^*_n|}$ and $x^*_n\in \partial \Omega$ is chosen so that $d(x_n)=|x_n-x^*_n|$.
\end{CI}
\end{prop}
\begin{rem}
The analogue of $\tilde \phi$ in Case 1 is related to $\phi$ via $\phi(x)= N_\infty^{\frac12} \tilde \phi(N_\infty (x-x_\infty))$; see \eqref{1converg}.
\end{rem}
\begin{proof}
From Lemma \ref{lm:refs} and the conditions on $f_n$, we know that for each $n$ there exists $N_n\in 2^{\Z}$ such that
\begin{align*}
\|e^{it\Delta_\Omega} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}\gtrsim {\varepsilon}^{\frac 54}A^{-\frac 14}.
\end{align*}
On the other hand, from the Strichartz and Bernstein inequalities we get
\begin{align*}
\|e^{it\Delta_\Omega} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times\Omega)}\lesssim \| P_{N_n}^{\Omega} f_n\|_{L_x^2(\Omega)} \lesssim N_n^{-1} A.
\end{align*}
By H\"older's inequality, these imply
\begin{align*}
A^{-\frac 14}{\varepsilon}^{\frac 54}&\lesssim \|e^{it\Delta_\Omega}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}\\
&\lesssim \|e^{it\Delta_\Omega}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}^{\frac 13}\|e^{it\Delta_\Omega}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)}^{\frac 23} \\
&\lesssim N_n^{-\frac 13}A^{\frac 13}\|e^{it\Delta_\Omega} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)}^{\frac 23},
\end{align*}
and so
\begin{align*}
\|e^{it\Delta_\Omega} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)} \gtrsim N_n^{\frac 12}{\varepsilon} (\tfrac {\varepsilon} A)^{\frac78}.
\end{align*}
Thus there exist $(t_n,x_n)\in {\mathbb{R}}\times \Omega$ such that
\begin{align}\label{cncen}
\Bigl|(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)\Bigr|\gtrsim N_n^{\frac12}{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}.
\end{align}
The cases in the statement of the proposition are determined solely by the behaviour of $x_n$ and $N_n$. We will now show
\begin{align}\label{lb}
N_n d(x_n)\gtrsim (\tfrac{\varepsilon} A)^{\frac{15}8} \qtq{whenever} N_n \gtrsim 1,
\end{align}
which explains the absence of the scenario $N_n \gtrsim 1$ with $N_nd(x_n)\to0$. The proof of \eqref{lb} is based on Theorem~\ref{T:heat}, which implies
\begin{align*}
\int_\Omega \bigl| e^{\Delta_\Omega / N_n^2}(x_n,y) \bigr|^2\,dy &\lesssim N_n^{6} \int_\Omega \Bigl| \bigl[N_n d(x_n)\bigr]\bigl[N_n d(x_n)+N_n|x_n-y| \bigr] e^{-c N_n^2|x_n-y|^2} \Bigr|^2 \,dy \\
&\lesssim [N_n d(x_n)]^2[N_nd(x_n) + 1]^2 N_n^3,
\end{align*}
whenever $N_n\gtrsim 1$. Writing
$$
(e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n)(x_n) = \int_\Omega e^{\Delta_\Omega / N_n^2}(x_n,y) \, \bigl[ P^\Omega_{\lambdaeq 2 N_n} e^{ - \Delta_\Omega / N_n^2} e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n \bigr](y) \,dy
$$
and using \eqref{cncen} and Cauchy--Schwarz gives
\begin{align*}
N_n^{\frac12}{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78} &\lesssim \bigl[N_n d(x_n)\bigr] \bigl[N_nd(x_n) + 1\bigr] N_n^{\frac32}
\bigl\| P^\Omega_{\leq 2 N_n} e^{ - \Delta_\Omega / N_n^2} e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n \bigr\|_{L^2_x} \\
& \lesssim \bigl[N_n d(x_n)\bigr] \bigl[N_nd(x_n) + 1\bigr] N_n^{\frac12} \| f_n \|_{\dot H^1_D(\Omega)}.
\end{align*}
The inequality \eqref{lb} now follows.
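Indeed, if $N_nd(x_n)\geq 1$ then \eqref{lb} is immediate, since ${\varepsilon}\lesssim A$ (by the Strichartz inequality) gives $(\tfrac{{\varepsilon}}A)^{\frac{15}8}\lesssim 1$; if instead $N_nd(x_n)\leq 1$, the last display together with $\|f_n\|_{\dot H^1_D(\Omega)}\lesssim A$ yields
\begin{align*}
{\varepsilon}\bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac 78}\lesssim N_nd(x_n)\bigl[N_nd(x_n)+1\bigr]A\lesssim N_nd(x_n)\,A, \qtq{and so} N_nd(x_n)\gtrsim \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac{15}8}.
\end{align*}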
Thanks to the lower bound \eqref{lb}, after passing to a subsequence, we only need to consider the four cases below, which
correspond to the cases in the statement of the proposition.
\textbf{Case 1:} $N_n\sim 1$ and $N_n d(x_n)\sim 1$.
\textbf{Case 2:} $N_n\to 0$ and $N_n d(x_n) \lambdaesssim 1$.
\textbf{Case 3:} $N_n d(x_n)\to \infty$ as $n\to \infty$.
\textbf{Case 4:} $N_n\to \infty$ and $N_n d(x_n)\sim 1$.
We will address these cases in order. The geometry in Case~1 is simplest and it allows us to introduce the basic framework for the argument. The main new difficulty in the remaining cases is the variable geometry, which is where Proposition~\ref{P:converg} and Corollary~\ref{C:LF} play a crucial role. Indeed, as we will see below, the four cases above reduce to the ones discussed in Sections~\ref{S:Domain Convergence} and~\ref{S:Linear flow convergence} after passing to a further subsequence.
With Proposition~\ref{P:converg} and Corollary~\ref{C:LF} already in place, the arguments in the four cases parallel each other rather closely. There are four basic steps. The most important is to embed the limit object ${\tilde\phi}$ back inside $\Omega$ in the form of $\phi_n$ and to show that $f_n-\phi_n$ converges to zero in suitable senses. The remaining three steps use this information to prove the three estimates \eqref{nontri}, \eqref{dech}, and \eqref{dect}.
\textbf{Case 1:} Passing to a subsequence, we may assume
\begin{align*}
N_n\equiv N_\infty\in 2^{\Z} \quad\text{and}\quad x_n\to x_\infty\in \Omega.
\end{align*}
To prefigure the treatment of the later cases we set
$$
g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n),
$$
even though the formulation of Case~1 does not explicitly include this sequence. As $f_n$ is supported in $\Omega$, the function $g_n$ is supported in
$\Omega_n :=N_n(\Omega-\{x_n\})$. Moreover,
$$
\|g_n\|_{\dot H^1_D(\Omega_n)}=\|f_n\|_{\dot H^1_D(\Omega)}\lesssim A.
$$
Passing to a subsequence, we can choose $\tilde \phi$ so that $g_n\rightharpoonup \tilde\phi$ weakly in $\dot H^1(\R^3)$. Rescaling the relation $g_n\rightharpoonup \tilde\phi$ yields
\begin{align}\label{1converg}
(e^{it_n\Delta_{\Omega}}f_n)(x)\rightharpoonup \phi(x) :=N_\infty^{\frac 12}\tilde \phi(N_\infty(x-x_\infty)) \quad \text{weakly in} \quad \dot H^1_D(\Omega).
\end{align}
To see that $\phi\in\dot H^1_D(\Omega)$ when defined in this way, we note that $\dot H^1_D(\Omega)$ is a weakly closed subset of $\dot H^1(\R^3)$; indeed, a convex set is weakly closed if and only if it is norm closed.
The next step is to prove \eqref{nontri} by showing that $\phi$ is non-trivial. Toward this end, let $h:=P^{\Omega}_{N_\infty}\delta(x_\infty)$. Then from the Bernstein inequality we have
\begin{align}\label{h bd}
\|(-\Delta_{\Omega})^{-\frac 12}h\|_{L^2(\Omega)}=\|(-\Delta_{\Omega})^{-\frac 12}P_{N_\infty}^\Omega\delta(x_\infty)\|_{L^2(\Omega)}\lesssim N_\infty^{\frac 12}.
\end{align}
In particular, $h\in \dot H^{-1}_D(\Omega)$. On the other hand, we have
\begin{align}\label{h meets phi}
\langle \phi,h\rangle
&=\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n,h\rangle=\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}} f_n,P_{N_\infty}^\Omega\delta(x_\infty)\rangle \notag \\
&=\lim_{n\to\infty}(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)+\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n, P_{N_\infty}^{\Omega}[\delta({x_\infty})-\delta({x_n})]\rangle.
\end{align}
The second limit in \eqref{h meets phi} vanishes. Indeed, basic elliptic theory shows that
\begin{align}\label{elliptic est}
\| \nabla v \|_{L^\infty(\{|x|\leq R\})} \lesssim R^{-1} \| v \|_{L^\infty(\{|x|\leq 2R\})} + R \| \Delta v \|_{L^\infty(\{|x|\leq 2R\})},
\end{align}
which we apply to $v(x) = (P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n )(x+x_n)$ with $R=\frac12 d(x_n)$. By hypothesis, $d(x_n) \sim 1$,
while by the Bernstein inequalities,
$$
\| P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n \|_{L^\infty_x} \lesssim N_\infty^{\frac12} A \qtq{and}
\| \Delta P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n \|_{L^\infty_x} \lesssim N_\infty^{\frac52} A.
$$
Thus by the fundamental theorem of calculus and \eqref{elliptic est}, for $n$ sufficiently large,
\begin{align}\label{6:37}
|\langle e^{it_n\Delta_{\Omega}} f_n,P_{N_\infty}^{\Omega}[\delta(x_\infty)-\delta(x_n)]\rangle| &\lesssim |x_\infty-x_n| \, \| \nabla P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}} f_n\|_{L^\infty(\{|x|\leq R\})}\notag\\
&\lesssim A \bigl[\tfrac{N_\infty^{\frac12}}{d(x_n)} + N_\infty^{\frac52} d(x_n)\bigr] |x_\infty-x_n|,
\end{align}
which converges to zero as $n\to \infty$.
Therefore, using \eqref{cncen}, \eqref{h bd}, and \eqref{h meets phi}, we have
\begin{align}
N_\infty^{\frac12} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78} \lesssim |\langle \phi, h\rangle| \lesssim \|\phi\|_{\dot H^1_D(\Omega)}\|h\|_{\dot H^{-1}_D(\Omega)}
\lesssim N_\infty^{\frac12}\|\phi\|_{\dot H^1_D(\Omega)}.\label{lbf}
\end{align}
As $e^{it_n\Delta_{\Omega}}$ is unitary on $\dot H^1_D(\Omega)$ we have $\|\phi_n\|_{\dot H^1_D(\Omega)}=\|\phi\|_{\dot H^1_D(\Omega)}$, and so \eqref{lbf} yields \eqref{nontri}.
Claim \eqref{dech} follows immediately from \eqref{nontri} and \eqref{1converg} since $\dot H^1_D(\Omega)$ is a Hilbert space.
The only remaining objective is to prove decoupling for the $L_{t,x}^{10}$ norm. Note
\begin{align*}
(i\partial_t)^{\frac 12} e^{it\Delta_\Omega} =(-\Delta_{\Omega})^{\frac 12} e^{it\Delta_\Omega}.
\end{align*}
Thus, by H\"older, on any compact domain $K$ in ${\mathbb{R}}\times{\mathbb{R}}^3$ we have
\begin{align*}
\|e^{it\Delta_\Omega} e^{it_n\Delta_\Omega} f_n\|_{H^{\frac 12}_{t,x}(K)}\lesssim \| \langle-\Delta_{\Omega}\rangle ^{\frac12} e^{i(t+t_n)\Delta_\Omega}f_n \|_{L^2_{t,x}(K)}\lesssim_K A.
\end{align*}
From Rellich's Lemma, passing to a subsequence, we get
\begin{align*}
e^{it\Delta_\Omega} e^{it_n\Delta_\Omega} f_n \to e^{it\Delta_\Omega} \phi \qtq{strongly in} L_{t,x}^{2}(K)
\end{align*}
and so, passing to a further subsequence, $e^{it\Delta_\Omega}e^{it_n\Delta_\Omega} f_n(x)\to e^{it\Delta_\Omega} \phi(x)$ a.e. on $K$. Using a diagonal argument
and passing again to a subsequence, we obtain
\begin{align*}
e^{it\Delta_\Omega}e^{it_n\Delta_\Omega} f_n(x)\to e^{it\Delta_\Omega} \phi(x) \quad\text{a.e. in ${\mathbb{R}}\times {\mathbb{R}}^3$}.
\end{align*}
Using the Fatou Lemma of Br\'ezis and Lieb (cf. Lemma~\ref{lm:rf}) and a change of variables, we get
\begin{align*}
\lim_{n\to \infty}\Bigl\{\|e^{it\Delta_\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}-\|e^{it\Delta_\Omega} (f_n-\phi_n)\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\Bigr\} = \|e^{it\Delta_\Omega} \phi\|_{L_{t,x}^{10}(\R\times\Omega)}^{10},
\end{align*}
from which \eqref{dect} will follow once we prove
\begin{align}\label{want}
\|e^{it\Delta_\Omega} \phi\|_{L_{t,x}^{10}(\R\times\Omega)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac{7}{8}}.
\end{align}
To see this, we use \eqref{lbf}, the Mikhlin multiplier theorem (for $e^{it\Delta_\Omega} P^\Omega_{\leq 2N_\infty}$), and Bernstein to estimate
\begin{align*}
N_\infty^{\frac12} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78}\lesssim |\langle \phi, h\rangle|
&=|\langle e^{it\Delta_\Omega} \phi, e^{it\Delta_\Omega} h\rangle|
\lesssim \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}}\|e^{it\Delta_\Omega}h\|_{L_x^{\frac{10}9}}\\
&\lesssim \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}} \|h\|_{L_x^{\frac{10}9}}
\lesssim N_\infty^{\frac3{10}} \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}},
\end{align*}
for each $|t|\le N_{\infty}^{-2}$.
Thus
$$
\|e^{it\Delta_\Omega}\phi\|_{L_x^{10}} \gtrsim N_\infty^{\frac15} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78},
$$
uniformly in $|t|\le N_{\infty}^{-2}$. Integrating in $t$ leads to \eqref{want}.
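Explicitly, restricting the time integration to the window on which the last display holds gives
\begin{align*}
\|e^{it\Delta_\Omega}\phi\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}^{10}\ge \int_{|t|\le N_\infty^{-2}}\|e^{it\Delta_\Omega}\phi\|_{L_x^{10}(\Omega)}^{10}\,dt\gtrsim N_\infty^{-2}\Bigl[N_\infty^{\frac 15}{\varepsilon}\bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78}\Bigr]^{10}={\varepsilon}^{10}\bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac{35}4},
\end{align*}
which is \eqref{want}.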
\textbf{Case 2:} As $N_n\to 0$, the condition $N_nd(x_n)\lesssim 1$ guarantees that $\{N_nx_n\}_{n\geq 1}$ is a bounded sequence; thus, passing to a subsequence, we may assume $-N_n x_n\to x_\infty\in {\mathbb{R}}^3$. As in Case 1, we define $\Omega_n :=N_n(\Omega-\{x_n\})$. Note that the rescaled obstacles $\Omega_n^c$ shrink to $x_\infty$ as $n\to \infty$; this is the defining characteristic of Case~2.
As $f_n$ is bounded in $\dot H^1_D(\Omega)$, the sequence $g_n$ is bounded in $\dot H^1_D(\Omega_n)\subseteq\dot H^1(\R^3)$. Thus, passing to a subsequence, we
can choose ${\tilde\phi}$ so that $g_n \rightharpoonup {\tilde\phi}$ in $\dot H^1(\R^3)$.
We cannot expect ${\tilde\phi}$ to belong to $\dot H^1_D(\Omega_n)$, since it has no reason to vanish on $\Omega_n^c$.
This is the role of $\chi_n$ in the definition of $\phi_n$. Next we show that this does not deform ${\tilde\phi}$ too gravely; more precisely,
\begin{align}\label{E:no deform}
\chi_n{\tilde\phi} \to {\tilde\phi}, \qtq{or equivalently,} \bigl[ 1 - \chi(N_n^{-1}x+x_n)\bigr]{\tilde\phi}(x) \to 0 \quad \text{in $\dot H^1(\R^3)$.}
\end{align}
Later, we will also need to show that the linear evolution of $\chi_n{\tilde\phi}$ in $\Omega_n$ closely approximates the whole-space linear evolution of ${\tilde\phi}$.
To prove \eqref{E:no deform} we first set $B_n:=\{ x\in {\mathbb{R}}^3 : \dist(x,\Omega_n^c) \leq \diam(\Omega_n^c)\}$, which contains $\supp(1-\chi_n)$ and $\supp(\nabla\chi_n)$. Note that because $N_n\to0$, the measure of $B_n$ shrinks to zero as $n\to\infty$. By H\"older's inequality,
\begin{align*}
\bigl\| [ 1 &- \chi(N_n^{-1}x+x_n)]{\tilde\phi}(x) \bigr\|_{\dot H^1(\R^3)} \\
&\lesssim \bigl\| [ 1 - \chi(N_n^{-1}x+x_n)]\nabla {\tilde\phi}(x) \bigr\|_{L^2({\mathbb{R}}^3)}
+ \bigl\| N_n^{-1} \bigl(\nabla\chi\bigr)(N_n^{-1}x+x_n) {\tilde\phi}(x) \bigr\|_{L^2({\mathbb{R}}^3)} \\
&\lesssim \| \nabla {\tilde\phi} \|_{L^2(B_n)} + \| {\tilde\phi} \|_{L^6(B_n)},
\end{align*}
which converges to zero by the dominated convergence theorem.
With \eqref{E:no deform} in place, the proofs of \eqref{nontri} and \eqref{dech} now follow their Case~1 counterparts very closely; this will rely on key inputs from Section~\ref{S:Domain Convergence}. We begin with the former.
Let $h:=P_1^{{\mathbb{R}}^3} \delta(0)$; then
\begin{align*}
\langle \tilde \phi, h\rangle=\lim_{n\to\infty} \langle g_n, h\rangle=\lim_{n\to\infty}\langle g_n, P_1^{\Omega_n} \delta(0)\rangle+\lim_{n\to\infty}\langle g_n, (P_1^{{\mathbb{R}}^3}- P_1^{\Omega_n})\delta(0)\rangle.
\end{align*}
The second term vanishes due to Proposition~\ref{P:converg} and the uniform boundedness of $\|g_n\|_{\dot H^1(\R^3)}$. Therefore,
\begin{align}
|\langle\tilde\phi, h\rangle|&=\Bigl|\lim_{n\to \infty}\langle g_n,P_1^{\Omega_n}\delta(0)\rangle\Bigr|\notag\\
&=\Bigl|\lim_{n\to\infty} \langle e^{it_n\Delta_{\Omega}} f_n, N_n^{\frac52} (P_1^{\Omega_n}\delta(0))(N_n(x-x_n))\rangle\Bigr|\notag\\
&=\Bigl|\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n, N_n^{-\frac12}P_{N_n}^{\Omega}\delta(x_n)\rangle\Bigr|\gtrsim{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78},\label{256}
\end{align}
where the last inequality follows from \eqref{cncen}. Thus,
\begin{align*}
\|{\tilde\phi}\|_{\dot H^1(\R^3)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}
\end{align*}
as in \eqref{lbf}. Combining this with \eqref{E:no deform}, for $n$ sufficiently large we obtain
\begin{align*}
\|\phi_n\|_{\dot H^1_D(\Omega)}= \| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}.
\end{align*}
This proves \eqref{nontri}.
To prove the decoupling in $\dot H^1_D(\Omega)$, we write
\begin{align*}
&\|f_n\|_{\dot H^1_D(\Omega)}^2 -\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2 = 2\langle f_n, \phi_n\rangle_{\dot H^1_D(\Omega)}-\|\phi_n\|_{\dot H^1_D(\Omega)}^2\\
&\quad=2\Bigl\langle N_n^{-\frac 12} (e^{it_n\Delta_{\Omega}} f_n)(N_n^{-1} x+x_n),\, {\tilde\phi}(x)\chi_n(x)\Bigr\rangle_{\dot H^1_D(\Omega_n)}-\| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}^2\\
&\quad=2\langle g_n, \tilde \phi\rangle_{\dot H^1(\R^3)}-2\bigl\langle g_n, {\tilde\phi} (1-\chi_n) \bigr\rangle_{\dot H^1(\R^3)} -\| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}^2.
\end{align*}
From the weak convergence of $g_n$ to ${\tilde\phi}$, \eqref{E:no deform}, and \eqref{nontri}, we deduce
\begin{align*}
\lim_{n\to\infty}\Bigl\{\|f_n\|_{\dot H^1_D(\Omega)}^2-\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2\Bigr\}=\|\tilde\phi\|_{\dot H^1(\R^3)}^2 \gtrsim {\varepsilon}^2 (\tfrac{{\varepsilon}}A)^{\frac 74}.
\end{align*}
This completes the verification of \eqref{dech}.
We now turn to proving decoupling of the $L_{t,x}^{10}(\R\times\Omega)$ norm, which we will achieve by showing that
\begin{align}\label{305}
\liminf_{n\to\infty}\biggl\{\|e^{it\Delta_\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}-\|e^{it\Delta_\Omega}(f_n-\phi_n)&\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\biggr\} = \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}^{10}.
\end{align}
Notice that \eqref{dect} then follows from the lower bound
\begin{align}\label{328}
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde \phi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}^{10}\gtrsim {\varepsilon}^{10} (\tfrac {\varepsilon} A)^{\frac{35}4},
\end{align}
which we prove in much the same way as in Case~1: From \eqref{256} and the Mikhlin multiplier theorem, we have
\begin{align*}
{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}&\lesssim |\langle \tilde\phi, h\rangle|\lesssim
\|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L^{10}({\mathbb{R}}^3)}\|e^{it\Delta_{{\mathbb{R}}^3}} P_1^{{\mathbb{R}}^3}\delta(0)\|_{L^{\frac
{10}9}({\mathbb{R}}^3)}\lesssim \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L^{10}({\mathbb{R}}^3)}
\end{align*}
uniformly for $|t|\leq 1$. Integrating in time yields \eqref{328} and then plugging this into \eqref{305} leads to \eqref{dect} in Case 2.
To establish \eqref{305} we need two ingredients: The first ingredient is
\begin{align}\label{c2i1}
e^{it\Delta_{\Omega_n}}[g_n-\chi_n\tilde\phi]\to 0 \quad \text{a.e. in } {\mathbb{R}}\times{\mathbb{R}}^3,
\end{align}
while the second ingredient is
\begin{align}\label{c2i2}
\|e^{it\Delta_{\Omega_n}}[\chi_n\tilde \phi]-e^{it\Delta_{{\mathbb{R}}^3}}\tilde\phi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}\to 0.
\end{align}
Combining these and passing to a subsequence if necessary we obtain
$$
e^{it\Delta_{\Omega_n}}g_n-e^{it\Delta_{{\mathbb{R}}^3}}\tilde\phi\to 0 \quad \text{a.e. in } {\mathbb{R}}\times{\mathbb{R}}^3,
$$
which by the Fatou Lemma of Br\'ezis and Lieb (cf. Lemma~\ref{lm:rf}) yields
\begin{align*}
\liminf_{n\to\infty}\Bigl\{\|e^{it\Delta_{\Omega_n}}g_n\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}^{10}-\|e^{it\Delta_{\Omega_n}}g_n-e^{it\Delta_{{\mathbb{R}}^3}} &\tilde\phi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}^{10}\Bigr\}\\
&= \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}^{10}.
\end{align*}
Combining this with \eqref{c2i2} and rescaling yields \eqref{305}.
We start with the first ingredient \eqref{c2i1}. Using the definition of $\tilde \phi$ together with \eqref{E:no deform}, we deduce
\begin{align*}
g_n-\chi_n\tilde \phi \rightharpoonup 0 \quad \text{weakly in} \quad \dot H^1({\mathbb{R}}^3).
\end{align*}
Thus, by Proposition~\ref{P:converg},
\begin{align*}
e^{it\Delta_{\Omega_n}}[g_n-\chi_n\tilde \phi]\rightharpoonup 0 \quad \text{weakly in} \quad \dot H^1({\mathbb{R}}^3)
\end{align*}
for each $t\in {\mathbb{R}}$. By the same argument as in Case 1, using the fact that $( i\partial_t)^{1/2} e^{it\Delta_{\Omega_n}}=(-\Delta_{\Omega_n})^{1/2} e^{it\Delta_{\Omega_n}}$ and passing to a subsequence, we obtain \eqref{c2i1}.
To establish \eqref{c2i2} we will make use of Corollary~\ref{C:LF}. Note that $\tlim \Omega_n={\mathbb{R}}^3\setminus\{x_\infty\}$ and by Lemma~\ref{L:dense}, $\tilde \phi$ can be well approximated in $\dot H^1({\mathbb{R}}^3)$ by $\psi\in C^\infty_c(\tlim \Omega_n)$. By \eqref{E:no deform}, for $n$ sufficiently large,
$\chi_n\tilde \phi$ are also well approximated in $\dot H^1({\mathbb{R}}^3)$ by the same $\psi\in C^\infty_c(\tlim \Omega_n)$. Thus, \eqref{c2i2} follows by combining Corollary~\ref{C:LF} with the Strichartz inequality.
\textbf{Case 3:} The defining characteristic of this case is that the rescaled obstacles $\Omega_n^c$ march off to infinity; specifically,
$\dist(0,\Omega_n^c)=N_nd(x_n)\to \infty$, where $\Omega_n:=N_n(\Omega-\{x_n\})$.
The treatment of this case parallels that of Case~2. The differing geometry of the two cases enters only in the use of Proposition~\ref{P:converg}, Corollary~\ref{C:LF}, and the analogue of the estimate \eqref{E:no deform}. As these first two inputs have already been proven in all cases, our only obligation is to prove
\begin{align}\label{E:no deform 3}
\chi_n{\tilde\phi} \to {\tilde\phi}, \qtq{or equivalently,} \Theta\bigl(\tfrac{|x|}{\dist(0,\Omega_n^c)}\bigr) {\tilde\phi}(x) \to 0 \quad \text{in $\dot H^1(\R^3)$}.
\end{align}
To this end, let $B_n:= \{x\in {\mathbb{R}}^3: \, |x|\geq \frac14\dist(0, \Omega_n^c)\}$. Then by H\"older,
\begin{align*}
\bigl\| \Theta\bigl(\tfrac{|x|}{\dist(0,\Omega_n^c)}\bigr) {\tilde\phi}(x)\bigr\|_{\dot H^1(\R^3)} \lesssim \|\nabla {\tilde\phi}(x) \|_{L^2(B_n)}+ \| {\tilde\phi} \|_{L^6(B_n)}.
\end{align*}
As $1_{B_n} \to 0$ almost everywhere, \eqref{E:no deform 3} follows from the dominated convergence theorem.
\textbf{Case 4:} Passing to a subsequence, we may assume $N_nd(x_n)\to d_\infty>0$. By weak sequential compactness of balls in $\dot H^1({\mathbb{R}}^3)$, we are guaranteed that we can find a subsequence and a ${\tilde\phi}\in \dot H^1({\mathbb{R}}^3)$ so that
$g_n \rightharpoonup {\tilde\phi}$ weakly in this space. However, the proposition claims that ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$. This is a closed subspace isometrically
embedded in $\dot H^1({\mathbb{R}}^3)$; indeed,
$$
\dot H^1_D({\mathbb{H}}) = \bigl\{ g\in\dot H^1({\mathbb{R}}^3) : {\textstyle\int_{{\mathbb{R}}^3}} g(x)\psi(x) \,dx = 0 \text{ for all } \psi\in C^\infty_c(-{\mathbb{H}}) \bigr\}.
$$
Using this characterization, it is not difficult to see that ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$ since for any compact set $K$ in the left half-space,
$K\subset \Omega_n^c$ for $n$ sufficiently large. Here $\Omega_n:=N_n R_n^{-1}(\Omega-\{x_n^*\})$, which is where $g_n$ is supported.
As ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$ we have $\phi_n \in \dot H^1_D(\Omega)$, as is easily seen from
$$
x\in{\mathbb{H}} \iff N_n^{-1} R_n^{} x + x^*_n \in {\mathbb{H}}_n := \{ y : (x_n - x_n^*)\cdot (y-x_n^*) >0 \} \subseteq \Omega;
$$
indeed, $\partial {\mathbb{H}}_n$ is the tangent plane to $\partial\Omega$ at $x_n^*$. This inclusion further shows that
\begin{align}\label{6:20}
\bigl\| {\tilde\phi} \bigr\|_{\dot H^1_D({\mathbb{H}})} = \bigl\| \phi_n \bigr\|_{\dot H^1_D({\mathbb{H}}_n)} = \bigl\| \phi_n \bigr\|_{\dot H^1_D(\Omega)}.
\end{align}
To prove claim \eqref{nontri} it thus suffices to show a lower bound on $\| {\tilde\phi} \|_{\dot H^1_D({\mathbb{H}})}$. To this end, let
$h:=P_1^{{\mathbb{H}}}\delta_{d_\infty e_3}$. From the Bernstein inequality we have
\begin{align}\label{6:40}
\|(-\Delta_{{\mathbb{H}}})^{-\frac 12}h\|_{L^2({\mathbb{H}})}\lesssim 1.
\end{align}
In particular, $h\in \dot H^{-1}_D({\mathbb{H}})$. Now let $\tilde x_n:= N_nR_n^{-1}(x_n-x_n^*)$; by hypothesis, $\tilde x_n \to d_\infty e_3$. Using Proposition~\ref{P:converg} we obtain
\begin{align*}
\langle \tilde \phi,h\rangle
&=\lim_{n\to\infty}\Bigl\{\langle g_n,P_1^{\Omega_n}\delta_{\tilde x_n}\rangle + \langle g_n,[P_{1}^{{\mathbb{H}}}-P_1^{\Omega_n}]\delta_{d_\infty e_3}\rangle
+ \langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle\Bigr\}\\
&=\lim_{n\to\infty}\Bigl\{N_n^{-\frac12}(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)+\langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle\Bigr\}.
\end{align*}
Arguing as in the treatment of \eqref{6:37} and applying \eqref{elliptic est} to $v(x)=(P_1^{\Omega_n}g_n)(x+\tilde x_n)$ with $R=\frac12 N_nd(x_n)$, for $n$ sufficiently large we obtain
\begin{align*}
|\langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle|&\lesssim A \bigl(d_\infty^{-1}+d_\infty\bigr) |d_\infty e_3- \tilde x_n|\to 0\qtq{as} n\to \infty.
\end{align*}
Therefore, we have
\begin{align*}
|\langle \tilde \phi,h\rangle|\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac78},
\end{align*}
which together with \eqref{6:20} and \eqref{6:40} yields \eqref{nontri}.
Claim \eqref{dech} is elementary; indeed,
\begin{align*}
\|f_n\|_{\dot H^1_D(\Omega)}^2 -\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2 &= 2\langle f_n, \phi_n\rangle_{\dot H^1_D(\Omega)}-\|\phi_n\|_{\dot H^1_D(\Omega)}^2\\
&= 2\langle g_n,\, {\tilde\phi}\rangle_{\dot H^1_D(\Omega_n)}-\|\tilde\phi\|_{\dot H^{1}_D({\mathbb{H}})}^2 \to \|\tilde\phi\|_{\dot H^{1}_D({\mathbb{H}})}^2.
\end{align*}
The proof of \eqref{dect} differs little from the cases treated previously: One uses the Rellich Lemma and Corollary~\ref{C:LF} to
show $e^{it\Delta_{\Omega_n}} g_n\to e^{it\Delta_{\mathbb{H}}}{\tilde\phi}$ almost everywhere and then the Fatou Lemma of Br\'ezis and Lieb to
see that
$$
\text{LHS\eqref{dect}} = \| e^{it\Delta_{\mathbb{H}}} {\tilde\phi} \|_{L^{10}({\mathbb{R}}\times{\mathbb{H}})}^{10}.
$$
The lower bound on this quantity comes from pairing with $h$; see Cases~1 and~2.
\end{proof}
To prove a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ we will also need the following weak convergence results.
\begin{lem}[Weak convergence]\label{L:converg}
Assume $\Omega_n\equiv\Omega$ or $\{\Omega_n\}$ conforms to one of the three scenarios considered in Proposition~\ref{P:convdomain}. Let $f\in C_c^\infty(\tlim \Omega_n)$ and let $\{(t_n,x_n)\}_{n\geq 1}\subset{\mathbb{R}}\times{\mathbb{R}}^3$. Then
\begin{align}\lambdaabel{lc}
e^{it_n\Delta_{\Omega_n}}f(x+x_n) \rightharpoonup 0 \quad \text{weakly in } \dot H^1({\mathbb{R}}^3) \quad \text{as } n\to \infty
\end{align}
whenever $|t_n|\to \infty$ or $|x_n|\to \infty$.
\end{lem}
\begin{proof}
By the definition of $\tlim\Omega_n$, we have $f\in C^\infty_c(\Omega_n)$ for $n$ sufficiently large. Let $\Omega_\infty$ denote the limit of $\Omega_n$ in the sense of Definition~\ref{D:converg}.
We first prove \eqref{lc} when $t_n\to \infty$; the proof when $t_n\to-\infty$ follows symmetrically. Let $\psi\in C_c^\infty({\mathbb{R}}^3)$ and let
$$
F_n(t):=\lambdaangle e^{it\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}.
$$
To establish \eqref{lc}, we need to show
\begin{align}\lambdaabel{lc1}
F_n(t_n)\to 0 \qtq{as} n\to \infty.
\end{align}
We compute
\begin{align*}
|\partial_t F_n(t)|&= \bigl|\lambdaangle i\Delta_{\Omega_n} e^{it\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \bigr|\\
&= \bigl|\lambdaangle \Delta_{\Omega_n} e^{it\Delta_{\Omega_n}}f(x+x_n), \Delta \psi\rangle_{L^2({\mathbb{R}}^3)} \bigr|
\lambdaesssim \|f\|_{\dot H^2} \|\psi\|_{\dot H^2}\lambdaesssim_{f,\psi}1.
\end{align*}
On the other hand,
\begin{align*}
\|F_n\|_{L_t^{\frac{10} 3}([t_n,\infty))}
&\lambdaesssim \|e^{it\Delta_{\Omega_n}}f\|_{L_{t,x}^{\frac{10}3}([t_n,\infty)\times{\mathbb{R}}^3)}\|\Delta \psi\|_{L_x^{\frac{10}7}({\mathbb{R}}^3)}\\
&\lambdaesssim_\psi \|[e^{it\Delta_{\Omega_n}}-e^{it\Delta_{\Omega_\infty}}]f\|_{L_{t,x}^{\frac{10}3}([0,\infty)\times{\mathbb{R}}^3)}
+\|e^{it\Delta_{\Omega_\infty}}f\|_{L_{t,x}^{\frac{10}3}([t_n,\infty)\times{\mathbb{R}}^3)}.
\end{align*}
The first term converges to zero as $n\to \infty$ by Theorem~\ref{T:LF}, while convergence to zero of the second term follows from the Strichartz inequality combined with the dominated convergence theorem. Putting everything together, we derive \eqref{lc1} and so \eqref{lc} when $t_n\to \infty$.
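For the reader's convenience, we indicate how the two bounds combine (a standard argument which we write out; the implicit constant in $|\partial_t F_n|\lesssim_{f,\psi}1$ is uniform in $n$): if $|F_n(t_n)|=\delta_n$, then $|F_n(t)|\geq \tfrac{\delta_n}2$ for $t\in[t_n,t_n+c\delta_n]$ with $c>0$ independent of $n$, whence
\begin{align*}
\delta_n^{\frac{13}{10}}\lesssim \Bigl(\int_{t_n}^{t_n+c\delta_n}\bigl(\tfrac{\delta_n}2\bigr)^{\frac{10}3}\,dt\Bigr)^{\frac3{10}}\le \|F_n\|_{L_t^{\frac{10}3}([t_n,\infty))}\to 0 \qtq{as} n\to\infty,
\end{align*}
and so $F_n(t_n)\to 0$.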
Now assume $\{t_n\}_{n\geq 1}$ is bounded, but $|x_n|\to \infty$ as $n\to \infty$. Without loss of generality, we may assume $t_n\to t_\infty\in {\mathbb{R}}$
as $n\to \infty$. Let $\psi\in C_c^\infty({\mathbb{R}}^3)$ and $R>0$ such that $\supp\psi\subseteq B(0,R)$. We write
\begin{align*}
\lambdaangle e^{it_n\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}
&=\lambdaangle e^{it_\infty\Delta_{\Omega_\infty}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \\
&\quad +\lambdaangle [e^{it_\infty\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_\infty}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\\
&\quad +\lambdaangle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}.
\end{align*}
By the Cauchy--Schwarz inequality,
\begin{align*}
\bigl|\lambdaangle e^{it_\infty\Delta_{\Omega_\infty}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|
\lambdaesssim \|e^{it_\infty\Delta_{\Omega_{\infty}}}f\|_{L^2(|x|\ge |x_n|-R)} \|\Delta \psi\|_{L^2({\mathbb{R}}^3)},
\end{align*}
which converges to zero as $n\to \infty$, by the monotone convergence theorem. By duality and Proposition~\ref{P:converg},
\begin{align*}
\bigl|\bigl\langle [e^{it_\infty\Delta_{\Omega_n}}-&e^{it_\infty\Delta_{\Omega_\infty}}]f(x+x_n), \psi\bigr\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|\\
&\lesssim \|[e^{it_\infty\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_\infty}}]f\|_{\dot H^{-1}({\mathbb{R}}^3)} \|\Delta\psi\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to \infty.
\end{align*}
Finally, by the fundamental theorem of calculus,
\begin{align*}
\bigl|\lambdaangle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|
\lambdaesssim |t_n-t_\infty| \|\Delta_{\Omega_n} f\|_{L^2} \|\Delta \psi\|_{L^2},
\end{align*}
which converges to zero as $n\to \infty$. Putting everything together we deduce
$$
\lambdaangle e^{it_n\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to \infty.
$$
This completes the proof of the lemma.
\end{proof}
\begin{lem}[Weak convergence]\lambdaabel{L:compact} Assume $\Omega_n\equiv\Omega$ or $\{\Omega_n\}$ conforms to one of the three scenarios considered in Proposition~\ref{P:convdomain}. Let $f_n\in \dot H_D^1(\Omega_n)$ be such that $f_n\rightharpoonup 0$ weakly in $\dot H^1({\mathbb{R}}^3)$ and let $t_n\to t_\infty\in {\mathbb{R}}$. Then
\begin{align*}
e^{it_n\Delta_{\Omega_n}} f_n\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3).
\end{align*}
\end{lem}
\begin{proof} For any $\psi\in C_c^{\infty}({\mathbb{R}}^3)$,
\begin{align*}
\bigl|\lambdaangle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f_n, \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|
&\lambdaesssim \|[e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f_n\|_{L^2} \|\Delta\psi\|_{L^2}\\
&\lambdaesssim |t_n-t_\infty|^{\frac12} \|(-\Delta_{\Omega_n})^{\frac12}f_n\|_{L^2} \|\Delta\psi\|_{L^2},
\end{align*}
which converges to zero as $n\to \infty$. To obtain the last inequality above, we have used the spectral theorem together with the elementary inequality $|e^{it_n\lambda}-e^{it_\infty\lambda}|\lesssim |t_n-t_\infty|^{1/2}\lambda^{1/2}$ for $\lambda\geq 0$, which follows from $|e^{i\theta}-1|\le\min\{2,|\theta|\}\lesssim|\theta|^{1/2}$. Thus, we are left to prove
\begin{align*}
\int_{{\mathbb{R}}^3} \nabla \bigl[e^{it_\infty\Delta_{\Omega_n}} f_n\bigr](x) \nabla \bar\psi(x)\,dx
= \int_{{\mathbb{R}}^3} e^{it_\infty\Delta_{\Omega_n}} f_n (x) (-\Delta\bar\psi)(x)\, dx \to 0 \qtq{as} n\to \infty
\end{align*}
for all $\psi\in C_c^\infty({\mathbb{R}}^3)$. By Sobolev embedding,
$$
\|e^{it_\infty\Delta_{\Omega_n}} f_n\|_{L^6}\lambdaesssim \|f_n\|_{\dot H^1({\mathbb{R}}^3)}\lambdaesssim 1 \qtq{uniformly in} n\geq 1,
$$
and so using a density argument and the dominated convergence theorem (using the fact that the measure of $\Omega_n\triangle(\tlim \Omega_n)$ converges to zero), it suffices to show
\begin{align}\lambdaabel{9:38am}
\int_{{\mathbb{R}}^3} e^{it_\infty\Delta_{\Omega_n}} f_n (x) \bar\psi(x)\, dx \to 0 \qtq{as} n\to \infty
\end{align}
for all $\psi\in C_c^\infty(\tlim \Omega_n)$. To see that \eqref{9:38am} is true, we write
\begin{align*}
\lambdaangle e^{it_\infty\Delta_{\Omega_n}} f_n, \psi \rangle =\lambdaangle f_n, [e^{-it_\infty\Delta_{\Omega_n}} -e^{-it_\infty\Delta_{\Omega_\infty}}]\psi \rangle
+ \lambdaangle f_n,e^{-it_\infty\Delta_{\Omega_\infty}}\psi \rangle,
\end{align*}
where $\Omega_\infty$ denotes the limit of $\Omega_n$ in the sense of Definition~\ref{D:converg}. The first term converges to zero by Proposition~\ref{P:converg}. As $f_n\rightharpoonup 0$ in $\dot H^1({\mathbb{R}}^3)$, to see that the second term converges to zero, we merely need to prove that $e^{-it_\infty\Delta_{\Omega_\infty}}\psi\in \dot H^{-1}({\mathbb{R}}^3)$ for all $\psi\in C_c^\infty(\tlim \Omega_n)$. Toward this end, we use the Mikhlin multiplier theorem and Bernstein's inequality to estimate
\begin{align*}
\|e^{-it_\infty\Delta_{\Omega_\infty}}\psi\|_{\dot H^{-1}({\mathbb{R}}^3)}
&\lambdaesssim\|e^{-it_\infty\Delta_{\Omega_\infty}}P_{\lambdaeq 1}^{\Omega_\infty} \psi\|_{L^{\frac65}}+\sum_{N\geq 1}\|e^{-it_\infty\Delta_{\Omega_\infty}}P_N^{\Omega_\infty}\psi\|_{L^{\frac65}}\\
&\lambdaesssim\|\psi\|_{L^{\frac65}}+\sum_{N\geq 1} \lambdaangle N^2t_\infty\rangle^2\|P_N^{\Omega_\infty}\psi\|_{L^{\frac65}}\\
&\lambdaesssim \|\psi\|_{L^{\frac65}} + \|(-\Delta_{\Omega_\infty})^3\psi\|_{L^{\frac65}}\lambdaesssim_\psi 1.
\end{align*}
This completes the proof of the lemma.
\end{proof}
Finally, we turn to the linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ in $\dot H^1_D(\Omega)$. This is proved by the inductive application of Proposition~\ref{P:inverse Strichartz}. To handle the variety of cases in as systematic a way as possible, we introduce operators $G_n^j$ that act unitarily in $\dot H^1({\mathbb{R}}^3)$.
\begin{thm}[$\dot H^1_D(\Omega)$ linear profile decomposition]\lambdaabel{T:LPD}
Let $\{f_n\}$ be a bounded sequence in $\dot H^1_D(\Omega)$. After passing to a subsequence, there exist $J^*\in \{0, 1, 2, \dots,\infty\}$,
$\{\phi_n^j\}_{j=1}^{J^*}\subset \dot H_D^1(\Omega)$, $\{\lambda_n^j\}_{j=1}^{J^*}\subset(0,\infty)$, and $\{(t_n^j,x_n^j)\}_{j=1}^{J^*}\subset {\mathbb{R}}\times \Omega$
conforming to one of the following four cases for each $j$:
\begin{CI}
\item Case 1: $\lambdaambda_n^j\equiv \lambdaambda_\infty^j$, $x_n^j\to x_\infty^j$, and there is a $\phi^j\in \dot H^1_D(\Omega)$ so that
\begin{align*}
\phi_n^j = e^{it_n^j (\lambdaambda_n^j)^2\Delta_\Omega}\phi^j.
\end{align*}
We define $[G_n^j f] (x) := (\lambdaambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambdaambda_n^j} \bigr)$ and $\Omega_n^j:=(\lambdaambda_n^j)^{-1}(\Omega - \{x_n^j\})$.
\item Case 2: $\lambdaambda_n^j\to \infty$, $-(\lambdaambda_n^j)^{-1}x_n^j \to x^j_\infty\in{\mathbb{R}}^3$, and there is a $\phi^j\in \dot H^1({\mathbb{R}}^3)$ so that
\begin{align*}
\phi_n^j(x)= G_n^j \bigl[e^{it_n^j \Delta_{\Omega_n^j}} (\chi_n^j \phi^j)\bigr] (x) \qtq{with} [G_n^j f] (x) := (\lambdaambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambdaambda_n^j} \bigr),
\end{align*}
$\Omega_n^j := (\lambdaambda_n^j)^{-1}(\Omega - \{x_n^j\})$, $\chi_n^j(x)=\chi(\lambdaambda_n^jx+x_n^j)$, and $\chi(x)=\Theta(\tfrac{d(x)}{\diam(\Omega^c)})$.
\item Case 3: $\frac{d(x_n^j)}{\lambdaambda_n^j}\to \infty$ and there is a $\phi^j\in \dot H^1({\mathbb{R}}^3)$ so that
\begin{align*}
\phi_n^j(x)= G_n^j \bigl[e^{it_n^j \Delta_{\Omega_n^j}} (\chi_n^j \phi^j)\bigr] (x) \qtq{with} [G_n^j f] (x) := (\lambdaambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambdaambda_n^j} \bigr),
\end{align*}
$\Omega_n^j := (\lambdaambda_n^j)^{-1}(\Omega - \{x_n^j\})$, and $\chi_n^j(x)=1-\Theta(\tfrac{\lambdaambda_n^j|x|}{d(x_n^j)})$.
\item Case 4: $\lambdaambda_n^j\to 0$, $\frac{d(x_n^j)}{\lambdaambda_n^j}\to d_\infty^j>0$, and there is a $\phi^j\in \dot H^1_D({\mathbb{H}})$ so that
\begin{align*}
\phi_n^j(x)= G_n^j \bigl[ e^{it_n^j \Delta_{\Omega_n^j}} \phi^j\bigr] (x) \qtq{with} [G_n^j f](x) := (\lambdaambda_n^j)^{-\frac12} f\bigl(\tfrac{(R_n^j)^{-1}(x-(x_n^j)^*)}{\lambdaambda_n^j}\bigr),
\end{align*}
$\Omega_n^j := (\lambdaambda_n^j)^{-1}(R_n^j)^{-1}(\Omega - \{(x_n^j)^*\})$, $(x_n^j)^*\in \partial\Omega$ is defined by $d(x_n^j)=|x_n^j-(x_n^j)^*|$,
and $R_n^j\in SO(3)$ satisfies $R_n^j e_3=\tfrac{x_n^j-(x_n^j)^*}{|x_n^j-(x_n^j)^*|}$.
\end{CI}
Further, for any finite $0\lambdae J\lambdae J^*$, we have the decomposition
\begin{align*}
f_n=\sum_{j=1}^ J \phi_n^j +w_n^J,
\end{align*}
with $w_n^J\in \dot H^1_D(\Omega)$ satisfying
\begin{gather}
\lambdaim_{J\to J^*} \lambdaimsup_{n\to\infty} \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}=0, \lambdaabel{E:LP1}\\
\lambdaim_{n\to\infty}\Bigl\{\|f_n\|_{\dot H^1_D(\Omega)}^2-\sum_{j=1}^J\|\phi_n^j\|_{\dot H_D^1(\Omega)}^2-\|w_n^J\|_{\dot H^1_D(\Omega)}^2\Bigr\}=0, \lambdaabel{E:LP2}\\
\lambdaim_{n\to\infty}\Bigl\{\|f_n\|_{L^6(\Omega)}^6-\sum_{j=1}^J \|\phi_n^j\|_{L^6(\Omega)}^6-\|w_n^J\|_{L^6(\Omega)}^6\Bigr\}=0, \lambdaabel{E:LP3}\\
e^{it_n^J\Delta_{\Omega_n^J}}(G_n^J)^{-1}w_n^J\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3), \lambdaabel{E:LP4}
\end{gather}
and for all $j\neq k$ we have the asymptotic orthogonality property
\begin{align}\lambdaabel{E:LP5}
\lambdaim_{n\to\infty} \ \frac{\lambdaambda_n^j}{\lambdaambda_n^k}+\frac{\lambdaambda_n^k}{\lambdaambda_n^j}+
\frac{|x_n^j-x_n^k|^2}{\lambdaambda_n^j\lambdaambda_n^k}+\frac{|t_n^j(\lambdaambda_n^j)^2-t_n^k(\lambdaambda_n^k)^2|}{\lambdaambda_n^j\lambdaambda_n^k}=\infty.
\end{align}
Lastly, we may additionally assume that for each $j$ either $t_n^j\equiv 0$ or $t_n^j\to \pm \infty$.
\end{thm}
\begin{proof} We will proceed inductively and extract one bubble at a time. To start with, we set $w_n^0:=f_n$. Suppose
we have a decomposition up to level $J\geq 0$ obeying \eqref{E:LP2} through \eqref{E:LP4}. (Note that conditions \eqref{E:LP1}
and \eqref{E:LP5} will be verified at the end.) Passing to a subsequence if necessary, we set
\begin{align*}
A_J:=\lambdaim_{n\to\infty} \|w_n^J\|_{\dot H^1_D(\Omega)} \qtq{and} {\varepsilon}_J:=\lambdaim_{n\to \infty} \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}.
\end{align*}
If ${\varepsilon}_J=0$, we stop and set $J^*=J$. If not, we apply the inverse Strichartz inequality Proposition \ref{P:inverse Strichartz} to
$w_n^J$. Passing to a subsequence in $n$ we find $\{\phi_n^{J+1}\}\subset \dot H^1_D(\Omega)$, $\{\lambdaambda_n^{J+1}\}\subset 2^{\mathbb Z}$, and
$\{(t_n^{J+1}, x_n^{J+1})\}\subset{\mathbb{R}}\times\Omega$, which conform to one of the four cases listed in the theorem. Note that we rename
the parameters given by Proposition~\ref{P:inverse Strichartz} as follows: $\lambdaambda_n^{J+1} := N_n^{-1}$ and $t_n^{J+1} := - N_n^{2} t_n$.
The profiles are defined as weak limits in the following way:
\begin{align*}
\tilde \phi^{J+1}=\wlim_{n\to\infty}(G_n^{J+1})^{-1} \bigl[ e^{-it_n^{J+1}(\lambdaambda_n^{J+1})^2\Delta_\Omega}w_n^J\bigr]
=\wlim_{n\to\infty}e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}[(G_n^{J+1})^{-1}w_n^J],
\end{align*}
where $G_n^{J+1}$ is defined in the statement of the theorem. In Cases 2, 3, 4, we define $\phi^{J+1}:=\tilde \phi^{J+1}$, while in Case 1,
$$
\phi^{J+1}(x):= G_\infty^{J+1}\tilde \phi^{J+1}(x):=(\lambdaambda_\infty^{J+1})^{-\frac12} \tilde \phi^{J+1}\bigl(\tfrac{x-x_\infty^{J+1}}{\lambdaambda_\infty^{J+1}} \bigr).
$$
Finally, $\phi_n^{J+1}$ is defined as in the statement of the theorem. Note that in Case 1, we can rewrite this definition as
$$
\phi_n^{J+1}=e^{it_n^{J+1}(\lambdaambda_n^{J+1})^2\Delta_\Omega}\phi^{J+1}=G_\infty^{J+1}e^{it_n^{J+1}\Delta_{\Omega_\infty^{J+1}}}\tilde \phi^{J+1},
$$
where $\Omega_\infty^{J+1}:=(\lambdaambda_\infty^{J+1})^{-1}(\Omega - \{x_\infty^{J+1}\})$. Note that in all four cases,
\begin{align}\lambdaabel{strong}
\lambdaim_{n\to \infty}\|e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}(G_n^{J+1})^{-1}\phi_n^{J+1}-\tilde \phi^{J+1}\|_{\dot H^1({\mathbb{R}}^3)}=0;
\end{align}
see also \eqref{E:no deform} and \eqref{E:no deform 3} for Cases 2 and 3.
Now define $w_n^{J+1}:=w_n^J-\phi_n^{J+1}$. By \eqref{strong} and the construction of $\tilde \phi^{J+1}$ in each case,
\begin{align*}
e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}(G_n^{J+1})^{-1}w_n^{J+1} \rightharpoonup 0 \quad \text{weakly in }\dot H^1({\mathbb{R}}^3).
\end{align*}
This proves \eqref{E:LP4} at the level $J+1$. Moreover, from Proposition \ref{P:inverse Strichartz} we also have
\begin{align*}
\lambdaim_{n\to\infty}\Bigl\{\|w_n^J\|_{\dot H^1_D(\Omega)}^2-\|\phi_n^{J+1}\|_{\dot H^1_D(\Omega)}^2-\|w_n^{J+1}\|_{\dot H^1_D(\Omega)}^2\Bigr\}=0.
\end{align*}
This together with the inductive hypothesis gives \eqref{E:LP2} at the level $J+1$. A similar argument establishes \eqref{E:LP3} at the same level.
From Proposition~\ref{P:inverse Strichartz}, passing to a further subsequence we have
\begin{equation}\lambdaabel{new a,eps}
\begin{aligned}
&A_{J+1}^2=\lambdaim_{n\to\infty}\|w_n^{J+1}\|_{\dot H^1_D(\Omega)}^2\lambdae A_J^2\Bigl[1 -C \bigl(\tfrac{{\varepsilon}_J}{A_J}\bigr)^{\frac{15}4}\Bigr]\lambdaeq A_J^2\\
&{\varepsilon}_{J+1}^{10}=\lambdaim_{n\to\infty}\|e^{it\Delta_{\Omega}}w_n^{J+1}\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}^{10}\lambdae{\varepsilon}_J^{10}\Bigl[1-C\bigl(\tfrac{{\varepsilon}_J}{A_J}\bigr)^{\frac{35}4}\Bigr].
\end{aligned}
\end{equation}
If ${\varepsilon}_{J+1}=0$ we stop and set $J^*=J+1$; moreover, \eqref{E:LP1} is automatic. If ${\varepsilon}_{J+1}>0$ we continue the induction. If the algorithm does not terminate in finitely many steps, we set $J^*=\infty$; in this case, \eqref{new a,eps} implies ${\varepsilon}_J\to 0$ as $J\to \infty$ and so \eqref{E:LP1} follows.
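For completeness, let us indicate why \eqref{new a,eps} forces ${\varepsilon}_J\to 0$ in this last scenario (a short argument we spell out): the first line of \eqref{new a,eps} shows that $A_J$ is non-increasing, so if ${\varepsilon}_J\geq{\varepsilon}_0>0$ for every $J$, then
\begin{align*}
{\varepsilon}_{J+1}^{10}\le{\varepsilon}_J^{10}\Bigl[1-C\bigl(\tfrac{{\varepsilon}_0}{A_0}\bigr)^{\frac{35}4}\Bigr]\qtq{for all} J,
\end{align*}
and iterating this bound drives ${\varepsilon}_J$ to zero, a contradiction.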
Next we verify the asymptotic orthogonality condition \eqref{E:LP5}. We argue by contradiction. Assume \eqref{E:LP5} fails to be true for some pair $(j,k)$. Without loss of generality, we may assume $j<k$ and \eqref{E:LP5} holds for all pairs $(j,l)$ with $j<l<k$. Passing to a subsequence, we may assume
\begin{align}\lambdaabel{cg}
\frac{\lambdaambda_n^j}{\lambdaambda_n^k}\to \lambdaambda_0\in (0,\infty), \quad \frac{x_n^j-x_n^k}{\sqrt{\lambdaambda_n^j\lambdaambda_n^k}}\to x_0, \qtq{and}
\frac{t_n^j(\lambdaambda_n^j)^2-t_n^k(\lambdaambda_n^k)^2}{\lambdaambda_n^j\lambdaambda_n^k}\to t_0.
\end{align}
From the inductive relation
\begin{align*}
w_n^{k-1}=w_n^j-\sum_{l=j+1}^{k-1}\phi_n^l
\end{align*}
and the definition for $\tilde \phi^k$, we obtain
\begin{align}
\tilde \phi^k&=\wlim_{n\to\infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^{k-1}]\notag\\
&=\wlim_{n\to\infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^j]\lambdaabel{tp1}\\
&\quad-\sum_{l=j+1}^{k-1} \wlim_{n\to \infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}\phi_n^l]\lambdaabel{tp2}.
\end{align}
We will prove that these weak limits are zero and so obtain a contradiction to the nontriviality of $\tilde \phi^k$.
We rewrite \eqref{tp1} as follows
\begin{align*}
e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^j]
&=e^{-it_n^k\Delta_{\Omega_n^k}}(G_n^k)^{-1}G_n^je^{it_n^j\Delta_{\Omega_n^j}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}w_n^j]\\
&=(G_n^k)^{-1}G_n^je^{i\bigl(t_n^j-t_n^k\tfrac{(\lambdaambda_n^k)^2}{(\lambdaambda_n^j)^2}\bigr)\Delta_{{\Omega_n^j}}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}w_n^j].
\end{align*}
Note that by \eqref{cg},
\begin{align*}
t_n^j-t_n^k\frac{(\lambdaambda_n^k)^2}{(\lambdaambda_n^j)^2}=\frac{t_n^j(\lambdaambda_n^j)^2-t_n^k(\lambdaambda_n^k)^2}{\lambdaambda_n^j\lambdaambda_n^k}\cdot\frac{\lambdaambda_n^k}
{\lambdaambda_n^j}\to \frac{t_0}{\lambdaambda_0}.
\end{align*}
Using this together with \eqref{E:LP4}, Lemma~\ref{L:compact}, and the fact that the adjoints of the unitary operators $(G_n^k)^{-1}G_n^j$ converge strongly, we obtain $\eqref{tp1}=0$.
To complete the proof of \eqref{E:LP5}, it remains to show $\eqref{tp2}=0$. For all $j<l<k$ we write
\begin{align*}
e^{-it_n^k{\Delta_{\Omega_n^k}}}(G_n^k)^{-1}\phi_n^l
=(G_n^k)^{-1}G_n^je^{i\bigl(t_n^j-t_n^k\tfrac{(\lambdaambda_n^k)^2}{(\lambdaambda_n^j)^2}\bigr)\Delta_{{\Omega_n^j}}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}\phi_n^l].
\end{align*}
Arguing as for \eqref{tp1}, it thus suffices to show
\begin{align*}
e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}\phi_n^l\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3).
\end{align*}
Using a density argument, this reduces to
\begin{align}\lambdaabel{need11}
I_n:=e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}G_n^le^{it_n^l\Delta_{\Omega_n^l}}\phi\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3),
\end{align}
for all $\phi\in C_c^\infty(\tlim \Omega_n^l)$. When $l$ conforms to Case 1, this reduction also uses the fact that $(G_n^l)^{-1} G_\infty^l$ converges strongly to the identity.
Depending on which cases $j$ and $l$ fall into, we can rewrite $I_n$ as follows:
\begin{CI}
\item Case a): If both $j$ and $l$ conform to Case 1, 2, or 3, then
\begin{align*}
I_n=\biggl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambdaambda_n^j}
{\lambdaambda_n^l}\bigr)^2\bigr)\Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{\lambdaambda_n^j x+x_n^j- x_n^l}{\lambdaambda_n^l}\biggr).
\end{align*}
\item Case b): If $j$ conforms to Case 1, 2, or 3 and $l$ to Case 4, then
\begin{align*}
I_n=\biggl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambdaambda_n^j}
{\lambdaambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{(R_n^l)^{-1}(\lambdaambda_n^j x+x_n^j-(x_n^l)^*)}{\lambdaambda_n^l}\biggr).
\end{align*}
\item Case c): If $j$ conforms to Case 4 and $l$ to Case 1, 2, or 3, then
\begin{align*}
I_n=\biggl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambdaambda_n^j}
{\lambdaambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{R_n^j\lambdaambda_n^j x+(x_n^j)^*-x_n^l}{\lambdaambda_n^l}\biggr).
\end{align*}
\item Case d): If both $j$ and $l$ conform to Case 4, then
\begin{align*}
I_n=\biggl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambdaambda_n^j}
{\lambdaambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{(R_n^l)^{-1}(R_n^j\lambdaambda_n^j x+(x_n^j)^*-(x_n^l)^*)}{\lambdaambda_n^l}\biggr).
\end{align*}
\end{CI}
We first prove \eqref{need11} when the scaling parameters are not comparable, that is,
\begin{align*}
\lambdaim_{n\to\infty}\frac{\lambdaambda_n^j}{\lambdaambda_n^l}+\frac{\lambdaambda_n^l}{\lambdaambda_n^j}=\infty.
\end{align*}
We treat all cases simultaneously. By Cauchy--Schwarz,
\begin{align*}
\bigl|\lambdaangle I_n, \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|
&\lambdaesssim \min\Bigl\{\|\Delta I_n\|_{L^2({\mathbb{R}}^3)}\|\psi\|_{L^2({\mathbb{R}}^3)}, \|I_n\|_{L^2({\mathbb{R}}^3)}\|\Delta\psi\|_{L^2({\mathbb{R}}^3)}\Bigr\}\\
&\lambdaesssim \min\biggl\{\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\|\Delta\phi\|_{L^2({\mathbb{R}}^3)}\|\psi\|_{L^2({\mathbb{R}}^3)}, \frac{\lambdaambda_n^l}{\lambdaambda_n^j}\|\phi\|_{L^2({\mathbb{R}}^3)}\|\Delta\psi\|_{L^2({\mathbb{R}}^3)}\biggr\},
\end{align*}
which converges to zero as $n\to \infty$, for all $\psi\in C_c^\infty({\mathbb{R}}^3)$. Thus, in this case $\eqref{tp2}=0$ and we get the desired contradiction.
Henceforth we may assume
\begin{align*}
\lambdaim_{n\to \infty}\frac{\lambdaambda_n^j}{\lambdaambda_n^l}=\lambdaambda_0\in (0,\infty).
\end{align*}
We now suppose the time parameters diverge, that is,
\begin{align*}
\lambdaim_{n\to \infty}\frac{|t_n^j(\lambdaambda_n^j)^2-t_n^l(\lambdaambda_n^l)^2|}{\lambdaambda_n^j\lambdaambda_n^l}=\infty;
\end{align*}
then we also have
\begin{align*}
\biggl|t_n^l-t_n^j\biggl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\biggr)^2\biggr|
=\frac{|t_n^l(\lambdaambda_n^l)^2-t_n^j(\lambdaambda_n^j)^2|}{\lambdaambda_n^l\lambdaambda_n^j}\cdot\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\to \infty \qtq{as} n\to\infty.
\end{align*}
We first discuss Case a). Under the above condition, \eqref{need11} follows from
\begin{align*}
\lambdaambda_0^{\frac 12}\biggl(e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\bigr)^2\bigr)\Delta_{\Omega_n^l}}\phi\biggr)\bigl(\lambdaambda_0 x+(\lambdaambda_n^l)^{-1}(x_n^j-x_n^l)\bigr)\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3),
\end{align*}
which is an immediate consequence of Lemma \ref{L:converg}. In Cases b), c), and d), the proof proceeds similarly since $SO(3)$ is a compact group; indeed, passing to a subsequence we may assume that $R_n^j\to R_0$ and $R_n^l\to R_1$, which places us in the same situation as in Case a).
Finally, we deal with the situation when
\begin{align}\lambdaabel{cdition}
\frac{\lambdaambda_n^j}{\lambdaambda_n^l}\to \lambdaambda_0, \quad \frac{t_n^l(\lambdaambda_n^l)^2-t_n^j(\lambdaambda_n^j)^2}{\lambdaambda_n^j\lambdaambda_n^l}\to t_0,
\qtq{but} \frac{|x_n^j-x_n^l|^2}{\lambdaambda_n^j\lambdaambda_n^l}\to \infty.
\end{align}
Then we also have $t_n^l-t_n^j(\lambdaambda_n^j)^2/(\lambdaambda_n^l)^2\to \lambdaambda_0t_0$. Thus, in Case a) it suffices to show
\begin{align}\lambdaabel{524}
\lambdaambda_0^{\frac 12}
e^{it_0\lambdaambda_0\Delta_{\Omega_n^l}}\phi(\lambdaambda_0x+y_n)\rightharpoonup0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3),
\end{align}
where
\begin{align*}
y_n:=\frac{x_n^j-x_n^l}{\lambdaambda_n^l}=\frac{x_n^j-x_n^l}{\sqrt{\lambdaambda_n^l\lambdaambda_n^j}}\sqrt{\frac{\lambdaambda_n^j}{\lambdaambda_n^l}}\to \infty \qtq{as} n\to \infty.
\end{align*}
The desired weak convergence \eqref{524} follows from Lemma \ref{L:converg}.
As $SO(3)$ is a compact group, in Case b) we can proceed similarly if we can show
\begin{align*}
\frac{|x_n^j-( x_n^l)^*|}{\lambdaambda_n^l}\to \infty \qtq{as} n\to \infty.
\end{align*}
But this is immediate from an application of the triangle inequality: for $n$ sufficiently large,
\begin{align*}
\frac{|x_n^j-(x_n^l)^*|}{\lambdaambda_n^l}\ge\frac{|x_n^j-x_n^l|}{\lambdaambda_n^l}-\frac{|x_n^l-(x_n^l)^*|}{\lambdaambda_n^l}
\ge\frac{|x_n^j-x_n^l|}{\lambdaambda_n^l}-2d_\infty^l\to \infty.
\end{align*}
Case c) can be treated symmetrically. Finally, in Case d) we note that for $n$ sufficiently large,
\begin{align*}
\frac{|(x_n^j)^*-(x_n^l)^*|}{\lambdaambda_n^l}&\ge\frac{|x_n^j-x_n^l|}{\lambdaambda_n^l}-\frac{|x_n^j-(x_n^j)^*|}{\lambdaambda_n^l}-\frac{|x_n^l-(x_n^l)^*|}{\lambdaambda_n^l}\\
&\ge\frac{|x_n^j-x_n^l|}{\sqrt{\lambdaambda_n^j\lambdaambda_n^l}}\sqrt{\frac{\lambdaambda_n^j}{\lambdaambda_n^l}}-\frac{d(x_n^j)}{\lambdaambda_n^j}\frac{\lambdaambda_n^j}{\lambdaambda_n^l}-\frac{d(x_n^l)}{\lambdaambda_n^l}\\
&\ge \frac12\sqrt{\lambdaambda_0}\frac{|x_n^j-x_n^l|}{\sqrt{\lambdaambda_n^j\lambdaambda_n^l}}-2\lambdaambda_0d_\infty^j-2d_\infty^l\to \infty \qtq{as} n\to \infty.
\end{align*}
The desired weak convergence follows again from Lemma \ref{L:converg}.
Finally, we prove the last assertion in the theorem regarding the behaviour of $t_n^j$. For each $j$, by passing to a subsequence we may assume $t_n^j\to t^j\in [-\infty, \infty]$. Using a standard diagonal argument, we may assume that the limit exists for all $j\ge 1$.
Given $j$, if $t^j=\pm\infty$, there is nothing more to be proved; thus, let us suppose that $t^j\in (-\infty, \infty)$.
We claim that we may redefine $t_n^j\equiv 0$, provided we replace the original profile $\phi^j$ by $\exp\{it^j\Delta_{\Omega^j_\infty}\} \phi^j$,
where $\Omega^j_\infty$ denotes the limiting geometry dictated by the case to which $j$ conforms. Underlying this claim is the
assertion that the errors introduced by these changes can be incorporated into $w_n^J$. The exact details of proving this depend
on the case to which $j$ conforms; however, the principal ideas are always the same. Let us give the details in Case~2 alone (for which
$\Omega_\infty^j={\mathbb{R}}^3$). Here, the claim boils down to the assertion that
\begin{align}\lambdaabel{s1}
\lambdaim_{n\to\infty} \bigl\|e^{it_n^j(\lambdaambda_n^j)^2\Delta_\Omega}[G_n^j(\chi_n^j\phi^j)]
- G_n^j(\chi_n^j e^{it^j\Delta_{{\mathbb{R}}^3}} \phi^j) \bigr\|_{\dot H^1_D(\Omega)} = 0.
\end{align}
To prove \eqref{s1} we first invoke Lemma~\ref{L:dense} to replace $\phi^j$ by a function $\psi\in C^\infty_c(\tlim \Omega^j_n)$.
Moreover, for such functions $\psi$ we have $\chi_n^j\psi=\psi$ for $n$ sufficiently large. Doing this and also changing variables, we reduce \eqref{s1} to
\begin{align}\lambdaabel{s1'}
\lambdaim_{n\to\infty} \bigl\|e^{it_n^j\Delta_{\Omega_n^j}} \psi
- \chi_n^j e^{it^j\Delta_{{\mathbb{R}}^3}} \psi \bigr\|_{\dot H^1_D(\Omega_n^j)} = 0.
\end{align}
We prove this by breaking it into three pieces.
First, by taking the time derivative, we have
$$
\bigl\|e^{it_n^j\Delta_{\Omega_n^j}} \psi - e^{it^j\Delta_{\Omega_n^j}}\psi \bigr\|_{\dot H^1({\mathbb{R}}^3)} \lambdaeq | t_n^j - t^j | \| \Delta \psi \|_{\dot H^1({\mathbb{R}}^3)},
$$
which converges to zero since $t_n^j \to t^j$. Secondly, we claim that
$$
e^{it^j\Delta_{\Omega_n^j}}\psi \to e^{it^j\Delta_{{\mathbb{R}}^3}}\psi \qtq{strongly in} \dot H^1({\mathbb{R}}^3) \qtq{as} n\to\infty.
$$
Indeed, the $\dot H^1({\mathbb{R}}^3)$ norms of both the proposed limit and all terms in the sequence are the same, namely, $\|\psi\|_{\dot H^1({\mathbb{R}}^3)}$.
Thus, strong convergence can be deduced from weak convergence, since in a Hilbert space weak convergence together with convergence of the norms implies norm convergence; the weak convergence itself follows from Proposition~\ref{P:converg}. The third
and final part of \eqref{s1'}, namely,
\begin{align*}
\bigl\| (1 - \chi_n^j) e^{it^j\Delta_{{\mathbb{R}}^3}} \psi \bigr\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to\infty,
\end{align*}
can be shown by direct computation using that $\lambdaambda_n^j\to\infty$; see the proof of \eqref{E:no deform}.
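For completeness, here is one way to carry out this computation (a sketch we supply; the argument in the cited proof of \eqref{E:no deform} may be organized differently). Since $\psi\in C_c^\infty({\mathbb{R}}^3)$, the function $F:=e^{it^j\Delta_{{\mathbb{R}}^3}}\psi$ is Schwartz; moreover, $1-\chi_n^j$ is supported in a set of measure $O((\lambda_n^j)^{-3})$ and $\|\nabla\chi_n^j\|_{L^2_x}\lesssim (\lambda_n^j)^{-\frac12}$ by scaling. Hence
\begin{align*}
\bigl\|(1-\chi_n^j)F\bigr\|_{\dot H^1({\mathbb{R}}^3)}\le \|(1-\chi_n^j)\nabla F\|_{L^2_x}+\|F\,\nabla\chi_n^j\|_{L^2_x}
\lesssim_F (\lambda_n^j)^{-\frac32}+(\lambda_n^j)^{-\frac12}\to 0 \qtq{as} n\to\infty.
\end{align*}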
This completes the proof of Theorem~\ref{T:LPD}.
\end{proof}
\section{Embedding of nonlinear profiles}\lambdaabel{S:Nonlinear Embedding}
The next step in the proof of Theorem~\ref{T:main} is to use the linear profile decomposition obtained in the previous section to derive a Palais--Smale condition for minimizing sequences of blowup solutions to \eqref{nls}. This essentially amounts to proving a nonlinear profile decomposition for solutions to $\text{NLS}_\Omega$; in the next section, we will prove this decomposition and then combine it with the stability result Theorem~\ref{T:stability} to derive the desired compactness for minimizing sequences of solutions. This leads directly to the Palais--Smale condition.
In order to prove a nonlinear profile decomposition for solutions to \eqref{nls}, we have to address the possibility that the nonlinear profiles we will extract are solutions to the energy-critical equation in \emph{different} limiting geometries. In this section, we will see how to embed these nonlinear profiles corresponding to different limiting geometries back inside $\Omega$. Specifically, we need to approximate these profiles \emph{globally in time} by actual solutions to \eqref{nls} that satisfy \emph{uniform} spacetime bounds. This section contains three theorems, one for each of the Cases 2, 3, and 4 discussed in the previous sections.
As in Section~\ref{S:LPD}, throughout this section $\Theta:{\mathbb{R}}^3\to [0,1]$ denotes a smooth function such that
\begin{align*}
\Theta(x)=\begin{cases}0, \ & |x|\le \frac 14, \\ 1, \ & |x|\geq \frac 12.
\end{cases}
\end{align*}
We will also use the following notation:
$$
\dot X^1(I\times\Omega):=L_{t,x}^{10}(I\times\Omega)\cap L_t^5\dot H^{1,\frac{30}{11}}_D(I\times\Omega).
$$
Our first result in this section concerns the scenario when the rescaled obstacles $\Omega_n^c$ are shrinking to a point (cf. Case 2 in Theorem~\ref{T:LPD}).
\begin{thm}[Embedding nonlinear profiles for shrinking obstacles]\lambdaabel{T:embed2} Let $\{\lambdaambda_n\}\subset 2^{\mathbb Z}$ be such that $\lambdaambda_n\to \infty$. Let $\{t_n\}\subset{\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to \pm\infty$. Let $\{x_n\}\subset \Omega$ be such that
$-\lambdaambda_n^{-1}x_n\to x_\infty\in {\mathbb{R}}^3$. Let
$\phi\in \dot H^1({\mathbb{R}}^3)$ and
\begin{align*}
\phi_n(x)=\lambdaambda_n^{-\frac12}e^{it_n\lambdaambda_n^2\Delta_\Omega}\bigl[(\chi_n\phi)\bigl(\tfrac{x-x_n}{\lambdaambda_n}\bigr)\bigr],
\end{align*}
where $\chi_n(x)=\chi(\lambdaambda_n x+x_n)$ with $\chi(x)=\Theta(\tfrac{d(x)}{\diam(\Omega^c)})$. Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_{\Omega}$ with initial data $v_n(0)=\phi_n$ which satisfies
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lambdaesssim 1,
\end{align*}
with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Furthermore, for every ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and
$\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$ such that for all $n\geq N_{\varepsilon}$ we have
\begin{align}\lambdaabel{dense2}
\|v_n(t-\lambdaambda_n^2 t_n,x+x_n)-\lambdaambda_n^{-\frac12}\psi_{\varepsilon}(\lambdaambda_n^{-2}t,\lambdaambda_n^{-1} x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}.
\end{align}
\end{thm}
\begin{proof}
The proof contains five steps. In the first step we construct global solutions to the energy-critical NLS in the limiting geometry ${\mathbb{R}}^3$ and we record some of their properties. In the second step we construct a candidate for the sought-after solution to $\text{NLS}_\Omega$. In the third step we prove that our candidate asymptotically matches the initial data $\phi_n$, while in the fourth step we prove that it is an approximate solution to \eqref{nls}. In the last step we invoke the stability result Theorem~\ref{T:stability} to find $v_n$ and then prove the approximation result \eqref{dense2}.
To ease notation, throughout the proof we will write $-\Delta=-\Delta_{{\mathbb{R}}^3}$.
\textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{R}}^3}$.
Let $\theta:=\frac 1{100}$. The construction of the solutions to $\text{NLS}_{{\mathbb{R}}^3}$ depends on the behaviour of $t_n$. If $t_n\equiv0$, let $w_n$ and
$w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $w_n(0)=\phi_{\lambdae \lambdaambda_n^{\theta}}$ and $w_\infty(0)=\phi$.
If instead $t_n\to \pm\infty$, let $w_n$ be the solution to $\text{NLS}_{{\mathbb{R}}^3}$ such that
\begin{align*}
\|w_n(t)-e^{it\Delta}\phi_{\lambdae \lambdaambda_n^{\theta}}\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \pm\infty.
\end{align*}
Similarly, we define $w_\infty$ as the solution to $\text{NLS}_{{\mathbb{R}}^3}$ such that
\begin{align}\lambdaabel{n24}
\|w_\infty(t)-e^{it\Delta}\phi\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \pm\infty.
\end{align}
By \cite{CKSTT:gwp}, in all cases $w_n$ and $w_\infty$ are global solutions and satisfy
\begin{align}\lambdaabel{258}
\|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lambdaesssim 1,
\end{align}
with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Moreover, by the perturbation theory described in that paper,
\begin{align}\lambdaabel{258'}
\lambdaim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}=0.
\end{align}
By Bernstein's inequality,
\begin{align*}
\|\phi_{\lambdae \lambdaambda_n^{\theta}}\|_{\dot H^s({\mathbb{R}}^3)}\lambdaesssim \lambdaambda_n^{\theta(s-1)} \qtq{for any} s\ge 1,
\end{align*}
and so the persistence of regularity result Lemma~\ref{lm:persistencer3} gives
\begin{align}\lambdaabel{persist2}
\||\nabla|^s w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lambdaesssim \lambdaambda_n^{\theta s} \qtq{for any} s\ge 0,
\end{align}
with the implicit constant depending solely on $\|\phi\|_{\dot H^1}$. Combining this with the Gagliardo--Nirenberg inequality
\begin{align*}
\|f\|_{L^\infty_x}\lambdaesssim \|\nabla f\|_{L^2_x}^{\frac 12}\|\Delta f\|_{L^2_x}^{\frac12},
\end{align*}
we obtain
\begin{align}\lambdaabel{259}
\||\nabla|^s w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}\lambdaesssim \lambdaambda_n^{\theta(s+\frac12)},
\end{align}
for all $s\geq 0$. Finally, using the equation we get
\begin{align}\lambdaabel{260}
\|\partial_t w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}\lambdae \|\Delta w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}^5
\lambdaesssim\lambdaambda_n^{\frac 52\theta}.
\end{align}
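For the reader's convenience, we record the exponent bookkeeping behind \eqref{259} and \eqref{260} (under the convention that the $\dot S^1$ norm controls $\nabla u$ in $L_t^\infty L_x^2$): applying the Gagliardo--Nirenberg inequality at each fixed time to $|\nabla|^s w_n$ and invoking \eqref{persist2} gives
\begin{align*}
\||\nabla|^s w_n\|_{L_{t,x}^\infty}\lesssim \|\nabla|\nabla|^s w_n\|_{L_t^\infty L_x^2}^{\frac12}\,\|\Delta|\nabla|^s w_n\|_{L_t^\infty L_x^2}^{\frac12}
\lesssim \lambda_n^{\frac{\theta s}2}\,\lambda_n^{\frac{\theta(s+1)}2}=\lambda_n^{\theta(s+\frac12)},
\end{align*}
which is \eqref{259}; taking $s=2$ for $\Delta w_n$ and $s=0$ for $\|w_n\|_{L_{t,x}^\infty}^5$ then yields \eqref{260}.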
\textbf{Step 2:} Constructing the approximate solution to $\text{NLS}_\Omega$.
As previously in this scenario, let $\Omega_n:=\lambdaambda_n^{-1}(\Omega-\{x_n\})$. The most naive way to embed $w_n(t)$ into $\Omega_n$
is to choose $\tilde v_n(t) = \chi_n w_n(t)$; however, closer investigation reveals that this is \emph{not} an approximate solution to $\text{NLS}_\Omega$,
unless one incorporates some high-frequency reflections off the obstacle, namely,
\begin{align*}
z_n(t):=i\int_0^t e^{i(t-s)\Delta_{\Omega_n}}(\Delta_{\Omega_n}\chi_n)w_n(s,-\lambdaambda_n^{-1}x_n)\,ds.
\end{align*}
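Equivalently (a reformulation we include for orientation), $z_n$ is the Duhamel solution of the forced linear equation
\begin{align*}
(i\partial_t+\Delta_{\Omega_n})z_n=-(\Delta_{\Omega_n}\chi_n)(x)\,w_n(t,-\lambda_n^{-1}x_n), \qquad z_n(0)=0,
\end{align*}
whose role is to cancel, to leading order, the commutator term $(\Delta\chi_n)w_n$ that arises when the Laplacian falls on the cutoff $\chi_n$; compare the expression for $e_n$ in Step~4 below.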
The source of these waves is nonresonant in spacetime because it varies slowly in time compared to the small spatial scale involved.
This allows us to estimate these reflected waves; indeed, we have the following lemma:
\begin{lem} For any $T>0$, we have
\begin{align}
\lambdaimsup_{n\to\infty}\|z_n\|_{\dot X^1([-T,T]\times\Omega_n)}&=0\lambdaabel{209}\\
\|(-\Delta_{\Omega_n})^{\frac s2}z_n\|_{L_t^\infty L_x^2([-T,T]\times\Omega_n)}&\lambdaesssim \lambdaambda_n^{s-\frac 32+\frac52\theta}(T+\lambdaambda_n^{-2\theta}) \qtq{for all} 0\lambdae s<\tfrac 32.\lambdaabel{209'}
\end{align}
\end{lem}
\begin{proof}
Throughout the proof, all spacetime norms will be over $[-T,T]\times\Omega_n$. We write
\begin{align*}
z_n(t)&=-\int_0^t [e^{it\Delta_{\Omega_n}}\partial_se^{-is\Delta_{\Omega_n}}\chi_n]w_n(s,-\lambdaambda_n^{-1}x_n) \,ds\\
&=-\chi_nw_n(t,-\lambdaambda_n^{-1}x_n)+e^{it\Delta_{\Omega_n}}[\chi_nw_n(0,-\lambdaambda_n^{-1}x_n)]\\
&\quad +\int_0^t [e^{i(t-s)\Delta_{\Omega_n}}\chi_n]\partial_sw_n(s,-\lambdaambda_n^{-1}x_n)\,ds.
\end{align*}
We first estimate the $L_t^5\dot H^{1,\frac{30}{11}}_D$ norm of $z_n$. Using the Strichartz inequality, the equivalence of Sobolev spaces
Theorem~\ref{T:Sob equiv}, \eqref{259}, and \eqref{260}, we get
\begin{align*}
\|z_n\|_{L_t^5\dot H_D^{1,\frac{30}{11}}}
&\lambdaesssim \|\nabla\chi_n(x) w_n(t, -\lambdaambda_n^{-1}x_n)\|_{L_t^5L_x^\frac{30}{11}} +\|\nabla \chi_n(x)w_n(0,-\lambdaambda_n^{-1}x_n)\|_{L^2_x}\\
&\quad+\|\nabla \chi_n(x) \partial_t w_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_t^1L_x^2}\\
&\lambdaesssim T^{\frac 15}\|\nabla \chi_n\|_{L^{\frac{30}{11}}_x}\|w_n\|_{L_{t,x}^\infty}+\|\nabla\chi_n\|_{L^2_x}\|w_n\|_{L_{t,x}^\infty}
+T\|\nabla\chi_n\|_{L^2_x}\|\partial_t w_n\|_{L_{t,x}^\infty}\\
&\lambdaesssim T^{\frac15}\lambdaambda_n^{-\frac{1}{10}+\frac{\theta}2}+\lambdaambda_n^{-\frac12+\frac{\theta}2}+T\lambdaambda_n^{-\frac 12+\frac 52\theta}\to 0\qtq{as} n\to \infty.
\end{align*}
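Here and below, the powers of $\lambda_n$ attached to the cutoff come from the scaling identity (which we record for convenience)
\begin{align*}
\|\nabla\chi_n\|_{L^p_x({\mathbb{R}}^3)}=\lambda_n^{1-\frac3p}\|\nabla\chi\|_{L^p_x({\mathbb{R}}^3)} \qtq{for} 1\le p\le\infty,
\end{align*}
combined with $\|w_n\|_{L_{t,x}^\infty}\lesssim\lambda_n^{\frac\theta2}$ and $\|\partial_t w_n\|_{L_{t,x}^\infty}\lesssim\lambda_n^{\frac52\theta}$ from \eqref{259} and \eqref{260}; note that $\nabla\chi$ is bounded and supported in the bounded set $\{d(x)\le\tfrac12\diam(\Omega^c)\}$, so the right-hand side is finite.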
Similarly, using also Sobolev embedding we obtain
\begin{align}\lambdaabel{zn in L10}
\|z_n\|_{L_{t,x}^{10}}&\lambdaesssim\|(-\Delta_{\Omega_n})^{\frac 12}z_n\|_{L_t^{10}L_x^{\frac{30}{13}}}\notag\\
&\lambdaesssim \|\nabla \chi_n(x) w_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_t^{10}L_x^{\frac{30}{13}}}+\|\nabla\chi_n(x)w_n(0,-\lambdaambda_n^{-1}x_n)\|_{L^2_x}\notag\\
&\quad+\|\nabla\chi_n(x)\partial_tw_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_t^1L_x^2}\notag\\
&\lambdaesssim T^{\frac 1{10}}\|\nabla\chi_n\|_{L_x^{\frac{30}{13}}}\|w_n\|_{L_{t,x}^\infty}+\|\nabla\chi_n\|_{L_x^2}\|w_n\|_{L_{t,x}^\infty}
+T\|\nabla\chi_n\|_{L_x^2}\|\partial_t w_n\|_{L_{t,x}^\infty}\notag\\
&\lambdaesssim T^{\frac1{10}}\lambdaambda_n^{-\frac{3}{10}+\frac{\theta}2}+\lambdaambda_n^{-\frac12+\frac{\theta}2}+T\lambdaambda_n^{-\frac 12+\frac 52\theta} \to 0\qtq{as} n\to \infty.
\end{align}
This proves \eqref{209}.
To establish \eqref{209'}, we argue as before and estimate
\begin{align*}
\|(-\Delta_{\Omega_n})^{\frac s2}z_n\|_{L_t^\infty L_x^2}
&\lambdaesssim \|(-\Delta)^{\frac s2}\chi_n w_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_t^{\infty}L_x^2}+\|(-\Delta)^{\frac s2}\chi_nw_n(0,-\lambdaambda_n^{-1}x_n)\|_{L_x^2}\\
&\quad+\|(-\Delta)^{\frac s2}\chi_n\partial_tw_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_t^1L_x^2}\\
&\lambdaesssim \|(-\Delta)^{\frac s2}\chi_n\|_{L_x^2}\|w_n\|_{L_{t,x}^\infty}+T\|(-\Delta)^{\frac s2}\chi_n\|_{L_x^2}\|\partial_t w_n\|_{L_{t,x}^\infty}\\
&\lambdaesssim \lambdaambda_n^{s-\frac 32+\frac{\theta}2}+T\lambdaambda_n^{s-\frac 32+\frac 52\theta}\\
&\lambdaesssim \lambdaambda_n^{s-\frac 32+\frac 52\theta}(T+\lambdaambda_n^{-2\theta}).
\end{align*}
This completes the proof of the lemma.
\end{proof}
We are now in a position to introduce the approximate solution
\begin{align*}
\tilde v_n(t,x):=\begin{cases}
\lambdaambda_n^{-\frac12}(\chi_nw_n+z_n)(\lambdaambda_n^{-2} t, \lambdaambda_n^{-1}(x-x_n)), &|t|\lambdae\lambdaambda_n^2 T,\\
e^{i(t-\lambdaambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambdaambda_n^2 T,x), & t>\lambdaambda_n^2 T, \\
e^{i(t+\lambdaambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambdaambda_n^2 T,x), & t<-\lambdaambda_n^2 T,
\end{cases}
\end{align*}
where $T>0$ is a parameter to be chosen later. Note that $\tilde v_n$ has finite scattering size. Indeed, using a change of variables, the Strichartz inequality, \eqref{258}, \eqref{209'}, and \eqref{zn in L10}, we get
\begin{align}\lambdaabel{tildevn2}
\|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}
&\lambdaesssim \|\chi_nw_n +z_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|(\chi_nw_n+z_n)(\pm T)\|_{\dot H^1_D(\Omega_n)}\notag\\
&\lambdaesssim \|w_n\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|z_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|\chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L_t^\infty L_x^2({\mathbb{R}}\times{\mathbb{R}}^3)}\notag\\
&\quad+\|\nabla \chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^6({\mathbb{R}}\times{\mathbb{R}}^3)}+\|(-\Delta_{\Omega_n})^{\frac12}z_n\|_{L_t^\infty L_x^2([-T,T]\times\Omega_n)}\notag\\
&\lambdaesssim 1+ T^{\frac1{10}}\lambdaambda_n^{-\frac{3}{10}+\frac{\theta}2}+\lambdaambda_n^{-\frac12+\frac{\theta}2}+T\lambdaambda_n^{-\frac 12+\frac 52\theta} .
\end{align}
\textbf{Step 3:} Asymptotic agreement of the initial data.
In this step, we show (cf. the smallness hypothesis in Theorem~\ref{T:stability})
\begin{align}\lambdaabel{match2}
\lambdaim_{T\to \infty}\lambdaimsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac 12}e^{it\Delta_\Omega}[\tilde v_n(\lambdaambda_n^2t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0.
\end{align}
We first prove \eqref{match2} in the case when $t_n\equiv0$. Using the Strichartz inequality, a change of variables, and H\"older, we estimate
\begin{align*}
\|(-\Delta_\Omega)^{\frac 12} e^{it\Delta_\Omega}&[\tilde v_n(0)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\
&\lambdaesssim \|(-\Delta_\Omega)^{\frac 12}[\tilde v_n(0)-\phi_n]\|_{L_x^2}\\
&\lambdaesssim \|\nabla[\chi_n\phi_{\lambdae\lambdaambda_n^{\theta}}-\chi_n\phi]\|_{L_x^2}\\
&\lambdaesssim \|\nabla\chi_n\|_{L_x^3}\|\phi_{\lambdae\lambdaambda_n^{\theta}}-\phi\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla[\phi_{\lambdae\lambdaambda_n^\theta}-\phi]\|_ {L_x^2},
\end{align*}
which converges to zero as $n\to \infty$.
It remains to prove \eqref{match2} in the case $t_n\to \infty$; the case $t_n\to -\infty$ can be treated similarly. As $T>0$ is fixed, for sufficiently large $n$ we have $t_n>T$ and so
\begin{align*}
\tilde v_n(\lambda_n^2t_n,x)&=e^{i(t_n-T)\lambda_n^2\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x)
=e^{i(t_n-T)\lambda_n^2\Delta_\Omega}\bigl[\lambda_n^{-\frac12}\bigl(\chi_nw_n+z_n\bigr)\bigl(T,\tfrac{x-x_n}{\lambda_n}\bigr)\bigr] .
\end{align*}
Thus by a change of variables and the Strichartz inequality,
\begin{align*}
\|(-\Delta_\Omega)^{\frac 12}& e^{it\Delta_\Omega}[\tilde v_n(\lambdaambda_n^2 t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\
&=\|(-\Delta_{\Omega_n})^{\frac 12}\{e^{i(t-T)\Delta_{\Omega_n}}(\chi_nw_n+z_n)(T)-e^{it\Delta_{\Omega_n}}(\chi_n\phi)\}\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\lambdaesssim \|(-\Delta_{\Omega_n})^{\frac 12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n-w_\infty)(T)]\|_{L_x^2}\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac 12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|
_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}.
\end{align*}
Using \eqref{258'} and \eqref{209'}, we see that
\begin{align*}
&\|(-\Delta_{\Omega_n})^{\frac 12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n-w_\infty)(T)]\|_{L_x^2}\\
&\lambdaesssim \lambdaambda_n^{-\frac 12+\frac 52\theta}(T+\lambdaambda_n^{-2\theta})+\|\nabla\chi_n\|_{L_x^3}\|w_n-w_\infty\|_{L_t^\infty L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla(w_n- w_\infty)\|_{L_t^\infty L_x^2},
\end{align*}
which converges to zero as $n\to \infty$. Thus, to establish \eqref{match2} we are left to prove
\begin{align}\lambdaabel{n23}
\lambdaim_{T\to \infty}\lambdaimsup_{n\to\infty}\|(-\Delta_{\Omega_n})^{\frac12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|
_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}=0.
\end{align}
Using the triangle and Strichartz inequalities, we obtain
\begin{align*}
\|(-\Delta_{\Omega_n})^{\frac12}&e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\lambdaesssim \|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n w_\infty(T))-\chi_n(-\Delta)^{\frac 12}w_\infty(T)\|_{L_x^2}\\
&\quad+\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_n(-\Delta)^{\frac12}w_{\infty}(T)]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\quad+\|e^{-iT\Delta}[\chi_n(-\Delta)^{\frac12}w_\infty(T)]-\chi_n(-\Delta)^{\frac12}\phi\|_{L_x^2}\\
&\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_n(-\Delta)^{\frac12}\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\quad+ \|(-\Delta_{\Omega_n})^{\frac12}(\chi_n\phi)-\chi_n(-\Delta)^{\frac12}\phi\|_{L_x^2}.
\end{align*}
The fact that the second and fourth terms above converge to zero as $n\to \infty$ follows from Theorem~\ref{T:LF} and the density in $L_x^2$ of $C_c^{\infty}$ functions supported in ${\mathbb{R}}^3\setminus\{x_\infty\}$. To see that the first and fifth terms above converge to zero, we note that for any $f\in \dot H^1({\mathbb{R}}^3)$,
\begin{align*}
\|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n f)-\chi_n(-\Delta)^{\frac12}f\|_{L_x^2}
&\lambdae \|(1-\chi_n)(-\Delta)^{\frac 12}f\|_{L_x^2}+\|(-\Delta)^{\frac12}[(1-\chi_n)f]\|_{L_x^2}\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n f)-(-\Delta)^{\frac12}(\chi_n f)\|_{L_x^2}\to 0
\end{align*}
as $n\to \infty$ by Lemma~\ref{L:n3} and the monotone convergence theorem. Finally, for the third term we use \eqref{n24} and the monotone convergence theorem to obtain
\begin{align*}
\|e^{-iT\Delta}[\chi_n(-\Delta)^{\frac12}&w_\infty(T)]-\chi_n(-\Delta)^{\frac 12}\phi\|_{L_x^2}\\
&\lambdaesssim \|(1-\chi_n)(-\Delta)^{\frac12}w_\infty(T)\|_{L^2_x}+\|(1-\chi_n)(-\Delta)^{\frac 12}\phi\|_{L_x^2}\\
&\quad+\|e^{-iT\Delta}(-\Delta)^{\frac12}w_\infty(T)-(-\Delta)^{\frac 12}\phi\|_{L_x^2} \to 0,
\end{align*}
by first taking $n\to \infty$ and then $T\to \infty$. This completes the proof of \eqref{n23} and so the proof of \eqref{match2}.
\textbf{Step 4:} Proving that $\tilde v_n$ is an approximate solution to $\text{NLS}_{\Omega}$ in the sense that
\begin{align*}
i\partial_t \tilde v_n+\Delta_\Omega\tilde v_n=|\tilde v_n|^4\tilde v_n+e_n
\end{align*}
with
\begin{align}\lambdaabel{error2}
\lambdaim_{T\to\infty}\lambdaimsup_{n\to\infty}\|e_n\|_{\dot N^1({\mathbb{R}}\times\Omega)}=0.
\end{align}
We start by verifying \eqref{error2} on the large time interval $t>\lambdaambda_n^2 T$; symmetric arguments can be used to treat $t<-\lambdaambda_n^2 T$. By the definition of $\tilde v_n$, in this regime we have
$e_n=-|\tilde v_n|^4\tilde v_n$. Using the equivalence of Sobolev spaces, Strichartz, and \eqref{209'}, we estimate
\begin{align*}
\|e_n\|_{\dot N^1(\{t>\lambdaambda_n^2 T\}\times\Omega)}
&\lambdaesssim \|(-\Delta_\Omega)^{\frac12}(|\tilde v_n|^4\tilde v_n)\|_{L_t^{\frac53} L_x^{\frac{30}{23}}(\{t>\lambdaambda_n^2 T\}\times\Omega)}\\
&\lambdaesssim \|(-\Delta_\Omega)^{\frac12}\tilde v_n\|_{L_t^5 L_x^{\frac{30}{11}}(\{t>\lambdaambda_n^2 T\}\times\Omega)} \|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambdaambda_n^2 T\}\times\Omega)}\\
&\lambdaesssim \|(-\Delta_{\Omega_n})^{\frac12}[\chi_n w_n (T)+z_n(T)] \|_{L_x^2}\|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambdaambda_n^2 T\}\times\Omega)}\\
&\lambdaesssim \bigl[ 1+ \lambdaambda_n^{-\frac12+\frac52\theta}(T+\lambdaambda_n^{-2\theta}) \bigr] \|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambdaambda_n^2 T\}\times\Omega)}.
\end{align*}
Thus, to establish \eqref{error2} it suffices to show
\begin{align}\lambdaabel{largetime2}
\lambdaim_{T\to\infty}\lambdaimsup_{n\to\infty}\|e^{i(t-\lambdaambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambdaambda_n^2T)\|_{L_{t,x}^{10}(\{t>\lambdaambda_n^2 T\}\times\Omega)}=0,
\end{align}
to which we now turn.
As a consequence of the spacetime bounds \eqref{258}, the global solution $w_\infty$ scatters. Let $w_+$ denote the forward asymptotic state, that is,
\begin{align}\lambdaabel{as2}
\|e^{-it\Delta}w_\infty(t)- w_+\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \infty.
\end{align}
(Note that in the case when $t_n\to \infty$, from the definition of $w_\infty$ we have $w_+=\phi$.) Using a change of variables, \eqref{209'}, the Strichartz and H\"older inequalities, and Sobolev embedding, we obtain
\begin{align*}
\|&e^{i(t-\lambdaambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambdaambda_n^2T)\|_{L_{t,x}^{10}((\lambdaambda_n^2T,\infty)\times\Omega)}\\
&=\|e^{it\Delta_{\Omega_n}}(\chi_nw_n(T)+z_n(T))\|_{L_{t,x}^{10}([0,\infty)\times\Omega_n)}\\
&\lambdaesssim \|(-\Delta_{\Omega_n})^{\frac12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac12}[\chi_n(w_n(T)-w_\infty(T))]\|_{L_x^2}\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac12}[\chi_n(w_{\infty}(T)-e^{iT\Delta}w_+)]\|_{L_x^2}+\|e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}([0,\infty)\times\Omega_n)}\\
&\lambdaesssim \lambdaambda_n^{-\frac12+\frac 52\theta}(T+\lambdaambda_n^{-2\theta})+\|w_n(T)-w_\infty(T)\|_{\dot H^1_x}+\|w_\infty(T)-e^{iT\Delta}w_+\|_{\dot H^1_x}\\
&\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}([0,\infty)\times{\mathbb{R}}^3)} +\|\nabla [(1-\chi_n)e^{iT\Delta}w_+]\|_{L_x^2}\\
&\quad+\|e^{it\Delta}w_+\|_{L_{t,x}^{10}((T,\infty)\times{\mathbb{R}}^3)},
\end{align*}
which converges to zero by first letting $n\to \infty$ and then $T\to \infty$ by \eqref{258'}, \eqref{as2}, Corollary~\ref{C:LF}, and the monotone convergence theorem.
We are left to prove \eqref{error2} on the middle time interval $|t|\lambdaeq \lambdaambda_n^2T$. For these values of time, we compute
\begin{align*}
e_n(t,x)&=[(i\partial_t+\Delta_\Omega )\tilde v_n- |\tilde v_n|^4\tilde v_n](t,x)\\
&=-\lambdaambda_n^{-\frac52}[\Delta\chi_n](\lambdaambda_n^{-1}(x-x_n))w_n(\lambdaambda_n^{-2}t,-\lambdaambda_n^{-1}x_n)\\
&\quad+\lambdaambda_n^{-\frac 52}[\Delta\chi_n w_n](\lambdaambda_n^{-2}t,\lambdaambda_n^{-1}(x-x_n))\\
&\quad+2\lambdaambda_n^{-\frac 52}(\nabla\chi_n\cdot\nabla w_n)(\lambdaambda_n^{-2}t, \lambdaambda_n^{-1}(x-x_n))\\
&\quad+\lambdaambda_n^{-\frac52}[\chi_n|w_n|^4w_n-|\chi_nw_n+z_n|^4(\chi_nw_n+z_n)](\lambdaambda_n^{-2}t,\lambdaambda_n^{-1}(x-x_n)).
\end{align*}
Thus, using a change of variables and the equivalence of Sobolev norms Theorem~\ref{T:Sob equiv}, we estimate
\begin{align}
\|e_n\|_{\dot N^1(\{|t|\lambdaeq\lambdaambda_n^2 T\}\times\Omega)}
&\lambdaesssim \|(-\Delta_\Omega)^{\frac12} e_n\|_{L_{t,x}^{\frac{10}7}(\{|t|\lambdae\lambdaambda_n^2T\}\times\Omega)}\notag\\
&\lambdaesssim\|\nabla[\Delta\chi_n(w_n(t,x)-w_n(t,-\lambdaambda_n^{-1}x_n))]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}\lambdaabel{51}\\
&\quad+\|\nabla[\nabla\chi_n\cdot\nabla w_n]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}\lambdaabel{52}\\
&\quad+\|\nabla[\chi_n|w_n|^4 w_n-|\chi_nw_n+z_n|^4(\chi_nw_n+z_n)]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}.\lambdaabel{53}
\end{align}
Using H\"older, the fundamental theorem of calculus, and \eqref{259}, we estimate
\begin{align*}
\eqref{51}&\lambdaesssim T^{\frac 7{10}}\|\Delta\chi_n\|_{L_x^{\frac{10}7}}\|\nabla w_n\|_{L_{t,x}^\infty}\\
&\quad+T^{\frac7{10}}\|\nabla\Delta\chi_n\|_{L_x^\frac{10}7}\|w_n(t,x)-w_n(t,-\lambdaambda_n^{-1}x_n)\|_{L_{t,x}^\infty({\mathbb{R}}\times\supp\Delta\chi_n)}\\
&\lambdaesssim T^{\frac 7{10}}\lambdaambda_n^{-\frac 1{10}+ \frac32\theta}+T^{\frac7{10}}\lambdaambda_n^{\frac 9{10}}\lambdaambda_n^{-1}\|\nabla w_n\|_{L_{t,x}^\infty}\\
&\lambdaesssim T^{\frac 7{10}}\lambdaambda_n^{-\frac 1{10}+\frac 32\theta} \to 0 \qtq{as} n\to \infty.
\end{align*}
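Two ingredients enter the last two steps (we spell them out): the scaling identities $\|\Delta\chi_n\|_{L^{\frac{10}7}_x}=\lambda_n^{-\frac1{10}}\|\Delta\chi\|_{L^{\frac{10}7}_x}$ and $\|\nabla\Delta\chi_n\|_{L^{\frac{10}7}_x}=\lambda_n^{\frac9{10}}\|\nabla\Delta\chi\|_{L^{\frac{10}7}_x}$, together with the fundamental theorem of calculus bound
\begin{align*}
|w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n)|\le |x+\lambda_n^{-1}x_n|\,\|\nabla w_n\|_{L_{t,x}^\infty}\lesssim \lambda_n^{-1}\|\nabla w_n\|_{L_{t,x}^\infty}
\end{align*}
on the support of $\Delta\chi_n$, where $|\lambda_n x+x_n|\lesssim 1$ since $\Delta\chi$ is supported in the bounded set $\{\tfrac14\diam(\Omega^c)\le d(y)\le\tfrac12\diam(\Omega^c)\}$.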
Notice that the cancellations induced by the introduction of $z_n$ were essential in order to control this term. Next,
\begin{align*}
\eqref{52}
&\lambdae T^{\frac7{10}}\bigl[\|\Delta\chi_n\|_{L_x^{\frac{10}7}}\|\nabla w_n\|_{L_{t,x}^\infty}+\|\nabla \chi_n\|_{L_x^{\frac{10}7}}\|\Delta w_n\|_{L_{t,x}^\infty}\bigr]\\
&\lambdae T^{\frac 7{10}}[\lambdaambda_n^{-\frac 1{10}+\frac 32\theta}+\lambdaambda_n^{-\frac{11}{10}+\frac 52\theta}]\to 0 \qtq{as} n\to \infty.
\end{align*}
Finally, we turn our attention to \eqref{53}. A simple algebraic computation yields
\begin{align*}
\eqref{53}&\lambdaesssim T^{\frac7{10}}\Bigl\{ \|\nabla[(\chi_n-\chi_n^5)w_n^5] \|_{L_t^\infty L_x^{\frac{10}7}} + \|z_n^4\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}}\\
&\quad+\sum_{k=1}^4\Bigl[ \|w_n^{k-1}z_n^{5-k}\nabla (\chi_n w_n) \|_{L_t^\infty L_x^{\frac{10}7}}+
\|w_n^k z_n^{4-k}\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}}\Bigr]\Bigr\},
\end{align*}
where all spacetime norms are over $[-T,T]\times\Omega_n$. Using H\"older and \eqref{259}, we estimate
\begin{align*}
\|\nabla[(\chi_n-\chi_n^5)w_n^5] \|_{L_t^\infty L_x^{\frac{10}7}}
&\lambdaesssim \|\nabla \chi_n\|_{L_x^{\frac{10}7}}\|w_n\|_{L_{t,x}^\infty}^5 + \|\chi_n-\chi_n^5\|_{L_x^{\frac{10}7}}\|w_n\|_{L_{t,x}^\infty}^4\|\nabla w_n\|_{L_{t,x}^\infty} \\
&\lambdaesssim \lambdaambda_n^{-\frac{11}{10}+\frac 52\theta} + \lambdaambda_n^{-\frac{21}{10}+\frac 72\theta}.
\end{align*}
Using also \eqref{209'}, Sobolev embedding, and Theorem~\ref{T:Sob equiv}, we obtain
\begin{align*}
\|z_n^4\nabla z_n\|_{L_t^{\infty}L_x^{\frac{10}7}}
\lambdaesssim\|\nabla z_n\|_{L_t^\infty L_x^2}\|z_n\|_{L_t^\infty L_x^{20}}^4
&\lambdaesssim \|\nabla z_n\|_{L_t^\infty L_x^2}\||\nabla |^{\frac{27}{20}} z_n\|_{L_t^\infty L_x^2}^4\\
&\lambdaesssim \lambdaambda_n^{-\frac{11}{10}+\frac{25}2\theta}(T+\lambdaambda_n^{-2\theta})^5.
\end{align*}
Similarly,
\begin{align*}
\| & w_n^{k-1}z_n^{5-k}\nabla (\chi_n w_n) \|_{L_t^\infty L_x^{\frac{10}7}} \\
&\lambdaesssim \|\nabla\chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^{\frac{150}{11}}}^k \|z_n\|_{L_t^\infty L_x^{\frac{150}{11}}}^{5-k}
+ \|\nabla w_n\|_{L_t^\infty L_x^2}\|w_n\|_{L_t^\infty L_x^{20}}^{k-1}\|z_n\|_{L_t^\infty L_x^{20}}^{5-k} \\
&\lambdaesssim \||\nabla|^{\frac{32}{25}}w_n\|_{L_t^\infty L_x^2}^k\||\nabla|^{\frac{32}{25}}z_n\|_{L_t^\infty L_x^2}^{5-k}
+ \||\nabla|^{\frac{27}{20}}w_n\|_{L_t^\infty L_x^2}^{k-1}\||\nabla|^{\frac{27}{20}}z_n\|_{L_t^\infty L_x^2}^{5-k} \\
&\lambdaesssim \lambdaambda_n^{\frac7{25}\theta k+(-\frac{11}{50}+\frac 52\theta)(5-k)}(T+\lambdaambda_n^{-2\theta})^{5-k}
+ \lambdaambda_n^{\frac 7{20}\theta(k-1)}\lambdaambda_n^{(-\frac 3{20}+\frac 52\theta)(5-k)}(T+\lambdaambda_n^{-2\theta})^{5-k}
\end{align*}
and
\begin{align*}
\|w_n^k z_n^{4-k}\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}}
&\lambdaesssim \|\nabla z_n\|_{L_t^\infty L_x^2}\|w_n\|_{L_t^\infty L_x^{20}}^k\|z_n\|_{L_t^\infty L_x^{20}}^{4-k}\\
&\lambdaesssim \lambdaambda_n^{-\frac 12+\frac 52\theta}\lambdaambda_n^{\frac7{20}\theta k}\lambdaambda_n^{(-\frac3{20}+\frac 52\theta)(4-k)}(T+\lambdaambda_n^{-2\theta})^{5-k}.
\end{align*}
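For the record, the fractional exponents in the last three displays come from the Sobolev embedding $\dot H^s({\mathbb{R}}^3)\hookrightarrow L^p({\mathbb{R}}^3)$ with $s=3(\tfrac12-\tfrac1p)$, combined with the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}:
\begin{align*}
\|f\|_{L^{20}_x}\lesssim \||\nabla|^{\frac{27}{20}}f\|_{L^2_x} \qtq{and} \|f\|_{L^{\frac{150}{11}}_x}\lesssim \||\nabla|^{\frac{32}{25}}f\|_{L^2_x},
\end{align*}
since $3(\tfrac12-\tfrac1{20})=\tfrac{27}{20}$ and $3(\tfrac12-\tfrac{11}{150})=\tfrac{32}{25}$. With $\theta=\tfrac1{100}$, every power of $\lambda_n$ appearing in the estimates for \eqref{53} is negative (for example, $-\tfrac{11}{10}+\tfrac{25}2\theta=-\tfrac{39}{40}$).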
Putting everything together and recalling $\theta=\frac1{100}$, we derive
\begin{align*}
\eqref{53}\to 0 \qtq{as} n\to \infty.
\end{align*}
Therefore,
\begin{align*}
\lambdaim_{T\to\infty}\lambdaimsup_{n\to\infty}\|e_n\|_{\dot N^1(\{|t|\lambdaeq \lambdaambda_n^2T\}\times\Omega)}=0,
\end{align*}
which together with \eqref{largetime2} gives \eqref{error2}.
\textbf{Step 5:} Constructing $v_n$ and approximation by $C_c^\infty$ functions.
Using \eqref{tildevn2}, \eqref{match2}, and \eqref{error2}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ (and $T$) sufficiently large we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ and
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lambdaesssim 1.
\end{align*}
Moreover,
\begin{align}\lambdaabel{vncase2}
\lambdaim_{T\to\infty}\lambdaimsup_{n\to\infty}\|v_n(t-\lambdaambda_n^2 t_n)-\tilde v_n(t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0.
\end{align}
To complete the proof of the theorem, it remains to prove the approximation result \eqref{dense2}, to which we now turn. From the density of
$C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ in $\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)$, for any ${\varepsilon}>0$ there exists $\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that
\begin{align}\lambdaabel{approxwinfty2}
\|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac {\varepsilon} 3.
\end{align}
Using a change of variables, we estimate
\begin{align*}
\|v_n(t-\lambdaambda_n^2 t_n, x+x_n)&-\lambdaambda_n^{-\frac12}\psi_{\varepsilon}(\lambdaambda_n^{-2}t, \lambdaambda_n^{-1}x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\\
&\lambdae \|v_n(t-\lambdaambda_n^2 t_n)-\tilde v_n(t)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\\
&\quad+\|\tilde v_n(t,x)-\lambda_n^{-\frac12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}.
\end{align*}
In view of \eqref{vncase2} and \eqref{approxwinfty2}, proving \eqref{dense2} reduces to showing
\begin{align}\lambdaabel{remaincase2}
\|\tilde v_n(t,x)-\lambdaambda_n^{-\frac 12}w_\infty(\lambdaambda_n^{-2}t,\lambdaambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}< \tfrac {\varepsilon} 3
\end{align}
for sufficiently large $n$ and $T$.
To prove \eqref{remaincase2} we discuss two different time regimes. On the middle time interval $|t|\lambdaeq \lambdaambda_n^2 T$, we have
\begin{align*}
&\|\tilde v_n(t,x)-\lambdaambda_n^{-\frac 12}w_\infty(\lambdaambda_n^{-2}t,\lambdaambda_n^{-1}(x-x_n))\|_{\dot X^1(\{|t|\lambdaeq \lambdaambda_n^2T\}\times{\mathbb{R}}^3)}\\
&\lambdaesssim\|\chi_n w_n+z_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}\\
&\lambdaesssim \|(1-\chi_n)w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|\chi_n(w_n-w_\infty)\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|z_n\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)},
\end{align*}
which converges to zero by \eqref{258}, \eqref{258'}, and \eqref{209}.
We now consider $|t|> \lambda_n^2 T$; by symmetry, it suffices to control the contribution of positive times. Using the Strichartz inequality, we estimate
\begin{align*}
\|\tilde v_n(t,x)&-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1((\lambda_n^2T, \infty)\times{\mathbb{R}}^3)}\\
&=\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_n(T)+z_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\lesssim \|z_n(T)\|_{\dot H^1_D(\Omega_n)}+\|\nabla[\chi_n(w_\infty-w_n)]\|_{L_x^2} +\|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\quad +\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&=o(1) +\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)} \qtq{as} n, T\to \infty
\end{align*}
by \eqref{209'}, \eqref{258'}, and the monotone convergence theorem. Using the triangle and Strichartz inequalities, we estimate the last term as follows:
\begin{align*}
\|&e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\lesssim\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|\nabla[(1-\chi_n)w_\infty]\|_{L_x^2}\\
&\quad+\|\nabla[e^{-iT\Delta}w_\infty(T)-w_+]\|_{L_x^2} + \|e^{it\Delta}w_+\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)},
\end{align*}
which converges to zero by letting $n\to \infty$ and then $T\to \infty$ by Theorem~\ref{T:LF}, \eqref{as2}, and the monotone convergence theorem.
Putting everything together we obtain \eqref{remaincase2} and so \eqref{dense2}. This completes the proof of Theorem~\ref{T:embed2}.
\end{proof}
Our next result concerns the scenario when the rescaled obstacles $\Omega_n^c$ are retreating to infinity (cf. Case 3 in Theorem~\ref{T:LPD}).
\begin{thm}[Embedding nonlinear profiles for retreating obstacles]\label{T:embed3}
Let $\{t_n\}\subset {\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to \pm\infty$. Let $\{x_n\}\subset \Omega$ and $\{\lambda_n\}\subset 2^{{\mathbb{Z}}}$ be such that $\frac{d(x_n)}{\lambda_n}\to \infty$. Let $\phi\in\dot H^1({\mathbb{R}}^3)$ and define
\begin{align*}
\phi_n(x)=\lambda_n^{-\frac 12}e^{i\lambda_n^2 t_n\Delta_\Omega}\bigl[(\chi_n\phi)\bigl(\tfrac{x-x_n}{\lambda_n}\bigr)\bigr],
\end{align*}
where $\chi_n(x)=1-\Theta(\lambda_n|x|/d(x_n))$. Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1,
\end{align*}
with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Furthermore, for every ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and
$\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$ such that for all $n\ge N_{\varepsilon}$ we have
\begin{align}\label{apcase3}
\|v_n(t-\lambda_n^2 t_n, x+x_n)-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t, \lambda_n^{-1} x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}.
\end{align}
\end{thm}
\begin{proof} The proof of this theorem follows the general outline of the proof of Theorem~\ref{T:embed2}. It consists of the same five steps. Throughout the proof we will write $-\Delta=-\Delta_{{\mathbb{R}}^3}$.
\textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{R}}^3}$.
Let $\theta:=\frac 1{100}$. As in the proof of Theorem~\ref{T:embed2}, the construction of the solutions to $\text{NLS}_{{\mathbb{R}}^3}$ depends on the behaviour of $t_n$. If $t_n\equiv0$, we let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $w_n(0)=\phi_{\le(d(x_n)/\lambda_n)^{\theta}}$
and $w_\infty(0)=\phi$. If $t_n\to \pm\infty$, we let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ satisfying
\begin{align*}
\|w_n(t)-e^{it\Delta}\phi_{\le (d(x_n)/\lambda_n)^{\theta}}\|_{\dot H^1({\mathbb{R}}^3)}\to0 \qtq{and} \|w_\infty(t)-e^{it\Delta}\phi\|_{\dot H^1({\mathbb{R}}^3)}\to 0
\end{align*}
as $t\to \pm \infty$.
In all cases, \cite{CKSTT:gwp} implies that $w_n$ and $w_\infty$ are global solutions with finite global spacetime norms. Moreover, arguing as in the proof of Theorem~\ref{T:embed2} and invoking perturbation theory and the persistence of regularity result Lemma \ref{lm:persistencer3}, we see that $w_n$ and $w_\infty$ satisfy the following:
\begin{equation}\label{cond3}
\left\{ \quad \begin{aligned}
&\|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim 1,\\
&\lim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}= 0,\\
&\||\nabla|^s w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim \bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{s\theta} \qtq{for all} s\ge 0.
\end{aligned} \right.
\end{equation}
\textbf{Step 2:} Constructing the approximate solution to $\text{NLS}_{\Omega}$.
Fix $T>0$ to be chosen later. We define
\begin{align*}
\tilde v_n(t,x):=\begin{cases} \lambda_n^{-\frac12}[\chi_nw_n](\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n)), & |t|\le \lambda_n^2 T, \\
e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x), & t>\lambda_n^2 T, \\
e^{i(t+\lambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambda_n^2 T,x), & t<-\lambda_n^2 T.
\end{cases}
\end{align*}
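The change of variables used repeatedly in this step is the energy-critical rescaling; as a quick sanity check (a routine computation recorded here only for the reader's convenience), in three space dimensions this rescaling leaves the relevant norms invariant:
\begin{align*}
\bigl\|\lambda^{-\frac12}f(\lambda^{-2}t,\lambda^{-1}x)\bigr\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}=\|f\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}
\qtq{and}
\bigl\|\nabla\bigl[\lambda^{-\frac12}g(\lambda^{-1}x)\bigr]\bigr\|_{L_{x}^{2}({\mathbb{R}}^3)}=\|\nabla g\|_{L_{x}^{2}({\mathbb{R}}^3)}.
\end{align*}
In particular, norms of $\tilde v_n$ on $\{|t|\le \lambda_n^2 T\}\times\Omega$ reduce to norms of $\chi_n w_n$ on $[-T,T]\times\Omega_n$.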
Note that $\tilde v_n$ has finite scattering size; indeed, using a change of variables, the Strichartz inequality, H\"older, Sobolev embedding, and \eqref{cond3}, we get
\begin{align}\label{tildevn3}
\|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}
&\lesssim \|\chi_n w_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|\chi_n w_n(\pm T)\|_{\dot H^1_D(\Omega_n)} \notag\\
&\lesssim \|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)} + \|\nabla\chi_n\|_{L_x^3} \|w_n(\pm T)\|_{L_x^6} + \|\chi_n\|_{L_x^\infty} \|\nabla w_n(\pm T)\|_{L_x^2}\notag\\
&\lesssim 1,
\end{align}
where $\Omega_n:=\lambda_n^{-1}(\Omega-\{x_n\})$.
\textbf{Step 3:} Asymptotic agreement of the initial data:
\begin{align}\label{n0}
\lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0.
\end{align}
We first consider the case when $t_n\equiv0$. By Strichartz and a change of variables,
\begin{align*}
\|&(-\Delta_{\Omega})^{\frac 12}e^{it\Delta_\Omega}[\tilde v_n(0)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\
&\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n\phi_{>(d(x_n)/\lambda_n)^{\theta}}]\|_{L^2_x(\Omega_n)}\\
&\lesssim \|\nabla \chi_n\|_{L_x^3}\|\phi_{>(d(x_n)/\lambda_n)^{\theta}}\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla \phi_{>(d(x_n)/\lambda_n)^{\theta}}\|_{L_x^2}\to0\qtq{as} n\to \infty.
\end{align*}
It remains to prove \eqref{n0} when $t_n\to \infty$; the case $t_n\to-\infty$ can be treated similarly. As $T$ is fixed, for sufficiently large $n$ we have $t_n>T$ and so
\begin{align*}
\tilde v_n(\lambda_n^2t_n,x)=e^{i(t_n-T)\lambda_n^2\Delta_{\Omega}}\bigl[\lambda_n^{-\frac 12}(\chi_nw_n(T))\bigl(\tfrac{x-x_n}{\lambda_n}\bigr)\bigr].
\end{align*}
Thus, by a change of variables and the Strichartz inequality,
\begin{align}
\|(-\Delta_{\Omega})^{\frac 12}& e^{it\Delta_{\Omega}}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\notag\\
&=\|(-\Delta_{\Omega_n})^{\frac 12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_n(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\notag\\
&\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n(T)-w_\infty(T))]\|_{L^2(\Omega_n)}\label{n1}\\
&\quad +\|(-\Delta_{\Omega_n})^{\frac12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\label{n2}.
\end{align}
Using \eqref{cond3} and Sobolev embedding, we see that
\begin{align*}
\eqref{n1}\lesssim \|\nabla\chi_n\|_{L_x^3}\|w_n(T)-w_\infty(T)\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla[w_n(T)-w_\infty(T)]\|_{L_x^2} \to 0
\end{align*}
as $n\to \infty$. The proof of
$$
\lim_{T\to\infty}\limsup_{n\to\infty}\eqref{n2}=0
$$
is identical to the proof of \eqref{n23} in Theorem~\ref{T:embed2} and we omit it. This completes the proof of \eqref{n0}.
\textbf{Step 4:} Proving that $\tilde v_n$ is an approximate solution to $\text{NLS}_{\Omega}$ in the sense that
\begin{align}\label{n6}
\lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0({\mathbb{R}}\times\Omega)}=0.
\end{align}
We first verify \eqref{n6} for $|t|>\lambda_n^2 T$. By symmetry, it suffices to consider positive times. Arguing as in the proof of Theorem~\ref{T:embed2}, we see that in this case \eqref{n6} reduces to
\begin{align}\label{n7}
\lim_{T\to \infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2T,\infty)\times\Omega)}=0.
\end{align}
Let $w_+$ denote the forward asymptotic state of $w_\infty$. Using a change of variables and the Strichartz inequality, we get
\begin{align*}
&\|e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2T,\infty)\times\Omega)}\\
&=\|e^{it\Delta_{\Omega_n}}[\chi_nw_n(T)]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}\\
&\lesssim \|e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}+\|\chi_n[w_\infty(T)-e^{iT\Delta}w_+]\|_{\dot H^1({\mathbb{R}}^3)}\\
&\quad+\|\chi_n[w_\infty(T)-w_n(T)]\|_{\dot H^1({\mathbb{R}}^3)}\\
&\lesssim \|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_n e^{iT\Delta}w_+]\|_{L_{t,x}^{10}((0,\infty)\times{\mathbb{R}}^3)}+\|(1-\chi_n)e^{iT\Delta}w_+\|_{\dot H^1({\mathbb{R}}^3)}\\
&\quad +\|e^{it\Delta}w_+\|_{L_{t,x}^{10}((T,\infty)\times{\mathbb{R}}^3)}+\|w_\infty(T) -e^{iT\Delta}w_+\|_{\dot H^1({\mathbb{R}}^3)}+\|w_\infty(T)-w_n(T)\|_{\dot H^1({\mathbb{R}}^3)},
\end{align*}
which converges to zero by letting $n\to \infty$ and then $T\to \infty$ in view of Corollary~\ref{C:LF} (and the density of $C_c^\infty({\mathbb{R}}^3)$ functions in
$\dot H^1({\mathbb{R}}^3)$), \eqref{cond3}, the definition of $w_+$, and the monotone convergence theorem.
Next we show \eqref{n6} on the middle time interval $|t|\le \lambda_n^2 T$. We compute
\begin{align*}
[(i\partial_t+\Delta_{\Omega})\tilde v_n-|\tilde v_n|^4\tilde v_n](t,x)
&=\lambda_n^{-\frac 52}[(\chi_n-\chi_n^5)|w_n|^4w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\\
&\quad+2\lambda_n^{-\frac 52}[\nabla\chi_n \cdot\nabla w_n](\lambda_n^{-2} t, \lambda_n^{-1}(x-x_n))\\
&\quad+\lambda_n^{-\frac 52}[\Delta\chi_nw_n](\lambda_n^{-2} t,\lambda_n^{-1}(x-x_n)).
\end{align*}
Thus, using a change of variables and the equivalence of Sobolev spaces we obtain
\begin{align}
\|(-\Delta_\Omega)^{\frac 12}&[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0((|t|\le \lambda_n^2 T)\times\Omega)}\notag\\
&\lesssim \|\nabla[(\chi_n-\chi_n^5)|w_n|^4 w_n]\|_{\dot N^0([-T,T]\times\Omega_n)}\label{n9}\\
&\quad+\|\nabla(\nabla\chi_n\cdot \nabla w_n)\|_{\dot N^0([-T,T]\times\Omega_n)}+\|\nabla (\Delta\chi_nw_n)\|_{\dot N^0([-T,T]\times\Omega_n)}.\label{n10}
\end{align}
Using H\"older, we estimate the contribution of \eqref{n9} as follows:
\begin{align*}
\eqref{n9}&\lesssim \|(\chi_n-\chi_n^5)|w_n|^4 \nabla w_n\|_{L_{t,x}^{\frac{10}7}}+\|\nabla \chi_n(1-5\chi_n^4)w_n^5\|_{L_t^{\frac 53} L_x^{\frac{30}{23}}}\\
&\lesssim\|\nabla w_n\|_{L_{t,x}^{\frac{10}3}}\Bigl[\|w_n-w_\infty\|_{L_{t,x}^{10}}^4+\|1_{|x|\sim \frac{d(x_n)}{\lambda_n}} w_\infty\|_{L_{t,x}^{10}}^4\Bigr]\\
&\quad+\|w_n\|_{L_t^5 L_x^{30}}\|\nabla \chi_n\|_{L_x^3}\Bigl[\|w_n-w_\infty\|_{L_{t,x}^{10}}^4+\|1_{|x|\sim\frac{d(x_n)}{\lambda_n}}w_\infty\|_{L_{t,x}^{10}}^4\Bigr]\to0,
\end{align*}
by the dominated convergence theorem and \eqref{cond3}. Similarly,
\begin{align*}
\eqref{n10}&\lesssim T \Bigl[\|\Delta\chi_n\|_{L_x^\infty} \|\nabla w_n\|_{L_t^\infty L_x^2} +\|\nabla \chi_n\|_{L_x^\infty}\|\Delta w_n\|_{L_t^\infty L_x^2}
+\|\nabla\Delta\chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^6}\Bigr]\\
&\lesssim T\Bigl[\bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{-2}+\bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{\theta-1}\Bigr] \to 0 \qtq{as} n\to \infty.
\end{align*}
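For the reader's convenience, we record the elementary bounds behind the last display (assuming, as in the definition of $\chi_n$, that $\Theta$ is a fixed smooth cutoff):
\begin{align*}
|\partial^\alpha\chi_n(x)|\lesssim_\alpha \bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{-|\alpha|}
\qtq{with} \supp(\partial^\alpha\chi_n)\subseteq\bigl\{|x|\sim \tfrac{d(x_n)}{\lambda_n}\bigr\} \qtq{for} |\alpha|\ge 1.
\end{align*}
Thus, for example, $\|\nabla\Delta\chi_n\|_{L_x^3}\lesssim (\tfrac{d(x_n)}{\lambda_n})^{-3}\cdot\tfrac{d(x_n)}{\lambda_n}=(\tfrac{d(x_n)}{\lambda_n})^{-2}$, while $\|\Delta w_n\|_{L_t^\infty L_x^2}\lesssim(\tfrac{d(x_n)}{\lambda_n})^{\theta}$ by \eqref{cond3}.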
This completes the proof of \eqref{n6}.
\textbf{Step 5:} Constructing $v_n$ and approximation by $C_c^\infty$ functions.
Using \eqref{tildevn3}, \eqref{n0}, and \eqref{n6}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ sufficiently large we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1
\qtq{and}
\lim_{T\to\infty}\limsup_{n\to \infty}\|v_n(t-\lambda_n^2t_n)-\tilde v_n (t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0.
\end{align*}
It remains to prove the approximation result \eqref{apcase3}.
From the density of $C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ in $\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)$, for any ${\varepsilon}>0$ we can find $\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that
\begin{align*}
\|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}< \tfrac {\varepsilon} 3.
\end{align*}
Thus, to prove \eqref{apcase3} it suffices to show
\begin{align}\label{n11}
\|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac {\varepsilon} 3
\end{align}
for $n, T$ sufficiently large. A change of variables gives
\begin{align*}
\text{LHS}\eqref{n11}
&\le \|\chi_n w_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\quad+\|e^{i(t+T)\Delta_{\Omega_n}}[\chi_nw_n(-T)]-w_\infty\|_{\dot X^1((-\infty,-T)\times{\mathbb{R}}^3)}.
\end{align*}
We estimate the contribution from each term separately. For the first term we use the monotone convergence theorem and \eqref{cond3} to see that
\begin{align*}
\|\chi_n w_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}&\lesssim \|(1-\chi_n)w_\infty\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_n-w_\infty\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\to 0,
\end{align*}
as $n\to \infty$. For the second term we use Strichartz to get
\begin{align*}
&\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\lesssim \|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|\chi_n[w_\infty(T)-w_n(T)]\|_{\dot H^1({\mathbb{R}}^3)}\\
&\quad+\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\lesssim \|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|w_\infty(T)-w_n(T)\|_{\dot H^1({\mathbb{R}}^3)}+\|(1-\chi_n)w_\infty(T)\|_{\dot H^1({\mathbb{R}}^3)}\\
&\quad+\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_nw_\infty(T)]\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|e^{it\Delta}w_+\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\quad+\|w_+-e^{-iT\Delta}w_\infty(T)\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} n\to \infty \qtq{and then} T\to \infty
\end{align*}
by Theorem~\ref{T:LF}, \eqref{cond3}, the definition of the asymptotic state $w_+$, and the monotone convergence theorem.
The third term can be treated analogously to the second term.
This completes the proof of \eqref{n11} and with it, the proof of Theorem~\ref{T:embed3}.
\end{proof}
Our final result in this section treats the case when the obstacle expands to fill a halfspace (cf. Case~4 in Theorem~\ref{T:LPD}).
\begin{thm}[Embedding $\text{NLS}_{{\mathbb{H}}}$ into $\text{NLS}_{\Omega}$]\label{T:embed4}
Let $\{t_n\}\subset {\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to\pm\infty$. Let $\{\lambda_n\}\subset 2^{{\mathbb{Z}}}$ and $\{x_n\}\subset \Omega$ be such that
\begin{align*}
\lambda_n\to 0 \qtq{and} \tfrac{d(x_n)}{\lambda_n}\to d_\infty>0.
\end{align*}
Let $x_n^*\in \partial\Omega$ be such that $|x_n-x_n^*|=d(x_n)$ and let $R_n\in SO(3)$ be such that $R_n e_3=\frac{x_n-x_n^*}{|x_n-x_n^*|}$.
Finally, let $\phi\in \dot H^1_D({\mathbb{H}})$ and define
\begin{align*}
\phi_n(x)=\lambda_n^{-\frac 12}e^{i\lambda_n^2t_n\Delta_\Omega}\bigl[\phi\bigl(\tfrac{R_n^{-1}(x-x_n^*)}{\lambda_n}\bigr)\bigr].
\end{align*}
Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1,
\end{align*}
with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Furthermore, for every ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and
$\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{H}})$ such that for all $n\geq N_{\varepsilon}$ we have
\begin{align}\label{ap4}
\|v_n(t-\lambda_n^2 t_n, R_nx+x_n^*)-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t, \lambda_n^{-1}x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}.
\end{align}
\end{thm}
\begin{proof}
Again, the proof follows the outline of the proofs of Theorems~\ref{T:embed2} and \ref{T:embed3}.
\textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{H}}}$.
Let $\theta:=\frac 1{100}$. If $t_n\equiv0$, let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{H}}}$ with initial data
$w_n(0)=\phi_{\le \lambda_n^{-\theta}}$ and $w_\infty(0)=\phi$. If $t_n\to \pm\infty$, let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{H}}}$ that satisfy
\begin{align}\label{m12}
\|w_n(t)-e^{it\Delta_{{\mathbb{H}}}}\phi_{\le \lambda_n^{-\theta}}\|_{\dot H^1_D({\mathbb{H}})}\to 0 \qtq{and} \|w_\infty(t)-e^{it\Delta_{{\mathbb{H}}}}\phi\|_{\dot H^1_D({\mathbb{H}})}\to 0,
\end{align}
as $t\to \pm \infty$.
In all cases, \cite{CKSTT:gwp} implies that $w_n$ and $w_\infty$ are global solutions and obey
\begin{align*}
\|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}\lesssim 1,
\end{align*}
with the implicit constant depending only on $\|\phi\|_{\dot H^1_D({\mathbb{H}})}$. Indeed, we may interpret such solutions as solutions to $\text{NLS}_{{\mathbb{R}}^3}$ that are odd under reflection in $\partial{\mathbb{H}}$. Moreover, arguing as in the proof of Theorems~\ref{T:embed2} and \ref{T:embed3} and using the stability result from \cite{CKSTT:gwp} and the persistence of regularity result Lemma~\ref{lm:persistenceh}, we have
\begin{align}\label{cond4}
\begin{cases}
\lim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}=0,\\
\|(-\Delta_{\mathbb{H}})^{\frac k2}w_n\|_{L_t^\infty L_x^2({\mathbb{R}}\times{\mathbb{H}})}\lesssim\lambda_n^{-\theta(k-1)} \qtq{for} k=1,2,3.
\end{cases}
\end{align}
\textbf{Step 2:} Constructing approximate solutions to $\text{NLS}_\Omega$.
Let $\Omega_n:=\lambda_n^{-1}R_n^{-1}(\Omega-\{x_n^*\})$ and fix $T>0$, to be chosen later. On the middle time interval $|t|<\lambda_n^2 T$, we embed
$w_n$ by using a boundary straightening diffeomorphism $\Psi_n$ of a neighborhood of zero in $\Omega_n$ of size $L_n:=\lambda_n^{-2\theta}$ into a corresponding neighborhood in ${\mathbb{H}}$.
To this end, we define a smooth function $\psi_n$ on the set $|x^\perp|\le L_n$ so that $x^\perp\mapsto (x^\perp, -\psi_n(x^\perp))$ traces out $\partial\Omega_n$. Here and below we write $x\in {\mathbb{R}}^3$ as $x=(x^\perp, x_3)$. By our choice of $R_n$, $\partial \Omega_n$ has unit normal $e_3$ at zero. Moreover, $\partial\Omega_n$ has curvatures that are $O(\lambda_n)$. Thus,
$\psi_n$ satisfies the following:
\begin{align}\label{psin}
\begin{cases}
&\psi_n(0)=0, \quad \nabla\psi_n(0)=0, \quad |\nabla\psi_n(x^\perp)|\lesssim \lambda_n^{1-2\theta},\\
&|\partial^{\alpha}\psi_n(x^\perp)|\lesssim\lambda_n^{|\alpha|-1} \qtq{for all} |\alpha|\ge 2.
\end{cases}
\end{align}
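To see where the gradient bound in \eqref{psin} comes from, note that it follows from the curvature bound by a one-line Taylor-expansion argument (recorded here only as a sketch):
\begin{align*}
|\nabla\psi_n(x^\perp)|=|\nabla\psi_n(x^\perp)-\nabla\psi_n(0)|
\lesssim \sup_{|y^\perp|\le L_n}|\partial^2\psi_n(y^\perp)|\,|x^\perp|
\lesssim \lambda_n L_n=\lambda_n^{1-2\theta}.
\end{align*}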
We now define the map $\Psi_n: \Omega_n\cap\{|x^\perp|\le L_n\}\to {\mathbb{H}}$ and a cutoff $\chi_n:{\mathbb{R}}^3\to[0,1]$ via
\begin{align*}
\Psi_n(x):=(x^{\perp}, x_3+\psi_n(x^\perp)) \qtq{and} \chi_n(x):=1-\Theta\bigl(\tfrac{x}{L_n}\bigr).
\end{align*}
Note that on the domain of $\Psi_n$, which contains $\supp\chi_n$, we have
\begin{align}\label{detpsin}
|\det(\partial \Psi_n)|\sim 1 \qtq{and} |\partial\Psi_n|\lesssim 1.
\end{align}
We are now ready to define the approximate solution. Let $\tilde w_n:=\chi_nw_n$ and define
\begin{align*}
\tilde v_n(t,x):=\begin{cases} \lambda_n^{-\frac12}[\tilde
w_n(\lambda_n^{-2}t)\circ\Psi_n](\lambda_n^{-1}R_n^{-1}(x-x_n^*)), &|t|\le \lambda_n^2 T, \\
e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x), &t>\lambda_n^2 T,\\
e^{i(t+\lambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambda_n^2T,x), &t<-\lambda_n^2 T .
\end{cases}
\end{align*}
We first prove that $\tilde v_n$ has finite scattering size. Indeed, by the Strichartz inequality, a change of variables, and \eqref{detpsin},
\begin{align}\label{tildevn4}
\|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}
&\lesssim \|\tilde w_n\circ\Psi_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega_n)}+\|\tilde w_n(\pm T)\circ\Psi_n\|_{\dot H^1_D(\Omega_n)}\notag\\
&\lesssim \|\tilde w_n\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{H}})} + \|\tilde w_n(\pm T)\|_{\dot H^1_D({\mathbb{H}})}\lesssim 1.
\end{align}
\textbf{Step 3:} In this step we prove asymptotic agreement of the initial data, namely,
\begin{align}\label{match4}
\lim_{T\to\infty}\limsup_{n\to \infty}\|(-\Delta_\Omega)^{\frac12}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0.
\end{align}
We discuss two cases. If $t_n\equiv0$, then by Strichartz and a change of variables,
\begin{align*}
\| & (-\Delta_{\Omega})^{\frac 12} e^{it\Delta_\Omega}[\tilde v_n(0)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\
&\lesssim \|(\chi_n\phi_{\le \lambda_n^{-\theta}})\circ\Psi_n-\phi\|_{\dot H^1_D(\Omega_n)}\\
&\lesssim \|\nabla[(\chi_n\phi_{>\lambda_n^{-\theta}})\circ\Psi_n]\|_{L^2_x}+\|\nabla[(\chi_n\phi)\circ\Psi_n-\chi_n\phi]\|_{L^2_x}+\|\nabla[(1-\chi_n)\phi]\|_{L^2_x}.
\end{align*}
Since $\lambda_n\to 0$, we have $\|\nabla \phi_{>\lambda_n^{-\theta}}\|_{L^2_x}\to 0$ as $n\to \infty$; thus, using \eqref{detpsin} we see that the first
term converges to $0$. For the second term, we note that $\Psi_n(x)\to x$ in $C^1$; thus, approximating $\phi$ by $C_c^\infty({\mathbb{H}})$ functions we see that the second term converges to $0$. Finally, by the dominated convergence theorem and $L_n=\lambdaambda_n^{-2\theta}\to \infty$, the last term converges to $0$.
It remains to prove \eqref{match4} when $t_n\to +\infty$; the case when $t_n\to -\infty$ can be treated similarly.
Note that as $T>0$ is fixed, for $n$ sufficiently large we have $t_n>T$ and so
\begin{align*}
\tilde v_n(\lambda_n^2t_n,x)&=e^{i(t_n-T)\lambda_n^2\Delta_\Omega}[\lambda_n^{-\frac12}(\tilde w_n(T)\circ\Psi_n)(\lambda_n^{-1}R_n^{-1}(x-x_n^*))].
\end{align*}
Thus, a change of variables gives
\begin{align}
\|(-\Delta_\Omega)^{\frac12} &e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\notag\\
&\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\tilde w_n(T)\circ\Psi_n-w_\infty(T)]\|_{L^2_x}\label{n13}\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[e^{i(t-T)\Delta_{\Omega_n}}w_\infty(T)-e^{it\Delta_{\Omega_n}}\phi]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}.\label{n12}
\end{align}
Using the triangle inequality,
\begin{align*}
\eqref{n13}
&\lesssim\|(-\Delta_{\Omega_n})^{\frac12}[(\chi_nw_\infty(T))\circ\Psi_n-w_\infty(T)]\|_{L^2_x}\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[(\chi_n(w_n(T)-w_\infty(T)))\circ\Psi_n]\|_{L^2_x},
\end{align*}
which converges to zero as $n\to \infty$ by \eqref{cond4} and the fact that $\Psi_n(x)\to x$ in $C^1$. Using Strichartz, Lemma~\ref{L:n3},
Theorem~\ref{T:LF}, and \eqref{m12}, we see that
\begin{align*}
\eqref{n12}
&\lesssim \|e^{i(t-T)\Delta_{\Omega_n}}(-\Delta_{{\mathbb{H}}})^{\frac12}w_\infty(T)-e^{it\Delta_{\Omega_n}}(-\Delta_{{\mathbb{H}}})^{\frac12}\phi\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\quad +\|[(-\Delta_{\Omega_n})^{\frac 12}-(-\Delta_{{\mathbb{H}}})^{\frac12}]w_\infty(T)\|_{L^2_x}+\|[(-\Delta_{\Omega_n})^{\frac 12}-(-\Delta_{{\mathbb{H}}})^{\frac 12}]\phi\|_{L^2_x}\\
&\lesssim\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta_{{\mathbb{H}}}}](-\Delta_{{\mathbb{H}}})^{\frac 12}w_\infty(T)\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta_{{\mathbb{H}}}}](-\Delta_{{\mathbb{H}}})^{\frac12}\phi\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\
&\quad+\|e^{-iT\Delta_{{\mathbb{H}}}}w_\infty(T)-\phi\|_{\dot H^1_D({\mathbb{H}})}+o(1),
\end{align*}
and that this converges to zero by first taking $n\to \infty$ and then $T\to \infty$.
\textbf{Step 4:} In this step we prove that $\tilde v_n$ is an approximate solution to $\text{NLS}_\Omega$ in the sense that
\begin{align}\label{n14}
\lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0({\mathbb{R}}\times\Omega)}=0.
\end{align}
We first control the contribution of $|t|\ge \lambda_n^2T$. As seen previously, this reduces to proving
\begin{align}\label{n15}
\lim_{T\to\infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2 T)\|_{L_{t,x}^{10}((\lambda_n^2 T,\infty)\times\Omega)}=0
\end{align}
together with the analogous estimate in the opposite time direction, which follows similarly.
Let $w_+$ denote the forward asymptotic state of $w_\infty$. Using Strichartz, our earlier estimate on \eqref{n13}, and the monotone convergence theorem, we see that
\begin{align*}
\|&e^{i(t-\lambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2 T,\infty)\times\Omega)}\\
&=\|e^{i(t-T)\Delta_{\Omega_n}}[\tilde w_n(T)\circ \Psi_n]\|_{L_{t,x}^{10}((T,\infty)\times\Omega_n)}\\
&\lesssim \|e^{i(t-T)\Delta_{\Omega_n}}[e^{iT\Delta_{{\mathbb{H}}}}w_+]\|_{L_{t,x}^{10}((T,\infty)\times\Omega_n)}+\|w_\infty(T)-e^{iT\Delta_{{\mathbb{H}}}}w_+\|_{\dot H^1_D({\mathbb{H}})}\\
&\quad+\|\tilde w_n(T)\circ\Psi_n-w_\infty(T)\|_{\dot H^1_D(\Omega_n)}\\
&\lesssim \|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta_{{\mathbb{H}}}}][e^{iT\Delta_{{\mathbb{H}}}}w_+]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}\\
&\quad+\|e^{it\Delta_{\mathbb{H}}}w_+\|_{L_{t,x}^{10} ((T,\infty)\times{\mathbb{H}})}+o(1)
\end{align*}
and that this converges to zero by Theorem~\ref{T:LF} and the monotone convergence theorem by first taking $n\to \infty$ and then $T\to \infty$. Thus \eqref{n15} is proved.
Next we control the contribution of the middle time interval $\{|t|\le \lambda_n^2 T\}$ to \eqref{n14}. We compute
\begin{align*}
\Delta(\tilde w_n\circ \Psi_n)&=(\partial_k\tilde w_n\circ\Psi_n)\Delta\Psi_n^k+(\partial_{kl}\tilde w_n\circ\Psi_n)\partial_j\Psi_n^l\partial_j\Psi_n^k,
\end{align*}
where $\Psi_n^k$ denotes the $k$th component of $\Psi_n$ and repeated indices are summed. As $\Psi_n(x)=x+(0,\psi_n(x^{\perp}))$, we have
\begin{align*}
&\Delta\Psi_n^k=O(\partial^2\psi_n), \quad \partial_j\Psi_n^l=\delta_{jl}+O(\partial\psi_n), \\
&\partial_j\Psi_n^l\partial_j\Psi_n^k=\delta_{jl}\delta_{jk}+O(\partial\psi_n)+O((\partial\psi_n)^2),
\end{align*}
where we use $O$ to denote a collection of similar terms. For example, $O(\partial\psi_n)$ contains terms of the form $c_j\partial_{x_j}\psi_n$ for some constants $c_j\in {\mathbb{R}}$, which may depend on the indices $k$ and $l$ appearing on the left-hand side. Therefore,
\begin{align*}
(\partial_k\tilde w_n\circ\Psi_n)\Delta\Psi_n^k&=O\bigl((\partial\tilde w_n\circ\Psi_n)(\partial^2\psi_n)\bigr),\\
(\partial_{kl}\tilde w_n\circ\Psi_n)\partial_j\Psi_n^l\partial_j\Psi_n^k
&=\Delta\tilde w_n\circ\Psi_n+O\bigl(\bigl(\partial^2\tilde w_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr)
\end{align*}
and so
\begin{align*}
(i\partial_t+\Delta_{\Omega_n})&(\tilde w_n\circ \Psi_n)-(|\tilde w_n|^4\tilde w_n)\circ\Psi_n\\
&=[(i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n]\circ \Psi_n \\
&\quad+O\bigl((\partial\tilde w_n\circ\Psi_n)(\partial^2\psi_n)\bigr)+O\bigl(\bigl(\partial^2\tilde w_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr).
\end{align*}
By a change of variables and \eqref{detpsin}, we get
\begin{align}
\|(-\Delta_\Omega)^{\frac 12}&[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{L_t^1L_x^2((|t|\le \lambda_n^2T)\times\Omega)}\notag\\
&=\|(-\Delta_{\Omega_n})^{\frac12}[(i\partial_t+\Delta_{\Omega_n})(\tilde w_n\circ\Psi_n)-(|\tilde w_n|^4\tilde w_n)\circ \Psi_n]\|_{L_t^1L_x^2((|t|\le T)\times\Omega_n)}\notag\\
&\lesssim \|(-\Delta_{\Omega_n})^{\frac12}[((i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n)\circ\Psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\
&\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[(\partial\tilde w_n\circ \Psi_n)\partial^2\psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\
&\quad+\bigl\|(-\Delta_{\Omega_n})^{\frac 12}\bigl[(\partial^2\tilde w_n\circ\Psi_n)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr]\bigr\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\
&\lesssim \|\nabla[(i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n -|\tilde w_n|^4\tilde w_n]\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})}\label{n18}\\
&\quad+\|\nabla[(\partial \tilde w_n\circ\Psi_n)\partial^2\psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{n16}\\
&\quad+\bigl\|\nabla\bigl[(\partial^2 \tilde w_n\circ \Psi_n)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr]\bigr\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{n17}.
\end{align}
Using \eqref{cond4}, \eqref{psin}, and \eqref{detpsin}, we can control the last two terms as follows:
\begin{align*}
\eqref{n16}
&\lesssim\|(\partial\tilde w_n\circ\Psi_n)\partial^3\psi_n\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}+\|(\partial^2\tilde w_n\circ\Psi_n)\partial\Psi_n\partial^2\psi_n\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\
&\lesssim T\lambda_n^2\|\nabla \tilde w_n\|_{L_t^\infty L_x^2}+T\lambda_n\|\partial^2\tilde w_n\|_{L_t^\infty L_x^2}\\
&\lesssim T\lambda_n^2\bigl[\|\nabla \chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\nabla w_n\|_{L_t^\infty L_x^2}\bigr]\\
&\quad+T\lambda_n\bigl[\|\partial^2 \chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\nabla \chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L_t^\infty L_x^2}
+\|\partial^2w_n\|_{L_t^\infty L_x^2}\bigr]\\
&\lesssim T\lambda_n^2+T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]\to 0\qtq{as} n\to \infty
\end{align*}
and similarly,
\begin{align*}
\eqref{n17}
&\lesssim \|(\partial^2 \tilde w_n\circ\Psi_n)(\partial^2\psi_n+\partial\psi_n\partial^2\psi_n)\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\
&\quad+ \|(\partial^3\tilde w_n\circ\Psi_n)[\partial\Psi_n(\partial\psi_n+(\partial\psi_n)^2)]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\
&\lesssim T[\lambda_n+\lambda_n^{2-2\theta}] \|\partial^2\tilde w_n\|_{L_t^\infty L_x^2}+T[\lambda_n^{1-2\theta}+\lambda_n^{2-4\theta}]\|\partial^3\tilde w_n\|_{L_t^\infty L_x^2}\\
&\lesssim T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]+T\lambda_n^{1-2\theta}\bigl[\|\partial^3\chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\partial^2\chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L^2_x}\\
&\quad+\|\nabla\chi_n\|_{L_x^\infty}\|\partial^2w_n\|_{L_t^\infty L^2_x}+\|\partial^3 w_n\|_{L_t^\infty L^2_x}\bigr]\\
&\lesssim T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]+T\lambda_n^{1-2\theta}\bigl[L_n^{-2}+L_n^{-1}\lambda_n^{-\theta}+\lambda_n^{-2\theta}\bigr]\to 0\qtq{as} n\to \infty.
\end{align*}
Finally, we consider \eqref{n18}. A direct computation gives
\begin{align*}
(i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n=(\chi_n-\chi_n^5)|w_n|^4w_n+2\nabla\chi_n\cdot\nabla w_n+\Delta\chi_n w_n.
\end{align*}
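This identity is simply the product rule combined with the equation satisfied by $w_n$; schematically (a routine check recorded for convenience),
\begin{align*}
(i\partial_t+\Delta_{{\mathbb{H}}})(\chi_n w_n)
&=\chi_n\bigl[(i\partial_t+\Delta_{{\mathbb{H}}})w_n\bigr]+2\nabla\chi_n\cdot\nabla w_n+\Delta\chi_n\, w_n\\
&=\chi_n|w_n|^4w_n+2\nabla\chi_n\cdot\nabla w_n+\Delta\chi_n\, w_n,
\end{align*}
while $|\tilde w_n|^4\tilde w_n=\chi_n^5|w_n|^4w_n$, which gives the display above.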
We then bound each term as follows:
\begin{align*}
\|\nabla(\Delta \chi_n w_n)\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})}
&\lesssim T\bigl[ \|\partial^3\chi_n\|_{L^3_x}\|w_n\|_{L^\infty_t L^6_x}+\|\partial^2 \chi_n\|_{L^\infty_x} \|\nabla w_n\|_{L^\infty_t L^2_x} \bigr] \\
&\lesssim TL_n^{-2} \to 0 \qtq{as} n\to \infty\\
\|\nabla(\nabla\chi_n\cdot \nabla w_n)\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})}
&\lesssim T\bigl[ \|\partial^2 \chi_n\|_{L^\infty_x} \|\nabla w_n\|_{L^\infty_t L^2_x} + \|\nabla\chi_n\|_{L^\infty_x} \|\partial^2 w_n\|_{L^\infty_t L^2_x}\bigr]\\
&\lesssim T[L_n^{-2}+L_n^{-1}\lambda_n^{-\theta}] \to 0 \qtq{as} n\to \infty.
\end{align*}
Finally, for the first term, we have
\begin{align*}
\| & \nabla[(\chi_n-\chi_n^5)|w_n|^4w_n]\|_{\dot N^0([-T,T]\times{\mathbb{H}})}\\
&\lesssim \|(\chi_n-\chi_n^5) |w_n|^4\nabla w_n\|_{L_{t,x}^{\frac{10}7}([-T,T]\times{\mathbb{H}})}+\| |w_n|^5\nabla \chi_n\|_{L_t^{\frac53}L_x^{\frac{30}{23}}([-T,T]\times{\mathbb{H}})}\\
&\lesssim \|w_n 1_{|x|\sim L_n}\|_{L^{10}_{t,x}}^4 \|\nabla w_n\|_{L^{\frac{10}3}_{t,x}}
+ \|\nabla \chi_n\|_{L^3_x} \|w_n 1_{|x|\sim L_n}\|_{L^{10}_{t,x}}^4 \|\nabla w_n\|_{L^5_t L^\frac{30}{11}_x}\\
&\lesssim \|1_{|x|\sim L_n}w_\infty \|_{L^{10}_{t,x}}^4+\|w_\infty-w_n\|_{L^{10}_{t,x}}^4 \to 0 \qtq{as} n\to \infty.
\end{align*}
This completes the proof of \eqref{n14}.
\textbf{Step 5:} Constructing $v_n$ and approximating by $C_c^{\infty}$ functions.
Using \eqref{tildevn4}, \eqref{match4}, and \eqref{n14}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ large enough we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ and
\begin{align*}
\|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1.
\end{align*}
Moreover,
\begin{align}\label{n19}
\lim_{T\to\infty}\limsup_{n\to\infty}\|v_n(t-\lambda_n^2t_n)-\tilde v_n(t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0.
\end{align}
It remains to prove the approximation result \eqref{ap4}. By the density of $C_c^{\infty}({\mathbb{R}}\times{\mathbb{H}})$ in $\dot X^1({\mathbb{R}}\times{\mathbb{H}})$, for
every ${\varepsilon}>0$ there exists $\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{H}})$ such that
\begin{align*}
\|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{H}})}<\tfrac {\varepsilon} 3.
\end{align*}
This together with \eqref{n19} reduces matters to showing
\begin{align}\label{c4e3}
\|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t, \lambda_n^{-1}R_n^{-1}(x-x_n^*))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac{\varepsilon}3
\end{align}
for $n,\ T$ sufficiently large. A change of variables shows that
\begin{align*}
\text{LHS\eqref{c4e3}}&\lesssim \|\tilde w_n\circ \Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)} \\
& \quad +\|e^{i(t-T)\Delta_{\Omega_n}}(\tilde w_n(T)\circ\Psi_n)-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\
&\quad+\|e^{i(t+T)\Delta_{\Omega_n}}(\tilde w_n(-T)\circ\Psi_n)-w_\infty\|_{\dot X^1((-\infty,-T)\times{\mathbb{R}}^3)}.
\end{align*}
The first term can be controlled as follows:
\begin{align*}
\|\tilde w_n\circ\Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}
& \lesssim \| (\chi_n w_\infty)\circ\Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}\\
&\quad +\|[\chi_n(w_n-w_\infty)]\circ\Psi_n\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)},
\end{align*}
which converges to zero as $n\to\infty$ by \eqref{cond4} and the fact that $\Psi_n(x)\to x$ in $C^1$.
Similarly, we can use the Strichartz inequality to replace $\tilde w_n(T)\circ \Psi_n$ by $w_\infty(T)$ in the second term by making a $o(1)$ error as $n\to \infty$. Then we can use the convergence of propagators result Theorem~\ref{T:LF} to replace $e^{i(t-T)\Delta_{\Omega_n}}$ by $e^{i(t-T)\Delta_{{\mathbb{H}}}}$ with an additional $o(1)$ error. It then suffices to show
\begin{align*}
\|e^{i(t-T)\Delta_{{\mathbb{H}}}}w_\infty(T)-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\to 0 \qtq{as}T\to \infty,
\end{align*}
which follows from the fact that $w_\infty$ scatters forward in time, just as in the proofs of Theorems~\ref{T:embed2} and~\ref{T:embed3}. The treatment of the third term is similar. This completes the proof of \eqref{ap4} and so the proof of Theorem~\ref{T:embed4}.
\end{proof}
\section{Palais--Smale and the proof of Theorem~\ref{T:main}}\label{S:Proof}
In this section we prove a Palais--Smale condition for minimizing sequences of blowup solutions to \eqref{nls}. This will allow us to conclude that failure of Theorem~\ref{T:main} would imply the existence of special counterexamples that are almost periodic. At the end of this section, we rule out these almost periodic solutions by employing a spatially truncated (one-particle) Morawetz inequality in the style of \cite{borg:scatter}. This will complete the proof of Theorem~\ref{T:main}.
We first define operators $T_n^j$ on general functions of spacetime. These act on linear solutions in a manner corresponding to the action of $G_n^j \exp\{it_n^j\Delta_{\Omega_n^j}\}$ on initial data in Theorem~\ref{T:LPD}. As in that theorem, the exact definition depends on the case to which
the index $j$ conforms. In Cases~1, 2,~and~3, we define
\begin{align*}
(T_n^j f)(t,x) :=(\lambda_n^j)^{-\frac 12}f\bigl((\lambda_n^j)^{-2} t+t_n^j, (\lambda_n^j)^{-1}(x-x_n^j)\bigr).
\end{align*}
In Case 4, we define
\begin{align*}
(T_n^j f)(t,x):=(\lambda_n^j)^{-\frac12}f\bigl((\lambda_n^j)^{-2}t+t_n^j, (\lambda_n^j)^{-1}(R_n^j)^{-1}(x-(x_n^j)^*)\bigr).
\end{align*}
Here, the parameters $\lambda_n^j, t_n^j, x_n^j, (x_n^j)^*$, and $R_n^j$ are as defined in Theorem~\ref{T:LPD}. Using the asymptotic orthogonality
condition \eqref{E:LP5}, it is not hard to prove the following
\begin{lem}[Asymptotic decoupling]\label{L:ortho} Suppose that the parameters associated to $j,k$ are orthogonal in the sense
of \eqref{E:LP5}. Then for any $\psi^j, \psi^k\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$,
\begin{align*}
\|T_n^j\psi^j T_n^k\psi^k\|_{L_{t,x}^5({\mathbb{R}}\times{\mathbb{R}}^3)}+\|T_n^j\psi^j \nabla(T_n^k\psi^k)\|_{L_{t,x}^{\frac 52}({\mathbb{R}}\times{\mathbb{R}}^3)}
+\|\nabla(T_n^j\psi^j)\nabla (T_n^k\psi^k)\|_{L_{t,x}^{\frac53}({\mathbb{R}}\times{\mathbb{R}}^3)}
\end{align*}
converges to zero as $n\to\infty$.
\end{lem}
\begin{proof}
From a change of variables, we get
\begin{align*}
\|T_n^j&\psi^jT_n^k \psi^k\|_{L_{t,x}^5} + \|T_n^j\psi^j\nabla(T_n^k\psi^k)\|_{L_{t,x}^{\frac 52}}
+\|\nabla(T_n^j\psi^j)\nabla (T_n^k\psi^k)\|_{L_{t,x}^{\frac53}}\\
&= \|\psi^j (T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^5}+\|\psi^j\nabla(T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^{\frac 52}}
+\|\nabla \psi^j\nabla (T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^{\frac53}},
\end{align*}
where all spacetime norms are over ${\mathbb{R}}\times{\mathbb{R}}^3$. Depending on the cases to which $j$ and $k$ conform, $(T_n^j)^{-1}T_n^k$ takes one of the following forms:
\begin{CI}
\item Case a): $j$ and $k$ each conform to one of Cases 1, 2, or 3.
\begin{align*}
[(T_n^j)^{-1}T_n^k\psi^k](t,x)=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}
\psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr),
\tfrac{\lambda_n^j}{\lambda_n^k}\bigl( x - \tfrac{x_n^k-x_n^j}{\lambda_n^j}\bigr) \Bigr).
\end{align*}
\item Case b): $j$ conforms to Case 1, 2, or 3 and $k$ conforms to Case 4.
\begin{align*}
[(T_n^j)^{-1}&T_n^k\psi^k](t,x) \\
&=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}
\psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr),
\tfrac{\lambda_n^j}{\lambda_n^k} (R_n^k)^{-1}\bigl( x - \tfrac{(x_n^k)^*-x_n^j}{\lambda_n^j}\bigr) \Bigr).
\end{align*}
\item Case c): $j$ conforms to Case 4 and $k$ to Case 1, 2, or 3.
\begin{align*}
[(T_n^j)^{-1}T_n^k\psi^k](t,x)=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}
\psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr),
\tfrac{\lambda_n^j}{\lambda_n^k} \bigl( R_n^j x - \tfrac{x_n^k-(x_n^j)^*}{\lambda_n^j}\bigr) \Bigr).
\end{align*}
\item Case d): Both $j$ and $k$ conform to Case 4.
\begin{align*}
[(T_n^j&)^{-1}T_n^k\psi^k](t,x) \\
&=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}
\psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr),
\tfrac{\lambda_n^j}{\lambda_n^k}(R_n^k)^{-1}\bigl( R_n^j x - \tfrac{(x_n^k)^*-(x_n^j)^*}{\lambda_n^j}\bigr) \Bigr).
\end{align*}
\end{CI}
We only present the details for decoupling in the $L_{t,x}^5$ norm; the argument for decoupling in the other norms is very similar.
We first assume $\frac{\lambda_n^j}{\lambda_n^k}+\frac{\lambda_n^k}{\lambda_n^j}\to\infty$. Using H\"older and a change of
variables, we estimate
\begin{align*}
\|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}}
&\le\min\bigl\{\|\psi^j\|_{L^\infty_{t,x}}\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}},\ \|\psi^j\|_{L^5_{t,x}}\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^\infty_{t,x}}\bigr\}\\
&\lesssim \min\Bigl\{ \bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{-\frac12}, \bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}\Bigr\}
\to 0 \qtq{as} n\to \infty.
\end{align*}
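For completeness, the two factors in the minimum come from the following change-of-variables computation (a routine check, recorded only for bookkeeping): writing $a_n:=\tfrac{\lambda_n^j}{\lambda_n^k}$,
\begin{align*}
\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}}=a_n^{\frac12}\,a_n^{-\frac{2+3}{5}}\|\psi^k\|_{L^5_{t,x}}=a_n^{-\frac12}\|\psi^k\|_{L^5_{t,x}}
\qtq{and}
\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^\infty_{t,x}}=a_n^{\frac12}\|\psi^k\|_{L^\infty_{t,x}}.
\end{align*}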
Henceforth, we may assume $\frac{\lambda_n^j}{\lambda_n^k}\to \lambda_0\in (0,\infty)$.
If $\frac{|t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2|}{\lambda_n^k\lambda_n^j}\to \infty$, it is easy to see
that the temporal supports of $\psi^j$ and $(T_n^j)^{-1}T_n^k\psi^k$ are disjoint for $n$ sufficiently large. Hence
\begin{align*}
\lim_{n\to \infty}\|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}} = 0.
\end{align*}
The only case left is when
\begin{align}\label{s13}
\tfrac{\lambda_n^j}{\lambda_n^k}\to\lambda_0, \quad \tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{\lambda_n^k\lambda_n^j} \to t_0,
\qtq{and} \tfrac{|x_n^j-x_n^k|}{\sqrt{\lambda_n^j\lambda_n^k}}\to\infty.
\end{align}
In this case we will verify that the spatial supports of $\psi^j$ and $(T_n^j)^{-1}T_n^k\psi^k$ are disjoint for $n$ sufficiently large.
Indeed, in Case a),
\begin{align*}
\tfrac{|x_n^j-x_n^k|}{\lambda_n^j}=\tfrac{|x_n^j-x_n^k|}{\sqrt{\lambda_n^j\lambda_n^k}} \sqrt{\tfrac{\lambda_n^k}{\lambda_n^j}}\to \infty
\qtq{as} n\to\infty.
\end{align*}
In Case b), for $n$ sufficiently large we have
\begin{align*}
\tfrac{|x_n^j-(x_n^k)^*|}{\lambda_n^j}
&\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^k-(x_n^k)^*|}{\lambda_n^j}
\ge\tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2\tfrac{d^k_\infty}{\lambda_0},
\end{align*}
which converges to infinity as $n\to \infty$. In Case c), for $n$ sufficiently large we have
\begin{align*}
\tfrac{|(x_n^j)^*-x_n^k|}{\lambda_n^j}
&\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^j-(x_n^j)^*|}{\lambda_n^j}
\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2d^j_{\infty},
\end{align*}
which converges to infinity as $n\to \infty$. Finally, in Case d) for $n$ sufficiently large,
\begin{align*}
\tfrac{|(x_n^j)^*-(x_n^k)^*|}{\lambda_n^j}
&\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^j-(x_n^j)^*|}{\lambda_n^j} -\tfrac{|x_n^k-(x_n^k)^*|}{\lambda_n^j}
\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2d_\infty^j-2\tfrac{d^k_\infty}{\lambda_0},
\end{align*}
which converges to infinity as $n\to\infty$. Thus, in all cases,
\begin{align*}
\lim_{n\to \infty}\|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}} = 0.
\end{align*}
This completes the proof of the lemma.
\end{proof}
Theorem~\ref{T:main} claims that for any initial data $u_0\in \dot H^1_D(\Omega)$ there is a global solution $u:{\mathbb{R}}\times\Omega\to {\mathbb{C}}$ to
\eqref{nls} with $S_{\mathbb{R}}(u)\leq C(\|u_0\|_{\dot H^1_D(\Omega)})$. Recall that for a time interval~$I$, the scattering size of $u$ on $I$ is given by
\begin{align*}
S_I(u)=\iint_{I\times\Omega}|u(t,x)|^{10} \,dx\,dt.
\end{align*}
Supposing that Theorem~\ref{T:main} failed, there would be a critical energy $0<E_c<\infty$ so that
\begin{align}\label{LofE}
L(E)<\infty \qtq{for} E<E_c \qtq{and} L(E)=\infty \qtq{for} E\geq E_c.
\end{align}
Recall from the introduction that $L(E)$ is the supremum of $S_I(u)$ over all solutions $u:I\times\Omega\to {\mathbb{C}}$ with $E(u)\leq E$ and defined on any interval $I\subseteq {\mathbb{R}}$.
The positivity of $E_c$ follows from small data global well-posedness. Indeed, the argument proves the stronger statement
\begin{align}\label{SbyE}
\|u\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim E(u_0)^{\frac12} \quad \text{for all data with } E(u_0)\leq \eta_0,
\end{align}
where $\eta_0$ denotes the small data threshold. Recall $\dot X^1= L_{t,x}^{10} \cap L_t^5\dot H^{1,\frac{30}{11}}$.
Using the induction on energy argument together with \eqref{LofE} and the stability result Theorem~\ref{T:stability}, we now prove a compactness result for optimizing sequences of blowup solutions.
\begin{prop}[Palais--Smale condition]\label{P:PS}
Let $u_n: I_n\times\Omega\to {\mathbb{C}}$ be a sequence of solutions with $E(u_n)\to E_c$, for which there is a sequence of times $t_n\in I_n$
so that
\begin{align*}
\lim_{n\to\infty} S_{\ge t_n}(u_n)=\lim_{n\to\infty}S_{\le t_n}(u_n)=\infty.
\end{align*}
Then the sequence $u_n(t_n)$ has a subsequence that converges strongly in $\dot H^1_D(\Omega)$.
\end{prop}
\begin{proof}
Using the time translation symmetry of \eqref{nls}, we may take $t_n\equiv0$ for all $n$; thus,
\begin{align}\label{scat diverge}
\lim_{n\to\infty} S_{\ge 0}(u_n)=\lim_{n\to \infty} S_{\le 0} (u_n)=\infty.
\end{align}
Applying Theorem~\ref{T:LPD} to the bounded sequence $u_n(0)$ in $\dot H^1_D(\Omega)$ and passing to a subsequence if necessary, we
obtain the linear profile decomposition
\begin{align}\label{s0}
u_n(0)=\sum_{j=1}^J \phi_n^j+w_n^J
\end{align}
with the properties stated in that theorem. In particular, for any finite $0\leq J \leq J^*$ we have the energy decoupling condition
\begin{align}\label{s01}
\lim_{n\to \infty}\Bigl\{E(u_n)-\sum_{j=1}^J E(\phi_n^j)-E(w_n^J)\Bigr\}=0.
\end{align}
To prove the proposition, we need to show that $J^*=1$, that $w_n^1\to 0$ in $\dot H^1_D(\Omega)$, that the only profile $\phi^1_n$ conforms to Case~1, and that $t_n^1\equiv 0$. All other possibilities will be shown to contradict \eqref{scat diverge}. We discuss two scenarios:
\textbf{Scenario 1:} $\sup_j \limsup_{n\to \infty} E(\phi_n^j) =E_c$.
From the non-triviality of the profiles, we have $\liminf_{n\to \infty} E(\phi_n^j)>0$ for every finite $1\leq j\leq J^*$; indeed,
$\|\phi_n^j\|_{\dot H^1_D(\Omega)}$ converges to $\|\phi^j\|_{\dot H^1}$. Thus, passing to a subsequence, \eqref{s01} implies that there is a single profile in the decomposition \eqref{s0} (that is, $J^*=1$) and we can write
\begin{equation}\label{s11}
u_n(0)=\phi_n +w_n \qtq{with} \lim_{n\to \infty} \|w_n\|_{\dot H_D^1(\Omega)}=0.
\end{equation}
If $\phi_n$ conforms to Cases 2, 3, or 4, then by Theorems~\ref{T:embed2}, \ref{T:embed3}, or \ref{T:embed4}, there are global solutions $v_n$ to $\text{NLS}_\Omega$ with data $v_n(0)=\phi_n$ that admit a uniform spacetime bound. By Theorem~\ref{T:stability}, this spacetime bound extends to the solutions $u_n$ for $n$ large enough. However, this contradicts \eqref{scat diverge}. Therefore, $\phi_n$ must conform to Case~1 and \eqref{s11} becomes
\begin{equation}\label{s11'}
u_n(0)=e^{it_n\lambda_n^2\Delta_\Omega}\phi +w_n \qtq{with} \lim_{n\to \infty} \|w_n\|_{\dot H_D^1(\Omega)}=0
\end{equation}
and $t_n\equiv 0$ or $t_n\to \pm \infty$. If $t_n\equiv 0$, then we obtain the desired compactness. Thus, we only need to preclude that $t_n\to\pm\infty$.
Let us suppose $t_n\to \infty$; the case $t_n\to -\infty$ can be treated symmetrically. In this case, the Strichartz inequality and the monotone convergence theorem yield
\begin{align*}
S_{\ge 0}(e^{it\Delta_\Omega}u_n(0))=S_{\ge 0}(e^{i(t+t_n\lambda_n^2)\Delta_{\Omega}}\phi+e^{it\Delta_{\Omega}}w_n) \to 0 \qtq{as} n\to \infty.
\end{align*}
By the small data theory, this implies that $S_{\geq 0}(u_n)\to 0$, which contradicts \eqref{scat diverge}.
\textbf{Scenario 2:} $\sup_j \limsup_{n\to \infty} E(\phi_n^j) \leq E_c-2\delta$ for some $\delta>0$.
We first observe that for each finite $J\leq J^*$ we have $E(\phi_n^j) \leq E_c-\delta$ for all $1\leq j\leq J$ and $n$ sufficiently large. This is important for constructing global nonlinear profiles for $j$ conforming to Case~1, via the induction on energy hypothesis~\eqref{LofE}.
If $j$ conforms to Case~1 and $t_n^j\equiv 0$, we define $v^j:I^j\times\Omega\to {\mathbb{C}}$ to be the maximal-lifespan solution to
\eqref{nls} with initial data $v^j(0)=\phi^j$. If instead $t_n^j\to \pm \infty$, we define $v^j:I^j\times\Omega\to {\mathbb{C}}$ to be the
maximal-lifespan solution to \eqref{nls} which scatters to $e^{it\Delta_\Omega}\phi^j$ as $t\to \pm\infty$.
Now define $v_n^j(t,x):=v^j(t+t_n^j(\lambda_n^j)^2,x)$. Then $v_n^j$ is also a solution to \eqref{nls} on the time interval
$I_n^j:=I^j-\{t_n^j(\lambda_n^j)^2\}$. In particular, for $n$ sufficiently large we have $0\in I_n^j$ and
\begin{align}\label{bb1}
\lim_{n\to\infty}\|v_n^j(0)-\phi_n^j\|_{\dot H^1_D(\Omega)}=0.
\end{align}
Combining this with $E(\phi_n^j) \leq E_c-\delta$ and \eqref{LofE}, we deduce that for $n$ sufficiently large, $v_n^j$ (and also $v^j$) are global solutions that obey
\begin{align*}
S_{\mathbb{R}}(v^j)=S_{\mathbb{R}}(v_n^j)\le L(E_c-\delta)<\infty.
\end{align*}
Combining this with the Strichartz inequality shows that all Strichartz norms of $v_n^j$ are finite and, in particular, the $\dot X^1$ norm.
This allows us to approximate $v_n^j$ in $\dot X^1({\mathbb{R}}\times\Omega)$ by $C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ functions. More precisely, for any ${\varepsilon}>0$ there exist $N_{\varepsilon}^j\in {\mathbb{N}}$ and $\psi^j_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ so that for $n\geq N_{\varepsilon}^j$ we have
\begin{align}\label{ap case1}
\|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}.
\end{align}
Specifically, choosing $\tilde\psi_{\varepsilon}^j\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that
\begin{align*}
\|v^j-\tilde \psi_{\varepsilon}^j\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac {\varepsilon}2, \qtq{we set} \psi_{\varepsilon}^j(t,x):=(\lambda_\infty^j)^{\frac 12}\tilde\psi_{\varepsilon}^j
\bigl((\lambda_\infty^j)^2 t, \lambda_\infty^j x+x_\infty^j\bigr).
\end{align*}
When $j$ conforms to Cases 2, 3, or 4, we apply the nonlinear embedding theorems of the previous section to construct the nonlinear profiles. More precisely, let $v_n^j$ be the global solutions to $\text{NLS}_\Omega$ constructed in Theorems~\ref{T:embed2}, \ref{T:embed3}, or \ref{T:embed4}, as appropriate. In particular, these $v_n^j$ also obey \eqref{ap case1} and $\sup_{n,j} S_{\mathbb{R}}(v_n^j)<\infty$.
In all cases, we may use \eqref{SbyE} together with our bounds on the spacetime norms of $v_n^j$ and the finiteness of $E_c$ to deduce
\begin{align}\label{s2}
\|v_n^j\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c, \delta} E(\phi_n^j)^{\frac12} \lesssim_{E_c, \delta}1.
\end{align}
Combining this with \eqref{s01} we deduce
\begin{align}\label{s2lim}
\limsup_{n\to \infty} \sum_{j=1}^J \|v_n^j\|_{\dot X^1({\mathbb{R}}\times\Omega)}^2
\lesssim_{E_c, \delta} \limsup_{n\to \infty} \sum_{j=1}^J E(\phi_n^j) \lesssim_{E_c,\delta} 1,
\end{align}
uniformly for finite $J\leq J^*$.
The asymptotic orthogonality condition \eqref{E:LP5} gives rise to asymptotic decoupling of the nonlinear profiles.
\begin{lem}[Decoupling of nonlinear profiles] \label{L:npd} For $j\neq k$ we have
\begin{align*}
\lim_{n\to \infty} \|v_n^j v_n^k\|_{L_{t,x}^5({\mathbb{R}}\times\Omega)} +\|v_n^j \nabla v_n^k\|_{L_{t,x}^{\frac52}({\mathbb{R}}\times\Omega)}
+\|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}({\mathbb{R}}\times\Omega)}=0.
\end{align*}
\end{lem}
\begin{proof}
Recall that for any ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and $\psi_{\varepsilon}^j,\psi_{\varepsilon}^k\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ so that
\begin{align*}
\|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)} + \|v_n^k - T_n^k\psi^k_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}.
\end{align*}
Thus, using \eqref{s2} and Lemma~\ref{L:ortho} we get
\begin{align*}
\|v_n^j v_n^k\|_{L_{t,x}^5} &\leq \|v_n^j(v_n^k-T_n^k\psi_{{\varepsilon}}^k)\|_{L_{t,x}^5}+\|(v_n^j-T_n^j\psi_{\varepsilon}^j)T_n^k\psi_{{\varepsilon}}^k\|_{L_{t,x}^5}
+\|T_n^j\psi_{\varepsilon}^j\, T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^5}\\
&\lesssim \|v^j_n\|_{\dot X^1} \|v_n^k - T_n^k\psi^k_{\varepsilon}\|_{\dot X^1} + \|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1} \|\psi_{\varepsilon}^k\|_{\dot X^1} + \|T_n^j\psi_{\varepsilon}^j\, T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^5}\\
&\lesssim_{E_c,\delta} {\varepsilon} + o(1) \qtq{as}n\to \infty.
\end{align*}
As ${\varepsilon}>0$ was arbitrary, this proves the first asymptotic decoupling statement.
The second decoupling statement follows analogously. For the third assertion, a little care has to be used to estimate the error terms, due to the asymmetry of the spacetime norm and to the restrictions placed by Theorem~\ref{T:Sob equiv}. Using the same argument as above and interpolation, we estimate
\begin{align*}
\|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}
&\leq \|\nabla v_n^j(\nabla v_n^k-\nabla T_n^k\psi_{{\varepsilon}}^k)\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}+\|(\nabla v_n^j-\nabla T_n^j\psi_{\varepsilon}^j)\nabla T_n^k\psi_{{\varepsilon}}^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\
&\quad +\|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\
&\lesssim_{E_c,\delta}{\varepsilon} + \|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^{\frac53}}^{\frac23}\|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_t^\infty L_x^1}^{\frac13}\\
&\lesssim_{E_c,\delta}{\varepsilon} + \|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^{\frac53}}^{\frac23}\|\nabla\psi_{\varepsilon}^j\|_{L_x^2}^{\frac13}\|\nabla\psi_{\varepsilon}^k\|_{L_x^2}^{\frac13}\\
&\lesssim_{E_c,\delta}{\varepsilon} + o(1) \qtq{as}n\to \infty,
\end{align*}
where we used Lemma~\ref{L:ortho} in the last step. As ${\varepsilon}>0$ was arbitrary, this proves the last decoupling statement.
\end{proof}
As a consequence of this decoupling we can bound the sum of the nonlinear profiles in $\dot X^1$, as follows:
\begin{align}\label{sum vnj}
\limsup_{n\to \infty} \Bigl\|\sum_{j=1}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c,\delta}1 \quad\text{uniformly for finite $J\leq J^*$}.
\end{align}
Indeed, by Young's inequality, \eqref{s2}, \eqref{s2lim}, and Lemma~\ref{L:npd},
\begin{align*}
S_{\mathbb{R}}\Bigl(\sum_{j=1}^J v_n^j\Bigr)
&\lesssim \sum_{j=1}^J S_{\mathbb{R}}(v_n^j)+J^8\sum_{j\neq k}\iint_{{\mathbb{R}}\times\Omega}|v_n^j||v_n^k|^9 \,dx\,dt\\
&\lesssim_{E_c,\delta}1 + J^8 \sum_{j\neq k}\|v_n^jv_n^k\|_{L_{t,x}^5}\|v_n^k\|_{L_{t,x}^{10}}^8\\
&\lesssim_{E_c,\delta}1 + J^8 o(1) \qtq{as} n\to \infty.
\end{align*}
Similarly,
\begin{align*}
\Bigl\|\sum_{j=1}^J \nabla v_n^j\Bigr\|_{L_t^5L_x^{\frac{30}{11}}}^2 &= \Bigl\|\Bigl(\sum_{j=1}^J \nabla v_n^j\Bigr)^2\Bigr\|_{L_t^{\frac52}L_x^{\frac{15}{11}}}\lesssim \sum_{j=1}^J \|\nabla v_n^j\|_{L_t^5L_x^{\frac{30}{11}}}^2 + \sum_{j\neq k} \|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\
&\lesssim_{E_c,\delta}1 + o(1)\qtq{as} n\to \infty.
\end{align*}
This completes the proof of \eqref{sum vnj}. The same argument combined with \eqref{s01} shows that given $\eta>0$, there exists $J'=J'(\eta)$ such that
\begin{align}\label{sum vnj tail}
\limsup_{n\to \infty} \Bigl\|\sum_{j=J'}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}\leq \eta \quad \text{uniformly in $J\geq J'$}.
\end{align}
Now we are ready to construct an approximate solution to $\text{NLS}_\Omega$. For each $n$ and $J$, we define
\begin{align*}
u_n^J:=\sum_{j=1}^J v_n^j+e^{it\Delta_{\Omega}}w_n^J.
\end{align*}
Obviously $u_n^J$ is defined globally in time. In order to apply Theorem~\ref{T:stability}, it suffices to verify the following
three claims for $u_n^J$:
Claim 1: $\|u_n^J(0)-u_n(0)\|_{\dot H^1_D(\Omega)}\to 0$ as $n\to \infty$ for any $J$.
Claim 2: $\limsup_{n\to \infty} \|u_n^J\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c, \delta} 1$ uniformly in $J$.
Claim 3: $\lim_{J\to\infty}\limsup_{n\to\infty}\|(i\partial_t+\Delta_{\Omega})u_n^J-|u_n^J|^4u_n^J\|_{\dot N^1({\mathbb{R}}\times\Omega)}=0$.
The three claims imply that for sufficiently large $n$ and $J$, $u_n^J$ is an approximate solution to \eqref{nls} with finite scattering size, which asymptotically matches $u_n(0)$ at time $t=0$. Using the stability result Theorem~\ref{T:stability} we see that for $n, J$ sufficiently large, the solution $u_n$ inherits\footnote{In fact, we obtain a nonlinear profile decomposition for the sequence of solutions $u_n$ with an error that goes to zero in $L^{10}_{t,x}$.} the spacetime bounds of $u_n^J$, thus contradicting \eqref{scat diverge}. Therefore, to complete the treatment of the second scenario, it suffices to verify the three claims above.
The first claim follows trivially from \eqref{s0} and \eqref{bb1}. To derive the second claim, we use \eqref{sum vnj} and the Strichartz inequality, as follows:
\begin{align*}
\limsup_{n\to \infty}\|u_n^J\|_{\dot X^1({\mathbb{R}}\times\Omega)}&\lesssim \limsup_{n\to \infty}\Bigl\|\sum_{j=1}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}+\limsup_{n\to \infty}\|w_n^J\|_{\dot H^1_D(\Omega)}\lesssim_{E_c,\delta}1.
\end{align*}
Next we verify the third claim. Adopting the notation $F(z)=|z|^4 z$, and using that each $v_n^j$ solves \eqref{nls} while the free evolution term is annihilated by $i\partial_t+\Delta_{\Omega}$, a direct computation gives
\begin{align}
(i\partial_t+\Delta_\Omega)u_n^J-F(u_n^J)
&=\sum_{j=1}^JF(v_n^j)-F(u_n^J)\notag\\
&=\sum_{j=1}^J F(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)+F\bigl(u_n^J-e^{it\Delta_{\Omega}}w_n^J\bigr)-F(u_n^J)\label{s6}.
\end{align}
Taking the derivative, we estimate
\begin{align*}
\biggl|\nabla\biggl\{ \sum_{j=1}^JF(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)\biggr\} \biggr|
\lesssim_{J} \sum_{j\neq k}|\nabla v_n^j||v_n^k|^4+|\nabla v_n^j||v_n^j|^3|v_n^k|
\end{align*}
and hence, using \eqref{s2} and Lemma~\ref{L:npd},
\begin{align*}
\biggl\|\nabla\biggl\{ \sum_{j=1}^JF(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)\biggr\}\biggr\|_{\dot N^0({\mathbb{R}}\times \Omega)}
&\lesssim_{J} \sum_{j\neq k} \bigl\| |\nabla v_n^j| |v_n^k|^4 + |\nabla v_n^j||v_n^j|^3|v_n^k| \bigr\|_{L^{\frac {10}7}_{t,x}} \\
&\lesssim_{J} \sum_{j\neq k} \bigl\|\nabla v_n^j v_n^k\bigr\|_{L^{\frac 52}_{t,x}}\bigl[\|v_n^k\|_{L^{10}_{t,x}}^3 +\|v_n^j\|_{L^{10}_{t,x}}^3\bigr]\\
&\lesssim_{J,E_c,\delta} o(1) \qtq{as} n\to \infty.
\end{align*}
Thus, using the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}, we obtain
\begin{equation}\label{s7}
\lim_{J\to \infty}\limsup_{n\to \infty} \biggl\| \sum_{j=1}^J F(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr) \biggr\|_{\dot N^1({\mathbb{R}}\times \Omega)}=0.
\end{equation}
We now turn to estimating the second difference in \eqref{s6}. We will show
\begin{equation}\label{s8}
\lim_{J\to \infty} \limsup_{n\to \infty} \bigl\| F(u_n^J-e^{it\Delta_{\Omega}}w_n^J)-F(u_n^J) \bigr\|_{\dot N^1({\mathbb{R}}\times \Omega)}=0.
\end{equation}
By the equivalence of Sobolev spaces, it suffices to estimate the usual gradient of the difference in dual Strichartz spaces. Taking the derivative, we get
\begin{align*}
\bigl|\nabla \bigl\{F\bigl(u_n^J-e^{it\Delta_{\Omega}}w_n^J\bigr)-F(u_n^J)\bigr\}\bigr|
&\lambdaesssim \sum_{k=0}^3 |\nabla u_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k} \\
&\quad + \sum_{k=0}^4 |\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}.
\end{align*}
Using H\"older and the second claim, we obtain
\begin{align*}
\sum_{k=0}^3\bigl\||\nabla u_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}\bigr\|_{L^{\frac53}_t L^{\frac{30}{23}}_x}
&\lesssim \sum_{k=0}^3\|\nabla u_n^J\|_{L^5_tL^{\frac {30}{11}}_x} \|u_n^J\|_{L_{t,x}^{10}}^k \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{4-k}\\
&\lesssim_{E_c, \delta} \sum_{k=0}^3\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{4-k},
\end{align*}
which converges to zero as $n,J\to \infty$ by \eqref{E:LP1}. The same argument gives
\begin{align*}
\lim_{J\to \infty}\limsup_{n\to \infty} \sum_{k=0}^3\bigl\||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}\bigr\|_{L^{\frac53}_t L^{\frac{30}{23}}_x}=0.
\end{align*}
This leaves us to prove
\begin{align}\label{708}
\lim_{J\to \infty}\limsup_{n\to \infty}\||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J |^4\|_{L_{t,x}^{\frac{10}7}}=0.
\end{align}
Using H\"older, the second claim, Theorem~\ref{T:Sob equiv}, and the Strichartz inequality, we get
\begin{align*}
\||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^4\|_{L_{t,x}^{\frac{10}7}}
&\lesssim \|u_n^J \nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_{t,x}^{\frac52}} \|u_n^J\|_{L_{t,x}^{10}}^3\\
&\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_{t,x}^{\frac52}} +\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}\\
&\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{\frac2{11}}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_t^{\frac92}L_x^{54}}^{\frac9{11}}\|\nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_t^5 L_x^{\frac{30}{11}}} \\
&\quad +\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}\\
&\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{\frac2{11}}+\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}.
\end{align*}
By \eqref{E:LP1}, the contribution of the first term to \eqref{708} is acceptable. We now turn to the second term.
By \eqref{sum vnj tail},
\begin{align*}
\limsup_{n\to \infty} \Bigl\| \Bigl(\sum_{j=J'}^J v_n^j\Bigr) \nabla e^{it\Delta_\Omega} w_n^J \Bigr\|_{L_{t,x}^{\frac52}}
&\lesssim\limsup_{n\to \infty}\Bigl\|\sum_{j=J'}^J v_n^j\Bigr\|_{\dot X^1}\|\nabla e^{it\Delta_\Omega} w_n^J\|_{L_t^5L_x^{\frac{30}{11}}}\lesssim_{E_c,\delta} \eta,
\end{align*}
where $\eta>0$ is arbitrary and $J'=J'(\eta)$ is as in \eqref{sum vnj tail}. Thus, proving \eqref{708} reduces to showing
\begin{align}\label{834}
\lim_{J\to\infty} \limsup_{n \to \infty}\| v_n^j \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}}=0 \qtq{for each} 1\leq j\leq J'.
\end{align}
To this end, fix $1\leq j\leq J'$. Let ${\varepsilon}>0$, $\psi_{\varepsilon}^j\in C^\infty_c({\mathbb{R}}\times{\mathbb{R}}^3)$ be as in \eqref{ap case1}, and let $R,T>0$ be such that
$\psi^j_{\varepsilon}$ is supported in the cylinder $[-T,T]\times\{|x|\leq R\}$. Then
$$
\supp(T_n^j \psi^j_{\varepsilon} ) \subseteq [(\lambda_n^j)^2 (-T-t_n^j), (\lambda_n^j)^2 (T-t_n^j)]\times\{|x -x_n^j|\leq \lambda_n^j R\}
$$
and $\|T_n^j \psi^j_{\varepsilon}\|_{L^\infty_{t,x}} \lesssim (\lambda_n^j)^{-\frac12} \| \psi^j_{\varepsilon}\|_{L^\infty_{t,x}}$. If $j$ conforms to Case~4,
then $x_n^j$ above should be replaced by $(x_n^j)^*$. Thus, using Corollary~\ref{C:Keraani3.7} we deduce that
\begin{align*}
\| (T_n^j \psi^j_{\varepsilon}) \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}}
&\lesssim T^{\frac{31}{180}} R^{\frac7{45}} \| \psi^j_{\varepsilon}\|_{L^\infty_{t,x}} \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}}
\| w_n^J \|_{\dot H^1_D(\Omega)}^{\frac{17}{18}} \\
&\lesssim_{\psi^j_{\varepsilon},E_c} \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}}.
\end{align*}
Combining this with \eqref{ap case1} and using Theorem~\ref{T:Sob equiv} and Strichartz, we deduce that
\begin{align*}
\| v_n^j \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}}
& \lesssim \| v^j_n - T_n^j \psi^j_{\varepsilon} \|_{\dot X^1} \| \nabla e^{it\Delta_\Omega} w_n^J \|_{L_t^5 L_x^{\frac{30}{11}}} +
C(\psi^j_{\varepsilon},E_c) \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}} \\
&\lesssim {\varepsilon} E_c + C(\psi^j_{\varepsilon},E_c) \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}}.
\end{align*}
Using \eqref{E:LP1} we get $\text{LHS\eqref{834}} \lesssim_{E_c} {\varepsilon}$. As ${\varepsilon}>0$ was arbitrary, this proves \eqref{834}.
This completes the proof of \eqref{708} and with it, the proof of \eqref{s8}. Combining \eqref{s7} and \eqref{s8} yields the third claim.
This completes the treatment of the second scenario and so the proof of the proposition.
\end{proof}
As an immediate consequence of the Palais--Smale condition, we obtain that the failure of Theorem~\ref{T:main} implies the existence
of almost periodic counterexamples:
\begin{thm}[Existence of almost periodic solutions]\label{T:mmbs}
Suppose Theorem \ref{T:main} fails to be true. Then there exist a critical energy \/$0<E_c<\infty$ and a global solution $u$ to \eqref{nls} with
$E(u)=E_c$, which blows up in both time directions in the sense that
$$
S_{\ge 0}(u)=S_{\le 0}(u)=\infty,
$$
and whose orbit $\{ u(t):\, t\in {\mathbb{R}}\}$ is precompact in $\dot H_D^1(\Omega)$. Moreover, there exists $R>0$ so that
\begin{align}\label{E:unif L6}
\int_{\Omega\cap\{|x|\leq R\}} |u(t,x)|^6\, dx\gtrsim 1 \quad \text{uniformly for } t\in {\mathbb{R}}.
\end{align}
\end{thm}
\begin{proof}
If Theorem~\ref{T:main} fails to be true, there must exist a critical energy $0<E_c<\infty$ and a sequence of solutions $u_n:I_n\times\Omega\to {\mathbb{C}}$ such that $E(u_n)\to E_c$ and $S_{I_n}(u_n)\to \infty$. Let $t_n\in I_n$ be such that $S_{\ge t_n}(u_n)=S_{\le t_n}(u_n)=\frac 12 S_{I_n}(u_n)$; then
\begin{align}\label{s12}
\lim_{n\to\infty} S_{\ge t_n}(u_n)=\lim_{n\to\infty}S_{\le t_n}(u_n)=\infty.
\end{align}
Applying Proposition \ref{P:PS} and passing to a subsequence, we find $\phi\in \dot H^1_D(\Omega)$ such that $u_n(t_n)\to \phi$
in $\dot H^1_D(\Omega)$. In particular, $E(\phi)=E_c$. We take $u:I\times\Omega\to {\mathbb{C}}$ to be the maximal-lifespan solution to \eqref{nls} with initial data $u(0)=\phi$. From the stability result Theorem~\ref{T:stability} and \eqref{s12}, we get
\begin{align}\label{s14}
S_{\ge 0}(u)=S_{\le 0}(u)=\infty.
\end{align}
Next we prove that the orbit of $u$ is precompact in $\dot H_D^1(\Omega)$. For any sequence $\{t'_n\}\subset I$, \eqref{s14} implies
$S_{\ge t_n'}(u)=S_{\le t_n'}(u)=\infty$. Thus by Proposition~\ref{P:PS}, we see that $u(t_n')$ admits a subsequence that converges strongly in
$\dot H^1_D(\Omega)$. Therefore, $\{u(t): t\in {\mathbb{R}}\}$ is precompact in $\dot H^1_D(\Omega)$.
We now show that the solution $u$ is global in time. We argue by contradiction; suppose, for example, that $\sup I<\infty$. Let $t_n\to \sup I$. Invoking Proposition~\ref{P:PS} and passing to a subsequence, we find $\phi\in \dot H^1_D(\Omega)$ such that $u(t_n)\to \phi$ in
$\dot H^1_D(\Omega)$. From the local theory, there exist $T=T(\phi)>0$ and a unique solution $v:[-T,T]\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with initial data $v(0)=\phi$ such that $S_{[-T,T]}v<\infty$. By the stability result Theorem~\ref{T:stability}, for $n$ sufficiently large we find a unique solution
$\tilde u_n:[t_n-T,t_n+T]\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with data $\tilde u_n(t_n)=u(t_n)$ and $S_{[t_n-T,t_n+T]}(\tilde u_n)<\infty$. From uniqueness of solutions to \eqref{nls}, we must have $\tilde u_n=u$. Thus taking $n$ sufficiently large, we see that $u$ can be extended beyond $\sup I$, which contradicts the fact that $I$ is the maximal lifespan of $u$.
Finally, we prove the uniform lower bound \eqref{E:unif L6}. We again argue by contradiction. Suppose there exist sequences $R_n\to \infty$ and
$\{t_n\}\subset {\mathbb{R}}$ along which
$$
\int_{\Omega\cap\{|x|\leq R_n\}} |u(t_n,x)|^6\, dx \to 0.
$$
Passing to a subsequence, we find $u(t_n)\to \phi$ in $\dot H^1_D(\Omega)$ for some non-zero $\phi\in \dot H^1_D(\Omega)$.
Note that if $\phi$ were zero, then the solution $u$ would have energy less than the small data threshold, which would contradict \eqref{s14}.
By Sobolev embedding, $u(t_n)\to\phi$ in $L^6$, and since $R_n\to\infty$,
$$
\int_\Omega |\phi(x)|^6\,dx = \lim_{n\to\infty} \int_{\Omega\cap\{|x|\leq R_n\}} |\phi(x)|^6\, dx = \lim_{n\to\infty} \int_{\Omega\cap\{|x|\leq R_n\}} |u(t_n,x)|^6\, dx =0.
$$
This contradicts the fact that $\phi \neq 0$ and completes the proof of the theorem.
\end{proof}
Finally, we are able to prove the main theorem.
\begin{proof}[Proof of Theorem~\ref{T:main}]
We argue by contradiction. Suppose Theorem~\ref{T:main} fails. By Theorem~\ref{T:mmbs}, there exists a minimal energy blowup solution
$u$ that is global in time, whose orbit is precompact in $\dot H_D^1(\Omega)$, and that satisfies
$$
\int_{\Omega\cap\{|x|\leq R\}} |u(t,x)|^6\, dx \gtrsim 1 \quad \text{uniformly for } t\in {\mathbb{R}}
$$
for some large $R>1$. Integrating over a time interval of length $|I|\geq 1$, we obtain
\begin{align*}
|I|\lesssim R\int_{I}\int_{\Omega\cap\{|x|\leq R\}} \frac{|u(t,x)|^6}{|x|}\,dx\,dt \lesssim R\int_{I}\int_{\Omega\cap\{|x|\leq R|I|^{\frac12}\}}\frac{|u(t,x)|^6}{|x|}\,dx\,dt.
\end{align*}
On the other hand, for $R|I|^{\frac12}\geq 1$, the Morawetz inequality Lemma~\ref{L:morawetz} gives
\begin{align*}
\int_{I}\int_{\Omega\cap\{|x|\leq R|I|^{\frac12}\}}\frac{|u(t,x)|^6}{|x|}\,dx\,dt \lesssim R|I|^{\frac12},
\end{align*}
with the implicit constant depending only on $E(u)=E_c$.
Taking $I$ sufficiently large depending on $R$ and $E_c$ (which is possible since $u$ is global in time), we derive a contradiction.
This completes the proof of Theorem~\ref{T:main}.
\end{proof}
\end{document}
\begin{document}
\title[]{Embedded $\Q$-desingularization of real Schubert varieties and application to the relative $\Q$-algebrization of nonsingular algebraic sets}
\author{Enrico Savi}
\address{Dipartimento di Matematica, Via Sommarive, 14, Universit\`a di Trento, 38123 Povo (ITALY)}
\email{enrico.savi@unitn.it}
\thanks{The author is supported by GNSAGA of INDAM}
\begin{abstract}
A real algebraic set $V\subset\mathbb{R}^{n}$ is said to be a $\mathbb{Q}$-nonsingular $\mathbb{Q}$-algebraic set if it is described, both globally and locally, by polynomials with rational coefficients. We prove that every nonsingular real algebraic set $V\subset\mathbb{R}^n$ with nonsingular algebraic subsets $\{V_i\}_{i=1}^\ell$ in general position is Nash diffeomorphic to a $\Q$-nonsingular $\mathbb{Q}$-algebraic set $V'\subset\mathbb{R}^N$ with $\mathbb{Q}$-nonsingular $\mathbb{Q}$-algebraic subsets $\{V'_i\}_{i=1}^\ell$ in general position and the Nash diffeomorphism $h:V\to V'$ sends each $V_i$ to $V_i'$. A key result in the proof is the description of $\Z/2\Z$-homological cycles of real Grassmannian manifolds by $\mathbb{Q}$-nonsingular $\mathbb{Q}$-algebraic representatives via an explicit desingularization of real embedded Schubert varieties.
\end{abstract}
\keywords{Real algebraic sets, topology of real algebraic sets, algebraic models, Nash manifolds.}
\subjclass[2010]{Primary 14P05; Secondary 14P10, 14P20}
\date{\today}
\maketitle
\tableofcontents
\section*{Introduction}
One of the main topics in real algebraic geometry is the so called ``Algebrization problem", that is, the characterization of those topological spaces admitting a homeomorphism onto some real algebraic set, where a real algebraic set means the common solution set of real polynomial equations in some real affine space. By the Whitney embedding theorem, see \cite{Whi36}, every manifold $M$ of dimension $m$ can be smoothly embedded in $\R^{2m+1}$ and the image of such an embedding can be described, both locally and globally, as the set of solutions of finitely many global real analytic equations. Thus, the further task to address was whether the previous global analytic equations could be chosen to be algebraic, both globally and locally. Clearly, there are examples of non-compact manifolds that do not admit homeomorphic algebraic models, such as any manifold with infinitely many connected components. In contrast, the algebrization problem for compact manifolds was a challenging task to address.
In his remarkable paper \cite{Na52}, Nash proved that every compact smooth manifold $M$ of dimension $m$ is diffeomorphic to a real analytic submanifold $M'\subset\R^{2m+1}$ which is actually the union of some nonsingular connected components of a real algebraic subset of $\R^{2m+1}$. Nash conjectured that $M'$ could be chosen to be a whole nonsingular real algebraic subset of $\R^{2m+1}$, a so-called algebraic model of $M$. In 1973, Tognoli \cite{Tog73} proved this conjecture to be true, thus the so called Nash-Tognoli theorem asserts that: \textit{Every compact manifold $M$ of dimension $m$ is diffeomorphic to a nonsingular real algebraic subset of $\R^{2m+1}$.} With respect to Nash's result two main original ideas appear in Tognoli's work: relative algebraic approximation of smooth functions by polynomial (and regular) ones and cobordism theory, in particular algebraic representatives of cobordism classes found by Milnor in \cite{Mil65}.
After Tognoli's solution of Nash's conjecture a systematic study of real algebraic sets and of the algebrization problem started. There is a wide literature devoted to improvements and extensions of Nash-Tognoli theorem. We refer the interested reader to the books \cite[Chapter II]{AK92a}, \cite[Chapter 14]{BCR98}, \cite[Chapter 6]{Man14}, the survey \cite[Section 2]{Kol17}, the papers \cite{CS92,Kuc11}, the more recent ones \cite{Ben1,GT17} and references therein.
At that point it is natural to ask whether the algebraic models of compact smooth manifolds can be further simplified by requiring that the coefficients of the describing polynomial equations belong to a subfield $k$ of $\R$ as small as possible. Since the rationals are the smallest subfield of $\R$, the final goal is $k=\Q$.
The answer is affirmative for $k=\R_{\mathrm{alg}}$, the field of real algebraic numbers, that is the smallest real closed field. This follows combining Nash's theorem and the algebrization result \cite[Cor.\,3.9]{CS92} for Nash manifolds over any real closed field by Coste and Shiota. More recently, by means of Zariski equisingular techniques, Parusi\'{n}ski and Rond in \cite{PR20} proved that every real algebraic subset $V$ of $\R^{m+n}$ of dimension $m$ is subanalytically homeomorphic to an algebraic subset $V'$ of $\R^{m+n}$ of dimension $m$ defined over $\R_{\mathrm{alg}}$. It is remarkable that no regularity assumption on the algebraic set $V$ is needed. Moreover, by \cite[Rmk.\,13]{PR20}, $V'$ can be chosen to be globally described by equations over $\Q$ if the extension of $\Q$ obtained by adding the coefficients of polynomial equations defining $V$ is purely transcendental. However, in general, the problem of finding algebraic homeomorphic models of real algebraic sets described by equations with rational coefficients is widely open, as explained by Parusi\'{n}ski in \cite[Open\,problem\,1, p.\,199]{Par21}.
In the recent work \cite{FG}, Fernando and Ghiloni introduced and studied $\Q$-algebraic sets, that is real algebraic sets described by polynomial equations with rational coefficients. By means of the complexification $V_\C$ of a real $\Q$-algebraic set $V$ and the action of the Galois group on $V_\C$, the authors prove important properties of real $\Q$-algebraic sets and define a precise notion of $\R|\Q$-nonsingularity for a point of a real $\Q$-algebraic set (see Definition \ref{def:R|Q}). This concept turned out to be crucial to extend the algebraic approximation techniques developed in \cite{Na52}, \cite{Tog73} and \cite{AK81a} obtaining algebraic models defined by polynomial equations with rational coefficients. Indeed, in \cite{GSa} Ghiloni and the author developed such $\Q$-approximation techniques and gave a complete solution of \cite[Open\,problem\,1, p.\,199]{Par21} in the case of compact nonsingular real algebraic sets and real algebraic sets with isolated singularities. The aim of this paper is to give a general answer also in the relative nonsingular case, that is to solve the following question:
\noindent \textsc{Relative $\Q$-algebrization of nonsingular algebraic sets:} \textit{Is every nonsingular real algebraic set $V$, with nonsingular algebraic subsets $\{V_i\}_{i=1}^\ell$ in general position, homeomorphic to a nonsingular algebraic set $V'$, with nonsingular algebraic subsets $\{V'_i\}_{i=1}^\ell$ in general position, all defined by polynomial equations with rational coefficients, such that the homeomorphism sends each $V_i$ to $V'_i$?}
Let us describe the structure of the paper.
Section \ref{sec:1} is devoted to a review of fundamental results of (real) $\Q$-algebraic geometry developed in \cite{FG} and \cite{GSa}. We recall the notion of real $\Q$-algebraic set, the decomposition into $\Q$-irreducible components and the notion of $\R|\Q$-nonsingularity of a point of a real $\Q$-algebraic set (see Definition \ref{def:R|Q_p}). This notion turned out to be crucial in separating the irreducible components of $\Q$-algebraic sets (see Corollary \ref{cor:Q_setminus}) and in developing smooth approximation techniques with polynomial functions with rational coefficients in \cite[Sec.\,3]{GSa}. In the last part of this first section we propose fundamental examples of $\Q$-nonsingular $\Q$-algebraic sets (see Definition \ref{def:R|Q}) that become crucial in Subsection \ref{subsec:3.1}.
Section \ref{sec:2} is devoted to the study of $\Z/2\Z$ homology groups of real Grassmannians. By \cite[Thm.\,3.4.4]{BCR98} each real Grassmannian $\G_{m,n}$ of affine $m$-planes in $\R^{m+n}$ can be embedded as a real algebraic subset of $\R^{(m+n)^2}$. Let us identify $\G_{m,n}$ with the above real algebraic subset of $\R^{(m+n)^2}$. It is well known that incidence conditions with respect to a complete flag induce a finite cellular decomposition of $\G_{m,n}$ such that the euclidean closure of each cell is an algebraic subset of $\G_{m,n}$ (see \cite{MS74}). The closures of those cells are particular Schubert varieties that generate the $\Z/2\Z$ homology groups of each $\G_{m,n}$. The main result of this section is that, if we choose the complete flag $0\subset\R\subset\R^2\subset\dots\subset\R^{m+n}$ with respect to the standard coordinates of $\R^{m+n}$, each Schubert variety $\sigma$ defined by incidence conditions with respect to the above flag admits a $\Q$-desingularization (see Definition \ref{def:Q_des}). The latter result ensures that each $\Z/2\Z$ homology class of each real Grassmannian $\G_{m,n}$ has a $\Q$-nonsingular representative. This fact is crucial to apply $\Q$-algebraic approximation techniques in Section \ref{sec:4}. We point out that the desingularization procedure we apply in this section is inspired by Zelevinsky's paper \cite{Zel83} on small resolution of singularities of Schubert varieties.
Section \ref{sec:3} is divided into two subsections. The first subsection is devoted to adapting `over $\Q$' the topological construction of what we call ``spine bordisms" introduced by Akbulut and King in \cite{AK81b}. We stress that the topological ideas come from the paper of Akbulut and King, but to ensure that the equations have rational coefficients, both globally and locally, the choice of an appropriate embedding is crucial. The second subsection is a review of $\Q$-nice algebraic sets introduced by Ghiloni and the author in \cite{GSa}.
Section \ref{sec:4} contains the main results of the paper. Consider $\R^{k}$ equipped with the usual euclidean topology, for every $k\in\N$. Let $V$ be a nonsingular real algebraic subset of $\R^{m+n}$ of dimension $m$ equipped with the relative topology of $\R^{m+n}$. Let $\mathscr{C}^\infty(V,\R^k)$ be the set of all $\mathscr{C}^\infty$ maps from $V$ to $\R^k$, and let ${\EuScript N}(V,\R^k)\subset\mathscr{C}^\infty(V,\R^k)$ be the set of all Nash maps from $V$ to $\R^k$. Denote by $\mathscr{C}^\infty_\mathrm{w}(V,\R^k)$ the set $\mathscr{C}^\infty(V,\R^k)$ equipped with the usual weak $\mathscr{C}^\infty$ topology, see \cite[Ch.\,2,\,Sec.\,1]{Hir94}, and ${\EuScript N}_\mathrm{w}(V,\R^k)$ the set ${\EuScript N}(V,\R^k)$ equipped with the relative topology induced by $\mathscr{C}^\infty_\mathrm{w}(V,\R^k)$. Here we summarize both statements of Theorems \ref{thm:Q_tico_tognoli} and \ref{thm:non-compact} to completely address the \textsc{Relative $\Q$-algebrization of nonsingular algebraic sets}.
\begin{thm}
Let $V$ be a nonsingular algebraic subset of $\R^{m+n}$ of dimension $m$ and let $\{V_i\}_{i=1}^\ell$ be a finite family of nonsingular algebraic subsets of $V$ of codimension $c_i$ in general position. Set $N:=\max(m+n,2m+1)$, if $V$ is compact, or $N:=\max(2(n+1),3(m+1)+n)$, if $V$ is non-compact. Then, for every neighborhood $\mathcal{U}$ of the inclusion map $\iota:V\hookrightarrow\R^{N}$ in ${\EuScript N}_{\mathrm{w}}(V,\R^{N})$ and for every neighborhood $\mathcal{U}_i$ of the inclusion map $\iota|_{V_i}:V_i\hookrightarrow\R^N$ in ${\EuScript N}_{\mathrm{w}}(V_i,\R^N)$, for every $i\in\{1,\dots,\ell\}$, there exist a $\Q$-nonsingular $\Q$-algebraic set $V'\subset\R^N$, a family $\{V'_i\}_{i=1}^\ell$ of $\Q$-nonsingular $\Q$-algebraic subsets of $V'$ in general position and a Nash diffeomorphism $h:V\to V'$ which simultaneously takes each $V_i$ to $V'_i$ such that, if $\jmath:V'\hookrightarrow\R^N$ denotes the inclusion map, then $\jmath\circ h\in\mathcal{U}$ and $\jmath|_{V'_i}\circ h|_{V_i}\in\mathcal{U}_i$, for every $i\in\{1,\dots,\ell\}$. Moreover, $h$ extends to a semialgebraic homeomorphism from $\R^N$ to $\R^N$.
\end{thm}
\section{Review on real $\Q$-algebraic geometry}
\label{sec:1}
\subsection{$\Q$-Algebraic sets, $\R|\Q$-nonsingularity and $\Q$-regular functions}
In this subsection we briefly recall further concepts and results from papers \cite{FG} and \cite{GSa}.
Let $L|K$ be an extension of fields. Fix $n\in\N\setminus\{0\}$. Consider $K[x]:=K[x_1,\dots,x_{n}]\subset L[x_1,\dots,x_{n}]=:L[x]$ and $K^{n}\subset L^{n}$. Given $F\subset L[x]$ and $S\subset L^{n}$ define
\begin{align*}
\mathcal{Z}_L(F)&:=\{x\in L^{n}:f(x)=0,\,\forall f\in F\},\\
\I_K(S)&:=\{f\in K[x]:f(x)=0,\,\forall x\in S\}.
\end{align*}
Clearly $\I_K(S)$ is an ideal of $K[x]$. If $F=\{f_1,\dots,f_s\}\subset L[x]$, for some $s\in\N$, then we set $\mathcal{Z}_L(f_1,\dots,f_s):=\mathcal{Z}_L(F)$.
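For instance, with $L|K=\R|\Q$ one has $\mathcal{Z}_\R(x^2-2)=\{-\sqrt{2},\sqrt{2}\}\subset\R$, while
\[
\I_\Q(\{\sqrt{2}\})=(x^2-2)\Q[x]\quad\text{and}\quad\I_\R(\{\sqrt{2}\})=(x-\sqrt{2})\R[x].
\]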
Let us generalize the notions of (real) algebraic and $\Q$-algebraic sets.
\begin{definition}
Given a set $V\subset L^{n}$, we say that $V$ is a \emph{$K$-algebraic subset of $L^{n}$}, or $V\subset L^{n}$ is a \emph{$K$-algebraic set}, if there exists $F\subset K[x]$ such that $V=\mathcal{Z}_L(F)$.
\end{definition}
Denote as the \emph{$K$-Zariski topology of $L^{n}$} the unique topology of $L^{n}$ having $K$-algebraic sets as closed sets. Such a topology is noetherian, because it is coarser than the usual Zariski topology of $L^{n}$. As usual, Noetherianity implies that every $K$-algebraic subset of $L^{n}$ is the common solution set of a finite number of polynomials with coefficients in $K$.
\emph{Fix a $K$-algebraic set $V\subset L^{n}$.} We say that $V\subset L^{n}$ is \emph{$K$-irreducible} if it is irreducible with respect to the $K$-Zariski topology. By classical arguments, we have that $V\subset L^{n}$ is $K$-irreducible if and only if $\I_K(V)$ is a prime ideal of $K[x]$. Noetherianity also implies that every $K$-algebraic set can be uniquely decomposed as a finite union of $K$-irreducible algebraic subsets of $V$. We call those $K$-irreducible algebraic subsets the \emph{$K$-irreducible components} of $V\subset L^{n}$. Observe that the $L$-irreducible components of $V\subset L^{n}$ coincide with the usual irreducible components of $V$, viewed as an algebraic subset of $L^{n}$. As the usual ($L$-)Zariski topology is finer than the $K$-Zariski topology, if the $K$-algebraic set $V\subset L^{n}$ is irreducible, it is also $K$-irreducible. If both $L$ and $K$ are algebraically closed or real closed, the converse implication is also true. Otherwise, it may happen that $V\subset L^{n}$ is $K$-irreducible but not irreducible. An example of this behaviour can be found by choosing $L|K=\R|\Q$ and $V:=\{-\sqrt{2},\sqrt{2}\}\subset\R$.
The \emph{$K$-dimension $\dim_K(V)$} of $V$ is defined as the Krull dimension of the ring $K[x]/\I_K(V)$. As above, $\dim_L(V)$ coincides with the usual dimension $\dim(V)$ of $V$, viewed as an algebraic subset of $L^{n}$. A $K$-version of a classical result connecting irreducibility and dimension holds: if $V\subset L^{n}$ is $K$-irreducible and $W\subset L^{n}$ is a $K$-algebraic set such that $W\subsetneqq V$, then $\dim_K(W)<\dim_K(V)$. Another useful but deeper result proved in \cite{FG} is that $\dim_K(V)=\dim(V)$, provided $L$ is algebraically closed or real closed.
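For instance, the $\Q$-irreducible set $V=\{-\sqrt{2},\sqrt{2}\}\subset\R$ considered above satisfies $\I_\Q(V)=(x^2-2)\Q[x]$, so $\Q[x]/\I_\Q(V)\cong\Q(\sqrt{2})$ is a field and $\dim_\Q(V)=0=\dim(V)$.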
Let $L|K$ be an extension of fields and let $V\subset L^{n}$ be a $K$-algebraic set. Observe that, if $L$ is algebraically closed, $\I_L(V)=\I_K(V)L[x]$ by Hilbert's Nullstellensatz, thus the ideal $\I_K(V)$ of $K[x]$ gives complete information about $V\subset L^{n}$. The latter equality holds as well when both $L$ and $K$ are real closed fields by the Tarski-Seidenberg principle. However, it is false in general. For instance, if $L|K=\R|\Q$ and $V:=\{x_1+\sqrt[3]{2}x_2=0\}=\{x_1^3+2x_2^3=0\}\subset\R^2$, then
\[
\I_\R(V)=(x_1+\sqrt[3]{2}x_2)\R[x_1,x_2]\supsetneqq(x_1^3+2x_2^3)\R[x_1,x_2]=\I_\Q(V)\R[x_1,x_2].
\]
We say that an algebraic set $S\subset L^{n}$ is \emph{defined over $K$} if $\I_L(S)=\I_K(S)L[x]$. Clearly, if $S\subset L^{n}$ is defined over $K$, then it is also $K$-algebraic. As we said, if $L$ is algebraically closed or both $L$ and $K$ are real closed, then the concepts of $K$-algebraic subset of $L^{n}$ and algebraic subset of $L^{n}$ defined over $K$ do coincide. As explained in \cite[Sec.\,2.2]{GSa}, in the real algebraic setting the situation is completely different and various notions of nonsingularity over $\Q$ arise. In what follows we recall the definition of $\R|\Q$-nonsingular points of a $\Q$-algebraic set introduced in \cite{FG} and further characterizations in \cite{GSa}.
Pick a point $a=(a_1,\ldots,a_{n})\in\R^{n}$ and let $V\subset\R^{n}$ be a $\Q$-algebraic set. We denote by $\mathfrak{n}_a$ the maximal ideal $(x_1-a_1,\ldots,x_{n}-a_{n})\R[x]$ of $\R[x]$ and by $\I_\Q(V)$ the vanishing ideal of $V$ in $\Q[x]$, as above.
The following notion of $\R|\Q$-regular point was introduced in \cite{FG}.
\begin{definition}[$\R|\Q$-regular points]\label{def:R|Q_p}
Let $V\subset\R^{n}$ be a $\Q$-algebraic set and let $a\in V$. We define the \emph{$\R|\Q$-local ring $\mathcal{R}^{\R|\Q}_{V,a}$ of $V$ at $a$} as
\[
\mathcal{R}^{\R|\Q}_{V,a}:=\R[x]_{\mathfrak{n}_a}/(\mathcal{I}_{\Q}(V)\R[x]_{\mathfrak{n}_a}).
\]
We say that $a$ is a \emph{$\R|\Q$-regular point of $V$} if $\mathcal{R}^{\R|\Q}_{V,a}$ is a regular local ring of dimension $\dim(V)$. We denote by $\operatorname{Reg}^{\R|\Q}(V)$ the set of all $\R|\Q$-regular points of $V$.
\end{definition}
In \cite{FG}, the authors show that the set $\operatorname{Reg}^{\R|\Q}(V)$ is a non-empty Zariski open subset of $\operatorname{Reg}(V)$. It may happen that $\operatorname{Reg}^{\R|\Q}(V)\subsetneq\operatorname{Reg}(V)$. For instance, the $\Q$-algebraic line $V:=\{x_1+\sqrt[3]{2}x_2=0\}=\{x_1^3+2x_2^3=0\}\subset\R^2$ is nonsingular (as a real algebraic set), but $(0,0)$ is not an $\R|\Q$-regular point of $V$. This leads to the following definition originally introduced in \cite[Def.\,1.2]{GSa}.
\begin{definition}\label{def:R|Q}
Let $V\subset\R^{n}$ be a $\Q$-algebraic set. We say that $V$ is \emph{$\Q$-determined} if $\operatorname{Reg}^{\R|\Q}(V)=\operatorname{Reg}(V)$. If in addition $V$ is nonsingular, in other words, $V=\operatorname{Reg}(V)=\operatorname{Reg}^{\R|\Q}(V)$, we say that $V$ is $\Q$-nonsingular.
\end{definition}
\begin{notation}
In what follows, if $V\subset\R^{n}$ is a $\Q$-algebraic set and $a\in V$, we set $\reg^*_{V,a}:=\reg^{\R|\Q}_{V,a}$ and $\operatorname{Reg}^*(V):=\operatorname{Reg}^{\R|\Q}(V)$ for short.
\end{notation}
In \cite[Lem.\,2.10]{GSa}, Ghiloni and the author characterized the notion of $\R|\Q$-nonsingularity via a $\R|\Q$-jacobian criterion. In particular, if $V\subset\R^{n}$ of dimension $d$ is a nonsingular $\Q$-algebraic set, then $V$ is $\Q$-nonsingular if and only if for every $a\in V$ there are $q_1,\dots,q_{n-d}\in\mathcal{I}_\Q(V)$ such that $\nabla q_1(a),\dots,\nabla q_{n-d}(a)$ are linearly independent and there exists a Zariski open subset $U$ of $\R^{n}$ such that $V\cap U=\mathcal{Z}(q_1,\dots,q_{n-d})\cap U$.
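For instance, the unit sphere $S^{n-1}=\mathcal{Z}_\R(x_1^2+\dots+x_{n}^2-1)\subset\R^{n}$ is $\Q$-nonsingular: the single polynomial $q:=x_1^2+\dots+x_{n}^2-1\in\mathcal{I}_\Q(S^{n-1})$ satisfies $\nabla q(a)=2a\neq0$ at every point $a\in S^{n-1}$, and $S^{n-1}\cap U=\mathcal{Z}(q)\cap U$ with $U=\R^{n}$.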
Here we recall a crucial consequence of previous $\R|\Q$-jacobian criterion proved in \cite{GSa} and deeply applied in that paper. Its importance will be clear in the proof of our main theorem.
\begin{corollary}[{\cite[Cor.\,2.8]{GSa}}]\label{cor:Q_setminus}
Let $V\subset\R^{n}$ and $Z\subset\R^{n}$ be two $\Q$-nonsingular $\Q$-algebraic sets of the same dimension $d$ such that $Z\subsetneqq V$. Then $V\setminus Z\subset\R^{n}$ is a $\Q$-nonsingular $\Q$-algebraic set of dimension $d$ as well.
\end{corollary}
Let us recall the notion of $\Q$-regular map introduced in \cite{GSa}. Let $S\subset\R^{n}$ be a set and let $f:S\to\R$ be a function. We say that $f$ is \emph{$\Q$-regular} if there are $p,q\in\Q[x]=\Q[x_1,\ldots,x_{n}]$ such that $\mathcal{Z}_\R(q)\cap S=\varnothing$ and $f(x)=\frac{p(x)}{q(x)}$ for every $x\in S$. We denote by $\reg^\Q(S)$ the set of $\Q$-regular functions on $S$. Observe that usual pointwise addition and multiplication induce a natural ring structure on $\reg^\Q(S)$. Let $T\subset\R^h$ be a set and let $g:S\to T$ be a map. We say that $g$ is \emph{$\Q$-regular} if there exist $g_1,\ldots,g_h\in\reg^\Q(S)$ such that $g(x)=(g_1(x),\ldots,g_h(x))$ for all $x\in S$. We denote by $\reg^\Q(S,T)$ the set of $\Q$-regular maps from $S$ to $T$. We say that the map $g:S\to T$ is a \emph{$\Q$-biregular isomorphism} if $g$ is bijective and both $g$ and $g^{-1}$ are $\Q$-regular. If there exists such a $\Q$-biregular isomorphism $g:S\to T$, we say that $S$ is \emph{$\Q$-biregularly isomorphic} to $T$. Observe that, as usual in real algebraic geometry, the previous global definition of $\Q$-regular functions and maps coincides with the local one, that is: for each $a\in S$, there exist $p_a,q_a\in\Q[x]$ such that $q_a(a)\neq0$ and $f(x)=\frac{p_a(x)}{q_a(x)}$ for all $x\in S\setminus \mathcal{Z}_\R(q_a)$.
Again, in \cite{GSa}, Ghiloni and the author proved basic properties of $\Q$-determined real $\Q$-algebraic sets and $\Q$-regular maps, for more details we refer to \cite[Lem.\,2.11]{GSa} and \cite[Lem.\,2.13]{GSa}. Those Lemmas will be strongly applied in this paper.
Let us recall the definitions of overt polynomial and projectively $\Q$-closed real algebraic set, introduced in \cite[p.\ 427]{AK81a} and \cite{GSa}. Let $p\in\R[x]$ be a polynomial. Write $p$ as follows: $p=\sum_{i=0}^dp_i$, where $d$ is the degree of $p$ and each polynomial $p_i$ is homogeneous of degree $i$ (as a convention we assume the degree of the zero polynomial to be $0$). We say that the polynomial $p\in\R[x]$ is \emph{overt} if $\mathcal{Z}_\R(p_d)$ is either empty or $\{0\}$. Observe that a nonconstant overt polynomial function $p:\R^{n}\to\R$ is proper. A real $\Q$-algebraic set $V\subset\R^{n}$ is called \emph{projectively $\Q$-closed} if there exists an overt polynomial $p\in\Q[x]\subset\R[x]$ such that $V=\mathcal{Z}_\R(p)$. This notion amounts to requiring that $\theta(V)$ is Zariski closed in $\Proy^{n}(\R)$, where $\theta:\R^{n}\to\Proy^{n}(\R)$ denotes the affine chart $\theta(x_1,\ldots,x_{n}):=[1,x_1,\ldots,x_{n}]$. As a consequence, if $V$ is projectively $\Q$-closed, then it is compact in $\R^{n}$. We refer to \cite[Lem.\,2.15]{GSa} for fundamental properties of projectively $\Q$-closed algebraic subsets of $\R^{n}$.
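For instance, the polynomial $q:=x_1^2+\dots+x_{n}^2-1\in\Q[x]$ is overt, since its top-degree homogeneous part $x_1^2+\dots+x_{n}^2$ vanishes only at the origin; hence the unit sphere $S^{n-1}=\mathcal{Z}_\R(q)$ is projectively $\Q$-closed. On the contrary, the line $\mathcal{Z}_\R(x_2)\subset\R^2$, being non-compact, is not projectively $\Q$-closed.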
\subsection{Fundamental examples}
Let $\G_{m,n}$ denote the Grassmannian manifold of $m$-dimensional subspaces of $\R^{m+n}$. Identify $\R^{(m+n)^2}$ with the set of $(m+n)\times (m+n)$ real matrices. It is well known, see \cite[Chapter 3]{BCR98}, that every real Grassmannian $\G_{m,n}$ is biregularly isomorphic to the following real algebraic subset of $\R^{(m+n)^2}$:
\begin{equation}\label{eq:GG}
\G_{m,n}=\big\{X\in\R^{(m+n)^2}:X^T=X,X^2=X,\mathrm{tr}(X)=m\big\}.
\end{equation}
The biregular map assigns to each point $p$ of the Grassmannian, corresponding to an $m$-dimensional subspace $V_p$ of $\R^{m+n}$, the matrix $X_p\in\R^{(m+n)^2}$ of the orthogonal projection of $\R^{m+n}$ onto $V_p$ with respect to the canonical basis of $\R^{m+n}$.
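For instance, in $\G_{1,1}\subset\R^{4}$ the line $V_p=\operatorname{span}(\cos\alpha,\sin\alpha)\subset\R^2$ corresponds to the rank-one projection matrix
\[
X_p=\begin{pmatrix} \cos^2\alpha & \cos\alpha\sin\alpha\\ \cos\alpha\sin\alpha & \sin^2\alpha\end{pmatrix},
\]
which indeed satisfies $X_p^T=X_p$, $X_p^2=X_p$ and $\mathrm{tr}(X_p)=1$.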
Let $\E_{m,n}$ denote the (total space of the) universal vector bundle over $\G_{m,n}$ as the following real algebraic subset of $\R^{(m+n)^2+m+n}$:
\[
\E_{m,n}:=\{(X,y)\in\G_{m,n}\times\R^{m+n}\,:\,Xy=y\}.
\]
It is well-known that $\E_{m,n}$ is a connected $\mathscr{C}^{\infty}$-submanifold of $\R^{(m+n)^2+m+n}$ of dimension $m(n+1)$.
In \cite[Lem.\,2.18,Lem.\,2.19]{GSa}, Ghiloni and the author proved that both $\G_{m,n}\subset\R^{(m+n)^2}$ and $\E_{m,n}\subset\R^{(m+n)^2+m+n}$ are projectively $\Q$-closed $\Q$-nonsingular $\Q$-algebraic sets. Those algebraic sets are fundamental examples to represent cobordism classes of smooth manifolds and to develop algebraic approximation techniques over $\Q$ as in \cite{GSa}. However, in this paper we need further examples to specify a precise topological construction of \cite{AK81b} in our embedded setting.
Let $\E^*_{m,n}$ denote the (total space of the) universal sphere bundle over $\G_{m,n}$ as the following real algebraic subset of $\R^{(m+n)^2+m+n+1}$:
\[
\E^*_{m,n}=\{(X,y,t)\in\G_{m,n}\times\R^{m+n}\times\R\,|\,Xy=y,\,|y|_{m+n}^2+t^2=t \}.
\]
It is well-known that $\E^*_{m,n}$ is a connected $\mathscr{C}^{\infty}$-submanifold of $\R^{(m+n)^2}\times\R^{m+n}\times\R$ of dimension $m(n+1)$.
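Note that the fibre condition $|y|^2_{m+n}+t^2=t$ can be rewritten as $|y|^2_{m+n}+\bigl(t-\tfrac12\bigr)^2=\tfrac14$; hence, for fixed $X\in\G_{m,n}$, the fibre of $\E^*_{m,n}$ over $X$ is the $m$-dimensional sphere of radius $\tfrac12$ centred at $(0,\tfrac12)$ inside the $(m+1)$-dimensional subspace $\{(y,t)\in\R^{m+n}\times\R\,:\,Xy=y\}$.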
\begin{lemma}\label{lem:Q_sphere_bundle}
Each universal sphere bundle $\E^*_{m,n}$ is a $\Q$-nonsingular projectively $\Q$-closed algebraic subset of $\R^{(m+n)^2}\times\R^{m+n}\times\R$.
\end{lemma}
\begin{proof}
Let $\phi:\R^{(m+n)^2}\times\R^{m+n}\times\R\rightarrow\R^{(m+n)^2}\times\R^{(m+n)^2}\times\R^{m+n}\times\R$ be the polynomial map defined by
\[
\phi(X,y,t):=(X^T-X,X^2-X,Xy-y,|y|^2_{m+n}+t^2-t).
\]
We prove that the polynomial $\mathrm{tr}(X)-m$ and the polynomial components of $\phi$ do suffice to describe nonsingular points of $\E^*_{m,n}\subset\R^{(m+n)^2}\times\R^{m+n}\times\R$, as in Definition \ref{def:R|Q}. As in the proof of \cite[Lem.\,2.18]{GSa}, it suffices to show that $\mathrm{rnk}\,J_\phi(A,b,c)\geq (m+n)^2+m+n+1-m(n+1)-1=(m+n)^2-mn+n$ for all $(A,b,c)\in \E^*_{m,n}$.
First, we prove that $\mathrm{rnk}\,J_\phi(D_m,v,c)\geq (m+n)^2-mn+n$, where $D_m$ is the diagonal matrix in $\R^{(m+n)^2}$ having $1$ in the first $m$ diagonal positions and $0$ otherwise, $v=(v_1,\ldots,v_{m+n})^T$ is a vector of $\R^{m+n}$ and $c\in\R$ is such that $(D_m,v,c)\in \E^*_{m,n}$. For each $\ell\in\{1,\dots,m+n+1\}$, define the polynomial functions $h_\ell:\R^{(m+n)^2}\times\R^{m+n}\times\R\to\R$ by
\begin{align*}
h_\ell(X,y,t)&:=\big(\sum_{j=1}^{m+n} x_{\ell j}y_j\big)-y_\ell\quad\text{if $\ell\neq m+n+1$}\\
h_{m+n+1}(X,y,t)&:=|y|^2_{m+n}+t^2-t,
\end{align*}
for all $X=(x_{ij})_{i,j}\in\R^{(m+n)^2}$, $y=(y_1,\dots,y_{m+n})\in \R^{m+n}$ and $t\in\R$. Thus, with the same notation of the proof of \cite[Lem.\,2.19]{GSa}, it follows that $\phi(X,y,t)=((f_{ij}(X))_{i,j},(g_{ij}(X))_{i,j},(h_\ell(X,y,t))_\ell)$. Thanks to the proof of the aforementioned \cite[Lem.\,2.18]{GSa}, we already know that the rank of the jacobian matrix at $(D_m,v,c)$ of the map $(X,y,t)\mapsto((f_{ij}(X))_{i,j},(g_{ij}(X))_{i,j},(h_\ell(X,y,t))_\ell)$ is $\geq (m+n)^2+n-mn$. Thus, we only have to look at the components $(h_\ell(X,y,t))_\ell$ in order to prove that $h_{m+n+1}$ always produces an additional independent gradient with respect to the gradients of $(h_\ell(X,y,t))_{\ell\neq m+n+1}$. By a direct computation we see that
\begin{align*}
\nabla h_\ell(D_m,v,c)&=\Big(\sum_{j=1}^{m+n} v_j E_{\ell j},-e_\ell,0\Big)\quad\text{if $\ell\in\{m+1,\dots,m+n\}$},\\
\nabla h_{m+n+1}(D_m,v,c)&=\big(0,2v,2c-1\big)
\end{align*}
where $E_{\ell j}$ is the matrix in $\R^{(m+n)^2}$ whose $(\ell,j)$-coefficient is equal to $1$ and $0$ otherwise, and $\{e_1,\ldots,e_{m+n}\}$ is the canonical vector basis of $\R^{m+n}$. Observe that $\nabla h_{m+n+1}(D_m,v,c)$ is trivially linearly independent with respect to $(\nabla h_\ell(D_m,v,c))_{\ell\neq m+n+1}$ when $c\neq 1/2$; otherwise, if $c=1/2$, then $\nabla h_{m+n+1}(D_m,v,c)=(0,2v,0)$, so $v$ is contained in the $m$-plane satisfying $D_m v=v$ and hence it is linearly independent with respect to $(\nabla h_\ell(D_m,v,c))_{\ell\neq m+n+1}$ as well. Consequently, we obtain that $\mathrm{rnk}\,J_\phi(D_m,v,c)\geq (m+n)^2-mn+n$ for every $v\in\R^{m+n}$ and $c\in\R$ such that $(D_m,v,c)\in\E^*_{m,n}$.
Let us complete the proof. Let $(A,b,c)\in\E^*_{m,n}$, let $G\in O(m+n)$ be such that $D_m=G^TAG$ and let $v:=G^Tb$. By the choice of $G$ we see that $|v|^2_{m+n}=|G^T b|^2_{m+n}=|b|^2_{m+n}$, hence $c$ satisfies $|v|^2_{m+n}+c^2-c=0$ as well. Note that $D_m v=G^TAGG^Tb=G^TAb=G^Tb=v$, i.e., $(D_m,v,c)\in\E^*_{m,n}$. Define the linear automorphisms $\psi:\R^{(m+n)^2}\to\R^{(m+n)^2}$ and $\tau:\R^{m+n}\rightarrow\R^{m+n}$ by $\psi(X):=G^TXG$ and $\tau(y)=G^T y$. Since $(\psi\times\tau\times\operatorname{id}_{\R})(A,b,c)=(D_m,v,c)$ and $(\psi\times\psi\times\tau\times\operatorname{id}_{\R})\circ\phi=\phi\circ(\psi\times\tau\times\operatorname{id}_{\R})$, we have that $J_{\psi\times\psi\times\tau\times\operatorname{id}_{\R}}(\phi(A,b,c))J_\phi(A,b,c)=J_\phi(D_m,v,c)J_{\psi\times\tau\times\operatorname{id}_{\R}}(A,b,c)$. Bearing in mind that both matrices $J_{\psi\times\psi\times\tau\times\operatorname{id}_{\R}}(\phi(A,b,c))$ and $J_{\psi\times\tau\times\operatorname{id}_{\R}}(A,b,c)$ are invertible, it follows that $\mathrm{rnk}\,J_\phi(A,b,c)=\mathrm{rnk}\,J_\phi(D_m,v,c)\geq (m+n)^2-mn+n$, as desired.
It remains to show that $\E^*_{m,n}$ is projectively $\Q$-closed in order to deduce that $\E^*_{m,n}$ is also defined over $\Q$. The set $\E^*_{m,n}\subset\R^{(m+n)^2}\times\R^{m+n}\times\R$ is the zero set of the polynomial $|\phi(X,y,t)|_{(m+n)^2}^2+(\mathrm{tr}(X)-m)^2\in\Q[(x_{ij})_{i,j},(y_k)_k,t]$ and it is contained in $\G_{m,n}\times S^{m+n}$, which is a projectively $\Q$-closed algebraic subset of $\R^{(m+n)^2}\times\R^{m+n}\times\R$ by \cite[Lem.\,2.18]{GSa} and $(\mathrm{d})$ of \cite[Lem.\,2.15]{GSa}; hence $(\mathrm{b})$ of \cite[Lem.\,2.15]{GSa} ensures that $\E^*_{m,n}$ is projectively $\Q$-closed in $\R^{(m+n)^2}\times\R^{m+n}\times\R$ as well.
\end{proof}
Let $W$ be a nonsingular algebraic subset of $\R^k$ of dimension $d$. Let $\G:=\prod_{i=1}^{\ell}\G_{m_i,n_i}$, let $\E^*:=\prod_{i=1}^{\ell}\E_{m_i,n_i}^*$ and let $\mu:W\rightarrow \G$ be a regular map. Let $\pi_i:(\R^{(m+n)^2})^\ell\to\R^{(m+n)^2}$ be the projection onto the $i$-th factor and let $\mu_i:W\to\G_{m_i,n_i}$ be defined as $\mu_i:=\pi_i\circ \mu$. We define the pull-back sphere bundle $\mu^*(\E^*)$ over $W$ of $\E^*$ via $\mu$ as the following real algebraic subset of $\R^k\times(\R^{m+n}\times\R)^{\ell}$:
\begin{align*}
\mu^*(\E^*):=\{(x,y^1,t_1,\dots,y^{\ell},t_{\ell})\in &W\times(\R^{m+n}\times\R)^{\ell}\,|\,\\&\mu_i(x)y^i=y^i,\,
|y^i|_{m+n}^2+t_i^2=t_i \text{ for }i=1,\dots,\ell\}.
\end{align*}
It is well-known that $\mu^*(\E^*)$ is a compact $\mathscr{C}^{\infty}$-submanifold of $\R^{k}\times(\R^{m+n}\times\R)^\ell$ of dimension $d+\sum_{i=1}^{\ell} m_i$.
\begin{lemma}\label{lem:Q_sphere_pullback}
Let $W$ be a $\Q$-nonsingular projectively $\Q$-closed real algebraic subset of $\R^k$ of dimension $d$. Let $\mu:W\rightarrow \G$ be a $\Q$-regular map. Then $\mu^*(\E^*)\subset \R^k\times(\R^{m+n}\times\R)^\ell$ is a $\Q$-nonsingular projectively $\Q$-closed algebraic sphere bundle over $W$.
\end{lemma}
\begin{proof}
To simplify the notation we only prove the case $\ell=1$; in the general case the proof works in the same way. Let $\ell=1$, $\G=\G_{m,n}$ and $\E^*=\E^*_{m,n}$.
There are $s\in\N^*$ and $p_1(x),\dots,p_s(x)\in\Q[x]=\Q[x_1,\dots,x_k]$ such that $\mathcal{I}_\Q(W)=(p_1,\dots,p_s)$. Let $\phi:\R^{k}\times\R^{m+n}\times\R\rightarrow\R^s\times\R^{m+n}\times\R$ be the regular map defined by
\begin{align*}
\phi(x,y,t):=(p_1(x),\dots,p_s(x),&\mu(x)y-y,|y|^2_{m+n}+t^2-t),
\end{align*}
where $\mu(x)\in\G\subset\R^{(m+n)^2}$ is in matrix form. We prove that the polynomial components of $\phi$ do suffice to describe nonsingular points of $\mu^*(\E^*)\subset\R^{k}\times\R^{m+n}\times\R$, as in Definition \ref{def:R|Q}. As in the proofs of the previous lemmas, it suffices to show that $\mathrm{rnk}\,J_\phi(a,b,c)\geq k-d+n+1$ for all $(a,b,c)\in \mu^*(\E^*)$.
As in the proof of Lemma \ref{lem:Q_sphere_bundle}, for every $r\in\{1,\dots,m+n+1\}$, define the polynomial functions $h_r:\R^{(m+n)^2}\times\R^{m+n}\times\R\to\R$ by
\begin{align*}
h_r(X,y,t)&:=\big(\sum_{j=1}^{m+n} x_{rj}y_j\big)-y_r\quad\text{if $r\neq m+n+1$}\\
h_{m+n+1}(X,y,t)&:=|y|^2_{m+n}+t^2-t,
\end{align*}
for all $X=(x_{ij})_{ij} \in\R^{(m+n)^2}$, $y=(y_1,\dots,y_{m+n})\in \R^{m+n}$ and $t\in\R$. Thus, with the same notation of the proof of Lemma \ref{lem:Q_sphere_bundle}, it follows that
\[
\phi(x,y,t)=(p_1(x),\dots,p_s(x),(h_r(\mu(x),y,t))_r).
\]
Let $\nu:W\to\G$, defined as $\nu(x):=(\nu_{ij}(x))_{ij}$, be any regular map such that $D_m\in\nu(W)$. Thanks to the proof of \cite[Lem.\,2.19]{GSa} and since $W$ is nonsingular of dimension $d$, we get that the rank of the jacobian matrix of the map $(x,y,t)\mapsto(p_1(x),\dots,p_s(x),(h_r(\nu(x),y,t))_r)$ at every $(a,v,c)\in\nu^*(\E^*)$ such that $\nu(a)=D_m$ is $\geq k-d+n+1$, hence equal to $k-d+n+1$. Indeed:
\begin{align*}
\nabla p_i(a,v,c)&=(\nabla_x p_i(a),0,0)\quad\text{for every $i=1,\dots,s$};\\
\nabla h_r(a,v,c)&=\Big(\frac{\partial}{\partial x_1} \nu_{r 1}(a)\, v_1,-e_r,0\Big)\quad\text{if $r\in\{m+1,\dots,m+n\}$},\\
\nabla h_{m+n+1}(a,v,c)&=\big(0,2v,2c-1\big),
\end{align*}
where $x=(x_1,\dots,x_k)$, $\nabla_x p_i$ denotes the vector containing the partial derivatives with respect to the variables $x$ and $\{e_1,\ldots,e_{m+n}\}$ denotes the canonical vector basis of $\R^{m+n}$.
Let us complete the proof. Let $(a,b,c)\in\mu^*(\E^*)$, let $G\in O(m+n)$ be such that $D_m=G^T\mu(a)G$ and let $v:=G^Tb$. By the choice of $G$ we see that $|v|^2_{m+n}=|G^T b|^2_{m+n}=|b|^2_{m+n}$, hence $c$ satisfies $|v|^2_{m+n}+c^2-c=0$ as well. Note that $D_m v=G^T\mu(a)GG^Tb=G^T\mu(a)b=G^Tb=v$, i.e., $(D_m,v,c)\in\nu^*(\E^*)$ with $\nu:W\to\G$ defined as $\nu(x):=G^T\mu(x)G$. Define the regular map $\psi:\R^{k}\times\R^{m+n}\times\R\rightarrow\R^s\times\R^{m+n}\times\R$ by
\[
\psi(x,y,t):=(p_1(x),\dots,p_s(x),\nu(x)y-y,|y|^2_{m+n}+t^2-t),
\]
and the linear automorphism $\tau:\R^{m+n}\rightarrow\R^{m+n}$ by $\tau(y)=G^T y$. Since $(\operatorname{id}_{\R^k}\times\tau\times\operatorname{id}_\R)(a,b,c)=(a,v,c)$ and $(\operatorname{id}_{\R^s}\times\tau\times\operatorname{id}_{\R})\circ\phi=\psi\circ(\operatorname{id}_{\R^k}\times\tau\times\operatorname{id}_\R)$, we have that
$J_{\operatorname{id}_{\R^s}\times\tau\times\operatorname{id}_{\R}}(\phi(a,b,c))J_\phi(a,b,c)=J_\psi(a,v,c)J_{\operatorname{id}_{\R^k}\times\tau\times\operatorname{id}_{\R}}(a,b,c)$. Bearing in mind that both matrices $J_{\operatorname{id}_{\R^s}\times\tau\times\operatorname{id}_{\R}}(\phi(a,b,c))$ and $J_{\operatorname{id}_{\R^k}\times\tau\times\operatorname{id}_{\R}}(a,b,c)$ are invertible and since $\nu(a)=D_m$, it follows that $\mathrm{rnk}\,J_\phi(a,b,c)=\mathrm{rnk}\,J_\psi(a,v,c)= k-d+n+1$, as desired.
It remains to show that $\mu^*(\E^*)$ is projectively $\Q$-closed in order to deduce that $\mu^*(\E^*)$ is also defined over $\Q$. The set $\mu^*(\E^*)\subset\R^{k}\times\R^{m+n}\times\R$ is the zero set of the polynomial $|\phi(x,y,t)|_{s+m+n+1}^2\in\Q[(x_{i})_{i},(y_k)_k,t]$ and it is contained in $W\times S^{m+n}$, which is a projectively $\Q$-closed algebraic subset of $\R^{k}\times\R^{m+n}\times\R$ by \cite[Lem.\,2.18]{GSa} and $(\mathrm{d})$ of \cite[Lem.\,2.15]{GSa}; hence $(\mathrm{b})$ of \cite[Lem.\,2.15]{GSa} ensures that $\mu^*(\E^*)$ is projectively $\Q$-closed in $\R^{k}\times\R^{m+n}\times\R$ as well.
\end{proof}
\section{Homology of real embedded Grassmannians}\label{sec:2}
\subsection{$\Q$-Desingularization of real embedded Schubert varieties}\label{subsec:2.1}
Let $\G_{m,n}\subset\R^{(m+n)^2}$ be the embedded real Grassmannian manifold of $m$-planes in $\R^{m+n}$. Let us construct an embedded version of Schubert varieties inducing a cellular decomposition of $\G_{m,n}$. Consider the complete flag of $\R^{m+n}$ consisting of the strictly increasing sequence of each $\R^k$, with $k\leq m+n$, spanned by the first $k$ elements of the canonical basis of $\R^{m+n}$. That is:
\[
0\subset \R\subset\dots\subset\R^i\subset \dots\subset \R^{m+n}.
\]
We will refer to the previous complete flag as the \emph{canonical complete flag of $\R^{m+n}$}. Let us define the \emph{Schubert varieties} of $\G_{m,n}$ with respect to the above complete flag by following the convention in \cite[Chapter 3]{Man01}. Define a \emph{partition} $\lambda=(\lambda_1,\dots,\lambda_m)$ as a decreasing sequence of integers such that $n\geq \lambda_1\geq\dots\geq\lambda_m\geq 0$. Hence, $\lambda$ corresponds uniquely to a Young diagram in an $(m\times n)$-rectangle. Denote by $D_\ell$ the matrix in $\R^{(m+n)^2}$ associated to the orthogonal projection $\R^{m+n}\to\R^\ell$ sending $(x_1,\dots,x_{m+n})\mapsto(x_1,\dots,x_\ell)$ with respect to the canonical basis of $\R^{m+n}$, for every $\ell=1,\dots,m+n$. Hence, $D_\ell$ is the diagonal matrix in $\R^{(m+n)^2}$ having $1$ in the first $\ell$ diagonal positions and $0$ otherwise. Define the \emph{Schubert open cell} of $\G_{m,n}$ associated to $\lambda$ with respect to the canonical complete flag as
\[
\Omega_\lambda:=\{X\in\G_{m,n}\,|\,\mathrm{rnk}(XD_\ell)=k\quad\text{if $n+k-\lambda_k\leq \ell\leq n+k-\lambda_{k+1}$}\}.
\]
Define the \emph{Schubert variety} of $\G_{m,n}$ associated to the partition $\lambda$ with respect to the canonical complete flag as
\begin{equation}\label{eq:sch}
\sigma_\lambda:=\{X\in\G_{m,n}\,|\,\mathrm{rnk}(XD_{n+k-\lambda_k})\geq k, \text{ for } k=1,\dots,m\}.
\end{equation}
It is evident that the partition $\lambda$ is uniquely determined by, and uniquely determines, a sequence of incidence conditions with respect to the canonical complete flag. In addition, the matrix $XD_\ell=(x'_{ij})_{i,j}\in\R^{(m+n)^2}$ satisfies the following relations with respect to $X=(x_{ij})_{i,j}\in\R^{(m+n)^2}$:
\[
x'_{ij}=x_{ij}\text{ if $j\leq \ell$ and }x'_{ij}=0\text{ otherwise.}
\]
Here we summarize some general properties of Schubert varieties translated in our embedded construction. For more details see \cite[Ch. 6]{MS74} and \cite[Sec. 3.2]{Man01}.
\begin{lemma}\label{lem:schubert}
Let $\G_{m,n}\subset\R^{(m+n)^2}$ be an embedded Grassmannian manifold and let $\lambda$ be a partition of the rectangle $m\times n$. Let $\sigma_{\lambda}$ be the Schubert variety in $\G_{m,n}$ defined by the incidence conditions prescribed by $\lambda$ with respect to the canonical complete flag of $\R^{m+n}$. Then:
\begin{enumerate}[label=\emph{(\arabic*)},ref=(\arabic*)]
\item\label{lem:schubert_1} $\sigma_{\lambda}$ is an algebraic subset of $\R^{(m+n)^2}$ and $\Omega_\lambda\subset\operatorname{Nonsing}(\sigma_{\lambda})$;
\item $\Omega_\lambda$ is biregular isomorphic to $\R^{mn-|\lambda|}$, where $|\lambda|:=\sum_{k=1}^{m} \lambda_k$;
\item $\sigma_\lambda$ coincides with the euclidean closure of $\Omega_\lambda$;
\item $\sigma_\lambda=\bigcup_{\mathtt{m}u\geq\lambda} \Omega_\mathtt{m}u$, where $\mathtt{m}u\geq\lambda$ if and only if $\mathtt{m}u_k\geq \lambda_k$ for every $k=1,\dots,m$;
\item $\sigma_\lambda\supset\sigma_\mathtt{m}u$ if and only if $\lambda<\mathtt{m}u$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem:Q-schubert}
Let $\G_{m,n}\subset\R^{(m+n)^2}$ be a Grassmannian manifold and let $\lambda$ be a partition of the rectangle $m\times n$. Then the Schubert variety $\sigma_\lambda$ defined by the incidence conditions prescribed by $\lambda$ with respect to the canonical flag of $\R^{m+n}$ is a projectively $\Q$-closed $\Q$-algebraic subset of $\R^{(m+n)^2}$.
\end{lemma}
\begin{proof}
We want to prove that $\sigma_\lambda$ is $\Q$-algebraic, namely we prove that the conditions in \eqref{eq:sch} are algebraic. Recall that $X\in\G_{m,n}$ is the matrix of the orthogonal projection of $\R^{m+n}$ onto an $m$-dimensional subspace $W$ of $\R^{m+n}$, hence $\ker(X-\operatorname{id}_{\R^{m+n}})=W$. This means that lower bound conditions on $\mathrm{rnk}(XD_\ell)$ correspond to upper bound conditions on $\mathrm{rnk}((X-\operatorname{id}_{\R^{m+n}})D_\ell)$, in particular for every $k=1,\dots,m$ the following hold:
\[
\mathrm{rnk}(XD_{n+k-\lambda_k})\geq k\quad\text{if, and only if,}\quad\mathrm{rnk}((X-\operatorname{id}_{\R^{m+n}})D_{n+k-\lambda_k})\leq n-\lambda_k.
\]
The latter condition is algebraic since it corresponds to the vanishing of the determinants of all $(n-\lambda_k+1)\times(n-\lambda_k+1)$ minors of the matrix $(X-\operatorname{id}_{\R^{m+n}})D_{n+k-\lambda_k}$. In particular, since $\G_{m,n}$ is $\Q$-algebraic, $\operatorname{id}_{\R^{m+n}}$ and $D_{n+k-\lambda_k}$ are matrices with rational coefficients and the determinant is a polynomial with rational coefficients with respect to the entries of the matrix $X$, the algebraic set $\sigma_\lambda$ is $\Q$-algebraic. In addition, since $\G_{m,n}$ is projectively $\Q$-closed, $\sigma_\lambda$ is projectively $\Q$-closed as well by \cite[Lem.\,2.15]{GSa}.
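For instance, for a $2\times2$ matrix $M=(m_{ij})_{i,j}$ the condition $\mathrm{rnk}(M)\leq 1$ amounts to the single polynomial equation $m_{11}m_{22}-m_{12}m_{21}=0$, whose coefficients are rational.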
\end{proof}
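For example (only to make the equations of the previous proof concrete), for $m=n=1$ and $\lambda=(1)$ the condition obtained in the proof reads $\mathrm{rnk}((X-\operatorname{id}_{\R^{2}})D_{1})\leq 0$, i.e.\ the vanishing of the two $1\times 1$ minors $x_{11}-1$ and $x_{21}$; hence
\[
\sigma_{(1)}=\{X\in\G_{1,1}\,|\,x_{11}=1,\,x_{21}=0\}
\]
consists of the single projection matrix $\operatorname{diag}(1,0)$, and it is defined over $\Q$.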
The goal of this section is to find particular embedded desingularizations of the Schubert varieties defined by incidence conditions with respect to the canonical complete flag of $\R^{m+n}$.
\begin{definition}\label{def:Q_des}
Let $V\subset \R^m$ be a real $\Q$-algebraic set of dimension $d$. We say that $V'\subset \R^m{\tt t}imes\R^n$, for some $n\in\N$, is a \emph{desingularization} of $V$ if $V'$ is a nonsingular algebraic subset of $\R^{m+n}$ of dimension $d$ and $\pi|_{V'}:V'{\tt t}o V$ is a birational map, where $\pi:\R^m{\tt t}imes\R^n{\tt t}o\R^m$ is the projection onto the first factor. If, in addition, $V'$ is a $\Q$-nonsingular projectively $\Q$-closed algebraic subset of $\R^{m+n}$ we say that $V'$ is a \emph{$\Q$-desingularization} of $V$.
\end{definition}
Let $m,n\in\N^*:=\N\setminus\{0\}$. Let $\lambda=(\lambda_1,\dots,\lambda_m)$ be a partition whose associated Young diagram is contained in the $(m\times n)$-rectangle. Let $c\in \N^*$ and let $a_1,\dots,a_{c-1}\in\N^*$, $a_c,b_0\in\N$ and $b_1,\dots,b_{c-1}\in\N^*$ be such that:
\begin{enumerate}[label=(\alph*),ref=(\alph*)]
\item\label{en:a} $a_1+\dots+a_c=m\quad{\tt t}ext{and}\quad b_0+\dots+b_{c-1}=n$,
\item\label{en:b} $\lambda_j=\sum_{k=i}^{c-1} b_k$ for every $j$ such that $a_1+\dots+a_{i-1}<j\leq a_1+\dots+a_{i}$, for every $i=1,\dots,c$ (the sum being empty for $i=c$).
\end{enumerate}
The interpretation of the previous integers with respect to the Young diagram associated to the partition $\lambda$ is explained in Figure \ref{fig:partition}.
\begin{figure}
\caption{Disposition of the $a_i$'s and $b_i$'s with respect to the partition $\lambda$.}
\label{fig:partition}
\end{figure}
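As a concrete illustration of this bookkeeping (we will refer to it again below), take $m=3$, $n=4$ and
\[
\lambda=(3,1,0),\qquad (a_1,a_2,a_3)=(1,1,1),\qquad (b_0,b_1,b_2)=(1,2,1),
\]
so that $c=3$ and conditions \ref{en:a} and \ref{en:b} are satisfied.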
\begin{remark}\label{rem:Q-grass}
Let $m',n'\in\N$ such that $m'\leq m$ and $n'\leq n$. Consider the Schubert variety of $\G_{m,n}$ associated to the partition $\lambda=(\lambda_1,\dots,\lambda_m)$ where:
\[
\lambda_k=
\begin{cases}
n \quad&{\tt t}ext{if } k\leq m-m';\\
n-n' &{\tt t}ext{if } k> m-m'.
\end{cases}
\]
If $m=m'$ and $n=n'$ the Schubert variety $\sigma_\lambda$ corresponds to the whole $\G_{m,n}$, otherwise $\sigma_\lambda$ is given by the equations:
\[
\sigma_\lambda=\{X\in\G_{m,n}\,|\,\mathtt{m}athrm{rnk}(XD_{m-m'})=m-m',\,\mathtt{m}athrm{rnk}(XD_{m+n'})=m\}.
\]
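For instance (a purely illustrative special case), for $m=n=2$ and $m'=n'=1$ one gets $\lambda=(2,1)$ and
\[
\sigma_{(2,1)}=\{X\in\G_{2,2}\,|\,\mathrm{rnk}(XD_{1})=1,\ \mathrm{rnk}(XD_{3})=2\},
\]
which is biregularly isomorphic to $\G_{1,1}$ via the map $\varphi$ below, placing a $1$ in position $(1,1)$ and a copy of $X'\in\G_{1,1}$ in the central $2\times 2$ block.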
Clearly $\sigma_\lambda$ is biregularly isomorphic to $\G_{m',n'}$. In our embedded version the $\Q$-biregular isomorphism $\varphi:\G_{m',n'}\to\sigma_\lambda\subset\G_{m,n}$ can be defined as follows: for $X':=(x'_{ij})_{i,j=1,\dots, m'+n'}$, the image $\varphi(X')=(x_{ij})_{i,j=1,\dots, m+n}$ is defined by
\[
x_{ij}=
\begin{cases}
1\quad\quad{\tt t}ext{if $i=j$ and $i\leq m-m'$;}\\
x'_{st}\quad\text{if $m-m'<i,j\leq m+n'$ and $s=i-m+m'$, $t=j-m+m'$;}\\
0\quad\quad{\tt t}ext{otherwise.}
\end{cases}
\]
Recall that $\G_{m',n'}$ is $\Q$-nonsingular and projectively $\Q$-closed. Consider the graph of $\varphi$: it is a nonsingular $\Q$-algebraic subset of $\G_{m',n'}\times\G_{m,n}\subset \R^{(m'+n')^2}\times\R^{(m+n)^2}$, hence projectively $\Q$-closed by \cite[Lem.\,2.15]{GSa} and $\Q$-nonsingular since $\varphi$ is $\Q$-biregular. Thus, $\textnormal{graph}(\varphi)$ is a $\Q$-desingularization of $\sigma_{\lambda}$.
\end{remark}
By the above Remark \ref{rem:Q-grass} we are left to find $\Q$-desingularizations of Schubert varieties $\sigma_\lambda$ of $\G_{m,n}$ defined by incidence conditions with respect to the canonical complete flag such that $a_c$ and $b_0$ are non-null.
We define the \emph{depressions} of the partition $\lambda$, with $a_c,b_0>0$, as the elements of the Young diagram whose coordinates, with respect to the upper corner on the left, are:
\[
(a_1+\dots+a_{i}+1,b_i+\dots+b_{c-1}+1),\quad i=1,\dots,c-1.
\]
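For the partition $\lambda=(3,1,0)$ in the $(3\times 4)$-rectangle considered above (again only as an illustration), the $c-1=2$ depressions are the elements of coordinates $(a_1+1,\,b_1+b_2+1)=(2,4)$ and $(a_1+a_2+1,\,b_2+1)=(3,2)$ of the Young diagram.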
Here we construct a desingularization of the Schubert variety $\sigma_{\lambda}$, arguing by induction on the number $c-1\in\N$ of depressions of the partition $\lambda$.
\begin{lemma}\label{lem:des}
Let $\lambda$ be a partition of the $(m\times n)$-rectangle such that $a_c$ and $b_0$ are non-null. Let $\sigma_{\lambda}$ be the Schubert variety of $\G_{m,n}$ defined by the incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$. Let $m_k:=\sum_{i=1}^{k} a_i$, $n_k:=m+n-m_k$ and $d_k:=m_k+\sum_{i=1}^{k}b_{i-1}$ for every $k=1,\dots,c$.
Then the algebraic set:
\begin{align*}
Z_\lambda:=\{(X,Y_{c-1},\dots,Y_1)&\in \G_{m,n}{\tt t}imes\G_{m_{c-1},n_{c-1}}{\tt t}imes\dots{\tt t}imes\G_{m_1,n_1}\,|\\
&Y_i D_{d_i}=Y_i,\quad{\tt t}ext{for every $i=1,\dots,c-1$},\\
&Y_{i+1} Y_{i}=Y_{i},\quad{\tt t}ext{for every $i=1,\dots,c-2$},\\
&XY_{c-1}=Y_{c-1}\}
\end{align*}
is a desingularization of $\sigma_{\lambda}$.
\end{lemma}
\begin{proof}
We argue by induction on $c\in\N^*:=\N\setminus\{0\}$. Let $c=1$, that is, $a_1,b_0>0$ and $\lambda$ has no depressions, so $\lambda$ is the null partition. Thus, $Z_\lambda=\sigma_{\lambda}=\G_{m,n}$, which is a nonsingular algebraic subset of $\R^{(m+n)^2}$, and there is nothing to prove.
Let $c>1$ and $\lambda$ be a partition with $c-1$ depressions such that $a_c,b_0>0$. Recall that the Schubert variety $\sigma_\lambda$ defined by the incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$ is defined as:
\[
\sigma_\lambda=\{X\in\G_{m,n}\,|\,\mathtt{m}athrm{rnk}(XD_{d_k})\geq m_k\quad{\tt t}ext{for $k=1,\dots,c$.}\}
\]
Consider the algebraic set $Z_\lambda\subset\G_{m,n}\times\G_{m_{c-1},n_{c-1}}\times\dots \times\G_{m_1,n_1}$ as in the statement of Lemma \ref{lem:des}. Define $\pi_i:Z_\lambda\to \G_{m_i,n_i}$, for $i\in\{1,\dots,c\}$ (with $\G_{m_c,n_c}=\G_{m,n}$), to be the restriction to $Z_\lambda$ of the projection from $\G_{m,n}\times\G_{m_{c-1},n_{c-1}}\times\dots \times\G_{m_1,n_1}$ onto the $(c-i+1)$-th factor.
Observe that $\pi_1(Z_\lambda)=\{Y_1\in\G_{m_1,n_1}\,|\,Y_1 D_{d_1}=Y_1\}$ is biregularly isomorphic to $\G_{m_1,d_1-m_1}=\G_{a_1,b_0}$. Let $\mu$ be the partition of the $(m-m_1)\times n$-rectangle defined as $\mu=(\mu_1,\dots,\mu_{m-a_1})$ with
\[
\mu_k= \lambda_{k+a_1} \text{ for every } k=1,\dots,m-a_1.
\]
Then, for every $B_1\in\pi_1(Z_\lambda)$, the fibre $\pi_1^{-1}(B_1)$ is biregularly isomorphic to the set $Z_\mu$; we first check this for the fibre over $D_{m_1}$. Define the biregular isomorphism $\phi:Z_\mu \to(\pi_1)^{-1}(D_{m_1})$ as follows: for $(A,B_{c-1},\dots,B_2)\in Z_\mu$ set $\phi(A,B_{c-1},\dots,B_2):=(\varphi(A),\varphi(B_{c-1}),\dots,\varphi(B_2),D_{m_1})$, where $\varphi:\R^{(m-m_1+n)^2}\to \R^{(m+n)^2}$ is defined as $\varphi((x_{ij})_{ij})=(x'_{ij})_{ij}$, with
\begin{align*}
x'_{ij}:=
\begin{cases}
1\quad \text{if $i=j$ and $i\leq m_1$},\\
x_{st}\quad\text{if $m_1<i,j$, with $s=i-m_1$ and $t=j-m_1$},\\
0\quad\text{otherwise}.
\end{cases}
\end{align*}
Moreover, for every $B_1\in\pi_1(Z_\lambda)$, the fibre $(\pi_1)^{-1}(B_1)$ is biregularly isomorphic to $(\pi_1)^{-1}(D_{m_1})$: indeed, it suffices to choose $G\in O(m+n)$ such that $D_{m_1}=G^{T} B_1 G$ and to apply the conjugation by $G$ to every factor of $(\pi_1)^{-1}(D_{m_1})$ to produce the desired isomorphism. Observe that the partition $\mu$ has exactly $c-2$ depressions; indeed, it is obtained by erasing the first depression $(a_1+1,n-b_0+1)$ of $\lambda$. Thus, by the inductive assumption, the algebraic set $Z_\mu$ is a desingularization of $\sigma_\mu$. In particular:
\[
\dim(Z_\mathtt{m}u)=\dim(\sigma_\mathtt{m}u)=\dim(\sigma_\lambda)-a_1b_0.
\]
Hence, $\pi_1:Z_\lambda\to\G_{a_1,b_0}$ is an algebraic fibre bundle with fibre $Z_\mu$, thus $Z_\lambda$ is a nonsingular algebraic subset of $\R^{(m+n)^2 c}$ of dimension $\dim(\G_{a_1,b_0})+\dim(Z_\mu)=\dim(\sigma_\lambda)$. Moreover, $Z_\lambda$ is a desingularization of $\sigma_\lambda$: indeed, if $A\in\Omega_\lambda$, then $(A,B_{c-1},\dots,B_1)\in Z_\lambda$ if and only if $B_i=A D_{d_i}$ for every $i\in\{1,\dots,c-1\}$. Hence, the map $\pi_c:Z_\lambda\to \sigma_\lambda$ is birational by \ref{lem:schubert_1} of Lemma \ref{lem:schubert}.
\end{proof}
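Before proceeding, let us make Lemma \ref{lem:des} explicit in the running example $\lambda=(3,1,0)$ with $m=3$, $n=4$ (the numerical data below are only illustrative). Here $m_1=1$, $m_2=2$, $m_3=3$, $d_1=2$, $d_2=5$, $d_3=7$, and
\[
Z_\lambda=\{(X,Y_2,Y_1)\in\G_{3,4}\times\G_{2,5}\times\G_{1,6}\,|\,Y_1D_2=Y_1,\ Y_2D_5=Y_2,\ Y_2Y_1=Y_1,\ XY_2=Y_2\},
\]
a nonsingular algebraic set of dimension $\sum_{k=1}^{3}a_k(d_k-m_k)=1+3+4=8=mn-|\lambda|=\dim(\sigma_\lambda)$.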
By Remark \ref{rem:Q-grass}, in order to produce a $\Q$-desingularization of every Schubert variety $\sigma_\lambda$ defined by incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$, we are only left to prove the following result.
\begin{lemma} \label{lem:Q-Z_{lambda}}
Each algebraic fibre bundle $Z_{\lambda}$ as in Lemma \ref{lem:des} is a $\Q$-nonsingular projectively $\Q$-closed algebraic subset of $\R^{(m+n)^2 c}$.
\end{lemma}
\begin{proof}
By definition, $Z_\lambda$ is a $\Q$-algebraic subset of $\R^{(m+n)^2 c}$ defined by the following equations in the variables $X:=(x_{i,j})_{i,j=1,\dots,m+n}$ and $Y_k:=(y^{(k)}_{i,j})_{i,j=1,\dots,m+n}$, for $k=1,\dots,c-1$:
\begin{align*}
&X=X^T,\quad X^2=X,\quad \mathtt{m}athrm{tr}(X)=m;\\
&Y_k=Y_k^T,\quad Y_k^2=Y_k,\quad \mathtt{m}athrm{tr}(Y_k)=m_k\quad{\tt t}ext{for every $k=1,\dots,c-1$};\\
&Y_k D_{d_k}=Y_k \quad{\tt t}ext{for every $k=1,\dots,c-1$};\\
&Y_{k+1}Y_{k}=Y_{k}\quad{\tt t}ext{for every $k=1,\dots,c-2$};\\
&XY_{c-1}=Y_{c-1}.
\end{align*}
Let $\varphi_k:\R^{(m+n)^2 c} {\tt t}o\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}$, for every $k=1,\dots,c-2$, $\varphi_{c-1}:\R^{(m+n)^2 c}{\tt t}o\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}$ and $\varphi_{c}:\R^{(m+n)^2 c}{\tt t}o\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}$ be defined as:
\begin{align*}
\varphi_k(X,Y_{c-1},\dots,Y_1)&:=(Y_k-Y_k^T,Y_k^2-Y_k,Y_kD_{d_k}-Y_k,Y_{k+1}Y_{k}-Y_{k})\\
\varphi_{c-1}(X,Y_{c-1},\dots,Y_1)&:=(Y_{c-1}-Y_{c-1}^T,Y_{c-1}^2-Y_{c-1},Y_{c-1}D_{d_{c-1}}-Y_{c-1},XY_{c-1}-Y_{c-1})\\
\varphi_{c}(X,Y_{c-1},\dots,Y_1)&:=(X-X^T,X^2-X).
\end{align*}
Define $\phi:\R^{(m+n)^2 c}{\tt t}o(\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}) ^{c-1}{\tt t}imes\R^{(m+n)^2}{\tt t}imes\R^{(m+n)^2}$ be the polynomial map:
\[
\phi(X,Y_{c-1},\dots,Y_1):=(\varphi_1(X,Y_{c-1},\dots,Y_1),\dots,\varphi_{c-1}(X,Y_{c-1},\dots,Y_1),\varphi_c(X,Y_{c-1},\dots,Y_1)).
\]
We prove that the polynomials $\mathrm{tr}(X)-m$ and $\mathrm{tr}(Y_k)-m_k$, for every $k=1,\dots,c-1$, together with the polynomial components of $\phi$, suffice to describe the local structure of nonsingular points of $Z_{\lambda}$ in $\R^{(m+n)^2 c}$. Since these polynomials have coefficients in $\Q$ and their common zero set is $Z_{\lambda}$, bearing in mind that
\[
\dim(Z_{\lambda})=\sum_{k=1}^{c} \dim(\G_{a_k,n-\sum_{i=k}^{c-1}b_i})=\sum_{k=1}^{c} a_k \left(n-\sum_{i=k}^{c-1}b_i\right)=\dim(\sigma_{\lambda}),
\]
it suffices to show that, for each $(A,B_{c-1},\dots,B_1)\in Z_{\lambda}$, the rank of the jacobian matrix $J_\phi(A,B_{c-1},\dots,B_1)$ of $\phi$ at $(A,B_{c-1},\dots,B_1)$ is greater than or equal to (and hence equal to)
\begin{align*}
c(m+n)^2-\dim(\sigma_{\lambda})&=\sum_{k=1}^{c}\left((m+n)^2-\dim(\G_{a_k,n-\sum_{i=k}^{c-1}b_i})\right)\\
&=\sum_{k=1}^{c}\left((m+n)^2-a_k (d_k-m_k)\right),
\end{align*}
i.e. $\mathtt{m}athrm{rnk}\,J_\phi(A,B_{c-1},\dots,B_1)\geq c(m+n)^2-\sum_{k=1}^{c}a_k (d_k-m_k)$ for all $(A,B_{c-1},\dots,B_1)\in Z_{\lambda}$.
First, we prove that $\mathrm{rnk}\,J_\phi(D_m,D_{m_{c-1}},\dots,D_{m_1})\geq c(m+n)^2-\dim(\sigma_\lambda)$, where $D_{m_k}$, for every $k=1,\dots,c$, denotes the diagonal matrix in $\R^{(m+n)^2}$ having $1$ in the first $m_k$ diagonal positions and $0$ otherwise, and $D_m=D_{m_c}$. Observe that $D_{m_{k+1}}D_{m_k}=D_{m_k}$ for every $k=1,\dots,c-1$, hence $(D_m,D_{m_{c-1}},\dots,D_{m_1})\in Z_{\lambda}$.
For each $i,j\in\{1,\ldots,m+n\}$, define the polynomial functions $f^{(k)}_{ij},g^{(k)}_{ij}:\R^{(m+n)^2 c}\to\R$ for $k\in\{1,\dots,c\}$, $p^{(k)}_{ij}:\R^{(m+n)^2 c}\to\R$ for $k\in\{1,\dots,c-1\}$ and $q^{(k)}_{ij}:\R^{(m+n)^2 c}\to\R$ for $k\in\{2,\dots,c\}$ by
\begin{align*}
&f^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1):=x_{ij}-x_{ji}, \qquad g^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1):=\textstyle\big(\sum_{\ell=1}^{m+n} x_{i\ell}x_{\ell j}\big)-x_{ij},\\
&f^{(k)}_{ij}(X,Y_{c-1},\dots,Y_1):=y^{(k)}_{ij}-y^{(k)}_{ji}, \qquad g^{(k)}_{ij}(X,Y_{c-1},\dots,Y_1):=\textstyle\big(\sum_{\ell=1}^{m+n} y^{(k)}_{i\ell}y^{(k)}_{\ell j}\big)-y^{(k)}_{ij},\\
&p^{(k)}_{ij}(X,Y_{c-1},\dots,Y_1):=
\begin{cases}
0\quad&\text{if $i,j\leq d_k=\sum_{\ell=1}^{k} (a_\ell+b_{\ell-1})$};\\
-y^{(k)}_{ij}&\text{otherwise},
\end{cases}\\
&q^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1):=y^{(c-1)}_{ij}-\sum_{\ell=1}^{m+n} x_{i\ell}y^{(c-1)}_{\ell j},\\
&q^{(k)}_{ij}(X,Y_{c-1},\dots,Y_1):=y^{(k-1)}_{ij}-\sum_{\ell=1}^{m+n} y^{(k)}_{i\ell}y^{(k-1)}_{\ell j}\quad\text{for $k=2,\dots,c-1$,}
\end{align*}
for all $(X,Y_{c-1},\dots,Y_1)=((x_{ij})_{i,j},(y^{(c-1)}_{ij})_{i,j},\dots,(y^{(1)}_{ij})_{i,j})\in\R^{(m+n)^2 c}$. It follows that
\begin{align*}
\phi(X,Y_{c-1},\dots,Y_1)=\Big(&(f^{(1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},(g^{(1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},\\
&(p^{(1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},(q^{(2)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},\dots,\\
&(f^{(c-1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},(g^{(c-1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},\\
&(p^{(c-1)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},(q^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},\\
&(f^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j},(g^{(c)}_{ij}(X,Y_{c-1},\dots,Y_1))_{i,j}\Big).
\end{align*}
Define, for every $k\in\{1,\dots,c\}$:
\begin{align*}
S^{(k)}_1&:=\{(i,j)\in\{1,\ldots,m+n\}^2\,|\,i<j\leq d_k\},\\
S^{(k)}_2&:=\{(i,j)\in\{1,\ldots,m+n\}^2\,|\,i\leq j\leq m_k\},\\
S^{(k)}_3&:=\{(i,j)\in\{1,\ldots,m+n\}^2\,|\,m_k<i\leq j\leq d_k\},\\
S^{(k)}_4&:=\{(i,j)\in\{1,\ldots,m+n\}^2\,|\,d_k< i {\tt t}ext{ or } d_k< j\},\\
T^{(1)}&:=\varnothing,\\
T^{(k)}&:=\{(i,j)\in\{1,\ldots,m+n\}^2\,|\,m_k<i\leq d_k,\, j\leq m_{k-1}\}.
\end{align*}
Notice that the sum of the cardinalities of $S^{(k)}_1$, $S^{(k)}_2$, $S^{(k)}_3$ and $S^{(k)}_4$ equals $\frac{(d_k-1)d_k}{2}+\frac{m_k(m_k+1)}{2}+\frac{(d_k-m_k)(d_k-m_k+1)}{2}+(m+n)^2-d_k^2=(m+n)^2-m_k(d_k-m_k)$, for every $k\in\{1,\dots,c\}$. In particular, the sum of the cardinalities of $S^{(1)}_1$, $S^{(1)}_2$, $S^{(1)}_3$ and $S^{(1)}_4$ is equal to $(m+n)^2-a_1b_0=(m+n)^2-a_1(d_1-m_1)$. In addition, the cardinality of $T^{(k)}$ is equal to $m_{k-1}(d_k-m_k)$, for every $k\in\{2,\dots,c\}$. Hence the sum of the cardinalities of $S^{(k)}_1$, $S^{(k)}_2$, $S^{(k)}_3$, $S^{(k)}_4$ and $T^{(k)}$ equals $(m+n)^2-a_k(d_k-m_k)$, for every $k\in\{2,\dots,c\}$.
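For the reader's convenience we spell out the elementary computation behind the last equalities (nothing more than arithmetic):
\[
\frac{d_k(d_k-1)}{2}+\frac{m_k(m_k+1)}{2}+\frac{(d_k-m_k)(d_k-m_k+1)}{2}=d_k^2-m_kd_k+m_k^2,
\]
so that adding $(m+n)^2-d_k^2$ yields $(m+n)^2-m_k(d_k-m_k)$, and adding further the cardinality $m_{k-1}(d_k-m_k)$ of $T^{(k)}$ yields $(m+n)^2-(m_k-m_{k-1})(d_k-m_k)=(m+n)^2-a_k(d_k-m_k)$.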
By a direct computation, we see that
\[
\begin{array}{ll}
\mathtt{m}athtt{n}abla f^{(1)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(1)}_{ij}-E^{(1)}_{ji}) & {\tt t}ext{ if $(i,j)\in S^{(1)}_1$,}\\
\mathtt{m}athtt{n}abla g^{(1)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(1)}_{ij}) & {\tt t}ext{ if $(i,j)\in S^{(1)}_2$,}\\
\mathtt{m}athtt{n}abla g^{(1)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,-E^{(1)}_{ij}) & {\tt t}ext{ if $(i,j)\in S^{(1)}_3$,}\\
\mathtt{m}athtt{n}abla p^{(1)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(1)}_{ij}) & {\tt t}ext{ if $(i,j)\in S^{(1)}_4$,}\\
\end{array}
\]
and, for every $k\in\{2,\dots,c\}$
\[
\begin{array}{ll}
\mathtt{m}athtt{n}abla f^{(k)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(k)}_{ij}-E^{(k)}_{ji},0,\dots,0) & {\tt t}ext{ if $(i,j)\in S^{(k)}_1$,}\\
\mathtt{m}athtt{n}abla g^{(k)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(k)}_{ij},0,\dots,0) & {\tt t}ext{ if $(i,j)\in S^{(k)}_2$,}\\
\mathtt{m}athtt{n}abla g^{(k)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,-E^{(k)}_{ij},0,\dots,0) & {\tt t}ext{ if $(i,j)\in S^{(k)}_3$,}\\
\mathtt{m}athtt{n}abla p^{(k)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,E^{(k)}_{ij},0,\dots,0) & {\tt t}ext{ if $(i,j)\in S^{(k)}_4$,}\\
\mathtt{m}athtt{n}abla q^{(k)}_{ij}(D_m,D_{m_{c-1}},\dots,D_{m_1})=(0,\dots,0,-E^{(k)}_{ij},0,\dots,0) & {\tt t}ext{ if $(i,j)\in T^{(k)}$,}\\
\end{array}
\]
where $E^{(k)}_{ij}$ denotes the matrix in $\R^{(m+n)^2}$ whose $(i,j)$-coefficient equals $1$ and whose other coefficients are $0$, placed in the $(c-k+1)$-th position of the vector $(X,Y_{c-1},\dots,Y_1)\in\R^{(m+n)^2 c}$, for every $k\in\{1,\dots,c\}$. Consequently, we have that
\[
\mathtt{m}athrm{rnk}\,J_\phi(D_m,D_{m_{c-1}},\dots,D_{m_1})\geq \sum_{k=1}^{c}((m+n)^2-a_k(d_k-m_k))=c(m+n)^2-\dim(\sigma_{\lambda}).
\]
Let $(A,B_{c-1},\dots,B_1)\in Z_{\lambda}$ and let $G\in O(m+n)$ be such that $D_m=G^TAG$ and $D_{m_k}=G^TB_kG$, for every $k\in\{1,\dots,c-1\}$. Define the linear automorphisms $\psi:\R^{(m+n)^2}{\tt t}o\R^{(m+n)^2}$ by $\psi(X):=G^TXG$ and $\psi^{{\tt t}imes k}:\R^{(m+n)^2 k}{\tt t}o \R^{(m+n)^2 k}$ by $\psi^{{\tt t}imes k}(X_1,\dots,X_k):=(\psi(X_1),\dots,\psi(X_k))$, for $k\in \N^*$. Since $\psi(A)=D_m$ and $(\psi^{{\tt t}imes (4c-2)})\circ\phi=\phi\circ(\psi^{{\tt t}imes c})$, we have that
\begin{align*}
J_{\psi^{{\tt t}imes (4c-2)}}(\phi(A,B_{c-1},\dots,B_1))&J_\phi(A,B_{c-1},\dots,B_1)=\\
&J_\phi(D_m,D_{m_{c-1}},\dots,D_{m_1})J_{\psi^{{\tt t}imes c}}(A,B_{c-1},\dots,B_1).
\end{align*}
Bearing in mind that both matrices $J_{\psi^{{\tt t}imes (4c-2)}}(\phi(A,B_{c-1},\dots,B_1))$ and $J_{\psi^{{\tt t}imes c}}(A,B_{c-1},\dots,B_1)$ are invertible, it follows that $\mathtt{m}athrm{rnk}\,J_\phi(A,B_{c-1},\dots,B_1)=\mathtt{m}athrm{rnk}\,J_\phi(D_m,D_{m_{c-1}},\dots,D_{m_1})\geq c(m+n)^2-\dim(\sigma_{\lambda})$, as desired. Since $Z_{\lambda}\subset\R^{(m+n)^2 c}$ is $\Q$-algebraic and is contained in the projectively $\Q$-closed algebraic set $\G_{m,n}{\tt t}imes\G_{m_{c-1},n_{c-1}}{\tt t}imes\dots{\tt t}imes\G_{m_1,n_1}$ of $\R^{(m+n)^2 c}$, \cite[Lem.\,2.15]{GSa} ensures that $Z_{\lambda}$ is projectively $\Q$-closed in $\R^{(m+n)^2 c}$ as well. This proves that $Z_\lambda$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{(m+n)^2 c}$, as desired.
\end{proof}
A combination of Remark \ref{rem:Q-grass}, Lemma \ref{lem:des} and Lemma \ref{lem:Q-Z_{lambda}} provides a proof of the following fundamental result.
\begin{theorem}\label{thm:Q-desingularization}
Let $\G_{m,n}\subset \R^{(m+n)^2}$ be an embedded Grassmannian manifold and let $\sigma_{\lambda}$ be any Schubert variety of $\G_{m,n}$ defined by incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$, that is
\[
0\subset\R\subset\R^2\subset\dots\subset\R^{m+n}.
\]
Then, $\sigma_{\lambda}$ admits a $\Q$-desingularization.
\end{theorem}
\subsection{Real embedded Grassmannians have totally $\Q$-algebraic homology}\label{seubsec:2.2}
Let $V$ be a subset of $\R^{m+n}$. For every $k\leq m+n$, denote by $H_k(V,\Z/2\Z)$ the $k$-th homology group of $V$ with coefficients in $\Z/2\Z$. It is well known that, if $V$ is a compact algebraic subset of $\R^{m+n}$ of dimension $m$, the fundamental class $[V]$ of $V$ in $H_m(V,\Z/2\Z)$ is a non trivial homology class. More details about fundamental classes of algebraic sets can be found in \cite[Ch. 11, Sec. 3]{BCR98} via triangulations. In the same way the existence of fundamental classes of compact real algebraic sets is a consequence of Hironaka's desingularization theorem and the existence of the fundamental class for compact $\mathtt{m}athscr{C}^\infty$ manifolds.
\begin{definition}\label{def:tot_Q_hom}
Let $k\in\N$ and $\alpha\in H_k(V,\Z/2\Z)$. We say that $\alpha$ is \emph{projectively $\Q$-algebraic} if there exists a projectively $\Q$-closed $\Q$-nonsingular real algebraic subset $Z$ of $V{\tt t}imes\R^p$ of dimension $k$ such that $\pi_{1*}([Z])=\alpha$, where $\pi_1:Z\rightarrow V$ is the projection map onto the first $(m+n)$-coordinates and $[Z]$ is the fundamental class of $Z$ in $H_k(Z,\Z/2\Z)$.
We say that $V$ has \emph{projectively $\Q$-algebraic homology} if every $\alpha\in H_k(V,\Z/2\Z)$ is projectively $\Q$-algebraic, for every $k\in\N$.
\end{definition}
Let us recall some notation about CW complexes. Let $X$ be a topological space endowed with a finite CW complex structure $\mathcal{S}$ of dimension $m$. We denote by $\mathcal{S}^{(i)}$ the set of open $i$-cells of $\mathcal{S}$, for every $i=0,\dots,m$. Denote by $X_i:=\bigcup_{\Omega\in\mathcal{S}^{(i)}} \operatorname{Cl}(\Omega)$ the $i$-skeleton of $X$, for every $i=0,\dots,m$. Define $C_i(\mathcal{S},\Z/2\Z):=H_i(X_i,X_{i-1};\Z/2\Z)$ to be the group of unoriented cellular $i$-chains of $\mathcal{S}$, for every $i=0,\dots,m$. Let $\partial_i^\mathcal{S}:C_i(\mathcal{S},\Z/2\Z)\to C_{i-1}(\mathcal{S},\Z/2\Z)$ denote the boundary operator in cellular homology, for every $i=0,\dots,m$. For more details about CW complexes and their homological theory we refer to \cite{LW69}.
\begin{lemma}\label{lem:CW}
Let $V\subset\R^{m+n}$ be a compact algebraic subset of dimension $m$. Suppose that $V$ admits a finite CW complex structure $\mathcal{S}$ such that the closure of each cell is algebraic. Then, for every $i=0,\dots,m$:
\[
H_i(V,\Z/2\Z)=<\{[\operatorname{Cl}(\Omega)]\in H_i(V,\Z/2\Z)\,|\,\Omega\in\mathtt{m}athcal{S}^{(i)}\}>.
\]
\end{lemma}
\begin{proof}
By classical arguments about cellular and simplicial homology, it suffices to prove that $\{[\operatorname{Cl}(\Omega)]\in H_i(\mathcal{S},\Z/2\Z)\,|\,\Omega\in\mathcal{S}^{(i)}\}$ constitutes a basis of $H_i(\mathcal{S},\Z/2\Z)$, for every $i=0,\dots,m$. Since $\operatorname{Cl}(\Omega)$ is algebraic for every open cell $\Omega\in\mathcal{S}$, the fundamental class of $\operatorname{Cl}(\Omega)$ is a well-defined homology class. If $\Omega\in\mathcal{S}^{(i)}$, then for every $\Omega'\in\mathcal{S}^{(i+1)}$ we have $\partial_{i+1}^{\mathcal{S}}(\operatorname{Cl}(\Omega'))=0$, since $\operatorname{Cl}(\Omega')$ is algebraic as well. Hence, we get that $[\operatorname{Cl}(\Omega)]\in H_i(\mathcal{S},\Z/2\Z)$ is nonzero and linearly independent from $\{[\operatorname{Cl}(\Omega')]\in H_i(\mathcal{S},\Z/2\Z)\,|\,\Omega'\in\mathcal{S}^{(i)}\,\text{and}\,\Omega'\neq\Omega\}$ for every choice of $\Omega\in\mathcal{S}^{(i)}$ and $i\in\{0,\dots,m\}$, as desired.
\end{proof}
Following the notation of Subsection \ref{subsec:2.1} we refer to embedded Schubert varieties $\sigma_{\lambda}$ of $\G_{m,n}\subset\R^{(m+n)^2}$ defined by incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$.
\begin{corollary}\label{cor:homology}
Let $\G_{m,n}\subset\R^{(m+n)^2}$ be an embedded Grassmannian manifold. Then, for every $k=0,\dots,mn$:
\[
H_k(\G_{m,n},\Z/2\Z)=<[\sigma_{\lambda}]\in H_k(\G_{m,n},\Z/2\Z)\,|\,|\lambda|=mn-k>,
\]
where $\lambda$ ranges over the partitions of the $(m\times n)$-rectangle with $|\lambda|=mn-k$ and $\sigma_{\lambda}$ denotes the Schubert variety of $\G_{m,n}$ defined by the incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag.
\end{corollary}
\begin{proof}
By Lemma \ref{lem:schubert}, the family of the cells $\Omega_{\lambda}$, as $\lambda$ ranges over the partitions of the $(m\times n)$-rectangle, constitutes a finite CW complex structure on $\G_{m,n}$ such that $\sigma_{\lambda}=\operatorname{Cl}(\Omega_{\lambda})$ is algebraic for every partition $\lambda$ of the $(m\times n)$-rectangle. Hence, the assertion follows from Lemma \ref{lem:CW}.
\end{proof}
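As an elementary example (stated only for orientation), for $m=n=1$ the Grassmannian $\G_{1,1}\subset\R^4$ is diffeomorphic to a circle, the partitions of the $(1\times 1)$-rectangle are $(0)$ and $(1)$, and Corollary \ref{cor:homology} gives
\[
H_0(\G_{1,1},\Z/2\Z)=<[\sigma_{(1)}]>,\qquad H_1(\G_{1,1},\Z/2\Z)=<[\sigma_{(0)}]>=<[\G_{1,1}]>,
\]
where $\sigma_{(1)}$ is the single point $\operatorname{diag}(1,0)$ described after the proof of Lemma \ref{lem:Q-schubert}.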
\begin{theorem}\label{thm:Q-algebraic-homology}
Each $\G_{m,n}$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{(m+n)^2}$ having projectively $\Q$-algebraic homology.
\end{theorem}
\begin{proof}
By Corollary \ref{cor:homology}, for every $k=0,\dots,mn$:
\[
H_k(\G_{m,n},\Z/2\Z)=<[\sigma_{\lambda}]\in H_k(\G_{m,n},\Z/2\Z)\,|\,|\lambda|=mn-k>,
\]
where each $\sigma_{\lambda}$ is a Schubert variety of $\G_{m,n}$ defined by incidence conditions, prescribed by $\lambda$, with respect to the canonical complete flag of $\R^{m+n}$. By Theorem \ref{thm:Q-desingularization}, each Schubert variety $\sigma_\lambda$ as above admits a $\Q$-desingularization, that is: there exists a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Z_\lambda$ of $\R^{(m+n)^2}{\tt t}imes \R^p$ of dimension $\dim(\sigma_{\lambda})$, for some $p\in\N$, such that $\pi_1:Z_\lambda{\tt t}o\sigma_{\lambda}$ is a birational map. Observe that, since $\pi_1:Z_\lambda{\tt t}o\sigma_{\lambda}$ is surjective, injective onto the Zariski open subset $\Omega_\lambda$ such that $\operatorname{Cl}(\Omega_{\lambda})=\sigma_{\lambda}$ and $\dim(Z_\lambda)=\dim(\sigma_{\lambda})$, we get that $\pi_{1*}([Z_\lambda])=[\sigma_{\lambda}]$, as desired.
\end{proof}
\section{Relative $\Q$-algebraic constructions}\label{sec:3}
\subsection{$\Q$-Algebraic bordism classes and unoriented spine bordisms}\label{subsec:3.1}
In \cite{GSa}, Ghiloni and the author introduce some variants `defined over $\Q$' of the classical algebraic unoriented bordism and investigate its relation with projectively $\Q$-algebraic homology. Here we briefly recall those notions and useful results. Let $W\subset\R^k$ be a real algebraic set. Given a compact $\mathtt{m}athscr{C}^\infty$ manifold $P$ and a $\mathtt{m}athscr{C}^\infty$ map $f:P{\tt t}o W$, we say that the unoriented bordism class of $f$ is \emph{projectively $\Q$-algebraic} if there exist a compact $\mathtt{m}athscr{C}^\infty$ manifold $T$ with boundary, a projectively $\Q$-closed $\Q$-nonsingular real algebraic set $Y\subset\R^h$, a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi:P\sqcup Y{\tt t}o\partial T$ and a $\mathtt{m}athscr{C}^\infty$ map $F:T\rightarrow W$ such that $F\circ\jmath\circ(\psi|_P)=f$ and $F\circ \jmath\circ(\psi|_Y)$ is a $\Q$-regular map, where $\jmath:\partial T\hookrightarrow T$ is the inclusion map.
\begin{definition}\label{def:Q-bordism}
We say that $W$ has \emph{projectively $\Q$-algebraic unoriented bordism} if, for all $p\in\{0,\ldots,d\}$, where $d:=\dim(W)$, for every compact $\mathscr{C}^\infty$ manifold $P$ of dimension $p$ and for every $\mathscr{C}^\infty$ map $f:P\to W$, the unoriented bordism class of $f$ is projectively $\Q$-algebraic.
\end{definition}
Here we recall the fundamental result of \cite{GSa} about the equivalence of Definition \ref{def:tot_Q_hom} and Definition \ref{def:Q-bordism}. That is:
\begin{theorem}[{\cite[Thm.\,2.25]{GSa}}]\label{thm:Q_homology}
Let $W\subset\R^k$ be a $\Q$-nonsingular $\Q$-algebraic set. The following assertions are equivalent.
\begin{enumerate}[label=\emph{(\roman*)},ref=(\roman*)]
\item\label{en:Q_homology1} $W$ has projectively $\Q$-algebraic unoriented bordism.
\item\label{en:Q_homology2} $W$ has projectively $\Q$-algebraic homology.
\end{enumerate}
\end{theorem}
Let us specify `over $\Q$' the construction of the algebraic unoriented spine cobordism by Akbulut and King in \cite[Lem.\ 4.1]{AK81b}.
\begin{lemma}\label{lem:Q_tico_bordism}
Let $M$ be a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $\R^{m+n}$ of dimension $m$ and let $M_i$, for $i\in\{1,\dots,\ell\}$, be closed $\mathtt{m}athscr{C}^\infty$ submanifolds of $M$ of codimension $c_i$ in general position. Then there are a compact $\mathtt{m}athscr{C}^\infty$ manifold with boundary $T$ and proper $\mathtt{m}athscr{C}^\infty$ submanifolds with boundary $T_i$, for $i=1,\dots,\ell$, in general position, a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Y$ of $\R^h$, for some $h\in\N$, a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi:M\sqcup Y{\tt t}o \partial T$ such that:
\begin{enumerate}[label=\emph{(\alph*)}, ref=(\alph*)]
\item\label{Q_tico_bordism_a} $Y$ is the disjoint union of projectively $\Q$-closed $\Q$-nonsingular algebraic subsets $Y^\alpha$ of $\R^{h}$, for every $\alpha\subset\{1,\dots,\ell\}$ such that $\bigcap_{i\in\alpha}M_i\mathtt{m}athtt{n}eq\varnothing$.
\item\label{Q_tico_bordism_b} $\partial T\cap T_i=\partial T_i$, $\psi(M)\cap T_i=\psi(M_i)$ and $\psi(Y^\alpha)\cap T_i=\psi(Y_{i}^\alpha)$ where $Y_i^{\alpha}$, for $i=1,\dots,\ell$, are projectively $\Q$-closed $\Q$-nonsingular algebraic subsets of $Y^\alpha$ in general position with $Y_i^\alpha=\varnothing$ whenever $i\mathtt{m}athtt{n}otin\alpha$.
\item\label{Q_tico_bordism_d} For every $\alpha\subset\{1,\dots,\ell\}$ and $i\in\alpha$, there is a $\Q$-regular function $\mathtt{m}u_i ^\alpha:Y_i^\alpha\rightarrow \G_{c_i,m+n-c_i}$ such that
\[
Y^\alpha=(\mathtt{m}u_i^{\alpha})^*(\E_{c_i,m+n-c_i}).
\]
In particular, $\mu_i^{\alpha}$ is the Gauss mapping of $Y_i^\alpha$ in $Y^\alpha$.
\end{enumerate}
\end{lemma}
\begin{proof}
For every $\alpha\subset\{1,\dots,\ell\}$ we denote by $M_\alpha:=\bigcap_{i\in\alpha} M_i$, if $\alpha\neq\varnothing$, and $M_{\varnothing}:=M$. We argue by induction on the subsets $\alpha$ of $\{1,\dots,\ell\}$ such that $M_\alpha\neq\varnothing$. The case in which all $M_\alpha=\varnothing$, for every $\alpha\subset\{1,\dots,\ell\}$, means that $M=M_\varnothing=\varnothing$, thus the lemma follows by taking $T=\varnothing$. Suppose the set of $\alpha\subset\{1,\dots,\ell\}$ such that $M_\alpha\neq \varnothing$ is non-empty. Let $\alpha$ be such that $M_\alpha\neq\varnothing$ and $M_{\alpha'}=\varnothing$ for every $\alpha'\subset\{1,\dots,\ell\}$ such that $\alpha\subsetneq\alpha'$. Let $\beta_i:M_i\rightarrow\G_{c_i,m+n-c_i}$ be the Gauss mapping of $M_i$ in $M$ for every $i\in\alpha$. Let $\G_\alpha:=\prod_{i\in\alpha} \G_{c_i,m+n-c_i}$. By Theorem \ref{thm:Q-algebraic-homology} and \cite[Lem.\,2.26]{GSa}, $\G_\alpha$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{(m+n)^2 |\alpha|}$ having projectively $\Q$-algebraic homology. Let $\beta_\alpha:M_\alpha\rightarrow \G_\alpha$ be the $\mathscr{C}^\infty$ function defined as $\beta_\alpha:=\prod_{i\in\alpha}\beta_i$. Thus, Theorem \ref{thm:Q-algebraic-homology}, combined with Theorem \ref{thm:Q_homology}, ensures the existence of $k_\alpha\in\N$, a compact $\mathscr{C}^\infty$ manifold with boundary $T_\alpha$, a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Y_\alpha$ of $\R^{(m+n)^2 |\alpha|}\times\R^{k_\alpha}$, a $\mathscr{C}^\infty$ diffeomorphism $\psi_\alpha:M_\alpha\sqcup Y_\alpha\to \partial T_\alpha$ and a $\mathscr{C}^\infty$ map $\mu^\alpha:T_\alpha\rightarrow\G_\alpha$ such that $\mu^\alpha\circ\jmath_\alpha\circ(\psi_\alpha|_{M_\alpha})=\beta_\alpha$ and $g_\alpha:=\mu^\alpha\circ \jmath_\alpha\circ(\psi_\alpha|_{Y_\alpha})$ is a $\Q$-regular map, where $\jmath_\alpha:\partial T_\alpha\hookrightarrow T_\alpha$ is the inclusion map.
Let $\E_\alpha^*:=\prod_{i\in\alpha}\E_{c_i,m+n-c_i}^*$. Define the pullback bundle of $\E_\alpha^*$ via $\mathtt{m}u^\alpha$ as $S^\alpha:=(\mathtt{m}u^\alpha)^{\ast}(\E_\alpha^*)$ and the submanifolds $S_i^\alpha$ of $S^\alpha$ as follows
\begin{align*}
S^\alpha&:=\{(x,y_1,t_1,\dots,y_{|\alpha|},t_{|\alpha|}) \in T_\alpha{\tt t}imes(\R^{m+n}{\tt t}imes\R)^{|\alpha|}\,|\,(\mathtt{m}u^\alpha(x),y_1,t_1,\dots,y_{|\alpha|},t_{|\alpha|})\in \E_\alpha^*\}\\
S_i^\alpha&:=\{(x,y_1,t_1,\dots,y_{|\alpha|},t_{|\alpha|})\in S^\alpha\,|\,y_i=0,\,t_i=0\},\, {\tt t}ext{for $i\in\{1,\dots,|\alpha|\}$.}
\end{align*}
By definition, the $S_i^\alpha$, for $i=\{1,\dots,|\alpha|\}$, are in general position and $\bigcap_{i\in\alpha}S_i^\alpha=T_\alpha{\tt t}imes\{0\}\subset T_\alpha{\tt t}imes(\R^{m+n}{\tt t}imes\R)^{|\alpha|}$. In addition, considering the projections $\pi_{0}^{i}:S_i^\alpha\rightarrow T_\alpha$ and $\pi_i:\G_\alpha\rightarrow \G_{c_i,m+n-c_i}$, we define $\mathtt{m}u_i^\alpha:S_i^\alpha\rightarrow \G_{c_i,m+n-c_i}$ as $\mathtt{m}u_i^\alpha=\pi_i\circ\mathtt{m}u^\alpha\circ\pi_{0}^{i}$. Thus, we deduce that $S^\alpha$ is the pullback bundle of $\E_{c_i,m+n-c_i}^*$ by $\mathtt{m}u_i^\alpha$, i.e. $S^\alpha=(\mathtt{m}u_i^\alpha)^*(\E_{c_i,m+n-c_i}^*)$, where
\begin{align*}
(\mu_i^\alpha)^*(\E_{c_i,m+n-c_i}^*):=\{(x,y^1,t_1,\dots,y^{|\alpha|},t_{|\alpha|},y^{|\alpha|+1},t_{|\alpha|+1})&\in S_i^\alpha\times \R^{m+n}\times\R\,|\\ \,(\mu_i^\alpha(x),y^{|\alpha|+1},t_{|\alpha|+1})&\in\E_{c_i,m+n-c_i}^*\}.
\end{align*}
Thus, $S^\alpha$ and the $S_i^\alpha$, for every $i\in\alpha$, are $\mathtt{m}athscr{C}^\infty$ manifolds with boundary satisfying $\partial S_i^\alpha\subset\partial S^\alpha$. Define $M^\alpha:=\beta_\alpha^*(\E^*_\alpha)=(\mathtt{m}u^\alpha\circ\jmath_\alpha\circ\psi_\alpha)|_{M_\alpha}^*(\E^*_\alpha)\subset M_\alpha{\tt t}imes\R^{(m+n)^2 |\alpha|}$ and $Y^\alpha:=g_\alpha^*(\E^*_\alpha)=(\mathtt{m}u^\alpha\circ\jmath_\alpha\circ\psi_\alpha)|_{Y_\alpha}^*(\E^*_\alpha)\subset\R^{(m+n)^2 |\alpha|}{\tt t}imes\R^{k_\alpha}$. Observe that, by Lemma \ref{lem:Q_sphere_pullback}, we deduce that $Y^\alpha$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{(m+n)^2 |\alpha|}{\tt t}imes\R^{k_\alpha}$. Since $\psi_\alpha:M_\alpha\sqcup Y_\alpha{\tt t}o\partial T_\alpha$ is a diffeomorphism, we deduce that $\Psi_\alpha:M^\alpha\sqcup Y^\alpha{\tt t}o\partial S^\alpha$ defined as $\Psi_\alpha(x,y^1,t_1,\dots,y^{|\alpha|},t_{|\alpha|})=(\psi_\alpha(x),y^1,t_1,\dots,y^{|\alpha|},t_{|\alpha|})$ is a diffeomorphism as well. Hence, define $Y^\alpha_i:=Y^\alpha\cap\Psi_\alpha^{-1}(\partial S^\alpha_i)$, for every $i\in\alpha$. Observe that $Y^\alpha_i=(\mathtt{m}u^\alpha_{\alpha\setminus \{i\}}|_{\Psi_\alpha(Y_\alpha)}\circ \Psi_\alpha|_{Y_\alpha})^*(\E^*_{\alpha\setminus \{i\}})$, where $\mathtt{m}u^\alpha_{\alpha\setminus \{i\}}:T_\alpha{\tt t}o\G_{c_1,m+n-c_1}{\tt t}imes\dots{\tt t}imes\G_{c_{i-1},m+n-c_{i-1}}{\tt t}imes\{0\}{\tt t}imes\G_{c_{i+1},m+n-c_{i+1}}{\tt t}imes\dots{\tt t}imes\G_{c_{|\alpha|},m+n-c_{|\alpha|}}$ defined as $\mathtt{m}u^\alpha_{\alpha\setminus\{i\}}(x):=(\mathtt{m}u^\alpha_1(x),\dots,\mathtt{m}u^\alpha_{i-1}(x),0,\mathtt{m}u^\alpha_{i+1}(x),\dots,\mathtt{m}u^\alpha_{|\alpha|}(x))$ and
\begin{equation}\label{eq:subbundle}
\E^*_{\alpha\setminus \{i\}}:=\{(y^1,t_1,\dots,y^{|\alpha|},t_{|\alpha|})\in\E^*_\alpha\,|\,y^i=0,\,t_i=0\},
\end{equation}
which is a projectively $\Q$-closed $\Q$-nonsingular algebraic sphere bundle by Lemma \ref{lem:Q_sphere_bundle}. Observe that $(\mu^\alpha_{\alpha\setminus \{i\}}\circ \Psi_\alpha)|_{Y_\alpha}$ is $\Q$-regular since $(\mu^\alpha\circ\psi_\alpha)|_{Y_\alpha}$ is so. Thus, $Y^\alpha_i$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{(m+n)^2 |\alpha|}\times\R^{k_\alpha}$ by Lemma \ref{lem:Q_sphere_pullback}, for every $i\in\alpha$.
Since $\mu^\alpha|_{M_\alpha}$ classifies the normal bundle of each $M_i$ in $M$ along $M_\alpha$, for $i\in \alpha$, we can select two sufficiently small closed tubular neighborhoods $U_\alpha$ and $V_\alpha$ of $M_\alpha$ in $M^\alpha$ and in $M$, respectively, which are diffeomorphic via a diffeomorphism $h_\alpha:U_\alpha\rightarrow V_\alpha$ satisfying $h_\alpha(U_\alpha\cap S_i^\alpha)=V_\alpha\cap M_i$, for every $i\in\alpha$. Consider the manifold with boundary $S$ defined as $S^\alpha\cup (M\times[0,1])$ identifying $U_\alpha$ and $V_\alpha\times\{1\}$ via $h_\alpha\times\{1\}:U_\alpha\to V_\alpha\times\{1\}$ defined as $(h_\alpha\times\{1\})(a)=(h_\alpha(a),1)$, after smoothing corners. In the same way define the submanifolds with boundary $S_i$ as $S_i^\alpha\cup (M_i\times[0,1])$ identifying $U_\alpha\cap S_i^\alpha$ with $(V_\alpha\cap M_i)\times\{1\}$ via $h_\alpha\times\{1\}$. Observe that the submanifolds $S_i$ of $S$, with $i\in\alpha$, are in general position.
Define the manifold $N$ with submanifolds in general position $N_i$, for every $i\in\{1,\dots,\ell\}$, as follows:
\begin{align*}
N&:=(M^\alpha\setminus\operatorname{Int}(U_\alpha))\cup_{h_\alpha}(M\setminus\operatorname{Int}(V_\alpha)),\\
N_i&:=
\begin{cases}
N\cap S_i\quad&{\tt t}ext{if $i\in\alpha$,}\\
M_i&{\tt t}ext{otherwise.}
\end{cases}
\end{align*}
Observe that, by construction, $\partial(S^\alpha\cup_{h_\alpha} (M{\tt t}imes[0,1]))=N\sqcup Y^\alpha\sqcup M$, $\partial(S^\alpha_i\cup_{h_\alpha} (M_i{\tt t}imes[0,1]))=N_i\sqcup Y^\alpha_i\sqcup M_i$, for every $i\in\alpha$, and $\partial M_i{\tt t}imes[0,1]=N_i\sqcup M_i$, for every $i\mathtt{m}athtt{n}otin\alpha$. In particular, it holds that $N_\alpha:=\bigcap_{i\in\alpha}N_i=\varnothing$. By Whitney embedding theorem, there is a manifold $M'\subset\R^{2m+1}$ with submanifolds in general position $M'_i$, for $i\in\{1,\dots, \ell\}$, which is diffeomorphic to $N$ via a diffeomorphism $\varphi:M'{\tt t}o N$ such that $\varphi(M'_i)=N_i$, for every $i\in\{1,\dots,\ell\}$. Thus, by inductive assumption on $M'\subset\R^{2m+1}$, there exist $k'\in\N$, a manifold with boundary $T'$ and submanifolds with boundary $T'_i$, for every $i\in\{1,\dots,\ell\}$, with transverse intersection, a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Y'$ of $\R^{k'}$, for some $k'\in\N$, a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi':M'\sqcup Y'{\tt t}o \partial T'$ (wlg we can assume $\psi'(M')=N$ and $\psi'(M'_i)=N_i$) such that:
\begin{enumerate}[label={(\alph*$'$)}, ref=(\alph*$'$)]
\item $Y'$ is the disjoint union of projectively $\Q$-closed $\Q$-nonsingular algebraic subsets $Y^{'\alpha}$ of $\R^{m+n}\times\R^{k'}$, for every $\alpha\subset\{1,\dots,\ell\}$ such that $\bigcap_{i\in\alpha}M'_i\neq\varnothing$.
\item $\partial T'\cap T'_i=\partial T'_i$, $N\cap T'_i=\psi'(M')\cap T'_i=\psi'(M'_i)=N_i$ and $\psi'(Y^{'\alpha})\cap T'_i=\psi'(Y_{i}^{'\alpha})$ where $Y_i^{'\alpha}$, for $i\in\{1,\dots,\ell\}$, are projectively $\Q$-closed $\Q$-nonsingular algebraic subsets of $Y^{'\alpha}$ transverse to each other with $Y_i^{'\alpha}=\varnothing$ whenever $i\notin\alpha$.
\item For every $\alpha\subset\{1,\dots,\ell\}$ and $i\in\alpha$, there is a $\Q$-regular function $\mathtt{m}u_i ^{'\alpha}:Y_i^{'\alpha}\rightarrow \G_{c_i,2m+1-c_i}$ such that
\[
Y^{'\alpha}=(\mu_i^{'\alpha})^*(\E_{c_i,2m+1-c_i}).
\]
In particular, $\mu_i^{'\alpha}$ is the Gauss mapping of $Y_i^{'\alpha}$ in $Y^{'\alpha}$.
\end{enumerate}
Define $T:=S\cup T'$ and $T_i:=S_i\cup T'_i$, after smoothing corners. Let $k:=\max(k_\alpha,k')$ and let $\iota_\alpha:\R^{k_\alpha}\to\R^k$ and $\iota':\R^{k'}\to\R^k$ be the inclusion mappings. Then, after a translation by a rational vector $v\in\Q^{k}$, we may assume that $(\iota'(Y')+v)\cap \iota_\alpha(Y^\alpha)=\varnothing$, thus $Y:=\iota_\alpha (Y^\alpha)\sqcup (\iota'(Y')+v)$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{k}$. Let $\psi:M\sqcup Y\to \partial T$ be defined as follows: $\psi|_M:=\psi_\alpha|_M$, $\psi|_{\iota_\alpha(Y_{\alpha})}(x):=\psi_\alpha(\iota_\alpha^{-1}(x))$ and $\psi|_{\iota'(Y')+v}(x):=\psi'((\iota')^{-1}(x-v))$.
\end{proof}
Here we provide an embedded version of Lemma \ref{lem:Q_tico_bordism} and we double the spine bordism following the strategy used by Tognoli in \cite[\S \,{\it b}), pp. 176-177]{Tog73}.
\begin{theorem} \label{thm:Q-spine-cobordism}
Let $M$ be a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $\R^{m+n}$ of dimension $m$, let $M_i$, for $i\in\{1,\dots,\ell\}$, be closed $\mathtt{m}athscr{C}^\infty$ submanifolds of $M$ of codimension $c_i$ in general position. Then there exist $s\in\N$ with $s\geq m+n$, a projectively $\Q$-closed $\Q$-nonsingular algebraic set $Y\subset\R^s=\R^{m+n}{\tt t}imes\R^{s-m-n}$ of dimension $m$, $\Q$-nonsingular $\Q$-algebraic subsets $Y_i$, for $i\in\{1,\dots,\ell\}$, of $Y$ in general position, a compact $\mathtt{m}athscr{C}^\infty$ submanifold $S$ of $\R^{s+1}=\R^s{\tt t}imes\R$ of dimension $m+1$ and compact $\mathtt{m}athscr{C}^\infty$ submanifolds $S_i$, for $i=1,\dots,\ell$, in general position with the following properties:
\begin{enumerate}[label=\emph{(\alph*)}, ref=(\alph*)]
\item\label{en:Q-bordism-improved-a} $M\cap Y=\varnothing$.
\item\label{en:Q-bordism-improved-b} $S\cap(\R^s{\tt t}imes(-1,1))=(M\sqcup Y){\tt t}imes(-1,1)$ and $S_i\cap(\R^s{\tt t}imes(-1,1))=(M_i\sqcup Y_i){\tt t}imes(-1,1)$, for every $i=1,\dots,\ell$.
\item\label{en:Q-bordism-improved-d} $Y$ is the finite disjoint union $\bigsqcup_{\alpha\in A}(Y^\alpha+v_\alpha)$ of projectively $\Q$-closed $\Q$-nonsingular algebraic sets of the form $Y^\alpha+v_\alpha\subset\R^s$, where $v_\alpha$ belongs to $\Q^s$, $Y^\alpha$ is inductively defined as in the proof of Lemma \ref{lem:Q_tico_bordism} and
\[
A:=\{\alpha\in \mathtt{m}athcal{P}(\{1,\dots,\ell\})\,|\,\bigcap_{j\in\alpha} M_j\mathtt{m}athtt{n}eq\varnothing\}.
\]
In addition, for every $\alpha\in A$ there are a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Y_\alpha$ of $\R^s$ and a $\Q$-regular function $\mu_\alpha:Y_\alpha\to \E^*_\alpha$ such that $Y^\alpha:=\mu_\alpha^*(\E^*_{\alpha,m+n})$, with $\E_{\alpha,m+n}^*:=\prod_{i\in\alpha}\E_{c_i,m+n-c_i}^*$.
\item\label{en:Q-bordism-improved-d'} For every $i\in\{1,\dots,\ell\}$, $Y_i$ is the finite disjoint union $\bigsqcup_{\alpha\in A_i}(Y^\alpha_i+v_\alpha)$ of projectively $\Q$-closed $\Q$-nonsingular algebraic sets of the form $Y^\alpha_i+v_\alpha\subset\R^s$, where $v_\alpha$ belongs to $\Q^s$ as above, $Y^\alpha_i$ is inductively defined as in the proof of Lemma \ref{lem:Q_tico_bordism} and
\[
A_i:=\{\alpha\in A\,|\, i\in\alpha\}.
\]
In addition, if $Y_\alpha$ is as in point \emph{\ref{en:Q-bordism-improved-d}}, there is a $\Q$-regular function $\mu_i^\alpha:Y_\alpha\to \E^*_\alpha$ such that $Y^\alpha_i:=(\mu_i^\alpha)^*(\E^*_{\alpha\setminus\{i\},m+n})$, where $\E^*_{\alpha\setminus\{i\},m+n}$ is defined as in \emph{(\ref{eq:subbundle})}. In particular, if $\beta_i:S_i\to\G_{c_i,m+n}$ denotes the Gauss mapping of $S_i$ in $S$, then $\beta_i|_{Y_i}=\bigsqcup_{\alpha\in A_i} \mu_i^{\alpha}$ is a $\Q$-regular map.
\end{enumerate}
\end{theorem}
\begin{proof}
Thanks to the proof of Lemma \ref{lem:Q_tico_bordism}, for $s\geq n$ sufficiently large, we know that there exist a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $Y=\bigsqcup_{\alpha\in A}(Y^\alpha+v_\alpha)$ of $\R^s$ and $\Q$-nonsingular algebraic subsets $Y_i=\bigsqcup_{\alpha\in A_i}(Y^\alpha_i+v_\alpha)$ of $\R^s$, with $i\in\{1,\dots,\ell\}$, in general position with above properties \ref{en:Q-bordism-improved-a} (changing the vectors $v_\alpha\in\Q^s$ if necessary), \ref{en:Q-bordism-improved-d} and \ref{en:Q-bordism-improved-d'}, compact $\mathtt{m}athscr{C}^\infty$ manifolds $T$ and $T_i$ with boundary $\partial T$ and $\partial T_i$ so that $T_i\subset T$ and $\partial T_i\subset\partial T$, for every $i=1,\dots,\ell$.
Let us construct the desired compact $\mathtt{m}athscr{C}^\infty$ submanifold $S$ of $\R^{s+1}=\R^s{\tt t}imes\R$, following the strategy used by Tognoli in \cite[\S \,{\it b}), pp. 176-177]{Tog73}. By the collaring theorem (see \cite[{Theorem 6.1, p. 113}]{Hir94}), there exist an open neighborhood $U$ of $\partial T$ in $T$ and a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\phi':U{\tt t}o \partial T{\tt t}imes[0,1)$ such that $\phi'(t)=(t,0)$ for all $t\in\partial T$ and $\phi'|_{T_i\cap U}:T_i\cap U{\tt t}o \partial T_i{\tt t}imes[0,1)$ is a diffeomorphism as well, for every $i=1,\dots,\ell$. Let $\phi:U{\tt t}o(M\sqcup Y){\tt t}imes[0,1)$ be the $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\phi:=(\psi^{-1}{\tt t}imes\mathtt{m}athrm{id}_{[0,1)})\circ\phi'$. Note that
$\phi(t)=(\psi^{-1}(t),0)$ for all $t\in\partial T$. Set $A:=T\setminus\partial T$, $B:=\phi^{-1}((M\sqcup Y)\times(0,\frac{1}{2}])\subset A$, $N:=\R^s\times(0,+\infty)$ and define the map $\theta:B\to N$ as the restriction $\theta:=\phi|_B$.
Since $s+1\geq2(m+1)+1$ and Tietze's theorem ensures the existence of a continuous extension of $\theta$ to the whole $A$ with values in $N$, we can apply to $\theta$ the extension theorem \cite[Theorem 5$(\mathrm{f})$]{Whi36} of Whitney, obtaining a $\mathscr{C}^\infty$ embedding $\Theta:A\to N$ extending $\theta$. Let $R:\R^{s+1}=\R^s\times\R\to\R^{s+1}$ be the reflection $R(x,x_{s+1}):=(x,-x_{s+1})$ and let $S'$ and $S'_i$ be the compact $\mathscr{C}^\infty$ submanifolds $\Theta(A)\sqcup((M\sqcup Y)\times\{0\})\sqcup R(\Theta(A))$ and $\Theta(A\cap T_i)\sqcup((M_i\sqcup Y_i)\times\{0\})\sqcup R(\Theta(A\cap T_i))$ of $\R^{s+1}$, for every $i=1,\dots,\ell$, respectively. Thanks to the compactness of $T$ and of each $T_i$, there exists $\epsilon>0$ such that $S'\cap(\R^s\times(-\epsilon,\epsilon))=(M\sqcup Y)\times(-\epsilon,\epsilon)$ and $S'_i\cap(\R^s\times(-\epsilon,\epsilon))=(M_i\sqcup Y_i)\times(-\epsilon,\epsilon)$. Let $L:\R^{s+1}\to\R^{s+1}$ be the linear isomorphism $L(x,x_{s+1}):=(x,\epsilon^{-1}x_{s+1})$. The compact $\mathscr{C}^\infty$ submanifold $S:=L(S')$, together with the submanifolds $S_i:=L(S'_i)$ of $\R^{s+1}$, for every $i=1,\dots,\ell$, in general position, has the desired property $(\mathrm{b})$.
\end{proof}
\subsection{A review on $\Q$-nice $\Q$-algebraic sets}\label{subsec:3.2}
Here we briefly recall the notion of $\Q$-nice $\Q$-algebraic set and we develop an explicit example of $\Q$-nice $\Q$-algebraic set useful for applying approximation techniques `over $\Q$' in Section \ref{sec:4}.
\begin{definition}[{\cite[Def.\,3.1]{GSa}}] \label{def:Q-pair}
Let $L\subset\R^{m+n}$ be a real $\Q$-algebraic set. We say that $L$ is \emph{$\Q$-nice} if for every $a\in L$, there exists an open neighborhood $U_a$ of $a$ in $\R^{m+n}$ such that
\begin{equation}\label{eq:Q-approx-pair}
\mathtt{m}athcal{I}^\infty_{\R^{m+n}}(L)\mathtt{m}athscr{C}^\infty(U_a)\subset\mathtt{m}athcal{I}_\Q(L)\mathtt{m}athscr{C}^\infty(U_a),
\end{equation}
i.e., for every $f\in\mathtt{m}athcal{I}^\infty_{\R^{m+n}}(L)$, there are $u_1,\ldots,u_\ell\in\mathtt{m}athscr{C}^\infty(U_a)$ and $p_1,\dots,p_\ell$ generators of $\mathtt{m}athcal{I}_\Q(L)$ such that $f|_{U_a}=\sum_{i=1}^\ell u_i\cdot p_i|_{U_a}$.
\end{definition}
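As an elementary example (independent of the rest of the paper), the origin $L=\{0\}\subset\R$ is $\Q$-nice: the ideal $\mathcal{I}_\Q(L)$ is generated by the polynomial $x$ and, by Hadamard's lemma, every $f\in\mathcal{I}^\infty_{\R}(L)$ can be written as $f(x)=x\cdot u(x)$ with $u\in\mathscr{C}^\infty(\R)$, so condition \eqref{eq:Q-approx-pair} holds with $U_a=\R$. This is of course also a special case of \cite[Cor. 3.3]{GSa} recalled below.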
The reader observes that in condition \eqref{eq:Q-approx-pair} the converse inclusion always holds and it remains valid if we replace $U_a$ with a smaller open neighborhood of $a$ in $\R^{m+n}$. In addition, it is evident that the disjoint union of finitely many $\Q$-nice real algebraic subsets of $\R^{m+n}$ is again a $\Q$-nice real algebraic subset of $\R^{m+n}$. Moreover, by \cite[Cor. 3.3]{GSa}, every disjoint union of $\Q$-nonsingular real $\Q$-algebraic sets is $\Q$-nice and, by \cite[Lem. 3.4]{GSa}, if $L$ is a $\Q$-nice $\Q$-algebraic subset of $\R^{m+n}$ then $L{\tt t}imes\{0\}\subset\R^{m+n}{\tt t}imes\R^k$ is $\Q$-nice for every $k\in\N$. Next Lemma \ref{lem:Q-nice-hyp} will be very useful in the setting of Theorem \ref{thm:Q-spine-cobordism}.
\begin{lemma}\label{lem:Q-nice-hyp}
Let $M$ be a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $\R^{m+n}$ of dimension $m$. Let $X\subset M$ be a $\Q$-nonsingular algebraic subset of codimension $c$ and $Y\subset M$ be a $\Q$-nonsingular algebraic hypersurface of $M$. If the germ $(M,X\cup Y)$ of $M$ at $X\cup Y$ is the germ of a $\Q$-nonsingular $\Q$-algebraic set, then $X\cup Y$ is $\Q$-nice.
\end{lemma}
\begin{proof}
Without loss of generality we may assume that none of the irreducible components of $X$ is contained in $Y$. Let $a\in (X\cup Y)\setminus (X\cap Y)=(X\setminus Y)\sqcup(Y\setminus X)$. Since both $X$ and $Y$ are $\Q$-nonsingular $\Q$-algebraic sets, after shrinking the neighborhood $U_a$ if necessary, we deduce property (\ref{eq:Q-approx-pair}) by \cite[Cor. ]{GSa}.
Let $a\in X\cap Y$ and let $f\in\mathcal{I}^\infty_{\R^{m+n}}(X\cup Y)$. Let $V$ be a $\Q$-nonsingular $\Q$-algebraic subset of $\R^{m+n}$ such that $(M,X\cup Y)=(V,X\cup Y)$. Since $Y$ is a $\Q$-nonsingular $\Q$-algebraic hypersurface of $V$, there are $p_1,\dots,p_n\in\mathcal{I}_\Q(V)$ and $p\in\mathcal{I}_\Q(Y)$ whose gradients at $a$ are linearly independent over $\R$, and there is a neighborhood $U_a$ of $a$ in $\R^{m+n}$ such that $Y\cap U_a=V\cap \mathcal{Z}_{\R^{m+n}}(p)\cap U_a=\mathcal{Z}_{\R^{m+n}}(p,p_1,\dots,p_n)\cap U_a$. Hence, by \cite[Lemma 2.5.4]{AK92a}, there are $u,u_1,\dots,u_n\in\mathscr{C}^\infty(U_a)$ such that $f|_{U_a}=u\cdot p|_{U_a}+\sum_{i=1}^{n} u_i \cdot p_i|_{U_a}$, after shrinking the neighborhood $U_a$ of $a$ in $\R^{m+n}$ if necessary. Since none of the irreducible components of $X$ is contained in $Y$, we deduce that $Y\cap U_a=V\cap\mathcal{Z}_{\R^{m+n}}(p)\cap U_a\subsetneq (X\cup Y)\cap U_a$, thus $\mathcal{Z}_{\R^{m+n}}(p)\cap U_a\cap X\subset Y$, after shrinking the neighborhood $U_a$ of $a$ in $\R^{m+n}$ if necessary. In addition, since $f|_{U_a}=u\cdot p|_{U_a}+\sum_{i=1}^{n} u_i \cdot p_i|_{U_a}$, $p_1,\dots,p_n\in\mathcal{I}_\Q(V)$ and $\mathcal{Z}_{\R^{m+n}}(p)\cap U_a\cap X\subset Y$, we deduce that $X\cap U_a\subset\mathcal{Z}_{\R^{m+n}}(u)$. Now, let $U'_a\subset U_a$ be a neighborhood of $a$ in $\R^{m+n}$ such that $\overline{U'_a}\subsetneq U_a$. An explicit construction via partitions of unity subordinated to the open cover $\{U_a, \overline{U'_a}^{c}\}$ of $\R^{m+n}$ ensures the existence of $g\in\mathscr{C}^\infty(\R^{m+n})$ such that $g|_{U'_a}=u|_{U'_a}$ and $g\in\mathcal{I}^{\infty}_{\R^{m+n}}(X)$. Since $X$ is a $\Q$-nonsingular $\Q$-algebraic subset of codimension $c$ in $V$, which is $\Q$-nonsingular $\Q$-algebraic as well, there are $q_1,\dots,q_c\in\mathcal{I}_\Q(X)$ such that $\nabla p_1(a),\dots,\nabla p_n(a),\nabla q_1(a),\dots,\nabla q_c(a)$ are linearly independent over $\R$ and there exists a neighborhood $V_a$ of $a$ in $\R^{m+n}$ such that $X\cap V_a=\mathcal{Z}_{\R^{m+n}}(p_1,\dots,p_n,q_1,\dots,q_c)\cap V_a$. Thus, by \cite[Lemma 2.5.4]{AK92a}, there are $v_1,\dots,v_c,u'_1,\dots,u'_n\in\mathscr{C}^\infty(V_a)$ such that $g|_{V_a}=\sum_{i=1}^c v_i \cdot q_i|_{V_a}+\sum_{i=1}^{n} u'_i \cdot p_i|_{V_a}$, after shrinking the neighborhood $V_a$ of $a$ in $\R^{m+n}$ if necessary. Thus, fixing $V'_a:=U'_a\cap V_a$, we have:
\begin{align*}
f|_{V'_a}&=g|_{V'_a}\cdot p|_{V'_a}+\sum_{i=1}^{n} u_i|_{V'_a}\cdot p_i|_{V'_a}\\
&=\left(\sum_{i=1}^c v_i|_{V'_a}\cdot q_i|_{V'_a}+\sum_{i=1}^{n} u'_i|_{V'_a}\cdot p_i|_{V'_a}\right) \cdot p|_{V'_a} +\sum_{i=1}^{n} u_i|_{V'_a}\cdot p_i|_{V'_a}\\
&=\sum_{i=1}^c v_i|_{V'_a}\cdot (p\cdot q_i )|_{V'_a} + \sum_{i=1}^{n} \left(u_i|_{V'_a}+u'_i|_{V'_a}\cdot p|_{V'_a}\right)\cdot p_i|_{V'_a},
\end{align*}
where $p_1,\dots,p_n,p\cdot q_1,\dots,p\cdot q_c\in\mathtt{m}athcal{I}_\Q(X\cup Y)$, as desired.
\end{proof}
\section{Relative $\Q$-algebrization of nonsingular algebraic sets}\label{sec:4}
Here we are in a position to prove a `$\Q$-nonsingular' version of the relative Nash-Tognoli theorem. With respect to \cite[Thm.\,3.10]{GSa}, our construction via spine cobordisms allows us to deal with the relative case in arbitrary codimension. In particular, in the case in which the starting compact manifold $M$ and each submanifold $M_i$, with $i\in\{1,\dots,\ell\}$, are actually Nash manifolds, we get a Nash diffeomorphism in the statement. Hence, when the starting compact manifold $M$ and each submanifold $M_i$, with $i\in\{1,\dots,\ell\}$, are actually compact nonsingular algebraic sets, our Theorem \ref{thm:Q_tico_tognoli} provides a positive answer to the \textsc{Relative $\Q$-algebrization of nonsingular algebraic sets} problem in the compact case.
\begin{theorem}\label{thm:Q_tico_tognoli}
Let $M$ be a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $\R^{m+n}$ of dimension $m$ and let $\{M_i\}_{i=1}^\ell$ be a finite family of $\mathtt{m}athscr{C}^\infty$ submanifolds of $M$ of codimension $c_i$ in general position. Set $N:=\mathtt{m}ax\{m+n,2 m+1\}$. Then, for every neighborhood $\mathtt{m}athcal{U}$ of the inclusion map $\iota:M\hookrightarrow\R^{N}$ in $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}(M,\R^{N})$ and for every neighborhood $\mathtt{m}athcal{U}_i$ of the inclusion map $\iota|_{M_i}:M_i\hookrightarrow\R^N$ in $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}(M_i,\R^N)$, for every $i\in\{1,\dots,\ell\}$, there exist a $\Q$-nonsingular projectively $\Q$-closed algebraic set $M'\subset\R^N$, a family $\{M'_i\}_{i=1}^\ell$ of $\Q$-nonsingular algebraic subsets of $M'$ in general position and a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $h:M{\tt t}o M'$ which simultaneously takes each $M_i$ to $M'_i$ such that, if $\jmath:M'\hookrightarrow\R^N$ denotes the inclusion map, then $\jmath\circ h\in\mathtt{m}athcal{U}$ and $\jmath\circ h|_{M_i}\in\mathtt{m}athcal{U}_i$, for every $i\in\{1,\dots,\ell\}$.
If in addition $M$ and each $M_i$ are compact Nash manifolds, then we can assume $h:M\rightarrow M'$ is a Nash diffeomorphism and $h$ extends to a semialgebraic homeomorphism from $\R^N$ to $\R^N$.
\end{theorem}
\begin{proof}
An application of Theorem \ref{thm:Q-spine-cobordism} gives $s\in\N$, a projectively $\Q$-closed $\Q$-nonsingular algebraic set $Y\subset\R^{m+n+s}:=\R^{m+n}{\tt t}imes\R^{s}$ of dimension $m$, $\Q$-nonsingular $\Q$-algebraic subsets $Y_i$, for $i\in\{1,\dots,\ell\}$, of $Y$ in general position, a compact $\mathtt{m}athscr{C}^\infty$ submanifold $S$ of $\R^{m+n+s+1}=\R^{m+n+s}{\tt t}imes\R$ of dimension $m+1$ and compact $\mathtt{m}athscr{C}^\infty$ submanifolds $S_i$, for $i=1,\dots,\ell$, in general position satisfying \ref{en:Q-bordism-improved-a}-\ref{en:Q-bordism-improved-d'} of Theorem \ref{thm:Q-spine-cobordism}.
Consider the map $\beta_i:S_i\to\G_{c_i,m+n+s+1-c_i}$ classifying the normal bundle of $S_i$ in $S$. By \ref{en:Q-bordism-improved-d'} of Theorem \ref{thm:Q-spine-cobordism} we have that $\beta_i|_{Y_i\times \{0\}}$ is a $\Q$-regular map, after extending the codomain from $\G_{c_i,m+n-c_i}$ to $\G_{c_i,m+n+s+1-c_i}$. An application of \cite[Thm.\,3.9]{GSa}, with $``L":=Y_i\times\{0\}$, $``M":=S_i$, $``W":=\G_{c_i,m+n+s+1-c_i}$ and $``f":=\beta_i$, gives $t\in\N$, a projectively $\Q$-closed $\Q$-nonsingular algebraic subset $X_i$ of $\R^{m+n+s+1+t}$, a diffeomorphism $\rho_i:S_i\to X_i$ and a $\Q$-regular map $\gamma_i:X_i\rightarrow \G_{c_i,m+n+s+1-c_i}$ satisfying properties $(\mathrm{a})$-$(\mathrm{c})$ of that theorem. In particular, $(Y_i\times\{0\})\times\{0\}\subset X_i$, $\rho_i(x)=(x,0)$ and $\gamma_i(x,0)=\beta_i|_{Y_i\times\{0\}}(x)$ for every $x\in Y_i\times\{0\}$.
Consider the pull back fibre bundle $Z_i:=\gamma_i^*(\E_{c_i,m+n+s+1-c_i}^*)$. By Lemma \ref{lem:Q_sphere_pullback}, $Z_i$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{m+n+s+1+t}{\tt t}imes\R^{m+n+s+1+t+1}$ and it contains those subsets $Y^\alpha$ of $Y$ such that $i\in\alpha$, by $(\mathtt{m}athrm{c})$ of Theorem \ref{thm:Q-spine-cobordism}. More precisely, following the notations of Theorem \ref{thm:Q-spine-cobordism}, we have that
\[
Y'^{\alpha}:=(\gamma_i|_{(Y_{\alpha}+v_\alpha){\tt t}imes\{0\}{\tt t}imes\{0\}{\tt t}imes\{0\}})^*(\E_{c_i,m+n+s+1-c_i}^*)
\]
is contained in $Z_i$, for every $\alpha\in A_i$, and is $\Q$-biregularly isomorphic to $Y^{\alpha}$ via the map $Y^\alpha{\tt t}o Y'^{\alpha}$ as $(x,y)\mathtt{m}apsto(x,0,0,0,y)\in\R^{m+n+s}{\tt t}imes\R{\tt t}imes\R^t{\tt t}imes\R{\tt t}imes\R^{m+n+s+1+t+1}$. Let
\[
Y'_i:=\Big(\bigsqcup_{\alpha\in A_i} Y'^{\alpha}\Big)\sqcup \Big(\bigsqcup_{\alpha\mathtt{m}athtt{n}otin A_i} (Y^\alpha+v_\alpha){\tt t}imes\{0\}{\tt t}imes\{0\}{\tt t}imes\{0\}\Big).
\]
Since $\gamma_i$ can be chosen such that $\gamma_i\circ\rho$ is arbitrarily $\mathtt{m}athscr{C}^\infty_{w}$ close to $\beta_i$, those maps are homotopic, thus the normal bundle of $S_i$ in $S$ and the normal bundle of $X_i{\tt t}imes\{0\}{\tt t}imes\{0\}$ in $Z_i$ are equivalent. Hence, the germ $(S,S_i\cup (Y{\tt t}imes\{0\}))$ of $S$ at $S_i\cup (Y{\tt t}imes\{0\})$ is diffeomorphic to the germ $(Z_i\cup (\bigsqcup_{\alpha\mathtt{m}athtt{n}otin A_i}(Y^\alpha+v_\alpha){\tt t}imes\R{\tt t}imes\{0\}{\tt t}imes\{0\}),(X_i{\tt t}imes\{0\}{\tt t}imes\{0\})\cup Y'_i)$ of the $\Q$-nonsingular algebraic set $Z_i\cup (\bigsqcup_{\alpha\mathtt{m}athtt{n}otin A_i}(Y^\alpha+v_\alpha){\tt t}imes\R{\tt t}imes\{0\}{\tt t}imes\{0\})$ at $(X_i{\tt t}imes\{0\}{\tt t}imes\{0\})\cup Y'_i$. Let $\phi_i:U_i{\tt t}o V_i$ be the above $\mathtt{m}athscr{C}^\infty$ diffeomorphism between a neighborhood $U_i$ of $S_i\cup(Y{\tt t}imes\{0\})$ in $S$ and a neighborhood $V_i$ of $(X_i{\tt t}imes\{0\}{\tt t}imes\{0\})\cup Y'_i$ in $Z_i\cup (\bigsqcup_{\alpha\mathtt{m}athtt{n}otin A_i}(Y^\alpha+v_\alpha){\tt t}imes\R{\tt t}imes\{0\}{\tt t}imes\{0\})$ such that $\phi_i|_{S_i}=\rho_i{\tt t}imes\{0\}{\tt t}imes\{0\}$ and $\phi_i(x,y)=(x,0,0,0,y)$ for every $(x,y)\in Y\subset\R^{m+n}{\tt t}imes\R^s$. Let $V'_i\subset V_i$ be a neighborhood of $(X_i{\tt t}imes\{0\}{\tt t}imes\{0\})\cup Y'_i$ in $Z_i\cup (\bigsqcup_{\alpha\mathtt{m}athtt{n}otin A_i}(Y^\alpha+v_\alpha){\tt t}imes\R{\tt t}imes\{0\}{\tt t}imes\{0\})$ such that $\overline{V'_i}\subsetneq V_i$. Set $A_i:=\phi_i^{-1}(\overline{V'_i})\subset U_i$ closed neighborhood of $S_i\cup (Y{\tt t}imes\{0\})$ in $S$ and consider the map $\phi_i|_{A_i}:A_i{\tt t}o \R^{m+n+s+1+t+1}$. Since $m+n+s+1+t+1 \geq 2(m+1)+1$, Tietze's theorem ensures the existence of a continuous extension of $\varphi_i$ from the whole $S$ to $\R^{m+n+s+1+t+1}$, we can apply to $\phi_i|_{A_i}$ the extension theorem \cite[Theorem 5$(\mathtt{m}athrm{f})$]{Whi36} of Whitney, obtaining a $\mathtt{m}athscr{C}^\infty$ embedding $\phi'_i:S{\tt t}o \R^{m+n+s+1+t+1}$ extending $\phi_i|_{A_i}$. Thus, there exists a manifold $N_i\subset\R^{m+n+s+1+t+1}$ which is $\mathtt{m}athscr{C}^\infty$ diffeomorphic to $S_i$ via $\phi'_i$ and, by construction, the following properties are satisfied:
\begin{itemize}
\item[$(\mathrm{i})$] $(X_i\times\{0\}\times\{0\})\cup Y'_i\subset N_i$;
\item[$(\mathrm{ii})$] the germ of $N_i$ at $(X_i\times\{0\}\times\{0\})\cup Y'_i$ is the germ of a $\Q$-nonsingular algebraic set.
\end{itemize}
Since $X_i\times\{0\}\times\{0\}$ is a $\Q$-nonsingular algebraic subset of $N_i$ and $Y'_i$ is a $\Q$-nonsingular algebraic hypersurface of $N_i$ satisfying property $(\mathrm{ii})$ above, Lemma \ref{lem:Q-nice-hyp} ensures that $(X_i\times\{0\}\times\{0\})\cup Y'_i$ is a $\Q$-nice algebraic subset of $\R^{m+n+s+1+t+1}$. An application of \cite[Thm.\,3.9]{GSa} with $``L":=(X_i\times\{0\}\times\{0\})\cup Y'_i$, $``M":=N_i$ and $``W":=\{0\}$ gives $u\in\N$, a projectively $\Q$-closed $\Q$-nonsingular real algebraic set $X^i\subset\R^{m+n+s+1+t+1+u}$ such that $((X_i\times\{0\}\times\{0\})\cup Y'_i)\times\{0\}\subset X^i$ and a $\mathscr{C}^\infty$ diffeomorphism $\tau_i:N_i\to X^i$ such that $\tau_i(x)=(x,0)$ for every $x\in (X_i\times\{0\}\times\{0\})\cup Y'_i$. Define the diffeomorphism $\varphi_i:S\to X^i$ as $\varphi_i:=\tau_i\circ \phi'_i$, for every $i\in\{1,\dots,\ell\}$.
Let $G:=\G_{m+1,n+s}$ and $\beta:S {\tt t}o G$ be the map classifying the normal bundle of $S$ in $\R^{m+n+s+1}$. Recall that $Y\subset\R^{m+n+s}$ and, by \ref{en:Q-bordism-improved-d} and \ref{en:Q-bordism-improved-d'} of Theorem \ref{thm:Q-spine-cobordism}, $\beta|_{Y{\tt t}imes\{0\}}$ coincides with the Gauss mapping of the $\Q$-nonsingular $\Q$-algebraic set $Y{\tt t}imes\R$ in $\R^{m+n+s+1}$. Hence, $\beta|_{Y{\tt t}imes\{0\}}$ is $\Q$-regular since the Gauss mapping of $Y{\tt t}imes\R$ in $\R^{m+n+s+1}$ is so.
Let $E:=\E_{m+1,n+s}=\{(A,b) \in G {\tt t}imes \R^{m+n+s+1} \, | \, Ab=b\}$ and let $E {\tt t}o G$ be the universal $\R^{n+s}$-bundle over the grassmannian $G$. Let $\beta^{\ast}(E):=\{(x,y) \in S {\tt t}imes \R^{m+n+s+1} \, | \,\beta(x) y=y\}$ be the pullback bundle and let ${\tt t}heta:\beta^{\ast}(E) {\tt t}o \R^{m+n+s+1}$ defined by ${\tt t}heta(x,y):=x+y$. By the Implicit Function Theorem, there exists an open neighborhood $U_0$ in $\beta^{\ast}(E)$ of the zero section $S {\tt t}imes 0$ of $\beta^{\ast}(E)$ and an open neighborhood $U$ of $S$ in $\R^{m+n+s+1}$ such that ${\tt t}heta|_{U_0}:U_0 {\tt t}o U$ is a diffeomorphism.
Define a smooth map $\mathtt{m}athrm{w}idetilde{\beta}:U {\tt t}o E$ and a smooth map $\mathtt{m}athrm{w}idetilde{\varrho}:U {\tt t}o S$ in the following way: for every $x \in U$, let $(z_x,y_x):=({\tt t}heta|_{U_0})^{-1}(x)$ and let $N_x:=\beta(z_x)$, then define $\mathtt{m}athrm{w}idetilde{\beta}(x):=(N_x,y_x)$ and $\mathtt{m}athrm{w}idetilde{\varrho}(x):=z_x$. Since $({\tt t}heta|_{U_0})^{-1}(S)=G {\tt t}imes \{0\}$, we have that ${\mathtt{m}athrm{w}idetilde{\beta}}^{-1}(G{\tt t}imes \{0\})=S$; moreover if $x \in S$ then $\mathtt{m}athrm{w}idetilde{\beta}(x)=(\beta(x),0)$, so $\mathtt{m}athrm{w}idetilde{\beta}|_{Y}$ is $\Q$-regular. Now we prove that $\mathtt{m}athrm{w}idetilde{\beta}$ is transverse to $G {\tt t}imes \{0\}$ in $E$. Fix $x \in S$ and let $N_x:=\beta(x)$. Let $\mathtt{m}athrm{w}idetilde{N}_x$ be the $(n+s)$-dimensional vector subspace of $\R^{m+n+s+1}$ corresponding to $N_x$ and let $y \in \mathtt{m}athrm{w}idetilde{N}_x$ so close to the origin $0$ of $\R^{m+n+s+1}$ that $(x,y) \in U_0$. We have that ${\tt t}heta|_{U_0}(x,y)=x+y$, so $({\tt t}heta|_{U_0})^{-1}(x+y)=(x,y)$ and $\mathtt{m}athrm{w}idetilde{\beta}(x+y)=(N_x,y)$. It follows that $d\mathtt{m}athrm{w}idetilde{\beta}_x(\mathtt{m}athrm{w}idetilde{N}_x)=\mathtt{m}athrm{w}idetilde{N}_x$, hence $d\mathtt{m}athrm{w}idetilde{\beta}_x(\R^{m+n+s+1})$ contains the vector subspace $\{0\} {\tt t}imes \R^{n+s}$ of $T_{N_x}(G) {\tt t}imes \R^{n+s}=T_{(N_x,0)}(E)$ and so $\mathtt{m}athrm{w}idetilde{\beta}$ is transverse to $G {\tt t}imes \{0\}$ at $x$.
Let $\varphi:S\rightarrow X^1{\tt t}imes\dots{\tt t}imes X^\ell$ be the $\mathtt{m}athscr{C}^\infty$ map defined as $\varphi=(\varphi_1,\dots,\varphi_\ell)$. Let $\mathtt{m}athrm{w}idetilde{\varphi}: U {\tt t}o X^1{\tt t}imes\dots{\tt t}imes X^\ell$ be the smooth map defined by $\mathtt{m}athrm{w}idetilde{\varphi}:=\varphi\circ \mathtt{m}athrm{w}idetilde{\varrho}$. The smooth map $\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi}:U {\tt t}o E {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$ satisfies the following properties:
\begin{itemize}
\item[$(\mathtt{m}athrm{iii})$]$\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi}$ is transverse to $(G{\tt t}imes \{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$ and $(\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes\mathtt{m}athrm{w}idetilde{\varphi})^{-1}((G {\tt t}imes \{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell)=X$,
\item[$(\mathtt{m}athrm{iv})$] $(\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi})|_{Y}$ coincides with $(\mathtt{m}athrm{w}idetilde{\beta}|_{Y{\tt t}imes\{0\}}) {\tt t}imes (\operatorname{id}_{Y{\tt t}imes\{0\}})$, so it is $\Q$-regular.
\end{itemize}
Apply \cite[Lem.\,3.8]{GSa} with the following substitutions:
``$W$''$:=E{\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$, ``$L$''$:=Y{\tt t}imes\{0\}$, ``$f$''$:=\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi}$ and ``$U$'' equal to some open neighborhood $U'$ of $S$ in $\R^{n+1}$ relatively compact in $U$ obtaining a $\Q$-nonsingular algebraic subset $Z$ of $\R^{m+n+s+1} {\tt t}imes \R^k$, for some integer $k$, an open subset $Z_0$ of $Z$ and a $\Q$-regular map $\eta:Z {\tt t}o E{\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$ such that, if $\pi:\R^{m+n+s+1} {\tt t}imes \R^k {\tt t}o \R^{m+n+s+1}$ is the natural
projection and $\iota:U'\hookrightarrow \R^{m+n+s+1} {\tt t}imes \R^k$ is the inclusion map, the following hold:
\begin{itemize}
\item[$(\mathtt{m}athrm{v})$] $Y{\tt t}imes\{0\}{\tt t}imes\{0\}\subset Z_0$, $\pi(Z_0)=U'$, the restriction $\pi|_{Z_0}:Z_0 {\tt t}o U'$ is a $\mathtt{m}athscr{C}^\infty$ diffeomorphism, and the $\mathtt{m}athscr{C}^\infty$ map $\sigma:U'{\tt t}o\R^{m+n+s+1+k}$, defined by $\sigma(x,x_{m+n+s+1}):=(\pi|_{Z_0})^{-1}(x,x_{m+n+s+1})$ for all $(x,x_{m+n+s+1})\in U'$, is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\iota$.
\item[$(\mathtt{m}athrm{vi})$] $\eta(x,0)=(\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi})(x)$ for all $x\in Y{\tt t}imes\{0\}$.
\item[$(\mathtt{m}athrm{vii})$] The $\mathtt{m}athscr{C}^\infty$ map $\mathtt{m}athrm{w}idehat{\eta}:U'{\tt t}o E{\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$, defined by $\mathtt{m}athrm{w}idehat{\eta}(x,x_{m+n+s+1}):=\eta(\sigma(x,x_{m+n+s+1}))$, is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $(\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi})|_{U'}$.
\end{itemize}
Choose an open neighborhood $U''$ of $S$ in $\R^{m+n+s+1}$ such that $\overline{U''}\subset U'$. Set $Z_1:=(\pi|_{Z_0})^{-1}(U'')$. Since
$\mathtt{m}athrm{w}idetilde{\beta} {\tt t}imes \mathtt{m}athrm{w}idetilde{\varphi}$ is transverse to $(G{\tt t}imes\{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$ in $E{\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell$,
by $(\mathtt{m}athrm{v})$, $(\mathtt{m}athrm{vi})$, $(\mathtt{m}athrm{vii})$ and \cite[Theorem 14.1.1]{BCR98}, we have that $S':=\mathtt{m}athrm{w}idehat{\eta}^{\,-1}((G{\tt t}imes\{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell)$ is a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $U''$ containing $Y{\tt t}imes\{0\}{\tt t}imes\{0\}$ and there exists a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi_1:U''{\tt t}o U''$ arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\mathtt{m}athrm{id}_{U''}$ such that $\psi_1(S)=S'$ and $\psi=\mathtt{m}athrm{id}_{U''}$ on $Y{\tt t}imes\{0\}{\tt t}imes\{0\}$. Moreover, $(\mathtt{m}athrm{v})$ of \cite[Lem.\,2.13]{GSa} ensures that $S'':=\eta^{-1}((G{\tt t}imes\{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell)\subset\R^{m+n+s+1+k}$ is a nonsingular real $\Q$-algebraic set such that $S''_1:=S''\cap Z_1=(\pi|_{Z_1})^{-1}(S')\subset\operatorname{Reg}^*(S'')$. In addition, the $\mathtt{m}athscr{C}^\infty$ embedding $\psi_2:S{\tt t}o\R^{m+n+s+1+k}$ defined by $\psi(x,x_{m+n+s+1}):=(\pi|_{Z_1})^{-1}(\psi_1(x,x_{m+n+s+1}))$, is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to the inclusion map $j_S:S\hookrightarrow\R^{m+n+s+1+k}$, $\psi_2=j_S$ on $Y{\tt t}imes\{0\}$ and $\psi_2(S)=S''_1$. Note that the set $S''_1$ is both compact and open in $S''$; thus, $S''_1$ is the union of some connected components of $S''$ and $S''_2:=S''\setminus S''_1$ is a closed subset of $\R^{m+n+s+1+k}$ (recall that a real algebraic set, as $S''$ is, only has finitely many connected components). Since $\psi_2$ is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $j_S$, the coordinate hyperplane $\{x_{m+n+s+1}=0\}$ of $\R^{m+n+s+1+k}$ is transverse to $S''_1$ in $\R^{m+n+s+1+k}$, $S''_1\cap\{x_{m+n+s+1}=0\}=M' \sqcup (Y{\tt t}imes\{0\}{\tt t}imes\{0\})$ for some compact $\mathtt{m}athscr{C}^\infty$ submanifold $M'$ of $\R^{m+n+s+1+k}$ and there exists a $\mathtt{m}athscr{C}^\infty$ embedding $\psi_3:M{\tt t}o\R^{m+n+s+1+k}$ arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to the inclusion map $j_M:M\hookrightarrow\R^{m+n+s+1+k}$ such that $M'=\psi_3(M)$.
Let $K$ be a compact neighborhood of $S''_1$ in $\R^{m+n+s+1+k}$ such that $K\cap S''_2=\varnothing$ and let $\pi_{m+n+s+1}:\R^{m+n+s+1+k}=\R^{m+n+s}{\tt t}imes\R{\tt t}imes\R^{k}{\tt t}o\R$ be the projection $\pi_{m+n+s+1}(x,x_{m+n+s+1},y):=x_{m+n+s+1}$. By \cite[Lem.\,2.15]{GSa} $(\mathtt{m}athrm{a})(\mathtt{m}athrm{d})$, the real algebraic set $Y{\tt t}imes\{0\}{\tt t}imes\{0\}\subset\R^{m+n+s+1+k}$ is projectively $\Q$-closed; thus, there exists a $\Q$-overt polynomial $q\in\Q[x_1,\dots,x_{m+n+s+1+k}]$ such that $\mathtt{m}athcal{Z}_\R(q)=Y{\tt t}imes\{0\}{\tt t}imes\{0\}$. Since $q$ is a proper function, replacing $q$ with $Cq^2$ for some rational number $C>0$ if necessary, we can assume that $q$ is $\Q$-overt, $\mathtt{m}athcal{Z}_\R(q)=Y{\tt t}imes\{0\}{\tt t}imes\{0\}$, $q\geq0$ on $\R^{m+n+s+1+k}$ and $q\geq 2$ on $\R^{m+n+s+1+k}\setminus K$. Let $K'$ be a compact neighborhood of $S''_1$ in $\mathtt{m}athrm{int}_{\R^{m+n+s+1+k}}(K)$. Using a $\mathtt{m}athscr{C}^\infty$ partition of unity subordinated to $\{\mathtt{m}athrm{int}_{\R^{m+n+s+1+k}}(K),\R^{m+n+s+1+k} \setminus K'\}$, we can define a $\mathtt{m}athscr{C}^\infty$ function $h:\R^{m+n+s+1+k}{\tt t}o\R$ such that $h=\pi_{m+n+s+1}$ on $K'$ and $h=q$ on $\R^{m+n+s+1+k}\setminus K$. Apply \cite[Lem.\,3.7]{GSa} to $h-q$, obtaining a $\Q$-regular function $u':\R^{m+n+s+1+k}{\tt t}o\R$ with the following properties:
\begin{itemize}
\item[$(\mathtt{m}athrm{viii})$] There exist $e\in\N$ and a polynomial $p\in\Q[x_1,\ldots,x_{m+n+s+1+k}]$ of degree $\leq 2e$ such that $u'(x)=p(x)(1+|x|_{s+t}^2)^{-e}$ for all $x\in\R^{m+n+s+1+k}$.
\item[$(\mathtt{m}athrm{ix})$] $Y{\tt t}imes\{0\}{\tt t}imes\{0\}\subset\mathtt{m}athcal{Z}_\R(u')$.
\item[$(\mathtt{m}athrm{x})$] $\sup_{x\in\R^{m+n+s+1+k}}|h(x)-q(x)-u'(x)|<1$.
\item[$(\mathtt{m}athrm{xi})$] $u'$ is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\pi_{m+n+s+1}-q$ on $\mathtt{m}athrm{int}_{\R^{m+n+s+1+k}}(K')$.
\end{itemize}
Let $u:\R^{m+n+s+1+k}\to\R$ be the $\Q$-regular map given by $u:=u'+q$, and let $v\in\Q[x]$, with $x=(x_1,\ldots,x_{m+n+s+1+k})$, be the polynomial $v(x):=q(x)(1+|x|_{m+n+s+1+k}^2)^e+p(x)$. Combining $(\mathrm{viii})$ with the fact that $q$ is non-constant and $\Q$-overt, we immediately deduce that $u(x)=(1+|x|_{m+n+s+1+k}^2)^{-e}v(x)$ and $v$ is $\Q$-overt. By $(\mathrm{ix})$, $(\mathrm{x})$ and $(\mathrm{xi})$, we know that $Y\times\{0\}\times\{0\}\subset\mathcal{Z}_\R(u)$, $u>1$ on $\R^{m+n+s+1+k}\setminus K$ and $u$ is arbitrarily $\mathscr{C}^\infty_\mathrm{w}$ close to $\pi_{m+n+s+1}$ on $\mathrm{int}_{\R^{m+n+s+1+k}}(K')$. In particular, $0$ is a regular value of the restriction $u|_{S''_1}$ of $u$ to $S''_1$, $S''_1\cap \mathcal{Z}_\R(u)=M'' \sqcup X$ for some compact $\mathscr{C}^\infty$ submanifold $M''$ of $\R^{m+n+s+1+k}$ and there exists a $\mathscr{C}^\infty$ embedding $\psi_4:M'\to\R^{m+n+s+1+k}$ arbitrarily $\mathscr{C}^\infty_\mathrm{w}$ close to the inclusion map $j_{M'}:M'\hookrightarrow\R^{m+n+s+1+k}$ such that $M''=\psi_4(M')$. Since $M''\sqcup X=S''\cap \mathcal{Z}_\R(u)$, \cite[Lem.\,2.13]{GSa} ensures that $M''\sqcup X\subset\R^{m+n+s+1+k}$ is a $\Q$-nonsingular algebraic set. On the other hand, we also have that $M''\sqcup X=S''\cap \mathcal{Z}_\R(u)=S''\cap \mathcal{Z}_\R(v)$; thus, \cite[Lem.\,2.15]{GSa} $(\mathrm{b})$ implies that $M''\sqcup X$ is projectively $\Q$-closed. In addition, by Corollary \ref{cor:Q_setminus} and \cite[Lem.\,2.11]{GSa} we deduce that $M''$ is a projectively $\Q$-closed $\Q$-nonsingular algebraic subset of $\R^{m+n+s+1+k}$. Consider the embedding $\psi:M\to \R^{m+n+s+1+k}$ defined as $\psi:=\psi_4\circ\psi_3$. Then $\psi$ is arbitrarily $\mathscr{C}^\infty_\mathrm{w}$ close to $j_M$ and $\psi(M)=M''$; moreover, the submanifolds $\psi(M_i)$ of $M''$, for every $i\in\{1,\dots,\ell\}$, are in general position.
Consider $\pi_i:E{\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X^\ell\rightarrow X^i$ the projection on the $i$-th component of $X^1{\tt t}imes\dots{\tt t}imes X^\ell$, thus $\pi_i\circ(\mathtt{m}athrm{w}idetilde{\beta}{\tt t}imes\mathtt{m}athrm{w}idetilde{\varphi})=\varphi_i\circ\mathtt{m}athrm{w}idetilde{\rho}$. Let $X'_i:=X_i{\tt t}imes\{0\}{\tt t}imes\{0\}{\tt t}imes\{0\}\subset X^i$, for every $i\in\{1,\dots,\ell\}$. By $(\mathtt{m}athrm{vii})$, we know that $\pi_i\circ\mathtt{m}athrm{w}idehat{\eta}$ is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\varphi_i\circ\mathtt{m}athrm{w}idetilde{\rho}$, thus $\pi_i\circ\mathtt{m}athrm{w}idehat{\eta}$ is transverse to $X'_i$ in $X^i$, for every $i\in\{1,\dots,\ell\}$. By $(\mathtt{m}athrm{v})$, $(\mathtt{m}athrm{vi})$, $(\mathtt{m}athrm{vii})$ and \cite[Theorem 14.1.1]{BCR98}, we have that $S'_i:=\mathtt{m}athrm{w}idehat{\eta}^{\,-1}((G{\tt t}imes\{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X'_i{\tt t}imes\dots{\tt t}imes X^\ell)=(\pi_i\circ\mathtt{m}athrm{w}idehat{\eta})^{-1}(X'_i)$ is a compact $\mathtt{m}athscr{C}^\infty$ submanifold of $S\subset U''$ containing $Y_i{\tt t}imes\{0\}{\tt t}imes\{0\}$ and there exists a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi^i_1:U''{\tt t}o U''$ arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\mathtt{m}athrm{id}_{U''}$ such that $\psi_1(S_i)=S'_i$ and $\psi=\mathtt{m}athrm{id}_{U''}$ on $Y_i{\tt t}imes\{0\}{\tt t}imes\{0\}$. Moreover, by $(\mathtt{m}athrm{v})$, Lemma \cite[Lem.\,2.13]{GSa} ensures that $S''_i:=\eta^{-1}((G{\tt t}imes\{0\}) {\tt t}imes X^1{\tt t}imes\dots{\tt t}imes X'_i{\tt t}imes\dots{\tt t}imes X^\ell)=(\pi_i\circ\eta)^{-1}(X'_i)\subset\R^{m+n+s+1+k}$ is a nonsingular real $\Q$-algebraic set such that $S''_{i1}:=S''_i\cap Z_1=(\pi|_{Z_1})^{-1}(S'_i)\subset\operatorname{Reg}^*(S''_i)$. In addition, the $\mathtt{m}athscr{C}^\infty$ embedding $\psi^i_2:S_i{\tt t}o\R^{m+n+s+1+k}$ defined by $\psi_2^i(x,x_{m+n+s+1}):=(\pi|_{Z_1})^{-1}(\psi^i_1(x,x_{m+n+s+1}))$, is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to the inclusion map $j_{S_i}:S_i\hookrightarrow\R^{m+n+s+1+k}$, $\psi^i_2=j_{S_i}$ on $Y_i{\tt t}imes\{0\}$ and $\psi^i_2(S_i)=S''_{i1}$. Note that the set $S''_{i1}$ is both compact and open in $S''_i$; thus, $S''_{i1}$ is the union of some connected components of $S''_i$ and $S''_{i2}:=S''_i\setminus S''_{i1}$ is a closed subset of $\R^{m+n+s+1+k}$. Since $\psi^i_2$ is arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $j_{S_i}$, the coordinate hyperplane $\{x_{m+n+s+1}=0\}$ of $\R^{m+n+s+1+k}$ is transverse to $S''_{i1}$ in $\R^{m+n+s+1+k}$, $S''_{i1}\cap\{x_{m+n+s+1}=0\}=M'_i \sqcup (Y_i{\tt t}imes\{0\}{\tt t}imes\{0\})$ for some compact $\mathtt{m}athscr{C}^\infty$ submanifold $M'_i$ of $\R^{m+n+s+1+k}$ and there exists a $\mathtt{m}athscr{C}^\infty$ embedding $\psi^i_3:M_i{\tt t}o\R^{m+n+s+1+k}$ arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to the inclusion map $j_{M_i}:M_i\hookrightarrow\R^{m+n+s+1+k}$ such that $M'_i=\psi_3(M_i)$. Observe that, by construction $M'_i\subset M'$, for every $i\in\{1,\dots,\ell\}$, are in general position. Define $M''_i:=M''\cup S''_{i}$, for every $i\in\{1,\dots,\ell\}$. 
By $(\mathtt{m}athrm{ix})$, $(\mathtt{m}athrm{x})$ and $(\mathtt{m}athrm{xi})$, we deduce that $M''_i$, for every $i\in\{1,\dots,\ell\}$ are compact submanifolds of $M''$ in general position and there exists a $\mathtt{m}athscr{C}^\infty$ embedding $\psi^i_4:M_i{\tt t}o\R^{m+n+s+1+k}$ arbitrarily $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to the inclusion map $j_{M'_i}:M'_i\hookrightarrow\R^{m+n+s+1+k}$ such that $M''_i=\psi^i_4(M'_i)$, for every $i\in\{1,\dots,\ell\}$. Consider the embeddings $\psi_i:M_i{\tt t}o \R^{m+n+s+1+k}$ defined as $\psi_i:=\psi^i_3\circ\psi^i_4$. Then, $\psi_i$ is $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $j_{M_i}$ and $\psi(M_i)=M''_i$, for every $i\in\{1,\dots,\ell\}$. As a consequence, $\psi_i\circ(\psi|)^{-1}|:\psi(M_i){\tt t}o M''_i\subset M''$ is a $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ diffeomorphism $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $j_{\psi(M_i)}:\psi(M_i)\hookrightarrow \R^{m+n+s+1+k}$, for every $i\in\{1,\dots,\ell\}$. Thus, \cite[Lem.\,2.9]{AK81a} ensures the existence of a $\mathtt{m}athscr{C}^\infty$ diffeomorphism $\psi_5:M''{\tt t}o M''$ such that $\psi_5(\psi(M_i))=M''_i$ and $\psi_5$ is $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $j_{M'}:M'\hookrightarrow\R^{m+n+s+1+k}$.
Hence, setting $``M' ":=M''$ and $``M'_i ":=M''_i$, for every $i\in\{1,\dots,\ell\}$, and $``h ":=\psi_5\circ\psi_4\circ\psi_3$, we obtain the desired projectively $\Q$-closed $\Q$-nonsingular algebraic model $M'$ of $M$, together with $\Q$-nonsingular algebraic subsets $\{M'_i\}_{i=1}^\ell$ in general position and a $\mathscr{C}^\infty$ diffeomorphism $h:M\to M'$ satisfying $h(M_i)=M'_i$, $\jmath\circ h\in\mathcal{U}$ and $\jmath\circ h|_{M_i}\in\mathcal{U}_i$, for every $i\in\{1,\dots,\ell\}$, where $\jmath:M'\hookrightarrow\R^{m+n+s+1+k}$ denotes the inclusion map. Finally, applying $\Q$-generic projection techniques (see \cite[Lem.\,2.16]{GSa}), we may suppose that the projectively $\Q$-closed $\Q$-nonsingular algebraic set $M'$ is contained in $\R^N$, with $N:=\max(m+n,2m+1)$.
Assume in addition that $M$ and each $M_i$, with $i\in\{1,\dots,\ell\}$, are Nash manifolds. By \cite[Thm.\,4.7]{GSa} we can assume that $h:M\to M'$ is a Nash diffeomorphism such that $h(M_i)=M'_i$, for every $i\in\{1,\dots,\ell\}$. Moreover, an application of \cite{Jel08} provides a semialgebraic homeomorphism $\R^N\to\R^N$ extending $h$, as desired.
\end{proof}
The above result can be extended to the case in which $M$ and the submanifolds $M_i$, for every $i\in\{1,\dots,\ell\}$, are nonsingular algebraic sets that are not assumed to be compact. The strategy is to apply an algebraic compactification, obtaining a compact algebraic set with at most an isolated singularity, and then to apply a relative variant of the strategy proposed in \cite[Thms.\,5.1\,\&\,5.2]{GSa}. The next theorem provides a complete proof of our Main Theorem, hence it gives a complete positive answer to the \textsc{Relative $\Q$-algebrization of nonsingular algebraic sets} problem.
\begin{theorem}\label{thm:non-compact}
Let $V$ be a nonsingular algebraic subset of $\R^{m+n}$ of dimension $m$ and let $\{V_i\}_{i=1}^\ell$ be a finite family of nonsingular algebraic subsets of $V$ of codimension $c_i$ in general position. Set $N:=\max(2(m+n+1),3(m+1)+n)$. Then, for every neighborhood $\mathcal{U}$ of the inclusion map $\iota:V\hookrightarrow\R^{N}$ in ${\EuScript N}_{\mathrm{w}}(V,\R^{N})$ and for every neighborhood $\mathcal{U}_i$ of the inclusion map $\iota|_{V_i}:V_i\hookrightarrow\R^N$ in ${\EuScript N}_{\mathrm{w}}(V_i,\R^N)$, for every $i\in\{1,\dots,\ell\}$, there exist a $\Q$-nonsingular algebraic set $V'\subset\R^N$, a family $\{V'_i\}_{i=1}^\ell$ of $\Q$-nonsingular algebraic subsets of $V'$ in general position and a Nash diffeomorphism $h:V\to V'$ which simultaneously takes each $V_i$ to $V'_i$ such that, if $\jmath:V'\hookrightarrow\R^N$ denotes the inclusion map, then $\jmath\circ h\in\mathcal{U}$ and $\jmath\circ h|_{V_i}\in\mathcal{U}_i$, for every $i\in\{1,\dots,\ell\}$. Moreover, $h$ extends to a semialgebraic homeomorphism from $\R^N$ to $\R^N$.
\end{theorem}
\begin{proof}
Up to translating $V$ and each $V_i$, with $i\in\{1,\dots,\ell\}$, by a vector with rational coordinates, we may suppose that the origin $\overline{0}$ of $\R^{m+n}$ is not contained in $V$. Let $s,s_1,\dots,s_\ell\in\R[x_1,\dots,x_{m+n}]$ be such that $\mathcal{Z}_{\R}(s)=V$ and $\mathcal{Z}_{\R}(s_i)=V_i$, for every $i\in\{1,\dots,\ell\}$. Let $\sph^{m+n-1}$ be the standard unit sphere of $\R^{m+n}$ and let $\theta:\R^{m+n}\setminus\{\overline{0}\}\to\R^{m+n}\setminus\{\overline{0}\}$, defined by $\theta(x)=\frac{x}{\parallel x \parallel_{\R^{m+n}}^2}$, be the inversion with respect to $\sph^{m+n-1}$. Recall that $\theta\circ\theta=\operatorname{id}_{\R^{m+n}\setminus\{\overline{0}\}}$. Let $d\geq \max(\deg(s),\deg(s_1),\dots,\deg(s_\ell))$. Define the polynomials $t:=\parallel x \parallel^{2d}_{\R^{m+n}}\cdot (s\circ\theta)(x)\in\R[x]$ and $t_i:=\parallel x \parallel^{2d}_{\R^{m+n}}\cdot (s_i\circ\theta)(x)\in\R[x]$, and the compact algebraic sets $\widetilde{V}:=\mathcal{Z}_\R(t)$ and $\widetilde{V}_i:=\mathcal{Z}_\R(t_i)$, for every $i\in\{1,\dots,\ell\}$. By construction, $\widetilde{V}=\theta(V)\sqcup\{\overline{0}\}$, $\widetilde{V}_i=\theta(V_i)\sqcup\{\overline{0}\}$, for every $i\in\{1,\dots,\ell\}$, and $\theta:V\to \widetilde{V}\setminus\{\overline{0}\}$ is a $\Q$-biregular map between locally closed algebraic sets, thus $\widetilde{V}\setminus\{\overline{0}\}=\theta(V)$ and $\widetilde{V}_i\setminus\{\overline{0}\}=\theta(V_i)$, for every $i\in\{1,\dots,\ell\}$. In general, $\overline{0}$ may be a singular point of $\widetilde{V}$ and of $\widetilde{V}_i$, for $i\in\{1,\dots,\ell\}$.
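To illustrate this compactification procedure on a toy example (chosen here only for illustration and playing no role in the proof), take $m=n=1$ and $V:=\mathcal{Z}_\R(x_1-1)\subset\R^2$, so that we may choose $s(x)=x_1-1$ and $d=1$; then
\[
t(x)=\parallel x\parallel^2_{\R^2}\Big(\frac{x_1}{\parallel x\parallel^2_{\R^2}}-1\Big)=x_1-x_1^2-x_2^2,
\]
and $\widetilde{V}=\mathcal{Z}_\R(t)$ is the circle of center $(1/2,0)$ and radius $1/2$, that is, the inversion $\theta(V)$ of the line $V$ together with the point $\overline{0}$. In this particular case $\overline{0}$ happens to be a nonsingular point of $\widetilde{V}$, but, as observed above, in general it need not be.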
By Hironaka's desingularization theorem there are a finite set $J\subset\N\setminus\{1,\dots,\ell\}$, nonsingular algebraic subsets $X$, $X_i$ and $E_j$, for every $i\in\{1,\dots,\ell\}$ and $j\in J$, and a regular map $p:X{\tt t}o \mathtt{m}athrm{w}idetilde{V}$ satisfying the following properties:
\begin{enumerate}
\item $E_j$ is an algebraic hypersurface of $X$ for every $j\in J$ and $\bigcup_{j\in J} E_j=p^{-1}(\overline{0})$;
\item the nonsingular algebraic sets $\{X_i\}_{i=1}^\ell\sqcup \{E_j\}_{j\in J}$ are in general position;
\item $p|_{X\setminus \bigcup_{j\in J} E_j}:X\setminus \bigcup_{j\in J} E_j{\tt t}o \mathtt{m}athrm{w}idetilde{V}$ is biregular.
\end{enumerate}
An application of Theorem \ref{thm:Q_tico_tognoli} with the following substitutions:
``$M$''$:=X$, ``$\ell$''$:=\ell+|J|$,``$M_i$''$:=X_i$, for every $i\in\{1,\dots,\ell\}$, ``$M_j$''$:=E_j$, for every $j\in J$, gives a projectively $\Q$-closed $\Q$-nonsingular algebraic set $X'\subset\R^{N}$ of dimension $m$, with $N:=\mathtt{m}ax(m+n,2m+1)$, $\Q$-nonsingular algebraic subsets $X'_i$, for $i\in\{1,\dots,\ell\}$, and $\Q$-nonsingular algebraic hypersurfaces $E'_j$, for $j\in J$, of $X'$ in general position and a Nash diffeomorphism $\phi:X{\tt t}o X'$ such that $\phi(X_i)=X'_i$, for every $i\in\{1,\dots,\ell\}$, and $\phi(E_j)=E'_j$, for every $j\in J$.
Consider the Nash map $p':=\phi^{-1}\circ p: X'{\tt t}o \mathtt{m}athrm{w}idetilde{V}$ such that $(p')^{-1}(0)=\bigcup_{i=1}^\ell X'_i$. Since the $\Q$-nonsingular algebraic sets $X_i$, with $i\in\{1,\dots,\ell\}$, are in general position, we deduce that $\bigcup_{i=1}^\ell X'_i$ is a $\Q$-algebraic set with monomial singularities (see \cite[Def.\,1.1.]{BFR14}) hence, \cite[Cor.\,4.A.4.]{BFR14} ensures that, for every $a\in\bigcup_{i=1}^\ell X'_i$ there are an open neighborhood $U_a$ of $a$ in $\R^{N}$, Nash functions $u_1,\dots,u_k\in\mathtt{m}athcal{N}(U_a)$ and polynomials $q_1,\dots,q_k\in\I_{\Q}(\bigcup_{i=1}^\ell X'_i)$ such that $f|_{U_a}=\sum_{j=1}^{k} u_j q_j|_{U_a}$. Indeed, since $a$ is a monomial singularity and each $X_i$ is a $\Q$-nonsingular algebraic subset of $\R^N$, there are a neighborhood $U_a$ of $a$ in $\R^N$, a neighborhood $U$ of $\overline{0}$ in $\R^N$ and a Nash diffeomorphism $\psi:U_a{\tt t}o U$ such that $\psi(\bigcup_{i=1}^\ell X'_i \cap U_a)$ is the germ at $\overline{0}$ of a union of coordinate linear varieties of $\R^N$ and the coordinates are chosen among the polynomials in $\I_\Q(X_i)$, for some $i\in\{1,\dots,\ell\}$, describing the local behaviour of the algebraic sets $X_i$'s at $a$. Thus, since the ideal of Nash functions on $U$ vanishing at $\psi(\bigcup_{i=1}^\ell X'_i \cap U_a)$ is generated by square-free monomials, we get that the ideal of Nash functions on $U_a$ vanishing at $\bigcup_{i=1}^\ell X'_i \cap U_a$ is generated by products of polynomials as above (without repetitions) vanishing in $\bigcup_{i=1}^\ell X'_i$. Hence, an application of \cite[Thm.\,3.9]{GSa} with the following substitutions: ``$M$''$:=X'$, ``$L$''$:=(\bigcup_{i=1}^\ell X'_i)\cup(\bigcup_{j\in J} E'_j)$, ``$W$''$:=\R^{m+n}$ and ``$f$''$:=p'$ gives a projectively $\Q$-closed $\Q$-nonsingular algebraic set $X''\subset\R^{N}{\tt t}imes\R^k$ of dimension $m$, for some $k\in\N$, such that $(\bigcup_{i=1}^\ell X'_i)\cup(\bigcup_{j\in J} E'_j){\tt t}imes\{0\}\subset X''$, a Nash diffeomorphism $\phi':X'{\tt t}o X''$ such that $\phi'(x)=(x,0)$, for every $x\in (\bigcup_{i=1}^\ell X'_i)\cup(\bigcup_{j\in J} E'_j)$ and a $\Q$-regular map $g:X''{\tt t}o \R^{m+n}$ such that $g(x,0)=0$ for every $x\in (\bigcup_{i=1}^\ell X'_i)\cup(\bigcup_{j\in J} E'_j)$ and $g\circ \phi'$ is $\mathtt{m}athscr{C}^\infty_{\mathtt{m}athrm{w}}$ close to $p'$. Moreover, up perform $\Q$-generic projection (see \cite[Lem.\,2.16]{GSa}), we may suppose that $X''\subset\R^{N}$, with $N:=\mathtt{m}ax(m+n,2m+1)$, as above.
Finally, an application of the $\Q$-version of the celebrated "Akbulut-King blowing down lemma", namely \cite[Lem.\,3.13]{GSa} , with the following substitutions: ``$X$''$:=X''$, ``$Y$''$:=\R^{m+n}$, ``$A$''$:=\bigcup_{j\in J} E'_j$, ``$p$''$:=g|_{\bigcup_{j\in J} E'_j}$ and ``$P$''$:=g$ gives a $\Q$-determined algebraic set $\mathtt{m}athrm{w}idetilde{V}'\subset\R^{N}{\tt t}imes\R^{m+n}{\tt t}imes\R$ of dimension $m$ with possibly only an isolated singularity at the origin of $\R^{N}{\tt t}imes\R^{m+n}{\tt t}imes\R$, with $N:=\mathtt{m}ax(m+n,2m+1)$, such that $\mathtt{m}athrm{w}idetilde{V}'_i:=f(\mathtt{m}athrm{w}idetilde{V}_i)\cup\{\overline{0}\}$, for every $i\in\{1,\dots,\ell\}$, is a $\Q$-determined algebraic subset of $\mathtt{m}athrm{w}idetilde{V}'$ of codimension $c_i$ with possibly only an isolated singularity at the origin $\overline{0}$ of $\R^{N}{\tt t}imes\R^{m+n}{\tt t}imes\R$, where $f:X''{\tt t}o \mathtt{m}athrm{w}idetilde{V'}$ denotes the $\Q$-regular map of \cite[Lem.\,3.13]{GSa} , and a semialgebraic homeomorphism $\mathtt{m}athrm{w}idetilde{h}:\mathtt{m}athrm{w}idetilde{V}{\tt t}o \mathtt{m}athrm{w}idetilde{V}'$ such that $\mathtt{m}athrm{w}idetilde{h}|_{\mathtt{m}athrm{w}idetilde{V}\setminus\{\overline{0}\}}:\mathtt{m}athrm{w}idetilde{V}\setminus\{\overline{0}\}{\tt t}o \mathtt{m}athrm{w}idetilde{V}'\setminus\{\overline{0}\}$ is a Nash diffeomorphism, $\mathtt{m}athrm{w}idetilde{h}|_{\mathtt{m}athrm{w}idetilde{V}_i}:\mathtt{m}athrm{w}idetilde{V}_i {\tt t}o \mathtt{m}athrm{w}idetilde{V}'_i$ is a semialgebraic homeomorphism, $\mathtt{m}athrm{w}idetilde{h}$ is $\mathtt{m}athscr{C}^0_\mathtt{m}athrm{w}$ close to the inclusion $\iota_{\mathtt{m}athrm{w}idetilde{V}}:\mathtt{m}athrm{w}idetilde{V}{\tt t}o\R^{N}{\tt t}imes\R^{m+n}$ and $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\iota_{\mathtt{m}athrm{w}idetilde{V}}|_{\mathtt{m}athrm{w}idetilde{V}\setminus\{\overline{0}\}}$ and $\mathtt{m}athrm{w}idetilde{h}_{\mathtt{m}athrm{w}idetilde{V}_i}$ is $\mathtt{m}athscr{C}^0_\mathtt{m}athrm{w}$ close to the inclusion $\iota_{\mathtt{m}athrm{w}idetilde{V}_i}:\mathtt{m}athrm{w}idetilde{V}{\tt t}o\R^{N}{\tt t}imes\R^{m+n}$ and $\mathtt{m}athscr{C}^\infty_\mathtt{m}athrm{w}$ close to $\iota_{\mathtt{m}athrm{w}idetilde{V}}|_{\mathtt{m}athrm{w}idetilde{V}\setminus\{\overline{0}\}}$. Let $N':=N+m+n+1$. Let $t',t'_1,\dots,t'_\ell\in\Q[x_1,\dots,x_{N'}]$ such that $\mathtt{m}athcal{Z}_{\R}(t')=\mathtt{m}athrm{w}idetilde{V'}$ and $\mathtt{m}athcal{Z}_{\R}(t'_i)=\mathtt{m}athrm{w}idetilde{V'}_i$, for every $i\in\{1,\dots,\ell\}$. Let $\sph^{N'-1}$ be the standard unit sphere of $\R^{N'}$ and let ${\tt t}heta':\R^{N'}\setminus\{\overline{0}\}{\tt t}o\R^{N'}\setminus\{\overline{0}\}$ as ${\tt t}heta'(x)=\frac{x}{\parallel x \parallel_{\R^{N'}}^2}$ be the inversion with respect to $\sph^{N'-1}$. Recall that ${\tt t}heta'\circ{\tt t}heta'=\operatorname{id}_{\R^{N'}\setminus\{\overline{0}\}}$. Let $e\geq \deg(t')$. Define the polynomials $s':=\parallel x \parallel^{2e}_{\R^{N'}}\cdot (t'\circ{\tt t}heta')(x)\in\Q[x]$, $s'_i:=\parallel x \parallel^{2e}_{\R^{N'}} t'_i\circ{\tt t}heta'(x)\in\Q[x]$, the algebraic sets $V':=\mathtt{m}athcal{Z}_\R(s')$ and $V_i:=\mathtt{m}athcal{Z}_\R(s'_i)$, for every $i\in\{1,\dots,\ell\}$. 
By construction, $V'={\tt t}heta'(\mathtt{m}athrm{w}idetilde{V}'\setminus\{\overline{0}\})\cup\{\overline{0}\}$, $V'_i={\tt t}heta'(\mathtt{m}athrm{w}idetilde{V}'_i\setminus\{\overline{0}\})\cup\{\overline{0}\}$, for every $i\in\{1,\dots,\ell\}$, and ${\tt t}heta':\mathtt{m}athrm{w}idetilde{V}'\setminus\{\overline{0}\}{\tt t}o V'\cup\{\overline{0}\}$ is a $\Q$-biregular map between locally closed algebraic sets, thus ${\tt t}heta'(\mathtt{m}athrm{w}idetilde{V}'\setminus\{\overline{0}\})=V'\cup\{\overline{0}\}$ and ${\tt t}heta'(\mathtt{m}athrm{w}idetilde{V}'_i\setminus\{\overline{0}\})=V'_i\cup\{\overline{0}\}$, for every $i\in\{1,\dots,\ell\}$. Observe that, by construction, the $\Q$-nonsingular algebraic sets $\{V'_i\}_{i=1}^\ell$ are in general position. Hence, let $C\in\Q$ be sufficiently high and consider the algebraic sets $V'':=\{(x,y)\in\R^{N'}{\tt t}imes\R\,|\,y=1/C\sum_{k=1}^{N'} x_k^2,\, s'(x)=0\}$ and $V''_i:=\{(x,y)\in\R^{N'}{\tt t}imes\R\,|\,y=1/C\sum_{k=1}^{N'} x_k^2,\, s'_i(x)=0\}$. By construction, $V''$ and $V''_i$ are $\Q$-nonsingular real algebraic sets, for every $i\in\{1,\dots,\ell\}$, $V'\setminus\{\overline{0}\}$ and $V''$ are $\Q$-biregularly isomorphic via projection $\pi:\R^{N'}{\tt t}imes\R{\tt t}o\R^{N'}$, $\pi|_{V''_i}:V''_i{\tt t}o V'_i\setminus\{\overline{0}\}$, for every $i\in\{1,\dots,\ell\}$ and the $\Q$-nonsingular algebraic sets $\{V''_i\}_{i=1}^\ell$ are in general position.
Define the Nash diffeomorphism $h:V\to V''$ as $h:=(\pi|_{V''})^{-1}\circ\theta'|_{\widetilde{V}'\setminus\{\overline{0}\}}\circ\widetilde{h}|_{\widetilde{V}\setminus\{\overline{0}\}}\circ\theta|_{V}$. By the choice of $\widetilde{h}$ as above and the choice of the constant $C\in\Q$, we deduce that $h|_{V_i}:V_i \to V''_i$ is a Nash diffeomorphism, $\jmath_{V''}\circ h$ is $\mathscr{C}^\infty_\mathrm{w}$ close to the inclusion $\iota_{V}:V\hookrightarrow \R^{N'+1}$ and $h|_{V_i}$ is $\mathscr{C}^\infty_\mathrm{w}$ close to the inclusion map $\iota_{V_i}:V_i\hookrightarrow \R^{N'+1}$, where $\jmath_{V''}:V''\hookrightarrow\R^{N'+1}$ denotes the inclusion map. Moreover, an application of \cite{Jel08} provides a semialgebraic homeomorphism $\R^{N'+1}\to\R^{N'+1}$ extending $h$. Observe that $N'+1=\max(2(m+n+1),3(m+1)+n)$, as desired.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{AK81b}{article}{
author={Akbulut, Selman},
author={King, Henry C.},
title={A relative Nash theorem},
journal={Trans. Amer. Math. Soc.},
volume={267},
date={1981},
number={2},
pages={465--481},
issn={0002-9947},
review={\MR{626484}},
doi={10.2307/1998665},
}
\bib{AK81a}{article}{
author={Akbulut, Selman},
author={King, Henry C.},
title={The topology of real algebraic sets with isolated singularities},
journal={Ann. of Math. (2)},
volume={113},
date={1981},
number={3},
pages={425--446},
issn={0003-486X},
review={\MR{621011}},
doi={10.2307/2006992},
}
\bib{AK92a}{book}{
author={Akbulut, Selman},
author={King, Henry},
title={Topology of real algebraic sets},
series={Mathematical Sciences Research Institute Publications},
volume={25},
publisher={Springer-Verlag, New York},
date={1992},
pages={x+249},
isbn={0-387-97744-9},
review={\MR{1225577}},
doi={10.1007/978-1-4613-9739-7},
}
\bib{BFR14}{article}{
author={Baro, El\'{\i}as},
author={Fernando, Jos\'{e} F.},
author={Ruiz, Jes\'{u}s M.},
title={Approximation on Nash sets with monomial singularities},
journal={Adv. Math.},
volume={262},
date={2014},
pages={59--114},
issn={0001-8708},
review={\MR{3228424}},
doi={10.1016/j.aim.2014.05.006},
}
\bib{Ben1}{article}{
author={Benoist, Olivier},
title={On the subvarieties with nonsingular real loci of a real algebraic variety},
journal={Geometry and Topology \textnormal{(to appear)}},
volume={},
date={},
pages={28},
issn={},
review={https://arxiv.org/abs/2005.06424},
doi={},
}
\bib{BCR98}{book}{
author={Bochnak, Jacek},
author={Coste, Michel},
author={Roy, Marie-Fran\c{c}oise},
title={Real algebraic geometry},
series={Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in
Mathematics and Related Areas (3)]},
volume={36},
note={Translated from the 1987 French original;
Revised by the authors},
publisher={Springer-Verlag, Berlin},
date={1998},
pages={x+430},
isbn={3-540-64663-9},
review={\MR{1659509}},
doi={10.1007/978-3-662-03718-8},
}
\bib{CS92}{article}{
author={Coste, Michel},
author={Shiota, Masahiro},
title={Nash triviality in families of Nash manifolds},
journal={Invent. Math.},
volume={108},
date={1992},
number={2},
pages={349--368},
issn={0020-9910},
review={\MR{1161096}},
doi={10.1007/BF02100609},
}
\bib{FG}{article}{
author={Fernando, Jos\'{e}},
author={Ghiloni, Riccardo},
title={Sub-algebraic geometry, the algebraic geometry over subfields},
journal={(to appear)},
}
\bib{GSa}{article}{
author={Ghiloni, Riccardo},
author={Savi, Enrico},
title={The topology of real algebraic sets with isolated singularities is determined by the field of rational numbers},
journal={arXiv:2302.04142},
}
\bib{GT17}{article}{
author={Ghiloni, Riccardo},
author={Tancredi, Alessandro},
title={Algebraicity of Nash sets and of their asymmetric cobordism},
journal={J. Eur. Math. Soc. (JEMS)},
volume={19},
date={2017},
number={2},
pages={507--529},
issn={1435-9855},
review={\MR{3605023}},
doi={10.4171/JEMS/672},
}
\bib{Hir94}{book}{
author={Hirsch, Morris W.},
title={Differential topology},
series={Graduate Texts in Mathematics},
volume={33},
note={Corrected reprint of the 1976 original},
publisher={Springer-Verlag, New York},
date={1994},
pages={x+222},
isbn={0-387-90148-5},
review={\MR{1336822}},
}
\bib{Jel08}{article}{
author={Jelonek, Zbigniew},
title={On the extension of real regular embedding},
journal={Bull. Lond. Math. Soc.},
volume={40},
date={2008},
number={5},
pages={801--806},
issn={0024-6093},
review={\MR{2439645}},
doi={10.1112/blms/bdn058},
}
\bib{Kol17}{article}{
author={Koll\'{a}r, J\'{a}nos},
title={Nash's work in algebraic geometry},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={54},
date={2017},
number={2},
pages={307--324},
issn={0273-0979},
review={\MR{3619728}},
doi={10.1090/bull/1543},
}
\bib{Kuc11}{article}{
author={Kucharz, Wojciech},
title={Algebraicity of cycles on smooth manifolds},
journal={Selecta Math. (N.S.)},
volume={17},
date={2011},
number={4},
pages={855--878},
issn={1022-1824},
review={\MR{2861609}},
doi={10.1007/s00029-011-0064-0},
}
\bib{LW69}{book}{
author={Lundell, Albert T.},
author={Weingram, Stephen},
title={The topology of CW complexes},
series={The University Series in Higher Mathematics},
publisher={Van Nostrand Reinhold Co., New York},
date={1969},
pages={viii+216},
review={\MR{3822092}},
doi={10.1007/978-1-4684-6254-8},
}
\bib{Man01}{book}{
author={Manivel, Laurent},
title={Symmetric functions, Schubert polynomials and degeneracy loci},
series={SMF/AMS Texts and Monographs},
volume={6},
note={Translated from the 1998 French original by John R. Swallow;
Cours Sp\'{e}cialis\'{e}s [Specialized Courses], 3},
publisher={American Mathematical Society, Providence, RI; Soci\'{e}t\'{e}
Math\'{e}matique de France, Paris},
date={2001},
pages={viii+167},
isbn={0-8218-2154-7},
review={\MR{1852463}},
}
\bib{Man14}{book}{
author={Mangolte, Fr\'{e}d\'{e}ric},
title={Real algebraic varieties},
series={Springer Monographs in Mathematics},
note={Translated from the 2017 French original [ 3727103] by Catriona
Maclean},
publisher={Springer, Cham},
date={2020},
pages={xviii+444},
isbn={978-3-030-43104-4},
isbn={978-3-030-43103-7},
review={\MR{4179588}},
doi={10.1007/978-3-030-43104-4},
}
\bib{Mil65}{article}{
author={Milnor, J.},
title={On the Stiefel-Whitney numbers of complex manifolds and of spin
manifolds},
journal={Topology},
volume={3},
date={1965},
pages={223--230},
issn={0040-9383},
review={\MR{180977}},
doi={10.1016/0040-9383(65)90055-8},
}
\bib{MS74}{book}{
author={Milnor, John W.},
author={Stasheff, James D.},
title={Characteristic classes},
series={Annals of Mathematics Studies, No. 76},
publisher={Princeton University Press, Princeton, N. J.; University of
Tokyo Press, Tokyo},
date={1974},
pages={vii+331},
review={\MR{0440554}},
}
\bib{Na52}{article}{
author={Nash, John},
title={Real algebraic manifolds},
journal={Ann. of Math. (2)},
volume={56},
date={1952},
pages={405--421},
issn={0003-486X},
review={\MR{50928}},
doi={10.2307/1969649},
}
\bib{Par21}{article}{
author={Parusi\'{n}ski, Adam},
title={Algebro-geometric equisingularity of Zariski},
conference={
title={Handbook of geometry and topology of singularities II},
},
book={
publisher={Springer, Cham},
},
date={2021},
pages={177--222},
review={\MR{4367438}},
doi={10.1007/978-3-030-78024-1},
}
\bib{PR20}{article}{
author={Parusi\'{n}ski, Adam},
author={Rond, Guillaume},
title={Algebraic varieties are homeomorphic to varieties defined over
number fields},
journal={Comment. Math. Helv.},
volume={95},
date={2020},
number={2},
pages={339--359},
issn={0010-2571},
review={\MR{4115286}},
doi={10.4171/CMH/490},
}
\bib{Tog73}{article}{
author={Tognoli, Alberto},
title={Su una congettura di Nash},
journal={Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3)},
volume={27},
date={1973},
pages={167--185},
issn={0391-173X},
review={\MR{396571}},
}
\bib{Whi36}{article}{
author={Whitney, Hassler},
title={Differentiable manifolds},
journal={Ann. of Math. (2)},
volume={37},
date={1936},
number={3},
pages={645--680},
issn={0003-486X},
review={\MR{1503303}},
doi={10.2307/1968482},
}
\bib{Zel83}{article}{
author={Zelevinsky, Andrei Vladlenovich},
title={Small resolutions of singularities of Schubert varieties},
language={Russian},
journal={Funktsional. Anal. i Prilozhen.},
volume={17},
date={1983},
number={2},
pages={75--77},
issn={0374-1990},
review={\MR{705051}},
}
\end{biblist}
\end{bibdiv}
\end{document} |
\begin{document}
\title{Mellin Analysis and Its Distance Concept Applications to Sampling Theory}
\noindent
{\small {\bf Abstract:} In this paper a notion of functional ``distance'' in the Mellin transform setting is introduced and a general representation formula
is obtained for it. Also, a determination of the distance is given in terms of Lipschitz classes and Mellin-Sobolev spaces. Finally, applications to
approximate versions of certain basic relations valid for Mellin band-limited functions are studied in detail.}
\section{Introduction}
A Mellin version of the Paley-Wiener theorem of Fourier analysis was introduced in \cite{BBMS}, using both complex and real approaches. Moreover, the
structure of the set of Mellin band-limited functions (i.e. functions with compactly supported Mellin transform) was studied. It turns out that a Mellin
band-limited function cannot at the same time be Fourier band-limited, and it is extendable as an analytic function to the Riemann surface of the (complex)
logarithm. This makes the theory of Mellin band-limited functions very different from the Fourier band-limited ones since one has to extend the notion of
the Bernstein spaces in a suitable way, involving Riemann surfaces (Mellin-Bernstein spaces). In the classical frame, Fourier band-limitedness is a very
fundamental assumption in order to obtain certain basic formulae such as the Shannon sampling theorem, the Mellin reproducing kernel formula, the Boas
differentiation formula, the Bernstein inequality, quadrature formulae and so on. When a function $f$ is no longer (Fourier) band-limited, certain
approximate versions of the above formulae are available with a remainder which needs to be estimated in a suitable way. This was done in \cite{BSS1},
\cite{BSS2}, \cite{BSS3} in terms of an appropriate notion of ``distance'' of $f$ from the involved Bernstein space. In the Mellin transform setting an
exponential version of the Shannon sampling theorem for Mellin band-limited functions was first introduced in a formal way in \cite{OSP}, \cite{BP} in
order to study problems arising in optical physics. A precise mathematical version of the exponential sampling formula, also in the approximate sense, was given in \cite{BJ3}, \cite{BJ2}, employing a rigorous Mellin transform analysis, as developed in
\cite{BJ0}, \cite{BJ1} (see also \cite{BBM2}). Furthermore, a Mellin version of the reproducing kernel formula, both for Mellin band-limited functions and
in an approximate sense, was proved in \cite{BBM0}. Therefore it is quite natural to study estimates of the error in the approximate versions of the
exponential sampling theorem, the reproducing kernel formula, the Bernstein inequality and the Mellin-Boas differentiation formula using a new notion of
``Mellin distance'' of a function $f$ from a Mellin-Bernstein space. In the present paper, we introduce a notion of distance in the Mellin frame,
and we prove certain basic representation theorems for it (Sec.~3). In Sec.~4 we give precise evaluations of the Mellin distance in some fundamental
function spaces such as Lipschitz classes and Mellin-Sobolev spaces. In Sec.~5 we describe some important applications to the approximate exponential
sampling theorem, the Mellin reproducing kernel theorem and the Boas differentiation formula in the Mellin setting, employing Mellin derivatives. Moreover,
the theory developed here enables one to obtain an interesting approximate version of the Bernstein inequality with an estimation of the remainder.
The present approach may also be employed in order to study other basic relations valid for Mellin band-limited functions.
\section{Notations and preliminary results}
Let $C(\mathbb{R}^+)$\,and $C(\{c\} \times i\mathbb{R})$ be the spaces of all uniformly continuous and bounded functions defined on $\mathbb{R}^+$ and on
the line $\{c\} \times i\mathbb{R}, c \in \mathbb{R},$ respectively, endowed with the usual sup-norm $\|\cdot\|_\infty,$ and let $C_0(\mathbb{R}^+)$ be the
subspace of
$C(\mathbb{R}^+)$ of functions $f$ satisfying $\lim_{x \rightarrow 0^+} f(x) = \lim_{x \rightarrow +\infty}f(x) = 0.$
For $1\leq p < +\infty,$ let $L^p= L^p(\mathbb{R}^+)$~ be the space of all the Lebesgue measurable and $p$-integrable complex-valued functions defined on
$\mathbb{R}^+$ endowed with the usual norm $\|\cdot\|_p.$ Analogous notations hold for functions
defined on $\mathbb{R}.$
For $p=1$ and $c \in \mathbb{R},$ let us consider the space
$$X^1_c = \{ f: \mathbb{R}^+\rightarrow \mathbb{C}: f(\cdot) (\cdot)^{c-1}\in L^1(\mathbb{R}^+) \}$$
endowed with the norm
$$ \| f\|_{X^1_c} := \|f(\cdot) (\cdot)^{c-1} \|_1 = \int_0^\infty |f(u)|u^{c-1} du.$$
More generally, let $X^p_c$ denote the space of all functions $f: \mathbb{R}^+\rightarrow \mathbb{C}$ such that $f(\cdot) (\cdot)^{c-1/p}\in
L^p(\mathbb{R}^+)$ with $1<p< \infty.$ In an equivalent form, $X^p_c$ is the space of all functions $f$ such that $(\cdot)^c f(\cdot) \in
L^p_\mu(\mathbb{R}^+),$ where $L^p_\mu= L^p_\mu(\mathbb{R}^+)$ denotes the Lebesgue space with respect to the (invariant) measure $\mu (A) = \int_A dt/t$
for any measurable set $A \subset \mathbb{R}^+.$ Finally, by $X^\infty_c$ we will denote the space of all functions $f:\mathbb{R}^+ \rightarrow \mathbb{C}$
such that $\|(\cdot)^c f(\cdot)\|_\infty = \sup_{x>0}|x^cf(x)| < +\infty.$
The Mellin transform of a function $f\in X^1_c$ is defined by (see e.g. \cite{MA}, \cite{BJ1})
$$ M_c[f](s) \equiv [f]^{\wedge}_{M_c} (s) = \int_0^\infty u^{s-1} f(u) du~~~(s=c+ it, t\in \mathbb{R}).$$
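As an elementary illustration (a standard example, recalled here only to fix ideas), let $f(u) = e^{-u}$; then $f \in X^1_c$ for every $c>0,$ since $\int_0^\infty e^{-u}u^{c-1}du = \Gamma(c) < +\infty,$ and
$$ M_c[f](c+it) = \int_0^\infty u^{c+it-1} e^{-u} du = \Gamma(c+it)~~~(t\in \mathbb{R}),$$
where $\Gamma$ denotes the Euler Gamma function.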
Basic properties of the Mellin transform are the following:
$$M_c[af + bg](s) = a M_c[f](s) + bM_c[g](s)~~~(f,g \in X^1_c,~a,b \in \mathbb{R}),$$
$$|M_c[f](s)| \leq \|f\|_{X^1_c}~~(s = c+it).$$
The inverse Mellin transform $M^{-1}_c[g]$ of the function $g \in L^1(\{c\} \times i\mathbb{R})$ is defined by
$$M^{-1}_c[g](x) := \frac{x^{-c}}{2 \pi}\int_{-\infty}^{+\infty} g(c+it) x^{-it}dt ~~~(x \in \mathbb{R^+}),$$
where by $L^p(\{c\} \times i\mathbb{R})$ for $p \geq 1,$ will mean the space of all functions $g:\{c\} \times i\mathbb{R} \rightarrow \mathbb{C}$ with
$g(c +i\cdot) \in L^p(\mathbb{R}).$
For $p=2$ the Mellin transform $M_c^2$ of $f \in X^2_c$ is given by (see \cite{BJ2})
$$M_c^2[f](s) \equiv [f]^{\wedge}_{M_c^2} (s) = \limm_{\rho \rightarrow +\infty}~\int_{1/\rho}^\rho f(u) u^{s-1}du~~~(s=c+it),$$
in the sense that
$$\lim_{\rho \rightarrow \infty}\bigg\|M_c^2[f](c+it) - \int_{1/\rho}^\rho f(u) u^{s-1}du\bigg\|_{L^{2}(\{c\}\times i\mathbb{R})} = 0.$$
In this instance, the Mellin transform is norm-preserving in the sense that (see \cite{BJ2})
$$\|g\|_{X^2_c} = \frac{1}{\sqrt{2\pi}}\|[g]^\wedge_{M_c}\|_{L^2(\{c\}\times i\mathbb{R})}.$$
More generally, using the Riesz-Thorin convexity theorem, one may introduce a definition of Mellin transform in $X^p_c$ with $p\in ]1,2[$ in an analogous
way, i.e., $$\lim_{\rho \rightarrow \infty}\bigg\|M_c^p[f](c+it) - \int_{1/\rho}^\rho f(u) u^{s-1}du\bigg\|_{L^{p'}(\{c\}\times i\mathbb{R})} = 0,$$
where $p'$ denotes the conjugate exponent of $p.$
Analogously, the inverse Mellin transform $M_c^{-1, 2}$ of a function $g \in L^{2}(\{c\} \times i\mathbb{R})$ is defined as
$$M_c^{-1,2}[g](x) = \limm_{\rho \rightarrow +\infty}~\frac{1}{2\pi}\int_{-\rho}^\rho g(c + iv) x^{-c-iv}dv,$$
in the sense that
$$\lim_{\rho \rightarrow \infty}\bigg\|M_c^{-1,2}[g](\cdot) - \frac{1}{2\pi}\int_{-\rho}^\rho g(c + iv) (\cdot)^{-c-iv}dv\bigg\|_{X^2_c} = 0.$$
In a similar way one can define the inverse Mellin transform $M_c^{-1,p}$ with $p \in {]}1,2{[}.$
In what follows, we will continue to denote the Mellin transform of a function $g\in L^p(\mathbb{R}^+)$ by $[g]^\wedge_{M_c}$ and we will consider essentially the cases $p=1$ and $p=2.$
The Mellin translation operator $\tau_h^c$ for $h \in \mathbb{R}^+,~c \in \mathbb{R}$ and $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ is defined by
$$(\tau_h^c f)(x) := h^c f(hx)~~(x\in \mathbb{R}^+).$$
\noindent
Setting $\tau_h:= \tau^0_h,$ we have $(\tau_h^cf)(x) = h^c (\tau_hf)(x)$ and
$\|\tau_h^c f\|_{X^1_c} = \|f\|_{X^1_c}.$
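Indeed, the last identity follows at once from the substitution $u = hx$:
$$\|\tau_h^c f\|_{X^1_c} = \int_0^\infty h^c|f(hx)|x^{c-1}dx = \int_0^\infty |f(u)|u^{c-1}du = \|f\|_{X^1_c}.$$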
\vskip0,4cm
For $1\leq p \leq 2$, denote by $B^p_{c,\sigma}$ the Bernstein space of all functions $f\in X^p_c\cap C(\mathbb{R}^+),$ $c \in \mathbb{R},$ which are Mellin
band-limited
to the interval $[-\sigma,\sigma],$ $\sigma \in \mathbb{R}^+,$ thus for which $[f]^\wedge_{M_c}(c+it) = 0$ for all $|t| > \sigma.$ We notice that, as in Fourier analysis, the inclusion $B^{p}_{c,\sigma} \subset B^q_{c,\sigma}$ holds for $1 \leq p<q \leq 2.$
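As a concrete example (the notation $\ell_{c,\sigma}$ is introduced here only for illustration), consider the function
$$\ell_{c,\sigma}(x) := \frac{x^{-c}}{\pi}\,\frac{\sin (\sigma \log x)}{\log x}~~~(x>0,\, x \neq 1), \qquad \ell_{c,\sigma}(1):= \frac{\sigma}{\pi}.$$
The substitution $t= \log x$ reduces its Mellin transform to the Fourier transform of $\sin(\sigma t)/(\pi t),$ so that $[\ell_{c,\sigma}]^\wedge_{M_c^2}(c+iv) = 1$ for $|v| < \sigma$ and $=0$ for $|v| > \sigma.$ Hence $\ell_{c,\sigma}$ is Mellin band-limited to $[-\sigma,\sigma]$ and belongs to $X^2_c$ but not to $X^1_c$; in particular, for $c=0$ it provides an element of $B^2_{0,\sigma}.$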
\section{A notion of distance}
For $q \in [1,+\infty]$, let $G_c^q$ be the linear space of all functions $f:\mathbb{R}^+ \rightarrow \mathbb{C}$ that have the representation
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\varphi(v) x^{-c-iv}dv \qquad (x>0),$$
where $\varphi \in L^1(\mathbb{R}) \cap L^q(\mathbb{R}).$
The space $G_c^q$ will be endowed with the norm
$$[\!\!|f|\!\!]_q := \|\varphi\|_{L^q(\mathbb{R})} = \left(\int_{-\infty}^\infty |\varphi(v)|^qdv\right)^{1/q}.$$
Note that this is really a norm. Indeed, $[\!\!|f|\!\!]_q = 0$ iff $f=0$ due to the existence and uniqueness of Mellin inversion (see \cite{BJ1}, \cite{BJ2}).
The above norm induces the metric
$$\mbox{\rm dist}_q(f,g) := [\!\!|f-g|\!\!]_q \quad \quad f,g \in G_c^q.$$
Note that in case $q=2$ we have
$$\mbox{\rm dist}_2(f,g) = \sqrt{2\pi}\|f-g\|_{X^2_c},$$
i.e., our distance reduces to the ``Euclidean'' distance in $X^2_c,$ up to the factor $\sqrt{2\pi}.$\newline
\noindent
As a consequence of the Mellin inversion formula, functions $f$ for which $[f]^\wedge_{M_c}(c+i\cdot) \in L^1(\mathbb{R}) \cap L^q(\mathbb{R})$ belong
to $G_c^q.$ For $p \in [1,2]$, the Mellin-Bernstein space $B_{c,\sigma}^p$ is a subspace of $G_c^q$ since the Mellin transform of $f \in B_{c,\sigma}^p$
has compact support as a function of $v \in \mathbb{R}$ and so it belongs to any $L^q(\mathbb{R}).$
For $f \in G_c^q$ we define
$$\mbox{\rm dist}_q(f,B_{c,\sigma}^p) = \inf_{g \in B_{c,\sigma}^p}[\!\!|f-g|\!\!]_q\,.$$
\vskip0,4cm
The following representation theorem holds:
\newtheorem{Theorem}{Theorem}
\begin{Theorem} \label{representation1} For any $f \in G_c^q$, we have
$$\mbox{\rm dist}_q(f,B_{c,\sigma}^p) = \left(\int_{|v| \geq \sigma}|\varphi (v)|^qdv\right)^{1/q} \quad \quad (1\leq q<\infty),$$
and if $\varphi$ is also continuous, then
$$\mbox{\rm dist}_\infty(f,B_{c,\sigma}^p) = \sup_{|v| \geq \sigma}|\varphi(v)|.$$
\end{Theorem}
{\bf Proof}.
Assume $q< \infty.$ Clearly, since $g \in B_{c, \sigma}^p$ implies $|[g]^\wedge_{M_c}(c+iv)| = 0$ for $|v| \geq \sigma,$ one has
$$\mbox{\rm dist}_q(f,B_{c,\sigma}^p) = \left\{\inf_{g \in B_{c,\sigma}^p}\int_{|v|\leq \sigma}|\varphi(v) - [g]^\wedge_{M_c}(c+iv)|^qdv + \int_{|v|\geq \sigma}|\varphi(v)|^qdv\right\}^{1/q}.$$
Therefore we have to prove that
$$I_{p,q}:= \inf_{g \in B_{c,\sigma}^p}\int_{|v| \leq \sigma} |\varphi(v) - [g]^\wedge_{M_c}(c+iv)|^q dv = 0.$$
For the given $\sigma >0,$ the space $C^\infty_c({]}-\sigma, \sigma{[})$, whose elements are all the infinitely differentiable functions with compact
support in ${]}-\sigma, \sigma{[},$ is dense in $L^q({]}-\sigma, \sigma{[})$ for $1\leq q< \infty$ (see e.g. \cite{AD}). Thus, given $\varphi
\in L^q(\mathbb{R})$ and $\varepsilon >0$, we can take a function $P \in C^\infty_c({]}-\sigma, \sigma{[})$ such that
$$\|\varphi - P\|_{L^q({]}-\sigma, \sigma{[})} < \varepsilon.$$
Now we define
$$g_\varepsilon (x) := \frac{x^{-c}}{2\pi}\int_{-\sigma}^\sigma P(v) x^{-iv}dv \quad (x>0).$$
Integrating $k$-times by parts, one can easily see that $x^cg_\varepsilon(x) = {\cal O}((\log x)^{-k})$ for $x \rightarrow +\infty$ and $x\rightarrow 0^+.$
This implies that $g_\varepsilon \in X^p_c$ for any $p\geq 1$, and $[g_\varepsilon]^\wedge_{M_c}(c+iv) = P(v)$ for $|v| \leq \sigma$ and $0$ otherwise.
Thus $g_\varepsilon \in B^p_{c, \sigma}.$ Now we conclude that
\begin{eqnarray*}
I_{p,q}^{1/q} &\leq& \|\varphi - [g_\varepsilon]^\wedge_{M_c}(c+i\cdot)\|_{L^q(]-\sigma, \sigma[)} \\&=& \|\varphi - P\|_{L^q(]-\sigma, \sigma[)} < \varepsilon.
\end{eqnarray*}
Hence the assertion follows for $1\leq q < \infty.$
The case $q=\infty$ is treated in a different way. To this end, given $\varepsilon >0,$ there exists a twice continuously differentiable function $\psi$
on $\mathbb{R}$ such that $\sup_{v \in \mathbb{R}}|\varphi (v) - \psi (v)| < \varepsilon/2.$ For example, $\psi$ may be chosen as an appropriate spline
function. For $\eta \in {]}0,\sigma{[},$ define
\begin{eqnarray*}
\psi_1(v):= \left\{\begin{array}{llll} 0 \quad &\mbox{if}\quad |v|\leq \sigma -\eta,\\[2ex]
\frac{\psi(-\sigma)}{-\eta^3}(v+\sigma-\eta)^3 \quad &\mbox{if}\quad -\sigma \leq v \leq -\sigma +\eta,\\[2ex]
\frac{\psi(\sigma)}{\eta^3}(v-\sigma + \eta)^3 \quad &\mbox{if}\quad \sigma - \eta \leq v \leq \sigma,\\[2ex]
\psi(v) \quad &\mbox{if}\quad |v| \geq \sigma.
\end{array} \right.
\end{eqnarray*}
Note that $\psi_1$ is continuous on $\mathbb{R}$ and
$$\|\psi_1\|_{L^\infty(\mathbb{R})} = \sup_{|v| \geq \sigma -\eta}|\psi_1(v)| = \sup_{|v| \geq \sigma}|\psi (v)| \leq
\sup_{|v|\geq \sigma}|\varphi(v)| + \frac{\varepsilon}{2}\,.$$
Next define $\psi_0(v):= \psi(v) - \psi_1(v)$ for $v \in \mathbb{R}.$ Then $\psi_0$ is continuous on $\mathbb{R},$ twice continuously differentiable on
${]}-\sigma, \sigma{[},$ it vanishes at $\pm \sigma$ and it has support on $[-\sigma, \sigma].$ With these properties, two integrations by parts show that
$$g_\varepsilon(x):=\frac{x^{-c}}{2 \pi}\int_{-\sigma}^\sigma \psi_0(v)x^{-iv}dv \quad \quad (x>0)$$
defines a function $g_\varepsilon \in X^p_c \cap B^p_{c,\sigma}.$ Furthermore, the Mellin inversion formula yields that
$[g_\varepsilon]^\wedge_{M_c}(c+i\cdot) = \psi_0(\cdot).$ Now we conclude that
\begin{eqnarray*}
\mbox{\rm dist}_\infty(f, B^p_{c,\sigma}) &= & \inf_{g \in B^p_{c, \sigma}}\|\varphi - [g]^\wedge_{M_c}(c+i\cdot)\|_{L^\infty(\mathbb{R})}\\&\leq&
\|\varphi - [g_\varepsilon]^\wedge_{M_c}(c+i\cdot)\|_{L^\infty(\mathbb{R})}\\
&=&\|\varphi - \psi_0\|_{L^\infty(\mathbb{R})}\\
&\leq& \|\varphi - \psi\|_{L^\infty(\mathbb{R})} +\|\psi - \psi_0\|_{L^\infty(\mathbb{R})}\\ &\leq&
\frac{\varepsilon}{2} + \|\psi_1\|_{L^\infty(\mathbb{R})}\\
&\leq& \varepsilon + \sup_{|v| \geq \sigma} |\varphi (v)|.
\end{eqnarray*}
This implies that $\mbox{\rm dist}_\infty(f, B^p_{c, \sigma}) \leq \sup_{|v| \geq \sigma}|\varphi (v)|.$ On the other hand,
\begin{eqnarray*}
\mbox{\rm dist}_\infty(f, B^p_{c, \sigma}) &=& \max\left\{\inf_{g \in B^p_{c, \sigma}} \sup_{|v| \leq \sigma}|\varphi (v) - [g]^\wedge_{M_c}(c+iv)|, \sup_{|v| \geq \sigma}|\varphi (v)|\right\}\\&\geq& \sup_{|v| \geq \sigma}|\varphi(v)|.
\end{eqnarray*}
Hence the formula stated in the theorem holds.
$\Box$
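As a simple illustration of Theorem \ref{representation1}, consider (purely as an example) the function $f \in G^q_c$ generated by $\varphi(v) = e^{-|v|},$ namely
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-|v|}x^{-c-iv}dv = \frac{x^{-c}}{\pi(1+\log^2 x)} \qquad (x>0).$$
Since $\varphi \in L^1(\mathbb{R})\cap L^q(\mathbb{R})$ for every $q,$ Theorem \ref{representation1} gives
$$\mbox{\rm dist}_q(f,B^p_{c,\sigma}) = \left(\int_{|v|\geq \sigma}e^{-q|v|}dv\right)^{1/q} = \Big(\frac{2}{q}\Big)^{1/q}e^{-\sigma}~~(1\leq q< \infty), \qquad \mbox{\rm dist}_\infty(f,B^p_{c,\sigma}) = e^{-\sigma},$$
so that the distance from the Mellin-Bernstein spaces decays exponentially as $\sigma \rightarrow +\infty.$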
\vskip0,4cm
Next we will obtain a distance formula for Mellin derivatives.
We define the first Mellin derivative (the Mellin differential operator of first order) by
$$(\Theta_c^1f)(x) := \lim_{h\rightarrow 1}\frac{\tau^c_hf(x) - f(x)}{h-1} =
\lim_{h\rightarrow 1}\frac{h^cf(hx) - f(x)}{h-1} \quad (x>0),$$
and the Mellin differential operator of order $r \in \mathbb{N}$ is defined iteratively by $\Theta^r_c:= \Theta_c^1(\Theta_c^{(r-1)});$ see \cite{BJ1}.
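For instance, for the power function $f(x) = x^a$ with $a \in \mathbb{R}$ (used here only as a pointwise illustration, since $x^a$ does not belong to the spaces $X^p_c$), one finds
$$(\Theta^1_c f)(x) = \lim_{h\rightarrow 1}\frac{h^{a+c}-1}{h-1}\,x^a = (a+c)\,x^a,$$
in accordance with the identity $(\Theta^1_cf)(x) = xf'(x) + cf(x),$ valid at every point where $f$ is differentiable.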
We have the following
\begin{Theorem}\label{derivative} Let $f\in G_c^q.$ If $v^r\varphi (v)$ belongs to $L^1(\mathbb{R})$ as a function of $v$ for some $r \in \mathbb{N},$
then $f$ has Mellin derivatives up to order $r$ in $C_0(\mathbb{R}^+)$ and
$$(\Theta^k_cf)(x) = \frac{(-i)^k}{2 \pi}\int_{-\infty}^{+\infty} v^k\varphi (v)x^{-c-iv}dv \quad \quad(k=0,1,\ldots, r).$$
\end{Theorem}
{\bf Proof.} Suppose that $r=1.$ For $h \neq 1$ we have
\begin{eqnarray*}
\frac{h^cf(hx) - f(x)}{h-1} &=& \frac{1}{2 \pi}\frac{1}{h-1}\bigg(\int_{-\infty}^{+\infty} \varphi(v) h^{-iv}x^{-c-iv}dv -
\int_{-\infty}^{+\infty} \varphi(v) x^{-c-iv}dv\bigg) \\[1ex]
&=& \frac{x^{-c}}{2 \pi}\int_{-\infty}^{+\infty} \varphi(v) x^{-iv}\frac{h^{-iv} - 1}{h - 1}dv.
\end{eqnarray*}
Now
\begin{eqnarray*}
\bigg|\frac{h^{-iv} - 1}{h - 1}\bigg|=
\frac{2}{|h-1|}\bigg|\frac{e^{-\frac{iv \log h}{2}}-e^{\frac{iv \log h}{2}}}{2i}\bigg|=
2 \bigg|\frac{\sin (\frac{v \log h}{2})}{h-1}\bigg| \leq |v|.
\end{eqnarray*}
Since
$$\lim_{h \rightarrow 1} \frac{h^{-iv} - 1}{h - 1} = -iv$$
and $v\varphi(v)$ is absolutely integrable, Lebesgue's theorem on dominated convergence gives
$$(\Theta^1_c f)(x) = \frac{-i}{2\pi}\int_{-\infty}^{+\infty} v\varphi(v) x^{-c-iv}dv.$$
Moreover, using the Mellin inversion theorem (see \cite[Lemma~4, p.~349]{BJ1}),
we have $\Theta^1_cf \in C_0(\mathbb{R}^+).$
The proof for general $r$ follows by mathematical induction.
$\Box$
\vskip0,3cm
Using Theorems \ref{representation1} and \ref{derivative}, we obtain immediately the distance of $\Theta^k_cf$ from the Bernstein space $B_{c,\sigma}^p$.
\vskip0,4cm
\noindent
\newtheorem{Corollary}{Corollary}
\begin{Corollary}\label{cor1}
Let $f \in G^q_c$ with $v^k\varphi \in L^1(\mathbb{R}) \cap L^q(\mathbb{R}).$ Then for every $p \in [1,2]$, we have
$$\mbox{\rm dist}_q(\Theta^k_cf,B_{c,\sigma}^p) = \left(\int_{|v| \geq \sigma}|v^k\varphi (v)|^qdv\right)^{1/q} \quad \quad (1\leq q<\infty),$$
and if $\varphi$ is continuous, then
$$\mbox{\rm dist}_\infty(\Theta^k_cf,B_{c,\sigma}^p) = \sup_{|v| \geq \sigma}|v^k\varphi(v)|.$$
\end{Corollary}
\vskip0,4cm
\section{Estimation of the Mellin distance}
In this section we will introduce certain basic ``intermediate'' function spaces between the spaces $B^p_{c,\sigma}$ and the space $G_c^q.$
We will consider mainly the cases $p=1$ and $p=2.$
In the following for $p\in [1,2],$ we will denote by $\mathcal{M}_c^p$ the space comprising all functions $f \in X_c^p \cap C(\mathbb{R}^+)$ such that
$[f]^\wedge_{M_c} \in L^1(\{c\}\times i \mathbb{R}).$ This space is contained in $G_c^q$ for suitable values of $q,$ namely for
$q \in [1, p']$ with $p'$ being the conjugate exponent of $p.$ As for the classes $B^p_{c,\sigma},$ we have again the inclusion $\mathcal{M}^p_c \subset \mathcal{M}^q_c$ for $1 \leq p<q\leq 2.$
We begin with the definitions of differences of integer order and an appropriate modulus of smoothness.
For a function $f \in X^p_c,$ $r \in \mathbb{N}$ and $h >0$, we define
$$(\Delta_h^{r,c}f)(u) := \sum_{j=0}^r (-1)^{r-j}\binom{r}{j} f(h^ju)h^{jc},$$
and for $\delta >0,$
$$\omega_{r}(f, \delta, X^p_c) := \sup_{|\log h|\leq \delta}\|\Delta_h^{r,c}f\|_{X^p_c}.$$
In particular for $p=1,$
$$\omega_{r}(f, \delta, X^1_c) := \sup_{|\log h|\leq \delta}\|\Delta_h^{r,c}f\|_{X^1_c}.$$
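The quantities just defined are straightforward to approximate numerically. The following minimal Python sketch (the test function, the choices $c=0.5$, $r=2$, and the finite grid used to approximate the supremum are ours) computes $\Delta_h^{r,c}f$ and an approximation of $\omega_r(f,\delta,X^1_c)$, with the $X^1_c$ norm evaluated after the substitution $u=e^t$:
\begin{verbatim}
import numpy as np
from math import comb
from scipy.integrate import quad

c, r = 0.5, 2
f = lambda u: np.exp(-np.log(u) ** 2)        # test function on (0, infinity)

def delta_rc(h, u):
    # (Delta_h^{r,c} f)(u) = sum_{j=0}^r (-1)**(r-j) * C(r,j) * f(h**j * u) * h**(j*c)
    return sum((-1) ** (r - j) * comb(r, j) * f(h ** j * u) * h ** (j * c)
               for j in range(r + 1))

def x1c_norm(g):
    # ||g||_{X^1_c} = int_0^inf |g(u)| u**(c-1) du = int_R |g(e**t)| e**(c t) dt
    return quad(lambda t: abs(g(np.exp(t))) * np.exp(c * t), -20, 20)[0]

def omega_r(delta):
    # sup over |log h| <= delta, approximated on a finite grid of h values
    hs = np.exp(np.linspace(-delta, delta, 41))
    return max(x1c_norm(lambda u, h=h: delta_rc(h, u)) for h in hs)

for d in (0.4, 0.2, 0.1, 0.05):
    print(d, omega_r(d))   # for this smooth f, omega_2 decays roughly like d**2
\end{verbatim}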
Among the basic properties of the above modulus of smoothness $\omega_{r}$ we list the following three:
\begin{enumerate}
\item $\omega_r(f, \cdot, X^p_c)$ is a non-decreasing function on $\mathbb{R}^+;$
\item $\omega_r(f, \delta, X^p_c) \leq 2^r\|f\|_{X^p_c}$;
\item for any positive $\lambda$ and $\delta$, one has
$$\omega_r(f, \lambda \delta, X^p_c) \leq (1+\lambda)^r\omega_r(f,\delta,X^p_c).$$
\end{enumerate}
We know that for functions $f \in X^1_c$ or $f \in X^2_c$ one has (see \cite{BJ1}, \cite{BJ0}, \cite{BBM})
\begin{eqnarray}\label{eq_diff_mellin}
[\Delta_h^{r,c}f]^\wedge_{M_c}(c+iv) = (h^{-iv} - 1)^r [f]^\wedge_{M_c}(c+iv) \qquad (v \in \mathbb{R}).
\end{eqnarray}
We have the following
\begin{Theorem} \label{estimate1}
If $f \in \mathcal {M}^1_c,$ then for any $q \in [1, \infty]$,
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^1_{c, \sigma}) \leq D \cdot \left\{\begin{array}{ll}
\displaystyle\left\{\int_{\sigma}^\infty [\omega_r(f, v^{-1}, X^1_c)]^qdv\right\}^{1/q} &\quad (q< \infty),\\[2ex]
\omega_r(f, \sigma^{-1}, X^1_c) &\quad (q=\infty),
\end{array} \right.
\end{eqnarray*}
where $D$ is a constant depending on $r$ and $q$ only.
\end{Theorem}
{\bf Proof}. From (\ref{eq_diff_mellin}), setting $h= e^{\pi/v}$, we have
$$[\Delta_{h}^{r,c}f]^\wedge_{M_c}(c+iv) = (-2)^r [f]^\wedge_{M_c}(c+iv)$$
or
$$[f]^\wedge_{M_c}(c+iv) = \frac{1}{(-2)^r}\int_0^\infty (\Delta^{r,c}_{h}f)(u) u^{c+iv-1}du \quad \quad (h= e^{\pi/v})$$
and so
$$|[f]^\wedge_{M_c}(c+iv)| \leq \frac{1}{2^r}\int_0^\infty |(\Delta^{r,c}_{h}f)(u)|u^{c-1}du \leq \frac{1}{2^r}\omega_r(f, \frac{\pi}{|v|}, X^1_c).$$
Now using the properties of the modulus $\omega_r,$ we find that
$$\omega_r(f, \frac{\pi}{|v|}, X^1_c) \leq (1+\pi)^r \omega_r(f, \frac{1}{|v|}, X^1_c).$$
Thus
$$|[f]^\wedge_{M_c}(c+iv)| \leq \left(\frac{1+\pi}{2}\right)^r \omega_r(f, \frac{1}{|v|}, X^1_c).$$
In view of Theorem~\ref{representation1}, this implies the assertion for $q<\infty$.
The case $q=\infty$ is obtained analogously.
$\Box$
\vskip0,3cm
\begin{Theorem} \label{estimate2}
If $f \in \mathcal{M}^2_c$, then for any $q\in [1,2],$
$$\mbox{\rm dist}_q(f, B^2_{c,\sigma}) \leq D \left\{\int_{\sigma}^\infty [v^{-q/2}\omega_r(f, v^{-1}, X^2_c)]^qdv\right\}^{1/q},$$
where $D$ is a constant depending on $r$ and $q$ only.
\end{Theorem}
{\bf Proof}. First we consider the case $q=2.$ Then
$$|[\Delta_h^{r,c}f]^\wedge_{M_c}(c+iv)| = 2^r |\sin ((v \log h)/2)|^r |[f]^\wedge_{M_c}(c+iv)|$$
and (see \cite[Lemma 2.6]{BJ2})
$$\|[\Delta_h^{r,c}f]^\wedge_{M_c}\|_{L^2(\{c\}\times i \mathbb{R})} = \sqrt{2\pi}\|\Delta_h^{r,c}f\|_{X^2_c} \leq \sqrt{2\pi}\omega_r(f, |\log h|, X^2_c).$$
Now let $h\geq 1.$ For $v \in [(2\log h)^{-1}, (\log h)^{-1}]$, one has
$$\sin ((v\log h)/2) \geq \frac{1}{2\pi}\,,$$
and hence
$$\int_{1/(2\log h)}^{1/\log h} |[f]^\wedge_{M_c}(c+iv)|^2 dv \leq
(2\pi)^{2r}\int_0^\infty |\sin ((v\log h)/2)|^{2r} |[f]^\wedge_{M_c}(c+iv)|^2dv.$$
Analogously we have
$$\int_{-1/\log h}^{-1/(2\log h)} |[f]^\wedge_{M_c}(c+iv)|^2 dv \leq
(2\pi)^{2r}\int_{-\infty}^0 |\sin ((v\log h)/2)|^{2r} |[f]^\wedge_{M_c}(c+iv)|^2dv,$$
and so
\begin{eqnarray*}
\lefteqn{\int_{1/(2\log h) \leq|v| \leq 1/\log h}|[f]^\wedge_{M_c}(c+iv)|^2 dv} \qquad\qquad\quad\\
&\leq &
(2\pi)^{2r}\int_{-\infty}^\infty |\sin ((v\log h)/2)|^{2r} |[f]^\wedge_{M_c}(c+iv)|^2dv \\
&\leq& \pi^{2r}[\omega_r(f, \log h, X^2_c)]^2.
\end{eqnarray*}
Now, let $\sigma >0$ be fixed, set $\sigma_k:= \sigma 2^k$ with $k \in \mathbb{N}_0$, and define $h$ by $\log h = 1/\sigma_{k+1}.$ Then
$$\int_{\sigma_k \leq |v| \leq \sigma_{k+1}}|[f]^\wedge_{M_c}(c+iv)|^2dv \leq
\pi^{2r}[\omega_r(f, \sigma_{k+1}^{-1}, X^2_c)]^2,$$
and so summation over $k$ yields
$$\int_{|v| \geq \sigma}|[f]^\wedge_{M_c}(c+iv)|^2dv \leq
\pi^{2r} \sum_{k=0}^\infty [\omega_r(f, \sigma_{k+1}^{-1}, X^2_c)]^2.$$
Now since $\sigma_{k+1}- \sigma_k = \sigma_k,$ from the monotonicity of $\omega_r$ as a function of $\delta,$ one has
$$\int_{\sigma_k}^{\sigma_{k+1}}v^{-1}[\omega_r(f, v^{-1}, X^2_c)]^2dv \geq
\frac{\sigma_k}{\sigma_{k+1}}[\omega_r(f, \sigma^{-1}_{k+1}, X^2_c)]^2,$$
from which we deduce
$$\sum_{k=0}^\infty [\omega_r(f, \sigma^{-1}_{k+1}, X^2_c)]^2 \leq
2\int_\sigma^\infty v^{-1}[\omega_r(f, v^{-1}, X^2_c)]^2dv.$$
This gives the assertion for $q=2.$ For $q\in [1, 2{[}$ one can proceed as in the proof of Proposition 13 in \cite{BSS2}, using H\"{o}lder's
inequality.
$\Box$
\subsection{Mellin-Lipschitz spaces}
For $\alpha \in {]}0,r]$ we define the Lipschitz class by
$$\mbox{Lip}_r(\alpha, X^p_c) := \{f \in X^p_c: \omega_r(f,\delta,X^p_c) = {\cal O}(\delta^\alpha), \ \delta \rightarrow 0^+\}.$$
\noindent
As a consequence of Theorems \ref{estimate1} and \ref{estimate2}, we obtain the following corollary which determines the Mellin distance of a function
$f \in \mbox{Lip}_r(\alpha, X^p_c)$ from $B^p_{c,\sigma}$ for $p=1,2.$
\begin{Corollary}\label{lip}
If $f \in \mbox{\rm Lip}_r(\alpha, X^1_c\cap C(\mathbb{R}^+))$ for some $r\in \mathbb{N},~ r \geq 2$ and $1<\alpha \leq r,$ then
$$\mbox{\rm dist}_1(f, B^{1}_{c, \sigma}) = {\cal O}(\sigma^{-\alpha +1})\quad \quad (\sigma \rightarrow +\infty).$$
Moreover, if $f \in \mathcal{M}^2_c \cap \mbox{\rm Lip}_r(\beta, X^2_c)$ with $r \in \mathbb{N},$ $q^{-1} - 2^{-1} < \beta \leq r,$ then
$$\mbox{\rm dist}_q (f, B^2_{c, \sigma}) = {\cal O}(\sigma^{-\beta - 1/2 + 1/q})\quad \quad (\sigma \rightarrow +\infty).$$
\end{Corollary}
The proof follows immediately from Theorems \ref{estimate1} and \ref{estimate2}. Note that, if $f \in \mbox{Lip}_r(\alpha, X^1_c\cap C(\mathbb{R}^+)),$
then from the proof of Theorem \ref{estimate1} one has that $[f]^\wedge_{M_c} \in L^1(\{c\}\times i\mathbb{R});$
thus $f \in\mathcal{M}^1_c.$
For $q=2$ we obtain the estimate
$$\mbox{\rm dist}_2 (f, B^2_{c, \sigma}) = {\cal O}(\sigma^{-\beta})\quad \quad (\sigma \rightarrow +\infty).$$
\subsection{Mellin-Sobolev spaces}
Denote by $AC_{{\tt loc}}(\mathbb{R}^+)$ the space of all locally absolutely continuous functions on $\mathbb{R}^+$. The Mellin-Sobolev space
$W_c^{r,p}(\mathbb{R}^+)$ is defined as the space of all functions $f \in X^p_c$ which are equivalent to a function
$g \in C^{r-1}(\mathbb{R}^+)$ with
$g^{(r-1)}\in AC_{{\tt loc}}(\mathbb{R}^+)$ such that $\Theta^r_cg \in X^p_c$ (see \cite{BJ1}, \cite{BJ2}, \cite{BBM}).
For $p=1$ it is well known that for any $f \in W_c^{r,1}$ one has (see \cite{BJ1})
$$[\Theta^r_cf]^\wedge_{M_c}(c+iv) = (-iv)^r[f]^\wedge_{M_c}(c+iv)\quad \quad (v \in \mathbb{R}).$$
The same result also holds for $1< p\leq 2,$ taking into account the general convolution theorem for Mellin transforms
(\cite[Lemma~3.1]{BJ2} in case $p=2$, \cite[Lemma 2]{BKT}).
By the above result, the Mellin-Sobolev space $W_c^{r,p}(\mathbb{R}^+)$ can be characterized as
$$W_c^{r,p}(\mathbb{R}^+) = \{f \in X^p_c: (-iv)^r [f]^\wedge_{M_c}(c+iv) = [g]^\wedge_{M_c}(c+iv), ~g \in L^p(\{c\}\times i \mathbb{R})\}.$$
We have the following
\begin{Theorem}\label{sobolev1}
Let $f \in \mathcal{M}^1_c\cap W^{r,1}_c(\mathbb{R}^+).$ Then for $q \in [1,\infty]$ and $r >1/q,$
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^1_{c,\sigma}) \leq D\|\Theta^r_cf\|_{X^1_c} \cdot \left\{\begin{array}{ll} \sigma^{-r +1/q}, &\quad q<\infty,\\
\sigma^{-r}, &\quad q= \infty,
\end{array} \right.
\end{eqnarray*}
where $D$ is a constant depending on $r$ and $q$ only.
If, in addition, $v[f]^\wedge_{M_c}(c+iv) \in L^1(\mathbb{R})$, then for $r> 1 + 1/q,$
\begin{eqnarray*}
\mbox{\rm dist}_q(\Theta_c f, B^1_{c,\sigma}) \leq D'\|\Theta^r_cf\|_{X^1_c} \cdot \left\{\begin{array}{ll} \sigma^{-r +1+1/q}, & \quad q<\infty,\\
\sigma^{-r+1}, & \quad q= \infty,
\end{array} \right.
\end{eqnarray*}
where $D'$ is again a constant depending on $r$ and $q$ only.
\end{Theorem}
{\bf Proof}. First we consider the case $q<\infty.$
The formula for the Mellin transform of a Mellin derivative yields
$$[f]^\wedge_{M_c}(c+iv) = (-iv)^{-r}[\Theta^r_cf]^\wedge_{M_c}(c+iv) \qquad (v \in \mathbb{R}
\setminus \{0\}).$$
Thus, from Theorem \ref{representation1} we obtain
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^1_{c,\sigma}) = \left\{\int_{|v| \geq \sigma}|v^{-r}[\Theta^r_cf]^\wedge_{M_c}(c+iv)|^qdv\right\}^{1/q}.
\end{eqnarray*}
Since $\Theta^r_cf \in X^1_c,$ its Mellin transform is continuous and bounded on $\{c\} \times i\mathbb{R}$ (see \cite{BJ1}). Therefore
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^1_{c,\sigma}) &\leq& \|[\Theta^r_cf]^\wedge_{M_c}\|_{C(\{c\}\times i \mathbb{R})} \left\{2\int_{v \geq
\sigma}v^{-rq}dv\right\}^{1/q} \\
&\leq& \|\Theta^r_cf\|_{X^1_c}\bigg(\frac{2}{rq-1}\bigg)^{1/q} \frac{1}{\sigma^{r -1/q}},
\end{eqnarray*}
and hence the assertion for $q<\infty$ is proved with $D= (2/(rq-1))^{1/q}.$
For $q= \infty$ we use again Theorem \ref{representation1} and proceed analogously, obtaining $D=1.$ For the second part, note that under the assumptions
on $f$ and $v [f]^\wedge_{M_c}(c+iv) \in L^1(\{c\}\times i \mathbb{R})$, we have $\Theta_cf \in \mathcal{M}^1_c \cap W^{r-1, 1}_c(\mathbb{R}^+).$
Therefore we can apply the first part of the proof to the function $\Theta_cf,$ obtaining immediately the assertion with the constant
$D'= (2/(rq-q-1))^{1/q}$ for $q< \infty$ and $D'= 1$ for $q = \infty.$
$\Box$
\vskip0,3cm
Note that if $f\in \mathcal{M}^1_c\cap W^{r,1}_c(\mathbb{R}^+)$ satisfies the further condition that
$[\Theta^r_cf]^\wedge_{M_c} \in L^q(\{c\} \times i\mathbb{R}),$ then one may write
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^1_{c,\sigma}) &=& \left\{\int_{|v| \geq \sigma}|v^{-r}[\Theta^r_cf]^\wedge_{M_c}(c+iv)|^qdv\right\}^{1/q}\\[2ex]
&\leq& \frac{1}{\sigma^r}\|[\Theta^r_cf]^\wedge_{M_c}\|_{L^q(\{c\} \times i\mathbb{R})}.
\end{eqnarray*}
Moreover, one has
$$\mbox{\rm dist}_q(f, B^1_{c,\sigma}) = \mathcal{O}(\sigma^{-r}) \qquad (\sigma \rightarrow +\infty).$$
\vskip0,4cm
For $p=2$ we have the following
\begin{Theorem}\label{sobolev2}
Let $f \in \mathcal{M}^2_c\cap W^{r,2}_c(\mathbb{R}^+).$ Then for $q \in [1,2]$,
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^2_{c,\sigma}) \leq D\|\Theta^r_cf\|_{X^2_c}~ \sigma^{-r -1/2 + 1/q},
\end{eqnarray*}
where $D$ is a constant depending on $r$ and $q$ only.
If, in addition, $v[f]^\wedge_{M_c}(c+iv) \in L^1(\mathbb{R})$, then for $r> 1 + 1/2 +1/q,$
\begin{eqnarray*}
\mbox{\rm dist}_q(\Theta_c f, B^2_{c,\sigma}) \leq D'\|\Theta^r_cf\|_{X^2_c}~ \sigma^{-r +1/2+1/q},
\end{eqnarray*}
where $D'$ is again a constant depending on $r$ and $q$ only.
\end{Theorem}
{\bf Proof}. As in the previous theorem, by the formula for the Mellin transform of Mellin derivatives in $X^2_c$ we have
$$\mbox{\rm dist}_q(f, B^2_{c,\sigma}) = \left\{\int_{|v| \geq \sigma}|v^{-r}[\Theta^r_cf]^\wedge_{M_c}(c+iv)|^qdv\right\}^{1/q}.$$
For $q=2,$ using the property that the Mellin transform in $X^2_c$ is norm-preserving (see \cite[Lemma~2.6]{BJ2}), we have
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^2_{c,\sigma})&\leq& \frac{1}{\sigma^r}\left\{\int_{|v| \geq \sigma}|[\Theta^r_cf]^\wedge_{M_c}(c+iv)|^2dv\right\}^{1/2} \\[1ex]
&\leq& \frac{1}{\sigma^r}\|[\Theta^r_cf]^\wedge_{M_c}\|_{L^2(\{c\}\times i \mathbb{R})} = \sqrt{2\pi}\frac{1}{\sigma^r}\|\Theta^r_cf\|_{X^2_c}.
\end{eqnarray*}
Therefore the assertion follows for $q=2$ with the constant $D = (2\pi)^{1/2}.$ For $q\in [1,2{[}$ one can use H\"{o}lder's inequality with
$\mu = 2/(2-q),~\nu = 2/q$, obtaining
\begin{eqnarray*}
\mbox{\rm dist}_q(f, B^2_{c,\sigma})&\leq&
\left\{2 \int_{\sigma}^\infty v^{-rq\mu}dv\right\}^{1/(q\mu)}
\left\{\int_{|v|\geq \sigma}|[\Theta^r_cf]^\wedge_{M_c}(c+iv)|^{q\nu}dv\right\}^{1/(q\nu)}\\[1ex] &\leq&
\frac{1}{\sigma^{r+1/2 -1/q}}\left\{\frac{4-2q}{(2r+1)q - 2}\right\}^{1/q - 1/2}
\|[\Theta^r_cf]^\wedge_{M_c}\|_{L^2(\{c\}\times i \mathbb{R})} \\[1ex]
&=& \sqrt{2\pi}
\frac{1}{\sigma^{r+1/2 -1/q}}\left\{\frac{4-2q}{(2r+1)q - 2}\right\}^{1/q - 1/2} \|\Theta^r_cf\|_{X^2_c}.
\end{eqnarray*}
Thus the first inequality holds with
$$D= \sqrt{2\pi}\left\{\frac{4-2q}{(2r+1)q - 2}\right\}^{1/q - 1/2}.$$
The second inequality follows by arguments similar to those in the proof of Theorem~\ref{sobolev1}.
$\Box$
\vskip0,4cm
As a consequence, under the assumptions of the first part of Theorem \ref{sobolev2}, we can obtain an asymptotic estimate of the form
$$\mbox{\rm dist}_q(f, B^2_{c,\sigma}) = \mathcal{O}(\sigma^{-r-1/2+1/q}) \qquad (\sigma \rightarrow +\infty).$$
\section{Applications}
In this section we will illustrate applications to various basic formulae such as the approximate exponential sampling theorem, the approximate
reproducing kernel formula in the Mellin frame (see \cite{BJ3}, \cite{BBM0}), a generalized Boas differentiation formula and an extension of a
Bernstein-type inequality.
In the following for $c \in \mathbb{R}$, we denote by $\mbox{lin}_c$ the function
$$\mbox{lin}_c(x) := \frac{x^{-c}}{2\pi i}\frac{x^{\pi i} -x^{-\pi i}}{\log x} = \frac{x^{-c}}{2\pi}\int_{-\pi}^\pi x^{-it}dt \qquad (x>0, \, x\neq 1)$$
with the continuous extension $\mbox{lin}_c(1) = 1.$ Thus
$$\mbox{lin}_c(x) = x^{-c}\mbox{sinc}(\log x) \qquad (x>0).$$
Here, as usual, the ``sinc'' function is defined as
$$\mbox{sinc}(t):= \frac{\sin (\pi t)}{\pi t}~ \mbox{for}~t\neq 0,\qquad \mbox{sinc}(0) = 1.$$
It is clear that $\mbox{lin}_c \not \in X_{\overline{c}}$ for any $\overline{c}.$ However, it belongs to the space $X^2_c$ and its Mellin transform in
$X^2_c$-sense is given by
$$[\mbox{\rm lin}_c]^\wedge_{M_c}(c+iv) = \chi_{[-\pi, \pi]},$$
where $\chi_A$ denotes the characteristic function of the set $A.$
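The two representations of $\mbox{lin}_c$ given above can be compared numerically; the following minimal Python sketch (the value of $c$ and the test points are illustrative) evaluates $x^{-c}\,\mbox{sinc}(\log x)$ against the integral $\frac{x^{-c}}{2\pi}\int_{-\pi}^{\pi}x^{-it}\,dt$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c = 0.5

def lin_c(x):
    # lin_c(x) = x**(-c) * sinc(log x); np.sinc is the normalized sinc sin(pi t)/(pi t)
    return x ** (-c) * np.sinc(np.log(x))

def lin_c_integral(x):
    # (x**(-c) / (2 pi)) * integral_{-pi}^{pi} x**(-i t) dt; the imaginary part vanishes
    re = quad(lambda t: np.cos(t * np.log(x)), -np.pi, np.pi)[0]
    return x ** (-c) * re / (2.0 * np.pi)

for x in (0.3, 1.0, 2.5, 7.0):
    print(x, lin_c(x), lin_c_integral(x))   # the two representations agree
\end{verbatim}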
\subsection{Approximate exponential sampling formula}
For a function $f \in B^2_{c,\pi T}$ the following exponential sampling formula holds (see \cite{BJ3}, \cite{BJ2}):
$$f(x) = \sum_{k \in \mathbb{Z}}f(e^{k/T})\mbox{\rm lin}_{c/T}(e^{-k}x^T) \qquad (x>0).$$
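The formula can be illustrated numerically. In the following minimal Python sketch the Mellin band-limited test function $f(x)=x^{-c}\,\mbox{sinc}^2\big(\tfrac{T}{2}\log x\big)$ (a Fej\'er-type profile, chosen by us only for illustration) is reconstructed from its samples $f(e^{k/T})$; the series is truncated to $|k|\leq K$, which introduces a small truncation error:
\begin{verbatim}
import numpy as np

c, T = 0.5, 2.0

def lin(gamma, logy):
    # lin_gamma(y) = y**(-gamma) * sinc(log y), evaluated from log(y) for stability
    return np.exp(-gamma * logy) * np.sinc(logy)

# test function; its Mellin transform is supported in [-pi*T, pi*T]
f = lambda x: x ** (-c) * np.sinc(0.5 * T * np.log(x)) ** 2

def reconstruct(x, K=500):
    # f(x) ~ sum_k f(e**(k/T)) * lin_{c/T}(e**(-k) * x**T), truncated to |k| <= K
    k = np.arange(-K, K + 1)
    samples = np.exp(-k * c / T) * np.sinc(k / 2.0) ** 2    # = f(e**(k/T))
    return np.sum(samples * lin(c / T, T * np.log(x) - k))

for x in (0.4, 1.3, 3.0):
    print(x, f(x), reconstruct(x))   # agreement up to the truncation error
\end{verbatim}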
As an approximate version in the space $\mathcal{M}^2_c$ we have (see \cite[Theorem~5.5]{BJ2}):
\newtheorem{Proposition}{Proposition}
\begin{Proposition}\label{approsamp}
Let $f \in \mathcal{M}_c^2.$
Then there holds the error estimate
\begin{eqnarray*}
\lefteqn{\bigg|f(x) - \sum_{k=-\infty}^{\infty} f(e^{k/T})\mbox{\rm lin}_{c/T}(e^{-k}x^T)\bigg|}\qquad\qquad\quad\\[2ex]
&\leq& \frac{x^{-c}}{\pi}\int_{|t| > \pi T}| [f]^\wedge_{M_c}(c+it)| dt \qquad(x \in \mathbb{R}^+,~T >0).
\end{eqnarray*}
\end{Proposition}
This estimate can now be given a ``metric interpretation''.
By Theorem \ref{representation1}, the right-hand side may be expressed as
$$\frac{x^{-c}}{\pi}\mbox{\rm dist}_1(f,B^2_{c, \pi T}).$$
Hence, introducing a remainder $(R_{\pi T} f)(x)$ by writing
\begin{equation}\label{approx_samp}
f(x)\,=\, \sum_{k\in\mathbb{Z}} f\left(e^{k/T}\right) \mbox{lin}_{c/T}\left(e^{-k}x^T\right)
+ \left(R_{\pi T}f\right)(x),
\end{equation}
we have by Proposition~\ref{approsamp}
$$
|\left(R_{\pi T} f\right)(x)|\,\leq\, \frac{x^{-c}}{\pi}\,\mbox{dist}_1(f, B_{c,\pi
T}^2) \qquad (x>0),$$
or equivalently,
\begin{eqnarray}\label{exp_samp}
\|R_{\pi T} f\|_{X_c^\infty}\,\leq\, \frac{1}{\pi}\,\mbox{dist}_1(f, B_{c,\pi T}^2).
\end{eqnarray}
This relation is a trivial equality when $f\in B_{c, \pi T}^2$. But
equality can also occur when $f\not\in B_{c,\pi T}^2.$
Indeed, consider the function
$$f(x)\,:=\, x^{-c} \mbox{sinc} (2T\log x -1).$$
By a straightforward calculation, we find that
$$ [f]_{M_c}^\wedge(c+iv)\,=\,
\frac{e^{iv/(2T)}}{2T}\,\mbox{rect} \left(\frac{v}{2T}\right),$$
where rect denotes the rectangle function defined by
\begin{eqnarray*}
\mbox{rect}(x)\,:=\left\{
\begin{array}{ccc}
1 & \hbox{ if } & |x|<\pi,\\
\frac{1}{2} & \hbox{ if } &|x|=\pi,\\
0 & \hbox{ if } & |x|>\pi.
\end{array}
\right.
\end{eqnarray*}
Thus, $f\not\in B_{c,\pi T}^2$ and
$$\mbox{dist}_1(f, B_{c,\pi T}^2)\,=\, \int_{|v|\geq\pi T}
|[f]_{M_c}^\wedge(c+iv)| dv\,=\, \frac{1}{2T} \int_{|v|\geq \pi T}
\mbox{rect} \left(\frac{v}{2T}\right) dv\,=\, \pi.$$
Furthermore,
$$f(e^{k/T})\,=\, e^{-kc/T} \mbox{sinc} (2k-1)\,=0$$
for all $k\in\mathbb{Z}$. Therefore $(R_{\pi T}f)(x)=f(x)$, which shows that
$$\|R_{\pi T}f\|_{X_c^\infty}\,=\, \sup_{x>0} |\mbox{sinc} (2T\log x
-1)|\,=\,1,$$
and so equality occurs in (\ref{exp_samp}).
\vskip0,3cm
Now, employing Theorem \ref{representation1}, Corollary \ref{lip} and the results on Mellin-Sobolev spaces, one has the following theorem.
\begin{Theorem}\label{expestimates}
For the remainder of the approximate exponential sampling formula (\ref{approx_samp}), the following asymptotic estimates hold:
\begin{enumerate}
\item If $f\in \mbox{\rm Lip}_r(\alpha, X^1_c\cap C(\mathbb{R}^+)),$ $r\in \mathbb{N},\,r\geq 2,\,1<\alpha \leq r,$ then
$$\|(R_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-\alpha +1})\qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^2_c \cap \mbox{\rm Lip}_r(\beta, X^2_c), \,r\in \mathbb{N}, \,1/2 < \beta \leq r,$ then
$$\|(R_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-\beta +1/2})\qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^1_c\cap W^{r,1}_c(\mathbb{R}^+), \, r>1,$ then
$$\|(R_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-r+1}) \qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^2_c \cap W^{r,2}_c(\mathbb{R}^+), \, r>1/2,$ then
$$\|(R_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-r+1/2}) \qquad (T \rightarrow +\infty).$$
\end{enumerate}
\end{Theorem}
\subsection{Approximate Mellin reproducing kernel formula}
Another interesting formula is the ``Mellin reproducing kernel formula'' for Mellin band-limited functions
$f \in B^2_{c, \pi T}$. It reads as (see \cite[Theorems 4 and 5]{BBM0})
$$f(x) = T\int_0^\infty f(y)\mbox{\rm lin}_{c/T}\bigg(\left(\frac{x}{y}\right)^T\bigg)\frac{dy}{y} \qquad (x>0).$$
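As with the exponential sampling formula, this identity can be checked numerically. The following minimal Python sketch (with the same illustrative band-limited test function as before; the substitution $y=e^{t}$ and the finite integration range are ours) evaluates the right-hand side by quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c, T = 0.5, 2.0
f = lambda x: x ** (-c) * np.sinc(0.5 * T * np.log(x)) ** 2   # band-limited test function

def lin(gamma, logy):
    # lin_gamma evaluated from the logarithm of its argument
    return np.exp(-gamma * logy) * np.sinc(logy)

def reproduce(x):
    # T * int_0^inf f(y) lin_{c/T}((x/y)**T) dy/y, after the substitution y = e**t
    integrand = lambda t: f(np.exp(t)) * lin(c / T, T * (np.log(x) - t))
    val, _ = quad(integrand, -40.0, 40.0, limit=400)
    return T * val

for x in (0.5, 1.7, 4.0):
    print(x, f(x), reproduce(x))   # agreement up to quadrature and truncation error
\end{verbatim}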
An approximate version was established in \cite[Theorem 6]{BBM0} for functions in the class $\mathcal{M}^1_c.$ In the same way we can state a version in $\mathcal{M}^2_c,$ as follows
\begin{Proposition}\label{amrkf}
Let $f\in \mathcal{M}_c^2.$
Then for $x \in \mathbb{R}^+,$ and $T>0,$ there holds
\begin{equation}\label{rep_kernel}
f(x) = T\int_0^\infty f(y)\mbox{\rm lin}_{c/T}\bigg(\left(\frac{x}{y}\right)^T\bigg)\frac{dy}{y} + (R^\ast_{\pi T}f)(x),
\end{equation}
where
$$(R^\ast_{\pi T}f)(x) := \frac{x^{-c}}{2 \pi}\int_{|t|\geq \pi T} [f]^\wedge_{M_c}(c+it) x^{-it} dt.$$
Furthermore, we have the error estimate
$$|(R^\ast_{\pi T}f)(x)| \leq \frac{x^{-c}}{2 \pi}\int_{|t|\geq \pi T} |[f]^\wedge_{M_c}(c+it)|dt.$$
\end{Proposition}
{\bf Proof}. The proof is essentially the same as in \cite[Theorem 6]{BBM0}; for the sake of completeness we give some details. First, note that the convolution integral in (\ref{rep_kernel}) exists, by a H\"{o}lder-type inequality. Putting $G(x) := \mbox{lin}_{c/T}(x^T),$ its $X^2_c$-Mellin transform is given by $[G]^\wedge_{M_c} = T^{-1}\chi_{[-\pi T, \pi T]}.$ Since $[f]^\wedge_{M_c}\in L^1(\{c\}\times i \mathbb{R}),$ using the Mellin inversion formula and Fubini's theorem, one obtains, by the same proof as in the $X^1_c$ case (see \cite[Theorem 9]{BJ1}), an $X^2_c$-extension of the Mellin--Parseval formula for convolutions, namely
$$T\int_0^\infty f(y)\mbox{\rm lin}_{c/T}\bigg(\left(\frac{x}{y}\right)^T\bigg)\frac{dy}{y} = \frac{x^{-c}}{2\pi}\int_{-\infty}^{+\infty}[f]^\wedge_{M_c}(c+it) [G]^\wedge_{M_c}(c+it)x^{-it}dt.$$
Therefore, by the Mellin inversion formula we have
\begin{eqnarray*}
&&T\int_0^\infty f(y)\mbox{\rm lin}_{c/T}\bigg(\left(\frac{x}{y}\right)^T\bigg)\frac{dy}{y} = \frac{x^{-c}}{2\pi}\int_{-\pi T}^{\pi T}[f]^\wedge_{M_c}(c+it) x^{-it}dt \\
&=&f(x) - \frac{x^{-c}}{2\pi} \int_{|t| \geq \pi T}[f]^\wedge_{M_c}(c+it) x^{-it}dt
\end{eqnarray*}
which is the assertion.
$\Box$
\vskip0,3cm
As before, employing Theorem \ref{representation1}, one can express the error estimate in terms of the distance, i.e.,
$$|(R^\ast_{\pi T}f)(x)|\leq \frac{x^{-c}}{2\pi}\mbox{\rm dist}_1(f,B^2_{c, \pi T})\qquad (x>0),$$
or equivalently,
\begin{equation}\label{ak_est}
\|R^\ast_{\pi T} f\|_{X_c^\infty}\,\leq\, \frac{1}{2\pi}\,\mbox{dist}_1(f, B_{c,\pi T}^2).
\end{equation}
This is again a sharp inequality. Indeed, consider $f(x):=x^{-c} \hbox{sinc}(2T\log x)$. Then $f$ satisfies the hypotheses of
Proposition~\ref{amrkf}. By a calculation we find that
$\hbox{dist}_1(f, B^2_{c,\pi T})=\pi$ and $\|R^\ast_{\pi T} f\|_{X_c^\infty}=1/2.$ Hence equality occurs in (\ref{ak_est}).
Using the estimates of the distance functional in Mellin-Lipschitz and Mellin-Sobolev spaces, we obtain the following results.
\begin{Theorem}\label{rkfest}
For the remainder of the approximate Mellin reproducing kernel formula (\ref{rep_kernel}), the following asymptotic estimates hold:
\begin{enumerate}
\item If $f\in \mbox{\rm Lip}_r(\alpha, X^1_c\cap C(\mathbb{R}^+)),$ $r\in \mathbb{N},\,r\geq 2,\,1<\alpha \leq r,$ then
$$\|(R^\ast_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-\alpha +1})\qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^2_c \cap \mbox{\rm Lip}_r(\beta, X^2_c), \,r\in \mathbb{N}, \,1/2 < \beta \leq r,$ then
$$\|(R^\ast_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-\beta +1/2})\qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^1_c\cap W^{r,1}_c(\mathbb{R}^+), \, r>1,$ then
$$\|(R^\ast_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-r+1}) \qquad (T \rightarrow +\infty).$$
\item If $f \in \mathcal{M}^2_c \cap W^{r,2}_c(\mathbb{R}^+), \, r>1/2,$ then
$$\|(R^\ast_{\pi T}f)\|_{X^\infty_c} = \mathcal{O}(T^{-r+1/2}) \qquad (T \rightarrow +\infty).$$
\end{enumerate}
\end{Theorem}
\subsection{A sampling formula for Mellin derivatives}
In the context of Fourier analysis the following differentiation formula
has been considered:
\begin{eqnarray}\label{diff1}
f'(x)\,=\, \frac{4T}{\pi} \sum_{k\in\mathbb{Z}} \frac{(-1)^{k+1}}{(2k-1)^2}
f\left(x+ \frac{2k-1}{2T}\right) \qquad (x\in\mathbb{R}).
\end{eqnarray}
It holds for all entire functions of exponential type $\pi T$ which are
bounded on the real line. In particular, it holds for trigonometric
polynomials of degree at most $\lfloor \pi T\rfloor$, where $\lfloor \pi T\rfloor$ denotes the integral part of $\pi T,$ and in this case the
series on the right-hand side can be reduced to a finite sum.
The formula for trigonometric polynomials was discovered by Marcel
Riesz \cite{RIE} in 1914. Its generalization (\ref{diff1}) is due to
Boas \cite{BOA}. Some authors refer to (\ref{diff1}) as the {\it generalized
Riesz interpolation formula}, others name it after Boas.
Formula (\ref{diff1}) has several interesting applications. It provides a
very short proof of Bernstein's inequality in $L^p(\mathbb{R})$ for all $p\in [1,
\infty]$. Modified by introducing a Gaussian multiplier, it leads to a
stable algorithm of high precision for numerical differentiation (see
\cite{SCH2}). Furthermore, it has been extended to higher order
derivatives (see \cite{BSS1}, \cite{SCH2}).
The following theorem gives an analogue of (\ref{diff1}) for Mellin
derivatives.
\begin{Theorem}\label{d_thm1}
For $f\in B_{c,\pi T}^\infty$ there holds
\begin{eqnarray}\label{d_thm1.1}
\Theta_cf(x)\,=\, \frac{4T}{\pi} \sum_{k\in\mathbb{Z}} \frac{(-1)^{k+1}}{(2k-1)^2}
\,e^{(k-1/2)c/T} f\left(x e^{(k-1/2)/T}\right)\qquad (x\in\mathbb{R}^+).
\end{eqnarray}
\end{Theorem}
{\bf Proof}. \,
Formula (\ref{d_thm1.1}) could be deduced from (\ref{diff1}) by making use of
the relationship between the Fourier transform and the Mellin transform.
In the following we give an independent proof completely within Mellin
analysis.
First assume that, in addition,
\begin{eqnarray} \label{d_thm1p1}
x^c f(x)\,=\, \mathcal{O}\left(\frac{1}{|\log x|}\right)
\qquad (x\rightarrow 0_+ \hbox{ and } x\rightarrow \infty).
\end{eqnarray}
Then the exponential sampling formula applies to $f$ and yields
$$ f(x)\,=\, \sum_{k\in\mathbb{Z}} f\left(e^{k/T}\right) \mbox{lin}_{c/T}\left(e^{-k}x^T\right).$$
The series converges absolutely and uniformly on compact subsets of
$\mathbb{R}^+$. When we apply the differentiation operator
$\Theta_c$ with respect to $x$, we may interchange it with the summation on the
right-hand side. Thus,
\begin{eqnarray}\label{d_thm1p2}
\Theta_c f(x)\,=\, \sum_{k\in\mathbb{Z}} f\left(e^{k/T}\right) \Theta_c
\mbox{lin}_{c/T}\left(e^{-k}x^T\right).
\end{eqnarray}
By a calculation we find that
$$\Theta_c\mbox{lin}_{c/T}\left(e^{-k}x^T\right)\,=\, T\,
\frac{e^{kc/T}x^{-c} \cos(\log(e^{-\pi k}x^{\pi T})) -
\mbox{lin}_{c/T}(e^{-k}x^T)}{\log(e^{-k}x^T)}\,.$$
The complicated cosine term disappears at $x=e^{1/(2T)}$. Then
(\ref{d_thm1p2}) becomes
\begin{eqnarray}\label{d_thm1p3}
\Theta_cf(e^{1/(2T)})\,=\,\frac{4T}{\pi} \sum_{k\in\mathbb{Z}}
\frac{(-1)^{k+1}}{(2k-1)^2}\, e^{(k-1/2)c/T} f\left(e^{k/T}\right).
\end{eqnarray}
In order to obtain the $\Theta_c$ derivative of $f$ at $x$, we consider
the function
$g\,:\, t\mapsto f(xe^{-1/(2T)}t).$
It satisfies the assumptions used for deducing (\ref{d_thm1p3}). Now, applying
(\ref{d_thm1p3}) to $g$, we arrive at the desired formula (\ref{d_thm1.1}).
We still have to get rid of the additional assumption (\ref{d_thm1p1}).
If $f$ is any function in $B_{c,\pi T}^\infty$, then
$$ f_\varepsilon(x)\,:=\,f(x^{1-\varepsilon}) \,x^{c\varepsilon(T-1)} \mbox{lin}_c(x^{\varepsilon T})$$
belongs to $B_{c,\pi T}^\infty$ for each $\varepsilon \in (0, 1)$ and it satisfies
(\ref{d_thm1p1}). Applying (\ref{d_thm1.1}) to $f_\varepsilon$ and letting $\varepsilon \rightarrow 0_+$,
we find that (\ref{d_thm1.1}) holds for $f$ as well.
$\Box$
\vskip0,3cm
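The identity (\ref{d_thm1.1}) lends itself to a direct numerical check. The sketch below (Python; the Mellin band-limited test function, the truncation $|k|\le K$ of the series and the finite-difference step are our own illustrative choices) compares the truncated right-hand side with $\Theta_cf(x)=xf'(x)+cf(x)$, the derivative being approximated by a central difference:
\begin{verbatim}
import numpy as np

c, T = 0.5, 2.0
f = lambda x: x ** (-c) * np.sinc(0.5 * T * np.log(x)) ** 2   # band-limited test function

def theta_c_fd(x, eps=1e-6):
    # Theta_c f(x) = x f'(x) + c f(x), with f' approximated by a central difference
    return x * (f(x + eps) - f(x - eps)) / (2 * eps) + c * f(x)

def boas_mellin(x, K=400):
    # truncated right-hand side of the Mellin-Boas formula
    k = np.arange(-K, K + 1)
    sign = np.where(k % 2 == 0, -1.0, 1.0)            # (-1)**(k+1)
    coeff = sign / (2 * k - 1) ** 2
    return (4 * T / np.pi) * np.sum(
        coeff * np.exp((k - 0.5) * c / T) * f(x * np.exp((k - 0.5) / T)))

for x in (0.6, 1.4, 2.2):
    print(x, theta_c_fd(x), boas_mellin(x))   # agreement up to truncation/step error
\end{verbatim}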
We note that formula (\ref{d_thm1.1}) yields a very short proof for a
Bernstein-type inequality for Mellin derivatives in $L^p$ norms for any
$p\in [1, \infty].$ Indeed, by the triangle inequality for norms we have
\begin{eqnarray}\label{Bernst1}
\left\|\Theta_cf\right\|_{X_c^p}\,\leq\, \frac{4T}{\pi} \sum_{k\in\mathbb{Z}}
\frac{1}{(2k-1)^2}\,e^{(k-1/2)c/T}\,\left\|f( \cdot \,e^{(k-1/2)/T})\right\|_{X_c^p}.
\end{eqnarray}
It is easily verified that for any positive $a$, there holds
$$\left\|f( \cdot\, a)\right\|_{X_c^p}\,=\, a^{-c} \|f\|_{X_c^p}\,.$$
Furthermore, it is known that
$$\sum_{k\in\mathbb{Z}} \frac{1}{(2k-1)^2}\,=\, \frac{\pi^2}{4}.$$
Thus, it follows from (\ref{Bernst1}) that
\begin{eqnarray}\label{Bernst2}
\|\Theta_cf\|_{X_c^p}\,\leq\, \pi T \|f\|_{X_c^p}\,.
\end{eqnarray}
Inequality (\ref{Bernst2}) in conjunction with Theorem \ref{derivative} and the Mellin
inversion formula shows that if $f\in B_{c,\pi T}^p$ for some $p\in[1, \infty]$,
then $\Theta_cf \in B_{c,\pi T}^p$ as well.
If $f$ does not belong to $B_{c,\pi T}^\infty$ but the two sides of
formula (\ref{d_thm1.1}) exist, we may say that (\ref{d_thm1.1}) holds with
a remainder $(R^B_{\pi T} f)(x)$ defined as the deviation of the right-hand
side from $\Theta_cf(x)$. We expect that $|(R^B_{\pi T}f)(x)|$ is small if
$\Theta_cf$ is close to $B_{c,\pi T}^\infty.$
For the Mellin inversion class $\mathcal{M}_c^2$ (the case $p=2$), we may state a precise result as follows.
\begin{Theorem}\label{d_thm2}
Let $f\in \mathcal{M}_c^2$ and suppose that $v [f]_{M_c}^\wedge(c+iv)$ is absolutely
integrable on $\mathbb{R}$ with respect to $v$.
Then, for any $T>0$ and $x\in\mathbb{R}^+$, we have
\begin{eqnarray}\label{d_thm2.1}
\Theta_cf(x)\,=\, \frac{4T}{\pi} \sum_{k\in\mathbb{Z}} \frac{(-1)^{k+1}}{(2k-1)^2}
\,e^{(k-1/2)c/T} f\left(x e^{(k-1/2)/T}\right) + (R^B_{\pi T}f)(x),
\end{eqnarray}
where
\begin{eqnarray}\label{d_thm2.2}
(R^B_{\pi T}f)(x)\,=\, \frac{1}{2\pi i} \int_{|v|\geq\pi T}
\left[v- \pi T\phi\left(\frac{v}{\pi T}\right)\right] [f]_{M_c}^\wedge(c+iv) x^{-c-iv}dv
\end{eqnarray}
with
\begin{eqnarray}\label{d_thm2.3}
\phi(v)\,:=\, \bigg|v+1 -4\left\lfloor\frac{v+3}{4}\right\rfloor \bigg|-1.
\end{eqnarray}
In particular,
\begin{eqnarray}\label{d_thm2.4}
|(R^B_{\pi T}f)(x)|\,\leq\, \frac{x^{-c}}{2\pi} \int_{|v|\geq \pi T}
\left(|v|+\pi T\right)|[f]_{M_c}^\wedge(c+iv)|dv
\end{eqnarray}
and
\begin{eqnarray}\label{d_thm2.5}
\|R^B_{\pi T}f\|_{X_c^\infty}\,\leq\, \frac{1}{\pi}
\mbox{\rm dist}_1\left(\Theta_cf, B_{c,\pi T}^2\right).
\end{eqnarray}
\end{Theorem}
\begin{figure}
\caption{The graph of the function $\phi$ defined in (\ref{d_thm2.3}).}
\label{phi}
\end{figure}
{\bf Proof.}\,
Define
\begin{eqnarray}\label{d_thm2p1}
f_1(x)\,:=\, \frac{1}{2\pi} \int_{|v|\geq \pi T} [f]_{M_c}^\wedge(c+iv)
x^{-c-iv}dv.
\end{eqnarray}
Then $f-f_1\in B_{c,\pi T}^\infty$, and so (\ref{d_thm1.1}) applies. It
yields that
\begin{eqnarray}\label{d_thm2p2}
(R^B_{\pi T}f)(x)\,=\, \Theta_c f_1(x) - \frac{4T}{\pi} \sum_{k\in\mathbb{Z}}
\frac{(-1)^{k+1}}{(2k-1)^2}\, e^{(k-1/2)c/T}\,f_1\left(xe^{(k-1/2)/T}\right).
\end{eqnarray}
We know from Theorem~\ref{derivative} that
$$\Theta_c f_1(x)\,=\, \frac{1}{2\pi i} \int_{|v|\geq \pi T} v
[f]_{M_c}^\wedge(c+iv) x^{-c-iv} dv.$$
Furthermore, by (\ref{d_thm2p1}),
$$f_1\left(x e^{(k-1/2)/T}\right)\,=\, \frac{1}{2\pi} \int_{|v|\geq \pi T}
[f]_{M_c}^\wedge(c+iv)\left( x e^{(k-1/2)/T}\right)^{-c-iv} dv.$$
Using these integral representations and interchanging summation and
integration, which is allowed by Levi's theorem, we may rewrite
(\ref{d_thm2p2}) as
\begin{eqnarray}\label{d_thm2p3}
(R^B_{\pi T} f)(x)\,=\, \frac{1}{2\pi i} \int_{|v|\geq \pi T}
\bigg(v -\psi(v)\bigg) [f]_{M_c}^\wedge(c+iv) x^{-c-iv} dv,
\end{eqnarray}
where
$$ \psi(v)\,:=\, \frac{4Ti}{\pi} \sum_{k\in\mathbb{Z}}
\frac{(-1)^{k+1}}{(2k-1)^2}\,
e^{-i(k-1/2)v/T}.$$
Now, for $v\in\mathbb{R}$, consider the function $g_v\,:\, x \mapsto ix^{-iv}.$
We note that $g_v \in B_{0,\pi T}^\infty$ if $|v|\leq \pi T.$
Hence $g_v$ satisfies the hypotheses of Theorem~\ref{d_thm1} for $c=0$ and
this restriction on $v$. Since $\Theta_0 g_v(1)=v$, we find by applying
(\ref{d_thm1.1}) to $g_v$ with $c=0$ and $x=1$ that $\psi(v)= v$ for
$v\in [-\pi T, \pi T].$
We also note that $\psi(v+ 2\pi T)=-\psi(v)$ and (consequently) $\psi(v+4\pi
T) = \psi(v).$ Hence $\psi$ is a $4\pi T$-periodic function that is given
on the interval $[-\pi T, 3\pi T]$ by
\begin{eqnarray*}
\psi(v)\, =\left\{
\begin{array}{cl}
v & \hbox{ if } -\pi T\le v \le \pi T,\\ \\
2\pi T -v & \hbox{ if } \pi T \leq v \leq 3\pi T.
\end{array}
\right.
\end{eqnarray*}
Thus, using the function $\phi$ defined in (\ref{d_thm2.3}),
whose graph is shown in Fig.~\ref{phi},
we can express $\psi(v)$ as $\pi T\phi(v/(\pi
T)).$ Hence (\ref{d_thm2p3}) implies (\ref{d_thm2.2}).
Inequalities (\ref{d_thm2.4}) and (\ref{d_thm2.5}) are easily obtained by noting that
$|\phi(v)|\leq 1$ for $v\in\mathbb{R}$ and by recalling Corollary~\ref{cor1}.
$\Box$
\subsection{An extension of the Bernstein-type inequality}
Just the same way as we deduced (\ref{Bernst2}) from (\ref{d_thm1.1}), we may use
(\ref{d_thm2.1}) to obtain
$$ \|\Theta_c f\|_{X_c^p}\,\leq\, \pi T \|f\|_{X_c^p} + \|R^B_{\pi T}
f\|_{X_c^p}.$$
For $p=2$ we can profit from the isometry of the Mellin transform
expressed by the formula
$$ \|f\|_{X_c^2}\,=\, \frac{1}{\sqrt{2\pi}}\left(\int_\mathbb{R} |[f]_{M_c}^\wedge(c+iv)|^2dv \right)^{1/2},$$
obtaining the following theorem.
\begin{Theorem}\label{Bernapprox}
Under the assumptions of Theorem \ref{d_thm2} we have
$$\|\Theta_c f\|_{X_c^2} \,\leq\,\pi T \|f\|_{X_c^2} + \frac{1}{\sqrt{2\pi}}
\,\mbox{\rm dist}_2(\Theta_c f, B_{c,\pi T}^2)$$
for any $T>0$.
\end{Theorem}
{\bf Proof}.
With $f_1$ defined in (\ref{d_thm2p1}) and $f_0:=f-f_1$, we have
\begin{eqnarray}\label{Bernst3}
\|\Theta_c f\|_{X_c^2}\,\leq\, \|\Theta_c f_0\|_{X_c^2} +
\|\Theta_c f_1\|_{X_c^2} \,\leq\,
\pi T \|f_0\|_{X_c^2} +\|\Theta_c f_1\|_{X_c^2}
\end{eqnarray}
since (\ref{Bernst2}) applies to $f_0$.
We are going to estimate the quantities on the right-hand side in terms of $
f$.
Using the isometry of the Mellin transform, we find that
\begin{align*}
\|f\|_{X_c^2}^2 &=\, \frac{1}{2\pi} \int_\mathbb{R}|[f]_{M_c}^\wedge(c+iv)|^2 dv\\
&=\, \frac{1}{2\pi}\left[
\int_{|v|\leq \pi T}|[f]_{M_c}^\wedge(c+iv)|^2 dv +
\int_{|v|\geq \pi T}|[f]_{M_c}^\wedge(c+iv)|^2 dv\right] \\
&=\, \|f_0\|_{X_c^2}^2 +\|f_1\|_{X_c^2}^2\,,
\end{align*}
which implies that $\|f_0\|_{X_c^2} \leq\|f\|_{X_c^2}.$ Next we note that
\begin{align*}
\|\Theta_c f_1\|_{X_c^2} &=\,
\frac{1}{\sqrt{2\pi}} \left(\int_\mathbb{R} |\left[\Theta_cf_1\right]_{M_c}^\wedge(c+iv)|^2dv\right)^{1/2}\\
&=\,\frac{1}{\sqrt{2\pi}} \left(\int_\mathbb{R} |v\left[f_1\right]_{M_c}^\wedge(c+iv)|^2
dv\right)^{1/2}\\
&=\,\frac{1}{\sqrt{2\pi}} \left(\int_{|v|\geq\pi T}|v[f]_{M_c}^\wedge(c+iv)|^2
dv\right)^{1/2}\\
&=\, \frac{1}{\sqrt{2\pi}} \mbox{dist}_2(\Theta_c f, B_{c,\pi T}^2).
\end{align*}
Thus (\ref{Bernst3}) implies the assertion.
$\Box$
\vskip0,4cm
\noindent
{\bf Acknowledgments}. Carlo Bardaro and Ilaria Mantellini have been partially supported by the ``Gruppo Nazionale per l'Analisi Matematica e Applicazioni'' (GNAMPA) of the ``Istituto Nazionale di Alta Matematica'' (INDAM), as well as by the Department of Mathematics and Computer Sciences of the University of Perugia.
\flushright{{\footnotesize \today}}
\end{document}
\begin{document}
\title{\textbf{Regularization and Variable Selection with Copula Prior}}
\begin{abstract}
In this work, we show that under specific choices of the copula, the lasso,
elastic net, and $g$-prior are particular cases of the `copula prior' for
regularization and variable selection. We present the `lasso with Gauss
copula prior' and `lasso with t-copula prior.' The simulation study and
real-world data for regression, classification, and large time-series data
show that the `copula prior' often outperforms the lasso and elastic net
while having a comparable sparsity of representation. Also, the copula
prior encourages a grouping effect. The strongly correlated predictors tend
to be in or out of the model collectively under the copula prior. The
`copula prior' is a generic method, which can be used to define the new
prior distribution. The application of copulas in modeling prior
distribution for Bayesian methodology has not been explored much. We
present the resampling-based optimization procedure to handle big data with
copula prior.
\end{abstract}
\noindent \textbf{Key words} Big data, Elastic Net, Feature Selection, Large $p$ small $n$, Lasso,
Posterior Mode, Shrinkage
\section{Introduction}\label{submission}
A machine learning algorithm performs a supervised learning task
using a set of features \cite{Guyon.2003,
Chandrashekar.2014}. Variable selection helps reduce the computational
requirement, mitigate the effect of the `curse of dimensionality,' improve
the prediction performance, and reveal the relationship between the
predictors and the target variable. In microarray data, which usually
consist of the `expression state' of a vast number of genes, it is
extremely desirable to pick multiple correlated genes to reveal
insights into the underlying biological pathway. Selecting correlated
variables often presents a challenge to classical variable
selection methods. The `lasso' proposed by \cite{Tibshirani.1996} is a
popular choice for variable selection. It uses an $l_{1}$ penalty on
the model parameters. However, the lasso selects only a small subset of
variables from a group of highly correlated variables, which affects the
prediction accuracy as well as the interpretability of the estimated
model.
To address this problem, \cite{Zou.Hastie.2005} proposed the `elastic net'
(EN), which encourages a grouping effect, where strongly correlated variables tend to be in or out of the model together. However, the EN prior distribution is simple, and it does not incorporate correlation
information among the variables in the model. To fix this issue, several
other regularizers have been developed. \cite{Peter.2013} describes a two-stage process in which one first clusters the features to identify the correlated variables and then applies lasso-type penalties to learn the model. More recent attempts avoid the two-stage process and use a regularizer which can simultaneously
learn the coefficients and identify the groups of strongly
correlated variables. Ordered weighted $l_{1}$ (OWL), devised by
\cite{Zeng.2016}, can discover the groups of strongly correlated
variables. However, OWL usually forces the coefficients within the
same group to have similar values, which makes it
undesirable. Another useful feature selection algorithm in this
context is the eigennet \cite{Yan.2011}. It selects the correlated variables by using the eigenstructure of the data to guide feature selection. From a Bayesian perspective, a natural way to deal with the problem is to use a multivariate Laplace distribution as a regularizer, which can account for the correlation between the coefficients, like the $g$-prior developed by \cite{Zellner.1986}. However, the multivariate Laplace distribution is complicated, as its pdf involves the modified Bessel function of the second kind \cite{Kotz.2001}, so it is computationally difficult to handle.
We present a multivariate version of the lasso, called the `lasso copula'
(LC) prior, which can incorporate the correlation information between the
features. Due to its built-in correlation structure, it can discover the
groups of strongly correlated variables. The advantage of our proposed LC
prior is that it encourages a grouping effect with an appropriate sparsity of
representation. The LC prior, just like the lasso or EN, can perform both
feature selection and regularization.
For estimating the coefficients, we propose a nonlinear optimization
procedure and a resampling-based optimization procedure to handle big data. Through experiments on simulated data and real-life data sets, we show that the LC prior can outperform state-of-the-art methods like the regular lasso and EN.
\subsection{Contribution}
\begin{itemize}
\item We present the `lasso Gauss copula' (LGC) prior and the `lasso
  $t$-copula' (LTC) prior, which can use the correlation information
  embedded in the data to select correlated variables.
\item We show that LGC reduces to regular lasso prior when the
correlation between the features is 0. Hence understanding the
theoretical properties of LGC prior is of significant interest.
\item We propose a framework for tuning the hyperparameters of the LC
  prior. For estimating the coefficients, a non-linear optimization
  procedure is employed, and a resampling procedure is presented to handle
  large datasets with a large feature space.
\section{Proposed Method}
We first describe the problem statement in detail, then
introduce the LC prior.
\subsection{Problem Statement}
Consider a linear regression problem consisting of $n$ independent and
identically distributed samples $\{x_{i},y_{i}\}_{i=1}^{n}$. Here
$x_{i}$ denotes a $p$ dimensional input vector for which output value
is denoted by $y_{i}$. Although we consider the regression problem here, we later extend our approach to classification and time-series problems. For every sample $i$, we implement the following model
\begin{equation}
y_{i}=x_{i}^T\bfb+\epsilon_{i},~~~i=1,2,...,n,
\end{equation}
where $\epsilon_{i}\sim \mathcal{N}(0, \sigma^{2})$. Our goal is to select the correct set of features and
learn the true value of the coefficient vector $\bfb \in \mathbb{R}^p$ which
relates $y_{i}$ and $x_{i}$. For estimating $\bfb$, we minimize the
squared error loss function with LC prior as the regularizer. We define
${\mbox{\boldmath $y$}}=(y_{1},\ldots,y_{n})$ to be $n \times 1$ column vector of responses and
${\mbox{\boldmath $x$}}=(x_{1},\ldots,x_{n})^{T}$ as an $n \times p$ matrix of
features. \textit{Without loss of generality, we
assume that each response has been centered and each predictor has been
standardized.}
\subsection{Copula Prior}
Joint modeling of variables could be complicated if the marginals are not
Gaussian, i.e., if they belong to different parametric families. In such cases,
we can use copula techniques to define the multivariate distribution
functions. A copula is a function that connects univariate marginals to their
full multivariate distribution. The application of Copulas in modeling priors has not been explored much. In this
paper, we present how copula can be used to develop the joint priors over the
parameters.
Mathematically copula can be defined as a $p$ dimensional function $C$,
$$
C:[0,1]^p\rightarrow [0,1].
$$
Sklar's theorem \cite{Sklar.1959} states that every multivariate
distribution function can be expressed as
\begin{equation}
\label{eqn_Sklar_copula}
F(\bfb)=C(F_{1}(\omega_{1}),\ldots,F_{p}(\omega_{p}),\theta),
\end{equation}
where $\theta$ is the dependence parameter and $F_i(\omega),~i=1,2,\ldots,p$
are marginal prior distributions. If
$F_{1}(\omega_{1}),\ldots,F_{p}(\omega_{p})$ are continuous, then there exists
a unique $C$ satisfying (\ref{eqn_Sklar_copula}). If we consider the product
copula, i.e.,
\begin{equation}
\label{eqn_product_copula}
C(u_1,\ldots,u_p)=u_1\ldots u_p,
\end{equation}
and choose a Gaussian distribution over $\omega_j$, i.e.,
$u_j=F_j(\omega_j)=\Phi(\omega_j,0,\tau)$, as the marginal prior distribution, then
it is the ridge prior and the corresponding penalty is the $L_2$ penalty. If we choose
Laplace distributions as marginals, i.e.,
$F_j(\omega_j)=Laplace(\omega_j,0,\tau)$, and consider the product copula
(\ref{eqn_product_copula}), then it is the lasso prior and the corresponding penalty
is the $L_1$ penalty. Similarly, if we choose the EN distribution over $\omega_j$ as
the marginal prior distribution and consider the product copula
(\ref{eqn_product_copula}), then it is the EN prior and the corresponding penalty
is the convex combination of the $L_1$ and $L_2$ norms. Following the same
argument, if we choose the marginal distribution to be Gaussian
and consider the Gaussian copula with covariance matrix $\Sigma =
g(X^TX)^{-1}$, then it is the $g$-prior \cite{Zellner.1986}. As it turns out,
existing priors like the ridge, lasso, EN, and $g$-prior become special cases
of the proposed copula prior for particular choices of copula. We
present the complete list of existing cases and new copula priors in
Table \ref{table_list_of_copula_priors}.
\begin{table*}[ht]
\centering
\begin{tabular}{cccc}\hline
Marginal Distribution & Copula Type & Covariance & Prior \\ \hline\hline
Normal & product copula & ${\mbox{\boldmath $I$}}$ &ridge \\
Laplace & product copula & ${\mbox{\boldmath $I$}}$ &lasso \\
Elastic Net & product copula & ${\mbox{\boldmath $I$}}$ &elastic net\\
Normal & Multivariate Gaussian & $g(X^TX)^{-1}$ & $g$-prior\\
Laplace & Multivariate Gaussian & $\Sigma$ & lasso-Gauss-Copula\\
Laplace & Multivariate $t$ with $\nu$ df & $\Sigma$ & lasso-$t$-Copula\\
Laplace & Multivariate Cauchy & $\Sigma$ & lasso-Cauchy-Copula\\
\hline
\end{tabular}
\caption{List of copula priors for regularization; ${\mbox{\boldmath $I$}}$ is the identity
  matrix and $\Sigma$ is the unstructured/structured covariance matrix that
  needs to be estimated.}
\label{table_list_of_copula_priors}
\end{table*}
As \cite{Sklar.1959} showed, a multivariate distribution can be written
in terms of a copula,
\begin{equation}
\label{eqn_CDF_copula}
F[F_{1}^{-1}(u_{1}),\ldots,F_{p}^{-1}(u_{p})]=C(u_{1},\ldots,u_{p},\Sigma),
\end{equation}
where $F_{j}^{-1}(u_{j})=\omega_j,~j=1,\ldots,p$. Now, if we consider the Gauss
copula, i.e., $F=\Phi$, then differentiating equation (\ref{eqn_CDF_copula}) with
respect to $u_{1},\ldots, u_{p}$, we get the copula density as
\begin{equation}\label{eqn_copula_derivative}
c(u_{1},\ldots,u_{p};\Sigma)=\frac{f[F_{1}^{-1}(u_{1}),\ldots,F_{p}^{-1}(u_{p})]}{\prod_{i=1}^{p}f_{i}[F_{i}^{-1}(u_{i})]}.
\end{equation}
The $f$ in (\ref{eqn_copula_derivative}) is the joint PDF of the $F$, and
$f_1,f_2,\ldots,f_p$ are univariate marginal density functions. The
expression (\ref{eqn_copula_derivative}) holds for any choice of univariate
pdf $f_{i}$'s and joint pdf $f$. The density of the Gaussian copula with
covariance matrix $\Sigma$ is given by \cite{Song.2000}
\begin{equation*}
c(\underline{u})=|\Sigma|^{-\frac{1}{2}}\exp\bigg\{-\frac{1}{2}{\mbox{\boldmath $q$}}^T(\Sigma^{-1}-{\mbox{\boldmath $I$}}_p){\mbox{\boldmath $q$}}\bigg\},
\end{equation*}
where $\underline{u}=\{u_1,u_2,\ldots,u_p\}$, ${\mbox{\boldmath $q$}}=(q_1,\ldots,q_p)^T$, with
$q_j=\Phi^{-1}(u_j)$ for $j=1,2,\ldots,p$ and $\Phi$ is the cdf of $N(0,1)$.
Note that $u_j=F_j(\omega_j)$ could be any distribution. The joint prior
density function is, by differentiating (\ref{eqn_Sklar_copula})
\begin{equation}\label{eqn_jnt_copula_prior_pdf}
f(\bfb)=c[F_1(\omega_1),F_2(\omega_2),\ldots,F_p(\omega_p)]\prod_{j=1}^{p}f_j(\omega_j),
\end{equation}
where $c$ is the density of $C$ and $f_1,\ldots,f_p$ are marginal prior
densities. Now we present the LGC prior.
\subsection{Lasso with Gauss-Copula Prior}
Suppose $F_{L:j}(\omega_j)$ is the marginal prior cdf of the Laplace
distribution over $\omega_j$, $f_{L:j}$ is the marginal prior pdf of the Laplace
distribution, and consider the Gauss copula for $c$ in
(\ref{eqn_jnt_copula_prior_pdf}); then we get the joint prior pdf for $\bfb$
as the LGC prior, where
\begin{equation}\label{eqn_lasso_Gauss_copula_density}
\begin{split}
c(\underline{u})=|\Sigma|^{-\frac{1}{2}}\exp\bigg\{-\frac{1}{2}{\mbox{\boldmath $q$}}^T(\Sigma^{-1}-{\mbox{\boldmath $I$}}_p){\mbox{\boldmath $q$}}\bigg\},
\end{split}
\end{equation}
where $\underline{u}=\{F_{L:1}(\omega_1),\ldots,F_{L:p}(\omega_p)\}$,
${\mbox{\boldmath $q$}}=(q_{1},\ldots,q_{p})$ with $q_{j}=\Phi^{-1}(F_{L:j})~\forall j$.
Substituting equation (\ref{eqn_lasso_Gauss_copula_density}) into the
equation (\ref{eqn_jnt_copula_prior_pdf}) would yield the analytical
expression of the joint prior pdf of LGC prior. Assuming
that the density function $f_{L}$ is a Laplace pdf with location parameter 0
and scale parameter $\lambda$, as in the lasso prior, the final expression
would be
\begin{eqnarray}\label{eqn_jnt_lasso_gauss_copula_prior_pdf}
f(\bfb)&=&|\Sigma|^{-\frac{1}{2}}\exp\bigg\{-
\frac{{\mbox{\boldmath $q$}}^{T}(\Sigma^{-1}-{\mbox{\boldmath $I$}}_{p}){\mbox{\boldmath $q$}}}{2}\bigg\}
\left(\frac{\lambda}{2}\right)^{p} \exp\big\{-\lambda\sum_{i=1}^{p}|\omega_{i}|\big\}.
\end{eqnarray}
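To make the construction concrete, the following minimal Python sketch (illustrative values of $\Sigma$, $\lambda$ and $\bfb$; note that the Laplace density $\frac{\lambda}{2}e^{-\lambda|\omega|}$ corresponds to SciPy's Laplace distribution with scale $1/\lambda$) evaluates the logarithm of (\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}):
\begin{verbatim}
import numpy as np
from scipy.stats import norm, laplace

def log_lgc_prior(w, Sigma, lam):
    # log of the lasso-Gauss-copula prior at the coefficient vector w
    p = len(w)
    # q_j = Phi^{-1}(F_{L:j}(w_j)); Laplace(0, scale=1/lam) matches (lam/2) exp(-lam |w|)
    q = norm.ppf(laplace.cdf(w, loc=0.0, scale=1.0 / lam))
    _, logdet = np.linalg.slogdet(Sigma)
    quad_form = q @ (np.linalg.inv(Sigma) - np.eye(p)) @ q
    return (-0.5 * logdet - 0.5 * quad_form
            + p * np.log(lam / 2.0) - lam * np.sum(np.abs(w)))

Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])
w = np.array([0.5, -0.2])
print(log_lgc_prior(w, Sigma, lam=1.0))
# with Sigma = I the copula factor vanishes and only the lasso prior remains
print(log_lgc_prior(w, np.eye(2), lam=1.0))
\end{verbatim}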
In figure (\ref{fig_lasso_gauss_copula}) we present the joint prior pdf of
the LGC prior for the two-dimensional case (i.e., $\omega_1$ and
$\omega_2$).
\begin{itemize}
\item Note that if we consider $\Sigma={\mbox{\boldmath $I$}}_p$, then the simple lasso prior
becomes a special case of the LGC prior
(\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}). These arguments can be
seen from the figure (\ref{fig_lasso_gauss_copula}), where the contour
plots for the LGC prior are shown, for different
values of $\rho$ (correlation parameter). In practice, $\rho$ is
learned from data.
\item The advantage of LGC prior is that it can
include the structural dependence among the predictor variables.
Due to the sharp edges of LGC, it can do subset
selection like the lasso or EN.
\item One disadvantage of the lasso is that it usually fails to do group
  selection, i.e., it gives inaccurate solutions when features
  are correlated. Similar to the EN, the LGC can
  deal with this problem by incorporating correlation, making it a
  favourable choice of regularizer.
\item The choice of copula can change the nature of the regularizer and hence
  the final answer. In the experiment presented in section
  (\ref{section:experiments}), the LTC and LGC yielded different
  solutions and standard errors. Thus copula selection is also a
  possibility when we have a large number of choices for the copula. For a
  small number of copula choices we can use cross-validation, but for
  a large number of copula choices there is a need for copula
  selection. However, in this paper, we restrict ourselves only to the
  LGC, the LTC, and their applications.
\end{itemize}
A desirable supervised learning method for generic data should have the following properties.
\begin{itemize}
\item It should be able to make automatic feature selection.
\item It should work in the case of $p>n$.
\item It should be able to make a group selection for the correlated predictors.
\end{itemize}
The copula prior enjoys all of the above qualities. It can perform automatic
feature selection, works in high dimensions, and does grouped
selection due to its built-in correlation structure.
\subsection{Lasso with $t$-Copula Prior}
The density of $t$ copula \cite{Demarta.2005} has the form
\begin{equation}\label{eqn_t_copula_density}
c_{\nu}^{t}(\underline{u})=\frac{f_{\nu,\Sigma}(t_{\nu}^{-1}(u_1),\ldots,t_{\nu}^{-1}(u_p))}{\prod_{j=1}^{p}f_{\nu}(t_{\nu}^{-1}(u_j))},~~\underline{u}\in (0,1)^p,
\end{equation}
where $f_{\nu,\Sigma}$ is the joint density of the $p$-variate multivariate
$t$-distribution $t_{p}(\nu,0,\Sigma)$ with $\nu$ degrees of freedom,
$\Sigma$ is the covariance matrix, and $f_{\nu}$ is the standard density of the
univariate $t$-distribution with $\nu$ degrees of freedom. Now let
$F_{L:j}(\omega_j)$ be the marginal prior cdf of the Laplace distribution over
$\omega_j$, let $f_{L:j}$ be the corresponding marginal prior pdf, and take the
$t$-copula density (\ref{eqn_t_copula_density}) for $c$ in the joint prior
density (\ref{eqn_jnt_copula_prior_pdf}); then we get the joint prior pdf for
$\bfb$ as the `lasso $t$-copula' (LTC) prior, where
\begin{eqnarray}
\label{eqn_jnt_prior_pdf_t_copula_v1}
&& \log f(\omega_1,\ldots,\omega_p|\Sigma) = \stackrel{part~1}{\overbrace{\log
\big(c(F_{L:1}(\omega_1),\ldots,F_{L:p}(\omega_p))\big)}}+\stackrel{part~2}{\overbrace{\log\big(\prod_{j=1(1)p}f_{L:j}(\omega_j)\big)}}.
\end{eqnarray}
In (\ref{eqn_jnt_prior_pdf_t_copula_v1}), after some simplification, the part
1 can be expressed as,
\begin{equation}
\begin{split}
&\log\big(c(F_{L:1}(\omega_1),\ldots,F_{L:p}(\omega_p))\big)=-p\log\Bigg(\frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})}\Bigg)\\
&-\bigg(\frac{\nu+p}{2}\bigg)\log\bigg(1+\frac{{\mbox{\boldmath $q$}}^T\Sigma^{-1}{\mbox{\boldmath $q$}}}{\nu}\bigg)
+\log \bigg(\frac{\Gamma(\frac{\nu+p}{2})}{\Gamma(\frac{\nu}{2})|\Sigma|^{1/2}}\bigg)\\
&+\sum_{j=1(1)p}\bigg(\frac{\nu+1}{2}\bigg)\log\bigg(1+\frac{q_j^2}{\nu}\bigg)
\end{split}
\end{equation}
where ${\mbox{\boldmath $q$}}=(q_{1},\ldots,q_{p})$ with $q_{j}=t_{\nu}^{-1}(F_{L:j})~\forall j$.
The part 2 of (\ref{eqn_jnt_prior_pdf_t_copula_v1}) can be expressed as
\begin{eqnarray*}
\log\big(\prod_{j=1(1)p}f_{L:j}(\omega_j)\big)&=&\log\bigg[\prod_{j=1(1)p}\frac{\lambda}{2}\exp\big(-\lambda|\omega_j|\big)\bigg],\\
&=&p\log\big(\frac{\lambda}{2}\big)-\lambda \sum_{j=1}^p|\omega_j|.
\end{eqnarray*}
Hence the joint prior density in log-scale for LTC prior can be
expressed as
\begin{eqnarray*}
&&\log f(\omega_1,\ldots,\omega_p)
=-\bigg(\frac{\nu+p}{2}\bigg)\log\bigg(1+\frac{{\mbox{\boldmath $q$}}^T\Sigma^{-1}{\mbox{\boldmath $q$}}}{\nu}\bigg)\\
&&~~~+\sum_{j=1(1)p}\bigg(\frac{\nu+1}{2}\bigg)\log\bigg(1+\frac{q_j^2}{\nu}\bigg)\\
&&~~~+\log
\bigg(\frac{\Gamma(\frac{\nu+p}{2})}{\Gamma(\frac{\nu}{2})|\Sigma|^{1/2}}\bigg)-p\log\Bigg(\frac{\Gamma(\frac{\nu+1}{2})}{\Gamma(\frac{\nu}{2})}\Bigg)\\
&&~~~+p\log\big(\frac{\lambda}{2}\big)-\lambda \sum_{j=1}^p|\omega_j|.
\end{eqnarray*}
\begin{figure*}
\caption{Two dimensional contour plot for lasso-$t$-copula prior with $\nu=10$ for different values of correlation parameter ($\rho$). For $\rho=0$ the shape deflects from lasso.}
\label{fig_lasso_t_copula}
\end{figure*}
\begin{figure*}
\caption{Two dimensional contour plot for lasso-$t$-copula prior with $\nu=1$ for different values of correlation parameter ($\rho$). Also known as Cauchy copula.}
\label{fig_lasso_t1_copula}
\end{figure*}
If we consider $\Sigma={\mbox{\boldmath $I$}}$, then
$\log\big(c(F_{L:1}(\omega_1),\ldots,F_{L:p}(\omega_p))\big)$
would still be non-zero. Hence, unlike the LGC prior, even with zero correlation
among the coefficients, the shape of the $t$-copula prior deflects from the
lasso prior. The argument can be seen clearly from figure
(\ref{fig_lasso_t_copula}), where the contour plots for the
LTC prior (with $\nu=10$) are shown for different values
of the correlation parameter.
In figure (\ref{fig_lasso_t1_copula}) we present the contour plot of the
lasso with $t$-copula with $\nu=1$ degrees of freedom; this is
essentially the Cauchy copula. The contour plot for the Cauchy copula
shows a very undesirable property.
\subsection{Optimization}
The standard approach would be to develop the full Bayesian solution to
estimate the posterior mean of $\bfb$ via MCMC techniques
\cite{Park.2008,Li.Lin.2010,Kyung.2010}. However, we would have to prove the
geometric ergodicity of the Markov chains \cite{Khare.Hobert.2013,
Roy.2017} for our proposed copula prior. This would be a significant
detour from the current paper; hence we set this work aside for another article, on which we are currently working.
In this paper, we work with the posterior mode of $\bfb$. Note that the
posterior mode is the Bayes estimator under a Kullback-Leibler type loss function
\cite{Das.Dey.2010}. We estimate the posterior mode via the augmented Lagrangian
optimization technique \cite{Conn.1991,
Birgin.2008}. This method
consolidates the objective function and the nonlinear penalty into a single
function. Here the objective function is the negative log of the likelihood
function, and the `penalty' is the negative log of the copula prior. The
mathematical form of the objective function with the copula regularizer is
as follows,
\begin{equation*}
L(\bfb)=\|{\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\bfb\|_{2}^{2}-\ln f(\omega_1,\ldots,\omega_p|\Sigma,\lambda).
\end{equation*}
Now, using (\ref{eqn_jnt_copula_prior_pdf}), and since the marginals are
Laplace, we can write the above equation as follows,
\begin{eqnarray}
\label{eqn_log_Post}
L(\bfb)&=&\|{\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\bfb\|_{2}^{2}-\ln
\big(c(F_{L:1}(\omega_1),\ldots,F_{L:p}(\omega_p))\big)
+\lambda \sum_{j=1}^p|\omega_j|.
\end{eqnarray}
The above equation is an unconstrained minimization problem. However, we
cannot use the augmented Lagrangian algorithm on it directly, because that
algorithm requires the objective function and constraints to be twice
continuously differentiable, and the presence of the $\ell_{1}$ norm of the
$\bfb$ vector makes the objective non-differentiable at 0. Since $|\omega_j|$
is not differentiable at 0 for any $j$, we apply a transformation that removes
the absolute values. The approach is to split the
element $\omega_{j}$ of the vector $\bfb$ into $\omega_{j}^{+}$ and
$\omega_{j}^{-}$ so that $\omega_{j}=\omega_{j}^{+}-\omega_{j}^{-}$. If
$\omega_{j}>0$, then we have $\omega_{j}^{+}=|\omega_{j}|$ and
$\omega_{j}^{-}=0$; otherwise $\omega_{j}^{-}=|\omega_{j}|$ and
$\omega_{j}^{+}=0$. Mathematically we can write
$$
\omega_{j}^{+}=\frac{|\omega_{j}|+\omega_{j}}{2},~~ and~~ \omega_{j}^{-}=\frac{|\omega_{j}|-\omega_{j}}{2}.
$$
Both $\omega_{j}^{+}$ and $\omega_{j}^{-}$ are non-negative. The main
advantage of this splitting is that we can now express
$|\omega_{j}|=\omega_{j}^{+}+\omega_{j}^{-}$, so the absolute values are
avoided. Substituting
$\omega_{j}=\omega_{j}^{+}-\omega_{j}^{-}$ and
$|\omega_{j}|=\omega_{j}^{+}+\omega_{j}^{-}$ into (\ref{eqn_log_Post}), we get
the final nonlinear optimization problem,
\begin{eqnarray}\label{eqn_obj}
\min_{\bfb^{+},\bfb^{-}}& ~& L(\bfb^{+},\bfb^{-}),\nonumber \\
\text{subject~to} &~&
\sum_{j=1}^p\omega_j^{+}\omega_{j}^{-}=0, \nonumber \\
\omega_{j}^{+},\omega_{j}^{-}&\geq& 0 ~~~ \forall j.
\end{eqnarray}
The objective function and constraints in (\ref{eqn_obj}) are now continuously differentiable.
We can therefore apply the augmented Lagrangian optimization to (\ref{eqn_obj}), find the optimal $\bfb^{+}$ and $\bfb^{-}$,
and report the final solution as $\bfb^{*}=\bfb^{+}-\bfb^{-}$.
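For concreteness, the following Python sketch (a minimal illustration, not the implementation used for the reported results) sets up the split formulation of (\ref{eqn_obj}). It replaces the augmented Lagrangian solver with SciPy's bound-constrained L-BFGS-B, omits the complementarity constraint (which holds at the optimum because of the $\ell_1$ term), and assumes the Gaussian-copula construction $q_j=\Phi^{-1}(F_{L}(\omega_j))$ with Laplace$(0,1/\lambda)$ marginals; all function names are illustrative.
\begin{verbatim}
# Minimal sketch of the split formulation of the LGC objective.
import numpy as np
from scipy import optimize, stats

def lgc_objective(z, X, y, Sigma_inv, lam):
    p = X.shape[1]
    w_pos, w_neg = z[:p], z[p:]
    w = w_pos - w_neg                    # omega = omega^+ - omega^-
    abs_w = w_pos + w_neg                # |omega| after the split
    # assumed construction: Laplace(0, 1/lam) marginals -> normal scores q
    u = stats.laplace.cdf(w, scale=1.0 / lam)
    q = stats.norm.ppf(np.clip(u, 1e-10, 1 - 1e-10))
    log_copula = -0.5 * q @ (Sigma_inv - np.eye(p)) @ q
    resid = y - X @ w
    return resid @ resid - log_copula + lam * abs_w.sum()

def fit_lgc(X, y, Sigma, lam):
    p = X.shape[1]
    res = optimize.minimize(
        lgc_objective, x0=np.full(2 * p, 0.1),
        args=(X, y, np.linalg.inv(Sigma), lam),
        method="L-BFGS-B", bounds=[(0.0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]         # beta^* = beta^+ - beta^-
\end{verbatim}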
\begin{figure*}
\caption{Two-dimensional contour plot of the LGC prior for different values of the correlation parameter ($\rho$).
When $\rho=0$ it reduces to the lasso penalty. For non-zero $\rho$ the
contour plot exhibits a non-convex penalty structure. In the experiments, $\rho$ is
learned from the data.}
\label{fig_lasso_gauss_copula}
\end{figure*}
\subsection{Tuning of Hyperparameters}
The two unknown parameters for the LGC and LTC are the scale parameter
$\lambda$ and the variance-covariance matrix $\Sigma$. The dimensionality of
data plays a significant role in the estimation of $\Sigma$. For the $n>p$ case
we can determine the prior correlation between the coefficients from the
covariance of the predictors, i.e., $(X^{T}X)$. However, for the $n<p$ case
$(X^{T}X)$ is singular and cannot be used directly, even though it preserves the variance-covariance
structure of the data. Choosing $\Sigma$ to be the identity matrix in the $n<p$ case
could be a poor choice if the features are highly correlated. Hence a ridge-type
shrinkage is a compromise between the actual data covariance and the
identity matrix. Following this idea we choose $\Sigma$ as follows,
\begin{eqnarray}\label{Sigma}
\Sigma =\frac{(X^{T}X+cI)}{(1+c)}, \quad \text{if } n < p,
\end{eqnarray}
and
\begin{eqnarray*}
\Sigma =(X^{T}X), \quad \text{if } n \geq p.
\end{eqnarray*}
Here $p$ is the number of features and $c$ is a constant. To maintain the
variance-covariance structure of the data we usually choose a very
small value of $c$. The scale parameter $\lambda$ is the other parameter that
we wish to learn. For a given correlation, the
copula penalty function increases as $\lambda$ increases. We estimate the scale parameter
$\lambda$ via the $10$-fold cross-validation technique
\cite{Hastie.Tibshirani.Friedman.2008}.
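As a rough illustration of these choices, the short sketch below computes the prior covariance of (\ref{Sigma}) and selects $\lambda$ by $10$-fold cross-validation; \texttt{fit} stands for any fitting routine with the signature of the hypothetical \texttt{fit\_lgc} above, and the grid of $\lambda$ values is arbitrary.
\begin{verbatim}
# Sketch: ridge-shrunk prior covariance and 10-fold CV over lambda.
import numpy as np

def prior_sigma(X, c=1e-3):
    n, p = X.shape
    S = X.T @ X
    return (S + c * np.eye(p)) / (1.0 + c) if n < p else S

def cv_lambda(X, y, fit, lambdas, k=10, seed=0):
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(y)) % k          # fold assignment
    scores = []
    for lam in lambdas:
        mse = 0.0
        for f in range(k):
            tr, te = folds != f, folds == f
            w = fit(X[tr], y[tr], prior_sigma(X[tr]), lam)
            mse += np.mean((y[te] - X[te] @ w) ** 2)
        scores.append(mse / k)
    return lambdas[int(np.argmin(scores))]
\end{verbatim}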
\subsection{Analytical Gradient of LGC prior}
We use the analytical gradient of the LGC prior to speed up the optimization
procedure for big data with a large number of features. The expression for the
gradient of the squared loss function with the LGC regularizer
$L(\omega^{+},\omega^{-})$ is as follows:
\begin{eqnarray}
\frac{\partial L}{\partial \omega^{+}}=-2X^{T}(y-X\omega)+(\Sigma^{-1}-{\mbox{\boldmath $I$}})K^{+}{\mbox{\boldmath $q$}}+\lambda\text{\textbf{1}}, \nonumber\\
\frac{\partial L}{\partial \omega^{-}}=2X^{T}(y-X\omega)+(\Sigma^{-1}-{\mbox{\boldmath $I$}})K^{-}{\mbox{\boldmath $q$}}+\lambda\text{\textbf{1}}, \nonumber
\end{eqnarray}
where $\omega=\omega^{+}-\omega^{-}$, and $K^{+}$ and $K^{-}$ are diagonal
matrices with
$K^{+}(i,i)=\frac{dq(\omega_{i})}{d\omega_{i}}\text{sgn}(\omega_{i}^{+})$
and $K^{+}(i,j)=0 \quad \forall~ i\neq j$. Similarly,
$K^{-}(i,i)=\frac{dq(\omega_{i})}{d\omega_{i}}\text{sgn}(\omega_{i}^{-})$
and $K^{-}(i,j)=0 \quad \forall~ i\neq j$. Here $\mathrm{sgn}(\cdot)$ denotes the signum
function.
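The sketch below evaluates this gradient as written, again under the assumption $q_j=\Phi^{-1}(F_{L}(\omega_j))$ with Laplace$(0,1/\lambda)$ marginals (so that $dq/d\omega = f_{L}(\omega)/\phi(q)$); it is illustrative only.
\begin{verbatim}
# Sketch of the analytical LGC gradient quoted above.
import numpy as np
from scipy import stats

def lgc_gradient(w_pos, w_neg, X, y, Sigma_inv, lam):
    p = len(w_pos)
    w = w_pos - w_neg
    u = stats.laplace.cdf(w, scale=1.0 / lam)
    q = stats.norm.ppf(np.clip(u, 1e-10, 1 - 1e-10))
    dq = stats.laplace.pdf(w, scale=1.0 / lam) / stats.norm.pdf(q)
    K_pos = np.diag(dq * np.sign(w_pos))        # K^+ from the text
    K_neg = np.diag(dq * np.sign(w_neg))        # K^-
    A = Sigma_inv - np.eye(p)
    data = X.T @ (y - X @ w)
    grad_pos = -2.0 * data + A @ K_pos @ q + lam * np.ones(p)
    grad_neg = 2.0 * data + A @ K_neg @ q + lam * np.ones(p)
    return grad_pos, grad_neg
\end{verbatim}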
\subsection{Archimedean Copula}
Beyond the elliptical copulas (such as the Gauss or $t$ copula), the Archimedean copulas provide another large class of models; the Clayton copula \cite{Clayton.1978}, Frank copula \cite{Frank.1979} and Gumbel copula \cite{Gumbel.1960} are popular examples. However, it is worth noting that Archimedean copulas of dimension three or higher only allow positive association; only the bivariate Archimedean copulas can handle negative association. This undesirable feature makes the Archimedean copulas unlikely candidates for the copula prior on $\bfb$.
\section{Results on Copula Prior}
In this section we present some important theoretical results for the
LGC prior. The pdf of the LGC prior can be expressed as in
(\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}). The nature of $q_j$ in (\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}), where $q_j$ is a monotonic function of
$\omega_j$, is crucial in determining the nature of
the LGC regularizer. We make the following assumptions; in
appendix \ref{Appendix_A} we present graphical support for these
assumptions in figures (\ref{assumption_1}) and (\ref{assumption_4}).
\begin{assumption}\label{A1}
When $\omega_{j}>0$, $q_{j}$ is a concave function; when
$\omega_{j}<0$, $q_{j}$ is a convex function.
\end{assumption}
\begin{assumption}\label{A4}
If $\hat{\omega}_{k}\neq\hat{\omega}_{l}$ have the same sign then the
following inequality holds.
\begin{eqnarray*}\label{12}
&&\frac{q(\hat{\omega}_{k}+\hat{\omega}_{l})[q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega}
-q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega}]}{\frac{dq(\hat{\omega}_{l})}{d\omega}-\frac{dq(\hat{\omega}_{k})}{d\omega}
}\nonumber \\
&<&\frac{q(\hat{\omega}_{k})^{2}\frac{dq(\hat{\omega}_{l})}{d\omega}
-q(\hat{\omega}_{l})^{2}\frac{dq(\hat{\omega}_{k})}{d\omega}}{\frac{dq(\hat{\omega}_{l})}{d\omega}-\frac{dq(\hat{\omega}_{k})}{d\omega}}.
\end{eqnarray*}
\end{assumption}
\begin{lemma}\label{lma_1}
A unique solution for the LGC penalty always exists.
\end{lemma}
\begin{lemma}\label{lma_2}
Assume that each predictor is standardized and that $\Sigma = X^TX$
in (\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}). Then the negative log of the joint prior pdf of the LGC
in (\ref{eqn_jnt_lasso_gauss_copula_prior_pdf}) can, up to an additive constant, be expressed as
\begin{equation}\label{eqn_jt_pdf_LGC_approx_form_final}
f(\bfb)=-\sum_{i=1}^{p}q_{i}\sum_{j\neq i}q_{j}\rho_{ji}^{*}+
\lambda \sum_{j=1}^{p}|\omega_{j}|
\end{equation}
where $\rho_{ij}^{*}$ is the partial correlation. Note that $\Sigma^{-1}$, the
precision (inverse covariance) matrix, is normalized to give the partial correlations.
\end{lemma}
\begin{lemma}\label{lma_3}
For $k,l \in \{1,2,\hdots,p\}$, if $x_{k}\approx x_{l}$ then $\rho_{jk}^*\approx
\rho_{jl}^* ~~~\forall~ j\notin \{k,l\}$.
\end{lemma}
\begin{lemma}\label{lma_4}
If $x_{k}\approx x_{l}$ and $\rho_{kl}^{*}> 0$, then
$\hat{\omega}_{k}\approx\hat{\omega}_{l}$.
\end{lemma}
\begin{theorem}\label{thm_LGC_group_effect}
Given data ${\mbox{\boldmath $y$}}$, ${\mbox{\boldmath $X$}}$ and parameters $\lambda$, the response ${\mbox{\boldmath $y$}}$ is
centred and the predictors ${\mbox{\boldmath $X$}}$ are standardized. Let $\hat{\omega}_{k},
\hat{\omega}_{l}$ be the LGC estimate. Suppose that
$\hat{\omega}_{k}\geq\hat{\omega}_{l}>0$.
Define
\begin{equation*}
D_{\lambda}(k,l)=|q_{k}-q_{l}|\frac{dq_{k}}{d\omega_{k}}
\end{equation*}
then
\begin{eqnarray}
&&D_{\lambda}(k,l)\leq
|y|\frac{\sqrt{2(1-\rho_{lk})}}{|\rho_{kl}^{*}|} \nonumber \\
&&~~~~+\lambda\frac{\sqrt{\sum_{j\neq
k,l}q_{j}^2\frac{\pi}{2}}\sqrt{\sum_{j\neq
k,l}(\rho_{jl}^{*}-\rho_{jk}^{*})^{2}}}{|\rho_{kl}^{*}|} \label{Ineq}
\end{eqnarray}
\end{theorem}
The unitless quantity $D_{\lambda}(k,l)$ describes the difference between the coefficient paths of
predictors $k$ and $l$. If $x_k$ and $x_l$ are highly correlated, then
theorem (\ref{thm_LGC_group_effect}) says that the difference between
the coefficient paths of predictor $x_k$ and $x_l$ is almost 0. The
upper bound in the inequality in theorem (\ref{thm_LGC_group_effect}) provides a quantitative description for the
grouping effect of the LGC prior.
\section{Learning Copula Prior from Big Data}
In order to handle `big data,' we present a resampling technique for learning
$\bfb$ with the LC prior. We consider a training dataset consisting of $n$ independent and
identically distributed samples
$\mathcal{D}_n=\{x_{i},y_{i}\}_{i=1}^{n}$, where $n$ is large. We draw a random
subsample $\mathcal{D}_m$ of size $m(<n)$ from $\mathcal{D}_n$, learn $\hat{\bfb}$ from $\mathcal{D}_{m}$ using
(\ref{eqn_obj}), and repeat the process $M$ times, where $M$ is the
simulation size. Thus for each coefficient we have
$M$ solutions, and we take the coordinate-wise median of the $M$ solutions as the
final estimate. The algorithm is presented in (\ref{alg_large_data}).
\begin{figure}
\caption{\label{alg_large_data} Resampling scheme for learning $\bfb$ with the LC prior from a large dataset: repeatedly draw subsamples of size $m<n$, solve (\ref{eqn_obj}) on each, and take the coordinate-wise median of the $M$ solutions.}
\end{figure}
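A minimal sketch of this resampling scheme is given below; \texttt{fit} stands for a routine with the signature of the hypothetical \texttt{fit\_lgc} introduced earlier, and the subsample size, number of repetitions and seed are illustrative.
\begin{verbatim}
# Sketch of the big-data resampling scheme: fit on subsamples, take medians.
import numpy as np

def fit_big_data(X, y, fit, Sigma, lam, m=1500, M=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    solutions = np.empty((M, p))
    for r in range(M):
        idx = rng.choice(n, size=m, replace=True)   # subsample of size m < n
        solutions[r] = fit(X[idx], y[idx], Sigma, lam)
    return np.median(solutions, axis=0)             # coordinate-wise median
\end{verbatim}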
\section{Experiments}\label{section:experiments}
In this section we apply the LC prior to simulated data and real-life
examples. We compare the performance of the LC prior with the lasso
\cite{Tibshirani.1996} and the EN \cite{Zou.Hastie.2005}. For the
implementation of the EN and lasso we used publicly available
packages.
\begin{figure*}
\caption{Comparing the RMSE of the LGC, the LTC, the EN and the lasso. The LGC and the $t$-copula outperform the EN and lasso in examples 1, 2 and 3. In example 4, the performance of the methods is similar. This indicates that the copula priors perform particularly well in the presence of high correlation between the predictors.}
\label{fig_simulation_boxplot}
\end{figure*}
\begin{table*}[ht]
\centering
\begin{tabular}{l|rrrr}
\hline
& Example 1 & Example 2 & Example 3 & Example 4 \\
\hline
LGC & 2.92 (0.02) & 2.82 (0.020) & 12.35 (0.051)& 12.24 (0.07)\\
lasso $t_{\nu=10}$ Copula & 2.92 (0.02) & 2.83 (0.017) & 12.32 (0.048)& 12.18 (0.08)\\
Elastic net & 2.95 (0.03) & 2.89 (0.025) & 12.44 (0.068)& 12.19 (0.07) \\
lasso & 2.99 (0.02) & 2.97 (0.026) & 12.61 (0.043)& 12.19 (0.07)\\
\hline
\end{tabular}
\caption{Median RMSE for the simulated examples and
four methods based on 100 replications. The numbers in parentheses are the corresponding standard errors (of the medians) estimated by using the bootstrap with $B=1000$ resamplings on the 100 RMSE.}
\label{table_sumulation}
\end{table*}
\begin{figure*}
\caption{Solution Path of the LGC in the Diabetes Dataset}
\label{fig_diabetes_Gaussian_Copula_Path}
\end{figure*}
\begin{table*}[ht]
\centering
\begin{tabular}{l|rrr}
\hline
& Test set MSE & Tuning Parameter Estimates & Selected Features \\
\hline
Lasso & 897.071 & $\lambda$ = 5.59 & 6/10 \\
Elastic net & 871.368 & $\alpha$ = 0.947, $\lambda$ = 2.915 & 6/10 \\
Gaussian Copula & 856.693 & $\lambda$ =0.458 & 10/10 \\
t Copula ($\nu=10$) & 857.576 & $\lambda$ = 0.615 & 10/10 \\
\hline
\end{tabular}
\caption{Out-Sample Mean-Square Error (MSE) in Diabetes Dataset}
\label{tabl_diabetes_data_mse}
\end{table*}
\begin{table*}[ht]
\centering
\begin{tabular}{l|rrr}
\hline
& Test set MCE & Tuning Parameter Estimates & Selected Features \\
\hline
Lasso & 9/22 & $\lambda$ =0.09 & 7/201 \\
Elastic net & 5/22 & $\alpha$ =0 , $\lambda$ =0.03 & 201/201 \\
Gaussian Copula & 5/22 & $\lambda$ = 0.1 & 70/201 \\
\hline
\end{tabular}
\caption{Out-Sample Misclassification Error (MCE) in Colon Cancer Dataset}
\label{tabl_Alon_data_mce}
\end{table*}
\begin{table*}[ht]
\centering
\begin{tabular}{l|rrr}
\hline
& Test set MSE & Tuning Parameter Estimates & Included Features \\
\hline
Lasso & 0.699 & $\lambda$ = 0.01 & 16/27 \\
Elastic net & 0.699 & $\alpha$ = 0.21, $\lambda$ = 0.03 & 18/27 \\
Gaussian Copula & 0.768 & $\lambda$ = 0.14 & 4/27 \\
\hline
\end{tabular}
\caption{Out-Sample Mean Square Error (MSE) in Energy Dataset}
\label{tabl_energy_data_mse}
\end{table*}
\subsection{Synthetic experiments}
Here we present four examples from \cite{Tibshirani.1996,Zou.Hastie.2005}
to compare the prediction performance of the lasso, the EN, and the
proposed copula priors. For each example, our simulated data consist of a
training data set, an independent validation data set, and a separate test
data set. The validation data sets were used to select the tuning parameters
and then the models were fitted on the training data set. We computed the
test error (the mean-squared error) on the test data set. The simulation
scenarios are as follows:
\begin{itemize}
\item[(a)] We consider the true model, ${\mbox{\boldmath $y$}}={\mbox{\boldmath $X$}}\bfb+ \epsilon, ~~\epsilon\sim
N(0,\sigma^2{\mbox{\boldmath $I$}})$, where we set $\bfb=(3, 1.5, 0, 0, 2, 0, 0, 0)$,
$\sigma=3$ and ${\mbox{\boldmath $X$}}_k\sim MVN_p(0,\Sigma),~k=1,\hdots,p$, where
$$
\Sigma=(\sigma_{ij})=\bigg\{\begin{array}{cc}
1 & i=j,\\
0.95 & i\neq j.
\end{array}
$$
We simulated 100 data sets. Training set sample size = 20; Validation set
sample size = 20; and Test set sample size = 200. (A data-generation sketch for this setting is given after the list.)
\item[(b)] Example 2 is the same as example 1, except that $\omega_j=0.85$
for all $j$.
\item[(c)] In example 3, we consider the true model, ${\mbox{\boldmath $y$}}={\mbox{\boldmath $X$}}\bfb+ \epsilon,
~~\epsilon\sim N(0,\sigma^2{\mbox{\boldmath $I$}})$, where we set
$$
\bfb=(\underbrace{0,\hdots,0}_{10},\underbrace{2,\hdots,2}_{10},\underbrace{0,\hdots,0}_{10},\underbrace{2,\hdots,2}_{10}),
$$
and $\sigma=15$ and ${\mbox{\boldmath $X$}}_k\sim MVN_p(0,\Sigma),~k=1,\hdots,p$, where
$$
\Sigma=(\sigma_{ij})=\bigg\{\begin{array}{cc}
1 & i=j,\\
0.95 & i\neq j.
\end{array}
$$
We simulated 100 data sets with Training set sample size = 100; Validation
set sample size = 100; and Test set sample size = 400.
\item[(d)] In example 4, we consider the true model, ${\mbox{\boldmath $y$}}={\mbox{\boldmath $X$}}\bfb+ \epsilon,
~~\epsilon\sim N(0,\sigma^2{\mbox{\boldmath $I$}})$, where we set
$$
\bfb=(\underbrace{3,\hdots,3}_{15},\underbrace{0,\hdots,0}_{25}),
$$
and $\sigma=15$. Let $\epsilon_i\stackrel{iid}{\sim} N(0,0.16)$, for
$i=1,2\hdots,15$, the predictors are generated from
\begin{eqnarray*}
X_i&=&Z_1+\epsilon_i, ~~~Z_1\sim N(0,1),~~~i=1,\hdots,5,\\
X_i&=&Z_2+\epsilon_i, ~~~Z_2\sim N(0,1),~~~i=6,\hdots,10,\\
X_i&=&Z_3+\epsilon_i, ~~~Z_3\sim N(0,1),~~~i=11,\hdots,15.
\end{eqnarray*}
We simulated 100 data sets with Training set sample size = 100; Validation
set sample size = 100; and Test set sample size = 400.
\end{itemize}
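For reference, the following sketch reproduces the data-generating process of example 1 (equicorrelated Gaussian predictors with pairwise correlation 0.95 and noise standard deviation 3); it is illustrative and not the exact simulation code used for the reported numbers.
\begin{verbatim}
# Sketch of the example-1 data-generating process.
import numpy as np

def simulate_example1(n, p=8, rho=0.95, sigma=3.0, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
    Sigma = np.full((p, p), rho)
    np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y

X_train, y_train = simulate_example1(n=20)   # training set of size 20
\end{verbatim}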
Table \ref{table_sumulation} and figure \ref{fig_simulation_boxplot} (box
plots) summarize the prediction results. We see that in examples 1, 2 and
3, based on RMSE, the `lasso with $t_{\nu=10}$-copula' and the LGC tend to
be more accurate than the lasso and the EN, while in example 4 the copula priors do about as well as the EN and lasso.
This is expected because in example 4 the predictors are not correlated, and
hence the proposed copula priors offer no advantage over the regular lasso or the
EN penalty.
\subsection{Regression for Diabetes Data}\label{section_diabetes_dataset} The diabetes dataset, described in \cite{Efron.2004}, comprises a sample of 442 diabetic patients. The independent variables are age, sex, body mass index (BMI), average blood pressure, and six blood serum measurements. The dependent variable is a quantitative measure of disease progression one year after the independent variables were measured. For the data analysis, we randomly split the data with 300 observations in the training set and the remaining 142 observations in the test set. We evaluate the MSE of the copula prior, EN, and lasso
on the test data set. Table (\ref{tabl_diabetes_data_mse})
presents the results.
For choosing the optimal tuning parameters, we used ten-fold
cross-validation for all the above regularizers. The LGC prior selects all
the variables in the final model. The distinctive solution path of the LGC prior
(presented in figure \ref{fig_diabetes_Gaussian_Copula_Path}) arises because it
takes into account the correlation among the predictors. The regularization
paths of the lasso and EN for the diabetes dataset are reported in
\cite{Vidaurre.2013}. The six blood serum measurement variables (TC, LDL,
HDL, TCH, LTG, GLU) are highly correlated with each other since they come
from the same person's blood. The BMI and MAP variables also have a significant
amount of correlation with the other predictors. In all three cases the
regularization paths for the age and sex variables are similar, as age and sex
are not correlated with the other predictors. However, the solution path
of the LGC prior differs from the lasso path, as the LGC prior takes into
account the correlation among the predictors. As a result, it achieves a
lower MSE than the regular lasso and EN.
Both the EN and lasso select sex, BMI, MAP, HDL, LTG, and GLU as the
significant predictors. The EN performs better than the lasso due to the presence of
the ridge penalty but tends to perform worse than the copula prior. The lasso is
a special case of the LGC prior in which the prior correlation among the
predictors is assumed to be 0. We also used the $t$ copula with 10 degrees of freedom;
the optimal $\lambda$ value for the $t$ copula turns out to be 0.615.
Again, the test error of the copula priors is small compared with the EN and
lasso, and the $t$ copula selects all the variables in the final model.
\subsection{Classification for Colon Cancer Data}
Microarray data is a classic example of high dimensional data. Experiments
on DNA, RNA and protein microarrays, which record the
expression state of a vast number of genes, generate high dimensional
data: there are often thousands of features (gene expressions) but
very few samples. As a result, feature
selection is needed for this type of data. The response variable is often a
classification of the tissue as cancerous or healthy.
Here, we consider the colon cancer data set described in
\cite{Alon.1999}. This dataset consists of 62 tissue samples collected from
colon cancer patients. Of these 62 samples, 40 biopsies are from tumors
(labeled ``1'') and the remaining 22 biopsies (labeled ``0'') are from healthy
parts of the colon of the same patients. The top 2000 features were selected from
6500 features based on the confidence in the measured gene expression levels.
The goal is to develop a diagnostic rule based on the gene expression
of 2000 genes to differentiate cancerous tissues from healthy
tissues. For this classification problem, we fit a logistic regression
on training data with LGC prior, as the regularizer function. After
learning the coefficients on training data, we use these coefficients
on test data to evaluate the misclassification error.
We divide the data into training and test sets. The training dataset
contains 40 tissue samples, of which 13 are normal tissues and the remaining 27
are tumor samples. The misclassification error is evaluated on the remaining
22 samples. Since the data has a very high dimension, we first select the top 200
predictors based on their $t$-statistic scores computed on the training data, which
makes the computation easier.
The prior correlation matrix between the coefficients is learned from
(\ref{Sigma}). The other unknown quantity is the scale parameter
$\lambda$, which we learn via five-fold cross-validation.
Table (\ref{tabl_Alon_data_mce}) compares the
out-sample misclassification error (MCE) of the LGC prior with other feature selection
methods, namely the lasso and EN. The LGC prior has a much lower
MCE than the lasso. The LGC and EN have the same
MCE, but the LGC yields a sparser
representation: the LGC selects only 70 of the 201 features,
whereas the EN selects all of them.
\subsection{Large Time Series Data for Energy and Housing}
The energy appliances dataset arises from the study of household
energy use by appliances described in
\cite{Candanedo.2017}. The dataset covers about 4.5 months
at 10-minute intervals. The house temperature and humidity conditions
were monitored with a wireless sensor network, and the energy use was
logged every 10 minutes. Weather data collected from the nearest
airport weather station (Chievres Airport, Belgium) was downloaded
from a public dataset and merged with the experimental data using
the date and time column. Overall the data is a time series with
19735 observations and 27 features. We used the first 3.15 months
($\approx 70\%$) as the training data set, and the rest of the data as the
test set for measuring the performance of the model.
Since the data is a time series, we checked the stationarity of
the variables involved. The Augmented Dickey-Fuller test confirms that
all the variables involved are stationary. Hence we used regular
linear time series model for feature selection.
As the size of the data is large, we used the resampling technique defined in
algorithm (\ref{alg_large_data}). A resample of size 1500 is drawn with
replacement from the training dataset and a solution is obtained using
(\ref{eqn_obj}). The process is repeated 200 times, giving 200
solutions, and the median of these 200 solutions is taken as the
final solution. For choosing the optimal tuning parameters, we used ten-fold
cross-validation for all the regularizers. We evaluated the MSE of the LGC, EN,
and lasso on the test data set. The results are presented in table
(\ref{tabl_energy_data_mse}). As evident from table
(\ref{tabl_energy_data_mse}), the MSE values for the lasso, EN and LGC are similar. This
is expected because in this data set the predictors are only slightly correlated.
\section{Conclusion}\label{Conclusion}
We presented the copula prior, a shrinkage and feature selection
method. The LC prior produces a sparse model with good prediction
accuracy while preserving the grouping effect. The empirical results
and simulations demonstrate the good performance of the LC prior and
its advantages over the EN and the lasso. When used for binary
classification, the LC prior performs well on
microarray data in terms of misclassification error, and it performs
automatic gene selection.
The LC prior was implemented for standard supervised learning tasks, such as
regression and classification. The copula prior is a generalization of
the EN and lasso, which have been shown to be essential devices for
model fitting and feature selection. Our method offers further insights
into the lasso and EN, and ways to improve them.
\section*{Appendix A: Proof of Results on Copula Prior}\label{Appendix_A}
\noindent \textbf{Proof of Lemma \ref{lma_3}}: The objective function is
\begin{eqnarray}\label{11_1}
L(\omega_{k},\omega_{l})&=& \|{\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\bfb \|_{2}^{2}-2q_{k}\sum_{j\neq k,j\neq l}\rho_{jk}^{*}q_{j}\nonumber \\
&&~~~~-2q_{l}\sum_{j\neq k,j\neq l}\rho_{jl}^{*}q_{j}-2\rho_{kl}^{*}q_{k}q_{l}+\lambda(|\omega_{k}|+|\omega_{l}|)
\end{eqnarray}
From \cite{Kwan.2014} we know that partial correlation satisfy the following
relation, $\rho_{jk}^{*}=\frac{\hat{\beta}_{jk}}{(1-R_{k}^2)}$, where
$\hat{\beta}_{jk}$ is the ols coefficient of the following regression
equation $x_{k}=\sum_{j\neq k}x_{j}\beta_{jk}+\epsilon_{k}$, and $R_{k}^2$ is
the R square value for this regression equation. By similar argument the partial correlation
$\rho_{jl}^{*}=\frac{\hat{\beta}_{jl}}{(1-R_{l}^2)}$ where $\hat{\beta}_{jl}$
is the ols coefficient of the following regression equation
$x_{l}=\sum_{j\neq l}x_{j}\beta_{jl}+\epsilon_{l}$, and $R_{l}^2$ is the R
square value for this regression equation. The ols coefficients $\hat{\beta_{k}}, \hat{\beta_{l}}$ satisfy the following
linear equation.
\begin{eqnarray}
\sum_{j\neq k}x_{j}\hat{\beta_{jk}} &=& x_{k} \label{ols_1} \\
\sum_{j\neq l}x_{j}\hat{\beta_{jl}}&=& x_{l} \label{ols_2}
\end{eqnarray}
Subtract (\ref{ols_2}) from (\ref{ols_1}), and using the approximation that
$x_{k}\approx x_{l}$ we will get the following equation.
\begin{equation}
\sum_{j\neq k,l}x_{j}\delta_{j}=0, \label{ols_3}
\end{equation}
where $\delta_{j}=(\hat{\beta}_{jk}-\hat{\beta}_{jl}) \quad \forall~ j \neq
k,l$. Equation (\ref{ols_3}) is satisfied only if
$\hat{\beta}_{jk}=\hat{\beta}_{jl} \quad \forall~ j \neq k,l$. Similarly we
can show that $R_{k}^2$ approaches $R_{l}^2$ as $x_{k}$ approaches $x_{l}$.
Consequently, if $x_{k}\approx x_{l}$ then $\rho_{jk}^{*}\approx
\rho_{jl}^{*} \quad \forall~ j\notin \{k,l\}$. Q.E.D.
\noindent \textbf{Proof of Lemma \ref{lma_4}}: Suppose $\hat{\bfb}$ is the optimal solution with
$\hat{\omega}_{k},\hat{\omega}_{l}> 0$. At the optimal point,
\scalebox{1.3}{$\frac{\partial L}{\partial \omega_{k}}=0$} and
\scalebox{1.3}{$\frac{\partial L}{\partial \omega_{l}}=0$}, so we have
\begin{eqnarray}
-2x_{k}^{T}({\mbox{\boldmath $y$}}&-&{\mbox{\boldmath $x$}}\hat{\bfb})-2\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}\sum_{j\neq k,j\neq l}
\rho_{jk}^{*}q(\hat{\omega}_{j}) \nonumber\\
&&-2\rho_{lk}^{*}q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}+\lambda=0 \label{13_1}\\
-2x_{l}^{T}({\mbox{\boldmath $y$}}&-&{\mbox{\boldmath $x$}}\hat{\bfb})-2\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}\sum_{j\neq
k,j\neq l}\rho_{jl}^{*}q(\hat{\omega}_{j})\nonumber \\
&&-2\rho_{lk}^{*}q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}+\lambda=0 \label{14_1}
\end{eqnarray}
Now we subtract (\ref{14_1}) from (\ref{13_1}) and, using the result of
Lemma \ref{lma_3}, we have
\begin{eqnarray}
\sum_{j\neq k,j\neq l}&&\rho_{jk}^{*}
q(\hat{\omega}_{j})(\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}
-\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}) \nonumber \\
&+&
\rho_{lk}^{*}\big(q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}\nonumber \\
&&~~~~~ - q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}\big)=0 \label{15}
\end{eqnarray}
Equation (\ref{15}) is trivially satisfied if $\hat{\omega}_{k}=\hat{\omega}_{l}$. Another
possible root is $\hat{\omega}_{k}\neq\hat{\omega}_{l}>0$, in which case we have the following condition,
\begin{equation}
\sum_{j\neq k,j\neq l}\rho_{jk}^{*}q(\hat{\omega}_{j})=-\frac{\rho_{lk}^{*}[q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}-
q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}]}{\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}-\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}}.\label{16}
\end{equation}
Substituting (\ref{16}) into (\ref{11_1}) we obtain the following
equation,
\begin{eqnarray}
L(\omega_{k},\omega_{l})&=&\|{\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\bfb\|_{2}^{2}\nonumber \\
&+&[2q_{k}+2q_{l}]\frac{\rho_{lk}^{*}[q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}-
q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}]}{\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}
-\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}}
\nonumber \\
&-&2\rho_{kl}^{*}q_{k}q_{l}+\lambda(|\omega_{k}|+|\omega_{l}|) \label{17}
\end{eqnarray}
Since $\hat{\omega}_{k}\neq\hat{\omega}_{l}$ is the optimal solution, then
$L(\hat{\omega}_{k},\hat{\omega}_{l})$ should be the minimum. Consider
another solution $S2=(\hat{\omega_{k}}+\hat{\omega_{l}},0)$. Since
$x_{k}\approx x_{l}$, then
$x_{k}\hat{\omega}_{k}+x_{l}\hat{\omega}_{l}=x_{k}(\hat{\omega}_{k}+\hat{\omega}_{l})+x_{l}\times
0$. Also
$\lambda(\hat{\omega}_{k}+\hat{\omega}_{l})=\lambda(\hat{\omega}_{k}+\hat{\omega}_{l}+0)$.
The only difference between the solution
$\hat{\omega_{k}}\neq\hat{\omega_{l}}$ and the new solution $S2$ would be the
following,
\begin{eqnarray}
&&\rho_{kl}^*\bigg[\frac{q(\hat{\omega}_{k})^{2}\frac{dq(\hat{\omega}_{l})}{d\omega}
-q(\hat{\omega}_{l})^{2}\frac{dq(\hat{\omega}_{k})}{d\omega}}{\frac{dq(\hat{\omega}_{l})}{d\omega}-\frac{dq(\hat{\omega}_{k})}{d\omega}
}\nonumber \\
&&~~~~-\frac{q(\hat{\omega}_{k}+\hat{\omega}_{l})[q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega}
-q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega}]}{\frac{dq(\hat{\omega}_{l})}{d\omega}-\frac{dq(\hat{\omega}_{k})}{d\omega}
}\bigg]\label{18_1}
\end{eqnarray}
The expression (\ref{18_1}) takes a positive value by assumption
(\ref{A4}), which implies that $L(\hat{\omega}_{k}+\hat{\omega}_{l},0)<
L(\hat{\omega}_{k},\hat{\omega}_{l})$, hence a contradiction. So
$x_{k}\approx x_{l} \Rightarrow \hat{\omega}_{k}\approx\hat{\omega}_{l}.$
Q.E.D.
\noindent \textbf{Proof of Theorem \ref{thm_LGC_group_effect}}: Suppose $\hat{\bfb}$ is the optimal solution with
$\hat{\omega}_{k},\hat{\omega}_{l}>0$. At the optimal point,
\scalebox{1.3}{$\frac{\partial L}{\partial \omega_{k}}=0$} and
\scalebox{1.3}{$\frac{\partial L}{\partial \omega_{l}}=0$}, so we have
\begin{eqnarray}
&&-2x_{k}^{T}({\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\hat{\bfb})-2\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}\sum_{j\neq k,j\neq l}
\rho_{jk}^{*}q(\hat{\omega}_{j}) \nonumber\\
&&~~~-2\rho_{lk}^{*}q(\hat{\omega}_{l})\frac{dq(\hat{\omega}_{k})}{d\omega_{k}}+\lambda=0,
~~~\text{and} \label{13_2}\\
&&-2x_{l}^{T}({\mbox{\boldmath $y$}}-{\mbox{\boldmath $x$}}\hat{\bfb})-2\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}\sum_{j\neq k,j\neq l}\rho_{jl}^{*}q(\hat{\omega}_{j}) \nonumber\\
&&~~~-2\rho_{lk}^{*}q(\hat{\omega}_{k})\frac{dq(\hat{\omega}_{l})}{d\omega_{l}}+\lambda=0 \label{14_2}
\end{eqnarray}
Subtract (\ref{14_2}) from (\ref{13_2}) and after some operations we get
the following equation,
\begin{equation}
\begin{split}
&\rho_{kl}^{*}(q_{l}\frac{dq_{k}}{d\omega_{k}}-q_{k}\frac{dq_{l}}{d\omega_{l}})=(x_{l}-x_{k})^{T}(y-X\omega)+
\\
&\sum_{j\neq k,j\neq l}q_{j}(\rho_{jl}^{*}\frac{dq_{l}}{d\omega_{l}}-\rho_{jk}^{*}\frac{dq_{k}}{d\omega_{k}}),
\end{split} \label{14_3}
\end{equation}
so that, by the triangle inequality,
\begin{equation*}
\begin{split}
&| \rho_{kl}^{*}(q_{l}\frac{dq_{k}}{d\omega_{k}}-q_{k}\frac{dq_{l}}{d\omega_{l}})|\leq |(x_{l}-x_{k})^{T}(y-X\omega)|
\\
&+ |\sum_{j\neq k,j\neq l}q_{j}(\rho_{jl}^{*}\frac{dq_{l}}{d\omega_{l}}-\rho_{jk}^{*}\frac{dq_{k}}{d\omega_{k}})|
\end{split}
\end{equation*}
and by the Cauchy--Schwarz inequality,
\begin{eqnarray*}
|\rho_{kl}^{*}|
\times|(q_{l}\frac{dq_{k}}{d\omega_{k}}-q_{k}\frac{dq_{l}}{d\omega_{l}})|\leq
|(x_{l}-x_{k})^{T}|\times|(y-X\omega)|\\
+|[q_{j}]_{j\neq k,l}|\times|(\frac{dq_{l}}{d\omega_{l}}-\frac{dq_{k}}{d\omega_{k}})| \times |[\rho_{jl}^{*}-\rho_{jk}^{*}]_{j\neq k,l}|.
\end{eqnarray*}
Note that $[q_{j}]_{j\neq k,l}$ and $[\rho_{jl}^{*}-\rho_{jk}^{*}]_{j\neq
k,l}$ are $1 \times (p-2)$ vectors. We have
$|(x_{l}-x_{k})^{T}|=\sqrt{x_{l}^{T}x_{l}+x_{k}^{T}x_{k}-2x_{l}^{T}x_{k}}=\sqrt{2(1-\rho_{lk})}$.
By assumption (\ref{A1}), $q$ is a concave function, so
$|(\frac{dq_{l}}{d\omega_{l}}-\frac{dq_{k}}{d\omega_{k}})|$ is bounded by
$\lambda\frac{\pi}{2}$ (the derivative of $q$ at $0$). Substituting these
bounds into the above inequality gives the stated upper bound. Similarly, due to the
concavity of $q$ it is evident that
$|(q_{l}-q_{k})|\frac{dq_{k}}{d\omega_{k}}\leq
|(q_{l}\frac{dq_{k}}{d\omega_{k}}-q_{k}\frac{dq_{l}}{d\omega_{l}})|$ whenever
$\omega_{k}\geq\omega_{l}$. Finally, we can say $|y-X\omega|\leq |y|$.
Q.E.D.
\section*{Related work}
In this section we review existing feature selection algorithms and
compare our proposed method with them. The lasso, proposed by \cite{Tibshirani.1996}, is widely used for feature
selection. The copula lasso prior is a multivariate extension of the lasso prior
which accounts for the correlation between the features; in fact, the `lasso
with Gauss copula' prior reduces to the lasso prior when the correlation
between the features is 0.
The method proposed by \cite{Yan.2011} also incorporates the correlation
between the features, through the eigenvectors of the data covariance matrix; it is a
Bayesian hybrid model which discovers correlated features through
the eigen-information extracted from the data. The EN \cite{Zou.Hastie.2005} uses a weighted combination of $l_{1}$ and
$l_{2}$ norms to encourage a grouping effect, where strongly correlated
variables tend to be in or out of the model together. However, the EN
does not use the correlation information embedded in the data, in contrast
with the copula prior. Copula functions could also be used to develop a multivariate
version of the EN that captures the correlation information between the
features; however, in this paper we restrict ourselves to the LC prior.
The ordered weighted $l_1$ (OWL) algorithm \cite{Zeng.2016} is also capable of
selecting correlated features, but it forces the features in the same group
to have the same coefficient value, which introduces bias into the model.
\cite{Frank.1993} discuss a general $l_{q}$ penalty function on the model
parameters for $q > 0$, known as the bridge estimator. The lasso
is a special case of the bridge estimator corresponding to $q=1$; for $q <
1$ the bridge penalty function becomes non-convex.
\cite{Zellner.1986} introduced the $g$-prior. This prior replicates the
covariance structure of the data, but it cannot produce sparse solutions.
A multivariate Laplace distribution whose covariance structure is identical
to the data covariance serves many useful purposes: it can identify
correlated features through its built-in correlation structure, and it
can produce sparse solutions. However, handling the multivariate
Laplace distribution directly is computationally difficult, so we have used copula
techniques to develop the multivariate distribution function for the lasso.
\section*{Acknowledgment}
Sourish Das's work was partially supported by the Infosys Foundation
grant to CMI.
\end{document}
\begin{document}
\title{Controllable Quantum Interference from Two-Photon Scattershot Sources}
\author{Joshua J. Guanzon}
\email{joshua.guanzon@uq.net.au}
\affiliation{Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, The University of Queensland, St Lucia, Queensland 4072, Australia}
\author{Austin P. Lund}
\affiliation{Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, The University of Queensland, St Lucia, Queensland 4072, Australia}
\author{Timothy C. Ralph}
\affiliation{Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, The University of Queensland, St Lucia, Queensland 4072, Australia}
\date{\today}
\begin{abstract}
We describe a multi-mode passive linear optical network which emulates the two-photon number statistics of a beam splitter, irrespective of where the two photons enter the network. This is done by first defining general properties that the generator of the network matrix must fulfil. We then show that the network's effective transmission coefficient can be freely set to replicate either the absence of coincidence counts (i.e. the Hong-Ou-Mandel dip), or a 100\% coincidence rate, as well as all possible two-photon beam splitter statistics between these two extremal points. Finally, we demonstrate that this network, in comparison to simpler systems, provides a better sampling rate and other resource advantages.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:MHD_intro}
In the past decade, the primary focus of quantum information processing has been on achieving exponential quantum advantages over classical computations. In this regard, linear optical networks coupled with single photon sources and detectors are a noteworthy platform, since they are experimentally accessible in comparison to other systems, and can in principle be used for universal quantum computation~\cite{knill2001scheme,yoran2003deterministic,nielsen2004optical,browne2005resource,ralph2005loss}. However, a key problem of this technology is the implementation of protocols which work at the very large scales required to approach universal computation. Recent experimental advances such as in integrated optical circuits, as well as improvements in theoretical protocols, have made implementing these large systems an increasingly practical reality~\cite{carolan2015universal,caspani2017integrated,qiang2018large,slussarenko2019photonic,bartlett2020universal}. A major theoretical advance came with the discovery of the boson sampling algorithm~\cite{aaronson2011computational}, which showed that classical computers can not efficiently simulate identical photons sent through a randomly chosen linear optical network~\footnote{The proof that is still used today is based on some assumptions about the distribution of matrix permanents for random matrices. However, these assumptions are highly plausible.}.
The aforementioned classical computation hardness argument was later expanded to encompass a problem called \emph{scattershot} boson sampling. This version of the problem considers the situation where the single photons sources are replaced with an array of non-linear crystals, which spontaneously generates entangled squeezed states that are used directly without any feed-forward mechanism~\cite{lund2014boson}. The scattershot source heralds multiple photons simultaneously with a heralding probability that reduces as the square-root of the number of photons. This source has the capacity to be hugely advantageous, and small scale experiments with this kind of device have already been successfully implemented~\cite{bentivegna2015experimental}. The trade-off is that the photons are distributed over many modes in a uniformly random fashion, though their exact location is made clear upon receiving the heralding signal. There is currently little known about the potential applications, beyond boson sampling, of this type of photon source.
In this paper, we describe a passive optical network which is designed to take advantage of a two-photon variant of the scattershot source, and produces quantum interference effects in a predictable manner. This source can be ideally modelled as an $m$ mode optical device which non-deterministically generates two separate, but otherwise indistinguishable, photons located anywhere amongst its $m$ channels $c_i^\dagger c_j^\dagger|0\rangle,$ $i\neq j\in[1,m]$. An example of the different possible configurations of the photons for $m=4$ is given in Fig.~\ref{fig:f1}\textbf{a}. Note that the interaction of two photons is the simplest non-trivial example of quantum interference, hence this study represents a necessary step towards understanding passive quantum processing with scattershot sources.
\begin{figure}
\caption{\label{fig:f1} \textbf{a}: possible configurations of the two photons from an $m=4$ two-photon scattershot source. \textbf{b}: the source attached to the commencing modes of the network $D_4(\theta)$, with the resulting modes measured by number-resolving detectors split into groups $A$ and $B$. \textbf{c}: the corresponding standard beam splitter $D_2(\theta)$ setup used for comparison.}
\end{figure}
We will show, for all possible inputs from an $m$ mode two-photon scattershot source, there exists an $m$ mode passive circuit $D_m(\theta)$ with a configurable parameter $\theta$, that can interfere two photons as if they were incident on a standard beam splitter with $\cos^2\theta$ intensity transmission and $\sin^2\theta$ intensity reflection. This quantum interference includes the extensively studied Hong-Ou-Mandel (HOM) dip effect~\cite{hong1987measurement}, which occurs when a beam splitter's transmission ratio is set to 50:50. This HOM effect is due to the destructive interference of two possible outcomes, resulting in the signature effect of zero probability that a coincidence count will occur (i.e. the two photons will never be detected separately). We will call all possible linear optical networks that can accept all inputs from two-photon scattershot sources and output the quantum interference of an arbitrary beam splitter as Multimode HOM Devices (MHDs), in which $D_m(\theta)$ is a particular subset of all these possible optical circuits.
This problem has been partially solved only for the 50:50 beam splitter case (i.e. for one specific $\theta$ configuration of our network) using Sylvester interferometers, a type of Hadamard transformation~\cite{crespi2015suppression}. This was studied recently in the context of photon indistinguishability and suppression (or destructive interference) laws, and it has been shown that these interferometers produce HOM-like interference between two photons entering them from any pair of input modes~\cite{crespi2015suppression,dittel2016many,viggianiello2018experimental,viggianiello2018optimal}. In our work, we want to find a configurable network $D_m(\theta)$ that naturally generalises Sylvester interferometers, in the sense that it can reproduce the statistics of not only a 50:50 beam splitter with a balanced transmission coefficient, but any general beam splitter with a $\theta$ transmission coefficient. Therefore, $\theta$ can be thought of as a parameter which determines the amount of interference experienced by two separate photons, irrespective of where those photons enter our circuit $D_m(\theta)$. This research thus introduces a type of scalable multi-mode network that acts like a general beam splitter for particular Fock state inputs. General beam splitters are very useful since they are the vital building blocks for all types of optical circuits. As an additional benefit, the construction method presented here has the potential to leverage the efficient multi-mode sampling and photonic generation advantages provided by scattershot sources at large scales.
We begin by firstly characterising the linear optical network $D_m(\theta)$ in Section~\ref{sec:MHD_char}, which includes defining the necessary properties of the associated generator and corresponding matrix representation. Next, in Section~\ref{sec:MHD_dip} we will use these properties to show the existence of two $\theta$ critical points, which correspond to 0\% and 100\% coincidence count probability between two groups of output detectors. In Section~\ref{sec:MHD_inter} we will then prove that, between these two critical points, the probability profile is a decreasing continuous function as the network's analogous transmission coefficient $\theta$ is increased, just like in a standard beam splitter. Finally, in Section~\ref{sec:MHD_comp} the resource advantages and costs of this model will be analysed in comparison to other more straightforward systems.
\section{Characterisation of the MHD} \label{sec:MHD_char}
\subsection{Passive Linear Optical Networks}
A passive linear optical network with $m\in\mathbb{N}$ modes can be, in general, represented by an $m \times m$ unitary matrix $U_m$. This matrix maps the transformations of the bosonic annihilation operators of the input commencing modes to the output resulting modes $U_m\vec{c}=\vec{r}$. The network is passive and linear, hence the number of photons and energy are conserved; note that we will only consider the situation where no photon loss occurs. In this paper, we will focus on the matrix $U_m=D_m(\theta)$, whose components are dependent on a single real variable $\theta\in\mathbb{R}$. This variable will later be shown to be physically analogous to the transmission ratio of a standard beam splitter.
The $m$ channel two-photon scattershot source can be attached to the commencing modes of the device $D_m(\theta)$, as shown for the $m=4$ case in Fig.~\ref{fig:f1}\textbf{b}. So that we can conduct statistical measurements of the network, we need to use $m$ number-resolving detectors on the resulting modes. After a measurement is performed, the output detectors are split into two equal $m/2$ sized groups (which corresponds to labelling them either $A$ or $B$). Note that the particular grouping can change depending on which input was heralded by the scattershot source; this point will be elucidated later on in this paper. This detector grouping allows us to compare the probability of, for example, two photons landing in the $A$ group of detectors in the $D_m(\theta)$ case with the typical beam splitter $D_2(\theta)$ case (which only has one $A$ detector), as shown in Fig.~\ref{fig:f1}\textbf{c}.
\subsection{Properties of the Generator}
We will show that the stated beam splitter like interference of a MHD follows as a natural consequence of just three conditions on the generators $Y_m$. These are $m \times m$ real matrices which generate possible MHDs through an exponential map
\begin{align}
D_m(\theta) = \exp(\theta Y_m). \label{eq:MHD_exp}
\end{align}
Crucially, $Y_m$ is not dependent on $\theta$ (or any other variables), therefore studying these generators is a straightforward avenue to gaining insight into $D_m(\theta)$. There are three requirements on these generators, which we will briefly motivate in the next paragraph:
\begin{enumerate}
\item[A1.] Skew-symmetric matrix
\begin{equation}
Y^T_m = -Y_m.
\end{equation}
\item[A2.] Same magnitude off-diagonal components
\begin{equation}
|y_{i,j}|=|y_{p,q}|, \hspace{5 mm} \forall i\neq j,p\neq q.
\end{equation}
\item[A3.] Orthogonal matrix
\begin{equation}
Y_m Y_m^T = \mathbb{I}_m.
\end{equation}
\end{enumerate}
Note that $y_{i,j}$ is the component of $Y_m$ located at the $i$th row and $j$th column, while $\mathbb{I}_m$ is the $m \times m$ identity matrix.
The A1 condition guarantees the associated $D_m$ transformations are orthogonal matrices; this property can arise naturally from physical considerations of energy and number conservation in lossless beam splitters~\cite{campos1989quantum}. On the other hand, the A2 and A3 conditions physically correspond to input-invariant quantum interference. We will later show in Section~\ref{sec:MHD_dip_allm} that these two additional restrictions guarantee an analogous HOM dip for all possible numbers of modes (sizes) of our network.
Note that we could also consider complex matrices, with requirements A1 and A3 instead replaced with anti-Hermitian $Y^\dagger_m = -Y_m$, traceless $\mathrm{tr}(Y_m) = 0$, and unitary $Y_mY_m^\dagger = \mathbb{I}_m$ conditions, which gives rise to particular unitary transformations. However, we will only consider real matrices in this paper for the following reasons: this generalisation to complex matrices is not all encompassing, it will not result in all possible MHDs; for pedagogical reasons, the arguments which will be made later are easier to grasp with real matrices; and finally, the resultant $D_m$ real matrices can be converted to the many other possible MHD forms using similarity transformations.
A familiar example representation of this special orthogonal algebra in the simplest $m=2$ case is given by
\begin{equation}
Y_2 =
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}, \label{eq:MHD_Y2}
\end{equation}
which also happens to satisfy A2 and A3~\cite{zee2016group,schwichtenberg2015physics}. This can then be used to generate the following MHD
\begin{equation}
D_2(\theta) = \exp(\theta Y_2) =
\begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{pmatrix},
\end{equation}
whose form is the stereotypical rotation transformation, commonly associated with beam splitters if we take $\theta$ to be the transmission ratio~\cite{campos1989quantum,zee2016group}. In the next section, we will analyse the matrix structure for a generalised $m$ amount of modes.
\subsection{Structure of the Matrix Representation}
As a consequence of the three previously stated conditions, the generator $Y_m$ has the general matrix form
\begin{align}
Y_m &= \frac{1}{\sqrt{m-1}}
\begin{pmatrix}
0 & s_{1,2} & \cdots & s_{1,m} \\
-s_{1,2} & 0 & \cdots & s_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{1,m} & -s_{2,m} & \cdots & 0
\end{pmatrix},
\end{align}
where $s_{i,j}=\pm1, \forall i\neq j$ are sign placeholders. Note that the orthogonality condition A3 imposes additional restrictions to the signs $s_{i,j}$ of the off-diagonal components; only those configurations in which the columns (and rows) of $Y_m$ are orthonormal to each other are allowed.
We can also use these conditions to determine the general matrix form of $D_m$. Conditions A1 and A3 together imply $Y_m^2=-Y_mY_m^T=-\mathbb{I}_m$, so the exponential series collapses into a rotation-like form
\begin{align}
D_m(\theta) &= \exp(\theta Y_m) = \cos \theta \, \mathbb{I}_m + \sin \theta \, Y_m, \\
&=
\begin{pmatrix}
\cos\theta & \frac{s_{1,2}\sin\theta}{\sqrt{m-1}} & \cdots & \frac{s_{1,m}\sin\theta}{\sqrt{m-1}} \\
-\frac{s_{1,2}\sin\theta}{\sqrt{m-1}} & \cos \theta & \cdots & \frac{s_{2,m}\sin\theta}{\sqrt{m-1}} \\
\vdots & \vdots & \ddots & \vdots \\
-\frac{s_{1,m}\sin\theta}{\sqrt{m-1}} & -\frac{s_{2,m}\sin\theta}{\sqrt{m-1}} & \cdots & \cos \theta
\end{pmatrix}. \label{eq:MHD_Dm}
\end{align}
Therefore, the form of $D_m$ consists of $\cos \theta$ in the diagonal entries and $\pm \sin \theta / \sqrt{m-1}$ in the off-diagonals, with the particular sign chosen in a manner which satisfies orthogonality. Despite already knowing a lot about the matrix structure, it is not obvious how one goes about finding a particular $Y_m$ (and thus $D_m$) for an arbitrary $m$; we will next show a straightforward method of creating higher mode matrices from lower mode matrices.
\subsection{Constructing Large-Scale MHD Networks}
It will be convenient for us to have an easy method of determining higher order MHD matrices due to the nature of the scattershot source. This is because, as will be properly quantified later in Section~\ref{sec:MHD_comp}, it is advantageous in terms of sampling rate to have an MHD network with as many modes as possible. If we know of a particular generator $Y_m$, we can calculate another matrix $Y_{2m}$ with double the amount of modes by inserting it into the following block matrix
\begin{align}
Y_{2m} = \frac{\sqrt{m-1}}{\sqrt{2m-1}}
\begin{pmatrix}
Y_m & Y_m + \frac{\mathbb{I}_m}{\sqrt{m-1}} \\
Y_m - \frac{\mathbb{I}_m}{\sqrt{m-1}} & -Y_m
\end{pmatrix}.
\end{align}
This construction approach is similar in spirit to the Sylvester construction method~\cite{crespi2015suppression}. However, through the exponential mapping given in Eq.~\eqref{eq:MHD_exp}, this method would ultimately result in a network that is configurable by the parameter $\theta$, in contrast to the fixed Sylvester interferometers.
We will now show that $Y_{2m}$ also satisfies the requirements of a generator. First, we note that its transpose is equivalent to
\begin{align}
Y_{2m}^T &= \frac{\sqrt{m-1}}{\sqrt{2m-1}}
\begin{pmatrix}
Y_m^T & Y_m^T - \frac{\mathbb{I}_m }{\sqrt{m-1}} \\
Y_m^T + \frac{\mathbb{I}_m}{\sqrt{m-1}} & -Y_m^T
\end{pmatrix}, \nonumber \\
&= \frac{\sqrt{m-1}}{\sqrt{2m-1}}
\begin{pmatrix}
-Y_m & -Y_m - \frac{\mathbb{I}_m}{\sqrt{m-1}} \\
-Y_m + \frac{\mathbb{I}_m}{\sqrt{m-1}} & Y_m
\end{pmatrix}, \nonumber \\
&= -Y_{2m},
\end{align}
thus it also satisfies the skew-symmetric matrix A1 condition. By inspection, it is clear that $Y_m\pm\mathbb{I}_m/\sqrt{m-1}$ is a matrix where all of its components have a magnitude of $1/\sqrt{m-1}$. Hence the off-diagonals of $Y_{2m}$ all have the same magnitude of $1/\sqrt{2m-1}$, thus satisfying the A2 requirement where $|y_{i,j}|=|y_{p,q}|, \forall i\neq j,p\neq q$. Finally, we note that
\begin{align}
Y_{2m}Y_{2m}^T &= \frac{m-1}{2m-1}
\begin{pmatrix}
2\mathbb{I}_m + \frac{\mathbb{I}_m}{m-1} & 0_m \\
0_m & 2\mathbb{I}_m + \frac{\mathbb{I}_m}{m-1}
\end{pmatrix}, \nonumber \\
&= \mathbb{I}_{2m},
\end{align}
hence we have shown that $Y_{2m}$ satisfies the orthogonality A3 condition.
We have just shown that if we have a $Y_m$ that satisfies the generator conditions, we can easily double the amount of modes by calculating via the above block matrix method $Y_{2m}$, which is guaranteed by construction to also satisfy the generator conditions. For example, since we know of a $Y_2$ from Eq.~\eqref{eq:MHD_Y2}, we can readily calculate the associated $Y_4$ as follows
\begin{align}
Y_4 = \frac{1}{\sqrt{3}}
\begin{pmatrix}
0 & 1 & 1 & 1 \\
-1 & 0 & -1 & 1 \\
-1 & 1 & 0 & -1 \\
-1 & -1 & 1 & 0
\end{pmatrix}.
\end{align}
Hence applying the exponential mapping gives
\begin{align}
D_4(\theta) =
\begin{pmatrix}
\cos\theta & \frac{\sin\theta}{\sqrt{3}} & \frac{\sin\theta}{\sqrt{3}} & \frac{\sin\theta}{\sqrt{3}} \\
-\frac{\sin\theta}{\sqrt{3}} & \cos\theta & -\frac{\sin\theta}{\sqrt{3}} & \frac{\sin\theta}{\sqrt{3}} \\
-\frac{\sin\theta}{\sqrt{3}} & \frac{\sin\theta}{\sqrt{3}} & \cos\theta & -\frac{\sin\theta}{\sqrt{3}} \\
-\frac{\sin\theta}{\sqrt{3}} & -\frac{\sin\theta}{\sqrt{3}} & \frac{\sin\theta}{\sqrt{3}} & \cos\theta
\end{pmatrix},
\end{align}
which matches the general form given in Eq.~\eqref{eq:MHD_Dm}. By iterating this calculation, we have a practical procedure for determining $D_m$ with as many channels as desired, for any power of two $m=2^k, k\in\mathbb{N}$. It can be shown that it is not possible to construct these matrices for dimensions that are odd or singly even; we suspect that it is possible for all doubly even numbers of modes $m=4k$, of which the powers of two are a subset. We note that $D_m(\theta)$ is equivalent to a Hadamard matrix at a particular $\theta$ value, therefore the suspicion that these networks exist only for orders of $m=4k$ is further supported by the Hadamard matrix conjecture~\cite{hedayat1978hadamard}.
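As a numerical illustration of the construction (a sketch we include for this presentation, not published code), the snippet below doubles $Y_2$ into $Y_4$, verifies conditions A1--A3, and checks that the matrix exponential of $\theta Y_4$ coincides with the closed form $\cos\theta\,\mathbb{I}_4+\sin\theta\, Y_4$.
\begin{verbatim}
# Sketch: doubling construction Y_m -> Y_{2m} and the closed form of D_m.
import numpy as np
from scipy.linalg import expm

Y2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])

def double(Y):
    m = Y.shape[0]
    I = np.eye(m) / np.sqrt(m - 1)
    return np.sqrt(m - 1) / np.sqrt(2 * m - 1) * np.block([[Y, Y + I],
                                                           [Y - I, -Y]])

def D(Y, theta):
    return np.cos(theta) * np.eye(Y.shape[0]) + np.sin(theta) * Y

Y4 = double(Y2)
off = ~np.eye(4, dtype=bool)
assert np.allclose(Y4, -Y4.T)                         # A1: skew-symmetric
assert np.allclose(np.abs(Y4[off]), 1 / np.sqrt(3))   # A2: equal magnitudes
assert np.allclose(Y4 @ Y4.T, np.eye(4))              # A3: orthogonal
assert np.allclose(expm(0.3 * Y4), D(Y4, 0.3))        # closed form of exp
assert np.allclose(D(Y4, 0.3) @ D(Y4, 0.3).T, np.eye(4))  # D is orthogonal
\end{verbatim}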
We are now ready to derive, from the defined properties, the physical consequences for a general $D_m$ circuit, with particular reference to the two photon interference experienced by a typical two level beam splitter $D_2$. The previously calculated $D_4$ case will also be used as a concrete example in the next few sections.
\section{Analysis of Critical Points} \label{sec:MHD_dip}
\subsection{Two Mode HOM Dip}
Suppose we have an input of two separate, but otherwise indistinguishable, photons into the two commencing modes $c_1^\dagger c_2^\dagger|0\rangle$ of a typical beam splitter $D_2(\theta)$. It can be shown that there exists a critical point $\theta_\mathrm{dip}(m=2)=\pi/4$ of the transmission ratio, where there is zero coincidence counts between the two output resulting modes $r_1^\dagger r_2^\dagger|0\rangle$. This HOM dip situation is summarised in Fig.~\ref{fig:f2}\textbf{a}. We can calculate the coincidence probability at $\theta_\mathrm{dip}$ formally as follows
\begin{align}
\cos(\theta_\mathrm{dip}) &= \sin(\theta_\mathrm{dip}) = \frac{1}{\sqrt{2}}, \nonumber \\
D_2(\theta_\mathrm{dip}) &= \frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
-1 & 1
\end{pmatrix}, \nonumber \\
\mathbb{P}_2(r_1 r_2 |c_1 c_2;\theta_\mathrm{dip}) &= \left|\mathrm{perm}\left[ \frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1 \\
-1 & 1
\end{pmatrix} \right]\right|^2 = 0.
\end{align}
In this case, the probability amplitude is related to the permanent of the entire matrix~\cite{aaronson2011computational,scheel2004permanents}. Note that the permanent function is essentially the determinant but without the alternating signs.
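The corresponding numerical check is immediate (a brief sketch we include for illustration):
\begin{verbatim}
# The 2x2 permanent of D_2(pi/4) vanishes -> zero coincidence probability.
import numpy as np

def perm2(M):
    return M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]

def D2(theta):
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

print(abs(perm2(D2(np.pi / 4))) ** 2)   # -> 0.0  (HOM dip)
print(abs(perm2(D2(0.0))) ** 2)         # -> 1.0  (fully transmitting)
\end{verbatim}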
\begin{figure}
\caption{\label{fig:f2} \textbf{a}: the Hong-Ou-Mandel dip for a standard beam splitter $D_2(\theta)$ with one photon in each input mode; at $\theta_\mathrm{dip}=\pi/4$ the coincidence probability between the two output modes vanishes.}
\end{figure}
\subsection{Coincidence Probability Between Two Groups of Detectors}
The previous probability calculation can be extended generally to $D_m$. Suppose we want to know the probability of measuring a coincidence count between the \textit{resulting} modes $r_p^\dagger r_q^\dagger|0\rangle$, given an input into the system $D_m$ of two photons in the \textit{commencing} modes $c_i^\dagger c_j^\dagger|0\rangle$. Then the probability can be calculated by the squared permanent of a particular submatrix of $D_m$, composed of the intersection between \textit{rows} $\mathbf{r}_p$ and $\mathbf{r}_q$ with \textit{columns} $\mathbf{c}_i$ and $\mathbf{c}_j$ as follows
\begin{align}
D_{m}(\theta) &= \bordermatrix{ & & \mathbf{c}_i & & \mathbf{c}_j & \cr
& \cdot & \cdot & \cdot & \cdot & \cdot \cr
\mathbf{r}_p & \cdot & d_{p,i} & \cdot & d_{p,j} & \cdot \cr
& \cdot & \cdot & \cdot & \cdot & \cdot \cr
\mathbf{r}_q & \cdot & d_{q,i} & \cdot & d_{q,j} & \cdot \cr
& \cdot & \cdot & \cdot & \cdot & \cdot }, \\
\mathbb{P}_m(r_p r_q |c_i c_j;\theta) &= \left|\mathrm{perm}\left[ \begin{pmatrix}
d_{p,i} & d_{p,j} \\
d_{q,i} & d_{q,j}
\end{pmatrix} \right]\right|^2,
\end{align}
where we label the component in the $a$th row and $b$th column of $D_m$ as $d_{a,b}$~\cite{aaronson2011computational}.
In modes higher than $m=2$, we need to introduce the notion of grouping the output detectors together for the analogous HOM dip to make sense. We can divide the $m$ resulting output modes into two equal $m/2$ sized sets $A(c_ic_j)=\{r_p,...,r_q\}$ and $B(c_ic_j)=\{r_u,...,r_v\}$, such that there is no overlap between them $A\cap B =\{\}$. Note that we allow the freedom where the particular detector grouping chosen depends on the input location of the two photons $c_i^\dagger c_j^\dagger|0\rangle$. We may then calculate the analogous coincidence probability between these two groups of detectors by
\begin{equation}
\mathbb{P}_m(A B |c_i c_j;\theta) = \sum_{r_a\in A} \sum_{r_b\in B} \mathbb{P}_m(r_a r_b |c_i c_j;\theta).
\end{equation}
The analogous HOM dip condition occurs when the transmission coefficient is set to a particular $\theta=\theta_\mathrm{dip}$, which results in $\mathbb{P}_m(AB|c_ic_j;\theta_\mathrm{dip})=0$. Since probabilities can't be negative, this means we will need to show that each individual coincidence probability in the sum is zero
\begin{equation}
\mathbb{P}_m(r_a r_b |c_i c_j;\theta_\mathrm{dip}) = 0,\hspace{5 mm} r_a\in A, r_b \in B,
\end{equation}
in which there are $(m/2)^2=m^2/4$ of these terms. Note that for the beam splitter $m=2$ case, it is clear with $A=\{r_1\}$ and $B=\{r_2\}$ that $\mathbb{P}_2(A B |c_1 c_2;\theta_\mathrm{dip}) =\mathbb{P}_2(r_1 r_2 |c_1 c_2;\theta_\mathrm{dip})=0$, as expected.
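The group coincidence probability defined above can be evaluated directly from the $2\times2$ submatrix permanents; the following sketch (our own illustration) implements this sum and recovers the beam splitter result for $m=2$.
\begin{verbatim}
# Sketch: group coincidence probability from 2x2 submatrix permanents.
import numpy as np

def perm2(M):
    return M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]

def coincidence(D, i, j, A, B):
    """P_m(A B | c_i c_j; theta): photons enter columns i and j of D."""
    return sum(abs(perm2(D[np.ix_([a, b], [i, j])])) ** 2
               for a in A for b in B)

# m = 2 sanity check with A = {r_1}, B = {r_2} at theta_dip = pi/4:
t = np.pi / 4
D2 = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
print(coincidence(D2, 0, 1, A=[0], B=[1]))   # -> 0.0
\end{verbatim}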
\subsection{Higher Mode HOM Dip} \label{sec:MHD_dip_allm}
We will now show that an analogous HOM dip critical point exists at $\theta_\mathrm{dip}$, which we claim occurs when the transmission ratio is set to the following condition
\begin{align}
\cos(\theta_\mathrm{dip}) &= \frac{\sin(\theta_\mathrm{dip})}{\sqrt{m-1}} = \frac{1}{\sqrt{m}}.
\end{align}
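For example, for $m=2$ this reduces to the familiar beam-splitter value $\theta_\mathrm{dip}=\pi/4$, while for $m=4$ it gives $\cos\theta_\mathrm{dip}=1/2$, i.e. $\theta_\mathrm{dip}=\pi/3$.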
This is consistent with the previous $m=2$ case, in which this is the point where all the components have the same magnitude. Hence the corresponding matrix is given by
\begin{align}
D_m(\theta_\mathrm{dip}) &= \frac{1}{\sqrt{m}} \begin{pmatrix}
1 & s_{1,2} & \cdots & s_{1,m} \\
-s_{1,2} & 1 & \cdots & s_{2,m} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{1,m} & -s_{2,m} & \cdots & 1
\end{pmatrix}.
\end{align}
We note that this is essentially a skew-Hadamard matrix whose simplified structure is only possible because of the requirements on the generator. The components all having the same magnitude, while potentially differing in sign, means that some of the contained $2\times2$ submatrices have permanents that resolve to zero. In other words, the defined requirements on the generator physically lead to the total destructive interference of probability amplitudes associated with certain coincidence outcomes. The precise details of this interference will now be explicitly elucidated.
Let us suppose that the two input photons enter $D_m(\theta_\mathrm{dip})$ at two arbitrary commencing modes $c_i^\dagger c_j^\dagger|0\rangle$. The given output probabilities of these two photons are associated with only columns $\mathbf{c}_i$ and $\mathbf{c}_j$ of $D_m(\theta_\mathrm{dip})$
\begin{equation}
\begin{pmatrix}
\mathbf{c}_i(\theta_\mathrm{dip}) & \mathbf{c}_j(\theta_\mathrm{dip})
\end{pmatrix} = \frac{1}{\sqrt{m}} \begin{pmatrix}
s_{1,i}' & s_{1,j}' \\
\vdots & \vdots \\
s_{m,i}' & s_{m,j}'
\end{pmatrix},
\end{equation}
where $s_{u,v}'=\pm1, \forall u,v$. We note that there are $\binom{m}{2} = m(m-1)/2$ possible $2\times2$ submatrices contained within the $\begin{pmatrix} \mathbf{c}_i & \mathbf{c}_j\end{pmatrix}$ columns, whose permanents are proportional to the coincidence probabilities between two pairs of output modes.
We will now prove that an analogous HOM dip exists by showing that $m^2/4$ of these $2\times2$ submatrices have permanents which resolve to zero (i.e. these correspond to the output mode pairs which have no coincidences). Let us look at the coincidence rate between two arbitrary output modes $r_p^\dagger r_q^\dagger|0\rangle$, which is related to the permanent $x$ as follows
\begin{align}
x = \mathrm{perm}\left[ \frac{1}{\sqrt{m}} \begin{pmatrix}
s_{p,i}' & s_{p,j}' \\
s_{q,i}' & s_{q,j}'
\end{pmatrix}\right] &= \frac{1}{m}(s_{p,i}'s_{q,j}'+s_{q,i}'s_{p,j}') \nonumber \\
m s_{q,j}' s_{p,j}' x&= s_{p,i}'s_{p,j}' +s_{q,i}'s_{q,j}'.
\end{align}
Note that we multiplied both sides by $m s_{q,j}' s_{p,j}'$, and then used the fact that $(s_{u,v}')^2=1,\ \forall u,v$. Notice that the permanent $x=0$ if $s_{p,i}'s_{p,j}' +s_{q,i}'s_{q,j}'=0$, or in other words $\mathrm{sign}(s_{p,i}'s_{p,j}') \neq \mathrm{sign}(s_{q,i}'s_{q,j}')$. We can now calculate how many of the $x$'s are zero by noting that condition A1 means the $D_m(\theta)$ are orthogonal matrices, hence their columns must be mutually orthogonal for all values of $\theta$. Thus we know that
\begin{equation}
\mathbf{c}_i(\theta_\mathrm{dip}) \cdot \mathbf{c}_j(\theta_\mathrm{dip})= \frac{1}{m}(s'_{1,i} s'_{1,j} + \cdots + s'_{m,i} s'_{m,j}) = 0,
\end{equation}
where we emphasise the fact that $s'_{p,i} s'_{p,j}=\pm1$. Since these terms add up to zero, it must be the case that $m/2$ of these terms are one $s'_{a,i} s'_{a,j}=1$, while the other $m/2$ of these terms are negative one $s'_{b,i} s'_{b,j}=-1$. This means that there are precisely
\begin{equation}
\left( \frac{m}{2} \right)^2 = \frac{m^2}{4}
\end{equation}
pairings in which $s_{a,i}'s_{a,j}' +s_{b,i}'s_{b,j}'=0$. Therefore $m^2/4$ of the $2\times2$ submatrix permanents are zero, and thus there is zero probability of detecting a coincidence count $\mathbb{P}_m(r_a r_b |c_i c_j;\theta_\mathrm{dip}) = 0$ between $m^2/4$ pairs of detectors, as needed to be shown.
As an aside, we can now easily assign which output modes should correspond to which detector group (i.e. $A$ or $B$), using the simple calculation
\begin{equation}
\mathrm{grp}(c_ic_j)=\mathrm{sign}(\mathbf{c}_i \odot \mathbf{c}_j),
\end{equation}
where $\odot$ is element-wise multiplication. Hence $\mathrm{grp}(c_ic_j)$ is a column vector with $m$ rows of $\pm1$ associated with each of the $m$ output detectors, which we then label $+1\rightarrow A$ and $-1\rightarrow B$. This condition is similar to proposed suppression laws in Sylvester interferometers~\cite{crespi2015suppression,dittel2016many,viggianiello2018experimental}. However, we will show in the next few sections that this same $A$/$B$ detector grouping can be used for all $\theta$ in our particular network, such that it leads to a probability profile similar to that of a general beam splitter.
An example of $D_4$ at this critical point is given by
\begin{align}
D_4(\theta_\mathrm{dip}) = \frac{1}{2}
\begin{pmatrix}
1 & 1 & 1 & 1 \\
-1 & 1 & -1 & 1 \\
-1 & 1 & 1 & -1 \\
-1 & -1 & 1 & 1
\end{pmatrix}.
\end{align}
Let us suppose the two-photon input is in the second and third modes $c_2^\dagger c_3^\dagger|0\rangle$. We should then label the output detectors according to the calculation
\begin{align}
\mathrm{grp}(c_2c_3) =
\begin{pmatrix} 1 \\ 1 \\ 1 \\ -1 \end{pmatrix} \odot
\begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}\rightarrow\begin{pmatrix} A \\ B \\ A \\ B \end{pmatrix},
\end{align}
which means resulting modes should be grouped as $A(c_2c_3)=\{r_1,r_3\}$ and $B(c_2c_3)=\{r_2,r_4\}$. We can then show that the coincidence probabilities between one $A$ labelled detector and one $B$ labelled detector is
\begin{align}
\mathbb{P}_4(r_1 r_2 |c_2 c_3;\theta_\mathrm{dip}) &= \left|\mathrm{perm}\left[
\frac{1}{2}\begin{pmatrix}
1 & 1 \\
1 & -1
\end{pmatrix} \right]\right|^2 = 0, \nonumber \\
\mathbb{P}_4(r_1 r_4 |c_2 c_3;\theta_\mathrm{dip}) &= \left|\mathrm{perm}\left[
\frac{1}{2}\begin{pmatrix}
1 & 1 \\
-1 & 1
\end{pmatrix} \right]\right|^2 = 0, \nonumber \\
\mathbb{P}_4(r_2 r_3 |c_2 c_3;\theta_\mathrm{dip}) &= \left|\mathrm{perm}\left[
\frac{1}{2}\begin{pmatrix}
1 & -1 \\
1 & 1
\end{pmatrix} \right]\right|^2 = 0, \nonumber \\
\mathbb{P}_4(r_3 r_4 |c_2 c_3;\theta_\mathrm{dip}) &= \left|\mathrm{perm}\left[
\frac{1}{2}\begin{pmatrix}
1 & 1 \\
-1 & 1
\end{pmatrix} \right]\right|^2 = 0. \nonumber
\end{align}
Hence we have shown
\begin{equation}
\mathbb{P}_4(A B |c_2 c_3;\theta_\mathrm{dip}) = \sum_{r_a\in A} \sum_{r_b\in B} \mathbb{P}_4(r_a r_b |c_2 c_3;\theta_\mathrm{dip}) = 0, \nonumber
\end{equation}
which means total destructive interference occurs between the two groups of detectors at $\theta_\mathrm{dip}$. This procedure can be repeated for all possible two-photon inputs $c_i^\dagger c_j^\dagger |0\rangle$. In summary, we have shown that at the critical point $D_m(\theta_\mathrm{dip})$, we can split the output mode detectors into two sets of size $m/2$, such that there will never be a coincidence count between these two groups of detectors.
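As a brute-force numerical check of this procedure (a minimal sketch assuming NumPy is available; the sign pattern \texttt{S4} is read off from the $D_4(\theta_\mathrm{dip})$ example above, and the function names are ours rather than part of any established package), one can compute the grouping and the $2\times2$ permanents directly:
\begin{verbatim}
import numpy as np

# Sign pattern of the off-diagonal elements, read off from D_4(theta_dip) above
S4 = np.array([[ 0,  1,  1,  1],
               [-1,  0, -1,  1],
               [-1,  1,  0, -1],
               [-1, -1,  1,  0]])

def D(theta, S):
    m = S.shape[0]
    return np.cos(theta) * np.eye(m) + np.sin(theta) / np.sqrt(m - 1) * S

def perm2(M):
    # permanent of a 2x2 matrix: like the determinant, but with a plus sign
    return M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]

def coincidence_AB(theta, S, i, j):
    # total coincidence probability between detector groups A and B for two
    # photons entering commencing modes c_i and c_j (0-indexed)
    m = S.shape[0]
    Dm = D(theta, S)
    Ddip = D(np.arccos(1 / np.sqrt(m)), S)   # grouping is fixed at the dip point
    grp = np.sign(Ddip[:, i] * Ddip[:, j])   # +1 -> group A, -1 -> group B
    A = [a for a in range(m) if grp[a] > 0]
    B = [b for b in range(m) if grp[b] < 0]
    return sum(abs(perm2(Dm[np.ix_([a, b], [i, j])])) ** 2
               for a in A for b in B)

# input c_2^dag c_3^dag |0>: coincidences vanish at theta_dip = pi/3
print(coincidence_AB(np.pi / 3, S4, 1, 2))   # ~0
print(coincidence_AB(0.0,       S4, 1, 2))   # 1
\end{verbatim}
Evaluating the same routine over intermediate values of $\theta$ should reproduce the $m=4$ coincidence profile derived in Sec.~\ref{sec:MHD_inter}.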
\subsection{Two Mode 100\% Coincidence Rate}
The two-level beam splitter has another critical point at $\theta=0$, where the two photons will always emerge in separate output modes $c_1^\dagger c_2^\dagger |0\rangle \rightarrow r_1^\dagger r_2^\dagger|0\rangle$. This can be calculated formally as follows
\begin{align}
D_2(0) &= \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}, \nonumber \\
\mathbb{P}_2(r_1 r_2 |c_1 c_2;0) &= \left|\mathrm{perm}\left[ \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix} \right]\right|^2 = 1.
\end{align}
This corresponds physically to setting the transmission coefficient such that the beam splitter is effectively transparent, $D_2(0)=\mathbb{I}_2$, and thus does nothing to the input. Hence this critical point has a 100\% probability of measuring a coincidence count, as summarised in Fig.~\ref{fig:f2}\textbf{b}.
\subsection{Higher Mode 100\% Coincidence Rate}
We will show that, for all possible $D_m$, there exists a critical point where there is an analogous 100\% coincidence rate between the output detector groups $A$ and $B$. As with the $m=2$ case, setting the transmission coefficient of the device to $\theta=0$ results in the identity matrix
\begin{align}
D_m(0) &= \cos (0) \mathbb{I}_m + \sin (0) Y_m = \mathbb{I}_m.
\end{align}
At this critical point an input of two identical photons into two arbitrary modes $c^\dagger_i c^\dagger_j|0\rangle$ of $D_m(0)$ will always appear in the corresponding output modes $r^\dagger_i r^\dagger_j|0\rangle$. The probability of this outcome can be calculated as
\begin{align}
\mathbb{P}_m(r_i r_j |c_i c_j;0) &= \left|\mathrm{perm}\left[ \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix} \right]\right|^2 = 1,
\end{align}
where all other possibilities are zero.
What we need to prove is that given a two photon input into $c_i$ and $c_j$, the corresponding modes $r_i$ and $r_j$ will always be in different detector groups. This means if $r_i \in A$, then we want to show that $r_j \in B$, so that we can say $\mathbb{P}_m(AB|c_i c_j;0)=\mathbb{P}_m(r_i r_j |c_i c_j;0)=1$. Now, to determine the grouping we look again at the HOM dip critical point $\theta=\theta_\mathrm{dip}$ where
\begin{align}
D_m(\theta_\mathrm{dip}) &= \frac{1}{\sqrt{m}}\bordermatrix{ & & \mathbf{c}_i & & \mathbf{c}_j & \cr
& \cdot & \cdot & \cdot & \cdot & \cdot \cr
\mathbf{r}_i & \cdot & 1 & \cdot & s_{i,j} & \cdot \cr
& \cdot & \cdot & \cdot & \cdot & \cdot \cr
\mathbf{r}_j & \cdot & -s_{i,j} & \cdot & 1 & \cdot \cr
& \cdot & \cdot & \cdot & \cdot & \cdot }.
\end{align}
We know that $s_{j,i}=-s_{i,j}$ since A1 states that $Y_m$ is a skew-symmetric matrix. Now we note that
\begin{align}
\mathrm{sign}(\mathbf{c}_i \odot \mathbf{c}_j)_i &= s_{i,j}, \nonumber \\
\mathrm{sign}(\mathbf{c}_i \odot \mathbf{c}_j)_j &= -s_{i,j}, \nonumber
\end{align}
therefore, by the grouping method, we know that $r_i$ and $r_j$ must belong to different detector groups. Hence we have shown that for $D_m(0)$ we have a 100\% coincidence rate between the $A$ and $B$ detectors. Note that this result was obtained using the skew-symmetry of our matrices, a property that does not hold for the matrices describing Sylvester interferometers.
As a concrete example, consider again the $m=4$ case with $D_4(0) = \mathbb{I}_4$. For an input $c_2^\dagger c_3^\dagger |0\rangle$, we can calculate $\mathbb{P}_4(r_2 r_3 |c_2 c_3;0) = 1$, with all other probabilities being zero. We already previously determined that for an input of $c_2^\dagger c_3^\dagger |0\rangle$ we have $r_3\in A$ and $r_2\in B$. Therefore, we have shown that the total coincidence probability is $\mathbb{P}_4(AB|c_2 c_3;0)=\mathbb{P}_4(r_2 r_3 |c_2 c_3;0) =1$. This can be repeated for all possible inputs from a two-photon scattershot source.
\section{Intermediate Transmission Values} \label{sec:MHD_inter}
We will now look at the number statistics between the two critical points of our network's configurable parameter $\theta\in[0,\theta_\mathrm{dip}]$. Recall, we want our network to reproduce the statistics of any arbitrary beam splitter with a $\cos^2\theta$ transmission. In other words, we will show that $\theta$ is an adjustable parameter which monotonically controls the amount of interference for all possible pairs of input photons simultaneously. The amount of interference ranges between the two critical points covered in the previous section. This means the device is useful beyond just the extremal values, as it can effectively be substituted for an arbitrary beam splitter in certain contexts, while still exploiting the large-scale advantages of scattershot photonic sources.
\subsection{Two Mode Intermediate Transmission Values}
The coincidence probability can be calculated in the two mode case as follows
\begin{align}
\mathbb{P}_2(r_1 r_2 |c_1 c_2;\theta) &= \left|\mathrm{perm}\left[ \begin{pmatrix}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{pmatrix} \right]\right|^2, \nonumber \\
&= \cos^2 (2\theta).
\end{align}
Note that between the two critical points $\theta\in[0,\pi/4]$, it is evident that $\mathbb{P}_2(r_1 r_2 |c_1 c_2;\theta)$ is a decreasing function from $\mathbb{P}_2(r_1 r_2 |c_1 c_2;0)=1$ to $\mathbb{P}_2(r_1 r_2 |c_1 c_2;\pi/4)=0$. This same decreasing property for the overall coincidence probability will be shown for all $D_m$. This property allows us to implement a one-to-one mapping between the number statistics of a typical beam splitter and the MHD.
\begin{figure*}
\caption{\label{fig:f3}}
\end{figure*}
\subsection{Higher Mode Intermediate Transmission Values}
Suppose we have a two-photon input into $D_m$ from separate arbitrary modes $c^\dagger_i c^\dagger_j|0\rangle$. The output probabilities depend only on the columns $\mathbf{c}_i$ and $\mathbf{c}_j$ of $D_m(\theta)$
\begin{equation}
\begin{pmatrix}
\mathbf{c}_i(\theta) & \mathbf{c}_j(\theta)
\end{pmatrix} = \begin{pmatrix}
\frac{s_{1,i}'}{\sqrt{m-1}}\sin \theta & \frac{s_{1,j}'}{\sqrt{m-1}}\sin \theta \\
\vdots & \vdots \\
\cos \theta & \frac{s_{i,j}'}{\sqrt{m-1}}\sin \theta \\
\vdots & \vdots \\
\frac{s_{j,i}'}{\sqrt{m-1}}\sin \theta & \cos \theta \\
\vdots & \vdots \\
\frac{s_{m,i}'}{\sqrt{m-1}}\sin \theta & \frac{s_{m,j}'}{\sqrt{m-1}}\sin \theta
\end{pmatrix}. \label{eq:MHD_cicj}
\end{equation}
We will now examine the various forms of individual coincidence probabilities $ \mathbb{P}_m(r_a r_b |c_i c_j;\theta)$ where $r_a\in A$ and $r_b \in B$, which contribute to the total coincidence probability $\mathbb{P}_m(A B |c_i c_j;\theta)$ between detector groups $A$ and $B$. Note that the individual coincidence probabilities relate to the squared permanent of two rows in the previous matrix. There are $(m/2-1)^2$ coincidence probabilities with the particular form
\begin{align}
\mathbb{P}_m(r_a r_b |c_i c_j;\theta) &= \left( s_{a,i}'s_{b,j}'\frac{\sin^2 \theta}{m-1} + s_{b,i}'s_{a,j}'\frac{\sin^2 \theta}{m-1} \right)^2, \nonumber \\
&= 0,\hspace{5 mm} \forall a\neq i,b\neq j, \nonumber
\end{align}
where we note that since these two rows belong to different output detector groups, it must be the case that $ s_{a,i}'s_{b,j}'=- s_{b,i}'s_{a,j}'$. There are also $m/2-1$ coincidence probabilities of the form
\begin{align}
\mathbb{P}_m(r_i r_b |c_i c_j;\theta) &= \left(s_{b,j}'\frac{\cos \theta \sin \theta}{\sqrt{m-1}} + s_{i,j}'s_{b,i}'\frac{\sin^2 \theta}{m-1} \right)^2, \nonumber \\
&= \left(\frac{\cos \theta \sin \theta}{\sqrt{m-1}} - \frac{\sin^2 \theta}{m-1} \right)^2,\hspace{5 mm} \forall b\neq j. \nonumber
\end{align}
Note that $\mathbb{P}_m(r_a r_j |c_i c_j;\theta)$, $\forall a\neq i$, has the same form as the previous expression, where there are also $m/2-1$ of these coincidence probabilities. Finally, there is only $1$ coincidence probability of the form
\begin{align}
\mathbb{P}_m(r_i r_j |c_i c_j;\theta) &= \left(\cos^2 \theta +s_{i,j}'s_{j,i}'\frac{\sin^2\theta}{m-1}\right)^2, \nonumber \\
&= \left(\cos^2 \theta - \frac{\sin^2\theta}{m-1}\right)^2.\nonumber
\end{align}
In total, these three forms represent the $(m/2-1)^2+2(m/2-1)+1=m^2/4$ individual coincidence probabilities between the $m/2$ detectors labelled $A$ and the $m/2$ detectors labelled $B$. Hence the total coincidence probability between the two detector groups is given by the following expression
\begin{align}
\mathbb{P}_m(A B |c_i c_j;\theta) &= (m-2)\left(\frac{\cos \theta \sin \theta}{\sqrt{m-1}} - \frac{\sin^2 \theta}{m-1} \right)^2 \nonumber \\
&\hspace{5 mm}+ \left(\cos^2 \theta - \frac{\sin^2\theta}{m-1}\right)^2. \label{eq:MHD_pAB}
\end{align}
From this expression it is clear that setting the transmission ratio to $\theta=0$ results in a total coincidence rate of $\mathbb{P}_m(A B |c_i c_j;0) = 1$, while the restriction $\cos(\theta_\mathrm{dip})=\sin(\theta_\mathrm{dip})/\sqrt{m-1}$ results in $\mathbb{P}_m(A B |c_i c_j;\theta_\mathrm{dip}) = 0$. Both results are consistent with our previous analysis of the two critical points. We can now take the derivative of this total probability and show, for all values of $m$, that
\begin{align}
\frac{d}{d\theta}\mathbb{P}_m(A B |c_i c_j;\theta) < 0,\hspace{5 mm} \theta\in(0,\theta_\mathrm{dip}).
\end{align}
Hence we have shown that the total coincidence probability decreases, with increasing transmission coefficient, between these two critical points.
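As a concrete instance, for $m=4$ Eq.~\eqref{eq:MHD_pAB} reads
\begin{align}
\mathbb{P}_4(A B |c_i c_j;\theta) = 2\left(\frac{\cos \theta \sin \theta}{\sqrt{3}} - \frac{\sin^2 \theta}{3} \right)^2 + \left(\cos^2 \theta - \frac{\sin^2\theta}{3}\right)^2, \nonumber
\end{align}
which decreases from $1$ at $\theta=0$ to $0$ at $\theta_\mathrm{dip}=\pi/3$, in agreement with the $D_4$ example of the previous section.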
The coincidence probability profile for $m>2$ is evidently not a facsimile of the $m=2$ typical beam splitter profile, as made clear in Fig.~\ref{fig:f3}. However, we have the freedom to move the parameter $\theta$ of $D_m$ however we like; since we know that these profiles always start at $1$ and then decrease smoothly to $0$, we can easily set up a one-to-one mapping between any two probability profiles. Explicitly, we can set the following
\begin{align}
\mathbb{P}_2(A B |c_1 c_2;\phi) &= \mathbb{P}_m(A B |c_i c_j;\theta), \\
\cos^2 (2\phi) &= (m-2)\left(\frac{\cos \theta \sin \theta}{\sqrt{m-1}} - \frac{\sin^2 \theta}{m-1} \right)^2 \nonumber \\
&\hspace{5 mm}+ \left(\cos^2 \theta - \frac{\sin^2\theta}{m-1}\right)^2,
\end{align}
such that all we have to do is numerically solve for $\phi(\theta)$ or $\theta(\phi)$. These functions will tell us how to move our network's parameter with respect to a typical beam splitter's transmission coefficient, such that their associated probability profiles will match exactly. Finally, we note that for high $m$ values, we can approximate the probability profile as
\begin{align}
\lim_{m\rightarrow\infty} \mathbb{P}_m(A B |c_i c_j;\theta) &= \cos^2\theta\sin^2\theta + \cos^4 \theta, \nonumber \\
&= \cos^2 \theta.
\end{align}
This means that, to a good approximation for a large number of modes, we can reproduce the beam splitter profile simply by moving the MHD parameter according to the expression $\theta(\phi)=2\phi$.
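A minimal numerical sketch of this mapping (assuming NumPy and SciPy are available; the function names here are ours and not part of any established package) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def p_AB(theta, m):
    # total A-B coincidence probability of the m-mode device, Eq. (eq:MHD_pAB)
    a = (np.cos(theta) * np.sin(theta) / np.sqrt(m - 1)
         - np.sin(theta) ** 2 / (m - 1))
    b = np.cos(theta) ** 2 - np.sin(theta) ** 2 / (m - 1)
    return (m - 2) * a ** 2 + b ** 2

def phi_of_theta(theta, m):
    # beam-splitter angle with the same coincidence probability,
    # using cos^2(2 phi) = P_m(theta) with phi restricted to [0, pi/4]
    return 0.5 * np.arccos(np.sqrt(p_AB(theta, m)))

def theta_of_phi(phi, m):
    # inverse map for 0 < phi < pi/4, by root finding on the monotone profile
    theta_dip = np.arccos(1.0 / np.sqrt(m))
    target = np.cos(2 * phi) ** 2
    return brentq(lambda t: p_AB(t, m) - target, 0.0, theta_dip)
\end{verbatim}
For large $m$, \texttt{phi\_of\_theta} indeed approaches $\theta/2$, consistent with the approximation above.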
\subsection{Two Photon Bunching Probability}
We note that there is symmetry associated with the two column matrix in Eq.~\eqref{eq:MHD_cicj}, where the rows are equally partitioned into the $A$ or $B$ groups (recall that the two unique rows containing the $\cos\theta$ diagonal elements are always in separate groups). This means the probability associated with detecting two photons in the $A$ modes will be the same as detecting two photons in the $B$ modes,
\begin{align}
\mathbb{P}_m(A^2|c_i c_j;\theta) &= \mathbb{P}_m(B^2|c_i c_j;\theta).
\end{align}
It must be the case that all possible probabilities add up to one, hence
\begin{align}
\mathbb{P}_m(A^2|c_i c_j;\theta)+\mathbb{P}_m(AB|&c_i c_j;\theta)+\mathbb{P}_m(B^2|c_i c_j;\theta) = 1, \nonumber \\
\mathbb{P}_m(A^2|c_i c_j;\theta) &= \frac{1-\mathbb{P}_m(AB|c_i c_j;\theta)}{2}. \label{eq:MHD_pA2}
\end{align}
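For example, at the dip point we have $\mathbb{P}_m(AB|c_ic_j;\theta_\mathrm{dip})=0$, so Eq.~\eqref{eq:MHD_pA2} gives $\mathbb{P}_m(A^2|c_ic_j;\theta_\mathrm{dip}) = \mathbb{P}_m(B^2|c_ic_j;\theta_\mathrm{dip}) = 1/2$: the two photons always bunch into the same detector group, as expected of a HOM dip.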
Equation~\eqref{eq:MHD_pA2} is independently verified in the Appendix by considering, once again, all the relevant permanents whose squared values contribute to $\mathbb{P}_m(A^2|c_i c_j;\theta)$.
\section{Resource Analysis} \label{sec:MHD_comp}
\begin{figure*}
\caption{\label{fig:f4}}
\end{figure*}
Beyond its theoretical interest, this device provides in-principle experimental advantages over more straightforward systems, such as a simple array of beam splitters. We will first analyse these advantages from the perspective of what is optimal for the two-photon scattershot source, which consists of an array of non-linear squeezing crystals and photon detectors. Each crystal, with every pump pulse, has a chance of generating a pair of photons through spontaneous parametric down conversion (SPDC). One of the photons is funnelled to a photon detector, which heralds the existence of the other photon, as shown in Fig.~\ref{fig:f4}\textbf{a}. This heralded photon can then be run through an optical circuit $U$, before also being measured by detectors. This useful setup allows us to record how many photons enter $U$, and in which channels. Note that it is possible for multiple crystals to give off photons at the same time, thus allowing $U$ to perform multi-photon dynamics between initially dispersed photons. We only accept situations where two photons are heralded at the same time in two separate modes, as shown in Fig.~\ref{fig:f4}\textbf{b}.
We define the probability of success $\mathrm{P}_n(U)$ as the probability that the scattershot source generates the correct number and configuration of photons for input into a given system $U$. So that the comparisons are fair, we will only consider systems which can accept input from a fixed number $n$ of these non-linear crystals. All other things being equal, $\mathrm{P}_n(U')<\mathrm{P}_n(U)$ essentially means that model $U$ has a faster sampling rate than model $U'$. Note that an increase in $n$ would also correspond to an increased sampling rate, though the scaling of this increase will differ depending on the particular model $U$ chosen. These points will be quantified below.
Each non-linear crystal through SPDC produces the following pairs of photons in a squeezed state
\begin{align}
|\psi\rangle_q = \sqrt{1-\chi^2}\sum_{p=0}^\infty \chi^p|pp\rangle_q,
\end{align}
where $0\leq\chi\leq1$ is the parameter that determines the strength of the squeezing~\cite{lund2014boson}. Hence the total state for an array of $n$ non-linear squeezers is the following
\begin{align}
|\Psi\rangle=\bigotimes^n_{q=1}|\psi\rangle_q=\left(1-\chi^2\right)^{\frac{n}{2}}\prod_{q=1}^n\sum_{p=0}^\infty \chi^p|pp\rangle_q.
\end{align}
Now, the probability of heralding two photons and nothing elsewhere, say in the first two modes $|\Phi\rangle=|11\rangle_1\bigotimes|11\rangle_2\bigotimes^n_{p=3}|00\rangle_p$, is given by
\begin{align}
\left|\langle\Phi|\Psi\rangle \right|^2 = (1-\chi^2)^n\chi^4. \label{eq:2_p2ph}
\end{align}
However, in our $D_m$, we are allowed to herald two photons anywhere in the $m=n$ possible modes, in which there are $\binom{m}{2} = m(m-1)/2$ possible allowed inputs. Hence the net probability of success is given by
\begin{align}
\mathrm{P}_n(D_n)=(1-\chi^2)^n\chi^4\frac{n(n-1)}{2}.
\end{align}
On the other hand, if our system is a simple linear array of $n/2$ beam splitters $L_n$, the two photons must be heralded in the correct ports, as shown in Fig.~\ref{fig:f4}\textbf{c}. There are only $n/2$ acceptable inputs, hence the total probability of success is
\begin{align}
\mathrm{P}_n(L_n)=(1-\chi^2)^n\chi^4\frac{n}{2}.
\end{align}
It is clear that for all possible system sizes $D_n$ provides a better probability of success than $L_n$, and this ratio increases with increasing $n$. Hence the designed MHD $D_n$ provides a much faster sampling rate than $L_n$.
We also consider modifying the source itself, as shown in Fig.~\ref{fig:f4}\textbf{d}. In this situation, we do not funnel one half of the photons into heralding detectors; instead, the two output ports of the non-linear crystal are connected directly to the two input ports of a beam splitter. In that case, we only need a single crystal to generate a $|11\rangle$ pair, which simply changes the non-linear factor in Eq.~\eqref{eq:2_p2ph} from $\chi^4$ to $\chi^2$. Since there will be a total of $n$ squeezing crystals and $n$ beam splitters in the system $L_{2n}'$, the net probability of success is
\begin{align}
\mathrm{P}_n(L_{2n}') = (1-\chi^2)^n\chi^2n.
\end{align}
It is apparent that, for smaller $n$ values and system sizes, this $L_{2n}'$ configuration has a higher sampling rate than $D_n$. However, above $n = 2/\chi^2 + 1$ squeezers, $D_n$ once again provides a better sampling rate than $L_{2n}'$, and the ratio of this advantage grows as $n$ increases. Note that this modified source has the significant limitation that the input is not heralded; we do not know that a squeezer crystal has emitted a photon pair until it is detected after passing through the $L_{2n}'$ system. In contrast, the scattershot source allows us to know about the input beforehand. Therefore, we have knowledge of the output state and can in principle hold it in quantum memory as a known resource state, before choosing to do either further quantum processing or detection.
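Explicitly, the crossover quoted above follows from comparing the two success probabilities: $\mathrm{P}_n(D_n) > \mathrm{P}_n(L_{2n}')$ if and only if $\chi^4 n(n-1)/2 > \chi^2 n$, i.e. $n > 2/\chi^2 + 1$.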
We note that $D_n$ has another advantage over both $L'_{2n}$ and $L_n$ in that it has the ability to count the number of photons without photon-number-resolving detectors. If we are restricted to on/off bucket photon detectors, we will still be able to measure, say, two photons landing in $A$, because they can land separately in any detector in group $A$. It is clear that $L'_{2n}$ and $L_n$ cannot do this because they are composed of separate beam splitter systems stacked on top of each other. Furthermore, due to the mode mixing in the $D_n$ system, it can distinguish the instances where the source generates an incorrect input of more than two photons. These counting and error-detection attributes mean that the MHD can provide more accurate results compared to other systems.
The above computational gains come at a cost in terms of the physical resources and number of optical components needed to make this device. All $D_n$ can be decomposed into at most $n(n-1)/2$ two-level unitary matrices or beam splitters~\cite{reck1994experimental}. As an example, we show the resulting circuit for $D_4$ in Fig.~\ref{fig:f4}\textbf{e}, determined using the particular decomposition method given in~\cite{nielsen2010quantum}. Note that we factored out extra $\pi$ phase shifts so that the two-level beam splitters given in this diagram are all of the form
\begin{align}
\begin{pmatrix}
\sqrt{1-\eta_i} & \sqrt{\eta_i} \\
\sqrt{\eta_i} & -\sqrt{1-\eta_i}
\end{pmatrix},
\end{align}
where the transmission ratios are given by $\eta_1=\sin^2\theta/[2+\cos(2\theta)]$, $\eta_2=2\sin^2\theta/[5+\cos(2\theta)]$, and $\eta_3=\sin^2\theta/3$.
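For instance, at the $m=4$ dip point $\theta_\mathrm{dip}=\pi/3$ these evaluate to $\eta_1 = 1/2$, $\eta_2 = 1/3$ and $\eta_3 = 1/4$.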
\section{Conclusion} \label{sec:MHD_con}
We have shown that there exists a multimode HOM device $D_m(\theta)$ which can replicate the two-photon statistics of a beam splitter, irrespective of where these two photons enter amongst the system's $m$ modes. The fact that this circuit displays exactly the same quantum mechanical number statistics, independent of the input modes, makes it an interesting case study. We first claim that this circuit can be generated from a skew-symmetric orthogonal matrix whose off-diagonal elements all have the same magnitude. We then show from these properties that, at particular critical transmission ratios $\theta$, it can be tuned to replicate either the HOM dip or a 100\% coincidence rate. Furthermore, we show that the coincidence probability decreases continuously with increasing $\theta$ between these two critical points, thus allowing the possibility of mapping it to the statistics profile of a typical beam splitter. Finally, we show that this device provides experimental advantages in terms of a higher sampling rate with better accuracy compared to other more straightforward systems.
\begin{acknowledgments}
This research was supported by the Australian Research Council Centre
of Excellence for Quantum Computation and Communication
Technology (Project No. CE110001027).
\end{acknowledgments}
\appendix*
\section{Direct Calculation of the Total Two Photon Bunching Probability} \label{sec:MHD_app2bunching}
We can directly calculate the total probability of the two photons landing in the $A$ labelled detectors, by considering the squared permanent of relevant rows in Eq.~\eqref{eq:MHD_cicj}. There are $\binom{m/2-1}{2} = \left(m/2-1\right)\left(m/2-2\right)/2$ possible outcomes in which two photons land in different detectors $r_a\neq r_{a'}$, but the same group $r_a,r_{a'}\in A$, with a probability of
\begin{align}
\mathbb{P}_m(r_a r_{a'} |c_i c_j;\theta) &= \left( s_{a,i}'s_{a',j}'\frac{\sin^2 \theta}{m-1} + s_{a',i}'s_{a,j}'\frac{\sin^2 \theta}{m-1} \right)^2, \nonumber \\
&= \frac{4\sin^4\theta}{(m-1)^2},\hspace{5 mm} \forall a\neq i,a'\neq i. \nonumber
\end{align}
It is the case that $s_{a,i}'s_{a',j}'=s_{a',i}'s_{a,j}'$ since the two output detectors belong to the same grouping. There also exists $m/2-1$ coincidence probabilities of the form
\begin{align}
\mathbb{P}_m(r_a r_i |c_i c_j;\theta) &= \left(s_{a,j}'\frac{\cos \theta \sin \theta}{\sqrt{m-1}} + s_{i,j}'s_{a,i}'\frac{\sin^2 \theta}{m-1} \right)^2, \nonumber \\
&= \left(\frac{\cos \theta \sin \theta}{\sqrt{m-1}} + \frac{\sin^2 \theta}{m-1} \right)^2,\hspace{5 mm} \forall a\neq i. \nonumber
\end{align}
We also need to consider the cases where the two photons land in the same detector $\mathbb{P}_m(r_a^2|c_i c_j;\theta)$. We note that in these cases, we need to divide the permanent squared by 2, due to considerations of bunching~\cite{gard2015introduction,lund2017quantum}. There are $m/2-1$ of the form
\begin{align}
\mathbb{P}_m(r_a^2 |c_i c_j;\theta) &= \frac{1}{2}\left( s_{a,i}'s_{a,j}'\frac{\sin^2 \theta}{m-1} + s_{a,i}'s_{a,j}'\frac{\sin^2 \theta}{m-1} \right)^2, \nonumber \\
&= \frac{2\sin^4\theta}{(m-1)^2},\hspace{5 mm} \forall a\neq i. \nonumber
\end{align}
Finally, there is 1 individual probability of the form
\begin{align}
\mathbb{P}_m(r_i^2 |c_i c_j;\theta) &= \frac{1}{2}\left( s_{i,j}'\frac{\cos\theta \sin\theta}{\sqrt{m-1}} + s_{i,j}'\frac{\cos\theta \sin\theta}{\sqrt{m-1}} \right)^2, \nonumber \\
&= \frac{2\cos^2\theta\sin^2\theta}{m-1}.\nonumber
\end{align}
Now, we can add up all these individual probabilities to get the total probability of two photons landing in group $A$ detectors
\begin{align}
\mathbb{P}_m(A^2 |c_i c_j;\theta) &= \frac{1}{2}\left(\frac{m}{2}-1\right)\left(\frac{m}{2}-2\right)\frac{4\sin^4\theta}{(m-1)^2} \nonumber \\
&\hspace{5 mm}+ \left(\frac{m}{2}-1\right)\left(\frac{\cos \theta \sin \theta}{\sqrt{m-1}} + \frac{\sin^2 \theta}{m-1} \right)^2 \nonumber \\
&\hspace{5 mm}+ \left(\frac{m}{2}-1\right)\frac{2\sin^4\theta}{(m-1)^2}+\frac{2\cos^2\theta\sin^2\theta}{m-1}, \nonumber \\
&= \frac{m-2}{2(m-1)}\sin^4 \theta + \frac{m-2}{(m-1)^{3/2}}\cos\theta\sin^3\theta \nonumber \\
&\hspace{5 mm}+ \frac{m+2}{2(m-1)}\cos^2\theta\sin^2\theta. \nonumber
\end{align}
This probability expression is consistent with Eq.~\eqref{eq:MHD_pA2} and the total coincidence probability.
\end{document} |
\begin{document}
\thispagestyle{empty}
\noindent{\Large \bf
{Geometrical Structures of the Endomorphism Semiring\\ of a Finite Chain: Simplices, Strings and Triangles}}
\begin{quote}
{\bf \large Ivan Trendafilov}
\small
\emph{Department of Algebra and Geometry, Faculty of Applied Mathematics and\\ Informatics, Technical University of Sofia, Sofia, Bulgaria,\\ \emph{e-mail:}
ivan$\_$d$\_$trendafilov@abv.bg}
We establish new results concerning endomorphisms of a finite chain whose image has cardinality at most some fixed number $k$. The semiring of all such endomorphisms can be seen as a $k$--simplex whose vertices are the constant endomorphisms. We explore the properties of these $k$--simplices and find some results for arbitrary $k$. For $k=1$ and $2$ we give a full description of the simplices called strings and triangles, respectively.
\end{quote}
\noindent{\bf \large 1. \hspace{0.5mm} Introduction and Preliminaries}
The endomorphism semiring of a finite semilattice is well studied in [1] -- [7].
In the present paper we give a new treatment of the subsemirings of the endomorphism semiring $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ of a finite chain. We investigate endomorphisms $\alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}$ such that $|\mathop{\rm Im}\nolimits(\alpha)| \leq k$, where $k$ is a positive integer and $k \leq n$. The set of these endomorphisms is a $k$--simplex whose vertices are the constant endomorphisms $\overline{a} = \wr\, \underbrace{a, \ldots, a}_n\,\wr$ and whose proper faces are all the $m$--simplices, where $m < k$, $m \neq 0$.
The paper is organized as follows. After the introduction and preliminaries, in the second section we study $k$--simplices for any natural $k$. Although we do not speak about any distance here, we define discrete neighborhoods with respect to any vertex of the simplex. In the main result of the section, Theorem 5, we prove that the biggest discrete neighborhoods of the least and the biggest vertex of the simplex are semirings of a special type. In the third section we start the inductive study of the simplices and consider 1-simplices called strings. Section 4 is devoted to the basic properties of the triangle $\triangle^{(n)}\{a,b,c\}$. Here we prove that any element of the interior of a triangle is a sum of elements of two strings of this triangle.
In the next section we consider the sets of endomorphisms of $\triangle^{(n)}\{a,b,c\}$ such that one of the elements $a$, $b$ and $c$ occurs exactly $m$ times, $0 \leq m < n$, in the image of the endomorphism. These sets are called layers of the triangle with respect to the corresponding vertex. When the two boundary elements of a layer are idempotents, we call this layer a basic layer and prove that all basic layers with respect to $a$ and $c$ are semirings isomorphic to well-defined strings.
The main results of the paper are placed in the last section. Here we construct a new subsemiring of $\triangle^{(n)}\{a,b,c\}$, the so-called idempotent triangle, containing all the right identities of the triangle $\triangle^{(n)}\{a,b,c\}$. In Theorem 33 we prove that the idempotent triangle is a disjoint union of two subsemirings: the first is the semiring of the right identities of $\triangle^{(n)}\{a,b,c\}$, and the second is the maximal ideal of the idempotent triangle. In this section we also describe the subsemirings of $a$--nilpotent, $b$--nilpotent and $c$--nilpotent endomorphisms as geometric figures: trapezoids and a parallelogram. Two further subsemirings which are geometric parallelograms are also investigated. In Theorem 40 we show that any triangle $\triangle^{(n)}\{a,b,c\}$ is a disjoint union of eight subsemirings.
Since the terminology for
semirings is not completely standardized, we say what our conventions are.
An algebra $R = (R,+,.)$ with two binary operations $+$ and $\cdot$ on $R$, is called a {\emph{semiring}} if:
$\bullet\; (R,+)$ is a commutative semigroup,
$\bullet\; (R,\cdot)$ is a semigroup,
$\bullet\;$ both distributive laws hold $ x\cdot(y + z) = x\cdot y + x\cdot z$ and $(x + y)\cdot z = x\cdot z + y\cdot z$ for any $x, y, z \in R$.
Let $R = (R,+,.)$ be a semiring.
If a neutral element $0$ of the semigroup $(R,+)$ exists and $0\cdot x = 0$, respectively $x\cdot 0 = 0$, for all $x \in R$, then $0$ is called a {\emph{left}}, respectively a {\emph{right}}, {\emph{zero}}.
If $0\cdot x = x\cdot 0 = 0$ for all $x \in R$, then it is called {\emph{zero}}. An element $e$ of a semigroup $(R,\cdot)$ is called a {\emph{left (right) identity}} provided
that $ex = x$, or $xe = x$, respectively, for all $x \in R$.
If a neutral element $1$ of the semigroup $(R,\cdot)$ exists, it is called {\emph{identity}}.
A nonempty subset $I$ of $R$ is called an \emph{ideal} if $I + I \subseteq I$, $R\,I \subseteq I$ and $I\, R \subseteq I$.
The facts concerning semirings can be found in [8]. For semilattices we refer to [9].
For a join-semilattice $\left(\mathcal{M},\vee\right)$ the set $\mathcal{E}_\mathcal{M}$ of the endomorphisms of $\mathcal{M}$ is a semiring
with respect to the addition and multiplication defined by:
$\bullet \; h = f + g \; \mbox{when} \; h(x) = f(x)\vee g(x) \; \mbox{for all} \; x \in \mathcal{M}$,
$\bullet \; h = f\cdot g \; \mbox{when} \; h(x) = f\left(g(x)\right) \; \mbox{for all} \; x \in \mathcal{M}$.
This semiring is called the \emph{ endomorphism semiring} of $\mathcal{M}$.
In this article all semilattices are finite chains. Following [5] and [6], we fix a finite chain $\mathcal{C}_n = \; \left(\{0, 1, \ldots, n - 1\}\,,\,\vee\right)\;$
and denote the endomorphism semiring of this chain with $\widehat{\mathcal{E}}_{\mathcal{C}_n}$. We do not assume that $\alpha(0) = 0$ for arbitrary
$\alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}$. So, there is not a zero in endomorphism semiring $\widehat{\mathcal{E}}_{\mathcal{C}_n}$. Subsemirings
${\mathcal{E}}_{\mathcal{C}_n}^a$, where $a \in \mathcal{C}_n$, of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ consisting of all endomorphisms $\alpha$ with
property $\alpha(a) = a$ are considered in [4].
If $\alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}$ is such that $\alpha(k) = i_k$ for any $k \in \mathcal{C}_n$, we denote $\alpha$ by the ordered $n$--tuple
$\wr\,i_0,i_1,i_2, \ldots, i_{n-1}\,\wr$. Note that mappings will be composed accordingly, although we shall usually give preference to writing mappings on
the right, so that $\alpha \cdot \beta$ means ``first $\alpha$, then $\beta$''. The identity $\mathbf{i} = \wr\,0,1, \ldots, n-1\,\wr$ and all constant
endomorphisms $\kappa_i = \wr\, i, \ldots, i\,\wr$ are obviously (multiplicatively) idempotents.
Let $a \in \mathcal{C}_n$ and let $\overline{a} = \wr\,a, a, \ldots, a\,\wr$ be the corresponding constant endomorphism. The elements of
$$\mathcal{N}_n^{\,[a]} = \{\alpha \; | \; \alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}, \; \alpha^{n_a} = \overline{a} \; \mbox{for some natural number} \; n_a\}$$
are called $a$--\emph{nilpotent endomorphisms}. An important result for $a$--nilpotent endomorphisms is
\textbf{Theorem} (Theorem 3.3 from [3]). \textsl{For any natural $n$, $n \geq 2$, and $a \in \mathcal{C}_n$ the set of $a$--nilpotent endomorphisms $\mathcal{N}_n^{\,[a]}$ is a subsemiring of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$. The order of this semiring is $\left|\mathcal{N}_n^{\,[a]}\right| = C_a \cdot C_{n-a-1}$, where $C_a$ is the $a$-th Catalan number.}
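For example, for $n = 4$ and $a = 1$ this gives $\left|\mathcal{N}_4^{\,[1]}\right| = C_1 \cdot C_2 = 2$; indeed, the only $1$--nilpotent endomorphisms of $\mathcal{C}_4$ are $\overline{1} = \wr\,1,1,1,1\,\wr$ and $\wr\,1,1,1,2\,\wr$.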
Another useful result is
\textbf{Theorem} (Theorem 9 from [2]). \textsl{The subset of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$, $n \geq 3$, of all idempotent endomorphisms with $s$ fixed points
$k_1, \ldots, k_s$, ${1 \leq s \leq n-1}$, is a semiring of order $\displaystyle \prod_{m=1}^{s-1} (k_{m+1} - k_{m})$.}
For definitions and results concerning simplices we refer to [10] and [11].
\noindent{\bf \large 2. \hspace{0.5mm} Simplices}
Let us fix elements $a_0, a_1, \ldots, a_{k-1} \in \mathcal{C}_n$, where $k \leq n$, $a_0 < a_1 < \ldots < a_{k-1}$, and let $A = \{a_0, a_1, \ldots, a_{k-1}\}$.
We consider endomorphisms $\alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}$ such that $\mathop{\rm Im}\nolimits(\alpha) \subseteq A$. Let us call the set of all such endomorphisms a ${k}$ -- \emph{simplex} (or simply a \emph{simplex}). We denote the $k$ -- simplex by
$\sigma^{(n)}_k(A) = \sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$.
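For example, for $n = 3$ and $A = \{0,2\}$ we have $\sigma^{(3)}\{0, 2\} = \left\{\wr\,0,0,0\,\wr,\; \wr\,0,0,2\,\wr,\; \wr\,0,2,2\,\wr,\; \wr\,2,2,2\,\wr\right\}$.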
It is easy to see that $\mathop{\rm Im}\nolimits(\alpha) \subseteq A$ and $\mathop{\rm Im}\nolimits(\beta) \subseteq A$ imply $\mathop{\rm Im}\nolimits(\alpha+ \beta) \subseteq A$ and $\mathop{\rm Im}\nolimits(\alpha\cdot \beta) \subseteq A$. Hence, we find
\textbf{Proposition 1.} \textsl{For any $A = \{a_0, a_1, \ldots, a_{k-1}\} \subseteq \mathcal{C}_n$ the $k$ -- simplex $\sigma^{(n)}_k(A)$ is a subsemiring of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$.}
The number $k$ is called a {\emph{dimension}} of $k$ -- simplex $\sigma^{(n)}_k(A)$. Endomorphisms $\overline{a_j}$, where $j = 0, \ldots, k - 1$, such that $\overline{a_j}(i) = a_j$ for any $i \in \mathcal{C}_n$ are called {\emph{vertices}} of $k$ -- simplex $\sigma^{(n)}_k(A)$.
Any simplex $\sigma^{(n)}\{b_0, b_1, \ldots, b_{\ell - 1}\}$, where $b_0, \ldots, b_{\ell-1} \in A$, is called a {\emph{face}} of $k$ -- simplex $\sigma^{(n)}_k(A)$. If $\ell < k$, face $\sigma^{(n)}\{b_0, b_1, \ldots, b_{\ell - 1}\}$ is called a {\emph{proper face}}.
The proper faces of $k$ -- simplex $\sigma^{(n)}_k(A) = \sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ are:
$\bullet$ $0$ -- simplices, which are the vertices $\overline{a_0}, \ldots, \overline{a_{k-1}}$.
$\bullet$ $1$ -- simplices, which are called {\emph{strings}}. They are denoted by $\mathcal{STR}^{(n)}\{a,b\}$, where $a, b \in A$.
$\bullet$ $2$ -- simplices, which are called {\emph{triangles}}. They are denoted by ${\triangle}^{(n)}\{a,b,c\}$, where $a, b, c \in A$.
$\bullet$ $3$ -- simplices, which are called {\emph{tetrahedra}}. They are denoted by $\mathcal{TETR}^{(n)}\{a,b,c,d\}$, where $a, b, c, d \in A$.
$\bullet$ The last proper faces are the $(k-1)$ -- simplices $\sigma^{(n)}_{k-1}(B)$, where $B = \{b_0, \ldots, b_{k-2}\} \subset A$.
The {\emph{boundary}} of $k$ -- simplex $\sigma^{(n)}_k(A)$ is a union of all its proper faces and is denoted by $\mathcal{BD}\left(\sigma^{(n)}_k(A)\right)$. The set $\mathcal{INT}\left(\sigma^{(n)}_k(A)\right) = \sigma^{(n)}_k(A) \backslash \mathcal{BD}\left(\sigma^{(n)}_k(A)\right)$ is called an {\emph{interior}} of $k$ -- simplex $\sigma^{(n)}_k(A)$. The boundary and the interior of $k$ -- simplex are, in general, not semirings.
For any natural $n$, the endomorphism semiring $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ is an $n$ -- simplex with vertices $\overline{0}, \ldots, \overline{n-1}$. The interior of this simplex consists of endomorphisms $\alpha$ such that $\mathop{\rm Im}\nolimits(\alpha) = \mathcal{C}_n$. Since the latter is valid only for the identity $\mathbf{i} = \wr\,0,1, \ldots, n -1\, \wr$, it follows that ${\mathcal{INT}\left( \widehat{\mathcal{E}}_{\mathcal{C}_n}\right) = \{\mathbf{i}\}}$.
There is a partial ordering of the faces of dimension $k - 1$ of the $k$ -- simplex in the following way: the least face does not contain the vertex $\overline{a_{k-1}}$ and the biggest face does not contain the vertex $\overline{a_0}$.
The biggest face of the $n$ -- simplex $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ is the $(n-1)$ -- simplex $\sigma^{(n)}_{n-1} \{1, \ldots, n-1\}$.
Now $\widehat{\mathcal{E}}_{\mathcal{C}_n}\backslash \sigma^{(n)}_{n-1} \{1, \ldots, n-1\} = {\mathcal{E}}_{\mathcal{C}_n}^{(0)}$ which is a subsemiring of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$. Similarly, the least face of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ is $\sigma^{(n)}_{n-1} \{0, \ldots, n-2\}$.
Then $\widehat{\mathcal{E}}_{\mathcal{C}_n}\backslash \sigma^{(n)}_{n-1} \{0, \ldots, n-2\} = {\mathcal{E}}_{\mathcal{C}_n}^{(n-1)}$ which is also a subsemiring of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$. The other faces of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$, where $n \geq 3$, do not have this property. Indeed,
one of the intermediate faces is $\sigma^{(n)}_{n-1} \{0, \ldots, k-1, k+1, \ldots, n-1\}$. But the set $R = \widehat{\mathcal{E}}_{\mathcal{C}_n}\backslash \sigma^{(n)}_{n-1} \{0, \ldots, k-1, k+1, \ldots, n-1\}$ is not a semiring because for any $n \geq 3$ and any $k \in \{1, \ldots, n-2\}$, if $\alpha = \wr\, 0, \ldots, 0, k\,\wr \in R$, then $\alpha^2 = \overline{0} \notin R$.
Let us fix vertex $\overline{a_m}$, where $m = 0, \ldots, k-1$, of simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$.
The set of all endomorphisms $\alpha \in \sigma^{(n)}_k(A)$ such that $\alpha(i) = a_m$ just for $s$ elements $i \in \mathcal{C}_n$ is called {\emph{$s$-th layer of $k$ -- simplex with respect to $\overline{a_m}$}}, where $s = 0, \ldots, n-1$. We denote the $s$-th layer of the simplex with respect to $\overline{a_m}$ by $\mathcal{L}^{s}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$.
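For example, in the simplex $\sigma^{(3)}\{0,2\}$ the layers with respect to the vertex $\overline{0}$ are $\mathcal{L}^{0}_{0}\left(\sigma^{(3)}\{0,2\}\right) = \{\wr\,2,2,2\,\wr\}$, $\mathcal{L}^{1}_{0}\left(\sigma^{(3)}\{0,2\}\right) = \{\wr\,0,2,2\,\wr\}$ and $\mathcal{L}^{2}_{0}\left(\sigma^{(3)}\{0,2\}\right) = \{\wr\,0,0,2\,\wr\}$.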
So, the $0$ -- layer with respect to any vertex of the $k$ -- simplex is a face of the $k$ -- simplex, hence, it is a semiring. In the general case, the $s$-th layer $\mathcal{L}^{s}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $s \in \mathcal{C}_n$, $s = 1, \ldots, n-2$, is not a subsemiring of $k$ -- simplex.
From a topological point of view, the set $\{\overline{a_m}\} \cup \mathcal{L}^{n-1}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ is a discrete neighborhood consisting of the points ``nearest'' to $\overline{a_m}$. We denote this set by $\mathcal{DN}^{\,1}_m$. Similarly, we define $\mathcal{DN}^{\,2}_m = \mathcal{DN}^{\,1}_m\cup \mathcal{L}^{n-2}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ and, in general,
$\displaystyle \mathcal{DN}^{\,t}_m = \{\overline{a_m}\}\cup \bigcup_{\ell = n-t}^{n-1} \mathcal{L}^{\ell}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $m = 0, \ldots, k-1$, $t = 1, \ldots, n$.
\textbf{Proposition 2.} \textsl{Let $\overline{a_m}$, where $m = 0, \ldots, k-1$, be a vertex of the simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ and $\mathcal{L}^{n-1}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ be the $(n-1)$-th layer of the $k$ -- simplex with respect to $\overline{a_m}$. Then the set $\mathcal{DN}^{\,1}_m = \{\overline{a_m}\} \cup \mathcal{L}^{n-1}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $m = 0, \ldots, k-1$, is a subsemiring of $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$.}
\emph{Proof.} We consider three cases.
\emph{Case 1.} Let $m = 0$. Then elements of $\mathcal{DN}^{\,1}_0$ are endomorphisms:
$$\overline{a_0}, \; (a_0)_{n-1}a_1 = \wr\, \underbrace{a_0, \ldots, a_0}_{n-1},a_1\, \wr,\; \ldots \,, \; (a_0)_{n-1}a_{k-1} = \wr\, \underbrace{a_0, \ldots, a_0}_{n-1},a_{k-1} \wr. $$
Since $\overline{a_0} < (a_0)_{n-1}a_1 < \cdots < (a_0)_{n-1}a_{k-1}$, it follows that set $\mathcal{DN}^{\,1}_0$ is closed under the addition.
We find $(a_0)_{n-1}a_i \cdot \overline{a_0} = \overline{a_0} \cdot (a_0)_{n-1}a_i = \overline{a_0}$ for all $i = 1, \ldots, k-1$. Also we have
$(a_0)_{n-1}a_i \cdot (a_0)_{n-1}a_j = (a_0)_{n-1}a_j \cdot (a_0)_{n-1}a_i = \overline{a_0}$ for all $i,j \in \{1, \ldots, k -1\}$ with the only exception when $a_{k-1} = n-1$. Now $\left((a_0)_{n-1}(n-1)\right)^2 = (a_0)_{n-1}(n-1)$, $(a_0)_{n-1}(n-1)\cdot (a_0)_{n-1}a_i = (a_0)_{n-1}a_i$ and $(a_0)_{n-1}a_i \cdot (a_0)_{n-1}(n-1) = \overline{a_0}$. Hence, $\mathcal{DN}^{\,1}_0$ is a semiring.
\emph{Case 2.} Let $m = k-1$. Then elements of $\mathcal{DN}^{\,1}_{k-1}$ are endomorphisms:
$$ a_0(a_{k-1})_{n-1} = \wr\,a_0, \underbrace{a_{k-1}, \ldots, a_{k-1}}_{n-1}\,\wr,\; \ldots \,, \; a_{k-2}(a_{k-1})_{n-1} = \wr\,a_{k-2}, \underbrace{a_{k-1}, \ldots, a_{k-1}}_{n-1}\,\wr,\; \overline{a_{k-1}}. $$
Since $a_0(a_{k-1})_{n-1} < \cdots < a_{k-2}(a_{k-1})_{n-1} < \overline{a_{k-1}}$, it follows that the set $\mathcal{DN}^{\,1}_{k-1}$ is closed under the addition.
We find $a_i(a_{k-1})_{n-1} \cdot \overline{a_{k-1}} = \overline{a_{k-1}}\cdot a_i(a_{k-1})_{n-1} = \overline{a_{k-1}}$ for all $i = 0, \ldots, k-2$.
Also we have $a_i(a_{k-1})_{n-1} \cdot a_j(a_{k-1})_{n-1} = a_j(a_{k-1})_{n-1} \cdot a_i(a_{k-1})_{n-1} = \overline{a_{k-1}}$ for all ${i,j \in \{0, \ldots, k -2\}}$, again with the only exception when $a_0 = 0$. We have $\left(0(a_{k-1})_{n-1}\right)^2 = 0(a_{k-1})_{n-1}$, $0(a_{k-1})_{n-1}\cdot a_i(a_{k-1})_{n-1} = a_i(a_{k-1})_{n-1}$ and $a_i(a_{k-1})_{n-1} \cdot 0(a_{k-1})_{n-1} = \overline{a_{k-1}}$.
Hence, $\mathcal{DN}^{\,1}_{k-1}$ is a semiring.
\emph{Case 3.} Let $0 < m < k-1$. Then elements of $\mathcal{DN}^{\,1}_m$ are endomorphisms:
$$ a_0(a_m)_{n-1} = \wr\,a_0, \underbrace{a_m, \ldots, a_m}_{n-1}\,\wr,\; \ldots \,, \; a_{m-1}(a_m)_{n-1} = \wr\,a_{m-1}, \underbrace{a_m, \ldots, a_m}_{n-1}\,\wr,\; \overline{a_m}, $$
$$ (a_m)_{n-1}a_{m+1} = \wr\,\underbrace{a_m, \ldots, a_m}_{n-1},a_{m+1}\,\wr,\; \ldots \,, \; (a_m)_{n-1}a_{k-1} = \wr\,\underbrace{a_m, \ldots, a_m}_{n-1}, a_{k-1}\,\wr.$$
Since $a_0(a_m)_{n-1} < \cdots < a_{m-1}(a_m)_{n-1} < \overline{a_m} < (a_m)_{n-1}a_{m+1} < \cdots < (a_m)_{n-1}a_{k-1}$, it follows that set $\mathcal{DN}^{\,1}_m$ is closed under the addition.
Now there are four possibilities:
\emph{3.1.} Let $0 < a_0$ and $a_{k-1} < n-1$. Then
$$a_i(a_m)_{n-1}\cdot a_j(a_m)_{n-1} = a_j(a_m)_{n-1}\cdot a_i(a_m)_{n-1} = \overline{a_m} \;\; \mbox{for any} \; i, j = 0, \ldots m-1,$$
$$(a_m)_{n-1}a_i \cdot (a_m)_{n-1}a_j = (a_m)_{n-1}a_j \cdot (a_m)_{n-1}a_i = \overline{a_m} \;\; \mbox{for any} \; i, j = m+1, \ldots k-1,$$
$$a_i(a_m)_{n-1}\cdot (a_m)_{n-1}a_j = (a_m)_{n-1}a_j\cdot a_i(a_m)_{n-1} = \overline{a_m}$$ $$ \mbox{for any} \; i = 0, \ldots, m-1 \; \mbox{and}\; j = m+1, \ldots k-1.
$$
Since $a_i(a_m)_{n-1}\cdot \overline{a_m} = \overline{a_m}\cdot a_i(a_m)_{n-1} = \overline{a_m}$ for $i = 0, \ldots, m - 1$ and, in a similar way, ${(a_m)_{n-1}a_j \cdot \overline{a_m} = \overline{a_m}\cdot (a_m)_{n-1}a_j = \overline{a_m}}$ for $j = m+1, \ldots, k -1$, and also $\left(\overline{a_m}\right)^2 = \overline{a_m}$, it follows that $\mathcal{DN}^{\,1}_m$ is a commutative semiring.
\emph{3.2.} Let $a_0 = 0$ and $a_{k-1} < n - 1$. Then $\left(0(a_m)_{n-1}\right)^2 = 0(a_m)_{n-1}$,
$$ 0(a_m)_{n-1} \cdot a_i(a_m)_{n-1} = a_i(a_m)_{n-1}, \;a_i(a_m)_{n-1}\cdot 0(a_m)_{n-1} = \overline{a_m} \;\;\mbox{for any}\; i = 1, \ldots m-1\; \mbox{and}$$ $$0(a_m)_{n-1} \cdot (a_m)_{n-1}a_j = (a_m)_{n-1}a_j \cdot 0(a_m)_{n-1} = \overline{a_m}\;\; \mbox{for any}\; j = m+ 1, \ldots, k -1.$$
We also observe that $\overline{a_m}\cdot 0(a_m)_{n-1} = 0(a_m)_{n-1} \cdot \overline{a_m} = \overline{a_m}$. All the other equalities between the products of the elements of $\mathcal{DN}^{\,1}_m$ are the same as in \emph{3.1}.
\emph{3.3.} Let $a_0 > 0$ and $a_{k-1} = n-1$. Then $\left((a_m)_{n-1}(n-1)\right)^2 = (a_m)_{n-1}(n-1)$,
$$(a_m)_{n-1}(n-1) \cdot a_i(a_m)_{n-1} = a_i(a_m)_{n-1}\cdot (a_m)_{n-1}(n-1) = \overline{a_m}\;\;\mbox{for any}\; i = 1, \ldots m-1\; \mbox{and}$$
$$(a_m)_{n-1}(n-1) \cdot (a_m)_{n-1}a_j = (a_m)_{n-1}a_j, \; (a_m)_{n-1}a_j \cdot (a_m)_{n-1}(n-1) = \overline{a_m}$$
for any $j = m+ 1, \ldots, k -1$.
We also observe that $\overline{a_m}\cdot (a_m)_{n-1}(n-1) = (a_m)_{n-1}(n-1) \cdot \overline{a_m} = \overline{a_m}$. All the other equalities between the products of the elements of $\mathcal{DN}^{\,1}_m$ are the same as in \emph{3.1}.
\emph{3.4.} Let $a_0 = 0$ and $a_{k-1} = n-1$. Now all equalities between the products of the elements of $\mathcal{DN}^{\,1}_m$ are the same as in \emph{3.1.}, \emph{3.2.} and \emph{3.3}. So, $\mathcal{DN}^{\,1}_m$ is a semiring.
$\Box$
Any simplex $\sigma^{(n)}\{b_0, b_1, \ldots, b_{\ell - 1}\}$ which is a face of simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ is called
{\emph{internal of the simplex}} $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ if $a_0 \notin \sigma^{(n)}\{b_0, b_1, \ldots, b_{\ell - 1}\}$ and $a_{k-1} \notin \sigma^{(n)}\{b_0, b_1, \ldots, b_{\ell - 1}\}$. Similarly simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$, which is a face of $n$ -- simplex
$\widehat{\mathcal{E}}_{\mathcal{C}_n}$, is called {\emph{internal simplex}} if $0 \notin \sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ and $n - 1 \notin \sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$.
Immediately from the proof of Proposition 2 follows
\textbf{Corollary 3.} \textsl{For any internal simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ the semirings $\mathcal{DN}^{\,1}_m$ are commutative and all their elements are $a_m$--nilpotent, where $m = 0, \ldots, k-1$.}
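For example, for the internal simplex $\sigma^{(5)}\{1,2,3\}$ we have $\mathcal{DN}^{\,1}_1 = \left\{\wr\,1,2,2,2,2\,\wr,\; \overline{2},\; \wr\,2,2,2,2,3\,\wr\right\}$, and the product of any two of these endomorphisms equals $\overline{2}$, so all of them are $2$--nilpotent.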
\textbf{Proposition 4.} \textsl{Let $\overline{a_m}$, where $m = 0, \ldots, k-1$, be a vertex of the internal simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$. Then the set $\mathcal{DN}^{\,2}_m = \mathcal{DN}^{\,1}_m\cup \mathcal{L}^{n-2}_{a_m}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $m = 0, \ldots, k-1$, is a subsemiring of $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$.}
\emph{Proof.} The elements of $\mathcal{DN}^{\,2}_m$ are: $\overline{a_m}$, $a_i(a_m)_{n-1}$, where $i = 0, \ldots, m-1$, $(a_m)_{n-1}a_j$, where $j = m+1, \ldots, k-1$, $a_pa_q(a_m)_{n-2}$, where $p, q = 0, \ldots, m-1$, $p \leq q$, $(a_m)_{n-2}a_ra_s$, where $r, s = m+1, \ldots, k-1$, $r \leq s$, and $a_p(a_m)_{n-2}a_s$, where $p = 0, \ldots, m-1$, $s = m+1, \ldots, k-1$.
Since $\mathcal{DN}^{\,1}_m$ is closed under the addition in order to prove the same for $\mathcal{DN}^{\,2}_m$, we consider:
$$a_i(a_m)_{n-1}\! + a_pa_q(a_m)_{n-2} =\! \!\left\{ \begin{array}{ll} a_p(a_m)_{n-1} & \! \mbox{if}\; i \leq p\\
a_i(a_m)_{n-1} & \! \mbox{if}\; i > p \end{array}\! \right.\!, a_i(a_m)_{n-1}\! + (a_m)_{n-2}a_ra_s\! =\! (a_m)_{n-2}a_ra_s,$$
$$(a_m)_{n-1}a_j\! + (a_m)_{n-2}a_ra_s =\! \!\left\{ \begin{array}{ll} (a_m)_{n-2}a_ra_s & \! \mbox{if}\; j \leq s\\
(a_m)_{n-2}a_ra_j & \! \mbox{if}\; j > s \end{array}\! \right.\!, (a_m)_{n-1}a_j\! + a_pa_q(a_m)_{n-2}\! =\! (a_m)_{n-1}a_j,$$
$$a_i(a_m)_{n-1} + a_p(a_m)_{n-2}a_s = \left\{ \begin{array}{ll} a_p(a_m)_{n-2}a_s & \mbox{if}\; i \leq p\\
a_i(a_m)_{n-2}a_s & \mbox{if}\; i > p \end{array} \right.,$$
$$(a_m)_{n-1}a_j + a_p(a_m)_{n-2}a_s = \left\{ \begin{array}{ll} (a_m)_{n-1}a_s & \mbox{if}\; j \leq s\\
(a_m)_{n-1}a_j & \mbox{if}\; j > s \end{array} \right.,$$
$$a_pa_q(a_m)_{n-2} + (a_m)_{n-2}a_ra_s = (a_m)_{n-2}a_ra_s,$$
$$a_{p_0}a_{q_0}(a_m)_{n-2} + a_p(a_m)_{n-2}a_s = \left\{ \begin{array}{ll} a_p(a_m)_{n-2}a_s & \mbox{if}\; p_0 \leq p\\
a_{p_0}(a_m)_{n-2}a_s & \mbox{if}\; p_0 > p \end{array} \right.,$$
$$(a_m)_{n-2}a_{r_0}a_{s_0} + a_p(a_m)_{n-2}a_s = \left\{ \begin{array}{ll} (a_m)_{n-2}a_{r_0}a_s & \mbox{if}\; s_0 \leq s\\
(a_m)_{n-2}a_{r_0}a_{s_0} & \mbox{if}\; s_0 > s \end{array} \right.,$$
$$ a_p(a_m)_{n-2}a_s + a_{p_0}(a_m)_{n-2}a_{s_0} = \left\{ \begin{array}{ll} a_{p_0}(a_m)_{n-2}a_{s_0} & \mbox{if}\; p \leq p_0, s \leq s_{0}\\
a_{p_0}(a_m)_{n-2}a_{s} & \mbox{if}\; p \leq p_0, s > s_{0}\\
a_{p}(a_m)_{n-2}a_{s_0} & \mbox{if}\; p > p_0, s \leq s_{0}\\
a_{p}(a_m)_{n-2}a_{s} & \mbox{if}\; p > p_0, s > s_{0}\\
\end{array} \right.,$$
$$ \overline{a_m} + a_pa_q(a_m)_{n-2} = \overline{a_m},\; \overline{a_m} + (a_m)_{n-2}a_ra_s = (a_m)_{n-2}a_ra_s,\; \overline{a_m} + a_p(a_m)_{n-2}a_s = (a_m)_{n-1}a_s,$$
where $i, p, q, p_0, q_0 = 0, 1, \ldots, m-1$, $p \leq q$, $p_0 < q_0$, $j, r, s,r_0, s_0 = {m+1}, \ldots, {k-1}$, $r \leq s$, $r_0 < s_0$. So, we prove that $\mathcal{DN}^{\,2}_m$ is closed under the addition.
Now we consider six cases, in which the above restrictions on the indices are assumed.
\emph{Case 1.} Let $a_m = 1$. We shall show that all endomorphisms of $\mathcal{DN}^{\,2}_1$ are $1$--nilpotent, with the only exception when $a_{k-1} = n-2$. When $a_{k-1} < n-2$, since $1$ is the least possible image value and every image value is at most $n-3$, all products of elements of $\mathcal{DN}^{\,2}_1$ equal $\overline{1}$; in particular: $ 1_{n-2}a_ra_s\cdot 1_{n-2}a_{r_0}a_{s_0} = \overline{1}$,
$$ 1_{n-1}a_j \cdot 1_{n-2}a_ra_s = 1_{n-2}a_ra_s \cdot 1_{n-1}a_j = \overline{1},\; \overline{1}\cdot 1_{n-2}a_ra_s = 1_{n-2}a_ra_s \cdot \overline{1} = \overline{1}.$$
Hence, it follows that $\mathcal{DN}^{\,2}_1$ is a commutative semiring with trivial multiplication.
If $a_{k-1} = n-2$, it is easy to see that the endomorphism $1_{n-2}(n-2)_2$ is the unique nonconstant idempotent of $\mathcal{DN}^{\,2}_1$ (see [2]). Now we find
$1_{n-2}(n-2)_2 \cdot 1_{n-2}a_ra_s = 1_{n-2}(a_r)_2$, $1_{n-1}(n-2) \cdot 1_{n-2}a_ra_s = 1_{n-1}a_r$, $1_{n-2}a_ra_s \cdot 1_{n-1}(n-2) = \overline{1}$.
Hence, $\mathcal{DN}^{\,2}_1$ is a semiring.
\emph{Case 2.} Let $a_m = n-2$. We shall show that all the endomorphisms of $\mathcal{DN}^{\,2}_{n-2}$ are $(n-2)$--nilpotent, with the only exception when $a_0 = 1$.
When $a_0 > 1$ we find: $$ a_pa_q(n-2)_{n-2}\cdot a_{p_0}a_{q_0}(n-2)_{n-2} = \overline{n-2},$$
$$a_i(n-2)_{n-1}\cdot a_pa_q(n-2)_{n-2} = a_pa_q(n-2)_{n-2}\cdot a_i(n-2)_{n-1} = \overline{n-2},$$
$$\overline{n-2}\cdot a_pa_q(n-2)_{n-2} = a_pa_q(n-2)_{n-2}\cdot \overline{n-2} = \overline{n-2}.$$
If $a_0 = 1$, the only nonconstant idempotent is $1_2(n-2)_{n-2}$ and we find:
$$1_2(n-2)_{n-2}\cdot a_pa_q(n-2)_{n-2} = (a_q)_2(n-2)_{n-2},\;$$
$$1(n-2)_{n-1}\cdot a_pa_q(n-2)_{n-2} = a_q(n-2)_{n-1},\; a_pa_q(n-2)_{n-2}\cdot 1(n-2)_{n-1} = \overline{n-2}.$$
Hence, $\mathcal{DN}^{\,2}_{n-2}$ is a semiring.
\emph{Case 3.} Let $1 < a_0$ and $a_{k-1} < n-2$. We find the following trivial equalities, which are grouped by duality:
$$a_pa_q(a_m)_{n-2}\cdot a_{p_0}a_{q_0}(a_m)_{n-2} = \overline{a_m},\; (a_m)_{n-2}a_ra_s\cdot (a_m)_{n-2}a_{r_0}a_{s_0} = \overline{a_m},$$
$$a_pa_q(a_m)_{n-2}\cdot a_{p_0}(a_m)_{n-2}a_{s_0} = a_{p_0}(a_m)_{n-2}a_{s_0} \cdot a_pa_q(a_m)_{n-2} = \overline{a_m},$$
$$(a_m)_{n-2}a_ra_s\cdot a_{p_0}(a_m)_{n-2}a_{s_0} = a_{p_0}(a_m)_{n-2}a_{s_0} \cdot (a_m)_{n-2}a_ra_s = \overline{a_m},$$
$$a_pa_q(a_m)_{n-2}\cdot (a_m)_{n-2}a_ra_{s} = (a_m)_{n-2}a_ra_{s} \cdot a_pa_q(a_m)_{n-2} = \overline{a_m},$$
$$a_i(a_m)_{n-1}\cdot a_pa_q(a_m)_{n-2} = a_pa_q(a_m)_{n-2}\cdot a_i(a_m)_{n-1} = \overline{a_m},$$
$$(a_m)_{n-1}a_j\cdot a_pa_q(a_m)_{n-2} = a_pa_q(a_m)_{n-2}\cdot (a_m)_{n-1}a_j = \overline{a_m},$$
$$a_i(a_m)_{n-1}\cdot a_p(a_m)_{n-2}a_s = a_p(a_m)_{n-2}a_s\cdot a_i(a_m)_{n-1} = \overline{a_m},$$
$$(a_m)_{n-1}a_j\cdot a_p(a_m)_{n-2}a_s = a_p(a_m)_{n-2}a_s\cdot (a_m)_{n-1}a_j = \overline{a_m},$$
$$a_i(a_m)_{n-1}\cdot (a_m)_{n-2}a_ra_s = (a_m)_{n-2}a_ra_s\cdot a_i(a_m)_{n-1} = \overline{a_m},$$
$$(a_m)_{n-1}a_j\cdot (a_m)_{n-2}a_ra_s = (a_m)_{n-2}a_ra_s\cdot (a_m)_{n-1}a_j = \overline{a_m},$$
$$\overline{a_m}\cdot a_pa_q(a_m)_{n-2} = a_pa_q(a_m)_{n-2}\cdot \overline{a_m} = \overline{a_m},$$
$$\overline{a_m}\cdot a_p(a_m)_{n-2}a_s = a_p(a_m)_{n-2}a_s\cdot \overline{a_m} = \overline{a_m},$$
$$\overline{a_m}\cdot (a_m)_{n-2}a_ra_s = (a_m)_{n-2}a_ra_s\cdot \overline{a_m} = \overline{a_m}.$$
\emph{Case 4.} Let $a_0 = 1$ and $a_{k-1} < n-2$. Then $\;1_2(a_m)_{n-2}$ is the only idempotent in $\mathcal{DN}^{\,2}_m$. In addition to the equalities of the previous case we find:
$$1_2(a_m)_{n-2}\cdot a_pa_q(a_m)_{n-2} = (a_q)_2(a_m)_{n-2}, \; 1(a_m)_{n-1}\cdot a_pa_q(a_m)_{n-2} = a_q(a_m)_{n-1}.$$
\emph{Case 5.} Let $1 < a_0$ and $a_{k-1} = n-2$. Now the only idempotent endomorphism in $\mathcal{DN}^{\,2}_m$ is $(a_m)_{n-2}(n-2)_2$. We additionally find the following equalities:
$$(a_m)_{n-2}(n-2)_2 \cdot (a_m)_{n-2}a_ra_s = (a_m)_{n-2}(a_r)_2,\; (a_m)_{n-1}(n-2)\cdot (a_m)_{n-2}a_ra_s = (a_m)_{n-1}a_r.$$
\emph{Case 6.} Let $a_0 = 1$ and $a_{k-1} = n-2$. Now, in $\mathcal{DN}^{\,2}_m$, there are two idempotents: $\;1_2(a_m)_{n-2}$ and $(a_m)_{n-2}(n-2)_2$. Here the equalities from cases 4 and 5 are valid and also all the equalities from case 3, under the respective restrictions for the indices, are fulfilled.
Hence, $\mathcal{DN}^{\,2}_m$ is a semiring.
$\Box$
\textbf{Theorem} \exo/ \textsl{Let $\sigma^{(n)}_k(A) = \sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ be a simplex. }
\textsl{a. For the least vertex $\overline{a_0}$ it follows $\mathcal{DN}^{\,n-a_0-1}_0 = \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_0)}_{\mathcal{C}_n}$.
}
\textsl{b. For the biggest vertex $\overline{a_{k-1}}$ it follows $\mathcal{DN}^{\,a_{k-1}}_{k-1} = \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_{k-1})}_{\mathcal{C}_n}$.
}
\emph{Proof.} a.
Since $\overline{a_0}$ is the least vertex of the simplex, it follows that layer
$\mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ consists of endomorphisms
${\alpha = (a_0)_{a_0 + 1}(a_1)_{p_1} \ldots (a_{k-1})_{p_{k-1}}}$, where $a_0 + 1 + p_1 + \cdots + p_{k-1} = n$, i.e. $\alpha(0) = a_0$, $\ldots$, $\alpha(a_0) = a_0$. All the layers
$\mathcal{L}^{\ell}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $\ell \geq a_0 + 1$, consist of endomorphisms having $a_0$ as a fixed point. So, $\mathcal{DN}^{\,n-a_0-1}_0 \subseteq \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_0)}_{\mathcal{C}_n}$.
Conversely, let $\alpha \in \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_0)}_{\mathcal{C}_n}$. Then $\alpha(a_0) = a_0$. Since $\overline{a_0}$ is the least vertex of the simplex, we have $\alpha(0) = \ldots = \alpha(a_0 - 1) = a_0$, that is $\alpha \in \mathcal{L}^{\ell}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $\ell \geq a_0 + 1$. Hence, $\mathcal{DN}^{\,n-a_0-1}_0 = \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_0)}_{\mathcal{C}_n}$.
b. Since $\overline{a_{k-1}}$ is the biggest vertex of the simplex, it follows that layer
$\mathcal{L}^{n - a_{k-1}}_{a_{k-1}}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ consists of endomorphisms ${\alpha = (a_0)_{p_0} \ldots (a_{k-2})_{p_{k-2}}}(a_{k-1})_{n- a_{k-1}}$, where $p_0 + \cdots + p_{k-2} + n - a_{k-1} = n$. So, $p_0 + \cdots + p_{k-2} = a_{k-1}$ implies that the images of $0$, $\ldots$, $a_{k-1} - 1$ are not equal to $a_{k-1}$, but $\alpha(a_{k-1}) = a_{k-1}$.
For all the endomorphisms of layers
$\mathcal{L}^{\ell}_{a_{k-1}}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $\ell \geq n - a_{k-1}$, we have $p_0 + \cdots + p_{k-2} \leq a_{k-1}$. Hence, the elements of these layers have $a_{k-1}$ as a fixed point and $\mathcal{DN}^{\,a_{k-1}}_{k-1} \subseteq \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_{k-1})}_{\mathcal{C}_n}$.
Conversely, let $\alpha \in \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_{k-1})}_{\mathcal{C}_n}$. Then $\alpha(a_{k-1}) = a_{k-1}$. Since $\overline{a_{k-1}}$ is the biggest vertex of the simplex, we have $\alpha(a_{k-1} + 1) = \ldots = \alpha(n - 1) = a_{k-1}$. Thus, $\alpha \in \mathcal{L}^{\ell}_{a_{k-1}}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, where $\ell \geq n - a_{k-1}$. Hence, $\mathcal{DN}^{\,a_{k-1}}_{k-1} = \sigma^{(n)}_k(A)\cap {\mathcal{E}}^{(a_{k-1})}_{\mathcal{C}_n}$.
$\Box$
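For instance, for the simplex $\sigma^{(4)}\{1,2,3\}$ part a of the theorem gives $\mathcal{DN}^{\,2}_0 = \sigma^{(4)}\{1,2,3\}\cap {\mathcal{E}}^{(1)}_{\mathcal{C}_4} = \{\overline{1},\, 1_32,\, 1_33,\, 1_22_2,\, 1_223,\, 1_23_2\}$, i.e. the six endomorphisms of the simplex having $1$ as a fixed point.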
\emph{{Remark}} \exo/ What is the least $\ell$, such that the discrete neighborhood $\mathcal{DN}^{\,\ell}_{m}$ of the vertex $\overline{a_m}$ of simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$, where $m \neq 0$ and $m \neq k-1$, is a semiring? Since $1_32_{n-3}$ is a $1$--nilpotent element of any simplex
$\sigma^{(n)}\{1, 2, a_2 \ldots, a_{k-1}\}$, it follows that $\ell = 2$.
From the last theorem it follows that all the $a_0$--nilpotent elements of the simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ belong to the semiring $\mathcal{DN}^{\,n-a_0-1}_0$. But there are elements of $\mathcal{DN}^{\,n-a_0-1}_0$ which are not $a_0$--nilpotent. For instance, an endomorphism $\alpha \in \mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ such that $\alpha(i) = a_m$ for all $i > a_0$, for some fixed $m \in \{1, \ldots, k-1\}$, is an idempotent. In order to separate the $a_0$--nilpotent elements from all the other elements of $\mathcal{DN}^{\,n-a_0-1}_0$, we consider the following
\textbf{Proposition} \exo/ \textsl{The endomorphism $\alpha \in \mathcal{DN}^{\,n-a_0-1}_0$ is $a_0$--nilpotent if and only if}
$$\alpha(0) = \cdots = \alpha(a_0) = \cdots = \alpha(a_1) = a_0, \; \alpha(i) < i, \; \mbox{\textsl{for}}\; a_1 < i \leq n - 1.$$
\emph{Proof.} Let $\alpha$ be $a_0$--nilpotent and suppose that for some $i \geq a_0 + 1$ we have $\alpha(i) \geq i$. Then $\alpha^m(i) \geq i \geq a_0 + 1$ for any natural $m$, which contradicts the $a_0$--nilpotency of $\alpha$. Hence, $\alpha(i) < i$ for $i \geq a_0 + 1$. In particular $\alpha(a_s) < a_s$ for any $s = 1, \ldots, k-1$, and then $\alpha(a_1) = a_0$; by the monotonicity of $\alpha$ it also follows that $\alpha(0) = \cdots = \alpha(a_1) = a_0$. Conversely, let the conditions of the proposition hold. The values of $\alpha$ are vertices of the simplex and $\alpha(i) < i$ for every $i > a_1$, so the iterates $\alpha(i), \alpha^2(i), \ldots$ strictly decrease as long as they exceed $a_1$; since $\alpha(j) = a_0$ for every $j \leq a_1$, all of them eventually become equal to $a_0$. Hence $\alpha^m = \overline{a_0}$ for some natural $m$, i.e. $\alpha$ is $a_0$--nilpotent.
$\Box$
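For example, in the simplex $\sigma^{(6)}\{1,2,4\}$ the endomorphism $\alpha = 1_32_24$ fulfils the conditions of the proposition and indeed $\alpha^2 = 1_52$ and $\alpha^3 = \overline{1}$, so $\alpha$ is $1$--nilpotent.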
From the last proposition it immediately follows that there are not $a_0$--nilpotent endomorphisms in layer $\mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$. Since $\mathcal{DN}^{\,n-a_0-1}_0$ is a proper subsemiring of the simplex, it follows that in layer
$\mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ there are not any $a_m$--nilpotent elements. So, the elements of this layer are idempotents or roots of idempotents. But if $\alpha = (a_0)_{a_0 + 1}(a_1)_{p_1} \ldots (a_{k-1})_{p_{k-1}} \in \mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$, it follows by induction that $\alpha^m = (a_0)_{a_0 + 1}(a_1)_{q_1} \ldots (a_{k-1})_{q_{k-1}} \in \mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ for any natural $m$. So, all the elements of this layer are idempotents or roots of idempotents of the same layer. Obviously, the layer is closed under the addition. So, we prove
\textbf{Proposition} \exo/ \textsl{For any simplex $\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}$ layer\\ $\mathcal{L}^{a_0 + 1}_{a_0}\left(\sigma^{(n)}\{a_0, a_1, \ldots, a_{k-1}\}\right)$ is a subsemiring of the simplex.}
\noindent{\bf \large 3. \hspace{0.5mm} Strings}
Let us denote the elements of semiring $\mathcal{STR}^{(n)}\{a,b\}$ by $a_kb_{n-k}$, where $k = 0,\ldots,n$ is the number of the elements of $\mathcal{C}_n$ with an image equal to $a$, i.e.
$$a_kb_{n-k} = \wr\, \underbrace{a, \ldots, a}_{k}, \underbrace{b, \ldots, b}_{n-k}\,\wr.$$
In particular, we denote $a_nb_0 = \overline{a}$ and $a_0b_n = \overline{b}$.
Let us consider the following subset of $\mathcal{STR}^{(n)}\{a,b\}$:
$$N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) = \{\overline{a}, \ldots, a_{b+1}b_{n-b-1}\}.$$
For any endomorphism $\alpha \in N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, that is $\alpha = a_{\ell\,} b_{n-\ell}$, where $b + 1 \leq \ell \leq n$, we have $\alpha(b) = a$. Hence $\alpha, \beta \in N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ implies $\alpha\cdot\beta = \overline{a}$. Since $\alpha^2 = \overline{a}$ for any $\alpha \in N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, it follows, see [3], that $$N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) = N^{[a]}_n\cap \mathcal{STR}^{(n)}\{a,b\}.$$
From Theorem 3.3 of [3], see section 1, follows
\textbf{Proposition} \exo/ \textsl{The set $N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is a subsemiring of $\mathcal{STR}^{(n)}\{a,b\}$ consisting of all $a$ -- nilpotent elements of this string.}
The order of this semiring is $n-b$.
The next subset of $\mathcal{STR}^{(n)}\{a,b\}$ is:
$$Id\left(\mathcal{STR}^{(n)}\{a,b\}\right) = \{a_bb_{n-b}, \ldots, a_{a+1}b_{n-a-1}\}.$$
For any endomorphism of $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ elements $a$ and $b$ are fixed points. From Corollary 3 of [2] we find that all the elements of $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ are idempotents and from Theorem 9 of [2], see section 1, it follows
\textbf{Proposition} \exo/ \textsl{The set $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is a subsemiring of $\mathcal{STR}^{(n)}\{a,b\}$ consisting of all idempotent elements of this string different from $\overline{a}$ and $\overline{b}$.
The order of this semiring is $b-a$.}
The last considered subset of $\mathcal{STR}^{(n)}\{a,b\}$ is:
$$N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) = \{a_ab_{n-a}, \ldots, \overline{b}\}.$$
For any endomorphism $\alpha = a_{\ell\,} b_{n-\ell}$, where $0 \leq \ell \leq a$ it follows $\alpha(a) = b$. Hence $\alpha, \beta \in N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ implies $\alpha\cdot\beta = \overline{b}$. Since $\alpha^2 = \overline{b}$ for any $\alpha \in N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ it follows, see [2], that $$N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) = N^{[b]}_n\cap \mathcal{STR}^{(n)}\{a,b\}.$$
From Theorem 3.3 of [3], see section 1, we have
\textbf{Proposition} \exo/ \textsl{The set $N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is a subsemiring of $\mathcal{STR}^{(n)}\{a,b\}$ consisting of all $b$ -- nilpotent elements of this string.}
The order of this semiring is $a + 1$.
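To illustrate the three subsets, consider the string $\mathcal{STR}^{(5)}\{1,3\}$. Then $N^{[1]}\left(\mathcal{STR}^{(5)}\{1,3\}\right) = \{\overline{1},\, 1_43\}$, $Id\left(\mathcal{STR}^{(5)}\{1,3\}\right) = \{1_33_2,\, 1_23_3\}$ and $N^{[3]}\left(\mathcal{STR}^{(5)}\{1,3\}\right) = \{13_4,\, \overline{3}\}$, of orders $n - b = 2$, $b - a = 2$ and $a + 1 = 2$ respectively.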
\textbf{Proposition} \exo/ \textsl{Let $a, b \in \mathcal{C}_n$, $a < b$ and $a_kb_{n-k} \in \mathcal{STR}^{(n)}\{a,b\}$, where $k = 0,\ldots,n$. Then}
$$\begin{array}{ll}
a_kb_{n-k} \cdot \alpha = \overline{a}, & \mbox{\textsl{if}} \;\; \alpha \in N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)\\
a_kb_{n-k} \cdot \alpha = a_kb_{n-k}, & \mbox{\textsl{if}} \;\; \alpha \in Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)\\
a_kb_{n-k} \cdot \alpha = \overline{b}, & \mbox{\textsl{if}} \;\; \alpha \in N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)
\end{array}.$$
\emph{Proof.} For any $i \in \mathcal{C}_n$ and $\alpha \in N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ it follows
$$(a_kb_{n-k} \cdot \alpha)(i) = \alpha(a_kb_{n-k}(i)) = \left\{ \begin{array}{ll} \alpha(a),& \mbox{if}\;\; 0 \leq i \leq k-1\\ \alpha(b), & \mbox{if}\; \; k \leq i \leq n - 1 \end{array} =\: a \right.$$
which means that $a_kb_{n-k} \cdot \alpha = \overline{a}$.
For any $i \in \mathcal{C}_n$ and $\alpha \in Id \left(\mathcal{STR}^{(n)}\{a,b\}\right)$ it follows
$$(a_kb_{n-k} \cdot \alpha)(i) = \alpha(a_kb_{n-k}(i)) = \left\{ \begin{array}{ll} \alpha(a),& \mbox{if}\;\; 0 \leq i \leq k-1\\ \alpha(b), & \mbox{if}\; \; k \leq i \leq n - 1 \end{array} = \right.$$ $$\hphantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \left\{ \begin{array}{ll} a,& \mbox{if}\;\; 0 \leq i \leq k-1\\ b, & \mbox{if}\; \; k \leq i \leq n - 1 \end{array} = \: a_kb_{n-k}(i) \right.$$
which means that $a_kb_{n-k} \cdot \alpha = a_kb_{n-k}$.
For any $i \in \mathcal{C}_n$ and $\alpha \in N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ it follows
$$(a_kb_{n-k} \cdot \alpha)(i) = \alpha(a_kb_{n-k}(i)) = \left\{ \begin{array}{ll} \alpha(a),& \mbox{if}\;\; 0 \leq i \leq k-1\\ \alpha(b), & \mbox{if}\; \; k \leq i \leq n - 1 \end{array} =\: b \right.$$
which means that $a_kb_{n-k} \cdot \alpha = \overline{b}$.
$\Box$
Immediately follows
\textbf{Corollary} \exo/ \textsl{The idempotent endomorphisms of semiring $\mathcal{STR}^{(n)}\{a,b\}$, different from $\overline{a}$ and $\overline{b}$, are right identities.}
\textbf{Corollary} \exo/ \textsl{Any two different strings are nonisomorphic semirings.}
Using the fact that the strings are faces of any $k$ -- simplex for arbitrary $k \geq 2$, the last corollary implies
\textbf{Corollary} \exo/ \textsl{Any two different $k$ -- simplices are nonisomorphic semirings.}
\emph{{Remark}} \exo/ a. From Proposition 1.4 we actually observe that the multiplicative structure of an arbitrary string $\mathcal{STR}^{(n)}\{a,b\}$ is very clear: first, we find $n - b$ endomorphisms (all the $a$ -- nilpotent elements) which are square roots of $\overline{a}$, one of them being $\overline{a}$ itself, then $b - a$ idempotents, which are right identities, and, finally, $a + 1$ elements (all the $b$ -- nilpotent elements) which are square roots of $\overline{b}$, one of them being $\overline{b}$ itself.
b. The union of semirings $N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ and $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is also a semi\-ring because
$$N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)\cup Id\left(\mathcal{STR}^{(n)}\{a,b\}\right) = \mathcal{STR}^{(n)}\{a,b\}\cap {\mathcal{E}}^{(a)}_{\mathcal{C}_n}.$$
Similarly,
$$N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)\cup Id\left(\mathcal{STR}^{(n)}\{a,b\}\right) = \mathcal{STR}^{(n)}\{a,b\}\cap {\mathcal{E}}^{(b)}_{\mathcal{C}_n}$$
is a subsemiring of $\mathcal{STR}^{(n)}\{a,b\}$.
Two strings $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{x,y\}$ are called {\emph{consecutive}} if they have a common vertex. So, strings $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{b,c\}$, $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{a,c\}$, $\mathcal{STR}^{(n)}\{a,c\}$ and $\mathcal{STR}^{(n)}\{b,c\}$ (when $a < b < c$) are the three possibilities of the pairs of consecutive strings.
Let $a_kb_{n-k} \in \mathcal{STR}^{(n)}\{a,b\}$, where $k = 0,\ldots,n$, and $b_\ell c_{n-\ell} \in \mathcal{STR}^{(n)}\{b,c\}$, where $\ell = 0,\ldots,n$. Since $a_kb_{n-k} < b_\ell c_{n-\ell}$, then $a_kb_{n-k} + b_\ell c_{n-\ell} = b_\ell c_{n-\ell}$. By similar arguments, for any $a_m c_{n-m} \in \mathcal{STR}^{(n)}\{a,c\}$, we can construct $a_m c_{n-m} + b_\ell c_{n-\ell} = b_rc_{n-r}$, where $r = \min\{\ell,m\}$. But when we add endomorphisms $a_k b_{n-k}$ and $a_mc_{n-m}$, where $k < m$, the sum is $a_kb_{m-k}c_{n-m}$, so, the set of these three strings is not closed under the addition.
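For instance, for $n = 5$, $a = 1$, $b = 2$, $c = 4$, $k = 1$ and $m = 3$ we obtain $12_4 + 1_34_2 = 12_24_2 = a_kb_{m-k}c_{n-m}$.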
In the next proposition we examine the product of endomorphisms of two (not necessarily consecutive) strings.
\textbf{Proposition} \exo/ \textsl{Let $a_kb_{n-k} \in \mathcal{STR}^{(n)}\{a,b\}$, where $k = 0,\ldots,n$, and $x_\ell\, y_{n-\ell} \in \mathcal{STR}^{(n)}\{x,y\}$, where $\ell = 0,\ldots,n$. Then}
$$\begin{array}{ll}
a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = \overline{x}, & \mbox{\textsl{if}} \;\; b+1 \leq \ell \leq n\\
a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = x_ky_{n-k}, & \mbox{\textsl{if}} \;\; a + 1 \leq \ell \leq b\\
a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = \overline{y}, & \mbox{\textsl{if}} \;\; 0 \leq \ell \leq a
\end{array}.$$
\emph{Proof.} For any $i \in \mathcal{C}_n$ it follows
$$\left(a_kb_{n-k} \cdot x_\ell\, y_{n-\ell}\right)(i) = x_\ell\, y_{n-\ell}\left(a_kb_{n-k}(i)\right) = \left\{ \begin{array}{ll} x_\ell\, y_{n-\ell}(a),& \mbox{if}\;\; 0 \leq i \leq k-1\\ x_\ell\, y_{n-\ell}(b), & \mbox{if}\; \; k \leq i \leq n - 1 \end{array}. \right.$$
If $b+1 \leq \ell \leq n$, we have $x_\ell\, y_{n-\ell}(b) = x$ and then $x_\ell\, y_{n-\ell}(a) = x$. So, for any $i \in \mathcal{C}_n$ we find that $\left(a_kb_{n-k} \cdot x_\ell\, y_{n-\ell}\right)(i) = x$ and hence $a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = \overline{x}$.
If $a + 1 \leq \ell \leq b$, it follows $x_\ell\, y_{n-\ell}(a) = x$ and $x_\ell\, y_{n-\ell}(b) = y$. Thus, we obtain
$$\left(a_kb_{n-k} \cdot x_\ell\, y_{n-\ell}\right)(i) = \left\{ \begin{array}{ll} x,& \mbox{if}\;\; 0 \leq i \leq k-1\\ y, & \mbox{if}\; \; k \leq i \leq n - 1 \end{array} \right. = x_ky_{n-k}(i).$$
Hence, $a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = x_ky_{n-k}$.
If $ 0 \leq \ell \leq a$, we have $x_\ell\, y_{n-\ell}(a) = y$ and then $x_\ell\, y_{n-\ell}(b) = y$. So, for any $i \in \mathcal{C}_n$ we obtain $\left(a_kb_{n-k} \cdot x_\ell\, y_{n-\ell}\right)(i) = y$ and hence $a_kb_{n-k} \cdot x_\ell\, y_{n-\ell} = \overline{y}$.
$\Box$
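The proposition can also be checked mechanically. The following short Python sketch (it is only an illustration, not part of the argument; the helper names \texttt{string\_elem} and \texttt{mul} are ad hoc) encodes an endomorphism as the tuple of its images and verifies the three cases of the proposition for one choice of the parameters $n$, $a$, $b$, $x$, $y$.
\begin{verbatim}
# Verification sketch: an endomorphism of the chain C_n is encoded as the
# tuple of its images, so a_k b_{n-k} becomes k entries equal to a followed
# by n-k entries equal to b.  Multiplication follows the convention used in
# the proofs above: (alpha . beta)(i) = beta(alpha(i)).

def string_elem(a, b, k, n):
    """Return a_k b_{n-k} as a tuple of images."""
    return tuple(a if i < k else b for i in range(n))

def mul(alpha, beta):
    """(alpha . beta)(i) = beta(alpha(i))."""
    return tuple(beta[alpha[i]] for i in range(len(alpha)))

# Check the three cases of the proposition for one choice of parameters.
n, a, b, x, y = 7, 1, 4, 2, 5
for k in range(n + 1):
    for ell in range(n + 1):
        prod = mul(string_elem(a, b, k, n), string_elem(x, y, ell, n))
        if ell >= b + 1:                      # b+1 <= ell <= n
            assert prod == (x,) * n           # constant map x
        elif ell >= a + 1:                    # a+1 <= ell <= b
            assert prod == string_elem(x, y, k, n)
        else:                                 # 0 <= ell <= a
            assert prod == (y,) * n           # constant map y
print("all cases verified")
\end{verbatim}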
Immediately it follows
\textbf{Corollary} \exo/ \textsl{For any $a, b, c \in \mathcal{C}_n$, $a < b < c$, the set consisting of all the elements of the consecutive strings $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{b,c\}$ is a semiring.}
A subsemiring $S$ of the endomorphism semiring $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ is called a {\emph{trivial semiring}} if for any two elements $\alpha, \beta \in S$ it follows $\alpha\cdot \beta = \iota$, where $\iota$ is a fixed element of $S$. If the semiring $S$ is trivial, then there exists a unique idempotent $\iota\in S$ such that the product of any two elements of $S$ is equal to $\iota$. If this idempotent is the biggest (least) element of the trivial semiring $S$, then $S$ is called an \emph{upper (lower) trivial semiring}.
\emph{{Example}} \exo/ a) The semiring of $a$ -- nilpotent elements $N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ (using the proof of Proposition 1.4) is a lower trivial semiring.
b) The semiring of $b$ -- nilpotent elements $N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ (using the proof of Proposition 1.4) is an upper trivial semiring.
c) Let us consider the semiring from Corollary 2.2. Then from Proposition 2.1 follows that the union of semirings $N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ and $N^{[b]}\left(\mathcal{STR}^{(n)}\{b,c\}\right)$ is a trivial semiring. Since $\overline{b}$ is the biggest element of $N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ and the least element of $N^{[b]}\left(\mathcal{STR}^{(n)}\{b,c\}\right)$ it follows that the considered trivial semiring is neither upper trivial nor lower trivial.
Now we shall construct some useful subsemirings of a given string, some of which are trivial semirings.
We consider the following subset of string $\mathcal{STR}^{(n)}\{a,b\}$:
$$A_r = \{\overline{a}, a_{n-1}b, \ldots, a_rb_{n-r}\},$$
where $r = 1, \ldots, n$. Since $A_r$ is a chain for any $r$, it is closed under the addition. If $r \geq b + 1$, then $A_r \subseteq N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, so, $A_r$ is a lower trivial semiring. If $r \leq a$, then $A_r \cap N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) \neq \varnothing$ which implies that $A_r$ is not closed under the multiplication, i.e. it is not a semiring.
From Remark 1.7 b. it follows that
$$A_{a+1} = N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)\cup Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)
$$
is the largest of the sets $A_r$ that is a semiring.
Since every element of semiring $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is a right identity of string $\mathcal{STR}^{(n)}\{a,b\}$, it follows that for every $r = a + 1, \ldots, n$ the set $A_r$ is a semiring.
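For instance, in $\mathcal{STR}^{(5)}\{1,3\}$ the sets $A_3 = \{\overline{1}, 1_43, 1_33_2\}$ and $A_2 = \{\overline{1}, 1_43, 1_33_2, 1_23_3\}$ are semirings, while $A_1 = A_2\cup\{13_4\}$ is not, since $(13_4)^2 = \overline{3}\notin A_1$.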
Using the same idea, we consider the subset of $\mathcal{STR}^{(n)}\{a,b\}$:
$$B_s = \{\overline{b}, ab_{n-1}, \ldots, a_sb_{n-s}\},$$
where $s = 0, \ldots, n-1$. The set $B_s$ is a chain for any $s$, so, it is closed under the addition. If $s \leq a$, then $B_s \subseteq N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, so, $B_s$ is an upper trivial semiring. If $s \geq b+ 1$, then $B_s \cap N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right) \neq \varnothing$ which means that $B_s$ is not closed under the multiplication, so, $B_s$ is not a semiring.
Also from Remark 1.7 b. it follows that
$$B_{b} = N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)\cup Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)
$$
is the largest of the sets $B_s$ that is a semiring.
By the same way, considering that every element of semiring $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ is a right identity of string $\mathcal{STR}^{(n)}\{a,b\}$ it follows that for every $s = 0, \ldots, b$ set $B_s$ is a semiring.
\noindent{\large \bf 4. Triangles}
Let $a, b, c \in \mathcal{C}_n$, $a < b < c$, be fixed elements. The set of endomorphisms $\alpha$ such that
$$\alpha(0) = \cdots = \alpha(k-1) = a, \alpha(k) = \cdots = \alpha(k+\ell-1) = b, \alpha(k+\ell) = \cdots = \alpha(n-1) = c$$
or briefly $\alpha = a_kb_\ell c_{n-k-\ell}$, where $k \geq 0$, $\ell \geq 0$ and $k + \ell \leq n$, is actually the triangle $\triangle^{(n)}\{a,b,c\}$. Obviously, the order of this semiring is $\displaystyle \binom{n+2}{2}$.
The strings $\mathcal{STR}^{(n)}\{a,b\}$, $\mathcal{STR}^{(n)}\{a,c\}$ and $\mathcal{STR}^{(n)}\{b,c\}$ are called strings of $\triangle^{(n)}\{a,b,c\}$.
Let $R$ be a subsemiring of $\widehat{\mathcal{E}}_{\mathcal{C}_n}$ and $\alpha, \beta \in R$, $\alpha \neq \beta$. These endomorphisms are called {\emph{right-similar}} ({\emph{left-similar}}) if for any $\gamma \in R$ we have $\alpha\cdot \gamma = \beta\cdot \gamma$ ($\gamma\cdot \alpha = \gamma\cdot \beta$). We denote this by $\alpha \sim_r \beta$ ($\alpha \sim_\ell \beta$). In the next sections we shall answer the question: Are there right-similar (left-similar) elements in $\triangle^{(n)}\{a,b,c\}$ ?
\emph{Example} \exo/ The biggest side of the least tetrahedron $\mathcal{TETR}^{(4)}\{0,1,2,3\}$ is triangle $\triangle^{(4)}\{1,2,3\}$. The elements of this semiring can be arranged as in the following scheme (fig.1):
\centerline{\small Figure 1.}
It is easy to see that the interior of this triangle is set $Int = \{1_223, 12_23, 123_2\}$. Since $(123_2)^2 = 23_3 \notin Int$, it follows that the interior of $\triangle^{(4)}\{1,2,3\}$ is not a semiring. Since $1_223 = 1_22_2 + 1_33$, $12_23 = 12_3 + 1_33$ and $123_2 = 12_3 + 1_23_2$, it follows that every element of the interior of the triangle can be represented as a sum of an element of the least side of the triangle and an element of the middle side of the triangle.
In this triangle there are many left-similar endomorphisms: $12_3 \sim_\ell \overline{2}$, $13_3 \sim_\ell 23_3 \sim_\ell \overline{3}$, $123_2 \sim_\ell 2_23_2$, $12_23 \sim_\ell 2_33$. The endomorphism $1_223$ is a right identity, so there are no right-similar endomorphisms in $\triangle^{(4)}\{1,2,3\}$.
\textbf{Proposition} \exo/ \textsl{Any element of the interior of $\triangle^{(n)}\{a,b,c\}$ can be uniquely represented as a sum of the elements of strings $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{a,c\}$. }
\emph{Proof.} We easily calculate
$$a_{n-1}c + a_k b_{n-k} = a_k b_{n-k-1}c, \; \mbox{where} \; k = 0, \ldots, n-2.$$
By the same argument, for any $j = 1, \ldots, n-1$ we find
$$a_{n-j}c_j + a_k b_{n-k} = a_k b_{n-k-j}c_j, \; \mbox{where} \; k = 0, \ldots, n-j-1.$$
So, we prove more: all the elements of the interior of $\triangle^{(n)}\{a,b,c\}$ and all the elements of the interior of $\mathcal{STR}^{(n)}\{b,c\}$ are sums of the elements of $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{a,c\}$. From the construction we observe that endomorphisms $\overline{a}$ and $\overline{c}$ do not occur in these sums and every representation of this type is unique.
$\Box$
\textbf{Corollary} \exo/ \textsl{The boundary of an arbitrary triangle $\triangle^{(n)}\{a,b,c\}$ is a multiplicative semigroup but not a semiring.}
\emph{Proof.} Since the boundary of $\triangle^{(n)}\{a,b,c\}$ is a union of strings $\mathcal{STR}^{(n)}\{a,b\}$, $\mathcal{STR}^{(n)}\{a,c\}$ and $\mathcal{STR}^{(n)}\{b,c\}$, from Proposition 8 it follows that this set is a multiplicative semigroup. From the last proposition, it follows that the boundary of $\triangle^{(n)}\{a,b,c\}$ is not closed under the addition.
$\Box$
\textbf{Corollary} \exo/ \textsl{The interior of an arbitrary triangle $\triangle^{(n)}\{a,b,c\}$, where $n \geq 4$, is an additive semigroup but not a semiring.}
\emph{Proof.} From the last proposition and the fact that $\mathcal{STR}^{(n)}\{a,b\}$ and $\mathcal{STR}^{(n)}\{a,c\}$ are semirings, it follows that the interior of the triangle is closed under the addition. If $a > 0$, it follows $\left(ab_b c_{n-b-1}\right)^2 = b_{b+1}c_{n-b-1}$. If $a = 0$ and $b \geq 2$, it follows $\left(0b_{b-1}c_{n-b}\right)^2 = 0c_{n-1}$. If $a = 0$ and $b = 1$, then for $\alpha = 0_21c_{n-3}$ (such an element of the interior exists since $n \geq 4$) the image of $\alpha^2$ consists of only two of the elements $0$, $1$, $c$, so $\alpha^2$ lies on the boundary. Hence, in all the cases the interior of the triangle is not a semiring.
$\Box$
When $n = 3$, the interior of the least triangle $\triangle^{(3)}\{0,1,2\}$ is a one-element semiring and this element is the identity $\mathbf{i} = 123$.
Let $a, b, c, x, y, z \in \mathcal{C}_n$, $a < b < c$ and $x < y < z$. We consider the map
$$\Phi : \triangle^{(n)}\{a,b,c\} \rightarrow \triangle^{(n)}\{x,y,z\}$$
such that $\Phi\left(a_kb_\ell c_{n-k-\ell}\right) = x_ky_\ell z_{n-k-\ell}$, where $k \geq 0$, $\ell \geq 0$ and $k + \ell \leq n$. Obviously, $\Phi$ is order-preserving. Hence, the additive semigroups of any two triangles $\triangle^{(n)}\{a,b,c\}$ and $\triangle^{(n)}\{x,y,z\}$ are isomorphic. But different triangles $\triangle^{(n)}\{a,b,c\}$ and $\triangle^{(n)}\{x,y,z\}$ are nonisomorphic semirings.
\textbf{Proposition} \exo/ \textsl{In arbitrary triangle $\triangle^{(n)}\{a,b,c\}$, where $n > 3$, there is at least one right identity and there are not any left identities.}
\emph{Proof.}
The least idempotent of $\mathcal{STR}^{(n)}\{a,b\}$ is endomorphism $a_bb_{n-b}$ and the least idempotent of $\mathcal{STR}^{(n)}\{a,c\}$ is endomorphism $a_cc_{n-c}$. Their sum is $\varepsilon = a_bb_{c-b}c_{n-c}$. Let $\alpha \in \triangle^{(n)}\{a,b,c\}$. If $\alpha(a) = a$, or $\alpha(a) = b$, or $\alpha(a) = c$, then it follows $(\alpha\cdot \varepsilon)(a) = a$, or $(\alpha\cdot \varepsilon)(a) = b$, or $(\alpha\cdot \varepsilon)(a) = c$, respectively. The same is valid if we replace $a$ with $b$, or $a$ with $c$. So, endomorphism $\varepsilon = a_bb_{c-b}c_{n-c}$ is a right identity of triangle $\triangle^{(n)}\{a,b,c\}$.
If $b > a + 1$, endomorphism $a_{a+1}b_{n-a-1}$ is another (in all the cases, the biggest) idempotent of $\mathcal{STR}^{(n)}\{a,b\}$. Now, in a similar way as above, it is easy to check that sum $a_{a+1}b_{n-a-1} + a_cc_{n-c} = a_{a+1}b_{c-a-1}c_{n-c}$ is another right identity of $\triangle^{(n)}\{a,b,c\}$.
Let us, similarly, suppose that $c > b + 1$. Then endomorphism $a_{b+1}c_{n-b-1}$ is another, different from $a_cc_{n-c}$, idempotent of $\mathcal{STR}^{(n)}\{a,c\}$. Now sum $a_bb_{n-b} + a_{b+1}c_{n-b-1} = a_bbc_{n-b-1}$ is a right identity of $\triangle^{(n)}\{a,b,c\}$.
If there are two right identities of the triangle, then there is no left identity in this semiring. So, we consider the case when neither of the constructions above yields a second right identity, that is, when $\mathcal{STR}^{(n)}\{a,b\}$ has a unique idempotent and the idempotents $a_cc_{n-c}$ and $a_{b+1}c_{n-b-1}$ of $\mathcal{STR}^{(n)}\{a,c\}$ coincide. This is possible only when $b = a + 1$ and $c = a + 2$. Thus, it is enough to prove that there is not a left identity in any triangle $\triangle^{(n)}\{a,a+1,a+2\}$. We consider two cases.
\emph{Case 1.} Let $a \geq 1$ and $\alpha = (a+1)(a+2)_{n-1}$. Then $\alpha(a) = \alpha(a+1) = \alpha(a+2) = a + 2$. Hence, for any
$\beta \in \triangle^{(n)}\{a,a+1,a+2\}$ we find $\beta\cdot \alpha = \overline{a + 2}$. Since $\beta\cdot \overline{a + 2} = \overline{a + 2}$ it follows that
$\alpha \, \sim_\ell \; \overline{a + 2}$. So, there are two left-similar elements of $\triangle^{(n)}\{a,a+1,a+2\}$ and hence there is not a left identity.
\emph{Case 2.} Let $a = 0$. In semiring $\triangle^{(n)}\{0,1,2\}$ we choose endomorphism $\alpha = 1_{n-1}2$. Then for any $\beta \in \triangle^{(n)}\{0,1,2\}$ it follows $\beta\cdot \alpha = \overline{1}$. Since $\beta\cdot \overline{1} = \overline{1}$, we show that $\alpha \, \sim_\ell \; \overline{1}$. So, there are two left-similar elements of $\triangle^{(n)}\{0,1,2\}$ and there is not a left identity in this triangle.
$\Box$
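For example, in the triangle $\triangle^{(6)}\{1,3,4\}$ the construction above gives the right identity $\varepsilon = 1_334_2$ and, since $b = 3 > a+1$, also the second right identity $1_23_24_2$; both of them have $1$, $3$ and $4$ as fixed points.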
Since there is a right identity in semiring $\triangle^{(n)}\{a,b,c\}$, it follows
\textbf{Corollary} \exo/ \textsl{In an arbitrary triangle $\triangle^{(n)}\{a,b,c\}$, where $n > 3$, there are not right-similar endomorphisms.}
Immediately it follows
\textbf{Corollary} \exo/ \textsl{Only in the least triangle $\triangle^{(3)}\{0,1,2\}$ there is an identity $\mathbf{i} = 123$.}
\noindent{\large \bf 5. Layers in a Triangle}
Since any triangle $\triangle^{(n)}\{a,b,c\}$ is a 2--simplex, we define layers as in section 2. Any layer of a triangle is a chain. So, the elements of layer
$\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $k = 0, \ldots, n-1$, are the following $n-k+1$ endomorphisms:
$$a_kb_{n-k} < a_kb_{n-k-1}c < \cdots < a_kc_{n-k}.$$
Similarly, we can represent the elements of $\mathcal{L}^{k}_{b}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\mathcal{L}^{k}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $k = 0, \ldots, n-1$.
When the least and the biggest element of the layer are idempotents, we call this layer a {\emph{basic layer}}. Let us consider layer $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ with respect to vertex $\overline{a}$. Since idempotents of $\mathcal{STR}^{(n)}\{a,b\}$ are endomorphisms
$a_bb_{n-b}, \ldots, a_{a+1}b_{n-a-1}$ and, similarly, idempotents of $\mathcal{STR}^{(n)}\{a,c\}$ are $a_cc_{n-c}, \ldots, a_{a+1}c_{n-a-1}$,
it follows that $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a basic layer only if ${k = a + 1, \ldots, b}$.
All the elements of the layer are endomorphisms $\alpha = a_kb_{n-k-i}c_i$, where ${i = 0, \ldots, n-k}$.
It is easy to see that if $i \leq n - c -1$, then $\alpha(b) = \alpha(c) = b$. Such endomorphisms are called {\emph{left elements}} of the layer. If $n -c \leq i \leq n - b -1$, then $\alpha(b) =b$ and $\alpha(c) = c$, so, $\alpha$ is an idempotent which is a right identity of the triangle. If $i \geq n -b$, then $\alpha(b) = \alpha(c) = c$. Such endomorphisms are called {\emph{right elements}} of the layer. Let $\alpha$ and $\beta$ be left elements of the layer. If $\alpha(x) = b$, where $x \in \mathcal{C}_n$, then $(\alpha\cdot \beta)(x) = \beta(b) = b$. If $\alpha(x) = c$, where $x \in \mathcal{C}_n$, then $(\alpha\cdot \beta)(x) = \beta(c) = b$. Hence, $\alpha\cdot \beta = a_kb_{n-k}$. Similarly, if $\alpha$ and $\beta$ are right elements of the layer, then $\alpha\cdot \beta = a_kc_{n-k}$. Let $\alpha$ be a left element and $\beta$ be a right element of the layer. If $\alpha(x) = b$, where $x \in \mathcal{C}_n$, then $(\alpha\cdot \beta)(x) = \beta(b) = c$. If $\alpha(x) = c$, where $x \in \mathcal{C}_n$, then $(\alpha\cdot \beta)(x) = \beta(c) = c$. Hence, $\alpha\cdot \beta = a_kc_{n-k}$. Similarly, $\beta\cdot \alpha = a_kb_{n-k}$. Thus we obtain
\textbf{Proposition} \exo/ \textsl{Any basic layer $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$, $k = a + 1, \ldots, b$, is a semi\-ring.}
From the proof of the last proposition it follows that in layer $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ the first $n-c$ elements $\alpha$ are left elements, i.e. $\alpha^2 = a_kb_{n-k}$, the next $c-b$ endomorphisms are idempotents and the last $b - k +1$ elements $\alpha$ are right elements, i.e. $\alpha^2 = a_kc_{n-k}$. If there is a string $\mathcal{STR}^{(n_0)}\{a_0,b_0\}$ with $n-c$ $\,a_0$--nilpotent elements, $c -b$ idempotents and $b - k +1$ $\,b_0$--nilpotent elements, then the two semirings $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\mathcal{STR}^{(n_0)}\{a_0,b_0\}$ will be isomorphic. From system $\left|\begin{array}{l} n_0 - b_0 = n -c\\ b_0 - a_0 = c - b\\a_0 + 1 = b - k + 1 \end{array}\right.$ we find $n_0 = n-k$, $a_0 = b -k$ and $b_0 = c - k$. So, we prove
\textbf{Proposition} \exo/ \textsl{For any $n \geq 3$, $a, b, c \in \mathcal{C}_n$, $a < b < c$ and $k = a+1, \ldots, b$, semirings $\mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\mathcal{STR}^{(n-k)}\{b-k,c-k\}$ are isomorphic.}
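For instance, the layer $\mathcal{L}^{2}_{1}\left(\triangle^{(6)}\{1,3,4\}\right)$, whose five elements are $1_23_4 < 1_23_34 < 1_23_24_2 < 1_234_3 < 1_24_4$, is isomorphic to the string $\mathcal{STR}^{(4)}\{1,2\}$.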
The basic layers $\mathcal{L}^{k}_{b}\left(\triangle^{(n)}\{a,b,c\}\right)$ are not closed under the multiplication in the general case. For instance, see fig. 1, where the layer $\mathcal{L}^{2}_{2}\left(\triangle^{(4)}\{1,2,3\}\right)$ is a basic layer, but for its middle element $12_23$ we find $(12_23)^2 = 2_33 \notin \mathcal{L}^{2}_{2}\left(\triangle^{(4)}\{1,2,3\}\right)$.
Now let us consider layer $\mathcal{L}^{k}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ with respect to vertex $\overline{c}$. Since idempotents of $\mathcal{STR}^{(n)}\{a,c\}$ are $a_cc_{n-c}, \ldots, a_{a+1}c_{n-a-1}$ and idempotents of $\mathcal{STR}^{(n)}\{b,c\}$ are $b_cc_{n-c}, \ldots, b_{b+1}c_{n-b-1}$, it follows that $\mathcal{L}^{k}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a basic layer only if ${k = n - c, \ldots, n - b -1}$.
All the elements of the layer are endomorphisms $\alpha = a_ib_{n-k-i}c_k$, where ${i = 0, \ldots, n-k}$. If $b+1 \leq i \leq n-k$, then $\alpha(a) = \alpha(b) = a$. We call these endomorphisms (as in the previous case) left elements of the layer. When $a+ 1 \leq i \leq b$, it follows $\alpha(a) = a$, $\alpha(b) = b$, that is $\alpha$ is an idempotent which is a right identity of the triangle. If $0 \leq i \leq a$, then $\alpha(a) = \alpha(b) = b$. These endomorphisms are the right elements of the layer. By the same way, as for the basic layers with respect to vertex $\overline{a}$, we prove here that:
1. If $\alpha$ and $\beta$ are left elements of the layer, then $\alpha\cdot \beta = a_{n-k}c_{k}$.
2. If $\alpha$ and $\beta$ are right elements of the layer, then $\alpha\cdot \beta = b_{n-k}c_{k}$.
3. If $\alpha$ is a left but $\beta$ is a right element of the layer, then $\alpha\cdot \beta = b_{n-k}c_{k}$ and $\beta\cdot \alpha = a_{n-k}c_{k}$
So, we obtain
\textbf{Proposition} \exo/ \textsl{Any basic layer $\mathcal{L}^{k}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $k = n - c, \ldots, n-b-1$, is a semi\-ring.}
Now we search for a string $\mathcal{STR}^{(n_0)}\{a_0,b_0\}$ with $n-k-b$ $\;a_0$--nilpotent elements, $b-a$ idempotents and $a + 1$ $\;b_0$--nilpotent elements. It is easy to find that $n_0 = n -k$, $a_0 = a$ and $b_0 = b$. Thus we prove
\textbf{Proposition} \exo/ \textsl{For any $n \geq 3$, $a, b, c \in \mathcal{C}_n$, $a < b < c$ and $k = n-c, \ldots, n-b-1$, semirings $\mathcal{L}^{k}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\mathcal{STR}^{(n-k)}\{a,b\}$ are isomorphic.}
In triangle $\triangle^{(4)}\{1,2,3\}$ (fig. 1) we obtain that there are many left-similar endomorphisms. In order to prove that there are such elements in any triangle $\triangle^{(n)}\{a,b,c\}$ we first consider the case $a > 0$. Then $ac_{n-1} \sim_\ell \overline{c}$. Actually, if $\alpha = ac_{n-1}$, easily follows that $\alpha(a) = \alpha(b) = \alpha(c) = c$. Thus, for any $\beta \in \triangle^{(n)}\{a,b,c\}$ we have $\beta\cdot \alpha = \overline{c} = \beta\cdot \overline{c}$, that is $\alpha \sim_\ell \overline{c}$. Note that $\alpha$ and $\overline{c}$ are not right identities of the triangle.
Let $a = 0$. In $\triangle^{(n)}\{0,b,c\}$ we consider the biggest basic layer $\mathcal{L}^{1}_{0}\left(\triangle^{(n)}\{0,b,c\}\right)$. The elements of this layer are $0b_{n-1} < \cdots < 0bc_{n-2} < 0c_{n-1}$. Now, if $b > 1$, it follows that $0bc_{n-2} \sim_\ell 0c_{n-1}$. Indeed, let $\alpha = 0bc_{n-2}$. Since $\alpha(0) = 0$, $\alpha(b) = \alpha(c) = c$, it follows that if $\beta \in \triangle^{(n)}\{0,b,c\}$, then $\beta\cdot \alpha = \beta\cdot 0c_{n-1}$, i.e. $\alpha \sim_\ell 0c_{n-1}$. Note that $\alpha$ and $0c_{n-1}$ are not right identities of the triangle.
Now let us consider triangle $\triangle^{(n)}\{0,1,c\}$ and endomorphisms $01_{n-1}$ and $01_{n-2}c$. Let $c < n-1$. Then for $\alpha = 01_{n-2}c$ we have $\alpha(0) = 0$, $\alpha(b) = \alpha(c) = 1$. So, for any $\beta \in \triangle^{(n)}\{0,1,c\}$, it follows $\beta\cdot \alpha = \beta\cdot 01_{n-1}$, that is $01_{n-2}c \sim_\ell 01_{n-1}$. Note that these endomorphisms are not right identities.
Finally let $c = n-1$. We consider $\triangle^{(n)}\{0,1,n-1\}$, where $n > 3$. Note, that in the least triangle $\triangle^{(3)}\{0,1,2\}$ there are not left-similar elements since there is an identity. Now let us consider endomorphisms $\alpha = 0_{n-1}1$ and $\beta = 0_{n-2}1_2$. We have
$\alpha(0) = \beta(0) = 0$, $\alpha(1) = \beta(1) = 0$, $\alpha(n-1) = \beta(n-1) = 1$. So, for any $\gamma \in \triangle^{(n)}\{0,1,n-1\}$, it follows $\gamma\cdot \alpha = \gamma\cdot \beta$, i.e. $\alpha \sim_\ell \beta$. Note that $\alpha$ and $\beta$ are not right identities. Hence, we prove
\textbf{Proposition} \exo/ \textsl{For any $n > 3$ the triangle $\triangle^{(n)}\{a,b,c\}$ contains a pair of elements which are left-similar endomorphisms and are not right identities.}
\noindent{\large \bf 6. Idempotents and Nilpotent Elements of a Triangle}
By a {\emph{boundary idempotent}} of triangle $\triangle^{(n)}\{a,b,c\}$ we understand an idempotent of any of the strings of this triangle. The idempotent of the interior of the triangle is called {\emph{interior idempotent}}. From Proposition 17, it follows that the set of all boundary idempotents is closed under the multiplication.
\textbf{Proposition} \exo/ \textsl{The interior idempotents of the triangle $\triangle^{(n)}\{a,b,c\}$ are just the right identities of the triangle.}
\emph{Proof.} As we know, [2], endomorphism $\alpha \in \widehat{\mathcal{E}}_{\mathcal{C}_n}$ with $s$ fixed points $k_1, \ldots, k_s$, ${1 \leq s \leq
n-1}$, is an idempotent if and only if $\mathop{\rm Im}\nolimits(\alpha) = \{k_1, \ldots, k_s\}$. So, for any interior idempotent of $\triangle^{(n)}\{a,b,c\}$, it follows that $a$, $b$ and $c$ are fixed points of $\alpha$. Then, for any $\beta \in \triangle^{(n)}\{a,b,c\}$ and $x \in \mathcal{C}_n$ easily follows $(\beta\cdot \alpha)(x) = \alpha(\beta(x)) = \beta(x)$. Hence, $\alpha$ is a right identity of $\triangle^{(n)}\{a,b,c\}$.
Conversely, if $\alpha$ is a right identity, then, obviously, $\alpha$ is an idempotent. If we assume that $\alpha$ is a boundary idempotent, say $\alpha = a_kb_{n-k}$, where $k = a+1, \ldots, b$, then $\overline{c}\cdot a_kb_{n-k} = \overline{b}$. This contradicts the choice that $\alpha$ is a right identity.
$\Box$
The set of right identities of $\triangle^{(n)}\{a,b,c\}$ is denoted by $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$.
For any triangle $\triangle^{(n)}\{a,b,c\}$ endomorphism $a_{a+1}c_{n-a-1}$ is an idempotent of $\mathcal{STR}^{(n)}\{a,c\}$ and endomorphism $b_{b+1}c_{n-b-1}$ is an idempotent of $\mathcal{STR}^{(n)}\{b,c\}$. Now, it follows that $a_{a+1}c_{n-a-1} + b_{b+1}c_{n-b-1} = b_{a+1}c_{n-a-1} \in N^{[c]}\left(\mathcal{STR}^{(n)}\{b,c\}\right)$ since $(b_{a+1}c_{n-a-1})^2 = \overline{c}$. So, the set of idempotents of any triangle is not a semiring. But this does not mean that adding idempotents is a ``bad idea''.
Any discrete neighborhood of a vertex of the triangle $\triangle^{(n)}\{a,b,c\}$ (see fig. 2) can be represented as a triangle in a geometrical sense. So, such a subset of $\triangle^{(n)}\{a,b,c\}$ is called a {\emph{geometric triangle}}. In fig. 2 the endomorphisms $a_{n-m}c_m$, $b_{n-m}c_m$ and $\overline{c}$ are the ``vertices'', and the subsets of $\mathcal{STR}^{(n)}\{a,c\}$ and $\mathcal{STR}^{(n)}\{b,c\}$ consisting of $n-m+1$ endomorphisms, together with the layer $\mathcal{L}^{m}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, are the ``sides'' of this geometric triangle.
Similarly, the subsets depicted in figures 3 and 4 are also called geometric triangles.
\centerline{\small \hphantom{aaaa} Figure 2 \hphantom{aaaaaaaaaaaaaaaaaa} Figure 3 \hphantom{aaaaaaaaaaaaaaaaaaaa} Figure 4 \hphantom{aaaaa}}
Geometric triangles are, in general, not semirings. But some of them are semirings, for example, geometric triangles
$\mathcal{DN}^{1}_a = \{\overline{a}, a_{n-1}b, a_{n-1}c\}$, $\mathcal{DN}^{1}_b = \{\overline{b}, ab_{n-1}, b_{n-1}c\}$ and $\mathcal{DN}^{1}_c = \{\overline{c}, ac_{n-1}, bc_{n-1}\}$ are subsemirings of $\triangle^{(n)}\{a,b,c\}$ from Proposition 2.
From Theorem 5 it follows that the discrete neighborhood of vertex $\overline{a}$ of $\triangle^{(n)}\{a,b,c\}$ containing the biggest layer $\mathcal{L}^{a+1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ is semiring $\mathcal{DN}^{n-a-1}_a = \triangle^{(n)}\{a,b,c\}\cap {\mathcal{E}}^{(a)}_{\mathcal{C}_n}$. This semiring is a geometric triangle whose ``vertices'' are the idempotent endomorphisms $\overline{a}$, $a_{a+1}b_{n-a-1}$ and $a_{a+1}c_{n-a-1}$. The ``sides'' of this geometric triangle are also semirings, since $\mathcal{L}^{a+1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a semiring (see Proposition 27) and the other ``sides'' are semirings of type $A_{a+1}$ (see the end of section 3).
Similarly, from Theorem 5 we know that the discrete neighborhood of the vertex $\overline{c}$ of $\triangle^{(n)}\{a,b,c\}$ containing the biggest layer $\mathcal{L}^{n-c}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ is semiring
$\mathcal{DN}^{c}_c = \triangle^{(n)}\{a,b,c\}\cap {\mathcal{E}}^{(c)}_{\mathcal{C}_n}$. This semiring is also a geometric triangle whose ``vertices'' are the idempotent endomorphisms $a_cc_{n-c}$, $b_cc_{n-c}$ and $\overline{c}$.
The intersection $\mathcal{DN}^{n-a-1}_a\cap \mathcal{DN}^{c}_c$ is a semiring consisting of all the endomorphisms with fixed points $a$ and $c$. Thus we construct a new geometric triangle whose ``vertices'' are the idempotent endomorphisms $a_cc_{n-c}$, $a_{a+1}b_{c-a-1}c_{n-c}$ and $a_{a+1}c_{n-a-1}$. This semiring is called an {\emph{idempotent triangle}} of $\triangle^{(n)}\{a,b,c\}$ and is denoted by $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.
\textbf{Theorem} \exo/ \textsl{For any triangle $\triangle^{(n)}\{a,b,c\}$, $n \geq 3$, the set of right identities $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a subsemiring of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ of order $(b-a)(c-b)$. The set $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a subsemiring of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ of order $ \frac{1}{2}((c-b)^2 + (b-a)^2 + c- a)$. The semirings $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ and $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ are ideals of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.}
\emph{Proof}. Let $\alpha = a_kb_{n-k-j}c_j$ be a right identity. From the last proposition it follows that $a$, $b$ and $c$ are the fixed points of $\alpha$. Since $\alpha(a) = a$, it follows $k \geq a + 1$. Since $\alpha(c) = c$, we have $j \geq n-c$. Finally, since $\alpha(b) = b$, it follows $a+1 \leq k \leq b$, $n-c \leq j \leq n - b - 1$ and $n - k - j \geq 1$. Hence, $a_kb_{n-k-j}c_j = a_kb_{n-k} + a_{n-j}c_j$, where $a_kb_{n-k} \in Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$ and $a_{n-j}c_{j} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. Actually we showed that $$\left\{a_kb_{n-k-j}c_j\right\} = \mathcal{L}^{k}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)\cap \mathcal{L}^{j}_{c}\left(\triangle^{(n)}\{a,b,c\}\right),$$
where $k = a+1, \ldots, b$ and
$j = n - c, \ldots, n-b-1$. Thus we prove that any right identity is the intersection of a basic layer with respect to $\overline{a}$ and a basic layer with respect to $\overline{c}$. Since there are $b-a$ basic layers with respect to $\overline{a}$ and $c-b$ basic layers with respect to $\overline{c}$, it follows that there are exactly $(b-a)(c-b)$ right identities. Since all elements of $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ are right identities of $\triangle^{(n)}\{a,b,c\}$, it follows that the set $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is closed under the multiplication.
Let $\alpha \in \mathcal{L}^{k_1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$, $\beta \in \mathcal{L}^{k_2}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\alpha, \beta \in \mathcal{L}^{\ell}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $k_1, k_2 = a+1, \ldots, b$ and $\ell = n - c, \ldots, n-b-1$. If we assume that $k_1 \leq k_2$, then $\alpha + \beta = \alpha$. Similarly, if $\alpha$ and $\beta$ are endomorphisms of the same layer with respect to $\overline{a}$ and $\alpha \in \mathcal{L}^{\ell_1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, $\beta \in \mathcal{L}^{\ell_2}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $\ell_1 \geq \ell_2$, then $\alpha + \beta = \alpha$. Finally, let
$\alpha = \mathcal{L}^{k_1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)\cap \mathcal{L}^{\ell_1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ and
$\beta = \mathcal{L}^{k_2}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)\cap \mathcal{L}^{\ell_2}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $k_1 \leq k_2$ and $\ell_1 \leq \ell_2$. Then we take $\gamma = \mathcal{L}^{k_1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)\cap \mathcal{L}^{\ell_2}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\alpha + \beta = \gamma$. Hence, $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a subsemiring of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ of order $(b-a)(c-b)$.
Since $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right) \subset \mathcal{DN}^{n-a-1}_a$, it follows that there are not any $b$--nilpotent and $c$--nilpotent endomorphisms of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.
Since $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right) \subset \mathcal{DN}^{c}_c$ it follows that there are not any $a$--nilpotent endomorphisms of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$. But we prove in Proposition 29 that all the left elements from some basic layer with respect to $\overline{c}$ are square roots of endomorphisms of $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. So, the idempotent triangle consist of all the right identities, all the elements of $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ and all the square roots of elements of $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. The vertices of the idempotent triangle are $a_cc_{n-c}$, $a_{a+1}b_{c-a-1}c_{n-c}$ and $a_{a+1}c_{n-a-1}$. So, any ``side'' of this geometric triangle consists of $c-a$ elements. Then there are $\frac{1}{2}(c-a)(c-a+1)$ endomorphisms of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$. Thus it follows that the elements of set $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ are\\ $\frac{1}{2}(c-a)(c-a+1) - (c-b)(b-a) = \frac{1}{2}((c-b)^2 + (b-a)^2 + c- a)$.
Now we consider a partition of idempotent triangle $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ into three parts. The first one is a geometric triangle with ``vertices'' $a_cc_{n-c}$, $a_{b+1}b_{c-b-1}c_{n-c}$ and $a_{b+1}c_{n-b-1}$. This triangle is a subset of $\mathcal{DN}^{c}_c$ and there are not any common elements of triangle and the basic layers with respect to $\overline{a}$. Since all the endomorphisms of the triangle with an exception of elements $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ are left elements of the basic layers with respect to $\overline{c}$, the triangle is denoted by $L_{\triangle}$.
The second part of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ is a geometric triangle with ``vertices'' $a_bc_{n-b}$, $a_{a+1}b_{b-a-1}c_{n-b}$ and $a_{a+1}c_{n-a-1}$. This triangle is a subset of $\mathcal{DN}^{n-a-1}_a$ and there are not any common elements of the triangle and the basic layers with respect to $\overline{c}$. Since all the endomorphisms of the triangle with an exception of elements $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ are right elements of the basic layers with respect to $\overline{a}$, the triangle is denoted by $R_{\triangle}$. The third part of the idempotent triangle is semiring $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ whose elements are the intersections of all the basic layers with respect to $\overline{a}$ and $\overline{c}$. So, $L_{\triangle}\cup R_{\triangle} = \mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$.
Let $\alpha \in L_{\triangle}$ and $\beta \in R_{\triangle}$. Then $\alpha = a_mb_{\ell-m}c_{n-\ell}$, where $\ell = b + 1, \ldots, c$, $\ell - m \geq 0$ and $\alpha(a) = \alpha(b) = a$. Similarly, $\beta = a_mb_{\ell-m}c_{n-\ell}$, where $m = a+1, \ldots, b$, $\ell - m \geq 0$ and $\beta(b) = \beta(c) = c$.
Let $a_kc_{n-k} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. Then $k = a+1, \ldots, c$. We find:
1. $a_kc_{n-k}\cdot \alpha = a_kc_{n-k}$ for $\alpha \in L_{\triangle}$.
2. $a_kc_{n-k}\cdot \beta = a_kc_{n-k}$ for $\beta \in R_{\triangle}$.
3. $a_kc_{n-k}\cdot \varepsilon = a_kc_{n-k}$ for $\varepsilon \in \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$.
So, the elements of semiring $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ are left zeroes of semiring $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.
Now we calculate:
4. $\alpha\cdot a_kc_{n-k} = a_mb_{\ell-m}c_{n-\ell}\cdot a_kc_{n-k} = a_{\ell}c_{n-\ell}$ for $k = b+1, \ldots, c$ and $\alpha\cdot a_kc_{n-k} = a_{m}c_{n-m}$ for $k = a+1, \ldots, b$; in both cases the product belongs to $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
5. $\beta\cdot a_kc_{n-k} = a_mb_{\ell-m}c_{n-\ell}\cdot a_kc_{n-k} = a_{m}c_{n-m}$ for $k = a+1, \ldots, b$ and $\beta\cdot a_kc_{n-k} = a_{\ell}c_{n-\ell}$ for $k = b+1, \ldots, c$; again both products belong to $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
6. For any $\varepsilon = a_tb_{n-t-j}c_j \in \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $t = a+1, \ldots, b$ and\\ $j = n-c, \ldots, n-b-1$ it follows
$\varepsilon\cdot a_kc_{n-k} = \left\{ \begin{array}{ll} a_tc_{n-t}& \mbox{for}\;\; k = a+1, \ldots, b\\ a_{n-j}c_j & \mbox{for}\; \; k = b+1, \ldots, c \end{array} \right. $. Thus, in all cases $\varepsilon\cdot a_kc_{n-k} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. Hence, the semiring $Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$ is an ideal of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.
In order to prove that $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is an ideal of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$ we find:
7. For any $\alpha = a_mb_{\ell-m}c_{n-\ell} \in L_{\triangle}$ and $\alpha_1 = a_{m_1}b_{{\ell_1}-m_1}c_{n-{\ell_1}} \in L_{\triangle}$, where $\ell, \ell_1 = b + 1, \ldots, c$, it follows $\alpha\cdot \alpha_1 = a_{\ell}c_{n-\ell} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
8. For any $\beta = a_mb_{\ell-m}c_{n-\ell} \in R_{\triangle}$ and $\beta_1 = a_{m_1}b_{{\ell_1}-m_1}c_{n-{\ell_1}} \in R_{\triangle}$, where
$m, m_1 = a + 1, \ldots, b$, it follows $\beta\cdot \beta_1 = a_{m}c_{n-m} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
9. For any $\alpha = a_mb_{\ell-m}c_{n-\ell} \in L_{\triangle}$ and $\beta = a_{m_1}b_{{\ell_1}-m_1}c_{n-{\ell_1}} \in R_{\triangle}$ it follows $\alpha\cdot \beta = a_{m}c_{n-m} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
10. For the same $\alpha$ and $\beta$ it follows $\beta\cdot \alpha = a_{\ell_1}c_{n-\ell_1} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)$.
11. For any $\alpha \in L_{\triangle}$, $\beta \in R_{\triangle}$ and $\varepsilon \in \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ it follows $\alpha\cdot \varepsilon = \alpha$ and $\beta\cdot \varepsilon = \beta$.
12. For any $\varepsilon = a_tb_{n-t-j}c_j \in \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$, where $t = a+1, \ldots, b$ and\\ ${j = n-c, \ldots, n-b-1}$, $\alpha = a_kb_{\ell-k}c_{n-\ell} \in L_{\triangle}$, where $\ell = b + 1, \ldots, c$ and\\ $\beta = a_mb_{\ell-m}c_{n-\ell} \in R_{\triangle}$, where $m = a + 1, \ldots, b$, it follows
$$\varepsilon\cdot \alpha = a_tb_{n-t-j}c_j\cdot a_kb_{\ell-k}c_{n-\ell} = a_{n-j}c_j \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right)\; \mbox{and}\;$$
$$\varepsilon\cdot \beta = a_tb_{n-t-j}c_j\cdot a_mb_{\ell-m}c_{n-\ell} = a_tc_{n-t} \in Id\left(\mathcal{STR}^{(n)}\{a,c\}\right).$$
The endomorphisms $\alpha$ of $L_{\triangle}$ are characterized in triangle $\triangle^{(n)}\{a,b,c\}$ by equalities: $\alpha(a) = \alpha(b) = a$, $\alpha(c) = c$. So, if $\alpha, \beta \in L_{\triangle}$, then $(\alpha + \beta)(a) = (\alpha + \beta)(b) = a$ and $(\alpha + \beta)(c) = c$, i.e. $\alpha + \beta \in L_{\triangle}$. Similar reasoning applies to the triangle $R_{\triangle}$.
The biggest endomorphism of $L_{\triangle}$ is the vertex of this triangle $a_{b+1}c_{n-b-1}$. The least endomorphism of $R_{\triangle}$ is the vertex of the triangle $a_bc_{n-b}$. So, for any $\alpha \in L_{\triangle}$ and $\beta \in R_{\triangle}$ it follows $\alpha \leq a_{b+1}c_{n-b-1} < a_bc_{n-b} \leq \beta$. Hence, $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)\backslash \mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ is an ideal of $\mathcal{IT}\left(\triangle^{(n)}\{a,b,c\}\right)$.
$\Box$
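As a numerical illustration, for the triangle $\triangle^{(6)}\{1,3,4\}$ the idempotent triangle has $\frac{1}{2}(c-a)(c-a+1) = 6$ elements, $(b-a)(c-b) = 2$ of which are the right identities $1_334_2$ and $1_23_24_2$, while the remaining $\frac{1}{2}((c-b)^2 + (b-a)^2 + c - a) = 4$ elements form the ideal $\mathcal{IT}\left(\triangle^{(6)}\{1,3,4\}\right)\backslash \mathcal{RI}\left(\triangle^{(6)}\{1,3,4\}\right)$.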
From the proof of the last theorem it follows
\textbf{Corollary} \exo/ \textsl{The geometric triangles $L_{\triangle}$ and $R_{\triangle}$ are subsemirings of triangle $\triangle^{(n)}\{a,b,c\}$.}
An immediate consequence of the fact above is
\textbf{Corollary} \exo/ \textsl{For any triangle $\triangle^{(n)}\{a,b,c\}$, $n \geq 3$, the idempotent triangle is a disjoint union of subsemirings
$L_{\triangle}$, $R_{\triangle}$ and $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$.}
Similarly to geometric triangles, we can consider {\emph{geometric parallelograms}} and {\emph{geometric trapezoids}}. For example, semiring $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$ can be represented as a geometric parallelogram whose ``vertices'' are endomorphisms $a_bb_{c-b}c_{n-c}$, $a_{a+1}b_{c-a-1}c_{n-c}$, $a_{a+1}b_{b-a}c_{n-b-1}$ and $a_bbc_{n-b-1}$. (Note that exactly the last endomorphism is a boundary between the triangles $L_{\triangle}$ and $R_{\triangle}$.) The ``sides'' of this parallelogram are the idempotent parts of basic layers $\mathcal{L}^{n-c}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, $\mathcal{L}^{n-b-1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$, $\mathcal{L}^{a+1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ and $\mathcal{L}^{b}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$.
Now we consider the $a$--nilpotent elements of triangle $\triangle^{(n)}\{a,b,c\}$. The set of all $a$--nilpotent elements of this triangle is denoted by $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$. Since $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right) = N^{[a]}_n\cap \triangle^{(n)}\{a,b,c\}$, similarly to Proposition 9, it follows
\textbf{Proposition} \exo/ \textsl{The set $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$ is a subsemiring of $\triangle^{(n)}\{a,b,c\}$.}
The semiring $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$ can be represented as a geometric trapezoid whose ``vertices'' are endomorphisms $\overline{a}$, $a_{b+1}b_{n-b-1}$, $a_{b+1}b_{c-b}c_{n-c-1}$ and $a_{c+1}c_{n-c-1}$ and whose ``sides'' are semiring $N^{[a]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, the subset of $n-c$ endomorphisms $\alpha$ from the left part of $\mathcal{L}^{b+1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\alpha(b) = a, \alpha(c) = b$, the subset of $c-b+1$ endomorphisms $\beta$ from the left part of $\mathcal{L}^{n-c-1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\beta(b) = a$ and $\beta(c) \leq b$, and semiring $N^{[a]}\left(\mathcal{STR}^{(n)}\{a,c\}\right)$. We find, as in the proof of the last theorem, that the order of this semiring is $\left|N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)\right| = \frac{1}{2}(n-c)(n + c - 2b + 1)$.
In the same way we can construct the following semirings:\\ $N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right) = N^{[b]}_n\cap \triangle^{(n)}\{a,b,c\}$ and $N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right) = N^{[c]}_n\cap \triangle^{(n)}\{a,b,c\}$.
The semiring $N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right)$ can be represented as a geometric parallelogram whose ``vertices'' are endomorphisms $a_ab_{n-a}$, $\overline{b}$, $b_{c+1}c_{n-c-1}$ and $a_ab_{c-a+1}c_{n-c-1}$ and whose ``sides'' are semiring $N^{[b]}\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, semiring $N^{[b]}\left(\mathcal{STR}^{(n)}\{b,c\}\right)$, the subset of $a+1$ endomorphisms $\alpha$ from the right part of $\mathcal{L}^{n-c-1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\alpha(a) = \alpha(b) = \alpha(c) = b$ and the subset of $n-c$ endomorphisms $\beta$ from the left part of $\mathcal{L}^{a}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\beta(a) = \beta(b) = \beta(c) = b$. The order of this semiring is $\left|N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right)\right| = (a+1)(n-c)$.
Finally, the semiring $N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right)$ can be represented as a geometric trapezoid whose ``vertices'' are endomorphisms $b_bc_{n-b}$, $\overline{c}$, $a_{a}c_{n-a}$ and $a_ab_{b-a}c_{n-b}$ and whose ``sides'' are semiring $N^{[c]}\left(\mathcal{STR}^{(n)}\{b,c\}\right)$, semiring $N^{[c]}\left(\mathcal{STR}^{(n)}\{a,c\}\right)$, the subset of $b-a+1$ endomorphisms $\alpha$ from the right part of $\mathcal{L}^{a}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\alpha(b) = \alpha(c) = c$ and the subset of $a+1$ endomorphisms $\beta$ from the right part of $\mathcal{L}^{n-b}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ such that $\beta(a) = b, \beta(b) = \beta(c) = c$. The order of this semiring is $\left|N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right)\right| = \frac{1}{2}(a+1)(2b-a+2)$.
\textbf{Proposition} \exo/ \textsl{The semirings $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$, $N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right)$ and $N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right)$ are trivial.}
\emph{Proof.} Let $\alpha \in N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$. Then $a$ is a unique fixed point of $\alpha$. If we assume that $\alpha(b) = c$, then $c \geq \alpha(c) \geq \alpha(b) =c$ implies $\alpha(c) = c$, which is impossible. So, $\alpha(b) = a$. Now $\alpha(c) = b$, or $\alpha(c) = a$, which means that $\alpha = \overline{a}$. For any $\beta \in N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$ we find $(\alpha\cdot\beta)(a) = a$, $(\alpha\cdot\beta)(b) = \beta(\alpha(b)) = \beta(a) = a$ and $(\alpha\cdot\beta)(c) = \beta(\alpha(c)) = \beta(b) = a$. Hence, $\alpha\cdot \beta = \overline{a}$ and $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$ is a trivial semiring. The same argument shows that $N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right)$ and $N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right)$ are also trivial semirings.
$\Box$
\emph{{Example}} \exo/ Let us consider triangle $\triangle^{(6)}\{1,3,4\}$. Figure 5 illustrates the semirings of 1--nilpotent, 3--nilpotent and 4--nilpotent endomorphisms, an idempotent triangle and right identities. Here we observe that $\triangle^{(6)}\{1,3,4\}$ can be represented as a union of the following semirings: $N^{[1]}\left( \triangle^{(6)}\{1,3,4\}\right)$, $N^{[3]}\left( \triangle^{(6)}\{1,3,4\}\right)$, $N^{[4]}\left( \triangle^{(6)}\{1,3,4\}\right)$, $\mathcal{L}^{2}_{1}\left(\triangle^{(6)}\{1,3,4\}\right)$, $\mathcal{L}^{3}_{1}\left(\triangle^{(6)}\{1,3,4\}\right)$ and $\mathcal{L}^{2}_{4}\left(\triangle^{(6)}\{1,3,4\}\right)$. But this union is not disjoint since the right identities are intersections of the basic layers of the triangle.
\centerline{\small Figure 5.}
In order to represent $\triangle^{(n)}\{a,b,c\}$ as a disjoint union of its subsemirings, we look for new subsemirings. Now we consider the set of all the left elements of the basic layers with respect to $\overline{a}$. This set can be represented as a geometric parallelogram whose ``vertices'' are endomorphisms $a_bb_{n-b}$, $a_{a+1}b_{n-a-1}$, $a_{a+1}b_{c-a}c_{n-c-1}$ and $a_bb_{c-b+1}c_{n-c-1}$. The ``sides'' of this parallelogram are semiring $Id\left(\mathcal{STR}^{(n)}\{a,b\}\right)$, the set of the left elements of the biggest basic layer $\mathcal{L}^{a+1}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$, the subset of $b-a$ endomorphisms $\alpha$ of the layer $\mathcal{L}^{n-c-1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ with fixed points $a$ and $b$ and the set of the left elements of the least basic layer $\mathcal{L}^{b}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$. We denote this parallelogram by $L_{{par}}$. Then $\left|L_{{par}}\right| = (b-a)(n-c)$.
Similarly, we consider the set of all the right elements of the basic layers with respect to $\overline{c}$. This set can also be represented as a geometric parallelogram whose ``vertices'' are endomorphisms $a_ab_{c-a}c_{n-c}$, $b_{c}c_{n-c}$, $b_{b+1}c_{n-b-1}$ and $a_ab_{b-a+1}c_{n-b-1}$. The ``sides'' of this parallelogram are the set of the right elements of the biggest basic layer $\mathcal{L}^{n-c}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$,
the semiring $Id\left(\mathcal{STR}^{(n)}\{b,c\}\right)$, the set of the right elements of the least basic layer $\mathcal{L}^{n-b-1}_{c}\left(\triangle^{(n)}\{a,b,c\}\right)$ and
the subset of $c-b$ endomorphisms $\alpha$ of the layer $\mathcal{L}^{a}_{a}\left(\triangle^{(n)}\{a,b,c\}\right)$ with fixed points $b$ and $c$. We denote this parallelogram by $R_{{par}}$. Then $\left|R_{{par}}\right| = (a+1)(c-b)$.
\textbf{Proposition} \exo/ \textsl{The geometric parallelograms $L_{{par}}$ and $R_{{par}}$ are subsemirings of triangle $\triangle^{(n)}\{a,b,c\}$.}
\emph{Proof.} We shall prove only that $L_{{par}}$ is a semiring, since the proof for $R_{{par}}$ is the same. Let $\alpha, \beta \in L_{{par}}$. Then $\alpha(a) = \beta(a) = a$ and $\alpha(b) = \alpha(c) = \beta(b) = \beta(c) = b$. It is evident that $(\alpha + \beta)(a) = a$ and $(\alpha + \beta)(b) = (\alpha + \beta)(c) = b$. For the product we have $(\alpha\cdot \beta)(a) = a$, $(\alpha\cdot \beta)(b) = \beta(\alpha(b)) = \beta(b) = b$ and $(\alpha\cdot \beta)(c) = \beta(\alpha(c)) = \beta(b) = b$.
$\Box$
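The same kind of enumeration confirms the two orders just obtained. The short check below is ours; it reuses the helper \texttt{triangle} from the sketch above and assumes, dually to the condition used for $L_{{par}}$ in the proof, that $R_{{par}}$ consists exactly of the endomorphisms with $\alpha(c)=c$ and $\alpha(a)=\alpha(b)=b$.
\begin{verbatim}
# Sketch (not from the paper): verify |L_par| and |R_par| for one triangle.
n, a, b, c = 6, 1, 3, 4
T = triangle(n, a, b, c)            # helper from the previous sketch
L_par = [f for f in T if f[a] == a and f[b] == b and f[c] == b]
R_par = [f for f in T if f[c] == c and f[a] == b and f[b] == b]
print(len(L_par), (b - a)*(n - c))   # both 4 for this example
print(len(R_par), (a + 1)*(c - b))   # both 2
\end{verbatim}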
As we have seen in the last proofs, any endomorphism $\alpha$ of some subsemiring of the triangle can be characterized by the ordered triple $(x,y,z)$, where $\alpha(a) = x$, $\alpha(b) = y$, $\alpha(c) = z$ and $x, y, z \in \{a,b,c\}$. This triple is called a {\emph{type}} of the semiring.
Now we can summarize the results of Theorem 33, Corollaries 34 and 35, and Propositions 36 and 39, and arrange the following ``puzzle'' -- fig. 6, where we record the types of the semirings.
\centerline{\small Figure 6.}
In this way we have actually proved the following
\textbf{Theorem} \exo/ \textsl{Any triangle $\triangle^{(n)}\{a,b,c\}$, $n \geq 3$, is a disjoint union of the following subsemirings of the triangle:
$N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$, $N^{[b]}\left( \triangle^{(n)}\{a,b,c\}\right)$, $N^{[c]}\left( \triangle^{(n)}\{a,b,c\}\right)$, $L_{{par}}$, $R_{{par}}$, $L_{\triangle}$, $R_{\triangle}$ and $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$.}
As a direct consequence of the last theorem, it follows that the semiring $\triangle^{(n)}\{a,b,c\}\cap {\mathcal{E}}^{(b)}_{\mathcal{C}_n}$ is a disjoint union of the semirings $N^{[a]}\left( \triangle^{(n)}\{a,b,c\}\right)$, $L_{{par}}$, $R_{{par}}$ and $\mathcal{RI}\left(\triangle^{(n)}\{a,b,c\}\right)$, that is, this semiring can be represented as a geometric parallelogram consisting of the four parallelograms corresponding to these subsemirings -- fig. 6.
\end{document} |
\begin{document}
\date\today
\title[A note on exhaustion of hyperbolic complex manifolds]{A note on exhaustion of hyperbolic complex manifolds}
\author{Ninh Van Thu\textit{$^{1,2}$} and Trinh Huy Vu\textit{$^{1}$}}
\address{Ninh Van Thu}
\address{\textit{$^{1}$}~Department of Mathematics, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam}
\address{\textit{$^{2}$}~Thang Long Institute of Mathematics and Applied Sciences,
Nghiem Xuan Yem, Hoang Mai, Hanoi, Vietnam}
\email{thunv@vnu.edu.vn}
\address{Trinh Huy Vu}
\address{\textit{$^{1}$}~Department of Mathematics, Vietnam
National University at Hanoi, 334 Nguyen Trai str., Hanoi, Vietnam}
\email{trinhhuyvu1508@gmail.com}
\subjclass[2010]{Primary 32H02; Secondary 32M05, 32F18.}
\keywords{Hyperbolic complex manifold, exhausting sequence, $h$-extendible domain}
\begin{abstract}
The purpose of this article is to investigate a hyperbolic complex manifold $M$ exhausted by a pseudoconvex domain $\Omega$ in $\mathbb C^n$ via an exhausting sequence $\{f_j\colon \Omega\to M\}$ such that $f_j^{-1}(a)$ converges to a boundary point $\xi_0 \in \partial \Omega$ for some point $a\in M$.
\end{abstract}
\maketitle
\section{introduction}
Let $M$ and $\Omega$ be two complex manifolds. One says that \emph{$\Omega$ can exhaust $M$} or \emph{$M$ can be exhausted by $\Omega$} if for any compact subset $K$ of $M$ there is a holomorphic embedding $f_K \colon \Omega \to M$ such that $f_K(\Omega)\supset K$. In particular, one says that \emph{$M$ is a monotone union of $\Omega$} via a sequence of holomorphic embeddings $f_j\colon \Omega\to M$ if $f_j(\Omega)\subset f_{j+1}(\Omega)$ for all $j$ and $M=\bigcup_{j=1}^\infty f_j(\Omega)$ (see \cite{FS77, Fr83}).
By \cite[Theorem $1$]{Fr86}, there exists a bounded domain $D$ in $\mathbb C^n$ such that $D$ can exhaust any domain in $\mathbb C^n$. In addition, the unit ball $\mathbb B^n$ in $\mathbb C^n$ can exhaust many complex manifolds which are not biholomorphically equivalent to each other (see \cite{For04, FS77}). However, if a hyperbolic complex manifold $M$ can be exhausted by $\mathbb B^n$, then $M$ must be biholomorphically equivalent to $\mathbb B^n$ (cf. \cite{FS77}). Furthermore, any $n$-dimensional hyperbolic complex manifold exhausted by a homogeneous bounded domain $D$ in $\mathbb C^n$ is biholomorphically equivalent to $D$. As a consequence, although the polydisc $\mathbb U^n$ and the unit ball $\mathbb B^n$ are both homogeneous, and although there is a domain $U$ in $\mathbb B^n$ that contains almost all of $\mathbb B^n$ (i.e., $\mathbb B^n\setminus U$ has measure zero) and is biholomorphically equivalent to $\mathbb U^n$ (cf. \cite[Theorem $1$]{FS77}), the polydisc $\mathbb U^n$ cannot exhaust the unit ball $\mathbb B^n$, since it is well known that $\mathbb U^n$ is not biholomorphically equivalent to $\mathbb B^n$.
Let $M$ be a hyperbolic complex manifold exhausted by a bounded domain $\Omega\subset \mathbb C^n$ via an exhausting sequence $\{f_j\colon \Omega\to M\}$. Let us fix a point $a\in M$. Then, thanks to the boundedness of $\Omega$, without loss of generality we may assume that $f_j^{-1}(a)\to p\in \overline{\Omega}$ as $j\to \infty$. If $p\in \Omega$, then $M$ is always biholomorphically equivalent to $\Omega$ (cf. Lemma \ref{orbitinside} in Section \ref{S2}).
The purpose of this paper is to investigate such a complex manifold $M$ with $p\in \partial \Omega$. More precisely, our first main result is the following theorem.
\begin{theorem}\label{togetmodel} Let $M$ be an $(n+1)$-dimensional hyperbolic complex manifold and let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n+1}$ with $C^\infty$-smooth boundary. Suppose that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$. If there exists a point $a \in M$ such that the sequence $f_j^{-1}(a)$ converges $\Lambda$-nontangentially to an $h$-extendible boundary point $\xi_0 \in \partial \Omega$ (see Definition \ref{def-order} in Section \ref{S2} for the definitions of $\Lambda$-nontangential convergence and $h$-extendibility), then $M$ is biholomorphically equivalent to the associated model $M_P$ for $\Omega$ at $\xi_0$.
\end{theorem}
When $\xi_0$ is a strongly pseudoconvex boundary point, we do not need the condition that the sequence $f_j^{-1}(a)$ converges $\Lambda$-nontangentially to $\xi_0$ as $j\to \infty$. Moreover, in this circumstance, the model $M_P$ is in fact biholomorphically equivalent to $M_{|z|^2}$, which is biholomorphically equivalent to the unit ball $\mathbb B^{n+1}$. More precisely, our second main result is the following theorem.
\begin{theorem}\label{togetmodelstronglypsc} Let $M$ be an $ (n+1) $-dimensional hyperbolic complex manifold and let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n+1}$. Suppose that $\partial\Omega$ is $\mathcal{C}^2$-smooth near a strongly pseudoconvex boundary point $\xi_0 \in \partial \Omega$. Suppose also that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$. If there exists a point $a \in M$ such that the sequence $\eta_j := f_j^{-1}(a)$ converges to $\xi_0$, then $M$ is biholomorphically equivalent to the unit ball $\mathbb{B}^{n+1}$.
\end{theorem}
Notice that Theorem \ref{togetmodelstronglypsc} is a local version of \cite[Theorem $1.1$]{DZ19} and \cite[Theorem I]{Fr83} (see Corollary \ref{str-psc-ex} in Section \ref{S3}). We note that their proofs are based on the boundary estimate of the Fridman invariant and of the squeezing function for strongly pseudoconvex domains. However, in order to prove Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}, we shall use the scaling technique, achieved recently in \cite{Ber06, DN09, NN19}.
By applying Theorem \ref{togetmodelstronglypsc} and Lemma \ref{orbitinside}, we also prove that if a hyperbolic complex manifold $M$ is exhausted by a general ellipsoid $D_P$ (see Section \ref{S4} for the definition of $D_P$), then $M$ is biholomorphically equivalent either to $D_P$ or to the unit ball $\mathbb B^n$ (cf. Proposition \ref{generalellipsoid} in Section \ref{S4}). In particular, when $D_P$ is an ellipsoid $E_m\; (m\in \mathbb Z_{\geq 1})$, given by
$$
E_m=\left\{(z,w)\in \mathbb C^2 \colon |w|^2+|z|^{2m}<1\right\},
$$
in fact Proposition \ref{generalellipsoid} is a generalization of \cite[Theorem $1$]{Liu18}.
The organization of this paper is as follows: In Section~\ref{S2} we provide some results concerning the normality of a sequence of biholomorphisms and the $h$-extendibility. In Section \ref{S3}, we give our proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}. Finally, the proof of Proposition \ref{generalellipsoid} will be introduced in Section \ref{S4}.
\section{The normality and the $h$-extendibility}\label{S2}
\subsection{The normality of a sequence of biholomorphisms}
First of all, we recall the following definition (see \cite{GK} or \cite{DN09}).
\begin{define} Let $\{\Omega_i\}_{i=1}^\infty$ be a sequence of open sets in a complex manifold $M$ and $\Omega_0 $ be an open set of $M$. The sequence $\{\Omega_i\}_{i=1}^\infty$ is said to converge to $\Omega_0 $ (written $\lim\Omega_i=\Omega_0$) if and only if
\begin{enumerate}
\item[(i)] For any compact set $K\subset \Omega_0,$ there is an $i_0=i_0(K)$ such that $i\geq i_0$ implies that $K\subset \Omega_i$; and
\item[(ii)] If $K$ is a compact set which is contained in $\Omega_i$ for all sufficiently large $i,$ then $K\subset \Omega_0$.
\end{enumerate}
\end{define}
Next, we recall the following proposition, which is a generalization of the theorem of H. Cartan (see \cite{DN09, GK, TM}).
\begin{proposition} \label{T:7} Let $\{A_i\}_{i=1}^\infty$ and $\{\Omega_i\}_{i=1}^\infty$ be sequences of domains in a complex manifold $M$ with $\lim A_i=A_0$ and $\lim \Omega_i=\Omega_0$ for some (uniquely determined) domains $A_0$, $\Omega_0$ in $M$. Suppose that $\{f_i: A_i \to \Omega_i\} $ is a sequence of biholomorphic maps. Suppose also that the sequence $\{f_i: A_i\to M \}$ converges uniformly on compact subsets of $ A_0$ to a holomorphic map $F:A_0\to M $ and the sequence $\{g_i:=f^{-1}_i: \Omega_i\to M \}$ converges uniformly on compact subsets of $\Omega_0$ to a holomorphic map $G:\Omega_0\to M $. Then either of the following assertions holds.
\begin{enumerate}
\item[(i)] The sequence $\{f_i\}$ is compactly divergent, i.e., for each compact set $K\subset A_0$ and each compact set $L\subset \Omega_0$, there exists an integer $i_0$ such that $f_i(K)\cap L=\emptyset$ for $i\geq i_0$; or
\item[(ii)] There exists a subsequence $\{f_{i_j}\}\subset \{f_i\}$ such that the sequence $\{f_{i_j}\}$ converges uniformly on compact subsets of $A_0$ to a biholomorphic map $F: A_0 \to \Omega_0$.
\end{enumerate}
\end{proposition}
\begin{remark} \label{r1} By \cite[Proposition $2.1$]{Ber94} or \cite[Proposition $2.2$]{DN09} and by the hypotheses of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}, it follows that for each compact subset $K\Subset M$ and each neighborhood $U$ of $\xi_0$ in $\mathbb C^{n+1}$, there exists an integer $j_0=j_0(K)$ such that $K\subset f_j(\Omega\cap U)$ for all $j\geq j_0$. Consequently, the sequence of domains $\{f_j(\Omega\cap U)\}$ converges to $M$.
\end{remark}
We will finish this subsection by recalling the following lemma (cf. \cite[Lemma $1.1$]{Fr83}).
\begin{lemma}[see \cite{Fr83}]\label{orbitinside}
Let $M$ be a hyperbolic manifold of complex dimension $n$. Assume that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$, where $\Omega$ is a bounded domain in $\mathbb{C}^n$. Suppose that there is an interior point $a \in M$ such that $f_j^{-1} (a) \to p \in \Omega$. Then, $M$ is biholomorphically equivalent to $\Omega$.
\end{lemma}
\subsection{The $h$-extendibility }
In this subsection, we recall some definitions and notations given in \cite{Cat84, Yu95}.
Let $\Omega$ be a smooth pseudoconvex domain in $\mathbb C^{n+1}$ and $p\in \partial\Omega$. Let $\rho$ be a local defining function for $\Omega$ near $p$. Suppose that the multitype $\mathcal{M}(p)=(1,m_1,\ldots,m_n)$ is finite. (See \cite{Cat84} for the notion of multitype.) Let us denote by $\Lambda=\left(1/m_1,\ldots,1/m_n\right)$. Then, there are distinguished coordinates $(z,w)=(z_1,\ldots,z_n,w)$ such that $p=0$ and $\rho(z,w)$ can be expanded near $0$ as follows:
$$
\rho(z,w)=\mathrm{Re}(w)+P(z)+R(z,w),
$$
where $P$ is a $\Lambda$-homogeneous plurisubharmonic polynomial that contains no pluriharmonic terms, $R$ is smooth and satisfies
$$
|R(z,w)|\leq C \left( |w|+ \sum_{j=1}^n |z_j|^{m_j} \right)^\gamma,
$$
for some constant $\gamma>1$ and $C>0$. Here and in what follows, a polynomial $P$ is called $\Lambda$-homogeneous if
$$
P(t^{1/m_1}z_1,t^{1/m_2}z_2, \ldots,t^{1/m_n}z_n)=t\,P(z),\; \forall t>0, \forall z\in \mathbb C^n.
$$
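For instance, for $\Lambda=(1/4,1/6)$ the polynomial $P(z_1,z_2)=|z_1|^4+|z_2|^6$ is $\Lambda$-homogeneous, since $P(t^{1/4}z_1,t^{1/6}z_2)=t|z_1|^4+t|z_2|^6=t\,P(z_1,z_2)$ for all $t>0$.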
\begin{define}[see \cite{NN19}]\label{def-order} The domain $M_P=\{(z,w)\in \mathbb C^n\times \mathbb C\colon \mathrm{Re}(w)+P(z)<0\}$ is called an \emph{associated model} of $\Omega$ at $p$. A boundary point $p\in \partial \Omega$ is called \emph{$h$-extendible} if its associated model $M_P$ is \emph{$h$-extendible}, i.e., $M_P$ is of finite type (see \cite[Corollary $2.3$]{Yu94}). In this circumstance, we say that a sequence $\{\eta_j=(\alpha_j,\beta_j)\}\subset \Omega$ \emph{converges $\Lambda$-nontangentially to $p$} if $|\mathrm{Im}(\beta_j)|\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$ and $ \sigma(\alpha_j) \lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$, where
$$
\sigma(z)=\sum_{k=1}^n |z_k|^{m_k}.
$$
\end{define}
Throughout this paper, we use $\lesssim$ and $\gtrsim$ to denote inequalities up to a positive multiplicative constant. Moreover, we use $\approx $ for the combination of $\lesssim$ and $\gtrsim$. In addition, $\mathrm{dist}(z,\partial\Omega)$ denotes the Euclidean distance from $z$ to $\partial\Omega$. Furthermore, for $\mu>0$ we denote by $\mathcal{O}(\mu,\Lambda)$ the set of all smooth functions $f$ defined near the origin of $\mathbb C^n$ such that
$$
D^\alpha \overline{D}^\beta f(0)=0~\text{whenever}~ \sum_{j=1}^n (\alpha_j+\beta_j)\dfrac{1}{m_j} \leq \mu.
$$
If $n=1$ and $\Lambda = (1)$ then we use $\mathcal{O}(\mu)$ to denote the functions vanishing to order at least $\mu$ at the origin (cf. \cite{Cat84, Yu95}).
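As an illustration (this computation is ours, but it only uses the normal form just described), consider the ellipsoid $E_m=\left\{(z,w)\in \mathbb C^2 \colon |w|^2+|z|^{2m}<1\right\}$ from the introduction at the boundary point $\xi_0=(0,1)$. Writing $w=1+\tilde w$, the defining function becomes $2\,\mathrm{Re}(\tilde w)+|\tilde w|^2+|z|^{2m}$, so the multitype at $\xi_0$ is $(1,2m)$, $\Lambda=(1/(2m))$, and, after absorbing the error terms and a harmless rescaling of the coordinates, the associated model at $\xi_0$ is $M_P=\{(z,\tilde w)\in \mathbb C\times\mathbb C\colon \mathrm{Re}(\tilde w)+|z|^{2m}<0\}$; in particular, $\xi_0$ is an $h$-extendible point.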
\section{Proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}}\label{S3}
This section is devoted to our proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}. First of all, let us recall the definition of the Kobayashi infinitesimal pseudometric and the Kobayashi pseudodistance as follows:
\begin{define} Let $M$ be a complex manifold. The Kobayashi infinitesimal pseudometric $F_M \colon M\times T^{1,0}M\to \mathbb R$ is defined by
$$
F_M(p,X)=\inf \left\{c>0\;|\; \exists \; f\colon \Delta \to M \;\text{holomorphic with}\; f(0)=p, f'(0)=X/c \right\},
$$
for any $p\in M$ and $X\in T^{1,0}M$, where $\Delta $ is the unit open disk of $\mathbb C$. Moreover, the Kobayashi pseudodistance $d_M^K\colon M\times M \to \mathbb R$ is defined by
$$
d_M^K(p,q)=\inf_\gamma\int_0^1 F_M(\gamma(t),\gamma'(t)) dt,
$$
for any $p,q\in M$ where the infimum is taken over all differentiable curves $\gamma:[0,1] \to M$ joining $p$ and $q$. A complex manifold $M$ is called hyperbolic if $d_M^K(p,q)$ is actually a distance, i.e., $d_M^K(p,q)>0$ whenever $p\ne q$.
\end{define}
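For instance, for the unit disc $\Delta\subset\mathbb C$ one has the classical formulas
$$
F_\Delta(z,X)=\frac{|X|}{1-|z|^2},\qquad d^K_\Delta(0,z)=\frac{1}{2}\log\frac{1+|z|}{1-|z|},
$$
so $\Delta$ and, more generally, every bounded domain (in particular $\mathbb B^{n+1}$) is hyperbolic, whereas $F_{\mathbb C}\equiv 0$ and $\mathbb C$ is not hyperbolic.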
Next, we need the following lemma, whose proof will be given in the Appendix for the convenience of the reader, and the following proposition.
\begin{lemma}\label{conti-kob} Assume that $\{D_j\}$ is a sequence of domains in $\mathbb C^{n+1}$ converging to a model $M_P$ of finite type. Then, we have
$$
\lim_{j\to \infty} F_{D_j}(z,X)=F_{M_P}(z,X),~\forall (z,X)\in M_P\times \mathbb C^{n+1}.
$$
Moreover, the convergence takes place uniformly over compact subsets of $M_P\times \mathbb C^{n+1}$.
\end{lemma}
\begin{proposition}[see \cite{NN19}]\label{pro-scaling} Assume that $\{D_j\}$ is a sequence of domains in $\mathbb C^{n+1}$ converging to a model $M_P$ of finite type. Assume also that $\omega$ is a domain in $\mathbb C^k$ and $\sigma_j: \omega \to D_j$ is a sequence of holomorphic mappings such that $\{\sigma_j(a)\}\Subset M_P$ for some $a\in \omega$. Then $\{\sigma_j\}$ contains a subsequence that converges locally uniformly to a holomorphic map $\sigma: \omega \to M_P$.
\end{proposition}
Now we are ready to prove Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}.
\begin{proof}[Proof of Theorem \ref{togetmodel}]
Let $\rho$ be a local defining function for $\Omega$ near $\xi_0$, and recall that the multitype $\mathcal{M}(\xi_0)=(1,m_1,\ldots,m_n)$ is finite. In what follows, we write $\Lambda=(1/m_1,\ldots,1/m_n)$. Since $\xi_0$ is an $h$-extendible point, there exist local holomorphic coordinates $(z,w)$ in which $\xi_0=0$ and $\Omega$ can be described in a neighborhood $U_0$ of $0$ as follows:
$$
\Omega\cap U_0=\left\{\rho(z,w)=\mathrm{Re}(w)+ P(z) +R_1(z) + R_2(\mathrm{Im} w)+(\mathrm{Im} w) R(z)<0\right\},
$$
where $P$ is a $\Lambda$-homogeneous plurisubharmonic real-valued polynomial containing no pluriharmonic terms, $R_1\in \mathcal{O}(1, \Lambda),R\in \mathcal{O}(1/2, \Lambda) $, and $R_2\in \mathcal{O}(2)$. (See the proof of Theorem $1.1$ in \cite{NN19} or the proof of Lemma $4.11$ in \cite{Yu95}.)
By assumption, there exists a point $a\in M$ such that the sequence $\eta_j:=f^{-1}_j(a)$ converges $\Lambda$-nontangentially to $\xi_0$. Without loss of generality, we may assume that the sequence $\{\eta_j\}\subset \Omega\cap U_0$ and we write $\eta_j=(\alpha_j,\beta_j)=(\alpha_{j1},\ldots,\alpha_{jn},\beta_j)$ for all $j$. Then, the sequence $\{\eta_j\}$ has the following properties:
\begin{itemize}
\item[(a)] $|\mathrm{Im}(\beta_j)|\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$;
\item[(b)] $|\alpha_{jk}|^{m_k}\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$ for $1\leq k\leq n$.
\end{itemize}
To the sequence $\{\eta_j=(\alpha_j,\beta_j)\}$ we associate a sequence of points $\eta_j'=(\alpha_{j1}, \ldots, \alpha_{jn},\beta_j +\epsilon_j)$, where $\epsilon_j>0$, such that $\eta_j'$ is in the hypersurface $\{\rho=0\}$ for all $j$. We note that $\epsilon_j\approx \mathrm{dist}(\eta_j,\partial \Omega)$. Now let us consider the sequences of dilations $\Delta^{\epsilon_j}$ and translations $L_{\eta_j'}$, defined respectively by
$$
\Delta^{\epsilon_j}(z_1,\ldots,z_n,w)=\left(\frac{z_1}{\epsilon_j^{1/m_1}},\ldots,\frac{z_n}{\epsilon_j^{1/m_n}},\frac{w}{\epsilon_j}\right)
$$
and
$$
L_{\eta_j'}(z,w)=(z,w)-\eta'_j=(z-\alpha'_j,w-\beta'_j).
$$
Under the change of variables $(\tilde z,\tilde w):=\Delta^{\epsilon_j}\circ L_{\eta'_j}(z,w)$, i.e.,
\[
\begin{cases}
w-\beta'_j= \epsilon_j\tilde{w}\\
z_k-\alpha'_{j k}=\epsilon_j^{1/m_k}\tilde{z}_k,\, k=1,\ldots,n,
\end{cases}
\]
one can see that $\Delta^{\epsilon_j}\circ L_{\eta_j'}(\alpha_j,\beta_j)=(0,\cdots,0,-1)$ for all $j$. Moreover, as in \cite{NN19}, after taking a subsequence if necessary, we may assume that the sequence of domains $\Omega_j:=\Delta^{\epsilon_j}\circ L_{\eta_j'}(\Omega\cap U_0) $ converges to the following model
$$
M_{P,\alpha}:=\left \{(\tilde z,\tilde w)\in \mathbb C^n\times\mathbb C\colon \mathrm{Re}(\tilde w)+P(\tilde z+\alpha)-P(\alpha)<0\right\},
$$
which is obviously biholomorphically equivalent to the model $M_P$. Without loss of generality, in what follows we always assume that $\{\Omega_j\}$ converges to $M_P$.
Now we first consider the sequence of biholomorphisms $F_j:= T_j\circ f_j^{-1}\colon M\supset f_j(\Omega\cap U_0)\to \Omega_j$, where $T_j:=\Delta^{\epsilon_j}\circ L_{\eta_j'}$. Since $F_j(a)=(0',-1)$ and $f_j(\Omega\cap U_0)$ converges to $M$ as $j\to \infty$ (see Remark \ref{r1}), by Proposition \ref{pro-scaling}, without loss of generality, we may assume that the sequence $F_j$ converges uniformly on every compact subset of $M$ to a holomorphic map $F$ from $M$ to $\mathbb C^{n+1}$. Note that $F(M)$ contains a neighborhood of $(0',-1)$ and $F(M)\subset \overline{M_P}$.
Since the sequence $\{F_j\}$ converges uniformly on compact sets, by the Cauchy theorem it follows that $\{J(F_j)\}$ converges uniformly on every compact subset of $M$ to $J(F)$, where $J(F)$ denotes the Jacobian determinant of $F$. However, by the Cartan theorem, $J(F_j)(z)$ is nowhere zero for any $j$ because $F_j$ is a biholomorphism. Then, the Hurwitz theorem implies that $J(F)$ is either identically zero or nowhere zero. In the case that $J(F)\equiv 0$, $F$ is regular at no point of $M$. As $F(M)$ contains a neighborhood of $(0',-1)$, the Sard theorem shows that $F$ is regular outside a proper subvariety of $M$, which is a contradiction. This yields that $J(F)$ is nowhere zero and hence $F$ is regular everywhere on $M$. By \cite[Lemma 0]{FS77}, it follows that $F(M)$ is open and $F(M)\subset M_P$.
Next, we shall prove that $F$ is one-to-one. Indeed, let $z_1, z_2\in M$ be arbitrary. Fix a compact subset $L \Subset M$ such that $z_1,z_2\in L$. Then, by Remark \ref{r1} there is a $j_0(L)>0$ such that $L\subset f_j(\Omega\cap U_0)$ and $F_j(L)\subset K\Subset M_P$ for all $j>j_0(L)$, where $K$ is a compact subset of $M_P$. By Lemma \ref{conti-kob} and the decreasing property of Kobayashi distance, one has
\begin{align*}
d^K_M(z_1,z_2)&\leq d^K_{f_j(\Omega\cap U_0)}(z_1,z_2)=d^K_{\Omega_j}(F_j(z_1),F_j(z_2))\leq C \cdot d^K_{M_P}(F_j(z_1),F_j(z_2))\\
&\leq C \left( d^K_{M_P}(F(z_1),F(z_2))+ d^K_{M_P}(F_j(z_1),F(z_1))+d^K_{M_P}(F_j(z_2),F(z_2))\right),
\end{align*}
where $C>1$ is a positive constant. Letting $j\to \infty$, we obtain
$$
d^K_M(z_1,z_2)\leq C \cdot d^K_{M_P}(F(z_1),F(z_2)).
$$
Since $M$ is hyperbolic, it follows that if $F(z_1)=F(z_2)$, then $z_1=z_2$. Consequently, $F$ is one-to-one, as desired.
Finally, because of the biholomorphism from $M$ to $F(M)\subset M_P$ and the tautness of $M_P$ (cf. \cite{Yu95}), it follows that the sequence $ F_j^{-1}=f_j\circ T_j^{-1} \colon T_j(\Omega \cap U_0)\to f_j(\Omega \cap U_0) \subset M$ is also normal. Moreover, since $T_j\circ f_j^{-1}(a)=(0',-1)\in M_P$, it follows that the sequence $T_j\circ f_j^{-1}$ is not compactly divergent. Therefore, by Proposition \ref{T:7}, after taking some subsequence we may assume that $T_j\circ f_j^{-1}$ converges uniformly on every compact subset of $M$ to a biholomorphism from $M$ onto $M_P$. Hence, the proof is complete.
\end{proof}
\begin{remark}
If $M$ is a bounded domain in $\mathbb C^{n+1}$, the normality of the sequence $F_j^{-1}$ can be shown by using the Montel theorem. Thus, the proof of Theorem \ref{togetmodel} simply follows from Proposition \ref{T:7}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{togetmodelstronglypsc}]
Let $\rho$ be a local defining function for $\Omega$ near $\xi_0$. We may assume that $\xi_0=0$. After a linear change of coordinates, one can find local holomorphic coordinates $(\tilde z,\tilde w)=(\tilde z_1,\cdots, \tilde z_n,\tilde w)$, defined on a neighborhood $U_0$ of $\xi_0$, such that
\begin{equation*}
\begin{split}
\rho(\tilde z,\tilde w)=\mathrm{Re}(\tilde w)+ \sum_{j=1}^{n}|\tilde z_j|^2+ O(|\tilde w| \|\tilde z\|+\|\tilde z\|^3)
\end{split}
\end{equation*}
By \cite[Proposition 3.1]{DN09} (or Subsection $3.1$ in \cite{Ber06} for the case $n=1$), for each point $\eta$ in a small neighborhood of the origin, there exists an automorphism $\Phi_\eta$ of $\mathbb C^{n+1}$ such that
\begin{equation*}
\begin{split}
\rho(\Phi_{\eta}^{-1}(z,w))-\rho(\eta)=\mathrm{Re}(w)+ \sum_{j=1}^{n}|z_j|^2+ O(|w| \|z\|+\|z\|^3).
\end{split}
\end{equation*}
Let us define an anisotropic dilation $\Delta^\epsilon$ by
$$
\Delta^\epsilon (z_1,\cdots,z_n,w)= \left(\frac{z_1}{\sqrt{\epsilon}},\cdots,\frac{z_{n}}{\sqrt{\epsilon}},\frac{w}{\epsilon}\right).
$$
For each $\eta\in \partial \Omega$, if we set $\rho_\eta^\epsilon(z, w)=\epsilon^{-1}\rho\circ \Phi_\eta^{-1}\circ( \Delta^\epsilon)^{-1}(z,w)$, then
\begin{equation*}
\rho_\eta^\epsilon(z, w)= \mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2+O(\sqrt{\epsilon}).
\end{equation*}
By assumption, the sequence $\eta_j:=f^{-1}_j(a)$ converges to $\xi_0$. We associate to it a sequence of points ${\eta}_j'=(\eta_{j1}, \cdots, \eta_{jn},\eta_{j(n+1)}+\epsilon_j)$, $ \epsilon_j>0$, such that ${\eta}_j'$ is in the hypersurface $\{\rho=0\}$. Then $ \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}({\eta}_j)=(0,\cdots,0,-1)$ and one can see that $ \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}(\{\rho=0\}) $ is defined by an equation of the form
\begin{equation*}
\begin{split}
\mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2+O(\sqrt{\epsilon_j})=0.
\end{split}
\end{equation*}
Therefore, it follows that, after taking a subsequence if necessary, $\Omega_j:=\Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}(U_0^-)$ converges to the following domain
\begin{equation}\label{Eq29}
\mathcal{E}:=\{\hat\rho:= \mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2<0\},
\end{equation}
which is biholomorphically equivalent to the unit ball $\mathbb B^{n+1}$.
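For the reader's convenience we note one explicit such equivalence (the formula is ours, but it is the standard Cayley-type map): the map
$$
(\zeta,\eta)\ \longmapsto\ (z,w)=\left(\frac{\zeta}{1+\eta},\ \frac{\eta-1}{\eta+1}\right),\qquad (\zeta,\eta)\in\mathbb C^{n}\times\mathbb C,
$$
sends $\mathbb B^{n+1}=\{\|\zeta\|^2+|\eta|^2<1\}$ biholomorphically onto $\mathcal{E}$, since $\mathrm{Re}(w)+\sum_{j=1}^n|z_j|^2=\dfrac{\|\zeta\|^2+|\eta|^2-1}{|1+\eta|^2}$, with inverse $\zeta=\dfrac{2z}{1-w}$, $\eta=\dfrac{1+w}{1-w}$.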
Now let us consider the sequence of biholomorphisms $F_j:= T_j\circ f_j^{-1} \colon M\supset f_j(\Omega \cap U_0)\to T_j(\Omega \cap U_0)$, where $T_j:= \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}$. Since $F_j(a)=(0',-1)$, by \cite[Theorem 3.11]{DN09}, without loss of generality, we may assume that the sequence $F_j$ converges uniformly on every compact subset of $M$ to a holomorphic map $F$ from $M$ to $\mathbb C^{n+1}$. Note that $F(M)$ contains a neighborhood of $(0',-1)$ and $F(M)\subset \overline{\mathcal{E}}$. Following the argument in the proof of Theorem \ref{togetmodel}, we conclude that $F$ is a biholomorphism from $M$ onto $\mathcal{E}$, and thus $M$ is biholomorphically equivalent to $\mathbb B^{n+1}$, as desired.
\end{proof}
By Lemma \ref{orbitinside} and Theorem \ref{togetmodelstronglypsc}, we obtain the following corollary, proved by F. S. Deng and X. J. Zhang \cite[Theorem 2.4]{DZ19} and by B. L. Fridman \cite[Theorem I]{Fr83}.
\begin{corollary} \label{str-psc-ex} Let $D$ be a bounded strictly pseudoconvex domain in $\mathbb C^n$ with $\mathcal{C}^2$-smooth boundary. If a bounded domain $\Omega$ can be exhausted by $D$, then $\Omega$ is biholomorphically equivalent to $D$ or the unit ball $\mathbb B^n$.
\end{corollary}
\section{Exhausting a complex manifold by a general ellipsoid}\label{S4}
In this section, we are going to prove that if a hyperbolic complex manifold $M$ can be exhausted by a general ellipsoid $D_P$ (see the definition of $D_P$ below), then $M$ is biholomorphically equivalent to either $D_P$ or the unit ball $\mathbb{B}^n$.
First of all, let us fix $n-1$ positive integers $m_1,\ldots, m_{n-1}$ and set $\Lambda:=\left(\frac{1}{m_1}, \ldots, \frac{1}{m_{n-1}}\right)$. We assign weights $\frac{1}{m_1}, \ldots, \frac{1}{m_{n-1}}, 1$ to $z_1,\ldots,z_n$. For an $(n-1)$-tuple $K = (k_1,\ldots,k_{n-1}) \in \mathbb{Z}^{n-1}_{\geq 0}$, denote the weight of $K$ by $$wt(K) := \sum_{j=1}^{n-1} \dfrac{k_j}{m_j}.$$
Next, we consider the general ellipsoid $D_P$ in $\mathbb C^n\;(n\geq2)$, defined by
\begin{equation*}
\begin{split}
D_P &:=\{(z',z_n)\in \mathbb C^n\colon |z_n|^2+P(z')<1\},
\end{split}
\end{equation*}
where
\begin{equation}\label{series expression of P on D_P}
P(z')=\sum_{wt(K)=wt(L)=1/2} a_{KL} {z'}^K \bar{z'}^L,
\end{equation}
where $a_{KL}\in \mathbb C$ with $a_{KL}=\bar{a}_{LK}$, satisfying that $P(z')>0$ whenever $z' \in \mathbb{C}^{n-1} \setminus \{0'\}$. We would like to emphasize here that the polynomial $P$ given in (\ref{series expression of P on D_P}) is $\Lambda$-homogeneous and the assumption that $P(z')>0$ whenever $z'\ne 0$ ensures that $D_P$ is bounded in $\mathbb{C}^n$ (cf. \cite[Lemma 6]{NNTK19}). Moreover, since $P(z')>0$ for $z'\ne 0$ and by the $\Lambda$-homogeneity, there are two constants $c_1,c_2>0$ such that
$$
c_1 \sigma_\Lambda(z') \leq P(z')\leq c_2 \sigma_\Lambda(z'), \; \forall z'\in \mathbb C^{n-1},
$$
where $\sigma_\Lambda(z')=|z_1|^{m_1}+\cdots+|z_{n-1}|^{m_{n-1}}$. In addition, $D_P$ is called a WB-domain if it is strongly pseudoconvex at every boundary point outside the set $\{(0',e^{i\theta})\colon \theta\in \mathbb R\}$ (cf. \cite{AGK16}).
Now we prove the following proposition.
\begin{proposition} \label{generalellipsoid} Let $M$ be an $n$-dimensional hyperbolic complex manifold. Suppose that $M$ can be exhausted by the general ellipsoid $D_P$ via an exhausting sequence $\{f_j: D_P \to M\}$. If $D_P$ is a $WB$-domain, then $M$ is biholomorphically equivalent to either $D_P$ or the unit ball $\mathbb{B}^n$.
\end{proposition}
\begin{remark}
The possibility that $M$ is biholomorphic onto the unit ball $\mathbb B^n$ is not excluded because $D_P$ can exhaust the unit ball $\mathbb B^n$ by \cite[Corollary $1.4$]{FM95}.
\end{remark}
\begin{proof}[Proof of Proposition \ref{generalellipsoid}]
Let $q$ be an arbitrary point in $M$. Then, thanks to the boundedness of $D_P$, after passing to a subsequence if necessary we may assume that the sequence $\{f_j^{-1}(q)\}_{j=1}^{\infty}$ converges to a point $p\in \overline{D_P}$ as $j \to \infty$.
We now divide the argument into two cases as follows:
\noindent
{\bf Case 1.} $f_j^{-1}(q)\to p\in D_P$. Then, it follows from Lemma \ref{orbitinside} that $M$ is biholomorphically equivalent to $D_P$.
\noindent
{\bf Case 2.} $f_j^{-1}(q)\to p\in\partial D_P$. Let us write $f_j^{-1}(q)=(a_j', a_{jn})\in D_P$ and $p=(a',a_n)\in \partial D_P$. As in \cite{NNTK19}, for each $j\in \mathbb N^*$ we consider $\psi_j\in \mathrm{Aut}(D_P)$, defined by
$$
\psi_j(z)=\left( \frac{(1-|a_{jn}|^2)^{1/m_1}}{(1-\bar{a}_{jn}z_n)^{2/m_1}} z_1,\ldots, \frac{(1-|a_{jn}|^2)^{1/m_{n-1}}}{(1-\bar{a}_{jn}z_n)^{2/m_{n-1}}} z_{n-1}, \frac{z_n-a_{jn}}{1-\bar{a}_{jn} z_n}\right).
$$
Then $\psi_j\circ f_j^{-1}(q)=(b_j,0)$, where
$$
b_j= \left( \frac{a_{j1}}{(1-|a_{jn}|^2)^{1/m_1}} ,\ldots, \frac{a_{j (n-1)}}{(1-|a_{jn}|^2)^{1/m_{n-1}}}\right),\; \forall j\in \mathbb N^*.
$$
Without loss of generality, one may assume that $b_j\to b\in \mathbb C^{n-1}$ as $j\to \infty$.
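Before continuing, we record a small numerical sanity check (ours, and not part of the argument) of the formula for $\psi_j$ in the simplest case $D_P=E_m$, that is, $n=2$ and $P(z_1)=|z_1|^{2m}$, so that $m_1=2m$. The script below verifies on random points that $\psi_j$ preserves the sign of the defining function of $E_m$ and maps the base point to $(b_j,0)$.
\begin{verbatim}
# Sketch (not from the paper): check psi_j in Aut(E_m) for E_m in C^2.
import random

m = 3                                  # so m_1 = 2m = 6

def rho(z1, z2):                       # defining function of E_m
    return abs(z2)**2 + abs(z1)**(2*m) - 1

def psi(z1, z2, a1, a2):               # psi_j centred at the point (a1, a2)
    s = (1 - abs(a2)**2)**(1/(2*m)) / (1 - a2.conjugate()*z2)**(1/m)
    return s*z1, (z2 - a2) / (1 - a2.conjugate()*z2)

def random_point_in_Em():
    while True:
        z1 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
        z2 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
        if rho(z1, z2) < 0:
            return z1, z2

a1, a2 = random_point_in_Em()
for _ in range(5):
    z1, z2 = random_point_in_Em()
    print(rho(*psi(z1, z2, a1, a2)) < 0)        # True: psi maps E_m into E_m
w1, w2 = psi(a1, a2, a1, a2)
print(abs(w2), abs(w1 - a1/(1 - abs(a2)**2)**(1/(2*m))))   # both ~ 0
\end{verbatim}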
Since $D_P$ is a $WB$-domain, two possibilities may occur:
\noindent
{\bf Subcase 1:} $p=(a',a_n)$ is a strongly pseudoconvex boundary point. In this subcase, it follows directly from Theorem \ref{togetmodelstronglypsc} that $M$ is biholomorphically equivalent to $\mathbb B^{n}$.
\noindent
{\bf Subcase 2:} $p=(0',e^{i\theta})$ is a weakly pseudoconvex boundary point. In this subcase, one must have $a_j'\to 0'$ and $a_{jn}\to e^{i\theta}$ as $j\to \infty$. Denote by $\rho(z):=|z_n|^2-1+P(z')$ a defining function for $D_P$. Then $\text{dist}(a_j, \partial D_P)\approx -\rho(a_j)= 1-|a_{jn}|^2-P(a_j')$.
Suppose that $\{a_j\}$ converges $\Lambda$-nontangentially to $p$, i.e., $P(a_j')\approx \sigma_\Lambda(a_j')\lesssim \text{dist}(a_j, \partial D_P)$, or equivalently $P(a_j')\leq C(1-|a_{jn}|^2-P(a_j')),\; \forall j\in \mathbb N^*$, for some $C>0$. This implies that
$$
P(a_j')\leq \dfrac{C}{1+C}(1-|a_{jn}|^2),\; \forall j\in \mathbb N^*,
$$
and thus $P(b_j)=\dfrac{1}{1-|a_{jn}|^2}P(a_j')\leq \dfrac{C}{1+C}<1,\; \forall j\in \mathbb N^*$. This yields $ \psi_j\circ f_j^{-1}(q)=(b_j,0)\to (b,0)\in D_P$ as $j\to \infty$. So, again by Lemma \ref{orbitinside} one concludes that $M$ is biholomorphically equivalent to $D_P$.
Now let us consider the case that the sequence $\{a_j\}$ does not converge $\Lambda$-nontangentially to $p$, i.e., $P(a_j')\geq c_j \text{dist}(a_j, \partial D_P), \; \forall j\in \mathbb N^*$, where $0<c_j\to +\infty$. This implies that $P(a_j')\geq c_j'(1-|a_{jn}|^2-P(a_j')),\; \forall j\in \mathbb N^*$, for some $0<c_j'\to +\infty$, and hence
$$
P(a_j')\geq \dfrac{c_j'}{1+c_j'}(1-|a_{jn}|^2),\; \forall j\in \mathbb N^*.
$$
Thus, one obtains that $P(b_j)=\dfrac{1}{1-|a_{jn}|^2}P(a_j')\geq \dfrac{c_j'}{1+c_j'}$, which implies that $P(b)=1$. Consequently, $\psi_j\circ f_j^{-1}(q)$ converges to the strongly pseudoconvex boundary point $p'=(b,0)$ of $\partial D_P$. Hence, as in Subcase $1$, it follows from Theorem \ref{togetmodelstronglypsc} that $M$ is biholomorphically equivalent to $\mathbb B^{n}$.
This completes the proof of Proposition \ref{generalellipsoid}.
\end{proof}
\section*{Appendix}
\begin{proof}[Proof of Lemma \ref{conti-kob}]
We shall follow the proof of \cite[Theorem $2.1$]{Yu95} with minor modifications. To do this, let us fix compact subsets $K\Subset M_P$ and $L\Subset \mathbb C^{n+1}$. Then it suffices to prove that $F_{D_j}(z,X)$ converges to $F_{M_P}(z,X)$ uniformly on $K\times L$. Indeed, suppose otherwise. Then, there exist $\epsilon_0>0$, a sequence of points $\{z_{j_\ell}\}\subset K$, and a sequence $X_{j_\ell}\subset L$ such that
$$
|F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})-F_{M_P}(z_{j_\ell},X_{j_\ell})|>\epsilon_0,~\forall~\ell\geq 1.
$$
By the homogeneity of the Kobayashi metrics $F(z,X)$ in $X$, we may assume that $\|X_{j_\ell}\|=1$ for all $\ell\geq 1$. Moreover, passing to subsequences, we may also assume that $z_{j_\ell}\to z_0\in K$ and $X_{j_\ell}\to X_0\in L$ as $\ell \to \infty$. Since $M_P$ is taut (see \cite[Theorem $3.13$]{Yu95}), for each $(z,X)\in M_P\times \mathbb C^{n+1}$ with $X\ne 0$, there exists an analytic disc $\varphi\in \mathrm{Hol}(\Delta, M_P)$ such that $\varphi(0)=z$ and $\varphi'(0)=X/F_{M_P}(z,X)$. This implies that $F_{M_P}(z,X)$ is continuous on $M_P\times \mathbb C^{n+1}$. Hence, we obtain
$$
F_{M_P}(z_{j_\ell},X_{j_\ell})\to F_{M_P}(z_0,X_0),
$$
and thus we have
\begin{align}\label{eq136-0}
|F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})-F_{M_P}(z_0,X_0)|>\epsilon_0/2
\end{align}
for $\ell$ big enough.
By definition, for any $\delta\in (0,1)$ there exists a sequence of analytic discs $\varphi_{j_\ell}\in \mathrm{Hol}(\Delta, D_{j_\ell})$ such that $\varphi_{j_\ell}(0)=z_{j_\ell},\varphi_{j_\ell}'(0)= \lambda_{j_\ell} X_{j_\ell}$, where $\lambda_{j_\ell}>0$, and
\begin{align*}
F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\geq \frac{1}{\lambda_{j_\ell}}-\delta.
\end{align*}
It follows from Proposition \ref{pro-scaling} that every subsequence of the sequence $\{\varphi_{j_\ell}\}$ has a subsequence converging to some analytic disc $\psi\in \mathrm{Hol}(\Delta, M_P)$ such that $\psi(0)=z_0,\psi'(0)= \lambda X_0$, for some $\lambda>0$. Thus, one obtains that
$$
F_{M_P}(z_0,X_0)\leq \frac{1}{|\psi'(0)|}
$$
for any such $\psi$. Therefore, one has
\begin{align} \label{eq136-1}
\liminf_{\ell\to \infty} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\geq F_{M_P}(z_0,X_0)-\delta.
\end{align}
On the other hand, as in \cite{Yu95}, by the tautness of $M_P$, there exists an analytic disc $\varphi \in \mathrm{Hol}(\Delta, M_P)$ such that $\varphi(0)=z_0, \varphi'(0)=\lambda X_0$, where $\lambda=1/F_{M_P}(z_0,X_0)$.
Now for $\delta\in (0,1)$, let us define an analytic disc $\psi_{j_\ell}^\delta:\Delta\to \mathbb C^{n+1}$ by setting
\begin{align*}
\psi_{j_\ell}^\delta(\zeta):= \varphi((1-\delta)\zeta)+\lambda (1-\delta) (X_{j_\ell}-X_0)\,\zeta+(z_{j_\ell}-z_0)\; \text{for all}\; \zeta \in \Delta.
\end{align*}
Since $\varphi((1-\delta)\overline{\Delta})$ is a compact subset of $M_P$ and $X_{j_\ell}\to X_0$, $z_{j_\ell}\to z_0$ as $\ell\to \infty$, it follows that $\psi_{j_\ell}^\delta(\Delta)\subset D_{j_\ell}$ for all sufficiently large $\ell$, that is, $\psi_{j_\ell}^\delta\in \mathrm{Hol}(\Delta, D_{j_\ell})$. Moreover, by construction, $\psi_{j_\ell}^\delta(0)=z_{j_\ell}$ and $\left(\psi_{j_\ell}^\delta\right)'(0)=(1-\delta)\lambda X_{j_\ell}$. Therefore, again by definition, one has
\begin{align*}
F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\leq \frac{1}{(1-\delta) \lambda}=\frac{1}{(1-\delta)} F_{M_P}(z_0,X_0)
\end{align*}
for all large $\ell$. Thus, letting $\delta\to 0^+$, one concludes that
\begin{align}\label{eq136-2}
\limsup_{\ell\to \infty} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\leq F_{M_P}(z_0,X_0).
\end{align}
Since $\delta\in (0,1)$ is arbitrary, (\ref{eq136-1}) and (\ref{eq136-2}) contradict (\ref{eq136-0}). Hence, the proof is complete.
\end{proof}
\end{document} |
\begin{document}
\title{The length of unknotting tunnels}
\author{Daryl Cooper}
\author{Marc Lackenby}
\author{Jessica S. Purcell}
\begin{abstract}
We show there exist tunnel number one hyperbolic 3--manifolds with
arbitrarily long unknotting tunnel. This provides a negative answer
to an old question of Colin Adams.
\end{abstract}
\maketitle
\newcommand{\mat}[1]{\left(\begin{array}{cc} #1 \end{array}\right)}
\section{Introduction}\label{sec:intro}
In a paper published in 1995 \cite{adams:tunnels}, Colin Adams studied
geometric properties of hyperbolic tunnel number one manifolds. A
\emph{tunnel number one manifold} is defined to be a compact
orientable 3--manifold $M$ with torus boundary component(s), which
contains a properly embedded arc $\tau$, the exterior of which is a
handlebody. The arc $\tau$ is defined to be an \emph{unknotting
tunnel} of $M$.
When a tunnel number one manifold $M$ admits a hyperbolic structure,
there is a unique geodesic arc in the homotopy class of $\tau$. If
$\tau$ runs between distinct boundary components, Adams showed that
its geodesic representative has bounded length, when measured in the
complement of a maximal horoball neighborhood of the cusps. He asked
a question about the more general picture: does an unknotting tunnel
in a hyperbolic 3--manifold always have bounded length?
In response, Adams and Reid showed that when the tunnel number one
manifold is a 2--bridge knot complement, unknotting tunnels have
bounded length \cite{adams-reid}. Akiyoshi, Nakagawa, and Sakuma
showed that unknotting tunnels in punctured torus bundles actually
have length zero \cite{ans}, hence bounded length.
Sakuma and Weeks also studied unknotting tunnels in 2--bridge knots
\cite{sakuma-weeks}. They found that any unknotting tunnel of a
2--bridge knot was isotopic to an edge of the canonical polyhedral
decomposition of that knot, first explored by Epstein and Penner
\cite{epstein-penner}. They conjectured that all unknotting tunnels
were isotopic to edges of the canonical decomposition. Heath and Song
later showed by example that not all unknotting tunnels could be
isotopic to edges of the canonical decomposition \cite{heath-song}.
However, the question of whether unknotting tunnels have bounded
length remained unanswered.
In this paper we finally settle the answer to this question. We show
that, in fact, the answer is no. There exist tunnel number one
manifolds with arbitrarily long unknotting tunnel.
\begin{named}{Theorem \ref{thm:long}}
There exist finite volume one--cusped hyperbolic tunnel number one
manifolds for which the geodesic representative of the unknotting
tunnel is arbitrarily long, as measured between the maximal horoball
neighborhood of the cusp.
\end{named}
Note we are not claiming here that the unknotting tunnel in these
examples is ambient isotopic to a geodesic. Such examples can in fact
be constructed, but the argument is more complex and will appear in a
companion paper \cite{lackenby-purcell-2}. However, note that Theorem
\ref{thm:long} does force the unknotting tunnels in these examples to
be arbitrarily long, because the length of a properly embedded arc is
at least that of the geodesic in its homotopy class.
We prove Theorem \ref{thm:long} in two ways. The first proof, which
appears in Section \ref{sec:long-tunnels}, is geometric and partially
non-constructive. We analyze the infinite--volume hyperbolic
structures on the compression body $C$ with negative boundary a torus,
and positive boundary a genus 2 surface. A guiding principle is that
geometric properties of hyperbolic structures on $C$ should often have
their counterparts in finite--volume hyperbolic 3--manifolds with
tunnel number one. For example, any geometrically infinite hyperbolic
structure on $C$ is the geometric and algebraic limit of a sequence of
geometrically finite hyperbolic structures on $C$, and it is also the
geometric limit of a sequence of finite--volume hyperbolic
3--manifolds with tunnel number one. It is by finding suitable
sequences of hyperbolic structures on $C$ that Theorem \ref{thm:long}
is proved. In particular, the proof gives very little indication of
what the finite--volume hyperbolic 3--manifolds actually are.
The geometric proof of Theorem \ref{thm:long} leads naturally to the
study of geometrically finite structures on the compression body $C$
and their geometric properties. We include some background material
in Sections \ref{sec:prelim} and \ref{sec:ford}. However, we postpone
a more extensive investigation of geometrically finite structures on
$C$ to a companion paper \cite{lackenby-purcell-2}.
The second proof is more topological, and appears in Section
\ref{sec:dehn}. The idea is to start with a tunnel number one manifold
with two cusps. An argument using homology implies that there exist
Dehn fillings on one cusp which yield a tunnel number one manifold whose
core tunnel must be arbitrarily long.
A consequence of the second proof is that the resulting tunnel number
one manifold cannot be the exterior of a knot in a homology sphere. In
Section \ref{sec:homology}, we modify the construction of the first
proof to show there do exist tunnel number one manifolds with long
tunnel which are the exterior of a knot in a homology sphere. It
seems likely that the Dehn filling construction in Section
\ref{sec:dehn} can be modified to produce hyperbolic knots in homology
spheres with long unknotting tunnels. However, to establish this, a
substantially different method of proof would be required.
Although we construct examples of knots in homology 3--spheres with
long unknotting tunnels, we do not obtain knots in the 3--sphere using
our methods. It would be interesting to determine whether such
sequences of knots exist. If they do, can explicit diagrams of such
knots be found?
\section{Background and preliminary material}\label{sec:prelim}
In this section we will review terminology and results used throughout
the paper.
The first step in the proof of Theorem \ref{thm:long} is to show there
exist geometrically finite structures on a compression body $C$ with
arbitrarily long tunnel. We begin by defining these terms.
\subsection{Compression bodies}
A \emph{compression body} $C$ is either a handlebody, or the result of
taking a closed, orientable (possibly disconnected) surface $S$ cross
an interval $[0,1]$, and attaching 1--handles to $S\times\{1\}$. The
\emph{negative boundary}, denoted $\partial_-C$, is $S \times \{ 0\}$.
When $C$ is a handlebody, $\partial_{-}C = \emptyset$. The
\emph{positive boundary} is $\partial C \setminus \partial_{-}C$, and
is denoted $\partial_{+}C$.
Throughout this paper, we will be interested in compression bodies $C$
for which $\partial_{-}C$ is a torus and $\partial_{+}C$ is a genus
$2$ surface. We will refer to such a manifold as a
\emph{$(1,2)$--compression body}, where the numbers $(1,2)$ refer to
the genus of the boundary components.
Let $\tau$ be the union of the core of the attached 1--handle with
two vertical arcs in $S \times [0,1]$ attached to its endpoints. Thus,
$\tau$ is a properly embedded arc in $C$, and $C$ is a regular
neighborhood of $\partial_- C \cup \tau$. We refer to $\tau$ as the
\emph{core tunnel}. See Figure \ref{fig:comp-body}.
\begin{figure}
\centerline{\includegraphics[width=3in]{figures/compression-body.eps}}
\caption{The $(1,2)$--compression body. The core tunnel is the thick
line shown, with endpoints on the torus boundary.}
\label{fig:comp-body}
\end{figure}
Note that the fundamental group of a $(1,2)$--compression body $C$ is
isomorphic to $({\mathbb{Z}}\times{\mathbb{Z}})\ast {\mathbb{Z}}$. We will denote the generators
of the ${\mathbb{Z}} \times {\mathbb{Z}}$ factor by $\alpha$, $\beta$, and we will
denote the generator of the second factor by $\gamma$.
\subsection{Hyperbolic structures}
Let $C$ be a $(1,2)$--compression body. We are interested in complete
hyperbolic structures on the interior of $C$. We obtain a hyperbolic
structure on $C \setminus {\partial} C$ by taking a discrete, faithful
representation $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ and considering the
manifold ${\mathbb{H}}^3/\rho(\pi_1(C))$.
\begin{define}
A discrete subgroup $\Gamma < {\rm PSL}(2,{\mathbb{C}})$ is \emph{geometrically
finite} if ${\mathbb{H}}^3/\Gamma$ admits a finite--sided, convex fundamental
domain. In this case, we will also say that the manifold
${\mathbb{H}}^3/\Gamma$ is \emph{geometrically finite}.
\label{def:gf-group}
\end{define}
Geometrically finite groups are well understood. In this paper, we
will often use the following theorem of Bowditch (and its corollary,
Corollary \ref{cor:bowditch} below).
\begin{theorem}[Bowditch, Proposition 5.7 \cite{bowditch}]
If a subgroup $\Gamma < {\rm PSL}(2,{\mathbb{C}})$ is geometrically finite, then
every convex fundamental domain for ${\mathbb{H}}^3/\Gamma$ has finitely many
faces.
\label{thm:bowditch}
\end{theorem}
\begin{define}
For $C$ a $(1,2)$--compression body, we will say that a discrete, faithful
representation $\rho$ is \emph{minimally parabolic} if for all $g \in
\pi_1(C)$, $\rho(g)$ is parabolic if and only if $g$ is conjugate to
an element of the fundamental group of the torus boundary component
${\partial}_-C$.
\label{def:min-parabolic}
\end{define}
\begin{define}
A discrete, faithful representation $\rho\colon\thinspace\pi_1(C)\to {\rm PSL}(2,{\mathbb{C}})$ is
a \emph{minimally parabolic geometrically finite uniformization of
$C$} if $\rho$ is minimally parabolic, $\rho(\pi_1(C))$ is
geometrically finite as a subgroup of ${\rm PSL}(2,{\mathbb{C}})$, and
${\mathbb{H}}^3/\rho(\pi_1(C))$ is homeomorphic to the interior of $C$.
\label{def:gf}
\end{define}
It is a classical result, due to Bers, Kra, and Maskit (see
\cite{bers74}), that the space of conjugacy classes of minimally
parabolic geometrically finite uniformizations of $C$ may be
identified with the Teichm\"uller space of the genus 2 boundary
component ${\partial}_+C$, quotiented out by ${\rm Mod}_0(C)$, the group of
isotopy classes of homeomorphisms of $C$ which are homotopic to the
identity.
In particular, note that the space of minimally parabolic
geometrically finite uniformizations is path connected.
\subsection{Isometric spheres and {F}ord domains}
The tool we use to study geometrically finite representations is that
of Ford domains. We define the necessary terminology in this section.
Throughout this subsection, let $M={\mathbb{H}}^3/\Gamma$ be a hyperbolic
manifold with a single rank two cusp, for example, the
$(1,2)$--compression body. In the upper half space model for ${\mathbb{H}}^3$,
assume the point at infinity in ${\mathbb{H}}^3$ projects to the cusp. Let $H$
be any horosphere about infinity. Let $\Gamma_\infty < \Gamma$ denote
the subgroup that fixes $H$. By assumption, $\Gamma_\infty =
{\mathbb{Z}}\times{\mathbb{Z}}$.
\begin{define}
For any $g \in \Gamma \setminus\Gamma_\infty$, $g^{-1}(H)$ will be a
horosphere centered at a point of ${\mathbb{C}}$, where we view the boundary at
infinity of ${\mathbb{H}}^3$ to be ${\mathbb{C}} \cup \{\infty\}$. Define the set $S_g$
to be the set of points in ${\mathbb{H}}^3$ equidistant from $H$ and
$g^{-1}(H)$. $S_g$ is the \emph{isometric sphere} of $g$.
\label{def:isometric-sphere}
\end{define}
Note that $S_g$ is well--defined even if $H$ and $g^{-1}(H)$ overlap.
It will be a Euclidean hemisphere orthogonal to the boundary ${\mathbb{C}}$ of
${\mathbb{H}}^3$.
At first glance, it may seem more natural to consider points
equidistant from $H$ and $g(H)$, rather than $g^{-1}(H)$ as in
Definition \ref{def:isometric-sphere}. However, we use the historical
definition of isometric spheres in order to make use of the following
classical result, which we include as a lemma. A proof can be found,
for example, in Maskit's book \cite[Chapter IV, Section G]{maskit}.
\begin{lemma}
For any $g\in\Gamma\setminus\Gamma_\infty$, the action of $g$ on
${\mathbb{H}}^3$ is given by inversion in $S_g$ followed by a Euclidean
isometry. \qed
\label{lemma:isosphere-invert}
\end{lemma}
The following is well known, and follows from standard calculations in
hyperbolic geometry. We give a proof in \cite{lackenby-purcell-2}.
\begin{lemma}
If $$g=\mat{a&b\\c&d}\in {\rm PSL}(2,{\mathbb{C}}),$$ then the center of the
Euclidean hemisphere $S_{g^{-1}}$ is $g(\infty) = a/c$. Its Euclidean
radius is $1/|c|$.
\label{lemma:iso-center-rad}
\end{lemma}
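The lemma is easy to test numerically. The following small script is ours; it only uses the standard facts that the signed hyperbolic distance from a point $(z,h)$ in the upper half space to the horosphere about infinity at Euclidean height $t$ equals $\log(t/h)$, and that $g$ maps that horosphere to a horosphere tangent to ${\mathbb{C}}$ at $g(\infty)=a/c$ with Euclidean diameter $1/(|c|^2t)$. It checks that points of the hemisphere with center $a/c$ and radius $1/|c|$ are equidistant from $H$ and $g(H)$, i.e.\ that they lie on $S_{g^{-1}}$.
\begin{verbatim}
# Sketch (not from the paper): numerical check of the center/radius lemma.
import cmath, math, random

a, b, c = 2 + 1j, 1 + 0j, 1 - 2j
d = (1 + b*c) / a          # normalise so ad - bc = 1 (only a, c enter below)
t = 3.0                    # horosphere H about infinity at Euclidean height t

def dist_to_H(z, h):
    """Signed hyperbolic distance from (z, h) to the horosphere at height t."""
    return math.log(t / h)

def dist_to_gH(z, h):
    """Signed distance from (z, h) to g(H): tangent at a/c, diameter 1/(|c|^2 t)."""
    q, diam = a / c, 1.0 / (abs(c)**2 * t)
    return math.log((abs(z - q)**2 + h**2) / (diam * h))

centre, radius = a / c, 1.0 / abs(c)
for _ in range(5):
    phi = random.uniform(0, 2 * math.pi)
    theta = random.uniform(0.1, 1.4)                 # stays in (0, pi/2)
    z = centre + radius * math.cos(theta) * cmath.exp(1j * phi)
    h = radius * math.sin(theta)
    print(abs(dist_to_H(z, h) - dist_to_gH(z, h)))   # ~ 1e-15
\end{verbatim}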
Let $B_g$ denote the \emph{open} half ball bounded by $S_g$. Define
${\mathcal{F}}$ to be the set
$${\mathcal{F}} = {\mathbb{H}}^3 \setminus \bigcup_{g\in \Gamma \setminus \Gamma_\infty}
B_g.$$ Note ${\mathcal{F}}$ is invariant under $\Gamma_\infty$, which acts by
Euclidean translations on ${\mathbb{H}}^3$.
When $H$ bounds a horoball $H_\infty$ that projects to an embedded
horoball neighborhood about the rank 2 cusp of $M$, ${\mathcal{F}}$ is the set of
points in ${\mathbb{H}}^3$ which are at least as close to $H_\infty$ as to any
of its translates under $\Gamma \setminus \Gamma_\infty$. Such an
embedded horoball neighborhood of the cusp always exists, by the
Margulis lemma.
\begin{define}
A \emph{vertical fundamental domain for $\Gamma_\infty$} is a
fundamental domain for the action of $\Gamma_\infty$ cut out by
finitely many vertical geodesic planes in ${\mathbb{H}}^3$.
\end{define}
\begin{define}
A \emph{Ford domain} of $M$ is the intersection of ${\mathcal{F}}$ with a
vertical fundamental domain for the action of $\Gamma_\infty$.
\label{def:ford-domain}
\end{define}
A Ford domain is not canonical because the choice of fundamental
domain for $\Gamma_\infty$ is not canonical. However, for the
purposes of this paper, the region ${\mathcal{F}}$ in ${\mathbb{H}}^3$ is often more
useful than the actual Ford domain.
Note that Ford domains are convex fundamental domains. Thus we have
the following corollary of Bowditch's Theorem \ref{thm:bowditch}.
\begin{cor}
$M={\mathbb{H}}^3/\Gamma$ is geometrically finite if and only if a Ford
domain for $M$ has a finite number of faces.
\label{cor:bowditch}
\end{cor}
\subsection{Visible faces and {F}ord domains}
\begin{define}
Let $g \in \Gamma\setminus\Gamma_\infty$. The isometric sphere $S_g$
is called \emph{visible from infinity}, or simply \emph{visible}, if
it is not contained in $\bigcup_{h \in \Gamma\setminus(\Gamma_\infty
\cup \Gamma_\infty g)} \bar{B_h}$. Otherwise, $S_g$ is called
\emph{invisible}.
Similarly, suppose $g, h \in \Gamma\setminus\Gamma_\infty$, and $S_g
\cap S_h \cap {\mathbb{H}}^3$ is nonempty. Then the edge of intersection $S_g
\cap S_h$ is called \emph{visible} if $S_g$ and $S_h$ are visible and
their intersection is not contained in $\bigcup_{k \in
\Gamma\setminus(\Gamma_\infty \cup \Gamma_\infty g \cup \Gamma_\infty
h)} \bar{B_k}$. Otherwise, it is \emph{invisible}.
\label{def:visible}
\end{define}
The faces of ${\mathcal{F}}$ are exactly those that are visible from infinity.
In the case where $H$ bounds a horoball that projects to an embedded
horoball neighborhood of the rank 2 cusp of $M$, there is an
alternative interpretation of visibility. An isometric sphere $S_g$
is visible if and only if there exists a point $x$ in $S_g$ such that
for all $h \in \Gamma \setminus (\Gamma_\infty \cup \Gamma_\infty g)$,
the hyperbolic distance $d(x, h^{-1}(H))$ is greater than the
hyperbolic distance $d(x, H)$. Similarly, an edge $S_g \cap S_h$ is
visible if and only if there exists a point $x$ in $S_g \cap S_h$ such
that for all $k \in \Gamma \setminus (\Gamma_\infty \cup \Gamma_\infty
g \cup \Gamma_\infty h)$, the hyperbolic distance $d(x, H)$ is
strictly less than the hyperbolic distance $d(x, k^{-1}(H))$.
We present a result that allows us to identify minimally parabolic
geometrically finite uniformizations.
\begin{lemma}
Suppose $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ is a geometrically
finite uniformization. Suppose none of the visible isometric spheres
of the Ford domain of ${\mathbb{H}}^3/\rho(\pi_1(C))$ are visibly tangent on
their boundaries. Then $\rho$ is minimally parabolic.
\label{lemma:min-parabolic}
\end{lemma}
By \emph{visibly} tangent, we mean the following. Set $\Gamma =
\rho(\pi_1(C))$, and assume a neighborhood of infinity in ${\mathbb{H}}^3$
projects to the rank two cusp of ${\mathbb{H}}^3/\Gamma$, with $\Gamma_\infty <
\Gamma$ fixing infinity in ${\mathbb{H}}^3$. For any $g \in \Gamma \setminus
\Gamma_\infty$, the isometric sphere $S_g$ has boundary that is a
circle on the boundary ${\mathbb{C}}$ at infinity of ${\mathbb{H}}^3$. This circle
bounds an open disk $D_g$ in ${\mathbb{C}}$. Two isometric spheres $S_g$ and
$S_h$ are \emph{visibly tangent} if their corresponding disks $D_g$
and $D_h$ are tangent on ${\mathbb{C}}$, and for any other $k \in \Gamma
\setminus \Gamma_\infty$, the point of tangency is not contained in
the open disk $D_k$.
\begin{proof}
Suppose $\rho$ is not minimally parabolic. Then it must have a rank 1
cusp. Apply an isometry to ${\mathbb{H}}^3$ so that the point at infinity
projects to this rank 1 cusp. The Ford domain becomes a finite sided
region $P$ meeting this cusp. Take a horosphere about infinity.
Because the Ford domain is finite sided, we may take this horosphere
about infinity sufficiently small that the intersection of the
horosphere with $P$ gives a subset of Euclidean space with sides
identified by elements of $\rho(\pi_1(C))$, conjugated appropriately.
The side identifications of this subset of Euclidean space, given by
the side identifications of $P$, generate the fundamental group of the
cusp. But this is a rank 1 cusp, hence its fundamental group is
${\mathbb{Z}}$. Therefore, the side identification is given by a single
Euclidean translation. The Ford domain $P$ intersects this horosphere
in an infinite strip, and the side identification glues the strip into
an annulus. Note this implies two faces of $P$ are tangent at
infinity.
Now conjugate back to our usual view of ${\mathbb{H}}^3$, with the point at
infinity projecting to the rank 2 cusp of the $(1,2)$--compression
body ${\mathbb{H}}^3/\rho(\pi_1(C))$. The two faces of $P$ tangent at infinity
are taken to two isometric spheres of the Ford domain, tangent at a
visible point on the boundary at infinity.
\end{proof}
\begin{remark}
The converse to Lemma \ref{lemma:min-parabolic} is not true. There
exist examples of geometrically finite representations for which two
visible isometric spheres are visibly tangent, and yet the
representation is still minimally parabolic. We see examples of this
in \cite{lackenby-purcell-2}.
\end{remark}
We next prove a result which will help us identify representations
which are \emph{not} discrete.
\begin{lemma}
Let $\Gamma$ be a discrete, torsion free subgroup of ${\rm PSL}(2,{\mathbb{C}})$ such
that $M = {\mathbb{H}}^3/\Gamma$ has a rank two cusp. Suppose that the point
at infinity projects to the cusp, and let $\Gamma_\infty$ be its
stabilizer in $\Gamma$. Then for all $\zeta \in \Gamma \setminus
\Gamma_\infty$, the isometric sphere of $\zeta$ has radius at most the
minimal (Euclidean) translation length of all elements in
$\Gamma_\infty$.
\label{lemma:not-gf}
\end{lemma}
\begin{proof}
By the Margulis lemma, there exists an embedded horoball neighborhood
of the rank 2 cusp of ${\mathbb{H}}^3/\Gamma$. Let $H_\infty$ be a horoball
about infinity in ${\mathbb{H}}^3$ that projects to this embedded horoball.
Let $\tau$ be the minimum (Euclidean) translation length of all
nontrivial elements in the group $\Gamma_\infty$, say $\tau$ is the
distance translated by the element $w_\tau$. Suppose $S_\zeta$ has
radius $R$ strictly larger than $\tau$. Without loss of generality,
we may assume $S_\zeta$ is visible, for otherwise there is some
visible face $S_\xi$ which covers the highest point of $S_\zeta$,
hence must have even larger radius.
Because the radius $R$ of $S_\zeta$ is larger than $\tau$, $S_\zeta$
must intersect $w_\tau (S_\zeta) = S_{\zeta w_\tau^{-1}}$, and in
fact, the center $w_\tau\zeta^{-1}(\infty)$ of $S_{\zeta w_\tau^{-1}}$
must lie within the boundary circle $S_\zeta \cap {\mathbb{C}}$.
Consider the set of points $P$ equidistant from $\zeta^{-1}(H_\infty)$
and $w_\tau \zeta^{-1}(H_\infty)$. Because these horoballs are the
same size, $P$ must be a vertical plane in ${\mathbb{H}}^3$ which lies over the
perpendicular bisector of the line segment running from
$\zeta^{-1}(\infty)$ to $w_\tau\zeta^{-1}(\infty)$ on ${\mathbb{C}}$.
Now apply $\zeta$. This will take the plane $P$ to $S_0:=S_{\zeta
w_\tau^{-1} \zeta^{-1}}$. We wish to determine the (Euclidean) radius
of $S_0$. By Lemma \ref{lemma:isosphere-invert}, applying $\zeta$ is
the same as applying an inversion in $S_\zeta$, followed by a
Euclidean isometry. Only the inversion will affect the radius of
$S_0$. Additionally, the radius is independent of the location of the
center of the isometric sphere $S_\zeta$, so we may assume without
loss of generality that the center of $S_\zeta$ is at $0 \in {\mathbb{C}}$ and
that the center of $S_{\zeta w_\tau^{-1}}$ is at $\tau \in {\mathbb{C}}$. Now
inversion in a circle of radius $R$ centered at zero takes the point
$\tau$ to $R^2/\tau$, and the point at infinity to $0$. Thus the
center of $S_0$, which is the image of $\tau$ under $\zeta$, will be
of distance $R^2/\tau$ from a point on the boundary of $S_0$, i.e. the
image of $\infty$ on $P$ under $\zeta$. Hence the radius of $S_0$ is
$R^2/\tau > R$. Denote $R^2/\tau$ by $R_0$. We have $R_0 > R >
\tau$.
Now we have a new face $S_0$ with radius $R_0>R>\tau$. Again we may
assume it is visible. The same argument as above implies there is
another sphere $S_1$ with radius $R_1>R_0>\tau$. Continuing, we
obtain an infinite collection of visible faces of increasing radii.
These must all be distinct. But this is impossible: an infinite
number of distinct faces of radius greater than $\tau$ cannot fit
inside a fundamental domain for $\Gamma_\infty$. Thus $\Gamma$ is
indiscrete, contradicting our hypothesis.
\end{proof}
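The mechanism in the proof can be illustrated numerically.  The following
Python sketch (an illustration only; the values of $\tau$ and of the initial
radius are arbitrary sample values) iterates the radius recursion
$R \mapsto R^2/\tau$ and shows the unbounded growth used to reach the
contradiction.
\begin{verbatim}
# Illustration only: the radius recursion R -> R^2 / tau from the proof of
# the preceding lemma.  If the first radius exceeds tau, the radii grow
# without bound, so infinitely many distinct visible faces of radius > tau
# would be forced into a fundamental domain for Gamma_infty.
tau = 1.0        # minimal translation length in Gamma_infty (sample value)
R = 1.2 * tau    # radius of the first isometric sphere, assumed > tau

for n in range(6):
    print(n, R)
    R = R * R / tau   # radius of the next visible isometric sphere
\end{verbatim}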
The following lemma gives us a tool to identify the Ford domain of a
geometrically finite manifold.
\begin{lemma}
Let $\Gamma$ be a subgroup of ${\rm PSL}(2,{\mathbb{C}})$ with rank 2 subgroup
$\Gamma_\infty$ fixing the point at infinity. Suppose the isometric
spheres corresponding to a finite set of elements of $\Gamma$, as well
as a vertical fundamental domain for $\Gamma_\infty$, cut out a
fundamental domain $P$ for $\Gamma$. Then $\Gamma$ is discrete and
geometrically finite, and $P$ must be a Ford domain of ${\mathbb{H}}^3/\Gamma$.
\label{lemma:finding-ford}
\end{lemma}
\begin{proof}
The discreteness of $\Gamma$ follows from Poincar{\'e}'s polyhedron
theorem. The fact that it is geometrically finite follows directly
from the definition.
Suppose $P$ is not a Ford domain. Since the Ford domain is only
well--defined up to choice of fundamental region for $\Gamma_\infty$,
there is a Ford domain $F$ with the same choice of vertical
fundamental domain for $\Gamma_\infty$ as for $P$. Since $P$ is not a
Ford domain, $F$ and $P$ do not coincide. Because both are cut out by
isometric spheres corresponding to elements of $\Gamma$, there must be
additional visible faces cutting out the domain $F$ beyond those
that cut out the domain $P$. Hence $F$ is a strict subset of $P$, and
there is some point $x$ in ${\mathbb{H}}^3$ which lies in the interior of $P$,
but does not lie in the Ford domain.
But now consider the covering map $\phi\colon\thinspace {\mathbb{H}}^3 \to {\mathbb{H}}^3/\Gamma$.
This map $\phi$ glues both $P$ and $F$ into the manifold
${\mathbb{H}}^3/\Gamma$, since they are both fundamental domains for $\Gamma$.
So consider $\phi$ applied to $x$. Because $x$ lies in the interior
of $P$, and $P$ is a fundamental domain, there is no other point of
$P$ mapped to $\phi(x)$. On the other hand, $x$ does not lie in the
Ford domain $F$. Thus there is some preimage $y$ of $\phi(x)$ under
$\phi$ which does lie in $F$. But $F$ is a subset of $P$. Hence we
have $y \neq x$ in $P$ such that $\phi(x) = \phi(y)$. This
is a contradiction.
\end{proof}
\subsection{The {F}ord spine}
When we glue the Ford domain into the manifold $M={\mathbb{H}}^3/\Gamma$, as in
the proof of Lemma \ref{lemma:finding-ford}, the faces of the Ford
domain will be glued together in pairs to form $M$.
\begin{define}
The {\epsilon}mph{Ford spine} of $M$ is defined to be the image of the visible
faces of ${\mathcal{F}}$ under the covering ${\mathbb{H}}^3\to M$.
\label{def:spine}
\end{define}
\begin{remark}
A spine usually refers to a subset of the manifold onto which there is
a retraction of the manifold. Using that definition, the Ford spine
is not strictly a spine. However, the Ford spine union the genus 2
boundary ${\partial}_+C$ will be a spine for the compression body.
\end{remark}
Let $\rho$ be a geometrically finite uniformization. Recall that the
\emph{domain of discontinuity} $\Omega_{\rho(\pi_1(C))}$ is the
complement of the limit set of $\rho(\pi_1(C))$ in the boundary at
infinity ${\partial}_\infty {\mathbb{H}}^3$. See, for example, Marden \cite[section
2.4]{marden-book}.
\begin{lemma}
Let $\rho$ be a minimally parabolic geometrically finite
uniformization of a $(1,2)$--compression body $C$. Then the manifold
$({\mathbb{H}}^3 \cup \Omega_{\rho(\pi_1(C))})/\rho(\pi_1(C))$ retracts onto
the boundary at infinity $(\bar{{\mathcal{F}}} \cap {\mathbb{C}})/ \Gamma_\infty$, union
the Ford spine.
\label{lemma:spine}
\end{lemma}
\begin{proof}
Let $H$ be a horosphere about infinity in ${\mathbb{H}}^3$ that bounds a
horoball which projects to an embedded horoball neighborhood of the
cusp of ${\mathbb{H}}^3/\rho(\pi_1(C))$. Let $x$ be any point in ${\mathcal{F}} \cap
{\mathbb{H}}^3$. The nearest point on $H$ to $x$ lies on a vertical line
running from $x$ to infinity. These vertical lines give a foliation
of ${\mathcal{F}}$. All such lines have one endpoint on infinity, and the other
endpoint on $\bar{{\mathcal{F}}} \cap {\mathbb{C}}$ or an isometric sphere of ${\mathcal{F}}$.
We obtain our retraction by mapping the point $x$ to the endpoint of
its associated vertical line, then quotienting out by the action of
$\rho(\pi_1(C))$.
\end{proof}
To any face $F_0$ of the Ford spine we associate a collection of
visible elements of $\Gamma$: those whose isometric sphere projects to
$F_0$ (or, more carefully, a subset of whose isometric sphere projects
to the face $F_0$).  We will often say that an element $g$ of $\Gamma$
\emph{corresponds} to a face $F_0$ of the Ford spine of $M$, meaning
$S_g$ is visible and (the visible subset of) $S_g$ projects to $F_0$.
Note that if $g$ corresponds to $F_0$, then so do $g^{-1}$ and $w_0
g^{\pm 1} w_1$ for any words $w_0, w_1 \in \Gamma_\infty$.
\section{Ford domains of compression bodies}\label{sec:ford}
Let $C$ be a $(1,2)$--compression body. The fundamental group
$\pi_1(C)$ is isomorphic to $({\mathbb{Z}}\times{\mathbb{Z}})\ast {\mathbb{Z}}$. The ${\mathbb{Z}} \times
{\mathbb{Z}}$ factor has generators $\alpha$ and $\beta$, and the generator of
the ${\mathbb{Z}}$ factor is $\gamma$.
Suppose $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ is a minimally parabolic
geometrically finite uniformization of $C$. Then $\rho(\alpha)$ and
$\rho(\beta)$ are parabolic, and we will assume they fix the point at
infinity in ${\mathbb{H}}^3$. Together, they generate $\Gamma_\infty$. The
third element, $\rho(\gamma)$, is a loxodromic element. In
$\pi_1(C)$, $\alpha$ and $\beta$ are represented by loops in
$\partial_-C$. To form the $(1,2)$--compression body, we add to
$\partial_-C \times I$ a 1--handle. Then $\gamma$ is represented by a
loop around the core of this 1--handle.
In the simplest possible case imaginable, the Ford spine of
${\mathbb{H}}^3/\Gamma$ consists of a single face, corresponding to
$\rho(\gamma)$. Note if this case happened to occur, then in the lift
to ${\mathbb{H}}^3$, the only visible isometric spheres would correspond to
$\rho(\gamma)$, $\rho(\gamma^{-1})$, and their translates by elements
of $\Gamma_\infty$. Cutting out regions bounded by these hemispheres
would give the region ${\mathcal{F}}$. Topologically, the manifold
${\mathbb{H}}^3/\Gamma$ is obtained as follows. First take ${\mathcal{F}}/\Gamma_\infty$.
The interior of ${\mathcal{F}}/\Gamma_\infty$ is homeomorphic to $T^2 \times
(0,\infty)$. On the boundary on ${\mathbb{C}}$ of ${\mathcal{F}}/\Gamma_\infty$ lie
two hemispheres, corresponding to $\rho(\gamma)$ and
$\rho(\gamma^{-1})$. These are glued via $\rho(\gamma)$ to form
${\mathbb{H}}^3/\Gamma$ from ${\mathcal{F}}/\Gamma_\infty$.
This situation is illustrated in Figure \ref{fig:simple-ford}.
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\makebox{\begin{tabular}{c}
\input{figures/simple-forddomain.pstex_t} \\
\end{tabular} } &
\hspace{1in} &
\makebox{\begin{tabular}{c}
\\
\includegraphics[width=1.5in]{figures/ford1-simple.ps}
\end{tabular}}
\end{tabular}
\end{center}
\caption{Left: Schematic picture of a simple Ford domain. Right:
Three dimensional view of ${\mathcal{F}}$ in ${\mathbb{H}}^3$.}
\label{fig:simple-ford}
\end{figure}
In the following lemma, we show that this simple Ford domain does, in
fact, occur.
\begin{lemma}
Let $C$ be a $(1,2)$--compression body. There exists a minimally
parabolic geometrically finite uniformization of $C$, $\rho\colon\thinspace
\pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ such that the Ford spine of
${\mathbb{H}}^3/\rho(\pi_1(C))$ consists of a single face, corresponding to the
loxodromic generator.
\label{lemma:simple-ford}
\end{lemma}
\begin{proof}
We construct such a structure by choosing $\rho(\alpha)$,
$\rho(\beta)$, $\rho(\gamma)$ in ${\rm PSL}(2,{\mathbb{C}})$.
Let $c \in {\mathbb{C}}$ be such that $|c| > 2$, and let $\rho(\alpha)$,
$\rho(\beta)$, and $\rho(\gamma)$ be defined by
$$\rho(\alpha) = \mat{1&2|c|\\0&1}, \quad \rho(\beta) =
\mat{1&2i|c|\\0&1}, \quad \rho(\gamma)=\mat{c&-1\\1&0}.$$
Let $\Gamma$ be the subgroup of ${\rm PSL}(2, {\mathbb{C}})$ generated by
$\rho(\alpha)$, $\rho(\beta)$, and $\rho(\gamma)$. By Lemma
\ref{lemma:iso-center-rad}, $S_{\rho(\gamma)}$ has center $0$, radius
$1$, and $S_{\rho(\gamma^{-1})}$ has center $c\in{\mathbb{C}}$, radius $1$.
For $|c|>2$, $S_{\rho(\gamma)}$ will not meet $S_{\rho(\gamma^{-1})}$.
Note also that by choice of $\rho(\alpha)$, $\rho(\beta)$, all
translates of $S_{\rho(\gamma)}$ and $S_{\rho(\gamma^{-1})}$ under
$\Gamma_\infty$ are disjoint. We claim that $\rho$ satisfies
the conclusions of the lemma.
Select a vertical fundamental domain for $\Gamma_\infty$ which
contains the isometric spheres $S_{\rho(\gamma)}$ and
$S_{\rho(\gamma^{-1})}$ in its interior. This is possible by choice
of $\rho(\alpha)$, $\rho(\beta)$, and $\rho(\gamma)$.
Consider the region $P$ obtained by intersecting this fundamental
region with the complement of $B_{\rho(\gamma)}$ and
$B_{\rho(\gamma^{-1})}$. As in the discussion above, we may glue this
region $P$ into a manifold $C_0$ by gluing $S_{\rho(\gamma)}$ to
$S_{\rho(\gamma^{-1})}$ via $\rho(\gamma)$, and by gluing vertical
faces by appropriate parabolic elements. The manifold $C_0$ will be
homeomorphic to the interior of a $(1,2)$--compression body.
Then Poincar{\'e}'s polyhedron theorem implies that the manifold $C_0$
has fundamental group generated by $\rho(\alpha)$, $\rho(\beta)$, and
$\rho(\gamma)$. Hence $C_0$ is the manifold ${\mathbb{H}}^3/\Gamma$.
By Lemma \ref{lemma:finding-ford}, this fundamental region $P$ must
actually be the Ford domain for the manifold, and $\Gamma$ is
geometrically finite. Since these isometric spheres are nowhere
tangent, $\rho$ is minimally parabolic, by Lemma
\ref{lemma:min-parabolic}.
\end{proof}
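As a quick numerical check of this configuration (an illustration only; the
value of $c$ below is an arbitrary sample with $|c|>2$), one can compute the
centers and radii of $S_{\rho(\gamma)}$ and $S_{\rho(\gamma^{-1})}$ directly
from Lemma \ref{lemma:iso-center-rad} and verify that the two spheres are
disjoint.
\begin{verbatim}
# Illustration only: isometric spheres of rho(gamma) and rho(gamma)^{-1}
# in the proof of the preceding lemma, for a sample c with |c| > 2.
c = 3.0 + 0.5j

def sphere(a, b, cc, d):
    # For g = [[a, b], [cc, d]] with det 1, the isometric sphere S_g has
    # center g^{-1}(infinity) = -d/cc and Euclidean radius 1/|cc|.
    return -d / cc, 1.0 / abs(cc)

center1, r1 = sphere(c, -1.0, 1.0, 0.0)    # rho(gamma):      center 0, radius 1
center2, r2 = sphere(0.0, 1.0, -1.0, c)    # rho(gamma)^{-1}: center c, radius 1
print(center1, r1, center2, r2)

# The spheres are disjoint, and the Gamma_infty-translates move the centers
# by multiples of 2|c| and 2i|c|, so all translates are disjoint as well.
assert abs(center1 - center2) > r1 + r2
\end{verbatim}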
The examples of Ford domains that will interest us are more
complicated than that of Lemma \ref{lemma:simple-ford}.
\begin{example}
Fix $R>0$, and select $\varepsilon \in {\mathbb{R}}$ so that
$0<\varepsilon<e^{-R}$, or equivalently, so that $\log(1/\varepsilon) \in
(R, \infty)$. Set $\rho(\gamma)$ equal to
\begin{equation}
\mat{\displaystyle{\frac{i(1+\varepsilon)}{\sqrt{\varepsilon}}} &
\displaystyle{ \frac{i}{\sqrt{\varepsilon}}}
\\
-\displaystyle{\frac{i}{\sqrt{\varepsilon}}} &
-\displaystyle{\frac{i}{\sqrt{\varepsilon}}}}.
\label{eqn:gamma-mat}
\end{equation}
Note that with $\rho(\gamma)$ defined in this manner, we have
$$\rho(\gamma^2) = \mat{-2-\varepsilon & -1 \\ 1 & 0}.$$
Thus the isometric sphere of $\rho(\gamma)$ has radius
$1/|i/\sqrt{\varepsilon}| = \sqrt{\varepsilon}$, while that of
$\rho(\gamma^2)$ has radius $1$ by Lemma \ref{lemma:iso-center-rad}.
Now select $\rho(\alpha)$ and $\rho(\beta)$ to be parabolic
translations fixing the point at infinity, with translation distance
large enough that the isometric spheres of $\rho(\gamma^2)$,
$\rho(\gamma^{-2})$, $\rho(\gamma)$, and $\rho(\gamma^{-1})$ do not
meet any of their translates under $\rho(\alpha)$ and $\rho(\beta)$.
The following will do:
$$\rho(\alpha) = \mat{1& 20 \\ 0 & 1}, \quad
\rho(\beta) = \mat{1 & 20i\\ 0& 1}.$$
\label{example:long-tunnel}
\end{example}
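The matrix identity for $\rho(\gamma^2)$ and the radii of the isometric
spheres can be checked numerically; the following sketch is an illustration
only, for an arbitrary admissible sample value of $\varepsilon$.
\begin{verbatim}
# Illustration only: check rho(gamma)^2 and the isometric sphere radii
# from the example above, for a sample epsilon in (0, e^{-R}).
eps = 1e-2
s = eps ** 0.5
a, b = 1j * (1 + eps) / s, 1j / s
c_, d = -1j / s, -1j / s              # rho(gamma) = [[a, b], [c_, d]]

# square the matrix by hand: should be approximately [[-2 - eps, -1], [1, 0]]
sq = (a * a + b * c_, a * b + b * d, c_ * a + d * c_, c_ * b + d * d)
print(sq)

# the radius of an isometric sphere is 1/|lower-left entry|
print(1 / abs(c_))       # sqrt(eps), for rho(gamma)
print(1 / abs(sq[2]))    # 1, for rho(gamma^2)
\end{verbatim}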
\begin{lemma}
The representation $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ defined in
Example \ref{example:long-tunnel} is a minimally parabolic
geometrically finite hyperbolic uniformization of
$C$ whose Ford spine consists of exactly two
faces, corresponding to $\rho(\gamma)$ and $\rho(\gamma^2)$.
\label{lemma:gf}
\end{lemma}
\begin{proof}
Consider the isometric spheres corresponding to $\rho(\gamma)$,
$\rho(\gamma^{-1})$, $\rho(\gamma^2)$, and $\rho(\gamma^{-2})$. We
will show that these faces, along with the faces of a vertical
fundamental domain for the action of $\rho(\alpha)$ and $\rho(\beta)$,
are the only faces of the Ford domain of the manifold
${\mathbb{H}}^3/\rho(\pi_1(C))$. Since the faces corresponding to
$\rho(\gamma)$ and to $\rho(\gamma^{-1})$ glue together, and since the
faces corresponding to $\rho(\gamma^2)$ and to $\rho(\gamma^{-2})$
glue, the Ford domain glues to give a Ford spine with exactly two
faces. The fact that the manifold is geometrically finite will then
follow by Lemma \ref{lemma:finding-ford}.
Choose vertical planes that cut out a vertical fundamental domain for
the action of $\Gamma_\infty$ and that avoid the isometric
spheres corresponding to $\rho(\gamma^{\pm 1})$ and $\rho(\gamma^{\pm
2})$. Because the translation distances of $\rho(\alpha)$ and
$\rho(\beta)$ are large with respect to the radii of these isometric
spheres, this is possible. For example, the planes $x=-10$, $x=10$,
$y=-10$, $y=10$ in $\{(x,y,z)|z>0\} = {\mathbb{H}}^3$ will do.
Now, the isometric spheres of $\rho(\gamma)$ and $\rho(\gamma^{-1})$
have center $-1$ and $-1-\varepsilon$, respectively, and radius
$\sqrt{\varepsilon}$, by Lemma \ref{lemma:iso-center-rad}. Similarly,
the isometric spheres of $\rho(\gamma^{2})$ and $\rho(\gamma^{-2})$
have centers $0$ and $-2-\varepsilon$, respectively, and radius $1$.
Then one may check: The isometric sphere of $\rho(\gamma^2)$ meets
that of $\rho(\gamma)$ in the plane $x=-1+\varepsilon/2$. The isometric
sphere of $\rho(\gamma)$ meets that of $\rho(\gamma^{-1})$ in the
plane $x=-1-\varepsilon/2$, and the isometric sphere of
$\rho(\gamma^{-1})$ meets that of $\rho(\gamma^{-2})$ in the plane
$x=-1-3\varepsilon/2$, as in Figure \ref{fig:dumbbell}. These are the
only intersections of these spheres that are visible from infinity.
If we glue the isometric spheres of $\rho(\gamma^{\pm 1})$ via
$\rho(\gamma)$ and the isometric spheres of $\rho(\gamma^{\pm 2})$ via
$\rho(\gamma^2)$, then these three edges of intersection are all glued
to a single edge.
\begin{figure}
\begin{center}
\input{figures/dumbbell.pstex_t}
\end{center}
\caption{The isometric spheres corresponding to $\rho(\gamma^{-2})$,
$\rho(\gamma^{-1})$, $\rho(\gamma)$, and $\rho(\gamma^2)$.}
\label{fig:dumbbell}
\end{figure}
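These plane equations can be confirmed by a short computation with the
centers and radii listed above; the following sketch (illustration only,
with an arbitrary sample value of $\varepsilon$) computes the vertical plane
over which two such hemispheres meet.
\begin{verbatim}
# Illustration only: x-coordinates of the visible edges in the figure above,
# computed from the centers and radii in the proof, for a sample epsilon.
eps = 1e-2
spheres = {                    # name: (center on the real axis, radius)
    'g^2':  (0.0,        1.0),
    'g':    (-1.0,       eps ** 0.5),
    'g^-1': (-1.0 - eps, eps ** 0.5),
    'g^-2': (-2.0 - eps, 1.0),
}

def wall(s1, s2):
    # vertical plane x = const over which the two hemispheres intersect:
    # solve r1^2 - (x - c1)^2 = r2^2 - (x - c2)^2 for x
    (c1, r1), (c2, r2) = spheres[s1], spheres[s2]
    return 0.5 * (c1 + c2) + (r1**2 - r2**2) / (2.0 * (c2 - c1))

print(wall('g^2', 'g'),     -1 + eps / 2)      # both -1 + eps/2
print(wall('g', 'g^-1'),    -1 - eps / 2)      # both -1 - eps/2
print(wall('g^-1', 'g^-2'), -1 - 3 * eps / 2)  # both -1 - 3*eps/2
\end{verbatim}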
Consider the monodromy around this edge. We must show that it is the
identity. Note that a meridian of the edge is divided into three
arcs, running from the faces labeled $S_{\gamma^{-1}}$ to
$S_{\gamma^{-2}}$, from $S_{\gamma^2}$ to $S_{\gamma}$, and from
$S_{\gamma^{-1}}$ to $S_{\gamma}$. To patch the first pair of arcs
together, we glue $S_{\gamma^{-2}}$ to $S_{\gamma^2}$ using the
isometry $\gamma^{-2}$. To patch the second and third pairs of arcs,
we glue $S_{\gamma}$ to $S_{\gamma^{-1}}$ by the isometry $\gamma$.
The composition of these three isometries is
$\gamma^{-2}\gamma\gamma$, which is the identity, as required.
Hence, by Poincar\'e's polyhedron theorem, the space obtained by
gluing faces of the polyhedron $P$ cut out by the above isometric
spheres and vertical planes is a manifold, with fundamental group
generated by $\rho(\gamma)$, $\rho(\gamma^2)$, $\rho(\alpha)$, and
$\rho(\beta)$.
We need to show that this is a uniformization of $C$, i.e., that
${\mathbb{H}}^3/\rho(\pi_1(C))$ is homeomorphic to the interior of $C$. The
Ford spine of ${\mathbb{H}}^3/\rho(\pi_1(C))$ has two faces, one of which has
boundary which is the union of the 1--cell of the spine and an arc on
$\partial_+C$. Collapse the 1--cell and this face. The result is a
new complex with the same regular neighborhood. It now has a single
2--cell attached to $\partial_+C$. Thus, ${\mathbb{H}}^3/\rho(\pi_1(C))$ is
obtained by attaching a 2--handle to $\partial_+C \times I$, and then
removing the boundary. In other words, ${\mathbb{H}}^3/\rho(\pi_1(C))$ is
homeomorphic to the interior of $C$.
Thus ${\mathbb{H}}^3/\rho(\pi_1(C))$ is homeomorphic to the interior of $C$,
and has a convex fundamental domain $P$ with finitely many faces. By
Lemma \ref{lemma:finding-ford}, this convex fundamental domain $P$ is
actually the Ford domain. Finally, since none of the isometric
spheres of the Ford domain are visibly tangent at their boundaries, by
Lemma \ref{lemma:min-parabolic} the representation is minimally
parabolic. Hence it is a minimally parabolic geometrically finite
uniformization of $C$.
\end{proof}
\subsection{Dual edges}
To each face of the Ford spine, there is an associated dual edge,
which is defined as follows. For any face $F$ of the Ford spine,
there is some $g \in \Gamma \backslash \Gamma_\infty$ such that
(subsets of) $S_g$ and $S_{g^{-1}}$ are faces of a Ford domain, and
$S_g$ and $S_{g^{-1}}$ project to $F$. Above each of these isometric
spheres lies a vertical arc, running from the top of the isometric
sphere (i.e. the geometric center of the hemisphere) to the point at
infinity. Define the dual edge to be the union of the image of these
two arcs in ${\mathbb{H}}^3/\rho(\pi_1(C))$.
\begin{figure}
\begin{center}
\input{figures/simple-dual.pstex_t}
\end{center}
\caption{The dual to the simplest Ford spine is an edge that lifts to
a collection of vertical geodesics in ${\mathcal{F}}$, shown in bold.}
\label{fig:simple-dual}
\end{figure}
\begin{lemma}
For any uniformization $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$, the
core tunnel will be homotopic to the edge dual to the isometric sphere
corresponding to the loxodromic generator of $\rho(\pi_1(C))$.
\label{lemma:homotopic-unknotting}
\end{lemma}
\begin{proof}
Denote the loxodromic generator by $\rho(\gamma)$. Consider the core
tunnel in the compression body ${\mathbb{H}}^3/\rho(\pi_1(C))$. Take a
horoball neighborhood $H$ of the cusp, and its horospherical torus
boundary. The core tunnel runs through this horospherical torus ${\partial}
H$, into the cusp. Denote by $\tilde{H}$ a lift of $H$ to ${\mathbb{H}}^3$
about the point at infinity in ${\mathbb{H}}^3$.
There is a homeomorphism from $C$ to ${\mathbb{H}}^3/\rho(\pi_1(C)) \setminus
H$. Slide the tunnel in $C$ so that it starts and ends at the same
point, and so that the resulting loop represents $\gamma$. The image
of this loop under the homeomorphism to ${\mathbb{H}}^3/\rho(\pi_1(C))
\setminus H$ is some loop. This lifts to an arc in ${\mathbb{H}}^3$ starting
on $\tilde{H}$ and ending on $\rho(\gamma)(\tilde{H})$. Extend this
to an arc in ${\mathbb{H}}^3 / \rho(\pi_1(C))$ by attaching a geodesic in
$\tilde{H}$ and in $\rho(\gamma)(\tilde{H})$. This is isotopic to
(the interior of) the core tunnel. Now homotope this to a geodesic.
It will run through the isometric sphere corresponding to
$\rho(\gamma^{-1})$ once.
\end{proof}
\section{Long unknotting tunnels}\label{sec:long-tunnels}
We are now ready to give the geometric proof of our main theorem.
\begin{theorem}
There exist finite volume one--cusped hyperbolic tunnel number one
manifolds for which the geodesic representative of the unknotting
tunnel is arbitrarily long, as measured outside the maximal horoball
neighborhood of the cusp.
\label{thm:long}
\end{theorem}
Recall that a \emph{tunnel number one manifold} is a manifold $M$ with
torus boundary components which admits an unknotting tunnel, that is,
a properly embedded arc $\tau$, the exterior of which is a handlebody.
Recall also that the length of the geodesic representative of an
unknotting tunnel is measured outside a maximal horoball neighborhood
of the cusp.
Before proving Theorem \ref{thm:long}, we need to prove a similar
statement for minimally parabolic geometrically finite hyperbolic
uniformizations of a $(1,2)$--compression body.
\begin{prop}
For any $R>0$, there exists a minimally parabolic geometrically
finite uniformization of a $(1,2)$--compression body such that the
geodesic representative of the homotopy class of the core tunnel has
length at least $R$.
\label{prop:geom-fin-long}
\end{prop}
\begin{proof}
We will prove Proposition \ref{prop:geom-fin-long} by finding an
explicit minimally parabolic geometrically finite uniformization of a
$(1,2)$--compression body $C$. For fixed $R>0$, our explicit
uniformization will be that given in Example \ref{example:long-tunnel}
above. By Lemma \ref{lemma:gf}, this is a minimally parabolic
geometrically finite hyperbolic uniformization of the
$(1,2)$--compression body $C$ whose Ford spine consists of exactly two
faces, corresponding to $\rho(\gamma)$ and $\rho(\gamma^2)$. We claim
that the geodesic representative of the homotopy class of the core
tunnel has length at least $R$.
\begin{lemma}
Let $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ be a discrete, faithful
representation such that $\rho(\alpha)$, $\rho(\beta)$ are parabolics
fixing the point at infinity in ${\mathbb{H}}^3$, and $\rho(\gamma)$ is as in
equation (\ref{eqn:gamma-mat}). Then the geodesic representative of
the homotopy class of the core tunnel has length greater than
$R$.
\label{lemma:gf-long}
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:homotopic-unknotting}, the core tunnel is
homotopic to the geodesic dual to the isometric spheres corresponding
to $\rho(\gamma)$ and $\rho(\gamma^{-1})$. The length of this
geodesic is twice the distance along the vertical geodesic from the
top of one of the isometric spheres corresponding to $\rho(\gamma^{\pm
1})$ to a maximal horoball neighborhood of the cusp about infinity.
Since the isometric sphere of $\rho(\gamma^2)$ has radius $1$, a
maximal horoball about the cusp will have height at least $1$. The
isometric sphere of $\rho(\gamma)$ has radius $\sqrt{\varepsilon}$.
Integrating $1/z$ from $z=\sqrt{\varepsilon}$ to $1$, we find that the
distance along this vertical arc is at least
$\log{1/\sqrt{\varepsilon}}$. Hence the length of the geodesic
representative of the core tunnel is at least $\log{1/\varepsilon}$.
By choice of $\varepsilon$, this length is greater than $R$.
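Here we have used that the hyperbolic length element along a vertical
geodesic in the upper half space model is $dz/z$; in summary,
$$ 2\int_{\sqrt{\varepsilon}}^{1}\frac{dz}{z} \;=\; \log\frac{1}{\varepsilon} \;>\; R, $$
since $\varepsilon < e^{-R}$.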
\end{proof}
\begin{remark}
Note in the proof above that we may strengthen Lemma
\ref{lemma:gf-long} as follows. Because of the choice of $\varepsilon$
in equation (\ref{eqn:gamma-mat}), there exists some neighborhood $U$
of the matrix of (\ref{eqn:gamma-mat}) such that if $\rho(\alpha)$,
$\rho(\beta)$ are as above, but $\rho(\gamma)$ lies in $U$, then the
geodesic representative of the homotopy class of the core tunnel has
length greater than $R$.
\end{remark}
This completes the proof of Proposition \ref{prop:geom-fin-long}.
\end{proof}
Before we present the proof of Theorem \ref{thm:long}, we need to
recall terminology from Kleinian group theory.
We define the (restricted) character variety $V(C)$ to be the space of
conjugacy classes of representations $\rho \colon\thinspace \pi_1(C) \to {\rm
PSL}(2, {\mathbb{C}})$ such that elements of $\pi_1(\partial_- C)$ are sent to
parabolics. Note this definition agrees with Marden's definition of
the representation variety in \cite{marden-book}, but is a restriction
of the character variety in Culler and Shalen's classic paper
\cite{culler-shalen}. Convergence in $V(C)$ is known as algebraic
convergence.
Let $GF_0(C)$ denote the subset of $V(C)$ consisting of conjugacy
classes of minimally parabolic geometrically finite uniformizations of
$C$, given the algebraic topology. It follows from work of Marden
\cite[Theorem 10.1]{marden:geom} that $GF_0(C)$ is an open subset of
$V(C)$. We are interested in a type of structure that lies on the
boundary of $GF_0(C)$. These structures are discrete, faithful
representations of $C$ that are geometrically finite, but not
minimally parabolic.
\begin{define}
A \emph{maximal cusp for $C$} is a geometrically finite uniformization
of $C$, $\rho\colon\thinspace \pi_1(C) \to {\rm PSL}(2,{\mathbb{C}})$ such that every component of
the boundary of the convex core of ${\mathbb{H}}^3/\rho(\pi_1(C))$ is a
3--punctured sphere.
\end{define}
A maximal cusp is in some sense the opposite of a minimally parabolic
representation. In a minimally parabolic representation, no elements
of ${\partial}_+C$ are pinched. In a maximal cusp, a full pants decomposition
of ${\partial}_+C$, or the maximal number of elements, is pinched to parabolic
elements.
Due to a theorem of Canary, Culler, Hersonsky, and Shalen
\cite[Corollary 16.4]{cchs}, conjugacy classes of maximal cusps for
$C$ are dense on the boundary of $GF_0(C)$ in $V(C)$. This theorem,
an extension of work of McMullen \cite{mcmullen:max-cusps}, is key in
the proof of Theorem \ref{thm:long}.
\begin{proof}[Proof of Theorem \ref{thm:long}]
Let $\rho_0$ be the geometrically finite representation of the proof
of Proposition \ref{prop:geom-fin-long}, with core tunnel homotopic to
a geodesic of length strictly greater than $R$. The translation
lengths of $\rho_0(\alpha)$ and $\rho_0(\beta)$ are bounded, say by
$B$.
We will consider $\rho_0$ to be an element of the character variety
$V(C)$. Indeed, define ${\mathcal{R}}$ to be the set of all representations
where $\rho(\alpha)$ and $\rho(\beta)$ are parabolics fixing infinity
with length bounded by $B$, and with $\rho(\gamma)$ fixed as in
equation (\ref{eqn:gamma-mat}). If we view the character variety
$V(C)$ as a subset of the variety of representations $\rho$ of
$\pi_1(C)$ where $\rho(\alpha)$ and $\rho(\beta)$ have been suitably
normalized to avoid conjugation, then we may consider ${\mathcal{R}}$ as a subset
of $V(C)$. Note $\rho_0$ is in ${\mathcal{R}}$.
The set ${\mathcal{R}}$ is clearly path connected. By Lemma \ref{lemma:gf-long},
for all uniformizations of $C$ in ${\mathcal{R}}$, the length of the geodesic
representative of the core tunnel is at least $R$.
Moreover, notice that ${\mathcal{R}}$ includes indiscrete representations, as
follows. Recall that the isometric sphere corresponding to $\gamma^2$
has radius $1$ when $\rho(\gamma)$ is defined as in equation
(\ref{eqn:gamma-mat}). Thus by Lemma \ref{lemma:not-gf}, whenever the
translation length of $\alpha$ is less than $1$, the representation
cannot be discrete.
Then consider a path in ${\mathcal{R}}$ from $\rho_0$ to an indiscrete
representation. At some point along this path, we come to ${\mathcal{R}} \cap {\partial}
GF_0(C)$.
By work of Canary, Culler, Hersonsky, and Shalen \cite{cchs},
generalizing work of McMullen \cite{mcmullen:max-cusps}, the set of
maximal cusps is dense in the boundary of geometrically finite
structures ${\partial} GF_0(C)$.
It follows that we can find a sequence of geometrically finite
representations $\rho_n$ of $\pi_1(C)$ such that the conformal
boundaries of the manifolds $C_n := {\mathbb{H}}^3/\rho_n(\pi_1(C))$ are
maximally cusped genus two surfaces, $C_n$ are homeomorphic to the
interior of $C$, and such that the algebraic limit of these manifolds
$C_n$ is a manifold $M = {\mathbb{H}}^3/\rho_\infty(\pi_1(C))$ where
$\rho_\infty$ is in ${\mathcal{R}}$. By the remark following Lemma
\ref{lemma:gf-long}, for large $n$, the core tunnels of the $C_n$ will
have geodesic representative with length greater than $R$.
Now, there exists a maximally cusped hyperbolic structure on the genus
2 handlebody $H$. In fact, by work of Canary, Culler, Hersonsky, and
Shalen \cite[Corollary 15.1]{cchs}, such structures are dense in the
boundary of geometrically finite structures on handlebodies. Thus,
there exists a hyperbolic manifold ${\mathbb{H}}^3/\Gamma_1$ homeomorphic to
the interior of $H$, such that every component of the boundary of the
convex core of ${\mathbb{H}}^3/\Gamma_1$ is a 3--punctured sphere. We will
continue to denote the hyperbolic manifold ${\mathbb{H}}^3/\Gamma_1$ by $H$.
\begin{figure}
\begin{center}
\includegraphics[height=1.5in]{figures/long-tunnel-2.ps} \hspace{.1in}
\includegraphics[height=1.5in]{figures/long-tunnel-3.ps} \hspace{.1in}
\includegraphics[height=1.5in]{figures/long-tunnel-4.ps} \hspace{.1in}
\includegraphics[height=1.5in]{figures/long-tunnel-5.ps}
\end{center}
\caption{Shown is a picture of ${\mathcal{F}}$ for four geometrically finite
structures with long unknotting tunnel. These structures are
converging to a structure on ${\partial} GF_0(\pi_1(C))$. Note that in
each of the four structures, the pattern of isometric spheres
corresponding to that of Figure \ref{fig:dumbbell} is visible,
although the number of visible isometric spheres increases.}
\label{fig:long-inf}
\end{figure}
Let $\phi_n$ be any homeomorphism from ${\partial}_+C$ to $H$ taking
parabolics of $C_n$ on ${\partial}_+C$ to the parabolics of ${\partial} H$. Because
$\phi_n$ takes 3--punctured spheres to 3--punctured spheres, it
extends to an isometry. Hence we may glue $C_n$ to $H$ via $\phi_n$
and obtain a tunnel number one manifold with three drilled out curves,
corresponding to the three parabolics of ${\partial}_+C$. These are three
torus boundary components of $M_n:=C_n \cup_{\phi_n} H$.
Select Dehn filling slopes $s^1$, $s^2$, $s^3$ on these three boundary
components that act as gluing one boundary to the other by a high
power of a Dehn twist. When we Dehn fill along these slopes, the
result is a tunnel number one manifold $M_n(s^1, s^2, s^3)$. By work
of Thurston \cite{thurston}, as the length of the slopes increases,
the Dehn filled manifold approaches $M_n$ in the geometric topology.
Thus the length of the geodesic representative of the homotopy class
of the unknotting tunnel in $M_n(s^1, s^2, s^3)$ approaches the length
of the geodesic representative of the homotopy class of the core
tunnel in $C_n$ as the lengths of $s^1$, $s^2$, and $s^3$ increase in
$M_n$.
Hence for large enough $n$ and long enough slopes $s^1$, $s^2$, $s^3$,
the Dehn filled manifold $M_n(s^1, s^2, s^3)$ is a tunnel number one
manifold with unknotting tunnel homotopic to a geodesic of length at
least $R$.
\end{proof}
\subsection{Remarks}
While Theorem \ref{thm:long} gives us a manifold whose unknotting
tunnel has a long geodesic representative, the proof does not
guarantee that this tunnel is isotopic to a geodesic, even if we could
guarantee that the core tunnel is isotopic to a geodesic in the
approximating geometrically finite structures $C_n$. This isn't
important for the proof of Theorem \ref{thm:long}. However, in
\cite{lackenby-purcell-2}, we will explain how to modify the above
proof so that the unknotting tunnel is isotopic to a geodesic.
\section{Knots in homology 3-spheres}\label{sec:homology}
In this section, we refine the construction in Theorem \ref{thm:long}
in order to control the homology of the resulting manifolds.
\begin{theorem}
There exist hyperbolic knots in homology 3-spheres which have
tunnel number one, for which the geodesic representative
of the unknotting tunnel is arbitrarily long.
\label{thm:homology}
\end{theorem}
The manifolds in the proof of Theorem \ref{thm:long} were constructed
by starting with maximally cusped geometrically finite uniformizations
of the compression body $C$ and the handlebody $H$, gluing them via an
isometry, and then performing certain Dehn fillings. We will now vary
this construction a little. We will again use maximally cusped
geometrically finite uniformizations of the $(1,2)$-compression body
$C$ and the genus 2 handlebody $H$, but we will not glue them
directly. Instead, we will also find a maximally cusped geometrically
finite uniformization of $S \times I$, where $S$ is the closed
orientable surface with genus 2, and we will glue $C$ to $S \times \{
1 \}$ and glue $H$ to $S \times \{ 0 \}$. In both gluings, the
parabolic loci will be required to match together, although we will
leave these loci unglued. The result is therefore a tunnel number one
manifold, with 6 disjoint embedded simple closed curves removed. We
will then perform certain Dehn fillings along these 6 curves to give
the required tunnel number one manifolds. The choice of hyperbolic
structure on $H$ requires some care. In particular we will need the
following terminology and results.
Let ${\rm ML}(\partial H)$ (respectively, ${\rm PML}(\partial H)$) be
the space of measured laminations (respectively, projective measured
laminations) on $\partial H$. (See for example
\cite{fathi-laudenbach-poenaru}). Let $i( \cdot, \cdot)$ denote the
intersection number between measured laminations. A measured
lamination $\lambda$ is said to be {\sl doubly incompressible} if
there is an $\epsilon > 0$ such that $i(\lambda, \partial E) >
\epsilon$ for all essential annuli and discs $E$ properly embedded in
$H$. Similarly, a projective measured lamination is {\sl doubly
incompressible} if any of its inverse images in ${\rm ML}(\partial H)$
is doubly incompressible. It is a consequence of Thurston's
geometrization theorem \cite{morgan-smith} that if $P$ is a collection
of simple closed curves on $\partial H$ that are pairwise non-parallel
in $\partial H$, essential in $\partial H$ and doubly incompressible,
then there is a geometrically finite uniformization of $H$ with
parabolic locus $P$. The
set of doubly incompressible projective measured laminations forms a
non-empty open subset of ${\rm PML}(\partial H)$ (see \cite{Lecuire}).
\begin{lemma}
\label{homtrivpa}
There is a homeomorphism
$\psi \colon\thinspace \partial H \rightarrow \partial H$ satisfying
the following conditions:
\begin{enumerate}
\item $\psi$ is pseudo-Anosov;
\item its stable and unstable laminations are doubly incompressible;
\item the induced homomorphism $\psi_\ast \colon\thinspace H_1(\partial H) \rightarrow
H_1(\partial H)$ is the identity.
\end{enumerate}
\end{lemma}
\begin{proof}
Since the stable laminations of pseudo-Anosovs are dense in ${\rm
PML}(\partial H)$, and the set of doubly incompressible laminations is
open and non-empty, there is a pseudo-Anosov homeomorphism $g$ with
doubly incompressible stable lamination. Let $h$ be a pseudo-Anosov
on $\partial H$ that acts trivially on $H_1(\partial H)$ (see
\cite{thurston-surfaces}). Let $\lambda_+$ and $\lambda_-$ be its
stable and unstable projective measured laminations, which we may
assume are distinct from the unstable lamination of $g$. Then the
pseudo-Anosov $g^m h g^{-m}$ also acts trivially on $H_1(\partial H)$.
Its stable and unstable laminations are $g^m(\lambda_+)$ and
$g^m(\lambda_-)$, which are arbitrarily close to the stable lamination
of $g$ for large $m$. Hence, they too are doubly incompressible when
$m$ is large. Thus, we may set $\psi$ to be one such $g^m h g^{-m}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:homology}]
Let $\phi \colon\thinspace \partial_+ C \rightarrow \partial H$ be a
homeomorphism such that, when $C$ is glued to $H$ via $\phi$, the
result is the standard genus two Heegaard splitting of the solid
torus. Fix a maximally cusped geometrically finite uniformization of
$C$ from the proof of Theorem \ref{thm:long}, for which the core
tunnel has long geodesic representative. Let $P$ be its parabolic
locus. Then $\phi(P)$ is a collection of simple closed curves on $H$.
Let $\tau$ be a composition of Dehn twists, one twist around each
component of $\phi(P)$. Let $\psi$ be the pseudo-Anosov homeomorphism
provided by Lemma \ref{homtrivpa}. By replacing $\psi$ by a power of
$\psi$ if necessary, we may assume that for each core curve $\alpha$
of $P$, $i(\alpha, \psi(\alpha)) \not= 0$. The tunnel number one
manifold that we are aiming for is obtained by gluing $C$ to $H$ via
$\psi^{m} \tau^{-M} \psi^{-1} \tau^M \phi$ for large integers $m$ and
$M$. Since $\psi$ acts trivially on homology, this has the same
homology as if we had glued by $\phi$, which gives the solid torus.
Thus, this manifold is indeed the exterior of a knot in a homology
3-sphere.
We first choose the integer $m$. As $m$ tends to infinity, $\psi^m
\phi(P)$ tends to the stable lamination of $\psi$ in ${\rm
PML}(\partial H)$. Hence, we may choose such an $m$ so that $\psi^m
\phi(P)$ is doubly incompressible.
We start with three manifolds:
\begin{enumerate}
\item $C - P$;
\item $(S \times [0,1]) - ((\phi (P) \times \{ 1 \}) \cup (\psi\phi (P) \times \{ 0 \}))$;
\item $H - \psi^{m} \phi(P)$.
\end{enumerate}
Here, $S$ is the genus two surface, which we identify with $\partial H$.
The second of the above manifolds has a geometrically finite
uniformization, by Thurston's geometrization theorem. This is
because any essential annulus in $S \times [0,1]$ with boundary in
$(\phi( P) \times \{ 1 \}) \cup (\psi\phi (P) \times \{ 0 \})$ can be
homotoped, relative to its boundary, so that it lies entirely
in $(\phi (P) \times \{ 1 \}) \cup (\psi \phi(P) \times \{ 0 \})$.
Similarly, because $\psi^m \phi(P)$ is doubly incompressible,
$H - \psi^m \phi(P)$ admits a geometrically finite hyperbolic
structure. Glue $C - P$ to $(S - \phi(P)) \times \{1 \}$ via
$\phi$, and glue $(S - \psi\phi(P)) \times \{ 0 \}$ to
$H - \psi^m \phi(P)$ via $\psi^{m-1}$. Since these manifolds
have conformal boundary that consists of 3-punctured spheres,
this gluing can be performed isometrically.
As in the proof of Theorem \ref{thm:long}, we now perform certain Dehn
fillings on the toral cusps of this manifold, apart from the cusp
corresponding to $\partial_- C$. If the Dehn filling is done
correctly, this has the effect of modifying the gluing map by powers
of Dehn twists. We may apply any iterate of these Dehn twists, and so
we apply the $M$th iterate, where $M$ is some large positive integer,
along each of the curves $\phi(P) \times \{ 1 \}$ in $S \times \{1\}$
and the $(-M)$th power along each of the curves in $\psi\phi(P) \times
\{ 0 \}$ in $S \times \{0 \}$. Thus, the gluing map becomes $\psi^{m}
\tau^{-M} \psi^{-1} \tau^M \phi$. As $M$ tends to infinity, these
manifolds tend geometrically to the unfilled manifold. In particular,
for large $M$, the geodesic representative of its unknotting tunnel
will be long.
\end{proof}
\section{The {D}ehn filling construction}\label{sec:dehn}
In this section, we give the proof of Theorem \ref{thm:long} that uses
Dehn filling and homology.
Let $X$ be a compact $3$--manifold with four torus boundary components
and of Heegaard genus $2$. This means there is a closed genus $2$
surface $F$ in the interior of $X$ which separates $X$ into two
compression bodies, each homeomorphic to the manifold $V$ obtained by
adding one $2$--handle onto a copy of $F\times[0,1]$ along an
essential separating simple closed curve in $F\times \{1\}.$ We label
the torus boundary components of $X$ by $A_0$, $A_1$, $B_0$, $B_1$ so
that $A_0$ and $B_0$ are on the same side of $F$.
Let $\beta_0$, $\beta_1$, and $\alpha_1$ be essential simple closed
curves on $B_0$, $B_1$, and $A_1$, respectively. Let $M =
X(\alpha_1,\beta_0,\beta_1)$ be the manifold obtained by Dehn filling
using these slopes, so that $M$ has a single boundary component $A_0.$
Gluing a solid torus to each of the two boundary components of $V$
yields a genus $2$ handlebody. It follows that $M$ has tunnel number
one; indeed a tunnel is obtained using an arc with endpoints on $A_0$
that goes round the core of the solid torus used to fill along $B_0$.
\begin{lemma}
There exists $X$ as above such that the interior of $X$ admits a
complete hyperbolic structure of finite volume, such that $H_1(X)
\cong \Gamma_A \oplus \Gamma_B$ where $\Gamma_A \cong \Gamma_B \cong
{\mathbb{Z}}^2$, and under maps induced by inclusion, $H_1(A_i) = \Gamma_A$ and
$H_1(B_i) = \Gamma_B$ for $i=0,1.$
\label{lemma:hyp-link}
\end{lemma}
\begin{proof}
An example of $X$ is provided by the exterior of the $4$ component
link $L$ in $S^3$ shown in Figure \ref{fig:tunnellink}. The link $L =
a_0 \cup a_1 \cup b_0 \cup b_1$ consists of two linked copies of the
Whitehead link and is hyperbolic (by SnapPea \cite{snappea}).
Furthermore $Lk(a_0,a_1) = Lk(b_0,b_1) = 1$ and $Lk(a_i,b_j)=0$. The
diagram also shows disjoint arcs $\alpha$ connecting $a_0$ to $b_0$
and $\beta$ connecting $a_1$ to $b_1$. It is easy to slide these arcs
and links in such a way that the pair of graphs $a_0 \cup b_0 \cup
\alpha$ and $a_1 \cup b_1 \cup \beta$ are spines of the handlebodies
of the genus $2$ Heegaard splitting of $S^3$. It follows that $X =
S^3-\eta(L)$ has the required properties and $A_i =
\partial\eta(a_i)$ and $B_i = \partial\eta(b_i)$. Here $\eta(L)$
denotes an open tubular neighborhood of $L$.
\end{proof}
\begin{figure}
\input{figures/tunnellink-small.pstex_t}
\caption{A hyperbolic link satisfying the conditions of Lemma
\ref{lemma:hyp-link}.}
\label{fig:tunnellink}
\end{figure}
Suppose $X$ is a manifold as given by Lemma \ref{lemma:hyp-link}. Let $x$ be a
basepoint for $X$ on the boundary of a maximal horoball neighborhood
of the cusp corresponding to $A_0$. The idea for finding the Dehn
fillings to give $M$ is that given a base point $x$ in $X$ and $R>0$,
there are only finitely many homotopy classes of loops in $X$ based at
$x$ with length at most $3R$. These give finitely many classes in
$H_1(X)$ and hence, under projection, finitely many classes
$\gamma_1,\dots,\gamma_p \in \Gamma_B$. The Dehn fillings used to
obtain $M$ are chosen so that $H_1(M) \cong {\mathbb{Z}} \oplus {\mathbb{Z}}_n$ with the
image of $\Gamma_B$ being ${\mathbb{Z}}_n$ and that of $\Gamma_A$ being ${\mathbb{Z}}$.
The fillings are also chosen so that none of the images of $\gamma_i$
generate ${\mathbb{Z}}_n$, and so that the hyperbolic metric on $M$ is
geometrically close to that of $X$. In particular, we may assume that
there is a bilipschitz homeomorphism, with bilipschitz constant very
close to $1$, between the $R$ neighborhood of the basepoint of $x$ in
$X$ and a subset of $M$. Let $m$ be the image of $x$ in $M$, which we
may assume lies on the boundary of a maximal horoball neighborhood of
the cusp of $M$. Then every loop in $M$ based at $m$ of length at
most $2R$ corresponds to a loop in $X$ based at $x$ with length at
most $3R$, say.
\begin{lemma}
Suppose $\Gamma$ is a free abelian group of rank $2$ and $\gamma_1,
\dots, \gamma_p \in \Gamma$. Then there is an integer $n>0$ and an
epimorphism $\phi_n\colon\thinspace \Gamma \rightarrow {\mathbb{Z}}_n$ such that for all
$i$ the element $\phi_n(\gamma_i)$ does not generate ${\mathbb{Z}}_n$.
\label{lemma:not-generate}
\end{lemma}
\begin{proof}
Clearly we may assume that for all $i$, $\gamma_i \ne 0$. Then we may
identify $\Gamma$ with ${\mathbb{Z}}^2$ so that $\gamma_i = (a_i, b_i)$ and for
all $i$, $a_i \ne 0$. Set $m = \max_i |b_i|+2$ and define a
homomorphism $\phi\colon\thinspace {\mathbb{Z}}^2 \rightarrow {\mathbb{Z}}$ by $\phi((a,b)) = 2ma -
b$ which is surjective because $\phi((1,2m-1)) = 1$. Set $c_i =
|\phi(\gamma_i)|$. Then
$$c_i = |2ma_i-b_i| \ge 2m|a_i|-|b_i| \ge 2m-m = m \ge 2,$$
using that $|a_i|\ge 1$ and $|b_i|\le m$ and $m\ge 2$. Now define $n
= \prod_i c_i$ and define $\phi_n(\gamma) = \phi(\gamma)$ mod $n$.
Then $\phi_n(\gamma_i) = \pm c_i$ and $c_i \ne 1$ divides $n$ and
therefore does not generate ${\mathbb{Z}}_n.$
\end{proof}
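The construction in the proof is completely explicit.  The following sketch
(illustration only; the elements $\gamma_i$ are arbitrary sample inputs with
$a_i\neq 0$) carries it out and checks that none of the images generates
${\mathbb{Z}}_n$.
\begin{verbatim}
# Illustration only: the epimorphism phi_n of the lemma above, for sample
# gamma_i = (a_i, b_i) with a_i != 0.
from math import gcd
from functools import reduce

gammas = [(1, 3), (-2, 0), (4, -5)]       # sample elements of Z^2
m = max(abs(b) for _, b in gammas) + 2
phi = lambda a, b: 2 * m * a - b          # onto Z, since phi(1, 2m - 1) = 1
cs = [abs(phi(a, b)) for a, b in gammas]  # each c_i >= 2
n = reduce(lambda x, y: x * y, cs)        # n = product of the c_i

for (a, b), c in zip(gammas, cs):
    image = phi(a, b) % n
    # c_i divides n and c_i >= 2, so the image generates a proper subgroup
    assert gcd(image, n) > 1
    print((a, b), '->', image, 'mod', n)
\end{verbatim}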
For the Dehn fillings of $B_0$ and $B_1$, choose simple closed curves
$\beta_i\subset B_i$ which generate the kernel of
$$\phi_n\colon\thinspace \Gamma_B \rightarrow {\mathbb{Z}}_n,$$
where here we are using the identifications $H_1(B_0) \equiv \Gamma_B
\equiv H_1(B_1)$. There are arbitrarily large pairs of such basis
elements; thus we may choose them so that the result of hyperbolic
Dehn filling $B_0$ and $B_1$ using these gives a two cusped hyperbolic
manifold with metric on the thick part as close to that of $X$ as
desired.
Now perform a very large Dehn filling (thus not distorting the
geometry of the thick part appreciably) along $A_1$. We claim we
obtain $M$ with all the required properties.
For suppose that the geodesic representative for the unknotting tunnel
of $M$ had length at most $R$. Let $T$ be the torus that forms the
boundary of a maximal horoball neighborhood of the cusp of $M$. The
basepoint $m$ of $M$ lies on $T$. We may pick $R$ large enough so
that $\pi_1(T,m)$ is generated by two curves of length at most $R$.
In addition, the geodesic representative for the unknotting tunnel can
be closed up to form a loop based at $m$ with length at most $2R$.
These three loops generate $\pi_1(M,m)$. By construction, $H_1(M)
\cong {\mathbb Z} \oplus {\mathbb Z}_n$. The image of $H_1(T)$ in
$H_1(M)$ is the first summand, and the image of the third loop is a
proper subgroup of the second summand. Thus, these three loops cannot
generate $H_1(M)$, which is a contradiction. Hence, the geodesic
representative for the unknotting tunnel of $M$ has length more than
$R$. Since $R$ was arbitrarily large, this establishes Theorem
\ref{thm:long}.
\end{document}
\begin{document}
\title{{\bf On the evolution equation of compressible\\ vortex sheets}}
\author{
{\sc Alessandro Morando}\thanks{e-mail: alessandro.morando@unibs.it}\;,
{\sc Paolo Secchi}\thanks{e-mail: paolo.secchi@unibs.it}\;,
{\sc Paola Trebeschi}\thanks{e-mail: paola.trebeschi@unibs.it}\\
{\footnotesize DICATAM, Sezione di Matematica,
Universit\`a di Brescia, Via Valotti 9, 25133 Brescia, Italy}
}
\date{}
\maketitle
\begin{abstract}
We are concerned with supersonic vortex sheets for the Euler equations of compressible inviscid fluids in two space dimensions. For the problem with constant coefficients we derive an evolution equation for the discontinuity front of the vortex sheet. This is a pseudo-differential equation of order two. In agreement with the classical stability analysis, if the jump of the tangential component of the velocity satisfies $|[v\cdot\tau]|<2\sqrt{2}\,c$ (here $c$ denotes the sound speed) the symbol is elliptic and the problem is ill-posed. On the contrary, if $|[v\cdot\tau]|>2\sqrt{2}\,c$, then the problem is weakly stable, and we are able to derive a wave-type a priori energy estimate for the solution, with no loss of regularity with respect to the data.
Then we prove the well-posedness of the problem, by showing the existence of the solution in weighted Sobolev spaces.
\noindent{\bf Keywords:} Compressible Euler equations, vortex sheet, contact discontinuities,
weak stability, loss of derivatives, linear stability.
\noindent{\bf Mathematics Subject Classification:}
35Q35,
76N10,
76E17,
35L50
\end{abstract}
\tableofcontents
\section{Introduction}
\label{sect1}
We are concerned with the time evolution of vortex sheets for the Euler equations describing the motion of a compressible fluid.
Vortex sheets are interfaces between two incompressible or compressible flows across which there is a discontinuity in fluid velocity. Across a vortex sheet, the tangential velocity field has a jump, while the normal component of the flow velocity is continuous. The discontinuity in the tangential velocity field creates a concentration of vorticity along the interface. In particular, compressible vortex sheets are contact discontinuities to the Euler equations for compressible fluids and as such they are fundamental waves which play an important role in the study of general entropy solutions to multidimensional hyperbolic systems of conservation laws.
It was observed in \cite{M58MR0097930,FM63MR0154509}, by the normal mode analysis, that rectilinear vortex sheets for isentropic compressible fluids in two space dimensions are linearly stable when the Mach number $\mathsf{M}>\sqrt{2}$ and are violently unstable when $\mathsf{M}<\sqrt{2}$, while planar vortex sheets are always violently unstable in three space dimensions. This kind of instabilities is the analogue of the Kelvin--Helmholtz instability for incompressible fluids.
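Here and in what follows it is convenient to keep in mind how the two formulations of this threshold are related: if the Mach number of the rectilinear vortex sheet is measured by $\mathsf{M}=|[v\cdot\tau]|/(2c)$, where $[v\cdot\tau]$ is the jump of the tangential velocity and $c$ the sound speed, then the condition $\mathsf{M}>\sqrt{2}$ is equivalent to $|[v\cdot\tau]|>2\sqrt{2}\,c$, the form used below.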
\iffalse
This problem is a nonlinear hyperbolic problem with a characteristic free boundary. The so-called Lopatinski\u{\i} condition holds only in a weak sense, which yields a loss of derivatives.
\fi
\citet{AM87MR914450} studied certain instabilities of two-dimensional supersonic vortex sheets by analyzing the interaction with highly oscillatory waves through geometric optics. A rigorous mathematical theory on nonlinear stability and local-in-time existence of two-dimensional supersonic vortex sheets was first established by Coulombel--Secchi \cite{CS08MR2423311,CS09MR2505379} based on their linear stability results in \cite{CS04MR2095445} and a Nash--Moser iteration scheme.
Characteristic discontinuities, especially vortex sheets, arise in a broad range of physical problems in fluid mechanics, oceanography, aerodynamics, plasma physics, astrophysics, and elastodynamics. The linear results in \cite{CS04MR2095445} have been generalized to cover the two-dimensional nonisentropic flows \cite{MT08MR2441089}, the three-dimensional compressible steady flows \cite{WY13MR3065290,WYuan15MR3327369}, and the two-dimensional two-phase flows \cite{RWWZ16MR3474128}.
Recently, the methodology in \cite{CS04MR2095445} has been developed to deal with several constant coefficient linearized problems arising in two-dimensional compressible magnetohydrodynamics (MHD) and elastic flows; see \cite{WY13ARMAMR3035981,CDS16MR3527627,CHW17Adv}. For three-dimensional MHD, Chen--Wang \cite{CW08MR2372810} and \citet{T09MR2481071} proved the nonlinear stability of compressible current-vortex sheets, which indicates that non-paralleled magnetic fields stabilize the motion of three-dimensional compressible vortex sheets. Moreover, the modified Nash--Moser iteration scheme developed in \cite{H76MR0602181,CS08MR2423311} has been successfully applied to the compressible liquids in vacuum \cite{T09MR2560044}, the plasma-vacuum interface problem \cite{ST14MR3151094}, three-dimensional compressible steady flows \cite{WY15MR3328144}, and MHD contact discontinuities \cite{MTT16Preprint}.
The approach of \cite{CS04MR2095445, CS08MR2423311} has been recently extended to get the existence of solutions to the non linear problem of relativistic vortex sheets in three-dimensional Minkowski spacetime \cite{CSW1707.02672}, and the two-dimensional nonisentropic flows \cite{MTW17MR}.
The vortex sheet motion is a nonlinear hyperbolic problem with a characteristic free boundary.
The analysis of the linearized problem in \cite{CS04MR2095445} shows that the so-called Kreiss-Lopatinski\u{\i} condition holds in a weak sense, thus one can only obtain an \emph{a priori} energy estimate with a loss of derivatives with respect to the source terms. Because of this fact, the existence of the solution to the nonlinear problem is obtained in \cite{CS08MR2423311} by a Nash-Moser iteration scheme, with a loss of the regularity of the solution with respect to the initial data.
To the best of our knowledge the approach of \cite{CS04MR2095445,CS08MR2423311} is the only one known up to now, and it would be interesting to have different methods of proof capable of giving the existence and possibly other properties of the solution.
In particular, the location of the discontinuity front of the vortex sheet is obtained through the jump conditions at the front, see \eqref{RH}, and is implicitly determined by the fluid motion in the interior regions, i.e. far from the front.
By contrast, it would be interesting to find an \lq\lq{explicit}\rq\rq\ evolution equation for the vortex sheet, i.e. for the discontinuity front, that might also be useful for numerical simulations. In this regard we recall that in the case of irrotational, incompressible vortex sheets, the location of the discontinuity front is described by the Birkhoff--Rott equation, see \cite{MR1688875,MB02MR1867882,MP94MR1245492}, whose solution is sufficient to give a complete description of the fluid motion through the Biot--Savart law.
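For the reader's convenience we recall one standard form of the Birkhoff--Rott equation, in the parametrization by the circulation variable $\alpha$ (see e.g. \cite{MB02MR1867882}): if the sheet is described by the complex position $z(\alpha,t)$, then
\[
\partial_t\overline{z}(\alpha,t)=\frac{1}{2\pi i}\,\mathrm{p.v.}\!\int_{{\mathbb R}}\frac{d\alpha'}{z(\alpha,t)-z(\alpha',t)}\,,
\]
so that the knowledge of the front alone determines the whole (irrotational, incompressible) flow.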
The evolution equation of the discontinuity front of current-vortex sheets plays an important role in the paper \cite{SWZ}.
In this paper we are concerned with supersonic vortex sheets for the Euler equations of compressible inviscid fluids in two space dimensions. For the problem with constant coefficients we are able to derive an evolution equation for the discontinuity front of the vortex sheet. This is a pseudo-differential equation of order two. In agreement with the classical stability analysis \cite{FM63MR0154509,M58MR0097930}, if the jump of the tangential component of the velocity satisfies $|[v\cdot\tau]|<2\sqrt{2}\,c$ (here $c$ denotes the sound speed) the symbol is elliptic and the problem is ill-posed. On the contrary, if $|[v\cdot\tau]|>2\sqrt{2}\,c$, then the problem is weakly stable, and we are able to derive a wave-type a priori energy estimate for the solution, with no loss of regularity with respect to the data.
By a duality argument we then prove the well-posedness of the problem, by showing the existence of the solution in weighted Sobolev spaces.
The fact that the evolution equation for the discontinuity front is well-posed, with no loss of regularity from the data to the solution, is somehow in agreement with the result of the linear analysis in \cite{CS04MR2095445} (see Theorem 3.1 and Theorem 5.2), where the solution has a loss of derivatives in the interior domains while the function describing the front conserves the regularity of the boundary data.
In a forthcoming paper we will consider the problem with variable coefficients, which requires a completely different approach.
\subsection{The Eulerian description}
We consider the isentropic Euler equations in the whole plane ${\mathbb R}^2$. Denoting by ${\bf v}=(v_1,v_2) \in {\mathbb R}^2$ the
velocity of the fluid, and by $\rho$ its density, the equations read:
\begin{equation}
\label{euler}
\begin{cases}
\partial_t \rho +\nabla \cdot (\rho \, {\bf v}) =0 \, ,\\
\partial_t (\rho \, {\bf v}) +\nabla \cdot
(\rho \, {\bf v} \otimes {\bf v}) +\nabla \, p =0 \, ,
\end{cases}
\end{equation}
where $p=p(\rho)$ is the pressure law. Throughout the paper $p$ is a $C^\infty$ function of $\rho$,
defined on $]0,+\infty[$, and such that $p'(\rho)>0$ for all $\rho$. The speed of sound $c(\rho)$ in the
fluid is defined by the relation:
\begin{equation*}
\forall \, \rho>0 \, ,\quad c(\rho) :=\sqrt{p'(\rho)} \, .
\end{equation*}
It is a well-known fact that, for such a pressure law, \eqref{euler} is a strictly hyperbolic system in
the region $(t,x)\in\, ]0,+\infty[ \, \times {\mathbb R}^2$, and \eqref{euler} is also symmetrizable.
We are interested in solutions of \eqref{euler} that are smooth on either side of a smooth hypersurface $\Gamma(t):=\{x=(x_1,x_2)\in {\mathbb R}^2 : F(t,x)=0\}=\{x_2=f(t,x_1)\}$ for each $t$ and that satisfy
suitable jump conditions at each point of the front $\Gamma (t)$.
Let us denote $\Omega^\pm(t):=\{(x_1,x_2)\in {\mathbb R}^2 :x_2\gtrless f(t,x_1)\}$. Given any
function $g$ we denote $g^\pm=g$ in $\Omega^\pm(t)$ and $[g]=g^+_{|\Gamma}-g^-_{|\Gamma}$ the jump across
$\Gamma (t)$.
We look for smooth solutions $({\bf v}^\pm,\rho^\pm)$ of \eqref{euler} in $\Omega^\pm(t)$ such that, at each time $t$,
the tangential velocity is the only quantity that experiences a jump across the curve $\Gamma (t)$ (tangential
being understood with respect to $\Gamma (t)$). The pressure and the normal velocity are
continuous across $\Gamma (t)$. For such solutions, the jump conditions across $\Gamma(t)$ read:
\begin{equation*}
\sigma ={\bf v}^\pm\cdot n \, ,\quad [p]=0 \quad {\rm on } \;\Gamma (t) \, .
\end{equation*}
Here $n=n(t)$ denotes the outward unit normal on $\partial\Omega^-(t)$ and $\sigma$ denotes the velocity of
propagation of the interface $\Gamma (t)$. With our parametrization of $\Gamma (t)$, an equivalent formulation
of these jump conditions is
\begin{equation}
\label{RH}
\partial_t f ={\bf v}^+\cdot N ={\bf v}^-\cdot N \, ,\quad p^+ =p^- \quad {\rm on }\;\Gamma (t) \, ,
\end{equation}
where
\begin{equation}\label{defN}
N=(-\partial_1 f, 1)
\end{equation}
and $p^\pm=p(\rho^\pm)$. Notice that the function $f$ describing the discontinuity front is part of
the unknown of the problem, i.e. this is a free boundary problem.
For smooth solutions system \eqref{euler} can be written in the equivalent form
\begin{equation}
\label{euler1}
\begin{cases}
\partial_t \rho +({\bf v}\cdot\nabla) \rho +\rho \, \nabla\cdot{\bf v} =0 \, ,\\
\rho \,(\partial_t {\bf v} +({\bf v}\cdot\nabla)
{\bf v} ) +\nabla \, p =0 \, .
\end{cases}
\end{equation}
Because $ p'(\rho)>0 $, the function $p= p(\rho ) $ can be inverted and we can write $ \rho=\rho(p) $. Given a positive constant $ \bar{\rho}>0 $, we introduce the quantity $ P(p)=\log(\rho(p)/\bar{\rho}) $ and consider $ P $ as a new unknown. In terms of $ (P,\bf v) $, the system \eqref{euler1} equivalently reads
\begin{equation}
\label{euler2}
\begin{cases}
\partial_t P +{\bf v}\cdot\nabla P + \nabla\cdot{\bf v} =0 \, ,\\
\partial_t {\bf v} +({\bf v}\cdot\nabla)
{\bf v} +c^2\,\nabla \, P =0 \, ,
\end{cases}
\end{equation}
where now the speed of sound is considered as a function of $ P,$ that is $ c=c(P) $. Thus our problem reads
\begin{equation}
\label{euler3}
\begin{cases}
\partial_t P^\pm +{\bf v}^\pm\cdot\nabla P^\pm + \nabla\cdot{\bf v}^\pm =0 \, ,\\
\partial_t {\bf v}^\pm +({\bf v}^\pm\cdot\nabla)
{\bf v}^\pm +c^2_\pm\,\nabla \, P^\pm =0 \, , \qquad {\rm in }\; \Omega^\pm(t),
\end{cases}
\end{equation}
where we have set $ c_\pm=c(P^\pm) $.
The jump conditions \eqref{RH}
take the new form
\begin{equation}
\label{RH2}
\partial_t f ={\bf v}^+\cdot N ={\bf v}^-\cdot N \, ,\quad P^+ =P^- \quad {\rm on }\;\Gamma (t) \, .
\end{equation}
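Let us also record, for the reader's convenience, the elementary computation behind the equivalence of \eqref{euler1} and \eqref{euler2}: since $P=\log(\rho/\bar{\rho})$ and $c^2=p'(\rho)$, we have
\[
\nabla P=\frac{\nabla\rho}{\rho}\,,\qquad \nabla p=p'(\rho)\,\nabla\rho=c^2\rho\,\nabla P
\]
(and similarly for the time derivatives), so that dividing the two equations of \eqref{euler1} by $\rho$ gives \eqref{euler2}.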
\section{Preliminary results}
Given functions $ {\bf v}^\pm, P^\pm$, we set
\begin{equation}
\begin{array}{ll}\label{defZ}
Z^\pm:=\partial_t {\bf v}^\pm
+( {\bf v}^\pm \cdot \nabla) {\bf v}^\pm .
\end{array}
\end{equation}
Next, we study the behavior of $Z^\pm$ at $\Gamma(t)$. As in \cite{SWZ} we define
\begin{equation}
\begin{array}{ll}\label{deftheta}
\theta(t,x_1):= {\bf v}^\pm(t,x_1,f(t,x_1))\cdot N(t,x_1),
\end{array}
\end{equation}
for $N$ given in \eqref{defN}.
\begin{lemma}\label{lemmaN}
Let $ f, {\bf v}^\pm, \theta$ be such that
\begin{equation}
\begin{array}{ll}\label{dtfthetavN}
\partial_t f=\theta= {\bf v}^\pm\cdot N \, \qquad {\rm on }\; \Gamma(t),
\end{array}
\end{equation}
and let $Z^\pm$ be defined by \eqref{defZ}.
Then
\begin{equation}
\begin{array}{ll}\label{applN}
Z^+ \cdot N
 = \partial_t\theta + 2 v_1^+\partial_1\theta + (v_1^+)^2 \,\partial^2_{11} f \,,\\
Z^- \cdot N
 = \partial_t\theta + 2 v_1^-\partial_1\theta + (v_1^-)^2 \,\partial^2_{11} f
\quad {\rm on }\; \Gamma(t) .
\end{array}
\end{equation}
\end{lemma}
\begin{proof}
Dropping for convenience the $\pm$ superscripts, we compute
\begin{equation*}
\begin{array}{ll}\label{}
\partial_t \theta=(\partial_t{\bf v}+\partial_2{\bf v}\,\partial_t f)\cdot N + {\bf v}\cdot \partial_t N=(\partial_t v_2+\partial_2 v_2\,\partial_t f) -(\partial_t v_1+\partial_2 v_1\,\partial_t f)\partial_1 f - v_1 \partial_t \partial_1 f \,,
\end{array}
\end{equation*}
and similarly
\begin{equation*}
\begin{array}{ll}\label{}
\partial_1 \theta=(\partial_1 v_2+\partial_2 v_2\,\partial_1 f) -(\partial_1 v_1+\partial_2 v_1\,\partial_1 f)\partial_1 f - v_1 \,\partial^2_{11} f \, .
\end{array}
\end{equation*}
Substituting \eqref{dtfthetavN} in the first of the two equations it follows that
\begin{equation*}
\begin{array}{ll}\label{}
\partial_t v_2-\partial_t v_1\,\partial_1 f=\partial_t \theta+v_1\partial_1 \theta - \partial_t f\,(\partial_2 v_2 -\partial_2 v_1\,\partial_1 f ) \,,
\end{array}
\end{equation*}
and from the second equation, after multiplication by $v_1$, it follows that
\begin{equation*}
\begin{array}{ll}\label{}
v_1\partial_1 v_2- v_1\partial_1 v_1\,\partial_1 f=v_1\partial_1 \theta+v_1^2\,\partial^2_{11} f - v_1\partial_1 f\,(\partial_2 v_2 -\partial_2 v_1\,\partial_1 f ) \,.
\end{array}
\end{equation*}
We substitute the last two equations in
\begin{equation*}
\begin{array}{ll}\label{}
Z\cdot N=(\partial_t v_2
+ {\bf v} \cdot \nabla v_2)-(\partial_t v_1
+ {\bf v} \cdot \nabla v_1)\partial_1 f \,,
\end{array}
\end{equation*}
rearrange the terms, use again \eqref{dtfthetavN}, and finally obtain
\[
Z \cdot N
= \partial_t\theta + 2 v_1\partial_1\theta + v_1^2 \,\partial^2_{11} f \,,
\]
that is \eqref{applN}.
\end{proof}
\subsection{A first equation for the front}
We take the scalar product of the equation for ${\bf v}^\pm$ in \eqref{euler3}, evaluated at $\Gamma(t)$, with the vector $N$. We get
\begin{equation*}
\big\{ Z^\pm + c^2_\pm \nabla P^\pm\big\} \cdot N =0\, \quad {\rm on} \; \Gamma(t) \, ,
\end{equation*}
and applying Lemma \ref{lemmaN} we obtain
\begin{equation}
\begin{array}{ll}\label{puntoN}
\partial_t\theta + 2 v_1^\pm\partial_1\theta + (v_1^\pm)^2 \,\partial^2_{11} f + c^2_\pm \nabla P^\pm \cdot N =0 \,\quad {\rm on} \; \Gamma(t) \, .
\end{array}
\end{equation}
Now we apply an idea from \cite{SWZ}. We take the {\it sum} of the ``$+$'' and ``$-$'' equations in \eqref{puntoN} to obtain
\begin{multline}\label{puntoN2}
2\partial_t\theta + 2 (v_1^++v_1^-)\partial_1\theta + ( (v_1^+)^2+(v_1^-)^2) \,\partial^2_{11} f
+ c^2 \nabla (P^+ + P^-) \cdot N =0 \,\quad {\rm on} \; \Gamma(t) \, ,
\end{multline}
where we have denoted the common value at the boundary $c=c_{\pm|\Gamma(t)}=c(P^\pm_{|\Gamma(t)})$.
Next, following again \cite{SWZ}, we introduce the quantities
\begin{equation}
\label{defwV}
{\bf w}=(w_1,w_2):=({\bf v}^++{\bf v}^-)/2, \qquad {\mathcal V}=(V_1,V_2):=({\bf v}^+-{\bf v}^-)/2.
\end{equation}
Substituting \eqref{defwV} in \eqref{puntoN2} gives
\begin{equation}\label{puntoN3}
\partial_t\theta + 2 w_1\partial_1\theta + (w_1^2 + V_1^2 )\,\partial^2_{11} f
+\frac{c^2}2 \nabla (P^+ + P^-) \cdot N =0 \,\qquad {\rm on} \; \Gamma(t) \, .
\end{equation}
Finally we substitute the boundary condition $ \theta=\partial_t f $ in \eqref{puntoN3} and we obtain
\begin{equation}\label{puntoN4}
\partial^2_{tt} f + 2 w_1\partial_1\partial_t f + (w_1^2 + V_1^2 )\,\partial^2_{11} f
+\frac{c^2}2 \nabla (P^+ + P^-) \cdot N =0 \,\qquad {\rm on} \; \Gamma(t) \, .
\end{equation}
Equation \eqref{puntoN4} is a second order equation for the front $f$. However, it is nonlinearly coupled at the highest order with the other unknowns $ ({\bf v}^\pm,P^\pm) $ of the problem through the last term on the left-hand side of \eqref{puntoN4}. In order to find an evolution equation for $f$, it is important to isolate the dependence of $f$ on $P^\pm$ at the highest order, i.e. up to lower order terms in $ ({\bf v}^\pm,P^\pm) $.
Notice that \eqref{puntoN4} can also be written in the form
\begin{equation}\label{puntoN5}
(\partial_t + w_1\partial_1)^2 f + V_1^2 \,\partial^2_{11} f
+\frac{c^2}2 \nabla (P^+ + P^-) \cdot N -(\partial_t w_1+w_1\partial_1 w_1)\partial_1 f=0 \,\qquad {\rm on} \; \Gamma(t) \, .
\end{equation}
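Indeed, for smooth $w_1$ one has the identity
\[
(\partial_t + w_1\partial_1)^2 f = \partial^2_{tt} f + 2 w_1\partial_1\partial_t f + w_1^2\,\partial^2_{11} f + (\partial_t w_1 + w_1\partial_1 w_1)\,\partial_1 f\,,
\]
so that the first order term produced by the expansion of the square is exactly compensated by the last term in \eqref{puntoN5}, and \eqref{puntoN5} coincides with \eqref{puntoN4}.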
\subsection{The wave problem for the pressure}
Applying the operator $ \partial_t+{\bf v}\cdot\nabla $ to the first equation of \eqref{euler2} and $ \nabla\cdot $ to the second one gives
\begin{equation*}\label{}
\begin{cases}
(\partial_t +{\bf v}\cdot\nabla)^2 P + (\partial_t +{\bf v}\cdot\nabla)\nabla\cdot{\bf v} =0 \, ,\\
\nabla\cdot(\partial_t +{\bf v}\cdot\nabla)
{\bf v} +\nabla\cdot(c^2\,\nabla \, P) =0 \, .
\end{cases}
\end{equation*}
The difference of the two equations gives the wave-type equation\footnote[1]{Here we adopt the Einstein convention over repeated indices.}
\begin{equation}\label{wave0}
(\partial_t +{\bf v}\cdot\nabla)^2 P - \nabla\cdot(c^2\,\nabla \, P) = -[\partial_t +{\bf v}\cdot\nabla, \nabla\cdot\,]{\bf v}=\partial_i v_j\,\partial_j v_i.
\end{equation}
We repeat the same calculation for both $ ({\bf v}^\pm,P^\pm) $.
As for the behavior at the boundary, we already know that
\begin{equation}\label{bc1}
[P]=0 \, , \qquad
{\rm on }\; \Gamma(t) \, .
\end{equation}
As a second boundary condition it is natural to add a condition involving the normal derivatives of $ P^\pm. $ We proceed as follows: instead of the {\it sum} of the equations \eqref{puntoN} as for \eqref{puntoN2}, we take the {\it difference} of the ``$+$'' and ``$-$'' equations in \eqref{puntoN} to obtain the jump of the normal derivatives $ \nabla P^\pm \cdot N $,
\begin{equation}
\label{jumpQ}
[c^2 \nabla P \cdot N] =-[2 v_1\partial_1\theta + v_1^2 \,\partial^2_{11} f] \qquad {\rm on} \; \Gamma(t) \, .
\end{equation}
Recalling that $ \theta=\partial_t f $, we compute
\begin{equation}\label{jumpQ1}
[2 v_1\partial_1\theta + v_1^2 \,\partial^2_{11} f] = 4 V_1(\partial_t+w_1\partial_1)\partial_1 f .
\end{equation}
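Indeed, since $\theta$ and $f$ do not jump across $\Gamma(t)$, writing $v_1^\pm=w_1\pm V_1$ gives $[2v_1\partial_1\theta]=4V_1\partial_1\theta$ and $[v_1^2]=4w_1V_1$, whence
\[
[2 v_1\partial_1\theta + v_1^2 \,\partial^2_{11} f]=4V_1\big(\partial_1\partial_t f + w_1\,\partial^2_{11} f\big)= 4 V_1(\partial_t+w_1\partial_1)\partial_1 f\,.
\]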
Thus, from \eqref{jumpQ}, \eqref{jumpQ1} we get
\begin{equation}\label{bc2}
[c^2 \nabla P \cdot N] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f \qquad {\rm on} \; \Gamma(t) \, .
\end{equation}
Collecting \eqref{wave0} for $P^\pm$, \eqref{bc1}, \eqref{bc2} gives the coupled problem for the pressure
\begin{equation}\label{wave}
\begin{cases}
(\partial_t +{\bf v }^\pm\cdot\nabla)^2 P^\pm - \nabla\cdot(c^2_\pm\,\nabla \, P^\pm) =\mathcal F^\pm &
{\rm in }\; \Omega^\pm(t) \, ,\\
[P]=0 \, ,\\
[c^2 \nabla P \cdot N] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f & {\rm on} \; \Gamma(t) \, ,
\end{cases}
\end{equation}
where
\begin{equation*}\label{key}
\mathcal F^\pm:=\partial_i v_j^\pm \,\partial_j v_i^\pm.
\end{equation*}
Notice that $ \mathcal F^\pm $ can be considered a lower order term in the second order differential equation for $ P^\pm $, unlike the right-hand side of the boundary condition for the jump of the normal derivatives, which is of order two in $ f$.
\section{The coupled problem \eqref{puntoN5}, \eqref{wave} with constant coefficients. The main result}
We consider a problem obtained by linearization of equation \eqref{puntoN5} and system \eqref{wave} about the constant velocity ${\bf v }^\pm=(v_1^\pm,0)$, constant pressure $P^+=P^-$, and flat front $\Gamma=\{x_2=0\}$, so that $N=(0,1)$, that is we study the equations
\begin{equation}\label{puntoN6}
(\partial_t + w_1\partial_1)^2 f + V_1^2 \,\partial^2_{11} f
+\frac{c^2}2 \,\partial_2 (P^+ + P^-) =0 \,\qquad {\rm if} \; x_2=0 \, ,
\end{equation}
\begin{equation}\label{wave2}
\begin{cases}
(\partial_t +v_1^\pm\partial_1)^2 P^\pm - c^2\Delta \, P^\pm =\mathcal F^\pm \quad &
{\rm if }\; x_2\gtrless0 \, ,\\
[P]=0 \, ,\\
[c^2\,\partial_2 P ] =-4 V_1(\partial_t+w_1\partial_1)\partial_1 f & {\rm if} \; x_2=0 \, .
\end{cases}
\end{equation}
In \eqref{puntoN6}, \eqref{wave2}, $v^\pm_1, c$ are constants with $c>0$, $w_1=(v_1^++v_1^-)/2, V_1=(v_1^+-v_1^-)/2$, and $\mathcal F^\pm$ is a given source term. Equations \eqref{puntoN6}, \eqref{wave2} form a coupled system for $f$ and $P^\pm$, obtained by retaining the highest order terms of \eqref{puntoN5} and \eqref{wave}. We are interested in deriving from \eqref{puntoN6}, \eqref{wave2} an evolution equation for the front $f$.
For $\gamma\ge1$, we introduce $ \widetilde{f}:=e^{-\gamma t}f,\widetilde{P}^\pm:=e^{-\gamma t}P^\pm, \widetilde{\mathcal F}^\pm:=e^{-\gamma t}\mathcal F^\pm $ and consider the equations
\begin{equation}\label{puntoN7}
(\gamma+\partial_t + w_1\partial_1)^2 \widetilde{f} + V_1^2 \,\partial^2_{11} \widetilde{f}
+\frac{c^2}2 \,\partial_2 (\widetilde{P}^+ + \widetilde{P}^-) =0 \,\qquad {\rm if} \; x_2=0 \, ,
\end{equation}
\begin{equation}\label{wave3}
\begin{cases}
(\gamma+\partial_t +v_1^\pm\partial_1)^2 \widetilde{P}^\pm - c^2\Delta \, \widetilde{P}^\pm =\widetilde{\mathcal F}^\pm \quad &
{\rm if }\; x_2\gtrless0 \, ,\\
[\widetilde{P}]=0 \, ,\\
[c^2\,\partial_2 \widetilde{P} ] =-4 V_1(\gamma+\partial_t+w_1\partial_1)\partial_1 \widetilde{f} & {\rm if} \; x_2=0 \, .
\end{cases}
\end{equation}
System \eqref{puntoN7}, \eqref{wave3} is equivalent to \eqref{puntoN6}, \eqref{wave2}. Let us denote by $ \widehat{f},\widehat{P}^\pm,\widehat{\mathcal F}^\pm $ the Fourier transforms of $ \widetilde{f},\widetilde{P}^\pm, \widetilde{\mathcal F}^\pm $ in $(t,x_1)$, with dual variables denoted by $(\delta,\eta)$, and set $\tau=\gamma+i\delta$. We have the following result:
\begin{theorem}\label{teo_equ}
Let $\widetilde{\mathcal F}^\pm$ be such that
\begin{equation}\label{cond_infF}
\lim\limits_{x_2\to+\infty}\widehat{\mathcal F}^\pm(\cdot,\pm x_2)= 0 \, .
\end{equation}
Assume that $\widetilde{f},\widetilde{P}^\pm$ is a solution of \eqref{puntoN7}, \eqref{wave3} with
\begin{equation}\label{cond_inf}
\lim\limits_{x_2\to+\infty}\widehat{P}^\pm(\cdot,\pm x_2)= 0 \, .
\end{equation}
Then $f$ solves the second order pseudo-differential equation
\begin{equation}\label{equ_f}
\left( (\tau + iw_1\eta)^2 + V_1^2 \eta^2
\left( \frac{8(\tau+iw_1\eta)^2}{c^2(\mu^++\mu^-)^2} -1 \right)\right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}\,M =0 \, ,
\end{equation}
where $\mu^\pm=\sqrt{\left(\frac{\tau+iv_1^\pm\eta}{c}\right)^2+\eta^2}$ is
such that
$
{\mathbb R}e\mu^\pm>0$ if ${\mathbb R}e\tau>0$,
and
\begin{equation}\label{def_M}
M=M(\tau,\eta):= \frac{1}{\mu^+}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy -
\frac{1}{\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, .
\end{equation}
\end{theorem}
From the definition we see that the roots $ \mu^\pm $ are homogeneous functions of degree 1 in $(\tau, \eta)$. Therefore, the ratio $ (\tau+iw_1\eta)^2/(\mu^++\mu^-)^2 $ is homogeneous of degree 0. It follows that the symbol of \eqref{equ_f} is a homogeneous function of degree 2, see Remark \ref{remark52}. In this sense \eqref{equ_f} represents a second order pseudo-differential equation for $f$.
The main result of the paper is the following theorem.
\begin{theorem}\label{teoexist}
Assume $v_1^+=v>0$, $v_1^-=-v$, so that $w_1=0$ and $V_1=v$, and assume $\frac{v}{c}>\sqrt{2}$; let $ \mathcal F^+\in L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2)) , \mathcal F^-\in L^2({\mathbb R}^-;H^s_\gamma({\mathbb R}^2))$. There exists a unique solution $f\in H^{s+1}_\gamma({\mathbb R}^2)$ of equation \eqref{equ_f} (with $w_1=0$), satisfying the estimate
\begin{equation}\label{stimafF1}
\gamma^3 \|f\|^2_{H^{s+1}_\gamma({\mathbb R}^2)} \le C\left( \|\mathcal F^+\|^2_{L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2))}+\|\mathcal F^-\|^2_{L^2({\mathbb R}^-;H^s_\gamma({\mathbb R}^2))}\right) , \qquad\forall \gamma\ge1\, ,
\end{equation}
for a suitable constant $C>0$ independent of $\mathcal F^\partialm$ and $\gamma$.
\end{theorem}
See Remark \ref{ell_hyp} for a discussion of the different cases $ \frac{v}{c}\gtrless\sqrt{2} $ in relation to the classical stability analysis \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}.
\subsection{Weighted Sobolev spaces and norms}\label{sec2.w}
We are going to introduce certain weighted Sobolev spaces in order to prove Theorem \ref{teoexist}.
Functions are defined over the two half-spaces $\{(t,x_1,x_2)\in\mathbb{R}^3:x_2\gtrless0\}$;
the boundary of the half-spaces is identified with $\mathbb{R}^2$.
For all $s\in\mathbb{R}$ and for all $\gamma\geq 1$,
the usual Sobolev space $H^s(\mathbb{R}^2)$ is equipped with the following norm:
\begin{align*}
\|v\|_{s,\gamma}^2:=\frac{1}{(2\pi)^2} \iint_{\mathbb{R}^2}\Lambda^{2s}(\tau,\eta) |\widehat{v}(\delta,\eta)|^2\,\mathrm{d}\delta \,\mathrm{d}\eta,\qquad
\Lambda^{s}(\tau,\eta):=(\gamma^2+\delta^2+\eta^2)^{\frac{s}{2}}=(|\tau|^2+\eta^2)^{\frac{s}{2}},
\end{align*}
where $\widehat{v}(\delta,\eta)$ is the Fourier transform of $v(t,x_1)$ and $ \tau=\gamma+i\delta $.
We will abbreviate the usual norm of $L^2(\mathbb{R}^2)$ as
\begin{align*}
\|\cdot\|:=\|\cdot\|_{0,\gamma}\, .
\end{align*}
The scalar product in $L^2(\mathbb{R}^2)$ is denoted as follows:
\begin{align*}
\langle a,b\rangle:=\iint_{\mathbb{R}^2} a(x)\overline{b(x)}\,\mathrm{d} x,
\end{align*}
where $\overline{b(x)}$ is the complex conjugate of $b(x)$.
For $s\in\mathbb{R}$ and $\gamma\geq 1$, we introduce the weighted Sobolev
space $H^{s}_{\gamma}(\mathbb{R}^2)$ as
\begin{align*}
H^{s}_{\gamma}(\mathbb{R}^2)&:=\left\{
u\in\mathcal{D}'(\mathbb{R}^2)\,:\, \mathrm{e}^{-\gamma t}u(t,x_1)\in
H^{s}(\mathbb{R}^2) \right\},
\end{align*}
and its norm $\|u\|_{H^{s}_{\gamma}(\mathbb{R}^2)}:=\|\mathrm{e}^{-\gamma t}u\|_{s,\gamma}$.
We write $L^2_{\gamma}(\mathbb{R}^2):=H^0_{\gamma}(\mathbb{R}^2)$ and $\|u\|_{L^2_{\gamma}(\mathbb{R}^2)}:=\|\mathrm{e}^{-\gamma t}u\|$.
We define $L^2(\mathbb{R}^\pm;H^{s}_{\gamma}(\mathbb{R}^2))$
as the spaces of distributions with finite norm
\begin{align*}
\|u\|_{L^2(\mathbb{R}^\pm;H^s_{\gamma}(\mathbb{R}^2))}^2:=\int_{\mathbb{R}^+}\|u(\cdot,\pm x_2)\|_{H^s_{\gamma}(\mathbb{R}^2)}^2\,\mathrm{d} x_2
\, .
\end{align*}
\section{Proof of Theorem \ref{teo_equ}}
In order to obtain an evolution equation for $f$, we will find an explicit formula for the solution $\widetilde{P}^\pm$ of \eqref{wave3}, and substitute it into \eqref{puntoN7}.
We first perform the Fourier transform of problem \eqref{puntoN7}, \eqref{wave3} and obtain
\begin{equation}\label{puntoN8}
(\tau + iw_1\eta)^2 \widehat{f} - V_1^2 \eta^2\widehat{f}
+\frac{c^2}2 \,\partial_2 (\widehat{P}^+ + \widehat{P}^-) =0 \,\qquad {\rm if} \; x_2=0 \, ,
\end{equation}
\begin{equation}\label{wave4}
\begin{cases}
(\tau +iv_1^\pm\eta)^2 \widehat{P}^\pm + c^2\eta^2 \widehat{P}^\pm -c^2\partial^2_{22} \widehat{P}^\pm =\widehat{\mathcal F}^\pm \quad &
{\rm if }\; x_2\gtrless0 \, ,\\
[\widehat{P}]=0 \, ,\\
[c^2\,\partial_2 \widehat{P} ] =-4i\eta V_1(\tau+iw_1\eta) \widehat{f} & {\rm if} \; x_2=0 \, .
\end{cases}
\end{equation}
To solve \eqref{wave4} we take the Laplace transform in $x_2$ with dual variable $s\in{\mathbb C} $, defined by
\begin{equation*}
\mathcal{L}[\widehat{P}^\pm](s)=\int_0^\infty e^{-sx_2}\widehat{P}^\pm(\cdot,\pm x_2)\, dx_2\,,
\end{equation*}
\begin{equation*}
\mathcal{L}[\widehat{\mathcal F }^\pm](s)=\int_0^\infty e^{-sx_2}\widehat{\mathcal F }^\pm(\cdot,\pm x_2)\, dx_2\,.
\end{equation*}
For simplicity of notation, here we omit the dependence on $\tau,\eta$. From \eqref{wave4} we obtain
\begin{equation*}
\left((\tau+iv^\pm_1\eta)^2+c^2\eta^2-c^2s^2\right)\mathcal{L}[\widehat{P}^\pm](s)= \mathcal{L}[\widehat{\mathcal F }^\pm](s) - c^2s\widehat{P}^\pm(0) \mp c^2 \,\partial_2\widehat{P}^\pm(0)\, .
\end{equation*}
It follows that
\begin{equation}\label{laplace}
\mathcal{L}[\widehat{P}^\pm](s)=
\frac{c^2s\widehat{P}^\pm(0) \pm c^2 \,\partial_2\widehat{P}^\pm(0)}{c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2}
- \frac{\mathcal{L}[\widehat{\mathcal F }^\pm](s)}{c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2}\, .
\end{equation}
Let us denote by $\mu^\pm=\sqrt{\left(\frac{\tau+iv_1^\pm\eta}{c}\right)^2+\eta^2}$ the root of the equation (in $s$)
\[c^2s^2-(\tau+iv^\pm_1\eta)^2-c^2\eta^2=0\,,\]
such that
\begin{equation}\label{mu}
{\mathbb R}e\mu^\pm>0\quad{\rm if }\quad \gamma>0 \,
\end{equation}
(${\mathbb R}e$ denotes the real part). We show this property in Lemma \ref{lemma_mu}.
Recalling that $\mathcal{L}[e^{\alpha x}H(x)](s)=\frac1{s-\alpha}$ for any $\alpha\in{\mathbb C}$, where $H(x)$ denotes the Heaviside function, we take the inverse Laplace transform of \eqref{laplace} and obtain
\begin{multline}\label{formulaQ+}
\widehat{P}^+(\cdot,x_2)=\widehat{P}^+(0)\cosh(\mu^+ x_2) +\partial_2 \widehat{P}^+(0) \frac{\sinh(\mu^+ x_2)}{\mu^+}\\ - \int_{0}^{x_2} \frac{\sinh(\mu^+ (x_2-y))}{c^2\mu^+} \widehat{\mathcal F }^+(\cdot,y)\, dy \, ,\qquad x_2>0 \,,
\end{multline}
\begin{multline}\label{formulaQ-}
\widehat{P}^-(\cdot,-x_2)=\widehat{P}^-(0)\cosh(\mu^- x_2) -\partial_2 \widehat{P}^-(0) \frac{\sinh(\mu^- x_2)}{\mu^-}\\ - \int_{0}^{x_2} \frac{\sinh(\mu^- (x_2-y))}{c^2\mu^-} \widehat{\mathcal F }^-(\cdot,-y)\, dy \, ,\qquad x_2>0 \, .
\end{multline}
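Here we used the elementary transforms $\mathcal{L}[\cosh(\mu x_2)](s)=\frac{s}{s^2-\mu^2}$ and $\mathcal{L}\big[\frac{\sinh(\mu x_2)}{\mu}\big](s)=\frac{1}{s^2-\mu^2}$ (valid for ${\mathbb R}e\, s$ large), together with the convolution theorem applied to the source terms.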
We need to determine the values of $\widehat{P}^\pm(0) , \partial_2 \widehat{P}^\pm(0) $ in \eqref{formulaQ+}, \eqref{formulaQ-}. Two conditions are given by the boundary conditions in \eqref{wave4}, and two more conditions are obtained by imposing the behavior at infinity
\eqref{cond_inf}.
Recalling \eqref{mu}, under the assumption \eqref{cond_infF} it is easy to show that
\begin{equation}\label{cond_infF_int}
\lim\limits_{x_2\to+\infty}\int_{0}^{x_2}e^{-\mu^\pm(x_2-y)}\widehat{\mathcal F}^\pm (\cdot,\pm y)\, dy= 0 \, .
\end{equation}
From \eqref{cond_inf}, \eqref{formulaQ+}, \eqref{formulaQ-}, \eqref{cond_infF_int} it follows that
\begin{equation}\label{cond_inf2}
\widehat{P}^\pm(0) \pm \frac{1}{\mu^\pm} \,\partial_2 \widehat{P}^\pm(0) - \frac{1}{c^2\mu^\pm}\int_{0}^{+\infty}e^{-\mu^\pm y}\widehat{\mathcal F}^\pm (\cdot,\pm y)\, dy= 0 \, .
\end{equation}
Collecting the boundary conditions in \eqref{wave4} and \eqref{cond_inf2} gives the linear system
\begin{equation}\label{system}
\begin{cases}
\widehat{P}^+(0)-\widehat{P}^-(0)=0 \, ,\\
\partial_2 \widehat{P}^+(0) - \partial_2 \widehat{P}^-(0) =-4i\eta \frac{ V_1}{c}\left(\frac{\tau+iw_1\eta}{c}\right) \widehat{f}\, ,
\\
\mu^+\widehat{P}^+(0) + \partial_2 \widehat{P}^+(0)= \frac{1}{c^2}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy\, ,
\\
\mu^-\widehat{P}^-(0) - \partial_2 \widehat{P}^-(0)=\frac{1}{c^2}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, .
\end{cases}
\end{equation}
The determinant of the above linear system equals $\mu^++\mu^- $; from \eqref{mu} it never vanishes as long as $ \gamma>0. $ Solving \eqref{system} gives
\begin{equation}\label{somma_deriv}
\partial_2 \widehat{P}^+(0) + \partial_2 \widehat{P}^-(0) =-4i\eta \frac{ V_1}{c}\left(\frac{\tau+iw_1\eta}{c}\right) \widehat{f}\; \frac{\mu^+-\mu^-}{\mu^++\mu^-} + 2\frac{\mu^+\mu^-}{\mu^++\mu^-}\frac{M}{c^2} \, ,
\end{equation}
where we have set
\begin{equation*}
M:= \frac{1}{\mu^+}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, dy -
\frac{1}{\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, dy \, .
\end{equation*}
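For the reader's convenience we sketch the elimination leading to \eqref{somma_deriv}. Setting $I^\pm:=\int_{0}^{+\infty}e^{-\mu^\pm y}\widehat{\mathcal F}^\pm (\cdot,\pm y)\, dy$ (so that $M=I^+/\mu^+-I^-/\mu^-$), the first, third and fourth equations of \eqref{system} give
\[
\widehat{P}^+(0)=\widehat{P}^-(0)=\frac{1}{\mu^++\mu^-}\left(\frac{I^++I^-}{c^2}+4i\eta \frac{ V_1}{c}\,\frac{\tau+iw_1\eta}{c}\, \widehat{f}\right),\qquad
\partial_2 \widehat{P}^+(0)+\partial_2 \widehat{P}^-(0)=\frac{I^+-I^-}{c^2}+(\mu^--\mu^+)\,\widehat{P}^+(0)\,,
\]
and \eqref{somma_deriv} follows from the identity $(I^+-I^-)(\mu^++\mu^-)+(\mu^--\mu^+)(I^++I^-)=2(\mu^-I^+-\mu^+I^-)=2\mu^+\mu^-\,M$.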
We substitute \eqref{somma_deriv} into \eqref{puntoN8} and obtain the equation for $\widehat{f}$
\begin{equation}\label{equ_f0}
\left( (\tau + iw_1\eta)^2 - V_1^2 \eta^2
-2i { V_1} \eta\left(\tau+iw_1\eta\right) \; \frac{\mu^+-\mu^-}{\mu^++\mu^-} \right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}M =0 \, .
\end{equation}
Finally, we compute
\[
\frac{\mu^+-\mu^-}{\mu^++\mu^-} =4\frac{V_1}{c^2}\frac{i\eta(\tau+iw_1\eta)}{(\mu^++\mu^-)^2},
\]
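(This identity follows from $(\mu^+)^2-(\mu^-)^2=\big((\tau+iv_1^+\eta)^2-(\tau+iv_1^-\eta)^2\big)/c^2=4iV_1\eta(\tau+iw_1\eta)/c^2$, after dividing both sides by $(\mu^++\mu^-)^2$.)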
and substituting this last expression in \eqref{equ_f0} we can rewrite it as
\begin{equation*}
\left( (\tau + iw_1\eta)^2 + V_1^2 \eta^2
\left(8 \left(\frac{\tau+iw_1\eta}{c(\mu^++\mu^-)}\right)^2 -1 \right)\right) \widehat{f} + \frac{\mu^+\mu^-}{\mu^++\mu^-}M =0 \, ,
\end{equation*}
that is \eqref{equ_f}.
\section{The symbol of the pseudo-differential equation \eqref{equ_f} for the front}
Let us denote the symbol of \eqref{equ_f} by $ \Sigma: $
\[ \Sigma=\Sigma(\tau,\eta):= (\tau + iw_1\eta)^2 + V_1^2 \eta^2
\left( \frac{8(\tau+iw_1\eta)^2}{c^2(\mu^+(\tau,\eta)+\mu^-(\tau,\eta))^2} -1 \right).\]
In order to take the homogeneity into account, we define the hemisphere:
\begin{align*}
\Xi_1:=\left\{(\tau,\eta)\in \mathbb{C}\times\mathbb{R}\, :\,
|\tau|^2+\eta^2=1,{\mathbb R}e \tau\geq 0 \right\},
\end{align*}
and the set of ``frequencies'':
\begin{align*}
\Xi:=\left\{(\tau,\eta)\in \mathbb{C}\times\mathbb{R}\, :\,
{\mathbb R}e \tau\geq 0, (\tau,\eta)\ne (0,0) \right\}=(0,\infty)\cdot\Xi_1 \,.
\end{align*}
From now on we assume
\[ v^+_1=v>0, \qquad v^-_1=-v \,, \]
so that \[ w_1=0, \qquad V_1=v\, . \]
From this assumption it follows that
\begin{equation}\label{def_Sigma}
\Sigma(\tau,\eta)= \tau^2 + v^2 \eta^2
\left(8\left( \frac{\tau/c}{\mu^+(\tau,\eta)+\mu^-(\tau,\eta)}\right)^2 -1 \right).
\end{equation}
\subsection{Study of the roots $\mu^\partialm$}
\begin{lemma}\label{lemma_mu}
Let $ (\tau,\eta)\in\Xi $ and let us consider the equation
\begin{equation}\label{equ_mu}
s^2=
\left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2.
\end{equation}
For both cases $\pm$ of \eqref{equ_mu} there exists one root, denoted by $ \mu^\pm=\mu^\pm(\tau,\eta) $, such that $ {\mathbb R}e\mu^\pm>0 $ as long as $ {\mathbb R}e\tau>0 $. The other root is $ -\mu^\pm $. The roots $ \mu^\pm$ admit a continuous extension to points $ (\tau,\eta)=(i\delta,\eta)\in\Xi $, i.e. with $ {\mathbb R}e\tau=0 $.
Specifically we have:
(i) if $ \eta=0 $, $ \mu^\pm(i\delta,0)=i\delta/c $ ;
(ii) if $ \eta\not=0 $,
\begin{equation}\label{mu+}
\begin{array}{ll}
\mu^+(i\delta,\eta)=\sqrt{-\left(\frac{\delta +v\eta}{c}\right)^2+\eta^2} \qquad &{\it if } \; -\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<-\left(\frac{v}{c}-1\right)\, , \\
\mu^+(i\delta,\eta)=0 \qquad &{\it if } \; \frac{\delta}{c\eta}=-\left(\frac{v}{c}\pm 1\right)\, ,
\\
\mu^+(i\delta,\eta)=-i\sgn(\eta) \sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)\, , \\
\mu^+(i\delta,\eta)=i\sgn(\eta) \sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}>-\left(\frac{v}{c}-1\right)\, ,
\end{array}
\end{equation}
and
\begin{equation}\label{mu-}
\begin{array}{ll}
\mu^-(i\delta,\eta)=\sqrt{-\left(\frac{\delta -v\eta}{c}\right)^2+\eta^2} \qquad &{\it if } \; \frac{v}{c}-1 <\frac{\delta}{c\eta}<\frac{v}{c}+1\, , \\
\mu^-(i\delta,\eta)=0 \qquad &{\it if } \; \frac{\delta}{c\eta}=\frac{v}{c}\pm 1\, ,
\\
\mu^-(i\delta,\eta)=-i\sgn(\eta) \sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}<\frac{v}{c}-1\, , \\
\mu^-(i\delta,\eta)=i\sgn(\eta) \sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \qquad &{\it if }\; \frac{\delta}{c\eta}>\frac{v}{c}+1\, .
\end{array}
\end{equation}
\end{lemma}
\begin{proof}
(i) If $ \eta=0 $, \eqref{equ_mu} reduces to $s^2=(\tau/c)^2$. We choose $\mu^\pm=\tau/c$, which has ${\mathbb R}e\mu^\pm>0$ if ${\mathbb R}e\tau=\gamma>0$; obviously, the continuous extension for ${\mathbb R}e\tau=0$ is $\mu^\pm=i\delta/c$.
(ii) Assume $ \eta\not=0.$ Let us denote
\begin{equation*}
\alpha^\pm:=\left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2=\frac{\gamma^2-(\delta \pm v\eta)^2+c^2\eta^2}{c^2} +2i\gamma\frac{\delta\pm v\eta}{c^2}\, .
\end{equation*}
For $\gamma>0$,
$\Im\alpha^\pm=0$ if and only if $\delta\pm v\eta=0$,
and
$\alpha^\pm_{|\delta\pm v\eta=0}=(\gamma/c)^2+\eta^2>0$. It follows that either $\alpha^\pm\in{\mathbb R}, \alpha^\pm>0$, or $\alpha^\pm\in{\mathbb C}$ with $\Im\alpha^\pm\not=0$.
In both cases $\alpha^\pm$ has two square roots, one with strictly positive real part (that we denote by $\mu^\pm$), the other one with strictly negative real part. For the continuous extension in points with ${\mathbb R}e\tau=0,$ we have
\begin{equation}\label{caso+}
\mu^\pm(i\delta,\eta)=\sqrt{-\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2} \qquad {\rm if } \quad -\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2\ge0\, ,
\end{equation} and
\begin{equation}\label{caso-}
\mu^\pm(i\delta,\eta)=i\sgn(\delta\pm v\eta)\sqrt{\left(\frac{\delta \pm v\eta}{c}\right)^2-\eta^2} \qquad {\rm if } \quad -\left(\frac{\delta \pm v\eta}{c}\right)^2+\eta^2<0\, .
\end{equation}
We also observe that
\begin{equation}\label{caso+2}
\begin{array}{ll}
\sgn(\delta+v\eta)=-\sgn(\eta) \qquad{\rm if } \quad \frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right) ,\\
\sgn(\delta+v\eta)=\sgn(\eta)\qquad{\rm if } \quad \frac{\delta}{c\eta}>-\left(\frac{v}{c}-1\right) ,
\end{array}
\end{equation}
and
\begin{equation}\label{caso-2}
\begin{array}{ll}
\sgn(\delta-v\eta)=-\sgn(\eta) \qquad{\rm if } \quad \frac{\delta}{c\eta}<\frac{v}{c}-1 ,\\
\sgn(\delta-v\eta)=\sgn(\eta)\qquad{\rm if } \quad \frac{\delta}{c\eta}>\frac{v}{c}+1 .
\end{array}
\end{equation}
From \eqref{caso+}--\eqref{caso-2} we obtain \eqref{mu+}, \eqref{mu-}.
\end{proof}
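As an illustration of \eqref{mu+}, \eqref{mu-}, and anticipating the discussion below, consider the points $(\tau,\eta)=(0,\eta)$ with $\eta\not=0$: there
\[
\mu^\pm(0,\eta)=|\eta|\sqrt{1-(v/c)^2}\quad{\rm if}\ v\le c\,,\qquad
\mu^\pm(0,\eta)=\pm\, i\eta\sqrt{(v/c)^2-1}\quad{\rm if}\ v> c\,,
\]
so that $\mu^++\mu^-$ vanishes at $\tau=0$ precisely when $v\ge c$.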
\begin{corollary}\label{coroll_mu}
From \eqref{mu+}, \eqref{mu-} the roots $ \mu^\pm $ only vanish in points $ (\tau,\eta)=(i\delta,\eta) $ with
\[ \delta=-(v\pm c) \eta \qquad\forall\eta\not=0 \qquad ({\rm where}\;\mu^+=0)\, ,\]
or
\[ \delta=(v\pm c) \eta \qquad\forall\eta\not=0\qquad ({\rm where}\;\mu^-=0)\, .\]
If $v\not= c $, the above four families of points
\[
\delta=-(v+ c) \eta, \quad \delta=-(v- c)\eta, \quad \delta=(v- c) \eta, \quad \delta=(v+ c) \eta \, ,
\] are always mutually distinct. If $ v=c $, the two families in the middle coincide and we have
\begin{equation}\label{sommavc}
\mu^+(-2ic\eta,\eta)=0, \qquad \mu^+(0,\eta)=\mu^-(0,\eta)=0, \qquad \mu^-(2ic\eta,\eta)=0, \qquad\forall\eta\not=0\, .
\end{equation}
\end{corollary}
From \eqref{def_Sigma}, the symbol $ \Sigma $ is not defined in points $ (\tau,\eta)\in\Xi$ where $\mu^++\mu^-$
vanishes. From Lemma \ref{lemma_mu} we already know that ${\mathbb R}e\mu^\pm>0$ in all points with ${\mathbb R}e\tau>0$. It follows that ${\mathbb R}e(\mu^++\mu^-)>0$ and thus $\mu^++\mu^-\not=0$ in all such points. Therefore the symbol is defined for ${\mathbb R}e\tau>0$.
It remains to study whether $\mu^++\mu^-$
vanishes in points $ (\tau,\eta)=(i\delta,\eta) $ with $ {\mathbb R}e\tau=0 $. From Corollary \ref{coroll_mu} we obtain that if $v\not= c $ then $\mu^++\mu^-\not=0$ in all points with $\delta=-(v\pm c) \eta$ and $\delta=(v\pm c) \eta$ (in these points, if $\mu^+=0$ then $\mu^-\not=0$ and vice versa). If $v= c $, then $\mu^+(0,\eta)+\mu^-(0,\eta)=0$.
From now on we adopt the usual terminology: $ v>c $ is the {\it supersonic} case, $ v<c $ is the {\it subsonic} case, $ v=c $ is the {\it sonic} case. The next lemma regards the supersonic case.
\begin{lemma}[$ v>c $]\label{super_mu}
Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ {\mathbb R}e\tau=0, \eta\not=0 $. For all such points the following facts hold.
\begin{itemize}
\item[(i)] If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$;
\\
\item[(ii)] If $-\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<-\left(\frac{v}{c}-1\right)
$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(iii)] If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}-1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^+(0,\eta)+\mu^-(0,\eta)=0$, ${\mathbb R}e(\mu^+\mu^-)>0$;
\\
\item[(iv)] If $\frac{v}{c}-1 <\frac{\delta}{c\eta}< \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(v)] If $\frac{\delta}{c\eta}> \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$.
\end{itemize}
\end{lemma}
We emphasize that the above properties hold in all points $(i\delta,\eta)$ as indicated according to the value of ${\delta}/({c\eta})$, except for the case (iii) where $\mu^++\mu^-=0$ if and only if $(\delta,\eta)=(0,\eta).$
From \eqref{sommavc} and (iii) we have
\begin{equation}\label{somma_mu}
{\rm if }\;\, v\ge c \qquad \mu^+(0,\eta)+\mu^-(0,\eta)=0 \qquad \forall\eta\not=0\, .
\end{equation}
\begin{proof}[Proof of Lemma \ref{super_mu}]
(i) If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$ follows directly from $\eqref{mu+}_3$ and $\mu^-\in i{\mathbb R}$ follows from $\eqref{mu-}_3$ because $\frac{\delta}{c\eta}<\frac{v}{c}-1$. Moreover, from \eqref{mu+}, \eqref{mu-} we have
\[
\mu^++\mu^-=-i\sgn(\eta)\left(\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}+\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \right)\not=0 \,,
\]
\[
{\mathbb R}e(\mu^+\mu^-)=- \sqrt{\left(\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2 \right) \left( \left(\frac{\delta -v\eta}{c}\right)^2-\eta^2\right)} <0\,.
\]
The cases (ii) and (iv) follow directly from \eqref{mu+}, \eqref{mu-}.
(iii) If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}-1
$ then $\mu^\pm\in i{\mathbb R}$ follows from \eqref{mu+}, \eqref{mu-}. Moreover it holds
\[
\mu^++\mu^-=i\sgn(\eta)\left(\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}-\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2} \right)=0
\quad\mbox{if and only if }\, \delta=0 \,,
\] recalling that here $\eta\not=0$.
It follows that $\mu^+(0,\eta)+\mu^-(0,\eta)=0$ for all $\eta\not=0$. We also have
\[
{\mathbb R}e(\mu^+\mu^-)= \sqrt{\left(\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2 \right) \left( \left(\frac{\delta -v\eta}{c}\right)^2-\eta^2\right)} >0\,.
\]
The proof of case (v) is similar to the proof of case (i).
\end{proof}
The next lemma regards the subsonic case.
\begin{lemma}[$ v<c $]\label{sub_mu}
Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ {\mathbb R}e\tau=0, \eta\not=0 $. For all such points the following facts hold.
\begin{itemize}
\item[(i)] If $\frac{\delta}{c\eta}<-\left(\frac{v}{c}+1\right)$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$;
\\
\item[(ii)] If $-\left(\frac{v}{c}+1\right) <\frac{\delta}{c\eta}<\frac{v}{c}-1
$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(iii)] If $\frac{v}{c}-1 <\frac{\delta}{c\eta}< -\left(\frac{v}{c}-1\right)
$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)>0$;
\\
\item[(iv)] If $-\left(\frac{v}{c}-1\right) <\frac{\delta}{c\eta}< \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(v)] If $\frac{\delta}{c\eta}> \frac{v}{c}+1
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{super_mu} and so we omit the details.
\end{proof}
\begin{lemma}[$ v=c $]\label{eq_mu}
Let $ (\tau,\eta)=(i\delta,\eta)\in\Xi $ such that $ {\mathbb R}e\tau=0, \eta\not=0 $. For all such points the following facts hold.
\begin{itemize}
\item[(i)] If $\frac{\delta}{c\eta}<-2$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$;
\\
\item[(ii)] If $-2<\frac{\delta}{c\eta}<0$ then $\mu^+\in {\mathbb R}^+$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(iii)] If $0 <\frac{\delta}{c\eta}< 2
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in {\mathbb R}^+$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)=0$;
\\
\item[(iv)] If $\frac{\delta}{c\eta}> 2
$ then $\mu^+\in i{\mathbb R}$, $\mu^-\in i{\mathbb R}$, $\mu^++\mu^-\not=0$, ${\mathbb R}e(\mu^+\mu^-)<0$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{super_mu} and so we omit the details.
\end{proof}
\begin{corollary}\label{zerosommamu}
From Lemma \ref{super_mu}, see also \eqref{somma_mu}, Lemma \ref{sub_mu} and Lemma \ref{eq_mu} it follows that $\mu^++\mu^-=0$ at $ (\tau,\eta)\in\Xi$ if and only if $\tau=0$ and $v\ge c$.
\end{corollary}
For $v\ge c$, although $\mu^++\mu^-=0$ at $ (0,\eta)\in\Xi$, we can still define $\Sigma (0,\eta)$ by continuous extension, see Lemma \ref{extend}.
We are also interested in knowing whether the difference $\mu^+-\mu^-$ vanishes somewhere.
\begin{lemma}\label{diff_mu}
Let $ (\tau,\eta)\in\Xi$. Then $\mu^+(\tau,\eta)=\mu^-(\tau,\eta)$ if and only if:
\begin{itemize}
\item[(i)] $ (\tau,\eta)= (\tau,0) $,
\item[(ii)] $ (\tau,\eta)= (0,\eta)$, and $v\le c$.
\end{itemize}
\end{lemma}
\begin{proof}
From \eqref{equ_mu} we obtain that $(\mu^+)^2
=(\mu^-)^2$ if and only if $\eta=0$ or $\tau=0$. If $\eta=0$ then $\mu^+
=\mu^-=\tau/c$ which gives the first case. If $\tau=0$ then $(\mu^+)^2
=(\mu^-)^2=(1-(v/c)^2)\eta^2$. For $1-(v/c)^2<0$ we obtain from Lemma \ref{lemma_mu} $\mu^\pm=\pm i\eta\sqrt{(v/c)^2-1}$, which yields $\mu^+-\mu^-=2i\eta\sqrt{(v/c)^2-1}\not=0$. For $1-(v/c)^2\ge 0$ we obtain $\mu^\pm= \sqrt{1-(v/c)^2}\,|\eta|$, that is the second case.
\end{proof}
From Lemma \ref{lemma_mu} we know that the roots $\mu^\pm$ satisfy ${\mathbb R}e\mu^\pm>0$ if ${\mathbb R}e\tau=\gamma>0$. Actually we can prove more than that.
\begin{lemma}\label{stima_Re_mu}
Let $ (\tau,\eta)\in\Xi$ with ${\mathbb R}e\tau=\gamma>0$. Then
\begin{equation}\label{est_Re_mu}
{\mathbb R}e\mu^\pm(\tau,\eta)\ge \frac{1}{\sqrt{2}\,c}\,\gamma\, .
\end{equation}
\end{lemma}
\begin{proof}
We consider $\mu^+$.
From \eqref{equ_mu} we obtain
\[
({\mathbb R}e\mu^+)^2-(\Im\mu^+)^2=\frac{1}{c^2}(\gamma^2-(\delta+v\eta)^2)+\eta^2, \qquad
{\mathbb R}e\mu^+\,\Im\mu^+=\frac{1}{c^2}\gamma(\delta+v\eta) \,.
\]
Since ${\mathbb R}e\mu^+>0$ for $\gamma>0$, we can divide the second equation by ${\mathbb R}e\mu^+$, then substitute the value of $\Im\mu^+$ into the first one and obtain
\[
({\mathbb R}e\mu^+)^4+\alpha({\mathbb R}e\mu^+)^2+\beta=0\,,
\]
where we have set
\[
\alpha=-\left(\frac{1}{c^2}(\gamma^2-(\delta+v\eta)^2)+\eta^2
\right)\,, \qquad \beta=-\frac{1}{c^4}\gamma^2(\delta+v\eta)^2 \le0 \,.
\]
Note that $\alpha^2-4\beta>0$ for $\gamma>0$: indeed $\beta\le0$, and $\alpha$, $\beta$ cannot vanish simultaneously, since $\beta=0$ forces $\delta+v\eta=0$ and then $\alpha=-((\gamma/c)^2+\eta^2)<0$. We thus obtain
\[
2 ({\mathbb R}e\mu^+)^2=-\alpha+\sqrt{\alpha^2-4\beta}\ge
(\gamma/c)^2+\left|\frac{1}{c^2}(\delta+v\eta)^2-\eta^2
\right| -\left(\frac{1}{c^2}(\delta+v\eta)^2-\eta^2
\right) \ge (\gamma/c)^2 \,,
\]
which gives \eqref{est_Re_mu} for $\mu^+$. The proof for $\mu^-$ is similar.
\end{proof}
\subsection{Study of the symbol $\Sigma$}
The next lemma regards the continuous extension of the symbol in points $(0,\eta)$ where $\mu^++\mu^-$ vanishes, see Corollary \ref{zerosommamu}. We only consider the case $v\geq c$.
\begin{lemma}\label{extend}
Assume $v\geq c$. Let $\bar{\eta}\not=0$ be fixed and let $(\tau,\eta)\in\Xi$, so that $ {\mathbb R}e\tau\ge0 $. Then
\begin{equation}\label{extension}\lim\limits_{(\tau,\eta)\to(0,\bar{\eta})}\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2=\frac{(v/c)^2-1}{4(v/c)^2} \, .
\end{equation}
\end{lemma}
\begin{proof}
{\it First case: $v>c$}.
We first consider the case $ {\mathbb R}e\tau>0. $ Let $ \tau=\gamma+i\delta$ with $0<\gamma\ll1, \delta\in{\mathbb R}.$ Then
\[
\left(\frac{\tau \pm iv\eta}{c}\right)^2+\eta^2 =a_\pm+ib_\pm \,,
\]
with
\begin{equation}\label{def_ab}
a_\pm=\left(\frac{\gamma}{c}\right)^2- \left(\frac{\delta\pm v\eta}{c}\right)^2+\eta^2 , \quad b_\pm=2\gamma\frac{\delta\pm v\eta}{c^2} \,.
\end{equation}
For the computation of the square roots of $a_\pm+ib_\pm$ it is useful to recall that the square roots of the complex number $a+ib$ ($a,b$ real) are
\begin{equation}\label{roots}
\pm\left\{\sgn(b)\sqrt{\frac{r+a}{2}} +i\sqrt{\frac{r-a}{2}}\right\}, \qquad r=|a+ib|
\end{equation}
(by convention $\sgn(0)=1$). In our case we compute
\begin{equation}\label{rpm}
r_\pm^2:=a_\pm^2+b_\pm^2
=
\left[ \left(\frac{\gamma}{c}\right)^2+ \left(\left| \frac{\delta\pm v\eta}{c} \right| - |\eta|\right)^2\right]
\left[ \left(\frac{\gamma}{c}\right)^2+ \left(\left| \frac{\delta\pm v\eta}{c} \right| + |\eta|\right)^2\right] \, .
\end{equation}
Substituting the definition of $ a_\pm, b_\pm $ in \eqref{def_ab}, $ r_\pm $ in \eqref{rpm}, into \eqref{roots} and taking the limit as $\gamma\downarrow 0, \delta\to\bar\delta, \eta\to\bar\eta$, with $ (\bar\delta, \bar\eta) \not=(0,0)$, we can prove again the formulas \eqref{mu+}, \eqref{mu-} of continuous extension of $ \mu^\pm $ to points with $ {\mathbb R}e\tau=0 $.
Let us study the limit of $ \mu^++\mu^- $ as $\gamma\downarrow 0, \delta\to0, \eta\to\bar\eta$, with $ \bar\eta\not=0$. By continuity, for $ (\gamma,\delta,\eta) $ sufficiently close to $ (0,0,\bar\eta) $, we have from \eqref{def_ab} that $ \sgn(b_+)=\sgn(\delta+v\eta) =\sgn(\bar\eta)$. If $ \bar\eta>0$, from \eqref{roots} it follows that
\begin{equation*}
\mu^+= \sqrt{\frac{r_++a_+}{2}} +i\sqrt{\frac{r_+-a_+}{2}}\,.
\end{equation*}
With similar considerations we get
\begin{equation*}
\mu^-= \sqrt{\frac{r_-+a_-}{2}} -i\sqrt{\frac{r_--a_-}{2}}\,.
\end{equation*}
If $ \bar\eta<0$, from \eqref{roots} it follows that
\begin{equation*}
\mu^+= \sqrt{\frac{r_++a_+}{2}} -i\sqrt{\frac{r_+-a_+}{2}}\,, \qquad
\mu^-= \sqrt{\frac{r_-+a_-}{2}} +i\sqrt{\frac{r_--a_-}{2}}\,.
\end{equation*}
Thus,
\begin{equation}\label{sommamu}
\mu^++\mu^-= \sqrt{\frac{r_++a_+}{2}}+ \sqrt{\frac{r_-+a_-}{2}} +i\sgn(\bar\eta)\left(\sqrt{\frac{r_+-a_+}{2}} -\sqrt{\frac{r_--a_-}{2}}\right)\,.
\end{equation}
From \eqref{def_ab}, \eqref{rpm} we obtain (recall that $ v>c $)
\begin{equation}\label{lim_diff_ra}
\lim\limits_{(\gamma,\delta,\eta) \to(0,0,\bar\eta)}(r_\pm-a_\pm)=2\left(\left(\frac{v}{c}\right)^2-1\right)\bar\eta^2 \,,
\end{equation}
\begin{equation}\label{sommara}
r_\pm+a_\pm=\frac{r_\pm^2-a_\pm^2}
{r_\pm-a_\pm}=\frac{b_\pm^2}
{r_\pm-a_\pm}=\frac{4\left(\frac{\gamma}{c}\right)^2\left(\frac{\delta\pm v\eta}{c}\right)^2}
{r_\pm-a_\pm} \,.
\end{equation}
From \eqref{sommamu}, \eqref{sommara}, the real part of $ \mu^++\mu^- $ is given by
\begin{equation}\label{Resommamu}
{\mathbb R}e(\mu^++\mu^-)=\sqrt{\frac{r_++a_+}{2}}+ \sqrt{\frac{r_-+a_-}{2}}=\sqrt{2}\, \frac{\gamma}{c} \, \left(\frac{\left| \frac{\delta+ v\eta}{c} \right|}{\sqrt{r_+-a_+}} + \frac{\left| \frac{\delta- v\eta}{c} \right|}{\sqrt{r_--a_-}} \right) \, .
\end{equation}
From \eqref{lim_diff_ra}, \eqref{Resommamu} it follows that
\begin{equation}\label{lim_Re_somma_mu}
{\mathbb R}e(\mu^++\mu^-)= \frac{\gamma}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\bar\eta)\, .
\end{equation}
Now we consider the imaginary part of $ \mu^++\mu^- $
\begin{equation}\label{Imsommamu}
\Im(\mu^++\mu^-)=\sgn(\bar\eta)\left(\sqrt{\frac{r_+-a_+}{2}} -\sqrt{\frac{r_--a_-}{2}}\right)=
\frac{\sgn(\bar\eta)}{\sqrt{2}}\frac{(r_+-a_+)-(r_--a_-)}{\sqrt{r_+-a_+} +\sqrt{r_--a_-}} \,.
\end{equation}
Using \eqref{def_ab}, \eqref{rpm} gives
\begin{equation}\label{diff_ra}
(r_+-a_+)-(r_--a_-)=\frac{r_+^2-r_-^2}{r_++r_-}+4\frac{v}{c^2}\delta\eta \,,
\end{equation}
and
\begin{equation}\label{diff_ra2}
r_+^2-r_-^2=8\frac{v}{c^2}\delta\eta\left[\left(\frac{\gamma}{c}\right)^2+\left(\frac{\delta}{c}\right)^2+ \left(\left(\frac{ v}{c}\right)^2
-1\right)\eta^2
\right] \,.
\end{equation}
Moreover it holds
\begin{equation}\label{lim_somma_r}
\lim\limits_{(\gamma,\delta,\eta) \to(0,0,\bar\eta)}(r_++r_-)=2\left(\left(\frac{v}{c}\right)^2-1\right)\bar\eta^2 \,.
\end{equation}
Combining \eqref{lim_diff_ra}, \eqref{Imsommamu}--\eqref{lim_somma_r} gives
\begin{equation}\label{lim_Im_somma_mu}
\Im(\mu^++\mu^-)= \frac{\delta}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\bar\eta)\, .
\end{equation}
From \eqref{lim_Re_somma_mu}, \eqref{lim_Im_somma_mu} we deduce
\begin{equation}\label{lim_somma_mu}
\mu^++\mu^-= \frac{\tau}{c} \, \left(\frac{2\frac{v}{c}}{\sqrt{\left(\frac{v}{c}\right)^2-1}}+o(1)\right) \qquad{\rm as}\quad (\gamma,\delta,\eta) \to(0,0,\bar\eta)\, .
\end{equation}
Thus, it follows that
\begin{equation*}
\lim\limits_{(\gamma,\delta,\eta) \to(0,0,\bar\eta)}\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2=\frac{(v/c)^2-1}{4(v/c)^2} \, ,
\end{equation*}
that is \eqref{extension}.
Consider now the case $ {\mathbb R}e\tau=0 $, that is $ \tau=i\delta $, and let us assume with no loss of generality that $ \bar\eta>0.$ For $ (\delta,\eta) $ in a small neighborhood of $ (0,\bar\eta) $ we have $ -(v-c)\eta<\delta<(v-c)\eta $, that is $ -(\frac{v}{c}-1)<\frac{\delta}{c\eta}<\frac{v}{c}-1 $. From Lemma \ref{lemma_mu} (see also the proof of Lemma \ref{super_mu} (iii)) we get
\begin{equation*}
\frac{\tau/c}{\mu^++\mu^-}=\frac{\delta/c}{\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}-\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2}}=
\frac{\sqrt{\left(\frac{\delta +v\eta}{c}\right)^2-\eta^2}+\sqrt{\left(\frac{\delta -v\eta}{c}\right)^2-\eta^2}}{4v\eta/c} \, .
\end{equation*}
Passing to the limit as $ (\delta,\eta) \to (0,\bar\eta) $ we obtain again \eqref{extension}.
This concludes the proof in the first case.
{\it Second case: $v=c$}. Again we first consider the case ${\mathbb R}e\tau >0$: hence $\tau =\gamma + i\delta$ with $0<\gamma\ll1$, $\delta\in \mathbb{R}$. For $(\gamma,\delta,\eta)$ sufficiently close to $(0,0,\overline\eta)$, with $\overline\eta\neq0$, $\mu^+ + \mu^-$ is given by \eqref{sommamu}, where $a_{\pm}$, $b_{\pm}$ and $r_{\pm}$ are computed in \eqref{def_ab} and in \eqref{rpm} with $v=c$. Hence
\begin{equation}\label{mod_somma}
\begin{split}
\vert \mu^+ &+\mu^-\vert^2=r_++r_{-}+\sqrt{r_++a_+}\sqrt{r_-+a_-}-\sqrt{r_+-a_+}\sqrt{r_--a_-}\\
&=\left\vert\frac{\tau}{c}\right\vert(\alpha_++\alpha_-)+\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)-\beta_-}\\
&\qquad -\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)+\beta_-}\,,
\end{split}
\end{equation}
where we have set:
\begin{equation}\label{alphabeta}
\alpha_\pm=\alpha_\pm(\tau,\eta):=\sqrt{\left\vert\frac{\tau}{c}\right\vert^2+4\eta\left(\eta\pm\frac{\delta}{c}\right)}\,,\quad\beta_\pm=\beta_\pm(\delta,\eta):=2\frac{\delta}{c}\left(\frac{\delta}{c}\pm\eta\right)\,.
\end{equation}
Assume that $\overline\eta>0$, so that $\eta>0$ when it is sufficiently close to $\overline\eta$. For $\delta>0$ sufficiently close to zero, we have $\beta_-<0$, thus
\begin{equation*}
\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)-\beta_-}\ge \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_-+\left\vert\frac{\tau}{c}\right\vert\right)}
\end{equation*}
and
\begin{equation*}
\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)+\beta_-}\le \sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\,\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_--\left\vert\frac{\tau}{c}\right\vert\right)}\,;
\end{equation*}
moreover from $\beta_+=2\frac{\delta}{c}\left(\frac{\delta}{c}+\eta\right)\le 2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)$ we have
\begin{equation*}
\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-\beta_+}\ge\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_++\left\vert\frac{\tau}{c}\right\vert\right)-2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)}
\end{equation*}
and
\begin{equation*}
\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+\beta_+}\le\sqrt{\left\vert\frac{\tau}{c}\right\vert\left(\alpha_+-\left\vert\frac{\tau}{c}\right\vert\right)+2\left\vert\frac{\tau}{c}\right\vert\left(\frac{\delta}{c}+\eta\right)}\,.
\end{equation*}
We use the last inequalities in \eqref{mod_somma} to find
\begin{equation*}
\begin{split}
\vert\mu^++\mu^-\vert^2\ge \left\vert\frac{\tau}{c}\right\vert\Theta(\tau,\eta)\,,
\end{split}
\end{equation*}
where
\begin{equation*}
\Theta(\tau,\eta):=\alpha_++\alpha_-+\sqrt{\alpha_++\left\vert\frac{\tau}{c}\right\vert-2\left(\frac{\delta}{c}+\eta\right)}\,\sqrt{\alpha_-+\left\vert\frac{\tau}{c}\right\vert}-\sqrt{\alpha_+-\left\vert\frac{\tau}{c}\right\vert+2\left(\frac{\delta}{c}+\eta\right)}\,\sqrt{\alpha_--\left\vert\frac{\tau}{c}\right\vert}
\end{equation*}
satisfies
\begin{equation}\label{lim_theta}
\lim\limits_{(\tau,\eta)\to (0,\overline\eta)}\Theta(\tau,\eta)=2(2-\sqrt 2)\overline\eta>0\,.
\end{equation}
Hence for $(\gamma,\delta,\eta)$ sufficiently close to $(0,0,\overline{\eta})$ with $\gamma,\delta>0$, we get
\begin{equation}\label{stima_frac}
\left\vert\frac{\tau/c}{\mu^++\mu^-}\right\vert^2\le\frac{\vert\tau/c\vert}{\Theta(\tau,\eta)}\,.
\end{equation}
We observe that the same estimate also holds for $\delta<0$, by noticing that
\[
\vert(\mu^++\mu^-)(\gamma,\delta,\eta)\vert^2=\vert(\mu^++\mu^-)(\gamma,-\delta,\eta)\vert^2
\]
(see \eqref{mod_somma}, \eqref{alphabeta}) and
\begin{equation*}
\left\vert\frac{\tau/c}{(\mu^++\mu^-)(\tau,\eta)}\right\vert^2=\left\vert\frac{\overline\tau/c}{(\mu^++\mu^-)(\overline\tau,\eta)}\right\vert^2\,.
\end{equation*}
From \eqref{lim_theta} and \eqref{stima_frac} we get
\begin{equation}\label{ext0}
\lim\limits_{(\tau,\eta)\to (0,\overline\eta)}\left(\frac{\tau/c}{\mu^++\mu^-}\right)^2=0\,,
\end{equation}
that is \eqref{extension} for $v=c$.
Consider now the case $\mathrm{Re}\,\tau=0$, that is $\tau=i\delta$, and, as above, assume that $\overline\eta>0$. If $\delta>0$, we may assume that $0<\frac{\delta}{c\eta}<2$ for $(\delta,\eta)$ sufficiently close to $(0,\overline\eta)$, since $\eta>0$; then from Lemma \ref{lemma_mu} (see formulas $\eqref{mu+}_4$ and $\eqref{mu-}_1$) we get
\begin{equation*}
\mu^+(i\delta,\eta)=i\sqrt{\left(\frac{\delta}{c}+\eta\right)^2-\eta^2}\,,\quad\mu^-(i\delta,\eta)=\sqrt{-\left(\frac{\delta}{c}-\eta\right)^2+\eta^2}\,.
\end{equation*}
If $\delta<0$ (so $-2<\frac{\delta}{c\eta}<0$ for $(\delta,\eta)$ close to $(0,\overline\eta)$) again from Lemma \ref{lemma_mu} (formulas $\eqref{mu+}_1$ and $\eqref{mu-}_3$) we get
\begin{equation*}
\mu^+(i\delta,\eta)=\sqrt{-\left(\frac{\delta}{c}+\eta\right)^2+\eta^2}\,,\quad\mu^-(i\delta,\eta)=-i\sqrt{\left(\frac{\delta}{c}-\eta\right)^2-\eta^2}\,.
\end{equation*}
From the above values of $\mu^\pm$ we get, for all $\delta\neq 0$,
\begin{equation}\label{mod_idelta}
\vert\mu^++\mu^-\vert^2=4\left\vert\frac{\delta}{c}\right\vert\eta\,,
\end{equation}
hence for $\tau=i\delta$
\[
\left\vert\frac{\tau/c}{\mu^++\mu^-}\right\vert^2=\frac{\vert\delta/c\vert}{4\eta}
\]
and passing to the limit as $(\delta,\eta)\to (0,\overline\eta)$ we obtain again \eqref{ext0}.
The same calculations can be repeated also in the case of $\overline{\eta}<0$.
\end{proof}
\begin{remark}Because of \eqref{extension}, the symbol $\Sigma$ can be extended to points $(0,\eta)$ where $\mu^++\mu^-$ vanishes. In particular we have the following limit for the coefficient in brackets, see \eqref{def_Sigma},
\begin{equation}\lim\limits_{(\tau,\eta)\to(0,\bar{\eta})}8\left( \frac{\tau/c}{\mu^++\mu^-} \right)^2-1=\frac{(v/c)^2-2}{(v/c)^2} \, ,
\end{equation}
which changes sign according to $v/c\gtrless \sqrt2$.
This is related to the well-known stability criterion for vortex sheets, see \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}; see also Remark \ref{ell_hyp}.
\end{remark}
\begin{remark}\label{remark52}
We easily verify that $ \mu^++\mu^- $ is a homogeneous function of degree 1 in $(\tau,\eta)\in \Xi $ if $ \mathrm{Re}\,\tau>0. $ It follows that the continuous extension to points with $ \mathrm{Re}\,\tau=0$ of $ \frac{\tau/c}{\mu^++\mu^-} $ is homogeneous of degree 0 and the continuous extension of $ \Sigma $ is homogeneous of degree 2.
\end{remark}
In the next lemma we study the roots of the symbol $\Sigma$.
\begin{lemma}\label{zeri_Sigma}
Let $ \Sigma(\tau,\eta) $ be the symbol defined in \eqref{def_Sigma}, for $ (\tau,\eta)\in\Xi. $
\begin{itemize}
\item[(i)] If $\frac{v}{c}<\sqrt{2}$, then $ \Sigma(\tau,\eta)=0 $ if and only if
\[
\tau=cY_1|\eta| \qquad \forall\eta\not=0\, , \]
where
\[ Y_1= \sqrt{-\left(\left(\frac{v}{c}\right)^2+1\right) + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\, . \]
\item[(ii)] If $\frac{v}{c}>\sqrt{2}$, then $ \Sigma(\tau,\eta)=0 $ if and only if
\[ \tau=\pm icY_2\eta \qquad \forall\eta\not=0 \, , \]
where
\[ Y_2= \sqrt{\left(\frac{v}{c}\right)^2+1 - \sqrt{4\left(\frac{v}{c}\right)^2+1}}\, .
\]
Each of these roots is simple. For instance, there exists a neighborhood $\mathcal V$ of $( icY_2\eta,\eta)$ in $\Xi_1$ and a $C^\infty$ function $H$ defined on $\mathcal V$ such that
\[ \Sigma(\tau,\eta)=(\tau-icY_2\eta)H(\tau,\eta), \quad H(\tau,\eta)\not=0 \quad\forall (\tau,\eta)\in\mathcal V.
\]
A similar result holds near $(-icY_2\eta,\eta)\in\Xi_1$.
\end{itemize}
\end{lemma}
\begin{remark}\label{ell_hyp}
(i) Recall that the equation \eqref{equ_f} was obtained by taking the Fourier transform with respect to $(t,x_1)$ of \eqref{puntoN7}, \eqref{wave3}, which corresponds to taking the Laplace transform with respect to $t$ and the Fourier transform with respect to $x_1$ of \eqref{puntoN6}, \eqref{wave2}. Taking the Fourier transform with respect to $t$ of \eqref{puntoN6}, \eqref{wave2} corresponds to the case $\gamma=\mathrm{Re}\,\tau=0 $, i.e. $ (\tau,\eta)=(i\delta,\eta) $.
If $\frac{v}{c}<\sqrt{2}$, from Lemma \ref{zeri_Sigma} the symbol $ \Sigma(\tau,\eta) $ vanishes only at points $ (\tau,\eta)$ with $\tau\in{\mathbb R}$, $\tau>0$. It follows that $ \Sigma(i\delta,\eta)\not=0 $ for all $(\delta,\eta)\in{\mathbb R}^2$. Therefore the symbol is elliptic, according to the standard definition. In this case planar vortex sheets are violently unstable, see \cite{S00MR1775057}.
(ii) If $\frac{v}{c}>\sqrt{2}$, $ \Sigma(\tau,\eta) $ vanishes at points $ (\tau,\eta)$ with $\mathrm{Re}\,\tau=0$, that is on the boundary of the frequency set $\Xi$. In this case planar vortex sheets are known to be weakly stable, in the sense that the so-called Lopatinski\u{\i} condition holds in a weak sense, see \cite{CS04MR2095445,FM63MR0154509,M58MR0097930,S00MR1775057}. In this case we expect a loss of derivatives of the solution with respect to the data.
\end{remark}
\begin{proof}[Proof of Lemma \ref{zeri_Sigma}]
As is easily verified, $ \Sigma(\tau,0)=\tau^2\not=0 $ for $ (\tau,0)\in\Xi $ and $ \Sigma(0,\eta)\neq 0 $ for $ (0,\eta)\in\Xi $ (see Corollary \ref{zerosommamu} and Lemma \ref{extend}). Thus we may assume without loss of generality that $\tau\neq 0$ and $ \eta\not=0 $; then, from Lemma \ref{diff_mu}, $(\mu^+-\mu^-)(\tau,\eta)\neq 0$. We compute
\[
\frac{\tau/c}{\mu^++\mu^-}=\frac{(\tau/c)(\mu^+-\mu^-)}{(\mu^+)^2-(\mu^-)^2}=\frac{c}{4iv}\frac{\mu^+-\mu^-}{\eta}\,,
\]
\[
\left(\frac{\mu^+-\mu^-}{\eta}\right)^2=
2\left(\left(\frac{\tau}{c\eta}\right)^2-\left(\frac{v}{c}\right)^2+1-\frac{\mu^+\mu^-}{\eta^2}\right)\,,
\]
and substituting in \eqref{def_Sigma} gives
\begin{equation}\label{sigma1}
\Sigma=c^2\left(\mu^+\mu^--\eta^2\right).
\end{equation}
Let us introduce the quantities
\[
X:=\frac{\tau}{c\eta}, \qquad \tilde{\mu}^\pm:=\frac{\mu^\pm}{\eta}.
\]
It follows from \eqref{sigma1} that
\begin{equation}\label{sigma2}
\Sigma=0 \qquad\mbox{if and only if }\quad \tilde{\mu}^+\tilde{\mu}^-=1 \,.
\end{equation}
Let us study the equation
\begin{equation}\label{muquadro}
(\tilde{\mu}^+)^2(\tilde{\mu}^-)^2=1 \,.
\end{equation}
This last equation is equivalent to the biquadratic equation
\[
X^4+2\left(\left(\frac{v}{c}\right)^2+1\right)X^2 + \left(\frac{v}{c}\right)^2 \left(\left(\frac{v}{c}\right)^2-2\right)=0 \,.
\]
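Indeed, since $(\tilde{\mu}^\pm)^2=(X\pm iv/c)^2+1$, equation \eqref{muquadro} reads
\begin{equation*}
\bigl(X^2-(v/c)^2+1\bigr)^2+4X^2(v/c)^2=1\,,
\end{equation*}
and expanding and collecting powers of $X$ gives the biquadratic equation above.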
This is a polynomial equation of degree 2 in $ X^2 $ with real and distinct roots
\[
X^2=-\left(\left(\frac{v}{c}\right)^2+1\right)- \sqrt{4\left(\frac{v}{c}\right)^2+1} \,,
\]
and
\[
X^2=-\left(\left(\frac{v}{c}\right)^2+1\right)+ \sqrt{4\left(\frac{v}{c}\right)^2+1} \,.
\]
The first one gives the imaginary roots
\begin{equation}\label{roots_imag}
X_{1,2}=\pm i Y_0, \qquad Y_0:=\sqrt{\left(\frac{v}{c}\right)^2+1 + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,.
\end{equation}
The second root of $ X^2 $ gives real or imaginary roots according to $ v/c \lessgtr \sqrt{2}.$ If $ v/c<\sqrt{2} $, there are 2 real and distinct roots
\begin{equation}\label{roots_real}
X_{3,4}=\pm Y_1, \qquad Y_1:= \sqrt{-\left(\left(\frac{v}{c}\right)^2+1\right) + \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,.
\end{equation}
If $ v/c>\sqrt{2} $, there are 2 imaginary roots
\begin{equation}\label{roots_imag2}
X_{3,4}=\pm iY_2, \qquad Y_2:= \sqrt{\left(\frac{v}{c}\right)^2+1- \sqrt{4\left(\frac{v}{c}\right)^2+1}}\,.
\end{equation}
If $ v/c=\sqrt{2} $, then $ X_{3,4}=0 $.
Assume $ v/c<\sqrt{2} $ and consider first the real roots \eqref{roots_real}. In order to obtain $ \mathrm{Re}\,\tau>0 $ we choose $ X_3 $ or $ X_4 $ depending on $ \sgn(\eta), $
and we obtain that
$\tau=cY_1 |\eta|.$ By construction the pairs $ (cY_1 |\eta|,\eta) $ solve the equation \eqref{muquadro}. In order to verify if \eqref{sigma2} holds we proceed as follows. We compute
\[
(\tilde{\mu}^\pm)^2=a\pm ib, \qquad a=X^2-(v/c)^2+1, \qquad b=2Xv/c \,.
\]
Recalling \eqref{roots} we can determine $ \tilde{\mu}^\pm $ and obtain that
\[
\tilde{\mu}^+ \tilde{\mu}^-=\left|\sqrt{\frac{r+a}{2}}+i\sqrt{\frac{r-a}{2}}\right|^2=r=|a+ib|>0\,.
\]
From \eqref{muquadro} it follows that $ \tilde{\mu}^+ \tilde{\mu}^-=1 $, that is \eqref{sigma2}.
Then, let us consider the imaginary roots in \eqref{roots_imag}. Correspondingly we have the points $ (\tau,\eta)=(\pm icY_0\eta,\eta) $ with $ \mathrm{Re}\,\tau=0. $ Comparing the values $ \delta/(c\eta)=\pm Y_0$, where $Y_0>v/c+1,$ with the corresponding cases (i), (v) of Lemma \ref{super_mu} (if $ 1<v/c<\sqrt{2} $) and (i), (v) of Lemma \ref{sub_mu} (if $ v/c<1 $), and using Lemma \ref{lemma_mu} in the sonic case $v=c$, we get
\[
\mathrm{Re}\,(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\,\mathrm{Re}\,(\mu^+\mu^-)<0\,.
\]
It means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=-1 $, that is, \eqref{sigma2} is not satisfied. Therefore we have proved that in the case $ v/c<\sqrt{2} $, the (only) roots of the symbol $\Sigma$ are the points $ (cY_1 |\eta|,\eta) $, for all $\eta\not=0$.
Now we assume $ v/c>\sqrt{2} $. For the imaginary roots \eqref{roots_imag} we can repeat the analysis made before.
Corresponding to the roots $X_{1,2}$ we have the same points $ (\tau,\eta)=(\pm icY_0\eta,\eta) $ with $ \mathrm{Re}\,\tau=0. $ Comparing the value $ \delta/(c\eta)=\pm Y_0$, where $Y_0>v/c+1,$ with the different cases of Lemma \ref{super_mu} we get
\[
\mathrm{Re}\,(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\,\mathrm{Re}\,(\mu^+\mu^-)<0\,.
\]
It means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=-1 $, and \eqref{sigma2} is not satisfied.
Corresponding to the roots $X_{3,4}$ in \eqref{roots_imag2} we have the points $ (\tau,\eta)=(\pm icY_2\eta,\eta) $ with $ \mathrm{Re}\,\tau=0. $ Because $ -(v/c-1)< \pm Y_2<v/c-1$, from Lemma \ref{super_mu} (iii) we deduce
\[
\mathrm{Re}\,(\tilde{\mu}^+ \tilde{\mu}^-)=\eta^{-2}\,\mathrm{Re}\,(\mu^+\mu^-)>0\,.
\]
It means that at such points $ \tilde{\mu}^+ \tilde{\mu}^-=1 $, and \eqref{sigma2} is satisfied. Therefore we have proved that in the case $ v/c>\sqrt{2} $, the (only) roots of the symbol $\Sigma$ are the points $ (\pm icY_2\eta,\eta) $, for all $\eta\not=0$.
This completes the first part of the proof of Lemma \ref{zeri_Sigma}; it remains to prove that the roots corresponding to $ X_{3,4}=\pm iY_2 $ are simple. Let us set $\sigma(X):=\tilde{\mu}^+ \tilde{\mu}^- -1$. From \eqref{sigma1} we have $\Sigma=c^2\eta^2\sigma(X)$. We wish to study $\sigma(X)$ in sufficiently small neighborhoods of points $ (\pm icY_2\eta,\eta) $. From Lemma \ref{lemma_mu} we may assume that $ \tilde{\mu}^\pm $ are different from 0 in such neighborhoods. First of all, from $$ (\tilde{\mu}^\pm)^2=(X\pm iv/c)^2 +1 $$ we obtain
\[
\frac{d\tilde{\mu}^+}{dX}=\frac{1}{\tilde\mu^+}(X+ iv/c), \qquad \frac{d\tilde{\mu}^-}{dX}=\frac{1}{\tilde\mu^-}(X- iv/c)\, .
\]
We prove that
\begin{equation*}
\begin{array}{ll}
\displaystyle \frac{d\sigma}{dX}(X)=\frac{d\tilde{\mu}^+}{dX}\tilde{\mu}^- + \tilde{\mu}^+ \frac{d\tilde{\mu}^-}{dX}=\frac{\tilde\mu^-}{\tilde\mu^+}(X+ iv/c) + \frac{\tilde\mu^+}{\tilde\mu^-}(X- iv/c)
\\
\displaystyle=\frac{1}{\tilde\mu^+\tilde\mu^-}\left\{\left((\tilde\mu^+)^2+(\tilde\mu^-)^2\right)X -i\left((\tilde\mu^+)^2-(\tilde\mu^-)^2\right) v/c\right\}
\\
\displaystyle=\frac{2X}{\tilde\mu^+\tilde\mu^-}\left\{X^2+(v/c)^2+1\right\}
\,.
\end{array}
\end{equation*}
Moreover we have
\begin{equation*}
\sigma(X_{3})=0, \qquad K:=\left\{X^2+(v/c)^2+1\right\}_{|X=X_{3}}>0 \,.
\end{equation*}
Consequently we can write
\[
\sigma(X)=(X-X_3)\, \tilde{H}(X)\,,
\]
where, by continuity, $ \tilde{H}(X)\not=0 $ in a neighborhood of $ X=X_3 $, because $ \tilde{H}(X_3)=\frac{d\sigma}{dX}(X_3)=2X_3K\not=0 $.
Thus we write
\begin{equation*}
\Sigma(\tau,\eta)=c^2\eta^2\sigma(X)=c^2\eta^2\sigma\left(\frac{\tau}{c\eta}\right)=(\tau-X_3c\eta)\,H(\tau,\eta), \qquad H(\tau,\eta):=c\eta\, \tilde{H}\left(\frac{\tau}{c\eta}\right)\,.
\end{equation*}
Since
\[
H(X_3c\eta,\eta)=c\eta\, \tilde{H}\left(X_3\right)\not=0 \qquad\forall \eta\not=0\,,
\]
by continuity $ H(\tau,\eta)\not=0 $ in a small neighborhood of $ (X_3c\eta,\eta) $. It is easily verified that $H$ is a homogeneous function of degree 1. By the same argument we prove a similar result for $ X=X_4. $ The proof of Lemma \ref{zeri_Sigma} is complete.
\end{proof}
\section{Proof of Theorem \ref{teoexist}}
\begin{lemma}
Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}, \gamma\ge1 $. Given any $f\in H^{s+2}_\gamma({\mathbb R}^2)$, let $g$ be the function defined by
\begin{equation}\label{def_g}
\Sigma(\tau,\eta)\widehat f(\tau,\eta)=\widehat g(\tau,\eta) \qquad (\tau,\eta)\in\Xi\, ,
\end{equation}
where $ \widehat{g} $ is the Fourier transform of $\widetilde{g}:= e^{-\gamma t}g. $ Then $g\in H^{s}_\gamma({\mathbb R}^2)$ with
\begin{equation*}
\|g\|_{H^{s}_\gamma({\mathbb R}^2)}\le C \|f\|_{H^{s+2}_\gamma({\mathbb R}^2)} \, ,
\end{equation*}
for a suitable positive constant $C$ independent of $ \gamma $.
\end{lemma}
\begin{proof}
The proof follows by observing that $ \Sigma(\tau,\eta)$ is a homogeneous function of degree 2 on $\Xi$, so there exists a positive constant $C$ such that
\begin{equation}\label{stimaSigma}
|\Sigma(\tau,\eta)|\le C(|\tau|^2+\eta^2)=C\Lambda^2(\tau,\eta) \qquad \forall (\tau,\eta)\in\Xi\,.
\end{equation}
Then
\begin{equation*}
\|g\|_{H^{s}_\gamma({\mathbb R}^2)}=\frac{1}{2\pi}\|\Lambda^s\widehat{g}\|=\frac{1}{2\pi}\|\Lambda^s\Sigma\widehat{f}\|
\le C\|\Lambda^{s+2}\widehat{f}\|=C \|f\|_{H^{s+2}_\gamma({\mathbb R}^2)} \, .
\end{equation*}
\end{proof}
In the following theorem we prove the a priori estimate of the solution $f$ to equation \eqref{def_g}, for a given $g$.
\begin{theorem}\label{teofg}
Assume $\frac{v}{c}>\sqrt{2}$. Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}$. Given any $f\in H^{s+2}_\gamma({\mathbb R}^2)$, let $g\in H^s_\gamma({\mathbb R}^2)$ be the function defined by \eqref{def_g}.
Then there exists a positive constant $C$ such that for all $\gamma\ge1$ the following estimate holds
\begin{equation}\label{stimafg}
\gamma \|f\|_{H^{s+1}_\gamma({\mathbb R}^2)} \le C \|g\|_{H^s_\gamma({\mathbb R}^2)} \, .
\end{equation}
\end{theorem}
\begin{proof}
The study of $\Sigma$ in the proof of Lemma \ref{zeri_Sigma} implies that for all $(\tau_0,\eta_0)\in\Xi_1$, there exists a neighborhood $\mathcal V$ of $(\tau_0,\eta_0)$ with suitable properties, as explained in the following. Because $\Xi_1$ is a $C^\infty$ compact manifold, there exists a finite covering $(\mathcal V_1,\,\mathrm{d}ots,\mathcal V_I)$ of $\Xi_1$ by such neighborhoods, and a smooth partition of unity $(\chi_1,\,\mathrm{d}ots,\chi_I)$ associated with this covering.
The $\chi_i$'s are nonnegative $C^\infty$ functions with
\[
\supp\chi_i\subset\mathcal V_i, \qquad \sum_{i=1}^I\chi_i^2=1.
\]
We consider two different cases.
{\it In the first case}
$\mathcal V_i$ is a neighborhood of an {\it elliptic} point, that is a point $ (\tau_0,\eta_0) $ where $ \Sigma(\tau_0,\eta_0)\not=0. $ By taking $\mathcal V_i$ sufficiently small we may assume that $ \Sigma(\tau,\eta)\not=0 $ in the whole neighborhood $\mathcal V_i$, and there exists a positive constant $C$ such that $$ |\Sigma(\tau,\eta)|\ge C \qquad \forall (\tau,\eta)\in\mathcal V_i\,.
$$
Let us extend the associated function $\chi_i$ to the whole set of frequencies $\Xi$, as a homogeneous mapping of degree 0 with respect to $(\tau,\eta)$.
$ \Sigma(\tau,\eta)$ is a homogeneous function of degree 2 on $\Xi$, so we have
\begin{equation}\label{stima_ellip0}
|\Sigma(\tau,\eta)|\ge C(|\tau|^2+\eta^2) \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,.
\end{equation}
We deduce that
\begin{equation}\label{stima_ellip}
C(|\tau|^2+\eta^2)|\chi_i\widehat f(\tau,\eta)|\le
|\Sigma(\tau,\eta)\chi_i\widehat f(\tau,\eta)|=|\chi_i\widehat g(\tau,\eta)| \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,.
\end{equation}
{\it In the second case}\label{first}
$\mathcal V_i$ is a neighborhood of a {\it root} of the symbol $ \Sigma $, i.e. a point $ (\tau_0,\eta_0) $ where $ \Sigma(\tau_0,\eta_0)=0. $ For instance we may assume that $ (\tau_0,\eta_0)=(icY_2\eta_0,\eta_0), \eta_0\not=0$, see Lemma \ref{zeri_Sigma}; a similar argument applies for the other family of roots $(\tau,\eta)=(-icY_2\eta,\eta)$.
According to Lemma \ref{zeri_Sigma} we may assume that on $\mathcal V_i$ it holds
\[ \Sigma(\tau,\eta)=(\tau-icY_2\eta)H(\tau,\eta), \quad H(\tau,\eta)\not=0 \quad\forall (\tau,\eta)\in\mathcal V_i.
\]
We extend the associated function $\chi_i$ to the whole set of frequencies $\Xi$, as a homogeneous mapping of degree 0 with respect to $(\tau,\eta)$.
Because $ H(\tau,\eta)\not=0 $ on $\mathcal V_i$, there exists a positive constant $C$ such that $$ |H(\tau,\eta)|\ge C \qquad \forall (\tau,\eta)\in\mathcal V_i\,.
$$
$H(\tau,\eta)$ is a homogeneous function of degree 1 on $\Xi$, so we have
\[
|H(\tau,\eta)|\ge C(|\tau|^2+\eta^2)^{1/2} \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,.
\]
Then we obtain
\begin{equation}\label{stima_nellip0}
|\Sigma(\tau,\eta)|=|(\tau-icY_2\eta)H(\tau,\eta)|\ge C\gamma(|\tau|^2+\eta^2)^{1/2} \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,,
\end{equation}
and we deduce that
\begin{equation}\label{stima_nellip}
C\gamma(|\tau|^2+\eta^2)^{1/2}|\chi_i\widehat f(\tau,\eta)|\le
|\chi_i\widehat g(\tau,\eta)| \qquad \forall (\tau,\eta)\in\mathcal V_i\cdot{\mathbb R}^+\,.
\end{equation}
In conclusion, adding up the square of \eqref{stima_ellip} and \eqref{stima_nellip}, and using that the $ \chi_i $'s form a partition of unity gives
\begin{equation}\label{stimapointwise}
C\gamma^2(|\tau|^2+\eta^2)|\widehat f(\tau,\eta)|^2\le
|\widehat g(\tau,\eta)|^2 \qquad \forall (\tau,\eta)\in\Xi\,.
\end{equation}
Multiplying the previous inequality by $ (|\tau|^2+\eta^2)^s $, integrating with respect to $ (\delta,\eta)\in{\mathbb R}^2 $ and using Plancherel's theorem finally yields the estimate
\begin{equation*}
\gamma^2\|\widetilde{f}\|_{s+1,\gamma}^2\le C\|\widetilde{g}\|_{s,\gamma}^2\, ,
\end{equation*}
for a suitable constant $C$, that is \eqref{stimafg}.
\end{proof}
In the following theorem we prove the existence of the solution $f$ to equation \eqref{def_g}.
\begin{theorem}\label{teoexistfg}
Assume $\frac{v}{c}>\sqrt{2}$. Let $\Sigma$ be the symbol defined by \eqref{def_Sigma} and $s\in{\mathbb R}, \gamma\ge1$. Given any $g\in H^s_\gamma({\mathbb R}^2)$ there exists a unique solution $f\in H^{s+1}_\gamma({\mathbb R}^2)$ of equation \eqref{def_g}, satisfying the estimate
\eqref{stimafg}.
\end{theorem}
\begin{proof}
We use a duality argument. Let us denote by $ \Sigma^* $ the symbol of the adjoint of the operator with symbol $ \Sigma $, such that
\begin{align*}
\langle \Sigma\widehat{f},\widehat{h}\rangle=\langle \widehat{f},\Sigma^*\widehat{h}\rangle
\end{align*}
for $ f,h $ sufficiently smooth. From the definition \eqref{def_Sigma} we easily deduce that
\begin{equation}\label{equSS*}
\Sigma^\ast(\tau,\eta)=\Sigma(\bar{\tau},\eta)\,.
\end{equation}
Thus, from Theorem \ref{teofg}, see in particular \eqref{stima_ellip0}, \eqref{stima_nellip0}, \eqref{stimapointwise}, we obtain the estimate
\begin{equation*}
\gamma^2(|\tau|^2+\eta^2)|\widehat h(\tau,\eta)|^2\le
C|\Sigma^\ast(\tau,\eta)\widehat h(\tau,\eta)|^2 \,,
\end{equation*}
which gives by integration in $ (\delta,\eta) $
\begin{equation}\label{stimaSigma*}
\gamma\|\Lambda\widehat h \|\le
C\|\Sigma^\ast\widehat h\| \,.
\end{equation}
We compute
\begin{align}\label{duality}
\left|\langle \widehat{g},\widehat{h}\rangle\right|=\left| \langle \Lambda^s\widehat{g},\Lambda^{-s}\widehat{h}\rangle\right|
\le\|\Lambda^s\widehat{g}\| \, \|\Lambda^{-s}\widehat{h}\|\,.
\end{align}
From \eqref{stimaSigma}, \eqref{equSS*}, \eqref{stimaSigma*} (with $\Lambda^{-s-1}\widehat{h}$ instead of $\widehat{h}$) we obtain
\begin{equation}\label{stimaL-s}
\|\Lambda^{-s}\widehat{h}\|=\|\Lambda\Lambda^{-s-1}\widehat{h}\|\le \frac{C}{\gamma}\|\Sigma^\ast\Lambda^{-s-1}\widehat h\| \le \frac{C}{\gamma}\|\Lambda^{-s+1}\widehat h\|= \frac{C}{\gamma}\|h\|_{H^{-s+1}_\gamma({\mathbb R}^2)} \, .
\end{equation}
Let us denote
\[
\mathcal{R}:=\left\{ \Sigma^\ast\Lambda^{-s-1}\widehat h \,\, | \,\, h\in H^{-s+1}_\gamma({\mathbb R}^2) \right\} \,.
\]
From \eqref{stimaL-s} it is clear that $ \mathcal{R}$ is a subspace of $ L^2({\mathbb R}^2) $; moreover, the map $ \Sigma^\ast\Lambda^{-s-1}\widehat h \mapsto \Lambda^{-s}\widehat{h} $ is well-defined and continuous from $ \mathcal{R}$ into $ L^2({\mathbb R}^2) $. Given $ g\in H^s_\gamma({\mathbb R}^2) $, we define a linear form $ \ell $ on $ \mathcal{R} $ by
\[
\ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)= \langle \widehat{g},\widehat{h} \rangle \,.
\]
From \eqref{duality}, \eqref{stimaL-s} we obtain
\[
\left| \ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)\right| \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\| \, \|\Sigma^\ast\Lambda^{-s-1}\widehat h\| \,.
\]
Thanks to the Hahn-Banach and Riesz theorems, there exists a unique $ w\in L^2({\mathbb R}^2) $ such that
\[
\langle w,\Sigma^\ast\Lambda^{-s-1}\widehat h \rangle = \ell(\Sigma^\ast\Lambda^{-s-1}\widehat h)\,, \qquad
\|w\|= \|\ell\|_{\mathcal L(\mathcal{R})} \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\| \,.
\]
Defining $ \widehat f:= \Lambda^{-s-1}w$ we get $ f\in H^{s+1}_\gamma({\mathbb R}^2) $ such that
\[
\langle \Sigma\widehat f, \widehat h \rangle = \langle \widehat f, \Sigma^\ast\widehat h \rangle= \langle \widehat{g},\widehat{h} \rangle \qquad \forall h\in H^{-s+1}_\gamma({\mathbb R}^2)\,,
\]
which shows that $ f $ is a solution of equation \eqref{def_g}. Moreover
\[
\|f\|_{H^{s+1}_\gamma({\mathbb R}^2)}=\frac{1}{2\pi}\|\Lambda^{s+1}\widehat{f}\|=\frac{1}{2\pi}\|w\| \le \frac{C}{\gamma}\|\Lambda^s\widehat{g}\|=\frac{C}{\gamma}\|g\|_{H^{s}_\gamma({\mathbb R}^2)}\,,
\]
that is \eqref{stimafg}. The uniqueness of the solution follows from the linearity of the problem and the a priori estimate.
\end{proof}
Now we can conclude the proof of Theorem \ref{teoexist}.
\begin{proof}[Proof of Theorem \ref{teoexist}]
We apply the result of Theorem \ref{teoexistfg} for \[ \widehat g(\tau,\eta)=-\frac{\mu^+\mu^-}{\mu^++\mu^-}\,M \, , \]
with $M$ defined in \eqref{def_M}. We write
\begin{equation*}
\widehat g=\widehat g_1-\widehat g_2,
\end{equation*}
where
\begin{equation*}
\widehat g_1=-\frac{\mu^-}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, \,\mathrm{d} y \,, \qquad
\widehat g_2=-\frac{\mu^+}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^- y}\widehat{\mathcal F}^- (\cdot,- y)\, \,\mathrm{d} y \, .
\end{equation*}
By the Plancherel theorem and Cauchy-Schwarz inequality we have
\begin{equation}\label{stimag1}
\begin{array}{ll}
\displaystyle \|g_1\|_{H^s_\gamma({\mathbb R}^2)}^2=\frac{1}{(2\pi)^2}\iint_{{\mathbb R}^2} \Lambda^{2s}\left| \frac{\mu^-}{\mu^++\mu^-}\int_{0}^{+\infty}e^{-\mu^+ y}\widehat{\mathcal F}^+ (\cdot, y)\, \,\mathrm{d} y \right|^2\,\mathrm{d}\delta \,\mathrm{d}\eta\\
\displaystyle \le \frac{1}{(2\pi)^2}\iint_{{\mathbb R}^2} \Lambda^{2s}\left| \frac{\mu^-}{\mu^++\mu^-}\right|^2\frac{1}{2\,\mathrm{Re}\,\mu^+}
\left(\int_{0}^{+\infty}|\widehat{\mathcal F}^+ (\cdot, y)|^2\, \,\mathrm{d} y\right) \,\mathrm{d}\delta \,\mathrm{d}\eta \, .
\end{array}
\end{equation}
Then we use the fact that $\frac{\mu^-}{\mu^++\mu^-}$ is a homogeneous function of degree zero in $\Xi$ so that
\[ \left| \frac{\mu^-}{\mu^++\mu^-}\right|^2\le C \qquad \forall (\tau,\eta)\in\Xi\, ,
\]
for a suitable constant $C>0$. Moreover, we have the estimate from below
\begin{equation*}\mathrm{Re}\,\mu^+\ge \frac{1}{\sqrt{2}\,c}\,\gamma\, ,
\end{equation*}
see Lemma \ref{stima_Re_mu}. Thus we obtain from \eqref{stimag1}
\begin{equation*}
\begin{array}{ll}
\displaystyle \|g_1\|_{H^s_\gamma({\mathbb R}^2)}^2
\le \frac{C}{\gamma}\iint_{{\mathbb R}^2} \Lambda^{2s}
\left(\int_{0}^{+\infty}|\widehat{\mathcal F}^+ (\cdot, y)|^2\, \,\mathrm{d} y\right) \,\mathrm{d}\delta \,\mathrm{d}\eta =\frac{C}{\gamma}\|\mathcal F^+\|_{L^2({\mathbb R}^+;H^s_\gamma({\mathbb R}^2))}^2 \, .
\end{array}
\end{equation*}
The proof of the estimate of $g_2$ is similar. This completes the proof of Theorem \ref{teoexist}.
\end{proof}
\end{document} |
\begin{document}
\title{Unbiased estimates for linear regression\\
via volume sampling}
\begin{abstract}
Given a full rank matrix $\mathbf X$ with more columns than rows,
consider the task of estimating the pseudo inverse $\mathbf X^+$ based
on the pseudo inverse of a sampled subset of columns
(of size at least the number of rows). We show that this is possible
if the subset of columns is chosen proportional to the squared volume
spanned by the rows of the chosen submatrix (ie, volume sampling).
The resulting estimator is unbiased and
surprisingly the covariance of the estimator also has a
closed form: It equals a specific factor times
$\mathbf X^+\mathbf X^{+\top}$.
Pseudo inverse plays an important part in solving the linear least
squares problem, where we try to predict a label for each column of
$\mathbf X$. We assume labels are expensive and we are
only given the labels for the small subset of columns we sample
from $\mathbf X$. Using our methods we show that
the weight vector of the solution for the sub problem
is an unbiased estimator of the
optimal solution for the whole problem based on all
column labels.
We believe that these new formulas establish a fundamental
connection between linear least squares and volume sampling.
We use our methods to obtain an algorithm for volume
sampling that is faster than state-of-the-art and for obtaining
bounds for the total loss of the estimated least-squares solution on
all labeled columns.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
\begin{wrapfigure}{r}{0.2\textwidth}
\vspace{-.7cm}
\mbox{\footnotesize $\mathbf X$}
\begin{tikzpicture}[scale=0.4,baseline=(current bounding box.center)]
\draw [fill=brown!30] (0,0) rectangle (4,2);
\draw [color=black] (2.5,0) -- (2.5,2);
\draw (2,1) node {\mbox{\footnotesize $\mathbf x_i$}};
\end{tikzpicture}
\vspace{.05cm}
\mbox{\footnotesize $\mathbf I_S$}
\begin{tikzpicture}[scale=0.4,baseline=(current bounding box.center)]
\draw (0,0) rectangle (4,3.9);
\draw [color=blue] (.45,3.5) -- (2.7,1.3);
\draw (1.75,3) node {\mbox{\footnotesize $\Blue{S}$}};
\end{tikzpicture}
\vspace{.05cm}
\mbox{\footnotesize $\mathbf X\mathbf I_S$}
\begin{tikzpicture}[scale=0.4,baseline=(current bounding box.center)]
\draw (0,0) rectangle (4,2);
\draw[fill=blue!30] (.5,0) rectangle (2.7,2);
\draw (1.55,.5) node {\mbox{\footnotesize $\Blue{\mathbf X_S}$}};
\end{tikzpicture}
\vspace{.03cm}
\mbox{\footnotesize $\mathbf X^{+\top}$}
\begin{tikzpicture}[scale=0.4,baseline=(current bounding box.center)]
\draw [fill=brown!30] (0,0) rectangle (4,2);
\end{tikzpicture}
\vspace{.05cm}
\mbox{\footnotesize$\!(\!\mathbf X\mathbf I_S\!)^{\!+\!\top}\!$}
\begin{tikzpicture}[scale=0.4,baseline=(current bounding box.center)]
\draw (0,0) rectangle (4,2);
\draw[fill=blue!30] (.5,0) rectangle (2.7,2);
\draw (1.55,.5) node {\mbox{\footnotesize $\;\Blue{(\!\mathbf X_S\!)^{\!+\!\top}}$}};
\end{tikzpicture}
\vspace{-.1cm}
\caption{Set $S$ may not be consecutive.}
\label{f:shapes}
\vspace{-1.3cm}
\end{wrapfigure}
Let $\mathbf X$ be a wide full rank matrix
with $d$ rows and $n$ columns where $n \ge d$.
Our goal is to estimate the pseudo inverse $\mathbf X^+$ of $\mathbf X$
based on the pseudo inverse of a subset of columns. More precisely,
we sample a subset $S\subseteq \{1..n\}$ of $s$ column indices
(where $s\geq d$). We let $\mathbf X_S$ be the sub-matrix of the $s$
columns indexed by $S$ (See Figure \ref{f:shapes}).
Consider a version of $\mathbf X$ in which
all but the columns of $S$ are zero.
This matrix equals $\mathbf X\mathbf I_S$ where $\mathbf I_S$ is an $n$-dimensional diagonal
matrix with $(\mathbf I_S)_{ii}=1$ if $i\in S$ and 0 otherwise.
We assume that the set of $s$ column indices of $\mathbf X$ is selected
proportional to the squared volume spanned by the rows of submatrix
$\mathbf X_S$, i.e. proportional to $\mathrm{det}(\mathbf X_S\mathbf X_S^\top)$
and prove a number of new surprising expectation formulas for this type of volume
sampling, such as
$$
\mathbb E[(\mathbf X\mathbf I_S)^+]=\mathbf X^+ \quad \text{and}\quad
\mathbb E[\underbrace{(\mathbf X_S\mathbf X_S^\top)^{-1}}_
{(\mathbf X\mathbf I_S)^{+\top}(\mathbf X\mathbf I_S)^+} ]= \frac{n-d+1}{s-d+1}\,
\mathbf X^{+\top}\mathbf X^+.$$
Note that $(\mathbf X\mathbf I_S)^+$ has the $n\times d$ shape of $\mathbf X^+$
where the $s$ rows indexed by $S$ contain $(\mathbf X_S)^+$ and the
remaining $n-s$ rows are zero.
The expectation of this matrix is $\mathbf X^+$ even though
$(\mathbf X_S)^+$ is clearly not a sub-matrix of $\mathbf X^+$.
In addition to the
expectation formulas, our new techniques lead to
an efficient volume sampling procedure which beats the state-of-the-art
by a factor of $n^2$ in time complexity.
Volume sampling is useful in numerous applications, from clustering to
matrix approximation, but we focus on the task of solving
linear least squares problems: For an $n-$dimensional
label vector $\mathbf y$, let $\mathbf w^*=\argmin_\mathbf w ||\mathbf X^\top\mathbf w-\mathbf y||^2=\mathbf X^{+\top}\mathbf y$.
Assume the entire design matrix $\mathbf X$ is
known to the learner but labels are expensive
and you want to observe as few of them as possible.
Let $\of{\mathbf w^*}S=(\mathbf X_S)^+\mathbf y_S$ be the solution to the sub-problem based
on labels $\mathbf y_S$.
What is the smallest number of labels $s$ necessary, for
which there is a sampling procedure on sets $S$ of size
$s$ st the expected loss of $\of{\mathbf w^*}S$ is at most a
constant factor larger than the loss of $\mathbf w^*$ that uses all $n$
labels (where the constant is independent of $n$)?
More precisely, using the short hand $L(\mathbf w)=||\mathbf X^\top\mathbf w-\mathbf y||^2$
for the loss on all $n$ labels, what is the smallest size $s$ such that
$\mathbb E[L(\of{\mathbf w^*}S)]\le \text{const}\, L(\mathbf w^*)$. This question is a version of
the ``minimal coresets'' open problem posed in \cite{coresets-regression}.
The size has to be at least $d$ and
one can show that randomization is necessary
in that any deterministic algorithm for choosing a set of
$d$ columns can suffer loss larger by a factor of $n$.
Also any iid sampling of $S$ (such as the commonly used leverage scores
\cite{fast-leverage-scores})
requires at least $\Omega(d \log d)$ examples to achieve a finite factor.
In this paper however we show that with a size $d$ volume
sample,
$\mathbb E[L(\of{\mathbf w^*}S)]=(d+1)L(\mathbf w^*)$ if $\mathbf X$ is in general position.
Note again that we have equality and not just an upper
bound. Also we can show that the multiplicative factor $d+1$ is optimal.
We further improve this factor to
$1+\epsilon$ via repeated volume sampling. Moreover, our
expectation formulas imply that when $S$ is size $s\ge d$ volume
sampled, then $\of{\mathbf w^*}S$ is an unbiased estimator for
$\mathbf w^*$, ie $\mathbb E[\of{\mathbf w^*}S]=\mathbf w^*$.
\vspace{-.1cm}
\section{Related Work}
\label{sec:related-work}
Volume sampling is an extension of a determinantal point
process \cite{dpp}, which has been given a lot of attention in the literature
with many applications to machine learning, including
recommendation systems \cite{dpp-shopping} and clustering
\cite{dpp-clustering}. Many exact and approximate methods for efficiently
generating samples from this distribution have been proposed
\cite{efficient-volume-sampling,k-dpp}, making it a useful tool in
the design of randomized algorithms. Most of those methods focus on
sampling $s\leq d$ elements. In this paper, we study volume sampling
sets of size $s\geq d$, which has been proposed in
\cite{avron-boutsidis13} and motivated with applications in graph
theory, linear regression, matrix approximation and more. The only
known polynomial time algorithm for size $s>d$ volume sampling was
recently proposed in \cite{dual-volume-sampling} with time complexity
$O(n^4 s)$. We offer a new algorithm with runtime $O((n-s+d)nd)$,
which is faster by a factor of at least $n^2$.
The problem of selecting a subset of input vectors for solving a linear
regression task has been extensively studied in statistics literature
under the terms {\em optimal design} \cite{optimal-design-book} and
{\em pool-based active learning}
\cite{pool-based-active-learning-regression}. Various
criteria for subset selection have been proposed, like A-optimality
and D-optimality. For example, A-optimality seeks to minimize
$\mathrm{tr}((\mathbf X_S\mathbf X_S^\top)^{-1})$, which is combinatorially hard to optimize
exactly.
We show that for size $s$ volume sampling (for $s\geq d$),
$\mathbb E[(\mathbf X_S\mathbf X_S^\top)^{-1}] = \frac{n-d+1}{s-d+1}\,
\mathbf X^{+\top}\mathbf X^+$ which provides an approximate
randomized solution for this task.
A related task has been explored in the field of computational
geometry, where efficient algorithms are sought for approximately
solving linear regression and matrix approximation
\cite{randomized-matrix-algorithms,
regression-input-sparsity-time,coresets-regression}. Here,
subsampling appears as one of the key
techniques for obtaining multiplicative bounds on the loss of the
approximate solution. In this line of work, volume sampling
size $s\leq d$ has been used by
\cite{pca-volume-sampling,more-efficient-volume-sampling} for
matrix approximation. Another common sampling technique is based on
statistical leverage scores \cite{fast-leverage-scores}, which have
been effectively used for the task of linear regression. However, this
approach is based on iid sampling, and requires
sampling at least $\Omega(d\log d)$ elements to achieve multiplicative
loss bounds. On the other hand, the input vectors obtained from volume
sampling are selected jointly, which makes the chosen subset more
informative, and we show that just $d$ volume sampled elements are
sufficient to achieve a multiplicative bound.
\ifisarxiv
\vspace{-2mm}
\fi
\section{Unbiased estimators}
\label{sec:pseudo-inverse}
\ifisarxiv
\vspace{-1mm}
\fi
Let $n$ be an integer dimension.
For each subset $S\subseteq \{1..n\}$ of size $s$ we are given a
matrix formula $\fof{\mathbf F}S$.
Our goal is to sample set $S$ of size $s$ using some
sampling process and then develop concise expressions for
$\mathbb E_{S:|S|=s}[\fof{\mathbf F}S]$. Examples of formula
classes $\fof{\mathbf F}S$ will be given below.
We represent the sampling by a directed acyclic graph (dag), with a
single root node corresponding to the full set $\{1..n\}$. Starting
from the root, we proceed along the edges of the graph,
iteratively removing elements from the set $S$.
Concretely, consider a dag with levels $s = n, n-1, ..., d$.
Level $s$ contains ${n \choose s}$ nodes
for sets $S\subseteq \{1..n\}$ of size $s$.
Every node $S$ at level $s>d$ has $s$ directed edges
to the nodes $S-\{i\}$ at the next lower level.
These edges are labeled with a
conditional probability vector $P(S_{-i}|S)$.
The probability of a (directed) path is the product of the
probabilities along its edges.
The outflow of probability from each node on
all but the bottom level is 1.
We let the probability
$P(S)$ of node $S$ be the probability of all paths
from the top node $\{1..n\}$ to $S$ and set the
probability $P(\{1..n\})$ of the top node to 1.
We associate a formula $\fof{\mathbf F}S$ with each set node $S$ in the
dag. The following key equality lets us compute
expectations.
\begin{lemma}
\label{l:key}
If for all $S\subseteq \{1..n\}$ of size greater than $d$ we have
$$\Blue{\fof{\mathbf F}{S}=\sum_{i\in S} P({S_{-i}}|S)\fof{\mathbf F}{S_{-i}}},$$
then for any $s\in\{d..n\}$: $\;\;\mathbb E_{S:|S|=s} [\fof{\mathbf F}{S}]
=\sum_{S:|S|=s} P(S)\fof{\mathbf F}{S} = \fof{\mathbf F}{\{1..n\}}.$
\end{lemma}
\noindent{\bf Proof}$\;$
Suffices to show that expectations at successive layers are equal:
$$
\sum_{S:|S|=s} \!\!P(S) \,\Blue{\fof{\mathbf F}{S}}
= \!\!\sum_{S:|S|=s} \!\!P(S) \Blue{\sum_{i\in S} P(S_{-i}|S) \,\fof{\mathbf F}{S_{-i}}}
= \!\!\!\!\sum_{T:|T|=s-1} \underbrace{\sum_{j\notin T} P(T_{+j}) P(T|T_{+j})}_{P(T)} \fof{\mathbf F}{T}.
\ \BlackBox
$$
\ifisarxiv
\vspace{-1cm}
\else
\vspace{-.5cm}
\fi
\subsection{Volume sampling}
Given a wide full-rank matrix $\mathbf X\in\mathbb R^{d\times n}$
and a sample size $s\in \{d..n\}$,
volume sampling chooses subset $S\subseteq\{1..n\}$
of size $s$ with probability proportional to volume spanned by
the rows of submatrix $\mathbf X_S$, ie proportional to
$\mathrm{det}(\mathbf X_S\mathbf X_S^\top)$. The following corollary uses
the above dag setup to compute the normalization constant for this
distribution. When $s=d$, the corollary provides a novel minimalist
proof for the Cauchy-Binet formula: $\sum_{S:|S|=s}\mathrm{det}(\mathbf X_S\mathbf X_S^\top)=\mathrm{det}(\mathbf X\mathbf X^\top)$.
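Purely as an added numerical illustration (not part of the original text), the Cauchy-Binet identity stated above can be checked directly on a small random instance:
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 7))                  # d = 3, n = 7
lhs = sum(np.linalg.det(X[:, list(S)] @ X[:, list(S)].T)
          for S in itertools.combinations(range(7), 3))
assert np.isclose(lhs, np.linalg.det(X @ X.T))   # Cauchy-Binet
\end{verbatim}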
\begin{corollary}
\label{c:vol}
Let $\mathbf X\in \mathbb R^{d\times n}$
and $S\subseteq \{1..n\}$ of size $n\ge s \ge d$ st
$\mathrm{det}(\mathbf X_S\mathbf X_S^\top)>0$. Then for any $i\in S$, define
\begin{align*}
\!\!
P({S_{-i}}|S)\!:=\!\frac{\mathrm{det}(\mathbf X_{S_{-i}}\mathbf X_{S_{-i}}^\top)}
{(s\!-\!d)\mathrm{det}(\mathbf X_S\mathbf X_S^\top)}
\!=\! \frac{1\!-\!\mathbf x_i^\top (\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i} {s\!-\!d},
\tag{\bf reverse iterative volume sampling}
\end{align*}
where $\mathbf x_i$ is the $i$th column of $\mathbf X$
and $\mathbf X_S$ is the sub matrix of columns indexed by $S$.
Then $P({S_{-i}}|S)$ is a proper probability distribution
and thus $\sum_{S:|S|=s} P(S)=1$ for all $s\in\{d..n\}$.
Furthermore
\begin{align*}
P(S)= \frac{\mathrm{det}(\mathbf X_S\mathbf X_S^\top)} {{n-d \choose s-d}\mathrm{det}(\mathbf X\mathbf X^\top)}.
\tag{\bf volume sampling}
\end{align*}
\end{corollary}
\noindent
{\bf Proof}$\;$
For any $S$, st $\mathrm{det}(\mathbf X_S\mathbf X_S^\top)>0$,
it is easy to see that $P({S_{-i}}|S)$ forms a probability
vector:
$$\sum_{i\in S} P({S_{-i}}|S)=\sum_{i\in S}
\frac{1-\mathrm{tr}((\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i\mathbf x_i^\top)}{s-d}
= \frac{s-\mathrm{tr}((\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf X_S\mathbf X_S^\top)}{s-d}
=\frac{s-d}{s-d}=1.$$
It remains to show the formula for the probability $P(S)$ of all paths
ending at $\fof{\mathbf F}S$.
We first consider the top node, ie $\{1..n\}$. In this
case both the path definition and the formula are 1.
For all but the top node,
the probability $P(S)$ equals the
total inflow of probability into that node from the
previous level, ie
\iffalse
\begin{wrapfigure}{r}{0.30\textwidth}
\hspace{.5cm}
\begin{tikzpicture}[font=\footnotesize,scale=0.6,pin distance=1.6mm]
\begin{axis}[hide axis, xmin=-1.35,xmax=1.35,ymin=-2.7,ymax = 2.2]
\addplot[mark=none, ultra thick, green] coordinates {(-1,1) (0,-2)};
\addplot[mark=none, ultra thick, green] coordinates {(0,1) (0,-2)};
\addplot[mark=none, ultra thick, green] coordinates {(1,1) (0,-2)};
\addplot[mark=*] coordinates {(-1,1)} {};
\addplot[mark=*] coordinates {(0,1)} node[pin=90:{${S_{+i}}$}]{};
\addplot[mark=*] coordinates {(1,1)} {};
\addplot[mark=none] coordinates {(.27,-0.6)} node[pin=90:{$P(S|{S_{+i}})$}]{};
\addplot[mark=*] coordinates {(0,-2)} node[pin=-90:{$S$}]{};
\end{axis}
\end{tikzpicture}
\vspace{-5cm}
\end{wrapfigure}
\fi
\begin{align*}
P(S)=\sum_{i\notin S} P(S|{S_{+i}}) \; P({S_{+i}})
&=\sum_{i\notin S}
\frac{\mathrm{det}(\mathbf X_S\mathbf X_S^\top)}
{(s+1-d)\cancel{\mathrm{det}(\mathbf X_{S_{+i}}\mathbf X_{S_{+i}}^\top)}}
\frac{\cancel{\mathrm{det}(\mathbf X_{S_{+i}}\mathbf X_{S_{+i}}^\top)}}
{{n-d\choose s+1-d}\mathrm{det}(\mathbf X\mathbf X^\top)}
\\&=
\frac{(n-s) \mathrm{det}(\mathbf X_S\mathbf X_S^\top)} {(s+1-d) {n-d \choose
s+1-d} \mathrm{det}(\mathbf X\mathbf X^\top)}
= \frac{\mathrm{det}(\mathbf X_S\mathbf X_S^\top)} {{n-d \choose s-d} \mathrm{det}(\mathbf X\mathbf X^\top)}.
\quad\quad\BlackBox
\end{align*}
Note that all paths from $S$ to a subset $T$ (of size $\ge d$)
have the same probability because the ratios of
determinants cancel along paths.
It is easy to verify
that this probability is
$\frac{\mathrm{det}(\mathbf X_T\mathbf X_T^\top)}
{(|S|-|T|)!\, {|S|-d \choose |T|-d}
\mathrm{det}(\mathbf X_S\mathbf X_S^\top)}.$
Note that $\frac{0}{0}$ ratios are avoided because paths
with such ratios always lead to sets of probability
0.
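For illustration only, the following Python sketch follows Corollary \ref{c:vol} literally, recomputing $(\mathbf X_S\mathbf X_S^\top)^{-1}$ at every removal step; it is therefore slower than the $O((n-s+d)nd)$ algorithm mentioned in Section \ref{sec:related-work} and is not part of the paper's algorithmic contribution.
\begin{verbatim}
import numpy as np

def reverse_iterative_volume_sample(X, s, rng=None):
    """Sample S with P(S) proportional to det(X_S X_S^T), |S| = s >= d,
    by removing one column at a time; naive variant that re-inverts
    X_S X_S^T at every step (see Corollary c:vol for P(S-{i}|S))."""
    rng = np.random.default_rng() if rng is None else rng
    d, n = X.shape
    S = list(range(n))
    while len(S) > s:
        XS = X[:, S]
        A_inv = np.linalg.inv(XS @ XS.T)
        # leverage-type scores x_i^T (X_S X_S^T)^{-1} x_i for i in S
        lev = np.einsum('ji,jk,ki->i', XS, A_inv, XS)
        p = (1.0 - lev) / (len(S) - d)      # P(S - {i} | S)
        p = np.clip(p, 0.0, None)
        p = p / p.sum()                     # guard against round-off
        S.pop(rng.choice(len(S), p=p))
    return S
\end{verbatim}
A faster implementation would instead maintain $(\mathbf X_S\mathbf X_S^\top)^{-1}$ across steps via rank-one downdates.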
\subsection{Expectation formulas for volume sampling}
All expectations in the remainder of the paper are wrt
volume sampling. We use the short hand $\mathbb E[\mathbf fof{\mathbf F}S]$ for
expectation with volume sampling where the size of the
sampled set is fixed to $s$.
The expectation formulas for two choices of $\fof{\mathbf F}S$ are
proven in the next two theorems. By Lemma \ref{l:key}
it suffices to show $\fof{\mathbf F}S=\sum_{i\in S} P({S_{-i}}|S)\fof{\mathbf F}{S_{-i}}$
for volume sampling.
We introduce a bit more notation first.
Recall that $\mathbf X_S$ is the sub matrix of
columns indexed by $S\subseteq\{1..n\}$
(See Figure \ref{f:shapes}).
Consider a version of $\mathbf X$ in which
all but the columns of $S$ are zero.
This matrix equals $\mathbf X\mathbf I_S$ where $\mathbf I_S$ is an
$n$-dimensional diagonal
matrix with $(\mathbf I_S)_{ii}=1$ if $i\in S$ and 0 otherwise.
\begin{theorem}\label{t:einv}
Let $\mathbf X\in\mathbb R^{d\times n}$ be a wide full rank matrix
(ie $n\geq d$). For $s\in \{d..n\}$, let
$S\subseteq 1..n$ be a size $s$ volume sampled set over $\mathbf X$. Then
$$\mathbb E[(\mathbf X\mathbf I_S)^+]=\mathbf X^+.$$
\end{theorem}
We believe that this fundamental formula lies at the core of why
volume sampling is important in many applications. In this work, we
focus on its application to linear regression. However,
\cite{avron-boutsidis13} discuss many problems where controlling the
pseudo-inverse of a submatrix is essential. For those
applications, it is important to establish variance bounds for the
estimator offered by Theorem \ref{t:einv}. In this case, volume
sampling once again offers very concrete guarantees. We obtain them by
showing the following formula, which can be viewed as a second moment
for this estimator.
\begin{theorem}\label{t:einvs}
Let $\mathbf X\in\mathbb R^{d\times n}$ be a full-rank matrix and $s\in\{d..n\}$.
If size $s$ volume sampling over $\mathbf X$ has full support, then
\[\mathbb E[\underbrace{(\mathbf X_S\mathbf X_S^\top)^{-1}}_
{(\mathbf X\mathbf I_S)^{+\top}(\mathbf X\mathbf I_S)^+} ]
= \frac{n-d+1}{s-d+1}\,
\underbrace{(\mathbf X\mathbf X^\top)^{-1}}_{\mathbf X^{+\top}\mathbf X^+}.\]
If volume sampling does not have full support then
the matrix equality ``$=$'' is replaced by the positive-definite
inequality ``$\preceq$''.
\end{theorem}
The condition that size $s$ volume sampling over $\mathbf X$ has full support is
equivalent to $\mathrm{det}(\mathbf X_S\mathbf X_S^\top)>0$ for
all $S\subseteq 1..n$ of size $s$.
Note that if size $s$ volume sampling has full support, then size
$t>s$ also has full support. So full support for the
smallest size $d$
(often phrased as $\mathbf X$ being {\em in general position})
implies that volume sampling wrt any size $s\ge d$ has full support.
Surprisingly by combining theorems \ref{t:einv}
and \ref{t:einvs}, we can
obtain a ``covariance type formula'' for the pseudo-inverse
matrix estimator:
\begin{align}
&\mathbb E[((\mathbf X\mathbf I_S)^+-\mathbb E[(\mathbf X\mathbf I_S)^+])^\top\; ((\mathbf X\mathbf I_S)^+-\mathbb E[(\mathbf X\mathbf I_S)^+])]
\nonumber\\
&=\mathbb E[(\mathbf X\mathbf I_S)^{+\top}(\mathbf X\mathbf I_S)^{+}] - \mathbb E[(\mathbf X\mathbf I_S)^{+}]^\top\; \mathbb E[(\mathbf X\mathbf I_S)^+]
\nonumber\\
&=\frac{n-d+1}{s-d+1} \;\mathbf X^{+\top}\mathbf X^{+} - \mathbf X^{+\top}\mathbf X^{+}
=\frac{n-s}{s-d+1}\; \mathbf X^{+\top}\mathbf X^+.
\label{e:covs}
\end{align}
Theorem \ref{t:einvs} can also be used to obtain an expectation formula for
the Frobenius norm
$\|(\mathbf X\mathbf I_S)^+\|_F$ of the estimator:
\begin{align}
\label{e:frobs}
\mathbb E\|(\mathbf X\mathbf I_S)^+\|_F^2 &= \mathbb E[\mathrm{tr}((\mathbf X\mathbf I_S)^{+\top}
(\mathbf X\mathbf I_S)^+)] = \frac{n-d+1}{s-d+1}\|\mathbf X^+\|_F^2.
\end{align}
This norm formula has been shown in \cite{avron-boutsidis13}, with
numerous applications. Theorem \ref{t:einvs} can be viewed as a
much stronger pre trace version of the norm formula.
Also our proof techniques are quite different and much simpler.
Note that if size $s$ volume sampling for $\mathbf X$ does not have
full support then \eqref{e:covs} becomes
a semi-definite inequality $\preceq$ between matrices and
\eqref{e:frobs} an inequality between numbers.
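As a sanity check (added here, not from the original text), the expectation formulas of Theorems \ref{t:einv} and \ref{t:einvs} can be verified exactly on a small random instance by enumerating all subsets and weighting them with the volume sampling probabilities:
\begin{verbatim}
import itertools
import numpy as np

def check_expectation_formulas(X, s):
    """Exactly compute E[(X I_S)^+] and E[(X_S X_S^T)^{-1}] under size-s
    volume sampling by enumerating all subsets (small n only)."""
    d, n = X.shape
    subsets = [list(S) for S in itertools.combinations(range(n), s)]
    w = np.array([np.linalg.det(X[:, S] @ X[:, S].T) for S in subsets])
    probs = w / w.sum()                      # volume sampling probabilities
    E_pinv = np.zeros((n, d))
    E_inv = np.zeros((d, d))
    for p, S in zip(probs, subsets):
        XIS = np.zeros_like(X)
        XIS[:, S] = X[:, S]                  # the matrix X I_S
        E_pinv += p * np.linalg.pinv(XIS)
        E_inv += p * np.linalg.inv(X[:, S] @ X[:, S].T)
    assert np.allclose(E_pinv, np.linalg.pinv(X))               # Theorem (t:einv)
    factor = (n - d + 1) / (s - d + 1)
    assert np.allclose(E_inv, factor * np.linalg.inv(X @ X.T))  # Theorem (t:einvs)

check_expectation_formulas(np.random.default_rng(0).standard_normal((3, 6)), s=4)
\end{verbatim}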
{\bf Proof of Theorem \ref{t:einv}}$\,$
We apply Lemma \ref{l:key} with $\fof{\mathbf F}S= (\mathbf X\mathbf I_S)^+$.
It suffices to show $\fof{\mathbf F}S=\sum_{i\in S} P({S_{-i}}|S)\fof{\mathbf F}{S_{-i}}$
for $P({S_{-i}}|S):=\frac{1-\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i}{s-d}$, ie:
$$(\mathbf X\mathbf I_S)^+
= \sum_{i\in S} \frac{1-\mathbf x_i^\top\XinvS\mathbf x_i}{s-d}
\underbrace{(\mathbf X\mathbf I_{S_{-i}})^+}
_{(\mathbf X\mathbf I_{S_{-i}})^\top (\mathbf X_{S_{-i}} \mathbf X_{S_{-i}}^\top)^{-1} }.$$
Proven by applying Sherman Morrison to
$(\mathbf X_{S_{-i}} \mathbf X_{S_{-i}}^\top)^{-1}=
(\mathbf X_S\mathbf X_S^\top-\mathbf x_i\mathbf x_i^\top)^{-1}$ on the rhs:
$$\sum_i \frac{1-\mathbf x_i^\top\XinvS\mathbf x_i}
{s-d} \quad
((\mathbf X\mathbf I_S)^\top-\mathbf e_i\mathbf x_i^\top)
\left(\XinvS +\frac{\XinvS \mathbf x_i\mathbf x_i^\top\XinvS} {1-\mathbf x_i^\top\XinvS\mathbf x_i}
\right)
.$$
We now expand the last two factors into 4 terms.
Summing over $i\in S$, the first term
$(\mathbf X\mathbf I_S)^\top(\mathbf X_S\mathbf X_S^\top)^{-1}$ gives $(\mathbf X\mathbf I_S)^+$
(which is the lhs), and the remaining three terms times $s-d$
sum to 0:
\begin{align*}
&-\sum_{i\in S} (1-\mathbf x_i^\top\XinvS\mathbf x_i)\, \mathbf e_i\mathbf x_i^\top\XinvS
+(\mathbf X\mathbf I_S)^\top\cancel{\XinvS} \cancel{\sum_{i\in S} \mathbf x_i\mathbf x_i^\top} \XinvS
\\&\quad
-\sum_{i\in S}\mathbf e_i(\mathbf x_i^\top\XinvS \mathbf x_i)\;\mathbf x_i^\top\XinvS
= 0.
\hspace{5cm} \BlackBox
\end{align*}
{\bf Proof of Theorem \ref{t:einvs}}$\,$
Choose $\fof{\mathbf F}S= \frac{s-d+1}{n-d+1} (\mathbf X_S\mathbf X_S^\top)^{-1}$.
By Lemma \ref{l:key} it suffices to
show $\fof{\mathbf F}S=\sum_{i\in S} P({S_{-i}}|S)\fof{\mathbf F}{S_{-i}}$ for volume sampling:
$$\frac{s-d+1}{\cancel{n-d+1}} (\mathbf X_S\mathbf X_S^\top)^{-1}
= \sum_{i\in S} \frac{1-\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i}{\cancel{s-d}}
\frac{\cancel{s-d}}{\cancel{n-d+1}} (\mathbf X_{S_{-i}}\mathbf X_{S_{-i}}^\top)^{-1}
$$
To show this we apply Sherman Morrison to
$(\mathbf X_{S_{-i}} \mathbf X_{S_{-i}}^\top)^{-1}$ on the rhs:
\begin{align*}
&\sum_{i\in S} (1-\mathbf x_i^\top\XinvS\mathbf x_i)
\left(\XinvS +\frac{\XinvS \mathbf x_i\mathbf x_i^\top\XinvS}
{1-\mathbf x_i^\top\XinvS\mathbf x_i}\right)
\\&\Blue{=} \,(s-d) \XinvS
+ \cancel{\XinvS} \cancel{\sum_{i\in S}
\mathbf x_i\mathbf x_i^\top} \XinvS
=(s-d+1)\;\XinvS.
\end{align*}
If some denominators $1-\mathbf x_i^\top\XinvS\mathbf x_i$ are zero, then
only sum over $i$ for which the denominators are
positive. In this case the above matrix equality becomes a positive-definite inequality $\Blue{\preceq}$.
\BlackBox
\section{Linear regression with few labels}
\label{sec:linear-regression}
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-18mm}
\begin{tikzpicture}[font=\normalsize,scale=0.8,pin distance=1.6mm]
\begin{axis}[hide axis, xmin=-2.35,xmax=1.35,ymin=-2.2,ymax = 5.1]
\addplot [domain=-2.07:1.1,samples=250, ultra thick, blue]
{x^2} node [pos=0.3, xshift=-.5cm] {$L(\cdot)$};
\addplot [domain=-2.1:1.1,samples=250, ultra thick, red ] {2-x};
\addplot[mark=none, ultra thick, green] coordinates {(-2.15,-1) (1.15,-1)};
\addplot[mark=*] coordinates {(0,0)} node[pin=-20:{$L(\mathbf w^*)$}]{};
\addplot[mark=*] coordinates {(0,2)} node[pin=90:{$\mathbb E[L(\of{\mathbf w^*}S)]$}]{};
\addplot[mark=*] coordinates {(-2,4)} node[pin=90:{$\,\,\,L(\of{\mathbf w^*}{S_i})$}]{};
\addplot[mark=*] coordinates {(1,1)} node[pin=90:{$L(\of{\mathbf w^*}{S_j})\,\,\,$}]{};
\addplot[mark=*] coordinates {(-2,-1)} node[pin=-90:{$\of{\mathbf w^*}{S_i}$}]{};
\addplot[mark=*] coordinates {(1,-1)} node[pin=-90:{$\of{\mathbf w^*}{S_j}$}]{};
\addplot[mark=*] coordinates {(0,-1)} node[pin=-90:{$\mathbf w^*=\mathbb E(\of{\mathbf w^*}S)$}]{};
\draw [decorate,decoration={brace,amplitude=4.5pt},xshift=-2.5pt,yshift=0pt]
(0,0) -- (0,2) node [black,midway,xshift=-.8cm] {$d\,L(\mathbf w^*\!)$};
\end{axis}
\end{tikzpicture}
\caption{Unbiased estimator $\of{\mathbf w^*}S$ in expectation suffers loss
$(d+1)\,L(\mathbf w^*)$.}
\vspace{0mm}
\end{wrapfigure}
Our main motivation for studying volume sampling came from asking
the following simple question. Suppose we want
to solve a $d$-dimensional linear regression problem with
a matrix $\mathbf X\in\mathbb R^{d\times n}$ of input column vectors and a label
vector $\mathbf y\in\mathbb R^n$, ie find
$\mathbf w\in\mathbb R^d$ that minimizes the least squares loss $L(\mathbf w)=\|\mathbf X^\top\mathbf w-\mathbf y\|^2$:
\[\mathbf w^*\defeq \argmin_{\mathbf w\in\mathbb R^d}L(\mathbf w)
=\mathbf X^{+\top}\mathbf y,\]
but the access to label vector $\mathbf y$ is restricted. We
are allowed to pick a subset
$S\subseteq\{1..n\}$ for which the labels $y_i$ (where $i\in S$) are
revealed to us, and then solve the subproblem $(\mathbf X_S,\mathbf y_S)$, obtaining
$\of{\mathbf w^*}S$. What is the smallest number of labels such that for any
$\mathbf X$, we can find $\of{\mathbf w^*}S$ for which $L(\of{\mathbf w^*}S)$ is only a multiplicative
factor away from $L(\mathbf w^*)$ (independent of the number of input vectors
$n$)? This question was posed as an open problem by
\cite{coresets-regression}. It is easy to show that we need at least
$d$ labels (when $\mathbf X$ is full-rank), so as to guarantee the
uniqueness of solution $\of{\mathbf w^*}S$.
We use volume sampling to show that $d$ labels are in fact sufficient
(proof in Section \ref{sec:proof-loss}).
\vspace{-1.5mm}
\begin{theorem}\label{t:loss}
If the input matrix $\mathbf X\in\mathbb R^{d\times n}$ is in general position,
then for any label vector $\mathbf y\in \mathbb R^n$, the expected
square loss (on all $n$ labeled vectors) of the optimal solution
$\of{\mathbf w^*}S$ for the subproblem
$(\mathbf X_S,\mathbf y_S)$, with the $d$-element set $S$ obtained from
volume sampling, is given by
\ifisarxiv\vspace{-1mm}\fi
\begin{align*}
\mathbb E[L(\of{\mathbf w^*}S)] =(d+1)\; L(\mathbf w^*).
\end{align*}
\ifisarxiv\vspace{-1mm}\fi
If $\mathbf X$ is not in general position, then the expected loss is
upper-bounded by $(d+1)\; L(\mathbf w^*)$.
\end{theorem}
\vspace{-0.5mm}
The factor $d+1$ cannot be improved when selecting only $d$
labels (we omit the proof):
\begin{proposition}
\label{prop:optimal}
For any $d$, there exists a least squares problem $(\mathbf X,\mathbf y)$ with $d+1$
vectors in $\mathbb R^d$ such that for every $d$-element index set
$S\subseteq\{1,...,d+1\}$, we have \[L(\of{\mathbf w^*}S) = (d+1)\;L(\mathbf w^*).\]
\end{proposition}
\vspace{-.2cm}
Note that the multiplicative factor in Theorem \ref{t:loss} does not depend on
$n$. It is easy to see that this cannot be achieved by any
deterministic algorithm (without the access to labels). Namely,
suppose that $d=1$ and $\mathbf X$ is a vector of all ones, whereas the label
vector $\mathbf y$ is a vector of all ones except for a single zero. No
matter which column index we choose deterministically, if that index
corresponds to the label $0$, the solution to the subproblem will
incur loss $L(\of{\mathbf w^*}S)=n\, L(\mathbf w^*)$.
The fact that volume sampling is a joint distribution also plays an
essential role in proving Theorem \ref{t:loss}. Consider a matrix $\mathbf X$
with exactly $d$ unique linearly independent columns (and an arbitrary number of
duplicates). Any iid column sampling distribution (like for example
leverage score sampling) will require $\Omega(d\log d)$ samples to
retrieve all $d$ unique columns (ie coupon collector problem), which is
necessary to get any multiplicative loss bound.
The exact expectation formula for the least squares loss under volume
sampling suggests a deep connection between linear regression and this
distribution. We can use Theorem \ref{t:einv} to further
strengthen that connection. Note, that the least squares estimator
obtained through volume sampling can be written as
$\of{\mathbf w^*}S=(\mathbf X\mathbf I_S)^{+\top}\mathbf y$.
Applying formula for the expectation of
pseudo-inverse, we conclude that $\of{\mathbf w^*}S$ is an unbiased estimator of
$\mathbf w^*$.
\begin{proposition}\label{prop:unbiased}
Let $\mathbf X\in\mathbb R^{d\times n}$ be a full-rank matrix and $n \geq s\geq d$. Let
$S\subseteq 1..n$ be a size $s$ volume sampled set over $\mathbf X$. Then,
for arbitrary label vector $\mathbf y\in\mathbb R^n$, we have
\begin{align*}
\mathbb E[\of{\mathbf w^*}S] =\mathbb E[(\mathbf X\mathbf I_S)^{+\top}\mathbf y] = \mathbf X^{+\top}\mathbf y = \mathbf w^*.
\end{align*}
\end{proposition}
For size $s=d$ volume sampling, the fact that $\mathbb E[\of{\mathbf w^*}S]$ equals $\mathbf w^*$
can be found in an old paper \cite{bental-teboulle}.
They give a direct proof based on Cramer's rule.
For us the above proposition is a direct consequence of
the matrix expectation formula given in Theorem
\ref{t:einv} that holds for volume sampling of any size $s\ge d$.
In contrast, the loss expectation formula of Theorem \ref{t:loss} is
limited to sampling of size $s=d$. Bounding the loss expectation for $s>d$ remains
an open problem. However, we consider a different strategy for extending volume
sampling in linear regression. Combining Proposition
\ref{prop:unbiased} with Theorem \ref{t:loss} we can compute the
variance of predictions generated by volume sampling, and obtain
tighter multiplicative loss bounds by sampling multiple $d$-element
subsets $S_1,...,S_t$ independently.
\begin{theorem}\label{t:repeated-sampling}
Let $(\mathbf X,\mathbf y)$ be as in Theorem \ref{t:loss}. For $k$ independent
size $d$ volume samples $S_1,...,S_k$,
\[\mathbb E \left[L\left(
\frac{1}{k}\sum_{j=1}^k\of{\mathbf w^*}{S_j}
\right)\right]
= \left(1+\frac{d}{k}\right)\,L(\mathbf w^*).\]
\end{theorem}
\vspace{-.2cm}
\proof
Denote by $\hat{\mathbf y}\defeq\mathbf X^\top\mathbf w^*$ and $\hat{\mathbf y}^{(S)}\defeq\mathbf X^\top\of{\mathbf w^*}S$
the predictions generated by $\mathbf w^*$ and $\of{\mathbf w^*}S$, respectively. We
perform a bias-variance decomposition of the loss of $\of{\mathbf w^*}S$ (for
size $d$ volume sampling):
\begin{align*}
\mathbb E[L(\of{\mathbf w^*}S)] &= \mathbb E[\|\hat{\mathbf y}^{(S)} -
\mathbf y\|^2]=\mathbb E[\|\hat{\mathbf y}^{(S)} - \hat{\mathbf y} + \hat{\mathbf y} - \mathbf y\|^2] \\
&=\mathbb E[\|\hat{\mathbf y}^{(S)} - \hat{\mathbf y}\|^2]
+ \mathbb E[2(\hat{\mathbf y}^{(S)}-\hat{\mathbf y})^\top(\hat{\mathbf y}-\mathbf y)]
+ \|\hat{\mathbf y}-\mathbf y\|^2\\
&\overset{(*)}{=} \sum_{i=1}^n\mathbb E\left[(\hat y^{(S)}_i -
\mathbb E[\hat y^{(S)}_i])^2\right] + L(\mathbf w^*)=
\sum_{i=1}^n\mathrm{Var}[\hat y^{(S)}_i] + L(\mathbf w^*),
\end{align*}
where $(*)$ follows from Theorem \ref{t:einv}: the unbiasedness
$\mathbb E[\hat{\mathbf y}^{(S)}]=\hat{\mathbf y}$ makes the cross term vanish. Now, we use
Theorem \ref{t:loss} to obtain the total variance of the predictions:
\begin{align*}
\sum_{i=1}^n\mathrm{Var}[\hat y^{(S)}_i] =\mathbb E[L(\of{\mathbf w^*}S)] - L(\mathbf w^*) = d\;L(\mathbf w^*).
\end{align*}
The expected loss of the average weight vector
with respect to sampling $k$ independent sets $S_1,...,S_k$ is then:
\begin{align*}
\hspace{0.5cm}\mathbb E \left[L\left(
\frac{1}{k}\sum_{j=1}^k\of{\mathbf w^*}{S_j}
\right)\right]
&= \sum_{i=1}^n\mathrm{Var}\left[\frac{1}{k}\sum_{j=1}^k\hat y^{(S_j)}_i\right]
+L(\mathbf w^*)
\\
&= \frac{1}{k^2}\left(\sum_{j=1}^k d\,L(\mathbf w^*)\right) +
L(\mathbf w^*)=\left(1 + \frac{d}{k}\right)L(\mathbf w^*). \hspace{2cm}\BlackBox
\end{align*}
It is worth noting that the average weight vector used in Theorem
\ref{t:repeated-sampling} is not expected to perform better than
taking the solution to the joint subproblem, $\of{\mathbf w^*}{S_{1:k}}$, where
$S_{1:k}= S_1\cup ...\cup S_k$. However, theoretical guarantees
for that case are not yet available.
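Both expectation formulas above lend themselves to a direct numerical sanity
check on small instances. The following Python sketch is purely illustrative
(the data, dimensions, and number of repetitions are arbitrary choices and not
part of our analysis): it enumerates all size-$d$ subsets, weights them
proportionally to $\mathrm{det}(\mathbf X_S\mathbf X_S^\top)$, and checks the factor $d+1$ of
Theorem \ref{t:loss}, the unbiasedness of Proposition \ref{prop:unbiased} (for
$s=d$), and the factor $1+d/k$ of Theorem \ref{t:repeated-sampling}.
\begin{verbatim}
# Sanity check of E[L(w_S)] = (d+1) L(w*), E[w_S] = w*, and the (1 + d/k)
# factor for averaged estimators (illustrative sketch for small n and d).
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n, k, trials = 3, 8, 4, 10000
X = rng.standard_normal((d, n))     # columns are the input vectors x_i
y = rng.standard_normal(n)          # arbitrary label vector

def solve(cols):                    # least squares solution of (X_S, y_S)
    return np.linalg.lstsq(X[:, cols].T, y[cols], rcond=None)[0]

def loss(w):                        # square loss on all n labeled vectors
    return np.sum((X.T @ w - y) ** 2)

w_star = np.linalg.lstsq(X.T, y, rcond=None)[0]

# size-d volume sampling distribution, by enumeration
subsets = [list(c) for c in itertools.combinations(range(n), d)]
dets = np.array([np.linalg.det(X[:, S] @ X[:, S].T) for S in subsets])
probs = dets / dets.sum()

exp_loss = sum(p * loss(solve(S)) for p, S in zip(probs, subsets))
exp_w = sum(p * solve(S) for p, S in zip(probs, subsets))
print(exp_loss / loss(w_star))      # should equal d + 1
print(np.allclose(exp_w, w_star))   # should print True (unbiasedness)

# averaging k independent size-d volume samples
avg = [loss(np.mean([solve(subsets[i])
                     for i in rng.choice(len(subsets), size=k, p=probs)],
                    axis=0))
       for _ in range(trials)]
print(np.mean(avg) / loss(w_star))  # should be close to 1 + d/k
\end{verbatim}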
\subsection{\bf Proof of Theorem \ref{t:loss}}
\label{sec:proof-loss}
We use the following lemma regarding the leave-one-out loss for
linear regression \cite{prediction-learning-games}:
\begin{lemma}\label{lm:leave-one-out}
Let $\of{\mathbf w^*}{-i}$ denote the least squares solution for problem
$(\mathbf X_{-i},\mathbf y_{-i})$. Then, we have
\begin{align*}
L(\mathbf w^*) =L(\of{\mathbf w^*}{-i}) - \mathbf x_i^\top(\mathbf X\mathbf X^\top)^{-1}\mathbf x_i \;
\ell_i(\of{\mathbf w^*}{-i}), \quad\text{ where }\quad \ell_i(\mathbf w) \defeq (\mathbf x_i^\top\mathbf w - y_i)^2.
\end{align*}
\end{lemma}
When $\mathbf X$ has $d+1$ columns and $\mathbf X_{-i}$ is a
full-rank $d\times d$ matrix, then $L(\of{\mathbf w^*}{-i}) = \ell_i(\of{\mathbf w^*}{-i})$ and Lemma
\ref{lm:leave-one-out} leads to the following:
\vspace{-3mm}
\begin{align}
\mathrm{det}(\widetilde{\mathbf X}\widetilde{\mathbf X}^\top) &\overset{(1)}{=} \mathrm{det}(\mathbf X\mathbf X^\top)\overbrace{\|\hat{\mathbf y} - \mathbf y\|^2}^{L(\mathbf w^*)}
\quad \text{ where } \widetilde{\mathbf X}=\left(\!\!\!\begin{array}{c}
\mathbf X \\
\mathbf y^\top \end{array}\!\!\!\!\right) \nonumber\\
&\overset{(2)}{=} \mathrm{det}(\mathbf X\mathbf X^\top)
(1-\mathbf x_i^\top(\mathbf X\mathbf X^\top)^{-1}\mathbf x_i)\,
\ell_i(\of{\mathbf w^*}{-i}) \nonumber\\
&\overset{(3)}{=} \mathrm{det}(\mathbf X_{-i}\mathbf X_{-i}^\top)\, \ell_i(\of{\mathbf w^*}{-i}),\label{eq:simple-lemma}
\end{align}
where (1) is the ``base $\times$ height'' formula for volume, (2)
follows from Lemma \ref{lm:leave-one-out}, and (3) follows from a
standard determinant formula.
Returning to the proof, our goal is to find the expected loss $\mathbb E[L(\of{\mathbf w^*}S)]$, where $S$
is a size $d$ volume sampled set.
First, we rewrite the expectation as follows:
\begin{align}
\mathbb E[L(\of{\mathbf w^*}S)] &= \sum_{S,|S|=d} P(S)\, L(\of{\mathbf w^*}S)
=\sum_{S,|S|=d} P(S) \sum_{j=1}^n \ell_j(\of{\mathbf w^*}S)\nonumber\\
&=\sum_{S,|S|=d}\sum_{j\notin S} P(S)\;\ell_j(\of{\mathbf w^*}S)
=\sum_{T,|T|=d+1}\sum_{j\in T}P(T_{-j})\;\ell_j(\of{\mathbf w^*}{T_{-j}}), \label{eq:sum-swap}
\end{align}
where the third equality uses the fact that $\ell_j(\of{\mathbf w^*}S)=0$ for $j\in S$
(the solution $\of{\mathbf w^*}S$ interpolates the $d$ selected points), and the last
one groups each pair $(S,j)$ into the set $T=S\cup\{j\}$ of size $d+1$.
We now use (\ref{eq:simple-lemma}) on the matrix $\mathbf X_T$ and
the test instance $\mathbf x_j$ (assuming $\mathrm{rank}(\mathbf X_{T_{-j}})=d$):
\begin{align}
\label{eq:summand}
P(T_{-j})\;\ell_j(\of{\mathbf w^*}{T_{-j}}) =
\frac{\mathrm{det}(\mathbf X_{T_{-j}}\mathbf X_{T_{-j}}^\top)}{\mathrm{det}(\mathbf X\mathbf X^\top)}\;\ell_j(\of{\mathbf w^*}{T_{-j}}) =
\frac{\mathrm{det}(\widetilde{\mathbf X}_T \widetilde{\mathbf X}_T^\top)}{\mathrm{det}(\mathbf X\mathbf X^\top)}.
\end{align}
Since the summand does not depend on the index $j\in T$,
the inner summation in (\ref{eq:sum-swap}) becomes a multiplication
by $d+1$. This lets us write the expected loss as:
\begin{align}
\label{eq:th-cauchy-binet}
\mathbb E[L(\of{\mathbf w^*}S)] = \frac{d+1}{\mathrm{det}(\mathbf X\mathbf X^\top)}
\sum_{T,|T|=d+1}\!\!\mathrm{det}(\widetilde{\mathbf X}_T \widetilde{\mathbf X}_T^\top)
\overset{(1)}{=} (d+1)\frac{\mathrm{det}(\widetilde{\mathbf X}\widetilde{\mathbf X}^\top)}{\mathrm{det}(\mathbf X\mathbf X^\top)}
\overset{(2)}{=} (d+1)\,L(\mathbf w^*),
\end{align}
where (1) follows from the Cauchy-Binet formula
and (2) is an application of the ``base $\times$ height'' formula.
If $\mathbf X$ is not in general position, then for some summands in \eqref{eq:summand},
$\mathrm{rank}(\mathbf X_{T_{-j}})<d$ and $P(T_{-j})=0$.
Thus the left-hand side of \eqref{eq:summand} is $0$, while the right-hand
side is non-negative, so \eqref{eq:th-cauchy-binet} becomes an inequality,
completing the proof of Theorem \ref{t:loss}.
\section{Efficient algorithm for volume sampling}
\label{sec:algorithm}
In this section we propose an algorithm for efficiently performing
exact volume sampling for any $s\geq d$. This addresses the
question posed by \cite{avron-boutsidis13}, asking for a
polynomial-time algorithm for the case when
$s>d$. \cite{efficient-volume-sampling,more-efficient-volume-sampling}
gave an algorithm for the case when $s=d$, which runs in time
$O(nd^3)$. Recently, \cite{dual-volume-sampling} offered an algorithm
for arbitrary $s$, which has complexity $O(n^4 s)$. We propose a new method, which uses
our techniques to achieve the time complexity $O((n-s+d)nd)$, a direct
improvement over \cite{dual-volume-sampling} by a factor of at least
$n^2$. Our algorithm also offers an improvement for
$s=d$ in certain regimes. Namely, when $n=o(d^2)$, our algorithm
runs in time $O(n^2d)=o(nd^3)$, faster than the method proposed by
\cite{efficient-volume-sampling}.
Our algorithm implements the reverse iterative sampling from Corollary \ref{c:vol}.
After removing $q$ columns, we are left with an index set
of size $n-q$ that is distributed according to volume sampling
for column set size $n-q$.
\begin{theorem}
The sampling algorithm runs in time $O((n-s+d)nd)$,
using $O(d^2+n)$ additional memory,
and returns a set $S$ which is distributed according to size
$s$ volume sampling over $\mathbf X$.
\end{theorem}
\begin{proof}
For correctness we show the following invariants
that hold at the beginning of the {\bf while} loop:
\begin{align*}
p_i = 1 - \mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i = (|S|-d)\,P({S_{-i}}|S)
\quad
\text{and}
\quad
\mathbf Z= (\mathbf X_S\mathbf X_S^\top)^{-1}.
\end{align*}
At the first iteration the invariants trivially hold.
When updating the $p_j$'s we use the $\mathbf Z$ and the $p_i$
from the previous iteration, so we can rewrite the update as
\begin{wrapfigure}{R}{0.33\textwidth}
\vspace{-.6cm}
\renewcommand{\thealgorithm}{}
\begin{minipage}{0.33\textwidth}
\floatname{algorithm}{}
\begin{algorithm}[H]
{\fontsize{8}{8}\selectfont
\caption{\bf \small \hspace{-.2cm}Reverse iterative volume sampling}
\begin{algorithmic}
\STATE \textbf{Input:} $\mathbf X\!\in\!\mathbb R^{d\times n}$, $s\!\in\!\{d..n\}$
\STATE $\mathbf Z\leftarrow (\mathbf X\mathbf X^\top)^{-1}$\label{line:inv}
\STATE $\forall_{i\in\{1..n\}} \quad p_i\leftarrow 1-\mathbf x_i^\top \mathbf Z\mathbf x_i$
\STATE $S \leftarrow \{1,..,n\}$
\STATE {\bf while} $|S|>s$
\STATE \quad Sample $i \propto p_i$ out of $S$
\STATE \quad $S\leftarrow S - \{i\}$
\STATE \quad $\mathbf v \leftarrow \mathbf Z\mathbf x_i /\sqrt{p_i}$
\STATE \quad $\forall_{j\in S}\quad p_j\leftarrow p_j - (\mathbf x_j^\top\mathbf v)^2$
\STATE \quad $\mathbf Z \leftarrow \mathbf Z + \mathbf v\mathbf v^\top$
\STATE {\bf end}
\RETURN $S$
\end{algorithmic}
\label{alg:sampling}
}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
\begin{align*}
p_j &\leftarrow p_j - (\mathbf x_j^\top\mathbf v)^2 \\
&= 1- \mathbf x_j^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_j -
\frac{(\mathbf x_j^\top\mathbf Z\mathbf x_i)^2}{1-\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i}\\
&=1- \mathbf x_j^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_j -
\frac{\mathbf x_j^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_j}
{1-\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i}\\
&=1 - \mathbf x_j^\top\left( (\mathbf X_S\mathbf X_S^\top)^{-1} +
\frac{(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}}
{1-\mathbf x_i^\top(\mathbf X_S\mathbf X_S^\top)^{-1}\mathbf x_i}\right)\mathbf x_j \\
&\overset{(*)}{=}
1- \mathbf x_j^\top(\mathbf X_{S_{-i}}\mathbf X_{S_{-i}}^\top)^{-1}\mathbf x_j =(|S|-1-d)\,P(S_{-i,j}|{S_{-i}}),
\\[-.35cm]
\end{align*}
where $(*)$ follows from the Sherman-Morrison formula.
The update of $\mathbf Z$ is also an application of Sherman-Morrison,
and this concludes the proof of correctness.
Runtime: Computing the
initial $\mathbf Z=(\mathbf X\mathbf X^\top)^{-1}$ takes $O(nd^2)$, as does
computing the initial values of the $p_j$'s. Inside the \textbf{while}
loop, updating the $p_j$'s takes $O(|S| d)=O(nd)$ and updating $\mathbf Z$ takes
$O(d^2)$. The overall runtime is therefore $O(nd^2 + (n-s)nd) =
O((n-s+d)nd)$. The space usage (in addition to the input data) is
dominated by the $p_i$ values and the matrix $\mathbf Z$.
\end{proof}
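For concreteness, a direct NumPy transcription of the sampling procedure might
look as follows. This is only a minimal sketch of the pseudocode above (the
function and variable names are ours, and no numerical safeguards beyond a
simple clipping of the $p_i$ are included); it assumes $\mathbf X\mathbf X^\top$ is
invertible and keeps only $\mathbf Z$ and the vector of $p_i$'s in memory, matching
the stated $O(d^2+n)$ additional space.
\begin{verbatim}
import numpy as np

def volume_sample(X, s, rng=None):
    """Reverse iterative volume sampling of a size-s column index set."""
    if rng is None:
        rng = np.random.default_rng()
    d, n = X.shape
    Z = np.linalg.inv(X @ X.T)                       # Z = (X_S X_S^T)^{-1}, S = {1..n}
    p = 1.0 - np.einsum('ij,jk,ik->i', X.T, Z, X.T)  # p_i = 1 - x_i^T Z x_i
    S = set(range(n))
    while len(S) > s:
        idx = list(S)
        w = np.clip(p[idx], 0.0, None)               # guard against round-off
        i = int(rng.choice(idx, p=w / w.sum()))      # sample i proportionally to p_i
        S.remove(i)
        v = Z @ X[:, i] / np.sqrt(p[i])
        for j in S:
            p[j] -= (X[:, j] @ v) ** 2               # Sherman-Morrison downdate of p_j
        Z += np.outer(v, v)                          # Sherman-Morrison update of Z
    return sorted(S)

# example usage on random data
X = np.random.default_rng(1).standard_normal((3, 10))
print(volume_sample(X, s=5))
\end{verbatim}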
\section{Conclusions}
\label{sec:conclusions}
We developed exact formulas for
$\mathbb E[(\mathbf X\mathbf I_S)^+]$ and $\mathbb E[((\mathbf X\mathbf I_S)^+)^2]$
when the subset $S$ of $s$ column indices
is sampled proportionally to the volume $\mathrm{det}(\mathbf X_S\mathbf X_S^\top)$.
The formulas hold for any fixed size $s\in \{d..n\}$.
These new expectation formulas imply that the solution $\of{\mathbf w^*}S$
for a volume sampled subproblem of a linear regression problem is
unbiased. We also gave a formula relating the loss of the subproblem
to the optimal loss (i.e., $\mathbb E[L(\of{\mathbf w^*}S)]=(d+1)\,L(\mathbf w^*)$). However, this
result only holds for sample size $s=d$. It is an open problem
to obtain such an exact expectation formula for $s>d$.
A natural algorithm is to draw $k$ samples $S_i$ of size $d$
and return $\of{\mathbf w^*}{S_{1:k}}$, where $S_{1:k}=\bigcup_iS_i$.
We were able to get exact expressions for the
expected loss $L(\frac{1}{k} \sum_i \of{\mathbf w^*}{S_i})$ of the average
predictor, but it is an open problem to get
nontrivial bounds for the loss of the best predictor $\of{\mathbf w^*}{S_{1:k}}$.
We were able to show that for small sample sizes, volume
sampling a set jointly has an advantage: it achieves
a multiplicative bound for the smallest sample size $d$,
whereas any independent sampling routine requires a sample
size of at least $\Omega(d \log d)$.
We believe that our results demonstrate a fundamental connection
between volume sampling and linear regression, which demands further
exploration. Our loss expectation formula has already been applied by
\cite{regression-correspondence} to the task of linear regression
without correspondence.
\paragraph{Acknowledgements}
Thanks to Daniel Hsu and Wojciech Kot{\l}owski for many
valuable discussions.
This research was supported by NSF grant IIS-1619271.
\clearpage
\newpage
\bibliographystyle{plain}
\bibliography{pap}
\clearpage
\newpage
\iffalse
\appendix
\section{Alternate proof of Theorem \ref{t:einvs}}
We make use of the following derivative for determinants \cite{detderiv}:
$$\text{For symmetric $\mathbf C$:}\quad
\frac{\partial \,\mathrm{det}(\mathbf X\mathbf C\mathbf X^\top)}{\partial \mathbf X}
= 2\,\mathrm{det}(\mathbf X\mathbf C\mathbf X^\top)\, (\mathbf X\mathbf C\mathbf X^\top)^{-1}\mathbf X\mathbf C.
$$
\begin{align*}
(\mathbf X\mathbf X^\top)^{-1}\mathbf X
&= \frac{1}{2\, \mathrm{det}(\mathbf X\mathbf X^\top)} &&\overbrace{2\, \mathrm{det}(\mathbf X\mathbf X^\top) \;\;(\mathbf X\mathbf X^\top)^{-1}\mathbf X}
^{\frac{\partial\, \mathrm{det}(\mathbf X\mathbf X^\top)}{\partial \mathbf X}}
\tag{derivative with $\mathbf C=\mathbf I$}
\\&=\frac{1}{2\, \mathrm{det}(\mathbf X\mathbf X^\top)}
&&\frac{\partial\, \frac{1}{{n-d\choose k-d}} \sum_S \mathrm{det}(\mathbf X\mathbf I_S\mathbf X^\top)}{\partial \mathbf X}
\tag{generalized Cauchy-Binet}
\\&=\frac{1}{2 {n-d\choose k-d}\mathrm{det}(\mathbf X\mathbf X^\top)}
&&\sum_S \frac{\partial\, \mathrm{det}(\mathbf X\mathbf I_S\mathbf X^\top)}{\partial \mathbf X}
\tag{exchange of $\partial$ and $\Sigma$}
\\&=\frac{1}{2 {n-d\choose k-d}\mathrm{det}(\mathbf X\mathbf X^\top)}
&&\sum_S 2\,\mathrm{det}(\mathbf X\mathbf I_S\mathbf X^\top) \;\; (\mathbf X \mathbf I_S \mathbf X^\top)^{-1}\mathbf X\mathbf I_S
\tag{derivative with $\mathbf C=\mathbf I_S$}
\\&= \sum_S \frac {\mathrm{det}(\mathbf X\mathbf I_S\mathbf X^\top)}{{n-d\choose k-d}\mathrm{det}(\mathbf X\mathbf X^\top)}
&&(\mathbf X\mathbf I_S\mathbf X^\top)^{-1}\mathbf X\mathbf I_S
\tag{def.\ of volume sampling}
\\&= \mathbb E[(\mathbf X\mathbf I_S)^+].
\end{align*}
\fi
\end{document}
\begin{document}
\title{Temporal and Spatial Dependence of Quantum Entanglement\\
from a Field Theory Perspective}
\author{Shih-Yuin Lin}
\email{sylin@phys.cts.nthu.edu.tw}
\affiliation{Physics Division, National Center for Theoretical Sciences,
P.O. Box 2-131, Hsinchu 30013, Taiwan}
\author{B. L. Hu}
\email{blhu@umd.edu}
\affiliation{Joint Quantum Institute and Department of Physics,
University of Maryland, College Park, Maryland 20742-4111, USA}
\date{v1: December 23, 2008. v2: April 5, 2009}
\begin{abstract}
We consider the entanglement dynamics between two Unruh-DeWitt
detectors at rest separated at a distance $d$. This simple model when
analyzed properly in quantum field theory shows many interesting facets
and helps to dispel some misunderstandings of entanglement dynamics.
We find that there is spatial dependence of quantum entanglement in
the stable regime due to the phase difference of vacuum fluctuations
the two detectors experience, together with the interference of the
mutual influences from the backreaction of one detector on the other.
When two initially entangled detectors are still outside each other's
light cone, the entanglement oscillates in time with an amplitude
dependent on spatial separation $d$.
When the two detectors begin to have causal contact,
an interference pattern of the relative degree of entanglement
(compared to those at spatial infinity) develops a parametric
dependence on $d$. The detectors separated at those $d$ with a stronger
relative degree of entanglement enjoy longer disentanglement times.
In the cases with weak coupling and large separation, the detectors
always disentangle at late times. For sufficiently small $d$, the two
detectors can have residual entanglement even if they initially were
in a separable state, while for $d$ a little larger, there could be
transient entanglement created by mutual influences. However, we see
no evidence of entanglement creation outside the light cone for
initially separable states.
\end{abstract}
\pacs{03.65.Ud,
03.65.Yz,
03.67.-a}
\maketitle
\section{Introduction}
Recently we have studied the disentanglement process between two
spatially separated Unruh-DeWitt (UD) detectors (pointlike objects
with internal degrees of freedom) or atoms, described by harmonic
oscillators, moving in a common quantum field: One at rest (Alice),
the other uniformly accelerating (Rob) \cite{LCH08}. These two
detectors are set to be entangled initially, while the initial state
of the field is the Minkowski vacuum. In all cases studied in
\cite{LCH08}, we obtain finite-time disentanglement (called ``sudden
death" of quantum entanglement \cite{YE04}); the disentanglement time
is coordinate dependent, while the entanglement between the two detectors
at two spacetime points is independent of the choice of time slice
connecting these two events. Around the moment of complete
disentanglement there may be some short-time revival of entanglement
within a few periods of oscillations intrinsic to the detectors. In
the strong-coupling regime, the strong impact of vacuum fluctuations
experienced locally by each detector destroys their entanglement
right after the coupling is switched on.
In the above situation we find in \cite{LCH08} the event horizon for
the uniformly accelerated detector (Rob) cuts off the higher-order
corrections of mutual influences, and the asymmetric motions of Alice
and Rob obscure the dependence of the entanglement on the spatial
separation between them. To understand better how entanglement
dynamics depends on the spatial separation between two quantum
objects, in this paper we consider the entanglement between two
detectors at rest separated at a distance $d$, possibly the simplest
setup one could imagine. This will serve as a concrete model for us
to investigate and explicate many subtle points and some essential
misconceptions related to quantum entanglement elicited by the
classic paper of Einstein-Podolsky-Rosen (EPR) \cite{EPR}.
\subsection{Entanglement at spacelike separation: quantum nonlocality?}
One such misconception (or misnomer, for those who understand the
physics but connive at the use of the terminology) is ``quantum
nonlocality" used broadly and often too loosely in certain
communities \footnote{The issue of locality in quantum mechanics is
discussed in \cite{Unruh}. Note that in quantum information science
``quantum nonlocality" still respects causality \cite{PR97}.
In their classic paper \cite{EPR}, the EPR gedanken experiment was
introduced to bring out the incompleteness of quantum mechanics. EPR
made no mention of ``quantum nonlocality." This notion
seems to have crept in later for
the situation when local measurements are performed at a spacelike
separated entangled pair, which cannot be described by any local
hidden variable theory \cite{Bell}.}. Some authors think that quantum
entanglement entails some kind of ``spooky action at a distance"
between two spacelike separated quantum entities (qubits, for
example), and may even extrapolate this to mean ``quantum
nonlocality." The phrase ``spooky action at a distance" when traced
to the source \cite{BE47} refers to the dependence of ``what really
exists at one event" on what kind of measurement is carried out at
the other, namely, the consequence of measuring one part of an
entangled pair. Without bringing in quantum measurement, one cannot
explore fully the existence or consequences of ``spooky action at a
distance" but one could still talk about quantum entanglement between
two spacelike separated qubits or detectors. This is the main theme
of our present investigation. We show in a simple and generic model
with calculations based on quantum field theory (QFT) that nontrivial
dynamics of entanglement outside the light cone does exist.
Another misconception is that entanglement set up between two localized
quantum entities is independent of their spatial separation. This is false
for open systems interacting with an environment \footnote{The environment
here could be as innocuous and ubiquitous as a mediating quantum field or
vacuum fluctuations, whose intercession could in most cases engender
dissipative dynamics but in other special situations leave the
dynamics of the system unitary. For a discussion on the statistical
mechanical features of the equations of motion derived from a loop
expansion in quantum field theory, in particular the differences in
perspectives and results obtained from the in-in formulation in
contradistinction to the in-out formulation, see, e.g., \cite{CH08}}.
This has already been shown in two earlier investigations of the
authors \cite{ASH06, LCH08} and will be again in this paper.
A remark on nonlocality, or the lack thereof, in QFT is in order here.
QFT is often regarded as ``local" in the sense that interactions of
the fields take place at the same spacetime point \footnote{In this
sense ``nonlocality" does exist in, e.g., noncommutative quantum field
theory or in certain quantum theories of spacetime, but that is a
much more severe breach of known physics, which need be dealt with at
a more fundamental level.}, e.g., for a bosonic field $\phi(x)$, a
local theory has no coupling of $\phi(x)$ and $\phi(y)$ at different
spacetime points $x$ and $y$. It follows that the vacuum expectation
value of the commutator $\left<\right. [\phi(x), \phi(y)]
\left.\right>$ vanishes for all $y$ outside the light cone of $x$,
which is what causality entails. Nevertheless, the Hadamard function
$\left<\right.\{\phi(x),\phi(y)\}\left.\right>$ is nonvanishing in
general, whether $x-y$ is spacelike or timelike. In physical terms
the Hadamard function can be related to quantum noise in a stochastic
treatment of QFT \cite{HPZ}. In this restricted sense one could say
that QFT has certain nonlocal features. Of course it is well known
that in QFT processes occurring at spacelike separated events such as
virtual particle exchange are allowed.
\subsection{Issues addressed here}
With a careful and thorough analysis of this problem we are able to
address the following issues:
1) {\it Spatial separation between two detectors.}--Ficek and Tanas
\cite{FicTan06} as well as Anastopoulos, Shresta, and Hu (ASH)
\cite{ASH06} studied the problem of two spatially separated qubits
interacting with a common electromagnetic field. The former authors,
invoking the Born and Markov approximations, find the appearance
of dark periods and revivals. ASH treat the non-Markovian behavior
without these approximations and find a different behavior at short
distances. In particular, for weak coupling, they obtain analytic
expressions for the dynamics of entanglement at a range of spatial
separation between the two qubits, which cannot be obtained when the
Born-Markov approximation is imposed.
A model with two detectors at rest in a quantum field at
finite temperature in (1+1)-dimensional spacetime has been considered
by Shiokawa in \cite{Tom08}, where some dependence of the early-time
entanglement dynamics on spatial separation can also be observed.
In \cite{LCH08} we did not see any simple proportionality between the
{\it initial} separation of Alice and Rob's detectors and the degree
of entanglement: The larger the separation, the weaker the entanglement
at some moments, but stronger at others. We wonder if this unclear
pattern arises because the spatial separation of the two detectors in
\cite{LCH08} changes in time and also depends on the choice of coordinates. In our present
problem the spatial separation between the two detectors is well
defined and remains constant in Minkowski time, so the dependence of
entanglement on the spatial separation should be much clearer and
distinctly identifiable.
2) {\it Stronger mutual influences.}--Among the cases we considered
in \cite{LCH08}, the largest correction from the mutual influences is
still under $2\%$ of the total when only the first- and
second-order corrections from the mutual influences are included. There the difficulty
in making further progress is due to the complicated multidimensional
integrations in computing the back-and-forth propagations of the
backreactions sourced from the two detectors moving in different ways.
Here, for the case with both detectors at rest, the integration is
simpler, and in some regimes we can include stronger and higher-order
corrections from the mutual influences on the evolution of quantum entanglement.
3) {\it Creation of entanglement and residual entanglement.}--In
addition to finite-time disentanglement and the revival of quantum
entanglement for two detectors initially entangled, which have been
observed in \cite{LCH08} for a particular initial state, we expect to
see other kinds of entanglement dynamics with various initial states
and how it varies with spatial separation. Amongst the most
interesting behaviors we find are the creation of entanglement from an
initially separable state \cite{LHMC08} and the persistence of
residual entanglement at late times for two close-by detectors
\cite{PR07}.
\subsection{Summary of our findings}
When the mutual influences are sufficiently strong (under strong
coupling or at small separation), the fluctuations of the detectors with
low natural frequency accumulate, become unstable, and blow up.
As the separation approaches a merge distance (quantified later),
only for detectors with sufficiently high natural frequencies do the
fluctuations avoid this divergence; instead they behave more and more like
those in the two-harmonic-oscillator (2HO) quantum Brownian motion
(QBM) models \cite{CYH07, PR07} (where the two HOs occupy the same
spatial location) with renormalized frequencies.
If the duration of interaction is so short that each detector is
still outside the light cone of the other detector, namely, before the
first mutual influence from one reaches the other, the entanglement oscillates
in time with an amplitude dependent on spatial separation: At some
moments the larger the separation the weaker the entanglement,
but at other moments, the stronger the entanglement.
While such a behavior is affected by correlations of vacuum fluctuations
locally experienced by the two detectors without causal contact,
there is no evidence for entanglement generation outside the
light cone suggested by Franson in Ref. \cite{Franson}.
For an initially entangled pair of detectors, when one gets inside the
light cone of the other, certain interference patterns develop: At
distances where the interference is constructive the disentanglement
times are longer than those at other distances. This behavior is more
distinct when the mutual influences are negligible. For the detectors
separable initially, entanglement can be generated by mutual
influences if they are put close enough to each other.
At late times, under proper conditions, the detectors will be
entangled if the separation is sufficiently small, and separable
if the separation is greater than a specific finite distance. The
late-time behavior of the detectors is governed by vacuum fluctuations
of the field and independent of the initial state of the detectors.
Since the vacuum can be seen as the simplest medium in which the two
detectors are immersed, we expect that the intuitions acquired here
will be useful in understanding quantum entanglement in atomic and
condensed matter systems (upon replacing the field in vacuum by those
in the medium). To this extent our results indicate that the
dependence of quantum entanglement on the spatial separation of qubits
could enter into quantum gate operations (see \cite{ASH06} for comments
on possible experimental tests of this effect in cavity ions) and
circuit layout, as well as affect the cluster states
instrumental to measurement-based quantum computing.
\subsection{Outline of this paper}
This paper is organized as follows. In Sec. II we describe our model
and the setup. In Sec. III the evolution of the operators is
calculated, then the instability for detectors with low natural
frequency is described in Sec. IV. We derive the zeroth-order results
in Sec. V, and the late-time results in Sec. VI. Examples with
different spatial separations of detectors in the weak-coupling limit
are given in Sec. VII. We conclude with some discussions in Sec. VIII.
A late-time analysis on the mode functions is performed in Appendix A,
while an early-time analysis of the entanglement dynamics in the
weak-coupling limit is given in Appendix B.
\section{The model}
Let us consider the Unruh-DeWitt detector theory in (3+1)-dimensional
Minkowski space described by the action \cite{LCH08, LH2005}
\begin{eqnarray}
S &=& -\int d^4 x {1\over 2}\partial_\mu\Phi\partial^\mu\Phi +
\sum_{j=A,B}\left\{\int d\tau_j {1\over 2}\left[\left(\partial_{\tau_j}
Q_j\right)^2 -\Omega_0^2 Q_j^2\right] + \lambda_0\int d^4 x \Phi (x)
\int d\tau_j Q_j(\tau_j)\delta^4\left(x^{\mu}-z_j^{\mu}(\tau_j)\right)
\right\}, \label{Stot1}
\end{eqnarray}
where the scalar field $\Phi$ is assumed to be massless, and $\lambda_0$
is the coupling constant. $Q_A$ and $Q_B$ are the internal degrees of
freedom of the two detectors, assumed to be two identical harmonic
oscillators with mass $m_0 =1$, bare natural frequency $\Omega_0$, and
the same local time resolution so their cutoffs
in two-point functions \cite{LH2005} are the same. The left detector is
at rest along the world line $z_A^\mu(t)=(t,-d/2,0,0)$ and the right
detector is sitting along $z_B^\mu(t)=( t,d/2,0,0)$. The proper times for
$Q_A$ and $Q_B$ are both the Minkowski time, namely, $\tau_A=\tau_B=t$.
We assume at $t=0$ the initial state of the combined system is a
direct product of the Minkowski vacuum $\left|\right. 0_M \left.
\right>$ for the field $\Phi$ and a quantum state $\left|\right. Q_A,
Q_B \left.\right>$ for the detectors $Q_A$ and $Q_B$, taken to be a
squeezed Gaussian state with minimal uncertainty, represented by the
Wigner function of the form
\begin{equation}
\rho(Q_A,P_A,Q_B,P_B) = {1\over \pi^2\hbar^2}\exp\left\{ -{1\over 2}\left[
{\beta^2\over\hbar^2}\left( Q_A + Q_B\right)^2 +
{1\over \alpha^2}\left( Q_A - Q_B\right)^2 +
{\alpha^2\over\hbar^2}\left( P_A - P_B\right)^2 +
{1\over \beta^2}\left( P_A + P_B\right)^2 \right]\right\}.
\label{initGauss}
\end{equation}
How the two detectors are initially entangled is determined by
properly choosing the parameters $\alpha$ and $\beta$ in $Q_A$ and
$Q_B$. When $\beta^2 = \hbar^2/\alpha^2$, the Wigner function
$(\ref{initGauss})$ becomes a product of the Wigner functions for
$Q_A$, $P_A$ and for $Q_B$, $P_B$, thus separable. If one
further chooses $\alpha^2 = \hbar/\Omega$, then the Wigner function
will be initially in the ground state of the two free detectors.
After $t=0$ the coupling with the field is turned on and the
detectors begin to interact with each other through the field while
the reduced density matrix for the two detectors becomes a mixed state.
The linearity of $(\ref{Stot1})$ guarantees that the quantum state
of the detectors is always Gaussian. Thus the dynamics of quantum
entanglement can be studied by examining the behavior of the quantity
$\Sigma$ \cite{LCH08} and the logarithmic negativity $E_{\cal N}$
\cite{VW02}:
\begin{eqnarray}
\Sigma &\equiv&\det\left[ {\bf V}^{PT}+{i\hbar\over 2}{\bf M}\right],\\
E_{\cal N} &\equiv& \max \left\{ 0, -\log_2 2c_- \right\}.
\end{eqnarray}
Here ${\bf M}$ is the symplectic matrix ${\bf 1}\otimes (-i)\sigma_y$,
${\bf V}^{PT}$ is the partial transpose ($(Q_A, P_A, Q_B, P_B)\to
(Q_A, P_A, Q_B, -P_B)$) of the covariance matrix
\begin{equation}
{\bf V} = \left( \begin{array}{cc} {\bf v}_{AA} & {\bf v}_{AB} \\
{\bf v}_{BA} & {\bf v}_{BB} \end{array}\right)
\end{equation}
in which the elements of the $2\times 2$ matrices ${\bf v}_{ij}$ are
symmetrized two-point correlators ${\bf v}_{ij}{}^{mn} =
\left<\right.{\cal R}_i^m , {\cal R}_j^n \left.\right> \equiv
\left<\right.({\cal R}_i^m {\cal R}_j^n + {\cal R}_j^n {\cal R}_i^m )
\left.\right>/2$ with ${\cal R}_i^m = (Q_i(t), P_i(t))$, $m,n= 1,2$
and $i, j = A,B$. $(c_+, c_-)$ is the symplectic spectrum of
${\bf V}^{PT}+ (i\hbar/2){\bf M}$, given by
\begin{equation}
c_\pm \equiv \left[Z \pm \sqrt{Z^2-4\det {\bf V}}\over 2
\right]^{1/2}
\label{SympSpec}
\end{equation}
with
\begin{equation}
Z = \det {\bf v}_{AA} + \det {\bf v}_{BB} - 2 \det {\bf v}_{AB}.
\end{equation}
For detectors in a Gaussian state, $E_{\cal N}>0$, $\Sigma <0$,
and $c_- < \hbar/2$, if and only if the quantum state of the detectors
is entangled \cite{Si00}.
$E_{\cal N}$ is an entanglement monotone \cite{Plenio05} whose value
can indicate the degree of entanglement: below we say the two detectors
have a stronger entanglement if the associated $E_{\cal N}$ is greater.
In the cases considered in Ref. \cite{LCH08} and this paper, the
behavior of $\Sigma$ is similar to $-E_{\cal N}$ when it is nonzero.
Indeed, the quantity $\Sigma$ can also be written as
\begin{equation}
\Sigma = \left( c_+^2 -{\hbar^2\over 4}\right)\left( c_-^2 -
{\hbar^2\over 4}\right)= \det {\bf V} - {\hbar^2 \over 4}Z +
{\hbar^4\over 16}.
\end{equation}
We find it more convenient to use $\Sigma$ when calculating the
disentanglement time. We also define the uncertainty function
\begin{equation}
\Upsilon \equiv \det \left[ {\bf V} + i{\hbar\over 2}{\bf M}
\right], \label{Uncert}
\end{equation}
so that $\Upsilon \ge 0$ is the uncertainty relation \cite{Si00}.
To obtain these quantities, we have to know the correlators
$\left<\right.{\cal R}_i^m ,{\cal R}_j^n \left.\right>$, so we
calculate the evolution of the operators ${\cal R}_i^m$ in the following section.
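The entanglement measures introduced above are straightforward to evaluate
numerically once the covariance matrix is known. As a minimal illustration
(assuming $\hbar=1$, the ordering $(Q_A,P_A,Q_B,P_B)$ for ${\bf V}$, and the
correlators implied by the initial state $(\ref{initGauss})$, namely
$\left<\right.Q_A^2\left.\right>=\left<\right.Q_B^2\left.\right>=(\hbar^2/\beta^2+\alpha^2)/4$,
$\left<\right.P_A^2\left.\right>=\left<\right.P_B^2\left.\right>=(\beta^2+\hbar^2/\alpha^2)/4$,
$\left<\right.Q_A,Q_B\left.\right>=(\hbar^2/\beta^2-\alpha^2)/4$, and
$\left<\right.P_A,P_B\left.\right>=(\beta^2-\hbar^2/\alpha^2)/4$), the
following Python sketch computes $c_-$, $E_{\cal N}$, $\Sigma$, and $\Upsilon$
from a given ${\bf V}$:
\begin{verbatim}
import numpy as np

hbar = 1.0
# symplectic form M = 1 tensor (-i sigma_y), ordering (Q_A, P_A, Q_B, P_B)
M = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))

def entanglement_measures(V):
    """Return (E_N, Sigma, Upsilon) for a 4x4 covariance matrix V."""
    PT = np.diag([1.0, 1.0, 1.0, -1.0])          # partial transpose: P_B -> -P_B
    Vpt = PT @ V @ PT
    vAA, vAB, vBB = V[:2, :2], V[:2, 2:], V[2:, 2:]
    Z = np.linalg.det(vAA) + np.linalg.det(vBB) - 2.0 * np.linalg.det(vAB)
    c_minus = np.sqrt((Z - np.sqrt(Z**2 - 4.0 * np.linalg.det(V))) / 2.0)
    E_N = max(0.0, -np.log2(2.0 * c_minus / hbar))
    Sigma = np.linalg.det(Vpt + 0.5j * hbar * M).real
    Upsilon = np.linalg.det(V + 0.5j * hbar * M).real
    return E_N, Sigma, Upsilon

# usage: covariance of the initial squeezed Gaussian state, alpha, beta as in the text
alpha, beta = 1.1, 4.5
qq, pp = (hbar**2/beta**2 + alpha**2)/4, (beta**2 + hbar**2/alpha**2)/4
qAqB, pApB = (hbar**2/beta**2 - alpha**2)/4, (beta**2 - hbar**2/alpha**2)/4
V0 = np.diag([qq, pp, qq, pp])
V0[0, 2] = V0[2, 0] = qAqB
V0[1, 3] = V0[3, 1] = pApB
print(entanglement_measures(V0))   # E_N > 0 and Sigma < 0: initially entangled
\end{verbatim}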
\section{Evolution of operators}
\label{EvoOp}
Since the combined system $(\ref{Stot1})$ is linear, in the Heisenberg
picture \cite{LH2005, LH2006}, the operators evolve as
\begin{eqnarray}
\hat{Q}_i(t) &=& \sqrt{\hbar\over 2\Omega_r}\sum_j\left[
q_i^{(j)}(t)\hat{a}_j +q_i^{(j)*}(t)\hat{a}_j^\dagger \right]
+\int {d^3 k\over (2\pi)^3}\sqrt{\hbar\over 2\omega}
\left[q_i^{(+)}(t,{\bf k})\hat{b}_{\bf k} +
q_i^{(-)}(t,{\bf k})\hat{b}_{\bf k}^\dagger\right], \\
\hat{\Phi}(x) &=& \sqrt{\hbar\over 2\Omega_r}\sum_j\left[
f^{(j)}(x)\hat{a}_j^{}+f^{(j)*}(x)\hat{a}_j^\dagger \right]
+\int {d^3 k\over (2\pi)^3}
\sqrt{\hbar\over 2\omega}\left[f^{(+)}(x,{\bf k})\hat{b}_{\bf k}
+f^{(-)}(x,{\bf k})\hat{b}_{\bf k}^\dagger\right],
\end{eqnarray}
with $i,j = A,B$. $q_i^{(j)}$, $q_i^{(\pm)}$, $f^{(j)}$, and
$f^{(\pm)}$ are the (c-number) mode functions, $\hat{a}_j$ and
$\hat{a}_j^\dagger$ are the lowering and raising operators for the
free detector $j$, while $\hat{b}_{\bf k}$ and $\hat{b}_{\bf k}^\dagger$
are the annihilation and creation operators for the free field.
The conjugate momenta are $\hat{P}_j(t) =\partial_t\hat{Q}_j(t)$ and
$\hat{\Pi}(x)=\partial_t\hat{\Phi}(x)$. The evolution equations for the
mode functions have been given in Eqs.$(9)-(12)$ in Ref. \cite{LCH08}
with $z_A(t)$ and $z_B(\tau)$ there replaced by $z_A^\mu(t)=(t,-d/2,0,0)$
and $z_B^\mu(t)=(t,d/2,0,0)$ here. Since we have assumed that the two
detectors have the same frequency cutoffs in their local frames, one
can do the same renormalization on frequency and obtain their effective
equations of motion under the influence of the quantum field \cite{LH2005}:
\begin{eqnarray}
\left( \partial_t^2 +2\gamma\partial_t + \Omega_r^2 \right)q_i^{(j)}(t)
&=& {2\gamma\over d}\theta (t-d) \bar{q}_i^{(j)}( t-d),\label{eomqA2} \\
\left( \partial_t^2 +2\gamma\partial_t + \Omega_r^2 \right)
q_i^{(+)}(t,{\bf k}) &=& {2\gamma\over d}\theta(t-d) \bar{q}_i^{(+)}
(t-d,{\bf k})+\lambda_0 f_0^{(+)}(z_i(t),{\bf k}), \label{eomqA+2}
\end{eqnarray}
where $\bar{q}_B \equiv q_A$, $\bar{q}_A \equiv q_B$, $\Omega_r$ is
the renormalized frequency obtained by absorbing the singular behavior
of the retarded solutions for $f^{(j)}$ and $f^{(\pm)}$ around
their sources (for details, see Sec.IIA in Ref.\cite{LH2005}).
Also $\gamma \equiv \lambda_0^2/8\pi$, and
$f_0^{(+)}(x,{\bf k}) \equiv e^{-i \omega t + i{\bf k\cdot x}}$,
with $\omega=|{\bf k}|$. Here one can see that $q_B$ and $q_A$ are
affecting, and being affected by, each other causally with a
retardation time $d$.
The solutions for $q_i^{(j)}$ and $q_i^{(+)}$ satisfying the initial
conditions
$f^{(+)}(0,{\bf x};{\bf k}) = e^{i{\bf k\cdot x}}$,
$\partial_t f^{(+)}(0,{\bf x};{\bf k})=-i\omega e^{i{\bf k\cdot x}}$,
$q_j^{(j)}(0) =1$, $\partial_t q_j^{(j)}(0)= -i\Omega_r$,
and $f_i^{(j)} (0,{\bf x}) =\partial_t f_i^{(j)} (0,{\bf x}) =
q^{(+)}(0;{\bf k})= \partial_t q^{(+)}(0;{\bf k}) = \bar{q}_j^{(j)}(0) =
\partial_t\bar{q}_j^{(j)}(0)=0$ (no summation over $j$) are
\begin{eqnarray}
q_{j}^{(+)}(t,{\bf k}) &=& {\sqrt{8\pi\gamma}\over \Omega}
\sum_{n=0}^\infty \theta(t-nd)\left( 2\gamma\over\Omega d\right)^n
e^{(-1)^n i k_1 z^1_j} \left\{ (M_1 - M_2)^{n+1}
e^{-i\omega(t-nd)} + \right.\nonumber\\& & \left.
e^{-\gamma(t-nd)}\sum_{m=0}^n (M_1-M_2)^{n-m} \left[M_2 W_m(t-nd) -
M_1 W_m^*(t-nd)\right]\right\},
\label{qjp}
\end{eqnarray} and
\begin{equation}
q_j^{(j)}=\sum_{n=0}^\infty q_{2n}, \,\,\,\,\,
\bar{q}_j^{(j)}=\sum_{n=0}^\infty q_{2n+1} \label{qjj}
\end{equation}
(no summation over $j$), where $\Omega \equiv \sqrt{\Omega_r^2 - \gamma^2}$,
$M_1 \equiv (-\omega-i\gamma +\Omega)^{-1}$,
$M_2 \equiv (-\omega-i\gamma -\Omega)^{-1}$,
$W_0(t) \equiv e^{i\Omega t}$,
\begin{equation}
W_n(t) \equiv \int_0^t dt_{n-1} \sin\Omega(t-t_{n-1}) \int_0^{t_{n-1}}
dt_{n-2}\sin\Omega(t_{n-1}-t_{n-2}) \cdots \int_0^{t_1} dt_0
\sin\Omega(t_1-t_0) W_0(t_0),
\end{equation}
for $n\ge 1$, and
\begin{equation}
q_n(t) = \theta(t-nd)\left( 2\gamma\over\Omega d\right)^n
e^{-\gamma(t-nd)} \left[s_1 W_n(t-nd)+s_2 W_n^*(t-nd)\right],
\end{equation}
with $s_1 \equiv [1 - \Omega^{-1}(\Omega_r+i \gamma)]/2$,
and $s_2 \equiv [1 + \Omega^{-1}(\Omega_r+i \gamma)]/2$.
Using the mode functions Eqs. $(\ref{qjp})$ and $(\ref{qjj})$
one can calculate the correlators of the detectors for the covariance
matrix ${\bf V}$ \cite{LCH08}, each splitting into two parts ($\left<\right.
..\left.\right>_{\rm a}$ and $\left<\right. .. \left.\right>_{\rm v}$)
due to the factorized initial state. Because of symmetry, one has
$\left<\right.Q_A^2 \left.\right>= \left<\right.Q_B^2 \left.\right>$,
$\left<\right.P_A^2 \left.\right>= \left<\right.P_B^2 \left.\right>$,
and $\left<\right.Q_A, P_B \left.\right>= \left<\right.Q_B, P_A
\left.\right>$. So only six two-point functions need to be calculated
for ${\bf V}$.
Since $q_n \sim [\gamma (t-n d)/\Omega d]^n e^{-\gamma(t-n d)}/n!$ for
large $t$, $q_n$ will reach its maximum amplitude ($\approx (n/e \Omega
d)^n/n!$) around $t-n d\approx n/\gamma$, which makes the numerical
error of the long-time behavior of ${\bf V}$ difficult to control.
Fortunately for the late-time behavior for all $d$ and the long-time
behavior for very small or very large $d$, we still have good
approximations, as we shall see below. However, before we proceed,
the issue of instability should be addressed first.
\section{Instability of low-frequency harmonic oscillators}
\label{instab}
Combining the equations of motion for $q_A^{(A)}$ and $q_B^{(A)}$, one has
\begin{equation}
\left(\partial_t^2 +2\gamma\partial_t +\Omega_r^2 \right)q_{\pm}^{(A)}(t)
= \pm {2\gamma\over d} q_{\pm}^{(A)}(t-d), \label{eomqpm}
\end{equation}
where $q_{\pm}^{(A)}(t) \equiv q_A^{(A)}(t)\pm q_B^{(A)}(t)$. For $t>d$ and
when $d$ is small, one may expand $q_{\pm}^{(A)}(t-d)$ around $t$ so that
\begin{equation}
\left(\partial_t^2 +2\gamma\partial_t +\Omega_r^2\right) q_{\pm}^{(A)}(t)
= \pm {2\gamma\over d}\left[ q_{\pm}^{(A)}(t)
- d \partial_t q_{\pm}^{(A)}(t)
+ {d^2\over 2} \partial_t^2 q_{\pm}^{(A)}(t)
- {d^3\over 3!} \partial_t^3 q_{\pm}^{(A)}(t) +\cdots\right],
\label{eomsmalld}
\end{equation}
or
\begin{eqnarray}
\left[\partial_t^2 +4\gamma\partial_t +\left(\Omega_r^2 -
{2\gamma\over d} \right)\right] q_+^{(A)}(t)= O(\gamma d),\label{EOMqR+}\\
\left[\partial_t^2 +\left(\Omega_r^2+{2\gamma\over d} \right)\right]
q_-^{(A)}(t) = O(\gamma d). \label{EOMqR-}
\end{eqnarray}
If we start with a small renormalized frequency $\Omega_r$ and a
small spatial separation $d < 2\gamma/\Omega_r^2$ with $\gamma d$
kept small so the $O(\gamma d)$ terms can be neglected, then $q_+^{(A)}$
will be exponentially growing since its effective frequency becomes
imaginary ($\Omega_r^2 - (2\gamma/d) < 0$), while $q_-^{(A)}$ oscillates
without damping. A similar argument shows that $q_{\pm}^{(B)}$
will have the same instability when two harmonic oscillators
with small $\Omega_r^2$ are situated close enough to each other.
One may wonder whether the $O(\gamma d)$ terms can alter the above
observations. In Appendix \ref{LateAna} we perform a late-time
analysis, which shows the same instability.
The conclusion is, if $\Omega_r^2 < 2\gamma/d$, all the mode functions will
grow exponentially in time so the correlators $\left<\right.Q_i,Q_j\left.
\right>$ or the quantum fluctuations of the detectors diverge at late times.
Accordingly, we define
\begin{equation}
d_{ins}\equiv 2\gamma/\Omega_r^2 \label{dins}
\end{equation}
as the ``radius of instability." For two detectors with separation
$d> d_{ins}$, the system is stable. For the cases with $d = d_{ins}$,
a constant solution for $q_+^{(j)}$ at late times is acquired by
$(\ref{EOMqR+})$, while for $d < d_{ins}$, the system is unstable.
Below we restrict our discussion to the stable regime,
$\Omega_r^2 >2\gamma/d$.
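A quick numerical illustration of this criterion can be obtained by stepping
the delayed equation $(\ref{eomqpm})$ for the $+$ mode in time and watching
for exponential growth once $d<d_{ins}=2\gamma/\Omega_r^2$. The following
Python sketch is purely illustrative: the integrator, initial data, and
parameter values ($\gamma=0.1$, $\Omega_r=0.2$, hence $d_{ins}=5$) are
hypothetical choices and are not those used in the figures below.
\begin{verbatim}
import numpy as np

def evolve_q_plus(gamma, Omega_r, d, T, dt=1e-3):
    """Step q'' + 2*gamma*q' + Omega_r^2 q = (2*gamma/d) q(t - d),
    with the delayed source switched on only for t > d."""
    n, lag = int(T / dt), int(d / dt)
    q, qdot = np.zeros(n), np.zeros(n)
    q[0] = 1.0                                   # illustrative initial data
    for i in range(n - 1):
        delayed = q[i - lag] if i >= lag else 0.0
        accel = -2*gamma*qdot[i] - Omega_r**2*q[i] + (2*gamma/d)*delayed
        qdot[i+1] = qdot[i] + dt * accel         # semi-implicit Euler step
        q[i+1] = q[i] + dt * qdot[i+1]
    return q

gamma, Omega_r = 0.1, 0.2                        # hypothetical values, d_ins = 5
print(np.abs(evolve_q_plus(gamma, Omega_r, d=2.0, T=400)).max())  # d < d_ins: blows up
print(np.abs(evolve_q_plus(gamma, Omega_r, d=8.0, T=400)).max())  # d > d_ins: bounded
\end{verbatim}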
\section{Zeroth-order results}
\label{ZOR}
\begin{figure}
\caption{The zeroth-order results; no mutual influence is included here
($\gamma=10^{-5}$).}
\label{zeroS}
\end{figure}
Neglecting the mutual influences, the v-part of the zeroth-order
cross correlators read
\begin{eqnarray}
\left<\right.Q_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)} &=&
{\hbar\over \pi \Omega^2 d}{\rm Re}{i\over \Omega+i \gamma}\left\{
\left[ \Omega + e^{-2\gamma t} \left( \Omega +2\gamma
e^{i\Omega t}\sin\Omega t\right)\right] {\cal S}_d \right. -
\nonumber\\ && \left. e^{-\gamma t}
\left[ \left(\Omega\cos\Omega t+\gamma \sin\Omega t\right)\left(
{\cal S}_{d-t} + {\cal S}_{d+t}\right)
+(\Omega+i\gamma)\sin\Omega t\left(
{\cal C}_{d-t}-{\cal C}_{d+t}\right)
\right]\right\},\label{QLQRv0}\\
\left<\right.P_A(t), P_B(t) \left.\right>_{\rm v}^{(0)} &=&
{\hbar\over \pi \Omega^2 d}{\rm Re}\,\, i (\Omega+i \gamma)\left\{
\left[ \Omega + e^{-2\gamma t} \left( \Omega -2\gamma
e^{i\Omega t}\sin\Omega t\right)\right] {\cal S}_d \right. -
\nonumber\\ && \left. e^{-\gamma t}
\left[ \left(\Omega\cos\Omega t-\gamma \sin\Omega t\right)\left(
{\cal S}_{d-t}+{\cal S}_{d+t}\right)
+(\Omega-i\gamma)\sin\Omega t\left( {\cal C}_{d-t}-{\cal C}_{d+t}
\right)\right]\right\}, \label{PLPRv0} \\
\left<\right.P_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)} &=&
\left<\right.Q_A(t), P_B(t) \left.\right>_{\rm v}^{(0)} \nonumber\\
&=& {\hbar\gamma\over \pi \Omega^2 d}e^{-\gamma t}\sin\Omega t
\,\,{\rm Re}\,\, \left\{ -2 e^{(-\gamma +i\Omega) t} {\cal S}_d +
{\cal S}_{d-t} +{\cal S}_{d+t} + i \left( {\cal C}_{d-t} -
{\cal C}_{d+t}\right)\right\}, \label{PLQRv0}
\end{eqnarray}
where
\begin{eqnarray}
{\cal S}_x &\equiv& {1\over 2}({\rm Ci}[(\Omega+ i\gamma)x]+
{\rm Ci}[-(\Omega+ i\gamma)x])\sin[(\Omega+i\gamma)x]-
{\rm Si}[(\Omega+ i\gamma)x] \cos[(\Omega+i\gamma)x],\\
{\cal C}_x &\equiv& {1\over 2}({\rm Ci}[(\Omega+ i\gamma)x]+
{\rm Ci}[-(\Omega+ i\gamma)x])\cos[(\Omega+i\gamma)x] +
{\rm Si}[(\Omega+ i\gamma)x]\sin[(\Omega+i\gamma)x],
\end{eqnarray}
with sine-integral Si$(x)={\rm si}(x)+ \pi/2$ and cosine-integral
Ci$(x)$ \cite{Arfken}.
The a-part of the zeroth-order correlators as well as the
two-point functions (for a single inertial detector),
$\left<\right.Q_j^2\left.\right>_{\rm v}^{(0)}$, $\left<\right.Q_j,
P_j \left.\right>_{\rm v}^{(0)}$, and $\left<\right.P_j^2
\left.\right>_{\rm v}^{(0)}$
are all independent of the spatial separation $d$ [for explicit
expressions see Eq. (25) in Ref. \cite{LCH08} and Appendix A in
Ref. \cite{LH2006}].
So the $d$ dependence of the zeroth-order degrees of entanglement
$E_{\cal N}^{(0)}$ and $\Sigma^{(0)}$ comes entirely from
$(\ref{QLQRv0})$-$(\ref{PLQRv0})$, which are due to the phase
difference of vacuum fluctuations that the detectors experience
locally.
Note that when
\begin{equation}
d \to d_{\min} \equiv {1\over\Omega}e^{1-\gamma_e -\Lambda_1},
\label{dmin}
\end{equation}
where $\gamma_e$ is the Euler constant and $\Lambda_1 \equiv -\ln
\Omega\Delta t -\gamma_e$ corresponds to the time-resolution $\Delta t$
of our detector theory \cite{LH2006},
one has $\left<\right.{\cal R}_A(t), {\cal R}_B(t) \left.
\right>_{\rm v}^{(0)} \to \left<\right.{\cal R}_A(t)^2\left.
\right>_{\rm v}^{(0)}=\left<\right. {\cal R}_B(t)^2\left.
\right>_{\rm v}^{(0)}$, ${\cal R}=P,Q$.
That is, the two detectors should be seen as
located at the same spatial point when $d\approx d_{\min}$ in our model,
which is actually a coarse-grained effective theory.
Let us call $d_{\min}$ the ``merge distance."
\subsection{Early-time entanglement dynamics inside the light cone ($d<t$)}
\label{Earlyd<t}
In the weak-coupling limit ($\gamma\Lambda_1 \ll\Omega$),
when the separation $d$ is not too small,
the effect from the mutual influences comes weakly and slowly,
so the zeroth-order correlators dominate the early-time behavior of
the detectors. The asymptotic expansions of sine-integral and
cosine-integral functions read \cite{Arfken}
\begin{eqnarray}
{\rm Ci}[(\Omega+i\gamma)x] &\approx& {i\pi\over 2}
\left({x\over |x|}-1\right) +
{\sin (\Omega+i\gamma)x\over(\Omega+i\gamma)x}, \label{Cixgg1}\\
{\rm Si}[(\Omega+i\gamma)x] &\approx& {\pi\over 2}{x\over |x|}-
{\cos(\Omega+i\gamma)x\over (\Omega+i\gamma)x},\label{Sixgg1}
\end{eqnarray}
for $\Omega, \gamma>0$, and $|(\Omega+ i\gamma)x| \gg 1$.
So in the weak-coupling limit,
from $t-d=0$ up to $t-d \sim O(1/\gamma)$, one has
\begin{equation}
\left<\right.Q_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)} \approx
\theta(t-d){\sin\Omega d\over \Omega d}{\hbar\over 2\Omega}
e^{-\gamma d} \left[ 1-e^{-2\gamma(t-d)}\right],
\label{QLQRv0wcl}
\end{equation}
$\left<\right.P_A(t), P_B(t) \left.\right>_{\rm v}^{(0)}\approx\Omega^2
\left<\right.Q_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)}$ and
$\left<\right.P_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)}, \left<\right.
Q_A(t), P_B(t) \left.\right>_{\rm v}^{(0)} \sim O(\gamma/\Omega)$.
The $\theta(t-d)$ implies the onset of a clear interference pattern
($\sim \sin\Omega d/\Omega d$) inside the light cone, as shown in Fig.
\ref{zeroS}. This is mainly due to the sign flip of the sine-integral
function ${\rm Si}[(\Omega+i\gamma)(d-t)]$ entering ${\cal S}_{d-t}$ in
$(\ref{QLQRv0})$-$(\ref{PLQRv0})$ when $d-t$ changes
sign. The $\theta(t-d)$ acts as if each detector starts to
``know" of the existence of the other detector once they enter each
other's light cone, even though the mutual influences are not
included here. In the next subsection we will see that there exists
some interference pattern of $O(\gamma)$ in $\Sigma$ even for $d>t$,
where no classical signal can reach one detector from the other.
\subsection{Outside the light cone ($d > t$)}
\label{outLC}
Before the first mutual influence from one detector reaches the other,
the zeroth-order results are exact. From $(\ref{Cixgg1})$ and
$(\ref{Sixgg1})$, when $d>t$ and $|\Omega+i\gamma|(d-t)\gg 1$, one has
\begin{eqnarray}
\left<\right.Q_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)} &\approx&
{2\gamma\over\pi \Omega_r^4 d^2}\left[ 1
+ e^{-2\gamma t}\left(\cos\Omega t +{\gamma\over\Omega}\sin\Omega t\right)^2
- {2d^2 e^{-\gamma t}\over d^2-t^2}
\left(\cos\Omega t +{\gamma\over\Omega}\sin\Omega t\right)\right],
\label{QAQBout}\\
\left<\right.P_A(t), P_B(t) \left.\right>_{\rm v}^{(0)} &\approx&
{2\gamma\over\pi d^2} e^{-2\gamma t} {\sin^2\Omega t\over\Omega^2} ,\\
\left<\right.P_A(t), Q_B(t) \left.\right>_{\rm v}^{(0)} &=&
\left<\right.Q_A(t), P_B(t) \left.\right>_{\rm v}^{(0)} \approx
{2\gamma e^{-\gamma t} \over\pi \Omega_r^2 d^2}
{\sin\Omega t\over\Omega} \left[ - e^{-\gamma t}\left(\cos\Omega t
+{\gamma\over\Omega}\sin\Omega t\right) + {d^2\over d^2-t^2} \right],
\label{QAPBout}
\end{eqnarray}
which makes the values of $E_{\cal N}$ and $\Sigma$ depend on $d$ and $t$;
that is, the dependence of the degree of entanglement on the spatial
separation $d$ between the two detectors varies in time $t$, even before
they have causal contact with each other.
In the weak-coupling limit, with the initial state $(\ref{initGauss})$ and
$\Omega \gg \gamma\Lambda_j > \gamma$, $j=0,1$, one has
\begin{eqnarray}
E_{{\cal N}rel} &\equiv& -\log_2 2c_-(t,d) -\left[ -
\log_2 2c_-(t,\infty)\right] \nonumber\\
&\approx& {\gamma \hbar \over \pi \ln 2}{{\cal X}\over |{\cal X}|}
\sum_{n=0}^2 {a^\gamma_n\over b_\gamma} \cos n\Omega t +
O(\gamma^2\Lambda_0, \gamma^2\Lambda_1)
\label{ENrel}
\end{eqnarray}
when $d>t$ and $|\Omega+i\gamma|(d-t)\gg 1$, where
\begin{eqnarray}
a_0^\gamma &=& d^{-2}\left\{ \hbar^2\beta^2 + \alpha^2\left(
-\alpha^2\beta^2\Omega^2+\beta^4+4\beta^2\hbar\Omega-\hbar^2\Omega^2\right)
+ |{\cal X}|(\alpha^2\Omega^2-\beta^2)+\right. \nonumber\\
& & \,\,\,\,\,\,\,\left. 2\beta^2 e^{-2\gamma t}\left[\hbar^2+
\alpha^2(\beta^2-2\hbar\Omega)-|{\cal X}|\right]\right\}e^{-2\gamma t},\\
a_1^\gamma &=& -4(d^2-t^2)^{-1}\beta^2\left\{ 2\alpha^2\hbar\Omega + \left[
\hbar^2+\alpha^2(\beta^2-2\hbar\Omega)-|{\cal X}|\right] e^{-2\gamma t}
\right\}e^{-\gamma t},\\
a_2^\gamma &=& d^{-2}\left\{ 4\hbar\Omega\alpha^2\beta^2 + \left[\beta^2\hbar^2
+ \alpha^2\left(\alpha^2\beta^2\Omega^2+\beta^4-4\beta^2\hbar\Omega+
\hbar^2\Omega^2\right)-|{\cal X}|(\alpha^2\Omega^2+\beta^2)\right]
e^{-2\gamma t}\right\},\\
b_\gamma &=& \Omega^3\left\{ 2\hbar^2\Omega\alpha^2\beta^2 + \hbar\left[
\alpha^2(\alpha^2\beta^2\Omega^2+\beta^4-4\beta^2\hbar\Omega) +
(\alpha^2\Omega^2+\beta^2)(\hbar^2-|{\cal X}|)\right]e^{-2\gamma t}
\right.\nonumber\\ & & \,\,\,\,\,\,\, \left.+
(\hbar-\alpha^2\Omega)(\hbar\Omega-\beta^2)(\alpha^2\beta^2+\hbar^2
-|{\cal X}|)e^{-4\gamma t}\right\},
\end{eqnarray}
with ${\cal X}\equiv \hbar^2-\alpha^2\beta^2$. So for ${\cal X}\not=0$
the degree of entanglement at separation $d$, relative to that of
detectors at the same moment but separated by an infinite distance,
oscillates at frequency $\Omega$ and/or $2\Omega$, depending on the values
of $a_n^\gamma$.
This explains the $(\cos \Omega t)/(d^2-t^2)$ pattern outside the
light cone in the upper-right plot of Fig. \ref{zeroS} and the small
oscillations before $t\approx 7.5$ in the lower-right plot of the same
figure, where $(\alpha, \beta)=(1.1, 4.5)$ so $(a_0^\gamma, a_1^\gamma,
a_2^\gamma)/b^\gamma \approx(1.94/d^2, -2.89/(d^2-t^2), 0.95/d^2)$ at early times.
Another example is, when $(\alpha, \beta)=(1.5, 0.2)$, one has $(a_0^\gamma,
a_1^\gamma, a_2^\gamma)/b^\gamma \approx (-4.68/d^2, -0.06/(d^2-t^2),
4.74/d^2)$ at early times, so the $d^{-2}\cos 2\Omega t$ pattern dominates
at large $d$ in the bottom-right plot of Fig. \ref{firstS2}.
For these cases, the larger the separation, the weaker the entanglement (in
terms of the logarithmic negativity) at some moments, but the stronger the
entanglement at other moments.
The sudden switching on of the interaction at $t=0$ in our model creates
additional oscillation patterns outside the light cone. However, as shown in
$(\ref{ENrel})$, those oscillations are suppressed in the weak-coupling limit to
$O(\gamma \Lambda_0)$ relative to the above results. Here $\Lambda_0\equiv -
\ln\Omega\Delta t_0 -\gamma_e$, where $\Delta t_0$ corresponds to the time
scale of switching on the coupling between the detectors and the quantum field
(see Sec. IIIB in Ref. \cite{LH2006} for details).
When $\beta^2 = \hbar^2/\alpha^2$ or ${\cal X}=0$, the detectors are initially
separable and
\begin{eqnarray}
\Sigma &\approx& {\hbar^2\over 16\alpha^4\pi^2 \Omega^4}\left\{ \pi\Omega
(\hbar-\alpha^2\Omega)^2 e^{-2\gamma t}(1-e^{-2\gamma t}) +
\right.\nonumber\\ & & \left.
2\gamma\Lambda_1\left[ 2\hbar\alpha^2\Omega(1-e^{-2\gamma t})
+ \hbar^2 e^{-2\gamma t}(1-\cos 2\Omega t)+ \alpha^4\Omega^2e^{-2\gamma t}
(1+\cos 2\Omega t)\right] \right\}^2 + O(\gamma^2),
\end{eqnarray}
outside the light cone, which is always positive so the detectors are always
separable. When we increase the coupling strength $\gamma$, we find that
the values of $\Sigma$ are pushed further away from those negative values of
entangled states. In Appendix \ref{EarlyAna} we also see that quantum
entanglement is only created deep in the light cone. Therefore in our model
we see no evidence of entanglement generation outside the light cone.
For $|{\cal X}| \not=0$ but sufficiently small, the detectors are initially
entangled, but after a very short-time scale $O(e^{-\gamma_e-(\Lambda_0/2)})$
the value of $\Sigma$ jumps to $(\hbar^2\gamma\Lambda_1 \alpha/\beta\pi)^2
-(\hbar{\cal X}/4\alpha\beta)^2$, which could be positive so that the
detectors become separable. In these cases quantum entanglement could revive
later as $\Sigma$ is oscillating with an amplitude proportional to $\gamma
\Lambda_1$, while these revivals of entanglement do not last more than a few
periods of the intrinsic oscillation in the detectors.
\subsection{Breakdown of the zeroth-order results}
\label{0bad}
At late times $t\gg \gamma^{-1}$, all $\left<\right. .. \left.
\right>_{\rm a}$ vanish, so $\left<\right. .. \left.\right>_{\rm v}$
dominate and the nonvanishing two-point correlation functions read
\begin{eqnarray}
\left<\right.Q_A, Q_B \left.\right>^{(0)}|_{t\gg \gamma^{-1}} &\approx&
{\hbar\over \pi \Omega d}{\rm Re}{i{\cal S}_d \over \Omega+i \gamma},
\label{QAQB0late}\\
\left<\right.P_A, P_B \left.\right>^{(0)}|_{t\gg \gamma^{-1}} &\approx&
{\hbar\over \pi \Omega d}{\rm Re} (i\Omega -\gamma) {\cal S}_d ,
\label{PAPB0late}\\
\left<\right.Q_A^2\left.\right>^{(0)}|_{t\gg \gamma^{-1}} =
\left<\right.Q_B^2\left.\right>^{(0)}|_{t\gg \gamma^{-1}} &\approx&
{i\hbar\over 2\pi\Omega}\ln {\gamma-i\Omega\over \gamma+i\Omega},\\
\left<\right.P_A^2\left.\right>^{(0)}|_{t\gg \gamma^{-1}} =
\left<\right.P_B^2\left.\right>^{(0)}|_{t\gg \gamma^{-1}} &\approx&
{\hbar\over\pi} \left\{ {i \over 2\Omega} (\Omega^2-\gamma^2)\ln
{\gamma-i\Omega\over \gamma+i\Omega} + \gamma\left[ 2\Lambda_1 -
\ln \left( 1+{\gamma^2\over\Omega^2}\right)\right]\right\},
\end{eqnarray}
from $(\ref{QLQRv0})$-$(\ref{PLQRv0})$ and from Ref. \cite{LH2006}.
When $d \to \infty$, the cross correlators vanish and the uncertainty
relation reads
\begin{equation}
\Upsilon^{(0)}|_{t\gg \gamma^{-1}} \equiv \det\left[
{\bf V}^{(0)}|_{t\gg \gamma^{-1}}+ {i\over 2}\hbar {\bf M}\right]
\approx \left( \left<\right.Q_A^2\left.\right>^{(0)}
\left<\right.P_A^2\left.\right>^{(0)}|_{t\gg \gamma^{-1}} -
{\hbar^2\over 4}\right)^2 \ge 0, \label{0thUnc}
\end{equation}
for sufficiently large $\Lambda_1$ \cite{LH2006}, so the uncertainty
relation holds perfectly. However, observing that $\left|{\cal S}_d
\right|\approx\pi e^{-\gamma d}$ for $d$ large enough but still finite,
the late-time $\Upsilon^{(0)}$ can reach values as low as:
\begin{eqnarray}
&& \left( \left<\right.Q_A^2(t)\left.\right>^{(0)} \left<\right.
P_A^2(t)\left.\right>^{(0)}|_{t\gg \gamma^{-1}} - {\hbar^2\over 4}
\right)^2 + {\hbar^4 e^{-4\gamma d} \over 16 \Omega_r^4 d^4}
- \nonumber\\ & & {\hbar^2 e^{-2\gamma d} \over 4d^2}\left[
{\hbar^2\over 2 \Omega_r^2} + \left(\left<\right.Q_A^2(t)
\left.\right>^{(0)}|_{t\gg \gamma^{-1}}\right)^2 + \Omega_r^{-4}
\left(\left<\right.P_A^2(t)\left.\right>^{(0)}|_{t\gg \gamma^{-1}}
\right)^2\right]. \label{minU}
\end{eqnarray}
This zeroth-order result suggests that the uncertainty relation can
fail if $d$ is not large enough for the first line of $(\ref{minU})$
to overwhelm the second line [see Fig. \ref{zerounc}].
When this happens the zeroth-order results break down. Therefore,
to describe the long-time entanglement dynamics
at short distances $d$, the higher-order corrections from the mutual
influences must be included for consistency.
When $\gamma \ll \gamma\Lambda_1 \ll \Omega$, a simple estimate shows
that the late-time $\Upsilon^{(0)}$ becomes negative if $d$ is smaller
than about $d_{0}\approx \pi /(2\Lambda_1\gamma)$, which is much
greater than $d_{ins}$ found in Sec. \ref{instab}.
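As a rough numerical illustration (not part of the original analysis), the short Python sketch below evaluates the estimate $d_0\approx \pi/(2\Lambda_1\gamma)$ and compares it with the radius of instability, taken here as $d_{ins}\approx 2\gamma/\Omega_r^2$ under the assumption $\Omega_r\approx\Omega$; the parameter values are those quoted later for Fig. \ref{dent}.
\begin{verbatim}
# Minimal sketch (illustration only): compares the breakdown scale d_0 of the
# zeroth-order results with the radius of instability d_ins, assuming
# Omega_r ~ Omega and the parameter values quoted for Fig. "dent".
import math

gamma, Omega, Lambda1 = 1.0e-4, 2.3, 25.0
Omega_r = Omega                              # assumption for this estimate

d0 = math.pi / (2.0 * Lambda1 * gamma)       # scale below which Upsilon^(0) < 0
d_ins = 2.0 * gamma / Omega_r**2             # radius of instability

print(f"d_0   ~ {d0:.3e}")                   # ~ 6.3e+02
print(f"d_ins ~ {d_ins:.3e}")                # ~ 3.8e-05
print("d_0 >> d_ins:", d0 > d_ins)
\end{verbatim}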
\begin{figure}
\caption{The oscillating curve represents the value of $\Upsilon^{(0)}$ at late times as a function of $d$.}
\label{zerounc}
\end{figure}
\section{Entanglement at late times}
\label{resient}
Since all $q_i^{(j)}$ vanish at late times in the stable regime (see
Appendix \ref{LateAna}), the late-time correlators consist of $q_j^{(\pm)}$ only,
for example,
\begin{equation}
\left<\right. Q_B^2 \left.\right>|_{t\to \infty} =\int {\hbar d^3 k
\over(2\pi)^3 2\omega} q_B^{(+)}(t,{\bf k}) q_B^{(-)}(t,{\bf k})
|_{t\to \infty},
\end{equation}
where $q_B^{(+)}(t,{\bf k})|_{t\to \infty}$ is given by $(\ref{lateTqp})$
and $q_B^{(-)}(t,{\bf k})|_{t\to \infty}$ is its complex conjugate.
After some algebra, we find that the nonvanishing
correlators at late times can be written as
\begin{eqnarray}
\left<\right. Q_A^2 \left.\right>|_{t\to \infty} =
\left<\right. Q_B^2 \left.\right>|_{t\to \infty} &=&
2 {\rm Re}\left( {\cal F}_{0+} + {\cal F}_{0-} \right), \label{LTQ2}\\
\left<\right. Q_A, Q_B \left.\right>|_{t\to \infty} &=&
2 {\rm Re}\left( {\cal F}_{0+} - {\cal F}_{0-} \right), \\
\left<\right. P_A^2 \left.\right>|_{t\to \infty} =
\left<\right. P_B^2 \left.\right>|_{t\to \infty} &=&
2 {\rm Re}\left( {\cal F}_{2+} + {\cal F}_{2-} \right), \\
\left<\right. P_A, P_B \left.\right>|_{t\to \infty} &=&
2 {\rm Re}\left( {\cal F}_{2+} - {\cal F}_{2-} \right), \label{LTPx}
\end{eqnarray}
where
\begin{equation}
{\cal F}_{c\pm}(\gamma,\Omega, d) \equiv {\hbar i\over 4\pi}
\int_0^{\omega_{max}}d\omega {\omega^c\over \omega^2+2i\gamma\omega-
\Omega_r^2\pm {2\gamma\over d}e^{i\omega d} },
\end{equation}
and $\omega_{max}$ is the high frequency (UV) cutoff corresponding to
$\Lambda_1$.
In the stable regime one can write ${\cal F}_{c\pm}$
in a series form:
\begin{eqnarray}
{\cal F}_{c\pm}(\gamma,\Omega, d) &=& {\hbar i\over 4\pi}
\int_0^{\omega_{max}}d\omega
{\omega^c\over \omega^2+2i\gamma\omega-\Omega^2-\gamma^2}
\sum_{n=0}^\infty \left[{\mp {2\gamma\over d}e^{i\omega d} \over
\omega^2+2i\gamma\omega-\Omega^2 - \gamma^2}\right]^n \nonumber\\
&=& {\hbar i\over 4\pi} \int_0^{\omega_{max}} d\omega \sum_{n=0}^\infty
{1\over n!}\left[ \mp {\gamma\over \Omega d}e^{i\omega d}\partial_\Omega
\right]^n {\omega^c\over \omega^2+2i\gamma\omega-\Omega^2-\gamma^2},
\label{Fseries}
\end{eqnarray}
so we have
\begin{eqnarray}
{\cal F}_{0\pm}(\gamma,\Omega, d) &=& {\hbar \over 4\pi}\left\{
{i\over 2\Omega}\ln {\gamma-i\Omega\over\gamma+i\Omega} +
\sum_{n=1}^\infty {1\over n!} \left[ \mp{\gamma\over \Omega d}
\partial_\Omega\right]^n {\rm Re}\, {i\over\Omega}
e^{(\gamma+i\Omega)nd}\Gamma[0, (\gamma+i\Omega)nd]\right\},\\
{\cal F}_{2\pm}(\gamma,\Omega, d) &=& {\hbar \over 4\pi}\left\{
{i\over 2\Omega}\left(\Omega^2-\gamma^2\right)
\ln {\gamma-i\Omega\over\gamma+i\Omega}+\gamma\left[2\Lambda_1 -
\ln\left( 1+{\gamma^2\over\Omega^2}\right)\right] + \right.
\nonumber\\ & & \left. \sum_{n=1}^\infty {1\over n!}
\left[ \mp{\gamma\over \Omega d}
\partial_\Omega\right]^n {\rm Re} {i\over\Omega}e^{(\gamma+i\Omega)nd}
(\gamma+i\Omega)^2\Gamma[0, (\gamma+i\Omega)nd] \right\},
\end{eqnarray}
for large frequency cutoff $\omega_{max}$, or the corresponding
$\Lambda_1$.
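The series above can be cross-checked by a direct numerical quadrature of the defining integral for ${\cal F}_{c\pm}$. The sketch below (illustration only) does this for $c=0$; the identification $\Omega_r^2=\Omega^2+\gamma^2$, the finite stand-in for $\omega_{max}$, and the parameter values are assumptions of the sketch rather than statements from the text.
\begin{verbatim}
# Hedged numerical sketch: evaluates F_{0,+/-}(gamma, Omega, d) by quadrature of
# its defining integral and compares Re F_{0,+/-} with the n = 0 term of the
# series, (hbar/4pi) Re[(i/2Omega) ln((gamma - i Omega)/(gamma + i Omega))].
import numpy as np
from scipy.integrate import trapezoid

hbar = 1.0
gamma, Omega, d = 0.02, 2.3, 5.0
Omega_r2 = Omega**2 + gamma**2            # assumed relation between Omega_r and Omega
w = np.linspace(1e-6, 200.0, 800001)      # finite stand-in for the UV cutoff omega_max

def F_c(c, sign):
    """F_{c,+} for sign=+1 and F_{c,-} for sign=-1, by direct quadrature."""
    denom = w**2 + 2j*gamma*w - Omega_r2 + sign*(2.0*gamma/d)*np.exp(1j*w*d)
    integrand = (1j*hbar/(4.0*np.pi)) * w**c / denom
    return trapezoid(integrand, w)

n0 = (hbar/(4*np.pi)) * (1j/(2*Omega)) * np.log((gamma - 1j*Omega)/(gamma + 1j*Omega))
print("Re F_{0+} =", F_c(0, +1).real)
print("Re F_{0-} =", F_c(0, -1).real)
print("n=0 term  =", n0.real)             # both values lie close to this for gamma*d << 1
\end{verbatim}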
Substituting the late-time correlators $(\ref{LTQ2})$-$(\ref{LTPx})$
into the covariance matrix ${\bf V}$, we get
\begin{eqnarray}
\Sigma |_{t\to \infty} &=&
\left( 16{\rm Re}{\cal F}_{0+}{\rm Re}{\cal F}_{2-}
-{\hbar^2\over 4}\right)
\left( 16{\rm Re}{\cal F}_{0-}{\rm Re}{\cal F}_{2+}
-{\hbar^2\over 4}\right),\\
\Upsilon|_{t\to\infty} &=&
\left( 16{\rm Re}{\cal F}_{0+}{\rm Re}{\cal F}_{2+}
-{\hbar^2\over 4}\right)
\left( 16{\rm Re}{\cal F}_{0-}{\rm Re}{\cal F}_{2-}
-{\hbar^2\over 4}\right).
\label{lateUnc}
\end{eqnarray}
Numerically we found that $16{\rm Re}{\cal F}_{0-}{\rm Re}{\cal F}_{2+}
-(\hbar^2/4)$ and $\Upsilon|_{t\to\infty}$ are positive in
the cases considered in this paper. We then identify the late-time
symplectic spectrum $(c_+, c_-)|_{t\to\infty}=
(4\sqrt{{\rm Re}{\cal F}_{0+}{\rm Re}{\cal F}_{2-}}, 4\sqrt{{\rm Re}
{\cal F}_{0-}{\rm Re}{\cal F}_{2+}})$. So if $16{\rm Re}
{\cal F}_{0+}{\rm Re}{\cal F}_{2-}-(\hbar^2/4)$ is negative, then
$\Sigma < 0$, $E_{\cal N}>0$, and the detectors are entangled.
\begin{figure}
\caption{Plots for $\Sigma$ (solid curve) and $\Upsilon$ (dashed curve)
at late times as a function of $d$, with parameters the same as those
in Fig. \ref{zerounc}.}
\label{dent}
\end{figure}
In the weak-coupling limit, keeping the correlators to $O(\gamma/d)$,
we have
\begin{eqnarray}
16{\rm Re}{\cal F}_{0+}{\rm Re}{\cal F}_{2-} -{\hbar^2\over 4} &\approx&
{\hbar^2\gamma\Lambda_1\over\pi\Omega} -
{\hbar^2\over\Omega^3} {\rm Re}\left\{ \left[
{i\gamma\Omega\over \pi d} + {2\gamma^2\Lambda_1\over\pi^2 d}(i+\Omega d)
\right] e^{i\Omega d}\Gamma[0,i\Omega d]\right\}, \label{soldent}
\end{eqnarray}
which is positive as $d\to\infty$, but negative when $d \to 0_+$. So
$(\ref{soldent})$ must cross zero at a finite ``entanglement distance"
$d_{ent} > 0$, where $\Sigma =0$. For $d < d_{ent}$, the detectors will have
residual entanglement, while for $d>d_{ent}$, the detectors are separable at
late times.
For small $\gamma$, $d_{ent}$ is almost independent of $\gamma$. We find
that when $\gamma\Lambda_1\ll\Omega$ and $\Lambda_1 \gg 1$,
\begin{equation}
d_{ent} \approx {\pi/(2\Omega)\over \Lambda_1-\ln{\pi\over 2\Lambda_1}}
\label{dentdef}
\end{equation}
is a good estimate provided $d_{ent}\ll 1$.
Here $d_{ent}$ is still much larger than the
``merge distance" $d_{min}$ in $(\ref{dmin})$. For example, as shown
in Fig. \ref{dent}, when $\gamma = 0.0001$, $\Omega = 2.3$,
$\Lambda_1 = 25$, one has $d_{ent}\approx 0.025$, which is quite a
bit greater than the ``radius of instability"
$2\gamma/\Omega_r^2 \approx 3.8\times 10^{-5}$, and much greater
than the merge distance $d_{min}\approx 9 \times 10^{-12}$.
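As a consistency check (not taken from the original paper), $d_{ent}$ can be located numerically by finding the zero of $(\ref{soldent})$ and compared with the estimate $(\ref{dentdef})$. In the sketch below, $\Gamma[0,z]$ is evaluated as the exponential integral $E_1(z)$ via {\tt scipy.special.exp1}, which accepts complex arguments.
\begin{verbatim}
# Hedged sketch: root of Eq. (soldent) versus the closed-form estimate (dentdef),
# using the parameters quoted above for Fig. "dent".
import numpy as np
from scipy.special import exp1               # E_1(z) = Gamma[0, z]
from scipy.optimize import brentq

hbar = 1.0
gamma, Omega, Lambda1 = 1.0e-4, 2.3, 25.0

def factor(d):
    """16 Re F_{0+} Re F_{2-} - hbar^2/4 in the weak-coupling approximation."""
    bracket = 1j*gamma*Omega/(np.pi*d) + 2*gamma**2*Lambda1/(np.pi**2*d)*(1j + Omega*d)
    corr = (hbar**2/Omega**3) * np.real(bracket * np.exp(1j*Omega*d) * exp1(1j*Omega*d))
    return hbar**2*gamma*Lambda1/(np.pi*Omega) - corr

d_ent = brentq(factor, 1e-3, 0.5)            # the sign change is bracketed here
d_est = (np.pi/(2*Omega)) / (Lambda1 - np.log(np.pi/(2*Lambda1)))
print("d_ent (root of soldent) ~", d_ent)    # both come out near the quoted 0.025
print("d_ent (estimate)        ~", d_est)
\end{verbatim}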
A corollary follows. If the initial state of the two detectors with
$d< d_{ent}$ is separable, then the presence of residual entanglement implies
that entanglement is created during the evolution. In
contrast, if the initial state of the two detectors with $d >
d_{ent}$ is entangled, then the late-time separability implies that
they become disentangled in a finite time. Examples will be given in the
next section.
Note that the ill behavior of $\Upsilon^{(0)}$ has been cured by mutual
influences. The uncertainty function
$(\ref{lateUnc})$ is positive for all $d$ at late times.
Note also that, while the corrections from the mutual influences to
$\left<\right. Q_A^2\left.\right>|_{t\to \infty}$ and $\left<\right.
P_A^2 \left.\right>|_{t\to \infty}$ are $O(\gamma/d)$, the mutual
influences have been included in the leading order
approximation for the cross correlators.
Indeed, in $(\ref{Fseries})$, already at $n=1$ we have
\begin{eqnarray}
\left<\right. Q_A,Q_B \left.\right>|_{t\to \infty} &\approx&
\left<\right. Q_A,Q_B \left.\right>^{(0)}|_{t\to \infty}-
{2 \hbar \gamma\over \pi} {4\gamma\over d} \int_0^\infty d\omega {\omega
\left[(\Omega^2_r-\omega^2)\cos\omega d-2\gamma\omega\sin\omega d\right]
\over \left[ (\omega^2-\Omega^2_r)^2+4 \gamma^2\omega^2\right]^2}.
\label{1stgamma}
\end{eqnarray}
However, this is slightly different from the approximation with the
first-order mutual influences included. Writing the $n=0$ and $n=1$
terms in Eq. $(\ref{qjp})$ as
\begin{equation}
q_j^{(+)} \approx q_{j,n=0}^{(+)} + q_{j,n=1}^{(+)},
\end{equation}
the approximate cross correlator with the first-order mutual
influences included is the $\omega$ integration of Re
$[(q_{A,n=0}^{(+)} + q_{A,n=1}^{(+)})(q_{B,n=0}^{(+)} +
q_{B,n=1}^{(+)})]$, whereas in $(\ref{1stgamma})$ only Re
$[q_{A,n=0}^{(+)}q_{B,n=0}^{(+)}+ q_{A,n=0}^{(+)} q_{B,n=1}^{(+)}+
q_{A,n=1}^{(+)}q_{B,n=0}^{(+)}]$ contributes, even though there are
$O(\gamma^0)$ terms in $q_{A,n=1}^{(+)}q_{B,n=1}^{(+)}$. The latter
are small for $\Omega d \gg 1$ and will be canceled by the
higher-order mutual influences.
\section{Entanglement dynamics in weak-coupling limit}
\subsection{Disentanglement at very large distance}
Suppose the two detectors are separated far enough $(\Omega d \gg 1)$ so
that the cross correlations and the mutual influences can be safely ignored.
Then in the weak-coupling limit ($\Omega \gg \gamma \Lambda_1$)
the zeroth-order results for the v-part of the self correlators dominate,
so that \cite{LCH08}
\begin{eqnarray}
\left<\right.Q_A^2\left.\right>_{\rm v} = \left<\right.Q_B^2\left.
\right>_{\rm v}&\approx& {\hbar\over 2\Omega} \left(1-e^{-2\gamma t}\right),
\label{Q2weak0}\\
\left<\right.P_A^2\left.\right>_{\rm v} = \left<\right.P_B^2\left.
\right>_{\rm v}&\approx& {\hbar\over 2}\Omega \left(1-e^{-2\gamma t}\right) +
{2\over\pi}\hbar\gamma\Lambda_1 , \label{P2weak0}
\end{eqnarray}
and $\left<\right.Q_A,P_A\left.\right>_{\rm v} = \left<\right.Q_B,P_B\left.
\right>_{\rm v} \sim O(\gamma)$, while the v-part of the cross correlators is
vanishingly small. This is exactly the case we considered in Sec. IV A 2
of Ref. \cite{LCH08}, where we found
\begin{equation}
\Sigma \approx {\hbar^2 e^{-4\gamma t}\over 16\alpha^2\beta^2\Omega^2}
\left[ Z_8 \left( e^{-4\gamma t} -2 e^{-2\gamma t}\right) + Z_4\right]
+ {\hbar^3 \gamma\Lambda_1 \over 4\pi\alpha^2\beta^2\Omega^2}
Z_2 e^{-2\gamma t} +
{\hbar^4\over \pi^2\Omega^2} \gamma^2\Lambda_1^2,
\label{Sig0}
\end{equation}
with $Z_8\ge 0$, $Z_8-Z_4 \ge 0$ and $Z_2 \ge 0$ [$Z_8$, $Z_4$ and
$Z_2$ are parameters depending on $\alpha$ and $\beta$, defined in
Eqs. $(37)$, $(38)$ and $(41)$ of Ref. \cite{LCH08}, respectively].
Accordingly, the detectors always disentangle in a finite time. $\Sigma$
can exhibit two kinds of behavior. For $Z_4 >0$,
the disentanglement time is a function of $Z_4$, $Z_8$ and $\gamma$,
\begin{equation}
t^{(0)}_{dE>} \approx -{1\over 2\gamma}\ln\left(
1-\sqrt{1-{Z_4\over Z_8}}\right), \label{tdEZ4p}
\end{equation}
while for $Z_4<0$, the disentanglement time is much longer,
\begin{equation}
t^{(0)}_{dE<} \approx {1\over 2\gamma}
\ln { |Z_4|\pi/(2\hbar \gamma\Lambda_1)
\over Z_2 + \sqrt{Z_2^2-4\alpha^2\beta^2 Z_4} } , \label{tdEZ4n}
\end{equation}
and depends on $\Lambda_1$.
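For orientation, the two estimates $(\ref{tdEZ4p})$ and $(\ref{tdEZ4n})$ can be evaluated as in the sketch below. The numerical values assigned to $Z_2$, $Z_4$ and $Z_8$ are hypothetical placeholders, since their actual expressions are those of Ref. \cite{LCH08} and depend on $\alpha$ and $\beta$.
\begin{verbatim}
# Illustrative sketch only: evaluates t^{(0)}_{dE>} and t^{(0)}_{dE<} with
# placeholder values for Z_2, Z_4, Z_8 (the real expressions are in Ref. LCH08).
import numpy as np

hbar = 1.0
gamma, Lambda1 = 1.0e-4, 25.0
alpha, beta = 1.5, 0.2                       # example initial-state parameters

def t_dE_pos(Z4, Z8):
    """Disentanglement time for Z_4 > 0, Eq. (tdEZ4p)."""
    return -np.log(1.0 - np.sqrt(1.0 - Z4/Z8)) / (2.0*gamma)

def t_dE_neg(Z4, Z2):
    """Disentanglement time for Z_4 < 0, Eq. (tdEZ4n)."""
    num = abs(Z4)*np.pi/(2.0*hbar*gamma*Lambda1)
    den = Z2 + np.sqrt(Z2**2 - 4.0*alpha**2*beta**2*Z4)
    return np.log(num/den) / (2.0*gamma)

print("t_dE (Z4 > 0, placeholder values):", t_dE_pos(Z4=0.5, Z8=2.0))
print("t_dE (Z4 < 0, placeholder values):", t_dE_neg(Z4=-0.5, Z2=1.0))
\end{verbatim}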
\subsection{Disentanglement at large distance}
When $d$ is large (so $1/\Omega d$ is small) but not too large to make
all the mutual influences negligible,
while the zeroth-order results for the v-part of the self-correlators
$(\ref{Q2weak0})$ and $(\ref{P2weak0})$ are still good, the
first-order correction [$n=1$ terms in ($\ref{qjp}$)] to the cross
correlators $\left<\right. Q_A, Q_B \left.\right>$ can be of the
same order of $\left<\right. Q_A, Q_B \left.\right>^{(0)}$ (a similar
observation on the late-time correlators has been mentioned in the
end of Sec. \ref{resient}). Including the first-order correction,
for $d > O(1/\sqrt{\gamma\Omega})$, we have a simple expression,
\begin{eqnarray}
\left<\right. Q_A, Q_B \left.\right>_{\rm v} &=&
\left<\right. Q_A, Q_B \left.\right>^{(0)}_{\rm v} + \theta(t-d)
{\hbar\over 2\Omega}{\sin\Omega d\over\Omega d}e^{-\gamma d}
\left[-1 + e^{-2\gamma (t-d)}\left(1+2 (t-d)\gamma \right)
+ O(\gamma/\Omega) \right] \nonumber\\
&\approx& \theta(t-d){\hbar \over \Omega}{\sin\Omega d\over\Omega d}
e^{-\gamma d}\gamma (t-d)e^{-2\gamma (t-d)} , \label{QLQRweak}
\end{eqnarray}
and $\left<\right. P_A, P_B \left.\right>_{\rm v}\approx \Omega^2
\left<\right. Q_A, Q_B \left.\right>_{\rm v}$ with other two-point functions
$\left<\right. .. \left.\right>_{\rm v}$ being $O(\gamma)$ for all $t$.
Here $\left<\right. Q_A, Q_B \left.\right>^{(0)}_{\rm v}$ in the weak-coupling
limit has been shown in $(\ref{QLQRv0wcl})$. The above approximation is good
over the time interval from $t=0$ up to $e^{-2\gamma(t-d)}>O(\gamma/\Omega)$,
namely, before $t-d \sim O(-\gamma^{-1}\ln(\gamma/\Omega))$.
Still, in this first-order approximation, $\left<\right. Q_A, Q_B
\left.\right>_{\rm v}$ and $\left<\right. P_A, P_B \left.\right>_{\rm
v}$ are the only correlators depending on the separation $d$.
Inserting these approximate expressions for the correlators into the
definition of $\Sigma$ or $E_{\cal N}$, we find that the interference
pattern in $d$ of the relative values of $\Sigma$ or $E_{\cal N}$
at early times (Fig. \ref{zeroS}) can persist through the disentanglement
process, making the disentanglement time $t_{dE}$ longer or shorter
than its value at $d\to\infty$, though the contrast decays noticeably
compared with that at early times. Two examples are shown in Fig.
\ref{tdEvsd}. For $Z_4>0$, the disentanglement time is about
\begin{equation}
t_{dE>} \approx t^{(0)}_{dE>} - {Z_6
\left(t^{(0)}_{dE>}-d\right)e^{\gamma d}\sin\Omega d \over Z_8 d
\left(1- e^{-2\gamma t^{(0)}_{dE>}}\right) + Z_6 \left[1-2\gamma
\left(t^{(0)}_{dE>}-d \right)\right]e^{\gamma d}\sin\Omega d},
\end{equation}
where $Z_6 \equiv (\hbar^2-\alpha^2\beta^2)(\alpha^2\Omega^2-\beta^2)$
[Fig. \ref{tdEvsd} (left)].
In this case the disentanglement time can be short compared to the
time scales $O(n/\gamma)$, $n\in \mathbb{N}$, at which the higher-order corrections
$q_n$ from the mutual influences reach their maximum values (see Sec.
\ref{EvoOp}). So in the weak-coupling limit the above estimate could be
good from large $d$ all the way down to $\Omega d \sim O(1)$, still
much greater than $\Omega d_{ent}$. If this is true, the differences in
disentanglement time for different spatial separations can be significant
at small $d$. For example, for $(\alpha, \beta) =(1.5, 0.2)$ with the other
parameters the same as those in Fig. \ref{tdEvsd}, the disentanglement
time at $d\approx 4.4934/\Omega$ (where $\sin\Omega d/ \Omega d$
has its global minimum) is over $1.6$ times longer than that at
$d\approx 7.7253/\Omega$ (where the first peak of $\sin\Omega d/
\Omega d$ is located).
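The two quoted positions are the first two nonzero extrema of $\sin\Omega d/\Omega d$, i.e. solutions of $\tan x = x$ with $x=\Omega d$; the short check below (illustration only) reproduces them.
\begin{verbatim}
# Quick check: the extrema of sinc(x) = sin(x)/x solve tan(x) = x.
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.tan(x) - x
# bracket each root inside a single branch of tan(x), away from its poles
x_min  = brentq(f, np.pi + 1e-6, 1.5*np.pi - 1e-6)    # ~ 4.4934, global minimum of sinc
x_peak = brentq(f, 2*np.pi + 1e-6, 2.5*np.pi - 1e-6)  # ~ 7.7253, first secondary peak
print(x_min,  np.sinc(x_min/np.pi))    # 4.4934...,  sinc value ~ -0.2172
print(x_peak, np.sinc(x_peak/np.pi))   # 7.7253...,  sinc value ~ +0.128
\end{verbatim}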
For $Z_4<0$, the correction proportional to $\sin \Omega d$ is below the precision of
$t^{(0)}_{dE<}$ estimated in $(\ref{tdEZ4n})$. Here we just show the
numerical result, up to the first-order mutual influences,
in Fig. \ref{tdEvsd} (right), which shows that the interference pattern
in $d$ is suppressed but still nonvanishing for large disentanglement times.
\begin{figure}
\caption{The plot of $\Sigma$ as a function of $d$ and $t$,
up to the first-order correction.
$\Sigma$ is negative in the dark region and positive in the bright region.
For a fixed $d$, the disentanglement time $t_{dE}$ is the time at which $\Sigma$ crosses zero.}
\label{tdEvsd}
\end{figure}
\subsection{Entanglement generation at very short distance}
\label{createEnt}
\begin{figure}
\caption{(Upper left)
The solid curve and the long-dashed curve represent the values of $\Sigma$
and $\Upsilon$, respectively, while the dotted line is for the value of
$\Sigma$ with all $\left<\right. .. \left.\right>_{\rm v}$ neglected.}
\label{EntGen}
\end{figure}
When $\Omega d \sim O(\epsilon)$, $\gamma/\Omega \sim O(\epsilon^2)$,
and $\epsilon \ll 1$, one can perform a dimensional reduction on
the third derivatives in $(\ref{eomsmalld})$, namely,
\begin{equation}
\dddot{q}_\pm^{(j)} \approx -{\Omega_r^2 \mp {2\gamma\over d} \over
1\mp\gamma d}\dot{q}_\pm^{(j)},
\end{equation}
to obtain, up to $O(\epsilon^5)$,
\begin{eqnarray}
\ddot{q}^{(j)}_\pm +2\gamma_\pm \dot{q}^{(j)}_\pm +\Omega^2_\pm
q^{(j)}_\pm &\approx& 0, \label{eomqje5}\\
\ddot{q}^{(+)}_\pm +2\gamma_\pm \dot{q}^{(+)}_\pm +\Omega^2_\pm
q^{(+)}_\pm &\approx& \lambda_\pm \left( e^{-i k_1 d/2 }
\pm e^{i k_1 d/2}\right)e^{-i\omega t}, \label{eomqpme5}
\end{eqnarray}
where $j=A,B$, $q_\pm^{(+)} \equiv q_A^{(+)} \pm q_B^{(+)}$ and
\begin{eqnarray}
\gamma_- &\equiv& {\gamma d^2\over 6}{\left(\Omega_r^2 +
{2\gamma\over d}\right)\over (1+\gamma d)^2}, \\
\gamma_+ &\equiv& {2\gamma\over 1-\gamma d} - {\gamma d^2\over 6}
{\left(\Omega_r^2 - {2\gamma\over d}\right)\over (1-\gamma d)^2},\\
\Omega^2_\pm &\equiv& {\Omega_r^2 \mp {2\gamma\over d}
\over 1 \mp \gamma d}, \,\,\,\,\,
\lambda_\pm \equiv {\lambda_0\over 1\mp \gamma d}.
\end{eqnarray}
Here $\gamma_- / \gamma_+$ is of $O(\epsilon^2)$. Note that $q_-^{(j)}$
and the decay modes in $q_-^{(+)}$ have subradiant behavior, while
$q_+^{(j)}$ and the decay modes in $q_+^{(+)}$ are superradiant. For
small $d$, the time scales satisfy $\gamma_-^{-1} \gg \gamma^{-1} >
\gamma_+^{-1}\approx 1/(2\gamma)$, and $\gamma_-^{-1}$ goes to infinity
as $d\to 0$.
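A minimal numerical sketch, with assumed values of $\gamma$, $\Omega_r$ and $d$, of the sub- and superradiant rates and frequencies defined above; it illustrates $\gamma_-/\gamma_+ = O(\epsilon^2)$ and the hierarchy $\gamma_-^{-1}\gg\gamma^{-1}>\gamma_+^{-1}$.
\begin{verbatim}
# Illustration only: evaluates gamma_-, gamma_+, Omega_-, Omega_+ for assumed
# parameters with Omega*d ~ O(epsilon) and gamma/Omega ~ O(epsilon^2).
import numpy as np

gamma, Omega_r, d = 1.0e-4, 2.3, 0.01

gm = (gamma*d**2/6.0)*(Omega_r**2 + 2*gamma/d)/(1 + gamma*d)**2              # gamma_-
gp = 2*gamma/(1 - gamma*d) \
     - (gamma*d**2/6.0)*(Omega_r**2 - 2*gamma/d)/(1 - gamma*d)**2            # gamma_+
Om_minus = np.sqrt((Omega_r**2 + 2*gamma/d)/(1 + gamma*d))                   # Omega_-
Om_plus  = np.sqrt((Omega_r**2 - 2*gamma/d)/(1 - gamma*d))                   # Omega_+

print("gamma_-/gamma_+ =", gm/gp)            # ~ (Omega_r d)^2 / 12, i.e. O(epsilon^2)
print("1/gamma_-, 1/gamma, 1/gamma_+ =", 1/gm, 1/gamma, 1/gp)
print("Omega_-, Omega_+ =", Om_minus, Om_plus)
\end{verbatim}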
The solutions for $(\ref{eomqje5})$ and $(\ref{eomqpme5})$ with suitable
initial conditions are
\begin{eqnarray}
q_j^{(j)}\pm \bar{q}_j^{(j)} &=& {1\over 2} e^{-\gamma_\pm t}
\left[s_1^{\pm}e^{i\Omega_\pm t}+s_2^\pm e^{-i\Omega_\pm t}\right],\\
q_A^{(+)}\pm q_B^{(+)} &=& {\lambda_\pm \over \Omega_\pm}
\left( e^{-i k_1 d/2 }\pm e^{i k_1 d/2}\right)\left[
\left(M_1^\pm-M_2^\pm\right)e^{-i\omega t}+ e^{-\gamma_\pm t}
\left( M_2^\pm e^{i\Omega_\pm t}-M_1^\pm e^{-i\Omega_\pm t}\right)\right],
\end{eqnarray}
where $s_1^\pm \equiv [1 -\Omega_\pm^{-1}(\Omega_r+ i\gamma_\pm)]/2$,
$s_2^\pm \equiv [1 + \Omega_\pm^{-1}(\Omega_r+ i\gamma_\pm)]/2$,
$M_1^\pm \equiv (-\omega-i\gamma_\pm +\Omega_\pm)^{-1}$, and
$M_2^\pm \equiv (-\omega-i\gamma_\pm -\Omega_\pm)^{-1}$.
These solutions are simply the zeroth-order results with $\gamma$ and
$\Omega$ replaced by $\gamma_\pm$ and $\Omega_\pm$.
So we readily obtain the simple expressions
\begin{eqnarray}
\left<\right. Q_A^2 \left.\right>_{\rm v} &\approx&
{\lambda_+^2\over 16\pi\gamma_+}\left[
\left<\right. Q_A^2 \left.\right>_{\rm v}^{(0)}
+ \left<\right. Q_A, Q_B \left.\right>_{\rm v}^{(0)}
\right]_{\gamma\to\gamma_+}^{\Omega\to\Omega_+} +
{\lambda_-^2\over 16\pi\gamma_-}\left[
\left<\right. Q_A^2 \left.\right>_{\rm v}^{(0)} -
\left<\right. Q_A, Q_B \left.\right>_{\rm v}^{(0)}
\right]_{\gamma\to\gamma_-}^{\Omega\to\Omega_-},\\
\left<\right. Q_A,Q_B \left.\right>_{\rm v} &\approx&
{\lambda_+^2\over 16\pi\gamma_+}\left[
\left<\right. Q_A, Q_B \left.\right>_{\rm v}^{(0)}
+ \left<\right. Q_A^2 \left.\right>_{\rm v}^{(0)}
\right]_{\gamma\to\gamma_+}^{\Omega\to\Omega_+} +
{\lambda_-^2\over 16\pi\gamma_-}\left[
\left<\right. Q_A, Q_B \left.\right>_{\rm v}^{(0)}
-\left<\right. Q_A^2 \left.\right>_{\rm v}^{(0)}
\right]_{\gamma\to\gamma_-}^{\Omega\to\Omega_-},
\end{eqnarray}
and so on. Here $\left<\right. .. \left.\right>_{\rm v}^{(0)}$ are
the expressions given in $(\ref{QLQRv0})$-$(\ref{PLQRv0})$ above
and in Eqs. (A9) and (A10) of Ref. \cite{LH2006} (with $\left<\right. Q_A,
P_A \left.\right>_{\rm v} = \partial_t \left<\right. Q_A^2\left.
\right>_{\rm v}/2$). The prefactors $\lambda_\pm^2/16\pi\gamma_\pm$
appear because in our definitions of the zeroth-order results
the overall factor $\lambda_0^2$ has been expressed in terms of
$8\pi\gamma$, whereas now $\gamma_\pm\not= \lambda_\pm^2/8\pi$.
In Fig. \ref{EntGen} we show an example in which the two
detectors are initially separable but become entangled at late
times. There are three stages in their evolution:
1. At a very early time ($t \approx 0.15$) quantum entanglement is
generated. This entanglement generation is dominated by the
mutual influences sourced by the initial information in the detectors
and mediated by the field. (For more early-time analysis, see
Appendix \ref{EarlyAna}.)
2. Then, around the time scale $t\sim 1/\gamma_+$, the contribution from
vacuum fluctuations of the field ($\left<\right. .. \left.\right>_{\rm v}$)
takes over, so that $\Sigma$ becomes quasisteady and appears to settle
down at a value depending on part of the initial data of the detectors.
More explicitly, at this stage $q_+^{(\mu)}$, $\mu =A, B, +, -$, have
reached their late-time values but $q_-^{(\mu)}$ are still close to
their initial values, so
\begin{equation}
\Sigma|_{t\sim 1/\gamma_+}\approx {\hbar^4\over 64}\left[{\sin\Omega
d\over\Omega d}e^{-2\gamma d} + 1 -{2\over\hbar}\alpha^2\Omega \right]
\left[ {\sin\Omega d\over \Omega d}e^{-2\gamma d} + 1 -
{2\hbar\over\alpha^2\Omega} +{8\Lambda_1\gamma\over\pi\Omega}\right]
\label{transSig}
\end{equation}
in the weak-coupling and short-distance approximation $\gamma \ll d\Omega^2
\ll \Omega$. Here $\Sigma$ depends on $\alpha$ only. The parameter $\beta$
in the initial state $(\ref{initGauss})$ is always associated with $q_+^{(j)}$
in $\left<\right. .. \left.\right>_{\rm a}$, so it becomes negligible at
this stage [cf. Eq. (25) in \cite{LCH08}]. Note that $\Sigma|_{t\sim 1/
\gamma_+}$ can be positive for small $d$ only when $\alpha$ is in the
neighborhood of $\sqrt{\hbar/\Omega}$ (see the numerical sketch at the
end of this subsection).
3. The remaining initial data persist until a much longer time scale
$t\sim 1/\gamma_-$ when $\Sigma$ approaches a value consistent with
the late-time results given in Sec. \ref{resient}, which are
contributed purely by the vacuum fluctuations of the field and
independent of any initial data in the detectors. In this example the
detectors have residual entanglement, though it is small compared to that
in stage 2.
The above behaviors in stages 2 and 3 cannot be obtained by
including only the first-order correction from the mutual influences.
Thus in this example we conclude that the mutual influences of the
detectors at very short distance generate a transient entanglement
between them at intermediate times,
while the vacuum fluctuations of the field, with the mutual influences
included, give the residual entanglement of the detectors at late times.
For detectors that are initially entangled, only the early-time behavior
looks different from the above description; in the second and third
stages their entanglement dynamics are similar to the above.
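The quasisteady value $(\ref{transSig})$ reached in stage 2 can be evaluated directly. The sketch below (with assumed parameter values, not those of Fig. \ref{EntGen}) scans $\alpha$ and illustrates that $\Sigma|_{t\sim 1/\gamma_+}$ is negative except in a narrow window of $\alpha$ around $\sqrt{\hbar/\Omega}$.
\begin{verbatim}
# Hedged sketch: evaluates Eq. (transSig) for several values of alpha at small d.
import numpy as np

hbar = 1.0
gamma, Omega, Lambda1, d = 1.0e-4, 2.3, 25.0, 0.01   # assumed values

def sigma_trans(alpha):
    s = np.sinc(Omega*d/np.pi) * np.exp(-2*gamma*d)   # (sin Omega d)/(Omega d) e^{-2 gamma d}
    f1 = s + 1.0 - 2.0*alpha**2*Omega/hbar
    f2 = s + 1.0 - 2.0*hbar/(alpha**2*Omega) + 8.0*Lambda1*gamma/(np.pi*Omega)
    return (hbar**4/64.0) * f1 * f2

for a in (0.5, np.sqrt(0.999*hbar/Omega), np.sqrt(hbar/Omega), 1.5):
    print(f"alpha = {a:.4f}   Sigma|_(t ~ 1/gamma_+) = {sigma_trans(a):+.3e}")
# Sigma is negative except for alpha in a narrow window around sqrt(hbar/Omega).
\end{verbatim}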
\section{Discussion}
\subsection{Physics represented by length scales}
The physical behavior of the system we studied may be characterized
by the following length scales:
{\it Merge distance $d_{min}$ in Eq.}(\ref{dmin}).--Two detectors
separated at a distance less than $d_{min}$ would be viewed as those
located at the same spatial point;
{\it Radius of instability $d_{ins}$ in Eq.}(\ref{dins}).--For
any two detectors at a distance less than $d_{ins}$, their mode
functions will grow exponentially in time, so the quantum fluctuations of
the detectors diverge at late times;
{\it Entanglement distance $d_{ent}$ in Eq.}$(\ref{dentdef})$.--Two
detectors at a distance less than $d_{ent}$ will be entangled at
late times, otherwise separable;
And $d_0$ defined in Sec. \ref{0bad}.--For $d < d_0$ the zeroth-order
results break down.
A stable theory should have $d_{ent}$ and $d_{min}$ greater than $d_{ins}$.
\subsection{Direct interaction and effective interaction}
In a closed bipartite system a direct interaction between the two
parties, no matter how weak it is, will generate
entanglement at late times. However, as we showed above, an effective
interaction between the two detectors mediated by quantum fields will
not generate residual entanglement (though creating transient
entanglement is possible) if the two detectors are separated far
enough, in which case the strength of the effective interaction is weak but
nonvanishing.
\subsection{Comparison with 2HO QBM results}
When $d \to d_{min}$ with large enough $\Omega$, our model will
reduce to a 2HO QBM model with real renormalized natural frequencies
for the two harmonic oscillators. Paz and Roncaglia \cite{PR07} have
studied the entanglement dynamics of this 2HO QBM model and found
that, at zero temperature, for both oscillators with the same natural
frequency, there exists residual entanglement at late times in some
cases and infinite sequences of sudden death and revival in other
cases. In the latter case the averaged asymptotic value of negativity
is still positive and so the detectors are ``entangled on average."
While our results show that the late-time behavior of the detectors
is independent of their initial state, the asymptotic
value of the negativity at late times in \cite{PR07} does depend on
the initial data of the detectors (their initial squeezing factor).
This is because in \cite{PR07} the two oscillators are located
exactly at the same point, namely $d=0$, so $\gamma_- =0$ and the
initial data carried by $q_-^{(j)}$ persist forever. Since in our
case $d$ is not zero, the ``late" time in \cite{PR07} actually
corresponds to the time interval $(1/\gamma_+) \ll t \ll
(1/\gamma_-)$ here, which is not particularly late for our
detectors.
\subsection{Where is the spatial dependence of entanglement coming from?}
Two factors are responsible for the spatial dependence of
entanglement. The first is the phase difference of the vacuum
fluctuations experienced by the two detectors. This is mainly
responsible for the entanglement outside the light cone at all
coupling strengths, and for that inside the light cone
at sufficiently large separation in the weak-coupling limit,
such as the cases in Sec. \ref{ZOR}.
The second factor is the interference of the retarded mutual
influences, which are generated by the backreaction from the detectors
on the field. It is important in the cases with small separation
between the detectors, such as those in Sec. \ref{createEnt}.
\subsection{Non-Markovian behavior and strong coupling}
In our prior work \cite{LH2006} and \cite{LCH08}, the non-Markovian
behavior arises mainly from the vacuum fluctuations experienced by
the detectors, and the essential temporal nonlocality in the
autocorrelation of the field at zero temperature manifests fully in
the strong-coupling regime. Nevertheless, in Sec. \ref{createEnt} one
can see that, even in the weak-coupling limit, once the spatial
separation is small enough and the evolution time is long enough, the
mutual influences will create some non-Markovian behavior very
different from those results obtained from perturbation theory with
higher-order mutual influences on the mode functions neglected.\\
\noindent{\bf Acknowledgement} S.Y.L. wishes to thank Jen-Tsung Hsiang
for helpful discussions. This work is supported in part by NSF Grants
No. PHY-0426696, No. PHY-0601550, and No. PHY-0801368, and by the
Laboratory for Physical Sciences.
\begin{appendix}
\section{Late-time analysis on mode functions}
\label{LateAna}
Let
\begin{equation}
q_+^{(A)} (t) = \sum_j c_j e^{i K_j t}, \label{qR+FT}
\end{equation}
Equation (\ref{eomqpm}) gives
\begin{equation}
\sum_j c_j\left[-K_j{}^2+ 2i\gamma K_j + \Omega_r^2 \right]
e^{i K_j t}= {2\gamma\over d} \sum_{j'}c_{j'} e^{i K_{j'}(t-d)} .
\end{equation}
At late times, one is allowed to perform the
Fourier transformation on both sides with $t$ integrations over
$(-\infty, \infty)$ to obtain
\begin{equation}
-K_j^2+ 2i\gamma K_j + \Omega_r^2 = {2\gamma\over d} e^{-i K_j d} .
\label{eomK}
\end{equation}
There are infinitely many solutions for $K_j$ in the complex $K$
plane, so one needs infinitely many initial conditions to fix the
factors $c_j$. Our $q_+$, chosen as a free oscillator at the initial
moment and unaffected by its own history until $t=d$, can in principle
be specified by such a set of $c_j$'s. Suppose this is the case. Writing $K_j
\equiv x_j + i y_j$, the real and imaginary parts of $(\ref{eomK})$
then read
\begin{eqnarray}
(y-\gamma)^2 -x^2 + \Omega^2 &=& {2\gamma\over d} e^{y d}\cos x d,
\label{eomRe} \\
x (y - \gamma) &=& {\gamma\over d} e^{y d}\sin x d. \label{eomIm}
\end{eqnarray}
The solutions for them are shown in Fig. \ref{SolK}. The left-hand
side of $(\ref{eomRe})$ is a saddle surface over the $xy$ space,
while the right-hand side of $(\ref{eomRe})$ is exponentially growing
in the $+y$ direction and oscillating in the $x$ direction. For
$(\ref{eomIm})$, the situation is similar. From Fig. \ref{SolK}, one
can see that there is no complex solution for $K$ with nonvanishing
real part and negative imaginary part ($x\not=0$ and $y\le 0$). The
solutions for $K$ with its imaginary part negative must be purely
imaginary. Indeed, from $(\ref{eomIm})$ and Fig. \ref{SolK}
(upper right), one sees that
when $x\not= 0$, if $y\le 0$, then $(y-\gamma)\le -\gamma$, but $-0.2172
\gamma \alt \gamma e^{y d}(\sin x d)/(x d) < \gamma $, so there is no
solution of $(\ref{eomIm})$ with $y\le 0$ and $x\not=0$.
When $\Omega_r^2 > 2\gamma/d$, one finds that all solutions for $K$ in
$(\ref{eomK})$ are located in the upper half of the complex $K$ plane,
{\it i.e.}, all $y_j>0$, which means that all modes in $(\ref{qR+FT})$
decay at late times.
When $\Omega_r^2 = 2\gamma/d$, there exists a solution $K=0$, with
other solutions on the upper half $K$ plane. This implies that
$q_{+}^{(A)}$ becomes a constant at late times.
When $\Omega_r^2 < 2\gamma/d$, there must exist one and only one solution
for $K$ with negative $y$, which corresponds to the unstable growing mode.
This is consistent with our observation in Sec.\ref{instab}.
Therefore, we conclude that $q_+^{(A)}$ is stable and decays at late
times only for $\Omega_r^2 > 2\gamma/d$.
As for $q_-^{(A)}$,
from $(\ref{EOMqR-})$ it seems that $q_-^{(A)}$ would oscillate at
late times. However, similar analysis gives the conclusion that
$q_-^{(A)}$ decays at late times for all cases. Thus, by symmetry,
all $q_{j}^{(i)}$ decay at late times in the stable regime
$\Omega_r^2 > 2\gamma/d$.
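The transcendental equation $(\ref{eomK})$ can also be solved numerically. The sketch below (illustration only; the identification $\Omega_r^2=\Omega^2+\gamma^2$ and the parameter values are assumptions) applies a complex Newton iteration starting from the unperturbed roots $K=\pm\Omega+i\gamma$ and confirms that both roots have positive imaginary parts in the stable regime.
\begin{verbatim}
# Hedged sketch: complex roots of -K^2 + 2 i gamma K + Omega_r^2 = (2 gamma/d) e^{-i K d}.
import numpy as np

gamma, Omega, d = 0.01, 2.3, 5.0
Omega_r2 = Omega**2 + gamma**2           # assumed relation Omega_r^2 = Omega^2 + gamma^2
assert Omega_r2 > 2*gamma/d              # stable regime

def g(K):
    return -K**2 + 2j*gamma*K + Omega_r2 - (2*gamma/d)*np.exp(-1j*K*d)

def gprime(K):
    return -2*K + 2j*gamma + 2j*gamma*np.exp(-1j*K*d)

def newton(K, steps=50):
    for _ in range(steps):
        K = K - g(K)/gprime(K)
    return K

for K0 in (+Omega + 1j*gamma, -Omega + 1j*gamma):
    K = newton(K0)
    print(f"K = {K.real:+.6f}{K.imag:+.6f}i   |g(K)| = {abs(g(K)):.1e}   Im K > 0: {K.imag > 0}")
\end{verbatim}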
\begin{figure}
\caption{(Upper left) The solutions to $(\ref{eomK})$.}
\label{SolK}
\end{figure}
Now we turn to $q^{(+)}_{A,B}$. Equation $(\ref{eomqA+2})$ implies that
\begin{eqnarray}
\left( \partial_t^2 +2\gamma \partial_t + \Omega_r^2 \right)^2
q_B^{(+)}(t,{\bf k}) &=&
\left({2\gamma\over d}\right)^2 q_B^{(+)}(t-2d,{\bf k})+ \nonumber\\
& & {\lambda_0}e^{-i\omega t}\left[\left( -\omega^2-2i\gamma\omega
+\Omega_r^2\right)e^{i k_1 d/2} + {2\gamma\over d}
e^{i\omega d -i k_1 d/2}\right],
\end{eqnarray}
at late times. Again, let
\begin{equation}
q^{(+)}_{B} (t,{\bf k}) = \sum_j c^j_{\bf k} e^{i K^j_{\bf k} t},
\end{equation}
then one has
\begin{eqnarray}
\sum_j c^j_{\bf k} \left[ -\left(K_{\bf k}^j\right)^2 +
2i\gamma K_{\bf k}^j + \Omega_r^2 \right]^2 e^{i K^j_{\bf k} t} &=&
\sum_j c^j_{\bf k}\left({2\gamma\over d}\right)^2 e^{i K^j_{\bf k}(t-2d)}+
\nonumber\\ & & {\lambda_0}e^{-i\omega t}\left[\left( -\omega^2-
2i\gamma\omega +\Omega_r^2\right)e^{i k_1 d/2} + {2\gamma\over d}
e^{i\omega d -i k_1 d/2}\right].
\end{eqnarray}
After a Fourier transformation, for $K^j_{\bf k}\not= -\omega$,
the above equation becomes
\begin{eqnarray}
\left[ -\left(K_{\bf k}^j\right)^2 +
2i\gamma K_{\bf k}^j + \Omega_r^2 \right]^2 &=&
\left({2\gamma\over d}\right)^2 e^{-2 i K^j_{\bf k}d},
\end{eqnarray}
which is the square of Eq.$(\ref{eomK})$ for $q_+^{(A)}$, or the square
of the counterpart for $q_-^{(A)}$. So these $K^j_{\bf k}$ modes decay at
late times for $\Omega_r^2 > 2\gamma/d$ as $q_+^{(A)}$ and $q_-^{(A)}$ do.
On the other hand, if, say, $K^0_{\bf k} = -\omega$, one has
\begin{eqnarray}
\left[ -\omega^2 + 2i\gamma \omega + \Omega_r^2 \right]^2 c^0_{\bf k}
&=& \left({2\gamma\over d}\right)^2 c^0_{\bf k}e^{-2i\omega d}+
\nonumber\\ & & {\lambda_0}\left[\left( -\omega^2-2i\gamma\omega
+\Omega_r^2\right)e^{i k_1 d/2} + {2\gamma\over d}
e^{i\omega d -i k_1 d/2}\right].
\end{eqnarray}
This equation will not hold unless
\begin{equation}
c^0_{\bf k} = {{\lambda_0}\left[\left( -\omega^2-2i\gamma\omega
+\Omega_r^2\right)e^{i k_1 d/2} + {2\gamma\over d}
e^{i\omega d -i k_1 d/2}\right]\over
\left[ -\omega^2 + 2i\gamma \omega + \Omega_r^2 \right]^2 -
\left({2\gamma\over d}\right)^2 e^{-2i\omega d}}. \label{lateTc0}
\end{equation}
Therefore, for $\Omega_r^2 > 2\gamma/d$, the only mode which survives
at late times will be $e^{-i\omega t}$, and
\begin{equation}
q^{(+)}_B (t,{\bf k})|_{t\gg 1/\gamma} = c^0_{\bf k}e^{-i\omega t}.
\label{lateTqp}
\end{equation}
This is nothing but the sum of the $e^{-i\omega (t-nd)}$ parts in Eq.
$(\ref{qjp})$ with $t \to \infty$, so that the sum runs from $n=0$ to $\infty$.
Thus, $(\ref{lateTqp})$ with $(\ref{lateTc0})$ includes the
mutual influences to all orders. The above analysis also indicates
that the $e^{-\gamma (t-nd)}$ part in $(\ref{qjp})$ indeed decays at
late times for $\Omega_r^2 > 2\gamma/d$.
\section{Early-time behaviors in weak-coupling limit}
\label{EarlyAna}
\begin{figure}
\caption{The early-time evolution of $E_{\cal N}$.}
\label{firstS2}
\end{figure}
\begin{figure}
\caption{The early-time evolution of $E_{\cal N}$.}
\label{firstS1}
\end{figure}
In the weak-coupling limit,
the cross correlators $\left<\right.{\cal R}_A, {\cal R}'_B \left.
\right>$ with ${\cal R}, {\cal R}'= Q,P$ are small until one
detector enters the other's light cone. From this observation one might
conclude that the cross correlations between the two detectors are
mainly generated by the mutual influences sourced by the quantum state
of the detectors and mediated by the field. This is not always true.
As shown in Sec. \ref{Earlyd<t}, the interference pattern inside the
light cone is already present in the zeroth-order results, where the mutual
influences on the mode functions are not included. A comparison of
the first-order results in the upper plots of Fig. \ref{firstS2} with
the zeroth-order ones in Fig. \ref{zeroS} shows that the corrections
to the entanglement dynamics from the mutual influences at early times are
quite small in that case. In fact the early-time dynamics of entanglement
in both examples in Fig. \ref{firstS2} is dominated by the zeroth-order
results, and thus by the phase difference of the vacuum fluctuations in
$\left<\right.{\cal R}_A,{\cal R}'_B\left. \right>_{\rm v}^{(0)}$ rather
than by the mutual influences. One can see this explicitly by inserting the
mode functions in the weak-coupling limit, with the first-order correction
from the mutual influences, into Eq. (25) in Ref. \cite{LCH08}, and writing
\begin{eqnarray}
\Sigma(t) \approx \Sigma_0 + \sigma_1^{(0)} t + \sigma_2^{(0)} t^2 +
\theta(t-d)\left[\sigma_1^{(1)} (t-d)+ \sigma_2^{(1)} (t-d)^2\right] +
O(\gamma^3) \label{earlySig}
\end{eqnarray}
at early times when $O(e^{-\gamma_e -(\Lambda_0/2)}/\Omega) < t \ll
O(1/\gamma\Lambda_i)$, $i=0,1$. Here $\Sigma_0$, $\sigma_1$, and $\sigma_2$
depend on $\alpha$ and $\beta$, and are of $O(\gamma^0)$, $O(\gamma)$, and
$O(\gamma^2)$, respectively. It is then straightforward to verify that the
mutual influences are negligible in the dominant $\sigma_1^{(1)}$ term
following $\theta(t-d)$, for initial states with the value of $\beta^2$ not
in the vicinity of $\hbar^2/\alpha^2$ or $\alpha^2\Omega^2$.
In contrast, if the initial state ($\ref{initGauss}$) is nearly separable
($\beta^2 \approx \hbar^2/\alpha^2$), mutual influences will be important
in the detectors' early-time behavior.
In this case, dropping all terms with small oscillations in time, the
factors in $(\ref{earlySig})$ are approximately
\begin{eqnarray}
\Sigma_0 &\approx& {\hbar^2\over 4\pi^2\alpha^4\Omega^4}
\left[\hbar^2\gamma\Lambda_1 + \alpha^4 \Omega^2 \gamma
(2\Lambda_0+\Lambda_1)\right]^2, \nonumber\\
\sigma_2^{(0)} &\approx& {\gamma^2\hbar^2\left(\hbar -
\alpha^2\Omega\right)^4\over 4\Omega^2\alpha^4},\nonumber\\
\sigma_2^{(1)} &\approx& -{\gamma^2\hbar^2\left(\hbar^2 -
\alpha^4\Omega^2\right)^2\over 4\Omega^4\alpha^4 d^2}, \nonumber\\
\sigma_1^{(0)} &\approx& {\gamma\hbar^2\over 2\pi \Omega^3}\left[
2\Omega^2\gamma\Lambda_0 +\left({\hbar^2\over\alpha^4}+\Omega^2\right)
\gamma\Lambda_1\right]\left(\hbar -\alpha^2\Omega\right)^2,
\end{eqnarray}
with $\sigma_1^{(1)}$ negligible. So $\Sigma$ evolves as follows.
On a very short time scale $O(e^{-\gamma_e-(\Lambda_0/2)}/\Omega)$
after the interaction is switched on, $\Sigma$ jumps from its initial
value $(\approx 0)$ to a value of the same order as $\Sigma_0$, which is
positive and determined by the numbers $\Lambda_0$ and $\Lambda_1$
corresponding to the cutoffs of this model (the difference
from the exact value is due to the oscillating terms that were dropped).
For $\alpha^2\not=\hbar/\Omega$, so that $Q_A$ and $Q_B$ are each initially
in a squeezed state, the detectors remain separable for $t\le d$,
since $\sigma_1^{(0)}$ and $\sigma_2^{(0)}$ are positive. But
$\sigma_2^{(1)}$ is negative and proportional to $1/d^2$; thus, after
one detector enters the light cone of the other, if the separation $d$ is
sufficiently small, namely
\begin{equation}
d < d_1 \equiv {1\over\Omega} \left| \hbar +\alpha^2\Omega\over \hbar
-\alpha^2\Omega\right|, \label{defd1}
\end{equation}
$\sigma_2^{(1)}$ can overwhelm $\sigma_2^{(0)}$ and alter the evolution
of $\Sigma$ from concave up to concave down in time. If this happens,
the quantity $\Sigma$ could become negative after a finite
``entanglement time"
\begin{equation}
t_{ent} \approx {1\over 2}\left|\sigma_2^{(0)} + \sigma_2^{(1)}\right|^{-1}
\left[\sigma_1^{(0)} - 2 \sigma_2^{(1)} d + \sqrt{\left(\sigma_1^{(0)} -
2 \sigma_2^{(1)} d\right)^2+ 4\left|\sigma_2^{(0)} + \sigma_2^{(1)}\right|
\left(\Sigma_0+ \sigma_2^{(1)} d^2\right)}\right].
\end{equation}
This explains the entanglement generation at small $d$ in Fig. \ref{firstS1}.
[Note that the above prediction could fail if $t_{ent}>O(1/\gamma\Lambda_i)$,
$i=0,1$, and even for $t_{ent} < O(1/\gamma\Lambda_i)$ the above estimate
on $t_{ent}$ could have an error as large as $O(2 \pi/\Omega)$ due to the
dropped oscillating terms.]
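For orientation, the entanglement time can be evaluated from the approximate coefficients listed above. The snippet below is an illustration with assumed values of $\gamma$, $\Omega$, $\Lambda_0$, $\Lambda_1$, $\alpha$ and $d$ (chosen only so that $d<d_1$); it is not a reproduction of Fig. \ref{firstS1}.
\begin{verbatim}
# Illustrative sketch: checks whether sigma_2^{(1)} overwhelms sigma_2^{(0)}
# (i.e. d < d_1) and, if so, evaluates t_ent from the last equation above.
import numpy as np

hbar = 1.0
gamma, Omega = 1.0e-4, 2.3
Lambda0, Lambda1 = 20.0, 25.0                # assumed cutoff parameters
alpha, d = 1.2, 0.1                          # assumed initial squeezing and separation

Sig0 = (hbar**2/(4*np.pi**2*alpha**4*Omega**4)) * \
       (hbar**2*gamma*Lambda1 + alpha**4*Omega**2*gamma*(2*Lambda0 + Lambda1))**2
s20 = gamma**2*hbar**2*(hbar - alpha**2*Omega)**4/(4*Omega**2*alpha**4)
s21 = -gamma**2*hbar**2*(hbar**2 - alpha**4*Omega**2)**2/(4*Omega**4*alpha**4*d**2)
s10 = (gamma*hbar**2/(2*np.pi*Omega**3)) * \
      (2*Omega**2*gamma*Lambda0 + (hbar**2/alpha**4 + Omega**2)*gamma*Lambda1) * \
      (hbar - alpha**2*Omega)**2

d1 = abs((hbar + alpha**2*Omega)/(hbar - alpha**2*Omega))/Omega
print("d =", d, "  d_1 =", d1, "  concave down after t = d:", s20 + s21 < 0)

if s20 + s21 < 0:
    A = abs(s20 + s21)
    t_ent = 0.5/A*(s10 - 2*s21*d + np.sqrt((s10 - 2*s21*d)**2 + 4*A*(Sig0 + s21*d**2)))
    print("t_ent ~", t_ent)
\end{verbatim}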
The first-order corrections to $\left<\right. .. \left.\right>_{\rm a}$
contribute the $\sigma_2^{(1)}\cos^2\Omega d$ part of $\sigma_2^{(1)} =
\sigma_2^{(1)}(\cos^2\Omega d+\sin^2\Omega d)$, so in those cases with
separations small enough that $\sin^2 \Omega d \ll \cos^2\Omega d$ the
early-time entanglement creation is mainly due to the mutual influences of
the detectors, which are causal.
The distance $d_1$ in $(\ref{defd1})$ can serve as an estimate for the maximum
distance at which transient entanglement can be generated from an initially
separable state in the weak-coupling limit,
while for detectors with a spatial separation between $d_1$ and
$d_{ent}$ the transient entanglement generated at early times will
disappear at late times.
\end{appendix}
\begin{references}
\bibitem{LCH08} S.-Y. Lin, C.-H. Chou and B. L. Hu, Phys. Rev. D
{\bf 78}, 125025 (2008).
\bibitem{YE04} T. Yu and J. H. Eberly, Phys. Rev. Lett. {\bf 93},
140404 (2004);
L. Diosi, ``Progressive decoherence and total environmental disentanglement",
in {\it Irreversible Quantum Dynamics}, edited by F. Benatti and R. Floreanini
(Springer, Berlin, 2003) [arXiv:quant-ph/0301096].
\bibitem{EPR} A. Einstein, B. Podolsky and N. Rosen, Phys. Rev. {\bf 47},
777 (1935).
\bibitem{Unruh} W. G. Unruh, Phys. Rev. A {\bf 59}, 126 (1999).
\bibitem{PR97} S. Popescu and D. Rohrlich,
``Causality and Nonlocality as Axioms for Quantum Mechanics",
in {\it Proceedings of the Symposium on Causality and Locality in
Modern Physics and Astronomy: Open Questions and Possible Solutions}
(York University, Toronto, August 25-29, 1997) [arXiv:quant-ph/9709026].
\bibitem{BE47} A. Einstein, in {\it The Born-Einstein Letters},
edited by M. Born (Walker, New York, 1971).
\bibitem{Bell} J. S. Bell, Physics {\bf 1}, 195 (1964).
\bibitem{ASH06} C. Anastopoulos, S. Shresta and B. L. Hu,
``Quantum Entanglement under Non-Markovian Dynamics of Two Qubits
Interacting with a Common Electromagnetic Field" [arXiv:quant-ph/0610007].
\bibitem{HPZ} B. L. Hu, J. P. Paz, and Y. Zhang, Phys. Rev. D
{\bf 45}, 2843 (1992).
\bibitem{CH08} E. Calzetta and B. L. Hu, {\sl Nonequilibrium Quantum
Field Theory} (Cambridge University Press, Cambridge, UK, 2008),
Chaps. 1, 5, and 6.
\bibitem{Franson} J. D. Franson, J. Mod. Opt. {\bf 55}, 2117 (2008).
\bibitem{FicTan06} Z. Ficek and R. Tanas, Phys. Rev. A {\bf 74},
024304 (2006).
\bibitem{Tom08} K. Shiokawa, Phys. Rev. A {\bf 79}, 012308 (2009).
\bibitem{LHMC08} C.-Y. Lai, J.-T. Hung, C.-Y. Mou, and P. Chen, Phys. Rev. B
{\bf 77}, 205419 (2008).
\bibitem{PR07} J. P. Paz and A. J. Roncaglia, Phys. Rev. Lett. {\bf 100},
220401 (2008).
\bibitem{CYH07} C.-H. Chou, T. Yu and B. L. Hu, Phys. Rev. E
{\bf 77}, 011112 (2008).
\bibitem{LH2005}
S.-Y. Lin and B. L. Hu, Phys. Rev. D {\bf 73}, 124018 (2006).
\bibitem{VW02} G. Vidal and R. F. Werner, Phys. Rev. A {\bf 65}, 032314
(2002).
\bibitem{Si00} R. Simon, Phys. Rev. Lett. {\bf 84}, 2726 (2000); L.-M.
Duan, G. Giedke, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. {\bf 84},
2722 (2000).
\bibitem{Plenio05} M. B. Plenio, Phys. Rev. Lett. {\bf 95}, 090503 (2005).
\bibitem{LH2006}
S.-Y. Lin and B. L. Hu, Phys. Rev. D {\bf 76}, 064008 (2007).
\bibitem{Arfken} G. B. Arfken and H. J. Weber, {\it Mathematical Methods for
Physicists} 6th Ed. (Elsevier, Amsterdam, 2005).
\end{references}
\end{document} |
\begin{document}
\title{ The heat equation for the Dirichlet fractional Laplacian with Hardy's potentials:
properties of minimal solutions and blow-up}
\author{\normalsize Ali BenAmor\footnote{University of Sousse. Sousse, Tunisia. E-mail: ali.benamor@ipeit.rnu.tn}
}
\date{}
\maketitle
\begin{abstract} Local and global properties of minimal solutions for the heat equation generated by the Dirichlet fractional Laplacian negatively perturbed by Hardy's potentials on open subsets of ${\mathbb{R}}^d$ are analyzed. As a byproduct we obtain instantaneous blow-up of nonnegative solutions in the supercritical case.
\end{abstract}
{\bf Key words}: fractional Laplacian, heat kernel, minimal solution, blow-up, Dirichlet form.\\
{\bf MSC2010}: 35K05, 35B09, 35S11.
\section{Introduction}
In this paper, we discuss mainly two questions: 1. Local and global properties of minimal solutions of the heat equation related to the Dirichlet fractional Laplacian on open subsets negatively perturbed by potentials of the type $\frac{c}{|x|^\alpha},\ c>0$.\\
2. Relying on the results obtained in 1. we shall prove complete instantaneous blow-up of nonnegative solutions for the same equation provided $c$ is bigger than some critical value $c^*$.\\
To be more concrete, let $0<\alpha<\min(2,d)$ and $\Omega$ be an open subset $\Omega\subset{\mathbb{R}}^d$ containing zero. We designate by $L_0^\Omega:=(-\Delta)^{\frac{\alpha}{2}}|_\Omega$ the fractional Laplacian with zero Dirichlet condition on $\Omega^c$ (as explained in the next section). We consider the associated perturbed heat equation
\begin{eqnarray}
\label{heat1}
\left\{\begin{gathered}
-\frac{\partial u}{\partial t}=L_0^\Omega u - \frac{c}{|x|^\alpha}u,
\quad \hbox{in } (0,T)\times\Omega,\\
u(t,\cdot)=0,\ {\rm on}~~~\Omega^c,\ \forall\,0<t<T\leq\infty\\
u(0,x)= u_{0}(x),~~~{\rm a.e.\ in}\ \Omega,
\end{gathered}
\right.
\end{eqnarray}
where $c>0$ and $u_{0}$ is a nonnegative Borel measurable square integrable function on $\Omega$. The meaning of a solution for the equation (\ref{heat1}) will be explained in the next section.\\
Regarding the first addressed question, in the paper \cite{benamor-kenzizi}, the authors established existence of nonnegative exponentially bounded solutions on bounded Lipschitz domains provided
\begin{eqnarray}
0<c\leq c^*:=\frac{2^\alpha\Gamma^2(\frac{d+\alpha}{4})}{ \Gamma^2(\frac{d-\alpha}{4})}.
\end{eqnarray}
They also proved that for $c>c^*$ complete instantaneous blowup takes place, provided $\Omega$ is a bounded Lipschitz domain.\\
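For orientation, the critical constant $c^*$ can be evaluated numerically; the snippet below (illustration only) does so for a few sample values of $d$ and $\alpha$.
\begin{verbatim}
# Quick numerical check of c* = 2^alpha Gamma((d+alpha)/4)^2 / Gamma((d-alpha)/4)^2.
from scipy.special import gamma as Gamma

def c_star(d, alpha):
    return 2.0**alpha * Gamma((d + alpha)/4.0)**2 / Gamma((d - alpha)/4.0)**2

for d, alpha in [(1, 0.5), (2, 1.0), (3, 1.0), (3, 1.5)]:
    print(f"d = {d}, alpha = {alpha}:  c* = {c_star(d, alpha):.6f}")
# For d = 3, alpha = 1 this reproduces the classical value 2/pi ~ 0.6366.
\end{verbatim}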
Concerning properties of solutions, only partial information is available in the literature. Precisely, in \cite[Corollary 5.1]{benamor-JPA} the authors proved that for bounded $C^{1,1}$ domains, under some additional condition, one has the following asymptotic behavior of nonnegative solutions $u(t,x)$ for large time,
\begin{eqnarray}
u(t,x)\sim c_t |x|^{-\beta(c)}|y|^{-\beta(c)}\delta^{\alpha/2}(x)\delta^{\alpha/2}(y),\ a.e.
\label{asymp0}
\end{eqnarray}
where $0<\beta(c)\leq \frac{d-\alpha}{2}$ and $\delta$ is the distance function to the complement of the domain.\\
In case $\Omega={\mathbb{R}}^d$, owing to recent results (see \cite{BogdanHardy}) concerning sharp estimates for the heat kernel of the mentioned evolution equation one can derive precise behavior of nonnegative solutions of the considered equation. Moreover, in \cite[Corollary 4.11]{BogdanHardy} the authors prove blowup of the heat kernel in the supercritical case on ${\mathbb{R}}^d$, which implies instantaneous blowup of any nonnegative solution on ${\mathbb{R}}^d$.\\
However, as far as we know, the second question is still open for general open subsets: It is not clear whether for $c>c^*$ and $\Omega$ unbounded any nonnegative solution blows up immediately and completely.\\
In these notes we shall establish sharp local estimates, with respect to the spatial variable and up to the boundary, of a special nonnegative solution (the minimal solution) of the considered heat equation on bounded sets, in the subcritical case. These estimates will lead to a global sharp $L^p$ regularity property of the solution. We also prove complete instantaneous blowup in the supercritical case for arbitrary domains, regardless of the boundedness and regularity of the domain. Therefore we solve completely, and in a unified manner, the question of instantaneous blow-up.\\
Our strategy is as follows: In a first stage we show that in the subcritical case the underlying semigroups have heat kernels. Then we establish sharp estimates, near zero, of the heat kernels of the considered semigroups on bounded sets, which in turn lead to a sharp pointwise estimate of the minimal solution of (\ref{heat1}) near zero. The latter results are then exploited to prove the above-mentioned properties and to extend the $L^2$-semigroups to semigroups on some (weighted) $L^p$-spaces, thereby determining the optimal class of initial data. The main ingredients at this stage are, on the one hand, a transformation procedure by harmonic functions that turns the forms related to the considered semigroups into Dirichlet forms, together with the use of the celebrated improved Hardy--Sobolev inequality to obtain an upper bound for the heat kernel. On the other hand, a lower bound for the heat kernel is established by using the Dynkin--Hunt formula together with the sharp estimates from \cite[Theorem 1.1]{BogdanHardy}.\\
The precise description of the pointwise behavior of the heat kernel on bounded sets will then serve, among other things, to establish blowup on open sets.\\
Our starting inspiration were the papers \cite{zuazua,baras-goldstein,cabre-martel}, where the problem was addressed and solved for
the Dirichlet Laplacian (i.e. $\alpha=2$). We shall record many resemblances between our results and those found in the latter papers, despite the substantial difference between the Laplacian and the fractional Laplacian.
\section{Preparing results}
From now on we fix an open subset $\Omega\subset{\mathbb{R}}^d$ containing zero and a real number $\alpha$ such that $0<\alpha<\min(2,d)$.\\
The Lebesgue spaces $L^2({\mathbb{R}}^d,dx)$, resp. $L^2(\Omega,dx)$, will be denoted by $L^2$, resp. $L^2(\Omega)$, and their respective norms will be denoted by $\|\cdot\|_{L^2}$, resp. $\|\cdot\|_{L^2(\Omega)}$.
We shall write $\int\cdots$ as a shorthand for $\int_{{\mathbb{R}}^d}\cdots$.\\
The letters $C, C',c_t, \kappa_t$ will denote generic nonnegative finite constants
which may vary in value from line to line.\\
Consider the bilinear symmetric form ${\cal{E}}$ defined in $L^2$ by
\begin{eqnarray}
{\cal{E}}(f,g)&=&\frac{1}{2}{{\cal{A}}}(d,\alpha)\int \int \frac{(f(x)-f(y))(g(x)-g(y))}
{|x-y|^{d+\alpha}}\,dxdy,\nonumber\\
D({\cal{E}})&=&W^{\alpha/2,2}({\mathbb{R}}^d)
:=\{f\in L^2\colon\,{\cal{E}}[f]:={\cal{E}}(f,f)<\infty\},\,
\label{formula1}
\end{eqnarray}
where
\begin{eqnarray}
{{\cal{A}}}{(d,\alpha)}=\frac{\alpha\Gamma(\frac{d+\alpha}{2})}
{2^{1-\alpha}\pi^{d/2}\Gamma(1-\frac{\alpha}{2})},
\label{analfa}
\end{eqnarray}
is a normalizing constant.\\
Using Fourier transform $\hat f(\xi)=(2\pi)^{-d/2}\int e^{-ix\cdot\xi}f(x)\,dx$, a straightforward computation yields the following identity
(see \cite[Lemma 3.1]{frank})
\begin{eqnarray}
\int |\xi|^\alpha|\hat f(\xi)|^2\,d\xi={\cal{E}}[f],\ \forall\,f\in W^{\alpha/2,2}({\mathbb{R}}^d).
\label{form-fourier}
\end{eqnarray}
It is well known that ${\cal{E}}$ is a Dirichlet form, i.e. it is a densely defined, bilinear, symmetric and closed form; moreover, it holds that
\begin{eqnarray}
\forall\,f\in W^{\alpha/2,2}({\mathbb{R}}^d)\colon\ f_{0,1}:=(0\vee f )\wedge 1\in W^{\alpha/2,2}({\mathbb{R}}^d)\ {\rm and}\ {\cal{E}}[f_{0,1}]\leq{\cal{E}}[f].
\end{eqnarray}
Furthermore ${\cal{E}}$ is regular, i.e. $C_c({\mathbb{R}}^d)\cap W^{\alpha/2,2}({\mathbb{R}}^d)$ is dense in both spaces $C_c({\mathbb{R}}^d)$ and $W^{\alpha/2,2}({\mathbb{R}}^d)$. For aspects related to Dirichlet forms we refer the reader to \cite{fukushima-book}.\\
The form ${\cal{E}}$ is related (via Kato representation theorem) to the selfadjoint
operator commonly named the fractional Laplacian on ${\mathbb{R}}^d$, which we shall denote by $L_0:=(-\Delta)^{\alpha/2}$. We note that the domain of $L_0$ is the fractional Sobolev space $W^{\alpha,2}({\mathbb{R}}^d)$.
For later purposes we recall the following Hardy's inequality
\begin{eqnarray}
\int \frac{f^2(x)}{|x|^\alpha}\,dx\leq \frac{1}{c^*}{\cal{E}}[f],\ \forall\,f\in W^{\alpha/2,2}({\mathbb{R}}^d),
\label{hardy-global}
\end{eqnarray}
with $1/{c^*}$ being the best constant in the latter inequality.\\
Henceforth we designate by $L_0^\Omega$ the operator whose Dirichlet form in $L^2(\overline\Omega,dx)$ is given by
\begin{eqnarray*}
D({\cal{E}}_\Omega)&=&W_0^{\alpha/2,2}(\Omega):=\{f\in W^{\alpha/2,2}({\mathbb{R}}^d)\colon\, f=0 ~~~{\rm q.e.\ on}~\Omega^c\}\nonumber\\
{\cal{E}}_\Omega(f,g)&=&{\cal{E}}(f,g)\nonumber\\
&=&\frac{1}{2}{\cal{A}}{(d,\alpha)}\int_\Omega\int_\Omega \frac{(f(x)-f(y))(g(x)-g(y))}{|x-y|^{d+\alpha}}\,dx\,dy
+\int_\Omega f(x)g(x)\kappa_\Omega(x)\,dx,
\end{eqnarray*}
where
\begin{eqnarray}
\kappa_\Omega(x):={\cal{A}}(d,\alpha)\int_{\Omega^c}\frac{1}{|x-y|^{d+\alpha}}\,dy.
\end{eqnarray}
For every $t\geq 0$ we designate by $e^{-tL_0^\Omega}$ the operator semigroup related to $L_0^\Omega$. In the case $\Omega={\mathbb{R}}^d$ we omit the superscript $\Omega$ in the notations.\\
It is a known fact (see \cite{bogdan-book}) that $e^{-tL_0^\Omega},\ t>0$ has a kernel (the heat kernel) $p_t^{L_0^\Omega}(x,y)$ which is symmetric jointly continuous and $p_t^{L_0^\Omega}(x,y)>0,\ \forall\,x,y\in\Omega$.\\
Let us introduce the notion of solution for problem (\ref{heat1}).
\begin{defi}
{\rm Let $V\in L^1_{loc}(\Omega)$ be nonnegative, $u_0\in L^2(\Omega)$ be nonnegative as well and $0<T\leq\infty$. We say that a Borel measurable function $u:[0,T)\times{\mathbb{R}}^d\to{\mathbb{R}}$ is a solution of the heat equation
\begin{eqnarray}
\label{heat2}
\left\{\begin{gathered}
-\frac{\partial u}{\partial t}=L_0^\Omega u - Vu,
\quad \hbox{in } (0,T)\times\Omega,\\
u(t,\cdot)=0,\ \ {\rm on}~~~\Omega^c,\ \forall\,0<t<T\\%\leq\infty\\
u(0,\cdot)= u_{0},~~~{\rm on }\,\, \Omega,
\end{gathered}
\right.
\end{eqnarray}
if
\begin{enumerate}
\item $u\in{\cal{L}}_{loc}^2\big([0,T), L_{loc}^2(\Omega)\big)$, where ${\cal{L}}^2$ is the Lebesgue space of square integrable functions.
\item $u\in L^{1}_{loc}\big((0,T)\times \Omega,dt\otimes V(x)\,dx\big)$.
\item For every $0\leq t< T$, $u(t,\cdot)= 0,\ a.e.$ on $\Omega^c$.
\item For every $0\leq t< T$ and every Borel function $\phi:[0,T)\times{\mathbb{R}}^d\to{\mathbb{R}}$ such that $\mathrm{supp}\,\phi\subset [0,T)\times\Omega$, $\phi,\ \frac{\partial \phi}{\partial t}\in L^2((0,T)\times\Omega)$, $\phi(t,\cdot)\in D(L_0)$ and
$$\int_0^t\int_\Omega |u(s,x)L_0\phi(s,x)|\,ds\,dx<\infty$$
the following identity holds true
\begin{eqnarray}
\int \big((u\phi)(t,x)-u_0(x)\phi(0,x)\big)\,dx &+&\int_{0}^{t}\int
u(s,x)(-\phi_{s}(s,x)+L_0^\Omega\phi(s,x))\,dx\,ds\nonumber\\
&=&\int_{0}^{t}\int u(s,x)\phi(s,x)V(x)\,dx\,ds.
\label{variational}
\end{eqnarray}
\end{enumerate}
}
\end{defi}
For every $c>0$ we denote by $V_c$ the Hardy potential
$$
V_c(x)=\frac{c}{|x|^\alpha},\ x\neq 0.
$$
In \cite{benamor-kenzizi} it is proved that for bounded $\Omega$ and $0<c\leq c^*$, equation (\ref{heat1}) with potential $V_c$ has a nonnegative solution, whereas for $c>c^*$ and $\Omega$ a bounded Lipschitz domain no nonnegative solutions occur. It was recently proved in \cite{BogdanHardy} that the same statements hold true for $\Omega={\mathbb{R}}^d$. In these notes we shall, among other things, fill this gap.\\
In the next section we shall be concerned with properties of a special nonnegative solution which is called {\em minimal solution} or {\em semigroup solution} in the subcritical case, i.e. $0<c<c^*$ and in the critical case, i.e. $c=c^*$.
The name minimal solution comes from the following observation (proved in \cite{baras-goldstein} for the Dirichlet Laplacian with Hardy potentials, for the Dirichlet fractional Laplacian in \cite{benamor-kenzizi} for bounded domains and in Lemma \ref{domination} for general domains, and in \cite{keller-lenz} in a different context): If $u_k$ is the semigroup solution for the heat equation with potential $V_c\wedge k,\ k\in\mathbb{N}$, and if $u$ is any nonnegative solution of (\ref{heat1}), then $u_\infty:=\lim_{k\to\infty}u_k$ is a nonnegative solution of (\ref{heat1}) and $u_\infty\leq u\ a.e.$\\
We shall name $u_\infty$ the minimal nonnegative solution and shall denote it by $u$.\\
Let $0<c< c^*$. We denote by ${\cal{E}}_\Omega^{V_c}$ the quadratic form defined by
\begin{eqnarray}
D({\cal{E}}_\Omega^{V_c} )=W_0^{\alpha/2,2}(\Omega),\ {\cal{E}}_\Omega^{V_c}[f] = {\cal{E}}_\Omega[f] - \int_\Omega f^2(x)V_c(x)\,dx.
\end{eqnarray}
Whereas for $c=c^*$, we set
\begin{eqnarray}
\dot{{\cal{E}}_\Omega}^{V_{c^*}}\colon\, D(\dot{{\cal{E}}_\Omega}^{V_{c^*}} )=W_0^{\alpha/2,2}(\Omega),\ \dot{{\cal{E}}_\Omega}^{V_{c^*}}[f] = {\cal{E}}_\Omega[f] - \int_\Omega f^2(x)V_{c^*}(x)\,dx.
\end{eqnarray}
In the case $\Omega={\mathbb{R}}^d$ we shall omit the subscript $\Omega$.\\
As the closability of $\dot{{\cal{E}}_\Omega}^{V_{c^*}}$ in $L^2(\Omega)$ is not obvious we shall perform a method that enables us to prove in a unified manner the closedness of ${\cal{E}}_\Omega^{V_c}$ as well as the closability of $\dot{{\cal{E}}_\Omega}^{V_{c^*}}$ in $L^2(\Omega)$.\\
To that end we recall some known facts concerning harmonic functions of $L_0 -\frac{c}{|x|^\alphaha}$.\\
We know from \cite[Lemma 2.2]{benamor-JPA} that for every $0<c\leq c^*$ there is a unique $\beta=\beta(c)\in(0,\frac{d-\alpha}{2}]$ such that $w_c(x):=|x|^{-\beta(c)},\ x\neq 0$, solves the equation
\begin{eqnarray}
&&(-\Delta)^{\alpha/2}w-c|x|^{-\alpha}w=0\ {\rm in\ the\ sense\ of\ distributions}.
\label{harmonic1}
\end{eqnarray}
That is
\begin{eqnarray}
\langle \hat w,|\xi|^{\alpha}\hat\varphi\rangle-c\langle|x|^{-\alpha}w,\varphi\rangle=0\ \ \forall\,\varphi\in{\cal S}.
\end{eqnarray}
Making use of Riesz potential it is proved in \cite[Lemma 2.2]{benamor-JPA} that equation (\ref{harmonic1}) is equivalent to
\begin{eqnarray}
\int \frac{w_c(y)}{|x-y|^{d-\alpha}}|y|^{-\alpha}\,dy = c w_c(x),\ \forall\,x\neq 0.
\label{Riesz}
\end{eqnarray}
Furthermore for $\beta_*:=\frac{d-\alpha}{2}$ we have $c=c^*$, i.e., $w_{c^*}(x)=|x|^{-\frac{d-\alpha}{2}},\ x\neq 0$.\\
Next we fix definitively $c\in (0,c^*]$.\\
For $0<c<c^*$ let $Q_\Omega^c$ be the $w_c$-transform of ${\cal{E}}_\Omega^{V_c}$, and for $c=c^*$ let $\dot Q_\Omega^{c^*}$ be the $w_{c^*}$-transform of $\dot{\cal{E}}_\Omega^{V_{c^*}}$, i.e., the quadratic forms defined in $L^2(\Omega,w_c^2dx)$ and in $L^2(\Omega,w_{c^*}^2dx)$ respectively by:
\begin{eqnarray*}
\dom(Q_\Omega^c):=\{f\in L^2(\Omega,w_c^2dx)\colon\,w_cf\in W_0^{\alpha/2,2}(\Omega)\},\ Q_\Omega^c[f]={\cal{E}}_\Omega^{V_c}[w_cf],\ \forall\,f\in\,\dom(Q_\Omega^c),
\end{eqnarray*}
whereas
\begin{eqnarray*}
\dom(\dot Q_\Omega^{c^*}):=\{f\in L^2(\Omega,w_{c^*}^2dx)\colon\,w_{c^*}f\in W_0^{\alpha/2,2}(\Omega)\},\ \dot Q_\Omega^{c^*}[f]=\dot{\cal{E}}_\Omega^{V_{c^*}}[w_{c^*}f],\ \forall\,f\in\,\dom(\dot Q_\Omega^{c^*}).
\end{eqnarray*}
In the case $\Omega={\mathbb{R}}^d$ we shall omit the subscript ${\mathbb{R}}^d$ in the above notations.
\begin{lem}
\begin{enumerate}
\item For every $0<c<c^*$, the form $Q_\Omega^c$ is a Dirichlet form in $L^2(\Omega,w_{c}^2dx)$ and
\begin{eqnarray}
Q_\Omega^c[f]=\frac{{\cal{A}}(d,\alpha)}{2}\int\int \frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha}} w_c(x)w_c(y)\,dxdy,\ \forall\,f\in \dom(Q_\Omega^c).
\label{ID1}
\end{eqnarray}
\item For $c=c^*$ the form $\dot Q_\Omega^{c^*}$ is closable in $L^2(\Omega,w_{c^*}^2dx)$ and
\begin{eqnarray}
\dot Q_\Omega^{c^*}[f]=\frac{{\cal{A}}(d,\alpha)}{2}\int\int \frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha}} w_{c^*}(x)w_{c^*}(y)\,dxdy,\ \forall\,f\in \dom(\dot Q_\Omega^{c^*}).
\label{ID2}
\end{eqnarray}
Let $Q_\Omega^{c^*}$ be the closure of $\dot Q_\Omega^{c^*}$. Then $Q_\Omega^{c^*}$ is a Dirichlet form. It follows, in particular, that $\dot{\cal{E}}_\Omega^{V_{c^*}}$ is closable.
\item The sets $C_c^\infty(\Omega\setminus\{0\})$ and $\dom (Q_\Omega^c)\cap C_c(\Omega)$ are cores for $Q_\Omega^c$. It follows that $Q_\Omega^c$ is regular for every $0<c\leq c^*$.
\end{enumerate}
\label{closability}
\end{lem}
\begin{rk}
\rm{
We shall show in Remark \ref{NotClosed} that $\dot{\cal{E}}_\Omega^{V_{c^*}}$ is in fact, not closed.
}
\end{rk}
\begin{proof}
The proof of formulae (\ref{ID1})-(\ref{ID2}) follows the lines of the proof of \cite[Lemma 3.1]{benamor-JPA}, where bounded $\Omega$ is considered, so we omit it.\\
We turn our attention now to prove the rest of the lemma.\\
Let $0<c<c^*$. Utilizing Hardy's inequality we obtain
\begin{eqnarray}
(1-\frac{c}{c^*}){\cal{E}}_\Omega[f]\leq {\cal{E}}_\Omega^{V_c}[f]\leq{\cal{E}}_\Omega[f],\ \forall\,f\in W_0^{\alpha/2,2}(\Omega),
\end{eqnarray}
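For the reader's convenience we record the one-line computation behind the lower bound above; it only uses the fractional Hardy inequality $c^*\int_\Omega f^2(x)|x|^{-\alpha}\,dx\le {\cal{E}}_\Omega[f]$, $f\in W_0^{\alpha/2,2}(\Omega)$, with $c^*$ the critical constant:
\begin{eqnarray*}
{\cal{E}}_\Omega^{V_c}[f]={\cal{E}}_\Omega[f]-c\int_\Omega \frac{f^2(x)}{|x|^{\alpha}}\,dx
\geq {\cal{E}}_\Omega[f]-\frac{c}{c^*}\,{\cal{E}}_\Omega[f]=\Big(1-\frac{c}{c^*}\Big){\cal{E}}_\Omega[f],
\end{eqnarray*}
while the upper bound is immediate since $V_c\geq 0$.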
From this two-sided bound the closedness of ${\cal{E}}_\Omega^{V_c}$ follows, as well as the closedness of $Q_\Omega^c$. From the definition of $Q_\Omega^c$ and the fact that ${\cal{E}}_\Omega^{V_c}$ is densely defined we conclude that $Q_\Omega^c$ is densely defined as well. On the other hand, in light of formula (\ref{ID1}) it is obvious that every normal contraction operates on $\dom(Q_\Omega^c)$ and hence $Q_\Omega^c$ is a Dirichlet form.\\
For the critical case, formula (\ref{ID2}) together with Fubini's theorem indicates that $\dot Q_\Omega^{c^*}$ is Markovian and closable. Thus, according to \cite[Theorem 3.1.1]{fukushima-book}, its closure is a Dirichlet form.\\
To prove claim 3, we recall that $C_c^\infty(\Omega)$ and $W_0^{\alpha/2,2}(\Omega)\cap C_c(\Omega)$ are cores for ${\cal{E}}_\Omega$ and hence they are cores for ${\cal{E}}_\Omega^{V_c}$ and $\dot{\cal{E}}_\Omega^{V_{c^*}}$, since both forms are dominated by ${\cal{E}}_\Omega$. On the other hand, the map $f\mapsto w_c^{-1} f$ maps $C_c^\infty(\Omega)$ into $C_c^\infty(\Omega\setminus\{0\})$ and $W_0^{\alpha/2,2}(\Omega)\cap C_c(\Omega)$ into $\dom(Q_\Omega^c)\cap C_c(\Omega)$, respectively $\dom(\dot Q_\Omega^{c^*})\cap C_c(\Omega)$. All these considerations, together with the fact that $\dom(\dot Q_\Omega^{c^*})$ is a core for $Q_\Omega^{c^*}$, lead to assertion 3.
\end{proof}
Henceforth, we denote by ${\cal{E}}_\Omega^{V_{c^*}}$ the closure of $\dot{\cal{E}}_\Omega^{V_{c^*}}$, by $L_{V_c}^\Omega$ the selfadjoint operator associated to ${\cal{E}}_\Omega^{V_{c}}$ for every $0<c\leq c^*$ and by $e^{-tL_{V_c}^\Omega},\ t\geq 0$ the related semigroups.\\
Similarly, for every $0<c\leq c^*$ we designate by $A_\Omega^{w_c}$ the operator associated to $Q_\Omega^c$ in the weighted Lebesgue space $L^2(\Omega,w_c^2dx)$ and $T_{t,\Omega}^{w_c},\ t\geq 0$ its semigroup. Then
\begin{eqnarray}
A_\Omega^{w_c}=w_c^{-1}L_{V_c}^\Omega w_c\ {\rm and}\ T_{t,\Omega}^{w_c}=w_c^{-1}e^{-tL_{V_c}^\Omega}w_c,\ t\geq 0.
\label{TransformedSg}
\end{eqnarray}
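Let us note in passing that the second identity in (\ref{TransformedSg}) indeed defines a semigroup on $L^2(\Omega,w_c^2dx)$; this is a direct check:
\begin{eqnarray*}
T_{s,\Omega}^{w_c}T_{t,\Omega}^{w_c}=w_c^{-1}e^{-sL_{V_c}^\Omega}w_c\, w_c^{-1}e^{-tL_{V_c}^\Omega}w_c
= w_c^{-1}e^{-(s+t)L_{V_c}^\Omega}w_c=T_{s+t,\Omega}^{w_c},\quad s,t\geq0.
\end{eqnarray*}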
The next proposition explains why minimal solutions are also semigroup solutions.
\begin{prop} For every $0<c\leq c^*$, the minimal solution is given by $u(t):=e^{-tL_{V_c}^\Omega}u_0,\ t>0$. Thus for each $t>0$, $u(t)\in D(L_{V_c}^\Omega)$ and $u\in C([0,\infty);L^2(\Omega))\cap C^1((0,\infty);L^2(\Omega))$. Furthermore $u$ fulfills Duhamel's formula
\begin{eqnarray}
u(t,x)&=&e^{-tL_0^\Omega}u_0(x)+\int_0^t\int_{\Omega} p_{t-s}^{L_0^\Omega}(x,y)u(s,y)V_c(y)\,dy\,ds,\ \forall\,t>0,\ a.e. x\in\Omega.
\label{Duhamel}
\end{eqnarray}
\label{sg-Sol}
\end{prop}
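Formally, (\ref{Duhamel}) is nothing but the variation-of-constants identity for the splitting $L_{V_c}^\Omega=L_0^\Omega-V_c$; a sketch of the standard computation (ignoring domain issues) reads
\begin{eqnarray*}
e^{-tL_{V_c}^\Omega}-e^{-tL_0^\Omega}=\int_0^t \frac{d}{ds}\Big(e^{-(t-s)L_0^\Omega}e^{-sL_{V_c}^\Omega}\Big)\,ds
=\int_0^t e^{-(t-s)L_0^\Omega}\,V_c\,e^{-sL_{V_c}^\Omega}\,ds,
\end{eqnarray*}
which, applied to $u_0$ and written by means of the kernel $p^{L_0^\Omega}$, gives (\ref{Duhamel}).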
\begin{proof} Let $(h_k)_k$ be the sequence of closed quadratic forms in $L^2(\Omega)$ defined by
$$
h_k:={\cal{E}}_\Omega - V_c\wedge k,
$$
and $(H_k)_k$ be the related selfadjoint operators. Then $(h_k)_k$ is uniformly lower semibounded and $h_k\downarrow {\cal{E}}_\Omega^{V_c}$ in the subcritical case, whereas $h_k\downarrow \dot{\cal{E}}_\Omega^{V_{c^*}}$ in the critical case. As both forms ${\cal{E}}_\Omega^{V_c},\ \dot{\cal{E}}_\Omega^{V_{c^*}}$ are closable, we conclude by \cite[Theorem 3.11]{kato} that $(H_k)$ converges in the strong resolvent sense to $L_{V_c}^\Omega$ for every $0<c\leq c^*$. Hence $e^{-tH_k}$ converges strongly to $e^{-tL_{V_c}^\Omega}$ and then the monotone
sequence $u_k:=e^{-tH_k}u_0$ converges to $e^{-tL_{V_c}^\Omega}u_0$ which is nothing else but the minimal solution.\\
The remaining claims of the proposition follow from the standard theory of semigroups.
\end{proof}
As minimal solutions are given in terms of semigroups, we are led to analyze properties of the latter objects in order to gain information about the former. Here is a first result in this direction.
\begin{prop} For every $t>0$ the semigroup $e^{-tL_{V_c}^\Omega}$ has a measurable nonnegative symmetric absolutely continuous kernel, $p_{t}^{L_{V_c}^\Omega}$, in the sense that for every $v\in L^2(\Omega)$ it holds
\begin{eqnarray}
e^{-tL_{V_c}^\Omega}v =\int_\Omega p_{t}^{L_{V_c}^\Omega}(\cdot,y)v(y)\,dy,\ a.e.\ {\rm on}\ \Omega,\ \forall\,t>0.
\label{f0}
\end{eqnarray}
\label{ExistKernel}
\end{prop}
We shall call $p_{t}^{L_{V_c}^\Omega}$ the heat kernel of $e^{-tL_{V_c}^\Omega}$. Let us emphasize that formula (\ref{f0}) implies that the heat kernel $p_{t}^{L_{V_c}^\Omega}$ is finite a.e.
\begin{proof}
Owing to the known facts that $e^{-tL_0^\Omega},\ t>0$ has a nonnegative heat kernel and $V_c\wedge k$ is bounded we deduce that $e^{-tH_k}$ has a nonnegative heat kernel as well, which we denote by $P_{t,k}$. Moreover, since the sequence $(V_c\wedge k)_k$ is monotone increasing, we obtain with the help of Duhamel's formula that the sequence $(P_{t,k})_k$ is monotone increasing as well. Set
\begin{eqnarray}
p_{t}^{L_{V_c}^\Omega}(x,y):=\lim_{k\to\infty}P_{t,k}(x,y),\ \forall\,t>0,\ a.e.\ x,y\in\Omega.
\end{eqnarray}
Then $p_{t}^{L_{V_c}^\Omega}$ has all the first properties mentioned in the proposition.\\
Let $v\in L^2(\Omega)$ be nonnegative. Then the monotone convergence theorem, together with Proposition \ref{sg-Sol}, gives
\begin{eqnarray}
e^{-tL_{V_c}^\Omega}v&=&\lim_{k\to\infty}u_k(t)=\lim_{k\to\infty}e^{-tH_k}v
=\lim_{k\to\infty}\int_\Omega P_{t,k}(\cdot,y)v(y)\,dy\nonumber\\
&=&\int_\Omega p_{t}^{L_{V_c}^\Omega}(\cdot,y)v(y)\,dy,\ a.e.\ {\rm on}\ \Omega,\ \forall\,t>0.
\end{eqnarray}
For an arbitrary $v\in L^2(\Omega)$ formula (\ref{f0}) follows from the last step by decomposing $v$ into its positive and negative parts.
\end{proof}
\begin{rk}
{\rm
From Proposition \ref{ExistKernel} in conjunction with formula (\ref{TransformedSg}), we obtain the existence of an absolutely continuous kernel for the semigroup $T_{t,\Omega}^{w_c}$ for each $t>0$ and each $0<c\leq c^*$. We shall denote this kernel by $q_t^{\Omega}$ and call it the heat kernel of $Q_\Omega^c$. In the particular case $\Omega={\mathbb{R}}^d$ we will omit the superscript ${\mathbb{R}}^d$. Let us stress that the kernels $q_t^{\Omega}$ depend on $c$; however, we shall keep this dependence implicit and emphasize it only when relevant.\\
Once again, formula (\ref{TransformedSg}) leads to
\begin{eqnarray}
q_t^\Omega(x,y)=\frac{p_t^{L_{V_c}^\Omega}(x,y)}{w_c(x)w_c(y)},\ \forall\,t>0,\ a.e.,\ x,y\in\Omega.
\label{TransformedKernel}
\end{eqnarray}
}
\label{Transfer}
\end{rk}
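For completeness, here is the one-line derivation of (\ref{TransformedKernel}) from (\ref{TransformedSg}) and (\ref{f0}): for $v\in L^2(\Omega,w_c^2dx)$,
\begin{eqnarray*}
T_{t,\Omega}^{w_c}v(x)=w_c^{-1}(x)\,e^{-tL_{V_c}^\Omega}(w_cv)(x)
=\int_\Omega\frac{p_t^{L_{V_c}^\Omega}(x,y)}{w_c(x)w_c(y)}\,v(y)\,w_c^2(y)\,dy,
\end{eqnarray*}
so the integral kernel of $T_{t,\Omega}^{w_c}$ with respect to the measure $w_c^2dx$ is exactly the right-hand side of (\ref{TransformedKernel}).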
In the particular case $\Omega={\mathbb{R}}^d$, we proceed to establish a very interesting global property of the Dirichlet forms $Q^c$, namely conservativeness. The significance of conservativeness lies, among others, in the fact that the Hunt process associated with $Q^c$ can start at quasi-every point and has infinite lifetime.
To achieve our goal we introduce the following forms.
We fix $0<c\leq c^*$ and define the forms $\dot{\cal{F}}^c$ by:
\begin{align}
\dom(\dot{\cal{F}}^c)&= C_c^\infty({\mathbb{R}}^d),\
\dot{\cal{F}}^c[f]= \frac{{\cal{A}}(d,\alpha)}{2}\int\int \frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha}} w_c(x)w_c(y)\,dxdy,\ \forall\,f\in C_c^\infty({\mathbb{R}}^d).
\end{align}
\begin{lem}
\begin{enumerate}
\item The quadratic form $\dot{\cal{F}}^c$ is well defined, in the sense that $\dot{\cal{F}}^c[f]<\infty$ for every $f\in C_c^\infty({\mathbb{R}}^d)$. Moreover it is closable in $L^2({\mathbb{R}}^d,w_c^2dx)$.\\
Let
$$
{\cal{F}}^c= \text{the closure of } \dot{\cal{F}}^c \text{ in } L^2({\mathbb{R}}^d,w_c^2dx).
$$
\item For $c<c^*$, it holds $Q^c={\cal{F}}^c$.
\end{enumerate}
\label{Equality}
\end{lem}
\begin{proof}
The first part of assertion 1 is indeed equivalent to the following two conditions (see \cite[Example 1.2.1]{fukushima-book}): for every compact set $K$ and every open set $\Omega_1$ with $K\subset\Omega_1$ one should have
\begin{eqnarray*}
\int_{K\times K}|x-y|^{2-d-\alpha}w_c(x)w_c(y)\,dx\,dy<\infty,\
\int_{K}\int_{\Omega_1^c}|x-y|^{-d-\alpha}w_c(x)w_c(y)\,dx\,dy<\infty.
\end{eqnarray*}
The first of the latter conditions was already proved for bounded sets in \cite[Lemma 3.1]{benamor-JPA}. Let us prove the finiteness of the second integral.\\
{\em Case 1}: $0\in K$. Then $0\not\in\Omega_1^c$ and $\sup_{y\in\Omega_1^c}w_{c}(y)<\infty$. Set $\delta:={\rm dist}(K,\Omega_1^c)$. Then $\delta>0$ and for every $x\in K$ we get
\begin{eqnarray}
\int_{\Omega_1^c}|x-y|^{-d-\alpha}\,dy\leq \int_{\{|x-y|>\delta\}}|x-y|^{-d-\alpha}\,dy\leq C<\infty.
\end{eqnarray}
Hence the second integral is finite.\\
{\em Case 2}: $0\not\in K$. Then $\sup_{x\in K}w_{c}(x)<\infty$. Let $x\in K$. Making use of identity (\ref{Riesz}) we obtain
\begin{align}
\int_{\Omega_1^c\cap B_1} |x-y|^{-d-\alpha}w_c(y)\,dy\leq \delta^{-2\alpha}\int_{\Omega_1^c\cap B_1} \frac{w_c(y)}{|x-y|^{d-\alpha}}|y|^{-\alpha}\,dy\leq C w_c(x),
\end{align}
and
\begin{align}
\int_{\Omega_1^c\cap B_1^c} |x-y|^{-d-\alpha}w_c(y)\,dy\leq \int_{\{|x-y|>\delta\}}|x-y|^{-d-\alpha}\,dy\leq C<\infty.
\end{align}
Hence, once again the second integral is finite and the first part of assertion 1 is proved.\\
The proof of closability is a standard matter so we omit it.\\
2): Let $0<c<c^*$. First we show $C_c^\infty({\mathbb{R}}^d)\subset \dom(Q^c)$. Let $f\in C_c^\infty({\mathbb{R}}^d)$. We have to prove $w_c f\in W^{\alpha/2,2}({\mathbb{R}}^d)$. On the one hand $w_c f\in L^2({\mathbb{R}}^d)$. On the other hand, following the lines of the proof of \cite[Lemma 3.1]{benamor-JPA} we obtain
\begin{align}
\int\int \frac{(w_c(x)f(x) - w_c(y)f(y))^2}{|x-y|^{d+\alpha}}\,dx\,dy&= \int\int \frac{(f(x) - f(y))^2}{|x-y|^{d+\alpha}}w_c(x)w_c(y)\,dx\,dy\nonumber\\
& + \int f^2(x) w_c^2(x) V_c(x)\,dx.
\end{align}
We already proved that the first integral is finite. Since $c<c^*$ and $f\in C_c^\infty({\mathbb{R}}^d)$, then
$$
\int f^2(x) w_c^2(x) V_c(x)\,dx<\infty.
$$
Hence $w_c f\in W^{\alpha/2,2}({\mathbb{R}}^d)$ and $f\in D(Q^c)$.\\
Let us recall that, by Lemma \ref{closability}-3, $C_c^\infty({\mathbb{R}}^d\setminus\{0\})$ is a form core for $Q^c$. In view of the inclusion $C_c^\infty({\mathbb{R}}^d\setminus\{0\})\subset C_c^\infty({\mathbb{R}}^d)$, the latter space is also a core for $Q^c$. Furthermore, $C_c^\infty({\mathbb{R}}^d)$ is obviously a core for ${\cal{F}}^c$. Hence the forms $Q^c$ and ${\cal{F}}^c$ coincide on the common core $C_c^\infty({\mathbb{R}}^d)$. Thereby they are identical and the proof is completed.
\end{proof}
\begin{theo}
Assume that $\Omega={\mathbb{R}}^d$. Then for every $0<c\leq c^*$ the form $Q^c$ is conservative. It follows, in particular,
\begin{eqnarray}
\int_{{\mathbb{R}}^d} p_{t}^{L_{V_c}^{{\mathbb{R}}^d}}(x,y) w_c(y)\,dy=w_c(x),\ \forall\,x\neq 0.
\label{Totalmass}
\end{eqnarray}
\label{conservative}
\end{theo}
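Let us indicate how identity (\ref{Totalmass}) follows from conservativeness; the computation is a direct combination of $T_t^{w_c}1=1$ with (\ref{TransformedKernel}):
\begin{eqnarray*}
1=T_t^{w_c}1(x)=\int_{{\mathbb{R}}^d}q_t(x,y)\,w_c^2(y)\,dy
=\frac{1}{w_c(x)}\int_{{\mathbb{R}}^d}p_t^{L_{V_c}^{{\mathbb{R}}^d}}(x,y)\,w_c(y)\,dy,\quad {\rm a.e.}\ x,\ \forall\,t>0,
\end{eqnarray*}
which is (\ref{Totalmass}).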
\begin{proof}
Identity (\ref{Totalmass}) is an immediate consequence of the conservativeness property, which we now proceed to prove.\\
As a first step we shall prove conservativeness in the subcritical case.\\
{\em The subcritical case.} Let $0<c<c^*$. In light of Lemma \ref{Equality}-2, we shall use Masamune's result \cite{Masamune}, which asserts, in our special case, the following: If
\begin{eqnarray}
\sup_x w_c^{-1}(x)\int_{{\mathbb{R}}^d} (1\wedge |x-y|^2)|x-y|^{-d-\alpha}w_c(y)\,dy<\infty,
\label{UniformBound}
\end{eqnarray}
and for some $a>0$,
\begin{eqnarray}
\int_{{\mathbb{R}}^d} e^{-a|x|} w_c^2(x)\,dx<\infty,
\label{Integrability}
\end{eqnarray}
then the form $Q^c$ is conservative.\\
Clearly condition (\ref{Integrability}) is fulfilled.\\
Let us show that condition (\ref{UniformBound}) is satisfied as well. We recall that $w_c(x)=|x|^{-\beta(c)}$ with $\beta:=\beta(c)\in(0,\frac{d-\alpha}{2})$. Let
\begin{eqnarray}
I_1(x):=\int_{B_1(x)} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha-2}}\,dy,\quad \alpha':=2-\alpha.
\end{eqnarray}
Let $|x|\leq 2$. Then
\begin{eqnarray}
I_1(x)&=&\int_{B_1(x)} \frac{|y|^{-\beta} |y|^{\alpha'}}{|x-y|^{d-\alpha'}}|y|^{-\alpha'}\,dy\nonumber\\
&\leq& 2^{\alpha'} \int_{B_1(x)} \frac{|y|^{-\beta}}{|x-y|^{d-\alpha'}} |y|^{-\alpha'}\,dy.
\end{eqnarray}
In the case $\alpha\geq 1$, we obtain
$$
\alpha'>0,\ 0<\beta<d-\alpha'.
$$
Thus we apply identity (\ref{Riesz}) to get
\begin{eqnarray}
\int_{B_1(x)} \frac{|y|^{-\beta}}{|x-y|^{d-\alpha'}}|y|^{-\alpha'}\,dy&\leq& \int_{{\mathbb{R}}^d}
\frac{|y|^{-\beta}}{|x-y|^{d-\alpha'}}|y|^{-\alpha'}\,dy\nonumber\\
&=&C w_c(x),
\end{eqnarray}
and
$$
I_1(x)\leq C w_c(x).
$$
In the case $0<\alpha<1$, we replace $2-\alpha$ by $\alpha_1=\frac{1-\alpha}{2}$ and obtain, by similar arguments,
\begin{eqnarray}
I_1(x)\leq C\int_{B_1(x)} \frac{|y|^{-\beta}}{|x-y|^{d-\alpha_1}}|y|^{-\alpha_1}\,dy
\leq C w_c(x).
\end{eqnarray}
Let now $|x|\geq 2$. Then for every $y\in B_1(x)$ we have $|y|\geq |x|-1\geq1$. Thus
$$
I_1(x)\leq C \frac{1}{(|x|-1)^{\beta}}.
$$
Hence in both cases we obtain
$$
\sup_x w_c^{-1}(x) I_1(x)<\infty.
$$
For the remaining integral, let
\begin{eqnarray}
I_2(x):=\int_{B_1^c(x)} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy.
\end{eqnarray}
We decompose the integral into the sum of three integrals
\begin{eqnarray}
I_2(x)&=&\int_{B_1^c(x)\cap\{|y|< 1\}} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy
+\int_{B_1^c(x)\cap\{|y|> 1\vee |x|/2\}} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy\nonumber\\
&+&\int_{B_1^c(x)\cap\{1<|y|<|x|/2\}} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy.
\end{eqnarray}
On the set $B_1^c(x)\cap\{|y|< 1\}$ we have $|x-y|^{-d-\alpha}\leq |x-y|^{-d+\alpha}|y|^{-\alpha}$. Thus
\begin{eqnarray}
\int_{B_1^c(x)\cap\{|y|< 1\}} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy
&\leq& \int_{B_1^c(x)\cap\{|y|< 1\}} \frac{|y|^{-\beta}}{|x-y|^{d-\alpha}}|y|^{-\alpha}\,dy\nonumber\\
&\leq& \int_{B_1^c(x)}\frac{|y|^{-\beta}}{|x-y|^{d-\alpha}}|y|^{-\alpha}\,dy
\leq C|x|^{-\beta}.
\end{eqnarray}
Furthermore
\begin{eqnarray}
\int_{B_1^c(x)\cap\{|y|> 1\vee |x|/2\}} \frac{|y|^{-\beta}}{|x-y|^{d+\alpha}}\,dy
\leq 2^{\beta} |x|^{-\beta}\int_{B_1^c(x)} |x-y|^{-d-\alpha}\,dy
\leq C|x|^{-\beta}.
\end{eqnarray}
For the last integral we have two situations: if the set $E:=B_1^c(x)\cap\{1<|y|<|x|/2\}$ is empty, then we are done. If not, then on the set $E$, it holds
\begin{eqnarray}
|x-y|\geq \frac{|x|}{2}\geq |y|>1.
\end{eqnarray}
Hence
\begin{eqnarray}
\int_E \frac{|y|^{-\beta}}{|x-y|^{d+\alphaha}}\,dy&\leq&
2^{\beta} |x|^{-\beta}\int_E \frac{|y|^{-\beta}}{|x-y|^{\frac{d+3\alpha}{2}}}\,dy\nonumber\\
&\leq& 2^{\beta} |x|^{-\beta}\int_E \frac{|y|^{-\beta}}{|y|^{\frac{d+3\alpha}{2}}}\,dy
\leq C w_c(x).
\end{eqnarray}
Finally we get $\sup_{x} w_c^{-1}(x) I_2(x)<\infty$.\\
Putting all together, we conclude that condition (\ref{UniformBound}) is fulfilled. Thereby the form $Q^c$ is conservative.\\
{\em The critical case:} We recall that conservativeness means
$$
T_t^{w_{c^*}}1 =1.
$$
Here $T_t^{w_{c^*}}$ is the $L^\infty$-semigroup related to $Q^{c^*}$. Let $(\varphi_k)\subset C_c({\mathbb{R}}^d)$ be a sequence of positive functions such that $\varphi_k\uparrow 1$. Then from the standard construction of the $L^\infty$-semigroup (note that $(\varphi_k)\subset L^2({\mathbb{R}}^d,w_{c^*}^2dx)\cap L^\infty({\mathbb{R}}^d)$), together with Remark \ref{Transfer}, we obtain
\begin{align*}
T_t^{w_{c^*}}1=\lim_{k\to\infty}\int q_t (\cdot,y)\varphi_k(y) w_{c^*}^2(y)\,dy.
\end{align*}
An application of monotone convergence theorem leads to
$$
T_t^{w_{c^*}}1= \int q_t (\cdot,y) w_{c^*}^2(y)\,dy.
$$
From the contraction property of the $L^\infty$-semigroup related to $Q^{c^*}$ we derive
$$
\int q_t (\cdot,y) w_{c^*}^2(y)\,dy\leq 1,
$$
which leads to
\begin{eqnarray}
\int_{{\mathbb{R}}^d} p_{t}^{L_{V_{c^*}}^{{\mathbb{R}}^d}}(x,y) w_{c^*}(y)\,dy\leq w_{c^*}(x),\ \forall\,x\neq 0,\ t>0.
\label{Ineq1}
\end{eqnarray}
Now the first part of the proof yields, for every $0<c<c^*$,
\begin{align}
w_c(x)=\int_{{\mathbb{R}}^d} p_{t}^{L_{V_c}^{{\mathbb{R}}^d}}(x,y) w_c(y)\,dy&\leq \int_{{\mathbb{R}}^d} p_{t}^{L_{V_{c^*}}^{{\mathbb{R}}^d}}(x,y) w_{c}(y)\,dy\nonumber\\
&=\int_{B_1} p_{t}^{L_{V_{c^*}}^{{\mathbb{R}}^d}}(x,y) w_{c}(y)\,dy + \int_{B_1^c} p_{t}^{L_{V_{c^*}}^{{\mathbb{R}}^d}}(x,y) w_{c}(y)\,dy.
\end{align}
Let us observe that the first integrand is increasing, whereas the second one is decreasing, with respect to $c$. Hence, letting $c\to c^*$ and combining the monotone convergence theorem with inequality (\ref{Ineq1}), we obtain $T_t^{w_{c^*}}1 =1$ and the proof is completed.
\end{proof}
\begin{rk}
{\rm
Theorem \ref{conservative} was proved in \cite[Theorem 3.1]{BogdanHardy} and \cite[Theorem 2.4]{Jakubowski2019} by a different method, based on integral analysis.
}
\end{rk}
\section{Heat kernel estimates, local and global behavior of the minimal solution in the space variable}
Throughout this section we assume that $\Omega$ is bounded.\\
Since the potentials $V_c$ are too singular (they are not in the Kato class, for example), investigating properties of solutions of the evolution equations related to $L_0^\Omega - V_c$ becomes a delicate problem. In fact, the theory of elliptic regularity is no longer applicable in this context. To overcome these difficulties we shall make use of the pseudo-ground state transformation for the forms ${\cal{E}}_\Omega^{V_c}$ performed in Lemma \ref{closability}, together with an improved Sobolev inequality. This transformation has the considerable effect of turning the forms ${\cal{E}}_\Omega^{V_c}$ into Dirichlet forms and the semigroups $e^{-tL_{V_c}^\Omega}$ into Markovian ultracontractive semigroups on suitable weighted Lebesgue spaces. The analysis of the transformed forms will then lead to sharp estimates of their heat kernels and hence reveal properties of the minimal solutions.\\
As a first step we prove that a Sobolev inequality holds for the $w_c$-transform of the form ${\cal{E}}_\Omega^{V_c}$. As a byproduct we obtain that the semigroup of the transformed form is ultracontractive, and then very useful upper bounds for the heat kernel are derived.
\begin{theo}
\begin{enumerate}
\item Let $0<c<c^*$ and $p=\frac{d}{d-\alphaha}$. Then the following Sobolev inequality holds
\begin{eqnarray}
\parallel f^2\parallel_{ {L^p}(w_c^2dx)}\leq AQ_\Omega^{c}[f],\ \forall\,f\in D(Q_\Omega^c).
\label{w-sob}
\end{eqnarray}
\item For $c=c^*$ let $1<p<\frac{d}{d-\alphaha}$. Then the following Sobolev inequality holds
\begin{eqnarray}
\parallel f^2\parallel_{ {L^p}(w_{c^*}^2dx)}\leq AQ_\Omega^{c^*}[f],\ \forall\,f\in D(Q_\Omega^{c^*}).
\label{w-sob2}
\end{eqnarray}
\item For each $t>0$, and $0<c\leq c^*$ the operator $T_{t,\Omega}^{w_c}$ is ultracontractive.
\item For every $0<c<c^*$, there is a finite constant $C>0$ such that
\begin{eqnarray}
0<p_t^{L_{V_c}^\Omega}(x,y)\leq \frac{C}{t^{\frac{d}{\alpha}}} w_c(x)w_c(y),\ a.e.\ {\rm on}\ \Omega\times\Omega,\ \forall\,t>0.
\label{UppBound}
\end{eqnarray}
\item For $c=c^*$, and $1<p<\frac{d}{d-\alphaha}$ there is a finite constant $C>0$ such that
\begin{eqnarray}
0<p_t^{L_{V_{c^*}}^\Omega}(x,y)\leq \frac{C}{t^{\frac{p}{p-1}}} w_{c^*}(x)w_{c^*}(y),\ a.e.\ {\rm on}\ \Omega\times\Omega,\ \forall\,t>0.
\label{UppBound2}
\end{eqnarray}
\end{enumerate}
\label{UC}
\end{theo}
\begin{proof}
1) and 2): Let $0<c<c^*$. From Hardy's inequality we derive
\begin{eqnarray}
(1-\frac{c}{c^*}){\cal{E}}_\Omega[f]\leq{\cal{E}}_\Omega^{V_c}[f],\ \forall\,f\in W_0^{\alpha/2,2}(\Omega).
\end{eqnarray}
Now we use the known fact that $W_0^{\alpha/2,2}(\Omega)$ embeds continuously into $L^{\frac{2d}{d-\alpha}}(\Omega)$ to obtain the following Sobolev inequality
\begin{eqnarray}
\big(\int_\Omega |f|^{\frac{2d}{d-\alpha}}\,dx\big)^{\frac{d-\alpha}{d}}\leq C{\cal{E}}_\Omega^{V_c}[f],\ \forall\,f\in W_0^{\alpha/2,2}(\Omega).
\end{eqnarray}
An application of H\"older's inequality together with Lemma \ref{closability} and the fact that $\Omega$ is bounded then yields inequality (\ref{w-sob}).\\
Towards proving the Sobolev inequality in the critical case we use the improved Hardy--Sobolev inequality due to Frank--Lieb--Seiringer [Theorem 2.3]: For every $1\leq p<\frac{d}{d-\alpha}$ there is a constant $S_{d,\alpha}(\Omega)$ such that
\begin{eqnarray}
\big(\int |f|^{2p}\,dx\big)^{1/p}\leq S_{d,\alpha}(\Omega)\big({\cal{E}}_\Omega[f]-c^*\int_\Omega\frac{f^2(x)}{|x|^\alpha}\,dx\big),\ \forall\,f\in W_0^{\alpha/2,2}(\Omega),
\label{ISI}
\end{eqnarray}
and the rest of the proof runs as before.\\
3) and 4): As $Q_\Omega^c$ is a Dirichlet form, it is known from the standard theory of Markovian semigroups (see \cite[p.75]{davies-book}) that the Sobolev inequality implies ultracontractivity of $T_{t,\Omega}^{w_c}$ together with the bound
\begin{eqnarray}
\|T_{t,\Omega}^{w_c}\|_{L^2(\Omega,w_c^2dx),L^\infty(\Omega)}\leq\frac{C}{t^{d/\alpha}},\ t>0.
\end{eqnarray}
Now ultracontractivity in turn implies that the semigroup $e^{-tA_\Omega^{w_c}}$ has a nonnegative symmetric (heat) kernel, which we denote by $q_t^\Omega$, and the latter estimate yields (by \cite[p.59]{davies-book})
\begin{eqnarray}
0\leq q_t^\Omega(x,y)\leq \frac{C}{t^{d/\alpha}},\ a.e.,\ \forall\,t>0.
\end{eqnarray}
Recalling formula (\ref{TransformedKernel}):
\begin{eqnarray}
q_t^\Omega(x,y)=\frac{p_t^{L_{V_c}^\Omega}(x,y)}{w_c(x)w_c(y)},\ a.e.,
\end{eqnarray}
yields the upper bound (\ref{UppBound}).\\
The proof of 5) is similar to the latter one, so we omit it.
\end{proof}
At this stage we turn our attention to establishing a lower bound for the heat kernels $p_t^{L_{V_c}^\Omega}$.\\
Let us first observe that, by definition, the Dirichlet form $Q_\Omega^c$ is nothing else but the part of the form $Q^c$ on $\Omega$, i.e., $Q^c_\Omega=Q^c|_{D(Q^c_\Omega)}$ where
$$
\dom(Q^c_\Omega)=\{f\in D(Q^c):f\equiv0\,\, {\rm on }\,\, \Omega^c\}.
$$
Since $Q^c$ is a Dirichlet form and $q_t$ is continuous, there exists a Hunt process on $\mathbb{R}^d$ such that $$\mathbb{P}^x(X_t\in A)=\int_A q_t(x,y) w_c^2(y)\,dy, \quad A\in\mathcal{B}(\mathbb{R}^d).$$
By positivity of $w_c$ and the Dynkin--Hunt formula we get
$$q_t^\Omega(x,y)=q_t(x,y)-\mathbb{E}^x[\tau_\Omega<t,q_{t-\tau_\Omega}(X_{\tau_\Omega},y)],\quad x,y\in \Omega,$$
where $\tau_\Omega=\inf\{t>0:X_t\notin\Omega\}.$
Let $S(t,x):=|x|^{\beta(c)}+t^{\beta(c)/\alpha}$ and $H(t,x):=1+w_c(xt^{-1/\alpha})=w_c(x)S(t,x)$. We know from \cite[Lemma 5.1, Theorem 1.1]{BogdanHardy} together with formula (\ref{TransformedKernel}) that
\begin{equation}\label{eq:HKest_q}q_t(x,y)\approx S(t,x)\left(t^{-d/\alpha}\wedge\frac{t}{|x-y|^{d+\alpha}}\right)S(t,y),\quad t>0, \,x,y\in\mathbb{R}^d.
\end{equation}
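The equality $H(t,x)=w_c(x)S(t,x)$ used above is elementary; for the reader's convenience,
\begin{eqnarray*}
1+w_c\big(xt^{-1/\alpha}\big)=1+|x|^{-\beta(c)}t^{\beta(c)/\alpha}=|x|^{-\beta(c)}\big(|x|^{\beta(c)}+t^{\beta(c)/\alpha}\big)=w_c(x)S(t,x).
\end{eqnarray*}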
\begin{theo}
For every $0<c\leq c^*$, every compact subset $K\subset\Omega$ and every $t>0$, there is a finite constant $\kappa_t=\kappa_t(K)>0$ such that
\begin{eqnarray}
p_t^{L_{V_c}^\Omega}(x,y)\geq \kappa_t w_c(x)w_c(y),\ a.e.\ {\rm on}\ K\times K.
\label{LowBound}
\end{eqnarray}
\label{ThmLowBound}
\end{theo}
\begin{proof}
Since $0\in \Omega$ we may and do assume that $0\in K$ (otherwise we consider the infimum of $p_t^{L_{V_c}^\Omega}$ over a larger compact set containing $0$).
First we prove the lower bound for $q_t^\Omega$ on small balls around zero and for small $t>0$. Let $0<r<1$ be such that $\overline{B_{4r}}\subset\Omega$ and let $x,y\in B_{r}$. Then the Dynkin--Hunt formula leads to
\begin{eqnarray}
q_t^\Omega(x,y)&=&q_t(x,y) - \mathbb{E}^x[t>\tau_{\Omega},q_{t-\tau_{\Omega}}(y,X_{\tau_{\Omega}})]\nonumber\\
&\geq& q_t(x,y) - \sup_{s\leq t,z\in \Omega^c}q_{s}(y,z).
\end{eqnarray}
Since $S(\cdot,y)$ is increasing and $|y-z|>|z|/2>r$ for $z\in\Omega^c$, by \eqref{eq:HKest_q} we obtain
\begin{align*}
\sup_{s\leq t,z\in \Omega^c}q_{s}(y,z)&\leq c_1 \sup_{z\in\Omega^c}S(t,y)\frac{t}{|z|^{d+\alpha}}S(t,z)\\
&\leq c_1S(t,y) \frac{t}{r^{d+\alpha}}\left( 1 + t^{\beta(c)/\alpha}\right).
\end{align*}
Hence \eqref{eq:HKest_q}, applied once again, yields for $t\leq 1$
\begin{align*}
\frac{q_t^\Omega(x,y)}{S(t,y)}&\geq c_2 t^{\beta(c)/\alpha}\left(t^{-d/\alpha}\wedge \frac{t}{|x-y|^{d+\alpha}}\right)-c_1 \frac{t}{r^{d+\alpha}}.
\end{align*}
For $|x-y|\leq t^{1/\alpha}$ and $t\leq T(r):=\left(\frac{c_2r^{d+\alpha}}{2c_1}\right)^{\alpha/(\alpha+d-\beta(c))}<r$ we get
\begin{align*}
\frac{q_t^\Omega(x,y)}{S(t,y)}&\geq c_2 t^{(\beta(c)-d)/\alpha}-c_1 \frac{t}{r^{d+\alpha}}\geq \frac{c_2}{2} t^{(\beta(c)-d)/\alpha}.
\end{align*}
This implies
$$q_t^\Omega(x,y)\geq c S(t,x)S(t,y)t^{-d/\alpha},\quad |x|,|y|\leq \frac{t^{1/\alpha}}{2}. $$ In consequence
$$p_t^{L_{V_c}^\Omega}(x,y)\geq c H(t,x)H(t,y)t^{-d/\alpha}\geq c H(t,x)H(t,y)p_t^{L_{0}^\Omega}(x,y),\quad |x|,|y|\leq \frac{t^{1/\alpha}}{2}.$$
Let $|x|\leq t^{1/\alpha}/2<|y|\leq r$. Set $D:=B_{t^{1/\alpha}/4}\setminus B_{t^{1/\alpha}/8}$. By Duhamel's formula and the estimates of $p_t^{L_0^\Omega}$ $$p_t^{L_{V_c}^\Omega}(z,y)\geq p_t^{L_0^\Omega}(z,y)\geq c \frac{t}{|y|^{d+\alpha}}, \quad z\in D.$$ By the semigroup property
\begin{align*}
p_t^{L_{V_c}^\Omega}(x,y)&\geq \int_{D}p_{t/2}^{L_{V_c}^\Omega}(x,z)p_{t/2}^{L_{V_c}^\Omega}(z,y)\,dz\geq cH(t,x)t^{-d/\alpha}\frac{t}{|y|^{d+\alpha}}|D|\\&\geq c H(t,x)H(t,y)p_t^{L_{0}^\Omega}(x,y).
\end{align*}
For $t^{1/\alpha}/2<|x|,|y|$ we get
$$p_t^{L_{V_c}^\Omega}(x,y)\geq p_t^{L_0^\Omega}(x,y)\geq \inf_{x,y\in K}p_t^{L_0^\Omega}(x,y)=c_t(K)>0.$$
For $|x|\leq t^{1/\alpha}/2\leq r <|y|$ one obtains, by the semigroup property, $p_t^{L_{V_c}^\Omega}(x,y)\geq cH(t,x)c_t(K)$.
In particular we have
\begin{equation*}
p_t^{L_{V_c}^\Omega}(x,y)\geq H(t,x)H(t,y)c_t(K),\quad t\leq T(r),\,x,y\in K.
\end{equation*}
If $t>T(r)$ we use the semigroup property to obtain
\begin{align*}
p_t^{L_{V_c}^\Omega}(x,y)&\geq \int\int_{|z|,|w|\leq r} p_{T(r)/4}^{L_{V_c}^\Omega}(x,z)p_{t-T(r)/2}^{L_{V_c}^\Omega}(z,w)p_{T(r)/4}^{L_{V_c}^\Omega}(w,y)dzdw\\ &\geq c(r,K)H(t,x)H(t,y)\inf_{|z|,|w|<r}p_{t-T(r)/2}^{L_{0}^\Omega}(z,w),
\end{align*}
which ends the proof.
\end{proof}
We are now in a position to describe the exact behavior, in the space variable, of the minimal solution of equation (\ref{heat1}), especially near $0$.
\begin{theo}
\begin{enumerate}
\item For every $t>0$ there is a finite constant $c_t>0$ such that,
\begin{eqnarray}
u(t,x)\leq c_tw_c(x),\ a.e.\ on\ \Omega.
\label{UB}
\end{eqnarray}
It follows in particular that $u(t,\cdot)$ is bounded outside any neighborhood of the origin.
\item For every $t>0$, there are finite constants $c_t,\ c'_t>0$ such that
\begin{eqnarray}
c'_tw_c(x)\leq u(t,x)\leq c_tw_c(x),\ a.e.\ {\rm near}\ 0.
\label{Sharpo}
\end{eqnarray}
\end{enumerate}
Hence $u(t)$ has a standing singularity at $0$.
\label{SharpLoc}
\end{theo}
\begin{proof}
The upper bound (\ref{UB}) follows from Theorem \ref{UC}-4). Let us now prove the lower bound.\\
Let $K$ be a compact subset of $\Omega$ containing $0$ such that the Lebesgue measure of the set $\{x\in K\colon\,u_0(x)>0\}$ is positive.\\
Let $\kappa_t$ be as in (\ref{LowBound}); then
\begin{align*}
u(t,x)&=\int_\Omega p_t^{L_{V_c}^\Omega}(x,y)u_0(y)\,dy\geq\int_K p_t^{L_{V_c}^\Omega}(x,y)u_0(y)\,dy\geq
\kappa_t w_c(x)\int_K w_c(y)u_0(y)\,dy\nonumber\\
&\geq c_t'w_c(x),\ a.e.\ \text{ on }\ K,
\end{align*}
with $c_t'>0$, which was to be proved.
\end{proof}
The local sharp estimate (\ref{Sharpo}) leads to a sharp global regularity property of the minimal solution, expressing thereby the smoothing effect of the semigroup $e^{-tL_{V_c}^\Omega}$.
\begin{prop}
\begin{enumerate}
\item For every $t>0$, the minimal solution $u(t)$ lies in the space $L^p(\Omega),\ p\geq 1$ if and only if $1\leq p< \frac{d}{\beta}$.
\item The operator $e^{-tL_{V_c}^\Omega},\ t>0$ maps continuously $L^2(\Omega)$ into $L^p(\Omega)$ for every $2\leq p< \frac{d}{\beta}$.
\item The operator $e^{-tL_{V_c}^\Omega}: L^q(\Omega)\to L^p(\Omega),\ t>0$ is bounded for every $\frac{d}{d-\beta}<q<p<\frac{d}{\beta}$.
\item The operator $L_{V_c}^\Omega$ has compact resolvent. Denote by $(\varphi_k^{L_{V_c}})_k$ its eigenfunctions. Then $(\varphi_k^{L_{V_c}})_k\subset L^p(\Omega)$ for every $p< \frac{d}{\beta}$.
\end{enumerate}
\end{prop}
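The threshold $\frac{d}{\beta}$ in the proposition comes from an elementary computation which we record for the reader's convenience (recall that $\Omega$ is bounded, $0\in\Omega$ and $w_c(x)=|x|^{-\beta}$):
\begin{eqnarray*}
\int_\Omega w_c^p(x)\,dx=\int_\Omega |x|^{-p\beta}\,dx<\infty\quad\Longleftrightarrow\quad p\beta<d.
\end{eqnarray*}
Combining this with the two-sided bound (\ref{Sharpo}) near $0$ and the upper bound (\ref{UB}) on the rest of $\Omega$ yields assertion 1.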
\begin{proof}
The first assertion is an immediate consequence of Theorem \ref{SharpLoc}.\\
2): Let $u_0\in L^2(\Omega)$ and let $p$ be as described in the assertion. Thanks to the upper bounds (\ref{UppBound})-(\ref{UppBound2}), a straightforward computation leads to
\begin{eqnarray}
\int_\Omega \left|e^{-tL_{V_c}^\Omega} u_0(x)\right|^p\,dx\leq c_t(\int_\Omega w_c|u_0|\,dx)^p\int_\Omega w_c^p\,dx\leq C(\int_\Omega u_0^2\,dx)^{p/2}.
\end{eqnarray}
3): This follows from the Riesz--Thorin interpolation theorem.\\
4): We claim that for each $t>0$ the operator $e^{-tL_{V_c}^\Omega}$ is Hilbert--Schmidt. Indeed, the upper bound (\ref{UppBound}) leads to
$$
\int_\Omega\int_\Omega \left(p_t^{L_{V_c}^\Omega}\right)^2(x,y)\,dx\,dy\leq C\int_\Omega w_c^2(x)\,dx
\cdot\int_\Omega w_c^2(x)\,dx<\infty,
$$
and the claim is proved. Hence $L_{V_c}^\Omega$ has compact resolvent. The claim about eigenfunctions follows from assertion 2.
\end{proof}
\begin{rk}
{\rm We claim that $\dot{\cal{E}}_\Omega^{V_{c^*}}$ is not closed. Indeed, utilizing the inequality $ p_t^{L_{V_c}^\Omega}\geq p_t^{L_0^\Omega}$ we conclude that the semigroup $e^{-tL_{V_c}^\Omega}$ is irreducible for each $t>0$ and each $0<c\leq c^*$. Consequently, the smallest eigenvalue of $L_{V_{c^*}}^\Omega$ is non-degenerate, i.e. its eigenspace has dimension one and is generated by a nonnegative function, say $ \varphi^{L_{V_{c^*}}^\Omega}$. If $\dot{\cal{E}}_\Omega^{V_{c^*}}$ were closed, then the ground state $ \varphi^{L_{V_{c^*}}^\Omega}$ would belong to $W_0^{\alpha/2,2}(\Omega)$ and hence, by Hardy's inequality, we would get $\int_\Omega (\varphi^{L_{V_{c^*}}^\Omega})^2(x)V_{c^*}(x)\,dx<\infty$. However, from the lower bound (\ref{LowBound}), we obtain for each small ball $B$ around zero
$$
\int_\Omega \left(\varphi^{L_{V_{c^*}}^\Omega}\right)^2(x)V_{c^*}(x)\,dx\geq C\int_B w_{c^*}^2(x)V_{c^*}(x)\,dx=\infty,
$$
leading to a contradiction.
}
\label{NotClosed}
\end{rk}
The already established upper estimate for the heat kernel enables one to extend the semigroup to a larger class of initial data.
\begin{theo}
\begin{enumerate}
\item The semigroup $e^{-tL_{V_c}^\Omega},\ t>0$ extends to a bounded linear operator from $L^1(\Omega,w_cdx)$ into $L^2(\Omega)$.
\item The semigroup $e^{-tL_{V_c}^\Omega},\ t>0$ extends to a bounded linear operator from $L^p(\Omega,w_cdx)$ into $L^p(\Omega)$ for every $1\leq p<\infty$.
\item The semigroup $e^{-tL_{V_c}^\Omega},\ t>0$ extends to a bounded linear semigroup from $L^p(\Omega,w_cdx)$ into $L^p(\Omega,w_cdx)$ for every $1\leq p<d/3$.
\end{enumerate}
\end{theo}
\begin{proof}
Having estimate (\ref{UppBound}) in hands, a straightforward computation yields
\begin{eqnarray*}
\int_\Omega (e^{-tL_{V_c}^\Omega}u_0)^2\,dx\leq c_t\int_\Omega w_c^2\,dx\cdot\big(\int_\Omega |u_0|w_c\,dy\big)^2,\ \forall\,t>0,
\end{eqnarray*}
and assertion 1. is proved.\\
Similarly, using H\"older's inequality we achieve
\begin{align*}
|e^{-tL_{V_c}^\Omega}u_0(x)|^p&\leq \int_\Omega p_t^{L_{V_c}^\Omega}(x,y)\,dy\int_\Omega p_t^{L_{V_c}^\Omega}(x,y) |u_0|^p\,dy
\leq c_tw_c^2(x)\int_\Omega w_c(y)\,dy\int_\Omega |u_0|^pw_c\,dy\\
&\leq c_t \left(\int_\Omega |u_0|^pw_c\,dy\right) w_c^2(x).
\end{align*}
Integrating w.r.t. $x$, we obtain assertion 2.\\
Assertion 3. can be proved in the same way.
\end{proof}
\section{Blow-up of nonnegative solutions on open sets in the supercritical case}
In this section we shall make use of the lower bound for the heat kernel, as well as for nonnegative solutions in the critical case on bounded open sets, which we established in the last section, to show that for $c>c^*$ any nonnegative solution of the heat equation (\ref{heat1}) on an arbitrary open set containing zero blows up completely and instantaneously. This result complements the corresponding ones for bounded sets with Lipschitz boundary \cite{benamor-kenzizi} and for $\Omega={\mathbb{R}}^d$ \cite{BogdanHardy}, so as to give a full picture concerning existence and nonexistence of nonnegative solutions for the Dirichlet fractional Laplacian with Hardy potentials.\\
However, the idea of the proof deviates from those developed in \cite{benamor-kenzizi,BogdanHardy}. Our proof relies on the previously established lower bounds for $p_t^{L_{V_{c^*}}^\Omega}$ and for nonnegative solutions on balls.\\
Henceforth we fix a nonempty open set $\Omega\subset{\mathbb{R}}^d$ containing zero and $c>0$.\\
Let $V\in L_{loc}^1(\Omega,dx)$ be a nonnegative potential. We set $W_k:=V\wedge k$ and denote by $(P_k)$ the heat equation corresponding to the Dirichlet fractional Laplacian perturbed by $-W_k$ instead of $-V$:
\begin{eqnarray}
\label{heat-app}
(P_k)\colon\left\{\begin{gathered}
-\frac{\partial u}{\partial t}=L_0^\Omega u - W_k u,\quad \hbox{in } (0,T)\times\Omega,\\
u(t,\cdot)=0\ {\rm on}\ \Omega^c,\ \forall\,0<t<T\leq\infty,\\
u(0,x)= u_{0}(x)\ {\rm for}\ a.e.\ x\in {\mathbb{R}}^d,
\end{gathered}
\right.
\end{eqnarray}
Denote by $L_k$ the selfadjoint operator associated to the closed quadratic form ${\cal{E}}_\Omega-W_k$ and $u_k(t):=e^{-tL_k}u_0,\ t\geq 0$ the nonnegative semigroup solution of problem $(P_k)$. Then
$u_k$ satisfies Duhamel's formula:
\begin{eqnarray}
u_k(t,x)&=&e^{-tL_0^\Omega}u_0(x)+\int_0^t\int_\Omega p_{t-s}^{L_0^\Omega}(x,y)u_k(s,y)W_k(y)\,dy\,ds,\ \forall\,t>0,
\label{duhamel}
\end{eqnarray}
Let us list the properties of the sequence $(u_k)$ and establish existence of the minimal solution.
\begin{lem}
\begin{itemize}
\item[i)] The sequence $(u_k)$ is increasing.
\item[ii)] If $u$ is any nonnegative solution of problem (\ref{heat2}) then $u_k\leq u,\ \forall\,k$. Moreover $u_\infty:=\lim_{k\to\infty}u_k$ is a nonnegative solution of problem (\ref{heat2}) as well.
\end{itemize}
\label{domination}
\end{lem}
The proof runs as the one corresponding to the case of bounded domains (see \cite{benamor-kenzizi}), so we omit it.\\
We recall that we use the notation $u(t)$ to designate the minimal solution $u_\infty(t)$.
\begin{rk}
{\rm Let $0<c\leq c^*$. Owing to the lower bound (\ref{LowBound}), together with the fact that $p_t^{L_{V_c}^\Omega}\geq p_t^{L_{V_c}^B}$ for any ball $B$ such that $B\subset\Omega$, we automatically get: for every compact subset $K\subset\Omega$ and every $t>0$, there is a finite constant $\kappa_t=\kappa_t(K)>0$ such that
\begin{eqnarray}
p_t^{L_{V_c}^\Omega}(x,y)\geq \kappa_t w_c(x)w_c(y),\ a.e.\ {\rm on}\ K\times K,\ \forall\,t>0.
\label{LowBoundGen}
\end{eqnarray}
Hence
\begin{eqnarray}
u(t,x)\geq c_t w_c(x),\ a.e.\ \text{near}\ 0.
\label{LowMinGen}
\end{eqnarray}
Thus, for any nonempty open set containing zero, the minimal solution has a standing singularity at $0$.
}
\end{rk}
Let us establish a Duhamel formula for the minimal solution.
\begin{lem}
Let $u$ be the minimal solution of equation (\ref{heat1}) with $c\geq c^*$. Then $u$ satisfies the following Duhamel's formula:
\begin{eqnarray}
u(t,x) = e^{-tL_{V_{c^*}}^\Omega}u_0(x) +(c-c^*)\int_0^t\int_{\Omega} p_{t-s}^{L_{V_{c^*}}^\Omega}(x,y)u(s,y)|y|^{-\alpha}\,ds\,dy,\ \forall\,t>0,\ a.e.\,x.
\label{Rep}
\end{eqnarray}
\end{lem}
\begin{proof}
Set $W_k^*=V_{c^*}\wedge k$. Then
\begin{eqnarray}
u_k(t,x)=e^{-tL_k^\Omega}u_0(x)= e^{-tL_{W_k^*}^\Omega}u_0(x) +
\int_0^t\int_\Omega p_{t-s}^{L_{W_k^*}^\Omega}(x,y)u_k(s,y)\big(W_k(y)-W_{k}^*(y)\big)\,dy\,ds.
\end{eqnarray}
A simple calculation shows that the sequence $(W_k-W_{k}^*)_k$ is increasing. As the minimal solution is the limit of the $u_k$'s, the result follows by an application of the monotone convergence theorem.
\end{proof}
We have by now collected enough material to state the main theorem of this section.
\begin{theo}
Assume that $c>c^*$. Then the heat equation (\ref{heat1}) has no nonnegative solution. It follows that the minimal solution blows up completely and instantaneously.
\end{theo}
\begin{proof}
Assume that a nonnegative solution $u$ exists. Relying on Lemma \ref{domination}, we may and shall suppose that $u=u_\infty$. Put $c'=c-c^*>0$ and let $B$ be an open ball centered at $0$ such that $B\subset\Omega$ and $u_0\not\equiv 0$ on $B$.\\
Owing to the fact that $p_t^{L_{V_{c^*}}^\Omega}\geq p_t^{L_{V_{c^*}}^B}$, the identity (\ref{Rep}) together with the lower bound (\ref{LowMinGen}) yields
\begin{eqnarray}
u(t,x)\geq e^{-tL_{V_{c^*}}^B}u_0(x)\geq c_tw_{c^*}(x),\ a.e.\ {\rm on}\ B':=\frac{1}{2}B.
\end{eqnarray}
Using formula (\ref{Rep}) and the lower bound (\ref{LowBound}), together once again with the latter estimate, we obtain
\begin{align}
u(t,x) &\geq c' \int_0^t \int_{B} p_{t-s}^{L_{V_{c^*}}^B}(x,y)u(s,y)|y|^{-\alpha}\,dy\,ds \geq c'\int _0^t c_s\int_{B'} p_{t-s}^{L_{V_{c^*}}^B}(x,y)w_{c^*}(y)|y|^{-\alpha}\,dy\,ds\nonumber\\
&\geq c'w_{c^*}(x)\int_0^t c'_s\int_{B'} w_{c^*}^2(y)|y|^{-\alpha}\,dy\,ds.
\end{align}
However, since $w_{c^*}^2(y)|y|^{-\alpha}=|y|^{-d}$,
\begin{eqnarray}
\int_{B'} w_{c^*}^2(y)|y|^{-\alpha}\,dy=\infty,
\end{eqnarray}
and hence $u(t,x)=\infty$ a.e., which contradicts the existence of the solution $u$ and finishes the proof.
\end{proof}
{\bf Acknowledgement.} The author is very grateful to Tomasz Grzywny, who did the major part of the proof of Theorem \ref{ThmLowBound}.
\end{document} |
\begin{document}
\begin{abstract} We explore the optimality of the constants appearing in the recently established Little Grothendieck inequality for JB$^*$-triples and JB$^*$-algebras. In our main result we prove that for each bounded linear operator $T$ from a JB$^*$-algebra $B$ into a complex Hilbert space $H$ and each $\varepsilon>0$, there is a norm-one functional $\varphi\in B^*$ such that
$$\norm{Tx}\le(\sqrt{2}+\varepsilon)\norm{T}\norm{x}_\varphi\quad\mbox{ for }x\in B.$$ The constant appearing in this theorem improves the best
value known up to date (even for C$^*$-algebras).
We also present an easy example witnessing that the constant cannot be strictly smaller than $\sqrt2$, hence our main theorem
is `asymptotically optimal'.
For type I JBW$^*$-algebras we establish a canonical decomposition of normal functionals which may be used to prove the main result in this special case and also seems to be of an independent interest. As a tool we prove a measurable version of the Schmidt representation of compact operators on a Hilbert space.
\end{abstract}
\title{On optimality of constants in the Little Grothendieck Theorem}
\section{Introduction}
We investigate the optimal value of the constant in the Little Grothendieck theorem for JB$^*$-algebras. The story begins in 1956
when Grothendieck \cite{grothendieck1956resume} proved his famous theorem on factorization of bilinear forms on spaces of continuous functions through Hilbert spaces. A weaker form of this result, called Little Grothendieck Theorem, can be formulated as a canonical factorization of bounded linear operators from spaces of continuous functions into a Hilbert space. It was also proved by Grothendieck \cite{grothendieck1956resume} (see also \cite[Theorem 5.2]{pisier2012grothendieck}) and reads as follows.
\begin{thma}\label{T-C(K)}
There is a universal constant $k$ such that
for any bounded linear operator $T:C(K)\to H$, where $K$ is a compact space and $H$ is a Hilbert space, there is a Radon probability measure $\mu$ on $K$ such that
$$\norm{Tf}\le k\norm{T} \left(\int \abs{f}^2\,\mbox{\rm d}\mu\right)^{\frac12}\quad\mbox{ for }f\in C(K).$$
Moreover, the optimal value of $k$ is $\frac2{\sqrt\pi}$ in the complex case and $\sqrt{\frac\pi2}$ in the real case.
\end{thma}
The Grothendieck theorem was later extended to the case of C$^*$-algebras by Pisier \cite{pisier1978grothendieck} and Haagerup \cite{haagerup1985grothendieck}. Its `little version' reads as follows. Henceforth, all Hilbert spaces considered in this note will be over the complex field.
\begin{thma}\label{T-C*alg}
Let $A$ be a C$^*$-algebra, $H$ a Hilbert space and $T:A\to H$ a bounded linear operator. Then there are two states $\varphi_1,\varphi_2\in A^*$ such that
$$\norm{Tx}\le\norm{T}\left(\varphi_1(x^*x)+\varphi_2(xx^*)\right)^{\frac12}\quad \mbox{ for }x\in A.$$
Moreover, the constant $1$ on the right-hand side is optimal.
\end{thma}
The positive part of the previous theorem is due to Haagerup \cite{haagerup1985grothendieck}; the optimality result was proved by Haagerup and Itoh in \cite{haagerup-itoh} (see also \cite[Section 11]{pisier2012grothendieck}). Let us recall that a \emph{state} on a C$^*$-algebra is a positive functional of norm one, hence in the case of a complex $C(K)$ space (which is a commutative C$^*$-algebra), a state is just a functional represented by a probability measure. Hence, as a consequence of Theorem~\ref{T-C*alg}
we get a weaker version of the complex version of Theorem~\ref{T-C(K)} with
$k\le\sqrt{2}$.
Let us point out that Theorem~\operatorname{Re}f{T-C*alg} is specific for (noncommutative) C$^*$-algebras due to the asymmetric role played there by the products $xx^*$ and $x^*x$.
To formulate its symmetric version recall that the Jordan product on a C$^*$-algebra $A$ is defined by
$$x\circ y=\frac12(xy+yx)\quad\mbox{ for }x,y\in A.$$
Using this notation we may formulate the following consequence of Theorem~\ref{T-C*alg}.
\begin{thma}\label{T:C*alg-sym}
Let $A$ be a C$^*$-algebra, $H$ a Hilbert space and $T:A\to H$ a bounded linear operator. Then there is a state $\varphi\in A^*$ such that
$$\norm{Tx}\le2\norm{T}\varphi(x\circ x^*)^{\frac12}\quad \mbox{ for }x\in A.$$
\end{thma}
To deduce Theorem~\ref{T:C*alg-sym} from Theorem~\ref{T-C*alg} it is enough to take $\varphi=\frac12(\varphi_1+\varphi_2)$ and to use positivity of the elements $xx^*$ and $x^*x$. However, in this case the question on optimality of the constant remains open.
\begin{ques}
Is the constant $2$ in Theorem~\ref{T:C*alg-sym} optimal?
\end{ques}
It is easy to show that the constant should be at least $\sqrt2$ (see Example~\ref{ex:alg vs triple} below) and, to the best of our knowledge, no counterexample is known showing that $\sqrt{2}$ is not enough.
A further generalization of the Grothendieck theorem, to the setting of JB$^*$-triples (see Section~\ref{sec: notation} for basic definitions and properties), was suggested by Barton and Friedman \cite{barton1987grothendieck}. However, their proof contained a gap found later by Peralta and Rodr\'{\i}guez Palacios \cite{peralta2001little,peralta2001grothendieck} who proved a weaker variant of the theorem. A correct proof was recently provided by the authors in \cite{HKPP-BF}. The `little versions' of these results are summarized in the following theorem.
\begin{thma}\label{T:triples}
Let $E$ be a JB$^*$-triple, $H$ a Hilbert space and $T:E\to H$ a bounded linear operator.
\begin{enumerate}[$(1)$]
\item If $T^{**}$ attains its norm, there is a norm-one functional $\varphi\in E^*$ such that
$$\norm{Tx}\le\sqrt{2}\norm{T}\norm{x}_\varphi\quad\mbox{ for }x\in E.$$
\item Given $\varepsilon>0$, there are norm-one functionals $\varphi_1,\varphi_2\in E^*$ such that
$$\norm{Tx}\le(\sqrt{2}+\varepsilon)\norm{T}\left(\norm{x}^2_{\varphi_1}+\varepsilon\norm{x}^2_{\varphi_2}\right)^{\frac12}\quad\mbox{ for }x\in E.$$
\item Given $\varepsilon>0$, there is a norm-one functional $\varphi\in E^*$ such that
$$\norm{Tx}\le(2+\varepsilon)\norm{T}\norm{x}_\varphi\quad\mbox{ for }x\in E.$$
\end{enumerate}
\end{thma}
The pre-hilbertian seminorms $\norm{\cdot}_\varphi$ used in the statement are defined in Subsection~\ref{subsec:seminorms} below.
Let us comment on the history and the differences of the three versions.
It was claimed in \cite[Theorem 1.3]{barton1987grothendieck} that assertion $(1)$ holds without the additional assumption on attaining the norm, because the authors believed that this assumption is automatically satisfied. In \cite{peralta2001little} and \cite[Example 1 and Theorem 3]{peralta2001grothendieck} it was pointed out that this is not the case and assertion $(2)$ was proved using a variational principle from \cite{Poliquin-Zizler}.
In \cite[Lemma 3]{peralta2001grothendieck} also assertion $(1)$ was formulated.
Note that in $(2)$ not only the constant $\sqrt2$ is replaced by a slightly larger one, but also the pre-hilbertian seminorm on the right-hand side is perturbed. This perturbation was recently avoided in \cite[Theorem 6.2]{HKPP-BF}, at the cost of squaring the constant.
Further, although the proof from \cite{barton1987grothendieck} was not correct, up to now there is no counterexample to the statement itself.
In particular, the following question remains open.
\begin{ques}
What is the optimal constant in assertion $(3)$ of Theorem~\ref{T:triples}? In particular, does assertion $(1)$ of the mentioned theorem hold without assuming the norm-attainment?
\end{ques}
The main result of this note is the following partial answer.
\begin{thm}\label{t constant >sqrt2 in LG for JBstar algebras}
Let $B$ be a JB$^*$-algebra, $H$ a Hilbert space and $T:B\to H$ a bounded linear operator. Given $\varepsilon>0$, there is a norm-one functional $\varphi\in B^*$ such that
$$\norm{Tx}\le(\sqrt{2}+\varepsilon)\norm{T}\norm{x}_\varphi\quad\mbox{ for }x\in B.$$
In particular, this holds if $B$ is a C$^*$-algebra.
\end{thm}
Note that JB$^*$-algebras form a subclass of JB$^*$-triples and can be viewed as a generalization of C$^*$-algebras (see the next section). We further remark that the previous theorem is `asymptotically optimal' as the constant cannot be strictly smaller than $\sqrt2$ by Example~\ref{ex:Tx=xxi} below.
The paper is organized as follows. Section~\ref{sec: notation} contains basic background on JB$^*$-triples and JB$^*$-algebras. In Section~\ref{sec:majorizing} we formulate the basic strategy of the proof using majorization results for pre-hilbertian seminorms.
In Section~\ref{sec:type I} we deal with a large subclass of JBW$^*$-algebras (finite ones and those of type I). The main result of this section is Proposition~\ref{P:type I approx} which provides a canonical decomposition of normal functionals on the JBW$^*$-algebras just mentioned.
This statement may be used to prove the main result in this special case and, moreover, it seems to be of an independent interest.
As a tool we further establish a measurable version of the Schmidt decomposition of compact operators (see Theorem~\ref{T:measurable Schmidt}).
In Section~\ref{sec:JW*} we address Jordan subalgebras of von Neumann algebras. Section~\ref{sec:proofs} contains the synthesis of the previous sections, the proof of the main result and some consequences. In particular, we show that Theorem~\ref{T-C*alg} (with the precise constant) follows easily from Theorem~\ref{t constant >sqrt2 in LG for JBstar algebras}.
Section~\ref{sec:problems} contains several examples witnessing optimality of some results and related open problems. In Section~\ref{sec:triples} we discuss the possibility of extending our results to general JB$^*$-triples.
\section{Basic facts on JB$^*$-triples and JB$^*$-algebras}\label{sec: notation}
It is known that, in general (as happens already in $B(H)$), the hermitian part of a C$^*$-algebra $A$ need not be a subalgebra of $A$, since it is not necessarily closed under the associative product. This instability can be avoided, at the cost of losing associativity, by replacing the associative product $a b$ in $A$ with the \emph{Jordan product} defined by \begin{equation}\label{eq special Jordan product} a\circ b := \frac12 (a b + ba).
\end{equation}
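Indeed, a one-line check shows that the self-adjoint part is stable under this product: if $a=a^*$ and $b=b^*$, then
$$(a\circ b)^*=\frac12(ab+ba)^*=\frac12(b^*a^*+a^*b^*)=\frac12(ba+ab)=a\circ b.$$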
This may be seen as an inspiration for the following abstract definitions.
A real or complex \emph{Jordan algebra} is a non-necessarily associative algebra $B$ over $\mathbb{R}$ or $\mathbb{C}$ whose multiplication (denoted by $\circ$) satisfies the identities:\begin{equation}\label{eq Jordan axioms} x\circ y = y \circ x \hbox{ (commutative law) and }
( x\circ y )\circ x^2 = x \circ ( y \circ x^2 ) \hbox{ (Jordan identity)}
\end{equation} for all $x,y\in B$, where $x^2 = x\circ x$.
Jordan algebras were the mathematical structures designed by the theoretical physicist P.~Jordan to formalize the notion of an algebra of observables in quantum mechanics in 1933. The term ``Jordan algebra'' was introduced by A.A.~Albert in the 1940s. Promoted by the pioneering works of I.~Kaplansky, E.M.~Alfsen, F.W.~Shultz, H.~Hanche-Olsen, E.~St\"{o}rmer, J.D.M.~Wright and M.A.~Youngson, JB$^*$- and JBW$^*$-algebras are Jordan extensions of C$^*$- and von Neumann algebras. A {\em JB$^*$-algebra} is a complex Jordan algebra $(B,\circ)$ equipped with a complete norm $\|\cdot\|$ and an involution $*$ satisfying the following axioms: \begin{enumerate}[$(a)$]
\item $\norm{x\circ y}\le\norm{x}\norm{y}$ for $x,y\in B$;
\item $\norm{U_{x} (x^*)}=\norm{x}^3$ for $x\in B$
({\em a Gelfand-Naimark type axiom}),
\end{enumerate} where $U_{x} (y) = 2(x\circ y)\circ x-x^2\circ y$ ($x,y\in B$). These axioms guarantee that the involution of every JB$^*$-algebra is an isometry (see \cite[Lemma 4]{youngson1978vidav} or \cite[Proposition 3.3.13]{Cabrera-Rodriguez-vol1}).
JB$^*$-algebras were also called \emph{Jordan C$^*$-algebras} by I. Kaplansky and other authors at the early stages of the theory.
Every C$^*$-algebra is a JB$^*$-algebra with its original norm and involution and the Jordan product defined in \eqref{eq special Jordan product}. Actually, every norm closed self-adjoint Jordan subalgebra of a C$^*$-algebra is a JB$^*$-algebra. Those JB$^*$-algebras obtained as JB$^*$-subalgebras of C$^*$-algebras are called \emph{JC$^*$-algebras}. There exist JB$^*$-algebras which are \emph{exceptional} in
the sense that they cannot be identified with a JB$^*$-subalgebra of a C$^*$-algebra, this is the case of the JB$^*$-algebra $H_3(\mathbb O)$ of all $3\times 3$-hermitian matrices with entries in the algebra $\mathbb O$ of complex octonions (see, for example, \cite[\S 7.2]{hanche1984jordan}, \cite[\S 6.1 and 7.1]{Cabrera-Rodriguez-vol2} or \cite[\S 6.2 and 6.3]{Finite}).
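To see how the Gelfand--Naimark type axiom $(b)$ above reduces to the usual C$^*$-identity in the special case of a C$^*$-algebra equipped with the Jordan product \eqref{eq special Jordan product}, note that a direct computation gives
$$U_x(y)=2(x\circ y)\circ x-x^2\circ y=\frac12\big(xyx+yx^2+x^2y+xyx\big)-\frac12\big(x^2y+yx^2\big)=xyx,$$
so that $\norm{U_x(x^*)}=\norm{xx^*x}=\norm{x}^3$ by the C$^*$-identity.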
A JBW$^*$-algebra (respectively, a JW$^*$-algebra) is a JB$^*$-algebra (respectively, a JC$^*$-algebra) which is also a dual Banach space.
JB$^*$-algebras are intrinsically connected with another mathematical object deeply studied in the literature. A \emph{JB-algebra} is a real Jordan algebra $J$ equipped with a complete norm satisfying \begin{equation}\label{eq axioms of JB-algebras} \|a^{2}\|=\|a\|^{2}, \hbox{ and } \|a^{2}\|\leq \|a^{2}+b^{2}\|\ \hbox{ for all } a,b\in J.
\end{equation}
In a celebrated lecture in St.\ Andrews in 1976 I. Kaplansky suggested the definition of JB$^*$-algebra and pointed out that the self-adjoint part $B_{sa}=\{x\in B : x^* =x\}$ of a JB$^*$-algebra is always a JB-algebra. One year later, J.D.M. Wright contributed one of the most influential results in the theory of JB$^*$-algebras by proving that the complexification of every JB-algebra is a JB$^*$-algebra (see \cite{Wright1977}). A \emph{JC-algebra} (respectively, a \emph{JW-algebra}) is a norm-closed (respectively, a weak$^*$-closed) real Jordan subalgebra of the hermitian part of a C$^*$-algebra (respectively, of a von Neumann algebra).
Suppose $B$ is a unital JB$^*$-algebra. The smallest norm-closed real Jordan subalgebra $C(a)$ of $B_{sa}$ containing a self-adjoint element $a$ in $B$ and $1$ is associative. According to the usual notation, the \emph{spectrum of $a$} in $B$, denoted by $Sp(a)$, is the set of all real $\lambda$ such that $a - \lambda 1$ does not have an inverse in $C(a)$ (cf. \cite[3.2.3]{hanche1984jordan}). If $B$ is not unital we consider the unitization of $B$ to define the spectrum of a self-adjoint element. It is known that the JB$^*$-subalgebra of $B$ generated by a single self-adjoint element $a\in B$ and the unit is isometrically JB$^*$-isomorphic to the commutative C$^*$-algebra $C(Sp(a))$, of all complex-valued continuous functions on $Sp(a)$ (see \cite[3.2.4. The spectral theorem]{hanche1984jordan}). An element $a\in B$ is called positive if $a=a^*$ and $Sp(a)\subseteq \mathbb{R}_{0}^+$ (cf. \cite[3.3.3]{hanche1984jordan}).
Although there exist exceptional JB$^*$-algebras which cannot be embedded as JB$^*$-subalgebras of C$^*$-algebras, the JB$^*$-subalgebra of a JB$^*$-algebra $B$ generated by two hermitian elements (and the unit element) is a JC$^*$-algebra (compare Macdonald's and Shirshov-Cohn's theorems \cite[Theorems 2.4.13 and 2.4.14]{hanche1984jordan}, \cite[Corollary 2.2]{Wright1977} or \cite[Proposition 3.4.6]{Cabrera-Rodriguez-vol1}). Consequently, for each $x\in B$, the element $x\circ x^*$ is positive in $B$.
We refer to the references \cite{hanche1984jordan,Cabrera-Rodriguez-vol1} and \cite{Cabrera-Rodriguez-vol2} for the basic background, notions and results on JB$^*$-algebras.
C$^*$- and JB$^*$-algebras have been extensively employed as a framework for studying bounded symmetric domains in complex Banach spaces of infinite dimension, as an alternative notion to simply connected domains. The open unit ball of every C$^*$-algebra is a bounded symmetric domain (see \cite{harris1974bounded}) and the open unit balls of (unital) JB$^*$-algebras are, up to a biholomorphic mapping, those bounded symmetric domains
which have a realization as a tube domain, i.e. an upper half-plane (cf. \cite{BraKaUp78}). These examples do not exhaust all possible bounded symmetric domains in arbitrary complex Banach spaces, a strictly wider class of Banach spaces is actually required. The most conclusive result was obtained by W. Kaup who proved in 1983 that every bounded symmetric domain in a complex Banach space is biholomorphically equivalent to the open unit ball of a JB$^*$-triple \cite{kaup1983riemann}.
A complex Banach space $E$ belongs to the class of {\em JB$^*$-triples} if it admits a triple product (i.e., a continuous mapping) $\J{\cdot}{\cdot}{\cdot}:E^3\to E$ which is symmetric and bilinear in the outer variables and conjugate linear in the middle variable and satisfies the next algebraic and geometric axioms:
\begin{enumerate}[(JB$^*$-1)]
\item $\J xy{\J abc}=\J{\J xya}bc-\J a{\J yxb}c+\J ab{\J xyc}$ for any $x,y,a,b,c\in E$
({\em Jordan identity});
\item For any $a\in E$ the operator $L(a,a):x\mapsto \J aax$ is hermitian with non-negative spectrum;
\item $\norm{\J xxx}=\norm{x}^3$ for all $x\in E$
({\em a Gelfand-Naimark type axiom}).
\end{enumerate}
C$^*$-algebras and JB$^*$-algebras belong to the wide list of examples of JB$^*$-triples when they are equipped with the triple products given by \begin{equation}\label{eq triple product JCstar and JBstar} \{a,b,c\} =\frac12 ( a b^* c + c b^* a),\hbox{ and } \{a,b,c\} = (a\circ b^*) \circ c +(c\circ b^*) \circ a - (a\circ c) \circ b^*,
\end{equation} respectively (see \cite[Theorem 3.3]{BraKaUp78} or \cite[Theorem 4.1.45]{Cabrera-Rodriguez-vol1}). The first triple product in \eqref{eq triple product JCstar and JBstar} induces a structure of JB$^*$-triple on every closed subspace of the space $B(H,K),$ of all bounded linear operators between complex Hilbert spaces $H$ and $K$, which is closed under this triple product. In particular, $B(H,K)$ and every complex Hilbert space are JB$^*$-triples with their canonical norms and the first triple product given in \eqref{eq triple product JCstar and JBstar}.
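For instance, identifying a complex Hilbert space $H$ with $B(\mathbb{C},H)$ (a vector $x$ acting as $\lambda\mapsto\lambda x$), and with the convention that the inner product is linear in the first variable, the first triple product in \eqref{eq triple product JCstar and JBstar} becomes
$$\{x,y,z\}=\frac12\big(\langle x,y\rangle z+\langle z,y\rangle x\big),\qquad\mbox{so in particular }\ \{x,x,x\}=\norm{x}^2x\ \mbox{ and }\ \norm{\{x,x,x\}}=\norm{x}^3.$$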
In a JB$^*$-triple $E$ the triple product is contractive, that is,
$$\|\{x,y,z\}\|\le\|x\| \|y\| \|z\| \quad\mbox{ for all } x,y,z \mbox{ in } E$$ (cf. \cite[Corollary 3]{Friedman-Russo-GN} or \cite[Corollary 7.1.7]{Cabrera-Rodriguez-vol2}, \cite[P.\ 215]{chubook}).
A linear bijection between JB$^*$-triples is a triple isomorphism if and only if it is an isometry (cf. \cite[Proposition 5.5]{kaup1983riemann} or \cite[Theorems 3.1.7, 3.1.20]{chubook}). Thus, a complex Banach space admits a unique triple product under which it is a JB$^*$-triple.
A JBW$^*$-triple is a JB$^*$-triple which is also a dual space. Every JBW$^*$-triple admits a unique (in the isometric sense) predual and its triple product is separately weak$^*$-continuous (see \cite{BaTi}, \cite[Theorems 5.7.20, 5.7.38]{Cabrera-Rodriguez-vol2}).
Each idempotent $e$ in a Banach algebra $A$ produces a Peirce decomposition of $A$ as a sum of eigenspaces of the left and right multiplication operators by the idempotent $e$. A.A. Albert extended the classical Peirce decomposition to the setting of Jordan algebras in the middle of the last century. The notion of idempotent is not available in a general JB$^*$-triple, which carries no binary product. The appropriate alternative is the concept of tripotent. An element $e$ in a JB$^*$-triple $E$ is a \emph{tripotent} if $\{e,e,e\}=e$. It is worth mentioning that when a C$^*$-algebra $A$ is regarded as a JB$^*$-triple with respect to the first triple product given in \eqref{eq triple product JCstar and JBstar}, an element $e\in A$ is a tripotent if and only if it is a partial isometry (i.e., $e e^*$, or equivalently $e^* e$, is a projection) in $A$.
In case we fix a tripotent $e$ in a JB$^*$-triple $E$, the classical Peirce decomposition for associative and Jordan algebras extends to a \emph{Peirce decomposition} of $E$ associated with the eigenspaces of the mapping $L(e,e)$, whose eigenvalues are all contained in the set $\{0,\frac12,1\}$. For $j\in\{0,1,2\}$, the (linear) projection $P_{j} (e)$ of $E$ onto the eigenspace, $E_j(e)$, of $L(e, e)$ corresponding to the eigenvalue $\frac{j}{2},$ admits a concrete expression in terms of the triple product as follows: $$\begin{aligned} P_2 (e) &= L(e,e)(2 L(e,e) -id_{E})=Q(e)^2, \\ P_1 (e) &= 4 L(e,e)(id_{E}-L(e,e)) =2\left(L(e,e)-Q(e)^2\right), \hbox{ and } \\ P_0 (e) &= (id_{E}-L(e,e)) (id_{E}-2 L(e,e)),
\end{aligned}$$
where $Q(e)(x)=\J exe$ for $x\in E$.
The projection $P_{j} (e)$ is known as the \emph{Peirce}-$j$ \emph{projection} associated with $e$. Peirce projections are all contractive (cf. \cite[Corollary 1.2]{Friedman-Russo}), and the JB$^*$-triple $E$ decomposes as the direct sum
$$E= E_{2} (e) \oplus E_{1} (e)\oplus E_0 (e),$$ which is termed the \emph{Peirce decomposition} of $E$ relative to $e$ (see \cite{Friedman-Russo}, \cite[Definition 1.2.37]{chubook} or \cite[Subsection 4.2.2]{Cabrera-Rodriguez-vol1} and \cite[Section 5.7]{Cabrera-Rodriguez-vol2} for more details). In the particular case in which $e$ is a tripotent (i.e. a partial isometry) in a C$^*$-algebra $A$ with initial projection $p_i= e^* e$ and final projection $p_f= e e^*$, the subspaces in the Peirce decomposition are precisely $$A_2(e) =p_f A p_i, \, A_1(e) = p_f A (1-p_i)\oplus (1-p_f) A p_i,\, A_0(e) = (1-p_f) A (1-p_i). $$
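As a simple illustration, take $A=M_2(\mathbb{C})$ and the partial isometry $e=e_{12}$ (the matrix unit with $1$ in the position $(1,2)$), so that $p_f=e e^*=e_{11}$ and $p_i=e^* e=e_{22}$. Then
$$A_2(e)=\mathbb{C}\, e_{12},\qquad A_1(e)=\mathbb{C}\, e_{11}\oplus\mathbb{C}\, e_{22},\qquad A_0(e)=\mathbb{C}\, e_{21},$$
so the Peirce decomposition simply sorts the matrix units according to their rows and columns.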
A tripotent $e$ in a JB$^*$-triple $E$ is called \emph{complete} if $E_0 (e) =\{0\}$. We shall say that $e$ is \emph{unitary} if $E= E_2(e)$, or equivalently, if $\{e,e,x\}={x}$ for all $x\in E$. Obviously, every unitary is a complete tripotent, but the converse implication is not always true; consider for example a non-surjective isometry $e$ in $B(H)$. A non-zero tripotent $e$ satisfying $E_2(e) = \mathbb{C} e$ is called \emph{minimal}.
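To make the last example concrete, let $S$ denote the unilateral shift on $H=\ell^2(\mathbb{N})$. Since $S^*S=1$, we have $1-S^*S=0$ and hence
$$B(H)_0(S)=(1-SS^*)\,B(H)\,(1-S^*S)=\{0\},$$
so $S$ is a complete tripotent; however, $B(H)_2(S)=SS^*\,B(H)\,S^*S=SS^*B(H)\neq B(H)$ because $SS^*\neq 1$, so $S$ is not unitary.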
Note that in a unital JB$^*$-algebra there is another definition of a unitary element (cf.\ \cite[Definition 4.2.25]{Cabrera-Rodriguez-vol1}). However, it is equivalent
to the above-defined notion, as witnessed by the following fact (where condition $(3)$ is the mentioned alternative definition). We will work solely with the notion of unitary tripotent defined above (i.e., with condition $(1)$ from the fact below), but we include these equivalences for the sake of completeness.
\begin{fact}
Let $B$ be a unital JB$^*$-algebra and let $u\in B$. The following assertions are equivalent.
\begin{enumerate}[$(1)$]
\item $u$ is a unitary tripotent, i.e., $u$ is a tripotent with $B_2(u)=B$.
\item $u$ is a tripotent and $u\circ u^*=1$.
\item $u\circ u^*=1$ and $u^2\circ u^*=u$, i.e., $u^*$ is the Jordan inverse of $u$.
\end{enumerate}
\end{fact}
\begin{proof}
The equivalence $(1)\Leftrightarrow(3)$ is proved in \cite[Proposition 4.3]{BraKaUp78} (see also \cite[Theorem 4.2.28]{Cabrera-Rodriguez-vol1}).
To prove the equivalence $(1)\Leftrightarrow(2)$ observe that assertion $(2)$ means that $1=\J uu1$, i.e., $1\in B_2(u)$. It remains to use \cite[Proposition 6.6]{hamhalter2019mwnc}.
\end{proof}
Complete tripotents in a JB$^*$-triple $E$ can be geometrically characterized since a norm-one element $e$ in $E$ is a complete tripotent if and only if it is an extreme point of its closed unit ball (cf. \cite[Lemma 4.1]{BraKaUp78}, \cite[Proposition 3.5]{kaup1977jordan} or \cite[Theorem 4.2.34]{Cabrera-Rodriguez-vol1}).
Consequently, every JBW$^*$-triple contains an abundant collection of complete tripotents.
Given a unitary element $u$ in a JB$^*$-triple $E$, the latter becomes a unital JB$^*$-algebra with Jordan product and involution defined by
\begin{equation}\label{eq circ-star}
x\circ_{u} y=\J xuy\mbox{ and }x^{*_u}=\J uxu\qquad\mbox{for }x,y\in E,
\end{equation} see \cite[Theorem 4.1.55]{Cabrera-Rodriguez-vol1}. We even know that $u$ is the unit of this JB$^*$-algebra (i.e., $u\circ_u x=x$ for $x\in E$). Each tripotent $e$ in a JB$^*$-triple $E$ is a unitary in the JB$^*$-subtriple $E_2(e)$, and thus $(E_2(e),\circ_e,*_{e})$ is a unital JB$^*$-algebra. Therefore, since the triple product is uniquely determined by the structure of a JB$^*$-algebra, unital JB$^*$-algebras are in one-to-one correspondence with those JB$^*$-triples admitting a unitary element.
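For instance, if $E=A$ is a C$^*$-algebra equipped with the first triple product in \eqref{eq triple product JCstar and JBstar} and $u\in A$ is a unitary, the operations in \eqref{eq circ-star} read
$$x\circ_u y=\frac12\left(xu^*y+yu^*x\right)\quad\mbox{ and }\quad x^{*_u}=ux^*u\qquad(x,y\in A);$$
for $u=1$ these are just the natural Jordan product $x\circ y=\frac12(xy+yx)$ and the original involution.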
A linear subspace $I$ of a JB$^*$-triple $E$ is called a \emph{triple ideal} or simply an \emph{ideal} of $E$ if $\J IEE\subset I$ and $\J EIE\subset I$ (see \cite{horn1987ideal}). Let $I, J$ be two ideals of $E$. We shall say that $I$ and $J$ are orthogonal if $I\cap J =\{0\}$ (and consequently $\{I,J,E\} = \{J,I,E\}=\{0\}$). It is known that every weak$^*$-closed ideal $I$ of a JBW$^*$-triple $M$ is orthogonally complemented, that is, there exists another weak$^*$-closed ideal $J$ of $M$ which is orthogonal to $I$ and $M = I \oplus^{\infty} J$ (see \cite[Theorem 4.2$(4)$ and Lemmata 4.3 and 4.4]{horn1987ideal}). For each weak$^*$-closed ideal $I$ of $M$, we shall denote by $P_{I}$ the natural projection of $M$ onto $I$.\label{eq ideals in JBW} Let us observe that, in this case, $P_{I}$ is always a weak$^*$-continuous triple homomorphism.
\subsection{Positive functionals and prehilbertian seminorms}\label{subsec:seminorms}
As in the case of C$^*$-algebras, a functional $\phi$ in the dual space, $B^*$, of a JB$^*$-algebra $B$ is called positive if it maps positive elements to non-negative real numbers. We shall frequently apply that a functional $\phi$ in the dual space of a unital JB$^*$-algebra $B$ is positive if and only if $\|\phi\| = \phi (1)$ (cf.
\cite[Lemma 1.2.2]{hanche1984jordan}). The same conclusion holds for functionals in the predual of a JBW$^*$-algebra.
A positive normal functional $\varphi$ in the predual of a JBW$^*$-algebra $B$ is called \emph{faithful} if $\varphi (a) = 0$ for $a\geq 0$ in $B$ implies $a=0$.
If $\phi$ is a positive functional in the dual of a C$^*$-algebra $A,$ and $1$ denotes the unit element in $A^{**}$, the mapping $$(a,b)\mapsto \phi \left(\frac{ a b^* + b^* a}{2} \right)= \phi \{a,b,1\} \ \ (a,b\in A)$$ is a positive semi-definite sesquilinear form on $A\times A$, whose associated prehilbertian seminorm is denoted by $\|x \|_{\phi} = (\phi \{x,x,1\})^{1/2}$. If we consider a positive functional $\phi$ in the dual of a JB$^*$-algebra $B$, the associated prehilbertian seminorm is defined by $\|x \|_{\phi}^2 = \phi \{x,x,1\} =\phi (x\circ x^*),$ where $1$ stands for the unit in $B^{**}$.
The lack of a local order or positive cone in a general JB$^*$-triple, and hence the lack of positive functionals, makes the definition of appropriate prehilbertian seminorms a bit more involved. Namely, let $\varphi$ be a functional in the predual of a JBW$^*$-triple $M$ and let $z$ be a norm-one element in $M$ satisfying $\varphi (z) = \|\varphi\|$. Proposition 1.2 in \cite{barton1987grothendieck} proves that the mapping $M\times M\to \mathbb{C}$, $(x,y)\mapsto \varphi\{x,y,z\}$ is a positive semi-definite sesquilinear form on $M$ which does not depend on the choice of the element $z$ (that is, $\varphi\{x,y,z\} = \varphi\{x,y,\tilde{z}\}$ for every $x,y\in M$ and every $\tilde{z}\in M$ with $\|\tilde{z}\|=1$ and $\varphi(\tilde{z})=\norm{\varphi}$, see \cite[Proposition 5.10.60]{Cabrera-Rodriguez-vol2}). The associated prehilbertian seminorm is denoted by $\norm{x}_{\varphi} =(\varphi\{x,x,z\})^{1/2}$ ($x\in M$). Since the triple product of every JB$^*$-triple is contractive it follows that \begin{equation}\label{eq seminorm inequality} \|x\|_\varphi\le\sqrt{\|\varphi\|}\|x\| \hbox{ for all } x\in M.\end{equation} If $\varphi$ is a non-zero functional in the dual $E^*$ of a JB$^*$-triple $E$, and we regard $E^*$ as the predual of the JBW$^*$-triple $E^{**}$, the prehilbertian seminorm $\norm{\cdot}_{\varphi}$ on $E^{**}$ acts on $E$ by mere restriction.
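This construction is compatible with the previous one: if $W$ is a von Neumann algebra regarded as a JBW$^*$-triple and $\varphi\in W_*$ is a non-zero positive normal functional, then $z=1$ is a norm-one element with $\varphi(1)=\|\varphi\|$, and
$$\norm{x}_\varphi^2=\varphi\{x,x,1\}=\varphi\left(\frac{xx^*+x^*x}{2}\right)\qquad(x\in W),$$
which is precisely the prehilbertian seminorm associated with the positive functional $\varphi$ above.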
\subsection{Comparison theory of projections and tripotents} Two projections $p,q$ in a C$^*$-algebra $A$ (respectively, in a JB$^*$-algebra $B$) are said to be orthogonal ($p\perp q$ in short) if $ p q = 0$ (respectively, $p\circ q =0$). The relation ``being orthogonal'' can be used to define a natural partial ordering on the set of projections in $A$ (respectively, in $B$) defined by $ p\leq q$ if $q-p$ is a projection and $q-p \perp p$. We write $p < q$ if $p\leq q$ and $p\neq q$.
Two tripotents $e$ and $u$ in a JB$^*$-triple $E$ are called \emph{orthogonal} ($e\perp u$ in short) if $\{e,e,u\} = 0$ (equivalently, $u\in E_0 (e)$). It is known that $e\perp u$ if and only if any of the following equivalent reformulations holds:
\begin{enumerate}[$(1)$]
\item $e\in E_0(u)$.
\item $E_2(u)\subset E_0(e)$.
\item $L(u,e)=0$.
\item $L(e,u)=0$.
\item Both $u+e$ and $u-e$ are tripotents.
\item $\{u,u,e\} =0$.
\end{enumerate}
For proofs see \cite[Lemma 3.9]{loos1977bounded}, \cite[Proposition 6.7]{hamhalter2019mwnc} or \cite[Lemma 2.1]{Finite}. The induced partial order defined by this orthogonality on the set of tripotents is given by $e\leq u$ if $u-e$ is a tripotent with $u-e \perp e$.
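For instance, in $M_2(\mathbb{C})$ the tripotents $e_{11}$ and $e_{22}$ are orthogonal, since $\{e_{11},e_{11},e_{22}\}=\frac12\left(e_{11}e_{11}^*e_{22}+e_{22}e_{11}^*e_{11}\right)=0$; accordingly, both $e_{11}+e_{22}=1$ and $e_{11}-e_{22}$ are tripotents (they are self-adjoint unitaries), and $e_{11}\le 1$ because $1-e_{11}=e_{22}$ is a tripotent orthogonal to $e_{11}$.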
Let $\varphi$ be a non-zero functional in the predual of a JBW$^*$-triple $M$. By Proposition 2 in \cite{Friedman-Russo} (or \cite[Proposition 5.10.57]{Cabrera-Rodriguez-vol2}) there exists a unique tripotent $s(\varphi)\in M$, called the \emph{support tripotent} of $\varphi$, such that $\varphi=\varphi\circ P_2(s(\varphi))$, and $\varphi|_{M_2(s(\varphi))}$ is a faithful positive functional on the JBW$^*$-algebra $M_2(s(\varphi))$. In particular, $\norm{x}_{\varphi}^2=\varphi\{x,x,s(\varphi)\}$ for all $x\in M$.
The support tripotent of a non-zero functional $\varphi$ in the predual of a JBW$^*$-triple $M$ is also the smallest tripotent in $M$ at which $\varphi$ attains its norm, that is, \begin{equation}\label{eq minimality of the support tripotent} \hbox{$\varphi (u) =\|\varphi\|$ for some tripotent $u\in M$} \Rightarrow s(\varphi)\leq u.
\end{equation}
Namely, the element $P_2(s(\varphi))(u)$ lies in the unit ball of $M_2(s(\varphi))$ because $P_2(s(\varphi))$ is contractive. Since $\varphi = \varphi\circ P_2(s(\varphi))$ and $\varphi|_{M_2(s(\varphi))}$
is a faithful functional on the JBW$^*$-algebra $M_2(s(\varphi))$, we deduce that $P_2(s(\varphi)) (u) = s(\varphi)$. It follows from \cite[Lemma 1.6 or Corollary 1.7]{Friedman-Russo} that $s(\varphi)\leq u.$ Actually the previous arguments prove \begin{equation}\label{eq minimality of the support tripotent for elements} \hbox{$\varphi (a) =\|\varphi\|$ for some element $a\in M$ with $\|a\|=1$} \Rightarrow a= s(\varphi)+P_0 (s(\varphi)) (a).
\end{equation}
Two projections $p$ and $q$ in a von Neumann algebra $W$ are called \emph{(Murray-von Neumann) equivalent} (written $p\sim q$) if there is a partial isometry $e\in W$ whose initial projection is $p$ and whose final projection is $q$. This Murray-von Neumann equivalence is employed to classify projections and von Neumann algebras in terms of their properties. For example, a projection $p$ in $W$ is said to be \emph{finite} if there is no projection $q< p$ that is equivalent to $p$. For instance, all finite-dimensional projections in $B(H)$ are finite, but the identity operator on $H$ is not finite when $H$ is an infinite-dimensional complex Hilbert space. The von Neumann algebra $W$ is called \emph{finite} if its unit element is a finite projection. The set of all finite projections in the sense of Murray-von Neumann in $W$ forms a (modular) sublattice of the set of all projections in $W$ (see e.g. \cite[Theorem V.1.37]{Tak}). We recall that a projection $p$ in $W$ is {\em infinite} if it is not finite, and {\em properly infinite} if $p\ne 0$ and $zp$ is infinite whenever $z$ is a central projection such that $zp\ne0$ (cf. \cite[Definition V.1.15]{Tak}).
In the setting of JBW$^*$-algebras the notion of finiteness was replaced by the concept of modularity, and the Murray-von Neumann equivalence by the relation ``being equivalent by symmetries'', that is, two projections $p,q$ in a JBW$^*$-algebra $N$ are called equivalent (by symmetries) (denoted by $p\stackrel{s}{\sim} q$) if there is a finite set $s_1, s_2 ,\ldots, s_n$ of self-adjoint symmetries (i.e. $s_j =1-2p_j$ for certain projections $p_j$) such that $Q(s_1)\cdots Q(s_n) (p) =q$, where $Q(s_j) (x) =\{s_j,x,s_j\} = 2 (s_j \circ x) \circ s_j - s_j^2 \circ x$ for all $x\in N$ (cf. \cite[\S 10]{Topping1965}, \cite[5.1.4]{hanche1984jordan}, \cite[\S 3]{AlfsenShultzGeometry2003} or \cite[\S 7.1]{Finite}). Unlike Murray-von Neumann equivalence, $p \stackrel{s}{\sim} q$ in $N$ implies $1-p \stackrel{s}{\sim} 1-q$. When $M$ is a von Neumann algebra regarded as a JBW$^*$-algebra, and $p,q$ are projections in $M$, $p\stackrel{s}{\sim}q$ if and only if $p$ and $q$ are unitarily equivalent, i.e. there exists a unitary $u\in M$ such that $u p u^* = q$ (see \cite[Proposition 6.56]{AlfsenShultzStateSpace2001}).
In particular, $p\stackrel{s}{\sim} q$ implies $p\sim q$.
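To illustrate this in $M_2(\mathbb{C})$, consider the self-adjoint symmetry $s=e_{12}+e_{21}=1-2p$, where $p=\frac12(e_{11}+e_{22}-e_{12}-e_{21})$ is a projection. A direct computation gives $Q(s)(e_{11})=s\,e_{11}\,s=e_{22}$, so $e_{11}\stackrel{s}{\sim}e_{22}$, and at the same time $1-e_{11}=e_{22}\stackrel{s}{\sim}e_{11}=1-e_{22}$, in accordance with the remark above.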
In a recent contribution we study the notion of finiteness in JBW$^*$-algebras and JBW$^*$-triples from a geometric point of view. In the setting of von Neumann algebras, the results by H. Choda, Y. Kijima, and Y. Nakagami assert that a von Neumann algebra $W$ is finite if and only if all the extreme points of its closed unit ball are unitary (see \cite[Theorem 2]{ChodaKijimaNak69} or \cite[Proof of Theorem 4]{mil}). Therefore, a projection $p$ in $W$ is finite if and only if every extreme point of the closed unit ball of $pWp$ is a unitary in the latter von Neumann algebra. This is the motivation for the notion of finiteness introduced in \cite{Finite}. According to the just quoted reference, a tripotent $e$ in a JBW$^*$-triple $M$ is called
\begin{enumerate}[$\bullet$]
\item {\em finite} if any tripotent $u\in M_2(e)$ which is complete in $M_2(e)$ is already unitary in $M_2(e)$;
\item {\em infinite} if it is not finite;
\item {\em properly infinite} if $e\ne 0$ and for each weak$^*$-closed ideal $I$ of $M$ the tripotent $P_I (e)$ is infinite whenever it is nonzero.
\end{enumerate} If every tripotent in $M$ is finite, we say that $M$ itself is {\em finite}. Finite-dimensional JBW$^*$-triples are always finite \cite[Proposition 3.4]{Finite}. The JBW$^*$-triple $M$ is said to be {\em infinite} if it is not finite. Finally, $M$ is {\em properly infinite} if each nonzero weak$^*$-closed ideal of $M$ is infinite.
Every JBW$^*$-triple decomposes as an orthogonal sum of weak$^*$-closed ideals $M_1,$ $M_2$, $M_3$ and $M_4,$ where $M_1$ is a finite JBW$^*$-algebra, $M_2$ is either a trivial space or a properly infinite JBW$^*$-algebra, $M_3$ is a finite JBW$^*$-triple with no nonzero direct summand isomorphic to a JBW$^*$-algebra, and $M_4$ is either a trivial space or $M_4=qV_4$, where $V_4$ is a von Neumann algebra, $q\in V_4$ is a properly infinite projection such that $qV_4$ has no direct summand isomorphic to a JBW$^*$-algebra; we further know that $M_4$ is properly infinite in case that it is not zero (see \cite[Theorem 7.1]{Finite} where a more detailed description is presented). This decomposition applies in the particular case in which $M$ is a JBW$^*$-algebra with the appropriate modifications and simplifications on the summands to avoid those which are not JBW$^*$-algebras.
In a von Neumann algebra $W$ the two notions of finiteness coincide for projections (see \cite[Lemma 3.2$(a)$]{Finite}). Every modular projection in a JBW$^*$-algebra is a finite tripotent in the sense above, but the converse is not always true (cf. \cite[Lemma 7.12 and Remark 7.13]{Finite}).
Finite JBW$^*$-triples enjoy remarkable properties. For example, for each finite tripotent $u$ in a JBW$^*$-algebra $M$ there is a unitary element $e\in M$ with $u \leq e$ (cf. \cite[Proposition 7.5]{Finite}). More details and properties can be found in \cite{Finite}.
A projection $p$ in a von Neumann algebra $W$ is called \emph{abelian} if the subalgebra $pW p$ is abelian (see \cite[Definition V.1.15]{Tak}). The von Neumann algebra $W$ is said to be of \emph{type I} or \emph{discrete} if every nonzero (central) projection contains a nonzero abelian subprojection \cite[Definition V.1.17]{Tak}. In the previous definition the word central can be relaxed (see, for example, \cite[Corollary 4.20]{stra-zsi}).
A tripotent $e$ in a JB$^*$-triple $E$ is said to be \emph{abelian} if the JB$^*$-algebra $E_2(e)$ is associative, or equivalently, if $(E_2(e),\circ_e,*_e)$ is a unital abelian C$^*$-algebra. Obviously, any minimal tripotent is abelian. We further know that every abelian tripotent is finite \cite[Lemma 3.2$(e)$]{Finite}.
According to \cite{horn1987classification,horn1988classification} and \cite{horn1987ideal}, a JBW$^*$-triple $M$ is said to be of \emph{type $I$} (respectively, \emph{continuous}) if it coincides with the weak$^*$ closure of the span of all its abelian tripotents (respectively, it contains no non-zero abelian tripotents). Every JBW$^*$-triple can be written as the orthogonal sum of two weak$^*$-closed ideals $M_1$ and $M_2$ such that $M_1$ is of type $I$ and $M_2$ is continuous (any of these summands might be trivial). G. Horn and E. Neher established in \cite{horn1987classification,horn1988classification} structure results describing type $I$ and continuous JBW$^*$-triples. Concretely, every JBW$^*$-triple of type $I$ may be represented in the form
\begin{equation}\label{eq:representation of type I JBW* triples}
\bigoplus_{j\in J}^{\ell_\infty}A_j\overline{\otimes}C_j,
\end{equation}
where the $A_j$'s are abelian von Neumann algebras and the $C_j$'s are Cartan factors (the concrete definitions will be presented below in Section \ref{sec:type I}, the reader can also consult \cite{loos1977bounded, kaup1981klassifikation, kaup1997real} for details). To reassure the reader we shall simply note that every Cartan factor $C$ is a JBW$^*$-triple. In the case in which $C$ is a JW$^*$-subtriple of some $B(H)$ and $A$ is an abelian von Neumann algebra, the symbol $A\overline{\otimes}C$ denotes the weak$^*$-closure of the algebraic tensor product $A{\otimes}C$ in the von Neumann tensor product $A\overline{\otimes} B(H)$ (see \cite[Section IV.1]{Tak} and \cite[\S 1]{horn1987classification}). In the remaining cases $C$ is finite-dimensional and $A\overline{\otimes} C$ will stand for the completed injective tensor product (see \cite[Chapter 3]{ryan}).
\section{Majorizing certain seminorms}\label{sec:majorizing}
The main result will be proved using its dual version. The starting point is the following dual version of Theorem~\ref{T:triples}$(2)$.
\begin{thm}[{\cite[Theorem 3]{peralta2001grothendieck}}]\label{T:triples-dual}
Let $M$ be a JBW$^*$-triple, $H$ a Hilbert space and $T:M\to H$ a weak$^*$-to-weak continuous linear operator. Given $\varepsilon>0$, there are norm-one functionals $\varphi_1,\varphi_2\in M_*$ such that
$$\norm{Tx}\le(\sqrt{2}+\varepsilon)\norm{T}\left(\norm{x}^2_{\varphi_1}+\varepsilon\norm{x}^2_{\varphi_2}\right)^{\frac12}\quad\mbox{ for }x\in M.$$
\end{thm}
We continue by recalling two results from \cite{HKPP-BF}. The first one is essentially the main result and easily implies Theorem~\ref{T:triples}$(3)$. The second one was used to prove one of the particular cases and we will use it several times as well.
\begin{prop}[{\cite[Theorem 2.4]{HKPP-BF}}]\label{P:2-1BF} Let $M$ be a JBW$^*$-triple. Then given any two functionals $\varphi_1,\varphi_2$ in $M_*$, there exists a norm-one functional $\psi\in M_*$ such that
$$\norm{x}_{\varphi_1}^2+\norm{x}_{\varphi_2}^2\le 2(\norm{\varphi_1}+\norm{\varphi_2})\cdot \norm{x}_\psi^2$$ for all $x\in M.$
\end{prop}
\begin{lemma}[{\cite[Proposition 3.2]{HKPP-BF}}]\label{L:rotation} Let $M$ be a JBW$^*$-triple and let $\varphi\in M_*$.
Assume that $p\in M$ is a tripotent such that $s(\varphi)\in M_2(p)$.
Then there exists a functional $\tilde{\varphi}\in M_*$ such that {$\norm{\tilde{\varphi}}=\norm{\varphi}$},
$s(\tilde{\varphi})\le p$ and $\norm{x}_\varphi\le\sqrt{2}\norm{x}_{\tilde{\varphi}}$ for all $x\in M$.
\end{lemma}
The key step to prove our main result is the following proposition which says that for JBW$^*$-algebras a stronger version of Proposition~\ref{P:2-1BF} is achievable.
\begin{prop}\label{P:majorize 1+2+epsilon}
Let $M$ be a JBW$^*$-algebra.
Then given any two functionals $\varphi_1,\varphi_2$ in $M_*$ and $\varepsilon>0$, there exists a norm-one functional $\psi\in M_*$ such that
$$\norm{x}_{\varphi_1}^2+\norm{x}_{\varphi_2}^2\le (\norm{\varphi_1}+2\norm{\varphi_2}+\varepsilon) \norm{x}_\psi^2 \mbox{ for }x\in M.$$
\end{prop}
Using this proposition we will easily deduce the main result in Section~\ref{sec:proofs} below. Proposition \ref{P:majorize 1+2+epsilon} will be proved using the following result.
\begin{prop}\label{P:key decomposition alternative} Let $M$ be a JBW$^*$-algebra, $\varphi\in M_*$ and $\varepsilon>0$. Then there are a functional $\tilde\varphi\in M_*$
and a unitary element $w\in M$ such that $$\norm{\tilde\varphi}\le\norm{\varphi}, \quad s(\tilde\varphi)\le w \quad\mbox{ and }
\norm{\cdot}^2_\varphi\le(1+\varepsilon)\norm{\cdot}^2_{\tilde\varphi}.$$
\end{prop}
This proposition will be proved at the beginning of Section~\ref{sec:proofs} using the results from Sections~\ref{sec:type I} and~\ref{sec:JW*}. Let us now show that it implies Proposition~\ref{P:majorize 1+2+epsilon}.
\begin{proof}[Proof of Proposition~\ref{P:majorize 1+2+epsilon} from Proposition~\ref{P:key decomposition alternative}.]
Assume that $M$ is a JBW$^*$-algebra, $\varphi_1,\varphi_2\in M_*$ and $\varepsilon>0$.
Let $\tilde\varphi_1\in M_*$ and $w\in M$ correspond to $\varphi_1$ and $\frac{\varepsilon}{\norm{\varphi_1}}$ by Proposition~\ref{P:key decomposition alternative}.
Since $w$ is unitary, we have $M_2(w)=M$, hence we may apply Lemma~\ref{L:rotation} to get $\psi_2\in M_*$ such that
$$s(\psi_{2})\le w, \ \norm{\psi_{2}}\le\norm{\varphi_{2}},\ \norm{\cdot}_{\varphi_{2}}\le\sqrt{2}\norm{\cdot}_{\psi_{2}}.$$
Then
$$\begin{aligned}
\norm{\cdot}_{\varphi_1}^2+\norm{\cdot}_{\varphi_2}^2&\le
\left(1+\frac{\varepsilon}{\norm{\varphi_1}}\right)\norm{\cdot}_{\tilde\varphi_{1}}^2+\norm{\cdot}_{\varphi_2}^2\le
\left(1+\frac{\varepsilon}{\norm{\varphi_1}}\right)\norm{\cdot}_{\tilde\varphi_{1}}^2+2\norm{\cdot}_{\psi_2}^2\\
&=\norm{\cdot}_{\left(1+\frac{\varepsilon}{\norm{\varphi_1}}\right)\tilde\varphi_{1}+2\psi_2}^2
=\left(\left(1+\frac{\varepsilon}{\norm{\varphi_1}}\right)\norm{\tilde\varphi_{1}}+2\norm{\psi_2}\right)\norm{\cdot}_\psi^2,
\end{aligned}
$$
where $$\psi=\frac{(1+\frac{\varepsilon}{\norm{\varphi_1}})\tilde\varphi_{1}+2\psi_2}{(1+\frac{\varepsilon}{\norm{\varphi_1}})\norm{\tilde\varphi_{1}}+2\norm{\psi_2}}.$$
(Note that the first equality follows from the fact that the support tripotents of both functionals are below $w$.)
Since the functionals $\tilde\varphi_{1}$ and $\psi_2$ attain their norms at $w$, we deduce that $\norm{\psi}=1$. It remains to observe
that
$$\left(1+\frac{\varepsilon}{\norm{\varphi_1}}\right)\norm{\tilde\varphi_{1}}+2\norm{\psi_2}\le \norm{\varphi_1}+\varepsilon+2\norm{\varphi_2}.$$
\end{proof}
\section{Finite or type I JBW$^*$-algebras}\label{sec:type I}
The aim of this section is to prove a stronger version of Proposition~\ref{P:key decomposition alternative} for a large subclass of JBW$^*$-algebras (see Proposition \ref{P:type I approx}). We follow the notation from \cite{Finite} recalled in Section \ref{sec: notation}.
Since in a finite JBW$^*$-algebra any tripotent is majorized by a unitary one (cf. \cite[Lemma 3.2(d)]{Finite}), we get the following observation.
\begin{obs}\label{obs:finite JBW* algebras}
Let $M$ be a finite JBW$^*$-algebra. Then Proposition~\ref{P:key decomposition alternative} holds for $M$ in a very strong version -- one can take $\tilde\varphi=\varphi$ and $\varepsilon =0$.
\end{obs}
There is a larger class of JBW$^*$-algebras for which we get a stronger and canonical version of Proposition~\ref{P:key decomposition alternative}. The concrete result appears in the following proposition. The exact relationship with Proposition~\ref{P:key decomposition alternative} will be explained in Remark \ref{Rem} (1) below.
We first recall that, in the setting of JBW$^*$-triples, two normal functionals $\varphi$ and $\psi$ in the predual of a JBW$^*$-triple $M$ are called (\emph{algebraically}) \emph{orthogonal} (written $\varphi\perp \psi$) if their support tripotents are orthogonal in $M$, that is, $s(\varphi)\perp s(\psi)$ (cf. \cite{FriRu87,EdRu01}). It is shown in \cite[Lemma\ 2.3]{FriRu87} (see also \cite[Theorem 5.4]{EdRu01}) that $\varphi,\psi\in M_*$ are orthogonal if and only if they are ``geometrically'' \emph{$L$-orthogonal} in $M_*$, i.e., $\|\varphi \pm \psi\| = \|\varphi\| + \|\psi\|$.
In particular $\norm{\cdot}_{\varphi+\psi}^2=\norm{\cdot}_\varphi^2 + \norm{\cdot}_\psi^2$ if $\varphi$ and $\psi$ are orthogonal because in this case $\varphi$, $\psi$ and $\varphi+\psi$ attain their respective norms at $s(\varphi)+s(\psi)$.
\begin{prop}\label{P:type I approx}
Let $M$ be a JBW$^*$-algebra which is triple-isomorphic to a direct sum $M_1\oplus^{\ell_\infty}M_2$, where $M_1$ is a finite JBW$^*$-algebra and $M_2$ is a type I JBW$^*$-algebra. Let $\varphi\in M_*$ be arbitrary. Then for each $\varepsilon>0$ there are two functionals $\varphi_1,\varphi_2\in M_*$ such that
\begin{enumerate}[$(i)$]
\item $\varphi=\varphi_1+\varphi_2$;
\item $\varphi_1\perp\varphi_2$;
\item $\norm{\varphi_2}<\varepsilon$;
\item $s(\varphi_1)$ is a finite tripotent in $M$.
\end{enumerate}
\end{prop}
The rest of this section is devoted to the proof of Proposition~\ref{P:type I approx}. To this end we will use the following decomposition result which was essentially established in \cite{Finite}. Let us note that the concrete definition of a type 2 Cartan factor can be found in the next subsection.
\begin{prop}\label{P:type I decomposition}
Let $M$ be a JBW$^*$-algebra which is triple-isomorphic to a direct sum $M_1\oplus^{\ell_\infty}M_2$, where $M_1$ is a finite JBW$^*$-algebra and $M_2$ is a type I JBW$^*$-algebra.
Then $M$ is triple-isomorphic to a JBW$^*$-algebra of the form
$$N\oplus^{\ell_\infty}\left(\bigoplus_{j\in J}L^\infty(\mu_j)\overline{\otimes}C_j\right)\oplus^{\ell_\infty}\left(\bigoplus_{\lambda\in \Lambda}L^\infty(\nu_\lambda)\overline{\otimes}B(H_\lambda)\right),$$
where
\begin{itemize}
\item $N$ is a finite JBW$^*$-algebra;
\item $J$ and $\Lambda$ are (possibly empty) sets;
\item $\mu_j$'s and $\nu_\lambda$'s are probability measures;
\item $C_j$ is an infinite-dimensional type 2 Cartan factor for each $j\in J$;
\item $H_\lambda$ is an infinite-dimensional Hilbert space
for each $\lambda\in\Lambda$.
\end{itemize}
\end{prop}
\begin{proof}
By \cite[Theorem 7.1]{Finite} $M$ is triple-isomorphic to $N\oplus^{\ell_\infty} N_1$, where $N$ is a finite JBW$^*$-algebra and $N_1$ is (either trivial or) a properly infinite JBW$^*$-algebra. By the same theorem $N_1$ is triple-isomorphic to
$$\left(\bigoplus_{j\in J}L^\infty(\mu_j)\overline{\otimes}C_j\right)\oplus^{\ell_\infty}N_2,$$
where the first summand has the above-mentioned form and $N_2$ is (either trivial or) a properly infinite von Neumann algebra. Since by the assumptions $N_2$ is clearly of type I, we may conclude using \cite[Theorem V.1.27]{Tak}.
\end{proof}
We observe that the validity of Proposition~\ref{P:type I approx} is preserved by $\ell_\infty$-sums, so it is enough to prove it for the individual summands from Proposition~\ref{P:type I decomposition}. For the finite JBW$^*$-algebra $N$ we may use Observation~\ref{obs:finite JBW* algebras}.
We will prove the desired conclusion for the summands $L^\infty(\mu_j)\overline{\otimes}C_j$. For the remaining summands an easier version of the same proof works as we will explain below.
\subsection{The case of type 2 Cartan factors}\label{subsec:C2}
Let us start by recalling the definition of type 2 Cartan factors.
Let $H$ be a Hilbert space with a fixed orthonormal basis $(e_\gamma)_{\gamma\in \Gamma}$. Then $H$ is canonically represented as $\ell^2(\Gamma)$. For $\xi\in H$ let $\overline{\xi}$ be the coordinatewise complex conjugate of $\xi$. Further, for $x\in B(H)$ we denote by $x^t$ the operator defined by
$$x^t\xi=\overline{x^*\overline{\xi}},\qquad\xi\in H.$$
Then $x^t$ is the transpose of $x$ with respect to the fixed orthonormal basis, i.e.,
$$\ip{x^t e_\gamma}{e_\delta}=\ip{x e_\delta}{e_\gamma}\mbox{ for }\gamma,\delta\in\Gamma$$
(see, e.g., \cite[Section 5.3]{Finite} for the easy computation). Then
$$B(H)_s=\{x\in B(H);\, x^t=x\}\mbox{ and }B(H)_a=\{x\in B(H);\, x^t=-x\}$$
are the so-called \emph{Cartan factors} of \emph{type 3} and \emph{2}, respectively. They are formed by operators with symmetric (antisymmetric, respectively) `representing matrices' with respect to the fixed orthonormal basis. We will deal with the second case, i.e., with `antisymmetric operators'.
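In the smallest non-trivial example $H=\mathbb{C}^2$ (with its standard basis), $x^t$ is the ordinary transpose of the representing matrix, and the decomposition $B(H)=B(H)_s\oplus B(H)_a$ becomes
$$B(\mathbb{C}^2)_s=\left\{\begin{pmatrix} a& b\\ b& c\end{pmatrix};\, a,b,c\in\mathbb{C}\right\},\qquad B(\mathbb{C}^2)_a=\mathbb{C}\begin{pmatrix} 0& 1\\ -1& 0\end{pmatrix}.$$
Of course, the Cartan factors of type 2 relevant in this section are the infinite-dimensional ones; the finite-dimensional picture only illustrates the transpose and the symmetric/antisymmetric splitting.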
So, assume that $H$ has infinite dimension (or, equivalently, $\Gamma$ is an infinite set). Let $M=B(H)_a$.
Define $\pi:B(H)\to M$ by $\pi(x)=\frac12(x-x^t)$. Then $\pi$ is a norm-one projection which is moreover weak$^*$-to-weak$^*$ continuous. Hence $\pi_*:M_*\to B(H)_*$ defined by $\pi_*(\varphi)=\varphi\circ\pi$ is an isometric injection. Moreover
$$\begin{aligned}\pi_*(M_*)&=\{\varphi\in B(H)_*;\, \varphi(x^t)=-\varphi(x)\mbox{ for }x\in B(H)\}\\&=\{\varphi\in B(H)_*;\, \varphi|_{B(H)_s}=0\}.\end{aligned}$$
Recall that $B(H)_*$ is isometric to the space of nuclear operators $N(H)$ via the trace duality (cf. \cite[Theorem II.1.8]{Tak}). Moreover, any $y\in N(H)$ is represented in the form
$$y=\sum_{k\geq 1} \lambda_k\ip{\cdot}{\eta_k}\xi_k$$
where $(\xi_k)$ and $(\eta_k)$ are orthonormal sequences in $H$ and the $\lambda_k$ are positive numbers with $\displaystyle\sum_{k\geq 1}\lambda_k=\norm{y}_N$.
Then clearly
$$y^*=\sum_{k\geq 1} \lambda_k\ip{\cdot}{\xi_k}\eta_k,$$
hence for any $\xi\in H$ we have
$$y^t\xi=\overline{y^*\overline{\xi}}
=\overline{\sum_{k\geq 1} {\lambda_k} \ip{\overline{\xi}}{\xi_k}\eta_k}
=\sum_{k\geq 1}\lambda_k\ip{\xi}{\overline{\xi_k}}\overline{\eta_k},$$
thus
$$y^t=\sum_{k\geq 1} \lambda_k\ip{\cdot}{\overline{\xi_k}}\overline{\eta_k}.$$
In particular \begin{equation}\label{eq transpose preserves traces} \tr{y^t}=\sum_{k\geq 1} \lambda_k \ip{\overline{\eta_k}}{\overline{\xi_k}}=\sum_{k\geq 1} \lambda_k\ip{\xi_k}{\eta_k}=\tr{y}.
\end{equation}
Hence, given $\varphi\in B(H)_*$ represented by $y\in N(H)$, the functional $\varphi^t(x)=\varphi(x^t)$, $x\in B(H)$ is represented by $y^t$. Indeed,
$$\varphi^t(x)=\varphi(x^t)=\tr{x^ty}=\tr{y^tx}=\tr{xy^t}\mbox{ for }x\in B(H).$$
It follows that
$$\pi_*M_*=\{\varphi\in B(H)_*;\, \varphi\mbox{ is represented by an antisymmetric nuclear operator}\}.$$
\begin{proof}[Proof of Proposition~\ref{P:type I approx} for $M=B(H)_a$]
Fix $\varphi\in M_*$ of norm one and $\varepsilon>0$. Let $u=s(\varphi)\in M$.
Set $\tilde\varphi=\pi_*\varphi$. Fix $y\in N(H)$ representing $\tilde\varphi$. Then
$$y=\sum_{k\geq 1} \lambda_k\ip{\cdot}{\eta_k}\xi_k$$
where $(\xi_k)$ and $(\eta_k)$ are orthonormal sequences in $H$ and the $\lambda_k$ are strictly positive numbers with $\displaystyle \sum_{k\geq 1}\lambda_k=1$. Observe that
$$s(\tilde\varphi)=\sum_{k\geq 1}\ip{\cdot}{\xi_k}\eta_k.$$
Moreover, since $y$ is antisymmetric, we deduce that $s(\tilde\varphi)$ is also antisymmetric. Indeed, by the above
we have
$$y=-y^t=-\sum_{k\geq 1} \lambda_k\ip{\cdot}{\overline{\xi_k}}\overline{\eta_k}.$$
Hence
$$s(\tilde\varphi)=-\sum_{k\geq 1}\ip{\cdot}{\overline{\eta_k}}\overline{\xi_k}=-s(\tilde\varphi)^t.$$
For $\delta>0$ set
$$y_\delta=\sum_{\lambda_k\ge\delta} \lambda_k\ip{\cdot}{\eta_k}\xi_k.$$
Then $y_\delta$ is a finite rank operator and
$$y_\delta^t=\sum_{\lambda_k\ge\delta}\lambda_k\ip{\cdot}{\overline{\xi_k}}\overline{\eta_k}.$$
By uniqueness of the nuclear representation (the sequence $(\lambda_k)$ is unique and for any fixed $\lambda>0$ the linear spans of those $\eta_k$, resp. $\xi_k$, for which $\lambda_k=\lambda$
are uniquely determined) we deduce that $y_\delta$ is antisymmetric and hence its support tripotent
$$u_\delta=\sum_{\lambda_k\ge\delta}\ip{\cdot}{\xi_k}\eta_k$$
is antisymmetric as well.
Fix $\delta>0$ such that $\sum_{\lambda_k<\delta}\lambda_k<\varepsilon$.
Then $\norm{y-y_\delta}_N<\varepsilon$.
Let $\tilde\varphi_1$ be the functional represented by $y_\delta$ and $\tilde\varphi_2=\tilde\varphi-\tilde\varphi_1$ (i.e., the functional represented by $y-y_\delta$). Since $y_\delta$ is antisymmetric, both $\tilde\varphi_1$ and $\tilde\varphi_2$ belong to $\pi_*M_*$. Moreover, $s(\tilde\varphi_1)=u_\delta$ and $s(\tilde\varphi_2)=u-u_\delta$.
Since $u_\delta\perp u-u_\delta$, we deduce that $\tilde{\varphi}_1\perp\tilde{\varphi}_2$. Further, $u_\delta$ is a finite tripotent, being a finite rank partial isometry.
Since $\tilde{\varphi}_1$ and $\tilde{\varphi}_2$ belong to $\pi_*M_*$, there are functionals $\varphi_1,\varphi_2\in M_*$ such that $\tilde{\varphi}_j=\pi_*\varphi_j$.
It is now clear that they provide the sought decomposition of $\varphi$.
\end{proof}
We have settled the case of $B(H)_a$. Note that for $M=B(H)$ the same proof works -- we just do not use the mapping $\pi$ and are not obliged to check the antisymmetry. The proof was done using the Schmidt decomposition of nuclear operators. To prove the result for the tensor product we will use a measurable version of Schmidt decomposition established in the following subsection.
\subsection{Measurable version of Schmidt decomposition}
In this subsection we are going to prove the following result (note that $K(H)$ denotes the C$^*$-algebra of compact operators on $H$).
\begin{thm}\label{T:measurable Schmidt} Let $H$ be a
Hilbert space. Then there are sequences $(\lambda_n)_{n=0}^\infty$ and $(\uu_n)_{n=0}^\infty$ of mappings such that the following properties are fulfilled for $n\in\mathbb N$ and $x\in K(H)$:
\begin{enumerate}[$(a)$]
\item $\lambda_n:K(H)\to[0,\infty)$ is a lower-semicontinuous mapping;
\item $\lambda_{n+1}(x)<\lambda_n(x)$ whenever $x\in K(H)$ and $\lambda_n(x)>0$;
\item $\uu_n:K(H)\to K(H)$ is a Borel measurable mapping;
\item $\uu_n(x)$ is a finite rank partial isometry on $H$;
\item $\uu_n(x)=0$ whenever $\lambda_n(x)=0$;
\item The partial isometries $\uu_k(x)$, $k\in\mathbb N\cup\{0\}$, are pairwise orthogonal;
\item
$x=\sum_{n=0}^\infty\lambda_n(x)\uu_n(x),$ where the series converges in the operator norm.
\end{enumerate}
\end{thm}
Let us point out that the Borel measurability in this theorem and in the lemmata used in the proof is considered with respect to the norm topology. However, if $X$ is a separable Banach space, it is well known and easy to see that any norm open set is weakly $F_\sigma$, hence the norm Borel sets coincide with the weak Borel sets (cf. \cite[pages 74 and 75]{Kuo75}). This applies in particular to $H$, $K(H)$ and $K(H)\times H$ where $H$ is a separable Hilbert space.
The proof will be done in several steps contained in the following lemmata.
\begin{lemma}\label{L:measurability of singular numbers}
Let $H$ be a Hilbert space (not necessarily separable). For $x\in K(H)$ let $(\alpha_n(x))$ be the sequence of its singular numbers.
Moreover, let $(\lambda_n(x))$ be the strictly decreasing version of $(\alpha_n(x))$ (recall that the sequence $(\alpha_n(x))$ itself is non-increasing), completed by zeros if necessary. I.e.,
$$\lambda_n(x)=\begin{cases}
\alpha_k(x) & \mbox{ if }\card \{\alpha_0(x),\alpha_1(x),\dots,\alpha_k(x)\}=n+1\\
0 & \mbox{ if such $k$ does not exist.}
\end{cases}$$
Then the following assertions are valid for each $n\in\mathbb N\cup\{0\}$.
\begin{enumerate}[$(i)$]
\item $\alpha_n$ is a $1$-Lipschitz function on $K(H)$;
\item $\lambda_n$ is a lower semicontinuous function on $K(H)$, in particular it is Borel measurable and of the first Baire class.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ This is proved in \cite[Corollary VI.1.6]{Gohberg90} and
easily follows from the following well-known formula for singular numbers
$$\alpha_n(x)=\operatorname{dist} \Big(x, \Big\{y\in K(H);\, \dim yH\le n\Big\}\Big), \quad x\in K(H), n\in\mathbb N\cup\{0\}$$ (cf. \cite[Theorem VI.1.5]{Gohberg90}).
$(ii)$ Clearly $\lambda_n\ge0$. Moreover, for each $c>0$ we have
$\lambda_n(x)>c$ if and only if
$$\exists\, c_0>c_1>\dots>c_n>c_{n+1}=c,\, \exists\, k_0,k_1,\dots k_n\in \mathbb N\, \hbox{ such that }$$
$$ \alpha_{k_j} (x)\in (c_{j+1},c_j)\ \ \forall j\in\{0,\dots,n\}.$$
Since the functions $\alpha_k$ are continuous by $(i)$, $\{x;\, \lambda_n(x)>c\}$ is open. Now the lower semicontinuity easily follows.
Finally, any lower semicontinuous function on a metric space is clearly $F_\sigma$-measurable, hence Borel measurable and also of the first Baire class (cf. \cite[Corollary 3.8(a)]{LMZ}).
\end{proof}
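As a simple illustration of the difference between $(\alpha_n)$ and $(\lambda_n)$, and of the fact that the functions $\lambda_n$ are in general only lower semicontinuous, consider the operators $x_\varepsilon=\mathrm{diag}(1,1-\varepsilon,0,0,\dots)$ on $\ell^2$ for $\varepsilon\in[0,1)$. For $\varepsilon>0$ we have $\alpha_0(x_\varepsilon)=\lambda_0(x_\varepsilon)=1$ and $\alpha_1(x_\varepsilon)=\lambda_1(x_\varepsilon)=1-\varepsilon$, whereas $\alpha_1(x_0)=1$ but $\lambda_1(x_0)=0$; thus $\lambda_1(x_\varepsilon)\to1\ne\lambda_1(x_0)$ as $\varepsilon\to0^+$, even though $\liminf_{\varepsilon\to0^+}\lambda_1(x_\varepsilon)\ge\lambda_1(x_0)$, as lower semicontinuity requires.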
\begin{lemma}\label{L:measurable projections}
Let $H$ be a
Hilbert space. For any $x\in K(H)_+$ and $n\in\mathbb N\cup\{0\}$ let $p_n(x)$ be the projection onto the eigenspace with respect to the eigenvalue $\lambda_n(x)$ provided $\lambda_n(x)>0$
and $p_n(x)=0$ otherwise. Then the mapping $p_n$ is Borel measurable.
\end{lemma}
\begin{proof}
We start by proving that the mapping $p_0$ is Borel measurable. For $x\in K(H)_+\setminus\{0\}$ we set
$$\psi(x)=\frac{x-\lambda_0(x)\cdot I}{2(\lambda_0(x)-\lambda_1(x))}+I.$$
Then the mapping $\psi:K(H)_+\setminus\{0\}\to B(H)_{sa}$ is Borel measurable (by Lemma~\ref{L:measurability of singular numbers}$(ii)$, note that for $x\in K(H)_+\setminus\{0\}$ we have $\lambda_0(x)>\lambda_1(x)$).
Moreover, since
$$x=\sum_{n\ge0} \lambda_n(x)p_n(x),$$
by the Hilbert-Schmidt theorem, we deduce that
$$\psi(x)=p_0(x)+\sum_{n\ge1} \frac{\lambda_0(x)-2\lambda_1(x)+\lambda_n(x)}{2(\lambda_0(x)-\lambda_1(x))}p_n(x) +\frac{\lambda_0(x)-2\lambda_1(x)}{2(\lambda_0(x)-\lambda_1(x))}(I-\sum_{n\ge0}p_n(x)),$$
hence
the spectrum of $\psi(x)$ is
$$\sigma(\psi(x))=\left\{1,\tfrac{\lambda_0(x)-2\lambda_1(x)}{2(\lambda_0(x)-\lambda_1(x))}\right\}\cup\left\{\tfrac{\lambda_0(x)-2\lambda_1(x)+\lambda_n(x)}{2(\lambda_0(x)-\lambda_1(x))};\, n\ge 1\right\}\subset\{1\}\cup(-\infty,\tfrac12].$$
It follows that $p_0(x)=f(\psi(x))$ whenever $f$ is a continuous function on $\mathbb R$ with $f=0$ on $(-\infty,\frac12]$ and $f(1)=1$.
Since the mapping $y\mapsto f(y)$ is continuous on $B(H)_{sa}$ by \cite[Proposition I.4.10]{Tak}, we deduce that $p_0$ is a Borel measurable mapping.
Further, for $n\in\mathbb N$ we have
$$p_n(x)=\begin{cases}0,& \hbox{if } \lambda_n(x)=0,\\
p_0\left(x-\sum_{k=0}^{n-1}\lambda_k(x)p_k(x)\right),& \hbox{if } \lambda_n(x)>0,\end{cases}$$
hence by the obvious induction we see that $p_n$ is Borel measurable as well.
\end{proof}
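For instance, one admissible choice in the previous proof is $f(t)=\max\{0,2t-1\}$, which is continuous, vanishes on $(-\infty,\frac12]$ and satisfies $f(1)=1$. For the positive compact operator $x=\mathrm{diag}(1,\frac12,0,0,\dots)$ on $\ell^2$ one gets $\lambda_0(x)=1$, $\lambda_1(x)=\frac12$ and $\psi(x)=x$, so
$$f(\psi(x))=\mathrm{diag}\left(f(1),f(\tfrac12),f(0),\dots\right)=\mathrm{diag}(1,0,0,\dots)=p_0(x),$$
as predicted.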
\begin{proof}[Proof of Theorem~\ref{T:measurable Schmidt}]
Fix any $x\in K(H)$. Let $x=u(x)\abs{x}$ be the polar decomposition.
By the Hilbert-Schmidt theorem we have
$$\abs{x}=\sum_n \lambda_n(x) \, p_n(\abs{x})$$
(note that $\lambda_n(x)=\lambda_n(\abs{x})$). Hence
$$x=\sum_n \lambda_n(x) u(x) p_n(\abs{x})=\sum_n \lambda_n(x) u_n(x),$$
where $u_n(x)=u(x)p_n(\abs{x})$ are mutually orthogonal partial isometries (of finite rank). The mappings $\lambda_n$ are lower semicontinuous by Lemma~\ref{L:measurability of singular numbers}.
Further, the assignment $x\mapsto\abs{x}=\sqrt{x^*x}$ is continuous by the properties of the functional calculus. Indeed, the mapping $x\mapsto x^*x$ is obviously continuous and the mapping $y\mapsto\sqrt{y}$ is continuous on the positive cone of $K(H)$ by \cite[Proposition I.4.10]{Tak}.
Hence, we can deduce from Lemma~\ref{L:measurable projections} that the assignments $x\mapsto p_n(\abs{x})$ are Borel measurable.
Since $u_n(x)=0$ whenever $\lambda_n(x)=0$ and $u_n(x)=\frac1{\lambda_n(x)}xp_n(\abs{x})$ if $\lambda_n(x)>0$, it easily follows that the mapping $u_n$ is Borel measurable.
\end{proof}
\begin{prop}\label{P:measurable nuclear rep}
Let $H$ be a separable Hilbert space. Consider the mappings $\lambda_n$ and $u_n$ provided by Theorem~\ref{T:measurable Schmidt}
restricted to $N(H)$. Then $\lambda_n$ and $u_n$ are Borel measurable also with respect to the nuclear norm. Moreover, the series from assertion $(g)$ converges absolutely in the nuclear norm and, moreover,
$$\norm{x}=\sum_{n=0}^\infty\lambda_n(x)\norm{u_n(x)}$$
where the norm is the nuclear one.
\end{prop}
\begin{proof}
The Borel measurability of $\lambda_n$ and $u_n$ follows from the continuity of the canonical inclusion of $N(H)$ into $K(H)$ together with Theorem~\ref{T:measurable Schmidt}. The rest follows from the Schmidt representation of nuclear operators.
\end{proof}
\subsection{Proof of Proposition~\ref{P:type I approx}}
Let us adopt the notation from Subsection~\ref{subsec:C2}.
Moreover, let $\mu$ be a probability measure and $A=L^\infty(\mu)$.
Set $W=A\overline{\otimes}B(H)$. Then $W$ is a von Neumann algebra canonically represented in $B(L^2(\mu,H))$ (for a detailed description
see e.g. \cite[Section 5.3]{Finite}). Moreover, on $L^2(\mu,H)$ we have a canonical conjugation (the pointwise one -- recall that $H=\ell^2(\Gamma)$ is equipped with the coordinatewise conjugation). Therefore we have a natural transpose of any $x\in W$ defined by
$$x^t(\f)=\overline{x^*(\overline{\f})}, \qquad \f\in L^2(\mu,H).$$
Then we have a canonical identification
$$M=A\overline{\otimes}B(H)_a=W_a=\{x\in W;\, x^t=-x\}.$$
Similarly as in Subsection~\ref{subsec:C2} we denote by $\pi$ the canonical projection of $W$ onto $M$, i.e., $x\mapsto\frac12(x-x^t)$.
Recall that, by \cite[Theorem IV.7.17]{Tak}, $W_*=L^1(\mu,N(H))$ (the Lebesgue-Bochner space). Since $\pi$ is a weak$^*$-weak$^*$ continuous norm-one projection, we have an isometric embedding $\pi_*:M_*\to W_*$ defined by $\pi_*\omega=\omega\circ \pi$. Moreover, clearly
$$\pi_*(M_*)=\{\omega\in W_*;\, \omega^t=-\omega\}.$$
\begin{lemma}
Assume that $\g\in L^1(\mu,N(H))=W_*$. Then the following assertions hold.
\begin{enumerate}[$(i)$]
\item $\g^*(\omega)=(\g(\omega))^*$ $\mu$-a.e.,
\item $\g^t(\omega)=(\g(\omega))^t$ $\mu$-a.e.
\end{enumerate}
\end{lemma}
\begin{proof} Let us start by explaining the meaning. On the left-hand side we consider the involution and transpose applied to $\g$ as to a functional on $W$, while on the right-hand side these operations are applied to the nuclear operators $\g(\omega)$.
Observe that it is enough to prove the equality for $\g=\chi_E y$ (where $E$ is a measurable set and $y\in N(H))$ as functions of this form are linearly dense in $L^1(\mu,N(H))$, i.e., we want to prove
$$(\chi_E y)^*=\chi_E y^*\mbox{ and }(\chi_E y)^t=\chi_E y^t.$$
It is clear that the elements on the right-hand side belong to $L^1(\mu,N(H))=W_*$, so the equality may be proved as equality of functionals. Since these functionals are linear and weak$^*$-continuous on $W$, it is enough to prove the equality on the generators $f\otimes x$, $f\in L^\infty(\mu)$, $x\in B(H)$.
So, fix such $f$ and $x$ and recall that
$$(f\otimes x)^*=\overline{f}\otimes x^*\mbox{ and }(f\otimes x)^t=f\otimes x^t.$$
Indeed, the first equality follows from the very definition of the von Neumann tensor product, the second one is proved in the computation before Lemma 5.10 in \cite{Finite}.
Hence we have
$$\begin{aligned}
\ip{(\chi_E y)^*}{f\otimes x}&=\overline{\ip{\chi_Ey}{\overline{f}\otimes x^*}}=
\overline{\int_E\overline{f}\,\mbox{\rm d}\mu \cdot\tr{yx^*}}
=\int_E f\,\mbox{\rm d}\mu \cdot\tr{(yx^*)^*}\\&=\int_E f\,\mbox{\rm d}\mu \cdot\tr{xy^*}=\int_E f\,\mbox{\rm d}\mu \cdot\tr{y^*x}=\ip{\chi_E y^*}{f\otimes x}\end{aligned}$$
and, similarly, by \eqref{eq transpose preserves traces}, we get
$$\begin{aligned}
\ip{(\chi_E y)^t}{f\otimes x}&=\ip{\chi_Ey}{f\otimes x^t}=
\int_E f\,\mbox{\rm d}\mu \cdot\tr{yx^t}
=\int_E f\,\mbox{\rm d}\mu \cdot\tr{(yx^t)^t}\\&=\int_E f\,\mbox{\rm d}\mu \cdot\tr{xy^t}=\int_E f\,\mbox{\rm d}\mu \cdot\tr{y^tx}=\ip{\chi_E y^t}{f\otimes x}.\mathbb Nd{aligned}$$
\mathbb Nd{proof}
It easily follows that
$$\pi_*(M_*)=L^1(\mu,N(H)_a).$$
\begin{lemma}\label{L:sepred} Let $\g\in L^1(\mu,N(H))=W_*$. Then the following assertions hold.
\begin{enumerate}[$(i)$]
\item $\ip{f\otimes x}{\g}=\int f(\omega)\tr{x\g(\omega)}\,\mbox{\rm d}\mu(\omega)$ for $f\in L^\infty(\mu)$ and $x\in B(H)$.
\item There is a projection $p\in B(H)$ with separable range such that $p\g(\omega)p=\g(\omega)$ $\mu$-a.e. In this case we have $(1\otimes p)\g(1\otimes p)=\g$, i.e.,
$$\ip{T}{\g}=\ip{(1\otimes p)T(1\otimes p)}{\g}\mbox{ for }T\in W.$$
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ Fix $f\in L^\infty(\mu)$ and $x\in B(H)$. Consider both the left hand side and the right hand side as functionals depending on $\g$. Since both functionals are linear and continuous on $L^1(\mu,N(H))$, it is enough to prove the equality for $\g=\chi_E y$ where $E$ is a measurable set and $y\in N(H)$. In this case we have
$$\ip{f\otimes x}{\chi_E y}=\int_E f\,\mbox{\rm d}\mu \tr{xy},$$
so the equality holds.
$(ii)$ Note that $\g$ is essentially separably-valued, so there is a separable subspace $Y\subset N(H)$ such that $\g(\omega)\in Y$ $\mu$-a.e. Since for any $y\in N(H)$ there is a projection $q$ with separable range with $qyq=y$ (due to the Schmidt representation),
the existence of $p$ easily follows.
To prove the last equality it is enough to verify it for the generators $T=f\otimes x$ and this easily follows from $(i)$.
\end{proof}
\begin{prop}\label{P:L1 measurable repr}
Let $\g\in L^1(\mu,N(H))$. Then there are a separable subspace $H_0\subset H$, a sequence $(\zeta_n)$ of nonnegative measurable functions and a sequence $(\uu_n)$ of measurable mappings with values in $K(H_0)$ such that the following holds for each $\omega$:
\begin{enumerate}[$(a)$]
\item $\zeta_{n+1}(\omega)<\zeta_n(\omega)$ whenever $\zeta_n(\omega)>0$;
\item $\uu_n(\omega)$ is a finite rank partial isometry on $H_0$;
\item $\uu_n(\omega)=0$ whenever $\zeta_n(\omega)=0$;
\item the partial isometries $\uu_k(\omega)$, $k\in\mathbb N\cup\{0\}$, are pairwise orthogonal;
\item $\g=\sum_{n=0}^\infty\zeta_n\uu_n$ where the series converges absolutely almost everywhere and also in the norm of $L^1(\mu,N(H))$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $p\in B(H)$ be a projection with separable range provided by
Lemma \ref{L:sepred}$(ii)$ and set $H_0=pH$. Let $(\lambda_n)$ and $(u_n)$ be the mappings provided by Theorem~\ref{T:measurable Schmidt}.
Let $\uu_n(\omega)=u_n(\g(\omega))$ and $\zeta_n(\omega)=\lambda_n(\g(\omega))$. Then these functions are measurable due to measurability of $\g$ and Proposition~\ref{P:measurable nuclear rep}.
Assertions $(a)-(d)$ now follow from Theorem~\ref{T:measurable Schmidt}.
By Proposition~\ref{P:measurable nuclear rep} we get the first statement of $(e)$ and, moreover,
$$\sum_n\norm{\zeta_n(\omega)\uu_n(\omega)}=\norm{\g(\omega)}\ \mu\mbox{-a.e.},$$
hence the convergence holds also in the norm of $L^1(\mu,N(H))$, by the Lebesgue dominated convergence theorem for the Bochner integral.
\end{proof}
Set
$$W_0=\{\f:\Omega\to B(H);\, \f\mbox{ is bounded, measurable and has separable range}
\}.$$
By a measurable function we mean a strongly measurable one, i.e., an almost everywhere limit of simple functions. However, note that weak measurability is equivalent in this case by the Pettis measurability theorem,
as we consider only functions with separable range.
Then $W_0$ is clearly a C$^*$-algebra when equipped with the pointwise operations and the supremum norm.
We remark that the following lemma seems to be close to the results of \cite[Section IV.7]{Tak}. However, it is not clear how to apply these results in our situation, so we give the proofs.
\begin{lemma}
For $\f\in W_0$ and $\h\in L^2(\mu,H)$ define the function $T_{\f}\h$ by the formula
$$T_{\f}\h (\omega)= \f(\omega)(\h(\omega)),\quad \omega\in\Omega.$$
\begin{enumerate}[$(i)$]
\item For each $\f\in W_0$ the mapping $T_{\f}$ is a bounded linear operator on $L^2(\mu,H)$ which belongs to $W$ and satisfies $\norm{T_{\f}}\le\norm{\f}_\infty$.
\item If $\f\in W_0$ and $\g\in W_*=L^1(\mu,N(H))$, then
$$\ip{T_{\f}}{\g}=\int\tr{\f(\omega)\g(\omega)}\,\mbox{\rm d}\mu(\omega).$$
\item $T_{\f}$ is a partial isometry (a projection) in $W$ whenever $\f(\omega)$ is a partial isometry (a projection) $\mu$-a.e.
\item If $\g\in L^1(\mu,N(H))$ is represented as in Proposition~\ref{P:L1 measurable repr}$(e)$, then $s(\g)\le \sum_n T_{\uu_n^*}$ where the series converges in the SOT topology in $W$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ It is clear that the mapping $\h\mapsto T_{\f}\h$ is a linear mapping assigning to each $H$-valued function another $H$-valued function. Moreover,
$$\norm{T_{\f}\h(\omega)}=\norm{\f(\omega)(\h(\omega))}\le\norm{\f(\omega)}\norm{\h(\omega)}\le\norm{\f}_\infty\norm{\h(\omega)}.$$
In particular, if a sequence $(\h_n)$ converges almost everywhere to a function $\h$, then $(T_{\f}\h_n)$ converges almost everywhere to $T_{\f}\h$. It follows that $T_{\f}$ is well defined on $L^2(\mu,H)$
(in the sense that if $\h_1=\h_2$ a.e., then $T_{\f}\h_1=T_{\f}\h_2$ a.e.).
The next step is to observe that $T_{\f}\h$ is measurable whenever $\h$ is measurable. This is easy for simple functions. Further,
any measurable function is an a.e. limit of a sequence of simple functions, hence the measurability follows by the above.
Further, it follows from the above inequality that $\norm{T_{\f}\h}_2\le \norm{\f}_\infty\norm{\h}_2$, thus $\norm{T_{\f}}\le\norm{\f}_\infty$. Finally, by \cite[Lemma 5.12]{Finite} we get that $T_{\f}\in W$.
$(ii)$ Let us first show that $\f\g\in L^1(\mu,N(H))$ whenever $\f\in W_0$ and $\g\in L^1(\mu,N(H))$. By the obvious inequalities the only thing to be proved is measurability of this mapping.
This is easy if $\g$ is a simple function. The general case follows from the facts that any measurable function is an a.e. limit of simple functions and that measurability is preserved by a.e. limits of sequences.
It remains to prove the equality.
Since the functions from $W_0$ are separably valued, countably valued functions are dense in $W_0$. So, it is enough to prove the equality for countably valued functions. To this end let
$$\f=\sum_{k\in\mathbb N}\chi_{E_k}x_k,$$
where $(E_k)$ is a disjoint sequence of measurable sets and $(x_k)$ is a bounded sequence in $B(H)$. For any $\h\in L^2(\mu,H)$ we have
$$T_{\f}\h(\omega)=\sum_{k\in\mathbb N}\chi_{E_k}(\omega)x_k(\h(\omega)),\qquad \omega\in \Omega.$$
Since $T_{\f}\h\in L^2(\mu,H)$ by $(i)$ and the sets $E_k$ are pairwise disjoint, we deduce that
$$T_{\f}\h=\sum_{k\in\mathbb N}T_{\chi_{E_k}x_k}\h,$$
where the series converges in $L^2(\mu,H)$.
Since this holds for any $\h\in L^2(\mu,H)$, we deduce that
$$T_{\f}=\sum_{k\in\mathbb N}T_{\chi_{E_k}x_k}$$
unconditionally in the SOT topology, hence also in the weak$^*$ topology of $W$. Thus, for any $\g\in W_*=L^1(\mu,N(H))$ we get
$$\ip{T_{\f}}{\g}=\sum_{k\in\mathbb N}\ip{T_{\chi_{E_k}x_k}}{\g}=\sum_{k\in\mathbb N}\int_{E_k}\tr{x_k\g(\omega)}\,\mbox{\rm d}\mu(\omega)=\int\tr{\f(\omega)\g(\omega)}\,\mbox{\rm d}\mu(\omega),
$$
where in the second equality we used Lemma~\operatorname{Re}f{L:sepred}$(i)$.
$(iii)$ This is obvious as the mapping $\f\mapsto T_{\f}$ is clearly a $*$-homomorphism of $W_0$ into $W$.
$(iv)$ First observe that the mappings $\uu_n^*$ belong to $W_0$. Indeed, by Proposition~\ref{P:L1 measurable repr} the mapping $\uu_n$ is measurable and has separable range (as $K(H_0)$ is separable). Moreover, $\norm{\uu_n}_\infty\le1$ for each $n\in\mathbb N$. These properties are shared by $\uu_n^*$, hence $\uu_n^*\in W_0$.
By $(iii)$ we deduce that $T_{\uu_n^*}$ is a partial isometry for any $n\in\mathbb N$. Moreover, these partial isometries are pairwise orthogonal (cf. property $(d)$ from Proposition~\ref{P:L1 measurable repr}), hence $U=\sum_n T_{\uu_n^*}$ is a well-defined partial isometry in $W$. Moreover, by taking $\g$ as in Proposition~\ref{P:L1 measurable repr}$(e)$, we have
$$\begin{aligned}
\ip{U}{\g}&=\sum_{n=0}^\infty\ip{T_{\uu_n^*}}{\g}=\sum_{n=0}^\infty\int \tr{\uu_n^*(\omega)\g(\omega)}\,\mbox{\rm d}\mu(\omega)\\&=
\sum_{n=0}^\infty\int \zeta_n(\omega)\tr{\uu_n^*(\omega)\uu_n(\omega)}\,\mbox{\rm d}\mu(\omega)
\\&=\int\sum_{n=0}^\infty \zeta_n(\omega)\tr{\uu_n^*(\omega)\uu_n(\omega)}\,\mbox{\rm d}\mu(\omega)=\int \norm{\g(\omega)}\,\mbox{\rm d}\mu(\omega)=\norm{\g},\end{aligned}$$
thus $s(\g)\le U$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:type I approx} for $A\overline{\otimes}B(H)_a$]
Fix any $\g\in M_*=L^1(\mu,N(H)_a)$ and $\varepsilon>0$. Fix its representation from Proposition~\ref{P:L1 measurable repr}. Fix $N\in\mathbb N$ such that
$$\norm{\sum_{n>N}\zeta_n\uu_n}<\varepsilon.$$
This is possible by the convergence established in Proposition~\ref{P:L1 measurable repr}.
Note that
$$-\g=\g^t=\sum_{n=0}^\infty\zeta_n\uu_n^t,$$
hence
$\uu_n^t=-\uu_n$. (Note that the representation from Proposition~\ref{P:L1 measurable repr} is unique due to the uniqueness of the Hilbert-Schmidt representation).
Let
$$\g_1=\sum_{n=0}^N\zeta_n\uu_n.$$
Then $\g_1\in M_*$ as $\g_1^t=-\g_1$.
Further, let $$\vv=\sum_{n=0}^N \uu_n.$$
We have $\g-\g_1\perp \g_1$ as
$$s(\g_1)\le T_{\vv^*} \mbox{ and }s(\g-\g_1)\le \sum_{n>N}T_{\uu_n^*}$$
and the two tripotents on the right-hand sides are orthogonal. Moreover,
$T_{\vv^*}$ is a finite tripotent in $M$ by \cite[Proposition 5.31(i) and Lemma 5.16(ii)]{Finite}.
This completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P:type I approx} for $A\overline{\otimes}B(H)$] The proof is an easier version of the previous case. Fix $\g\in W_*=L^1(\mu,N(H))$ and $\varepsilon>0$. In the same way we find $N$ and define $\g_1$ and $\vv$. We omit the considerations of the transpose and antisymmetry. Finally, $T_{\vv^*}$ is a finite tripotent in $W$
by \cite[Proposition 4.7 and Lemma 5.16(ii)]{Finite}.
\end{proof}
\section{JW$^*$-algebras}\langlebel{sec:JW*}
The aim of this section is to prove
the following proposition which will be used to prove Proposition~\operatorname{Re}f{P:key decomposition alternative}.
\begin{prop}\langlebel{P:key decomposition} Let $M$ be a JBW$^*$-algebra, $\varphi\in M_*$ and $\varepsilon>0$. Then there are functionals $\varphi_1,\varphi_2\in M_*$
and a unitary element $w\in M$ satisfying the following conditions.
\begin{enumerate}[$(i)$]
\item\langlebel{it:key decomposition i} $\norm{\varphi_1}\le\norm{\varphi}$;
\item $\norm{\varphi_2}<\varepsilon$;
\item $s(\varphi_1)\le w$;
\item\langlebel{it:key decomposition iv} $\norm{\cdot}_\varphi^2\le\norm{\cdot}_{\varphi_1}^2+\norm{\cdot}_{\varphi_2}^2$.
\mathbb Nd{enumerate}
\mathbb Nd{prop}
The proof will be done at the end of the section with the help of several lemmata.
We focus mainly on JW$^*$-algebras, i.e., on weak$^*$-closed Jordan $^*$-subalgebras of von Neumann algebras. To this end we recall some notation (cf. \cite[Section III.2]{Tak}).
Let $A$ be a C$^*$-algebra and let $\phi\in A^*$. Then we define
functionals $a\phi$ and $\phi a$ by
\begin{equation}
a\phi(x)=\phi(xa)
\quad\mbox{ and }\quad
\phi a(x)=\phi(ax) \quad\mbox{for } x\in A.
\mathbb Nd{equation}
Note that $a\phi,\phi a\in A^*$ and $\norm{a\phi}\le\norm{a}\norm{\phi}$, $\norm{\phi a}\le\norm{a}\norm{\phi}$.
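For completeness, these norm estimates follow from a direct computation, which we record only for the reader's convenience: for $x\in A$,
$$\abs{(a\phi)(x)}=\abs{\phi(xa)}\le\norm{\phi}\norm{xa}\le\norm{a}\norm{\phi}\norm{x},$$
and similarly for $\phi a$.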
We recall the natural isometric involution $\phi\mapsto\phi^*$ defined by $\phi^*(x)=\overline{\phi(x^*)}$.
Then clearly
$(a\phi)^*=\phi^*a^*$, $(\phi a)^*=a^*\phi^*$.
If $W$ is a von Neumann algebra and if $\phi\in W_*$, $a\in W$ then $a\phi, \phi a\in W_*$.
Further, given $\phi\in W_*$ we set $\abs{\phi}=s(\phi)\phi$ where $s(\phi)\in W$ is the support tripotent of $\phi$. Then $\phi=s(\phi)^*\abs{\phi}$ is the polar decomposition of $\phi$ (cf. \cite[Section III.4]{Tak}).
More generally, if $a\in W$ is a norm-one element on which $\phi$ attains its norm then we have $\betr{\phi}=a\phi$, $\phi=a^*\betr{\phi}$, $\betr{\phi^*}=\phi a$ (cf. \eqref{eq minimality of the support tripotent for elements}).
Note that $\betr{\phi}=\betr{\phi}^*$ since $\betr{\phi}$ is positive.
All this is stable under small perturbations, as witnessed by the following lemma.
\begin{lemma}[{\cite[Lemma 3.3]{pfitzner-jot}}]\langlebel{l perturbation of functionals}
Let $A$ be a C$^*$-algebra, $\phi$ a functional on $A$ and $a,b$ in the unit ball of $A$.
Then
\begin{eqnarray}
\norm{\phi-a^*\betr{\phi}\;}
&\leq&
({2\norm{\phi}})^{1/2}\,\,\betr{\,\norm{\phi} - \phi(a)}^{1/2} \langlebel{glA3_1}\\
\norm{\betr{\phi}-a\phi}
&\leq&
({2\norm{\phi}})^{1/2}\,\,\betr{\,\norm{\phi} - \phi(a)}^{1/2} \langlebel{glA3_2}\\
\norm{\betr{\phi^*}-\phi a}
&\leq&
({2\norm{\phi}})^{1/2}\,\,\betr{\,\norm{\phi} - \phi(a)}^{1/2}. \langlebel{glA3_3}
\mathbb Nd{eqnarray}
\mathbb Nd{lemma}\noindent
(As to \eqref{glA3_3}, which is not stated explicitly in \cite[Lemma 3.3]{pfitzner-jot}, note that it
follows easily from \eqref{glA3_2} by $\norm{\betr{\phi^*}-\phi a}=\norm{\betr{\phi^*}-a^*\phi^*}\le(2\norm{\phi^*})^{1/2}\,\,\betr{\norm{\phi^*} - \phi^*(a^*)}^{1/2}=({2\norm{\phi}})^{1/2}\,\,\betr{\,\norm{\phi} - \phi(a)}^{1/2}$.)
There is another way to obtain positive functionals: We can write $\phi=\phi_1-\phi_2+i(\phi_3-\phi_4)$ with positive $\phi_k\in W_*$ ($k=1, 2, 3, 4$) such that
$\norm{\phi_k-\phi_{k+1}}=\norm{\phi_k}+\norm{\phi_{k+1}}\le\norm{\phi}$, $k=1, 3$ (cf. \cite[Theorem III.4.2]{Tak}).
Then we set
$$[\phi]=\mathfrak{A}c12\sum_{k=1}^4\phi_k=\mathfrak{A}c12(\abs{\phi_1-\phi_2}+\abs{\phi_3-\phi_4})$$
and obtain that $[\phi]\in W_*$ is positive, $\norm{[\phi]}\le\norm{\phi}$
and $\betr{\phi(a)}\le2[\phi](a)$ for all positive $a\in W$.
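For the reader's convenience let us verify the last inequality; this routine estimate is recorded only for completeness. If $a\in W$ is positive, then each $\phi_k(a)\ge0$, hence
$$\betr{\phi(a)}\le\betr{\phi_1(a)-\phi_2(a)}+\betr{\phi_3(a)-\phi_4(a)}\le\sum_{k=1}^4\phi_k(a)=2[\phi](a).$$
Similarly, $\norm{[\phi]}\le\frac12\sum_{k=1}^4\norm{\phi_k}\le\norm{\phi}$.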
Finally, let us remark that if $A$ is a C$^*$-algebra, then $A^{**}$ is a von Neumann algebra and $A^*=(A^{**})_*$, thus $\abs{\phi}$ and $[\phi]$ make sense also for continuous functionals on a C$^*$-algebra.
\begin{lemma}\langlebel{l norm-one functional close enough to states at a unitary Cstar} Let $W$ be a von Neumann algebra, let $w\in W$ be a unitary element and $\delta\in(0,1)$. Let $\phi\in W_*$ be a norm-one functional such that $\phi (w) >1-\delta$ (in particular, $\phi(w)\in\mathbb R$). Then $\psi := w^* |\phi|$ is a norm-one element of $W_*$ satisfying $\psi(w) =1$
and $\|\phi -\psi\| < \sqrt{2\delta}$.
\mathbb Nd{lemma}
\begin{proof}
On the one hand we have that $\|\psi \| \le \| |\phi|\| = \|\phi\|=1$. On the other hand, since $\psi (w) = (w^* |\phi|) (w) = |\phi| (w w^*) = |\phi| (1) = \| |\phi|\| = 1$ we deduce that $\norm{\psi}=1$.
Applying \eqref{glA3_1} of Lemma \operatorname{Re}f{l perturbation of functionals} we obtain $$ \|\phi - w^* |\phi| \| \leq \sqrt{2 } \betr{ 1 - \phi(w)}^{1/2} \leq \sqrt{2 \delta},$$ which finishes the proof.
\mathbb Nd{proof}
We continue by extending the previous lemma to JW$^*$-algebras.
\begin{lemma}\langlebel{l norm-one functional close enough to states at a unitary} Let $M$ be a JW$^*$-algebra, $w\in M$ a unitary element and $\delta\in(0,1)$. Let $\phi\in M_*$ be a norm-one functional such that $\phi (w) >1-\delta$ (in particular, $\phi(w)\in\mathbb R$). Then there exists a norm-one functional $\psi\in M_*$ satisfying $\psi(w) =1$
and $\|\phi -\psi\| < \sqrt{2\delta}$.
\mathbb Nd{lemma}
\begin{proof} Let us assume that $M$ is a JW$^*$-subalgebra of a von Neumann algebra $W$.
Let $1$ denote the unit of $M$. Then $1$ is a projection in $W$, thus, up to replacing $W$ by $1W1$, we may assume that $M$ contains the unit of $W$.
We observe that $w$, being a unitary element in $M$, is unitary in $W$. Let $\tilde{\phi}\in W_*$ be a norm-preserving extension of $\phi$ provided by \cite[Theorem]{Bun01}.
By hypothesis,
$1-\delta < {\phi} (w)= \tilde{\phi} (w) \leq \|{\phi}\| = \|\tilde{\phi}\|=1$. Now, applying Lemma \operatorname{Re}f{l norm-one functional close enough to states at a unitary Cstar} to $W$, $\tilde{\phi}\in W_*$ and the unitary $w$, we find a norm-one functional $\tilde{\psi}\in W_*$ satisfying $\tilde{\psi} (w) =1$ and $\|\tilde{\phi} -\tilde{\psi}\| < \sqrt{2\delta}$. Since $w\in M$ and $1= \tilde{\psi} (w)$, the functional $\psi = \tilde{\psi}|_{M}$ has norm-one, $\psi (w) =1$ and clearly $\|{\phi} -{\psi}\| < \sqrt{2\delta}$.
\mathbb Nd{proof}
\begin{lemma}\langlebel{l ad-hoc 1 Jordan} Let $M$ be a JW$^*$-algebra, let $\phi\in M_*$ and $\delta>0$.
Suppose $a_1, a_2$ are two norm-one elements in $M$ such that $$ \betr{\norm{\phi}-\phi(a_k)}<\delta\norm{\phi} \hbox{ for } k=1,2.$$
Then there is a positive functional $\omega\in M_*$ satisfying $\norm{\omega}\le2\sqrt{2\delta}\norm{\phi}$ and
$$\betr{\phi\J xx{a_1-a_2}}\le4\norm{x}_\omega^2 \hbox{ for all } x\in M.$$
\mathbb Nd{lemma}
\begin{proof} Similarly as in the proof of Lemma \operatorname{Re}f{l norm-one functional close enough to states at a unitary} we may assume that $M$ is a JW$^*$-subalgebra of a von Neumann algebra $W$ containing the unit of $W$.
Let $\tilde{\phi}\in W_*$ be a norm-preserving normal extension of $\phi$ (see \cite[Theorem]{Bun01}). Working in $W_*$ we set $\tilde\psi_l=a_1\tilde\phi-a_2\tilde\phi$ and $\tilde\psi_r=\tilde\phi a_1-\tilde\phi a_2$.
By \eqref{glA3_2} of Lemma \operatorname{Re}f{l perturbation of functionals}
we have $\norm{\betr{\tilde\phi}-a_k\tilde\phi}\le\sqrt{2\delta}\norm{\tilde\phi}$ ($k=1,2$) hence $\norm{\tilde\psi_l}\le2\sqrt{2\delta}\norm{\tilde\phi}$.
Likewise we get $\norm{\tilde\psi_r}\le2\sqrt{2\delta}\norm{\tilde\phi}$ with \eqref{glA3_3} of Lemma \operatorname{Re}f{l perturbation of functionals}.
Set $\tilde\omega=([\tilde\psi_l]+[\tilde\psi_r])/2$.
Then $\norm{\tilde\omega}\le2\sqrt{2\delta}\norm{\tilde\phi}$ and
$$\begin{aligned}
\betr{\tilde\phi\J xx{a_1-a_2}}&=\mathfrak{A}c12\betr{\tilde\psi_l(xx^*)+\tilde\psi_r(x^*x)}\le[\tilde\psi_l](xx^*)+[\tilde\psi_r](x^*x)\\
&\le([\tilde\psi_l]+[\tilde\psi_r])(xx^*+x^*x)=4\tilde\omega(\J xx1)=4\norm{x}_{\tilde\omega}^2.
\mathbb Nd{aligned}$$
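(In the first equality of the last display we used the standard fact that, in the von Neumann algebra $W$, $\J xxb=\frac12(xx^*b+bx^*x)$, so that $\tilde\phi\J xx{a_1-a_2}=\frac12\big(\tilde\psi_l(xx^*)+\tilde\psi_r(x^*x)\big)$ by the definitions of $\tilde\psi_l$ and $\tilde\psi_r$; we record this only for the reader's convenience.)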
It remains to set $\omega=\tilde\omega|_M$.
\mathbb Nd{proof}
\begin{lemma}\langlebel{l ad-hoc 2}
Let $M$ be a JW$^*$-algebra, $\phi\in M_*$ and let $a$ be a norm-one element of $M$.
Then there is a positive functional $\omega\in M_*$ such that
$$\norm{\omega}\le\norm{\phi}\quad\mbox{ and }\quad\forall x\in M:\betr{\phi\J xxa}\le4\norm{x}_\omega^2.$$
\mathbb Nd{lemma}
\begin{proof}
The proof resembles the preceding one of Lemma \operatorname{Re}f{l ad-hoc 1 Jordan}. Assume that $M$ is a JW$^*$-subalgebra of a von Neumann algebra $W$ and $1_W\in M$. Let $\tilde\phi\in W_*$ be a norm-preserving extension of $\phi$ (see \cite[Theorem]{Bun01}). Set $\tilde\psi_l=a\tilde\phi$ and $\tilde\psi_r=\tilde\phi a$.
Then $\norm{\tilde\psi_l}\le\norm{a}\norm{\tilde\phi}=\norm{\tilde\phi}$ and, similarly, $\norm{\tilde\psi_r}\le\norm{\tilde\phi}$.
Set $\tilde\omega=([\tilde\psi_l]+[\tilde\psi_r])/2$. Then $\norm{\tilde\omega}\le\norm{\tilde\phi}$ and
$$\begin{aligned}
\betr{\tilde\phi\J xxa}&=\mathfrak{A}c12\betr{\tilde\psi_l(xx^*)+\tilde\psi_r(x^*x)}\le[\tilde\psi_l](xx^*)+[\tilde\psi_r](x^*x)\\
&\le([\tilde\psi_l]+[\tilde\psi_r])(xx^*+x^*x)=4\tilde\omega(\J xx1)=4\norm{x}_{\tilde\omega}^2.
\mathbb Nd{aligned}$$
Finally, we may set $\omega=\tilde{\omega}|_M$.
\mathbb Nd{proof}
\begin{proof}[Proof of Proposition~\operatorname{Re}f{P:key decomposition}]
It follows from \cite[Theorem 7.1]{Finite} that any JBW$^*$-algebra $M$ can be represented by $M_1\oplus^{\ell_\infty} M_2$ where $M_1$ is a finite JBW$^*$-algebra and $M_2$ is a JW$^*$-algebra.
The validity of Proposition~\operatorname{Re}f{P:key decomposition} for finite JBW$^*$-algebras follows immediately from Observation~\operatorname{Re}f{obs:finite JBW* algebras}.
Since the validity of Proposition~\operatorname{Re}f{P:key decomposition} is clearly preserved by $\ell_\infty$-sums, it remains to prove it for JW$^*$-algebras.
So, assume that $M$ is a JW$^*$-algebra and $\varphi\in M_*$.
By homogeneity we may assume $\norm{\varphi}=1$. Fix $\varepsilon>0$. Choose $\delta>0$ such that $12\sqrt{2\delta}<\varepsilon$.
By the Wright-Youngson extension of the Russo-Dye theorem, the convex hull of all unitary elements in $M$ is norm dense in the closed unit ball of $M$ (see \cite[Theorem 2.3]{WrightYoungson77} or \cite[Fact 4.2.39]{Cabrera-Rodriguez-vol1}). We can therefore find a unitary element $w$ such that $\varphi (w) > 1-\delta$. By Lemma~\operatorname{Re}f{l norm-one functional close enough to states at a unitary} there exists a norm-one functional $\psi\in M_*$ satisfying $\psi(w)=1$
and $\|\varphi -\psi\| < \sqrt{2\delta}.$
Set $u=s(\varphi)$.
For $x\in M$ we then have
$$\begin{aligned}
\norm{x}_\varphi^2 &= \varphi \J xxu = \psi \J xxw
+(\varphi-\psi)\J xxw + \varphi \J xx{u-w}.
\mathbb Nd{aligned}$$
Applying Lemma~\operatorname{Re}f{l ad-hoc 2} to $\varphi-\psi$ and $w$ we find a positive functional $\omega_1\in M_*$ with $\norm{\omega_1}\le\norm{\varphi-\psi}<\sqrt{2\delta}$ such that
$$\abs{(\varphi-\psi)\J xxw}\le 4\norm{x}_{\omega_1}^2\mbox{ for }x\in M.$$
Applying Lemma~\operatorname{Re}f{l ad-hoc 1 Jordan} to the functional $\varphi$ and the pair $w,u\in M$ we get a positive functional $\omega_2\in M_*$ with $\norm{\omega_2}\le2\sqrt{2\delta}$ such that
$$\abs{\varphi \J xx{u-w}}\le 4\norm{x}_{\omega_2}^2\mbox{ for }x\in M.$$
Hence we have for each $x\in M$
$$\norm{x}_\varphi^2\le \norm{x}_\psi^2+4(\norm{x}_{\omega_1}^2+\norm{x}_{\omega_2}^2)=\norm{x}_\psi^2+\norm{x}_{4(\omega_1+\omega_2)}^2,$$
where we used that $\omega_1$ and $\omega_2$ are positive functionals. Since $s(\psi)\le w$ (just have in mind that $\psi(w)=1$ and \eqref{eq minimality of the support tripotent}), $w$ is unitary and
$$\norm{4(\omega_1+\omega_2)}\le4(\norm{\omega_1}+\norm{\omega_2})<4\cdot3\sqrt{2\delta}=12\sqrt{2\delta}<\varepsilon,$$
it is enough to set $\varphi_1=\psi$ and $\varphi_2=4(\omega_1+\omega_2)$.
\mathbb Nd{proof}
\begin{remark}\langlebel{Rem}
(1) Note that by \cite[Proposition 7.5]{Finite} any finite tripotent in a JBW$^*$-algebra is majorized by a unitary element, hence Proposition~\operatorname{Re}f{P:type I approx} is indeed a stronger version of Proposition~\operatorname{Re}f{P:key decomposition} in the special case in which the JBW$^*$-algebra $M$ is a direct sum of a finite JBW$^*$-algebra and a type $I$ JBW$^*$-algebra. (For (\operatorname{Re}f{it:key decomposition i}) and (\operatorname{Re}f{it:key decomposition iv}) of Proposition~\operatorname{Re}f{P:key decomposition} see the remarks before the statement of Proposition~\operatorname{Re}f{P:type I approx}.)
Further, as will be seen at the beginning of the next section, Proposition~\operatorname{Re}f{P:key decomposition} is the main ingredient for proving Proposition~\operatorname{Re}f{P:key decomposition alternative}.
(2) There is an alternative way of proving Proposition~\operatorname{Re}f{P:key decomposition}. It follows from \cite[Theorem 7.1]{Finite} that any JBW$^*$-algebra $M$ can be represented by $M_1\oplus^{\ell_\infty} M_2\oplus^{\ell_\infty} M_3$ where $M_1$ is a finite JBW$^*$-algebra, $M_2$ is a type I JBW$^*$-algebra and $M_3$ is a von Neumann algebra. So, we can conclude using Proposition~\operatorname{Re}f{P:type I approx} and giving the above argument only for von Neumann algebras (which is slightly easier).
\mathbb Nd{remark}
\section{Proofs of the main results}\langlebel{sec:proofs}
We start by proving Proposition~\operatorname{Re}f{P:key decomposition alternative}.
\begin{proof}[Proof of Proposition~\operatorname{Re}f{P:key decomposition alternative}.]
Let $M$ be a JBW$^*$-algebra, $\varphi\in M_*$ and $\varepsilon>0$. By homogeneity we may assume that $\norm{\varphi}=1$. Let $\varphi_1,\varphi_2$ and $w$ correspond to $\varphi$ and $\mathfrak{A}c\varepsilon2$ by Proposition~\operatorname{Re}f{P:key decomposition}.
Since $w$ is unitary, we have $M_2(w)=M$, hence we may apply Lemma~\operatorname{Re}f{L:rotation} to get $\psi_2\in M_*$ such that
$$s(\psi_{2})\le w, \ \norm{\psi_{2}}\le\norm{\varphi_{2}},\ \norm{\cdot}_{\varphi_{2
}}\le\sqrt{2}\norm{\cdot}_{\psi_{2}}.
$$
Then
$$\begin{aligned}
\norm{\cdot}_\varphi^2&\le\norm{\cdot}_{\varphi_1}^2+\norm{\cdot}_{\varphi_2}^2\le
\norm{\cdot}_{\varphi_{1}}^2+2\norm{\cdot}_{\psi_2}^2=\norm{\cdot}_{\varphi_{1}+2\psi_2}^2
=(\norm{\varphi_{1}}+2\norm{\psi_2})\norm{\cdot}_\psi^2,
\mathbb Nd{aligned}
$$
where $$\psi=\mathfrak{A}c{\varphi_{1}+2\psi_2}{\norm{\varphi_{1}}+2\norm{\psi_2}}.$$
(Note that the first equality follows from the fact that the support tripotents of both functionals are below $w$.)
Since the functionals $\varphi_{1}$ and $\psi_2$ attain their norms at $w$, we deduce that $\norm{\psi}=1$. It remains to observe
that
$$\norm{\varphi_{1}}+2\norm{\psi_2}\le \norm{\varphi}+2\norm{\varphi_2}\le1+\varepsilon.$$
This completes the proof.
\mathbb Nd{proof}
Having proved Proposition~\operatorname{Re}f{P:key decomposition alternative}, we know that Proposition~\operatorname{Re}f{P:majorize 1+2+epsilon} is valid as well. Using it and Theorem~\operatorname{Re}f{T:triples-dual} we get the following theorem.
\begin{thm}\langlebel{T:JBW*-algebras}
Let $M$ be a JBW$^*$-algebra, let $H$ be a Hilbert space and let $T:M\to H$ be a weak$^*$-to-weak continuous linear operator. Given $\varepsilon>0$, there is a norm-one functional $\varphi\in M_*$ such that
$$\norm{Tx}\le(\sqrt2+\varepsilon)\norm{T}\norm{x}_\varphi\mbox{ for }x\in M.$$
\mathbb Nd{thm}
Now we get the main result by the standard dualization.
\begin{proof}[Proof of Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras}]
Let $T:B\to H$ be a bounded linear operator from a JB$^*$-algebra into a Hilbert space. Let $\varepsilon>0$. Since Hilbert spaces are reflexive, the second adjoint operator $T^{**}$ maps $B^{**}$ into $H$ and it is weak$^*$-to-weak continuous. Further, $B^{**}$ is a JBW$^*$-algebra (cf. \cite[Theorem 4.4.3]{hanche1984jordan} and \cite{Wright1977} or \cite[Proposition 5.7.10]{Cabrera-Rodriguez-vol2} and \cite[Theorems 4.1.45 and 4.1.55]{Cabrera-Rodriguez-vol1}), so Theorem~\operatorname{Re}f{T:JBW*-algebras} provides the respective functional $\varphi\in (B^{**})_*=B^*$.
\mathbb Nd{proof}
We further note that for JB$^*$-algebras we have two different forms of the Little Grothendieck theorem -- a triple version (the just proved Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras}) and an algebraic version (an analogue of Theorem~\operatorname{Re}f{T:C*alg-sym}). The difference is that the first form provides just a norm-one functional while the second one provides a state, i.e., a positive norm-one functional. Let us now show that the algebraic version may be proved from the triple version.
\begin{thm}\langlebel{T:algebraic version dual}
Let $M$ be a JBW$^*$-algebra, let $H$ be a Hilbert space and let $T:M\to H$ be a weak$^*$-to-weak continuous linear operator. Given $\varepsilon>0$, there is a state $\varphi\in M_*$ such that
$$\norm{Tx}\le(2+\varepsilon)\norm{T}\varphi(x\circ x^*)^{1/2}\mbox{ for }x\in M.$$
\mathbb Nd{thm}
\begin{proof}
By Theorem~\operatorname{Re}f{T:JBW*-algebras} there is a norm-one functional $\psi\in M_*$ such that
$$\norm{Tx}\le (\sqrt2+\mathfrak{A}c{\varepsilon}{\sqrt2})\norm{T}\norm{x}_\psi\mbox{ for }x\in M.$$
Since $M$ is unital and $M_2(1)=M$, Lemma~\operatorname{Re}f{L:rotation} yields a norm-one functional $\varphi\in M_*$ with $s(\varphi)\le1$ and $\norm{\cdot}_\psi\le\sqrt2\norm{\cdot}_\varphi$. Then
$\varphi$ is a state (note that $\varphi(1)=1$) and
$$\norm{Tx}\le(2+\varepsilon)\norm{T}\norm{x}_\varphi\mbox{ for }x\in M.$$
It remains to observe that
$$\norm{x}_\varphi=\sqrt{\varphi\J xx1}=\sqrt{\varphi(x\circ x^*)}$$
for $x\in M$.
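(Here we used the standard expression of the triple product of a JB$^*$-algebra in terms of the Jordan product, $\J xyz=(x\circ y^*)\circ z+(z\circ y^*)\circ x-(x\circ z)\circ y^*$, which for $y=x$ and $z=1$ gives $\J xx1=x\circ x^*$; this identity is recorded only for the reader's convenience.)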
\mathbb Nd{proof}
\begin{thm}\langlebel{T:algebraic version non-dual}
Let $B$ be a JB$^*$-algebra, let $H$ be a Hilbert space and let $T:B\to H$ be a bounded linear operator. Then there is a state $\varphi\in B^*$ such that
$$\norm{Tx}\le2\norm{T}\varphi(x\circ x^*)^{1/2}\mbox{ for }x\in B.$$
\mathbb Nd{thm}
\begin{proof}
Since $B^{**}$ is a JBW$^*$-algebra, $T^{**}$ maps $B^{**}$ into $H$ and $T^{**}$ is weak$^*$-to-weak continuous, by Theorem~\operatorname{Re}f{T:algebraic version dual} we get a sequence $(\varphi_n)$ of states on $B$ such that
$$\norm{Tx}\le(2+\mathfrak{A}c1n)\norm{T}\varphi_n(x\circ x^*)^{1/2}\mbox{ for }x\in B\mbox{ and } n\in\mathbb N.$$
Let $\tilde\varphi$ be a weak$^*$-cluster point of the sequence $(\varphi_n)$. Then $\tilde\varphi$ is positive, $\norm{\tilde\varphi}\le 1$
and
$$\norm{Tx}\le2\norm{T}\tilde\varphi(x\circ x^*)^{1/2}\mbox{ for }x\in B.$$
Now we can clearly replace $\tilde\varphi$ by a state. Indeed, if $\tilde\varphi\ne0$, we take $\varphi=\mathfrak{A}c{\tilde\varphi}{\norm{\tilde\varphi}}$. If $\tilde\varphi=0$, then $T=0$ and hence $\varphi$ may be any state. (Note that in case $B$ is unital, $\tilde\varphi$ is already a state.)
\mathbb Nd{proof}
We finish this section by showing that our main result easily implies Theorem~\operatorname{Re}f{T-C*alg}.
\begin{proof}[Proof of Theorem~\operatorname{Re}f{T-C*alg} from Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras}] Let $A$ be a C$^*$-algebra, let $H$ be a Hilbert space and let $T:A\to H$ be a bounded linear operator. By Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras} there is a sequence $(\psi_n)$ of norm-one functionals in $A^*$ such that
$$\norm{Tx}\le (\sqrt{2}+\mathfrak{A}c1n)\norm{T}\norm{x}_{\psi_n}\mbox{ for }x\in A \mbox{ and }n\in\mathbb N.$$
Recall that $A^{**}$ is a von Neumann algebra. Set $u_n=s(\psi_n)\in A^{**}$. Then
$$\norm{x}_{\psi_n}^2=\psi_n\J xx{u_n}=\mathfrak{A}c12(\psi_n(xx^*u_n)+\psi_n(u_nx^*x))=\mathfrak{A}c12(u_n\psi_n(xx^*)+\psi_n u_n(x^*x))
$$
for $x\in A$. Moreover, $\varphi_{1,n}=u_n\psi_n$ and
$\varphi_{2,n}=\psi_n u_n$
are states on $A$ (note that $\varphi_{1,n}=\abs{\psi_n}$ and $\varphi_{2,n}=\abs{\psi_n^*}$) such that
$${\norm{Tx}\le (\sqrt{2}+\mathfrak{A}c1n)\norm{T}\cdot\mathfrak{A}c1{\sqrt{2}}(\varphi_{1,n}(xx^*)+\varphi_{2,n}(x^*x))^{1/2}\mbox{ for }x\in A \mbox{ and }n\in\mathbb N.}$$
Let $(\varphi_1,\varphi_2)$ be a weak$^*$-cluster point of the sequence $((\varphi_{1,n},\varphi_{2,n}))_n$ in $B_{A^*}\times B_{A^*}$.
Then $\varphi_1,\varphi_2$ are positive functionals of norm at most one such that
$$\norm{Tx}\le \|T\| (\varphi_{1}(xx^*)+\varphi_{2}(x^*x))^{1/2}\mbox{ for }x\in A.$$
Similarly as above we may replace $\varphi_1$ and $\varphi_2$ by states.
\mathbb Nd{proof}
\section{Examples and problems}\langlebel{sec:problems}
\begin{ques}
Do Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras} and Theorem~\operatorname{Re}f{T:JBW*-algebras} hold with the constant $\sqrt2$ instead of $\sqrt2+\varepsilon$?
\mathbb Nd{ques}
We remark that these theorems do not hold with a constant strictly smaller than $\sqrt{2}$. Indeed, assume that Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras} holds with a constant $K$. Then Theorem~\operatorname{Re}f{T-C*alg} holds with constant $\mathfrak{A}c{K}{\sqrt{2}}$ (see the proof of the relationship of these two theorems in Section~\operatorname{Re}f{sec:proofs}). But the best constant for Theorem~\operatorname{Re}f{T-C*alg} is $1$ due to \cite{haagerup-itoh}, hence necessarily $K\ge\sqrt{2}$.
Since the example in \cite{haagerup-itoh} uses a rather involved combinatorial construction, we provide an easier example showing that the constant in Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras} has to be at least $\sqrt{2}$.
\begin{example2}\langlebel{ex:Tx=xxi} Let $H$ be an infinite-dimensional Hilbert space. Let $A=K(H)$ be the C$^*$-algebra of compact operators.
Fix an arbitrary unit vector $\xi\in H$ and define $T:A\to H$ by $Tx=x\xi$ for $x\in A$. It is clear that $\norm{T}=\norm{\xi}=1$. Fix an arbitrary norm-one functional $\varphi\in A^*$. We are going to prove that
\begin{equation}
\sup \left\{\mathfrak{A}c{\norm{Tx}}{\norm{T} \norm{x}_\varphi};\, x\in A, \norm{x}_\varphi\neq0\right\}\ge\sqrt{2}.\langlebel{eq Example}
\mathbb Nd{equation}
Recall that $K(H)^*$ is identified with $N(H)$, the space of nuclear operators on $H$ equipped with the nuclear norm, and $K(H)^{**}$ is identified with $B(H)$, the von Neumann algebra of all bounded linear operators on $H$.
Using the trace duality we deduce that there is a nuclear operator $z$ on $H$ such that $\tr{\abs{z}}=\norm{z}_N=1$ and $\varphi(x)=\tr{zx}$ for $x\in A$. Consider the polar decomposition $z=u\abs{z}$ in $B(H)$. Then $\abs{z}=u^*z$, hence $s(\varphi)\le u^*$. (Note that $\varphi(u^*)=\tr{zu^*}=\tr{u^*z}=\tr{\abs{z}}=1$, hence
$s(\varphi)\le u^*$ by \eqref{eq minimality of the support tripotent}. The converse inequality holds as well, but it is not important.)
It follows that for each $x\in A$ we have
$$\begin{aligned}\norm{x}_\varphi^2&=\varphi(\J xx{u^*})=\mathfrak{A}c12\varphi(xx^*u^*+u^*x^*x)=
\mathfrak{A}c12\tr{xx^*u^*z+u^*x^*xz}\\&=\mathfrak{A}c12(\tr{xx^*\abs{z}}+\tr{u^*x^*xz})\mathbb Nd{aligned}$$
If $\eta\in H$ is a unit vector, we define the operator
$$y_\eta(\mathbb Za)=\ip{\mathbb Za}{\xi}\eta,\qquad\mathbb Za\in H.$$
Then $y_\eta\in A$, $\norm{y_\eta}=1$ and $\norm{Ty_\eta}=1$. Moreover,
$$y_\eta^*(\mathbb Za)=\ip{\mathbb Za}{\eta}\xi,$$
hence
$$y_\eta y_\eta^*(\mathbb Za)=\ip{\mathbb Za}{\eta}\eta\mbox{ and }y_\eta^*y_\eta(\mathbb Za)=\ip{\mathbb Za}{\xi}\xi.$$
Thus
$$\norm{y_\eta}_\varphi^2=\mathfrak{A}c12(\tr{\abs{z}y_\eta y_\eta^*}+\tr{zu^* y_\eta^*y_\eta})=\mathfrak{A}c12(\ip{\abs{z}\eta}{\eta}+\ip{zu^*\xi}{\xi})\le\mathfrak{A}c12(1+\ip{\abs{z}\eta}{\eta}).$$
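(Here we used that the left-hand side and $\ip{\abs{z}\eta}{\eta}$ are real, hence so is $\ip{zu^*\xi}{\xi}$, together with the routine estimate $\abs{\ip{zu^*\xi}{\xi}}\le\norm{zu^*}\le\norm{z}\le\norm{z}_N=1$; we record this only for the reader's convenience.)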
It follows that
$$\inf\{\norm{x}_\varphi^2;\, x\in A, \norm{Tx}=1\}\le \mathfrak{A}c 12 \inf\{1+\ip{\abs{z}\eta}{\eta};\, \norm{\eta}=1\}=\mathfrak{A}c12+\mathfrak{A}c12\min \sigma(\abs{z}),$$
where the last equality follows from \cite[Theorem 15.35]{fabianetal2011}.
Now, $z$ is a nuclear operator of norm one. Thus $0\in\sigma(\abs{z})$ as $H$ has infinite dimension.
Hence
$$\inf\{\norm{x}_\varphi;\, x\in A,\norm{Tx}=1\}\le\mathfrak{A}c1{\sqrt2},$$
which yields inequality \eqref{eq Example}.
\mathbb Qd\mathbb Nd{example2}
\begin{remark}
If $H$ is a finite-dimensional Hilbert space, the construction from Example~\operatorname{Re}f{ex:Tx=xxi} can be carried out as well. In this case $A=K(H)=B(H)$ can be identified with the algebra of $n\times n$ matrices where $n=\dim H$. Now $\sigma(\abs{z})$ need not contain $0$, but at least one of the eigenvalues of $\abs{z}$ is at most $\mathfrak{A}c1n$. So, we get a lower bound $\sqrt{\mathfrak{A}c{2n}{n+2}}$ for the constant in Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras}.
\mathbb Nd{remark}
Next we address the optimality of the algebraic version of the Little Grothendieck theorem.
\begin{ques}
What is the optimal constant in Theorem~\operatorname{Re}f{T:C*alg-sym},
Theorem~\operatorname{Re}f{T:algebraic version dual} and Theorem~\operatorname{Re}f{T:algebraic version non-dual}? In particular,
do these theorems hold with the constant $\sqrt{2}$?
\mathbb Nd{ques}
Note that the constant cannot be smaller than $\sqrt2$ due to Example~\operatorname{Re}f{ex:Tx=xxi}. The following example shows that Example~\operatorname{Re}f{ex:Tx=xxi} cannot yield a greater lower bound.
\begin{example2}
Let $H$, $A$, $\xi$ and $T$ be as in Example~\operatorname{Re}f{ex:Tx=xxi}. Let $u\in A^{**}=B(H)$ be any unitary element. Then
$$\varphi_u(x)=\ip{x\xi}{u\xi},\qquad x\in A$$
defines a norm-one functional in $A^*$ such that $s(\varphi_u)\le u$ and, moreover,
$$\norm{Tx}\le\sqrt2 \norm{x}_{\varphi_u}\mbox{ for }x\in A.$$
Indeed, it is clear that $\norm{\varphi_u}\le 1$. Since $\varphi_u(u)=1$, necessarily $\norm{\varphi_u}=1$ and $s(\varphi_u)\le u$. Moreover, for $x\in A$ we have
$$\begin{aligned}
\norm{x}_{\varphi_u}^2&=\varphi_u\J xxu=\mathfrak{A}c12\varphi_u(xx^*u+ux^*x)
=\mathfrak{A}c12(\ip{xx^*u\xi}{u\xi}+\ip{ux^*x\xi}{u\xi})
\\&=\mathfrak{A}c12(\norm{x^*u\xi}^2+\norm{x\xi}^2)\ge\mathfrak{A}c12\norm{x\xi}^2=\mathfrak{A}c12\norm{Tx}^2.\mathbb Nd{aligned}$$
This completes the proof.
\mathbb Qd\mathbb Nd{example2}
We continue by recalling the example of \cite{haagerup-itoh} showing optimality of Theorem~\operatorname{Re}f{T-C*alg} and explaining that it does not show optimality of either Theorem~\operatorname{Re}f{T:C*alg-sym} or Theorem~\operatorname{Re}f{T:algebraic version non-dual}.
An important tool to investigate optimality of constants in Theorem~\operatorname{Re}f{T-C*alg} is the following characterization.
\begin{prop}[{\cite[Proposition 23.5]{pisier2012grothendieck}}]\langlebel{p equivalent formulation C*}
Let $A$ be a C$^*$-algebra, $H$ a Hilbert space, $T:A\rightarrow H$ a bounded linear map and $K$ a positive number.
Then the following two assertions are equivalent.
\begin{enumerate}[(i)]
\item\langlebel{it equiv formul2 C*} There are states
$\varphi_1, \varphi_2$ on $A$ such that
\begin{eqnarray}
\norm{Tx}\le K\norm{T}(\varphi_1(x^*x)+\varphi_2(xx^*))^{1/2} \quad \mbox{for } x\in A.
\mathbb Nd{eqnarray}
\item\langlebel{it equiv formul1 C*} For any finite sequence $(x_j)$ in $A$ we have
\begin{eqnarray}
\left(\sum_j\norm{Tx_j}^2\right)^{1/2}\le K\norm{T}\left(\norm{\sum_j x_j^*x_j}+\norm{\sum_j x_jx_j^*}\right)^{1/2}.
\langlebel{eq equiv formul1 C*}
\mathbb Nd{eqnarray}
\mathbb Nd{enumerate}
\mathbb Nd{prop}
The following proposition is a complete analogue of the preceding one and can be used to study optimality of Theorem~\operatorname{Re}f{T:algebraic version non-dual}. We have not found it explicitly formulated in the literature, but its proof is completely analogous
to the proof of Proposition~\operatorname{Re}f{p equivalent formulation C*} given in \cite{pisier2012grothendieck}.
\begin{prop}\langlebel{p equivalent formulation JB*-algebra}
Let $A$ be a unital JB$^*$-algebra, $H$ a Hilbert space, $T:A\rightarrow H$ a bounded linear map and $K$ a positive number.
Then the following two assertions are equivalent.
\begin{enumerate}[(i)]
\item\langlebel{it equiv formul2 JB*} There is a state
$\varphi$ on $A$ such that
\begin{eqnarray}
\norm{Tx}\le K\norm{T}\varphi(x^*\circ x)^{1/2} \quad \mbox{for } x\in A.
\mathbb Nd{eqnarray}
\item\langlebel{it equiv formul1 JB*} For any finite sequence $(x_j)$ in $A$ we have
\begin{eqnarray}
\left(\sum_j\norm{Tx_j}^2\right)^{1/2}\le K\norm{T}\norm{\sum_j x_j^*\circ x_j}^{1/2}.
\langlebel{eq equiv formul1 JB*}
\mathbb Nd{eqnarray}
\mathbb Nd{enumerate}
\mathbb Nd{prop}
We recall the example originated in \cite{haagerup-itoh} and formulated and proved in this setting in \cite{pisier2012grothendieck}.
\begin{example}[{\cite[Lemma 11.2]{pisier2012grothendieck}}]\langlebel{example}
Consider an integer $n\ge1$. Let $N=2n+1$ and $d=\begin{pmatrix}2n+1\\n\mathbb Nd{pmatrix}=\begin{pmatrix}2n+1\\n+1\mathbb Nd{pmatrix}$.
Let $\tau_d$ denote the normalized trace on the space $M_d$ of $d\times d$ (complex) matrices. There are $x_1, \ldots, x_N$
in $M_d$ such that $\tau_d(x_i^*x_j)=1$ if $i=j$ and $=0$ otherwise, satisfying
\begin{eqnarray}
\sum_j x_j^*x_j=\sum_j x_jx_j^*=NI \langlebel{eq1 example}
\mathbb Nd{eqnarray}
and moreover such that, with $a_n=(n+1)/(2n+1)$,
\begin{eqnarray}
\forall \alpha=(\alpha_i)\in\mathbb C^N,\quad \norm{\sum_j\alpha_jx_j}_{(M_d)^*}=d\sqrt{a_n}\left(\sum_j\betr{\alpha_j}^2\right)^{1/2}.
\langlebel{eq2 example}
\mathbb Nd{eqnarray}
\mathbb Nd{example}
In the following example we show that the previous one yields the optimality of Theorem~\operatorname{Re}f{T-C*alg} but does not help to find the optimal constant for Theorem~\operatorname{Re}f{T:C*alg-sym} or Theorem~\operatorname{Re}f{T:algebraic version non-dual}. The first part is proved already in \cite{haagerup-itoh} (cf. \cite[Section 11]{pisier2012grothendieck}) but we include the proof for the sake of completeness and, further, in order to compare it with the second part.
\begin{example2} Fix $n\ge1$.
With the notation of Example \operatorname{Re}f{example}
define $T:M_d\to \ell_2^N$ by $$T(x)=(\tau_d(x_j^*x))_{j=1}^N,\qquad x\in M_d.$$
Let $(\eta_j)_{j=1}^N$ be the canonical orthonormal basis of $\ell_2^N$. Then the dual mapping $T^*:\ell_2^N\to M_d^*$ fulfils
$$\ip{T^*(\eta_j)}{x}=\ip{\eta_j}{T(x)}=\tau_d(x_j^*x)=\mathfrak{A}c1d\tr{x_j^*x}\mbox{ for }x\in M_d,$$
thus $T^*(\eta_j)=\mathfrak{A}c1dx_j^*$ (we use the trace duality). Then \eqref{eq2 example} shows that
$$\norm{T^*(\alpha)}=\mathfrak{A}c1d\norm{\sum_{j=1}^N\alpha_jx_j^*}_{(M_d)^*}=\sqrt{a_n}\norm{\alpha}\mbox{ for }\alpha\in\ell_2^N.$$
In particular, $\mathfrak{A}c1{\sqrt{a_n}}T^*$ is an isometric embedding, thus $\mathfrak{A}c1{\sqrt{a_n}}T$ is a quotient mapping. Hence, $\norm{T}=\sqrt{a_n}$.
Further, $T(x_j)=\eta_j$ for $j=1,\dots,N$, so
$$\sum_{j=1}^N \norm{T(x_j)}^2=N$$
and
$$\norm{\sum_{j=1}^N x_j^*x_j}+\norm{\sum_{j=1}^N x_jx_j^*}=2\norm{NI}=2N.$$
Thus due to Proposition~\operatorname{Re}f{p equivalent formulation C*} the optimal value of the constant in Theorem~\operatorname{Re}f{T-C*alg} is bounded below by
$$\mathfrak{A}c{1}{\sqrt{2a_n}}=\sqrt{\mathfrak{A}c{2n+1}{2n+2}}\to 1.$$
On the other hand,
$$\norm{\sum_{j=1}^N x_j^*\circ x_j}=\norm{NI}=N,$$
thus Proposition~\operatorname{Re}f{p equivalent formulation JB*-algebra} yields that the optimal value of the constant in Theorem~\operatorname{Re}f{T:C*alg-sym} is bounded below by
$$\mathfrak{A}c1{\sqrt{a_n}}=\sqrt{\mathfrak{A}c{2n+1}{n+1}}\to \sqrt2,$$
so it gives nothing better than Example~\operatorname{Re}f{ex:Tx=xxi}.
In fact, this operator $T$ satisfies Theorem~\operatorname{Re}f{T:C*alg-sym} with constant $\mathfrak{A}c1{\sqrt{a_n}}\le\sqrt{2}$.
To see this observe that $(x_j)_{j=1}^N$ is an orthonormal system in $M_d$ equipped with the normalized Hilbert-Schmidt inner product.
Hence, any $x\in M_d$ can be expressed as
$$x=y+\sum_{j=1}^N\alpha_jx_j,$$
where $\alpha_j$ are scalars and $y\in\{x_1,\dots,x_N\}^{\perp_{HS}}$.
Then $T(x)=(\alpha_j)_{j=1}^N$ and
$$\tau_d(x^*\circ x)=\tau_d(x^*x)=\tau_d(y^*y)+\sum_{j=1}^N\abs{\alpha_j}^2\ge \sum_{j=1}^N\abs{\alpha_j}^2=\norm{T(x)}^2.$$
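(The first equality in the last display uses that $\tau_d$ is a trace, so $\tau_d(x^*\circ x)=\frac12\tau_d(x^*x+xx^*)=\tau_d(x^*x)$; the second one is the Pythagorean identity for the orthogonal decomposition $x=y+\sum_j\alpha_jx_j$ with respect to the inner product $\ip{a}{b}_{HS}=\tau_d(b^*a)$. We record this routine computation only for the reader's convenience.)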
Hence
$$\norm{T(x)}\le \tau_d(x^*\circ x)^{1/2}=\mathfrak{A}c1{\sqrt{a_n}}\norm{T} \tau_d(x^*\circ x)^{1/2}.$$
Since $\tau_d$ is a state, the proof is complete.
\mathbb Nd{example2}
We continue by an example showing that there is a real difference between the triple and algebraic versions of the Little Grothendieck theorem.
\begin{example}\langlebel{ex:alg vs triple} \
\begin{enumerate}[$(a)$]
\item Let $M$ be any JBW$^*$-triple and let
$\varphi\in M_*$ be a norm-one functional. Then $$\abs{\varphi(x)}\le\norm{x}_\varphi,\mbox{ for all }x\in M,$$
hence $\varphi:M\to\mathbb C$ satisfies Theorem~\operatorname{Re}f{T:triples}$(3)$ with constant one.
\item Let $M_2$ be the algebra of $2\times 2$ matrices. Then there is a norm-one functional $\varphi:M_2\to \mathbb C$ not satisfying Theorem~\operatorname{Re}f{T:C*alg-sym} with constant smaller than $\sqrt{2}$.
\item In particular, the constant $\sqrt{2}$ in Lemma~\operatorname{Re}f{L:rotation} is optimal.
\mathbb Nd{enumerate}
\mathbb Nd{example}
\begin{proof}
$(a)$ The desired inequality was already stated in \cite[comments before Definition 3.1]{barton1990bounded}. Let us give some details. We set $e=s(\varphi)$. Then
$$\abs{\varphi(x)}=\abs{\varphi(P_2(e)x)}=\abs{\varphi(\J{P_2(e)x}ee)}\le\norm{P_2(e)x}_\varphi\norm{e}_\varphi=\norm{P_2(e)x}_\varphi.$$
Moreover,
$$\begin{aligned}\norm{x}_\varphi^2&=\varphi\J xxe=\varphi(P_2(e)\J xxe)\\&=\varphi(\J{P_2(e)x}{P_2(e)x}e+\J{P_1(e)x}{P_1(e)x}e)=\norm{P_2(e)x}_\varphi^2+\norm{P_1(e)x}_\varphi^2\\&\ge\norm{P_2(e)x}_\varphi^2.\mathbb Nd{aligned}$$
$(b)$ Each $a\in M_2$ can be represented as $a=(a_{ij})_{i,j=1,2}$. Define $\varphi:M_2\to\mathbb C$ by
$$\varphi(a)=a_{12},\quad a\in M_2.$$
{
It is clear that $\norm{\varphi}=1$ and that $\varphi(s)=1$ where
$$s=\begin{pmatrix} 0 & 1\\ 0& 0\mathbb Nd{pmatrix}.$$
Let $\psi$ be any state on $M_2$.
Then
$$\norm{s}_\psi^2=\psi (\J ss{\mathbf{1}})=\mathfrak{A}c12\psi(s s^*+s^*s)=\mathfrak{A}c12\psi(\mathbf{1})=\mathfrak{A}c12.$$
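(Explicitly, $ss^*=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ and $s^*s=\begin{pmatrix}0&0\\0&1\end{pmatrix}$, so $ss^*+s^*s=\mathbf{1}$; this elementary computation is recorded only for the reader's convenience.)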
Thus $\varphi(s)=\sqrt{2}\norm{s}_\psi$ for any state $\psi$ on $A=M_2$, which completes the proof.
$(c)$ This follows from $(b)$ (consider $p=\mathbf{1}$).}
\mathbb Nd{proof}
\section{Notes and problems on general JB$^*$-triples}\langlebel{sec:triples}
The main result, Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras}, is formulated and proved for JB$^*$-algebras. The assumption that we deal with a JB$^*$-algebra, not with a general JB$^*$-triple, was strongly used in the proof. Indeed, the key step was to prove
the dual version for JBW$^*$-algebras, Theorem~\operatorname{Re}f{T:JBW*-algebras},
and we substantially used the existence of unitary elements. So, the following problem remains open.
\begin{ques}
Is Theorem~\operatorname{Re}f{t constant >sqrt2 in LG for JBstar algebras} valid for general JB$^*$-triples?
\mathbb Nd{ques}
We do not know how to attack this question. However, there are some easy partial results. Moreover, some of our achievements may be easily extended to JBW$^*$-triples. In this section we collect such results.
The first example shows that for some JB$^*$-triples the optimal constant in the Little Grothendieck Theorem is easily seen to be $\sqrt2$. This is shown by completely elementary methods.
\begin{example2}
Let $H$ be a Hilbert space considered as the triple $B(\mathbb C,H)$ (i.e., a type 1 Cartan factor). That is, the triple product is given by
$$\J xyz=\mathfrak{A}c12(\ip xyz+\ip zyx),\quad x,y,z\in H.$$
The dual coincides with the predual and it is isometric to $H$. Let $y\in H^*$ be a norm-one element, i.e. we consider it as the functional $\ip{\cdot}{y}$. Then
$s(y)=y$. So, for $x\in H$ we have
$$\norm{x}_y^2=\ip{\J xxy}{y}=
\mathfrak{A}c12\ip{\ip xxy+\ip yxx}{y}=\mathfrak{A}c12(\norm{x}^2+\abs{\ip xy}^2)\ge\mathfrak{A}c12\norm{x}^2.$$
Hence, if $K$ is another Hilbert space and $T:H\to K$ is a bounded linear operator, then for any norm-one $y\in H^*$ we have
$$\norm{Tx}\le\norm{T}\norm{x}\le\sqrt{2}\norm{T}\norm{x}_y,$$
so we have the Little Grothendieck theorem with constant $\sqrt2$.
Moreover, the constant $\sqrt2$ is optimal in this case as soon as $\dim H\ge2$. Indeed, let $T:H\to H$ be the identity. Given any norm-one element $y\in H$, we may find a norm-one element $x\in H$ with $x\perp y$. The above computation shows that $\norm{x}=\sqrt{2}\norm{x}_y$.
\mathbb Nd{example2}
Another case, nontrivial but well known, is covered by the following example.
\begin{example2}
Assume that $E$ is a finite-dimensional JB$^*$-triple. Then $E$ is reflexive and, moreover, any bounded linear operator $T:E\to H$ (where $H$ is a Hilbert space) attains its norm. Hence $E$ satisfies the Little Grothendieck theorem with constant $\sqrt{2}$ by Theorem~\operatorname{Re}f{T:triples}$(1)$.
\mathbb Nd{example2}
We continue by checking which methods used in the present paper easily work for general triples.
\begin{obs}\langlebel{obs:type I approx triples}
Proposition~\operatorname{Re}f{P:type I approx} holds for corresponding JBW$^*$-triples as well.
\mathbb Nd{obs}
\begin{proof}
It is clear that it is enough to prove it separately for finite JBW$^*$-triples and for type I JBW$^*$-triples. The case of finite JBW$^*$-triples is trivial (one can take $\varphi_2=0$).
So, let $M$ be a JBW$^*$-triple of type I, $\varphi\in M_*$ and $\varepsilon>0$. Set $e=s(\varphi)$. Then $M_2(e)$ is a type I JBW$^*$-algebra (see \cite[comments on pages 61-62 or Theorem 4.2]{BuPe02}) and $\varphi|_{M_2(e)}\in M_2(e)_*$. Apply Proposition~\operatorname{Re}f{P:type I approx} to $M_2(e)$ and $\varphi|_{M_2(e)}$ to get $\varphi_1$ and $\varphi_2$.
The pair of functionals $\varphi_1\circ P_2(e)$ and $\varphi_2\circ P_2(e)$ completes the proof.
\mathbb Nd{proof}
Observe that the validity of Proposition~\operatorname{Re}f{P:type I approx} for finite JBW$^*$-triples is trivial but useless if we have no unitary element. However, the `type I part' may be used at least in some cases.
\begin{prop}\langlebel{P:B(H,K)}
Let $M=L^\infty(\mu)\overline{\otimes}B(H,K)$, where $H$ and $K$ are infinite-di\-men\-sional Hilbert spaces. Then Proposition~\operatorname{Re}f{P:majorize 1+2+epsilon} holds for $M$.
\mathbb Nd{prop}
\begin{proof} Let us start by showing that Peirce-2 subspaces of tripotents in $M$ are upwards directed by inclusion. To this end first observe that $M=pV$, where $V$ is a von Neumann algebra and $p\in V$ is a properly infinite projection.
This is explained for example in \cite[p. 43]{hamhalter2019mwnc}. Now assume that $u_1,u_2\in pV$ are two tripotents (i.e., partial isometries in $V$ with final projections below $p$). By \cite[Lemma 9.8(c)]{hamhalter2019mwnc} there are projections $q_1,q_2\in V$ such that $q_j\ge p_i(u_j)$ and $q_j\sim p$ for $j=1,2$. Further, by \cite[Lemma 9.8(a)]{hamhalter2019mwnc} we have $q_1\vee q_2\sim p$, so there is a partial isometry $u\in V$ with $p_i(u)=q_1\vee q_2$ and $p_f(u)=p$. Then $u\in pV=M$ and $M_2(u)\supset M_2(u_1)\cup M_2(u_2)$.
Now we proceed with the proof of the statement itself. Let $\varphi_1,\varphi_2\in M_*$ and $\varepsilon>0$. Note that $M$ is of type I, hence we may apply Observation~\operatorname{Re}f{obs:type I approx triples} to get the respective decomposition $\varphi_1=\varphi_{11}+\varphi_{12}$.
Let $u\in M$ be a tripotent such that $M_2(u)$ contains $s(\varphi_{11}),s(\varphi_{12}),s(\varphi_2)$. Such a $u$ exists as Peirce-2 subspaces of tripotents in $M$ are upwards directed by inclusion as explained above. We can find a unitary $v\in M_2(u)$ with $s(\varphi_{11})\le v$ (recall that $s(\varphi_{11})$ is a finite tripotent and use \cite[Proposition 7.5]{Finite}). We conclude by applying Lemma~\operatorname{Re}f{L:rotation}.
\mathbb Nd{proof}
Combining the previous proposition with Theorem~\operatorname{Re}f{T:triples-dual} we get the following.
\begin{cor}
Let $M=L^\infty(\mu)\overline{\otimes}B(H,K)$, where $H$ and $K$ are infinite-di\-men\-sio\-nal Hilbert spaces. Then Theorem~\operatorname{Re}f{T:JBW*-algebras} holds for $M$.
\mathbb Nd{cor}
We finish by pointing out main problems concerning JBW$^*$-triples.
\begin{ques}
Assume that $M$ is a JBW$^*$-triple of one of the following forms:
\begin{itemize}
\item $M=L^\infty(\mu,C)$, where $\mu$ is a probability measure and $C$ is a finite-di\-men\-sio\-nal JB$^*$-triple without unitary element.
\item $M=pV$, where $V$ is a von Neumann algebra and $p$ is a purely infinite projection.
\item $M=pV$, where $V$ is a von Neumann algebra and $p$ is a finite projection.
\mathbb Nd{itemize}
Is Theorem~\operatorname{Re}f{T:JBW*-algebras} valid for $M$?
\mathbb Nd{ques}
Note that these three cases correspond to the three cases distinguished in \cite{HKPP-BF}. We conjecture that the second case may be proved by adapting the results of Section~\operatorname{Re}f{sec:JW*} (but we do not see an easy way) and that the third case is the most difficult one (similarly as in \cite{HKPP-BF}).
\begin{remark}{\rm Haagerup applied in \cite{haagerup1985grothendieck} ultrapower techniques to relax some of the extra hypotheses assumed by Pisier in the first approach to a Grothendieck inequality for C$^*$-algebras. Let us include a few words justifying why Haagerup's techniques are not effective in the setting of JB$^*$-triples. Indeed, while
a cluster point (in a reasonable sense) of states of a unital C$^*$-algebra is a state, a cluster
point of norm-one functionals may even be zero. This is the case for weak (weak$^*$) limits and also for ultrapowers. The ultrapower, $E_{\mathcal{U}},$ of a JB$^*$-triple, $E$, with respect to an ultrafilter $\mathcal{U}$, is again a JB$^*$-triple with respect to the natural extension of the triple product (see \cite[Corollary 10]{Dineen86}), and $E$ can be regarded as a JB$^*$-subtriple of $E_{\mathcal{U}}$ via the inclusion of elements as constant sequences. Given a norm-one functional $\widetilde{\varphi}\in E_{\mathcal{U}}^*$, the restriction $\varphi = \widetilde{\varphi}|_{E}$ belongs to $E^*$; however, we cannot guarantee that $\|x\|_{\widetilde{\varphi}} = \|[x]_{\mathcal{U}}\|_{\widetilde{\varphi}}$ is bounded by a multiple of $\|x\|_{{\varphi}}$. Let us observe that both prehilbertian seminorms coincide on elements of $E$ when the latter is a unital C$^*$-algebra and $\widetilde{\varphi}$ is a state on $E.$
}\mathbb Nd{remark}
\textbf{Acknowledgements}
A.M. Peralta was partially supported by the Spanish Ministry of Science, Innovation and Universities (MICINN) and European Regional Development Fund project no. PGC2018-093332-B-I00, the IMAG–Mar{\'i}a de Maeztu grant CEX2020-001105-M/AEI/10.13039/501100011033, and by Junta de Andaluc\'{\i}a grants FQM375 and A-FQM-242-UGR18.
We would like to thank the referees for their careful reading of our manuscript and their constructive comments.
\def\cprime{$'$}
\begin{thebibliography}{10}
\bibitem{AlfsenShultzStateSpace2001}
{\sc Alfsen, E.~M., and Shultz, F.~W.}
\newblock {\em State spaces of operator algebras}.
\newblock Mathematics: Theory \& Applications. Birkh\"{a}user Boston, Inc.,
Boston, MA, 2001.
\newblock Basic theory, orientations, and $C^*$-products.
\bibitem{AlfsenShultzGeometry2003}
{\sc Alfsen, E.~M., and Shultz, F.~W.}
\newblock {\em Geometry of state spaces of operator algebras}.
\newblock Mathematics: Theory \& Applications. Birkh\"{a}user Boston, Inc.,
Boston, MA, 2003.
\bibitem{barton1987grothendieck}
{\sc Barton, T., and Friedman, Y.}
\newblock Grothendieck's inequality for {$JB^*$}-triples and applications.
\newblock {\em J. London Math. Soc. (2) 36}, 3 (1987), 513--523.
\bibitem{barton1990bounded}
{\sc Barton, T., and Friedman, Y.}
\newblock Bounded derivations of {${\rm JB}^*$}-triples.
\newblock {\em Quart. J. Math. Oxford Ser. (2) 41}, 163 (1990), 255--268.
\bibitem{BaTi}
{\sc Barton, T., and Timoney, R.~M.}
\newblock Weak{$^\ast$}-continuity of {J}ordan triple products and its
applications.
\newblock {\em Math. Scand. 59}, 2 (1986), 177--191.
\bibitem{BraKaUp78}
{\sc Braun, R., Kaup, W., and Upmeier, H.}
\newblock A holomorphic characterization of {J}ordan {$C\sp*$}-algebras.
\newblock {\em Math. Z. 161}, 3 (1978), 277--290.
\bibitem{Bun01}
{\sc Bunce, L.~J.}
\newblock Norm preserving extensions in {${\rm JBW}^*$}-triple preduals.
\newblock {\em Quart. J. Math. Oxford 52}, 2 (2001), 133--136.
\bibitem{BuPe02}
{\sc Bunce, L.~J., and Peralta, A.~M.}
\newblock Images of contractive projections on operator algebras.
\newblock {\em J. Math. Anal. Appl. 272}, 1 (2002), 55--66.
\bibitem{Cabrera-Rodriguez-vol1}
{\sc Cabrera~Garc\'{\i}a, M., and Rodr\'{\i}guez~Palacios, A.}
\newblock {\em Non-associative normed algebras. {V}ol. 1}, vol.~154 of {\em
Encyclopedia of Mathematics and its Applications}.
\newblock Cambridge University Press, Cambridge, 2014.
\newblock The Vidav-Palmer and Gelfand-Naimark theorems.
\bibitem{Cabrera-Rodriguez-vol2}
{\sc Cabrera~Garc\'{i}a, M., and Rodr\'{i}guez~Palacios, A.}
\newblock {\em Non-associative normed algebras. {V}ol. 2}, vol.~167 of {\em
Encyclopedia of Mathematics and its Applications}.
\newblock Cambridge University Press, Cambridge, 2018.
\newblock Representation theory and the Zel'manov approach.
\bibitem{ChodaKijimaNak69}
{\sc Choda, H., Kijima, Y., and Nakagami, Y.}
\newblock Some extremal properties in the unit ball of von {N}eumann algebras.
\newblock {\em K\={o}dai Math. Sem. Rep. 21\/} (1969), 175--181.
\bibitem{chubook}
{\sc Chu, C.-H.}
\newblock {\em Jordan structures in geometry and analysis}, vol.~190 of {\em
Cambridge Tracts in Mathematics}.
\newblock Cambridge University Press, Cambridge, 2012.
\bibitem{Dineen86}
{\sc Dineen, S.}
\newblock Complete holomorphic vector fields on the second dual of a {B}anach
space.
\newblock {\em Math. Scand. 59}, 1 (1986), 131--142.
\bibitem{EdRu01}
{\sc Edwards, C.~M., and R\"{u}ttimann, G.~T.}
\newblock Orthogonal faces of the unit ball in a {B}anach space.
\newblock {\em Atti Sem. Mat. Fis. Univ. Modena 49}, 2 (2001), 473--493.
\bibitem{fabianetal2011}
{\sc Fabian, M., Habala, P., H\'{a}jek, P., Montesinos, V., and Zizler, V.}
\newblock {\em Banach space theory}.
\newblock CMS Books in Mathematics/Ouvrages de Math\'{e}matiques de la SMC.
Springer, New York, 2011.
\newblock The basis for linear and nonlinear analysis.
\bibitem{Friedman-Russo}
{\sc Friedman, Y., and Russo, B.}
\newblock Structure of the predual of a {$JBW^\ast$}-triple.
\newblock {\em J. Reine Angew. Math. 356\/} (1985), 67--89.
\bibitem{Friedman-Russo-GN}
{\sc Friedman, Y., and Russo, B.}
\newblock The {G}el\cprime fand-{N}a\u{\i}mark theorem for {${\rm
JB}^\ast$}-triples.
\newblock {\em Duke Math. J. 53}, 1 (1986), 139--148.
\bibitem{FriRu87}
{\sc Friedman, Y., and Russo, B.}
\newblock Conditional expectation and bicontractive projections on {J}ordan
{$C^\ast$}-algebras and their generalizations.
\newblock {\em Math. Z. 194}, 2 (1987), 227--236.
\bibitem{Gohberg90}
{\sc Gohberg, I., Goldberg, S., and Kaashoek, M.~A.}
\newblock {\em Classes of linear operators. {V}ol. {I}}, vol.~49 of {\em
Operator Theory: Advances and Applications}.
\newblock Birkh\"{a}user Verlag, Basel, 1990.
\bibitem{grothendieck1956resume}
{\sc Grothendieck, A.}
\newblock {\em R{\'e}sum{\'e} de la th{\'e}orie m{\'e}trique des produits
tensoriels topologiques}.
\newblock Soc. de Matem{\'a}tica de S{\~a}o Paulo, 1956.
\bibitem{haagerup1985grothendieck}
{\sc Haagerup, U.}
\newblock The {G}rothendieck inequality for bilinear forms on
{$C^\ast$}-algebras.
\newblock {\em Adv. in Math. 56}, 2 (1985), 93--116.
\bibitem{haagerup-itoh}
{\sc Haagerup, U., and Itoh, T.}
\newblock Grothendieck type norms for bilinear forms on {$C^*$}-algebras.
\newblock {\em J. Operator Theory 34}, 2 (1995), 263--283.
\bibitem{Finite}
{\sc Hamhalter, J., Kalenda, O. F.~K., and Peralta, A.~M.}
\newblock Finite tripotents and finite {${\rm JBW}^*$}-triples.
\newblock {\em J. Math. Anal. Appl. 490}, 1 (2020), article no. 124217, 65pp.
\bibitem{hamhalter2019mwnc}
{\sc Hamhalter, J., Kalenda, O. F.~K., Peralta, A.~M., and Pfitzner, H.}
\newblock Measures of weak non-compactness in preduals of von {N}eumann
algebras and {$\rm JBW^\ast$}-triples.
\newblock {\em J. Funct. Anal. 278}, 1 (2020), article no. 108300, 69pp.
\bibitem{HKPP-BF}
{\sc Hamhalter, J., Kalenda, O. F.~K., Peralta, A.~M., and Pfitzner, H.}
\newblock Grothendieck's inequalities for {JB}$^*$-triples: {P}roof of the
{B}arton-{F}riedman conjecture.
\newblock {\em Trans. Amer. Math. Soc. 374}, 2 (2021), 1327--1350.
\bibitem{hanche1984jordan}
{\sc Hanche-Olsen, H., and St\o rmer, E.}
\newblock {\em Jordan operator algebras}, vol.~21 of {\em Monographs and
Studies in Mathematics}.
\newblock Pitman (Advanced Publishing Program), Boston, MA, 1984.
\bibitem{harris1974bounded}
{\sc Harris, L.~A.}
\newblock Bounded symmetric homogeneous domains in infinite dimensional spaces.
\newblock In {\em Proceedings on {I}nfinite {D}imensional {H}olomorphy
({I}nternat. {C}onf., {U}niv. {K}entucky, {L}exington, {K}y., 1973)\/}
(1974), pp.~13--40. Lecture Notes in Math., Vol. 364.
\bibitem{horn1987ideal}
{\sc Horn, G.}
\newblock Characterization of the predual and ideal structure of a {${\rm
JBW}^*$}-triple.
\newblock {\em Math. Scand. 61}, 1 (1987), 117--133.
\bibitem{horn1987classification}
{\sc Horn, G.}
\newblock Classification of {JBW{$^*$}}-triples of type {${\rm I}$}.
\newblock {\em Math. Z. 196}, 2 (1987), 271--291.
\bibitem{horn1988classification}
{\sc Horn, G., and Neher, E.}
\newblock Classification of continuous {$JBW^*$}-triples.
\newblock {\em Trans. Amer. Math. Soc. 306}, 2 (1988), 553--578.
\bibitem{kaup1981klassifikation}
{\sc Kaup, W.}
\newblock \"{U}ber die {K}lassifikation der symmetrischen hermiteschen
{M}annigfaltigkeiten unendlicher {D}imension. {I}.
\newblock {\em Math. Ann. 257}, 4 (1981), 463--486.
\bibitem{kaup1983riemann}
{\sc Kaup, W.}
\newblock A {R}iemann mapping theorem for bounded symmetric domains in complex
{B}anach spaces.
\newblock {\em Math. Z. 183}, 4 (1983), 503--529.
\bibitem{kaup1997real}
{\sc Kaup, W.}
\newblock On real {C}artan factors.
\newblock {\em Manuscripta Math. 92}, 2 (1997), 191--222.
\bibitem{kaup1977jordan}
{\sc Kaup, W., and Upmeier, H.}
\newblock Jordan algebras and symmetric {S}iegel domains in {B}anach spaces.
\newblock {\em Math. Z. 157}, 2 (1977), 179--200.
\bibitem{Kuo75}
{\sc Kuo, H.~H.}
\newblock {\em Gaussian measures in {B}anach spaces}.
\newblock Lecture Notes in Mathematics, Vol. 463. Springer-Verlag, Berlin-New
York, 1975.
\bibitem{loos1977bounded}
{\sc Loos, O.}
\newblock {\em Bounded symmetric domains and {J}ordan pairs}.
\newblock Lecture Notes. Univ. California at Irvine, 1977.
\bibitem{LMZ}
{\sc Luke\v{s}, J., Mal\'{y}, J., and Zaj\'{\i}\v{c}ek, L.}
\newblock {\em Fine topology methods in real analysis and potential theory},
vol.~1189 of {\em Lecture Notes in Mathematics}.
\newblock Springer-Verlag, Berlin, 1986.
\bibitem{mil}
{\sc Miljutin, A.~A.}
\newblock Isomorphism of the spaces of continuous functions over compact sets
of the cardinality of the continuum.
\newblock {\em Teor. Funkci\u\i\ Funkcional. Anal. i Prilo\v zen. Vyp. 2\/}
(1966), 150--156. (1 foldout).
\bibitem{peralta2001little}
{\sc Peralta, A.~M.}
\newblock Little {G}rothendieck's theorem for real {JB$^*$}-triples.
\newblock {\em Math. Z. 237}, 3 (2001), 531--545.
\bibitem{peralta2001grothendieck}
{\sc Peralta, A.~M., and Rodr{\'\i}guez-Palacios, A.}
\newblock Grothendieck's inequalities for real and complex {$\rm
JBW^*$}-triples.
\newblock {\em Proc. London Math. Soc. 83}, 3 (2001), 605--625.
\bibitem{pfitzner-jot}
{\sc Pfitzner, H.}
\newblock Perturbation of {$l^1$}-copies and measure convergence in preduals of
von {N}eumann algebras.
\newblock {\em J. Operator Theory 47}, 1 (2002), 145--167.
\bibitem{pisier1978grothendieck}
{\sc Pisier, G.}
\newblock Grothendieck's theorem for noncommutative {C}$^*$-algebras, with an
appendix on {G}rothendieck's constants.
\newblock {\em Journal of Functional Analysis 29}, 3 (1978), 397--415.
\bibitem{pisier2012grothendieck}
{\sc Pisier, G.}
\newblock Grothendieck's theorem, past and present.
\newblock {\em Bull. Amer. Math. Soc. (N.S.) 49}, 2 (2012), 237--323.
\bibitem{Poliquin-Zizler}
{\sc Poliquin, R.~A., and Zizler, V.~E.}
\newblock Optimization of convex functions on {$w^*$}-compact sets.
\newblock {\em Manuscripta Math. 68}, 3 (1990), 249--270.
\bibitem{ryan}
{\sc Ryan, R.~A.}
\newblock {\em Introduction to tensor products of {B}anach spaces}.
\newblock Springer Monographs in Mathematics. Springer-Verlag London, Ltd.,
London, 2002.
\bibitem{stra-zsi}
{\sc Str\u{a}til\u{a}, \c{S}., and Zsid\'{o}, L.}
\newblock {\em Lectures on von {N}eumann algebras}.
\newblock Editura Academiei, Bucharest; Abacus Press, Tunbridge Wells, 1979.
\newblock Revision of the 1975 original, Translated from the Romanian by Silviu
Teleman.
\bibitem{Tak}
{\sc Takesaki, M.}
\newblock {\em Theory of operator algebras. {I}}.
\newblock Springer-Verlag, New York-Heidelberg, 1979.
\bibitem{Topping1965}
{\sc Topping, D.~M.}
\newblock Jordan algebras of self-adjoint operators.
\newblock {\em Mem. Amer. Math. Soc. No. 53\/} (1965), 48.
\bibitem{Wright1977}
{\sc Wright, J. D.~M.}
\newblock Jordan {$C\sp*$}-algebras.
\newblock {\em Michigan Math. J. 24}, 3 (1977), 291--302.
\bibitem{WrightYoungson77}
{\sc Wright, J. D.~M., and Youngson, M.~A.}
\newblock A {R}usso-{D}ye theorem for {J}ordan {$C\sp*$}-algebras.
\newblock In {\em Functional analysis: surveys and recent results ({P}roc.
{C}onf., {P}aderborn, 1976)\/} (1977), pp.~279--282. North--Holland Math.
Studies, Vol. 27; Notas de Mat., No. 63.
\bibitem{youngson1978vidav}
{\sc Youngson, M.~A.}
\newblock A {V}idav theorem for {B}anach {J}ordan algebras.
\newblock {\em Math. Proc. Cambridge Philos. Soc. 84}, 2 (1978), 263--272.
\mathbb Nd{thebibliography}
\mathbb Nd{document}
\begin{document}
\title[]
{Instantaneously complete Chern-Ricci flow and K\"ahler-Einstein metrics }
\author{Shaochuang Huang$^1$}
\address[Shaochuang Huang]{Yau Mathematical Sciences Center, Tsinghua University, Beijing, China.}
\email{schuang@mail.tsinghua.edu.cn}
\thanks{$^1$Research partially supported by China Postdoctoral Science Foundation \#2017T100059}
\author{Man-Chun Lee}
\address[Man-Chun Lee]{Department of
Mathematics, University of British Columbia, Canada}
\email{mclee@math.ubc.ca}
\author{Luen-Fai Tam$^2$}
\address[Luen-Fai Tam]{The Institute of Mathematical Sciences and Department of
Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China.}
\email{lftam@math.cuhk.edu.hk}
\thanks{$^2$Research partially supported by Hong Kong RGC General Research Fund \#CUHK 14301517}
\renewcommand{\subjclassname}{
\textup{2010} Mathematics Subject Classification}
\subjclass[2010]{Primary 32Q15; Secondary 53C44
}
\date{February 2019}
\begin{abstract}
In this work, we obtain some existence results for {\it Chern-Ricci Flows} and the corresponding {\it Potential Flows} on complex manifolds with possibly incomplete initial data. We discuss the behaviour of the solution as $t\to 0$. These results can be viewed, in a certain sense, as a higher-dimensional generalization of an existence result of Giesen and Topping for the Ricci flow on surfaces of hyperbolic type. On the other hand, we also discuss the long time behaviour of the solution and obtain some sufficient conditions for the existence of K\"ahler-Einstein metrics on complete non-compact Hermitian manifolds, which generalizes the work of Lott-Zhang and Tosatti-Weinkove to complete non-compact Hermitian manifolds with possibly unbounded curvature.
\end{abstract}
\keywords{Chern-Ricci flow, instantaneous completeness, K\"ahler-Einstein metric}
\maketitle
\markboth{Shaochuang Huang, Man-Chun Lee and Luen-Fai Tam}{Instantaneously complete Chern-Ricci flow and K\"ahler-Einstein metrics }
\section{Introduction}
In this work, we will discuss conditions on the existence of {\it Chern-Ricci Flows} and the corresponding {\it Potential Flows} on complex manifolds with possibly incomplete initial data. The flows will be described later. We will also discuss conditions on long-time existence and convergence to K\"ahler-Einstein metrics.
We begin with the definitions of Chern-Ricci flow and the corresponding potential flow. Let $M^n$ be a complex manifold with complex dimension $n$. Let $h$ be a Hermitian metric on $M$ and let $\theta_0$ be the K\"ahler form of $h$:
$$
\theta_0=\sqrt{-1} h_{i\bar{j}} dz^i\wedge d\bar z^j
$$
where $h=h_{i\bar{j}} dz^i\otimes d\bar z^j$ in local holomorphic coordinates. In this work, the Einstein summation convention is used.
In general, suppose $\omega$ is a real (1,1) form on $M$; if $\omega=\sqrt{-1} g_{i\bar{j}} dz^i\wedge d\bar z^j$ in local holomorphic coordinates, then the corresponding Hermitian form $g$ is given by
$$
g=g_{i\bar{j}} dz^i\otimes d\bar z^j.
$$
In case $\omega$ is only nonnegative, we still call $g$ the Hermitian form of $\omega$ and $\omega$ the K\"ahler form of $g$.
Now if $(M^n,h)$ is a Hermitian manifold with K\"ahler form $\theta_0$, let $\nablaabla$ be the Chern connection $\nablaabla $ of $h$ and $\text{\rm Ric}(h)$ be the Chern-Ricci tensor of $h$ (or the first Ricci curvature). In holomorphic local coordinates such that $h=h_{i\bar{j}} dz^i\otimes d{\beta}ar z^j$, the Chern Ricci form is given by
$$
\text{\rm Ric}(h)=-\sqrt{-1} \partial{\beta}ar\partial \log \det(h_{i\bar{j}}).
$$
For the basic facts on Chern connection and Chern curvature, we refer readers to \cite[section 2]{ TosattiWeinkove2015}, see also \cite[Appendix A]{Lee-Tam} for example.
Let $\omega_0$ be another nonnegative real (1,1) form on $M$. Define
\begin{equation}\label{e-alpha}
\alpha:=-\text{\rm Ric}(\theta_0)+e^{-t}\left(\text{\rm Ric}(\theta_0)+\omega_0\right)
\end{equation}
where $\text{\rm Ric}(\theta_0)$ is the Chern-Ricci curvature of $h$. We want to study the following parabolic complex Monge-Amp\`ere equation:
\begin{equation}\label{e-MP-1}
\left\{
\begin{array}{ll}
{\displaystyle \frac{\partial u}{\partial t}}&=\displaystyle{\log\left(\frac{(\alpha+\sqrt{-1}\partial\bar\partial u)^n}{\theta_0^n}\right)}-u\ \ \text{in $M\times(0,S]$} \\
u(0)&=0
\end{array}
\right.
\end{equation}
so that $\alpha+\sqrt{-1}\partial\bar\partial u>0$ for $t>0$. When $M$ is compact and $\omega_0=\theta_0$ is a smooth metric, this was first studied by Gill in \cite{Gill}. Here we are interested in the case when $\omega_0$ is possibly an incomplete metric on a complete non-compact Hermitian manifold $(M,h)$. Following \cite{Lott-Zhang}, \eqref{e-MP-1} will be called the {\it potential flow} of the following normalized Chern-Ricci flow:
\begin{equation}\label{e-NKRF}
\left\{
\begin{array}{ll}
{\displaystyle \frac{\partial}{\partial t}\omega(t)} &= -\text{\rm Ric}(\omega(t))-\omega(t); \\
\omega(0)&= \omega_0.
\end{array}
\right.
\end{equation}
It is easy to see that the normalized Chern-Ricci flow coincides with the normalized K\"ahler-Ricci flow if $\omega_0$ is K\"ahler. It is well-known that if $\omega_0$ is a Hermitian metric and $\omega(t)$ is Hermitian and a solution to \eqref{e-NKRF} which is smooth up to $t=0$, then
\begin{equation}\label{e-potential}
u(t)=e^{-t}\int_0^te^s\log \frac{(\omega (s))^n}{\theta_0^n}ds
\end{equation}
satisfies \eqref{e-MP-1}. Moreover, $u(t)\to0$ in the $C^\infty$ norm on any compact set as $t\to0$. On the other hand, if $u$ is a solution to \eqref{e-MP-1} so that $\alpha+\sqrt{-1}\partial\bar\partial u>0$ for $t>0$, then
\begin{equation}\label{e-potential-1}
\omega(t)=\alpha+\sqrt{-1}\partial\bar\partial u
\end{equation} is a solution to \eqref{e-NKRF} on $M\times(0,S]$. However, even if we know $u(t)\to0$ as $t\to0$ uniformly on $M$, it is still unclear whether $\omega(t)\to\omega_0$ in general.
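For the reader's convenience, here is a short formal computation behind the last assertion (under the smoothness for $t>0$ already assumed above): differentiating \eqref{e-potential-1} in $t$ and using \eqref{e-MP-1},
\begin{equation*}
\begin{split}
\frac{\partial}{\partial t}\omega(t)=&\frac{\partial \alpha}{\partial t}+\sqrt{-1}\partial\bar\partial \dot u
=-e^{-t}\left(\text{\rm Ric}(\theta_0)+\omega_0\right)+\sqrt{-1}\partial\bar\partial\left(\log\frac{\omega^n(t)}{\theta_0^n}-u\right)\\
=&-e^{-t}\left(\text{\rm Ric}(\theta_0)+\omega_0\right)-\text{\rm Ric}(\omega(t))+\text{\rm Ric}(\theta_0)-\sqrt{-1}\partial\bar\partial u\\
=&-\text{\rm Ric}(\omega(t))-\left(\alpha+\sqrt{-1}\partial\bar\partial u\right)
=-\text{\rm Ric}(\omega(t))-\omega(t),
\end{split}
\end{equation*}
where we used $\sqrt{-1}\partial\bar\partial\log\frac{\omega^n}{\theta_0^n}=-\text{\rm Ric}(\omega)+\text{\rm Ric}(\theta_0)$ and the definition of $\alpha$ in \eqref{e-alpha}.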
The first motivation is to study Ricci flows starting from metrics which are possibly incomplete and with unbounded curvature. In complex dimension one, the existence of Ricci flow starting from an arbitrary metric has been studied in detail by Giesen and Topping \cite{GiesenTopping-1,GiesenTopping-2, GiesenTopping,Topping}. In particular, the following was proved in \cite{GiesenTopping}: {\it If a surface admits a complete metric $H$ with constant negative curvature, then any initial data, which may be incomplete, can be deformed through the normalized Ricci flow for all time, and the flow converges to $H$. Moreover, the solution is instantaneously complete for $t>0$.} In higher dimensions, it was recently proved by Ge-Lin-Shen \cite{Ge-Lin-Shen} that on a complete non-compact K\"ahler manifold $(M,h)$ with $\text{\rm Ric}(h)\leq -h$ and bounded curvature, if $\omega_0$ is a K\"ahler metric, not necessarily complete, but with bounded $C^k$ norm with respect to $h$ for each $k\ge 0$, then \eqref{e-NKRF} has a long time solution which converges to the unique K\"ahler-Einstein metric with negative scalar curvature, obtained by solving \eqref{e-MP-1}. Moreover, the solution is instantaneously complete once it evolves.
Motivated by the above mentioned works, we first study the short time existence of the potential flow and the normalized Chern-Ricci flow. Our first result is the following:
\begin{thm}\label{main-instant-complete}
Let $(M^n,h)$ be a complete non-compact Hermitian manifold with complex dimension $n$. Suppose there is $K>0$ such that the following hold.
\begin{enumerate}
\item There is a {proper} exhaustion function $\rho(x)$ on $M$ such that
$$|\partial\rho|^2_h +|\sqrt{-1}\partial\bar\partial \rho|_h \leq K.$$
\item $\mathrm{BK}_h\geq -K$;
\item The torsion of $h$, $T_h=\partial \omega_h$, satisfies
$$|T_h|^2_h +|\nabla^h_{\bar\partial} T_h |\leq K.$$
\end{enumerate}
Let $\omega_0$ be a nonnegative real (1,1) form with corresponding Hermitian form $g_0$ on $M$ (possibly incomplete {or degenerate}) such that
\begin{enumerate}
\item[(a)] $g_0\le h$ and
$$|T_{g_0}|_h^2+|\nabla^h_{\bar\partial} T_{g_0}|_h+ |\nabla^{h}g_0|_h\leq K.$$
\item[(b)] There exist $f\in C^\infty(M)\cap L^\infty(M)$, $\beta>0$ and $s>0$ so that $$-\text{\rm Ric}(\theta_0)+e^{-s}(\omega_0+\text{\rm Ric}(\theta_0))+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0.$$
\end{enumerate}
Then \eqref{e-MP-1} has a solution on $M\times(0, s)$ so that $u(t)\to 0$ as $t\to0$ uniformly on $M$. Moreover, for any $0<s_0<s_1<s$, $\omega(t)=\alpha+\sqrt{-1}\partial\bar\partial u$ is the K\"ahler form of a complete Hermitian metric which is uniformly equivalent to $h$ on $M\times[s_0, s_1]$. In particular, $g(t)$ is complete for $t>0$.
\end{thm}
Here $\mathrm{BK}_h\geq -K$ means that for any unitary frame $\{e_k\}$ of $h$, we have $R(h)_{i\bar ij\bar j}\geq -K$ for all $i,j$.
\begin{rem}
It is well-known that when $(M,h)$ is K\"ahler with bounded curvature, condition (1) is satisfied, see \cite{Shi1989,Tam2010}. See also \cite{NiTam2013,Huang2018} for related results under various assumptions.
\end{rem}
Condition (b) was used in \cite{Lott-Zhang,TosattiWeinkove2015, Lee-Tam} with $\omega_0$ replaced by $\theta_0$, and is motivated, as pointed out in \cite{Lott-Zhang}, as follows. If we consider cohomology classes instead, in case $\omega(t)$ is closed, then \eqref{e-NKRF} reads
$$
\partial_t[\omega(t)]=-[\text{\rm Ric}(\omega(t))]-[\omega(t)]
$$
and so
$$
[\omega(t)]=-(1-e^{-t})[\text{\rm Ric}(\theta_0)]+e^{-t}[\omega_0].
$$
Condition (b) is used to guarantee that $\omega(t)>0$. In our case $\omega_0,\theta_0, \omega(t)$ may not be closed and $\omega_0$ may degenerate. These may cause some difficulties. Indeed, the result is analogous to running the K\"ahler-Ricci flow from rough initial data. When $M$ is compact, the potential flow from rough initial data has already been studied by several authors, see for example \cite{BG2013,SongTian2017,To2017} and the references therein.
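As a side remark added for convenience: since the Chern-Ricci forms of any two Hermitian metrics differ by $\sqrt{-1}\partial\bar\partial$ of a globally defined function, $[\text{\rm Ric}(\omega(t))]=[\text{\rm Ric}(\theta_0)]$, and one checks directly that the displayed class solves the flow at the level of classes:
\begin{equation*}
\frac{d}{dt}\Big(-(1-e^{-t})[\text{\rm Ric}(\theta_0)]+e^{-t}[\omega_0]\Big)
=-e^{-t}\big([\text{\rm Ric}(\theta_0)]+[\omega_0]\big)
=-[\text{\rm Ric}(\theta_0)]-[\omega(t)],
\end{equation*}
with $[\omega(0)]=[\omega_0]$. In this language, condition (b) is exactly the requirement that the representative $-\text{\rm Ric}(\theta_0)+e^{-s}(\omega_0+\text{\rm Ric}(\theta_0))$ of the class at time $s$ becomes bounded below by $\beta\theta_0$ after adding $\sqrt{-1}\partial\bar\partial f$.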
On the other hand, a solution of \eqref{e-MP-1} gives rise to a solution of \eqref{e-NKRF} for $t>0$. It is rather delicate to see whether the corresponding solution of \eqref{e-NKRF} attains the initial Hermitian form $\omega_0$. In this respect, we will prove the following:
\begin{thm}\label{t-initial-Kahler-1}
Assume the same notation and assumptions as in Theorem \ref{main-instant-complete}, and let $\omega(t)$ be the solution of \eqref{e-NKRF} obtained in that theorem. Suppose in addition that $h$ is K\"ahler and $d\omega_0=0$, and let $U=\{\omega_0>0\}$. Then $\omega(t)\rightarrow \omega_0$ in $C^\infty(U)$ as $t\rightarrow 0$, {uniformly on compact subsets of $U$}.
\end{thm}
We should remark that if in addition $h$ has bounded curvature, then the theorem follows easily from pseudo-locality. The theorem can be applied to the cases studied in \cite{Ge-Lin-Shen}, and to the case that $-\text{\rm Ric}(h)\ge \beta\theta_0$ outside a compact set $V$ and $\omega_0>0$ on $V$, with $\omega_0$ and its first covariant derivative bounded. In particular, when $\Omega$ is a bounded strictly pseudoconvex domain in another manifold $M$ with defining function $\varphi$, then $\Omega$ with the metric $h_{i\bar j}=-\partial_i \partial_{\bar j}\log(-\varphi)$ satisfies the above, see \cite[(1.22)]{ChengYau1982}.
Another motivation here is to study the existence of K\"ahler-Einstein metrics with negative scalar curvature on complex manifolds using geometric flows. In \cite{Aubin, Yau1978-2}, Aubin and Yau proved that if $M$ is a compact K\"ahler manifold with negative first Chern class $c_1(M)<0$, then it admits a unique K\"ahler-Einstein metric with negative scalar curvature, by studying the elliptic complex Monge-Amp\`ere equation. Later, Cao \cite{Cao} reproved the above result using the K\"ahler-Ricci flow, by showing that one can deform a suitable initial K\"ahler metric through the normalized K\"ahler-Ricci flow to the K\"ahler-Einstein metric. Recently, Tosatti and Weinkove \cite{TosattiWeinkove2015} proved that under the same condition $c_1(M)<0$ on a compact complex manifold, the normalized Chern-Ricci flow \eqref{e-NKRF} with an arbitrary Hermitian initial metric also has a long time solution and converges to the K\"ahler-Einstein metric with negative scalar curvature.
In \cite{ChengYau1982}, Cheng and Yau proved that if $M$ is a complete non-compact K\"ahler manifold with Ricci curvature bounded above by a negative constant, injectivity radius bounded below by a positive constant, and curvature tensor whose covariant derivatives are all bounded, then $M$ admits a unique complete K\"ahler-Einstein metric with negative scalar curvature. In \cite{Chau04}, Chau used the K\"ahler-Ricci flow to prove that if $(M, g)$ is a complete non-compact K\"ahler manifold with bounded curvature and $\text{\rm Ric}(g)+g=\sqrt{-1}\partial\bar\partial f $ for some smooth bounded function $f$, then it also admits a complete K\"ahler-Einstein metric with negative scalar curvature. Later, Lott and Zhang \cite{Lott-Zhang} generalized Chau's result by assuming $$-\text{\rm Ric}(g)+\sqrt{-1}\partial\bar\partial f\ge\beta g$$ for some smooth function $f$ with bounded $k$th covariant derivatives for each $k\geq0$ and some positive constant $\beta$. In this work, we will generalize the results in \cite{Lott-Zhang,TosattiWeinkove2015} to complete non-compact Hermitian manifolds with possibly unbounded curvature.
For the long time existence and convergence, we will prove the following:
\begin{thm}\label{main-longtime}
Under the assumptions of Theorem \ref{main-instant-complete}, suppose in addition that $$-\text{\rm Ric}(h)+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0$$ for some $f\in C^\infty(M)\cap L^\infty(M)$ and $\beta>0$. Then the solution constructed in Theorem \ref{main-instant-complete} is a long time solution and converges to a unique complete K\"ahler-Einstein metric with negative scalar curvature on $M$.
\end{thm}
As a consequence, we see that if $h$ satisfies the conditions in the theorem, then $M$ supports a complete K\"ahler-Einstein metric with negative scalar curvature, generalizing the results in \cite{Lott-Zhang,TosattiWeinkove2015}.
The paper is organized as follows: In section 2, we will derive a priori estimates along the potential flow and apply them in section 3 to prove Theorem \ref{main-instant-complete}. Furthermore, we will study the short time behaviour of the constructed solution. In section 4, we will prove Theorem \ref{main-longtime} and discuss the long time behaviour of the general K\"ahler-Ricci flow when the initial data satisfies some extra condition. In Appendix A, we will collect some information on the relation between the normalized Chern-Ricci flow and the unnormalized one, {together with some useful differential inequalities. In Appendix B, we will state a maximum principle which will be used in this work.}
\section{a priori estimates for the potential flow}\label{s-aprior}
We will study the short time existence of the potential flow \eqref{e-MP-1} with $\omega_0$ only assumed to be nonnegative. We need some a priori estimates for the flow. In this section, we always assume the following:
\begin{enumerate}
\item There is a {proper} exhaustion function $\rho(x)$ on $M$ such that
$$|\partial\rho|^2_h +|\sqrt{-1}\partial\bar\partial \rho|_h \leq K.$$
\item $\mathrm{BK}_h\geq -K$.
\item The torsion of $h$, $T_h=\partial \omega_h$, satisfies
$$|T_h|^2_h +|\nabla^h_{\bar\partial} T_h |\leq K.$$
\end{enumerate}
Here $K$ is some positive constant.
On the other hand, let $\omega_0$ be a real (1,1) form with corresponding Hermitian form $g_0$. We always assume the following:
\begin{enumerate}
\item[(a)] $g_0\le h$ and
$$|T_{g_0}|_h^2+|\nabla^h_{\bar\partial} T_{g_0}|_h+ |\nabla^{h}g_0|_h\leq K.$$
\item[(b)] There exist $f\in C^\infty(M)\cap L^\infty(M)$, $\beta>0$ and $s>0$ so that $$-\text{\rm Ric}(\theta_0)+e^{-s}(\omega_0+\text{\rm Ric}(\theta_0))+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0.$$
\end{enumerate}
Note that if $g_0\le Ch$, then we may replace $h$ by $Ch$; condition (b) is then still satisfied with a possibly smaller $\beta$.
Since $g_0$ can be degenerate, we perturb $g_0$ in the following way. Let $1\ge \eta\ge 0$ be a smooth function on $\mathbb{R}$ such that $\eta(s)=1$ for $s\le 1$ and $\eta(s)=0$ for $s\ge 2$, with $|\eta'|+|\eta''|\le c_1$, say. For $\epsilon>0$ and $\rho_0\gg1$, let $\eta_{0}(x)=\eta(\rho(x)/\rho_0)$. Consider the metric:
\begin{equation}
\gamma_0=\gamma_0(\rho_0,\epsilon)=\eta_0\omega_0+(1-\eta_0)\theta_0+\epsilon\theta_0.
\end{equation}
Then
\begin{itemize}
\item $\gamma_0$ is the K\"ahler form of a complete Hermitian metric, which is uniformly equivalent to $h$;
\item $\mathrm{BK}(\gamma_0 )\ge -C$ for some $C$ which may depend on $\rho_0, \epsilon$;
\item The torsion satisfies $|T_{\gamma_0} |_{\gamma_0}+|\nabla^{\gamma_0}_{\bar \partial} T_{\gamma_0}|_{\gamma_0}\le C$ for some constant $C$ which may depend on $\rho_0, \epsilon$.
\end{itemize}
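For instance, the first bullet can be checked directly (a small verification we include for convenience): since $0\le\eta_0\le1$ and $0\le\omega_0\le\theta_0$,
\begin{equation*}
\epsilon\,\theta_0\;\le\;(1-\eta_0)\theta_0+\epsilon\theta_0\;\le\;\gamma_0\;\le\;\eta_0\theta_0+(1-\eta_0)\theta_0+\epsilon\theta_0\;=\;(1+\epsilon)\,\theta_0,
\end{equation*}
so the Hermitian form of $\gamma_0$ is complete and uniformly equivalent to $h$, with constants depending on $\epsilon$.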
We will obtain a short time existence result for the potential flow starting with $\gamma_0$:
\begin{lma}\label{l-perturbed-1} \eqref{e-MP-1} has a solution $u(t)$ on $M\times[0, s)$ with $\alpha=-\text{\rm Ric}(\theta_0)+e^{-t}\left(\text{\rm Ric}(\theta_0)+\gamma_0\right)$ and $\omega(t)=\alpha+\sqrt{-1}\partial\bar\partial u$, such that $\omega(t)$ satisfies \eqref{e-NKRF} with initial data $\gamma_0$, where $\omega(t)$ is the K\"ahler form of $g(t)$. Moreover, $g(t)$ is uniformly equivalent to $h$ on $M\times[0, s_1]$ for all $s_1<s$.
\end{lma}
\begin{proof} By the proof of \cite[Theorem 4.1]{Lee-Tam}, it is sufficient to prove that for any $0<s_1<s$,
$$
-\text{\rm Ric}(\gamma_0)+e^{-s_1}(\gamma_0+\text{\rm Ric}(\gamma_0))+\sqrt{-1}\partial\bar\partial f_1\ge \beta_1\gamma_0
$$
for some smooth bounded function $f_1$ and some constant $\beta_1>0$. To simplify the notation, if $\eta, \zeta$ are real (1,1) forms, we write $\eta \succeq \zeta$ if $\eta+\sqrt{-1}\partial\bar\partial \phi\ge \zeta$ for some smooth and bounded function $\phi$. We compute:
\begin{equation*}\begin{split}
-\text{\rm Ric}(\gamma_0)+e^{-s_1}(\gamma_0+\text{\rm Ric}(\gamma_0))
=&-(1-e^{-s_1})\text{\rm Ric}(\gamma_0)+e^{-s_1}\gamma_0\\
\succeq&-(1-e^{-s_1})\text{\rm Ric}(\theta_0)+e^{-s_1}\gamma_0\\
\succeq&\frac{1-e^{-s_1}}{1-e^{-s}}(\beta \theta_0-e^{-s}\omega_0)+e^{-s_1}\gamma_0 \\
\ge&\frac{1-e^{-s_1}}{1-e^{-s}} \beta \theta_0 \\
\ge& \beta_1\gamma_0\end{split}\end{equation*}
for some $\beta_1>0$, because $0<s_1<s$ and $\gamma_0\ge \omega_0$. Here we have used {condition (b) above}, the fact that $\gamma_0^n =\theta_0^ne^H$ for some smooth bounded function $H$, and the definition of the Chern-Ricci curvature.
\end{proof}
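For the reader's convenience, the reason why $-\text{\rm Ric}(\gamma_0)\succeq-\text{\rm Ric}(\theta_0)$ in the second line above: writing $\gamma_0^n=\theta_0^n e^{H}$ with $H$ smooth and bounded,
\begin{equation*}
\text{\rm Ric}(\gamma_0)=-\sqrt{-1}\partial\bar\partial\log\det\big((\gamma_0)_{i\bar j}\big)
=-\sqrt{-1}\partial\bar\partial\big(\log\det(h_{i\bar j})+H\big)
=\text{\rm Ric}(\theta_0)-\sqrt{-1}\partial\bar\partial H,
\end{equation*}
so that $-\text{\rm Ric}(\gamma_0)+\sqrt{-1}\partial\bar\partial(-H)=-\text{\rm Ric}(\theta_0)$, with $-H$ bounded.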
Let $\omega(t)$ be the solution in the lemma and let $u(t)$ be the potential as in \eqref{e-potential}.
Since we want to prove in the next section that \eqref{e-MP-1} has a solution $u(t)$ on $M\times(0, s)$ with $\alpha=-\text{\rm Ric}(\theta_0)+e^{-t}\left(\text{\rm Ric}(\theta_0)+\omega_0\right)$, we need to obtain some uniform estimates of $u, \dot u$ and $\omega(t)$ which are independent of $\rho_0$ and $\epsilon$. The estimates are more delicate because the initial data $\omega_0$ may be degenerate. For later applications, we need to obtain estimates on $(0,1]$ and on $[1,s)$ if $s>1$. Note that for fixed $\rho_0, \epsilon$, $u(t)$ is smooth up to $t=0$. Moreover, $u$ and $\dot u:=\frac{\partial}{\partial t}u$ are uniformly bounded on $M\times[0,s_1]$ for all $0<s_1<s$.
\subsection{a priori estimates for $u$ and $\dot u$}\label{ss-uudot}
We first give upper bound estimates for $u$ and $\dot u$.
\begin{lma}\label{l-uudot-upper-1} There is a constant $C$ depending only on $n$ and $K$ such that
$$
u\le C\min\{t,1\}, \ \ \dot u\le \frac{Ct}{e^t-1}
$$
on $M\times[0, s)$, provided $0<\epsilon<1$.
\end{lma}
\begin{proof}The proofs here follow almost verbatim the K\"ahler case \cite{TianZhang2006}, but we include brief arguments for the reader's convenience. For notational convenience, we use $\Delta=g^{i\bar j} \partial_i \partial_{\bar j}$ to denote the Chern Laplacian associated to $g(t)$.
Since $-\text{\rm Ric}(\theta_0)=\omega(t)-e^{-t}(\text{\rm Ric}(\theta_0)+\gamma_0)-\sqrt{-1}\partial\bar\partial u$
by \eqref{e-potential-1}, we have
\begin{equation}\label{e-udot-1}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) (e^t\dot u)=&e^t\dot u-e^t \operatorname{tr}_{\omega}\text{\rm Ric}(\theta_0)-e^t\left(\frac{\partial}{\partial t}-\Delta\right) u-n e^t\\
=&e^t\operatorname{tr}_\omega \left(- \text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial u\right)-ne^t\\
=&e^t\operatorname{tr}_\omega\left(\omega-e^{-t}(\text{\rm Ric}(\theta_0)+\gamma_0)\right)-ne^t\\
=& -\operatorname{tr}_\omega (\text{\rm Ric}(\theta_0)+\gamma_0)\\
=&\left(\frac{\partial}{\partial t}-\Delta\right) (\dot u+u)+n -\operatorname{tr}_\omega(\gamma_0).
\end{split}
\end{equation}
Hence
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right)(\dot u+u+nt-e^t\dot u)=\operatorname{tr}_\omega\gamma_0\ge0.
\end{equation*}
At $t=0$, $\dot u+u+nt-e^t\dot u=0$. By the maximum principle (Lemma \ref{max}), we have
\begin{equation}\label{e-uudot-upper-1}
(e^t-1)\dot u\le nt+u.
\end{equation}
Next consider
\begin{equation*}
F=u-At-\kappa\rho
\end{equation*}
on $M\times[0, s_1]$ for any fixed $s_1<s$. Here $\kappa>0$ is a constant.
Suppose $\sup\limits_{M\times[0, s_1]}F>0$. Then there exists $(x_0, t_0)\in M\times(0, s_1]$ such that $F\leq F(x_0, t_0)$ on $M\times[0, s_1]$, and at this point,
\begin{equation*}\begin{split}
0\leq& \dot u -A=\log \left(\frac{\omega^n(t)}{\theta_0^n}\right)-u-A. \end{split} \end{equation*}
Also, $\sqrt{-1}\partial\bar\partial u\le \kappa\sqrt{-1}\partial\bar\partial \rho\le \kappa K\theta_0$. Hence at $(x_0,t_0)$,
\begin{equation*}
\begin{split}
\omega(t)=&-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+\gamma_0)+\sqrt{-1}\partial\bar\partial u\\
\le&(-1+e^{-t})\text{\rm Ric}(\theta_0)+e^{-t}\gamma_0+\kappa K\theta_0\\
\le &(L+2+\kappa K)\theta_0,
\end{split}
\end{equation*}
where $L=L(n, K)$ is such that $\text{\rm Ric}(\theta_0)\ge -L\theta_0$. Hence at $(x_0,t_0)$ we have
\begin{equation*}
\begin{split}
u\le & n\log(L+2+\kappa K)-A\\
\le &0
\end{split}
\end{equation*}
if $A=n\log(L+2)+1$ and $\kappa>0$ is small enough. Hence $F(x_0,t_0)<0$. This is a contradiction. Hence $F\le 0$ on $M\times[0, s_1]$ provided $A=A(n,K)=n\log(L+2)+1$, and we have
\begin{equation}\label{e-uudot-upper-2}
u\le At
\end{equation}
by letting $\kappa\to0$. Combining this with \eqref{e-uudot-upper-1}, we conclude that
$$
\dot u\le \frac{(A+n)t}{e^t-1}.
$$
Combining this with \eqref{e-uudot-upper-2}, we conclude that $u\le C$ for some constant $C$ depending only on $n, K$. Since $s_1$ is arbitrary, this completes the proof of Lemma \ref{l-uudot-upper-1}.
\end{proof}
Next, we estimate the lower bound of $u$ and $\dot u$.
\begin{lma}\label{l-all-u}\begin{enumerate}
\item[(i)] $u(x,t)\geq - \frac{C}{1-e^{-s}} t+nt\log(1-e^{-t})$ on $M\times[0, s)$ for some constant $C>0$ depending only on $ n, \beta, K, ||f||_\infty$.
\item [(ii)] For $0<s_1\leq 1$ and $s_1<s$,
\begin{equation*}
\dot u+u\ge\frac1{1-e^{s_1-s}}\left(n\log t-\frac{C}{1-e^{-s}}\right)
\end{equation*}
on $M\times(0, s_1]$, for some constant $C>0$ depending only on $ n, \beta, K, ||f||_\infty$.
\item [(iii)] For $0<s_1\leq 1$ and $s_1<s$,
$$\dot u+u\geq -C$$
on $M\times[0, s_1]$ for some constant $C>0$ depending only on
$ n, \beta$, $K, ||f||_\infty, s_1, s$ and $\epsilon$.
\item [(iv)] Suppose $s>1$. Then for $1<s_1<s$,
$$\dot u+u\ge -\frac{C(1+s_1e^{s_1-s})}{1-e^{s_1-s}}$$ on $M\times[1,s_1]$
for some constant $C(n, \beta, ||f||_\infty, K)>0$.
\item[(v)] For $0<s_1<s$,
$$u\ge -\frac{C(1+s_1e^{s_1-s})}{1-e^{s_1-s}}$$ on $M\times[0,s_1]$ for some constant $C(n, \beta, ||f||_\infty, K)>0$.
\end{enumerate}
\end{lma}
\begin{proof} In the following, $C_i$ will denote positive constants depending only on $n, \beta, ||f||_\infty, K$, and $D_i$ will denote positive constants which may also depend on $\rho_0, \epsilon$ but not on $\kappa$.
To prove (i): Consider {\beta}egin{equation}e
F=u(x,t)-\frac{1-e^{-t}}{1-e^{-s}}f(x)+A\cdot t-nt\log(1-e^{-t})+\kappa\rho(x) .\epsilonnd{equation}e
Suppose $\inf\limits_{M\times[0, s_1]}F<0$.
Then there exists $(x_0, t_0)\in M\times(0, s_1]$ such that $F\geq F(x_0, t_0)$ on $M\times[0, s_1]$. At this point, we have
{\beta}egin{equation}e{\beta}egin{equation}gin{split}
0\geq &\partialpt F
\\=&\dot u+A-\frac{e^{-t}}{1-e^{-s}}f(x)-n\log(1-e^{-t})-\frac{nt}{e^t-1}.\\
=&\log\frac{(-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+{\gamma}mma_0)+\sqrt{-1}\partial\bar\partial u)^n}{\theta_0^n}-u +A\\
&-n\log(1-e^{-t})-\frac{nt}{e^t-1}-\frac{e^{-t}}{1-e^{-s}}f\\
\geq& \log\frac{(-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+{\gamma}mma_0)+ \frac{1-e^{-t}}{1-e^{-s}}\sqrt{-1}\partial\bar\partial f-\kappa\sqrt{-1}\partial\bar\partial\rho)^n}{\theta_0^n}\\
&-C(n, K)-\frac{e^{-t}}{1-e^{-s}}f+A-n\log(1-e^{-t})-\frac{nt}{e^t-1},\\
\epsilonnd{split} \epsilonnd{equation}e
where we have used the fact that $u\leq C(n, K)$, and $\sqrt{-1}\partial\bar\partial u\ge\frac{1-e^{-t}}{1-e^{-s}}\sqrt{-1}\partial\bar\partial f-\kappa\sqrt{-1}\partial\bar\partial \rho$. Note that
$$
-\text{\rm Ric}(\theta_0)\ge\frac1{1-e^{-s}}\left({\beta}egin{equation}ta\theta_0-e^{-s}\omega_0-\sqrt{-1}\partial{\beta}ar\partial f\right),
$$
hence
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
&-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+{\gamma}mma_0)+\frac{1-e^{-t}}{1-e^{-s}}\sqrt{-1}\partial\bar\partial f-\kappa\sqrt{-1}\partial\bar\partial\rho\\
\ge&e^{-t}{\gamma}mma_0+\frac{1-e^{-t}}{1-e^{-s}}\left({\beta}egin{equation}ta\theta_0-e^{-s}\omega_0 \right) -\kappa K\theta_0 \\
\ge& \frac{1}{2}\frac{1-e^{-t}}{1-e^{-s}} {\beta}egin{equation}ta\theta_0
\epsilonnd{split}
\epsilonnd{equation}e
if $\kappa $ is small enough. Here we have used the fact that $0<t<s$ and ${\gamma}mma_0\ge \omega_0$. Hence at $(x_0,t_0)$,
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
0\geq& n\log(1-e^{-t}) -C_1 \\
&-\frac{e^{-t}}{1-e^{-s}}f+A-n\log(1-e^{-t})-\frac{nt}{e^t-1}\\
\geq& -\frac{1}{1-e^{-s}}||f||_\infty+A-C_2 \\
>&0
\epsilonnd{split}
\epsilonnd{equation}e
if $A=\frac{1}{1-e^{-s}}||f||_\infty+C_2+1$. Hence for such $A$, $F\ge 0$ and for all $\kappa>0$ small enough, we conclude that
$$
u(x,t)\ge -At+nt\log(1-e^{-t}).
$$
To prove (ii), we have
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right)(\dot u+u)=-\operatorname{tr}_\omega(\text{\rm Ric}(\theta_0))-n.
\end{equation*}
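For convenience, here is the short computation behind this identity (it is of the same kind as \eqref{e-udot-1}): since $\dot u+u=\log\frac{\omega^n(t)}{\theta_0^n}$,
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right)(\dot u+u)
=&\operatorname{tr}_\omega\left(\frac{\partial}{\partial t}\omega\right)-\operatorname{tr}_\omega\left(-\text{\rm Ric}(\omega)+\text{\rm Ric}(\theta_0)\right)\\
=&\operatorname{tr}_\omega\left(-\text{\rm Ric}(\omega)-\omega\right)+\operatorname{tr}_\omega\text{\rm Ric}(\omega)-\operatorname{tr}_\omega\text{\rm Ric}(\theta_0)
=-\operatorname{tr}_\omega(\text{\rm Ric}(\theta_0))-n,
\end{split}
\end{equation*}
where we used $\Delta\log\frac{\omega^n}{\theta_0^n}=\operatorname{tr}_\omega\left(-\text{\rm Ric}(\omega)+\text{\rm Ric}(\theta_0)\right)$ and $\operatorname{tr}_\omega\omega=n$.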
On the other hand, by \epsilonqref{e-udot-1}, we also have
{\beta}egin{equation}e
\lf(\frac{\p}{\p t}-\Delta\ri) (e^t\dot u)=-\operatorname{tr}_\omega(\text{\rm Ric}(\theta_0)+{\gamma}mma_0).
\epsilonnd{equation}e
Hence
{\beta}egin{equation}\langlebel{e-udot-2}
{\beta}egin{equation}gin{split}
& \lf(\frac{\p}{\p t}-\Delta\ri)\left((1-e^{t-s})\dot u+u\right)\\
=&\operatorname{tr}_\omega(-\text{\rm Ric}(\theta_0)+e^{-s}(\text{\rm Ric}(\theta_0)+{\gamma}mma_0))-n\\
\ge&{\beta}egin{equation}ta\operatorname{tr}_\omega(\theta_0)-\Delta f-n.
\epsilonnd{split}
\epsilonnd{equation}
Let $F=(1-e^{t-s})\dot u+u-f-A\log t+\kappa\rho$, where $A>0$ is a constant to be determined. Since $\log t\to-\infty$ as $t\rightghtarrow 0$, we conclude that for $0<s_1<s$, if $\inf_{M\times[0,s_1]}F\le 0$, then there is $(x_0,t_0)\in M\times(0,s_1]$ so that
$F(x_0,t_0)=\inf_{M\times[0,s_1]}F$. By \epsilonqref{e-udot-2}, at $(x_0,t_0)$ we have
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
0\ge&\lf(\frac{\p}{\p t}-\Delta\ri) F\\
\ge&{\beta}egin{equation}ta\operatorname{tr}_\omega(\theta_0)-n-\frac At-\kappa D_1\\
\ge& n{\beta}egin{equation}ta\epsilonxp(-\frac1n(\dot u+u))-n-\frac At-\kappa D_1
\epsilonnd{split}
\epsilonnd{equation}e
where $D_1>0$ is a constant independent of $\kappa$. Hence at this point,
$$
\dot u+u\ge -n\log\left(\frac1{n{\beta}egin{equation}ta}(n+\frac At+\kappa D_1)\right).
$$
Hence at $(x_0,t_0)$, noting that $0<t_0\le s_1<s$ and $s_1\le 1$,
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
F\geq& (1-e^{t-s})(\dot u+u)+e^{t-s}u-f-A\log t\\
\ge&-(1-e^{t-s})n\log\left(\frac1{n{\beta}egin{equation}ta}(n+\frac At+\kappa D_1)\right)-\sup_M f-A\log t\\
&- \frac{C_3}{1-e^{-s}}+nt\log(1-e^{-t})\\
\ge&[(1-e^{t-s})n-A]\log t-(1-e^{t-s})n\log\left(\frac1{n{\beta}egin{equation}ta}(nt+A+\kappa t D_1)\right)\\
&-||f||_\infty-\frac{C_4}{1-e^{-s}} \\
\ge &- n\log\left(\frac1{n{\beta}egin{equation}ta}(2n+\kappa D_1)\right)-||f||_\infty-\frac{C_4}{1-e^{-s}}\epsilonnd{split}
\epsilonnd{equation}e
if $A=n$. Here we may assume that $\beta>0$ is small enough so that $2/\beta>1$. Hence we have
{\beta}egin{equation}gin{align*}
F\ge - n\log\left(\frac1{n{\beta}egin{equation}ta}(2n+\kappa D_1)\right)-||f||_\infty-\frac{C_4}{1-e^{-s}}.
\epsilonnd{align*}
on $M\times(0,s_1]$. Let $\kappa\to0$, we conclude that
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
(1-e^{t-s})\left(\dot u+u\right)=& (1-e^{t-s})\dot u+u-e^{t-s}u \\
\ge&n\log t-\frac{C_5}{1-e^{-s}},
\epsilonnd{split}
\epsilonnd{equation}e
where we have used the upper bound of $u$ in Lemma \ref{l-uudot-upper-1}. From this (ii) follows because $t\le s_1$.
The proof of (iii) is similar to the proof of (ii), taking $A=0$. Note that in this case the infimum of $F$ may be attained at $t=0$, and the resulting bound then also depends on $\epsilon$.
To prove (iv), let
$F$ as in the proof of (ii) with $A=0$. Suppose $\inf_{M\times[\frac 12,s_1]}F=\inf_{M\times\{\frac 12\} }F$, then by (i) and (ii), we have
$$
F\ge -C_6.
$$
Suppose $\inf_{M\times[\frac 12,s_1]}F<\inf_{M\times\{\frac 12\}}F$, then we can find $(x_0,t_0)\in M\times(\frac 12,s_1]$ such that $F(x_0,t_0)$ attains the infimum. As in the proof of (ii), at this point,
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\dot u+u\ge-n\log\left(\frac1{n{\beta}egin{equation}ta}(n+\kappa D_2)\right)
\epsilonnd{split}
\epsilonnd{equation}e
where $D_2>0$ is a constant independent of $\kappa$. Hence as in the proof of (ii),
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
F(x_0,t_0)\ge &(1-e^{t_0-s})(\dot u+u)+e^{t_0-s}u-f\\
\geq&-n(1-e^{t_0-s}) \log\left(\frac1{n{\beta}egin{equation}ta}(n+\kappa D_2)\right)- \frac{C_7s_1e^{s_1-s}}{1-e^{-s}} -C_8\\
\ge&-n \log\left(\frac1{n{\beta}egin{equation}ta}(n+\kappa D_2)\right)- \frac{C_7s_1e^{s_1-s}}{1-e^{-s}} -C_8
\epsilonnd{split}
\epsilonnd{equation}e
because $t_0\le s_1$, where we have used (i) and we may assume that $\beta<1$.
Let $\kappa\to0$, we conclude that on $M\times[\frac 12, s_1]$,
{\beta}egin{equation}e{\beta}egin{equation}gin{split}
&(1-e^{t-s})(\dot u+u)+e^{t-s}u-f\ge n \log{\beta}egin{equation}ta- \frac{C_7s_1e^{s_1-s}}{1-e^{-s}} -C_8.\epsilonnd{split} \epsilonnd{equation}e
By Lemma \ref{l-uudot-upper-1}, we have
\begin{equation*}
\dot u+u\ge -\frac{C_9(1+s_1e^{s_1-s})}{1-e^{s_1-s}}
\end{equation*} on $M\times[\frac 12,s_1]$, because $s>1$.
Finally, (v) follows from (i), Lemma \ref{l-uudot-upper-1} and (iv) by integration.
\epsilonnd{proof}
\subsection{a priori estimates for $\omega(t)$}\label{ss-trace}
Next we will estimate the uniform upper bound of $g(t)$. Before we do this, we first give uniform estimates for the evolution of the key quantity $\log \operatorname{tr}_hg(t)$.
Let $\hat T$ and $T_0$ be the torsions of $h$ and $\gamma_0$ respectively. Note that $\gamma_0$ depends on $\rho_0, \epsilon$. Let $\hat\nabla$ be the Chern connection of $h$. Recall that $T_{ij\bar l}=\partial_ig_{j\bar l}-\partial_j g_{i\bar l}$, etc.
Let $\widetilde g$ be such that $g(t)=e^{-t}\widetilde g(e^t-1)$, and let $s=e^t-1$. Then
\begin{equation*}
\begin{split}
-\text{\rm Ric} (\widetilde g(s))-g(t)=&-\text{\rm Ric} (g(t))-g(t)\\
=&\frac{\partial}{\partial t}g(t)\\
=&-e^{-t}\widetilde g(e^t-1)+\frac{\partial}{\partial s}\widetilde g(s)\\
=&-g(t)+\frac{\partial }{\partial s}\widetilde g(s).
\end{split}
\end{equation*}
So
\begin{equation*}
\frac{\partial }{\partial s}\widetilde g(s)=-\text{\rm Ric}(\widetilde g(s))
\end{equation*}
and $\widetilde g(0)=\gamma_0$.
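We note for convenience that the first equality in the computation above uses the invariance of the Chern-Ricci form under scaling by a factor that is constant in space: since $g(t)=e^{-t}\widetilde g(s)$,
\begin{equation*}
\text{\rm Ric}(g(t))=-\sqrt{-1}\partial\bar\partial\log\det\big(e^{-t}\widetilde g_{i\bar j}\big)
=-\sqrt{-1}\partial\bar\partial\big(\log\det \widetilde g_{i\bar j}-nt\big)
=\text{\rm Ric}(\widetilde g(s)).
\end{equation*}
In particular, $\widetilde g(s)$ solves the unnormalized Chern-Ricci flow $\frac{\partial}{\partial s}\widetilde g=-\text{\rm Ric}(\widetilde g)$ with $\widetilde g(0)=\gamma_0$, as stated.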
Let $\Upsilon(t)=\operatorname{tr}_{h}g(t)$ and $\widetilde\Upsilon(s)=\operatorname{tr}_{h}\widetilde g(s)$.
By Lemma \ref{l-a-1}, we have
{\beta}egin{equation}e
\lf(\frac{\p}{\p s}-\wt\Delta\ri) \log \widetilde\Upsilon=\mathrm{I+II+III}
\epsilonnd{equation}e
where
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{I}\le &2\widetilde\Upsilon^{-2}\text{{\beta}f Re}\left( h^{i{\beta}ar l} \widetilde g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}\hat \nablaabla_{{\beta}ar q}\widetilde\Upsilon\right).
\epsilonnd{split}
\epsilonnd{equation}e
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{II}=&\widetilde\Upsilon^{-1} \widetilde g^{i\bar{j}} h^{k{\beta}ar l}\widetilde g_{k{\beta}ar q} \left(\hat \nablaabla_i \overline{(\hat T)_{jl}^p}- h^{p{\beta}ar q}\hat R_{i{\beta}ar lp{\beta}ar j}\right)\\
\epsilonnd{split}
\epsilonnd{equation}e
and
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{III}=&-\widetilde\Upsilon^{-1} \widetilde g^{{i\bar{j}}} h^{k{\beta}ar l}\left(\hat \nablaabla_i\left(\overline{( T_0)_{jl{\beta}ar k} } \right) +\hat \nablaabla_{{\beta}ar l}\left( ( T_0)_{ik{\beta}ar j} \right)-\overline{ (\hat T)_{jl}^q}( T_0)_{ik{\beta}ar q} \right)
\epsilonnd{split}
\epsilonnd{equation}e
Now
$$
\widetilde \Upsilon(s)=e^t\Upsilon(t).
$$
So
\begin{equation*}
\left(\frac{\partial}{\partial s}-\widetilde\Delta\right) \log\widetilde\Upsilon(s)=e^{-t}\left(\left(\frac{\partial}{\partial t}-\Delta\right) \log\Upsilon+1\right)
\end{equation*}
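For convenience, the change of variables behind the last identity: since $s=e^t-1$ and $\widetilde g=e^tg$, one has $\frac{\partial}{\partial s}=e^{-t}\frac{\partial}{\partial t}$ and $\widetilde\Delta=\widetilde g^{i\bar j}\partial_i\partial_{\bar j}=e^{-t}\Delta$, while $\log\widetilde\Upsilon=t+\log\Upsilon$. Hence
\begin{equation*}
\left(\frac{\partial}{\partial s}-\widetilde\Delta\right)\log\widetilde\Upsilon
=e^{-t}\left(\frac{\partial}{\partial t}-\Delta\right)\big(t+\log\Upsilon\big)
=e^{-t}\left(\left(\frac{\partial}{\partial t}-\Delta\right)\log\Upsilon+1\right),
\end{equation*}
since $\Delta t=0$. The same substitution converts the terms $\mathrm{I}, \mathrm{II}, \mathrm{III}$ into the expressions below.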
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{I}\le &2e^{- 2t}\Upsilon^{-2}\text{{\beta}f Re}\left( h^{i{\beta}ar l} g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}\hat \nablaabla_{{\beta}ar q} \Upsilon\right).
\epsilonnd{split}
\epsilonnd{equation}e
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{II}=&e^{-t}\Upsilon^{-1} g^{i\bar{j}} h^{k{\beta}ar l} g_{k{\beta}ar q} \left(\hat \nablaabla_i \overline{(\hat T)_{jl}^q}- h^{p{\beta}ar q}\hat R_{i{\beta}ar lp{\beta}ar j}\right)\\
\epsilonnd{split}
\epsilonnd{equation}e
and
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{III}=&-e^{-2t}\Upsilon^{-1} g^{{i\bar{j}}} h^{k{\beta}ar l}\left(\hat \nablaabla_i\left(\overline{( T_0)_{jl{\beta}ar k} } \right) +\hat \nablaabla_{{\beta}ar l}\left( ( T_0)_{ik{\beta}ar j} \right)-\overline{ (\hat T)_{jl}^q}( T_0)_{ik{\beta}ar q} \right)
\epsilonnd{split}
\epsilonnd{equation}e
Hence
{\beta}egin{equation}\langlebel{e-logY}
\lf(\frac{\p}{\p t}-\Delta\ri)\log \Upsilon=\mathrm{I}'+\mathrm{II}'+\mathrm{III}'-1
\epsilonnd{equation}
where
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{I}'\le &2e^{-t}\Upsilon^{-2}\text{{\beta}f Re}\left( h^{i{\beta}ar l} g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}\hat \nablaabla_{{\beta}ar q} \Upsilon\right).
\epsilonnd{split}
\epsilonnd{equation}e
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{II}'=& \Upsilon^{-1} g^{i\bar{j}} h^{k{\beta}ar l} g_{k{\beta}ar q} \left(\hat \nablaabla_i \overline{(\hat T)_{jl}^q}- h^{p{\beta}ar q}\hat R_{i{\beta}ar lp{\beta}ar j}\right)\\
\epsilonnd{split}
\epsilonnd{equation}e
and
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
\mathrm{III}'=&-e^{-t}\Upsilon^{-1} g^{{i\bar{j}}} h^{k{\beta}ar l}\left(\hat \nablaabla_i\left(\overline{( T_0)_{jl{\beta}ar k} } \right) +\hat \nablaabla_{{\beta}ar l}\left( ( T_0)_{ik{\beta}ar j} \right)-\overline{ (\hat T)_{jl}^q}( T_0)_{ik{\beta}ar q} \right)
\epsilonnd{split}
\epsilonnd{equation}e
Now we want to estimate the terms in the above differential inequality.
\underline{\it Estimate of $\mathrm{II}'$}
Choose a unitary frame with respect to $h$ so that $g_{i\bar{j}}=\lambda_i\delta_{ij}$. Then
\begin{equation}\label{e-logY-1}
\begin{split}
\mathrm{II}'=& (\sum_l\lambda_l)^{-1}\lambda_i^{-1}\lambda_k\left(\hat\nabla_i\overline{(\hat T)_{ik}^k}-\hat R_{i\bar kk\bar i}\right)\\
\le &C(n,K)\operatorname{tr}_{g}h.
\end{split}
\end{equation}
\underline{\it Estimate of $\mathrm{III}'$}
Next, we compute the torsion of ${\gamma}mma_0$, $T_0=T_{{\gamma}mma_0}$, where ${\gamma}mma_0=\epsilonta(\frac{\rho(x)}{\rho_0})g_0+(1-\epsilonta(\frac{\rho(x)}{\rho_0}))h+\epsilon h$:{\beta}egin{equation}e{\beta}egin{equation}gin{split}
(T_0)_{ik{\beta}ar q}=&\partial_i({\gamma}mma_0)_{k{\beta}ar q}-\partial_k({\gamma}mma_0)_{i{\beta}ar q}\\
=&\epsilonta'\frac{1}{\rho_0}[\rho_i(x)(g_0)_{k{\beta}ar q}-\rho_k(x)(g_0)_{i{\beta}ar q}]+\epsilonta[\partial_i(g_0)_{k{\beta}ar q}-\partial_k(g_0)_{i{\beta}ar q}]\\
&+(1-\epsilonta+\epsilon)[\partial_ih_{k{\beta}ar q}-\partial_kh_{i{\beta}ar q}]-\epsilonta'\frac{1}{\rho_0}[\rho_ih_{k{\beta}ar q}-\rho_kh_{i{\beta}ar q}]. \epsilonnd{split} \epsilonnd{equation}e
By the assumptions, all terms above are bounded by $C(n, K)$ for all $\rho_0\geq 1$ and for all $\epsilon\leq 1$.
It remains to control $\hat \nablaabla_{{\beta}ar l}\left( ( T({\gamma}mma_0))_{ik{\beta}ar j} \right)$. We may compute $\hat \nablaabla_{{\beta}ar l}\left( ( T({\gamma}mma_0))_{ik{\beta}ar j} \right)$ directly. {\beta}egin{equation}e{\beta}egin{equation}gin{split}
& \hat \nablaabla_{{\beta}ar l}\left( ( T({\gamma}mma_0))_{ik{\beta}ar j} \right)\\=&\hat \nablaabla_{{\beta}ar l}(\partial_i({\gamma}mma_0)_{k{\beta}ar j}-\partial_k({\gamma}mma_0)_{i{\beta}ar j})\\
=&\hat \nablaabla_{{\beta}ar l}\{\epsilonta'\frac{1}{\rho_0}[\rho_i(x)(g_0)_{k{\beta}ar j}-\rho_k(x)(g_0)_{i{\beta}ar j}]+\epsilonta[\partial_i(g_0)_{k{\beta}ar j}-\partial_k(g_0)_{i{\beta}ar j}]\\
&+(1-\epsilonta+\epsilon)[\partial_ih_{k{\beta}ar j}-\partial_kh_{i{\beta}ar j}]-\epsilonta'\frac{1}{\rho_0}[\rho_ih_{k{\beta}ar q}-\rho_kh_{i{\beta}ar q}]\}\\
=&\epsilonta''\rho_{{\beta}ar l}\frac{1}{\rho^2_0}[\rho_i(g_0)_{k{\beta}ar j}-\rho_k(g_0)_{i{\beta}ar j}]+\epsilonta'\frac{1}{\rho_0}[\rho_{i{\beta}ar l}(g_0)_{k{\beta}ar j}-\rho_{k{\beta}ar l}(g_0)_{i{\beta}ar j}]\\
&+\epsilonta'\frac{1}{\rho_0}[\rho_i\hat \nablaabla_{{\beta}ar l}(g_0)_{k{\beta}ar j}-\rho_k\hat \nablaabla_{{\beta}ar l}(g_0)_{i{\beta}ar j}]+\epsilonta_{{\beta}ar l}[\partial_i(g_0)_{k{\beta}ar j}-\partial_k(g_0)_{i{\beta}ar j}]\\
&+\epsilonta \hat\nablaabla_{{\beta}ar l} T(g_0)_{ik{\beta}ar q}+(1-\epsilonta+\epsilon)\hat \nablaabla_{{\beta}ar l} T(h)_{ik{\beta}ar j}-\epsilonta'\frac{\rho_{{\beta}ar l}}{\rho_0}T(h)_{ik{\beta}ar j}\\
&-\epsilonta'\frac{1}{\rho_0}[\rho_{i{\beta}ar l}h_{k{\beta}ar q}-\rho_{k{\beta}ar l}h_{i{\beta}ar q}]-\epsilonta''\frac{1}{\rho^2_0}[\rho_{{\beta}ar l}\rho_{i}h_{k{\beta}ar q}-\rho_{{\beta}ar l}\rho_{k}h_{i{\beta}ar q}]. \epsilonnd{split} \epsilonnd{equation}e
Every term in the above expression is controlled by $C(n, K)$ for all $\rho_0\geq 1$ and $\epsilon\le 1$. Therefore, $|\hat \nabla_{\bar l}\left( ( T(\gamma_0))_{ik\bar j} \right)|\leq C(n, K)$.
Hence, if $0<\epsilon<1$ and $\rho_0>1$, then
\begin{equation}\label{e-logY-2}
\mathrm{III}'\leq C(n, K)\cdot e^{-t}\Upsilon^{-1} \Lambda,
\end{equation}
where $\Lambda=\operatorname{tr}_{g}h$.
Now we will prove the uniform upper bound of $g(t)$.
\begin{lma}\label{l-trace-2}
\begin{enumerate}
\item [(i)] For $0<s_1<s$,
$$
\operatorname{tr}_{h}g(x,t)\le \exp\left(\frac{C(E-\log(1-e^{-s}))}{1-e^{-t}}\right)
$$
on $M\times(0,s_1]$ for some constant $C>0$ depending only on $n,K, \beta, ||f||_\infty$, provided $0<\epsilon<1$ and $\rho_0>1$,
where
$$
E=\frac{(1+s_1e^{s_1-s})}{(1-e^{-s})(1-e^{s_1-s})}.
$$
\item [(ii)] For $0<s_1<s$, there is a constant $C$ depending only on $n,K, \beta, ||f||_\infty, s, s_1$ and also on $\epsilon$, but independent of $\rho_0$, such that
$$
\operatorname{tr}_{h}g\le C
$$
on $M\times[0, s_1]$.
\end{enumerate}
\end{lma}
\begin{proof} In the following, $C_i$ will denote constants depending only on $n,K, \beta$ and $||f||_\infty$, but not on $\rho_0$ and $\epsilon$; $D_i$ will denote constants which may also depend on $\epsilon, \rho_0$, but not on $\kappa$. We always assume $0<\epsilon<1<\rho_0$.
Let $v(x,t)\ge1$ be a smooth bounded function.
As before, let $\Upsilon=\operatorname{tr}_{h}g$ and $\Lambda=\operatorname{tr}_gh$ and let $\langlembda=0$ or 1. For $\kappa>0$, consider the function
$$F=(1-\langlembda e^{-t})\log \Upsilon-Av+\frac 1v-\kappa\rho+Bt\log (1-\langlembda e^{-t})
$$
on $M\times[0, s_1]$, where $A, B>0$ are constants to be chosen. We want to estimate $F$ from above.
Let
$$
\mathfrak{M}=\sup_{M\times[0,s_1]}F.
$$
Either (i) $\mathfrak{M}\le 0$; (ii) $\mathfrak{M}=\sup_{M\times\{0\}}F$; or (iii) there is $(x_0,t_0)$ with $t_0>0$ such that $F(x_0,t_0)=\mathfrak{M}$.
If (ii) is true, then
{\beta}egin{equation}\langlebel{e-tr-1}
\mathfrak{M}\le C_1(n).
\epsilonnd{equation}
because $g(0)={\gamma}mma_0\le (1+\epsilon)h$.
Suppose (iii) is true. If $\Upsilon(x_0,t_0)\le 1$, then \eqref{e-tr-1} holds with a possibly larger $C_1$. So let us assume that $\Upsilon(x_0,t_0)>1$. By \eqref{e-logY}, \eqref{e-logY-1} and \eqref{e-logY-2}, at $(x_0,t_0)$ we have:
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
0\le&\lf(\frac{\p}{\p t}-\Delta\ri) F\\
=&(1-\langlembda e^{-t})\lf(\frac{\p}{\p t}-\Delta\ri)\log \Upsilon+\langlembda e^{-t}\log \Upsilon-(\frac{1}{v^2}+A)\lf(\frac{\p}{\p t}-\Delta\ri) v\\
&-\frac{2}{v^3}|\nabla v|^2+\kappa \Delta \rho+B\left(\log(1-\langlembda e^{-t})+\frac{\langlembda t}{e^t-\langlembda}\right)\\
\le &(1-\langlembda e^{-t})C_2\Lambda \left( 1 +e^{-t}\Upsilon^{-1} \right)\\& +
2(1-\langlembda e^{-t})e^{-t}\Upsilon^{-2}\text{{\beta}f Re}\left( h^{i{\beta}ar l} g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}\hat \nablaabla_{{\beta}ar q} \Upsilon\right)\\
&+\langlembda e^{-t}\log \Upsilon- (\frac 1{v^2}+A)\lf(\frac{\p}{\p t}-\Delta\ri) v -\frac{2|\nablaabla v|^2}{v^3}\\
&+B\left(\log(1-\langlembda e^{-t})+\frac{\langlembda t}{e^t-\langlembda}\right)+\kappa D_1.
\epsilonnd{split}
\epsilonnd{equation}e
At $(x_0,t_0)$, we also have:
$$(1-\langlembda e^{-t}) \Upsilon^{-1}\hat \nablaabla \Upsilon-(\frac 1{v^2}+A)\hat\nablaabla v- \kappa\hat \nablaabla \rho=0.$$
Hence
{\beta}egin{equation}e{\beta}egin{equation}gin{split}
&2(1-\langlembda e^{-t})e^{-t} \Upsilon^{-2}\text{{\beta}f Re}\left( h^{i{\beta}ar l} g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}\hat \nablaabla_{{\beta}ar q} \Upsilon\right)\\
=& \frac{2e^{-t}}{\Upsilon}\text{{\beta}f Re}\left( h^{i{\beta}ar l} g^{k{\beta}ar q} (T_0)_{ki{\beta}ar l}((\frac 1{v^2}+A)\hat\nablaabla_{{\beta}ar q} v- \kappa\hat \nablaabla_{{\beta}ar q} \rho)\right) \\
\leq&\frac{1}{v^3}|\nablaabla v|^2+\frac{C_3(A+1+\frac{1}{v^2})^2\cdot v^3 \Lambda}{\Upsilon^2}+\kappa D_2\\
\epsilonnd{split}\epsilonnd{equation}e
Using the fact that $\Upsilon(x_0,t_0)>1$, we have at $(x_0,t_0)$:
Hence {\beta}egin{equation}\langlebel{e-g-1}{\beta}egin{equation}gin{split}
0\le& C_2(1-\langlembda e^{-t})\Lambda +
\frac{C_3(A+\frac{1}{v^2})^2\cdot v^3 \Lambda}{\Upsilon} +\langlembda e^{-t}\log \Upsilon\\
&- (\frac 1{v^2}+A)\lf(\frac{\p}{\p t}-\Delta\ri) v +B\left(\log(1-\langlembda e^{-t})+\frac{\langlembda t}{e^t-\langlembda}\right)+\kappa D_3.\epsilonnd{split}
\epsilonnd{equation}
Now let
$$
v=u-\frac{1- e^{-t}}{1-e^{-s}}f+\frac{C_4(1+s_1e^{s_1-s})}{(1-e^{-s})(1-e^{s_1-s})}
$$
By Lemmas \ref{l-uudot-upper-1} and \ref{l-all-u}, we can find $C_4>0$ so that $v\ge 1$, and there is $C_5>0$ so that $$v\le \frac{C_5(1+s_1e^{s_1-s})}{(1-e^{-s})(1-e^{s_1-s})}.
$$
Let
{\beta}egin{equation}\langlebel{e-E}
E:=\frac{(1+s_1e^{s_1-s})}{(1-e^{-s})(1-e^{s_1-s})}.
\epsilonnd{equation}
Note that
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
& \lf(\frac{\p}{\p t}-\Delta\ri) u\\
=&\dot u-\Delta u\\
=&\dot u-n+\operatorname{tr}_g\left(-(1-e^{-t}) \text{\rm Ric}(\theta_0)+e^{-t}{\gamma}mma_0 \right)\\
\ge&\dot u-n+\operatorname{tr}_g\left(\frac{1-e^{-t}}{1-e^{-s}}\left({\beta}egin{equation}ta\theta_0-e^{-s}\omega_0-\sqrt{-1}\partial{\beta}ar\partial f\right)+ e^{-t}{\gamma}mma_0\right)\\
\ge&\dot u+\left[\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right]\Lambda-\frac{1-e^{-t}}{1-e^{-s}}\Delta f -n\\
\ge&\dot u+\left[\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right]\Lambda+\lf(\frac{\p}{\p t}-\Delta\ri) \left(\frac{1- e^{-t}}{1-e^{-s}} f\right)-\frac{ e^{-t}}{1-e^{-s}}f-n\\
\ge&\dot u+u+\left[\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right]\Lambda+\lf(\frac{\p}{\p t}-\Delta\ri) \left(\frac{1- e^{-t}}{1-e^{-s}} f\right)-\frac{C_6}{1-e^{-s}}.
\epsilonnd{split}
\epsilonnd{equation}e
because ${\gamma}mma_0\ge \omega_0+\epsilon\theta_0$ and $t<s$.
On the other hand,
$$
-\dot u-u=\log\left(\frac{\det h}{\det g}\right)\le c(n)+n\log \Lambda.
$$
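For convenience, the last inequality follows from the arithmetic-geometric mean inequality: in a unitary frame for $h$ in which $g_{i\bar j}=\lambda_i\delta_{ij}$, one has $\Lambda=\sum_i\lambda_i^{-1}$ and
\begin{equation*}
\frac{\det h}{\det g}=\prod_{i=1}^n\lambda_i^{-1}
\le\left(\frac1n\sum_{i=1}^n\lambda_i^{-1}\right)^{n}
=\left(\frac{\Lambda}{n}\right)^{n},
\end{equation*}
so that $\log\left(\frac{\det h}{\det g}\right)\le n\log\Lambda-n\log n\le c(n)+n\log \Lambda$.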
Hence
{\beta}egin{equation}\langlebel{e-g-2}
\lf(\frac{\p}{\p t}-\Delta\ri) v\ge -n\log \Lambda+ \left[\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right]\Lambda -\frac{C_7}{1-e^{-s}}.
\epsilonnd{equation}
On the other hand, in a unitary frame with respect to $h$ so that $g_{i\bar{j}}=\langlembda_i\delta_{ij}$, then
{\beta}egin{equation}\langlebel{e-tr-2}
{\beta}egin{equation}gin{split}
\Upsilon=&\sum_i\langlembda_i\\
=&\frac{\det g}{\det h}\sum_{i} (\langlembda_1\dots\hat\langlembda_i\dots\langlembda_n)^{-1}\\
\le &C_{8}\Lambda^{n-1}.
\epsilonnd{split}
\epsilonnd{equation}
where we have used the upper bound of $\dot u+u=\log\frac{\det g}{\det h} $ in Lemma \ref{l-uudot-upper-1}. Combining \epsilonqref{e-g-1}, \epsilonqref{e-g-2} and \epsilonqref{e-tr-2}, at $(x_0,t_0)$ we have
{\beta}egin{equation} \langlebel{e-tr-revised}
{\beta}egin{equation}gin{split}
0\le& C_2(1-\langlembda e^{-t})\Lambda\left(1 +
\frac{C_9 E^3(A+1)^2}{(1-\langlembda e^{-t})\Upsilon}\right) +\langlembda e^{-t}\left(\log C_8+(n-1)\log\Lambda\right)\\
&+ (\frac 1{v^2}+A)\left( n\log \Lambda- \left[\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right]\Lambda+\frac{C_7}{1-e^{-s}}\right) \\
&+B\left(\log(1-\langlembda e^{-t})+\frac{\langlembda t}{e^t-\langlembda}\right)+\kappa D_3\\
\le&\Lambda\left[C_2(1-\langlembda e^{-t}) \left(1 +
\frac{C_9E^3(A+1)^2}{(1-\langlembda e^{-t})\Upsilon}\right)-\frac{A+1}{C^2_5E^2}\left(\frac{{\beta}egin{equation}ta(1-e^{-t})}{1-e^{-s}}+\epsilon e^{-t}\right)\right] \\
&+[n(1+A)+\langlembda(n-1)] \log \Lambda+ \frac{C_{10}(A+1) }{1-e^{-s}}\\&+B\left(\log(1-\langlembda e^{-t})+\frac{\langlembda t}{e^t-\langlembda}\right)+\kappa D_3+\langlembda\log C_8 \epsilonnd{split}
\epsilonnd{equation}
where we have used the fact that $1\le v\le C_5E$.
{{\beta}f Case 1}: Let $\langlembda=1$. Suppose at $(x_0,t_0)$,
$$
\frac{C_2C_9E^3(A+1)^2}{(1- e^{-t})\Upsilon}\ge \frac12\frac{1}{C^2_5E^2}\cdot(A+1)\cdot{\beta}egin{equation}ta\cdot\frac{1}{1-e^{-s}}
$$
Then
{\beta}egin{equation}e (1- e^{-t})\Upsilon\le \frac{2C_2C_9C^2_5E^5(1-e^{-s})(A+1)}{{\beta}egin{equation}ta}\leq C_{11}E^5(A+1).
\epsilonnd{equation}e
Hence,
{\beta}egin{equation}e
(1- e^{-t})\log \Upsilon\le (1- e^{-t})\log(C_{11}E^5(A+1)) -(1-e^{-t})\log(1- e^{-t}) .
\epsilonnd{equation}e
Therefore,
{\beta}egin{equation}\langlebel{e-tr-1-2}
\mathfrak{M}\le C(1+\log E)+\log (A+1).
\epsilonnd{equation}
for some $C(n,{\beta}egin{equation}ta, K,||f||_\infty)>0$. Suppose at $(x_0,t_0)$,
$$
\frac{C_2C_9E^3(A+1)^2}{(1- e^{-t})\Upsilon}< \frac12\frac{1}{C^2_5E^2}\cdot(A+1)\cdot{\beta}egin{equation}ta\cdot\frac{1}{1-e^{-s}},
$$ then at $(x_0,t_0)$ we have
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
0\le & (1- e^{-t}) \Lambda \left(C_2 -
\frac12\frac{1}{C^2_5E^2}\cdot(A+1)\cdot{\beta}egin{equation}ta\cdot\frac{1}{1-e^{-s}}\right) +n(A+2)\log \Lambda\\
&+ \frac{C_{10}(A+1) }{1-e^{-s}} +B\left(\log(1- e^{-t})+\frac{ t}{e^t-1}\right)+\kappa D_3+\log C_8\\
=&\Lambda\left[ (1- e^{-t}) \left(C_2 -
\frac12\frac{1}{C^2_5E^2}\cdot(A+1)\cdot{\beta}egin{equation}ta\cdot\frac{1}{1-e^{-s}} \right)\right]\\
&+n(A+2)\log ((1-e^{-t})\Lambda)+\frac{C_{10}(A+1) }{1-e^{-s}}-n(A+2)\log(1-e^{-t})\\
&+B\left(\log(1- e^{-t})+\frac{ t}{e^t-1}\right)+\kappa D_3+\log C_8\\
\le &-(1-e^{-t})\Lambda +n(A+2)\log ((1-e^{-t})\Lambda)+\frac{C_{12}E^2}{1-e^{-s}},
\epsilonnd{split}
\epsilonnd{equation}e
provided $A=C_{13}E^2$ so that
$$
\frac12\frac{1}{C^2_5E^2}\cdot(A+1)\cdot{\beta}egin{equation}ta\cdot\frac{1}{1-e^{-s}} \ge (C_2+1)
$$
and
$B$ is chosen so that $B=n(A+2)$ and $\kappa$ is small enough so that $\kappa D_2\le 1$.
Hence using $1+\frac{1}{2}\log x\leq \sqrt{x},\;\forall x>0$, we have at $(x_0,t_0)$,
$$
(1-e^{-t})\Lambda\le \frac{C_{14}E^4}{1-e^{-s}},
$$
and so
$$
\log\Lambda\le \log\frac{C_{14}E^4}{1-e^{-s}}-\log(1-e^{-t}).
$$
By \epsilonqref{e-tr-2}, we have
{\beta}egin{equation}\langlebel{e-tr-4}
{\beta}egin{equation}gin{split}
&(1-e^{-t})\log\Upsilon\\
\leq&(1-e^{-t})\left( \log C_8+(n-1)\log \Lambda\right) \\
\le&(1-e^{-t})\left( \log C_8+(n-1)\left(\log\frac{C_{14}E^4}{1-e^{-s}}-\log(1-e^{-t})\right)\right)\\
\le &(n-1)\log(\frac{1}{1-e^{-s}})+C_{15}(1+\log E).
\epsilonnd{split}
\epsilonnd{equation}
Hence $\mathfrak{M}\le (n-1)\log(\frac{1}{1-e^{-s}})+C_{16}(1+\log E).$
By combining \epsilonqref{e-tr-1}, \epsilonqref{e-tr-1-2} and using the choice of $A$, we may let $\kappa\rightghtarrow 0$ to conclude that on $ M\times(0,s_1]$,
$$
(1-e^{-t})\log \Upsilon\le (n-1)\log(\frac{1}{1-e^{-s}})+C_{17}(1+E).
$$
and hence (i) in the lemma is true. Here we have used the fact that $E\geq \log E+1$.
{{\beta}f Case 2}: Let $\langlembda=0$, then \epsilonqref{e-tr-revised} becomes:
{\beta}egin{equation}e
{\beta}egin{equation}gin{split}
0\le&\Lambda\left[C_2\left(1 +
\frac{C_9E^3(A+1)^2}{\Upsilon}\right)-\frac{1}{C^2_5E^2}(A+1)\epsilon e^{-t}\right] \\
&+n(1+A) \log \Lambda+ \frac{C_{10}(A+1) }{1-e^{-s}}+\kappa D_3. \epsilonnd{split}
\epsilonnd{equation}e
We can argue as before to conclude that (ii) is true.
\epsilonnd{proof}
Combining this with the lower bound of $\dot u+u$, we obtain:
\begin{cor}\label{eq-g} For any $0<s_0<s_1<s$, there is a constant $C$ depending only on $n,K, \beta, ||f||_\infty$ and $s_0, s_1, s$, but independent of $\epsilon,\rho_0$, such that if $0<\epsilon<1$ and $\rho_0>1$, we have
\begin{equation*}
C^{-1}h\leq g(t)\leq Ch \end{equation*} on $M\times[s_0, s_1]$. There is also a constant $\widetilde C(\epsilon)>0$, which may also depend on $\epsilon$, such that
\begin{equation*}
\widetilde C^{-1}h\leq g(t)\leq \widetilde Ch
\end{equation*}
on $M\times[0, s_1]$.
\end{cor}
\section{Short time existence for the potential flow and the normalized Chern-Ricci flow}
Using the a priori estimates in the previous section, we are ready to discuss short time existence for the potential flow and the Chern-Ricci flow. We begin with the short time existence of the potential flow. We have the following:
\begin{thm}\label{t-instant-complete}
Let $(M,h)$ be a complete non-compact Hermitian manifold {with K\"ahler form $\theta_0$.} Suppose there is $K>0$ such that the following hold.
\begin{enumerate}
\item There is a {proper} exhaustion function $\rho(x)$ on $M$ such that
$$|\partial\rho|^2_h +|\sqrt{-1}\partial\bar\partial \rho|_h \leq K.$$
\item $\mathrm{BK}_h\geq -K$;
\item The torsion of $h$, $T_h=\partial \omega_h$, satisfies
$$|T_h|^2_h +|\nabla^h_{\bar\partial} T_h |\leq K.$$
\end{enumerate}
Let $\omega_0$ be a nonnegative real (1,1) form with corresponding Hermitian form $g_0$ on $M$ (possibly incomplete or degenerate) such that
\begin{enumerate}
\item[(a)] $g_0\le h$ and
$$|T_{g_0}|_h^2+|\nabla^h_{\bar\partial} T_{g_0}|_h+ |\nabla^{h}g_0|_h\leq K.$$
\item[(b)] There exist $f\in C^\infty(M)\cap L^\infty(M)$, $\beta>0$ and $s>0$ so that $$-\text{\rm Ric}(\theta_0)+e^{-s}(\omega_0+\text{\rm Ric}(\theta_0))+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0.$$
\end{enumerate}
Then \eqref{e-MP-1} has a solution on $M\times(0, s)$ so that $u(t)\to 0$ as $t\to0$ uniformly on $M$. Moreover, for any $0<s_0<s_1<s$, {let
$$
\alpha(t)=-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+\omega_0);
$$}
then
$$\omega(t)=\alpha+\sqrt{-1}\partial\bar\partial u$$
is the K\"ahler form of a complete Hermitian metric which is uniformly equivalent to $h$ on $M\times[s_0, s_1]$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{t-instant-complete}] For later application, we construct the solution in the following way. Combining the local higher order estimates for the Chern-Ricci flow (see \cite{ShermanWeinkove2013} for example) with Corollary \ref{eq-g}, for any $1>\epsilon>0$, using a diagonal argument as $\rho_0\to \infty$ we obtain a solution $u_\epsilon(t)$ of \eqref{e-MP-1} with initial data $\omega_0+\epsilon \theta_0$ on $M\times[0,s)$ which is smooth up to $t=0$, so that the corresponding solution $g_\epsilon(t)$ of \eqref{e-NKRF} is smooth on $M\times[0,s)$ with initial metric $g_\epsilon(0)=g_0+\epsilon h$. Moreover, $g_\epsilon$ is uniformly equivalent to $h$ on $M\times[0,s_1]$ for all $0<s_1<s$, and for any $0<s_0<s_1<s$ there is a constant $C>0$ independent of $\epsilon$ such that
$$
C^{-1}h\le g_\epsilon\le Ch
$$
on $M\times[s_0,s_1]$.
Using the local higher order estimates for the Chern-Ricci flow \cite{ShermanWeinkove2013} again, we can find $\epsilon_i\to0$ such that $u_{\epsilon_i}$ converge locally uniformly on compact subsets of $M\times(0,s)$ to a solution $u$ of \eqref{e-MP-1}.
By Lemmas \ref{l-uudot-upper-1} and \ref{l-all-u}, we see that $u(t)\to 0$ as $t\to0$ uniformly on $M$. Moreover, for any $0<s_0<s_1<s$, $\omega(t)=\alpha+\sqrt{-1}\partial\bar\partial u$ is the K\"ahler form of the solution to \eqref{e-NKRF}, and the corresponding Hermitian metric $g(t)$ is a complete Hermitian metric which is uniformly equivalent to $h$ on $M\times[s_0, s_1]$.
\end{proof}
Next we want to discuss {the short time existence of the Chern-Ricci flow. The solution $\omega(t)$ obtained from Theorem \ref{t-instant-complete} satisfies the normalized Chern-Ricci flow on $M\times(0,s)$. Hence we concentrate on the behaviour of $\omega(t)$ as $t\to0$ for the solution obtained in Theorem \ref{t-instant-complete}}. In case
$h$ is K\"ahler and $\omega_0$ is closed, we have the following:
\begin{thm}\label{t-initial-Kahler-1}
Assume the same notation and assumptions as in Theorem \ref{t-instant-complete}, and let $\omega(t)$ be the solution of \eqref{e-NKRF} obtained in that theorem. Suppose in addition that $h$ is K\"ahler and $d\omega_0=0$, and let $U=\{\omega_0>0\}$. Then $\omega(t)\rightarrow \omega_0$ in $C^\infty(U)$ as $t\rightarrow 0$, {uniformly on compact subsets of $U$}.
\end{thm}
\begin{rem}
If in addition $h$ has bounded curvature, then one can use Shi's K\"ahler-Ricci flow \cite{Shi1989,Shi1997} and the argument in \cite{ShermanWeinkove2012} to show that the K\"ahler-Ricci flow $g_i(t)$ starting from $g_0+\epsilon_i h$ has bounded curvature for $t>0$. The uniform local $C^k$ estimates will then follow from the pseudo-locality theorem \cite[Corollary 3.1]{HeLee2018} and the modified Shi's local estimate \cite[Theorem 14.16]{Chow2}.
\end{rem}
By Theorem \ref{t-instant-complete} we have the following:
\begin{cor}\label{c-shorttime} Let $(M,h)$ be a complete non-compact K\"ahler manifold with bounded curvature. Let $\theta_0$ be the K\"ahler form of $h$. Suppose there is a compact set $V$ such that outside $V$, $-\text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial f\ge\beta \theta_0$ for some $\beta>0$ and some bounded smooth function $f$. Then for any closed nonnegative real (1,1) form $\omega_0$ such that $\omega_0\le \theta_0$, $|\nabla_h\omega_0|$ is bounded, and $\omega_0>0$ on $V$, there is $s>0$ such that \eqref{e-NKRF} has a solution $\omega(t)$ on $M\times(0,s)$ so that $\omega(t)$ is uniformly equivalent to $h$ on $M\times[s_0,s_1]$ for any $0<s_0<s_1<s$, and $\omega(t)$ attains the initial data $\omega_0$ on the set where $\omega_0>0$.
\end{cor}
\begin{proof} Let $s>0$. Then
$$
-(1-e^{-s})\text{\rm Ric}(\theta_0)+(1-e^{-s})\sqrt{-1}\partial\bar\partial f\ge (1-e^{-s})\beta \theta_0
$$
outside $V$. On $V$,
$$
\omega_0 -(1-e^{-s})\text{\rm Ric}(\theta_0)+(1-e^{-s})\sqrt{-1}\partial\bar\partial f\ge \beta'\theta_0
$$
for some $\beta'>0$, provided $s$ is small enough. The corollary then follows from Theorems \ref{t-instant-complete} and \ref{t-initial-Kahler-1}.
\end{proof}
\begin{rem}\label{r-shorttime}
Suppose $\Omega$ is a bounded strictly pseudoconvex domain in $\mathbb{C}^n$ with smooth boundary. Then there is a complete K\"ahler metric on $\Omega$ with Ricci curvature bounded above by a negative constant near infinity by \cite{ChengYau1982}. Hence Corollary \ref{c-shorttime} can be applied to this case, which has been studied by Ge-Lin-Shen \cite{Ge-Lin-Shen}.
\end{rem}
To prove the Theorem \ref{t-initial-Kahler-1},
suppose $h$ is K\"ahler and $d\omega_0=0$, then solution in Theorem \ref{t-instant-complete} is the limit of solutions $g_i(t)$ of the normalized K\"ahler R flow on $M\times[0, s)$ with initial data $g_0+\epsilon_i h$, where $\epsilon_i\to 0$. Here we may assume $s\leq 1$. By Lemma \ref{l-all-u} (iii) and Lemma \ref{l-trace-2} (ii), each $g_i(t)$ is uniformly equivalent to $h$, the uniform constant here may depend on $\epsilon_i$. In this section, we will use $\tilde g_i(t)=(t+1)g_i( \log (t+1))$ to denote the unnormalized K\"ahler R flow and $\partialhi_i$ be the corresponding potential flow to the unnormalized K\"ahler R flow $\tilde g_i(t)$, see appendix.
We want to prove the following:
\begin{lma}\label{l-initial-Kahler-1} With the same notation and assumptions as in Theorem \ref{t-initial-Kahler-1}, for any precompact open subset $\Omega$ of $U$ there are $S_1>0$ and $C>0$ such that
$$
C^{-1}h\le \tilde g_i(t)\le Ch
$$
on $\Omega\times[0,S_1]$ for all $i$.
\end{lma}
\begin{proof}[Proof of Theorem \ref{t-initial-Kahler-1}]
Suppose the lemma is true. Then Theorem \ref{t-initial-Kahler-1} will follow from the local estimates in \cite{ShermanWeinkove2012}.
\end{proof}
It remains to prove Lemma \ref{l-initial-Kahler-1}.
\begin{lma}\label{slma-1} We have $|\phi_i|\leq C_0$ and $\dot\phi_i\leq C_0$ on $M\times[0, e^s-1)$ for some positive constant $C_0$ independent of $i$.
\end{lma}
\begin{proof} By Lemma \ref{l-uudot-upper-1}, we have
\begin{equation*}
\log\frac{{\omega_i}^n(s)}{\theta_0^n}=\dot u_i+u_i\leq C.
\end{equation*}
Here $C$ is a positive constant independent of $i$, and $\omega_i(s)$ is the K\"ahler form of the corresponding normalized flow, which is related to the unnormalized flow by
\begin{equation*}
\widetilde g_i(t)e^{-s}= g_i(s), \quad t=e^s-1.
\end{equation*}
Then by the equation $\dot\phi_i=\log\frac{\widetilde\omega^n_i(t)}{\theta_0^n}$, we obtain the upper bound on $\dot\phi_i(t)$. The lower bound on $\phi_i$ follows from Lemma \ref{l-all-u}.
\end{proof}
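The passage from the bound on $\dot u_i+u_i$ to the bound on $\dot\phi_i$ uses only the scaling relation above; the following one-line computation is a hedged sketch of that step (the use of the bound $s_{\max}\le1$ is our reading of the normalization fixed earlier, and $s_{\max}$ is our notation for the existence time of the normalized flow, called $s$ above):
\begin{equation*}
\dot\phi_i(t)=\log\frac{\widetilde\omega_i^n(t)}{\theta_0^n}
=\log\frac{e^{ns}\,\omega_i^n(s)}{\theta_0^n}
= ns+\big(\dot u_i+u_i\big)\le n+C=:C_0,
\qquad t=e^s-1,\ 0\le s< s_{\max}\le 1 .
\end{equation*}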
Before we state the next lemma, we fix some notation. Let $p\in U$. By scaling, we may assume that there is a holomorphic coordinate neighbourhood of $p$ which can be identified with $B_0(2)\subset \mathbb{C}^n$, with $p$ the origin, where $B_0(r)$ denotes the Euclidean ball of radius $r$. Moreover, $B_0(2)\Subset U$. We may further assume $\frac14h\le h_E\le 4h$ in $B_0(2)$, where $h_E$ is the Euclidean metric. Since $\omega_0>0$ on $U$, there is $0<\sigma<1$ such that $B_{g_i(0)}(p,2\sigma)\subset B_0(2)$ and
$$
g_i(0)\ge 4\sigma^2h
$$
in $B_0(2)$; this is because $g_i(0)=\omega_0+\epsilon_i h$. We use $h_E$ here because we want to apply the estimates in \cite{ShermanWeinkove2012} explicitly. Let $\tau=e^{s}-1$, where $s$ is the constant in the assumption of Theorem \ref{t-instant-complete}, and let $\dot \phi$ be as in the proof of Lemma \ref{slma-1}. It is easy to see that Lemma \ref{l-initial-Kahler-1} follows from the following:
\begin{lma}\label{local-bound} With the same notation and assumptions as in Theorem \ref{t-initial-Kahler-1} and with the above set-up, there exist constants $0<\gamma_1,\gamma_2<1$ with $\gamma_2<\tau$, independent of $i$, such that
$$\gamma_1^{2} h\le \widetilde g_i(t)\le \gamma_1^{-2}h$$
on $B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in [0,\gamma_2\gamma_1^{8(n-1)}]$.
\end{lma}
\begin{proof} The lower bound in the lemma will follow from the following claim.\vskip .1cm
\noindent\underline{\it Claim}: There are constants $0<\gamma_1,\gamma_2<1$ with $\gamma_2<\tau$, independent of $\alpha>0$ and of $i$, such that if
$\widetilde g_i(t)\ge \alpha^2h$ on $B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in [0, \gamma_2\alpha^{8(n-1)}]$, then
$\widetilde g_i(t)\ge \gamma_1^2 h$ on $B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in [0, \gamma_2\alpha^{8(n-1)}]$.
\vskip .1cm
The main point is that $\gamma_1$ does not depend on $\alpha$. Suppose the claim is true. Fix $i$ and let $\alpha\le \gamma_1$ be the supremum of those $\widetilde\alpha$ for which $\widetilde g_i(t)\ge \widetilde\alpha^2h$ on $ B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in [0, \gamma_2\widetilde\alpha^{8(n-1)}]$. Since $\widetilde g_i(0)\ge \sigma^2h$ in $U$, we see that $\alpha>0$. Suppose $\alpha<\gamma_1$. By continuity, there is $\epsilon>0$ such that $\alpha+\epsilon<\gamma_1$. Then $\gamma_2 \alpha^{8(n-1)} \le \gamma_2 <\tau$. By the claim, we conclude that
$$
\widetilde g_i(t)\ge \gamma_1^2 h\geq (\alpha+\epsilon)^2h
$$
in $B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in [0,\gamma_2\alpha^{8(n-1)}]$. By choosing a possibly smaller $\epsilon$ and by continuity, the above inequality also holds for $t\in [0,\gamma_2(\alpha+\epsilon)^{8(n-1)}]$. This contradicts the choice of $\alpha$.
To prove the claim, let $\gamma_1,\gamma_2>0$ be two constants, independent of $\alpha$ and $i$, to be determined later. In the following, $C_k$ will denote a positive constant independent of $\alpha$ and $i$, and for simplicity of notation we suppress the index $i$ and simply write $\widetilde g_i$ as $g$.
Suppose $\alpha\le \gamma_1$ is such that
$$
g(t)\ge \alpha^2 h
$$
in $ B_{g(t)}(p,\sigma)$ for $t\in[0,\gamma_2 \alpha^{8(n-1)}]$. By Lemma \ref{slma-1}, $\det(g(t))/\det(h)\le C_1$ for some $C_1>1$. Hence we have
\begin{equation*}
\alpha^2h\le g(t)\le C_1\alpha^{-2(n-1)}h
\end{equation*}
on $B_{g(t)}(p,\sigma)$ for $t\in[0,\gamma_2 \alpha^{8(n-1)}]$, and hence on $B_h(p,C_1^{-1/2}\alpha^{n-1}\sigma)\times [0,\gamma_2 \alpha^{8(n-1)}]$, because $B_h(p,C_1^{-1/2}\alpha^{n-1}\sigma)\subset B_{g(t)}(p,\sigma)$ for $t\in[0,\gamma_2 \alpha^{8(n-1)}]$. This can be seen by considering the maximal $h$-geodesic inside $B_t(p,\sigma)$. Together with the fact that $\frac14h \le h_E\le 4h$ on $B_0(2)$, we conclude that
\begin{equation}\label{e-alpha-2}
\alpha_1^2h_E\le g(t)\le \alpha_1^{-2}h_E
\end{equation}
on $ B_0(\frac1{2\sqrt{C_1}}\alpha^{n-1}\sigma)\times[0, \gamma_2\alpha^{8(n-1)}]$, where $\alpha_1>0$ is given by
\begin{equation}\label{e-alpha-1}
\alpha_1^2=\frac1{4C_1}\alpha^{2(n-1)}.
\end{equation}
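The upper bound $g(t)\le C_1\alpha^{-2(n-1)}h$ used above follows from the lower bound together with the determinant bound; the following eigenvalue computation is a hedged sketch of this standard step (our reconstruction, not spelled out in the original):
\begin{equation*}
\lambda_j=\frac{\prod_{k}\lambda_k}{\prod_{k\ne j}\lambda_k}
\;\le\; \frac{\det(g(t))/\det(h)}{\alpha^{2(n-1)}}
\;\le\; C_1\,\alpha^{-2(n-1)},
\qquad j=1,\dots,n,
\end{equation*}
where $\lambda_1,\dots,\lambda_n$ denote the eigenvalues of $g(t)$ with respect to $h$, and we used that each $\lambda_k\ge\alpha^2$ by the assumed lower bound $g(t)\ge\alpha^2 h$.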
By \cite[Theorem 1.1]{ShermanWeinkove2012}, we conclude that
\begin{equation}\label{e-Rm}
|\text{\rm Rm}(g(t))|\le \frac{C_2}{\alpha_1^8t}
\end{equation}
on $ B_0(\frac\sigma 2\alpha_1)\times[0, \gamma_2\alpha^{8(n-1)}]$. From the proof in \cite{ShermanWeinkove2012}, the constant $C_2$ depends only on an upper bound for the existence time, not on its precise value; in particular, it is independent of $\alpha$ here. By \eqref{e-alpha-2}, we conclude that \eqref{e-Rm} holds on $ B_{g(t)}(p, \frac\sigma2\alpha_1^2)$ for $t\in[0, \gamma_2\alpha^{8(n-1)}]$.
By \cite[Lemma 8.3]{Perelman2003} (see also \cite[Chapter 18, Theorem 18.7]{Chow}), we have
\begin{equation}\label{e-distance-1}
\left(\frac{\partial}{\partial t}-\Delta\right) \left(d_t(p,x)+C_3\alpha_1^{-4}t^\frac12\right)\ge0
\end{equation}
in the sense of barrier (see the definition in Appendix \ref{s-max}) outside $B_{g(t)}(p,\alpha_1^4\sqrt t)$, provided
\begin{equation}\label{e-t-1}
t^\frac12\le \frac\sigma2\alpha_1^{-2}.
\end{equation}
Let $\xi\ge0$ be a smooth function with $\xi=1$ on $[0,\frac 43]$ and $\xi=0$ outside $[0,2]$, and with $\xi'\le 0$ and $|\xi'|^2/\xi+ |\xi''|\le C $. Let
$$
\Phi(x,t)=\xi( \sigma^{-1}\eta(x,t))
$$
where $\eta(x,t)=d_t(p,x)+C_3\alpha_1^{-4}t^\frac12$. Let $\epsilon>0$. For $t>0$ satisfying \eqref{e-t-1}, if $d_t(p,x)+C_3\alpha_1^{-4}t^\frac12<\frac43\sigma$, then $\Phi(x,t)=1$ near $x$ and so
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right)\log(\Phi+\epsilon)=0.
\end{equation*}
If $d_t(p,x)+C_3\alpha_1^{-4}t^\frac12\ge\frac43\sigma$ and $d_t(p,x)\ge \alpha_1^4t^\frac12$, then
in the sense of barrier we have
\begin{equation}\label{e-Phi-1}
\begin{split}
& \left(\frac{\partial}{\partial t}-\Delta\right) \log (\Phi+\epsilon)\\
=& \frac{\xi'}{\xi} \sigma^{-1}\left(\frac{\partial}{\partial t}-\Delta\right)\eta-\frac{\xi''}{\xi} \sigma^{-2}|\nabla\eta|^2+\frac{(\xi')^2}{\xi^2} \sigma^{-2}|\nabla \eta|^2\\
\le & C_4(\Phi+\epsilon)^{-1}
\end{split}
\end{equation}
by the choice of $\xi$ and \eqref{e-distance-1}. Hence there exists $C_5>0$ such that if
\begin{equation}\label{e-t-2}
t^\frac12\le C_5\alpha_1^4,
\end{equation}
then $t$ also satisfies \eqref{e-t-1} and $C_3\alpha_1^{-4}t^{1/2}<\frac\sigma 3$. Moreover, $C_5$ can be chosen so that, for such $t$, at every point either $d_t(p,x)+C_3\alpha_1^{-4}t^\frac12<\frac43\sigma$, or $d_t(p,x)+C_3\alpha_1^{-4}t^\frac12\ge\frac43\sigma$ and $d_t(p,x)\ge \alpha_1^4t^\frac12$. Hence \eqref{e-Phi-1} holds in the sense of barrier for $t\in (0, C_5^2\alpha_1^8]$.
Consider the function
$$
F=\log \operatorname{tr}_hg -Lv+m\log (\Phi+\epsilon)
$$
where $v=(\tau-t)\dot\phi+\phi-f+nt$ and $\tau=e^s-1$. Here $L, m>0$ are constants, independent of $i$ and $\alpha$, to be chosen later. Recall that $v$
satisfies
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) v=\operatorname{tr}_g (\omega_0-\tau \text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial f) \geq \beta \operatorname{tr}_g h,
\end{equation*}
and that
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) \log \operatorname{tr}_h g\le C_6\operatorname{tr}_g h
\end{equation*}
by Lemma \ref{l-a-1}, the torsion terms vanishing here. Let $L$ be defined by
\begin{equation}\label{e-L}
L\beta= C_6+1+\tau^{-1}.
\end{equation}
Note that by the A.M.-G.M. inequality and the definition of $\dot \phi$, we have
\begin{equation}\label{e-AMGM}
-\dot \phi \le n\log \operatorname{tr}_gh;\ \ \log \operatorname{tr}_hg\le \dot\phi +(n-1)\log\operatorname{tr}_gh.
\end{equation}
So
\begin{equation*}
\log \operatorname{tr}_gh\ge \frac{1}{ n(\tau L-1)+(n-1)}\left(\log \operatorname{tr}_hg-\tau L\dot\phi\right).
\end{equation*}
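For the reader's convenience, here is a hedged sketch (our reconstruction, not in the original text) of the two inequalities in \eqref{e-AMGM}. Writing $\lambda_1,\dots,\lambda_n$ for the eigenvalues of $g$ with respect to $h$, so that $\dot\phi=\log\frac{\widetilde\omega^n}{\theta_0^n}=\log\prod_j\lambda_j$:
\begin{equation*}
\operatorname{tr}_g h=\sum_j\lambda_j^{-1}\ \ge\ n\Big(\prod_j\lambda_j^{-1}\Big)^{1/n}
\ \Longrightarrow\ n\log\operatorname{tr}_g h\ \ge\ n\log n-\log\prod_j\lambda_j\ \ge\ -\dot\phi,
\end{equation*}
\begin{equation*}
\operatorname{tr}_h g=\sum_j\lambda_j\ \le\ \Big(\prod_k\lambda_k\Big)\Big(\sum_j\lambda_j^{-1}\Big)^{n-1}
\ \Longrightarrow\ \log\operatorname{tr}_h g\ \le\ \dot\phi+(n-1)\log\operatorname{tr}_g h ,
\end{equation*}
where the first line is the A.M.-G.M. inequality and the second uses that each product $\prod_{k\ne j}\lambda_k^{-1}$ appears (with positive coefficient) in the expansion of $\big(\sum_j\lambda_j^{-1}\big)^{n-1}$.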
Then in the sense of barrier,
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) F\le & -\operatorname{tr}_gh+m C_4 (\Phi+\epsilon)^{-1}\\
\le &-\exp\left(C_7 (\log \operatorname{tr}_hg-\tau L\dot\phi)\right)+m C_4 (\Phi+\epsilon)^{-1}\\
\le &-\exp\left(C_7F-C_8-C_7m\log(\Phi+\epsilon)\right)+m C_4 (\Phi+\epsilon)^{-1}\\
=&-(\Phi+\epsilon)^{-1}mC_4\left[\exp (C_7F-C_8-\log(mC_4))-1\right]
\end{split}
\end{equation*}
if $mC_7=1$, where we have used the upper bound of $\dot \phi$ and the bound of $\phi$ from Lemma \ref{slma-1}. So
\begin{equation*}
\begin{split}
&\left(\frac{\partial}{\partial t}-\Delta\right) (C_7F-C_8-\log(mC_4))\\ \le& -\frac{mC_4C_7}{ \Phi+\epsilon}\left[\exp (C_7F-C_8-\log(mC_4))-1\right]\\
\le&0
\end{split}
\end{equation*}
in the sense of barrier whenever $C_7F-C_8-\log(mC_4)>0$. Then by the maximum principle, Lemma \ref{max}, we conclude that
$$
C_7F-C_8-\log(mC_4)\le\sup_{t=0}\left(C_7F-C_8-\log(mC_4)\right).
$$
Letting $\epsilon\to0$ and using the definition of $\Phi$, the choice of $C_5$ and the bound on $|\phi|$, we conclude that in $ B_{g(t)}(p,\sigma)$,
\begin{equation}\label{e-trace-lower}
\log \operatorname{tr}_hg-L(\tau-t)\dot \phi \le C_9
\end{equation}
provided $t\in [0, C_5^2\alpha_1^8]$. On the other hand, as in \eqref{e-AMGM}, we have
\begin{equation*}
\begin{split}
\log \operatorname{tr}_gh\le& -\dot \phi+(n-1)\log\operatorname{tr}_hg\\
=&(n-1)\left(\log\operatorname{tr}_hg-L(\tau-t)\dot \phi\right)+\left((n-1)L(\tau-t)-1\right)\dot\phi\\
\le&C_{10}
\end{split}
\end{equation*}
provided
\begin{equation}\label{e-t-3}
Lt\le L\tau-1.
\end{equation}
Here we have used the upper bound on $\dot \phi$ from Lemma \ref{slma-1}.
Hence there is $\gamma_1>0$, independent of $\alpha$ and $i$, such that if $t$ satisfies \eqref{e-t-2} and \eqref{e-t-3}, then
$$
g_i(t)\ge \gamma_1^2h
$$
on $B_{g_i(t)}(p,\sigma)$. Let $\gamma_2<\tau$ be given by
$$
\gamma_2=\min\{C_5^2,L^{-1}(L\tau-1)\}\times (4C_1)^{-4},
$$
where $C_1$ and $C_5$ are the constants in \eqref{e-alpha-1} and \eqref{e-t-2} respectively and $L$ is given by \eqref{e-L}. If $t\in[0,\gamma_2\alpha^{8(n-1)}]$, then $t$ satisfies \eqref{e-t-2} and \eqref{e-t-3}. One can now check that the claim is true.
By \eqref{e-trace-lower} and Lemma \ref{slma-1}, we conclude that
$$
\widetilde g_i(t)\le C_{11}h
$$
on $B_{\widetilde g_i(t)}(p,\sigma)$ for $t\in[0,\gamma_2\alpha^{8(n-1)}]$. The upper bound in the lemma then follows by choosing a possibly smaller $\gamma_1$.
\end{proof}
For the Chern-Ricci flow the result is less satisfactory, because the distance function $d(x,t)$ does not behave as nicely as in the K\"ahler case. As before, under the assumptions of Theorem \ref{t-instant-complete}, let $g(t)$ be the Chern-Ricci flow constructed in that theorem. We have the following:
\begin{prop}\label{p-initial-CR}
With the same notation and assumptions as in Theorem \ref{t-instant-complete}, suppose $\operatorname{tr}_{g_0}h=o(\rho)$. Then $g(t)\rightarrow g_0$ as $t\rightarrow 0$ on $M$. The convergence is in the $C^\infty$ topology, uniformly on compact subsets of $M$.
\end{prop}
Note that $g_0$ may still be complete; however, it need not be equivalent to $h$, and the curvature of $g_0$ may be unbounded.
As before, $g(t)$ is the limit of solutions $g_i(t)$ of the unnormalized Chern-Ricci flow on $M\times[0, s)$ with initial data $g_0+\epsilon_i h$, where $\epsilon_i\to 0$. Here we may assume $s\leq 1$. We want to prove the following:
\begin{lma}\label{l-initial-CR-1} With the same notation and assumptions as in Proposition \ref{p-initial-CR}, let $S<\tau:=e^s-1$. Then for any precompact open subset $\Omega$ of $M$, there is $C>0$ such that
$$
C^{-1}h\le g_i(t)\le Ch
$$
on $\Omega\times[0, S]$ for all $i$.
\end{lma}
Suppose the lemma is true. Then Proposition \ref{p-initial-CR} will follow from the local estimates for the Chern-Ricci flow in \cite{ShermanWeinkove2013}. To prove the lemma, we first establish the following. Let $\phi_i$ be the potential for $g_i$.
\begin{sublma}\label{sl-initial-CR-1} Suppose
$$\liminf_{\rho\to\infty}\rho^{-1}\log\frac{\omega_0^n}{\theta_0^n}\ge0.
$$
Then for any $\sigma>0$ (small enough, independent of $i$), there is a constant $C>0 $ independent of $i$ such that
\begin{equation*}
\dot\phi_i\ge -C -\sigma\rho
\end{equation*}
on $M\times[0, S]$.
\end{sublma}
\begin{proof} In the following, we write $\phi$ for $\phi_i$ and $g(t)$ for $g_i(t)$ when no confusion arises. Note that $g(t)$ is uniformly equivalent to $h$. Let $\sigma>0$.
Let $F=-(\tau-t)\dot \phi-\phi+f-nt-\sigma \rho $. By \eqref{e-a-1} and \eqref{e-a-2}, for $0\le t\le S$ we have
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right)\left(-(\tau-t)\dot \phi-\phi\right) =& (\tau-t)\operatorname{tr}_g\text{\rm Ric}(\theta_0)+\dot\phi-\dot\phi+\operatorname{tr}_g(\sqrt{-1}\partial\bar\partial \phi)\\
=&(\tau-t)\operatorname{tr}_g\text{\rm Ric}(\theta_0)+\left(n+t\operatorname{tr}_g(\text{\rm Ric}(\theta_0))-\operatorname{tr}_g(\theta_0)\right)\\
=&\tau\operatorname{tr}_g\text{\rm Ric}(\theta_0)+n-\operatorname{tr}_g\theta_0 .
\end{split}
\end{equation*}
Hence, using the fact that
\begin{equation*}
\omega_0-\tau\text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial f\ge \beta\theta_0,
\end{equation*}
we have
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) F\le &\tau\operatorname{tr}_g\text{\rm Ric}(\theta_0)-\operatorname{tr}_g\theta_0-\Delta f+\sigma \Delta \rho\\
\le& (-\beta+ \sigma C_1)\operatorname{tr}_g\theta_0\\
<&0
\end{split}
\end{equation*}
for some constant $C_1$ independent of $\sigma$ and $i$, provided $\sigma$ is chosen with $C_1\sigma<\beta$.
Since $F$ is bounded from above, by the maximum principle, Lemma \ref{max}, we conclude that
$$
\sup_{M\times[0, S]}F\le \sup_{M\times\{0\}}F.
$$
At $t=0$,
$$
F=-\tau\dot\phi-\sigma\rho+f.
$$
By the assumption, we conclude that $F\le C(\sigma)$ at $t=0$. Hence
$$
F\le C(\sigma)
$$
on $M\times[0, S]$. Since $\phi$ and $f$ are bounded, the sublemma follows.
\end{proof}
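The step ``by the assumption, $F\le C(\sigma)$ at $t=0$'' can be unpacked as follows; this is a hedged reconstruction (it uses $\dot\phi(0)=\log\frac{(\omega_0+\epsilon_i\theta_0)^n}{\theta_0^n}\ge\log\frac{\omega_0^n}{\theta_0^n}$, which is how we read the set-up):
\begin{equation*}
-\tau\dot\phi(0)\;\le\;-\tau\log\frac{\omega_0^n}{\theta_0^n}
\;\le\;\sigma\rho+C(\sigma)
\quad\text{on }M,
\end{equation*}
since the assumption $\liminf_{\rho\to\infty}\rho^{-1}\log\frac{\omega_0^n}{\theta_0^n}\ge0$ gives $\log\frac{\omega_0^n}{\theta_0^n}\ge-\frac{\sigma}{\tau}\rho-C$ outside a compact set, while a compact set contributes only a constant. Together with the boundedness of $f$ this yields $F(\cdot,0)=-\tau\dot\phi(0)-\sigma\rho+f\le C(\sigma)$.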
\begin{sublma}\label{sl-initial-CR-2} With the same notation as in Sublemma \ref{sl-initial-CR-1}, suppose $\operatorname{tr}_{g_0}h=o(\rho)$. Then
$$
\operatorname{tr}_hg_i\le C\exp(C'\rho)
$$
on $M\times[0, S]$ for some positive constants $C, C'$ independent of $i$.
\end{sublma}
\begin{proof} We again denote $g_i$ by $g$, and write $\omega_{0}$ for the K\"ahler form of the initial metric $g_i(0)=g_0+\epsilon_ih$.
Note that
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right)\phi=&\dot\phi-\Delta \phi\\
=&\dot \phi-\left(n-\operatorname{tr}_g\omega_{0}+t\operatorname{tr}_g(\text{\rm Ric}(\theta_0))\right)\\
\ge& \dot \phi-n+ \operatorname{tr}_g\omega_{0}+\frac{t\beta}{\tau}\operatorname{tr}_gh-\frac{t}{\tau}\operatorname{tr}_g\omega_0-\frac t\tau\Delta f\\
\ge &\dot \phi- n-\frac t\tau\Delta f+\left(1-\frac S\tau\right)\operatorname{tr}_g\omega_{0}.
\end{split}
\end{equation*}
Then we have
\begin{equation}\label{e-initial-CR-1}
\left(\frac{\partial}{\partial t}-\Delta\right) \left(\phi+ nt-\frac t\tau f\right)\ge \dot\phi+\left(1-\frac S\tau\right)\operatorname{tr}_g\omega_{0}-C_0.
\end{equation}
Since $|\phi|$ is bounded by a constant independent of $i$ on $M\times[0, S]$ (see Lemma \ref{l-uudot-upper-1} and Lemma \ref{l-all-u}), there are constants $C_1, C_2>0$ such that $\xi:=\phi +nt-\frac t\tau f+C_1\ge 1$ and $\xi\le C_2$ on $M\times[0, S]$. Here and below $C_j$ denotes a positive constant independent of $i$.
Let $\Phi( \varsigma)=2-e^{-\varsigma}$ for $\varsigma\in \mathbb{R}$. Since $1\le\xi\le C_2$, we have
\begin{equation}\label{e-initia-CR-2}
\left\{
\begin{array}{ll}
\Phi(\xi)\ge & 1, \\
\Phi'(\xi)\ge & e^{-C_2},\\
\Phi''(\xi)\le &-e^{-C_2}
\end{array}
\right.
\end{equation}
on $M\times[0, S]$. Next, let $P(\varsigma)$ be a positive function on $\mathbb{R}$ with $P'>0$.
Define
$$
F(x,t)=\Phi(\xi)P(\rho).
$$
Let $\Upsilon=\operatorname{tr}_hg$, where $g=g_i$. Note that $F$ is a smooth function of $(x,t)$ with $F\to\infty$ near infinity. By Lemma \ref{l-a-1}, we have
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) (\log \Upsilon-F)=\mathrm{I+II+III}-\left(\frac{\partial}{\partial t}-\Delta\right) F
\end{equation*}
where
\begin{equation*}
\mathrm{I}\le 2\Upsilon^{-2}\operatorname{Re}\left( h^{i\bar l} g^{k\bar q}( T_0)_{ki\bar l} \hat \nabla_{\bar q}\Upsilon\right),
\end{equation*}
\begin{equation*}
\mathrm{II}=\Upsilon^{-1} g^{i\bar{j}} h^{k\bar l}g_{k\bar q} \left(\hat \nabla_i \overline{(\hat T)_{jl}^q}- \hat h^{p\bar q}\hat R_{i\bar lp\bar j}\right),
\end{equation*}
and
\begin{equation*}
\mathrm{III}=-\Upsilon^{-1} g^{{i\bar{j}}} h^{k\bar l}\left(\hat \nabla_i\left(\overline{( T_0)_{jl\bar k}} \right) +\hat \nabla_{\bar l}\left( {( T_0)_{ik\bar j} }\right)-\overline{ (\hat T)_{jl}^q}( T_0)_{ik\bar q}^p \right).
\end{equation*}
Let $\Theta=\operatorname{tr}_gh$. Suppose $\log \Upsilon-F$ attains a positive maximum at $(x_0,t_0)$ with $t_0>0$. Then at this point,
$$
\Upsilon^{-1}\hat\nabla \Upsilon=\hat\nabla F,
$$
and so
\begin{equation*}
\begin{split}
\mathrm{I}\le &2\Upsilon^{-2}\operatorname{Re}\left( h^{i\bar l} g^{k\bar q}( T_0)_{ki\bar l} \hat \nabla_{\bar q}\Upsilon\right)\\
\le &C\Upsilon^{-1}\Theta^\frac12|\nabla F|\\
\le&C'\Upsilon^{-1}\Theta^\frac12\left(P|\nabla \xi|+P'\Theta^\frac12\right),
\end{split}
\end{equation*}
because $|\partial\rho|_h$ is bounded; here the norms are taken with respect to the evolving metric $g(t)$. Moreover,
$$
\mathrm{II}\le C\Theta,
$$
$$
\mathrm{III}\le C\Upsilon^{-1}\Theta.
$$
Here $C$ and $C'$ are positive constants independent of $i$. On the other hand,
\begin{equation*}
\begin{split}
&\left(\frac{\partial}{\partial t}-\Delta\right) F\\
=&P\left(\frac{\partial}{\partial t}-\Delta\right)\Phi+2\operatorname{Re}\left(g^{i\bar{j}}\partial_i\Phi\partial_{\bar j}P\right)+\Phi\left(\frac{\partial}{\partial t}-\Delta\right) P\\
\ge&P\left(\Phi'\left(\frac{\partial}{\partial t}-\Delta\right) \xi -\Phi''|\nabla\xi|^2\right)-C_4\Phi'P'\Theta^\frac12|\nabla\xi|-C_4\Theta\Phi (P'+|P''|)\\
\ge&P\Phi'\dot\phi+e^{-C_2}P\left(1-\frac S\tau\right)\operatorname{tr}_g\omega_0-C_0P+e^{-C_2}P|\nabla\xi|^2-\frac12e^{-C_2}P|\nabla\xi|^2
\\
&-C_5\frac{(P')^2}{P}\Theta-C_4\Theta(P'+|P''|).
\end{split}
\end{equation*}
Here we have used \eqref{e-initia-CR-2}, the fact that $|\partial\rho|_h$ and $|\partial\bar\partial\rho|_h$ are bounded, and $\Phi(\xi)\le 2$.
So at $(x_0,t_0)$,
\begin{equation*}
\begin{split}
&\left(\frac{\partial}{\partial t}-\Delta\right) (\log \Upsilon-F)\\
\le& C_3\left( \Upsilon^{-1}\Theta^\frac12\left(P|\nabla \xi|+P'\Theta^\frac12\right) + \Upsilon^{-1}\Theta+ \Theta\right)\\
&- P\Phi'\dot\phi-e^{-C_2}P\left(1-\frac S\tau\right)\operatorname{tr}_g\omega_0+C_0P-\frac12e^{-C_2}P|\nabla\xi|^2\\
&+\Theta\left(C_5\frac{(P')^2}{P}+
C_4(P'+|P''|)\right)\\
\le&- P\Phi'\dot\phi-e^{-C_2}P\left(1-\frac S\tau\right)\operatorname{tr}_g\omega_0+C_0P+\left(-\frac12e^{-C_2} +\Upsilon^{-1}\right)P|\nabla\xi|^2
\\
&+C_6\Theta\left(\Upsilon^{-1}+1+\Upsilon^{-1}P'+P'+\Upsilon^{-1}P+\frac{(P')^2}{P}+|P''|\right).
\end{split}
\end{equation*}
Now
\begin{equation*}
-\dot\phi\le c(n)\log \Theta.
\end{equation*}
Suppose $\omega_0\ge \frac1{Q(\rho)}\theta_0$ with $Q>0$, and suppose $\Upsilon^{-1}\le \frac12e^{-C_2}$ at $(x_0,t_0)$. Then at $(x_0,t_0)$ we have
\begin{equation*}
\begin{split}
&\left(\frac{\partial}{\partial t}-\Delta\right) (\log \Upsilon-F)\\
\le&C_7P(\log\Theta+1)+\Theta\left[-C_8PQ^{-1}+C_9 \left( 1+ P'+\frac{(P')^2}{P}+|P''|\right)\right].
\end{split}
\end{equation*}
By the assumption on $\operatorname{tr}_{g_0}h$, for any $\sigma>0$ there is $\rho_0>0$ such that $\operatorname{tr}_{g_0}h\le \sigma\rho$ whenever $\rho\ge \rho_0$. Hence we can find $C(\sigma)$ such that
$$
g_0\ge \frac{1}{\sigma(\rho+C(\sigma))}h
$$
and $\rho+C(\sigma)\ge 1$
on $M$. Let $Q(\rho)=\sigma(\rho+C(\sigma))$ and $P(\rho)=\rho+C(\sigma)$. Then the above inequality becomes
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) (\log \Upsilon-F)
\le&C_7P\log (e\Theta)+\Theta\left(-C_8\sigma^{-1}+3C_9\right)\\
\le& C_7P\log (e\Theta)-\frac12C_8\Theta
\end{split}
\end{equation*}
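The passage to the last display uses only elementary properties of the chosen $P$; the following hedged computation (our reconstruction) records them:
\begin{equation*}
P'\equiv1,\quad P''\equiv0,\quad \frac{(P')^2}{P}=\frac1P\le1
\ \Longrightarrow\
C_9\left(1+P'+\frac{(P')^2}{P}+|P''|\right)\le 3C_9,
\qquad PQ^{-1}=\sigma^{-1},
\end{equation*}
so the bracket in the previous estimate is at most $-C_8\sigma^{-1}+3C_9\le-\tfrac12 C_8\sigma^{-1}\le-\tfrac12C_8$ once $\sigma\le1$ is small enough.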
if we choose $\sigma$ small enough, independent of $i$. Since $\log\Upsilon-F\to-\infty$ near infinity, uniformly in $t\in [0, S]$, and $\log\Upsilon-F<0$ at $t=0$, the maximum principle shows that either $\log\Upsilon-F\le 0$ on $M\times[0, S]$, or there are $t_0>0$ and $x_0\in M$ such that
$\log\Upsilon-F$ attains a positive maximum at $(x_0,t_0)$. If at this point $\Upsilon^{-1}\ge\frac12 e^{-C_2}$, then
$$
\log\Upsilon-F\le C_{10}.
$$
Otherwise, at $(x_0,t_0)$ we have
\begin{equation*}
0\le C_7P\log (e\Theta)-\frac12C_8\Theta.
\end{equation*}
Hence at this point either $\Theta\le C_{11}$, which implies $\Upsilon\le C_{12}$ because $\dot\phi\le C$ for some constant independent of $i$, so that
$$
\log\Upsilon-F\le\log C_{12};
$$
or
$$
\Theta\le C_{13}P^2,
$$
which implies $\log \Upsilon\le C_{14}(1+\log P)$ and hence
\begin{equation*}
\log \Upsilon-F\le C_{15}.
\end{equation*}
From these considerations, we conclude that the sublemma is true.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l-initial-CR-1}] The lemma follows from Sublemmas \ref{sl-initial-CR-1} and \ref{sl-initial-CR-2}.
\end{proof}
\section{Long time behaviour and convergence}
In this section, we first study the long-time behaviour of the solution constructed in Theorem \ref{t-instant-complete}. Namely, we will show the following theorem:
\begin{thm}\label{longtime}
Under the assumptions of Theorem \ref{t-instant-complete}, suppose in addition that $$-\text{\rm Ric}(h)+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0$$ for some $f\in C^\infty(M)\cap L^\infty(M)$ and some $\beta>0$. Then the solution constructed in Theorem \ref{t-instant-complete} is a long-time solution and converges to the unique complete K\"ahler-Einstein metric with negative scalar curvature on $M$.
\end{thm}
Before we prove Theorem \ref{longtime}, let us prove a lower bound on $\dot u$ which will be used in the convergence argument. Once we have the uniform equivalence of metrics, we can obtain a better lower bound on $\dot{u}$.
\begin{lma}\label{du-convergence} Assume the solution constructed in Theorem \ref{t-instant-complete} is a long-time solution. Then there is a positive constant $C$ such that
\begin{equation*}
\dot{u}\geq-Ce^{-\frac t2}
\end{equation*}
on $M\times[2, \infty)$.
\end{lma}
\begin{proof} Since we do not have an upper bound on $g(t)$ as $t\to 0$, we shift the initial time of the flow to $t=1$. Note that
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) (e^t\dot{u}-f)=&-\operatorname{tr}_{ g}(\text{\rm Ric}(h)+g(1))+\Delta f\\
\geq&-\operatorname{tr}_{ g}g(1)\geq-C_1.
\end{split}
\end{equation*}
Consider $Q=e^t\dot{u}-f+(C_1+1)t$. Then we can use the maximum principle argument as before to obtain $Q(x, t)\geq \inf_MQ(\cdot,0)$. Hence
\begin{equation*}
e^t\dot{u}\geq -C_2-(C_1+1)t,
\end{equation*}
which implies
\begin{equation*}
\dot{u}\geq -C_3e^{-\frac t2}
\end{equation*}
on $M\times[1, \infty)$. Shifting the time back, we obtain the result.
\end{proof}
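The last implication in the proof uses only that a linear function is dominated by $e^{t/2}$; a hedged one-line justification (our addition) is:
\begin{equation*}
\dot u \;\ge\; -\big(C_2+(C_1+1)t\big)e^{-t}
\;=\;-\Big(\big(C_2+(C_1+1)t\big)e^{-t/2}\Big)e^{-t/2}
\;\ge\;-C_3e^{-t/2},
\qquad C_3:=\sup_{t\ge1}\big(C_2+(C_1+1)t\big)e^{-t/2}<\infty .
\end{equation*}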
\begin{proof}[Proof of Theorem \ref{longtime}] The assumption $-\text{\rm Ric}(h)+\sqrt{-1}\partial\bar\partial f\geq \beta \theta_0$ implies that for all $s$ large enough,
\begin{equation*}
-\text{\rm Ric}(h)+e^{-s}(\omega_0+\text{\rm Ric}(h))+\sqrt{-1}\partial\bar\partial \hat f\geq \frac{\beta}{2} \theta_0.
\end{equation*}
Here $\hat f=(1-e^{-s})f$ is a bounded function on $M$. By Theorem \ref{t-instant-complete} and Lemma \ref{l-trace-2}, \eqref{e-MP-1} has a smooth solution on $M\times(0, \infty)$ with $g(t)$ uniformly equivalent to $h$ on any $[a, \infty)\subset(0, \infty)$. Combining the local higher order estimates for the Chern-Ricci flow (see \cite{ShermanWeinkove2013}, for example) with Lemma \ref{du-convergence}, we conclude that $u(t)$ converges smoothly and locally uniformly to a smooth function $u_\infty$ as $t\to\infty$, with $\log\frac{\omega^n_\infty}{\theta_0^n}=u_\infty$. Taking $\sqrt{-1}\partial\bar\partial$ on both sides, we obtain
\begin{equation*}
-\text{\rm Ric}(g_\infty)+\text{\rm Ric}(h)=\sqrt{-1}\partial\bar\partial u_\infty,
\end{equation*}
which implies $-\text{\rm Ric}(g_\infty)=g_\infty$. Obviously, $g_\infty$ is K\"ahler. Uniqueness follows from \cite[Theorem 3]{Yau1978} (see also Proposition 5.1 in \cite{HLTT}).
\end{proof}
Taking $g_0=h$ in the theorem, we have:
\begin{cor} Let $(M,h)$ be a complete Hermitian manifold satisfying the assumptions of Theorem \ref{longtime}. Then the Chern-Ricci flow with initial data $h$ exists on $M\times[0,\infty)$ and converges uniformly on compact subsets to the unique complete K\"ahler-Einstein metric with negative scalar curvature on $M$.
\end{cor>
For K\"ahler R flow, we have the following general phenomena related to Theorem \ref{longtime}.
{\beta}egin{equation}gin{thm}\langlebel{convergence-krf}
Let $(M,h)$ be a smooth complete Hermitian manifold with
$\mathrm{BK}(h) \geq -K_0$ and $|\nablaabla^h_{{\beta}ar\partialartial}T_h|_h\leq K_0$ for some constant $K_0\geq 0$. Moreover, assume {\beta}egin{equation}e -\text{\rm Ric}(h)+\sqrt{-1}\partial\bar\partial f \geq k h \epsilonnd{equation}e
for some constant $k>0$ and function $f\in C^\infty(M)\cap L^\infty(M)$. Suppose $g(t)$ is a smooth complete solution to the normalized K\"ahler R flow on $M\times[0,+\infty)$ with $g(0)=g_0$ which satisfies {\beta}egin{equation}e
\frac{\det g_0}{\det h}\leq \Lambda \epsilonnd{equation}e and {\beta}egin{equation}e R(g_0)\geq -L \epsilonnd{equation}e for some $\Lambda,L>0$. Then $g(t)$ satisfies {\beta}egin{equation}e
C^{-1}h\leq g(t)\leq C h \epsilonnd{equation}e on $M\times[1, \infty)$ for some constant $C=C(n, K_0, k, ||f||_\infty, \Lambda, L)>0$. In particular, $ g(t)$ converges to the unique smooth complete K\"ahler-Einstein metric with negative scalar curvature.
\epsilonnd{thm}
\begin{proof} We may assume $k=1$; otherwise we rescale $h$. We consider the corresponding unnormalized K\"ahler-Ricci flow $\widetilde g(s)=e^{t}g(t)$ with $s=e^t-1$. The Monge-Amp\`ere equation corresponding to the unnormalized K\"ahler-Ricci flow is
\begin{equation*}
\left\{\begin{array}{l}
\displaystyle \frac{\partial}{\partial s}\phi=\log\frac{(\omega_0-s\text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial\phi)^n}{\theta_0^n}, \\
\phi(0)=0.
\end{array} \right.
\end{equation*}
Here $\theta_0$ is the K\"ahler form of $h$. By the assumption $R(g_0)\geq -L $, Proposition 2.1 in \cite{Chen2009} and Lemma 5.1 in \cite{HLTT}, together with the fact that
\begin{equation*}
\left(\frac{\partial}{\partial s}-\widetilde\Delta\right) \widetilde R\geq \frac{1}{n}\widetilde R^2,
\end{equation*}
we conclude that $\widetilde R:=R(\widetilde g(s))\geq \max\{-L, -\frac ns\}$ on $M\times[0, \infty)$. Since $\ddot{\phi}=-R(\widetilde g(s))$, we have $\dot{\phi}\leq C(L, \Lambda)$ on $M\times[0, 1]$ and $\dot{\phi}\leq C(L, \Lambda)+n\log s$ on $M\times[1, \infty)$.
For the lower bound on $\dot{\phi}$, we consider $Q=-\dot{\phi}+f$. We compute:
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial s}-\widetilde\Delta\right) Q=&-\left(\frac{\partial}{\partial s}-\widetilde\Delta\right) \dot{\phi}-\Delta f \\
=&\operatorname{tr}_{\widetilde g}[\text{\rm Ric}(\theta_0)-\sqrt{-1}\partial\bar\partial f]\\
\leq&-\operatorname{tr}_{\widetilde g}h\\
\leq&-ne^{-\frac{\dot{\phi}}{n}}\\
\leq&-ne^{\frac{1}{n}(Q-f)}\\
\leq&-C(n, \|f\|_\infty)e^{\frac{Q}{n}}\\
\leq&-C(n, \|f\|_\infty)Q^2,
\end{split}
\end{equation*}
whenever $Q>0$.
Then, by the same argument as in the proof of Proposition 2.1 in \cite{Chen2009}, we conclude that $\dot{\phi}\geq -C(n, \lambda, \|f\|_\infty)$ on $M\times[0, \infty)$, where $\lambda$ is a lower bound for $\frac{\det g_0}{\det h}$. However, this estimate is not enough for later applications, so we consider $F=-\dot{\phi}+f+n\log s$. Similarly we obtain
\begin{equation*}
\left(\frac{\partial}{\partial s}-\widetilde\Delta\right) F\leq -C(n, \|f\|_\infty)F^2
\end{equation*}
whenever $F>0$. By Lemma 5.1 in \cite{HLTT}, we conclude that $F\leq\frac{C(n, \|f\|_\infty)}{s}$ on $M\times[0, \infty)$. Therefore,
\begin{equation*}
\dot{\phi}\geq -C(n, \|f\|_\infty)+n\log s
\end{equation*}
on $M\times[1, \infty)$.
To summarize, the bounds on $\dot{\phi}$ are:

(i) on $M\times[0, 1]$: $-C(n, \lambda, \|f\|_\infty)\leq \dot{\phi}\leq C(L, \Lambda)$;

(ii) on $M\times[1, \infty)$: $-C(n, \|f\|_\infty)+n\log s\leq \dot{\phi}\leq C(L, \Lambda)+n\log s$.
We now return to the normalized K\"ahler-Ricci flow $g(t)$. Since
\begin{equation*}
\log\frac{\det g(t)}{\det h}=-n\log (s+1)+\frac{\partial}{\partial s}\phi(s),
\end{equation*}
where $s=e^t-1$, we obtain
\begin{equation*}
-C(n, \|f\|_\infty)\leq\dot{u}(t)+u(t)\leq C(L, \Lambda)
\end{equation*}
on $M\times[\log 2, \infty)$, where $u$ solves \eqref{e-MP-1}.
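The two-sided bound above follows from (ii) together with the displayed relation; the cancellation of the $n\log s$ terms can be made explicit as follows (a hedged sketch, our addition):
\begin{equation*}
\dot u(t)+u(t)=\log\frac{\det g(t)}{\det h}
=\dot\phi(s)-n\log(s+1),
\qquad s=e^t-1\ge1\ \text{for }t\ge\log2,
\end{equation*}
\begin{equation*}
-C(n,\|f\|_\infty)-n\log\frac{s+1}{s}\;\le\;\dot u+u\;\le\;C(L,\Lambda)-n\log\frac{s+1}{s}\;\le\;C(L,\Lambda),
\end{equation*}
and $0\le n\log\frac{s+1}{s}\le n\log2$ for $s\ge1$, so the extra term can be absorbed into the constants.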
Next, we consider $G(x, t)=\log \operatorname{tr}_h g(t)-A(\dot{u}(t)+u(t)+f)$, where $A$ is a large constant to be chosen. As in Section 1, we have
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) \log \operatorname{tr}_h g(t)\leq C(n, K_0)\operatorname{tr}_{ g(t)}h-1.
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) G\leq& C(n, K_0)\operatorname{tr}_{ g(t)}h-1+An+A\left(\operatorname{tr}_{ g}\text{\rm Ric}(h)+\operatorname{tr}_{ g}\sqrt{-1}\partial\bar\partial f\right)\\
\leq& (-A+C(n, K_0))\operatorname{tr}_{g(t)}h-1+An\\
\leq& -\operatorname{tr}_{g(t)}h+An,
\end{split}
\end{equation*}
where we take $A=C(n, K_0)+1$.
On the other hand,
\begin{equation*}
\operatorname{tr}_h g(t)\leq\frac{1}{(n-1)!}\cdot(\operatorname{tr}_{ g(t)}h)^{n-1}\cdot\frac{\det g}{\det h}\leq C(n, L, \Lambda)(\operatorname{tr}_{ g(t)}h)^{n-1}.
\end{equation*}
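The inequality $\operatorname{tr}_hg\le\frac1{(n-1)!}(\operatorname{tr}_gh)^{n-1}\frac{\det g}{\det h}$ is pointwise linear algebra; the following hedged verification (our addition) records the standard argument:
\begin{equation*}
\Big(\sum_j\lambda_j^{-1}\Big)^{n-1}\;\ge\;(n-1)!\sum_{j}\prod_{k\ne j}\lambda_k^{-1}
\ \Longrightarrow\
\frac{1}{(n-1)!}\big(\operatorname{tr}_gh\big)^{n-1}\frac{\det g}{\det h}
\;\ge\;\Big(\prod_k\lambda_k\Big)\sum_j\prod_{k\ne j}\lambda_k^{-1}
\;=\;\sum_j\lambda_j\;=\;\operatorname{tr}_hg,
\end{equation*}
where $\lambda_1,\dots,\lambda_n$ denote the eigenvalues of $g$ with respect to $h$; the first inequality holds because each product $\prod_{k\ne j}\lambda_k^{-1}$ appears in the multinomial expansion of the left-hand side with coefficient $(n-1)!$.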
Then we have
\begin{equation*}
\begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right) G\leq&-C(n, L, \Lambda)(\operatorname{tr}_h g(t))^{\frac{1}{n-1}}+C(n,K_0)\\
=&-C(n, L, \Lambda)e^{\frac{1}{n-1}\log \operatorname{tr}_hg(t)}+C(n,K_0)\\
=&-C(n, L, \Lambda)e^{\frac{1}{n-1}[G+A(\dot{u}(t)+u(t)+f)]}+C(n,K_0)\\
\leq&-C(n, L, \Lambda, \|f\|_\infty)e^{\frac{1}{n-1}G}+C(n,K_0)\\
\leq&-C(n, L, \Lambda, \|f\|_\infty)G^2+C(n,K_0),
\end{split}
\end{equation*}
whenever $G>0$.
By a similar argument to the proof of Lemma 5.1 in \cite{HLTT}, we conclude that $G\leq C(n, L, \Lambda, \|f\|_\infty, K_0)$ on $M\times[1, \infty)$. The difference here is that we consider the normalized K\"ahler-Ricci flow instead of the unnormalized one. Perelman's distance distortion estimate for the normalized K\"ahler-Ricci flow reads
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) d_t(x_0, x)\geq -\frac{5(n-1)}{3}r_0^{-1}-d_t(x_0, x).
\end{equation*}
We then consider $t\cdot\phi\!\left(\frac{1}{Ar_0}\left[e^t d_t(x_0, x)+\frac{5(n-1)e^t}{3}r_0^{-1}\right]\right)\cdot G(x,t)$, and the result follows from the same argument as in the proof of Lemma 5.1 in \cite{HLTT}.
This implies
\begin{equation*}
g(t)\leq C(n, L, \Lambda, \|f\|_\infty, K_0)h
\end{equation*}
on $M\times[1, \infty)$.
For the lower bound, combining this with $e^{\dot{u}(t)+u(t)}=\frac{\det g}{\det h}$, we obtain
\begin{equation*}
g(t)\geq C^{-1}(n, L, \Lambda, \|f\|_\infty, K_0)h
\end{equation*}
on $M\times[1, \infty)$.
Once we have the uniform equivalence of the metrics along the normalized K\"ahler-Ricci flow, the convergence follows from the same argument as in the proof of Theorem 5.1 in \cite{HLTT}. This completes the proof of Theorem \ref{convergence-krf}.
\end{proof}
\appendix
\section{Some basic relations}
Let $g(t)$ be a solution to the Chern-Ricci flow
$$
\partial_tg=-\text{\rm Ric}(g),
$$
and let $h$ be another Hermitian metric. Let $\omega(t)$ be the K\"ahler form of $g(t)$ and $\theta_0$ be the K\"ahler form of $h$. Let
$$
\phi(t)=\int_0^t\log \frac{\omega^n(s)}{\theta_0^n}\,ds.
$$
Then
\begin{equation}\label{e-a-1}
\omega(t)=\omega(0)-t\text{\rm Ric}(\theta_0)+\sqrt{-1}\partial\bar\partial\phi.
\end{equation}
Let $\dot\phi=\frac{\partial}{\partial t}\phi$. Then
\begin{equation}\label{e-a-2}
\left(\frac{\partial}{\partial t}-\Delta\right)\dot\phi=-\operatorname{tr}_g(\text{\rm Ric}(\theta_0)),
\end{equation}
where $ \Delta$ is the Chern Laplacian with respect to $ g$.
On the other hand, if $g$ is as above, the solution $\widetilde g$ of the corresponding normalized Chern-Ricci flow with the same initial data,
$$
\partial_t\widetilde g=-\text{\rm Ric}(\widetilde g)-\widetilde g,
$$
is given by
$$
\widetilde g(x,t)=e^{-t}g(x,e^{t}-1).
$$
The corresponding potential $u$ is given by
$$
u(t)=e^{-t}\int_0^te^s\log \frac{\widetilde \omega^n(s)}{\theta_0^n}\,ds,
$$
where $\widetilde \omega(s)$ is the K\"ahler form of $\widetilde g(s)$. Also,
\begin{equation}\label{e-a-3}
\widetilde\omega(t)=-\text{\rm Ric}(\theta_0)+e^{-t}(\text{\rm Ric}(\theta_0)+\omega(0))+\sqrt{-1}\partial\bar\partial u
\end{equation}
and
\begin{equation}\label{e-a-5}
\left(\frac{\partial}{\partial t}-\widetilde \Delta\right)(\dot u+u)=-\operatorname{tr}_{\widetilde g}\text{\rm Ric}(\theta_0)-n,
\end{equation}
where $\widetilde \Delta$ is the Chern Laplacian with respect to $\widetilde g$.
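As a consistency check (a hedged computation we add here, using only the scale invariance of the Chern-Ricci form), one can verify directly that the rescaling above solves the normalized equation:
\begin{equation*}
\partial_t\widetilde g(x,t)
=-e^{-t}g(x,e^t-1)+e^{-t}\,\partial_sg(x,s)\big|_{s=e^t-1}\cdot e^{t}
=-\widetilde g(x,t)-\text{\rm Ric}\big(g(x,e^t-1)\big)
=-\widetilde g(x,t)-\text{\rm Ric}\big(\widetilde g(x,t)\big),
\end{equation*}
where the last equality uses that $\text{\rm Ric}$ is unchanged when the metric is scaled by a positive constant.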
\begin{lma}[See \cite{TosattiWeinkove2015,Lee-Tam}]\label{l-a-1} Let $g(t)$ be a solution to the Chern-Ricci flow and let $\Upsilon=\operatorname{tr}_{ h}g$ and $\Theta=\operatorname{tr}_g h$. Then
\begin{equation*}
\left(\frac{\partial}{\partial t}-\Delta\right) \log \Upsilon=\mathrm{I+II+III}
\end{equation*}
where
\begin{equation*}
\mathrm{I}\le 2\Upsilon^{-2}\operatorname{Re}\left( h^{i\bar l} g^{k\bar q}( T_0)_{ki\bar l} \hat \nabla_{\bar q}\Upsilon\right),
\end{equation*}
\begin{equation*}
\mathrm{II}=\Upsilon^{-1} g^{i\bar{j}} \hat h^{k\bar l}g_{k\bar q} \left(\hat \nabla_i \overline{(\hat T)_{jl}^q}- \hat h^{p\bar q}\hat R_{i\bar lp\bar j}\right),
\end{equation*}
and
\begin{equation*}
\mathrm{III}=-\Upsilon^{-1} g^{{i\bar{j}}} h^{k\bar l}\left(\hat \nabla_i\left(\overline{( T_0)_{jl\bar k}} \right) +\hat \nabla_{\bar l}\left( {( T_0)_{ik\bar j} }\right)-\overline{ (\hat T)_{jl}^q}( T_0)_{ik\bar q}^p \right),
\end{equation*}
where $T_0$ is the torsion of $g_0=g(0)$, $\hat T$ is the torsion of $h$ and $\hat\nabla$ denotes covariant differentiation with respect to the Chern connection of $h$.
\end{lma}
\section{A maximum principle}\label{s-max}
We have the following maximum principle; see \cite{HLTT} for example.
\begin{lma}\label{max} Let $(M^n,h)$ be a complete non-compact Hermitian manifold satisfying the following condition: there exists a smooth positive real exhaustion function $\rho$ such that $|\partial \rho|^2_h+|\sqrt{-1}\partial\bar\partial \rho|_h\leq C_1$. Suppose $g(t)$ is a solution to the Chern-Ricci flow on $M\times[0,S)$, and assume that for any $0<S_1<S$ there is $C_2>0$ such that
$$
C_2^{-1}h\le g(t)
$$
for $0\leq t\le S_1$.
Let $f$ be a smooth function on $M\times[0,S)$ which is bounded from above and satisfies
$$
\left(\frac{\partial}{\partial t}-\Delta\right) f\le0
$$
on $\{f>0\}$ in the sense of barrier. If $f\le 0$ at $t=0$, then $f\le 0$ on $M\times[0,S)$.
\end{lma}
We say that
$$
\left(\frac{\partial}{\partial t}-\Delta\right) f\le \phi
$$
holds in the sense of barrier if the following is true: for fixed $t_1>0$ and $x_1$, and for any $\epsilon>0$, there is a $C^2$ function $\sigma(x)$ defined near $x_1$ such that $\sigma(x_1)=f(x_1,t_1)$, $\sigma(x)\le f(x,t_1)$ near $x_1$, and at $(x_1,t_1)$,
\begin{equation*}
\frac{\partial_-}{\partial t}f(x,t)-\Delta \sigma(x)\le \phi(x)+\epsilon.
\end{equation*}
Here
\begin{equation*}
\frac{\partial_-}{\partial t}f(x,t)=\liminf_{h\to 0^+}\frac{f(x,t)-f(x,t-h)}h
\end{equation*}
for a function $f(x,t)$.
\begin{thebibliography}{1000}
\bibitem{Aubin} Aubin, T., {\sl \'Equations du type Monge-Amp\`ere sur les vari\'et\'es k\"ahleriennes compactes}, C. R. Acad. Sci. Paris S\'er. A-B, \textbf{283} (1976), no. 3, Aiii, A119-A121.
\bibitem{BG2013} Boucksom, S.; Guedj, V., {\sl Regularizing properties of the K\"ahler-Ricci flow}, in: An Introduction to the K\"ahler-Ricci Flow, Lecture Notes in Mathematics, vol. 2086, pp. 189-237, Springer, Switzerland, 2013.
\bibitem{Cao} Cao, H.-D., {\sl Deformation of K\"ahler metrics to K\"ahler-Einstein metrics on compact K\"ahler manifolds}, Invent. Math., \textbf{81} (1985), no. 2, 359-372.
\bibitem{Chau04} Chau, A., {\sl Convergence of the K\"ahler-Ricci flow on noncompact K\"ahler manifolds}, J. Differential Geom., \textbf{66} (2004), no. 1, 211-232.
\bibitem{Chen2009} Chen, B.-L., {\sl Strong uniqueness of the Ricci flow}, J. Differential Geom., \textbf{82} (2009), no. 2, 363-382.
\bibitem{ChengYau1982} Cheng, S.-Y.; Yau, S.-T., {\sl On the existence of a complete K\"ahler metric on noncompact complex manifolds and the regularity of Fefferman's equation}, Comm. Pure Appl. Math., \textbf{33} (1980), no. 4, 507-544.
\bibitem{Chow2} Chow, B.; Chu, S.-C.; Glickenstein, D.; Guenther, C.; Isenberg, J.; Ivey, T.; Knopf, D.; Lu, P.; Luo, F.; Ni, L., {\sl The Ricci flow: Techniques and applications. Part II: Analytic aspects}, Mathematical Surveys and Monographs, \textbf{144}, American Mathematical Society, Providence, RI, 2008.
\bibitem{Chow} Chow, B.; Chu, S.-C.; Glickenstein, D.; Guenther, C.; Isenberg, J.; Ivey, T.; Knopf, D.; Lu, P.; Luo, F.; Ni, L., {\sl The Ricci flow: Techniques and applications. Part III: Geometric-analytic aspects}, Mathematical Surveys and Monographs, \textbf{163}, American Mathematical Society, Providence, RI, 2010.
\bibitem{Ge-Lin-Shen} Ge, H.; Lin, A.; Shen, L.-M., {\sl The K\"ahler-Ricci flow on pseudoconvex domain}, arXiv:1803.07761.
\bibitem{GiesenTopping-1} Giesen, G.; Topping, P.-M., {\sl Ricci flow of negatively curved incomplete surfaces}, Calc. Var. Partial Differential Equations, \textbf{38} (2010), 357-367.
\bibitem{GiesenTopping} Giesen, G.; Topping, P.-M., {\sl Existence of Ricci flows of incomplete surfaces}, Comm. Partial Differential Equations, \textbf{36} (2011), 1860-1880.
\bibitem{GiesenTopping-2} Giesen, G.; Topping, P.-M., {\sl Ricci flows with unbounded curvature}, Math. Z., \textbf{273} (2013), 449-460.
\bibitem{Gill} Gill, M., {\sl Convergence of the parabolic complex Monge-Amp\`ere equation on compact Hermitian manifolds}, Comm. Anal. Geom., \textbf{19} (2011), no. 2, 277-303.
\bibitem{HeLee2018} He, F.; Lee, M.-C., {\sl Weakly PIC1 manifolds with maximal volume growth}, arXiv:1811.03318.
\bibitem{Huang2018} Huang, S.-C., {\sl A note on existence of exhaustion functions and its applications}, J. Geom. Anal., \textbf{29} (2019), no. 2, 1649-1659.
\bibitem{HLTT} Huang, S.-C.; Lee, M.-C.; Tam, L.-F.; Tong, F., {\sl Longtime existence of K\"ahler-Ricci flow and holomorphic sectional curvature}, arXiv:1805.12328.
\bibitem{Lee-Tam} Lee, M.-C.; Tam, L.-F., {\sl Chern-Ricci flows on noncompact complex manifolds}, to appear in J. Differential Geom.
\bibitem{Lott-Zhang} Lott, J.; Zhang, Z., {\sl Ricci flow on quasi-projective manifolds}, Duke Math. J., \textbf{156} (2011), no. 1, 87-123.
\bibitem{NiTam2013} Ni, L.; Tam, L.-F., {\sl Poincar\'e-Lelong equation via the Hodge-Laplace heat equation}, Compos. Math., \textbf{149} (2013), no. 11, 1856-1870.
\bibitem{Perelman2003} Perelman, G., {\sl The entropy formula for the Ricci flow and its geometric applications}, arXiv:math.DG/0211159.
\bibitem{ShermanWeinkove2012} Sherman, M.; Weinkove, B., {\sl Interior derivative estimates for the K\"ahler-Ricci flow}, Pacific J. Math., \textbf{257} (2012), no. 2, 491-501.
\bibitem{ShermanWeinkove2013} Sherman, M.; Weinkove, B., {\sl Local Calabi and curvature estimates for the Chern-Ricci flow}, New York J. Math., \textbf{19} (2013), 565-582.
\bibitem{Shi1989} Shi, W.-X., {\sl Ricci deformation of the metric on complete Riemannian manifolds}, J. Differential Geom., \textbf{30} (1989), 303-394.
\bibitem{Shi1997} Shi, W.-X., {\sl Ricci flow and the uniformization on complete noncompact K\"ahler manifolds}, J. Differential Geom., \textbf{45} (1997), 94-220.
\bibitem{SongTian2017} Song, J.; Tian, G., {\sl The K\"ahler-Ricci flow through singularities}, Invent. Math., \textbf{207} (2017), no. 2, 519-595.
\bibitem{Tam2010} Tam, L.-F., {\sl Exhaustion functions on complete manifolds}, in: Recent Advances in Geometric Analysis, 211-215, Adv. Lect. Math. (ALM), \textbf{11}, Int. Press, Somerville, MA, 2010.
\bibitem{TianZhang2006} Tian, G.; Zhang, Z., {\sl On the K\"ahler-Ricci flow on projective manifolds of general type}, Chinese Ann. Math. Ser. B, \textbf{27} (2006), no. 2, 179-192.
\bibitem{To2017} T\^o, T.-D., {\sl Regularizing properties of complex Monge-Amp\`ere flows II: Hermitian manifolds}, Math. Ann., \textbf{372} (2018), no. 1-2, 699-741.
\bibitem{Topping} Topping, P.-M., {\sl Ricci flow compactness via pseudolocality, and flows with incomplete initial metrics}, J. Eur. Math. Soc. (JEMS), \textbf{12} (2010), 1429-1451.
\bibitem{TosattiWeinkove2015} Tosatti, V.; Weinkove, B., {\sl On the evolution of a Hermitian metric by its Chern-Ricci form}, J. Differential Geom., \textbf{99} (2015), no. 1, 125-163.
\bibitem{Yau1978} Yau, S.-T., {\sl A general Schwarz lemma for K\"ahler manifolds}, Amer. J. Math., \textbf{100} (1978), no. 1, 197-203.
\bibitem{Yau1978-2} Yau, S.-T., {\sl On the Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation, I}, Comm. Pure Appl. Math., \textbf{31} (1978), no. 3, 339-411.
\end{thebibliography}
\end{document}
\begin{document}
\nuewcommand{\betarac}[1]{\lambdaeft(#1\rhoight)}
\nuewcommand{\betafrac}[2]{\betarac{\pihirac{#1}{#2}}}
\nuewcommand{\sigmaet}[1]{\{#1\}}
\nuewcommand{\sigmatack}[2]{\gammaenfrac{}{}{0pt}{}{#1}{#2}}
\deltaef{\cal T}{{\cal T}}
\deltaef{\betaf M}{{\betaf M}}
\deltaef{\betaf k}{{\betaf k}}
\deltaef{\cal C}{{\cal C}}
\deltaef{\cal E}{{\cal E}}
\deltaef{\cal F}{{\cal F}}
\deltaef2^{\tauilde O(1/\epsilon^2)}{2^{\tauilde O(1/\epsilonsilonp^2)}}
\deltaef{\rhom Cond}{{\rhom Cond}}
\deltaef\hat{u}{\hat{u}}
\deltaef{\betaf C}{{\betaf C}}
\deltaef{\betaf T}{{\betaf T}}
\deltaef\hat{{\betaf D}}{\hat{{\betaf D}}}
\deltaef{\betaf M}{{\betaf M}}
\deltaef{\betaf D}{{\betaf D}}
\deltaef\hbox{Norm}{\hbox{Norm}}
\deltaef{\tauextstyle{1\over2}}{{\tauextstyle{1\over2}}}
\deltaef{\betaf R}{{\betaf R}}
\deltaef\mubox{\rhom e}cip#1{{1\over#1}}
\deltaef\hat{d}_{i,j}{\hat{d}_{i,j}}
\deltaef{\rhom Vol}{{\rhom Vol}}
\deltaef\hbox{ for }{\hbox{ for }}
\deltaef\hat{T}{\hat{T}}
\deltaef{\betaf A}{{\betaf A}}
\deltaef{\betaf W}{{\betaf W}}
\deltaef\hat{\betaW}{\hat{{\betaf W}}}
\deltaef\hat{c}{\hat{c}}
\deltaef\epsilonsilonp{\epsilonsilonpsilon}
\deltaef{\rhom ind}{{\rhom ind}}
\deltaef\hat{w}{\hat{w}}
\deltaef\betaar{\sigmaigma}{\betaar{\sigmaigma}}
\deltaef{\cal A}{{\cal A}}
\deltaef{\cal B}{{\cal B}}
\deltaef{\cal F}{{\cal F}}
\deltaef{\betaf E}{{\betaf E}}
\deltaef{\cal H}{{\cal H}}
\deltaef{\betaf E}x{{\betaf E}}
\deltaef{\cal S}{{\cal S}}
\deltaef{\cal P}{{\cal P}}
\deltaef{\cal Q}{{\cal Q}}
\deltaef{\rhom ind}{{\rhom ind}}
\deltaef\betaar{n}{\betaar{n}}
\deltaef\betaar{\nu}{\betaar{\nu}}
\deltaef\alpha{\alphalpha}
\deltaef\beta{\betaeta}
\deltaef\delta{\deltaelta}
\deltaef\Delta{\Deltaelta}
\deltaef\epsilonsilon{\epsilonsilonpsilon}
\deltaef\pihi{\pihi}
\deltaef\gamma{\gammaamma}
\deltaef\Gamma{\Gammaamma}
\deltaef\kappa{\kappaappa}
\deltaef\lambdaambda{\lambdaambdambda}
\deltaef\Kappa{\Kappaappa}
\deltaef\zeta{\zetaeta}
\deltaef\tauheta{\tauhetaeta}
\deltaef\Theta{\Theta}
\deltaef\lambda{\lambdaambdambda}
\deltaef\mu{\muu}
\deltaef\nu{\nuu}
\deltaef\pi{\pii}
\deltaef\Pi{\Pii}
\deltaef\rho{\rhoho}
\deltaef\Rho{\Rhoho}
\deltaef\sigma{\sigmaigma}
\deltaef\Sigma{\Sigmaigma}
\deltaef\tau{\tauau}
\deltaef\omega{\omegaega}
\deltaef\Omega{\Omega}
\deltaef\betaigmid{\rhoule[-3.5mm]{0.1mm}{9mm}}
\deltaef{\cal N}{{\cal N}}
\deltaef\Pir{\mubox{{\betaf Pr}}}
\deltaef{\cal G}{{\cal G}}
\deltaef{\cal A}{{\cal A}}
\deltaef{\bf whp }{{\betaf whp }}
\deltaef{\bf whp}{{\betaf whp}}
\deltaef\Pirob{{\betaf Pr}}
\deltaef\hat{\e}{\hat{\epsilonsilon}}
\deltaef{\cal B}D{{\betaf D}}
\deltaef{\betaf W}{{\betaf W}}
\deltaef\betaB{{\betaf B}}
\deltaef\hat{r}{\hat{r}}
\deltaef\hat{R}{\hat{R}}
\nuewcommand{\rhoatio}[2]{\mubox{${#1\over #2}$}}
\nuewcommand{\betabD}[1]{\betaar{{\betaf D}}^{(#1)}}
\nuewcommand{\gammaap}[1]{\mubox{\hspace{#1 in}}}
\nuewcommand{\betaD}[1]{{\betaf D}^{(#1)}}
\nuewcommand{\hbD}[1]{\hat{{\betaf D}}^{(#1)}}
\nuewcommand{{\betaf T}T}[1]{{\betaf T}^{(#1)}}
\nuewcommand{\mubox{$\lambdaim_{n \rhoightarrow \infty}$}}{\mubox{$\lambdaim_{n \rhoightarrow \infty}$}}
\nuewcommand{{\betaf Proof\hspace{2em}}}{{\betaf Proof\hspace{2em}}}
\nuewcommand{\mubox{$\cal T$}}{\mubox{$\cal T$}}
\nuewcommand{\hspace*{\pihiill}\mubox{${\cal B}ox$}}{\hspace*{\pihiill}\mubox{${\cal B}ox$}}
\nuewcommand{\betafm}[1]{\mubox{\betaoldmath $#1$}}
\nuewcommand{\mubox{\betafm{R}}}{\mubox{\betafm{R}}}
\nuewcommand{\epsilonsilonxpect}{\mubox{\betaf E}}
\nuewcommand{\mubox{\betaf E}}{\mubox{\betaf E}}
\nuewcommand{\card}[1]{\mubox{$|#1|$}}
\nuewcommand{\sigmacaps}[1]{\mubox{\sigmac #1}}
\nuewcommand{\rhodup}[1]{\lambdaceil #1 \rhoceil }
\nuewcommand{\rhodown}[1]{\lambdafloor #1 \rhofloor }
\nuewcommand{\munote}[1]{\muarginpar{\pihiootnotesize\rhoaggedright#1}}
\nuewcommand{\rhoight}{\rhoight}
\nuewcommand{\lambdaeft}{\lambdaeft}
\nuewcommand{\mubox{\rhom e}}{\mubox{\rhom e}}
\nuewcommand{\sigmaetminus}{\sigmaetminusinus}
\nuewenvironment{proof}{\nuoindent{\betaf Proof\,}}{
${\cal B}ox$}
\nuewtheorem{remark}{Remark}
\deltaef\betax{{\betaf x}}
\deltaef\gammaNM2{{\cal G}_{\nu,\mu}^{\delta\gammaeq 2}}
\deltaef\GammaNM2{{G}_{\nu,\mu}^{\delta\gammaeq 2}}
\nuewcommand{\con}[2]{#1\lambdaeftrightarrow #2}
\nuewcommand{\muathrm{dist}}{\muathrm{dist}}
\nuewcommand{D^{\muathrm{tree}}}{D^{\muathrm{tree}}}
\nuewcommand{\epsilonsilonhr}{\mubox{\sigmac Ehr}}
\nuewcommand{\muathrm{diam}}{\muathrm{diam}}
\title{First Order Definability of Trees and\\ Sparse Random Graphs}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\author{Tom Bohman\footnotemark[1]\ \footnotemark[2], Alan Frieze\footnotemark[1]\ \footnotemark[3],
Tomasz {\L}uczak\footnotemark[4], Oleg
Pikhurko\footnotemark[1]\ \footnotemark[5],\\ Clifford
Smyth\footnotemark[1], Joel
Spencer\footnotemark[6], and Oleg Verbitsky\footnotemark[7]}
\date{}
\maketitle
\footnotetext[1]{Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA.}
\footnotetext[2]{Partially supported by NSF grant DMS-0401147.}
\footnotetext[3]{Partially supported by NSF Grant CCR-0200945.}
\footnotetext[4]{Department of Discrete Mathematics, Adam Mickiewicz University, Pozna\'n 61-614, Poland. Partially supported by KBN grant 1 P03A 025 27.}
\footnotetext[5]{Partially supported by the Berkman Faculty Development Fund, CMU.}
\footnotetext[6]{Courant Institute, New York University, New York, NY 10012, USA.}
\footnotetext[7]{Institut f\"ur Informatik, Humboldt Universit\"at, Berlin 10099, Germany. Supported by an Alexander von Humboldt fellowship.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract}
Let $D(G)$ be the smallest quantifier depth of a first order formula which is true for a graph $G$ but false for any other non-isomorphic graph. This can be viewed as a measure for the first order descriptive complexity of $G$.
We will show that almost surely $D(G)=\Theta(\frac{\ln n}{\ln\ln n})$, where $G$ is a random tree of order $n$ or the giant component of a random graph $\mathcal G(n,\frac cn)$ with constant $c>1$. These results rely on computing the maximum of $D(T)$ for a tree $T$ of order $n$ and maximum degree $l$, so we study this problem as well.
\end{abstract}
\section{Introduction}
This paper deals with graph properties expressible in first order logic. The vocabulary consists of variables, connectives ($\vee$, $\wedge$ and $\neg$), quantifiers ($\exists$ and $\forall$), and two binary relations: the equality and the graph adjacency ($=$ and $\sim$ respectively). The variables denote vertices only so we are not allowed to quantify over sets or relations. The notation $G\models A$ means that a graph $G$ is a model for a \emph{sentence} $A$ (a first order formula without free variables); in other words, $A$ is true for the graph $G$. All sentences and graphs are assumed to be finite. The Reader is referred to Spencer's book~\cite{spencer:slrg} (or to~\cite{kim+pikhurko+spencer+verbitsky:03rsa}) for more details.
A first order sentence $A$ \emph{distinguishes} $G$ from $H$ if $G\models A$ but $H\not\models A$. Further, we say that $A$ \emph{defines} $G$ if $A$ distinguishes $G$ from any non-isomorphic graph $H$. In other words, $G$ is the unique (up to an isomorphism) finite model for $A$.
The \emph{quantifier depth} (or simply \emph{depth}) $D(A)$ is the largest number of nested quantifiers in $A$. This parameter is closely related to the complexity of checking whether $G\models A$.
The main parameter we will study is $D(G)$, the smallest quantifier depth of a first order formula defining $G$. It was first systematically studied by Pikhurko, Veith and Verbitsky~\cite{pikhurko+veith+verbitsky:03} (see also~\cite{pikhurko+verbitsky:03}). In a sense, a defining formula $A$ can be viewed as the canonical form for $G$ (except that $A$ is not unique): in order to check whether $G\cong H$ it suffices to check whether $H\models A$. Unfortunately, this approach does not seem to lead to better isomorphism algorithms, but this notion, being on the borderline of combinatorics, logic and computer science, is interesting on its own and might find unforeseen applications.
Within a short time-span various results on the values of $D(G)$ for order-$n$ graphs appeared. The initial papers~\cite{pikhurko+veith+verbitsky:03,pikhurko+verbitsky:03} studied the maximum of $D(G)$ (the `worst' case). The `best' case is considered by Pikhurko, Spencer, and Verbitsky~\cite{pikhurko+spencer+verbitsky:04}, while Kim, Pikhurko, Spencer and Verbitsky~\cite{kim+pikhurko+spencer+verbitsky:03rsa} obtained various results for random graphs.
Here we study these questions for trees and sparse random structures. Namely, the three main questions we consider are:
\begin{description}
\item[Section~\ref{general}:] What is $D^{\mathrm{tree}}(n,l)$, the maximum of $D(T)$ over all trees of order at most $n$ and maximum degree at most $l$?
\item[Section~\ref{giant}:] What is $D(G)$, where $G$ is the giant component of a random graph $\mathcal G(n,\frac{c}{n})$ for constant $c>1$?
\item[Section~\ref{random}:] What is $D(T)$ for a random tree $T$ of order $n$?
\end{description}
In all cases we determine the order of magnitude of the studied function. Namely, we prove that $D^{\mathrm{tree}}(n,l)=\Theta(\frac{l\ln n}{\ln l})$, and whp we have $D(G)=\Theta(\frac{\ln n}{\ln\ln n})$, whenever $G$ is a random tree of order $n$ or the giant component of a random graph $\mathcal G(n,\frac cn)$ with constant $c>1$. (The acronym \emph{whp} stands for `with high probability', i.e.,\ with probability $1-o(1)$.) Moreover, for some cases involving trees we estimate the smallest quantifier depth of a first order formula defining $G$ up to a factor of $1+o(1)$. For instance, we show that for a random tree $T$ of order $n$ we have whp $D(T)=(1+o(1))\frac{\ln n}{\ln\ln n}$.
\comment{
also we prove that $D^{\mathrm{tree}}(n,l)=(1/2+o(1))\frac{l\ln n}{\ln l}$ whenever both $l=l(n)$ and $\ln n/\ln l$ tend to infinity as $n\to\infty$.
}
\section{Further Notation and Terminology}
Our main tool in the study of $D(G)$ is the \emph{Ehrenfeucht game}. Its description can be found in Spencer's book~\cite{spencer:slrg}, whose terminology we follow (or see~\cite[Section~2]{kim+pikhurko+spencer+verbitsky:03rsa}), so here we will be very brief.
Given two graphs $G$ and $G'$, the \emph{Ehrenfeucht game} $\mathrm{Ehr}_k(G,G')$ is a perfect information game played by two players, called \emph{Spoiler} and \emph{Duplicator}, and consists of $k$ rounds, where $k$ is known in advance to both players. For brevity, let us refer to Spoiler as `him' and to Duplicator as `her'. In the $i$-th round, $i=1,\dots,k$, Spoiler selects one of the graphs $G$ and $G'$ and marks one of its vertices by $i$; Duplicator must put the same label $i$ on a vertex in the other graph. At the end of the game let $x_1,\dots,x_k$ be the vertices of $G$ marked $1,\dots,k$ respectively, regardless of who put the label there; let $x_1',\dots,x_k'$ be the corresponding vertices in $G'$. Duplicator wins if the correspondence $x_i\leftrightarrow x_i'$ is a partial isomorphism, that is, we require that $\{x_i,x_j\}\in E(G)$ iff $\{x_i',x_j'\}\in E(G')$ as well as that $x_i=x_j$ iff $x_i'=x_j'$. Otherwise, Spoiler wins.
The key relation is that $D(G,G')$, the smallest depth of a first order sentence $A$ distinguishing $G$ from $G'$, is equal to the smallest $k$ such that Spoiler can win $\mathrm{Ehr}_k(G,G')$. Also,
\begin{equation}\label{D}
D(G)=\max_{G'\not\cong G} D(G,G'),
\end{equation}
see e.g.~\cite[Lemma~1]{kim+pikhurko+spencer+verbitsky:03rsa}.
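As a toy illustration of these definitions (our own example, not taken from the cited sources), consider $G=K_2$ and $G'=\overline{K_2}$, a single edge versus two isolated vertices:
\begin{equation*}
A=\exists x\,\exists y\,(x\sim y),\qquad D(A)=2,\qquad G\models A,\ \ G'\not\models A ,
\end{equation*}
so $A$ distinguishes $G$ from $G'$ and hence $D(G,G')\le 2$. Correspondingly, Spoiler wins $\mathrm{Ehr}_2(G,G')$ by selecting the two adjacent vertices of $G$: whatever vertices of $G'$ Duplicator answers with, they are non-adjacent (or equal), so the correspondence is not a partial isomorphism. One round does not suffice, since any single-vertex correspondence is a partial isomorphism; thus $D(G,G')=2$.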
Sometimes it will be notationally more convenient to prove the bounds on $D(G,G')$ for colored graphs, which generalize the usual (uncolored) graphs. Graphs $G,G'$ are \emph{colored} if we have unary relations $U_i:V(G)\cup V(G')\to\{0,1\}$, $i\in I$. We say that the vertices in the set $U_i^{-1}(1)$ have color $i$. Note that some vertices may be uncolored and some may have more than one color. There are no restrictions on a color class, i.e.,\ it does not have to be an independent set. When the Ehrenfeucht game is played on colored graphs, Duplicator must additionally preserve the colors of vertices.
Colorings can be useful even if we prove results for uncolored
graphs. For example, if $x\in V(G)$ and $x'\in V(G')$ were selected in
some round, then, without changing the outcome of the remaining game,
we can remove $x$ and $x'$ from $G$ and $G'$ respectively, provided we
color their neighbors with a new color. (Note that in an optimal
strategy of Spoiler, there is no need to select the same vertex
twice.)
We will also use the following fact, which can be easily deduced from
the general theory of the Ehrenfeucht game. Let $x,y\in V(G)$ be
distinct vertices. Then the smallest quantifier depth of a first order
formula $\Pihi(z)$ with one free variable $z$ such that $G\muodels
\Pihi(x)$ but $G\nuot\muodels \Pihi(y)$ is equal to the minimum $k$ such
that Spoiler can win the $(k+1)$-round game $\epsilonsilonhr_{k+1}(G,G)$, where
the vertices $x_1=x$ and $x_1'=y$ have been selected in the first
round.
In this paper $\lambdan$ denotes the natural logarithm, while the
logarithm base $2$ is written as $\lambdaog_2$.
\section{General Trees}\label{general}
Let $D^{\mathrm{tree}}(n,l)$ be the maximum of $D(T)$ over all colored trees of
order at most $n$ and maximum degree at most $l$. We split the
possible range of $l,n$ into a few cases.
\begin{theorem}\label{th:MaxDeg} Let both $l$ and $\ln n/\ln l$ tend to
infinity. Then
\begin{equation}\label{MaxDeg}
D^{\mathrm{tree}}(n,l)= \left(\frac12+o(1)\right)\, \frac{ l\ln n}{\ln l}.
\end{equation}
In fact, the lower bound can be achieved by uncolored trees.
\end{theorem}
In order to prove Theorem~\ref{th:MaxDeg} we need some preliminary
results. Let $\mathrm{dist}_G(x,y)$ denote the distance in $G$ between $x,y\in
V(G)$.
\begin{lemma}\label{lm:Distance} Suppose $x,y\in V(G)$ at distance $k$ were selected while
their counterparts $x',y'\in V(G')$ are at a strictly larger
distance (possibly infinity). Then Spoiler can win in at most
$\log_2k+1$ extra moves, playing all of the time inside
$G$.\end{lemma}
\begin{proof} We prove the claim by induction on $k$. If $k=1$, then $x,y$ are
adjacent while $x',y'$ are not, so Spoiler has already won. Assume $k\ge 2$ and
choose a shortest $xy$-path $P$ (of length $k$). Spoiler selects a vertex $w\in
V(G)$ which is a \emph{middle vertex} of $P$, that is,
$k_1=\mathrm{dist}_P(x,w)$ and $k_2=\mathrm{dist}_P(y,w)$ differ at most by
one. Suppose that Duplicator responds with $w'\in G'$. It is
impossible that $G'$ contains both an $x'w'$-path of length at most
$k_1$ and a $y'w'$-path of length at most $k_2$, for otherwise
$\mathrm{dist}_{G'}(x',y')\le k_1+k_2=k$. If, for example, the
latter does not exist, then we apply induction to $y,w\in G$. The
required bound follows by observing that $k_1,k_2\le \ceil{\frac
k2}$.\end{proof}
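To make the move count explicit (a small check, not part of the original text):
writing $f(k)$ for the number of extra moves used by this strategy when the
smaller of the two distances is $k$, and using the convention that at $k=1$
Spoiler has already won, we get $f(1)=0$ and
$$
f(k)\le 1+f\Bigl(\Bigl\lceil\frac k2\Bigr\rceil\Bigr)\qquad (k\ge 2),
$$
which unfolds to $f(k)\le\lceil\log_2 k\rceil\le\log_2 k+1$, as claimed in the
lemma.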
The same method gives the following lemma.
\betalm{path} Let $G,G'$ be colored graphs. Suppose that $x,y\in V(G)$
and $x',y'\in V(G')$ have been selected such that $G$ contains some
$xy$-path $P$ of length at most $k$ such that some vertex of $P$ has
color $c$ while this is not true with respect to $G'$. Then Spoiler
can win in at most $\lambdaog_2 k +1$ moves playing all of the time inside $G$.
The same conclusion holds if all internal vertices of $P$ have colors
from some fixed set $A$ while any $x'y'$-path of length at most $k$
has a color not in $A$.\qed\epsilonsilonnd{lemma}
\betalm{Tree} Let $T$ be a tree of order $n$ and let $T'$ be a graph
which is not a tree. Then $D(T,T')\lambdae
\lambdaog_2n+3$.\epsilonsilonnd{lemma}
\begin{proof}
If $T'$ is not connected, Spoiler selects two vertices $x',y'\in T'$
from different components. Then he switches to $T$ and applies
Lemma~\ref{lm:Distance}, winning in at most $\log_2 n+3$ moves in
total.
Otherwise, let $C'\subset T'$ be a shortest cycle, of length
$l$ say. If $l>2n+1$, then Spoiler picks two vertices $x',y'$ at distance
at least $n$ in $C'$ (or, equivalently, in $T'$). Since the diameter of
$T$ is at most $n-1$, Spoiler switches to $T$ and starts halving the
$xy$-path, making at most $\log_2 n+3$ moves in total, cf.\
Lemma~\ref{lm:Distance}.
If $l\le 2n+1$, then Spoiler selects some three consecutive vertices of
$C'$, say $x',z',y'$ in this order. Now, he applies
Lemma~\ref{lm:path} with respect to $k=l-2$.\end{proof}
\betapf[Proof of Theorem~\mubox{\rhom e}f{th:MaxDeg}.] Let us prove the upper bound
first.
Let $T$ be any tree of order at most $n$ and maximum degree at most
$l$. Let $T'$ be an arbitrary colored graph not isomorphic to $T$. By
Lemma~\mubox{\rhom e}f{lm:Tree} we can assume that $T'$ is a tree.
In fact, we will be proving the upper bound on the version of the
$(T,T')$-game, wherein some distinguished vertex, called the
\epsilonsilonmph{root}, is given and all graph isomorphisms must additionally
preserve the root. (This can be achieved by introducing a new color
$U_0$ which is assigned to the root only.) The obtained upper bound,
if increased by $1$, applies to the original function $D(T,T')$
because we can regard $x_1$ and $x_1'$, the first two moves of the
Ehrenfeucht game, as the given roots.
It is easy to show that $T$ contains a vertex $x\in T$ such that any
component of $T-x$ has order at most $\frac n2$. We call such a vertex
a \emph{median} of $T$. Spoiler selects this vertex $x$; let
Duplicator reply with $x'$. We can assume that the degrees of $x$ and
$x'$ are the same: otherwise Spoiler can exhibit this discrepancy in
at most $l+1$ extra moves.
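Finding a median takes one traversal of the tree. The following Python sketch
is an illustration of this standard fact only, not code from the paper; it
assumes the tree is given as an adjacency dictionary, computes subtree sizes
from an arbitrary root, and returns a vertex all of whose removal components
have order at most $\lfloor n/2\rfloor$ (such a vertex, a centroid, always
exists).
\begin{verbatim}
def median_vertex(adj):
    """Return a vertex x of the tree adj (dict: vertex -> list of neighbours)
    such that every component of T - x has at most n/2 vertices."""
    n = len(adj)
    root = next(iter(adj))
    order, parent, stack = [], {root: None}, [root]
    while stack:                       # iterative DFS recording a processing order
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    size = {v: 1 for v in adj}
    for v in reversed(order):          # subtree sizes, children before parents
        if parent[v] is not None:
            size[parent[v]] += size[v]
    for v in adj:
        # components of T - v: the child subtrees plus the part with the root
        pieces = [size[w] for w in adj[v] if parent[w] == v]
        if parent[v] is not None:
            pieces.append(n - size[v])
        if all(p <= n // 2 for p in pieces):
            return v

# A path on five vertices: the unique median is the middle vertex.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(median_vertex(path5))            # prints 2
\end{verbatim}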
\comment{Alternating sides at most once.}
We view the components of $T-x$ and $T'-x'$ as colored rooted graphs
with the neighbors of $x$ and $x'$ being the roots. As $T\nuot\cong
T'$, some component $C_1$ has different multiplicities $m_1$ and
$m_1'$ in $T-x$ and $T'-x'$. As $d(x)=d(x')$, we have at least two
such components. Assume that for $C_1$ and $C_2$ we have $m_1>m_1'$
and $m_2<m_2'$. By the condition on the maximum degree, $m_1'+m_2\le
l-1$. Hence, $\min(m_1',m_2)\le \frac{l-1}2$. Let us assume, for
example, that $m_1'\le \frac{l-1}2$. Spoiler chooses the roots of any
$m_1'+1$ $C_1$-components of $T-x$. It must be the case that some
vertices $y\in V(T)$ and $y'\in V(T')$ have been selected, so that $y$
lies in a $C_1$-component $F\subset T-x$ while $y'$ lies in a
component $F'\subset T'-x'$ not isomorphic to $C_1$. Let $n_1$ be the
number of vertices in $F$. By the choice of $x$, $n_1\le \frac n2$.
Now, Spoiler restricts his moves to $V(F)\cup V(F')$. If Duplicator
moves outside this set, then Spoiler uses Lemma~\ref{lm:path},
winning in at most $\log_2n+O(1)$ moves. Otherwise Spoiler uses the
recursion applied to $F$.
Let $f(n,l)$ denote the largest number of moves (over all trees $T,T'$
with $v(T)\lambdae n$, $\Deltaelta(T)\lambdae l$, and $T\nuot\cong T'$) that
Duplicator can survive against the above strategy with the additional
restriction that a situation where Lemma~\mubox{\rhom e}f{lm:path} can be applied
never occurs and we always have that $d(x)=d(x')$. Clearly,
\begin{equation}\label{DTf}
D^{\mathrm{tree}}(n,l)\le f(n,l) + \log_2n + l +O(1).
\end{equation}
As $m_1\le \frac{n-1}{n_1}$, we get the following
recursive bound on $f$.
\begin{equation}\label{DT}
\textstyle
f(n,l)\le \max\Big\{2 + \min(\frac{l-1}2,\frac{n-1}{n_1}) +
f(n_1,l): 1\le n_1\le \frac n2\Big\}.
\end{equation}
Denoting $n_0=n$ and unfolding~\eqref{DT} as long as $n_i\ge 1$, say $s$
times, we obtain that $f(n,l)$ is bounded by the maximum of
\begin{equation}\label{f}
2s + \sum_{i=1}^s
\min\left(\frac{l-1}2,\frac{n_{i-1}}{n_i}\right),
\end{equation}
over all sequences $n_1,\dots,n_s$ such that
\begin{equation}\label{n}
1\le n_i \le \frac{n_{i-1}}2,\quad i\in[s].
\end{equation}
Note that the restrictions~\eqref{n} force $s$ to be at most $\log_2
n$. Let us maximize~\eqref{f} over all $s\in\I N$
and real $n_i$'s satisfying~\eqref{n}.
It is routine to see that for the optimal sequence we have $2\le
\frac{n_{i-1}}{n_i}\le \frac{l-1}2$, $i\in[s]$; moreover, both these
inequalities can be simultaneously strict for at most one index
$i$.
\comment{Indeed, suppose on the contrary that for two indexes $1\lambdae
i<j< s$ we have $2<n_i/n_{i+1}<\pihirac{l-1}2$ and
$2<n_j/n_{j+1}<\pihirac{l-1}2$. Redefine a new sequence: $n_h'=n_h$ if
$h\lambdae i$ or $h>j$, while $n_h'=xn_h$ for $i<h\lambdae j$. If $x=1$, then we
obtain the same sequence. Note that
$\pihirac{n_h'}{n_{h+1}'}=\pihirac{n_h}{n_{h+1}}$ for any $h$ except $h=i$
or $h=j$. So, we can slightly perturb $x$ either way, without
violating~\mubox{\rhom e}q{n}. The right-hand side of~\mubox{\rhom e}q{f}, as a function of
$x$ in a small neighborhood of $x=1$, is of the form $ax+\pihirac bx+c$
with $a,b>0$. But this function is strictly convex, so it cannot
attain its maximum at $x=1$, a contradiction.}
Let $t$ be the number of times we have $n_{i-1}=2n_i$. The
bound~\eqref{f} reads
\begin{equation}\label{st}
f(n,l)- 2 \log_2 n \le 2t + (s-t)\, \frac{l-1}2.
\end{equation}
Given that $2^t(\frac{l-1}2)^{s-t-1}\le n$, the right-hand side
of~\eqref{st} is maximized for $t=O(\log l)$ and $s=(1+o(1))\, \frac{\ln
n}{\ln l}$, implying the upper bound~\eqref{MaxDeg} by~\eqref{DTf}.
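For orientation, here is a back-of-the-envelope instance of this maximization
(an illustration only, with integrality and the $o(1)$ terms suppressed):
taking every ratio $n_{i-1}/n_i$ equal to its largest allowed value
$\frac{l-1}2$ gives
$$
s=\frac{\ln n}{\ln\frac{l-1}2}=(1+o(1))\,\frac{\ln n}{\ln l}
\quad\mbox{and}\quad
2s+s\cdot\frac{l-1}2=\Bigl(\frac12+o(1)\Bigr)\,\frac{l\ln n}{\ln l},
$$
which already matches the right-hand side of~\eqref{MaxDeg}; the argument
above shows that no admissible sequence can do asymptotically better.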
Let us prove the lower bound. Let $k=\floor{l/2}$. Define
$G_0=K_{1,l-1}$ and $G_0'=K_{1,l-2}$. Let $r_0\in V(G_0)$, $r_0'\in
V(G_0')$ be their roots. Define inductively on $i$ the following
graphs. $G_{i}$ is obtained by taking $k$ copies of $G_{i-1}$ and
$k-1$ copies of $G_{i-1}'$, pairwise vertex-disjoint, plus the root
$r_i$ connected to the root of each copy of $G_{i-1}$ and
$G_{i-1}'$. We have $d(r_i)\le l-1$. The graph $G_{i}'$ is defined in
a similar way except that we take $k-1$ copies of $G_{i-1}$ and $k$ copies
of $G_{i-1}'$. Let $i$ be the largest index such that
$\max(v(G_i),v(G_i'))\le n$.
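As a quick sanity check of the final count (an illustration only): since
$\max(v(G_0),v(G_0'))=l$ and each step uses $2k-1\le l-1$ copies of the
previous two graphs plus one new root, we get
$$
\max\bigl(v(G_j),v(G_j')\bigr)\le (2k-1)\max\bigl(v(G_{j-1}),v(G_{j-1}')\bigr)+1\le l^{\,j+1}.
$$
Hence the largest admissible index satisfies $i\ge \frac{\ln n}{\ln l}-2$, so
that $g_i=(k-1)i+l-2=\bigl(\frac12+o(1)\bigr)\frac{l\ln n}{\ln l}$, which is
the quantity used at the end of the proof below.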
Let us disregard all roots, i.e.,\ view $G_j$ and $G_j'$ as usual
(uncolored) graphs. Note that the trees $G_i$ and $G_i'$ are
non-isomorphic, since for every $j$ we can identify the level-$j$ roots as
the vertices at distance $j+1$ from some leaf.
Define $g_j=(k-1)j+l-2$, $j\in[0,i]$. Let us show by induction on $j$
that Duplicator can survive at least $g_j$ rounds in the
$(G_j,G_j')$-game. This is clearly true for $j=0$. Let $j\ge 1$. If
Spoiler claims one of $r_j,r_j'$ then Duplicator selects the other. If
Spoiler selects a vertex in a graph from the ``previous'' level, for
example $F\subset G_j$ with $F\cong G_{j-1}'$, then Duplicator
chooses an $F'\subset G_j'$ with $F'\cong G_{j-1}'$ and keeps the
isomorphism between $F$ and $F'$. So any moves of Spoiler inside
$V(F)\cup V(F')$ will be useless and we can ignore $F$ and $F'$. Thus
it takes Spoiler at least $k-1$ moves before we are down to the pair
$(G_{j-1},G_{j-1}')$, which proves the claim.
Thus we have $D(G_i) \ge D(G_i,G_i') \ge g_i=(\frac12+o(1))\,
\frac{l\ln n}{\ln l}$, finishing the proof.\epf
\smallskip\noindent{\bf Remark.} Verbitsky~\cite{verbitsky:04} proposed a different argument
to estimate $D^{\mathrm{tree}}(n,l)$ which gives weaker bounds
than those in Theorem~\ref{th:MaxDeg} but can be applied
to other classes of graphs with small separators.
Let us study $D^{\muathrm{tree}}(n,l)$ for other $l,n$. The methods have much in
common with the proof of Theorem~\mubox{\rhom e}f{th:MaxDeg} so our explanations
are shorter.
\betath{l=n^C} Let an integer $t\gammae1$ be fixed. Suppose that
$l,n\tauo\infty$ so that $n\gammae l^t$ but $n=o(l^{t+1})$. Then
$D^{\muathrm{tree}}(n,l)=(\pihirac{t+1}2+o(1))\, l$. In fact, the lower bound can be
achieved by uncolored trees.\epsilonsilonnd{theorem}
\betapf The lower bound is proved by the induction on $t$. If $t=1$,
take $T_1=K_{1,l-2}$. One needs at least $l-1$ moves to distinguish it
from $T_1'=K_{1,l-1}$. Let $a=\pihiloor{l/2}$ and $b=\ceil{l/2}$. Suppose
we have already constructed $T_{t-1}$ and $T_{t-1}'$, rooted trees
with $\lambdae l^{t-1}$ vertices such that the root has degree at most
$l-1$. To construct $T_t$ take $a$ copies of $T_{t-1}$ and $b-1$
copies of $T_{t-1}'$ and connect them to the common root. For $T_t'$
we take $a-1$ and $b$ copies respectively. The degree of the main root
is $a+b-1= l-1$ as required. The order of $T_t$ is at most
$(a+b-1)l^{t-1}+1\lambdae l^t$. Also, Spoiler needs at least $a$ moves
before reducing the game to $(T_{t-1},T_{t-1}')$ (while, for $t=1$,
$l$ moves are needed to finish the game), giving the required bound.
Let us turn to the upper bound. Spoiler uses the same strategy as
before. Namely, he chooses a median $x\in T$ and, of the two relevant
multiplicities, which sum to at most $l$, chooses the smaller. Let
$m_1+1,m_2+1,\dots,m_k+1$ be the numbers of moves spent on the successive
selected medians. We have $n\ge \prod_{i=1}^k m_i$. Also, we have $k\le
\log_2n$ because we always choose a median. Given these restrictions,
the inequalities $m_i\le l/2$, $i\in[k-1]$, and $m_k\le l-1$, the sum
$\sum_{i=1}^k m_i$ is maximized when $m_k=l-1$ and as many as possible of the
remaining $m_j$ are equal to $l/2$. We thus factor out $l/2$ at most $t-1$
times until the remaining terms have product (and hence sum)
$o(l)$. Thus,
$$
\sum_{i=1}^k (m_i+1)\le \log_2n+\sum_{i=1}^km_i\le l+\frac{(t-1)l}2+o(l),
$$
completing the proof.\epf
Theorems~\ref{th:MaxDeg} and~\ref{th:l=n^C} do not cover all the
possibilities for $n,l$. The asymptotic computation in the remaining
cases seems rather messy. However, the order of magnitude of
$D^{\mathrm{tree}}(n,l)$ is easy to compute with what we already have. Namely,
Theorem~\ref{th:l=n^C} implies that for $n=\Theta(l^t)$ with fixed
$t\in \I N$ we have $D^{\mathrm{tree}}(n,l)=\Theta(l)$. Also, if $l\ge 2$ is
constant, then $D^{\mathrm{tree}}(n,l)=\Theta(\ln n)$, where the lower bound follows
from considering the order-$n$ path and the upper bound is obtained by
using the method of Theorem~\ref{th:MaxDeg}.
\section{The Giant Component}\label{giant}
Let $c>1$ be a constant, $p=\frac cn$, and $G$ be the giant component
of a random graph $\C G(n,p)$.
\comment{
Kim, Pikhurko, Spencer and
Verbitsky~\cite{kim+pikhurko+spencer+verbitsky:03rsa} conjectured that
whp $D(G)=O(\lambdan n)$.
}
Here we show the following result.
\begin{theorem}\label{th:giant} Let $c>1$ be a constant, $p=c/n$, and $G$ be the giant
component of $\C G(n,p)$. Then whp
\begin{equation}\label{giant}
D(G)=\Theta\left(\frac{\ln n}{\ln \ln n}\right).
\end{equation}
\end{theorem}
This result allows us to conclude that for any $p=O(n^{-1})$ a
random graph $H\in \C G(n,p)$ satisfies whp
\begin{equation}\label{d/n}
D(H)=({\mathrm e}^{-np}+o(1))\, n.
\end{equation}
The proof is an easy modification of that
in~\cite{kim+pikhurko+spencer+verbitsky:03rsa} where the validity of
\eqref{d/n} was established for $p\le (1.19...+o(1))\, n^{-1}$. The
lower bound in~\eqref{d/n} comes from considering the graph $H'$
obtained from $H$ by adding an isolated vertex (and noting that whp
$H$ has $({\mathrm e}^{-np}+o(1))\, n$ isolated vertices). The method
in~\cite{kim+pikhurko+spencer+verbitsky:03rsa} shows that the upper
bound~\eqref{d/n} can fail only if $D(G)>({\mathrm e}^{-np}+o(1))\, n$, where
$G$ is the giant component of $H$. (And $np\approx 1.19...$ is the
moment when $v(G)\approx {\mathrm e}^{-np}\,n$.)
\subsection{Upper Bound}
The structure of the giant component is often characterized using its
core and kernel (e.g., see Janson, \L uczak, and
Ruci\'nski~\cite[Section~5]{janson+luczak+rucinski:rg}). We follow this
approach in the proof of the upper bound in \eqref{giant}. Thus, we
first bound $D(G)$ from above for a graph $G$ with small diameter
whose kernel fulfills some ``sparseness'' conditions. Then, we show
that these conditions hold whp for the kernel of the giant component
of a random graph.
\subsubsection{Bounding $D(G)$ Using the Kernel of $G$}\label{DKernel}
The \emph{core} $C$ of a graph $G$ is obtained by removing,
consecutively and as long as possible, vertices of degree at most
$1$. If $G$ is not a forest, then $C$ is non-empty and $\delta(C)\ge 2$.
First we need an auxiliary lemma which is easily proved, similarly to
the auxiliary lemmas in Section~\mubox{\rhom e}f{general}, by the path-halving
argument.
\betalm{cycle} Let $G,G'$ be graphs. Suppose $x\in V(G)$ and $x'\in
V(G')$ have been selected such that $G$ contains some cycle $P\nui x$
of length at most $k$ while $G'$ does not. Then Spoiler can win in at
most $\lambdaog_2 k+O(1)$ moves, playing all time inside
$G$.\qed\epsilonsilonnd{lemma}
\betalm{DCore} Let $G,G'$ be graphs and $C,C'$ be their cores. If
Duplicator does not preserve the core, then Spoiler can win in at most
$\lambdaog_2d+O(1)$ extra moves, where $d$ is the diameter of
$G$.\epsilonsilonnd{lemma}
\betapf Assume that $\muathrm{diam}(G')=\muathrm{diam}(G)$ for otherwise we are easily
done. Suppose that, for example, some vertices $x\in C$ and $x'\in \O
{C'}$ have been selected.
If $x$ lies on a cycle $C_1\sigmaubset C$, then we can find such a cycle
of length at most $2d+1$. Of course, $G'$ cannot have a cycle
containing $x'$, so Spoiler wins by Lemma~\mubox{\rhom e}f{lm:cycle} in
$\lambdaog_2(2d+1)+O(1)$ moves, as required.
Suppose that $x$ does not belong to a cycle. Then $G$ contains two
vertex-disjoint cycles $C_1,C_2$ connected by a path $P$ containing
$x$. Choose such a configuration which minimizes the length of $P\nui
x$. Then the length of $P$ is at most $d$. Spoiler selects the branching
vertices $y_1\in V(C_1)\cap V(P)$ and $y_2\in V(C_2)\cap V(P)$. If
some Duplicator's reply $y_i'$ is not on a cycle, we done again by
Lemma~\mubox{\rhom e}f{lm:cycle}. So assume there are cycles $C_i'\nui y_i'$. In
$G$ we have
\betaeq{dist}
\muathrm{dist}(y_1,y_2)= \muathrm{dist}(y_1,x) + \muathrm{dist}(y_2,x).
\epsilonsiloneq
As $x'\nuot\in C'$, any shortest $x'y_1'$-path and $x'y_2'$-path enter
$x'$ via the same edge $\{x',z'\}$. But then
\betaeq{distp}
\muathrm{dist}(y_1',y_2')\lambdae \muathrm{dist}(y_1',z')+\muathrm{dist}(y_2',z')= \muathrm{dist}(y_1',x') +
\muathrm{dist}(y_2',x')-2.
\epsilonsiloneq
By~\mubox{\rhom e}q{dist} and~\mubox{\rhom e}q{distp}, the distances between $x,y_1,y_2$
cannot be all equal to the distances between $x',y_1',y_2'$. Spoiler
can demonstrate this in at most $\lambdaog_2 (\muathrm{dist}(y_1,y_2)) +O(1)$, as
required.\epsilonsilonpf
In order to state our upper bound on $D(G)$ we have to define a number
of parameters of $G$. In outline, we try to show that any distinct
$x,y\in V(C)$ can be distinguished by Spoiler reasonably fast. This
would mean that each vertex of $C$ can be identified by a first order
formula of small depth.
Note that $G$ can be decomposed into the core and
a number of trees $T_x$, $x\in V(C)$, rooted at vertices of $C$.
Thus, by specifying which pairs of vertices of
$C$ are connected and describing each $T_x$, $x\in V(C)$, we
completely define $G$. However, we have one unpleasant difficulty
that not all pairs of points of $C$ can be distinguished from one
another. For example, we may have a pendant triangle on $\{x,y,z\}$
with $d(x)=d(y)=2$, in which case the vertices $x$ and $y$ are
indistinguishable. However, we will show that whp we can
distinguish any two vertices of degree $3$ or more in $C$, which
suffices for our purposes.
Let us give all the details. For $x\in V(C)$, let $T_x\sigmaubset G$
denote the tree rooted at $x$, i.e., $T_x$ is a component containing
$x$ in the forest obtained from $G$ by removing all edges of $C$. Let
$$
t=\muax\{D(T_x): x\in V(C)\},
$$
where $D(T_x)$ is taken with respect to the class of graphs with one
root.
Let the \emph{kernel} $K$ of $G$ be obtained from $C$ by the
\emph{serial reduction} where we repeat as long as possible the
following step: if $C$ contains a vertex $x$ of degree $2$, then
remove $x$ from $V(C)$ but add the edge $\{y,z\}$ to $E(C)$, where
$y,z$ are the two neighbors of $x$. Note that $K$ may contain loops
and multiple edges. We agree that each loop contributes $2$ to the
degree. Then we have $\delta(K)\ge 3$.
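For concreteness, both reductions can be carried out mechanically; the Python
sketch below is an illustration only (the graph is assumed to be simple and
given as an adjacency dictionary with orderable vertex labels, and core
components that are bare cycles, which would reduce to a looped vertex, are
ignored).
\begin{verbatim}
def two_core(adj):
    """Vertex set of the core: repeatedly delete vertices of degree at most 1."""
    deg = {v: len(nb) for v, nb in adj.items()}
    stack = [v for v in deg if deg[v] <= 1]
    dead = set()
    while stack:
        v = stack.pop()
        if v in dead:
            continue
        dead.add(v)
        for w in adj[v]:
            if w not in dead:
                deg[w] -= 1
                if deg[w] <= 1:
                    stack.append(w)
    return set(adj) - dead

def kernel_edges(adj):
    """Serial reduction of the core: every maximal core path whose internal
    vertices have core-degree 2 becomes one kernel edge between vertices of
    core-degree at least 3.  Each kernel edge is reported once as a triple
    (endpoint, endpoint, length of the corresponding core path)."""
    core = two_core(adj)
    cdeg = {v: sum(1 for w in adj[v] if w in core) for v in core}
    branch = {v for v in core if cdeg[v] >= 3}
    edges = []
    for v in branch:
        for w in adj[v]:
            if w not in core:
                continue
            path = [v, w]
            while path[-1] not in branch:      # walk through degree-2 vertices
                prev, cur = path[-2], path[-1]
                path.append(next(x for x in adj[cur] if x in core and x != prev))
            if (path[0], path[1]) <= (path[-1], path[-2]):   # report each edge once
                edges.append((path[0], path[-1], len(path) - 1))
    return edges

# Two vertices joined by three internally disjoint paths, plus a pendant vertex:
# the kernel consists of two degree-3 vertices joined by a triple edge.
g = {1: {2, 3, 4}, 2: {1, 3, 5}, 3: {1, 2, 6}, 4: {1, 5}, 5: {2, 4}, 6: {3}}
print(kernel_edges(g))   # [(1, 2, 1), (1, 2, 2), (1, 2, 3)] (up to order)
\end{verbatim}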
Let $u=\Delta(G)$ and let $d$ be the diameter of $G$. It follows that
each edge of $K$ corresponds to a path $P$ in $C$ of length at most
$2d$.
\comment{(For otherwise any two vertices of $P$ at distance $d+1$
contradict the definition of $d$.)}
Let $l$ be an integer such that every set of $v\lambdae 6 l$ vertices of
$K$ spans at most $v$ edges in $K$. (Roughly speaking, we do not have
two short cycles close together.)
For $\{x,y\}\in E(K)$ let $A_{x,y}$ be the set of vertices obtained by
doing breadth first search in $K-x$ starting with $y$ until the
process dies or, after we have added a whole level, we reach at least
$k=2^{l-2}$ vertices. Let $K_{x,y}=K[A_{x,y}\cup \{x\}]$.
The \epsilonsilonmph{height} of $z\in V(K_{x,y})$ is the distance in $K-x$ between
$z$ and $y$. It is easy to deduce from the condition on short cycles
that each $K_{x,y}\sigmaubset K-x$ has at most one cycle and the maximum
height is at most $l$. In fact, the process dies only if
$y$ is an isolated loop in $K-x$. For $xy\in E(K)$ let $G_{x,y}$ be the
subgraph of $G$ corresponding to $K_{x,y}$. We view $K_{x,y}$ and
$G_{x,y}$ as having two special \emph{roots} $x$ and $y$.
Here is another assumption about $G$ and $l$ that we make. Suppose that for
any $xx',yy'\in E(K)$, if $K_{x,x'}$ and $K_{y,y'}$ both have order at
least $k$ and $A_{x,x'}\cap A_{y,y'}=\emptyset$, then the rooted graphs
$G_{x,x'}$ and $G_{y,y'}$ are not isomorphic. Let
\begin{eqnarray}
b_0&=&\frac{l(\ln u+\ln \ln n + l)}{\ln l} +2u+\log_2d,\label{eq:b0}\\
b&=& b_0 + t+ u +2\log_2d.\label{eq:b}
\end{eqnarray}
\betalm{a} Under the above assumptions on $G$, we have $D(G)\lambdae b+O(1)$.
\epsilonsilonnd{lemma}
\betapf
Let $G'\nuot\cong G$. Let $C',K'$ be its core and kernel. We can
assume that $\Deltaelta(G')=u$ and its diameter is $d$ for otherwise
Spoiler easily wins in $u+2$ or $\lambdaog_2d+O(1)$ moves.
By Lemma~\mubox{\rhom e}f{lm:DCore} it is enough to show that Spoiler can win the
Ehrenfeucht $(G,G')$-game in at most $b-\lambdaog_2d+O(1)$ moves provided
Duplicator always respects $C$ and $K$. Call this game $\C C$.
Color $V(K)\cup E(K)$ and $V(C)$ by the isomorphism type of the
subgraphs of $G$ which sit on a vertex/edge. We have a slight problem
with the edges of $K$ as the color of an unordered edge may depend in
which direction we traverse it. So, more precisely, every edge of $K$
is considered as a pair of ordered edges each getting its own
color. Do the same in $G'$. As $G\nuot\cong G'$, the obtained colored
digraphs $K$ and $K'$ cannot be isomorphic. Call the corresponding
digraph game $\C K$.
\claim1 If Spoiler can win the game $\C K$ in $m$ moves, then he can
win $\C C$ in at most $m+t+u+\lambdaog_2d+O(1)$ moves.
\betapf[Proof of Claim.] We can assume that each edge of $K'$ corresponds to a path in
$G'$ of length at most $2d+1$: otherwise Spoiler selects a vertex of
$C'$ at the $C'$-distance at least $d+1$ from any vertex of $K'$ and
wins in $\lambdaog_2d+ O(1)$ moves.
Spoiler plays according to his $\C K$-strategy by making moves inside
$V(K)\sigmaubset V(G)$ or $V(K')\sigmaubset V(G')$. Duplicator's reply are
inside $V(K')$, so they correspond to replies in the $\C K$-game. In
at most $m$ moves, Spoiler can achieve that the set of colored edges
between some selected vertices $x,y\in K$ and $x',y'\in K'$ are
different. (Or loops if $x=y$.)
In at most $u+1$ moves, Spoiler can either win or select a vertex $z$
inside a colored $xy$-path $P$ (an edge of $K$) such that $z'$ either
is not inside an $x'y'$-path (an edge of $K'$) or its path $P'\nui z'$
has a different coloring from $P$. In the former case, Spoiler wins by
Lemma~\mubox{\rhom e}f{lm:path}: in $G$ there is an $xy$-path containing $z$ and
no vertex from $K$.
Consider the latter case. Assume that $|P|=|P'|$, for otherwise we are
done by Lemma~\mubox{\rhom e}f{lm:path}. Spoiler selects $w\in P$ such that
for the vertex $w'\in P'$ with $\muathrm{dist}_P(w,x)=\muathrm{dist}_{P'}(w',x')$ we
have $T_w\nuot\cong T'_{w'}$. If Duplicator does not reply with $w'$,
then she has violated distances. Otherwise Spoiler needs at most $t$
extra moves to win the game $\C T$ on $(T_w,T'_{w'})$ (and at most
$\lambdaog_2d+O(1)$ extra moves to catch Duplicator if she does not
respect $\C T$).\epsilonsiloncpf
It remains to bound $D(K)$, the colored digraph version. This requires
a few preliminary results.
\claim2 For any $\{x,x'\}\in E(K)$ we have $D(K_{x,x'})\le b_0+O(1)$ in
the class of colored digraphs with two roots, where $b_0$ is defined
by~\eqref{eq:b0}.\medskip
\bpf[Proof of Claim.] Let $T=K_{x,x'}$ and $T'\not\cong T$. If $T$ is a tree, then we
just apply a version of Theorem~\mubox{\rhom e}f{th:MaxDeg} using the order
($\lambdae\! u 2^{l}$) and maximum degree ($\lambdae\! u$). Otherwise, Spoiler
first selects a vertex $z\in T$ which lies on the (unique) cycle. We
have at most $u-1$ components in $T-z$, viewing each as a colored tree
where one extra color marks the neighbors of $z$. As $T\nuot\cong T'$,
in at most $u+1$ moves we can restrict our game to one of the
components. (If Duplicator does not respect components, she loses in
at most $\lambdaog_2 d +O(1)$ moves.) Now, one of the graphs is a colored
tree, and Theorem~\mubox{\rhom e}f{th:MaxDeg} applies.\epsilonsiloncpf
\claim3 For every two distinct vertices $x,y\in V(K)$ there is a first
order formula $\Pihi_{x,y}(z)$ with one free variable and quantifier
rank at most $b_0+\lambdaog_2d+O(1)$ such that $G\muodels \Pihi_{x,y}(x)$ and
$G\nuot\muodels \Pihi_{x,y}(y)$. (Note that we have to find $\Pihi_{x,y}$
for $x,y$ in the kernel only, but we evaluate $\Pihi_{x,y}$ with
respect to $G$.){\muathrm e}dskip
\betapf[Proof of Claim.] To prove the existence of $\Pihi_{x,y}$ we have to describe
Spoiler's strategy, where he has to distinguish $(G,x)$ and $(G,y)$
for given distinct $x,y\in K$.
If the multiset of isomorphism classes $K_{x,x'}$, over $\{x,x'\}\in
E(K)$ is not equal to the multiset $\{ K_{y,y'}: \{y,y'\}\in
E(K)\}$, then we are done by Claim~2. So let us assume that these
multisets are equal.
Note that an isomorphism $K_{x,x'}\cong K_{y,y'}$ implies an
isomorphism $G_{x,x'}\cong G_{y,y'}$. Also, by our assumption on $l$,
the isomorphism $G_{x,x'}\cong G_{y,y'}$ implies that $V(K_{x,x'})\cap
V(K_{y,y'})\nuot=\epsilonsilonmptyset$.
At most one neighbor of $x$ can be an isolated loop for otherwise, we
get 3 vertices spanning 4 edges. The same holds for $y$. As the
height of any $K_{a,b}$ is at most $l$, we conclude that
$\muathrm{dist}_K(x,y)\lambdae 2l$. A moment's thought reveals that there must be a
cycle of length at most $4l$ containing both $x$ and $y$. But this cycle
rules out the possibility of a loop adjacent to $x$ or to $y$. Thus,
in order to exclude $2$ short cycles in $K$ close to each other, it
must be the case that $\muathrm{dist}(x,y)\lambdae l-1$ and
$d_K(x)=d_K(y)=3$. Moreover, let $x_1,x_2,x_3$ and $y_1,y_2,y_3$ be
the neighbors of $x$ and $y$ such that $G_{x,x_i}\cong G_{y,y_i}$;
then (up to a relabeling of indices), we have the following paths
between $x$ and $y$: either $(x,x_1,\dots,y_1,y)$ and
$(x,x_2,\dots,y_3,y)$, or $(x,x_1,\dots,y_3,y)$ and
$(x,x_2,\dots,y_1,y)$.
Now, $K_{x,x_3}$ is not isomorphic to $K_{x,x_1}$ nor to $K_{x,x_2}$
by the vertex-disjointness. (Note that it is not excluded that
$K_{x,x_1}\cong K_{x,x_2}$: they may intersect, for example, in $y$.)
But then $z=x$ is different from $z=y$ in the following respect: the
(unique) short cycle of $K$ containing $z$ has its two edges entering
$z$ from subgraphs isomorphic to $K_{x,x_1}$ and $K_{x,x_2}$ (while
for $z=y$ the corresponding subgraphs are isomorphic to $K_{x,x_1}$
and $K_{x,x_3}$).
This can be used by Spoiler as follows. Spoiler selects $x_1,x_2$. If
Duplicator replies with $y_3$, then Spoiler can use Claims~2 and~3
because $K_{y,y_3}$ is isomorphic to neither $K_{x,x_1}$ nor
$K_{x,x_2}$. Otherwise, the edge $\{x,x_2\}$ is on a short cycle while
$\{y,y_2\}$ is not. Spoiler uses Lemma~\ref{lm:cycle}.\ecpf
By Lemma~\mubox{\rhom e}f{lm:DCore} we can find $\Pihi_K(x)$, a formula of rank at
most $\lambdaog_2d+O(1)$ which, with respect to $G$, evaluates to $1$ for
all $x\in V(K)$ and to $0$ otherwise. More precisely,
Lemma~\mubox{\rhom e}f{lm:DCore} gives a formula $\Pihi_C(x)$ testing for $x\in
V(C)$. But $V(K)\sigmaubset V(C)$ are precisely the vertices of degree at
least $3$ in $C$.
\comment{So we can take
$$
\Pihi_K(x)= \Pihi_C(x) \wedge \epsilonsilonxists_{x_1,x_2,x_1} \lambdaeft(
\Pihi_C(x_1)\wedge \Pihi_C(x_2)\wedge \Pihi_C(x_3)\wedge x\sigmaim
x_1\wedge x\sigmaim x_2\wedge x\sigmaim x_3\wedge_{i\nuot= j} x_i\nuot=x_j\rhoight).
$$
}
Now, as it is easy to see, for any $x\in K$ the formula
\betaeq{Phi}
\Pihi_x(v):= \Pihi_K(v) \wedge \betaigwedge_{y\in V(K)\sigmaetminusinus \{x\}}
\Pihi_{x,y}(v)
\epsilonsiloneq
identifies uniquely $x$ and has rank at most
$\lambdaog_2d+b_0+ O(1)$.
Take $x\in V(K)$. If there is no $x'\in V(K')$ such that $G'\muodels
\Pihi_{x}(x')$, then Spoiler selects $x$. Whatever Duplicator's reply
$x'$ is, it evaluates differently from $x$ on $\Pihi_{x}$. Spoiler can
now win in at most $D(\Pihi_{x})$ moves, as required. If there are two
distinct $y',z'\in K'$ such that $G'\muodels \Pihi_{x}(y')$ and
$G'\muodels \Pihi_{x}(z')$, then Spoiler selects both $y'$ and $z'$. At
least one of Duplicator's replies is not equal to $x$, say,
$y\nuot=x$. Again, the selected vertices $y\in V(K)$ and $y'\in V(K')$
are distinguished by $\Pihi_x$, so Spoiler can win in at most extra
$D(\Pihi_x)$ moves.
Therefore, let us assume that for every $x\in V(K)$ there is a
unique vertex $x'=\phi(x)\in V(K')$ such that $G'\models
\Phi_x(x')$. Clearly, $\phi$ is injective. Furthermore, $\phi$ is
surjective, for if $x'\not\in \phi(V(K))$, then Spoiler wins by
selecting $x'\in V(K')$ and then using $\Phi_x$, where $x\in V(K)$ is
Duplicator's reply. Moreover, we can assume that Duplicator always
respects~$\phi$, for otherwise Spoiler wins in at most
$\log_2d+b_0+O(1)$ extra moves.
As $K\nuot\cong K'$, Spoiler can select $x,y\in V(K)$ such that the
multisets of colored paths (or loops if $x=y$) between $x$ and $y$ and
between $x'=\pihi(x)$ and $y'=\pihi(y)$ are distinct. Again, this means
that some colored path has different multiplicities and Spoiler can
highlight this in at most $u+1$ moves. Then in at most $\lambdaog_2l+O(1)$
moves he can ensure that some vertices $z\in V(K)$ and $z'\in V(K')$
are selected such that the removed trees $T_z$ and $T_{z'}$ rooted at
$z$ and $z'$ are not isomorphic, compare with
Lemma~\mubox{\rhom e}f{lm:path}.
Now, by the definition of $t$, at most $t$ moves are enough to
distinguish $T_z$ from $T_{z'}'$ (plus possible $\lambdaog_2 d +O(1)$ moves
to catch Duplicator if she replies outside $V(T_z)\cup V(T_{z'})$).
This completes the proof of Lemma~\mubox{\rhom e}f{lm:a}.\epsilonsilonpf
\sigmaubsubsection{Probabilistic Part}
Here we estimate the parameters from the previous section. As before,
let $G$ be the giant component of $\C G(n,\pihirac cn)$, let $C$ be its core,
etc.
It is well-known that whp $u=O(\pihirac{\lambdan n}{\lambdan\lambdan n})$ and
$d=O(\lambdan n)$. \comment{Reference???}
\betalm{Shaved} Whp every edge of $K$ corresponds to at most $O(\lambdan n)$
vertices of $G$. Similarly, for any $x\in V(C)$ we have
$v(T_x)=O(\lambdan n)$.\epsilonsilonnd{lemma}
\begin{proof}
The expected number of $K$-edges corresponding to precisely $i$
vertices in $G$ is at most
$$
\binom{n}{i}\binom{i}{2} p^{i-1} i^{i-2} (1-p)^{(i-2)(n-i)} \le
n i^2\left(\frac{{\mathrm e} c}{{\mathrm e}^c}\right)^{i}.
$$
But ${\mathrm e} c< {\mathrm e}^c$ for $c>1$, so if $i$ is large enough, $i>M\ln n$ for a
suitable constant $M=M(c)$, then the expectation is $o(n^{-3})$.
Similarly, the expected number of vertices $x$ with $v(T_x)=i$ is at most
$$n\binom{n-1}{i-1}p^{i-1}i^{i-2}(1-p)^{(i-1)(n-i)}\leq 2n i\left(\frac{{\mathrm e} c}{{\mathrm e}^c}\right)^{i},$$
which is again $o(n^{-3})$ for $i>M\ln n$. Summing over $i$ and applying
Markov's inequality completes the proof.
\end{proof}
In particular, our results from Section~\mubox{\rhom e}f{general} imply that whp
$t=O(\pihirac{\lambdan n}{\lambdan \lambdan n})$.
Let, for example, $l=2\lambdan \lambdan n$. Thus $k/\lambdan n\tauo\infty$, where
$k=2^{l-2}$. It remains to prove that this choice of $l$ satisfies all
the assumptions.
\betalm{ShortCycle} Whp any set of $s\lambdae 6l$ vertices of $K$ spans at
most $s$ edges.\epsilonsilonnd{lemma}
\betapf A moment's thought reveals that it is enough to consider
sets spanning connected subgraphs only.
Let $L=M\lambdan n$ be given by Lemma~\mubox{\rhom e}f{lm:Shaved}. The probability
that there is a set $S$ such that $|S|=s\lambdaeq 6l$ and $K[S]$ is a
connected graph with at least $s+1$ edges is at most
\betaegin{align*}
&o(1)+\sigmaum_{s=4}^{6l}\betainom{n}{s}\, s^{s-2}\, {s\choose 2}^2\sigmaum_{0\lambdaeq
\epsilonsilonll_1,\lambdadots,\epsilonsilonll_{s+1}\lambdaeq L}
\pirod_{i=1}^{s+1}\betainom{n}{\epsilonsilonll_i}(\epsilonsilonll_i+2)^{\epsilonsilonll_i}p^{\epsilonsilonll_i+1}(1-p)^{\epsilonsilonll_i(n-\epsilonsilonll_i-2)}\\
&\lambdaeq o(1)+\sigmaum_{s=4}^{6l}\betafrac{n{\muathrm e}}{s}^s s^{s+2}\sigmaum_{0\lambdaeq \epsilonsilonll_1,\lambdadots,\epsilonsilonll_{s+1}\lambdaeq L}
\pirod_{i=1}^{s+1}\lambdaeft(\pihirac{c{\muathrm e}^2}{n}\lambdaeft(\pihirac{{\muathrm e}
c}{{\muathrm e}^c}\rhoight)^{\epsilonsilonll_i}\rhoight)\ \lambdae\ o(1)+
\sigmaum_{s=4}^{6l}\pihirac{(O(1))^s}{n}\ =\ o(1).
\epsilonsilonnd{align*}
The lemma is proved.\epsilonsilonpf
\betalm{Kab} Whp $K$ does not contain four vertices $x,x',y,y'$ such that
$xx',yy'\in E(K)$, $v(K_{x,x'})\gammae k$, $A_{x,x'}\cap
A_{y,y'}=\epsilonsilonmptyset$, and $G_{x,x'}\cong G_{y,y'}$.\epsilonsilonnd{lemma}
\betapf Given $c$, choose the following constants in this order: small
$\epsilonsilon_1>0$, large $M_1$, large $M_2$, small $\epsilonsilon_2>0$, and large $M_3$.
Consider breadth-first search in $G-x$ starting with $x'$. Let
$L_1=\{x'\}$, $L_2$, $L_3$, etc., be the levels. Let $T_i=\{x\}\cup
(\bigcup_{j=1}^i L_j)$. Let $s$ be the smallest index such that $|T_s|\ge
M_2\ln n$.
Chernoff's bound implies that the probability of $|T_s|> 2cM_2 \lambdan n$
is $o(n^{-2})$. Indeed, this is at most the probability that the
binomial random variable with parameters $(n, \pihirac cn \tauimes M_2\lambdan n)$
exceeds $2cM_2\lambdan n$.
Similarly, with probability $1-o(n^{-3})$ we have $|L_{i+1}|=(c\pim
\epsilonsilon_2)|L_i|$ provided $i\gammae s$ and $|T_i|=o(n)$. Hence, we
see that from the first time we reach $2M_2\lambdan n$ vertices, the levels
increase proportionally with the coefficient close to $c$ for further
$\Theta(\lambdan n)$ steps.
Take some $i$ with $|T_i|=O(\lambdan n)$. The sizes of the first $\Theta(\lambdan
n)$ levels of the breadth-first search from the vertices of $L_i$ can
be bounded from below by independent branching processes with the number of
children having the Poisson distribution with mean $c-\epsilonsilon_2$. Indeed,
for every active vertex $v$ choose a pool $P$ of
$\ceil{(1-\pihirac{\epsilonsilon_2}c)n}$ available vertices and let $v$ choose its
neighbors from $P$, each with probability $c/n$. (The edges between
$v$ and $\O P$ are ignored.) If $v$ claimed $r$ neighbors, then, when
we take the next active vertex $u$, we add extra $r$ vertices to the
pool, so that its size remains constant.
With positive probability $p_1$ the ideal branching process survives
infinitely long; in fact, $p_1$ is the positive root of
$1-p_1={\muathrm e}^{-cp_1}$. Let
$$
p_2=\muax_{j\gammae 0} \pihirac{c^j{\muathrm e}^{-c}}{j!} <1.
$$
The numbers $p_1>0$ and $p_2<1$ are constants (depending on $c$
only).
Take the smallest $i$ such that $|T_i|\gammae 2cM_3\lambdan n$. The
breadth-first search inside $G$ goes on for at least $M_1$ further
rounds (after the $i$-th round) before we reach a vertex outside
$G_{x,x'}$. We know that $|L_i|\gammae (\pihirac{c-1}c-\epsilonsilon_1)\,|T_i|$ because
the levels grow proportionally from the $s$-th level. Let $Z$ consist
of the vertices of $L_i$ for which the search process in $G-x$ goes on
for at least $M_1$ further levels before dying out. By Chernoff's
bound, with probability $1-o(n^{-2})$ we have $|Z|\gammae \pihirac{p_1}2
|L_i|$.
Let us fix any $K_{x,x'}$ having all the above properties and compute
the expected number of copies of $K_{x,x'}$ in $G$. More precisely, we
compute the expected number of subgraphs of $G$ isomorphic to
$G[T_{i}]$ such that a specified $|Z|$-subset of the last level has
specified trees, each of height at least $M_1$, sitting on it. The
expected number of $G[T_i]$-subgraphs is at most
$n^{|T_i|}\,p^{|T_i|-1}$. This has to be multiplied by
$$
(p_2+o(1))^{M_1|Z|} \le p_2^{M_1(c-1)p_1\,|T_i|/4c},
$$
because if we want to get a given height-$M_1$ tree, then at least
$M_1$ times we have to match the sum of degrees of a level, each
coincidence having probability at most $p_2+o(1)$. As the constant
$M_1$ can be arbitrarily large, we can make the total expectation
$o(n^{-2})$.
Markov's inequality implies the lemma.\epsilonsilonpf
Finally, putting all together we deduce the upper bound of
Theorem~\mubox{\rhom e}f{th:giant}.
\subsection{Lower Bound}
Let $l=(1-\epsilon) \frac{\ln n}{\ln \ln n}$ for some $\epsilon>0$. We claim that
whp the core $C$ has a vertex $i$ adjacent to at least $l$ leaves of
$G$. (Then we have $D(G)\ge l+1$: consider the graph obtained from $G$
by adding an extra leaf to $i$.)
Let us first prove this claim for the whole random graph $H\in \C
G(n,c/n)$ (rather than for the giant component $G\sigmaubset H$). For
$i\in [n]$ let $X_i$ be the event that the vertex $i$ is incident to
at least $l$ leaves. It is easy to estimate the expectation of
$X=\sigmaum_{i=1}^n X_i$:
\betaegin{eqnarray*}
E(X) &=& n \betainom{n-1}{ l} p^l (1-p)^{\betainom{l}{ 2} + l(n-l)} +O(1)\tauimes
n\betainom{n}{ l+1} p^{l+1}(1-p)^{(l+1)n}\\
&=& (1+o(1)) \pihirac{nc^l{\muathrm e}^{-cl}}{l!}\ \tauo\ \infty.
\epsilonsilonnd{eqnarray*}
Also, for $i\nuot=j$,
\betaegin{eqnarray*}
E(X_i\wedge X_j) &=&(1+o(1))\,\betainom{n-2}{ l} \betainom{n-l-2}{ l}p^{2l}
(1-p)^{\betainom{2l}{ 2} +2l(n-2l-1)}\\
&=& (1+o(1))\, E(X_i)E(X_j).
\epsilonsilonnd{eqnarray*}
The second moment method gives that $X$ is concentrated around its
mean.
Now, let us reveal the vertex set $A$ of the $2$-core of the whole
graph $H$. When we expose the stars demonstrating $X_i=1$ one by one,
for each $i$ the probability of $i\in A$ is $\frac{|A|}n+o(1)$.
The sharper results of {\L}uczak~\cite{luczak:91}
\comment{Or Pittel~\cite{pittel:90}?}
imply that whp the core $C$ of the giant component has size
$\Theta(n)$. Hence, whp at least one vertex $i$ with $X_i=1$ belongs
to $V(C)$, as required.
\section{Random Trees}\label{random}
We consider the probabilistic model $\C T(n)$, where a tree $T$ on the
vertex set $[n]$ is selected uniformly at random among all $n^{n-2}$
trees. In this section we prove that whp $D(T)$ is close to the
maximum degree of $T$.
\begin{theorem}\label{th:RandomTree} Let $T\in\C T(n)$. Whp
$D(T)=(1+o(1))\Delta(T)=(1+o(1))\frac{\ln n}{\ln\ln n}$.
\end{theorem}
\nuewcommand{{\tauextrm{Var}}}{{\tauextrm{Var}}}
\nuewcommand{{\tauextrm{Ch}}}{{\tauextrm{Ch}}}
\nuewcommand{{\tauextrm{del}}}{{\tauextrm{del}}}
Let ${\cal F}(n,k)$ be a forest chosen uniformly at random from the family
${\cal F}_{n,k}$ of all forests with the vertex set
$[n]$ which consist of $k$ trees rooted at the vertices
$1,2,\dots,k$. Note that a random tree $T\in {\cal T}(n)$ can be
identified with ${\cal F}(n,1)$. We recall that $|{\cal F}_{n,k}|=kn^{n-k-1}$,
see e.g.\ Stanley~\cite[Theorem~5.3.2]{stanley:ec}. We start with the
following simple facts on ${\cal F}(n,k)$.
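As a quick sanity check of this count (an illustration only): for $n=3$ and
$k=1$ the formula gives $1\cdot 3^{3-1-1}=3$, which matches the three labeled
trees on $\{1,2,3\}$, while for $n=3$ and $k=2$ it gives $2\cdot 3^{3-2-1}=2$,
which matches the two forests in which the non-root vertex $3$ is attached
either to the root $1$ or to the root $2$.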
\betalm{forest} Let $k=k(n)\lambdae \lambdan^4 n$.
\betaegin{enumerate}
\mubox{\rhom e}newcommand{(\rhooman{enumi})}{(\rhooman{enumi})}
\item The expected number of vertices in all trees of ${\cal F}(n,k)$,
except for the largest one, is $O(k\sigmaqrt n)$.
\item The probability that ${\cal F}(n,k)$ contains precisely $\epsilonsilonll$, $\epsilonsilonll=0,\deltaots,k-1$,
isolated vertices is given by $(1+O({k^2}/{n}))
\betainom{k-1}\epsilonsilonll {\muathrm e}^{-\epsilonsilonll}(1-{\muathrm e}^{-1})^{k-\epsilonsilonll-1}$.
\item The probability that the roots of ${\cal F}(n,k)$ have more than $k(1+1/\lambdan n)+2\lambdan^2 n$
neighbors combined is $o(n^{-3})$.
\item The probability that $\epsilonsilonll$ given roots of ${\cal F}(n,k)$ have
degree at least $s\gammae 4$ each is bounded from above by $(2/(s-1)!)^\epsilonsilonll$
\epsilonsilonnd{enumerate}
\epsilonsilonnd{lemma}
\betapf If $i\lambdae n/2+1$, then the probability that a tree rooted at
a vertex $j=1,2,\deltaots,k$ in the forest ${\cal F}(n,k)$ has precisely $i$
vertices is given by
$$\betainom {n-k}{i-1} i^{i-2}
\pihirac{(k-1)(n-i)^{n-i-k}}{k n^{n-k-1}}=O(i^{-3/2})\,.$$
Consequently, the expectation of the sum of the orders of all
components of ${\cal F}(n,k)$ with at most $n/2+1$ vertices is $O(k \sigmaqrt n)$.
In order to see (ii) note that from the generalized
inclusion-exclusion principle the stated probability equals
\betaegin{equation}\lambdaambdabel{eqf1}
\betaegin{aligned}
\sigmaum_{i=\epsilonsilonll}^k&\betainom i\epsilonsilonll(-1)^{i-\epsilonsilonll}\betainom ki\pihirac{(k-i)(n-i)^{n-k-1}}{kn^{n-k-1}}\\
=&{\cal B}ig(1+O{\cal B}ig(\pihirac{k^2}{n}{\cal B}ig){\cal B}ig)
\sigmaum_{i=\epsilonsilonll}^k\pihirac{(k-1)!}{\epsilonsilonll!(i-\epsilonsilonll)!(k-1-i)!}(-1)^{i-\epsilonsilonll}{\muathrm e}^{-i}\\
=&{\cal B}ig(1+O{\cal B}ig(\pihirac{k^2}{n}{\cal B}ig){\cal B}ig)
\betainom{k-1}\epsilonsilonll {\muathrm e}^{-\epsilonsilonll}(1-{\muathrm e}^{-1})^{k-\epsilonsilonll-1}\,.
\epsilonsilonnd{aligned}
\epsilonsilonnd{equation}
For the probability that precisely $m$ ($\gammae\! k$) vertices
of ${\cal F}(n,k)$ are adjacent to the roots, Stirling's formula gives
\betaegin{equation}\lambdaambdabel{f1}
\betainom{n-k}{m}k^m\pihirac{m\,(n-k)^{n-k-m-1}}{k\,n^{n-k-1}}
\lambdae {\cal B}ig(1+O{\cal B}ig(\pihirac{k^2}n{\cal B}ig){\cal B}ig){\cal B}ig(\pihirac{{\muathrm e}^ {1-k/m}k}{m}{\cal B}ig)^{m}.
\epsilonsilonnd{equation}
For every $x$, $0<x<1$, we have $x{\muathrm e}^{1-x}\lambdae {\muathrm e}^{-(1-x)^2/2}$,
so the above formula is bounded from above by $\epsilonsilonxp(-\pihirac{(m-k)^2}{2m})$.
Since
$$\sigmaum_{m\gammae k(1+1/\lambdan n)+2\lambdan ^2n}\epsilonsilonxp{\cal B}ig(-\pihirac{(m-k)^2}{2m}{\cal B}ig)=o(n^{-3})\,,$$
the assertion follows.
For $k=1$ the probability that a given root has
degree at least $s$ is bounded from above by
$$\sigmaum_{t\gammae s}\betainom{n-1}{t}\pihirac{t(n-1)^{n-t-2}}{n^{n-2}}\lambdae
\sigmaum_{t\gammae s}\pihirac{1}{(t-1)!}\lambdae \pihirac{2}{(s-1)!}\;.$$
If we fix some $\epsilonsilonll\gammae 2$ roots, then if we condition on the vertex
sets of the $\epsilonsilonll$ corresponding components, the obtained trees are
independent and uniformly distributed, implying the required bound by
the above calculation.
\epsilonsilonpf
Using the above result one can estimate
the number of vertices of $T\in {\cal T}(n)$ with a
prescribed number of pendant neighbors.
\betalm{vert} Let $X_{\epsilonsilonll,m}$ denote the number of vertices in $T\in
{\cal T}(n)$ with precisely $\epsilonsilonll$ neighbors of degree one and $m$
neighbors of degree larger than one. Let
$$
A\sigmaubseteq\{(\epsilonsilonll,m)\colon\; 0\lambdae \epsilonsilonll\lambdae \lambdan n, \quad 1\lambdae m\lambdae \lambdan n
\}\,,$$
be a set of pairs of natural numbers and $X_A=\sigmaum_{(\epsilonsilonll,m)\in A}
X_{\epsilonsilonll,m}$. Then, the expectation
\betaegin{equation}\lambdaambdabel{eqf2}
E(X_A)=(1+o(1))\,n\sigmaum_{(\epsilonsilonll,m)\in A}
\pihirac{{\muathrm e}^{-\epsilonsilonll-1}}{\epsilonsilonll!}\pihirac{(1-{\muathrm e}^{-1})^{m-1}}{(m-1)!}
\epsilonsilonnd{equation}
and $E(X_A(X_A-1))=(1+o(1))\,(E(X_A))^2$. \epsilonsilonnd{lemma}
\betapf
Using Lemma~\mubox{\rhom e}f{lm:forest}(ii) we get
\betaegin{equation*}
E(X_A)=(1+o(1))n\sigmaum_{(\epsilonsilonll,m)\in A}\betainom{n-1}{m+\epsilonsilonll}\betainom{m+\epsilonsilonll-1}\epsilonsilonll
{\muathrm e}^{-\epsilonsilonll} (1-{\muathrm e}^{-1})^{m-1}\pihirac{(m+\epsilonsilonll)(n-1)^{n-m-\epsilonsilonll-2}}{n^{n-2}}
\epsilonsilonnd{equation*}
which gives (\mubox{\rhom e}f{eqf2}). In order to count the expected number of pairs
of vertices with prescribed neighborhoods one needs first to choose
$\epsilonsilonll+m$ neighbors of a vertex and then compute the expectation of
the number of vertices of a given neighborhood in the random forest
${\cal F}(n,\epsilonsilonll+m)$ obtained in this way. However, the largest
tree of ${\cal F}(n,\epsilonsilonll+m)$ has the expectation $n-O(\sigmaqrt n \lambdan n)$ (Lemma~\mubox{\rhom e}f{lm:forest});
one can easily observe that this fact implies
that the expected number of vertices with a prescribed neighborhood in ${\cal F}(n,\epsilonsilonll+m)$
is $(1+o(1))\,E(X_A)$, and so $E(X_A(X_A-1))=(1+o(1))\,(E(X_A))^2$.
\epsilonsilonpf
As an easy corollary of the above result we get a lower bound for
$D({\cal T}(n))$.
\betath{lower} Let $T\in\C T(n)$.
Whp $D(T)\gammae (1-o(1))\Deltaelta(T)=(1-o(1))\, \pihirac{\lambdan n}{\lambdan \lambdan n}$.
\epsilonsilonnd{theorem}
\begin{proof} Since whp the maximum degree
is $(1+o(1)){\ln n}/{\ln\ln n}$, in order to prove the assertion
it is enough to show that whp $T$
contains a vertex $v$ with
\begin{equation}\label{eqf3}
\ell_0=(1-o(1))\, \frac{\ln n}{\ln \ln n}
\end{equation}
neighbors of degree one; indeed, to characterize such a structure
Spoiler needs at least $\ell_0+1$ moves. Using Lemma~\ref{lm:vert}, we
infer that for the number of vertices $X_{\ell}$ of $T$ with
exactly $\ell$ neighbors of degree $1$ we have
$E(X_\ell)=\Theta({\mathrm e}^{-\ell}n/\ell!)$. Thus, one can choose $\ell_0$ so that
(\ref{eqf3}) holds and $E(X_{\ell_0})\to\infty$. Then, due to
Lemma~\ref{lm:vert}, ${\textrm{Var}}(X_{\ell_0})=o((E(X_{\ell_0}))^2)$,
and Chebyshev's inequality implies that whp $X_{\ell_0}>0$.\end{proof}
Let us state another simple consequence of Lemma~\ref{lm:forest}
which will be used in our proof of Theorem~\ref{th:RandomTree}. Here and below $N_r(v)$
denotes the $r$-neighborhood of $v$, i.e., the set of all vertices of a graph which
are at distance $r$ from $v$, and $N_{\le r}(v)=\bigcup_{i=0}^r N_i(v)$.
\betalm{largedegrees} Let $r_0=r_0(n)= \lambdaceil 7 \lambdan n\rhoceil $. Then,
whp the following holds for every vertex $v$ of $T\in {\cal T}(n)$:
\betaegin{enumerate}
\mubox{\rhom e}newcommand{(\rhooman{enumi})}{(\rhooman{enumi})}
\item $|N_{\lambdae r_0}(v)|\lambdae 10^8 \lambdan^4n\;,$
\item $N_{\lambdae r_0}(v)$ contains fewer than $\lambdan n/(\lambdan\lambdan n)^2$ vertices
of degree larger than $(\lambdan\lambdan n)^5$.
\epsilonsilonnd{enumerate}
\epsilonsilonnd{lemma}
\betapf For $s\lambdae r_0$ let $W_s=\cup_{i=0}^s N_i(v)$.
Note that, conditioned on the structure of the
subtree of $T$ induced by $W_s$
for some $s\lambdae r_0$, the forest $T- W_{s-1}$
can be identified with the random forest on $n-|W_{s-1}|$
vertices, rooted at the set $W_s$. Thus,
it follows from Lemma~\mubox{\rhom e}f{lm:forest}(iii) that
once for some $i$ we have $|N_i(v)|\gammae 4 \lambdan ^3 n$
then $|N_{i+1}(v)|\lambdae |N_i(v)|(1+2/\lambdan n)$,
so that
$$|N_{\lambdae r_0}(v)|\lambdae 4 r_0\lambdan ^3n (1+2/\lambdan n)^{r_0}\lambdae 10^8 \lambdan^4n\;.$$
In order to show (ii) note that (i) and Lemma~\mubox{\rhom e}f{lm:forest}(iv)
imply that the probability that, for some
vertex $v$, at least $\epsilonsilonll=\lambdafloor \lambdan n/(\lambdan\lambdan n)^2\rhofloor$ vertices of $N_{\lambdae r_0}(v)$
have degree larger than $m=(\lambdan\lambdan n)^5$ is bounded from above by
$$n\betainom {\lambdan^5 n}{\epsilonsilonll}\lambdaeft(\pihirac{2}{(m-1)!}\rhoight)^\epsilonsilonll
\lambdae n\lambdaeft(\pihirac{2{\muathrm e} \lambdan^5n}{\epsilonsilonll(m-1)!}\rhoight)^\epsilonsilonll\lambdae n{\muathrm e}^{-m\epsilonsilonll}=o(1).$$
\comment{
Here is a small hole: we know that the probability of having at least
$>m$ neighbors is at most $2/m!$ but why is the probability that $l$
given vertices each have degree $>m$ is at most $(2/(m-1)!)^l$?
Proof: we expose levels one by one. Once we have exposed a level, we
allow an adversary to choose any number of active vertices, provided
he does not choose more than $\epsilonsilonll$ vertices in total. Then adversary
succeeds (all his points have high degree) with probability at most
$(2/(m-1)!)^\epsilonsilonll$.
}
\epsilonsilonpf
In our further argument we need some more definitions. Let $T$ be a
tree and let $v$ be a vertex of $T$. For a vertex $w\in N_r(v)$ let
$P_{vw}$ denote the unique path connecting $v$ to $w$ (of length
$r$). Let the \emph{check} ${\textrm{Ch}}(v;P_{vw})$ be the binary sequence
$b_0\cdots b_r$, in which, for $i=0,\dots, r$, $b_{i}$ is zero (resp.\
1) if the $i$-th vertex of $P_{vw}$ is adjacent (resp.\ not adjacent)
to a vertex of degree one. Finally, the \emph{$r$-checkbook}
${\textrm{Ch}}_r(v)$ is the set
$$
{\textrm{Ch}}_r(v)=\{{\textrm{Ch}}(v;P_{vw})\colon w\in N_r(v)\textrm{\ and }P_{vw}
\textrm{ is a path of length $r$}\}.
$$
Note that a checkbook is not a multiset, i.e., a check from
${\textrm{Ch}}_r(v)$ may correspond to more than one path $P_{vw}$.
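The checks of a given vertex are easy to enumerate; the Python sketch below is
an illustration only (the tree is assumed to be given as an adjacency
dictionary, and the function name is ours) and computes the $r$-checkbook
directly from this definition.
\begin{verbatim}
from collections import deque

def checkbook(adj, v, r):
    """The r-checkbook of v: the set of checks of all paths of length r
    starting at v.  A check is a tuple of r+1 bits, bit i being 0 iff the
    i-th vertex of the path is adjacent to a vertex of degree one."""
    near_leaf = {u: any(len(adj[w]) == 1 for w in adj[u]) for u in adj}
    book = set()
    queue = deque([(v, None, (0 if near_leaf[v] else 1,))])
    while queue:                                   # grow all paths from v
        u, parent, check = queue.popleft()
        if len(check) == r + 1:                    # the path has length r
            book.add(check)
            continue
        for w in adj[u]:
            if w != parent:
                queue.append((w, u, check + (0 if near_leaf[w] else 1,)))
    return book

# A path on five vertices: from the end-vertex 1 there is one path of length 2,
# and only its middle vertex 2 has a pendant neighbour (namely 1).
path5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(checkbook(path5, 1, 2))   # {(1, 0, 1)}
\end{verbatim}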
Our proof of the upper bound for $D({\cal T}(n))$ is based on the following
fact.
\begin{theorem}\label{th:checks} Let $r_0=\lceil 7 \ln n\rceil$.
Whp for each pair $P_{vw}$, $P_{v'w'}$ of
paths of length $r_0$ in $T\in {\cal T}(n)$ which share at most one vertex,
the checks ${\textrm{Ch}}(v;P_{vw})$ and ${\textrm{Ch}}(v';P_{v'w'})$
are different.
\end{theorem}
\betapf Let $C={\tauextrm{del}}(T)$ denote the tree obtained from $T$ by removing
all vertices of degree one. From Lemma~\mubox{\rhom e}f{lm:vert} it follows that
whp the tree $C$ has $(1-{\muathrm e}^{-1}-o(1))n$ vertices of which
$$
(1+o(1))\,n \sigmaum_{\epsilonsilonll>0} \pihirac{{\muathrm e}^{-\epsilonsilonll-1}}{\epsilonsilonll!} =
(\epsilonsilonxp({\muathrm e}^{-1}-1)-{\muathrm e}^{-1}+o(1))\,n$$
vertices have degree one and
$$
\alpha n = (1-\exp({\mathrm e}^{-1}-1) +o(1))\, n
$$
vertices have degree greater than one.
Moreover, among the set $B$ of $({\mathrm e}^{-1}+o(1))n$ vertices removed from $T$,
$$
(1+o(1))n\sum_{\ell=0}^\infty
\ell\frac{{\mathrm e}^{-\ell-1}}{\ell!}=(1+o(1))\exp({\mathrm e}^{-1}-2)n\,$$
were adjacent to vertices which became pendant in $C$.
Let $B'$ denote the set of the remaining
$$
({\muathrm e}^{-1}-\epsilonsilonxp({\muathrm e}^{-1}-2)+o(1))n=(\rhoho_0+o(1))n
$$
vertices which are adjacent to vertices of degree at least two in $C$.
Note that, given $C={\textrm{del}}(T)$, each attachment of
vertices from $B\setminus B'$ to pendant vertices
of $C$ such that each pendant vertex of $C$ gets at least one vertex
from $B\setminus B'$, as well as each attachment of vertices from $B'$
to vertices of degree at least two from $C$, is equally likely.
Let $P_{vw}$, $P_{v'w'}$ be two paths of length $r_0$ in $T$
which share at most one vertex. Clearly, each vertex of $P_{vw}$,
except, maybe, at most two vertices at each of the ends, belongs to $C$ and
has in it at least two neighbors; the same is true for $P_{v'w'}$.
Since $(\rhoho_0+o(1))n$ vertices from $B'$ are attached to the $\alphalpha
n$ vertices of degree at least two in $C$ at random, the
probability that one such vertex gets no attachment is
$$
p_0=(1+o(1))\, \lambdaeft(1-\pihirac1{\alphalpha n}\rhoight)^{\rhoho_0 n}= (1+o(1))\,
{\muathrm e}^{-\rhoho_0/\alphalpha} = 0.692...+o(1).
$$
Therefore, the probability that the checks ${\textrm{Ch}}(v;P_{vw})$ and
${\textrm{Ch}}(v';P_{v'w'})$ are identical is bounded from above by
$$
\left(p_0^2+(1-p_0)^2 +o(1)\right)^{r_0}\le {\mathrm e}^{-3\ln n}=o(n^{-2})\,.
$$
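Numerically (a rounded check, not needed for the argument):
$\rho_0={\mathrm e}^{-1}-\exp({\mathrm e}^{-1}-2)\approx 0.172$ and
$\alpha=1-\exp({\mathrm e}^{-1}-1)\approx 0.469$, so
$p_0\approx{\mathrm e}^{-0.368}\approx 0.692$ and
$p_0^2+(1-p_0)^2\approx 0.574$; since $r_0\ge 7\ln n$, the left-hand side
above is at most $n^{7\ln 0.574}\le n^{-3.8}$, consistent with the stated
bound ${\mathrm e}^{-3\ln n}$.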
Since by Lemma~\mubox{\rhom e}f{lm:largedegrees}(i) whp $T$ contains at most
$O(n\lambdan^4 n)$ checks of length $r_0$, the assertion follows.
\epsilonsilonpf
Now, let $r_0=\lambdaceil 7 \lambdan n\rhoceil$.
We call a tree $T$ on $n$ vertices \epsilonsilonmph{typical} if:
\betaegin{itemize}
\item for each pair of paths $P_{vw}$, $P_{v'w'}$ of length
$r_0$ which share at most one vertex,
the checks ${\tauextrm{Ch}}(v;P_{vw})$, ${\tauextrm{Ch}}(v;P_{v'w'})$ are different,
\item for the maximum degree $\Deltaelta$ of $T$ we have
$$\pihirac{\lambdan n}{2\lambdan\lambdan n}\lambdae \Deltaelta\lambdae \pihirac{2\lambdan n}{\lambdan\lambdan n} \,,$$
\item $|N_{\le r_0}(v)|\le 10^8\ln ^4 n$, for every vertex $v$,
\item for every vertex $v$ at most $\lambdan n/(\lambdan\lambdan n)^2$ vertices of
degree larger than $(\lambdan\lambdan n)^5$ lie within distance
$r_0$ from $v$.
\epsilonsilonnd{itemize}
\betath{upper} For a typical tree $T\in\C T(n)$ we have
$D(T)\lambdae (1+o(1))\, \Deltaelta$. \epsilonsilonnd{theorem}
\betapf Let $T$ be a typical tree and $T'$ be any other graph which is
not isomorphic to $T$. We shall show that then Spoiler can win the
Ehrenfeucht game on $T$ and $T'$ in $(1+o(1))\Deltaelta$ moves.
Let us call a vertex $v$ of a graph a \epsilonsilonmph{yuppie}, if there are two
paths $P_{vw}$, $P_{vw'}$ of length $r_0$ starting at $v$ so that
$V(P_{vw})\cap V(P_{vw'})=\{v\}$. Note that the set of all yuppies
$Y$ spans a subtree in $T$, call it $K$.
Our approach is similar to that for the giant component from
Section~\mubox{\rhom e}f{giant}.
Let us view $K$ as a colored graph where the color of a vertex $x$ is
the isomorphism type of the component of $T-(Y\sigmaetminusinus\{x\})$ rooted
at $x$. Let $Y'$ be the set of yuppies of $T'$, and let
$K'=T'[Y']$. We can assume that Duplicator preserves the subgraphs $K$
and $K'$, for otherwise Spoiler wins in extra $O(\lambdan \lambdan n)$ moves.
\claim1 Any distinct $v,v'\in K$ can be distinguished (with respect to
$G$) in $O(\lambdan\lambdan n)$ moves.{\muathrm e}dskip
\betapf[Proof of Claim.] Assume that the $r_0$-checkbooks of $v,v'$ are the same for
otherwise Spoiler wins in $\lambdaog_2(r_0)+O(1)$ moves. (Please note that
the checkbooks are viewed as sets, not as multisets, so the number of
moves does not depend on the degrees of $v$ and $v'$.)
Take a path $P_{vx}$ of length $r_0$ which shares with $P_{vv'}$
only the vertex $v$. Spoiler selects $x$. Let Duplicator reply with
$x'$. Assume that ${\textrm{Ch}}(v;P_{vx})={\textrm{Ch}}(v';P_{v'x'})$. The path
$P_{v'x'}$ must intersect $P_{vx}$; thus $v\in P_{v'x'}$. Next,
Spoiler selects the $P_{vx}$-neighbor $y$ of $v$; Duplicator's reply
must be $y'\in P_{v'x'}$.
Let $z\in T$ maximize $\muathrm{dist}(v,z)$ on the condition that
${\tauextrm{Ch}}(z)={\tauextrm{Ch}}(v)$ and $v$ lies between $y$ and $z$ in $T$. Define the
analogous vertex $z'$, replacing $v,y$ in the definition by
$v',y'$. We have $\muathrm{dist}(v,z)>\muathrm{dist}(v',z')$. Let Spoiler select
$w=z$. If Duplicator's reply $w'$ satisfies ${\tauextrm{Ch}}(w')\nuot\cong
{\tauextrm{Ch}}(w)$, then Spoiler quickly wins. Otherwise, $\muathrm{dist}(v,w)>
\muathrm{dist}(v',w')$. Moreover, $\muathrm{dist}(v,w)\lambdae 2r_0$ (because their
$r_0$-checkbooks are non-empty and equal). Spoiler wins in $\lambdaog_2
r_0+O(1)$ extra moves. The claim has been proved.\epsilonsiloncpf
Similarly to the argument surrounding~\eqref{Phi}, one can argue that
for every vertex $x\in K$ there is a formula $\Phi_x(v)$ of rank
$O(\ln \ln n)$ identifying $x$ (with respect to $T$). Moreover, we can
assume that this gives us an isomorphism $\phi:K\to K'$ which is
respected by Duplicator.
As $T\not\cong T'$, there are two cases to consider.
\case1 There is $x\in K$ such that $T_x\not\cong T'_{x'}$, where
$x'=\phi(x)$ and $T_{x'}'$ is the component of $T'-(Y' \setminus\{x'\})$
rooted at $x'$.\medskip
Since each vertex of $T$ is within distance at most $r_0$ from some
yuppie, the tree $T_x$ has height at most $r_0$. If $T'_{x'}$ has a
path of length greater than $2r_0$ or a cycle, then Spoiler easily
wins, so assume that $T'_{x'}$ is a tree. Now Spoiler should select all
vertices of $T_x$ which are of degree larger than $(\ln\ln n)^5$, say
$w_1,\dots,w_t$. Since $T$ is typical, there are at most $\ln n/(\ln\ln
n)^2$ such vertices in $T_x$. Suppose that, in response to that,
Duplicator chooses vertices $w'_1,\dots,w'_t$ in $T'_{x'}$. Then
$T_x\setminus \{w_1,\dots,w_t\}$ splits into a number of trees $F_1,
\dots, F_u$, colored according to their adjacencies to the
$w_i$'s. Now, for some $i$ the multisets of colored trees adjacent to
$w_i$ and $w_i'$ are different. Spoiler can highlight this by using at
most $\Delta(T)+1$ moves. Now Spoiler plays inside some $F_i$ the
strategy of Theorem~\ref{th:MaxDeg}. Note that $F_i$ has diameter
at most $2r_0$ and maximum degree at most $(\ln\ln n)^5$.\medskip
\case2 $T'$ is not connected.\medskip
As $K'\cong K$ is connected, there is a component $C'$ of $T'$ without
a yuppie. Spoiler chooses an $x'\in C'$. Now, any reply $x$ of
Duplicator is within distance $r_0$ from a yuppie, which is not true for
$x'$. Spoiler can win in $O(\ln \ln n)$ moves.\medskip
Consequently, for a typical tree $T$,
$$
D(T)\le \Delta(T)+\frac{\ln n}{(\ln\ln n)^2}+O((\ln\ln n)^6)\,,
$$
and the assertion follows.
\epf
\noindent {\it Proof of Theorem~\ref{th:RandomTree}.}
Theorem~\ref{th:RandomTree} is an immediate consequence
of Theorems~\ref{th:lower} and~\ref{th:upper} and the fact that,
due to Lemmas~\ref{lm:forest} and~\ref{lm:largedegrees},
whp a random tree $T\in {\cal T}(n)$ is typical.
\qed
\section{Restricting Alternations}
If Spoiler can win the Ehrenfeucht game, alternating between the
graphs $G$ and $G'$ at most $r$ times, then the corresponding sentence
has the \emph{alternation number} at most $r$, that is, any chain of
nested quantifiers has at most $r$ changes between $\exists$ and
$\forall$. (To make this well-defined, we assume that no quantifier is
within the range of a negation sign.) Let $D_r(G)$ be the smallest
depth of a sentence which defines $G$ and has the alternation number
at most $r$. It is not hard to see that $D_r(G)=\max\{D_r(G,G'):
G'\not\cong G\}$, where $D_r(G,G')$ may be defined as the smallest $k$
such that Spoiler can win $\ehr_k(G,G')$ with at most $r$
alternations. For small $r$, this is a considerable restriction on the
structure of the corresponding formulas, so let us investigate the
alternation number given by our strategies.
Let $D^{\mathrm{tree}}_r(n,l)$ be the maximum of $D_r(T)$ over all colored trees of
order at most $n$ and maximum degree at most $l$.
Unfortunately, in Theorem~\ref{th:MaxDeg} we have hardly any control
on the number of alternations. However, we can show that alternation
number $0$ suffices if we are happy to increase the upper bound by a
factor of $2$.
\begin{lemma}\label{lem:treetree}
Let $T$ and $T'$ be colored trees. Suppose that $T\not\cong T'$,
where $\cong$ stands for the isomorphism relation for colored trees, i.e.,
the underlying (uncolored) trees of $T$ and $T'$ may be isomorphic.
Furthermore, assume that $v(T)\ge v(T')$ and denote $n=v(T)$.
Assume also that $\Delta(T)\le l$ and let
both $l$ and $\ln n/\ln l$ tend to infinity.
Then Spoiler can win the Ehrenfeucht game on $(T,T')$ in at most
\beq{D1}
(1+o(1)) \frac{l \ln n}{\ln l}
\eeq
moves, playing all the time in~$T$.
\end{lemma}
\bpf
In the first move Spoiler selects a median $x\in T$; let
$x'$ be Duplicator's reply.
If $d(x)>d(x')$, then
Spoiler wins in extra $l$ moves, which is negligible when compared
to~(\ref{eq:D1}). So, suppose that $d(x')\ge d(x)$.
Let $t=d(x)$ and $C_1,\dots,C_t$ be the (rooted) components of $T-x$
indexed so that $v(C_1)\ge v(C_2)\ge\ldots\ge v(C_t)$. Referring to
the root of a component we mean the vertex of it which is adjacent to
$x$. Spoiler starts selecting, one by one, the roots of
$C_1,C_2,\ldots$. Duplicator is forced to respond with roots of
distinct components of $T'-x'$. Spoiler keeps doing so until the
following situation occurs: he selects the root $y$ of a component
$C=C_i$ while Duplicator selects the root $y'$ of a component $C'$
such that $v(C)\ge v(C')$ and $C\not\cong C'$ (as rooted trees). Such
a situation really must occur for some $i\le t$ due to the conditions
that $v(T)\ge v(T')$, $d(x)\le d(x')$, and $T\not\cong T'$.
We claim that if Spoiler selects a vertex $z$ inside $C$, then
Duplicator must reply with some $z'\in C'$, for otherwise Spoiler wins
in at most $\log_2 n$ moves. Indeed, suppose $z'\not\in C'$. Spoiler
selects $z_1$ which is a middle point of the $yz$-path. Whatever the
reply $z_1'$ is, the $z'z_1'$-path or the $z_1'y'$-path contains the vertex
$x'$. Suppose it is the $z'z_1'$-path. Then Spoiler halves the
$zz_1$-path. In at most $\log_2 n$ steps he wins.
Thus making $i+1\le t+1\le l+1$ steps, we have reduced the game to two
non-isomorphic (rooted) trees, $C$ and $C'$, with $v(C)\le
\min(\frac1i,\frac12)\, v(T)$. In the game on $(C,C')$ Spoiler
applies the same strategy recursively. Two ending conditions are
possible: the root of $C$ has strictly larger degree than the root of
$C'$, or Duplicator violates a color, the adjacency, or the equality
relation. It is easy to argue, cf.\ the proof of
Theorem~\ref{th:MaxDeg}, that the worst case for us is when we have
$i=(1+o(1))\, l$ all the time, which gives the required
bound~(\ref{eq:D1}).
\epf
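Let us add a rough accounting of where the bound comes from (a sketch added here for the reader; it is not part of the original argument). A round in which Spoiler spends at most $i+1\le l+1$ moves shrinks the current tree by a factor of at least $\max(i,2)$. Hence, if the successive values are $i_1,i_2,\dots$, the game is over once $\prod_j\max(i_j,2)$ exceeds $n$, and the total number of moves is at most
$$
\max\Big\{\sum_j (i_j+1)\ :\ \prod_j \max(i_j,2)\le 2n,\ \ 1\le i_j\le l\Big\}.
$$
For large $l$ the ratio $(i+1)/\ln\max(i,2)$ is largest at $i=l$, so each unit of the logarithmic budget $\ln(2n)$ is spent most expensively when $i_j\approx l$; this gives at most $(1+o(1))\,(l+1)\ln n/\ln l=(1+o(1))\,l\ln n/\ln l$ moves, in agreement with~(\ref{eq:D1}).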
\bth{DT0} Let both $l$ and $\ln n/\ln l$ tend to
infinity. Then
\beq{}
D^{\mathrm{tree}}_0(n,l)\le (1+o(1)) \frac{l \ln n}{\ln l}.
\eeq
\end{theorem}
\bpf
Let $T$ be a tree of order $n$ and maximum degree at most $l$
and let $G\not\cong T$.
If $\Delta(T)\ne\Delta(G)$ then Spoiler wins the Ehrenfeucht game on
$(T,G)$ in at most $l+2$ moves,
playing in the graph of the larger degree. We will therefore assume that
$T$ and $G$ have the same maximum degree not exceeding~$l$.
\case1 $G$ contains a cycle of length no more than $n+1$.\smallskip
Spoiler plays in $G$ proceeding as in the last paragraph of the proof
of Lemma~\ref{lm:Tree}.
\case2 $G$ is connected and has no cycle of length up to $n+1$.
If $v(G)\le n$, then $G$ must be a tree. Lemma~\ref{lem:treetree}
applies. Let us assume $v(G)>n$. Let $A$ be a set of $n+1$ vertices
spanning a connected subgraph in $G$. This subgraph must be a
tree. Spoiler plays in $G$ staying all the time within
$A$. Lemma~\ref{lem:treetree} applies.
\case3 $G$ is disconnected and has no cycle of length up to $n+1$.
We can assume that every component $H$ of $G$ is a tree, for otherwise
Spoiler plays the game on $(T,H)$ staying in $H$, using the strategy
described above.
Suppose first that $G$ has a tree component $H$ such that $H\not\cong
T$ and $v(H)\ge n$. If $v(H)=n$, let $T'=H$. Otherwise let $T'$ be a
subtree of $H$ on $n+1$ vertices. Spoiler plays the game on $(T,T')$
staying in $T'$ and applying the strategy of Lemma~\ref{lem:treetree}
(with $T$ and $T'$ interchanged and perhaps with $n+1$ in place
of~$n$).
Suppose next that all components of $G$ are trees of order less
than~$n$. In the first move Spoiler selects a median $x$ of $T$. Let
Duplicator respond with a vertex $x'$ in a component $T'$ of $G$. If
in the sequel Duplicator makes a move outside $T'$, then Spoiler wins
by Lemma~\ref{lm:path}. As long as Duplicator stays in $T'$, Spoiler
follows the strategy of Lemma~\ref{lem:treetree}.
Finally, it remains to consider the case that $G$ has a component $T'$
isomorphic to~$T$. Spoiler plays in $G$. In the first move he
selects a vertex $x'$ outside $T'$. Let $x$ denote Duplicator's
response in $T$. Starting from the second move Spoiler plays the game
on $(T,T')$ according to Lemma~\ref{lem:treetree}, where $x$ is
considered colored in a color absent in $T'$.
Our description of Spoiler's strategy is complete.\epf
It is not clear what the asymptotics of $D^{\mathrm{tree}}_0(n,l)$ is. We could not
even rule out the possibility that $D^{\mathrm{tree}}_0(n,l)=(\frac12+o(1))\,
\frac{l\ln n}{\ln l}$.
\comment{Also, it would be interesting to know $D^{\mathrm{tree}}_i$
for other small $i$, such as $i=1$ or $i=2$.}
A similar method shows that $D^{\mathrm{tree}}_0(n,l)=\Theta(\ln n)$ if $l\ge 2$
is constant and $D^{\mathrm{tree}}_0(n,l)=\Theta(l)$ if $\frac{\ln n}{\ln l}=O(1)$,
but the exact asymptotics seems difficult to compute.
Using these results, one can show that the upper bounds in
Theorems~\ref{th:RandomTree} and~\ref{th:giant} apply to $D_1(G)$,
that is, there are strategies for Spoiler requiring at most one
alternation. It is not clear whether $0$ alternations suffice
here. One of the few places that seem to require an alternation is
establishing that $\phi$ is a bijection: Spoiler may be forced to
start in one of the graphs, while later (for example, when showing that
$T_x\not\cong T'_{x'}$) he may need to swap graphs.
\end{document}
\begin{document}
\title[Seven-game series vs. five-game series]{Are seven-game baseball playoffs fairer than five-game series when home-field advantage is considered?}
\author{Brian Dean}
\address{Department of Mathematics and Computer Science\\
Salisbury University\\
Salisbury, MD 21801}
\email{bjdean@salisbury.edu}
\date{}
\begin{abstract}
Conventional wisdom in baseball circles holds that a seven-game playoff series is fairer than a five-game series. In an earlier paper, E. Lee May, Jr. showed that, treating each game as an independent event, a seven-game series is not significantly fairer. In this paper, we take a different approach, taking home-field advantage into account. That is, we consider a given series to consist of two disjoint sets of independent events---the home games and the road games. We will take the probability of winning a given road game to be different from the probability of winning a given home game. Our analysis again shows that a seven-game series is not significantly fairer.
\end{abstract}
\maketitle
\section{Introduction}\label{intro}
It is often said in baseball that a seven-game playoff series is fairer than a five-game series. The argument is that, in a five-game series, the team without home-field advantage need only win its two home games, and take just one out of three on the road, in order to win the series. On the other hand, to win a seven-game series, the team would have to either win all three home games and one of four on the road, or win at least two out of four on the road.
Analyzing this question is a useful exercise in mathematical modeling and probability. In \cite{Ma}, E. Lee May, Jr. showed that a seven-game series is not significantly fairer. (By \textit{significantly fairer}, we mean that there is at least a four percent greater probability of winning the seven-game series than winning the five-game series.) May approached the problem as follows: he let $p$ be the probability that the better team would win a given game in the series, and treated each game equally as an independent event without regard to where the game was being played.
In this paper, we will examine the same problem while attempting to account for home-field advantage. From now on, $p$ will represent the probability that the team with home-field advantage in the series will win a given home game. The probability that that team will win a given road game will be $rp$, where $r$, the \textit{road multiplier}, will be discussed in Section~\ref{roadmultiplier}. Each home game will be treated as an independent event, and each road game will be treated as an independent event.
Since May approached the problem from the point of view of the better team, he necessarily had $p\in [0.5,1]$. In this paper, where we approach the problem from the point of view of the team with home-field advantage, that will still be the case most of the time---in the Division Series and League Championship Series, home-field advantage goes to the better team. However, in the World Series, this is not always the case. Home-field advantage in the World Series alternated between the American and National Leagues through 2002; since 2003, it has been given to the champion of the league which had won that year's All-Star Game. Still, in most cases, if a team is good enough to reach the World Series, then the probability that it will win a given home game is still likely to be at least 0.5, regardless of the opposition. Nevertheless, it is possible that $p$ could be below 0.5, so we will only require $p\in [0,1]$. Practically speaking, it seems unlikely that $p$ would ever be below, say, 0.4, but we will not require that to be the case.
\section{The Road Multiplier}\label{roadmultiplier}
As discussed in the Introduction, we will take the probability that the team with home-field advantage will win a given road game to be $rp$, where $r$ is a fixed number which we will call the \textit{road multiplier}. For an individual team, the road multiplier is obtained by dividing the team's road winning percentage by its home winning percentage, i.e.,
$$\mbox{road multiplier}\,\, =\frac{\frac{RW}{RW+RL}}{\frac{HW}{HW+HL}}$$
where $RW$, $RL$, $HW$, and $HL$, are the number of the team's road wins, road losses, home wins, and home losses, respectively, in that season.
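As a quick illustration (our own sketch, not part of May's analysis or of the tabulation in this paper), the computation is a one-liner; the function name below is hypothetical, and the check uses the 2001 Braves record that appears in the first table of this section.
\begin{verbatim}
# A minimal sketch (not from the paper): the road multiplier of a team
# is its road winning percentage divided by its home winning percentage.
def road_multiplier(hw, hl, rw, rl):
    return (rw / (rw + rl)) / (hw / (hw + hl))

# 2001 Braves: 40-41 at home, 48-33 on the road (see the table below)
print(road_multiplier(40, 41, 48, 33))   # prints 1.2
\end{verbatim}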
Our value $r$ will be the average of the road multipliers of the 96 teams which have made the playoffs in the wildcard era (1995-2006). This ends up giving us (to 9 decimal places)
$$r=0.894762228,$$
that is, we will consider the team with home-field advantage to be about 89.5 percent as likely to win a given road game as they are to win a given home game.
We will not list the results for all 96 teams here. However, we will make a few comments.
The five highest and five lowest road multipliers of the 96 are as follows:
\begin{tabular}{lccc}
Team & Home Record & Road Record & Road Multiplier \\
\hline
2001 Braves & 40-41 & 48-33 & 1.2 \\
1997 Orioles & 46-35 & 52-29 & 1.130434783 \\
2001 Astros & 44-37 & 49-32 & 1.113636364 \\
2005 White Sox & 47-34 & 52-29 & 1.106382979 \\
2006 Tigers & 46-35 & 49-32 & 1.065217391 \\
(tie) 2000 White Sox & 46-35 & 49-32 & 1.065217391 \\
\hline
2000 Mets & 55-26 & 39-42 & 0.709090909 \\
2005 Braves & 53-28 & 37-44 & 0.698113208 \\
2006 Cardinals & 49-31 & 34-47 & 0.685311162 \\
2003 Athletics & 57-24 & 39-42 & 0.684210526 \\
2005 Astros & 53-28 & 36-45 & 0.679245283
\end{tabular}
Of the 96 teams, 23 of them had road multipliers of 1 or higher (meaning that about a quarter of the teams did at least as well on the road as they did at home), while 12 of the teams had road multipliers of 0.75 or below. 12 of the 16 highest road multipliers belong to American League teams, while 11 of the 16 lowest road multipliers belong to National League teams. The road multipliers for the 12 World Series champions of the wildcard era, from highest to lowest, are as follows:
\begin{tabular}{lccc}
Team & Home Record & Road Record & Road Multiplier \\
\hline
2005 White Sox & 47-34 & 52-29 & 1.106382979 \\
1995 Braves & 44-28 & 46-26 & 1.045454545 \\
1999 Yankees & 48-33 & 50-31 & 1.041666667 \\
2000 Yankees & 44-35 & 43-39 & 0.941518847 \\
2001 Diamondbacks & 48-33 & 44-37 & 0.916666667 \\
1996 Yankees & 49-31 & 43-39 & 0.856147337 \\
1998 Yankees & 62-19 & 52-29 & 0.838709677 \\
2002 Angels & 54-27 & 45-36 & 0.833333333 \\
2004 Red Sox & 55-26 & 43-38 & 0.781818182 \\
1997 Marlins & 52-29 & 40-41 & 0.769230769 \\
2003 Marlins & 53-28 & 38-43 & 0.716981131 \\
2006 Cardinals & 49-31 & 34-47 & 0.685311162
\end{tabular}
\section{Comparing Three-Game Series and Five-Game Series}\label{threeversusfive}
Before comparing seven-game series and five-game series, we will first look at five-game series versus three-game series, as that case is a bit easier to dive right into. Throughout the next two sections, we will use the following notation: we will use capital letters (W and L) to denote games in which the team with home-field advantage wins and loses at home, and lowercase letters (w and l) to denote games in which that team wins and loses on the road. Thus, each instance of W will have probability $p$, each L will have probability $1-p$, each w will have probability $rp$, and each l will have probability $1-rp$.
\subsection{Three-game series}
There have never been three-game playoff series in baseball, except to break ties (most notably the playoff between the New York Giants and Brooklyn Dodgers following the 1951 season). However, if there were, they would likely be in one of two formats---either a 1-1-1 format (in which the team with home-field advantage plays games one and three at home, and game two on the road), or a 1-2 format (in which they play game one on the road and games two and three at home).
The scenarios for that team to win the series, in a 1-1-1 format, are as follows.
\begin{tabular}{lc}
Scenario & Probability \\
\hline
Ww & $p(rp)$ \\
WlW & $p^2(1-rp)$ \\
LwW & $p(rp)(1-p)$
\end{tabular}
Adding these probabilities, we see that the total probability that the team with home-field advantage will win the series, in a 1-1-1 format, is $(2r+1)p^2-2rp^3$.
The following are the corresponding scenarios if the series were played in a 1-2 format.
\begin{tabular}{lc}
Scenario & Probability \\
\hline
wW & $p(rp)$ \\
wLW & $p(rp)(1-p)$ \\
lWW & $p^2(1-rp)$
\end{tabular}
Again, the total probability of victory in this format is $(2r+1)p^2-2rp^3$. So, the probability that the team with home-field advantage will win a three-game series is the same in either format.
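These by-hand enumerations can also be checked symbolically. The following sketch (ours; the helper name and the use of sympy are assumptions, not anything appearing in the paper) sums over complete three-game outcomes, which yields the same series-win probability as stopping once the series is decided, and it reproduces $(2r+1)p^2-2rp^3$ for both formats.
\begin{verbatim}
# A verification sketch (assumptions: sympy is available; the helper
# name series_win_probability is ours, not the paper's).
from itertools import product
import sympy as sp

p, r = sp.symbols('p r')

def series_win_probability(schedule):
    # schedule: 'H'/'R' site of each game, from the viewpoint of the
    # team with home-field advantage.  Playing every game out does not
    # change the series-win probability, so we sum complete outcomes.
    need = len(schedule) // 2 + 1
    total = sp.Integer(0)
    for outcome in product((0, 1), repeat=len(schedule)):   # 1 = win
        prob = sp.Integer(1)
        for site, won in zip(schedule, outcome):
            q = p if site == 'H' else r * p
            prob *= q if won else 1 - q
        if sum(outcome) >= need:
            total += prob
    return sp.expand(total)

print(series_win_probability('HRH'))   # 1-1-1 format
print(series_win_probability('RHH'))   # 1-2 format: the same polynomial
\end{verbatim}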
\subsection{Five-game series}
Major League Baseball employed five-game playoff series for the League Championship Series from 1969-1984. (Prior to 1969, the playoffs consisted solely of the teams with the best records in each league meeting in the World Series.) Since 1985, the League Championship Series have been in a best-of-seven format. However, five-game series returned with the advent of the wildcard system; since 1995, each league has had two five-game Division Series, with the winners advancing to the seven-game League Championship Series.
Two formats for best-of-five series have been used over the years: a 2-3 format (in which the team with home-field advantage plays the first two games on the road and the final three games at home), and a 2-2-1 format (in which that team plays games one, two, and five at home, and games three and four on the road). We will examine each format separately; as with the two formats for three-game series, we will see that the probability that the team with home-field advantage will win the series is independent of the format.
First, we examine the scenarios in which the team with home-field advantage will win the series, if the series is in a 2-3 format.
\begin{tabular}{lc}
Scenario & Probability \\
\hline
wwW & $p(rp)^2$ \\
lwWW & $p^2(rp)(1-rp)$ \\
wlWW & $p^2(rp)(1-rp)$ \\
wwLW & $p(rp)^2(1-p)$ \\
llWWW & $p^3(1-rp)^2$ \\
lwLWW & $p^2(rp)(1-p)(1-rp)$ \\
lwWLW & $p^2(rp)(1-p)(1-rp)$ \\
wlLWW & $p^2(rp)(1-p)(1-rp)$ \\
wlWLW & $p^2(rp)(1-p)(1-rp)$ \\
wwLLW & $p(rp)^2(1-p)^2$
\end{tabular}
Summing these, we see that the total probability that the team with home-field advantage will win the series, in a 2-3 format, is
$$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$
Next, we look at the corresponding scenarios for a 2-2-1 format.
\begin{tabular}{lc}
Scenario & Probability \\
\hline
WWw & $p^2(rp)$ \\
LWww & $p(rp)^2(1-p)$ \\
WLww & $p(rp)^2(1-p)$ \\
WWlw & $p^2(rp)(1-rp)$ \\
LLwwW & $p(rp)^2(1-p)^2$ \\
LWlwW & $p^2(rp)(1-p)(1-rp)$ \\
LWwlW & $p^2(rp)(1-p)(1-rp)$ \\
WLlwW & $p^2(rp)(1-p)(1-rp)$ \\
WLwlW & $p^2(rp)(1-p)(1-rp)$ \\
WWllW & $p^3(1-rp)^2$
\end{tabular}
Again, if we add these, we see that the total probability of victory is
$$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$
and so the probability that the team with home-field advantage will win a five-game series is the same in either format.
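(For readers following the enumeration sketch in Section~\ref{threeversusfive}: calling the hypothetical \verb|series_win_probability| helper with the schedules \verb|'RRHHH'| for the 2-3 format and \verb|'HHRRH'| for the 2-2-1 format reproduces this same quintic.)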
\subsection{Comparing the two}
To find the difference in probabilities in winning a five-game series and a three-game series, we just subtract the two: the probability of winning a five-game series, minus the probability of winning a three-game series, is the function
$$f(p)=6r^2p^5-(9r^2+6r)p^4+(3r^2+8r+1)p^3-(2r+1)p^2,\;\;p\in [0,1]$$
We will find the extreme values of $f$ using the Extreme Value Theorem.
The derivative of $f$ is
$$f'(p)=30r^2p^4-(36r^2+24r)p^3+(9r^2+24r+3)p^2-(4r+2)p;$$
keeping in mind that $r=0.894762228$, the derivative is 0 for
$$p=0$$
$$p\approx 0.294269665$$
$$p\approx 0.756820873$$
and for a value of $p$ between 1 and 2 (as can be verified using the Intermediate Value Theorem).
Checking the values of $f$ at the critical points and the endpoints, we get
$$f(0)=0$$
$$f(0.294269665)\approx -0.056156576$$
$$f(0.756820873)\approx 0.047338476$$
$$f(1)=0$$
So, a five-game series is at most about 4.73\% fairer than a three-game series, and at worst about 5.62\% less fair. However, as mentioned in the Introduction, it is extremely unlikely that $p$ would ever be as low as $0.294$. If we look at the value of $f$ at a more realistic lower bound for $p$, we get
$$f(0.4)=-0.0431953192$$
and so a five-game series is about 4.32\% less fair than a three-game series for that value of $p$. In summary, there does appear to be a significant difference between three-game and five-game series for certain values of $p$.
The value of $p$ in $[0,1]$ for which $f(p)=0$ is approximately 0.537783; for $p$ less than that, three-game series are fairer (or, put another way, five-game series are fairer for the team without home-field advantage), while for $p$ greater than that, five-game series are fairer.
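The critical-point computation above is easy to reproduce numerically. The following sketch (our own, assuming numpy is available; it is not part of the paper) evaluates $f$ at the real critical points in $[0,1]$ and at the endpoints.
\begin{verbatim}
# A numerical check of the Extreme Value Theorem computation (a sketch;
# the printed values should match the text up to rounding).
import numpy as np

r = 0.894762228

def f(p):
    return (6*r**2*p**5 - (9*r**2 + 6*r)*p**4
            + (3*r**2 + 8*r + 1)*p**3 - (2*r + 1)*p**2)

# coefficients of f'(p), highest degree first
fp = [30*r**2, -(36*r**2 + 24*r), 9*r**2 + 24*r + 3, -(4*r + 2), 0.0]
crit = [z.real for z in np.roots(fp) if abs(z.imag) < 1e-9 and 0 <= z.real <= 1]
for p0 in sorted(crit) + [1.0]:
    print(round(p0, 6), round(f(p0), 6))
# expected: minimum near f(0.294270) = -0.056157, maximum near f(0.756821) = 0.047338
\end{verbatim}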
\section{Comparing Five-Game Series and Seven-Game Series}\label{fiveversusseven}
We are now ready to examine the question of interest to us, comparing a five-game series and a seven-game series. We have already shown that the probability that the team with home-field advantage will win a five-game series, regardless of format, is
$$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$
\subsection{Seven-game series}
A seven-game series in baseball is played under a 2-3-2 format---the team with home-field advantage plays games one, two, six, and seven at home, and the middle three games on the road. There are a total of 35 possible scenarios for victory, so we will not list each separately. However, we will list each scenario lasting four, five, or six games.
\begin{tabular}{lc}
Scenario & Probability \\
\hline
WWww & $p^2(rp)^2$ \\
LWwww & $p(rp)^3(1-p)$ \\
WLwww & $p(rp)^3(1-p)$ \\
WWlww & $p^2(rp)^2(1-rp)$ \\
WWwlw & $p^2(rp)^2(1-rp)$ \\
LLwwwW & $p(rp)^3(1-p)^2$ \\
LWlwwW & $p^2(rp)^2(1-p)(1-rp)$ \\
LWwlwW & $p^2(rp)^2(1-p)(1-rp)$ \\
LWwwlW & $p^2(rp)^2(1-p)(1-rp)$ \\
WLlwwW & $p^2(rp)^2(1-p)(1-rp)$ \\
WLwlwW & $p^2(rp)^2(1-p)(1-rp)$ \\
WLwwlW & $p^2(rp)^2(1-p)(1-rp)$ \\
WWllwW & $p^3(rp)(1-rp)^2$ \\
WWlwlW & $p^3(rp)(1-rp)^2$ \\
WWwllW & $p^3(rp)(1-rp)^2$
\end{tabular}
There are a total of 20 scenarios for victory which last the full seven games. Rather than list each one separately, we will just list the various combinations of W, L, w, and l, give the probability of each occurrence, and give the number of ways each scenario occurs. For example, occurrences of the first type include LLlwwWW, LWwlwLW, and WLwwlLW.
\begin{tabular}{lcc}
Scenario & Probability & Occurrences \\
\hline
2 W, 2 w, 2 L, 1 l & $p^2(rp)^2(1-p)^2(1-rp)$ & 9 \\
3 W, 1 w, 1 L, 2 l & $p^3(rp)(1-p)(1-rp)^2$ & 9 \\
1 W, 3 w, 3 L, 0 l & $p(rp)^3(1-p)^3$ & 1 \\
4 W, 0 w, 0 L, 3 l & $p^4(1-rp)^3$ & 1
\end{tabular}
Adding together all of the probabilities for the 35 victory scenarios, we see that the total probability that the team with home-field advantage will win a seven-game series is
$$(4r^3+18r^2+12r+1)p^4-(24r^3+48r^2+12r)p^5+(40r^3+30r^2)p^6-20r^3p^7$$
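(The hypothetical enumeration helper sketched in Section~\ref{threeversusfive}, called with the 2-3-2 schedule \verb|'HHRRRHH'|, reproduces this polynomial; setting $r=1$ recovers the familiar equal-probability expression $35p^4-84p^5+70p^6-20p^7$.)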
\subsection{Comparing the two}
If we take the probability of winning a seven-game series, and subtract the probability of winning a five-game series, we get the function
\begin{eqnarray*}
s(p) &=& -20r^3p^7+(40r^3+30r^2)p^6-(24r^3+54r^2+12r)p^5 \\
& & \,\, +(4r^3+27r^2+18r+1)p^4-(3r^2+6r+1)p^3
\end{eqnarray*}
where $p\in [0,1]$.
The derivative of this function is
\begin{eqnarray*}
s'(p) &=& -140r^3p^6 + (240r^3+180r^2)p^5-(120r^3+270r^2+60r)p^4 \\
& & \,\, +(16r^3+108r^2+72r+4)p^3-(9r^2+18r+3)p^2
\end{eqnarray*}
Again using the fact that we are taking $r=0.894762228$, the derivative $s'$ is 0 for
$$p=0$$
$$p\approx 0.329786090$$
$$p\approx 0.723663130$$
and for a value of $p$ between 1 and 1.05, and a value of $p$ between 1.05 and 1.1. (These last two can be verified using the Intermediate Value Theorem.)
Checking the values of $s$ at the critical points and the endpoints, we get
$$s(0)=0$$
$$s(0.329786090)\approx -0.038565024$$
$$s(0.723663130)\approx 0.034221072$$
$$s(1)=0$$
So, a seven-game series is at most about 3.42\% fairer than a five-game series, and at worst about 3.86\% less fair (and that occurs for a value of $p$ which is likely too small to occur in practice). Therefore, there is no significant difference between a five-game series and a seven-game series.
The value of $p$ in $[0,1]$ for which $s(p)=0$ is approximately 0.533711; for $p$ less than that, five-game series are fairer (i.e., seven-game series are fairer for the team without home-field advantage), while for $p$ greater than that, seven-game series are fairer.
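As with $f$, these numbers are straightforward to confirm numerically; the sketch below (ours, assuming scipy is available, and not part of the paper) evaluates $s$ at the two interior critical points and locates the crossover.
\begin{verbatim}
# A numerical confirmation sketch for s(p).
from scipy.optimize import brentq

r = 0.894762228

def s(p):
    return (-20*r**3*p**7 + (40*r**3 + 30*r**2)*p**6
            - (24*r**3 + 54*r**2 + 12*r)*p**5
            + (4*r**3 + 27*r**2 + 18*r + 1)*p**4
            - (3*r**2 + 6*r + 1)*p**3)

print(s(0.329786090), s(0.723663130))   # about -0.0386 and +0.0342
print(brentq(s, 0.4, 0.7))              # crossover, about 0.5337
\end{verbatim}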
\section{Further Questions}\label{furtherquestions}
There are a few ways in which this model could be amended. First, instead of finding a fixed value of $r$ for the road multiplier, we could keep $r$ as a variable (with appropriate upper and lower bounds for $r$), and then treat the functions $f$ and $s$ as functions of two variables.
Another approach would be to account for morale. In \cite{Re}, S. Reske approaches the problem as May did in \cite{Ma}---that is, with $p$ representing the probability that the better team would win a given game, without regard to home-field advantage. However, if the better team has a lead in the series, then its probability of winning the next game would be $p+a$, while if it trails in the series, then its probability of winning the next game would be $p-a$, where $a$ may be either positive or negative. The idea is that, if the team leads the series, its increase in morale (and subsequent decrease in the other team's morale) could actually make it more likely to win the next game, and vice versa if it trails the series. In that case, $a>0$. The case $a<0$ would correspond to what happens if the team leads the series but then gets overconfident, making it less likely to win the next game. With this approach, Reske again shows that there is no significant difference between a five-game series and a seven-game series. This could be easily adapted to account for home-field advantage, with the fixed value of $r$ we used in this paper: if the team with home-field advantage leads the series, and the next game is at home, its probability of winning would be $p+a$, while if the next game were on the road, it would be $r(p+a)$; similarly if the team with home-field advantage trails the series, its probability of winning the next game would be $p-a$ if at home, and $r(p-a)$ if on the road. This would again be a two-variable problem, with variables $p$ and $a$. If we do not require $r$ to be fixed, then it would become a three-variable problem.
A final approach could be one of cumulative morale. That is, if the team with home-field advantage leads the series by one game, then its probability of winning the next game would be $p+a$ or $r(p+a)$, if it leads the series by two games, its probability of winning the next game would be $p+2a$ or $r(p+2a)$, and so forth. The idea here would be that, the further ahead the team is, the greater its morale would get (if $a>0$), or the more overconfident it would get (if $a<0$).
\end{document}
\begin{document}
\title{An Easier-To-Align Hong-Ou-Mandel Interference Demonstration}
\author{Nicholas S. DiBrita}
\altaffiliation[Present Address: ]{Department of Physics and Astronomy, Rice University, Houston, TX 77251, USA.}
\author{Enrique J. Galvez}
\email{egalvez@colgate.edu}
\affiliation{Department of Physics, Colgate University, Hamilton, NY 13346, USA}
\date{\today}
\begin{abstract}
The Hong-Ou-Mandel interference experiment is a fundamental demonstration of nonclassical interference and a basis for many investigations of quantum information. This experiment involves the interference of two photons reaching a symmetric beamsplitter. When the photons are made indistinguishable in all possible ways, an interference of quantum amplitudes results in both photons always leaving the same beamsplitter output port. Thus, a scan of a distinguishing parameter, such as the arrival time difference of the photons reaching the beamsplitter, produces a dip in the coincidences measured at the outputs of the beamsplitter. The main challenge for its implementation as an undergraduate laboratory is the alignment of the photon paths at the beamsplitter. We overcome this difficulty by using a pre-aligned commercial fiber-coupled beamsplitter. In addition, we use waveplates to vary the distinguishability of the photons by their state of polarization. We present a theoretical
description at the introductory quantum mechanics level of the two types of experiments, plus a discussion of the apparatus alignment and a list of parts needed.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
In 1987 C.K. Hong, Z.Y. Ou, and L. Mandel reported on one of the most consequential experiments in quantum optics.\cite{HOM} It is an experiment that demonstrates the ensuing quantum interference of two properly prepared photons after each arrives separately at an adjacent input
port of a symmetric beamsplitter. When all of the properties of the two photons are identical, a striking phenomenon appears: the two photons always exit together at the same output
port of the beamsplitter and never exit at separate output
ports. This effect is a purely nonclassical phenomenon. The proper way to understand it is from a quantum-mechanical perspective, where the amplitudes for the various possibilities interfere. This result mimics a form of interaction between photons, but one that is solely due to quantum effects, similar to the exchange interaction of electrons in atoms. This quantum interaction has been used for a number of purposes,\cite{Bouchard20} such as entanglement,\cite{Ou88,Shih88} entanglement swapping,\cite{Pan98} teleportation,\cite{Bouwmeester97} implementation of CNOT gates,\cite{OBrien03} and ultimately, quantum computing with photons.\cite{kokRMP07}
The essence of the Hong-Ou-Mandel (HOM) interference phenomenon is shown in Fig.~\ref{fig:HOM}. When two photons arrive separately at adjacent input
ports of a beamsplitter, there are four possible outcomes. Either the two photons exit together out of the same output
port in one of two possible ways, as shown in Figs.~\ref{fig:HOM}(a) and \ref{fig:HOM}(b), or they exit out of separate
ports in one of two possible ways, as shown in Figs.~\ref{fig:HOM}(c) and \ref{fig:HOM}(d). Following Feynman,\cite{Feynman} consider the event when both photons exit out of separate output
ports of the beamsplitter. If the photons are indistinguishable, the probability for the event is the square of the sum of the probability amplitudes for each possibility considered separately. If the possibilities are distinguishable, then the probability of the event is the sum of the probabilities of the possibilities.
\begin{figure}
\caption{Schematic of the four possible paths of two photons, each incident separately on adjacent
input ports of a beamsplitter.}
\label{fig:HOM}
\end{figure}
Now assume the beamsplitter to be a {\em symmetric} one, i.e., with equal probabilities to transmit and reflect light, and equal amplitudes for reflection and transmission from either side of the beamsplitter. It is common to call the probability amplitudes for transmission and reflection $t$ and $r$, respectively. The absolute value for both $t$ and $r$ has to be $1/\sqrt{2}$, so that the probability of transmission and reflection is $1/2$ in each case. However, to conserve energy, or equivalently, probability, the transmission and reflection amplitudes have to be out of phase by $\pi/2$ for the case of the symmetric beamsplitter.\cite{Zeilinger,Holbrow} It is common to attach this phase to the reflection amplitude, so $r=\exp(i\pi/2)/\sqrt{2}=i/\sqrt{2}$ and $t=1/\sqrt{2}$. The probability amplitude that both photons come out of separate
output ports of the beamsplitter has two terms: when both transmit, it is $tt=1/2$ [Fig.~\ref{fig:HOM}(c)]; and both reflect, it is $rr=-1/2$ [Fig.~\ref{fig:HOM}(d)]. The probability for the event is then
\begin{equation}
P_{\rm ind}=|tt+rr|^2=0.
\label{eq:homint}
\end{equation}
That is, the two possibilities interfere destructively.
If the photons are distinguishable, such as when they arrive at the beamsplitter at distinguishable different times, then the probability is
\begin{equation}
P_{\rm dis}=|tt|^2+|rr|^2=1/2.
\end{equation}
Distinguishable different times means that a measurement of the two arrival times of the photons can be used to distinguish between the two possibilities. Other distinguishing attributes are the photons' polarization, energy, or spatial mode.
We note that the previous analysis applies to bosons, like the photon. For fermions (for example, electrons), the amplitude rule of Eq.~(\ref{eq:homint}) is not a sum but a difference of the two probability amplitudes.\cite{Feynman} This fact is due to the exchange symmetry of indistinguishable fermions, which unlike bosons, cannot occupy the same state (i.e., both fermions having the same momentum). Thus, in the HOM experiments with electrons,\cite{BocquillonSci13} the probability of Eq.~(\ref{eq:homint}) is 1. Feynman explains the distinction between bosons and fermions with a similar type of experiment, of identical particles in a head-on collision.\cite{Feynman} This phenomenon is more formally described in terms of the symmetry of the two-particle wavefunction, presented in Sec.~\ref{sec:results}. Ultimately, the HOM experiment is a demonstration of the superposition of the state of two particles and how it leads to measurable interference effects that are purely quantum
mechanical.
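A short numerical check of the amplitude arithmetic above (our own illustration, assuming Python with numpy; it is not part of the original experiment) makes the boson/fermion contrast concrete.
\begin{verbatim}
# A minimal sketch of the amplitude bookkeeping for a symmetric beamsplitter.
import numpy as np

t = 1 / np.sqrt(2)        # transmission amplitude
r = 1j / np.sqrt(2)       # reflection amplitude, pi/2 out of phase

print(abs(t*t + r*r)**2)              # indistinguishable bosons: 0.0
print(abs(t*t)**2 + abs(r*r)**2)      # distinguishable photons: 0.5
print(abs(t*t - r*r)**2)              # indistinguishable fermions: 1.0
\end{verbatim}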
Recreation of this demonstration is not straightforward, mostly because the experimental alignment requires much effort and expertise, and thus is time consuming. To see the interference, both photons created from the same source---spontaneous parametric down-conversion (described below)---have to travel exactly the same distance to the beamsplitter, so setting up the photon paths needs very careful alignment. Additionally, the experiment requires hardware that facilitates scanning the photon path difference by tens of micrometers.
A final challenge occurs at the beamsplitter. The photons' spatial mode must fully overlap at the beam splitter and along the output paths. Otherwise they will be spatially distinguishable. For educational purposes, this demonstration has been done before in free space,\cite{GarivotoEJP12} where the experimentalists implemented the following clever method of alignment: a Mach-Zehnder interferometer was set up such that its separate arms were the two photon paths. Once the paths were aligned to be equal via interferometry, the first beamsplitter was replaced by the down-conversion crystal.
The purpose of this article is to report on a Hong-Ou-Mandel interference demonstration that eliminates the free-space alignment of the photon beams reaching the beamsplitter. Instead, we use a commercial pre-aligned device.
The cost of this device is within the norm for component hardware in common use in quantum optics instructional physics laboratories.
This new system adds its own complication, though. That is, the down-converted photons need to be coupled to single-mode fibers. We find this extra effort to be a worthwhile trade-off. In Appendix A, we provide a suggested procedure for the required alignment. We also note that this experiment is already sold commercially as a black-box-type experiment.\cite{Qubitekk,Qutools} Our experiment is based on the beam-splitting component in one of these commercial products, but our aim is a demonstration in which students set up the entire apparatus, and it is done at a lesser cost.
The article is organized as follows. In Sec.~\ref{sec:app}, we provide a detailed description of the experimental apparatus. Section~\ref{sec:results} follows with a quantum-mechanical description of the phenomenon that fits within the curricular formalism of an undergraduate course, along with the experimental results. Two appendices give alignment procedures, a parts list and component costs.
\section{Apparatus}\label{sec:app}
A diagram of the apparatus is shown in Fig.~\ref{fig:app}. The figure has been sectioned to highlight important parts. The first section is the source of photon pairs via type-I spontaneous parametric down-conversion. A pump laser emitted horizontally polarized light of wavelength 405.4 nm. It was steered toward a beta barium borate (BBO) crystal that produced photon pairs that were vertically polarized. Up to this point, this is a standard setup for undergraduate quantum optics experiments.\cite{GalPRL14,URL}
In the central section, collimators (C) collect photons into optical fibers. They were placed at $\pm3^\circ$ from the incident pump-beam direction, and located 1-m away from the crystal. Bandpass filters (F), set to collect photons that are near the degenerate wavelength of 810.8~nm, were attached to the collimators along with a mounted iris.
Before we continue with more detail of the central section, we add that the third section is also standard: single-photon avalanche diode detectors (SPAD) fed digital electronic pulses from photon detections to an electronic counting and coincidence unit, which in turn fed data to a laptop/desktop with data acquisition programs written in MATLAB.\cite{URL}
\begin{figure}
\caption{Sections of the apparatus. A 405-nm pump laser beam was steered to a BBO crystal to generate vertically polarized photons. A fiber-coupled beamsplitter (FCBS) provided the photon interference. Other hardware components included half-wave plates ($\hat{W}
\label{fig:app}
\end{figure}
The central portion of the apparatus has several new components. Figure~\ref{fig:appfig} shows a photograph that emphasizes this section of the apparatus. The main component is a commercial pre-aligned fiber-coupled beamsplitter (FCBS). Because the polarization of the light has to be maintained, the fibers are single mode and polarization maintaining (PMOF). The two fiber inputs of the FCBS are connected to the collimators and the fiber outputs are connected to the two SPADs. The first departure from the standard experiments was the use of PMOFs. They significantly restricted the input light. To maximize the coupling of the down-converted photons to the fiber, we used collimators with adjustable focus (C). These collimators have fiber connectors type FC, which lock the fibers into a specific orientation. To collect photon pairs with the same polarization, both collimators were mounted to have the same orientation of the FC connectors in their mount.
\begin{figure}
\caption{Photograph of the apparatus showing the main components: down-conversion crystal (BBO), motorized rotation mount (MW); and linear stage (MS), polarization-maintaining optical fiber (PMOF); fiber-coupled beamsplitter (FCBS); and photon detectors (SPAD).}
\label{fig:appfig}
\end{figure}
The collimators were mounted on mirror mounts via adapters. The mount of one of the collimators was attached to a magnetic mount and placed in contact with an aluminum plate with a 1-m radius of curvature (see Fig.~\ref{fig:app}), with center of curvature located approximately at the position of the crystal. The purpose of the latter was to allow the flexibility to translate the collimator sideways to optimize coincidence counts. The curved path reduced/eliminated the walk-off error that would be introduced if the collimator was translated linearly sideways. The other collimator's mirror mount was mounted on a double stack of translation stages set to move the collimator toward or away from the crystal. The bottom stage, attached to the breadboard, was a standard manual translation stage with a micrometer screw, whose purpose was to make coarse adjustments. On top of it was a small motorized translation stage (MS) for doing an automatic scan of the crystal-collimator distance. The
two stages changed the photon-path difference.
The last component of the arrangement was a pair of half-wave plates. One was mounted on a manual rotation mount and the other on a motorized rotation mount (MW). They are needed for two purposes: (1) to ensure that the photons enter the optical fibers with the same polarization orientation, and (2) for scanning the polarization distinguishability of the photons as described in Sec.~\ref{sec:pol}.
\section{Two Situations}\label{sec:results}
We describe the experiments in three parts. First, we describe the HOM interference itself. Next, we describe two situations where we can turn the interference on and off, one based on the path length and the other based on the polarization.
\subsection{The HOM Interference}
This HOM interference phenomenon has been described before in terms of number states.\cite{Kwiat92,GarivotoEJP12} Here, we use an alternative approach in terms of momentum states.
Photons 1 and 2 leaving their place of birth can arrive at the beamsplitter in a state with either momentum $\ket{x}$ or $\ket{y}$, as labeled in Fig.~\ref{fig:app}. Each photon can be in these two possible states. Thus, the full state of both photons is the tensor product of the two photon spaces. Due to the bosonic nature of the quantum state of the two photons, they must be described by a wavefunction that is symmetric by the interchange of the two particles. Thus, the initial state of the two photons is given by
\begin{equation}
\ket{\psi}_i=\frac{1}{\sqrt{2}}\left(\ket{x}_1\ket{y}_2+\ket{y}_1\ket{x}_2\right)
\label{eq:ind}
\end{equation}
To manipulate the state of the light, we can use the matrix notation of quantum mechanics: $\ket{x}=(1\;0)^T$ and $\ket{y}=(0\;1)^T$, with $T$ denoting the transpose of the matrix. The two product states are then given by
\begin{equation}
\ket{x}_1\ket{y}_2= \begin{pmatrix}1 \\ 0\end{pmatrix}_1\otimes \begin{pmatrix}0 \\ 1\end{pmatrix}_2=\begin{pmatrix}0 \\ 1 \\ 0 \\ 0\end{pmatrix}
\end{equation}
and
\begin{equation}
\ket{y}_1\ket{x}_2=\begin{pmatrix}0 \\ 1\end{pmatrix}_1\otimes \begin{pmatrix}1 \\ 0\end{pmatrix}_2=\begin{pmatrix}0 \\ 0 \\ 1 \\ 0\end{pmatrix},
\end{equation}
which results in the initial state given by
\begin{equation}
\ket{\psi}_i=\frac{1}{\sqrt{2}} \begin{pmatrix}0 \\ 1 \\ 1 \\ 0\end{pmatrix}.
\end{equation}
The symmetric beamsplitter must apply to each photon space, resulting in an operator acting on the larger space
\begin{equation}
\hat{B}_2=\begin{pmatrix}t & r \\ r & t\end{pmatrix}\otimes\begin{pmatrix}t & r \\r & t\end{pmatrix}=
\frac{1}{2}\begin{pmatrix}1 & i & i & -1\\
i & 1 & -1 & i\\
i & -1 & 1 & i\\
-1 & i & i & 1
\end{pmatrix},
\end{equation}
with $t$ and $r$ as defined in Sec.~\ref{sec:intro}. The final state is obtained in a straightforward way by applying the beamsplitter operator to the initial state
\begin{equation}
\ket{\psi}_f=\hat{B}_2\ket{\psi}_i=\frac{i}{\sqrt{2}}\begin{pmatrix}1 \\ 0 \\ 0 \\ 1\end{pmatrix}
\label{eq:psif}
\end{equation}
Notice that to within an overall phase, this state is equivalent to
\begin{equation}
\ket{\psi}_f=\frac{1}{\sqrt{2}}\left(\ket{x}_1\ket{x}_2+\ket{y}_1\ket{y}_2\right).
\label{eq:find}
\end{equation}
That is, in the final state both photons end up traveling along the same direction. Should we put photon detectors at each output of the beamsplitter, we would not get any coincidences. Note that Eq.~(\ref{eq:psif}) is a nonseparable (entangled) state of the two photons in the momentum degree of freedom.
If we put detectors at the two outputs of the beamsplitter, the probability of detecting photon 1 in one detector and photon 2 in the other, and {\em vice versa}, is
\begin{eqnarray}
P_c&=&P(x_1,y_2)+P(y_1,x_2)\label{eq:pc}\\
P_c&=&|\bra{x}_1\bra{y}_2\ket{\psi}_f|^2+|\bra{y}_1\bra{x}_2\ket{\psi}_f|^2\\
P_c&=&0.
\end{eqnarray}
In our photon counting experiment, when a photon impinges on a detector, the detector outputs a pulse that is sent to an electronic circuit. The circuit is set to record when a photon arrives at each of the two detectors within a certain time window, an event defined to be a coincidence. In our HOM experiment, the time window is about 40 ns and the interference effect results in no coincidences being detected.
Let us also consider the situation when there is no interference, that is, when the photons are distinguishable. For example, suppose we know that one photon arrives before the other because the length of paths of the photons from the crystal to the beamsplitter are distinguishably not the same. Suppose also that the photon arrival time from different paths is still within the experimental coincidence window. What would we measure?
One way to analyze the situation is this: assume photon 1 in momentum state $\ket{x}_1$ arrives at the beamsplitter distinguishably sooner than photon 2, which is in momentum state $\ket{y}_2$. Then, the system's initial state is $\ket{x}_1\ket{y}_2$ and the final state will be:
\begin{equation}
\ket{\psi}_{f,a}=\hat{B}_2\ket{x}_1\ket{y}_2=\frac{1}{2}\begin{pmatrix}i \\ 1 \\ -1 \\ i\end{pmatrix}
\end{equation}
Thus, from Eq.~(\ref{eq:pc}), the probability of measuring a coincidence experimentally will be $P_c=1/2$. We get the same result when we consider the other possibility, i.e., initial state $\ket{y}_1\ket{x}_2$. To make it symmetric we could say that the first possibility occurs half the time and the second possibility occurs the other half. This still gives us $P_c=1/2$.
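The matrix manipulations above are compact enough to verify with a few lines of code. The following sketch (ours, assuming numpy; it is not part of the paper's analysis) builds $\hat{B}_2$ as a Kronecker product, applies it to the symmetric and to a distinguishable input state, and evaluates the coincidence probability of Eq.~(\ref{eq:pc}).
\begin{verbatim}
# A verification sketch for the two-photon beamsplitter algebra.
import numpy as np

t, r = 1/np.sqrt(2), 1j/np.sqrt(2)
B = np.array([[t, r], [r, t]])
B2 = np.kron(B, B)                        # beamsplitter on the two-photon space

x = np.array([1, 0]); y = np.array([0, 1])
xy = np.kron(x, y); yx = np.kron(y, x)    # |x>_1|y>_2 and |y>_1|x>_2

def coincidence(psi_out):
    return abs(xy @ psi_out)**2 + abs(yx @ psi_out)**2

psi_ind = (xy + yx) / np.sqrt(2)          # symmetric (indistinguishable) input
print(coincidence(B2 @ psi_ind))          # 0.0
print(coincidence(B2 @ xy))               # distinguishable input: 0.5
\end{verbatim}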
Based on the previous discussion contrasting bosons and fermions, we can repeat this analysis for the case of fermions. Then, the initial wavefunction must be antisymmetric [i.e., Eq.~(\ref{eq:ind}) with a minus sign instead of a plus sign] because the total wavefunction must change sign with particle exchange. The application of the beamsplitter operation preserves the symmetry of the initial state, yielding an output state that is antisymmetric (i.e., the same as the initial state), underscoring that identical fermions cannot be in the state of Eq.~(\ref{eq:find}). Thus, the symmetry of the wave function accounts for the way the amplitudes combine in Eq.~(\ref{eq:homint}) (i.e., plus sign for bosons and minus sign for fermions).
\subsection{The Dip}\label{sec:dip}
In parametric down-conversion, photon pairs are emitted with energies $E_1=E$ and $E_2=E_0-E$, where $E_0$ is the energy of the pump (parent) photon. Right before being detected, the photons go through energy filters, which restrict the energy of the photons further. Thus, the apparatus detects photons in a superposition of energy states:
\begin{equation}
\ket{\psi}=\int dE\;a(E)\ket{E}_1\ket{E_0-E}_2
\label{eq:psimeas}
\end{equation}
where $a(E)$ is a measure of the overall bandwidth of the photons that are being detected. The state of the photons in Eq.~(\ref{eq:psimeas}) is non-separable. That is, the photons are in an entangled state of energy. Their exact energy is unknown to within a range of energies $\Delta E$ determined by $a(E)$ and they constitute a wavepacket. As such, the photons are coherent to within the coherence time $\Delta t\sim h/\Delta E$, where $h$ is Planck's constant. Because filters are specified in terms of the wavelength, we can express the energy bandwidth in terms of the wavelength: $\Delta E=hc\Delta \lambda/\lambda^2$,
where $c$ is the speed of light in vacuum. Thus, a practical way to express the coherence is in terms of the length of the wavepacket, also known as the coherence length
\begin{equation}
\ell_c=c\Delta t=\frac{\lambda^2}{\Delta\lambda}.
\label{eq:lc}
\end{equation}
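As a rough numerical illustration (our own, and the 10-nm figure is an assumption about the narrower filter): for the degenerate wavelength $\lambda\approx 810.8$~nm and a bandpass of $\Delta\lambda=10$~nm, Eq.~(\ref{eq:lc}) gives $\ell_c\approx (810.8~\mathrm{nm})^2/10~\mathrm{nm}\approx 66~\mu\mathrm{m}$, while a 30-nm bandpass gives $\approx 22~\mu\mathrm{m}$, consistent with the values quoted at the end of this subsection.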
If the photons arrive at the beamsplitter within the coherence time, then they can be considered indistinguishable (assuming all other photon properties are identical). An alternative reasoning is to say that the difference in the length of the two paths from the crystal to the beamsplitter is less than the coherence length. What occurs in the intermediate cases? For that we need to go deeper into the quantum mechanics formalism.
In the previous section we analyzed the two extreme interference situations: the photons are indistinguishable yielding no coincidences, or the photons are distinguishable and coincidences are observed. In the former case, the photons are in a quantum entangled state. In the latter, considering the two possibilities, the photons are in a mixed state. To account for mixed states we need to resort to another quantum-mechanical object for describing the state of the photons---the density matrix.\cite{GalAJP10}
The density matrix for the state of the light in the indistinguishable case is given by the outer product of the vector matrices:
\begin{equation}
\hat{ \rho}_{\rm ind}= \ket{\psi}\bra{\psi}=\frac{1}{2}\begin{pmatrix} 0 & 0 & 0 & 0\\
0 & 1 & 1 & 0\\
0 & 1 & 1 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}.
\end{equation}
When the photons are in the distinguishable case, the density matrix is the weighted sum of the density matrices for each case considered separately:
\begin{equation}
\hat{\rho}_{\rm xy}=\ket{x}_1\ket{y}_2\bra{y}_2\bra{x}_1
\end{equation}
and
\begin{equation}
\hat{\rho}_{\rm yx}=\ket{y}_1\ket{x}_2\bra{x}_2\bra{y}_1,
\end{equation}
with the mixed state for all the cases given by
\begin{equation}
\hat{\rho}_{\rm dis}=\frac{1}{2}\hat{\rho}_{\rm xy}+\frac{1}{2}\hat{\rho}_{\rm yx}=
\frac{1}{2}\begin{pmatrix} 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}.
\end{equation}
The density matrix after the photons go through the beamsplitter is given by
\begin{equation}
\hat{\rho}_f=\hat{B}_2\,\hat{\rho}_i\,\hat{B}_2^\dagger.
\end{equation}
Readers can show that this outcome is consistent with $\ket{\psi}_f\bra{\psi}_f$ of Eq.~(\ref{eq:find}).
The coincidence probability will be given by Eq.~(\ref{eq:pc}), which when using the density matrix, is expressed in terms of the trace of the product of the density matrix of the state times the density matrix of the state being measured ($\ket{x}_1\ket{y}_2$ or $\ket{y}_1\ket{x}_2$). This results in
\begin{equation}
P_c={\rm Tr}[\hat{\rho}_{f}\hat{\rho}_{xy}]+{\rm Tr}[\hat{\rho}_{f}\hat{\rho}_{yx}].
\label{eq:tr}
\end{equation}
For the indistinguishable case, taking $\hat{\rho}_i=\hat{\rho}_{\rm ind}$ we easily find $P_c=0$; for the distinguishable case, $\hat{\rho}_i=\hat{\rho}_{\rm dis}$ yields $P_c=1/2$.
The intermediate case can be expressed by the state
\begin{equation}
\hat{\rho}_{\rm int}=p\,\hat{\rho}_{\rm ind}+(1-p)\,\hat{\rho}_{\rm dis}
\end{equation}
where $p$ is the probability that the photons are indistinguishable. This state is similar to the form of the Werner state.\cite{WernerPRA89} This matrix describes the situation when the state of the photons is partly indistinguishable and partly distinguishable, with $p$ and $(1-p)$ determining the relative weights.
It is left to the reader to show that the final density matrix after the beamsplitter is
\begin{equation}
\hat{\rho}_{\rm int-f}=\frac{1}{4}\begin{pmatrix}1+p & 0 & 0 & 1+p \\
0 & 1-p & -1+p & 0 \\
0 & -1+p & 1-p & 0\\
1+p & 0 & 0 & 1+p\end{pmatrix}
\label{eq:rhowf}
\end{equation}
Using $\hat{\rho}_{f}=\hat{\rho}_{\rm int-f}$ as given by Eq.~(\ref{eq:rhowf}), Fig.~\ref{fig:HOMp}(a) shows the calculated coincidence probability from Eq.~(\ref{eq:tr}) as a function of $p$.
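The density-matrix algebra leading to Eq.~(\ref{eq:rhowf}) and Fig.~\ref{fig:HOMp}(a) can be reproduced with a short script. The sketch below (ours, assuming numpy; not the paper's code) propagates $\hat{\rho}_{\rm int}$ through the beamsplitter and evaluates Eq.~(\ref{eq:tr}) for a few values of $p$.
\begin{verbatim}
# A sketch reproducing the density-matrix calculation.
import numpy as np

t, r = 1/np.sqrt(2), 1j/np.sqrt(2)
B2 = np.kron([[t, r], [r, t]], [[t, r], [r, t]])

xy = np.kron([1, 0], [0, 1]).astype(complex)
yx = np.kron([0, 1], [1, 0]).astype(complex)
rho_xy = np.outer(xy, xy.conj()); rho_yx = np.outer(yx, yx.conj())

psi = (xy + yx) / np.sqrt(2)
rho_ind = np.outer(psi, psi.conj())
rho_dis = 0.5*rho_xy + 0.5*rho_yx

def coincidence(rho_i):
    rho_f = B2 @ rho_i @ B2.conj().T
    return np.trace(rho_f @ rho_xy).real + np.trace(rho_f @ rho_yx).real

for p in (0.0, 0.5, 1.0):
    rho_int = p*rho_ind + (1 - p)*rho_dis
    print(p, coincidence(rho_int))   # decreases linearly from 0.5 to 0
\end{verbatim}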
\begin{figure}
\caption{(a) Calculated probability of measuring a coincidence as a function of the Werner probability $p$; (b) Calculated Werner probability using a simple Gaussian model as a function of the delay in overlap of the photon amplitudes $x_0$ relative to the coherence length $l_c$; (c) Calculated probability of measuring a coincidence as a function of $x_0/l_c$. }
\label{fig:HOMp}
\end{figure}
The landmark experiment by Hong, Ou, and Mandel demonstrated the interference effect by scanning the difference in path, and therefore the overlap of the interference of the two photon amplitudes, exhibiting a famous ``dip'' in the coincidences. We can reproduce the dip analytically using a simple model. If we consider the photon wavepackets as Gaussians with a width at half maximum given by $\ell_c$ but displaced by the path difference $x_0$, then $p$ is proportional to the overlap integral. If we displace the two Gaussians by $x_0$, then
\begin{equation}
p(x_0)=\frac{2}{\ell_c}\sqrt{\frac{2\ln(2)}{\pi}}\int_{-\infty}^{+\infty}e^{-4\ln(2)x^2/\ell_c^2}\,e^{-4\ln(2)(x-x_0)^2/\ell_c^2}\,dx
\label{eq:px0}
\end{equation}
where we have normalized the overlap such that $p(0)=1$, as shown in Fig.~\ref{fig:HOMp}(b). If we now plot the coincidence probability as a function of $x_0$ we recreate the famous HOM dip, as shown in Fig.~\ref{fig:HOMp}(c).
We note that we present this simple model just to capture the essence of the phenomenon as measured in the laboratory. A more accurate calculation would have to take into account the actual measured bandwidth of the light and other experimental details.
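To connect the overlap model to the plotted dip, the sketch below (ours, assuming numpy; not the measured data) evaluates the overlap numerically and converts it to the model coincidence probability $(1-p)/2$; under this Gaussian model the overlap integral also has the closed form $p(x_0)=\exp[-2\ln 2\,(x_0/\ell_c)^2]$.
\begin{verbatim}
# A numerical sketch of the model dip.
import numpy as np

lc = 66.0                                 # coherence length in micrometers (assumed)
x = np.linspace(-300, 300, 20001)         # integration grid, micrometers
g = np.exp(-4*np.log(2)*x**2/lc**2)       # Gaussian amplitude profile, FWHM = lc

def overlap(x0):
    gs = np.exp(-4*np.log(2)*(x - x0)**2/lc**2)
    return np.sum(g*gs) / np.sum(g*g)     # grid spacing cancels; overlap(0) = 1

for x0 in (0.0, 33.0, 66.0, 132.0):
    p = overlap(x0)
    print(x0, p, (1 - p)/2)               # model coincidence probability
\end{verbatim}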
After following the alignment procedure outlined in Appendix A and adjusting the waveplates so that photons are input into the fibers with the same polarization, the dip can be found and scanned. In a lab experience lasting only a few hours, the initial alignment is best done for the students beforehand. Other students can bring the apparatus to this point themselves as part of a several-week laboratory exercise, as we do in the add-on lab of our quantum mechanics course.\cite{URL} Beyond this point students can be asked to ``discover'' the dip, study it in some detail, and investigate the effect of the bandpass filters on the width of the dip. In Fig.~\ref{fig:HOMdip}, we show the measurement of the dip taken by an undergraduate student (the first author of this paper), who did the experiment as a senior capstone project. This scan of coincidences was taken at 4~s per data point. The horizontal scale is the position of the motorized stage
(a Matlab program to acquire such data is posted on our website, Ref.~\onlinecite{URL}).
Data points were taken every 4 stepper-motor steps of the motorized translation stage, which correspond to a motion of the collimator by 5.33 $\mu$m, or a time delay of 18 fs in the
arrival times of the two photons. Error bars correspond to the standard deviation expected from Poisson statistics. Accidental coincidences were of the order of 7 counts.
\begin{figure}
\caption{Recording of the coincidences as a function of the position of one of the collimators; effectively the difference in the path length of the two photons, from the crystal to the beamsplitter. }
\label{fig:HOMdip}
\end{figure}
The quality of the measured dip can be evaluated by the visibility, defined as
$v=(N_{\rm max}-N_{\rm min})/(N_{\rm max}+N_{\rm min})$. We actually obtained this value by fitting an inverted Gaussian to the data, giving
$v=0.93$, a remarkably good value, which can be attributed largely to the advantage of using the commercial fiber-coupled beamsplitter. The full width of the dip at half minimum (FWHM) is about 55 $\mu$m, which is of the same order as the calculated coherence length of 66~$\mu$m. With a wider filter of (nominal) 30 nm bandwidth, we got 18 $\mu$m, consistent with the calculation ($22\;\mu$m). There is an asymmetry in the shoulders of the dip; the shoulders changed when we repeated the experiment with different parameters (filters and positions of the collimators). The asymmetry depends on the shape of $a(E)$ in Eq.~(\ref{eq:psimeas}), which is related to the shape of the transmission curve of each bandpass filter. We also note that the dip appears only in the coincidence counts; the singles counts (i.e., photons detected by each detector separately) are constant throughout the scan. This underscores that the dip is a two-photon effect.
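As an illustration of this analysis step, the short sketch below fits an inverted Gaussian with a visibility parameter to synthetic, Poisson-distributed count data; the functional form and all numbers are placeholders chosen only to resemble a scan like that of Fig.~\ref{fig:HOMdip}, not our measured data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def dip(x, N0, v, x_c, w):
    """Inverted Gaussian: baseline N0, visibility v, center x_c, 1/e half-width w."""
    return N0 * (1.0 - v * np.exp(-((x - x_c) / w) ** 2))

rng = np.random.default_rng(1)
x = np.linspace(-100.0, 100.0, 41)                      # stage positions (micrometers, synthetic)
counts = rng.poisson(dip(x, 1150.0, 0.93, 0.0, 33.0))   # w = 33 um corresponds to a ~55 um FWHM

popt, pcov = curve_fit(dip, x, counts, p0=[1000.0, 0.8, 5.0, 25.0],
                       sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)
print("fitted visibility v = {:.3f} +/- {:.3f}".format(popt[1], np.sqrt(pcov[1, 1])))
\end{verbatim}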
\subsection{Polarization Distinguishability}\label{sec:pol}
The photons reaching the beamsplitter can be distinguished by other degrees of freedom. That is, the path length difference of the photons arriving at the beamsplitter can be set to zero, while the photons are still distinguishable. One way this can be done is by manipulating their polarization.\cite{Kwiat92} The photons produced by type-I spontaneous parametric down conversion have the same polarization. If we rotate the polarization of one of them by $90^\circ$, then the two photons are distinguishable by polarization and the interference cancellation disappears. If the polarization setting is between $0^\circ$ and $90^\circ$, then the coincidence probability is somewhere in between.
Because polarization is represented by a two-dimensional space, we can incorporate it into the pure-state description of the light.
The distinction between this and the situation of Sec.~\ref{sec:dip} is more subtle, involving open and closed quantum systems.\cite{Petruccione,Sales08} Without getting too technical, we proceed by calculating the coincidence probability for this situation by fully accounting for polarization in the state of the light. This approach also provides an additional way to understand two-photon interference.
If we add the polarization degree of freedom for each photon, the dimension of each photon's Hilbert space doubles, and the two-photon system is therefore described by a 16-dimensional Hilbert space. Doing the matrix operations by hand is a bit unwieldy, with 16-element vectors and $16\times16$ operator matrices, but using software platforms such as Mathematica or MATLAB we can do the laborious linear-algebraic steps easily. If we add polarization to the photon's state, then the initial state, where both photons are vertically polarized, is given by
\begin{equation}
\ket{\psi}_i=\frac{1}{\sqrt{2}}\left(\ket{x,V}_1\ket{y,V}_2+\ket{y,V}_1\ket{x,V}_2\right),
\end{equation}
where we have now added a label $V$ to the momentum state of each photon in order to specify its polarization. Each product state is of the form
\begin{equation}
\ket{\varphi}=\begin{pmatrix}x \\ y\end{pmatrix}_1\otimes\begin{pmatrix}H \\ V\end{pmatrix}_1\otimes \begin{pmatrix}x \\ y\end{pmatrix}_2\otimes \begin{pmatrix}H \\ V\end{pmatrix}_2
\end{equation}
where $V$ and $H$ specify vertical and horizontal polarization, respectively. In vector form, the initial state would be $\ket{\psi}_i=2^{-1/2}(0\;0\;0\;0\;0\;0\;0\;1\;0\;0\;0\;0\;0\;1\;0\;0)^T$.
Before the photons reach the beamsplitter, they go through half-wave plates. For symmetry, we add one for each momentum input. (Experimentally, two waveplates are needed to keep the optical path of the two photons as close to equal as possible.) The waveplates for the $x$ and $y$ momentum states are oriented by angles $\theta$ and $\phi$ relative to the vertical direction, respectively.
We can express the operator for a half-wave plate oriented at an angle $\theta$ by
\begin{equation}
\hat{W}_\theta=\begin{pmatrix}-\cos2\theta & -\sin2\theta\\ -\sin2\theta & \cos2\theta\end{pmatrix}.
\end{equation}
Because each waveplate is associated with one momentum state, the operator for the two waveplates acting on the space of the two photons has the form
\begin{equation}
\hat{Z}_{\theta,\phi}=\hat{P}_x\otimes\hat{W}_\theta\otimes\hat{P}_y\otimes\hat{W}_\phi+\hat{P}_y\otimes\hat{W}_\phi\otimes\hat{P}_x\otimes\hat{W}_\theta,
\end{equation}
where $\hat{P}_x=\ket{x}\bra{x}$ and $\hat{P}_y=\ket{y}\bra{y}$ are projection operators for the momentum states.
It is left as an exercise for the reader to show that the state $\ket{\psi}^\prime$ after the waveplates and before the beamsplitter is
\begin{equation}
\ket{\psi}^\prime=\hat{Z}_{\theta,\phi}\ket{\psi}_i=\frac{1}{\sqrt{2}}\left(\ket{x,2\theta}_1\ket{y,2\phi}_2+\ket{y,2\phi}_1\ket{x,2\theta}_2\right)
\end{equation}
where for simplicity we have labeled the polarization state by the orientation of the polarization relative to the vertical direction.
The beamsplitter acts only on the momentum states, leaving the polarization states unchanged. Its operator is given by
\begin{equation}
\hat{B}_4=\hat{B}\otimes\hat{I}\otimes\hat{B}\otimes\hat{I},
\end{equation}
where $\hat{I}$ is the identity, representing the inaction of the beamsplitter on the polarization degree of freedom. The next steps are mechanical: computing the final state followed by a calculation of the coincidence probability. At this point in our lab program, we ask students not just to perform the calculation, but to devise how to calculate the coincidence probability $P_c$ in the larger space following the prescription of Eq.~(\ref{eq:pc}) and produce the result
\begin{equation}
P_c=\frac{1}{2}\left[1-\cos^2(2\phi-2\theta)\right].
\label{eq:HOMhwp}
\end{equation}
Thus, the coincidence probability depends only on the relative orientation of the two polarizations, and it vanishes when they are equal.
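For instructors or students who prefer to let the computer do the bookkeeping, the following NumPy sketch carries out the 16-dimensional calculation and recovers Eq.~(\ref{eq:HOMhwp}). The tensor-product ordering, the real symmetric 50:50 beamsplitter matrix, and the coincidence projector (one photon in each output port) are assumptions we make here; the prescription of Eq.~(\ref{eq:pc}) may be implemented with a different but equivalent convention.
\begin{verbatim}
import numpy as np
from functools import reduce

def kron(*ops):
    """Kronecker product of a sequence of vectors or matrices."""
    return reduce(np.kron, ops)

# Momentum basis (x, y), polarization basis (H, V); ordering mom1, pol1, mom2, pol2.
xs, ys = np.array([1.0, 0.0]), np.array([0.0, 1.0])
V = np.array([0.0, 1.0])
Px, Py = np.outer(xs, xs), np.outer(ys, ys)              # momentum projectors
I2 = np.eye(2)
B = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # assumed 50:50 beamsplitter convention

def W(theta):
    """Half-wave plate oriented at angle theta from vertical."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[-c, -s], [-s, c]])

def coincidence_probability(theta, phi):
    psi = (kron(xs, V, ys, V) + kron(ys, V, xs, V)) / np.sqrt(2.0)        # initial state
    Z = kron(Px, W(theta), Py, W(phi)) + kron(Py, W(phi), Px, W(theta))   # waveplates
    B4 = kron(B, I2, B, I2)                                               # beamsplitter
    out = B4 @ (Z @ psi)
    # Assumed coincidence projector: one photon in each output port (momentum |xy> or |yx>).
    proj = kron(Px, I2, Py, I2) + kron(Py, I2, Px, I2)
    return float(out @ proj @ out)

for phi_deg in (0.0, 22.5, 45.0, 90.0):
    phi = np.deg2rad(phi_deg)
    print(f"phi = {phi_deg:5.1f} deg: numeric P_c = {coincidence_probability(0.0, phi):.4f}, "
          f"Eq. (HOMhwp) = {0.5 * (1 - np.cos(2 * phi) ** 2):.4f}")
\end{verbatim}
For $\theta=0$ the printed values agree with $\tfrac{1}{2}\left[1-\cos^2(2\phi)\right]$ for every angle scanned.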
Measurements for this section of the experiment follow directly from the setup of the previous ones. Students can also be given the freedom to take the data in whichever form they decide. For example, the data of Fig.~\ref{fig:HOMhwp} show a scan of the angle $\phi$ of the half-wave plate on the motorized mount. The data follow remarkably closely the expectation of Eq.~(\ref{eq:HOMhwp}) for $\theta=0$. At $\phi=\pm 45^\circ$ the two input polarizations are orthogonal and the coincidences are at a maximum, which corresponds to about 1150 counts, the same as the counts corresponding to 1/2 probability in Fig.~\ref{fig:HOMdip}. At $\phi=0, \pm90^\circ$ the polarizations are parallel and we get destructive interference, with about the same number of counts as at the dip. Because of the simple functional form of the data of Fig.~\ref{fig:HOMhwp}, we fitted Eq.~(\ref{eq:HOMhwp}), with a visibility parameter
$v$ multiplying the cosine function, to the data (the fit is not shown to avoid cluttering the figure), which resulted in an excellent match, with a reduced chi-square of 1.04. The fitted visibility was
$v=0.94$, which is quite remarkable for a teaching laboratory experience.
\begin{figure}
\caption{Recording of the coincidences as a function of the angular position $\phi$ of one of the half-wave plates, while the other half-wave plate was oriented at $\theta=0^\circ$. Thus, the half-wave plate setting $\phi$ is half the difference in the angular orientation of the polarizations of the two photons. }
\label{fig:HOMhwp}
\end{figure}
\section{Conclusions}
In summary, we present a simplified version of the Hong-Ou-Mandel experiment that is suitable for the undergraduate instructional laboratory. The use of a fiber-coupled beamsplitter greatly simplified the alignment. The development of the experiment involved a one-semester senior capstone project. Once we knew how to overcome the challenging parts of the apparatus, disassembly and reassembly proceeded at the same pace as in other single-photon experiments. In Appendix A, we present some of our recommendations for setup and alignment.
We were quite surprised by the high quality of the data, shown in Figs.~\ref{fig:HOMdip} and \ref{fig:HOMhwp}. Free-space alignment normally produces much lower visibilities because of difficulties in the alignment. We found that the use of a motorized translation stage greatly streamlined finding the interference dip and making detailed scans. The use of waveplates helped in finding the best visibility, which was around 0.93-0.94. Such visibilities make this experiment a strong demonstration of a purely quantum interference effect. The motorized aspect of the scans also allows the experiment to be performed remotely.\cite{GalSPIE}
A discussion of this experiment leads to fundamental concepts of quantum mechanics, such as the symmetry of the wavefunction. The lack of coincidence is indeed because the wavefunction of the two photons must be symmetric due to their bosonic nature. Should another part of the wavefunction, such as polarization\cite{Weinfurter94,Braunstein95} or spatial mode,\cite{Walborn03} be antisymmetric, it would require an antisymmetric momentum wavefunction so that the total wavefunction is symmetric. This situation will lead to a {\em maximum} in the coincidences.\cite{Mattle96,Walborn03} Our classical intuition leads us to associate a physical force (e.g., electromagnetic, gravitational) whenever there is an interaction, but quantum mechanics allows such an interaction between particles to exist just because they are identical. This property also manifests tangibly with (identical) electrons in atoms via the exchange force (see Ref.~\onlinecite{Griffiths} for an illuminating presentation). The same is the case here with photons in a beamsplitter, which is also the basis for using photons in quantum computation.\cite{KLM01}
The temporal-overlap and polarization aspects of the experiments presented above also underscore the requirement of indistinguishability for interference to occur.
It serves as a basis for discussing another remarkable quantum interference experiment, also known as the ``mind-boggling experiment,'' which exploits the indistinguishability aspects of this interference phenomenon.\cite{Zou91} In that experiment, interference between photons of {\em separate} pairs is seen when the two pairs are indistinguishable from each other, which leads to important consequences for quantum computing purposes, such as entanglement swapping,\cite{Pan98} teleportation,\cite{Bouwmeester97} and the entanglement of multiple qubits. A recent discussion of the more general case of $N$ photons reaching the beam splitter examines other situations not considered in this article.\cite{Masud23}
With quantum mechanics making irreversible inroads into technology, experiments such as this one help students appreciate the inner workings of quantum mechanics. They constitute an important step toward understanding the technological tools of the future.
\appendix
\section{Experimental Set Up and Alignment}
Setting up the experiment must be done in stages. A first stage involves aligning the crystal and collimators for spontaneous parametric down-conversion. We did this with the simplest of setups. The collimators are attached to multimode fibers. We made marks on the breadboard where the collimators should be placed, 1 m away from the crystal. We used an alignment laser and a homemade plumb bob to mimic the path of the down-converted photons and couple the light into the fibers. Once this was done, we placed 30-nm filters on the collimators, turned the pump laser and the detectors on, and looked for coincidences. After optimization, the singles should be above 10000 counts per second (e.g., for us it was $\sim60000\;{\rm s}^{-1}$), and the coincidences between 5\% and 10\% of the singles counts.
Once the first stage was completed, we disconnected the multimode fibers and attached the inputs of the fiber-coupled beamsplitter to the collimators. At this point it helped us to couple the light from a laser into a fiber connected to one of the outputs of the beamsplitter. We did this with a commercial low-power handheld fiber-coupled laser, which could also serve as the alignment laser. The purpose of this was to send the light back from the collimators toward the crystal. We used it to adjust the focus of the collimators to match the size of the pump laser on the BBO crystal. This mode-matching step is important in coupling the photons into the fibers.
After the previous stage was set, we observed singles counts of the order of 30,000 counts per second and about 800 coincidences per second. At this point we initiated the search for the dip in coincidences. Doing this by hand is difficult and tedious, mainly because the dip is of the order of 10 $\mu$m wide, so very small steps of the translation stage are needed. A motorized scan finds the dip much more easily and reliably; we tried both ways, and the motorized scan was significantly easier.
Once the dip was found, we switched to narrower bandwidth filters (10 nm), which increased the width of the dip. At this point the dip was not optimally deep. The next and final stage involved adding identical waveplates in front of both collimators. Because they delay the light as it travels through them, we had to find the new location of the dip, displaced by the slight difference in thickness of the two waveplates. Adjustment of the relative orientation of the waveplates at the dip location resulted in the best interference condition.
\section{Parts List}
Table~\ref{tab:parts} lists the main parts that are needed for this experiment.
Other parts can be obtained from Ref.~\onlinecite{URL}.
The central piece for this experiment is the beamsplitter. As mentioned earlier, we used a fiber-coupled beamsplitter, listed in the table. It can also be custom-ordered from a commercial vendor of fiber-optical components, with the specification that it use equal-length polarization-maintaining fibers and a 50:50 splitting ratio at a wavelength of 810 nm. It can also be done with a fiber beamsplitter,\cite{Qutools} although we have not tried it.
\begin{table}[h!]
\centering
\caption{Parts list with vendors, models and rounded prices. Vendor abbreviations: Newlight Photonics: New. Phot.; Optsigma: Opto.; Power Technology Inc.: Pow. Tech.; Pacific Laser: Pac. Las. SPADs with educational discount are sold by Alpha.\cite{Alpha} Electronic circuits are based on field-programmable gate arrays.\cite{URL,Beck}}
\begin{ruledtabular}
\begin{tabular}{l c p{4.8cm} r p{5cm}}
Name & Number & Vendor \& Model & Price (\$) & Comment \\
\hline
Bandpass Filter & 2 & New. Phot. NBF810-30 & 160 & 30 nm, 810-nm center.\\
Bandpass Filter & 2 & New. Phot. NBF810-10 & 160 & 10 nm, 810-nm center.\\
BBO crystal & 1 & New. Phot. NCBBO5300-405(I)-HA3 & 620 & Type-I down-conversion crystal, 5x5x3 mm\\
Beamsplitter & 1 & Qubitekk
& 3000 & Fiber coupled. \\
Collimator & 2 & Thorlabs CFC8-B& 300 & Adjustable focus \\
Collimator adapter & 2 & Thorlabs AD15F2 & 30 & For mirror mount.\\
Detectors (SPAD) & 2 & Excelitas SPCM-AQHR & 3000 & Dark counts $\le1000$ cps. \\
Electronics & 1 & Altera DE-115 or Red Dog & 300 & For recording coincidences.\\
Fiber Laser & 1 & Thorlabs HLS635 or OZ Optics FOSS & 700 & 1 mW, hand-held.\\
Filter mount & 2 & Thorlabs SM1L05 & 20 & Mount for filter, 1-inch ID. \\
Iris & 2 & Thorlabs SM1D12 & 60 & Iris mounted on collimator. \\
Mirror mount & 2 & Thorlabs KM100T & 70 & For mounting collimator.\\
Pump laser & 1 & Pow. Tech. GPD405-50 & 510 & 405-nm laser module, 50 mW.\\
Rotational mount & 1 & Opto. GTPC-SPH30 & 250 & Manual. \\
Rotational adapter & 1 & Opto. GTPC-ADP25.4-38& 100 & Adapter for 1-inch aperture. \\
Rotational mount & 1 & Pac. Las. RSC-103E & 1600 & Motorized, USB connected. \\
Translation stage & 1 & Thorlabs MT1 & 330 & Manual, with micrometer.\\
Translation stage & 1 & Pac. Las. LST-10L& 1700 & Motorized.\\
Waveplate & 2 & New. Phot. WPA03-H-810 & 330 & half-wave, 810 nm, zero order. \\
\end{tabular}
\end{ruledtabular}
\label{tab:parts}
\end{table}
\begin{acknowledgments}
This work was funded by National Science Foundation grant PHY-2011937. The authors have no conflict of interest.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Portfolio problems with two levels of decision-makers: Optimal portfolio selection with pricing decisions on transaction costs. Extended version and complete risk-profile analysis}
\textbf{Abstract}. {This paper presents novel bilevel leader-follower portfolio selection problems in which the financial intermediary becomes a decision-maker. This financial intermediary decides on the unit transaction costs for investing in some securities, maximizing its benefits, and the investor chooses his optimal portfolio, minimizing risk and ensuring a given expected return. Hence,} transaction costs become decision variables in the portfolio problem, {and two levels of decision-makers are incorporated}: the financial intermediary and the investor. These situations give rise to general Nonlinear Programming formulations at both levels of the decision process. We present different bilevel versions of the problem: financial intermediary-leader, investor-leader, and social welfare; in addition, we analyze their properties. Moreover, we develop Mixed Integer Linear Programming formulations for some of the proposed {problems} and effective algorithms for some others. Finally, {we report on some computational experiments performed on data taken from the Dow Jones Industrial Average, and analyze and compare the results obtained by the different models.}
\textbf{Key words:} Portfolio Optimization, bilevel programming, combinatorial optimization, {pricing problems, transaction costs,} Conditional Value at Risk measure (CVaR).
\section{Introduction}
The classical model in portfolio optimization was originally proposed by Markowitz in 1952 \cite{mar52}. This model has served as the starting point for the development of modern portfolio theory. Over time, portfolio optimization problems have become more realistic, incorporating real-life aspects that make the resulting portfolios more cost-effective than the alternatives that do not consider them \cite{cas11, kol14, Lynch11,man14,man15}. Transaction costs are one of these important real-life features to be included in portfolio optimization. These costs are those incurred by the investors when buying and selling assets on financial markets, charged by {the brokers, the financial institutions or the market makers} playing the role of intermediary. Transaction costs usually include banks' and brokers' commissions, fees, etc. These commissions or fees have a direct impact on the portfolio, especially for individual or small investors, since they determine the net returns, reducing them and also decreasing the budget available for future investments \cite{bau,bau10, Liu02}.\par
To the best of our knowledge, in the existing literature transaction costs are assumed to be given {\cite{dav90, kor98, lob07, mag76, man14,man15, mor95}}. They can be a fixed cost applied to each selected security in the portfolio, or a variable {cost} that depends on the amount invested in each security included in the portfolio (see e.g. \cite{bau, bau10,kel00,man14, man15, val14,woo13} and the references therein). This dependence can be proportional {to the investment or} given by a fixed cost that is only charged if the amount invested exceeds a given threshold, or by some other functional form (see e.g. \cite{bau10, kon05, le09, man14,man15} and the references therein). In any case, unit transaction costs are known and predetermined in the optimization process. Nevertheless, it is meaningful to analyze situations where transaction costs {are decision variables set by financial institutions} trying to maximize their profit as part of the decision process that leads to optimal portfolios for the investors.\par
The portfolio optimization problem considered in this paper is based on a single-period model of investment and incorporates a {transaction-cost setting phase}. We assume that there are two decision-makers involved in the situation: {on the one hand, the investor and, on the other hand, the broker specialist, market maker or financial institution ({which, for simplicity, we will call the broker-dealer from now on})}. At the beginning of a period, an investor allocates his capital among various assets and, during the investment period, each asset generates a random rate of return. Moreover, we consider that the broker-dealer can charge some {unit} transaction costs on the securities selected by the investor, trying to maximize its benefits {while anticipating the rational response of the investor}. This is a pricing phase in which the {broker-dealer} decides how much it is going to charge {the investor for} the traded securities. Considering {unit} transaction costs as decision variables of the model is a novel element in portfolio optimization and is one of the main contributions of this paper. Then, at the end of the period, the result for the investor is a variation of his capital (increased or decreased), which is measured by the weighted average of the individual rates of return minus commissions or fees. {In addition}, the result for the {broker-dealer} is the amount paid by the investor, which depends on the {costs} set on the traded securities included in the portfolio chosen by the investor. \par
Based on the structure of financial markets, we assume a hierarchical relationship between the parties involved in the portfolio problem; that is, we {define a natural problem} in which the {broker-dealer} sets the {unit transaction costs} first, trying to anticipate the rational response of the investor. This hierarchical analysis of the portfolio problem has not been addressed before and is another contribution of our paper. Once the {costs} are fixed, the investor chooses his optimal portfolio. For the sake of completeness, we also analyze the case in which the investor chooses his portfolio first and, after that, the {broker-dealer} sets the transaction costs. In order to model these hierarchical structures, we use a bilevel optimization approach (see e.g. \cite{bar13,col05, lab16, sin17}). Furthermore, we consider a social welfare {problem} {where} both {broker-dealer} and investor cooperate to maximize their returns. We assume in the different {problems} that all economic or financial information is common knowledge and that all the decision-makers in the problem have access to it.
The contributions of this paper can be summarized as follows: 1) it incorporates for the first time the above hierarchical approaches with two-levels of decision-makers on portfolio optimization problems (the {broker-dealer} sets {unit} transaction costs trying to maximize its benefits, whereas the investor minimizes risk while ensuring a given expected return \cite{ben03,ben14}); 2) it introduces transaction costs as decision variables controlled by the broker-dealer; and 3) it develops different bilevel programming formulations to obtain optimal solutions for the considered {problems}. This paper introduces new models for the bilevel portfolio optimization problem. As far as we know, bilevel models for the portfolio selection that {set unit transaction costs as decision variables of the problem} have not been considered in the literature {before}.\par
The rest of the paper is organized as follows. Section \ref{s:prelim} states the preliminaries and the notation used throughout the paper. In Section \ref{Sect:Bank-leader}, we present the {problem} in which the {broker-dealer} is the leader and we develop two different Mixed Integer Linear Programming (MILP) formulations to solve such a problem. Section \ref{Investor-leader} introduces the investor-leader {problem} and develops a Linear Programming (LP) formulation for it. For the more general case where additional constraints are required on the portfolio selection, we present a convergent iterative algorithm based on an ``ad hoc'' decomposition of the model. Next, Section \ref{Cooperative} addresses a social welfare {problem}. There, we propose a MILP formulation and an algorithm based on Benders decomposition for solving {it}. Section \ref{Computational} is devoted to reporting on the computational study of the different {problems and solution methods} discussed in the previous sections. Our results are based on data taken from the Dow Jones Industrial Average. Finally, Section \ref{sec:Conclusions} concludes the paper.
\section{Preliminaries\label{s:prelim}}
Let $N=\{1,...,n\}$ be the set of securities considered for an investment, $B\subseteq N$ the subset of securities on which the broker-dealer can charge {unit} transaction costs to the investor, {and $R \define N \setminus B$}. In most cases $B=N$, but there is no loss of generality in considering that $B$ may be a proper subset of $N$. \par
{First,} we assume that the {broker-dealer} can {price} security $j\in B$ from a discrete set, with cardinality $s_j$, of admissible {costs}, $\mathbb{P}_j=\{c_{j1},...,c_{js_j}\}$, and the {broker-dealer}'s goal is to maximize its benefit. Further, we consider {proportional transaction costs:} the {cost} charged by the {broker-dealer} per security is proportional to the amount invested in such security. {Hence, the {broker-dealer}'s decision variables are unit transaction costs (commissions, fees, ...) to be charged (proportionally) to the securities. }
Let $x=(x_j)_{j=1,...,n}$ denote the vector of decision variables, $x_j$ being the weight of security $j$ in the portfolio. We only assume that the invested capital cannot exceed the available budget {and non-negativity}, i.e.,
\[x: \sum_{j=1}^n x_j\leq 1, \ x_j\geq 0, \quad \text{ for } j=1,...,n.\]
This budget constraint is the minimum requirement on the structure of the portfolios. Nevertheless and without loss of generality, we could have assumed that some other linear constraints are imposed on the structure of the requested portfolio $x$. All the results in this paper can be easily extended to more general situations that consider polyhedral sets of constraints defining the admissible set of portfolios. \par
Let us denote by $p_j$ the {unit transaction cost} chosen by the {broker-dealer} to {charge on} security $j$, $j\in B$. Then, for a given (fixed) portfolio $x$, the problem faced by the {broker-dealer} can be modeled using the following set of binary decision variables: $a_{jk}=1$ if {cost} $c_{jk}$ is assigned to $p_j$, that is, if $p_j=c_{jk}$; $a_{jk}=0$ otherwise. Thus, to maximize his profit the {broker-dealer} solves the following problem:\par
\begin{align}
\label{for:bank_of} \tag{\textbf{PricP}} \max &\displaystyle \sum_{j\in B} p_jx_j\\
\label{for:bank_c1} \ \ \ \ \ \hbox{s.t.} \ & \displaystyle p_j= \sum_{k=1}^{s_j}c_{jk}a_{jk}, \quad j\in B,\\
\label{for:bank_c2}&\displaystyle \sum_{k=1}^{s_j}a_{jk}=1, \hspace*{1cm} j \in B,\\
\label{for:bank_fr}&\displaystyle a_{jk}\in \{0,1\}, \hspace*{1cm} j\in B, k=1,...,s_j.
\end{align}
If no further constraints are imposed on {costs} the above is a valid formulation. However, in general, we will assume without loss of generality that the set of {costs} for the {broker-dealer} can be restricted to belong to some polyhedron $\mathbb{P}$, allowing $\mathbb{P}=\mathbb{R}^{|B|}_+$. This can be easily included in the above formulation with the following constraint:
\begin{equation}
\label{for:bank_frP} p\in \mathbb{P}.
\end{equation}
We observe that, if $x$ is known and constraint (\ref{for:bank_frP}) is not included, the above problem is easy to solve (see Proposition \ref{pro:bank-follow}): the {broker-dealer} will set the {transaction costs} to the maximum ones among those available for each security. Nevertheless, if the portfolio is unknown (to be decided by the investor) or additional constraints, such as regulation constraints, are imposed on the model, the problem becomes more difficult to solve, since there exists no explicit expression for an optimal solution.
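For illustration purposes only, the following Python sketch (written with the PuLP modeling library on toy data that we make up here) solves \ref{for:bank_of} for a fixed portfolio $x$: without constraint (\ref{for:bank_frP}) the optimum simply selects the largest admissible cost for each security, whereas adding a simple cap on the total unit cost, playing the role of $\mathbb{P}$, makes the choice non-trivial.
\begin{verbatim}
import pulp

x = {0: 0.5, 1: 0.3, 2: 0.2}                    # fixed portfolio weights (toy data)
costs = {0: [0.001, 0.002, 0.004],              # admissible unit costs P_j for each security
         1: [0.001, 0.003],
         2: [0.002, 0.005]}

def solve_pricp(cap=None):
    m = pulp.LpProblem("PricP", pulp.LpMaximize)
    a = {(j, k): pulp.LpVariable(f"a_{j}_{k}", cat="Binary")
         for j in costs for k in range(len(costs[j]))}
    # p_j as in constraint (for:bank_c1): the cost level selected for security j.
    p = {j: pulp.lpSum(costs[j][k] * a[j, k] for k in range(len(costs[j]))) for j in costs}
    m += pulp.lpSum(p[j] * x[j] for j in costs)                       # broker-dealer revenue
    for j in costs:                                                   # one cost level per security
        m += pulp.lpSum(a[j, k] for k in range(len(costs[j]))) == 1
    if cap is not None:                                               # toy stand-in for p in P
        m += pulp.lpSum(p[j] for j in costs) <= cap
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return {j: pulp.value(p[j]) for j in costs}, pulp.value(m.objective)

print(solve_pricp())            # unconstrained: the maximum admissible cost is chosen everywhere
print(solve_pricp(cap=0.008))   # a cap on the total unit cost makes the choice non-trivial
\end{verbatim}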
{Moreover}, we suppose that the investor wants to reduce the risk of his investment while ensuring a given expected return. At this point, several risk measures could be considered, among them the variance of returns, the Mean Absolute Deviation (MAD), the Conditional Value at Risk (CVaR), Gini's Mean Difference, etcetera. (Here, we refer the reader to \cite{man03} for further details on the topic.) In this paper, we have focused on a portfolio optimization problem based on the CVaR risk measure. This risk measure aims to avoid large losses: for a specific probability level $\alpha$, the CVaR measures the conditional expectation of the smallest returns with cumulative probability $\alpha$, that is, the average return over the worst realizations of the given (quantile) size \cite{man03, prt17, roc00}. Therefore, we assume that the investor's goals are to maximize the CVaR of his portfolio and, at the same time, to ensure that a minimum expected reward $\mu_0$ is obtained.
{There exist in the literature different ways of accounting for the transaction costs into the portfolio model \cite{man15, man15_2}. For instance, including them in the objective function \cite{ang12, oli18, woo13}, subtracting them from the expected return \cite{kre11,man05}, reducing the capital available for the investment \cite{woo13}, etcetera. We assume in our approach that transaction costs are directly removed from the expected return.\par}
In order to model the above situation, we consider that the rate of return of each security $j\in N$ is represented by a random variable $R_j$ with a given mean $\mu_j=E(R_j)$. Each portfolio $x$ defines a random variable $R_x=\sum_{j=1}^nR_jx_j$ that represents the portfolio rate of return (its expected value can be computed as $\mu(x)=\sum_{j=1}^n\mu_jx_j$). We consider $T$ scenarios, each of them with probability $\pi_t$, $t = 1,..., T$, and assume that for each random variable $R_j$ its realization, $r_{jt}$, under the scenario $t$ is known. Thus, once the {broker-dealer} has set {the transaction costs}, $p$, the realization of the portfolio rate of return $R_x$ under scenario $t$ is given as $y_t=\sum_{j=1}^nr_{jt}x_j-\sum_{i\in B}p_ix_i$. \par
With this information, we assume that the investor wants to maximize the CVaR$_\alpha$, namely the conditional expectation of the smallest returns with cumulative probability $\alpha$, while ensuring a minimum expected return $\mu_0$. Thus, the portfolio optimization {problem} that the investor wants to solve can be formulated as:
{
\begin{align}
\label{for:CVaR_of} \tag{\textbf{CVaRP}}\max & \displaystyle \ \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\label{for:CVaR_cy} \ \ \ \ \ \hbox{s.t.} \ &y_t=\sum_{j=1}^nr_{jt}x_j-\sum_{i\in B}p_ix_i, \quad t=1,...,T, \\
\label{for:CVaR_creturn}&\sum_{t=1}^T\pi_ty_t\ge \mu_0,\\
\label{for:CVaR_cd} &\displaystyle d_t\ge \eta - y_t, \hspace*{0.9cm} t=1,...,T, \\
\label{for:CVaR_frd}& d_{t}\ge 0, \hspace*{1.8cm} t=1,...,T,\\
\label{for:CVaR_cx}&\displaystyle \sum_{j=1}^n x_{j}\leq 1, \\
\label{for:CVaR_frx}& x_{j}\ge 0, \hspace*{1.7cm} j=1,...,n,
\end{align}
{Observe that $\eta$ is a continuous variable that models the $\alpha$ \emph{Value at Risk}, $VaR_\alpha$, namely the value of the minimum threshold for which the probability of the scenarios with a return less than or equal to $\eta$ is at least $\alpha$. }
Next, (\ref{for:CVaR_cy}) and (\ref{for:CVaR_creturn}) are the scenario constraints. Constraint (\ref{for:CVaR_cy}) gives the portfolio return in each scenario. Note that the return in each scenario is the gross rate of return, $\sum_{j=1}^nr_{jt}x_j$, minus the transaction costs $\sum_{i\in B}p_ix_i$. Constraint (\ref{for:CVaR_creturn}) ensures an expected return of at least $\mu_0$. The objective function and the set of constraints (\ref{for:CVaR_cd}) and (\ref{for:CVaR_frd}) model the CVaR (see Mansini et al. \cite{man03} for details). Finally, the sets of constraints (\ref{for:CVaR_cx}) and (\ref{for:CVaR_frx}) force $x$ to define a portfolio.}
We also note that, by choosing different values of the parameters $\alpha$ and $\mu_0$ in the formulation above, different types of investors (i.e., different attitudes towards risk) can be {incorporated in the model}.
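To make the investor's problem concrete, the following Python sketch solves \ref{for:CVaR_of} with the PuLP modeling library for fixed transaction costs on small synthetic scenario data; the sizes, return distribution and cost values are placeholders of ours and are unrelated to the Dow Jones data used in Section \ref{Computational}.
\begin{verbatim}
import numpy as np
import pulp

rng = np.random.default_rng(0)
n, T, alpha, mu0 = 5, 200, 0.05, 0.002          # toy sizes and targets (assumed values)
r = rng.normal(0.01, 0.03, size=(T, n))         # synthetic scenario returns r_{jt}
pi = np.full(T, 1.0 / T)                        # equiprobable scenarios
p = np.full(n, 0.002)                           # fixed unit transaction costs (here B = N)

m = pulp.LpProblem("CVaRP", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{j}", lowBound=0) for j in range(n)]   # portfolio weights
d = [pulp.LpVariable(f"d_{t}", lowBound=0) for t in range(T)]   # shortfalls below eta
y = [pulp.LpVariable(f"y_{t}") for t in range(T)]               # net scenario returns
eta = pulp.LpVariable("eta")                                    # Value at Risk variable

m += eta - pulp.lpSum(float(pi[t] / alpha) * d[t] for t in range(T))         # CVaR objective
for t in range(T):
    m += y[t] == pulp.lpSum(float(r[t, j] - p[j]) * x[j] for j in range(n))  # net return, scenario t
    m += d[t] >= eta - y[t]
m += pulp.lpSum(float(pi[t]) * y[t] for t in range(T)) >= mu0                # expected-return target
m += pulp.lpSum(x) <= 1                                                      # budget constraint

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("status :", pulp.LpStatus[m.status])
print("CVaR   :", pulp.value(m.objective))
print("weights:", [round(v.value(), 3) for v in x])
\end{verbatim}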
\section{Bilevel {broker-dealer}-leader Investor-follower Portfolio Problem (BLIFP)} \label{Sect:Bank-leader}
We start by analyzing a hierarchical structure in the financial markets in which the {broker-dealer} sets the transaction costs first, and after that, the investor chooses his portfolio. Observe that in this situation the problem faced by the investor reduces to a portfolio selection under the considered criterion, which in this case is to hedge against risk by maximizing the average of his smallest returns with cumulative probability $\alpha$ (CVaR$_\alpha$). Therefore, we study this situation from the point of view of both the financial intermediary and the investor simultaneously, which is a novel perspective.\par
We model the situation as a bilevel leader-follower problem in which the {broker-dealer} has to fix the {transaction costs}, from the polyhedral set $\mathbb{P}\subseteq \mathbb{R}^{|B|}$, maximizing his benefits by assuming that, after his decision is made, the investor will choose his optimal portfolio. \par
Using the bilevel optimization framework, the \textbf{BLIFP} can be modeled as follows:
{
\begin{align*}
\label{for:BLIFP} \tag{\textbf{BLIFP0}} \max &\displaystyle \sum_{j\in B} p_jx_j\\
\nonumber \ \ \ \ \hbox{s.t. }
&(\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}), \tag{\text{Broker-dealer Constraints}}\\
&x\in arg \max \quad \displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t,\\
&\ \ \ \ \ \ \ \ \nonumber \ \ \ \ \hbox{s.t.} \quad (\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}) \tag{\text{Investor Constraints}}.
\end{align*}
}
Our goal is to solve the above {bilevel programming model} to provide answers to the new portfolio optimization {problem}. We propose two different MILP formulations with the aim of making a computational comparison to check which one is more effective.
\par
\subsection{Formulation BLIFP1}
The main difficulty in handling \ref{for:BLIFP} is that some of its decision variables are constrained to be optimal solutions of a nested optimization problem. In order to overcome that issue, we observe that the follower problem in \ref{for:BLIFP} is linear in $x$ when $p$ is given. This allows us to {easily} compute its exact dual as:
\begin{align}
\label{for:dual1}\tag{\textbf{Dual1}}\min \; & \displaystyle \beta +\mu_0\mu\\
\label{for:dual1_c1} \ \ \ \ \ \hbox{s.t.} \ &\displaystyle\beta-\sum_{t=1}^T(r_{jt}-p_j)\delta_t\ge 0, \quad j\in B,\\
\label{for:dual1_c2}&\displaystyle\beta-\sum_{t=1}^Tr_{jt}\delta_t\ge 0, \hspace*{1.5cm} j\in R,\\
\label{for:dual1_c3}&\displaystyle -\sum_{t=1}^T \gamma_{t}=1, \\
\label{for:dual1_c4}&\displaystyle \gamma_t\ge -\dfrac{\pi_t}{\alpha}, \hspace*{2.7cm} t=1,...,T, \\
\label{for:dual1_c5}&\displaystyle\gamma_t+\delta_t+\pi_t\mu =0, \hspace*{1.35cm} t=1,...,T, \\
\label{for:dual1_fr1}& \gamma_{t}\le 0, \hspace*{3.1cm} t=1,...,T,\\
\label{for:dual1_fr2}&\mu \le 0, \beta \geq 0.
\end{align}
{We note in passing that the variables $\delta_t$, $\mu$, $\gamma_t$ and $\beta$ are the dual variables associated with constraints (\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}) and (\ref{for:CVaR_cx}), respectively. Therefore, they can be interpreted as multipliers explaining the marginal variation of the objective function value as a function of the corresponding constraints' right-hand sides. Nevertheless, we do not go into the economic insights of this dual model, since our use of it is instrumental to obtaining a single-level reformulation of the hierarchical model.}
Then, \ref{for:BLIFP} can be reformulated, applying the Strong Duality Theorem, including the constraints of the primal and dual problem together with the equation that matches the objective values of the follower primal and dual problems. Thus, \ref{for:BLIFP} is equivalent to solving this new mathematical programming model:
\begin{align}
\nonumber \max &\displaystyle \sum_{j\in B} p_jx_j\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ &(\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}), \tag{\text{Broker-dealer Constraints}}\\
\label{for:strongduality_1}& \displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t=\beta +\mu_0\mu,\\
\nonumber & {(\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}), \tag{\text{Investor Constraints}}}\\
\nonumber & (\ref{for:dual1_c1}), (\ref{for:dual1_c2}), (\ref{for:dual1_c3}), (\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}), (\ref{for:dual1_fr2}). \tag{\text{Dual Constraints}}
\end{align}
We can observe that in the above formulation there are some bilinear terms, $p_jx_j$ and $p_j\delta_t$, which appear in the leader objective function and in constraints (\ref{for:CVaR_cy}) and (\ref{for:dual1_c1}). In order to solve the problem using off-the-shelf solvers, they can be linearized \`a la McCormick {\cite{mcc76}}, giving rise to an exact MILP formulation for the bilevel problem.\par
Indeed, since $p_j= \sum_{k=1}^{s_j}c_{jk}a_{jk},\; \forall j\in B$, we could substitute the terms $p_jx_j=\sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}$ adding variables $\hat{a}_{jk},\; \forall j\in B, k=1,...,s_j,$ and the following set of constraints:
\begin{equation}\label{for:linear_a_NO}
\begin{array}{lll}
&\displaystyle \hat{a}_{jk}\leq x_j, & j\in B, k=1,...,s_j,\\
&\displaystyle \hat{a}_{jk}\leq a_{jk}, & j\in B, k=1,...,s_j,\\
&\displaystyle \hat{a}_{jk}\geq x_j-(1-a_{jk}), & j\in B, k=1,...,s_j,\\
&\displaystyle \hat{a}_{jk}\geq 0, & j\in B, k=1,...,s_j.
\end{array}
\end{equation}
Furthermore, this linearization can be simplified. Observe that it is sufficient to include in (\ref{for:BLIFP}) variables $\hat a_{jk}$ and constraints
\begin{equation}\label{for:linear_a}
\begin{array}{lll}
&\displaystyle \hat{a}_{jk}\leq a_{jk},& j\in B, k=1,...,s_j,\\
&\displaystyle \hat{a}_{jk}\geq 0, & j\in B, k=1,...,s_j,\\
\end{array}
\end{equation}
\noindent from (\ref{for:linear_a_NO}) and to substitute the variables $x_j=\sum_{k=1}^{s_j}\hat{a}_{jk}, \forall j\in B$. We obtain in this manner an equivalent, {smaller} formulation with the bilinear terms $a_{jk}x_j$ linearized for all $j\in B, k=1,...,s_j$, but with fewer constraints and decision variables.
Following a similar argument we can linearize the products $p_j\delta_t=\sum_{k=1}^{s_j}c_{jk}a_{jk}\delta_t$. To do that, take $M$ a sufficiently large positive number and define the new variables $\hat{\delta}_{jkt}=a_{jk}\delta_t,$ $\forall j \in B, k=1,...,s_j, t=1,...,T$. This set of variables together with the following family of constraints linearize all the bilinear terms:
\begin{equation}\label{for:linear_delta+}
\begin{array}{lll}
&\displaystyle \hat{\delta}_{jkt}\leq \delta_t, & j\in B, k=1,...,s_j, t=1,...,T,\\
&\displaystyle \hat{\delta}_{jkt}\leq Ma_{jk}, & j\in B, k=1,...,s_j, t=1,...,T,\\
&\displaystyle \hat{\delta}_{jkt}\geq \delta_t-(1-a_{jk})M, & j\in B, k=1,...,s_j, t=1,...,T,\\
&\displaystyle \hat{\delta}_{jkt}\geq 0, & j\in B, k=1,...,s_j, t=1,...,T.\\
\end{array}
\end{equation}
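Both linearizations are standard. The toy sketch below (PuLP, all data invented by us) encodes one product of each type, $a\,x$ with $x\in[0,1]$ through constraints of the form (\ref{for:linear_a_NO}) and $a\,\delta$ with $\delta\ge 0$ through constraints of the form (\ref{for:linear_delta+}), and verifies that the auxiliary variables take the product values at the optimum.
\begin{verbatim}
import pulp

M = 10.0                                     # assumed upper bound on delta
m = pulp.LpProblem("linearization_demo", pulp.LpMaximize)
a = pulp.LpVariable("a", cat="Binary")                      # plays the role of a_{jk}
x = pulp.LpVariable("x", lowBound=0, upBound=1)             # portfolio weight in [0, 1]
delta = pulp.LpVariable("delta", lowBound=0, upBound=M)     # dual variable, bounded by M
a_x = pulp.LpVariable("a_x", lowBound=0)                    # stands for a * x
a_delta = pulp.LpVariable("a_delta", lowBound=0)            # stands for a * delta

m += a_x - 0.1 * a_delta                     # arbitrary objective, just to exercise the constraints
m += a_x <= x                                # constraints (linear_a_NO): a_x = a * x
m += a_x <= a
m += a_x >= x - (1 - a)
m += a_delta <= delta                        # constraints (linear_delta+): a_delta = a * delta
m += a_delta <= M * a
m += a_delta >= delta - M * (1 - a)
m += x <= 0.7                                # toy side constraints making the optimum nontrivial
m += delta >= 2.0

m.solve(pulp.PULP_CBC_CMD(msg=False))
for v in (a, x, delta, a_x, a_delta):
    print(v.name, "=", v.value())            # expect a=1, x=0.7, delta=2, a_x=0.7, a_delta=2
\end{verbatim}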
Combining the above elements, all together, we obtain a valid MILP formulation for \textbf{BLIFP}:
{
\begin{align}
\label{for:BLIFP1} \tag{\textbf{BLIFP1}} \max &\displaystyle \sum_{j\in B} \sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}\\
\nonumber \tag{\ref{for:bank_c2}}\ \ \ \ \ \hbox{s.t.} \ & \displaystyle \sum_{k=1}^{s_j}a_{jk}=1, \hspace*{2.05cm} j \in B,\\
\nonumber \tag{\ref{for:bank_fr}}&\displaystyle a_{jk}\in \{0,1\}, \hspace*{2cm} j\in B, k=1,...,s_j,\\
\nonumber \tag{\ref{for:strongduality_1}}& \displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t=\beta +\mu_0\mu\\
\label{for:CVaR_cy_linear}&y_t=\sum_{j\in B}r_{jt}\left( \sum_{k=1}^{s_j} \hat a_{jk}\right) +\sum_{j \in R}r_{jt}x_j-\sum_{j \in B}\sum_{k=1}^{s_j}c_{jk}\hat a_{jk}, \hspace*{0.3cm} t=1,...,T, \\
\nonumber \tag{\ref{for:CVaR_creturn}}&\sum_{t=1}^T\pi_ty_t\ge \mu_0,\\
\nonumber \tag{\ref{for:CVaR_cd}}&\displaystyle d_t\ge \eta - y_t, \hspace*{2.2cm} t=1,...,T, \\
\nonumber \tag{\ref{for:CVaR_frd}}& d_{t}\ge 0, \hspace*{3.5cm} t=1,...,T,\\
\label{for:CVaR_cx_a}&\displaystyle \sum_{j \in B}\sum_{k=1}^{s_j}\hat a_{jk}+\sum_{j \in R}x_j \leq 1, \\
\nonumber& \tag{\ref{for:CVaR_frx}}x_{j}\ge 0, \hspace*{3.45cm} j \in R,\\
\nonumber & \begin{array}{l}
\displaystyle \hat{a}_{jk}\leq a_{jk}, \hspace*{2.75cm} j\in B, k=1,...,s_j,\\
\displaystyle \hat{a}_{jk}\geq 0, \hspace*{3.05cm} j\in B, k=1,...,s_j,
\end{array} \tag{\ref{for:linear_a}}\\
\label{for:dual1_c1_linear} &\displaystyle\beta-\sum_{t=1}^T\left(r_{jt}\delta_t-\sum_{k=1}^{s_j}c_{jk}\hat{\delta}_{jkt} \right)\ge 0, \hspace*{0.3cm} j\in B,\\
\nonumber \tag{\ref{for:dual1_c2}}&\displaystyle\beta-\sum_{t=1}^Tr_{jt}\delta_t\ge 0, \hspace*{1.7cm} j\in R,\\
\nonumber \tag{\ref{for:dual1_c3}}&\displaystyle -\sum_{t=1}^T \gamma_{t}=1, \\
\nonumber \tag{\ref{for:dual1_c4}}&\displaystyle \gamma_t\ge -\dfrac{\pi_t}{\alpha}, \hspace*{2.9cm} t=1,...,T, \\
\nonumber \tag{\ref{for:dual1_c5}}&\displaystyle\gamma_t+\delta_t+\pi_t\mu =0, \hspace*{1.6cm} t=1,...,T, \\
\nonumber \tag{\ref{for:dual1_fr1}}& \gamma_{t}\le 0, \hspace*{3.4cm} t=1,...,T,\\
\nonumber \tag{\ref{for:dual1_fr2}}&\mu \le 0, \beta \geq 0,\\
\nonumber &\begin{array}{l}
\displaystyle \hat{\delta}_{jkt}\leq \delta_t, \hspace*{2.95cm} j\in B, k=1,...,s_j, t=1,...,T,\\
\displaystyle \hat{\delta}_{jkt}\leq Ma_{jk}, \hspace*{2.25cm} j\in B, k=1,...,s_j, t=1,...,T,\\
\displaystyle \hat{\delta}_{jkt}\geq \delta_t-(1-a_{jk})M, \hspace*{0.5cm} j\in B, k=1,...,s_j, t=1,...,T,\\
\displaystyle \hat{\delta}_{jkt}\geq 0, \hspace*{2.9cm} j\in B, k=1,...,s_j, t=1,...,T.
\end{array} \tag{\ref{for:linear_delta+}}
\end{align}
}
The above long formulation can be easily understood once the different sets of constraints are grouped into meaningful blocks. We observe that (\ref{for:bank_c2}), (\ref{for:bank_fr}) and (\ref{for:bank_frP}) are the constraints that define the feasible domain of the {broker-dealer} problem. Constraint (\ref{for:strongduality_1}) imposes the strong duality condition between the primal and dual formulations of the follower problem. Next, {(\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}) and (\ref{for:linear_a}) are the constraints that correctly define the linearized version of the investor subproblem}. Finally, the constraints that come from the linearized version of the dual of the follower problem are (\ref{for:dual1_c1_linear}), (\ref{for:dual1_c2}), (\ref{for:dual1_c3}), (\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}), (\ref{for:dual1_fr2}) and (\ref{for:linear_delta+}).
Using these blocks of constraints, \ref{for:BLIFP1} can be written in the following compact form.
\begin{align}
\tag{\textbf{BLIFP1}} \max &\displaystyle \sum_{j\in B} \sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ & (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}), \tag{\text{Linear {broker-dealer} Constraints}}\\
\nonumber & (\ref{for:strongduality_1}), \tag{\text{Strong Duality Constraint}}\\
\nonumber & { (\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}), (\ref{for:linear_a}), \tag{\text{Linear investor Constraints 1}}}\\
&\begin{array}{l}
(\ref{for:dual1_c1_linear}), (\ref{for:dual1_c2}), (\ref{for:dual1_c3}),
(\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}),\\
(\ref{for:dual1_fr2}),(\ref{for:linear_delta+}).
\end{array} \tag{\text{Linear Dual Constraints}}
\end{align}
This valid formulation \ref{for:BLIFP1} requires setting a valid value for the big-$M$ constant. Setting an appropriate value is important to improve the performance of the resulting MILP. In the following, we prove the existence of a valid upper bound for such a value.
\begin{proposition} \label{pro:boundM1}
Let ${\cal B}(p)$ be the set of all full-rank submatrices of the matrix representing the constraints of problem \ref{for:dual1} in standard form, where $p$ is a fixed set of {cost values}, and let ${\cal B}^S(p)$ be the set of all matrices that result from ${\cal B}(p)$ by replacing, one at a time, their columns by the right-hand side of that problem. Moreover, let $\Delta(p):=\min \{ |det(B)|: B\in {\cal B}(p)\}$ and $\Delta^S(p):=\max \{|det(B)|: B\in {\cal B}^S(p)\}$.
Then $\displaystyle UB_{\delta}:=\max_{p} \Delta^S(p)/ \Delta(p)$ is a valid upper bound for the big-$M$ constant in \ref{for:BLIFP1}.
\end{proposition}
\begin{proof}
It is easy to observe that, for each fixed set of {costs} $p$, it suffices to take $M\geq \max_{t=1,...,T}\delta_t$. Therefore the proof reduces to bounding the terms $\delta_t$.\par
From constraint (\ref{for:dual1_c5}) in formulation \ref{for:dual1} we know that $\delta_t=-\gamma_t-\pi_t\mu, \quad\forall t=1,...,T,$ which implies that $\delta_t\ge 0$ for all $t=1,...,T$, since $\mu\leq 0$, $\gamma_t\leq 0$, and $\pi_t \geq 0$ for all $t=1,...,T$.
We observe that $\beta+\mu_0 \mu$ is bounded for any $\mu_0$ and for any set of {costs} $p$ (recall that this objective function value gives a CVaR). If we denote $r_{max}=\displaystyle \max_{j=1,...,n, t=1,...,T}r_{jt}$, $r_{min}=\displaystyle \min_{j=1,...,n, t=1,...,T}r_{jt}$ {and $c_{max}=\displaystyle \max_{j=1,...,n,\ k=1,...,s_j}c_{jk}$, then $r_{min}-c_{max}\le \beta+\mu_0 \mu\le r_{max}$}. This implies that the optimal value of \ref{for:dual1} is attained at an extreme point and therefore no rays have to be considered.
Next, the extreme points of the feasible region are solutions of full-rank systems of equations taken from the constraint matrix of \ref{for:dual1} in standard form. Therefore, applying Cramer's rule we obtain that, at the extreme points, the value of any variable $\delta_t$, $t=1,\ldots,T$, satisfies $\delta_t\le \Delta^S(p)/\Delta(p)$. Finally, letting $p$ vary over the finite set of possible {costs} we obtain that $\displaystyle \delta_t\le \max_{p} \Delta^S(p)/\Delta(p)$.
\end{proof}
This bound is only of theoretical interest; in our computational experiments, we have set the big-$M$ value empirically so as to obtain a tighter bound.
\subsection{Formulation BLIFP2\label{ss:BLIFP2}}
In this section, we derive an alternative formulation for \textbf{BLIFP} based on representing the {cost terms} as $p_jx_j=\sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}$ in the follower problem before its dual is obtained. This device produces an alternative single-level model that we analyze in the following.
Let us consider the CVaR problem in \ref{for:BLIFP}, and let us linearize the products of variables $p_ix_i$, as in the previous formulation. This way we obtain:
{
\begin{align*}
\max \ &\displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ \tag{\ref{for:CVaR_cy_linear}}&y_t=\sum_{j\in B}r_{jt}\left( \sum_{k=1}^{s_j} \hat a_{jk}\right) +\sum_{j \in R}r_{jt}x_j-\sum_{j \in B}\sum_{k=1}^{s_j}c_{jk}\hat a_{jk}, \hspace*{0.3cm} t=1,...,T, \\
\nonumber&\sum_{t=1}^T\pi_ty_t\ge \mu_0, \tag{\ref{for:CVaR_creturn}} \\
\nonumber & \displaystyle d_t\ge \eta - y_t, \hspace*{0.75cm} t=1,...,T, \tag{\ref{for:CVaR_cd}} \\
\nonumber& d_{t}\ge 0, \hspace*{1.7cm} t=1,...,T, \tag{\ref{for:CVaR_frd}} \\
\tag{\ref{for:CVaR_cx_a}}&\displaystyle \sum_{j \in B}\sum_{k=1}^{s_j}\hat a_{jk}+\sum_{j \in R}x_j \leq 1, \\
\nonumber& x_{j}\ge 0, \hspace*{1.6cm} j=1,...,n,\tag{\ref{for:CVaR_frx}} \\
\nonumber& \begin{array}{l}
\displaystyle \hat{a}_{jk}\leq a_{jk}, \hspace*{0.95cm} j\in B, k=1,...,s_j,\\
\displaystyle \hat{a}_{jk}\geq 0, \hspace*{1.25cm} j\in B, k=1,...,s_j.
\end{array} \tag{\ref{for:linear_a}}
\end{align*}
}
Once again, to ease presentation, we write the above formulation in the following compact format.
\begin{align*}
\max \ &\displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ & { (\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}), (\ref{for:linear_a}). \tag{\text{Linear investor Constraints 1}}}\\
\end{align*}
Its dual problem is:
\begin{align}
\label{for:dual2_of} \min \ & \displaystyle \beta +\mu_0 \mu +\sum_{j\in B}\sum_{k=1}^{s_j}a_{jk}\sigma_{jk} \tag{\textbf{Dual2}} \\
\nonumber \ \ \ \ \ \hbox{s.t. } & (\ref{for:dual1_c2}), (\ref{for:dual1_c3}), (\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}), (\ref{for:dual1_fr2}), \\
\label{for:dual2_c1} & \displaystyle \beta-\sum_{t=1}^Tr_{jt}\delta_t+\sum_{t=1}^Tc_{jk}\delta_t+\sigma_{jk}\ge 0, \quad j\in B, k=1,...,s_j,\\
\label{for:dual2_fr1} & \sigma_{jk}\ge 0, \quad j\in B, k=1,...,s_j.
\end{align}
Therefore, we can replace in \ref{for:BLIFP} the nested optimization problem on the CVaR by including the group of constraints in (Linear investor Constraints 1) and (\ref{for:dual1_c2})-(\ref{for:dual1_fr2}), (\ref{for:dual2_c1}), (\ref{for:dual2_fr1}), which will be referred to from now on as (Dual2 Constraints), together with the strong duality condition given by $$\displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t=\beta +\mu_0 \mu +\sum_{j\in B}\sum_{k=1}^{s_j}a_{jk}\sigma_{jk}.$$
The combination of all these elements results in the following alternative valid formulation for \ref{for:BLIFP}.
\begin{align}
\nonumber\max &\displaystyle \sum_{j\in B} \sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ &(\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}}\\
\label{for:strongduality_2}& \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t=\beta +\mu_0 \mu +\sum_{j\in B}\sum_{k=1}^{s_j}a_{jk}\sigma_{jk}\\
\nonumber & { (\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}), (\ref{for:linear_a}), \tag{\text{Linear investor Constraints 1}}}\\
\nonumber & \begin{tabular}{l}
(\ref{for:dual1_c2}), (\ref{for:dual1_c3}), (\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}), (\ref{for:dual1_fr2}),\\
(\ref{for:dual2_c1}), (\ref{for:dual2_fr1}).
\end{tabular} \tag{\text{Dual2 Constraints}}
\end{align}
The formulation above still contains bilinear terms, namely $a_{jk}\sigma_{jk}$, in constraint (\ref{for:strongduality_2}). Therefore, we linearize them as in \ref{for:BLIFP1} and we obtain another valid MILP formulation for \textbf{BLIFP}.\par
\begin{align}
\label{for:BLIFP2} \tag{\textbf{BLIFP2}} \max &\displaystyle \sum_{j\in B} \sum_{k=1}^{s_j}c_{jk}\hat{a}_{jk}\\
\nonumber \ \ \ \ \ \hbox{s.t.} \ &(\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}) \tag{\text{Linear {broker-dealer} Constraints}}\\
\label{for:strongduality_3}& \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t=\beta +\mu_0 \mu +\sum_{j\in B}\sum_{k=1}^{s_j}\hat \sigma_{jk},\\
\nonumber & { (\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}), (\ref{for:linear_a}), \tag{\text{Linear investor Constraints 1}}}\\
\nonumber & \begin{tabular}{l}
(\ref{for:dual1_c2}), (\ref{for:dual1_c3}), (\ref{for:dual1_c4}), (\ref{for:dual1_c5}), (\ref{for:dual1_fr1}), (\ref{for:dual1_fr2}),\\
(\ref{for:dual2_c1}), (\ref{for:dual2_fr1}).
\end{tabular} \tag{\text{Dual2 Constraints}}\\
\label{for:bigM}&\begin{array}{l}
\displaystyle \hat{\sigma}_{jk}\leq \sigma_{jk}, \hspace*{2.8cm} j\in B, k=1,...,s_j,\\
\displaystyle \hat{\sigma}_{jk} \leq Ma_{jk},\hspace*{2.4cm} j\in B, k=1,...,s_j,\\
\displaystyle \hat{\sigma}_{jk}\geq \sigma_{jk}-M(1-a_{jk}), \quad j\in B, k=1,...,s_j\\
\displaystyle \hat{\sigma}_{jk} \geq 0, \hspace*{3.1cm} j\in B, k=1,...,s_j,
\end{array}
\end{align}
Again, this valid formulation \ref{for:BLIFP2} requires a valid value for the big-$M$ constant in (\ref{for:bigM}). In the following, we prove that a valid upper bound for such a value does exist.
\begin{proposition} \label{pro:boundM2}
Let $UB_{\delta}$ be the bound obtained in Proposition \ref{pro:boundM1} and $\displaystyle LB_{\beta}=\min_{p} \Delta^S(p)/ \Delta(p)$. Then $\displaystyle \max\{T(r_{max}-c_{min})UB_{\delta} - LB_{\beta}, 0\}$ is a valid upper bound for $M$ in \ref{for:BLIFP2}.
\end{proposition}
\begin{proof}
It is easy to observe that it suffices to take $M\geq\displaystyle \max_{j\in B, k=1,\ldots,s_j}\sigma_{jk}$.
Since $\sigma_{jk}$ is being minimized (it appears in the objective of \ref{for:dual2_of}) and it must satisfy constraints (\ref{for:dual2_c1}) and (\ref{for:dual2_fr1}), there always exists, $\forall j\in B, k=1,...,s_j$, an optimal solution where these variables take the values:
\begin{equation*}
\sigma_{jk} = \left\{
\begin{array}{ll}
0, & \mathrm{if\ } \beta +\sum_{t=1}^T (c_{jk}-r_{jt})\delta_t \ge 0 \\
-\beta +\sum_{t=1}^T (r_{jt}-c_{jk})\delta_t, & \mathrm{otherwise.}
\end{array}
\right.
\end{equation*}
Because $\beta\geq 0$ and $\delta_t\geq 0$, whenever $\beta +\sum_{t=1}^T (c_{jk}-r_{jt})\delta_t$ is negative we have $\sum_{t=1}^T (r_{jt}-c_{jk})\delta_t> \beta\geq 0$.
Consequently, the maximum value of this variable is $\max \{0, T(r_{max}-c_{min})UB_{\delta}-LB_{\beta}\}$, where $UB_{\delta}$ and $LB_{\beta}$ are obtained by an argument similar to that of Proposition \ref{pro:boundM1}.
\end{proof}
{A first comparison of the above two models, namely \ref{for:BLIFP1} and \ref{for:BLIFP2}, sheds some light on their relative difficulty. For the sake of simplicity, we denote by $d=\sum_{j\in B} s_j$ the number of different admissible costs in the models. Table \ref{t:comparamodelos} shows the number of binary and continuous variables and constraints in both models.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c||c|}
\hline
& Binary & Continuous & Constraints \\ \hline
\ref{for:BLIFP1} & $d$ & $|R|+5T+d+dT+3$ & $2|B|+6T+2|R|+2d+4dT+6$ \\ \hline
\ref{for:BLIFP2} & $d$ & $|R|+4T+3d+3$ & $|B|+5T+|R|+7d+5$ \\ \hline
\end{tabular}
\caption{Number of variables and constraints in models \ref{for:BLIFP1} and \ref{for:BLIFP2}. \label{t:comparamodelos}}
\end{center}
\end{table}
The smaller dimension of \ref{for:BLIFP2} explains what we observe later in the computational experiments: \ref{for:BLIFP2} is solved more efficiently than \ref{for:BLIFP1} (see Section \ref{Computational}).}
\section{Bilevel Investor-leader {broker-dealer}-follower Portfolio Problem (\textbf{ILBFP})} \label{Investor-leader}
For the sake of completeness, in this section we consider the reverse situation to the one analyzed in Section \ref{Sect:Bank-leader}, i.e., a hierarchical structure in the financial market where the investor acts first and, once his portfolio $x$ is chosen, the {broker-dealer} sets the {transaction costs}. Although one could claim that this situation may be atypical in actual financial markets, we want to analyze this case from a theoretical point of view. Moreover, we wish to analyze its implications depending on different {broker-dealer} and investor profiles. See Section \ref{Computational} for a comparative analysis. This situation leads to a bilevel leader-follower model in which the investor (leader) has to optimize his utility (maximize the CVaR ensuring a given expected reward, $\mu_0$) by assuming that, once he has chosen the portfolio, the {broker-dealer} (follower) will maximize his benefits by setting the applicable transaction costs.
We can formulate the problem as:
\begin{align}
\label{for:ILBFP} \max \ &\displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t \tag{\textbf{ILBFP0}} \\
\ \ \ \ \ \hbox{s.t.} \ &\nonumber {(\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}), \tag{\text{Investor Constraints}}}\\
\label{for:bank_follower_of}&p\in arg \max \displaystyle \sum_{j\in B}p_jx_j,\\
\nonumber &\ \ \ \ \ \ \ \ \nonumber \ \ \ \ \hbox{s.t.} \ (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}}.
\end{align}
We state in the following proposition that, if no further polyhedral constraints are imposed on the {possible costs}, i.e., $\mathbb{P}=\mathbb{R}^{|B|}_+$, then fixing the {transaction costs} to their maximum possible values is always an optimal solution of the follower ({broker-dealer}) problem.
\begin{proposition} \label{pro:bank-follow}
Let \ref{for:bank_of} be the follower ({broker-dealer}) problem in \textbf{ILBFP0}, without constraint (\ref{for:bank_frP}). Let $x$ be a given portfolio and let $\displaystyle p_j^+=\max_{k=1,...,s_j}c_{jk}$ for all $j \in B$. Then $p_j=p_j^+$, $\forall j \in B$, is an optimal solution of \textbf{\ref{for:bank_of}}.
\end{proposition}
Using the previous result, whenever constraint (\ref{for:bank_frP}) is not included, \ref{for:ILBFP} can be simplified by replacing the nested optimization problem with the explicit form of an optimal solution. This yields a valid linear programming formulation of the problem.
\begin{align*}
\label{for:ILBFP_LP} \tag{\textbf{ILBFP-LP}}\max &\displaystyle \ \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\nonumber \ \ \ \ \hbox{s.t.} \ &{(\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}), \tag{\text{Investor Constraints}}}\\
&y_t=\sum_{j=1}^nr_{jt}x_j-\left(\sum_{j\in B}p_j^+x_j\right), \quad t=1,...,T.
\end{align*}
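For illustration, the following Python sketch builds \ref{for:ILBFP_LP} with the PuLP modeling library. All names are ours: \texttt{r[j][t]} denotes the return of security $j$ under scenario $t$, and \texttt{p\_plus[j]} is set to $p_j^+$ for $j\in B$ and to $0$ otherwise.
\begin{verbatim}
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

def build_ilbfp_lp(r, p_plus, pi, alpha, mu0):
    # r[j][t]: returns; p_plus[j]: maximum admissible cost (0 if j not in B);
    # pi[t]: scenario probabilities; alpha, mu0: investor's risk profile.
    n, T = len(r), len(pi)
    prob = LpProblem("ILBFP_LP", LpMaximize)
    x = [LpVariable(f"x_{j}", lowBound=0) for j in range(n)]
    d = [LpVariable(f"d_{t}", lowBound=0) for t in range(T)]
    y = [LpVariable(f"y_{t}") for t in range(T)]
    eta = LpVariable("eta")
    # objective: eta - (1/alpha) sum_t pi_t d_t
    prob += eta - (1.0 / alpha) * lpSum(pi[t] * d[t] for t in range(T))
    for t in range(T):
        # y_t = sum_j r_jt x_j - sum_{j in B} p_j^+ x_j
        prob += y[t] == lpSum((r[j][t] - p_plus[j]) * x[j] for j in range(n))
        prob += d[t] >= eta - y[t]
    prob += lpSum(pi[t] * y[t] for t in range(T)) >= mu0   # expected return
    prob += lpSum(x) <= 1                                  # budget constraint
    return prob, x
\end{verbatim}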
Nevertheless, the above result cannot be extended to the case in which a more general polyhedron $\mathbb{P}$ defines the admissible set of transaction costs, nor can a single-level MILP formulation be obtained in that case. To solve \textbf{ILBFP} in this more general setting, we propose an \textit{`ad hoc'} algorithm. To justify its validity we need the following theorem.
\begin{theorem}\label{Theo:ILBFP-Compact}
Let us define {$\lambda=\sum_{j\in B}p_jx_j$}, and let us denote by $\Omega$ the set of feasible {commission and fee} rates $p_{int}$ of the {broker-dealer} problem in $\mathbb{P}$. The problem \ref{for:ILBFP} is equivalent to:
{
\begin{align*}
\label{for:ILBFP_Compact} \tag{\textbf{ILBFP-Compact}} \max &\ \displaystyle \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\ \ \ \ \hbox{s.t.} \ \ &y_t=\sum_{j=1}^nr_{jt}x_j-\left(\lambda\right), \hspace*{0.85cm} t=1,...,T, \\
&\sum_{t=1}^T\pi_ty_t\ge \mu_0,\\
&\displaystyle d_t\ge \eta - y_t, \hspace*{2.5cm} t=1,...,T, \\
& d_{t}\ge 0,\hspace*{3.2cm} t=1,...,T,\\
&\displaystyle \sum_{j=1}^n x_{j}\leq 1, \\
& x_{j}\ge 0, \hspace*{3.15cm} j=1,...,n,\\
&\lambda\geq \sum_{j \in B} p_{int,j}x_j, \hspace*{1.5cm} p_{int}\in \Omega.\\
\end{align*}
}
\end{theorem}
\begin{proof}
We first prove that maximizing the objective function $\displaystyle \eta - \dfrac{1}{\alpha} \sum_{t=1}^T \pi_td_t$ in \ref{for:ILBFP} is equivalent to maximizing $\ {\displaystyle \eta (1-c_x)+\frac{1}{\alpha}\sum_{t\in \mathbb{T'}}\sum_{j=1}^n\pi_tr_{jt}x_j-c_x\lambda }$, where $c_x=\sum_{t\in \mathbb{T'}} \frac{\pi_t}{\alpha}>0 $ and $ \mathbb{T'}:=\{t=1,...,T : \eta -y_t\geq 0\}$. Observe that the constraints in \ref{for:ILBFP_Compact} imply that $d_t=\max\{0,\eta -y_t\}$ and $y_t=\sum_{j=1}^{n}r_{jt}x_j-\sum_{j \in B}p_jx_j$ for all $t=1,...,T$. Therefore the objective value in the problem satisfies the following chain of equalities:
{\small
\begin{align}
\nonumber \max \eta - \dfrac{1}{\alpha} \sum_{t=1}^T \pi_td_t=&\max \eta - \dfrac{1}{\alpha} \sum_{t=1}^T \pi_t\max\{0,\eta-y_t\}\\
\nonumber &=\max \eta - \dfrac{1}{\alpha} \sum_{t\in \mathbb{T'}} \pi_t (\eta-y_t)\\
\nonumber &=\max \eta(1-c_x) + \frac{1}{\alpha}\sum_{t\in \mathbb{T'}} \pi_t \left(\sum_{j=1}^{n}r_{jt}x_j-\sum_{j \in B}p_jx_j\right)\\
\label{for:theorem_constraint}&=\max \eta(1-c_x) + \frac{1}{\alpha}\sum_{t\in \mathbb{T'}} \pi_t \left(\sum_{j=1}^{n}r_{jt}x_j\right)-c_x \lambda .
\end{align}
}
{Recalling that $\lambda=\sum_{j\in B}p_j x_j$, expression (\ref{for:theorem_constraint}) shows that the objective function of \ref{for:ILBFP_Compact} depends on $\lambda$ through the negative coefficient $-c_x$.}
Secondly, we have that, for a given portfolio $x$, the optimal value $\bar \lambda$ of the follower problem is
\begin{align*}
\bar \lambda=\max & \displaystyle \sum_{j\in B}p_jx_j\\
\hbox{s.t.} & \ (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}},
\end{align*}
\noindent which is equivalent to evaluating the objective function at all the feasible points and choosing the largest value: \[\bar \lambda=\max \sum_{j \in B} p_{int,j}x_j,\; \ p_{int}\in \Omega.\]
Since $c_x>0$ and $\lambda$ appears in (\ref{for:theorem_constraint}) with the negative coefficient $-c_x$, the variable $\lambda$ is effectively being minimized; hence the follower problem in \ref{for:ILBFP} can be replaced by \[\lambda\geq \sum_{j \in B} p_{int,j}x_j,\; p_{int}\in \Omega,\]
and the result follows.
\end{proof}
Observe that, if the set of points in $\Omega$ were explicitly known, \ref{for:ILBFP_Compact} would be a compact MILP formulation, although with a potentially exponential number of constraints in the general case of \ref{for:ILBFP}. However, the points in the set $\Omega$ are usually difficult to enumerate a priori.
The idea of our algorithm is to start with an incomplete formulation of \ref{for:ILBFP_Compact} and to reinforce it, at each iteration, with a new inequality coming from a new point in $\Omega$.
\noindent {\sc Algorithm 1:}
\begin{description}
\item {\tt Initialization} Choose a feasible portfolio $x^0$. Set $CVaR^{0}=+\infty$.
\item {\tt Iteration $\tau=1,2,\ldots$}
\begin{itemize}
\item Solve the {broker-dealer} (follower) problem for $x^{\tau-1}$. Let $p^{\tau}$ be an optimal solution.
\item Solve the incomplete formulation:
\begin{align*}
\label{for:ILBFP_Incomplete} \tag{\textbf{ILBFP-Incomplete$^{\tau}$}} \max & \displaystyle \ \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_t d_t\\
\ \ \ \ \hbox{s.t.} \ \ &y_t=\sum_{j=1}^nr_{jt}x_j-\left(\lambda\right), \hspace*{0.85cm} t=1,...,T, \\
&\sum_{t=1}^T\pi_ty_t\ge \mu_0,\\
&\displaystyle d_t\ge \eta - y_t, \hspace*{2.5cm} t=1,...,T, \\
& d_{t}\ge 0,\hspace*{3.2cm} t=1,...,T,\\
&\displaystyle \sum_{j=1}^n x_{j}\leq 1, \\
& x_{j}\ge 0, \hspace*{3.15cm} j=1,...,n,\\
&\lambda\geq \sum_{j \in B} p_j^{\nu}x_j, & \nu=1,...,\tau.\\
\end{align*}
Let $\chi^{\tau}=(x^{\tau}, y^{\tau}, \eta^{\tau}, d^{\tau})$, and let $(\chi^{\tau}, \lambda^{\tau})$ be an optimal solution and $CVaR^{\tau}$ the optimal value.
\begin{itemize}
\item If $(\chi^{\tau-1}, \lambda^{\tau-1})$ is feasible in \ref{for:ILBFP_Incomplete}, then $(\chi^{\tau -1 },p^{\tau})$ is an optimal solution of \ref{for:ILBFP} and $CVaR^{\tau}$ is the optimal value. END.
\item If $(\chi^{\tau-1}, \lambda^{\tau-1})$ is not feasible in \ref{for:ILBFP_Incomplete}, go to iteration $\tau:=\tau+1$.
\end{itemize}
\end{itemize}
\end{description}
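A minimal Python sketch of Algorithm 1 is shown below; \texttt{solve\_follower} and \texttt{solve\_incomplete} are hypothetical routines (e.g., thin wrappers around any LP/MILP solver) returning, respectively, the {broker-dealer}'s optimal costs for a fixed portfolio and an optimal solution of \ref{for:ILBFP_Incomplete} for the current set of cuts.
\begin{verbatim}
def algorithm_1(x0, solve_follower, solve_incomplete, tol=1e-8, max_iter=1000):
    # x0: any feasible portfolio (dict: security -> weight)
    # solve_follower(x) -> p (dict), optimal costs for portfolio x
    # solve_incomplete(cuts) -> (x, lam, cvar), optimum of ILBFP-Incomplete
    x_prev, lam_prev, cvar_prev = x0, float("-inf"), float("inf")
    cuts = []
    for tau in range(1, max_iter + 1):
        p = solve_follower(x_prev)
        # stopping test: the previous solution already satisfies the new cut
        if sum(p[j] * x_prev.get(j, 0.0) for j in p) <= lam_prev + tol:
            return x_prev, p, cvar_prev
        cuts.append(p)                     # add the cut  lam >= sum_j p_j x_j
        x_prev, lam_prev, cvar_prev = solve_incomplete(cuts)
    raise RuntimeError("Algorithm 1 did not converge within max_iter")
\end{verbatim}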
The following result establishes the finiteness of Algorithm 1 and the optimality of the solution it returns.
\begin{theorem}
Algorithm 1 finishes in a finite number of iterations with an optimal solution of \ref{for:ILBFP}.
\end{theorem}
\begin{proof}
We start by guaranteeing the finiteness of the algorithm. On the one hand, the number of feasible solutions of the {broker-dealer} problem is finite, so the number of different cuts $\lambda\geq \sum_{j \in B} p_j^{\tau}x_j$ that can be added to the incomplete formulation is also finite. On the other hand, if a repeated cut is added, then $(\chi^{\tau -1},\lambda^{\tau-1})$ is feasible in \textbf{ILBFP-Incomplete$^{\tau}$}, since \textbf{ILBFP-Incomplete$^{\tau}$} coincides with \textbf{ILBFP-Incomplete$^{\tau-1}$}, and the algorithm stops. Therefore the algorithm finishes in a finite number of iterations.
We continue now proving the optimality of the solution obtained. Let us denote by $CVaR^*$ the optimal value of \ref{for:ILBFP}, that by Theorem \ref{Theo:ILBFP-Compact} is also the optimal value of \ref{for:ILBFP_Compact}.
First, assume that $(\chi^{\tau -1}, \lambda^{\tau-1})$ satisfies the stopping criterion. Then, it is clear that $(\chi^{\tau -1}, \lambda^{\tau-1})$ is also feasible in \textbf{ILBFP-Incomplete$^{\tau}$} and, by construction, $CVaR^{\nu} \leq CVaR^{\nu-1}$ for all $\nu=1,...,\tau$. Hence, $(\chi^{\tau-1}, \lambda^{\tau-1})$ is also optimal in \textbf{ILBFP-Incomplete$^{\tau}$} and $CVaR^{\tau-1} = CVaR^{\tau}$.
Second, we have that $CVaR^*\leq CVaR^{\tau}$ always holds, since the polyhedron describing the feasible region of \ref{for:ILBFP_Compact} is included in the one defining the feasible region in \textbf{ILBFP-Incomplete$^{\tau}$}.
Finally, we have that if $(\chi^{\tau -1 },p^{\tau})$ is feasible in \ref{for:ILBFP}, then $CVaR^*= CVaR^{\tau}$ and it is an optimal solution of \ref{for:ILBFP}. Therefore, it remains to prove that $(\chi^{\tau -1 },p^{\tau})$ is feasible in \ref{for:ILBFP}.
Clearly $\chi^{\tau -1 }$ satisfies constraints (\ref{for:CVaR_cd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_frx}), since they are all included in the incomplete formulation; moreover, $x^{\tau -1 }$ and $p^{\tau}$ satisfy the constraints $p\in arg \max \displaystyle \sum_{j\in B}p_jx_j$, (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}) and (\ref{for:bank_frP}), since
\begin{align*}
p^{\tau} \in arg \max \displaystyle \sum_{j\in B}p_jx_j^{\tau-1}\\
\nonumber \ \ \ \ \ \ \ \ \nonumber \ \ \ \ \hbox{s.t.} \ (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}),(\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}}.
\end{align*}
To complete the proof we need to check that constraint (\ref{for:CVaR_cy}) is also satisfied.
Since $p^{\tau} \in arg \max \displaystyle \sum_{j\in B}p_jx_j^{\tau-1}$, we have $\sum_{j \in B} p_j^{\tau}x_j^{\tau -1 }\geq \sum_{j \in B} p_jx_j^{\tau -1 }$ for any {cost} $p$ satisfying (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}) and (\ref{for:bank_frP}). Using the same arguments as in Theorem \ref{Theo:ILBFP-Compact}, it follows that the variable $\lambda$ is being minimized in \textbf{ILBFP-Incomplete$^{\tau}$}; thus $\lambda^{\tau}= \sum_{j \in B} p_j^{\tau}x_j^{\tau -1 }$ and constraint (\ref{for:CVaR_cy}) holds.
\end{proof}
\section{The Maximum Social Welfare Problem (\textbf{MSWP})} \label{Cooperative}
In some situations, the investor and the broker-dealer may have an incentive to work together in order to improve the overall welfare. They can agree to cooperate, sharing risk and benefits, and improve their solutions by designing a joint strategy.\par
We analyze this problem for the sake of completeness and to compare its performance with the situations above, in which one of the parties holds a hierarchical position over the other. Even if the actual implementation of the cooperative model may be difficult in a competitive market, one may gain some insights into the problem through this analysis.
In {the} social welfare model, we assume that both the broker-dealer and the investor cooperate. {Let $0<\xi<1$ be the marginal rate of substitution between the two objectives, that is, the rate at which one of the parties can give up some units of one of the objective functions in exchange for an additional unit of the other one while maintaining the same overall value. Then,} the cooperative version of the problem can be written as a weighted sum of the two objective functions of each party over the feasible region delimited by the constraints of both problems:
\begin{align*}
\max \ &\displaystyle \xi \sum_{j\in B}p_jx_j+(1-\xi)\left(\eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t \right)\\
\ \ \ \ \ \hbox{s.t.} \ & (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}), (\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}},\\
& \\
&{(\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}) \tag{\text{Investor Constraints}}}.
\end{align*}
The above problem can be modeled as a MILP by linearizing the products of variables $a_{jk}x_j$, $j\in B$, $k=1,\ldots,s_j$, following the same scheme as in Section \ref{Sect:Bank-leader}:
\begin{align*}
\label{for:CoopP} \tag{\textbf{MSWP0}} \max \ &\displaystyle \xi \sum_{j\in B}\sum_{k=1}^{s_j} c_{jk}\hat{a}_{jk}+(1-\xi)\left(\eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t \right)\\
\ \ \ \ \ \hbox{s.t.} \ & \quad (\ref{for:bank_c2}), (\ref{for:bank_fr}), (\ref{for:bank_frP}) \tag{\text{Linear {broker-dealer} Constraints}},\\
& \\
\nonumber & { (\ref{for:CVaR_cy_linear}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx_a}), (\ref{for:CVaR_frx}), (\ref{for:linear_a}). \tag{\text{Linear investor Constraints 1}}}\\
\end{align*}
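A minimal PuLP sketch of this linearized model is given below. It assumes, as is standard for this type of linearization, that the binary variables $a_{jk}$ select exactly one admissible cost level per security (i.e., $\sum_k a_{jk}=1$) and that $\hat a_{jk}$ plays the role of the product $a_{jk}x_j$; any additional polyhedral restriction of type (\ref{for:bank_frP}) is omitted, and all names are illustrative.
\begin{verbatim}
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def build_mswp(r_B, r_R, c, pi, alpha, mu0, xi=0.5):
    # r_B[j][t], r_R[j][t]: returns of the securities in B and in R
    # c[j][k]: admissible transaction costs of security j in B
    T, B, R = len(pi), range(len(r_B)), range(len(r_R))
    prob = LpProblem("MSWP", LpMaximize)
    a  = {(j, k): LpVariable(f"a_{j}_{k}", cat=LpBinary)
          for j in B for k in range(len(c[j]))}
    ah = {jk: LpVariable(f"ah_{jk[0]}_{jk[1]}", lowBound=0) for jk in a}
    x  = {j: LpVariable(f"x_{j}", lowBound=0) for j in R}
    d  = [LpVariable(f"d_{t}", lowBound=0) for t in range(T)]
    y  = [LpVariable(f"y_{t}") for t in range(T)]
    eta = LpVariable("eta")
    broker   = lpSum(c[j][k] * ah[j, k] for (j, k) in ah)
    investor = eta - (1.0 / alpha) * lpSum(pi[t] * d[t] for t in range(T))
    prob += xi * broker + (1 - xi) * investor              # weighted objective
    for j in B:
        prob += lpSum(a[j, k] for k in range(len(c[j]))) == 1  # one cost level
        for k in range(len(c[j])):
            prob += ah[j, k] <= a[j, k]                    # linearization of a*x
    for t in range(T):
        prob += y[t] == lpSum((r_B[j][t] - c[j][k]) * ah[j, k]
                              for (j, k) in ah) \
                        + lpSum(r_R[j][t] * x[j] for j in R)
        prob += d[t] >= eta - y[t]
    prob += lpSum(pi[t] * y[t] for t in range(T)) >= mu0
    prob += lpSum(ah.values()) + lpSum(x.values()) <= 1
    return prob
\end{verbatim}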
{For simplicity, in what follows we} consider an unweighted maximum social welfare model in which the two objective functions $\sum_{j\in B}\sum_{k=1}^{s_j} c_{jk}{a}_{jk}$ ({broker-dealer}) and $\eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t$ (investor) are simply added. The following result proves that cooperation is always profitable for both parties, in the sense that the joint objective value is at least the sum of the two parties' objective values in either of the hierarchical problems.
\begin{proposition} \label{prop:social_welfare}
An optimal solution of the unweighted maximum social welfare {problem} attains an objective value that is greater than or equal to the sum of the optimal objective values of the two parties in either of the hierarchical {problems}.
\end{proposition}
\begin{proof}
Any feasible solution of \ref{for:BLIFP} and \ref{for:ILBFP} is feasible in \ref{for:CoopP}, since all the constraints in this last problem appear in the two former formulations. Therefore, the feasible region of \ref{for:CoopP} includes the feasible regions of both \ref{for:BLIFP} and \ref{for:ILBFP}, and the result follows.
\end{proof}
\subsection{Benders decomposition}
We can also derive a Benders decomposition {\cite{ben62}} in order to state a Benders-like algorithm to solve \ref{for:CoopP}, and compare the performance of the two proposed methods for solving the problem.\par
Recall that the unweighted maximum social welfare {problem} can be written as:
\begin{align*}
\max \ &\displaystyle \sum_{j\in B}p_jx_j+\left(\eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t \right)\\
\ \ \ \ \ \hbox{s.t.} \ & (\ref{for:bank_c1}), (\ref{for:bank_c2}), (\ref{for:bank_fr}), (\ref{for:bank_frP}) \tag{\text{{Broker-dealer} Constraints}},\\
& \\
&{(\ref{for:CVaR_cy}), (\ref{for:CVaR_creturn}), (\ref{for:CVaR_cd}), (\ref{for:CVaR_frd}), (\ref{for:CVaR_cx}), (\ref{for:CVaR_frx}) \tag{\text{Investor Constraints}}}.
\end{align*}
In order to apply Benders decomposition we reformulate \ref{for:CoopP} as follows:
\begin{align*}
\label{for:CoopP'} \tag{\textbf{MSWP1}} \max &\displaystyle \sum_{j\in B}\sum_{k=1}^{s_j} c_{jk}\hat{a}_{jk}+q(y)\\
\hbox{s.t.} &\quad (\ref{for:bank_c2}), (\ref{for:bank_fr}), (\ref{for:bank_frP}) \tag{\text{Linear {broker-dealer} Constraints}}\\
&\begin{array}{lll}
\displaystyle \hat{a}_{jk}\leq a_{jk},& j\in B, k=1,...,s_j,\\
\displaystyle \hat{a}_{jk}\geq 0, & j\in B, k=1,...,s_j,\\
\end{array}\tag{\ref{for:linear_a}}\\
\tag{\ref{for:CVaR_cy_linear}}&y_t=\sum_{j\in B}r_{jt}\left( \sum_{k=1}^{s_j} \hat a_{jk}\right) +\sum_{j \in R}r_{jt}x_j-\sum_{j \in B}\sum_{k=1}^{s_j}c_{jk}\hat a_{jk}, \hspace*{0.3cm} t=1,...,T, \\
\nonumber&\sum_{t=1}^T\pi_ty_t\ge \mu_0, \tag{\ref{for:CVaR_creturn}} \\
\tag{\ref{for:CVaR_cx_a}}&\displaystyle \sum_{j \in B}\sum_{k=1}^{s_j}\hat a_{jk}+\sum_{j \in R}x_j \leq 1, \\
\nonumber& \tag{\ref{for:CVaR_frx}} x_{j}\ge 0, \hspace*{1.4cm} j \in R,\\
\end{align*}
where
\begin{align*}
q(y)= \max & \; \displaystyle\; \eta - \dfrac{1}{\alpha}\sum_{t=1}^T\pi_td_t\\
\mbox{s.t.: } & d_t-\eta \geq -y_t, \hspace*{0.6cm} t=1,...,T,\\
&d_t\geq 0, \hspace*{1.7cm} t=1,...,T.
\end{align*}
Note that in $q(y)$ we are essentially computing the CVaR for the given solution $\{y_t:t=1,\ldots,T\}$.
By computing its dual problem, $q(y)$ can also be evaluated as:
\begin{align*}
\hspace*{-1cm} \label{for:PrimalP}\tag{\textbf{PrimalP}} q(y)= \min & \sum_{t=1}^T -\gamma_t y_t \\
\mbox{s.t.: } & \gamma_t\geq \frac{-\pi_t}{\alpha}, \quad t=1,...,T,\\
&-\sum_{t=1}^T \gamma_t=1,\\
&\gamma_t\leq 0, \quad t=1,...,T.
\end{align*}
Observe that the above problem, which we refer to as the Primal Problem, is a continuous knapsack problem with lower bounds, and therefore it is well known that it can be solved by inspection: it suffices to sort the $y_t$ values in non-increasing order and to assign, in that order, to each variable $\gamma_t$ the minimum feasible amount. \par
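As an illustration, the following Python sketch implements this inspection procedure in the equivalent form obtained by setting $w_t=-\gamma_t$ (giving as much weight as possible to the smallest $y_t$ first, which amounts to assigning the minimum feasible amount to each $\gamma_t$ in non-increasing order of $y_t$); it assumes $\sum_t\pi_t=1$ and $0<\alpha\le 1$, so that the problem is feasible.
\begin{verbatim}
import numpy as np

def solve_primal_by_inspection(y, pi, alpha):
    # (PrimalP) with w_t = -gamma_t:
    #   min sum_t w_t*y_t   s.t.   0 <= w_t <= pi_t/alpha,  sum_t w_t = 1.
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    caps, w, remaining = pi / alpha, np.zeros(len(y)), 1.0
    for t in np.argsort(y):              # smallest y_t first
        w[t] = min(caps[t], remaining)
        remaining -= w[t]
        if remaining <= 1e-12:
            break
    gamma = -w                           # dual variables of (PrimalP)
    return gamma, float(w @ y)           # optimal value q(y), i.e. the CVaR
\end{verbatim}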
Note that in the above formulation the feasible region does not depend on the variables in \ref{for:CoopP'}, so if we denote by $\Omega$ the set of extreme point solutions of the feasible region of \ref{for:PrimalP}, $q(y)$ is equivalent to:
\begin{align}
\nonumber q(y)=\max &\quad q \\
\hbox{s.t. } & \label{for:Benders}\displaystyle q\leq \sum_{t=1}^T -\gamma_t^{\tau} y_t, \quad \gamma^{\tau} \in \Omega.
\end{align}
Therefore, the problem \ref{for:CoopP} with discrete {costs} can be written as:
\begin{align*}
\label{for:MasterP} \tag{\textbf{MasterP}} \max &\displaystyle \sum_{j\in B}\sum_{k=1}^{s_j} c_{jk}\hat{a}_{jk}+q\\
\hbox{s.t.} &\quad (\ref{for:bank_c2}), (\ref{for:bank_fr}), (\ref{for:bank_frP}) \tag{\text{Linear {broker-dealer} Constraints}}\\
&\begin{array}{lll}
\displaystyle \hat{a}_{jk}\leq a_{jk},& j\in B, k=1,...,s_j,\\
\displaystyle \hat{a}_{jk}\geq 0, & j\in B, k=1,...,s_j,\\
\end{array}\tag{\ref{for:linear_a}}\\
\tag{\ref{for:CVaR_cy_linear}}&y_t=\sum_{j\in B}r_{jt}\left( \sum_{k=1}^{s_j} \hat a_{jk}\right) +\sum_{j \in R}r_{jt}x_j-\sum_{j \in B}\sum_{k=1}^{s_j}c_{jk}\hat a_{jk}, \hspace*{0.3cm} t=1,...,T, \\
\nonumber&\sum_{t=1}^T\pi_ty_t\ge \mu_0, \tag{\ref{for:CVaR_creturn}} \\
\tag{\ref{for:CVaR_cx_a}}&\displaystyle \sum_{j \in B}\sum_{k=1}^{s_j}\hat a_{jk}+\sum_{j \in R}x_j \leq 1, \\
\nonumber& \tag{\ref{for:CVaR_frx}} x_{j}\ge 0, \hspace*{1.4cm} j \in R,\\
&q\leq \sum_{t=1}^T -\gamma_t^{\tau} y_t,\hspace*{0.5cm} \gamma^{\tau} \in \Omega. \tag{\ref{for:Benders}}
\end{align*}
This analysis allows us to state a Benders algorithm as follows:
\noindent {\sc Benders Algorithm:}
\begin{description}
\item {\tt Initialization} Choose a solution $y^0$ of the master problem, solve the primal problem \ref{for:PrimalP} for the chosen $y^0$. Let $\gamma^0$ be an optimal solution for \ref{for:PrimalP} under $y^0$ and $q(y^0)$ the corresponding optimal value. Take $\mathbf{\Upsilon}=\{\gamma^0\}$ and go to iteration $\tau=1$.
\item {\tt Iteration $\tau=1,2,\ldots$} Solve the master problem $\ref{for:MasterP}$ replacing $\Omega$ with $\mathbf{\Upsilon}$. Let $(y^*, q^*)$ be an optimal solution of this problem.
\begin{itemize}
\item If $\tau=1$ and $q(y^0)=q^*$. END.
\item If $\tau>1$ and $q(y^*)=q^*$. END.
\item Otherwise, solve the primal problem \ref{for:PrimalP} for $y=y^*$. Let $\gamma^*$ be an optimal solution of such problem. Take $\gamma^{\tau}=\gamma^*$, $\mathbf{\Upsilon}=\mathbf{\Upsilon}\cup\{\gamma^{\tau}\}$, and go to iteration $\tau:=\tau+1$.
\end{itemize}
\end{description}
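The loop below is a minimal Python sketch of the Benders Algorithm. Here \texttt{solve\_master} is a hypothetical routine (built on any MILP solver) that solves \ref{for:MasterP} with $\Omega$ replaced by the current list of $\gamma$ vectors and returns an optimal pair $(y^*, q^*)$, while \texttt{solve\_primal} is the inspection routine sketched above.
\begin{verbatim}
def benders(y0, pi, alpha, solve_master, solve_primal,
            tol=1e-8, max_iter=1000):
    # y0: the y-part of any feasible solution of the master constraints
    gamma0, _ = solve_primal(y0, pi, alpha)
    cuts = [gamma0]
    for tau in range(1, max_iter + 1):
        y_star, q_star = solve_master(cuts)      # master with Omega ~ cuts
        gamma_star, q_y = solve_primal(y_star, pi, alpha)
        if q_y >= q_star - tol:                  # q(y*) = q*: optimal, stop
            return y_star, q_star
        cuts.append(gamma_star)                  # add the new cut and iterate
    raise RuntimeError("Benders loop did not converge within max_iter")
\end{verbatim}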
\section{Computational study and empirical application} \label{Computational}
This section is devoted to reporting some numerical experiments conducted to:
1) compare the effectiveness of the methods proposed to solve the different {problems}; 2) analyze the form of the solutions within each {model}; and 3) compare the profiles of the solutions, in terms of net values for the broker-dealer and expected return for the investor, across the three {defined problems}.
The computational experiments were carried out on a personal computer with Intel(R) Core(TM) i7-2600 {\scriptsize CPU}, 3.40GHz with 16.0 GB RAM. The algorithms and formulations were implemented and solved by using Xpress IVE 8.0.
{In order to conduct the computational study, we take historical data from the Dow Jones Industrial Average. We consider daily returns of the 30 assets during one year ($T=251$ scenarios), and these $T$ historical periods are treated as equiprobable scenarios ($\pi_t=1/T$). Furthermore, to perform a richer comparison, we consider different types of instances for the broker-dealer sets of possible {transaction costs} and different risk profiles for the investor.
We assume that the broker-dealer charges transaction costs in a subset $B$ of the securities. In the instances we generated, we compare the following cardinalities for the set $B$: $|B|= 30, 20, 10$. In addition, each {cost} $p_j$, $j\in B$, was chosen from a discrete set $\mathbb{P}_j=\{c_{j1},...,c_{js_j}\}$ of admissible values. The parameters $s_j$ were randomly generated in the interval $[0,K]$ with $K=5,15,50$.}
The next table gathers the nine different types of instances (A to I) considered in our computational study:
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c}
& $K=5$ & $K=15$ & $K=50$ \\
\hline
$|B|=30$ & {\bf A} & {\bf B} & {\bf C} \\
\hline
$|B|=20$ & {\bf D} & {\bf E} & {\bf F} \\
\hline
$|B|=10$ & {\bf G} & {\bf H} & {\bf I} \\
\end{tabular}
\caption{\label{table:instances} {{\footnotesize Types of instances for the sets of possible {costs} depending on the values of $|B|$ and $K$}}}
\end{center}
\end{table}
{Once the set $B$ and the parameters $s_j$ were set for each type of instance (A-I), we generate the possible {transaction costs} $c_{jk}$ as follows (a sketch of this generation procedure is given after the list):
\begin{itemize}
\item randomly generated in the interval $[0.001,0.003]$ (\textit{cheaper} {costs}) in approximately $15\%$ of the securities,
\item randomly generated in the interval $[0.002,0.008]$ (\textit{normal} {costs}) in approximately $70\%$ of the securities,
\item randomly generated in the interval $[0.006,0.010]$ (\textit{more expensive} {costs}) in approximately $15\%$ of the securities.
\end{itemize}
}
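The sketch announced above reproduces this generation scheme in Python (all names are ours): for each security in $B$ it draws the number of cost levels $s_j$ in $[0,K]$ and then samples the admissible costs from the cheaper/normal/more expensive intervals with probabilities $0.15/0.70/0.15$.
\begin{verbatim}
import numpy as np

def generate_cost_sets(B, K, rng=None):
    # B: securities charged by the broker-dealer; K: maximum number of levels
    rng = rng or np.random.default_rng()
    intervals = [(0.001, 0.003), (0.002, 0.008), (0.006, 0.010)]
    probs = [0.15, 0.70, 0.15]
    costs = {}
    for j in B:
        s_j = rng.integers(0, K + 1)                # number of admissible levels
        lo, hi = intervals[rng.choice(3, p=probs)]  # cheap / normal / expensive
        costs[j] = np.sort(rng.uniform(lo, hi, size=s_j))
    return costs
\end{verbatim}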
For each type of instance defined in Table \ref{table:instances}, five different instances are generated and the average values are reported in all the tables and figures.
{Different investor profiles are also considered by varying the values of the parameters $\mu_0$ and $\alpha$. We assume three thresholds for the expected return, $\mu_0=0.0, 0.05, 0.1$. In this way, we model investors who are not willing to lose anything, or who want to gain at least $5\%$ or $10\%$ of their invested amount. In addition, we consider four different CVaR risk levels, $\alpha=0.05,0.1,0.5,0.9$. Note that, usually, the smaller the $\alpha$, the higher the risk-aversion.}
\subsection{Comparing solution methods}
This section compares the computational performance of the different methods proposed to solve each one of the {problems}.
For the first {problem}, \textbf{BLIFP}, we proposed two different formulations: \ref{for:BLIFP1} and \ref{for:BLIFP2}. We show in all our tables the average CPU time, expressed in seconds (CPU), and the number of {instances} (\#) solved to optimality (out of 5) for each formulation, with a time limit of 3600 seconds.
Table \ref{table: model1} is organized in three blocks of rows. Each block reports results for $\mu_0=0.0,0.05, 0.1$, respectively. Each row in the table refers to a type of instance ($A,\ldots,I$). The columns are also organized in four blocks. Each block reports the results for a different risk level ($\alpha$).
{It can be observed that \ref{for:BLIFP2} is always faster and solves more instances to optimality than \ref{for:BLIFP1}. {As anticipated in Section \ref{ss:BLIFP2}, this behavior is explained by the smaller dimension of \ref{for:BLIFP2} in terms of variables and constraints.} For example, when $\alpha=0.5$ and $\mu_0=0.0$, \ref{for:BLIFP2} is able to solve all the instances of types D and H in a few seconds, while \ref{for:BLIFP1} is not able to solve any of them. Therefore, we conclude that formulation \ref{for:BLIFP2} is more effective than \ref{for:BLIFP1} for solving {\textbf{BLIFP}}.}
{The second {problem} in our analysis is the one presented in Section \ref{Investor-leader}, namely \textbf{ILBFP}. For this situation, we have proposed the single-level LP formulation \ref{for:ILBFP_LP} and Algorithm 1 to solve the problem. We report the results concerning this model (when no additional constraints are imposed on the set of {transaction costs}) in Table \ref{table: model2}. It can be observed that the compact formulation is faster than the algorithm: all the instances can be solved by using the LP formulation in less than 7 seconds, whereas the algorithm needs more than 100 seconds to solve some of them. However, Algorithm 1 is also able to solve all the instances and, as discussed in Section \ref{Investor-leader}, it can also be used when more general sets of {costs} are considered. }
{Finally, for the social welfare {problem,} \textbf{MSWP}, we have also proposed a single-level formulation, \ref{for:CoopP}, and a Benders-like algorithm. The primal problems in the Benders Algorithm were solved by using the inspection method described in the previous section. We report the results concerning this model in Table \ref{table: model3}, with the same layout as Table \ref{table: model2}. It can be observed that, again, the compact formulation is much faster than the algorithm. In spite of that, the algorithm is also able to solve all the considered instances. }
{
\begin{table}[H]
\resizebox{\textwidth}{!}{
\begin{tabular}{rr|rrrr|rrrr|rrrr|rrrr|}
& & \multicolumn{ 4}{c}{{\bf $\alpha=0.05$}} & \multicolumn{ 4}{c}{{\bf $\alpha=0.1$}} & \multicolumn{ 4}{c}{{\bf $\alpha=0.5$}} & \multicolumn{ 4}{c}{{\bf $\alpha=0.9$}} \\
$\mu_0$ & & \multicolumn{ 2}{c}{\ref{for:BLIFP1}} & \multicolumn{ 2}{c}{\ref{for:BLIFP2}} & \multicolumn{ 2}{c}{\ref{for:BLIFP1}} & \multicolumn{ 2}{c}{\ref{for:BLIFP2}} & \multicolumn{ 2}{c}{\ref{for:BLIFP1}} & \multicolumn{ 2}{c}{\ref{for:BLIFP2}} & \multicolumn{ 2}{c}{\ref{for:BLIFP1}} & \multicolumn{ 2}{c}{\ref{for:BLIFP2}} \\
& & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} & {\scriptsize CPU} & {\scriptsize \#} \\
\hline
0 & \textbf{A} & 3600 & 0 & 181 & 5 & 3600 & 0 & 916 & 4 & 3600 & 0 & 3291 & 1 & 3600 & 0 & 5 & 5 \\
& \textbf{B} & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3079 & 1 \\
& \textbf{C} & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 \\
& \textbf{D} & 3600 & 0 & 2 & 5 & 3204 & 1 & 17 & 5 & 3600 & 0 & 59 & 5 & 1603 & 3 & 2 & 5 \\
& \textbf{E} & 3600 & 0 & 890 & 4 & 3600 & 0 & 2024 & 3 & 3600 & 0 & 3377 & 1 & 1882 & 3 & 3 & 5 \\
& \textbf{F} & 3600 & 0 & 2895 & 1 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 2984 & 1 & 76 & 5 \\
& \textbf{G} & 841 & 5 & 1 & 5 & 375 & 5 & 1 & 5 & 810 & 5 & 1 & 5 & 571 & 5 & 1 & 5 \\
& \textbf{H} & 2282 & 2 & 2 & 5 & 3117 & 1 & 2 & 5 & 3600 & 0 & 7 & 5 & 2178 & 2 & 1 & 5 \\
& \textbf{I} & 2959 & 1 & 1444 & 3 & 3600 & 0 & 1562 & 3 & 3600 & 0 & 939 & 4 & 1804 & 3 & 5 & 5 \\
\hline
0.05 & \textbf{A} & 3600 & 0 & 28 & 5 & 3600 & 0 & 343 & 5 & 3600 & 0 & 3291 & 1 & 3156 & 1 & 5 & 5 \\
& \textbf{B} & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3079 & 1 \\
& \textbf{C} & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 \\
& \textbf{D} & 3600 & 0 & 2 & 5 & 3204 & 1 & 3 & 5 & 3600 & 0 & 59 & 5 & 2217 & 2 & 2 & 5 \\
& \textbf{E} & 3600 & 0 & 110 & 5 & 3600 & 0 & 1923 & 3 & 3600 & 0 & 3377 & 1 & 1793 & 3 & 3 & 5 \\
& \textbf{F} & 3600 & 0 & 2905 & 1 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 2930 & 1 & 76 & 5 \\
& \textbf{G} & 841 & 5 & 1 & 5 & 375 & 5 & 1 & 5 & 810 & 5 & 1 & 5 & 62 & 5 & 1 & 5 \\
& \textbf{H} & 2282 & 2 & 1 & 5 & 3117 & 1 & 2 & 5 & 3600 & 0 & 7 & 5 & 153 & 5 & 1 & 5 \\
& \textbf{I} & 2959 & 1 & 930 & 4 & 3600 & 0 & 1575 & 3 & 3600 & 0 & 939 & 4 & 2439 & 2 & 5 & 5 \\
\hline
0.1 & \textbf{A} & 3600 & 0 & 6 & 5 & 3600 & 0 & 23 & 5 & 3600 & 0 & 3291 & 1 & 3156 & 1 & 5 & 5 \\
& \textbf{B} & 3600 & 0 & 616 & 5 & 3600 & 0 & 1326 & 4 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3079 & 1 \\
& \textbf{C} & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 & 3600 & 0 \\
& \textbf{D} & 3600 & 0 & 1 & 5 & 3204 & 1 & 2 & 5 & 3600 & 0 & 59 & 5 & 2217 & 2 & 2 & 5 \\
& \textbf{E} & 3600 & 0 & 24 & 5 & 3600 & 0 & 55 & 5 & 3600 & 0 & 3377 & 1 & 1793 & 3 & 3 & 5 \\
& \textbf{F} & 3600 & 0 & 1277 & 4 & 3600 & 0 & 2227 & 2 & 3600 & 0 & 3600 & 0 & 2930 & 1 & 76 & 5 \\
& \textbf{G} & 841 & 5 & 1 & 5 & 375 & 5 & 1 & 5 & 810 & 5 & 1 & 5 & 62 & 5 & 1 & 5 \\
& \textbf{H} & 2282 & 2 & 1 & 5 & 3117 & 1 & 1 & 5 & 3600 & 0 & 7 & 5 & 153 & 5 & 1 & 5 \\
& \textbf{I} & 2959 & 1 & 1477 & 3 & 3600 & 0 & 736 & 4 & 3600 & 0 & 939 & 4 & 2439 & 2 & 5 & 5 \\
\hline
\end{tabular}
}
\caption{\label{table: model1} {\footnotesize Comparison of the average CPU and number of {instances} (out of 5) solved to optimality, for \ref{for:BLIFP1} and \ref{for:BLIFP2}}}
\end{table}
}
\begin{landscape}
\begin{multicols}{2}
\begin{center}
\begin{table}[H]
\centering
\begin{footnotesize}
\begin{tabular}{rr|rr|rr|rr|rr|}
& & \multicolumn{ 2}{c}{$\alpha=0.05$} & \multicolumn{ 2}{c}{$\alpha=0.1$} & \multicolumn{ 2}{c}{$\alpha=0.5$} & \multicolumn{ 2}{c}{$\alpha=0.9$} \\
$\mu_0$ & & {\scriptsize \textbf{LP}} & {\scriptsize Alg. 1} & {\scriptsize \textbf{LP}} & {\scriptsize Alg. 1} & {\scriptsize \textbf{LP}} &{\scriptsize Alg. 1} & {\scriptsize \textbf{LP}} & {\scriptsize Alg. 1} \\
\hline
0.0 & \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
0.05& \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
0.1 & \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
\end{tabular}
\end{footnotesize}
\caption{\label{table: model2} {\footnotesize Comparison of the average CPU for \ref{for:ILBFP_LP} and Algorithm 1}}
\end{table}
\columnbreak
\begin{table}[H]
\centering
\begin{footnotesize}
\begin{tabular}{rr|rr|rr|rr|rr|}
& & \multicolumn{ 2}{c}{$\alpha=0.05$} & \multicolumn{ 2}{c}{$\alpha=0.1$} & \multicolumn{ 2}{c}{$\alpha=0.5$} & \multicolumn{ 2}{c}{$\alpha=0.9$} \\
$\mu_0$ & & {\tiny \ref{for:CoopP}} & Ben. & {\tiny \ref{for:CoopP}} & Ben. & {\tiny \ref{for:CoopP}} & Ben. & {\tiny \ref{for:CoopP}} & Ben. \\
\hline
0 & \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
0.05 & \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
0.1 & \textbf{A} & 0.55 & 4.76 & 0.57 & 17.85 & 0.56 & 54.62 & 0.54 & 19.29 \\
& \textbf{B} & 1.51 & 12.64 & 1.61 & 50.68 & 1.60 & 144.21 & 1.49 & 47.97 \\
& \textbf{C} & 6.42 & 44.94 & 6.61 & 178.84 & 6.19 & 557.45 & 5.98 & 187.49 \\
& \textbf{D} & 0.39 & 6.80 & 0.41 & 12.09 & 0.42 & 40.26 & 0.42 & 13.18 \\
& \textbf{E} & 1.03 & 36.98 & 1.02 & 29.99 & 1.03 & 95.80 & 0.99 & 32.80 \\
& \textbf{F} & 3.26 & 24.52 & 3.31 & 86.80 & 3.31 & 298.50 & 3.23 & 89.21 \\
& \textbf{G} & 0.25 & 2.83 & 0.25 & 8.73 & 0.25 & 26.00 & 0.25 & 8.45 \\
& \textbf{H} & 0.47 & 3.99 & 0.47 & 14.37 & 0.48 & 46.73 & 0.46 & 15.32 \\
& \textbf{I} & 1.63 & 13.59 & 1.63 & 45.47 & 1.64 & 149.62 & 1.59 & 49.73 \\
\hline
\end{tabular}
\end{footnotesize}
\caption{\label{table: model3} {\footnotesize Comparison of the average CPU for \ref{for:CoopP} and Benders Algorithm}}
\end{table}
\end{center}
\end{multicols}
\end{landscape}
\subsection{Comparing solutions and risk profiles within {problems}}
This section analyzes the results provided by the {two hierarchical} {problems} in terms of the broker-dealer's net profit and of the risk and expected return attained by the investor.
{Figure \ref{Grap:CVaR_M1} compares the CVaR values obtained for the different risk profiles for \textbf{BLIFP}. Each piecewise curve reports the CVaR values for different $\alpha$-levels and $\mu_0$-levels and the nine market profiles ($A,\ldots,I$). We observe that the CVaR always increases with the value of $\alpha$, since this implies assuming more risk. It can also be seen in these figures that, as the value of $\alpha$ increases, the CVaR values for the different levels of $\mu_0$ become closer to each other. This can be explained because, when $\alpha=1$, the CVaR coincides with the expected return, so whenever the expected-return constraint is satisfied the problems for the different values of $\mu_0$ become the same; hence, the larger the $\alpha$, the more similar the results for different values of $\mu_0$. Furthermore, for a given $\alpha$, the CVaR for smaller $\mu_0$ is higher because in these cases the constraint on the expected return enlarges the feasible region as compared with higher values of $\mu_0$.}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP}}}
\label{Grap:CVaR_M1}
\end{figure}
{Figure \ref{Grap:BankProfit_M1} compares, with a similar organization to Figure \ref{Grap:CVaR_M1}, the broker-dealer net profit for the different investor risk profiles. Analogously, Figure \ref{Grap:ExpectedReturn_M1} represents the expected return for the investor.
We observe in Figure \ref{Grap:BankProfit_M1} that the broker-dealer net profit tends to be larger for profiles with smaller values of $\alpha$, that is, for more risk-averse investments. In addition, Figure \ref{Grap:ExpectedReturn_M1} shows that, in general, larger expected returns are obtained for higher values of $\alpha$. The reason is that increasing $\alpha$ widens the range of values used to compute the CVaR, so that the result is closer to the expected return (note that when $\alpha=1$ the expected return is equal to the CVaR).}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP}}}
\label{Grap:BankProfit_M1}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the expected return for \textbf{BLIFP}}}
\label{Grap:ExpectedReturn_M1}
\end{figure}
Finally, to conclude with the analysis of \textbf{BLIFP}, we remark that the smaller the cardinality of the set $B$ the better the CVaR and expected returns for the investor, but the worse the broker-dealer net profit. This is expected since we are reducing the number of securities where the broker-dealer could charge transaction costs.
{We proceed next to analyze the solutions of the second {problem}, namely \textbf{ILBFP}. We observe in Figure \ref{Grap:BankProfit_M2} the same trends as in the previous model: more risk-averse investments produce a lower CVaR for the investor (upper-left panel) and larger profits for the broker-dealer (upper-right panel), and decreasing the cardinality of the set $B$ results in a reduction of the broker-dealer profit. The behavior of the expected return (lower panel) is similar to that observed in Figure \ref{Grap:ExpectedReturn_M1} for the corresponding \textbf{BLIFP}. }
\vspace*{-0.08cm}
\begin{figure}
\caption{{\footnotesize Values of the CVaR (left-above), broker-dealer profit (right-above) and expected return (below) for \textbf{ILBFP}}}
\label{Grap:BankProfit_M2}
\end{figure}
{To finish this section, we consider the \textbf{MSWP} model. In this case, we have also included in our analysis the comparison of the objective function of this {problem, namely the broker-dealer net profit plus the CVaR,} for the different risk profiles with respect to $\mu_0$ and $\alpha$, and the type of market ($A,\ldots,I$). It can be seen in the lower-right frame of Figure \ref{Grap:Sum_M3} that the objective value increases with the value of $\alpha$. The same trend is observed for the CVaR and the expected return (left frames). However, regarding the broker-dealer profit we could not detect a clear pattern.
The interested reader is referred to the appendix, which includes all the comparisons and graphical outputs gathered in our study; there, one can also find a discrete Pareto front of \textbf{MSWP} for different values of the parameter $\xi$.}
\begin{figure}
\caption{{\footnotesize Values of the CVaR (upper-left), broker-dealer profit (upper-right), expected return (lower-left) and objective value (lower-right) for different $\alpha$ and $\mu_0$ levels in \textbf{MSWP}}}
\label{Grap:Sum_M3}
\end{figure}
\subsection{Comparing solutions across problems}
This last section of the computational results is devoted to comparing the solutions provided by the three problems considered in this paper, namely \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}. The goal is to analyze the solutions across {problems} with respect to the goals of the two parties: broker-dealer net profit, CVaR levels and expected returns. Due to page length limitations in the paper version, we have included in our figures only some comparisons for certain risk profiles. The interested reader is referred again to the appendix, where we report comparisons for a broader range of risk profiles.
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\label{Grap:CVaR_compare}
\end{figure}
Figure \ref{Grap:CVaR_compare} shows a comparison of the CVaR values attained in \textbf{BLIFP} and \textbf{ILBFP} for different risk profiles {($\alpha=0.1$ and $\mu_0=0.05$, and $\alpha=0.5$ and $\mu_0=0.1$, in the right and left figures, respectively)}. We can observe in Figure \ref{Grap:CVaR_compare} that, for each risk profile, the CVaR values are always higher in \textbf{BLIFP} than in \textbf{ILBFP}. Analogously, Figure \ref{Grap:BankProfit_compare} compares the values of the broker-dealer profit for the two hierarchical problems. It is also remarkable that \textbf{BLIFP} always results in higher profit values for each risk profile and all types of instances. In these comparisons, we do not include the values for the social welfare problem because they are not comparable, due to the existence of multiple solutions (with the same value of the objective function but a very different balance between the CVaR and the broker-dealer profit). As mentioned above, we emphasize that in all our experiments \textbf{BLIFP} always gives a higher profit for the broker-dealer and a better CVaR for the investor than \textbf{ILBFP}. {In this regard, it seems beneficial for the two parties that the investor knows the transaction costs on the securities before setting his portfolio.}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\label{Grap:BankProfit_compare}
\end{figure}
The last comparisons across models refer to the value of the sum of the {broker-dealer profit plus the CVaR of the investor}, in Figure \ref{Grap:Sum_compare}, and to the expected return, in Figure \ref{Grap:ExpectedReturn_compare}. These two figures show the corresponding values attained by the three proposed problems, \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}, for the different instances ($A,\ldots,I$) and two different risk profiles (see the figure captions). As theoretically proved in Proposition \ref{prop:social_welfare}, we can observe in Figure \ref{Grap:Sum_compare} that the value of the sum of the broker-dealer profit plus the CVaR {of the investor} is always greater for the social welfare model (\textbf{MSWP}) than for the two hierarchical problems, namely \textbf{BLIFP} and \textbf{ILBFP}. Finally, we compare {the expected return values obtained for the three problems}. From Figure \ref{Grap:ExpectedReturn_compare}, we cannot conclude any dominance relationship among the problems with respect to the expected return; therefore, the numerical experiments do not prescribe any preference relationship in this regard.
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for the three problems}}
\label{Grap:Sum_compare}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the expected return for the three problems}}
\label{Grap:ExpectedReturn_compare}
\end{figure}
\section{Concluding remarks and extensions}\label{sec:Conclusions}
We have presented {three} single-period portfolio optimization problem{s} with transaction costs, considering two different decision-makers: the investor and the financial intermediary. Including the financial intermediary {(broker-dealer)} as a decision-maker leads to the incorporation of the transaction costs as decision variables in the portfolio selection problem. The actions of both decision-makers were assumed to be hierarchical, and we have considered and analyzed the situations in which each of them acts as the leader. This hierarchical structure has been modeled using bilevel optimization. In addition, a social welfare model has also been studied.
In all cases, it has been assumed that the broker-dealer has to choose the unit transaction costs, for each security, from a discrete set of {possible costs}, maximizing its benefits, and that the investor aims to minimize the risk (optimizing his CVaR), ensuring a given expected return. Considering continuous sets of possible {values for the transaction costs} could be an interesting future research line.
In the considered models we assumed proportional transaction costs; however, other transaction cost structures, such as fixed transaction costs or convex piecewise linear costs, have been considered in the literature (for further details on transaction cost structures we refer the reader to \cite{man15_2}). These cost structures could be incorporated in our models by slightly modifying the resolution methods, at the price of increasing the complexity of problem-solving. For instance, in order to incorporate fixed fees and commissions, we should include some binary variables determining whether the investor chooses a security or not, and then accounting for its contribution to the transaction costs. The general tools from MILP can be adapted to solve the problem with this new cost structure. This could be another interesting future research line.
In order to solve the three proposed {problems}, MILP and LP formulations, as well as algorithms, have been proposed. By making variations in the set of {costs}, and in the parameters to model the CVaR and the expected return, $\alpha$ and $\mu_0$, different broker-dealer and investor profiles can be considered.
In our analysis in Sections \ref{Sect:Bank-leader} and \ref{Investor-leader}, all the problems have been presented, for simplicity, with only one follower. Nevertheless, they could be easily extended to more than one. In particular, in Section \ref{Sect:Bank-leader}, the problem has been studied from the broker-dealer point of view, that is, the broker-dealer aims to maximize its benefit by assuming that once the {costs} for the securities are set, a single investor will choose his portfolio according to the described goals. We remark that the same procedure could be applied to several followers (investors). In fact, in that {problem}, $F$ different profiles of followers (risk-averse, risk-taker, etc.) could be considered, and the broker-dealer's goal would be to maximize the overall benefit, or any linear function of its {costs}. This approach would allow the broker-dealer to improve the decision-making process in the cases where the same {costs} have to be set for all the investors, but different investor profiles are considered.
{A detailed computational study has been conducted using data from the Dow Jones Industrial Average.} We have compared the solution methods, the solutions and the risk profiles within {problems}, and the solutions across {them}. From our computational experience, we have observed that the broker-dealer-leader investor-follower {problem} results in better solutions for both the broker-dealer and the investor, in comparison with the investor-leader broker-dealer-follower {problem}. Furthermore, the social welfare {problem}, as theoretically proved, results in higher aggregated benefits.
\appendix
\section{Appendix}
\subsection{Comparing solutions and risk profiles within {problems}}
\subsubsection*{Discrete Pareto front for \textbf{MSWP}}
\begin{figure}
\caption{{\footnotesize Discrete Pareto front of the \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Discrete Pareto front of the \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Discrete Pareto front of the \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Discrete Pareto front of the \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Discrete Pareto front of the \textbf{MSWP}}}
\end{figure}
\subsection{Comparing solutions across problems}
\subsubsection*{CVaR}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the CVaR for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\subsubsection*{Broker-dealer profit}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit for \textbf{BLIFP} and \textbf{ILBFP}}}
\end{figure}
\subsubsection*{Broker-dealer profit + CVaR}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Values of the broker-dealer profit + CVaR for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\subsubsection*{Expected Return}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\begin{figure}
\caption{{\footnotesize Expected return for \textbf{BLIFP}, \textbf{ILBFP} and \textbf{MSWP}}}
\end{figure}
\end{document} |
\begin{document}
\title {
{\LARGE \bf
Symbolic Analysis-based Reduced Order
\\ Markov Modeling of Time Series Data
}
}
\author{Devesh K. Jha$^{a,1}$, Nurali Virani$^{a,2}$, Jan Reimann$^{b}$, Abhishek Srivastav$^{c}$, Asok Ray$^{a, b}$
\thanks {$^a$ Devesh K. Jha, N. Virani and Asok Ray are with Mechanical \& Nuclear Engineering Department, Pennsylvania State University, University Park, PA 16802, USA, and are partially supported by the U.S. Air Force Office of Scientific Research under Grant No. FA9550-15-1-0400; {\tt\{dkj5042,nnv105,axr2\}@psu.edu}}
\thanks{$^b$ Jan Reimann and Asok Ray are with Department of Mathematics, Pennsylvania State University, University Park, PA, 16802, USA; {\tt\{jan.reimann,axr2\}@psu.edu}. Jan Reimann was partially supported by NSF Grant DMS-1201263}
\thanks{$^c$ Abhishek Srivastav is with AI and Machine Learning Lab, GE Global Research Center, San Ramon, CA, USA; {\tt srivastav@ge.com}}
\thanks{$^1$ Currently with Mitsubishi Electric Research Laboratories, Cambridge, MA 02139}
\thanks{$^2$ Currently with AI and Machine Learning Lab, GE Global Research Center, Niskayuna, NY} \\
\small \textbf{Keywords}: Symbolic Analysis, Markov Modeling, Order reduction, Combustion Instability
}
\maketitle
\pagestyle{plain}
\begin{abstract}
This paper presents a technique for reduced-order Markov modeling for compact representation of time-series data. In this work, symbolic dynamics-based tools have been used to infer an approximate generative Markov model. The time series data are first symbolized by partitioning the continuous measurement space of the signal and then, the discrete sequential data are modeled using symbolic dynamics. In the proposed approach, the size of temporal memory of the symbol sequence is estimated from spectral properties of the resulting stochastic matrix corresponding to a first-order Markov model of the symbol sequence. Then, hierarchical clustering is used to represent the states of the corresponding full-state Markov model to construct a reduced-order (or size) Markov model with a non-deterministic algebraic structure. Subsequently, the parameters of the reduced-order Markov model are identified from the original model by making use of a Bayesian inference rule. The final model is selected using information-theoretic criteria. The proposed concept is elucidated and validated on two different data sets as examples. The first example analyzes a set of pressure data from a swirl-stabilized combustor, where controlled protocols are used to induce flame instabilities. Variations in the complexity of the derived Markov model represent how the system operating condition changes from a stable to an unstable combustion regime. In the second example, the data set is taken from NASA's data repository for prognostics of bearings on rotating shafts. We show that, even with a very small state-space, the reduced-order models are able to achieve comparable performance and that the proposed approach provides flexibility in the selection of a final model for representation and learning.
\end{abstract}
\section{Motivation and Introduction}
Hidden Markov model (HMM) is a widely used statistical learning tool for modeling uncertain dynamical systems~\cite{B06}, where the associated temporal data are used to infer a Markov chain with unobserved states. In this setting, the learning task is to infer the states and the corresponding parameters of the Markov chain. In addition to HMM, several other nonlinear techniques have been proposed for Markov modeling of time-series data. Symbolic time-series analysis-based Markov modeling is a recently proposed technique~\cite{R04} where the states of a Markov chain are represented as a collection of words (i.e., symbol blocks, also referred to as memory words) of different lengths, which can be identified from the time-series data on a discrete space with finite cardinality~\cite{SS04, R04, MR14, CL13}. The symbols are created from the continuously varying time-series data by projecting the data to a set with finite cardinality. A common feature of all these tools for Markov modeling of discrete sequences is that the Markov chain is induced by a probabilistic representation of a deterministic finite state automaton (DFSA), often called a probabilistic finite state automaton (PFSA)~\cite{VTDCC05}. While the PFSA-based inference provides a consistent, deterministic graph structure for learning, the deterministic algebraic structure is generally not a very compact representation and may often lead to a large number of states in the induced Markov model. To circumvent this problem, attempts have been made to reduce the state-space by merging statistically similar states of the model~\cite{MR14}. The problem is, however, that as these models are constructed by partitioning the phase space of the dynamical system, merging states that are statistically similar leads to algebraic inconsistency. On the other hand, if the states are merged to preserve the algebraic consistency, it leads to statistical impurity in the final models (i.e., states which have different statistics could be merged together). Other approaches for state aggregation in Markov chains can be found in~\cite{GPKK15, V12, XSB14}. However, these papers do not consider inference of the Markov model from data, which may not be suitable for the analysis of data-driven systems~\cite{D05}.
The state space of Markov models created by using symbolic analysis increases exponentially with the memory (or order) of the symbolic sequence. Estimating the right memory is critical for temporal modeling of the patterns observed in the sequential data. However, some of the states may be statistically similar, and merging them can thus reduce the size of the state-space. This paper presents reduced-order Markov modeling of time-series data to capture temporal patterns, where we estimate the size of the temporal memory of the symbolic data using the spectral properties of a PFSA whose states are words of length one~\cite{Srivastav2014, JSMR15}. The constraint of a deterministic algebraic structure is not imposed by the end objective, but by the choice of the data representation model. Thus we propose to merge states and remove the constraint of the deterministic algebraic properties associated with a PFSA, so that the states of the Markov chain are now collections of words over its alphabet, of the length estimated in the previous step. This state aggregation induces non-determinism in the finite state model. The parameters of the reduced-order Markov model are estimated by a Bayesian inference technique from the parameters associated with the higher-order Markov model. The final model for data representation is selected using information-theoretic criteria, and thus we obtain a unique stopping point to terminate the state-merging procedure. We also present a bound on the distortion of the predictive capability of the models upon reduction in the size of the state-space. The final model obtained is a generative model for the data; however, some predictive capability is lost as we remove the deterministic algebraic structure of a DFSA.
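As a simple illustration of the first two steps (a sketch only, not the exact procedure used in the sequel), a one-dimensional time series can be uniformly partitioned into a finite alphabet, the stochastic matrix of the resulting order-one symbol chain can be estimated, and the decay of its eigenvalue magnitudes can then be inspected to judge the required temporal memory:
\begin{verbatim}
import numpy as np

def symbolize_and_spectrum(x, n_symbols=8):
    # Uniformly partition the signal range into n_symbols cells (symbols),
    # estimate the row-stochastic matrix of the order-1 symbol chain, and
    # return the sorted eigenvalue magnitudes used to assess temporal memory.
    edges = np.linspace(np.min(x), np.max(x), n_symbols + 1)[1:-1]
    s = np.digitize(x, edges)                   # symbol sequence
    P = np.zeros((n_symbols, n_symbols))
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)   # guard empty rows
    eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return s, P, eigvals
\end{verbatim}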
The proposed technique of state merging is inspired by time-critical applications where it is imperative to arrive at a reliable decision quickly because the dynamics of the monitored process is fast. In such applications, there are strict constraints on accuracy as well as on the time needed to reach a decision. In this paper, we illustrate the concepts using two different datasets. We discuss in detail the example of combustion instability, which is a highly nonlinear and complex phenomenon that results in severe structural degradation in jet turbine engines. Good surveys on the current understanding of the mechanisms of combustion instability can be found in~\cite{OAL15, SSDC03, CDSBM14, HY09, MBDSC12}. Active combustion instability control (ACIC) with fuel modulation has proven to be an effective approach for reducing pressure oscillations in combustors~\cite{BMJK06, BMH07}. Based on the work available in the literature, one can conclude that the performance of ACIC is primarily limited by the large delay in the feedback loop and the limited actuator bandwidth~\cite{BMJK06, BMH07}. Early detection of combustion instability can potentially alleviate the problems with delay in the ACIC feedback loop and thus improve the performance. Some recent work on detection and prediction of combustion instabilities can be found in~\cite{JSR16, VJR16, SCRR16, NTS14, MS15}. While the results in these papers are encouraging, they do not interpret the expected changes in the data-driven model that could be observed during changes in the operating regime of the underlying process. In contrast to the work reported in the literature, we present an overall picture of the changes in the underlying stochastic model structure and parameters during the complex instability phenomenon.
\textbf{Contributions.} This paper presents a technique for Markov modeling of time-series data using a PFSA with a nondeterministic algebraic structure. Nondeterminism is induced by merging states of a PFSA with deterministic algebraic structure inferred from discrete sequential data, which in turn allows a very compact representation of temporal data. In contrast to the approach in~\cite{MR14}, we present a method that uses information-theoretic criteria to arrive at a consistent stopping criterion for model selection. The resulting reduced-order model has fewer parameters to estimate; this in turn leads to faster convergence rates and thus faster decisions during test (or operation). We also present a bound on the distortion in the predictive capability of the models due to state-space reduction, using the Hamming distance between the sequences generated by the original and final models. The algorithms presented in the paper are validated on two different datasets: pressure data obtained from a swirl-stabilized combustor to monitor thermo-acoustic instability, and a public dataset for bearing prognostics. We show changes in the complexity of the pressure data as the process moves from stable to unstable through the transient phase, which are then used to arrive at a criterion that provides perfect class separability. Apart from the results on Markov modeling, the results on combustion instability could be of independent interest to the combustion community.
\section{Background and Mathematical Preliminaries}
Symbolic analysis of time-series data is a recent approach where continuous
sensor data are converted to symbol sequences via partitioning of the continuous
domain~\cite{SAX07, R04}. The dynamics of the symbol sequences are
then modeled as a probabilistic finite state automaton (PFSA), which is defined
as follows:
\begin{definition}[PFSA]\label{defn:PFSA}
A probabilistic finite state automaton (PFSA) is a tuple $G = (qSet, \alphaphabetSet, \delta, \bm{M})$ where
\begin{itemize}
\item $qSet$ is a finite set of states of the automaton;
\item $\alphaphabetSet$ is a finite alphabet of symbols $s \in \alphaphabetSet$;
\item $\delta: qSet \times \alphaphabetSet \rightarrow qSet$ is the state transition function;
\item {$\bm{M}: qSet \times \alphaphabetSet \rightarrow [0, 1]$ is the
$\card{qSet}\times\card{\alphaphabetSet}$ emission matrix. The matrix
$\bm{M} = [m_{ij}]$ is row stochastic such that $m_{ij}$ is the
probability of generating symbol $s_{j}$ from state $q_{i}$}.
\end{itemize}
\end{definition}
\begin{remark} The PFSA defined above has a deterministic algebraic structure, which is governed by the transition function $\delta$; thus a symbol emission from a particular state leads to a fixed next state. However, the symbol emissions are probabilistic (represented by the emission matrix). In contrast, the transition function of a non-deterministic finite state automaton is given by a map $\delta: qSet \times \alphaphabetSet \rightarrow 2^{qSet}$, where $2^{qSet}$ denotes the power set of $qSet$ and includes all subsets of $qSet$. This idea is illustrated in Figure~\ref{fig:NonDeterminism}, which shows that the same symbol can lead to multiple states, albeit in a probabilistic fashion. This allows more flexibility in modeling at the expense of some predictive accuracy.
\end{remark}
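As an illustration of the above definition, a minimal Python sketch of a PFSA with a dictionary-based transition function and a row-stochastic emission matrix is given below; the class and variable names are hypothetical and serve only to make the definition concrete, not to reflect the implementation used in this work.
\begin{verbatim}
import numpy as np

class PFSA:
    """Minimal PFSA sketch: states, alphabet, deterministic transition
    function delta[(q, s)] -> next state, and row-stochastic emission
    matrix M (M[i, j] = probability of emitting symbol j from state i)."""
    def __init__(self, states, alphabet, delta, M):
        self.states, self.alphabet = states, alphabet
        self.delta, self.M = delta, np.asarray(M)

    def sample(self, q0, length, seed=0):
        rng = np.random.default_rng(seed)
        q, seq = q0, []
        for _ in range(length):
            i = self.states.index(q)
            s = int(rng.choice(self.alphabet, p=self.M[i]))  # probabilistic emission
            q = self.delta[(q, s)]                           # deterministic transition
            seq.append(s)
        return seq

# two-state, binary-alphabet example
pfsa = PFSA(states=['q0', 'q1'], alphabet=[0, 1],
            delta={('q0', 0): 'q0', ('q0', 1): 'q1',
                   ('q1', 0): 'q0', ('q1', 1): 'q1'},
            M=[[0.9, 0.1], [0.3, 0.7]])
print(pfsa.sample('q0', 10))
\end{verbatim}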
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig1_eps.eps}
\caption{Graphical model showing non-determinism in a PFSA. The symbol $1$ emitted from state $q_1$ leads to different states with fixed probabilities indicating non-deterministic behavior.}
\label{fig:NonDeterminism}
\end{figure}
For symbolic analysis of time-series data, a class of PFSAs called
$D$-Markov machines has been proposed~\cite{R04} as a sub-optimal but
computationally efficient approach to encode the dynamics of symbol sequences as
a finite state machine.
\begin{definition} (\textbf{$D$-Markov Machine}~\cite{R04, MR14}) \label{def:D-Markov} A $D$-Markov machine is a statistically stationary stochastic process $S= \cdots a_{-1} a_{0} a_{1} \cdots $ (modeled by a PFSA in which each state is represented by a finite history of $D$ symbols), where the probability of occurrence of a new symbol depends only on the last $D$ symbols, i.e.,
\begin{equation}\label{eq:D-Markov}
\prob(s_n \mid \cdots s_{n-D} \cdots s_{n-1} ) = \prob(s_n \mid s_{n-D} \cdots s_{n-1}) \nonumber
\end{equation}
where $D$ is called the depth of the Markov machine.
\end{definition}
A $D$-Markov machine is thus a $D^{th}$-order Markov approximation of the discrete symbolic process. For most stable and controlled engineering systems that tend to forget
their initial conditions, a finite length memory assumption is reasonable. The $D$-Markov machine is represented as a PFSA and states of this PFSA are words over alphabet $\alphaphabetSet$ of length $D$ (or less);
the state transitions are described by a sliding block code of memory $D$
and anticipation length of one~\cite{LM95}.
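To make the construction concrete, the following Python sketch estimates the emission matrix of a $D$-Markov machine by frequency counting over words of length $D$; the function name and the Laplace-style uniform prior are illustrative assumptions, and only the words actually observed in the sequence are instantiated as states.
\begin{verbatim}
from collections import defaultdict
import numpy as np

def d_markov_emission(seq, alphabet, D, prior=1.0):
    """Estimate the emission matrix of a D-Markov machine by frequency
    counting with a uniform (Laplace) prior; states are the words of
    length D observed in the sequence.  Illustrative sketch only."""
    counts = defaultdict(lambda: prior * np.ones(len(alphabet)))
    idx = {s: j for j, s in enumerate(alphabet)}
    for k in range(D, len(seq)):
        state = tuple(seq[k - D:k])        # word of length D
        counts[state][idx[seq[k]]] += 1    # next-symbol count
    states = sorted(counts)
    M = np.array([counts[q] / counts[q].sum() for q in states])
    return states, M                       # M is row stochastic

# toy usage on a ternary sequence
seq = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1, 2]
states, M = d_markov_emission(seq, alphabet=[0, 1, 2], D=2)
\end{verbatim}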
For systems with fading memory it is expected that the predictive influence of a
symbol progressively diminishes. In this context, depth is defined as follows.
\begin{definition}[Depth]\label{def:depth}
Let $sSeq = s_{1}\dots s_{k}s_{k+1}s_{k+2}\dots$ be the
observed symbol sequence where each $s_{j}\in\alphaphabetSet\;\forall\; j \in
\mathds{N}$. Then, the depth of the process generating $sSeq$ is defined as the
length $D$ such that:
\begin{equation}\label{eq:depthTrueDefn}
\prob(s_{k}|s_{k - 1},\dots,s_{1}) = \prob(s_{k}|s_{k - 1},\dots,s_{k - D})
\end{equation}
\end{definition}
An accurate estimation of the depth of the symbolic dynamical process is required for precise modeling of the underlying dynamics of the discrete sequence. Next, we introduce an information-theoretic metric that is used for merging the states of the Markov model later in the next section.
\begin{definition}[Kullback-Leibler Divergence]\label{def:KLD}~\cite{G90}
The Kullback-Leibler (K-L) divergence of a discrete probability distribution $P$ from another distribution $\tilde{P}$ is defined as follows.
\begin{equation}
D_{\textrm {KL}}(P\|\tilde{P})=\sum_{x\in X} {p}_X(x)\log\bigg(\frac{{p}_X(x)}{\tilde{p}_X(x)}\bigg) \nonumber
\end{equation}
It is noted that the K-L divergence is not a proper distance as it is not symmetric. However, it is commonly symmetrized as $d(P,\tilde{P})= D_{\textrm {KL}}(P\|\tilde{P})+D_{\textrm {KL}}(\tilde{P}\|P)$, which is referred to as the K-L distance between the distributions $P$ and $\tilde{P}$.
\end{definition}
This distance is used to find out the structure in the set of the states of the PFSA-based Markov model whose states are words, over the alphabet of the PFSA, of length equal to the depth estimated for the discretized sequence.
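For reference, a short Python sketch of this symmetrized K-L distance is given below; the small constant added for numerical stability is an assumption of the sketch rather than part of the definition.
\begin{verbatim}
import numpy as np

def kl_distance(p, q, eps=1e-12):
    """Symmetric K-L distance d(P, Q) = D_KL(P||Q) + D_KL(Q||P) between
    two discrete distributions (eps avoids log(0); sketch only)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
\end{verbatim}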
\section{Technical Approach}
In this section, we present the details of the proposed approach for inferring a Markov model from time-series data. As discussed earlier, the first step is the discretization of the time-series data to generate a discrete symbol sequence. While it is possible to optimize the symbolization of the time series using some optimization criterion, we do not discuss such a technique here. The data are discretized using maximum entropy partitioning (MEP)~\cite{RR06}, which follows the unbiased principle of maximizing the entropy of the discrete sequence (a short sketch of this discretization step is given after the list below). The proposed approach for Markov modeling then consists of the following four critical steps.
\begin{itemize}
\item Estimate the approximate size of temporal memory (or order) of the symbol sequence.
\item Cluster the states of the high-order Markov model.
\item Estimate the parameters of the reduced-order Markov model (i.e., the transition matrix).
\item Select the final model using information theoretic scores (described below, Section~\ref{subsec:MDL}).
\end{itemize}
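The discretization step mentioned above can be sketched as follows, assuming a quantile-based implementation of maximum entropy partitioning; the function name is hypothetical.
\begin{verbatim}
import numpy as np

def mep_symbolize(x, n_symbols=3):
    """Maximum entropy partitioning sketch: choose cell boundaries as
    empirical quantiles so each cell holds (approximately) the same
    number of samples, then map each observation to its cell index."""
    x = np.asarray(x, dtype=float)
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)   # symbols in {0, ..., n_symbols - 1}

symbols = mep_symbolize(np.random.randn(10000), n_symbols=3)
\end{verbatim}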
The memory of the discrete sequence is estimated using a recently introduced method based on the spectral analysis of the depth-$1$ Markov model induced by a PFSA~\cite{Srivastav2014, JSMR15}. It is noted that these steps are followed during training to estimate the approximate model for the data; during test, the parameters are estimated only for the reduced-order model.
The key ideas behind these steps are explained in the next section.
\subsection{Estimation of Reduced-Order Markov Model}\label{subsec:reducedorder}
Depth $D$ of a symbol sequence has been redefined in~\cite{Srivastav2014}
as the number of time steps after which the probability of the current symbol is
independent of any past symbol, i.e.,
\begin{equation}\label{eq:depthDefn}
\prob(s_{k}|s_{k - n}) = \prob(s_{k}) \ \forall n>D
\end{equation}
Note that dependence in the proposed definition (eq.~\ref{eq:depthDefn}) is
evaluated on individual past symbols using $\prob(s_{k}|s_{k - n})$ as
opposed to assessing dependence on words of length $D$ using $
\prob(s_{k}|s_{k - 1},\dots,s_{k - D})$. It is shown that if
the observed process is {\it forward causal}, then observing any additional
intermediate symbols $s_{k- 1},\dots,s_{k - n + 1}$ cannot induce a
dependence between $s_{k }$ and $s_{k - n}$ if it did not exist at the
individual level~\cite{Srivastav2014}.
Let $\bm{\Pi} = [\pi^{(1)}_{ij}]$ be the one-step transition probability
matrix of the PFSA $G$ constructed from this symbol sequence, i.e.,
\begin{equation}\label{eq:stateTransition1step}
\bm{\Pi} = \prob(s_{k}|s_{k - 1})
\end{equation}
Then, using the distance of the transition matrix after $n$ steps from the stationary
point, the depth can be defined as a length $D$ such that
\begin{equation}\label{eq:depthTrace}
\abs{\trace{\bm{\Pi}^{n}} - \trace{\bm{\Pi}^{\infty}}} \leq \sum_{j = 2}^{J}
\abs{\lambda_{j}}^n < \epsilon \ \forall n > D
\end{equation}
where $J$ is the number of non-zero eigenvalues of $\bm{\Pi}$. Thus, the depth $D$ of the symbol sequence is estimated for a choice of $\epsilon$ by estimating the stochastic matrix of the one-step PFSA. Next, another pass over the data is made to estimate the parameters of the PFSA whose states are words over $\alphaphabetSet$ of length $D$, i.e., $\bm{\Pi} = \prob(s_{k}|s_{k - 1},\dots, s_{k - D})$. It is noted that this step is critical for modeling accuracy.
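A compact sketch of this depth estimate from the spectrum of the one-step stochastic matrix is given below; since the non-unity eigenvalue magnitudes are typically less than one, checking the condition at $n = D+1$ is used as a proxy for all $n > D$, which is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def estimate_depth(Pi, eps=0.05, max_depth=20):
    """Estimate depth D from the 1-step stochastic matrix Pi using the
    spectral criterion: the smallest D such that the sum of the non-unity
    eigenvalue magnitudes raised to the (D+1)-th power falls below eps."""
    lam = np.sort(np.abs(np.linalg.eigvals(Pi)))[::-1]
    lam = lam[1:]                  # drop the stationary eigenvalue (= 1)
    for D in range(1, max_depth + 1):
        if np.sum(lam ** (D + 1)) < eps:
            return D
    return max_depth
\end{verbatim}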
The states of the reduced-order Markov model are then estimated by partitioning the set of words over $\alphaphabetSet$ of length $D$ estimated in the last step. This is done using an agglomerative hierarchical clustering approach. The advantage of the hierarchical clustering approach is that it helps visualize the structure of the set of original states using an appropriate metric. Agglomerative hierarchical clustering is a bottom-up clustering approach~\cite{XW05} that generates a sparse network (e.g., a binary tree) over the state set $qSet$ (where $|qSet|=|\alphaphabetSet|^D$) by successive addition of edges between the elements of $qSet$. Initially, each of the states $q_1,q_2,\dots,q_n$ is in its own cluster $C_1,C_2,\dots, C_n$, where $C_i\in \mathcal{C}$, the set of all clusters of the hierarchical cluster tree. The distance between any two states in $qSet$ is measured using the K-L distance between the symbol emission probabilities conditioned on them, i.e.,
\begin{align}\label{eq:kldistance}
d(q_i,q_j)& = D_{\textrm{KL}}(\prob(\alphaphabetSet|q_i)\|\prob(\alphaphabetSet|q_j))\nonumber \\
&+D_{\textrm{KL}}(\prob(\alphaphabetSet|q_j)\|\prob(\alphaphabetSet|q_i))
\end{align}
where the terms on the right have the following meaning.
\begin{align}
& D_{\textrm{KL}}(\prob(\alphaphabetSet|q_i)\|\prob(\alphaphabetSet|q_j)) \nonumber \\
& = \sum_{s \in \alphaphabetSet}\prob(s|q_i)\log\bigg( \frac{\prob(s|q_i)}{\prob(s|q_j)}\bigg) \nonumber
\end{align}
In terms of the distance measured by eq.~\eqref{eq:kldistance}, the pair of clusters that are nearest to each other are merged, and this step is repeated until only one cluster is left. The tree structure displays the order of splits in the state set of the higher-order Markov model and is used to aggregate the states close to each other. For clarity of presentation, we show an example of a Markov chain with $27$ states and $3$ symbols on a simplex plane in Figure~\ref{fig:Simplexcluster}, where each red pentagon on the simplex represents one row of the symbol emission matrix. Hierarchical clustering is used to find the structure of the state set on the simplex plane using the K-L distance. The set of states clustered together can then be obtained based on the number of states desired in the final Markov model.
The overall algorithm is presented as pseudo-code in Algorithm~\ref{algorithm:Modeling}. This algorithm is used to find the parameters of the models during training. The parameters during test are estimated using the clustering map $f_{N_{\textrm{max}}}$, as discussed further in the next section. In the later sections we show how an information-theoretic criterion can be used to select the appropriate model, i.e., to terminate the state-merging algorithm or to select a final model from the set of reduced-order models. Through numerical experiments on two different datasets we also illustrate the main motivation of this work: although the right memory is required for accurate modeling of the symbolic process, the state space need not consist of all words corresponding to the estimated memory, and sufficiently high predictive accuracy can be achieved even with a smaller state space. We are able to achieve this trade-off between model complexity and predictive modeling accuracy using the information-theoretic criteria.
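The state-clustering step can be sketched in Python as follows, using SciPy's hierarchical clustering on the pairwise symmetric K-L distances between the rows of the emission matrix; average linkage is used here as one reasonable choice, and \texttt{kl\_distance} refers to the helper sketched earlier.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_states(M, n_clusters):
    """Agglomerative clustering of Markov states using the symmetric
    K-L distance between rows of the emission matrix M (|Q| x |A|).
    Returns an assignment map f: state index -> cluster label."""
    n = M.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = kl_distance(M[i], M[j])
    Z = linkage(squareform(D), method='average')   # cluster tree
    return fcluster(Z, t=n_clusters, criterion='maxclust') - 1
\end{verbatim}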
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/simplex_fig1_1.eps}
\caption{The symbol emission probabilities for a Markov chain with $3$ symbols are shown on a simplex. Symmetric K-L distance is used to find the structure in the state-set in the information space and the states are clustered based on the revealed structure.}
\label{fig:Simplexcluster}
\end{figure}
\begin{algorithm}[h] \small
\SetKwInput{KwIn}{Input}
\SetKwInput{KwOut}{Output}
\KwIn {The observed symbol sequence $sSeq=\{\dots s_1s_2s_3\dots|s_i \in \alphaphabetSet\}$}
\KwOut {The final Markov model, $\mathcal{M}=(\tilde{qSet},\tilde{\bm{M}},\tilde{\bm{\Pi}})$}
Estimate the $\bm{\Pi}$ matrix for the 1-step Markov model using frequency counting with a uniform prior\;
Estimate the size of temporal memory, $D(\epsilon)$, for $sSeq$ using equation~\eqref{eq:depthTrace}\;
Estimate $\bm{M}$ and $\bm{\Pi}$ for the $D(\epsilon)$-Markov model using frequency counting with a uniform prior\;
$\mathcal{C}_{\mid qSet\mid}=\{q_i\mid q_i \in qSet\}$\;
\For{$i = \mid qSet\mid-1,\dots,1$}{{find distinct clusters $A,B \in \mathcal{C}_{i+1}$ minimizing $d(A\cup B)$\;}{$\mathcal{C}_i:=(\mathcal{C}_{i+1}\setminus\{A,B\})\cup \{A\cup B\}$}}
\Return{$\mathcal{C}_1,\dots,\mathcal{C}_{\mid qSet\mid}$ and $f_i:qSet \rightarrow \mathcal{C}_i$} $\forall i \in \{1,\dots,\mid qSet\mid\}$
Calculate the parameters of the reduced model using $\tilde{qSet}=\mathcal{C}_{N_{\textrm{max}}}$, $f_{N_{\textrm{max}}}$, and equations~\eqref{eq:ParameterEst} through~\eqref{eq:pq1q2}\;
Calculate the Log-likelihood for models with Equation~\eqref{eq:depth_loglikelihood}\;
The final model is selected using the AIC or BIC criteria explained in Section~\ref{subsec:MDL}\;
\caption{Reduced Order Markov Modeling}
\label{algorithm:Modeling}
\end{algorithm}
\subsection{Parameter Estimation of the Reduced-Order Markov Model}\label{subsec:DBN}
The parameters of the Markov model obtained after clustering the states of the original PFSA (with $|\alphaphabetSet|^D$ states) are estimated using a Bayesian inference technique from the parameters estimated for the PFSA. In the proposed approach, the state transition matrix $\bm{\Pi}$, the emission matrix $\bm{M}$, and the state probability vector $pVec$ of the original PFSA model $G$ are available, along with the deterministic assignment map $f:qSet \rightarrow \widetilde{qSet}$ that assigns each state in $qSet$ (i.e., the state set of the original model) to one of the states in $\widetilde{qSet}$ (i.e., the state set of the reduced-order model).
Since the reduced-order model can be represented by the tuple $\widetilde{G} = (\widetilde{qSet}, \widetilde{\bm{\Pi}})$, where $\widetilde{\bm{\Pi}} = [\tilde{\pi}_{ij}]$ is the state transition matrix, we employ a Bayesian inference technique to infer the individual transition probabilities $\tilde{\pi}_{ij} = \prob(\tilde{q}_{k+1} = j \mid \tilde{q}_{k} = i)$ for all $i, j \in \widetilde{qSet}$.
Let $qVar_{k}$ be the random variable denoting the state of the PFSA model at some time step $k \in \mathds{N}$, and let $sVar_{k}$ denote the symbol emitted from that state; this probabilistic emission process is governed by the emission matrix $\bm{M}$. The state of the reduced-order model is obtained from a deterministic mapping of the state of the PFSA model, so the state of this model is also a random variable, denoted by $\widetilde{qVar}_{k} = f(qVar_{k})$. The Bayesian network representing the dependencies between these variables is shown in both recursive and unrolled form in Figure~\ref{fig:DBN}.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{figures/DBN2.eps}
\caption{Graphical models representing the dependencies between the random variables}
\label{fig:DBN}
\end{figure*}
The conditional density $\prob(\widetilde{qVar}_{k} = \tilde{q} \mid qVar_{k} = q)$ can be evaluated by checking whether state $q$ belongs to the state cluster $\tilde{q}$, assigning the value $1$ if true and $0$ otherwise. Since $\widetilde{qSet}$ partitions the set $qSet$, the conditional density is well-defined. Thus, it can be written as
\begin{align}\label{eq:pc1q1}
\prob(\widetilde{qVar}_{k} = \tilde{q} \mid qVar_{k} = q) = \indicator{\tilde{q}}{q},
\end{align}
where $\operatorname{I}$ is the indicator function with $\indicator{\tilde{q}}{q} = 1$ if element $q$ belongs to the set $\tilde{q}$, and $0$ otherwise. The derivation of the Markov model $\prob(\widetilde{qVar}_{k+1}\mid \widetilde{qVar}_{k})$ using $\prob(qVar_{k+1}\mid qVar_{k})$, the stationary probability vector $pVec$, and the assignment map $f$ is shown below.
\begin{align}\label{eq:ParameterEst}
&\prob(\widetilde{qVar}_{k+1}\mid \widetilde{qVar}_{k}) = \sum_{q \in qSet} \prob(\widetilde{qVar}_{k+1}, qVar_{k+1} = q \mid \widetilde{qVar}_{k}) \\
& \text{(Marginalization)}\notag\\
&\phantom{\prob(\widetilde{qVar}} = \sum_{q \in qSet} \prob(qVar_{k+1} = q \mid \widetilde{qVar}_{k}) \prob(\widetilde{qVar}_{k+1} \mid qVar_{k+1} = q)\\
& \text{(Factorization using Figure~\ref{fig:DBN})}\notag\\
&\phantom{\prob(\widetilde{qVar}} = \sum_{q \in qSet} \prob(qVar_{k+1} = q \mid \widetilde{qVar}_{k}) \indicator{\widetilde{qVar}_{k+1}}{q} \\
& \text{(using~\eqref{eq:pc1q1})}\notag\\
&\phantom{\prob(\widetilde{qVar}} = \sum_{q \in \widetilde{qVar}_{k+1}} \prob(qVar_{k+1} = q \mid \widetilde{qVar}_{k}) \label{eq:pc2c1}.
\end{align}
We can obtain $\prob(qVar_{k+1} \mid \widetilde{qVar}_{k})$ from Bayes' rule as
\begin{align}\label{eq:pq2c1}
\prob(qVar_{k+1} \mid \widetilde{qVar}_{k}) = \dfrac{\prob(\widetilde{qVar}_{k} \mid qVar_{k+1})\prob(qVar_{k+1})}{\sum_{q \in qSet}\prob(\widetilde{qVar}_{k} \mid qVar_{k+1}=q)\prob(qVar_{k+1} = q)}.
\end{align}
Following the steps to obtain~\eqref{eq:pc2c1}, we also derive
\begin{align}\label{eq:pc1q2}
\prob(\widetilde{qVar}_{k} \mid qVar_{k+1}) = \sum_{q \in \widetilde{qVar}_{k}} \prob(qVar_{k} = q \mid qVar_{k+1}) .
\end{align}
We can obtain $\prob(qVar_{k} \mid qVar_{k+1})$ from Bayes' rule as
\begin{align}\label{eq:pq1q2}
\prob(qVar_{k} \mid qVar_{k+1}) = \dfrac{\prob(qVar_{k+1} \mid qVar_{k})\prob(qVar_{k})}{\sum_{q \in qSet}\prob(qVar_{k+1} \mid qVar_{k}=q)\prob(qVar_{k} = q)}.
\end{align}
Note that, for the distributions $\prob(qVar_{k})$ and $\prob(qVar_{k+1})$, we use the stationary probability $pVec$. Using equations \eqref{eq:pc2c1}, \eqref{eq:pq2c1},
\eqref{eq:pc1q2}, and \eqref{eq:pq1q2} together, one can easily obtain the desired state transition matrix $\widetilde{\bm{\Pi}}$ of the reduced-order model. Once the state cluster set $\widetilde{qSet}$ and the state transition matrix $\widetilde{\bm{\Pi}}$ are available, the reduced-order model is completely defined.
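Under stationarity, the chain of equations above reduces to a weighted lumping of the full transition matrix; the following Python sketch (with hypothetical argument names) illustrates this computation of $\widetilde{\bm{\Pi}}$ from $\bm{\Pi}$, the stationary vector, and the assignment map.
\begin{verbatim}
import numpy as np

def reduce_transition_matrix(Pi, p, f, n_clusters):
    """Aggregate a |Q| x |Q| transition matrix Pi into an
    n_clusters x n_clusters matrix, given the stationary distribution p
    over Q and the assignment map f (cluster label per original state).
    Under stationarity the Bayesian derivation in the text reduces to
    this stationary-weighted lumping; sketch only."""
    Pi, p, f = np.asarray(Pi), np.asarray(p), np.asarray(f)
    Pi_red = np.zeros((n_clusters, n_clusters))
    for i in range(n_clusters):
        w = p * (f == i)            # stationary weights of cluster i
        for j in range(n_clusters):
            Pi_red[i, j] = (w @ Pi[:, f == j].sum(axis=1)) / w.sum()
    return Pi_red
\end{verbatim}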
\subsection{Model Selection Using Information-Theoretic Criteria}\label{subsec:MDL}
In this section, we describe the model selection process that accompanies the state-merging procedure for model inference. We compute ``penalized'' likelihood estimates for the different models, and the model with the lowest score is selected as the optimal model.
The (unpenalized) log-likelihood of a symbol sequence $sSeq$ given a Markov model $G$ is computed as follows:
\begin{align}\label{eq:loglikelihood_Markov}
\mathcal{L}(sSeq | G) \cong
\sum_{k=1}^{N}\log \prob\left( s_{k} | q_k \right)
\end{align}
where the effects of the initial state are ignored because they become negligible for long statistically stationary symbol sequences. It is noted that with a finite symbol sequence, the log-likelihood is always finite. Furthermore, with the Markov models considered in this paper, the sum simplifies to the following form.
\begin{align}\label{eq:loglikelihood_DMarkov}
\mathcal{L}(sSeq | G) \cong
\sum_{k=D+1}^{N}\log \prob\left( s_{k} | s_{k-1},\dots,s_{k-D} \right)
\end{align}
As discussed earlier, the states are merged using hierarchical clustering and thus, for every desired number of final states we get the deterministic map $f_{N_{\textrm{max}}}$ which determines how the original states are partitioned using the hierarchical clustering. This map is known for every terminal number of states and thus, we can find the log-likelihood of the symbol sequence using the following relationship.
\begin{align}\label{eq:depth_loglikelihood}
\mathcal{L}(sSeq | \tilde{G}) \cong
\sum_{k=D+1}^{N}\log \prob\left( s_{k} | \tilde{q}_k=f_{N_{\textrm{max}}}(q_k) \right)
\end{align}
where, $\tilde{q}_k$ is the state of the reduced model and $q_k$ is the state of the original full-order model.
In the next step of the model selection process, a ``complexity penalty'' is added to the log-likelihood estimates, thereby balancing goodness of fit against the complexity of the model (and hence trying to prevent overfitting). We apply two widely-used such model selection functions, namely the Akaike information criterion (AIC)~\cite{Akaike:1974a} and the Bayesian information criterion (BIC)~\cite{Schwarz:1978a}:
\begin{enumerate}
\item $\mathcal{M}_{\textrm{BIC}}=-2\mathcal{L}(sSeq | \tilde{G})+K\log(N)$, where $K$ is the number of free parameters and $N$ is the number of observations.
\item $\mathcal{M}_{\textrm{AIC}}=-2\mathcal{L}(sSeq | \tilde{G})+2K$, where $K$ is the number of free parameters.
\end{enumerate}
The number of free parameters to be estimated from the data is the number of symbol emission parameters, i.e., $K=\mid\alphaphabetSet\mid \mid\tilde{qSet}\mid$. It is noted that this allows model selection for individual symbol sequences. The criterion thus provides a terminal condition for state merging; however, different symbol sequences can have different models. The model with the minimum score is selected as the best model. Through the results presented in the next sections we illustrate that most of the temporal and predictive capabilities can be preserved by models with a very small number of states when compared to the original model.
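The selection step can be sketched as follows, assuming the per-model log-likelihoods from eq.~\eqref{eq:depth_loglikelihood} have already been computed; function and argument names are illustrative.
\begin{verbatim}
import numpy as np

def select_model(log_likelihoods, n_states_list, n_symbols, N,
                 criterion="BIC"):
    """Pick the reduced-order model minimizing AIC or BIC.
    log_likelihoods[i] is L(S | G_i) for a model with n_states_list[i]
    states; K = |A| * |Q~| free emission parameters; N is the sequence
    length.  Sketch only."""
    scores = []
    for ll, n_states in zip(log_likelihoods, n_states_list):
        K = n_symbols * n_states
        penalty = K * np.log(N) if criterion == "BIC" else 2 * K
        scores.append(-2.0 * ll + penalty)
    best = int(np.argmin(scores))
    return n_states_list[best], scores
\end{verbatim}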
\begin{remark}
The final Markov model is a finite-depth approximation of the original time-series data. However, compared to the PFSA-based $D$-Markov machines in~\cite{R04, MR14}, the aggregated model has a non-deterministic algebraic structure, i.e., the same symbol emitted from a state can lead to different states. While this results in some loss of predictive capability compared to the models in~\cite{R04, MR14}, it allows us to compress the size of the model as required by the application at hand. It also allows faster convergence rates for the symbol emission probabilities, as fewer parameters need to be estimated from data, which can lead to faster decisions during testing.
\end{remark}
In the rest of the paper, we will present a Hamming distance-based bound for distortion in the predictive capabilities of reduced models and demonstrate the utility of these models in practical problems of fault/anomaly detection from time-series data.
\section{Analysis of the Proposed Algorithm}\label{sec:analysis}
In this section, we present a bound on the distortion of the model due to the reduction of the state space of the Markov model, expressed in terms of the Hamming distance between two symbol sequences. We first present Pinsker's inequality~\cite{G90}, which relates the information divergence to the variational distance between probability measures defined on arbitrary spaces. This is followed by another theorem which can be used to derive Hamming distance bounds from the informational divergence.
\begin{theorem}[Pinsker's inequality]~\cite{G90}
Let $P$ and $Q$ be two probability distributions on a measurable space $(\mathds{X},\Sigma)$. Then, the following is true
\begin{eqnarray}
d_{TV}(P,Q)\leq \sqrt{\frac{1}{2}D_{\textrm {KL}}(P\|Q)}
\end{eqnarray}
where $d_{TV}(P,Q)=\sup\limits_{A\in \Sigma} \{|P(A)-Q(A)|\}$ is the total variation distance.
\end{theorem}
\begin{theorem}~\cite{M96}
Let $\mathds X$ be a countable set and let us denote by $x^n$ the sequence $(x_1,x_2,\dots,x_n)\in \mathds{X}^n$. Let $q^n$ be a Markov measure on $\mathds X^n$, that is, $q(x^n)=q(x_1)\prod\limits_{i=2}^n q_i(x_i|x_{i-1})$. Then for any probability measure $p^n$ on $\mathds X^n$, the following is true
\begin{eqnarray}
\bar{d}(p^n,q^n)\leq \bigg[\frac{1}{2n}D_{\textrm {KL}}(p^n\|q^n)\bigg]^{1/2}
\end{eqnarray}
where $\bar{d}$ denotes the normed Hamming distance on $\mathds X^n \times \mathds X^n:$
\begin{equation}
\bar{d}(x^n,y^n)=n^{-1}\sum\limits_{i=1}^n d(x_i,y_i),
\end{equation}
where $d(x_i,y_i)=1$ if $x_i\neq y_i$ and $0$ otherwise. The $\bar{d}$-distance between $p^n$ and $q^n$ is
\begin{eqnarray}
\bar{d}(p^n,q^n)=\min E \bar{d}(\hat{X}^n,X^n),
\end{eqnarray}
where $\min$ is taken over all joint distributions with marginals $p^n=\mathrm{dist}\, \hat{X}^n$ and $q^n=\mathrm{dist}\, {X}^n$ and $E$ denotes the expectation operator.
\end{theorem}
The above theorem provides a way to bound the Hamming distance between sequences generated by two different distributions. Thus, using the above theorem, we find a bound on the Hamming distance between the symbol sequences generated by the reduced-order Markov model and the original model by estimating the K-L distance between the measures on symbol sequences induced by these models. An approximate estimate of the K-L distance between the original and a reduced model can be expressed and estimated as shown in the following.
Let the original $D$-Markov model be denoted by $\mathcal{M}$ and the reduced-order model by $\hat{\mathcal{M}}$. The Markov measure on the probability space $(S^n,\mathcal{E},P)$, where the set $S^n$ consists of sequences of length $n$ from an alphabet $\alphaphabetSet$, can be estimated using the symbol emission probabilities. More explicitly, the Markov measure of a sequence $S_n$ on $S^n$ induced by $\mathcal{M}$ is given by $P_{\mathcal{M}}(S_n)=\prob(q_1)\prod\limits_{i=D+1}^n\prob(s_i\mid q_i)$ (where $D$ is the depth of the model). Then, the K-L divergence between $\mathcal{M}$ and $\hat{\mathcal{M}}$ is given by the following expression.
\begin{equation}\label{HammingBound}
D_{\rm {KL}} (P^n_{\mathcal{M}}\|P^n_{\hat{\mathcal{M}}})=\sum\limits_{S_n \in S^n}P_{\mathcal{M}}(S_n) \log\bigg(\frac{P_{\mathcal{M}}(S_n)}{P_{\hat{\mathcal{M}}}(S_n)}\bigg)
\end{equation}
Then, the above expression can be simplified as follows.
\begin{equation}
\log\bigg(\frac{P_{\mathcal{M}}(S_n)}{P_{\hat{\mathcal{M}}}(S_n)}\bigg)=\sum\limits_{i=D+1}^n\log(\prob(s_i\mid q_i))-\log(\prob(s_i\mid \hat{q}_i)), \nonumber
\end{equation}
where $\hat{q}$ is the merged state and $q$ is the original state. The expression on the right can be further bounded using the Lipschitz constant of the logarithm function, under the assumption that $\prob(s_j\mid q_i)\neq 0$ $\forall q_i \in qSet$ and all $s_j \in \alphaphabetSet$.
\begin{align}
&\sum\limits_{i=D+1}^n\log(\prob(s_i\mid q_i))-\log(\prob(s_i\mid \hat{q}_i)) \label{eqn:logsum}\\
&\leq \sum\limits_{i=D+1}^n\bigg(\frac{\prob(s_i\mid q_i)-\prob(s_i\mid \hat{q}_i)}{\prob(s_i\mid q_i)}\bigg)\label{eqn:lipschitz}\\
&\leq (n-D-1)\kappa
\end{align}
where $\kappa=\max\limits_{q\in qSet, s\in \alphaphabetSet}\frac{\prob(s\mid q)-\prob(s\mid \hat{q})}{\prob(s\mid q)}$. In the above inequalities, equation~\eqref{eqn:lipschitz} is obtained from equation~\eqref{eqn:logsum} by using the observation that $\prob(s_i\mid \hat{q}_i)=\prob(s_i\mid {q}_i)+\eta$, where $\eta$ is the perturbation in the symbol emission probability from $q_i$ when it is clustered into a new state $\hat{q}_i$. Hence, the K-L distance in equation~\eqref{HammingBound} can be bounded by the following term.
\begin{align}
D_{\rm {KL}} (P^n_{\mathcal{M}}\|P^n_{\hat{\mathcal{M}}})& \leq \sum\limits_{S_n \in S^n} P_{\mathcal{M}}(S_n) (n-D-1)\kappa \nonumber \\
& = (n-D-1)\kappa \sum\limits_{S_n \in S^n} P_{\mathcal{M}}(S_n) \nonumber \\
& =(n-D-1)\kappa
\end{align}
Thus, a uniform bound on the Hamming distance between the original and the final model can then be obtained as follows,
\begin{equation}\label{eqn:bound}
\bar{d}(P_{\mathcal{M}}(S_n),P_{\hat{\mathcal{M}}}(S_n))\leq \sqrt{\frac{(n-D-1)\kappa}{2n}}
\end{equation}
The above inequality thus allows us to compare models with different state spaces based on the predictive accuracy of a reduced model relative to the original model. Compared to the earlier information-theoretic criteria, which are based on the efficiency of data compression by different models, the inequality in~\eqref{eqn:bound} allows us to compare models based on their symbol emission statistics and is therefore computationally efficient. It is possible to find a tighter bound in an expected sense by using the stationary distributions of the two Markov chains to obtain an expected bound on the Hamming distance; however, this is left for future work. Using the above bound for model selection could be more efficient than the information-theoretic metrics (as it can be estimated from the symbol emission probabilities instead of the penalized likelihoods); however, finding a penalized version of the bound for model selection is also left as a future exercise.
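For completeness, the bound in~\eqref{eqn:bound} can be evaluated directly from the emission matrices of the original and reduced models, as in the following sketch; it assumes the emission probabilities of the original model are strictly positive (e.g., due to the uniform prior used in estimation), and the argument names are hypothetical.
\begin{verbatim}
import numpy as np

def hamming_bound(M_full, M_red, f, D, n):
    """Evaluate the Hamming-distance bound: kappa is the largest relative
    perturbation of a symbol emission probability caused by merging a
    state into its cluster (rows of M_red indexed by cluster labels f).
    Assumes all entries of M_full are strictly positive."""
    M_full, M_red = np.asarray(M_full), np.asarray(M_red)
    kappa = np.max((M_full - M_red[f]) / M_full)
    return np.sqrt((n - D - 1) * kappa / (2 * n))
\end{verbatim}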
\section{Description of Experimentation and Data Sets}\label{sec:experiment}
In this section, we briefly describe the two datasets that are used in this paper to illustrate and validate the proposed concepts. Specifically, we describe the experiments performed at Penn State to investigate instability in lean-premixed combustion, and a benchmark dataset for anomaly detection in bearings. An important point to note is that the numerical experiments presented in the following sections are intended to show that the reduced-order models obtained by the proposed algorithms achieve a trade-off between predictive accuracy and model complexity. The further results on classification and anomaly detection illustrate that the proposed model-learning approach can still achieve good performance on the machine learning objectives of class separability and anomaly detection.
\subsection{Combustion}\label{subsec:combustionexperiment}
A swirl-stabilized, lean-premixed, laboratory-scale combustor was used to perform the experimental study. Tests were conducted at a nominal combustor pressure of 1 atm over a range of operating conditions, as listed in Table~\ref{tab:par}.
\begin{table}[!hbp]
\centering
\caption{Operating conditions}
\begin{tabular}{c|c}
\hline
\textbf{Parameters} & \textbf{Value} \\
\hline
Equivalence Ratio & 0.525, 0.55, 0.60, 0.65\\
\hline
Inlet Velocity & 25--50 m/s \\
\hline
Combustor Length & 25--59 inch in 1 inch increments\\
\hline
\end{tabular}
\label{tab:par}
\end{table}
In each test, the combustion chamber dynamic pressure and the global OH and CH chemiluminescence intensity were measured to study the mechanisms of combustion instability. The measurements were made simultaneously at a sampling rate of 8192 Hz~(per channel), and data were collected for 8 seconds, for a total of 65536 measurements~(per channel). A total of $780$ samples of data were collected from all the tests, where in every test the combustion process was driven from stable to unstable by changing the operating conditions listed in Table~\ref{tab:par} (e.g., the equivalence ratio $\phi$). However, as an accurate model of the process is not available, an accurate label for the transition of the process to the unstable phase is not available. It is noted that the data cover the behavior of the process over a large number of operating conditions and thus provide a rich set of data to test the efficacy of the algorithm in detecting the classes irrespective of the underlying operating conditions.
\subsection{Bearing Prognostic Data} This test data set was obtained from NASA's prognostics data repository~\cite{NASAPHM, TMZT12}. A detailed description of the experiments can be found in~\cite{QLLY06}. The bearing test rig hosts four test bearings on one shaft, which is driven by an AC motor at a constant speed. A constant force is applied on each of the bearings, and accelerometer data are collected at every bearing at a sampling rate of \SI{20}{\kilo\hertz} for about \SI{1}{\second}. The tests are carried out for $35$ days, until a significant amount of debris is found in the magnetic plug of the test bearing. A defect in at least one of the bearings is found at the end of every test. In this paper, we use data from a bearing that shows anomalous behavior in the later parts of the test; in particular, out of the three data sets, we use set one, where an inner race fault occurred on Bearing $3$, and the analysis uses data from this bearing.
\section{Markov Modeling}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/autocorr.eps}
\caption{Autocorrelation function of the time-series data during the unstable phase of combustion. The time-series data are down-sampled by the lag marked by the red square. It is noted that individual time series have their own down-sampling lags.}
\label{fig:autocorr}
\end{figure}
\begin{figure*}
\centering
\subfloat[Probability density function for the pressure time series data]{\includegraphics[width=0.75\textwidth]{figures/fig1.eps}\label{fig:datadistribution}}\quad
\subfloat[Spectral decomposition of the stochastic matrix for 1-step Markov model]{\includegraphics[width=0.75\textwidth]{figures/fig3.eps}\label{fig:spectralprop}}\\
\caption{The first panel shows the change in the empirical density of the pressure time-series data as the process deviates from the stable operating condition to the unstable operating condition. The second panel shows the spectral decomposition of the 1-step stochastic matrix for the data under stable and unstable operating conditions.}
\label{fig:databehavior}
\end{figure*}
In this section, we present results for Markov modeling and analysis of the two time-series datasets described above.
\subsection{Combustion}
The time-series data are first normalized by subtracting the mean and dividing by the
standard deviation of the elements; this step corresponds to bias removal and
variance normalization. Data from engineering systems are typically oversampled
to ensure that the underlying dynamics can be captured (the sampling rate in the current experiments was $\SI{8192}{\hertz}$). Due to coarse-graining in the symbolization process, an over-sampled
time series may mask the true nature of the system dynamics in the symbolic
domain (e.g., occurrence of self-loops and irrelevant spurious transitions in
the Markov chain). The time series is therefore first down-sampled. The first minimum of the
autocorrelation function of the observed time series is used to find the lag at which samples become
uncorrelated in time, and the data sets are then down-sampled by this lag. The autocorrelation function of the time-series data for the unstable case is shown in Figure~\ref{fig:autocorr}, where the data are down-sampled by the lag marked by the red rectangle. To avoid discarding a significant amount of data due to down-sampling, down-sampled data obtained with different initial
conditions are concatenated. Further details of this preprocessing can be found
in~\cite{Srivastav2014}.
The continuous time-series data set is then partitioned using maximum entropy
partitioning (MEP), where the information-rich regions of the data
set are partitioned finer and those with sparse information are partitioned
coarser. In essence, each cell in the partitioned data set contains
(approximately) the same number of data points under MEP. A ternary alphabet with
$\alphaphabetSet=\{0,1,2\}$ has been used to symbolize the continuous combustion
instability data. As discussed in Section~\ref{sec:experiment}, we analyze data sets from different phases, as the process goes from stable through the transient to the unstable region (the ground truth is decided using the RMS values of pressure).
\begin{figure}
\centering
\subfloat[Hierarchical cluster tree of stable states]{\includegraphics[width=0.75\textwidth]{figures/fig4.eps}\label{fig:clusterstable}}\quad
\subfloat[Hierarchical cluster tree of unstable states]{\includegraphics[width=0.75\textwidth]{figures/fig5.eps}\label{fig:culsterunstable}}\\
\caption{State clustering under stable and unstable conditions.}
\label{fig:clusterbehavior}
\end{figure}
In Figure~\ref{fig:datadistribution}, we show the observed changes in the behavior of the data as the combustion operating condition changes from stable to unstable. A change in the empirical distribution of the data from unimodal to bi-modal is observed as the system moves from stable to unstable. We selected $150$ samples of pressure data from each of the stable and unstable phases to analyze and compare. First, we compare the expected size of temporal memory during the two stages of operation. The eigenvalue decay rate of the 1-step stochastic matrix calculated from the data differs between stable and unstable behavior, irrespective of the combustor length and inlet velocity. Under stable conditions, the eigenvalues go to zero very quickly compared to the unstable operating condition (see Figure~\ref{fig:spectralprop}). This suggests that the size of the temporal memory of the discretized data increases as we move to the unstable operating condition. It also indicates that under the stable operating condition the discretized data behaves as symbolic noise, as the predictive power of the Markov models remains unaffected even if we increase the order of the Markov model. On the other hand, the predictive power of the Markov models can be increased by increasing the model order during the unstable operating condition, indicating more deterministic behavior. A value of $\epsilon=0.05$ is chosen to estimate the depth of the Markov models for both the stable and unstable phases. Correspondingly, the depth is calculated as $2$ and $3$ for the stable and unstable conditions, respectively (see Figure~\ref{fig:databehavior}). The corresponding $D(\epsilon)$ is used to construct the Markov models next. First, a PFSA whose states are words over $\alphaphabetSet$ of length $D(\epsilon)$ is created, and the corresponding maximum-likelihood parameters ($\bm{M}$ and $\bm{\Pi}$) are estimated. Then, the hierarchical clustering algorithm using the K-L distance is used to cluster and aggregate the states. It is noted that we create individual models for every sample, i.e., every sample is partitioned individually, so that the symbols have a different meaning for every sample (i.e., they represent different regions in the measurement space of the signals). Consequently, each sample has a different state space when viewed in the continuous domain. Thus, we do not show the mean behavior of the samples during any operating regime, as the state space would be inconsistent (even though the cardinality could be the same).
In Figure~\ref{fig:clusterbehavior}, we show the hierarchical cluster tree, which details the structure of the state space of the PFSA with depth $D(\epsilon)$ for a typical sample during stable and unstable behavior. The cluster tree also suggests the symbolic-noise behavior of the data during the stable regime (the states are very close to each other in the K-L distance). In contrast, a coarse clustering of states in the model during the unstable behavior would clearly lead to significant information loss (as the states are statistically different). However, to compare the two Markov models, we keep the cardinality of the final models the same. For example, the algorithm is terminated with $3$ states in the final Markov model during the stable as well as the unstable regime, and the final aggregated states are the three clusters depicted in Figure~\ref{fig:clusterbehavior}. Once the final aggregated states are obtained, we estimate the parameters of the model using the Bayesian inference discussed in Section~\ref{subsec:DBN}.
Next, we present some results for model selection using the information-theoretic criteria discussed earlier in Section~\ref{subsec:MDL}. BIC and AIC are used to select the model which achieves the minimum score. As seen in Figures~\ref{fig:MDLstable} and~\ref{fig:MDLunstable}, the model with $5$ states is selected for the stable as well as the unstable case (note that the original model for the stable class had $9$ states for depth $2$, and the unstable model had $27$ states for a depth of $3$). In contrast to cross-validation, the two criteria provide an unsupervised way of selecting the model. Thus, a much smaller state space suffices to preserve the temporal statistics of the data, and AIC and BIC provide a technique to select such a compact model.
\begin{figure}
\centering
\subfloat[Model scores using the BIC and AIC criterion during a typical stable condition]{\includegraphics[width=0.75\textwidth]{figures/fig11a.eps}\label{fig:MDLstable}}\quad
\subfloat[Model scores using the BIC and AIC criterion during a typical unstable condition]{\includegraphics[width=0.75\textwidth]{figures/fig11b.eps}\label{fig:MDLunstable}}\\
\caption{Unsupervised model selection under stable and unstable conditions.}
\label{fig:MDLscores}
\end{figure}
In Figure~\ref{fig:HammingCombustion}, we show the Hamming distance between the sequences generated by the original model and the reduced models for a typical sample each from stable and unstable combustion. The box plots are generated by simulating the original model and the reduced-order model to generate symbol sequences of length $1000$ from $100$ different initial states (i.e., a total of $100$ strings are generated) and calculating the Hamming distance between them. A bound on the Hamming distance between the sequences generated by the original model and the final model is also calculated using inequality~\eqref{eqn:bound} and is shown in the same figure. It is possible to use the proposed Hamming distance metric to select a final model; however, it measures the distance between the distributions induced by the Markov models, and model selection using it is left as future work. It is noted that the bound on the Hamming distance can provide a computationally convenient surrogate for model scores, as it can be computed from the symbol emission probabilities of the model instead of explicitly evaluating the predictive capability through the likelihoods of the symbol sequences.
\begin{figure}
\centering
\subfloat[Hamming distance between the original and final models for a typical stable combustion sample]{\includegraphics[width=0.75\textwidth]{figures/HammingDistanceStable.eps}\label{fig:Hammingstable}}\quad
\subfloat[Hamming distance between the original and final models for a typical unstable combustion sample]{\includegraphics[width=0.75\textwidth]{figures/HammingDistanceUnStable.eps}\label{fig:Hammingunstable}}\\
\caption{Box plot for Hamming distance between the original and reduced-order models obtained after merging based on the results in Section~\ref{sec:analysis}}
\label{fig:HammingCombustion}
\end{figure}
\subsection{Bearing}
The same procedure of down-sampling and depth estimation described in the previous section for combustion is followed for the analysis of the bearing data. A ternary alphabet is again chosen to discretize the continuous data after down-sampling, and maximum entropy partitioning is used to find the partitions. Using the spectral method, a depth of $2$ (i.e., a total of $9$ states) is estimated for $\epsilon=0.02$ (the spectral decomposition plot is omitted for brevity). The BIC and AIC scores for the different models are shown in Figure~\ref{fig:ModelBearing}, and the model with five states is selected using the obtained scores (marked by the black rectangle). In Figure~\ref{fig:HammingDistBearing}, we show the Hamming distance between the sequences generated by the original model (with $9$ states) and the reduced models, along with the corresponding bounds obtained from inequality~\eqref{eqn:bound}.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/MDL_Bearing.eps}
\caption{Model scores using the BIC and AIC criteria; selected models are depicted by black rectangles.}
\label{fig:ModelBearing}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/HDDistribution.eps}
\caption{Box plot of the Hamming distance between the original and reduced-order models along with the analytical bound presented in Section~\ref{sec:analysis}.}
\label{fig:HammingDistBearing}
\end{figure}
\section{Classification and Anomaly Detection Results}
In this section, we present some results for anomaly detection and classification using the pressure time-series data and the inferred reduced-order Markov models. Since, as discussed in Section~\ref{subsec:combustionexperiment}, the exact transition point of the system from stable to unstable is unknown, we first present results on anomaly detection and clustering of the data into different clusters, which can then be associated with the stable and unstable classes. We present two different metrics for anomaly detection that allow models with different state spaces and structures to be compared. It is noted that the word metric is used here in a loose sense; it is meant to be a distance that can be used to compare two different Markov models.
\subsection{Anomaly Detection}
As individual time series have different state spaces, we define some metrics to compare them. These metrics reflect changes in the information complexity of the Markov models and reveal different behaviors of the combustion process based on the changes in the inferred data model. In particular, the following two metrics are defined (a short computational sketch follows the list).
\begin{enumerate}
\item Cluster Divergence: This measure is defined for an individual Markov model based on the cluster structure of the state space of the model. Physically, it represents the maximum statistical difference between the states of the Markov model, measured using the K-L distance. It is calculated for a particular model $\mathcal{M}$ as follows
\begin{equation}\label{eqn:metric}
\Delta_{\mathcal{M}}=\max\limits_{q_i,q_j \in qSet} d(q_i,q_j)
\end{equation}
where $d$ is defined by equation~\eqref{eq:kldistance}.
\item Discrepancy Statistics: We measure the discrepancy between the i.i.d. statistics and the Markov statistics of the discretized data. This can also be interpreted as the information gain of the Markov model and represents the information complexity of the data. If the i.i.d. statistics and the Markov statistics are very close, then the data has no temporal statistics; an increase in this measure indicates the information gain obtained by creating a temporal Markov model for the data. This is measured by the following equation.
\begin{equation}
H_{\mathcal{M}}=\sum_{q \in qSet} \prob(q) D_{KL}(\prob(\alphaphabetSet\mid q)\| \prob(\alphaphabetSet))
\end{equation}
where $\prob(\alphaphabetSet\mid q)$ represents the symbol emission probability conditioned on a state $q$ of the Markov model and $\prob(\alphaphabetSet)$ represents the marginal symbol emission probability. The term $D_{KL}$ represents the symmetric K-L distance between the two distributions.
\end{enumerate}
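Both metrics can be computed directly from the emission matrix and the stationary state distribution of an inferred model, as in the following sketch; \texttt{kl\_distance} is the helper sketched earlier, and the function names are illustrative.
\begin{verbatim}
import numpy as np

def cluster_divergence(M):
    """Delta_M: maximum symmetric K-L distance between any two rows of
    the emission matrix M (one row per state)."""
    n = M.shape[0]
    return max(kl_distance(M[i], M[j])
               for i in range(n) for j in range(i + 1, n))

def discrepancy_statistic(M, p):
    """H_M: stationary-weighted symmetric K-L distance between the
    conditional emission distributions P(A|q) and the marginal P(A);
    p is the stationary state distribution."""
    marginal = p @ M
    return float(sum(p[i] * kl_distance(M[i], marginal)
                     for i in range(M.shape[0])))
\end{verbatim}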
In Figure~\ref{fig:divergence}, we present results showing the behavior of $\Delta_{\mathcal{M}}$ with increasing pressure fluctuations. It is noted that every model has been created in an unsupervised fashion by first discretizing the data and then estimating the memory of the discrete sequence. As seen in Figure~\ref{fig:complexorig}, there are three distinct behaviors that can be associated with $\Delta_{\mathcal{M}}$. With low pressure fluctuations, the metric is very close to $0$, indicating that the states of the model are statistically very similar. This is seen until sample number $200$, with corresponding $P_{rms}\sim 0.065$ psig, after which there is a gradual change up to a point where the measure saturates at $P_{rms}\sim 0.12$ psig (when the process becomes unstable). Thus, with this gradual trend with increasing pressure fluctuations, we associate different behaviors with the process. However, as seen in Figure~\ref{fig:complexorig}, the transition from stable to unstable behavior is not clearly defined and is very difficult to label during the experiments, as the process is very fast. We show pressure signals from the three different clusters in Figure~\ref{fig:pressure}, where sample number $250$ can be seen to approach an approximate limit-cycle behavior (and thus can be loosely classified as the transient stage). An important point to note is that this measure is independent of the operating conditions and depends only on the stability (or instability) of the process. This metric is thus used for anomaly detection. In Figure~\ref{fig:complexfinal}, we show the statistics of $\Delta_{\mathcal{M}}$ for models with four states. We see that there is some loss of information upon merging states in the unstable class; the stable cluster remains unchanged, implying that its states are statistically similar and the model distortion upon merging of states is insignificant. Thus, $\Delta_{\mathcal{M}}$ can be reliably used to detect departure from stable behavior.
The statistics of the discrepancy measure for the full-state models are shown in Figure~\ref{fig:InfoGain}. The plot in Figure~\ref{fig:InfoGain} also agrees qualitatively with the earlier results on $\Delta_{\mathcal{M}}$. From these plots, we can infer that the Markov statistics for the stable cluster are very similar to the i.i.d. statistics; thus the data are essentially independently distributed, and conditioning on the inferred states of the Markov models does not improve the predictability (or information complexity) of the temporal model. These two measures therefore help infer the changes in the behavior of the data during the combustion process and are useful for anomaly detection.
\begin{figure}
\centering
\subfloat[$\Delta_{\mathcal{M}}$ for the full state model for the time-series data with increasing pressure root mean square]{\includegraphics[width=0.75\textwidth]{figures/fig9a.eps}\label{fig:complexorig}}\\
\subfloat[Typical pressure signals from the three clusters seen in Figure~\ref{fig:complexorig}]{\includegraphics[width=0.75\textwidth]{figures/PressureSignal_fig9b.eps}\label{fig:pressure}}
\end{figure}
\begin{figure}\ContinuedFloat
\centering
\subfloat[$\Delta_{\mathcal{M}}$ for models with $4$ states for the time-series data with increasing pressure root mean square]{\includegraphics[width=0.75\textwidth]{figures/fig9c.eps}\label{fig:complexfinal}}
\caption{Anomalous behavior of data in the combustion process}
\label{fig:divergence}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/fig10.eps}
\caption{Variation of the discrepancy statistic $H_{\mathcal{M}}$ with increasing pressure fluctuations. This also shows an anomaly around sample number $200$ and agrees qualitatively with the behavior of $\Delta_{\mathcal{M}}$.}
\label{fig:InfoGain}
\end{figure}
To see the changes in the underlying models more explicitly, the models from the stable and unstable phases are visualized in the information space. To do this, we reduce the state space of the models to just $2$ states and estimate the corresponding emission parameters. As the models have three symbols, the emission matrix has $2$ rows, and each row corresponds to the symbol emission probabilities conditioned on one of the two states. Each of these rows, for $100$ cases from the stable phase and $100$ cases from the unstable phase, is plotted on a single simplex plane, shown in Figure~\ref{fig:Simplex}. The figure shows the clusters of stable and unstable cases in the information space and that even the models with only $2$ states are clustered separately. This shows that there is a structured change in the temporal dynamics of the data between the two phases and that the inferred Markov models are able to capture this change. Furthermore, the distinctive features of the models are sufficiently retained even after significant reduction of the state space of the models.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{figures/simplex_2states_1.eps}
\caption{Clusters of the stable and unstable phases in the information space. Each point is a row of the emission matrix of the reduced Markov model with $2$ states. The plot shows the change in the Markov model as the process moves from stable to unstable. Red diamonds represent the unstable phase while green diamonds represent the stable phase.}
\label{fig:Simplex}
\end{figure}
\subsection{Classification}
These models are then used to train classifiers using support vector machines (SVM) and decision trees (DT)~\cite{B06}. The rationale behind using multiple classifiers is to show that the performance of the Markov models is independent of the classification technique (i.e., it works equally well with maximum-margin classifiers or decision-tree classifiers). The SVM classifier is trained using a radial basis function kernel, while the decision tree is trained using the standard Euclidean distance. The classifiers are trained with $100$ data points from each class and are tested on the remaining data (around $80$ and $380$ for the stable and unstable classes respectively). The tests are repeated for $100$ different train and test data sets drawn from the total data. The classification accuracies are listed in Table~\ref{table:classification}. The SVM classifier achieves around $1.67\%$ error using models with $2$ states, while the decision tree classifier achieves around $4.72\%$ error using models with $4$ states. This provides another way of selecting the final model for state merging in a supervised learning setting. It is noted that the original models contain $9$ states for the stable class and $27$ for the unstable class.
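As an illustration of this classification step, the following Python sketch trains an RBF-kernel SVM and a decision tree on feature vectors built from the inferred models; the flattened emission matrices used as features (and the Dirichlet-sampled placeholders) stand in for the actual model parameters described above.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Placeholder features: each row is a flattened 2-state x 3-symbol emission matrix.
rng = np.random.default_rng(1)
X_stable = rng.dirichlet([8, 1, 1], size=(180, 2)).reshape(180, -1)
X_unstable = rng.dirichlet([2, 4, 4], size=(480, 2)).reshape(480, -1)
X = np.vstack([X_stable, X_unstable])
y = np.array([0] * 180 + [1] * 480)   # 0 = stable, 1 = unstable

# 100 training points per class, as in the paper; the rest are held out.
train_idx = np.concatenate([np.where(y == 0)[0][:100], np.where(y == 1)[0][:100]])
test_idx = np.setdiff1d(np.arange(len(y)), train_idx)

svm = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
tree = DecisionTreeClassifier().fit(X[train_idx], y[train_idx])
for name, clf in [("SVM", svm), ("DT", tree)]:
    err = 100 * (1 - accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    print(f"{name} classification error: {err:.2f}%")
\end{verbatim}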
\begin{table}
\caption{Performance of classifiers with different numbers of states (mean error; lower is better).}\label{table:classification}
\centering
\begin{tabular}{ccc}
\hline
Number of States & Classifier & Classification Error ($\%$)\\
\hline
\multirow{2}{*}{$9$} & SVM & $3.48 \pm 0.74$ \\
 & DT & $9.83 \pm 3.24$ \\
\hline
\multirow{2}{*}{$8$} & SVM & $3.62 \pm 0.71$ \\
 & DT & $9.38 \pm 3.11$ \\
\hline
\multirow{2}{*}{$7$} & SVM & $2.87 \pm 0.68$ \\
 & DT & $7.70 \pm 2.61$ \\
\hline
\multirow{2}{*}{$6$} & SVM & $2.48 \pm 0.61$ \\
 & DT & $7.00 \pm 2.55$ \\
\hline
\multirow{2}{*}{$5$} & SVM & $2.05 \pm 0.54$ \\
 & DT & $6.10 \pm 2.17$ \\
\hline
\multirow{2}{*}{$4$} & SVM & $1.86 \pm 0.43$ \\
 & DT & $4.72 \pm 2.29$ \\
\hline
\multirow{2}{*}{$3$} & SVM & $1.69 \pm 0.45$ \\
 & DT & $5.56 \pm 1.90$ \\
\hline
\multirow{2}{*}{$2$} & SVM & $1.67 \pm 0.43$ \\
 & DT & $4.83 \pm 1.80$ \\
\hline
\end{tabular}
\end{table}
\section{Summary, Conclusions and Future Work}
In recent times the idea of representation learning has become very popular in the machine learning literature, as it decouples the learning of data models from end objectives such as classification or clustering. In this paper, we presented a technique for Markov modeling of time-series data using concepts of symbolic dynamics, which allows inference of the model structure as well as its parameters for compact data representation. In the proposed technique we first estimate the memory size of the discretized time-series data. The size of the memory is estimated using spectral decomposition properties of the one-step Markov model created from the symbol sequence. Then, a second pass over the data is made to infer the model with the right memory, and the corresponding symbol emission matrix is estimated. The equivalence classes of states, based on the K-L distance between the states, are then estimated using hierarchical clustering of the corresponding states of the Markov model. The proposed concepts were validated using two different datasets: combustion instability and bearing data. Modeling of combustion instability still remains a puzzle in the combustion community. The Markov modeling technique was used to analyze the problem of combustion instability. The proposed ideas were tested on experimental data from a swirl-stabilized combustor used to study unstable thermo-acoustic phenomena during the combustion process. The proposed approach allows us to infer the complexity of the time-series data based on the inferred Markov model. Two different metrics were proposed for anomaly detection and classification of the stable and unstable classes. The results presented in this paper are encouraging, as the inferred models are able to identify the stable and unstable phases independently of any other operating condition.
Simultaneous optimization of discretization and memory estimation for model inference is a topic of future research. While the results obtained with Markov modeling for the combustion instability problem are promising, further investigation with transient data is required for better characterization of the process. A more thorough comparison of the proposed models with HMMs of similar state-space size is also an important topic of future work.
\section*{Acknowledgments}
The authors would like to thank Professor Domenic Santavicca and Mr. Jihang Li of Center for Propulsion, Penn State for kindly providing the experimental data for combustion used in this work.
\end{document} |
\begin{document}
\title{Data Fusion via Intrinsic Dynamic Variables:\\
An Application of Data-Driven Koopman Spectral Analysis}
\author{Matthew O. Williams}
\email{mow2@princeton.edu}
\affiliation{Program in Applied and Computational Mathematics (PACM), Princeton University, NJ 08544, USA.}
\author{Clarence W. Rowley}
\affiliation{Department of Mechanical and Aerospace Engineering, Princeton University, NJ 08544, USA.}
\author{Igor Mezi\'c}
\affiliation{Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106, USA.}
\author{Ioannis G. Kevrekidis}
\affiliation{Department of Chemical and Biological Engineering \& PACM, Princeton University, NJ 08544, USA.}
\begin{abstract}
We demonstrate that numerically computed approximations of Koopman
eigenfunctions and eigenvalues create a natural framework for data fusion in
applications governed by nonlinear evolution laws.
This is possible because the eigenvalues of the Koopman operator are
invariant to invertible transformations of the system state, so that the values
of the Koopman eigenfunctions serve as a set of \emph{intrinsic coordinates}
that can be used to map between different observations (e.g., measurements
obtained through different sets of sensors) of the same fundamental behavior.
The measurements we wish to merge can also be nonlinear, but must be ``rich
enough'' to allow (an effective approximation of) the state to be reconstructed
from a single set of measurements.
This approach requires independently obtained sets of data that capture the
evolution of the heterogeneous measurements and a single pair
of ``joint'' measurements taken at one instance in time.
Computational approximations of eigenfunctions and their corresponding
eigenvalues from data are accomplished using Extended Dynamic Mode
Decomposition.
We illustrate this approach on measurements of spatio-temporal oscillations of
the FitzHugh-Nagumo PDE, and show how to fuse point measurements with principal
component measurements, after which either set of measurements can be used to estimate the other set.
\end{abstract}
\maketitle
In many applications, our understanding of a system comes from sets of partial measurements (functions
of the system state) rather than observations of the full state.
Linking these heterogeneous partial measurements (different sensors, different measurement times)
is the objective of a broad collection of techniques referred to as data fusion methods~\cite{Hall1997,Pohl1998,Yocky1995}.
Since the system state can itself be a measurement, these methods also
encompass traditional techniques for state estimation such as Kalman
filters~\cite{Simon2006,Stengel2012,bishop2001introduction} and stochastic estimation methods~\cite{Adrian1988,Adrian1994,Guezennec1989,Ho1964,Bonnet1994,Druault2010,Murray2007,Naguib2001}.
One might subdivide these approaches into ``dynamic'' methods, which require models of the underlying
evolution~\cite{Simon2006,Stengel2012,bishop2001introduction}, and ``static'' methods~\cite{Fieguth2010},
which do not require dynamical information but, in general, need more extensive
measurements to be successful.
Other, more recent, methods such as the gappy Proper Orthogonal Decomposition~\cite{Willcox2006},
nonlinear intrinsic variables~\cite{Dsilva2013}, or compressed sensing-based methods~\cite{Bright2013}
can be thought of as solving, in effect, the same problem, but make different
assumptions about the nature of the underlying system and the type of measurements used.
Though the exact details and assumptions differ, the overarching goal in data
fusion is to develop a mapping from one type of measurements to another type.
In this manuscript, we propose a method that generates such a mapping with the help of a set of
{\it intrinsic coordinates}; these coordinates are based on the (computationally approximated)
eigenfunctions of the Koopman operator~\cite{Koopman1931,Koopman1932,Mauroy2013,Mauroy2012,Budivsic2012}.
To merge measurements from heterogeneous sources,
the algorithm requires data in the form of time series from each source, and a
set of ``joint'' measurements (i.e., measurements known to correspond to the same underlying state).
Each of these individual measurements must be ``rich enough'' so that
the system state (or a quantity effectively equivalent to the state such as the
leading Principal Components~\cite{Lee2007}) can be recovered using a single
set of measurements, but this limitation can likely be overcome by enriching
measurement sets that are not ``rich enough'' through the use of time
delays~\cite{Chorin2000,Chorin2002,juang1994applied}.
The benefits of this approach are that it is naturally applicable to
nonlinear systems and sensors, and minimizes the number of joint measurements
required; in many systems, only a single pair is needed.
{\bf Problem formulation.}
Suppose we have {\em two} different sets of measurements, generated by two
different sets of (heterogeneous) sensors observing the same fundamental
behavior (the evolution of the state $\vec{x}(t)$).
Let $\vec{\tilde x}=\vec{\tilde g}(\vec{x})$ denote a measurement of this state obtained with the first set of sensors, and $\vec{\hat x}=\vec{\hat g}(\vec{x})$ a measurement with the second set; ``a measurement'' is, in general, a vector-valued observable obtained at a single moment in time by one of our two collections of sensors.
Here, $\vec{\tilde g}:\mathcal{M}\to\tilde{\mathcal{M}}$ and
$\vec{\hat g}:\mathcal{M}\to\hat{\mathcal{M}}$ are the functions that map from the
instantaneous system state ($\vec x \in\mathcal{M}$) to the corresponding instantaneous measurements.
We record time series of such measurements from each of our sets of sensors, and
divide each of the time series into sets of measurement pairs,
$\{(\vec{\tilde x}_m,\vec{\tilde x}^{\,f}_m)\}_{m=1,\ldots,\tilde{M}}$ and $\{(\vec{\hat x}_{m},\vec{\hat x}^{\,f}_{m})\}_{m=1,\ldots,\hat{M}}$, where $\vec{\tilde x}_{m}$ (resp. $\vec{\hat x}_{m}$) is the $m$-th measurement, and $\vec{\tilde x}^{\,f}_{m}$ (resp. $\vec{\hat x}^{\,f}_{m}$) is its value after a single sampling interval.
These time-series can be obtained independently, and the total number of measurements, $\tilde{M}$ and $\hat{M}$, can differ.
For simplicity, we assume the sampling interval, $\Delta t$, is the same for both data sets; this too can vary with only a slight modification to the algorithm.
The only requirements we place on these data sets are that: (i) the
data they contain are generated by the same system, (ii) the state can, in
principle, be determined using a snapshot from either collection of
measurements, and (iii) that the sampling interval remains constant {\em within} a data set.
The (required) {\em joint data set} is denoted as $\left\{ (\vec{\tilde x}_{m},\vec{\hat x}_{m})\right\}_{m=1}^{M_\text{joint}} $;
the subscripts denote the $m$-th measurement in the joint data set, and $M_\text{joint}$ is the number of data pairs (in the example below, a single pair, $M_\text{joint}=1$, suffices).
This approach is applicable to an arbitrary number of different measurements;
the restriction to two is solely for simplicity of presentation.
{\bf The Koopman operator.}
The crux of our approach is the use of these time-series data to computationally approximate the leading eigenfunctions of the Koopman operator~\cite{Koopman1931,Koopman1932,Budivsic2012,Mauroy2013,Mauroy2013a,Mezi2013},
thus generating a mapping from measurement space to {\it an intrinsic variable space}.
The Koopman operator is defined for a specific autonomous dynamical system, whose evolution law we denote as
${\vec{x}\mapsto\vec{F}(\vec{x}),}$
where $\vec{x}\in\mathcal{M}\subseteq\mathbb{R}^{N}$ is the system state, $\vec{F}:\mathcal{M}\to\mathcal{M}$ is the evolution operator, and $n\in\mathbb{N}$ is discrete time.
The action of the Koopman operator is
\begin{equation}
(\mathcal{K}\psi)({\vec{x}})=(\psi\circ\vec{F})({\vec{x}})=\psi(\vec{F}({\vec{x}})),\label{eq:koopman}
\end{equation}
where $\psi:\mathcal{M}\to\mathbb{R}$ is a scalar observable.
For brevity, we refer to $\varphi_k$ and $\mu_k$ as the $k$-th Koopman
eigenfunction and eigenvalue respectively, i.e., $\mathcal{K}\varphi_k=\mu_k\varphi_k$.
We also define $\lambda_{k}=\frac{\log(\mu_{k})}{\Delta t}$, which approximates
the corresponding continuous-time eigenvalue.
Accompanying the eigenfunctions and eigenvalues are the {\em Koopman modes},
which are vectors in $\mathbb{C}^N$ (or spatial profiles if the dynamical system
is a spatially extended one) that can be used to reconstruct (or predict) the full state when combined with the Koopman eigenfunctions and eigenvalues~\cite{Rowley2009,williams_submitted}.
The Koopman eigenvalues, eigenfunctions, and modes have been used in many
applications including the analysis of fluid
flows~\cite{Schmid2010,Schmid2011,Schmid2012,Rowley2009,Seena2011}, power
systems~\cite{Susuki2013,Susuki2012,Susuki2011}, nonlinear
stability~\cite{Mauroy2013a}, and state space parameterization~\cite{Mauroy2013}; here we exploit their properties for data fusion purposes.
For this application, the ability of the Koopman eigenfunctions to generate a \emph{parameterization of state space} is key.
In the simple example that follows we use the phase of an ``oscillatory'' eigenfunction, which has $|\mu_k|=1$, and the magnitude of a ``decaying'' eigenfunction, which has $|\mu_k| < 1$,
as an intrinsic (quasi action-angle) coordinate system for the slow manifold
(i.e., the ``weakest'' stable manifold) of a limit cycle.
While there are many data driven methods for nonlinear manifold parameterization (see, e.g., Refs.~\cite{Lee2007,Nadler2005}), the benefit of this approach is that the resulting parameterization is, in principle, {\em invariant to invertible transformations of the underlying system} and, in that sense, constitutes an intrinsic set of coordinates for the system.
Mathematically, if it is possible to reconstruct the underlying system state
from one snapshot of observation data, then $\vec{\tilde g}$ formally has an inverse,
which we denote as $\vec{T}$ (i.e., ${\vec{x}}=\vec{T}(\vec{\tilde x})$).
When this is not the case naturally, one can sometimes construct an ``extended''
$\vec{\tilde x}$ where such a $\vec T$ does exist by combining measurements taken at the
current and a finite number of previous times~\cite{juang1994applied,
Chorin2000,Stengel2012}.
Then if $\varphi:\mathcal{M}\to\mathbb{C}$ is a Koopman eigenfunction,
$\tilde{\varphi}=\tilde{\alpha}\varphi\circ\vec{T}$, where
$\tilde{\varphi}:\tilde{\mathcal{M}}\to\mathbb{C}$, is formally an eigenfunction of the Koopman
operator with the eigenvalue $\mu$ for one set of sensors (rather than for the
full state).
The constant $\tilde{\alpha}\in\mathbb{C}$ highlights that this
eigenfunction is only defined up to a constant.
The evolution operator expressed in terms of $\vec{\tilde x}$ is $\vec{\tilde{F}}=\vec{\tilde g}\circ\vec{F}\circ\vec{T}$ and the action of the associated Koopman operator is $\tilde{\mathcal{K}}\psi=\psi\circ\vec{\tilde{F}}$.
Then, $(\tilde{\mathcal{K}}\tilde{\varphi})(\vec{\tilde x})=(\tilde{\varphi}\circ\vec{\tilde{F}})(\vec{\tilde x})=\tilde{\alpha}(\varphi\circ\vec{T}\circ\vec{\tilde g}\circ\vec{F}\circ\vec{T}\circ\vec{\tilde g})(\vec{x})=\tilde{\alpha}(\varphi\circ\vec{F})(\vec{x})=\tilde{\alpha}(\mathcal{K}\varphi)(\vec{x})=\mu\tilde{\alpha}\varphi(\vec{x})=\mu\tilde{\varphi}(\vec{\tilde x})$, where we have assumed that $\tilde{\varphi}$ is still an element of some space of scalar observables.
This same argument could be used to obtain a $\hat{\varphi}$ and $\hat{\alpha}$
for the measurements represented by $\vec{\hat x}$.
Finally, we define the ratio $\alpha = \hat{\alpha}/\tilde{\alpha}$, whose role we
will explain shortly.
To approximate these quantities, we use Extended Dynamic Mode Decomposition
(EDMD)~\cite{williams_submitted}, which is a recently developed data-driven
method that approximates the Koopman ``tuples'' (i.e., triplets of related
eigenvalues, eigenfunctions and modes).
The inputs to the EDMD procedure are sets of snapshot pairs,
$\{(\vec{x}_{m},\vec{y}_{m})\}_{m=1,\ldots,M}$, and a set of dictionary elements
that span a subspace of scalar observables, which we represent as the
vector valued function $\mat{\psi}({\vec{x}})=[\psi_{1}({\vec{x}}),\psi_{2}({\vec{x}}),\ldots,\psi_{K}({\vec{x}})]^{T}$ where $\psi_{k}:\mathcal{M}\to\mathbb{R}$ and $\mat{\psi}:\mathcal{M}\to\mathbb{R}^{K}$.
This procedure results in the matrix
\begin{equation}
\mat{{K}}\triangleq\mat{G}^{+}\mat{A}\in\mathbb{R}^{K\times K},
\end{equation}
which is a finite dimensional approximation of the Koopman operator, where $\mat{G}=\sum_{m=1}^{M}\mat{\psi}(\vec{x}_{m})\mat{\psi}(\vec{x}_{m})^{T}$, $\mat{A}=\sum_{m=1}^{M}\mat{\psi}(\vec{x}_{m})\mat{\psi}(\vec{y}_{m})^{T}$, and $+$ denotes the pseudo-inverse.
The $k$-th eigenvalue and eigenvector of $\mat{{K}}$, which we denote as $\mu_k$ and $\vec\xi_k$, produce an approximation of the $k$-th eigenvalue and eigenfunction of the Koopman operator~\cite{williams_submitted}.
We denote the approximate eigenfunction as
$\varphi_k(\vec{x})=\mat{\psi}^{T}(\vec{x})\vec{{\xi}}_{k} = \sum_{i=1}^K
\psi_i(\vec x)\vec{\xi}_k^{(i)}$ where $\vec\xi_k^{(i)}\in\mathbb{C}$ is the
$i$-th element of $\vec \xi_k$.
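As a concrete illustration of this step, a minimal Python sketch of the EDMD computation is given below; the monomial dictionary and the toy linear system used to generate snapshot pairs are simplifying assumptions for illustration, not the moving least squares dictionary employed later.
\begin{verbatim}
import numpy as np

def edmd(X, Y, psi):
    # X, Y: (M, N) arrays of snapshot pairs; psi maps a state to R^K.
    PsiX = np.array([psi(x) for x in X])     # (M, K)
    PsiY = np.array([psi(y) for y in Y])     # (M, K)
    G = PsiX.T @ PsiX                        # sum_m psi(x_m) psi(x_m)^T
    A = PsiX.T @ PsiY                        # sum_m psi(x_m) psi(y_m)^T
    K = np.linalg.pinv(G) @ A                # finite-dimensional approximation
    mu, Xi = np.linalg.eig(K)                # eigenvalues / eigenvectors
    eigfun = lambda x: psi(x) @ Xi           # varphi_k(x) = psi(x)^T xi_k
    return mu, eigfun

# Toy data from a stable linear map; 0.9 and 0.5 should appear among the mu_k.
rng = np.random.default_rng(0)
F = np.diag([0.9, 0.5])
X = rng.uniform(-1, 1, size=(500, 2))
Y = X @ F.T
psi = lambda x: np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])
mu, eigfun = edmd(X, Y, psi)
print(np.sort(np.abs(mu))[::-1])
\end{verbatim}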
{\bf The numerical procedure}.
Because ${\vec{x}}$ is, by assumption, unknown, we instead apply EDMD to the
measurement data.
The $\vec{\tilde x}$ data generates approximations of the set of $\tilde{\varphi}_{k}$ and $\tilde{\mu}_{k}$, and the $\vec{\hat x}$ data generates approximations of the set of $\hat{\varphi}_{k}$ and $\hat{\mu}_{k}$.
To map between these separate sets of observations, note that
\begin{equation}
\tilde{\varphi}_{k}(\vec{\tilde x}_{m})=\frac{\hat{\alpha}_{k}}{\tilde{\alpha}_{k}}\hat{\varphi}_{k}(\vec{\hat x}_{m})=\alpha_{k}\hat{\varphi}_{k}(\vec{\hat x}_{m}),\label{eq:mapping}
\end{equation}
because $\varphi_{k}({\vec{x}}_{m})=\tilde{\alpha}_{k}\tilde{\varphi}_{k}(\vec{\tilde x}_{m})=\hat{\alpha}_{k}\hat{\varphi}_{k}(\vec{\hat x}_{m})$ when $\tilde{\varphi}_{k}$ is the eigenfunction that ``corresponds'' to $\hat{\varphi}_{k}$.
To determine which eigenfunctions correspond to one another, we require that $\tilde{\mu}_{k}\approx\hat{\mu}_{k}$.
This is also a sanity check that EDMD is indeed producing a reasonable
approximation of the Koopman operator; if no eigenvalues satisfy this
constraint, then the approximation generated by EDMD is not accurate enough to be useful.
Finally, to determine the $\alpha_{k}$, we use the joint data set along with \eqref{eq:mapping} to solve for $\alpha_{k}$ in a least squares sense (and thus register the data).
Taken together, the steps above produce an approximation of $\tilde{\varphi}_k$ given a measurement of $\vec{\hat x}$.
The final step in the procedure is to obtain a mapping from $\tilde{\varphi}_k$
to $\vec{\tilde x}$.
One conceptually appealing way is by expressing $\vec{\tilde x}$ as the sum of its Koopman modes and eigenfunctions~\cite{Rowley2009,Budivsic2012,williams_submitted}.
In this manuscript, we take a simpler approach and use interpolation routines for scattered data (see, e.g., Refs.~\cite{Lancaster1981,Levin1998}) to approximate the inverse mapping, $(\varphi_1(\vec{\tilde x}), \varphi_2(\vec{\tilde x}),\ldots) \mapsto \vec{\tilde x}$.
In particular, we use two-dimensional piecewise linear interpolation as
implemented by the \texttt{griddata} command in Scipy~\cite{jones2001scipy}.
Overall, the data merging procedure is as follows:
\begin{enumerate}
\itemsep0em
\item Approximate the eigenfunctions and eigenvalues with the data
$\{(\vec{\tilde x}_m,\vec{\tilde x}^{\,f}_m)\}_{m=1,\ldots,\tilde{M}}$ and
$\{(\vec{\hat x}_{m},\vec{\hat x}^{\,f}_{m})\}_{m=1,\ldots,\hat{M}}$, and determine the number of
eigenfunctions required to usefully parameterize state space.
\item Match the $\tilde{\varphi}_{k}$ with its corresponding
$\hat{\varphi}_{k}$ by finding the closest pair of eigenvalues, $\tilde\mu_k$ and $\hat\mu_k$.
\item Use the joint data set, $\left\{
(\vec{\tilde x}_{m},\vec{\hat x}_{m})\right\}_{m=1}^{M_\text{joint}}$, to compute the
$\alpha_{k}$ for each eigenfunction.
\item Given a new measurement, $\vec{\hat x}$, compute $\hat{\varphi}_{k}(\vec{\hat x})$ and
use \eqref{eq:mapping} to approximate $\tilde{\varphi}_{k}(\vec{\tilde x})$.
\item Finally, approximate $\vec{\tilde x}$ from the
$\tilde{\varphi}_{k}(\vec{\tilde x})$ using an interpolation routine.
\end{enumerate}
This method can be considered a hybrid of static and dynamic state estimation
techniques:
dynamic information is required to construct the mapping to and from the set of
intrinsic variables, but is not used beyond that.
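A minimal sketch of steps 2--5 is given below, assuming the EDMD eigenvalues and eigenfunction values for both data sets have already been computed; the choice of $|\varphi_1|$ and $\angle\varphi_2$ as the two intrinsic coordinates follows the discussion above, while the variable names and the use of SciPy's \texttt{griddata} are illustrative.
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

def match_eigenvalues(mu_tilde, mu_hat):
    # Step 2: pair each tilde-eigenvalue with the closest hat-eigenvalue.
    return [int(np.argmin(np.abs(mu_hat - m))) for m in mu_tilde]

def registration_constants(phi_tilde_joint, phi_hat_joint):
    # Step 3: alpha_k from the joint data, one least-squares fit per eigenfunction.
    return np.array([np.vdot(ph, pt) / np.vdot(ph, ph)
                     for pt, ph in zip(phi_tilde_joint.T, phi_hat_joint.T)])

def reconstruct(phi_hat_new, alpha, phi_tilde_train, x_tilde_train):
    # Steps 4-5: map hat-eigenfunction values to tilde-eigenfunction values,
    # then interpolate back to the tilde-measurements using |phi_1|, angle(phi_2).
    phi_tilde_pred = alpha * phi_hat_new
    coords_train = np.column_stack([np.abs(phi_tilde_train[:, 0]),
                                    np.angle(phi_tilde_train[:, 1])])
    coords_new = np.column_stack([np.abs(phi_tilde_pred[:, 0]),
                                  np.angle(phi_tilde_pred[:, 1])])
    return np.column_stack([
        griddata(coords_train, x_tilde_train[:, j], coords_new, method="linear")
        for j in range(x_tilde_train.shape[1])])
\end{verbatim}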
{\bf An illustrative example.} To demonstrate the efficacy of this technique, we
apply it to the FitzHugh-Nagumo PDE in 1D, which is given by:
\begin{subequations}
\label{eq:fhne}
\begin{align}
\partial_{t}v & =\partial_{xx}v+v-w-v^{3},\\
\partial_{t}w & =\delta\partial_{xx}w+\epsilon(v-c_{1}w-c_{0}),
\end{align}
\end{subequations}
where $v$ is the activation field, $w$ is the inhibition field, $c_{0}=-0.03$, $c_{1}=2.0$, $\delta=4.0$, $\epsilon=0.017$, for $x\in[0,20]$ with Neumann boundary conditions.
These parameter values are chosen so that \eqref{eq:fhne} has a spatio-temporal
limit cycle.
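For readers who wish to generate data of this kind, a minimal method-of-lines integrator for \eqref{eq:fhne} might look as follows; the grid resolution, stiff solver, and random initial condition are illustrative choices rather than the actual setup used here.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c0, c1, delta, eps = -0.03, 2.0, 4.0, 0.017      # parameters from the text
L, Nx = 20.0, 200
x = np.linspace(0.0, L, Nx)
dx = x[1] - x[0]

def lap_neumann(u):
    # Second derivative with zero-flux (Neumann) boundaries via ghost points.
    up = np.empty(len(u) + 2)
    up[1:-1], up[0], up[-1] = u, u[1], u[-2]
    return (up[2:] - 2 * up[1:-1] + up[:-2]) / dx**2

def rhs(t, y):
    v, w = y[:Nx], y[Nx:]
    dv = lap_neumann(v) + v - w - v**3
    dw = delta * lap_neumann(w) + eps * (v - c1 * w - c0)
    return np.concatenate([dv, dw])

rng = np.random.default_rng(0)
y0 = 0.1 * rng.standard_normal(2 * Nx)            # assumed initial perturbation
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="BDF",
                t_eval=np.arange(0.0, 2000.0, 2.0), rtol=1e-6, atol=1e-8)
v_snapshots = sol.y[:Nx, :].T                      # rows are snapshots of v(x, t)
\end{verbatim}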
While the Koopman operator can be defined for~\eqref{eq:fhne}, the large dimension
of the state space (e.g., for a finite difference discretization) would make the necessary computations intractable.
Instead, the simpler task of constructing this mapping on a low-dimensional ``slow'' manifold, where the dynamics are effectively two-dimensional, is undertaken.
We start by collecting a large, representative data set, and performing
Principal Component Analysis (PCA)~\cite{Holmes1998} on it.
For the first set of measurements, we project $v$ and $w$ onto the leading {\em
three} principal components of the complete data set, so
${\vec{\tilde x}_n=[a_{1}(t_n),a_{2}(t_n),a_{3}(t_n)]}$ where $a_{k}$ is the coefficient
of the $k$-th principal component evaluated at $t_n$.
Together these modes capture over 95\% of the energy in the data set~\cite{Holmes1998}, and serve as an accurate, low-dimensional representation of the full state.
The other data come from pointwise measurements taken at $x=10$ (i.e., $\vec{\hat x}(t)=[v(10,t),w(10,t)]$).
Both sets of data are collected by generating 20 trajectories with a sampling interval of $\Delta t=2$, where each trajectory consists of $10^{3}$ snapshot pairs.
Each trajectory is computed by perturbing the unstable fixed point associated with the limit cycle, evolving \eqref{eq:fhne} for $\Delta T = 1000$ (roughly ten periods of oscillation) to allow fast transients to decay, and then recording the evolution of each measurement.
The data sets are generated independently, \emph{so no snapshot in one data set
corresponds exactly to any of the snapshots in the other}, and then ``whitened''
so that each measurement set has unit variance.
The registration data set consists of a {\em single measurement} from the PCA data set and the corresponding pointwise values.
We approximate the space of observables with a finite dimensional dictionary,
whose elements we denote as $\tilde{\psi}_k$ or $\hat{\psi}_k$.
In this application, $\tilde{\psi}_{k}$ and $\hat{\psi}_{k}$ are the $k$-th
shape function of a moving least squares interpolant~\cite{Lancaster1981,Levin1998}
with up to linear terms in each
variable~\cite{Li2002,Belytschko1994,Belytschko1996} and cubic spline weight
functions~\cite{Belytschko1996}.
Using a quad- or oct-tree to determine the node centers~\cite{Samet1990}
resulted in a set of 1622 basis functions for the PCA data and 1802 basis functions for the pointwise data.
Though non-trivial to implement, this set was chosen because it can exactly
reproduce constant and linear functions, but other choices of basis
functions are also viable~\cite{Wendland1999,Fasshauer1996,Karniadakis2013,Cockburn2000,Trefethen2000,Weideman2000,williams_submitted}.
\begin{figure*}
\caption{The data driven parameterization generated using the Koopman eigenfunctions
with eigenvalues near $\lambda_{1}\approx-8\times10^{-4}$ and $\lambda_{2}\approx4.7i\times10^{-2}$; the data are colored by the magnitude of the first and the phase of the second eigenfunction.}
\label{fig:leading_eigenfunction_correspondence}
\end{figure*}
We applied the EDMD procedure to the dynamical data sets to obtain approximations of the Koopman eigenfunctions and eigenvalues.
Figure~\ref{fig:leading_eigenfunction_correspondence} shows the PCA and point data colored by the magnitude of the Koopman eigenfunction with $\lambda_{1}\approx-8\times10^{-4}$ and by the phase of the Koopman eigenfunction with $\lambda_{2}\approx4.7i\times10^{-2}$.
In particular, $\tilde{\lambda}_{1}=-7.26\times 10^{-4}$,
$\hat{\lambda}_{1}=-8.57\times 10^{-4}$, $\tilde{\lambda}_{2}=0.0473i$, and $\hat{\lambda}_{2}=0.0473i$, where $\tilde{\lambda}_{k}$ is the $k$-th eigenvalue computed using the $\vec{\tilde x}$ measurements, and $\hat{\lambda}_{k}$ the eigenvalue obtained using $\vec{\hat x}$ measurements.
Despite the differences in the nature of data, both relevant sets of Koopman
eigenfunctions generate (effectively) the same parameterization of the slow
manifold.
\begin{figure}
\caption{The principal component coefficients reconstructed from the pointwise
data (red) for a \emph{new} randomly initialized trajectory; the black lines denote the true coefficients.}
\label{fig:reconstruct}
\end{figure}
Using this pair of parameterizations, we reconstruct the PCA coefficients from pointwise data.
Figure~\ref{fig:reconstruct} demonstrates the quality of this reconstruction
with data from a randomly initialized trajectory that approaches the limit
cycle.
In the figure, the black lines denote the true coefficient of each principal component, and the red dots indicate the predicted value.
Note that this trajectory \emph{was not used to compute the Koopman
eigenfunctions}; furthermore, the fact that the data are a time series was not
used in making this prediction, and each measurement of $\vec{\hat x}$ was considered individually.
Visually, the agreement between the predicted and actual states is good, but
there are errors in the reconstruction.
Over the time window $t\in[0, 400]$, which is shown in
Fig.~\ref{fig:reconstruct}, the relative error in each of the principal
components is $e_{1}=0.0405$, $e_{2}=0.0655$, and $e_{3}=0.0496$, where
$e_{i}=\|a_{i}^{\text{(true)}}-a_{i}^{\text{(pred)}}\|/\|a_{i}^{\text{(true)}}\|$,
with $\|\cdot\|=\left(\int_{0}^{400}(\cdot)^{2}\; dt\right)^{1/2}$.
In general, the accuracy of this approach is better for points nearer to the
limit cycle, and the relative errors over the window $t\in[0, 4000]$,
which (as shown in the supplement) contains points closer to the limit cycle,
are only $e_{1}=0.0140$, $e_{2}=0.0295$, and $e_{3}=0.0234$ respectively.
In this example, we have not yet discussed the dimensionality of the data, but
as stated previously, the number of measurements in each data set is critical
for this approach to be justifiable mathematically.
Our focus is on dynamics near the limit cycle, which in this problem are
effectively two dimensional.
However, the data lie on a two-dimensional {\em nonlinear manifold} that PCA,
which fits the data with a hyperplane, requires {\em three} principal
components to accurately represent.
Therefore, the identified mapping is from $\mathbb{R}^2$ to a two-dimensional
nonlinear manifold in $\mathbb{R}^3$, and not from $\mathbb{R}^2$ to
$\mathbb{R}^3$.
{\bf Conclusions.} We have presented a method for data
fusion or state reconstruction that is suitable for nonlinear systems and
exploits the existence of an intrinsic set of variables generated by the eigenfunctions
of the Koopman operator.
Rather than mapping directly between different sets of measurements, our method
focuses on generating an independent mapping to and from the intrinsic
variables for each heterogeneous set of measurements.
In principle, this can be accomplished as long as a mapping from each set of
measurements (or measurements and their time delays) to the system state exists,
and the benefit of this approach is that the majority of the required data can be
obtained independently, and only a single ``joint'' pair of data is needed.
The keys to this method are: (i) the invariance of the Koopman eigenvalues to
invertible transformations, (ii) the fact that the eigenfunctions parameterize
state space, and (iii) the ability of data-driven methods, such as EDMD, to
produce approximations of the eigenvalues and eigenfunctions that are ``accurate
enough'' to allow these properties to be exploited in practical settings.
\textbf{Acknowledgements} The authors gratefully acknowledge support from the
National Science Foundation (DMS-1204783) and the AFOSR.
\appendix
\section{Supplementary Material}
In this supplement, we present an expanded discussion of some key points in the
data fusion procedure outlined in the manuscript.
In particular, we focus on the ability of the Koopman eigenfunctions to parameterize
state space, the accuracy of the mapping between eigenfunctions obtained with
different sets of data, and the quality of the reconstruction for our
illustrative example over a longer time horizon.
Each of these points has been touched upon in the manuscript, and our objective
is simply to present some additional supporting evidence rather than introduce any new concepts.
\subsubsection*{Measurement Parameterization and Mapping }
\begin{figure*}
\caption{Pseudo-color plot of the first three principal component coefficients,
$a_{1}$, $a_{2}$, and $a_{3}$, as functions of the two Koopman eigenfunctions.}
\label{fig:merging-pca}
\end{figure*}
In the manuscript, we claim that our set of two Koopman eigenfunctions
parameterizes both the PCA data and the point-wise measurement data, and give
a visual example of this in Fig.~\ref{fig:leading_eigenfunction_correspondence}.
However, the final step in reconstructing the principal component coefficients
(or point-wise measurements) is mapping from the intrinsic coordinates defined
by the Koopman eigenfunctions to the original set of variables.
In principle, the Koopman modes would allow this to be done, and would
effectively act as coefficients in a truncated series expansion of the inverse map.
To simplify our code, we instead use interpolation methods appropriate for scattered data.
These methods could include moving least squares interpolants, but we make use
of the linear interpolation routine implemented by the \texttt{griddata} command in Scipy.
This routine uses a Delaunay triangulation in conjunction with Barycentric
interpolation to create a piecewise linear approximation of the inverse
function.
Figure~\ref{fig:merging-pca} plots the coefficient of the $i$-th principal component as a function of the two Koopman eigenfunctions;
note that the value of the principal component varies continuously and ``slowly,'' which is why the simple piecewise linear interpolant employed here is
sufficient to produce a good approximation of the mapping from the Koopman eigenfunction values to the principal component coefficients.
With far fewer data points or more complicated attractors, this mapping may be more difficult to satisfactorily approximate, and would require more sophisticated interpolation/extrapolation algorithms than what we used here.
\begin{figure}
\caption{Plot of $\tilde{\varphi}_{1}$ against $\hat{\varphi}_{1}$ (left) and of $\angle\tilde{\varphi}_{2}$ against $\angle\hat{\varphi}_{2}$ (right), which would coincide if the two parameterizations agreed exactly.}
\label{fig:one_to_one}
\end{figure}
A related issue involves the data used in the approximation.
In Fig.~\ref{fig:merging-pca}, it is clear that the majority of the data lies near $\hat{\varphi}_{1}=0$.
There are two reasons for this.
First, it is easier to collect data near the limit cycle simply due to the dynamics of the underlying system.
Furthermore, the magnitude of the eigenfunction, $\varphi_{1}$, typically grows rapidly as one gets further from the limit cycle, and will even
have singularities at the unstable fixed point.
However, because the EDMD procedure uses data to approximate the eigenfunctions,
it is important to have data further away from the limit cycle despite the
(possible extra) effort in obtaining them.
It is for this reason that we use 20 independently initialized trajectories rather than a single, longer trajectory.
In the manuscript, we noted that the accuracy of our data fusion decreases the further we get from the limit cycle.
Figure~\ref{fig:one_to_one} quantifies this effect.
The figure shows the amplitude and phase of what ought to be the same eigenfunction (approximated through different sets of heterogeneous observations) as functions of one another.
Ideally, these two representations should coincide, and for $\angle\tilde{\varphi}_{2}$ and $\angle\hat{\varphi}_{2}$ they, in effect, do.
However, this is not the case for $\tilde{\varphi}_{1}$ and $\hat{\varphi}_{1}$,
which are shown in the left plot.
In theory, both of these eigenfunctions are zero on the periodic orbit, and their absolute value increases for points that are further away.
The figure, with the $\alpha$ we computed, shows good agreement between the
value of the eigenfunctions when their absolute values are small, but a growing
systematic error when they are large.
This difference is one of the causes of the errors we observe, and why the
accuracy of our approach decreases as one gets further from the limit cycle.
There are, in general, open questions about the validity of EDMD computations
performed on subsets of state space, and how they impact the accuracy
of the numerically computed eigenfunctions.
For this problem, the predictions our method produces are accurate (to the eye)
when $|\hat{\varphi}_{1}|<0.03$, but beyond that point, the predicted and
actual solutions will be quantitatively and visually different due to a
combination of sparse data and the ``partial domain'' issue.
\subsubsection*{Measurement Reconstruction}
\begin{figure*}
\caption{Plot of the predicted principal component coefficients obtained using
the data merging procedure outlined in the paper.
This figure is a ``zoomed out'' version of Fig.~\ref{fig:reconstruct}, covering the interval $t\in[0,4000]$.}
\label{fig:long-prediction}
\end{figure*}
In Fig.~\ref{fig:reconstruct}, we reconstructed the principal component values
from point-wise data for a new, randomly-initialized trajectory on the interval
$t\in[0, 400]$.
Figure~\ref{fig:long-prediction} shows
the reconstructed state using the same trajectory, but over the time interval
$t\in[0,4000]$.
Overall, there is good agreement between the predicted and actual principal
component coefficients; in particular, our approach accurately recovers the
``envelope'' of the oscillation in all three principal components.
Although not obvious to the eye, the relative error decreases from $\sim$5\%
in the short window to $\sim$2\% over the longer time interval.
Recall that neither time nor the fact that this data are a trajectory are used
explicitly in the reconstruction; the increase in accuracy is solely due to the
fact that points at later times are near to the limit cycle.
\begin{figure*}
\caption{Three different views of the original principal component data (shown
in red), the reconstructed principal component values of the 20 trajectories that comprise
the point-wise data set used in the manuscript (shown in blue), and the
reconstruction with 20 new point-wise trajectories that were not used in the Koopman computation
(also shown in blue).
Though there is some error, which can most easily be seen in the center plot,
our approach effectively embeds these new points on the nonlinear,
two-dimensional manifold that the original set of principal component
coefficients is confined to.
The region with the largest error occurs near the ``hole'' in the center of the manifold, and corresponds to points that are furthest from the limit cycle.
}
\label{fig:embded-new-manifold}
\end{figure*}
Finally, Fig.~\ref{fig:embded-new-manifold} shows the original set of PCA data
in blue, and 40 trajectories consisting of the predicted values from the
point-wise reconstruction, which are shown in red.
Twenty of the trajectories were the data used to construct the point-wise
approximation of the Koopman eigenfunctions, and twenty of the trajectories are ``new''.
The three plots in the figure show the same data from three different views.
The point of this figure is to demonstrate that our predicted values lie on (or
near) the nonlinear manifold on which the original PCA data live.
There are clearly some errors, which become particularly pronounced using the
view in the center plot, that appear as ``noise'' about an otherwise parabolic shape.
As stated previously, these errors are most apparent near the ``hole'' in the
center of the manifold, which corresponds to points far away from the limit
cycle where our parameterizations are known to be inaccurate.
However, even these errors are small compared to the total change in the
coefficient of the second principal component.
As a result, while this method is certainly not error free, it is able to
effectively map point-wise measurements to principal component values (and vice
versa) as long as the numerical approximations of the necessary Koopman
eigenfunctions are ``accurate enough'' to serve as a set of intrinsic variables.
\end{document} |
\begin{document}
\preprint{APS/123-QED}
\title{Cooperation between coherent controls and noises in quantum metrology}
\author{Yu Chen}
\affiliation{Mechanical and Automation Engineering, The Chinese University of HongKong}
\author{Haidong Yuan}
\email{hdyuan@mae.cuhk.edu.hk}
\affiliation{Mechanical and Automation Engineering, The Chinese University of HongKong}
\date{\today}
\begin{abstract}
We study the cooperation between coherent controls and noises in quantum metrology and show that such cooperation can create multiple paths for parametrization in quantum metrology, which goes beyond the standard scheme and opens new possibilities for achieving higher precision limits. We demonstrate the effect of the cooperative interplays between coherent controls and the noises through canonical examples in quantum metrology, and show that the cooperative scheme can beat the standard scheme, and in certain regimes the precision limit under the cooperative scheme with noises can surpass the ultimate precision limit of the standard scheme under the ideal unitary dynamics.
\end{abstract}
\maketitle
Metrology, which studies the precision limit of measurement and estimation, plays a central role in science and technology. In recent years quantum metrology, which exploits quantum mechanical effects to achieve better precision than classical schemes, has gained increasing attention and has found wide applications in various fields\cite{giovannetti2011advances,giovannetti2006quantum,anisimov2010quantum,braunstein1996generalized,paris2009quantum,Fujiwara2008,escher2012general,demkowicz2014usin,demkowicz2012elusive,
schnabel2010quantum,ligo2011gravitational,joo2011quantum,higgins2007entanglement,kolobov1999spatial,lugiato2002quantum,morris2015imaging,roga2016security,tsang2016quantum,bollinger1996optimal,buvzek1999optimal,
leibfried2004toward,roos2007designer,derevianko2011colloquium,ludlow2015optical,shapiro2009quantum,lopaeva2013experimental,dowling1998correlated,huelga1997improvement,chin2012quantum,HallPRX,Berry2015,Alipour2014,Beau2017}, such as gravitational wave detection\cite{schnabel2010quantum,ligo2011gravitational}, quantum phase estimation\cite{escher2012general,joo2011quantum,anisimov2010quantum,higgins2007entanglement}, quantum imaging\cite{kolobov1999spatial,lugiato2002quantum,morris2015imaging,roga2016security,tsang2016quantum}, quantum target-detection\cite{shapiro2009quantum,lopaeva2013experimental}, quantum gyroscope\cite{dowling1998correlated} and atomic clock synchronization\cite{bollinger1996optimal,buvzek1999optimal,leibfried2004toward,roos2007designer,derevianko2011colloquium,ludlow2015optical,borregaard2013near}.
Standard schemes of quantum metrology, as shown in Fig.\ref{fig:scheme}, can be explained with the canonical example that uses spins to estimate the magnitude of a magnetic field. Without noises, the dynamics of each spin in the magnetic field is governed by the Hamiltonian $H=B\sigma_z$, with $B$ as the parameter to be estimated (here the gyromagnetic ratio of the spin has been absorbed in $B$). If $N$ spins are prepared in the GHZ state, $\frac{1}{\sqrt{2}}(|00\cdots0\rangle+|11\cdots1\rangle)$, and evolve under the dynamics for $t$ units of time, the final state is then $\frac{1}{\sqrt{2}}(e^{iNBt}|00\cdots 0\rangle+e^{-iNBt}|11\cdots 1\rangle)$, which has the quantum Fisher information\cite{Holevo, helstrom1976quantum} $F_Q=4N^2t^2$. According to the quantum Cram\'er-Rao bound\cite{Holevo, helstrom1976quantum}, the standard deviation of any unbiased estimator, $\delta \hat{B}=\sqrt{E[(\hat{B}-B)^2]}$, is lower bounded by the quantum Fisher information as $\delta \hat{B}\geq \frac{1}{\sqrt{mF_Q}}\geq\frac{1}{\sqrt{m}2Nt}$, where $m$ is the number of times that the procedure is repeated. This precision limit can be equivalently achieved by letting a single spin evolve under the dynamics for $T=Nt$ units of time. The precision limit, $\frac{1}{\sqrt{m}2T}$, which is referred to as the Heisenberg limit, provides the minimum standard deviation for any unbiased estimator at any given time $T$. It is also known that adding coherent controls alone cannot surpass this limit, i.e., the Heisenberg limit, $\frac{1}{\sqrt{m}2T}$, cannot be surpassed by adding extra terms to the Hamiltonian (which makes the Hamiltonian $H=B\sigma_z+H_C(t)$, where $H_C(t)$ represents externally added control terms)\cite{giovannetti2006quantum,boixo2007generalized,yuan2015optimal}. We note that for different Hamiltonians coherent controls may improve the precision limit\cite{yuan2015optimal,yuan2016sequential,Pang2017,Yang2017}; for example, it has been shown that for some dynamics governed by time-dependent Hamiltonians, properly designed controls can improve the precision limit to a scaling of $\frac{1}{T^2}$\cite{Pang2017,Yang2017,Naghiloo2017}. There are also works on the effect of controls under noisy dynamics, where different control techniques, including quantum error correction\cite{Plenio2000,Preskill2000,Dur2014,Arrad2014,Kessler2014,Ozeri2013,Unden2016,Matsuzaki2017,Lu2015,Sekatski2017,Rafal2017,Zhou2018}, dynamical decoupling\cite{Schmitt832,Boss837,SekatskoNJP2016,LangPRX2015,Taylor2008,Cooper2014} and optimal controls\cite{LiuSingle, LiuMulti}, are explored to improve the precision limits. In general, noises are regarded as harmful, and it is expected that the performance of controlled evolution with noises is worse than that of the controlled unitary evolution; controls are mainly employed to either eliminate or suppress the noises in order to recover the performance of the controlled unitary evolution.
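For completeness, the value $F_Q=4N^2t^2$ follows from the standard pure-state formula $F_Q=4\left(\langle\partial_B\psi|\partial_B\psi\rangle-|\langle\psi|\partial_B\psi\rangle|^2\right)$: for the evolved GHZ state $|\psi\rangle=\frac{1}{\sqrt{2}}(e^{iNBt}|00\cdots 0\rangle+e^{-iNBt}|11\cdots 1\rangle)$ one has
\begin{equation*}
|\partial_B\psi\rangle=\tfrac{iNt}{\sqrt{2}}\left(e^{iNBt}|00\cdots 0\rangle-e^{-iNBt}|11\cdots 1\rangle\right),\quad
\langle\partial_B\psi|\partial_B\psi\rangle=N^2t^2,\quad
\langle\psi|\partial_B\psi\rangle=0,
\end{equation*}
so that $F_Q=4N^2t^2$.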
\begin{figure}
\caption{(a) Parallel scheme and (b) controlled sequential scheme. The two schemes are equivalent under unitary dynamics.}
\label{fig:scheme}
\end{figure}
By contrast, we show that the interplay between coherent controls and noises provides very rich dynamics that can contribute to the parametrization. Instead of suppressing the noises, controls can actively make use of them. Under such a cooperative scheme, the precision limit may go beyond the limits of the standard schemes of quantum metrology. In particular, we show that although coherent controls and noises alone may not improve the precision limit under a given evolution, the cooperative interplay between them can. We show that through the interplay of coherent controls and noises, the parameter can be encoded into multiple components of the dynamics, which differs from the standard schemes where the parameter is usually encoded in only one part of the dynamics (either in the Hamiltonian or in the noise operator). This provides many possibilities to go beyond the precision limit of the standard schemes. Through the canonical examples in quantum metrology, we show that the precision limit in the cooperative scheme can beat the corresponding value in the standard scheme, and in certain regimes it can even surpass the highest precision limit that can be achieved under the controlled ideal unitary dynamics.
We note that there are studies on environment-assisted quantum metrology\cite{Goldstein2011,Cappellaro2012}, where the environment is assumed to consist of many spins that cannot be controlled individually (but may be controlled collectively) and can be parameterized by the unknown field as well\cite{Goldstein2011,Cappellaro2012}. By designing proper pulses, the information accumulated by the spins in the environment can be transferred back to the probe state to improve the precision limit\cite{Goldstein2011,Cappellaro2012}. These studies are different from the cooperative scheme studied here, where we do not assume that the environment consists of spins that can participate in the parametrization on their own, and the cooperative scheme works for Markovian dynamics where there is no back flow of information from the environment to the system as required in previous schemes.
We will use the example of estimating the magnitude of a DC magnetic field with spins. Without noises, by preparing a spin in the state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and letting it evolve under the dynamics governed by the Hamiltonian $H=B_z\sigma_z$ for $T$ units of time, we can get the final state $\frac{1}{\sqrt{2}}(e^{iB_zT}|0\rangle+e^{-iB_zT}|1\rangle)$ with the quantum Fisher information $F_Q=4T^2$, this leads to the Heisenberg limit $\delta \hat{B}_z\geq \frac{1}{\sqrt{m}2T}$\cite{Holevo,helstrom1976quantum}.
It has been shown that in this case adding coherent controls or ancillary systems cannot surpass this limit\cite{giovannetti2006quantum,boixo2007generalized,yuan2015optimal}. We note that in some of the literature any precision limit that scales as $\frac{1}{T}$ is referred to as the Heisenberg limit; in this article we will take the Heisenberg limit to be the (exact) highest precision limit that can be achieved under the controlled unitary dynamics at any given time $T$. For example, for the estimation of the frequency of an AC magnetic field considered in \cite{Pang2017}, this corresponds to $\frac{1}{\sqrt{m}BT^2}$\cite{Pang2017}. In this article we focus on the example of estimating the magnitude of the magnetic field; however, the cooperative interplays can be similarly exploited in other scenarios.
In practice, noise is unavoidable. Two types of noise, spontaneous emission and dephasing, are typically considered. Under spontaneous emission, the dynamics of the spin can be described by the master equation
\begin{eqnarray}
\dot{\rho} = -i[B_z\sigma_z,\rho]+\gamma\left[\sigma_{-}\rho\sigma_{+}-\frac{1}{2}\left\{ \sigma_{+}\sigma_{-},\rho\right\} \right],
\label{eq:masteq_spon}
\end{eqnarray}
where $\sigma_{\pm}=(\sigma_{x}\pm i\sigma_y)/2$ are the raising and lowering operators in the basis of $|0\rangle$ and $|1\rangle$.
Under this dynamics the maximal quantum Fisher information that can be achieved at time $T$ is $F_Q=4e^{-\gamma T}T^2$\cite{yuan2015quantum}, which is always smaller than the Heisenberg limit $4T^2$. Similarly at the presence of dephasing noise,
$\dot{\rho} = -i[B_z\sigma_z,\rho]+\frac{\eta}{2}(\sigma_{z}\rho\sigma_{z}-\rho),$
the maximal quantum Fisher information that can be achieved at time $T$ is $F_Q=4e^{-2\eta T}T^2$\cite{yuan2015quantum}, which is also always smaller than the Heisenberg limit.
In general given a dynamics
\begin{eqnarray}
\label{eq:master}
\dot{\rho} = -i[H(B_z),\rho]+\sum_i\gamma_i[L_i\rho L_i^\dagger-\frac{1}{2}(L_i^\dagger L_i\rho+\rho L_i^\dagger L_i)],
\end{eqnarray}
the coherent and the noisy dynamics are usually `independent' of each other; typically only one part of the dynamics contributes to the parametrization of the probe state.
However, by adding proper coherent controls, as we are going to show, the coherent part and the noisy part can contribute to the parametrization of the probe state cooperatively. This provides many new possibilities that go beyond the standard scheme. We demonstrate the cooperative scheme with the canonical example of using spins to estimate the magnetic field.
First consider the dynamics with the spontaneous emission. Without coherent controls, the lowering and raising operators, $\sigma_{-}=|0\rangle\langle 1|$ and $\sigma_{+}=|1\rangle\langle 0|$ in Eq.(\ref{eq:masteq_spon}), are independent of the magnitude of the magnetic field; the parameter is then only encoded in the Hamiltonian. However, if we add a (known) control field along the $X$-direction, this changes the Hamiltonian to
\begin{equation}
H =B_z \sigma_z + B_x \sigma_x.
\end{equation}
The ground and excited states of the Hamiltonian then become $| g\rangle = -\sin{\frac{\theta}{2}}| 1 \rangle + \cos{\frac{\theta}{2}}| 0 \rangle$ and $| e \rangle = \cos{\frac{\theta}{2}}| 1 \rangle + \sin{\frac{\theta}{2}}| 0 \rangle$ respectively, where $\theta = \arctan \frac{B_x}{B_z}$. This new basis, $| g\rangle$ and $|e\rangle$, contains the unknown parameter $B_z$. In the presence of the spontaneous emission, which induces decay from the excited state to the ground state, the dynamics is described by
\begin{equation}
\label{eq:interplay}
\dot{\rho} = - i[H, \rho] + \gamma(\sigma_{-}^H \rho \sigma_{+}^H -\frac{1}{2}\{\sigma_{+}^H\sigma_{-}^H, \rho\}),
\end{equation}
where $\sigma_{+}^H = | e\rangle \langle g |$, $\sigma_{-}^H = | g \rangle \langle e |$ are in the new basis of $\{ |g\rangle, |e\rangle \}$, which contain the parameter.
The interplay between the coherent control and the decay thus encodes the parameter into multiple components of the dynamics, which can contribute to the parametrization of the probe state cooperatively to achieve higher precision limits than the standard scheme. To see the precision limit that can be achieved under this cooperative scheme, we first prepare the probe state as $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ in the original basis, then add the control field and let the spin evolve under the dynamics governed by Eq.(\ref{eq:interplay}) for $T$ units of time. A detailed derivation can be found in the supplementary material. In Fig.\ref{decay_short} we plot the quantum Fisher information under the cooperative scheme and compare it with that of the standard scheme without coherent controls, as well as with the Heisenberg limit under the unitary dynamics.
From Fig.\ref{decay_short} we can see that the cooperative scheme beats the standard scheme, and the precision limit can even surpass the Heisenberg limit in the regime of small $T$. We note that this is the regime in which quantum metrology is mostly used in practical experiments, where noises and phase ambiguity are present.
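A minimal numerical sketch of this comparison is given below: it integrates Eq.(\ref{eq:interplay}) with a simple Euler scheme and estimates the quantum Fisher information from the Bures metric, $F_Q\approx 8\,[1-\mathrm{Tr}\sqrt{\sqrt{\rho_{B_z}}\,\rho_{B_z+\delta B}\sqrt{\rho_{B_z}}}\,]/\delta B^2$; the integrator, the step sizes, and this finite-difference estimate are illustrative choices and not the derivation given in the supplementary material.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(Bz, Bx, gamma, T, steps=20000):
    # Euler integration of the controlled master equation with dressed-state decay.
    H = Bz * sz + Bx * sx
    _, evecs = np.linalg.eigh(H)
    g, e = evecs[:, 0], evecs[:, 1]            # ground / excited states of H
    Lm = np.outer(g, e.conj())                 # sigma_-^H = |g><e|
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(plus, plus.conj())
    dt = T / steps
    for _ in range(steps):
        drho = -1j * (H @ rho - rho @ H) + gamma * (
            Lm @ rho @ Lm.conj().T
            - 0.5 * (Lm.conj().T @ Lm @ rho + rho @ Lm.conj().T @ Lm))
        rho = rho + dt * drho
    return 0.5 * (rho + rho.conj().T)          # re-symmetrize

def qfi(Bz, Bx, gamma, T, dB=1e-4):
    r1, r2 = evolve(Bz, Bx, gamma, T), evolve(Bz + dB, Bx, gamma, T)
    s = sqrtm(r1)
    A = np.real(np.trace(sqrtm(s @ r2 @ s)))   # Uhlmann fidelity amplitude
    return 8 * (1 - A) / dB**2

print(qfi(Bz=0.1, Bx=0.1, gamma=0.5, T=1.0))   # cooperative-scheme parameters used above
\end{verbatim}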
\begin{figure}
\caption{Quantum Fisher information for the system with the spontaneous emission, under the cooperative and the standard schemes respectively, where $B_z = 0.1, B_x = 0.1,\gamma = 0.5$. The Heisenberg limit under the unitary dynamics is also plotted for comparison.}
\label{decay_short}
\end{figure}
Similarly, for the case with the dephasing noise, after adding a control field along the $X$-direction, the dynamics becomes
\begin{eqnarray}
\label{eq:controldephasing}
\dot{\rho} = -i[B_z\sigma_z+B_x\sigma_x,\rho]+\frac{\eta}{2}(\sigma_{n}\rho\sigma_{n}-\rho),
\end{eqnarray}
where $\sigma_n=\frac{B_z\sigma_z+B_x\sigma_x}{\sqrt{B_z^2+B_x^2}}$ is along the direction of the total field. This change of dynamics is nothing but a change of the dephasing axis of the spin to the direction of the total field. To see the effect of the interplay, we again first prepare the probe state as $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ in the original basis, then add the control field and let the spin evolve under the dynamics governed by Eq.(\ref{eq:controldephasing}) for $T$ units of time. The effect of the interplay in this case can be seen from Fig.\ref{dephasing_short}, where the cooperative scheme not only achieves a higher precision limit than the standard scheme but also beats the Heisenberg limit in the small-time regime, where practical quantum metrology is mostly used when noise is present.
\begin{figure}
\caption{Quantum Fisher information for the system with the dephasing noise, under the cooperative and the standard schemes respectively, where $B_z = 0.1, B_x = 0.1,\eta= 0.5$. The Heisenberg limit under the unitary dynamics is also plotted for comparison. }
\label{dephasing_short}
\end{figure}
In general the decay rate can also depend on the parameter. For example, if the environment is in the thermal state at temperature $T_e$, the dynamics of the spin is then
\begin{equation}
\begin{aligned}
\dot{\rho}= &-i [B_z\sigma_z+B_x\sigma_x,\rho]+\gamma_{+}^H(\sigma_{+}^H \rho \sigma_{-}^H - \frac{1}{2} \{ \sigma_{-}^H \sigma_{+}^H , \rho\}) \\
&+\gamma_{-}^H (\sigma_{-}^H \rho \sigma_{+}^H - \frac{1}{2} \{ \sigma_{+}^H \sigma_{-}^H,\rho \}),
\end{aligned}
\end{equation}
where $\gamma_{+}^H=\gamma_0 N$ and $\gamma_{-}^H=\gamma_0(N+1)$, with $N = \frac{1}{e^{\omega/T_e}-1}$ (here we have taken $\hbar$ as 1), $\gamma_0= \frac{4 \omega^3 |\vec d|^2}{3}$ with $\vec d$ the dipole operator, and $\omega=2\sqrt{B_z^2+B_x^2}$ the energy gap between the excited state and the ground state\cite{breuer2002theory}. In this case the decay rates also encode the parameter, which provides an additional component that contributes to the parametrization. Fig.\ref{single_para} shows the effect when this additional component is included.
\begin{figure}
\caption{Quantum Fisher information under the cooperative scheme with the environment in the thermal state, where $B_z = 0.3, B_x = 0.1, |\vec d|= 2, T_e = 0$. The Heisenberg limit under the unitary dynamics is also plotted for comparison.}
\label{single_para}
\end{figure}
For multiple spins, it has been shown that under the unitary dynamics couplings between the spins do not help improve the precision limit\cite{giovannetti2006quantum,boixo2007generalized}. With the interplay of controls and noises, however, the couplings can help shape the ground and excited states, which can make them potentially more sensitive to the parameter. In particular, with the couplings, the ground and excited states can be entangled, which can induce non-local dynamics that can be exploited to improve the precision limit. For example, consider a two-spin system where the Hamiltonian is
\begin{equation}\label{hamE}
H=B_z(\sigma^{1}_{z}+\sigma^{2}_{z})+\sigma^{1}_{z}\sigma^{2}_{z},
\end{equation}
here $\sigma_z^1=\sigma_z\otimes I_2$ and $\sigma_z^2=I_2\otimes \sigma_z$, where $I_2$ denotes the $2\times 2$ identity matrix, and the Hamiltonian is written in units such that the coupling strength between the two spins is $1$.
Under the unitary dynamics, the highest precision limit is achieved by preparing the probe state as $\frac{|00\rangle+|11\rangle}{\sqrt{2}}$, which has the maximal quantum Fisher information $F_Q=16T^2$ at time $T$. We note that in this case the coupling does not help improve the precision limit; the same precision can be achieved without the coupling, i.e., when the Hamiltonian is just $B_{z}(\sigma^{1}_{z}+\sigma^{2}_{z})$, the maximal quantum Fisher information can also reach $16T^2$ by preparing the probe state as $\frac{|00\rangle+|11\rangle}{\sqrt{2}}$\cite{giovannetti2006quantum}.
Now consider adding a small control field $B_x\ll 1$ along the transverse direction,
\begin{equation}
H=\sigma^{1}_{z}\sigma^{2}_{z}
+B_{z}(\sigma^{1}_{z}+\sigma^{2}_{z})+B_x(\sigma_x^1+\sigma_x^2),
\end{equation}
and assume the system is in contact with a cool reservoir which induces decays between the eigenstates of the system, as shown in Fig.\ref{energy_level}, where $E_k$, $k\in\{1,2,3,4\}$, are the eigenenergies of $H$ and $E_1<E_2<E_3<E_4$. The dynamics of this system can then be described by the master equation
\begin{eqnarray}
\label{eq:two}
\aligned
\dot{\rho} = -i[H,\rho]+L(\rho),
\endaligned
\end{eqnarray}
where $L(\rho)=\sum_{(i,j)\in A}\gamma_{ij}\big(| E_j \rangle \langle E_i | \rho | E_i \rangle \langle E_j | - \frac{1}{2}\{ | E_i \rangle \langle E_i | ,\rho\}\big)$, $A = \{(4,3),(4,2),(3,2),(3,1)\}$, $|E_i\rangle$, $i\in\{1,2,3,4\}$, are the energy eigenstates of the system, and $\gamma_{ij} = \frac{4\omega_{ij}^3 |\vec d|^2}{3}$ are the decay rates induced by the cool reservoir, with $\omega_{ij}=E_i-E_j$.
\begin{figure}
\caption{Decay channels induced by the cold reservoir.}
\label{energy_level}
\end{figure}
\begin{figure}
\caption{Quantum Fisher information of the cooperative scheme and the Heisenberg limit for different $B_z$, where $B_x = 0.1, t = 1,|\vec d| =10$.}
\label{double_Bz}
\end{figure}
To see the effect of the cooperative scheme, we prepare the initial state as $\frac{|00\rangle + |11 \rangle}{\sqrt{2}}$ in the computational basis, then add the control field along the transverse direction and let the system evolve under the dynamics governed by Eq.(\ref{eq:two}).
In Fig.~\ref{double_Bz} we plot the quantum Fisher information at a fixed time $t =1$ for different $B_z$. It can be seen that the quantum Fisher information surpasses the highest precision limit under the ideal unitary dynamics when $B_z \in [0.89,1.14]$. We note that when $B_z$ is outside of this region, we can always shift it into this region by adaptive controls; in the asymptotic limit the precision can then achieve the maximal value at $B_z\approx 1$. The point $B_z=1$ actually corresponds to a critical point around which the ground state is most sensitive to the parameter\cite{zhang2008detection}.
In the appendix we show that the quantum Fisher information of the ground state is approximately $\frac{2B_x^2}{(2B_x^2+(-1+B_z)^2)^2}$, which achieves its maximal value at the critical point. By tuning the controls, we can adjust the highest QFI of the ground state (which we denote as $F_{\max}=\max_{B_z}F_Q(|g\rangle)$) and the width of the region around $B_z\approx 1$ in which the QFI surpasses the Heisenberg limit $16T^2$ (which we denote as $W$, the length of the region $\{B_z \,|\, F_Q(|g\rangle) \ge 16T^2\}$). From the QFI of the ground state, we can get a relation between $F_{\max}$ and $W$ as
\begin{eqnarray}
\label{eq: tradeoff}
\frac{W^2}{4} = \frac{1}{\sqrt{F_{\max}}}(\frac{1}{4T}-\frac{1}{\sqrt{F_{\max}}}), \forall F_{\max} >16T^2.
\end{eqnarray}
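For reference, this relation can be obtained from the ground-state QFI quoted above as follows (a brief sketch). The maximum is $F_{\max}=F_Q|_{B_z=1}=\frac{1}{2B_x^2}$, and $F_Q(|g\rangle)\geq 16T^2$ is equivalent to $(B_z-1)^2\leq \frac{\sqrt{2}B_x}{4T}-2B_x^2$, so that
\begin{equation*}
\frac{W^2}{4}=\frac{\sqrt{2}B_x}{4T}-2B_x^2
=\frac{1}{\sqrt{F_{\max}}}\Big(\frac{1}{4T}-\frac{1}{\sqrt{F_{\max}}}\Big),
\end{equation*}
where $B_x=\frac{1}{\sqrt{2F_{\max}}}$ has been used; the condition $F_{\max}>16T^2$ ensures that the right-hand side is positive.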
This is a trade-off relation between $F_{\max}$ and $W$, and by changing the controls we can adaptively tune between $W$ and $F_{\max}$. In practice, the unknown region for the value of the parameter may be relatively wide at the beginning; we can then use a relatively larger control field, which has a smaller $F_{\max}$ but a bigger $W$, to accommodate the relatively wide region. After collecting a certain amount of measurement data, which narrows down the unknown region, we can adaptively weaken the control field to obtain a bigger $F_{\max}$. We note that in the standard scheme the term containing the parameter, $B_z(\sigma_z^1+\sigma^2_z)$, acts on the probe state locally, so the precision is bounded by the Heisenberg limit\cite{yuanfd}. The Heisenberg scaling $\frac{1}{N}$ is obtained by assuming the operator containing the parameter acts on the spins locally\cite{yuanfd}, as shown in Fig.~\ref{fig:scheme}. Under the cooperative scheme, however, the ground state becomes entangled, which, under the interplay between coherent controls and the decay, induces non-local dynamics. The cooperative scheme can thus turn the local parametrization of the standard scheme into a non-local parametrization, which provides possibilities to go beyond the limit of the standard scheme. More gains are expected with more spins under the cooperative scheme with properly designed interplays between coherent controls and noises.
{\em Summary:} Given a dynamics with an unknown parameter, such as
\begin{eqnarray}
\dot{\rho} = -i[H(B_z),\rho]+\sum_i\gamma_i[L_i\rho L_i^\dagger-\frac{1}{2}(L_i^\dagger L_i\rho+\rho L_i^\dagger L_i)],\nonumber
\end{eqnarray}
we showed that with proper coherent controls it is possible to encode the unknown parameter, which originally appears only in the Hamiltonian, into multiple components of the dynamics. This can achieve a much higher precision limit than the standard scheme. The focus of the cooperative scheme is thus on the design of controls that better encode the parameter into multiple components of the dynamics, which opens new directions for the study of quantum metrology. Under the cooperative scheme, many elements of the dynamics, such as the couplings and noises, become active players, as opposed to the passive roles they play in the standard scheme. This article focuses on demonstrating the idea, aiming to show the possibility of achieving precision limits beyond the standard scheme. We expect this will lead to investigations of the full potential of the cooperative scheme and new ultimate precision limits that go beyond the standard scheme.
\begin{thebibliography}{68}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Giovannetti et~al.}(2011)\citenamefont{Giovannetti,
Lloyd, and Maccone}}]{giovannetti2011advances}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Maccone}},
\bibinfo{journal}{Nature photonics} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{222} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Giovannetti et~al.}(2006)\citenamefont{Giovannetti,
Lloyd, and Maccone}}]{giovannetti2006quantum}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Maccone}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{010401} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Anisimov et~al.}(2010)\citenamefont{Anisimov, Raterman,
Chiruvelli, Plick, Huver, Lee, and Dowling}}]{anisimov2010quantum}
\bibinfo{author}{\bibfnamefont{P.~M.} \bibnamefont{Anisimov}},
\bibinfo{author}{\bibfnamefont{G.~M.} \bibnamefont{Raterman}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Chiruvelli}},
\bibinfo{author}{\bibfnamefont{W.~N.} \bibnamefont{Plick}},
\bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Huver}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Lee}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{104}},
\bibinfo{pages}{103602} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Braunstein et~al.}(1996)\citenamefont{Braunstein,
Caves, and Milburn}}]{braunstein1996generalized}
\bibinfo{author}{\bibfnamefont{S.~L.} \bibnamefont{Braunstein}},
\bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Caves}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{G.~J.} \bibnamefont{Milburn}},
\bibinfo{journal}{annals of physics} \textbf{\bibinfo{volume}{247}},
\bibinfo{pages}{135} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{PARIS}(2009)}]{paris2009quantum}
\bibinfo{author}{\bibfnamefont{M.~G.~A.} \bibnamefont{PARIS}},
\bibinfo{journal}{International Journal of Quantum Information}
\textbf{\bibinfo{volume}{07}}, \bibinfo{pages}{125} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Fujiwara and Imai}(2008)}]{Fujiwara2008}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Fujiwara}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Imai}},
\bibinfo{journal}{Journal of Physics A: Mathematical and Theoretical}
\textbf{\bibinfo{volume}{41}}, \bibinfo{pages}{255304}
(\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Escher et~al.}(2011)\citenamefont{Escher,
de~Matos~Filho, and Davidovich}}]{escher2012general}
\bibinfo{author}{\bibfnamefont{B.~M.} \bibnamefont{Escher}},
\bibinfo{author}{\bibfnamefont{R.~L.} \bibnamefont{de~Matos~Filho}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Davidovich}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{406} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski
and Maccone}(2014)}]{demkowicz2014usin}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Demkowicz-Dobrza\ifmmode~\acute{n}\else
\'{n}\fi{}ski}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Maccone}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{113}},
\bibinfo{pages}{250801} (\bibinfo{year}{2014}),
URL \url{http://link.aps.org/doi/10.1103/PhysRevLett.113.250801}.
\bibitem[{\citenamefont{Demkowicz-Dobrzanski
et~al.}(2012)\citenamefont{Demkowicz-Dobrzanski, Kolodynski, and
Guta}}]{demkowicz2012elusive}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Demkowicz-Dobrzanski}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kolodynski}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Guta}},
\bibinfo{journal}{Nature Communications} \textbf{\bibinfo{volume}{3}},
\bibinfo{pages}{1063} (\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Schnabel et~al.}(2010)\citenamefont{Schnabel,
Mavalvala, McClelland, and Lam}}]{schnabel2010quantum}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Schnabel}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Mavalvala}},
\bibinfo{author}{\bibfnamefont{D.~E.} \bibnamefont{McClelland}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~K.} \bibnamefont{Lam}},
\bibinfo{journal}{Nature communications} \textbf{\bibinfo{volume}{1}},
\bibinfo{pages}{121} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{LIGO}(2011)}]{ligo2011gravitational}
\bibinfo{author}{\bibnamefont{LIGO}}, \bibinfo{journal}{Nature Physics}
\textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{962} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Joo et~al.}(2011)\citenamefont{Joo, Munro, and
Spiller}}]{joo2011quantum}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Joo}},
\bibinfo{author}{\bibfnamefont{W.~J.} \bibnamefont{Munro}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.~P.} \bibnamefont{Spiller}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{107}},
\bibinfo{pages}{083601} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Higgins et~al.}(2007)\citenamefont{Higgins, Berry,
Bartlett, Wiseman, and Pryde}}]{higgins2007entanglement}
\bibinfo{author}{\bibfnamefont{B.~L.} \bibnamefont{Higgins}},
\bibinfo{author}{\bibfnamefont{D.~W.} \bibnamefont{Berry}},
\bibinfo{author}{\bibfnamefont{S.~D.} \bibnamefont{Bartlett}},
\bibinfo{author}{\bibfnamefont{H.~M.} \bibnamefont{Wiseman}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.~J.} \bibnamefont{Pryde}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{450}},
\bibinfo{pages}{393} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Kolobov}(1999)}]{kolobov1999spatial}
\bibinfo{author}{\bibfnamefont{M.~I.} \bibnamefont{Kolobov}},
\bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{71}},
\bibinfo{pages}{1539} (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Lugiato et~al.}(2002)\citenamefont{Lugiato, Gatti, and
Brambilla}}]{lugiato2002quantum}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Lugiato}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gatti}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Brambilla}},
\bibinfo{journal}{Journal of Optics B: Quantum and semiclassical optics}
\textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{S176} (\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Morris et~al.}(2015)\citenamefont{Morris, Aspden, Bell,
Boyd, and Padgett}}]{morris2015imaging}
\bibinfo{author}{\bibfnamefont{P.~A.} \bibnamefont{Morris}},
\bibinfo{author}{\bibfnamefont{R.~S.} \bibnamefont{Aspden}},
\bibinfo{author}{\bibfnamefont{J.~E.} \bibnamefont{Bell}},
\bibinfo{author}{\bibfnamefont{R.~W.} \bibnamefont{Boyd}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~J.} \bibnamefont{Padgett}},
\bibinfo{journal}{Nature communications} \textbf{\bibinfo{volume}{6}},
\bibinfo{pages}{5913} (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Roga and Jeffers}(2016)}]{roga2016security}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Roga}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Jeffers}},
\bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{94}},
\bibinfo{pages}{032301} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Tsang et~al.}(2016)\citenamefont{Tsang, Nair, and
Lu}}]{tsang2016quantum}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Tsang}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Nair}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{X.-M.} \bibnamefont{Lu}},
\bibinfo{journal}{Physical Review X} \textbf{\bibinfo{volume}{6}},
\bibinfo{pages}{031033} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Bollinger et~al.}(1996)\citenamefont{Bollinger, Itano,
Wineland, and Heinzen}}]{bollinger1996optimal}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Bollinger}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Wineland}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Heinzen}},
\bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{54}},
\bibinfo{pages}{R4649} (\bibinfo{year}{1996}).
\bibitem[{\citenamefont{Bu{\v{z}}ek et~al.}(1999)\citenamefont{Bu{\v{z}}ek,
Derka, and Massar}}]{buvzek1999optimal}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Bu{\v{z}}ek}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Derka}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Massar}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{82}},
\bibinfo{pages}{2207} (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Leibfried et~al.}(2004)\citenamefont{Leibfried,
Barrett, Schaetz, Britton, Chiaverini, Itano, Jost, Langer, and
Wineland}}]{leibfried2004toward}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Leibfried}},
\bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Schaetz}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Britton}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Chiaverini}},
\bibinfo{author}{\bibfnamefont{W.~M.} \bibnamefont{Itano}},
\bibinfo{author}{\bibfnamefont{J.~D.} \bibnamefont{Jost}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Langer}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.~J.} \bibnamefont{Wineland}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{304}},
\bibinfo{pages}{1476} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Roos et~al.}(2006)\citenamefont{Roos, Chwalla, Kim,
Riebe, and Blatt}}]{roos2007designer}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Roos}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Chwalla}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Kim}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Riebe}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blatt}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{443}},
\bibinfo{pages}{316} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Derevianko and
Katori}(2011)}]{derevianko2011colloquium}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Derevianko}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Katori}},
\bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{83}},
\bibinfo{pages}{331} (\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Ludlow et~al.}(2015)\citenamefont{Ludlow, Boyd, Ye,
Peik, and Schmidt}}]{ludlow2015optical}
\bibinfo{author}{\bibfnamefont{A.~D.} \bibnamefont{Ludlow}},
\bibinfo{author}{\bibfnamefont{M.~M.} \bibnamefont{Boyd}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Ye}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Peik}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.~O.} \bibnamefont{Schmidt}},
\bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{87}},
\bibinfo{pages}{637} (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Shapiro and Lloyd}(2009)}]{shapiro2009quantum}
\bibinfo{author}{\bibfnamefont{J.~H.} \bibnamefont{Shapiro}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Lloyd}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{11}},
\bibinfo{pages}{063045} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Lopaeva et~al.}(2013)\citenamefont{Lopaeva, Berchera,
Degiovanni, Olivares, Brida, and Genovese}}]{lopaeva2013experimental}
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Lopaeva}},
\bibinfo{author}{\bibfnamefont{I.~R.} \bibnamefont{Berchera}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Degiovanni}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Olivares}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brida}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Genovese}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{110}},
\bibinfo{pages}{153603} (\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Dowling}(1998)}]{dowling1998correlated}
\bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}},
\bibinfo{journal}{Physical Review A} \textbf{\bibinfo{volume}{57}},
\bibinfo{pages}{4736} (\bibinfo{year}{1998}).
\bibitem[{\citenamefont{Huelga et~al.}(1997)\citenamefont{Huelga, Macchiavello,
Pellizzari, Ekert, Plenio, and Cirac}}]{huelga1997improvement}
\bibinfo{author}{\bibfnamefont{S.~F.} \bibnamefont{Huelga}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Macchiavello}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Pellizzari}},
\bibinfo{author}{\bibfnamefont{A.~K.} \bibnamefont{Ekert}},
\bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.~I.} \bibnamefont{Cirac}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{79}},
\bibinfo{pages}{3865} (\bibinfo{year}{1997}).
\bibitem[{\citenamefont{Chin et~al.}(2012)\citenamefont{Chin, Huelga, and
Plenio}}]{chin2012quantum}
\bibinfo{author}{\bibfnamefont{A.~W.} \bibnamefont{Chin}},
\bibinfo{author}{\bibfnamefont{S.~F.} \bibnamefont{Huelga}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~B.}
\bibnamefont{Plenio}}, \bibinfo{journal}{Physical review letters}
\textbf{\bibinfo{volume}{109}}, \bibinfo{pages}{233601}
(\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Hall and Wiseman}(2012)}]{HallPRX}
\bibinfo{author}{\bibfnamefont{M.~J.~W.} \bibnamefont{Hall}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.~M.} \bibnamefont{Wiseman}},
\bibinfo{journal}{Phys. Rev. X} \textbf{\bibinfo{volume}{2}},
\bibinfo{pages}{041006} (\bibinfo{year}{2012}),
URL \url{http://link.aps.org/doi/10.1103/PhysRevX.2.041006}.
\bibitem[{\citenamefont{Berry et~al.}(2015)\citenamefont{Berry, Tsang, Hall,
and Wiseman}}]{Berry2015}
\bibinfo{author}{\bibfnamefont{D.~W.} \bibnamefont{Berry}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Tsang}},
\bibinfo{author}{\bibfnamefont{M.~J.~W.} \bibnamefont{Hall}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~M.}
\bibnamefont{Wiseman}}, \bibinfo{journal}{Phys. Rev. X}
\textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{031018} (\bibinfo{year}{2015}),
URL \url{http://link.aps.org/doi/10.1103/PhysRevX.5.031018}.
\bibitem[{\citenamefont{Alipour et~al.}(2014)\citenamefont{Alipour, Mehboudi,
and Rezakhani}}]{Alipour2014}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Alipour}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Mehboudi}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.~T.} \bibnamefont{Rezakhani}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{112}},
\bibinfo{pages}{120405} (\bibinfo{year}{2014}),
URL \url{http://link.aps.org/doi/10.1103/PhysRevLett.112.120405}.
\bibitem[{\citenamefont{Beau and del Campo}(2017)}]{Beau2017}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Beau}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{del Campo}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{119}},
\bibinfo{pages}{010403} (\bibinfo{year}{2017}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevLett.119.010403}.
\bibitem[{\citenamefont{Borregaard and S{\o}rensen}(2013)}]{borregaard2013near}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Borregaard}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{S{\o}rensen}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{111}},
\bibinfo{pages}{090801} (\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Holevo}(1982)}]{Holevo}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Holevo}},
\emph{\bibinfo{title}{Probabilistic and Quantum Aspects of Quantum Theory}}
(\bibinfo{publisher}{North-Holland, Amsterdam}, \bibinfo{year}{1982}).
\bibitem[{\citenamefont{Helstrom}(1976)}]{helstrom1976quantum}
\bibinfo{author}{\bibfnamefont{C.~W.} \bibnamefont{Helstrom}},
\emph{\bibinfo{title}{Quantum detection and estimation theory}}
(\bibinfo{publisher}{Academic press}, \bibinfo{year}{1976}).
\bibitem[{\citenamefont{Boixo et~al.}(2007)\citenamefont{Boixo, Flammia, Caves,
and Geremia}}]{boixo2007generalized}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Boixo}},
\bibinfo{author}{\bibfnamefont{S.~T.} \bibnamefont{Flammia}},
\bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Caves}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Geremia}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{98}},
\bibinfo{pages}{090401} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Yuan and Fung}(2015)}]{yuan2015optimal}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.-H.~F.} \bibnamefont{Fung}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{115}},
\bibinfo{pages}{110401} (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Yuan}(2016)}]{yuan2016sequential}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{117}},
\bibinfo{pages}{160801} (\bibinfo{year}{2016}).
\bibitem[{\citenamefont{Pang and Jordan}(2017)}]{Pang2017}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Pang}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Jordan}},
\bibinfo{journal}{Nature Communications} \textbf{\bibinfo{volume}{8}},
\bibinfo{pages}{14695} (\bibinfo{year}{2017}).
\bibitem[{\citenamefont{Yang et~al.}(2017)\citenamefont{Yang, Pang, and
Jordan}}]{Yang2017}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Yang}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Pang}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Jordan}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{020301} (\bibinfo{year}{2017}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevA.96.020301}.
\bibitem[{\citenamefont{Naghiloo et~al.}(2017)\citenamefont{Naghiloo, Jordan,
and Murch}}]{Naghiloo2017}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Naghiloo}},
\bibinfo{author}{\bibfnamefont{A.~N.} \bibnamefont{Jordan}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{K.~W.} \bibnamefont{Murch}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{119}},
\bibinfo{pages}{180801} (\bibinfo{year}{2017}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevLett.119.180801}.
\bibitem[{\citenamefont{Macchiavello et~al.}(2000)\citenamefont{Macchiavello,
Huelga, Cirac, Ekert, and Plenio}}]{Plenio2000}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Macchiavello}},
\bibinfo{author}{\bibfnamefont{S.~F.} \bibnamefont{Huelga}},
\bibinfo{author}{\bibfnamefont{J.~I.} \bibnamefont{Cirac}},
\bibinfo{author}{\bibfnamefont{A.~K.} \bibnamefont{Ekert}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}},
\emph{\bibinfo{title}{Decoherence and quantum error correction in frequency
standards}} (\bibinfo{publisher}{Kluwer Academic/Plenum Publishers, New
York}, \bibinfo{year}{2000}).
\bibitem[{\citenamefont{Preskill}(2000)}]{Preskill2000}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Preskill}},
\bibinfo{journal}{arXiv:quant-ph/0010098} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{D\"ur et~al.}(2014)\citenamefont{D\"ur, Skotiniotis,
Fr\"owis, and Kraus}}]{Dur2014}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D\"ur}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Skotiniotis}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Fr\"owis}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Kraus}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{112}},
\bibinfo{pages}{080801} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Arrad et~al.}(2014)\citenamefont{Arrad, Vinkler,
Aharonov, and Retzker}}]{Arrad2014}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Arrad}},
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Vinkler}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Aharonov}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Retzker}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{112}},
\bibinfo{pages}{150801} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Kessler et~al.}(2014)\citenamefont{Kessler, Lovchinsky,
Sushkov, and Lukin}}]{Kessler2014}
\bibinfo{author}{\bibfnamefont{E.~M.} \bibnamefont{Kessler}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Lovchinsky}},
\bibinfo{author}{\bibfnamefont{A.~O.} \bibnamefont{Sushkov}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Lukin}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{112}},
\bibinfo{pages}{150802} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Ozeri}(2013)}]{Ozeri2013}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Ozeri}},
\bibinfo{journal}{arXiv} p. \bibinfo{pages}{1310.3432}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Unden et~al.}(2016)\citenamefont{Unden,
Balasubramanian, Louzon, Vinkler, Plenio, Markham, Twitchen, Stacey,
Lovchinsky, Sushkov et~al.}}]{Unden2016}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Unden}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Balasubramanian}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Louzon}},
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Vinkler}},
\bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Markham}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Twitchen}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Stacey}},
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Lovchinsky}},
\bibinfo{author}{\bibfnamefont{A.~O.} \bibnamefont{Sushkov}},
\bibnamefont{et~al.}, \bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{116}}, \bibinfo{pages}{230502}
(\bibinfo{year}{2016}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevLett.116.230502}.
\bibitem[{\citenamefont{Matsuzaki and Benjamin}(2017)}]{Matsuzaki2017}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Matsuzaki}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Benjamin}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{95}},
\bibinfo{pages}{032303} (\bibinfo{year}{2017}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevA.95.032303}.
\bibitem[{\citenamefont{Lu et~al.}(2015)\citenamefont{Lu, Yu, and
  Oh}}]{Lu2015}
  \bibinfo{author}{\bibfnamefont{X.-M.} \bibnamefont{Lu}},
  \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Yu}}, \bibnamefont{and}
  \bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Oh}},
\bibinfo{journal}{Nature Communications} \textbf{\bibinfo{volume}{6}},
\bibinfo{pages}{7282} (\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Sekatski et~al.}(2017)\citenamefont{Sekatski,
Skotiniotis, Ko{\l{}}ody{\'{n}}ski, and D{\"{u}}r}}]{Sekatski2017}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Sekatski}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Skotiniotis}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Ko{\l{}}ody{\'{n}}ski}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D{\"{u}}r}},
\bibinfo{journal}{{Quantum}} \textbf{\bibinfo{volume}{1}},
\bibinfo{pages}{27} (\bibinfo{year}{2017}), ISSN \bibinfo{issn}{2521-327X},
URL \url{https://doi.org/10.22331/q-2017-09-06-27}.
\bibitem[{\citenamefont{Demkowicz-Dobrza\ifmmode~\acute{n}\else \'{n}\fi{}ski
et~al.}(2017)\citenamefont{Demkowicz-Dobrza\ifmmode~\acute{n}\else
\'{n}\fi{}ski, Czajkowski, and Sekatski}}]{Rafal2017}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Demkowicz-Dobrza\ifmmode~\acute{n}\else
\'{n}\fi{}ski}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Czajkowski}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Sekatski}},
\bibinfo{journal}{Phys. Rev. X} \textbf{\bibinfo{volume}{7}},
\bibinfo{pages}{041009} (\bibinfo{year}{2017}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevX.7.041009}.
\bibitem[{\citenamefont{Zhou et~al.}(2018)\citenamefont{Zhou, Zhang, Preskill,
and Jiang}}]{Zhou2018}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Zhou}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Zhang}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Preskill}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Jiang}},
\bibinfo{journal}{Nature Communications} \textbf{\bibinfo{volume}{9(1)}},
\bibinfo{pages}{78} (\bibinfo{year}{2018}).
\bibitem[{\citenamefont{Schmitt et~al.}(2017)\citenamefont{Schmitt, Gefen,
St{\"u}rner, Unden, Wolff, M{\"u}ller, Scheuer, Naydenov, Markham, Pezzagna
et~al.}}]{Schmitt832}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Schmitt}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Gefen}},
\bibinfo{author}{\bibfnamefont{F.~M.} \bibnamefont{St{\"u}rner}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Unden}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Wolff}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{M{\"u}ller}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Scheuer}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Naydenov}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Markham}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Pezzagna}},
\bibnamefont{et~al.}, \bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{356}}, \bibinfo{pages}{832} (\bibinfo{year}{2017}),
ISSN \bibinfo{issn}{0036-8075},
\eprint{http://science.sciencemag.org/content/356/6340/832.full.pdf},
URL \url{http://science.sciencemag.org/content/356/6340/832}.
\bibitem[{\citenamefont{Boss et~al.}(2017)\citenamefont{Boss, Cujia, Zopes, and
Degen}}]{Boss837}
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Boss}},
\bibinfo{author}{\bibfnamefont{K.~S.} \bibnamefont{Cujia}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Zopes}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.~L.} \bibnamefont{Degen}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{356}},
\bibinfo{pages}{837} (\bibinfo{year}{2017}), ISSN \bibinfo{issn}{0036-8075},
\eprint{http://science.sciencemag.org/content/356/6340/837.full.pdf},
URL \url{http://science.sciencemag.org/content/356/6340/837}.
\bibitem[{\citenamefont{Sekatski et~al.}(2016)\citenamefont{Sekatski,
  Skotiniotis, and D{\"u}r}}]{SekatskoNJP2016}
  \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Sekatski}},
  \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Skotiniotis}},
  \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D{\"u}r}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{18}},
\bibinfo{pages}{073034} (\bibinfo{year}{2016}),
URL \url{http://stacks.iop.org/1367-2630/18/i=7/a=073034}.
\bibitem[{\citenamefont{Lang et~al.}(2015)\citenamefont{Lang, Liu, and
Monteiro}}]{LangPRX2015}
\bibinfo{author}{\bibfnamefont{J.~E.} \bibnamefont{Lang}},
\bibinfo{author}{\bibfnamefont{R.~B.} \bibnamefont{Liu}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{T.~S.} \bibnamefont{Monteiro}},
\bibinfo{journal}{Phys. Rev. X} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{041016} (\bibinfo{year}{2015}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevX.5.041016}.
\bibitem[{\citenamefont{Taylor et~al.}(2008)\citenamefont{Taylor, Cappellaro,
Childress, Jiang, Budker, Hemmer, Yacoby, Walsworth, and Lukin}}]{Taylor2008}
\bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Taylor}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Cappellaro}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Childress}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Jiang}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Budker}},
\bibinfo{author}{\bibfnamefont{P.~R.} \bibnamefont{Hemmer}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Yacoby}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Walsworth}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Lukin}},
\bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{4}},
\bibinfo{pages}{810} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Cooper et~al.}(2014)\citenamefont{Cooper, Magesan, Yum,
and Cappellaro}}]{Cooper2014}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Cooper}},
\bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Magesan}},
\bibinfo{author}{\bibfnamefont{H.~N.} \bibnamefont{Yum}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Cappellaro}},
\bibinfo{journal}{Nature Communications} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{3141} (\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Liu and Yuan}(2017{\natexlab{a}})}]{LiuSingle}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Liu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{012117} (\bibinfo{year}{2017}{\natexlab{a}}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevA.96.012117}.
\bibitem[{\citenamefont{Liu and Yuan}(2017{\natexlab{b}})}]{LiuMulti}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Liu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{042114} (\bibinfo{year}{2017}{\natexlab{b}}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevA.96.042114}.
\bibitem[{\citenamefont{Goldstein et~al.}(2011)\citenamefont{Goldstein,
Cappellaro, Maze, Hodges, Jiang, S\o{}rensen, and Lukin}}]{Goldstein2011}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Goldstein}},
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Cappellaro}},
\bibinfo{author}{\bibfnamefont{J.~R.} \bibnamefont{Maze}},
\bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Hodges}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Jiang}},
\bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{S\o{}rensen}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Lukin}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{106}},
\bibinfo{pages}{140502} (\bibinfo{year}{2011}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevLett.106.140502}.
\bibitem[{\citenamefont{Cappellaro et~al.}(2012)\citenamefont{Cappellaro,
Goldstein, Hodges, Jiang, Maze, S\o{}rensen, and Lukin}}]{Cappellaro2012}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Cappellaro}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Goldstein}},
\bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Hodges}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Jiang}},
\bibinfo{author}{\bibfnamefont{J.~R.} \bibnamefont{Maze}},
\bibinfo{author}{\bibfnamefont{A.~S.} \bibnamefont{S\o{}rensen}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~D.} \bibnamefont{Lukin}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{85}},
\bibinfo{pages}{032336} (\bibinfo{year}{2012}),
URL \url{https://link.aps.org/doi/10.1103/PhysRevA.85.032336}.
\bibitem[{\citenamefont{Yuan and Fung}(2017{\natexlab{a}})}]{yuan2015quantum}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.-H.~F.} \bibnamefont{Fung}},
\bibinfo{journal}{npj:Quantum Information} \textbf{\bibinfo{volume}{3:14}}
(\bibinfo{year}{2017}{\natexlab{a}}).
\bibitem[{\citenamefont{Breuer and Petruccione}(2002)}]{breuer2002theory}
\bibinfo{author}{\bibfnamefont{H.-P.} \bibnamefont{Breuer}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Petruccione}},
\emph{\bibinfo{title}{The theory of open quantum systems}}
(\bibinfo{publisher}{Oxford University Press on Demand},
\bibinfo{year}{2002}).
\bibitem[{\citenamefont{Zhang et~al.}(2008)\citenamefont{Zhang, Peng,
Rajendran, and Suter}}]{zhang2008detection}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Zhang}},
\bibinfo{author}{\bibfnamefont{X.}~\bibnamefont{Peng}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Rajendran}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Suter}},
\bibinfo{journal}{Physical review letters} \textbf{\bibinfo{volume}{100}},
\bibinfo{pages}{100501} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Yuan and Fung}(2017{\natexlab{b}})}]{yuanfd}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Yuan}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.-H.~F.} \bibnamefont{Fung}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{19}},
\bibinfo{pages}{113039} (\bibinfo{year}{2017}{\natexlab{b}}),
URL \url{http://stacks.iop.org/1367-2630/19/i=11/a=113039}.
\bibitem[{\citenamefont{Dittmann}(1999)}]{dittmann1999explicit}
  \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Dittmann}},
  \bibinfo{journal}{Journal of Physics A: Mathematical and General}
  \textbf{\bibinfo{volume}{32}}, \bibinfo{pages}{2663}
  (\bibinfo{year}{1999}).
\end{thebibliography}
\appendix
\begin{widetext}
In the supplemental material, we give detailed calculations on the quantum Fisher information for the spin systems in the main text.
\section{Quantum Fisher information for the single spin system under the cooperative scheme}
With the spontaneous emission, the dynamics of the spin in the standard scheme is described by the master equation
\begin{eqnarray}
\dot{\rho} = -i[B\sigma_z,\rho]+\gamma\left[\sigma_{-}\rho\sigma_{+}-\frac{1}{2}\left\{ \sigma_{+}\sigma_{-},\rho\right\} \right],
\end{eqnarray}
where $\sigma_{\pm}=(\sigma_{x}\pm i\sigma_y)/2$ are the raising and lowering operators in the basis of $|0\rangle$ and $|1\rangle$.
If we add a (known) control field along the $X$-direction, this changes the Hamiltonian to
\begin{equation}
H =B_z \sigma_z + B_x \sigma_x,
\end{equation}
the ground and excited states of the Hamiltonian then become $| g\rangle = -\sin{\frac{\theta}{2}}| 1 \rangle + \cos{\frac{\theta}{2}}| 0 \rangle$ and $| e \rangle = \cos{\frac{\theta}{2}}| 1 \rangle + \sin{\frac{\theta}{2}}| 0 \rangle$ respectively, where $\theta = \arctan \frac{B_x}{B_z}$. In the presence of the spontaneous emission, which induces a decay from the excited state to the ground state, the dynamics becomes
\begin{equation}
\label{eq:interplaysupp}
\dot{\rho} = - i[H, \rho] + \gamma(\sigma_{-}^H \rho \sigma_{+}^H -\frac{1}{2}\{\sigma_{+}^H\sigma_{-}^H, \rho\}),
\end{equation}
where $\sigma_{+}^H = | e\rangle \langle g |$ and $\sigma_{-}^H = | g \rangle \langle e |$ are defined in the new basis $\{ |g\rangle, |e\rangle \}$, which contains the parameter. This dynamics has a steady state $|g\rangle$ which has the quantum Fisher information
\begin{align}
F_Q(| g\rangle) &= 4(\langle \partial_{B_z} g |\partial_{B_z} g\rangle -|\langle \partial_{B_z} g| g \rangle|^2) \\
&= \frac{B_x^2}{(B_x^2+B_z^2)^2}.
\label{eq:ground_single}
\end{align}
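In more detail, since $|\partial_{B_z} g\rangle=\frac{1}{2}(\partial_{B_z}\theta)\big(-\cos\frac{\theta}{2}| 1 \rangle-\sin\frac{\theta}{2}| 0 \rangle\big)$ is orthogonal to $|g\rangle$, the QFI reduces to
\begin{equation*}
F_Q(|g\rangle)=(\partial_{B_z}\theta)^2,\qquad
\partial_{B_z}\theta=\partial_{B_z}\arctan\frac{B_x}{B_z}=-\frac{B_x}{B_x^2+B_z^2},
\end{equation*}
which gives Eq.(\ref{eq:ground_single}).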
To see the effect of the cooperative scheme, we first prepare the initial state as $| \psi\rangle = \frac{| 0 \rangle + | 1 \rangle}{\sqrt{2}}$ in the original basis, then add the coherent controls along the $X$-direction and let the state evolve under the dynamics governed by Eq.(\ref{eq:interplaysupp}) for $T$ units of time.
In this case the state at time $T$ can be analytically obtained, which, in the basis of $\{| g \rangle,| e\rangle\}$, can be written as
\begin{eqnarray}
\rho_T = \left(\begin{array}{cc}
\frac{1}{2}(1+\sin\theta)e^{-\gamma T} & \frac{1}{2}\cos\theta e^{-\frac{1}{2}(\gamma+4i \Delta)T} \\\frac{1}{2}\cos\theta e^{-\frac{1}{2}(\gamma - 4i \Delta)T} & 1-\frac{1}{2}(1+\sin\theta)e^{-\gamma T}
\end{array}\right),
\end{eqnarray}
where $\Delta = \sqrt{B_z^2+B_x^2}$.
It can also be written in the original basis of $\{|0\rangle,|1\rangle\}$ through the transformation
\begin{eqnarray}
\label{eq:transform_basis}
\left(\begin{array}{c}
| e\rangle \\
| g\rangle
\end{array}\right) = \left(\begin{array}{cc}
\cos \frac{\theta}{2} & \sin \frac{\theta}{2} \\
- \sin \frac{\theta}{2} & \cos \frac{\theta}{2}
\end{array}\right) \left(\begin{array}{c}
| 1\rangle \\
| 0\rangle
\end{array}\right).
\end{eqnarray}
The quantum Fisher information of $\rho_T$ can be obtained from $F_Q = Tr[(\partial_{B_z} \rho_T)^2]+\frac{Tr[(\rho_T \partial_{B_z} \rho_T)^2]}{Det[\rho_T]}$\cite{dittmann1999explicit}, which gives
\begin{equation}
\begin{split}
F_Q(\rho_{T}) &= \frac{1}{{16 \Delta ^2}} \big\{e^{-\gamma T} (-24 \sin \theta +8 \sin 3 \theta +8 \cos 2 \theta (4 \Delta ^2 T^2+1)+\cos 4 \theta (8 \Delta ^2 T^2-1)\\ &+24 \Delta ^2 T^2+64 \Delta T \sin \theta \cos ^2\theta e^{-\frac{1}{2} \gamma T} \sin (2 \Delta T) (\sin (\theta )-e^{\gamma T}+1)+ \\ & 32 \sin^2\theta e^{-\frac{1}{2} \gamma T} \cos (2 \Delta T) (\sin \theta (e^{\gamma T}-1)-1)-16 (\sin \theta +2) \sin ^3\theta \sinh (\gamma T) \\ & -8 \sin ^2\theta (-4 \sin \theta +\cos 2 \theta -5) \cosh (\gamma T)+2 \sin ^2(2 \theta ) \cos (4 \Delta T)-7)\big\}
\end{split}
\end{equation}
The value is plotted in the figure of the main text. This is consistent with the QFI of the ground state, as in the limit of $T \to \infty$,
\begin{equation}
\begin{split}
\lim_{T \to \infty} F_Q(\rho_{T}) & = \frac{1}{\Delta^2}\{-\frac{1}{2}(2+\sin{\theta})\sin^3 \theta-\frac{1}{4}\sin^2 \theta (-4 \sin \theta + \cos 2\theta -5) \}\\
& = \frac{\sin^2 \theta}{\Delta^2} = \frac{B_x^2}{(B_x^2+B_z^2)^2},
\end{split}
\end{equation}
which is just the QFI of the ground state $|g\rangle$.
The quantum Fisher information of $|g\rangle$ gets larger when $B_z$ gets smaller; the improvement of the cooperative scheme is thus most obvious in the region of small $B_z$, which is the region quantum metrology is mostly used for. If $B_z$ is not small, we can always use adaptive controls to shift $B_z$ by adding control fields along the opposite direction.
The expression for $F_Q(\rho_T)$ is quite complicated; however, in the short-time limit we can approximate $F_Q(\rho_T)$ with a low-order Taylor expansion to gain some insight. The first- and second-order terms can be obtained by calculating the first and second derivatives as
\begin{eqnarray}
& \dot{F}_Q(0) \equiv \frac{d}{dT}F_Q(\rho_{T})\big|_{T=0} = \frac{\gamma \sin ^2(\theta ) \cos ^2(\theta )}{\Delta ^2} = \gamma \frac{B_x^2 B_z^2}{(B_x^2+B_z^2)^3}\\
& \ddot{F}_Q(0) \equiv \frac{d^2}{dT^2}F_Q(\rho_{T})\big|_{T=0} = \frac{\gamma ^2 \left(16 \sin ^3(\theta )-10 \cos (2 \theta )+3 \cos (4 \theta )+7\right)}{8 \Delta ^2}+8 = \frac{\gamma^2\sin^2 \theta}{2 \Delta^2}(6 \sin^2 \theta + 4\sin \theta -1) + 8.
\end{eqnarray}
Therefore the Taylor expansion of quantum Fisher information can be written as
\begin{equation}
\begin{split}
F_Q(\rho_{T}) & = \dot{F}_Q(0)T+\frac{1}{2}\ddot{F}_Q(0)T^2+o(T^3)\\
& = 4T^2 + \frac{\gamma \sin ^2(\theta ) \cos ^2(\theta )}{\Delta ^2}T+ \frac{\gamma^2 \sin^2 \theta}{4 \Delta^2}(6 \sin^2 \theta + 4\sin \theta -1) T^2 + o(T^3)
\end{split}
\end{equation}
It is now obvious that $F_Q$ can surpass the Heisenberg limit, $4T^2$, in the short-time regime, and the improvement increases with the strength of the decay, i.e., the noisier the better.
\section{Quantum Fisher information of the ground state for the two-spin system}
We consider a two-spin system where the Hamiltonian is
\begin{equation}
H=B_z(\sigma^{1}_{z}+\sigma^{2}_{z})+\sigma^{1}_{z}\sigma^{2}_{z},
\end{equation}
which is written in units such that the coupling strength between the two spins equals $1$; here $\sigma_z^1=\sigma_z\otimes I_2$ and $\sigma_z^2=I_2\otimes \sigma_z$, where $I_2$ denotes the $2\times 2$ identity matrix. In this case the Heisenberg limit can be achieved by preparing the probe state as $\frac{|00\rangle+|11\rangle}{\sqrt{2}}$, which has the maximal quantum Fisher information $F_Q=16T^2$ at time $T$. Under the unitary dynamics the coupling term only changes the global phase, so the same precision limit can be achieved if the Hamiltonian is just $B_{z}(\sigma^{1}_{z}+\sigma^{2}_{z})$, i.e., the coupling is not useful in improving the precision limit\cite{giovannetti2006quantum,boixo2007generalized,yuan2015optimal}.
Now consider the cooperative scheme where we add a small control field $B_x\ll 1$ along the transverse direction, the Hamiltonian becomes
\begin{equation}
H=\sigma^{1}_{z}\sigma^{2}_{z}
+B_{z}(\sigma^{1}_{z}+\sigma^{2}_{z})+B_x(\sigma_x^1+\sigma_x^2).
\end{equation}
We can write the effective Hamiltonian on the two lowest energy levels as\cite{zhang2008detection}
\begin{equation}
H_{eff} = - |B_z| I + (1 - |B_z|) \sigma_z + \sqrt{2} B_x\sigma_x
\end{equation}
where $I$ denotes the identity operator.
The two lowest energy states of this effective Hamiltonian are $|g \rangle = -\sin{\frac{\theta}{2}}| \uparrow \rangle + \cos{\frac{\theta}{2}}| \downarrow \rangle$ and $| e \rangle = \cos{\frac{\theta}{2}} | \uparrow \rangle + \sin{\frac{\theta}{2}}| \downarrow \rangle$, where $| \uparrow \rangle=|11\rangle$, $| \downarrow \rangle = \frac{|01\rangle + | 10 \rangle}{\sqrt{2}}$, $\tan{\theta} = \frac{\sqrt{2}B_x}{1-B_z}$, $\theta \in [0, \pi]$. These states can be taken as a good approximation of the two lowest energy states of the full Hamiltonian. The quantum Fisher information of this approximate ground state can be analytically obtained as $\frac{2B_x^2}{(2B_x^2+(-1+B_z)^2)^2}$, which achieves its maximum at the critical point $B_z=1$. We also compute the quantum Fisher information of the ground state of the full Hamiltonian numerically, and from Fig.~\ref{fig:approx} it can be seen that the ground state of the effective Hamiltonian is very close to the exact solution.
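In more detail (a brief sketch of the calculation): since $|\partial_{B_z} g\rangle$ is orthogonal to $|g\rangle$, the QFI of the approximate ground state reduces to
\begin{equation*}
F_Q(|g\rangle)=(\partial_{B_z}\theta)^2
=\Big(\frac{\sqrt{2}B_x}{(1-B_z)^2+2B_x^2}\Big)^2
=\frac{2B_x^2}{\big(2B_x^2+(B_z-1)^2\big)^2},
\end{equation*}
in complete analogy with the single-spin case above.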
\begin{figure}
\caption{QFI of the ground states with the effective Hamiltonian and the exact Hamiltonian, with $B_x = 0.1$}
\label{fig:approx}
\end{figure}
\end{widetext}
\end{document}
\begin{document}
\begin{abstract}
In a number of papers it was shown that there are one-dimensional
systems that admit solutions containing so-called
overcompressive singular shock waves
besides the usual elementary waves (shock and rarefaction waves
as well as contact discontinuities).
Their definition for a general $2 \times 2$ system
with fluxes linear in one of the dependent variables can be found in \cite{Ned1}.
This paper is devoted to examining
their interactions with each other and with elementary waves.
After a discussion of systems given in a general form, a complete
analysis is given for the ion-acoustic system from \cite{KeyKr}.
\noindent
{\it Keywords:}
conservation law systems, singular shock wave, interaction of singularities,
generalized functions
\end{abstract}
\maketitle
\section{Introduction}
Consider the system
\begin{equation}\label{gdss1}
\begin{split}
& (f_{2}(u))_{t}+(f_{3}(u)v+f_{4}(u))_{x}=0 \\
& (g_{1}(u)v+g_{2}(u))_{t}+(g_{3}(u)v+g_{4}(u))_{x}=0.
\end{split}
\end{equation}
where $f_{i},g_{j}$, $i=2,...,4$, $j=1,...,4$ are polynomials with the
maximal degree $m$, $(u,v)=(u(x,t),v(x,t))$ are unknown functions with
a physical range $\Omega$, $(x,t)\in {\mathbb R}\times {\mathbb R}_{+}$.
We shall fix the following notation for the rest of the paper:
$$f_{i}(y)=\sum_{k=0}^{m}a_{i,k}y^{k}, \;
g_{j}(y)=\sum_{k=0}^{m}b_{j,k}y^{k}, \; i=2,3,4,\; j=1,2,3,4. $$
There are cases when there is no classical solution to the Riemann problem
for the above system. Sometimes, there is a solution in the form
of a delta or singular shock wave.
In \cite{Ned1} one can see when
a system in evolution form (i.e.\
when $f_{2}=u$, $g_{1}=1$ and $g_{2}=0$) permits a solution in the
form of a singular shock wave. With the same type of reasoning
and some more effort, one can answer the
same question in the case of system (\ref{gdss1}).
The aim of this paper is to investigate what happens during and after
an interaction of a singular shock wave with another wave.
After a general statement about
the new initial data taken at the interaction point
(of course, true for delta shock waves, too) in Section 3,
we shall present a detailed
investigation in the case of the system (the so-called ion-acoustic system)
\begin{equation}\label{dss1}
\begin{split}
u_{t}+(u^{2}-v)_{x} & = 0 \\
v_{t}+(u^{3}/3-u)_{x} & = 0
\end{split}
\end{equation}
given in \cite{KeyKr}.
Definitions and concepts used here are from \cite{Ned1}, based on
the use of Colombeau generalized functions defined in \cite{ObeWa}.
They will be briefly described in Section 2.
If one is not familiar with these concepts, one
can assume that a solution to the above system is given
by nets of smooth functions with equality replaced by
a distributional limit.
The reason why generalized functions are used is
to allow the procedure in this paper to be extended to arbitrary
initial data when a system possesses a singular or delta shock wave
as a solution.
A few interesting facts observed during the investigation of system
(\ref{dss1}) raise questions about what is possible in the general
case. The observed facts are:
\begin{enumerate}
\item The singular shock wave solution to a Riemann problem for
(\ref{dss1}) always has a strength increasing
at the rate ${\mathcal O}(t)$, $t \to \infty$. (The strength of the
shock is the function which multiplies the delta function contained in
a solution, $s(t)$ in (\ref{prom})).
After the interaction, the resulting singular shock wave is
supported by a curve, not necessarily a straight
line as before, and its strength can be
an increasing, but also a constant or a decreasing function with
respect to the time variable.
\item When the resulting singular shock wave has a decreasing strength
(this can occur during an interaction of an admissible
singular shock wave with a rarefaction wave),
after some time it can decompose into two shock waves. This is
quite a new phenomenon.
\end{enumerate}
The structure of this paper can be described in the following way.
In the second section we introduce the necessary notation and give
basic notions based on the papers \cite{ObeWa} and \cite{Ned1}.
In the third section, one can find a way to continue a solution
of the general system (\ref{gdss1})
after an interaction point (Theorem \ref{glavna}).
The basic assumption is that the left-hand side of the first and
the right-hand side of the second wave can be connected by a new
singular shock wave. The conditions for such a possibility are
formulated through the notion of the {\it second delta singular locus},
see Definition \ref{2d1}. Explicit calculations for a geometric description
of the locus are possible to perform for
system (\ref{gdss1}), but we shall omit them to keep the reader's
attention on the further topics.
The results given in these sections are used in the next one,
devoted to the special case (\ref{dss1}).
The first part of the fourth section is devoted to the
description of the situations which can occur
after a singular shock wave and a shock wave interact. The same
can be done for two singular shock waves,
as one can see at the end of that section.
The final, fifth section contains the most interesting and important
results, concerning the interaction of a singular shock wave with a
rarefaction wave.
In that case the decoupling of a singular shock into a pair of shock waves,
already mentioned above, can occur. The analysis is done when a singular
shock wave is on the left-hand side of a rarefaction wave, but
one can easily see that the results can be obtained with the same
procedure when a singular shock is on the other side
of a rarefaction wave.
\section{Notation}
We shall briefly repeat some definitions of
Colombeau algebra given in \cite{ObeWa} and \cite{Ned1}. Denote
${\mathbb R}_{+}^{2}:={\mathbb R}\times (0,\infty)$,
$\overline{{\mathbb R}_{+}^{2}}:={\mathbb R}\times [0,\infty)$ and let
$C_{b}^{\infty}(\Omega)$ be the algebra of smooth functions on
$\Omega$ bounded together with all their derivatives. Let
$C_{\overline{b}}^{\infty}({\mathbb R}_{+}^{2})$ be a set of
all functions $u\in C^{\infty}({\mathbb R}_{+}^{2})$ satisfying
$u|_{{\mathbb R}\times (0,T)} \in C_{b}^{\infty}({\mathbb R}\times (0,T))$
for every $T>0$. Let us remark that every element of
$C_{b}^{\infty}({\mathbb R}_{+}^{2})$ has a smooth extension up to
the line $\{ t=0\}$, i.e.\ $C_{b}^{\infty}({\mathbb R}_{+}^{2})=
C_{b}^{\infty}(\overline{{\mathbb R}_{+}^{2}})$. This is also true for
$C_{\overline{b}}^{\infty}({\mathbb R}_{+}^{2})$.
\begin{definition}\label{emn}
${\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$ is the set of all maps
$G:(0,1)\times {\mathbb R}_{+}^{2} \rightarrow {\mathbb R}$,
$(\varepsilon,x,t) \mapsto G_{\varepsilon}(x,t)$, where
for every $\varepsilon \in (0,1)$,
$G_{\varepsilon}\in C_{\overline{b}}^{\infty}({\mathbb R}_{+}^{2})$
satisfies:
\noindent
For every $(\alpha,\beta)
\in {\mathbb N}_{0}^{2}$ and $T>0$, there exists $N\in {\mathbb N}$ such that
$$\sup_{(x,t)\in {\mathbb R}\times (0,T)}
|\partial_{x}^{\alpha}\partial_{t}^{\beta} G_{\varepsilon}(x,t)|
={\mathcal O}(\varepsilon^{-N}), \text{ as } \varepsilon \rightarrow 0.$$
${\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$ is a multiplicative differential
algebra, i.e.\
a ring of functions with the usual operations of addition and multiplication,
and with a differentiation which satisfies the Leibniz rule.
${\mathcal N}_{g}({\mathbb R}_{+}^{2})$ is the set of all
$G\in {\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$,
satisfying:
\noindent
For every $(\alpha,\beta)
\in {\mathbb N}_{0}^{2}$, $a\in {\mathbb R}$ and $T>0$
$$\sup_{(x,t)\in {\mathbb R}\times (0,T)}
|\partial_{x}^{\alpha}\partial_{t}^{\beta} G_{\varepsilon}(x,t)|
={\mathcal O}(\varepsilon^{a}), \text{ as } \varepsilon \rightarrow 0.$$
$\Box$
\end{definition}
Clearly, ${\mathcal N}_{g}({\mathbb R}_{+}^{2})$ is
an ideal of the multiplicative
differential algebra ${\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$, i.e.\
if $G_{\varepsilon}\in {\mathcal N}_{g}({\mathbb R}_{+}^{2})$ and
$H_{\varepsilon}\in {\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$,
then $G_{\varepsilon}H_{\varepsilon}\in {\mathcal N}_{g}({\mathbb R}_{+}^{2})$.
\begin{definition}\label{g}
The multiplicative differential algebra
${\mathcal G}_{g}({\mathbb R}_{+}^{2})$
of generalized functions is defined by
${\mathcal G}_{g}({\mathbb R}_{+}^{2})=
{\mathcal E}_{M,g}({\mathbb R}_{+}^{2})/{\mathcal N}_{g}({\mathbb R}_{+}^{2})$.
All operations in ${\mathcal G}_{g}({\mathbb R}_{+}^{2})$ are
defined by the corresponding ones in ${\mathcal E}_{M,g}({\mathbb R}_{+}^{2})$.
$\Box$
\end{definition}
If $C_{b}^{\infty}({\mathbb R})$ is used instead of
$C_{b}^{\infty}({\mathbb R}_{+}^{2})$ (i.e.\
drop the dependence on the $t$ variable), then
one obtains ${\mathcal E}_{M,g}({\mathbb R})$, ${\mathcal N}_{g}({\mathbb R})$,
and consequently, the space of generalized functions on a real line,
${\mathcal G}_{g}({\mathbb R})$.
In the sequel, $G$ denotes an element (equivalence class)
in ${\mathcal G}_{g}(\Omega)$
defined by its representative
$G_{\varepsilon}\in {\mathcal E}_{M,g}(\Omega)$.
Since $C_{\overline{b}}^{\infty}({\mathbb R}_{+}^{2})=
C_{\overline{b}}^{\infty}(\overline{{\mathbb R}_{+}^{2}})$,
one can define a restriction of a generalized function to $\{ t=0\}$
in the following way.
For given $G\in {\mathcal G}_{g}({\mathbb R}_{+}^{2})$, its restriction
$G|_{t=0}\in {\mathcal G}_{g}({\mathbb R})$ is the class determined
by a function
$G_{\varepsilon}(x,0)\in {\mathcal E}_{M,g}({\mathbb R})$.
In the same way as above,
$G(x-ct)\in {\mathcal G}_{g}({\mathbb R})$
is defined by $G_{\varepsilon}(x-ct)\in {\mathcal E}_{M,g}({\mathbb R})$.
If $G \in {\mathcal G}_{g}$ and $f\in C^{\infty}({\mathbb R})$
is polynomially bounded
together with all its derivatives, then one can
easily show that the composition $f(G)$,
defined by a representative $f(G_{\varepsilon})$,
makes sense. It means that $f(G_{\varepsilon})\in
{\mathcal E}_{M,g}$ if $G_{\varepsilon}\in {\mathcal E}_{M,g}$, and
$f(G_{\varepsilon})-f(H_{\varepsilon}) \in {\mathcal N}_{g}$ if
$G_{\varepsilon}-H_{\varepsilon}\in {\mathcal N}_{g}$.
The equality in the space of generalized functions ${\mathcal G}_{g}$ is
too strong for our purposes, so we need to define
a weaker relation called association.
\begin{definition}\label{ass}
A generalized function $G\in {\mathcal G}_{g}(\Omega)$ is
said to be {\em associated with} $u\in {\mathcal D}'(\Omega)$, $G \approx u$,
if for some (and hence every) representative
$G_{\varepsilon}$ of $G$, $G_{\varepsilon} \rightarrow u$ in
${\mathcal D}'(\Omega)$ as $\varepsilon \rightarrow 0$.
Two generalized functions $G$ and $H$ are said to be
associated, $G\approx H$, if $G-H \approx 0$. The rate of convergence
in ${\mathcal D}'$ with respect to $\varepsilon$
is called the {\em rate of association}.
$\Box$
\end{definition}
A generalized function $G$ is said to be {\em of bounded type} if
$$\sup_{(x,t)\in {\mathbb R}\times (0,T)} |G_{\varepsilon}(x,t)|
={\mathcal O}(1) \text{ as } \varepsilon \rightarrow 0,$$
for every $T>0$.
Let $u \in {\mathcal D}_{L^{\infty}}'({\mathbb R})$. Let ${\mathcal A}_{0}$
be the set of all functions $\phi \in C_{0}^{\infty}({\mathbb R})$ satisfying
$\phi(x)\geq 0$, $x\in {\mathbb R}$,
$\int \phi(x) dx=1$ and $\operatorname{supp}\phi \subset [-1,1]$, i.e.\
\begin{equation*}
{\mathcal A}_{0}=\{\phi\in C_{0}^{\infty}:\;
(\forall x\in {\mathbb R}) \phi(x)\geq 0,\;
\int\phi(x)dx=1,\; \mathop{\rm supp}\phi\subset [-1,1]\}.
\end{equation*}
Let $\phi_{\varepsilon}(x)=\varepsilon^{-1}\phi(x/\varepsilon)$,
$x\in {\mathbb R}$. Then
$$ \iota_{\phi}: u \mapsto u\ast \phi_{\varepsilon}/{\mathcal N}_{g},$$
where $u\ast \phi_{\varepsilon}/{\mathcal N}_{g}$ denotes the
equivalence class with respect to the ideal ${\mathcal N}_{g}$,
defines a mapping of ${\mathcal D}_{L^{\infty}}'({\mathbb R})$ into
${\mathcal G}_{g}({\mathbb R})$, where $\ast$ denotes the usual
convolution in ${\mathcal D}'$. It is clear that $\iota_{\phi}$ commutes
with the derivation, i.e.\
$$\partial_{x}\iota_{\phi}(u)=\iota_{\phi}(\partial_{x}u).$$
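As a simple illustration of this embedding (a sketch; the Heaviside function
$\theta$ serves only as an example here), note that
$$ (\theta\ast \phi_{\varepsilon})(x)=\int_{-\infty}^{x/\varepsilon}\phi(z)\,dz
=\begin{cases} 0, & x<-\varepsilon, \\ 1, & x>\varepsilon, \end{cases} $$
since $\mathop{\rm supp}\phi\subset [-1,1]$. Hence $\iota_{\phi}(\theta)$ is a
generalized step function with value $(0,1)$, of bounded type, and
$\partial_{x}\iota_{\phi}(\theta)$ has the representative
$\phi_{\varepsilon}\approx \delta$, in agreement with the fact that
$\iota_{\phi}$ commutes with the derivation.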
\begin{definition}\label{s-d}
\begin{itemize}
\item[(a)] $G \in {\mathcal G}_{g}({\mathbb R})$ is said to be
{\em a generalized step function} with value $(y_{0},y_{1})$
if it is of bounded type and
$$ G_{\varepsilon}(y)=
\begin{cases}
y_{0}, \; & y< -\varepsilon \\ y_{1}, \; & y> \varepsilon
\end{cases}
$$
Denote $[G]:=y_{1}-y_{0}$.
\item [(b)] $D \in {\mathcal G}_{g}({\mathbb R})$
is said to be a {\em generalized split
delta function} ({\em S$\delta$-function}, for short) with value
$(\alpha_{0},\alpha_{1})$ if
$D=\alpha_{0}D^{-}+\alpha_{1}D^{+}$, where $\alpha_{0}+\alpha_{1}=1$
and
\begin{equation}\label{DG}
DG\approx (y_{0}\alpha_{0}+y_{1}\alpha_{1})\delta,
\end{equation}
for every generalized step function $G$ with value $(y_{0},y_{1})$.
\item[(c)]Let $m$ be an odd positive integer.
A generalized function $d \in {\mathcal G}_{g}({\mathbb R})$ is said to be
an {\em $m'$-singular delta function} ({\em $m'$SD-function}, for short)
with value $(\beta_{0},\beta_{1})$ if
$d=\beta_{0}d^{-}+\beta_{1}d^{+}$, $\beta_{0}^{m-1}+\beta_{1}^{m-1}=1$,
$d^{\pm}\in {\mathcal G}_{g}({\mathbb R})$,
$(d^{\pm})^{i} \approx 0$, $i\in \{1,\dots,m-2,m\}$,
$(d^{\pm})^{m-1}\approx \delta$, and
\begin{equation}\label{m1dG}
d^{m-1}G\approx (y_{0}\beta_{0}^{m-1}+y_{1}\beta_{1}^{m-1})\delta,
\end{equation}
for every generalized step function $G$ with value $(y_{0},y_{1})$.
\item[(d)]Let $m$ be an odd positive integer.
A generalized function $d \in {\mathcal G}_{g}({\mathbb R})$ is said to be
an {\em $m$-singular delta function} ({\em $m$SD-function}, for short)
with value $(\beta_{0},\beta_{1})$ if
$d=\beta_{0}d^{-}+\beta_{1}d^{+}$, $\beta_{0}^{m}+\beta_{1}^{m}=1$,
$d^{\pm}\in {\mathcal G}_{g}({\mathbb R})$,
$(d^{\pm})^{i} \approx 0$, $i\in \{1,\dots,m-1\}$,
$(d^{\pm})^{m}\approx \delta$, and
\begin{equation}\label{mdG}
d^{m}G\approx (y_{0}\beta_{0}^{m}+y_{1}\beta_{1}^{m})\delta,
\end{equation}
for every generalized step function $G$ with value $(y_{0},y_{1})$.
\end{itemize}
$\Box$
\end{definition}
In this paper we shall assume the compatibility condition
$Dd\approx 0$, where $D$ is an S$\delta$-function and $d$ is an $m$SD- or $m'$SD-function.
Suppose that the initial data are given by
\begin{equation} \label{id1}
u|_{t=T}=\begin{cases} u_{0}, & x<X \\ u_{1}, & x>X \end{cases}
\; v|_{t=T}=\begin{cases} v_{0}, & x<X \\ v_{1}, & x>X. \end{cases}
\end{equation}
\begin{definition} \label{singsh}
A {\em singular shock wave} (DSSW for short) is an associated
solution to (\ref{gdss1}) with the initial data (\ref{id1}) of the form
\begin{equation} \label{prom}
\begin{split}
& u((x-X),(t-T))=G((x-X)-c(t-T)) \\
& +\tilde{s}(t)(\alpha_{0}d^{-}((x-X)-c(t-T))
+\alpha_{1}d^{+}((x-X)-c(t-T))) \\
& v((x-X),(t-T))=H((x-X)-c(t-T))\\
& +s(t)(\beta_{0}D^{-}((x-X)-c(t-T))
+\beta_{1}D^{+}((x-X)-c(t-T))) \\
& + \tilde{\tilde{s}}(t)(\gamma_{0}d^{-}((x-X)-c(t-T))
+\gamma_{1}d^{+}((x-X)-c(t-T)))
\end{split}
\end{equation}
where
\begin{itemize}
\item[(i)] $c\in {\mathbb R}$ is the speed of the wave,
\item[(ii)] $s(t)$, $\tilde{s}(t)$ and $\tilde{\tilde{s}}(t)$
are smooth functions for $t\geq T$ which equal zero at $t=T$,
\item[(iii)] $G$ and $H$ are generalized step functions
with values $(u_{0},u_{1})$
and $(v_{0},v_{1})$ respectively,
\item[(iv)] $d_{1}=\alpha_{0}d^{-}+\alpha_{1}d^{+}$ and
$d_{2}=\gamma_{0}d^{-}+\gamma_{1}d^{+}$ are
$m$SD- or $m'$SD-functions,
\item[(v)] $D=\beta_{0}D^{-}+\beta_{1}D^{+}$ is an S$\delta$-function
compatible with $d$.
\end{itemize}
The {\it singular part} of the wave is
$$
\left[ \begin{matrix}\tilde{s}(t)(\alpha_{0}d^{-}+\alpha_{1}d^{+}) \\
s(t)(\beta_{0}D^{-}+\beta_{1}D^{+})+
\tilde{\tilde{s}}(t)(\gamma_{0}d^{-}+\gamma_{1}d^{+}) \end{matrix}\right].
$$
The wave is {\em overcompressive}
if its speed is less than or equal to the left-hand side characteristics
and greater than or equal to the right-hand side characteristics, i.e.\
\begin{equation*}
\lambda_{2}(u_{0},v_{0})>\lambda_{1}(u_{0},v_{0})\geq c
\geq \lambda_{2}(u_{1},v_{1})>\lambda_{1}(u_{1},v_{1}).
\end{equation*}
$\Box$
\end{definition}
\begin{remark}\label{konstrukcija}
(a) In \cite{Ned1} one can find special choices for S$\delta$-,
$m$SD- and $m'$SD-functions. For example,
$D^{\pm} \in {\mathcal G}_{g}({\mathbb R})$
are given by the representatives
$$D_{\varepsilon}^{\pm}(y)
:={1 \over \varepsilon} \phi\Big({y-(\pm 2\varepsilon) \over \varepsilon}\Big),
\; \phi \in {\mathcal A}_{0}.$$
$m$SD- and $m'$SD-functions can be chosen in the same manner.
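For a $3'$SD-function ($m=3$), a concrete choice can be sketched as follows
(the particular function $\psi$ below is only an example and is not taken
from \cite{Ned1}). Take an odd $\psi\in C_{0}^{\infty}({\mathbb R})$ with
$\mathop{\rm supp}\psi\subset[-1,1]$ and $\int \psi^{2}(z)\,dz=1$, and put
$$ d_{\varepsilon}^{\pm}(y):={1\over \sqrt{\varepsilon}}\,
\psi\Big({y-(\pm 4\varepsilon)\over \varepsilon}\Big). $$
Then, for every test function $\varphi$,
$$ \int d_{\varepsilon}^{\pm}\varphi \,dy={\mathcal O}(\sqrt{\varepsilon}),\quad
\int (d_{\varepsilon}^{\pm})^{2}\varphi \,dy\rightarrow \varphi(0),\quad
\int (d_{\varepsilon}^{\pm})^{3}\varphi \,dy={\mathcal O}(\sqrt{\varepsilon}),$$
the last estimate because $\int\psi^{3}(z)\,dz=0$ for an odd $\psi$. Hence
$(d_{\varepsilon}^{\pm})^{i}\approx 0$ for $i=1,3$ and
$(d_{\varepsilon}^{\pm})^{2}\approx\delta$, so $d^{\pm}$ can serve as building
blocks of a $3'$SD-function; the shifts $\pm 4\varepsilon$ are chosen only so
that $d_{\varepsilon}^{\pm}$ is supported in $\{\pm y\geq 3\varepsilon\}$,
where $G_{\varepsilon}$ already takes its constant one-sided value.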
\noindent
(b) Compatibility condition for an S$\delta$-function $D$ and
an $m$SD- or $m'$SD-function $d$ is automatically fulfilled if
\begin{equation*}
\mathop{\rm supp} d_{\varepsilon}^{+} \cap
\mathop{\rm supp} D_{\varepsilon}^{+} =
\mathop{\rm supp} d_{\varepsilon}^{-} \cap
\mathop{\rm supp} D_{\varepsilon}^{-} = \emptyset
\end{equation*}
\noindent
(c) The idea behind the above definition of the products (\ref{DG}), (\ref{m1dG})
and (\ref{mdG}) is the following. The starting point is that we know nothing
about the infinitesimal values of the initial data (carried by the step functions
$G$ and $H$ above) around zero, but only that any such unmeasurable
influence stops at the points $\pm\varepsilon$. The above
definitions are made in order to get uniqueness of all products in which
step functions, S$\delta$-, $m$SD- and $m'$SD-functions appear.
With additional information about $G_{\varepsilon}$ and $H_{\varepsilon}$
around zero, one can choose $D$ and $d$ much more freely. For example,
if $G_{\varepsilon}$ and $H_{\varepsilon}$ are monotone functions
(which is a quite natural assumption), relation (\ref{DG}) can be replaced by
$$
DG\approx \gamma \delta, \; \gamma \text{ an arbitrary real number between }
\min\{y_{0},y_{1}\} \text{ and } \max\{y_{0},y_{1}\}.
$$
The possibilities in the Colombeau algebra are even wider for specific systems
than for the general case (\ref{gdss1}); one can look in \cite{Col92} for
a good review of such possibilities.
Since we are dealing with a system in general form, we use the above definitions.
\noindent
(d) Due to the absence of known additional facts for
the general case (\ref{gdss1}) (hyperbolicity, additional conservation
laws, ...), one uses overcompressibility as an admissibility
condition.
\end{remark}
\begin{definition} \label{d1}
The set of all points $(u_{1},v_{1})\in \Omega$ such that there exists a
singular shock wave solution (called the {\it corresponding DSSW})
to the Cauchy problem
(\ref{gdss1},\ref{id1}) is called the {\em delta singular locus}.
We shall write $(u_{1},v_{1})\in \mathop{\rm DSL}(u_{0},v_{0})$.
If the corresponding DSSW is overcompressive,
then it is called {\em overcompressive delta singular locus}.
We shall write $(u_{1},v_{1})\in \mathop{\rm DSL}^{\ast}(u_{0},v_{0})$.
$\Box$
\end{definition}
In the sequel, the term ``solution'' will denote a generalized function
which solves a system in the association sense.
\section{The new initial data}
Suppose that system (\ref{gdss1}) possesses a DSSW solution
for some initial data. Assume one of the following.
\begin{itemize}
\item[(i)] If an $m$SD-function is contained in the above DSSW, then assume
\begin{equation}\label{deg}
\mathop{\rm deg}(g_{1})<m-1,\;
\mathop{\rm deg}(g_{2}) < m,\;
\mathop{\rm deg}(f_{2}) < m.
\end{equation}
\item[(ii)] If an $m'$SD-function is contained
in the above DSSW, then assume
\begin{equation}\label{deg'}
\mathop{\rm deg}(g_{1}) < m-2, \;
\mathop{\rm deg}(g_{2}) < m-1,\;
\mathop{\rm deg}(f_{2}) < m-1.
\end{equation}
\end{itemize}
Take the new initial data
\begin{equation} \label{eq6}
u|_{t=T}=\begin{cases} u_{0}, & x<X \\ u_{1}, & x>X \end{cases},
\; v|_{t=T}=\begin{cases} v_{0}, & x<X \\ v_{1}, & x>X \end{cases}
+ \zeta \delta_{(X,T)},
\end{equation}
for system (\ref{gdss1}), where $\zeta$ is a non-zero real.
\begin{definition} \label{2d1}
The set of all points $(u_{1},v_{1})\in \Omega$ such that there exists a
DSSW solution (called the corresponding DSSW) to the Cauchy problem
(\ref{gdss1},\ref{eq6}) for some $\zeta$ is called the {\em second
delta singular locus} of initial strength $\zeta$
for $(u_{0},v_{0})$. We shall write
$(u_{1},v_{1})\in \mathop{\rm SDSL}_{\zeta}(u_{0},v_{0})$.
If the corresponding DSSW is overcompressive,
then it is called the {\em overcompressive second delta singular locus},
and we write
$(u_{1},v_{1})\in \mathop{\rm SDSL}_{\zeta}^{\ast}(u_{0},v_{0})$.
$\Box$
\end{definition}
Before the main theorem, let us give a useful lemma.
\begin{lemma} \label{podskup}
Suppose that $(u_{1},v_{1})\in \mathop{\rm DSL}(u_{0},v_{0})$.
Then $(u_{1},v_{1})\in \mathop{\rm SDSL}_{\zeta}(u_{0},v_{0})$,
if $\zeta>0$.
If the corresponding DSSW contains an $m$SD-function and $m$
is an odd number, then the statement holds true for every real $\zeta$.
Additionally, $\beta_{i}$, $i=0,1$, from Definition \ref{singsh} for the
corresponding DSSW do not depend on $\zeta$.
\end{lemma}
\begin{proof}
We shall give the proof for a DSSW containing an $m$SD-function (see (\ref{prom})).
The other case can be proved in the same way.
Inserting the functions $u$ and $v$ from (\ref{prom}) into system (\ref{gdss1})
with the initial data (\ref{eq6}) and taking into account relations
(\ref{deg}) or (\ref{deg'}), one gets
\begin{equation*}
\begin{split}
f_{2}(u) \approx & f_{2}(G) \\
g_{1}(u) \approx & g_{1}(G) \\
g_{2}(u) \approx & g_{2}(G) \\
f_{3}(u) \approx & f_{3}(G)
+ \tilde{s}(t)^{m-1}(u_{1}\alpha_{0}^{m-1}d^{-}+u_{0}\alpha_{1}^{m-1}d^{+})
m a_{3,m-1} \\
& + \tilde{s}(t)^{m}(\alpha_{0}^{m}d^{-}+\alpha_{1}^{m}d^{+}) a_{3,m}
\approx f_{3}(G)+\cdots+\tilde{s}(t)^{m}a_{3,m}\delta \\
f_{4}(u)\approx & f_{4}(G)
+ \tilde{s}(t)^{m}(\alpha_{0}^{m}d^{-}+\alpha_{1}^{m}d^{+}) a_{4,m}
\approx f_{4}(G)+\tilde{s}(t)^{m}a_{4,m}\delta \\
g_{3}(u) \approx & g_{3}(G)
+ \tilde{s}(t)^{m-1}(u_{1}\alpha_{0}^{m-1}d^{-}+u_{0}\alpha_{1}^{m-1}d^{+})
m b_{3,m-1} \\
& + \tilde{s}(t)^{m}(\alpha_{0}^{m}d^{-}+\alpha_{1}^{m}d^{+}) b_{3,m}
\approx g_{3}(G)+\cdots+\tilde{s}(t)^{m}b_{3,m}\delta \\
g_{4}(u)\approx & g_{4}(G)
+ \tilde{s}(t)^{m}(\alpha_{0}^{m}d^{-}+\alpha_{1}^{m}d^{+}) b_{4,m}
\approx g_{4}(G)+\tilde{s}(t)^{m}b_{4,m}\delta
\end{split}
\end{equation*}
There are two possible cases: either $\tilde{\tilde{s}}\not \equiv 0$ and
$a_{3,m}=b_{3,m}=0$ (i.e.\
$\mathop{\rm deg}(f_{3})\leq m-1$ and $\mathop{\rm deg}(g_{3})\leq m-1$),
or $\tilde{\tilde{s}}\equiv 0$. In both cases the procedure which
follows is the same, so take $\tilde{\tilde{s}}\not \equiv 0$
for definiteness. From the first equation of (\ref{gdss1}) one gets
\begin{equation*}
\begin{split}
& (f_{2}(u))_{t}+(f_{3}(u)v+f_{4}(u))_{x} \\
\approx & (-c[f_{2}(G)]+[f_{3}(G)H+f_{4}(G)])\delta \\
& + \tilde{s}(t)^{m-1}\tilde{\tilde{s}}(t)
(u_{1}\alpha_{0}^{m-1}\gamma_{0}+u_{0}\alpha_{1}^{m-1}\gamma_{1})
ma_{3,m-1}\delta' \\
& + s(t)(f_{3}(u_{0})\beta_{0}+f_{3}(u_{1})\beta_{1})\delta'
+ \tilde{s}(t)^{m}a_{4,m}\delta' \approx 0.
\end{split}
\end{equation*}
One immediately gets the speed of DSSW,
$$ c={[f_{3}(G)H+f_{4}(G)] \over [f_{2}(G)]}, $$
and the relations
$$ \kappa_{1}s(t)=\tilde{s}(t)^{m-1}\tilde{\tilde{s}}(t)
\text{ and }
\kappa_{2}s(t)=\tilde{s}(t)^{m},$$
for some reals $\kappa_{1}$ and $\kappa_{2}$. Finally, one gets
\begin{equation}\label{esp1}
\kappa_{1}(u_{1}\alpha_{0}^{m-1}\gamma_{0}+u_{0}\alpha_{1}^{m-1}\gamma_{1})
m a_{3,m-1}+f_{3}(u_{0})\beta_{0}+f_{3}(u_{1})\beta_{1}+\kappa_{2}a_{4,m}=0.
\end{equation}
Inserting all these relations into the second equation, one gets
\begin{equation*}
\begin{split}
& (g_{1}(u)v+g_{2}(u))_{t}+ (g_{3}(u)v+g_{4}(u))_{x} \\
\approx & (-c[g_{1}(G)H+g_{2}(G)]+[g_{3}(G)H+g_{4}(G)]
+s'(t)(g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1}))\delta \\
& + s(t)(g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1}+
g_{3}(u_{0})\beta_{0}+g_{3}(u_{1})\beta_{1} \\
& +\kappa_{1}(u_{1}\alpha_{0}^{m-1}\gamma_{0}+u_{0}\alpha_{1}^{m-1}\gamma_{1})
m b_{3,m-1}+\kappa_{2}b_{4,m})\delta'\approx 0.
\end{split}
\end{equation*}
The function $s$ must be a linear one, say $s'(t)=\sigma$, and
the above functional equation gives the last two equations in ${\mathbb R}$,
\begin{equation} \label{esp2}
-c[g_{1}(G)H+g_{2}(G)]+[g_{3}(G)H+g_{4}(G)]
+\sigma (g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1})=0
\end{equation}
and
\begin{equation}\label{esp3}
\begin{split}
&-c\big((g_{1}(u_{0})+g_{3}(u_{0}))\beta_{0}+(g_{1}(u_{1})+g_{3}(u_{1}))\beta_{1}\big)\\
&+\kappa_{1}(u_{1}\alpha_{0}^{m-1}\gamma_{0}+u_{0}\alpha_{1}^{m-1}\gamma_{1})
m b_{3,m-1}+\kappa_{2}b_{4,m}=0.
\end{split}
\end{equation}
In the above equations, the only important fact about $s$ is its derivative.
Thus one can safely put $s(t)=\sigma t +\zeta$, and if the above
system (\ref{esp1}--\ref{esp3}) has a solution, then
$(u_{1},v_{1})\in \mathop{\rm SDSL}_{\zeta}(u_{0},v_{0})$, provided
that $\tilde{s}$ and $\tilde{\tilde{s}}$ can be recovered. This is certainly
the case when $\zeta>0$. If $m$ is an odd number, then $\tilde{s}
=s(t)^{1/m}$ and $\tilde{\tilde{s}}=\tilde{s}$ are always determined.
The second part of the assertion, that $\beta_{i}$, $i=0,1$, are
independent of $\zeta$, is obvious from the above.
\end{proof}
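The recovery of $\tilde{s}$ and $\tilde{\tilde{s}}$ used at the end of the
proof can be written explicitly (a sketch for the $m$SD case, assuming
$\kappa_{2}\neq 0$): from $\kappa_{2}s(t)=\tilde{s}(t)^{m}$ and
$\kappa_{1}s(t)=\tilde{s}(t)^{m-1}\tilde{\tilde{s}}(t)$ one gets
$$ \tilde{s}(t)=\big(\kappa_{2}s(t)\big)^{1/m},\qquad
\tilde{\tilde{s}}(t)={\kappa_{1}s(t)\over \tilde{s}(t)^{m-1}}
={\kappa_{1}\over \kappa_{2}^{(m-1)/m}}\;s(t)^{1/m}. $$
For $\zeta>0$ one has $s(t)>0$ near $t=T$, so the $m$-th root causes no
problem; for odd $m$ it is defined for every real value of $s(t)$, which
explains the statement about arbitrary real $\zeta$.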
\begin{remark}
From the proof of the lemma one can see that it is actually possible
for $\zeta$ to take negative values, i.e.\
it is enough that $\zeta\geq -s(T)$, where $T$ is the time of interaction
at which the new initial data are given.
\end{remark}
The following assertion is crucial for the construction of a weak solution
(a solution in the association sense)
to (\ref{gdss1}) after an interaction: at an interaction point of a DSSW
and some other wave one can consider
a new initial value problem which contains a delta function.
Suppose that the initial data are given by
\begin{equation} \label{3id}
u(x,0)=\begin{cases} u_{0},& \; x<a \\
u_{1},& \; a<x<b \\ u_{2},& \; x>b \end{cases} \text{ and }
v(x,0)=\begin{cases} v_{0},& \; x<a \\
v_{1},& \; a<x<b \\ v_{2},& \; x>b \end{cases}
\end{equation}
such that there exists a singular shock wave starting from the point $x=a$
and a shock wave (or another singular shock wave) starting from the point
$x=b$, $a<b$. They can interact if $c_{1}>c_{2}$, where $c_{i}$ is
the speed of the $i$-th wave, $i=1,2$. For simplicity we shall assume
that $b=0$.
Let $(X,T)$ be the interaction point of the overcompressive
singular shock wave starting at the point $x=a$
\begin{equation} \label{prvidssw}
\begin{split}
u^{1}(x,t) = & G^{1}(x-c_{1}t-a)
+\tilde{s}^{1}(t)\Big(\alpha^{1}_{0}d^{-}(x-c_{1}t-a)
+\alpha^{1}_{1}d^{+}(x-c_{1}t-a)\Big) \\
v^{1}(x,t) = & H^{1}(x-c_{1}t-a)
+ s^{1}(t)\Big(\beta^{1}_{0}D^{-}(x-c_{1}t-a)
+\beta^{1}_{1}D^{+}(x-c_{1}t-a)\Big) \\
& +\tilde{\tilde{s}}^{1}(t)\Big(\gamma^{1}_{0}d^{-}(x-c_{1}t-a)
+\gamma^{1}_{1}d^{+}(x-c_{1}t-a)\Big)
\end{split}
\end{equation}
and the admissible (singular) shock wave
\begin{equation} \label{drugidssw}
\begin{split}
u^{2}(x,t) = & G^{2}(x-c_{2}t)
+\tilde{s}^{2}(t)\Big(\alpha^{2}_{0}d^{-}(x-c_{2}t)
+\alpha^{2}_{1}d^{+}(x-c_{2}t)\Big) \\
v^{2}(x,t) = & H^{2}(x-c_{2}t)
+ s^{2}(t)\Big(\beta^{2}_{0}D^{-}(x-c_{2}t)
+\beta^{2}_{1}D^{+}(x-c_{2}t)\Big)\\
& +\tilde{\tilde{s}}^{2}(t)\Big(\gamma^{2}_{0}d^{-}(x-c_{2}t)
+\gamma^{2}_{1}d^{+}(x-c_{2}t)\Big)
\end{split}
\end{equation}
where $G^{1}$, $G^{2}$, $H^{1}$ and $H^{2}$ are the generalized step functions
with values $(u_{0},u_{1})$, $(u_{1},u_{2})$, $(v_{0},v_{1})$
and $(v_{1},v_{2})$, respectively. Also, $(\alpha_{0}^{i})^{m_{1}}+
(\alpha_{1}^{i})^{m_{1}}=(\gamma_{0}^{i})^{m_{1}}+
(\gamma_{1}^{i})^{m_{1}}=\beta_{0}^{i}+\beta_{1}^{i}=1$, $i=1,2$.
Here, $m_{1}=m$ if singular part of singular shock wave is $m$SD-function
and $m_{1}=m-1$ in the case of $m'$SD-function.
If the second wave is a shock one,
then one can put $s^{2}\equiv \tilde{s}^{2}\equiv\tilde{\tilde{s}}^{2}\equiv 0$.
The speed of a singular shock wave (as well as for a shock wave) can be found
using the first equation in (\ref{gdss1}) because of assumptions (\ref{deg})
or (\ref{deg'}). For the first singular
shock wave (\ref{prvidssw}) we have
\begin{equation*}
\begin{split}
& (f_{2}(u))_{t}+(f_{3}(u)v+f_{4}(u))_{x} \approx
(f_{2}(G))_{t}+(f_{3}(G)H+f_{4}(G))_{x}
+ (\mathop{\rm const} s^{1}(t)\delta)_{x} \\
\approx & (-c_{1}[f_{2}(G)]+[f_{3}(G)H+f_{4}(G)])\delta+
\mathop{\rm const} s^{1}(t)\delta'\approx 0,
\end{split}
\end{equation*}
where the factor $\mathop{\rm const}s^{1}(t)$ is determined, but we shall
not write its exact value since it is not needed for the assertion.
The missing argument in the above expression is $x-c_{1}t-a$.
Let $\Gamma_{1}=\{x=c_{1}t+a\}$ and $\Gamma_{2}=\{x=c_{2}t\}$.
Then $[\cdot]_{\Gamma_{i}}$ denotes the jump at the
curve $\Gamma_{i}$, $i=1,2$.
Thus, one can see that the speed of that singular shock wave
has the same value as in the case of shock wave,
$$c_{1}={ [f_{3}(G)H+f_{4}(G)]_{\Gamma_{1}}\over [f_{2}(G)]_{\Gamma_{1}}}.$$
Also,
$$c_{2}={ [f_{3}(G)H+f_{4}(G)]_{\Gamma_{2}}\over [f_{2}(G)]_{\Gamma_{2}}}.$$
Finally, one can see that the waves given by (\ref{prvidssw}) and
(\ref{drugidssw}) will interact at the point $(X,T)$ if $a<0$ and
$c_{1}>c_{2}$, where
\begin{equation*}
\begin{split}
T=& {a[f_{2}(G)]_{\Gamma_{1}}[f_{2}(G)]_{\Gamma_{2}}
\over [f_{3}(G)H+f_{4}(G)]_{\Gamma_{2}}[f_{2}(G)]_{\Gamma_{1}}
-[f_{3}(G)H+f_{4}(G)]_{\Gamma_{1}}[f_{2}(G)]_{\Gamma_{2}}}\\
X=& {a [f_{3}(G)H+f_{4}(G)]_{\Gamma_{2}}[f_{2}(G)]_{\Gamma_{1}}
\over [f_{3}(G)H+f_{4}(G)]_{\Gamma_{2}}[f_{2}(G)]_{\Gamma_{1}}
-[f_{3}(G)H+f_{4}(G)]_{\Gamma_{1}}[f_{2}(G)]_{\Gamma_{2}}}.
\end{split}
\end{equation*}
Denote by $(\tilde{u}(x,t),\tilde{v}(x,t))$ a solution before interaction
time $t=T$ consisting of waves (\ref{prvidssw},\ref{drugidssw}).
\begin{remark}
In the case of system (\ref{dss1})
one can easily calculate speeds of the above shocks and
coordinates of the interaction point.
The speeds of singular shock and entropy shock wave are
\begin{equation*}
c_{1}={u_{1}^{2}-v_{1}-u_{0}^{2}+v_{0} \over
u_{1}-u_{0}} \text{ and } c_{2}={u_{2}^{2}-v_{2}-u_{1}^{2}+v_{1} \over
u_{2}-u_{1}}.
\end{equation*}
If $c_{1}>c_{2}$, then one gets
\begin{equation*}
X={ac_{2} \over c_{2}-c_{1}} \text{ and }
T={a \over c_{2}-c_{1}},
\end{equation*}
for the interaction point $(X,T)$.
\end{remark}
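As a quick arithmetic illustration of these formulas (with purely
hypothetical values of the speeds, chosen only for the check), take
$c_{1}=1$, $c_{2}=-1$ and $a=-2$. Then
$$ T={a\over c_{2}-c_{1}}={-2\over -2}=1,\qquad
X={ac_{2}\over c_{2}-c_{1}}={(-2)(-1)\over -2}=-1, $$
and indeed both paths $x=c_{1}t+a=t-2$ and $x=c_{2}t=-t$ pass through the
point $(X,T)=(-1,1)$.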
\begin{theorem} \label{glavna}
Let system (\ref{gdss1}) be given.
Suppose that $(u_{2},v_{2})\in \mathop{\rm SDSL}_{\zeta}(u_{0},v_{0})$,
$\zeta=(\zeta_{1}+\zeta_{2})/(g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1})$,
where the constants $\zeta_{i}$, $i=1,2$,
are defined by
\begin{equation*}
\begin{split}
& g_{1}(u^{1})v^{1}+g_{2}(u^{1})|_{(t=T)}
\approx \zeta_{1}\delta_{(X,T)} \\
& g_{1}(u^{2})v^{2}+g_{2}(u^{2})|_{(t=T)}
\approx \zeta_{2}\delta_{(X,T)}.
\end{split}
\end{equation*}
The corresponding DSSW, $(\hat{u},\hat{v})(x,t)$, is given by
\begin{equation} \label{izlaznidssw}
\begin{split}
\hat{u}(x,t) = & G(x-X-c(t-T)) \\
& +\tilde{s}(t)\Big(\alpha_{0}d^{-}(x-X-c(t-T))
+\alpha_{1}d^{+}(x-X-c(t-T))\Big) \\
\hat{v}(x,t) = & H(x-X-c(t-T)) \\
& + s(t)\Big(\beta_{0}D^{-}(x-X-c(t-T))
+\beta_{1}D^{+}(x-X-c(t-T))\Big) \\
& +\tilde{\tilde{s}}(t)\Big(\gamma_{0}d^{-}(x-X-c(t-T))
+\gamma_{1}d^{+}(x-X-c(t-T))\Big)
\end{split}
\end{equation}
for $t>T$. By Lemma \ref{podskup}, $\beta_{0}$ and $\beta_{1}$ are
determined independently of $\zeta$, so the definition of the DSSW makes
sense.
Then there exists a solution to (\ref{gdss1},\ref{3id}) in the association
sense such that it equals $(\tilde{u},\tilde{v})(x,t)$
for $t<T-\varepsilon$ and equals $(\hat{u},\hat{v})(x,t)$
for $t>T+\varepsilon$.
\end{theorem}
\begin{proof}
Take a constant $t_{0}$ such that the singular parts of the waves
$(u_{\varepsilon}^{1}(x,t),v_{\varepsilon}^{1}(x,t))$ and
$(u_{\varepsilon}^{2}(x,t),v_{\varepsilon}^{2}(x,t))$ have disjoint
supports (i.e.\
$c_{2}t-c_{1}t-a>4\varepsilon$, for $t<T-t_{0}\varepsilon$, if
one uses the construction of the S$\delta$-, $m$SD- and $m'$SD-functions
defined above).
Let us denote
\begin{equation*}
\begin{split}
& \Delta_{\varepsilon}=\{(x,t):\; |x-X|\leq t_{0}\varepsilon+\varepsilon,\;
|t-T|\leq t_{0}\varepsilon+\varepsilon\}, \\
& \tilde{\Delta}_{\varepsilon}=\{(x,t):\; |x-X|\leq t_{0}\varepsilon,\;
|t-T|\leq t_{0}\varepsilon\}, \\
& A_{\varepsilon}=\{(x,t):\; |x-X|\leq t_{0}\varepsilon+\varepsilon,\;
t=T-t_{0}\varepsilon-\varepsilon\}, \\
& B_{\varepsilon}=\{(x,t):\; x=X+t_{0}\varepsilon+\varepsilon,\;
|t-T|\leq t_{0}\varepsilon+\varepsilon\}, \\
& C_{\varepsilon}=\{(x,t):\; |x-X|\leq t_{0}\varepsilon+\varepsilon,\;
t=T+t_{0}\varepsilon+\varepsilon\}, \\
& D_{\varepsilon}=\{(x,t):\; x=X-t_{0}\varepsilon-\varepsilon,\;
|t-T|\leq t_{0}\varepsilon+\varepsilon\}.
\end{split}
\end{equation*}
Define a cut-off function $\xi_{\varepsilon}(x,t)$ which equals zero for
$(x,t)\in \tilde{\Delta}_{\varepsilon}$ and 1 for
$(x,t)\not\in \Delta_{\varepsilon}$.
Let
\begin{equation*}
(u_{temp},v_{temp})(x,t)=
\begin{cases}
(\tilde{u}(x,t),\tilde{v}(x,t)), & t<T \\
(\hat{u}(x,t),\hat{v}(x,t)), & t>T.
\end{cases}
\end{equation*}
We shall prove that the generalized functions $u$ and $v$ represented by
\begin{equation} \label{resenje}
u_{\varepsilon}(x,t)=u_{temp}(x,t)\xi_{\varepsilon}(x,t), \text{ and }
v_{\varepsilon}(x,t)=v_{temp}(x,t)\xi_{\varepsilon}(x,t),
\; x\in {\mathbb R},\; t\geq 0
\end{equation}
solve (\ref{gdss1}) in the association sense.
Denote
\begin{equation*}
{\mathbf F}(u,v)=\left[ \begin{matrix}f_{2}(u) \\
g_{1}(u)v+g_{2}(u) \end{matrix} \right] \text{ and }
{\mathbf G}(u,v)=\left[ \begin{matrix}f_{3}(u)v+f_{4}(u) \\
g_{3}(u)v+g_{4}(u) \end{matrix}\right].
\end{equation*}
We have
\begin{equation*}
\begin{split}
& \iint_{{\mathbb R}_{+}^{2}}{\mathbf F}(u,v)\Psi_{t} +
{\mathbf G}(u,v)\Psi_{x} dx dt \\
= & \iint_{\tilde{\Delta}_{\varepsilon}}{\mathbf F}(u,v)\Psi_{t} +
{\mathbf G}(u,v)\Psi_{x} dx dt \\
& + \iint_{{\mathbb R}_{+}^{2}\setminus \tilde{\Delta}_{\varepsilon}}
{\mathbf F}(u,v)\Psi_{t} + {\mathbf G}(u,v)\Psi_{x} dx dt,
\end{split}
\end{equation*}
for every test function $\Psi=\left[ \begin{matrix}\psi_{1}\\
\psi_{2}\end{matrix}\right]\in {\mathcal C}_{0}^\infty({\mathbb R}_{+}^{2})$.
The measure of the set $\tilde{\Delta}_{\varepsilon}$ is
${\mathcal O}(\varepsilon^{2})$, as $\varepsilon \rightarrow 0$,
while
\begin{equation*}
\| {\mathbf F}(u,v)\Psi_{t}
+{\mathbf G}(u,v)\Psi_{x}\|_{L^{\infty}({\mathbb R}_{+}^{2})}
\leq \mathop{\rm const} \varepsilon^{-1+1/m}
\end{equation*}
due to the assumptions in Definition \ref{s-d}.
Thus,
\begin{equation*}
\iint_{\tilde{\Delta}_{\varepsilon}}{\mathbf F}(u,v)\Psi_{t} +
{\mathbf G}(u,v)\Psi_{x} dx dt \sim \varepsilon^{1/m} \rightarrow 0, \text{ as }
\varepsilon \rightarrow 0.
\end{equation*}
Using the divergence theorem for the second integral one gets
\begin{equation*}
\begin{split}
& \iint_{{\mathbb R}_{+}^{2}\setminus \tilde{\Delta}_{\varepsilon}}
{\mathbf F}(u,v)\Psi_{t} + {\mathbf G}(u,v)\Psi_{x} dx dt \\
= & \int_{\partial\tilde{\Delta}_{\varepsilon}}\big({\mathbf F}(u,v)\nu_{t} +
{\mathbf G}(u,v)\nu_{x}\big)\Psi\, ds \\
& - \iint_{{\mathbb R}_{+}^{2}\setminus \tilde{\Delta}_{\varepsilon}}
{\mathbf F}(u,v)_{t}\Psi + {\mathbf G}(u,v)_{x}\Psi dx dt.
\end{split}
\end{equation*}
The last integral in the above expression tends to zero as
$\varepsilon \rightarrow 0$ since $(u,v)$ solves (\ref{gdss1})
in ${\mathbb R}_{+}^{2}\setminus\tilde{\Delta}_{\varepsilon}$
due to the construction. For the other integral one gets
\begin{equation*}
\begin{split}
& \int_{\partial\tilde{\Delta}_{\varepsilon}}\big({\mathbf F}(u,v)\nu_{t} +
{\mathbf G}(u,v)\nu_{x}\big)\Psi\, ds \\
= & \int_{A_{\varepsilon}}{\mathbf F}(u,v)\Psi dx
- \int_{C_{\varepsilon}}{\mathbf F}(u,v)\Psi dx
+ \int_{D_{\varepsilon}}{\mathbf G}(u,v)\Psi dt
- \int_{B_{\varepsilon}}{\mathbf G}(u,v)\Psi dt.
\end{split}
\end{equation*}
The functions $u_{\varepsilon}$ and $v_{\varepsilon}$ are $L^{\infty}$-bounded
uniformly in $\varepsilon$ on the sides
$B_{\varepsilon}$ and $D_{\varepsilon}$. Since their lengths
are ${\mathcal O}(\varepsilon)$, the integrals over them tend to zero
as $\varepsilon \rightarrow 0$.
Using the fact that $f_{2}(d_{\varepsilon})\approx 0$, one gets
\begin{equation*}
\lim_{\varepsilon \rightarrow 0}{\mathbf F}(\tilde{u},\tilde{v})|_{t=T}
=\left[\begin{matrix} 0 \\
(\zeta_{1}+\zeta_{2})\delta_{(X,T)}\end{matrix}\right].
\end{equation*}
This, together with the construction of the S$\delta$- and $m'$SD- (or $m$SD-)
functions, gives
\begin{equation*}
\lim_{\varepsilon \rightarrow 0}
\int_{A_{\varepsilon}} {\mathbf F}(u_{\varepsilon},v_{\varepsilon})\Psi\, dx
=\left[\begin{matrix} 0 \\
\zeta_{1}+\zeta_{2}\end{matrix}\right]\cdot \Psi(X,T).
\end{equation*}
Thus, it has to hold that
\begin{equation*}
\lim_{\varepsilon \rightarrow 0}
\int_{C_{\varepsilon}} {\mathbf F}(u_{\varepsilon},v_{\varepsilon})\Psi\, dx
=-\left[\begin{matrix} 0 \\
\zeta_{1}+\zeta_{2}\end{matrix}\right]\cdot \Psi(X,T).
\end{equation*}
This implies $f_{2}(\hat{u})|_{(X,T)}\approx 0$ and
\begin{equation} \label{dusl}
g_{1}(\hat{u})\hat{v}+g_{2}(\hat{u})|_{(X,T)}
\approx (\zeta_{1}+\zeta_{2})\delta_{(X,T)}.
\end{equation}
Due to conditions (\ref{deg}) or (\ref{deg'}) one immediately
gets $f_{2}(\hat{u})|_{(X,T)}\approx 0$. Put
$\zeta=(\zeta_{1}+\zeta_{2})/(g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1})$.
Then
$$
g_{1}(\hat{u})\hat{v}+g_{2}(\hat{u})|_{t=T}
\approx g_{1}(\hat{G})\hat{H}+g_{2}(\hat{G})
+s(T)(g_{1}(u_{0})\beta_{0}+g_{1}(u_{1})\beta_{1})\delta(X)
$$
and after another restriction on $x=X$,
$$
g_{1}(\hat{u})\hat{v}+g_{2}(\hat{u})|_{(X,T)}\approx
(\zeta_{1}+\zeta_{2})\delta_{(X,T)}.
$$
This concludes the proof.
\end{proof}
\begin{remark} \label{distriblim}
The distributional limit of the result of the interaction is given by
\begin{equation*}
\begin{split}
u(x,t)
&= \left\{ \begin{aligned}
u_{0},&\; x<c_{1}t+a,\;t<T \\
u_{1},&\; c_{1}t+a<x<c_{2}t, \; t<T \\
u_{2},&\; x>c_{2}t, \; t<T \\
u_{0},&\; x<c(t-T)+X, \; t>T \\
u_{2},&\; x>c(t-T)+X, \; t>T
\end{aligned} \right. \\
v(x,t)
&= \left\{ \begin{aligned}
v_{0},&\; x<c_{1}t+a,\;t<T \\
v_{1},&\; c_{1}t+a<x<c_{2}t, \; t<T \\
v_{2},&\; x>c_{2}t, \; t<T \\
v_{0},&\; x<c(t-T)+X, \; t>T \\
v_{2},&\; x>c(t-T)+X, \; t>T
\end{aligned} \right\} + s_{1}(t)\delta_{S_{1}}+
s_{2}(t)\delta_{S_{2}}+s(t)\delta_{S},
\end{split}
\end{equation*}
where $S_{1}=\{(x,t):\; x=c_{1}t+a,\; t\in [0,T]\}$,
$S_{2}=\{(x,t):\; x=c_{2}t,\; t\in [0,T]\}$ and
$S=\{(x,t):\; x-X=c(t-T),\; t\in [T,\infty)\}$.
If the second wave (\ref{drugidssw}) is a shock one, then $s_{2}\equiv 0$.
The above solution is continuous in
$t$ with values in ${\mathcal D}'({\mathbb R})$. This fact
can be used in an approach similar to \cite{ShDan1}, where
the variable $t$ is treated separately, i.e.\
when system (\ref{gdss1}) is considered in evolution form.
\end{remark}
The theorem shows that after an interaction of a singular shock with
some shock or another singular shock the problem
reduces to solving system (\ref{gdss1}) with the new initial data
(\ref{eq6}).
\begin{remark}
\noindent
(i) The solution to the interaction problem from Theorem \ref{glavna}
is always associated with a lower association rate
than the solution of the original Riemann problem.
For a specific system it seems possible to make a more sophisticated
construction in order to improve the rate.
\noindent
(ii) It appears that $d_{\varepsilon}^{\pm}$
are unavoidable correction factors, even though their distributional limit
equals zero.
The conditions (\ref{deg}) and (\ref{deg'}) ensure that the new initial data
at the interaction point do not depend on the $m$SD- or $m'$SD-functions in
the solution. We have used them because the real nature of
$m$SD- and $m'$SD-functions is not yet completely clear.
\end{remark}
The above theorem will be used in
the rest of the paper for the investigation of interactions between singular
shock waves and other types of waves in the special case
of system (\ref{dss1}).
\section{Applications}
Consider now system (\ref{dss1}), which is a special case of (\ref{gdss1}).
The authors of \cite{KeyKr} defined singular shock wave solutions and proved
their existence for some Riemann problems
for this system.
In the present paper, we investigate interactions of
such solutions with the other solutions to the Riemann problem for (\ref{dss1}).
In order to familiarize the reader with the presented results, let us
give some basic remarks about such solutions.
For given Riemann data $(u_{0},v_{0})$, $(u_{1},v_{1})$, there are three
basic solution types:
\begin{enumerate}
\item[(a)] {\it Shock waves}
\begin{equation}\label{sw12}
u(x,t)=\left\{ \begin{aligned} u_{0},& \; x<ct \\ u_{1},& \;
x>ct \end{aligned} \right. \phantom{second}
v(x,t)=\left\{ \begin{aligned} v_{0},& \; x<ct \\ v_{1},& \;
x>ct \end{aligned} \right.
\end{equation}
where $c=[u^{2}-v]/[u]$ and $(u_{1},v_{1})$ lies in an admissible
part of the Hugoniot locus of the point $(u_{0},v_{0})$.
\item[(b)] {\it Centered rarefaction waves}
\begin{equation}\label{rw1}
\begin{split}
& u(x,t)=\left\{ \begin{aligned} u_{0},& \; x<(u_{0}-1)t \\
x/t+1,& \; (u_{0}-1)t\leq x \leq (u_{1}-1)t \\
u_{1},& \; x>(u_{1}-1)t
\end{aligned} \right. \\
& v(x,t)=\left\{ \begin{aligned} v_{0},& \; x<(u_{0}-1)t \\
(x/t)^{2}/2+2x/t+C_{1},& \; (u_{0}-1)t\leq x \leq (u_{1}-1)t \\
v_{1},& \; x>(u_{1}-1)t
\end{aligned}\right.
\end{split}
\end{equation}
(1-rarefaction wave), where $C_{1}=v_{0}-u_{0}^{2}/2-u_{0}+3/2$,
when $(u_{1},v_{1})$ lies on the 1-rarefaction curve
starting at the point $(u_{0},v_{0})$. Or
\begin{equation}\label{rw2}
\begin{split}
& u(x,t)=\left\{ \begin{aligned} u_{0},& \; x<(u_{0}+1)t \\
x/t-1,& \; (u_{0}+1)t\leq x \leq (u_{1}+1)t \\
u_{1},& \; x>(u_{1}+1)t
\end{aligned}\right. \\
& v(x,t)=\left\{ \begin{aligned} v_{0},& \; x<(u_{0}+1)t \\
(x/t)^{2}/2-2x/t+C_{2},& \; (u_{0}+1)t\leq x \leq (u_{1}+1)t \\
v_{1},& \; x>(u_{1}+1)t
\end{aligned}\right.
\end{split}
\end{equation}
(2-rarefaction wave), where $C_{2}=v_{0}-u_{0}^{2}/2+u_{0}+3/2$,
when $(u_{1},v_{1})$ lies on the 2-rarefaction curve
starting at the point $(u_{0},v_{0})$.
\item[(c)] {\it Singular shock waves} (see Definition
\ref{singsh}) of $3'$SD-type,
\begin{equation}\label{ssw}
\begin{split}
& u(x,t)=\left\{ \begin{aligned} u_{0},& \; x<ct \\ u_{1},& \;
x>ct \end{aligned} \right\}+\tilde{s}(t)(\alpha_{0}d_{\varepsilon}^{-}(x-ct)
+ \alpha_{1}d_{\varepsilon}^{+}(x-ct))\\
& v(x,t)=\left\{ \begin{aligned} v_{0},& \; x<ct \\ v_{1},& \;
x>ct \end{aligned} \right\}+s(t)(\beta_{0}D_{\varepsilon}^{-}(x-ct)+
\beta_{1}D_{\varepsilon}^{+}(x-ct)),
\end{split}
\end{equation}
where $c=[u^{2}-v]/[u]$, and all other terms
are as in that definition. That means
\begin{equation}\label{usl}
D_{\varepsilon}\approx \delta,
\;(d_{\varepsilon}^{\pm})^{i}\approx 0, \; i=1,3, \; (d_{\varepsilon}^{\pm})^{2}
\approx \delta,
\end{equation}
while $(u_{1},v_{1})$ lies in the region of the point $(u_{0},v_{0})$
denoted by $Q_{7}$ in \cite{KeyKr} (see Figure 1).
\end{enumerate}
For an arbitrary Riemann problem for (\ref{dss1})
one can construct a solution by
means of these waves or their combinations (\cite{KeyKr}).
While interactions of the first two types can be handled in the usual
way, interactions involving singular shock waves are quite different
and far more interesting, so they are the topic of this paper.
The procedure for singular shock wave interactions
can also be used for systems of the form (\ref{gdss1}), but
a complete after-interaction solution depends highly on the particular
system. That is the reason why we treat system (\ref{dss1}) only.
In order to simplify notation, we shall substitute the point
$(X,T)$ in (\ref{eq6}) by $(0,0)$ and then solve the
Cauchy problem (\ref{dss1},\ref{eq6}).
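For orientation, let us note how (\ref{dss1}) fits into the general form
(\ref{gdss1}); the identification below is read off from the computations
that follow and is given only as a sketch:
$$ f_{2}(u)=u,\quad f_{3}(u)=-1,\quad f_{4}(u)=u^{2},\qquad
g_{1}(u)=1,\quad g_{2}(u)=0,\quad g_{3}(u)=0,\quad
g_{4}(u)={1\over 3}u^{3}-u. $$
With $m=3$ (the $3'$SD case used below), conditions (\ref{deg'}) read
$\mathop{\rm deg}(g_{1})=0<1$, $\mathop{\rm deg}(g_{2})<2$ and
$\mathop{\rm deg}(f_{2})=1<2$, so the assumptions of the previous sections
are satisfied.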
There is no multiplication of $v$ by $u$
in system (\ref{dss1}), so in the sequel it will be enough
to take $D^{-}=D^{+}$,
$\alpha_{0}(t):=\alpha_{0}\tilde{s}(t)$,
$\alpha_{1}(t):=\alpha_{1}\tilde{s}(t)$ and
$\beta(t):=s(t)$, i.e.\
to look for a solution of the form
\begin{equation} \label{eq7}
\begin{split}
& u = G(x-ct)+(\alpha_{0}(t)d^{-}(x-ct)
+\alpha_{1}(t)d^{+}(x-ct)) \\
& v = H(x-ct)+\beta(t)D(x-ct),
\end{split}
\end{equation}
where $G$ and $H$ are generalized step functions,
$d$ is a $3'$SD-function, $D$ is an S$\delta$-function and $c\in {\mathbb R}$.
Let us determine the SDSL of (\ref{dss1}) for some $(u_{0},v_{0})\in
{\mathbb R}^{2}$.
Substitution of (\ref{eq7}) into the first equation of the system gives
\begin{equation}\label{eq8}
\begin{split}
& c={u_{1}^{2}-v_{1}-u_{0}^{2}+v_{0} \over u_{1} - u_{0}} \\
& \alpha_{0}^{2}(t)+\alpha_{1}^{2}(t)=\beta(t),
\end{split}
\end{equation}
where $c$ is the speed of the wave. After neglecting all terms
converging to zero as $\varepsilon \to 0$,
the second equation becomes
\begin{equation*}
\begin{split}
& \partial_{t}H_{\varepsilon}(x-ct)+\beta'(t)\delta(x-ct)
-c\beta(t)\delta'(x-ct)
+\partial_{x}\big({1 \over 3}G_{\varepsilon}^{3}-G_{\varepsilon}\big) \\
&+(u_{1}\alpha_{0}^{2}(t)+u_{0}\alpha_{1}^{2}(t))\delta'(x-ct)= 0.
\end{split}
\end{equation*}
Thus, the following relations have to hold:
\begin{equation} \label{eq8dva}
\beta'(t)=c(v_{1}-v_{0})-\big( {1 \over 3} u_{1}^{3} - u_{1}
-{1 \over 3} u_{0}^{3} + u_{0} \big) =: k,
\end{equation}
i.e.
\begin{equation*}
\beta(t)=kt+\zeta, \mbox{ since } \beta(0)=\zeta
\end{equation*}
and
\begin{equation} \label{eq9}
u_{1}\alpha_{0}^{2}(t)+u_{0}\alpha_{1}^{2}(t)=c\beta(t).
\end{equation}
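For completeness, here is a sketch of the computation behind (\ref{eq8}).
Substituting (\ref{eq7}) into the first equation of (\ref{dss1}) and using
$Gd^{\pm}\approx 0$ (which holds for the representatives constructed above),
$(d^{\pm})^{2}\approx\delta$ and the disjointness of the supports of $d^{-}$
and $d^{+}$, one gets
$$ u^{2}-v\approx G^{2}-H
+\big(\alpha_{0}^{2}(t)+\alpha_{1}^{2}(t)-\beta(t)\big)\delta(x-ct). $$
The step parts give the Rankine--Hugoniot type relation $c[G]=[G^{2}-H]$,
which is the first line of (\ref{eq8}), while the $x$-derivative of the
$\delta$-term produces a $\delta'$-term whose coefficient has to vanish,
which is the second line of (\ref{eq8}).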
Like in \cite{KeyKr} one can see that the overcompressibility means
\begin{equation*}
u_{0}-1 \geq c \geq u_{1}+1,
\end{equation*}
i.e., $v_{1}$ lies between the curves
\begin{equation*}
\begin{split}
& D=\{ (u,v):\; v=v_{0}+u^{2}+u-u_{0}u-u_{0} \} \\
& E=\{ (u,v):\; v=v_{0}-u+u_{0}u-u_{0}^{2}+u_{0} \},
\end{split}
\end{equation*}
and $u_{0}-u_{1}\geq 2$.
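Let us verify this in a short sketch. Using
$c=(u_{1}^{2}-v_{1}-u_{0}^{2}+v_{0})/(u_{1}-u_{0})$ and $u_{1}<u_{0}$,
the condition $c\leq u_{0}-1$ is equivalent (after multiplying by the
negative number $u_{1}-u_{0}$) to
$$ v_{1}\leq v_{0}+u_{1}^{2}+u_{1}-u_{0}u_{1}-u_{0}, $$
i.e.\ $(u_{1},v_{1})$ lies below or on the curve $D$, while $c\geq u_{1}+1$
is equivalent to
$$ v_{1}\geq v_{0}-u_{1}+u_{0}u_{1}-u_{0}^{2}+u_{0}, $$
i.e.\ $(u_{1},v_{1})$ lies above or on the curve $E$. The two conditions are
compatible only if $u_{0}-1\geq u_{1}+1$, i.e.\ $u_{0}-u_{1}\geq 2$.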
Denote by $J_{1}$ the union of the parts of admissible
Hugoniot locus
\begin{equation*}
S_{1}=\Big \{ (u,v): \;
v-v_{0}=(u-u_{0})\Big( {u_{0}+u \over 2}
+\sqrt{1-{(u_{0}-u)^{2}\over 12}}\Big)\Big\},
\end{equation*}
and
\begin{equation*}
S_{2}=\Big\{ (u,v): \;
v-v_{0}=(u-u_{0})\Big( {u_{0}+u \over 2}
-\sqrt{1-{(u_{0}-u)^{2}\over 12}}\Big)\Big\},
\end{equation*}
for $u\in [u_{0}-\sqrt{12},u_{0}-3]$. Note that $S_{i}$ is not the
$i$-th shock curve but only a label.
The points between the curves
$D$ and $E$ and on the left-hand side
of $J_{1}$ define the area denoted by $Q_{7}$ in \cite{KeyKr}.
Here, this area is called the delta singular locus.
(\ref{eq8},\ref{eq9}) has a solution if and only if $\beta(t)>0$.
Depending on $k$, defined in (\ref{eq8dva}),
there are three possibilities for a resulting wave:
\noindent
(i) If $k>0$, then $\beta'(t)>0$ and $(u_{1},v_{1})\in Q_{7}$.
The resulting singular shock has the same properties as
before, i.e.\
its strength increases with time.
\noindent
(ii) If $k=0$, then $\beta \equiv \text{const} = \zeta >0$
and the corresponding part of the singular overcompressive locus is $J_{1}$.
The result of the interaction is a new kind of singular shock wave,
whose strength is constant in time.
\noindent
(iii) If $k<0$ (this means that the point $(u_{1},v_{1})$
is on the right-hand side of $J_{1}$), then
the resulting singular shock wave differs much more
from the usual one (with an increasing strength).
Its initial strength equals $\zeta$,
$\beta(0)=\zeta>0$, but it decreases linearly in time. At some time
$T_{0}$ the strength of
the singular shock equals zero and the singular
shock wave does not exist after that.
In the rest of the paper we shall see some cases when this happens.
The new initial data at the time $t=T_{0}$ are Riemann ones, and the
solution after that time can be found in the usual way,
by using the results in \cite{KeyKr}.
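In case (iii) the extinction time can be written down explicitly (recall
that the interaction point has been shifted to $(0,0)$, so that
$\beta(t)=kt+\zeta$):
$$ \beta(T_{0})=kT_{0}+\zeta=0 \quad\Longleftrightarrow\quad
T_{0}=-{\zeta\over k}>0 \qquad (k<0,\ \zeta>0), $$
and at that moment the singular shock is located at $x=cT_{0}$.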
All the above facts are collected in the following theorem.
\begin{theorem} \label{2sl}
The SDSL$_{\zeta}$, $\zeta>0$, for (\ref{dss1},\ref{eq6}) is the
area bounded by the curves $D$, $E$, $S_{2}\setminus J_{1}$ and
$S_{1}\setminus J_{1}$. (The area $Q_{7}$ is a subset of this one,
as known from Lemma \ref{podskup}.)
The overcompressive SDSL$_{\zeta}$, $\zeta >0$,
is a part of the SDSL bounded by the curves $D$ and $E$ such that
$u_{1}\leq u_{0}-2$.
\end{theorem}
\subsection{Interaction of a singular shock and an admissible shock wave}
Suppose that a singular shock wave with speed $c_{1}$ and
left- and right-hand side values
$U_{0}=(u_{0},v_{0})$ and $U_{1}=(u_{1},v_{1})$, respectively,
interacts with an admissible shock wave with speed $c_{2}<c_{1}$
and left- and right-hand side values
$U_{1}=(u_{1},v_{1})$ and $U_{2}=(u_{2},v_{2})$, respectively,
at a point
$(X,T)$.
\begin{lemma} \label{dss_sw}
If the above singular shock and shock wave are admissible,
then $(u_{2},v_{2})$ lies between the curves $D$ and $E$.
Thus, the solution after the interaction is a single
overcompressive singular shock wave.
\end{lemma}
\begin{proof}
Since $u_{0} \geq u_{1}+3$ and $u_{1}>u_{2}$
(because of the admissibility conditions for singular and shock wave), we have
$u_{0} > u_{2}+3$. The point $(u_{2},v_{2})$ lies on the curve
$S_{1}$ or $S_{2}$ with the origin at the point $(u_{1},v_{1})$.
Thus
\begin{equation*}
v_{2}=v_{1}+(u_{2}-u_{1})\Big({u_{1}+u_{2}\over 2} \pm
\sqrt{1-{(u_{1}-u_{2})^{2} \over 12}}\Big).
\end{equation*}
The point $(u_{1},v_{1})$ lies in the area denoted by $Q_{7}$,
thus below or on the curve $D$ with the origin at
$(u_{0},v_{0})$. Therefore
\begin{equation*}
v_{1}\leq v_{0}+u_{1}^{2}+u_{1}-u_{0}u_{1}-u_{0}.
\end{equation*}
Let the point $(u_{0},v_{0})$ be the origin. The point $(u_{2},v_{2})$
will be below the curve $D$ if
\begin{equation*}
\begin{split}
& v_{0}+u_{1}^{2}+u_{1}-u_{0}u_{1}-u_{0}+
(u_{2}-u_{1})\Big({u_{1}+u_{2}\over 2} \pm
\sqrt{1-{(u_{1}-u_{2})^{2} \over 12}}\Big) \\
\leq & v_{0}+u_{2}^{2}+u_{2}-u_{0}u_{2}-u_{0}.
\end{split}
\end{equation*}
Since $u_{2}-u_{1}<0$, this reduces to
\begin{equation*}
\pm \sqrt{1-{(u_{1}-u_{2})^{2} \over 12}} \leq
{1 \over 2}(u_{0}-u_{1})+{1 \over 2}(u_{0}-u_{2})-1.
\end{equation*}
The left-hand side of the above inequality is at most $1$, while
the right-hand side is at least $2$ (since $u_{0}-u_{1}\geq 3$ and
$u_{0}-u_{2}>3$). Thus, the point $(u_{2},v_{2})$
indeed lies below the curve $D$.
In the same way one can prove that the point $(u_{2},v_{2})$
lies above the curve $E$.
\end{proof}
\begin{remark} \label{another}
In the same manner as above, one can prove that the situation is the same
when singular shock and shock wave change sides. That is, when an admissible
singular shock wave interacts with an admissible shock wave
from the right-hand side, then the solution is again a single admissible
singular shock wave.
\end{remark}
\subsection{Double singular shock wave interaction}
Suppose that an admissible singular shock wave with speed $c_{1}$ and
left- and right-hand
side values $U_{0}=(u_{0},v_{0})$ and $U_{1}=(u_{1},v_{1})$,
respectively, interacts
with another singular shock wave with speed $c_{2}<c_{1}$ and
left-hand (right-hand) side value
$U_{1}=(u_{1},v_{1})$ ($U_{2}=(u_{2},v_{2})$) at the point $(X,T)$.
Since the conditions for the existence of singular shock waves include
$u_{0}-u_{1}\geq 3$ and $u_{1}-u_{2}\geq 3$, we have $u_{0}-u_{2}\geq 6$, i.e.\
the point $(u_{2},v_{2})$ is on the left-hand side of the line
$u=u_{0}-\sqrt{12}$.
Concerning the position of the point $(u_{2},v_{2})$
in the plane of wave regions with the origin at $(u_{0},v_{0})$ there
are three possibilities:
\noindent
(i) The point $(u_{2},v_{2})$ is between or at the curves $D$ and $E$.
The result of the interaction is a single singular shock wave (with
increasing strength).
\noindent
(ii) The point $(u_{2},v_{2})$ is above the curve $D$.
The result of the interaction is a 1-rarefaction wave followed
by a singular shock wave.
\noindent
(iii) The point $(u_{2},v_{2})$ is below the curve $E$.
The result of the interaction is a singular shock wave
followed by a 2-rarefaction wave.
The resulting singular shock waves have increasing strength in all three cases.
\section{Interaction of a singular shock wave and a rarefaction wave}
The last possibility of singular shock wave interactions is between
a singular shock wave and a rarefaction wave.
That possibility was omitted from the considerations of the general case
due to the richness of possible behaviors. Nevertheless, most
specific Riemann problems can be treated similarly to system (\ref{dss1})
here, at least up to some point.
For a given point $(u_{0},v_{0})$, the rarefaction curves are given by
(see \cite{KeyKr})
\begin{equation*}
\begin{split}
& R_{1}=\{(u,v):\;
v=v_{0}-{1\over 2}u_{0}^{2}+{1\over 2}u^{2}+u-u_{0}\}, \\
& R_{2}=\{(u,v):\;
v=v_{0}-{1\over 2}u_{0}^{2}+{1\over 2}u^{2}-u+u_{0}\}.
\end{split}
\end{equation*}
Suppose that a singular shock wave with left- and right-hand side values
$U_{0}=(u_{0},v_{0})$ and $U_{1}=(u_{1},v_{1})$
interacts from the left-hand side
with a rarefaction wave at some point $(X,T)$.
Suppose that the rarefaction wave is approximated by a number of small
amplitude (non-admissible) shock waves, as in the wave front tracking
algorithm (see \cite{Bre}, for example).
Following the intuition given in Theorem \ref{glavna},
the first task is then to look at the interaction of the singular shock
wave with a non-admissible shock wave.
It is possible to extend Theorem \ref{glavna} to such a case,
provided that the non-admissible shock wave has a small enough amplitude
(of order $\varepsilon^{2}$, say).
Denote by $(u_{r},v_{r})$ the end-point of the rarefaction curve. Let us note
that the starting point of the curve, $(u_{1},v_{1})$, is in $Q_{7}$.
In what follows, we shall abuse the notation and denote by
$(u_{1},v_{1})\in Q_{7}$ the left-hand side and by
$(u_{2},v_{2})$ the right-hand side value of an approximating non-admissible
shock wave, i.e.\ of a piece of the rarefaction
curve. If $(u_{2},v_{2})\in Q_{7}$, then the result of the interaction is a
single singular shock wave with left-hand side value equal to
$(u_{0},v_{0})$. The speed depends on the initial values as in (\ref{eq8}).
One can therefore continue the procedure, taking approximating
points from the rarefaction
curve as the right-hand values of the non-admissible shock wave, until it
reaches the border of $Q_{7}$.
After looking at the above discrete model, we return to the real situation.
Let us denote by $(c(t),t)$,
$t$ belonging to some interval,
the path of the resulting singular shock wave through $Q_{7}$.
It is possible to calculate
the above path explicitly. For example, if a singular shock wave interacts with
a centered 1-rarefaction wave,
substituting
\begin{equation*}
\begin{split}
& u(x,t)=\left\{ \begin{aligned} u_{1}, & \; x<c(t) \\
\phi_{1}(x/t), & \; x>c(t) \end{aligned} \right\} +
\alpha_{0}(t)d_{\varepsilon}^{-}(x-c(t))
+\alpha_{1}(t)d_{\varepsilon}^{+}(x-c(t)) \\
& v(x,t)=\left\{ \begin{aligned} v_{1}, & \; x<c(t) \\
\phi_{2}(x/t), & \; x>c(t) \end{aligned} \right\} +
\beta(t)D_{\varepsilon}(x-c(t))
\end{split}
\end{equation*}
in system (\ref{dss1}), one obtains
\begin{equation*}
\begin{split}
& \tilde{\alpha}_{0}^{2}(t)+\tilde{\alpha}_{1}^{2}(t)=\tilde{\beta}(t)\\
& c(t)=\Big(t(1-2(u_{1}-v_{0}+v_{1}+u_{0}^{2}-u_{1}^{2}))\\
& + T(1-2(u_{0}-v_{1}-u_{0}u_{1}+u_{0}^{2}-u_{1}^{2}))\Big)/(2(u_{0}-1))\\
& \tilde{\beta}'(t)=c'(t)\Big({1\over 2}\Big({c(t) \over t}+1\Big)
+\Big({c(t) \over t}+1\Big)+v_{1}-{1\over 2}u_{1}^{2}-u_{1}-v_{0}\Big)\\
& -\Big({1\over 3}\Big({c(t) \over t}+1\Big)^{3}-\Big({c(t) \over t}+1\Big)
-{1\over 3}u_{0}^{3}+u_{0}\Big),
\end{split}
\end{equation*}
where the initial datum for $\tilde{\beta}$ at the time $t=T$ is the initial strength
of the singular shock wave, $\tilde{\beta}(T)$.
The above calculations mean that the form of the resulting singular shock
curve and its strength are uniquely determined through the area $Q_{7}$.
If $(u_{r},v_{r})\in Q_{7}$, then the analysis is finished. Suppose that
this is not true. The main problem is to analyse the situation when
the rarefaction curve intersects the boundary of $Q_{7}$. Let us try to
find out what is happening by using
a discrete model.
Thus, the first real problem is to find the form of the solution
when the points from the rarefaction curve satisfy
$(u_{1},v_{1})\in Q_{7}$ and $(u_{2},v_{2})\not\in Q_{7}$.
Denote by $\tilde{D}$ and $\tilde{G}$ the intersection points of the
curve $J_{1}$ (or the line $u=u_{0}-3$) with the curves
$E$ and $D$, respectively (see Figure 2).
\subsection{The first critical case}
Denote by $J$ the 1-rarefaction curve starting from the point $\tilde{G}$ and
by $J_{2}$ the 2-rarefaction curve starting from the point $\tilde{D}$.
The region where
$(u_{2},v_{2})$ can lie consists of five subregions:
\noindent
(i) {\it The rarefaction curve which starts at $(u_{1},v_{1})$
intersects the curve $D$ outside the point $\tilde{G}$.}
The point $(u_{2},v_{2})$ lies in the region above the curve $D$ and
left of the line $u=u_{0}-3$. The final result of the interaction
is a 1-rarefaction wave ($R_{1}$)
followed by a singular shock wave with increasing strength.
\noindent
(ii) {\it The rarefaction curve which starts at $(u_{1},v_{1})$
intersects the curve $E$ outside the point $\tilde{D}$.}
The point $(u_{2},v_{2})$ lies in the region below the curve $E$ and
on the left-hand side of the line $u=u_{0}-3$. The result of
the interaction is a singular shock wave
with increasing strength followed by a 2-rarefaction wave ($R_{2}$).
\noindent
(iii) {\it The rarefaction curve which starts at $(u_{1},v_{1})$
intersects the curve $J_{1}$ outside the points $\tilde{D}$
and $\tilde{G}$.} Since the amplitude of a non-admissible
shock wave can be taken as small as necessary, one can assume that
the point $(u_{2},v_{2})$ lies in the
second delta singular locus and the resulting singular shock wave
has a decreasing strength.
The strength-function $\tilde{\beta}(t)=\zeta+k(t-T)$ of the resulting
singular shock is decreasing, so there exists
a time $T_{1}=T-\zeta/k$ such that
$\tilde{\beta}(T_{1})=\alpha_{0}(T_{1})=\alpha_{1}(T_{1})=0$. Let
$X_{1}=cT_{1}+(X-cT)$, where $c$ is the speed of the resulting
singular shock wave ($X_{1}$ is the space coordinate of the point where
the strength reaches zero). Therefore, at the time $t=T_{1}$, we have to solve the
new Riemann problem
\begin{equation*}
u|_{t=T_{1}}=\begin{cases} u_{0}, & x<X_{1} \\
u_{2},& x>X_{1} \end{cases}, \;
v|_{t=T_{1}}=\begin{cases} v_{0}, & x<X_{1} \\
v_{2},& x>X_{1} \end{cases}.
\end{equation*}
This problem has a unique entropy solution consisting of
two shock waves, since the point $(u_{2},v_{2})$ lies between the curves
$S_{1}$ and $S_{2}$ with respect to the origin at the
point $(u_{0},v_{0})$. This means that the singular
shock wave decouples into a pair of admissible shock waves.
If $u_{r}\leq u_{0}-2$, this pair of shock waves is the final solution.
The case when $u_{r}<u_{0}-2$ belongs to the following
subsection, i.e.\
the second critical case.
\noindent
(iv) {\it The rarefaction curve $R_{j}$, $j=1 \text{ or } 2$,
which starts at $(u_{1},v_{1})$
intersects the curve $J_{1}$ at the point $\tilde{G}$.}
We can take $\tilde{G}=(u_{2},v_{2})$ for convenience. The set
of such points $(u_{1},v_{1})$ lies on the inverse rarefaction curve,
which starts from the right-hand side values, i.e.\
\begin{equation*}
\begin{split}
&\tilde{R}_{1}=\{(u,v):\; v=v_{0}+(u_{0}^{2}-u^{2})/2+u_{0}-u\} \\
&\text{and} \\
&\tilde{R}_{2}=\{(u,v):\; v=v_{0}+(u_{0}^{2}-u^{2})/2-u_{0}+u\}
\end{split}
\end{equation*}
(the same explanation will be used in Remark \ref{r4} below).
A straightforward calculation, using the inverse rarefaction curves
$\tilde{R}_{1}$ and $\tilde{R}_{2}$ given above, shows that this curve lies
in the region $Q_{7}$; thus this situation is possible.
If $j=1$, then
the point $(u_{2},v_{2})$ belongs
to $J$ and the solution after the interaction is an $R_{1}$-wave
followed by a singular shock wave with a constant strength.
If $j=2$, then
the point $(u_{2},v_{2})$ lies in the area below the curve $J$.
This can be verified by a direct calculation,
taking into account that the amplitude of
a non-admissible shock is small enough and
$u_{2}<u_{0}-2$. The solution after the interaction is an
admissible singular shock wave with a decreasing strength.
Further explanation of such a singular shock wave is given in the
following subsection.
\noindent
(v) {\it The rarefaction curve $R_{j}$ which starts at $(u_{1},v_{1})$
intersects the curve $J_{1}$ at the point $\tilde{D}$.}
Again, let $\tilde{D}=(u_{2},v_{2})$.
A simple calculation, as in
case (iv), shows that this situation is also possible,
since the inverse rarefaction curves starting from
$\tilde{D}$ stay in $Q_{7}$.
If $j=2$, the point $(u_{2},v_{2})$ belongs
to $J_{2}$, and then the solution after the interaction is a
singular shock wave with a constant strength followed by an
$R_{2}$-wave.
If $j=1$, then the same arguments as above give that
the point $(u_{2},v_{2})$ lies in the area above the curve $J_{2}$,
and the result of the interaction is an admissible
singular shock wave with a decreasing strength. Again, see
the following subsection for the further analysis.
\subsection{The second critical case}
Now we deal with the problem when the rarefaction curve,
after passing through $J_{1}$, also
passes through the curve $D$ or $E$.
One can see that this is the continuation of the cases (iii)--(v)
from the previous part.
\noindent
(a) Denote by $\hat{D}$ the area above the curve $D$,
below $S_{1}$ and on the left-hand side of the line $u=u_{0}-2$.
Also denote by $\hat{\hat{D}}$ the area above the curve $E$,
below $S_{1}$ and on the right-hand side of the line $u=u_{0}-2$.
If $(u_{2},v_{2})$ lies in one of these regions,
the solution is a combination of a
rarefaction wave $R_{1}$ and an overcompressive singular shock wave with
a decreasing or constant strength.
\noindent
(b) Denote by $\hat{E}$ the area below the curve $E$,
above $S_{2}$ and on the left-hand side of the line $u=u_{0}-2$.
Also denote by $\hat{\hat{E}}$ the area below the curve $D$,
above $S_{2}$ and on the right-hand side of the line $u=u_{0}-2$.
If $(u_{2},v_{2})$ lies in one of these regions,
the solution is a combination of
an overcompressive singular shock wave with
a decreasing or constant strength and a rarefaction wave $R_{2}$.
Denote by $D_{0}$ the area bounded by the curves $D$, $E$, $S_{1}$
and $S_{2}$ such that $u<u_{0}-2$ in $D_{0}$.
One can see that a rarefaction curve cannot enter $D_{0}$,
since it would have to pass through
the intersection point $(u_{0}-2,-2u_{0}+v_{0}+2)$ of $D$ and $E$, but
\begin{equation*}
\begin{split}
& 2u-2u_{0}-uu_{0}+u^{2}/2+u_{0}^{2}/2+2>0
\text{ (i.e.\ } R_{1}
\text{ is above the curve } E\text{)} \\
& 2u_{0}-2u+uu_{0}-u^{2}/2-u_{0}^{2}/2-2<0 \text{ (i.e.\ } R_{2}
\text{ is below the curve } D\text{)}.
\end{split}
\end{equation*}
Therefore, a rarefaction curve which passes through the point
$D\cap E$ goes either into $\hat{\hat{D}}$ or $\hat{\hat{E}}$,
and these cases are analysed above.
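The two displayed inequalities can be checked by a one-line computation:
$$ 2u-2u_{0}-uu_{0}+{u^{2}\over 2}+{u_{0}^{2}\over 2}+2
={(u-u_{0}+2)^{2}\over 2}\geq 0, $$
with equality only at the intersection point $u=u_{0}-2$, while the second
expression is just the negative of the first one.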
Thus, we have described all important points of the interactions
between singular shock and rarefaction waves.
When the result of a single interaction is known, the question
about the further singular shock path can be answered by successive
use of the above procedures.
\begin{remark} \label{r4}
One can use a similar analysis of all possible cases when
the rarefaction wave which interacts with a singular shock wave is on
its left-hand side. Instead of the direct rarefaction and singular
shock curves, the inverse ones should be used, i.e.\
$(u_{2},v_{2})$ is a starting point and one is able to calculate
$v_{0}$ from the formulas for $E$, $D$, $S_{1}$, $S_{2}$, $R_{1}$ and $R_{2}$.
\end{remark}
\begin{remark}
In contrast to the case in \cite{NedObe}, where an interaction
can generate a ``strange'' solution containing an unbounded $L_{loc}^{1}$
function, in the presented system one finds only bounded functions
and singular shock waves as the result of an interaction.
For a system (\ref{gdss1}) with $g_{1}\not \equiv \mathop{\rm const}$ or
$g_{2}\not \equiv 0$, the interaction of singular shock and rarefaction waves
cannot be treated as easily as here.
\end{remark}
Thus, we have proved the following assertion for the interaction
in the case of system (\ref{dss1}).
\begin{theorem} \label{primenjena}
Suppose that a singular shock wave interacts with a rarefaction wave
at the time $T$. For some time period $T<t<T_{1}$ the solution
is represented by a singular shock wave supported by a uniquely defined
curve (not a line) followed by a new rarefaction wave. Depending on the
right-hand value of the primary rarefaction wave, one has the following
possible cases for the solution for $t>T_{1}$.
\begin{itemize}
\item[(a)] Single singular shock wave (supported by a line)
with an increasing strength.
\item[(b)] 1-rarefaction wave followed by singular shock wave
with an increasing strength.
\item[(c)] Singular shock wave with an increasing strength
followed by 2-rarefaction wave.
\item[(d)] Singular shock wave with a decreasing strength
prolonged by either a single singular shock wave with an increasing strength,
or a pair of admissible shock waves.
\item[(e)] 1-rarefaction wave followed by singular shock wave
with a constant strength.
\item[(f)] Singular shock wave with a constant strength
followed by 2-rarefaction wave.
\item[(g)] Singular shock wave with a decreasing strength
prolonged by either 1-rarefaction wave followed by singular shock wave
with decreasing or constant strength, or singular shock wave
with decreasing or constant strength followed by 2-rarefaction wave.
\end{itemize}
Here ``prolonged'' refers to the state after the strength of the singular
shock wave becomes zero. Such a wave can also end up with a non-zero strength,
and then there is obviously no prolongation as described above.
\end{theorem}
\end{document} |
\begin{document}
{
\begin{center}
\Large\bf
Devinatz's moment problem: a description of all solutions.
\end{center}
\begin{center}
\bf S.M. Zagorodnyuk
\end{center}
\section{Introduction.}
We shall study the following problem: to find a non-negative Borel measure $\mu$ in a strip
$$ \Pi = \{ (x,\varphi):\ x\in \mathbb{R},\ -\pi\leq \varphi < \pi \}, $$
such that
\begin{equation}
\label{f1_1}
\int_\Pi x^m e^{in\varphi} d\mu = s_{m,n},\qquad m\in \mathbb{Z}_+, n\in \mathbb{Z},
\end{equation}
where $\{ s_{m,n} \}_{m\in \mathbb{Z}_+, n\in \mathbb{Z}}$ is a given sequence of complex numbers.
We shall refer to this problem as to {\bf the Devinatz moment problem}.
\noindent
A.~Devinatz was the first who introduced and studied this moment problem~\cite{cit_1000_D}.
He obtained the necessary and sufficient conditions of solvability for the moment problem~(\ref{f1_1})
and gave a sufficient condition for the moment problem to be determinate~\cite[Theorem 4]{cit_1000_D}.
\noindent
Our aim here is threefold. Firstly, we present a new proof of the Devinatz solvability criterion.
Secondly, we describe canonical solutions of the Devinatz moment problem (see the definition
below).
Finally, we describe all solutions of the Devinatz moment problem.
We shall use an abstract operator approach~\cite{cit_1500_Z} and results of Godi\v{c}, Lucenko and
Shtraus~\cite{cit_2000_GL},\cite[Theorem 1]{cit_3000_GP},\cite{cit_4000_S}.
{\bf Notations. } As usual, we denote by $\mathbb{R},\mathbb{C},\mathbb{N},\mathbb{Z},\mathbb{Z}_+$
the sets of real numbers, complex numbers, positive integers, integers and non-negative integers,
respectively.
For a subset $S$ of the complex plane we denote by $\mathfrak{B}(S)$ the set of all Borel subsets of $S$.
Everywhere in this paper, all Hilbert spaces are assumed to be separable. By
$(\cdot,\cdot)_H$ and $\| \cdot \|_H$ we denote the scalar product and the norm in a Hilbert space $H$,
respectively. The indices may be omitted in obvious cases.
For a set $M$ in $H$, by $\overline{M}$ we mean the closure of $M$ in the norm $\| \cdot \|_H$. For
$\{ x_k \}_{k\in T}$, $x_k\in H$, we write
$\mathop{\rm Lin}\nolimits \{ x_k \}_{k\in T}$ for the set of linear combinations of vectors $\{ x_k \}_{k\in T}$
and $\mathop{\rm span}\nolimits \{ x_k \}_{k\in T} =
\overline{ \mathop{\rm Lin}\nolimits \{ x_k \}_{k\in T} }$.
Here $T := \mathbb{Z}_+ \times \mathbb{Z}$, i.e. $T$ consists of pairs $(m,n)$,
$m\in \mathbb{Z}_+$, $n\in\mathbb{Z}$.
The identity operator in $H$ is denoted by $E$. For an arbitrary linear operator $A$ in $H$,
the operators $A^*$,$\overline{A}$,$A^{-1}$ mean its adjoint operator, its closure and its inverse
(if they exist). By $D(A)$ and $R(A)$ we mean the domain and the range of the operator $A$.
By $\sigma(A)$, $\rho(A)$ we denote the spectrum of $A$ and the resolvent set of $A$, respectively.
We denote by $R_z (A)$ the resolvent function of $A$, $z\in \rho(A)$.
The norm of a bounded operator $A$ is denoted by $\| A \|$.
By $P^H_{H_1} = P_{H_1}$ we mean the operator of orthogonal projection in $H$ on a subspace
$H_1$ in $H$. By $\mathbf{B}(H)$ we denote the set of all bounded operators in $H$.
\section{Solvability.}
Let a moment problem~(\ref{f1_1}) be given.
Suppose that the moment problem has a solution $\mu$. Choose an arbitrary power-trigonometric
polynomial $p(x,\varphi)$ of the following form:
\begin{equation}
\label{f1_2}
\sum_{m=0}^\infty \sum_{n=-\infty}^\infty \alpha_{m,n} x^m e^{in\varphi},\qquad \alpha_{m,n}\in \mathbb{C},
\end{equation}
where all but finite number of coefficients $\alpha_{m,n}$ are zeros.
We can write
$$ 0 \leq \int_\Pi |p(x,\varphi)|^2 d\mu =
\int_\Pi \sum_{m=0}^\infty \sum_{n=-\infty}^\infty \alpha_{m,n} x^m e^{in\varphi}
\overline{
\sum_{k=0}^\infty \sum_{l=-\infty}^\infty \alpha_{k,l} x^k e^{il\varphi}
} d\mu $$
$$ = \sum_{m,n,k,l} \alpha_{m,n}\overline{\alpha_{k,l}} \int_\Pi x^{m+k} e^{i(n-l)\varphi} d\mu =
\sum_{m,n,k,l} \alpha_{m,n}\overline{\alpha_{k,l}} s_{m+k,n-l}. $$
Thus, for arbitrary complex numbers $\alpha_{m,n}$ (where all but a finite number are zero) we have
\begin{equation}
\label{f2_1}
\sum_{m,k=0}^\infty \sum_{n,l=-\infty}^\infty \alpha_{m,n}\overline{\alpha_{k,l}} s_{m+k,n-l} \geq 0.
\end{equation}
Recall that $T = \mathbb{Z}_+\times \mathbb{Z}$ and for $t,r\in T$, $t=(m,n)$, $r=(k,l)$, we set
\begin{equation}
\label{f2_2}
K(t,r) = K((m,n),(k,l)) = s_{m+k,n-l}.
\end{equation}
Thus, for arbitrary elements $t_1,t_2,...,t_n$ of $T$ and
arbitrary complex numbers $\alpha_1,\alpha_2,...,\alpha_n$, with $n\in \mathbb{N}$, the following inequality holds:
\begin{equation}
\label{f2_3}
\sum_{i,j=1}^n K(t_i,t_j) \alpha_{i} \overline{\alpha_j} \geq 0.
\end{equation}
The latter means that $K(t,r)$ is a positive matrix in the sense of E.H.~Moore~\cite[p.344]{cit_5000_A}.
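For illustration we record two immediate consequences of~(\ref{f2_3}) (they are not used below). Taking a single element $t=(m,n)$ one gets
$$ s_{2m,0} = K(t,t) \geq 0,\qquad m\in \mathbb{Z}_+, $$
while the Hermitian symmetry and the Cauchy-Schwarz inequality for positive matrices give
$$ s_{m+k,n-l} = \overline{ s_{m+k,l-n} },\qquad |s_{m+k,n-l}|^2 \leq s_{2m,0}\, s_{2k,0},\qquad (m,n),(k,l)\in T. $$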
Suppose now that a Devinatz moment problem is given and conditions~(\ref{f2_1}) (or what is the same
conditions~(\ref{f2_3})) hold. Let us show that the moment problem has a solution.
We shall use the following important fact (e.g.~\cite[pp.361-363]{cit_6000_AG}).
\begin{thm}
\label{t2_1}
Let $K = K(t,r)$ be a positive matrix on $T=\mathbb{Z}_+\times \mathbb{Z}$.
Then there exist a separable Hilbert space $H$ with a scalar product $(\cdot,\cdot)$ and
a sequence $\{ x_t \}_{t\in T}$ in $H$, such that
\begin{equation}
\label{f2_4}
K(t,r) = (x_t,x_r),\qquad t,r\in T,
\end{equation}
and $\mathop{\rm span}\nolimits\{ x_t \}_{t\in T} = H$.
\end{thm}
{\bf Proof. }
Consider an arbitrary infinite-dimensional linear vector space $V$ (for example, we can choose a space of complex
sequences $(u_n)_{n\in \mathbb{N}}$, $u_n\in \mathbb{C}$).
Let $X = \{ x_t \}_{t\in T}$ be an arbitrary infinite sequence of linearly independent elements
in $V$ which is indexed by elements of $T$.
Set $L_X = \mathop{\rm Lin}\nolimits\{ x_t \}_{t\in T}$. Introduce the following functional:
\begin{equation}
\label{f2_5}
[x,y] = \sum_{t,r\in T} K(t,r) a_t\overline{b_r},
\end{equation}
for $x,y\in L_X$,
$$ x=\sum_{t\in T} a_t x_t,\quad y=\sum_{r\in T} b_r x_r,\quad a_t,b_r\in \mathbb{C}. $$
Here all but a finite number of the coefficients $a_t,b_r$ are zero.
\noindent
The set $L_X$ equipped with $[\cdot,\cdot]$ is a pre-Hilbert space. Factoring out the kernel and taking the completion,
we obtain the required space $H$ (\cite[p. 10-11]{cit_7000_B}).
$\Box$
By applying this theorem we get that there exist a Hilbert space $H$ and a sequence
$\{ x_{m,n} \}_{m\in \mathbb{Z}_+, n\in \mathbb{Z}}$, $x_{m,n}\in H$, such that
\begin{equation}
\label{f2_6}
(x_{m,n}, x_{k,l})_H = K((m,n),(k,l)),\qquad m,k\in \mathbb{Z}_+,\ n,l\in \mathbb{Z}.
\end{equation}
Set $L = \mathop{\rm Lin}\nolimits\{ x_{m,n} \}_{(m,n)\in T}$.
We introduce the following operators
\begin{equation}
\label{f2_7}
A_0 x = \sum_{(m,n)\in T} \alpha_{m,n} x_{m+1,n},
\end{equation}
\begin{equation}
\label{f2_8}
B_0 x = \sum_{(m,n)\in T} \alpha_{m,n} x_{m,n+1},
\end{equation}
where
\begin{equation}
\label{f2_9}
x = \sum_{(m,n)\in T} \alpha_{m,n} x_{m,n} \in L.
\end{equation}
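In particular, taking a single nonzero coefficient in~(\ref{f2_9}) we get
$$ A_0 x_{m,n} = x_{m+1,n},\qquad B_0 x_{m,n} = x_{m,n+1},\qquad (m,n)\in T, $$
so on the generating vectors $A_0$ and $B_0$ play the role of multiplication by $x$ and by $e^{i\varphi}$, respectively (cf. the operators $A_\mu$, $B_\mu$ introduced below).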
We should show that these definitions are correct.
Indeed, suppose that the element $x$ in~(\ref{f2_9}) has another representation:
\begin{equation}
\label{f2_10}
x = \sum_{(k,l)\in T} \beta_{k,l} x_{k,l}.
\end{equation}
We can write
$$ \left( \sum_{(m,n)\in T} \alpha_{m,n} x_{m+1,n}, x_{a,b} \right) =
\sum_{(m,n)\in T} \alpha_{m,n} K((m+1,n),(a,b)) $$
$$= \sum_{(m,n)\in T} \alpha_{m,n} s_{m+1+a,n-b} = \sum_{(m,n)\in T} \alpha_{m,n} K((m,n),(a+1,b)) $$
$$ = \left(\sum_{(m,n)\in T} \alpha_{m,n} x_{m,n}, x_{a+1,b} \right) = (x,x_{a+1,b}), $$
for arbitrary $(a,b)\in T$.
In the same manner we get
$$ \left(\sum_{(k,l)\in T} \beta_{k,l} x_{k+1,l}, x_{a,b} \right) = (x,x_{a+1,b}). $$
Since $\mathop{\rm span}\nolimits\{ x_{a,b} \}_{(a,b)\in T} = H$, we get
$$ \sum_{(m,n)\in T} \alpha_{m,n} x_{m+1,n} =
\sum_{(k,l)\in T} \beta_{k,l} x_{k+1,l}. $$
Thus, the operator $A_0$ is defined correctly.
\noindent
We can write
$$ \left\| \sum_{(m,n)\in T} (\alpha_{m,n}-\beta_{m,n}) x_{m,n+1} \right\|^2 $$
$$= \left( \sum_{(m,n)\in T} (\alpha_{m,n}-\beta_{m,n}) x_{m,n+1},
\sum_{(k,l)\in T} (\alpha_{k,l}-\beta_{k,l}) x_{k,l+1} \right) $$
$$ = \sum_{(m,n),(k,l)\in T} (\alpha_{m,n}-\beta_{m,n}) \overline{(\alpha_{k,l}-\beta_{k,l})}
K((m,n+1),(k,l+1)) $$
$$= \sum_{(m,n),(k,l)\in T} (\alpha_{m,n}-\beta_{m,n}) \overline{(\alpha_{k,l}-\beta_{k,l})}
K((m,n),(k,l)) $$
$$= \left( \sum_{(m,n)\in T} (\alpha_{m,n}-\beta_{m,n}) x_{m,n},
\sum_{(k,l)\in T} (\alpha_{k,l}-\beta_{k,l}) x_{k,l} \right) = 0. $$
Consequently, the operator $B_0$ is defined correctly, as well.
Choose an arbitrary $y = \sum_{(a,b)\in T} \gamma_{a,b} x_{a,b} \in L$. We have
$$ (A_0 x,y) = \sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} (x_{m+1,n},x_{a,b}) =
\sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} K((m+1,n),(a,b)) $$
$$ = \sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} K((m,n),(a+1,b)) =
\sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} (x_{m,n},x_{a+1,b}) = (x,A_0 y). $$
Thus, $A_0$ is a symmetric operator. Its closure we denote by $A$.
On the other hand, we have
$$ (B_0 x,B_0 y) = \sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} (x_{m,n+1},x_{a,b+1}) =
\sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} K((m,n+1),(a,b+1)) $$
$$ = \sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} K((m,n),(a,b)) =
\sum_{m,n,a,b} \alpha_{m,n}\overline{\gamma_{a,b}} (x_{m,n},x_{a,b}) = (x,y). $$
In particular, this means that $B_0$ is bounded. By continuity we extend $B_0$ to a bounded
operator $B$ such that
$$ (Bx,By) = (x,y),\qquad x,y\in H. $$
Since $R(B_0)=L$ and $B_0$ has a bounded inverse, we have $R(B)=H$. Thus, $B$ is a unitary operator in $H$.
Notice that operators $A_0$ and $B_0$ commute. It is straightforward to check that $A$ and $B$ commute:
\begin{equation}
\label{f2_11}
AB x = BA x,\qquad x\in D(A).
\end{equation}
Consider the following operator:
\begin{equation}
\label{f2_12}
J_0 x = \sum_{(m,n)\in T} \overline{\alpha_{m,n}} x_{m,-n},
\end{equation}
where
\begin{equation}
\label{f2_13}
x = \sum_{(m,n)\in T} \alpha_{m,n} x_{m,n} \in L.
\end{equation}
Let us check that this definition is correct. Consider another representation for $x$ as in~(\ref{f2_10}).
Then
$$ \left\| \sum_{(m,n)\in T} (\overline{\alpha_{m,n}} - \overline{\beta_{m,n}}) x_{m,-n} \right\|^2 $$
$$= \left( \sum_{(m,n)\in T} \overline{ (\alpha_{m,n}-\beta_{m,n}) } x_{m,-n},
\sum_{(k,l)\in T} \overline{ (\alpha_{k,l}-\beta_{k,l}) } x_{k,-l} \right) $$
$$ = \sum_{(m,n),(k,l)\in T} \overline{(\alpha_{m,n}-\beta_{m,n})} (\alpha_{k,l}-\beta_{k,l})
K((m,-n),(k,-l)) $$
$$= \overline{ \sum_{(m,n),(k,l)\in T} (\alpha_{m,n}-\beta_{m,n}) \overline{(\alpha_{k,l}-\beta_{k,l})}
K((m,n),(k,l)) } $$
$$= \overline{ \left( \sum_{(m,n)\in T} (\alpha_{m,n}-\beta_{m,n}) x_{m,n},
\sum_{(k,l)\in T} (\alpha_{k,l}-\beta_{k,l}) x_{k,l} \right) } = 0. $$
Thus, the definition of $J_0$ is correct.
For an arbitrary $y = \sum_{(a,b)\in T} \gamma_{a,b} x_{a,b} \in L$ we can write
$$ (J_0 x,J_0 y) = \sum_{m,n,a,b} \overline{\alpha_{m,n}}\gamma_{a,b} (x_{m,-n},x_{a,-b}) =
\sum_{m,n,a,b} \overline{\alpha_{m,n}}\gamma_{a,b} K((m,-n),(a,-b)) $$
$$ = \sum_{m,n,a,b} \overline{\alpha_{m,n}} \gamma_{a,b} K((a,b),(m,n)) =
\sum_{m,n,a,b} \overline{\alpha_{m,n}}\gamma_{a,b} (x_{a,b},x_{m,n}) = (y,x). $$
In particular, this implies that $J_0$ is bounded. By continuity we extend $J_0$ to a bounded antilinear
operator $J$ such that
$$ (Jx,Jy) = (y,x),\qquad x,y\in H. $$
Moreover, we get $J^2 = E_H$. Consequently, $J$ is a conjugation in $H$ (\cite{cit_8000_S}).
\noindent
Notice that $J_0$ commutes with $A_0$. It is easy to check that
\begin{equation}
\label{f2_14}
AJ x = JA x,\qquad x\in D(A).
\end{equation}
On the other hand, we have $J_0 B_0 = B_0^{-1} J_0$. By continuity we get
\begin{equation}
\label{f2_15}
JB = B^{-1}J.
\end{equation}
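In particular, $J x_{m,n} = x_{m,-n}$, $(m,n)\in T$, and relation~(\ref{f2_15}) can be checked directly on the generating vectors:
$$ JB x_{m,n} = J x_{m,n+1} = x_{m,-n-1} = B^{-1} x_{m,-n} = B^{-1} J x_{m,n}. $$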
Consider the Cayley transformation of the operator $A$:
\begin{equation}
\label{f2_16}
V_A := (A+iE_H)(A-iE_H)^{-1},
\end{equation}
and set
\begin{equation}
\label{f2_17}
H_1 := \Delta_A(i),\ H_2 := H\ominus H_1,\ H_3:= \Delta_A(-i),\ H_4 := H\ominus H_3.
\end{equation}
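Here $\Delta_A(z)$ denotes the subspace $(A-zE_H)D(A)$, $z\in \mathbb{C}\backslash \mathbb{R}$ (it is closed, since $A$ is closed and symmetric). Recall that $V_A$ maps $H_1$ isometrically onto $H_3$: for $f\in D(A)$,
$$ V_A (A-iE_H)f = (A+iE_H)f,\qquad \| (A\pm iE_H)f \|^2 = \| Af \|^2 + \| f \|^2, $$
while $H_2 = H\ominus \Delta_A(i) = \mathop{\rm Ker}\nolimits (A^*+iE_H)$ and $H_4 = H\ominus \Delta_A(-i) = \mathop{\rm Ker}\nolimits (A^*-iE_H)$ are the deficiency subspaces of $A$.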
\begin{prop}
\label{p2_1}
The operator $B$ reduces subspaces $H_i$, $1\leq i\leq 4$:
\begin{equation}
\label{f2_18}
BH_i = H_i,\qquad 1\leq i\leq 4.
\end{equation}
Moreover, the following equality holds:
\begin{equation}
\label{f2_19}
BV_Ax = V_ABx,\qquad x\in H_1.
\end{equation}
\end{prop}
{\bf Proof. } Choose an arbitrary $x\in \Delta_A(z)$, $x=(A-zE_H)f_A$, $f_A\in D(A)$,
$z\in \mathbb{C}\backslash \mathbb{R}$.
By~(\ref{f2_11}) we get
$$ Bx = BAf_A - zBf_A = ABf_A - zBf_A = (A-zE_H)Bf_A\in \Delta_A(z). $$
In particular, we have $BH_1\subseteq H_1$, $BH_3\subseteq H_3$.
Notice that $B_0^{-1}A_0 = A_0 B_0^{-1}$. It is a straightforward calculation to check that
\begin{equation}
\label{f2_20}
AB^{-1} x = B^{-1}A x,\qquad x\in D(A).
\end{equation}
Repeating the above argument with $B^{-1}$ instead of $B$ we get
$B^{-1}H_1\subseteq H_1$, $B^{-1}H_3\subseteq H_3$, and therefore
$H_1\subseteq BH_1$, $H_3\subseteq BH_3$. Consequently, the operator $B$ reduces
subspaces $H_1$ and $H_3$. It follows directly that $B$ reduces $H_2$ and $H_4$, as well.
\noindent
Since
$$ (A-iE_H) Bx = B(A-iE_H)x,\qquad x\in D(A), $$
for arbitrary $y\in H_1$, $y = (A-iE_H)x_A$, $x_A\in D(A)$, we have
$$ (A-iE_H) B (A-iE_H)^{-1} y = B y; $$
$$ B (A-iE_H)^{-1} y = (A-iE_H)^{-1} B y,\qquad y\in H_1, $$
and~(\ref{f2_19}) follows.
$\Box$
Our aim here is to construct a unitary operator $U$ in $H$, $U\supset V_A$, which commutes with $B$.
Choose an arbitrary $x\in H$, $x= x_{H_1} + x_{H_2}$. For an operator $U$ of the required type
by~Proposition~\ref{p2_1} we could write:
$$ BU x = BV_Ax_{H_1} + BU x_{H_2} = V_ABx_{H_1} + BU x_{H_2}, $$
$$ UB x = UB x_{H_1} + UB x_{H_2} = V_ABx_{H_1} + UB x_{H_2}. $$
So, it is enough to find an isometric operator $U_{2,4}$ which maps $H_2$ onto $H_4$, and
commutes with $B$:
\begin{equation}
\label{f2_21}
B U_{2,4} x = U_{2,4}B x,\qquad x\in H_2.
\end{equation}
Moreover, all operators $U$ of the required type have the following form:
\begin{equation}
\label{f2_22}
U = V_A \oplus U_{2,4},
\end{equation}
where $U_{2,4}$ is an isometric operator which maps $H_2$ onto $H_4$, and
commutes with $B$.
\noindent
We shall denote the operator $B$ restricted to $H_i$ by $B_{H_i}$, $1\leq i\leq 4$.
Notice that
\begin{equation}
\label{f2_23}
A^* J x= JA^* x,\qquad x\in D(A^*).
\end{equation}
Indeed, for arbitrary $f_A\in D(A)$ and $g_{A^*}\in D(A^*)$ we can write
$$ \overline{ (Af_A,Jg_{A^*}) } = (JAf_A, g_{A^*}) = (AJf_A, g_{A^*}) = (Jf_A, A^*g_{A^*}) $$
$$ = \overline{ (f_A,JA^*g_{A^*}) }, $$
and~(\ref{f2_23}) follows.
\noindent
Choose an arbitrary $x\in H_2$. We have
$$ A^* x = -i x, $$
and therefore
$$ A^* Jx = JA^* x = ix. $$
Thus, we have
$$ JH_2 \subseteq H_4. $$
In a similar manner we get
$$ JH_4 \subseteq H_2, $$
and therefore
\begin{equation}
\label{f2_24}
JH_2 = H_4,\quad JH_4 = H_2.
\end{equation}
By the Godi\v{c}-Lucenko Theorem (\cite{cit_2000_GL},\cite[Theorem 1]{cit_3000_GP}) we have a
representation:
\begin{equation}
\label{f2_25}
B_{H_2} = KL,
\end{equation}
where $K$ and $L$ are some conjugations in $H_2$.
We set
\begin{equation}
\label{f2_26}
U_{2,4} := JK.
\end{equation}
From~(\ref{f2_24}) it follows that $U_{2,4}$ maps isometrically $H_2$ onto $H_4$.
Notice that
\begin{equation}
\label{f2_27}
U_{2,4}^{-1} := KJ.
\end{equation}
Using relation~(\ref{f2_15}) we get
$$ U_{2,4} B_{H_2} U_{2,4}^{-1} x = JK KL KJ x = J LK J x = J B_{H_2}^{-1} J x $$
$$ = JB^{-1}J x = B x = B_{H_4} x,\qquad x\in H_4. $$
Therefore relation~(\ref{f2_21}) is true.
We define an operator $U$ by~(\ref{f2_22}) and define
\begin{equation}
\label{f2_28}
A_U := i(U+E_H)(U-E_H)^{-1} = iE_H + 2i(U-E_H)^{-1}.
\end{equation}
The inverse Cayley transformation $A_U$ is correctly defined, since $1$ is not in the point spectrum of $U$.
Indeed, $V_A$ is the Cayley transformation of a symmetric operator and therefore has no eigenvalue $1$, while
$U_{2,4}$ maps $H_2$ onto $H_4$ and these subspaces have zero intersection, so $U_{2,4}$ has no eigenvalue $1$ either.
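Consequently, the range $R(U-E_H)$ is dense in $H$ and, by the standard properties of the Cayley transformation, $A_U$ is a self-adjoint extension of $A$ in $H$ whose Cayley transformation is $U$.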
Let
\begin{equation}
\label{f2_29}
A_U = \int_\mathbb{R} s dE(s),\quad B = \int_{ [-\pi,\pi) } e^{i\varphi} dF(\varphi),
\end{equation}
where $E(s)$ and $F(\varphi)$ are the spectral measures of $A_U$ and $B$, respectively. These measures are
defined on $\mathfrak{B}(\mathbb{R})$ and $\mathfrak{B}([-\pi,\pi))$, respectively (\cite{cit_9000_BS}).
Since $U$ and $B$ commute, we get that $E(s)$ and $F(\varphi)$ commute, as well.
By an induction argument we have
$$ x_{m,n} = A^m x_{0,n},\qquad m\in \mathbb{Z}_+,\ n\in \mathbb{Z}, $$
and
$$ x_{0,n} = B^n x_{0,0},\qquad n\in \mathbb{Z}. $$
Therefore we have
\begin{equation}
\label{f2_30}
x_{m,n} = A^m B^n x_{0,0},\qquad m\in \mathbb{Z}_+,\ n\in \mathbb{Z}.
\end{equation}
We can write
$$ x_{m,n} = \int_\mathbb{R} s^m dE(s) \int_{ [-\pi,\pi) } e^{in\varphi} dF(\varphi) x_{0,0} =
\int_\Pi s^m e^{in\varphi} d(E\times F) x_{0,0}, $$
where $E\times F$ is the product spectral measure on $\mathfrak{B}(\Pi)$.
Then
\begin{equation}
\label{f2_31}
s_{m,n} = (x_{m,n},x_{0,0})_H = \int_\Pi s^m e^{in\varphi} d((E\times F) x_{0,0}, x_{0,0})_H,\quad
(m,n)\in T.
\end{equation}
The measure $\mu := ((E\times F) x_{0,0}, x_{0,0})_H$ is a non-negative Borel measure on $\Pi$ and
relation~(\ref{f2_31}) shows that $\mu$ is a solution of the Devinatz moment problem.
Thus, we obtained a new proof of the following criterion.
\begin{thm}
\label{t2_2}
Let a Devinatz moment problem~(\ref{f1_1}) be given.
This problem has a solution if and only if conditions~(\ref{f2_1}) hold
for arbitrary complex numbers $\alpha_{m,n}$ such that all but a finite number are zero.
\end{thm}
{\bf Remark. } The original proof of Devinatz used the theory of reproducing kernel Hilbert spaces (RKHS).
In particular, he used properties of the RKHS corresponding to the product of two positive matrices and
the inner structure of the RKHS corresponding to the moment problem. We used an abstract approach based on
the Godi\v{c}-Lucenko Theorem and basic facts from standard operator theory.
\section{Canonical solutions. A set of all solutions.}
Let a moment problem~(\ref{f1_1}) be given. Construct a Hilbert space $H$ and operators
$A,B,J$ as in the previous Section.
Let $\widetilde A\supseteq A$ be a self-adjoint extension of $A$ in a Hilbert space
$\widetilde H\supseteq H$. Let $R_z(\widetilde A)$, $z\in \mathbb{C}\backslash \mathbb{R}$, be the resolvent
function of $\widetilde A$, and $E_{\widetilde A}$ be its spectral measure.
Recall that the function
\begin{equation}
\label{f3_1}
\mathbf{R}_z(A) := P^{\widetilde H}_H R_z(\widetilde A),\qquad z\in \mathbb{C}\backslash \mathbb{R},
\end{equation}
is said to be a generalized resolvent of $A$. The function
\begin{equation}
\label{f3_2}
\mathbf{E}_A (\delta) := P^{\widetilde H}_H E_{\widetilde A} (\delta),\qquad \delta\in \mathfrak{B}(\mathbb{R}),
\end{equation}
is said to be a spectral measure of $A$.
There exists a one-to-one correspondence between generalized resolvents and spectral measures established by
the following relation~\cite{cit_6000_AG}:
\begin{equation}
\label{f3_3}
(\mathbf{R}_z(A) x,y)_H = \int_{\mathbb{R}} \frac{1}{t-z} d(\mathbf{E}_A x,y)_H,\qquad x,y\in H.
\end{equation}
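In particular, if $\widetilde H = H$, i.e. $\widetilde A$ is a self-adjoint extension of $A$ inside $H$, then $\mathbf{R}_z(A) = R_z(\widetilde A)$ is an ordinary resolvent and $\mathbf{E}_A = E_{\widetilde A}$ is an orthogonal spectral measure; spectral measures of this type will correspond to the canonical solutions introduced below.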
We shall reduce the Devinatz moment problem to a problem of finding generalized resolvents of a
certain class.
\begin{thm}
\label{t3_1}
Let a Devinatz moment problem~(\ref{f1_1}) be given and conditions~(\ref{f2_1}) hold.
Consider a Hilbert space $H$ and a sequence
$\{ x_{m,n} \}_{m\in \mathbb{Z}_+, n\in \mathbb{Z}}$, $x_{m,n}\in H$, such that relation~(\ref{f2_6})
holds where $K$ is defined by~(\ref{f2_2}).
Consider operators $A_0$,$B_0$ defined by~(\ref{f2_7}),(\ref{f2_8}) on
$L = \mathop{\rm Lin}\nolimits\{ x_{m,n} \}_{(m,n)\in T}$. Let $A=\overline{A_0}$, $B=\overline{B_0}$.
Let $\mu$ be an arbitrary solution of the moment problem. Then it has the following form:
\begin{equation}
\label{f3_4}
\mu (\delta)= ((\mathbf{E}\times F)(\delta) x_{0,0}, x_{0,0})_H,\qquad \delta\in \mathfrak{B}(\Pi),
\end{equation}
where $F$ is the spectral measure of $B$, $\mathbf{E}$ is a spectral measure of $A$ which commutes with
$F$. By $((\mathbf{E}\times F)(\delta) x_{0,0}, x_{0,0})_H$ we mean the non-negative Borel measure on
$\Pi$ which is obtained by the Lebesgue continuation procedure from the following
non-negative measure on rectangles
\begin{equation}
\label{f3_5}
((\mathbf{E}\times F)(I_x\times I_\varphi) x_{0,0}, x_{0,0})_H :=
( \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, x_{0,0})_H,
\end{equation}
where $I_x\subset \mathbb{R}$, $I_\varphi\subseteq [-\pi,\pi)$ are arbitrary intervals.
\noindent
On the other hand, for an arbitrary spectral measure $\mathbf{E}$ of $A$ which commutes with the
spectral measure $F$ of $B$, relation~(\ref{f3_4}) defines a solution of the moment
problem~(\ref{f1_1}).
\noindent
Moreover, the correspondence between the spectral measures of $A$ which commute with the spectral measure of
$B$ and solutions of the Devinatz moment problem is bijective.
\end{thm}
{\bf Remark. } The measure in~(\ref{f3_5}) is non-negative. Indeed,
for arbitrary intervals $I_x\subset \mathbb{R}$, $I_\varphi\subseteq [-\pi,\pi)$, we can write
$$ \left( \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, x_{0,0} \right)_H =
\left( F(I_\varphi) \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, x_{0,0} \right)_H $$
$$ = \left( \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, F(I_\varphi)
x_{0,0} \right)_H = \left( \widehat E(I_x) F(I_\varphi) x_{0,0}, \widehat E(I_x) F(I_\varphi)
x_{0,0} \right)_{\widehat H} \geq 0, $$
where $\widehat E$ is the spectral function of a self-adjoint extension $\widehat A\supseteq A$ in
a Hilbert space $\widehat H\supseteq H$ such that $\mathbf{E} = P^{\widehat H}_H \widehat E$.
The measure in~(\ref{f3_5}) is additive. If $I_\varphi = I_{1,\varphi}\cup I_{2,\varphi}$,
$I_{1,\varphi}\cap I_{2,\varphi} = \emptyset$, then
$$ \left( \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, x_{0,0} \right)_H =
\left( F( I_{1,\varphi}\cup I_{2,\varphi} )\mathbf{E}(I_x) x_{0,0}, x_{0,0} \right)_H $$
$$ = \left( F(I_{1,\varphi})\mathbf{E}(I_x) x_{0,0}, x_{0,0} \right)_H +
\left( F(I_{2,\varphi})\mathbf{E}(I_x) x_{0,0}, x_{0,0} \right)_H. $$
The case $I_x = I_{1,x}\cup I_{2,x}$ is analogous.
Moreover, repeating the standard arguments~\cite[Chapter 5, Theorem 2, p. 254-255]{cit_9500_KF} we conclude
that the measure in~(\ref{f3_5}) is $\sigma$-additive.
Thus, it possesses the (unique) Lebesgue continuation to a (finite) non-negative Borel measure
on $\Pi$.
{\bf Proof. }
Consider a Hilbert space $H$ and operators $A$,$B$ as in the statement of the Theorem.
Let $F$ be the spectral measure of $B$. Let $\mu$ be an arbitrary solution of the moment problem~(\ref{f1_1}).
Consider the space $L^2_\mu$ of complex functions on $\Pi$ which are square integrable with respect to the
measure $\mu$. The scalar product and the norm are given by
$$ (f,g)_\mu =
\int_\Pi f(x,\varphi) \overline{ g(x,\varphi) } d\mu,\quad
\|f\|_\mu = \left( (f,f)_\mu \right)^{ \frac{1}{2} },\quad f,g\in L^2_\mu. $$
Consider the following operators:
\begin{equation}
\label{f3_6}
A_\mu f(x,\varphi) = xf(x,\varphi),\qquad D(A_\mu) = \{ f\in L^2_\mu:\ xf(x,\varphi)\in L^2_\mu \},
\end{equation}
\begin{equation}
\label{f3_7}
B_\mu f(x,\varphi) = e^{i\varphi} f(x,\varphi),\qquad D(B_\mu) = L^2_\mu.
\end{equation}
The operator $A_\mu$ is self-adjoint and the operator $B_\mu$ is unitary. Moreover, these operators
commute and therefore the spectral measure $E_\mu$ of $A_\mu$ and the spectral measure $F_\mu$ of $B_\mu$
commute, as well.
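Explicitly (a standard fact about multiplication operators), for $\delta_1\in \mathfrak{B}(\mathbb{R})$ and $\delta_2\in \mathfrak{B}([-\pi,\pi))$ we have
$$ E_\mu(\delta_1) f(x,\varphi) = \chi_{\delta_1}(x) f(x,\varphi),\qquad
F_\mu(\delta_2) f(x,\varphi) = \chi_{\delta_2}(\varphi) f(x,\varphi),\qquad f\in L^2_\mu, $$
where $\chi_\delta$ denotes the characteristic function of a set $\delta$.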
\noindent
Let $p(x,\varphi)$ be a (power-trigonometric) polynomial of the form~(\ref{f1_2}) and
$q(x,\varphi)$ be a (power-trigonometric) polynomial of the form~(\ref{f1_2}) with
$\beta_{m,n}\in \mathbb{C}$ instead of $\alpha_{m,n}$.
Then
$$ (p,q)_\mu = \sum_{(m,n)\in T, (k,l)\in T} \alpha_{m,n}\overline{ \beta_{k,l} }
\int_\Pi x^{m+k} e^{i(n-l)\varphi} d\mu $$
$$ = \sum_{(m,n)\in T, (k,l)\in T} \alpha_{m,n}\overline{ \beta_{k,l} } s_{m+k,n-l}. $$
On the other hand, we can write
$$ \left(
\sum_{(m,n)\in T} \alpha_{m,n} x_{m,n}, \sum_{(k,l)\in T} \beta_{k,l} x_{k,l} \right)_H =
\sum_{(m,n)\in T, (k,l)\in T} \alpha_{m,n}\overline{ \beta_{k,l} }
(x_{m,n},x_{k,l})_H $$
$$ = \sum_{(m,n)\in T, (k,l)\in T} \alpha_{m,n}\overline{ \beta_{k,l} } K((m,n),(k,l))
= \sum_{(m,n)\in T, (k,l)\in T} \alpha_{m,n}\overline{ \beta_{k,l} } s_{m+k,n-l}. $$
Therefore
\begin{equation}
\label{f3_8}
(p,q)_\mu = \left(
\sum_{(m,n)\in T} \alpha_{m,n} x_{m,n}, \sum_{(k,l)\in T} \beta_{k,l} x_{k,l} \right)_H.
\end{equation}
Consider the following operator:
\begin{equation}
\label{f3_9}
V[p] = \sum_{(m,n)\in T} \alpha_{m,n} x_{m,n},\quad p=\sum_{(m,n)\in T} \alpha_{m,n} x^m e^{in\varphi}.
\end{equation}
Here by $[p]$ we mean the equivalence class in $L^2_\mu$ defined by $p$. If two different polynomials
$p$ and $q$ belong to the same equivalence class then by~(\ref{f3_8}) we get
$$ 0 = \| p-q \|_\mu^2 = (p-q,p-q)_\mu = \left( \sum_{(m,n)\in T} (\alpha_{m,n}-\beta_{m,n}) x_{m,n},
\sum_{(k,l)\in T} (\alpha_{k,l}-\beta_{k,l}) x_{k,l} \right)_H $$
$$ = \left\| \sum_{(m,n)\in T} \alpha_{m,n} x_{m,n} - \sum_{(m,n)\in T} \beta_{m,n} x_{m,n} \right\|_H^2. $$
Thus, the definition of $V$ is correct. It is not hard to see that $V$ maps the set of polynomials $P^2_{0,\mu}$
in $L^2_\mu$ onto $L$. By continuity we extend $V$ to an isometric transformation from
the closure of polynomials $P^2_\mu = \overline{P^2_{0,\mu}}$ onto $H$.
\noindent
Set $H_0 := L^2_\mu \ominus P^2_\mu$. Introduce the following operator:
\begin{equation}
\label{f3_10}
U := V \oplus E_{H_0},
\end{equation}
which maps isometrically $L^2_\mu$ onto $\widetilde H := H\oplus H_0$.
Set
\begin{equation}
\label{f3_11}
\widetilde A := UA_\mu U^{-1},\quad \widetilde B := UB_\mu U^{-1}.
\end{equation}
Notice that
$$ \widetilde A x_{m,n} = UA_\mu U^{-1} x_{m,n} = UA_\mu x^m e^{in\varphi} = Ux^{m+1} e^{in\varphi} = x_{m+1,n}, $$
$$ \widetilde B x_{m,n} = UB_\mu U^{-1} x_{m,n} = UB_\mu x^m e^{in\varphi} = Ux^{m} e^{i(n+1)\varphi} = x_{m,n+1}. $$
Therefore $\widetilde A\supseteq A$ and $\widetilde B\supseteq B$.
Let
\begin{equation}
\label{f3_12}
\widetilde A = \int_\mathbb{R} s d\widetilde E(s),\quad \widetilde B = \int_{ [-\pi,\pi) } e^{i\varphi}
d \widetilde F(\varphi),
\end{equation}
where $\widetilde E(s)$ and $\widetilde F(\varphi)$ are the spectral measures of $\widetilde A$ and
$\widetilde B$, respectively.
Repeating arguments after relation~(\ref{f2_29}) we obtain that
\begin{equation}
\label{f3_13}
x_{m,n} = \widetilde A^m \widetilde B^n x_{0,0},\qquad m\in \mathbb{Z}_+,\ n\in \mathbb{Z},
\end{equation}
\begin{equation}
\label{f3_14}
s_{m,n} = \int_\Pi s^m e^{in\varphi} d((\widetilde E\times \widetilde F) x_{0,0}, x_{0,0})_{\widetilde H},\quad
(m,n)\in T,
\end{equation}
where $(\widetilde E\times \widetilde F)$ is the product measure of $\widetilde E$ and $\widetilde F$.
Thus, the measure $\widetilde \mu := ((\widetilde E\times \widetilde F) x_{0,0}, x_{0,0})_{\widetilde H}$
is a solution of the Devinatz moment problem.
\noindent
Let $I_x\subset \mathbb{R}$, $I_\varphi\subseteq [-\pi,\pi)$ be arbitrary intervals.
Then
$$ \widetilde \mu (I_x \times I_\varphi) = ((\widetilde E\times \widetilde F) (I_x \times I_\varphi)
x_{0,0}, x_{0,0})_{\widetilde H} $$
$$ = ( \widetilde E(I_x) \widetilde F(I_\varphi) x_{0,0}, x_{0,0})_{\widetilde H} =
( P^{\widetilde H}_H \widetilde E(I_x) \widetilde F(I_\varphi) x_{0,0}, x_{0,0})_{\widetilde H} $$
$$ = ( \mathbf{E}(I_x) F(I_\varphi) x_{0,0}, x_{0,0})_{H}, $$
where $\mathbf{E}$ is the corresponding spectral function of $A$ and $F$ is the spectral function of $B$.
Thus, the measure $\widetilde \mu$ has the form~(\ref{f3_4}) since the Lebesgue continuation
is unique.
\noindent
Let us show that $\widetilde \mu = \mu$.
Consider the following transformation:
\begin{equation}
\label{f3_15}
S:\ (x,\varphi) \in \Pi \mapsto \left( \mathop{\rm Arg }\nolimits \frac{x-i}{x+i}, \varphi \right) \in \Pi_0,
\end{equation}
where $\Pi_0 = [-\pi,\pi) \times [-\pi,\pi)$ and $\mathop{\rm Arg }\nolimits e^{iy} = y\in [-\pi,\pi)$.
By virtue of the transformation $S$ we define the following measures:
\begin{equation}
\label{f3_16}
\mu_0 (SG) := \mu (G),\quad \widetilde\mu_0 (SG) := \widetilde\mu (G),\qquad G\in \mathfrak{B}(\Pi).
\end{equation}
It is not hard to see that $\mu_0$ and $\widetilde\mu_0$ are non-negative measures on
$\mathfrak{B}(\Pi_0)$.
Then
\begin{equation}
\label{f3_17}
\int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi} d\mu =
\int_{\Pi_0} e^{im\psi} e^{in\varphi} d\mu_0,
\end{equation}
\begin{equation}
\label{f3_18}
\int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi} d\widetilde\mu =
\int_{\Pi_0} e^{im\psi} e^{in\varphi} d\widetilde\mu_0,\qquad m,n\in \mathbb{Z};
\end{equation}
and
$$ \int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi} d\widetilde\mu =
\int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi}
d((\widetilde E\times \widetilde F) x_{0,0}, x_{0,0})_{\widetilde H} $$
$$ = \left( \int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi}
d(\widetilde E\times \widetilde F) x_{0,0}, x_{0,0} \right)_{\widetilde H} $$
$$ = \left( \int_\mathbb{R} \left( \frac{x-i}{x+i} \right)^m d\widetilde E
\int_{[-\pi,\pi)} e^{in\varphi} d\widetilde F x_{0,0}, x_{0,0} \right)_{\widetilde H} $$
$$ = \left( \left( (\widetilde A - iE_{\widetilde H})(\widetilde A + iE_{\widetilde H})^{-1} \right)^m
\widetilde B^n x_{0,0}, x_{0,0} \right)_{\widetilde H} $$
$$ = \left( U^{-1}\left( (\widetilde A - iE_{\widetilde H})(\widetilde A + iE_{\widetilde H})^{-1} \right)^m
\widetilde B^n U 1, 1 \right)_\mu $$
$$ = \left( \left( (A_\mu - iE_{L^2_\mu})(A_\mu + iE_{L^2_\mu})^{-1} \right)^m
B_\mu^n 1, 1 \right)_\mu $$
\begin{equation}
\label{f3_19}
= \int_\Pi \left( \frac{x-i}{x+i} \right)^m e^{in\varphi} d\mu,\qquad m,n\in \mathbb{Z}.
\end{equation}
By virtue of relations~(\ref{f3_17}),(\ref{f3_18}) and~(\ref{f3_19}) we get
\begin{equation}
\label{f3_20}
\int_{\Pi_0} e^{im\psi} e^{in\varphi} d\mu_0 =
\int_{\Pi_0} e^{im\psi} e^{in\varphi} d\widetilde\mu_0,\qquad m,n\in \mathbb{Z}.
\end{equation}
By the Weierstrass theorem we can approximate any continuous function by exponentials and therefore
\begin{equation}
\label{f3_21}
\int_{\Pi_0} f(\psi) g(\varphi) d\mu_0 =
\int_{\Pi_0} f(\psi) g(\varphi) d\widetilde\mu_0,
\end{equation}
for arbitrary continuous functions on $\Pi_0$. In particular, we have
\begin{equation}
\label{f3_22}
\int_{\Pi_0} \psi^n \varphi^m d\mu_0 =
\int_{\Pi_0} \psi^n \varphi^m d\widetilde\mu_0,\qquad n,m\in \mathbb{Z}_+.
\end{equation}
However, the two-dimensional Hausdorff moment problem is determinate (\cite{cit_10000_ST}) and therefore we get
$\mu_0 = \widetilde\mu_0$, and therefore $\mu=\widetilde\mu$.
Thus, we have proved that an arbitrary solution $\mu$ of the Devinatz moment problem can be represented
in the form~(\ref{f3_4}).
Let us check the second assertion of the Theorem.
For an arbitrary spectral measure $\mathbf{E}$ of $A$ which commutes with the
spectral measure $F$ of $B$, by relation~(\ref{f3_4}) we define a non-negative Borel measure $\mu$
on $\Pi$. Let us show that the measure $\mu$ is a solution of the moment
problem~(\ref{f1_1}).
\noindent
Let $\widehat A$ be a self-adjoint extension of the operator $A$ in a Hilbert space
$\widehat H\supseteq H$, such that
$$ \mathbf{E} = P^{\widehat H}_H \widehat E, $$
where $\widehat E$ is the spectral measure of $\widehat A$.
By~(\ref{f2_30}) we get
$$ x_{m,n} = A^m B^n x_{0,0} = \widehat A^m B^n x_{0,0} = P^{\widehat H}_H \widehat A^m B^n x_{0,0} $$
$$ = P^{\widehat H}_H \left( \lim_{a\to +\infty} \int_{[-a,a)} x^m d\widehat E \right)
\int_{[-\pi,\pi)} e^{in\varphi} dF x_{0,0}
= \left( \lim_{a\to +\infty} \int_{[-a,a)} x^m d\mathbf{E} \right) $$
$$ \times \int_{[-\pi,\pi)} e^{in\varphi} dF x_{0,0}
= \left( \lim_{a\to +\infty} \left( \int_{[-a,a)} x^m d\mathbf{E} \int_{[-\pi,\pi)} e^{in\varphi} dF \right)
\right) x_{0,0}, $$
\begin{equation}
\label{f3_23}
\qquad m\in \mathbb{Z}_+,\ n\in \mathbb{Z},
\end{equation}
where the limits are understood in the weak operator topology.
Then we choose arbitrary points
$$ -a = x_0 < x_1 < ... < x_{N}=a; $$
\begin{equation}
\label{f3_24}
\max_{1\leq i\leq N}|x_{i}-x_{i-1}| =: d,\quad N\in \mathbb{N};
\end{equation}
$$ -\pi = \varphi_0 < \varphi_1 < ... < \varphi_{M}=\pi; $$
\begin{equation}
\label{f3_25}
\max_{1\leq j\leq M}|\varphi_{j}-\varphi_{j-1}| =: r;\quad M\in \mathbb{N}.
\end{equation}
Set
$$ C_a := \int_{[-a,a)} x^m d\mathbf{E} \int_{[-\pi,\pi)} e^{in\varphi} dF =
\lim_{d\rightarrow 0} \sum_{i=1}^N x_{i-1}^m \mathbf{E}([x_{i-1},x_i)) $$
$$ \times \lim_{r\rightarrow 0} \sum_{j=1}^M e^{in\varphi_{j-1}} F([\varphi_{j-1},\varphi_j)), $$
where the integral sums converge in the strong operator topology. Then
$$ C_a = \lim_{d\rightarrow 0} \lim_{r\rightarrow 0} \sum_{i=1}^N x_{i-1}^m \mathbf{E}([x_{i-1},x_i))
\sum_{j=1}^M e^{in\varphi_{j-1}} F([\varphi_{j-1},\varphi_j)) $$
$$ = \lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\sum_{i=1}^N \sum_{j=1}^M
x_{i-1}^m e^{in\varphi_{j-1}}
\mathbf{E}([x_{i-1},x_i)) F([\varphi_{j-1},\varphi_j)), $$
where the limits are understood in the strong operator topology. Then
$$ (C_a x_{0,0}, x_{0,0})_H =
\left( \lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\sum_{i=1}^N \sum_{j=1}^M
x_{i-1}^m e^{in\varphi_{j-1}}
\mathbf{E}([x_{i-1},x_i)) F([\varphi_{j-1},\varphi_j)) x_{0,0}, x_{0,0} \right)_H $$
$$ = \lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\sum_{i=1}^N \sum_{j=1}^M
x_{i-1}^m e^{in\varphi_{j-1}}
\left( \mathbf{E}([x_{i-1},x_i)) F([\varphi_{j-1},\varphi_j)) x_{0,0}, x_{0,0} \right)_H $$
$$ = \lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\sum_{i=1}^N \sum_{j=1}^M
x_{i-1}^m e^{in\varphi_{j-1}}
\left( (\mathbf{E}\times F) ( [x_{i-1},x_i)\times [\varphi_{j-1},\varphi_j) ) x_{0,0}, x_{0,0} \right)_H $$
$$ = \lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\sum_{i=1}^N \sum_{j=1}^M
x_{i-1}^m e^{in\varphi_{j-1}}
\mu ( [x_{i-1},x_i)\times [\varphi_{j-1},\varphi_j) ). $$
Therefore
$$ (C_a x_{0,0}, x_{0,0})_H =
\lim_{d\rightarrow 0} \lim_{r\rightarrow 0}
\int_{[-a,a)\times[-\pi,\pi)} f_{d,r} (x,\varphi) d\mu, $$
where $f_{d,r}$ is equal to $x_{i-1}^m e^{in\varphi_{j-1}}$ on the rectangle
$[x_{i-1},x_i) \times [\varphi_{j-1},\varphi_j)$, $1\leq i\leq N$, $1\leq j\leq M$.
\noindent
If $r\rightarrow 0$, then the simple function
$f_{d,r}$ converges uniformly to the function $f_d$ which is equal to
$x_{i-1}^m e^{in\varphi}$ on the rectangle
$[x_{i-1},x_i) \times [\varphi_{j-1},\varphi_j)$, $1\leq i\leq N$, $1\leq j\leq M$.
Then
$$ (C_a x_{0,0}, x_{0,0})_H =
\lim_{d\rightarrow 0}
\int_{[-a,a)\times[-\pi,\pi)} f_{d} (x,\varphi) d\mu. $$
If $d\rightarrow 0$, then the function
$f_{d}$ converges uniformly to the function $x^m e^{in\varphi}$. Since
$|f_d|\leq a^m$, by the Lebesgue dominated convergence theorem we get
\begin{equation}
\label{f3_26}
(C_a x_{0,0}, x_{0,0})_H =
\int_{[-a,a)\times[-\pi,\pi)} x^m e^{in\varphi} d\mu.
\end{equation}
By virtue of relations~(\ref{f3_23}) and~(\ref{f3_26}) we get
$$ s_{m,n} = (x_{m,n},x_{0,0})_H = \lim_{a\to +\infty} (C_a x_{0,0},x_{0,0})_H $$
\begin{equation}
\label{f3_27}
= \lim_{a\to+\infty} \int_{[-a,a)\times[-\pi,\pi)} x^m e^{in\varphi} d\mu =
\int_\Pi x^m e^{in\varphi} d\mu.
\end{equation}
Thus, the measure $\mu$ is a solution of the Devinatz moment problem.
Let us prove the last assertion of the Theorem. Suppose to the contrary that two different
spectral measures $\mathbf{E}_1$ and $\mathbf{E}_2$ of $A$ commute with the spectral measure $F$ of
$B$ and produce by relation~(\ref{f3_4}) the same solution $\mu$ of the Devinatz moment problem.
Choose an arbitrary $z\in \mathbb{C}\backslash \mathbb{R}$. Then
$$ \int_\Pi \frac{x^m}{x-z} e^{in\varphi} d\mu =
\int_\Pi \frac{x^m}{x-z} e^{in\varphi}\, d((\mathbf{E}_k\times F) x_{0,0}, x_{0,0})_H $$
\begin{equation}
\label{f3_28}
= \lim_{a\to +\infty}
\int_{[-a,a)\times [-\pi,\pi)}
\frac{x^m}{x-z} e^{in\varphi} d((\mathbf{E}_k\times F) x_{0,0}, x_{0,0})_H,\quad k=1,2.
\end{equation}
Consider arbitrary partitions of the type~(\ref{f3_24}),(\ref{f3_25}). Then
$$ D_a := \int_{[-a,a)\times [-\pi,\pi)}
\frac{x^m}{x-z} e^{in\varphi} d((\mathbf{E}_k\times F) x_{0,0}, x_{0,0})_H $$
$$ = \lim_{d\to 0} \lim_{r\to 0}
\int_{[-a,a)\times [-\pi,\pi)} g_{z;d,r}(x,\varphi)
d((\mathbf{E}_k\times F) x_{0,0}, x_{0,0})_H. $$
Here the function $g_{z;d,r}(x,\varphi)$ is equal to
$\frac{x_{i-1}^m}{x_{i-1}-z} e^{in\varphi_{j-1}}$
on the rectangle
$[x_{i-1},x_i) \times [\varphi_{j-1},\varphi_j)$, $1\leq i\leq N$, $1\leq j\leq M$.
Then
$$ D_a = \lim_{d\to 0} \lim_{r\to 0}
\sum_{i=1}^N \sum_{j=1}^M
\frac{ x_{i-1}^m }{ x_{i-1}-z } e^{in\varphi_{j-1}}
\left( \mathbf{E}_k ([x_{i-1},x_i)) F([\varphi_{j-1},\varphi_j)) x_{0,0}, x_{0,0} \right)_H $$
$$ =
\lim_{d\to 0} \lim_{r\to 0}
\left( \sum_{i=1}^N
\frac{ x_{i-1}^m }{ x_{i-1}-z } \mathbf{E}_k ([x_{i-1},x_i))
\sum_{j=1}^M
e^{in\varphi_{j-1}} F([\varphi_{j-1},\varphi_j)) x_{0,0}, x_{0,0} \right)_H $$
$$ =
\left( \int_{[-a,a)}
\frac{ x^m }{ x-z } d\mathbf{E}_k \int_{[-\pi,\pi)}
e^{in\varphi} dF x_{0,0}, x_{0,0} \right)_H. $$
Let $n = n_1+n_2$, $n_1,n_2\in \mathbb{Z}$. Then we can write:
$$ D_a = \left( B^{n_1} \int_{[-a,a)}
\frac{ x^m }{ x-z } d\mathbf{E}_k B^{n_2} x_{0,0}, x_{0,0} \right)_H $$
$$ =
\left( \int_{[-a,a)}
\frac{ x^m }{ x-z } d\mathbf{E}_k x_{0,n_2}, x_{0,-n_1} \right)_H. $$
By~(\ref{f3_28}) we get
$$ \int_\Pi \frac{x^m}{x-z} e^{in\varphi} d\mu =
\lim_{a\to +\infty} D_a =
\lim_{a\to +\infty}\left( \int_{[-a,a)}
\frac{ x^m }{ x-z } d \widehat{E}_k x_{0,n_2}, x_{0,-n_1} \right)_{\widehat H_k} $$
$$ = \left( \int_\mathbb{R}
\frac{ x^m }{ x-z } d\widehat{E}_k x_{0,n_2}, x_{0,-n_1} \right)_{\widehat H_k}
= \left( \widehat{A}_k^{m_2} R_z(\widehat{A}_k) \widehat{A}_k^{m_1} x_{0,n_2}, x_{0,-n_1} \right)_{\widehat H_k}
$$
\begin{equation}
\label{f3_29}
= \left( R_z(\widehat{A}_k) x_{m_1,n_2}, x_{m_2,-n_1} \right)_H,
\end{equation}
where $m_1,m_2\in \mathbb{Z}_+:\ m_1+m_2 = m$,
and $\widehat A_k$ is a self-adjoint extension of $A$ in a Hilbert space $\widehat H_k\supseteq H$ such that
its spectral measure $\widehat E_k$ generates $\mathbf{E}_k$: $\mathbf{E}_k = P^{\widehat H_k}_H \widehat E_k$;
$k=1,2$.
\noindent
Relation~(\ref{f3_29}) shows that the generalized resolvents corresponding to $\mathbf{E}_k$, $k=1,2$, coincide
(recall that $\mathop{\rm span}\nolimits \{ x_{m,n} \}_{(m,n)\in T} = H$).
That means that the spectral measures $\mathbf{E}_1$ and $\mathbf{E}_2$ coincide. We obtained a contradiction.
This completes the proof.
$\Box$
\begin{dfn}
\label{d3_1}
A solution $\mu$ of the Devinatz moment problem~(\ref{f1_1}) we shall call {\bf canonical}
if it is generated by relation~(\ref{f3_4}) where $\mathbf{E}$ is an {\bf orthogonal}
spectral measure of $A$ which commutes with the spectral measure of $B$. Orthogonal spectral measures
are those measures which are the spectral measures of self-adjoint extensions of $A$ inside $H$.
\end{dfn}
Let a moment problem~(\ref{f1_1}) be given and conditions~(\ref{f2_1}) hold.
Let us describe canonical solutions of the Devinatz moment problem.
In the proof of Theorem~\ref{t2_2} we have constructed one canonical solution, see relation~(\ref{f2_31}).
Let $\mu$ be an arbitrary canonical solution and $\mathbf{E}$ be the corresponding orthogonal spectral
measure of $A$. Let $\widetilde A$ be the self-adjoint operator in $H$ which corresponds to $\mathbf{E}$.
Consider the Cayley transformation of $\widetilde A$:
\begin{equation}
\label{f3_30}
U_{\widetilde A} = (\widetilde A + iE_H)(\widetilde A - iE_H)^{-1} \supseteq V_A,
\end{equation}
where $V_A$ is defined by~(\ref{f2_16}).
Since $\mathbf{E}$ commutes with the spectral measure $F$ of $B$, the operator $U_{\widetilde A}$ commutes
with $B$.
By relation~(\ref{f2_22}) the operator $U_{\widetilde A}$ has the following form:
\begin{equation}
\label{f3_31}
U_{\widetilde A} = V_A \oplus \widetilde U_{2,4},
\end{equation}
where $\widetilde U_{2,4}$ is an isometric operator which maps $H_2$ onto $H_4$, and
commutes with $B$.
Let the operator $U_{2,4}$ be defined by~(\ref{f2_26}). Then the following operator
\begin{equation}
\label{f3_32}
U_2 = U_{2,4}^{-1} \widetilde U_{2,4},
\end{equation}
is a unitary operator in $H_2$ which commutes with $B_{H_2}$.
Denote by $\mathbf{S}(B;H_2)$ a set of all unitary operators in $H_2$ which commute with $B_{H_2}$.
Choose an arbitrary operator $\widehat U_2\in \mathbf{S}(B;H_2)$. Define
$\widehat U_{2,4}$ by the following relation:
\begin{equation}
\label{f3_33}
\widehat U_{2,4} = U_{2,4} \widehat U_2.
\end{equation}
Notice that $\widehat U_{2,4}$ commutes with $B_{H_2}$.
Then we define a unitary operator $U = V_A \oplus \widehat U_{2,4}$ and its inverse Cayley transformation
$\widehat A$; both commute with the operator $B$.
Repeating arguments before~(\ref{f2_31}) we get a canonical solution of the Devinatz moment problem.
\noindent
Thus, all canonical solutions of the Devinatz moment problem are generated by operators
$\widehat U_2\in \mathbf{S}(B;H_2)$. Notice that different operators $U',U''\in \mathbf{S}(B;H_2)$ produce different
orthogonal spectral measures $\mathbf{E}',\mathbf{E}''$. By Theorem~\ref{t3_1},
these spectral measures produce different solutions of the moment problem.
Recall some definitions from~\cite{cit_9000_BS}.
A pair $(Y,\mathfrak{A})$, where $Y$ is an arbitrary set and $\mathfrak{A}$ is a fixed
$\sigma$-algebra of subsets of $Y$ is said to be a {\it measurable space}.
A triple $(Y,\mathfrak{A},\mu)$, where $(Y,\mathfrak{A})$ is a measurable space and
$\mu$ is a measure on $\mathfrak{A}$ is said to be a {\it space with a measure}.
Let $(Y,\mathfrak{A})$ be a measurable space, $\mathbf{H}$ be a Hilbert space and
$\mathcal{P}=\mathcal{P}(\mathbf{H})$ be a set of all orthogonal projectors in $\mathbf{H}$.
A countably additive mapping $E:\ \mathfrak{A}\rightarrow \mathcal{P}$, $E(Y) = E_{\mathbf{H}}$,
is said to be a {\it spectral measure} in $\mathbf{H}$.
A quadruple $(Y,\mathfrak{A},\mathbf{H},E)$ is said to be a {\it space with a spectral measure}.
By $S(Y,E)$ one means a set of all $E$-measurable $E$-a.e. finite complex-valued functions on $Y$.
Let $(Y,\mathfrak{A},\mu)$ be a separable space with a $\sigma$-finite measure, and suppose that
to $\mu$-almost every $y\in Y$ there corresponds a Hilbert space $G(y)$. The function
$N(y) = \dim G(y)$ is called the {\it dimension function};
it is supposed to be $\mu$-measurable. Let $\Omega$ be a set of vector-valued functions $g(y)$ with
values in $G(y)$ which are defined $\mu$-almost everywhere and are measurable with respect to some
base of measurability. The set of (equivalence classes of) such functions with
finite norm
\begin{equation}
\label{f3_34}
\| g \|^2_{\mathcal{H}} = \int |g(y)|^2_{G(y)} d\mu(y) <\infty
\end{equation}
form a Hilbert space $\mathcal{H}$ with the scalar product given by
\begin{equation}
\label{f3_35}
( g_1,g_2 )_{\mathcal{H}} = \int (g_1,g_2)_{G(y)} d\mu(y).
\end{equation}
The space $\mathcal{H}= \mathcal{H}_{\mu,N} = \int_Y \oplus G(y) d\mu(y)$
is said to be a {\it direct integral of Hilbert spaces}.
Consider the following operator
\begin{equation}
\label{f3_36}
\mathbf{X}(\delta) g = \chi_\delta g,\qquad g\in \mathcal{H},\ \delta\in \mathfrak{A},
\end{equation}
where $\chi_\delta$ is the characteristic function of the set $\delta$.
The operator $\mathbf{X}$ is a spectral measure in $\mathcal{H}$.
Let $t(y)$ be a measurable operator-valued function with values in $\mathbf{B}(G(y))$ which is
$\mu$-a.e. defined and $\mu-\sup \|t(y)\|_{G(y)} < \infty$. The operator
\begin{equation}
\label{f3_37}
T:\ g(y) \mapsto t(y)g(y),
\end{equation}
is said to be {\it decomposable}. It is a bounded operator in $\mathcal{H}$ which commutes with
$\mathbf{X}(\delta)$, $\forall\delta\in \mathfrak{A}$.
Moreover, every bounded operator in $\mathcal{H}$ which commutes with
$\mathbf{X}(\delta)$, $\forall\delta\in \mathfrak{A}$, is decomposable~\cite{cit_9000_BS}.
In the case $t(y) = \varphi(y)E_{G(y)}$, where $\varphi\in S(Y,\mu)$, we set $T =: Q_\varphi$.
A decomposable operator is unitary if and only if the operator $t(y)$ is unitary $\mu$-a.e.
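For instance, if $N(y)=1$ $\mu$-a.e., then $\mathcal{H} = L^2(Y,d\mu)$, every decomposable operator is the operator of multiplication by a function $t\in L^\infty(Y,d\mu)$, and such an operator is unitary if and only if $|t(y)|=1$ $\mu$-a.e.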
Return to the study of canonical solutions. Consider the spectral measure
$F_2$ of the operator $B_{H_2}$ in $H_2$.
There exists an element $h\in H_2$ of the maximal type, i.e. the non-negative Borel measure
\begin{equation}
\label{f3_38}
\mu(\delta) := (F_2(\delta)h,h),\qquad \delta\in \mathfrak{B}([-\pi,\pi)),
\end{equation}
has the maximal type among all such measures (generated by other elements of $H_2$). This type
is said to be the {\it spectral type} of the measure $F_2$.
Let $N_2$ be the multiplicity function of the measure $F_2$. Then there exists a unitary transformation $W$
of the space $H_2$ on $\mathcal{H}=\mathcal{H}_{\mu,N_2}$ such that
\begin{equation}
\label{f3_39}
W B_{H_2} W^{-1} = Q_{e^{iy}},\qquad W F_2(\delta) W^{-1} = \mathbf{X}(\delta).
\end{equation}
Notice that $\widehat U_2\in \mathbf{S}(B;H_2)$ if and only if
the operator
\begin{equation}
\label{f3_40}
V_2 := W \widehat U_2 W^{-1},
\end{equation}
is unitary and commutes with $\mathbf{X}(\delta)$, $\forall\delta\in \mathfrak{B}([-\pi,\pi))$.
The latter is equivalent to the condition that $V_2$ is decomposable and the values of the corresponding
operator-valued function $t(y)$ are $\mu$-a.e. unitary operators.
A set of all decomposable operators in $\mathcal{H}$ such that the values of the corresponding
operator-valued function $t(y)$ are $\mu$-a.e. unitary operators we denote by $\mathbf{D}(B;H_2)$.
\begin{thm}
\label{t3_2}
Let a Devinatz moment problem~(\ref{f1_1}) be given. Under the conditions of Theorem~\ref{t3_1} all
canonical solutions of the moment problem have the form~(\ref{f3_4}) where the spectral
measures $\mathbf{E}$ of the operator $A$ are constructed by operators from $\mathbf{D}(B;H_2)$.
Namely, for an arbitrary $V_2\in \mathbf{D}(B;H_2)$ we set $\widehat U_2 = W^{-1} V_2 W$,
$\widehat U_{2,4} = U_{2,4} \widehat U_2$, $U = V_A \oplus \widehat U_{2,4}$,
$\widehat A = i(U+E_H)(U-E_H)^{-1}$, and then $\mathbf{E}$ is the spectral measure of $\widehat A$.
\noindent
Moreover, the correspondence between $\mathbf{D}(B;H_2)$ and a set of all canonical solutions of
the Devinatz moment problem is bijective.
\end{thm}
{\bf Proof. } The proof follows directly from the previous considerations.
$\Box$
Consider a Devinatz moment problem~(\ref{f1_1}) and suppose that conditions~(\ref{f2_1}) hold.
Let us turn to a parameterization of all solutions of the moment problem.
We shall use Theorem~\ref{t3_1}. Consider relation~(\ref{f3_4}). The spectral measure $\mathbf{E}$
commutes with the operator $B$.
Choose an arbitrary $z\in \mathbb{C}\backslash \mathbb{R}$.
By virtue of relation~(\ref{f3_3}) we can write:
$$ (B\mathbf{R}_z(A) x,y)_H = (\mathbf{R}_z(A) x,B^*y)_H =
\int_{\mathbb{R}} \frac{1}{t-z} d(\mathbf{E}(t) x,B^*y)_H $$
\begin{equation}
\label{f3_41}
= \int_{\mathbb{R}} \frac{1}{t-z} d(B\mathbf{E}(t) x,y)_H
= \int_{\mathbb{R}} \frac{1}{t-z} d(\mathbf{E}(t)B x,y)_H,\qquad x,y\in H;
\end{equation}
\begin{equation}
\label{f3_42}
(\mathbf{R}_z(A) Bx,y)_H = \int_{\mathbb{R}} \frac{1}{t-z} d(\mathbf{E}(t) Bx,y)_H,\qquad x,y\in H,
\end{equation}
where $\mathbf{R}_z(A)$ is the generalized resolvent which corresponds to $\mathbf{E}$.
Therefore we get
\begin{equation}
\label{f3_43}
\mathbf{R}_z(A) B = B \mathbf{R}_z(A),\qquad z\in \mathbb{C}\backslash \mathbb{R}.
\end{equation}
On the other hand, if relation~(\ref{f3_43}) holds, then
\begin{equation}
\label{f3_44}
\int_{\mathbb{R}} \frac{1}{t-z} d(\mathbf{E} Bx,y)_H =
\int_{\mathbb{R}} \frac{1}{t-z} d(B\mathbf{E} x,y)_H,\quad x,y\in H,\ z\in \mathbb{C}\backslash \mathbb{R}.
\end{equation}
By the Stieltjes inversion formula~\cite{cit_10000_ST}, we obtain that $\mathbf{E}$ commutes with $B$.
\noindent
We denote by $\mathbf{M}(A,B)$ a set of all generalized resolvents $\mathbf{R}_z(A)$ of $A$ which satisfy
relation~(\ref{f3_43}).
Recall some known facts from~\cite{cit_4000_S} which we shall need here.
Let $K$ be a closed symmetric operator in a Hilbert space $\mathbf{H}$, with the domain $D(K)$,
$\overline{D(K)} = \mathbf{H}$. Set $N_\lambda = N_\lambda(K) = \mathbf{H}
\ominus \Delta_K(\lambda)$, $\lambda\in \mathbb{C}\backslash \mathbb{R}$.
Consider an
arbitrary bounded linear operator $C$, which maps $N_i$ into $N_{-i}$.
For
\begin{equation}
\label{f3_45}
g = f + C\psi - \psi,\qquad f\in D(K),\ \psi\in N_i,
\end{equation}
we set
\begin{equation}
\label{f3_46}
K_C g = Kf + i C \psi + i \psi.
\end{equation}
Since the sum $D(K) + N_i + N_{-i}$ is direct,
this definition is correct.
Notice that $K_C$ is a part of the operator $K^*$.
The operator $K_C$ is said to be a {\it quasiself-adjoint extension of the operator $K$, defined by
the operator $C$}.
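Recall that if $C$ maps $N_i$ isometrically onto $N_{-i}$, then $K_C$ is a self-adjoint extension of $K$ inside $\mathbf{H}$ (the von Neumann theory); for a general contraction $C$ the extension $K_C$ need not be symmetric.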
The following theorem can be found in~\cite[Theorem 7]{cit_4000_S}:
\begin{thm}
\label{t3_3}
Let $K$ be a closed symmetric operator in a Hilbert space $\mathbf{H}$ with the domain $D(K)$,
$\overline{D(K)} = \mathbf{H}$.
All generalized resolvents of the operator $K$ have the following form:
\begin{equation}
\label{f3_47}
\mathbf R_\lambda (K) = \left\{ \begin{array}{cc} (K_{F(\lambda)} - \lambda E_\mathbf{H})^{-1}, &
\mathop{\rm Im}\nolimits\lambda > 0\\
(K_{F^*(\overline{\lambda}) } - \lambda E_\mathbf{H})^{-1}, & \mathop{\rm Im}\nolimits\lambda < 0 \end{array}\right.,
\end{equation}
where $F(\lambda)$ is an operator-valued function, analytic in $\mathbb{C}_+$, whose values are contractions
which map $N_i(K)$ into $N_{-i}(K)$ ($\| F(\lambda) \|\leq 1$),
and $K_{F(\lambda)}$ is the quasiself-adjoint extension of $K$ defined by $F(\lambda)$.
On the other hand, for any operator function $F(\lambda)$ having the above properties there corresponds by
relation~(\ref{f3_47}) a generalized resolvent of $K$.
\end{thm}
Notice that the correspondence between all generalized resolvents and functions $F(\lambda)$ in
Theorem~\ref{t3_3} is bijective~\cite{cit_4000_S}.
Return to the study of the Devinatz moment problem.
Let us describe the set $\mathbf{M}(A,B)$. Choose an arbitrary $\mathbf{R}_\lambda\in \mathbf{M}(A,B)$.
By~(\ref{f3_47}) we get
\begin{equation}
\label{f3_48}
\mathbf{R}_\lambda = (A_{F(\lambda)} - \lambda E_H)^{-1},\qquad \mathop{\rm Im}\nolimits\lambda > 0,
\end{equation}
where $F(\lambda)$ is an operator-valued function, analytic in $\mathbb{C}_+$, whose values are contractions
which map $H_2$ into $H_4$, and
$A_{F(\lambda)}$ is the quasiself-adjoint extension of $A$ defined by $F(\lambda)$.
Then
$$ A_{F(\lambda)} = \mathbf{R}_\lambda^{-1} + \lambda E_H,\qquad \mathop{\rm Im}\nolimits\lambda > 0. $$
By virtue of relation~(\ref{f3_43}) we obtain
\begin{equation}
\label{f3_49}
BA_{F(\lambda)} h = A_{F(\lambda)} B h,\qquad h\in D(A_{F(\lambda)}),\ \lambda\in \mathbb{C}_+.
\end{equation}
Consider the following operators
\begin{equation}
\label{f3_50}
W_{\lambda} := (A_{F(\lambda)} + iE_H)(A_{F(\lambda)} - iE_H)^{-1} =
E_H + 2i(A_{F(\lambda)} - iE_H)^{-1},
\end{equation}
\begin{equation}
\label{f3_51}
V_A = (A +iE_H)(A - iE_H)^{-1} =
E_H + 2i(A - iE_H)^{-1},
\end{equation}
where $\lambda\in \mathbb{C}_+$.
Notice that (\cite{cit_4000_S})
\begin{equation}
\label{f3_52}
W_{\lambda} = V_A \oplus F(\lambda),\qquad \lambda\in \mathbb{C}_+.
\end{equation}
The operator $(A_{F(\lambda)} - iE_H)^{-1}$ is defined
on the whole $H$, see~\cite[p.79]{cit_4000_S}.
By relation~(\ref{f3_49}) we obtain
\begin{equation}
\label{f3_53}
B (A_{F(\lambda)} - iE_H)^{-1} h =
(A_{F(\lambda)} - iE_H)^{-1} B h,\qquad h\in H,\ \lambda\in \mathbb{C}_+.
\end{equation}
Then
\begin{equation}
\label{f3_54}
B W_\lambda = W_\lambda B,\qquad \lambda\in \mathbb{C}_+.
\end{equation}
Recall that by Proposition~\ref{p2_1} the operator $B$ reduces the subspaces $H_j$, $1\leq j\leq 4$,
and $BV_A = V_A B$. If we choose an arbitrary $h\in H_2$ and apply relations~(\ref{f3_54}),(\ref{f3_52}),
we get
\begin{equation}
\label{f3_55}
B F(\lambda) = F(\lambda) B,\qquad \lambda\in \mathbb{C}_+.
\end{equation}
Denote by $\mathbf{F}(A,B)$ the set of all operator-valued functions, analytic in $\mathbb{C}_+$,
whose values are contractions mapping $H_2$ into $H_4$ and which satisfy relation~(\ref{f3_55}).
Thus, for an arbitrary $\mathbf{R}_\lambda\in \mathbf{M}(A,B)$ the corresponding function
$F(\lambda)\in \mathbf{F}(A,B)$. On the other hand, choose an arbitrary $F(\lambda)\in \mathbf{F}(A,B)$.
Then we derive~(\ref{f3_54}) with $W_\lambda$ defined by~(\ref{f3_50}). Then we get~(\ref{f3_53}),(\ref{f3_49})
and therefore
\begin{equation}
\label{f3_56}
B \mathbf{R}_\lambda = \mathbf{R}_\lambda B,\qquad \lambda\in \mathbb{C}_+.
\end{equation}
Taking the adjoint operators of both sides of the last equality we conclude that this
relation holds for all $\lambda\in \mathbb{C}\backslash \mathbb{R}$.
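Indeed, $(\mathbf{R}_\lambda)^* = \mathbf{R}_{\overline{\lambda}}$ and $B^* = B^{-1}$, so taking the adjoints in~(\ref{f3_56}) gives $\mathbf{R}_{\overline{\lambda}} B^{-1} = B^{-1} \mathbf{R}_{\overline{\lambda}}$, i.e. $B\mathbf{R}_{\overline{\lambda}} = \mathbf{R}_{\overline{\lambda}} B$, $\lambda\in \mathbb{C}_+$.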
\noindent
Consider the spectral measure $F_2$ of the operator $B_{H_2}$ in $H_2$. We have obtained
relation~(\ref{f3_39}) which we shall use one more time.
Notice that $F(\lambda)\in \mathbf{F}(A,B)$ if and only if
the operator-valued function
\begin{equation}
\label{f3_57}
G(\lambda) := W U_{2,4}^{-1} F(\lambda) W^{-1},\qquad \lambda\in \mathbb{C}_+,
\end{equation}
is analytic in $\mathbb{C}_+$ and its values are
contractions in $\mathcal{H}$ which commute with $\mathbf{X}(\delta)$, $\forall\delta\in \mathfrak{B}([-\pi,\pi))$.
This means that for an arbitrary $\lambda\in \mathbb{C}_+$ the operator
$G(\lambda)$ is decomposable and the values of the corresponding
operator-valued function $t(y)$ are $\mu$-a.e. contractions.
A set of all decomposable operators in $\mathcal{H}$ such that the values of the corresponding
operator-valued function $t(y)$ are $\mu$-a.e. contractions we denote by $\mathrm{T}(B;H_2)$.
A set of all analytic in $\mathbb{C}_+$ operator-valued functions $G(\lambda)$ with values
in $\mathrm{T}(B;H_2)$ we denote by $\mathbf{G}(A,B)$.
\begin{thm}
\label{t3_4}
Let a Devinatz moment problem~(\ref{f1_1}) be given. Under the conditions of Theorem~\ref{t3_1} all
solutions of the moment problem have the form~(\ref{f3_4}) where the spectral
measures $\mathbf{E}$ of the operator $A$ are defined by the corresponding generalized
resolvents $\mathbf{R}_\lambda$ which are constructed by the following relation:
\begin{equation}
\label{f3_58}
\mathbf{R}_\lambda = (A_{F(\lambda)} - \lambda E_H)^{-1},\qquad \mathop{\rm Im}\nolimits\lambda > 0,
\end{equation}
where $F(\lambda) = U_{2,4} W^{-1} G(\lambda) W$, $G(\lambda)\in \mathbf{G}(A,B)$.
\noindent
Moreover, the correspondence between $\mathbf{G}(A,B)$ and a set of all solutions of
the Devinatz moment problem is bijective.
\end{thm}
{\bf Proof. } The proof follows from the previous considerations.
$\Box$
Consider an arbitrary non-negative Borel measure $\mu$ in the strip $\Pi$ which has all finite
moments~(\ref{f1_1}). What can be said about the density of power-trigonometric
polynomials~(\ref{f1_2}) in the corresponding space $L^2_\mu$?
The measure $\mu$ is a solution of the corresponding moment problem~(\ref{f1_1}).
Thus, $\mu$ admits a representation~(\ref{f3_4})
where $F$ is the spectral measure of $B$ and $\mathbf{E}$ is a spectral measure of $A$ which commutes with
$F$ (the operators $A$ and $B$ in a Hilbert space $H$ are defined as above).
Suppose that (power-trigonometric) polynomials are dense in $L^2_\mu$.
Repeating arguments from the beginning of the Proof of Theorem~\ref{t3_1} we see that
in our case $H_0 = \{ 0 \}$ and $\widetilde A$, $\widetilde B$ are operators in $H$.
Moreover, we have $\mu = ((\widetilde E\times \widetilde F) x_{0,0}, x_{0,0})_{H}$,
where $\widetilde E$ is the spectral measure of $\widetilde A$, $\widetilde F = F$.
Consequently, $\mu$ is a canonical solution of the Devinatz moment problem.
\noindent
The converse assertion is more complicated and will be studied elsewhere.
Sergey M. Zagorodnyuk
School of Mathematics and Mechanics
Karazin Kharkiv National University
Kharkiv, 61077
Ukraine
\begin{center}
\bf
Devinatz's moment problem: a description of all solutions.
\end{center}
\begin{center}
\bf
S.M. Zagorodnyuk
\end{center}
In this paper we study Devinatz's moment problem:
to find a non-negative Borel measure $\mu$ in a strip
$\Pi = \{ (x,\varphi):\ x\in \mathbb{R},\ -\pi\leq \varphi < \pi \},$
such that $\int_\Pi x^m e^{in\varphi} d\mu = s_{m,n}$, $m\in \mathbb{Z}_+$, $n\in \mathbb{Z}$,
where $\{ s_{m,n} \}_{m\in \mathbb{Z}_+, n\in \mathbb{Z}}$ is a given sequence of complex numbers.
We present a new proof of the Devinatz solvability criterion for this moment problem.
We obtain a parameterization of all solutions of Devinatz's moment problem.
We use an abstract operator approach and results of Godi\v{c}, Lucenko and
Shtraus.
Key words: moment problem, measure, generalized resolvent.
MSC 2000: 44A60, 30E05.
}
\end{document}
\begin{document}
\title{PL-$k$NN: A Parameterless Nearest Neighbors Classifier}
\author{\IEEEauthorblockN{Danilo Samuel Jodas}
\IEEEauthorblockA{Department of Computing\\
S\~ao Paulo State University, Brazil\\
danilojodas@gmail.com}
\and
\IEEEauthorblockN{Leandro Aparecido Passos, Ahsan Adeel}
\IEEEauthorblockA{CMI Lab\\School of Engineering and Informatics\\
University of Wolverhampton, UK\\
l.passosjunior@wlv.ac.uk, ahsan.adeel@deepci.org }
\and
\IEEEauthorblockN{Jo\~ao Paulo Papa}
\IEEEauthorblockA{Department of Computing\\
S\~ao Paulo State University, Brazil\\
joao.papa@unesp.br}
}
\maketitle
\begin{abstract}
Machine learning models with a minimal parameter setup are desirable because they avoid time-consuming optimization processes. The $k$-Nearest Neighbors is one of the most effective and straightforward models employed in numerous problems. Despite its well-known performance, it requires setting the value of $k$ for each specific data distribution, thus demanding expensive computational effort. This paper proposes a $k$-Nearest Neighbors classifier that bypasses the need to define the value of $k$. The model computes the $k$ value adaptively considering the data distribution of the training set. We compared the proposed model against the standard $k$-Nearest Neighbors classifier and two parameterless versions from the literature. Experiments over 11 public datasets confirm the robustness of the proposed approach, since the obtained results were similar to or even better than those of its counterpart versions.
\end{abstract}
\begin{IEEEkeywords}
Machine Learning, $k$-Nearest Neighbors, Classification, Clustering.
\end{IEEEkeywords}
\blfootnote{978-1-6654-9578-3/22/\$31.00 \copyright 2022 IEEE}
\section{Introduction}
\label{s.introduction}
Data classification is one of the most popular tasks in machine learning. The paradigm comprises a set of labeled samples used to train a specific model, i.e., to learn intrinsic characteristics from data, for further classification of unlabeled instances. In this context, one can refer to traditional methods such as the Support Vector Machines~\cite{menezes2019width} and Artificial Neural Networks~\cite{mohammed2020defective}, as well as graph-based models, such as the Optimum-Path Forest~\cite{PapaIJIST09} and the $k$-Nearest Neighbors ($k$-NN)~\cite{Bentley:75}.
$k$-NN is a method used for both classification~\cite{ayyad2019gene} and regression~\cite{luken2019preliminary} purposes. It gained considerable popularity in the last three decades due to its competitive results in a wide variety of domains, ranging from medicine~\cite{zhong2016predict} to engineering~\cite{farshad2012accurate}, and sentiment analysis~\cite{murugappan2011human}. Although efficient and straightforward, $k$-NN is sensitive to the proper selection of the $k$ value, which may be a burdensome task for the user. A similar problem is faced by most machine learning methods and has commonly been addressed through metaheuristic optimization algorithms~\cite{passosASOC:19}. Regarding $k$-NN, Wicaksono and Supianto~\cite{wicaksono2018hyper} recently employed such approaches to model the problem of selecting an appropriate $k$. Ling et al.~\cite{ling2015knn} proposed a distance-based strategy to choose the best $k$ by considering a region centered at each instance of the test set. On the other hand, Zhang and Song~\cite{zhang2014knn} proposed a neural network-based model to predict the best $k$ by considering a set of features extracted from each sampled dataset. Besides, Singh et al.~\cite{singh2010pager} proposed a parameterless version of the $k$-NN algorithm for regression purposes. Similar work presented by Desai et al.~\cite{desai2010gear} also claims a parameterless version of $k$-NN for regression purposes. However, the model requires four extra hyperparameters. Ayyad et al.~\cite{ayyad2019gene} proposed a modified version of the $k$-NN classifier for gene expression cancer classification. The proposed model defines a circle with radius $r$ centered at the test sample under prediction, where $r$ is computed by two versions of their Modified $k$-NN (MKNN) model: Smallest MKNN (SMKNN) and Largest MKNN (LMKNN), which measure the minimum and maximum distance between the test sample and each class centroid of the training set, respectively. Although effective in the context of gene data analysis, the method may still suffer from a high number of neighbors in cases where the class centroids are distant from each other.
This paper proposes the Parameterless $k$-Nearest Neighbors (PL-$k$NN) classifier, a $k$-Nearest Neighbors variant that avoids the selection of a proper value of $k$ by introducing a mechanism that automatically chooses the number of neighbors matching the data distribution. The proposed model is similar to the SMKNN presented by Ayyad et al.~\cite{ayyad2019gene}; however, we suggest the following two improvements:
\begin{itemize}
\item To use the median sample instead of the mean sample as the class centroid;
\item To use a semicircle whose radius is defined as the distance of the test sample to the nearest class centroid.
\end{itemize}
\noindent Regarding the second contribution, we want to find the training samples assumed to be as close as possible to the nearest centroid of the test sample under prediction. This approach is effective when training samples of different classes are intermixed. In that case, the proposed model defines a semicircle enclosing only the samples closest to the cluster assumed to be the class of the test instance.
The remainder of the paper is organized as follows: Section II introduces the proposed PL-$k$NN model. Afterward, Sections III and IV present the methodology and the experimental results, respectively. Finally, Section V states conclusions and future work.
\section{Proposed method}
\label{s.propsoed_method}
Let ${\cal Y} = \{\omega_1, \omega_2, \omega_3,\dots,\omega_n\}$ be the set of classes of the dataset, where $\omega_i$ represents the $i^{th}$ class. Also, let ${\cal X} = {\{\bm{x}_{1}, \bm{x}_{2}, \bm{x}_{3},\dots,\bm{x}_{m}\}}$ be the set of samples of the dataset, each represented by a feature vector $\bm{x}_{j} \in R^{d}$. In the supervised classification approach, each sample $\bm{x}_j$ is assigned to a class $y_j\in{\cal Y}$ such that the pair $(\bm{x}_{j}, y_{j})$ is used in the subsequent training and testing of the classifier. Formally speaking, this step involves partitioning the samples such that ${\cal X} = {\cal X}_{1}\cup{\cal X}_{2}$ and ${\cal X}_{1}\cap{\cal X}_{2} = \emptyset$, where ${\cal X}_{1}$ and ${\cal X}_{2}$ denote the training and testing sets, respectively.
In the proposed method, the training set is split into $n$ clusters, one for each class $\omega_i \in {\cal Y}$. Further, the number of nearest neighbors $k$ of a target sample $\bm{x}_j$ is adaptively defined according to its distance to all training samples lying within the radius given by the distance to the nearest cluster centroid. Let ${\cal C} = \{\bm{c}_1,\bm{c}_2,\ldots,\bm{c}_n\}$ be the set of centroids such that $\bm{c}_i$ denotes the centroid of the $i^{th}$ cluster, which contains all samples from ${\cal X}_{1}$ that belong to class $\omega_i$. The training stage of the proposed model is summarized as follows:
\begin{enumerate}
\item Cluster the dataset into $n$ clusters;
\item For each sample $\bm{x}_{i}\in {\cal X}_{1}$, assign it to the $y_i^{th}$ cluster;
\item For each $y_i^{th}$ cluster, compute its centroid $\bm{c}_{y_i}$ as the median sample;
\item Compute the weight of each training sample $\bm{x}_{i}$ regarding its centroid $\bm{c}_{y_i}$ using $W(\bm{x}_{i},\bm{c}_{y_i}) = D_{E}(\bm{x}_{i},\bm{c}_{y_i})^{-1}$, where $D_{E}(\bm{x}_{i},\bm{c}_{y_i})$ denotes the Euclidean distance between sample $\bm{x}_{i}$ and the centroid $\bm{c}_{y_i}$;
\item Repeat steps 2-4 for all samples in ${\cal X}_1$.
\end{enumerate}
\noindent The training stage is similar to that of SMKNN, except for the class centroid computation, which in the SMKNN variant is the average of the training samples' features.
The PL-$k$NN training stage consists of finding the cluster centroids and the distance weights of all training samples. The distance is computed between each training sample $\bm{x}_{i} \in {\cal X}_{1}$ and the centroid $\bm{c}_{y_i}$ of its cluster. Step 1 splits the dataset into $n$ clusters according to the number of classes in ${\cal Y}$. In Step 2, each training sample $\bm{x}_{i}$ is assigned to the cluster of its class $y_{i}$. Step 3 finds the sample assumed to be the center of each cluster by computing its median feature vector. This approach is more effective than using the average for two reasons: i) the average is not an actual instance of the cluster, even though it is assumed to lie close to its center, and ii) the average is more sensitive to outliers, so the resulting value may drift away from the center of the data distribution. In contrast, the median lies in the middle of the data distribution, which reduces the effect of instances distant from the dense region of the cluster. Step 4 assigns a weight to every instance inside the cluster: the more distant from the centroid, the smaller the weight of the training sample. Samples distant from their cluster will therefore have less impact as neighbors of a test sample.
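For concreteness, the training stage can be sketched as follows. This is only an illustrative NumPy-based sketch (function and variable names are ours, not the released implementation), and it reads the ``median sample'' as the feature-wise median of each cluster.
\begin{verbatim}
import numpy as np

def train_plknn(X_train, y_train):
    # Steps 1-2: group the training samples by class (one cluster per class)
    classes = np.unique(y_train)
    centroids = {}
    weights = np.zeros(len(X_train))
    for c in classes:
        idx = np.where(y_train == c)[0]
        # Step 3: centroid taken as the feature-wise median of the cluster
        centroids[c] = np.median(X_train[idx], axis=0)
        # Step 4: weight = inverse Euclidean distance to the class centroid
        dist = np.linalg.norm(X_train[idx] - centroids[c], axis=1)
        weights[idx] = 1.0 / np.maximum(dist, 1e-12)
    return centroids, weights
\end{verbatim}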
The following steps are performed for each testing sample $\bm{s}$:
\begin{enumerate}
\item Calculate the Manhattan distances $D_{M}(\bm{s}, \bm{c}_i)$ between $\bm{s}$ and all cluster's centroids;
\item Get the centroid $\bm{c}^\star$ with the smallest distance;
\item Define a circle with radius $D_{M}(\bm{s}, \bm{c}^\star)$ around the test sample $\bm{s}$;
\item Compute, for each training sample, the angle $\theta_i$ between the vectors connecting $\bm{s}$ to the sample and $\bm{s}$ to $\bm{c}^\star$;
\item Let ${\cal T}=\{\bm{t}_1,\bm{t}_2,\ldots,\bm{t}_z\}$ be the set of all training samples $\bm{t}_i$ inside the circle with $-90^\circ \le \theta_i \le +90^\circ$ (dark gray area inside the circle in Figure~\ref{fig.semicircle}). The idea is to pick only the samples inside the semicircle formed between $\bm{s}$ and the cluster centroid $\bm{c}^\star$;
\item Determine the final class of $\bm{s}$ as the class with the highest weighted vote among all training samples $\bm{t}_i\in{\cal T}$ selected in the previous step (see Equations~\ref{eq.prediction} and~\ref{eq.final_class} below).
\end{enumerate}
We want to find the nearest neighbors of the test sample $\bm{s}$ that significantly impact the prediction of its final class. This approach performs similarly to SMKNN, except for the step that chooses the instances that fall inside the circle enclosing $\bm{s}$. Figure~\ref{fig.semicircle} depicts the aforementioned idea.
\begin{figure}
\caption{Proposed approach to select the samples inside the semicircle that surrounds the test sample $\bm{s}$.}
\label{fig.semicircle}
\end{figure}
As illustrated in Figure~\ref{fig.semicircle}, instead of picking all samples inside the circle, we want to find the ones assumed to be as close as possible to the cluster of the centroid $\bm{c}^\star$. Furthermore, since the Euclidean distance is sensitive to high-dimensional spaces, we use the Manhattan distance, which avoids amplifying features with large differences in such scenarios.
Step 6 of the algorithm above regards the final prediction of the test sample $\bm{s}$ according to the following equations:
\begin{equation}
\label{eq.prediction}
p_i(\bm{s}) = \frac{\displaystyle\sum_{\bm{t}\in{\cal T}:\,\lambda(\bm{t})=\omega_i}{D_{M}(\bm{t},\bm{s})}^{-1}\, W(\bm{t},\bm{c}_i)}{\displaystyle\sum_{k=1}^{n}\;\sum_{\bm{t}\in{\cal T}:\,\lambda(\bm{t})=\omega_k}{D_{M}(\bm{t},\bm{s})}^{-1}\, W(\bm{t},\bm{c}_k)},\quad i=1,2,\ldots,n,
\end{equation}
\begin{center}
and
\end{center}
\begin{equation}
\label{eq.final_class}
y_s = \argmax_{i\in\{1,2,\ldots,n\}} \; p_i(\bm{s}),
\end{equation}
\noindent where $y_s$ is the predicted class of $\bm{s}$, $\lambda(\bm{t})$ outputs the true label of sample $\bm{t}$, and $p_i(\bm{s})$ stands for the probability of sample $\bm{s}$ belonging to class $\omega_i$. We want to penalize the neighboring samples of $\bm{s}$ that are farthest from their cluster centroid. Those samples will have less impact on the final prediction of $\bm{s}$, since they are distant from their correct class group and thus represent possible outliers.
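A corresponding sketch of the prediction steps, continuing the training sketch above, is given below. The semicircle test of Step 5 is implemented through the sign of the dot product between the vectors $\bm{t}-\bm{s}$ and $\bm{c}^\star-\bm{s}$, which is our reading of the $\pm 90^\circ$ condition; all names are illustrative.
\begin{verbatim}
def predict_plknn(s, X_train, y_train, centroids, weights):
    # Steps 1-2: Manhattan distances to the class centroids; keep the nearest
    d_cent = {c: np.abs(s - centroids[c]).sum() for c in centroids}
    c_star = min(d_cent, key=d_cent.get)
    radius = d_cent[c_star]           # Step 3: circle radius around s
    v_ref = centroids[c_star] - s     # Steps 4-5: reference direction
    scores = {c: 0.0 for c in centroids}
    for t, label, w in zip(X_train, y_train, weights):
        d_ts = np.abs(s - t).sum()
        if d_ts == 0 or d_ts > radius:
            continue                  # outside the circle (or s itself)
        if np.dot(t - s, v_ref) < 0:
            continue                  # outside the semicircle facing c_star
        # Step 6: inverse Manhattan distance times the training weight
        scores[label] += (1.0 / d_ts) * w
    total = sum(scores.values())
    if total == 0:                    # no neighbor selected: nearest centroid
        return c_star
    return max(scores, key=scores.get)
\end{verbatim}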
\section{Methodology}
\label{s.methodology}
This section presents the datasets used in this study and the setup of the experiments.
\subsection{Datasets}
The experiments were performed over 11 public datasets from the UCI Machine Learning repository\footnote{\url{https://archive.ics.uci.edu/ml/index.php}}. The datasets include binary and multiclass labels, variation in the number of features, and numerical features only. The latter criterion avoids the need to encode categorical features. The description of the datasets is presented in Table \ref{tab.datasets}.
\begin{table}[!ht]
\centering
\caption{DESCRIPTION OF THE DATASETS USED IN THE EXPERIMENTS.}
\label{tab.datasets}
\begin{tabular}{|l|r|r|r|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\\\textbf{ Dataset }\end{tabular}} & \multicolumn{3}{c|}{\textbf{ Description }} \\
\cline{2-4}
& \multicolumn{1}{l|}{\textbf{ Samples }} & \multicolumn{1}{l|}{\textbf{ Features }} & \multicolumn{1}{l|}{\textbf{ Classes }} \\
\hline
Blood Transfusion (BT) & 748 & 4 & 2 \\
\hline
Breast Cancer Diagnostic (BCD) & 569 & 30 & 2 \\
\hline
Breast Cancer Original (BCO) & 699 & 10 & 2 \\
\hline
Forest Type (FT) & 523 & 27 & 4 \\
\hline
HCV data (HCV) & 615 & 13 & 5 \\
\hline
Indian Liver (IL) & 583 & 10 & 2 \\
\hline
Mammographic Mass (MM) & 961 & 6 & 2 \\
\hline
Somerville Happiness (SH) & 143 & 6 & 2 \\
\hline
SPECT Heart (SPTH) & 267 & 44 & 2 \\
\hline
Urban Land Cover (ULC) & 168 & 148 & 9 \\
\hline
Wine (WN) & 178 & 13 & 3 \\
\hline
\end{tabular}
\end{table}
\subsection{Experimental Setup}
The PL-$k$NN model\footnote{Source code available at \url{https://github.com/danilojodas/PL-kNN.git}} and all baselines were implemented using Python 3.6. We relied on Algorithm 1 presented in Ayyad et al.~\cite{ayyad2019gene} to develop the source code of SMKNN and LMKNN. The datasets were divided into 20 folds, each comprising training, validation, and testing sets with proportions of 70\%, 15\%, and 15\%, respectively. Apart from enabling the statistical analysis between the baselines' and the proposed classifier's results, this splitting strategy also allows optimizing the $k$ value employed by the standard $k$-NN classifier using the training and validation sets. The best $k$ was searched in the range between 1 and 50 to find the value that maximizes the accuracy over the validation set. We used the average accuracy and F1-Score to assess the PL-$k$NN effectiveness. Finally, the Wilcoxon signed-rank test~\cite{Wilcoxon45} at a $5\%$ significance level was employed to evaluate the statistical similarity between PL-$k$NN and the baselines over each dataset. Besides, a post hoc analysis was also conducted using the Nemenyi test~\cite{Nemenyi63} with $\alpha=0.05$ to expose the critical difference (CD) among all techniques.
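As an illustration, the $k$ tuning of the baseline $k$-NN can be sketched as below; the use of scikit-learn here is our assumption, as the paper only states that Python 3.6 was used.
\begin{verbatim}
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def select_best_k(X_train, y_train, X_val, y_val, k_max=50):
    # Pick the k in [1, k_max] that maximizes validation accuracy
    best_k, best_acc = 1, -1.0
    for k in range(1, k_max + 1):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        acc = accuracy_score(y_val, knn.predict(X_val))
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
\end{verbatim}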
\section{Experiments}
\label{s.experiments}
This section compares PL-$k$NN with $k$-NN, SMKNN, and LMKNN. Although originally intended for gene classification, SMKNN and LMKNN are easily adaptable to other contexts due to their cluster analysis, which is intrinsic to any data distribution. Moreover, both techniques work similarly to PL-$k$NN, particularly the SMKNN variant, which is the basis of our model, thus constituting a reasonable comparison on public datasets.
For the sake of comparison, the best $k$ value for the $k$-NN model was configured using the optimization step described in Section~\ref{s.methodology}. Notice that the most accurate average F1-Score is in bold, while results that are statistically similar according to the Wilcoxon signed-rank test at a $5\%$ significance level are underlined. The F1-Score was preferred to assess the models' effectiveness because of the imbalanced class distribution of some datasets used in the experiments, such as Forest Type, HCV data, and SPECT Heart.
Table II presents the average results, in which the proposed model obtained the best F1-Score in seven out of eleven datasets. The proposed model showed similar or higher F1-Score values for the Blood Transfusion, Forest Type, HCV, Indian Liver, Mammographic, Somerville Happiness, SPECT Heart, Urban Land Cover, and Wine datasets. Even when the proposed model showed inferior results, the average metrics were almost equivalent, as observed in the Forest Type and Mammographic Mass datasets. Furthermore, the proposed method surpassed the effectiveness of SMKNN and LMKNN in cases where $k$-NN showed the best F1-Score and the results were statistically different; notice such behavior in the Breast Cancer Diagnostic and Breast Cancer Original datasets. Besides, it is worth noting the low performance of LMKNN over some datasets, such as Forest Type, HCV, Indian Liver, and Urban Land Cover, which is probably related to its mechanism of assuming a larger number of neighbors when computing the radius of the circle.
\begin{table}
\centering
\caption{AVERAGE RESULTS OBTAINED BY EACH CLASSIFIER.}
\begin{tabular}{|l|l|l|l|}
\hline
\multirow{2}{*}{\textbf{ Dataset }} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{ Measure }} \\
\cline{3-4}
& & \multicolumn{1}{c|}{\textbf{ Accuracy }} & \multicolumn{1}{c|}{\textbf{ F1-Score }} \\
\hline
\multirow{4}{*}{BT} & $k$-NN & 0.7844 ± 0.0270 & 0.3806 ± 0.0934 \\
\cline{2-4}
 & LMKNN & 0.7594 ± 0.0250 & 0.3192 ± 0.0786 \\
\cline{2-4}
 & SMKNN & 0.7585 ± 0.0319 & \uline{0.4217 ± 0.0944} \\
\cline{2-4}
 & PL-$k$NN & 0.7121 ± 0.0499 & \textbf{0.4439 ± 0.0848} \\
\hline
\multirow{4}{*}{BCD} & $k$-NN & 0.9553 ± 0.0254 & \textbf{0.9373 ± 0.0371} \\
\cline{2-4}
 & LMKNN & 0.9076 ± 0.0221 & 0.8595 ± 0.0376 \\
\cline{2-4}
 & SMKNN & 0.9247 ± 0.0212 & 0.8898 ± 0.0338 \\
\cline{2-4}
 & PL-$k$NN & 0.9406 ± 0.0176 & 0.9153 ± 0.0266 \\
\hline
\multirow{4}{*}{BCO} & $k$-NN & 0.9648 ± 0.0195 & \textbf{0.9479 ± 0.0300} \\
\cline{2-4}
 & LMKNN & 0.7743 ± 0.0278 & 0.5073 ± 0.0896 \\
\cline{2-4}
 & SMKNN & 0.9481 ± 0.0196 & 0.9208 ± 0.0306 \\
\cline{2-4}
 & PL-$k$NN & 0.9510 ± 0.0198 & 0.9255 ± 0.0310 \\
\hline
\multirow{4}{*}{FT} & $k$-NN & 0.8692 ± 0.0554 & \textbf{0.8616 ± 0.0614} \\
\cline{2-4}
 & LMKNN & 0.5256 ± 0.0290 & 0.3014 ± 0.0219 \\
\cline{2-4}
 & SMKNN & 0.8513 ± 0.0437 & 0.8397 ± 0.0557 \\
\cline{2-4}
 & PL-$k$NN & 0.8538 ± 0.0424 & \uline{0.8494 ± 0.0494} \\
\hline
\multirow{4}{*}{HCV} & $k$-NN & 0.9136 ± 0.0170 & \uline{0.4839 ± 0.1036} \\
\cline{2-4}
 & LMKNN & 0.8696 ± 0.0000 & 0.1860 ± 0.0000 \\
\cline{2-4}
 & SMKNN & 0.9071 ± 0.0199 & \uline{0.4843 ± 0.1245} \\
\cline{2-4}
 & PL-$k$NN & 0.9092 ± 0.0147 & \textbf{0.4985 ± 0.1197} \\
\hline
\multirow{4}{*}{IL} & $k$-NN & 0.6897 ± 0.0378 & 0.1958 ± 0.1020 \\
\cline{2-4}
 & LMKNN & 0.7126 ± 0.0178 & 0.0723 ± 0.0643 \\
\cline{2-4}
 & SMKNN & 0.7121 ± 0.0361 & 0.2558 ± 0.0886 \\
\cline{2-4}
 & PL-$k$NN & 0.7121 ± 0.0294 & \textbf{0.3336 ± 0.0635} \\
\hline
\multirow{4}{*}{MM} & $k$-NN & 0.8045 ± 0.0330 & \textbf{0.7855 ± 0.0380} \\
\cline{2-4}
 & LMKNN & 0.7740 ± 0.0276 & 0.7627 ± 0.0276 \\
\cline{2-4}
 & SMKNN & 0.7823 ± 0.0282 & 0.7666 ± 0.0294 \\
\cline{2-4}
 & PL-$k$NN & 0.7785 ± 0.0302 & \uline{0.7689 ± 0.0301} \\
\hline
\multirow{4}{*}{SH} & $k$-NN & 0.5476 ± 0.1190 & \uline{0.4667 ± 0.1844} \\
\cline{2-4}
 & LMKNN & 0.6119 ± 0.0695 & \uline{0.5106 ± 0.1338} \\
\cline{2-4}
 & SMKNN & 0.5667 ± 0.0877 & \uline{0.4601 ± 0.1212} \\
\cline{2-4}
 & PL-$k$NN & 0.6167 ± 0.0714 & \textbf{0.5177 ± 0.1266} \\
\hline
\multirow{4}{*}{SPTH} & $k$-NN & 0.8087 ± 0.0483 & 0.3797 ± 0.1890 \\
\cline{2-4}
 & LMKNN & 0.7587 ± 0.0582 & \uline{0.5150 ± 0.1033} \\
\cline{2-4}
 & SMKNN & 0.7163 ± 0.0644 & 0.4665 ± 0.0905 \\
\cline{2-4}
 & PL-$k$NN & 0.7675 ± 0.0507 & \textbf{0.5320 ± 0.0702} \\
\hline
\multirow{4}{*}{ULC} & $k$-NN & 0.7782 ± 0.0265 & 0.7671 ± 0.0351 \\
\cline{2-4}
 & LMKNN & 0.5515 ± 0.0390 & 0.3470 ± 0.0375 \\
\cline{2-4}
 & SMKNN & 0.7728 ± 0.0359 & 0.7654 ± 0.0393 \\
\cline{2-4}
 & PL-$k$NN & 0.8005 ± 0.0377 & \textbf{0.7946 ± 0.0402} \\
\hline
\multirow{4}{*}{WN} & $k$-NN & 0.9519 ± 0.0424 & \uline{0.9530 ± 0.0424} \\
\cline{2-4}
 & LMKNN & 0.9481 ± 0.0429 & \uline{0.9508 ± 0.0421} \\
\cline{2-4}
 & SMKNN & 0.9537 ± 0.0368 & \uline{0.9545 ± 0.0370} \\
\cline{2-4}
 & PL-$k$NN & 0.9630 ± 0.0310 & \textbf{0.9640 ± 0.0308} \\
\hline
\end{tabular}
\end{table}
Besides the Wilcoxon signed-rank test, we also employed the Nemenyi test to provide an overall statistical analysis. The method computes the critical difference among all techniques and plots each method's average rank on a horizontal axis (see Figure~\ref{fig.nemenyi_test}). Notice that lower ranks denote better performance, and methods connected by a bar are statistically similar. One can notice that the best overall result is attained by the PL-$k$NN model, whose performance is statistically similar to, or better than, that of the baseline techniques.
\begin{figure}
\caption{Nemenyi test computed for all techniques.}
\label{fig.nemenyi_test}
\end{figure}
\section{Conclusions and Future Works}
\label{s.conclusions}
This paper presented PL-$k$NN, a novel approach that automatically determines the number of nearest neighbors for the $k$-NN classifier. Experiments over 11 datasets showed that the proposed model obtains competitive results, according to the statistical analysis applied against all baselines used for comparison. Besides, the Nemenyi test also confirms the statistical similarity of the PL-$k$NN results with the ones obtained by the standard $k$-NN classifier configured with the best $k$ value.
Regarding future studies, we intend to identify the regions in which the entire circle may be necessary to provide more neighboring samples and thus increase the prediction accuracy. Furthermore, we also plan to extend PL-$k$NN to regression analysis.
\section*{Acknowledgment}
The authors are grateful to FAPESP grants \#2013/07375-0, \#2014/12236-1, \#2017/02286-0, \#2018/21934-5, \#2019/07665-4, and \#2019/18287-0, Engineering and Physical Sciences Research Council (EPSRC) grant EP/T021063/1, CNPq grants \#307066/2017-7, and \#427968/2018-6, and Petrobras grant \#2017/00285-6.
\end{document}
\begin{document}
\title{Congested Urban Networks Tend to be Insensitive to Signal Settings: Implications for Learning-Based Control}
\author{\IEEEauthorblockN{Jorge Laval\IEEEauthorrefmark{1}, and
Hao Zhou\IEEEauthorrefmark{1}}
\IEEEauthorblockA{\IEEEauthorrefmark{1} School of Civil and Environmental Engineering,
Georgia Institute of Technology, Atlanta, GA 30332 USA}
\thanks{Corresponding author: J. Laval (email: jorge.laval@ce.gatech.edu).}}
\IEEEtitleabstractindextext{
\begin{abstract}
This paper highlights several properties of large urban networks that can have an impact on machine learning methods applied to traffic signal control.
In particular, we note that the average network flow tends to be independent of the signal control policy as density increases past the critical density. We show that this property, which so far has remained under the radar, implies that
no control (i.e. a random policy) can be an effective control strategy for a surprisingly large family of networks, especially for networks with short blocks.
We also show that this property makes deep reinforcement learning (DRL) methods ineffective when trained under congested conditions\blue{, independently of the particular algorithm used}.
Accordingly, in contrast to the conventional wisdom around learning-based methods promoting the exploration of all states, we find that for urban networks it is advisable to discard any congested data when training, and that doing so will improve performance under all traffic conditions. Our results apply to all possible grid networks thanks to a parametrization introduced here. The impact of the turning probability was found to be very significant, in particular in explaining the loss of symmetry observed in the macroscopic fundamental diagram of the networks, which is not captured by existing theories that rely on corridor approximations without turns. Our findings also suggest that supervised learning methods have enormous potential, as they require very few examples to produce excellent policies.
\end{abstract}
\begin{IEEEkeywords}
Traffic signal control, machine learning, deep reinforcement learning
\end{IEEEkeywords}}
\maketitle
\IEEEdisplaynontitleabstractindextext
\IEEEpeerreviewmaketitle
\section{Introduction}
Congested urban networks are known to behave chaotically and to be very unpredictable, with \blue{microscopic} outputs being hypersensitive to the input demand
\cite{daganzo1996nature,daganzo1998queue,nair2001non,adewumi2016application}.
\blue{An explanation for these unpredictable dynamics has long been conjectured \cite{nagel1995emergent,nagatani2002physics,helbing2001traffic,chowdhury2000statistical,nagel2003still} based on the analogy of gas-liquid phase transitions \cite{stauffer2018introduction}. In this analogy, chaotic dynamics emerge near the critical density as a result of the power-law distribution of congested clusters--the areas in the time-space plane where vehicles are stopped. Power laws are the hallmark of fractal objects and of complex systems exhibiting phase transitions, and due to their infinite variance they are responsible for the chaotic and unpredictable dynamics \cite{schroeder2009fractals}.
This complexity has proven hard to tackle in the literature, where numerous signal control algorithms, mathematical programs and learning-based control methods to optimize network performance have shown only mild success} \cite{khamis2014adaptive, chu2016large,xu2018network,ge2019cooperative,wei2019colight,tan2019cooperative,gong2019decentralized}. Although operational improvements have been shown in these references, they mostly correspond to light traffic conditions or very small networks. And success has been limited when it comes to outperforming greedy benchmarks even under lightly congested conditions \cite{belletti2017expert}.
But \blue{on large networks and on a coarser network-level scale} the empirical verification of the existence of a network-level Macroscopic Fundamental Diagram (MFD) is a statement of the ``order emerging from chaos'' so ubiquitous in complex dynamical systems.
The MFD
gives the average link flow on a network as a function of the average link density, arguably independently of trip origins and destinations, and route choice.
This robustness strongly suggests that the effects of signal timing might be limited.
We argue here that the main cause of our struggles to control \blue{large} congested urban networks is not complexity but simplicity. While small networks might exhibit high throughput variations \cite{daganzo1998queue}, we show here that large congested networks tend to produce a throughput that is quite predictable, and which cannot be controlled in congestion because it tends to be independent of the signal control policy. We call this property the \textbf{\textit{``{congested network property}''}}\blue{, and it is the main motivation of this paper to examine its consequences for traffic control}.
An early indication of this property, which unfortunately remained under the radar all these years, can be traced back to the work of Robert Herman and his group in the mid-eighties around the two-fluid model \cite{Herman84, Hani85, mahmassani1990network}.
Although not mentioned explicitly in these references, some of the figures reveal a significant insensitivity of network performance with respect to signal control \footnote{For example, figures 9a, 10a and 11a in \cite{Hani1985} show that providing signal progression does not significantly improve the average speed on the network; figures 5.10 and 5.13 in \cite{Hani85} show that signal offset has no discernible influence on network throughput.}.
They also find that block length positively influences network throughput.
Notice that these results are based on the microscopic model NETSIM over an idealized grid network with identical block lengths, which might explain why the authors were reluctant to highlight the insensitivity to traffic signal control.
Here we show that the {congested network property } applies even for networks with different block lengths and signal settings.
A more recent indication of this property can be found in the stochastic method of cuts \cite{Laval2015Stochastic}, showing that the network MFD can be well approximated by a function of only three measurable network parameters:
\begin{subequations}\label{parameters}
\begin{align}
\lambda &\propto\fracjl{E(\mbox{block length})}{E(\mbox{green time})},\label{lambda0}\\
\delta &=COV(\mbox{block length})\quad \\
\rho &= \fracjl{E(\mbox{red time})}{E(\mbox{green time})}
\end{align}
\end{subequations}
where $E(\cdot), COV(\cdot)$ stand for expectation and coefficient of variation, respectively, and block length refers to the distance between consecutive traffic lights.
But $\rho = 1$ when we consider all travel directions on grid networks, such as in this paper, and therefore it can be dropped from the formulation. This strongly suggests that, when all directions of travel are considered, signal timing might be irrelevant and affect network performance only through the average green time across all directions of travel.
Other recent indications of the {congested network property } are hard to find, to the best of our knowledge. With the exception of \cite{gayah2014impacts}--who found locally adaptive signals to have little effect on the MFD in heavily congested networks--we conjecture that this is because the {congested network property } is easy to miss unless (i) the space of all possible large-scale networks is explored (i.e., networks with different parameters $\lambda$ and turning proportion), and perhaps more crucially, unless (ii) the performance is analyzed under all density levels, e.g. with the MFD. As mentioned in the first paragraph, a single, small network is used in most studies, which prevents any general conclusions. In addition, typically performance is evaluated for prescribed demand patterns that do not span the whole range of densities. Even studies based on the MFD have not observed this property. \cite{girault2016exploratory} studied the impacts of several control strategies on the MFD of an idealized grid network, and found that the impacts of coordination were highly sensitive to the signal cycle time, and that poor coordination can significantly decrease the network capacity and free-flow travel speed. Similarly, \cite{abdelghaffar2019novel} reported positive simulation results from signal control over a large network of Los Angeles (CA) downtown area with 420 intersections.
The most likely explanation for these observations is that long-block networks were used; in fact, a quick sample of the block length for this latter study reveals that it is around 300 meters, which would imply a long-block network for an average green time (not reported in the study) below 70 seconds, which appears very plausible.
The lack of awareness of the {congested network property } is having negative consequences on emerging control technologies\blue{, and preventing this is the main motivation of the paper.} A case in point is deep reinforcement learning (DRL), where, we claim, potentially all training methods proposed to date are unable to learn effective control policies as soon as congestion appears on the network. In fact, two decades ago \cite{camponogara2003distributed} showed that when training above the critical density the resulting DRL policies deteriorate significantly. While the current explanation of these observations is the potentially non-stationary and/or non-Markovian behavior of the environment \cite{choi2000hidden, da2006dealing}, here we posit that the congested network property\minor{,} and \textit{not} the particular DRL method used, is entirely to blame.
We also show that the turning probability at intersections is a key variable that significantly affects the MFD.
Unfortunately, its impacts are not well understood in the literature, as existing MFD estimation methods are based on arterial corridors without turning movements and have been used as an approximation to the MFD of the whole network. This implies assuming that turning movements do not affect the MFD, which is not always the case. \cite{daganzo2011macroscopic} found that the time for a one-way street network seems to be insensitive to the probability of turning, but \cite{jin2013kinematic} found that random turning ratios lead to more symmetric traffic patterns and higher flow rates, and \cite{gayah2014impacts} found that a fixed turning probability of 0.2 leads a one-way street network towards gridlock more quickly than with random turning. A recent study \cite{xu2020analytical} adopted a double-ring network approach to estimate analytically the impacts of turning on the MFD using the stochastic method of cuts. They found only slight variations in the MFD as a function of the turning probability.
In this paper we
(i) provide additional evidence for the {congested network property } by expanding the experiments in \cite{Herman84, Hani85, mahmassani1990network} to all grid network topologies,
(ii) unveil additional properties such as loss of symmetry, overlapping and detaching, and
(iii) analyze how these properties affect the performance of machine learning methods applied to signal control using both the kinematic wave model (in the main text) and the off-the-shelf simulation model SUMO \cite{SUMO2012} (in the appendix).
\minor{Towards} this end, the remainder of the paper is organized as follows. We start with the background section on the MFD, DRL and a survey of related work. Then, we define the problem setup and apply it to a series of experiments that highlight the main properties found here. Finally, the paper concludes with a discussion and outlook section.
\section{BACKGROUND}
\subsection{\blue{The Macroscopic Fundamental Diagram (MFD)}}
The network MFD has its origins in \cite{godfrey1969mechanism} as a way of describing the traffic flow of urban networks at an aggregate level, and has been used in the past as a concise way of displaying network simulation output \cite{Smeed1967Road,Herman1979Two,Herman84, mahmassani1984investigation}.
For a given traffic network, it describes the relationship between traffic variables averaged across all \textit{lanes} in the network.
The main requirement for a well-defined MFD is that congestion be homogeneously distributed across the network, i.e. there must be no ``hot spots'' in the network. For analytical derivations it is often also assumed that each \textit{lane} of the network obeys the kinematic wave model \cite{Lighthill1955Kinematic,richards1956shock} with common fundamental diagram \cite{daganzo2008analytical, Laval2015Stochastic}.
In this way, upper bounds for the MFD have been found using the method of cuts in the case of homogeneous networks \cite{daganzo2008analytical}. In a flow-density diagram a ``cut'' is a straight line with slope (wave speed) corresponding to the average speed of a moving observer and intercept given by the maximum passing rate the observer would measure. By varying the observer speed one obtains a series of cuts whose lower envelope gives an approximation of the MFD. For general networks, \cite{Laval2015Stochastic} introduces the stochastic method of cuts and shows that (the probability distribution of) the MFD can be well approximated by a function of the three parameters in \eqref{parameters}. Of particular relevance to this paper, the cuts' wave speeds produced by this theory for extreme free-flow, $u_0$, and extreme congestion, $-w_0$, are given by:
\begin{equation}\label{u0}
u_0=w_0=\frac{4 \lambda }{\delta ^2+2 \lambda +1}.
\end{equation}
after using $\rho= 1$ in equation (17b) of \cite{Laval2015Stochastic}.
\blue{As we show in this paper, it turns out that the parameter $\delta$ that regulates the variance of block lengths affects the MFD only slightly, and thus can also be dropped from the formulation. The only variable left is $\lambda$, a measure of the propensity of the network to experience spillbacks, which waste capacity. It was shown in \cite{Laval2015Stochastic} that $\lambda< 1$ is the ``short-block condition'', i.e. the network becomes prone to spillback, which can have a severe effect on capacity. Conversely, a network with $\lambda>1$ has long blocks (compared to the green time) and therefore will not exhibit spillback. \minor{We} show here that the {congested network property } is more pervasive in networks with short blocks, where throughput tends to be insensitive to signal control for all density levels; in networks with long blocks this property tends to be observed in extreme free flow and extreme congestion only, leaving room for operational improvements around the critical density.
}
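As an illustration, the network parameters and the extreme cut speed above can be computed as in the following sketch (function names are ours); the proportionality constant in the definition of $\lambda$ is $(1/u+1/w)$, which equals 2 under the scaling used later in the paper.
\begin{verbatim}
import numpy as np

def network_parameters(block_lengths, green_times, u=1.0, w=1.0):
    # lambda and delta from block lengths and green times (rho = 1 assumed)
    lam = (1.0 / u + 1.0 / w) * np.mean(block_lengths) / np.mean(green_times)
    delta = np.std(block_lengths) / np.mean(block_lengths)  # coeff. of variation
    return lam, delta

def extreme_cut_speed(lam, delta):
    # wave speed u0 = w0 of the extreme free-flow and congested cuts
    return 4.0 * lam / (delta**2 + 2.0 * lam + 1.0)
\end{verbatim}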
Notice that these estimation methods give the MFD of an arterial corridor without turning movements, and have been used as an approximation to the MFD of the whole network. This implies assuming that turning movements do not affect the MFD, which is not the case as will be shown here. \blue{Also note that the MFD will be used in this paper
simply to illustrate the results in the most concise manner possible, and does not intervene in generating these results.
}
\subsection{Reinforcement learning}
The use of deep neural networks within Reinforcement Learning algorithms has produced important breakthroughs in recent years. These deep reinforcement learning (DRL) methods have outperformed expert knowledge methods in areas such as arcade games, backgammon, the game Go and autonomous driving \cite{mnih2015human, silver2017mastering,chen2019model}.
In the area of traffic signal control numerous DRL control methods have been proposed both for isolated intersections \cite{li2016traffic,genders2016using} and small networks \cite{chu2015traffic,chu2019multi,tan2019cooperative,ge2019cooperative}. The vast majority of these methods have been trained with a single (dynamic) traffic demand profile, and then validated using another one, possibly including a surge \cite{ge2019cooperative}.
In the current signal control DRL literature the problem is treated, invariably, as an episodic process, which is puzzling given that the problem is naturally a continuing (infinite-horizon) one. Here, we adopt the \textit{continuing} approach to maximize the long-term average reward. We argue that in signal control there is no terminal state because the process actually goes on forever. What may appear as a terminal state, such as an empty network, cannot be considered so because it is not achieved through the correct choice of actions but by the traffic demand, which is uncontrollable. An explanation for this puzzling choice in the literature might be that DRL training methods for episodic problems have a much longer history and are implemented in most machine learning development frameworks.
For continuing problems this is unfortunately not the case, and we propose here the training algorithm {\sc REINFORCE-TD}, which is in the spirit of REINFORCE with baseline \cite{willianms1988toward} but for continuing problems. To the best of our knowledge, this extension of REINFORCE is not available in the literature.
Reinforcement learning is typically formulated within the framework of a {\em Markov
decision process} (MDP). At discrete time step $t$ the environment is in state $S_t\in{\cal S}$ and the agent chooses an action $A_t\in{\cal A}$ so as to maximize a function of future rewards $R_{t+1}, R_{t+2},\ldots$, with $R: {\cal S} \times {\cal A} \rightarrow \Re$. The state transition probability distribution $\Pr(s',r|s,a)=\Pr(S_t=s',R_t=r|S_{t-1}=s, A_{t-1}=a)$ gives the probability of making a transition from state $s$
to state $s'$ using action $a$, often abbreviated $\Pr(s,a,s')$, and is commonly referred to as the ``model''.
The model is
{\em Markovian} since the state transitions are independent of any previous
environment states or agent actions. For more details on MDP models the reader is referred to \cite{bellman1957markovian,bertsekas1987dynamic,howard1960dynamic,puterman1994markovian}.
The agent's decisions are characterized by a stochastic
\textbf{policy} \( \pi (a|s) \), which is the probability of taking action
\( a \) in state \( s \).
In the continuing case the agent seeks to maximize the \textit{average reward}:
\begin{equation}\label{eta}
\eta (\pi )\equiv \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=1}^T E_{\pi}\left[R_t\right]
\end{equation}
The term $E_{\pi}$ denotes the expected value (with respect to the distribution of states) \RoundTwo{assuming} the policy $\pi$ is followed.
In the case of traffic signal control for large-scale grid networks, methods based on transition probabilities are impractical because the state-action space becomes too large as the number of agents increases.
An alternative approach that circumvents this \textit{curse of dimensionality} problem---the approach we pursue here---is the family of ``policy-gradient'' algorithms, where the policy is parameterized as \( \pi (a|s;{\theta }), \theta \in \mathcal{R}^{m} \), typically a neural network. Parameters $\theta$ are adjusted to
improve the performance of the policy $\pi$ by following the gradient of cumulative
future rewards, given by the identity
\begin{equation}
\label{policy_grad}
\nabla \eta=E_{\pi}[G_t \nabla_{\theta}\log \pi (a|s)]
\end{equation}
as shown in \cite{sutton1999policy} for both continuing and episodic problems. In continuing problems cumulative rewards $G_t$ are measured relative to the average cumulative reward:
\begin{equation}\label{return}
G_t=\sum_{i=t+1}^\infty (R_i-\eta(\pi))
\end{equation}
and is known as the \textit{differential return}.
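The following sketch illustrates a policy-gradient update of this kind in the continuing, average-reward setting. It is written in PyTorch-style Python, uses a simple running estimate of the average reward as the baseline, and is only meant to convey the structure of such an update under our assumptions; it is not the exact {\sc REINFORCE-TD} algorithm proposed here.
\begin{verbatim}
import torch

def policy_gradient_step(policy, optimizer, trajectory, eta, beta=0.01):
    # trajectory: list of (state, action, reward) collected under `policy`
    # eta: running estimate of the average reward
    states = torch.stack([s for s, _, _ in trajectory])
    actions = torch.tensor([a for _, a, _ in trajectory])
    rewards = torch.tensor([r for _, _, r in trajectory], dtype=torch.float32)

    # differential returns: cumulative sums of (R - eta) from each step onwards
    diffs = rewards - eta
    returns = torch.flip(torch.cumsum(torch.flip(diffs, [0]), 0), [0])

    # policy-gradient loss: -G_t * log pi(a_t | s_t)
    logits = policy(states)
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    loss = -(returns.detach() * log_probs).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # update the average-reward estimate
    eta = eta + beta * (rewards.mean().item() - eta)
    return eta
\end{verbatim}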
\subsection{Related work}
\RoundTwo{Signal control has a long history in several research domains. It is beyond the scope of this study to survey all of the existing methods. Instead, in this section we will focus on the recent RL-based signal control methods that are applicable to congested large urban networks. }
The related literature is split between two approaches for formulating the large-scale traffic control problem, either a centralized DRL algorithm or a decentralized method with communication and cooperation among multi-agents.
The centralized approach \cite{genders2016using,li2016traffic,chu2016large} adopts a single agent and tries to tackle the high-dimensional continuous control problem by memory replay, dual networks and advantage actor-critic \cite{lillicrap2015continuous, mnih2015human}. The decentralized approach takes advantage of multiple agents and usually requires the design of efficient communication and coordination to address the limitation of partial observation of local agents. Current studies \cite{khamis2014adaptive,wei2019colight,tan2019cooperative,gong2019decentralized} often decompose the large network into small regions or individual intersections, and train the locally-optimal policies separately given reward functions reflecting a certain level of cooperation with neighboring agents. How to incorporate such communication information into the design of the reward function for local agents remains an open question.
The environment modeling, state representation and reward function design are key ingredients in DRL. For the environment emulator, most studies use popular microscopic traffic simulation packages such as Aimsun or SUMO. Recently, FLOW \cite{kheterpal2018flow} has been developed as a computational framework integrating SUMO with some advanced DRL libraries to implement DRL algorithms on ground traffic scenarios. \cite{vinitsky2018benchmarks} provided a benchmark for major traffic control problems, including multiple-intersection signal timing.
There also exist studies \cite{chu2015traffic,arel2010reinforcement,ge2019cooperative} that adopt self-defined traffic models as the environment. Complementary to those microscopic simulation packages, macroscopic models are able to represent the traffic state using cell or link flows. The advantage of macroscopic models is twofold: i) they reduce the complexity of the state space and of the computation, and ii) they are compatible with domain knowledge from traffic flow theory, such as the MFD theory.
Expert knowledge has been included in some studies to reduce the scale of the network control problem. In \cite{xu2018network}, critical nodes dictating the traffic network were identified before the DRL was implemented, so that the state space could be remarkably reduced. Although MFD theory alone cannot provide sufficient information to determine the traffic state of a network, it can be used to constrain the search. For instance, \cite{chu2015traffic} successfully integrated the MFD with a microscopic simulator to constrain the search space of the control policies in their signal design problem. They defined the reward as the trip completion rate of the network, while simultaneously enforcing the network to remain under or near the critical density. The numerical experiments demonstrated that their policy trained with the MFD integration yields a more robust shape of the MFD, as well as better performance in trip completion maximization, compared to that of a fixed and a greedy policy.
While most of the related studies on traffic control only focus on developing effective and robust deep learning algorithms, few of them have included traffic considerations, such as the impact of traffic density. The learning performance of RL-based methods under different densities has not been sufficiently addressed. To the best of our knowledge, \cite{camponogara2003distributed} is the only study that trained an RL policy for specific and varied density levels, but unfortunately their study only accounted for free flow and mid-level congestion.
\cite{dai2011neural} classified the traffic demand into four vague levels and reported that inflow rates of 1000 and 1200 veh/h needed more time for the algorithm to converge. But they did not report network density, nor did they try more congested situations or discuss why convergence was delayed.
Most studies only trained RL methods in uncongested conditions; \cite{ge2019cooperative} adopted the Q-value transfer algorithm (QTCDQN) for cooperative signal control on a simple $2\minor{\times}2$ grid network and validated the adaptability of their algorithm to dynamic traffic environments with different densities, such as recurring and occasional congestion.
It can be seen that most recent studies focus on developing effective and robust multi-agent DRL algorithms to achieve coordination among intersections. The number of intersections in those studies is usually limited, so their results might not apply to large networks. Although signal control is indeed a continuing problem, it has always been modeled as an episodic process. From the perspective of traffic considerations, expert knowledge has only been incorporated to down-scale the size of the control problem or to design novel reward functions for the DRL algorithm. Few studies have tested their methods on a full spectrum of traffic demands, and the learning performance under different traffic densities, especially in the congested regimes, has not been fully explored.
\RoundTwo{It is worth noting that, based on the recent development of the MFD theory in the traffic flow domain, many perimeter and boundary signal control methods have been developed to reduce congestion. Relevant to our research, \cite{geroliminis2012optimal, haddad2012stability,aboudolas2013perimeter} proposed and investigated the perimeter control method, whose general idea is to control the inflow/outflow across city zones by adjusting the signals at the boundaries such that the congestion level in certain zones can be managed.
The MFD control method is novel and promising. It incorporates expert knowledge from traffic flow theory well, and it is also centralized, which significantly differs from the decentralized signal control popular in the learning-based literature. To this end, we believe the MFD control methods suggest a promising research direction, which is to better leverage traffic flow models and use cooperative multi-agent reinforcement learning (MARL) to better explore the learning potential. MARL is an active research direction in computer science, and is certainly beyond the scope of this paper. Instead, we will limit the learning-based signal control methods to those that do not require a centralized controller or communications from neighbors, such that all signals in the network apply the same policy. }
\section{Problem definition}
\blue{This section formalizes all the ingredients needed to formulate our problem, including the traffic flow model to describe the behavior of vehicles, the type of network, the vehicle routing assumptions and the traffic signal configuration.
}
\textbf{The traffic flow model} used in this paper is the kinematic wave model \blue{(or LWR model) } \cite{Lighthill1955Kinematic,richards1956shock} with a triangular flow-density fundamental diagram \blueOLD{\cite{newell2002simplified}}. \blue{This is the simplest model able to predict the main features of traffic flow, and it has become the standard analysis tool in traffic flow theory.
It turns out that there are several models in the literature that are equivalent to the kinematic wave model: \minor{the} celebrated Newell's car-following model \cite{newell2002simplified} is the car-following version, which can be formulated as a cellular automaton (CA) model \cite{Dag04a}.
But \cite{Lav16} showed that the shape of the triangular fundamental diagram is irrelevant thanks to a symmetry in the kinematic wave model, allowing us to use an isosceles fundamental diagram, which has many useful properties in practice; see \cite{Lav16} for the details.
This implies that
elementary CA rule 184 \cite{wolfram1984cellular} is in turn equivalent to the kinematic wave model, and will be used in this paper. Accordingly, here } we set both the free-flow speed and the wave speed equal to 1, implying that the saturation flow is 1/2, the critical density $k_c$ is also 1/2 and the jam density 1, without loss of generality \footnote{ \blueOLD{The transformed densities used in this paper, $k$, are related to the ``real'' densities, $k'$, by $k'=
2k/(\theta+1), k<1/2$ and $k'= 1-2(1-k)\theta/(\theta+1), k\ge 1/2$, where $\theta$ is there observed free-flow speed to wave speed ratio.
}}.
In a CA model, each lane of the road is divided into small cells $i=1,2,\ldots \ell$ the size of a vehicle jam spacing, where cell $\ell$ is the most downstream cell of the lane. The value in each cell, namely $c_i$, can be either ``1'' if a vehicle is present and ``0'' otherwise. The update scheme for CA Rule 184, shown in Fig.~\ref{f1}, operates over a neighborhood of length 3, and can be written as:
\begin{equation}\label{CA Rule 184}
c_i := \left(c_{i-1}\land \lnot c_i\right) \lor \left(c_i\land c_{i+1}\right)
\end{equation}
The vector $c$ is a vector of bits and \eqref{CA Rule 184} is Boolean algebra, which explains the high computational efficiency of this traffic model. \blue{ Notice that CA Rule 184 implies the exceedingly simple traffic rule ``advance if you can'', which can be understood as the canonical rule for traffic flow.} This also implies
that the current state of the system is described completely by the state in the previous time step; i.e. it is Markovian and deterministic. Stochastic components are added by the signal control policy, and therefore our traffic model satisfies the main assumption of the MDP framework.
\begin{figure}
\caption{CA Rule 184: The top row in each of the eight possible cases shows the neighborhood values $(c_{i-1}, c_i, c_{i+1})$, and the bottom row the updated value of $c_i$.}
\label{f1}
\end{figure}
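A one-line NumPy implementation of this update on a ring (periodic boundary conditions) is sketched below; the array is boolean, with True marking an occupied cell, and vehicles move towards increasing cell index.
\begin{verbatim}
import numpy as np

def rule184_step(c):
    # c: boolean array of cells; vehicles move towards increasing index
    left = np.roll(c, 1)    # state of cell i-1
    right = np.roll(c, -1)  # state of cell i+1
    # occupied if a vehicle enters from upstream, or stays because it is blocked
    return (left & ~c) | (c & right)

# example: three vehicles on a 10-cell ring
c = np.zeros(10, dtype=bool)
c[[0, 1, 5]] = True
c = rule184_step(c)
\end{verbatim}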
\textbf{The network} corresponds to a grid network of bidirectional streets with one lane per direction and with a traffic light on all intersections. To attain spatial homogeneity, the network is defined on a torus where each street can be thought of as a ring road where all intersections have 4 incoming and 4 outgoing approaches; see Fig.~\ref{network}.
\blueOLD{Let us define the ``N-S axis'' as the set of N-S bidirectional streets; similarly for the ``E-W axis''.}
\textbf{Vehicle routing} is random:
A driver reaching the stop line, say Mary, will choose to turn with probability $p$ or keep going straight with probability $1-p$. If Mary decides to turn, she will turn left, right or U-turn with equal probability. For instance, $p=3/4$ gives an equal probability of 1/4 to all possibilities and therefore promotes a uniform distribution of density on the network\footnote{\blueOLD{We have verified that our results are independent whether or not one includes U-turns.}}.
If two or more vehicles are bound for the same approach during a time step, the tie is broken randomly. If the downstream approach is blocked then Mary will not move during that time step, and will repeat the same selection process during the next time step.
Notice that this random routing \blueOLD{can be viewed} as a simplified form of driver adaptation, which avoids unrealistic bifurcations in the MFD \cite{daganzo2011macroscopic}. \footnote{Bifurcation takes place when drivers cannot clear the intersection because the downstream link in their route is jammed. If the driver does not adapt and change her route, the jam propagates even faster, eventually leading to a deadlock, with a portion of the links in the network being jammed, and the rest being empty. With driver adaptation, however, the jam propagation is slowed down by distributing congestion more uniformly across the entire network.}
It also makes our results applicable only to grid networks where both supply and demand are spatially homogeneous, e.g. where origins and destinations are uniformly distributed across the network, such as in a busy downtown CBD. In the appendix we have tried other assignment methods that do create bifurcation, showing that the results of the paper remain unchanged.
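A minimal sketch of this routing rule is given below (names are ours); the random tie-breaking among vehicles bound for the same approach and the blocking test are handled elsewhere in the simulation.
\begin{verbatim}
import random

def choose_turn(p):
    # turn with probability p (left, right or U-turn equally likely),
    # otherwise go straight
    if random.random() < p:
        return random.choice(["left", "right", "u-turn"])
    return "straight"
\end{verbatim}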
In the spirit of our discussion surrounding \eqref{parameters} we parametrize the space of all grid networks by
$\lambda$ and $\delta$. As such,
\textbf{the block length}, defined as the distance (in number of cells) between two neighboring traffic lights on a given street, is a random variable with coefficient of variation $\delta$, while keeping its
mean, $\ell$, constant.
Notice that we have verified that values of $\ell\ge6$ do not change simulation results.
\textbf{Traffic signals} operate under the simplest possible setting with only red and green phases (no lost time, red-red, yellow nor turning phases). All the control policies considered here are \textit{incremental} in the sense that decisions are taken every $g$ time steps, which can be interpreted as a minimum green time:
After the completion of each green time of length $g$, the controller decides whether to prolong the current phase or to switch light colors.
The following signal control policies will be used as baseline in this paper:
\begin{enumerate}[itemsep=-1mm]
\item LQF: ``longest queue first'' gives the green to the direction with the longest queue; it is a greedy method for the ``best'' control,
\item SQF: ``shortest queue first'', a greedy method for the ``worst'' control,
\item RND: ``random'' control gives the green with equal probability to both directions, akin to no control.
\end{enumerate}
The motivation to include SQF is that any possible control method will produce performance measures in between LQF and SQF.
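The three baseline policies can be sketched as a single decision rule applied at the end of each minimum green of length $g$; this is only an illustrative sketch, and the queue measurement details are assumptions.
\begin{verbatim}
import random

def next_phase(current_green, queue_ns, queue_ew, policy):
    # decide which axis gets the green for the next g time steps
    if policy == "LQF":   # longest queue first (greedy "best")
        return "NS" if queue_ns >= queue_ew else "EW"
    if policy == "SQF":   # shortest queue first (greedy "worst")
        return "NS" if queue_ns <= queue_ew else "EW"
    if policy == "RND":   # random control, akin to no control
        return random.choice(["NS", "EW"])
    return current_green  # default: prolong the current phase
\end{verbatim}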
To achieve the $\lambda$-parametrization we note from \eqref{lambda0} the expected green time is $E(\mbox{green time})=\ell(1/u+1/w)/\lambda$,
where $(1/u+1/w)$ is the proportionality constant in \eqref{lambda0}, $u$ and $-w$ are the fundamental diagram free-flow speed and wave speed, respectively, which are both equal to 1 in our scaling, so:
\begin{equation}\label{g0}
E(\mbox{green time})=2\ell/\lambda
\end{equation}
But under incremental control the expected green time is unknown a priori; it can only be estimated after the simulation run. In particular, setting a minimum green time $g$ in the simulation will yield $E(\mbox{green time})\ge g$ due to the possibility of running two or more minimum green phases in a row. We have verified that under LQF $E(\mbox{green time})\approx g$, which is expected because after a discharge it is very unlikely that the same approach would have the largest queue. Under the random policy the number of minimum green phases in a row is described by a Bernoulli process of probability one half, and therefore $E(\mbox{green time})\approx 2g$. With this, we are able to compare LQF and RND control for a given $\lambda$ by setting a minimum green time $g$ in the simulation as:
\begin{subequations}\label{g}
\begin{empheq}[left={g=\empheqlbrace\,}]{align}
2\ell/\lambda & \qquad \text{for LQF} \label{ga}\\
\ell/\lambda & \qquad \text{for RND} \label{gb}
\end{empheq}
\end{subequations}
Unfortunately, under SQF a $\lambda$-parametrization is not possible because $\lambda$ becomes ill-defined. As we will detail shortly, at a given intersection $E(\mbox{green time}) \rightarrow \infty$ for one direction and $E(\mbox{green time}) \rightarrow 0$ for the other; i.e., after a few iterations the signal colors become permanent at all intersections.
Although this behavior is clearly impractical, it turns out that SQF is \blue{the} key to understanding the behavior of DRL methods in congestion.
\begin{figure}
\caption{Example $4\times $5 traffic network. The connecting links to form the torus are shown as dashed directed links; we have omitted the cells on these links to avoid clutter. Each segment has $\ell=10 $ cells; an additional cell has been added downstream of each segment to indicate the traffic light color.}
\label{network}
\end{figure}
\section{Baseline experiments}
In this section we perform a series of experiments to highlight important properties of urban networks under the baseline control policies defined above. We are interested on the steady-state MFD these policies produce when deployed to all intersections in the network, and for different parameters $\lambda, \delta$ and $p$.
The MFD for each policy is obtained by simulating this policy for constant
network densities $k\in (0,1)$ and reporting the average flow in the network after 4 cycles\footnote{\blueOLD{Note that flows will not vary for different aggregation intervals because our network is designed to be both temporally and spatially homogeneous.}}. This process is repeated 50 times for each density value to obtain an approximate 90\%-probability interval (between the 5th and the 95th percentiles) of the flow. Based on our results, and to facilitate this discussion, we argue that networks have 4 distinctive traffic states: extreme free-flow ($k<0.2$), moderate free-flow ($0.2<k<0.5$), moderate congestion ($0.5<k<0.8$) and extreme congestion ($k>0.8$).
\footnote{\blueOLD{In terms of the ``real'' densities, $k'$, and using footnote 3 with $\theta=4$,} we would have approximately: extreme free-flow ($k'<0.1$), moderate free-flow ($0.1<k'<0.25$), moderate congestion ($0.25<k'<0.7$) and extreme congestion ($k'>0.7$).}
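The construction of each MFD and its approximate 90\%-probability band can be sketched as follows; \texttt{simulate\_average\_flow} is a placeholder for one run of the CA network at a fixed density under a given policy, returning the average flow after 4 cycles.
\begin{verbatim}
import numpy as np

def estimate_mfd(densities, policy, n_rep=50):
    mfd = []
    for k in densities:
        flows = np.array([simulate_average_flow(k, policy)
                          for _ in range(n_rep)])
        mfd.append((k, flows.mean(),
                    np.percentile(flows, 5), np.percentile(flows, 95)))
    return mfd
\end{verbatim}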
The following results are based on 2 figures showing our simulation results
for different network parameters $\lambda, p$ and $\delta$ for the three baseline signal control methods. Fig.~\ref{MFDs} shows the case of a fixed turning probability, $p=0.75$, and different network parameters $\lambda$ and $\delta$, to make the point that the effect of $\delta$ is small.
The first row corresponds to homogeneous networks (identical block lengths, $\delta = 0$), while the second row to inhomogeneous networks (highly variable block lengths, $\delta = 0.7$).
Fig.~\ref{MFDsp} is for homogeneous networks ($\delta = 0$) and selected network parameters $\lambda$ and $p$, to see the impact of turning probabilities. These figures also show the extreme cuts produced by our earlier theory
in \cite{Laval2015Stochastic}, which correspond to straight lines of slopes $u_0$ and $w_0$ given by \eqref{u0}.
\begin{figure}
\caption{Simulation results for baseline policies under equally-probable turning case $p=0.75$.
First row: homogeneous networks (identical block lengths, $\delta = 0$); second row: inhomogeneous networks (highly variable block lengths, $\delta = 0.7$). The straight lines correspond to the extreme cuts
in \cite{Laval2015Stochastic}.}
\label{MFDs}
\end{figure}
We can draw the following remarks from Fig.~\ref{MFDs} and \ref{MFDsp}:
\begin{enumerate}[label=\textbf{R-\arabic*},itemsep=2pt,parsep=2pt]
\item \label{lambda} \textbf{Effects of network parameters.} The block length parameter $\lambda$ has a significant impact on the MFD, especially for $\lambda<2$ and for the LQF and random controls. This is as expected given that the probability of spillbacks in the network is directly related to $\lambda$. The variability of the block length parameter $\delta$ does not impact network throughput very significantly: by comparing the diagrams in each column of Fig.~\ref{MFDs} we can see that the shape of the MFDs is practically the same, with inhomogeneous networks producing slightly less capacity ($\approx 5\%$ on average). This result is surprising and indicates that networks with highly variable block lengths of mean $\ell$ perform only slightly less efficiently than an idealized chessboard network with identical blocks of length $\ell$. To simplify our analysis in the sequel, we now assume $\delta = 0$.
\item \label{loss of symmetry} \textbf{Loss of symmetry.} There is a loss of symmetry in the MFDs for the LQF and RND policies,
which can be traced to the turning probability $p$. From Fig.~\ref{MFDsp} we have mapped the level of skewness in the $(\lambda, p)$-plane and summarize the results in the middle and right panels of Fig.~\ref{BLsumm}. It can be seen that the patterns are very different between the two policies, which is unexpected. Notice that for large turning probabilities the MFDs in congestion lie above the theoretical estimates, indicating that congestion propagates significantly faster due to turning movements.
\item \label{Detache} \textbf{Detaching: emergence of permanent street colors under SQF}. The middle row in Fig.~\ref{MFDsp} shows two examples where SQF flows exceed all other policies in congestion.
This ``detaching'' behavior in the MFD happens for $p\le 0.3$, and is a consequence of the signal color under SQF becoming permanent at each intersection. This induces a surprising collective pattern where all streets in the network are either under a permanent green and at high flows, namely ``green streets'', or under permanent red and at zero flow, namely ``red streets''. All green streets belong to the same axis, say N-S, which may contain some red streets; the other axis, E-W, contains only red streets. This is shown in Fig.~\ref{f-detach} (left), where it can be seen that the network reached an equilibrium where half of the N-S streets are green, and all other streets are red. Although permanent street colors tend to emerge for all values of $p$, we have observed that detaching only occurs for $p\le 0.3$, where the high flows in the green streets, shown as a green disk on the right side of the figure, are able to compensate for the zero flow in all red streets (red disk in the figure), such that the average traffic state in the network (gray disk) is above the LQF-MFD.
Consideration shows that, depending on the proportion of green streets, the average traffic state lies anywhere within the shaded triangle in the figure, whose left edge is achieved when the proportion of green streets is maximal, i.e. 50\% in the case of square networks. Points along the line of slope $-w$ in the figure indicate that the N-S axis is operating in the congested branch of the FD, and points in the detaching area, that the N-S axis is operating in its free-flow branch. This is clearly impractical but indicates that a good strategy under severe congestion might be to favor one axis over the other.
\item \label{random} \textbf{LQF/RND overlap.} In a large number of networks the LQF and RND policies overlap over a significant range of densities, which
indicates that no control (i.e. RND) is an effective control method in such cases. This happens in extreme congestion and (to a lesser extent) in extreme free-flow on all networks.
Consideration of (an extended version of) Fig.~\ref{MFDsp} reveals that the regions in the $(\lambda, p)$-plane
where these policies overlap \textit{for all densities} can be summarized as in the bottom left panel of Fig.~\ref{BLsumm}. It can be seen that these regions represent a significant proportion of all possible grid networks.
\item \label{cnp} \textbf{The congested network property:} As density increases above the critical density, network throughput \textit{tends to be more and more independent} of signal control, to the point of becoming absolutely independent of signal control once extreme congestion is reached. The precise density at which this tendency is noticeable depends on the parameters $\lambda$ and $p$, and can be approximated by the density at which LQF and RND first overlap in congestion. This is shown in the bottom right panel of Fig.~\ref{BLsumm}, which depicts the regions in the $(\lambda, p)$-plane where these policies first overlap in moderate congestion. It can be seen that the majority of networks are affected by this property.
\end{enumerate}
\begin{figure}
\caption{Simulation results for baseline policies for homogeneous networks ($\delta = 0$) and different network parameter $\lambda$ (columns) and $p$ (rows).
The straight lines correspond to the extreme cuts
in \cite{Laval2015Stochastic}
\label{MFDsp}
\end{figure}
\begin{figure}
\caption{Summary of results in the $(\lambda, p)$-plane.
Top row:
Symmetric = no skewness,
Right = skewed to the right,
Left = skewed to the left.
Bottom left:
Yes = LQF and RND overlap for all densities,
No = LQF and RND do not overlap for all densities.
Bottom right:
Yes = LQF and RND overlap in moderate congestion,
No = LQF and RND do not overlap in moderate congestion.
The font size is proportional to the effect being shown in each panel. These diagrams are approximate and were constructed by direct observation of a large number of flow-density charts such as the ones in Fig.~\ref{MFDsp}
\label{BLsumm}
\end{figure}
The precise mechanisms roughly outlined above are still under investigation and will be formulated in sequel papers.
Here, we focus on the impact of the congested network property on emerging control technologies, as it shows that urban networks are more predictable than previously thought with respect to signal control.
Recall from the introduction that earlier works from the late eighties found some evidence of the congested network property in \ref{cnp} using simulation on a homogeneous grid network, and studied the impacts of several control strategies on the MFD of an idealized grid network. We have also highlighted the importance of parametrizing urban networks by $\lambda$ and $p$ because of the unique and repeatable features that emerge under particular values of these parameters.
As we will see in the next section this congested network property creates a challenge for learning effective control policies under congestion, where the policy tends to produce results similar to SQF, including detaching.
\begin{figure}
\caption{SQF detaching. Left: the network reached an equilibrium where half of the N-S streets are ``green streets'', and all other streets are ``red streets''. Right: low-density diagram showing the high flows in the green streets (green disk), zero flow in all red streets (red disk) and the average traffic state in the network (gray disk). }
\label{f-detach}
\end{figure}
\section{Machine learning experiments}
In this section we perform the same experiments as in the previous section but with signal control policies based on machine learning methods.
Each traffic signal is an agent equipped with a deep neural network with weights $\theta$ to represent the control policy $\pi (a|s;{\theta })$, as shown in Fig.~\ref{f3}. It is a 3-layer perceptron with tanh nonlinearity, known to approximate any continuous function with arbitrary accuracy provided it has sufficiently many units \cite{kuurkova1992kolmogorov}.
The input to the network is \textbf{the state observable by the agent} and corresponds to \textit{all} 8 approaches to-from the intersection:
a vector of length 8, each entry representing the number of vehicles in each approach. The output is a single real number that gives the probability of turning the light red for the N-S approaches (and therefore turns the light green for the E-W approaches). Recall that these actions can be taken at most every $g$ time steps, per \eqref{g}.
\begin{figure}
\caption{Neural network architecture to approximate the policy. The numbers on top of the arrows indicate the dimensions of the corresponding input/output vectors, and the numbers below the squares are as follows: the input is the state observable by the agent, 1: linear layer, 2: tanh function, 3: linear layer, 4: summation layer, 5: sigmoid function, and
the output is a single real number that gives the probability of turning the light red for the N-S approaches.}
\label{f3}
\end{figure}
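For concreteness, the following is a minimal NumPy sketch of this architecture; the hidden width, the initialization scale and the random seed are illustrative assumptions rather than the values used in our experiments.
\begin{verbatim}
import numpy as np

def init_policy(n_in=8, hidden=16, seed=0):
    # Random weights for the 3-layer perceptron described above
    # (hidden width and initialization scale are assumptions).
    rng = np.random.default_rng(seed)
    return {"W1": rng.normal(0.0, 0.1, (hidden, n_in)),
            "b1": np.zeros(hidden),
            "W2": rng.normal(0.0, 0.1, (hidden, hidden)),
            "b2": np.zeros(hidden)}

def policy(theta, s):
    # Probability of turning the N-S light red, given the
    # 8-vector s of vehicle counts on the approaches.
    h = np.tanh(theta["W1"] @ s + theta["b1"])   # linear layer + tanh
    z = theta["W2"] @ h + theta["b2"]            # second linear layer
    return 1.0 / (1.0 + np.exp(-np.sum(z)))      # summation + sigmoid
\end{verbatim}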
Because our network is spatially homogeneous and without boundaries, there is no reason why policies should be different across agents, and therefore we will train \textit{a single agent} and share its parameters with all other agents.
After training, we evaluate the performance of the policy once deployed to all agents
by observing the resulting MFD.
Shown in all flow-density diagrams that follow is the mean LQF policy, depicted as a thick dashed curve. In this way, we are able to test the hypothesis that a policy outperforms LQF simply by observing whether the shaded area is above the dashed line.
In particular, we will say that a policy is ``optimal'' if it outperforms LQF, ``near-optimal'' if it performs similarly to LQF, and ``suboptimal'' if it underperforms LQF.
To train the policy we will use the following 2 methods, each described in the following subsections:
\begin{enumerate}[itemsep=-1mm]
\item Supervised learning: given labeled data, the weights are set to minimize the prediction error, and
\item Deep Reinforcement Learning (policy gradient).
\end{enumerate}
\subsection{Supervised learning policies} \label{supervised section}
This section reports a rather surprising result: training the policy with \textbf{only two} examples yields a near-optimal policy. These examples are shown in Fig.~\ref{supervised0} and correspond to two extreme situations where the choice is trivial: the left panel shows extreme state $s_1$, where both N-S approaches are empty and the E-W ones are at jam density (and therefore red should be given to the N-S approaches with probability one), while the right panel shows $s_2$, the opposite situation (and therefore red should be given to the N-S approaches with probability zero); in both cases all outgoing approaches are empty. The training data is simply:
\begin{equation}\label{trivial}
\pi(s_1)\rightarrow 1,\qquad \pi(s_2)\rightarrow 0.
\end{equation}
We have verified that this training yields optimal or near-optimal policies for all network parameters; see Fig.~\ref{supervised1} for selected parameter values.
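To make this concrete, the sketch below (continuing the NumPy policy sketch above) fits the weights to the two labeled states of \eqref{trivial} by plain gradient descent; the encoding of $s_1$ and $s_2$, the jam occupancy, the learning rate and the finite-difference gradient are illustrative assumptions.
\begin{verbatim}
def fd_grad(f, theta, eps=1e-5):
    # Finite-difference gradient of the scalar f(theta) with respect to
    # every entry of the weight dictionary (kept simple on purpose; an
    # analytic gradient would be used in practice).
    grad = {}
    for key in theta:
        g = np.zeros_like(theta[key])
        it = np.nditer(theta[key], flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            theta[key][i] += eps;      fp = f(theta)
            theta[key][i] -= 2 * eps;  fm = f(theta)
            theta[key][i] += eps
            g[i] = (fp - fm) / (2 * eps)
        grad[key] = g
    return grad

# s1: N-S approaches empty, E-W approaches at jam density (target 1);
# s2 is the opposite (target 0). Entry ordering and jam value assumed.
jam = 10.0
s1 = np.array([0., 0., jam, jam, 0., 0., 0., 0.])
s2 = np.array([jam, jam, 0., 0., 0., 0., 0., 0.])
examples = [(s1, 1.0), (s2, 0.0)]

theta = init_policy()
loss = lambda th: sum((policy(th, s) - y) ** 2 for s, y in examples)
for _ in range(500):                  # gradient descent on the two examples
    g = fd_grad(loss, theta)
    for key in theta:
        theta[key] -= 0.5 * g[key]
\end{verbatim}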
\begin{figure}
\caption{Supervised learning experiment. Left: Extreme state $s_1$, where both N-S approaches are empty and the E-W ones are at jam density; we have omitted the cells on links other than the ones observable by the middle intersection to avoid clutter. Right: extreme state $s_2 $, the opposite of $s_1$.}
\label{supervised0}
\end{figure}
\begin{figure}
\caption{Supervised learning experiment: Resulting MFD (shaded area) for selected parameter values. Dots represent simulation points, and the dashed line the corresponding LQF-MFD.}
\label{supervised1}
\end{figure}
\subsection{DRL policies}
Here the policy parameters are trained using DRL on a single intersection using Algorithm \ref{reinforce} in the appendix.
The density of vehicles in the network, $k$, is kept constant during the entire training process.
We define \textbf{the reward} at time $t,\ R_t,$ as the \textit{average advantage flow per lane}, defined here as the average flow through the intersection during $(t,t+g)$ \textit{minus} the flow predicted by the LQF-MFD at the prevailing density.
In this context the LQF-MFD can be seen as a baseline for the learning algorithm, which reduces the variance of the weight updates.
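In code, this reward could be sketched as follows, where \texttt{avg\_flow} stands for the measured average flow per lane through the intersection over $(t,t+g)$ and \texttt{lqf\_mfd} is an assumed lookup of the LQF-MFD flow at density $k$.
\begin{verbatim}
def reward(avg_flow, k, lqf_mfd):
    # Average advantage flow per lane: measured flow minus the
    # flow predicted by the LQF-MFD baseline at the current density.
    return avg_flow - lqf_mfd(k)
\end{verbatim}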
\begin{figure}
\caption{Policies trained with constant demand and random initial parameters $\theta$ with
$\lambda=1/2$ and equally likely turning probability $p=3/4$. The label in each diagram gives the iteration number and the constant density value. First column: NS red probabilities of the extreme states, $\pi(s_1)$ in dashed line and $\pi(s_2)$ in solid line. The remaining columns show the flow-density diagrams obtained at different iterations, and the last column shows the iteration producing the highest flow at $k= 0.5$, if not reported in an earlier column.
\RoundTwo{Each row corresponds to a different density for training.}
\label{constantr}
\end{figure}
The results for network parameters $\lambda=1/2, p=3/4$, and
random initial policy weights $\theta$ are shown in Fig.~\ref{constantr}.
Each row corresponds to a constant training density $k$, while the first column depicts the NS red probabilities of the extreme states, $\pi(s_1)$ and $\pi(s_2)$ (described in \blue{Section} \ref{supervised section}) as a function of the iteration number, and these probabilities should tend to \eqref{trivial} for ``sensible'' policies. To facilitate the discussion, let DRL-F be a DRL policy trained under free-flow conditions, i.e. $k< 0.5$ and DRL-C under congestion, i.e. $k\ge 0.5$.
We can see from Fig.~\ref{constantr} that:
\begin{enumerate}[itemsep=-1mm]
\item DRL-F policies are only near-optimal, with lower training densities leading to policies closer to LQF.
\item DRL-C policies are suboptimal and deteriorate as $k$ increases. Sensible policies cannot be achieved for training density $k\ge 0.7$ as probabilities $\pi(s_1)$ and $\pi(s_2)$ converge to the SQF values; see the first column in Fig.~\ref{constantr}.
\end{enumerate}
\begin{figure}
\caption{Summary of results in the $(\lambda, p)$-plane. Left: regions where, starting from random initial parameters, the ``best'' DRL policy (across training densities) performs better $(>)$, worse $(<)$ or slightly worse than $(\le)$ LQF. Right: regions where the DRL policy trained in congestion deteriorates, starting with optimal parameters. The font size is proportional to the effect being shown in each panel.}
\label{DRLsumm}
\end{figure}
These observations indicate that DRL policies are only near-optimal and lose their ability to learn a sensible policy as the training density $k$ increases.
This is consistent with the congested network property, whereby the more the congestion, the less the policy affects intersection throughput.
This can make the DRL-C gradient $\nabla_{\theta}\log \pi\rightarrow 0$ for $p\ge 0.5$, not because an optimal policy has been reached but because there is nothing to be learned at that density level; see Fig.~\ref{grad}.
Notice that for lower turning probabilities the tendency of the DRL-C gradient is similar to the other panels in the figure, but still the probabilities $\pi(s_1)$ and $\pi(s_2)$ converge to the SQF values.
While the above findings are true for all network parameters, it is important to note that DRL can outperform LQF in some cases. For $0.05\le p\le 0.2$ and $\lambda <1$, detaching is learned by (i) DRL-F with random initial weights, and by (ii) DRL-F and DRL-C with optimal weights given by supervised learning. That DRL-F can learn detaching is surprising since at that training density level ($k<0.5$) detaching does not take place. Not so for DRL-C, because we have shown that it tends to SQF.
The DRL detaching behavior is even more pronounced than SQF, which indicates that the number of ``green streets'' obtained by this policy is higher than under SQF. This improvement upon SQF is unexpected and indicates that DRL is able to learn how to outperform LQF in congestion, albeit impractically since signal colors become permanent.
\begin{figure}
\caption{Evolution of the gradient $L_2$-norm $||\nabla_{\theta}
\label{grad}
\end{figure}
This is shown in Fig.~\ref{DRLdetach}, where the first panel shows how the resulting MFD detaches from the LQF-MFD, in a way consistent with our explanation in Fig.~\ref{f-detach}.
The remaining panels show simulation results with the same policy but with higher turning probabilities during simulation, $p_{\text{sim}}$ in the figure. It can be seen that the detaching decreases with $p_{\text{sim}}$, eventually disappearing for $p_{\text{sim}}\ge 0.3$, at which point the network is subject to the congested network property.
\begin{figure}
\caption{Detaching policy found by DRL. Each panel shows simulation results with the same policy but with different turning probabilities during simulation, $p_{\text{sim}
\label{DRLdetach}
\end{figure}
Finally, Fig.~\ref{DRLsumm} shows a summary of results in the $(\lambda, p)$-plane. The left panel shows the regions where the DRL policy trained at any density and starting from random initial parameters performs better $(>)$, worse $(<)$ or comparable to $(\approx)$ LQF. It can be seen that except for detaching, the DRL policy always underperforms LQF.
The right panel shows the regions where the DRL policy trained in congestion deteriorates, starting with optimal parameters from the supervised experiment. \blue{In summary, except for detaching and when $p=0$, the additional DRL training under congested conditions leads to a deterioration of the policy, which increases with $p$. }
\section{Discussion and outlook}
This paper has raised more questions than answers by exposing several important properties of urban networks that have remained unnoticed for decades, and that have important implications for traffic control. While a sequel paper will explore the theoretical aspects of these properties, here we focused on their impact on machine learning methods applied to traffic signal control on large networks. Although our results apply only to inhomogeneous grid networks with uniformly distributed origins and destinations, we strongly suspect that the mechanisms unveiled here remain important driving forces in more general networks. For example, a key lesson is that urban networks need to be parameterized at least by $\lambda$ and $p$ before anything meaningful can be said about their performance. This consistent gap in the literature, which treats long- and short-block networks alike, may be responsible for the admittedly incremental advances of traffic signal control in the last decades.
Our main result is the congested network property: on congested urban networks the intersection throughput tends to be independent of signal control. This property affects all types of signal control once the density exceeds the critical density, by rendering them more and more similar to random (i.e. no) control as density increases.
The bottom right panel of Fig.~\ref{BLsumm} \blue{shows} that the MFDs for the LQF and RND policies start to overlap in moderate congestion in roughly 2/3 of the cases, which is an indication that the congested network property applies to most urban networks. It is worth recalling that in severe congestion all policies (LQF, RND and SQF) overlap on all networks, indicating that signal control has absolutely no effect on network throughput at those densities. Of course, this may be as expected since in extreme congestion all approaches are full; the LQF/RND overlap in other traffic states is less intuitive, and remains an open question.
\subsection{Implications for DRL}
\subsubsection{Conjecture on the challenges faced by DRL}
We have seen that the congested network property hinders the training process under congested conditions, which in some cases leads to learning the worst policy, SQF.
Even starting with initial weights given by the supervised training policy, we saw that additional training under congested conditions leads to a deterioration of the policy. We have verified similar behavior under dynamic demand loads whenever congestion appears in the network.
This means, potentially, that all the DRL methods proposed in the literature to date are unable to learn sensible policies and deteriorate as soon as congestion appears on the network. It might also explain DRL's limited success for traffic signal control problems observed so far, currently believed to be due to urban networks being non-stationary and/or non-Markovian \cite{choi2000hidden,da2006dealing}. We believe instead that the congested network property is to blame, \blue{independently of the DRL method used (see appendix),} and that future work should focus on new DRL methods able to extract relevant knowledge from congested conditions.
In the meantime, it is advisable to train DRL policies under free-flow conditions only, discarding any information from congested ones, as we have shown here that such free-flow DRL policies are comparable to LQF.
A full explanation of the effects of the congested network property on DRL is still missing.
An important clue was provided earlier that the DRL-C gradient tends to vanish, not because an optimal policy was found but because there is nothing to be learned at that density level; see Fig.~\ref{grad}. This is consistent with the chaotic nature of network traffic near the critical density mentioned in the introduction, and research is needed to test whether or not this is sufficient to explain the gradient behavior observed here.
We conjecture that SQF may provide additional insight, since its behavior is markedly different in both traffic regimes; recall Figs.~\ref{MFDs} and \ref{MFDsp}. An explanation consistent with our observations is presented in Fig.~\ref{simple-mfd}, where it is conjectured that the ``learning potential'' at a given density is proportional to the gap, in absolute value, between the network supply function under LQF and the flow under SQF: it is maximal in free flow, starts shrinking at the critical density $k_c$, becomes negligible in extreme congestion, and finally increases again as density keeps increasing.
The precise shape of this diagram, and the underlying impacts for traffic control, depend only on parameters $\lambda$ and $p$, as expected. For example, on a long-blocks network the potential to the right of point ``A'' in the figure would be negligible, explaining why DRL does not learn detaching on these networks. But the shape of this diagram also depends on the flow under the SQF policy, which we strongly suspect is heavy-tailed. \blue{Additional research is needed to validate our conjecture.}
\begin{figure}
\caption{DRL learning potential in real coordinates. Notice that the positive flows under the SQF policy shown correspond to a linear approximation with $u'<u$.}
\label{simple-mfd}
\end{figure}
That DRL successfully learns LQF in free-flow conditions
indicates that most of what the agent needs to learn, e.g. flushing the longest queue, is encoded in free flow.
This might be intuitive considering that in free flow the reward tends to grow linearly with the state variables since there are no spillbacks.
Less intuitive is that DRL consistently learns SQF in congestion. Since the gradient information tends to become less useful due to the congested network property, one would expect that it learns RND instead. Why this happens remains an open question, and might hold the key to improving DRL methods in moderate congestion.
\subsubsection{Turning probabilities}
The impact of the turning probability $p$ turned out to be very significant, not only for explaining the behavior of baseline policies but also for enabling DRL to find policies that exceed SQF's ability to produce the detaching phenomenon introduced in this paper.
The case $p\rightarrow 0$ can be problematic in the current framework because the environment tends to be deterministic, which contradicts the assumptions of the type of stochastic gradient descent methods traditionally used in DRL. We observed that for $p<5\%$ near-optimal DRL policies are hard to find.
Turning probabilities also explain the loss of symmetry observed for LQF and RND baseline policies, which is not captured by existing theories that rely on corridor approximations without turns. Unveiling the mechanisms for the loss of symmetry due to turning should provide significant insight into the operation of urban networks. It becomes clear that future research should focus on mapping the origin and destination table and dynamic traffic assignment models to turning probabilities. \blueOLD{Future work should also explore whether a simple double-ring network is able to capture the main results in this paper, possibly extending the framework proposed in \cite {xu2020analytical}.}
\subsubsection{State representations}
Although not shown in the main text, we have verified that other state representations have little impact on the resulting machine learning policies. Besides the vector of length 8 used in the main text as input to the neural net, where each entry represents the number of queued vehicles in each approach to/from the intersection, we also tried (i) a vector of length 4 only considering incoming approaches, (ii) an $8\times \ell$ matrix of bits, given by the four incoming and the four outgoing $c$-vectors from the CA model, one for each approach to/from the intersection, and (iii) a $4\times \ell$ matrix only considering incoming approaches.
Considering all 8 approaches to/from the intersection would make it possible for the model to learn to avoid spillbacks. But according to the main result in this paper, this does not happen. Instead, we found that with the $8\times \ell$ input (but not the $4\times \ell$) supervised learning yields a near-optimal policy with only two examples. This is surprising because the outgoing approaches are simply null vectors in these two examples used for training, but somehow the larger configuration endows the model with better extrapolation capabilities.
\subsection{The promise of supervised learning}
Notably, we also found that supervised learning with only two examples yields optimal or near-optimal policies for all network parameter values. This intriguing result indicates that extreme states $s_1$ and $s_2$ encode vital information and that the neural network can successfully extrapolate to all other states.
Understanding precisely why this happens could lead to very effective supervised learning methods based on expert knowledge, and perhaps compensate for DRL's inability to learn under congested conditions.
\subsection{Generality of our results}
A crucial assumption in this work was full driver adaptation to avoid bifurcations in the MFD, which have not been observed in the field to the best of our knowledge. In the appendix, we have verified that the congested network property and its detrimental impact on DRL still hold true with more realistic routing behaviors and network configurations in SUMO, where vehicles have actual destinations and different levels of driver adaptation. The code is also shared in an open Github repository: \url{https://github.com/HaoZhouGT/signal\_control\_paper}
With non-adaptive drivers a different DRL framework would have to be used with the reward function having to capture the possibility of localized gridlocks and the state observable by the agent having to capture their spatial extent. The agent might also need to learn the origin-destination matrix since gridlock probabilities grow with the number of nearby origins and destinations.
One of our ongoing studies \cite{zhou2021gridlock} is focusing on more realistic driver adaptation mechanisms. We conjecture that their impacts will be observed mainly on networks with short blocks under extreme congestion, where the blockage probability is higher, and that this will lead to bifurcations depending on the parameters of the driver adaptation model.
\subsection{Implementation challenges of detaching}
Finally, detaching is a surprising finding that deserves more attention. Fig.~6 provides a complete picture of the theory behind it, but its implementation might be controversial. One alternative might be favoring one axis over the other most of the time, but still giving the right of way to the other axis from time to time. This implementation challenge is the focus of future research by the authors.
\section{The training algorithm {\sc REINFORCE-TD}}
In this paper we propose the training algorithm {\sc REINFORCE-TD}, which is in the spirit of REINFORCE with baseline \cite{willianms1988toward} but for continuing problems. To the best of our knowledge, this extension of REINFORCE is not available in the literature, which is almost entirely focused on episodic problems as discussed earlier. Notice that we tried other methods in the literature
with very similar results, so {\sc REINFORCE-TD}\ is chosen here since it has the fewest hyperparameters: learning rates $\alpha $ and $\beta$ for weights, $\theta $, and average reward, $\eta(\pi) $, respectively. Using a grid search over these hyperparameters resulted in $\alpha=0.2 $ and $\beta=0.05$.
Recall that REINFORCE is probably the simplest policy gradient algorithm that uses \eqref{policy_grad} to guide the weight search. In the episodic setting it is considered a Monte-Carlo method since it requires full-episode replay, and it has been considered incompatible with continuing problems in the literature \cite{sutton2018reinforcement}.
Here, we argue that a one-step Temporal Difference (TD) approach \cite{sutton1988learning} can be used instead of the Monte-Carlo replay to fit the continuing setting.
This boils down to estimating the differential return \eqref{return} by the temporal one-step differential return of an action:
\begin{equation}\label{return2}
G_t\approx R_t-\eta(\pi)
\end{equation}
Notice that the second term in this expression can be interpreted as a baseline in REINFORCE, which is known to reduce the variance of the weight updates.
The pseudocode is shown in Algorithm \ref{reinforce}.
\begin{algorithm}
\caption{{\sc REINFORCE-TD}}
\label{reinforce}
\begin{algorithmic}[1]
\STATE Input: parameterized policy \( \pi (a|s;{\theta }), \theta \in \mathcal{R}^{m} \), average density $k$
\STATE Set hyper-parameters $\alpha,\beta$; set average reward $\eta=0$
\STATE Initialize vector $\theta$
\STATE Initialize the network state $S$ as a Bernoulli process with probability $k$ over the cells in the network
\REPEAT
\STATE Generate action $A\sim \pi(\cdot|S;\theta)$
\STATE Take action $A$, observe the new state $S'$ and reward $R$ (by running the traffic simulation model for $g$ time steps)
\STATE $G\gets R-\eta$
\STATE $\eta\gets\eta+\beta \ G$
\STATE $\theta\gets\theta+\alpha\ G \ \nabla_{\theta}\log \pi (A|S;\theta)$
\STATE $S\gets S'$
\UNTIL{forever}
\end{algorithmic}
\end{algorithm}
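The pseudocode translates into the following Python sketch, reusing the \texttt{policy} and \texttt{fd\_grad} helpers sketched in the machine-learning section; \texttt{env\_step}, which runs the traffic model for $g$ time steps and returns the new state and the reward, is an assumed helper.
\begin{verbatim}
def log_pi(theta, s, a):
    # Log-probability of the binary action a (1 = red for N-S).
    p = policy(theta, s)
    return np.log(p) if a == 1 else np.log(1.0 - p)

def reinforce_td(theta, env_step, s0, alpha=0.2, beta=0.05,
                 n_iter=10000, seed=0):
    # REINFORCE with the one-step differential return G = R - eta
    # of Eq. (return2) in place of the Monte-Carlo return.
    eta = 0.0                  # running estimate of the average reward
    s = s0
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        a = int(rng.random() < policy(theta, s))   # A ~ pi(.|S;theta)
        s_next, r = env_step(s, a)                 # g simulation steps
        G = r - eta                                # differential return
        eta += beta * G
        grad = fd_grad(lambda th: log_pi(th, s, a), theta)
        for key in theta:
            theta[key] += alpha * G * grad[key]    # policy-gradient step
        s = s_next
    return theta
\end{verbatim}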
\section{Repeating DRL experiments in a microscopic simulator: SUMO}
We include this appendix to highlight that the results presented here can be replicated and extended using the open-source microscopic traffic simulator SUMO \cite{SUMO2012}.
All DRL settings are kept as in the main text, except for the state representation, which corresponds here to a vector containing the queue lengths of each lane approaching the intersection.
For training, we implemented both REINFORCE-TD and \blue{more advanced DRL methods for the continuing setting in \cite{sutton2018reinforcement}, including the well-known Advantage Actor-Critic (A2C). We also verified that the learning challenge found in this paper is independent of the algorithm. As evidence, we repeated the same DRL experiment using a more recent alternative to A2C, Proximal Policy Optimization (PPO) \cite{schulman2017proximal}, which is known to be more robust to gradient variance by constraining the update step with an idea similar to trust region optimization. The results of the PPO method are shown in Fig.~\ref{ppo-method}, where we tested the policy trained with PPO against a straightforward scenario where the queue only takes place at the east-west left-turn lanes, and accordingly a sensible policy should give green lights to east-west with a probability close to 1. Fig.~\ref{ppo-method} shows that the PPO algorithm successfully outputs a sensible policy when trained in the free-flow state ($k=0.1$), but the learning fails to converge at high densities, e.g. $k=0.7$. The results suggest that the learning challenge found in this paper is independent of the DRL algorithm.} The code is also shared in an open Github repository: \url{https://github.com/HaoZhouGT/signal\_control\_paper}
Fig.~\ref{sumo_mfd} shows the (untransformed) SUMO experiment results for the 3 baseline policies studied here and for a DRL policy trained under congested conditions (DRL-C) using REINFORCE-TD. It can be seen that these results are consistent with the model in the main text.
\begin{figure}
\caption{A $9\times9$ grid network in SUMO: all roads are one-lane, traffic lights are two-phase without protected left-turns. $\lambda=4.0$ and $\delta= 0$. Vehicles randomly turn at intersections and make U-turns at boundaries. Notice that we modified the default routing algorithm in SUMO to achieve high density levels without gridlock. Internal links of intersections are removed and downstream spill-back is prevented.}
\label{sumo_net}
\end{figure}
\begin{figure}
\caption{MFD of DRL-C and baseline policies: upper and lower envelopes correspond to the 95\% and 5\% percentiles of the average network flow from 100 trials. The red shaded area depicts the MFD of DRL-C in extreme congestion (density 0.9). Curves are derived through interpolation from discrete point values at densities $0.1, 0.2, \ldots, 0.9$. }
\label{sumo_mfd}
\end{figure}
\begin{figure}
\caption{\blue{Using the PPO algorithm: ns\_thru, ew\_thru, ns\_left and ew\_left correspond to the four phases for north-south through, east-west through, north-south left-turn, and east-west left-turn traffic. The tested intersection traffic state is specifically designed such that a sensible policy must give green to the ew\_left phase. (a) When the training data is in free flow, a sensible policy is learned. (b) The policy deteriorates when trained with congested data. }
\label{ppo-method}
\end{figure}
\section{More realistic routing behaviors}
The DRL learning experiments in this paper assume full driver adaptation at intersections and that vehicles do not have destinations, a design intended to maintain a constant density level and avoid the notorious gridlock issues. In this appendix we provide more evidence that our findings still hold given more realistic routing behaviors and ODs.
Now all vehicles are equipped with true destinations which are evenly distributed over the network, and drivers may or may not be adaptive according to a probability $P_r$. For adaptive drivers, the route is recalculated and updated to reduce travel time based on the dynamic traffic conditions. This requires one more parameter, the rerouting period $T_r$, which indicates how frequently drivers update their routes according to dynamic traffic conditions. To see the effect of gridlock on the network output, Fig.~\ref{gridlock} depicts the history of completed trips from 50 repeated simulations under LQF signal control with $k=0.2$, $P_r = 0.5$, $T_r = 120s$.
\begin{figure}
\caption{Deterioration of the network output under the LQF signal control ( $k=0.2$, $P_r = 0.5$, $T_r = 120s$): each curve corresponds to one simulation.}
\label{gridlock}
\end{figure}
Apparently, realistic routing behaviors cannot produce a stable network throughput. To derive a sensible MFD, the network trip completion rates are collected only from the first 150 cycles of each simulation to avoid the gridlocks, and the results are summarized in Fig.~\ref{mfd-adaptation}. Unlike the throughput we showed in the paper with random turnings, the network throughputs here are all unstable and decay towards zero due to gridlock.
\begin{figure}
\caption{Comparing LQF and random policies at fixed densities with different driver adaptation: upper and lower bounds correspond to 90\% and 10\% percentiles of the network trip completion rates collected from 200 cycles.}
\label{mfd-adaptation}
\end{figure}
The simulations with realistic routing behaviors output different MFD shapes under the same signal control policies. However, the new results still support our major finding on the effect of signal control: LQF outperforms a random policy in light and moderate congestion, but they are almost equivalent in extremely congested scenarios. Thus we conclude that the congested network property does not vary with the driver adaptation level, and research is ongoing to investigate the challenges that non-adaptive drivers pose to learning methods.
\ifCLASSOPTIONcaptionsoff
\fi
\end{document}
\begin{document}
\title{Multiresolution Decomposition of Areal Count Data}
\thispagestyle{empty}
\section{Introduction}
Decomposing an observed signal or spatial field into scale-dependent components allows recognizing its inherent and prominent features.
Those features give insight to where local or global phenomena manifest themselves and assist in understanding the structure of hierarchical information.
Holmstr\"om et al.~(2011) proposed a procedure in the tradition of image processing that hence is applicable to Gaussian data distributed on regular grids~\cite{H01}.
We extend this method to count data which is potentially observed on an irregular grid, often termed \lq{areal count data}\rq~\cite{C93}.
The original multiresolution decomposition approach can be divided into three individual steps: 1)~spatial field resampling based on a Bayesian hierarchical model, 2)~smoothing on multiple scales, then calculating differences between these smooths to specify details for each resampled field separately, and 3)~posterior credibility analysis.
In the following paragraphs we summarize a) the Bayesian hierarchical model for step 1) and b) how to calculate differences between smooths in step 2).
Those are the relevant parts in the procedure for the proposed extension, outlined in Section~2.
The original multiresolution decomposition assumes that an observed field $\boldsymbol{y}$ consists of the true field $\boldsymbol{x}$ and additive noise.
Based on these flexible model assumptions the hierarchical model is constructed.
a) Bayesian hierarchical model: the true field $\boldsymbol{x}$ is presumed to follow a Gaussian distribution, which implies a Gaussian likelihood function.
Its positive valued variance is modeled with a scaled--inv--$\chi^2$ prior and the spatial component of the field $\boldsymbol{x}$ is captured with an intrinsic Gaussian Markov random field (IGMRF) using a precision matrix $\boldsymbol{Q}$~\cite{R05}.
With those choices, the resulting marginal posterior is of closed form and corresponds to a multivariate t-distribution~\cite{E05}.
b) Calculate differences between smooths: the proposed penalty smoother is defined as $\boldsymbol{S}_{\lambda} = (\mathbf{I} + \lambda\boldsymbol{Q})^{-1}$, where $\lambda$ is the scale or smoothing parameter, such that $0 = \lambda_1 < \lambda_2 < \ldots < \lambda_L = \infty$.
The spatial field $\boldsymbol{x}$ is interpreted as random vector, $\boldsymbol{S}_{\lambda_1}\boldsymbol{x} = \boldsymbol{x}$ defines the identity mapping and $\boldsymbol{S}_{\lambda_L}\boldsymbol{x} =~\boldsymbol{S}_{\infty}\boldsymbol{x}$ the mean field.
On the ground of those preliminaries, $\boldsymbol{x}$ can be decomposed as differences of consecutive smooths: $\boldsymbol{x} = \sum_{l=1}^{L-1} \left( \boldsymbol{S}_{\lambda_l} - \boldsymbol{S}_{\lambda_{l+1}} \right)\boldsymbol{x} + \boldsymbol{S}_{\infty}\boldsymbol{x}$.
Scale-dependent details are then formalized as $\boldsymbol{z}_l = \left(\boldsymbol{S}_{\lambda_l} - \boldsymbol{S}_{\lambda_{l+1}} \right)\boldsymbol{x}$ for $l = 1, \ldots, L-1$ and $\boldsymbol{z}_L = \boldsymbol{S}_{\infty}\boldsymbol{x}$.
Pivotal for a) and b) is the definition of the precision matrix~$\boldsymbol{Q}$:
\begin{equation}
\boldsymbol{x}^\top\boldsymbol{Qx} = \sum_j \biggl( \sum\limits_{i \sim j} x_i - 4 x_j \biggr)^2,
\end{equation}
where $i{\sim}j$ denotes neighboring grid locations.
To ensure four neighbors at every grid location $i$, the boundary values of $\boldsymbol{x}$ are extended across the initial grid.
This definition inherently demands that the data be allocated to a regular grid but bears the advantage that individual computational steps can be optimized based on $\boldsymbol{Q}$'s fast eigendecomposition, such that large dimensional problems can be solved efficiently.
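As a brief illustration of step b), the sketch below computes the scale-dependent details for one sampled field; it assumes the precision matrix (denoted \texttt{Q}) and the field are available as NumPy arrays and that $\boldsymbol{S}_{\infty}\boldsymbol{x}$ can be taken as the constant mean field, as for the first order IGMRF precision used in the extension below.
\begin{verbatim}
import numpy as np

def details(x, Q, lambdas):
    # Scale-dependent details z_1, ..., z_L of one sampled field x,
    # with lambdas = (0 = lambda_1 < ... < lambda_{L-1}); the last
    # smooth S_infinity x is taken here as the constant mean field.
    n = len(x)
    smooths = [np.linalg.solve(np.eye(n) + lam * Q, x) for lam in lambdas]
    smooths.append(np.full(n, x.mean()))            # S_infinity x
    z = [smooths[l] - smooths[l + 1] for l in range(len(lambdas))]
    return z + [smooths[-1]]                        # the z_l sum back to x
\end{verbatim}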
\section{Extension}\label{sec:method}
To decompose areal count data, first the resampling pattern described in a) needs modification.
We assume that the $n$ observed counts $\boldsymbol{y}=(y_1,\dots,y_n)^\top$ are realizations of conditionally independent Poisson distributions and that the expected counts $\boldsymbol{e}=(e_1,\dots,e_n)^\top$ are known for every location in the spatial field.
The Poisson rate for a location $i$ is defined as the product of the expected count $e_i$ and the respective relative risk, denoted as $\exp{(\eta_i)}$.
To resample the spatial field, we construct the hierarchical model with the likelihood function
\begin{equation}
\pi(\boldsymbol{y}|\eta_1,\dots,\eta_n) \propto \prod_{i=1}^{n} \exp{\bigl(y_i\eta_i - e_i\exp{(\eta_i)\bigr)}},
\end{equation}
which corresponds to the classical Besag--York--Molli{\'e} (BYM) model~\cite{B91}.
Here $\boldsymbol{\eta}$ is modeled as the composition of the true log-relative risk $\boldsymbol{u}$ and a normal zero-mean noise term $\boldsymbol{v}$, with unknown precision parameter $\kappa_{\boldsymbol{v}}$.
Analogous to the original model, we use a first order IGMRF process to model the spatial component with accompanying precision parameter $\kappa_{\boldsymbol{u}}$, such that
\begin{equation}
\pi(\boldsymbol{u}|\kappa_{\boldsymbol{u}}) \propto \kappa_{\boldsymbol{u}}^{\frac{n-1}{2}} \exp{\left( -\frac{\kappa_{\boldsymbol{u}}}{2} \sum_{i \sim j} (u_i - u_j)^2 \right)} = \kappa_{\boldsymbol{u}}^{\frac{n-1}{2}} \exp{\left( -\frac{\kappa_{\boldsymbol{u}}}{2} \boldsymbol{u}^\top \boldsymbol{R} \boldsymbol{u} \right)}.
\end{equation}
Again $i{\sim}j$ denotes neighboring lattice locations but here in terms of regions sharing a common border.
Assigning Gamma priors for both precision parameters implies a posterior distribution of non-closed form.
Hence, we use a Gibbs sampler with a Metropolis-Hastings (MH) step to resample the log-relative risks $\boldsymbol{u}$, the noise components $\boldsymbol{v}$ and parameters~\cite{G15}.
Finally, we exploit that the mean of a Poisson distribution is equivalent to its rate and reconstruct the spatial field with $\boldsymbol{e} \cdot \exp{(\boldsymbol{u} + \boldsymbol{v})}$, for every sampled field $\boldsymbol{u}$ and $\boldsymbol{v}$.
We form the scale-dependent details still relying on a penalty smoother.
Instead of using the matrix $\boldsymbol{Q}$ from the original model, we include the precision matrix $\boldsymbol{R}$ of the first order IGMRF~\cite{R05}.
The definition of $\boldsymbol{R}$ does not limit the data to be associated with a regular grid and can be constructed based on adjacency relations of the respective observations.
Since we use a different precision matrix, the optimized implementation relying on $\boldsymbol{Q}$ cannot be employed but we alternatively take advantage of the precision's sparse structure and apply tailored algorithms~\cite{F10}.
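For illustration, a sparse first order IGMRF precision matrix can be assembled from the list of region pairs sharing a common border; the \texttt{scipy.sparse} sketch below is an assumption about the data layout, not the optimized \textbf{spam} implementation.
\begin{verbatim}
import numpy as np
from scipy import sparse

def igmrf_precision(n, neighbor_pairs):
    # First order IGMRF precision R (a graph Laplacian):
    # R[i, i] = number of neighbors of region i, R[i, j] = -1 if i ~ j.
    rows, cols, vals = [], [], []
    deg = np.zeros(n)
    for i, j in neighbor_pairs:          # each border pair (i, j), i != j
        rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
        deg[i] += 1.0; deg[j] += 1.0
    rows += list(range(n)); cols += list(range(n)); vals += list(deg)
    return sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
\end{verbatim}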
\section{Application}\label{sec:application}
The extension's feasibility is demonstrated on the German oral cavity cancer dataset~\cite{K00}.
This data includes cancer counts for 544 districts of Germany over 1986--1990, as well as the expected number of cases derived demographically.
The bulk of the oral cavity counts ranges between one and one hundred counts per district, but single highly populated districts have up to 500.
The data including additional relevant information is available via the \textsc{R} package \textbf{spam}~\cite{F10}.
Following the multiresolution decomposition steps, we first resample the areal counts using suitable sampler specifications~\cite{G15} and verify the convergence of the MH sampler with common diagnostic tools~\cite{B98}.
Figure~\ref{fig1} shows how well the reconstructed field corresponds to the original data.
Only in northeast Germany, where the field is less smooth, the differences are larger.
Since the BYM model was designed not to be oversensitive to extreme counts, part of the resampling difference can be explained through its damping effect~\cite{W10}.
\begin{figure}
\caption{Oral cavity cancer data on logarithmic scale.
Left: the observed number of cases; middle: the mean of the reconstructed fields; right: the difference between the left and the middle panels.}
\label{fig1}
\end{figure}
In the second step, we choose suitable scales~\cite{P13}: $\lambda_1 = 0$, $\lambda_2 = 1$ and $\lambda_3 = 25$, and form scale-dependent details (Figure~\ref{fig2}).
Completing the decomposition, we calculate pointwise probability maps~\cite{H01} (Figure~\ref{fig3}).
The detail $\boldsymbol{z}_1$ reflects spatial noise as well as the relatively low or high counts in the data.
This is also supported by its pointwise probability map, where no large red or blue clusters are visible.
$\boldsymbol{z}_2$ catches larger patches of districts and shows local peculiarities.
Detail $\boldsymbol{z}_3$ consists of the largest scale range and shows the east-west or nationwide trend but this trend is less distinct compared to the more local ones, indicated by the legends of each panel.
\begin{figure}
\caption{Scale dependent details $\boldsymbol{z}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Pointwise probability maps.
Left:~$\boldsymbol{z}
\label{fig3}
\end{figure}
\section{Discussion}\label{sec:discussion}
We extended the multiresolution decomposition approach from Holmstr\"om et al. (2011), which originally processes data coming from a Gaussian distribution on a regular grid, to areal count data.
Establishing an MH sampling model makes it possible to resample count data and use an arbitrary precision matrix.
Employing the BYM model to include prior demographical knowledge, in the form of the known expected counts, enables us to model the data without being oversensitive to possible outliers.
The \textsc{R} code to reproduce this example is available at \url{https://git.math.uzh.ch/roflur/bymresa}.
\end{document}
\begin{document}
\title[ mappings connected with parallel addition ]
{On the mappings connected with parallel addition of nonnegative operators}
\author[Yury Arlinski\u{\i}]{Yu.M. Arlinski\u{\i}}
\address{Department of Mathematical Analysis \\
East Ukrainian National University \\
Prospect Radyanskii, 59-a, Severodonetsk, 93400, Ukraine\\
and
Department of Mathematics, Dragomanov National Pedagogical University,
Kiev, Pirogova 9, 01601, Ukraine}
\email{yury.arlinskii@gmail.com}
\subjclass[2010]
{47A05, 47A64, 46B25}
\keywords{Parallel sum, iterates, fixed point}
\begin{abstract}
We study a mapping $\tau_G$ of the cone ${\mathbf B}^+({\mathcal H})$ of bounded nonnegative self-adjoint operators in a complex Hilbert space ${\mathcal H}$ into itself. This mapping is defined as a strong limit of iterates of the mapping ${\mathbf B}^+({\mathcal H})\ni X\mapsto\mu_G(X)=X-X:G\in{\mathbf B}^+({\mathcal H})$, where $G\in{\mathbf B}^+({\mathcal H})$ and $X:G$ is the parallel sum.
We find explicit expressions for $\tau_G$ and establish its properties. In particular, it is shown that $\tau_G$ is sub-additive, homogeneous of degree one, and that its image coincides with the set of its fixed points, which is the subset of ${\mathbf B}^+({\mathcal H})$ consisting of all $Y$ such that ${\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$. Relationships between $\tau_G$ and Lebesgue type decompositions of nonnegative self-adjoint operators are established, and applications to the properties of unbounded self-adjoint operators with trivial intersections of their domains are given.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
We will use the following notations: ${\rm dom\,} A$, ${\rm ran\,} A$, and ${\xker\,} A$ are the domain, the range, and the kernel of a linear operator $A$, ${\rm \overline{ran}\,} A$ and ${\rm clos\,}{\cL}$ denote the closure of ${\rm ran\,} A$ and of the set $\cL$, respectively. A linear operator $A$ in a Hilbert space $\cH$ is called
\begin{itemize}
\item bounded from below if $(A f,f)\ge m||f||^2 $ for all $f\in{\rm dom\,} A$ and some real number $m$,
\item positive definite if $m>0$,
\item nonnegative if $(A f,f)\ge 0 $ for all $f\in{\rm dom\,} A.$
\end{itemize}
We denote by $\bB^+(\cH)$ the cone of all bounded self-adjoint nonnegative operators in a complex Hilbert space $\cH$, and let $\bB^{+}_0(\cH)$ be the subset of operators from $\bB^+(\cH)$ with
trivial kernels. If $A,B\in \bB^+(\cH)$ and $C=ABA$, then by Douglas theorem \cite{Doug} one has ${\rm ran\,} C^{1/2}=A{\rm ran\,} B^{1/2}$.
If $\cK$ is a subspace (closed linear manifold) in $\cH$, then $P_\cK$ is the orthogonal projection in $\cH$ onto $\cK$, and $\cK^\perp\stackrel{def}{=}\cH\ominus\cK$.
Let $X,G\in\bB^+(\cH)$.
The
\textit{parallel sum} $X:G$ is defined by the quadratic form:
\[
\left((X:G)h,h\right)\stackrel{def}{=}\inf_{f,g \in \cH}\left\{\,\left(Xf,f\right)+\left(Gg,g\right):\,
h=f+g \,\right\} \ ,
\]
see \cite{AD}, \cite{FW}, \cite{K-A}. One can establish for $X:G$ the following equivalent
definition \cite{AT}, \cite{PSh}
\[
X:G=s-\lim\limits_{\varepsilon\downarrow 0}\,
X\left(X+G+\varepsilon I\right)^{-1}G.
\]
Then for positive definite bounded self-adjoint operators $X$ and $G$ we obtain
\[
X:G=(X^{-1}+G^{-1})^{-1} \ .
\]
As is known \cite{PSh}, $X:G$ can be calculated as follows
\[
X:G=X-\left((X+G)^{-1/2}X\right)^*\left((X+G)^{-1/2}X\right).
\]
Here for $A\in\bB^+(\cH)$ by $A^{-1}$ we denote the Moore--Penrose
pseudo-inverse.
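For a finite-dimensional illustration, the two expressions above can be compared numerically; the NumPy sketch below, with random positive definite matrices as a stand-in for $X$ and $G$, is only meant to make the formulas concrete.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def rand_pos_def(n):
    # Random positive definite matrix A A^T + I.
    a = rng.normal(size=(n, n))
    return a @ a.T + np.eye(n)

def parallel_sum(X, G):
    # X : G = X - ((X+G)^{-1/2} X)^* ((X+G)^{-1/2} X).
    w, V = np.linalg.eigh(X + G)               # spectral decomposition
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    T = inv_sqrt @ X
    return X - T.T @ T

X, G = rand_pos_def(4), rand_pos_def(4)
lhs = parallel_sum(X, G)
rhs = np.linalg.inv(np.linalg.inv(X) + np.linalg.inv(G))
print(np.allclose(lhs, rhs))                   # True (up to rounding)
\end{verbatim}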
The operator $X:G$ belongs to $\bB^+(\cH)$ and, as it is established in \cite{AT}, the equality
\begin{equation}
\label{ukfdyj}
{\rm ran\,}
(X:G)^{1/2}={\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}
\end{equation}
holds true.
If $T$ is a bounded operator in $\cH$, then in general
$$T^*(A:B)T\le (T^*AT):(T^*BT)$$
for $A,B\in\bB^+(\cH)$, but, see \cite{Ar2},
\begin{multline}
\label{trans}{\xker\,} T^*\cap{\rm ran\,} (A+B)^{1/2}=\{0\} \\
\Longrightarrow T^*(A:B)T= (T^*AT):(T^*BT).
\end{multline}
Besides, if $A'\le A''$, $B'\le B''$, then $A':B'\le A'':B''$ and, moreover \cite{PSh},
\begin{equation}
\label{monotcon}
A_n\downarrow A\quad\mbox{and}\quad B_n\downarrow B\quad\mbox{strongly}\Rightarrow A_n:B_n\downarrow A:B\quad\mbox{strongly}.
\end{equation}
Let $X,G\in\bB^+(\cH)$. Since $X\le X+G$ and $G\le X+G$, one gets
\begin{multline}
\label{fg2}
X=(X+G)^{1/2}M(X+G)^{1/2},\\
G=(X+G)^{1/2}(I-M)(X+G)^{1/2}
\end{multline}
for some non-negative contraction $M$ on $\cH$ with ${\rm ran\,} M\subset{\rm \overline{ran}\,}(X+G)$.
\begin{lemma} {\rm \cite{Ar2}}
\label{yu1} Suppose $X, G\in \bB^+(\cH)$ and let $M$ be as in
\eqref{fg2}. Then
\[
X:G=(X+G)^{1/2}(M-M^2)(X+G)^{1/2}.
\]
\end{lemma}
Since
$$
{\rm ran\,} M^{1/2}\cap{\rm ran\,} (I-M)^{1/2}={\rm ran\,} (M-M^2)^{1/2},
$$
the next proposition is an immediate consequence of Lemma \ref{yu1}, cf. \cite{FW}, \cite{PSh}.
\begin{proposition}
\label{root} 1) ${\rm ran\,} (X:G)^{1/2}={\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}$.
2) The following statements are equivalent:
\begin{enumerate}
\def\rm (\roman{enumi}){\rm (\roman{enumi})}
\item
$X:G=0$;
\item
$M^2=M$, i.e., the operator $M$ in \eqref{fg2} is an orthogonal projection in
${\rm \overline{ran}\,}(X+G)$;
\item ${\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$.
\end{enumerate}
\end{proposition}
Fix $G\in\bB^+(\cH)$ and define a mapping
\begin{equation}
\label{mapmu}
\bB^+(\cH)\ni X\mapsto\mu_G(X)\stackrel{def}{=}X-X:G\in \bB^+(\cH).
\end{equation}
Then
\begin{enumerate}
\item $0\le\mu_G(X)\le X$,
\item $\mu_G(X)=X\iff X:G=0\iff{\rm ran\,} X^{1/2}\cap {\rm ran\,} G^{1/2}=\{0\}$.
\end{enumerate}
Therefore, if $G$ is positive definite, then the set of fixed points of $\mu_G$ consists of a unique element, the trivial operator.
Denote by $\mu^{[n]}_G$ the $n$th iteration of the mapping $\mu_G$, i.e., for $X\in\bB^+(\cH)$
\begin{multline*}
\mu^{[2]}_G(X)=\mu_G(\mu_G(X)),\;\mu^{[3]}_G(X)=\mu_G(\mu^{[2]}_G(X)),\cdots,\\
\mu^{[n]}_G(X)=\mu_G(\mu^{[n-1]}_G(X)).
\end{multline*}
Since
\[
X\ge\mu_G(X)\ge \mu^{[2]}_G(X)\ge\cdots\ge\mu^{[n]}_G(X)\ge \cdots,
\]
the strong limit of $\{\mu^{[n]}_G(X)\}_{n=0}^\infty$ exists for an arbitrary $X\in\bB^+(\cH)$ and is an operator from $\bB^+(\cH)$.
In this paper we study the mapping
\[
\bB^+(\cH)\ni X\mapsto\tau_G(X)\stackrel{def}{=}s-\lim\limits_{n\to\infty}\mu^{[n]}_G(X)\in\bB^+(\cH).
\]
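A finite-dimensional numerical sketch, reusing \texttt{parallel\_sum} and \texttt{rand\_pos\_def} from the previous snippet (matrix size and number of iterations are assumptions), illustrates the orbit: for a positive definite $G$ the iterates must converge to the zero operator, the only fixed point in that case.
\begin{verbatim}
def mu(X, G):
    # mu_G(X) = X - X : G.
    return X - parallel_sum(X, G)

def tau(X, G, n_iter=200):
    # Approximate tau_G(X) by iterating mu_G.
    F = X.copy()
    for _ in range(n_iter):
        F = mu(F, G)
    return F

F = tau(rand_pos_def(4), rand_pos_def(4))
print(np.linalg.norm(F))    # decreases toward 0 as n_iter grows
\end{verbatim}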
We show that the range and the set of fixed points of $\tau_G$ coincide with the cone
\begin{multline*}
\bB^+_G(\cH)=\left\{Y\in\bB^+(\cH): {\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}\right\}\\
=\left\{Y\in \bB^+(\cH), \;Y:G=0\right\}.
\end{multline*}
We find explicit expressions for $\tau_G$ and establish its properties. In particular, we show that $\tau_G$ is homogeneous and sub-additive, i.e., $\tau_G(\lambda X)=\lambda\tau_G(X)$ and
$\tau_G(X+Y)\le \tau_G(X)+\tau_G(Y)$ for arbitrary operators $X, Y\in\bB^+(\cH)$ and an arbitrary positive number $\lambda$. It turns out that
$$\tau_G(X)=\tau_{\widetilde G}(X)=\tau_G(\widetilde G+X)$$ for all $X\in\bB^+(\cH)$, where $\widetilde G\in\bB^+(\cH)$ is an arbitrary operator such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$. We prove the equality $\tau_G(X)=X-[G]X,$
where the mapping
\[
\bB^+(\cH)\ni X\mapsto[G]X\stackrel{def}{=}s-\lim\limits_{n\to\infty}(nG:X)\in\bB^+(\cH)
\]
has been defined and studied by T.~Ando \cite{Ando_1976} and then in \cite{Pek_1978}, \cite{Kosaki_1984}, and \cite{E-L}. In the last Section \ref{applll} we apply the mappings $\{\mu^{[n]}_G\}$ and $\tau_G$ to the problem of the existence of a self-adjoint operator
whose domain has trivial intersection with the domain of a given unbounded self-adjoint operator \cite{Neumann}, \cite{Dix}, \cite{FW}, \cite{ES}. Given an unbounded self-adjoint operator $A$, in Theorem \ref{ytcgjl} we suggest several assertions equivalent to the existence of a unitary operator $U$ possessing the property $U{\rm dom\,} A\cap{\rm dom\,} A=\{0\}$. J.~von Neumann \cite[Satz 18]{Neumann} established that such a $U$ always exists for an arbitrary unbounded self-adjoint $A$ acting in a separable Hilbert space. In a nonseparable Hilbert space there always exists an unbounded self-adjoint operator $A$ such that the relation $U{\rm dom\,} A\cap{\rm dom\,} A\ne\{0\}$ holds for any unitary $U$, see \cite{ES}.
\section{The mapping $\mu_G$ and strong limits of its orbits}
\begin{lemma}
\label{vspm}
Let $F_0\in\bB^+(\cH)$. Define the orbit
\[
F_1=\mu_G(F_0),\; F_2=\mu_G(F_1),\ldots,F_{n+1}=\mu_G(F_n),\ldots.
\]
Then the sequence $\{F_n\}$ is non-increasing:
$$F_0\ge F_1\ge\cdots \ge F_n\ge F_{n+1}\ge\cdots,$$
and the strong limit
\[
F\stackrel{def}{=}s-\lim\limits_{n\to\infty} F_n
\]
is a fixed point of $\mu_G$, i.e.,
satisfies the condition
\[
F:G=0.
\]
\end{lemma}
\begin{proof} Since $\mu_G(X)\le X$ for all $X\in\bB^+(\cH)$, the sequence $\{F_n\}$ is non-increasing.
Therefore, there exists a strong limit $F=s-\lim\limits_{n\to\infty} F_n.$
On the other hand, because the sequence $\{F_n\}$ is non-increasing,
the sequence $\{F_n:G\}$ is non-increasing as well and property \eqref{monotcon} of parallel addition leads to
\[
s-\lim\limits_{n\to\infty}(F_n:G)=F:G.
\]
Besides, the equalities
\[
F_n:G=F_n-F_{n+1}, \;n=0,1,\ldots
\]
yield $F:G=0.$
Thus, $F=\mu_G(F)$, i.e., $F$ is a fixed point of the
mapping $\mu_G$.
\end{proof}
For $G,F_0\in\bB^+(\cH)$ define subspaces
\begin{equation}
\label{prosm}\begin{array}{l}
\Omega\stackrel{def}{=}{\rm{clos}}\left\{f\in\cH:(G+F_0)^{1/2}f\in{\rm ran\,} G^{1/2}\right\}, \\
{\mathfrak M}\stackrel{def}{=}\cH\ominus\Omega.
\end{array}
\end{equation}
Note that if a linear operator ${\mathcal V}$ is defined by
\begin{equation}
\label{contrv}
\left\{\begin{array}{l}x=(G+F_0)^{1/2}f+g\\
{\mathcal V} x=G^{1/2}f,\; f\in\cH,\; g\in{\xker\,}(G+F_0)
\end{array}
\right.,
\end{equation}
then ${\rm dom\,} {\mathcal V}={\rm ran\,}(G+F_0)^{1/2}\oplus{\xker\,}(G+F_0)$ is a linear manifold dense in $\cH$ and ${\mathcal V}$ is a contraction. Let $\overline{{\mathcal V}}$ be the continuation of ${\mathcal V}$ to $\cH$. Clearly $\overline{{\mathcal V}}={\mathcal V}^{**}$.
If we denote by $(G+F_0)^{-1/2}$ the Moore-Penrose pseudo-inverse to $(G+F_0)^{1/2}$, then from \eqref{contrv} one can get that
\begin{equation}
\label{opercv}
\begin{array}{l}
{\mathcal V}(G+F_0)^{1/2}=G^{1/2}=(G+F_0)^{1/2}{\mathcal V}^*,\\
{\mathcal V}^*=(G+F_0)^{-1/2}G^{1/2},\;{\rm ran\,} {\mathcal V}^*\subseteq{\rm \overline{ran}\,}(G+F_0),\\
{\mathcal V} g=G^{1/2}(G+F_0)^{-1/2}g,\; g\in{\rm ran\,}(G+F_0)^{1/2}.
\end{array}
\end{equation}
Moreover,
\begin{equation}
\label{11}
\Omega={\rm \overline{ran}\,} {\mathcal V}^*\oplus{\xker\,}(G+F_0), \; {\mathfrak M}={\xker\,} \left(\overline{{\mathcal V}}{\upharpoonright\,}{\rm \overline{ran}\,}(G+F_0)\right).
\end{equation}
Besides we define the following contractive linear operator
\begin{equation}
\label{contrw}
\left\{\begin{array}{l}x=(G+F_0)^{1/2}f+g\\
\cW x=F_0^{1/2}f,\; f\in\cH,\; g\in{\xker\,}(G+F_0).
\end{array}
\right.
\end{equation}
The operator $\cW$ is defined on ${\rm dom\,} \cW={\rm ran\,}(G+F_0)^{1/2}\oplus{\xker\,}(G+F_0)$ and
\begin{equation}
\label{opwa}
\begin{array}{l}
\cW(G+F_0)^{1/2}=F^{1/2}_0=(G+F_0)^{1/2}\cW^*,\\
\cW^*=(G+F_0)^{-1/2}F^{1/2}_0,\;{\rm ran\,} \cW^*\subseteq{\rm \overline{ran}\,} (G+F_0),\\
\cW h=F^{1/2}_0(G+F_0)^{-1/2}h,\; h\in{\rm ran\,} (G+F_0)^{1/2}.
\end{array}
\end{equation}
Let $\overline\cW=\cW^{**}$ be the continuous extension of $\cW$ to $\cH$. Clearly, $\overline\cW^*=\cW^*.$
Note that
\[
{\mathcal V}^*\overline{{\mathcal V}} h+\cW^*\overline \cW h=h,\;h\in{\rm \overline{ran}\,}(G+F_0).
\]
Set
\begin{equation}
\label{prosn}
\sN\stackrel{def}{=}{\xker\,}(I-\overline{\cW}\,\overline{\cW}^*).
\end{equation}
Since ${\xker\,} \cW^*={\xker\,} F_0$, the subspace $\sN$ is contained in ${\rm \overline{ran}\,} F_0$.
\begin{proposition}
\label{singar}
The equalities
\begin{multline}
\label{equival1}
{\rm ran\,} (I-\overline{\cW}\,\overline{\cW}^*)^{1/2}=\left\{f\in \cH: F^{1/2}_0f\in{\rm ran\,} G^{1/2}\right\}\\
=\left\{f\in \cH: F^{1/2}_0f\in{\rm ran\,} (F_0:G)^{1/2}\right\}
\end{multline}
hold.
\end{proposition}
\begin{proof}
Set $\cH_0\stackrel{def}{=}{\rm \overline{ran}\,}(G+F_0)$. Note that ${\xker\,} (G+F_0)={\xker\,} G\cap{\xker\,} F_0$.
Define
\begin{equation}
\label{mo}
M_0\stackrel{def}{=}\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0.
\end{equation}
Then $M_0\in\bB^+(\cH_0)$ and
\begin{equation}
\label{cvop}
\overline{{\mathcal V}}^*\,\overline{{\mathcal V}}{\upharpoonright\,}\cH_0=I_{\cH_0}-M_0=I_{\cH_0}-\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0.
\end{equation}
From \eqref{opercv} and \eqref{opwa}
\begin{multline}
\label{equiv11}
F^{1/2}_0f=G^{1/2}h\iff (G+F_0)^{1/2}\cW^*f=(G+F_0)^{1/2}{\mathcal V}^* h\\
\iff \cW^*f={\mathcal V}^* h.
\end{multline}
Equality \eqref{cvop} yields
\[
{\rm ran\,} {\mathcal V}^*={\rm ran\,} (I_{\cH_0}-\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0)^{1/2}.
\]
Hence \eqref{equiv11} is equivalent to the inclusion $f\in {\rm ran\,} (I-\overline \cW\overline\cW^*)^{1/2}.$ Application of \eqref{ukfdyj} completes the proof.
\end{proof}
Thus from \eqref{prosn} and \eqref{equival1} we get
\begin{equation}
\label{nob1}
\sN=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} G^{1/2}\right\}\right\}}.
\end{equation}
\begin{theorem}
\label{form1}
Let $G\in\bB^+(\cH)$, $F_0\in\bB^+(\cH)$, $F_n\stackrel{def}{=}\mu_G(F_{n-1})$, $n\ge 1$, $F\stackrel{def}{=}s-\lim_{n\to \infty}F_n$.
Then
\begin{equation}
\label{prosm1}
F=(G+F_0)^{1/2}P_{{\mathfrak M}}(G+F_0)^{1/2}
\end{equation}
and
\begin{equation}
\label{prosn1} F=F^{1/2}_0P_\sN F^{1/2}_0,
\end{equation}
where ${\mathfrak M}$ and $\sN$ are given by \eqref{prosm} and \eqref{nob1}, respectively.
\end{theorem}
\begin{proof}
From \eqref{contrv}, \eqref{contrw}, \eqref{mo}, \eqref{cvop}, \eqref{11} we have
\[
\begin{array}{l}
F_0=(G+F_0)^{1/2}M_0(G+F_0)^{1/2},\\
G=(G+F_0)^{1/2}(I_{\cH_0}-M_0)(G+F_0)^{1/2},
\end{array}
\]
\[
{\xker\,} (I_{\cH_0}-M_0)={\mathfrak M},\;{\rm \overline{ran}\,} (I_{\cH_0}-M_0)=\cH_0\ominus{\mathfrak M}=\Omega\ominus{\xker\,} (G+F_0).
\]
Then by Lemma \ref{yu1}
\begin{multline*}
F_0:G=(G+F_0)^{1/2}(M_0:(I_{\cH_0}-M_0))(G+F_0)^{1/2}\\
=(G+F_0)^{1/2}(M_0-M^2_0)(G+F_0)^{1/2}.
\end{multline*}
It follows that
\[
F_1=\mu_G(F_0)=F_0-F_0:G=(G+F_0)^{1/2}M^2_0(G+F_0)^{1/2}.
\]
Then (further $I=I_{\cH_0}$ is the identity operator) from \eqref{trans}
\begin{multline*}
F_1:G=(G+F_0)^{1/2}\left((I-M_0):M^2_0\right)(G+F_0)^{1/2}\\=(G+F_0)^{1/2}\left((I-M_0)M^2_0(I-M_0+M^2_0)^{-1}\right)(G+F_0)^{1/2},
\end{multline*}
\begin{multline*}
F_2\stackrel{def}{=}\mu_G(F_1)=F_1-F_1:G\\
=(G+F_0)^{1/2}\left(M_0^2-(I-M_0)M^2_0(I-M_0+M^2_0)^{-1}\right)(G+F_0)^{1/2}\\
=(G+F_0)^{1/2}M^4_0(I-M_0+M^2_0)^{-1}(G+F_0)^{1/2}.
\end{multline*}
Let us show by induction that for all $n\in\dN$
$$F_n\stackrel{def}{=}\mu_G(F_{n-1})=(G+F_0)^{1/2}M_n(G+F_0)^{1/2},$$
where
\begin{enumerate}
\item $\{M_n\}$ is a non-increasing sequence from $\bB^+(\cH_0)$,
\item $I-M_0+M_n$ is positive definite,
\item $M_n$ commutes with $M_0$,
\item $M_{n+1}=(I-M_0+M_n)^{-1}M^2_n.$
\end{enumerate}
All statements are already established for $n=1$ and for $n=2$. Suppose that all statements are valid for some $n$.
Further, using the equality $M_0M_n=M_nM_0$, we have
\begin{multline*}
I-M_0+M_{n+1}=I-M_0+(I-M_0+M_n)^{-1}M^2_n\\
=(I-M_0+M_n)^{-1}\left((I-M_0+M_n)(I-M_0)+M^2_n\right)\\
=(I-M_0+M_n)^{-1}\left((I-M_0)^2+M_n(I-M_0)+M^2_n\right)\\
=(I-M_0+M_n)^{-1}\left(\left((I-M_0)+\frac{1}{2}M_n\right)^2+\frac{3}{4}M^2_n\right).
\end{multline*}
Since
\[
(I-M_0)+\frac{1}{2}M_n\ge \frac{1}{2}\left(I-M_0+M_n\right),
\]
and $I-M_0+M_n$ is positive definite, we get that the operator $I-M_0+M_{n+1}$ is positive definite.
\begin{multline*}
M_0M_{n+1}=M_0(I-M_0+M_n)^{-1}M^2_n\\
=(I-M_0+M_n)^{-1}M^2_n M_0=M_{n+1}M_0.
\end{multline*}
From \eqref{trans} we have
\begin{multline*}
F_{n+1}:G=(G+F_0)^{1/2}\left((I-M_0):M_{n+1}\right)(G+F_0)^{1/2}\\
=(G+F_0)^{1/2}(I-M_0)M_{n+1}(I-M_0+M_{n+1})^{-1}(G+F_0)^{1/2},
\end{multline*}
and
\begin{multline*}
F_{n+2}=\mu_G(F_{n+1})=F_{n+1}-F_{n+1}:G\\
=(G+F_0)^{1/2}\left(M_{n+1}-(I-M_0)M_{n+1}(I-M_0+M_{n+1})^{-1}\right)(G+F_0)^{1/2}\\
=(G+F_0)^{1/2}(I-M_0+M_{n+1})^{-1}M^2_{n+1}(G+F_0)^{1/2}\\
=(G+F_0)^{1/2}M_{n+2}(G+F_0)^{1/2}.
\end{multline*}
One can prove by induction that inequality $I-M_n\ge 0$ and the equalities $M_{n+1}=(I-M_0+M_n)^{-1}M^2_n$ for all $n\in\dN$
imply
$${\xker\,}(I-M_n)={\xker\,}(I-M_0),\;n\in\dN.$$
Let $M=\lim\limits_{n\to\infty} M_n$. Then $F=(G+F_0)^{1/2}M(G+F_0)^{1/2}$. Since $M_{n+1}(I-M_0+M_{n})=M^2_n,$
we get
$(I-M_0)M=0$. Thus, ${\rm ran\,} M\subseteq{\xker\,}(I-M_0).$ Since $M{\upharpoonright\,}{\xker\,}(I-M_0)=I$, we get
$M=P_{{\xker\,}(I-M_0)}.$ It follows that \eqref{prosm1} holds true.
The inequalities $0\le \mu_G(X)\le X$ yield $F_n=F^{1/2}_0N_nF^{1/2}_0,$
where $\{N_n\}$ is a non-increasing sequence from $\bB^+(\cH)$, $0\le N_n\le I$ for all $n\in\dN$, and ${\xker\,} N_n\supseteq{\xker\,} F_0$. Let $ N=s-\lim_{n\to \infty}N_n$. Then
$F=F^{1/2}_0 NF^{1/2}_0$.
From \eqref{contrw} we have
\[
F^{1/2}_0=\cW(G+F_0)^{1/2}=(G+F_0)^{1/2}\cW^*.
\]
Since $M_0=\overline\cW^*\overline\cW{\upharpoonright\,}\cH_0$, we get $\overline\cW=VM^{1/2}_0$, where $V$ is an isometry from ${\rm \overline{ran}\,} M_0$ onto ${\rm \overline{ran}\,} F_0$. Thus
\[
F^{1/2}_0=VM^{1/2}_0(G+F_0)^{1/2},\; M^{1/2}_0(G+F_0)^{1/2}=V^*F^{1/2}_0.
\]
Because $P_{{\mathfrak M}}=M^{1/2}_0P_{{\mathfrak M}} M^{1/2}_0$ we get from $F=(G+F_0)^{1/2}P_{{\mathfrak M}}(G+F_0)^{1/2}$:
\[
F=F^{1/2}_0VP_{{\mathfrak M}} V^*F^{1/2}_0.
\]
The operator $VP_{{\mathfrak M}} V^*$ is an orthogonal projection in ${\rm \overline{ran}\,} F_0$. Denote $\sN_0={\rm ran\,} VP_{{\mathfrak M}} V^*=V{\rm ran\,} P_{{\mathfrak M}}.$
From $ (G+F_0)^{1/2}M^{1/2}_0h=F^{1/2}_0Vh$, for all $h\in{\rm \overline{ran}\,} M_0$ we obtain
\[
(G+F_0)^{1/2}\varphi=F^{1/2}_0V\varphi,\; \varphi\in{\mathfrak M}={\xker\,}(I_{\cH_0}-M_0),
\]
and then
\[
\varphi=(G+F_0)^{-1/2}F^{1/2}_0V\varphi.
\]
Hence
\[
(G+F_0)^{-1/2}F^{1/2}_0g=V^*g,\; g=V\varphi\in\sN_0.
\]
On the other hand
\[
(G+F_0)^{-1/2}F^{1/2}_0x=\overline{\cW}^*x\quad\mbox{for all}\quad x\in\cH.
\]
It follows that $\overline{\cW}^*g=V^*g$ for all $g\in\sN_0$. So
\[
g\in\sN_0\iff ||\overline{\cW}^*g||=||g||\iff g\in{\xker\,} (I-\overline{\cW}\,\overline{\cW}^*).
\]
Thus, $\sN_0$ coincides with $\sN$ defined in \eqref{prosn}, and \eqref{prosn1} holds true. \end{proof}
\begin{corollary}\label{commute}
Suppose $F_0$ commutes with $G$. Then $\sN$ defined in \eqref{prosn} takes the form $\sN={\xker\,} G\cap{\rm \overline{ran}\,} F_0.$
In particular,
\begin{enumerate}
\item if ${\xker\,} F_0\supseteq{\xker\,} G$, then $F=0$,
\item if $F_0=G$, then $F=0$,
\item if ${\xker\,} G=\{0\}$, then $F=0$.
\end{enumerate}
\end{corollary}
\begin{proof}
If $F_0G=GF_0$, then $F^{1/2}_0(G+F_0)^{-1/2}f=(G+F_0)^{-1/2}F^{1/2}_0f$ for all $f\in{\rm ran\,} (G+F_0)^{1/2}$. Hence, $\cW^*=\overline\cW=\cW^{**}$ and $\overline \cW$ is a nonnegative contraction. It follows from \eqref{prosn} that
\[
\sN={\xker\,}(I-\overline{\cW}^2)={\xker\,} (I-\cW^*)
={\xker\,} (I-(G+F_0)^{-1/2}F_0^{1/2}).
\]
Clearly
\[
f\in {\xker\,} (I-(G+F_0)^{-1/2}F_0^{1/2})\iff f\in{\xker\,} G\cap{\rm \overline{ran}\,} F_0.
\]
Furthermore, applying \eqref{prosn1} we get the implications
\[
\begin{array}{l}
{\xker\,} F_0\supseteq{\xker\,} G\Longrightarrow \sN=\{0\},\\
{\xker\,} G=\{0\}\Longrightarrow\sN=\{0\}.
\end{array}
\]
\end{proof}
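A simple two-dimensional example illustrates Corollary \ref{commute}. Let $\dim\cH=2$ and, in some orthonormal basis, $G={\rm diag}(1,0)$, $F_0={\rm diag}(a,b)$ with $a,b>0$. Then $\sN={\xker\,} G\cap{\rm \overline{ran}\,} F_0$ is the second coordinate axis and \eqref{prosn1} gives $F={\rm diag}(0,b)$. The same answer is obtained directly from the iterations: in the first coordinate $\mu_G$ acts as $t\mapsto t^2/(t+1)$, so the corresponding entries of $F_n$ decrease to $0$, while in the second coordinate $G$ vanishes, hence $F_n:G$ has zero entry there and the value $b$ is never changed.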
\begin{corollary}
\label{new1}
If $G\in\bB^+_0(\cH)$ and if $F_0$ is positive definite, then $F=0$.
\end{corollary}
\begin{proof}
In the case when $F_0$ is positive definite the subspace ${\mathfrak M}$ defined in \eqref{prosm} can be described as follows:
${\mathfrak M}= (G+F_0)^{1/2}{\xker\,} G$. Hence, if ${\xker\,} G=\{0\}$, then ${\mathfrak M}=\{0\}$ and \eqref{prosm1} gives $F=0$.
\end{proof}
\begin{theorem} Let $G\in\bB^+(\cH)$, $F_0 \in \bB^+(\cH)$, $F_{n+1}=\mu_G(F_n)$, $n\ge 0$, $F=\lim_{n\to \infty}F_n$.
\begin{enumerate}
\item If ${\rm ran\,} F^{1/2}_0\subseteq{\rm ran\,} G^{1/2}$, then $F=0$.
\item If ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{\alpha}$, where $\alpha<1/2$, then $F=0$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Let ${\rm ran\,} F^{1/2}_0\subseteq{\rm ran\,} G^{1/2}$. Then
$
F^{1/2}_0\cH\subseteq{\rm ran\,} G^{1/2}
$. From \eqref{nob1} and \eqref{prosn1} it follows $F=0$.
(2) Suppose ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{\alpha},$ where $\alpha< 1/2$. Then by the Douglas theorem \cite{Doug} the operator $F_0$ is of the form
\[
F_0=G^{\alpha}Q_0G^{\alpha},
\]
where $Q_0$ is positive definite in $\cH_0={\rm \overline{ran}\,} G$.
Hence, $G+G^{\alpha}Q_0G^{\alpha}=G^{\alpha}(G^{1-2\alpha}+Q_0)G^{\alpha}$, and
\begin{multline*}
\mu_G(F_0)= \left((G+G^{\alpha}Q_0G^{\alpha})^{-1/2}G^{\alpha}Q_0G^{\alpha} \right)^*(G+G^{\alpha}Q_0G^{\alpha})^{-1/2}G^{\alpha}Q_0G^{\alpha}\\
=G^{\alpha}Q_0(G^{1-2\alpha}+Q_0)^{-1}Q_0G^{\alpha}=G^{\alpha}\mu_{G^{1-2\alpha}}(Q_0)G^{\alpha}.
\end{multline*}
Note that $Q_1\stackrel{def}{=}\mu_{G^{1-2\alpha}}(Q_0)$ is positive definite. Therefore $F_1=\mu_G(F_0)$ possesses the property
${\rm ran\,} F^{1/2}_1={\rm ran\,} G^{\alpha}$. By induction we can prove that
\[
F_{n+1}=\mu_G(F_n)=G^{\alpha}\mu_{G^{1-2\alpha}}(Q_n)G^{\alpha}
=G^{\alpha}Q_{n+1}G^{\alpha}.
\]
Using that $Q_0$ is positive definite and applying Corollary \ref{new1}, we get $\lim_{n\to\infty}Q_n=0$. Hence
\[
F=\lim\limits_{n\to\infty}F_n=\lim\limits_{n\to\infty}G^{\alpha}Q_nG^{\alpha}=0.
\]
\end{proof}
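The following simple example illustrates assertion (2): if $G\in\bB^+_0(\cH)$ is a compact operator on an infinite-dimensional space and $F_0=G^{1/2}$, then ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{1/4}$, i.e., $\alpha=1/4$, and ${\rm ran\,} F^{1/2}_0$ strictly contains ${\rm ran\,} G^{1/2}$, so that assertion (1) is not applicable; nevertheless $F=0$ (this also follows from Corollary \ref{commute}, since here $F_0$ commutes with $G$ and ${\xker\,} G=\{0\}$).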
\begin{corollary}
\label{mnogo}
Let $\lambda>0$. Define a subspace
\[
{\mathfrak M}_\lambda=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:(\lambda G+F_0)^{1/2}g\in{\rm ran\,} G^{1/2}\right\}\right\}}.
\]
Then
\[
(\lambda G+F_0)^{1/2}P_{{\mathfrak M}_\lambda}(\lambda G+F_0)^{1/2}
=F^{1/2}_0P_{\sN} F^{1/2}_0,
\]
where $\sN$ is given by \eqref{nob1}.
\end{corollary}
\begin{proof} Replace $G$ by $\lambda G$ and consider a sequence
$$F_0, F_1=\mu_{\lambda G}(F_0),\;F_{n}=\mu_{\lambda G}(F_{n-1}),\ldots.$$
Clearly
\begin{multline*}
\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} (\lambda G)^{1/2}\right\}\right\}}\\=
\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:F^{1/2}_0g\in{\rm ran\,} G^{1/2}\right\}\right\}}=\sN.
\end{multline*}
By Theorem \ref{form1}
\[
s-\lim\limits_{n\to\infty}F_n=F^{1/2}_0P_\sN F^{1/2}_0.
\]
On the other hand, application of \eqref{prosm1} gives
\[
s-\lim\limits_{n\to\infty}F_n=(\lambda G+F_0)^{1/2}P_{{\mathfrak M}_\lambda}(\lambda G+F_0)^{1/2}.
\]
\end{proof}
\begin{theorem}
\label{interzero}
Let $G\in\bB^+_0(\cH)$, ${\rm ran\,} G\ne \cH$. Let $F_0\in\bB^+(\cH)$, $F_n\stackrel{def}{=}\mu_G(F_{n-1})$, $n\ge 1$, $F\stackrel{def}{=}s-\lim_{n\to \infty}F_n$.
Then
\begin{multline*}
F\in\bB^+_0(\cH)\Longrightarrow \left\{\begin{array}{l}F_0\in\bB^+_0(\cH),\\
{\rm ran\,}(G+F_0)\cap{\rm ran\,} G^{1/2}=\{0\}\end{array}\right.\\
\iff
\left\{\begin{array}{l}F_0\in\bB^+_0(\cH),\\
{\rm ran\,} F_0\cap{\rm ran\,} G^{1/2}=\{0\}\end{array}\right..
\end{multline*}
Moreover, the following conditions are equivalent:
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item $F\in\bB^+_0(\cH)$,
\item ${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}$,
\item for each converging sequence $\{y_n\}\subset{\rm ran\,} G^{1/2}$ with
$$\lim_{n\to\infty}y_n\in{\rm ran\,} F_0,$$
the sequence
$\{(G+F_0)^{-1/2}y_n\}$ is diverging,
\item ${\rm ran\,} F^{1/2}_0\cap{\rm clos\,}\left\{F^{-1/2}_0\left({\rm ran\,} F^{1/2}_0\cap {\rm ran\,} G^{1/2}\right)\right\}=\{0\}$,
\item for each converging sequence $\{z_n\}\subset {\rm ran\,} F^{1/2}_0\cap{\rm ran\,} G^{1/2}$ with
$$\lim_{n\to\infty}z_n\in{\rm ran\,} F_0,$$
the sequence $\{F^{-1/2}_0 z_n\}$ is diverging.
\end{enumerate}
\end{theorem}
\begin{proof}
Clearly $F\in\bB^+_0(\cH)\iff{\xker\,} F=\{0\}$. Since ${\xker\,} (G+F_0)=\{0\},$ from \eqref{prosm1}, \eqref{prosm}, \eqref{contrv}, \eqref{opercv} we obtain the equivalences
\begin{multline*}
{\xker\,} F=\{0\}\iff\Omega\cap {\rm ran\,} (G+F_0)^{1/2}=\{0\}\\
\iff{\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}.
\end{multline*}
So (i)$\iff$(ii).
In particular
$$ {\xker\,} F=\{0\}\Longrightarrow {\rm ran\,} (G+F_0)^{1/2}\cap {\rm ran\,} (G+F_0)^{-1/2}G^{1/2}=\{0\}.$$
Hence
\begin{equation}
\label{inters}
{\rm ran\,} (G+F_0)\cap {\rm ran\,} G^{1/2}=\{0\}.
\end{equation}
Assume that ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0\ne\{0\}$. Then $F_0x=G^{1/2}y\ne 0$
for some $x,y\in \cH$. Set $z\stackrel{def}{=}y+G^{1/2}x$. Then $F_0x=G^{1/2}(z-G^{1/2}x)$ and $(G+F_0)x=G^{1/2}z$, which contradicts \eqref{inters}.
Conversely, if ${\rm ran\,} (G+F_0)\cap {\rm ran\,} G^{1/2}\ne\{0\}$, then ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0\ne\{0\}$.
So, \eqref{inters} is equivalent to ${\rm ran\,} G^{1/2}\cap{\rm ran\,} F_0=\{0\}.$
Note that the latter is equivalent to $F^2_0:G=0.$
Suppose ${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}\ne\{0\}.$ Then there is a sequence $\{x_n\}\subset \cH$ and a vector $f\in\cH$ such that
\[
(G+F_0)^{1/2}f=\lim\limits_{n\to\infty}(G+F_0)^{-1/2}G^{1/2}x_n
\]
Hence $\lim\limits_{n\to\infty}G^{1/2}x_n=(G+F_0)f.$
Let $y_n=G^{1/2}(x_n-G^{1/2}f),$ $n\in\dN$. Then $\{y_n\}\subset{\rm ran\,} G^{1/2}$, $\lim\limits_{n\to\infty}y_n=F_0f,$
and
\[
\lim\limits_{n\to\infty}(G+F_0)^{-1/2}y_n=(G+F_0)^{1/2}f-(G+F_0)^{-1/2}Gf.
\]
Conversely, if there is a converging sequence $\{y_n=G^{1/2}z_n\}$ such that
$$\lim_{n\to\infty}y_n= F_0f$$
and the sequence $\{(G+F_0)^{-1/2}y_n\}$ converges as well, then from
$$\lim_{n\to\infty}G^{1/2}(z_n+G^{1/2}f)=(G+ F_0)f$$
and because the operator $(G+F_0)^{-1/2}$ is closed, we get
\begin{multline*}
(G+F_0)^{1/2}f=(G+F_0)^{-1/2}(G+F_0)f\\
=\lim\limits_{n\to\infty}(G+F_0)^{-1/2}G^{1/2}(z_n+G^{1/2}f).
\end{multline*}
This means that
${\rm ran\,} (G+F_0)^{1/2}\cap {\rm \overline{ran}\,} (G+F_0)^{-1/2}G^{1/2}\ne\{0\}.$ Thus, conditions (i), (ii), and (iii) are equivalent.
Using \eqref{ukfdyj}, \eqref{contrw}, \eqref{opwa}, \eqref{equival1}, \eqref{prosn1}, and
Theorem \ref{form1}, the equivalences (i)$\iff$(iv)$\iff$(v) can be proved similarly.
\end{proof}
\section{The mapping $\tau_G$}
Recall that the mapping $\mu_G$ is defined by \eqref{mapmu} and by $\mu^{[n]}_G$ we denote the $n$th iteration of the mapping $\mu_G$.
Note that
\[
\mu^{[n+1]}_G(X)=\mu^{[n]}_G(X)-\mu^{[n]}_G(X):G,\; n\ge 0.
\]
Hence
\begin{equation}
\label{rec}
\sum\limits_{k=0}^n\left(\mu^{[k]}_G(X):G\right)=X-\mu^{[n+1]}_G(X).
\end{equation}
Clearly
\[
X\ge \mu_G(X)\ge \mu^{[2]}_G(X)\ge\cdots\ge \mu^{[n]}_G(X)\ge\cdots.
\]
Therefore, the mapping
\[
\bB^+(\cH)\ni X\mapsto\tau_G(X)\stackrel{def}{=}s-\lim\limits_{n\to\infty}\mu^{[n]}_G(X)\in\bB^+(\cH)
\]
is well defined. Besides, using \eqref{rec} and the monotonicity of parallel sum, we see that
\begin{enumerate}
\item $ \mu^{[n]}_G(X):G\ge \mu^{[n+1]}_G(X):G$ for all $n\in\dN_0,$
\item the series $\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)$ is converging in the strong sense and
\begin{equation}
\label{ryad}
\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)=X-\tau_G(X).
\end{equation}
\end{enumerate}
Hence the mapping $\tau_G$ can be defined as follows:
\[
\tau_G(X)\stackrel{def}{=}X-\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right).
\]
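A one-dimensional example may serve as a sanity check. If $\dim\cH=1$, $G=g>0$ and $X=x\ge 0$, then $\mu^{[n]}_G(x)=x_n$ with $x_{n+1}=x_n^2/(x_n+g)\to 0$, so $\tau_G(x)=0$, while by \eqref{rec} the series in \eqref{ryad} telescopes:
\[
\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(x):g\right)=\lim\limits_{n\to\infty}\left(x-\mu^{[n+1]}_G(x)\right)=x.
\]
More generally, if $\dim\cH<\infty$ and ${\xker\,} G=\{0\}$, then ${\rm ran\,} G^{1/2}=\cH$, hence ${\rm ran\,} X^{1/2}\subseteq{\rm ran\,} G^{1/2}$ and $\tau_G(X)=0$ for every $X\in\bB^+(\cH)$; nontrivial values of $\tau_G$ therefore occur only when ${\xker\,} G\ne\{0\}$ or $\cH$ is infinite-dimensional.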
Most of the following properties of the mapping $\tau_G$ are already established in the statements above.
\begin{theorem}
\label{propert}
The mapping $\tau_G$ possesses the properties:
\begin{enumerate}
\item $\tau_G(\mu_G(X))=\tau_G(X)$ for all $X\in\bB^+(\cH),$ therefore,\\ $\tau_G(\mu_G^{[n]}(X))=\tau_G(X)$ for all natural $n$;
\item $\tau_G(X):G=0$ for all $X\in\bB^+(\cH);$
\item $\tau_G(X)\le X$ for all $X\in\bB^+(\cH)$ and $\tau_G(X)=X$ $\iff$ $X:G=0$ $\iff$ ${\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$;
\item $\tau_G(X)=\tau_G(\tau_G(X))$ for an arbitrary $X\in\bB^+(\cH)$;
\item define a subspace
\begin{equation}
\label{ghjcn1}
{\mathfrak M}
\stackrel{def}{=}\cH\ominus{\rm{clos}}\left\{f\in\cH:\;(G+X)^{1/2}f\in{\rm ran\,} G^{1/2}\right\},
\end{equation}
then
\begin{equation}
\label{formula11}
\tau_G(X)=(G+X)^{1/2}P_{{\mathfrak M}}(G+X)^{1/2};
\end{equation}
\item define a contraction $\cT=(G+X)^{-1/2}X^{1/2}$
and subspace
\[
\sL\stackrel{def}{=}{\xker\,}(I-\cT^*\cT),
\]
then
\begin{equation}
\label{ghjcn2}
\sL=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:\;X^{1/2}g\in{\rm ran\,} G^{1/2}\right\}\right\}}
\end{equation}
and
\begin{equation}
\label{formula111}
\tau_G(X)=X^{1/2}P_\sL X^{1/2};
\end{equation}
in particular, if $X$ is positive definite, then $\sL=X^{1/2}{\xker\,} G$;
\item $XG=GX$ $\Longrightarrow$ $\tau_G(X)=X^{1/2}P_{\sN}X^{1/2}$, where $\sN$ takes the form $\sN={\xker\,} G\cap{\rm \overline{ran}\,} X$;
\item $\tau_G(G)=0$;
\item ${\rm ran\,} X^{1/2}\subseteq{\rm ran\,} G^{1/2}$ $\Longrightarrow$ $\tau_G(X)=0;$ in particular,
$$\tau_G\left(X:G\right)=0$$
for every $X\in\bB^+(\cH)$;
\item ${\rm ran\,} X^{1/2}={\rm ran\,} G^{\alpha}$, $\alpha<1/2$ $\Longrightarrow$ $\tau_G(X)=0;$
\item $\tau_G(\lambda G+X)=\tau_{\eta G}(X)=\tau_G(X)$ for all $\lambda>0$ and $\eta>0$;
\item $\tau_G(\xi X)=\xi \tau_G(X),$ $\xi>0;$
\item if ${\rm ran\,} G^{1/2}_1={\rm ran\,} G^{1/2}_2$, then
\[
\tau_{G_1}(X)=\tau_{G_2}(X)=
\tau_{G_1}(G_2+X)=\tau_{G_2}(G_1+X)
\]
for all $X\in\bB^+(\cH)$;
\item if ${\rm ran\,} G^{1/2}_1\subseteq{\rm ran\,} G^{1/2}_2$, then $\tau_{G_1}(X)\ge \tau_{G_2} (X)$ for all $X\in\bB^+(\cH)$;
\item $\tau_G(X)\in \bB^+_0(\cH)$ $\Longrightarrow$ $X\in\bB^+_0(\cH)$ and $X^2:G=0$;
\item the following conditions are equivalent:
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item $\tau_G(X)\in\bB^+_0(\cH)$,
\item $ X\in \bB^+_0(\cH)$ and ${\rm ran\,} (G+X)^{1/2}\cap{\rm clos\,}\{(G+X)^{-1/2}{\rm ran\,} G^{1/2}\}=\{0\}$,
\item $ X\in \bB^+_0(\cH)$ and for each converging sequence $\{y_n\}\subset{\rm ran\,} G^{1/2}$ with
$$\lim_{n\to\infty}y_n\in{\rm ran\,} X,$$
the sequence
$\{(G+X)^{-1/2}y_n\}$ is diverging,
\item $ X\in \bB^+_0(\cH)$ and
${\rm ran\,} X^{1/2}\cap{\rm clos\,}\left\{X^{-1/2}\left({\rm ran\,} X^{1/2}\cap {\rm ran\,} G^{1/2}\right)\right\}=\{0\}$,
\item $ X\in \bB^+_0(\cH)$ and for each converging sequence $\{z_n\}\subset {\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}$ with
$$\lim_{n\to\infty}z_n\in{\rm ran\,} X,$$
the sequence $\{X^{-1/2} z_n\}$ is diverging;
\end{enumerate}
\item if $X$ is a compact operator, then $\tau_G(X)$ is a compact operator as well; moreover, if $X$ belongs to the Schatten--von Neumann class $S_p$ \cite{GK}, then $\tau_G(X)\in S_p.$
\end{enumerate}
\end{theorem}
\begin{proof}
Equalities in (6) follow from \eqref{contrw}, Proposition \ref{singar}, and Theorem \ref{form1}; (11) follows from Corollary \ref{mnogo}.
If $\xi>0$, then
\begin{multline*}
\tau_G(\xi X)=(G+\xi X)^{1/2}P_{{\mathfrak M}_{1/\xi}} (G+\xi X)^{1/2}\\
=\xi ((1/\xi)G+X)^{1/2}P_{{\mathfrak M}_{1/\xi}} ((1/\xi)G+X)^{1/2}\\
=\xi\tau_G(X).
\end{multline*}
This proves (12).
If ${\rm ran\,} G^{1/2}_1={\rm ran\,} G^{1/2}_2$, then
\[
X^{1/2}g\in{\rm ran\,} G^{1/2}_1\iff X^{1/2}g\in{\rm ran\,} G^{1/2}_2.
\]
Now the equality $\tau_{G_1}(X)=\tau_{G_2}(X)$ follows from property (6). Using (11) we get
\begin{multline*}
\tau_{G_1}(G_2+X)=\tau_{G_2}(G_2+X)=\tau_{G_2}(X)\\
=\tau_{G_1}(X)=\tau_{G_1}(G_1+X)=\tau_{G_2}(G_1+X).
\end{multline*}
So, property (13) is proved. If ${\rm ran\,} G_1^{1/2}\subseteq{\rm ran\,} G^{1/2}_2$, then
$$X^{1/2}g\in{\rm ran\,} G^{1/2}_1\Longrightarrow X^{1/2}g\in{\rm ran\,} G^{1/2}_2.$$
Hence
\begin{multline*}
\sL_1=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:X^{1/2}g\in{\rm ran\,} G^{1/2}_1\right\}\right\}}\\
\supseteq \sL_2=\cH\ominus{\left\{{\rm clos}\left\{g\in\cH:X^{1/2}g\in{\rm ran\,} G^{1/2}_2\right\}\right\}},
\end{multline*}
and
\[
\tau_{G_1}(X)=X^{1/2}P_{\sL_1}X^{1/2}\ge X^{1/2}P_{\sL_2}X^{1/2}=\tau_{G_2}(X).
\]
If $X$ is a compact operator, then from $\tau_G(X)=X^{1/2}P_\sL X^{1/2}$ it follows that $\tau_G(X)$ is a compact operator. If $X\in S_p$, where $p\ge 1$ and $S_p$ is the Schatten--von Neumann ideal, then from $X^{1/2},P_\sL X^{1/2}\in S_{2p}$ it follows that $X^{1/2}P_\sL X^{1/2}\in S_p$ \cite[page 92]{GK}.
\end{proof}
\begin{remark}
Let $G\in\bB^+(\cH)$ be given. All $ \widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$ are of the form
\[
\widetilde G=G^{1/2}Q G^{1/2},
\]
where $Q,Q^{-1}\in\bB^+({\rm \overline{ran}\,} G)$.
\end{remark}
\begin{remark}
\label{extr}
Let $G,\widetilde G\in\bB^+(\cH)$ and ${\rm ran\,} G^{1/2}={\rm ran\,}\widetilde G^{1/2}$.
The equalities
$$\tau_G(\widetilde G+X)=(\widetilde G+X)^{1/2}\widetilde P(\widetilde G+X)^{1/2}=\tau_G(X)=X^{1/2}P_\sL X^{1/2},$$
where $\widetilde P$ is the orthogonal projection onto the subspace
\[
\cH\ominus{\rm{clos}}\left\{f\in\cH:(\widetilde G+X)^{1/2}f\in{\rm ran\,} G^{1/2}\right\},
\]
see \eqref{ghjcn1} and \eqref{ghjcn2},
show that $\tau_G (X)$ is an extreme point of the operator interval $[0,X]$ and of the operator intervals $[0, \widetilde G+X]$, cf. \cite{Ando_1996}.
\end{remark}
\begin{remark}
\label{osta}
Let $G,X\in\bB_0^+(\cH)$, ${\rm ran\,} G^{1/2}\cap {\rm ran\,} X^{1/2}=\{0\}$. From properties (13) and (16) in Theorem \ref{propert} it follows that if the equality
\[
{\rm ran\,} (G+X)^{1/2}\cap{\rm \overline{ran}\,}((G+X)^{-1/2}G^{1/2})=\{0\}
\]
holds true, then it remains valid if $G$ is replaced by $\widetilde G$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}.$
\end{remark}
\begin{proposition}
\label{polez}
1) Assume $G\in \bB^+(\cH)$. (a) If $X:G\ne 0,$ then $\left(\mu^{[n]}_G(X)\right):G\ne 0$ for all $n$.
(b) If $X\in\bB^+_0(\cH)$, then $\mu^{[n]}_G(X)\in\bB^+_0(\cH)$ for all $n$. Moreover, if ${\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2},$ then
${\rm ran\,}\left(\mu^{[n]}_G(X)\right)^{1/2}={\rm ran\,} X^{1/2}$ for all $n$.\\
\noindent 2) If $G\in\bB^+_0(\cH)$ and $\tau_G(X)\in\bB^+_0(\cH)$, then $\mu^{[n]}_G(X)\in\bB^+_0(\cH)$ and
\begin{equation}
\label{dobav2}
{\rm ran\,} \left(\mu^{[n]}_G(X)\right)^{1/2}\cap{\rm clos\,}\left\{\left(\mu^{[n]}_G(X)\right)^{-1/2}{\rm ran\,} G^{1/2}\right\}=\{0\},
\end{equation}
in particular, $\left(\mu^{[n]}_G(X)\right)^2:G=0$ ($\iff {\rm ran\,}\mu^{[n]}_G(X)\cap{\rm ran\,} G^{1/2}=\{0\}$) for all $n$.
\end{proposition}
\begin{proof} Due to the property $\tau_G(\mu_G(X))=\tau_G(X)$ for all $X\in\bB^+(\cH)$,
it is sufficient to prove that the assertions of the proposition hold for $n=1$. Let $\cH_0={\rm \overline{ran}\,}(G+X)$.
There exists $M\in \bB^+(\cH_0)$ such that
\[
X=(G+X)^{1/2}M(G+X)^{1/2}, \; G=(G+X)^{1/2}(I-M)(G+X)^{1/2}.
\]
Then
\begin{multline*}
\mu_G(X)=X-X:G\\=(G+X)^{1/2}M(G+X)^{1/2}-(G+X)^{1/2}M(I-M)(G+X)^{1/2}\\
=(G+X)^{1/2}M^2(G+X)^{1/2}.
\end{multline*}
It follows
\[
{\rm ran\,}\left(\mu_G(X)\right)^{1/2}=(G+X)^{1/2}{\rm ran\,} M.
\]
Because $X:G\ne 0$, we have ${\rm ran\,} X^{1/2}\cap{\rm ran\,} G^{1/2}\ne\{0\}$. Therefore
\[
{\rm ran\,} M^{1/2}\cap{\rm ran\,} (I-M)^{1/2}\ne \{0\}.
\]
This means that there are $f,h\in\cH$ such that $M^{1/2}f=(I-M)^{1/2}h\ne 0$. Hence
\[
Mf=(I-M)^{1/2}M^{1/2}h.
\]
Since ${\rm ran\,}(X:G)^{1/2}=(G+X)^{1/2}{\rm ran\,} (M-M^2)^{1/2}$, we get
$${\rm ran\,}\left(\mu_G(X)\right)^{1/2}\cap{\rm ran\,}(X:G)^{1/2}\ne \{0\}.$$
But ${\rm ran\,}(X:G)^{1/2}\subseteq{\rm ran\,} G^{1/2}$. Hence $\mu_G(X):G\ne 0.$
Clearly
\begin{multline*}
{\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2}\iff {\rm ran\,} M^{1/2}\supseteq{\rm ran\,} (I-M)^{1/2}\\
\iff{\rm ran\,} M=\cH_0.
\end{multline*}
Hence
\begin{multline*}
{\rm ran\,}\left(\mu_G(X)\right)^{1/2}=(G+X)^{1/2}{\rm ran\,} M={\rm ran\,} (G+X)^{1/2}\\
={\rm ran\,} X^{1/2}\supseteq{\rm ran\,} G^{1/2}.
\end{multline*}
If ${\xker\,} X=\{0\}$, then ${\xker\,}(G+X)=\{0\}$ and ${\rm ran\,}(G+X)^{1/2}\cap{\xker\,} M =\{0\}$. It follows that ${\rm ran\,}(G+X)^{1/2}\cap{\xker\,} M^2 =\{0\}$.
Hence ${\xker\,}\mu_G(X)=\{0\}$.
Since $\tau_G(\mu_G(X))=\tau_G(X)$ and $\tau_G(X)\in\bB^+_0(\cH)$ implies ${\xker\,} X=\{0\}$ and $X^2:G=0$, see Theorem \ref{interzero}, we get
$$\tau_G(X)\in\bB^+_0(\cH)\Longrightarrow {\xker\,}\mu_G(X)=\{0\},\; \left(\mu_G(X)\right)^2:G=0.$$
\end{proof}
\begin{remark}
\label{dobav} Let $G\in\bB^+_0(\cH)$. Assume that ${\rm ran\,} X^{1/2}\supset {\rm ran\,} G^{1/2}$ and $\tau_G(X)\in\bB^+_0(\cH)$.
Denoting ${\mathfrak M}_n={\rm clos\,}\left\{\left(\mu^{[n]}_G(X)\right)^{-1/2}{\rm ran\,} G^{1/2}\right\}$, one obtains from \eqref{dobav2} that
\[
{\mathfrak M}_n\cap{\rm ran\,} X^{1/2}={\mathfrak M}_n^\perp\cap{\rm ran\,} X^{1/2}=\{0\}\;\forall n\in\dN.
\]
These relations yield
\[
{\mathfrak M}_n\cap{\rm ran\,} G^{1/2}={\mathfrak M}_n^\perp\cap{\rm ran\,} G^{1/2}=\{0\}\;\forall n\in\dN.
\]
If $J_n=P_{{\mathfrak M}_n}-P_{{\mathfrak M}^\perp_n}=2P_{{\mathfrak M}_n}-I,$ $n\in\dN$, then $J_n=J_n^*=J^{-1}_n$ ($J_n$ is a fundamental symmetry in $\cH$ for each natural number $n$), and
\[
{\rm ran\,} (J_nG^{1/2}J_n)\cap{\rm ran\,} G^{1/2}=\{0\}\; \forall n\in\dN,
\]
\]
cf. \cite{Arl_ZAg_IEOT_2015}, \cite{schmud}.
\end{remark}
Let $G\in\bB^+(\cH)$. Set
\[
\bB^+_G(\cH)=\left\{Y\in\bB^+(\cH): {\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}\right\}.
\]
Observe that $Y\in\bB^+_G(\cH)\Longrightarrow Y^{1/2}QY^{1/2}\in \bB^+_G(\cH)$ for an arbitrary $Q\in\bB^+(\cH)$.
The cone $\bB^+_G(\cH)$ is the set of all fixed points of the mappings $\mu_G$ and $\tau_G$. In addition
\[
\bB^+_G(\cH)=\tau_G(\bB^+(\cH)).
\]
Actually, property (13) in Theorem \ref{propert} shows that
if $Y\in\bB^+_G(\cH)$, then for each $\widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$, the operator $Y+\widetilde G$ is contained in the pre-image $\tau_G^{-1}\{Y\}$,
i.e., the equality
\[
\tau_G(\widetilde G+Y)=Y=(\widetilde G+Y)^{1/2}P_{{\mathfrak M}_{\widetilde G }}(\widetilde G+Y)^{1/2}
\]
holds, where
$${\mathfrak M}_{\widetilde G}=\cH\ominus\{g\in\cH:(\widetilde G+Y)^{1/2}g\in{\rm ran\,} G^{1/2}\}.$$
In particular,
\[
\tau_G(\widetilde G+\tau_G(X))=\tau_G(X),\;\forall X\in\bB^+(\cH).
\]
Thus, the operator $\widetilde G+Y$ is contained in the \textit{basin of attraction} of the fixed point $Y$ of the mapping $\mu_G$ for an arbitrary $\widetilde G\in\bB^+(\cH)$ such that ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$. In addition, since ${\rm ran\,}(\widetilde G+Y)^{1/2}={\rm ran\,} G^{1/2}\dot+{\rm ran\,} Y^{1/2}$, statement 1(b) of Proposition \ref{polez} yields that
$${\rm ran\,} \left(\mu^{[n]}_G(\widetilde G+Y)\right)^{1/2}={\rm ran\,}(\widetilde G+Y)^{1/2}\supset{\rm ran\,} G^{1/2}\;\forall n\in\dN.$$
\section{Lebesgue type decomposition of nonnegative operators and the mapping $\tau_G$}
Let $A\in\bB^+(\cH)$. T.~Ando in \cite{Ando_1976} introduced and studied the mapping
\[
\bB^+(\cH)\ni B\mapsto [A]B\stackrel{def}{=}s-\lim\limits_{n\to\infty}(nA:B)\in\bB^+(\cH).
\]
The decomposition
\[
B=[A]B+(B-[A]B)
\]
provides the representation of $B$ as the sum of the $A$-\textit{absolutely continuous} part $[A]B$ and the $A$-\textit{singular} part $B-[A]B$ of $B$ \cite{Ando_1976}.
An operator $C\in\bB^+(\cH)$ is called $A$-absolutely continuous \cite{Ando_1976} if there exists a nondecreasing sequence $\{C_n\}\subset\bB^+(\cH)$
such that $C=s-\lim_{n\to\infty}C_n$ and $C_n\le \alpha_n A$ for some constants $\alpha_n\ge 0$, $n\in\dN$ ($\iff {\rm ran\,} C^{1/2}_n\subseteq{\rm ran\,} A^{1/2}$ $\forall n\in\dN$). An operator $C\in\bB^+(\cH)$ is called $A$-singular if the intersection of the operator intervals $[0,C]$ and $[0,A]$ contains only the zero operator ($[0,C]\cap [0,A]=\{0\}$). Moreover, the operator $[A]B$ is the maximum among all $A$-absolutely continuous nonnegative operators $C$ with $C\le B$.
The decomposition of $B$ into $A$-absolutely continuous and $A$-singular parts is in general non-unique. Ando in \cite{Ando_1976} proved that uniqueness holds if and only if ${\rm ran\,}([A]B)^{1/2}\subseteq{\rm ran\,} A^{1/2}$.
Set
\begin{equation}
\label{omtuf}
\Omega_{A}^B\stackrel{def}{=}{\rm{clos}}\left\{f\in\cH:B^{1/2}f\in{\rm ran\,} A^{1/2}\right\}.
\end{equation}
It is established in \cite{Ando_1976} that the following conditions are equivalent
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item $B$ is $A$-absolutely continuous,
\item $[A]B=B,$
\item $\Omega_A^B=\cH$.
\end{enumerate}
In \cite{Pek_1978} (see also \cite{Kosaki_1984}) the formula
\begin{equation}
\label{formu}
[A]B=B^{1/2}P_{\Omega^B_{A}}B^{1/2}
\end{equation}
has been established.
Hence the operator $[A]B$ possesses the following property, see \cite{Pek_1978}:
\begin{multline*}
\max\left\{Y\in\bB^+(\cH):0\le Y\le B,\;{\rm{clos}}\{Y^{-1/2}({\rm ran\,} A^{1/2})\}=\cH\right\}\\
=[A]B.
\end{multline*}
The notation $B_{{\rm ran\,} A^{1/2}}$ and the name \textit{convolution on the operator domain} were used for $[A]B$ in \cite{Pek_1978}.
Notice that \eqref{formu} implies the equalities
\begin{multline*}
{\rm ran\,}\left([A]B\right)^{1/2}=B^{1/2}\Omega_A^B,\\
B-[A]B=B^{1/2}(I-P_{\Omega^B_{A}})B^{1/2},\\
[A]B:(B-[A]B)=0,\; A:(B-[A]B)=0.
\end{multline*}
In addition due to \eqref{ukfdyj}, \eqref{omtuf}, and \eqref{formu}:
\begin{enumerate}
\item $[A](\lambda B)=\lambda\left([A]B\right),$ $\lambda>0,$
\item ${\rm ran\,} \widetilde A^{1/2}={\rm ran\,} A^{1/2}$ $\Longrightarrow$ $[\widetilde A]B=[A]B $ for all $B\in\bB^+(\cH),$
\item $[A:B]B=[A]B$.
\end{enumerate}
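A commutative example may help to visualize the operator $[A]B$. Let $\cH=L^2(\mu)$ for a $\sigma$-finite measure $\mu$ and let $A$, $B$ be the operators of multiplication by bounded measurable functions $a\ge 0$, $b\ge 0$. One can check from \eqref{omtuf} that
\[
\Omega_A^B=\left\{f\in L^2(\mu): f=0\;\mbox{a.e. on}\; \{a=0\}\cap\{b>0\}\right\},
\]
so, by \eqref{formu}, $[A]B$ is multiplication by $b\,\chi_{\{a>0\}}$ and $B-[A]B$ is multiplication by $b\,\chi_{\{a=0\}}$. This mirrors the Lebesgue decomposition of the measure $b\,d\mu$ with respect to $a\,d\mu$ into absolutely continuous and singular parts.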
\begin{theorem}
\label{singp}
\begin{enumerate}
\item
Let $G\in\bB^+(\cH)$. Then for each $X\in\bB^+(\cH)$ the equality
\begin{equation}
\label{razn}
\tau_G(X)=X-[G]X
\end{equation}
holds. Therefore, $\tau_G(X)=0$ if and only if $X$ is $G$-absolutely continuous. In addition
$\tau_G([G]X)=0$ for all $X\in\bB^+(\cH)$.
If ${\rm ran\,} \widetilde G^{1/2}={\rm ran\,} G^{1/2}$ for some $\widetilde G \in\bB^+(\cH)$, then
\begin{equation}
\label{cbyuek}
\tau_G(X)=X-[\widetilde G] X=\widetilde G+X-[G](\widetilde G+X).
\end{equation}
Hence
\begin{equation}
\label{cnhfyyj}
\widetilde G=[G](\widetilde G +X)-[G](X),
\end{equation}
and
\begin{equation}
\label{tot}
X-\tau_G(X)=[G](\widetilde G+X)-\widetilde G.
\end{equation}
In addition
\begin{equation}
\label{izm}
\sum\limits_{n=0}^\infty \left(\mu^{[n]}_G(X):G\right)=[G]X,\; \forall X\in\bB^+(\cH).
\end{equation}
\item The following inequality is valid for an arbitrary $X_1, X_2\in\bB^+(\cH)$:
\begin{equation}
\label{ytjblf}
\tau_G(X_1+X_2)\le \tau_G(X_1)+\tau_G(X_2).
\end{equation}
\item the following statements are equivalent:
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item $\tau_G(X)\in\bB^+_0(\cH)$,
\item $X\in\bB^+_0(\cH)\quad\mbox{and}\quad\left([G]X\right):X^2=0,$
\item $G+X\in\bB^+_0(\cH)$ and $[G](G+X):( G+X)^{2}=0.$
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
(1) From \eqref{omtuf}, \eqref{formu}, and Theorem \ref{propert} we get equalities
\begin{multline*}
\tau_G(X)=X^{1/2}(I-P_{\Omega^X_{G}})X^{1/2}=X-[G]X,\\
\tau_G(\widetilde G+X)=(\widetilde G+X)^{1/2}(I-P_{\Omega^{\widetilde G+X}_{G}})(\widetilde G+X)^{1/2}=\widetilde G+X-[G](\widetilde G+X).
\end{multline*}
Then \eqref{cbyuek}, \eqref{cnhfyyj}, and \eqref{tot} follow from the equalities $\tau_G(X)= \tau_{\widetilde G}(X)=\tau_G(\widetilde G+X)$.
Since $[G]([G]X)=[G]X$, we get $\tau_G\left([G]X\right)=0.$
Note that using the equality $[G](X+\alpha G)=[G]X+\alpha G$ \cite[Lemma 1]{Nishio}
and the equality
\[
\tau_G(\alpha G+X)=\alpha G+X-[G](\alpha G+X),
\]
we get $\tau_G(\alpha G+X)=X-[G]X=\tau_G(X)$.
Equation \eqref{izm} follows from \eqref{ryad} and \eqref{razn}.
(2) Inequality \eqref{ytjblf} follows from the inequality, see \cite{E-L},
$$[G](X_1+X_2)\ge [G]X_1+[G]X_2$$
and equality \eqref{razn}.
(3) From \eqref{omtuf} and statements (16a) and (16d) of Theorem \ref{propert} we obtain the equivalences
\begin{multline*}
\tau_G(X)\in\bB^+_0(\cH)\iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad\Omega_G^X\cap{\rm ran\,} X^{1/2}=\{0\}\\
\iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad X^{1/2}\Omega_G^X\cap{\rm ran\,} X=\{0\}\\
\iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad{\rm ran\,}\left([G]X\right)^{1/2}\cap{\rm ran\,} X=\{0\}\\
\iff X\in\bB^+_0(\cH)\quad\mbox{and}\quad\left([G]X\right):X^2=0.
\end{multline*}
Further we use the equality $\tau_G(X)=\tau_G(G+X)$, see statement (13) of Theorem \ref{propert}.
\end{proof}
\section{The mappings $\{\mu_G^{[n]}\},$ $\tau_G,$ and intersections of domains of unbounded self-adjoint operators}
\label{applll}
Let $A$ be an unbounded self-adjoint operator in an infinite dimensional Hilbert space $\cH$. J.~von Neumann \cite[Satz 18]{Neumann} established that if $\cH$ is \textit{separable}, then there is a self-adjoint operator unitarily equivalent to $A$ such that its domain has trivial intersection with the domain of $A$. Another proof of this result was proposed by J.~Dixmier in \cite{Dix}, see also \cite[Theorem 3.6]{FW}. In the case of a \textit{nonseparable} Hilbert space, an example of an unbounded self-adjoint operator $A$ such that for any unitary $U$ one has ${\rm dom\,} (U^*AU)\cap{\rm dom\,} A\ne \{0\}$ is constructed in \cite{ES}. So, in general, the von Neumann theorem does not hold. It is established in \cite[Theorem 4.6]{ES} that the following are equivalent for a dense operator range $\cR$ (the image of a bounded nonnegative self-adjoint operator in $\cH$ \cite{FW}) in an infinite-dimensional Hilbert space:
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item
there is a unitary operator $U$ such that $U\cR\cap\cR=\{0\}$;
\item for every subspace (closed linear manifold) $\cK\subset \cR$ one has ${\rm dim\,} \cK\le{\rm dim\,} \cK^\perp$.
\end{enumerate}
In the theorem below we suggest several further assertions equivalent to the conclusion of von Neumann's theorem.
\begin{theorem}
\label{ytcgjl}
Let $\cH$ be an infinite-dimensional complex Hilbert space and let $A$ be an unbounded self-adjoint operator in $\cH$.
Then the following assertions are equivalent
\begin{enumerate}
\item there exists a unitary operator $U$ in $\cH$ such that
$${\rm dom\,}(U^*AU)\cap{\rm dom\,} A=\{0\};$$
\item there exists an unbounded self-adjoint operator $S$ in $\cH$ such that
$${\rm dom\,} S\cap{\rm dom\,} A=\{0\};$$
\item there exists a fundamental symmetry $J$ in $\cH$ ($J=J^*=J^{-1}$) such that
$${\rm dom\,}(JAJ)\cap{\rm dom\,} A=\{0\};$$
\item there exists a subspace ${\mathfrak M}$ in $\cH$ such that
$${\mathfrak M}\cap{\rm dom\,} A={\mathfrak M}^\perp\cap{\rm dom\,} A=\{0\};$$
\item there exists a positive definite self-adjoint operator $B$ in $\cH$ such that
$${\rm dom\,} B\supset{\rm dom\,} A\quad\mbox{and}\quad{\rm clos\,}\left\{B{\rm dom\,} A\right\}\cap {\rm dom\,} B=\{0\},$$
\item there exists a closed densely defined restriction $A_0$ of $A$ such that ${\rm dom\,} (AA_0)=\{0\}$ (this yields, in particular, ${\rm dom\,} A^2_0=\{0\}$).
\end{enumerate}
\end{theorem}
\begin{proof}
Let $|A|=\sqrt{A^2}$. Set $G=\left(|A|+I\right)^{-2}$. Then $G\in\bB^+_0(\cH)$ and ${\rm ran\,} G^{1/2}={\rm dom\,} A.$
According to \cite[Proposition 3.1]{Arl_ZAg_IEOT_2015} the following assertions for an operator range $\cR$ are equivalent:
\begin{enumerate}
\def\labelenumi{\rm (\roman{enumi})}
\item There exists in $\cH$ an orthogonal projection $P$ such that
\[
{\rm ran\,} P\cap\cR=\{0\} \ \ \ {\rm{and}} \ \ \ {\rm ran\,} (I-P)\cap\cR=\{0\} \ .
\]
\item There exists in $\cH$ a fundamental symmetry $J$ such that
\[
J\cR\cap\cR=\{0\} \ .
\]
\end{enumerate}
Now we will prove that
(2)$\Longrightarrow$(1), (3), (4), (5). The existence of a self-adjoint $S$ with the property ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$ implies the existence of
$F\in\bB^+_0(\cH)$ such that ${\rm ran\,} F^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$ (for example, take $F=(|S|+I)^{-2}$). Then the equality $F:G=0$ yields (see Proposition \ref{root}) that
\[
G=(G+F)^{1/2}P(G+F)^{1/2},\; F=(G+F)^{1/2}(I-P)(G+F)^{1/2},
\]
where $P$ is an orthogonal projection in $\cH$. The equalities ${\xker\,} G={\xker\,} F=\{0\}$ imply
$${\rm ran\,} P\cap{\rm ran\,} (G+F)^{1/2}={\rm ran\,} (I-P)\cap{\rm ran\,} (G+F)^{1/2}=\{0\}.$$
Since ${\rm ran\,} G^{1/2}\subset{\rm ran\,} (G+F)^{1/2}$, we get
$${\rm ran\,} P\cap{\rm ran\,} G^{1/2}={\rm ran\,} (I-P)\cap{\rm ran\,} G^{1/2}=\{0\}.$$
Let ${\mathfrak M}={\rm ran\,} P$; then (4) holds. Put $J=P-(I-P)=2P-I$. The operator $J$ is a fundamental symmetry and $J{\rm ran\,} G^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$. This gives (3).
Since ${\xker\,} F=\{0\}$ and $F=\tau_G(F)=\tau_G(G+F)$, using Theorem \ref{propert}, equalities \eqref{ghjcn1}, \eqref{formula11}, and Theorem \ref{interzero} we obtain
\[
{\rm ran\,} (G+F)^{1/2}\cap {\rm \overline{ran}\,} (G+F)^{-1/2}G^{1/2}=\{0\}.
\]
Denoting $B=(G+F)^{-1/2}$, we arrive at (5).
Let us prove (5)$\Longrightarrow$(2). Set $X=B^{-2}$. Then ${\rm ran\,} X^{1/2}\supset{\rm ran\,} G^{1/2}$ and
\[
X\in \bB^+_0(\cH),\;
{\rm ran\,} X^{1/2}\cap{\rm clos\,}\left\{X^{-1/2}{\rm ran\,} G^{1/2}\right\}=\{0\}.
\]
The equivalence of conditions (16a) and (16d) of Theorem \ref{propert} implies ${\xker\,}\tau_G(X)=\{0\}.$ Since the operator $Y=\tau_G(X)$ possesses the property
${\rm ran\,} Y^{1/2}\cap{\rm ran\,} G^{1/2}=\{0\}$, we get for $S=Y^{-2}$ that ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$.
Now we are going to prove $(4)\iff (6)$.
Suppose (6) is valid, i.e., $A_0$ is a closed densely defined restriction of $A$ such that ${\rm dom\,} (AA_0)=\{0\}$.
Let
$$\cU=(A-iI)(A+iI)^{-1}$$
be the Cayley transform of $A$. $\cU$ is a unitary operator and
\[
A=i(I+\cU)(I-\cU)^{-1},\; {\rm dom\,} A={\rm ran\,} (I-\cU),{\rm ran\,} A={\rm ran\,} (I+\cU).
\]
Let $\cU_0=(A_0-iI)(A_0+iI)^{-1}$ be the Cayley transform of $A_0$. Set ${\mathfrak M}\stackrel{def}{=}{\rm ran\,} (A_0+iI)$. Then $\cU_0=\cU{\upharpoonright\,}{\mathfrak M}$,
\begin{multline*}
{\rm dom\,} A_0={\rm ran\,} (I-\cU_0)=(I-\cU){\mathfrak M},\\
{\rm ran\,} A_0={\rm ran\,} (I+\cU_0)=(I+\cU){\mathfrak M}.
\end{multline*}
Because ${\rm dom\,} A_0$ is dense in $\cH$, we get ${\mathfrak M}^\perp\cap{\rm dom\,} A=\{0\}$.
The equality ${\rm dom\,} (AA_0)=\{0\}$ is equivalent to
\[
\left\{\begin{array}{l}{\rm ran\,} A_0\cap{\rm dom\,} A=\{0\},\\
{\xker\,} A_0=\{0\}\end{array}
\right..
\]
The latter two equalities are equivalent to ${\mathfrak M}\cap{\rm dom\,} A=\{0\}$. Thus (4) holds. Conversely, if (4) holds, then defining
the symmetric restriction $A_0$ of $A$ by
\[
{\rm dom\,} A_0=(I-\cU){\mathfrak M},\; A_0=A{\upharpoonright\,}{\rm dom\,} A_0,
\]
we get ${\rm dom\,}(AA_0)=\{0\}$.
The proof is complete.
\end{proof}
Let us make a few remarks.
\begin{remark}
\label{ljd1}
If (5) is true, then
\begin{enumerate}
\item
a simpler proof of the implication (5)$\Rightarrow$(2) is the observation that (5) implies ${\rm dom\,} B^2\cap {\rm dom\,} A=\{0\}$;
\item taking into account that $B^{-1}$ is bounded and ${\rm dom\,} A$ is dense in $\cH$, we get
\[
\left(\cH\ominus{\rm clos\,}\left\{B{\rm dom\,} A\right\}\right)\cap {\rm dom\,} B=\{0\};
\]
setting ${\mathfrak M}\stackrel{def}{=}{\rm clos\,}\left\{B{\rm dom\,} A\right\}$, we see that the inclusion ${\rm dom\,} A\subset{\rm dom\,} B$ implies (4), i.e., this is one more way to prove (5)$\Longrightarrow$(4) and (5)$\Longrightarrow$(6);
\item using the proof of Theorem \ref{ytcgjl} and equalities \eqref{ghjcn2} and \eqref{formula111}, we see that the operator $S\stackrel{def}{=}\left(B^{-1}P_{{\mathfrak M}^\perp}B^{-1}\right)^{-1}$ is well defined, self-adjoint, positive definite, and satisfies ${\rm dom\,} S\cap{\rm dom\,} A=\{0\}$;
\item denoting $B_0=B{\upharpoonright\,}{\rm dom\,} A$ and taking the closure of $B_0,$ we get the closed densely defined positive definite symmetric operator $\bar{B}_0$ (a closed restriction of $B$) such that
\[
{\rm dom\,} (B\bar{B}_0)=\{0\}.
\]
\end{enumerate}
\end{remark}
\begin{remark}
In the case of a separable infinite-dimensional Hilbert space,
K.~Schm\"{u}dgen in \cite[Theorem 5.1]{schmud} established the validity of assertion (4) for an arbitrary $A$. In \cite{Arl_ZAg_IEOT_2015}, using parallel addition of operators, it is shown that the validity of (2) for an arbitrary unbounded self-adjoint $A$ implies (4).
The first construction of a densely defined closed symmetric operator $T$ such that ${\rm dom\,} T^2=\{0\}$ was given by M.A.~Naimark \cite{Naimark1}, \cite{Naimark2}.
In \cite{Chern} P.~Chernoff gave an example of a symmetric operator $T$, semi-bounded from below, whose square has trivial domain. K.~Schm\"{u}dgen in \cite[Theorem 5.2]{schmud} proved that each unbounded self-adjoint operator $H$ has two closed densely defined restrictions $H_1$ and $H_2$ such that
\[
{\rm dom\,} H_1\cap{\rm dom\,} H_2=\{0\}\quad\mbox{and}\quad{\rm dom\,} H^2_1={\rm dom\,} H^2_2=\{0\}.
\]
In \cite{ArlKov_2013} an abstract approach to the construction of examples of nonnegative self-adjoint operators $\cL$ and their closed densely defined restrictions $\cL_0$
such that ${\rm dom\,}(\cL\cL_0)=\{0\}$ was proposed. In \cite[Theorem 3.33]{Arl_ZAg_IEOT_2015} it is established that each unbounded self-adjoint $A$ has two closed densely defined restrictions $A_1$ and $A_2$ possessing the properties
\begin{multline*}
{\rm dom\,} A_1\dot+{\rm dom\,} A_2={\rm dom\,} A,\; {\rm dom\,} (AA_1)={\rm dom\,} (AA_2)=\{0\},
\\
{\rm dom\,} A_1\cap{\rm dom\,} A^2={\rm dom\,} A_2\cap{\rm dom\,} A^2=\{0\}.
\end{multline*}
M.~Sauter, in e-mail communication with the author, suggested
another proof of the equivalence of (1) and (2) in Theorem \ref{ytcgjl}. His proof relies essentially on the methods developed in the paper \cite{ES}.
\end{remark}
We conclude this paper with a theorem related to assertions (2) and (5) of Theorem \ref{ytcgjl}. The proof is based on the properties of the mappings $\{\mu^{[n]}_G\}$ and $\tau_G$.
\begin{theorem}
\label{bynth}
Let $\cH$ be an infinite dimensional separable Hilbert space and let $A$ be an unbounded self-adjoint operator in $\cH$.
Then for each positive definite self-adjoint operator $S$ such that ${\rm dom\,} S\cap {\rm dom\,} A=\{0\}$ there exists a sequence $\{S_n\}$ of positive definite operators possessing properties
\begin{itemize}
\item ${\rm dom\,} S_n={\rm dom\,} S\dot+{\rm dom\,} A$ $\forall n$,
\item ${\rm clos\,}\left\{S_n{\rm dom\,} A\right\}\cap {\rm dom\,} S_n=\{0\}$ $\forall n$,
\item ${\rm dom\,} S^2_n\cap{\rm dom\,} A=\{0\}$ $\forall n,$
\item if $\sL_n=\cH\ominus {\rm clos\,}\left\{S_n{\rm dom\,} A\right\}$, then $S=\left(S^{-1}_nP_{\sL_n}S^{-1}_n\right)^{-1}$ $\forall n$,
\item for each $f\in {\rm dom\,} S\dot+{\rm dom\,} A$ the sequence $\{||S_nf||\}_{n=1}^\infty$ is nondecreasing,
\item ${\rm dom\,} S=\left\{f: \sup\limits_{n\ge 1}||S_n f||<\infty\right\},$
\item ${\rm s-R}-\lim\limits_{n\to\infty}S_n=S$, where ${\rm s-R}$ is the strong resolvent limit of operators \cite[Chapter 8, \S 1]{Ka}.
\end{itemize}
\end{theorem}
\begin{proof}
Let $G\stackrel{def}{=}(|A|+I)^{-2}$, $F\stackrel{def}{=}S^{-2}$. Then ${\rm ran\,} G^{1/2}={\rm dom\,} A,$ ${\rm ran\,} F^{1/2}={\rm dom\,} S$. According to Theorem \ref{propert} the equalities
\[
F=\tau_G(F)=\tau_G(G+F)
\]
are valid.
Set
\[
F_n=\mu_G^{[n]}(G+F), n=0,1,\ldots.
\]
Then $\{F_n\}$ is a non-increasing sequence of operators, $\tau_G(F_n)=F$, and
$$s-\lim\limits_{n\to\infty}F_n=F.$$
Due to the L\"{o}wner-Heinz inequality we have that the sequence of operators $\{F^{1/2}_n\}_{n=1}^\infty$ is non-increasing. In addition
\[
s-\lim\limits_{n\to\infty}F^{1/2}_n=F^{1/2}.
\]
Since ${\rm ran\,} F^{1/2}_0={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}$, Proposition \ref{polez} yields ${\rm ran\,} F^{1/2}_n={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}$ for all natural numbers $n$.
Now define
\[
S_n=F^{-1/2}_n,\; n=0,1,\ldots.
\]
Then for all $n$:
$${\rm dom\,} S_n={\rm ran\,} F^{1/2}_n={\rm ran\,} G^{1/2}\dot+{\rm ran\,} F^{1/2}={\rm dom\,} A\dot+{\rm dom\,} S,$$
the sequences of unbounded nonnegative self-adjoint operators $\{S^2_n\}$ and $\{S_n\}$ are non-decreasing,
\[
\lim\limits_{n\to\infty} S^{-1}_n=S^{-1},\;\lim\limits_{n\to\infty} S^{-2}_n=S^{-2}.
\]
The latter means that
\[
{\rm s-R}-\lim\limits_{n\to\infty}S_n=S,\; {\rm s-R}-\lim\limits_{n\to\infty}S^2_n=S^2.
\]
Taking into account that $\tau_G(F_n)=F$ and using statement 2) of Proposition \ref{polez} we conclude that the equality
\[
{\rm ran\,} F^{1/2}_n \cap {\rm clos\,}\{F^{-1/2}_n{\rm ran\,} G^{1/2}\}=\{0\}
\]
holds for each $n\in\dN$.
Hence ${\rm clos\,}\left\{S_n{\rm dom\,} A\right\}\cap {\rm dom\,} S_n=\{0\}$ and ${\rm dom\,} S^2_n\cap{\rm dom\,} A=\{0\}$ for all natural numbers $n.$
Set
\[
\sL_n:=\cH\ominus {\rm clos\,}\left\{S_n{\rm dom\,} A\right\}
=\cH\ominus{\rm clos\,}\{F^{-1/2}_n{\rm ran\,} G^{1/2}\}.
\]
Taking into account the equality (see \eqref{ghjcn2} and \eqref{formula111})
\[
F=\tau_G(F_n)=F^{1/2}_nP_{\sL_n}F^{1/2}_n,
\]
we get
$S=\left(S^{-1}_nP_{\sL_n}S^{-1}_n\right)^{-1}$ for all $n\in\dN$.
Let $f\in{\rm dom\,} S={\rm ran\,} F^{1/2}$. Since $F_n\ge F$ for all $n\in\dN$, we have $F^{-1}_n\le F^{-1}$, i.e., $||S_n f||\le ||S f||$ for all $n$.
Suppose that $||S_n f||\le C$ for all $n$. Then there exists a subsequence of vectors $\{S_{n_k}f\}_{k=1}^\infty$ that
converges weakly to some vector $\varphi$ in $\cH$, i.e.,
\[
\lim\limits_{k\to\infty} (S_{n_k} f, h)=(\varphi, h) \quad\mbox{for all} \quad h\in\cH.
\]
Further for all $g\in \cH$
\begin{multline*}
(f, g)=(F^{1/2}_{n_k}S_{n_k}f,g)=(S_{n_k}f,F^{1/2}_{n_k}g)\\
=(S_{n_k}f,F^{1/2}g)+ (S_{n_k}f,F^{1/2}_{n_k}g-F^{1/2}g)\rightarrow (\varphi,F^{1/2}g)=(F^{1/2}\varphi, g).
\end{multline*}
It follows that $f\in{\rm dom\,} S$.
Thus, we arrive at the equality
${\rm dom\,} S=\left\{f: \sup\limits_{n\ge 1}||S_n f||<\infty\right\}.$
The proof is complete.
\end{proof}
\end{document} |
\begin{document}
\title{Existence, covolumes and infinite generation of lattices for Davis complexes}
\author{Anne Thomas\thanks{This work was supported in part by NSF Grant No. DMS-0805206 and in part by EPSRC Grant No. EP/D073626/2. The author is currently supported by ARC Grant No. DP110100440.} \\ School of Mathematics and Statistics F07 \\ University of Sydney \\ Sydney NSW 2006 \\ Australia \\ anne.thomas@sydney.edu.au}
\date{revised 6 April 2010}
\maketitle
\begin{abstract} Let $\Sigma$ be the Davis complex for a Coxeter system $(W,S)$. The automorphism group
$G$ of $\Sigma$ is naturally a locally compact group, and a simple combinatorial condition due to
Haglund--Paulin determines when $G$ is nondiscrete. The Coxeter group $W$ may be regarded as a uniform
lattice in $G$. We show that many such $G$ also admit a nonuniform lattice
$\Gamma$, and an infinite family of uniform lattices with covolumes converging to that of $\Gamma$. It follows
that the set of covolumes of lattices in $G$ is nondiscrete. We also show that the nonuniform lattice $\Gamma$
is not finitely generated. Examples of $\Sigma$ to which our results apply include buildings and non-buildings, and many complexes of dimension greater than $2$. To prove these results, we introduce
a new tool, that of ``group actions on complexes of groups", and use this to construct our lattices as fundamental groups of complexes of groups with universal cover $\Sigma$. \end{abstract}
\section{Introduction}\label{s:intro}
Let $G$ be a locally compact topological group, with Haar measure $\mu$. A discrete subgroup $\Gamma \leq
G$ is a \emph{lattice} if $\Gamma \backslash G$ carries a finite $G$--invariant measure, and is \emph{uniform} if
$\Gamma \backslash G$ is compact. Some basic questions are: \begin{enumerate} \item\label{q:existence} Does $G$
admit a (uniform or nonuniform) lattice? \item\label{q:covolumes} What is the set of \emph{covolumes}
of lattices in $G$, that is, the set of positive reals \[\mathcal{V}(G):=\{ \mu(\Gamma\backslash G) \mid \mbox{$\Gamma
< G$ is a lattice}\}?\] \item\label{q:generation} Are lattices in $G$ finitely generated?
\end{enumerate}
These questions have been well-studied in classical cases. For example, suppose $G$ is a reductive
algebraic group over a local field $K$ of characteristic $0$. Then $G$ admits a uniform lattice,
constructed by arithmetic means (Borel--Harder~\cite{BoHa}), and a nonuniform lattice only if $K$ is
archimedean (Tamagawa~\cite{Ta}). If $G$ is a semisimple real Lie group, the set $\mathcal{V}(G)$ is in most
cases discrete (see~\cite{Lu} and its references). If in addition $G$ is simple and higher-rank, then $G$ and hence
its lattices have Kazhdan's Property (T) (see, for example,~\cite{Ma}). Since countable groups with
Property (T) are finitely generated, it follows that all lattices in $G$ are finitely generated.
A nonclassical case is $G$ the automorphism group of a locally finite tree $T$. The study of lattices in $G=\mathrm{Aut}(T)$ was initiated by Bass and Lubotzky, and has yielded many surprising differences from classical results (see the survey~\cite{Lu} and the reference~\cite{BL}). For
example, the set $\mathcal{V}(G)$ is in many cases nondiscrete, and nonuniform
tree lattices are never finitely generated.
In fact, the automorphism group $G$ of any locally finite polyhedral complex $X$ is naturally a
locally compact group (see Section~\ref{ss:lattices}). For many $X$ with $\dim(X)\geq 2$, there
is greater rigidity than for trees, as might be expected in higher dimensions. For instance,
Burger--Mozes~\cite{BM} proved a `Normal Subgroup Theorem' for products of trees (parallel to that
of Margulis~\cite{Ma} for higher-rank semisimple Lie groups), and Bourdon--Pajot~\cite{BP} and
Xie~\cite{X} established quasi-isometric rigidity for certain Fuchsian buildings. On the other
hand, lattices in $G=\mathrm{Aut}(X)$ can exhibit the same flexibility as tree lattices. For example, the
set $\mathcal{V}(G)$ is nondiscrete for certain right-angled buildings~\cite{Th1} and Fuchsian
buildings~\cite{Th2}. Another example is density of commensurators of uniform lattices in $G$,
proved by Haglund~\cite{H2} for certain $2$--dimensional Davis complexes, and by Haglund~\cite{H5}
and Kubena Barnhill--Thomas~\cite{KBT} for right-angled buildings. Apart from right-angled
buildings, very little is known for $X$ of dimension $ > 2$. Almost nothing is known for $X$ not a
building.
In this paper we consider Questions~\eqref{q:existence}--\eqref{q:generation} above for lattices
in $G=\mathrm{Aut}(\Sigma)$, where $\Sigma$ is the Davis complex for a Coxeter system $(W,S)$
(see~\cite{D} and Section~\ref{ss:Davis_complexes} below). The Davis complex is a locally finite,
piecewise Euclidean $\mathrm{CAT}(0)$ polyhedral complex, and the Coxeter group $W$ may be regarded as a
uniform lattice in $G$. Our results are the Main Theorem and its
Corollaries~\ref{c:nondiscreteness} and~\ref{c:infinite_generation} below, which establish
tree-like properties for lattices in many such $G$. After stating these results, we discuss how they apply to (barycentric
subdivisions of) Fuchsian buildings and Platonic polygonal complexes, and to many Davis complexes $\Sigma$ with
$\dim(\Sigma) >
2$.
To state the Main Theorem, recall that for a Coxeter system $(W,S)$ with
$ W = \langle\, S \mid (st)^{m_{st}} \rangle$, and any $T \subset S$, the \emph{special subgroup} $W_T$ is the subgroup of $W$ generated by the elements $s \in T$. A special subgroup $W_T$ is \emph{spherical} if
it is finite, and the set of spherical special subgroups of $W$ is partially ordered by inclusion.
The poset of nontrivial spherical special subgroups is an abstract simplicial complex $L$, called the
\emph{nerve} of $(W,S)$. We identify each generator $s \in S$ with the corresponding vertex $W_{
\{s\} } = \langle s \rangle$ of $L$, and denote by $A$ the group of \emph{label-preserving
automorphisms} of $L$, that is, the group of automorphisms $\alpha$ of $L$ such that $m_{st} =
m_{\alpha(s)\alpha(t)}$ for all $s, t \in S$. The group $G=\mathrm{Aut}(\Sigma)$ is nondiscrete if and only if there is a nontrivial $\alpha \in A$ such that $\alpha$ fixes the star in $L$ of some vertex $s$
(Haglund--Paulin~\cite{HP1}).
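To make these notions concrete, consider the following toy example. If $W=\langle s_1,s_2,s_3 \mid s_1^2, s_2^2, s_3^2 \rangle$ is the free product of three copies of $\mathbb{Z}/2\mathbb{Z}$, so that $m_{st}=\infty$ for all $s \neq t$, then the only nontrivial spherical special subgroups are the three $\langle s_i \rangle$, the nerve $L$ consists of three isolated vertices, and $A$ is the full symmetric group on $\{s_1,s_2,s_3\}$. The transposition exchanging $s_2$ and $s_3$ is a nontrivial element of $A$ which fixes the star of $s_1$ (namely, the vertex $s_1$ itself), so by this criterion $G=\mathrm{Aut}(\Sigma)$ is nondiscrete; here $\Sigma$ is $1$--dimensional, that is, a tree.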
\begin{maintheorem}\label{t:existence} Let $(W,S)$ be a Coxeter system, with nerve $L$ and Davis
complex $\Sigma$. Let $A$ be the group of label-preserving automorphisms of $L$. Assume that
there are vertices $s_1$ and $s_2$ of $L$, and nontrivial elements $\alpha_1, \alpha_2 \in A$,
such that for $i = 1,2$: \begin{enumerate} \item\label{c:fix} $\alpha_i$ fixes the star of
$s_{3-i}$ in $L$; \item\label{c:free} the subgroup $\langle \alpha_i \rangle$ of $A$ acts freely on the $\langle \alpha_{i} \rangle$--orbit of $s_i$, in particular $\alpha_i(s_{i}) \neq s_{i}$; \item\label{c:orbit} for all $t_i \neq s_i$
such that $t_i$ is in the $\langle \alpha_{i} \rangle$--orbit of $s_i$, $m_{s_it_i} = \infty$; and
\item\label{c:halvable} all spherical special subgroups $W_T$ with $s_i \in T$ are \emph{halvable
along $s_i$} (see Definition~\ref{d:halvable} below).\end{enumerate} Then $G=\mathrm{Aut}(\Sigma)$ admits:
\begin{itemize} \item a nonuniform lattice $\Gamma$; and \item an infinite family of uniform lattices
$(\Gamma_n)$, such that $\mu(\Gamma_n \backslash G) \to \mu(\Gamma \backslash G)$, where $\mu$ is Haar measure on $G$.
\end{itemize}\end{maintheorem}
\begin{corollary}\label{c:nondiscreteness} The set of covolumes of
lattices in $G$ is nondiscrete. \end{corollary}
\begin{corollary}\label{c:infinite_generation} The group $G$ admits a lattice
which is not finitely generated. \end{corollary}
\noindent Corollary~\ref{c:infinite_generation} follows from the proof of the Main Theorem and
Theorem~\ref{t:group_action_intro} below. By the discussion above,
Corollary~\ref{c:infinite_generation} implies that the group $G$ in the Main Theorem does not have
Property (T). This was already known for $G=\ensuremath{\mathbb{A}}ut(\Sigma)$, where $\Sigma$ is any Davis complex (Haglund--Paulin~\cite{HP1}); our results thus provide an
alternative proof of this fact in some cases.
We describe several infinite families of examples of Davis complexes $\Sigma$ to
which our results apply in Section~\ref{s:examples} below. To establish these
applications, we use properties of spherical buildings in~\cite{R}, and some
results of graph theory from~\cite{DM}. In two dimensions, examples include the Fuchsian
buildings considered in~\cite{Th2}, and some of the highly symmetric Platonic
polygonal complexes investigated by \'Swi{\polhk{a}}tkowski~\cite{Sw}. Platonic polygonal
complexes are not in general buildings, and even the existence of lattices (other
than the Coxeter group $W$) in their automorphism groups was not previously known. An example of a
Platonic polygonal complex is the (unique) $\ensuremath{\mathbb{C}}AT(0)$ $2$--complex with all
$2$--cells squares, and the link of every vertex the Petersen graph
(Figure~\ref{f:petersen} below). The Main Theorem and its corollaries also apply to
many higher-dimensional $\Sigma$, including both buildings and complexes which are not buildings.
\begin{figure}
\caption{Petersen graph}
\label{f:petersen}
\end{figure}
To prove the Main Theorem, we construct the lattices $\Gamma_n$ and $\Gamma$ as fundamental groups of complexes of
groups with universal covers $\Sigma$ (see~\cite{BH} and Section~\ref{ss:complexes_of_groups} below). The construction is given in Section~\ref{s:proof} below, where we also prove Corollary~\ref{c:infinite_generation}.
Complexes of groups are a generalisation to higher dimensions of graphs of groups. Briefly, given a polyhedral complex $Y$, a (simple) complex of groups $G(Y)$ over $Y$ is an assignment of a \emph{local group} $G_\ensuremath{\sigma}$ to each cell $\ensuremath{\sigma}$ of $Y$, with monomorphisms $G_\ensuremath{\sigma} \to G_\tau$ whenever $\tau \subset \ensuremath{\sigma}$, so that the obvious diagrams commute. The action of a group $G$ on a polyhedral complex $X$ induces a complex of groups $G(Y)$ over $Y = G \backslash X$. A complex of groups is \emph{developable} if it is isomorphic to a complex of groups induced in this way. A developable complex of groups $G(Y)$ has a simply-connected \emph{universal cover} $\widetilde{G(Y)}$, equipped with a canonical action of the \emph{fundamental group of the complex of groups} $\pi_1(G(Y))$.
A key difference from graphs of groups is that complexes of groups are not in general
developable. In addition, even if $G(Y)$ is developable with universal cover $X$, it may be impossible, when $X$ has dimension $\geq 2$, to identify $X$ using only local data such as the links of
its vertices (see Ballmann--Brin~\cite{BB1} and Haglund~\cite{H1}). To ensure that our complexes
of groups are developable with universal cover $\Sigma$, we use covering theory for complexes of
groups (see~\cite{BH} and~\cite{LT}, and Section~\ref{ss:definitions} below). The
main result needed is that if there is a covering of complexes of groups $G(Y) \to H(Z)$, then
$G(Y)$ is developable if and only if $H(Z)$ is developable, and the universal covers of $G(Y)$ and
$H(Z)$ are isometric (see Theorem~\ref{t:coverings} below).
The other main ingredient in the proof of the Main Theorem is Theorem~\ref{t:group_action_intro} below, which introduces a theory of ``group actions on complexes of groups". This is a method of manufacturing new complexes of groups with a given universal cover, by acting on previously-constructed complexes of groups. Given a complex of groups $G(Y)$, and the action of a group $H$ on $Y$, the $H$--action \emph{extends to an action on $G(Y)$} if there is a homomorphism from $H$ to $\ensuremath{\mathbb{A}}ut(G(Y))$. Roughly, this means that for each cell $\ensuremath{\sigma}$ of $Y$, each $h \in H$ induces a group isomorphism $G_\ensuremath{\sigma} \to G_{h\cdot\ensuremath{\sigma}}$, so that the obvious diagrams commute (see Section~\ref{ss:definitions} below for definitions). In Section~\ref{s:group_actions} below we prove:
\begin{theorem}\label{t:group_action_intro} Let $G(Y)$ be a (simple) complex of groups over $Y$, and suppose that the action of a group $H$ on $Y$ extends to an action on $G(Y)$. Then the $H$--action induces a complex of groups $H(Z)$ over $Z = H \backslash Y$ such that there is a covering of complexes of groups $G(Y) \to H(Z)$. Moreover there is a natural short exact sequence
\[ 1 \to \pi_1(G(Y)) \to \pi_1(H(Z)) \to H \to 1,\] and if $H$ fixes a vertex of $Y$, then
\[ \pi_1(H(Z)) \cong \pi_1(G(Y)) \rtimes H.\]
\end{theorem}
\noindent Theorem~\ref{t:group_action_intro} is also used in~\cite{KBT}, and we expect this result
to be of independent interest. To our knowledge, group actions on complexes of groups have not
previously been considered. In~\cite{BJ}, Bass--Jiang determined the structure of the full
automorphism group of a graph of groups, but did not define or study the graph of groups induced
by a group action on a graph of groups. A more precise statement of
Theorem~\ref{t:group_action_intro}, including some additional facts about $H(Z)$, is given as
Theorem~\ref{t:group_action} below.
The Main Theorem is proved as follows. The action of the Coxeter group $W$ on
$\Sigma$ induces a complex of groups $G(Y_1)$ over $Y_1= W \backslash \Sigma$, with local
groups the spherical special subgroups of $W$. We then construct a family of finite
complexes of groups $G(Y_n)$ and $H(Z_n)$, and two infinite complexes of groups
$G(Y_\infty)$ and $H(Z_\infty)$, so that there are coverings of complexes of groups
as sketched in Figure~\ref{f:coverings} below.
\begin{figure}
\caption{Coverings of
complexes of groups}
\label{f:coverings}
\end{figure}
\noindent The fundamental groups of $H(Z_n)$ and $H(Z_\infty)$ are, respectively,
the uniform lattices $\Gamma_n$, and the nonuniform lattice $\Gamma$, in $G=\ensuremath{\mathbb{A}}ut(\Sigma)$.
For the local groups of $G(Y_n)$ and $G(Y_\infty)$, we use
Condition~\eqref{c:halvable} in the Main Theorem to replace certain spherical
special subgroups $W_T$ by the subgroup $\ensuremath{\operatorname{half}}_s(W_T)$, defined as follows:
\begin{definition}\label{d:halvable} Let $W_T$ be a spherical special subgroup of $W$, and suppose $s \in T$. Then $W_T$ is \emph{halvable along $s$} if the union \[(T - \{ s \}) \cup \{ sts \mid t \in T - \{s\} \} \] generates an index $2$ subgroup, denoted $\ensuremath{\operatorname{half}}_s(W_T)$, of $W_T$.\end{definition}
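\noindent As an informal aside (not used in any proof), the condition in Definition~\ref{d:halvable} can be tested by machine in the basic dihedral case. The following Python sketch, with all names our own, encodes the dihedral group of order $2m$ generated by reflections $s$ and $t$ with $(st)^m = 1$, generates the subgroup $\langle\, t, sts \,\rangle$ by closure, and checks whether it has index $2$; the test succeeds exactly when $m$ is even.
\begin{verbatim}
def dihedral_halvable(m):
    """Is the dihedral group of order 2m, generated by reflections s, t
    with (st)^m = 1, halvable along s, i.e. does <t, sts> have index 2?
    Elements are encoded as pairs (k, e) standing for r^k s^e, r = st."""
    def mult(x, y):
        (k1, e1), (k2, e2) = x, y
        return ((k1 + (-1) ** e1 * k2) % m, (e1 + e2) % 2)

    s, t = (0, 1), (1, 1)
    gens = (t, mult(mult(s, t), s))          # the generators t and sts
    subgroup, frontier = set(gens), list(gens)
    while frontier:                          # closure under multiplication
        x = frontier.pop()
        for g in gens:
            y = mult(x, g)
            if y not in subgroup:
                subgroup.add(y)
                frontier.append(y)
    return (2 * m) // len(subgroup) == 2

print([m for m in range(2, 11) if dihedral_halvable(m)])   # [2, 4, 6, 8, 10]
\end{verbatim}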
\noindent The complexes of groups $H(Z_n)$ and $H(Z_\infty)$ are induced by group actions on,
respectively, $G(Y_n)$ and $G(Y_\infty)$. To construct these group actions, we use
the automorphisms $\alpha_1$ and $\alpha_2$ of $L$.
I am grateful to Benson Farb for introducing me to these questions, and for his continuing
encouragement and advice. I also thank G. Christopher Hruska and Kevin Wortman for many useful
discussions. This particular work was inspired by conversations with Tadeusz Januszkiewicz and
Jacek \'Swi{\polhk{a}}tkowski, and much of this project was carried out at the Mathematical Sciences
Research Institute in Fall 2007, where I benefited from talking with Angela Kubena Barnhill, Michael W.
Davis, Jonathan P. McCammond and Damian Osajda. I would also like to thank Karen Vogtmann for helpful comments on this manuscript, and an anonymous referee for careful reading and worthwhile suggestions.
\section{Background}\label{s:background}
In this section we present brief background. In Section~\ref{ss:lattices} we describe the natural topology on $G$, the group of
automorphisms of a locally finite polyhedral complex $X$, and characterise uniform and nonuniform
lattices in $G$. Section~\ref{ss:Davis_complexes} constructs the Davis complex $\Sigma$ for a
Coxeter system $(W,S)$, following~\cite{D}. In Section~\ref{ss:complexes_of_groups} we recall the basics of Haefliger's
theory of complexes of groups (see~\cite{BH}), and describe the complex of groups $G(Y_1)$ induced by
the action of $W$ on $\Sigma$.
\subsection{Lattices in automorphism groups of polyhedral complexes}\label{ss:lattices}
Let $G$ be a locally compact topological group. We will use the following normalisation of Haar
measure $\mu$ on $G$.
\begin{theorem}[Serre, \cite{S}]\label{t:Scovolumes} Suppose that a locally compact group $G$ acts on a set $V$ with compact open
stabilisers and a finite quotient $G\backslash V$. Then there is a normalisation of the Haar measure
$\mu$ on $G$, depending only on the choice of $G$--set $V$, such that for each discrete subgroup
$\Gamma$ of $G$ we have \[\mu(\Gamma \backslash G) = \ensuremath{\operatorname{Vol}}(\Gamma \backslash \! \backslash V):= \sum_{v \in \Gamma \backslash V }
\frac{1}{|\Gamma_v|} \,\leq\infty. \] \end{theorem}
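\noindent As an informal illustration of this normalisation, the covolume $\ensuremath{\operatorname{Vol}}(\Gamma \backslash \! \backslash V)$ is simply a sum of reciprocals of stabiliser orders, one per $\Gamma$--orbit on $V$. The following Python sketch (our own naming; it is not part of any formal argument) evaluates such a sum from a list of stabiliser orders, and makes visible the dichotomy used below: a uniform lattice contributes finitely many terms, while a nonuniform lattice contributes an infinite convergent series.
\begin{verbatim}
from fractions import Fraction

def covolume(stabiliser_orders):
    """Covolume: the sum of 1/|Gamma_v| over Gamma-orbit representatives v."""
    return sum(Fraction(1, order) for order in stabiliser_orders)

# One orbit with trivial stabiliser (e.g. W acting on the type-emptyset
# vertices of its Davis complex): covolume 1, as for a uniform lattice.
print(covolume([1]))

# Infinitely many orbits with growing stabilisers, truncated here: the
# partial sums of a convergent series, as for a nonuniform lattice.
print(float(covolume([2 ** k for k in range(1, 30)])))
\end{verbatim}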
Suppose $X$ is a connected, locally finite polyhedral complex. Let $G=\ensuremath{\mathbb{A}}ut(X)$. With the compact-open
topology, $G$ is naturally a locally compact topological group, and the $G$--stabilisers of cells in $X$ are compact and open.
Hence if $G \backslash X$ is finite, there are several natural choices of sets $V$ in
Theorem~\ref{t:Scovolumes} above. By the same arguments as for tree lattices~(\cite{BL}, Chapter 1),
it can be shown (for any suitable set $V$) that a discrete subgroup $\Gamma \leq G$ is a lattice if and only if the series $\ensuremath{\operatorname{Vol}}(\Gamma
\backslash \! \backslash V)$ converges, and $\Gamma$ is uniform if and only if this is a sum with finitely many terms. In
Section~\ref{ss:Davis_complexes} below we describe our choice of $G$--set $V$ when $G$ is the group
of automorphisms of a Davis complex $\Sigma$.
\subsection{Davis complexes}\label{ss:Davis_complexes}
In this section we recall the construction of the Davis complex for a Coxeter system.
We follow the reference~\cite{D}.
A \emph{Coxeter group} is a group $W$ with a finite generating set $S$ and presentation of the form
\[ W = \langle\, s\in S \mid s^2 = 1 \,\, \forall \, s \in S, \; (s t )^{m_{st}} = 1 \,\,\forall \, s,t \in S \mbox{ with
}s \neq t\rangle \] where each $m_{st}$ is either an integer $\geq 2$,
or $m_{st} = \infty$, in which case the relation $(st)^{m_{st}} = 1$ is omitted and the product $st$ has infinite order. The pair $(W,S)$ is
called a \emph{Coxeter system}.
\noindent {\bf Example 1: } This example will be followed throughout this section, and also referred to in
Sections~\ref{ss:complexes_of_groups} and~\ref{s:proof} below. Let
\[ W = \langle s_1,s_2,s_3,s_4,s_5 \mid s_i^2=1, (s_1s_4)^m=(s_2s_4)^m=(s_3s_4)^m = 1,\] \[
(s_1s_5)^{m'}=(s_2s_5)^{m'}=(s_3s_5)^{m'} = 1\rangle \]
where $m$ and $m'$ are integers $\geq 2$.
Let $(W,S)$ be a Coxeter system. A subset $T$ of $S$ is \emph{spherical} if the corresponding special
subgroup $W_T$ is spherical, that is, $W_T$ is finite. By convention, $W_\emptyset$ is the trivial
group. Denote by $\ensuremath{\mathcal{S}}$ the set of all spherical subsets of $S$. The set $\ensuremath{\mathcal{S}}$ is partially ordered by
inclusion, and the poset $\ensuremath{\mathcal{S}}_{> \emptyset}$ is the nerve $L$ of $(W,S)$ (this is equivalent to the
description of $L$ in the introduction above). By definition, a nonempty set $T$ of vertices of $L$ spans a
simplex $\ensuremath{\sigma}_T$ in $L$ if and only if $T$ is spherical. We identify the generator $s \in S$ with the
vertex $\{s \}$ of $L$. The vertices $s$ and $t$ of $L$ are joined by an edge in $L$ if and only
if $m_{st}$ is finite, in which case we label this edge by the integer $m_{st}$. The nerve $L$ of
Example 1 above, with its edge labels, is sketched in Figure~\ref{f:nerve} below.
\begin{figure}
\caption{Nerve $L$ of Example 1, with edge labels}
\label{f:nerve}
\end{figure}
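\noindent As an informal aside, the $1$--skeleton of $L$ (which in Example 1 is all of $L$, since any three generators include a pair $s,t$ with $m_{st} = \infty$, so no three generators span a spherical subset) can be read off mechanically from the Coxeter data: $s$ and $t$ are joined exactly when $m_{st}$ is finite. The following Python sketch, with our own ad hoc encoding of the data of Example 1, lists the labelled edges shown in Figure~\ref{f:nerve}.
\begin{verbatim}
from itertools import combinations

def m(s, t):
    """Coxeter matrix entry m_{st} for Example 1 (None stands for infinity).
    Only the pairs {s_i, s_4} and {s_i, s_5}, i = 1, 2, 3, carry a relation."""
    pair = {s, t}
    if pair <= {'s1', 's2', 's3'} or pair == {'s4', 's5'}:
        return None
    return 'm' if 's4' in pair else "m'"

S = ['s1', 's2', 's3', 's4', 's5']
edges = [(s, t, m(s, t)) for s, t in combinations(S, 2) if m(s, t) is not None]
for edge in edges:
    print(edge)
# ('s1', 's4', 'm'), ('s1', 's5', "m'"), ..., ('s3', 's5', "m'"): the
# complete bipartite graph between {s1, s2, s3} and {s4, s5}.
\end{verbatim}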
We denote by $K$ the geometric realisation of the poset $\ensuremath{\mathcal{S}}$. Equivalently, $K$ is the cone on the
barycentric subdivision of the nerve $L$ of $(W,S)$. Note that $K$ is compact and contractible, since
it is the cone on a finite simplicial complex. Each vertex of $K$ has
\emph{type} a spherical subset of $S$, with the cone point having type $\emptyset$. For Example 1 above,
$K$ and the types of its vertices are sketched on the left of Figure~\ref{f:K}.
\begin{figure}
\caption{$K$ and types of vertices (left) and mirrors (right) for Example 1}
\label{f:K}
\end{figure}
For each $s \in S$ let $K_s$ be the union of the (closed) simplices in $K$ which contain the
vertex $s$ but not the cone point. In other words, $K_s$ is the closed star of the vertex $s$
in the barycentric subdivision of $L$. Note that $K_s$ and $K_t$ intersect if and only if
$m_{st}$ is finite. The family $(K_s)_{s \in S}$ is a \emph{mirror structure} on $K$, meaning
that $(K_s)_{s\in S}$ is a family of closed subspaces of $K$, called \emph{mirrors}. We call
$K_s$ the \emph{$s$--mirror} of $K$. For Example 1 above, the mirrors $K_i=K_{s_i}$ are shown
in heavy lines on the right of Figure~\ref{f:K}.
For each $x \in K$, put \[S(x):= \{ s \in S \mid x \in K_s \}.\] Now define an equivalence relation
$\sim$ on the set $W \times K$ by $(w,x) \sim (w',x')$ if and only if $x = x'$ and $w^{-1}w' \in
W_{S(x)}$. The \emph{Davis complex} $\Sigma$ for $(W,S)$ is then the quotient space: \[ \Sigma := (W
\times K) / \sim. \] The types of vertices of $K$ induce types of the vertices of $\Sigma$, and the
natural $W$--action on $W \times K$ descends to a type-preserving action on $\Sigma$, with compact quotient $K$, so
that the $W$--stabiliser of a vertex of $\Sigma$ of type $T \in \ensuremath{\mathcal{S}}$ is a conjugate of the spherical
special subgroup $W_T$.
We identify $K$
with the subcomplex $(1,K)$ of $\Sigma$, and write $wK$ for the translate $(w,K)$, where $w \in W$.
Any $wK$ is called a \emph{chamber} of $\Sigma$. The mirrors $K_s$ of $K$, or any of their
translates by elements of $W$, are called the \emph{mirrors} of $\Sigma$. Two distinct chambers of $\Sigma$
are \emph{$s$--adjacent} if their intersection is an $s$--mirror, and are \emph{adjacent} if their
intersection is an $s$--mirror for some $s \in S$. Note that the chambers $wK$ and $w'K$ are
$s$--adjacent if and only if $w^{-1}w' = s$, equivalently $w' = ws$ and $w's = w$. For Example 1 above, part of the Davis complex $\Sigma$ for $(W,S)$ is shown in
Figure~\ref{f:davis} below. There are $2m$ copies of $K$ glued around the vertices of types
$\{s_i,s_4\}$, for $i = 1,2,3$, since $W_{\{s_i,s_4\}}$ has order $2m$. Similarly, there are $2m'$ copies of $K$ glued around the vertices of types $\{s_i,s_5\}$, for $i = 1,2,3$.
The Davis complex $\Sigma$ may be metrised with a piecewise Euclidean structure, such that $\Sigma$
is a complete $\ensuremath{\mathbb{C}}AT(0)$ space (Moussong, see Theorem 12.3.3 of \cite{D}). From now on we will assume
that $\Sigma$ is equipped with this metric.
\begin{figure}
\caption{Part of $\Sigma$, for Example 1}
\label{f:davis}
\end{figure}
Suppose that $G=\ensuremath{\mathbb{A}}ut(\Sigma)$ is the group of automorphisms of a Davis complex $\Sigma$. Since $W$ acts cocompactly on $\Sigma$, with finite stabilisers, it may be regarded as a uniform
lattice in $G$. We take as the set $V$ in Theorem~\ref{t:Scovolumes} above the set of vertices of $\Sigma$ of type $\emptyset$. Then the covolume of $W$ is $1$, since $W$ acts simply transitively on $V$.
\subsection{Complexes of groups}\label{ss:complexes_of_groups}
We now outline the basic theory of complexes of groups, following Chapter III.$\ensuremath{\mathcal{C}}$ of~\cite{BH}. The definitions of the more involved notions of morphisms and coverings of complexes of groups are postponed to Section~\ref{ss:definitions} below.
In the literature, a complex of groups $G(Y)$ is constructed over a space or set $Y$ belonging to
various different categories, including simplicial complexes, polyhedral complexes, or, most
generally, \emph{scwols} (small categories without loops). In each case there is a set of
vertices, and a set of oriented edges with a composition law. The formal definition of a scwol is:
\begin{definition}\label{d:scwol} A \emph{scwol} $X$ is the disjoint
union of a set $V(X)$ of vertices and a set $E(X)$ of edges, with each edge $a$ oriented from its
initial vertex $i(a)$ to its terminal vertex $t(a)$, such that $i(a) \not = t(a)$. A pair of
edges $(a,b)$ is \emph{composable} if $i(a)=t(b)$, in which case there is a third edge $ab$, called
the \emph{composition} of $a$ and $b$, such that $i(ab)=i(b)$, $t(ab)=t(a)$ and if $i(a) = t(b)$
and $i(b)=t(c)$ then $(ab)c = a(bc)$ (associativity). \end{definition}
\noindent We will always assume scwols are \emph{connected} (see Section~III.$\mathcal{C}$.1.1 of~\cite{BH}). Morphisms of scwols and group actions on scwols are defined as follows:
\begin{definition}\label{d:morphism_scwols} Let $X$, $Y$ and $Z$ be scwols. A \emph{nondegenerate morphism} $f:Y \to Z$ is a map that sends $V(Y)$ to $V(Z)$, sends $E(Y)$ to $E(Z)$ and is such that:
\begin{enumerate}
\item for each $a \in E(Y)$, we have $i(f(a)) = f(i(a))$ and $t(f(a)) = f(t(a))$;
\item for each pair of composable edges $(a,b)$ in $Y$, we have $f(ab) = f(a)f(b)$; and
\item for each vertex $\ensuremath{\sigma} \in V(Y)$, the restriction of $f$ to the set of edges with initial vertex $\ensuremath{\sigma}$ is a bijection onto the set of edges of $Z$ with initial vertex $f(\ensuremath{\sigma})$.
\end{enumerate}
A \emph{morphism of scwols} $f:Y \to Z$ is a functor from the category $Y$ to the category $Z$ (see Section~III.$\mathcal{C}$.A.1 of~\cite{BH}). An \emph{automorphism} of a scwol $X$ is a morphism from $X$ to $X$ that has an inverse.
\end{definition}
\begin{definition}\label{d:action_on_scwol} An \emph{action} of a
group $G$ on a scwol $X$ is a homomorphism from $G$ to the group of automorphisms of $X$ such that for all $a \in E(X)$ and all $g \in G$: \begin{enumerate}
\item $g\cdot i(a) \not = t(a)$; and \item \label{i:no_inversions} if $g \cdot i(a) = i(a)$ then $g \cdot a = a$.
\end{enumerate} \end{definition}
Suppose now that $\Sigma$ is the Davis complex for a Coxeter system $(W,S)$, as defined in Section~\ref{ss:Davis_complexes} above. Recall that each vertex $\sigma \in V(\Sigma)$ has type a spherical subset $T$ of $S$. The edges $E(\Sigma)$ are then naturally oriented by inclusion of type. That is, if the edge $a$ joins the vertex $\sigma$ of type $T$ to
the vertex $\sigma'$ of type $T'$, then $i(a)=\sigma$ and $t(a)=\sigma'$ exactly when $T
\subsetneq T'$. It is clear that the sets $V(\Sigma)$ and $E(\Sigma)$ satisfy the properties of a scwol.
Moreover, if $Y$ is a subcomplex of $\Sigma$, then the sets $V(Y)$ and $E(Y)$ also satisfy
Definition~\ref{d:scwol} above. By abuse of notation, we identify $\Sigma$ and $Y$ with the associated
scwols.
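\noindent As an informal aside, the scwol axioms are easy to verify by machine for small examples of this kind, in which the edges come from strict inclusions in a poset. The following Python sketch (our own encoding, not used elsewhere) builds the poset of nonempty faces of a triangle, with one edge for each strict inclusion oriented from the smaller face to the larger, and checks the conditions of Definition~\ref{d:scwol}.
\begin{verbatim}
from itertools import combinations

# Nonempty faces of a triangle on {1, 2, 3}; one edge (x, y) for each strict
# inclusion x < y, oriented from the smaller face x = i(a) to the larger y = t(a).
vertices = [frozenset(c) for r in (1, 2, 3) for c in combinations((1, 2, 3), r)]
edges = [(x, y) for x in vertices for y in vertices if x < y]

def i(a): return a[0]
def t(a): return a[1]

def compose(a, b):
    assert i(a) == t(b)              # (a, b) is a composable pair
    return (i(b), t(a))              # the composition ab

assert all(i(a) != t(a) for a in edges)
for a in edges:
    for b in edges:
        if i(a) == t(b):
            assert compose(a, b) in edges                 # compositions exist
            for c in edges:
                if i(b) == t(c):                          # associativity
                    assert compose(compose(a, b), c) == compose(a, compose(b, c))
print(len(vertices), "vertices,", len(edges), "edges: a scwol")
\end{verbatim}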
We now define complexes of groups over scwols.
\begin{definition}
A \emph{complex of groups} $G(Y)=(G_\sigma, \psi_a, g_{a,b})$ over a scwol
$Y$ is given by: \begin{enumerate} \item a group $G_\sigma$ for each
$\sigma \in V(Y)$, called the \emph{local group} at $\sigma$;
\item a monomorphism $\psi_a: G_{i(a)}\rightarrow G_{t(a)}$ along the edge $a$ for each
$a \in E(Y)$; and
\item for each pair of composable edges, a twisting element $g_{a,b} \in
G_{t(a)}$, such that \[ \ensuremath{\mathbb{A}}d(g_{a,b})\circ\psi_{ab} = \psi_a
\circ\psi_b
\] where $\ensuremath{\mathbb{A}}d(g_{a,b})$ is conjugation by $g_{a,b}$ in $G_{t(a)}$,
and for each triple of composable edges $a,b,c$ the following
\emph{cocycle condition} holds
\[\psi_a(g_{b,c})\,g_{a,bc} = g_{a,b}\,g_{ab,c}.\] \end{enumerate}
\end{definition}
\noindent A complex of groups is \emph{simple} if each $g_{a,b}$ is trivial.
\noindent {\bf Example: } This example will be followed throughout this section, and used in the proof of the Main
Theorem in Section~\ref{s:proof} below. Let $(W,S)$ be a Coxeter system with nerve $L$ and let $K$ be
the cone on the barycentric subdivision of $L$, as in Section~\ref{ss:Davis_complexes} above. Put $Y_1
= K$, with the orientations on edges discussed above. We construct a simple complex of groups $G(Y_1)$
over $Y_1$ as follows. Let $\ensuremath{\sigma} \in V(Y_1)$. Then $\ensuremath{\sigma}$ has type a spherical subset $T$ of $S$, and we
define $G_\ensuremath{\sigma} = W_T$. All monomorphisms along edges of $Y_1$ are then natural inclusions, and all $g_{a,b}$
are trivial. For $(W,S)$ as in Example 1 of Section~\ref{ss:Davis_complexes} above, the complex of
groups $G(Y_1)$ is shown in Figure~\ref{f:GWK} below. In this figure, $D_{2m}$ and $D_{2m'}$ are the
dihedral groups of orders $2m$ and $2m'$ respectively, and $C_2$ is the cyclic group of order $2$.
\begin{figure}
\caption{The complex of groups $G(Y_1)$, for Example 1 of Section~\ref{ss:Davis_complexes}}
\label{f:GWK}
\end{figure}
Suppose a group $G$ acts on a scwol $X$, as in Definition~\ref{d:action_on_scwol} above. Then the
quotient $Y = G \backslash X$ also has the structure of a scwol, and the action of $G$ induces a complex of
groups $G(Y)$ over $Y$ (this construction is generalised in Section~\ref{ss:induced_cogs} below). Let $Y = G \backslash X$ with $p: X \to Y$ the natural projection. For each $\sigma \in V(Y)$, choose a lift $\overline\sigma \in V(X)$ such that $p(\overline\sigma) = \sigma$. The local group $G_\sigma$ of $G(Y)$ is then defined to be the stabiliser of $\overline\sigma$ in $G$, and the monomorphisms $\psi_a$ and elements $g_{a,b}$ are defined using further choices. The resulting complex of groups $G(Y)$ is unique up to
isomorphism (see Definition~\ref{d:morphism} below).
A complex of groups is \emph{developable} if it is isomorphic to a complex of groups $G(Y)$ induced, as
just described, by such an action. Complexes of groups, unlike graphs of groups, are not in general
developable. The complex of groups $G(Y_1)$ above is developable, since it is the complex of groups
induced by the action of $W$ on $\Sigma$, where for each $\ensuremath{\sigma} \in V(Y_1)$, with $\ensuremath{\sigma}$ of type $T$, we
choose $\overline\sigma$ in $\Sigma$ to be the vertex of $(1,K) = K \subset \Sigma$ of type $T$.
Let $G(Y)$ be a complex of groups. The \emph{fundamental group of the complex of groups} $\pi_1(G(Y))$
is defined so that if $Y$ is simply connected and $G(Y)$ is simple, $\pi_1(G(Y))$ is isomorphic to the
direct limit of the family of groups $G_\sigma$ and monomorphisms $\psi_a$. For example, since $Y_1 =
K$ is contractible and $G(Y_1)$ is a simple complex of groups, the fundamental group of
$G(Y_1)$ is $W$.
If $G(Y)$ is a developable complex of groups, then it has a \emph{universal cover}
$\widetilde{G(Y)}$. This is a connected, simply-connected scwol, equipped with an action of
$\pi_1(G(Y))$, so that the complex of groups induced by the action of the fundamental group on the
universal cover is isomorphic to $G(Y)$. For example, the universal cover of $G(Y_1)$ is $\Sigma$.
Let $G(Y)$ be a developable complex of groups over $Y$, with universal cover $X$ and fundamental
group $\Gamma$. We say that $G(Y)$ is \emph{faithful} if the action of $\Gamma$ on $X$ is faithful, in which
case we may identify $\Gamma$ with its image in $\ensuremath{\mathbb{A}}ut(X)$. If $X$ is locally finite, then with the
compact-open topology on $\ensuremath{\mathbb{A}}ut(X)$, by the discussion in Section~\ref{ss:lattices} above the subgroup
$\Gamma \leq \ensuremath{\mathbb{A}}ut(X)$ is discrete if and only if all local groups of $G(Y)$ are finite. Moreover, if
$\ensuremath{\mathbb{A}}ut(X)$ acts cocompactly on $X$, a discrete $\Gamma \leq \ensuremath{\mathbb{A}}ut(X)$
is a uniform lattice in $\ensuremath{\mathbb{A}}ut(X)$ if and only if $Y \cong \Gamma \backslash X$ is finite, and a discrete $\Gamma \leq \ensuremath{\mathbb{A}}ut(X)$ is a
nonuniform lattice if and only if $Y \cong \Gamma \backslash X$ is infinite and the series $\ensuremath{\operatorname{Vol}}(\Gamma \backslash \! \backslash V)$ converges, for
some $V \subset X$ on which $G=\ensuremath{\mathbb{A}}ut(X)$ acts according to the hypotheses of
Theorem~\ref{t:Scovolumes} above.
\section{Group actions on complexes of groups}\label{s:group_actions}
In this section we introduce a theory of group actions on complexes of groups. The main result is
Theorem~\ref{t:group_action} below, which makes precise and expands upon
Theorem~\ref{t:group_action_intro} of the introduction. The terms appearing in Theorem~\ref{t:group_action}
which have not already been discussed in Section~\ref{ss:complexes_of_groups} above will be defined in
Section~\ref{ss:definitions} below, where we also introduce some notation. In
Section~\ref{ss:induced_cogs} below we construct the complex of groups induced by a group action on a complex
of groups, and in Section~\ref{ss:induced_covering} we construct the induced covering. Using these
results, in Section~\ref{ss:fund_group} we consider the structure of the fundamental group of the
induced complex of groups.
We will require only actions on \emph{simple} complexes of groups $G(Y)$ by \emph{simple} morphisms; this case is already substantially technical. If in addition the action on $Y$ has a strict fundamental domain, it is possible to make choices so that the induced complex of groups is also simple, and many of the proofs in this section become much easier. However, in our applications, the group action will not necessarily have a strict fundamental domain.
\begin{theorem}\label{t:group_action} Let $G(Y)$ be a simple complex of groups over a connected scwol
$Y$, and suppose that the action of a group $H$ on $Y$ extends to an action by simple morphisms on
$G(Y)$. Then the $H$--action induces a complex of groups $H(Z)$ over $Z = H \backslash Y$, with $H(Z)$
well-defined up to isomorphism of complexes of groups, such that there is a covering of complexes of
groups \[ G(Y) \to H(Z). \] Moreover there is a natural short exact sequence \[1 \to \pi_1(G(Y)) \to \pi_1(H(Z)) \to H \to 1,\] and if $H$ fixes a vertex of $Y$, then
\[ \pi_1(H(Z)) \cong \pi_1(G(Y)) \rtimes H.\] Finally, if $G(Y)$ is faithful and the $H$--action on $G(Y)$ is faithful then
$H(Z)$ is faithful.\end{theorem}
We will use the following general result on functoriality of coverings (which is implicit in~\cite{BH}, and stated and proved explicitly in~\cite{LT}).
\begin{theorem}\label{t:coverings} Let $G(Y)$ and $H(Z)$ be complexes of groups over scwols $Y$ and
$Z$. Suppose there is a covering of complexes of groups $\Phi:G(Y) \to H(Z)$. Then $G(Y)$ is
developable if and only if $H(Z)$ is developable. Moreover, $\Phi$ induces a monomorphism of fundamental groups \[ \pi_1(G(Y)) \hookrightarrow \pi_1(H(Z))\]
and an equivariant isomorphism of universal covers \[ \widetilde{G(Y)} \stackrel{\cong}{\longrightarrow}
\widetilde{H(Z)}.\] \end{theorem}
\subsection{Definitions and notation}\label{ss:definitions}
We gather here definitions and notation needed for the statement and proof of
Theorem~\ref{t:group_action} above. Throughout this section, $Y$ and $Z$ are scwols, $G(Y)=(G_\ensuremath{\sigma},\psi_a)$ is a simple
complex of groups over $Y$, and $H(Z)=(H_\tau,\theta_a, h_{a,b})$ is a complex of groups over
$Z$.
\begin{definition}\label{d:morphism} Let $f: Y\to Z$ be a morphism of scwols (see Definition~\ref{d:morphism_scwols} above). A \emph{morphism} $\Phi: G(Y) \to H(Z)$ over $f$ consists of: \begin{enumerate}
\item a homomorphism $\phi_\sigma: G_\sigma \to H_{f(\sigma)}$ for each $\sigma \in V(Y)$, called
the \emph{local map} at $\ensuremath{\sigma}$; and
\item\label{i:commuting} an element $\phi(a) \in H_{t(f(a))}$ for each $a \in E(Y)$, such that the following diagram commutes
\[\xymatrix{
G_{i(a)} \ar[d]^-{\phi_{i(a)}} \ar[rrr]^{\psi_a} & & & G_{t(a)} \ar[d]^-{\phi_{t(a)}}
\\
H_{f(i(a))} \ar[rrr]^{\ensuremath{\mathbb{A}}d(\phi(a))\circ \theta_{f(a)}} & & & H_{f(t(a))}
}\]
and for all pairs of
composable edges $(a,b)$ in $E(Y)$, \[ \phi(ab) = \phi(a) \,\psi_a(\phi(b))h_{f(a),f(b)} . \]
\end{enumerate} \end{definition}
\noindent A morphism is \emph{simple} if each element $\phi(a)$ is trivial. If $f$ is an isomorphism of scwols, and each $\phi_\sigma$ an isomorphism of the local groups, then $\Phi$ is an \emph{isomorphism of complexes of groups}.
We introduce the following, expected, definitions. An \emph{automorphism} of $G(Y)$ is an
isomorphism $\Phi:G(Y) \to G(Y)$. It is not hard to verify that the set of automorphisms of
$G(Y)$ forms a group under composition, which we denote $\ensuremath{\mathbb{A}}ut(G(Y))$ (see Section~III.$\mathcal{C}$.2.4 of~\cite{BH}
for the definition of composition of morphisms). We then say that a group \emph{$H$ acts
on $G(Y)$} if there is a homomorphism \[ H \to \ensuremath{\mathbb{A}}ut(G(Y)).\]
Our notation is as follows. Suppose $H$ acts on $G(Y)$. Then in particular, $H$ acts on the
scwol $Y$ in the sense of Definition~\ref{d:action_on_scwol} above. We write the action of $H$ on $Y$ as
$\ensuremath{\sigma} \mapsto h\cdot\ensuremath{\sigma}$ and $a \mapsto h\cdot a$, for $h \in H$, $\ensuremath{\sigma} \in V(Y)$ and $a \in E(Y)$. The element
$h \in H$ induces the automorphism $\Phi^h$ of $G(Y)$. The data for
$\Phi^h$ is a family $(\phi^h_\sigma)_{\sigma \in V(Y)}$ of group isomorphisms
$\phi^h_\sigma:G_\sigma \to G_{h\cdot\sigma}$, and a family of elements $(\phi^h(a))_{a \in E(Y)}$
with $\phi^h(a) \in G_{t(h \cdot a)}$, satisfying the definition of morphism above (Definition~\ref{d:morphism}).
We say that the $H$--action is \emph{by simple morphisms} if each $\Phi^h$ is simple, that is, if each $\phi^h(a) \in G_{t(h \cdot a)}$ is the trivial element. Explicitly, for each $a \in E(Y)$ and each $h \in H$, the following diagram commutes. \[\xymatrix{
G_{i(a)} \ar[d]^-{\phi^h_{i(a)}} \ar[rrr]^{\psi_a} & & & G_{t(a)} \ar[d]^-{\phi^h_{t(a)}}
\\
G_{h \cdot i(a)} \ar[rrr]^{\psi_{h \cdot a}} & & & G_{h \cdot t(a)}
}\] We note also that the composition of simple morphisms $\Phi^{h'} \circ \Phi^{h}$ is the simple morphism
$\Phi^{h'h}$ with local maps \begin{equation}\label{e:composition}\phi^{h' h}_\ensuremath{\sigma} = \phi^{h'}_{h\cdot\ensuremath{\sigma}}\circ \phi^{h}_\ensuremath{\sigma}.\end{equation}
Finally we recall the definition of a covering of complexes of groups.
\begin{definition}\label{d:covering} A morphism $\Phi:G(Y) \to H(Z)$ over a nondegenerate morphism of
scwols $f:Y\to Z$ (see Definition~\ref{d:morphism_scwols} above) is a
\emph{covering of complexes of groups} if further: \begin{enumerate}\item each $\phi_\sigma$ is
injective; and \item \label{i:covbijection} for each $\sigma \in V(Y)$ and $b \in E(Z)$ such that
$t(b) = f(\sigma)$, the map on cosets \[ \Phi_{\sigma/b}:\left(\coprod_{\substack{a \in f^{-1}(b)\\ t(a)=\sigma}} G_\sigma /
\psi_a(G_{i(a)})\right) \to H_{f(\sigma)} / \theta_b(H_{i(b)})\] induced by $g \mapsto
\phi_\sigma(g)\phi(a)$ is a bijection.\end{enumerate}\end{definition}
\subsection{The induced complex of groups and its properties}\label{ss:induced_cogs}
Suppose that a group $H$ acts by simple morphisms on a simple complex of groups $G(Y)=(G_\ensuremath{\sigma},\psi_a)$. In
this section we construct the complex of groups $H(Z)$ induced by this action, prove that $H(Z)$ is
well-defined up to isomorphism of complexes of groups and discuss faithfulness.
Let $Z$ be the quotient scwol $Z = H \backslash Y$ and let $p:Y \to Z$ be the natural projection. For each
vertex $\tau \in V(Z)$ choose a representative $\overline\tau \in V(Y)$ such that $p(\overline\tau) =
\tau$. Let $\ensuremath{\operatorname{St}}ab_H(\overline\tau)$ be the subgroup of $H$ fixing $\overline\tau$ and let $G_{\overline\tau}$ be the local
group of $G(Y)$ at $\overline\tau$. Since the $H$--action is by simple morphisms, by Equation~\eqref{e:composition} above there is a group homomorphism $\zeta:\ensuremath{\operatorname{St}}ab_H(\overline\tau) \to \ensuremath{\mathbb{A}}ut(G_{\overline\tau})$ given by $\zeta(h) = \phi^h_{\overline\tau}$. For each $\tau \in V(Z)$ we then define the local group $H_\tau$ to be the corresponding semidirect product of $G_{\overline\tau}$ by $\ensuremath{\operatorname{St}}ab_H(\overline\tau)$, that is, \[H_\tau := G_{\overline\tau} \rtimes_\zeta \ensuremath{\operatorname{St}}ab_H(\overline\tau) = G_{\overline\tau} \rtimes \ensuremath{\operatorname{St}}ab_H(\overline\tau).\]
For each edge $a \in E(Z)$ with $i(a) = \tau$ there is, since $H$ acts on $Y$ in the sense of
Definition~\ref{d:action_on_scwol} above, a unique edge $\overline{a} \in E(Y)$ such that
$p(\overline{a})=a$ and $i(\overline{a}) = \overline{i(a)} = \overline\tau$. For each $a \in E(Z)$
choose an element $h_a \in H$ such that $h_a\cdot t(\overline{a}) = \overline{t(a)}$.
\begin{lemma}\label{l:monom} Let $g \in G_{i(\overline{a})} = G_{\overline{i(a)}}$ and $h \in
\ensuremath{\operatorname{St}}ab_H\left(\overline{i(a)}\right)$. Then the map \[ \theta_a: (g,h) \mapsto
(\phi^{h_a}_{t(\overline{a})} \circ \psi_{\overline{a}} (g), h_a h h_a^{-1}) \] is a monomorphism
$H_{i(a)} \to H_{t(a)}$. \end{lemma}
\begin{proof} We will show that $\theta_a$ is a group homomorphism. Since $\phi^{h_a}_{t(\overline{a})}$, $\psi_{\overline{a}}$ and the conjugation $h \mapsto h_a h h_a^{-1}$ are all injective, the conclusion will then follow.
Let $g,g' \in G_{\overline{i(a)}}$ and $h,h' \in \ensuremath{\operatorname{St}}ab_H(\overline{i(a)})$. Note that since $h$ and $h'$ fix $\overline{i(a)} = i(\overline{a})$, they fix the edge $\overline{a}$ and hence fix the vertex $t(\overline{a})$ as well. We have
\begin{eqnarray*}
\theta_a((g,h)(g,h')) & = & \theta_a(g\phi^h_{\overline{i(a)}}(g'), hh') \\
& = & (\phi^{h_a}_{t(\overline{a})}\circ \psi_{\overline{a}}(g\phi^h_{\overline{i(a)}}(g')),h_ahh'h_a^{-1})
\end{eqnarray*}
while
\begin{eqnarray*}
\theta_a(g,h)\theta_a(g',h') & = & (\phi^{h_a}_{t(\overline{a})}\circ \psi_{\overline{a}}(g),h_ahh_a^{-1})(\phi^{h_a}_{t(\overline{a})}\circ \psi_{\overline{a}}(g'),h_ah'h_a^{-1})\\
& = & (\phi^{h_a}_{t(\overline{a})}\circ \psi_{\overline{a}}(g)\phi^{h_ahh_a^{-1}}_{\overline{t(a)}}\circ \phi^{h_a}_{t(\overline{a})}\circ \psi_{\overline{a}}(g'),h_ahh'h_a^{-1}).
\end{eqnarray*}
After applying Equation~\eqref{e:composition} above to the map $\phi^{h_a h h_a^{-1}}$, and some cancellations, it remains to show that
\[\psi_{\overline{a}} \circ \phi^h_{\overline{i(a)}}(g') = \phi^h_{t(\overline{a})} \circ \psi_{\overline{a}}(g').\]
This follows from the fact that $\Phi^h$ is a simple morphism with $h\cdot \overline{a} = \overline{a}$.
\end{proof}
To complete the construction of $H(Z)$, for each composable pair of edges $(a,b)$ in $E(Z)$, define
\[h_{a,b} = h_a h_b h_{ab}^{-1}.\]
One checks that $h_{a,b} \in \ensuremath{\operatorname{St}}ab_H(\overline{t(a)})$ hence $(1,h_{a,b}) \in H_{t(a)}$. By abuse of notation we write
$h_{a,b}$ for $(1, h_{a,b})$.
\begin{proposition} The datum $H(Z) = (H_\ensuremath{\sigma}, \theta_a, h_{a,b})$ is a complex of groups.
\end{proposition}
\begin{proof} Given Lemma~\ref{l:monom} above, it remains to show that for each pair of composable edges
$(a,b)$ in $E(Z)$, \begin{equation}\label{e:thetas} \ensuremath{\mathbb{A}}d(h_{a,b}) \circ \theta_{ab} = \theta_a
\circ \theta_b,\end{equation} and that the cocycle condition holds. Let $(g,h) \in H_{i(b)}=G_{\overline{i(b)}} \rtimes
\ensuremath{\operatorname{St}}ab_H(\overline{i(b)})$. We compute \begin{eqnarray*}\ensuremath{\mathbb{A}}d(h_{a,b}) \circ \theta_{ab} (g,h) & = &
(\phi^{h_{a,b}}_{\overline{t(ab)}}\circ \phi^{h_{ab}}_{t(\overline{ab})} \circ \psi_{\overline{ab}}(g),h_{a,b}h_{ab} h
h_{ab}^{-1}h_{a,b}^{-1}) \end{eqnarray*} while
\begin{eqnarray*}\theta_a \circ \theta_b (g,h) & = & (\phi^{h_a}_{t(\overline{a})} \circ
\psi_{\overline{a}} \circ \phi^{h_b}_{t(\overline{b})} \circ \psi_{\overline{b}}(g), h_a h_b h
h_b^{-1} h_a^{-1}).\end{eqnarray*} By definition of $h_{a,b}$ it remains to show equality in the first component.
By Equation~\eqref{e:composition} and the definition of $h_{a,b}$,
\[ \phi^{h_{a,b}}_{\overline{t(ab)}} = \phi^{h_a}_{t(\overline{a})} \circ \phi^{h_b}_{t(\overline{ab})} \circ \phi^{h_{ab}^{-1}}_{\overline{t(ab)}}. \]
Hence it suffices to prove
\begin{equation}\label{e:useful}\phi^{h_b}_{t(\overline{ab})}
\circ \psi_{\overline{ab}} = \psi_{\overline{a}} \circ \phi^{h_b}_{t(\overline{b})} \circ \psi_{\overline{b}}. \end{equation}
Since $G(Y)$ is a simple complex of groups, and $\overline{ab}$ is the composition of the edges
$h_b^{-1}.\overline{a}$ and $\overline{b}$, we have \[ \psi_{\overline{ab}} = \psi_{h_b^{-1}
\overline{a}} \circ \psi_{\overline{b}}.\] Applying this, and the fact that $\phi^{h_b}_{t(\overline{b})}$ is a simple morphism on the edge $h_b^{-1} \overline{a}$, we have \[ \phi^{h_b}_{t(\overline{ab})} \circ \psi_{\overline{ab}} = \phi^{h_b}_{t(\overline{ab})} \circ \psi_{h_b^{-1} \overline{a}} \circ \psi_{\overline{b}} =
\psi_{\overline{a}} \circ \phi^{h_b}_{t(\overline{b})} \circ \psi_{\overline{b}}.\]
Hence Equation~\eqref{e:useful} holds.
The cocycle condition follows from the definition of $h_{a,b}$. We conclude that $H(Z)$ is a complex of groups.
\end{proof}
We now have a complex of groups $H(Z)$ induced by the action of $H$ on $G(Y)$. This construction
depended on choices of lifts $\overline\tau$ and of elements $h_a \in H$. We next show (in a generalisation of Section~III.$\mathcal{C}$.2.9(2) of~\cite{BH}) that:
\begin{lemma}\label{l:well_defined} The complex of groups $H(Z)$ is well-defined up to isomorphism of complexes of groups.
\end{lemma}
\begin{proof} Suppose we made a different choice of lifts $\overline\tau'$ and elements $h_a'$,
resulting in a complex of groups $H'(Z) = (H'_\tau, \theta'_a, h'_{a,b})$. An isomorphism
$\Lambda = (\lambda_\ensuremath{\sigma}, \lambda(a))$ from $H(Z)$ to $H'(Z)$ over the identity map $Z \to Z$ is
constructed as follows. For each $\tau \in V(Z)$, choose an element $k_\tau \in H$ such that
$k_\tau \cdot \overline\tau = \overline\tau'$, and define a group isomorphism $\lambda_\tau:H_\tau \to
H'_\tau$ by \[\lambda_\tau(g,h) = (\phi^{k_\tau}_{\overline\tau}(g), k_\tau h k_\tau^{-1}).\] For
each $a \in E(Z)$, define $\lambda(a) = (1, k_{t(a)} h_a k^{-1}_{i(a)} {h'_a}^{-1})$. Note that by ~III.$\mathcal{C}$.2.9(2)
of~\cite{BH}, $\lambda(a) \in H'_{t(a)}$.
The verification that $\Lambda = (\lambda_\ensuremath{\sigma},\lambda(a))$ is an isomorphism of complexes of groups is straightforward.\end{proof}
We remind the reader that faithfulness of a complex of groups is defined in the final paragraph of Section~\ref{ss:complexes_of_groups} above.
\begin{lemma}\label{l:faithful} If $G(Y)$ is faithful and the $H$--action on $Y$ is faithful then $H(Z)$ is faithful.
\end{lemma}
\begin{proof} This follows from the construction of $H(Z)$, and the characterisation of faithful
complexes of groups in Proposition 38 of~\cite{LT}.
\end{proof}
\subsection{The induced covering}\label{ss:induced_covering}
Suppose $H$ acts by simple morphisms on a simple complex of groups $G(Y)$, inducing a complex of groups $H(Z)$ as in Section~\ref{ss:induced_cogs} above. In this section we construct a covering of complexes of groups $\Lambda:G(Y) \to H(Z)$ over the quotient map $p:Y \to Z$.
For $\ensuremath{\sigma} \in V(Y)$, the local maps $\lambda_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to H_{p(\ensuremath{\sigma})}$ are defined as follows. Recall
that for each vertex $\tau \in V(Z)$ we chose a lift $\overline\tau \in V(Y)$. Now for each $\ensuremath{\sigma} \in V(Y)$, we choose $k_\ensuremath{\sigma} \in H$ such that $k_\ensuremath{\sigma} \cdot \ensuremath{\sigma} = \overline{p(\ensuremath{\sigma})}$. Hence $\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}$ is an isomorphism $G_\ensuremath{\sigma} \to G_{\overline{p(\ensuremath{\sigma})}}$. The local map $\lambda_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to
H_{p(\ensuremath{\sigma})}$ is then defined by \[\lambda_\ensuremath{\sigma}: g \mapsto (\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g), 1).\] Note
that each $\lambda_\ensuremath{\sigma}$ is injective.
For each edge $a \in E(Y)$, define \[\lambda(a) = (1, k_{t(a)}k_{i(a)}^{-1} h_b^{-1})\] where
$p(a) = b \in E(Z)$. Note that, since $H$ acts on $Y$ in the sense of
Definition~\ref{d:action_on_scwol} above, we have $k_{i(a)}\cdot a = \overline{b}$
hence $k_{t(a)}k_{i(a)}^{-1} h_b^{-1}$ fixes $\overline{t(b)}$. Thus $\lambda(a) \in
H_{t(b)}$ as required.
\begin{proposition}\label{p:covering} The map $\Lambda = (\lambda_\ensuremath{\sigma},\lambda(a))$ is a covering of complexes of groups. \end{proposition}
\begin{proof} It may be checked that $\Lambda$ is a morphism of complexes of groups. As noted, each
of the local maps $\lambda_\sigma$ is injective. It remains to show that for each $\sigma \in V(Y)$ and
$b\in E(Z)$ such that $t(b) = p(\sigma)=\tau$, the map on cosets \[ \Lambda_{\sigma/b}:\left(\coprod_{\substack{a \in p^{-1}(b)\\
t(a)=\sigma}} G_\sigma / \psi_a(G_{i(a)})\right) \to H_{\tau} / \theta_b(H_{i(b)})\] induced by $g
\mapsto \lambda_\ensuremath{\sigma}(g)\lambda(a)=(\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g), k_\ensuremath{\sigma} k_{i(a)}^{-1} h_b^{-1})$ is a bijection.
We first show that $\Lambda_{\ensuremath{\sigma}/b}$ is injective. Suppose $a$ and $a'$ are in $p^{-1}(b)$ with
$t(a)=t(a')=\ensuremath{\sigma}$, and suppose $g, g' \in G_\ensuremath{\sigma}$ with $g$ representing a coset of
$\psi_{a}(G_{i(a)})$ in $G_\ensuremath{\sigma}$ and $g'$ a coset of $\psi_{a'}(G_{i(a')})$ in $G_\ensuremath{\sigma}$. Assume that
$\lambda_\ensuremath{\sigma}(g)\lambda(a)$ and $\lambda_\ensuremath{\sigma}(g')\lambda(a')$ belong to the same coset of
$\theta_b(H_{i(b)})$ in $H_\tau$.
Looking at the second component of the semidirect product $H_\tau$, it follows from the definition of
$\theta_b$ (Lemma~\ref{l:monom} above) that for some $h \in \ensuremath{\operatorname{St}}ab_H({\overline{i(b)}})$, \[ k_\ensuremath{\sigma}
k_{i(a)}^{-1} h_b^{-1} = \left(k_\ensuremath{\sigma} k_{i(a')}^{-1} h_b^{-1}\right) \left( h_b h h_b^{-1}\right)
\\ = k_\ensuremath{\sigma} k_{i(a')}^{-1} h h_b^{-1}.\] Thus $k_{i(a')}k_{i(a)}^{-1}=h$ fixes $\overline{i(b)}$.
Hence $k_{i(a)}^{-1}k_{i(a')}$ fixes $k_{i(a)}^{-1}\overline{i(b)} = i(a)$, and so
$k_{i(a)}^{-1}k_{i(a')}$ fixes $a$. Thus $k_{i(a')} \cdot a = k_{i(a)} \cdot a = \overline{b} = k_{i(a')} \cdot a'$,
hence $a = a'$.
Looking now at the first component of $\lambda_\ensuremath{\sigma}(g)\lambda(a)$ and
$\lambda_\ensuremath{\sigma}(g')\lambda(a')=\lambda_\ensuremath{\sigma}(g')\lambda(a)$ in the semidirect product $H_\tau$, by
definition of $\theta_b$, for some $x \in G_{\overline{i(b)}}$ we have \begin{eqnarray*}
\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g) & = & \phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g')\phi^{k_\ensuremath{\sigma} k_{i(a)}^{-1} h_{b}^{-1}}_{\overline{t(b)}}
\circ \phi^{h_b}_{t(\overline{b})} \circ \psi_{\overline{b}}(x) \\ & = &
\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g')\phi^{k_\ensuremath{\sigma}}_{\ensuremath{\sigma}} \circ \phi^{k_{i(a)}^{-1}}_{t(\overline{b})} \circ
\psi_{\overline{b}}(x).\end{eqnarray*} Since $\phi^{k_\ensuremath{\sigma}}_{\ensuremath{\sigma}}$ is an isomorphism, and $k_{i(a)}^{-1}\cdot\overline{b} = a$, this implies \[(g')^{-1}g = \phi^{k_{i(a)}^{-1}}_{t(\overline{b})}
\circ \psi_{\overline{b}}(x) = \psi_a \circ \phi^{k_{i(a)}^{-1}}_{\overline{i(b)}}(x) \in
\psi_a(G_{i(a)})\] as required. Thus the map $\Lambda_{\ensuremath{\sigma}/b}$ is injective.
To show that $\Lambda_{\ensuremath{\sigma}/b}$ is surjective, let $g \in G_{\overline{\tau}}$ and $h \in \ensuremath{\operatorname{St}}ab_H(\overline\tau)$, so
that $(g,h) \in H_\tau$. Let $a$ be the unique edge of $Y$ with $t(a) =\ensuremath{\sigma}$ and such that $k_\ensuremath{\sigma}\cdot a = h h_b
\overline{b}$. Let $g'$ be the unique element of $G_\ensuremath{\sigma}$ such that $\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g') = g \in
G_{\overline\tau}$. We claim that $\lambda_\ensuremath{\sigma}(g')\lambda(a)$ lies in the same coset as $(g,h)$. Now
\[\lambda_\ensuremath{\sigma}(g')\lambda(a) = (\phi^{k_\ensuremath{\sigma}}_\ensuremath{\sigma}(g'), k_\ensuremath{\sigma} k_{i(a)}^{-1} h_b^{-1})=(g,k_\ensuremath{\sigma} k_{i(a)}^{-1} h_b^{-1})\]
so it suffices to show that $k_\ensuremath{\sigma} k_{i(a)}^{-1} h_b^{-1} \in h h_b \ensuremath{\operatorname{St}}ab_H(\overline{i(b)}) h_b^{-1}$.
Equivalently, we wish to show that $h_b^{-1} h^{-1} k_\ensuremath{\sigma} k_{i(a)}^{-1}$ fixes $\overline{i(b)}$. We have
$k_{i(a)}\cdot i(a) = \overline{i(b)}$ by definition, and the result follows by our choice
of $a$. Thus $\Lambda_{\ensuremath{\sigma}/b}$ is surjective.
Hence $\Lambda$ is a covering of complexes of groups.\end{proof}
\subsection{The fundamental group}\label{ss:fund_group}
Suppose $H$ acts by simple morphisms on a simple complex of groups $G(Y)$, inducing a complex of groups $H(Z)$ as in Section~\ref{ss:induced_cogs} above. In this section we establish the short exact sequence of Theorem~\ref{t:group_action} above, and provide sufficient conditions for the fundamental group of $H(Z)$ to be the semidirect product of the fundamental group of $G(Y)$ by $H$.
Fix $\ensuremath{\sigma}_0$ a vertex of $Y$ and let $p:Y \to Z$ be the natural projection. We refer the reader to Section~III.$\mathcal{C}$.3 of~\cite{BH} for the definition of the \emph{fundamental group of G(Y) at $\ensuremath{\sigma}_0$}, denoted $\pi_1(G(Y),\ensuremath{\sigma}_0)$. We will use notation and results from that section in the following proof. Let $\pi_1(H(Z),p(\ensuremath{\sigma}_0))$ be the fundamental group of $H(Z)$ at $p(\ensuremath{\sigma}_0)$.
\begin{proposition}\label{p:SES} There is a natural short exact sequence
\[1 \to \pi_1(G(Y),\ensuremath{\sigma}_0) \to \pi_1(H(Z),p(\ensuremath{\sigma}_0)) \to H \to 1.\]
\end{proposition}
\begin{proof}
To obtain a monomorphism $\pi_1(G(Y),\ensuremath{\sigma}_0) \to \pi_1(H(Z),p(\ensuremath{\sigma}_0))$, we use the morphism of complexes of groups $\Lambda:G(Y) \to H(Z)$ defined in Section~\ref{ss:induced_covering} above. By Proposition~III.$\mathcal{C}$.3.6 of~\cite{BH}, $\Lambda$ induces a natural homomorphism
\[\pi_1(\Lambda,\ensuremath{\sigma}_0):\pi_1(G(Y),\ensuremath{\sigma}_0) \to \pi_1(H(Z),p(\ensuremath{\sigma}_0)).\] Since
$\Lambda$ is a covering (Proposition~\ref{p:covering} above), Theorem~\ref{t:coverings} above implies
that this map $\pi_1(\Lambda,\ensuremath{\sigma}_0)$ is in fact injective.
We next define a surjection $\pi_1(H(Z),p(\ensuremath{\sigma}_0)) \to H$. The group $H$ may be regarded as a complex of groups over a single vertex. There is then a canonical morphism of complexes of groups $\Phi:H(Z) \to H$, defined as follows. Recall that for each $\tau \in V(Z)$, the local group $H_\tau$ is given by $H_\tau = G_{\overline\tau} \rtimes \ensuremath{\operatorname{St}}ab_H(\overline\tau)$. The local map $\phi_\tau:H_\tau \to H$ in the morphism $\Phi$ is defined to be projection to the second factor $\ensuremath{\operatorname{St}}ab_H(\overline{\tau}) \leq H$. For each edge $b$ of $Z$, we define $\phi(b) = h_b$. It may then be checked that $\Phi$ is a morphism.
By Proposition~III.$\mathcal{C}$.3.6 of~\cite{BH}, the morphism $\Phi$ induces a homomorphism of fundamental groups
\[\pi_1(\Phi,p(\ensuremath{\sigma}_0)): \pi_1(H(Z),p(\ensuremath{\sigma}_0)) \to H.\]
By~III.$\mathcal{C}$.3.14 and Corollary~III.$\mathcal{C}$.3.15 of~\cite{BH}, if $G(Y)$ were a complex of trivial groups, this map would be surjective. Since the image of $\pi_1(\Phi,p(\ensuremath{\sigma}_0))$ does not in fact depend on the local groups of $G(Y)$, we have that in all cases, $\pi_1(\Phi,p(\ensuremath{\sigma}_0))$ is surjective, as required.
It follows from definitions that the image of the monomorphism $\pi_1(\Lambda,\ensuremath{\sigma}_0)$ is the kernel of the surjection $\pi_1(\Phi,p(\ensuremath{\sigma}_0))$. Hence the sequence above is exact.\end{proof}
\begin{corollary}\label{p:fund_gp_splits}
If $H$ fixes a vertex of $Y$,
\[ \pi_1(H(Z),p(\ensuremath{\sigma}_0)) \cong \pi_1(G(Y),\ensuremath{\sigma}_0) \rtimes H.\]
\end{corollary}
\begin{proof}
Suppose that $H$ fixes the vertex $\sigma$ of $Y$. We will construct a section $\iota:H \to \pi_1(H(Z),p(\ensuremath{\sigma}_0))$ for the surjective homomorphism $\pi_1(\Phi,p(\ensuremath{\sigma}_0)): \pi_1(H(Z), p(\ensuremath{\sigma}_0)) \to H$ given in the proof of Proposition~\ref{p:SES} above.
The vertex $\sigma$ is the unique lift $\overline\tau$ of the vertex $p(\sigma) = \tau \in V(Z)$. Hence
\[H_\tau = G_{\overline\tau} \rtimes \ensuremath{\operatorname{St}}ab_H(\overline\tau) = G_\ensuremath{\sigma} \rtimes H.\]
By definition of the surjection $\pi_1(\Phi,p(\ensuremath{\sigma}_0)): \pi_1(H(Z), p(\ensuremath{\sigma}_0)) \to H$, a section $\iota:H \to \pi_1(H(Z),p(\ensuremath{\sigma}_0))$ is then given by the inclusion $H \to H_\tau$.
\end{proof}
This completes the proof of Theorem~\ref{t:group_action}.
\section{Proof of the Main Theorem}\label{s:proof}
We now prove the Main Theorem and Corollary~\ref{c:infinite_generation}, stated in the introduction. Throughout this section, we adopt
the notation of the Main Theorem, and assume that the vertices $s_1$ and $s_2$ of the nerve $L$, and the
elements $\alpha_1$ and $\alpha_2$ of the group $A$ of label-preserving automorphisms of $L$, satisfy
Conditions~\eqref{c:fix}--\eqref{c:halvable} of its statement. In Section~\ref{ss:underlying} we
introduce notation, and construct a family of finite polyhedral complexes $Y_n$, for $n \geq 1$, and an
infinite polyhedral complex $Y_\infty$. We then in Section~\ref{ss:GYn} construct complexes of groups
$G(Y_n)$ and $G(Y_\infty)$ over these spaces, and show that there are coverings of complexes of groups
$G(Y_n) \to G(Y_1)$ and $G(Y_\infty) \to G(Y_1)$. In Section~\ref{ss:Hn} we define the action of a
finite group $H_n$ on $Y_n$, and of an infinite group $H_\infty$ on $Y_\infty$, and then in
Section~\ref{ss:action} we show that these actions extend to actions on the complexes of groups
$G(Y_n)$ and $G(Y_\infty)$. In Section~\ref{ss:conclusion} we combine these results with
Theorem~\ref{t:group_action} above to complete the proof of the Main Theorem.
Corollary~\ref{c:infinite_generation} is proved in Section~\ref{ss:corollary_proof}.
\subsection{The spaces $Y_n$ and $Y_\infty$}\label{ss:underlying}
In this section we construct a family of finite polyhedral complexes $Y_n$ and an infinite polyhedral
complex $Y_\infty$.
We first set up some notation.
For $i = 1,2$, let $q_i \geq 2$ be the order of $\alpha_i$. It will be convenient to put, for all $k \geq 0$, $s_{2k+1} = s_1$ and $s_{2k+2} = s_2$, and similarly $\alpha_{2k+1} = \alpha_1$, $\alpha_{2k+2}=\alpha_2$, $q_{2k+1}=q_1$ and $q_{2k+2} = q_2$. Conditions~\eqref{c:fix}--\eqref{c:halvable} of the Main Theorem then become:
\begin{enumerate}
\item for all $n \geq 1$, $\alpha_n$ fixes the star of $s_{n+1}$ in $L$;
\item for all $n \geq 1$, the subgroup $\langle \alpha_n \rangle$ of $A$ acts freely on the $\langle \alpha_{n} \rangle$--orbit of $s_n$, in particular $\alpha_n(s_{n}) \neq s_{n}$;
\item for all $n \geq 1$, and all $t_n \neq s_n$ such that $t_n$ is in the $\langle \alpha_{n} \rangle$--orbit of $s_n$, $m_{s_{n}t_n} = \infty$; and
\item for all $n \geq 1$, all spherical special subgroups of $W$ which contain $s_n$ are halvable along $s_n$.
\end{enumerate}
We now use the sequences $\{s_n\}$ and $\{ \alpha_n\}$ to define certain elements and subsets of $W$.
Let $w_1$ be the trivial element of $W$ and for $n \geq 2$ let $w_n$ be the product
\[ w_{n} = s_1 s_2 \cdots s_{n-1} \in W.\]
Denote by $W_{n,n}$ the one-element set $\{ w_{n} \}$. For $n \geq 2$, and $1 \leq k < n$, in order to simplify notation, write $\alpha^{j_{n-1},\ldots,j_k}$ for the composition of automorphisms
\[ \alpha^{j_{n-1},\ldots,j_k}= \alpha_{n-1}^{j_{n-1}}\cdots \alpha_k^{j_k} \]
where $0 \leq j_i < q_i$ for $k \leq i < n$. Let $w_{j_{n-1},\ldots,j_k}$ be the element of $W$:
\begin{equation}\label{e:wk} w_{j_{n-1},\ldots,j_k} = w_n \alpha^{j_{n-1}}(s_{n-1})\alpha^{j_{n-1}, j_{n-2}}(s_{n-2})\cdots\alpha^{j_{n-1}, \ldots ,j_{k+1}}(s_{k+1})\alpha^{j_{n-1},\ldots, j_k}(s_k). \end{equation}
Now for $n \geq 2$ and $1 \leq k < n$, define
\[
W_{k,n} = \{ w_{j_{n-1},\ldots,j_k} \in W \mid \mbox{$0 \leq j_i < q_i$ for $k \leq i < n$} \}.
\]
Note that if $j_{n-1} =0$ then $w_{j_{n-1},\ldots,j_k} \in W_{k,n-1}$.
\noindent {\bf Example: } Let $(W,S)$ be the Coxeter system in Example 1 of Section~\ref{ss:Davis_complexes} above, with
nerve $L$ shown in Figure~\ref{f:nerve} above. For $i = 1,2$, let $\alpha_i \in A$ be the automorphism
of $L$ which fixes the star of $s_{3-i}$ in $L$ and interchanges $s_i$ and $s_3$. Then if $m$ and $m'$
are both even, the Main Theorem applies to this example. (If $T = \{ s\}$ then $W_T$ is halvable along
$s$ with $\ensuremath{\operatorname{half}}_s(W_T)$ the trivial group. If $T=\{ s,t \}$ then $W_T$ is the dihedral group of order
$2m_{st}$, and $W_T$ is halvable along $s$ if and only if $m_{st}$ is even, in which case $\ensuremath{\operatorname{half}}_s(W_T)$
is the dihedral group of order $m_{st}$.) Note that $q_1 = q_2 = 2$, and so, for instance,
\begin{eqnarray*} W_{1,3} & = & \{ 1, s_1\alpha_1(s_1), s_1s_2\alpha_2(s_2)\alpha_2(s_1),
s_1s_2\alpha_2(s_2)\alpha_2\alpha_1(s_1)\} \\ W_{2,3} & = & \{ s_1, s_1s_2\alpha_2(s_2) \} \\ W_{3,3} &
= & \{ s_1 s_2 \}.\end{eqnarray*}
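
\noindent As an informal check, the sets $W_{k,3}$ displayed above can be reproduced mechanically from Equation~\eqref{e:wk}. The following Python sketch (our own encoding, specific to this example) represents elements by words in the generators, reduced only by the cancellations $s_is_i = 1$, which is enough here; recall that $\alpha_1$ interchanges $s_1$ and $s_3$, that $\alpha_2$ interchanges $s_2$ and $s_3$, and that both have order $2$.
\begin{verbatim}
def reduce_word(word):
    """Apply only the cancellations s s = 1 to a word (a tuple of generators)."""
    out = []
    for s in word:
        if out and out[-1] == s:
            out.pop()
        else:
            out.append(s)
    return tuple(out)

def alpha1(s):                        # interchanges s1 and s3
    return {'s1': 's3', 's3': 's1'}.get(s, s)

def alpha2(s):                        # interchanges s2 and s3
    return {'s2': 's3', 's3': 's2'}.get(s, s)

def power(f, j):                      # alpha_i^j for j in {0, 1}
    return f if j % 2 else (lambda s: s)

# Equation (e:wk) with n = 3: here s_1 = 's1', s_2 = 's2', w_3 = s_1 s_2.
W33 = {reduce_word(('s1', 's2'))}
W23 = {reduce_word(('s1', 's2', power(alpha2, j2)('s2'))) for j2 in (0, 1)}
W13 = {reduce_word(('s1', 's2', power(alpha2, j2)('s2'),
                    power(alpha2, j2)(power(alpha1, j1)('s1'))))
       for j2 in (0, 1) for j1 in (0, 1)}
print(sorted(W13))   # 1, s1*a1(s1), s1*s2*a2(s2)*a2(s1), s1*s2*a2(s2)*a2(a1(s1))
print(sorted(W23))   # s1 and s1*s2*a2(s2)
print(sorted(W33))   # s1*s2
\end{verbatim}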
The following lemma establishes key properties of the sets $W_{k,n}$.
\begin{lemma}\label{l:disjoint} For all $n \geq 1$: \begin{enumerate}\item the sets $W_{1,n}$,
$W_{2,n}$, \ldots, $W_{n,n}$ are pairwise disjoint; and \item for all $1 \leq k < n$, if
\[w_{j_{n-1},\ldots,j_k}=w_{j'_{n-1},\ldots,j'_k}\] (where $0 \leq j_i < q_i$ for $k \leq i < n$) then
$j_k = j'_k$, $j_{k+1}=j'_{k+1}$, \ldots, and $j_{n-1}=j'_{n-1}$.\end{enumerate} \end{lemma}
\begin{proof} Given $1 \leq k \leq k' <n$, with $0 \leq j_i < q_i$ for $k \leq i < n$ and $0 \leq j'_i <
q_i$ for $k' \leq i < n$, suppose \begin{equation}\label{e:equal_ws} w_{j_{n-1},\ldots,j_k} =
w_{j'_{n-1},\ldots,j'_{k'}}.\end{equation} Then \[\alpha^{j_{n-1}}(s_{n-1})\alpha^{j_{n-1},
j_{n-2}}(s_{n-2})\cdots\alpha^{j_{n-1}, \ldots, j_{k'},\ldots,
j_{k+1}}(s_{k+1})\alpha^{j_{n-1},\ldots,j_{k'},\ldots, j_k}(s_k)
\]\[=\alpha^{j'_{n-1}}(s_{n-1})\alpha^{j'_{n-1}, j'_{n-2}}(s_{n-2})\cdots\alpha^{j'_{n-1}, \ldots
,j'_{k'+1}}(s_{k'+1})\alpha^{j'_{n-1},\ldots, j'_{k'}}(s_{k'}).\] By Condition~\eqref{c:fix} above, for each
$k \leq i < n$, the automorphism $\alpha_i$ fixes $s_{i+1}$, thus \begin{eqnarray*} \alpha^{j_{n-1}, \ldots
,j_{i+1}}(s_{i+1})\alpha^{j_{n-1},\ldots, j_i}(s_i) & = & \alpha^{j_{n-1}, \ldots
,j_{i+1},j_i}(s_{i+1})\alpha^{j_{n-1},\ldots, j_i}(s_i) \\ & = & \alpha^{j_{n-1},\ldots, j_i}(s_{i+1}s_i).
\end{eqnarray*} Also since $\alpha_i$ fixes the star of $s_{i+1}$ but $\alpha_i(s_i) \neq s_i$, we
have $m_{s_{i+1}s_i} = \infty$. Since $\alpha^{j_{n-1},\ldots, j_i}$ is a label-preserving automorphism, it
follows that the product of the two generators \[ \alpha^{j_{n-1}, \ldots
,j_{i+1}}(s_{i+1})\alpha^{j_{n-1},\ldots, j_i}(s_i) \] has infinite order, for each $k \leq i < n$.
Similarly for each $k'\leq i < n$. Thus the only way for Equation~\eqref{e:equal_ws} to hold is if $k =
k'$, and for each $k \leq i < n$, $\alpha_i^{j_i}(s_i) = \alpha_i^{j'_i}(s_i)$. Since $\langle \alpha_i \rangle$ acts freely on the $\langle \alpha_i \rangle$--orbit of $s_i$ and we specified $0 \leq j_i < q_i$, the result follows. \end{proof}
For $n \geq 1$, and $1 \leq k \leq n$, define $Y_{k,n}$ to be the set of chambers
\[ Y_{k,n} := \{ w K \mid w \in W_{k,n} \}. \]
Recall that we are writing $wK$ for the pair $(w,K)$. By Lemma~\ref{l:disjoint} above, for fixed $n$, the sets $Y_{1,n},\ldots,Y_{n,n}$ are pairwise disjoint. We now define $Y_n$ to be the polyhedral complex obtained by ``gluing together'' the chambers in $Y_{1,n},\ldots,Y_{n,n}$, using the same relation $\sim$ as in the Davis complex $\Sigma$ for $(W,S)$. More precisely,
\[Y_n := \left(\coprod_{k=1}^n Y_{k,n}\right) / \sim\]
where, for $x,x' \in K$, we have $(w,x) \sim (w',x')$ if and only if $x = x'$ and $w^{-1}w' \in W_{S(x)}$. Note that $Y_1 = Y_{1,1} = K$. To define $Y_\infty$, for each $k \geq 1$, noting that $W_{k,n}$ is only defined for $1 \leq k \leq n$, put
\[ W_{k,\infty} := \bigcup_{n=k}^\infty W_{k,n}.\]
Then $Y_{k,\infty}$ is the set of chambers
\[Y_{k,\infty} := \{ wK \mid w \in W_{k,\infty}\}.\] Similarly to the finite case, the sets $Y_{1,\infty},Y_{2,\infty},\ldots$ are pairwise disjoint, and we define
\[Y_\infty = \left(\coprod_{k=1}^\infty Y_{k,\infty}\right) / \sim\]
for the same relation $\sim$. Note that there are natural strict inclusions as subcomplexes
\[Y_1 \subset Y_2 \subset \cdots \subset Y_n \subset \cdots \subset Y_\infty.\]
(In fact, $Y_n$ and $Y_\infty$ are subcomplexes of the Davis complex $\Sigma$, but we will not adopt this point of view.)
We define a mirror of $Y_n$ or $Y_\infty$ to be an \emph{interior mirror} if it is contained in more than one chamber.
\noindent{\bf Example: } Let $(W,S)$, $\alpha_1$ and $\alpha_2$ be as in the previous example of this section. To indicate the construction of $Y_n$ and $Y_\infty$ in this case, Figure~\ref{f:Y4} below
depicts the dual graph for $Y_4$, that is, the graph with vertices the chambers of $Y_4$, and edges joining adjacent chambers. The edges are labelled with the type of the corresponding interior mirror. Figure~\ref{f:Yinfty} sketches the dual graph for $Y_\infty$.
\begin{figure}
\caption{Dual graph for $Y_4$, with vertices and edges labelled}
\label{f:Y4}
\end{figure}
\begin{figure}
\caption{Dual graph for $Y_\infty$}
\label{f:Yinfty}
\end{figure}
We now describe features of $Y_n$ and $Y_\infty$ which will be needed below. The first lemma follows from the construction of $Y_n$ and $Y_\infty$ and Lemma~\ref{l:disjoint} above.
\begin{lemma}\label{l:adjacency} Let $w=w_{j_{n-1},\ldots,j_k} \in W_{k,n}$. All of the chambers of $Y_n$ to which $wK \in Y_{k,n}$ is adjacent are described by the following.
\begin{enumerate}
\item\label{i:up} For $n \geq 1$ and $1 \leq k < n$, the chamber $wK$ is adjacent to exactly one chamber of $Y_{k+1,n}$, namely it is $\alpha^{j_{n-1},\ldots,j_k}(s_k)$--adjacent to the chamber $w_{j_{n-1},\ldots,j_{k+1}}K$ of $Y_{k+1,n}$.
\item\label{i:down} For $n \geq 2$ and $2 \leq k \leq n$, the chamber $wK$ is adjacent to exactly $q_{k-1}$ distinct chambers of $Y_{k-1,n}$, namely for each $0 \leq j_{k-1} < q_{k-1}$, the chamber $wK$ is $\alpha^{j_{n-1},\ldots,j_k,j_{k-1}}(s_{k-1})$--adjacent to the chamber $w_{j_{n-1},\ldots,j_k,j_{k-1}}K$ of $Y_{k-1,n}$.
\end{enumerate}
Similarly for $Y_\infty$.
\end{lemma}
\begin{corollary}\label{c:mirrors}
\begin{enumerate}
\item\label{i:vertex} Any vertex of $Y_n$ is contained in at most two distinct chambers of $Y_n$, and similarly for $Y_\infty$.
\item\label{i:mirrors} Any two interior mirrors of $Y_n$ or $Y_\infty$ are disjoint.
\end{enumerate}
\end{corollary}
\begin{proof} Suppose $\ensuremath{\sigma}$ is a vertex of $Y_n$, contained in the chamber $wK$, where $w$ is as in Lemma~\ref{l:adjacency} above. If $\ensuremath{\sigma}$ is contained in more than one chamber of $Y_n$ or $Y_\infty$, then $\ensuremath{\sigma}$ is contained in an interior mirror $K_s$, for some $s \in S$. By the construction of $Y_n$ and Lemma~\ref{l:adjacency} above, $s$ is either an image of $s_k$, or one of $q_{k-1}$ distinct images of $s_{k-1}$, under some element of $A$. Suppose $s$ is in the image of $s_k$. Condition~\eqref{c:fix} of the Main Theorem implies that $m_{s_ks_{k-1}} = \infty$. Hence the mirror $K_s$ is disjoint from each of the $q_{k-1}$ mirrors of types the $q_{k-1}$ images of $s_{k-1}$. Therefore the only chambers of $Y_n$ which contain $\ensuremath{\sigma}$ are the two chambers $wK$ and $wsK$. Now suppose $s$ is one of the $q_{k-1}$ images of $s_{k-1}$ under some element of $A$. Condition~\eqref{c:orbit} of the Main Theorem implies that the mirrors of types each of these images are pairwise disjoint, and so again $\ensuremath{\sigma}$ is contained in only two distinct chambers of $Y_n$. Similarly, any two interior mirrors of $Y_n$ or $Y_\infty$ are disjoint.
\end{proof}
\begin{corollary}\label{l:subYn} For all $n \geq 2$, there are $q_{n-1}$ disjoint subcomplexes of $Y_n$,
denoted $Y_{n-1}^{j_{n-1}}$ for $0 \leq j_{n-1} < q_{n-1}$, each isomorphic to $Y_{n-1}$, and with
$Y_{n-1}^0 = Y_{n-1} \subset Y_n$. For each $0 \leq j_{n-1} < q_{n-1}$, the subcomplex $Y_{n-1}^{j_{n-1}}$
is attached to the chamber $w_{n}K=s_1 s_2 \cdots s_{n-1}K$ of $Y_n$ along its mirror of type
$\alpha^{j_{n-1}}(s_{n-1})$. An isomorphism \[F^{j_{n-1}}: Y_{n-1} \to Y_{n-1}^{j_{n-1}}\] is given by
sending the chamber \[w_{j_{n-2},\ldots,j_k}K \in Y_{k,n-1}\] to the chamber \[w_{j_{n-1},j_{n-2},\ldots,j_k}K
\in Y_{k,n},\] and the vertex of $w_{j_{n-2},\ldots,j_k}K$ of type $T$ to the vertex of
$w_{j_{n-1},j_{n-2},\ldots,j_k}K$ of type $\alpha^{j_{n-1}}(T)$, for each spherical subset $T$ of $S$.
\end{corollary}
\begin{proof} By induction on $n$, using Lemma~\ref{l:adjacency} and Corollary~\ref{c:mirrors} above.
\end{proof}
\subsection{Complexes of groups $G(Y_n)$ and $G(Y_\infty)$}\label{ss:GYn}
We now construct complexes of groups $G(Y_n)$ over each $Y_n$, and $G(Y_\infty)$ over $Y_\infty$, and show that there are coverings $G(Y_n) \to G(Y_1)$ and $G(Y_\infty) \to G(Y_1)$. To simplify notation, write $Y$ for $Y_n$ or $Y_\infty$.
To define the local groups of $G(Y)$, let $\ensuremath{\sigma}$ be a vertex of $Y$, of type $T$. By
Corollary~\ref{c:mirrors} above, $\ensuremath{\sigma}$ is contained in at most two distinct chambers of $Y$. If $\ensuremath{\sigma}$ is
only contained in one chamber of $Y$, put $G_\ensuremath{\sigma}=W_T$. If $\ensuremath{\sigma}$ is contained in two distinct chambers of
$Y$, then by Corollary~\ref{c:mirrors} above $\ensuremath{\sigma}$ is contained in a unique interior mirror $K_s$, with $s \in
T$. By the construction of $Y$, $s$ is in the $A$--orbit of some $s_n$, $n \geq 1$. By
Condition~\eqref{c:halvable} of the Main Theorem, it follows that the group $W_T$ is halvable along
$s$. We define the local group at $\ensuremath{\sigma}$ to be $G_\ensuremath{\sigma}=\ensuremath{\operatorname{half}}_{s}(W_T)$.
The monomorphisms between local groups are defined as follows. Let $a$ be an edge of $Y$, with $i(a)$ of type $T$ and $t(a)$ of type $T'$, so that $T \subsetneq T'$. If both of the vertices $i(a)$ and $t(a)$ are contained in a unique chamber of $Y$, then the monomorphism $\psi_a$ along this edge is defined to be the natural inclusion $W_T \hookrightarrow W_{T'}$. If $i(a)$ is contained in two distinct chambers, then $i(a)$ is contained in a unique interior mirror $K_s$, with $s \in T$. Thus $s \in T'$ as well, and so $t(a)$ is also contained in the mirror $K_s$. From the definitions of $\ensuremath{\operatorname{half}}_s(W_T)$ and $\ensuremath{\operatorname{half}}_s(W_{T'})$, it follows that there is a natural inclusion $\ensuremath{\operatorname{half}}_s(W_T) \hookrightarrow \ensuremath{\operatorname{half}}_s(W_{T'})$, and we define $\psi_a$ to be this inclusion. Finally suppose $i(a)$ is contained in a unique chamber of $Y$ but $t(a)$ is contained in two distinct chambers of $Y$. Then for some $k \geq 1$, $i(a)$ is in a chamber of $Y_{k,n}$ (respectively, $Y_{k,\infty}$), and $t(a)$ is either in $Y_{k-1,n}$ or in $Y_{k+1,n}$ (respectively, in $Y_{k-1,\infty}$ or $Y_{k+1,\infty}$). Moreover $t(a)$ is contained in a unique interior mirror $K_s$, with $s \in T' - T$. If $t(a)$ is in $Y_{k-1,n}$ (respectively, $Y_{k-1,\infty}$), then we define $\psi_a$ to be the natural inclusion $W_T \hookrightarrow \ensuremath{\operatorname{half}}_s(W_{T'})$. If $t(a)$ is in $Y_{k+1,n}$ (respectively, $Y_{k+1,\infty}$), then we define $\psi_a$ to be the monomorphism defined on the generators $t \in T$ of $W_T$ by $\psi_a(t) := sts \in \ensuremath{\operatorname{half}}_s(W_{T'})$, that is, $\psi_a = \operatorname{Ad}(s)$.
It is not hard to verify that for all pairs of composable edges $(a,b)$ in $Y$, $\psi_{ab} = \psi_a \circ \psi_b$. Hence we have constructed simple complexes of groups $G(Y_n)$ and $G(Y_\infty)$ over $Y_n$ and $Y_\infty$ respectively. Note that these complexes of groups are faithful, since by construction the local group at each vertex of type $\emptyset$ is trivial. Note also that $G(Y_1)$ is the same complex of groups as constructed in Section~\ref{ss:complexes_of_groups} above, which has fundamental group $W$ and universal cover $\Sigma$.
\noindent{\bf Example: } Let $(W,S)$, $\alpha_1$ and $\alpha_2$ be as in the examples in Section~\ref{ss:underlying}
above. The complex of groups $G(Y_2)$ is sketched in Figure~\ref{f:GY2}. From left to
right, the three chambers here are $K$, $s_1K$ and $s_1\alpha_1(s_1)K$. We denote by
$D_{2m}$ the dihedral group of order $2m$, with $D_m$ the dihedral group of order $m$, and similarly for
$D_{2m'}$ and $D_{m'}$ (recall that $m$ and $m'$ are even).
\begin{figure}
\caption{Complex of groups $G(Y_2)$}
\label{f:GY2}
\end{figure}
\begin{proposition} There are coverings of complexes of groups $G(Y_n) \to G(Y_1)$ and $G(Y_\infty) \to G(Y_1)$.
\end{proposition}
\begin{proof} Let $f_n:Y_n \to Y_1$ and $f_\infty:Y_\infty \to Y_1$ be the maps sending each vertex of
$Y_n$ or $Y_\infty$ of type $T$ to the unique vertex of $Y_1 = K$ of type $T$. Then by construction of
$Y_n$ and $Y_\infty$, the maps $f_n$ and $f_\infty$ are nondegenerate morphisms of scwols. We define
coverings $\Phi_n:G(Y_n) \to G(Y_1)$ and $\Phi_\infty:G(Y_\infty) \to G(Y_1)$ over $f_n$ and $f_\infty$
respectively. To simplify notation, write $Y$ for respectively $Y_n$ or $Y_\infty$, $f$ for respectively $f_n$ or
$f_\infty$, and $\Phi$ for respectively $\Phi_n$ or $\Phi_\infty$.
Let $\ensuremath{\sigma}$ be a vertex of $Y$, of type $T$. If the local group at $\ensuremath{\sigma}$ is $G_\ensuremath{\sigma}=W_T$ then the map of
local groups $\phi_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to W_T$ is the identity map. If the local group at $\ensuremath{\sigma}$ is $\ensuremath{\operatorname{half}}_s(W_T)$,
for some $s \in T$, then $\phi_\ensuremath{\sigma}:\ensuremath{\operatorname{half}}_s(W_T) \to W_T$ is the natural inclusion as an index $2$
subgroup. To define elements $\phi(a)$, if the monomorphism $\psi_a$ in $G(Y)$ is natural inclusion,
define $\phi(a) = 1$. If $\psi_a$ is $\operatorname{Ad}(s)$, then define $\phi(a) = s$. It is then easy to check
that, by construction, $\Phi$ is a morphism of complexes of groups.
To show that $\Phi$ is a covering of complexes of groups, we first observe that each of the local
maps $\phi_\ensuremath{\sigma}$ is injective. Now fix $\ensuremath{\sigma}$ a vertex of $Y$, of type $T'$, and $b$ an edge of
$Y_1=K$ such that $t(b) = f(\ensuremath{\sigma})$, with $i(b)$ of type $T$ (hence $T \subsetneq T'$). We must show
that the map \[ \Phi_{\ensuremath{\sigma}/b}:\coprod_{\substack{a \in f^{-1}(b)\\ t(a)=\ensuremath{\sigma}}} G_\ensuremath{\sigma} /
\psi_a(G_{i(a)}) \to W_{T'} / W_{T}\] induced by $g \mapsto \phi_\ensuremath{\sigma}(g)\phi(a)$ is a bijection,
where $G_\ensuremath{\sigma}$ and $G_{i(a)}$ are the local groups of $G(Y)$.
First suppose that $\ensuremath{\sigma}$ is contained in a unique chamber of $Y$. Then by construction, there is a
unique edge $a$ of $Y$ with $i(a)$ of type $T$ and $t(a) = \ensuremath{\sigma}$, hence a unique edge $a \in f^{-1}(b)$
with $t(a) = \ensuremath{\sigma}$. Moreover, $G_\ensuremath{\sigma} = W_{T'}$, $G_{i(a)} = W_T$, the monomorphism $\psi_a$ is natural
inclusion hence $\phi(a) = 1$, and $\phi_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to W_{T'}$ is the identity map. Hence
$\Phi_{\ensuremath{\sigma}/b}$ is a bijection in this case.
Now suppose that $\ensuremath{\sigma}$ is contained in two distinct chambers of $Y$. Then $\ensuremath{\sigma}$ is contained in a
unique interior mirror $K_s$ of $Y$, with $s \in T'$. Assume first that $s \in T$ as well. Then
there is a unique edge $a$ of $Y$ with $i(a)$ of type $T$ and $t(a) = \ensuremath{\sigma}$. This edge is also
contained in the mirror $K_s$. Hence there is a unique $a \in f^{-1}(b)$ with $t(a) = \ensuremath{\sigma}$. By
construction, we have $G_\ensuremath{\sigma} = \ensuremath{\operatorname{half}}_s(W_{T'})$, the map $\phi_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to W_{T'}$ is natural
inclusion as an index $2$ subgroup, $G_{i(a)} = \ensuremath{\operatorname{half}}_s(W_T)$, the map $\psi_a$ is natural inclusion,
and $\phi(a)$ trivial. Since the index $[W_{T'}:W_T] = [\ensuremath{\operatorname{half}}_s(W_{T'}):\ensuremath{\operatorname{half}}_s(W_T)]$ is finite, it
is enough to verify that the inclusion $\ensuremath{\operatorname{half}}_s(W_{T'}) \to W_{T'}$ induces an injective map on
cosets \[ \ensuremath{\operatorname{half}}_s(W_{T'})/\ensuremath{\operatorname{half}}_s(W_T) \to W_{T'}/W_T.\] For this, suppose that $w,w' \in
\ensuremath{\operatorname{half}}_s(W_{T'})$ and that $wW_T = w'W_T$ in $W_{T'}$. Then $w^{-1}w' \in W_T \cap \ensuremath{\operatorname{half}}_s(W_{T'})$.
By definitions, it follows that $w^{-1}w' \in \ensuremath{\operatorname{half}}_s(W_T)$, as required.
Now assume that $\ensuremath{\sigma}$ is contained in the interior mirror $K_s$, with $s \not \in T$. There are then
two edges $a_1, a_2 \in f^{-1}(b)$ such that $t(a_1) = t(a_2) = \ensuremath{\sigma}$. Without loss of generality,
$\psi_{a_1}$ is natural inclusion $W_T \to \ensuremath{\operatorname{half}}_s(W_{T'})$ and $\phi(a_1) = 1$, while $\psi_{a_2}(g)
= sgs$ with $\phi(a_2) = s$. Since the index $[\ensuremath{\operatorname{half}}_s(W_{T'}):W_T] =
\frac{1}{2}[W_{T'}:W_T]$ is finite, it is enough to show that the map on cosets $\Phi_{\ensuremath{\sigma}/b}$ is
surjective. Let $w \in W_{T'}$. If $w \in \ensuremath{\operatorname{half}}_s(W_{T'})\leq W_{T'}$, then the image of the coset
$w\psi_{a_1}(G_{i(a_1)}) = w W_T$ in $G_\ensuremath{\sigma}$ is the coset $wW_T$ in $W_{T'}$. If $w \not \in
\ensuremath{\operatorname{half}}_s(W_{T'})$, then since $\ensuremath{\operatorname{half}}_s(W_{T'})$ has index $2$ in $W_{T'}$, and $s \not\in
\ensuremath{\operatorname{half}}_s(W_{T'})$, there is a $w' \in \ensuremath{\operatorname{half}}_s(W_{T'}) \leq W_{T'}$ such that $w = w's$. The image of
the coset $w'\psi_{a_2}(G_{i(a_2)}) = w'(sW_Ts)$ in $\ensuremath{\operatorname{half}}_s(W_{T'})$ is then the coset
$w'\phi(a_2)W_T = w'sW_T =wW_T$ in $W_{T'}$. Thus $\Phi_{\ensuremath{\sigma}/b}$ is surjective, as required.
We conclude that $\Phi$ is a covering of complexes of groups.
\end{proof}
\subsection{Group actions on $Y_n$ and $Y_\infty$}\label{ss:Hn}
In this section we construct the action of a finite group $H_n$ on $Y_n$ in the sense of Definition~\ref{d:action_on_scwol} above, and that of an infinite group $H_\infty$ on $Y_\infty$.
We first define the groups $H_n$ and $H_\infty$. For each $n \geq 1$, let $C_{q_n}$ denote the cyclic
group of order $q_n$. Note that $C_{q_n} \cong \langle \alpha_n \rangle$. We define $H_1$ to be the trivial group and $H_2 = C_{q_1}$. For $n \geq 3$, we
define $H_n$ to be the wreath product \begin{eqnarray*}H_n & = & H_{n-1} \wr C_{q_{n-1}} \\ & = &
(\cdots((C_{q_1} \wr C_{q_2}) \wr C_{q_3}) \wr \cdots )\wr C_{q_{n-1}} \\ & = & C_{q_1} \wr C_{q_2} \wr
\cdots \wr C_{q_{n-1}},\end{eqnarray*} that is, $H_n$ is the semidirect product by $C_{q_{n-1}}$ of the
direct product of $q_{n-1}$ copies of $H_{n-1}$, where $C_{q_{n-1}}$ acts on this direct product by
cyclic permutation of coordinates. Note that $H_n$ is a finite group of order
\begin{equation}\label{e:orderHn} |H_n| = q_1^{q_2 q_3 \cdots q_{n-1}} q_2^{q_3 \cdots q_{n-1}} \cdots
q_{n-2}^{q_{n-1}}q_{n-1}.\end{equation} We define $H_\infty$ to be the infinite iterated (unrestricted)
wreath product \[ H_\infty:= C_{q_1} \wr C_{q_2} \wr \cdots \wr C_{q_{n-1}} \wr \cdots \] We then have
natural inclusions \[H_1 < H_2 < \cdots < H_n < \cdots < H_\infty.\] The
following lemma will be needed for the proof of Corollary~\ref{c:infinite_generation} in
Section~\ref{ss:corollary_proof} below.
\begin{lemma}\label{l:not_fg} The group $H_\infty$ is not finitely generated.\end{lemma}
\begin{proof} By definition of $H_\infty$, for any nontrivial $h \in H_\infty$ there is an $n \geq 1$ such that $h \in H_n$. Hence any finite subset of $H_\infty$ is contained in $H_N$ for some $N$, and so generates a subgroup of $H_N$. Since $H_N$ is a proper subgroup of $H_\infty$ (as $H_N < H_{N+1} \leq H_\infty$), no finite subset can generate $H_\infty$.
\end{proof}
We now define the actions of $H_n$ and $H_\infty$ on $Y_n$ and $Y_\infty$ respectively. This uses the
label-preserving automorphisms $\alpha_n
\in A$. Note that the action of $A$ on the nerve $L$ extends to the chamber $K$, fixing the vertex of type
$\emptyset$. This action does not in general have a strict fundamental domain. Inconveniently, this
action also does not satisfy Condition~\eqref{i:no_inversions} of Definition~\ref{d:action_on_scwol}
above, since for any nontrivial $\alpha \in A$, there is an edge $a$ of $K$ with $i(a)$ of type
$\emptyset$ but $\alpha(a) \neq a$. However, to satisfy Definition~\ref{d:action_on_scwol}, it suffices
to define actions on $Y_n$ and $Y_\infty$, and then extend in the obvious way to the scwols which are
the barycentric subdivisions of these spaces, with naturally oriented edges.
For each $n \geq 1$ fix a generator $a_n$ for the cyclic group $C_{q_n}$. Recall that $\alpha_n \in A$
has order $q_n$. Thus for any $\alpha \in A$, there is a faithful representation $C_{q_n} \to A$, given
by $a_n \mapsto \alpha\alpha_n\alpha^{-1}$. Recall also that $\alpha_n$ fixes the star in $L$ of the
vertex $s_{n+1}$, and that $\langle \alpha_n \rangle$ acts freely on the $\langle \alpha_n \rangle$--orbit of $s_n$. Hence $a_n \mapsto \alpha\alpha_n\alpha^{-1}$
induces an action of $C_{q_n}$ on the chamber $K$, which fixes pointwise the mirror of type
$\alpha(s_{n+1})$, and permutes cyclically the set of mirrors of types $\alpha\alpha_n^{j_n}(s_n)$, for
$0 \leq j_n < q_n$.
We define the action of $H_n$ on $Y_n$ inductively, as follows. The group $H_1$ is trivial. For $n
\geq 2$, assume that the action of $H_{n-1}$ on $Y_{n-1}$ has been given. The subgroup $C_{q_{n-1}}$ of
$H_n$ then fixes the chamber $w_{n}K=s_1 s_2 \cdots s_{n-1}K$ of $Y_n$ setwise, and acts on this chamber
via $a_{n-1} \mapsto \alpha_{n-1}$. By the discussion above, this action fixes pointwise the mirror of
type $s_{n}$ of $w_nK$, and permutes cyclically the $q_{n-1}$ mirrors of types
$\alpha_{n-1}^{j_{n-1}}(s_{n-1})$, with $0 \leq j_{n-1} < q_{n-1}$, along which (by Corollary~\ref{l:subYn}
above), $q_{n-1}$ disjoint subcomplexes of $Y_n$, each isomorphic to $Y_{n-1}$, are attached.
By induction, a copy of $H_{n-1}$ in $H_n$ acts on each of these copies of $Y_{n-1}$ in $Y_n$. More precisely, for $0 \leq j_{n-1} < q_{n-1}$, the $j_{n-1}$st copy of $H_{n-1}$ in $H_n$ acts on the subcomplex $Y^{j_{n-1}}_{n-1}$ of Corollary~\ref{l:subYn} above. This action is given by conjugating the (inductively defined) action of $H_{n-1}$ on $Y_{n-1} \subset Y_n$ by the isomorphism $F^{j_{n-1}}:Y_{n-1} \to Y^{j_{n-1}}_{n-1}$ in Corollary~\ref{l:subYn}. By definition, the action of $C_{q_{n-1}}$ cyclically permutes the subcomplexes $Y^{j_{n-1}}_{n-1}$, and so we have defined an action of $H_n$ on $Y_n$.
The action of $H_\infty$ on $Y_\infty$ is similar.
We now describe the fundamental domains for these actions. For each $n \geq 1$ and each $1 \leq k \leq
n$, observe that $H_n$ acts transitively on the set of chambers $Y_{k,n}$. Let $K_1 = K$, and for $n
\geq 2$ let $K_n$ be the quotient of the chamber $w_nK = s_1 s_2 \cdots s_{n-1}K$ by the action of
$C_{q_{n-1}}\leq H_n$ as defined above. In $K_n$, the mirrors of types
$\alpha_{n-1}^{j_{n-1}}(s_{n-1})$, for $0 \leq j_{n-1} < q_{n-1}$, have been identified. By abuse of
notation, we refer to these identified mirrors as the mirror of type $s_{n-1}$ of $K_n$. Note also that
$C_{q_{n-1}}\leq H_n$ fixes pointwise the mirror of type $s_n$ of $w_nK$, and so we may speak of the
mirror of type $s_n$ of $K_n$. Then a fundamental domain for the action of $H_n$ on $Y_n$ is the finite
complex \[ Z_n := \left(K_1 \cup K_2 \cup \cdots \cup K_n\right) / \sim, \] where $\sim$ means we
identify the $s_{i-1}$--mirrors of $K_{i-1}$ and $K_{i}$, for $2 \leq i \leq n$. Similarly, a fundamental
domain for the action of $H_\infty$ on $Y_\infty$ is the infinite complex \[ Z_\infty := \left(K_1 \cup
K_2 \cup \cdots \cup K_n \cup \cdots \right) / \sim.\]
Finally we describe the stabilisers in $H_n$ and $H_\infty$ of the vertices of $Y_n$ and $Y_\infty$.
Let $wK$ be a chamber of $Y_n$ or $Y_\infty$. Then there is a smallest $k \geq 1$ such that $wK \in
Y_k$. By construction, it follows that the stabiliser in $H_n$ or $H_\infty$ of any vertex in the
chamber $wK$ is a subgroup of the finite group $H_k$. Hence $H_n$ and $H_\infty$ act with finite
stabilisers. Note also that for every $n \geq 1$, the action of $H_n$ fixes the vertex of type
$\emptyset$ in the chamber $w_nK$. We may thus speak of the vertex of type $\emptyset$ in the quotient
$K_n$ defined above. In fact, in the fundamental domains $Z_n$ and $Z_\infty$ defined above, the vertex
of type $\emptyset$ in $K_n$, for $n \geq 1$, has a lift in $Y_n$ or $Y_\infty$ with stabiliser the
finite group $H_n$. We observe also that the actions of $H_n$ and $H_\infty$ are faithful, since the
stabiliser of the vertex of type $\emptyset$ of $K_1=K$ is the trivial group $H_1$.
Figure~\ref{f:Zinfty} shows $Z_\infty$ and the stabilisers of (lifts of) its vertices of type $\emptyset$ for the
example in Section~\ref{ss:underlying} above.
\begin{figure}
\caption{Fundamental domain $Z_\infty$}
\label{f:Zinfty}
\end{figure}
\subsection{Group actions on $G(Y_n)$ and $G(Y_\infty)$}\label{ss:action}
In this section we show that the actions of $H_n$ and $H_\infty$ on $Y_n$ and $Y_\infty$, defined in Section~\ref{ss:Hn} above, extend to actions (by simple morphisms) on the complexes of groups $G(Y_n)$ and $G(Y_\infty)$. To simplify notation, write $H$ for $H_n$ or $H_\infty$, $Y$ for $Y_n$ or $Y_\infty$, and $Z$ for $Z_n$ or $Z_\infty$. Technically, instead of working with $G(Y)$, we work with the corresponding naturally defined complex of groups over the barycentric subdivision of $Y$, so that the action of $H$ satisfies Definition~\ref{d:action_on_scwol} above. By abuse of notation we will however continue to write $G(Y)$.
Recall that for $\ensuremath{\sigma}$ a vertex of $Y$ of type $T$, the local group $G_\ensuremath{\sigma}$ is either $W_T$ or
$\ensuremath{\operatorname{half}}_s(W_T)$, and the latter occurs if and only if $\ensuremath{\sigma}$ is contained in an interior $s$--mirror of $Y$
with $s \in T$. Let $wK$ be a chamber of $Y$ and let $h \in H$. By definition of the $H$--action,
there is an $\alpha \in A$ such that for each vertex $\ensuremath{\sigma}$ in $wK$, with $\ensuremath{\sigma}$ of type $T$, the vertex
$h\cdot\ensuremath{\sigma}$ of $h\cdot wK$ has type $\alpha(T)$. Moreover, if $\ensuremath{\sigma}$ is contained in an interior $s$--mirror then
$h\cdot\ensuremath{\sigma}$ is contained in an interior $\alpha(s)$--mirror. We may thus define the local map
$\phi^h_\ensuremath{\sigma}:G_\ensuremath{\sigma} \to G_{h\cdot\ensuremath{\sigma}}$ by $\phi^h_\ensuremath{\sigma}(t) = \alpha(t)$ for each $t \in T$, and (if $G_\ensuremath{\sigma} =
\ensuremath{\operatorname{half}}_s(W_T)$), $\phi^h_\ensuremath{\sigma}(sts) = \alpha(s)\alpha(t)\alpha(s)$. Then $\phi^h_\ensuremath{\sigma}$ is an isomorphism
either $W_T
\to W_{\alpha(T)}$, or $\ensuremath{\operatorname{half}}_s(W_T) \to \ensuremath{\operatorname{half}}_{\alpha(s)}(W_{\alpha(T)})$, as appropriate. It is not hard to verify
that these local maps define an action of $H$ on $G(Y)$ by simple morphisms.
\subsection{Conclusion}\label{ss:conclusion}
In this section we combine the results of Sections~\ref{ss:underlying}--\ref{ss:action} above to complete the proof of the Main Theorem.
Recall that $G(Y_1)$ is developable with universal cover $\Sigma$ (see
Section~\ref{ss:complexes_of_groups}). By Proposition~\ref{p:covering} and
Theorem~\ref{t:coverings} above, it follows that the complexes of groups $G(Y_n)$
and $G(Y_\infty)$ are developable with universal cover $\Sigma$. Let $H(Z_n)$ be the complex of groups induced by $H_n$ acting on
$G(Y_n)$, and $H(Z_\infty)$ that induced by $H_\infty$ acting on $G(Y_\infty)$. By
Theorem~\ref{t:group_action} above, there are coverings of complexes of groups
$G(Y_n) \to H(Z_n)$ and $G(Y_\infty) \to H(Z_\infty)$. Hence (by
Theorem~\ref{t:coverings} above) each $H(Z_n)$ and $H(Z_\infty)$ is developable with
universal cover $\Sigma$.
Let $\Gamma_n$ be the fundamental group of $H(Z_n)$ and $\Gamma$ the fundamental group of
$H(Z_\infty)$. Since the complexes of groups $G(Y_n)$ and $G(Y_\infty)$ are
faithful, and the actions of $H_n$ and $H_\infty$ are faithful,
Theorem~\ref{t:group_action} above implies that $H(Z_n)$ and $H(Z_\infty)$ are
faithful complexes of groups. Thus $\Gamma_n$ and $\Gamma$ may be identified with subgroups
of $G=\operatorname{Aut}(\Sigma)$. Now $G(Y_n)$ and $G(Y_\infty)$ are complexes of finite groups,
and the $H_n$-- and $H_\infty$--actions have finite vertex stabilisers. Hence by
construction, $H(Z_n)$ and $H(Z_\infty)$ are complexes of finite groups. Therefore
$\Gamma_n$ and $\Gamma$ are discrete subgroups of $G$. Since the fundamental domain $Z_n$
is finite, it follows that each $\Gamma_n$ is a uniform lattice. To show that $\Gamma$ is a
nonuniform lattice, we use the normalisation of Haar measure $\mu$ on
$G=\operatorname{Aut}(\Sigma)$ defined in Section~\ref{ss:lattices} above, with the $G$--set $V$
the set of vertices of $\Sigma$ of type $\emptyset$. Since the local groups of
$H(Z_\infty)$ at the
vertices of type $\emptyset$ in $Z_\infty$ are $H_1$, $H_2$, \ldots, we have \[ \mu(\Gamma \backslash G) =
\sum_{n=1}^\infty \frac{1}{|H_n|}. \] This series converges (see
Equation~\eqref{e:orderHn} above for the order of $H_n$, and note that each $q_n
\geq 2$). We conclude that $\Gamma$ is a nonuniform lattice in $G$. Moreover, as the
covolumes of the uniform lattices $\Gamma_n$ are the partial sums of this series, we
have $\mu(\Gamma_n \backslash G) \to \mu(\Gamma \backslash G)$, as required. This completes the proof of
the Main Theorem.
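For instance, if every $q_i = 2$, then since $H_n = H_{n-1} \wr C_{q_{n-1}}$ we have $|H_n| = |H_{n-1}|^{q_{n-1}}q_{n-1}$, and Equation~\eqref{e:orderHn} gives $|H_2| = 2$, $|H_3| = 8$, $|H_4| = 128$, and in general $|H_n| = 2^{2^{n-1}-1}$ (with $|H_1| = 1$), so that in this case
\[ \mu(\Gamma \backslash G) \; = \; \sum_{n=1}^\infty \frac{1}{|H_n|} \; = \; 1 + \frac{1}{2} + \frac{1}{8} + \frac{1}{128} + \cdots \; < \; 2, \]
and the covolumes of the uniform lattices $\Gamma_n$ are the corresponding partial sums. More generally, since each $q_n \geq 2$, the same recursion gives $|H_n| \geq 2^{2^{n-1}-1}$, which makes the convergence of the series explicit.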
\subsection{Proof of Corollary~\ref{c:infinite_generation}}\label{ss:corollary_proof}
The nonuniform lattice $\Gamma$ is the fundamental group of the complex of groups
$H(Z_\infty)$ induced by the action of $H_\infty$ on $G(Y_\infty)$. By the short exact sequence in
Theorem~\ref{t:group_action} above, there is a surjective
homomorphism $\Gamma \to H_\infty$. Since $H_\infty$ is not finitely
generated (Lemma~\ref{l:not_fg} above), we conclude that $\Gamma$ is not finitely
generated.
\section{Examples}\label{s:examples}
In this section we describe several infinite families of examples to which the Main
Theorem applies. By the \emph{dimension} of the Davis complex $\Sigma$ for a
Coxeter system $(W,S)$, we mean the maximum cardinality of a spherical subset of
$S$. We note that there may be maximal spherical special subgroups $W_T$ with $|T|$
strictly less than $\dim(\Sigma)$.
\subsection{Two-dimensional examples}
If $\dim(\Sigma) = 2$ then the nerve of the Coxeter system $(W,S)$ is a graph $L$ with
vertex set $S$ and two vertices $s$ and $t$ joined by an edge if and only if $m_{st}$ is
finite. Assume for simplicity that for some integer $m \geq 2$ all finite $m_{st} = m$. Then
$\Sigma$ is the barycentric subdivision of a polygonal complex $X$, with all $2$--cells of $X$
regular Euclidean $2m$--gons, and the link of every vertex of $X$ the graph $L$. Such an $X$ is
called a \emph{$(2m,L)$--complex}. Condition~\eqref{c:halvable} of the Main Theorem can hold
only if $m$ is even, and so we also assume this. It is then not hard to find graphs $L$ so that, for
some pair $s_1$ and $s_2$ of non-adjacent vertices of $L$, and for some nontrivial elements $\alpha_1,
\alpha_2 \in \operatorname{Aut}(L)$, Conditions~\eqref{c:fix},~\eqref{c:free} and~\eqref{c:orbit} of the Main Theorem also hold. We
present three infinite families of examples.
\subsubsection{Buildings with complete bipartite links}\label{sss:right-angled}
Let $L$ be the complete bipartite graph $K_{q,q'}$, with $q,q' \geq 2$. If $q \geq
3$ then there are (nonadjacent) vertices $s_1$ and $s_2$ of $L$, and nontrivial elements $\alpha_1$ and $\alpha_2$ of $\operatorname{Aut}(L)$, so that the Main Theorem applies.
If $m = 2$ then $\Sigma$ is the barycentric subdivision of the product of trees $T_q
\times T_{q'}$, where $T_q$ is the $q$--regular tree. In particular, if $m = m' =
2$ in Example 1 of Section~\ref{ss:Davis_complexes} above, then $\Sigma$ is the
barycentric subdivision of $T_3 \times T_2$. If $m \geq 4$, then by Theorem~12.6.1
of~\cite{D} the complex $\Sigma$ may be metrised as a piecewise hyperbolic
$\mathrm{CAT}(-1)$ polygonal complex. With this metric, if $p = 2m$ and $q = q'$ then $\Sigma$ is the barycentric subdivision of Bourdon's building $I_{p,q}$
(studied in, for example,~\cite{B1} and~\cite{BP}), which is the unique $2$--complex
with all $2$--cells regular right-angled hyperbolic $p$--gons $P$, and the link of
every vertex the complete bipartite graph $K_{q,q}$. Bourdon's building is a
right-angled hyperbolic building, of type $(W',S')$ where $W'$ is the Coxeter group
generated by the set of reflections $S'$ in the sides of $P$.
\subsubsection{Fuchsian buildings}\label{sss:fuchsian}
A Fuchsian building is a $2$--dimensional hyperbolic building. Bourdon's building
$I_{p,q}$ is a (right-angled) Fuchsian building. For Fuchsian buildings which are
not right-angled see,
for example,~\cite{B2} and~\cite{GP}.
To show that the Main Theorem applies to certain Fuchsian buildings which are not
right-angled, let $L$ be the finite building of rank $2$ associated to a Chevalley group $\ensuremath{\mathcal{G}}$ (see~\cite{R}). Then
$L$ is a bipartite graph, with vertex set say $S = S_1 \sqcup S_2$, and for some $k \in \{3,4,6,8\}$,
$L$ has girth $2k$ and diameter $k$. Figure~\ref{f:proj_plane} depicts the building $L$ for the group
$\ensuremath{\mathcal{G}} = GL(3,\ensuremath{\mathbb{F}}_2) = GL(3,2)$, for which $k = 3$. The white vertices of this building may be identified
with the set of one-dimensional subspaces of the vector space $V=\ensuremath{\mathbb{F}}_2 \times \ensuremath{\mathbb{F}}_2 \times \ensuremath{\mathbb{F}}_2$, and the
black vertices with the set of two-dimensional subspaces of $V$. Two vertices are joined by an edge if
those two subspaces are incident.
\begin{figure}
\caption{The building $L$ for $\ensuremath{\mathcal{G}} = GL(3,2)$}
\label{f:proj_plane}
\end{figure}
The group $\ensuremath{\mathcal{G}}$ acts on $L$, preserving the type of vertices, with quotient an edge. Suppose $s_1 \in
S_1$, and let $s_2 \in S_2$ be a vertex at distance $k$ from $s_1$. Since $L$ is a thick building,
there is more than one such vertex $s_2$. For $i = 1,2$, the stabiliser $P_i$ of $s_i$ in $\ensuremath{\mathcal{G}}$ acts
transitively on the set of vertices of $L$ at distance $k$ from $s_i$. Now, by Theorem~6.18 of~\cite{R},
$P_i$ has a Levi decomposition \[P_i = U_i \rtimes L_i\] where $L_i$ is the subgroup of $P_i$ fixing the
vertex $s_{3-i}$. Moreover, by Lemma 6.5 of~\cite{R}, $U_i$ fixes the star of $s_i$ in $L$. Hence we
may find elements $\alpha_{3-i} \in U_{i}$ for which
Conditions~\eqref{c:fix} and~\eqref{c:free} of the Main Theorem hold. Condition~\eqref{c:orbit} of the Main Theorem follows
since $L$ is bipartite and the action of $\ensuremath{\mathcal{G}}$ preserves the type of vertices. For example, for $L$ as
in Figure~\ref{f:proj_plane}, if $s_1$ is the vertex $\{ (1,0,0) \}$, we may choose $s_2$ to be the
vertex $\{ (0,1,0),(0,0,1),(0,1,1)\}$, and then choose \[\alpha_1 = \left(\begin{array}{ccc} 1 & 0 & 0
\\ 1 & 1 & 0 \\ 1 & 0 & 1\end{array}\right) \quad\mbox{and}\quad\alpha_2 = \left(\begin{array}{ccc} 1 &
1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{array}\right).\]
Suppose now that $L$ as above is the nerve of a Coxeter system $(W,S)$. By
Theorem~12.6.1 of~\cite{D}, since $L$ has girth $\geq 6$, the corresponding Davis
complex $\Sigma$ may also be metrised as a piecewise hyperbolic $\mathrm{CAT}(-1)$ polygonal
complex. With this metrisation, $\Sigma$ is then the barycentric subdivision of a
Fuchsian building, with the link of every vertex $L$ and all $2$--cells regular
hyperbolic $2m$--gons (of vertex angle $\frac{\pi}{k}$). We call such a building a
\emph{$(2m,L)$--building}. In general, there may be uncountably many isomorphism
classes of $(2m,L)$--buildings (see for instance~\cite{GP}). In fact, the Davis
complex $\Sigma$ is the barycentric subdivision of the unique locally reflexive
$(2m,L)$--building with trivial holonomy (see Haglund~\cite{H2}).
\subsubsection{Platonic polygonal complexes}
A polygonal complex $X$ is \emph{Platonic} if $\operatorname{Aut}(X)$ acts transitively on the set of flags
(vertex, edge, face) in $X$. Any Platonic polygonal complex is a $(k,L)$--complex,
with $k \geq 3$ and $L$ a graph such that $\operatorname{Aut}(L)$ acts transitively on the set of oriented
edges in $L$. In~\cite{Sw}, \'Swi{\polhk{a}}tkowski\ studied CAT(0) Platonic polygonal complexes $X$,
where $L$ is a trivalent graph. Such complexes are not in general buildings.
A graph $L$ is said to be \emph{$n$--arc regular}, for some $n \geq 1$, if $\operatorname{Aut}(L)$ acts simply
transitively on the set of edge paths of length $n$ in $L$. For example, the Petersen graph in
Figure~\ref{f:petersen} above is $3$--arc regular. Any finite, connected, trivalent graph $L$,
with $\operatorname{Aut}(L)$ transitive on the set of oriented edges of $L$, is $n$--arc regular for some $n
\in \{1,2,3,4,5\}$ (Tutte~\cite{T}). \'Swi{\polhk{a}}tkowski~\cite{Sw} showed that if $n \in \{3,4,5\}$,
then for all $k \geq 4$ there is a unique $(k,L)$--complex $X$, with $X$ Platonic. Thus if
$k=2m$ is even, the barycentric subdivision of $X$ is the Davis complex $\Sigma$ for $(W,S)$,
where $(W,S)$ has nerve $L$ and all finite $m_{st} = m$.
Now suppose $L$ is a finite, connected, trivalent, $n$--arc regular graph with $n \in \{
3,4,5\}$. Choose vertices $s_1$ and $s_2$ of $L$ at distance two in $L$ if $n = 3,4$, and at
distance three in $L$ if $n = 5$. Then by Propositions 3--5 of Djokovi\'c--Miller~\cite{DM}, for
$i = 1,2$ there are involutions $\alpha_i \in \operatorname{Aut}(L)$ such that $\alpha_i$ fixes the star of
$s_{3-i}$ in $L$, and $\alpha_i(s_i) \neq s_i$ is not adjacent to $s_i$. Thus if $m$
is even, the Main Theorem applies to $G=\operatorname{Aut}(\Sigma)$.
\subsection{Higher-dimensional examples}
We now discuss examples in dimension $> 2$ to which the Main Theorem applies. The construction of the building $\Sigma$ below was suggested by an anonymous referee (our own examples were just for $W$ right-angled).
We first discuss when Condition~\eqref{c:halvable} in the Main Theorem can hold. Suppose $W_T$ is a
spherical special subgroup of $W$, with $k = |T| > 2$. If $W_T$ is irreducible, then from the
classification of spherical Coxeter groups (see, for example,~\cite{D}), it is not hard to verify that
$W_T$ is halvable along $s \in T$ if and only if $W_T$ is of type $B_k$, with $s \in T$ the unique generator
so that $m_{st} \in \{2,4\}$ for all $t \in T - \{s\}$; in this case $\ensuremath{\operatorname{half}}_s(W_T)$ is of type $D_k$. If $W_T$ is reducible, then so long as $s$ is contained in a direct factor $W_{T'}$, $T' \subsetneq T$, such that either $W_{T'} = \langle s \rangle \cong C_2$, $W_{T'}$ is an even dihedral group, or $W_{T'}$ is of type $B_j$ with $j < k$ and $s$ the particular generator described above, then $W_T$ will be halvable along $s$.
Now let $L$ be a thick spherical building of rank $k > 2$. A reducible example is $L$ the join of $k$ sets of points, with each set having cardinality at least $3$. An irreducible example is $L$ the building for a Chevalley group $\ensuremath{\mathcal{G}}$ of rank $k$ over a finite field, such as $GL(k+1,2)$.
Define a Coxeter group $W$ with nerve $L$ as follows. Fix $\Delta$ a chamber of $L$. Then $\Delta$ is a simplex on $k$ vertices. Let $p: L \to \Delta$ be the projection onto this chamber. Label the edges of $\Delta$ by the $m_{st}$ for a finite Coxeter group $V$ on $k$ generators, such that $V$ is a product of cyclic groups of order $2$, even dihedral groups and copies of $B_j$, $j < k$. For example, when $V$ is right-angled all $m_{st} = 2$. Pull the edge labels of $\Delta$ back via $p$ to obtain a labelling of the edges of $L$. This defines a Coxeter group $W$ with nerve $L$, so that each maximal spherical special subgroup of $W$ is isomorphic to $V$.
The Davis complex $\Sigma$ for $W$ is tiled by copies of the barycentric subdivision of the Coxeter polytope $P$ associated to $V$. For example, when $V$ is right-angled, $P$ is a $k$--cube. The link of each vertex of $P$ is $L$. Applying the metric criterion of Charney--Lytchak~\cite{CL}, it follows that $\Sigma$ is the barycentric subdivision of a building. Note that $\dim(\Sigma) = k > 2$.
Choose vertices $s_1$ and $s_2$ in $L$ which are
\emph{opposite} (see~\cite{R}). By the same arguments as in
Section~\ref{sss:fuchsian} above, there are (type-preserving) elements $\alpha_1,\alpha_2 \in \operatorname{Aut}(L)$ so that Conditions~\eqref{c:fix}--\eqref{c:orbit} of the Main Theorem hold. A careful choice of $V$, ensuring that whenever $s_1$ or $s_2$ lies in a factor of type $B_j$ it is the distinguished generator described above, then guarantees that Condition~\eqref{c:halvable} of the Main Theorem holds. Hence the Main Theorem applies to many examples of buildings of dimension $> 2$.
We do not know of any \emph{hyperbolic} buildings of dimension $>2$ to
which the Main Theorem applies. For the $3$--dimensional constructions of
Haglund--Paulin in~\cite{HP2}, certain of the $m_{st}$ must be equal to $3$, so
Condition~\eqref{c:halvable} of the Main Theorem will not hold.
A slight modification of the above construction, for example by adding a vertex $s$ to $L$ with $m_{st} = \infty$ for all $t \in S - \{ s\}$, produces nerves which are not
buildings, hence examples of $\Sigma$ of dimension $> 2$ which are not buildings.
\end{document} |
\begin{document}
\pagestyle{plain}
\pagenumbering{arabic}
\title{Prescription for experimental determination of the dynamics of
a quantum black box}
\begin{abstract}
We give an explicit prescription for experimentally determining the evolution
operators which completely describe the dynamics of a quantum mechanical
black box -- an arbitrary open quantum system. We show necessary and
sufficient conditions for this to be possible, and illustrate the general
theory by considering specifically one and two quantum bit systems. These
procedures may be useful in the comparative evaluation of experimental
quantum measurement, communication, and computation systems.
\end{abstract}
\pacs{PACS numbers: 03.65.Bz, 89.70.+c,89.80.th,02.70.--c}
\begin{multicols}{2}[]
\def\be{\begin{equation}}
\def\ee{\end{equation}}
\def\bea{\begin{eqnarray}}
\def\eea{\end{eqnarray}}
\def\Ad{A^\dagger}
\newcommand{\mattwoc}[4]{\left[
\begin{array}{cc}{#1}&{#2}\\{#3}&{#4}\end{array}\right]}
\newcommand{\ket}[1]{\mbox{$|#1\rangle$}}
\newcommand{\bra}[1]{\mbox{$\langle #1|$}}
\def\ra{\rangle}
\def\la{\langle}
\section{Introduction}
Consider a black box with an input and an output. Given that the transfer
function is linear, if the dynamics of the box are described by classical
physics, well known recipes exist to completely determine the response
function of the system. Now consider a {\em quantum-mechanical} black box
whose input may be an arbitrary quantum state (in a finite dimensional Hilbert
space), with internal dynamics and an output state (of same dimension as the
input) determined by quantum physics. The box may even be connected to an
external reservoir, or have other inputs and outputs which we wish to ignore.
Can we determine the quantum transfer function of the system?
The answer is yes. Simply stated, the most arbitrary transfer function of a
quantum black box is to map one density matrix into another, $\rho_{in}
{\rightarrow} \rho_{out}$, and this is determined by a linear mapping ${\cal
E}$ which we shall give a prescription for obtaining. The interesting
observation is that this black box may be an attempt to realize a useful
quantum device. For example, it may be a quantum cryptography
channel\cite{Bennett92,Hughes95} (which might include an eavesdropper!), a
quantum computer in which decoherence occurs, limiting its
performance\cite{Unruh94,Chuang95a}, or just an imperfect quantum logic
gate\cite{Turchette95,Monroe95}, whose performance you wish to characterize to
determine its usefulness.
How many parameters are necessary to describe a quantum black box acting on
an input with a state space of $N$ dimensions? And how may these parameters
be experimentally determined? Furthermore, how is the resulting description
of ${\cal E}$ useful as a performance characterization?
We consider these questions in this paper. After summarizing the relevant
mathematical formalism, we prove that ${\cal E}$ may be determined completely
by a matrix of complex numbers $\chi$, and provide an accessible experimental
prescription for obtaining $\chi$. We then give explicit constructions for
the cases of one and two quantum bits (qubits), and then conclude by
describing related performance estimation quantities derivable from $\chi$.
\section{State Change Theory}
A general way to describe the state change experienced by a quantum system is
by using {\em quantum operations}, sometimes also known as {\em
superscattering operators} or {\em completely positive maps}. This formalism
is described in detail in \cite{Kraus83a}, and is given a brief but
informative review in the appendix to \cite{Schumacher96a}. A quantum
operation is a linear map ${\cal E}$ which completely describes the dynamics
of a quantum system,
\begin{equation}
\rho \rightarrow \frac{{\cal E}(\rho)}{\mbox{tr}({\cal E}(\rho))}
\,.
\label{eq:rhomapfirst}
\end{equation}
A particularly useful description of quantum operations for theoretical
applications is the so-called {\em operator-sum representation}:
\begin{equation} \label{eqtn: op sum rep}
{\cal E}(\rho) = \sum_i A_i \rho A_i^{\dagger}
\,.
\label{eq:eeffect}
\end{equation}
The $A_i$ are operators acting on the system alone, yet they completely
describe the state changes of the system, including any possible unitary
operation (quantum logic gate), projection (generalized measurement), or
environmental effect (decoherence). In the case of a ``non-selective''
quantum evolution, such as arises from uncontrolled interactions with an
environment (as in the decoherence of quantum computers), the $A_i$ operators
satisfy an additional completeness relation,
\begin{eqnarray}
\sum_i A_i^{\dagger} A_i = I
\,.
\label{eq:completeness}
\end{eqnarray}
This relation ensures that the trace factor $\mbox{tr}({\cal E}(\rho))$ is
always equal to one, and thus the state change experienced by the system can
be written
\begin{equation}
\rho \rightarrow {\cal E}(\rho)
\,.
\label{eq:rhomap}
\end{equation}
Such quantum operations are in a one to one correspondence with the set of
transformations arising from the joint unitary evolution of the quantum system
and an initially uncorrelated environment\cite{Kraus83a}. In other words, the
quantum operations formalism also describes the master equation and quantum
Langevin pictures widely used in quantum optics \cite{Louisell,Gardiner91},
where the system's state change arises from an interaction Hamiltonian between
the system and its environment\cite{Mabuchi96}.
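As a minimal numerical sketch of this formalism (assuming NumPy, and taking the phase flip channel $A_0 = \sqrt{1-p}\,I$, $A_1 = \sqrt{p}\,\sigma_z$ as a stand-in for a black box), one may apply Eq.(\ref{eqtn: op sum rep}) directly and check the completeness relation Eq.(\ref{eq:completeness}):
\begin{verbatim}
import numpy as np

# Phase flip channel: A_0 = sqrt(1-p) I, A_1 = sqrt(p) sigma_z (a stand-in
# for a black box; any operators satisfying completeness would do).
p = 0.2
I2 = np.eye(2, dtype=complex)
sz = np.diag([1, -1]).astype(complex)
A_ops = [np.sqrt(1 - p) * I2, np.sqrt(p) * sz]

def E(rho, ops=A_ops):
    """Operator-sum map: E(rho) = sum_i A_i rho A_i^dagger."""
    return sum(A @ rho @ A.conj().T for A in ops)

# Completeness: sum_i A_i^dagger A_i = I, so tr E(rho) = tr rho.
assert np.allclose(sum(A.conj().T @ A for A in A_ops), I2)

rho_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
print(E(rho_plus))   # off-diagonal elements are damped by a factor 1 - 2p
\end{verbatim}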
Our goal will be to describe the state change process by determining the
operators $A_i$ which describe ${\cal E}$, (and until Section~\ref{sec:meas}
we shall limit ourselves to those which satisfy Eq.(\ref{eq:completeness})).
Once these operators have been determined many other quantities of great
interest, such as the {\em fidelity}, {\em entanglement fidelity} and {\em
quantum channel capacity} can be determined. Typically, the $A_i$ operators
are derived from a {\em theoretical} model of the system and its environment;
for example, they are closely related to the Lindblad operators. However,
what we propose here is different: to determine systematically from {\em
experiment} what the $A_i$ operators are for a specific quantum black box.
\section{General Experimental Procedure}
The experimental procedure may be outlined as follows. Suppose the state
space of the system has $N$ dimensions; for example, $N=2$ for a single qubit.
$N^2$ pure quantum states $|\psi_1\rangle\langle\psi_1|,
\ldots,|\psi_{N^2}\rangle\langle\psi_{N^2}|$ are experimentally prepared, and the output
state ${\cal E}(|\psi_j\rangle\langle\psi_j|)$ is measured for each input. This may be
done, for example, by using quantum state
tomography\cite{Raymer94a,Leonhardt96,Leibfried96a}. In principle, the
quantum operation ${\cal E}$ can now be determined by a linear extension of
${\cal E}$ to all states. We prove this below.
The goal is to determine the unknown operators $A_i$ in Eq.(\ref{eq:eeffect}).
However, experimental results involve numbers (not operators, which are a
theoretical concept). To relate the $A_i$ to measurable parameters, it is
convenient to consider an equivalent description of ${\cal E}$ using a {\em
fixed} set of operators $\tilde{A}_i$, which form a basis for the set of
operators on the state space, so that
\begin{eqnarray}
A_i = \sum_m a_{im} \tilde{A}_m
\label{eq:atildedef}
\end{eqnarray}
for some set of complex numbers $a_{im}$. Eq.(\ref{eq:eeffect}) may
thus be rewritten as
\begin{equation}
\label{eqtn: two sided rep}
{\cal E}(\rho) = \sum_{mn} \tilde{A}_m \rho
\tilde{A}_{n}^{\dagger} \chi_{mn}
\,,
\end{equation}
where $\chi_{mn} \equiv \sum_i a_{im} a_{in}^*$ is a ``classical'' {\em error
correlation matrix} which is positive Hermitian by definition. This shows
that ${\cal E}$ can be completely described by a complex number matrix,
$\chi$, once the set of operators $\tilde{A}_i$ has been fixed. In general,
$\chi$ will contain $N^4-N^2$ independent parameters, because a general linear
map of $N$ by $N$ matrices to $N$ by $N$ matrices is described by $N^4$
independent parameters, but there are $N^2$ additional constraints due to the
fact that the trace of $\rho$ remains one. We will show how to determine
$\chi$ experimentally, and then show how an operator sum representation of the
form Eq.(\ref{eqtn: op sum rep}) can be recovered once the $\chi$ matrix is
known.
Let $\rho_j$, $1\leq j \leq N^2$ be a set of linearly independent basis
elements for the space of $N$$\times$$N$ matrices. A convenient choice is the
set of projectors $\ket{n}\bra{m}$. Experimentally, the output state ${\cal
E}(\ket{n}\bra{m})$ may be obtained by preparing the input states $\ket{n}$,
$\ket{m}$, $\ket{n_+} = (\ket{n}+\ket{m})/\sqrt{2}$, and $\ket{n_-} =
(\ket{n}+i\ket{m})/\sqrt{2}$ and forming linear combinations of ${\cal
E}(\ket{n}\bra{n})$, ${\cal E}(\ket{m}\bra{m})$, ${\cal
E}(\ket{n_+}\bra{n_+})$, and ${\cal E}(\ket{n_-}\bra{n_-})$. Thus, it is
possible to determine ${\cal E}(\rho_j)$ by state tomography, for each
$\rho_j$.
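Concretely, by linearity of ${\cal E}$ and the definitions of $\ket{n_+}$ and $\ket{n_-}$ above, one such combination is
\begin{eqnarray}
{\cal E}(\ket{n}\bra{m}) &=& {\cal E}(\ket{n_+}\bra{n_+}) + i\,{\cal E}(\ket{n_-}\bra{n_-}) \nonumber \\
& & -\, \frac{1+i}{2}\left[ {\cal E}(\ket{n}\bra{n}) + {\cal E}(\ket{m}\bra{m}) \right]
\,, \nonumber
\end{eqnarray}
which follows from the operator identity $\ket{n_+}\bra{n_+} + i\,\ket{n_-}\bra{n_-} = \ket{n}\bra{m} + \frac{1+i}{2}\left( \ket{n}\bra{n} + \ket{m}\bra{m} \right)$.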
Furthermore, each ${\cal E}(\rho_j)$ may be expressed as a linear combination
of the basis states,
\begin{equation}
{\cal E}(\rho_j)
= \sum_k \lambda_{jk} \rho_k
\,,
\end{equation}
and since ${\cal E}(\rho_j)$ is known, $\lambda_{jk}$ can thus be determined.
To proceed, we may write
\begin{equation}
\tilde{A}_m \rho_j \tilde{A}_n^\dagger = \sum_k \beta^{mn}_{jk} \rho_k
\,,
\label{eq:betadef}
\end{equation}
where $\beta^{mn}_{jk}$ are complex numbers which can be determined by
standard algorithms given the $\tilde{A}_m$ operators and the $\rho_j$
operators. Combining the last two expressions we have
\begin{equation}
\sum_k \sum_{mn} \chi_{mn} \beta^{mn}_{jk} \rho_k
= \sum_k \lambda_{jk}\rho_k
\,.
\end{equation}
{} From independence of the $\rho_k$ it follows that for each $k$,
\begin{equation}
\label{eqtn: chi condition}
\sum_{mn} \beta^{mn}_{jk} \chi_{mn} = \lambda_{jk}
\,.
\end{equation}
This relation is a necessary and sufficient condition for the matrix
$\chi$ to give the correct quantum operation ${\cal E}$. One may
think of $\chi$ and $\lambda$ as vectors, and $\beta$ as an
$N^4$$\times$$N^4$ matrix with columns indexed by $mn$, and rows by
$jk$. To show how $\chi$ may be obtained, let $\kappa$ be the
generalized inverse for the matrix $\beta$, satisfying the relation
\begin{equation}
\beta^{mn}_{jk} = \sum_{st,xy} \beta_{jk}^{st} \kappa_{st}^{xy}
\beta_{xy}^{mn}
\,.
\end{equation}
Most computer algebra packages are capable of finding such generalized
inverses. In appendix \ref{appendix: chi} it is shown that $\chi$
defined by
\begin{eqnarray}
\chi_{mn} = \sum_{jk} \kappa_{jk}^{mn} \lambda_{jk}
\label{eqtn:chidefn}
\end{eqnarray}
satisfies the relation (\ref{eqtn: chi condition}). The proof is
somewhat subtle, but it is not relevant to the application of the
present algorithm.
Having determined $\chi$ one immediately obtains the operator sum
representation for ${\cal E}$ in the following manner. Let the unitary matrix
$U^\dagger$ diagonalize $\chi$,
\begin{eqnarray}
\chi_{mn} = \sum_{xy} U_{mx} d_{x} \delta_{xy} U^*_{ny}
.
\end{eqnarray}
{} From this it can easily be verified that
\begin{eqnarray}
A_i = \sqrt{d_i} \sum_j U_{ji} \tilde{A}_j
\end{eqnarray}
gives an operator-sum representation for the quantum operation ${\cal
E}$. Our algorithm may thus be summarized as follows: $\lambda$ is
experimentally measured, and given $\beta$, determined by a choice of
$\tilde{A}$, we find the desired parameters $\chi$ which completely
describe ${\cal E}$.
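As an illustration of this algorithm for a single qubit ($N=2$), the following is a minimal numerical sketch assuming NumPy. The function \texttt{black\_box} below is a hypothetical stand-in for the tomographically measured action of ${\cal E}$ (here simulated by an amplitude damping map), the fixed operators $\tilde{A}_m$ are those used in the next section, and \texttt{numpy.linalg.pinv} plays the role of the generalized inverse $\kappa$:
\begin{verbatim}
import numpy as np

N = 2  # single qubit

# Fixed operator basis A~_m = {I, sigma_x, -i sigma_y, sigma_z}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A_tilde = [np.eye(2, dtype=complex), sx, -1j * sy, sz]

# Basis rho_j = |n><m|; expand() returns coefficients in this basis
rho_basis = [np.outer(np.eye(N)[n], np.eye(N)[m]).astype(complex)
             for n in range(N) for m in range(N)]
expand = lambda op: op.reshape(-1)

# Hypothetical black box, standing in for the tomography data
g = 0.3   # amplitude damping strength
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
def black_box(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# beta^{mn}_{jk}: coefficients of A~_m rho_j A~_n^dag in the rho_k basis
beta = np.zeros((N**2, N**2, N**2, N**2), dtype=complex)  # axes (j, k, m, n)
for j, rj in enumerate(rho_basis):
    for m, Am in enumerate(A_tilde):
        for n, An in enumerate(A_tilde):
            beta[j, :, m, n] = expand(Am @ rj @ An.conj().T)

# lambda_{jk}: coefficients of E(rho_j) in the rho_k basis ("measured" data)
lam = np.array([expand(black_box(rj)) for rj in rho_basis])

# Solve sum_{mn} beta^{mn}_{jk} chi_{mn} = lambda_{jk} via a generalized inverse
B = beta.reshape(N**4, N**4)                  # rows (j,k), columns (m,n)
chi = (np.linalg.pinv(B) @ lam.reshape(-1)).reshape(N**2, N**2)

# Diagonalize chi = U diag(d) U^dag and recover an operator-sum representation
d, U = np.linalg.eigh(chi)
A_recovered = [np.sqrt(di) * sum(U[m, i] * A_tilde[m] for m in range(N**2))
               for i, di in enumerate(d) if di > 1e-10]

print(np.round(sum(A.conj().T @ A for A in A_recovered), 6))  # completeness
\end{verbatim}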
\section{One and Two Qubits}
The above general method may be illustrated by the specific case of a
black box operation on a single quantum bit (qubit). A convenient
choice for the fixed operators $\tilde{A}_i$ is
\begin{eqnarray}
\tilde{A}_0 &=& I
\label{eq:fixedonebit}
\\ \tilde{A}_1 &=& \sigma_x
\\ \tilde{A}_2 &=& -i \sigma_y
\\ \tilde{A}_3 &=& \sigma_z
\,,
\label{eq:fixedonebitend}
\end{eqnarray}
where the $\sigma_i$ are the Pauli matrices. There are 12 parameters,
specified by $\chi$, which determine an arbitrary single qubit black box
operation ${\cal E}$; three of these describe arbitrary unitary transforms
$\exp(i\sum_k r_k\sigma_k)$ on the qubit, and nine parameters describe
possible correlations established with the environment $E$ via
$\exp(i\sum_{jk} \gamma_{jk} \sigma_j\otimes\sigma^E_k)$. Two combinations of
the nine parameters describe physical processes analogous to the $T_1$ and
$T_2$ spin-lattice and spin-spin relaxation rates familiar to us from
classical magnetic spin systems. However, the dephasing and energy loss rates
determined by $\chi$ do not simply describe ensemble behavior; rather, $\chi$
describes the dynamics of a {\em single quantum system}. Thus, the
decoherence of a single qubit must be described by {\em more than just two
parameters}. {\em Twelve} are needed in general.
These 12 parameters may be measured using four sets of experiments. As a
specific example, suppose the input states $\ket{0}$, $\ket{1}$,
$\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$ and $\ket{-} =
(\ket{0}+i\,\ket{1})/\sqrt{2}$ are prepared, and the four matrices
\begin{eqnarray}
\rho'_1 &=& {\cal E}(\ket{0}\bra{0})
\\ \rho'_4 &=& {\cal E}(\ket{1}\bra{1})
\\ \rho'_2 &=& {\cal E}(\ket{+}\bra{+})
+ i {\cal E}(\ket{-}\bra{-})
- (1+i)(\rho'_1 + \rho'_4)/2
\\ \rho'_3 &=& {\cal E}(\ket{+}\bra{+})
- i {\cal E}(\ket{-}\bra{-})
- (1-i)(\rho'_1 + \rho'_4)/2
\end{eqnarray}
are determined using state tomography. These correspond to $\rho'_j = {\cal
E}(\rho_j)$, where
\begin{equation}
\rho_1 = \mattwoc{1}{0}{0}{0}
\,,
\end{equation}
$\rho_2 = \rho_1 \sigma_x$, $\rho_3=\sigma_x\rho_1$, and $\rho_4 = \sigma_x
\rho_1\sigma_x$. From Eq.(\ref{eq:betadef}) and
Eqs.(\ref{eq:fixedonebit}-\ref{eq:fixedonebitend}) we may determine $\beta$,
and similarly $\rho'_j$ determines $\lambda$. However, due to the particular
choice of basis, and the Pauli matrix representation of $\tilde{A}_i$, we may
express the $\beta$ matrix as the Kronecker product $\beta = \Lambda\otimes
\Lambda$, where
\begin{equation}
\Lambda = \frac{1}{2} \mattwoc{I}{\sigma_x}{\sigma_x}{-I}
\,,
\end{equation}
so that $\chi$ may be expressed conveniently as
\begin{equation}
\chi = \Lambda \mattwoc{\rho'_1}{\rho'_2}{\rho'_3}{\rho'_4} \Lambda
\,,
\end{equation}
in terms of block matrices.
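For instance (a small numerical check assuming NumPy, with a phase flip channel of strength $p$ standing in for the measured data), evaluating this block-matrix expression yields $\chi = \mbox{diag}(1-p,0,0,p)$ in the basis of Eqs.(\ref{eq:fixedonebit}-\ref{eq:fixedonebitend}), as expected for the operators $A_0 = \sqrt{1-p}\,I$ and $A_1 = \sqrt{p}\,\sigma_z$:
\begin{verbatim}
import numpy as np

p = 0.25
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def E(rho):   # stand-in for the measured channel: phase flip of strength p
    return (1 - p) * rho + p * sz @ rho @ sz

# rho_1 = |0><0|, rho_2 = rho_1 sx, rho_3 = sx rho_1, rho_4 = sx rho_1 sx
rho1 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_p = [E(r) for r in (rho1, rho1 @ sx, sx @ rho1, sx @ rho1 @ sx)]

Lam = 0.5 * np.block([[I2, sx], [sx, -I2]])
rho_bar = np.block([[rho_p[0], rho_p[1]], [rho_p[2], rho_p[3]]])
chi = Lam @ rho_bar @ Lam

print(np.round(chi.real, 6))   # expected: diag(1 - p, 0, 0, p)
\end{verbatim}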
Likewise, it turns out that the parameters $\chi_2$ describing the
black box operations on two qubits can be expressed as
\begin{equation}
\chi_2 = \Lambda_2 \overline{\rho}' \Lambda_2
\,,
\end{equation}
where $\Lambda_2 = \Lambda \otimes \Lambda$, and $\overline{\rho}'$ is
a matrix of sixteen measured density matrices,
\begin{equation}
\overline{\rho}' = P^T
\left[\begin{array}{cccc}
\rho'_{11} & \rho'_{12} & \rho'_{13} & \rho'_{14}
\\ \rho'_{21} & \rho'_{22} & \rho'_{23} & \rho'_{24}
\\ \rho'_{31} & \rho'_{32} & \rho'_{33} & \rho'_{34}
\\ \rho'_{41} & \rho'_{42} & \rho'_{43} & \rho'_{44}
\end{array}\right]
P
\,,
\end{equation}
where $\rho'_{nm} = {\cal E}(\rho_{nm})$, $\rho_{nm} = T_n
\ket{00}\bra{00} T_m$, $T_1 = I\otimes I$, $T_2 = I\otimes \sigma_x$,
$T_3 = \sigma_x \otimes I$, $T_4 = \sigma_x \otimes \sigma_x$, and $P
= I\otimes [(\rho_{00}+\rho_{12}+\rho_{21}+\rho_{33})\otimes I]$ is a
permutation matrix. Similar results hold for $k>2$ qubits. Note that
in general, a quantum black box acting on $k$ qubits is described by
$16^k-4^k$ independent parameters.
There is a particularly elegant geometric view of quantum
operations for a single qubit. This is based on the Bloch vector,
$\vec \lambda$, which is defined by
\begin{equation}
\rho = \frac{I+\vec \lambda \cdot \vec \sigma}{2},
\end{equation}
satisfying $| \vec \lambda | \leq 1$. The map Eq.(\ref{eq:rhomap})
is equivalent to a map of the form
\begin{equation}
\vec \lambda \stackrel{\cal E}{\rightarrow} \vec \lambda'
= M \vec \lambda + \vec c
\,,
\label{eqtn: affine map}
\end{equation}
where $M$ is a $3$$\times$$3$ matrix, and $\vec c$ is a constant
vector. This is an {\em affine map}, mapping the Bloch sphere into
itself. If the $A_i$ operators are written in the form
\begin{eqnarray}
A_i = \alpha_i I + \sum_{k=1}^3 a_{ik} \sigma_k,
\end{eqnarray}
then it is not difficult to check that
\begin{eqnarray}
M_{jk} & = & \sum_l \left[ \begin{array}{l}
a_{lj} a_{lk}^* + a_{lj}^* a_{lk}
+ \\ \left( |\alpha_l|^2- \sum_p a_{lp} a_{lp}^* \right) \delta_{jk}
+ \\
i \sum_p \epsilon_{jkp}
( \alpha_l a_{lp}^* - \alpha_l^* a_{lp} )
\end{array} \right]
\\
c_k &=& 2i \sum_l \sum_{jp} \epsilon_{jpk} a_{lj} a_{lp}^*
\,,
\end{eqnarray}
where we have made use of Eq.(\ref{eq:completeness}) to simplify the
expression for $\vec c$.
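Numerically, $M$ and $\vec c$ may also be extracted directly from the action of ${\cal E}$ on the Pauli operators: writing ${\cal E}(\rho)$ in the Bloch form above gives $M_{jk} = \frac{1}{2}\mbox{tr}[\sigma_j {\cal E}(\sigma_k)]$ and $c_j = \frac{1}{2}\mbox{tr}[\sigma_j {\cal E}(I)]$. A short sketch (assuming NumPy, with amplitude damping of strength $g$ as a stand-in channel):
\begin{verbatim}
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

g = 0.3   # amplitude damping strength (stand-in channel)
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
     np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]
E = lambda rho: sum(A @ rho @ A.conj().T for A in K)

M = np.array([[0.5 * np.trace(paulis[j] @ E(paulis[k])).real
               for k in range(3)] for j in range(3)])
c = np.array([0.5 * np.trace(paulis[j] @ E(I2)).real for j in range(3)])

print(np.round(M, 4))   # diag(sqrt(1-g), sqrt(1-g), 1-g): the Bloch ball shrinks
print(np.round(c, 4))   # (0, 0, g): the shrunken ball is displaced toward |0>
\end{verbatim}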
The meaning of the affine map Eq.(\ref{eqtn: affine map}) is made clearer
by considering the polar decomposition \cite{Horn91a} of the matrix $M$.
Any real matrix $M$ can always be written in the form
\begin{eqnarray}
M = O S
\,,
\end{eqnarray}
where $O$ is a real orthogonal matrix with determinant $1$,
representing a proper rotation, and $S$ is a real symmetric
matrix. Viewed this way, the map Eq.(\ref{eqtn: affine map}) is just a
deformation of the Bloch sphere along principal axes determined by
$S$, followed by a proper rotation due to $O$, followed by a
displacement due to $\vec c$. Various well-known decoherence measures
can be identified from $M$ and $\vec c$; for example, $T_1$ and $T_2$
are related to the magnitude of $\vec c$ and the norm of $M$. Other
measures are described in the following section.
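Returning to the decomposition $M=OS$ just described, it can be computed numerically with the polar factorization routine in SciPy (a sketch; the sample $M$ below is an arbitrary illustrative deformation matrix, not derived from any particular channel):
\begin{verbatim}
import numpy as np
from scipy.linalg import polar

M = np.diag([0.8, 0.8, 0.6])      # illustrative Bloch-sphere deformation
O, S = polar(M, side='right')     # M = O S, O orthogonal, S symmetric PSD
\end{verbatim}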
\section{Related Quantities}
We have described how to determine an unknown quantum operation ${\cal E}$ by
systematically exploring the response to a complete set of states in the
system's Hilbert space. Once the operators $A_i$ have been determined, many
other interesting quantities can be evaluated. A quantity of particular
importance is the {\em entanglement fidelity} \cite{Schumacher96a,Nielsen96c}.
This quantity can be used to measure how closely the dynamics of the quantum
system under consideration approximates that of some ideal quantum system.
Suppose the target quantum operation is a unitary quantum operation,
${\cal U}(\rho) = U \rho U^{\dagger}$, and the actual quantum
operation implemented experimentally is ${\cal E}$. The entanglement
fidelity can be defined as \cite{Nielsen96c}
\begin{eqnarray}
F_e(\rho,{\cal U},{\cal E})
& \equiv & \sum_i \left|
\mbox{tr}(U^{\dagger} A_i \rho) \right|^2
\\
&=& \sum_{mn} \chi_{mn} \mbox{tr} (U^{\dagger} \tilde{A}_m \rho)
\mbox{tr}(\rho \tilde{A}_n^{\dagger} U)
\,.
\end{eqnarray}
The second expression follows from the first by using Eq.(\ref{eq:atildedef}),
and shows that errors in the experimental determination of ${\cal E}$
(resulting from errors in preparation and measurement) propagate linearly to
errors in the estimation of entanglement fidelity. The minimum value of $F_e$
over all possible states $\rho$ is a single parameter which describes how well
the experimental system implements the desired quantum logic gate.
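A sketch of how $F_e$ might be evaluated numerically once the $A_i$ are known (assuming NumPy; the bit-flip channel below is a hypothetical example, not taken from any experiment):
\begin{verbatim}
import numpy as np

def entanglement_fidelity(rho, U, kraus):
    """F_e = sum_i |tr(U^dagger A_i rho)|^2."""
    return float(sum(abs(np.trace(U.conj().T @ A @ rho)) ** 2 for A in kraus))

# Hypothetical example: target U = I, actual channel = bit flip with probability q.
q = 0.1
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(1 - q) * I2, np.sqrt(q) * sx]
Fe = entanglement_fidelity(I2 / 2, I2, kraus)   # returns 1 - q for this channel
\end{verbatim}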
One may also be interested in the minimum {\em fidelity} of the gate
operation. This is given by the expression,
\begin{eqnarray}
F \equiv \min_{|\psi\rangle} \langle \psi | U^{\dagger}
{\cal E}(|\psi\rangle \langle \psi|) U |\psi \rangle,
\end{eqnarray}
where the minimum is over all pure states, $|\psi\rangle$. As for the
entanglement fidelity, we may show that this quantity can be
determined robustly, because of its linear dependence on the
experimental errors.
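A crude numerical estimate of $F$ may be obtained by sampling random pure states; the sketch below (assuming NumPy) can only overestimate the true minimum, since the minimization runs over a finite sample:
\begin{verbatim}
import numpy as np

def min_fidelity_estimate(U, kraus, n_samples=5000, seed=0):
    """Sample-based estimate of F = min_psi <psi| U^+ E(|psi><psi|) U |psi>."""
    dim = U.shape[0]
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_samples):
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        v /= np.linalg.norm(v)
        rho = np.outer(v, v.conj())
        out = sum(A @ rho @ A.conj().T for A in kraus)
        best = min(best, (v.conj() @ U.conj().T @ out @ U @ v).real)
    return best
\end{verbatim}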
Another quantity of interest is the {\em quantum channel capacity}, defined by
Lloyd \cite{Lloyd96a,Schumacher96b} as a measure of the amount of quantum
information that can be sent using a quantum communication channel, such as an
optical fiber. In terms of the parameters discussed in this paper,
\begin{eqnarray}
C({\cal E})
\equiv \max_{\rho} S({\cal E}(\rho)) - S_e(\rho,{\cal E})
\,,
\end{eqnarray}
where $S({\cal E}(\rho))$ is the von Neumann entropy of the density operator
${\cal E}(\rho)$, $S_e(\rho,{\cal E})$ is the {\em entropy exchange}
\cite{Schumacher96a}, and the maximization is over all density operators
$\rho$ which may be used as input to the channel. It is a measure of the
amount of quantum information that can be sent reliably using a quantum
communications channel which is described by a quantum operation ${\cal E}$.
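For a fixed input $\rho$ the quantity being maximized can be evaluated from the $A_i$; the sketch below assumes (following Schumacher's definition) that the entropy exchange equals the von Neumann entropy of the matrix $W_{ij}=\mbox{tr}(A_i \rho A_j^{\dagger})$, and leaves the maximization over $\rho$ to a separate numerical search:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho, eps=1e-12):
    w = np.linalg.eigvalsh(rho)
    w = w[w > eps]
    return float(-(w * np.log2(w)).sum())

def coherent_information(rho, kraus):
    """S(E(rho)) - S_e(rho, E) for one input state rho."""
    out = sum(A @ rho @ A.conj().T for A in kraus)
    W = np.array([[np.trace(Ai @ rho @ Aj.conj().T) for Aj in kraus]
                  for Ai in kraus])
    return von_neumann_entropy(out) - von_neumann_entropy(W)
\end{verbatim}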
One final observation is that our procedure can in principle be used to
determine the form of the Lindblad operator, ${\cal L}$, used in Markovian
master equations of the form
\begin{eqnarray}
\dot \rho = {\cal L}(\rho),
\end{eqnarray}
where for convenience time is measured in dimensionless units, to make ${\cal
L}$ dimensionless. This result follows from the fact that Lindblad operators
${\cal L}$ are just the logarithms of quantum operations; that is, $\exp({\cal
L})$ is a quantum operation for any Lindblad operator, ${\cal L}$, and $\log
{\cal E}$ is a Lindblad operator for any quantum operation ${\cal E}$. This
observation may be used in the future to experimentally determine the form of
the Lindblad operator for systems, but will not be explored further here.
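A sketch of this logarithm step (assuming NumPy/SciPy, a column-stacking convention for the superoperator, and that the principal matrix logarithm is the branch of interest):
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def superoperator(kraus):
    """S with vec(E(rho)) = S vec(rho), column-stacking convention."""
    return sum(np.kron(A.conj(), A) for A in kraus)

def lindblad_generator(kraus):
    """Principal log of the channel superoperator; exp of it reproduces E."""
    return logm(superoperator(kraus))
\end{verbatim}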
\section{Quantum Measurements}
\label{sec:meas}
Quantum operations can also be used to describe measurements. For each
measurement outcome, $i$, there is associated a quantum operation, ${\cal
E}_i$. The corresponding state change is given by
\begin{eqnarray}
\rho \rightarrow \frac{{\cal E}_i(\rho)}{\mbox{tr}({\cal E}_i(\rho))}
\,,
\end{eqnarray}
where the probability of the measurement outcome occurring is $p_i =
\mbox{tr}({\cal E}_i(\rho))$. Note that this mapping may be {\em nonlinear},
because of this renormalization factor.
Despite the possible nonlinearity, the procedure we have described may be
adapted to evaluate the quantum operations describing a measurement. To
determine ${\cal E}_i$ we proceed exactly as before, except now we must
perform the measurement a large enough number of times that the probability
$p_i$ can be reliably estimated, for example by using the frequency of
occurrence of outcome $i$. Next, $\rho'_j$ is determined using tomography,
allowing us to obtain
\begin{eqnarray}
{\cal E}_i(\rho_j) = \mbox{tr}({\cal E}_i(\rho_j)) \rho'_j,
\end{eqnarray}
for each input $\rho_j$ which we prepare, since each term on the right hand
side is known. Now we proceed exactly as before to evaluate the quantum
operation ${\cal E}_i$. This procedure may be useful, for example, in
evaluating the effectiveness of a quantum-nondemolition (QND)
measurement\cite{braginsky92}.
\section{Conclusion}
In this paper we have shown how the dynamics of a quantum system may be
experimentally determined using a systematic procedure. This elementary {\em
system identification} step \cite{Ljung87} opens the way for robust
experimental determination of a wide variety of interesting quantities.
Amongst those that may be of particular interest are the quantum channel
capacity, the fidelity, and the entanglement fidelity. We expect these
results to be of great use in the experimental study of quantum computation,
quantum error correction, quantum cryptography, quantum coding and quantum
teleportation.
\section*{Acknowledgments}
We thank C.~M.~Caves, R.~Laflamme, Y.~Yamamoto, and W.~H.~Zurek for many
useful discussions about quantum information and quantum optics. This work was
supported in part by the Office of Naval Research (N00014-93-1-0116), the
Phillips Laboratory (F29601-95-0209), and the Army Research Office
(DAAH04-96-1-0299). We thank the Institute for Theoretical Physics for its
hospitality and for the support of the National Science Foundation
(PHY94-07194). ILC acknowledges financial support from the Fannie and John
Hertz Foundation, and MAN acknowledges financial support from the
Australian-American Educational Foundation (Fulbright Commission).
\appendix
\section{Proof of the $\chi$ relation}
\label{appendix: chi}
The difficulty in verifying that $\chi$ defined by (\ref{eqtn:chidefn})
satisfies (\ref{eqtn: chi condition}) is that in general $\chi$
is not uniquely determined by the last set of equations. For
convenience we will rewrite these equations in matrix form as
\begin{eqnarray}
\label{eqtn: chi cond app}
\beta \vec \chi & = & \vec \lambda \\
\label{eqtn: chi defn app}
\vec \chi & \equiv & \kappa \vec \lambda
\,.
\end{eqnarray}
From the construction that led to equation (\ref{eqtn: two sided rep})
we know there exists at least one solution to equation
(\ref{eqtn: chi cond app}), which we shall call $\vec \chi '$. Thus
$\vec \lambda = \beta \vec \chi '$. The generalized inverse satisfies
$\beta \kappa \beta = \beta$. Premultiplying the definition of $\vec \chi$
by $\beta$ gives
\begin{eqnarray}
\beta \vec \chi & = & \beta \kappa \vec \lambda \\
& = & \beta \kappa \beta \vec \chi ' \\
& = & \beta \vec \chi ' \\
& = & \vec \lambda
\,.
\end{eqnarray}
Thus $\chi$ defined by (\ref{eqtn: chi defn app}) satisfies the equation
(\ref{eqtn: chi cond app}), as was required to show.
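The argument is easy to check numerically with the Moore--Penrose inverse, which satisfies $\beta \kappa \beta = \beta$; a small sketch with a deliberately rank-deficient, randomly generated $\beta$ (purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
beta = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))  # rank 3, not invertible
chi_particular = rng.normal(size=8)
lam = beta @ chi_particular                               # consistent right-hand side

kappa = np.linalg.pinv(beta)                              # generalized inverse
chi = kappa @ lam
assert np.allclose(beta @ chi, lam)                       # chi solves beta chi = lam
\end{verbatim}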
\begin{thebibliography}{10}
\bibitem{Bennett92}
C.~H. Bennett, G. Brassard, and A.~K. Ekert, Sci. Am. {\bf 267}, 50 (1992).
\bibitem{Hughes95}
R. Hughes {\it et~al.}, Contemp. Physics {\bf 36}, 149 (1995).
\bibitem{Unruh94}
W.~G. Unruh, Phys. Rev. A {\bf 51}, 992 (1995).
\bibitem{Chuang95a}
I.~L. Chuang, R. Laflamme, P. Shor, and W.~H. Zurek, Science {\bf 270}, 1633
(1995).
\bibitem{Turchette95}
Q.~A. Turchette {\it et~al.}, Phys. Rev. Lett. {\bf 75}, 4710 (1995).
\bibitem{Monroe95}
C. Monroe {\it et~al.}, Phys. Rev. Lett. {\bf 75}, 4714 (1995).
\bibitem{Kraus83a}
K. Kraus, {\em States, Effects, and Operations} (Springer-Verlag, Berlin,
1983).
\bibitem{Schumacher96a}
B.~W. Schumacher, LANL e-print quant-ph/9604023, to appear in Phys. Rev. A
(1996).
\bibitem{Louisell}
W.~H. Louisell, {\em Quantum Statistical Properties of Radiation} (Wiley, New
York, 1973).
\bibitem{Gardiner91}
C.~W. Gardiner, {\em Quantum Noise} (Springer-Verlag, New York, 1991).
\bibitem{Mabuchi96}
H. Mabuchi, quant-ph/9608020 (1996).
\bibitem{Raymer94a}
M. Raymer, M. Beck, and D. McAlister, Phys. Rev. Lett. {\bf 72}, 1137 (1994).
\bibitem{Leonhardt96}
U. Leonhardt, Phys. Rev. A {\bf 53}, 2998 (1996).
\bibitem{Leibfried96a}
D. Leibfried {\it et~al.}, unpublished (1996).
\bibitem{Horn91a}
R.~A. Horn and C.~R. Johnson, {\em Topics in matrix analysis} (Cambridge
University Press, Cambridge, 1991).
\bibitem{Nielsen96c}
M.~A. Nielsen, B.~W. Schumacher, C.~M. Caves, and H. Barnum, in preparation
(1996).
\bibitem{Lloyd96a}
S. Lloyd, LANL e-print quant-ph/9604015, submitted to Phys. Rev. A (1996).
\bibitem{Schumacher96b}
B.~W. Schumacher and M.~A. Nielsen, LANL e-print quant-ph/9604022, to appear
in Phys. Rev. A (1996).
\bibitem{braginsky92}
V.~B. Braginsky and F.~Y. Khalili, {\em Quantum Measurement} (Cambridge
University Press, Cambridge, England, 1992).
\bibitem{Ljung87}
L. Ljung, {\em System Identification: Theory for the User} (Prentice Hall PTR,
Upper Saddle River, 1987).
\end{thebibliography}
\end{multicols}
\end{document}
\begin{document}
\begin{CJK*}{GBK}{song}
\CJKindent
\title{Differential inequality of the second derivative that leads to
normality}
\author{Qiaoyu Chen, Shahar Nevo, Xuecheng Pang
}
\date{}
\maketitle \vskip 3mm
\SHOUYEJIAOZHU{2010 Mathematics Subject Classification:
30A10,~30D45.}
\SHOUYEJIAOZHU{Keywords and phrases: normal family,
differential inequality.}
\centerline{{\CJKfamily{hei} Abstract}}
\noindent Let~$\mathcal{F}$~ be a
family of functions meromorphic in a domain ~D.~
If~$\{\df{|f^{''}|}{1+|f|^{3}}:f\in \mathcal{F} \}$~is locally
uniformly bounded away from zero,~then~$\mathcal{F}$~ is normal.
\noindent I.~Introduction. \\
Recently,~progress has been made concerning the
study of the connection between differential inequalities and
normality. ~A natural point of departure for this subject is the
well-known
theorem due to F.Marty.\\
\CJKfamily{hei}{Marty's Theorem }~[8, P.75]\quad A family ~$\mathcal{F}$~of
functions meromorphic in a domain ~D~ is normal~ if and only
if~~$\{f^{\#}:~f \in \mathcal{F}\}$~is locally
uniformly bounded in ~D~.\\
Following Marty's Theorem,~L.~Royden proved the following
generalization.\\
\CJKfamily{hei}{Theorem R}[7]\quad Let ~$\mathcal{F}$~ be a family of functions
meromorphic in a domain ~D, ~with the property that for each compact
set ~$K\subset D$~, ~there is a positive increasing function
~$h_K$~, ~such that ~$|f'(z)|\leq h_K(|f(z)|)$~for all ~$f\in
\mathcal{F} $~ ~and ~$z\in K$.~Then~$\mathcal{F}$~is normal in~D.\\
This result was significantly extended further in various
directions, ~see ~$[3],[9]~and~[11]$. ~S.Y.Li and H.Xie established
a different kind of generalization ~of Marty's Theorem that involves
higher derivatives. \\
\CJKfamily{hei}{Theorem LX} ~$[4]$\quad Let~$\mathcal{F}$~ be a family of
functions meromorphic in a domain ~D,~such that each $f\in
\mathcal{F} $~has zeros only of multiplicities $\geq k~,k\in
N$.~Then~$\mathcal{F}$~is normal in D if and only if the family
$$\left\{\df {|f^{(k)}(z)|}{1+|f(z)|^{k+1}}:f\in \mathcal{F}\right\}$$
is locally uniformly bounded in D.\\
In~$[6]$,~the second and the third authors gave a counterexample to
the validity of Theorem LX,~without the condition on the
multiplicities of zeros for ~the case ~$k=2$.\\
Concerning differential inequalities ~with the reversed sign of the
inequality,~J.~Grahl,~and the ~second author proved the ~following
result,~that may be ~considered as a counterpart to ~Marty's
Theorem.\\
\CJKfamily{hei}{Theorem GN }\quad $[1]$\quad Let ~$\mathcal{F}$~ be a family of
functions
meromorphic in D,~and~$c>0$~.~If ~$f^{\#}(z)>c$~
for every~$f\in \mathcal{F} $~and~$z\in D$,~then~$\mathcal{F}$~is normal in
~D.\\
N.Steinmetz ~$[10]$,~gave a shorter proof of Theorem GN,~using the
Schwarzian derivative and some well-known facts on linear
differential equations.\\
Then in~$[5]$,~X.J.Liu together
~with the second and third
~authors generalized Theorem GN
~and proved
the following result.\\
\CJKfamily{hei}{Theorem LNP}\quad Let~$1\leq\alpha< \infty$~ and~$C>0$.~Let
~~$\mathcal{F}$~ be the family of all meromorphic functions ~$f $
~in~D, ~such that
$${\df {|f'(z)|}{1+|f(z)|^{\alpha}}>C}$$
~for every
~$ z\in D$.\\~
Then the following hold:\\
(1) If ~$\alpha>1$,~then~$\mathcal{F}$~is normal ~in ~D.~\\
(2) If~$\alpha=1$,~then~$\mathcal{F}$~is quasi-normal in ~D~ but
not necessarily normal. \\
Observe that (2) of the theorem gives a
differential inequality that
distinguishes between quasi-normality and normality. \\
In this paper,
we continue to ~study differential inequalities ~with
the reversed sign ~$(``\geq ")$~ ~and prove the following theorem.\\
\CJKfamily{hei}{Theorem 1.}\quad Let ~D~be a domain in ~$\mathbb{C}$~and let
~$C>0$.~Then the family ~~$\mathcal{F}$~ of all functions
~f~meromorphic in ~D~,~such that ~$$\df {|f^{''}(z)|}{1+|f(z)|^{3}}>
C$$~for ~every
~$z\in D$~is normal. \\
Observe that the above differential
~inequality is the reversed
inequality to that of Theorem LX in the case ~$k=2$.~\\
Let us set some notation. \\
For ~$z_0 \in \mathbb{C} $~and ~$r>0$,~$\Delta(z_0, r) = \{z: |z - z_0| <
r\}$,~$\overline{\Delta}(z_0, r) = \{z: |z - z_0| \leq r\}$. We
write~$f_n(z)\overset\chi \Rightarrow f(z)$~on ~$D$~to indicate that
the sequence $\{f_n(z)\}$ converges to ~$f(z)$ in the spherical
metric, uniformly on compact subsets of ~$D$, and $f_n(z)\Rightarrow
f(z)$~on ~$D$ if the convergence is also in the Euclidean metric.\\
II\quad
Proof of Theorem 1.\\
Since ~$|f''| > C$~ for every~$f \in\mathcal{F},$~it ~follows
that~$\{f'':f \in\mathcal{F}\}$~is normal in ~D~.~Let
~$\{f_n\}^{\infty}_{n=1}$~be a sequence of functions from
$\mathcal{F}$.~Without loss of ~generality,~we can assume that
~$f^{''}_n(z)\overset\chi \Rightarrow H$ in D~.~Let us separate
into two cases.\\
Case 1.\quad$f_n,n\geq 1$~are holomorphic ~functions~in ~D~.\\
Case 1.1\quad H~is a holomorphic function~in~D~.\\
Since normality is a local property,~it is enough to prove that
~$\{f_n\}$~is normal at each point of ~D~. Let ~$z_0\in D$;~without
loss of generality, ~we can assume that~$z_0=0$~. By the assumption
on ~H~, ~there exist some~$r >0,~M >C$,~such that~$|f^{''}_n(z)|\leq
M$~ ~for every~$z\in \Delta(0,r)$~if ~$n$~ is large ~enough.~We then
get for large enough ~$n$~ and~$z\in \Delta(0,r)$~that
~$1+|f_n(z)|^3 \leq \frac {2 M}{C}$ ~and we deduce ~that
~$\{f_n\}^{\infty}_{n=1}$~ is normal at ~$z=0,$~ ~as required.\\
Case 1.2 \quad$H \equiv\infty$~in~D.\\
Again,~let $z_0 \in D$ and assume that $z_0 =0.$~ Let $r >0$ be such
that $\overline{\Delta}(0, r)\subset D$. Without loss of
generality,~we can assume that ~$|f^{''}_n(z)|> 1$~
~for every~$z\in\Delta(0,r),~n\in\mathbb{N}$.~Then~$\log |f_n^{''}|$~is a positive harmonic function in $\Delta(0,r)$.\\
From Harnack's inequality we then get that\\
(1)~$$|f_n^{''}(z)|\leq|f_n^{''}(0)|^{\frac{r+|z|}{r-|z|}} $$~for
every $z\in\Delta(0,r),~n\in\mathbb{N}.$\\
Let us fix some~$0<\rho<\displaystyle{\frac{r}{2}}$.~Then
\\(2) $$\frac{r+\rho}{r-\rho}<3.$$\\
For every $n\geq 1,$ let ~$z_n\in\{z:|z|=\rho\}$~be such that
$$
|f_n(z_n)|=\max\limits_{|z|\leq\rho}|f_n(z)|=M(\rho,f_n)$$
By Cauchy's Inequality ,~we get that\\
$$|f^{''}_n(0)|\leq
\df{2}{\rho^2}M(\rho,f_n)=\df{2}{\rho^2}|f_n(z_n)|.$$
Hence,~by (1),~we get
$$C\leq \df{|f^{''}_n(z_n)|}{1+|f_n(z_n)|^3}\leq
\df{|f^{''}_n(z_n)|}{|f_n(z_n)|^3}\leq
\df{|f^{''}_n(0)|^{\df{r+\rho}{r-\rho}}}{|f_n(z_n)|^3} \leq
\left(\df{2}{\rho^2}\right)^{\df{r+\rho}{r-\rho}}\left|f_n(z_n)\right|^{\df{r+\rho}{r-\rho}-\displaystyle{3}},$$
Thus, by (2)
$$M(\rho,f_n)=|f_n(z_n)|\leq \left(\df{1}{C}\left(\df{2}{\rho^2}\right)^{\df{r+\rho}
{r-\rho}}\right)^{\df{1}{3-\df{r+\rho}{r-\rho}}},$$ which means that
$\{f_n\}$ is locally uniformly bounded in $\Delta(0,\rho)$ and thus
$\{f_n\}$ is normal at $z=0$.\\
Case 2 \quad $f_n$ are meromorphic functions with poles in $D$.\\
By Case 1 we have to prove normality only at points $z_0$,~where
$H(z_0)=\infty$.~Such points exist if $H$ is a meromorphic function
with poles in $D$ or if $H\equiv \infty$. So let $z_0$ be such that
$H(z_0)= \infty$.~Without loss of generality, we can assume
that $z_0=0$. After moving to a subsequence, that without loss of
generality will also be denoted by $\{f_n\}_1^{\infty}$,~we can
assume that there is a sequence $\zeta_n\rightarrow 0$~such that
~$f_n(\zeta_n)=\infty$.~For if this were not the case,~then for some
$\delta
>0$ ~and large enough ~$n$,~$f_n$~would be holomorphic in
$\Delta(0,\delta)$,~and then we would get the asserted normality by
Case 1.\\
Also we can assume the existence of \\
(3)\quad\quad a sequence~$\eta_n\rightarrow 0$~such that $f_n(\eta_n)=0$.\\
Indeed,~since ~$H(z_0)=\infty$~there exists some ~$\delta> 0$~such
that for large enough ~$n$,~$\min\limits_{z\in \Delta
(0,\delta)}|f_n^{''}|>1$.\\
Combining it with ~$f_n\neq 0$~ in some neighbourhood of
~$z=0$~gives the normality at~$z=0$~by Gu's Criterion [2]. \\
We can also assume that ~$\{f_n^{'}\}$~ is not normal
at~$z=0$.~Indeed, if~$\{f_n^{'}\}$~were normal at ~$z=0$,~then
by Marty's theorem there exist ~$r_1>0$~and ~$M>0$~such that for
large enough ~$n$,~ $\df{|f_n^{''}(z)|}{1+|f_n^{'}(z)|^2}<M $ for
~$z\in \Delta(0,r_1)$.~ Since ~$H(0)=\infty $,there exists some
~$r_2\leq r_1$~such that for large enough ~$n,$~$|f_n^{''}(z)|\geq
2M$~for
~$z\in\Delta(0,r_2)$.\\
We thus have for large enough ~$n$~and ~$z\in \Delta(0,r_2)$,~
$1+|f_n^{'}(z)|^2>\df{|f_n^{''}(z)|}{M}\geq 2$~and thus
~$|f^{'}_n(z)|\geq 1$. We then get
$$\df{|f^{'}_n(z)|^2}{|f^{''}_n(z)|}=\df{|f^{'}_n(z)|^2}{1+|f^{'}_n(z)|^2}\cdot
\df{1+|f^{'}_n(z)|^2}{|f^{''}_n(z)|}\geq \df{1^2}{1+1^2}\cdot
\df{1}{M}=\df{1}{2M}.$$ Hence we have for large enough ~$n$~and ~
$z\in
\Delta(0,r_2)$\\
$(4)\quad\quad\displaystyle{\df{|f^{'}_n(z)|^2}{1+|f_n(z)|^3}=\df{|f^{'}_n(z)|^2}{|f^{''}_n(z)|}\cdot
\df{|f^{''}_n(z)|}{1+|f_n(z)|^3}>\df{1}{2M}\cdot C}.$ \\
Now,for every ~$x\geq 0,\df{\sqrt{1+x^2}}{1+x}\geq
\df{1}{\sqrt{2}}$,~and by taking square root of (4),~we get
$$\df{|f^{'}_n(z)|}{1+|f_n(z)|^\frac{3}{2}}=\df{|f^{'}_n(z)|}{\sqrt{1+|f_n(z)|^3}}\cdot
\df{\sqrt{1+|f_n(z)|^3}}{1+|f_n(z)|^\frac{3}{2}}>\sqrt{\df{C}{2M}}\cdot\df{1}{\sqrt{2}}
.$$ \\
By (1) of Theorem LNP, ~with ~$\alpha=\frac{3}{2}>1$,~we
deduce that~$\{f_n\}$ ~ is normal in $\Delta(0,r_2)$ and~we are
done.
\\
Thus
we can assume that~$\{f'_n\}$ ~is not normal at ~$z=0$. \\
Similarly
to (3) we can assume~that there is a sequence
~$s_n\to 0$~such that~$f^{'}_n(s_n)=0$.\\
We claim that we
can assume that~$\{\df{f^{'}_n}{f^{''}_n}\}_{n=1}^{\infty}$~is not
normal at
~$z=0$. \\
Otherwise,~after moving to a~subsequence that will also be denoted
by~$\{\df{f^{'}_n}{f^{''}_n}\}_{n=1}^{\infty}$ we
have~$\df{f^{'}_n}{f^{''}_n}\Rightarrow H_1$ in $\Delta(0,r)$ , for
some~$r
> 0$.~Since ~$f^{''}_n \neq 0$~and
$\df{f^{'}_n}{f^{''}_n}(\zeta_n)=0$,~ then ~$H_1$~must be a holomorphic
function in~$\Delta(0,r).$
Differentiation then gives\\
(5)\quad\quad $1-\df{f^{'}_n f^{(3)}_n }{(f^{''}_n)^2}\Rightarrow
H^{'}_1$~in $\Delta(0,r).$\\
At ~$z=s_n$~ the left-hand side of~(5)~is equal to 1.~On the other hand,
~in some small neighbourhood of~$z=\zeta_n$,~we have
$f_n(z)=\frac{A_n}{z-\zeta_n}+\hat{f}_n(z),~$ where $A_n\neq 0$ is a
constant,~and $\hat{f}_n(z)$ is analytic. ~Here we used that,
according to the ~assumption of Theorem ~$1$,~all ~poles of
~$f_n$~must be simple. \\
Hence we have
~$f^{'}_n(z)=\df{-A_n}{(z-\zeta_n)^2}+\hat{f}_n^{'}(z),
f^{''}_n(z)=\df{2A_n}{(z-\zeta_n)^3}+\hat{f}_n^{''}(z),
f_n^{(3)}(z)=\df{-6A_n}{(z-\zeta_n)^4}+\hat{f}_n^{(3)}(z)$.~ Then the
left-hand side of (5) takes at~$z=\zeta_n$~the value
~$1-\df{6}{4}=-\df{1}{2}\neq 1$,
a contradiction.\\
Claim ~ There exist ~$r>0$~and ~$K>0$~such that for large enough
~$n$,
$|\df{f_n}{f_n^{''}}(z)|,~~\left|\df{f_n^2}{f_n^{''}}(z)\right|\leq K$~for ~$z\in \Delta(0,r)$.\\
Proof of Claim \quad Since ~$H(0)=\infty $,there exist ~$r>0$~and
~$M>0$~such that $\overline{\Delta}(0,r)\subset D$~and such that for
large enough $n$, $|f_n^{''}(z)|>M$ for ~$z\in \Delta (0,r)$.\\
Now ,if $|f_n(z)|\leq |f_n^{''}(z)|^{\frac{1}{3}}$ then \\
(6) \quad\quad $|\df{f_n}{f_n^{''}}(z)|\leq
\df{|f_n^{''}(z)|^{\frac{1}{3}}}{|f_n^{''}(z)|}\leq
\df{1}{M^{\frac{2}{3}}}$\\
and\\
(7)\quad\quad$|\df{f_n^2}{f_n^{''}}(z)|\leq
\df{|f_n^{''}(z)|^{\frac{2}{3}}}{|f_n^{''}(z)|}\leq
\frac{1}{M^{\frac{1}{3}}}$.\\
If on the other hand $|f_n(z)|\geq
|f_n^{''}(z)|^{\frac{1}{3}}$,~then since $\df{x}{1+x^3}\leq
\df{2^{\frac{2}{3}}}{3}$~for ~$x\geq 0$,~we
get \\
(8)\quad\quad
$|\df{f_n}{f_n^{''}}(z)|=\df{1+|f_n(z)|^3}{|f_n^{''}(z)|}\cdot
\df{|f_n(z)|}{1+|f_n(z)|^3}\leq \df{1}{C}\cdot
\df{2^{\frac{2}{3}}}{3}$.\\
Also we have $\df{x^2}{1+x^3}\leq \frac{2^{\frac{2}{3}}}{3}$~for
~$x\geq
0$ and thus\\
(9)\quad\quad$|\df{f_n^2}{f_n^{''}}(z)|=\df{1+|f_n(z)|^3}{|f_n^{''}(z)|}\cdot
\df{|f_n^2(z)|}{1+|f_n(z)|^3}\leq \df{1}{C}\cdot
\df{2^{\frac{2}{3}}}{3}$.\\
The claim then follows by taking
$K=\max{\{\frac{1}{M^{\frac{2}{3}}},\frac{1}{M^{\frac{1}{3}}},\frac{1}{C}\cdot\frac{2^{\frac{2}{3}}}{3}\}}$
and considering (6),~(7),~(8) and (9).\\
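For completeness we record the elementary calculus behind the bounds used in (8) and (9):
$$\left(\frac{x}{1+x^3}\right)'=\frac{1-2x^3}{(1+x^3)^2},\qquad \max_{x\geq 0}\frac{x}{1+x^3}=\frac{x}{1+x^3}\bigg|_{x=2^{-1/3}}=\frac{2^{\frac{2}{3}}}{3},$$
$$\left(\frac{x^2}{1+x^3}\right)'=\frac{x(2-x^3)}{(1+x^3)^2},\qquad \max_{x\geq 0}\frac{x^2}{1+x^3}=\frac{x^2}{1+x^3}\bigg|_{x=2^{1/3}}=\frac{2^{\frac{2}{3}}}{3}.$$\\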
From the claim we deduce that $\{\df{f_n}{f_n^{''}}\}_{n=1}^\infty$
and $\{\df{f_n^2}{f_n^{''}}\}_{n=1}^\infty$ are normal in
$\Delta(0,r)$,~so after moving to a subsequence,~that also will be
denoted by
$\{f_n\}_{n=1}^\infty $,~we get that \\
(10)\quad\quad$\df{f_n}{f_n^{''}}\rightarrow H_1$ in $\Delta(0,r)$\\
and\\
(11)\quad\quad$\df{f_n^2}{f_n^{''}}\rightarrow H_2$ in $\Delta(0,r)$\\
From the claim it follows that $H_1$ and $H_2$ are holomorphic in
$\Delta(0,r)$.\\
Differentiating (10) and (11) gives respectively \\
(12)\quad\quad$\df{f_n^{'}}{f_n^{''}}-\df{f_n^{(3)}}{(f_n^{''})^2}\cdot
f_n
\Rightarrow H_1^{'}$ in $\Delta(0,r)$\\
and \\
(13)\quad\quad$2f_n\cdot\df{f_n^{'}}{f_n^{''}}-f_n^2\cdot\df{f_n^{(3)}}{{f^{''}_n}^2}\Rightarrow
H_2^{'}$ in $\Delta(0,r)$.\\
Since $\{f_n''\}_{n=1}^\infty$ is normal, there exists some $k_1>0$
such that $ \frac{|f_n^{(3)}(z)|}{1+|f_n''(z)|}\leq k_1 $ for every
$n\geq1$ and for every $z\in\Delta(0,r)$. Since in addition for
large enough $n$, $|f_n''(z)|>M$, then
\begin{eqnarray*}
\frac{|f_n^{(3)}(z)|}{|f_n^{''}(z)|^2}&=&\frac{|f_n^{(3)}(z)|}{1+|f_n^{''}(z)|^2}\frac{1+|f_n^{''}(z)|^2}{|f_n^{''}(z)|^2}\\
&\leq&k_1(1+\frac{1}{M^2}):=k_2.
\end{eqnarray*}
\\
Thus~\\
(14)\quad\quad$
\frac{|f_n^{''}(z)|^2}{|f_n^{(3)}(z)|}\geq\frac{1}{k_2} $~for large
enough n. \\
Now since we assume that $\{\frac{f_n'}{f_n''}\}$ is not normal at
$z=0$, then after moving to a subsequence, that also will be denoted
by $\{f_n\}_{n=1}^\infty$, we get that there exists a sequence of
points $t_n\rightarrow0$, such that
$$
\frac{f_n'}{f_n''}(t_n):=M_n\rightarrow\infty, \quad M_n\in
\mathbb{C}.
$$
Substituting $z=t_n$ in $(12)$ gives\\
(15)
$$
M_n-\frac{f_n^{(3)}\cdot
f_n}{f_n''^2}(t_n):=\varepsilon_n\rightarrow H_1'(0).
$$
Hence
$$
f_n(t_n)=(M_n-\varepsilon_n)\frac{f_n^{''2}}{f_n^{(3)}}(t_n)
$$
From $(15)$ we get,~ by substituting $z=t_n$ in $(13)$
$$
2(M_n-\varepsilon_n)\frac{f_n^{''2}}{f_n^{(3)}}(t_n)M_n-(M_n-\varepsilon_n)^2\left(\frac{f_n^{''2}}{f_n^{(3)}}(t_n)\right)^2\frac{f_n^{(3)}}{f_n^{''2}}(t_n):=\delta_n\rightarrow
H_2'(0).
$$
\\
From this we get after simplifying
$$
(M_n^2-\varepsilon_n^2)\frac{f_n^{''2}}{f_n^{(3)}}(t_n)=\delta_n.
$$
But by (14) the left-hand side above tends to $\infty$~ as
$n\rightarrow\infty$, while the right-hand side is bounded, a
contradiction.\\
This completes the proof of Theorem 1.
QIAOYU CHEN, DEPARTMENT OF MATHEMATICS, EAST CHINA NORMAL
UNIVERSITY,SHANG HAI 200241,P.R.CHINA\\
E-mail address: goodluckqiaoyu@126.com
SHAHAR NEVO, DEPARTMENT OF MATHEMATICS, BAR-ILAN UNIVERSITY, 52900
RAMAT-GAN, ISRAEL \\
E-mail address: nevosh@macs.biu.ac.il
XUECHENG PANG, DEPARTMENT OF MATHEMATICS, EAST CHINA NORMAL
UNIVERSITY,SHANG HAI 200241,P.R.CHINA\\
E-mail address: xcpang@math.ecnu.cn
\end{CJK*}
\end{document}
\begin{document}
\title[On smooth divisors of a projective hypersurface.]{On smooth divisors of a projective hypersurface.}
\author{Ellia Ph.}
\address{Dipartimento di Matematica, via Machiavelli 35, 44100 Ferrara (Italy)}
\email{phe@dns.unife.it}
\author{Franco D.}
\address{Dipartimento di Matematica e Applicazioni "R. Caccioppoli", Univ. Napoli "Federico II", Ple Tecchio 80, 80125 Napoli (Italy)}
\email{davide.franco@unina.it}
\date{16/06/2004}
\maketitle
\hskip7cm{\it Dedicated to Christian Peskine.}
\section*{Introduction.}
This paper deals with the existence of smooth divisors of a projective hypersurface $\Sigma \subset\mathbb{P}^n $ (projective space over an algebraically closed field of characteristic zero). According to a celebrated conjecture of Hartshorne, at least when $n\geq 7$, any such variety should be a complete intersection.
Since the existence of smooth, non complete intersection, subcanonical $X \subset \mathbb{P}^n$ of codimension two
is equivalent, via the correspondence of Serre, to the existence of indecomposable rank two vector bundles on $\mathbb{P}^n$ and since no indecomposable rank two vector bundle on $\mathbb{P}^n $, $n\geq 5$, is presently known,
it is widely believed that any smooth, subcanonical subvariety of $\mathbb{P}^n $, $n\ge5$, of codimension two is a complete intersection. Furthermore recall that, by a theorem of Barth, the subcanonical condition is automatically satisfied if $n \geq 6$. This in turn implies that a smooth (subcanonical if $n=5$) divisor of a
projective hypersurface $\Sigma \subset\mathbb{P}^n $, $n\geq 5$, is a complete intersection too.
\par In this paper we show that, roughly speaking, for any $\Sigma \subset \mathbb{P}^n$ there can be at most $\textit{finitely many}$ exceptions to the last statement. Indeed our main result is:
\begin{theorem}
\label{mainthm}
Let $\Sigma \subset \mathbb{P}^n$, $n \geq 5$ be an integral hypersurface of degree $s$. Let $X \subset \Sigma$ be a smooth variety with $dim(X)=n-2$. If $n=5$, assume $X$ subcanonical. If $X$ is not a complete intersection in $\mathbb{P}^n$, then:
$$d(X) \leq \frac{s(s-1)[(s-1)^2-n+1]}{n-1}+1.$$
\end{theorem}
In other words a smooth codimension two subvariety of $\mathbb{P}^n$, $n \geq 5$ (if $n=5$, we assume $X$ subcanonical) which is not a complete intersection cannot lie on a hypersurface of too low degree (too low with respect to its own degree) and, {\it on a fixed hypersurface}, Hartshorne's conjecture in codimension two is "asymptotically" true.
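To illustrate the bound with a concrete instance (recorded here only as an illustration): for $n=s=6$ the theorem gives
$$d(X) \leq \frac{6\cdot 5\,[(6-1)^2-6+1]}{6-1}+1=121,$$
so a smooth fourfold of degree at least $122$ lying on an integral sextic hypersurface of $\mathbb{P}^6$ is necessarily a complete intersection.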
\par
The starting point is Severi-Lefschetz theorem which states that if $n \geq 4$ and if $X$ is a Cartier divisor on $\Sigma$, then $X$ is the complete intersection of $\Sigma$ with another hypersurface. For instance if $\Sigma $ is either smooth or singular in a finite set of points and if $n \geq 5$, the picture is very clear:
\begin{enumerate}
\item
there $\textit{exist}$ smooth $X\subset \Sigma$ with $dim(X)=n-2$ and with arbitrarily large degree;
\item
$\textit{any}$ smooth $X\subset \Sigma$ with $dim(X)=n-2$ $\textit{is}$ a complete intersection of $\Sigma$ with another hypersurface
\item $\textit{no}$ smooth $X\subset \Sigma$ with $dim(X)=n-2$ can meet the singular locus of $\Sigma$.
\end{enumerate}
\vskip1cm
Using Theorem \ref{mainthm} we get (the first statement comes again from an easy application of the Theorem of Severi-Lefschetz-Grothendieck):
\begin{theorem}
\label{sigma}
Let $\Sigma \subset \mathbb{P}^n$, $n\geq 5$, be an integral hypersurface of degree $s$
with \par $dimSing(\Sigma)\geq1$.
\begin{enumerate}
\item If $n\geq 6$ and
$dimSing(\Sigma)\leq n-5$ then $\Sigma $ does not contain any smooth variety of dimension $n-2$.
\item Suppose $dimSing(\Sigma)\geq n-4$. If $X\subset \Sigma $ is smooth, subcanonical, with $dim(X)=n-2$ then
$d(X)\leq s\frac{(s-1)((s-1)^2-n+1)}{n-1}+1$.
\end{enumerate}
\end{theorem}
We point out a consequence of this result.
\begin{corollary}
\label{sigmahilb} Let $\Sigma \subset \mathbb{P}^n$, $n\geq 5$, be an integral hypersurface
s.t. $dimSing(\Sigma)\geq 1$.
\begin{enumerate}
\item If $n\geq 6$ and
$dimSing(\Sigma)\leq n-5$ then $\Sigma $ does not contain any smooth variety of dimension $n-2$.
\item Suppose $dimSing(\Sigma)\geq n-4$. Then there are only finitely many components of
$\mathcal{H}ilb(\Sigma)$ containing smooth, subcanonical varieties of dimension $n-2$.
\end{enumerate}
\end{corollary}
Last but not least, at the end of the paper we show how this circle of ideas allows us to improve the main results of \cite{EF} about subcanonical varieties of $\mathbb{P}^5 $ and $\mathbb{P}^6$:
\begin{theorem}
\label{n=s=5}
Let $X \subset \mathbb{P}^5$ be a smooth threefold with $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$. If $h^0(\ensuremath{\mathcal{I}} _X(5)) \neq 0$,
then $X$ is a complete intersection.
\end{theorem}
\begin{theorem}
\label{n=s=6}
Let $X \subset \mathbb{P}^6$ be a smooth fourfold. If $h^0(\ensuremath{\mathcal{I}} _X(6)) \neq 0$,
then $X$ is a complete intersection.
\end{theorem}
Theorem \ref{mainthm} follows, thanks to a crucial remark essentially proved in \cite{EP} (see Lemma \ref{l1}), from a bound of $e$ (where $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$), see Theorem \ref{thmSpec}, which can be viewed as a strong (since the degree is not involved) generalization of the "Speciality theorem" of Gruson-Peskine \cite{GP}. The proof of this bound is quite simple if $X \cap Sing(\Sigma )$ has the right dimension. This is done in the first section where a weaker version of Theorem \ref{thmSpec} and hence of Theorem \ref{mainthm} is proved (if $n=5$ we assume $Pic(X) \simeq \mathbb{Z} .H$). In the second section we show how a refinement of the proof yields our final result. Finally let's observe that our approach doesn't apply to the case $n=4$.
\vskip1cm
\textbf{Acknowledgment:} It is a pleasure to thank Enzo Di Gennaro who explained to one of us (D.F.) some of the deep results of \cite{K}.
\section{Reduction and the speciality theorem, weak version.}
\begin{notations}
Given a projective scheme $Y \subset \mathbb{P}^n $ we denote by $d(Y)$ the \textit{degree} of $Y$.
\end{notations}
\begin{notations}
\label{not2} In this section, $X \subset \mathbb{P}^n, n \geq 5$, will denote a smooth, non degenerate, codimension two
subvariety which is not a complete intersection. We will always assume $X$ subcanonical: $\omega _X \simeq
\ensuremath{\mathcal{O}} _X(e)$; notice that this condition is fulfilled if $Pic(X) \simeq \mathbb{Z} .H$; finally, thanks to a theorem of Barth, this last condition is automatically fulfilled if $n \geq 6$.\par\noindent
By Serre's construction we may associate to $X$ a rank two vector bundle:
$$ 0 \to \ensuremath{\mathcal{O}} \to E \to \ensuremath{\mathcal{I}} _X(e+n+1) \to 0 $$
The Chern classes of $E$ are: $c_1(E)=e+n+1,c_2(E)=d(X)=:d$.\par\noindent
Let $\Sigma$ be an hypersurface of degree $s $ containing $X$.
Then $\Sigma $ gives a section of $\ensuremath{\mathcal{I}} _X(s)$ which lifts to a section $\sigma _{\Sigma}\in H^0(E(-e-n-1+s))$ (notice that $\sigma_{\Sigma }$ is uniquely defined if $e+n+1-s<0$).
{\it Assume} that $Z$, the zero-locus of $\sigma_{\Sigma }$, has codimension two. Notice that since $X$ is not a complete intersection, this certainly holds if $s = \min\{t\:|\:h^0(\ensuremath{\mathcal{I}} _X(t)) \neq 0\}$.
Anyway, if $Z$ has codimension two, then $d(Z)=c_2(E(-e-n-1+s))=d-s(e+n+1-s)$ and $\omega _Z \simeq \ensuremath{\mathcal{O}} _Z(-e-2n-2+2s)$.
\end{notations}
\begin{remark}
\label{snb} By \cite{R}, if $X\subset \Sigma \subset \mathbb{P}^n $, $n\geq 3$, with $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$ and $d(\Sigma)\leq n-2$ then $X$ is a complete intersection, hence in the remainder of this paper we will assume $s\geq n-1$.
\end{remark}
\begin{remark}
\label{jac} Notice that $E(-e-n-1)\mid _X\simeq \ensuremath{\mathcal{N}} ^*_X$. It is well known
that the scheme $X\cap Z$ is the base locus of the jacobian system of $\Sigma $ on $X$: $X\cap Z=X\cap Jac(\Sigma)$.
So, the \textit{fundamental cycle} (\cite{F} 1.5) of $Z$ in $\mathcal{A}_*(X)$ is $c_2(\ensuremath{\mathcal{N}} ^*_X(s))$ as soon as $X$ and $Z$ intersect in the expected codimension.
\end{remark}
The main goal of this section is to prove:
\begin{theorem}[Speciality theorem, weak version]
\label{thmSpecW}
Let $X \subset \mathbb{P}^n$, $n \geq 5$ be a smooth codimension two subvariety. If $n=5$ assume $Pic(X) \simeq \mathbb{Z} .H$. Let $\Sigma$ be an hypersurface of degree $s$ containing $X$. If $X$ is not a complete intersection, then:
$$e \leq \frac{(s-1)[(s-1)^2-n+1]}{n-1}-n+1$$
where $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$.
\end{theorem}
Let's see how this is related with a bound of the degree. First recall the following:
\begin{lemma}
\label{l1}
Let $X \subset \mathbb{P}^n$, $n \geq 4$, be a smooth codimension two subvariety which is not a complete intersection. Let $\Sigma$ be an hypersurface of minimal degree containing $X$. Set $s:=d(\Sigma )$.
\begin{enumerate}
\item $n-4 \leq dim(X \cap Sing(\Sigma )) \leq n-3$.
\item If $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$, then $d(X) \leq s(n-1+e)+1$.
\item If $dim(X \cap Sing(\Sigma ))=n-3$ and if $Pic(X) \simeq \mathbb{Z} .H$, then $d(X) \leq (s-2)(n-1+e)+1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item is \cite{EF}, Lemma 2.1; 2) is \cite{EF} Lemma 2.2 (i) and the last item is \cite{EF} Lemma 2.2 (ii) with $l=2$ (thanks to Severi and Zak theorems $h^1(\ensuremath{\mathcal{I}} _X(1))=0$, \cite{Z}).
\end{proof}
Theorem \ref{thmSpecW} and the second item of this lemma give us immediately:
\begin{theorem}
\label{thmA}
Let $\Sigma \subset \mathbb{P}^n$, $n \geq 5$, be an integral hypersurface of degree $s$. Let $X \subset \Sigma$ be a smooth subvariety with $dim(X)=n-2$. If $n=5$ assume $Pic(X) \simeq \mathbb{Z} .H$. If $X$ is not a complete intersection, then $d(X) < \frac{s(s-1)[(s-1)^2-n+1]}{n-1}+1$.
\end{theorem}
In order to prove Theorem \ref{thmSpecW} we need some preliminary results.
\begin{lemma}
\label{Ysubcanonical}
Let $\Sigma$ denote an hypersurface of degree $s$ containing $X$.
With assumptions ($codim(\sigma _{\Sigma})_0=2$) and notations as in \ref{not2}, assume $dim(X\cap Z)=n-4$. Then $Y:=X\cap Z$ is a subcanonical, l.c.i. scheme with $\omega_Y\simeq\ensuremath{\mathcal{O}}_Y(2s-n-1)$. Moreover $Y$ is the base locus of the jacobian system of $\Sigma$ in $X$.
\end{lemma}
\begin{proof}
We are assuming that $Y$ is a proper intersection between $X$ and $Z$ hence
$$
0 \to \ensuremath{\mathcal{O}} \to E\mid_X(-e-n-1+s) \to \ensuremath{\mathcal{I}} _{Y, X}(-e-n-1+2s) \to 0
$$
so $\ensuremath{\mathcal{N}}^*_{Y,X}\simeq E\mid_X(-s)$ and the first statement follows by adjunction. For the last statement, use \ref{jac}.
\end{proof}
\begin{notations} Keep the assumptions of Lemma \ref{Ysubcanonical} and denote by $\Sigma_1$ and $\Sigma_2$ two general partials of $\Sigma$. Since $dim(X\cap Z)=n-4$, $C:= X\cap \Sigma_1\cap \Sigma_2$ is a subcanonical, l.c.i. scheme containing $Y$ such that $\ensuremath{\mathcal{N}}_{C,X}\simeq \ensuremath{\mathcal{O}} _X(s-1) \oplus \ensuremath{\mathcal{O}}_X(s-1)$. We have $\omega_C\simeq \ensuremath{\mathcal{O}}_C(e+2s-2)$. The scheme $C$ is a complete intersection in $X$ which links $Y$ to another subscheme.\\
\end{notations}
\begin{lemma}
\label{R}
With notations as in Lemma \ref{Ysubcanonical}, denote by $R$ the residual to $Y$ with respect to $C$. Then $C=Y\cup R$ is a geometric linkage and $\Delta:= R\cap Y$ is a Cartier divisor of $Y$ such that: $\ensuremath{\mathcal{I}} _{\Delta , Y}\simeq \ensuremath{\mathcal{O}}_Y (-e-n+1)$.\\
Furthermore: $d(\Delta)\leq (s-1)d(X)((s-1)^2-d(Z))$ and:\\
$d(Z)(e+n+1) \leq (s-1)[(s-1)^2-d(Z)]$.
\end{lemma}
\begin{proof}
Denote by $Y_{red}$ the support of $Y$ and set
$Y_{red}=Y_1 \cup \dots \cup Y_r$ where $Y_i$, $1\leq i \leq r$, are the irreducible components of $Y_{red}$.
Furthermore, denote by $P_i$ the general point of $Y_i$. Since $Y$ is l.c.i. in $X$ and since $\ensuremath{\mathcal{I}} _{Y, X}(s-1)$
is globally generated by the partials of $\Sigma$, we can find two general elements in $Jac(\Sigma)$ generating the fibers of
$\ensuremath{\mathcal{N}}^* _{Y, X}(s-1)$ at each $P_i$, $1\leq i\leq r$. This implies that $R\cup Y$ is a geometric linkage.
\par Now consider the local Noether sequence (exact sequence of liaison):
$$
0\to \ensuremath{\mathcal{I}}_C \to \ensuremath{\mathcal{I}} _R \to \omega_Y \otimes \omega_C^{-1}\to 0.
$$
we get $$
\omega_Y \otimes \omega_C^{-1}\simeq\frac{\ensuremath{\mathcal{I}}_R}{\ensuremath{\mathcal{I}}_C}\simeq \frac{\ensuremath{\mathcal{I}}_R +\ensuremath{\mathcal{I}}_Y}{\ensuremath{\mathcal{I}}_C+\ensuremath{\mathcal{I}} _Y}\simeq
\frac{\ensuremath{\mathcal{I}}_{\Delta}}{\ensuremath{\mathcal{I}}_Y}\simeq \ensuremath{\mathcal{I}} _{\Delta , Y}$$ (the second isomorphism follows from the geometric linkage, since $\ensuremath{\mathcal{I}}_R\cap\ensuremath{\mathcal{I}}_Y =\ensuremath{\mathcal{I}}_C$)
hence
$\omega_Y \otimes \omega_C^{-1}\simeq \ensuremath{\mathcal{O}}_Y(-e-n+1)\simeq \ensuremath{\mathcal{I}} _{\Delta , Y}$ and we are done.\\
For the last statement, the scheme $\Delta \subset R $ is the base locus of the jacobian system of
$\Sigma $ in $R$, hence $\Delta \subset \tilde{\Sigma}\cap R $ with $\tilde{\Sigma }$ a general element of $Jac(\Sigma)$
and $d(\Delta )\leq d(R)\cdot (s-1)$. We conclude since $d(R)\cdot (s-1)=(d(C)- d(Z))\cdot (s-1)=
((s-1)^2d(X)-d(Z)d(X))\cdot (s-1)$. The last inequality follows from $d(\Delta )=d(Y)\cdot (e+n+1)=d(X)\cdot d(Z) \cdot (e+n+1)$.
\end{proof}
Now we can conclude the proof of Theorem \ref{thmSpecW} (and hence of Theorem \ref{thmA}).
\begin{proof}[Proof of Theorem \ref{thmSpecW}]
It is enough to prove the theorem for $s$ minimal. Let $\Sigma$ be an hypersurface of minimal degree containing $X$, we set $s:=d(\Sigma )$ and $d:=d(X)$. According to Lemma \ref{l1} we distinguish two cases.\\
1) $dim(X \cap Sing(\Sigma ))=n-3$. In this case, by Lemma \ref{l1}, we have $d \leq (s-2)(n-1+e)+1$. On the other hand $d(Z) = d-s(e+n+1-s)$ (see \ref{not2}). It follows that: $d(Z) \leq (s-1)^2 -2(n-1+e)$. Since $d(Z) \geq n-1$ by \cite{R}, we get: $\frac{(s-1)^2-n+1}{2}-n+1 \geq e$. One checks (using $s \geq n-1$) that this implies the bound of Theorem \ref{thmSpecW}.\\
2) $dim(X \cap Sing(\Sigma ))=n-4$. By the last inequality of Lemma \ref{R}, $e \leq (s-1)[\frac{(s-1)^2}{d(Z)}-1]-n+1$. Since $d(Z) \geq n-1$ by \cite{R}, we get the result.
\end{proof}
\section{The speciality theorem.}
In this section we will refine the proof of Theorem \ref{thmSpecW} for $n=5$ in order to prove Theorem \ref{mainthm} of the introduction. For this we have to assume only that $X$ is subcanonical, which, of course, is weaker than assuming $Pic(X) \simeq \mathbb{Z} .H$. The assumption $Pic(X) \simeq \mathbb{Z} . H$ is used just to apply the last statement of Lemma \ref{l1} in order to settle the case $dim(X \cap Sing(\Sigma ))=n-3$. Here instead we will argue like in the proof of the case $dim(X \cap Sing(\Sigma ))=n-4$, but working modulo the divisorial part (in $X$) of $X \cap Sing(\Sigma )$; this will introduce some technical complications, but conceptually, the proof runs as before. Since the proof works for every $n \geq 5$ we will state it in this generality giving thus an alternative proof of Theorem \ref{thmSpecW}.
\begin{notations}
\label{not3}
In this section, with assumptions and notations as in \ref{not2}, we will assume furthermore that $dim(X\cap Z)=n-3$ and will denote by $L$ the dimension $n-3$ part of $X\cap Z\subset X$; moreover we set $\ensuremath{\mathcal{L}} = \ensuremath{\mathcal{O}} _X(L) $.
\par\noindent
Set $Y':=res_L(X\cap Z)$, we have $\ensuremath{\mathcal{I}} _{Y', X}:=(\ensuremath{\mathcal{I}} _{X\cap Z , X}:\ensuremath{\mathcal{I}} _{L , X})$. Since we have:
$$0 \to \ensuremath{\mathcal{O}} \to E\mid_X(-e-n-1+s)\otimes \ensuremath{\mathcal{L}} ^* \to \ensuremath{\mathcal{I}} _{Y' , X}(-e-n-1+2s)\otimes (\ensuremath{\mathcal{L}} ^*)^2 \to 0$$
it follows that $\ensuremath{\mathcal{N}}^*_{Y',X}\simeq E\mid_X(-s)\otimes \ensuremath{\mathcal{L}}$ and $Y'$ is a l.c.i. scheme with
$\omega_{Y'}\simeq\ensuremath{\mathcal{O}}_Y(2s-n-1)\otimes (\ensuremath{\mathcal{L}}^*)^2$.
\par \noindent
Denote by $\Sigma_1$ and $\Sigma_2$ two general partials of $\Sigma$. Since $X \cap Z = X \cap Sing(\Sigma )$, $\Sigma_1$ and $\Sigma_2$ both contain $L$. Let $C':=res_L(X\cap \Sigma_1\cap \Sigma_2)$. Since $\ensuremath{\mathcal{N}}_{C',X}\simeq (\ensuremath{\mathcal{O}} _{C'}(s-1) \oplus \ensuremath{\mathcal{O}}_{C'}(s-1))\otimes \ensuremath{\mathcal{L}}^*$. We have
$\omega_{C'}\simeq \ensuremath{\mathcal{O}}_{C'}(e+2s-2)\otimes (\ensuremath{\mathcal{L}}^*)^2$.
\end{notations}
\begin{lemma}
\label{R2}
Denote by $R'$ the residual to $Y'$ with respect to $C'$. Then $C'=Y'\cup R'$ is a geometric linkage and $\Delta':= R'\cap Y'$ is a Cartier divisor of $Y'$ such that:
$\ensuremath{\mathcal{I}} _{\Delta' , Y'}\simeq \ensuremath{\mathcal{O}}_{Y'}(-e-n+1)$.
\end{lemma}
\begin{proof}
We argue as in the proof of Lemma \ref{R}: denote by $Y_{red}'$ the support of $Y'$, set
$Y_{red}'=Y_1' \cup \dots \cup Y_r'$, where $Y_i'$, $1\leq i \leq r$, are the irreducible components of $Y_{red}'$, and denote by $P_i$ the general point of $Y_i'$. Choose the partials $\Sigma_1$ and $\Sigma _2$ in such a way that they generate the ideal sheaf of $X\cap Z$ at each $P_i$, $1\leq i\leq r$. In order to check that $R'\cup Y'$ is a geometric linkage we only need to consider the components contained in $L$. Consider a point $P_i\in L$. Since
$L\subset X\cap Z \subset \Sigma_1 \cap \Sigma _2$, the local equations of $X\cap Z$ in
$(\ensuremath{\mathcal{I}} _{Y, X}(s-1))_{P_i}$ have the form $(lf,lg)$ where $l$ is the equation of $L$,
$lf$ is the equation of $\Sigma_1$ and $lg$ the equation of $\Sigma_2$.
Since $Y':=res_L(X\cap Z)$ and $C':=res_L(X\cap \Sigma_1\cap \Sigma_2)$ then the ideals of both $Y'$ and $C'$ at $P_i$ are equal to $(f,g)\subset (\ensuremath{\mathcal{I}} _{Y, X}(s-1))_{P_i}$.
This implies that $R'\cup Y'$ is a geometric linkage and the remainder of the proof is similar as above.
\end{proof}
\begin{lemma}
\label{lemmaN-3}
Let $\Sigma \subset \mathbb{P}^n$, $n\geq 5$, be an hypersurface of degree $s$ containing $X$, a smooth variety with $dim(X)=n-2$ and $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$. Assume $\sigma _{\Sigma}$ vanishes in codimension two and $dim(X \cap Sing(\Sigma ))=n-3$ (see \ref{not2}). Then $e < s-n$ or $d(Z)\cdot (e+n+1) \leq (s-1)[(s-1)^2-d(Z)]$.
\end{lemma}
\begin{proof}
We keep the notations of \ref{not3}. Notice that the fundamental cycle of $Y'$ in $\textbf{A} _{n-4}(X)$ is
$$c_2(E\mid_X(-e-n-1+s)\otimes\ensuremath{\mathcal{L}}^*)=d(Z)H^2 + (e+n+1-2s)H\cap L +L^2\:\:(+)$$ ($H$ represents the hyperplane class and $\cap $ denotes the \textit{cap product} in $\textbf{A}_*(X)$). By abuse of notation, for any $A\in \textbf{A} _{i}(X)\subset \textbf{A}_*(X)$ we denote by
$d(A)\in \mathbb{Z}$ the
\textit{degree} of $A$: $d(A):= d(A\cap H^i)$, $A\cap H^i\in A_0(\mathbb{P}^n )\simeq \mathbb{Z}$. For any closed subscheme $\Gamma \subset X$ we still denote by $\Gamma \in \textbf{A} _{*}(X)$ the \textit{fundamental cycle}
of $\Gamma $ (\cite{F} 1.5).\\
We claim that:
$$d(\Delta')\leq (s-1)d(X)((s-1)^2-d(Z))-[(s-1)(e+n-1)+(s-1)^2-d(Z)]d(H^2\cap L)+$$
$$+(e+n-1)d(H\cap L^2)\:\:(*)
$$
Assume the claim for a while and let's show how to conclude the proof.
Combining \ref{R2} with $(*)$ we get
$$
d(\Delta')=d(Y')(e+n-1)\leq
$$
$$
\leq (s-1)d(X)((s-1)^2-d(Z))-[(s-1)(e+n-1)+(s-1)^2-d(Z)]d(H^2\cap L)+$$
$$+(e+n-1)d(H\cap L^2)
$$ and by $(+)$ above
$$
d(\Delta')=(e+n-1)d(H\cap(d(Z)H^2 + (e+n+1-2s)H\cap L +L^2))\leq
$$
$$
\leq (s-1)d(X)((s-1)^2-d(Z))-[(s-1)(e+n-1)+(s-1)^2-d(Z)]d(H^2\cap L)+$$
$$+(e+n-1)d(H\cap L^2).
$$
If $e<s-n$ we are done, so we can assume $e+n\geq s$.
We have
$$d(X)d(Z)(e+n-1)\leq (s-1)d(X)((s-1)^2-d(Z))+$$
$$+[(e+n-1)(s-e-n)-(s-1)^2+d(Z)]d(L)$$
To conclude it is enough to check that $(e+n-1)(s-e-n)-(s-1)^2+d(Z)\leq 0$. Since $d(Z)=d-s(e+n+1-s)$ (see \ref{not2}) and since $d \leq s(n-1+e)+1$ by Lemma \ref{l1}, this follows from: $s(n-1+e)+1 \leq s(e+n+1-s)+(s-1)^2+(e+n-s)(e+n-1)$. A short computation shows that this is equivalent to $0 \leq (e+n-s)(e+n-1)$, which holds thanks to our assumption $e+n\geq s$.\\
{\it Proof of the claim:}\\
Denote by $\mid M \mid $ the moving part of the Jacobian system of $\Sigma $ in $X$ and by $\ensuremath{\mathcal{M}} $ the corresponding line bundle.
The scheme $\Delta '$ is the base locus of $\mid M \mid_{R'}$ hence
$\Delta '\subset \tilde{M}\cap R'$ where $\tilde{M}$ is a general element of $\mid M \mid $. We have
$$
d(\Delta ')\leq d(\tilde{M}\cap R')=d(c_1(\ensuremath{\mathcal{M}} _{R'})).
$$
\par
In order to prove the statement we need to calculate the cycle
$c_1(\ensuremath{\mathcal{M}} _{R'})\in \textbf{A} _{n-5}(X)$.
First of all we calculate the fundamental cycle of $R'$ in $\textbf{A} _{n-4}(X)$:
$$R'\sim C'-Y'\sim ((s-1)H-L)^2-(d(Z)H^2 + (e+n+1-2s)H\cap L +L^2)=
$$
$$
=((s-1)^2-d(Z))H^2-(e+n-1)H\cap L.$$
Finally, the cycle $c_1(\ensuremath{\mathcal{M}} _{R'})\in \textbf{A} _{n-5}(X)$ is:
$$c_1(\ensuremath{\mathcal{M}} _{R'})\sim ((s-1)H-L)\cap R'\sim
$$
$$
\sim (s-1)((s-1)^2-d(Z))H^3-((s-1)(e+n-1)+(s-1)^2-d(Z))H^2\cap L+(e+n-1)H\cap L^2.$$
The claim follows from:
$$
d(\Delta')\leq d(c_1(\ensuremath{\mathcal{M}} _{R'}))=
$$
$$
d((s-1)((s-1)^2-d(Z))H^3-((s-1)(e+n-1)+(s-1)^2-d(Z))H^2\cap L+(e+n-1)H\cap L^2)
$$
\end{proof}
Now we can state the improved version of Theorem \ref{thmSpecW}:
\begin{theorem}[Speciality theorem]
\label{thmSpec}
Let $X \subset \mathbb{P}^n$, $n \geq 5$, be a smooth variety with $dim(X)=n-2$ and $\omega _X \simeq \ensuremath{\mathcal{O}} _X(e)$. Let $\Sigma \subset \mathbb{P}^n$ denote an hypersurface of degree $s$ containing $X$. If $X$ is not a complete intersection, then:
$$e \leq \frac{(s-1)[(s-1)^2-n+1]}{n-1}-n+1.$$
\end{theorem}
\begin{proof}
It is sufficient to prove the theorem for $s$ minimal. We distinguish two cases (see Lemma \ref{l1}).\\
If $dim(X \cap Sing(\Sigma ))=n-4$, then we argue exactly as in the proof of Theorem \ref{thmSpecW}.\\
If $dim(X \cap Sing(\Sigma ))=n-3$, then by Lemma \ref{lemmaN-3} we have $e < s-n$ or $d(Z)\cdot (e+n+1) \leq (s-1)[(s-1)^2-d(Z)]$. In the first case we conclude using $s \geq n-1$ (Remark \ref{snb}) and, in the second case, we conclude using the fact that $d(Z) \geq n-1$ by \cite{R}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainthm}]
As explained in the Section 1, it follows from Theorem \ref{thmSpec} and Lemma \ref{l1}.
\end{proof}
\section{Proofs of \ref{sigma} and of \ref{sigmahilb}.}
\begin{proof}[Proof of Theorem \ref{sigma}] If $X$ is not a complete intersection, this follows from Theorem \ref{mainthm}. Assume $X$ is a complete intersection. Let $F$ and $G$ ($d(F)=f,d(G)=g$) be two generators of the ideal of $X$. Then the equation of $\Sigma$ has the form $PF+QG$. But since $\Sigma$ is irreducible and since $X \cap Sing(\Sigma )\neq \emptyset$, then both $P$ and $Q$ have degree $>0$. This implies $s-1 \geq f$ and $s-1 \geq g$ hence $d=fg \leq (s-1)^2 < s\frac{(s-1)((s-1)^2-n+1)}{n-1}+1$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{sigmahilb}] The argument goes as in the proof of \cite{CDG} Lemma 4.3: by
\cite{K} the coefficients of the Hilbert polynomial of $X$ can be bounded in terms of the degree $d$ hence in terms of $s$, by \ref{sigma}, and there are finitely many components of $\mathcal{H}ilb(\Sigma)$ containing smooth varieties of dimension $n-2$.
\end{proof}
\section{Proof of \ref{n=s=5} and \ref{n=s=6}}
\begin{notations} By \cite{EF}, we may assume that $X$ lies on an irreducible hypersurface $\Sigma$ of degree $n$, $5 \leq n \leq 6$ and that $h^0(\ensuremath{\mathcal{I}} _X(n-1))=0$. The assumption of \ref{not2} is satisfied and by Lemma \ref{R} and Lemma \ref{lemmaN-3}, we get: $e <s-n$ or $d(Z)\cdot (e+n-1) \leq (s-1)[(s-1)^2-d(Z)]$. The first case cannot occur in our situation since we may assume $e \geq 3$ if $n=5$ by \cite{BC} (resp. $e \geq 8$ if $n=6$ by \cite{HS} Cor. 6.2). So we may assume $d(Z)\cdot (e+n+1) \leq (s-1)[(s-1)^2-d(Z)]\: (*)$. Now if $e \geq E$, from $(*)$ we get: $d(Z) \leq \frac{(s-1)^3}{E+n+s}\:(+)$.
\end{notations}
\begin{proof}[Proof of Theorem \ref{n=s=5}]
Applying $(+)$ with $n=s=5$ and $E=3$ we get $d(Z) \leq 4$, hence $d(Z)=4$ (\cite{R}). Arguing as in \cite{EF} Lemma 2.6, every irreducible component of $Z_{red}$ appears with multiplicity, so $Z$ is either a multiplicity four structure on a linear space or a double structure on a quadric. In both cases it is a complete intersection: in the first case this follows from \cite{Mano} and in the second one, from the fact that $Z$ is given by the Ferrand construction since $emdim(Z_{red})\leq 4$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{n=s=6}]
Applying $(+)$ with $n=s=6$ and $E=8$, we get $d(Z) \leq 6$. If $d(Z)=6$, $(*)$ implies $e \leq 8$. So $e=8$ and $6=d(Z)=d-6e-6$. It follows that $d=60$ and we conclude with \cite{EF} Theorem 1.1. So $d(Z) \leq 5$, hence (\cite{R}), $d(Z)=5$. Now $(*)$ yields $e \leq 13$. Moreover $5=d(Z)=d-6e-6$ yields $d=6e+11$. If $e \leq 10$, again, we conclude with Theorem 1.1 of \cite{EF}. We are left with the following possibilities: $(d,e)=(77,11),(83,12),(89,13)$. We conclude with \cite{HS} (list on page 216).
\end{proof}
\end{document}
\begin{document}
\pagestyle{plain}
\title{Representations of the Infinite-Dimensional Affine Group}
\date{}
\author{
\textbf{Yuri Kondratiev}\\
Department of Mathematics, University of Bielefeld, \\
D-33615 Bielefeld, Germany,\\
Dragomanov University, Kyiv, Ukraine\\
}
\begin{abstract}
We introduce an infinite-dimensional affine group and construct its irreducible unitary representation. Our approach follows the one used by Vershik, Gelfand and Graev for the diffeomorphism group, but with modifications made necessary by the fact that the group does not act on the phase space. However it is possible to define its action on some classes of functions.
\end{abstract}
\maketitle
\vspace*{3cm}
{\bf Key words: } affine group; configurations; Poisson measure; ergodicity
{\bf MSC 2010}. Primary: 22E66. Secondary: 60B15.
\section{Introduction}
Given a vector space $V$ the affine group can be described concretely as the semidirect product of $V$ by $\mathrm{GL}(V)$, the general linear group of $V$:
$$
\mathrm{Aff} (V)=V \rtimes \mathrm{ GL} (V).
$$
The action of $\mathrm{GL}(V)$ on $V$ is the natural one (linear transformations are automorphisms), so this defines a semidirect product.
Affine groups play an important role in geometry and its applications, see, e.g., \cite{Ar,Ly}. Several recent papers \cite{AJO,AK,EH,GJ,Jo,Ze} are devoted to representations of the real, complex and $p$-adic affine groups and their generalizations, as well as diverse applications, from wavelets and Toeplitz operators to non-Abelian pseudo-differential operators and $p$-adic quantum groups.
In the particular case of the field $V= \X$ the group $\mathrm{Aff}(\X)$ is defined as follows.
Consider a function $b:\X \to \X$ which is a step function on $\X$.
Take another matrix-valued function $A:\X\to L(\X) $ such that $A(x)=\mathrm{Id} +A_0(x)$, $A(x)$ is invertible, $A_0$ is a matrix-valued step function
on $\X$. Introduce an
infinite dimensional affine group
$\Aff (\X)$ that is the set of all pairs $g=(A,b)$ with component satisfying assumptions above. Define the group
operation
$$
g_2 g_1= (A_2,b_2) (A_1, b_1) = (A_1 A_2, b_1 +A_1 b_2).
$$
The unity in this group is $e=(\mathrm{Id} ,0)$.
For $g\in \Aff(\X)$ holds $g^{-1}= (A^{-1}, -A^{-1}b)$.
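Indeed, with the multiplication rule above one checks directly that
$$
g\, g^{-1}=(A,b)(A^{-1},-A^{-1}b)=(A^{-1}A,\, -A^{-1}b+A^{-1}b)=(\mathrm{Id},0)=e,
$$
$$
g^{-1} g=(A^{-1},-A^{-1}b)(A,b)=(AA^{-1},\, b+A(-A^{-1}b))=(\mathrm{Id},0)=e.
$$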
It is clear that for the step mappings we use, these definitions are correct.
Our aim is to construct irreducible representations of $\Aff (\X)$. As a rule, only special classes of irreducible representations can be constructed for infinite-dimensional groups. For various classes of such groups, special tools were invented; see \cite{Is,Ko} and references therein.
We will follow an approach by Vershik--Gelfand--Graev \cite{VGG75} proposed in the case of
the group of diffeomorphisms. A direct application of this approach meets certain difficulties
related to the impossibility of defining the action of the group $\Aff (\X)$ on a phase space
as in \cite{VGG75}. A method to overcome this problem is the main technical step of the present paper.
We would like to mention that a similar approach was already used in \cite{PAFF} for the construction
of the representation of the p-adic infinite-dimensional affine group.
\section{Infinite dimensional affine group}
In our definitions and studies of vector and matrix valued functions on $\X$ we
will use as basic functional spaces collections of step mappings.
It means that each such mapping is a finite sum of indicator functions with measurable
bounded supports with constant vector/matrix coefficients. Such spaces of functions on
$\X$ are rather unusual in the framework of infinite dimensional groups but we will try
to show that their use is natural for the study of affine groups.
For $x\in\X$ consider the section $G_x= \{g(x)\; |\; g\in \Aff(\X)\}$.
It is an affine group with constant coefficients. Note that for a ball $B_N (0) \subset \X$ with the radius $N$
centered at zero we have $g(x)= (1,0), x\in B^c_N(0)$.
Define the action of $g$ on a point $x\in\X$ as
$$
gx= g(x)x = A(x)^{-1} (x+b(x)).
$$
Denote the orbit $O_x=\{gx| g\in G_x\}\subset \X$.
Actually, as a set $O_x=\X$ but elements of this set are
parametrized by $g\in G_x$.
For any element $y\in O_x$ and $h\in G_x$ we can define
$hy= h(gx)= (hg)x\in O_x$. It means that we have the group
$G_x$ action on the orbit $O_x$.
It gives
$$
(g_1g_2)(x) x= g_1(x)( g_2(x)x)
$$
that corresponds to the group multiplication
$$
g_2 g_1= (A_2,b_2) (A_1, b_1) = (A_1 A_2, b_1 +A_1 b_2)
$$
considered in the given point $x$.
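For convenience we record the elementary verification of this identity (all coefficients being evaluated at the fixed point $x$):
$$
g_1(x)\bigl(g_2(x)x\bigr)=A_1^{-1}\bigl(A_2^{-1}(x+b_2)+b_1\bigr)
=(A_2A_1)^{-1}\bigl(x+b_2+A_2b_1\bigr)=\bigl(g_1g_2\bigr)(x)\,x,
$$
since $g_1g_2=(A_2A_1,\,b_2+A_2b_1)$ by the multiplication rule above.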
\begin{Remark}
The situation we have is quite different w.r.t. the standard
group of motions on a phase space. Namely,
we have one fixed point $x\in\X$ and the section group
$G_x$ associated with this point. Then we have the motion of
$x$ under the action of $G_x$. It gives the group action on the
orbit $O_x$.
\end{Remark}
We will use the configuration space $\Ga(\X)$, i.e., the set of all locally finite
subsets of $\X$.
Each configuration may be identified with the measure
$$
\gamma(dx) = \sum_{x\in\gamma} \delta_x
$$
which is a positive Radon measure on $\X$: $\gamma\in \M(\X)$.
We define the vague topology on $\Ga(\X)$ as the weakest topology for
which all mappings
$$
\Ga(\X) \ni \ga \mapsto <f,\gamma>\in \R,\;\; f\in C_0(\X)
$$
are continuous. The Borel $\sigma$-algebra for this topology denoted
$\B(\Ga(\X))$.
For $\ga\in \Ga(\X)$, $\ga=\{x\}\subset \X$ define
$g\gamma$ as a motion of the measure $\ga$:
$$
g\ga=\sum_{x\in\gamma} \delta_{g(x)x}\in \M(\X).
$$
Here we have the group action of $\Aff(\X)$ produced by individual transformations
of points from the configuration. Again, as above, we move a fixed configuration using
previously defined actions of $G_x$ on $x\in\ga$.
Note that $g\gamma$ is no longer necessarily a configuration. More precisely, for some $B_N(0) $
the set $(g\ga)_N= g\ga\cap B_N^c(0)$ is a configuration in $B^c_N(0) $ but the finite part
of $g\ga$ may include multiple points.
For any $f\in \mathcal D(\X,\C)$ we have corresponding cylinder function on $\Ga(\X)$:
$$
L_f(\ga)= <f,\ga > = \int_{\X} f(x)\ga(dx) = \sum_{x\in \ga} f(x).
$$
Denote ${\mathcal P}_{cyl}$ the set of all cylinder polynomials generated by such functions.
More generally, consider functions of the form
\begin{equation}
\label{cyl}
F(\ga)= \psi(<f_1,\ga>,\dots, <f_n,\ga>),\; \ga\in\Ga(\X), f_j\in \mathcal D(\X), \psi\in C_b(\R^n).
\end{equation}
These functions form the set $\mathcal F_b(\Ga(\X))$ of all bounded cylinder functions.
For any clopen set $\Lambda \in \mathcal{O}_b(\X)$ (also called a finite volume) denote
$\Ga(\Lambda)$ the set of all (necessarily finite) configurations
in $\La$. We have as before the vague topology on this space and
the Borel $\sigma$-algebra $\B(\Ga(\La))$ is generated by functions
$$
\Ga(\La)\ni\ga \mapsto <f,\ga>\in\R
$$
for $f\in C_0 (\La)$. For any $\La\in \mathcal{O}_b(\X)$ and $T\in \B(\Ga(\La))$
define a cylinder set
$$
C(T)=\{\ga\in\Ga(\X)\;|\; \ga_{\La}=\ga \cap \La \in T\}.
$$
Such sets form a $\sigma$-algebra $\B_{\La}(\Ga(\X))$ of cylinder sets
for the finite volume $\La$. The set of bounded functions on $\Ga(\X)$ measurable
w.r.t. $\B_{\La}(\Ga(\X))$ we denote by $B_{\La}(\Ga(\X))$; these are cylinder functions
on $\Ga(\X)$. As a generating family for this set we can use the functions of the form
$$
F(\ga)= \psi(<f_1,\ga>,\dots, <f_n,\ga>),\; \ga\in\Ga(\X), f_j\in C_0(\La), \psi\in C_b(\R^n).
$$
For so-called one-particle functions $f:\X\to\R, f\in\mathcal D(\X)$ consider
$$
(gf)(x)= f(g(x) x), x\in \X.
$$
Then $gf\in \mathcal D(\X)$. Thus,
we have the group action
$$
\mathcal D(\X)\ni f \mapsto gf\in \mathcal D(\X),\;\;g\in\Aff
$$
of the infinite dimensional group $\Aff$ in the space of functions
$\mathcal D(\X)$.
Note that due to our definition, we have
$$
<f, g\ga> = <gf,\ga>
$$
and it is reasonable to define for cylinder functions (\ref{cyl}) the action of the group $\Aff$
as
$$
(V_g F)(\ga)= \psi(<gf_1,\ga>,\dots, <gf_n,\ga>).
$$
Obviously $V_g: \mathcal F_b (\Ga(\X))\to \mathcal F_b(\Ga(\X))$.
Denote by $m(dx)$ the Haar measure on $\X$. The dual transformation to the one-particle motion is defined
via the following relation
$$
\int_{\X} f(g(x)x) m(dx)=\int_{\X} f(x) g^\ast m(dx)
$$
provided such a measure $g^\ast m$ on $\X$ exists.
\begin{Lemma}
\label{gm}
For each $g\in \Aff$
$$
g^\ast m(dx)= \rho_{g}(x) m(dx)
$$
where $\rho_g = 1_{B_R^c(0) } + r_g^0,\;\; r_g^0\in \mathcal D(\X,\R_+).$
Here as above
$$
B_R^c(0)= \{x\in\X\;|\; |x|_p \geq R\}.
$$
\end{Lemma}
\begin{proof}
We have the following representations for the coefficients of $g(x)$:
$$
b(x)= \sum_{k=1}^{n} b_k 1_{B_k}(x) ,
$$
$$
a(x)= \sum_{k=1}^{n} a_k 1_{B_k}(x) + 1_{B^c_R(0)}(x)
$$
where $B_k$ are certain balls in $\X$.
Then
$$
\int_{\X} f(g(x)x) m(dx)= \sum_{k=1}^n \int_{B_k} f(\frac{x+b_k}{a_k}) m(dx) + \int_{B^c_R (0)} f(x) m(dx) =
$$
$$
\sum_{k=1}^{n} \int_{C_k} f(y) |a_k|_p m(dy) + \int_{B^c_R(0)} f(y) m(dy),
$$
where
$$
C_k= a_k^{-1}(B_k + b_k).
$$
Therefore,
$$g^\ast m= (\sum_{k=1}^n |a_k|_p 1_{C_k} + 1_{B^c_R(0)}) m.
$$
Note that informally we can write
$$
(g^\ast m)(dx) = dm(g^{-1}x).
$$
\end{proof}
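As a concrete illustration (ours; purely for the example we take $\X=\mathbb{Q}_p$ with $m$ normalized so that $m(\mathbb{Z}_p)=1$), let $g(x)=(p,0)$ for $x\in\mathbb{Z}_p$ and $g(x)=(1,0)$ otherwise, so that $gx=p^{-1}x$ on $\mathbb{Z}_p$ and $gx=x$ elsewhere. Here $n=1$, $a_1=p$, $b_1=0$, $C_1=p^{-1}\mathbb{Z}_p$, and the lemma gives
$$
g^\ast m=\Big(\tfrac{1}{p}\,1_{p^{-1}\mathbb{Z}_p}+1_{\mathbb{Q}_p\setminus\mathbb{Z}_p}\Big)\,m,
$$
in agreement with the direct change of variables $\int_{\mathbb{Z}_p} f(p^{-1}x)\,m(dx)=\tfrac{1}{p}\int_{p^{-1}\mathbb{Z}_p} f(y)\,m(dy)$.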
Note that by duality we obtain a group action on the Haar measure. Namely,
for $f\in \mathcal D(\X)$ and $g_1, g_2\in \Aff$
$$
\int_{\X} (g_2 g_1) f(x) m(dx)= \int_{\X} g_1 f (x) (g_2^\ast m) (dx) =
$$
$$
\int_{\X} f(x) (g_1^\ast g_2^\ast m)(dx)= \int_{\X} f(x) ((g_2 g_1)^\ast m)(dx).
$$
In particular
$$
(g^{-1})^\ast (g^\ast m)= m.
$$
\begin{Lemma} Let $F\in B_\La (\Ga(\X))$ and let $g\in\Aff $ have the form
$g(x)=(1, h1_{B}(x))$ for some $h\in \X$ and $B\in \mathcal{O}_b(\X)$ such that $\La\subset B$.
Then
$$
V_gF\in B_{\La -h} (\Ga(\X)).
$$
\end{Lemma}
\begin{proof}
Due to the formula for the action $V_gF$ we need to analyze the support
of the functions $f_j (x+h1_B(x))$ with $\supp f_j\subset \La$. If $x\in B^c$ then
$x\in \La^c$ and therefore $f_j (x+h1_B(x))=f_j(x)=0$. For $x\in B$
we have $f_j(x+h)$, and this value may be nonzero only for $x+h\in \La$,
i.e., $\supp g f_j \subset \La- h$.
\end{proof}
Denote by $\pi_m$ the Poisson measure on $\Ga(\X)$ with intensity measure $m$.
\begin{Lemma}
\label{V}
For all $F \in {\mathcal P}_{cyl}$ or $F\in \mathcal F_b (\Ga(\X))$ and all $g\in \Aff$ it holds that
$$
\int_{\Ga(\X)} V_g F d\pi_m = \int_{\Ga(\X)} Fd\pi_{g^\ast m} .
$$
\end{Lemma}
\begin{proof}
It is enough to show this equality for exponential functions
$$
F(\ga)= e^{<f,\ga>},\;\; f\in\mathcal D(\X).
$$
We have
$$
\int_{\Ga(\X)} V_g F d\pi_m = \int_{\Ga(\X)} e^{<gf, \ga>} d\pi_m(\ga)=
$$
$$
\exp[ \int_{\X} (e^{gf(x)} -1) dm(x)] = \exp[ \int_{\X} (e^{f(x)} -1) d(g^{\ast} m)(x)]=
$$
$$
\int_{\Ga(\X)} F d\pi_{g^\ast m }.
$$
\end{proof}
\begin{Remark} For all functions $F,G\in \mathcal F_b(\Ga(\X))$ a similar
calculation shows
$$
\int_{\Ga(\X)} V_g F \; Gd\pi_m = \int_{\Ga(\X)} F \; V_{g^{-1}} G d\pi_{g^\ast m} .
$$
\end{Remark}
Let $\pi_m$ be the Poisson measure on $\Ga(\X)$ with the intensity
measure $m$. For any $\La\in \mathcal{O}_b(\X)$ consider the distribution $\pi_m^\La$
of $\pi_m$ in $\Ga(\La)$ corresponding to the projection $\ga\to \ga_\La$.
It is again a Poisson measure $\pi_{m_\La}$ in $\Ga(\La)$ with the intensity
$m_\La$ which is the restriction of $m$ to $\La$. The infinite divisibility of
$\pi_m$ gives for $F_j\in B_{\La_j}(\Ga(\X)), j=1,2$ with $\La_1\cap \La_2=\emptyset$
$$
\int_{\Ga(\X)} F_1(\ga) F_2(\ga) d\pi_m(\ga)= \int_{\Ga(\X)} F_1(\ga) d\pi_m(\ga)
\int_{\Ga(\X)} F_2(\ga) d\pi_m(\ga)=
$$
$$
\int_{\Ga(\La_1)} F_1 d\pi^{\La_1}_m \int_{\Ga(\La_2)} F_2 d\pi^{\La_2}_m.
$$
\begin{Lemma}
For any $F\in B_\La(\Ga(\X))$ and $g=(1, h1_B)\in \Aff$ with $\La \cap (B+h)=\emptyset$ we have
$$
\int_{\Ga(\X)} (V_g F)(\ga) d\pi_m(\ga)= \int_{\Ga(\X)} F(\ga)d\pi_m(\ga).
$$
\end{Lemma}
\begin{proof}
Due to our calculations above we have
$$
\int_{\Ga(\X)} (V_gF)(\ga) d\pi_m(\ga)= \int_{\Ga(\X)} F(\ga) d\pi_{g^{\ast}m}(\ga)=
$$
$$
\int_{\Ga(\La)} F(\eta) d\pi^{\La}_{g^{\ast}m} (\eta) =\int_{\Ga(\La)} F(\eta) d\pi_{ (g^{\ast}m)_\La} (\eta).
$$
But we have shown
$$
(g^{\ast}m)(dx)= (1+ 1_{B+h}(x)) m(dx) = m(dx)
$$
for $x\in \La$, i.e., $(g^{\ast}m)_\La =m_\La$.
\end{proof}
\begin{Lemma}
\label{prod}
For any $F_1,F_2 \in \mathcal F_b(\Ga(\X))$ there exists $g\in\Aff$ such that
$$
\int_{\Ga(\X)} F_1 \; V_g F_2 d\pi_m = \int_{\Ga(\X)} F_1 d\pi_m \int_{\Ga(\X)} F_2 d\pi_m .
$$
\end{Lemma}
\begin{proof}
By definition, $F_j\in B_{\La_j}(\Ga(\X))$, $j=1,2$, for some $\La_1,\La_2 \in \mathcal{O}_b (\X)$.
Let us take $g=(1, h1_B)$ with the following assumptions:
$$
\La_2\subset B,\;\; \La_1\cap (\La_2-h) =\emptyset,\;\; \Lambda_2\cap (B+h) =\emptyset.
$$
Then, according to the previous lemmas,
$$
\int_{\Ga(\X)} F_1 V_g F_2 d\pi_m = \int_{\Ga(\X)} F_1 d\pi_m \int_{\Ga(\X)} F_2 d\pi_m .
$$
\end{proof}
\section{$\Aff$ and Poisson measures}
For $F\in {\mathcal P}_{cyl} $ or $F\in \mathcal F_b (\Ga(\X))$, we consider the motion of $F$ by $g\in \Aff$ given by
the operator $V_g$.
The operators $V_g$ have the group property,
defined pointwise: for any $\ga \in \Ga(\X) $
$$
(V_h (V_gF))(\ga)= (V_{hg} F) (\ga).
$$
This equality is the consequence of our definition of the group action of $\Aff$
on cylinder functions.
As above,
consider $\pi_m$, the Poisson measure on $\Ga(\X)$ with the intensity measure $m$. For the transformation
$V_g$ the dual object is defined as the measure $V^\ast_g \pi_m$ on $\Ga(\X)$ given by the relation
$$
\int_{\Ga(\X)} (V_gF) (\ga) d\pi_m(\ga) =\int_{\Ga(\X)} F(\ga) d(V^\ast_g \pi_m)(\ga),
$$
where $V^\ast_g \pi_m= \pi_{g^\ast m}$, see Lemma \ref{V}.
\begin{Corollary}
For any $g\in \Aff$ the Poisson measure $V_g^\ast \pi_m$ is absolutely continuous
w.r.t. $\pi_m$ with the Radon-Nikodym derivative
$$
R(g,\ga)= \frac{d\pi_{g^\ast m}(\ga)}{d\pi_{ m} (\ga)} \in L^1(\pi_m).
$$
\end{Corollary}
\begin{proof}
Note that the density $\rho_g = 1_{B_R^c(0) } + r_g^0,\;\; r_g^0\in \mathcal D(\X,\R_+),$ of $g^\ast m$ w.r.t. $m$
may vanish on some part of $\X$ and, therefore, the considered
Poisson measures need not be equivalent. Due to \cite{LS03}, the Radon-Nikodym derivative
$$
R(g,\ga)= \frac{d\pi_{g^\ast m}(\ga)}{d\pi_{ m} (\ga)}
$$
exists if
$$
\int_{\X} |\rho_g(x)-1| m(dx)= \int_{B_R(0)} |1-r_g^0 (x)| m(dx) <\infty.
$$
\end{proof}
\begin{Remark}
As in the proof of Proposition 2.2 from \cite{AKR} we have an explicit formula for $R(g,\ga)$:
$$
R(g,\ga)= \prod_{x\in\ga} \rho_g (x) \exp\Big(\int_{\X} (1-\rho_g(x))\, m(dx)\Big).
$$
The point-wise existence of this expression is obvious.
\end{Remark}
This fact gives us the possibility to apply the Vershik-Gelfand-Graev approach, realized by these authors in the case
of the diffeomorphism group.
Namely, for $F\in {\mathcal P}_{cyl}$ or $F\in \mathcal F_b(\Ga(\X))$ and $g\in \Aff$ introduce the operators
$$
(U_g F)(\ga) = (R(g^{-1} ,\ga) )^{1/2} (V_gF)(\ga).
$$
\begin{Theorem}
Operators $U_g,\; g\in \Aff$ are unitary in $L^2 (\Ga(\X), \pi_m)$ and give an irreducible representation
of $\Aff$.
\end{Theorem}
\begin{proof}
Let us check the isometry property of these operators.
We have using Lemmas \ref{V}, \ref{gm}
$$
\int_{\Ga(\X)} |U_g F|^2 d\pi_m = \int_{\Ga(\X)} |V_g F|^2(\ga) d\pi_{(g^{-1})^\ast m} (\ga)=
$$
$$
\int_{\Ga(\X)} |F(\ga)|^2 d\pi_{(gg^{-1})^\ast m}(\ga)= \int_{\Ga(\X)} |F(\ga)|^2 d\pi_{ m}(\ga).
$$
From Lemma \ref{V} it follows that $U_g^\ast = U_{g^{-1}}.$
We need only to check irreducibility, which shall
follow from the ergodicity of Poisson measures \cite{VGG75}. But to this end we need first of all to define the action of
the group $\Aff$ on sets from $\B(\Ga(\X))$. As we pointed out above, we cannot define this
action pointwise. But we can define the action of the operators $V_g$ on the indicators $1_A(\ga)$ for
$A\in \B(\Ga(\X))$. Namely, for a given $A$ we take a sequence of cylinder sets $A_n, n\in \N$ such that
$$
\pi_{m}(A\,\triangle\, A_n) \to 0, \quad n\to \infty.
$$
Then
$$
U_g 1_{A_n} =V_g 1_{A_n} (R(g^{-1} ,\cdot) )^{1/2} \to G (R(g^{-1} ,\cdot) )^{1/2} \in L^2(\pi_m), n\to\infty
$$
in $L^2(\pi_m)$. Each $V_g 1_{A_n} $ is an indicator of a cylinder set and
$$
V_g 1_{A_n} \to G \;\; \pi_m - a.s., n\to \infty.
$$
Therefore,
$G$ takes only the values $0$ and $1$ $\pi_m$-a.s. We denote this function by $V_g 1_A$.
For the proof of the ergodicity of the measure $\pi_m$ w.r.t. $\Aff$ we need to show the following fact:
for any $A\in \B(\Ga(\X))$ such that $\forall g\in\Aff\;\; V_g 1_A = 1_A\; \pi_m- a.s.$ holds $\pi_m(A)= 0$
or $\pi_m(A)= 1$.
First of all, we will show that for any pair of sets $A_1, A_2 \in \B(\Ga(\X))$ with $\pi_m(A_1)>0,\;\;
\pi_m(A_2) >0$ there exists $g\in\Aff$ such that
\begin{equation}
\label{ineq}
\int_{\Ga(\X)} 1_{A_1} V_g 1_{A_2} d\pi_m \geq \frac{1}{2} \pi_m(A_1) \pi_m(A_2).
\end{equation}
Because any Borel set may be approximated by cylinder sets, it is enough to show this fact
for cylinder sets. But for such sets due to Lemma \ref{prod} we can choose $g\in \Aff$ such that
$$
\int_{\Ga(\X)} 1_{A_1} V_g 1_{A_2} d\pi_m = \pi_m(A_1) \pi_m(A_2).
$$
Then, using an approximation argument, we obtain (\ref{ineq}).
To finish the proof of the ergodicity, we consider any $A\in\B(\Ga(\X))$ such that
$$
\forall g\in \Aff\; V_g1_A = 1_A \;\;\pi_m - a.s.,\;\; \pi_m(A)>0.
$$
We will show that then $\pi_m(A)= 1$. Assume $\pi_m(\Ga\setminus A) >0$.
Due to the statement above, there exists $g\in \Aff$ such that
$$
\int_{\Ga(\X)} 1_{\Ga\setminus A}\, V_g 1_A \, d\pi_m >0.
$$
But due to the invariance of $1_A$ this means
$$
\int_{\Ga(\X)} 1_{\Ga\setminus A} 1_A d\pi_m >0
$$
that is impossible.
\end{proof}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Gross Pitaevskii Equation with a Morse potential: bound states and evolution of wave packets}
\author[label1]{Sukla Pal\corref{cor1}}
\ead{sukla@bose.res.in}
\author[label2]{Jayanta K. Bhattacharjee}
\address[label1]{Department of Theoretical Physics, S.N.Bose National Centre For Basic Sciences, JD-Block, Sector-III, Salt Lake City, Kolkata-700098, India}
\address[label2]{Harish-Chandra Research Institute, Chhatnag road, Jhunsi, Allahabad-211019, India}
\cortext[cor1]{Corresponding author}
\begin{abstract}
We consider systems governed by the Gross Pitaevskii equation (GPE) with the Morse potential $V(x)=D(e^{-2ax}-2e^{-ax})$ as the trapping potential. For positive values of the coupling constant $g$ of the cubic term in the GPE, we find that the critical value $g_c$ beyond which there are no bound states scales as $D^{3/4}$ (for large $D$). Studying the quantum evolution of wave packets, we observe that for $g<g_c$ the initial wave packet needs a critical momentum for the packet to escape from the potential. For $g>g_c$, on the other hand, all initial wave packets escape from the potential and the dynamics is like that of a quantum free particle. For $g<0$, we find that there can be initial conditions for which the escaping wave packet propagates with very little change in width, i.e., it remains almost shape invariant.
\end{abstract}
\begin{keyword}
Critical coupling constant \sep Gaussian wave-packet \sep Quantum evolution \sep Threshold momentum \sep Quantum fluctuation \sep Threshold energy
\end{keyword}
\end{frontmatter}
\section{Introduction}
The wave packet dynamics of the usual Schr\"{o}dinger equation has been extensively studied for the free particle as well as for different kinds of external potential. For the free particle, if a term nonlinear in the wave function is added to the Schr\"{o}dinger equation, then one has the nonlinear Schr\"{o}dinger equation (NLSE)\cite{fg,n}. This too has been a subject of intense study and one has found various kinds of exact solutions like solitons, pulses, fronts etc.\cite{pp,1,2}. If one adds a potential, then one has the Gross Pitaevskii equation (GPE)\cite{a,b}, which has been very useful for describing Bose-Einstein condensation (BEC)\cite{8,10} in a trapped gas (it should be noted that experimentally BEC has only been observed in a trapped gas). The trapping potential makes the problem far more difficult and the only case which has been reasonably well studied is the simple harmonic potential. In an experiment, it is actually far more convenient to tune the trapping potential and observe different behaviors of the condensate. With this in mind, we wanted to study the GPE with a Morse potential so that one can achieve a much greater flexibility in adjusting the potential. In fact, the potential can have bound states or no bound state at all, depending upon certain parameters of the potential. The corresponding dynamics is treated both for $g>0$ and $g<0$, where $g$ is the coupling constant of the nonlinear term.
The primary importance of the GPE lies in the fact that it describes the dynamics of the condensate in the process of Bose-Einstein condensation (BEC)\cite{7,11}. The emergence of different kinds of solitons (dark, bright, grey) \cite{c}-\cite{e} is well known in the case of the GPE. Also, controllable soliton emission \cite{f} has been investigated for the GPE with a shallow trap and with negative interspecies interaction. In this article, we deal with the attractive and the repulsive interatomic interaction separately for the Morse potential, and the dynamics in both of these cases is explored both numerically and analytically.
The time dependent Gross Pitaevskii equation (GPE) in one spatial dimension has the following form:
\begin{eqnarray}
\label{eq1}
i\hbar\partial_t\psi =-\frac{\hbar^2}{2m}\nabla^2\psi+g|\psi|^2\psi+V_{ext}(x)\psi
\end{eqnarray}
\section{Bound state}
The stationary states of the Gross Pitaevskii equation (GPE) have the form $\psi(x,t)=e^{i\mu t/\hbar}u(x)$, so that $\mu u(x)=-\frac{\hbar^2}{2m}\nabla^2u+gu|u|^2+V_{ext}u$, and the lowest value of $\mu$ is the ground state energy $E_0$, which is obtained by minimizing the $E$ of Eq. (\ref{eq2}).
\begin{eqnarray}\label{eq2}
E[\psi(x)]=\int^{\infty}_{-\infty}[\frac{\hbar^2}{2m}|\nabla\psi|^2+V_{ext}|\psi|^2+\frac{g}{2}|\psi|^4]dx
\end{eqnarray}
where $V_{ext}(x)=D(e^{-2ax}-2e^{-ax})$. For $a\ll 1$, $V_{ext}(x)\sim x^2$, indicating the oscillator limit of the Morse potential. On the other hand, for $a\rightarrow\infty$, $V_{ext}(0)=-D$ and $V_{ext}(x)=0$ elsewhere for $x>0$, as is clearly depicted in Fig. (\ref{fig1}).
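To make the oscillator limit explicit (this expansion is ours, not part of the original text): for $|ax|\ll 1$,
$$
V_{ext}(x)=D\left(e^{-2ax}-2e^{-ax}\right)\simeq -D+Da^2x^2+O(a^3x^3),
$$
i.e., a harmonic well of depth $D$ with frequency $\omega=a\sqrt{2D/m}$.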
\begin{figure}
\caption{Schematic diagram of Morse potential depicting the two extreme conditions. Blue curve shows approximately the oscillator limit ($a\ll 1$) and the violet one (for $a\rightarrow\infty$) indicates the free behavior for region of $x>0$.}
\label{fig1}
\end{figure}
With the differential equation for $u$ given as
\begin{eqnarray}
\label{eq4}
-\frac{\hbar^2}{2m}\frac{d^2u}{dx^2}+D(e^{-2ax}-2e^{-ax})u+g|u|^2u=\mu u
\end{eqnarray}
we can explore the asymptotic solution as $x\rightarrow-\infty$. Being a bound state, the function $u$ has to vanish as $x\rightarrow\pm\infty$; hence the dominant part of Eq. (\ref{eq4}) for $x\rightarrow-\infty$ will be the $e^{-2ax}$ term, and in that range we have
\begin{eqnarray}
\label{eq5}
-\frac{\hbar^2}{2m}\frac{d^2u}{dx^2}+De^{-2ax}u\sim 0
\end{eqnarray}
Making the substitution $e^{-ax}=y$, we find the exact asymptotic form of $u$ for $x\rightarrow-\infty$ ($y\rightarrow\infty$) to be
\begin{eqnarray}
\label{eq6}
u_{asy}=Ae^{-Ky}
\end{eqnarray}
where $K^2=\frac{2mD}{\hbar^2a^2}$ and $A$ is a constant to be determined from normalization in $y$-space. Since the ground state wave function will have no zeroes at any finite nonzero value of $y$ (note that $-\infty\le x\le\infty$ corresponds to $0\le y\le\infty$), we take $u(y)=\sqrt{N}y^{\alpha}e^{-Ky}$ as the trial wave function for the minimization of the $E$ given in Eq. (\ref{eq2}), and we obtain the following expression for the ground state energy,
\begin{eqnarray}
\label{eq7}
E(\alpha)=\frac{\hbar^2a^2}{2m}[\alpha^2+\alpha(1-2K)+\frac{agm}{\hbar^2a^2}\frac{\Gamma(4\alpha)}{2^{4\alpha}[\Gamma(2\alpha)]^2}]
\end{eqnarray}
If $g=0$ (i.e., the usual Schr\"{o}dinger equation with a Morse potential), the critical $\alpha$ minimizing the energy is $\alpha=K-\frac{1}{2}$ and the ground state energy turns out to be
\begin{eqnarray}
\label{eq8}
E=-\frac{\hbar^2a^2}{2m}\frac{1}{4}(1-2K)^2
\end{eqnarray}
which is the exact answer. It should be noted that for $K=\frac{1}{2}$ the binding energy reaches zero, i.e., for $K<\frac{1}{2}$ (equivalently, for $a>\sqrt{\frac{8mD}{\hbar^2}}$) there can be no bound state in this potential. We now analyze the effect of the nonlinear term. Returning to Eq. (\ref{eq7}), we consider the term involving `$g$' and, writing $\lambda=\frac{agm}{\hbar^2a^2}$, we get as the minimization condition
\begin{eqnarray}
\label{eq9}
2\alpha+(1-2K)+\lambda\frac{d}{d\alpha}\frac{\Gamma(4\alpha)}{2^{4\alpha}[\Gamma(2\alpha)]^2}=0
\end{eqnarray}
Since the tightly bound situation corresponds to large values of $K$, we use the above equation in the limit when $\alpha$ is reasonably large and $\Gamma(x)$ can be replaced by the asymptotic form
\begin{eqnarray*}
\Gamma(x)\sim e^{-x}x^xx^{-1/2}\sqrt{2\pi}[1+\frac{1}{12x}+\cdots]
\end{eqnarray*}
From Eq. (\ref{eq9}) we get, to leading order, $\alpha$ given by
\begin{equation*}
\alpha=K-\frac{1}{2}-\frac{\lambda}{2\sqrt{2\pi}}\frac{1}{(2K-1)^{1/2}}
\end{equation*}
This gives a ground state energy of
\begin{equation}\label{eq10}
E=\frac{\hbar^2a^2}{2m}[-\frac{1}{4}(2K-1)^2+\frac{\lambda}{2\sqrt{\pi}}(2K-1)^{1/2}]
\end{equation}
an equation which is accurate only for $K\gg1$. In Table 1, we show the comparison of the ground state energies obtained from the actual solution of the minimization condition of Eq. (\ref{eq9}) and the energy obtained from Eq. (\ref{eq10}), for the fixed value $\lambda=1.0<\lambda_c$ and different values of $K$. As expected, the approximation of Eq. (\ref{eq10}) gets closer to the answer obtained from Eq. (\ref{eq9}) as $K$ increases. The increase of $\lambda$ weakens the binding of the ground state and, for large $K$, we have the result that the critical $\lambda$ for the disappearance of a bound state scales as $K^{3/2}$, i.e., as $D^{3/4}$.
\begin{table}[H]
\centering
\begin{tabular}{c c c }
\hline\hline
$K$ & $E_{0}$ (from Eq. (\ref{eq10})) & $E_{0}$ (from Eq. (\ref{eq9})) \\ [0.5ex]
\hline
2 & -1.762 & -2.243\\
3 & -5.619 & -6.25\\
4 & -11.504 & -12.25\\
5 & -19.404 & -20.25\\
6 & -29.315 & -30.25\\[1ex]
\hline
\end{tabular}
\caption{Results comparing the ground state energies from the variational calculation (third column) with the expression of Eq. (\ref{eq10}) (second column), obtained using the asymptotic series, for different values of $K$ keeping $\lambda$ fixed at 1.0.}
\end{table}
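As a quick arithmetic check of Eq. (\ref{eq10}) (this check is ours, quoted in units of $\frac{\hbar^2a^2}{2m}$): for $K=2$ and $\lambda=1.0$ one finds $-\frac{1}{4}(2K-1)^2+\frac{\lambda}{2\sqrt{\pi}}(2K-1)^{1/2}=-2.25+0.489\approx -1.76$, reproducing the first entry of the second column of Table 1.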
The interesting thing would be to explore the dynamics of an initial Gaussian wave packet in the regions $\lambda>\lambda_c$ and $\lambda<\lambda_c$ for $\lambda>0$. In the next section we describe the wave packet dynamics both for $\lambda>0$ and $\lambda<0$.
\section{Wave packet dynamics}
The effective Hamiltonian of GPE with a Morse trap is given by
\begin{eqnarray*}
H=\frac{p^2}{2m}+D(e^{-2ax}-2e^{-ax})+g\int dx|\psi(x)|^4
\end{eqnarray*}
We make $H$ dimensionless by the following rescalings: $p=\sqrt{mD}\bar{p}$, $\bar{\psi}=\psi\sqrt{\frac{1}{a}}$, $x=\frac{\bar{x}}{a}$, $\Delta=\frac{\bar{\Delta}}{a}$, $t=\bar{t}\sqrt{\frac{a^2D}{m}}$, $\bar{V}=\frac{V}{D}$ and $\frac{g}{\sqrt{2\pi}}=\gamma\frac{D}{a}$. Hence the relation between $\lambda$ and $\gamma$ turns out to be $\lambda=\sqrt{2\pi}\gamma\frac{K^2}{2}$. With these transformations, the dimensionless Hamiltonian turns out to be
\begin{eqnarray}\label{eq11}
\bar{H}=\frac{H}{D}=\frac{\bar{p}^2}{2}+(e^{-2\bar{x}}-2e^{-\bar{x}})+\sqrt{2\pi}\gamma\int d\bar{x}|\bar{\psi}(\bar{x})|^4
\end{eqnarray}
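For completeness (this one-line verification is ours), the quoted relation between $\lambda$ and $\gamma$ follows directly from the definitions: since $\lambda=\frac{agm}{\hbar^2a^2}=\frac{gm}{\hbar^2a}$ and $g=\sqrt{2\pi}\,\gamma\frac{D}{a}$, we have
$$
\lambda=\sqrt{2\pi}\,\gamma\,\frac{mD}{\hbar^2a^2}=\sqrt{2\pi}\,\gamma\,\frac{K^2}{2}.
$$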
We consider Gaussian wave packets of the form given below
\begin{equation}\label{aa}
\psi(x,t)=\frac{1}{\pi^{1/4}\sqrt{\Delta(t)}}e^{-\frac{(x-x_0(t))^2}{2\Delta(t)^2}}e^{ip(t)x/\hbar}
\end{equation}
i.e., we assume that an initially Gaussian form remains Gaussian with a changing centre and width. The initial shape, corresponding to $\bar{x}_0=0$ and $\Delta=\Delta_0$, has the energy
\begin{eqnarray}\label{a}
\frac{\langle E_0\rangle}{D}=\frac{1}{2K^2\bar{\Delta}_0}+\frac{\bar{p_0}^2}{2}+[e^{\bar{\Delta}_0^2}-2e^{\frac{\bar{\Delta}_0^2}{4}}]+\frac{\gamma}{\bar{\Delta}_0}
\end{eqnarray}
The dynamics of the wave packet is governed by the following equations (we drop the bars with the understanding that all quantities are dimensionless).
\begin{eqnarray}\label{eq13}
\begin{aligned}
\frac{d\langle x\rangle}{dt} &= \langle p\rangle\\
\frac{d\langle x^2\rangle}{dt} &= \langle xp+px\rangle\\
\frac{d\langle p\rangle}{dt} &= -\langle\frac{dV}{dx}\rangle= 2(\langle e^{-2x}\rangle-\langle e^{-x}\rangle)\\
\frac{d\langle p^2\rangle}{dt} &= -\langle p\frac{dV}{dx}+\frac{dV}{dx}p\rangle- \sqrt{2\pi}\gamma\frac{d}{dt}\int|\psi|^4dx\\
\frac{d\langle xp+px\rangle}{dt} &=2\langle p^2\rangle-2aD\langle x\frac{dV}{dx}\rangle+\sqrt{2\pi}\gamma\int|\psi|^4dx\\
&= 2\langle p^2\rangle+4aD(\langle xe^{-2x}\rangle-\langle xe^{-x}\rangle)\\&+\sqrt{2\pi}\gamma\int|\psi|^4dx
\end{aligned}
\end{eqnarray}
With the help of the equations given in Eq. (\ref{eq13}), we find that the dynamics of the wave packet will be governed by the two coupled equations given below
\begin{eqnarray}
\frac{d^2}{dt^2}x_0=2[e^{-(2x_0-\Delta^2)}-e^{-(x_0-\frac{\Delta^2}{4})}]\label{eq14}
\end{eqnarray}
\begin{eqnarray}\label{eq15}
\begin{aligned}
\frac{d^2}{dt^2}\Delta^2=\frac{2}{K^2}\frac{1}{\Delta^2}+\frac{\gamma}{\Delta}+4\Delta^2[&\frac{1}{2}e^{-(x_0-\frac{\Delta^2}{4})}\\&-e^{-(2x_0-\Delta^2)}]
\end{aligned}
\end{eqnarray}
Here we have explored the wave packet dynamics for $K=2$ which fixes $\gamma_c=0.917$.
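A minimal numerical sketch (ours, not from the original text) of how the coupled equations (\ref{eq14}) and (\ref{eq15}) can be integrated is given below; the parameter values and the initial condition $\frac{d\Delta^2}{dt}(0)=0$ are illustrative assumptions only.
\begin{verbatim}
# Minimal sketch: integrate Eqs. (14)-(15) for the packet centre x0(t)
# and the squared width s(t) = Delta(t)^2 (dimensionless units).
import numpy as np
from scipy.integrate import solve_ivp

K, gamma = 2.0, 0.5                      # potential parameter and coupling (illustrative)

def rhs(t, y):
    x0, v0, s, sdot = y                  # y = (x0, dx0/dt, Delta^2, d(Delta^2)/dt)
    e2 = np.exp(-(2.0 * x0 - s))         # Gaussian average of e^{-2x}
    e1 = np.exp(-(x0 - s / 4.0))         # Gaussian average of e^{-x}
    acc_x0 = 2.0 * (e2 - e1)                              # Eq. (14)
    acc_s = 2.0 / (K**2 * s) + gamma / np.sqrt(s) \
            + 4.0 * s * (0.5 * e1 - e2)                   # Eq. (15)
    return [v0, acc_x0, sdot, acc_s]

p0, Delta0 = 0.45, 0.4                   # initial momentum and width
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, p0, Delta0**2, 0.0], max_step=0.01)
x0_of_t, Delta_of_t = sol.y[0], np.sqrt(sol.y[2])
\end{verbatim}
Scanning $p_0$ with such a routine is one way to locate the threshold momentum $p_{th}$ discussed below.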
\subsection{\bf{Dynamics: ($\gamma>0$ and $\gamma<\gamma_c$); Corresponding figure: Fig \ref{fig2}}}
\begin{figure}
\caption{Figure (a) shows the dynamics of the peak ($x_0$) of the wave packet for $\gamma(=0.5)<\gamma_c(=0.917)$ for four different values of $p_0$. The potential parameter $K=2$ and the initial $x_0=0$ while the width $\Delta_0=0.4$. For low values of $p_0$ the wave packet hits the potential barrier at right and reflects back and forth within the trap showing oscillatory behavior. With the increase of $p_0$ the number of reflections reduces and finally at $p_0=p_{th}$ the wave packet escapes from the trap. Figure (b) shows the corresponding dynamics of the width $\Delta(t)$.}
\label{fig2}
\end{figure}
In this parameter region, after solving the coupled equations given in Eq. (\ref{eq14}) and Eq. (\ref{eq15}) numerically, we have found that when the initial momentum ($p_0$) of the wave packet is small, it reflects back from the potential barrier at right. Then it moves to the left and collides with the steep (effectively infinite) barrier of the potential at left. This back and forth oscillation within the trap continues with time. With the increase of $p_0$, the oscillation of the peak of the wave packet within the trap goes on with a higher amplitude, up to a certain threshold value ($p_{th}$) of $p_0$. At $p_0=p_{th}$ it simply comes out of the potential barrier and hence the oscillatory behavior of $x_0(t)$ ceases. For $p_0>p_{th}$ the graph of $x_0(t)$ vs. $t$ shows a sharp linear increase (not shown in the given figures). For $K=2$, we observed $p_{th}=0.45$. Hence, with the help of Eq. (\ref{a}), we conclude that the initial wave packet has to have the minimum average energy $E=0.751D$ to come out of the potential barrier. We will call this minimum average energy required for emitting the wave packet from the potential the threshold energy ($E_{th}$). It should be noted that in the classical case, for a particle to escape from the Morse potential, the average $E$ needs to be greater than or equal to zero. If the particle is initially at the origin with a potential energy of $-1$, then the threshold momentum would be $p_0=\sqrt{2}$. In the quantum case the average initial threshold momentum is $0.45$, clearly showing the role of quantum fluctuations. The dynamics of the width also follows the same qualitative feature as described above. The dynamics in this parameter region is clearly depicted in Fig \ref{fig2}. For convenience we keep the initial width of the wave packet fixed at $\Delta_0=0.4$ throughout the study.
\subsection{\bf{Dynamics: ($\gamma>0$ and $\gamma>\gamma_c$); Corresponding figure: Fig \ref{fig3}}}
\begin{figure}
\caption{Figure (a) shows the dynamics of the peak ($x_0$) of the wave packet for $\gamma=1.2$ for three different values of $p_0$. The dynamics of $x_0$ always shows linear increase for all these three values of $p_0$ unlike the previous case of $\gamma<\gamma_c$. This implies in this parameter region the wave packet will always come out of the potential barrier whatever be the initial momentum. The width also increases and the dynamics is like that of a free particle.}
\label{fig3}
\end{figure}
In this parameter region we have observed a linear increase of $x_0(t)$ irrespective of the initial momentum ($p_0$) of the wave packet. The width also shows a continuous increase with time. With the help of Eq. (\ref{a}), we now conclude that the wave packet in this parameter region always has energy $E> E_{th}$ ($=0.751D$, obtained for $K=2$ and $\gamma=0.5$). Thus the wave packet always comes out of the potential barrier and delocalises in space. In Fig \ref{fig3}, we clearly describe this feature considering $\gamma(=1.2)>\gamma_c(=0.917)$, $\Delta_0=0.4$ and $K=2$. The behavior is always that of a free particle.
\subsection{\bf{Dynamics: ($\gamma<0$); Corresponding figures: Fig \ref{fig4} and Fig \ref{fig5}}}
\begin{figure}
\caption{Figure (a) shows the dynamics of the peak ($x_0$) of the wave packet for $\gamma=-0.5$ for three different values of $p_0$. $K=2$ and ($\Delta_0=0.4$) are considered. Like Figure 2(a), with lower values of $p_0$ the wave packet hits the potential barrier and reflects back and forth within the trap. $x_0(t)$ shows oscillatory behavior in this case also. With increase of $p_0$ the number of reflections reduces, the oscillatory behavior ceases and finally when $p_0=p_{th}$ the wave packet escapes from the trap. Figure (b) shows the corresponding dynamics of the width $\Delta(t)$.}
\label{fig4}
\end{figure}
In this parameter region the dynamics of $x_0$ shows the same qualitative feature as described in Figure \ref{fig2}. We set $\gamma=-0.5$, $K=2$ and $\Delta_0=0.4$. But, unlike the previous cases, the dynamics of the width is now bounded. The irregular oscillation of the width continues with time with moderate amplitude and, as $p_0\rightarrow p_{th}$, the oscillation takes place in a more periodic manner with constant amplitude, which implies that the wave packet will never delocalise in space after coming out of the potential barrier. This case is depicted in Fig \ref{fig4}. In Fig \ref{fig5} we have shown the dynamics of $x_0$ and $\Delta$ for a larger magnitude of the coupling, $\gamma=-1.2$. The dynamics of $x_0$ follows the same qualitative feature as described earlier. But this time the dynamics of the width shows an interesting feature: the width of the wave packet has a tendency to retain its initial shape within the trap as well as after coming out of the trap.
\begin{figure}
\caption{Figure (a) shows the dynamics of the peak ($x_0$) of the wave packet for $\gamma=-1.2$ for three different values of $p_0$. $K=2$ and ($\Delta_0=0.4$) are considered. Like Fig. 4(a), with low values of $p_0$ the wave packet hits the potential barrier and reflects back and forth within the trap, showing oscillatory behavior. With increase of $p_0$ the number of reflections reduces, the oscillatory behavior ceases and finally when $p_0=p_{th}$ the wave packet escapes from the trap. Figure (b) shows the corresponding dynamics of the width $\Delta(t)$.}
\label{fig5}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{c c c}
\hline\hline
$K$ & $E_{th}$(for $\gamma=0.5$) & $E_{th}$(for $\gamma=-0.5$) \\ [0.5ex]
\hline
2 & 0.751D & -0.934D\\
3 & 0.904D & -1.09D\\
4 & 0.971D & -1.14D\\
5 & 0.986D & -1.17D\\
6 & 1.004D & -1.185D\\[1ex]
\hline
\end{tabular}
\caption{Results showing how the threshold energy ($E_{th}$) behaves with the potential parameter $K$ for two fixed values of $\gamma$ ($\gamma=0.5$ and $\gamma=-0.5$).}
\end{table}
\section{Conclusion}
In conclusion, we have explored the features of the GPE with the Morse potential. We have found that for a positive coupling constant and a deep potential there is a critical coupling $g_c$ at which the ground state disappears as a bound state, and $g_c^{4/3}$ scales as the depth of the potential. For the dynamics in this potential, if $g<g_c$ the initial wave packet needs to possess a threshold average momentum for the packet to escape from the potential. For $g>g_c$, however, the wave packet dynamics always resembles that of a quantum free particle. If the coupling $g$ is negative, we find that the packet escapes from the potential for values of the average momentum above the threshold and, further, if $g$ is more negative than a critical value, the width of the packet remains almost constant in time.
\section*{Acknowledgments}
One of the authors, Sukla Pal, would like to thank S. N. Bose National Centre for Basic Sciences for the financial support during this work. Sukla Pal acknowledges Harish-Chandra Research Institute for hospitality and support during a visit.
\end{document}
\begin{document}
\title{Differential Operators, Gauges, and Mixed Hodge Modules}
\author{Christopher Dodd}
\begin{abstract}
The purpose of this paper is to develop a new theory of gauges in
mixed characteristic. Namely, let $k$ be a perfect field of characteristic
$p>0$ and $W(k)$ the $p$-typical Witt vectors. Making use of Berthelot's
arithmetic differential operators, we define for a smooth formal scheme
$\mathfrak{X}$ over $W(k)$, a new sheaf of algebras $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
which can be considered a higher dimensional analogue of the (commutative)
Dieudonn\'e ring. Modules over this sheaf of algebras can be considered
the analogue (over $\mathfrak{X}$) of the gauges of Ekedahl and Fontaine-Jannsen.
We show that modules over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
admit all of the usual $\mathcal{D}$-module operations, and we prove
a robust generalization of Mazur's theorem in this context. Finally,
we show that an integral form of a mixed Hodge module of geometric
origin admits, after a suitable $p$-adic completion, the structure
of a module over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$.
This allows us to prove a version of Mazur's theorem for the intersection
cohomology and the ordinary cohomology of an arbitrary quasiprojective
variety defined over a number field.
\end{abstract}
\maketitle
\tableofcontents{}
\section{Introduction}
In this work, we will develop the technology needed to state and prove
\emph{Mazur's theorem for a mixed Hodge module}. In order to say what
this means, we begin by recalling the original Mazur's theorem. Fix
a perfect field $k$ of positive characteristic; let $W(k)$ denote
the $p$-typical Witt vectors. Let $X$ be a smooth proper scheme
over $k$. To $X$ is attached its crystalline cohomology groups $\mathbb{H}_{crys}^{i}(X)$,
which are finite type $W(k)$-modules; the complex $\mathbb{H}_{crys}^{\cdot}(X)$
has the property that $\mathbb{H}_{crys}^{\cdot}(X)\otimes_{W(k)}^{L}k\tilde{\to}\mathbb{H}_{dR}^{\cdot}(X)$
(the de Rham cohomology of $X$ over $k$). Furthermore, if $\mathfrak{X}$
is a smooth, proper formal scheme over $W(k)$, whose special fibre
is $X$, then there is a canonical isomorphism
\[
\mathbb{H}_{crys}^{i}(X)\tilde{\to}\mathbb{H}_{dR}^{i}(\mathfrak{X})
\]
for any $i$. In particular, the action of the Frobenius endomorphism
on $X$ endows $\mathbb{H}_{dR}^{i}(\mathfrak{X})$ with an endomorphism
$\Phi$ which is semilinear over the Witt-vector Frobenius $F$. It
is known that $\Phi$ becomes an automorphism after inverting $p$;
the ``shape'' of the map $\Phi$ is an interesting invariant of
the pair $(\mathbb{H}_{crys}^{i}(X),\Phi)$. To make this precise,
one attaches, to any $r\in\mathbb{Z}$, the submodule $(\mathbb{H}_{crys}^{i}(X))^{r}=\{m\in\mathbb{H}_{crys}^{i}(X)|\Phi(m)\in p^{r}\mathbb{H}_{crys}^{i}(X)\}$
(the equality takes place in $\mathbb{H}_{crys}^{i}(X)[p^{-1}]$).
Thus we have a decreasing, exhaustive filtration, whose terms measure
how far $\Phi$ is from being an isomorphism.
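To fix ideas (this toy example is ours, not part of the original text): if $\mathbb{H}_{crys}^{i}(X)$ were free of rank one over $W(k)$ with $\Phi=p^{s}F$ for some $s\geq0$, then $(\mathbb{H}_{crys}^{i}(X))^{r}=\mathbb{H}_{crys}^{i}(X)$ for $r\leq s$, while $(\mathbb{H}_{crys}^{i}(X))^{r}=p^{r-s}\mathbb{H}_{crys}^{i}(X)$ for $r\geq s$; the single jump of the filtration records the slope $s$ of $\Phi$.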
On the other hand, the de Rham cohomology of $X$ comes with another
filtration, the Hodge filtration, which comes from the Hodge to de
Rham spectral sequence $E_{1}^{r,s}=\mathbb{H}^{s}(X,\Omega_{X}^{r})\Rightarrow\mathbb{H}_{dR}^{r+s}(X)$.
Then we have the following remarkable
\begin{thm}
\label{thm:(Mazur)}(Mazur) Suppose that, for each $i$, the group
$\mathbb{H}_{crys}^{i}(X)$ is $p$-torsion-free, and that the Hodge
to de Rham spectral sequence of $X$ degenerates at $E_{1}$. Then
the image of the filtration $(\mathbb{H}_{crys}^{i}(X))^{r}$ in $\mathbb{H}_{dR}^{i}(X)$
is the Hodge filtration.
\end{thm}
This is (the first half of) \cite{key-13}, theorem 3 (in fact, under
slightly weaker hypotheses; compare \cite{key-14} corollary 3.3,
and \cite{key-10}, theorem 8.26). The theorem also includes a similar
description of the conjugate filtration (the filtration coming from
the second spectral sequence of hypercohomology) on $\mathbb{H}_{dR}^{i}(X)$;
we will address this as part of the more general theorem 1.2 below.
This result allowed Mazur to prove Katz's conjecture relating the
slopes of $\Phi$ to the Hodge numbers of $X$.
In the years following \cite{key-13}, it was realized that the theorem
can be profitably rephrased in terms of certain additional structures
on $\mathbb{H}_{crys}^{i}(X)$. Let $A$ be a commutative ring. Denote
by $D(A)$ the commutative ring $A[f,v]/(fv-p)$; put a grading on
this ring by placing $A$ in degree $0$, $f$ in degree $1$, and
$v$ in degree $-1$. Then a \emph{gauge} (over $A$) is a graded
module over $D(A)$, ${\displaystyle M=\bigoplus_{i\in\mathbb{Z}}M^{i}}$.
Set ${\displaystyle M^{\infty}:=M/(f-1)\tilde{=}\lim_{\to}M^{i}}$,
and ${\displaystyle M^{-\infty}:=M/(v-1)\tilde{=}\lim_{\to}M^{-i}}$.
One says that $M$ is an $F$-gauge if there is an isomorphism $F^{*}M^{\infty}\tilde{\to}M^{-\infty}$
(c.f. \cite{key-20}, definition 2.1, \cite{key-5}, chapter 1, or
section 2.1 below).
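As a minimal example (ours, not part of the original text), take $M=D(A)$ itself with the above grading. Multiplication by $f$ is an isomorphism $M^{i}\to M^{i+1}$ for $i\geq0$ and sends $v^{j}\mapsto pv^{j-1}$ for $j>0$; hence $M^{\infty}\tilde{=}A$, and symmetrically $M^{-\infty}\tilde{=}A$. For $A=W(k)$ this becomes an $F$-gauge via the canonical identification $F^{*}W(k)\tilde{=}W(k)$.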
Then, in the above situation, one associates the $W(k)$-gauge
\begin{equation}
\mathbb{H}_{\mathcal{G}}^{i}(X):=\bigoplus_{r\in\mathbb{Z}}(\mathbb{H}_{crys}^{i}(X))^{r}\label{eq:Basic-Gauge-defn}
\end{equation}
where $f:(\mathbb{H}_{crys}^{i}(X))^{r}\to(\mathbb{H}_{crys}^{i}(X))^{r+1}$
acts by multiplication by $p$, and $v:(\mathbb{H}_{crys}^{i}(X))^{r}\to(\mathbb{H}_{crys}^{i}(X))^{r-1}$
acts as the inclusion. One has $\mathbb{H}_{\mathcal{G}}^{i}(X)^{\infty}\tilde{=}\mathbb{H}_{crys}^{i}(X)$,
and the isomorphism\linebreak{}
$F^{*}(\mathbb{H}_{crys}^{i}(X))^{\infty}\to(\mathbb{H}_{crys}^{i}(X))^{-\infty}$
comes from the action of $\Phi$.
Remarkably, it turns out that there is a reasonable definition of
$\mathbb{H}_{\mathcal{G}}^{i}(X)$ for any $X$, even without the
assumption that each group $\mathbb{H}_{crys}^{i}(X)$ is $p$-torsion-free,
or that the Hodge to de Rham spectral sequence degenerates at $E_{1}$.
To state the result, note that for any gauge $M$ (over any $A$),
$M^{-\infty}$ carries a decreasing filtration defined by $F^{i}(M^{-\infty})=\text{image}(M^{i}\to M^{-\infty})$.
Passing to derived categories, we obtain a functor $D(\mathcal{G}(D(A)))\to D((A,F)-\text{mod})$
(here $\mathcal{G}(D(A))$ is the category of gauges, and $D((A,F)-\text{mod})$
is the filtered derived category of $A$); we will denote this functor
$M^{\cdot}\to M^{\cdot,-\infty}$. The analogous construction can
be carried out for $+\infty$ as well using the increasing filtration
$C^{i}(M^{\infty})=\text{image}(M^{i}\to M^{\infty})$. In particular,
if $M^{\cdot}\in D(\mathcal{G}(D(A)))$, then each $H^{i}(M^{\cdot,-\infty})$
and each $H^{i}(M^{\cdot,\infty})$ is a filtered $A$-module.
\begin{thm}
\label{thm:=00005BFJ=00005D} For any smooth $X$ over $k$, there
is a functorially attached complex of $W(k)$-gauges, $\mathbb{H}_{\mathcal{G}}^{\cdot}(X)$,
such that $\mathbb{H}_{\mathcal{G}}^{i}(X)^{\infty}\tilde{=}\mathbb{H}_{crys}^{i}(X)$
for all $i$. Further, there is an $F$-semilinear isomorphism $H^{i}((\mathbb{H}_{\mathcal{G}}^{\cdot}(X)\otimes_{W(k)}^{L}k))^{-\infty}\tilde{\to}(\mathbb{H}_{dR}^{i}(X),F)$
and a linear isomorphism $H^{i}((\mathbb{H}_{\mathcal{G}}^{\cdot}(X)\otimes_{W(k)}^{L}k))^{\infty}\tilde{\to}(\mathbb{H}_{dR}^{i}(X),C)$,
where $F$ and $C$ denote the Hodge and conjugate filtrations, respectively.
When $\mathbb{H}_{crys}^{i}(X)$ is torsion-free for all $i$ and
the Hodge to de Rham spectral sequence degenerates at $E_{1}$, then
this functor agrees with the gauge constructed above in \eqref{eq:Basic-Gauge-defn}.
\end{thm}
As far as I am aware, the first proof of this theorem appears in Ekedahl's
book \cite{key-20}. This is also the first place that the above notion
of gauge is defined; Ekedahl points out that Fontaine discovered the
notion independently. Ekedahl's proof relies on deep properties of
the de Rham-Witt complex and on the results of the paper \cite{key-37};
in that paper, it is shown that there is attached to $X$ a complex
inside another category $D^{b}(\mathcal{R}-\text{mod})$ where $\mathcal{R}$
is the so-called Raynaud ring; then, in definition 2.3.1 of \cite{key-20}
Ekedahl constructs a functor from $D^{b}(\mathcal{R}-\text{mod})$
to the derived category $D^{b}(\mathcal{G}(D(A)))$; the composition
of these two functors yields the construction of the theorem. Another,
rather different proof of the theorem is given in \cite{key-5}, section
7.
Now let us turn to $\mathcal{D}$-modules and Hodge modules. From
at least the time of Laumon's work (\cite{key-19}), it has been understood
that the filtered complex $\mathbb{H}_{dR}^{\cdot}(X)$ (with its
Hodge filtration) can be understood as an object of filtered $\mathcal{D}$-module
theory. To explain this, let $\mathcal{D}_{X}^{(0)}$ denote the level
zero PD-differential operators on $X$. Then $\mathcal{D}_{X}^{(0)}$
acts on $\mathcal{O}_{X}$, and we have a canonical isomorphism
\[
\int_{\varphi}\mathcal{O}_{X}[d_{X}]\tilde{\to}\mathbb{H}_{dR}^{\cdot}(X)
\]
where $\varphi$ denotes the map $X\to\text{Spec}(k)$, $d_{X}=\text{dim}(X)$,
and ${\displaystyle \int_{\varphi}}$ is the push-forward for $\mathcal{D}_{X}^{(0)}$-modules.
In addition, $\mathcal{D}_{X}^{(0)}$ comes equipped with a natural
\emph{increasing }filtration, the symbol filtration. Laumon's work\footnote{Strictly speaking, Laumon works in characteristic zero. But the same
formalism works for $\mathcal{D}_{X}^{(0)}$ in positive characteristic;
I'll address this below in the paper} upgrades the push-forward functor to a functor from filtered $\mathcal{D}_{X}^{(0)}$-modules
to filtered $k$-vector spaces; and we have that\emph{
\[
\int_{\varphi}\mathcal{O}_{X}[d_{X}]\tilde{\to}(\mathbb{H}_{dR}^{\cdot}(X),F')
\]
}where $F'$ is the Hodge filtration, suitably re-indexed to make
it an increasing filtration. Furthermore, Laumon works in the relative
setting; i.e., he constructs a filtered push-forward for any morphism
$\varphi:X\to Y$ of smooth varieties.
This leads to the question of whether the construction of \thmref{=00005BFJ=00005D}
can be understood in terms of some sort of upgrade of filtered $\mathcal{D}$-modules
to a category of graded modules. The main body of this work shows
that, at least when the schemes in question lift to smooth formal
schemes over $W(k)$, the answer is yes\footnote{In fact, the answer is always yes. But we will address the non-liftable
case in future work}. To state the first result, recall that, in addition to the symbol
filtration, the algebra $\mathcal{D}_{X}^{(0)}$ carries a decreasing
filtration by two-sided ideals, the conjugate filtration, denoted $\{C^{i}(\mathcal{D}_{X}^{(0)})\}_{i\in\mathbb{Z}}$
(it was first defined in \cite{key-11}, section 3.4, c.f. also \defref{Hodge-and-Con}
below).
\begin{thm}
\label{thm:D01}Let $\mathfrak{X}$ be a smooth formal scheme over
$W(k)$. Then there is a locally noetherian sheaf of algebras $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
with the following properties:
1) ${\displaystyle \widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}=\bigoplus_{i}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),i}}$
is a graded $D(W(k))$-algebra, and $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/(v-1)\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$,
while the sheaf $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/(f-1)$
has $p$-adic completion equal to $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$.
2) Let $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/p:=\mathcal{D}_{X}^{(0,1)}$,
a graded sheaf of $k$-algebras on $X$. The filtration $\text{im}(\mathcal{D}_{X}^{(0,1),i}\to\mathcal{D}_{X}^{(0)}\tilde{=}\mathcal{D}_{X}^{(0,1)}/(v-1))$
agrees with the conjugate filtration on $\mathcal{D}_{X}^{(0)}$.
3) We have $\mathcal{D}_{X}^{(1)}=\mathcal{D}_{X}^{(0,1)}/(f-1)$.
Consider the filtration $F^{i}(\mathcal{D}_{X}^{(1)})=\text{im}(\mathcal{D}_{X}^{(0,1),i}\to\mathcal{D}_{X}^{(1)}\tilde{=}\mathcal{D}_{X}^{(0,1)}/(f-1))$.
Then filtered modules over $(\mathcal{D}_{X}^{(1)},F^{\cdot})$ are
equivalent to filtered modules over $(\mathcal{D}_{X}^{(0)},F^{\cdot})$
(the symbol filtration on $\mathcal{D}_{X}^{(0)}$).
\end{thm}
This sheaf of algebras is constructed in \secref{The-Algebra} below;
part $2)$ of the theorem is proved in \remref{Description-of-conjugate},
and part $3)$ is \thmref{Filtered-Frobenius}. This theorem shows
that a graded module over $\mathcal{D}_{X}^{(0,1)}$ is a simultaneous
generalization of a conjugate-filtered and a Hodge-filtered $\mathcal{D}_{X}^{(0)}$-module.
The algebra $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ admits
analogues of all of the usual $\mathcal{D}$-module operations; namely,
tensor product, duality, left-right interchange, as well as push-forward
and pull-back over arbitrary morphisms (between smooth formal schemes).
By construction the sheaf $D(\mathcal{O}_{\mathfrak{X}})=\mathcal{O}_{\mathfrak{X}}[f,v]/(fv-p)$
carries an action of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$.
Let $D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
denote the derived category of graded $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-modules;
then we have
\begin{thm}
For any morphism $\varphi:\mathfrak{X}\to\mathfrak{Y}$ of smooth
formal schemes we denote the pushforward by ${\displaystyle \int_{\varphi}:D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))\to D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))}$.
If $\varphi$ is proper, then the pushforward takes $D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
to $D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$.
We have ${\displaystyle (\int_{\varphi}\mathcal{M})^{-\infty}\tilde{=}(\int_{\varphi}\mathcal{M}^{-\infty})}$,
where the pushforward on the right is in the category of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules.
In particular, if $\mathfrak{Y}$ is $\text{Specf}(W(k))$, then ${\displaystyle {\displaystyle \int_{\varphi}}D(\mathcal{O}_{\mathfrak{X}})}$
is a bounded complex of finite type gauges, and we have isomorphisms
\[
({\displaystyle \int_{\varphi}}D(\mathcal{O}_{\mathfrak{X}}))^{-\infty}[d_{X}]\tilde{=}\mathbb{H}_{dR}^{\cdot}(\mathfrak{X})
\]
and
\[
({\displaystyle \int_{\varphi}}D(\mathcal{O}_{\mathfrak{X}}))^{\infty}[d_{X}]\tilde{=}F^{*}\mathbb{H}_{dR}^{\cdot}(\mathfrak{X})
\]
where $F$ is the Witt-vector Frobenius. After passing to $k$ we
obtain isomorphisms in the filtered derived category
\[
({\displaystyle \int_{\varphi}}D(\mathcal{O}_{\mathfrak{X}})\otimes_{W(k)}^{L}k)^{-\infty}[d_{X}]\tilde{=}(\mathbb{H}_{dR}^{\cdot}(X),C')
\]
(where $C'$ is the conjugate filtration, appropriately re-indexed
to make it a decreasing filtration), and
\[
({\displaystyle \int_{\varphi}}D(\mathcal{O}_{\mathfrak{X}})\otimes_{W(k)}^{L}k)^{\infty}[d_{X}]\tilde{=}F^{*}(\mathbb{H}_{dR}^{\cdot}(X),F')
\]
where $F'$ is the Hodge filtration, suitably re-indexed to
make it an increasing filtration.
\end{thm}
This theorem is proved in \secref{Push-Forward} below.
In fact $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ has many more
favorable properties which are developed extensively in this paper;
including a well-behaved pull-back for arbitrary maps, an internal
tensor product which satisfies the projection formula, and a relative
duality theory; these are sections five through eight below. Simultaneously,
we develop the analogous theory of $\mathcal{D}_{X}^{(0,1)}$-modules,
where $\mathcal{D}_{X}^{(0,1)}=\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/p$; technically,
we do a little more than that, and develop the theory of $\mathcal{D}_{X}^{(0,1)}$-modules
over smooth varieties which do not have to lift to $W(k)$. The two
theories play off each other nicely; we often use reduction mod $p$
and various versions of Nakayama's lemma to reduce statements about
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ to statements about
$\mathcal{D}_{X}^{(0,1)}$; on the other hand, there are always local
lifts of a smooth variety over $k$, so local questions about $\mathcal{D}_{X}^{(0,1)}$
often reduce to questions about $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$.
There is also an interesting and rich theory over the truncated Witt
vectors $W_{n}(k)$, but, given the length of this paper, we will
undertake a detailed study of it in another work.
We also have a comparison with the gauge constructed in \thmref{=00005BFJ=00005D};
however, we will defer the proof of this result to a later paper.
That is because it seems best to prove it as a consequence of a more
general comparison theorem between the category of gauges constructed
here and the one constructed in \cite{key-5}; and this general statement
is still a work in progress. It also seems that there is a close connection
with the recent works of Drinfeld \cite{key-39} and Bhatt-Lurie \cite{key-40}
via a kind of Koszul duality formalism; again, the details are a work
in progress\footnote{The author has been discussing these topics with Bhargav Bhatt }.
Now we discuss Mazur's theorem in the relative context. We begin with
the
\begin{defn}
A module $\mathcal{M}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
is standard if $\mathcal{M}^{-\infty}$ and $\mathcal{M}^{\infty}$
are $p$-torsion-free, each map $f_{\infty}:\mathcal{M}^{i}\to\mathcal{M}^{\infty}$
is injective; and, finally, there is a $j_{0}\in\mathbb{Z}$ so that
\[
f_{\infty}(\mathcal{M}^{i+j_{0}})=\{m\in\mathcal{M}^{\infty}|p^{i}m\in f_{\infty}(\mathcal{M}^{j_{0}})\}
\]
for all $i\in\mathbb{Z}$.
\end{defn}
Note that, over $W(k)$, this is a generalization of the construction
of the gauge in \eqref{eq:Basic-Gauge-defn}, with the roles of $f$
and $v$ reversed (this is related to the re-indexing of the Hodge
and conjugate filtrations; c.f. also \remref{basic-equiv} below).
Thus a general version of Mazur's theorem will give conditions on
a complex of gauges which ensure that each cohomology group is standard.
In order to state such a theorem, we need to note that there is a
notion of $F$-gauge in this context, or, to be more precise, a notion
of $F^{-1}$-gauge:
\begin{defn}
(c.f. \defref{Gauge-Defn!}) Let $F^{*}:\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}-\text{mod}\to\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}-\text{mod}$
denote Berthelot's Frobenius pullback (c.f. \thmref{Berthelot-Frob}
below for details). Then an $F^{-1}$-gauge over $\mathfrak{X}$ is
an object of $\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$
equipped with an isomorphism $F^{*}\mathcal{M}^{-\infty}\tilde{\to}\widehat{\mathcal{M}^{\infty}}$
(here $\widehat{?}$ denotes $p$-adic completion). There is also
a version for complexes in $D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
namely, an $F^{-1}$-gauge in $D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$
is a complex $\mathcal{M}^{\cdot}$ equipped with an isomorphism $F^{*}\mathcal{M}^{\cdot,-\infty}\tilde{\to}\widehat{\mathcal{M}^{\cdot,\infty}}$
(here $\widehat{?}$ denotes the cohomological or derived completion,
c.f. \defref{CC} and \propref{Basic-CC-facts} below).
\end{defn}
We denote by $D_{F^{-1}}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
the category of complexes for which there exists an isomorphism $F^{*}\mathcal{M}^{\cdot,-\infty}\tilde{\to}\widehat{\mathcal{M}^{\cdot,\infty}}$
as above. Then we have the following rather general version of Mazur's
theorem:
\begin{thm}
(c.f. \thmref{F-Mazur}) Let $\mathcal{M}^{\cdot}\in D_{\text{coh},F^{-1}}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
Suppose that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{-\infty}$ is
$p$-torsion-free for all $n$, and suppose that $\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free for all $n$. Then $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$
is standard for all $n$.
\end{thm}
Using the formalism of filtered $\mathcal{D}$-modules one verifies
that the condition that $\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free for all $n$ is a generalization of the degeneration
of the Hodge-to-de Rham spectral sequence. Therefore this theorem,
along with the previous one, provide a robust generalization of Mazur's
theorem, which allows much more general kinds of coefficients.
The conditions of the theorem are satisfied in several important cases.
Suppose $R$ is a finitely generated $\mathbb{Z}$-algebra, and suppose
that $X_{R}$ is a smooth $R$-scheme, and let $\varphi:X_{R}\to Y_{R}$
be a proper map. Suppose that $(\mathcal{M}_{R},F)$ is a filtered
coherent $\mathcal{D}_{X_{R}}^{(0)}$-module on $X_{R}$. If the associated
complex filtered $\mathcal{D}$-module, $(\mathcal{M}_{\mathbb{C}},F)$
undergirds a mixed Hodge module, then by Saito's theory the Hodge-to-de
Rham spectral sequence for ${\displaystyle \int_{\varphi}(\mathcal{M}_{\mathbb{C}},F)}$
degenerates at $E_{1}$. Thus the same is true over $R$, after possibly
localizing. Further localization ensures that each ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}(\mathcal{M}_{R},F))}$
is flat over $R$.
Now suppose we have a map $R\to W(k)$. Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
denote the formal completion of the base change to $W(k)$. Then the
theorem applies if there exists a $p$-torsion-free gauge $\mathcal{N}$
over $\mathfrak{X}$ such that $\mathcal{N}^{-\infty}\tilde{=}\widehat{\mathcal{M}\otimes_{R}W(k)}$
and $F^{*}(\mathcal{M}_{k},F)\tilde{\to}\mathcal{N}^{\infty}/p$.
By a direct construction, this happens for $\mathcal{M}=\mathcal{O}_{X}$
as well as $\mathcal{M}=j_{\star}\mathcal{O}_{U}$ and $\mathcal{M}=j_{!}\mathcal{O}_{U}$
(where $U\subset X$ is an open subscheme whose complement is a normal
crossings divisor, and $j_{\star}$ and $j_{!}$ denote the pushforwards
in mixed Hodge module theory). Therefore, by the theorem itself, it
happens when $(\mathcal{M}_{\mathbb{C}},F)$ is itself a Hodge module
``of geometric origin'' (c.f. \corref{Mazur-for-Hodge-1}). In this
paper we give some brief applications of this to the case where $\mathcal{M}_{\mathbb{C}}$
is the local cohomology along some subscheme; but we expect that there
are many more.
Finally, let's mention that Hodge modules of geometric origin control
both the intersection cohomology and singular cohomology of singular
varieties over $\mathbb{C}$. So we can obtain
\begin{thm}
\label{thm:Mazur-for-IC-Intro}Let $X_{R}$ be a (possibly singular)
quasiprojective variety over $R$. Then, after possibly localizing
$R,$ there is a filtered complex of $R$-modules $I\mathbb{H}^{\cdot}(X_{R})$,
whose base change to $\mathbb{C}$ yields $I\mathbb{H}^{\cdot}(X_{\mathbb{C}})$,
with its Hodge filtration. Now suppose $R\to W(k)$ for some perfect
field $k$. Then for each $i$, there is a standard gauge $\tilde{I\mathbb{H}}^{i}(X)_{W(k)}$
so that
\[
\tilde{I\mathbb{H}}^{i}(X)_{W(k)}^{-\infty}\tilde{=}I\mathbb{H}^{\cdot}(X_{R})\otimes_{R}W(k)
\]
and so that
\[
\tilde{I\mathbb{H}}^{i}(X)_{W(k)}^{\infty}\tilde{=}F^{*}(I\mathbb{H}^{\cdot}(X_{R})\otimes_{R}W(k))
\]
Under this isomorphism, the Hodge filtration on $\tilde{I\mathbb{H}}^{i}(X)_{W(k)}^{\infty}/p$
agrees with the Frobenius pullback of the image of the Hodge filtration
in $I\mathbb{H}^{\cdot}(X_{R})\otimes_{R}k$.
The analogous statement holds for the ordinary cohomology of a quasiprojective
variety $X_{R}$, with its Hodge filtration; as well as the compactly
supported cohomology.
\end{thm}
This is proved in \corref{Mazur-for-IC} and \corref{Mazur-for-Ordinary} below.
As in \cite{key-13} and \cite{key-38}, \cite{key-55} this result
implies that the ``Newton polygon'' lies on or above the ``Hodge
polygon'' for both the ordinary and the intersection cohomology of
quasiprojective varieties, in the circumstances of the above theorem.
We note here that the theorem gives an $F$-semilinear action on the
groups $I\mathbb{H}^{\cdot}(X_{R})\otimes_{R}W(k)[p^{-1}]$, as well
as the ordinary cohomology groups $\mathbb{H}^{\cdot}(X_{R})\otimes_{R}W(k)[p^{-1}]$,
and the compactly supported cohomology as well. This action has already
been constructed as a consequence of the formalism of rigid cohomology
(c.f. \cite{key-80},\cite{key-81}). However, to my knowledge this
``integral'' version of the action has not been considered before.
\subsection{Plan of the Paper}
The first chapter has two sections. In the first, we quickly review
the theory of gauges over $W(k)$, and in particular give the equivalence
between $F$-gauges and $F^{-1}$-gauges in this context. In the second,
we give a quick recollection of some generalities on graded modules,
before reviewing and extending (to the case of graded modules) the
very important technical notion of cohomological completeness (also
known as derived completeness). The Nakayama lemma is key here, as
the reduction mod $p$ will be one of our main technical tools for
proving theorems.
The next chapter introduces $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$,
as well as its analogue $\mathcal{D}_{X}^{(0,1)}$ over a smooth $k$-variety
$X$ (which does not have to lift to a smooth formal scheme), and
performs some basic local calculations. In particular, we prove \corref{Local-coords-over-A=00005Bf,v=00005D},
which provides a local description of $\mathcal{D}_{X}^{(0,1)}$ which
is analogous to the basic descriptions of differential operators ``in
local coordinates'' that one finds in other contexts.
In chapter $4$, we study the categories of graded modules over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
and $\mathcal{D}_{X}^{(0,1)}$, importing and generalizing some key
results of \cite{key-5}. We prove the ``abstract'' version of Mazur's
theorem (\thmref{Mazur!}) for a complex of gauges. Then we go on
to introduce the notion of an $F^{-1}$-gauge over $X$ (and $\mathfrak{X}$),
which makes fundamental use of Berthelot's Frobenius descent. We explain
in \thmref{Filtered-Frobenius} how this Frobenius descent interacts
with the natural filtrations coming from the grading on $\mathcal{D}_{X}^{(0,1)}$.
Along the way, we look at the relationship between modules over $\mathcal{D}_{X}^{(0,1)}$
and modules over two important Rees algebras: $\mathcal{R}(\mathcal{D}_{X}^{(0)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$, the Rees algebras
of $\mathcal{D}_{X}^{(0)}$ with respect to the symbol and conjugate
filtrations, respectively.
Chapters $5$ through $8$ introduce and study the basic $\mathcal{D}$-module
operations in this context: pullback, tensor product, left-right interchange,
pushforward, and duality. Much of this is similar to the story for
algebraic $\mathcal{D}$-modules (as covered in \cite{key-49}, for
instance). For instance, even though $\mathcal{D}_{X}^{(0,1)}$ and
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ do not have finite
homological dimension, we show that the pushforward, pullback, and
duality functors do have finite homological dimension. As usual, the
study of the pushforward (chapter $7$ below) is the most involved,
and we spend some time exploring the relationship with the pushforwards
for $\mathcal{R}(\mathcal{D}_{X}^{(0)})$ and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$,
respectively; these admit descriptions in terms of the more standard
filtered pushforwards of $\mathcal{D}$-modules.
Finally, in the last chapter we put everything together and prove
Mazur's theorem for a Hodge module of geometric origin; this uses,
essentially, all of the theory built in the previous sections. In
addition to the applications explained in the introduction, we give
some applications to the theory of the Hodge filtration on the local
cohomology of a subscheme of a smooth complex variety.
There is one appendix to the paper, in which we prove a technical
result useful for constructing the gauge $j_{\star}(D(\mathcal{O}))$,
the pushforward of the trivial gauge over a normal crossings divisor.
\subsection{Notations and Conventions}
Let us introduce some basic notations which are used throughout the
paper. For any ring (or sheaf of rings) $\mathcal{R}$, we will denote
by $D(\mathcal{R})$ the graded ring in which $\mathcal{R}$ has degree
$0$, $f$ has degree $1$, $v$ has degree $-1$, and $fv=p$. The
symbol $k$ will always denote a perfect field of positive characteristic,
and $W(k)$ the $p$-typical Witt vectors. Letters $X$, $Y$, $Z$
will denote smooth varieties over $k$, while $\mathfrak{X}$, $\mathfrak{Y}$, $\mathfrak{Z}$
will denote smooth formal schemes over $W(k)$. When working with
formal schemes, we let $\Omega_{\mathfrak{X}}^{1}$ denote the sheaf
of continuous differentials (over $W(k)$), and $\mathcal{T}_{\mathfrak{X}}$
denote the continuous $W(k)$-linear derivations; we set $\Omega_{\mathfrak{X}}^{i}=\bigwedge^{i}\Omega_{\mathfrak{X}}^{1}$
and $\mathcal{T}_{\mathfrak{X}}^{i}=\bigwedge^{i}\mathcal{T}_{\mathfrak{X}}$.
We denote by $X^{(i)}$ the $i$th Frobenius twist of $X$; i.e.,
the scheme $X\times_{\text{Spec}(k)}\text{Spec}(k)$, where $k$ maps
to $k$ via $F^{i}$. Since $k$ is perfect, the natural map $\sigma:X^{(i)}\to X$
is an isomorphism. On the other hand, the relative Frobenius $X\to X^{(i)}$
is a bijection on topological spaces, which allows us to identify
$\mathcal{O}_{X^{(i)}}\tilde{=}\mathcal{O}_{X}^{p^{i}}$; we shall
tacitly use this below.
Now we introduce some conventions on differential operators. If $\mathfrak{X}$
is a smooth formal scheme over $W(k)$, then for each $i\geq0$ we
have Berthelot's ring of differential operators of level $i$, $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(i)}$,
introduced in \cite{key-1}. This is a $p$-adically complete, locally
noetherian sheaf of rings on $\mathfrak{X}$. In general, this sheaf
is somewhat complicated to define, but when $\mathfrak{X}=\text{Specf}(\mathcal{A})$
is affine and admits local coordinates\footnote{i.e., $\Gamma(\Omega_{\mathfrak{X}}^{1})$ is a free module over $\mathcal{A}$}
one has the following description of its global sections: let $D_{\mathcal{A}}^{(\infty)}$
denote the subring of $\text{End}_{W(k)}(\mathcal{A})$ consisting
of the finite-order, continuous differential operators on $\mathcal{A}$.
Define $D_{\mathcal{A}}^{(i)}\subset D_{\mathcal{A}}^{(\infty)}$
to be the subring generated by differential operators of order $\leq p^{i}$.
Then we have
\[
\Gamma(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(i)})=\widehat{D_{\mathcal{A}}^{(i)}}
\]
where $\widehat{?}$ stands for $p$-adic completion. For each $i\geq0$
there is a natural, injective map $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(i)}\to\widehat{\mathcal{D}}_{\mathfrak{X}}^{(i+1)}$;
when $\mathfrak{X}=\text{Specf}(\mathcal{A})$ is as above it is given
by the $p$-adic completion of the tautological inclusion $D_{\mathcal{A}}^{(i)}\subset D_{\mathcal{A}}^{(i+1)}$.
Similarly, we have the sheaves of algebras $\mathcal{D}_{X}^{(i)}$
when $X$ is smooth over $k$. In the case $i=0$, this is simply
the usual sheaf of pd-differential operators on $X$ (c.f. \cite{key-10}).
This sheaf can be rather rapidly defined (as in \cite{key-3} chapter
1, though there they are called crystalline differential operators)
as the enveloping algebroid of the tangent sheaf $\mathcal{T}_{X}$.
Finally let us mention that we will be often working with derived
categories of graded modules in this work. In that context, the symbol
$[i]$ denotes a shift in homological degree, while $(i)$ denotes
a shift in the grading degree.
\section{Preliminaries}
\subsection{Gauges over $W(k)$}
In this section we set some basic notation and terminology; all of
which is essentially taken from the paper \cite{key-5}. Let $k$
be a perfect field of characteristic $p>0$; and let $W(k)$ be the
$p$-typical Witt vectors. Let $S$ be a noetherian $W(k)$-algebra.
We recall from \cite{key-5} (also \cite{key-20}) that a gauge over
$S$ is a graded module ${\displaystyle M=\bigoplus_{i=-\infty}^{\infty}M^{i}}$
over the graded ring $D(S)$ where (as always) we suppose $\text{deg}(f)=1$,
$\text{deg}(v)=-1$, and $fv=p$. A morphism of gauges is a morphism
in the category of graded modules.
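To fix ideas, let us spell out the graded ring $D(S)$ itself (a
routine unwinding of the definition, with $f$ and $v$ understood
to be central): one has $D(S)\tilde{=}S[f,v]/(fv-p)$, with graded
pieces
\[
D(S)^{i}=S\cdot f^{i}\,\,(i\geq0),\qquad D(S)^{i}=S\cdot v^{-i}\,\,(i\leq0),
\]
each free of rank one over $S$. Under these identifications, $f:D(S)^{i}\to D(S)^{i+1}$
is an isomorphism for $i\geq0$ and is multiplication by $p$ for
$i<0$, and symmetrically for $v$.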
If $M$ is a gauge, we denote the resulting multiplication maps by
$f:M^{i}\to M^{i+1}$ and $v:M^{i}\to M^{i-1}$ for all $i$.
As explained in \cite{key-5}, lemma 1.1.1, such a module is finitely
generated over $D(S)$ iff each $M^{i}$ is finite over $S$ and the
maps $f:M^{r}\to M^{r+1}$ and $v:M^{-r}\to M^{-r-1}$ are isomorphisms
for $r>>0$. It follows that in this case the map $v:M^{r}\to M^{r-1}$
is $p\cdot$ for $r>>0$, and $f:M^{-r}\to M^{-r+1}$ is $p\cdot$
for $r>>0$. In the terminology of \cite{key-5}, such a gauge is
\emph{concentrated in a finite interval}.
\begin{defn}
\label{def:endpoints} Let $M$ be a gauge.
1) Set ${\displaystyle M^{\infty}:=M/(f-1)M\tilde{\to}\lim_{r\to\infty}M^{r}}$
and ${\displaystyle M^{-\infty}:=M/(v-1)M\tilde{\to}\lim_{r\to-\infty}M^{r}}$.
2) For each $i$, denote by $f_{\infty}:M^{i}\to M^{\infty}$ and
$v_{-\infty}:M^{i}\to M^{-\infty}$ the induced maps.
3) Define $F^{i}(M^{\infty}):=\text{image}(M^{i}\to M^{\infty})$
and $C^{i}(M^{-\infty}):=\text{image}(M^{i}\to M^{-\infty})$. In
particular, $F^{i}$ is an increasing filtration on $M^{\infty}$
and $C^{i}$ is a decreasing filtration on $M^{-\infty}$. Clearly
any morphism of gauges $M\to N$ induces morphisms of filtered modules
$(M^{\infty},F^{\cdot})\to(N^{\infty},F^{\cdot})$ and $(M^{-\infty},C^{\cdot})\to(N^{-\infty},C^{\cdot})$.
\end{defn}
If $M$ is finitely generated we see that $M^{r}\tilde{=}M^{\infty}$
and $M^{-r}\tilde{=}M^{-\infty}$ for all $r>>0$.
Many gauges arising in examples possess an additional piece of structure:
a Frobenius semi-linear isomorphism from $M^{\infty}$ to $M^{-\infty}$.
So let us now suppose that $S$ is equipped with an endomorphism $F$
which extends the Frobenius on $W(k)$.
\begin{defn}
\label{def:F-gauge} (\cite{key-5}, section 1.4) An $F$-gauge is
a gauge $M$ equipped with an isomorphism $\varphi:F^{*}M^{\infty}\tilde{\to}M^{-\infty}$.
A morphism of $F$-gauges is required to respect the isomorphism $\varphi$.
More precisely, given a morphism $G:M\to N$, it induces $G^{\infty}:M^{\infty}\to N^{\infty}$
and $G^{-\infty}:M^{-\infty}\to N^{-\infty}$, and we demand $\varphi\circ F^{*}G^{\infty}=G^{-\infty}\circ\varphi$.
This makes the category of $F$-gauges into an additive category,
which is abelian if $F^{*}$ is an exact functor.
\end{defn}
Now suppose in addition that $F:S\to S$ is an isomorphism. Then:
\begin{rem}
\label{rem:basic-equiv}There is an equivalence of categories from
$F$-gauges to $F^{-1}$-gauges; namely, send $M$ to the gauge $N$
where $N^{i}=M^{-i}$, $f:N^{i}\to N^{i+1}$ is defined to be $v:M^{-i}\to M^{-i-1}$,
and $v:N^{i}\to N^{i-1}$ is defined to be $f:M^{-i}\to M^{-i+1}$.
Then $M^{\infty}=N^{-\infty}$, $M^{-\infty}=N^{\infty}$, and the
isomorphism $\varphi:F^{*}M^{\infty}\tilde{\to}M^{-\infty}$ yields
an isomorphism $\psi^{-1}:F^{*}N^{-\infty}\tilde{\to}N^{\infty}$;
which is equivalent to giving an isomorphism $\psi:(F^{-1})^{*}N^{\infty}\tilde{\to}N^{-\infty}$.
\end{rem}
Finally, we want to quickly review an important construction of gauges.
We suppose here that $S=W(k)$; equipped with its Frobenius automorphism
$F$. We use the same letter $F$ to denote the induced automorphism
of the field $B=W(k)[p^{-1}]$. We will explain how gauges arise
from lattices in $B$-vector spaces:
\begin{example}
\label{exa:BasicGaugeConstruction}Let $D$ be a finite dimensional
$B$-vector space, and let $M$ and $N$ be two lattices (i.e., finite
free $W(k)$-modules which span $D$) in $D$. To this situation we
may attach a gauge over $W(k)$ as follows: for all $i\in\mathbb{Z}$
define
\[
M^{i}=\{m\in M|p^{i}m\in N\}
\]
We let $f:M^{i}\to M^{i+1}$ be the inclusion, and $v:M^{i}\to M^{i-1}$
be the multiplication by $p$. For $i>>0$ we have $p^{i}M\subset N$
and so $M^{i}=M$ for all such $i$. For $i<<0$ we have $p^{-i}N\subset M$
and so $M^{i}=p^{-i}N\tilde{=}N$ for such $i$. In particular we
obtain $M^{-\infty}\tilde{=}N$ and $M^{\infty}\tilde{=}M$. This
is evidently a finite-type gauge over $W(k)$. Now suppose that there
is an $F$-semi-linear automorphism $\Phi:D\to D$ so that $M=\Phi(N)$.
Then the previous construction gives an $F^{-1}$-gauge via the isomorphism
$\Phi:N=M^{-\infty}\to M^{\infty}=M$.
\end{example}
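To make this concrete, consider the simplest rank-one instance (a
worked special case, included only for illustration): take $D=B$,
$M=W(k)$, and $N=p^{a}W(k)$ for some $a\geq0$. Then
\[
M^{i}=\{m\in W(k)|p^{i}m\in p^{a}W(k)\}=p^{\max(a-i,0)}W(k),
\]
so that $M^{i}=W(k)$ for $i\geq a$ and $M^{i}=p^{a-i}W(k)$ for
$i\leq a$. One gets $M^{\infty}\tilde{=}M=W(k)$ and $M^{-\infty}\tilde{=}N$,
and the filtration $F^{\cdot}(M^{\infty})$ defined above is given
by $F^{i}(M^{\infty})=p^{\max(a-i,0)}W(k)$.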
\begin{rem}
\label{rem:=00005BFJ=00005D-standard}In \cite{key-5}, section 2.2,
there is associated an $F$-gauge to a finite dimensional $B$ vector
space $D$, equipped with lattice $M\subset D$ and a semi-linear
automorphism $\Phi:D\to D$. We recall that their construction is
\[
M^{i}=\{m\in M|\Phi(m)\in p^{i}M\}=\{m\in M|m\in p^{i}\Phi^{-1}(M)\}
\]
for all $i\in\mathbb{Z}$. In this instance $f:M^{i}\to M^{i+1}$
is the multiplication by $p$, and $v:M^{i}\to M^{i-1}$ is the inclusion.
If we set $N=\Phi^{-1}(M)$ then this is exactly the $F$-gauge which
corresponds to the $F^{-1}$-gauge constructed in \exaref{BasicGaugeConstruction}
above, via the equivalence of categories of \remref{basic-equiv}.
In \cite{key-5} this construction is referred to as the standard
construction of gauges. We will generalize this below in \subsecref{Standard}.
\end{rem}
\subsection{Cohomological Completion of Graded Modules}
In this section we give some generalities on sheaves of graded modules.
Throughout this section, we let $X$ be a noetherian topological space
and $\tilde{\mathcal{R}}=\bigoplus_{i\in\mathbb{Z}}\tilde{\mathcal{R}}^{i}$
a $\mathbb{Z}$-graded sheaf of rings on $X$. The noetherian hypothesis
ensures that, for each open subset $U\subset X$, the functor $\mathcal{F}\to\mathcal{F}(U)$
respects direct sums; although perhaps not strictly necessary, it
simplifies the discussion of graded sheaves (and it always applies
in this paper). Denote $\tilde{\mathcal{R}}^{0}=\mathcal{R}$, a sheaf
of rings on $X$.
Let $\mathcal{G}(\tilde{\mathcal{R}})$ denote the category of graded
sheaves of modules over $\tilde{\mathcal{R}}$. This is a Grothendieck
abelian category; the direct sum is given by the usual direct sum
of sheaves. To construct the product of sheaves $\{\mathcal{M}_{i}\}_{i\in I}$,
one takes the sheafification of the pre-sheaf of local sections of
the form $(m_{i})_{i\in I}$ for which there is a bound on the degree;
i.e. $-N\leq\text{deg}(m_{i})\le N$ for a fixed $N\in\mathbb{N}$
and all $i\in I$. Since $X$ is a noetherian space, this pre-sheaf
is actually already a sheaf.
It follows formally that $\mathcal{G}(\tilde{\mathcal{R}})$ has enough
injectives; this can also be proved in the traditional way by constructing
enough injectives in the category of modules over a graded ring and
then noting that the sheaf ${\displaystyle \prod_{x\in X}\mathcal{I}_{x}}$
is injective if $\mathcal{I}_{x}$ is an injective object in the category
of graded $\tilde{\mathcal{R}}_{x}$-modules. We note that an injective
in $\mathcal{G}(\tilde{\mathcal{R}})$ might not be an injective $\mathcal{\tilde{R}}$-module.
However, from the previous remark it follows that any injective in
$\mathcal{G}(\tilde{\mathcal{R}})$ is a summand of a sheaf of the
form $\prod_{x\in X}\mathcal{I}_{x}$; as such sheaves are clearly
flasque it follows that any injective in $\mathcal{G}(\tilde{\mathcal{R}})$
is flasque.
For each $i\in\mathbb{Z}$ we have the exact functor $\mathcal{M}\to\mathcal{M}^{i}$
which takes $\mathcal{G}(\tilde{\mathcal{R}})\to\mathcal{R}-\text{mod}$;
the direct sum of all of these functors is isomorphic to the identity
(on the underlying sheaves of $\mathcal{R}$-modules). Note that the
functor $\mathcal{M}\to\mathcal{M}^{0}$ admits the left adjoint $\mathcal{N}\to\tilde{\mathcal{R}}\otimes_{\mathcal{R}}\mathcal{N}$.
Let $D(\mathcal{G}(\tilde{\mathcal{R}}))$ denote the (unbounded)
derived category of $\mathcal{G}(\tilde{\mathcal{R}})$. Then the
exact functor $\mathcal{M}\to\mathcal{M}^{i}$ derives to a functor
$\mathcal{M}^{\cdot}\to\mathcal{M}^{\cdot,i}$, and we have $\mathcal{M}^{\cdot}={\displaystyle \bigoplus_{i}\mathcal{M}^{\cdot,i}}$
for any complex in $D(\mathcal{G}(\tilde{\mathcal{R}}))$.
\begin{lem}
Let $\varphi:X\to Y$ be a continuous map, and let $\tilde{\mathcal{R}}_{X}$
and $\tilde{\mathcal{R}}_{Y}$ be graded sheaves of algebras on $X$
and $Y$, respectively. Suppose there is a morphism of graded rings
$\varphi^{-1}(\tilde{\mathcal{R}}_{Y})\to\tilde{\mathcal{R}}_{X}$.
Then we can form the derived functor $R\varphi_{*}:D(\mathcal{G}(\tilde{\mathcal{R}}_{X}))\to D(\mathcal{G}(\tilde{\mathcal{R}}_{Y}))$,
as well as $R\varphi_{*}:D(\tilde{\mathcal{R}}_{X}-\text{mod})\to D(\tilde{\mathcal{R}}_{Y}-\text{mod})$.
1) Let $\mathcal{F}_{X}$ denote the forgetful functor from $\mathcal{G}(\tilde{\mathcal{R}}_{X})$
to $\tilde{\mathcal{R}}_{X}-\text{mod}$ (and similarly for $\mathcal{F}_{Y}$).
Then for any $\mathcal{M}^{\cdot}\in D^{+}(\mathcal{G}(\tilde{\mathcal{R}}_{X}))$,
we have $\mathcal{F}_{Y}R\varphi_{*}(\mathcal{M}^{\cdot})\tilde{\to}R\varphi_{*}(\mathcal{F}_{X}\mathcal{M}^{\cdot})$;
where on the right hand side $R\varphi_{*}$ denotes the pushforward
$D^{+}(\tilde{\mathcal{R}}_{X}-\text{mod})\to D^{+}(\tilde{\mathcal{R}}_{Y}-\text{mod})$.
If $X$ and $Y$ have finite dimension, then this isomorphism holds
for all $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}_{X}))$.
2) Again assuming $X$ and $Y$ have finite dimension; for each $i\in\mathbb{Z}$
we have $R\varphi_{*}(\mathcal{M}^{\cdot,i})\tilde{=}R\varphi_{*}(\mathcal{M}^{\cdot})^{i}$
in $D(\mathcal{R}_{Y}-\text{mod})$.
3) For every $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}_{X}))$
and $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}_{Y}))$
we have
\[
R\varphi_{*}R\underline{\mathcal{H}om}_{\varphi^{-1}(\tilde{\mathcal{R}}_{Y})}(\varphi^{-1}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}_{Y}}(\mathcal{N}^{\cdot},R\varphi_{*}\mathcal{M}^{\cdot})
\]
\end{lem}
\begin{proof}
1) The statement about $D^{+}(\mathcal{G}(\tilde{\mathcal{R}}_{X}))$
follows immediately from the fact that injectives are flasque. For
the unbounded derived category, the assumption implies $\varphi_{*}$
has finite homological dimension; and by what we have just proved
the forgetful functor takes acyclic objects to acyclic objects. Therefore
we can apply the composition of derived functors (as in \cite{key-9},
corollary 14.3.5), which implies that, since $\varphi_{*}$, $\mathcal{F}_{X}$,
and $\mathcal{F}_{Y}$ have finite homological dimension in this case,
there is an isomorphism $R\varphi_{*}\circ\mathcal{F}_{X}\tilde{=}R(\varphi_{*}\circ\mathcal{F}_{X})\tilde{\to}R(\mathcal{F}_{Y}\circ\varphi_{*})\tilde{=}\mathcal{F}_{Y}\circ R\varphi_{*}$.
2) As above this follows from \cite{key-9}, corollary 14.3.5, using
$\varphi_{*}(\mathcal{M}^{i})\tilde{=}(\varphi_{*}\mathcal{M})^{i}$.
3) This is essentially identical to the analogous fact in the ungraded
case.
\end{proof}
Now we briefly discuss the internal Hom and tensor on these categories.
If $\mathcal{M}$ and $\mathcal{N}$ are objects of $\mathcal{G}(\tilde{\mathcal{R}})$,
we have the sheaf of $\mathbb{Z}$-modules $\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\mathcal{M},\mathcal{N})$
as well as the sheaf of graded $\mathbb{Z}$-modules ${\displaystyle \underline{\mathcal{H}om}(\mathcal{M},\mathcal{N})=\bigoplus_{i\in\mathbb{Z}}\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\mathcal{M},\mathcal{N}(i))}$;
if $\mathcal{M}$ is locally finitely presented this agrees with $\mathcal{H}om$
on the underlying $\tilde{\mathcal{R}}$-modules. Also, if $\mathcal{M}\in\mathcal{G}(\tilde{\mathcal{R}})$
and $\mathcal{N}\in\mathcal{G}(\tilde{\mathcal{R}}^{opp})$, we have
the tensor product $\mathcal{N}\otimes_{\tilde{\mathcal{R}}}\mathcal{M}$
which is graded in the natural way. Suppose now that $\tilde{\mathcal{S}}$
is another sheaf of graded algebras on $X$.
\begin{lem}
\label{lem:basic-hom-tensor}1) Let $\mathcal{N}$ be a graded $(\mathcal{\tilde{\mathcal{R}}},\mathcal{\tilde{\mathcal{S}}})$
bimodule, $\mathcal{M}\in\mathcal{G}(\tilde{\mathcal{S}})$, and $\mathcal{P}\in\mathcal{G}(\tilde{\mathcal{R}})$.
Then there is an isomorphism
\[
\underline{\mathcal{H}om}_{\mathcal{\tilde{R}}}(\mathcal{N}\otimes_{\mathcal{\tilde{S}}}\mathcal{M},\mathcal{P})\tilde{\to}\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\mathcal{M},\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{N},\mathcal{P}))
\]
Now, if we consider $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{S}}))$
and $\mathcal{P}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$,
we have a map
\[
R\underline{\mathcal{H}om}_{\mathcal{\tilde{R}}}(\mathcal{N}\otimes_{\mathcal{\tilde{S}}}^{L}\mathcal{M}^{\cdot},\mathcal{P}^{\cdot})\to R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\mathcal{M}^{\cdot},R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{N},\mathcal{P}^{\cdot}))
\]
and if, further, $\mathcal{N}$ is flat over $\tilde{\mathcal{S}}^{opp}$,
then this map is an isomorphism.
2) Now suppose $\tilde{\mathcal{S}}\subset\tilde{\mathcal{R}}$ is
a central inclusion of graded rings (in particular $\tilde{\mathcal{S}}$
is commutative). Then for any $\mathcal{M}\in\mathcal{G}(\tilde{\mathcal{R}})$,
$\mathcal{N}\in\mathcal{G}(\tilde{\mathcal{R}}^{opp})$, and $\mathcal{P}\in\mathcal{G}(\tilde{\mathcal{R}})$
there are isomorphisms
\[
\underline{\mathcal{H}om}_{\mathcal{\tilde{S}}}(\mathcal{N}\otimes_{\mathcal{\tilde{R}}}\mathcal{M},\mathcal{P})\tilde{\to}\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{M},\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\mathcal{N},\mathcal{P}))
\]
The analogous result holds at the level of complexes: if $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$,
$\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}^{opp}))$,
and $\mathcal{P}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
there are isomorphisms
\[
R\underline{\mathcal{H}om}_{\mathcal{\tilde{S}}}(\mathcal{N}^{\cdot}\otimes_{\mathcal{\tilde{R}}}^{L}\mathcal{M}^{\cdot},\mathcal{P}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{M}^{\cdot},R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\mathcal{N}^{\cdot},\mathcal{P}^{\cdot}))
\]
\end{lem}
This is proved in a nearly identical way to the ungraded case (c.f.
\cite{key-9}, theorem 14.4.8).
Throughout this work, we will make extensive use of various sheaves
of rings over $W(k)$ and derived categories of sheaves of modules
over them. One of our main techniques will be to work with complexes
of sheaves which are complete in a suitable sense, and then
to apply Nakayama's lemma to deduce properties of those complexes
from their $\text{mod}$ $p$ analogues. The technical set-up for
this is the theory of cohomologically complete complexes (also called
\emph{derived complete complexes}), which has been treated in many
places in the literature, e.g., \cite{key-41}, \cite{key-42},
\cite{key-43}, Tag 091N, and \cite{key-82}, section 3.4. We will
use the reference \cite{key-8}, chapter 1.5, which deals with non-commutative
sheaves of algebras in a very general setting (namely, they work with
sheaves of rings over $\mathbb{Z}[h]$, which are $h$-torsion-free).
However, we actually have to extend the theory slightly to get exactly
what we need, because our interest is in complexes of \emph{graded}
modules, and the useful notion of completeness in this setting is
to demand, essentially, that each graded piece of a module (or complex)
is complete. We will set this up in a way that we can derive the results
in a similar way to \cite{key-8} (or even derive them from \cite{key-8}
sometimes).
From now on, we impose the assumption that $\tilde{\mathcal{R}}$
is a $W(k)$-algebra (where $W(k)$ sits in degree $0$) which is
$p$-torsion-free. Note that we have the sheaf of algebras $\tilde{\mathcal{R}}[p^{-1}]$,
which we regard as an object of $\mathcal{G}(\tilde{\mathcal{R}})$
via $\tilde{\mathcal{R}}[p^{-1}]=\bigoplus_{i\in\mathbb{Z}}\tilde{\mathcal{R}}^{i}[p^{-1}]$.
There is the category $\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}])$
of graded sheaves of modules over $\tilde{\mathcal{R}}[p^{-1}]$,
and there is the functor $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))\to D(\mathcal{G}(\tilde{\mathcal{R}}))$;
which is easily seen to be fully faithful, with essential image consisting
of those complexes in $D(\mathcal{G}(\tilde{\mathcal{R}}))$ for which
$p$ acts invertibly on each cohomology sheaf (compare \cite{key-8},
lemma 1.5.2); we shall therefore simply regard $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$
as being a full subcategory of $D(\mathcal{G}(\tilde{\mathcal{R}}))$.
Then, following \cite{key-8}, definition 1.5.5, we make the
\begin{defn}
\label{def:CC}1) An object $\mathcal{M}^{\cdot}\in D(\mathcal{R}-\text{mod})$
is said to be cohomologically complete if $R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{\cdot})=R\mathcal{H}om_{W(k)}(W(k)[p^{-1}],\mathcal{M}^{\cdot})=0$.
2) An object $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\tilde{R}}))$
is said to be cohomologically complete if \linebreak{}
$R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})=0$.
\end{defn}
We shall see below that the two notions are not quite consistent with
one another; however, we shall only use definition $2)$ when working
with graded objects, so this will hopefully cause no confusion.
Following \cite{key-8}, proposition 1.5.6, we have:
\begin{prop}
\label{prop:Basic-CC-facts}1) The cohomologically complete objects
in $D(\mathcal{G}(\tilde{\mathcal{R}}))$ form a thick triangulated
subcategory, denoted $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$.
An object $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\tilde{R}}))$
is in $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$ iff $R\underline{\mathcal{H}om}(\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})=0$
for all $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$.
2) If $\tilde{\mathcal{S}}$ is any graded sheaf of $p$-torsion-free
$W(k)$-algebras equipped with a graded algebra map $\tilde{\mathcal{S}}\to\tilde{\mathcal{R}}$,
and $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$,
then $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$
iff $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\tilde{\mathcal{S}}))$.
3) For every $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
there is a distinguished triangle
\[
R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})\to\mathcal{M}^{\cdot}\to R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}[-1],\mathcal{M}^{\cdot})
\]
and we have $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}[-1],\mathcal{M}^{\cdot})\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$
while $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})\in D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$.
In particular, the category $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$
is naturally equivalent to the quotient of $D(\mathcal{G}(\tilde{\mathcal{R}}))$
by $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$.
4) Recall that for each object $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
we have, for $i\in\mathbb{Z}$, the $i$th graded piece $\mathcal{M}^{\cdot,i}\in D(\mathcal{R}-\text{mod})$.
Then $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
is in $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$ iff each $\mathcal{M}^{\cdot,i}\in D_{cc}(\mathcal{R}-\text{mod})$.
\end{prop}
\begin{proof}
1) For any $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$
we have
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{=}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}]\otimes_{\tilde{\mathcal{R}}}^{L}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{N}^{\cdot},R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot}))
\]
Here, we have used the fact that $\tilde{\mathcal{R}}[p^{-1}]$ is an $(\tilde{\mathcal{R}},\tilde{\mathcal{R}})$-bimodule,
along with \lemref{basic-hom-tensor}, 1).
Thus if $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})=0$
then $R\underline{\mathcal{H}om}(\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})=0$
as claimed. Therefore $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$
is the (right) orthogonal subcategory to the thick subcategory $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$;
it follows that it is a thick triangulated subcategory.
2) We have
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\tilde{\mathcal{S}}[p^{-1}],\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{S}}[p^{-1}]\otimes_{\tilde{\mathcal{S}}}^{L}\tilde{\mathcal{R}},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})
\]
from which the result follows.
3) This triangle follows by applying $R\underline{\mathcal{H}om}$
to the short exact sequence
\[
\tilde{\mathcal{R}}\to\tilde{\mathcal{R}}[p^{-1}]\to\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}
\]
and noting that $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}},-)$
is the identity functor. The complex $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})$
is contained in $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$ via
the action of $\tilde{\mathcal{R}}[p^{-1}]$ on itself. On the other
hand, as above there is a canonical isomorphism
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}],R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}[-1],\mathcal{M}^{\cdot}))\tilde{\leftarrow}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}]\otimes_{\tilde{\mathcal{R}}}^{L}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}),\mathcal{M}^{\cdot})[1]
\]
and the term on the right is zero since $\tilde{\mathcal{R}}[p^{-1}]\otimes_{\tilde{\mathcal{R}}}^{L}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}})=0$;
therefore
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}[-1],\mathcal{M}^{\cdot})\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))
\]
This shows that the inclusion $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))\to D(\mathcal{G}(\tilde{\mathcal{R}}))$
admits a right adjoint, and the statement about the quotient category
follows immediately.
4) For each $\mathcal{M}\in\mathcal{G}(\tilde{\mathcal{R}})$ there
is an isomorphism of functors
\[
\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{0})\tilde{=}\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M})
\]
given by restricting a morphism on the right hand side to degree $0$;
this follows from the fact that a local section of $\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M})$
is simply a system $(m_{i})$ of local sections of $\mathcal{M}^{0}$
satisfying $pm_{i}=m_{i-1}$; which is exactly a local section of
$\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{0})$.
Now, $\mathcal{M}\to\mathcal{M}^{0}$ admits a left adjoint (namely
$\mathcal{N}\to\tilde{\mathcal{R}}\otimes_{\mathcal{R}}\mathcal{N}$),
and $\mathcal{N}\to\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{N})$
admits a left adjoint (namely $\mathcal{M}\to\mathcal{R}[p^{-1}]\otimes_{\mathcal{R}}\mathcal{M}$).
So by \cite{key-9}, proposition 14.4.7, the derived functor of $\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{0})$
is given by the functor $R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{\cdot,0})$
for any $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$. Therefore
there is an isomorphism of functors
\[
R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{\cdot,0})\tilde{\to}R\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})
\]
Therefore
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})=\bigoplus_{i}R\mathcal{H}om_{\mathcal{G}(\tilde{\mathcal{R}})}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot}(i))\tilde{=}\bigoplus_{i}R\mathcal{H}om_{\mathcal{R}}(\mathcal{R}[p^{-1}],\mathcal{M}^{\cdot,-i})
\]
and the result follows.
\end{proof}
We will refer to the functor $\mathcal{M}^{\cdot}\to R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}]/\tilde{\mathcal{R}}[-1],\mathcal{M}^{\cdot})$
as the \emph{graded derived completion} of $\mathcal{M}^{\cdot}$,
or, usually, simply the completion of $\mathcal{M}^{\cdot}$ if no
confusion seems likely; we will denote it by $\widehat{\mathcal{M}}^{\cdot}$.
A typical example of a cohomologically complete complex in $D(\mathcal{R}-\text{mod})$
is the following: suppose $\mathcal{M}^{\cdot}=\mathcal{M}$ is concentrated
in degree $0$. If $\mathcal{M}$ is $p$-torsion-free, then
$\mathcal{M}$ is $p$-adically complete iff $\mathcal{M}^{\cdot}$
is cohomologically complete (c.f. \cite{key-8}, lemma 1.5.4). By
part $4)$ of the proposition, if $\mathcal{M}=\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
is concentrated in a single homological degree and each $\mathcal{M}^{i}$
is $p$-torsion-free and $p$-adically complete, then $\mathcal{M}^{\cdot}$
is cohomologically complete (in the graded sense). Therefore the two
notions are not in general compatible; an infinite direct sum of $p$-adically
complete modules is generally not complete.
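As a sanity check (this direct verification is not used elsewhere),
one can see by hand that $W(k)$ itself, viewed as a complex concentrated
in degree $0$, is cohomologically complete: using $W(k)[p^{-1}]\tilde{=}\text{hocolim}(W(k)\xrightarrow{p}W(k)\xrightarrow{p}\cdots)$
we get
\[
R\mathcal{H}om_{W(k)}(W(k)[p^{-1}],W(k))\tilde{=}R\lim_{\leftarrow}(\cdots\xrightarrow{p}W(k)\xrightarrow{p}W(k)),
\]
and the latter vanishes: the inverse limit is ${\displaystyle \bigcap_{n}p^{n}W(k)=0}$,
while ${\displaystyle {\lim_{\leftarrow}}^{1}}$ vanishes because,
for any $(y_{n})$, the equations $x_{n}-px_{n+1}=y_{n}$ are solved
by the $p$-adically convergent series ${\displaystyle x_{n}=\sum_{j\geq0}p^{j}y_{n+j}}$.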
Now we develop this notion a bit more:
\begin{lem}
\label{lem:reduction-of-completion}Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$.
Then the natural map $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\to\mathcal{\widehat{M}}^{\cdot}\otimes_{W(k)}^{L}k$
is an isomorphism.
\end{lem}
\begin{proof}
The cone of the map $\mathcal{M}^{\cdot}\to\mathcal{\widehat{M}}^{\cdot}$
is contained in $D(\mathcal{G}(\tilde{\mathcal{R}}[p^{-1}]))$, and
therefore vanishes upon applying $\otimes_{W(k)}^{L}k$.
\end{proof}
Now we can transfer the Nakayama lemma into the graded setting:
\begin{cor}
\label{cor:Nakayama}Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{\tilde{R}}))$,
and let $a\in\mathbb{Z}$. If $\mathcal{H}^{i}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)=0$
for all $i<a$, then $\mathcal{H}^{i}(\mathcal{M}^{\cdot})=0$ for
all $i<a$. In particular $\mathcal{M}^{\cdot}=0$ iff $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k=0$.
Therefore, if $\mathcal{M}^{\cdot},\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{\tilde{R}}))$
and $\eta:\mathcal{M}^{\cdot}\to\mathcal{N}^{\cdot}$ is a morphism
such that $\eta\otimes_{W(k)}^{L}k:\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\to\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k$
is an isomorphism, then $\eta$ is an isomorphism.
\end{cor}
\begin{proof}
By part 4) of the previous proposition this follows immediately from
the analogous fact for cohomologically complete sheaves over $\mathcal{R}$;
which is \cite{key-8}, proposition 1.5.8.
\end{proof}
For later use, we record a few more useful properties of cohomologically
complete sheaves, following \cite{key-8}, propositions 1.5.10 and
1.5.12.
\begin{prop}
\label{prop:Push-and-complete}1) Suppose $\mathcal{M}^{\cdot},\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$,
and let $\tilde{\mathcal{S}}$ be any central graded sub-algebra of
$\tilde{\mathcal{R}}$ which contains $W(k)$. Then $R\underline{\mathcal{H}om}(\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\in D_{cc}(\mathcal{G}(\tilde{\mathcal{S}}))$.
2) Suppose $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
and $\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$.
Then the map $\mathcal{M}^{\cdot}\to\widehat{\mathcal{M}}^{\cdot}$
induces an isomorphism
\[
R\underline{\mathcal{H}om}(\widehat{\mathcal{M}}^{\cdot},\mathcal{N}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}(\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})
\]
3) Suppose $\varphi:X\to Y$ is a continuous map, and suppose $\tilde{\mathcal{R}}$
is a graded sheaf of algebras on $Y$ (satisfying the running assumptions
of the section). Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\varphi^{-1}(\tilde{\mathcal{R}})))$.
Then $R\varphi_{*}(\mathcal{M}^{\cdot})\in D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$.
Therefore, if $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\varphi^{-1}(\tilde{\mathcal{R}})))$
is any complex, then we have
\[
\widehat{R\varphi_{*}(\mathcal{M}^{\cdot})}\tilde{\to}R\varphi_{*}(\widehat{\mathcal{M}^{\cdot}})
\]
\end{prop}
\begin{proof}
1) As $\tilde{\mathcal{S}}$ is central we have
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\tilde{\mathcal{S}}[p^{-1}],R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{M}^{\cdot},\mathcal{N}^{\cdot}))\tilde{\leftarrow}R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\mathcal{M}^{\cdot}\otimes_{\tilde{\mathcal{S}}}^{L}\tilde{\mathcal{S}}[p^{-1}],\mathcal{N}^{\cdot})
\]
\[
\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{M}^{\cdot},R\underline{\mathcal{H}om}_{\tilde{\mathcal{S}}}(\tilde{\mathcal{S}}[p^{-1}],\mathcal{N}^{\cdot}))
\]
where the second isomorphism follows from the flatness of $\tilde{\mathcal{S}}[p^{-1}]$
over $\tilde{\mathcal{S}}$, and the first isomorphism follows directly
from
\[
\tilde{\mathcal{S}}[p^{-1}]\tilde{=}\text{colim}(\tilde{\mathcal{S}}\xrightarrow{p}\tilde{\mathcal{S}}\xrightarrow{p}\tilde{\mathcal{S}}\cdots)\tilde{=}{\displaystyle \text{hocolim}(\tilde{\mathcal{S}}\xrightarrow{p}\tilde{\mathcal{S}}\xrightarrow{p}\tilde{\mathcal{S}}\cdots)}
\]
so the result follows from part $2)$ of \propref{Basic-CC-facts}.
2) This follows since $\text{cone}(\mathcal{M}^{\cdot}\to\widehat{\mathcal{M}}^{\cdot})$
is contained in the orthogonal to $D_{cc}(\mathcal{G}(\tilde{\mathcal{R}}))$,
by definition.
3) For the first claim, we use the adjunction
\[
R\varphi_{*}R\underline{\mathcal{H}om}_{\varphi^{-1}(\tilde{\mathcal{R}})}(\varphi^{-1}(\tilde{\mathcal{R}})[p^{-1}],\mathcal{M}^{\cdot})\tilde{=}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\tilde{\mathcal{R}}[p^{-1}],R\varphi_{*}(\mathcal{M}^{\cdot}))
\]
along with part $2)$ of \propref{Basic-CC-facts}. For the second,
we use the distinguished triangle
\[
R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})\to\mathcal{M}^{\cdot}\to\widehat{\mathcal{M}^{\cdot}}
\]
Since $p$ acts invertibly on $R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot})$,
it will also act invertibly on\linebreak{}
$R\varphi_{*}(R\underline{\mathcal{H}om}(\tilde{\mathcal{R}}[p^{-1}],\mathcal{M}^{\cdot}))$,
and the result follows from the fact that $R\varphi_{*}(\widehat{\mathcal{M}^{\cdot}})$
is already cohomologically complete.
\end{proof}
In using this theory, it is also useful to note the following straightforward
\begin{lem}
\label{lem:Hom-tensor-and-reduce}Let $\tilde{\mathcal{R}}$ be as
above. Then, for any $\mathcal{M}^{\cdot},\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}))$
we have
\[
R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}}(\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\otimes_{W(k)}^{L}k\tilde{\to}R\underline{\mathcal{H}om}_{\tilde{\mathcal{R}}/p}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k,\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k)
\]
If we have $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\tilde{\mathcal{R}}^{\text{opp}}))$,
then we have
\[
(\mathcal{N}^{\cdot}\otimes_{\tilde{\mathcal{R}}}^{L}\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k\tilde{\to}(\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{\tilde{\mathcal{R}}/p}^{L}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)
\]
\end{lem}
To close out this section, we will give an explicit description of
the (ungraded) cohomological completion functor in a special case.
Let $\mathcal{R}$ be a $p$-torsion-free sheaf of rings on $X$ as
above; suppose $\mathcal{R}$ is left noetherian. Let us suppose that,
in addition, the $p$-adic completion $\widehat{\mathcal{R}}$ is
$p$-torsion-free, left noetherian, and that there exists a base of
open subsets $\mathcal{B}$ on $X$ such that, for any $U\in\mathcal{B}$
and any coherent sheaf $\mathcal{M}$ of $\mathcal{R}_{0}:=\mathcal{R}/p=\widehat{\mathcal{R}}/p$-modules
on $U$, we have $H^{i}(U,\mathcal{M})=0$ for all $i>0$
(these are assumptions $1.2.2$ and $1.2.3$ of \cite{key-8}; they
are always satisfied in this paper). Then we have
\begin{prop}
\label{prop:Completion-for-noeth}Let $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{R}-\text{mod})$.
Then there is an isomorphism
\[
\widehat{\mathcal{M}^{\cdot}}\tilde{=}\widehat{\mathcal{R}}\otimes_{\mathcal{R}}^{L}\mathcal{M}^{\cdot}
\]
where $\widehat{\mathcal{M}^{\cdot}}$ denotes the derived completion
as usual.
\end{prop}
\begin{proof}
Let $\mathcal{M}$ be a coherent $\mathcal{R}$-module, and let $\widehat{\mathcal{M}}$
denote its $p$-adic completion. By \cite{key-8}, lemma 1.1.6 and
the assumption on $\mathcal{B}$, we have ${\displaystyle \widehat{\mathcal{M}}(U)=\lim_{\leftarrow}\mathcal{M}(U)/p^{n}}$
for any $U\in\mathcal{B}$. So, by the noetherian hypothesis, we see
that the natural map $\widehat{\mathcal{R}}(U)\otimes_{\mathcal{R}(U)}\mathcal{M}(U)\to\widehat{\mathcal{M}}(U)$
is an isomorphism for all $U\in\mathcal{B}$. It follows that the
map $\widehat{\mathcal{R}}\otimes_{\mathcal{R}}\mathcal{M}\to\widehat{\mathcal{M}}$
is an isomorphism of sheaves, and therefore (as $p$-adic completion
is exact on finitely generated $\mathcal{R}(U)$-modules) that $\widehat{\mathcal{R}}$
is flat over $\mathcal{R}$.
Now consider an arbitrary $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{R}-\text{mod})$.
The above implies
\[
\mathcal{H}^{i}(\widehat{\mathcal{R}}\otimes_{\mathcal{R}}^{L}\mathcal{M}^{\cdot})\tilde{=}\widehat{\mathcal{R}}\otimes_{\mathcal{R}}\mathcal{H}^{i}(\mathcal{M}^{\cdot})\tilde{\to}\widehat{\mathcal{H}^{i}(\mathcal{M}^{\cdot})}
\]
Therefore $\widehat{\mathcal{R}}\otimes_{\mathcal{R}}^{L}\mathcal{M}^{\cdot}\in D_{coh}^{b}(\widehat{\mathcal{R}}-\text{mod})$,
which is contained in $D_{cc}(\mathcal{R}-\text{mod})$ by \cite{key-8},
theorem 1.6.1.
Let $\mathcal{C}^{\cdot}$ be the cone of the map $\mathcal{M}^{\cdot}\to\widehat{\mathcal{R}}\otimes_{\mathcal{R}}^{L}\mathcal{M}^{\cdot}$.
Then we have a long exact sequence
\[
\mathcal{H}^{i-1}(\mathcal{C}^{\cdot})\to\mathcal{H}^{i}(\mathcal{M}^{\cdot})\to\widehat{\mathcal{H}^{i}(\mathcal{M}^{\cdot})}\to\mathcal{H}^{i}(\mathcal{C}^{\cdot})
\]
and since both the kernel and cokernel of $\mathcal{H}^{i}(\mathcal{M}^{\cdot})\to\widehat{\mathcal{H}^{i}(\mathcal{M}^{\cdot})}$
are in $\mathcal{R}[p^{-1}]-\text{mod}$, we conclude that $\mathcal{C}^{\cdot}\in D(\mathcal{R}[p^{-1}]-\text{mod})$.
Since $\widehat{\mathcal{R}}\otimes_{\mathcal{R}}^{L}\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{R}-\text{mod})$,
the result follows from the fact that $D_{cc}(\mathcal{R}-\text{mod})$
is the quotient of $D(\mathcal{R}-\text{mod})$ by $D(\mathcal{R}[p^{-1}]-\text{mod})$.
\end{proof}
\section{\label{sec:The-Algebra}The Algebra $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$}
To define the algebra $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
and prove \thmref{D01}, we are going to apply the basic gauge construction
(\exaref{BasicGaugeConstruction}) to Berthelot's differential operators.
Let $\mathfrak{X}$ be a smooth formal scheme over $W(k)$, and $X$
its special fibre. If $\mathfrak{X}$ is affine, then we denote $\mathfrak{X}=\text{Specf}(\mathcal{A})$,
and $X=\text{Spec}(A)$.
\begin{defn}
\label{def:D^(0,1)-in-the-lifted-case} We set
\[
\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}:=\{\Phi\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(1)}|p^{i}\Phi\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}\}
\]
We let $f:\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}\to\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i+1}$
denote the inclusion, and $v:\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}\to\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i-1}$
denote the multiplication by $p$. If $\Phi_{1}\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}$
and $\Phi_{2}\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),j}$,
then $\Phi_{1}\cdot\Phi_{2}\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i+j}$,
and in this way we give
\[
\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}=\bigoplus_{i\in\mathbb{Z}}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}
\]
the structure of a sheaf of graded algebras over $D(W(k))$.
Now suppose $\mathfrak{X}=\text{Specf}(\mathcal{A})$. Then we have
the ring-theoretic analogue of the above: define $\widehat{D}_{\mathcal{A}}^{(0,1),i}:=\{\Phi\in\widehat{D}_{\mathcal{A}}^{(1)}|p^{i}\Phi\in\widehat{D}_{\mathcal{A}}^{(0)}\}$,
and as above we obtain the graded ring
\[
\widehat{D}_{\mathcal{A}}^{(0,1)}=\bigoplus_{i}\widehat{D}_{\mathcal{A}}^{(0,1),i}
\]
over $D(W(k))$.
In this case, we also have the finite-order analogue: define $D_{\mathcal{A}}^{(0,1),i}:=\{\Phi\in D_{\mathcal{A}}^{(1)}|p^{i}\Phi\in D_{\mathcal{A}}^{(0)}\}$,
and as above we obtain the graded ring
\[
D_{\mathcal{A}}^{(0,1)}=\bigoplus_{i}D_{\mathcal{A}}^{(0,1),i}
\]
over $D(W(k))$.
\end{defn}
It is easy to see that $\widehat{D}_{\mathcal{A}}^{(0,1),i}=\Gamma(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i})$
when $\mathfrak{X}=\text{Specf}(\mathcal{A})$.
With the help of local coordinates, this algebra is not too difficult
to study. We now suppose $\mathfrak{X}=\text{Specf}(\mathcal{A})$
where $\mathcal{A}$ possesses local coordinates; i.e., there is a
collection $\{x_{i}\}_{i=1}^{n}\in\mathcal{A}$ and derivations $\{\partial_{i}\}_{i=1}^{n}$
such that $\partial_{i}(x_{j})=\delta_{ij}$ and such that $\{\partial_{i}\}_{i=1}^{n}$
form a free basis for the $\mathcal{A}$-module of $W(k)$-linear
derivations. We let $\partial_{i}^{[p]}:=\partial_{i}^{p}/p!$; this
is a differential operator of order $p$ on $\mathcal{A}$.
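For example, in one variable (say $\mathcal{A}$ is the $p$-adic
completion of $W(k)[x]$, with coordinate $x$ and derivation $\partial$
satisfying $\partial(x)=1$), one computes
\[
\partial^{[p]}(x^{m})=\frac{\partial^{p}(x^{m})}{p!}=\frac{m(m-1)\cdots(m-p+1)}{p!}x^{m-p}=\binom{m}{p}x^{m-p}
\]
for $m\geq p$ (both sides vanish for $0\leq m<p$), which illustrates
why $\partial_{i}^{[p]}$ is an integral differential operator on
$\mathcal{A}$ even though its definition involves division by $p!$.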
\begin{lem}
\label{lem:Basic-structure-of-D_A^(i)} For $i\geq0$ we have that
$\widehat{D}_{\mathcal{A}}^{(0,1),i}$ is the left $\widehat{D}_{\mathcal{A}}^{(0)}$-module\footnote{In fact, it is also the right $\widehat{D}_{\mathcal{A}}^{(0)}$-module
generated by the same elements, as an identical proof shows } generated by $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where ${\displaystyle \sum_{t=1}^{n}j_{t}\leq i}$. For $i\leq0$
we have that $\widehat{D}_{\mathcal{A}}^{(0,1),i}=p^{-i}\cdot\widehat{D}_{\mathcal{A}}^{(0)}$.
\end{lem}
\begin{proof}
Let $i>0$. Clearly the module described is contained in $\widehat{D}_{\mathcal{A}}^{(0,1),i}$.
For the converse, we begin with the analogous finite-order version
of the statement. Namely, let $\Phi\in D_{\mathcal{A}}^{(1)}$ be
such that $p^{i}\Phi\in D_{\mathcal{A}}^{(0)}$. We can write
\[
\Phi=\sum_{I,J}a_{I,J}\partial_{1}^{i_{1}}(\partial_{1}^{[p]})^{j_{1}}\cdots\partial_{n}^{i_{n}}(\partial_{n}^{[p]})^{j_{n}}=\sum_{I,J}a_{I,J}\frac{\partial_{1}^{i_{1}+pj_{1}}\cdots\partial_{n}^{i_{n}+pj_{n}}}{(p!)^{|J|}}
\]
where $|J|=j_{1}+\dots+j_{n}$, and the sum is finite. After collecting
like terms together, we may suppose that $0\leq i_{j}<p$. In that
case, the $a_{I,J}\in\mathcal{A}$ are uniquely determined by $\Phi$,
and $\Phi\in D_{\mathcal{A}}^{(0)}$ iff $p^{|J|}|a_{I,J}$ for all
$I,J$. So, if $p^{i}\Phi\in D_{\mathcal{A}}^{(0)}$, we have ${\displaystyle a_{I,J}\frac{p^{i}}{p^{|J|}}\in\mathcal{A}}$.
Thus whenever $|J|>i$ we have $a_{I,J}\in p^{|J|-i}\mathcal{A}$.
On the other hand
\[
p^{|J|-i}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}=u\cdot\partial_{1}^{pj'_{1}}\cdots\partial_{n}^{pj'_{n}}\cdot(\partial_{1}^{[p]})^{j''_{1}}\cdots(\partial_{n}^{[p]})^{j''_{n}}
\]
where $u$ is a unit in $\mathbb{Z}_{p}$, ${\displaystyle \sum j'_{i}=|J|-i}$,
and ${\displaystyle \sum j''_{i}=i}$ (this follows from the relation
$p!\partial_{j}^{[p]}=\partial_{j}^{p}$). Therefore if $|J|>i$ we
have
\[
a_{I,J}\frac{\partial_{1}^{i_{1}+pj_{1}}\cdots\partial_{n}^{i_{n}+pj_{n}}}{(p!)^{|J|}}\in D_{\mathcal{A}}^{(0)}\cdot(\partial_{1}^{[p]})^{j''_{1}}\cdots(\partial_{n}^{[p]})^{j''_{n}}
\]
It follows that $\Phi$ is contained in the $D_{\mathcal{A}}^{(0)}$-submodule
generated by $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where $j_{1}+\dots+j_{n}\leq i$. So this submodule is exactly $\{\Phi\in D_{\mathcal{A}}^{(1)}|p^{i}\Phi\in D_{\mathcal{A}}^{(0)}\}$.
Now let $\Phi\in\widehat{D}_{\mathcal{A}}^{(0,1),i}$. Then we can
write
\[
p^{i}\Phi=\sum_{j=0}^{\infty}p^{j}\Phi_{j}
\]
where $\Phi_{j}\in D_{\mathcal{A}}^{(0)}$. Therefore, if $j\le i$
we have, by the previous paragraph, that $p^{-i}(p^{j}\Phi_{j})$
is contained in the $D_{\mathcal{A}}^{(0)}$ submodule generated by
$\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where $j_{1}+\dots+j_{n}\leq i$. So the result follows from ${\displaystyle \Phi=\sum_{j=0}^{i}p^{j-i}\Phi_{j}+\sum_{j=i+1}^{\infty}p^{j-i}\Phi_{j}}$
as the second term in this sum is contained in $\widehat{D}_{\mathcal{A}}^{(0)}$.
This proves the lemma for $i\geq0$; for $i\leq0$ it follows
immediately from the definition.
\end{proof}
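Spelling the lemma out in the case $n=1$ (purely for illustration),
and writing $\partial=\partial_{1}$: one has
\[
\widehat{D}_{\mathcal{A}}^{(0,1),i}=\sum_{j=0}^{i}\widehat{D}_{\mathcal{A}}^{(0)}\cdot(\partial^{[p]})^{j}\,\,\,(i\geq0),\qquad\widehat{D}_{\mathcal{A}}^{(0,1),i}=p^{-i}\widehat{D}_{\mathcal{A}}^{(0)}\,\,\,(i\leq0),
\]
with $f$ given by the evident inclusions and $v$ by multiplication
by $p$.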
From this it follows that ${\displaystyle \widehat{D}_{\mathcal{A}}^{(0,1),\infty}:=\lim_{\rightarrow}\widehat{D}_{\mathcal{A}}^{(0,1),i}}$
is the sub-algebra of $\text{End}_{W(k)}(\mathcal{A})$ generated
by $\widehat{D}_{\mathcal{A}}^{(0)}$ and $\{\partial_{1}^{[p]},\dots,\partial_{n}^{[p]}\}$.
We have
\begin{lem}
\label{lem:Basic-Structure-of-D^(1)} The algebra $\widehat{D}_{\mathcal{A}}^{(0,1),\infty}$
is a (left and right) noetherian ring, whose $p$-adic completion
is isomorphic to $\widehat{D}_{\mathcal{A}}^{(1)}$. Further, we have
$\widehat{D}_{\mathcal{A}}^{(0,1),\infty}[p^{-1}]\tilde{=}\widehat{D}_{\mathcal{A}}^{(0)}[p^{-1}]$.
\end{lem}
\begin{proof}
First, put a filtration on $\widehat{D}_{\mathcal{A}}^{(0,1),\infty}$
by setting $F^{j}(\widehat{D}_{\mathcal{A}}^{(0,1),\infty})$ to be
the $\widehat{D}_{\mathcal{A}}^{(0)}$-submodule generated by $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where $j_{1}+\dots+j_{n}\leq j$. Then $\text{gr}(\widehat{D}_{\mathcal{A}}^{(0,1),\infty})$
is a quotient of a polynomial ring $\widehat{D}_{\mathcal{A}}^{(0)}[T_{1},\dots,T_{n}]$
where $T_{i}$ is sent to the class of $\partial_{i}^{[p]}$ in $\text{gr}_{1}(\widehat{D}_{\mathcal{A}}^{(0,1),\infty})$.
To see this, we need to show that the image of $\partial_{i}^{[p]}$
in $\text{gr}(\widehat{D}_{\mathcal{A}}^{(0,1),\infty})$ commutes
with $\widehat{D}_{\mathcal{A}}^{(0)}=\text{gr}^{0}(\widehat{D}_{\mathcal{A}}^{(0,1),\infty})$;
this follows from the relation
\[
[\partial_{i}^{[p]},a]=\sum_{j=1}^{p}\partial_{i}^{[j]}(a)\partial_{i}^{[p-j]}\in\widehat{D}_{\mathcal{A}}^{(0)}
\]
for any $a\in\mathcal{A}$. So the fact that $\widehat{D}_{\mathcal{A}}^{(0,1),\infty}$
is a (left and right) noetherian ring follows from the Hilbert basis
theorem and the fact that $\widehat{D}_{\mathcal{A}}^{(0)}$ is left
and right noetherian.
Now we compute the $p$-adic completion of $\widehat{D}_{\mathcal{A}}^{(0,1),\infty}$.
Inside $\text{End}_{W(k)}(\mathcal{A})$, we have
\[
D_{\mathcal{A}}^{(1)}\subset\widehat{D}_{\mathcal{A}}^{(0,1),\infty}\subset\widehat{D}_{\mathcal{A}}^{(1)}
\]
and so, for all $n>0$ we have
\[
D_{\mathcal{A}}^{(1)}/p^{n}\to\widehat{D}_{\mathcal{A}}^{(0,1),\infty}/p^{n}\to\widehat{D}_{\mathcal{A}}^{(1)}/p^{n}
\]
and the composition is an isomorphism. Thus $D_{\mathcal{A}}^{(1)}/p^{n}\to\widehat{D}_{\mathcal{A}}^{(0,1),\infty}/p^{n}$
is injective. On the other hand, suppose $\Phi\in\widehat{D}_{\mathcal{A}}^{(0,1),\infty}$.
By definition we can write
\[
\Phi=\sum_{I}\varphi_{I}\cdot(\partial_{1}^{[p]})^{i_{1}}\cdots(\partial_{n}^{[p]})^{i_{n}}
\]
where $I=(i_{1},\dots,i_{n})$ is a multi-index, $\varphi_{I}\in\widehat{D}_{\mathcal{A}}^{(0)}$,
and the sum is finite. Choose elements $\psi_{I}\in D_{\mathcal{A}}^{(0)}$
such that $\psi_{I}-\varphi_{I}\in p^{n}\cdot\widehat{D}_{\mathcal{A}}^{(0)}$
(this is possible since $\widehat{D}_{\mathcal{A}}^{(0)}$ is the
$p$-adic completion of $D_{\mathcal{A}}^{(0)}$). Then if we set
\[
\Phi'=\sum_{I}\psi_{I}\cdot(\partial_{1}^{[p]})^{i_{1}}\cdots(\partial_{n}^{[p]})^{i_{n}}\in D_{\mathcal{A}}^{(1)}
\]
we see that the class of $\Phi'$ in $D_{\mathcal{A}}^{(1)}/p^{n}$
maps to the class of $\Phi\in\widehat{D}_{\mathcal{A}}^{(0,1),\infty}/p^{n}$.
Thus $D_{\mathcal{A}}^{(1)}/p^{n}\to\widehat{D}_{\mathcal{A}}^{(0,1),\infty}/p^{n}$
is onto and therefore an isomorphism, and the completion result follows
by taking the inverse limit.
Finally, since each $\partial_{i}^{[p]}=\partial_{i}^{p}/p!$ is contained
in $\widehat{D}_{\mathcal{A}}^{(0)}[p^{-1}]$, we must have $\widehat{D}_{\mathcal{A}}^{(0)}\subset\widehat{D}_{\mathcal{A}}^{(0,1),\infty}\subset\widehat{D}_{\mathcal{A}}^{(0)}[p^{-1}]$,
so that $\widehat{D}_{\mathcal{A}}^{(0,1),\infty}[p^{-1}]\tilde{=}\widehat{D}_{\mathcal{A}}^{(0)}[p^{-1}]$.
\end{proof}
\begin{cor}
$\widehat{D}_{\mathcal{A}}^{(0,1)}$ is a (left and right) noetherian
ring, which is finitely generated as an algebra over $\widehat{D}_{\mathcal{A}}^{(0)}[f,v]$.
Therefore the sheaf $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
is a coherent, locally noetherian sheaf of rings which is stalk-wise
noetherian.
\end{cor}
This follows immediately from the above. Set $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),-\infty}:=\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/(v-1)\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$,
while $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}:=\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/(f-1)$
has $p$-adic completion equal to $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$.
\subsection{Generators and Relations, Local Coordinates}
In addition to the description above via endomorphisms of $\mathcal{O}_{\mathfrak{X}}$,
it is also useful to have a more concrete (local) description of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
and, especially, of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}/p$. Suppose
$\mathfrak{X}=\text{Specf}(\mathcal{A})$ possesses local coordinates
as above. We'll start by describing ${\displaystyle \widehat{D}_{\mathcal{A}}^{(0,1),+}:=\bigoplus_{i=0}^{\infty}\widehat{D}_{\mathcal{A}}^{(0,1),i}}$.
\begin{defn}
Let $M_{\mathcal{A}}$ be the free graded $\mathcal{A}$-module on
generators $\{\xi_{i}\}_{i=1}^{n}$ (in degree $0$), and $f$ and
$\{\xi_{i}^{[p]}\}_{i=1}^{n}$ (in degree $1$). Let $\mathcal{B}^{(0,1),+}$
be the quotient of the tensor algebra $T_{\mathcal{A}}(M_{\mathcal{A}})$
by the relations $[f,m]$ (for all $m\in M_{\mathcal{A}}$), $[\xi_{i},a]-\partial_{i}(a)$
(for all $i$, and for any $a\in\mathcal{A}$), $[\xi_{i},\xi_{j}]$, $[\xi_{i}^{[p]},\xi_{j}^{[p]}]$,
$[\xi_{i}^{[p]},\xi_{j}]$ (for all $i,j$), ${\displaystyle [\xi_{i}^{[p]},a]-f\cdot\sum_{r=0}^{p-1}\frac{\partial_{i}^{p-r}}{(p-r)!}(a)\cdot\frac{\xi_{i}^{r}}{r!}}$
(for all $i$, and for any $a\in\mathcal{A}$), $f\xi_{i}^{p}-p!\xi_{i}^{[p]}$
for all $i$, and $\xi_{i}^{p}\xi_{j}^{[p]}-\xi_{j}^{p}\xi_{i}^{[p]}$
for all $i$ and $j$.
The algebra $\mathcal{B}^{(0,1),+}$ inherits a grading from $T_{\mathcal{A}}(M_{\mathcal{A}})$.
Let $\mathcal{C}^{(0,1),+}$ be the graded ring obtained by $p$-adically
completing each component of $\mathcal{B}^{(0,1),+}$.
\end{defn}
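As a quick consistency check on these relations (in one variable,
writing $\xi=\xi_{1}$ and $x=x_{1}$; this computation is included
only for illustration), taking $a=x$ in the commutation relation
for $\xi_{i}^{[p]}$ gives
\[
[\xi^{[p]},x]=f\cdot\frac{\xi^{p-1}}{(p-1)!},
\]
since $\partial^{p-r}(x)=0$ unless $r=p-1$; this matches the operator
identity $[\partial^{[p]},x]=\partial^{p-1}/(p-1)!$, placed in degree
$1$ (whence the factor of $f$).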
Then we have
\begin{lem}
\label{lem:Reduction-is-correct}There is an isomorphism of graded
algebras $\mathcal{C}^{(0,1),+}\tilde{\to}\widehat{D}_{\mathcal{A}}^{(0,1),+}$.
\end{lem}
\begin{proof}
There is an evident map $T_{\mathcal{A}}(M_{\mathcal{A}})\to\widehat{D}_{\mathcal{A}}^{(0,1),+}$
which is the identity on $\mathcal{A}$ and which sends $\xi_{i}\to\partial_{i}$,
$f\to f$ and $\xi_{i}^{[p]}\to\partial_{i}^{[p]}$. Clearly this
induces a graded map $\mathcal{B}^{(0,1),+}\to\widehat{D}_{\mathcal{A}}^{(0,1),+}$.
Since each graded component of $\widehat{D}_{\mathcal{A}}^{(0,1),+}$
is $p$-adically complete, we obtain a map $\mathcal{C}^{(0,1),+}\to\widehat{D}_{\mathcal{A}}^{(0,1),+}$.
Let us show that it is an isomorphism.
We begin with the surjectivity. In degree $0$, we have that $\mathcal{B}^{(0,1),0}$
is generated by $\mathcal{A}$ and $\{\xi_{i}\}_{i=1}^{n}$ and satisfies
$[\xi_{i},a]=\partial_{i}(a)$ for all $i$ and all $a\in\mathcal{A}$.
Thus the obvious map $\mathcal{B}^{(0,1),0}\to D_{\mathcal{A}}^{(0)}$
is an isomorphism, and therefore so is the completion $\mathcal{C}^{(0,1),0}\to\widehat{D}_{\mathcal{A}}^{(0)}=\widehat{D}_{\mathcal{A}}^{(0,1),0}$.
Further, we saw above (in \lemref{Basic-structure-of-D_A^(i)}) that
each $\widehat{D}_{\mathcal{A}}^{(0,1),i}$ (for $i\geq0$)
is generated, as a module over $\widehat{D}_{\mathcal{A}}^{(0)}$,
by terms of the form $\{f^{i_{0}}(\partial_{1}^{[p]})^{i_{1}}\cdots(\partial_{n}^{[p]})^{i_{n}}\}$
where $i_{0}+i_{1}+\dots+i_{n}=i$. By definition, $\mathcal{C}^{(0,1),i}$
is exactly the $\mathcal{C}^{(0,1),0}$-module generated by terms
of the form $\{f^{i_{0}}(\xi_{1}^{[p]})^{i_{1}}\cdots(\xi_{n}^{[p]})^{i_{n}}\}$.
Thus we see that the map surjects onto the piece of degree $i$ for
all $i$; hence the map is surjective.
To show the injectivity, consider the graded ring ${\displaystyle \mathcal{A}[f]=\bigoplus_{i=0}^{\infty}\mathcal{A}}$.
The algebra $\widehat{D}_{\mathcal{A}}^{(0,1),+}$ acts on $\mathcal{A}[f]$
as follows: if $\Phi\in\widehat{D}_{\mathcal{A}}^{(0,1),j}$ then
$\Phi\cdot(af^{i})=\Phi(a)f^{i+j}$, where $\Phi(a)$ is the usual
action of $\Phi$ on $\mathcal{A}$, coming from the fact that $\Phi\in\widehat{D}_{\mathcal{A}}^{(1)}$.
In addition, $\mathcal{C}^{(0,1),+}$ acts on $\mathcal{A}[f]$ via
$\xi_{i}(af^{j})=\partial_{i}(a)f^{j}$ and $\xi_{i}^{[p]}(af^{j})=\partial_{i}^{[p]}(a)f^{j+1}$.
This action agrees with the composed map
\[
\mathcal{C}^{(0,1),+}\to\widehat{D}_{\mathcal{A}}^{(0,1),+}\to\text{End}_{W(k)}(\mathcal{A}[f])
\]
where the latter map comes from the action of $\widehat{D}_{\mathcal{A}}^{(0,1),+}$
on $\mathcal{A}[f]$. We will therefore be done if we can show that
this composition is injective.
For this, we proceed by induction on the degree $i$. When $i=0$
it follows immediately from the fact that $\mathcal{C}^{(0,1),0}\tilde{=}\widehat{D}_{\mathcal{A}}^{(0)}$.
Let $\Phi\in\mathcal{C}^{(0,1),i}$. If $\Phi$ acts as zero on $\mathcal{A}[f]$,
we will show that $\Phi\in f\cdot\mathcal{C}^{(0,1),i-1}$; the induction
assumption (and the fact that $f$ acts injectively on $\mathcal{A}[f]$)
then implies that $\Phi=0$.
Write
\[
\Phi=\sum_{J}\Phi_{J}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}-f^{i}\Psi_{0}-\sum_{s=1}^{i-1}f^{i-s}\sum_{J}\Psi_{sJ}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}
\]
where, in the first sum, each $J$ satisfies ${\displaystyle i=|J|=\sum_{t=1}^{n}j_{t}}$,
and in the second sum we have $|J|=i-s$, and $\Phi_{J},\Psi_{0},\Psi_{sJ}\in\mathcal{C}^{(0,1),0}\tilde{=}\widehat{D}_{\mathcal{A}}^{(0)}$.
We shall show that every term in the first sum is contained in $f\cdot\mathcal{C}^{(0,1),i-1}$.
Expanding in terms of monomials in the $\{\xi_{i}\}$, denoted $\{\xi^{I}\}$,
we obtain an equation
\[
\Phi=\sum_{I,J}a_{J,I}\xi^{I}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}-f^{i}\sum_{I}b_{0,I}\xi^{I}-\sum_{s=1}^{i-1}f^{i-s}\sum_{I,J}b_{s,I,J}\xi^{I}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}
\]
where $a_{J,I}\to0$, $b_{0,I}\to0$, and $b_{s,I,J}\to0$ (in the
$p$-adic topology on $\mathcal{A}$) as $|I|\to\infty$. For any
multi-index $J=(j_{1},\dots,j_{n})$ let $pJ=(pj_{1},\dots,pj_{n})$.
The relations $\xi_{i}^{p}\xi_{j}^{[p]}=\xi_{j}^{p}\xi_{i}^{[p]}$
(for all $i,j$ ) in $\mathcal{C}^{(0,1),+}$ ensure that $\xi^{I}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}=\xi^{I'}(\xi_{1}^{[p]})^{j'_{1}}\cdots(\xi_{n}^{[p]})^{j'_{n}}$
whenever $I+pJ=I'+pJ'$ and $|J|=|J'|$. Since, in the sum ${\displaystyle \sum_{I,J}a_{J,I}\xi^{I}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}}$,
we have $|J|=i$ for all $J$, we may collect terms together and assume
that each multi-index $I+pJ$ is represented only once.
Now, the fact that $\Phi$ acts as zero on $\mathcal{A}[f]$ implies
that the differential operators ${\displaystyle \sum_{I,J}a_{J,I}\partial^{I}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}}$
and ${\displaystyle \sum_{I}b_{0,I}\partial^{I}+\sum_{s=1}^{i-1}\sum_{I,J}b_{s,I,J}\partial^{I}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}}$
act as the same endomorphism on $\mathcal{A}$. Therefore, for each
$a_{J,I}$ which is nonzero, we have
\[
a_{J,I}\partial^{I}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}=\sum_{I'=I+pJ}b_{0,I'}\partial^{I'}+\sum_{s=1}^{i-1}\sum_{I'+pJ'=I+pJ}b_{s,I',J'}\partial^{I'}(\partial_{1}^{[p]})^{j'_{1}}\cdots(\partial_{n}^{[p]})^{j'_{n}}
\]
Now, after inverting $p$, and using $\partial_{i}^{[p]}=\partial_{i}^{p}/p!$
inside $\widehat{\mathcal{D}}_{\mathcal{A}}^{(1)}[p^{-1}]$, we obtain
the equation
\[
\frac{a_{J,I}}{(p!)^{i}}=b_{0,I'}+\sum_{s=1}^{i-1}\sum_{I',J'}\frac{b_{s,I',J'}}{(p!)^{s}}
\]
which implies $a_{J,I}\in p\cdot\mathcal{A}$. But we have the relation
$f\xi_{i}^{p}=p!\xi_{i}^{[p]}$ in $\mathcal{C}^{(0,1),+}$; i.e.,
$p\xi_{i}^{[p]}\in f\cdot\mathcal{C}^{(0,1),+}$. Therefore $a_{J,I}\in p\cdot\mathcal{A}$
implies $a_{J,I}\xi^{I}(\xi_{1}^{[p]})^{j_{1}}\cdots(\xi_{n}^{[p]})^{j_{n}}\in f\cdot\mathcal{C}^{(0,1),i-1}$.
Since this holds for all $I,J$, we see that in fact $\Phi\in f\cdot\mathcal{C}^{(0,1),i-1}$
as desired.
\end{proof}
\begin{rem}
Given the isomorphism of the theorem, from now on, we shall denote
$\xi_{i}$ by $\partial_{i}$ and $\xi_{i}^{[p]}$ by $\partial_{i}^{[p]}$
inside $\mathcal{C}^{(0,1),+}$.
\end{rem}
Next, we have
\begin{lem}
\label{lem:linear-independance-over-D_0-bar} Suppose that $\{\Phi_{sJ}\}$
are elements of $\widehat{D}_{\mathcal{A}}^{(0)}$, and suppose that,
for some $i\geq1$, we have
\[
\sum_{s=0}^{i-1}\sum_{|J|=s}f^{i-s}\Phi_{sJ}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\in p\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i}
\]
in $\widehat{D}_{\mathcal{A}}^{(0,1),+}$. Then each $\Phi_{sJ}$
is contained in the right ideal generated by $\{\partial_{1}^{p},\dots,\partial_{n}^{p}\}$
and $p$.
\end{lem}
\begin{proof}
As in the previous proof, we may expand the $\Phi_{sJ}$ in terms of
the $\{\partial^{I}\}$; writing $\Phi$ for the element in question, we obtain
\begin{equation}
\Phi=\sum_{s=0}^{i-1}f^{i-s}\sum_{I,J,|J|=s}b_{s,I,J}\partial^{I}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\label{eq:first-form-for-phi}
\end{equation}
where $b_{s,I,J}\to0$ as $|I|\to\infty$. On the other hand, since
$\Phi\in p\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i}$, and $\widehat{D}_{\mathcal{A}}^{(0,1),i}$
is generated over $\widehat{D}_{\mathcal{A}}^{(0)}$ by $\{f^{i-s}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{0\leq s\leq i,|J|=s}$,
we also obtain
\begin{equation}
\Phi=\sum_{s=0}^{i}f^{i-s}\sum_{I,J,|J|=s}a_{s,I,J}\partial^{I}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\label{eq:second-form-for-phi}
\end{equation}
where each $a_{s,I,J}$ is contained in $p\cdot\mathcal{A}$,
and $a_{s,I,J}\to0$ as $|I|\to\infty$.
For a multi-index $K$, let $\tilde{K}$ denote the multi-index $(\tilde{k}_{1},\dots,\tilde{k}_{n})$
in which each $\tilde{k}_{i}$ is the greatest multiple of $p$ less
than or equal to $k_{i}$ (so in particular $\tilde{k}_{i}\leq k_{i}$ for all $i$). Write
$\tilde{K}=p\tilde{J}$, and $K=\tilde{I}+p\tilde{J}$. Then if $K=I'+pJ'$
for some $I'\neq\tilde{I}$ and $J'\neq\tilde{J}$, we must have $j'_{m}<\tilde{j}_{m}$
for some $m$; which implies that $\partial^{I'}$ is contained in
the right ideal generated by $\partial_{m}^{p}$. Since $f\cdot\partial_{i}^{p}=p!\partial_{i}^{[p]}$,
we obtain
\[
f^{i-s}b_{s,I',J'}\partial^{I'}(\partial_{1}^{[p]})^{j'_{1}}\cdots(\partial_{n}^{[p]})^{j'_{n}}=f^{i-(s+1)}b_{s+1,I'',J''}\partial^{I''}(\partial_{1}^{[p]})^{j''_{1}}\cdots(\partial_{n}^{[p]})^{j''_{n}}
\]
where $I''+pJ''=I'+pJ'=K$, with $j''_{m}=j'_{m}+1$, and $b_{s+1,I'',J''}\in p\cdot\mathcal{A}$.
Therefore each such term is in the right ideal generated by $\{\partial_{i}^{p}\}$
and is contained in $p\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i}$,
and so we may subtract each of these terms from $\Phi$ without affecting
the statement.
Thus we may assume that each nonzero $b_{s,I,J}$ in \eqref{first-form-for-phi}
is of the form $\tilde{I}+p\tilde{J}$ as above, and so there is only
one nonzero $b_{s,I,J}$ for each multi-index $K$.
Now, comparing the actions of each of the expressions \eqref{first-form-for-phi}
and \eqref{second-form-for-phi} on $\mathcal{A}[f]$, we obtain,
for each multi-index $K$, the equality
\[
b_{s,\tilde{I},\tilde{J}}=\sum_{I+pJ=K}\sum_{s}a_{s,I,J}
\]
and since each $a_{s,I,J}\in p\cdot\mathcal{A}$, we see $b_{s,\tilde{I},\tilde{J}}\in p\cdot\mathcal{A}$.
Since this is true for all $b_{s,\tilde{I},\tilde{J}}$, the result
follows.
\end{proof}
Using these results, we can give a description of $\widehat{D}_{\mathcal{A}}^{(0,1),+}/p:=D_{A}^{(0,1),+}.$
Let $I$ be the two-sided ideal of $D_{A}^{(0)}:=\widehat{D}_{\mathcal{A}}^{(0)}/p$
generated by $\mathcal{Z}(D_{A}^{(0)})^{+}$, the positive degree
elements of the center\footnote{The center of $D_{A}^{(0)}$ is a graded algebra via the isomorphism
$\mathcal{Z}(D_{A}^{(0)})\tilde{=}A^{(1)}[\partial_{1}^{p},\dots,\partial_{n}^{p}]$,
in which the degree of each $\partial_{i}^{p}$ is $1$.}, and let $\overline{D_{A}^{(0)}}=D_{A}^{(0)}/I$.
\begin{thm}
\label{thm:Local-Coords-for-D+} Let ${\displaystyle D_{A}^{(0,1),+}=\bigoplus_{i=0}^{\infty}D_{A}^{(0,1),i}}$
be the decomposition according to grading. Then each $D_{A}^{(0,1),i}$
is a module over $D_{A}^{(0)}=D_{A}^{(0,1),0}$, and
\[
D_{A}^{(0,1),i}=f\cdot D_{A}^{(0,1),i-1}\oplus\sum_{|J|=i}D_{A}^{(0)}\cdot(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}
\]
as $D_{A}^{(0)}$-modules. Further, $f\cdot D_{A}^{(0,1),i-1}$ is
free over $\overline{D_{A}^{(0)}}$, and the module\linebreak{}
${\displaystyle \sum_{|J|=i}D_{A}^{(0)}\cdot(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}}$
is isomorphic, as a $D_{A}^{(0)}$-module, to $I^{i}$, via the map
which sends $(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}$
to $(\partial_{1}^{p})^{j_{1}}\cdots(\partial_{n}^{p})^{j_{n}}$.
In particular, on each $D_{A}^{(0,1),i}$ we have $\text{ker}(f)=\text{im}(v)$
and $\text{im}(f)=\text{ker}(v)$.
\end{thm}
\begin{proof}
Let $i\geq1$. By definition $D_{A}^{(0,1),i}$ is generated, over
$D_{A}^{(0)}$, by terms of the form \linebreak{}
$\{f^{i-s}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{0\leq s\leq i,\,|J|=s}$;
and so it is also generated by $f\cdot D_{A}^{(0,1),i-1}$ and $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{|J|=i}$.
Suppose we have an equality of the form
\[
\sum_{|J|=i}\bar{\Phi}_{J}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}=\sum_{s=0}^{i-1}f^{i-s}\sum_{J}\bar{\Psi}_{sJ}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}
\]
in $D_{A}^{(0,1),i}$ (here, $\bar{\Phi}_{J},\bar{\Psi}_{sJ}$ are
in $D_{A}^{(0)}$). Choosing lifts $\Phi_{J},\Psi_{sJ}\in\widehat{D}_{\mathcal{A}}^{(0)}$
yields
\[
\sum_{|J|=i}\Phi_{J}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}-\sum_{s=0}^{i-1}f^{i-s}\sum_{J}\Psi_{sJ}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\in p\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i}\subset f\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i-1}
\]
(the last inclusion follows from $(p!)\partial_{i}^{[p]}=f\partial_{i}^{p}$);
and so (the proof of) \lemref{Reduction-is-correct} now forces $\Phi_{J}\in p\cdot\widehat{D}_{\mathcal{A}}^{(0,1),i}$
for all $J$, so $\bar{\Phi}_{J}=0$ as desired. The isomorphism of
${\displaystyle \sum_{|J|=i}D_{A}^{(0)}\cdot(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}}$
with $I^{i}$ is given by the reduction of the morphism $p^{i}\cdot$
on $D_{\mathcal{A}}^{(0,1),i}$, and \lemref{linear-independance-over-D_0-bar}
yields that $f\cdot D_{A}^{(0,1),i-1}$ is free over $\overline{D}_{A}^{(0)}$;
a basis is given by $\{f^{i-|J|}(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{0\leq|J|\leq i-1}$.
The last statement follows directly from this description.
\end{proof}
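To illustrate the statement in the simplest nontrivial case (this is merely an unwinding
of the theorem, recorded for orientation), take $n=1$ and $i=1$, and write $\partial=\partial_{1}$.
The theorem then reads
\[
D_{A}^{(0,1),1}=f\cdot D_{A}^{(0)}\oplus D_{A}^{(0)}\cdot\partial^{[p]}
\]
where the first summand is free of rank one over $\overline{D_{A}^{(0)}}=D_{A}^{(0)}/(\partial^{p})$
(note that $f\cdot\partial^{p}=p!\,\partial^{[p]}=0$ here, as $p!\equiv0$ mod $p$),
and the second summand is identified with the ideal $I=(\partial^{p})\subset D_{A}^{(0)}$
by sending $\partial^{[p]}$ to $\partial^{p}$.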
We now use this to describe the entire graded algebra $D_{A}^{(0,1)}:=D_{\mathcal{A}}^{(0,1)}/p$.
\begin{cor}
\label{cor:Local-coords-over-A=00005Bf,v=00005D} The algebra $D_{A}^{(0,1)}$
is a free graded module over $D(A)$, with a basis given by the set
$\{\partial^{I}(\partial^{[p]})^{J}\}$, where $I=(i_{1},\dots,i_{n})$
is a multi-index with $0\leq i_{j}\leq p-1$ for all $j$ and $J$
is any multi-index with entries $\geq0$.
\end{cor}
\begin{proof}
By the previous corollary, any element of $D_{A}^{(0,1),+}$ can be
written as a finite sum
\[
\sum_{I,J}a_{I,J}\partial^{I}(\partial^{[p]})^{J}
\]
where $a_{I,J}\in A[f]$ and $I$ and $J$ are arbitrary multi-indices.
As any element in $D_{A}^{(0,1),-}$ is a sum of the form
\[
\sum_{i=1}^{m}v^{i}\sum_{J}b_{i,J}(\partial)^{J}
\]
we see that in fact any element of $D_{A}^{(0,1)}$ can be written
as a finite sum
\[
\sum_{I,J}a_{I,J}\partial^{I}(\partial^{[p]})^{J}
\]
where $a_{I,J}\in A[f,v]$ and $I$ and $J$ are arbitrary multi-indices.
Iteratively using the relations $(p-1)!\partial_{i}^{p}=v\partial_{i}^{[p]}$
we see that we may suppose that each entry of $I$ is contained in
$\{0,\dots,p-1\}$; this shows that these elements span.
To see the linear independence, suppose we have
\begin{equation}
\sum_{I,J}a_{I,J}\partial^{I}(\partial^{[p]})^{J}=0\label{eq:lin-dep}
\end{equation}
where now each entry of $I$ is contained in $\{0,\dots,p-1\}$. Write
\[
a_{I,J}=\sum_{s\geq0}f^{s}a_{I,J,s}+\sum_{t<0}v^{-t}a_{I,J,t}
\]
We have
\[
a_{I,J}\partial^{I}(\partial^{[p]})^{J}=\sum_{s\geq0}f^{s}a_{I,J,s}\partial^{I}(\partial^{[p]})^{J}+\sum_{J',t}a_{I,J,t}\partial^{I+pJ'}(\partial^{[p]})^{J''}+\sum_{t<0}v^{-t-|J|}a_{I,J,t}\partial^{I+pJ}
\]
where, in the middle sum, $t$ satisfies $0<-t\leq|J|$; for each
such $t$ we pick $J'$ such that $J'+J''=J$ and $|J'|=-t$. Now,
the previous corollary gives an isomorphism
\[
D_{A}^{(0,1),i}\tilde{=}D_{A}^{(0)}/I^{i}\oplus I^{i}
\]
where $I=C^{1}(D_{A}^{(0)})$, for all $i\geq0$; this in fact holds
for all $i\in\mathbb{Z}$ if we interpret $I^{i}=D_{A}^{(0)}$ for
$i<0$. This implies that the elements $\{f^{s}\partial^{I}(\partial^{[p]})^{J},\partial^{I+pJ'}(\partial^{[p]})^{J''},v^{-t-|J|}\partial^{I+pJ}\}$
where $I,J$ are multi-indices with each entry of $I$ contained
in $\{0,\dots,p-1\}$, are linearly independent over $A$ (one may
look at each degree separately and use the above description). Thus
\eqref{lin-dep} implies $a_{I,J,s}=0=a_{I,J,t}$ for all $I,J,s,t$;
hence each $a_{I,J}=0$ as desired.
\end{proof}
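As a quick illustration of how the relations are used to reach this normal form (nothing
here is needed later), take $n=1$, write $\partial=\partial_{1}$, and consider the element
$\partial^{p+1}\in D_{A}^{(0,1),0}$. Using the relation $(p-1)!\partial^{p}=v\partial^{[p]}$
used above, together with Wilson's theorem $(p-1)!\equiv-1$ mod $p$ and the fact that $v$
commutes with $\partial$ (the algebra being a $k[f,v]$-algebra), we find
\[
\partial^{p+1}=\partial\cdot\partial^{p}=-v\,\partial\,\partial^{[p]},
\]
exhibiting $\partial^{p+1}$ as a $D(A)$-multiple (with coefficient $-v$) of the basis
element $\partial\,\partial^{[p]}$ (here $I=(1)$ and $J=(1)$).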
Finally, let us apply this result to describe the finite order operators
$D_{\mathcal{A}}^{(0,1)}$. Namely, we have
\begin{cor}
\label{cor:Each-D^(i)-is-free}The algebra $D_{\mathcal{A}}^{(0,1)}$
is free over $D(\mathcal{A})$, with a basis given by the set $\{\partial^{I}(\partial^{[p]})^{J}\}$,
where $I=(i_{1},\dots,i_{n})$ is a multi-index with $0\leq i_{j}\leq p-1$
for all $j$ and $J$ is any multi-index with entries $\geq0$.
\end{cor}
\begin{proof}
By the previous result, the images of these elements in $D_{A}^{(0,1)}=D_{\mathcal{A}}^{(0,1)}/p$
form a basis over $D(A)$. Since $D_{\mathcal{A}}^{(0,1)}$
is $p$-torsion-free, and $D(\mathcal{A})$ is $p$-adically separated,
it follows directly that these elements are linearly independent over
$D(\mathcal{A})$. The fact that they span follows (as in the previous
proof) from \lemref{Basic-structure-of-D_A^(i)}.
\end{proof}
\subsection{$\mathcal{D}^{(0,1)}$-modules over $k$}
Now let $X$ be an arbitrary smooth variety over $k$; in this subsection
we make no assumption that there is a lift of $X$; however, if $U\subset X$
is an open affine, there is always a lift of $U$ to a smooth formal
scheme $\mathfrak{U}$. In this section we will construct a sheaf
of algebras $\mathcal{D}_{X}^{(0,1)}$ such that, on each open affine
$U$ which possesses local coordinates, we have $\mathcal{D}_{X}^{(0,1)}(U)=\widehat{\mathcal{D}}_{\mathfrak{U}}^{(0,1)}(\mathfrak{U})/p$.
There is a natural action of $\mathcal{D}_{X}^{(0)}$ on $\mathcal{O}_{X}$,
inducing a map $\mathcal{D}_{X}^{(0)}\to\mathcal{E}nd_{k}(\mathcal{O}_{X})$,
and we let $\overline{\mathcal{D}_{X}^{(0)}}\subset\mathcal{E}nd_{k}(\mathcal{O}_{X})$
denote the image of $\mathcal{D}_{X}^{(0)}$ under this map. It is
a quotient algebra of $\mathcal{D}_{X}^{(0)}$, and a quick local
calculation gives
\begin{lem}
\label{lem:Basic-description-of-D-bar} Let $U\subset X$ be an open
subset, which possesses local coordinates $\{x_{1},\dots,x_{n}\}$,
and let $\{\partial_{1},\dots,\partial_{n}\}$ denote derivations
satisfying $\partial_{i}(x_{j})=\delta_{ij}$. Then the kernel of
the map $\mathcal{D}_{X}^{(0)}(U)\to\mathcal{E}nd_{k}(\mathcal{O}_{X}(U))$
is the two-sided ideal $\mathcal{I}$ generated by $\{\partial_{1}^{p},\dots,\partial_{n}^{p}\}$.
The image consists of differential operators of the form ${\displaystyle \sum a_{I}\partial^{I}}$
where the sum ranges over multi-indices $I=(i_{1},\dots,i_{n})$ for
which $0\leq i_{j}<p$ (for all $j$), the $a_{I}\in\mathcal{O}_{X}(U)$,
and $\partial^{I}=\partial_{1}^{i_{1}}\cdots\partial_{n}^{i_{n}}$.
\end{lem}
In particular, if $U=\text{Spec}(A)$ then we have $\overline{\mathcal{D}_{X}^{(0)}}(U)=\overline{D_{A}^{(0)}}$
as defined in the previous section.
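For a concrete sanity check (not needed in the sequel), take $X=\mathbb{A}^{1}=\text{Spec}(k[x])$
with coordinate $x$ and $\partial=d/dx$. Then
\[
\partial^{p}(x^{n})=n(n-1)\cdots(n-p+1)\,x^{n-p}=0\qquad(n\geq p),
\]
since among any $p$ consecutive integers one is divisible by $p$ (and $\partial^{p}$ kills
$x^{n}$ for $n<p$ for degree reasons); so $\partial^{p}$ indeed lies in the kernel of
$\mathcal{D}_{X}^{(0)}(U)\to\mathcal{E}nd_{k}(\mathcal{O}_{X}(U))$. By contrast the divided
power operator $\partial^{[p]}$, a differential operator of order $p$ which is not a section
of $\mathcal{D}_{X}^{(0)}$, acts nontrivially; e.g., $\partial^{[p]}(x^{p})=1$.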
Now let $\mathcal{D}iff_{X}^{\leq n}$ denote the sheaf of differential
operators of order $\leq n$ on $X$. This is a sub-sheaf of $\mathcal{E}nd_{k}(\mathcal{O}_{X})$.
\begin{defn}
\label{def:L}1) Let $\tilde{\mathcal{D}iff}_{X}^{\leq p}$ denote
the sub-sheaf of $\mathcal{D}iff_{X}^{\leq p}$ defined by the following
condition: a local section $\delta$ of $\mathcal{D}iff_{X}^{\leq p}$
is contained in $\tilde{\mathcal{D}iff}_{X}^{\leq p}$ if, for any
local section $\Phi\in\overline{\mathcal{D}_{X}^{(0)}}$, we have
$[\delta,\Phi]\in\overline{\mathcal{D}_{X}^{(0)}}$ (Here, the bracket
is the natural Lie bracket on $\mathcal{E}nd_{k}(\mathcal{O}_{X})$
coming from the algebra structure).
2) We define the sub-sheaf $\mathfrak{l}_{X}\subset\mathcal{D}iff_{X}$
to be $\tilde{\mathcal{D}iff}_{X}^{\leq p}+\overline{\mathcal{D}_{X}^{(0)}}$.
\end{defn}
The sections in $\mathfrak{l}_{X}$ can easily be identified in local
coordinates. Suppose $U=\text{Spec}(A)$ possesses local coordinates
$\{x_{1},\dots,x_{n}\}$, and coordinate derivations $\{\partial_{1},\dots,\partial_{n}\}$.
\begin{prop}
\label{lem:O^p-action} Let $U\subset X$ be an open subset as above.
Then we have
\[
\mathfrak{l}_{X}(U)=\bigoplus_{i=1}^{n}\mathcal{O}_{U}^{p}\cdot\partial_{i}^{[p]}\oplus\overline{\mathcal{D}_{X}^{(0)}}(U)
\]
In particular, $\mathfrak{l}_{X}$ is a sheaf of $\mathcal{O}_{X}^{p}$-modules
(via the left action of $\mathcal{O}_{X}^{p}$ on $\mathcal{E}nd_{k}(\mathcal{O}_{X})$).
\end{prop}
\begin{proof}
First, let's show that the displayed sum is contained in $\mathfrak{l}_{X}(U)$.
By definition $\overline{\mathcal{D}_{X}^{(0)}}(U)\subset\mathfrak{l}_{X}(U)$.
Let $\Phi\in\overline{\mathcal{D}_{X}^{(0)}}(U)$, and write ${\displaystyle \Phi=\sum_{I}a_{I}\partial^{I}}$
as in \lemref{Basic-description-of-D-bar}. Then, for any $g\in\mathcal{O}_{X}(U)$,
we have
\[
[g^{p}\partial_{i}^{[p]},\sum_{I}a_{I}\partial^{I}]=\sum_{I}[g^{p}\partial_{i}^{[p]},a_{I}\partial^{I}]=\sum_{I}[g^{p}\partial_{i}^{[p]},a_{I}]\partial^{I}+\sum_{I}a_{I}[g^{p}\partial_{i}^{[p]},\partial^{I}]
\]
Now, ${\displaystyle [g^{p}\partial_{i}^{[p]},a_{I}]\partial^{I}=g^{p}[\partial_{i}^{[p]},a_{I}]\partial^{I}=g^{p}\sum_{r=0}^{p-1}\partial_{i}^{[p-r]}(a_{I})\partial_{i}^{[r]}\cdot\partial^{I}\in\overline{\mathcal{D}_{X}^{(0)}}(U)}$.
Further, $a_{I}[g^{p}\partial_{i}^{[p]},\partial^{I}]=a_{I}g^{p}[\partial_{i}^{[p]},\partial^{I}]+a_{I}[g^{p},\partial^{I}]\partial_{i}^{[p]}=0$.
Thus we see that each $g^{p}\partial_{i}^{[p]}\in\mathfrak{l}_{X}(U)$,
and the right hand side is contained in the left.
For the converse, let $\Phi\in\mathcal{D}iff_{X}^{\leq p}(U)$. It
may be uniquely written as
\[
\Phi=\sum_{i=1}^{n}a_{i}\partial_{i}^{[p]}+\sum_{I}a_{I}\partial^{I}
\]
where $a_{i}$ and $a_{I}$ are in $\mathcal{O}_{X}(U)$, and the
second sum ranges over multi-indices $I=(i_{1},\dots,i_{n})$ with
each $i_{j}<p$ and so that $i_{1}+\dots+i_{n}\leq p$. For any coordinate
derivation $\partial_{j}$, we have
\[
[\Phi,\partial_{j}]=-(\sum_{i=1}^{n}\partial_{j}(a_{i})\partial_{i}^{[p]}+\sum_{I}\partial_{j}(a_{I})\partial^{I})
\]
For this to be contained in $\overline{\mathcal{D}_{X}^{(0)}}(U)$,
we must have $\partial_{j}(a_{i})=0$ for all $i$. Therefore, if
$[\Phi,\partial_{j}]\in\overline{\mathcal{D}_{X}^{(0)}}(U)$ for all
$j$, we must have $\partial_{j}(a_{i})=0$ for all $j$ (and all
$i$), which means that each $a_{i}\in\mathcal{O}_{X}(U)^{p}$. Therefore,
if $\Phi\in\tilde{\mathcal{D}iff}_{X}^{\leq p}(U)$, then $\Phi$
must be contained in ${\displaystyle \bigoplus_{i=1}^{n}\mathcal{O}_{U}^{p}\cdot\partial_{i}^{[p]}\oplus\overline{\mathcal{D}_{X}^{(0)}}(U)}$,
and the result follows.
\end{proof}
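In the simplest case the condition on the coefficients is very concrete; the following
aside may help. Take $X=\mathbb{A}^{1}=\text{Spec}(k[x])$, $\partial=d/dx$, and $a\in k[x]$.
Then
\[
[a\,\partial^{[p]},\partial]=-\partial(a)\,\partial^{[p]},
\]
which lies in $\overline{\mathcal{D}_{X}^{(0)}}(U)$ only when $\partial(a)=0$, i.e. only
when $a\in k[x^{p}]=(k[x])^{p}$ (using that $k$ is perfect); this is exactly how the factor
$\mathcal{O}_{U}^{p}$ appears in the displayed decomposition.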
\begin{cor}
$\mathfrak{l}_{X}$ is a sheaf of Lie subalgebras of $\mathcal{E}nd_{k}(\mathcal{O}_{X})$.
\end{cor}
\begin{proof}
As the question is local, it suffices to prove that $\mathfrak{l}_{X}(U)$
is closed under the bracket for a neighborhood $U$ which possesses
local coordinates. We use the description of the previous lemma. So
we must show that all brackets of the form
\[
[g^{p}\partial_{i}^{[p]},h^{p}\partial_{j}^{[p]}]
\]
and
\[
[g^{p}\partial_{i}^{[p]},\sum_{I}a_{I}\partial^{I}]
\]
are contained in $\mathfrak{l}_{X}(U)$. Here the notation is as above;
so $g,h\in\mathcal{O}_{X}(U)$, and $I=(i_{1},\dots,i_{n})$ is a
multi-index with each $i_{j}<p$. In fact, we already showed that
${\displaystyle [g^{p}\partial_{i}^{[p]},\sum_{I}a_{I}\partial^{I}]\in\overline{\mathcal{D}_{X}^{(0)}}(U)}$
in the course of the proof of the previous lemma. So we are left to
analyze the first bracket. Now,
\[
[g^{p}\partial_{i}^{[p]},h^{p}\partial_{j}^{[p]}]=h^{p}[g^{p}\partial_{i}^{[p]},\partial_{j}^{[p]}]+[g^{p}\partial_{i}^{[p]},h^{p}]\partial_{j}^{[p]}
\]
\[
=h^{p}[g^{p},\partial_{j}^{[p]}]\partial_{i}^{[p]}+g^{p}[\partial_{i}^{[p]},h^{p}]\partial_{j}^{[p]}
\]
and we have
\[
[\partial_{i}^{[p]},h^{p}]=\sum_{r=0}^{p-1}\partial_{i}^{[p-r]}(h^{p})\partial_{i}^{[r]}=\partial_{i}^{[p]}(h^{p})
\]
and similarly, $[g^{p},\partial_{j}^{[p]}]=-\partial_{j}^{[p]}(g^{p})$.
It is a well-known fact that $\partial_{i}^{[p]}(h^{p})=(\partial_{i}(h))^{p}$
(for the sake of completeness, we include a proof directly below).
It follows immediately that $[g^{p}\partial_{i}^{[p]},h^{p}\partial_{j}^{[p]}]\in\mathfrak{l}_{X}(U)$,
and the corollary follows.
To prove that $\partial_{i}^{[p]}(h^{p})=(\partial_{i}(h))^{p}$,
recall the following formula for Hasse-Schmidt derivations acting
on powers:
\[
\partial_{i}^{[j]}(h^{m})=\sum_{i_{1}+\dots+i_{m}=j}\partial_{i}^{[i_{1}]}(h)\cdots\partial_{i}^{[i_{m}]}(h)
\]
which is easily checked by induction. Put $m=j=p$ in the formula.
The set
\[
\{(i_{1},\dots,i_{p})\in\mathbb{Z}_{\geq0}^{p}|i_{1}+\dots+i_{p}=p\}
\]
is acted upon by the symmetric group $S_{p}$, and, after grouping
like terms together, we see that each term $\partial_{i}^{[i_{1}]}(h)\cdots\partial_{i}^{[i_{p}]}(h)$
in the sum is repeated $N$ times, where $N$ is the size of the $S_{p}$
orbit of $(i_{1},\dots,i_{p})$. There is a unique orbit of size $1$,
namely $i_{1}=i_{2}=\cdots=i_{p}=1$; and for every other orbit, the
size is a number of the form ${\displaystyle \frac{p!}{c_{1}!\cdots c_{r}!}}$
for some numbers $c_{i}<p$ such that $\sum c_{i}=p$. Any such number is
divisible by $p$, and so all these terms are zero in the sum since
we are in characteristic $p$. Thus we obtain
\[
\partial_{i}^{[p]}(h^{p})=\partial_{i}^{[1]}(h)\cdots\partial_{i}^{[1]}(h)=(\partial_{i}(h))^{p}
\]
as claimed.
\end{proof}
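As a quick check of the identity just proved (purely illustrative), take $p=2$: the formula
above gives
\[
\partial_{i}^{[2]}(h^{2})=\partial_{i}^{[2]}(h)\cdot h+\partial_{i}(h)\partial_{i}(h)+h\cdot\partial_{i}^{[2]}(h)=(\partial_{i}(h))^{2},
\]
the two outer terms forming a single $S_{2}$-orbit of size $2$ and hence cancelling in
characteristic $2$.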
Now we will build the ring $\mathcal{D}_{X}^{(0,1)}$ out of $\mathcal{D}_{X}^{(0)}$
and $\mathfrak{l}_{X}$, in a manner quite analogous to the way in
which $\mathcal{D}_{X}^{(0)}$ is built out of $\mathcal{O}_{X}$
and $\mathcal{T}_{X}$ as an enveloping algebra of a Lie algebroid;
in the classical case, this construction is given in \cite{key-44}
(for schemes) and \cite{key-46} (for rings). Our construction is
similar in spirit to these works (c.f. also \cite{key-45}).
\begin{defn}
\label{def:D-=00005Cplus-L}Let $f:\mathcal{D}_{X}^{(0)}\to\mathfrak{l}_{X}$
denote the map $\mathcal{D}_{X}^{(0)}\to\overline{\mathcal{D}_{X}^{(0)}}\subset\mathfrak{l}_{X}$.
Define the sheaf
\[
\mathfrak{L}_{X}:=\mathcal{D}_{X}^{(0)}\oplus\bigoplus_{i=1}^{\infty}\mathfrak{l}_{X}=\bigoplus_{i=0}^{\infty}\mathfrak{L}_{X}^{i}
\]
and make it into a graded $k[f]$-module by letting $f:\mathfrak{l}_{X}\to\mathfrak{l}_{X}$
be the identity in degrees $\geq1$; thus any homogeneous element in
degree $i\geq1$ can be uniquely written as $f^{i-1}\Psi$ for some $\Psi\in\mathfrak{l}_{X}$.
For local sections $\Phi\in\mathcal{D}_{X}^{(0)}$ and $f^{i-1}\Psi\in\mathfrak{L}_{X}^{i}$,
define $[\Phi,f^{i-1}\Psi]:=f^{i-1}[f(\Phi),\Psi]$, where on the
right we have the bracket in $\mathfrak{l}_{X}$. We then make $\mathfrak{L}_{X}$
into a sheaf of graded Lie algebras by setting $[f^{i-1}\Psi_{1},f^{j-1}\Psi_{2}]=f^{i+j-1}[\Psi_{1},\Psi_{2}]$
where $\{\Psi_{1},\Psi_{2}\}$ are local sections of $\mathfrak{l}_{X}$.
The Jacobi identity can be verified by a direct computation.
\end{defn}
Next we introduce the action of $v$:
\begin{lem}
\label{lem:Construction-of-v-1}There is a unique endomorphism $v$
of $\mathfrak{L}_{X}$ satisfying $v(\mathcal{D}_{X}^{(0)})=0$ and,
upon restriction to an open affine $U$ which possesses local coordinates,
$v(\partial_{i}^{[p]})=(p-1)!\partial_{i}^{p}$ for coordinate derivations
$\{\partial_{i}\}_{i=1}^{n}$. This endomorphism vanishes on $f(\mathcal{D}_{X}^{(0)})$,
and on ${\displaystyle \bigoplus_{i=2}^{\infty}\mathfrak{L}_{X}^{i}}$.
\end{lem}
\begin{proof}
Since $v(\mathcal{D}_{X}^{(0)})=0$ it suffices to define $v$ on
$\mathfrak{l}_{X}$. For any $\Phi\in\mathfrak{l}_{X}$, the action
of $\Phi$ preserves $\mathcal{O}_{X}^{p}$, and the restriction of
$\Phi$ to $\mathcal{O}_{X}^{p}\tilde{=}\mathcal{O}_{X^{(1)}}$ is
a derivation on $\mathcal{O}_{X}^{p}$ (this follows immediately from
\lemref{O^p-action} and the fact that $\partial_{i}^{[p]}(g^{p})=(\partial_{i}(g))^{p}$).
Further this derivation is trivial iff $\Phi\in f(\mathcal{D}_{X}^{(0)})\subset\mathfrak{l}_{X}$.
On the other hand, since $k$ is perfect there is a natural isomorphism
between the sheaf of derivations on $\mathcal{O}_{X^{(1)}}$ and the
sheaf of derivations on $\mathcal{O}_{X}$, given as follows: if $\partial'$
is a (local) derivation on $\mathcal{O}_{X^{(1)}}$, then we can define
a derivation of $\mathcal{O}_{X}$ by $\partial(g)=(\partial'(g^{p}))^{1/p}$;
this is possible precisely by the identification $\mathcal{O}_{X}^{p}\tilde{=}\mathcal{O}_{X^{(1)}}$.
This association is easily checked to be an isomorphism using local
coordinates; let's name it $\tau:\text{Der}(\mathcal{O}_{X^{(1)}})\to\text{Der}(\mathcal{O}_{X})$.
Further, there is a map $\sigma:\text{Der}(\mathcal{O}_{X})\to\mathcal{Z}(\mathcal{D}_{X}^{(0)})$
defined by $\partial\to\partial^{p}-\partial^{[p]}$, where $\partial^{[p]}$
is the $p$th iterate of the derivation (c.f. \cite{key-3}, chapter
1). In particular this map takes $\partial_{i}\to\partial_{i}^{p}$
if $\partial_{i}$ is a coordinate derivation as above.
Now we define $v(\Phi)=(p-1)!\cdot\sigma\circ\tau(\Phi|_{\mathcal{O}_{X^{(1)}}})$;
by the above discussion this satisfies all the properties of the lemma.
\end{proof}
Now we proceed to the definition of $\mathcal{D}_{X}^{(0,1),+}$.
By the functoriality of the enveloping algebra construction, we can
now form the pre-sheaf of enveloping algebras $\mathcal{U}(\mathfrak{L}_{X})$;
this is a pre-sheaf of graded algebras with the grading inherited
from $\mathfrak{L}_{X}$. Inside this pre-sheaf is the pre-sheaf $\mathcal{U}^{+}(\mathfrak{L}_{X})$,
which is the pre-sheaf of non-unital algebras generated by $\mathfrak{L}_{X}\subset\mathcal{U}(\mathfrak{L}_{X})$.
For any local section $\Phi\in\mathcal{D}_{X}^{(0)}$, let $\Phi'$
denote its image in $\mathcal{U}^{+}(\mathfrak{L}_{X})$, by regarding
$\Phi\in\mathcal{D}_{X}^{(0)}\subset\mathfrak{L}_{X}$; similarly,
for a local section $\Psi\in\mathfrak{L}_{X}$, let $\Psi'\in\mathfrak{L}_{X}\subset\mathcal{U}^{+}(\mathfrak{L}_{X})$
denote its image.
\begin{defn}
Let $\mathcal{J}$ be the pre-sheaf of homogenous two-sided ideals
in $\mathcal{U}^{+}(\mathfrak{L}_{X})$ generated by the following
sections: for any local sections $\Phi_{1},\Phi_{2}\in\mathcal{D}_{X}^{(0)}$:
$(\Phi_{1}\cdot\Phi_{2})'-\Phi_{1}'\cdot\Phi_{2}'$ , $f\cdot\Phi_{1}'-f(\Phi_{1})'$,
$\Phi'_{1}\cdot f(\Phi'_{2})-f(\Phi_{1}')\cdot\Phi'_{2}$, $\Phi'_{1}\cdot f(\Phi'_{2})-f\cdot(\Phi_{1}'\cdot\Phi'_{2})$.
Further, if $\Psi_{1},\Psi_{2}\in\mathfrak{L}_{X}$ are any local
sections, we add the elements $\Psi_{1}'\Psi_{2}'-\Psi_{2}'\Psi_{1}'-[\Psi_{1},\Psi_{2}]'$,
as well as $g'\cdot\Psi_{1}'-(g\cdot\Psi_{1})'$ for any local section
$g\in\mathcal{O}_{X}^{p}$ (the action of $\mathcal{O}_{X}^{p}$ on
$\mathfrak{l}_{X}$ is that of \lemref{O^p-action}). Finally, we
add $\Phi_{1}'\cdot\Psi'_{1}-\Phi_{2}'\cdot\Psi'_{2}$ where $\Phi_{i}$
are local sections of $\mathcal{Z}(\mathcal{D}_{X}^{(0)})$ such that $\Phi_{1}\cdot v(\Psi_{1})=\Phi_{2}\cdot v(\Psi_{2})$.
Define $\mathcal{D}_{X}^{(0,1),+}$ to be the sheafification of the
presheaf $\mathcal{U}^{+}(\mathfrak{L}_{X})/\mathcal{J}$. It is a
graded sheaf of algebras on $X$.
\end{defn}
Of course, such a definition is only really useful if we can write
the algebra out explicitly in the presence of coordinates. Fortunately,
this is the case; in fact, if $U=\text{Spec}(A)$ we can compare it
with the presentation of $D_{A}^{(0,1),+}=D_{\mathcal{A}}^{(0,1),+}/p$
discussed in the previous section:
\begin{thm}
\label{thm:D-is-quasi-coherent} Let $\tilde{D}_{A}^{(0,1),+}$ be
the quasi-coherent sheaf on $U$ obtained by localizing $D_{A}^{(0,1),+}$.
This is a sheaf of algebras on $U$. There is an isomorphism (of graded
sheaves of algebras) $\mathcal{D}_{X}^{(0,1),+}|_{U}\tilde{=}\tilde{D}_{A}^{(0,1),+}$.
In particular, $\mathcal{D}_{X}^{(0,1),+}$ is a quasi-coherent sheaf
of algebras on $X$, and we have $\mathcal{D}_{X}^{(0,1),0}\tilde{=}\mathcal{D}_{X}^{(0)}$.
\end{thm}
\begin{proof}
We have the algebra $\mathcal{U}^{+}(\mathfrak{L}_{X})(U)/\mathcal{J}(U)$.
It admits a map to $D_{A}^{(0,1),+}$ as follows: by \lemref{O^p-action},
the Lie algebra $\mathfrak{L}_{X}(U)$ is equal to
\[
\mathcal{D}_{X}^{(0)}(U)\oplus\bigoplus_{i=1}^{\infty}(f^{i}(\overline{\mathcal{D}_{X}^{(0)}}(U))\oplus\bigoplus_{j=1}^{n}f^{i-1}\mathcal{O}_{X}^{p}(U)\cdot\partial_{j}^{[p]})
\]
\[
=D_{A}^{(0)}\oplus\bigoplus_{i=1}^{\infty}(f^{i}(\overline{D_{A}^{(0)}})\oplus\bigoplus_{j=1}^{n}f^{i-1}A^{p}\cdot\partial_{j}^{[p]})
\]
We map this to $D_{A}^{(0,1),+}$ via the identification of $D_{A}^{(0)}$
with $D_{A}^{(0,1),0}$, and by sending $f^{i}(\overline{D_{A}^{(0)}})$
to $f^{i}\cdot D_{A}^{(0)}\tilde{=}\overline{D_{A}^{(0)}}$ and $f^{i-1}g^{p}\partial_{j}^{[p]}$
to $f^{i-1}g^{p}\partial_{j}^{[p]}\in D_{A}^{(0,1),i}$. By sending
$f$ to $f$ we get a map of algebras $\mathcal{U}^{+}(\mathfrak{L}_{X})(U)/\mathcal{J}(U)\to D_{A}^{(0,1),+}$
(one checks the relations directly).
Conversely, we get a map $D_{A}^{(0,1),+}\to\mathcal{U}^{+}(\mathfrak{L}_{X})(U)/\mathcal{J}(U)$
by sending $A\to A\subset\mathcal{D}_{X}^{(0)}(U)$, $\partial_{i}\to\partial_{i}\in\mathcal{D}_{X}^{(0)}(U)$,
$\partial_{i}^{[p]}\to\partial_{i}^{[p]}\in\mathfrak{l}_{X}(U)$ and
$f\to f$. Again checking the relations, this is a morphism of algebras,
and the compositions in both directions are the identity on generators.
Therefore the presheaf $U\to\mathcal{U}^{+}(\mathfrak{L}_{X})(U)/\mathcal{J}(U)$,
when restricted to open affines which admit local coordinates, agrees
with the assignment $U\to D_{A}^{(0,1),+}$. But the latter, by the
description of \thmref{Local-Coords-for-D+}, clearly agrees with
the quasi-coherent sheaf $\tilde{D}_{A}^{(0,1),+}$ on $\text{Spec}(A)$,
and the result follows.
\end{proof}
Finally, we need to define the entire algebra $\mathcal{D}_{X}^{(0,1)}$.
This entails extending the operator $v$ to an endomorphism of all
of $\mathcal{D}_{X}^{(0,1),+}$.
\begin{lem}
\label{lem:Construction-of-v} There is a unique $\mathcal{D}_{X}^{(0)}$-linear
endomorphism $v$ of $\mathcal{D}_{X}^{(0,1),+}$ satisfying $v(\mathcal{D}_{X}^{(0,1),i})\subset\mathcal{D}_{X}^{(0,1),i-1}$
for all $i\geq1$ (and $v(\mathcal{D}_{X}^{(0,1),0})=0$), $v(\Phi_{1}\cdot\Phi_{2})=\Phi_{1}v(\Phi_{2})$
for all $\Phi_{1},\Phi_{2}\in\bigoplus_{i=1}^{\infty}\mathcal{D}_{X}^{(0,1),i}$,
$v(f\cdot\Phi)=0$ for all $\Phi$, and such that the restriction
of $v$ to $\mathfrak{L}_{X}$ agrees with the map $v$ constructed
in \lemref{Construction-of-v-1}.
\end{lem}
\begin{proof}
Define $v$ on $\mathcal{D}_{X}^{(0)}\oplus\mathfrak{l}_{X}$ to be
the map constructed in \lemref{Construction-of-v-1}. The claim is
that there is a unique extension of this map to all of $\mathcal{D}_{X}^{(0,1),+}$
satisfying the conditions of the lemma.
By the uniqueness, it is enough to check this locally. Let $U=\text{Spec}(A)$
possess local coordinates. By \thmref{Local-Coords-for-D+}, if we
set $v$ to be zero on $D_{A}^{(0,1),0}$ and $f\cdot(D_{A}^{(0,1),+})$,
and we define
\[
v((\partial_{j}^{[p]})^{i_{j}}\cdots(\partial_{n}^{[p]})^{i_{n}})=\partial_{j}^{p}\cdot(\partial_{j}^{[p]})^{i_{j}-1}\cdots(\partial_{n}^{[p]})^{i_{n}}
\]
where $j$ is the first index such that $i_{j}\geq1$, then we have
a well-defined $D_{A}^{(0)}$-linear map satisfying all the properties
of the lemma, and which agrees with the $v$ defined above on $\mathfrak{L}_{X}(U)$.
On the other hand, $D_{A}^{(0,1),+}$ is generated as a $D_{A}^{(0)}$-module
by $D_{A}^{(0)}$, $f\cdot(D_{A}^{(0,1),+})$, and elements which
are products of $\mathfrak{L}_{X}(U)$ (again by \thmref{Local-Coords-for-D+}).
So any map which satisfies the above list of properties and equals
$v$ on $\mathfrak{L}_{X}(U)$ is equal to the one we have written
down; so the uniqueness follows as well.
\end{proof}
Now we arrive at
\begin{defn}
\label{def:D(0,1)}The sheaf of algebras $\mathcal{D}_{X}^{(0,1)}$
is defined as the $\mathbb{Z}$-graded sheaf of $k[v,f]$-algebras,
which as a graded sheaf is given by
\[
\bigoplus_{i=-\infty}^{-1}\mathcal{D}_{X}^{(0)}\oplus\mathcal{D}_{X}^{(0,1),+}
\]
and where we extend the action of $f$ (to an operator of degree $1$)
from $\mathcal{D}_{X}^{(0,1),+}$ to $\mathcal{D}_{X}^{(0,1)}$ by
setting $f=0$ on ${\displaystyle \bigoplus_{i=-\infty}^{-1}\mathcal{D}_{X}^{(0)}}$,
and we extend the action of $v$ (to an operator of degree $-1$)
from $\mathcal{D}_{X}^{(0,1),+}$ to $\mathcal{D}_{X}^{(0,1)}$ by letting
$v:\mathcal{D}_{X}^{(0,1),i}\to\mathcal{D}_{X}^{(0,1),i-1}$ be the
identity whenever $i\leq0$. The product on this algebra extends the
product on $\mathcal{D}_{X}^{(0,1),+}$ as follows: on the negative
half ${\displaystyle \bigoplus_{i=-\infty}^{0}\mathcal{D}_{X}^{(0)}}=\mathcal{D}_{X}^{(0)}\otimes_{k}k[v]$,
we use the obvious graded product. For $i\leq0$, if $\Phi\in\mathcal{D}_{X}^{(0,1),i}\tilde{=}\mathcal{D}_{X}^{(0)}$
and $\Psi\in\mathcal{D}_{X}^{(0,1),+}$, we set
\[
\Phi\cdot\Psi=\Phi_{0}v^{-i}(\Psi)
\]
where $\Phi_{0}$ is the element $\Phi\in\mathcal{D}_{X}^{(0)}$, now regarded
as an element of degree $0$.
\end{defn}
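As a consistency check on this product (an illustration only), work over an open subset
with local coordinates, take $\Phi$ to be the element $1\in\mathcal{D}_{X}^{(0)}$ placed
in degree $-1$ (that is, the element $v$), and take $\Psi=\partial_{j}^{[p]}$ in degree
$1$. The rule above, combined with \lemref{Construction-of-v-1}, gives
\[
v\cdot\partial_{j}^{[p]}=v(\partial_{j}^{[p]})=(p-1)!\,\partial_{j}^{p},
\]
so that the relation $(p-1)!\partial_{j}^{p}=v\partial_{j}^{[p]}$ from the local description
of $D_{A}^{(0,1)}$ is recovered.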
From this definition and \thmref{D-is-quasi-coherent}, we see that
this is a quasicoherent sheaf of algebras, and we have an isomorphism
\[
\widehat{D}_{\mathcal{A}}^{(0,1)}/p\tilde{=}\mathcal{D}_{X}^{(0,1)}(U)
\]
for any $U=\text{Spec}(A)$ which possesses local coordinates. It
follows that $\mathcal{D}_{X}^{(0,1)}$ is a coherent, locally noetherian
sheaf of rings which is stalk-wise noetherian. One sees directly the
isomorphism $\mathcal{D}_{X}^{(0)}\tilde{=}\mathcal{D}_{X}^{(0,1)}/(v-1)$,
and we may now define $\mathcal{D}_{X}^{(1)}:=\mathcal{D}_{X}^{(0,1)}/(f-1)$.
We will see below that this agrees with Berthelot's definition; this
is clear if $X$ is liftable but not quite obvious in general.
\section{Gauges Over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$}
We now have several locally noetherian graded rings and so we can
consider categories of modules over them; in particular we have the
category $\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$ of graded
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-modules (which we refer to as gauges over $\mathfrak{X}$),
and the category $\mathcal{G}_{coh}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$ of coherent graded modules.
We have the analogous categories in positive characteristic, as well
as $\mathcal{G}_{qcoh}(\mathcal{D}_{X}^{(0,1)})$, the graded quasi-coherent
$\mathcal{D}_{X}^{(0,1)}$-modules; as $\mathcal{D}_{X}^{(0,1)}$
is itself a quasi-coherent sheaf of algebras, this is simply the category
of sheaves in $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
which are quasi-coherent over $\mathcal{O}_{X}[f,v]$.
In this chapter we develop the basic properties of these categories
of gauges; we begin by collecting a few of their most basic properties.
For any object in $\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
(or $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$) set $\mathcal{M}^{\infty}:=\mathcal{M}/(f-1)$
and $\mathcal{M}^{-\infty}:=\mathcal{M}/(v-1)$; these are exact functors
to the categories of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}$
and $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}(=\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),-\infty})$-modules,
respectively; there are obvious maps $f_{\infty}:\mathcal{M}^{i}\to\mathcal{M}^{\infty}$
and $v_{-\infty}:\mathcal{M}^{i}\to\mathcal{M}^{-\infty}$ for each
$i$. We use the same notation to denote the analogous constructions
for $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$.
We have:
\begin{lem}
\label{lem:Basic-v}Let $\mathcal{M}\in\mathcal{G}_{coh}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$.
Then each $\mathcal{M}^{i}$ is coherent as a $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}$-module.
Further, for all $i<<0$, the map $v:\mathcal{M}^{i}\to\mathcal{M}^{i-1}$
is an isomorphism. The same holds for $\mathcal{M}\in\mathcal{G}_{coh}(\mathcal{D}_{X}^{(0,1)})$.
\end{lem}
\begin{proof}
By definition we have, at least locally, an exact sequence
\[
\bigoplus_{i=1}^{s}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}(r_{i})\to\bigoplus_{i=1}^{m}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}(l_{i})\to\mathcal{M}\to0
\]
Now the result follows, as the lemma is true for $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$
by construction. As the same holds for $\mathcal{D}_{X}^{(0,1)}$,
the statement for $\mathcal{M}\in\mathcal{G}_{coh}(\mathcal{D}_{X}^{(0,1)})$ may be proved in an identical manner.
\end{proof}
This allows us to give:
\begin{defn}
\label{def:Index!}Let $\mathcal{M}\in\mathcal{G}_{coh}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$.
Then the index of $\mathcal{M}$ in $\mathbb{Z}\cup\{\infty\}$ is
the largest integer $i$ for which $v:\mathcal{M}^{j}\to\mathcal{M}^{j-1}$
is an isomorphism for all $j\leq i$. The index is $\infty$ if $v$
is an isomorphism in every degree (this can indeed happen; c.f. \exaref{Exponential!}
below). We can make the same definition for $\mathcal{M}\in\mathcal{G}_{coh}(\mathcal{D}_{X}^{(0,1)})$.
\end{defn}
We will now show how cohomological completeness gives a convenient
criterion for a complex to be in $D_{coh}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
\begin{prop}
\label{prop:coh-to-coh}We have $D_{coh}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))\subset D_{cc}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
Further, for $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
we have $\mathcal{M}^{\cdot}\in D_{coh}^{?}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$
iff $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{coh}^{?}(\mathcal{G}(\mathcal{\widehat{D}}_{X}^{(0,1)}))$,
where $?=+$ or $?=b$.
\end{prop}
\begin{proof}
Recall that if $\mathcal{F}$ is a sheaf of $W(k)$-modules which
is $p$-torsion free and $p$-adically complete, or if it is annihilated
by $p^{N}$ for some fixed $N\in\mathbb{N}$, then $\mathcal{F}$
(considered as a complex concentrated in one degree) is contained
in $D_{cc}(W(k))$. It follows that a coherent $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}$-module
is cohomologically complete; therefore so is an element of $\mathcal{G}_{coh}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$
by \propref{Basic-CC-facts}, part $4)$. Since $D_{cc}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$
is closed under extensions, the first statement follows directly (c.f.
\cite{key-8}, theorem 1.6.1).
For the second statement, the forward direction is obvious. For the
converse, we note that by \cite{key-8} theorem 1.6.4, since (for
either $?=+$ or $?=b$) each \linebreak{}
$(\mathcal{M}^{\cdot})^{i}\otimes_{W(k)}^{L}k\in D_{coh}^{+}(\mathcal{D}_{X}^{(0)}-\text{mod})$,
we must have $(\mathcal{M}^{\cdot})^{i}\in D_{coh}^{+}(\mathcal{D}_{\mathfrak{X}}^{(0)}-\text{mod})$.
In particular $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})$ is $p$-adically
complete for each $i$ and $j$. Further, we have the short exact
sequences for the functor $\otimes_{W(k)}^{L}k$
\[
0\to\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p\to\mathcal{H}^{j}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\to\mathcal{T}or_{1}^{W(k)}(\mathcal{H}^{j+1}(\mathcal{M}^{\cdot}),k)\to0
\]
which implies also that $\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p$
is coherent over $\mathcal{D}_{X}^{(0,1)}$ for all $j$ (this follows
from the fact that $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})/p$
is coherent, and hence quasi-coherent, for all $i$; which implies
$\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p$ is a quasicoherent sub-sheaf
of the coherent $\mathcal{D}_{X}^{(0,1)}$-module $\mathcal{H}^{j}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)$,
and hence coherent).
Now, for a fixed $j$, we can consider, for any $i$
\[
v:\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})\to\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i-1})
\]
and
\[
\mathcal{D}_{\mathfrak{X}}^{(0,1),1}\otimes_{\mathcal{D}_{\mathfrak{X}}^{(0)}}\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})\to\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i+1})
\]
Since $\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p$ is coherent over $\mathcal{D}_{X}^{(0,1)}$,
we have that the reduction mod $p$ of $v$ is surjective for $i<<0$
and the reduction mod $p$ of the second map is surjective for
$i>>0$. By the usual complete Nakayama lemma, we see that $v$ is
surjective for $i<<0$ and the second map is surjective for $i>>0$;
therefore $\mathcal{H}^{j}(\mathcal{M}^{\cdot})$ is locally finitely
generated over $\mathcal{D}_{\mathfrak{X}}^{(0,1)}$ (since each $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})$
is coherent over $\mathcal{D}_{\mathfrak{X}}^{(0)}$).
Now, let $U\subset X$ be an open affine and let $D(g)\subset U$
be a principal open inside $U$; let $\tilde{g}$ be a lift of the
function $g$ to $\Gamma(\mathcal{O}_{U})$. As each $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})$
is coherent, we have that $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})(D(g))$
is isomorphic to the completion of the localization of $\mathcal{H}^{j}((\mathcal{M}^{\cdot})^{i})(U)$
at $\tilde{g}$. It follows that $\mathcal{H}^{j}(\mathcal{M}^{\cdot})(D(g))$
is given by localizing $\mathcal{H}^{j}(\mathcal{M}^{\cdot})(U)$
at $\tilde{g}$ and then completing each component. If $\mathcal{F}$
is a graded free module over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$,
it has the same description; and so the kernel of any map $\mathcal{F}\to\mathcal{H}^{j}(\mathcal{M}^{\cdot})|_{U}$
also has this description (as the functor of localizing and completing
is exact on coherent $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules);
hence it is locally finitely generated and so $\mathcal{H}^{j}(\mathcal{M}^{\cdot})$
is itself coherent.
Finally, we note that $\mathcal{H}^{j}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)=0$
implies $\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p=0$ by the above short
exact sequence. So, if $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{coh}^{+}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$,
we see that $\mathcal{H}^{j}(\mathcal{M}^{\cdot})/p=0$ for all $j<<0$;
which implies $\mathcal{H}^{j}(\mathcal{M}^{\cdot})=0$ for $j<<0$
since each $\mathcal{H}^{j}(\mathcal{M}^{\cdot})^{i}$ is $p$-adically
complete; i.e., we have $\mathcal{M}^{\cdot}\in D_{coh}^{+}(\mathcal{D}_{\mathfrak{X}}^{(0,1)})$;
the same argument applies for bounded complexes.
\end{proof}
This proposition will be our main tool for showing that elements of
$D_{cc}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$ are actually
in $D_{coh}^{b}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$.
\subsection{\label{subsec:Standard}Standard Gauges, Mazur's Theorem}
In this subsection we discuss the analogue of (the abstract version
of) Mazur's theorem in the context of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-gauges.
Since the notion of gauge was invented in order to isolate the structures
used in the proof of Mazur's theorem, it comes as no surprise that
there is a very general version of the theorem available in this context.
Before proving it, we discuss some generalities, starting with
\begin{defn}
\label{def:Standard!}Let $\mathcal{M}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$.
We say $\mathcal{M}$ is standard if $\mathcal{M}^{-\infty}$ and
$\mathcal{M}^{\infty}$ are $p$-torsion-free, each $f_{\infty}:\mathcal{M}^{i}\to\mathcal{M}^{\infty}$
is injective; and, finally, there is a $j_{0}\in\mathbb{Z}$ so that
\[
f_{\infty}(\mathcal{M}^{i+j_{0}})=\{m\in\mathcal{M}^{\infty}|p^{i}m\in f_{\infty}(\mathcal{M}^{j_{0}})\}
\]
for all $i\in\mathbb{Z}$.
\end{defn}
The $j_{0}$ appearing in this definition is not unique; indeed, from
the definition if $i<0$ we have $f_{\infty}(\mathcal{M}^{i+j_{0}})=p^{-i}\cdot f_{\infty}(\mathcal{M}^{j_{0}})$
which implies that we can replace $j_{0}$ with any $j<j_{0}$. In
particular the \emph{index} of a standard gauge (as in \defref{Index!})
is the maximal $j_{0}$ for which the description in the definition
is true (and it takes the value $\infty$ if this description is true
for all integers). Note that if $\mathcal{M}$ is standard, so is
the shift $\mathcal{M}(j)$, and the index of $\mathcal{M}(j)$ is
equal to $\text{index}(\mathcal{M})+j$.
As in the case where $\mathfrak{X}$ is a point (which is discussed
above in \exaref{BasicGaugeConstruction}), standard gauges are (up
to a shift of index) exactly the ones that can be constructed from
lattices:
\begin{example}
\label{exa:Basic-Construction-over-X} Let $\mathcal{N}'$ be a $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}[p^{-1}]$-module,
and let $\mathcal{N}$ be a lattice; i.e., a $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-submodule
such that $\mathcal{N}[p^{-1}]=\mathcal{N}'$. Recalling the isomorphism
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}[p^{-1}]\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}[p^{-1}]$
(c.f. \lemref{Basic-Structure-of-D^(1)}), we also suppose given a
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}$-lattice of
$\mathcal{N}'$ called $\mathcal{M}^{\infty}$. Then we may produce
a standard gauge $\mathcal{M}$ via
\[
\mathcal{M}^{i}=\{m\in\mathcal{M}^{\infty}|p^{i}m\in\mathcal{N}\}
\]
If $\mathcal{M}^{\infty}$ is coherent over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}$
and $\mathcal{N}$ is coherent over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$,
then $\mathcal{M}$ is a coherent gauge.
\end{example}
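The simplest instance of this construction, over a point, may help to fix ideas; we spell
out one natural choice of the structure maps (the one forced by requiring the maps $f_{\infty}$
to be injective, as in \defref{Standard!}), so this is only an illustration. Take $\mathcal{N}'=W(k)[p^{-1}]$
and $\mathcal{N}=\mathcal{M}^{\infty}=W(k)$. Then
\[
\mathcal{M}^{i}=\{m\in W(k)\,|\,p^{i}m\in W(k)\}=\begin{cases}
p^{-i}W(k) & i\leq0\\
W(k) & i\geq0
\end{cases}
\]
with $f:\mathcal{M}^{i}\to\mathcal{M}^{i+1}$ the inclusion and $v:\mathcal{M}^{i}\to\mathcal{M}^{i-1}$
multiplication by $p$, so that $fv=vf=p$. Each $f_{\infty}$ is then injective, and the
description in \defref{Standard!} holds with $j_{0}=0$, so this gauge is standard, of
index $0$.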
Let us give some general properties of standard gauges:
\begin{lem}
\label{lem:Standard-is-rigid}Suppose $\mathcal{M}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
is standard; and let $\mathcal{M}_{0}=\mathcal{M}/p$ be its reduction
mod $p$. Then $\mathcal{M}_{0}$ has $\text{ker}(f)=\text{im}(v)$
and $\text{ker}(v)=\text{im}(f)$; further, if $\overline{m}_{i}\in\mathcal{M}_{0}^{i}$,
then $f\overline{m}_{i}=0=v\overline{m}_{i}$ iff $\overline{m}_{i}=0$.
\end{lem}
\begin{proof}
Since $fv=0$ on $\mathcal{M}_{0}$, we always have $\text{im}(f)\subset\text{ker}(v)$
and $\text{im}(v)\subset\text{ker}(f)$; so we consider the other
inclusions.
Let $m_{i}\in\mathcal{M}^{i}$, and denote its image in $\mathcal{M}_{0}^{i}$
by $\overline{m}_{i}$. Suppose $v\overline{m}_{i}=0$. Then $vm_{i}=pm_{i-1}$
for some $m_{i-1}\in\mathcal{M}^{i-1}$, so that $f_{\infty}(vm_{i})=pf_{\infty}(m_{i})=pf_{\infty}(m_{i-1})$.
Since $\mathcal{M}^{\infty}$ is $p$-torsion-free this yields $f_{\infty}(m_{i})=f_{\infty}(m_{i-1})$
so that $fm_{i-1}=m_{i}$ by the injectivity of $f_{\infty}$. Thus
$\overline{m}_{i}\in\text{im}(f)$ and we see $\text{ker}(v)\subset\text{im}(f)$
as required.
Now suppose $f\overline{m}_{i}=0$. Then $fm_{i}=pm_{i+1}$ for some
$m_{i+1}\in\mathcal{M}^{i+1}$ so that $f_{\infty}(m_{i})=pf_{\infty}(m_{i+1})=f_{\infty}(vm_{i+1})$,
and the injectivity of $f_{\infty}$ implies $m_{i}=vm_{i+1}$ so
that $\overline{m}_{i}\in\text{im}(v)$ as required.
To obtain the last property: since $\mathcal{M}$ is standard, after
shifting the grading if necessary, we may identify $f_{\infty}(\mathcal{M}^{i})$
with $\{m\in\mathcal{M}^{\infty}|p^{i}m\in f_{\infty}(\mathcal{M}^{0})\}$.
If $m_{i}\in\mathcal{M}^{i}$ and $f\overline{m}_{i}=0=v\overline{m}_{i}$
then $fm_{i}=pm_{i+1}$ and $vm_{i}=pm_{i-1}$; therefore $f_{\infty}(m_{i})=pf_{\infty}(m_{i+1})$
and $pf_{\infty}(m_{i})=pf_{\infty}(m_{i-1})$ so that $p^{2}f_{\infty}(m_{i+1})=pf_{\infty}(m_{i-1})$
which implies $pf_{\infty}(m_{i+1})=f_{\infty}(m_{i-1})$. But $p^{i-1}f_{\infty}(m_{i-1})\in f_{\infty}(\mathcal{M}^{0})$,
so that $p^{i}f_{\infty}(m_{i+1})\in f_{\infty}(\mathcal{M}^{0})$
which forces $f_{\infty}(m_{i+1})\in f_{\infty}(\mathcal{M}^{i})$
so that $m_{i+1}=fm'_{i}$ for some $m'_{i}\in\mathcal{M}^{i}$. So
$fm_{i}=pm_{i+1}=f(pm_{i}')$ which implies $m_{i}=pm'_{i}$ and so
$\overline{m}_{i}=0$.
\end{proof}
This motivates the
\begin{defn}
(\cite{key-5}, definition 2.2.2) A gauge $\mathcal{M}_{0}$ over
$\mathcal{D}_{X}^{(0,1)}$ is called quasi-rigid if it satisfies $\text{ker}(f)=\text{im}(v)$
and $\text{ker}(v)=\text{im}(f)$; it is called rigid if it is quasi-rigid
and, in addition, $\text{ker}(f)\cap\text{ker}(v)=0$.
\end{defn}
By the above lemma, a gauge is rigid if it is of the form $\mathcal{M}/p$
for some standard gauge $\mathcal{M}$.
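For example (a toy case, recorded only to fix ideas): when $X$ is a point, the construction
above yields the graded ring $k[f,v]/(fv)$, and this ring, regarded as a gauge over itself,
is rigid; one checks directly that
\[
\text{ker}(f)=\text{im}(v)=\bigoplus_{j\geq1}k\cdot v^{j},\qquad\text{ker}(v)=\text{im}(f)=\bigoplus_{i\geq1}k\cdot f^{i},
\]
and these two intersect trivially.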
As explained in \cite{key-5}, rigidity is a very nice condition;
and in particular we have the following generalization of \cite{key-5},
lemma 2.2.5:
\begin{lem}
\label{lem:Basic-Facts-on-Rigid}Let $\mathcal{M}_{0}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$.
Then $\mathcal{M}_{0}$ is rigid iff $\mathcal{M}_{0}/f$ is $v$-torsion
free and $\mathcal{M}_{0}/v$ is $f$-torsion-free.
Further, $\mathcal{M}_{0}$ is quasi-rigid iff $\mathcal{M}_{0}\otimes_{D(k)}^{L}k[f]\tilde{=}\mathcal{M}_{0}/v$
and $\mathcal{M}_{0}\otimes_{D(k)}^{L}k[v]\tilde{=}\mathcal{M}_{0}/f$.
\end{lem}
\begin{proof}
Suppose $\mathcal{M}_{0}$ is rigid. To show $\mathcal{M}_{0}/f$
is $v$-torsion free we have to show that if $m$ is a local section
of $\mathcal{M}_{0}$ with $vm=fm'$, then $m\in\text{im}(f)$. Since
$\text{im}(f)=\text{ker}(v)$ we have $v(vm)=0$, and since also $f(vm)=0$
we must (by the second condition of rigidity) have $vm=0$. Therefore
$m\in\text{ker}(v)=\text{im}(f)$ as desired. The proof that $\mathcal{M}_{0}/v$
is $f$-torsion-free is essentially identical.
Now suppose $\mathcal{M}_{0}$ satisfies $\mathcal{M}_{0}/f$ is $v$-torsion
free and $\mathcal{M}_{0}/v$ is $f$-torsion-free. Suppose $m\in\text{ker}(f)$.
Then the image of $m$ in $\mathcal{M}_{0}/v$ is $f$-torsion, hence
$0$; and so $m\in\text{im}(v)$; therefore $\text{ker}(f)=\text{im}(v)$
and similarly $\text{ker}(v)=\text{im}(f)$. If $fm=0=vm$, then $m\in\text{ker}(f)\cap\text{ker}(v)=\text{im}(v)\cap\text{ker}(v)=\text{ker}(f)\cap\text{im}(f)$.
Since $m\in\text{im}(v)$ the image of $m$ in $\mathcal{M}_{0}/v$
is zero; also, $m=fm'$, so since $\mathcal{M}_{0}/v$ is $f$-torsion
free we see $m'\in\text{im}(v)$. So $m=fm'=fv(m'')=0$ as desired.
Now we consider the quasi-rigidity condition: we can write the following
free resolution of $k[f]$ over $D(k)$:
\[
\cdots\rightarrow D(k)(-1)\xrightarrow{v}D(k)\xrightarrow{f}D(k)(-1)\xrightarrow{v}D(k)\rightarrow k[f]\rightarrow0
\]
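Tensoring the free part of this resolution with $\mathcal{M}_{0}$ over $D(k)$ (a routine
unwinding, recorded for convenience) gives the complex
\[
\cdots\xrightarrow{f}\mathcal{M}_{0}\xrightarrow{v}\mathcal{M}_{0}\xrightarrow{f}\mathcal{M}_{0}\xrightarrow{v}\mathcal{M}_{0}
\]
whose cohomology is $\mathcal{M}_{0}/v$ in degree zero and, in the remaining degrees,
alternately $\text{ker}(v)/\text{im}(f)$ and $\text{ker}(f)/\text{im}(v)$,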
so that $\mathcal{M}_{0}\otimes_{D(k)}^{L}k[f]$ has no higher cohomology
groups iff $\text{ker}(v)=\text{im}(f)$ and $\text{ker}(f)=\text{im}(v)$;
the same holds for $\mathcal{M}_{0}\otimes_{D(k)}^{L}k[v]\tilde{=}\mathcal{M}_{0}/f$.
\end{proof}
Now we turn to conditions for checking that a gauge is standard.
\begin{prop}
\label{prop:Baby-Mazur}Let $\mathcal{M}\in\mathcal{G}_{\text{coh}}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$,
and suppose that $\mathcal{M}^{-\infty}$ and $\mathcal{M}^{\infty}$
are $p$-torsion-free. Set $\mathcal{M}_{0}=\mathcal{M}/p$, and suppose
$\mathcal{M}_{0}/v$ is $f$-torsion-free. Then $\mathcal{M}$ is
standard; in particular $\mathcal{M}$ is $p$-torsion free.
\end{prop}
\begin{proof}
Let $\mathcal{N}_{i}=\text{ker}(f_{\infty}:\mathcal{M}^{i}\to\mathcal{M}^{\infty})$
and let $\mathcal{N}=\bigoplus_{i}\mathcal{N}_{i}$. Clearly $\mathcal{N}$
is preserved under $W(k)[f]$; further, since $f_{\infty}(vm)=pf_{\infty}(m)$
we see that $\mathcal{N}$ is preserved under $W(k)[f,v]$.
For $m$ a local section of $\mathcal{M}^{i}$ let $\overline{m}$
denote its image in $\mathcal{M}_{0}$. If $m\in\mathcal{N}$ then
certainly $f_{\infty}(\overline{m})=0$ in $\mathcal{M}_{0}^{\infty}$.
Since $\mathcal{M}_{0}/v$ is $f$-torsion-free, the map $f_{\infty}:\mathcal{M}_{0}^{i}/v\mathcal{M}_{0}^{i+1}\to\mathcal{M}_{0}^{\infty}$
is injective; so $\overline{m}\in\text{im}(v)$. Thus there is some
$m'$ so that $m-vm'\in p\cdot\mathcal{M}^{i}$; since $p=fv$ we
see that $m\in\text{im}(v)$ as well; i.e., we can assume $m=vm'$.
Since $0=f_{\infty}(vm')=pf_{\infty}(m')$ and $\mathcal{M}^{\infty}$
is $p$-torsion-free, we see that $m'\in\mathcal{N}$ as well. So
in fact $\mathcal{N}=v\cdot\mathcal{N}$.
Now, as $\mathcal{M}$ is coherent, we may choose some $j_{0}$ for
which $v_{-\infty}:\mathcal{M}^{j}\to\mathcal{M}^{-\infty}$ is an
isomorphism for all $j\le j_{0}$. Then, for each such $j$, $\mathcal{M}^{j}$
is $p$-torsion-free (since $\mathcal{M}^{-\infty}$ is). Further,
since $fv=p$, we have that $f$ and $v$ are isomorphisms after inverting
$p$, which shows $f_{\infty}:\mathcal{M}^{j}[p^{-1}]\tilde{\to}\mathcal{M}^{\infty}[p^{-1}]$.
Since $\mathcal{M}^{j}$ and $\mathcal{M}^{\infty}$ are $p$-torsion-free,
we see that $f_{\infty}$ is injective on $\mathcal{M}^{j}$. Thus
$\mathcal{N}$ is concentrated in degrees above $j_{0}$, and we see
that every element of $\mathcal{N}$ is killed by a power of $v$.
Since $\mathcal{M}$ is coherent, it is locally noetherian, so that
every local section of $\mathcal{M}$ killed by a power of $v$ is
actually killed by $v^{N}$ for some fixed $N\in\mathbb{N}$. Therefore,
we have $v^{N}\cdot\mathcal{N}=0$. Since also $\mathcal{N}=v\cdot\mathcal{N}$
we obtain $\mathcal{N}=0$. Thus each $f_{\infty}:\mathcal{M}^{i}\to\mathcal{M}^{\infty}$
is injective. It follows that each $\mathcal{M}^{i}$ is $p$-torsion-free,
and since $fv=p$ we see that $\mathcal{M}$ is $f$ and $v$-torsion-free
as well.
Choose $j_{0}$ so that $v:\mathcal{M}^{j}\to\mathcal{M}^{j-1}$ is
an isomorphism for all $j\leq j_{0}$. To finish the proof, we have
to show that, for all $i\in\mathbb{Z}$, $f_{\infty}(\mathcal{M}^{i+j_{0}})=\{m\in\mathcal{M}^{\infty}|p^{i}m\in f_{\infty}(\mathcal{M}^{j_{0}})\}$.
If $i\leq0$, then $v^{-i}:\mathcal{M}^{j_{0}}\to\mathcal{M}^{i+j_{0}}$
is an isomorphism, and $f_{\infty}(\mathcal{M}^{i+j_{0}})=p^{-i}f_{\infty}(\mathcal{M}^{j_{0}})$
as required. If $i>0$, then for $m\in\mathcal{M}^{i+j_{0}}$ we have
$f_{\infty}(v^{i}m)=p^{i}f_{\infty}(m)\in f_{\infty}(\mathcal{M}^{j_{0}})$
so that $f_{\infty}(\mathcal{M}^{i+j_{0}})\subseteq\{m\in\mathcal{M}^{\infty}|p^{i}m\in f_{\infty}(\mathcal{M}^{j_{0}})\}$.
For the reverse inclusion, let $m\in\mathcal{M}^{\infty}$ be such
that $p^{i}m=f_{\infty}(m_{j_{0}})$ for some $m_{j_{0}}\in\mathcal{M}^{j_{0}}$.
By definition $\mathcal{M}^{\infty}$ is the union of its sub-sheaves
$f_{\infty}(\mathcal{M}^{n})$, so suppose $m=f_{\infty}(m_{l})$
for some $m_{l}\in\mathcal{M}^{l}$, with $l>i+j_{0}$. Since $f_{\infty}(v^{i}m_{l})=p^{i}f_{\infty}(m_{l})=p^{i}m=f_{\infty}(m_{j_{0}})$,
we see that
\[
f^{l-(i+j_{0})}(m_{j_{0}})=v^{i}m_{l}
\]
Consider the image of this equation in $\mathcal{M}_{0}$. It shows
that $f^{l-(i+j_{0})}(\overline{m}_{j_{0}})\in v\cdot\mathcal{M}_{0}$.
Since $f$ is injective on $\mathcal{M}_{0}/v$, the assumption that
$l-(i+j_{0})>0$ implies $\overline{m}_{j_{0}}\in v\cdot\mathcal{M}_{0}$.
As above, since $fv=p$ this implies $m_{j_{0}}\in v\cdot\mathcal{M}$;
writing $m_{j_{0}}=vm_{j_{0}+1}$ we now have the equation $f^{l-(i+j_{0})}(vm_{j_{0}+1})=v^{i}m_{l}$.
Since $v$ acts injectively on $\mathcal{M}$ we see that $f^{l-(i+j_{0})}(m_{j_{0}+1})=v^{i-1}m_{l}$.
Applying $f_{\infty}$, we see that $p^{i-1}m\in f_{\infty}(\mathcal{M}^{j_{0}+1})$.
If $i=1$, this immediately proves $f_{\infty}(\mathcal{M}^{1+j_{0}})=\{m\in\mathcal{M}^{\infty}|pm\in f_{\infty}(\mathcal{M}^{j_{0}})\}$.
For $i>1$, by induction on $i$ we may suppose $pm\in f_{\infty}(\mathcal{M}^{j_{0}+i-1})$.
But then $f_{\infty}(vm_{l})=pf_{\infty}(m_{l})=pm=f_{\infty}(m_{j_{0}+i-1})$
for some $m_{j_{0}+i-1}\in\mathcal{M}^{j_{0}+i-1}$. This implies
$f^{l-(j_{0}+i)}(m_{j_{0}+i-1})=vm_{l}$ so if $l>j_{0}+i$ then,
arguing exactly as in the previous paragraph, we have $m_{j_{0}+i-1}=vm_{j_{0}+i}$
for some $m_{j_{0}+i}\in\mathcal{M}^{j_{0}+i}$ and so $f^{l-(j_{0}+i)}(m_{j_{0}+i})=m_{l}$
which implies $m=f_{\infty}(m_{l})\in f_{\infty}(\mathcal{M}^{j_{0}+i})$
as required.
\end{proof}
This result implies a convenient criterion for ensuring that gauges
are standard; this is the first analogue of Mazur's theorem in this
context:
\begin{thm}
\label{thm:Mazur!}Let $\mathcal{M}^{\cdot}\in D_{\text{coh}}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
Suppose that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{-\infty}$ and
$\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{\infty}$ are $p$-torsion-free
for all $n$, and suppose that $\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free for all $n$. Then $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$
is standard for all $n$.
In particular, $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$ is $p$-torsion-free,
and $\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p$ is rigid for all $n$.
We have $\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p\tilde{=}\mathcal{H}^{n}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)$
, $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/v\tilde{=}\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$,
and $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/f\tilde{=}\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[v])$
for all $n$. Further, $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/f$
is $v$-torsion-free and $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/v$
is $f$-torsion-free for all $n$.
\end{thm}
\begin{proof}
Suppose that $b\in\mathbb{Z}$ is the largest integer so that $\mathcal{H}^{b}(\mathcal{M}^{\cdot})\neq0$.
Then $b$ is the largest integer for which $\mathcal{H}^{b}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k))\neq0$,
and
\[
\mathcal{H}^{b}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k))\tilde{=}\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p
\]
Thus we have a distinguished triangle
\[
\tau_{\leq b-1}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\to\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\to(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)[-b]
\]
to which we may apply the functor $\otimes_{D(k)}^{L}k[f]$. This
yields
\begin{equation}
\tau_{\leq b-1}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]\to(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]\to(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)[-b]\otimes_{D(k)}^{L}k[f]\label{eq:triangle!}
\end{equation}
Since $\otimes_{D(k)}k[f]$ is right exact, the complex on the left
is still concentrated in degrees $\leq b-1$, and the middle and right
complex are concentrated in degrees $\leq b$. Further
\[
\mathcal{H}^{b}((\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)[-b]\otimes_{D(k)}^{L}k[f])\tilde{=}\mathcal{H}^{0}((\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)\otimes_{D(k)}^{L}k[f])\tilde{=}(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)/v
\]
Therefore $(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)/v\tilde{=}\mathcal{H}^{b}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free by assumption. Thus we may apply the previous
proposition to $\mathcal{H}^{b}(\mathcal{M}^{\cdot})$ and conclude
that it is standard.
Now, to finish the proof that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$
is standard for all $n$, we proceed by induction on the cohomological
length of $\mathcal{M}^{\cdot}$. If the length is $1$ we are done.
If not, we have the distinguished triangle
\[
\tau_{\leq b-1}(\mathcal{M}^{\cdot})\to\mathcal{M}^{\cdot}\to\mathcal{H}^{b}(\mathcal{M}^{\cdot})[-b]
\]
which yields the triangle
\[
\tau_{\leq b-1}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k\to\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\to(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)[-b]
\]
where we have used that $\mathcal{H}^{b}(\mathcal{M}^{\cdot})$ is
$p$-torsion-free to identify $(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)\tilde{=}\mathcal{H}^{b}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k$.
As noted above, we have $\mathcal{H}^{b}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k))\tilde{=}\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p$,
so this triangle implies the isomorphism
\[
\tau_{\leq b-1}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k\tilde{=}\tau_{\leq b-1}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)
\]
Further, since $\mathcal{H}^{b}(\mathcal{M}^{\cdot})$ is standard
we have that $\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p$ is rigid; therefore
by \lemref{Basic-Facts-on-Rigid} we have $(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)\otimes_{D(k)}^{L}k[f]\tilde{=}(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)/v$
is concentrated in a single degree. Thus, the distinguished triangle
\eqref{triangle!} becomes
\[
(\tau_{\leq b-1}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]\to(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]\to(\mathcal{H}^{b}(\mathcal{M}^{\cdot})/p)/v[-b]
\]
and so we have the isomorphism
\[
(\tau_{\leq b-1}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]\tilde{=}\tau_{\leq b-1}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])
\]
Thus the complex $\tau_{\leq b-1}(\mathcal{M}^{\cdot})$ satisfies
the assumption that $(\tau_{\leq b-1}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f]$
has cohomology sheaves which are $f$-torsion-free, and so the complex
$\tau_{\leq b-1}(\mathcal{M}^{\cdot})$ satisfies all of the assumptions
of the theorem, but has a lesser cohomological length than $\mathcal{M}^{\cdot}$.
So we conclude by induction that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$
is standard for all $n$.
For the final part, since standard modules are $p$-torsion-free, we see
\[
\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p\tilde{=}\mathcal{H}^{n}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)
\]
for all $n$, and since each $\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p$
is rigid, the complex $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k$ has
cohomology sheaves which are all acyclic for $\otimes_{D(k)}k[f]$
and for $\otimes_{D(k)}k[v]$, by \lemref{Basic-Facts-on-Rigid};
and the last sentence follows.
\end{proof}
\begin{rem}
As we shall see below, the condition that each cohomology sheaf of
$((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free is quite natural; it says that the spectral sequence
associated to the Hodge filtration on $(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)^{\infty}$
degenerates at $E^{1}$; this can be checked using Hodge theory in
many geometric situations. On the other hand, one conclusion of the
theorem is that each cohomology sheaf of $(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[v]$
is $v$-torsion-free; this corresponds to degeneration of the conjugate
spectral sequence on $(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)^{-\infty}$.
Over a point, one checks in an elementary way (using the finite dimensionality
of the vector spaces involved) that these two degenerations are equivalent;
this is true irrespective of whether the lift $\mathcal{M}^{\cdot}$
has $p$-torsion-free cohomology groups. This allows one to make various
stronger statements in this case (c.f., e.g., \cite{key-10}, proof
of theorem 8.26). I don't know if this is true over a higher dimensional
base.
\end{rem}
In most cases of interest, the assumption that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{\infty}$
is $p$-torsion-free is actually redundant; more precisely, it is
implied by the assumption that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{-\infty}$
is $p$-torsion-free when one has a Frobenius action; c.f. \thmref{F-Mazur}
below.
\subsection{Filtrations, Rees algebras, and filtered Frobenius descent}
In this section, we consider how the various gradings and filtrations
appearing in this paper (in positive characteristic) relate to the
more usual Hodge and conjugate filtrations in $\mathcal{D}$-module
theory. We start with the basic definitions; as usual $X$ is smooth
over $k$.
\begin{defn}
\label{def:Hodge-and-Con} The decreasing filtration ${\displaystyle \text{image}(\mathcal{D}^{(0,1),i}\to\mathcal{D}_{X}^{(0)})}:=C^{i}(\mathcal{D}_{X}^{(0)})$
is called the conjugate filtration. The increasing filtration ${\displaystyle \text{image}(\mathcal{D}^{(0,1),i}\to\mathcal{D}_{X}^{(1)})}:=F^{i}(\mathcal{D}_{X}^{(1)})$
is called the Hodge filtration.
Similarly, for any $\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
we may define ${\displaystyle \text{image}(\mathcal{M}^{i}\xrightarrow{v_{\infty}}\mathcal{M}^{-\infty})}:=C^{i}(\mathcal{M}^{-\infty})$
and ${\displaystyle \text{image}(\mathcal{M}^{i}\xrightarrow{f_{\infty}}\mathcal{M}^{\infty})}:=F^{i}(\mathcal{M}^{\infty})$,
the conjugate and Hodge filtrations, respectively.
\end{defn}
\begin{rem}
\label{rem:Description-of-conjugate}1) From the explicit description
of $v$ given in (the proof of) \lemref{Construction-of-v}, we see
that $C^{i}(\mathcal{D}_{X}^{(0)})=\mathcal{I}^{i}\mathcal{D}_{X}^{(0)}$
where $\mathcal{I}$ is the two-sided ideal of $\mathcal{D}_{X}^{(0)}$
generated by $\mathcal{Z}(\mathcal{D}_{X}^{(0)})^{+}$, the positive
degree elements of the center\footnote{The center is a graded sheaf of algebras via the isomorphism $\mathcal{Z}(\mathcal{D}_{X}^{(0)})\tilde{=}\mathcal{O}_{T^{*}X^{(1)}}$}.
In local coordinates, $\mathcal{I}$ is just the ideal generated by
$\{\partial_{1}^{p},\dots,\partial_{n}^{p}\}$, which matches the
explicit description of the action of $v$ given above. This is the
definition of the conjugate filtration on $\mathcal{D}_{X}^{(0)}$
given in {[}OV{]}, section 3.4, extended to a $\mathbb{Z}$-filtration
by setting $C^{i}(\mathcal{D}_{X}^{(0)})=\mathcal{D}_{X}^{(0)}$ for
all $i\leq0$.
2) On the other hand, from \thmref{Local-Coords-for-D+}, we see that
$F^{i}(\mathcal{D}_{X}^{(1)})$ is a locally free, finite $\overline{\mathcal{D}_{X}^{(0)}}$-module;
in local coordinates it has a basis $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{0\leq|J|\leq i}$.
3) If $\mathcal{M}$ is a coherent gauge over $X$, then by \lemref{Basic-v}
the Hodge filtration of $\mathcal{M}^{\infty}$ is exhaustive and
$F^{i}(\mathcal{M}^{\infty})=0$ for $i\ll0$, and the conjugate filtration
satisfies $C^{i}(\mathcal{M}^{-\infty})=\mathcal{M}^{-\infty}$ for
all $i\ll0$.
\end{rem}
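For instance (purely as an illustration, not needed in what follows):
if $X=\mathbb{A}_{k}^{1}$ with coordinate $x$, then $\mathcal{I}$
is the two-sided ideal generated by the central element $\partial^{p}$,
so the description in 1) reads
\[
C^{i}(\mathcal{D}_{X}^{(0)})=\partial^{pi}\cdot\mathcal{D}_{X}^{(0)}\phantom{i}\text{for}\phantom{i}i\geq0,\phantom{iii}C^{i}(\mathcal{D}_{X}^{(0)})=\mathcal{D}_{X}^{(0)}\phantom{i}\text{for}\phantom{i}i\leq0
\]
while, by 2), $F^{i}(\mathcal{D}_{X}^{(1)})$ is the free $\overline{\mathcal{D}_{X}^{(0)}}$-module
on $\{1,\partial^{[p]},\dots,(\partial^{[p]})^{i}\}$.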
\begin{defn}
\label{def:Rees-and-Rees-bar}Let $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
denote the Rees algebra of $\mathcal{D}_{X}^{(1)}$ with respect to
the Hodge filtration; and let $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$
denote the Rees algebra of $\mathcal{D}_{X}^{(0)}$ with respect to
the conjugate filtration. We will denote the Rees parameters (i.e.,
the element $1\in F^{1}(\mathcal{D}_{X}^{(1)})$, respectively $1\in C^{-1}(\mathcal{D}_{X}^{(0)})$)
by $f$ and $v$, respectively. We also let $\mathcal{R}(\mathcal{D}_{X}^{(0)})$
denote the Rees algebra of $\mathcal{D}_{X}^{(0)}$ with respect to
the symbol filtration; here the Rees parameter will also be denoted
$f$.
\end{defn}
\begin{lem}
We have $\mathcal{D}_{X}^{(0,1)}/v\tilde{=}\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\mathcal{D}_{X}^{(0,1)}/f\tilde{=}\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$
as graded rings.
\end{lem}
\begin{proof}
By \corref{Local-coords-over-A=00005Bf,v=00005D}, we have that $f$
acts injectively on $\mathcal{D}_{X}^{(0,1)}/v$. Since $fv=0$ the
map $f_{\infty}:\mathcal{D}_{X}^{(0,1),i}\to\mathcal{D}_{X}^{(1)}$
factors through a map $f_{\infty}:\mathcal{D}_{X}^{(0,1),i}/v\to\mathcal{D}_{X}^{(1)}$,
which has image equal to $F^{i}(\mathcal{D}_{X}^{(1)})$ (by definition).
The kernel is $0$ since $f$ acts injectively; so $\mathcal{D}_{X}^{(0,1),i}/v\tilde{\to}F^{i}(\mathcal{D}_{X}^{(1)})$
as required. The isomorphism $\mathcal{D}_{X}^{(0,1)}/f\tilde{=}\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$
is proved identically.
\end{proof}
Therefore we have the natural functors
\[
\mathcal{M}^{\cdot}\to\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{\to}k[f]\otimes_{D(k)}^{L}\mathcal{M}^{\cdot}
\]
from $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$ to $D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$
and
\[
\mathcal{M}^{\cdot}\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{\to}k[v]\otimes_{D(k)}^{L}\mathcal{M}^{\cdot}
\]
from $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$ to $D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$.
We are going to give some basic results on the derived categories
of modules over these rings. As a motivation, we recall a general result
of Schapira-Schneiders (\cite{key-47}, theorem 4.20; c.f. also example
4.22):
\begin{thm}
Let $(\mathcal{A},F)$ be a $\mathbb{Z}$-filtered sheaf of rings
on a topological space; let $\mathcal{R}(\mathcal{A})$ denote the
associated Rees algebra. Let $D((\mathcal{A},F)-\text{mod})$ denote
the filtered derived category of modules over $(\mathcal{A},F)$.
Then there is an equivalence of categories
\[
\mathcal{R}:D((\mathcal{A},F)-\text{mod})\tilde{\to}D(\mathcal{G}(\mathcal{R}(\mathcal{A})))
\]
which preserves the subcategories of bounded, bounded below, and bounded
above complexes. To a filtered module $\mathcal{M}$ (considered as
a complex in degree $0$) this functor attaches the usual Rees module
$\mathcal{R}(\mathcal{M})$.
\end{thm}
Recall that a filtered complex $\mathcal{M}^{\cdot}$ over $(\mathcal{A},F)$
is said to be strict if each morphism $d:(\mathcal{M}^{i},F)\to(\mathcal{M}^{i+1},F)$
satisfies $d(\mathcal{M}^{i})\cap F_{j}(\mathcal{M}^{i+1})=d(F_{j}(\mathcal{M}^{i}))$
for all $j$. Then $\mathcal{M}^{\cdot}$ is quasi-isomorphic
to a strict complex iff each cohomology sheaf $\mathcal{H}^{i}(\mathcal{R}(\mathcal{M}^{\cdot}))$
is torsion-free with respect to the Rees parameter. If $\mathcal{M}^{\cdot}$
is a bounded complex, for which the filtration is bounded below (i.e.
there is some $j\in\mathbb{Z}$ so that $F_{j}(\mathcal{M}^{i})=0$
for all $i$), then this condition is equivalent to the degeneration
at $E_{1}$ of the spectral sequence associated to the filtration.
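To illustrate the criterion (in a minimal example over a field, not
used in the sequel): consider the two-term filtered complex $k\xrightarrow{\text{id}}k$,
placed in degrees $0$ and $1$, where the source has $F_{j}=k$ for
$j\geq1$ and $F_{j}=0$ for $j\leq0$, while the target has $F_{j}=k$
for $j\geq0$ and $F_{j}=0$ for $j<0$. The differential is filtered
but not strict, since its image lies in $F_{0}$ of the target while
$F_{0}$ of the source is zero. Correspondingly, the Rees complex is
the inclusion $f\cdot k[f]\hookrightarrow k[f]$, whose $\mathcal{H}^{1}$
is $k[f]/f\cdot k[f]\tilde{=}k$, which is $f$-torsion; equivalently,
the spectral sequence of the filtration does not degenerate at $E_{1}$
(its $E_{1}$ page is $k\oplus k$, while the complex is acyclic).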
Now we return the discussion to $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$. We begin with
the latter; recall that Ogus and Vologodsky in \cite{key-11} have
considered the filtered derived category associated to the conjugate
filtration on $\mathcal{D}_{X}^{(0)}$; by the above theorem\footnote{The careful reader will note that in their work they require filtrations
to be separated; however, this leads to a canonically isomorphic filtered
derived category, as explained in \cite{key-59}, proposition 3.1.22 } this category is equivalent to $\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$.
After we construct our pushforward on $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$,
we will show that it is compatible with the one constructed in \cite{key-11};
for now, we will just prove the following basic structure theorem
for $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$, following \cite{key-3},
theorem 2.2.3:
\begin{prop}
We have $\mathcal{Z}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))\tilde{=}\mathcal{O}_{T^{*}X^{(1)}}[v]$;
this is a graded ring where $\mathcal{O}_{T^{*}X^{(1)}}$ is graded
as usual and $v$ is placed in degree $-1$. The algebra $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
is Azumaya over $\mathcal{Z}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$,
of index $p^{\text{dim}(X)}$. In particular, $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})(U)$
has finite homological dimension for each open affine $U$.
\end{prop}
\begin{proof}
The filtered embedding $\mathcal{O}_{T^{*}X^{(1)}}\to\mathcal{D}_{X}^{(0)}$
induces the map $\mathcal{O}_{T^{*}X^{(1)}}[v]\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$;
by the very definition of the conjugate filtration, it is a map of
graded rings. To show that this map is an isomorphism onto the center,
note that by \corref{Local-coords-over-A=00005Bf,v=00005D}, after
choosing etale local coordinates we have that a basis for $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
over $\mathcal{O}_{X}[v]$ is given by $\{\partial^{I}(\partial^{[p]})^{J}\}$
where each entry of $I$ is contained in $\{0,\dots,p-1\}$; and the
formula for the bracket by $\partial_{i}^{[p]}$ (c.f. \thmref{Local-Coords-for-D+})
shows that $(\partial^{[p]})^{J}$ is now central. Thus the center
is given by $\mathcal{O}_{X^{(1)}}[v,\partial_{1}^{[p]},\dots,\partial_{n}^{[p]}]$
which is clearly the (isomorphic) image of the map.
The above local coordinates also show that $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
is locally free over $\mathcal{Z}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$,
of rank $p^{2\text{dim}(X)}$. Now we can follow the strategy of \cite{key-3}
to show that $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ is Azumaya:
we consider the commutative subalgebra $\mathcal{A}_{X,v}:=\mathcal{O}_{X}\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{O}_{T^{*}X^{(1)}}[v]$
inside $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$; it acts by
right multiplication on $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$,
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ is a locally
free module over it of rank $p^{\text{dim}(X)}$. We have the action
map
\[
A:\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{T^{*}X^{(1)}}[v]}\mathcal{A}_{X,v}\to\mathcal{E}nd_{\mathcal{A}_{X,v}}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))
\]
which is a morphism of algebras, both of which are locally free modules
of rank $p^{2\text{dim}(X)}$ over $\mathcal{A}_{X,v}$. Since the
left hand side is the pullback of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$,
considered as a sheaf of algebras over $T^{*}X^{(1)}\times\mathbb{A}^{1}$,
to the flat cover $X\times_{X^{(1)}}T^{*}X^{(1)}\times\mathbb{A}^{1}$,
we see that $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ is Azumaya
if $A$ is an isomorphism.
To prove that $A$ is an isomorphism, it suffices to prove it after inverting
$v$ and after setting $v=0$. Upon inverting $v$, we have $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})=\mathcal{D}_{X}^{(0)}[v,v^{-1}]$,
so the map $A$ simply becomes the analogous map for $\mathcal{D}_{X}^{(0)}$
tensored with $k[v,v^{-1}]$; this is shown to be an isomorphism by
\cite{key-3}, proposition 2.2.2. Upon setting $v=0$, we obtain
\[
A_{0}:\text{gr}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{T^{*}X^{(1)}}}\mathcal{A}_{X}\to\mathcal{E}nd_{\mathcal{A}_{X}}(\text{gr}(\mathcal{D}_{X}^{(0)}))
\]
where $\text{gr}(\mathcal{D}_{X}^{(0)})$ is the associated graded
of $\mathcal{D}_{X}^{(0)}$ with respect to the conjugate filtration;
this is a (split) Azumaya algebra which is easily seen to be isomorphic
to $\overline{\mathcal{D}}_{X}^{(0)}\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{O}_{T^{*}X^{(1)}}$
(c.f. \cite{key-11}; the discussion below lemma 3.18). Thus the map
$A_{0}$ is again an isomorphism; indeed, we have
\[
\text{gr}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{T^{*}X^{(1)}}}\mathcal{A}_{X}\tilde{\to}\overline{\mathcal{D}}_{X}^{(0)}\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{O}_{T^{*}X^{(1)}}\otimes_{\mathcal{O}_{T^{*}X^{(1)}}}\mathcal{A}_{X}
\]
\[
\tilde{\to}\mathcal{E}nd_{\mathcal{O}_{X^{(1)}}}(\mathcal{O}_{X})\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{A}_{X}
\]
so that each reduction of $\text{gr}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{T^{*}X^{(1)}}}\mathcal{A}_{X}$
to a closed point in $X\times_{X^{(1)}}T^{*}X^{(1)}$ is a matrix algebra
of rank $p^{\text{dim}(X)}$, and hence a central simple ring, and
the result follows immediately.
\end{proof}
Next we turn to the category of modules over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$;
in this case, we can describe them in terms of the familiar filtered
$\mathcal{D}_{X}^{(0)}$-modules (with respect to the symbol filtration).
The key to doing so is a version of Berthelot's Frobenius descent
for filtered $\mathcal{D}_{X}^{(1)}$-modules; while we will consider
the more general Frobenius descent over $\mathfrak{X}$ in the next
subsection, we will give the basic construction on $X$ for now.
To proceed, recall that we have the embedding $\overline{\mathcal{D}_{X}^{(0)}}\subset\mathcal{D}_{X}^{(1)}$
which is simply the image of the map $f_{\infty}:\mathcal{D}_{X}^{(0)}\to\mathcal{D}_{X}^{(1)}$.
Let $\mathcal{J}\subset\overline{\mathcal{D}_{X}^{(0)}}$ denote the
annihilator of $1\in\mathcal{O}_{X}$ under the action of $\overline{\mathcal{D}_{X}^{(0)}}$
on $\mathcal{O}_{X}$; we have the left ideal $\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$.
\begin{prop}
\label{prop:Basic-F^*-over-k}There is an isomorphism of $\mathcal{O}_{X}$-modules
$\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}\tilde{\to}F^{*}\mathcal{D}_{X}^{(0)}$,
thus endowing $F^{*}\mathcal{D}_{X}^{(0)}$ with the structure of
a left $\mathcal{D}_{X}^{(1)}$-module; and hence the structure of
a $(\mathcal{D}_{X}^{(1)},\mathcal{D}_{X}^{(0)})$-bimodule. Let $F^{i}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})$
be the filtration induced from the Hodge filtration on $\mathcal{D}_{X}^{(1)}$,
and let $F^{i}(\mathcal{D}_{X}^{(0)})$ be the symbol filtration.
Then we have
\[
F^{i}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})\cdot F^{j}(\mathcal{D}_{X}^{(0)})=F^{i+j}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})
\]
for all $i,j\geq0$. The induced morphism $\mathcal{D}_{X}^{(1)}\to\mathcal{E}nd_{\mathcal{D}_{X}^{(0),\text{opp}}}(F^{*}\mathcal{D}_{X}^{(0)})$
is an isomorphism of filtered algebras.
\end{prop}
\begin{proof}
We put a right $\mathcal{D}_{X}^{(0)}$-module structure on $\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$
as follows: let $\Phi\in\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$
be a section, over some open affine subset $U$ which possesses local
coordinates. Let $\partial$ be a derivation over $U$. We may choose
a differential operator $\delta$ of order $p$ on $U$ such that
$\delta(f^{p})=(\partial f)^{p}$ for all $f\in\mathcal{O}_{X}(U)$;
for instance, if $\partial=\sum a_{i}\partial_{i}$ then we may choose
$\delta=\sum a_{i}^{p}\partial_{i}^{[p]}$. If $\delta'$ is another
such differential operator, then $\delta-\delta'$ is a section of
$\overline{\mathcal{D}_{X}^{(0)}}(U)$ which annihilates $\mathcal{O}_{X}(U)^{p}$.
In particular, $\delta-\delta'\in\mathcal{J}$, and so $\Phi\delta=\Phi\delta'$
inside $\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$.
So, if we set $\Phi\star f=\Phi\cdot f^{p}$ and $\Phi\star\partial=\Phi\cdot\delta$
we obtain a (semilinear) right action of $\mathcal{D}_{X}^{(0)}$
on $\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$.
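(As a sanity check of the formula $\delta=\sum a_{i}^{p}\partial_{i}^{[p]}$,
not needed for the argument: in one variable the divided power operator
satisfies $\partial^{[p]}(x^{kp})=\binom{kp}{p}x^{(k-1)p}$, and $\binom{kp}{p}\equiv k\equiv k^{p}\pmod{p}$
by Lucas' theorem and Fermat's little theorem, so that indeed $\partial^{[p]}((x^{k})^{p})=(kx^{k-1})^{p}=(\partial(x^{k}))^{p}$.)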
Since $\mathcal{O}_{X}$ acts on $\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$
on the left, the map
\[
(f,\Psi)\to f\star\Psi
\]
induces a morphism $F^{*}\mathcal{D}_{X}^{(0)}\to\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$.
To show it is an isomorphism, let us consider filtrations: by \thmref{Local-Coords-for-D+}
we have that $F^{i}(\mathcal{D}_{X}^{(1)})(U)$ is the free $\overline{\mathcal{D}_{X}^{(0)}}(U)$
module on $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{|J|\leq i}$.
Since $\overline{\mathcal{D}_{X}^{(0)}}/\mathcal{J}\tilde{\to}\mathcal{O}_{X}$,
we see that $F^{i}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})$
is the free $\mathcal{O}_{X}(U)$-module on $\{(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}_{|J|\leq i}$.
On the other hand, $F^{i}(\mathcal{D}_{X}^{(0)})(U)$ is the free
$\mathcal{O}_{X}(U)$-module on $\{\partial_{1}^{j_{1}}\cdots\partial_{n}^{j_{n}}\}_{|J|\leq i}$.
Since $1\star\partial_{i}=\partial_{i}^{[p]}$ we deduce $F^{*}F^{i}(\mathcal{D}_{X}^{(0)})=F^{i}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})$
which implies that the map $F^{*}\mathcal{D}_{X}^{(0)}\to\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J}$
is an isomorphism. The same calculation gives
\[
F^{i}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})\cdot F^{j}(\mathcal{D}_{X}^{(0)})=F^{i+j}(\mathcal{D}_{X}^{(1)}/\mathcal{D}_{X}^{(1)}\cdot\mathcal{J})
\]
Therefore, the map
\[
\mathcal{D}_{X}^{(1)}\to\mathcal{E}nd_{\mathcal{D}_{X}^{(0),\text{op}}}(F^{*}\mathcal{D}_{X}^{(0)})
\]
is a morphism of filtered algebras, where the filtration on the right
hand side is defined by
\[
F^{i}(\mathcal{E}nd_{\mathcal{D}_{X}^{(0)}}(F^{*}\mathcal{D}_{X}^{(0)}))=\{\varphi\in\mathcal{E}nd_{\mathcal{D}_{X}^{(0)}}(F^{*}\mathcal{D}_{X}^{(0)})|\varphi(F^{j}(F^{*}\mathcal{D}_{X}^{(0)}))\subset F^{i+j}(F^{*}\mathcal{D}_{X}^{(0)})\phantom{i}\text{for all}\phantom{i}j\}
\]
Upon passing to the associated graded, we obtain the morphism
\[
\text{gr}(\mathcal{D}_{X}^{(1)})\to\text{gr}\mathcal{E}nd_{\mathcal{D}_{X}^{(0),\text{op}}}(F^{*}\mathcal{D}_{X}^{(0)})\tilde{=}\mathcal{E}nd_{\text{gr}(\mathcal{D}_{X}^{(0)})}(\text{gr}(F^{*}\mathcal{D}_{X}^{(0)}))
\]
(the last isomorphism follows from the fact that $F^{*}\mathcal{D}_{X}^{(0)}$
is a locally free filtered module over $\mathcal{D}_{X}^{(0)}$).
Working in local coordinates, we obtain the morphism
\[
\mathcal{\overline{D}}_{X}^{(0)}[\partial_{1}^{[p]},\dots,\partial_{n}^{[p]}]\to\mathcal{E}nd_{\text{Sym}_{\mathcal{O}_{X}}(\mathcal{T}_{X})}(F^{*}(\text{Sym}_{\mathcal{O}_{X}}(\mathcal{T}_{X})))
\]
where $\partial_{i}^{[p]}$ is sent to $\partial_{i}\in\mathcal{T}_{X}$.
By Cartier descent, there is an isomorphism $\mathcal{\overline{D}}_{X}^{(0)}\tilde{=}\mathcal{E}nd_{\mathcal{O}_{X}}(F^{*}\mathcal{O}_{X})$
(here, the action of $\mathcal{O}_{X}$ on $F^{*}\mathcal{O}_{X}$
is on the right-hand factor in the tensor product; in other words,
it is the action of $\mathcal{O}_{X}$ on itself through the Frobenius);
and so we see that this map is an isomorphism. Thus the map $\mathcal{D}_{X}^{(1)}\to\mathcal{E}nd_{\mathcal{D}_{X}^{(0),\text{op}}}(F^{*}\mathcal{D}_{X}^{(0)})$
is an isomorphism as claimed.
\end{proof}
This yields a functor $\mathcal{M}\to F^{*}\mathcal{M}:=F^{*}\mathcal{D}_{X}^{(0)}\otimes_{\mathcal{D}_{X}^{(0)}}\mathcal{M}$
(the Frobenius pullback) from $\mathcal{D}_{X}^{(0)}-\text{mod}$
to $\mathcal{D}_{X}^{(1)}-\text{mod}$; from the last part of the
above proposition and standard Morita theory one sees that it is an
equivalence of categories. Further:
\begin{thm}
\label{thm:Filtered-Frobenius} The Frobenius pullback $F^{*}$ can
be upgraded to an equivalence from $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)}))$
to $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$. Therefore,
the functor $F^{*}$ can also be upgraded to an equivalence of categories
from filtered $\mathcal{D}_{X}^{(0)}$-modules to filtered $\mathcal{D}_{X}^{(1)}$-modules.
In particular, $\mathcal{R}(\mathcal{D}_{X}^{(1)})(U)$ has finite
homological dimension for each open affine $U$.
\end{thm}
\begin{proof}
In \propref{Basic-F^*-over-k}, we showed that $F^{*}\mathcal{D}_{X}^{(0)}$
is filtered, in a way that strictly respects the filtered action of
both $\mathcal{D}_{X}^{(1)}$ and $\mathcal{D}_{X}^{(0)}$. So, consider
the Rees module $\mathcal{R}(F^{*}\mathcal{D}_{X}^{(0)})$. This is
a graded $(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\mathcal{R}(\mathcal{D}_{X}^{(0)}))$-bimodule;
and the isomorphism $F^{*}F^{i}(\mathcal{D}_{X}^{(0)})\tilde{=}F^{i}(F^{*}\mathcal{D}_{X}^{(0)})$
proved in loc.cit. shows that $\mathcal{R}(F^{*}\mathcal{D}_{X}^{(0)})\tilde{=}F^{*}\mathcal{R}(\mathcal{D}_{X}^{(0)})$
as a right $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-module. Thus the
result will follow if we can show that the action map
\begin{equation}
\mathcal{R}(\mathcal{D}_{X}^{(1)})\to\mathcal{E}nd_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}(\mathcal{R}(F^{*}\mathcal{D}_{X}^{(0)}))\tilde{=}\underline{\mathcal{E}nd}{}_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}(\mathcal{R}(F^{*}\mathcal{D}_{X}^{(0)}))\label{eq:first-map}
\end{equation}
is an isomorphism (the latter isomorphism follows from the fact that
$\mathcal{R}(F^{*}\mathcal{D}_{X}^{(0)})$ is coherent over $\mathcal{R}(\mathcal{D}_{X}^{(0)})$).
Both sides are therefore positively graded algebras over the ring
$k[f]$; taking reduction mod $f$ we obtain the map $\text{gr}(\mathcal{D}_{X}^{(1)})\to\mathcal{E}nd_{\text{gr}(\mathcal{D}_{X}^{(0)})}(\text{gr}(F^{*}\mathcal{D}_{X}^{(0)}))$
which we already showed to be an isomorphism. Thus by the graded Nakayama
lemma \eqref{first-map} is surjective. As both sides are $f$-torsion
free it follows that it is an isomorphism. Thus the first claim
is proved; the second follows by identifying filtered modules
with graded modules over the Rees ring which are torsion-free with
respect to $f$.
\end{proof}
\begin{rem}
\label{rem:The-inverse-to-F^*}The inverse to the functor $F^{*}$
can be described as follows: via the embedding $\overline{\mathcal{D}_{X}^{(0)}}\subset\mathcal{R}(\mathcal{D}_{X}^{(1)})$,
any module $\mathcal{M}$ over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
possesses a connection which has $p$-curvature $0$. Apply this to
$\mathcal{R}(\mathcal{D}_{X}^{(1)})/\mathcal{J}\cdot\mathcal{R}(\mathcal{D}_{X}^{(1)})$,
where as above $\mathcal{J}\subset\overline{\mathcal{D}_{X}^{(0)}}$
denotes the annihilator of $1\in\mathcal{O}_{X}$ under the action
of $\overline{\mathcal{D}_{X}^{(0)}}$ on $\mathcal{O}_{X}$. We obtain
from the above argument the isomorphism
\[
(\mathcal{R}(\mathcal{D}_{X}^{(1)})/\mathcal{J}\cdot\mathcal{R}(\mathcal{D}_{X}^{(1)}))^{\nabla}\tilde{=}\mathcal{R}(\mathcal{D}_{X^{(1)}}^{(0)})
\]
Thus for any such $\mathcal{M}$ the sheaf $\mathcal{M}^{\nabla}:=\text{ker}(\nabla:\mathcal{M}\to\mathcal{M})$
inherits the structure of a module over $\mathcal{R}(\mathcal{D}_{X^{(1)}}^{(0)})$.
As $k$ is perfect we have an isomorphism of schemes $\sigma:X^{(1)}\to X$,
and so composing with this we can obtain from $\mathcal{M}^{\nabla}$
an $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-module; this is the inverse
functor to $F^{*}$.
\end{rem}
To close out this subsection, we'd like to discuss an important tool
for studying $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$; namely, reducing
statements to their analogues in $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$. For any $\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$,
we have a short exact sequence
\[
0\to\text{ker}(f)\to\mathcal{M}\to\mathcal{M}/\text{ker}(f)\to0
\]
the module on the left is annihilated by $f$; i.e., it is a module
over $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$ while the module
on the right is annihilated by $v$; i.e., it is a module over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$.
This allows us to deduce many basic structural properties of $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
from properties of $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$
and $\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))$.
We now give the key technical input; to state it, we will abuse notation
slightly, so that if $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$
(or in $D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$)
we will use the same symbol $\mathcal{M}^{\cdot}$ to denote its image
in $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$:
\begin{prop}
\label{prop:Sandwich!}1) Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$.
Suppose $\mathcal{N}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1),\text{opp}})$
is quasi-rigid. Then
\[
\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}\mathcal{N}/v\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{M}^{\cdot}
\]
Similarly, if $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$
we have
\[
\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}\mathcal{N}/f\otimes_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}^{L}\mathcal{M}^{\cdot}
\]
The analogous statement holds for $\mathcal{N}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
quasi-rigid and $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})^{\text{opp}}))$,
resp. $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})^{\text{opp}}))$.
2) As above, let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$
and suppose $\mathcal{N}$ is quasi-rigid. Then
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}^{\cdot},\mathcal{N})\tilde{=}R\underline{\mathcal{H}om}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{M}^{\cdot},\text{ker}(v:\mathcal{N}\to\mathcal{N}))
\]
Similarly, if $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$
then
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M},\mathcal{N})\tilde{=}R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{M},\text{ker}(f:\mathcal{N}\to\mathcal{N}))
\]
\end{prop}
\begin{proof}
1) Choose a flat resolution $\mathcal{F}^{\cdot}\to\mathcal{N}$ (in
the category of right $\mathcal{D}_{X}^{(0,1)}$-gauges); concretely,
the terms of $\mathcal{F}^{\cdot}$ are direct sums of sheaves of
the form $j_{!}(\mathcal{D}_{X}^{(0,1)}(i)|_{U})$ (where $U\subset X$
is open and $j_{!}$ denotes extension by zero). Then $\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}$
is represented by the complex
\[
\mathcal{F}^{\cdot}\otimes_{\mathcal{D}_{X}^{(0,1)}}\mathcal{M}^{\cdot}\tilde{=}(\mathcal{F}/v)^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}\mathcal{M}^{\cdot}
\]
where the isomorphism follows from the fact that each term of $\mathcal{M}^{\cdot}$
is annihilated by $v$. On the other hand, $(\mathcal{F}/v)^{\cdot}$
is a complex, whose terms are direct sums of sheaves of the form $j_{!}(\mathcal{R}(\mathcal{D}_{X}^{(1)})(i)|_{U})$,
which computes $\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{R}(\mathcal{D}_{X}^{(1)})$.
However, we have $\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{R}(\mathcal{D}_{X}^{(1)})\tilde{=}\mathcal{N}/v$
by the assumption on $\mathcal{N}$ (c.f. \lemref{Basic-Facts-on-Rigid}).
Therefore $(\mathcal{F}/v)^{\cdot}$ is a flat resolution (in the
category of graded right $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules)
of $\mathcal{N}/v$, and so
\[
(\mathcal{F}/v)^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}\mathcal{M}^{\cdot}\tilde{=}\mathcal{N}/v\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{M}^{\cdot}
\]
as claimed. The case $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$
is essentially identical.
2) Choose an injective resolution $\mathcal{N}\to\mathcal{I}^{\cdot}$.
Then we have that $R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}^{\cdot},\mathcal{N})$
is represented by
\[
\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}^{\cdot},\mathcal{I}^{\cdot})=\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}^{\cdot},\mathcal{I}^{\cdot,v=0})=\underline{\mathcal{H}om}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{M}^{\cdot},\mathcal{I}^{\cdot,v=0})
\]
where $\mathcal{I}^{j,v=0}=\{m\in\mathcal{I}^{j}|vm=0\}$. From the
isomorphism $\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M},\mathcal{I}^{\cdot})=\underline{\mathcal{H}om}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{M},\mathcal{I}^{\cdot,v=0})$
we see that the functor $\mathcal{I}\to\mathcal{I}^{v=0}$ takes
injectives in $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$ to injectives
in $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$. On the other
hand, we have $\mathcal{I}^{j,v=0}=\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\mathcal{I}^{j,v=0})$.
Thus the functor $\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{R}(\mathcal{D}_{X}^{(1)}),-)$
takes injectives in $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$ to injectives
in $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$, and so
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}^{\cdot},\mathcal{N})\tilde{=}R\underline{\mathcal{H}om}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{M}^{\cdot},R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\mathcal{N}))
\]
On the other hand, using the resolution
\[
\cdots\to\mathcal{D}_{X}^{(0,1)}(-1)\xrightarrow{v}\mathcal{D}_{X}^{(0,1)}\xrightarrow{f}\mathcal{D}_{X}^{(0,1)}(-1)\xrightarrow{v}\mathcal{D}_{X}^{(0,1)}\to\mathcal{R}(\mathcal{D}_{X}^{(1)})
\]
one deduces
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\mathcal{N})\tilde{=}\text{ker}(v:\mathcal{N}\to\mathcal{N})
\]
and the first statement in $2)$ follows; the second statement is
proved in an identical fashion.
\end{proof}
Here is a typical application:
\begin{prop}
\label{prop:Quasi-rigid=00003Dfinite-homological}A quasicoherent
gauge $\mathcal{N}\in\mathcal{G}_{qcoh}(\mathcal{D}_{X}^{(0,1)})$
is quasi-rigid iff, for each open affine $U\subset X$, $\mathcal{N}(U)$
has finite projective dimension over $\mathcal{D}_{X}^{(0,1)}(U)$.
\end{prop}
\begin{proof}
Let $\mathcal{N}$ be quasi-rigid. Then for any quasicoherent $\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1),\text{opp}})$,
we have the short exact sequence
\[
0\to\text{ker}(f)\to\mathcal{M}\to\mathcal{M}/\text{ker}(f)\to0
\]
which yields the distinguished triangle
\[
\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\text{ker}(f)\to\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}\to\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}/\text{ker}(f)
\]
Applying the previous result; we see that the outer two tensor products
are isomorphic to tensor products over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$, respectively.
As these algebras have finite homological dimension (the dimension
is $2\text{dim}(X)+1$, in fact) over any open affine, we see that
$\mathcal{N}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}$ is
a bounded complex; since this is true for all $\mathcal{M}$ we obtain
the forward implication. For the reverse, note that by \lemref{Basic-Facts-on-Rigid},
the functor $\mathcal{M}\to\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}\tilde{\to}k[f]\otimes_{D(k)}^{L}\mathcal{M}$
produces an unbounded complex when $\mathcal{M}$ is not quasi-rigid.
\end{proof}
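To illustrate the role of quasi-rigidity here, consider the simplest
case of a point, writing (for this illustration only, as the notation
$\otimes_{D(k)}$ and the resolution in the proof of \propref{Sandwich!}
suggest) $D(k)$ for the graded ring $k[f,v]/(fv)$, so that $k[f]=D(k)/v$
plays the role of $\mathcal{R}(\mathcal{D}_{X}^{(1)})$. For the module
$k=D(k)/(f,v)$, on which $f$ and $v$ act by zero, one may compute
$k[f]\otimes_{D(k)}^{L}k$ using the periodic resolution (grading twists
suppressed)
\[
\cdots\to D(k)\xrightarrow{f}D(k)\xrightarrow{v}D(k)\to k[f]\to0
\]
Tensoring this resolution with $k$ kills every differential, so $\text{Tor}_{i}^{D(k)}(k[f],k)\tilde{=}k$
for all $i\geq0$. Thus $k$ has infinite projective dimension over
$D(k)$ and, by the proposition, cannot be quasi-rigid.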
\subsection{\label{subsec:Frobenius-Descent,--Gauges}Frobenius Descent, $F^{-1}$-Gauges}
In this section we recall Berthelot's theory of Frobenius descent
for $\mathcal{D}$-modules and give the definition of an $F^{-1}$-gauge
over a higher dimensional base.
We begin by briefly recalling Berthelot's theory of the Frobenius
action in mixed characteristic. This is developed using the theory
of (mixed) divided powers in \cite{key-2}; for the reader's convenience
we will recall a simple description in the case of interest to us
(this point of view is emphasized in \cite{key-48}).
First suppose that $\mathfrak{X}$ admits an endomorphism $F$ which
lifts the Frobenius on $X$, and whose restriction to $W(k)$ agrees
with the Witt-vector Frobenius on $W(k)$. This is equivalent to giving
a morphism $\mathfrak{X}\to\mathfrak{X}^{(1)}$ whose composition
with the natural map $\mathfrak{X}^{(1)}\to\mathfrak{X}$ agrees with
$F$ (here, $\mathfrak{X}^{(1)}$ denotes the first Frobenius twist
of $\mathfrak{X}$ over $W(k)$); we will also denote the induced
morphism $\mathfrak{X}\to\mathfrak{X}^{(1)}$ by $F$. On the underlying
topological spaces (namely $X$ and $X^{(1)}$), this map is a bijection,
and we shall consistently consider $\mathcal{O}_{\mathfrak{X}^{(1)}}$
as a sheaf of rings on $X$, equipped with an injective map of sheaves
of algebras $F^{\#}:\mathcal{O}_{\mathfrak{X}^{(1)}}\to\mathcal{O}_{\mathfrak{X}}$
which makes $\mathcal{O}_{\mathfrak{X}}$ into a finite $\mathcal{O}_{\mathfrak{X}^{(1)}}$-module.
Now consider the sheaf $\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}^{(1)}},\mathcal{O}_{\mathfrak{X}})$.
For any $i\geq0$, this is a $(\mathcal{D}_{\mathfrak{X}}^{(i+1)},\mathcal{D}_{\mathfrak{X}^{(1)}}^{(i)})$
bi-module (via the actions of these rings of differential operators
on $\mathcal{O}_{\mathfrak{X}}$ and $\mathcal{O}_{\mathfrak{X}^{(1)}}$,
respectively). Then we have the
\begin{thm}
\label{thm:Berthelot-Frob}(Berthelot) The $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)},\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)})$
bi-sub-module of $\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}^{(1)}},\mathcal{O}_{\mathfrak{X}})$
locally generated by $F^{\#}$ is isomorphic to $\mathcal{O}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}^{(1)}}}\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}$,
via the map
\[
(f,\Phi)\to f\circ F^{\#}\circ\Phi\in\mathcal{H}om_{W(k)}(\mathcal{O}_{\mathfrak{X}^{(1)}},\mathcal{O}_{\mathfrak{X}})
\]
for local sections $f\in\mathcal{O}_{\mathfrak{X}}$ and $\Phi\in\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}$.
This gives the sheaf $\mathcal{O}_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}^{(1)}}}\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}=F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}$
the structure of a $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)},\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)})$
bimodule. The associated functor, denoted $F^{*}$,
\[
\mathcal{M}\to F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}\otimes_{\mathcal{D}_{\mathfrak{X}^{(1)}}^{(i)}}\mathcal{M}\tilde{=}F^{*}\mathcal{M}
\]
is an equivalence of categories $\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)}-\text{mod}\to\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}-\text{mod}$,
which induces an equivalence $\text{Coh}(\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)})\to\text{Coh}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)})$.
In particular, the map $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(i+1)}\to\mathcal{E}nd_{\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i),\text{op}}}(F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}^{(1)}}^{(i)})$
is an isomorphism of sheaves of algebras.
As $W(k)$ is perfect, we have an isomorphism $\mathfrak{X}^{(1)}\tilde{\to}\mathfrak{X}$;
and we may therefore regard $F^{*}$ as being an equivalence of categories
from $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}-\text{mod}$ to $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}-\text{mod}$.
\end{thm}
This is proved in \cite{key-2}, section 2.3. In fact, in the case
where $\mathfrak{X}=\text{Specf}(\mathcal{A})$ is affine and admits
etale local coordinates, and the map $F$ acts on coordinates $\{t_{i}\}_{i=1}^{n}$
via $F(t_{i})=t_{i}^{p}$, the first assertion can be proved
quite directly. The second is the theory of \cite{key-2}. Note that
this description implies that the reduction mod $p$ of the bimodule
$F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}$ agrees with the
bimodule $F^{*}\mathcal{D}_{X}^{(0)}$ constructed in \propref{Basic-F^*-over-k}.
\begin{rem}
\label{rem:Compare-With-Berthelot}1) Let $\mathcal{D}_{X,\mathbf{Ber}}^{(1)}$
denote Berthelot's ring of divided power differential operators of
level $1$ on $X$. Then the Frobenius descent theory of the previous
theorem gives an isomorphism
\[
\mathcal{D}_{X,\text{Ber}}^{(1)}\to\mathcal{E}nd_{\mathcal{D}_{X}^{(0),\text{op}}}(F^{*}\mathcal{D}_{X}^{(0)})
\]
It follows that $\mathcal{D}_{X,\mathbf{Ber}}^{(1)}\tilde{=}\mathcal{D}_{X}^{(1)}$
even if $X$ is not liftable to $W(k)$.
2) The Frobenius descent over $X$ implies the Frobenius descent over
$\mathfrak{X}$, once one constructs the $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(1)},\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)})$
bimodule structure on $F^{*}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$.
Indeed, this structure yields a morphism
\[
\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}\to\mathcal{E}nd_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0),\text{op}}}(F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)})
\]
as both sides are $p$-adically complete and $p$-torsion-free, to
check that this map is an isomorphism one simply has to reduce mod
$p$.
\end{rem}
Now let us return to a general $\mathfrak{X}$. It is a fundamental
fact that Frobenius descent doesn't really depend on the existence
of the lift $F$:
\begin{thm}
(Berthelot) Suppose $F_{1},F_{2}$ are two lifts of Frobenius on $\mathfrak{X}$.
Then there is an isomorphism of bimodules $\sigma_{1,2}:F_{1}^{*}\mathcal{D}_{\mathfrak{X}}^{(i)}\tilde{\to}F_{2}^{*}\mathcal{D}_{\mathfrak{X}}^{(i)}$.
If $F_{3}$ is a third lift, we have $\sigma_{2,3}\circ\sigma_{1,2}=\sigma_{1,3}$.
\end{thm}
This is \cite{key-2}, theorem 2.2.5; c.f. also \cite{key-21}, corollary
13.3.8. As lifts of Frobenius always exist locally, this implies that
there is a globally defined bimodule $F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$,
which induces an equivalence $F^{*}:\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}-\text{mod}\to\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}-\text{mod}$;
we use the same letter to denote the derived equivalence $F^{*}:D(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}-\text{mod})\to D(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}-\text{mod})$.
The equivalence of categories $F^{*}$ has many remarkable properties,
in particular its compatibility with the push-forward, pullback, and
duality functors for $\mathcal{D}$-modules; we will recall these
properties in the relevant sections below.
It will also be useful to recall some basic facts about the right-handed
version of the equivalence. Recall that we have equivalences of categories
$\mathcal{M}\to\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}$
from $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}-\text{mod}$ to $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i),\text{op}}-\text{mod}$
for any $i$ (c.f. \cite{key-1}, or \propref{Left-Right-Swap} below).
This implies that there is a functor $\mathcal{M}\to F^{!}\mathcal{M}:=\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}F^{*}(\omega_{\mathfrak{X}}^{-1}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M})$
which is an equivalence from $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i),\text{op}}-\text{mod}$
to $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1),\text{op}}-\text{mod}$.
By basic Grothendieck duality theory (c.f. \cite{key-2}, 2.4.1),
there is an isomorphism
\[
F^{!}\mathcal{M}\tilde{=}F^{-1}\mathcal{H}om_{\mathcal{O}_{\mathfrak{X}}}(F_{*}\mathcal{O}_{\mathfrak{X}},\mathcal{M})
\]
of sheaves of $\mathcal{O}_{\mathfrak{X}}$-modules (this justifies
the notation). If we put $\mathcal{M}=\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$
this isomorphism exhibits the left $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$-module
structure on $F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$.
\begin{prop}
\label{prop:F^*F^!}1) The equivalence of categories $F^{!}:\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i),\text{op}}-\text{mod}\to\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1),\text{op}}-\text{mod}$
is given by $\mathcal{M}\to\mathcal{M}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}}F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$.
2) There are isomorphisms of $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)},\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)})$
bimodules $F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}}F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}=F^{*}F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}\tilde{\to}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}$
and $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}\tilde{\leftarrow}F^{!}F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}=F^{*}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}}F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$.
In particular, for a $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}$-module
$\mathcal{M}$, we have $\mathcal{M}=F^{*}\mathcal{N}$ iff $F^{!}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i+1)}}\mathcal{M}\tilde{=}\mathcal{N}$.
\end{prop}
This is proved in \cite{key-2}, 2.5.1 (c.f. also \cite{key-21},
lemma 13.5.1). Further, by applying the Rees functor it directly implies
the analogue for the filtered Frobenius descent of \thmref{Filtered-Frobenius}:
\begin{cor}
\label{cor:Filtered-right-Frob}There is an equivalence of categories
$F^{!}:\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})^{\text{op}})\to\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})^{\text{op}})$;
which yields a $(\mathcal{R}(\mathcal{D}_{X}^{(0)}),\mathcal{R}(\mathcal{D}_{X}^{(1)}))$
bimodule $F^{!}\mathcal{R}(\mathcal{D}_{X}^{(0)})$. We have isomorphisms
of $(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\mathcal{R}(\mathcal{D}_{X}^{(1)}))$
bimodules
\[
F^{!}F^{*}\mathcal{R}(\mathcal{D}_{X}^{(0)})\tilde{\to}\mathcal{R}(\mathcal{D}_{X}^{(1)})\leftarrow F^{*}F^{!}\mathcal{R}(\mathcal{D}_{X}^{(0)})
\]
\end{cor}
Now we proceed to the
\begin{defn}
\label{def:Gauge-Defn!}An $F^{-1}$-gauge over $\mathfrak{X}$ is
an object of $\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$
equipped with an isomorphism $F^{*}\mathcal{M}^{-\infty}\tilde{\to}\widehat{\mathcal{M}^{\infty}}$
(here $\widehat{?}$ denotes $p$-adic completion). A coherent $F^{-1}$-gauge
is an $F^{-1}$-gauge whose underlying $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module
is coherent. We define the category of $F^{-1}$-gauges, $\mathcal{G}_{F^{-1}}(\mathcal{D}_{\mathfrak{X}}^{(0,1)})$
by demanding that morphisms between $F^{-1}$-gauges respect the $F^{-1}$-structure
(as in \defref{F-gauge}), and similarly for the category of coherent
$F^{-1}$-gauges, $\mathcal{G}_{F^{-1},coh}(\mathcal{D}_{\mathfrak{X}}^{(0,1)})$.
Similarly, an $F^{-1}$-gauge over $X$ is an object of $\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
equipped with an isomorphism $F^{*}\mathcal{M}^{-\infty}\tilde{\to}\mathcal{M}^{\infty}$,
and for the category $\mathcal{G}_{F^{-1}}(\mathcal{D}_{X}^{(0,1)})$
we demand that morphisms between $F^{-1}$-gauges respect the $F^{-1}$-structure.
We have the obvious subcategories of quasi-coherent and coherent gauges.
\end{defn}
In the world of coherent gauges, we have seen in \propref{Completion-for-noeth}
that completion is an exact functor. Therefore, the category of coherent
$F^{-1}$-gauges over $\mathfrak{X}$ is abelian; the same does not
seem to be true for the category of all gauges over $\mathfrak{X}$.
On the other hand, the category of all $F^{-1}$-gauges over $X$
is abelian, as are the categories of coherent and quasicoherent $F^{-1}$-gauges.
Now let us turn to the derived world:
\begin{defn}
\label{def:F-gauge-for-complexes}A complex $\mathcal{M}^{\cdot}$
in $D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$
is said to admit the structure of an $F^{-1}$-gauge if there is an
isomorphism $F^{*}(\mathcal{M}^{\cdot})^{-\infty}\tilde{\to}\widehat{(\mathcal{M}^{\cdot})^{\infty}}$
where $\widehat{?}$ denotes the cohomological completion. Similarly,
we say that a complex $\mathcal{M}^{\cdot}$ in $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
admits the structure of an $F^{-1}$-gauge if there is an isomorphism
$F^{*}(\mathcal{M}^{\cdot})^{-\infty}\tilde{\to}(\mathcal{M}^{\cdot})^{\infty}$.
We will use a subscript $F^{-1}$ to denote the relevant categories;
e.g. $D_{F^{-1}}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
\end{defn}
These are not triangulated categories in general, though there is
an obvious functor $D^{b}(\mathcal{G}_{F^{-1},coh}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))\to D_{coh,F^{-1}}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$
(and similarly for $X$). To give the correct triangulated analogue
of \defref{Gauge-Defn!} one must use higher homotopy theory; namely,
the glueing of $\infty$-categories along a pair of functors. I intend
to pursue this in a later project. For the purposes of this paper,
\defref{F-gauge-for-complexes} will suffice.
\begin{rem}
\label{rem:Cut-off-for-F-gauges}Suppose $\mathcal{M}^{\cdot}\in D_{coh,F^{-1}}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$;
then by \propref{Completion-for-noeth} $\widehat{\mathcal{H}^{i}(\mathcal{M}^{\cdot})^{\infty}}\tilde{=}\mathcal{H}^{i}(\widehat{\mathcal{M}^{\cdot,\infty}})$.
Therefore $\mathcal{H}^{i}(\mathcal{M}^{\cdot})$ admits the structure
of an $F^{-1}$-gauge for each $i$. Further, as both $F^{*}$ and
the completion functor are exact, we have that $\tau_{\leq i}(\mathcal{M}^{\cdot})$
and $\tau_{\geq i}(\mathcal{M}^{\cdot})$ are contained in $D_{coh,F^{-1}}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
where $\tau_{\leq i},\tau_{\geq i}$ are the cut-off functors.
\end{rem}
Given this, we can give the more refined version of Mazur's theorem
for $F^{-1}$-gauges:
\begin{thm}
\label{thm:F-Mazur}Let $\mathcal{M}^{\cdot}\in D_{\text{coh},F^{-1}}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
Suppose that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{-\infty}$ is
$p$-torsion-free for all $n$, and suppose that $\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free for all $n$. Then $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$
is standard for all $n$.
In particular, $\mathcal{H}^{n}(\mathcal{M}^{\cdot})$ is $p$-torsion-free,
and $\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p$ is rigid for all $n$.
We have $\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p\tilde{=}\mathcal{H}^{n}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)$
, $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/v\tilde{=}\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$,
and $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/f\tilde{=}\mathcal{H}^{n}((\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[v])$
for all $n$. Further, $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/f$
is $v$-torsion-free and $(\mathcal{H}^{n}(\mathcal{M}^{\cdot})/p)/v$
is $f$-torsion-free for all $n$.
\end{thm}
\begin{proof}
This follows from \thmref{Mazur!} if we can show that $\mathcal{H}^{n}(\mathcal{M}^{\cdot})^{\infty}\tilde{=}\mathcal{H}^{n}(\mathcal{M}^{\cdot,\infty})$
is also $p$-torsion-free for all $n$. Since $\mathcal{M}^{\cdot}\in D_{coh,F^{-1}}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
we have that the cohomological completion $\mathcal{\widehat{M}}^{\cdot,\infty}$ of the complex $\mathcal{M}^{\cdot,\infty}$
is isomorphic to $F^{*}(\mathcal{M}^{\cdot,-\infty})$; and this complex
has $p$-torsion-free cohomologies by the assumption. Since $\mathcal{M}^{\cdot,\infty}$
is a bounded complex with coherent cohomology sheaves, by \propref{Completion-for-noeth}
we have that $\mathcal{H}^{n}(\mathcal{\widehat{M}}^{\cdot,\infty})\tilde{=}\widehat{\mathcal{H}^{n}(\mathcal{M}^{\cdot,\infty})}$,
where the completion on the right denotes the usual $p$-adic completion.
But the module $\mathcal{H}^{n}(\mathcal{M}^{\cdot,\infty})$, being
coherent, is $p$-torsion-free iff its $p$-adic completion is. Thus
each $\mathcal{H}^{n}(\mathcal{M}^{\cdot,\infty})$ is $p$-torsion-free
as desired.
\end{proof}
In the case where $\mathfrak{X}=\text{Specf}(W(k))$ is a point, and
$\mathcal{M}^{\cdot}$ is the gauge coming from the cohomology of some
smooth proper formal scheme $\mathfrak{Y}$ over $W(k)$ (this exists by \thmref{=00005BFJ=00005D},
and we'll construct it, in the language of this paper, in \secref{Push-Forward}
below), this is exactly the content of \thmref{(Mazur)}; indeed,
the first assumption is that $\mathbb{H}_{dR}^{i}(\mathfrak{Y})$
is $p$-torsion-free for all $i$, and the second assumption is the
degeneration of the Hodge to de Rham spectral sequence.
\subsection{Examples of Gauges}
We close out this chapter by giving a few important examples of gauges,
beyond $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ itself.
\begin{example}
Let $\mathfrak{X}$ be a smooth formal scheme. Then $D(\mathcal{O}_{\mathfrak{X}})\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
by the very definition of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
; indeed, we have $D(\mathcal{O}_{\mathfrak{X}}){}^{i}=\{g\in\mathcal{O}_{\mathfrak{X}}|p^{i}g\in\mathcal{O}_{\mathfrak{X}}\}$
so that the natural action of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}[p^{-1}]$
on $\mathcal{O}_{\mathfrak{X}}[p^{-1}]$ induces the action of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
on $\mathcal{O}_{\mathfrak{X}}[f,v]$. This is an $F^{-1}$-gauge
via the isomorphism $F^{*}\mathcal{O}_{\mathfrak{X}}\tilde{\to}\mathcal{O}_{\mathfrak{X}}$.
\end{example}
To generalize this, suppose $\mathfrak{D}\subset\mathfrak{X}$ is
a locally normal crossings divisor. Let $\mathfrak{U}$ be the complement
of $\mathfrak{D}$. Denote the inclusion map by $j$. We are going
to define a coherent $F^{-1}$-gauge ${\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{U}})}$,
whose cohomology is the gauge version of the log de Rham cohomology
of $\mathfrak{X}$ with respect to $\mathfrak{D}$.
To proceed, let $\mathfrak{V}\subset\mathfrak{X}$ be an affine open,
on which there are local coordinates $\{x_{1},\dots,x_{n}\}$ in which
the divisor $\mathfrak{D}$ is given by $\{x_{1}\cdots x_{j}=0\}$.
Then (starting with the action of finite-order differential operators),
we may consider the $D_{\mathfrak{V}}^{(0)}$-submodule of $\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]$
generated by $x_{1}^{-1}\cdots x_{j}^{-1}$; it is easily seen to
be independent of the choice of coordinates; hence we obtain a well-defined
$D_{\mathfrak{V}}^{(0)}$-module denoted ${\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{U}})}^{\text{fin}}$;
and we define the $\mathcal{\widehat{D}}_{\mathfrak{V}}^{(0)}$ module,
denoted $(j_{\star}\mathcal{O}_{\mathfrak{U}})^{-\infty}|_{\mathfrak{V}}$
to be the $p$-adic completion of ${\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{U}})}^{\text{fin}}$.
By glueing we obtain a coherent $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}$-module
$(j_{\star}\mathcal{O}_{\mathfrak{U}})^{-\infty}$. We have
\begin{lem}
\label{lem:Injectivity-of-completion}For any $\mathfrak{V}$ as above,
the natural map $\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
(where $\widehat{}$ denotes $p$-adic completion) is injective.
\end{lem}
We'll give a proof of this rather technical result in \secref{Appendix:-an-Inectivity}.
From this we deduce
\begin{lem}
\label{lem:Hodge-filt-on-log}Let $F$ be a lift of Frobenius satisfying
$F(x_{i})=x_{i}^{p}$ for all $1\leq i\leq n$. Then the natural map
$F^{*}\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
is injective, and its image is the $\widehat{\mathcal{D}}_{\mathfrak{V}}^{(1)}$-submodule
generated by $x_{1}^{-1}\cdots x_{j}^{-1}$.
\end{lem}
\begin{proof}
For each $r>0$ we have an isomorphism $F^{*}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]/p^{r})\tilde{\to}\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]/p^{r}$;
upon taking the inverse limit we obtain $F^{*}\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}\tilde{\to}\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$.
Since $F^{*}$ is an exact, conservative functor on $\mathcal{O}_{\mathfrak{V}}-\text{mod}$,
the previous lemma implies that $F^{*}\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
is injective. Since the image of $({\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
is the $\widehat{\mathcal{D}}_{\mathfrak{V}}^{(0)}$-submodule generated
by $x_{1}^{-1}\cdots x_{j}^{-1}$, the image of $F^{*}(j_{\star}{\displaystyle \mathcal{O}_{\mathfrak{U}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
is the $\widehat{\mathcal{D}}_{\mathfrak{V}}^{(1)}$-submodule generated
by $F(x_{1}^{-1}\cdots x_{j}^{-1})=x_{1}^{-p}\cdots x_{j}^{-p}$.
But since $\partial_{i}^{[p]}x_{i}^{-1}=-x_{i}^{-p-1}$ we see that
$\widehat{\mathcal{D}}_{\mathfrak{V}}^{(1)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}=\widehat{\mathcal{D}}_{\mathfrak{V}}^{(1)}\cdot x_{1}^{-p}\cdots x_{j}^{-p}$
as claimed.
\end{proof}
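(For the reader's convenience: viewing $\partial_{i}^{[p]}$ as the
divided power operator $\partial_{i}^{p}/p!$, one computes $\partial_{i}^{p}(x_{i}^{-1})=(-1)^{p}p!\,x_{i}^{-p-1}$,
so that $\partial_{i}^{[p]}(x_{i}^{-1})=(-1)^{p}x_{i}^{-p-1}$; for
$p$ odd this is the identity $\partial_{i}^{[p]}x_{i}^{-1}=-x_{i}^{-p-1}$
used above, and for $p=2$ the sign is $+1$, which does not affect
the equality of the submodules in question.)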
We can now construct the full gauge ${\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}[f,v]}$
as follows: denote by $\widehat{\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{\infty}}$
the $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$-submodule of $\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
locally generated by $x_{1}^{-1}\cdots x_{j}^{-1}$; as above this
is independent of the choice of coordinates for the divisor $\mathfrak{D}$.
Then we have
\begin{example}
\label{exa:Integral-j} Define $({\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{U}})})^{i}:=\{m\in\widehat{\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{\infty}}|p^{i}m\in\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{-\infty}\}$.
By the above discussion this is an object in $\text{Coh}_{F^{-1}}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
via the isomorphism \linebreak{}
$F^{*}(j_{\star}\mathcal{O}_{\mathfrak{U}})^{-\infty}\tilde{\to}\widehat{(j_{\star}\mathcal{O}_{\mathfrak{U}})^{\infty}}$.
Let ${\displaystyle j_{\star}D(\mathcal{O}_{U})}$ denote the reduction
mod $p$. We claim that the $l$th term of the Hodge filtration on
$({\displaystyle j_{\star}D(\mathcal{O}_{U})})^{\infty}$ is given
by $F^{*}(F^{l}(\mathcal{D}_{X}^{(0)})\cdot(x_{1}^{-1}\cdots x_{j}^{-1}))$,
where $F^{l}\mathcal{D}_{X}^{(0)}$ is the $l$th term of the symbol
filtration.
To see this, we work again in local coordinates over $\mathfrak{V}$.
One computes that $(\partial_{i}^{[p]})^{l}(x_{i}^{-p})=u\cdot l!x_{i}^{-p(l+1)}$
where $u$ is a unit in $\mathbb{Z}_{p}$. Therefore the module \linebreak{}
$D_{\mathfrak{V}}^{(1)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}=D_{\mathfrak{V}}^{(1)}\cdot x_{1}^{-p}\cdots x_{j}^{-p}$
is spanned over $\mathcal{O}_{\mathfrak{V}}$ by terms of the form
$I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}$; the $p$-adic
completion of this module is ${\displaystyle \widehat{\text{(\ensuremath{{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}}}})^{\infty}}}$.
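As a sanity check of the first formula in this paragraph, consider the case $l=1$ (this computation is not needed for the argument; it uses only the congruence $\binom{2p-1}{p}\equiv1\pmod{p}$, which follows from Lucas' theorem):
\[
\partial_{i}^{[p]}(x_{i}^{-p})=\frac{1}{p!}\partial_{i}^{p}(x_{i}^{-p})=(-1)^{p}\frac{(2p-1)!}{(p-1)!\,p!}\,x_{i}^{-2p}=(-1)^{p}\binom{2p-1}{p}x_{i}^{-2p},
\]
and $(-1)^{p}\binom{2p-1}{p}$ is indeed a unit in $\mathbb{Z}_{p}$, in agreement with the case $l=1$ of the formula above.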
For a multi-index $I$, set $\tilde{I}=(pi_{1}+p-1,\dots,pi_{j}+p-1)$.
Then \linebreak{}
$I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}\in({\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{U}})})^{r}$
iff $p^{r}\cdot I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}\in\mathcal{\widehat{D}}_{\mathfrak{V}}^{(0)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}$.
Furthermore, it is not difficult to see that $p^{r}\cdot I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}\in\mathcal{\widehat{D}}_{\mathfrak{V}}^{(0)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}$
iff $p^{r}\cdot I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}\in\mathcal{D}_{\mathfrak{V}}^{(0)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}$;
in turn, this holds iff $r\geq\text{val}(\tilde{I}!)-\text{val}(I!)$
(since $\mathcal{D}_{\mathfrak{V}}^{(0)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}$
is spanned by terms of the form $I!x_{1}^{-(i_{1}+1)}\cdots x_{j}^{-(i_{j}+1)}$);
here $\text{val}$ denotes the usual $p$-adic valuation, normalized so that
$\text{val}(p)=1$.
On the other hand one has
\[
\text{val}((pi+p-1)!)-\text{val}(i!)=i
\]
for all $i\geq0$. So ${\displaystyle \text{val}(\tilde{I}!)-\text{val}(I!)=\sum_{t=1}^{j}i_{t}}$
which implies $I!\cdot x_{1}^{-p(i_{1}+1)}\cdots x_{j}^{-p(i_{j}+1)}\in({\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{U}})})^{r}$
iff ${\displaystyle r\geq\sum_{t=1}^{j}i_{t}}$.
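For completeness, the identity $\text{val}((pi+p-1)!)-\text{val}(i!)=i$ used just above can be checked with Legendre's formula $\text{val}(n!)=(n-s_{p}(n))/(p-1)$, where $s_{p}(n)$ denotes the sum of the base-$p$ digits of $n$: since $s_{p}(pi+p-1)=s_{p}(i)+(p-1)$, one gets
\[
\text{val}((pi+p-1)!)-\text{val}(i!)=\frac{(pi+p-1)-s_{p}(i)-(p-1)}{p-1}-\frac{i-s_{p}(i)}{p-1}=\frac{(p-1)i}{p-1}=i.
\]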
On the other hand, $F^{l}(\mathcal{D}_{\mathfrak{X}}^{(0)})\cdot(x_{1}^{-1}\cdots x_{j}^{-1})$
is spanned over $\mathcal{O}_{\mathfrak{V}}$ by terms of the form
$I!\cdot x_{1}^{-(i_{1}+1)}\cdots x_{j}^{-(i_{j}+1)}$ where ${\displaystyle \sum_{t=1}^{j}i_{t}\leq l}$.
Thus the module $F^{*}(F^{l}(\mathcal{D}_{X}^{(0)})\cdot(x_{1}^{-1}\cdots x_{j}^{-1}))$
is exactly the image in $({\displaystyle j_{\star}\mathcal{O}_{U}[f,v]})^{\infty}$
of $({\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}[f,v]})^{l}$,
which is the claim.
\end{example}
Finally, we end with an example of a standard, coherent gauge which
definitely does not admit an $F^{-1}$-action:
\begin{example}
\label{exa:Exponential!} Let $\mathfrak{X}=\widehat{\mathbb{A}_{W(k)}^{1}}$.
Consider the $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-module
$e^{x}$; i.e., the sheaf $\mathcal{O}_{\mathfrak{X}}$ equipped with
the action determined by
\[
\sum_{i=0}^{\infty}a_{i}\partial^{i}\cdot1=\sum_{i=0}^{\infty}a_{i}
\]
(here $a_{i}\to0$ as $i\to\infty$). Then $\mathcal{O}_{\mathfrak{X}}[p^{-1}]$
is a coherent $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}$-module
since $\partial^{[p]}\cdot1=(p!)^{-1}$; it has a $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-lattice
given by $\mathcal{O}_{\mathfrak{X}}$. Thus by \exaref{Basic-Construction-over-X}
we may define
\[
(e^{x})^{i}:=\{m\in\mathcal{O}_{\mathfrak{X}}[p^{-1}]|p^{i}m\in\mathcal{O}_{\mathfrak{X}}\}
\]
and we obtain a gauge, also denoted $e^{x}$, such that $(e^{x})^{i}\tilde{=}\mathcal{O}_{\mathfrak{X}}$
for all $i$, and such that $v$ is an isomorphism for all $i$, while
$f$ is given by multiplication by $p$. We have $(e^{x})^{-\infty}\tilde{=}\mathcal{O}_{\mathfrak{X}}$
while $(e^{x})^{\infty}=\mathcal{O}_{\mathfrak{X}}[p^{-1}]$.
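More explicitly (iterating the computation $\partial^{[p]}\cdot1=(p!)^{-1}$, included here only as a consistency check), for every $l\geq1$ one has
\[
(\partial^{[p]})^{l}\cdot1=\frac{\partial^{pl}}{(p!)^{l}}\cdot1=\frac{1}{(p!)^{l}},
\]
and $\text{val}((p!)^{l})=l$; so acting on $1$ by $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$ produces elements of arbitrarily negative $p$-adic valuation, in agreement with $(e^{x})^{\infty}=\mathcal{O}_{\mathfrak{X}}[p^{-1}]$.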
\end{example}
This example indicates that the ``exponential Hodge theory'' appearing,
e.g., in Sabbah's work \cite{key-22}, could also form a part of this
story; it would be interesting to pursue this in future work.
\section{\label{sec:Operations:PullBack}Operations on Gauges: Pull-back}
Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$ be a morphism of smooth
formal schemes over $W(k)$. Let us begin by setting our conventions
on the pullback of $\mathcal{O}$-modules:
\begin{defn}
\label{def:Correct-Pullback}1) If $\mathcal{M}\in\mathcal{O}_{\mathfrak{Y}}-\text{mod}$,
we set $\varphi^{*}\mathcal{M}:=\mathcal{O}_{\mathfrak{X}}\widehat{\otimes}_{\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}})}\varphi^{-1}(\mathcal{M})$,
the $p$-adic completion of the naive tensor product. If $\mathcal{M}^{\cdot}\in D(\mathcal{O}_{\mathfrak{Y}})$,
then we define $L\varphi^{*}\mathcal{M}^{\cdot}:=\mathcal{O}_{\mathfrak{X}}\widehat{\otimes}_{\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})$;
the cohomological completion of the usual derived tensor product.
2) Consider $D(\mathcal{O}_{\mathfrak{X}})$ and $D(\mathcal{O}_{\mathfrak{Y}})$
as graded sheaves of rings as usual. If $\mathcal{M}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{\mathfrak{Y}})))$,
then we define $L\varphi^{*}\mathcal{M}^{\cdot}:=D(\mathcal{O}_{\mathfrak{X}})\widehat{\otimes}_{\varphi^{-1}(D(\mathcal{O}_{\mathfrak{Y}}))}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})$,
the graded cohomological completion of the usual derived tensor product.
\end{defn}
\begin{rem}
The reader will note several inconsistencies in these notations. First
of all, we do not, in general, have $\mathcal{H}^{0}(L\varphi^{*}\mathcal{M})=\varphi^{*}\mathcal{M}$.
Furthermore, the functor $L\varphi^{*}$ does not commute with the
forgetful functor from graded $\mathcal{O}[f,v]$-modules to $\mathcal{O}$-modules.
However, we will only use the underived $\varphi^{*}$ in a few very
special cases (c.f. the lemma directly below), when in fact the equality
$\mathcal{H}^{0}(L\varphi^{*}\mathcal{M})=\varphi^{*}\mathcal{M}$
does hold. Further, we will only apply the graded functor when working
with a graded module; and this will almost always be the case. Hopefully
this notational scheme does not cause any undue confusion.
\end{rem}
Now we should check that this operation behaves well on the basic
objects of interest in our paper:
\begin{lem}
\label{lem:phi-pullback-of-D^i}For each $i\in\mathbb{Z}$ we have
\[
L\varphi^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i})\tilde{=}\varphi^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i})\tilde{=}\lim_{n}(\mathcal{O}_{\mathfrak{X}_{n}}\otimes_{\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}_{n}})}\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}/p^{n}))
\]
In particular, we have
\[
\mathcal{H}^{0}(L\varphi^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}))\tilde{=}\varphi^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i})
\]
under the conventions of \defref{Correct-Pullback}. The same holds
if we replace $\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}$ by
$\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)}$ for any $j\geq0$.
\end{lem}
\begin{proof}
As this question is local, we can assume $\mathfrak{X}=\text{Specf}(\mathcal{B})$
and $\mathfrak{Y}=\text{Specf}(\mathcal{A})$ where $\mathcal{A}$
possesses local coordinates $\{t_{1},\dots,t_{n}\}$. By definition
we have that $\widehat{D}_{\mathcal{A}}^{(0,1),i}$ is the $p$-adic
completion of $D_{\mathcal{A}}^{(0,1),i}$. By \corref{Each-D^(i)-is-free}
we have that $D_{\mathcal{A}}^{(0,1),i}$ is free over $\mathcal{A}$.
In particular, it is $p$-torsion free and $p$-adically separated;
and so by \cite{key-8}, lemma 1.5.4 its cohomological completion
is equal to $\widehat{D}_{\mathcal{A}}^{(0,1),i}$. Therefore we have
the short exact sequence
\[
0\to D_{\mathcal{A}}^{(0,1),i}\to\widehat{D}_{\mathcal{A}}^{(0,1),i}\to K\to0
\]
where $p$ acts invertibly on $K$. Now we apply the functor $\mathcal{B}\otimes_{\mathcal{A}}^{L}$.
By \cite{key-8}, theorem 1.6.6, we have that $\widehat{D}_{\mathcal{A}}^{(0,1),i}$
is flat over $\mathcal{A}$. Thus we see that $\mathcal{B}\widehat{\otimes}_{\mathcal{A}}^{L}\mathcal{\widehat{D}}_{\mathcal{A}}^{(0,1),i}$,
the cohomological completion of $\mathcal{B}\otimes_{\mathcal{A}}^{L}\widehat{D}_{\mathcal{A}}^{(0,1),i}$,
is isomorphic to the cohomological completion of $\mathcal{B}\otimes_{\mathcal{A}}^{L}\mathcal{D}_{\mathcal{A}}^{(0,1),i}$,
which is just the usual $p$-adic completion since this is a free
$\mathcal{B}$-module, and the statement follows. An identical argument
works for $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)}$.
\end{proof}
Now let $j\geq0$. We recall that, for each $j\geq0$, Berthelot has
constructed a pullback functor $\varphi^{!,(j)}$ from $\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(j)}-\text{mod}$
to $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(j)}-\text{mod}$. In fact,
in \cite{key-2}, section 3.2, he has shown that $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(j)}:=\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)})$
carries the structure of a left $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(j)}$-module.
By definition $\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)}$
carries the structure of a right $\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)})$-module.
This, in turn, allows one to define the functor $\varphi^{*,(j)}$
via
\[
L\varphi^{*,(j)}(\mathcal{M})=\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)}\widehat{\otimes}_{\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(j)})}^{L}\varphi^{-1}(\mathcal{M})\tilde{=}L\varphi^{*}(\mathcal{M})
\]
(where the last isomorphism is as sheaves of $\mathcal{O}_{\mathfrak{X}}$-modules).
One sets $\varphi^{!,(j)}:=L\varphi^{*,(j)}[d_{X/Y}]$ (where $d_{X/Y}=\text{dim}(X)-\text{dim}(Y)$).
In fact, this is not quite Berthelot's definition, as he does not
use the cohomological completion; rather, he first defines the functor
in the case of a morphism $\varphi:\mathfrak{X}_{n}\to\mathfrak{Y}_{n}$
(the reductions mod $p^{n}$ of $\mathfrak{X}$ and $\mathfrak{Y}$,
respectively), and then applies the $\text{R}\lim$ functor. However,
the two notions agree on bounded complexes of coherent $\widehat{\mathcal{D}}_{\mathfrak{Y}}$-modules;
the version introduced here seems better suited to very general complexes.
In order to upgrade this to gauges, we must upgrade the bimodule $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)}$
to a bimodule $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$:
\begin{defn}
\label{def:Transfer-Bimod} We set
\[
\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}:=\bigoplus_{i\in\mathbb{Z}}\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}
\]
The sheaf ${\displaystyle \bigoplus_{i\in\mathbb{Z}}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1),i}}$
is a graded sheaf of $D(W(k))$-modules, with the structure induced from the $D(W(k))$
action on $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}$. Note that
$\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1),-\infty}=\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0)}$.
\end{defn}
Let us analyze this sheaf:
\begin{prop}
\label{prop:Basic-properties-of-the-transfer-module}1) For each $i\in\mathbb{Z}$
, the natural map $\iota:\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}\to\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)}$
(induced from the inclusion $\eta:\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}\to\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)}$)
is injective.
2) The image $\iota(\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i})$
is equal to the sheaf whose local sections are given by $\{\Psi\in\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(1)}|p^{i}\Psi\in\iota(\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0)})\}$.
In particular, $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$
is a standard gauge.
3) The sheaf ${\displaystyle \mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}}$
carries the structure of a graded $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)},\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))$-bimodule
as follows: we have the inclusions $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}\subset\mathcal{\widehat{D}}_{\mathfrak{X}}^{(1)}$,
so if $\Phi\in\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),i}$ and
$\Psi\in\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1),j}$
are local sections, then $p^{i+j}(\Phi\cdot\Psi)=(p^{i}\Phi)\cdot(p^{j}\Psi)\in\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)}$.
Similarly, $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$
becomes a right $\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})$-module
via $\varphi^{-1}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0)}\subset\varphi^{-1}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(1)}$.
\end{prop}
\begin{proof}
1) As the statement is local, we can suppose $\mathfrak{Y}=\text{Specf}(\mathcal{A})$
and $\mathfrak{X}=\text{Specf}(\mathcal{B})$ where $\mathcal{A}$
and $\mathcal{B}$ admit local coordinates; let the reductions mod
$p$ be $Y=\text{Spec}(A)$ and $X=\text{Spec}(B)$. By \corref{Local-coords-over-A=00005Bf,v=00005D}
we know that $D_{A}^{(0,1)}$ is a free graded $A[f,v]$-module, therefore
$\varphi^{*}D_{A}^{(0,1)}=B\otimes_{A}D_{A}^{(0,1)}$ is a free graded
$B[f,v]$-module; and we have that the kernel of $f_{\infty}:\varphi^{*}D_{A}^{(0,1),i}\to\varphi^{*}D_{A}^{(0,1),\infty}=\varphi^{*}D_{A}^{(1)}$
is exactly the image of $v:\varphi^{*}D_{A}^{(0,1),i+1}\to\varphi^{*}D_{A}^{(0,1),i}$.
Now consider $m\in\text{ker}(\iota:\varphi^{*}\widehat{D}_{\mathcal{A}}^{(0,1),i}\to\varphi^{*}\widehat{D}_{\mathcal{A}}^{(1)})$.
The reduction mod $p$ of $\iota$ agrees with $f_{\infty}:\varphi^{*}D_{A}^{(0,1),i}\to\varphi^{*}D_{A}^{(1)}$.
Let $\overline{m}$ denote the image of $m$ in $\varphi^{*}D_{A}^{(0,1),i}$.
Then $\overline{m}\in\text{ker}(\varphi^{*}D_{A}^{(0,1),i}\to\varphi^{*}D_{A}^{(1)})=v\cdot\varphi^{*}D_{A}^{(0,1),i+1}$.
So, since $fv=p,$ we have $m\in v\cdot\varphi^{*}\widehat{D}_{\mathcal{A}}^{(0,1),i+1}$;
write $m=vm'$. By definition, the composition $\widehat{D}_{\mathcal{A}}^{(0,1),i+1}\xrightarrow{v}\widehat{D}_{\mathcal{A}}^{(0,1),i}\xrightarrow{\eta}\widehat{D}_{\mathcal{A}}^{(1)}$
is equal to $p\cdot\eta:\widehat{D}_{\mathcal{A}}^{(0,1),i+1}\to\widehat{D}_{\mathcal{A}}^{(1)}$;
thus also $\iota\circ v=p\cdot\iota$ and so $\iota(m)=\iota(vm')=p\iota(m')=0$;
therefore $m'\in\text{ker}(\iota)$ as $\varphi^{*}\widehat{D}_{\mathcal{A}}^{(1)}$
is $p$-torsion-free\footnote{Indeed, it is the inverse limit of the $W_{n}(k)$-flat modules $(\varphi^{*}\widehat{D}_{\mathcal{A}}^{(1)})/p^{n}=(\mathcal{B}/p^{n})\otimes_{\mathcal{A}/p^{n}}(\widehat{D}_{\mathcal{A}}^{(1)}/p^{n})$}.
Iterating the argument, we see that $m\in v^{N}\varphi^{*}\widehat{D}_{\mathcal{A}}^{(0,1),i+N}$
for all $N>0$; reducing mod $p$, this forces $\overline{m}=0$ since
(again by \corref{Local-coords-over-A=00005Bf,v=00005D}) $\varphi^{*}D_{A}^{(0,1)}$
is $v$-adically separated. Thus $m=pm_{1}$; and then $\iota(m_{1})=0$
since $\varphi^{*}\widehat{D}_{\mathcal{A}}^{(1)}$ is $p$-torsion-free;
continuing in this way we obtain $m\in\bigcap_{n}p^{n}\cdot\varphi^{*}\widehat{D}_{\mathcal{A}}^{(0,1),i}=0$.
2) For each $i\geq0$ we have a short exact sequence
\[
0\to\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}\to\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i+1}\to\mathcal{F}_{i}\to0
\]
where $\mathcal{F}_{i}$ is a sheaf which is annihilated by $p$.
By the injectivity just proved (and the equality $L\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}=\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1),i}$)
we obtain the short exact sequence
\[
0\to\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}\to\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i+1}\to\mathcal{H}^{0}(L\varphi^{*}\mathcal{F}_{i})\to0
\]
and, since $\mathcal{F}_{i}$ is annihilated by $p$, we have $\mathcal{H}^{0}(L\varphi^{*}\mathcal{F}_{i})=\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\mathcal{F}_{i})$.
So we obtain $p\cdot\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i+1}\subset\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}$,
and since $\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),0}=\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0)}$,
we see inductively that $\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1),i}\subset\{\Psi\in\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(1)}|p^{i}\Psi\in\iota(\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0)})\}$
for all $i$.
For the converse direction, we work locally and assume $\mathfrak{X}=\text{Specf}(\mathcal{B})$
and $\mathfrak{Y}=\text{Specf}(\mathcal{A})$ where $\mathcal{A}$
possesses etale local coordinates $\{t_{1},\dots,t_{n}\}$. Then we
have that $\Gamma(\varphi^{*}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(1)})=\mathcal{B}\widehat{\otimes}_{\mathcal{A}}\widehat{D}_{\mathcal{A}}^{(1)}\tilde{=}\mathcal{B}\widehat{\otimes}_{\mathcal{A}}D_{\mathcal{A}}^{(1)}$.
As in the proof of \lemref{Basic-structure-of-D_A^(i)}, we will consider
the finite-order analogue first. From (the proof of) that lemma, it
follows that any element of $\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(1)}$
admits a unique expression of the form
\[
\Psi=\sum_{I,J}b_{I,J}\frac{\partial_{1}^{i_{1}+pj_{1}}\cdots\partial_{n}^{i_{n}+pj_{n}}}{(p!)^{|J|}}
\]
for which $0\leq i_{j}<p$, all $b_{I,J}\in\mathcal{B}$, and the
sum is finite. We have that $p^{i}\Psi\in\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(0)}$
iff ${\displaystyle \frac{p^{i}}{p^{|J|}}b_{I,J}}\in\mathcal{B}$.
So, if $|J|>i$ we can conclude (again, as in the proof of \lemref{Basic-structure-of-D_A^(i)})
that
\[
b_{I,J}\frac{\partial_{1}^{i_{1}+pj_{1}}\cdots\partial_{n}^{i_{n}+pj_{n}}}{(p!)^{|J|}}=\tilde{b}_{I,J}\cdot\partial_{1}^{i_{1}+pj'_{1}}\cdots\partial_{n}^{i_{n}+pj'_{n}}\cdot(\partial_{1}^{[p]})^{j''_{1}}\cdots(\partial_{n}^{[p]})^{j''_{n}}
\]
where $\tilde{b}_{I,J}\in\mathcal{B}$, $j'_{t}+j''_{t}=j_{t}$ for each $t$, and $j''_{1}+\dots+j''_{n}=i$.
In particular $\Psi$ is contained in the $\mathcal{B}$-submodule
spanned by $\{\partial_{1}^{i_{1}}\cdots\partial_{n}^{i_{n}}\cdot(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where $j_{1}+\dots+j_{n}\le i$, which is exactly the image of $\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(0,1),i}$
in $\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(1)}$.
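To illustrate the bookkeeping in the simplest case (a special case spelled out only for orientation): take $n=1$, $I=(0)$, $J=(2)$, and $i=1$. The condition $\frac{p}{p^{2}}b_{I,J}\in\mathcal{B}$ says $b_{I,J}=p\tilde{b}$ for some $\tilde{b}\in\mathcal{B}$, and then
\[
b_{I,J}\frac{\partial_{1}^{2p}}{(p!)^{2}}=\tilde{b}\,\frac{p}{(p!)^{2}}\,\partial_{1}^{2p}=\frac{\tilde{b}}{(p-1)!}\,\partial_{1}^{p}\,\partial_{1}^{[p]},
\]
which involves only a single divided-power factor, i.e., it lies in the image of $\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(0,1),1}$.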
Now, if $\Psi\in\mathcal{B}\widehat{\otimes}_{\mathcal{A}}\widehat{D}_{\mathcal{A}}^{(1)}$
is such that $p^{i}\Psi\in\mathcal{B}\widehat{\otimes}_{\mathcal{A}}\widehat{D}_{\mathcal{A}}^{(0)}$,
then we can write ${\displaystyle p^{i}\Psi=\sum_{j=0}^{\infty}p^{j}\Psi_{j}}$
where $\Psi_{j}\in\mathcal{B}\otimes_{\mathcal{A}}D_{\mathcal{A}}^{(0)}$.
Therefore
\[
\Psi=\sum_{j=0}^{i}p^{j-i}\Psi_{j}+\sum_{j=i+1}^{\infty}p^{j-i}\Psi_{j}
\]
where, by the previous paragraph, the first sum is contained in the
$\mathcal{B}$-submodule spanned by $\{\partial_{1}^{i_{1}}\cdots\partial_{n}^{i_{n}}\cdot(\partial_{1}^{[p]})^{j_{1}}\cdots(\partial_{n}^{[p]})^{j_{n}}\}$
where $j_{1}+\dots+j_{n}\le i$, and the second sum is contained in
$\mathcal{B}\widehat{\otimes}_{\mathcal{A}}\widehat{D}_{\mathcal{A}}^{(0)}$.
Thus $\Psi$ is in the image of $\mathcal{B}\widehat{\otimes}_{\mathcal{A}}\widehat{D}_{\mathcal{A}}^{(0,1),i}$
as required. It follows directly from the definition that $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$
is standard. Part $3)$ of the proposition follows immediately.
\end{proof}
\begin{rem}
\label{rem:Direct-defn-of-transfer-bimodule}Combining the previous
proposition with \lemref{phi-pullback-of-D^i}, we also obtain the
description
\[
\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\tilde{=}L\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})=D(\mathcal{O}_{\mathfrak{X}})\widehat{\otimes}_{\varphi^{-1}(D(\mathcal{O}_{\mathfrak{Y}}))}^{L}\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})
\]
in the category $D_{cc}(\mathcal{G}(D(\mathcal{O}_{\mathfrak{X}})))$.
\end{rem}
This leads to the
\begin{defn}
\label{def:Pullback!}Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))$.
Then we define
\[
L\varphi^{*}(\mathcal{M}^{\cdot}):=\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})\in D_{cc}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))
\]
where, as usual, $\widehat{?}$ denotes graded derived completion,
and the left action of $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$ is the one induced from
the bimodule structure on $\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$ described above; set $\varphi^{!}:=L\varphi^{*}[d_{X/Y}]$.
\end{defn}
In order to study this definition, we shall use the corresponding
mod $p$ theory; as usual this can be defined by reduction mod $p$
when the schemes $X$ and $Y$ are liftable, but it actually exists
for all $\varphi:X\to Y$. This is contained in the following
\begin{prop}
\label{prop:pull-back-in-pos-char}Let $\varphi:X\to Y$ be a morphism
of smooth varieties over $k$.
1) There is a map of sheaves $\alpha:\mathfrak{l}_{X}\to\varphi^{*}\mathcal{D}_{Y}^{(0,1),1}$
(where $\mathfrak{l}_{X}$ is defined in \defref{L}).
2) Let $\beta:\mathcal{T}_{X}\to\varphi^{*}\mathcal{D}_{Y}^{(0,1),0}=\varphi^{*}\mathcal{D}_{Y}^{(0)}$
denote the natural map. There is a left action of $\mathcal{D}_{X}^{(0,1)}$
on $\varphi^{*}\mathcal{D}_{Y}^{(0,1)}$ satisfying $\partial\cdot(1\otimes1)=\beta(\partial)$
for all $\partial\in\mathcal{T}_{X}$ and $\delta\cdot(1\otimes1)=\alpha(\delta)$
for all $\delta\in\mathfrak{l}_{X}$. This action commutes with the
right action of $\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})$ on $\varphi^{*}\mathcal{D}_{Y}^{(0,1)}$.
\end{prop}
\begin{proof}
1) Let $\Phi$ be a local section of $\mathfrak{l}_{X}$. Composing
the map $\varphi^{\#}:\varphi^{-1}(\mathcal{O}_{Y})\to\mathcal{O}_{X}$
with $\Phi$ gives a differential operator from $\varphi^{-1}(\mathcal{O}_{Y})$
to $\mathcal{O}_{X}$; call this operator $\Phi'$. We claim $\Phi'\in\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}\mathfrak{l}_{Y}$
(here, we are using the fact that the sheaf $\mathfrak{l}_{Y}$ is
a subsheaf of $\mathcal{D}iff_{Y}$ and that $\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\mathcal{D}iff_{Y})\tilde{=}\mathcal{D}iff(\varphi^{-1}(\mathcal{O}_{Y}),\mathcal{O}_{X})$).
Let $U\subset X$ and $V\subset Y$ be open subsets which possess
local coordinates, such that $\varphi(U)\subset V$. As in \lemref{O^p-action}
write
\[
\Phi=\sum_{i=1}^{n}a_{i}^{p}\partial_{i}^{[p]}+\sum_{I}a_{I}\partial^{I}
\]
where $a_{i},a_{I}\in\mathcal{O}_{X}(U)$. The map $(\sum_{I}a_{I}\partial^{I})\circ\varphi^{\#}:\varphi^{-1}(\mathcal{O}_{V})\to\mathcal{O}_{U}$
is a differential operator which satisfies $((\sum_{I}a_{I}\partial^{I})\circ\varphi^{\#})(g^{p}\cdot h)=\varphi^{\#}(g^{p})\cdot((\sum_{I}a_{I}\partial^{I})\circ\varphi^{\#})(h)$
for all $g,h\in\mathcal{O}_{V}$. From this we conclude
\[
(\sum_{I}a_{I}\partial^{I})\circ\varphi^{\#}=\sum b_{J}\partial^{J}
\]
where $b_{J}\in\mathcal{O}_{X}(U)$ and now $\partial^{J}=\partial_{1}^{j_{1}}\cdots\partial_{r}^{j_{r}}$
are coordinate derivations on $V$ (to prove this, write the differential
operator $(\sum_{I}a_{I}\partial^{I})\circ\varphi^{\#}$ in terms
of $\partial_{1}^{[j_{1}]}\cdots\partial_{r}^{[j_{r}]}$ and then
use the linearity over $\varphi^{\#}(g^{p})$ to deduce that there
are no terms with any $j_{i}\geq p$).
Similarly, the map ${\displaystyle \sum_{i=1}^{n}a_{i}^{p}\partial_{i}^{[p]}}\circ\varphi^{\#}:\varphi^{-1}(\mathcal{O}_{V})\to\mathcal{O}_{U}$
is a differential operator of order $\leq p$, whose action on any
$p$th power in $\varphi^{-1}(\mathcal{O}_{V})$ is a $p$th power
in $\mathcal{O}_{U}$. From this one easily sees
\[
(\sum_{i=1}^{n}a_{i}^{p}\partial_{i}^{[p]})\circ\varphi^{\#}=\sum_{j=1}^{r}b_{j}^{p}\partial_{j}^{[p]}+\sum_{J}b_{J}\partial^{J}
\]
for some $b_{j},b_{J}\in\mathcal{O}_{U}$. So we conclude $\Phi'\in\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}\mathfrak{l}_{Y}$
as desired. Further, since $\mathfrak{l}_{Y}\subset\mathcal{D}_{Y}^{(0,1),1}$
we obtain $\varphi^{-1}(\mathfrak{l}_{Y})\subset\varphi^{-1}(\mathcal{D}_{Y}^{(0,1),1})$
and therefore a map $\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\mathfrak{l}_{Y})\to\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\mathcal{D}_{Y}^{(0,1),1})$;
we can now define $\alpha$ as the composition.
2) It suffices to check this locally. Restrict to an open affine $U\subset X$
which possesses etale local coordinates, and we may suppose $\varphi(U)\subset V$,
where $V$ also possesses etale local coordinates. Writing $U=\text{Spec}(A)$
and $V=\text{Spec}(B)$, we let $\mathcal{A}$ and $\mathcal{B}$
be flat lifts of $A$ and $B$ to $W(k)$, as in the proof of \lemref{linear-independance-over-D_0-bar}
above. Let $\varphi^{\#}:\mathcal{B}\to\mathcal{A}$ be a lift of
$\varphi^{\#}:B\to A$ (these always exist for affine neighborhoods
which possess local coordinates, by the infinitesimal lifting property).
Then the construction of \defref{Transfer-Bimod} provides an action
of $D_{\mathcal{A}}^{(0,1)}$ on $\varphi^{*}(D_{\mathcal{B}}^{(0,1)})$
which commutes with the obvious right action of $D_{\mathcal{B}}^{(0,1)}$.
The reduction mod $p$ of this action, when restricted to $\mathcal{T}_{X}\subset\mathcal{D}_{X}^{(0)}$
and $\mathfrak{l}_{X}\subset\mathcal{D}_{X}^{(0,1),1}$ clearly agrees
with the map described above. Thus the map extends (uniquely) to an
action, as claimed.
\end{proof}
Thus we have
\begin{defn}
Let $\mathcal{D}_{X\to Y}^{(0,1)}:=\varphi^{*}\mathcal{D}_{Y}^{(0,1)}$,
equipped with the structure of a graded $(\mathcal{D}_{X}^{(0,1)},\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)}))$-bimodule
as above. Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
Then we define $L\varphi^{*}(\mathcal{M}^{\cdot}):=\mathcal{D}_{X\to Y}^{(0,1)}\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})$
with the induced left action of $\mathcal{D}_{X}^{(0,1)}$ given by
the above. Set $\varphi^{!}=L\varphi^{*}[d_{X/Y}]$. The functor $L\varphi^{*}$
takes $D_{\text{qcoh}}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$ to
$D_{\text{qcoh}}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
\end{defn}
\begin{rem}
In fact, as an object in $D(\mathcal{G}(D(\mathcal{O}_{X})))$, we
have that $L\varphi^{*}(\mathcal{M}^{\cdot})$ agrees with the usual
pullback of $\mathcal{O}$-modules. This follows directly from the
isomorphism $\varphi^{*}\mathcal{D}_{Y}^{(0,1)}\tilde{=}\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})$,
and the fact that $\mathcal{D}_{Y}^{(0,1)}$ is flat over $\mathcal{O}_{Y}$.
The analogous fact is also true for $\varphi:\mathfrak{X}\to\mathfrak{Y}$;
making use of \remref{Direct-defn-of-transfer-bimodule}. It follows
that $L\varphi^{*}$ has finite homological dimension.
\end{rem}
Now we record some basic properties of these functors:
\begin{lem}
\label{lem:composition-of-pullbacks}If $\psi:\mathfrak{Y}\to\mathfrak{Z}$,
there is an isomorphism of functors \linebreak{}
$L\varphi^{*}\circ L\psi^{*}\tilde{=}L(\psi\circ\varphi)^{*}$. The
same result holds for $\varphi:X\to Y$ and $\psi:Y\to Z$.
\end{lem}
\begin{proof}
(compare \cite{key-49}, proposition 1.5.11) We have, by \remref{Direct-defn-of-transfer-bimodule},
\[
\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{D}_{\mathfrak{Y\to\mathfrak{Z}}}^{(0,1)})
\]
\[
=(D(\mathcal{O}_{\mathfrak{X}})\widehat{\otimes}_{\varphi^{-1}(D(\mathcal{O}_{\mathfrak{Y}}))}^{L}\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{O}_{\mathfrak{Y}}[f,v]\widehat{\otimes}_{\psi^{-1}(D(\mathcal{O}_{\mathfrak{Z}}))}^{L}\psi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Z}}^{(0,1)}))
\]
\[
\tilde{=}D(\mathcal{O}_{\mathfrak{X}})\widehat{\otimes}_{\varphi^{-1}(D(\mathcal{O}_{\mathfrak{Y}}))}^{L}(\varphi^{-1}D(\mathcal{O}_{\mathfrak{Y}})\widehat{\otimes}_{(\psi\circ\varphi)^{-1}(D(\mathcal{O}_{\mathfrak{Z}}))}^{L}(\psi\circ\varphi)^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Z}}^{(0,1)}))
\]
\[
\tilde{=}D(\mathcal{O}_{\mathfrak{X}})\widehat{\otimes}_{(\psi\circ\varphi)^{-1}(D(\mathcal{O}_{\mathfrak{Z}}))}^{L}(\psi\circ\varphi)^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Z}}^{(0,1)})=\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Z}}^{(0,1)}
\]
as $(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)},(\psi\circ\varphi)^{-1}\mathcal{\widehat{D}}_{\mathfrak{Z}}^{(0,1)})$-bimodules.
This yields
\[
L(\psi\circ\varphi)^{*}(\mathcal{M}^{\cdot})=\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Z}}^{(0,1)}\widehat{\otimes}_{(\psi\circ\varphi)^{-1}(\mathcal{D}_{\mathfrak{Z}}^{(0,1)})}^{L}(\psi\circ\varphi)^{-1}(\mathcal{M}^{\cdot})
\]
\[
\tilde{=}(\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{D}_{\mathfrak{Y\to\mathfrak{Z}}}^{(0,1)}))\widehat{\otimes}_{(\psi\circ\varphi)^{-1}(\mathcal{D}_{\mathfrak{Z}}^{(0,1)})}^{L}(\psi\circ\varphi)^{-1}(\mathcal{M}^{\cdot})
\]
\[
\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}((\mathcal{D}_{\mathfrak{Y\to\mathfrak{Z}}}^{(0,1)})\widehat{\otimes}_{\psi^{-1}(\mathcal{D}_{\mathfrak{Z}}^{(0,1)})}^{L}\psi^{-1}(\mathcal{M}^{\cdot}))=L\varphi^{*}(L\psi^{*}\mathcal{M}^{\cdot})
\]
An identical argument works for $\varphi:X\to Y$ and $\psi:Y\to Z$.
\end{proof}
Next, we have
\begin{prop}
\label{prop:Basic-base-change-for-pullback}1) Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)}))$.
Then $L\varphi^{*}(\mathcal{M}^{\cdot})^{-\infty}\tilde{\to}L\varphi^{*,(0)}(\mathcal{M}^{\cdot,-\infty})$
and $\widehat{L\varphi^{*}(\mathcal{M}^{\cdot})^{\infty}}\tilde{\to}L\varphi^{*,(1)}(\widehat{\mathcal{M}^{\cdot,\infty}})$.
The analogous result holds for $\varphi:X\to Y$.
2) Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)}))$.
Then $L\varphi^{*}(\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k\tilde{\to}L\varphi^{*,(0)}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)$.
\end{prop}
\begin{proof}
1) By construction we have
\[
(\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\widehat{\otimes}_{D(W(k))}^{L}W(k)[f,v]/(f-1)\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(1)}
\]
and
\[
(\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\widehat{\otimes}_{D(W(k))}^{L}W(k)[f,v]/(v-1)\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)}
\]
from which the result follows directly. Similarly, for part $2)$
one uses
\[
(\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\otimes_{W(k)}^{L}k\tilde{=}\mathcal{D}_{X\to Y}^{(0,1)}
\]
\end{proof}
Specializing to the case of positive characteristic, it is also useful
to have comparisons with the pullbacks of $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-modules. First,
we need to give the relevant definitions:
\begin{defn}
Suppose $\varphi:X\to Y$. We let $\mathcal{R}_{X\to Y}^{(1)}:=\varphi^{*}\mathcal{D}_{Y}^{(0,1)}/(v)$
and $\mathcal{\overline{R}}_{X\to Y}^{(0)}:=\varphi^{*}\mathcal{D}_{Y}^{(0,1)}/(f)$;
considered as a graded $(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})))$-bimodule
(respectively a graded $(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}),\varphi^{-1}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})))$-bimodule).
Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})))$.
Then we define $L\varphi^{*,(1)}(\mathcal{M}^{\cdot}):=\mathcal{R}_{X\to Y}^{(1)}\otimes_{\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})$
with the induced left action of $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
given by the bimodule structure. Set $\varphi^{\dagger,(1)}=L\varphi^{*,(1)}[d_{X/Y}]$.
We make the analogous definition for $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$-modules,
and denote the corresponding functors by $L\varphi^{*,(0)}$ and $\varphi^{\dagger,(0)}$.
\end{defn}
We note that the functor $L\varphi^{*,(1)}$ takes $D_{qc}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})))$
to $D_{qc}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$; and
similarly for $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$. Then
we have the
\begin{prop}
\label{prop:pullback-and-R}Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
There is an isomorphism of functors
\[
\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\varphi^{\dagger}\mathcal{M}^{\cdot}\tilde{=}\varphi^{\dagger,(1)}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
and similarly for $\varphi^{\dagger,(0)}$.
\end{prop}
\begin{proof}
We have
\[
\varphi^{\dagger,(1)}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}\mathcal{M}^{\cdot})=\mathcal{R}_{X\to Y}^{(1)}\otimes_{\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))}^{L}\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}\mathcal{M}^{\cdot})[d_{X/Y}]
\]
\[
\tilde{=}\mathcal{R}_{X\to Y}^{(1)}\otimes_{\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))}^{L}\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})[d_{X/Y}]
\]
\[
\tilde{=}\mathcal{R}_{X\to Y}^{(1)}\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})[d_{X/Y}]
\]
Now, by definition, the module $\mathcal{D}_{X\to Y}^{(0,1)}$ admits,
locally on $X$ and $Y$, a lift $\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}$
which we have constructed above in \defref{Transfer-Bimod}. This
lift is a standard gauge, and so $\mathcal{D}_{X\to Y}^{(0,1)}$ is
quasi-rigid. So, using the resolution (c.f. \lemref{Basic-Facts-on-Rigid})
\[
\cdots\to\mathcal{D}_{X}^{(0,1)}(-1)\xrightarrow{v}\mathcal{D}_{X}^{(0,1)}\xrightarrow{f}\mathcal{D}_{X}^{(0,1)}(-1)\xrightarrow{v}\mathcal{D}_{X}^{(0,1)}\to\mathcal{R}(\mathcal{D}_{X}^{(1)})
\]
for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$ over $\mathcal{D}_{X}^{(0,1)}$,
this tells us that
\begin{equation}
\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{D}_{X\to Y}^{(0,1)}\tilde{=}\mathcal{D}_{X\to Y}^{(0,1)}/v=\mathcal{R}_{X\to Y}^{(1)}\label{eq:transfer-iso-1}
\end{equation}
i.e., this complex is concentrated in degree $0$ and is equal to
$\mathcal{R}_{X\to Y}^{(1)}$ there. Thus
\[
\mathcal{R}_{X\to Y}^{(1)}\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})[d_{X/Y}]
\]
\[
\tilde{=}\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{D}_{X\to Y}^{(0,1)}\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})[d_{X/Y}]=\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\varphi^{\dagger}\mathcal{M}^{\cdot}
\]
as desired. The case of $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$-modules
is essentially identical.
\end{proof}
Finally, we also have
\begin{prop}
\label{prop:Smooth-pullback-preserves-coh}If $\varphi$ is smooth,
then $L\varphi^{*}$ takes $D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)}))$
to $D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$.
The same holds for a smooth morphism $\varphi:X\to Y$.
\end{prop}
\begin{proof}
By part $2)$ of \propref{Basic-base-change-for-pullback}, as well
as \propref{coh-to-coh}, the first statement reduces to the second.
We may assume that $X=\text{Spec}(B)$ and $Y=\text{Spec}(A)$ both
possess local coordinates. After further localizing if necessary we
can suppose that there are local coordinates $\{\partial_{1},\dots,\partial_{n}\}$
on $B$ such that the $A$-linear derivations of $B$ are $\{\partial_{1},\dots,\partial_{d}\}$.
In this case, if we let $J\subset\mathcal{D}_{B}^{(0,1)}$ be the
ideal generated by $\{\partial_{1},\dots,\partial_{d},\partial_{1}^{[p]},\dots,\partial_{d}^{[p]}\}$,
then we have
\[
\mathcal{D}_{B}^{(0,1)}/J\tilde{=}B\otimes_{A}\mathcal{D}_{A}^{(0,1)}
\]
which shows that $B\otimes_{A}\mathcal{D}_{A}^{(0,1)}=\varphi^{*}\mathcal{D}_{A}^{(0,1)}$
is a coherent $\mathcal{D}_{B}^{(0,1)}$-module, which is flat as
a module over $\mathcal{D}_{A}^{(0,1),\text{opp}}$. This shows that
$\varphi^{*}$ is exact; and the coherence of the pullback for an
arbitrary coherent $\mathcal{D}_{A}^{(0,1)}$-module $\mathcal{M}$
follows by taking a finite presentation for $\mathcal{M}$.
\end{proof}
\section{\label{sec:Operations:Swap-Tensor}Operations on Gauges: Left-Right
Interchange and Tensor Product}
The first goal of this section is to prove
\begin{prop}
\label{prop:Left-Right-Swap} Let $\mathcal{M}\in\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$.
Then $\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}$
carries the structure of a right graded $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module.
This functor defines an equivalence of categories, which preserves
coherent modules. The derived functor preserves the subcategories
of derived complete complexes.
The analogous result holds for $X$ (i.e., in positive characteristic);
there, the functor preserves the category of quasi-coherent sheaves
as well.
\end{prop}
In order to prove this, we first recall that $\omega_{\mathfrak{X}}$
naturally carries the structure of a right $\mathcal{D}_{\mathfrak{X}}^{(i)}$-module
for all $i\geq0$; indeed, $\omega_{\mathfrak{X}}[p^{-1}]$ carries
a right $\mathcal{D}_{\mathfrak{X}}^{(i)}[p^{-1}]=\mathcal{D}_{\mathfrak{X}}^{(0)}[p^{-1}]$
structure via the Lie derivative (c.f., e.g. \cite{key-4}, page 8).
In local coordinates, this action is simply given by
\[
(gdx_{1}\wedge\cdots\wedge dx_{n})\partial=-\partial(g)dx_{1}\wedge\cdots\wedge dx_{n}
\]
for any derivation $\partial$. It follows that $\mathcal{D}_{\mathfrak{X}}^{(i)}$
preserves $\omega_{\mathfrak{X}}$ (for all $i$). As $\omega_{\mathfrak{X}}$
is $p$-adically complete, we see that it also inherits a right $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(i)}$-module
structure.
\begin{lem}
Let $D(\omega_{\mathfrak{X}})=\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}D(\mathcal{O}_{\mathfrak{X}})$.
Then $D(\omega_{\mathfrak{X}})$ has a natural right graded $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module
structure. Similarly, $D(\omega_{X})$ admits a right graded $\mathcal{D}_{X}^{(0,1)}$-module
structure, for any smooth $X$ over $k$.
\end{lem}
\begin{proof}
We note that $(\omega_{\mathfrak{X}}[f,v])^{i}=\{m\in\omega_{\mathfrak{X}}[p^{-1}]|p^{i}m\in\omega_{\mathfrak{X}}\}$.
Thus the first result follows by using the right $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(1)}$-module
structure on $\omega_{\mathfrak{X}}$. To prove the second result,
we choose an open affine $\text{Spec}(A)\subset X$ which possesses
etale local coordinates. In coordinates, the required action is given
by
\[
(gdx_{1}\wedge\cdots\wedge dx_{n})\partial=-\partial(g)dx_{1}\wedge\cdots\wedge dx_{n}
\]
and
\[
(gdx_{1}\wedge\cdots\wedge dx_{n})\partial^{[p]}=-f\cdot\partial^{[p]}(g)dx_{1}\wedge\cdots\wedge dx_{n}
\]
for any $g\in D(\mathcal{O}_{X})$. If we choose a lift
$\mathcal{A}$ of $A$, then, after lifting the coordinates, we see
that this action is the reduction mod $p$ of the action just defined;
in particular it is actually independent of the choice of coordinates
and therefore glues to define an action on all of $X$.
\end{proof}
Now we recall a very general construction from \cite{key-4}, section
1.4b:
\begin{lem}
Let $\mathcal{L}$ be any line bundle on $\mathfrak{X}$. Placing
$\mathcal{L}$ and $\mathcal{L}^{-1}$ in degree $0$, the sheaf $\mathcal{\widehat{D}}_{\mathfrak{X},\mathcal{L}}^{(0,1)}:=\mathcal{L}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{L}^{-1}$
carries the structure of a graded algebra on $\mathfrak{X}$, via
the multiplication
\[
(s_{1}\otimes\Phi_{1}\otimes t_{1})\cdot(s_{2}\otimes\Phi_{2}\otimes t_{2})=s_{1}\otimes\Phi_{1}<t_{1},s_{2}>\Phi_{2}\otimes t_{2}
\]
There is a functor $\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})\to\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X},\mathcal{L}}^{(0,1)})$
given by $\mathcal{M}\to\mathcal{L}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}$;
the action of $\mathcal{\widehat{D}}_{\mathfrak{X},\mathcal{L}}^{(0,1)}$
on $\mathcal{L}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}$ is
defined by
\[
(s\otimes\Phi\otimes t)\cdot(s_{1}\otimes m)=s\otimes\Phi<t,s_{1}>m
\]
This functor is an equivalence of categories, whose inverse is given
by $\mathcal{N}\to\mathcal{L}^{-1}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{N}$.
\end{lem}
So, \propref{Left-Right-Swap} follows directly from
\begin{lem}
There is an isomorphism of algebras $\mathcal{\widehat{D}}_{\mathfrak{X},\omega_{\mathfrak{X}}}^{(0,1)}\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\text{op}}$.
The same is true over $X$.
\end{lem}
\begin{proof}
We have the isomorphism $\mathcal{\widehat{D}}_{\mathfrak{X},\omega_{\mathfrak{X}}}^{(0,1)}\tilde{=}D(\omega_{\mathfrak{X}})\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}D(\mathcal{\omega}_{\mathfrak{X}}^{-1})$.
This yields a left action of $\mathcal{\widehat{D}}_{\mathfrak{X},\omega_{\mathfrak{X}}}^{(0,1)}$
on $\omega_{\mathfrak{X}}[f,v]$, given by
\[
(s\otimes\Phi\otimes t)\cdot s_{1}=s\otimes\Phi\cdot<t,s_{1}>
\]
where $<,>$ refers to the pairing $D(\mathcal{\omega}_{\mathfrak{X}})\otimes_{D(\mathcal{O}_{\mathfrak{X}})}D(\mathcal{\omega}_{\mathfrak{X}}^{-1})\to D(\mathcal{O}_{\mathfrak{X}})$.
Computing in local coordinates, one sees that the image of $\mathcal{\widehat{D}}_{\mathfrak{X},\omega_{\mathfrak{X}}}^{(0,1)}$
in $\mathcal{E}nd_{W(k)}(D(\omega_{\mathfrak{X}}))$ is the same as
the image of $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\text{op}}$
in $\mathcal{E}nd_{W(k)}(D(\omega_{\mathfrak{X}}))$ via the right
action defined above. This yields the isomorphism over $\mathfrak{X}$.
To deal with $X$, one first obtains the isomorphism locally (via
a local lifting of the variety), and then shows that the resulting
isomorphism is independent of the choice of coordinates (as in the
proof of the previous lemma).
\end{proof}
Next, we define tensor products of (left) $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-modules.
The first step is to define the external product of sheaves:
\begin{defn}
1) Let $\mathfrak{X}$ and $\mathfrak{Y}$ be smooth formal schemes,
and let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{\mathfrak{X}})))$,
$\mathcal{N}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{\mathfrak{Y}})))$.
Then we define
\[
\mathcal{M}^{\cdot}\boxtimes\mathcal{N}^{\cdot}:=Lp_{1}^{*}(\mathcal{M}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}})}^{L}Lp_{2}^{*}(\mathcal{N}^{\cdot})\in D_{cc}(\mathcal{G}(D(\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}})))
\]
where $p_{i}$ ($i\in\{1,2\}$) are the projections and $Lp_{1}^{*},Lp_{2}^{*}$
are defined as in \defref{Correct-Pullback}.
2) Let $X$ and $Y$ be smooth schemes over $k$, and let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{X})))$,
$\mathcal{N}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{Y})))$. Then
we define
\[
\mathcal{M}^{\cdot}\boxtimes\mathcal{N}^{\cdot}:=Lp_{1}^{*}(\mathcal{M}^{\cdot})\otimes_{D(\mathcal{O}_{X\times Y})}^{L}Lp_{2}^{*}(\mathcal{N}^{\cdot})\in D(\mathcal{G}(D(\mathcal{O}_{X\times Y})))
\]
where for $\mathcal{M}^{\cdot}\in D(\mathcal{G}(D(\mathcal{O}_{X})))$
we have $Lp_{1}^{*}\mathcal{M}^{\cdot}=D(\mathcal{O}_{X\times Y})\otimes_{p_{1}^{-1}(D(\mathcal{O}_{X}))}^{L}p_{1}^{-1}(\mathcal{M}^{\cdot})\in D(\mathcal{G}(D(\mathcal{O}_{X\times Y})))$
(and similarly for $p_{2}$).
\end{defn}
The relationship with $\mathcal{D}$-modules is the following:
\begin{lem}
1) There is an isomorphism
\[
\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(0,1)}
\]
of sheaves of algebras on $\mathfrak{X}\times\mathfrak{Y}$.
2) There is an isomorphism
\[
\mathcal{D}_{X}^{(0,1)}\boxtimes\mathcal{D}_{Y}^{(0,1)}\tilde{=}\mathcal{D}_{X\times Y}^{(0,1)}
\]
of sheaves of algebras on $X\times Y$.
\end{lem}
\begin{proof}
First suppose $\mathfrak{X}=\text{Specf}(\mathcal{A})$ and $\mathfrak{Y}=\text{Specf}(\mathcal{B})$.
Then there is a morphism $\mathcal{D}_{\mathcal{A}}^{(\infty)}\otimes_{W(k)}\mathcal{D}_{\mathcal{B}}^{(\infty)}\to\mathcal{D}_{\mathcal{A}\widehat{\otimes}_{W(k)}\mathcal{B}}^{(\infty)}$
defined as follows: for sections $a\in\mathcal{A}$ and $b\in\mathcal{B}$,
we set
\[
(\Phi_{1}\otimes\Phi_{2})(a\otimes b)=\Phi_{1}(a)\otimes\Phi_{2}(b)
\]
and we extend to $\mathcal{A}\widehat{\otimes}_{W(k)}\mathcal{B}$
by linearity and continuity. For a fixed integer $j\geq0$, this yields
a map $\mathcal{D}_{\mathcal{A}}^{(j)}\otimes_{W(k)}\mathcal{D}_{\mathcal{B}}^{(j)}\to\mathcal{D}_{\mathcal{A}\widehat{\otimes}_{W(k)}\mathcal{B}}^{(j)}$;
these maps are compatible with localization at any element of $\mathcal{A}$
or $\mathcal{B}$. After $p$-adically completing we get a map $\widehat{\mathcal{D}}_{\mathcal{A}}^{(j)}\widehat{\otimes}_{W(k)}\widehat{\mathcal{D}}_{\mathcal{B}}^{(j)}\to\widehat{\mathcal{D}}_{\mathcal{A}\widehat{\otimes}_{W(k)}\mathcal{B}}^{(j)}$,
and these maps sheafify to a map \linebreak{}
$p_{1}^{-1}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(j)})\widehat{\otimes}_{W(k)}p_{2}^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(j)})\to\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(j)}$.
Note that since $\mathcal{D}_{\mathcal{A}}^{(j)}\otimes_{W(k)}\mathcal{D}_{\mathcal{B}}^{(j)}$
is $p$-torsion-free (as is $\mathcal{D}_{\mathcal{A}\widehat{\otimes}_{W(k)}\mathcal{B}}^{(j)}$),
the usual $p$-adic completion of these sheaves agrees with the cohomological
completion. It follows that $p_{1}^{-1}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(j)})\widehat{\otimes}_{W(k)}p_{2}^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(j)})\tilde{=}p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(j)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(j)})$.
1) We claim that the map
\[
p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(j)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(j)})\to\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(j)}
\]
is an isomorphism; indeed, both sides are $p$-adically complete and
$p$-torsion-free, so it suffices to check this after reduction mod
$p$, where it becomes an easy computation in local coordinates. Thus
we obtain isomorphisms
\[
\{\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)})|p^{i}\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0)})\}
\]
\[
\tilde{\to}\{\Phi\in\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(1)}|p^{i}\Phi\in\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(0)}\}=\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(0,1),i}
\]
for each $i\in\mathbb{Z}$.
On the other hand, we claim that there is an isomorphism
\[
(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})^{i}\tilde{\to}\{\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)})|p^{i}\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0)})\}
\]
Combined with the above, this proves $(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})^{i}\tilde{\to}\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{Y}}^{(0,1),i}$
as required. To see it, note that we have the map
\[
f_{\infty}:(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})^{i}\to(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})^{\infty}
\]
The completion of the right hand side is $p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)})$;
so we obtain a map
\[
(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})^{i}\to\{\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(1)})|p^{i}\Phi\in p_{1}^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{X}\times\mathfrak{Y}}}p_{2}^{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0)})\}
\]
and to see that it is an isomorphism, one may check it after reduction
mod $p$; then it follows from the result of part $2)$ proved directly
below.
2) As above we have the map $p_{1}^{-1}\mathcal{D}_{X}^{(\infty)}\otimes_{k}p_{2}^{-1}\mathcal{D}_{Y}^{(\infty)}\to\mathcal{D}_{X\times Y}^{(\infty)}$.
Restricting to $\mathcal{T}_{X}$ and $\mathfrak{l}_{X}$ (as defined
in \defref{L} above) we get maps $p_{1}^{\#}:p_{1}^{-1}(\mathcal{T}_{X})\to\mathcal{T}_{X\times Y}$
and $p_{1}^{\#}:p_{1}^{-1}(\mathfrak{l}_{X})\to\mathfrak{l}_{X\times Y}$;
and similarly for $p_{2}$. Thus we get a map
\[
A:(\mathcal{T}_{X}\boxtimes1)\oplus(1\boxtimes\mathcal{T}_{Y})\oplus(\mathfrak{l}_{X}\boxtimes1)\oplus(1\boxtimes\mathfrak{l}_{Y})\to\mathcal{D}_{X\times Y}^{(0,1)}
\]
defined by
\[
A(\partial_{1}\boxtimes1+1\boxtimes\partial_{2}+\delta_{1}\boxtimes1+1\boxtimes\delta_{2})=p_{1}^{\#}(\partial_{1})+p_{2}^{\#}(\partial_{2})+p_{1}^{\#}(\delta_{1})+p_{2}^{\#}(\delta_{2})
\]
On the other hand, the sheaf $(\mathcal{T}_{X}\boxtimes1)\oplus(1\boxtimes\mathcal{T}_{Y})\oplus(\mathfrak{l}_{X}\boxtimes1)\oplus(1\boxtimes\mathfrak{l}_{Y})$
generates $\mathcal{D}_{X}^{(0,1)}\boxtimes\mathcal{D}_{Y}^{(0,1)}$
as a sheaf of algebras over $\mathcal{O}_{X\times Y}[f,v]$. Thus
to show that $A$ extends (necessarily uniquely) to an isomorphism
of algebras, we can do so locally.
So, let $\{x_{1},\dots,x_{n}\}$ and $\{y_{1},\dots,y_{m}\}$ be local
coordinates on $X$ and $Y$, respectively, with associated derivations
$\{\partial_{x_{1}},\dots,\partial_{x_{n}}\}$ and $\{\partial_{y_{1}},\dots,\partial_{y_{m}}\}$.
Then by \corref{Local-coords-over-A=00005Bf,v=00005D} a $D(\mathcal{O}_{X})$-basis
for $\mathcal{D}_{X}^{(0,1)}$ is given by the set $\{\partial_{x}^{I}(\partial_{x}^{[p]})^{J}\}$
for multi-indices $I,J$ such that each entry of $I$ is contained
in $\{0,1,\dots,p-1\}$; the analogous statement holds over $Y$.
Therefore the set $\{\partial_{x}^{I_{1}}(\partial_{x}^{[p]})^{J_{1}}\otimes\partial_{y}^{I_{2}}(\partial_{y}^{[p]})^{J_{2}}\}$
is an $\mathcal{O}_{X\times Y}[f,v]$-basis for $\mathcal{D}_{X}^{(0,1)}\boxtimes\mathcal{D}_{Y}^{(0,1)}$;
but also $\{\partial_{x}^{I_{1}}\partial_{y}^{I_{2}}(\partial_{x}^{[p]})^{J_{1}}(\partial_{y}^{[p]})^{J_{2}}\}$
is certainly a $D(\mathcal{O}_{X\times Y})$-basis for $\mathcal{D}_{X\times Y}^{(0,1)}$
and so the result follows immediately.
\end{proof}
Now we can define the tensor product:
\begin{defn}
Let $\Delta:\mathfrak{X}\to\mathfrak{X}\times\mathfrak{X}$ denote
the diagonal morphism.
1) Then for $\mathcal{M}^{\cdot},\mathcal{N}^{\cdot}\in D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
we define $\mathcal{M}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}^{\cdot}:=L\Delta^{*}(\mathcal{M}^{\cdot}\boxtimes\mathcal{N}^{\cdot})\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$,
where $\mathcal{M}^{\cdot}\boxtimes\mathcal{N}^{\cdot}$ is regarded
as an element of $D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{X}}^{(0,1)}))$
via the isomorphism $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\boxtimes\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{X}}^{(0,1)}$.
2) For $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\text{op}}))$
and $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$,
we define $\mathcal{M}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}^{\cdot}:=\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}((\omega_{\mathfrak{X}}^{-1}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}^{\cdot})\in D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\text{op}}))$
One has the analogous constructions for a smooth $X$ over $k$.
\end{defn}
From the construction, one sees directly that, as a $D(\mathcal{O}_{\mathfrak{X}})$-module,
the module $\mathcal{M}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}^{\cdot}$
agrees with the $D(\mathcal{O}_{\mathfrak{X}})$-module denoted in
the same way. The issue that this construction resolves is how to
put a $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-module structure
on this object.
To proceed further, it is useful to note some explicit formulas in
coordinates:
\begin{rem}
\label{rem:Two-actions-agree}Suppose we have local coordinates $\{x_{i}\}_{i=1}^{n}$
and $\{\partial_{i}\}_{i=1}^{n}$ on $\mathfrak{X}$. Then for modules
$\mathcal{M},\mathcal{N}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
we can put an action of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
on $\mathcal{M}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{N}$
via the following formulas:
\[
\partial_{i}(m\otimes n)=\partial_{i}m\otimes n+m\otimes\partial_{i}n
\]
and
\[
\partial_{i}^{[p]}(m\otimes n)=f\sum_{j=1}^{p-1}\partial_{i}^{[j]}(m)\otimes\partial_{i}^{[p-j]}(n)+\partial_{i}^{[p]}(m)\otimes n+m\otimes\partial_{i}^{[p]}(n)
\]
Taking a flat resolution of $\mathcal{N}$, this gives $\mathcal{M}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}$
the structure of an element of $D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$,
which means that $\mathcal{M}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{N}$
belongs to $D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
This object is isomorphic to the tensor product defined above. Indeed,
in local coordinates the action of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
on $\Delta^{*}(\widehat{\mathcal{D}}_{\mathfrak{X}\times\mathfrak{X}}^{(0,1)})$
is given as follows: let $\{\partial_{i},\partial'_{i}\}_{i=1}^{n}$
be local coordinate derivations on $\mathfrak{X}\times\mathfrak{X}$.
Then the action is given by $\partial_{i}\cdot1=\partial_{i}+\partial_{i}'$
and $\partial_{i}^{[p]}\cdot1=f\sum_{j=1}^{p-1}\partial_{i}^{[j]}\cdot(\partial'_{i})^{[p-j]}+\partial_{i}^{[p]}+(\partial'_{i})^{[p]}$,
which agrees with the above formula.
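The shape of these formulas is that of the divided-power Leibniz rule, recalled here for the reader's convenience: for functions $a,b$ one has
\[
\partial_{i}^{[p]}(ab)=\sum_{j=0}^{p}\partial_{i}^{[j]}(a)\,\partial_{i}^{[p-j]}(b),\qquad\partial_{i}^{[j]}:=\frac{\partial_{i}^{j}}{j!};
\]
the factor $f$ in the formulas above reflects the gauge grading (the terms with $1\leq j\leq p-1$ involve only operators of degree $0$, whereas $\partial_{i}^{[p]}$ sits in degree $1$) and is not part of the classical identity.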
\end{rem}
This allows us to prove the following useful
\begin{lem}
\label{lem:Juggle}(Compare \cite{key-50}, lemma 2.2.5) Let $\mathcal{M}^{\cdot},\mathcal{P}^{\cdot}$
be elements of $D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\text{opp}}))$.
Then there is an isomorphism
\[
\mathcal{N}^{\cdot}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}(\mathcal{M}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{P}^{\cdot})\tilde{\to}(\mathcal{N}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{M}^{\cdot})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{P}^{\cdot}
\]
\end{lem}
\begin{proof}
Let $\mathcal{M},\mathcal{P}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
and $\mathcal{N}\in\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\text{opp}})$.
We have a map of $D(\mathcal{O}_{\mathfrak{X}})$-modules
\[
\mathcal{N}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}(\mathcal{M}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{P})\to(\mathcal{N}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{M})\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}\mathcal{P}
\]
simply because $D(\mathcal{O}_{\mathfrak{X}})$ is a sub-algebra of
$\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$. Using the local description
of the $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-module action
on $\mathcal{N}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{M}$
given by \remref{Two-actions-agree}, one sees that this map factors
through $\mathcal{N}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{P})$
and we obtain a morphism
\[
\mathcal{N}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{P})\to(\mathcal{N}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}\mathcal{M})\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}\mathcal{P}
\]
Since $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ is flat over
$D(\mathcal{O}_{\mathfrak{X}})$, we can compute the associated derived
functors using K-flat resolutions over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
of $\mathcal{N}$ and $\mathcal{P}$, respectively. Doing so gives
a map in the derived category
\[
\mathcal{N}^{\cdot}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}(\mathcal{M}^{\cdot}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{P}^{\cdot})\to(\mathcal{N}^{\cdot}\otimes_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{M}^{\cdot})\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{P}^{\cdot}
\]
and passing to the derived completions gives the map in the statement
of the lemma; to show it is an isomorphism we may reduce mod $p$
and, taking K-flat resolutions, assume that each term of both $\mathcal{N}^{\cdot}$
and $\mathcal{P}^{\cdot}$ is stalk-wise free over $\mathcal{D}_{X}^{(0,1)}$;
thus the statement comes down to the claim that
\[
\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{M}\otimes_{D(\mathcal{O}_{X})}\mathcal{D}_{X}^{(0,1)})\tilde{\to}(\mathcal{D}_{X}^{(0,1)}\otimes_{D(\mathcal{O}_{X})}\mathcal{M})\otimes_{\mathcal{D}_{X}^{(0,1)}}\mathcal{D}_{X}^{(0,1)}
\]
which is immediate.
\end{proof}
Finally, we note the following compatibility of tensor product and
pull-back, which follows directly from unpacking the definitions.
\begin{lem}
\label{lem:Tensor-and-pull}Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
be a morphism. Then there is a canonical isomorphism $L\varphi^{*}(\mathcal{M}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\mathcal{N}^{\cdot})\tilde{\to}L\varphi^{*}(\mathcal{M}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}L\varphi^{*}(\mathcal{N}^{\cdot})$.
The analogous statement holds for a morphism of smooth $k$-schemes
$\varphi:X\to Y$.
\end{lem}
\section{\label{sec:Push-Forward}Operations on Gauges: Push-Forward}
As above let $\varphi:\mathfrak{X}\to\mathfrak{Y}$. Now that we have
both the pull-back and the left-right swap, we can define the push-forward.
We start by noting that $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}$
carries a natural right module structure over itself (by right multiplication).
Therefore, by \propref{Left-Right-Swap} there is a natural left $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}$
gauge structure on $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1}$.
By \defref{Pullback!} there is a natural left $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module
structure on $\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})=L\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})$.
\begin{defn}
\label{def:Push!}1) Define the $(\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}),\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)})$
bimodule $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}:=\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})\otimes\omega_{\mathfrak{X}}$;
here, the right $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module
structure comes from the left $\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}$-module
structure on $\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})$;
the left $\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})$-structure
comes from the left multiplication of $\varphi^{-1}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})$
on $\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})$.
2) Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
Then we define ${\displaystyle \int_{\varphi}\mathcal{M}^{\cdot}:=R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})}\in D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))$.
3) If we instead have $\varphi:X\to Y$ over $k$, then for $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
we define ${\displaystyle \int_{\varphi}\mathcal{M}^{\cdot}:=R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})}\in D(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$
where $\mathcal{D}_{Y\leftarrow X}^{(0,1)}$ is defined analogously
to $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}$.
4) If $\mathfrak{Y}=\text{Specf}(W(k))$, then we denote ${\displaystyle \mathbb{H}_{\mathcal{G}}^{\cdot}(\mathcal{M}^{\cdot}):=\int_{\varphi}\mathcal{M}^{\cdot}}$
for any $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
Similarly, there are push-forwards in the category of right $\mathcal{\widehat{D}}^{(0,1)}$-modules
defined by ${\displaystyle \int_{\varphi}\mathcal{M}_{r}^{\cdot}:=R\varphi_{*}(\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})}$
for $\mathcal{M}_{r}^{\cdot}\in D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))^{\text{op}}$;
clearly the left-right interchange intertwines the two pushforwards.
Similar remarks apply to a morphism $\varphi:X\to Y$ over $k$.
\end{defn}
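For ease of reference, substituting the definition of $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}$
from part 1) into part 2), the push-forward reads
\[
\int_{\varphi}\mathcal{M}^{\cdot}=R\varphi_{*}\left(\left(\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}\otimes\omega_{\mathfrak{Y}}^{-1})\otimes\omega_{\mathfrak{X}}\right)\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\right)
\]
this is merely an unwinding of the definitions, recorded here for the reader's convenience.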
We begin by recording some basic compatibilities; for these note that
we have the transfer bimodule $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}:=\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}/(v-1)$
in the category of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules,
and $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}:=(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}/(f-1))^{\widehat{}}$
(here the $()^{\widehat{}}$ denotes $p$-adic completion, which is
the same as cohomological completion in this case by \propref{Basic-properties-of-the-transfer-module}).
One may therefore define ${\displaystyle \int_{\varphi,0}\mathcal{M}^{\cdot}:=R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}}^{L}\mathcal{M}^{\cdot})}$
for a complex $\mathcal{M}^{\cdot}$ of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules,
and ${\displaystyle \int_{\varphi,1}\mathcal{M}^{\cdot}:=R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(1)}}^{L}\mathcal{M}^{\cdot})}$
for a complex $\mathcal{M}^{\cdot}$ of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$-modules.
As in the case of the pullback, this is not quite Berthelot's definition
of these functors, because he uses the more traditional $\text{R}\lim$.
However, they do agree in important cases, such as when $\varphi$
is proper and $\mathcal{M}^{\cdot}$ is coherent.
We have
\begin{prop}
\label{prop:push-and-complete-for-D} Let $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$.
1) ${\displaystyle (\int_{\varphi}\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k\tilde{=}\int_{\varphi}(\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k)}$
in the category $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
2) $(\int_{\varphi}\mathcal{M}^{\cdot})^{-\infty}\tilde{=}(\int_{\varphi,0}\mathcal{M}^{\cdot,-\infty})$
where the pushforward on the right is defined as $R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0)}}^{L}\mathcal{M}^{\cdot,-\infty})$.
3) If $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
then $\widehat{((\int_{\varphi}\mathcal{M}^{\cdot})^{\infty})}\tilde{=}\int_{\varphi,1}\widehat{(\mathcal{M}^{\cdot,\infty})}$
where both uses of $\widehat{}$ denote derived completion.
\end{prop}
\begin{proof}
1) We have
\[
\int_{\varphi}\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k=R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k
\]
\[
\tilde{=}R\varphi_{*}((\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k)
\]
(since $k$ is a perfect complex over $W(k)$, this is a special case
of the projection formula where we consider $X$ and $Y$ as ringed
spaces with the locally constant sheaf of rings $W(k)$; c.f. {[}Stacks{]},
tag 0B54). We have the isomorphism
\[
k\otimes_{W(k)}^{L}\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\tilde{=}\mathcal{D}_{Y\leftarrow X}^{(0,1)}
\]
since $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}$
is a $p$-torsion-free sheaf; and so
\[
R\varphi_{*}((\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\otimes_{W(k)}^{L}k)\tilde{=}R\varphi_{*}(k\otimes_{W(k)}^{L}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
\[
\tilde{=}R\varphi_{*}(k\otimes_{W(k)}^{L}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))\tilde{=}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
where we used that for any complex $\mathcal{N}^{\cdot}$ we have
$k\otimes_{W(k)}^{L}\mathcal{N}^{\cdot}\tilde{=}k\otimes_{W(k)}^{L}\widehat{\mathcal{N}^{\cdot}}$
(c.f. \lemref{reduction-of-completion}). Now, we have
\[
R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\tilde{=}R\varphi_{*}\left(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}(\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\right)=\int_{\varphi}(\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
But since $\mathcal{D}_{X}^{(0,1)}=\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}/p$
we have $\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k$
and the result follows.
2) For any complex we have $\mathcal{M}^{\cdot,-\infty}=\mathcal{M}^{\cdot}\otimes_{D(W(k))}^{L}(D(W(k))/(v-1))$.
Thus the proof is an easier variant of that of $1)$, replacing $\otimes_{W(k)}^{L}k$
with $\otimes_{D(W(k))}^{L}D(W(k))/(v-1)$.
3) We have a distinguished triangle
\[
\mathcal{K}^{\cdot}\to\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}\to\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}
\]
where $\mathcal{K}^{\cdot}$ is a complex of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}[p^{-1}]$-modules;
indeed, $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}$
is a $p$-torsion-free sheaf whose completion is exactly $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}$
(c.f. \propref{Basic-properties-of-the-transfer-module} and \lemref{Basic-Structure-of-D^(1)}). Thus
there is a distinguished triangle
\[
\mathcal{K}^{\cdot}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}\to\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}\to\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}
\]
and the term on the left is a complex of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\infty}[p^{-1}]$-modules.
Thus the derived completion of $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}$
is isomorphic to the derived completion of $\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}$.
Further, we have
\[
\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}}^{L}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty})
\]
And, since $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}))$,
we have (by \propref{Completion-for-noeth}) that $\widehat{\mathcal{M}^{\cdot,\infty}}\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}$
as modules over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}$. Therefore
we obtain
\[
\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty}\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(1)}}^{L}\widehat{(\mathcal{M}^{\cdot,\infty})}
\]
and so, taking $R\varphi_{*}$ yields
\[
R\varphi_{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1),\infty}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1),\infty}}^{L}\mathcal{M}^{\cdot,\infty})\tilde{\to}\int_{\varphi,1}\widehat{(\mathcal{M}^{\cdot,\infty})}
\]
But the term on the left is isomorphic to the derived completion of
${\displaystyle \int_{\varphi}\mathcal{M}^{\cdot,\infty}}$ by \propref{Push-and-complete}.
\end{proof}
Now we will discuss the relationship between the $\mathcal{D}_{X}^{(0,1)}$
pushforward and the push-forwards over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$. As usual we'll
work with the functors $\mathcal{M}^{\cdot}\mapsto\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}k[f]\otimes_{D(k)}^{L}\mathcal{M}^{\cdot}$
and $\mathcal{M}^{\cdot}\mapsto\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}k[v]\otimes_{D(k)}^{L}\mathcal{M}^{\cdot}$
which take $D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$ to $D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$
and $D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$,
respectively (as in \propref{Quasi-rigid=00003Dfinite-homological}).
Both of the algebras $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
and $\mathcal{R}(\mathcal{D}_{X}^{(0)})$ possess transfer bimodules
associated to any morphism $\varphi:X\to Y$, and hence are equipped
with a push-pull formalism. In the case of $\mathcal{R}(\mathcal{D}_{X}^{(0)})$
this is well known (c.f., e.g \cite{key-22}, chapter $1$), while
in the case of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ this
theory is developed in \cite{key-11}, in the language of filtered
derived categories. We shall proceed using the push-pull formalism
for $\mathcal{D}_{X}^{(0,1)}$-modules that we have already developed,
and discuss the relations with the other theories in section \subsecref{Hodge-and-Conjugate}
below.
\begin{defn}
Let $\varphi:X\to Y$ be a morphism. We define a $(\varphi^{-1}\mathcal{R}(\mathcal{D}_{Y}^{(1)}),\mathcal{R}(\mathcal{D}_{X}^{(1)}))$
bimodule $\mathcal{R}_{Y\leftarrow X}^{(1)}:=\mathcal{D}_{Y\leftarrow X}^{(0,1)}/v$.
Define a $(\varphi^{-1}\mathcal{\overline{R}}(\mathcal{D}_{Y}^{(0)}),\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$
bimodule $\overline{\mathcal{R}}_{Y\leftarrow X}^{(0)}:=\mathcal{D}_{Y\leftarrow X}^{(0,1)}/f$.
Define ${\displaystyle \int_{\varphi,1}}\mathcal{M}^{\cdot}=R\varphi_{*}(\mathcal{R}_{Y\leftarrow X}^{(1)}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{M}^{\cdot})$
on the category $\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$,
and analogously ${\displaystyle \int_{\varphi,0}}$ for $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$-modules.
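Spelled out (this merely makes the word ``analogously'' explicit, using the transfer bimodule defined above), the second functor is
\[
\int_{\varphi,0}\mathcal{M}^{\cdot}=R\varphi_{*}(\overline{\mathcal{R}}_{Y\leftarrow X}^{(0)}\otimes_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}^{L}\mathcal{M}^{\cdot}),\qquad\overline{\mathcal{R}}_{Y\leftarrow X}^{(0)}=\mathcal{D}_{Y\leftarrow X}^{(0,1)}/f
\]
for $\mathcal{M}^{\cdot}$ in $\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))$.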
As above, there is also a push-forward for right modules defined by
${\displaystyle \int_{\varphi,1}}\mathcal{M}_{r}^{\cdot}=R\varphi_{*}(\mathcal{M}_{r}^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}_{X\to Y}^{(1)})$,
and analogously for right $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$-modules.
We have the basic compatibility:
\end{defn}
\begin{prop}
If $\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$,
then we have
\[
{\displaystyle \mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}\int_{\varphi}\mathcal{M}^{\cdot}\tilde{=}\int_{\varphi,1}(\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})}
\]
The analogous result holds for $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$.
\end{prop}
\begin{proof}
We have
\[
\mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}\int_{\varphi}\mathcal{M}^{\cdot}=\mathcal{R}(\mathcal{D}_{Y}^{(1)})\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
\[
\tilde{=}R\varphi_{*}(\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
(we will prove this last isomorphism in the lemma directly below).
We have the isomorphism
\[
\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}\mathcal{D}_{Y\leftarrow X}^{(0,1)}\tilde{=}\mathcal{R}_{Y\leftarrow X}^{(1)}
\]
which is proved in the same way as \eqref{transfer-iso-1} above.
Therefore
\[
R\varphi_{*}(\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))\tilde{=}R\varphi_{*}(\mathcal{R}_{Y\leftarrow X}^{(1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
\[
\tilde{=}R\varphi_{*}(\mathcal{R}_{Y\leftarrow X}^{(1)}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})=\int_{\varphi,1}(\mathcal{R}(\mathcal{D}_{X}^{(1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
as claimed. The proof for the case of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
is essentially identical.
\end{proof}
In the previous proof we used the
\begin{lem}
\label{lem:baby-projection-1}Let $\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$,
and $\mathcal{N}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)})^{\text{opp}})$.
Then there is an isomorphism
\[
\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\tilde{=}R\varphi_{*}(\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
\end{lem}
\begin{proof}
(c.f. the proof of \cite{key-17}, proposition 5.3). First, we construct
a canonical map
\[
\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\to R\varphi_{*}(\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
as follows: one may replace $\mathcal{N}^{\cdot}$ with a complex
of $K$-flat graded $\mathcal{D}_{Y}^{(0,1)}$-modules, $\mathcal{F}^{\cdot}$.
Choosing a quasi-isomorphism $\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{\to}\mathcal{I}^{\cdot}$,
a $K$-injective complex of graded $\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})$-modules,
one obtains the quasi-isomorphism
\[
\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\tilde{\to}\mathcal{F}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}\varphi_{*}\mathcal{I}^{\cdot}
\]
Then there is the obvious isomorphism
\[
\mathcal{F}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}\varphi_{*}\mathcal{I}^{\cdot}\tilde{\to}\varphi_{*}(\varphi^{-1}(\mathcal{F}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}\mathcal{I}^{\cdot})
\]
and a canonical map
\[
\varphi_{*}(\varphi^{-1}(\mathcal{F}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}\mathcal{I}^{\cdot})\to R\varphi_{*}((\varphi^{-1}(\mathcal{F}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}\mathcal{I}^{\cdot}))
\]
\[
\tilde{\to}R\varphi_{*}(\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
Thus we obtain the canonical map
\[
\mathcal{N}^{\cdot}\otimes_{\mathcal{D}_{Y}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})\to R\varphi_{*}(\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
This map exists for all $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)})^{\text{op}})$
and $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
To check that it is an isomorphism, we may work locally on $Y$,
and so we suppose that $Y$ is affine from now on.
To prove this, we proceed in a similar manner to the proof of the
projection formula for quasi-coherent sheaves, in the general version
of \cite{key-17}, proposition 5.3. Fix $\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
For any $\mathcal{N}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)})^{\text{opp}})$,
we claim that $\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})$
is quasi-isomorphic to a complex in $D_{qcoh}(D(\mathcal{O}_{X}))$.
To see this, we observe that any quasicoherent $\mathcal{D}_{X}^{(0,1)}$-module
$\mathcal{M}$ is a quotient of the $\mathcal{D}_{X}^{(0,1)}$-module
$\mathcal{D}_{X}^{(0,1)}\otimes_{D(\mathcal{O}_{X})}\mathcal{M}$
(where the $\mathcal{D}_{X}^{(0,1)}$-module structure is via the action on
the left-hand factor of the tensor product). It follows that any bounded-above
complex in $D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$ is quasi-isomorphic
to a complex whose terms are of the form $\mathcal{D}_{X}^{(0,1)}\otimes_{D(\mathcal{O}_{X})}\mathcal{M}$
for quasi-coherent $\mathcal{M}$. Therefore any complex in $D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
is a homotopy colimit of such complexes. Therefore $\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}$
is quasi-isomorphic to a complex of quasicoherent $D(\mathcal{O}_{X})$-modules.
In addition, since $Y$ is affine, $\mathcal{N}^{\cdot}$ is quasi-isomorphic
to a $K$-projective complex of $\mathcal{D}_{Y}^{(0,1)}$-modules;
in particular, a complex whose terms are projective $\mathcal{D}_{Y}^{(0,1)}$-modules.
It follows that $\varphi^{-1}(\mathcal{N}^{\cdot})\otimes_{\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})}^{L}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot})$
is quasi-isomorphic to a complex in $D_{qcoh}(D(\mathcal{O}_{X}))$
as claimed.
Now, since $R\varphi_{*}$ commutes with arbitrary direct sums on
$D_{qcoh}(D(\mathcal{O}_{X}))$ (by \cite{key-17}, lemma 1.4), we
see that both sides of the arrow commute with arbitrary direct sums (over
objects in $D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)})^{\text{opp}})$);
so the set of objects on which the arrow is an isomorphism is closed
under arbitrary direct sums. Since $Y$ is affine, the category $D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)})^{\text{op}})$
is generated by the compact objects $\{\mathcal{D}_{Y}^{(0,1)}[i]\}_{i\in\mathbb{Z}}$;
therefore (as in the proof of \cite{key-17}, lemma 5.3), it actually
suffices to show that the arrow is an isomorphism on $\mathcal{D}_{Y}^{(0,1)}$
itself, but this is obvious.
\end{proof}
This type of projection formula is so useful that we will record here
a minor variant:
\begin{lem}
\label{lem:proj-over-D}Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
be a morphism. Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1),\text{opp}}))$,
such that $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)})^{\text{opp}})$.
Then we have
\[
(\int_{\varphi}\mathcal{N}^{\cdot})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{\to}R\varphi_{*}(\mathcal{N}^{\cdot}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}L\varphi^{*}\mathcal{M}^{\cdot})
\]
The analogous statement holds for $\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{X}^{(0,1),\text{opp}}))$;
as well as for the Rees algebras $\mathcal{R}(\mathcal{D}^{(1)})$
and $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$.
\end{lem}
\begin{proof}
We have that ${\displaystyle \int_{\varphi}\mathcal{N}^{\cdot}=R\varphi_{*}(\mathcal{N}^{\cdot}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})}$.
As in the proof of \lemref{baby-projection-1}, there is a morphism
\begin{equation}
R\varphi_{*}(\mathcal{N}^{\cdot}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\to R\varphi_{*}(\mathcal{N}^{\cdot}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot}))\label{eq:adunction}
\end{equation}
Indeed, one constructs the map
\[
R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\otimes_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\to R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\otimes_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot}))
\]
exactly as above; and then passes to the cohomological completion.
Since $L\varphi^{*}\mathcal{M}^{\cdot}=\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})$
by definition, the result will follow if \eqref{adunction} is an
isomorphism. To prove that, apply $\otimes_{W(k)}^{L}k$ and quote
the previous result. The proof in the case of the Rees algebras is
completely analogous.
\end{proof}
Here is an important application of these ideas:
\begin{lem}
\label{lem:Composition-of-pushforwards}Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
and $\psi:\mathfrak{Y}\to\mathfrak{Z}$ be morphisms. There is a canonical
map
\[
\int_{\psi}\circ\int_{\varphi}\mathcal{M}^{\cdot}\to\int_{\psi\circ\varphi}\mathcal{M}^{\cdot}
\]
for any $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$,
which is an isomorphism if $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
If $\varphi:X\to Y$ and $\psi:Y\to Z$ are morphisms, we have the
analogous statements in $D(\mathcal{G}(\mathcal{D}_{Z}^{(0,1)}))$.
\end{lem}
\begin{proof}
As in \lemref{composition-of-pullbacks}, we have an isomorphism
\[
\varphi^{-1}(\mathcal{D}_{\mathfrak{Z\leftarrow\mathfrak{Y}}}^{(0,1)})\widehat{\otimes}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)})}^{L}\mathcal{\widehat{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\tilde{=}\mathcal{\widehat{D}}_{\mathfrak{Z}\leftarrow\mathfrak{X}}^{(0,1)}
\]
as $((\psi\circ\varphi)^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(0,1)}),\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
bimodules. Then we have
\[
\int_{\psi}\circ\int_{\varphi}\mathcal{M}^{\cdot}=R\psi_{*}(\mathcal{D}_{\mathfrak{Z\leftarrow\mathfrak{Y}}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}R\varphi_{*}(\mathcal{D}_{\mathfrak{Y\leftarrow\mathfrak{X}}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}))
\]
\[
\to R\psi_{*}R\varphi_{*}(\varphi^{-1}(\mathcal{D}_{\mathfrak{Z\leftarrow\mathfrak{Y}}}^{(0,1)})\widehat{\otimes}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}^{L}\mathcal{D}_{\mathfrak{Y\leftarrow\mathfrak{X}}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})
\]
\[
\tilde{\to}R(\psi\circ\varphi)_{*}(\mathcal{\widehat{D}}_{\mathfrak{Z}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})=\int_{\psi\circ\varphi}\mathcal{M}^{\cdot}
\]
where the first arrow is constructed as in \lemref{proj-over-D} and
the second isomorphism is given above. Applying the functor $\otimes_{W(k)}^{L}k$
and using \propref{Push-and-complete}, part $1)$, we reduce to proving
the analogous statement for $\varphi:X\to Y$ and $\psi:Y\to Z$,
where it follows exactly as in \lemref{baby-projection-1}.
\end{proof}
We shall also need results relating the pushforwards when $\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
is already annihilated by $f$ (or $v$):
\begin{prop}
\label{prop:Sandwich-push}Suppose $\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$
satisfies $v\mathcal{M}=0$. Then ${\displaystyle \int_{\varphi}\mathcal{M}}$
is contained in the image of the functor $D(\mathcal{R}(\mathcal{D}_{Y}^{(1)})-\text{mod})\to D(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
In fact, there is an isomorphism of graded sheaves of $\mathcal{O}_{Y}[f,v]$-modules
\[
R\varphi_{*}(\mathcal{R}_{Y\leftarrow X}^{(1)}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{M})\tilde{=}R\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M})
\]
In other words, the pushforward of $\mathcal{M}$, regarded as a module
over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$, agrees with its pushforward
as a $\mathcal{D}_{X}^{(0,1)}$-module. The analogous result holds
when $f\mathcal{M}=0$.
\end{prop}
\begin{proof}
This is an immediate consequence of \propref{Sandwich!}.
\end{proof}
As a consequence of these results, we obtain:
\begin{thm}
\label{thm:phi-push-is-bounded}Let $\varphi:X\to Y$ be a morphism.
Then, for each of the algebras $\mathcal{R}(\mathcal{D}_{X}^{(1)})$,
$\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$, and $\mathcal{D}_{X}^{(0,1)}$,
the pushforward along $\varphi$ takes $D_{qcoh}^{b}$ to $D_{qcoh}^{b}$.
If $\varphi$ is proper, then the pushforward along $\varphi$ takes
$D_{coh}^{b}$ to $D_{coh}^{b}$.
\end{thm}
\begin{proof}
Let us start with the statement that the pushforward takes $D_{qcoh}$
to $D_{qcoh}$ in all of these cases. For this, we can argue as in
the proof of \lemref{baby-projection-1}: namely, one may assume $Y$
is affine, and then if $\mathcal{M}^{\cdot}\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$,
we may replace $\mathcal{M}^{\cdot}$ by a homotopy colimit of $\mathcal{D}_{X}^{(0,1)}$-modules
of the form $\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{O}_{X}[f,v]}\mathcal{M}$,
for quasi-coherent $\mathcal{M}$. Therefore $\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{M}^{\cdot}$
is quasi-isomorphic to a complex of quasicoherent $\mathcal{O}_{X}[f,v]$-modules,
which implies that the cohomology sheaves of its pushforward are quasi-coherent
$\mathcal{O}_{Y}[f,v]$-modules; and therefore quasi-coherent $\mathcal{D}_{Y}^{(0,1)}$-modules.
The same argument works for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
To prove the boundedness, we can factor $\varphi$ as a closed immersion
(the graph $X\to X\times Y$) followed by the projection $X\times Y\to Y$,
and, applying \lemref{Composition-of-pushforwards}, we see that it
suffices to consider separately the case of a closed immersion and
the case of a smooth morphism. For a closed immersion $\iota:X\to Y$
we have that the bimodule $\mathcal{D}_{X\to Y}^{(0,1)}$ is locally
free over $\mathcal{D}_{X}^{(0,1)}$ (this elementary fact will be
checked below in \lemref{transfer-is-locally-free}) and so the tensor
product $\otimes_{\mathcal{D}_{X}^{(0,1)}}\mathcal{D}_{X\to Y}^{(0,1)}$
takes quasicoherent sheaves to quasicoherent sheaves.
Now, if $X\to Y$ is smooth, we have by (the proof of) \propref{Smooth-pullback-preserves-coh},
that $\mathcal{D}_{Y\leftarrow X}^{(0,1)}$ is a coherent $\mathcal{D}_{X}^{(0,1),\text{opp}}$-module.
Further, since it is locally the reduction mod $p$ of a standard
module, it is rigid, so that by \propref{Quasi-rigid=00003Dfinite-homological}
it is locally of finite homological dimension; and the result follows
directly. Thus we see that ${\displaystyle \int_{\varphi}}$ is bounded
on $D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$; the same holds
for the pushforward on $D_{qcoh}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$
and $D_{qcoh}(\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})))$
by the previous proposition.
Now suppose $\varphi$ is proper. Let us say that a right $\mathcal{D}_{X}^{(0,1)}$-module
is induced if it is of the form $\mathcal{F}\otimes_{D(\mathcal{O}_{X})}\mathcal{D}_{X}^{(0,1)}$
for some coherent $\mathcal{F}$ over $D(\mathcal{O}_{X})$. In this
case we have
\[
\int_{\varphi}\mathcal{F}\otimes_{D(\mathcal{O}_{X})}\mathcal{D}_{X}^{(0,1)}=R\varphi_{*}(\mathcal{F}\otimes_{D(\mathcal{O}_{X})}\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{D}_{X\to Y}^{(0,1)})
\]
\[
\tilde{=}R\varphi_{*}(\mathcal{F}\otimes_{D(\mathcal{O}_{X})}^{L}\mathcal{D}_{X}^{(0,1)}\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\varphi^{*}(\mathcal{D}_{Y}^{(0,1)}))\tilde{\to}R\varphi_{*}(\mathcal{F}\otimes_{D(\mathcal{O}_{X})}^{L}\varphi^{*}(\mathcal{D}_{Y}^{(0,1)}))
\]
\[
\tilde{\to}R\varphi_{*}(\mathcal{F})\otimes_{D(\mathcal{O}_{Y})}^{L}\mathcal{D}_{Y}^{(0,1)}
\]
Thus the result is true for any induced module. If $\mathcal{M}$
is an arbitrary coherent right $\mathcal{D}_{X}^{(0,1)}$-module,
then, as a quasicoherent sheaf over $\mathcal{O}_{X}[f,v]$, it is
the union of its $\mathcal{O}_{X}[f,v]$-coherent sub-sheaves. Selecting
such a subsheaf which generates $\mathcal{M}$ as a $\mathcal{D}_{X}^{(0,1)}$-module,
we obtain a short exact sequence
\[
0\to\mathcal{K}\to\mathcal{F}\otimes_{D(\mathcal{O}_{X})}\mathcal{D}_{X}^{(0,1)}\to\mathcal{M}\to0
\]
where $\mathcal{K}$ is also coherent. Since the functor ${\displaystyle \int_{\varphi}}$
is concentrated in homological degrees $\leq d_{X/Y}$ for all coherent
$\mathcal{D}_{X}^{(0,1)}$-modules, we can now deduce the coherence
of ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}\mathcal{M})}$ by
descending induction on $i$. This proves the result for $\mathcal{D}_{X}^{(0,1)}$-modules,
and we can deduce the result for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$- and
$\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-modules by again
invoking \propref{Sandwich-push}.
\end{proof}
From this and the formalism of cohomological completion (specifically,
\propref{coh-to-coh}), we deduce
\begin{cor}
\label{cor:proper-push-over-W(k)}Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
be proper. Then ${\displaystyle \int_{\varphi}}$ takes $D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
to $D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$.
If $\mathcal{M}^{\cdot}\in D_{coh,F^{-1}}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
then ${\displaystyle \int_{\varphi}\mathcal{M}}\in D_{coh,F^{-1}}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$.
\end{cor}
\begin{proof}
The first part follows immediately from the preceding theorem by
applying $\otimes_{W(k)}^{L}k$. The second part follows from \propref{push-and-complete-for-D}
(part $3)$), as well as Berthelot's theorem that ${\displaystyle \int_{\varphi,1}F^{*}\tilde{\to}F^{*}\int_{\varphi,0}}$
(c.f. \cite{key-2}, section 3.4, and also \thmref{Hodge-Filtered-Push}
below).
\end{proof}
\subsection{\label{subsec:Hodge-and-Conjugate}Push-forwards for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$. }
In this section we take a close look at the theory over $k$. In particular,
we study the pushforwards of modules over $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$, and compare
them with more traditional filtered pushforwards found in the literature.
For $\mathcal{R}(\mathcal{D}_{X}^{(1)})$ and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
modules themselves, we will construct the analogue of the relative
de Rham resolution. This will allow us to exhibit an adjunction between
${\displaystyle \int_{\varphi}}$ and $\varphi^{\dagger}$ when $\varphi$
is smooth.
We begin with $\mathcal{R}(\mathcal{D}_{X}^{(1)})$, where we can
reduce everything to the more familiar situation of $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-modules
using the fact that $\mathcal{R}(\mathcal{D}_{X}^{(1)})$ is Morita
equivalent to $\mathcal{R}(\mathcal{D}_{X}^{(0)})$ (c.f. \thmref{Filtered-Frobenius}).
Let $\varphi:X\to Y$. Recall that Laumon constructed in \cite{key-19}
the push-forward in the filtered derived category of $\mathcal{D}_{X}^{(0)}$-modules
(with respect to the symbol filtration); essentially, his work upgrades
the bimodule $\mathcal{D}_{Y\leftarrow X}^{(0)}$ to a filtered $(\varphi^{-1}(\mathcal{D}_{Y}^{(0)}),\mathcal{D}_{X}^{(0)})$-bimodule
via
\[
F_{i}(\mathcal{D}_{Y\leftarrow X}^{(0)}):=\varphi^{-1}(F_{i}(\mathcal{D}_{Y}^{(0)})\otimes_{\mathcal{O}_{Y}}\omega_{Y}^{-1})\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\omega_{X}
\]
(c.f. \cite{key-19}, formula 5.1.3); then one may define ${\displaystyle \int_{\varphi}}$
via the usual formula, but using the tensor product and push-forward
in the filtered derived categories. On the other hand, we can apply
the Rees construction to the above filtered bimodule to obtain $\mathcal{R}(\mathcal{D}_{Y\leftarrow X}^{(0)})$,
a graded $(\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})),\mathcal{R}(\mathcal{D}_{X}^{(0)}))$
bimodule, which (again by the usual formula) yields a push-forward
functor ${\displaystyle \int_{\varphi}}:D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})))\to D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})))$,
and we have the following evident compatibility:
\begin{lem}
Let $\mathcal{M}^{\cdot}\in D((\mathcal{D}_{X}^{(0)},F)-\text{mod})$.
Then we have
\[
\mathcal{R}(\int_{\varphi}\mathcal{M}^{\cdot})\tilde{\to}\int_{\varphi}\mathcal{R}(\mathcal{M}^{\cdot})
\]
In particular, the Hodge-to-de Rham spectral sequence for ${\displaystyle \int_{\varphi}\mathcal{M}^{\cdot}}$
degenerates at $E_{1}$ iff each of the sheaves ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}\mathcal{R}(\mathcal{M}^{\cdot}))}$
is torsion-free over the Rees parameter $f$.
\end{lem}
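For concreteness, we recall (as a sketch, with the usual indexing convention, which we take as an assumption here) that the Rees construction applied to the filtered bimodule above produces
\[
\mathcal{R}(\mathcal{D}_{Y\leftarrow X}^{(0)})=\bigoplus_{i}F_{i}(\mathcal{D}_{Y\leftarrow X}^{(0)})\cdot f^{i}\subset\mathcal{D}_{Y\leftarrow X}^{(0)}[f,f^{-1}]
\]
with the graded $(\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})),\mathcal{R}(\mathcal{D}_{X}^{(0)}))$-bimodule structure induced from the filtered one.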
Next, we relate this to the pull-back and push-forward for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
modules, starting with the analogous statement for pull-back:
\begin{lem}
\label{lem:Hodge-Filtered-Pull}Let $\varphi:X\to Y$ and suppose
$\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})))$.
Then $L\varphi^{*}\circ F_{Y}^{*}\mathcal{M}^{\cdot}\tilde{=}F_{X}^{*}\circ L\varphi^{*}\mathcal{M}^{\cdot}$.
Here, the pullback on the left is in the category of $\mathcal{R}(\mathcal{D}^{(1)})$-modules,
while the pullback on the right is in the category of $\mathcal{R}(\mathcal{D}^{(0)})$-modules.
\end{lem}
\begin{proof}
Since $\varphi\circ F_{X}=F_{Y}\circ\varphi$ we have
\[
L\varphi^{*}\circ F_{Y}^{*}\mathcal{M}^{\cdot}\tilde{\to}F_{X}^{*}\circ L\varphi^{*}\mathcal{M}^{\cdot}
\]
as (graded) $\mathcal{O}_{X}$-modules; we need to check that this
map preserves the $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-module structures
on both sides. This question is local, so we may suppose $X=\text{Spec}(B)$
and $Y=\text{Spec}(A)$ both possess local coordinates. Further, by
taking a K-flat resolution of $\mathcal{M}^{\cdot}$ we may suppose
that $\mathcal{M}^{\cdot}=\mathcal{M}$ is concentrated in a single
degree. Now, as an $\mathcal{R}(\mathcal{D}_{Y}^{(1)})$-module, $F_{Y}^{*}\mathcal{M}$
possesses the structure of a connection with $p$-curvature $0$,
and so the induced connection on $\varphi^{*}F_{Y}^{*}\mathcal{M}$
also has $p$-curvature $0$; and the kernel of this connection is
equal to $(\varphi^{(1)})^{*}\mathcal{M}^{(1)}\subset\varphi^{*}F_{Y}^{*}\mathcal{M}$
(here $\mathcal{M}^{(1)}$ denotes $\sigma^{*}\mathcal{M}$ where
$\sigma:Y^{(1)}\to Y$ is the natural isomorphism of schemes). Note
that $\mathcal{M}^{(1)}$ possesses the action of $\mathcal{R}(\mathcal{D}_{Y^{(1)}}^{(0)})$
(c.f. \remref{The-inverse-to-F^*}).
Let $\{\partial_{i}\}_{i=1}^{n}$ be coordinate derivations on $X$.
Then the action of $\partial_{i}^{[p]}$ on $\varphi^{*}F_{Y}^{*}\mathcal{M}$
is given (by \propref{pull-back-in-pos-char}) by first restricting
$\partial_{i}^{[p]}$ to a differential operator $\varphi^{-1}(\mathcal{O}_{Y})\to\mathcal{O}_{X}$,
writing the resulting operator as
\[
\sum_{j=1}^{r}b_{j}^{p}\partial_{j}^{[p]}+\sum_{J}b_{J}\partial^{J}
\]
(where $\{\partial_{j}\}_{j=1}^{r}$ are coordinate derivations on
$Y$, and $b_{j},b_{J}\in B$) and then letting $\partial_{i}^{[p]}$
act as
\[
\sum_{j=1}^{r}b_{j}^{p}\partial_{j}^{[p]}+\sum_{J}b_{J}\partial^{J}
\]
Therefore, the action of $\partial_{i}^{[p]}$ preserves $(\varphi^{(1)})^{*}\mathcal{M}^{(1)}$
and it acts there as ${\displaystyle \partial_{i}^{[p]}(1\otimes m)=\sum_{j=1}^{r}b_{j}^{p}\cdot\partial_{j}^{[p]}(m)}$.
But the action of $\{\partial_{j}^{[p]}\}$ on $\mathcal{M}^{(1)}$
defines the action of $\mathcal{R}(\mathcal{D}_{X^{(1)}}^{(0)})$
on $\mathcal{M}^{(1)}$, and this formula simply defines the pullback
from $\mathcal{R}(\mathcal{D}_{Y^{(1)}}^{(0)})$ to $\mathcal{R}(\mathcal{D}_{X^{(1)}}^{(0)})$-modules;
in other words, $(\varphi^{(1)})^{*}\mathcal{M}^{(1)}=((\varphi)^{*}\mathcal{M})^{(1)}$
where $\varphi^{*}\mathcal{M}$ is the usual pullback of $\mathcal{R}(\mathcal{D}^{(0)})$-modules.
Thus we see that $\varphi^{*}F_{Y}^{*}\mathcal{M}=F_{X}^{*}((\varphi)^{*}\mathcal{M})$
as $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules, as desired.
\end{proof}
Now we discuss push-forward:
\begin{thm}
\label{thm:Hodge-Filtered-Push}Let $\mathcal{M}^{\cdot}$ be a complex
of graded $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules, and via \thmref{Filtered-Frobenius}
write $\mathcal{M}^{\cdot}\tilde{=}F_{X}^{*}\mathcal{N}^{\cdot}$,
where $\mathcal{N}^{\cdot}$ is a complex of graded $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-modules.
There is an isomorphism
\[
\int_{\varphi,1}\mathcal{M}^{\cdot}\tilde{\to}F_{Y}^{*}\int_{\varphi}\mathcal{N}^{\cdot}
\]
where ${\displaystyle \int_{\varphi}\mathcal{N}}^{\cdot}$ is the
pushforward of $\mathcal{N}^{\cdot}$ over $\mathcal{R}(\mathcal{D}_{X}^{(0)})$.
\end{thm}
\begin{proof}
(following \cite{key-2}, theoreme 3.4.4) By left-right interchange
it is equivalent to prove the right-handed version
\[
\int_{\varphi,1}F_{X}^{!}\mathcal{N}^{\cdot}\tilde{\to}F_{Y}^{!}\int_{\varphi}\mathcal{N}^{\cdot}
\]
for any $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})^{\text{opp}}))$.
We have
\[
\int_{\varphi,1}F_{X}^{!}(\mathcal{N}^{\cdot})=R\varphi_{*}(F_{X}^{!}(\mathcal{N}^{\cdot})\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}_{X\to Y}^{(1)})=R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}^{L}F_{X}^{!}(\mathcal{R}(\mathcal{D}_{X}^{(0)}))\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}_{X\to Y}^{(1)})
\]
Now, recall
\[
\mathcal{R}_{X\to Y}^{(1)}=\varphi^{*}\mathcal{R}(\mathcal{D}_{Y}^{(1)})\tilde{=}\varphi^{*}F_{Y}^{*}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(1)})\tilde{=}F_{X}^{*}\varphi^{*}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(1)})
\]
where the second isomorphism is \corref{Filtered-right-Frob}, and
the third is by the lemma above; note that this isomorphism preserves
the natural right $\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))$
-module structures on both sides. It follows (c.f. \propref{F^*F^!},
part $2)$, and \corref{Filtered-right-Frob}) that
\[
F_{X}^{!}(\mathcal{R}(\mathcal{D}_{X}^{(0)}))\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}_{X\to Y}^{(1)}\tilde{=}\varphi^{*}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(0)})
\]
(as $(\mathcal{R}(\mathcal{D}_{X}^{(1)}),\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(1)})))$
bimodules). Therefore
\[
R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}^{L}F_{X}^{!}(\mathcal{R}(\mathcal{D}_{X}^{(0)}))\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}^{L}\mathcal{R}_{X\to Y}^{(1)})\tilde{=}R\varphi_{*}(\mathcal{N}^{\cdot}\otimes_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}^{L}\varphi^{*}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(0)}))
\]
\[
\tilde{=}\left(\int_{\varphi}\mathcal{N}^{\cdot}\right)\otimes_{\mathcal{R}(\mathcal{D}_{Y}^{(0)})}^{L}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(0)})
\]
where the last line is \lemref{proj-over-D}. However,
\[
\left(\int_{\varphi}\mathcal{N}^{\cdot}\right)\otimes_{\mathcal{R}(\mathcal{D}_{Y}^{(0)})}^{L}F_{Y}^{!}\mathcal{R}(\mathcal{D}_{Y}^{(0)})=F_{Y}^{!}\int_{\varphi}\mathcal{N}^{\cdot}
\]
whence the result.
\end{proof}
To exploit this result, we recall that the formalism of de Rham cohomology
applies to $\mathcal{R}(\mathcal{D}_{X}^{(0)})$:
\begin{prop}
Let $\varphi:X\to Y$ be smooth of relative dimension $d$. Then the
induced connection $\nabla:\mathcal{D}_{X}^{(0)}\to\mathcal{D}_{X}^{(0)}\otimes\Omega_{X/Y}^{1}(1)$
is a morphism of filtered right $\mathcal{D}_{X}^{(0)}$-modules (with
respect to the symbol filtration; the symbol $(1)$ denotes a shift
in the filtration). The associated de Rham complex
\[
\mathcal{D}_{X}^{(0)}\to\mathcal{D}_{X}^{(0)}\otimes\Omega_{X/Y}^{1}(1)\to\mathcal{D}_{X}^{(0)}\otimes\Omega_{X/Y}^{2}(2)\to\dots\to\mathcal{D}_{X}^{(0)}\otimes\Omega_{X/Y}^{d}(d)
\]
is exact except at the rightmost term, where the cokernel is $\mathcal{D}_{Y\leftarrow X}^{(0)}(d)$
(as a filtered module).
After applying the left-right swap and a shift in the filtration,
we obtain the Spencer complex
\[
\mathcal{D}_{X}^{(0)}\otimes\mathcal{T}_{X/Y}^{d}(-d)\to\mathcal{D}_{X}^{(0)}\otimes\mathcal{T}_{X/Y}^{d-1}(-d+1)\to\dots\to\mathcal{D}_{X}^{(0)}\otimes\mathcal{T}_{X/Y}(-1)\to\mathcal{D}_{X}^{(0)}
\]
of left filtered $\mathcal{D}_{X}^{(0)}$-modules, which is exact
except at the rightmost term, and the cokernel is $\mathcal{D}_{X\to Y}^{(0)}$
(as a filtered module). Applying the Rees functor yields a complex
\[
\mathcal{R}(\mathcal{D}_{X}^{(0)})\otimes\mathcal{T}_{X/Y}^{d}(-d)\to\mathcal{R}(\mathcal{D}_{X}^{(0)})\otimes\mathcal{T}_{X/Y}^{d-1}(-d+1)\to\dots\to\mathcal{R}(\mathcal{D}_{X}^{(0)})\otimes\mathcal{T}_{X/Y}(-1)\to\mathcal{R}(\mathcal{D}_{X}^{(0)})
\]
which is exact except at the rightmost term, and the cokernel is
$\mathcal{R}(\mathcal{D}_{X\to Y}^{(0)})$.
\end{prop}
The proof of this is identical to that of the corresponding result
in characteristic zero (\cite{key-4}, proposition 4.2); one notes
that the associated graded is a Koszul resolution. Applying this resolution
in the definition of the filtered push-forward, one deduces
\begin{cor}
Let $\varphi:X\to Y$ be smooth of relative dimension $d$. Let $\mathcal{M}$
be a filtered $\mathcal{D}_{X}^{(0)}$-module (with respect to the
symbol filtration). Then there is an isomorphism
\[
\int_{\varphi}\mathcal{M}[-d]\tilde{=}R\varphi_{*}(\mathcal{M}(-d)\xrightarrow{\nabla}\mathcal{M}\otimes\Omega_{X/Y}^{1}(1-d)\xrightarrow{\nabla}\mathcal{M}\otimes\Omega_{X/Y}^{2}(2-d)\xrightarrow{\nabla}\dots\xrightarrow{\nabla}\mathcal{M}\otimes\Omega_{X/Y}^{d})
\]
in the filtered derived category of $\mathcal{O}_{Y}$-modules.
\end{cor}
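For instance, when $\varphi$ is smooth of relative dimension $d=1$, the complex above has only two terms and the corollary reads
\[
\int_{\varphi}\mathcal{M}[-1]\tilde{=}R\varphi_{*}(\mathcal{M}(-1)\xrightarrow{\nabla}\mathcal{M}\otimes\Omega_{X/Y}^{1})
\]
which is simply the instantiation of the formula at $d=1$.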
In fact, with a little more work, one can show that, for any $i$,
the $\mathcal{D}_{Y}^{(0)}$-module structure on the sheaf ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}\mathcal{M})}$
is given by the Gauss-Manin connection (c.f., e.g. \cite{key-51},
proposition 1.4). Thus the push-forward for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules
is exactly the ``Frobenius pullback of Gauss-Manin.''
As another corollary, we have
\begin{cor}
\label{cor:sm-adunction-for-filtered-D}Let $\varphi:X\to Y$ be smooth
of relative dimension $d$.
1) There is an isomorphism $R\underline{\mathcal{H}om}{}_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}(\mathcal{R}(\mathcal{D}_{X\to Y}^{(0)}),\mathcal{R}(\mathcal{D}_{X}^{(0)}))\tilde{=}\mathcal{R}(\mathcal{D}_{Y\leftarrow X}^{(0)})(d)[-d]$
as $(\varphi^{-1}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})),\mathcal{R}(\mathcal{D}_{X}^{(0)}))$
bimodules.
2) There is an isomorphism of functors
\[
R\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{R}(\mathcal{D}_{X}^{(0)})}(\varphi^{\dagger}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}{}_{\mathcal{R}(\mathcal{D}_{Y}^{(0)})}(\mathcal{N}^{\cdot},\int_{\varphi}\mathcal{M}^{\cdot}(d))
\]
for any $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(0)})))$
and any $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})))$.
The analogous isomorphism holds for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules.
\end{cor}
\begin{proof}
Part $1)$ follows directly from the previous proposition; compare
\cite{key-4}, propositions 4.2 and 4.19. Then $2)$ follows from
$1)$, as in \cite{key-4}, Theorem 4.40 (we'll give the argument
below in a slightly different context in \corref{smooth-adjunction}).
Finally, the statement for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules
follows from the Frobenius descent (\lemref{Hodge-Filtered-Pull}
and \thmref{Hodge-Filtered-Push}).
\end{proof}
Next we are going to give the analogue of these results for the push-forward
of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-modules; and compare
with the constructions of \cite{key-11}, section 3.4. We start with
the analogues of the de Rham resolution and the adjunction for smooth
morphisms. Although $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
does possess a canonical flat connection, the resulting (relative)
de Rham complex is not a resolution of a transfer bimodule. Instead,
we consider the action of the center
\[
\mathcal{Z}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))\tilde{=}\mathcal{O}_{T^{*}X^{(1)}}[v]
\]
The action map $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{T}_{X^{(1)}}(-1)\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
yields (by dualizing) a map
\[
\Theta:\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X^{(1)}}}\Omega_{X^{(1)}}^{1}(1)
\]
which makes $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ into
a Higgs sheaf over $X^{(1)}$. In particular we have $\Theta\circ\Theta=0$
and so we can form the complex $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X^{(1)}}}\Omega_{X^{(1)}}^{i}(i)$
with the differential induced from $\Theta$. In addition, we can
form the analogue of the Spencer complex, whose terms are $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X^{(1)}}}\mathcal{T}_{X^{(1)}}^{i}(-i)$.
Now let $\varphi:X\to Y$ be a smooth morphism of relative dimension
$d$. Let $X_{Y}^{(1)}\to Y$ be the base change of this morphism
over the absolute Frobenius on $Y$. Then we can perform the above
constructions for $\Omega_{X_{Y}^{(1)}/Y}^{i}$ instead of $\Omega_{X^{(1)}}^{i}$.
We have
\begin{lem}
\label{lem:Koszul-Res-For-R-bar} The complex $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\mathcal{T}_{X_{Y}^{(1)}/Y}^{i}(-i)$
is exact except at the right-most term. The image of the map
\[
\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\mathcal{T}_{X_{Y}^{(1)}/Y}(-1)\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})
\]
is the central ideal $\mathcal{J}$ generated by $\mathcal{T}_{X_{Y}^{(1)}/Y}\subset\mathcal{Z}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$.
The cokernel of the map, $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$,
carries the structure of a right $\mathcal{D}_{X/Y}^{(0)}$-module,
of $p$-curvature zero; this action commutes with the natural left
$\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-module structure
on $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$. The
cokernel of the associated map $(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J})\otimes_{\mathcal{O}_{X/Y}}\mathcal{T}_{X/Y}\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$
is isomorphic to $\mathcal{\overline{R}}_{X\to Y}^{(0)}$.
The complex $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\Omega_{X_{Y}^{(1)}/Y}^{i}(i)$
is exact except at the right-most term. The cokernel of the map
\[
\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\Omega_{X_{Y}^{(1)}/Y}^{d-1}(d-1)\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\Omega_{X_{Y}^{(1)}/Y}^{d}(d)
\]
denoted $\mathcal{K}_{X/Y}$, carries the structure of a left $\mathcal{D}_{X/Y}^{(0)}$-module,
of $p$-curvature zero; this action commutes with the natural right
$\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-module structure
on $\mathcal{K}_{X/Y}$. The kernel of the associated connection on
$\mathcal{K}_{X/Y}$ is isomorphic to $\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)$.
\end{lem}
\begin{proof}
Choose local coordinates on $X$ for which $\text{Der}(\mathcal{O}_{X})$
is the free module on $\{\partial_{1},\dots,\partial_{n}\}$ and $\text{Der}_{\mathcal{O}_{Y}}(\mathcal{O}_{X})$ is the free module on $\{\partial_{n-d+1},\dots,\partial_{n}\}$.
Then the complex under consideration is simply the Koszul complex
for the elements $\{\partial_{n-d+1}^{p},\dots,\partial_{n}^{p}\}$,
which proves the exactness statements. Furthermore, as the elements
$\{\partial_{n-d+1}^{p},\dots,\partial_{n}^{p}\}$ are central in
$\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$, we see that $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$
has the structure of a left and right $\mathcal{D}_{X}^{(0)}$-module
(we are here using the fact that $\mathcal{D}_{X}^{(0)}$ is the degree
$0$ part of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$). Now,
we have
\[
\mathcal{\overline{R}}_{X\to Y}^{(0)}=\text{coker}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X_{Y}^{(1)}}}\mathcal{T}_{X/Y}(-1)\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))
\]
\[
=\text{coker}((\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J})\otimes_{\mathcal{O}_{X/Y}}\mathcal{T}_{X/Y}\to\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J})
\]
since $X\to Y$ is smooth. The second statement follows similarly.
\end{proof}
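To illustrate, suppose (as a sketch, in the local coordinates used in the proof) that $\varphi$ is smooth of relative dimension $d=1$, with relative coordinate derivation $\partial_{n}$. Then the Spencer-type complex of the lemma reduces to the two-term Koszul complex
\[
\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})(-1)\xrightarrow{\cdot\partial_{n}^{p}}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})
\]
whose cokernel is $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$ with $\mathcal{J}=(\partial_{n}^{p})$.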
Now we can give the analogue of \corref{sm-adunction-for-filtered-D}.
It reads:
\begin{cor}
\label{cor:sm-adjunction-for-R-bar}Let $\varphi:X\to Y$ be smooth
of relative dimension $d$.
1) There is an isomorphism $R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{\overline{R}}(\mathcal{D}_{X\to Y}^{(0)}),\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))\tilde{=}\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)[-d]$
as $(\varphi^{-1}(\mathcal{\overline{R}}(\mathcal{D}_{Y}^{(0)})),\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$
bimodules.
2) There is an isomorphism of functors
\[
R\varphi_{*}R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{\dagger,(0)}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{Y}^{(0)})}(\mathcal{N}^{\cdot},\int_{\varphi,0}\mathcal{M}^{\cdot}(d))
\]
for any $\mathcal{N}^{\cdot}\in D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})))$
and any $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$.
\end{cor}
\begin{proof}
As in the proof of \corref{smooth-adjunction} below, $2)$ follows
formally from $1)$. To see $1)$, we note that for any $\mathcal{N}\in\mathcal{G}(\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))$
the complex $R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J},\mathcal{N})$
can be considered a complex of left $\mathcal{D}_{X/Y}^{(0)}$-modules
with $p$-curvature $0$ (as $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J}$
is a right $\mathcal{D}_{X/Y}^{(0)}$-module of $p$-curvature zero,
and this action commutes with the left $\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$-action).
As Cartier descent for $\mathcal{D}_{X/Y}^{(0)}$-modules of $p$-curvature
$0$ is an exact functor, applying the previous lemma we obtain
\[
R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{\overline{R}}(\mathcal{D}_{X\to Y}^{(0)}),\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))\tilde{=}R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})/\mathcal{J},\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))^{\nabla}
\]
\[
\tilde{=}(\mathcal{K}_{X/Y}[-d])^{\nabla}=\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)[-d]
\]
as desired.
\end{proof}
Now we'll give the relation of our pushforward to the constructions
of \cite{key-11}, section 3.4. We recall that to any morphism $\varphi:X\to Y$
we may attach the diagram
\[
T^{*}X\xleftarrow{d\varphi}X\times_{Y}T^{*}Y\xrightarrow{\pi}T^{*}Y
\]
and we use the same letters to denote the products of these morphisms
with $\mathbb{A}^{1}$. We have the following analogue of \cite{key-52},
proposition 3.7 (c.f. also \cite{key-11}, theorem 3.11):
\begin{lem}
\label{lem:Bez-Brav}There is an equivalence of graded Azumaya algebras
$(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\sim(\pi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$.
\end{lem}
\begin{proof}
Consider the (graded) Azumaya algebra $\mathcal{A}:=(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}[v]}(\pi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})^{\text{opp}}$
on $(X\times_{Y}T^{*}Y)^{(1)}\times\mathbb{A}^{1}$. It is enough
to find a (graded) splitting module for $\mathcal{A}$; i.e., a graded
$\mathcal{A}$-module which is locally free of rank $p^{\text{dim}(X)+\text{dim}(Y)}$
over $\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}[v]$.
The graded $(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}),\varphi^{-1}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})))$
bimodule $\overline{\mathcal{R}}_{X\to Y}:=\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$
inherits the structure of an $\mathcal{A}$-module; we claim it is
locally free over $\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}[v]$ of
the correct rank. This can be checked after inverting $v$ and setting
$v=0$; upon inverting $v$ it becomes (via the isomorphisms $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})[v^{-1}]\tilde{=}\mathcal{D}_{X}^{(0)}[v,v^{-1}]$
and $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})[v^{-1}]\tilde{=}\mathcal{D}_{Y}^{(0)}[v,v^{-1}]$)
a direct consequence of \cite{key-52}, proposition 3.7. After setting
$v=0$ we have $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})/v=\text{gr}(\mathcal{D}_{Y}^{(0)})=\overline{\mathcal{D}}_{Y}^{(0)}\otimes_{\mathcal{O}_{Y^{(1)}}}\mathcal{O}_{T^{*}Y^{(1)}}$;
and similarly for $X$. Therefore
\[
\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})/v\tilde{=}\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\overline{\mathcal{D}}_{Y}^{(0)}\otimes_{\mathcal{O}_{Y^{(1)}}}\mathcal{O}_{T^{*}Y^{(1)}})
\]
\[
\tilde{=}\mathcal{O}_{X}\otimes_{\varphi^{-1}(\mathcal{O}_{Y})}\varphi^{-1}(\overline{\mathcal{D}}_{Y}^{(0)})\otimes_{\varphi^{-1}(\mathcal{O}_{Y^{(1)}})}\varphi^{-1}(\mathcal{O}_{T^{*}Y^{(1)}})
\]
But $\varphi^{-1}(\overline{\mathcal{D}}_{Y}^{(0)})$ is locally free
of rank $p^{\text{dim}(Y)}$ over $\varphi^{-1}(\mathcal{O}_{Y})$,
and $\mathcal{O}_{X}$ is locally free of rank $p^{\text{dim}(X)}$
over $\mathcal{O}_{X^{(1)}}$; so $\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})/v$
is locally free of rank $p^{\text{dim}(X)+\text{dim}(Y)}$ over $\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}$
as claimed.
\end{proof}
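As a sanity check on the rank count (granting the standard fact that $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$ and $\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$ are Azumaya of ranks $p^{2\text{dim}(X)}$ and $p^{2\text{dim}(Y)}$ over their respective centers), the algebra $\mathcal{A}$ has rank
\[
p^{2\text{dim}(X)}\cdot p^{2\text{dim}(Y)}=(p^{\text{dim}(X)+\text{dim}(Y)})^{2}
\]
so a splitting module must be locally free of rank exactly $p^{\text{dim}(X)+\text{dim}(Y)}$; this is the rank verified above for $\overline{\mathcal{R}}_{X\to Y}$.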
Next, we have the following straightforward lemma:
\begin{lem}
Let $\varphi:X\to Y$ be smooth. Then $d\varphi$ is a closed immersion,
and we may regard the algebra $(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
as a (graded) central quotient of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
The obvious functor $(d\varphi)_{*}:D(\mathcal{G}((d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))\to D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$
admits a right adjoint $(d\varphi)^{!}$ defined by $\mathcal{M}^{\cdot}\to R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}((d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}),\mathcal{M}^{\cdot})$.
\end{lem}
Therefore we obtain
\begin{cor}
\label{cor:Filtered-Bez-Brav}Let $C:D(\mathcal{G}((d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))\to D(\mathcal{G}((\pi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})))$
denote the equivalence of categories resulting from \lemref{Bez-Brav}.
Then, when $\varphi:X\to Y$ is smooth of relative dimension $d$,
there is an isomorphism of functors
\[
\int_{\varphi,0}\tilde{\to}R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}[-d]:D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))\to D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})))
\]
Therefore, the functor ${\displaystyle \int_{\varphi,0}}[d]$ agrees,
under the application of the Rees functor, with the pushforward of
conjugate-filtered derived categories constructed in \cite{key-11},
section 3.4.
\end{cor}
\begin{proof}
(in the spirit of \cite{key-11}, proposition 3.12) We have, for any
$\mathcal{M}^{\cdot}\in D(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$,
\[
C\circ(d\varphi)^{!}(\mathcal{M}^{\cdot})=C\circ R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}((d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}),\mathcal{M}^{\cdot})
\]
\[
\tilde{=}\underline{\mathcal{H}om}_{(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}),R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}((d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}),\mathcal{M}^{\cdot}))
\]
Since $\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})$
is locally projective over $(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$,
this is canonically isomorphic to
\[
R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})\otimes_{(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(d\varphi^{(1)})^{*}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}),\mathcal{M}^{\cdot})
\]
\[
=R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}),\mathcal{M}^{\cdot})
\]
so that
\[
R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}(\mathcal{M}^{\cdot})\tilde{\to}R\pi_{*}^{(1)}R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}),\mathcal{M}^{\cdot})
\]
\[
\tilde{=}R\varphi_{*}R\underline{\mathcal{H}om}_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\varphi^{*}\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}),\mathcal{M}^{\cdot})
\]
But the right-hand functor is canonically isomorphic to ${\displaystyle \int_{\varphi,0}}$
by smooth adjunction (\corref{sm-adjunction-for-R-bar}).
\end{proof}
From this description it follows directly (c.f. \cite{key-11}, lemma
3.18) that (up to a renumbering) the spectral sequence associated
to the filtration on ${\displaystyle \int_{\varphi}\mathcal{O}_{X}}$
agrees with the usual conjugate spectral sequence; i.e., the ``second
spectral sequence'' for $R\varphi_{dR,*}(\mathcal{O}_{X})$ as discussed
in \cite{key-12}.
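For the reader's convenience we recall that, under the usual identifications, the spectral sequence in question is the second hypercohomology spectral sequence
\[
E_{2}^{i,j}=R^{i}\varphi_{*}\mathcal{H}^{j}(\Omega_{X/Y}^{\cdot})\Rightarrow R^{i+j}\varphi_{dR,*}(\mathcal{O}_{X})
\]
whose $E_{2}$-terms are computed, via the Cartier isomorphism, by the coherent cohomology of the sheaves $\Omega_{X_{Y}^{(1)}/Y}^{j}$.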
\subsection{Adjunction for a smooth morphism, base change, and the projection
formula}
In this section, we prove adjunction for a smooth morphism $\varphi:\mathfrak{X}\to\mathfrak{Y}$
and the projection formula for an arbitrary morphism; as consequences
we obtain the smooth base change and the Kunneth formula,
in fairly general contexts. To start off, let us recall:
\begin{prop}
For a smooth morphism $\varphi:\mathfrak{X}\to\mathfrak{Y}$ there
is an isomorphism of sheaves $\mathcal{R}\mathcal{H}om_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}[-d_{X/Y}]$.
\end{prop}
This is proved identically to the analogous fact for $\mathcal{D}_{X}^{(0)}$
and $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-modules, as discussed above
in \corref{sm-adunction-for-filtered-D}.
\begin{prop}
For a smooth morphism $\varphi:\mathfrak{X}\to\mathfrak{Y}$ of relative
dimension $d$, there is an isomorphism $\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}(d)[-d]$
\end{prop}
\begin{proof}
We have
\[
\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}
\]
\[
\tilde{=}\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\otimes_{D(W(k))}^{L}W(k)[f,v]/(v-1)
\]
\[
\tilde{=}\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\otimes_{D(W(k))}^{L}W(k)[f,v]/(v-1),\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}\otimes_{D(W(k))}^{L}W(k)[f,v]/(v-1))
\]
\[
\tilde{=}R\mathcal{H}om_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})
\]
and
\[
\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\otimes_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{D}_{X}^{(0,1)}\tilde{=}\mathcal{R}\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\otimes_{W(k)}^{L}k
\]
\[
\tilde{=}R\mathcal{H}om_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})
\]
By the same token, we have
\[
R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{R}(\mathcal{D}_{X}^{(1)})\tilde{=}R\underline{\mathcal{H}om}{}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{R}_{X\to Y}^{(1)},\mathcal{R}(\mathcal{D}_{X}^{(1)}))
\]
and the analogous statement for $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
Applying the smooth adjunction for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules
(\corref{sm-adunction-for-filtered-D}, part $2)$) to the case where
$\mathcal{N}^{\cdot}=\mathcal{R}(\mathcal{D}_{Y}^{(1)})$ and $\mathcal{M}^{\cdot}=\mathcal{R}(\mathcal{D}_{X}^{(1)})$,
we have an isomorphism
\[
R\underline{\mathcal{H}om}{}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{R}_{X\to Y}^{(1)},\mathcal{R}(\mathcal{D}_{X}^{(1)}))\tilde{=}\mathcal{R}_{Y\leftarrow X}^{(1)}(d)[-d]
\]
and by \corref{sm-adjunction-for-R-bar} we have
\[
R\underline{\mathcal{H}om}{}_{\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{\overline{R}}(\mathcal{D}_{X\to Y}^{(0)}),\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))\tilde{=}\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)[-d]
\]
Furthermore, using the relative de Rham resolution for $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules
(or, equivalently, the previous proposition) we have $\mathcal{R}\mathcal{H}om_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)})\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}[-d_{X/Y}]$.
On the other hand, we have the short exact sequence
\[
\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})\to\mathcal{D}_{X}^{(0,1)}\to\mathcal{R}(\mathcal{D}_{X}^{(1)})(-1)
\]
which by \propref{Sandwich!} yields the distinguished triangle
\[
R\underline{\mathcal{H}om}{}_{\bar{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{\overline{R}}_{X\to Y},\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)}))\to R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})
\]
\[
\to R\underline{\mathcal{H}om}{}_{\mathcal{R}(\mathcal{D}_{X}^{(1)})}(\mathcal{R}_{X\to Y},\mathcal{R}(\mathcal{D}_{X}^{(1)}))(-1)
\]
which implies that $R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})$
is concentrated in a single homological degree (namely $d$). So,
since $R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$
is cohomologically complete, we see that $\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
is $p$-torsion-free and concentrated in degree $0$. We also see,
by \propref{coh-to-coh}, that this module is coherent over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$
(each of $\mathcal{R}_{Y\leftarrow X}^{(1)}$ and $\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}$
is coherent, since $X\to Y$ is smooth). Further, since $R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)})\otimes_{\mathcal{D}_{X}^{(0,1)}}^{L}\mathcal{\overline{R}}(\mathcal{D}_{X}^{(0)})$
are concentrated in degree $d$ as well, we see that $\text{im}(f)=\text{ker}(v)$
and $\text{im}(v)=\text{ker}(f)$ on $\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)}))$
(by \lemref{Basic-Facts-on-Rigid}). Furthermore, the distinguished
triangle above now yields the short exact sequence
\[
\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)\to\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\mathcal{D}_{X}^{(0,1)}}(\mathcal{D}_{X\to Y}^{(0,1)},\mathcal{D}_{X}^{(0,1)}))\to\mathcal{R}_{Y\leftarrow X}^{(1)}(d-1)
\]
and since $\mathcal{R}_{Y\leftarrow X}^{(1)}$ is $f$-torsion-free,
we see that $\text{im}(v)=\text{ker}(f)=\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)$
and so $\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
satisfies the conditions of \propref{Baby-Mazur}. So we may conclude
that the module $\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
is standard. Furthermore, we see that the grading on $\mathcal{\overline{R}}_{Y\leftarrow X}^{(0)}(d)$
is zero in degrees $<-d$ and is nontrivial in degree $-d$ and above.
Therefore, the index (as defined directly below \defref{Standard!})
is $d$. Since we identified $\mathcal{H}^{d}(R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})^{-\infty})$
with $\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}$
we see that
\[
\mathcal{H}^{d_{X/Y}}(R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})^{i-d})=\{m\in\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}[p^{-1}]|p^{i}m\in\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0)}\}
\]
which is exactly the definition of $\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}(d)$.
\end{proof}
From this one deduces
\begin{cor}
\label{cor:smooth-adjunction}Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
be smooth of relative dimension $d$; let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$.
Then there is an isomorphism of functors
\[
R\varphi_{*}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\varphi^{\dagger}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\mathcal{N}^{\cdot},\int_{\varphi}\mathcal{M}^{\cdot}(d))
\]
In particular, if $\varphi$ is also proper, then since both $\varphi^{\dagger}$
and ${\displaystyle \int_{\varphi}}$ preserve $D_{coh}^{b}$, we
obtain that these functors form an adjoint pair on $D_{coh}^{b}$.
Further, the analogous isomorphism holds for $\varphi:X\to Y$, and
in that situation the functors are adjoint on $D_{qcoh}^{b}$ (even
if $\varphi$ is not proper).
\end{cor}
\begin{proof}
(following \cite{key-4}, Theorem 4.40). We have
\[
R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\varphi^{\dagger}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})\tilde{\to}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}\widehat{\otimes}_{\varphi^{-1}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}\varphi^{-1}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})[d]
\]
\[
\tilde{\to}R\underline{\mathcal{H}om}{}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}(\varphi^{-1}\mathcal{N}^{\cdot},R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\mathcal{M}^{\cdot}))[d]
\]
To prove the last isomorphism, one may reduce mod $p$, and then apply
\lemref{basic-hom-tensor} (part $1$), noting that $\mathcal{D}_{X\to Y}^{(0,1)}$
is faithfully flat over $\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})$.
Further, we have
\[
R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\mathcal{M}^{\cdot})\tilde{\leftarrow}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}\tilde{=}\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}(d)[-d]
\]
where the first isomorphism again follows by reduction mod $p$ and
then applying the fact that $\mathcal{D}_{X\to Y}^{(0,1)}$ is (locally)
isomorphic to a bounded complex of projective $\mathcal{D}_{X}^{(0,1)}$-modules
(by \propref{Quasi-rigid=00003Dfinite-homological}) and the second
isomorphism is the previous proposition. Applying this to the previous
isomorphism we obtain
\[
R\underline{\mathcal{H}om}{}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}(\varphi^{-1}\mathcal{N}^{\cdot},R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)},\mathcal{M}^{\cdot}))[d]\tilde{=}R\underline{\mathcal{H}om}{}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}(\varphi^{-1}\mathcal{N}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}(d))
\]
Then applying $R\varphi_{*}$ we obtain
\[
R\varphi_{*}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\varphi^{\dagger}\mathcal{N}^{\cdot},\mathcal{M}^{\cdot})
\]
\[
\tilde{=}R\varphi_{*}R\underline{\mathcal{H}om}{}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}(\varphi^{-1}\mathcal{N}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}(d))
\]
\[
\tilde{\to}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\mathcal{N}^{\cdot},R\varphi_{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}(d)))\tilde{\to}R\underline{\mathcal{H}om}{}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\mathcal{N}^{\cdot},\int_{\varphi}\mathcal{M}^{\cdot}(d))
\]
where the final isomorphism is the adjunction between $\varphi^{-1}$
and $R\varphi_{*}$. Analogous reasoning applies for $\varphi:X\to Y$.
\end{proof}
Now we prove the projection formula, and then give the smooth
base change and Kunneth formulas in this context. We start with
\begin{thm}
\label{thm:Projection-Formula}(Projection Formula) Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$
be a morphism. Let $\mathcal{M}^{\cdot}\in D_{cc}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{cc}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$,
be such that $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
Then we have
\[
\int_{\varphi}(L\varphi^{*}(\mathcal{N}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{M}^{\cdot})\tilde{\to}\mathcal{N}^{\cdot}\otimes_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\int_{\varphi}\mathcal{M}^{\cdot}
\]
\end{thm}
The proof works essentially the same way as the complex analytic one
(c.f. \cite{key-50}, theorem 2.3.19). In particular, we use \lemref{proj-over-D},
as well as the tensor product juggling lemma \lemref{Juggle}.
\begin{proof}
By the left-right interchange it suffices to prove
\[
\int_{\varphi}(\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}L\varphi^{*}\mathcal{N}^{\cdot})\tilde{=}\int_{\varphi}(\mathcal{M}_{r}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\mathcal{N}^{\cdot}
\]
where $\mathcal{M}_{r}^{\cdot}=\omega_{\mathfrak{X}}\otimes_{\mathcal{O}_{\mathfrak{X}}}\mathcal{M}^{\cdot}$.
We have
\[
\int_{\varphi}(\mathcal{M}_{r}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\mathcal{N}^{\cdot}\tilde{=}\int_{\varphi}(\mathcal{M}_{r}^{\cdot})\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}}^{L}(\mathcal{N}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)})
\]
\[
\tilde{=}R\varphi_{*}(\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}L\varphi^{*}(\mathcal{N}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))\tilde{=}R\varphi_{*}(\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}L\varphi^{*}(\mathcal{N}^{\cdot})\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\varphi^{*}(\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0,1)}))
\]
\[
\tilde{=}R\varphi_{*}((\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}L\varphi^{*}(\mathcal{N}^{\cdot}))\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\tilde{=}R\varphi_{*}((\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}L\varphi^{*}(\mathcal{N}^{\cdot}))\widehat{\otimes}_{\mathcal{\widehat{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{\widehat{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})
\]
\[
=\int_{\varphi}(\mathcal{M}_{r}^{\cdot}\widehat{\otimes}_{D(\mathcal{O}_{\mathfrak{X}})}^{L}L\varphi^{*}\mathcal{N}^{\cdot})
\]
as claimed; note that the second isomorphism is \lemref{proj-over-D}
which uses the assumption on $\mathcal{M}^{\cdot}$ and $\mathcal{N}^{\cdot}$.
\end{proof}
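One special case of the projection formula is worth recording, as it is the shape in which the formula will be used below in the construction of the trace map: taking $\mathcal{M}^{\cdot}=D(\mathcal{O}_{\mathfrak{X}})$, so that the tensor factor on the left collapses to $L\varphi^{*}(\mathcal{N}^{\cdot})$, it reads
\[
\int_{\varphi}L\varphi^{*}(\mathcal{N}^{\cdot})\tilde{\to}\mathcal{N}^{\cdot}\otimes_{D(\mathcal{O}_{\mathfrak{Y}})}^{L}\int_{\varphi}D(\mathcal{O}_{\mathfrak{X}})
\]
for $\mathcal{N}^{\cdot}$ as in the theorem.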
Now we turn to the smooth base change. Consider the fibre square of
smooth formal schemes
$$ \begin{CD} \mathfrak{X}_{\mathfrak{Z}} @>\tilde{\psi} >> \mathfrak{X} \\ @VV\tilde{\varphi}V @VV{\varphi}V \\ \mathfrak{Z} @>\psi >> \mathfrak{Y} \end{CD} $$where
the bottom row $\psi:\mathfrak{Z}\to\mathfrak{Y}$ is smooth of relative
dimension $d$.
We also have the analogous square for smooth varieties over $k$.
\begin{thm}
\label{thm:Smooth-base-change}Suppose that $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$.
There is an isomorphism
\[
\psi^{\dagger}\int_{\varphi}\mathcal{M}^{\cdot}\tilde{\to}\int_{\tilde{\varphi}}\tilde{\psi}{}^{\dagger}\mathcal{M}^{\cdot}
\]
inside $D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Z}}^{(0,1)}))$.
The analogous statement holds for smooth varieties over $k$.
\end{thm}
\begin{proof}
By the adjunction for ${\displaystyle (\tilde{\psi}^{\dagger},\int_{\tilde{\psi}}(d))}$
there is a morphism of functors
\[
\int_{\varphi}\to\int_{\varphi}\circ\int_{\tilde{\psi}}(\tilde{\psi})^{\dagger}(d)\tilde{=}\int_{\psi}\circ\int_{\tilde{\varphi}}(\tilde{\psi})^{\dagger}(d)
\]
where the last isomorphism follows from the composition of push-forwards
(\lemref{Composition-of-pushforwards}). Now, applying the adjunction
for ${\displaystyle (\psi^{\dagger},\int_{\psi}(d))}$, we obtain
a morphism
\[
\psi^{\dagger}\int_{\varphi}\to\int_{\tilde{\varphi}}(\tilde{\psi})^{\dagger}
\]
After applying $\otimes_{W(k)}^{L}k$ we obtain the analogous map
over $k$. So it suffices to show that the map is an isomorphism for
varieties over $k$. Furthermore, working locally on $Z$, we reduce
to the case where the map $\psi:Z\to Y$ factors as an etale morphism
$Z\to Z'$ followed by a projection $Z'\tilde{=}Y\times\mathbb{A}^{d}\to Y$.
In the case of an etale morphism, the functor ${\displaystyle \int_{\varphi}}$
agrees with $R\varphi_{*}$, so the result follows from the usual
flat base change for quasicoherent sheaves. In the case of the projection,
we have
\[
\int_{\tilde{\varphi}}(\tilde{\psi})^{\dagger}\mathcal{M}^{\cdot}\tilde{=}\int_{\text{id}\times\varphi}D(\mathcal{O}_{\mathbb{A}_{k}^{d}})\boxtimes\mathcal{M}^{\cdot}[d]\tilde{=}D(\mathcal{O}_{\mathbb{A}_{k}^{d}})\boxtimes\int_{\varphi}\mathcal{M}^{\cdot}[d]\tilde{=}\psi^{\dagger}\int_{\varphi}\mathcal{M}^{\cdot}
\]
where the second isomorphism follows directly from the definition
of the pushforward; this implies the result in this case.
\end{proof}
From this we deduce the Kunneth formula:
\begin{cor}
Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}))$,
so that $\mathcal{M}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\otimes_{W(k)}^{L}k\in D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
Then there is an isomorphism
\[
\mathbb{H}_{\mathcal{G}}^{\cdot}(\mathcal{M}^{\cdot}\boxtimes\mathcal{N}^{\cdot})\tilde{=}\mathbb{H}_{\mathcal{G}}^{\cdot}(\mathcal{M}^{\cdot})\widehat{\otimes}_{W(k)[f,v]}^{L}\mathbb{H}_{\mathcal{G}}^{\cdot}(\mathcal{N}^{\cdot})
\]
(where $\mathbb{H}_{\mathcal{G}}^{\cdot}$ is defined in \defref{Push!}). The
analogous statement holds for complexes in $D_{qcoh}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
and $D_{qcoh}(\mathcal{G}(\mathcal{D}_{Y}^{(0,1)}))$.
\end{cor}
This is a formal consequence of the projection formula and the smooth
base change (compare, e.g. \cite{key-53}, corollary 2.3.30).
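Informally, in the simplest case $\mathcal{M}^{\cdot}=D(\mathcal{O}_{\mathfrak{X}})$ and $\mathcal{N}^{\cdot}=D(\mathcal{O}_{\mathfrak{Y}})$, this should be regarded as a gauge-theoretic refinement of the classical Kunneth isomorphism
\[
R\Gamma_{dR}(X\times Y)\tilde{=}R\Gamma_{dR}(X)\otimes_{k}^{L}R\Gamma_{dR}(Y)
\]
for the de Rham cohomology of smooth varieties over $k$.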
\section{Operations on Gauges: Duality}
In this section we study the duality functor on $D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
(and on $D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$). Although
neither $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$ nor $\mathcal{D}_{X}^{(0,1)}$
has finite homological dimension, we shall show (using \propref{Sandwich!})
that there is a well-behaved duality functor $\mathbb{D}$ which takes
bounded complexes of coherent modules to bounded complexes of coherent
modules. Further, under suitable conditions this functor commutes
with push-forward, in the following sense:
\begin{thm}
Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$ be either a smooth proper
morphism or a projective morphism. Then there is an isomorphism of
functors
\[
\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\tilde{\to}\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}
\]
The analogous statement holds for either a smooth proper or a projective
morphism $\varphi:X\to Y$. In particular, when $\varphi$ is smooth
proper the functors $(\int_{\varphi},\varphi^{\dagger})$ form an
adjoint pair on $D_{coh}^{b}$.
\end{thm}
The proof, which will essentially occupy this section of the paper,
is somewhat unsatisfactory. The key point is to construct a trace
morphism
\[
\text{tr}:\int_{\varphi}D(\mathcal{O}_{\mathfrak{X}})[d_{X}]\to D(\mathcal{O}_{\mathfrak{Y}})[d_{Y}]
\]
When $\varphi$ is smooth proper this is done by first constructing
the map in $\mathcal{D}^{(0)}$ and $\mathcal{D}^{(1)}$ modules (using
the Hodge to de Rham spectral sequence), and then deducing its existence
for $\mathcal{D}^{(0,1)}$-modules. When $\varphi$ is a closed immersion
the construction of the trace follows from a direct consideration
of the structure of ${\displaystyle \int_{\varphi}}$ (the transfer
bimodule is easy to describe in this case). For a projective $\varphi$
one defines the trace by breaking up the map into an immersion followed
by a projection. Presumably there is a way to construct the trace
for all proper morphisms at once, but I have been unable to find it.
To kick things off, we need to define the duality functor and show
that it has finite homological dimension.
\begin{defn}
Let $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
We define $\mathbb{D}_{\mathfrak{X}}(\mathcal{M}^{\cdot}):=\omega_{\mathfrak{X}}^{-1}\otimes_{\mathcal{O}_{\mathfrak{X}}}R\underline{\mathcal{H}om}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})[d_{X}]\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
(where we have used the natural right $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}$-module
structure on $R\underline{\mathcal{H}om}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})$).
The same formula defines $\mathbb{D}_{X}$ for a smooth variety $X$
over $k$; and in the analogous way we define the duality functors
for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$ and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
\end{defn}
This is really a duality on the category of coherent modules:
\begin{prop}
Suppose $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$
then $\mathbb{D}_{\mathfrak{X}}(\mathcal{M}^{\cdot})\in D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
Further, the natural transformation $\mathcal{M}^{\cdot}\to\mathbb{D}_{\mathfrak{X}}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}$
is an isomorphism.
The same result holds for a smooth variety $X$ over $k$.
\end{prop}
\begin{proof}
By reduction mod $p$ it suffices to prove the result for $X$. Using
\propref{Sandwich!}, and the fact that $\text{ker}(f:\mathcal{D}_{X}^{(0,1)}\to\mathcal{D}_{X}^{(0,1)})\tilde{=}\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})(1)$
and $\text{ker}(v:\mathcal{D}_{X}^{(0,1)}\to\mathcal{D}_{X}^{(0,1)})\tilde{=}\mathcal{R}(\mathcal{D}_{X}^{(1)})(-1)$
one reduces to proving the analogous result for $\mathcal{R}(\mathcal{D}_{X}^{(1)})$
and $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$. But these algebras
have finite homological dimension, as noted above, and the results
follow at once.
\end{proof}
\subsection{Duality for a smooth proper morphism}
Now we turn to defining the trace morphism, and proving the duality,
for a smooth proper map $\mathfrak{X}\to\mathfrak{Y}$ of relative
dimension $d$. In this case the usual Grothendieck duality theory
gives us a canonical morphism
\[
\text{tr}:R^{d}\varphi_{*}(\omega_{\mathfrak{X}/\mathfrak{Y}})\to\mathcal{O}_{\mathfrak{Y}}
\]
Now consider $\mathcal{O}_{\mathfrak{X}}$ as a module over $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$.
As the pushforward ${\displaystyle \int_{\varphi,0}\mathcal{O}_{\mathfrak{X}}}$
can be computed by the relative de Rham complex, looking at the Hodge-to-de
Rham spectral sequence in degree $2d$ yields an isomorphism of $\mathcal{O}_{\mathfrak{Y}}$-modules
\[
\mathcal{H}^{d}(\int_{\varphi,0}\mathcal{O}_{\mathfrak{X}})\tilde{=}R^{d}\varphi_{*}(\omega_{\mathfrak{X}/\mathfrak{Y}})
\]
so, composing with the trace morphism above, we obtain a map
\[
\text{tr}:\mathcal{H}^{d}(\int_{\varphi,0}\mathcal{O}_{\mathfrak{X}})\to\mathcal{O}_{\mathfrak{Y}}
\]
of $\mathcal{\widehat{D}}_{\mathfrak{Y}}^{(0)}$-modules.
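Concretely, with the shift convention implicit in this identification, the spectral sequence in question has
\[
E_{1}^{i,j}=R^{j}\varphi_{*}\Omega_{\mathfrak{X}/\mathfrak{Y}}^{i}\Rightarrow R^{i+j}\varphi_{*}(\Omega_{\mathfrak{X}/\mathfrak{Y}}^{\cdot})
\]
and in total degree $2d$ the only possibly nonzero term is $E_{1}^{d,d}=R^{d}\varphi_{*}(\omega_{\mathfrak{X}/\mathfrak{Y}})$, since both the form degree and the cohomological degree are bounded above by $d$; as this term can neither receive nor emit a nonzero differential, it survives unchanged to $E_{\infty}$, which is the source of the isomorphism used above.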
Now consider $\varphi:\mathfrak{X}_{n}\to\mathfrak{Y}_{n}$, the reduction
mod $p^{n}$ of $\varphi$ for each $n\geq1$. Repeating the argument,
we can construct
\[
\text{tr}:\mathcal{H}^{d}(\int_{\varphi,0}\mathcal{O}_{\mathfrak{X}_{n}})\to\mathcal{O}_{\mathfrak{Y}_{n}}
\]
and, in fact, the inverse limit of these maps is the trace constructed
above. In this setting, the de Rham complex $\Omega_{\mathfrak{X}_{n}/\mathfrak{Y}_{n}}^{\cdot}$
has the structure of a complex of coherent sheaves over the scheme
$W_{n}(\mathcal{O}_{X^{(n)}})$ (here we are identifying the underlying
topological spaces of $\mathfrak{X}_{n}$ and $W_{n}(\mathcal{O}_{X^{(n)}})$).
Thus we may also consider the second spectral sequence for the pushforward
of this complex, and we obtain an isomorphism
\[
R^{d}\varphi_{*}(\text{coker}(d:\Omega_{\mathfrak{X}_{n}/\mathfrak{Y}_{n}}^{d-1}\to\omega_{\mathfrak{X}_{n}/\mathfrak{Y}_{n}}))\tilde{\to}R^{d}\varphi_{*}(\omega_{\mathfrak{X}_{n}/\mathfrak{Y}_{n}})
\]
or, equivalently,
\[
R^{d}\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0)}}\mathcal{O}_{\mathfrak{X}_{n}})\tilde{\to}R^{d}\varphi_{*}(\omega_{\mathfrak{X}_{n}/\mathfrak{Y}_{n}})
\]
Now we consider the pushforward of $\mathcal{O}_{\mathfrak{X}_{n}}$,
in the category of $\mathcal{D}_{\mathfrak{X}_{n}}^{(1)}$-modules.
By the commutativity of Frobenius descent with push-forward (\cite{key-2},
theoreme 3.4.4), we have
\[
\int_{\varphi,1}\mathcal{O}_{\mathfrak{X}_{n}}\tilde{=}\int_{\varphi,1}F^{*}\mathcal{O}_{\mathfrak{X}_{n}}\tilde{\to}F^{*}\int_{\varphi,0}\mathcal{O}_{\mathfrak{X}_{n}}
\]
Therefore we obtain a trace map
\[
\text{tr}:\mathcal{H}^{d}(\int_{\varphi,1}\mathcal{O}_{\mathfrak{X}_{n}})\to\mathcal{O}_{\mathfrak{Y}_{n}}
\]
in the category of $\mathcal{D}_{\mathfrak{Y}_{n}}^{(1)}$-modules;
and, using the second spectral sequence for the pushforward as above,
we have
\[
R^{d}\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(1)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(1)}}\mathcal{O}_{\mathfrak{X}_{n}})\tilde{\to}\mathcal{H}^{d}(\int_{\varphi,1}\mathcal{O}_{\mathfrak{X}_{n}})
\]
Using these maps, we construct a trace for $\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}$-modules:
\begin{lem}
There is a canonical morphism
\[
\text{tr}:R^{d}\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes{}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}D(\mathcal{O}_{\mathfrak{X}_{n}}))\to D(\mathcal{O}_{\mathfrak{Y}_{n}})
\]
which has the property that the map $\text{tr}^{\infty}:R^{d}\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes{}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}D(\mathcal{O}_{\mathfrak{X}_{n}}))^{\infty}\to D(\mathcal{O}_{\mathfrak{Y}_{n}}){}^{\infty}$
agrees with the trace map for $\mathcal{D}_{\mathfrak{X}_{n}}^{(1)}$-modules
constructed above; and the map $\text{tr}^{-\infty}:R^{d}\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes{}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}D(\mathcal{O}_{\mathfrak{X}_{n}}))^{-\infty}\to D(\mathcal{O}_{\mathfrak{Y}_{n}})^{-\infty}$
agrees with the trace map for $\mathcal{D}_{\mathfrak{X}_{n}}^{(0)}$-modules
constructed above. We have the analogous statement for a proper morphism
$\varphi:X\to Y$, as well as in the categories of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-modules
and $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules.
This map yields a trace map in the derived category:
\[
\text{tr}:\int_{\varphi}D(\mathcal{O}_{\mathfrak{X}_{n}})[d]\to D(\mathcal{O}_{\mathfrak{Y}_{n}})
\]
Upon taking the inverse limit over $n$, we obtain a map
\[
\text{tr}:\int_{\varphi}D(\mathcal{O}_{\mathfrak{X}})[d]\to D(\mathcal{O}_{\mathfrak{Y}})
\]
\end{lem}
\begin{proof}
We begin with the case $n=1$; i.e., $\mathfrak{X}_{n}=X$ and $\mathfrak{Y}_{n}=Y$.
We claim that the $\varphi^{-1}(\mathcal{D}_{Y}^{(0,1)})$-gauge $\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X})$
satisfies the property that $v$ is an isomorphism in degrees $0$
and below and $f$ is an isomorphism in degrees $1$ and above. This
can be checked in local coordinates, where we have the isomorphism
\[
\mathcal{D}_{Y\leftarrow X}^{(0,1)}=\mathcal{J}\backslash\mathcal{D}_{X}^{(0,1)}
\]
where $\mathcal{J}$ is the right ideal generated by $\{\partial_{n-d+1},\dots,\partial_{n},\partial_{n-d+1}^{[p]},\dots,\partial_{n}^{[p]}\}$.
In degrees below $0$, the elements $\{\partial_{n-d+1}^{[p]},\dots,\partial_{n}^{[p]}\}$
act trivially; so that
\[
(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))^{i}=\mathcal{O}_{X}/(\partial_{n-d+1},\dots,\partial_{n})
\]
for all $i\leq0$. On the other hand we have
\[
(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))^{i}=\mathcal{O}_{X}/(\partial_{n-d+1},\dots,\partial_{n},\partial_{n-d+1}^{[p]},\dots,\partial_{n}^{[p]})
\]
for $i>0$; and the claim about $f$ and $v$ follows immediately.
As the functor $R^{d}\varphi_{*}$ commutes with direct sums, we see
that the gauge
\[
R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))
\]
has the same property: $v$ is an isomorphism in degrees $0$ and
below and $f$ is an isomorphism in degrees $1$ and above. Thus we
may define
\[
\text{tr}:R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))^{i}\to\mathcal{O}_{Y}
\]
for any $i$ as follows: if $i\leq0$ we have $v_{-\infty}:R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))^{i}\tilde{=}R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0)}\otimes_{\mathcal{D}_{X}^{(0)}}\mathcal{O}_{X})$
and so we define the trace as the composition $\text{tr}\circ v_{-\infty}$,
where here $\text{tr}$ denotes the trace for $\mathcal{D}_{X}^{(0)}$-modules
constructed above. If $i>0$ we have $f_{\infty}:R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(0,1)}\otimes{}_{\mathcal{D}_{X}^{(0,1)}}D(\mathcal{O}_{X}))^{i}\tilde{=}R^{d}\varphi_{*}(\mathcal{D}_{Y\leftarrow X}^{(1)}\otimes_{\mathcal{D}_{X}^{(1)}}\mathcal{O}_{X})$
and so we define the trace as the composition $\text{tr}\circ f_{\infty}$,
where here $\text{tr}$ denotes the trace for $\mathcal{D}_{X}^{(1)}$-modules
constructed above. In a similar way, we construct the trace map in
the categories of $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$-modules
and $\mathcal{R}(\mathcal{D}_{X}^{(1)})$-modules.
Now we consider $\mathfrak{X}_{n}$ for $n>1$. Since the functor
$\varphi_{*}$ has homological dimension $d$ (on the category of
quasicoherent sheaves), we have that $(R^{d}\varphi_{*}\mathcal{F})\otimes_{W(k)}k\tilde{=}R^{d}\varphi_{*}(\mathcal{F}\otimes_{W(k)}^{L}k)$
for any $\mathcal{F}\in\text{Qcoh}(\mathfrak{X}_{n})$. So, by Nakayama's
lemma and the result of the previous paragraph, we see that $f$ is
onto in degrees $1$ and above while $v$ is onto in degrees $0$
and below; by the coherence of the sheaves involved we see that these
maps are isomorphisms for $i\ll0$ and $i\gg0$. Since the target of
the trace map, $D(\mathcal{O}_{\mathfrak{Y}_{n}})$, has the property
that $v$ is an isomorphism in degrees $0$ and below and $f$ is
an isomorphism in degrees $1$ and above, we may define the trace
map in the exact same way as above.
\end{proof}
\begin{rem}
\label{rem:trace-and-compose}If $\varphi:\mathfrak{X}\to\mathfrak{Y}$
and $\psi:\mathfrak{Y}\to\mathfrak{Z}$, then the trace map for the
composition satisfies ${\displaystyle \text{tr}_{\psi\circ\varphi}=\text{tr}_{\psi}\circ\int_{\psi}\text{tr}_{\varphi}}$.
This follows from the analogous result for the trace map in coherent
sheaf theory.
\end{rem}
Now, following the usual method of algebraic $\mathcal{D}$-module
theory (c.f. \cite{key-49}, theorem 2.7.2), we have
\begin{prop}
There is a canonical morphism
\[
\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}\to\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}\mathcal{M}^{\cdot}
\]
for any $\mathcal{M}^{\cdot}\in D_{cc}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
The same holds for $\mathcal{M}^{\cdot}\in D(\mathcal{G}(\mathcal{D}_{X}^{(0,1)}))$
when we have a proper map $\varphi:X\to Y$. Further, these maps are
compatible under application of $\otimes_{W(k)}^{L}k$.
\end{prop}
\begin{proof}
We have
\[
\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}=R\varphi_{*}(R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)})\widehat{\otimes}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{Y}}}^{L}\omega_{\mathfrak{Y}}^{-1}[d_{X}]
\]
\[
=R\varphi_{*}(R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}))\widehat{\otimes}_{\mathcal{O}_{\mathfrak{Y}}}^{L}\omega_{\mathfrak{Y}}^{-1}[d_{X}]
\]
while
\[
\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}\mathcal{M}^{\cdot}=R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})\widehat{\otimes}_{\mathcal{O}_{\mathfrak{Y}}}^{L}\omega_{\mathfrak{Y}}^{-1}[d_{Y}]
\]
To construct a canonical map between these complexes, we begin by
considering ${\displaystyle \int_{\varphi}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}}$.
By $\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}=L\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}$,
we may apply \thmref{Projection-Formula} to obtain
\[
\int_{\varphi}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}=\int_{\varphi}L\varphi^{*}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}\tilde{\to}(\int_{\varphi}D(\mathcal{O}_{\mathfrak{X}}))\widehat{\otimes}_{\mathcal{O}_{\mathfrak{Y}}[f,v]}^{L}\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}
\]
so applying the trace map yields a canonical morphism
\[
\int_{\varphi}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}[d]\to\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}
\]
and since $d=d_{X}-d_{Y}$ we have
\[
\int_{\varphi}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}[d_{X}]\to\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}[d_{Y}]
\]
Then we have
\[
R\varphi_{*}(R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}))[d_{X}]
\]
\[
\to R\varphi_{*}(R\underline{\mathcal{H}om}_{\varphi^{-1}(\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)})}(\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}))[d_{X}]
\]
\[
\to R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(R\varphi_{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}),R\varphi_{*}(\widehat{\mathcal{D}}_{\mathfrak{Y}\leftarrow\mathfrak{X}}^{(0,1)}\otimes_{\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}}^{L}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}))[d_{X}]
\]
\[
=R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\int_{\varphi}\widehat{\mathcal{D}}_{\mathfrak{X}\to\mathfrak{Y}}^{(0,1)}[d_{X}])\to R\underline{\mathcal{H}om}_{\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0,1)}[d_{Y}])
\]
where the last map is the trace. Combining with the above description
yields the canonical map
\[
\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}\to\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}\mathcal{M}^{\cdot}
\]
as desired; the case of a proper map $\varphi:X\to Y$ is identical.
\end{proof}
Now we turn to
\begin{thm}
\label{thm:Duality-for-smooth-proper}The canonical map $\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}\to\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}\mathcal{M}^{\cdot}$
is an isomorphism for $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0,1)}))$.
The same is true for a proper map $\varphi:X\to Y$.
\end{thm}
The proof of this result will make use of several auxiliary results.
First, we recall a basic computation for pushforwards of $\mathcal{R}(\mathcal{D}_{X}^{(0)})$-modules;
as in the previous section we have the diagram
\[
T^{*}X\xleftarrow{d\varphi}X\times_{Y}T^{*}Y\xrightarrow{\pi}T^{*}Y
\]
and the result reads
\begin{lem}
Let $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})))$.
There is an isomorphism
\[
(\int_{\varphi}\mathcal{M}^{\cdot})\otimes_{k[f]}^{L}k\tilde{=}R\pi_{*}((d\varphi)^{!}(\mathcal{M}^{\cdot}\otimes_{k[f]}^{L}k))
\]
inside $D_{coh}^{b}(T^{*}Y)$; in this formula $d\varphi^{!}$ is
the extraordinary inverse image in coherent sheaf theory.
\end{lem}
This is a result of Laumon (c.f. \cite{key-19}, construction 5.6.1).
For a proof in the Rees algebra language, see \cite{key-70}, corollary
3.9.
Next, we need the following Grothendieck duality statement for Azumaya
algebras:
\begin{lem}
\label{lem:GD-for-Az}Let $X$ and $Y$ be smooth varieties over $k$
and let $\pi:X\to Y$ be a smooth proper morphism, of relative dimension
$d$. Suppose that $\mathcal{A}_{Y}$ is an Azumaya algebra on $Y$.
Set $\mathcal{A}_{X}=\pi^{*}\mathcal{A}_{Y}$, an Azumaya algebra
on $X$. Then there is a trace map $R^{d}\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})\to\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y}$
which induces, for any $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{A}_{X}-\text{mod})$
a functorial isomorphism
\[
R\pi_{*}R\mathcal{H}om_{\mathcal{A}_{X}}(\mathcal{M}^{\cdot},\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\tilde{\to}R\mathcal{H}om_{\mathcal{A}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y})
\]
inside $D_{coh}^{b}(\mathcal{O}_{Y}-\text{mod})$.
\end{lem}
\begin{proof}
Via the projection formula we have
\[
R\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})=R\pi_{*}(\pi^{*}\mathcal{A}_{Y}\otimes_{\mathcal{O}_{X}}\omega_{X})\tilde{\to}\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}^{L}R\pi_{*}(\omega_{X})
\]
so the usual trace $\text{tr}:R^{d}\pi_{*}(\omega_{X})\to\omega_{Y}$
induces a trace $\text{tr}:R^{d}\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})\to\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y}$.
Since $\pi$ has homological dimension $d$, we have $R\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\to R^{d}\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})$
so that there is a map
\[
R\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\to\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y}
\]
Thus we obtain
\[
R\pi_{*}R\mathcal{H}om_{\mathcal{A}_{X}}(\mathcal{M}^{\cdot},\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\to R\mathcal{H}om_{\mathcal{A}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},R\pi_{*}(\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d])
\]
\[
\to R\mathcal{H}om_{\mathcal{A}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y})
\]
for any $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{A}_{X}-\text{mod})$.
To prove that this map is an isomorphism, we can work in the etale
(or flat) topology on $Y$ and so assume that $\mathcal{A}_{Y}$ is
split; i.e., $\mathcal{A}_{Y}=\mathcal{E}nd(\mathcal{E}_{Y})$ for
some vector bundle $\mathcal{E}_{Y}$. This implies $\mathcal{A}_{X}=\mathcal{E}nd(\mathcal{E}_{X})$
where $\mathcal{E}_{X}=\pi^{*}\mathcal{E}_{Y}$. Then for any $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{A}_{X}-\text{mod})$
we have $\mathcal{M}^{\cdot}=\mathcal{E}_{X}\otimes_{\mathcal{O}_{X}}\mathcal{N}^{\cdot}$
for a complex $\mathcal{N}^{\cdot}\in D_{coh}^{b}(\mathcal{O}_{X}-\text{mod})$.
Therefore
\[
R\pi_{*}R\mathcal{H}om_{\mathcal{A}_{X}}(\mathcal{M}^{\cdot},\mathcal{A}_{X}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\tilde{=}R\pi_{*}R\mathcal{H}om_{\mathcal{A}_{X}}(\mathcal{E}_{X}\otimes_{\mathcal{O}_{X}}\mathcal{N}^{\cdot},\mathcal{E}_{X}\otimes_{\mathcal{O}_{X}}(\mathcal{E}_{X}^{*}\otimes_{\mathcal{O}_{X}}\omega_{X}))[d]
\]
\[
\tilde{=}R\pi_{*}R\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{N}^{\cdot},\mathcal{E}_{X}^{*}\otimes_{\mathcal{O}_{X}}\omega_{X})[d]\tilde{=}R\pi_{*}R\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{M}^{\cdot},\omega_{X}[d])
\]
\[
\tilde{\to}R\mathcal{H}om_{\mathcal{O}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},\omega_{Y})\tilde{=}R\mathcal{H}om_{\mathcal{A}_{Y}}(\mathcal{E}_{Y}\otimes_{\mathcal{O}_{Y}}R\pi_{*}\mathcal{M}^{\cdot},\mathcal{E}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y})
\]
\[
\tilde{\to}R\mathcal{H}om_{\mathcal{A}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},\mathcal{A}_{Y}\otimes_{\mathcal{O}_{Y}}\omega_{Y})
\]
where the isomorphism $R\pi_{*}R\mathcal{H}om_{\mathcal{O}_{X}}(\mathcal{M}^{\cdot},\omega_{X}[d])\tilde{\to}R\mathcal{H}om_{\mathcal{O}_{Y}}(R\pi_{*}\mathcal{M}^{\cdot},\omega_{Y})$
is Grothendieck duality for coherent sheaves.
\end{proof}
Now we can proceed to the
\begin{proof}
(of \thmref{Duality-for-smooth-proper}) By applying $\otimes_{W(k)}^{L}k$
we reduce to the characteristic $p$ situation of a smooth proper
morphism $\varphi:X\to Y$. By induction on the cohomological length,
we may suppose that $\mathcal{M}^{\cdot}$ is concentrated in a single
degree; i.e., $\mathcal{M}^{\cdot}=\mathcal{M}\in\mathcal{G}(\mathcal{D}_{X}^{(0,1)})$.
Then $\mathcal{M}$ admits a short exact sequence
\[
\mathcal{M}_{0}\to\mathcal{M}\to\mathcal{M}_{1}
\]
where $\mathcal{M}_{0}\in\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))$
and $\mathcal{M}_{1}\in\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)}))$.
By \propref{Sandwich!} and \propref{Sandwich-push}, we see that
it suffices to prove the analogous statements in $D_{coh}^{b}(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$
and $D_{coh}^{b}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(1)})))$.
By Frobenius descent (\thmref{Hodge-Filtered-Push}), one sees that
it suffices to prove the result for $D_{coh}^{b}(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$
and $D_{coh}^{b}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})))$.
These two cases require similar, but slightly different techniques;
we begin with the case of $\mathcal{R}(\mathcal{D}_{X}^{(0)})$. In
this case, since the grading on $\mathcal{R}(\mathcal{D}_{X}^{(0)})$
is concentrated in degrees $\geq0$, the graded Nakayama lemma applies
and so it suffices to prove that the map is an isomorphism after applying
$\otimes_{k[f]}^{L}k$; i.e., we have to prove
\[
R\pi_{*}(d\varphi)^{!}R\mathcal{H}om_{\mathcal{O}_{T^{*}X}}((\mathcal{M}\otimes_{k[f]}^{L}k),\omega_{T^{*}X})[d]
\]
\[
\tilde{\to}R\mathcal{H}om_{\mathcal{O}_{T^{*}Y}}(R\pi_{*}((d\varphi)^{!}(\mathcal{M}\otimes_{k[f]}^{L}k)),\omega_{T^{*}Y})
\]
Since $d\varphi$ is a closed immersion of smooth schemes, we have
\[
(d\varphi)^{!}R\mathcal{H}om_{\mathcal{O}_{T^{*}X}}((\mathcal{M}\otimes_{k[f]}^{L}k),\omega_{T^{*}X})[d]
\]
\[
\tilde{=}R\mathcal{H}om_{\mathcal{O}_{X\times_{Y}T^{*}Y}}((d\varphi)^{!}(\mathcal{M}\otimes_{k[f]}^{L}k),(d\varphi)^{!}\omega_{T^{*}X})[d]
\]
Furthermore, $(d\varphi)^{!}\omega_{T^{*}X}=\omega_{X\times_{Y}T^{*}Y}\tilde{=}\pi^{!}(\omega_{T^{*}Y})$.
Therefore
\[
R\pi_{*}R\mathcal{H}om_{\mathcal{O}_{X\times_{Y}T^{*}Y}}((d\varphi)^{!}(\mathcal{M}\otimes_{k[f]}^{L}k),(d\varphi)^{!}\omega_{T^{*}X})[d]
\]
\[
\tilde{=}R\pi_{*}R\mathcal{H}om_{\mathcal{O}_{X\times_{Y}T^{*}Y}}((d\varphi)^{!}(\mathcal{M}\otimes_{k[f]}^{L}k),\pi^{!}\omega_{T^{*}Y}[d])
\]
\[
\tilde{\to}R\mathcal{H}om_{\mathcal{O}_{T^{*}Y}}(R\pi_{*}((d\varphi)^{!}(\mathcal{M}\otimes_{k[f]}^{L}k)),\omega_{T^{*}Y})
\]
where the last isomorphism is induced by the trace for $X\times_{Y}T^{*}Y\xrightarrow{\pi}T^{*}Y$,
i.e., it is given by Grothendieck duality for $\pi$; this proves
the result for $D_{coh}^{b}(\mathcal{G}(\mathcal{R}(\mathcal{D}_{X}^{(0)})))$.
In order to handle $D_{coh}^{b}(\mathcal{G}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})))$,
we apply a similar technique, but working directly\footnote{As the grading on $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$
is unbounded, the graded Nakayama lemma does not apply.} with the Azumaya algebra $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
Since the morphism $\varphi$ is smooth, we can make use of \corref{Filtered-Bez-Brav}
and work with the functor $R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}$.
We therefore have to prove
\[
R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}R\mathcal{H}om_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{M}^{\cdot},\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))\otimes_{\mathcal{O}_{X}}\omega_{X}^{-1}[d]
\]
\[
\tilde{\to}R\mathcal{H}om_{\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})}(R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}\mathcal{M}^{\cdot},\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}))\otimes_{\mathcal{O}_{Y}}\omega_{Y}^{-1}
\]
We proceed as above. We have an isomorphism
\[
C\circ(d\varphi^{(1)})^{!}R\mathcal{H}om_{\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})}(\mathcal{M}^{\cdot},\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)}))\otimes_{\mathcal{O}_{X}}\omega_{X}^{-1}
\]
\[
\tilde{=}R\mathcal{H}om_{(\pi^{(1)})^{*}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}))}(C\circ(d\varphi^{(1)})^{!}\mathcal{M}^{\cdot},C\circ(d\varphi^{(1)})^{!}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X}}\omega_{X}^{-1}))
\]
Applying the definition of $(d\varphi^{(1)})^{!}$ and $C$, one deduces
\[
C\circ(d\varphi^{(1)})^{!}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X}}\omega_{X}^{-1})\tilde{=}(\pi^{(1)})^{*}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})\otimes_{\mathcal{O}_{Y}}\omega_{Y}^{-1})\otimes_{\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}}\omega_{(X\times_{Y}T^{*}Y)^{(1)}}
\]
Therefore
\[
R\mathcal{H}om_{\pi^{*}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}))}(C\circ(d\varphi^{(1)})^{!}\mathcal{M}^{\cdot},C\circ(d\varphi^{(1)})^{!}(\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})\otimes_{\mathcal{O}_{X}}\omega_{X}^{-1}))[d]
\]
\[
\tilde{\to}R\mathcal{H}om_{(\pi^{(1)})^{*}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)}))}(C\circ(d\varphi^{(1)})^{!}\mathcal{M}^{\cdot},(\pi^{(1)})^{*}(\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})\otimes_{\mathcal{O}_{Y}}\omega_{Y}^{-1})\otimes_{\mathcal{O}_{(X\times_{Y}T^{*}Y)^{(1)}}}\omega_{(X\times_{Y}T^{*}Y)^{(1)}})
\]
\[
\tilde{\to}R\mathcal{H}om_{\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})}(R\pi_{*}^{(1)}\circ C\circ(d\varphi^{(1)})^{!}\mathcal{M}^{\cdot},\overline{\mathcal{R}}(\mathcal{D}_{Y}^{(0)})\otimes_{\mathcal{O}_{Y}}\omega_{Y}^{-1})
\]
where the last isomorphism follows from \lemref{GD-for-Az}; this
proves the result for $\overline{\mathcal{R}}(\mathcal{D}_{X}^{(0)})$.
\end{proof}
This implies, by an argument identical to that of theorem 2.7.3 of \cite{key-49}:
\begin{cor}
\label{cor:Smooth-proper-adunction}There is a functorial isomorphism
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\tilde{\to}\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}^{\cdot},\varphi^{\dagger}\mathcal{N}^{\cdot})
\]
for all $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)}))$.
\end{cor}
\subsection{Duality for a Projective morphism}
Now we turn to constructing the trace map in the case where $\varphi:\mathfrak{X}\to\mathfrak{Y}$
is a closed embedding, of relative dimension $d$. In this case the
pushforward is fairly easy to describe:
\begin{lem}
\label{lem:transfer-is-locally-free}Let $\varphi:\mathfrak{X}_{n}\to\mathfrak{Y}_{n}$
be the reduction to $W_{n}(k)$ of the closed embedding $\varphi$.
Then the transfer bimodule $\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)}$
is locally free over $\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}$ and
is coherent over $\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1),\text{opp}}$.
Thus the functor $\int_{\varphi}^{0}:\mathcal{G}_{coh}(\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)})\to\mathcal{G}_{coh}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})$
is exact.
\end{lem}
\begin{proof}
Working locally, we can assume that $\mathfrak{X}_{n}=\text{Spec}(B_{n})$,
$\mathfrak{Y}_{n}=\text{Spec}(A_{n})$, and $A_{n}$ admits local
coordinates $\{x_{1},\dots,x_{n}\}$ for which $B_{n}=A_{n}/(x_{1},\dots,x_{m})$.
Then
\[
\Gamma(\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)})=\mathcal{D}_{A_{n}}^{(0,1)}/I\cdot\mathcal{D}_{A_{n}}^{(0,1)}
\]
where $I=(x_{1},\dots,x_{m})$; this module is visibly coherent over $\mathcal{D}_{A_{n}}^{(0,1),\text{opp}}$. Now, \corref{Local-coords-over-A=00005Bf,v=00005D}
implies that $\mathcal{D}_{A_{n}}^{(0,1)}$ is free over $D(A_{n})$
(c.f. also the proof of \corref{Each-D^(i)-is-free}), with basis
given by the set $\{\partial^{I}(\partial^{[p]})^{J}\}$, where $I=(i_{1},\dots,i_{n})$
is a multi-index with $0\leq i_{j}\leq p-1$ for all $j$ and $J$
is any multi-index with entries $\geq0$. So $\mathcal{D}_{A_{n}}^{(0,1)}/I\cdot\mathcal{D}_{A_{n}}^{(0,1)}$
is free over $\mathcal{D}_{B_{n}}^{(0,1)}$ with basis given by $\{\partial^{I}(\partial^{[p]})^{J}\}$,
where $I=(i_{1},\dots,i_{m})$ is a multi-index with $0\leq i_{j}\leq p-1$
for all $j$ and $J=(j_{1},\dots,j_{m})$ is any multi-index with
entries $\geq0$.
\end{proof}
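To fix ideas, here is the basis from the proof written out in the simplest
case (nothing beyond the statement above is being claimed): when $m=1$,
so that $B_{n}=A_{n}/(x_{1})$, the module $\mathcal{D}_{A_{n}}^{(0,1)}/I\cdot\mathcal{D}_{A_{n}}^{(0,1)}$
is free over $\mathcal{D}_{B_{n}}^{(0,1)}$ on the countable basis
\[
\{\partial_{1}^{i}(\partial_{1}^{[p]})^{j}\,|\,0\leq i\leq p-1,\,j\geq0\}.
\]
In particular the transfer bimodule is typically of infinite rank over
$\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}$, but it is nonetheless locally
free, which is all that the exactness of ${\displaystyle \int_{\varphi}^{0}}$
requires.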
Now we can proceed to analyze this functor, and the pullback $\varphi^{\dagger}$,
in exactly the same way as is done in the usual algebraic $\mathcal{D}$-module
theory. In this case, the existence of the trace map is essentially
deduced from the duality. To start off, we have
\begin{prop}
Let $\varphi:\mathfrak{X}_{n}\to\mathfrak{Y}_{n}$ be as above. Define
$\varphi^{\sharp}(\mathcal{M}^{\cdot}):=R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)},\varphi^{-1}(\mathcal{M}^{\cdot}))$.
Then there is an isomorphism of functors $\varphi^{\dagger}\tilde{=}\varphi^{\sharp}$
on $D(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))$.
\end{prop}
\begin{proof}
This is very similar to \cite{key-49}, propositions 1.5.14 and 1.5.16.
One first shows
\[
R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1),\text{opp}})}(\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)},\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))\tilde{=}\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}[-d]
\]
by using the Koszul complex to write a locally free resolution for
$\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)}$ over
$\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1),\text{opp}})$;
note that by the left-right interchange this implies
\[
R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)},\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))\tilde{=}\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)}[-d]
\]
Then, we have
\[
\varphi^{\dagger}(\mathcal{M}^{\cdot})=\mathcal{D}_{\mathfrak{X}_{n}\to\mathfrak{Y}_{n}}^{(0,1)}\otimes_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})[-d]
\]
\[
\tilde{=}R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)},\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))\otimes_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}^{L}\varphi^{-1}(\mathcal{M}^{\cdot})
\]
\[
\tilde{\to}R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)},\varphi^{-1}(\mathcal{M}^{\cdot}))
\]
where the last isomorphism uses the fact that $\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}$
admits, locally, a finite free resolution over $\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})$.
\end{proof}
In turn, this implies
\begin{cor}
We have a functorial isomorphism
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\tilde{\to}\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}(\mathcal{M}^{\cdot},\varphi^{\dagger}\mathcal{N}^{\cdot})
\]
for all $\mathcal{M}^{\cdot}\in D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))$.
\end{cor}
\begin{proof}
(Just as in \cite{key-49}, proposition 1.5.25). By the previous proposition,
it suffices to prove the result for $\varphi^{\sharp}$ instead of
$\varphi^{\dagger}$. To proceed, note that we have the local cohomology
functor $\mathcal{N}^{\cdot}\to R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot})$
which takes $\mathcal{N}^{\cdot}\in D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))$
to $D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))$.
We have
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})=R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}}(\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}),\mathcal{N}^{\cdot})
\]
\[
\tilde{=}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}}(\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}^{L}\mathcal{M}^{\cdot}),R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot}))
\]
\[
\tilde{=}\varphi_{*}(R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\varphi^{-1}(\varphi_{*}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}^{L}\mathcal{M}^{\cdot})),\varphi^{-1}(R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot}))))
\]
\[
\tilde{=}\varphi_{*}(R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)}\otimes_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}^{L}\mathcal{M}^{\cdot},\varphi^{-1}(R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot}))))
\]
\[
\tilde{=}\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}(\mathcal{M}^{\cdot},R\underline{\mathcal{H}om}_{\varphi^{-1}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)})}(\mathcal{D}_{\mathfrak{Y}_{n}\leftarrow\mathfrak{X}_{n}}^{(0,1)},\varphi^{-1}(R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot}))))
\]
\[
\tilde{=}\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{X}_{n}}^{(0,1)}}(\mathcal{M}^{\cdot},\varphi^{\sharp}\mathcal{N}^{\cdot})
\]
where, in both the second isomorphism and the last, we have used the
existence of an exact triangle
\[
R\Gamma_{\mathfrak{X}_{n}}(\mathcal{N}^{\cdot})\to\mathcal{N}^{\cdot}\to\mathcal{K}^{\cdot}
\]
where $\mathcal{K}^{\cdot}\in D_{qcoh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}_{n}}^{(0,1)}))$
is isomorphic to $Rj_{*}(\mathcal{N}^{\cdot}|_{\mathfrak{Y}_{n}\backslash\mathfrak{X}_{n}})$;
here $j:\mathfrak{Y}_{n}\backslash\mathfrak{X}_{n}\to\mathfrak{Y}_{n}$
is the inclusion. In particular, we have that $R\underline{\mathcal{H}om}(\mathcal{C}^{\cdot},\mathcal{K}^{\cdot})=0$
for any $\mathcal{C}^{\cdot}$ supported along $\mathfrak{X}_{n}$.
\end{proof}
\begin{cor}
There is a canonical map
\[
\text{tr}:\int_{\varphi}\mathcal{O}_{\mathfrak{X}_{n}}[d_{X}]\to\mathcal{O}_{\mathfrak{Y}_{n}}[d_{Y}]
\]
which, when $\psi:\mathfrak{Y}\to\mathfrak{Z}$ is a smooth morphism, is compatible
with the trace map for $\psi$ constructed above. Taking the inverse limit
of these maps, we obtain a trace map
\[
\text{tr}:\int_{\varphi}\mathcal{O}_{\mathfrak{X}}[d_{X}]\to\mathcal{O}_{\mathfrak{Y}}[d_{Y}]
\]
and the same compatibility holds for this trace as well.
\end{cor}
\begin{proof}
The previous corollary gives an adjunction ${\displaystyle \int_{\varphi}\varphi^{\dagger}\to\text{Id}}$.
Since $\varphi^{\dagger}(\mathcal{O}_{\mathfrak{Y}_{n}})=\mathcal{O}_{\mathfrak{X}_{n}}[d_{X}-d_{Y}]$
we obtain the trace map via this adjunction.
\end{proof}
Now, by factoring an arbitrary projective morphism as a closed immersion
followed by a smooth projective map, we obtain by composing the trace
maps a trace map for an arbitrary projective morphism. Arguing as
in the classical case (c.f. \cite{key-54}, section 2.10), we see
that this map is independent of the choice of the factorization. Therefore
we obtain
\begin{thm}
Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$ be a projective morphism.
Then we have a functorial morphism
\[
\int_{\varphi}\mathbb{D}_{\mathfrak{X}}\mathcal{M}^{\cdot}\to\mathbb{D}_{\mathfrak{Y}}\int_{\varphi}\mathcal{M}^{\cdot}
\]
which is an isomorphism for $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$.
Further, we have a functorial isomorphism
\[
R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{Y}}^{(0,1)}}(\int_{\varphi}\mathcal{M}^{\cdot},\mathcal{N}^{\cdot})\tilde{\to}\varphi_{*}R\underline{\mathcal{H}om}_{\mathcal{D}_{\mathfrak{X}}^{(0,1)}}(\mathcal{M}^{\cdot},\varphi^{\dagger}\mathcal{N}^{\cdot})
\]
for all $\mathcal{M}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{X}}^{(0,1)}))$
and $\mathcal{N}^{\cdot}\in D_{coh}^{b}(\mathcal{G}(\mathcal{D}_{\mathfrak{Y}}^{(0,1)}))$.
\end{thm}
\section{Applications}
In this section we put things together and give the statement and
proof of our generalization of Mazur's theorem for a mixed Hodge module.
We begin with a brief review of the pushforward operation in the world
of mixed Hodge modules.
Let $X_{\mathbb{C}}$ be a smooth complex variety, and suppose that
$(\mathcal{M}_{\mathbb{C}},F^{\cdot},\mathcal{K}_{\mathbb{Q}},W_{\cdot})$
is a mixed Hodge module on $X_{\mathbb{C}}$. We won't attempt to
recall a complete definition here, instead referring the reader to \cite{key-15}, \cite{key-16},
and the excellent survey \cite{key-56}. We will only recall that
$\mathcal{M}_{\mathbb{C}}$ is a coherent $\mathcal{D}$-module which
comes equipped with a good filtration $F^{\cdot}$ and a weight filtration
$W_{\cdot}$, while $\mathcal{K}_{\mathbb{Q}}$ is a perverse sheaf
defined over $\mathbb{Q}$ which corresponds to $\mathcal{M}_{\mathbb{C}}$
under the Riemann-Hilbert correspondence. In this paper, our attention
is on the filtration $F^{\cdot}$ and we will mostly suppress the
other aspects of the theory. For the sake of notational convenience,
we will denote simply by $\mathcal{O}_{X_{\mathbb{C}}}$ the mixed
Hodge module whose underlying filtered $\mathcal{D}$-module is $\mathcal{O}_{X_{\mathbb{C}}}$
with its trivial filtration: $F^{i}(\mathcal{O}_{X_{\mathbb{C}}})=\mathcal{O}_{X_{\mathbb{C}}}$
for all $i\geq0$, while $F^{i}(\mathcal{O}_{X_{\mathbb{C}}})=0$
for $i<0$.
Now let $\varphi:X_{\mathbb{C}}\to Y_{\mathbb{C}}$ be a morphism
of smooth complex varieties. By Nagata's compactification theorem,
combined with Hironaka's resolution of singularities, we can find
an open immersion $j:X_{\mathbb{C}}\to\overline{X}_{\mathbb{C}}$
into a smooth variety whose complement is a normal crossings divisor,
and a proper morphism $\overline{\varphi}:\overline{X}_{\mathbb{C}}\to Y_{\mathbb{C}}$,
with $\varphi=\overline{\varphi}\circ j$.
Then, the following is one of the main results of \cite{key-16} (c.f.
theorem 4.3 and theorem 2.14)
\begin{thm}
Let $\varphi,\overline{\varphi},j$ be morphisms as above.
1) There is a mixed Hodge module $(j_{\star}(\mathcal{M}_{\mathbb{C}}),F^{\cdot},j_{*}\mathcal{K}_{\mathbb{Q}},W_{\cdot})$,
whose underlying $\mathcal{D}$-module agrees with the usual pushforward
of $\mathcal{D}$-modules under $j$. This defines an exact functor
$j_{\star}:\text{MHM}(X_{\mathbb{C}})\to\text{MHM}(\overline{X}_{\mathbb{C}})$.
2) There is an object of $D^{b}(\text{MHM}(Y_{\mathbb{C}}))$, $R\overline{\varphi}_{\star}(j_{\star}(\mathcal{M}_{\mathbb{C}}),F^{\cdot},j_{*}\mathcal{K}_{\mathbb{Q}},W_{\cdot})$,
whose underlying complex of filtered $\mathcal{D}$-modules agrees
with ${\displaystyle \int_{\overline{\varphi}}(j_{\star}\mathcal{M}_{\mathbb{C}})}$.
This object of $D^{b}(\text{MHM}(Y_{\mathbb{C}}))$ is, up to isomorphism,
independent of the choice of factorization $\varphi=\overline{\varphi}\circ j$.
Furthermore, the filtration on this complex is strict.
\end{thm}
The reason for stating the theorem this way is that, if $\varphi$
is not proper, the filtered pushforward ${\displaystyle \int_{\varphi}}$
of filtered $\mathcal{D}$-modules does not agree with the pushforward
of mixed Hodge modules. The issue appears already if $Y_{\mathbb{C}}$
is a point and $\mathcal{M}_{\mathbb{C}}=\mathcal{O}_{X_{\mathbb{C}}}$.
In that case, the pushforward $R\varphi_{\star}$ returns\footnote{up to a homological shift, and a re-indexing of the Hodge filtration}
Deligne's Hodge cohomology of $X_{\mathbb{C}}$, while ${\displaystyle {\displaystyle \int_{\varphi}}}$
returns the de Rham cohomology of $X_{\mathbb{C}}$ equipped with
the naive Hodge-to-de Rham filtration; these disagree, e.g., if $X_{\mathbb{C}}$
is affine.
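To see the discrepancy in a concrete case (a standard example, included
only for orientation), let $X_{\mathbb{C}}=E_{\mathbb{C}}\backslash\{0\}$
be a once-punctured elliptic curve. The Gysin sequence identifies $H^{1}(X_{\mathbb{C}},\mathbb{Q})$
with $H^{1}(E_{\mathbb{C}},\mathbb{Q})$, a pure Hodge structure of weight
one, so that
\[
\dim_{\mathbb{C}}F^{1}H_{dR}^{1}(X_{\mathbb{C}})=1
\]
for Deligne's filtration; on the other hand $X_{\mathbb{C}}$ is affine,
so every class in $H_{dR}^{1}(X_{\mathbb{C}})$ is represented by a global
algebraic $1$-form, and the naive Hodge-to-de Rham filtration places the
entire two-dimensional space $H_{dR}^{1}(X_{\mathbb{C}})$ in filtration
degree one.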
The construction of the extension $j_{\star}(\mathcal{M}_{\mathbb{C}})$
is, in general, quite deep, and relies on the detailed study of the
degenerations of Hodge structures given in \cite{key-60} and \cite{key-61}.
However, when $\mathcal{M}_{\mathbb{C}}=\mathcal{O}_{X_{\mathbb{C}}}$
is the trivial mixed Hodge module, one can be quite explicit:
\begin{lem}
\label{lem:Hodge-filt-on-j_push}Let $j:X_{\mathbb{C}}\to\overline{X}_{\mathbb{C}}$
be an open immersion of smooth varieties, whose complement is a normal
crossings divisor $D_{\mathbb{C}}$. Let $x\in\overline{X}_{\mathbb{C}}$ be
a point, about which $D_{\mathbb{C}}$ is given by the equation $\{x_{1}\cdots x_{j}=0\}$.
Then as filtered $\mathcal{D}$-modules we have $j_{\star}\mathcal{O}_{X_{\mathbb{C}}}=(j_{*}(\mathcal{O}_{X_{\mathbb{C}}}),F^{\cdot})$
where $F^{l}(j_{*}(\mathcal{O}_{X_{\mathbb{C}}})):=F^{l}(\mathcal{D}_{\overline{X}_{\mathbb{C}}})\cdot(x_{1}\cdots x_{j})^{-1}$.
In particular, $F^{l}(j_{*}(\mathcal{O}_{X_{\mathbb{C}}}))$ is spanned
over $\mathcal{O}_{\overline{X}_{\mathbb{C}}}$ by terms of the form $x_{1}^{-(i_{1}+1)}\cdots x_{j}^{-(i_{j}+1)}$
where ${\displaystyle \sum_{t=1}^{j}i_{t}\leq l}$.
\end{lem}
For a proof, see \cite{key-6}, section 8. This implies that the Hodge
cohomology of $X_{\mathbb{C}}$, as an object in the filtered derived
category of vector spaces, can be computed as ${\displaystyle \int_{\overline{\varphi}}j_{\star}\mathcal{O}_{X_{\mathbb{C}}}(d)[d]}$
where $\overline{\varphi}:\overline{X}_{\mathbb{C}}\to\{*\}$. Of
course, this can be checked directly by comparing the log de Rham
complex with the de Rham complex of the filtered $\mathcal{D}$-module
$j_{\star}\mathcal{O}_{X_{\mathbb{C}}}$.
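For instance (a one-variable sketch, included only to make the filtration
concrete): take $\overline{X}_{\mathbb{C}}=\mathbb{A}^{1}$ with coordinate
$x$, $D_{\mathbb{C}}=\{x=0\}$, and $X_{\mathbb{C}}=\mathbb{G}_{m}$. Since
$\partial^{i}x^{-1}=(-1)^{i}i!\,x^{-(i+1)}$, the lemma gives
\[
F^{l}(j_{*}\mathcal{O}_{X_{\mathbb{C}}})=F^{l}(\mathcal{D}_{\mathbb{A}^{1}})\cdot x^{-1}=x^{-(l+1)}\mathcal{O}_{\mathbb{A}^{1}},
\]
so that each graded piece $F^{l}/F^{l-1}$ is free of rank one over $\mathcal{O}_{\mathbb{A}^{1}}$,
spanned by the class of $x^{-(l+1)}$.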
Combining \thmref{Mazur!} with \corref{proper-push-over-W(k)} gives:
\begin{prop}
1) Let $\varphi:\mathfrak{X}\to\mathfrak{Y}$ be a projective morphism,
and let $\mathfrak{D}\subset\mathfrak{X}$ be a (possibly empty) normal
crossings divisor. Let ${\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}})}$
be the gauge of \exaref{Integral-j} on $\mathfrak{X}$. Suppose that
each $\mathcal{H}^{i}({\displaystyle \int_{\varphi}(j_{\star}\mathcal{O}_{\mathfrak{X}})^{-\infty}})$
is a $p$-torsion-free $\widehat{\mathcal{D}}_{\mathfrak{Y}}^{(0)}$-module,
and that each $\mathcal{H}^{i}({\displaystyle (\int_{\varphi}{\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}}})}\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])$
is $f$-torsion-free. Then each $\mathcal{H}^{i}{\displaystyle (\int_{\varphi}{\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}}}}))$
is a standard gauge on $\mathfrak{Y}$.
2) Let ${\displaystyle j_{!}D(\mathcal{O}_{\mathfrak{X}}):=\mathbb{D}_{\mathfrak{X}}j_{\star}D(\mathcal{O}_{\mathfrak{X}})}$.
The same conclusion holds for $j_{!}D(\mathcal{O}_{\mathfrak{X}})$.
\end{prop}
When $\mathfrak{Y}$ is a point this recovers the log-version of Mazur's
theorem, as discussed in Ogus' paper \cite{key-18}.
Now let $R$ be a finite type algebra over $\mathbb{Z}$ so that there
exist smooth (over $R$) models $X_{R},Y_{R}$ for $X_{\mathbb{C}}$
and $Y_{\mathbb{C}}$, respectively, and a projective morphism $\varphi:X_{R}\to Y_{R}$
whose base change to $\mathbb{C}$ is the original morphism. We may
suppose the divisor $D_{\mathbb{C}}$ is defined over $R$ as well.
Let $\mathcal{D}_{X_{R}}^{(0)}$ be the level zero differential operators
over $X_{R}$, equipped with the symbol filtration; let the associated
Rees algebra be $\mathcal{R}(\mathcal{D}_{X_{R}}^{(0)})$ (as usual
we will use $f$ for the Rees parameter). Write $U_{\mathbb{C}}=X_{\mathbb{C}}\backslash D_{\mathbb{C}}$
and let $j:U_{\mathbb{C}}\to X_{\mathbb{C}}$ denote the inclusion. Since $\text{Rees}(j_{*}\mathcal{O}_{U_{\mathbb{C}}})$
is a coherent $\mathcal{R}(\mathcal{D}_{X_{\mathbb{C}}})$-module,
we can by generic flatness choose a flat model for $\text{Rees}(j_{*}\mathcal{O}_{U_{\mathbb{C}}})$;
in fact, we can describe it explicitly as follows: if $D_{R}$ is
given, in local coordinates, by $\{x_{1}\cdots x_{j}=0\}$, then we
may consider
\[
\mathcal{D}_{X_{R}}^{(0)}\cdot x_{1}^{-1}\cdots x_{j}^{-1}\subset j_{*}\mathcal{O}_{U_{R}}
\]
with the filtration inherited from the symbol filtration on $\mathcal{D}_{X_{R}}^{(0)}$.
The Rees module of this filtered $\mathcal{D}_{X_{R}}^{(0)}$-module
is a flat $R$-model for $\text{Rees}(j_{*}\mathcal{O}_{U_{\mathbb{C}}})$.
Let us call this sheaf ${\displaystyle j_{\star}\mathcal{O}_{U_{R}}[f]}$;
we will denote the associated filtered $\mathcal{D}_{X_{R}}^{(0)}$-module
by ${\displaystyle j_{\star}\mathcal{O}_{U_{R}}}$. Then, localizing
$R$ if necessary, we have that
\[
\int_{\varphi}j_{\star}\mathcal{O}_{U_{R}}[f]
\]
is an $f$-torsion-free complex inside $D_{coh}^{b}(\mathcal{D}_{Y_{R}}^{(0)}-\text{mod})$
(since it becomes $f$-torsion-free after base change to $\mathbb{C}$,
as remarked above). By generic flatness, we may also suppose (again,
localizing $R$ if necessary), that each cohomology sheaf ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j_{\star}\mathcal{O}_{U_{R}})}$
is flat over $R$. Let $k$ be a perfect field of characteristic $p>0$,
for which there is a morphism $R\to W(k)$ (so that $R/p\to k$)\footnote{If we extend $R$ so that it is smooth over $\mathbb{Z}$, then any
map $R/p\to k$ lifts to $R\to W(k)$}. Then, combining this discussion with the previous proposition, we
obtain
\begin{cor}
\label{cor:Mazur-for-Hodge-1}Let $\mathfrak{X}$ be the formal completion
of $X_{R}\times_{R}W(k)$, and similarly for $\mathfrak{Y}$. Then
each gauge $\mathcal{H}^{i}({\displaystyle (\int_{\varphi}{\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}})}}))$
is a standard, coherent, $F^{-1}$-gauge on $\mathfrak{Y}$. There
is an isomorphism
\[
\mathcal{H}^{i}(({\displaystyle (\int_{\varphi}{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{X}}[f,v]}})\otimes_{W(k)}^{L}k)\otimes_{D(k)}^{L}k[f])\tilde{\to}F^{*}\mathcal{H}^{i}(\int_{\varphi}j_{\star}\mathcal{O}_{U_{R}}[f]\otimes_{R}^{L}k)
\]
in $\mathcal{G}(\mathcal{R}(\mathcal{D}_{Y}^{(1)}))$. In particular,
the Hodge filtration on ${\displaystyle \mathcal{H}^{i}({\displaystyle (\int_{\varphi}{\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}}})}))^{\infty}/p}$
is the Frobenius pullback of the Hodge filtration on ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j_{\star}\mathcal{O}_{U_{R}}\otimes_{R}^{L}k)}$.
The same holds if we replace ${\displaystyle j_{\star}D(\mathcal{O}_{\mathfrak{X}}})$
by ${\displaystyle j_{!}D(\mathcal{O}_{\mathfrak{X}}})$. The same
statement holds for the pushforward of ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j_{\star}\mathcal{O}_{U_{R}})}$
under another proper morphism $\psi:Y\to Z$.
\end{cor}
\begin{proof}
The displayed isomorphism follows immediately from \thmref{Hodge-Filtered-Push}.
Since ${\displaystyle \int_{\varphi}j_{\star}\mathcal{O}_{U_{R}}[f]}$
has $f$-torsion free cohomology sheaves, which are also flat over
$R$, we deduce that ${\displaystyle {\displaystyle ((\int_{\varphi}{\displaystyle j_{\star}\mathcal{O}_{\mathfrak{X}}[f,v]}})\otimes_{W(k)}^{L}k})\otimes_{D(k)}^{L}k[f]$
has $f$-torsion free cohomology sheaves. Comparing the description
of the Hodge filtration on ${\displaystyle ({\displaystyle j_{\star}\mathcal{O}_{\mathfrak{X}}[f,v]}})^{\infty}/p$
with the result of \lemref{Hodge-filt-on-j_push}, the result now
follows from \thmref{F-Mazur}.
\end{proof}
Let us give some first applications of these results.
Suppose that $X_{\mathbb{C}}$ is an arbitrary (possibly singular)
quasi-projective variety. Let $V_{\mathbb{C}}$ be a smooth quasi-projective
variety such that there is a closed embedding $X_{\mathbb{C}}\to V_{\mathbb{C}}$,
and let $\overline{V}_{\mathbb{C}}$ be a projective compactification
of $V_{\mathbb{C}}$ (i.e., $\overline{V}_{\mathbb{C}}\backslash V_{\mathbb{C}}$
is a normal crossings divisor). Let $U_{\mathbb{C}}\subset X_{\mathbb{C}}$
be an affine open \emph{smooth} subset. Let $\varphi:\tilde{X}_{\mathbb{C}}\to X_{\mathbb{C}}$
denote a resolution of singularities so that $\varphi$ is an isomorphism
over $U_{\mathbb{C}}$ and $\varphi^{-1}(X_{\mathbb{C}}\backslash U_{\mathbb{C}})$
is a normal crossings divisor $\tilde{D}_{\mathbb{C}}\subset\tilde{X}_{\mathbb{C}}$.
The decomposition theorem for Hodge modules implies that the complex
${\displaystyle \int_{\varphi}\mathcal{O}_{\tilde{X}_{\mathbb{C}}}}\in D^{b}(\text{MHM}(X_{\mathbb{C}}))$
is quasi-isomorphic to the direct sum of its cohomology sheaves, and
is quasi-isomorphic to the direct sum of its cohomology sheaves, and
that each such sheaf is a direct sum of simple, pure Hodge modules.
Therefore, if $j:U_{\mathbb{C}}\to X_{\mathbb{C}}$ (resp. $j':U_{\mathbb{C}}\to\tilde{X}_{\mathbb{C}}$)
denotes the inclusion, then the image of the natural map
\[
\mathcal{H}^{0}({\displaystyle \int_{\varphi}\mathcal{O}_{\tilde{X}_{\mathbb{C}}}})\to\mathcal{H}^{0}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{\mathbb{C}}})\tilde{\to}\mathcal{H}^{0}(j_{\star}\mathcal{O}_{U_{\mathbb{C}}})
\]
is the Hodge module $\text{IC}_{X}$; indeed, ${\displaystyle \mathcal{H}^{0}({\displaystyle \int_{\varphi}\mathcal{O}_{\tilde{X}_{\mathbb{C}}}})}=\text{IC}_{X}\oplus\mathcal{M}$
where $\mathcal{M}$ is a pure Hodge module supported on $X_{\mathbb{C}}\backslash U_{\mathbb{C}}$;
the image of $\mathcal{H}^{0}({\displaystyle \int_{\varphi}\mathcal{O}_{\tilde{X}_{\mathbb{C}}}})$
in $\mathcal{H}^{0}({\displaystyle j_{\star}\mathcal{O}_{U_{\mathbb{C}}}})$
is therefore isomorphic to $\text{IC}_{X}$ (as a Hodge module, and
so in particular as a filtered $\mathcal{D}$-module).
Now let $\overline{X}_{\mathbb{C}}$ denote the closure of $X_{\mathbb{C}}$
in $\overline{V}_{\mathbb{C}}$, and let $\varphi:\tilde{\overline{X}}_{\mathbb{C}}\to\overline{X}_{\mathbb{C}}$
be a resolution of singularities, whose restriction to $X_{\mathbb{C}}\subset\overline{X}_{\mathbb{C}}$
is isomorphic to $\varphi:\tilde{X}_{\mathbb{C}}\to X_{\mathbb{C}}$,
and so that the inverse image of $\overline{X}_{\mathbb{C}}\backslash X_{\mathbb{C}}$
is a normal crossings divisor (we can modify $\varphi$ if necessary
to ensure that this happens). Let $i:X_{\mathbb{C}}\to\overline{X}_{\mathbb{C}}$
and $i':\tilde{X}_{\mathbb{C}}\to\tilde{\overline{X}}_{\mathbb{C}}$
denote the inclusions. Since Hodge modules on $X_{\mathbb{C}}$ are,
by definition, Hodge modules on $V_{\mathbb{C}}$ which are supported
on $X_{\mathbb{C}}$, the fact that $\overline{V}_{\mathbb{C}}\backslash V_{\mathbb{C}}$
is a divisor implies that $i_{*}$ is an exact functor on the category
of mixed Hodge modules. Therefore the image of the natural map
\[
\mathcal{H}^{0}(\int_{\varphi}i'_{\star}\mathcal{O}_{\tilde{X}_{\mathbb{C}}})\tilde{=}i_{\star}\mathcal{H}^{0}({\displaystyle \int_{\varphi}\mathcal{O}_{\tilde{X}_{\mathbb{C}}}})\to i_{\star}\mathcal{H}^{0}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{\mathbb{C}}})\tilde{=}\mathcal{H}^{0}((i\circ j)_{\star}\mathcal{O}_{U_{\mathbb{C}}})
\]
is isomorphic to $i_{\star}(\text{IC}_{X})$ (again, as a Hodge module,
and so in particular as a filtered $\mathcal{D}$-module).
As above, we now select a finite type $\mathbb{Z}$-algebra $R$ so
that everything in sight is defined and flat over $R$, and let $R\to W(k)$
for some perfect $k$ of characteristic $p>0$. Let $\tilde{\mathfrak{\overline{X}}}\to\mathfrak{\overline{X}}\subset\overline{\mathcal{V}}$
be the formal completion of $\tilde{\overline{X}}_{R}\times_{R}W(k)\to\overline{X}_{R}\times_{R}W(k)\subset\overline{V}_{R}\times_{R}W(k)$.
Abusing notation slightly we'll also denote by $\varphi$ the composed
map $\tilde{\mathfrak{\overline{X}}}\to\widehat{\overline{\mathcal{V}}}$.
\begin{cor}
\label{cor:Mazur-for-IC}1) The image of the map
\[
\mathcal{H}^{0}(\int_{\varphi}i'_{\star}D(\mathcal{O}_{\tilde{\mathfrak{X}}}))\to\mathcal{H}^{0}(\int_{\varphi}(i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}}))
\]
defines a coherent, standard $F^{-1}$-gauge on $\overline{\mathcal{V}}$,
denoted $\text{IC}_{\mathfrak{X}}$. The $\widehat{\mathcal{D}}_{\overline{\mathcal{V}}}^{(0)}$-module
$\text{IC}_{\mathfrak{X}}^{-\infty}$ is isomorphic to the $p$-adic
completion of $\text{IC}_{X_{R}}\otimes_{R}W(k)$, where $\text{IC}_{X_{R}}$
is an $R$-model for $\text{IC}_{X_{\mathbb{C}}}$. The Hodge filtration
on the $\mathcal{D}_{\overline{V}_{k}}^{(1)}$-module $\widehat{\text{IC}_{\mathfrak{X}}^{\infty}}/p\tilde{=}F^{*}\text{IC}_{\mathfrak{X}}^{-\infty}/p$
is equal to the Frobenius pullback of the Hodge filtration on $\text{IC}_{\mathfrak{X}}^{-\infty}/p\tilde{=}\text{IC}_{X_{R}}\otimes_{R}k$
coming from the Hodge filtration on $\text{IC}_{X_{R}}$.
2) The intersection cohomology groups $\text{IH}^{i}(X_{R})\otimes_{R}W(k):=\mathbb{H}_{dR}^{i}(\text{IC}_{X_{R}})\otimes_{R}W(k)$
satisfy the conclusions of Mazur's theorem, as in \thmref{Mazur-for-IC-Intro}.
\end{cor}
\begin{proof}
Since the displayed map is a map of coherent gauges, the image, $i_{\star}\text{IC}_{\mathfrak{X}}$,
is a coherent gauge. Since both ${\displaystyle i'_{\star}D(\mathcal{O}_{\tilde{\mathfrak{X}}})}$
and $(i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}})$ are $F^{-1}$-gauges,
and the natural map ${\displaystyle i'_{\star}D(\mathcal{O}_{\tilde{\mathfrak{X}}})\to(i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}})}$
is $F^{-1}$-equivariant, the same is true of the displayed map, and
so $i_{\star}\text{IC}_{\mathfrak{X}}$ is an $F^{-1}$-gauge. By
\propref{push-and-complete-for-D} (and the exactness of the functor
$\mathcal{M}\to\mathcal{M}^{-\infty}$) we have that the image of
\[
\mathcal{H}^{0}(\int_{\varphi}i'_{\star}D(\mathcal{O}_{\tilde{\mathfrak{X}}}))^{-\infty}\to\mathcal{H}^{0}(\int_{\varphi}(i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}}))^{-\infty}
\]
is equal to the image of
\[
\mathcal{H}^{0}(\int_{\varphi}(i'_{\star}\mathcal{O}_{\tilde{\mathfrak{X}}})^{-\infty})\to\mathcal{H}^{0}(\int_{\varphi}((i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}}))^{-\infty})
\]
in the category of $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-modules.
On the other hand, we have $R$-flat filtered $\mathcal{D}_{X_{R}}^{(0)}$-modules
${\displaystyle \mathcal{H}^{0}(\int_{\varphi}i'_{\star}\mathcal{O}_{\tilde{X}_{R}})}$
and ${\displaystyle \mathcal{H}^{0}(\int_{\varphi}(i'\circ j')_{*}\mathcal{O}_{U_{R}})}$
such that the $p$-adic completion of $\mathcal{H}^{0}(\int_{\varphi}i'_{\star}\mathcal{O}_{\tilde{X}_{R}}){\displaystyle \otimes_{R}W(k)}$
is ${\displaystyle \mathcal{H}^{0}(\int_{\varphi}(i'_{\star}\mathcal{O}_{\tilde{\mathfrak{X}}})^{-\infty})}$,
and the $p$-adic completion of ${\displaystyle \mathcal{H}^{0}(\int_{\varphi}(i'\circ j')_{*}\mathcal{O}_{U_{R}})}$
is ${\displaystyle \mathcal{H}^{0}\int_{\varphi}((i'\circ j')_{*}D(\mathcal{O}_{\mathfrak{U}}))^{-\infty}}$,
and, after localizing $R$ if necessary, we may further suppose that
the kernel of the map
\begin{equation}
{\displaystyle \mathcal{H}^{0}(\int_{\varphi}i'_{\star}\mathcal{O}_{\tilde{X}_{R}})}\to\mathcal{H}^{0}(\int_{\varphi}(i'\circ j')_{*}\mathcal{O}_{U_{R}})\label{eq:natural-map-over-R}
\end{equation}
is a summand (in the category of filtered $\mathcal{D}_{X_{R}}^{(0)}$-modules)
of ${\displaystyle {\displaystyle \mathcal{H}^{0}(\int_{\varphi}\mathcal{O}_{\tilde{X}_{R}})}}$
(as this is true over $\mathbb{C}$). Thus the image is flat over
$R$, and so its $p$-adic completion is $p$-torsion-free; therefore
$\text{IC}_{\mathfrak{X}}^{-\infty}$ is $p$-torsion-free, as is
$\text{IC}_{\mathfrak{X}}^{\infty}$ (since $\text{IC}_{\mathfrak{X}}^{-\infty}$
is an $F^{-1}$-gauge; c.f. the proof of \thmref{F-Mazur}). Further,
the map \eqref{eq:natural-map-over-R} is strict with respect to the
Hodge filtration, and so the same is true after taking reduction mod
$p$ and applying $F^{*}$. It follows that $\text{IC}_{\mathfrak{X}}^{\infty}/p/v$
is $f$-torsion-free.
Thus by \propref{Baby-Mazur}, we see that $\text{IC}_{\mathfrak{X}}$
is a standard gauge; the statement about the Hodge filtration follows
from \corref{Mazur-for-Hodge-1}. This proves part $1)$, and part
$2)$ follows from taking the pushforward to a point.
\end{proof}
\begin{rem}
The construction above involved a few auxiliary choices: namely, the
ring $R$ and the resolution $\tilde{X}_{R}$. However, any two resolutions
of singularities can be dominated by a third. Therefore, after possibly
localizing $R$, any two definitions of $\text{IC}_{X_{R}}$ agree.
Further, if we have an inclusion of rings $R\to R'$, then $\text{IC}_{X_{R}}\otimes_{R}R'=\text{IC}_{X_{R'}}$.
Therefore we have $\mathbb{H}_{dR}^{i}(\text{IC}_{X_{R}})\otimes_{R}R'\tilde{\to}\mathbb{H}_{dR}^{i}(\text{IC}_{X_{R'}})$
when both are flat. Since any two finite-type $\mathbb{Z}$-algebras
can be embedded into a third, we also obtain a comparison for any
two such algebras.
\end{rem}
Now suppose $X_{\mathbb{C}}$ is a smooth (quasiprojective) scheme,
and let $i:Y_{\mathbb{C}}\to X_{\mathbb{C}}$ be a closed immersion;
here, $Y_{\mathbb{C}}$ can be singular; let $j:X_{\mathbb{C}}\backslash Y_{\mathbb{C}}\to X_{\mathbb{C}}$
be the open immersion. Now, let $\tilde{j}:X_{\mathbb{C}}\to\overline{X}_{\mathbb{C}}$
be a smooth proper compactification of $X_{\mathbb{C}}$, so that
$\overline{X}_{\mathbb{C}}\backslash X_{\mathbb{C}}$ is a normal
crossings divisor. Choose flat $R$ models for everything in sight.
Then we have
\begin{cor}
\label{cor:Mazur-for-Ordinary}For each $i$, the Hodge cohomology
group $H^{i}(Y_{\mathbb{C}})$ admits a flat model $H^{i}(Y_{R})$
(as a filtered vector space). Let $k$ be a perfect field such that there is a map
$R\to W(k)$. Then there is a standard $F^{-1}$-gauge $H^{i}(Y_{R})_{W(k)}^{\cdot}$
such that $H^{i}(Y_{R})_{W(k)}^{-\infty}\tilde{=}H^{i}(Y_{R})\otimes_{R}W(k)$,
and such that the Hodge filtration on $H^{i}(Y_{R})_{W(k)}^{\infty}/p$
agrees with the Frobenius pullback of the Hodge filtration on $H^{i}(Y_{R})\otimes_{R}k$.
In particular, there is a Frobenius-linear isomorphism of $H^{i}(Y_{R})_{W(k)}[p^{-1}]$
for which the Hodge filtration on $H^{i}(Y_{R})_{W(k)}$ satisfies
the conclusions of Mazur's theorem. The same holds for the compactly
supported Hodge cohomology $H_{c}^{i}(Y_{\mathbb{C}})$.
\end{cor}
\begin{proof}
As the usual Hodge cohomology and the compactly supported Hodge cohomology
are interchanged under applying the filtered duality functor, it suffices
to deal with the case of the compactly supported cohomology. Let us
recall how to define this in the language of mixed Hodge modules.
We have the morphism
\[
Rj_{!}(\mathcal{O}_{X_{\mathbb{C}}\backslash Y_{\mathbb{C}}})\to\mathcal{O}_{X_{\mathbb{C}}}
\]
in the category of mixed Hodge modules (where $\mathcal{O}$ has its
usual structure as the trivial mixed Hodge module). The cone of this
map is, by definition, the complex of mixed Hodge modules representing
the unit object on $Y_{\mathbb{C}}$; we denote it by $\mathbb{I}_{Y_{\mathbb{C}}}$.
Then we have
\[
H_{c}^{i}(Y_{\mathbb{C}})=\int_{\varphi}^{d+i}R\tilde{j}_{!}\mathbb{I}_{Y_{\mathbb{C}}}=\int_{\varphi}^{d+i}R\tilde{j}_{!}(\text{cone}(Rj_{!}(\mathcal{O}_{X_{\mathbb{C}}\backslash Y_{\mathbb{C}}})\to\mathcal{O}_{X_{\mathbb{C}}}))
\]
\[
\tilde{=}\int_{\varphi}^{d+i}\text{cone}(R(\tilde{j}\circ j)_{!}(\mathcal{O}_{X_{\mathbb{C}}\backslash Y_{\mathbb{C}}})\to R\tilde{j}_{!}\mathcal{O}_{X_{\mathbb{C}}})
\]
Now, after spreading out both $R(\tilde{j}\circ j)_{!}(\mathcal{O}_{X_{\mathbb{C}}\backslash Y_{\mathbb{C}}})$
and $R\tilde{j}_{!}\mathcal{O}_{X_{\mathbb{C}}}$ over $R$, we can
apply \corref{Mazur-for-Hodge-1}.
\end{proof}
\begin{rem}
The previous two corollaries also hold for quasiprojective varieties
defined over $\overline{\mathbb{Q}}$. Although the theory of mixed
Hodge modules only exists over $\mathbb{C}$, its algebraic consequences,
such as the strictness of the pushforward of modules of the form $j_{\star}(\mathcal{O}_{X})$,
hold over any field of characteristic $0$. So the above results go
through in this case as well.
\end{rem}
Finally, we wish to give some relations of the theory of this paper
to the Hodge structure of the local cohomology sheaves $\mathcal{H}_{Y_{\mathbb{C}}}^{i}(\mathcal{O}_{X_{\mathbb{C}}})$,
as developed in {[}MP1{]}, {[}MP2{]}. Here, $X_{\mathbb{C}}$ is a
smooth affine variety and $Y_{\mathbb{C}}\subset X_{\mathbb{C}}$
is a subscheme defined by $(Q_{1},\dots,Q_{r})$. In this case, the
nontrivial sheaf is
\[
\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})\tilde{=}\mathcal{O}_{X_{\mathbb{C}}}[Q_{1}^{-1}\cdots Q_{r}^{-1}]/\sum_{i=1}^{r}\mathcal{O}_{X_{\mathbb{C}}}\cdot Q_{1}^{-1}\cdots\widehat{(Q_{i}^{-1})}\cdots Q_{r}^{-1}
\]
where the wide hat indicates an omitted factor. As above, these sheaves
admit a Hodge structure via
\[
\mathcal{H}_{Y_{\mathbb{C}}}^{i}(\mathcal{O}_{X_{\mathbb{C}}})\tilde{=}\mathcal{H}^{i}(\int_{\varphi}\int_{j'}\mathcal{O}_{U_{\mathbb{C}}})
\]
where $\varphi:\tilde{X}_{\mathbb{C}}\to X_{\mathbb{C}}$ is a resolution
of singularities such that $\varphi^{-1}(Y_{\mathbb{C}})$ is a normal
crossings divisor; and $j':U_{\mathbb{C}}\to\tilde{X}_{\mathbb{C}}$
is the inclusion. The resulting Hodge filtration is independent of
the choice of the resolution. Taking $R$-models for everything in
sight as above, we obtain a filtered $\mathcal{D}_{X_{R}}^{(0)}$-module
${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})}$
which (localizing $R$ if necessary) is a flat $R$-model for $\mathcal{H}_{Y_{\mathbb{C}}}^{i}(\mathcal{O}_{X_{\mathbb{C}}})$.
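(For a hypersurface, i.e. $r=1$ and $Y_{\mathbb{C}}=Z(Q)$, the description
above is simply the familiar presentation
\[
\mathcal{H}_{Y_{\mathbb{C}}}^{1}(\mathcal{O}_{X_{\mathbb{C}}})\tilde{=}\mathcal{O}_{X_{\mathbb{C}}}[Q^{-1}]/\mathcal{O}_{X_{\mathbb{C}}},
\]
generated over $\mathcal{O}_{X_{\mathbb{C}}}$ by the classes of $Q^{-a}$
for $a\geq1$; we note it here since this is the case used most heavily
below.)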
Now let $R\to W(k)$, and let $\mathfrak{X}$, $\tilde{\mathfrak{X}}$,
etc. be the formal completions of the base-change to $W(k)$ as usual.
Then we have a gauge
\[
\mathcal{M}_{Y}:=\mathcal{H}^{i}(\int_{\varphi}j'_{\star}D(\mathcal{O}_{\mathfrak{U}}))
\]
which satisfies $\mathcal{M}_{Y}^{-\infty}={\displaystyle \mathcal{H}^{i}(\int_{\varphi}(j'_{\star}\mathcal{O}_{\mathfrak{U}_{W(k)}})^{-\infty})}$.
\begin{lem}
\label{lem:injectivity-for-local-coh}Let $\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}:=\mathcal{H}^{i}(Rj_{*}\mathcal{O}_{\mathfrak{U}})$.
(This is simply the $p$-adic completion of the $i$th algebraic local
cohomology of $\mathfrak{X}$ along $\mathfrak{Y}$). Then the natural
map
\[
\mathcal{M}_{Y}^{-\infty}\to\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}
\]
is injective. If $F$ is a lift of Frobenius, the natural map $F^{*}\mathcal{M}_{Y}^{-\infty}\to\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}$
is also injective.
\end{lem}
\begin{proof}
We have the Hodge filtration on ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})}$,
which is a filtration by coherent $\mathcal{O}_{X_{R}}$-modules;
base changing to $W(k)$ yields a Hodge filtration on ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{W(k)}})}$.
The map in question is the $p$-adic completion of the natural map
\[
\mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{W(k)}})\to\mathcal{H}_{Y_{W(k)}}^{i}(\mathcal{O}_{X_{W(k)}})
\]
and the right hand module also has a Hodge filtration, which is simply
the restriction of the Hodge filtration on $\mathcal{H}_{Y_{B}}^{i}(\mathcal{O}_{X_{B}})$
where $B=\text{Frac}(W(k))$. So the proof proceeds in an essentially
identical manner to \lemref{Injectivity-of-completion}.
\end{proof}
Now, fix an integer $m\geq0$. Let us explain how to use this gauge
to obtain an arithmetic description of the Hodge filtration, up to
level $m$. Since $m$ is fixed, we may, after localizing $R$ as
needed, suppose that the image of $F^{m}({\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})})$ under the map ${\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})}\to\mathcal{H}_{Y_{R}}^{i}(\mathcal{O}_{X_{R}})$
is equal to $F^{m}(\mathcal{H}_{Y_{F}}^{i}(\mathcal{O}_{X_{F}}))\cap\mathcal{H}_{Y_{R}}^{i}(\mathcal{O}_{X_{R}})$ (here $F=\text{Frac}(R)$).
In particular the map
\[
F^{m}({\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})})\otimes_{R}k\to\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})
\]
is injective; under the isomorphism $F^{*}\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})\tilde{\to}\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})$,
we also obtain an injection $F^{*}(F^{m}({\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})}))\otimes_{R}k\to\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})$.
Then
\begin{prop}
\label{prop:Hodge-for-local-coh!}Let the notation be as above. We
have that the image of $\{g\in F^{*}\mathcal{M}_{Y}^{-\infty}|p^{j}g\in\mathcal{M}_{Y}^{-\infty}\}$
in $\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})$ is exactly $F^{*}(F^{j}({\displaystyle \mathcal{H}^{i}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})})\otimes_{R}k)$.
For each $0\leq j\leq m$, this is also the image of $\{g\in\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}|p^{j}g\in\mathcal{M}_{Y}^{-\infty}\}$.
\end{prop}
\begin{proof}
By construction $\mathcal{M}_{Y}$ is a standard, coherent, $F^{-1}$-gauge
of index $0$ (this can be easily seen as the Hodge filtration is
concentrated in degrees $\geq0$). Therefore, we have $\widehat{\mathcal{M}_{Y}^{\infty}}\tilde{=}F^{*}\mathcal{M}_{Y}$,
and by the previous lemma $F^{*}\mathcal{M}_{Y}\to F^{*}\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}\tilde{\to}\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}$
is injective. Since $\mathcal{M}_{Y}$ is standard of index $0$,
we have
\[
\mathcal{M}_{Y}^{j}=\{m\in\mathcal{M}_{Y}^{\infty}|p^{j}m\in f_{\infty}(\mathcal{M}_{Y}^{0})\}
\]
Note that if $j\leq0$, this means $\mathcal{M}_{Y}^{j}\tilde{=}\mathcal{M}_{Y}^{-\infty}$,
and the map
\[
\eta_{j}:\mathcal{M}_{Y}^{j}\xrightarrow{f_{\infty}}\mathcal{M}_{Y}^{\infty}\xrightarrow{\widehat{?}}F^{*}\mathcal{M}_{Y}\to\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}
\]
is simply $p^{-j}$ times the injection $\mathcal{M}_{Y}^{-\infty}\to\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}$,
and is therefore injective by \lemref{injectivity-for-local-coh}.
If $j>0$, then $p^{j}\cdot\eta_{j}=\eta_{0}\circ v^{j}$ is injective
for the same reason, and so $\eta_{j}$ is injective since everything
in sight is $p$-torsion-free. Thus the entire gauge embeds into
$\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}$
and the first result follows.
For the second result, consider the standard gauge $\mathcal{N}_{Y}$
defined by $\mathcal{N}_{Y}^{j}:=\{m\in\widehat{\mathcal{H}_{\mathfrak{Y}}^{i}(\mathcal{O}_{\mathfrak{X}})}|p^{j}m\in\mathcal{M}_{Y}^{0}\}$
(the actions of $f$ and $v$ are inclusion and multiplication by
$p$ as usual). For each $j$ we have a natural injection $\mathcal{M}_{Y}^{j}\to\mathcal{N}_{Y}^{j}$,
which yields a morphism of gauges $\mathcal{M}_{Y}\to\mathcal{N}_{Y}$.
Let us show that for $j\leq m$ the map $\psi:\mathcal{M}_{Y}^{j}\to\mathcal{N}_{Y}^{j}$
is an isomorphism. For $j\leq0$ this is clear by definition, so suppose
it is true for some $j-1\leq m-1$. For any $j$ let $\mathcal{M}_{Y,0}^{j}:=\mathcal{M}_{Y}^{j}/p$
and similarly define $\mathcal{N}_{Y,0}^{j}$.
We first claim that the isomorphism $\psi:\mathcal{M}_{Y,0}^{j-1}\to\mathcal{N}_{Y,0}^{j-1}$
induces an isomorphism $\text{ker}(f:\mathcal{M}_{Y,0}^{j-1}\to\mathcal{M}_{Y,0}^{j})\tilde{\to}\text{ker}(f:\mathcal{N}_{Y,0}^{j-1}\to\mathcal{N}_{Y,0}^{j})$.
Indeed, we have that $\mathcal{M}_{Y,0}^{j-1}/\text{ker}(f)\tilde{=}F^{j-1}(\mathcal{M}_{Y,0}^{\infty})$
and $\mathcal{N}_{Y,0}^{j-1}/\text{ker}(f)\tilde{=}F^{j-1}(\mathcal{N}_{Y,0}^{\infty})$
(as $\mathcal{M}_{Y}$ and $\mathcal{N}_{Y}$ are standard gauges).
Further, the composed morphism
\[
F^{j-1}(\mathcal{M}_{Y,0}^{\infty})\to F^{j-1}(\mathcal{N}_{Y,0}^{\infty})\to\mathcal{H}_{Y_{k}}^{i}(\mathcal{O}_{X_{k}})
\]
is injective (since $j-1\leq m$); therefore $F^{j-1}(\mathcal{M}_{Y,0}^{\infty})\to F^{j-1}(\mathcal{N}_{Y,0}^{\infty})$
is injective, and it is clearly surjective since $\mathcal{M}_{Y,0}^{j-1}\to\mathcal{N}_{Y,0}^{j-1}$
is surjective. Therefore it is an isomorphism; and hence so is $\text{ker}(f:\mathcal{M}_{Y,0}^{j-1}\to\mathcal{M}_{Y,0}^{j})\to\text{ker}(f:\mathcal{N}_{Y,0}^{j-1}\to\mathcal{N}_{Y,0}^{j})$
as claimed.
Now suppose $m\in\text{ker}(\psi:\mathcal{M}_{Y,0}^{j}\to\mathcal{N}_{Y,0}^{j})$.
Then $vm\in\text{ker}(\psi:\mathcal{M}_{Y,0}^{j-1}\to\mathcal{N}_{Y,0}^{j-1})=0$,
so that $m\in\text{ker}(v)=\text{im}(f)$. If $m=fm'$, then we see
$\psi m'\in\text{ker}(f)$; but by the above paragraph this implies
$m'\in\text{ker}(f)$; therefore $m=0$ and $\psi:\mathcal{M}_{Y,0}^{j}\to\mathcal{N}_{Y,0}^{j}$
is injective. Thus the cokernel of $\psi:\mathcal{M}_{Y}^{j}\to\mathcal{N}_{Y}^{j}$
is $p$-torsion-free. On the other hand, we clearly have $p^{j}\cdot\mathcal{N}_{Y}^{j}\subset\mathcal{M}_{Y}^{j}$;
so that the cokernel of $\psi$ is annihilated by $p^{j}$; therefore
the cokernel is $0$ and we see that $\psi:\mathcal{M}_{Y}^{j}\to\mathcal{N}_{Y}^{j}$
is an isomorphism as claimed.
\end{proof}
Note that this gives a description of the reduction mod $p$ of the
Hodge filtration (up to $F^{m}$) which makes no reference to a resolution
of singularities. It does depend on an $R$-model for the $\mathcal{D}$-module
$\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})$,
though any two such models agree after localizing $R$ at an element.
Now let us further suppose that $Y_{\mathbb{C}}\subset X_{\mathbb{C}}$
is a complete intersection of codimension $r$. By {[}MP2{]}, proposition
7.14 (c.f. also section $9$ of loc. cit.) we have
\[
F^{m}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))\subset O^{m}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=\text{span}_{\mathcal{O}_{X_{\mathbb{C}}}}\{Q_{1}^{-a_{1}}\cdots Q_{r}^{-a_{r}}|\sum a_{i}\leq m+r\}
\]
In {[}MP2{]} the condition $F^{m}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O^{m}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$
is discussed at length; and the point of view developed there shows
that the largest $m$ for which there is equality is a subtle measure
of the singularities of $Y_{\mathbb{C}}$. In fact, equality for any
$m$ already implies serious restrictions on the singularities; indeed,
$F^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$
is equivalent to $Y_{\mathbb{C}}$ having du Bois singularities (this
is the first case of theorem F of loc. cit.).
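To see where the bound $\sum a_{i}\leq m+r$ comes from, it may help to
record the following sketch, which uses only \lemref{Hodge-filt-on-j_push}
and is not a statement from {[}MP2{]}: if the $Q_{i}$ can be taken to be
part of a local coordinate system $x_{1},\dots,x_{r}$, so that $\{x_{1}\cdots x_{r}=0\}$
is a normal crossings divisor, then the image of $F^{m}(j_{*}\mathcal{O}_{X_{\mathbb{C}}\backslash\{x_{1}\cdots x_{r}=0\}})$
in $\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})$ is
spanned over $\mathcal{O}_{X_{\mathbb{C}}}$ by the classes of
\[
x_{1}^{-a_{1}}\cdots x_{r}^{-a_{r}},\qquad a_{i}\geq1,\qquad\sum_{i=1}^{r}(a_{i}-1)\leq m,
\]
which is exactly the span appearing in the definition of $O^{m}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$.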
Now, using the methods of this paper, let us show
\begin{cor}
\label{cor:Canonical-Singularities} Suppose $F^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$,
i.e., $Q_{1}^{-1}\cdots Q_{r}^{-1}\in F^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$.
Then the log-canonical threshold of $Y_{\mathbb{C}}$ is $r$.
\end{cor}
Combined with the above, this gives a new proof of the famous fact
that du Bois singularities are log canonical, in the l.c.i. case at least
(c.f. \cite{key-57}, \cite{key-58}). It is also a (very) special
case of {[}MP2{]}, conjecture 9.11; of course, it also follows from
theorem C of {[}MP2{]}, using the results of \cite{key-57}.
To prove this result, we will recall a few facts from positive characteristic
algebraic geometry, following {[}BMS{]}. We return to a perfect field
$k$ of positive characteristic and $X$ smooth over $k$. Let $\mathcal{I}\subset\mathcal{O}_{X}$
be an ideal sheaf. For each $m>0$ we let $\mathcal{I}^{[1/p^{m}]}$
be the minimal ideal sheaf such that $\mathcal{I}\subset(F^{m})^{*}(\mathcal{I}^{[1/p^{m}]})$
(here we are using the isomorphism $(F^{m})^{*}\mathcal{O}_{X}\tilde{\to}\mathcal{O}_{X}$;
for any ideal sheaf $\mathcal{J}$ we have $(F^{m})^{*}\mathcal{J}=\mathcal{J}^{[p^{m}]}$,
the ideal locally generated by $p^{m}$th powers of elements of $\mathcal{J}$).
Then, for each $i>0$ one has inclusions
\[
(\mathcal{I}^{i})^{[1/p^{m}]}\subset(\mathcal{I}^{i'})^{[1/p^{m'}]}
\]
whenever $i/p^{m}\geq i'/p^{m'}$ and $m\leq m'$ (this is {[}BMS{]},
lemma 2.8). These constructions are connected to $\mathcal{D}$-module
theory as follows: for any ideal sheaf $\mathcal{I}$, we have $\mathcal{D}_{X}^{(m)}\cdot\mathcal{I}=(F^{m+1})^{*}(\mathcal{I}^{[1/p^{m+1}]})$
(c.f. \cite{key-64}, remark 2.6, and \cite{key-62}, lemma 3.1).
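A quick sanity check of this identity in one variable (our own toy computation,
not from loc. cit.): take $\mathcal{I}=(x^{4})\subset k[x]$ and $p=3$. Then
$(x^{4})^{[1/3]}=(x)$, since $(x)^{[3]}=(x^{3})\ni x^{4}$ while $(x^{2})^{[3]}=(x^{6})\not\ni x^{4}$;
hence $F^{*}((x^{4})^{[1/3]})=(x^{3})$. On the other side,
\[
\mathcal{D}_{k[x]}^{(0)}\cdot(x^{4})=(x^{3}),
\]
since $\partial(x^{4})=4x^{3}=x^{3}$ in characteristic $3$, while $\partial(hx^{3})=h'x^{3}+3hx^{2}=h'x^{3}$
shows that the ideal $(x^{3})$ is stable under derivations.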
Now, fix a number $c\in\mathbb{R}^{+}$. If $x\mapsto\lceil x\rceil$
denotes the ceiling function, then the previous discussion implies
inclusions
\[
(\mathcal{I}^{\lceil cp^{m}\rceil})^{[1/p^{m}]}\subset(\mathcal{I}^{\lceil cp^{m+1}\rceil})^{[1/p^{m+1}]}
\]
for all $m$. Thus we have a chain of ideals, which necessarily stabilizes,
and so we can define
\[
\tau(\mathcal{I}^{c})=(\mathcal{I}^{\lceil cp^{m}\rceil})^{[1/p^{m}]}
\]
for all $m>>0$. These ideals are called generalized test ideals.
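For illustration (again a hand computation in one variable, not taken
from {[}BMS{]}): for the principal ideal $\mathcal{I}=(x)\subset k[x]$ one
has $(x^{N})^{[1/p^{m}]}=(x^{\lfloor N/p^{m}\rfloor})$, so that, for instance,
for $c=1/2$,
\[
\tau((x)^{1/2})=(x^{\lceil p^{m}/2\rceil})^{[1/p^{m}]}=(x^{\lfloor\lceil p^{m}/2\rceil/p^{m}\rfloor})=\mathcal{O}_{\mathbb{A}_{k}^{1}}\qquad(m\gg0),
\]
and more generally $\tau((x)^{c})=(x^{\lfloor c\rfloor})$ for non-integral
$c$; this is consistent, via the comparison with multiplier ideals recalled
next, with $\mathcal{J}((x)^{c})=(x^{\lfloor c\rfloor})$ on the complex side.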
There is a deep connection to the theory of multiplier ideals in complex
algebraic geometry, which is due to Hara and Yoshida ({[}HY{]}, theorems
3.4 and 6.8). Suppose we have a complex variety $X_{\mathbb{C}}$,
and flat $R$-model $X_{R}$, and an ideal sheaf $\mathcal{I}_{R}$
which is also flat over $R$. Fix a rational number $c$; we may then
choose a flat model $\mathcal{J}(\mathcal{I}_{R}^{c})$ for the multiplier
ideal $\mathcal{J}(\mathcal{I}_{\mathbb{C}}^{c})$. Then for all perfect
fields $k$ of sufficiently large positive characteristic, we have
\[
\mathcal{J}(\mathcal{I}_{R}^{c})\otimes_{R}k=\tau(\mathcal{I}_{k}^{c})
\]
Finally, we note that since $\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})$
is a coherent $\mathcal{D}_{X_{\mathbb{C}}}$-module, there exists
some $l>0$ such that $\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})=\mathcal{D}_{X_{\mathbb{C}}}\cdot(Q_{1}\cdots Q_{r})^{-l}$.
Therefore we may obtain an $R$-model by taking the sheaf
\[
\mathcal{D}_{X_{R}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}\subset\mathcal{H}_{Y_{R}}^{r}(\mathcal{O}_{X_{R}})
\]
After base change to $F=\text{Frac}(R)$ this agrees with ${\displaystyle \mathcal{H}^{r}(\int_{\varphi}j'_{\star}\mathcal{O}_{U_{R}})}$;
therefore the two models agree after possibly localizing $R$. In
particular, $\mathcal{M}_{Y}^{-\infty}$ is the $p$-adic completion
of $\mathcal{D}_{X_{W(k)}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}$.
Now let us turn to the
\begin{proof}
(of \corref{Canonical-Singularities}) Let $\mathcal{I}_{\mathbb{C}}=(Q_{1},\dots,Q_{r})$
and let us fix a rational number $0<c<r$. Suppose that the ideal
$\mathcal{J}(\mathcal{I}_{\mathbb{C}}^{c})\subsetneq\mathcal{O}_{X_{\mathbb{C}}}$.
We spread everything out over $R$, and reduce to $k$ of large positive
characteristic. Then the above implies $\tau(\mathcal{I}_{k}^{c})\subsetneq\mathcal{O}_{X_{k}}$.
Now, recall that we have fixed an $R$-model $\mathcal{D}_{X_{R}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}$
of $\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}})$.
Then the description of the Hodge filtration in \propref{Hodge-for-local-coh!}
implies that $F^{*}(F_{0}(\mathcal{D}_{X_{R}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l})\otimes_{R}k)$
is the image of $\mathcal{D}_{X_{R}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}\otimes_{R}k$
in $\mathcal{H}_{Y_{k}}^{r}(\mathcal{O}_{X_{k}})$; in other words,
the $\mathcal{D}_{X_{k}}^{(0)}$-submodule generated by $(Q_{1}\cdots Q_{r})^{-l}$.
Thus the assumption $F_{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O_{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$
is equivalent to the statement
\[
(Q_{1}\cdots Q_{r})^{-p}\in\mathcal{D}_{X_{k}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}
\]
inside $\mathcal{H}_{Y_{k}}^{r}(\mathcal{O}_{X_{k}})$. Since $\mathcal{H}_{Y_{k}}^{r}(\mathcal{O}_{X_{k}})$
is the quotient of $\mathcal{O}_{X_{k}}[(Q_{1}\cdots Q_{r})^{-1}]$
by the submodule generated by $\{Q_{1}^{-1}\cdots\widehat{Q_{i}^{-1}}\cdots Q_{r}^{-1}\}_{i=1}^{r}$,
which is contained in the $\mathcal{D}_{X_{k}}^{(0)}$-submodule generated
by $(Q_{1}\cdots Q_{r})^{-l}$, we see that the assumption actually
implies
\[
(Q_{1}\cdots Q_{r})^{-p}\in\mathcal{D}_{X_{k}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{-l}
\]
inside $\mathcal{O}_{X_{k}}[(Q_{1}\cdots Q_{r})^{-1}]$.
To use this, note that the map $(Q_{1}\cdots Q_{r})^{p}\cdot$ is
a $\mathcal{D}_{X_{k}}^{(0)}$-linear isomorphism on $\mathcal{O}_{X_{k}}[(Q_{1}\cdots Q_{r})^{-1}]$.
Thus we see
\[
\mathcal{O}_{X_{k}}=\mathcal{D}_{X_{k}}^{(0)}\cdot(Q_{1}\cdots Q_{r})^{p-l}\subset F^{*}((\mathcal{I}^{r(p-l)})^{[1/p]})
\]
so that $(\mathcal{I}^{r(p-l)})^{[1/p]}=\mathcal{O}_{X_{k}}$ which
implies $\tau(\mathcal{I}^{r(1-l/p)})=\mathcal{O}_{X_{k}}$. Taking
$p$ large enough so that $r(1-l/p)>c$, we deduce $\tau(\mathcal{I}_{k}^{c})=\mathcal{O}_{X_{k}}$
(the test ideals form a decreasing filtration, by {[}BMS{]}, proposition
2.11); contradiction. Therefore in fact $\mathcal{J}(\mathcal{I}_{\mathbb{C}}^{c})=\mathcal{O}_{X_{\mathbb{C}}}$
for all $c\in(0,r)$, which is the statement.
\end{proof}
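A minimal sanity check of the key displayed identity in this proof (our
own verification, in the simplest possible case): if $r=1$ and $Q=x$ is
a coordinate, then $x^{-1}$ generates $j_{*}(\mathcal{O}_{U_{\mathbb{C}}})$
over $\mathcal{D}_{X_{\mathbb{C}}}$, so we may take $l=1$, and the identity
$\mathcal{O}_{X_{k}}=\mathcal{D}_{X_{k}}^{(0)}\cdot x^{p-1}$ is witnessed
directly by
\[
\partial^{p-1}x^{p-1}=(p-1)!=-1\neq0\quad\text{in }k,
\]
by Wilson's theorem. Of course in this case $Y_{\mathbb{C}}$ is smooth,
so the hypothesis $F^{0}=O^{0}$ is satisfied.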
As a corollary of this argument, we have:
\begin{cor}
Suppose $r=1$ in the previous corollary (so that $\mathcal{I}=(Q)$).
Then, under the assumption that $F^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O^{0}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$,
we have that, for all $p>>0$, $\tau(Q^{(1-l/p)})=\mathcal{O}_{X_{k}}$.
\end{cor}
This says that, after reducing mod $p$, for $p>>0$ the $F$-pure
threshold of $Q$ is $\geq1-l/p$. Recall that $l$ is any integer
for which $Q^{-l}$ generates the $\mathcal{D}_{X_{\mathbb{C}}}$-module
$j_{*}(\mathcal{O}_{U_{\mathbb{C}}})$; thus we may take $l$ to be
the least natural number such that $b_{Q}(-l-t)\neq0$ for all $t\in\mathbb{N}$
(here $b_{Q}$ is the $b$-function for $Q$). In this language, this
result was recently reproved (and generalized) in \cite{key-65},
by completely different techniques.
To finish off this section, we'll spell out how the description of
the Hodge filtration in \propref{Hodge-for-local-coh!} relates to
the condition $F_{i}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))=O_{i}(\mathcal{H}_{Y_{\mathbb{C}}}^{r}(\mathcal{O}_{X_{\mathbb{C}}}))$
when $Y_{\mathbb{C}}$ is a hypersurface inside $X_{\mathbb{C}}$;
$Y_{\mathbb{C}}=Z(Q)$. In this case, we get an intriguing description
in terms of the behavior of $\mathcal{H}_{Y}^{1}(\mathcal{O}_{X})$
in mixed characteristic:
\begin{cor}
We have $F_{i}(\mathcal{H}_{Y}^{1}(\mathcal{O}_{X}))=O_{i}(\mathcal{H}_{Y}^{1}(\mathcal{O}_{X}))$
iff $p^{i}Q^{-(i+1)p}\in\mathcal{D}_{X_{W_{i+1}(k)}}^{(0)}\cdot Q^{-l}$
inside $\mathcal{H}_{Y_{W_{i+1}(k)}}^{1}(\mathcal{O}_{X_{W_{i+1}(k)}})$
for $p>>0$.
\end{cor}
\begin{proof}
Applying the condition of \propref{Hodge-for-local-coh!}, we see
that $F_{i}(\mathcal{H}_{Y}^{1}(\mathcal{O}_{X}))=O_{i}(\mathcal{H}_{Y}^{1}(\mathcal{O}_{X}))$
iff, for all $p>>0$, there exists some $g\in\widehat{\mathcal{H}_{\mathfrak{Y}}^{1}(\mathcal{O}_{\mathfrak{X}})}$,
with $p^{i}g\in\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}\cdot Q^{-l}$
whose image in $\mathcal{H}_{Y_{k}}^{1}(\mathcal{O}_{X_{k}})$ is
$Q^{-(i+1)p}$. This holds iff there is some $g_{1}\in\widehat{\mathcal{H}_{\mathfrak{Y}}^{1}(\mathcal{O}_{\mathfrak{X}})}$
so that
\[
Q^{-(i+1)p}=g+pg_{1}
\]
inside $\widehat{\mathcal{H}_{\mathfrak{Y}}^{1}(\mathcal{O}_{\mathfrak{X}})}$.
But this is equivalent to
\[
p^{i}Q^{-(i+1)p}=p^{i}g+p^{i+1}g_{1}=\Phi\cdot Q^{-l}+p^{i+1}g_{1}
\]
for some $\Phi\in\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$. This,
in turn, is a restatement of the corollary.
\end{proof}
\section{\label{sec:Appendix:-an-Inectivity}Appendix: an Injectivity Result}
In this appendix we give a proof of the following technical result
used in \lemref{Hodge-filt-on-log}:
\begin{lem}
The natural map $({\displaystyle j_{\star}\mathcal{O}_{\mathfrak{U}}})^{-\infty}|_{\mathfrak{V}}\to\widehat{(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}$
(where $\widehat{(\cdot)}$ denotes $p$-adic completion) is injective.
\end{lem}
Recall that $j_{\star}(\mathcal{O}_{\mathfrak{U}})$ was defined as
the $\widehat{\mathcal{D}}_{\mathfrak{X}}^{(0)}$-module locally generated
by $x_{1}^{-1}\cdots x_{j}^{-1}$, where $x_{1}\cdots x_{j}$ is a
local equation for the divisor $\mathfrak{D}\subset\mathfrak{X}$.
\begin{proof}
Let $\mathfrak{V}$ be an open affine. On $\mathfrak{V}$, the map
in question is the $p$-adic completion of the inclusion ${\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}}\to\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]$,
where ${\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}}$
is the $D_{\mathfrak{V}}^{(0)}$-submodule of $j_{*}(\mathcal{O}_{\mathfrak{V}})$
generated by $x_{1}^{-1}\cdots x_{j}^{-1}$. This is a map of $p$-torsion-free
sheaves; let $\mathcal{C}$ denote its cokernel. Then the kernel of
the completion is given by
\[
\lim_{\leftarrow}\mathcal{C}[p^{n}]
\]
where $\mathcal{C}[p^{n}]=\{m\in\mathcal{C}|p^{n}m=0\}$, and the
maps in the inverse system are multiplication by $p$.
Now, both ${\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}}$
and $\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]$ are
filtered by the Hodge filtration; on ${\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}}$
it is given by $F^{l}(\mathcal{D}_{\mathfrak{V}}^{(0)})\cdot(x_{1}^{-1}\cdots x_{j}^{-1})$,
which is precisely the span over $\mathcal{O}_{\mathfrak{V}}$ of
terms of the form $(-1)^{|I|}I!\cdot x_{1}^{-i_{1}-1}\cdots x_{j}^{-i_{j}-1}=\partial^{I}x_{1}^{-1}\cdots x_{j}^{-1}$
for $|I|\leq l$; here we have denoted $I!=i_{1}!\cdots i_{j}!$.
The Hodge filtration $F^{l}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])$
is defined to be the span over $\mathcal{O}_{\mathfrak{V}}$ of terms
of the form $x_{1}^{-i_{1}-1}\cdots x_{j}^{-i_{j}-1}$ for $|I|\leq l$.
From this description it follows that, in both cases, all of the terms
$F^{i}$ and $F^{i}/F^{i-1}$ are $p$-torsion-free; and the morphism
is strict with respect to the filtrations; i.e.,
\[
F^{i}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])\cap{\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}}=F^{i}({\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}})
\]
Now we consider the inclusion $\mathcal{R}({\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}})\to\mathcal{R}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])$
(where $\mathcal{R}$ stands for the Rees functor with respect to
the Hodge filtrations on both sides). The strictness of the map implies
\[
\text{coker}(\mathcal{R}({\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}})\to\mathcal{R}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}]))=\mathcal{R}(\mathcal{C})
\]
We now shall show that the $p$-adic completion\footnote{In this appendix only, we use the completion of the \emph{entire}
Rees module, NOT the graded completion of the rest of the paper; similarly,
the product is the product in the category of all modules, not the
category of graded modules} of this map is injective. The natural map
\[
\mathcal{R}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})=\bigoplus_{i=0}^{\infty}F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})\to\prod_{i=0}^{\infty}F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})
\]
is injective, and the cokernel is easily seen to be $p$-torsion-free;
therefore we obtain an injection
\[
\widehat{\mathcal{R}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})}\to\widehat{\prod_{i=0}^{\infty}F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})}
\]
and the analogous statement holds for $\mathcal{R}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])$.
Further, one has an isomorphism
\[
\widehat{\prod_{i=0}^{\infty}F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})}\tilde{=}\prod_{i=0}^{\infty}\widehat{F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})}=\prod_{i=0}^{\infty}F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})
\]
where the last equality is because $F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})$
is a coherent $\mathcal{O}_{\mathfrak{V}}$-module and therefore $p$-adically
complete; similarly
\[
\widehat{\prod_{i=0}^{\infty}F^{i}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}\tilde{=}\prod_{i=0}^{\infty}\widehat{F^{i}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}=\prod_{i=0}^{\infty}F^{i}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])
\]
So, since each $F^{i}((j_{\star}\mathcal{O}_{\mathfrak{V}})^{\text{fin}})\to F^{i}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])$
is injective, we obtain an injection
\[
\widehat{\mathcal{R}({\displaystyle {\displaystyle (j_{\star}\mathcal{O}_{\mathfrak{V}})}}^{\text{fin}})}\to\widehat{\mathcal{R}(\mathcal{O}_{\mathfrak{V}}[x_{1}^{-1}\cdots x_{j}^{-1}])}
\]
This means that ${\displaystyle \lim_{\leftarrow}\mathcal{R}(\mathcal{C})[p^{n}]}=0$.
Let $f$ denote the parameter in the Rees ring. Then, for each $n$
we have a short exact sequence
\[
\mathcal{R}(\mathcal{C})[p^{n}]\xrightarrow{f-1}\mathcal{R}(\mathcal{C})[p^{n}]\to\mathcal{C}[p^{n}]
\]
Since ${\displaystyle \lim_{\leftarrow}\mathcal{R}(\mathcal{C})[p^{n}]}=0$,
to prove ${\displaystyle \lim_{\leftarrow}\mathcal{C}[p^{n}]=0}$
we must show that $f-1$ acts injectively on ${\displaystyle \text{R}^{1}\lim_{\leftarrow}\mathcal{R}(\mathcal{C})[p^{n}]}$.
Recall that this module is the cokernel of
\[
\eta:\prod_{n=1}^{\infty}\mathcal{R}(\mathcal{C})[p^{n}]\to\prod_{n=1}^{\infty}\mathcal{R}(\mathcal{C})[p^{n}]
\]
where $\eta(c_{1},c_{2},c_{3},\dots)=(c_{1}-pc_{2},c_{2}-pc_{3},\dots)$.
Now, since each $\mathcal{R}(\mathcal{C})[p^{n}]$ is graded, we may define
a homogeneous element of degree $i$ in ${\displaystyle \prod_{n=1}^{\infty}\mathcal{R}(\mathcal{C})[p^{n}]}$
to be an element $(c_{1},c_{2},\dots)$ such that each $c_{j}$ has
degree $i$. Any element $d\in{\displaystyle \prod_{n=1}^{\infty}\mathcal{R}(\mathcal{C})[p^{n}]}$
has a unique representation of the form ${\displaystyle \sum_{i=0}^{\infty}d_{i}}$
where $d_{i}$ is homogeneous of degree $i$ (this follows by looking
at the decomposition by grading of each component). Since the map
$\eta$ preserves the set of homogeneous elements of degree $i$, we
have ${\displaystyle \eta(\sum_{i=0}^{\infty}d_{i})=\sum_{i=0}^{\infty}\eta(d_{i})}$.
Suppose that $(f-1)d=\eta(d')$. Write ${\displaystyle d=\sum_{i=j}^{\infty}d_{i}}$
where $d_{j}\neq0$. Then
\[
(f-1){\displaystyle \sum_{i=j}^{\infty}d_{i}}=-d_{j}+\sum_{i=j+1}^{\infty}(fd_{i-1}-d_{i})=\sum_{i=0}^{\infty}\eta(d_{i}')
\]
So we obtain $d_{j}=-\eta(d_{j}')$, and $d_{i}=fd_{i-1}-\eta(d_{i}')$
for all $i>j$, which immediately gives $d_{i}\in\text{image}(\eta)$
for all $i$; so $d\in\text{image }(\eta)$ and $f-1$ acts injectively
on $\text{coker}(\eta)$ as required.
\end{proof}
The University of Illinois at Urbana-Champaign, csdodd2@illinois.edu
\end{document} |
\begin{document}
\title{On semi-vector spaces and semi-algebras}
\author{Giuliano G. La Guardia, Jocemar de Q. Chagas, Ervin K. Lenzi, Leonardo Pires
\thanks{Giuliano G. La Guardia ({\tt \small
gguardia@uepg.br}), Jocemar de Q. Chagas ({\tt \small jocemarchagas@uepg.br})
and Leonardo Pires ({\tt \small lpires@uepg.br}) are
with Department of Mathematics and Statistics,
State University of Ponta Grossa (UEPG), 84030-900, Ponta Grossa -
PR, Brazil. Ervin K. Lenzi ({\tt \small eklenzi@uepg.br}) is with Department of Physics,
State University of Ponta Grossa (UEPG), 84030-900, Ponta Grossa -
PR, Brazil. Corresponding author: Giuliano G. La Guardia ({\tt \small
gguardia@uepg.br}). }}
\maketitle
\begin{abstract}
It is well-known that the theories of semi-vector spaces and semi-algebras --
which were not much studied over time -- are utilized/applied in Fuzzy Set
Theory in order to obtain extensions of the concept of fuzzy numbers as well as to
provide new mathematical tools to investigate properties and new results
on fuzzy systems. In this paper we investigate the theory of
semi-vector spaces over the semi-field of nonnegative real numbers
${\mathbb R}_{0}^{+}$. We prove several results concerning semi-vector
spaces and semi-linear transformations.
Moreover, we introduce in the literature the concept of
eigenvalues and eigenvectors of a semi-linear operator, describing in some cases
how to compute them. Topological properties of semi-vector spaces
such as completeness and separability are also investigated. New families of
semi-vector spaces derived from semi-metric,
semi-norm, semi-inner product, among others are exhibited. Additionally,
some results on semi-algebras are presented.
\end{abstract}
\emph{keywords}: semi-vector space; semi-algebras; semi-linear operators
\section{Introduction}
The concept of semi-vector space was introduced by Prakash and Sertel in
\cite{Prakash:1974}. Roughly speaking, semi-vector spaces are
``vector spaces'' where the scalars are in a semi-field. Although the concept of
semi-vector space was investigated over time,
there exist few works available in the literature dealing
with such spaces \cite{Radstrom:1952,Prakash:1974,Prakash:1976,Pap:1980,Gahler:1999,Janyska:2007,Milfont:2021}.
This fact is perhaps due to the limitations that such a concept brings,
i.e., the non-existence of additive inverses (symmetric elements) for some (or even all) semi-vectors. A
textbook in such a topic of research is the book by Kandasamy~\cite{Kandasamy:2002}.
Although the seminal paper on semi-vector spaces is \cite{Prakash:1974},
the idea of such a concept was implicit in \cite{Radstrom:1952},
where Radstrom showed that a semi-vector space over the semi-field of nonnegative real
numbers can be extended to a real vector space
(see \cite[Theorem 1-B.]{Radstrom:1952}).
In \cite{Prakash:1974}, Prakash and Sertel investigated the
structure of topological semi-vector spaces. The authors were concerned with
the study of the existence of fixed points in compact convex sets and also
with generating min-max theorems in topological semi-vector spaces. In
\cite{Prakash:1976}, Prakash and Sertel investigated
properties of the topological semi-vector space consisting
of nonempty compact subsets of a real Hausdorff topological vector space.
In \cite{Pap:1980}, Pap investigated and formulated the concept of integrals of
functions having, as counter-domain, complete semi-vector spaces. W.
Gahler and S. Gahler \cite{Gahler:1999} showed that an (ordered) semi-vector
space can be extended to an (ordered) vector space and that an (ordered)
semi-algebra can be extended to an (ordered) algebra. Moreover, they
provided an extension of fuzzy numbers. Janyska et al.~\cite{Janyska:2007}
developed such a theory (of semi-vector spaces) by proving useful results
and defining the semi-tensor product of (semi-free) semi-vector spaces.
They were also interested in proposing an algebraic model of physical scales.
Canarutto~\cite{Canarutto:2012} explored the concept of semi-vector spaces
to express aspects of, and to exploit nonstandard mathematical notions in,
the basics of quantum particle physics on
a curved Lorentzian background. Moreover, he dealt with the case of
electroweak interactions. Additionally, in \cite{Canarutto:2016},
Canarutto provided a suitable formulation of the fundamental mathematical
concepts with respect to quantum field theory. Such a paper presents a
natural application of the concept of semi-vector spaces and semi-algebras.
Recently, Bedregal et al. \cite{Milfont:2021} investigated (ordered)
semi-vector spaces over a weak semi-field $K$ (i.e., both $(K, +)$ and
$(K, \bullet)$ are monoids) in the context of fuzzy sets and applying
the results in multi-criteria group decision-making.
In this paper we extend the theory of semi-vector spaces. The semi-field of
scalars considered here is the semi-field of nonnegative real numbers.
We prove several results in the context of semi-vector spaces and semi-linear
transformations. We introduce the concept of semi-eigenvalues
and semi-eigenvectors of an operator and of a matrix, showing how to compute
them in specific cases. We investigate topological properties such as
completeness, compactness and separability of semi-vector spaces.
Additionally, we present interesting new families
of semi-vector spaces derived from semi-metric, semi-norm, semi-inner
product, metric-preserving
functions among others. Furthermore, we show some results concerning semi-algebras.
Summarizing, we provide new results on semi-vector spaces and semi-algebras,
although such theories are very difficult to investigate due to the fact that vectors do
not, in general, have additive inverses (symmetric elements). These new results can possibly be utilized in the
theory of fuzzy sets, either to extend it or to generate new results
concerning such a theory.
The paper is organized as follows. In Section~\ref{sec2} we recall some concepts
on semi-vector spaces which will be utilized in this work.
In Section~\ref{sec3} we present and prove several results concerning semi-vector spaces
and semi-linear transformations. We introduce naturally the concepts of eigenvalue
and eigenvector of a semi-linear operator and of matrices. Additionally, we exhibit
and show interesting examples of semi-vector spaces derived from semi-metric, semi-norms,
metric-preserving functions among others. Results concerning semi-algebras are
also presented. In Section~\ref{sec3a} we show relationships between Fuzzy Set Theory
and the theory of semi-vector spaces and semi-algebras. Finally, a summary
of this paper is presented in Section~\ref{sec4}.
\section{Preliminaries}\label{sec2}
In this section we recall important facts on semi-vector spaces
necessary for the development of this work. In order to
define formally such concept, it is necessary to define the concepts of
semi-ring and semi-field.
\begin{definition}\label{defSR}
A semi-ring $(S, + , \bullet )$ is a set $S$ endowed with two binary
operations, $+: S\times S\longrightarrow S$ (addition),
$\bullet: S\times S\longrightarrow S$ (multiplication) such that: $\operatorname{(1)}$
$(S, +)$ is a commutative monoid; $\operatorname{(2)}$ $(S, \bullet)$ is a semigroup;
$\operatorname{(3)}$ the multiplication $\bullet$ is distributive with respect to $+$:
$\forall \ x, y, z \in S$, $(x + y)\bullet z = x\bullet z + y\bullet z$ and
$x\bullet(y + z ) = x\bullet y + x\bullet z$.
\end{definition}
We write $S$ instead of writing $(S, + , \bullet )$ if there is no
possibility of confusion. If the multiplication $\bullet$
is commutative then $S$ is a commutative semi-ring. If there exists $1
\in S$ such that, $ \forall \ x \in S$ one has
$1\bullet x = x\bullet 1 = x$, then $S$ is a semi-ring with identity.
\begin{definition}\cite[Definition 3.1.1]{Kandasamy:2002}\label{defSF}
A semi-field is an ordered triple $(K, +, \bullet )$ which is a
commutative semi-ring with unit satisfying the following conditions:
$\operatorname{(1)}$ $\forall \ x, y \in K$, if $x+y=0$ then $x=y=0$;
$\operatorname{(2)}$ if $x, y \in K$ and $x\bullet y = 0$ then $x=0$ or $y=0$.
\end{definition}
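For instance, the set ${\mathbb R}_{0}^{+}={\mathbb R}^{+}\cup \{0\}$ of nonnegative real numbers, equipped with the usual addition and multiplication, is a semi-field: it is a commutative semi-ring with identity $1$; moreover, $x+y=0$ with $x, y \geq 0$ forces $x=y=0$, and $x\bullet y=0$ implies $x=0$ or $y=0$. This is precisely the semi-field of scalars adopted throughout this paper.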
Before proceeding further, it is interesting to observe that in \cite{Gahler:1999}
the authors considered the additive cancellation law in the definition of
semi-vector space. In \cite{Janyska:2007}, the authors did not assume the existence
of the zero (null) vector.
In this paper we consider the definition of a semi-vector space in the context of
that shown in \cite{Gahler:1999}, Sect.3.1.
\begin{definition}\label{defSVS}
A semi-vector space over a semi-field $K$ is an ordered triple
$(V,$ $+, \cdot)$, where $V$ is a set endowed with the operations
$+: V\times V\longrightarrow V$ (vector addition) and
$\cdot: K\times V\longrightarrow V$
(scalar multiplication) such that:
\begin{itemize}
\item [ $\operatorname{(1)}$] $(V, +)$ is an abelian monoid
equipped with the additive cancellation law: $\forall \ u, v, w \in V$,
if $u + v = u + w$ then $v = w$;
\item [ $\operatorname{(2)}$] $\forall$ $\alpha\in K$ and $\forall$
$u, v \in V$, $\alpha (u+v)=\alpha u + \alpha v$;
\item [ $\operatorname{(3)}$] $\forall$ $\alpha, \beta \in K$
and $\forall$ $v\in V$, $(\alpha + \beta)v= \alpha v + \beta v$;
\item [ $\operatorname{(4)}$] $\forall$ $\alpha, \beta \in K$
and $\forall$ $v\in V$, $(\alpha\beta)v=\alpha (\beta v)$;
\item [ $\operatorname{(5)}$] $\forall$ $v \in V$ and $1 \in K$, $1v=v$.
\end{itemize}
\end{definition}
Note that from Item~$\operatorname{(1)}$ of Definition~\ref{defSVS},
all semi-vector spaces considered in this paper are \emph{regular},
that is, the additive cancellation law is satisfied. The zero (or null) vector
of $V$, which is unique, will be denoted by $0_{V}$. Let $v \in V$, $v\neq 0 $.
If there exists $u \in V$ such
that $v + u =0$ then $v$ is said to be \emph{symmetrizable}.
A semi-vector space $V$ is said to be \emph{simple} if the unique
symmetrizable element is the zero vector $0_{V}$. In other words, $V$ is
simple if it has no nonzero symmetrizable elements.
\begin{definition}\cite[Definition 1.4]{Janyska:2007}\label{defSBasis}
Let $V$ be a simple semi-vector space over ${\mathbb R}_{0}^{+}$.
A subset $B \subset V$ is called a semi-basis of $V$ if
every $v \in V$, $v\neq 0$, can be written in a unique way as $v =
\displaystyle\sum_{i \in I_v}^{} v^{(i)} b_i$,
where $v^{(i)} \in {\mathbb R}^{+}$, $b_i \in B$ and $I_v$ is a
finite family of indices uniquely determined by $v$. The finite
subset $B_v \subset B$ defined by $B_v := \{b_i \}_{i \in I_v }$
is uniquely determined by $v$. If a semi-vector
space $V$ admits a semi-basis then it is said to be semi-free.
\end{definition}
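For instance, the semi-vector space ${[{\mathbb R}_{0}^{+}]}^{n}$ (which is simple) is semi-free: the canonical vectors $e_1 , e_2 , \ldots , e_n$, where $e_i$ has $1$ in the $i$-th coordinate and $0$ elsewhere, form a semi-basis, since every nonzero $v=(v_1 , v_2 , \ldots , v_n )$ can be written in a unique way as $v = \displaystyle\sum_{i \in I_v}^{} v_i e_i$ with $I_v =\{ i ; v_i > 0\}$.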
The concept of semi-dimension can be defined for semi-free semi-vector
spaces in an analogous way to that of dimension, due to the next result.
\begin{corollary}\cite[Corollary 1.7]{Janyska:2007}
Let $V$ be a semi-free semi-vector space. Then all semi-bases of $V$ have
the same cardinality.
\end{corollary}
Therefore, the semi-dimension of a semi-free semi-vector space is the
cardinality of a semi-basis (consequently, of all semi-bases) of $V$.
We next present some examples of semi-vector spaces.
\begin{example}\label{ex1}
All real vector spaces are semi-vector spaces, but they are not simple.
\end{example}
\begin{example}\label{ex2}
The set ${[{\mathbb R}_{0}^{+}]}^{n}=\underbrace{{\mathbb R}_{0}^{+}
\times \ldots \times {\mathbb R}_{0}^{+}}_{n \operatorname{times}}$
endowed with the usual sum of coordinates and scalar multiplication is a
semi-vector space over ${\mathbb R}_{0}^{+}$.
\end{example}
\begin{example}\label{ex3}
The set ${\mathcal M}_{n\times m}({\mathbb R}_{0}^{+})$ of $n \times m$ matrices
whose entries are nonnegative real numbers equipped with the sum of
matrices and multiplication of a matrix by a scalar (in ${\mathbb R}_{0}^{+}$, of course)
is a semi-vector space over ${\mathbb R}_{0}^{+}$.
\end{example}
\begin{example}\label{ex4}
The set ${\mathcal P}_{n}[x]$ of polynomials with coefficients
from ${\mathbb R}_{0}^{+}$ and degree less than or equal
to $n$, equipped with the usual polynomial sum and scalar multiplication,
is a semi-vector space.
\end{example}
\begin{definition}\label{semi-subspace}
Let $(V, +, \cdot )$ be a semi-vector space over ${\mathbb R}_{0}^{+}$. We say that a
non-empty subset $W$ of $V$ is a semi-subspace of $V$ if $W$ is closed
under both addition and scalar multiplication of $V$, that is,
\begin{itemize}
\item [ $\operatorname{(1)}$] $\forall \ w_1 , w_2 \in W \Longrightarrow
w_1 + w_2 \in W$;
\item [ $\operatorname{(2)}$] $\forall \ \lambda \in {\mathbb R}_{0}^{+}$ and
$\forall \ w \in W \Longrightarrow \lambda w \in W$.
\end{itemize}
\end{definition}
The uniqueness of the zero vector implies that for each $\lambda \in
{\mathbb R}_{0}^{+}$ one has $\lambda 0_{V} = 0_{V}$. Moreover, if $ v \in V$,
it follows that $0 v = 0 v + 0 v$; applying the regularity one obtains $0 v =0_{V}$.
Therefore, from Item~$\operatorname{(2)}$, every
semi-subspace contains the zero vector.
\begin{example}\label{ex4a}
Let ${\mathbb Q}_{0}^{+}$ denote the set of nonnegative rational numbers.
The semi-vector space ${\mathbb Q}_{0}^{+}$, considered as a
${\mathbb Q}_{0}^{+}$ space, is a semi-subspace of
${\mathbb R}_{0}^{+}$ considered as a ${\mathbb Q}_{0}^{+}$ space.
\end{example}
\begin{example}\label{ex4b}
For each positive integer $ i \leq n$, the subset ${\mathcal P}_{(i)}[x]\cup \{0_{p}\}$,
where ${\mathcal P}_{(i)}[x]=\{p(x); \partial (p(x))=i \}$ (here $\partial (p(x))$ denotes the degree of $p(x)$)
and $0_{p}$ is the null polynomial, is a semi-subspace of ${\mathcal P}_{n}[x]$, shown in
Example~\ref{ex4}.
\end{example}
\begin{example}\label{ex4c}
The set of diagonal matrices of order $n$ with entries in ${\mathbb R}_{0}^{+}$ is a
semi-subspace of ${\mathcal M}_{n}({\mathbb R}_{0}^{+})$,
where the latter is the semi-vector space of square matrices with entries in ${\mathbb R}_{0}^{+}$
(according to Example~\ref{ex3}).
\end{example}
\begin{definition}\cite[Definition 1.22]{Janyska:2007}\label{semilineartrans}
Let $V$ and $W$ be two semi-vector spaces and $T: V\longrightarrow W$ be a map.
We say that $T$ is a semi-linear transformation if:
$\operatorname{(1)}$ $\forall \ v_1, v_2 \in V$, $T(v_1 + v_2) = T(v_1) + T(v_2)$;
$\operatorname{(2)}$ $\forall \lambda \in {\mathbb R}_{0}^{+}$ and
$\forall \ v \in V$, $T(\lambda v) =\lambda T(v)$.
\end{definition}
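For instance, every matrix $A \in {\mathcal M}_{m\times n}({\mathbb R}_{0}^{+})$ induces a semi-linear transformation $T_{A}: {[{\mathbb R}_{0}^{+}]}^{n}\longrightarrow {[{\mathbb R}_{0}^{+}]}^{m}$ given by $T_{A}(v)= Av$ (regarding $v$ as a column vector): indeed, $A(u + v)= Au + Av$ and $A(\lambda v)=\lambda Av$ for all $u, v \in {[{\mathbb R}_{0}^{+}]}^{n}$ and $\lambda \in {\mathbb R}_{0}^{+}$, and $Av$ has nonnegative entries whenever $v$ does.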
If $U$ and $V$ are semi-vector spaces then the set
$\operatorname{Hom}(U, V)=\{ T:U\longrightarrow V;
T \operatorname{is \ semi-linear} \}$ is also a semi-vector space.
\section{The New Results}\label{sec3}
We start this section with important remarks.
\begin{remark}\label{mainremark}
\begin{itemize}
\item [ $\operatorname{(1)}$] Throughout this section we always consider
that the semi-field $K$ is the set of nonnegative real numbers, i.e.,
$K= {\mathbb R}_{0}^{+}={\mathbb R}^{+}\cup \{0\}$.
\item [ $\operatorname{(2)}$] In the whole section (except
Subsection~\ref{subsec2}) we assume that the semi-vector
spaces $V$ are simple, i.e., the unique symmetrizable element is the zero vector $0_{V}$.
\item [ $\operatorname{(3)}$] It is well-known that a semi-vector space
$(V, +, \cdot)$ can be always extended to a vector space according
to the equivalence relation on $V \times V$ defined by $(u_1 , v_1 ) \sim (u_2 , v_2 )$
if and only if $u_1 + v_2 = v_1 + u_2$ (see \cite{Radstrom:1952}; see also
\cite[Section 3.4]{Gahler:1999}). However, our results are obtained without utilizing
such a natural embedding. In other words, if one wants to compute, for instance,
the eigenvalues of a matrix defined over ${\mathbb R}_{0}^{+}$, we cannot solve the
problem in the associated vector space and then discard the negative ones. Put differently,
all computations performed here are restricted to nonnegative real numbers and
also to the fact that no vector (with the exception of $0_V$) has an additive inverse.
However, we will show that, even in this case, several results can be obtained.
\end{itemize}
\end{remark}
\begin{proposition}\label{prop1}
Let $V$ be a semi-vector space over ${\mathbb R}_{0}^{+}$. Then the following hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] let $ v \in V$, $ v \neq 0_{V}$, and $\lambda
\in {\mathbb R}_{0}^{+}$; if $\lambda v = 0_{V}$ then $\lambda = 0$;
\item [ $\operatorname{(2)}$] if $\alpha , \beta \in {\mathbb R}_{0}^{+}$,
$v \in V$ and $ v \neq 0_{V}$, then the equality $\alpha v = \beta v$
implies that $\alpha = \beta$.
\end{itemize}
\end{proposition}
\begin{proof}
$\operatorname{(1)}$ If $\lambda \neq 0$ then there exists its
multiplicative inverse ${\lambda}^{-1}$, hence
$ 1 v = {\lambda}^{-1} 0_{V}= 0_{V}$, i.e., $v = 0_{V}$, a contradiction.\\
$\operatorname{(2)}$ If $\alpha \neq \beta$, assume w.l.o.g.
that $\alpha > \beta$, i.e., there exists a positive real
number $c$ such that $\alpha = \beta + c$. Thus, $\alpha v = \beta v$ implies
$\beta v + c v = \beta v$. From the cancellation law we have $c v = 0_{V}$,
and from Item~$\operatorname{(1)}$ it follows that $c = 0$, a contradiction.
\end{proof}
We next introduce in the literature the concept of eigenvalue and eigenvector of a
semi-linear operator.
\begin{definition}\label{eigenvector}
Let $V$ be a semi-vector space and $T:V\longrightarrow V$ be a semi-linear
operator. If there exist a non-zero vector $v \in V$ and
a nonnegative real number $\lambda$ such that $T(v)=\lambda v$, then $\lambda$ is an
eigenvalue of $T$ and $v$ is an eigenvector of $T$ associated with $\lambda$.
\end{definition}
As it is natural, the zero vector joined to the set of the eigenvectors
associated with a given eigenvalue has a semi-subspace structure.
\begin{proposition}\label{eigenspace}
Let $V$ be a semi-vector space over ${\mathbb R}_{0}^{+}$ and
$T:V\longrightarrow V$ be a semi-linear operator. Then the set
$V_{\lambda} = \{ v \in V ; T(v)=\lambda v \}\cup \{0_{V}\}$ is a semi-subspace of $V$.
\end{proposition}
\begin{proof}
From hypotheses, $V_{\lambda}$ is non-empty. Let $u, v \in V_{\lambda}$, i.e.,
$T(u)=\lambda u $ and $T(v)=\lambda v $. Hence, $T(u + v )= T(u) + T(v)=
\lambda (u + v )$, i.e., $u + v \in V_{\lambda}$. Further, if $\alpha \in {\mathbb R}_{0}^{+}$
and $u \in V_{\lambda}$, it follows that $T(\alpha u)=\alpha T(u)= \lambda (\alpha u)$, that is,
$\alpha u \in V_{\lambda}$. Therefore, $V_{\lambda}$ is a semi-subspace of $V$.
\end{proof}
The next natural step would be to introduce the characteristic polynomial
of a matrix, as in standard Linear Algebra. However, how can one compute
$\det (A -\lambda I)$ if $-\lambda$ can be a negative real number? Based
on this fact, we must be careful when computing the eigenvectors of a matrix.
In fact, the main tools to be utilized in computing eigenvalues/eigenvectors of
a square matrix whose entries are nonnegative real numbers are the
additive cancellation law in ${\mathbb R}_{0}^{+}$ and also the fact that positive
real numbers have multiplicative inverses. However, in many cases, such tools are
not sufficient to solve the problem. Let us see some cases when it is
possible to compute eigenvalues/eigenvectors of a matrix.
\begin{example}\label{examatr1}
Let us see how to obtain (if there exists) an eigenvalue/eigenvector of a
diagonal matrix $A \in {\mathcal M}_{2}({\mathbb R}_{0}^{+})$,
\begin{eqnarray*}
A= \left[\begin{array}{cc}
a & 0\\
0 & b\\
\end{array}
\right],
\end{eqnarray*}
where $a \neq b$, not both zero.
Let us assume first that $a, b > 0$.
Solving the equation $A v = \lambda v$, that is,
\begin{eqnarray*}
\left[\begin{array}{cc}
a & 0\\
0 & b\\
\end{array}
\right]
\left[\begin{array}{c}
x\\
y\\
\end{array}
\right]= \left[\begin{array}{c}
\lambda x\\
\lambda y\\
\end{array}
\right],
\end{eqnarray*}
we obtain $\lambda = a$ with associated eigenvector $x(1, 0)$ and
$\lambda = b$ with associated eigenvector $y(0, 1)$.
If $a\neq 0$ and $b = 0$, then $\lambda = a$ with eigenvectors $x(1, 0)$.
If $a = 0$ and $b \neq 0$, then $\lambda = b$ with eigenvectors $y(0, 1)$.
\end{example}
\begin{example}\label{examatr2}
Let $A \in {\mathcal M}_{2}({\mathbb R}_{0}^{+})$ be a matrix of the form
\begin{eqnarray*}
A= \left[\begin{array}{cc}
a & b\\
0 & a\\
\end{array}
\right],
\end{eqnarray*}
where $a \neq b$ are positive real numbers. Let us solve the matrix equation:
\begin{eqnarray*}
\left[\begin{array}{cc}
a & b\\
0 & a\\
\end{array}
\right]
\left[\begin{array}{c}
x\\
y\\
\end{array}
\right]= \left[\begin{array}{c}
\lambda x\\
\lambda y\\
\end{array}
\right].
\end{eqnarray*}
If $ y \neq 0$, $\lambda = a$; hence $b y = 0$, which
implies $b=0$, a contradiction. If $ y = 0$, $x \neq 0$; hence $\lambda = a$
with eigenvectors $(x, 0)$.
\end{example}
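As a further illustration, consider the matrix with positive entries
\begin{eqnarray*}
A= \left[\begin{array}{cc}
1 & 2\\
2 & 1\\
\end{array}
\right].
\end{eqnarray*}
Writing $v = (x, y)$ and adding the two coordinates of the equation $Av=\lambda v$, we obtain $3(x+y)=\lambda (x+y)$; if $x, y > 0$ this gives $\lambda = 3$ and, from the first coordinate, $x + 2y = 3x$, i.e., $y=x$. If $y=0$ and $x\neq 0$ (respectively, $x=0$ and $y \neq 0$) the equation forces $2x=0$ (respectively, $2y=0$), a contradiction. Therefore the only eigenvalue of $A$ is $\lambda =3$, with eigenvectors $x(1, 1)$, $x > 0$; note that the remaining eigenvalue of $A$ regarded as a real matrix, namely $-1$, does not appear here.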
If $V$ and $W$ are semi-free semi-vector spaces then it is possible to define the
matrix of a semi-linear transformation $T: V \longrightarrow W$ as in the usual
case (vector spaces).
\begin{definition}\label{semi-free matrix}
Let $T: V \longrightarrow W$ be a semi-linear transformation between
semi-free semi-vector spaces with semi-basis $B_1$ and $B_2$, respectively.
Then the matrix $[T]_{B_1}^{B_2}$, whose columns are the coordinates, with respect to $B_2$, of the images under $T$ of the vectors of $B_1$, is called the matrix of the transformation $T$.
\end{definition}
\begin{theorem}\label{diagonalmatrix}
Let $V$ be a semi-free semi-vector space over ${\mathbb R}_{0}^{+}$ and
let $T:V\longrightarrow V$ be a semi-linear operator, and let
$B = \{ v_1 , v_2 , \ldots , v_n \}$ be a semi-basis of $V$. Then ${[T]}_{B}^{B}$
is diagonal if and only if $B$ consists of eigenvectors of $T$.
\end{theorem}
\begin{proof}
The proof is analogous to the case of vector spaces.
Let $B=\{ v_1 , v_2 , \ldots ,$ $v_n \}$ be a semi-basis of $V$ whose elements
are eigenvectors of $T$. We then have:
\begin{eqnarray*}
T(v_1)= {\lambda}_1 v_1 + 0 v_2 + \ldots + 0 v_n,\\
T(v_2)= 0 v_1 + {\lambda}_{2} v_2 + \ldots + 0 v_n,\\
\vdots\\
T(v_n)= 0 v_1 + 0 v_2 + \ldots + {\lambda}_{n} v_n,
\end{eqnarray*}
which implies that $[T]_{B}^{B}$ is of the form
\begin{eqnarray*}
[T]_{B}^{B}= \left[\begin{array}{ccccc}
{\lambda}_1 & 0 & 0 & \ldots & 0\\
0 & {\lambda}_2 & 0 & \ldots & 0\\
\vdots & \vdots & \vdots & \ldots & \vdots\\
0 & 0 & 0 & \ldots & {\lambda}_{n}\\
\end{array}
\right].
\end{eqnarray*}
On the other hand, let $B^{*}= \{ w_1 , w_2 , \ldots , w_n \}$ be a
semi-basis of $V$ such that $[T]_{B^{*}}^{B^{*}}$ is diagonal:
\begin{eqnarray*}
[T]_{B^{*}}^{B^{*}}=\left[\begin{array}{ccccc}
{\alpha}_1 & 0 & 0 & \ldots & 0\\
0 & {\alpha}_2 & 0 & \ldots & 0\\
\vdots & \vdots & \vdots & \ldots & \vdots\\
0 & 0 & 0 & \ldots & {\alpha}_{n}\\
\end{array}
\right];
\end{eqnarray*} thus,\\
\begin{eqnarray*}
T(w_1)= {\alpha}_1 w_1 + 0 w_2 + \ldots + 0 w_n = {\alpha}_1 w_1,\\
T(w_2)= 0 w_1 + {\alpha}_{2} w_2 + \ldots + 0 w_n = {\alpha}_{2} w_2,\\
\vdots\\
T(w_n)= 0 w_1 + 0 w_2 + \ldots + {\alpha}_{n} w_n = {\alpha}_{n} w_{n}.
\end{eqnarray*}
This means that $w_i$ are eigenvectors of $T$ with corresponding
eigenvalues ${\alpha}_{i}$, for all $i = 1, 2, \ldots , n$.
\end{proof}
\begin{definition}\label{kernel}
Let $T: V \longrightarrow W$ be a semi-linear transformation. The set
$\operatorname{Ker}(T)=\{ v \in V ; T(v)=0_{W}\}$ is called the kernel of $T$.
\end{definition}
\begin{proposition}\label{subkernel}
Let $T: V \longrightarrow W$ be a semi-linear transformation.
Then the following hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] $\operatorname{Ker}(T)$ is a semi-subspace of $V$;
\item [ $\operatorname{(2)}$] if $T$ is injective then
$\operatorname{Ker}(T) = \{0_{V}\}$;
\item [ $\operatorname{(3)}$] if $V$ has semi-dimension $1$ then
$\operatorname{Ker}(T) = \{0_{V}\}$ implies that $T$ is injective.
\end{itemize}
\end{proposition}
\begin{proof}
$\operatorname{(1)}$ We have $T(0_{V})= T(0_{V})+T(0_{V})$. Since $W$
is regular, it follows that $T(0_{V})=0_{W}$, which implies
$\operatorname{Ker}(T) \neq \emptyset$. If $u, v \in \operatorname{Ker}(T)$ and $\lambda \in
{\mathbb R}_{0}^{+}$, then $u + v \in \operatorname{Ker}(T)$ and $\lambda v \in
\operatorname{Ker}(T)$, which implies that $\operatorname{Ker}(T)$
is a semi-subspace of $V$.\\
$\operatorname{(2)}$ Since $T(0_{V})=0_{W}$, it follows that
$\{0_{V}\}\subseteq \operatorname{Ker}(T)$. On the other hand, let $ u \in
\operatorname{Ker}(T)$, that is, $T(u)=0_{W}$. Since $T$ is injective, one has $u = 0_{V}$.
Hence, $\operatorname{Ker}(T) = \{0_{V}\}$.\\
$\operatorname{(3)}$ Let $B=\{ v_0 \}$ be a semi-basis of $V$. Assume that
$T(u) = T(v)$, where $u, v \in V$ are such that $u = \alpha v_0$ and $v = \beta v_0 $. Hence,
$\alpha T(v_0) = \beta T(v_0 )$. Since $\operatorname{Ker}(T) = \{0_{V}\}$ and $v_0 \neq 0$,
it follows that $T(v_0) \neq 0$. From Item~$\operatorname{(2)}$ of Proposition~\ref{prop1},
one has $\alpha = \beta$, i.e., $u = v$.
\end{proof}
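We observe that the restriction on the semi-dimension in Item~$\operatorname{(3)}$ cannot be removed in general. For instance, the semi-linear transformation $T:{[{\mathbb R}_{0}^{+}]}^{2}\longrightarrow {\mathbb R}_{0}^{+}$ given by $T(x, y)= x+y$ satisfies $\operatorname{Ker}(T)=\{(0, 0)\}$, since $x+y=0$ in ${\mathbb R}_{0}^{+}$ forces $x=y=0$; nevertheless, $T$ is not injective, because $T(1, 0)=T(0, 1)=1$.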
\begin{definition}\label{image}
Let $T: V \longrightarrow W$ be a semi-linear transformation.
The image of $T$ is the set of all vectors $w \in W$ such
that there exists $v \in V$ with $T(v)=w$, that is, $\operatorname{Im}(T)=\{
w \in W ; \exists \ v \in V \operatorname{with} T(v)=w\}$.
\end{definition}
\begin{proposition}\label{subImage}
Let $T: V \longrightarrow W$ be a semi-linear transformation.
Then the image of $T$ is a semi-subspace of $W$.
\end{proposition}
\begin{proof}
The set $\operatorname{Im}(T)$ is non-empty because $T(0_{V})=0_{W}$. It is
easy to see that if $w_1 , w_2 \in \operatorname{Im}(T)$ and $\lambda \in
{\mathbb R}_{0}^{+}$, then $ w_1 + w_2 \in \operatorname{Im}(T)$ and
$\lambda w_1 \in \operatorname{Im}(T)$.
\end{proof}
\begin{theorem}\label{isosemi}
Let $V$ be an $n$-dimensional semi-free semi-vector space over
${\mathbb R}_{0}^{+}$. Then $V$ is isomorphic to $({\mathbb R}_{0}^{+})^{n}$.
\end{theorem}
\begin{proof}
Let $B = \{ v_1 , v_2 , \ldots , v_n \}$ be a semi-basis of $V$ and consider
the canonical semi-basis
$e_i = (0, 0, \ldots , $ $0, \underbrace{1}_{i}, 0, \ldots, 0)$
of $({\mathbb R}_{0}^{+})^{n}$, where $i=1, 2, \ldots , n$.
Define the map $T:V \longrightarrow ({\mathbb R}_{0}^{+})^{n}$ as follows:
for each $v = \displaystyle\sum_{i=1}^{n}a_i v_i \in V$, put
$T(v) = \displaystyle\sum_{i=1}^{n}a_i e_i$. It is easy to see that $T$
is a bijective semi-linear transformation, i.e., $V$ is isomorphic to
$({\mathbb R}_{0}^{+})^{n}$, as required.
\end{proof}
\subsection{Complete Semi-Vector Spaces}\label{subsec1}
We here define and study complete semi-vector spaces, i.e.,
semi-vector spaces whose norm (inner product) induces a metric under
which the space is complete.
\begin{definition}\label{semiBanach}
Let $V$ be a semi-vector space over ${\mathbb R}_{0}^{+}$. If there exists a
norm $\| \ \|:V \longrightarrow {\mathbb R}_{0}^{+}$ on $V$ we say that $V$ is a
normed semi-vector space (or normed semi-space, for short). If the norm defines a metric
on $V$ under which $V$ is complete then $V$ is said to be Banach semi-vector space.
\end{definition}
\begin{definition}\label{semiHilbert}
Let $V$ be a semi-vector space over ${\mathbb R}_{0}^{+}$. If there
exists an inner product $\langle \ , \ \rangle:V\times V
\longrightarrow {\mathbb R}_{0}^{+}$ on $V$ then $V$ is an inner product
semi-vector space (or inner product semi-space).
If the inner product defines a metric on $V$ under which
$V$ is complete then $V$ is said to
be Hilbert semi-vector space.
\end{definition}
The well-known norms on ${\mathbb R}^n$ are also norms on
$[{\mathbb R}_{0}^{+}]^{n}$, as we show in the next propositions.
\begin{proposition}\label{R+1}
Let $V = [{\mathbb R}_{0}^{+}]^{n}$ be the Euclidean semi-vector space
(over ${\mathbb R}_{0}^{+}$) of
semi-dimension $n$. Define the function $\| \ \|:V \longrightarrow
{\mathbb R}_{0}^{+}$ as follows: if $x = (x_1 , x_2 , \ldots ,$ $x_n ) \in V$,
put $\| x \|=\sqrt{x_1^2 + x_2^2 + \ldots + x_n^2}$. Then $\| \ \|$ is a
norm on $V$, called the Euclidean norm on $V$.
\end{proposition}
\begin{proof}
It is clear that $\| x \| = 0$ if and only if $x=0$ and for all
$\alpha \in {\mathbb R}_{0}^{+}$ and $x \in V$,
$\| \alpha x \| = |\alpha | \| x \|$. To show the triangle inequality it is
sufficient to apply the
Cauchy-Schwarz inequality in ${\mathbb R}_{0}^{+}$: if
$x = (x_1 , x_2 , \ldots , x_n )$ and $y = (y_1 , y_2 , \ldots , y_n )$ are
semi-vectors in $V$ then $\displaystyle\sum_{i=1}^{n} x_i y_i \leq
{\left(\displaystyle\sum_{i=1}^{n} x_i^2 \right)}^{1/2} \cdot
{\left(\displaystyle\sum_{i=1}^{n} y_i^2 \right)}^{1/2}$.
\end{proof}
In the next result we show that the Euclidean norm on $[{\mathbb R}_{0}^{+}]^{n}$
generates the Euclidean metric on it.
\begin{proposition}\label{R+1a}
Let $x = (x_1 , x_2 , \ldots ,x_n )$, $y = (y_1 , y_2 , \ldots , y_n )$ be
semi-vectors in $V = [{\mathbb R}_{0}^{+}]^{n}$. Define the function
$d:V \times V \longrightarrow {\mathbb R}_{0}^{+}$ as follows: for every fixed $i$,
if $x_i = y_i$ put $c_i =0$; if $x_i \neq y_i$, put
${\varphi}_i = {\psi}_i + c_i$, where ${\varphi}_i =\max \{x_i, y_i \}$ and
${\psi}_i =\min \{ x_i , y_i\}$ (in this case, $c_i > 0$); then consider
$d(x, y) = \sqrt{c_1^2 + \ldots + c_n^2}$. The function $d$ is a metric on $V$.
\end{proposition}
\begin{remark}
Note that in Proposition~\ref{R+1a} we could have defined $c_i$
simply by the nonnegative real number satisfying $\max
\{x_i, y_i \}=\min \{x_i, y_i \} + c_i$. However, we prefer to separate
the cases when $c_i=0$ and $c_i > 0$ in order to improve the
readability of this paper.
\end{remark}
\begin{proof}
It is easy to see that $d(x, y)=0$ if and only if $x=y$ and $d(x, y)=d(y,x)$.
We will next prove the triangle inequality. To do this, let
$x = (x_1 , x_2 , \ldots ,x_n )$, $y = (y_1 , y_2 , \ldots , y_n )$
and $z = (z_1 , z_2 , \ldots , z_n )$ be semi-vectors in $V = [{\mathbb R}_{0}^{+}]^{n}$.
We look first at a fixed $i$. If
$x_i = y_i = z_i$ or if two of them are equal then $d(x_i , z_i ) \leq d(x_i , y_i ) +
d(y_i, z_i )$. Let us then assume that $x_i$, $y_i$ and $z_i$ are pairwise distinct.
We have to analyze the six cases: $\operatorname{(1)}$ $x_i < y_i < z_i$;
$\operatorname{(2)}$ $x_i < z_i < y_i$; $\operatorname{(3)}$ $y_i < x_i < z_i$;
$\operatorname{(4)}$ $y_i < z_i < x_i$; $\operatorname{(5)}$ $z_i
< x_i < y_i$; $\operatorname{(6)}$ $z_i < y_i < x_i$.
In order to verify the triangle inequality we will see what occurs in the worst cases.
More precisely, we assume that for all $i=1, 2, \ldots , n$ we have
$x_i < y_i < z_i$ or $z_i < y_i < x_i$. Since both cases are analogous we only
verify the (first) case $x_i < y_i < z_i$, for all $i$. In such cases there exist
positive real numbers $a_i$, $b_i$, for all $i=1, 2, \ldots , n$,
such that $y_i = x_i + a_i$ and $z_i = y_i + b_i$, which implies
$z_i = x_i + a_i + b_i$. We need to show that
$d(x, z) \leq d(x, y) + d(y, z)$, i.e.,
${\left(\displaystyle\sum_{i=1}^{n}(a_i + b_i)^2\right)}^{1/2} \leq
{\left(\displaystyle\sum_{i=1}^{n} a_i^2\right)}^{1/2} +
{\left(\displaystyle\sum_{i=1}^{n} b_i^2\right)}^{1/2}$.
The last inequality is equivalent to the inequality
$\displaystyle\sum_{i=1}^{n} (a_i + b_i)^2 \leq
\displaystyle\sum_{i=1}^{n} a_i^2 + \displaystyle\sum_{i=1}^{n} b_i^2 +
2{\left(\displaystyle\sum_{i=1}^{n} a_i^2 \right)}^{1/2}
\cdot {\left(\displaystyle\sum_{i=1}^{n}
b_i^2\right)}^{1/2}$. Again, the
last inequality is equivalent to $\displaystyle\sum_{i=1}^{n} a_i b_i \leq
{\left(\displaystyle\sum_{i=1}^{n} a_i^2\right)}^{1/2}\cdot {\left(\displaystyle\sum_{i=1}^{n}
b_i^2\right)}^{1/2}$, which is the Cauchy-Schwarz inequality in
${\mathbb R}_{0}^{+}$. Therefore, $d$ satisfies the triangle inequality, hence it is a
metric on $V$.
\end{proof}
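For instance, in ${[{\mathbb R}_{0}^{+}]}^{2}$, taking $x=(1, 5)$ and $y=(4, 3)$ one has $c_1 =3$ (since $4 = 1 + 3$) and $c_2 =2$ (since $5 = 3 + 2$), so that $d(x, y)=\sqrt{3^2 + 2^2}=\sqrt{13}$, in agreement with the usual Euclidean distance computed in ${\mathbb R}^{2}$.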
\begin{remark}
Note that Proposition~\ref{R+1a} means that the Euclidean
norm on $[{\mathbb R}_{0}^{+}]^{n}$
(see Proposition~\ref{R+1}) generates the Euclidean metric on $[{\mathbb R}_{0}^{+}]^{n}$.
This result is analogous to the fact that every norm defined on vector spaces
generates a metric on it. Further, a semi-vector space $V$ is Banach
(see Definition~\ref{semiBanach}) if the norm generates a metric under which
every Cauchy sequence in $V$ converges to an element of $V$.
\end{remark}
\begin{proposition}\label{R+1b}
Let $V = [{\mathbb R}_{0}^{+}]^{n}$ and define the function
$\langle \ , \ \rangle:V\times V \longrightarrow
{\mathbb R}_{0}^{+}$ as follows: if $u = (x_1 , x_2 , \ldots , x_n )$ and
$v = (y_1 , y_2 , \ldots , y_n )$ are semi-vectors in $V$, put
$\langle u , v \rangle = \displaystyle\sum_{i=1}^{n}x_i y_i$. Then
$\langle \ , \ \rangle$ is an inner product on $V$,
called dot product.
\end{proposition}
\begin{proof}
The proof is immediate.
\end{proof}
\begin{proposition}\label{R+1c}
The dot product on $V = [{\mathbb R}_{0}^{+}]^{n}$ generates the
Euclidean norm on $V$.
\end{proposition}
\begin{proof}
If $x= (x_1 , x_2 , \ldots , x_n ) \in V$, define the norm of $x$ by
$\| x \|=\sqrt{\langle x, x\rangle}$. Note that the norm is exactly
the Euclidean norm given in Proposition~\ref{R+1}.
\end{proof}
\begin{remark}
We observe that if an inner product on a semi-vector space $V$ generates
a norm $\| \ \|$ and such a norm generates a metric $d$ on $V$,
then $V$ is a Hilbert space (according to Definition~\ref{semiHilbert}) if every Cauchy
sequence in $V$ converges w.r.t. $d$ to an element of $V$.
\end{remark}
\begin{proposition}\label{R+2}
Let $V = [{\mathbb R}_{0}^{+}]^{n}$ and define the function ${\| \ \|}_1:V
\longrightarrow {\mathbb R}_{0}^{+}$ as follows: if $x = (x_1 ,
x_2 , \ldots ,$ $x_n ) \in V$, ${\| x \|}_1=\displaystyle\sum_{i=1}^{n} x_i$.
Then ${\| x \|}_1$ is a norm on $V$.
\end{proposition}
\begin{proof}
The proof is direct.
\end{proof}
\begin{proposition}\label{R+2a}
Let $x = (x_1 , x_2 , \ldots ,x_n )$, $y = (y_1 , y_2 , \ldots , y_n )$ be
semi-vectors in $V = [{\mathbb R}_{0}^{+}]^{n}$. Define the function
$d_1:V \times V \longrightarrow {\mathbb R}_{0}^{+}$ in the following way. For every fixed $i$,
if $x_i = y_i$, put $c_i =0$; if $x_i \neq y_i$, put
${\varphi}_i = {\psi}_i + c_i$, where ${\varphi}_i =\max \{x_i, y_i \}$ and
${\psi}_i =\min \{ x_i , y_i\}$. Let us consider that
$d_1 (x, y) = \displaystyle\sum_{i=1}^{n} c_i $.
Then the function $d_1$ is a metric on $V$ derived from the norm
${\| \ \|}_1$ shown in Proposition~\ref{R+2}.
\end{proposition}
\begin{proof}
We only prove the triangle inequality. To avoid stress of notation,
we consider the same that was considered in the proof of
Proposition~\ref{R+1a}. We then fix $i$ and only investigate the worst case
$x_i < y_i < z_i$. In this case, there exist positive real numbers
$a_i$, $b_i$ for all $i=1, 2 , \ldots , n$, such that $y_i = x_i + a_i$
and $z_i = y_i + b_i$, which implies $z_i = x_i + a_i + b_i$. Then, for all
$i$, $d_1 (x_i , z_i) \leq d_1 (x_i , y_i ) + d_1 (y_i , z_i)$; hence,
$d_1 (x, z)=\displaystyle\sum_{i=1}^{n} d_1 (x_i , z_i) =
\displaystyle\sum_{i=1}^{n} (a_i + b_i ) =
\displaystyle\sum_{i=1}^{n} a_i + \displaystyle\sum_{i=1}^{n} b_i =
\displaystyle\sum_{i=1}^{n} d_1 (x_i , y_i ) + \displaystyle\sum_{i=1}^{n}
d_1 (y_i , z_i )= d_1 (x, y) + d_1 (y, z)$. Therefore, $d_1$ is a metric on $V$.
\end{proof}
\begin{proposition}\label{R+3}
Let $V = [{\mathbb R}_{0}^{+}]^{n}$ be the Euclidean semi-vector space of
semi-dimension $n$. Define the function ${\| \ \|}_2:V \longrightarrow
{\mathbb R}_{0}^{+}$ as follows: if $x = (x_1 , x_2 , \ldots ,$ $x_n ) \in V$,
take ${\| x \|}_2=\displaystyle\max_{i} \{ x_i \}$. Then ${\| x \|}_2$
is a norm on $V$.
\end{proposition}
\begin{proposition}\label{R+3a}
Keeping the notation of Proposition~\ref{R+1a}, define the function
$d_2:V \times V \longrightarrow {\mathbb R}_{0}^{+}$ such that
$d_2 (x, y) = \max_{i} \{ c_i \}$. Then $d_2$ is a metric on $V$. Moreover, $d_2$
is obtained from the norm ${\| \ \|}_2$ exhibited in Proposition~\ref{R+3}.
\end{proposition}
\begin{proposition}\label{R+4}
The norms $\| \ \|$, ${\| \ \|}_1$ and ${\| \ \|}_2$ shown in
Propositions~\ref{R+1},~\ref{R+2} and \ref{R+3} are equivalent.
\end{proposition}
\begin{proof}
It is immediate to see that ${\| \ \|}_2 \leq \| \ \| \leq
{\| \ \|}_1 \leq n {\| \ \|}_2$.
\end{proof}
In a natural way we can define the norm of a bounded semi-linear
transformation.
\begin{definition}\label{semibounded}
Let $V$ and $W$ be two normed semi-vector spaces and let $T:V
\longrightarrow W$ be a semi-linear transformation.
We say that $T$ is bounded if there exists a real number $c > 0$
such that $\| T(v)\|\leq c \| v \|$ for all $v \in V$.
\end{definition}
If $T:V \longrightarrow W$ is bounded and $v \neq 0$
we can consider the quotient $\frac{\| T(v)\|}{\| v \|}$. Since
such a quotient is upper bounded by $c$, the supremum $\displaystyle
\sup_{v \in V, v\neq 0}\frac{\| T(v)\|}{\| v \|}$
exists and it is at most $c$. We then define
$$\| T \|= \displaystyle\sup_{v \in V, v\neq 0}\frac{\| T(v)\|}{\| v \|}.$$
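For instance, let $T:{[{\mathbb R}_{0}^{+}]}^{2}\longrightarrow {\mathbb R}_{0}^{+}$ be the semi-linear transformation given by $T(x, y)=2x+3y$, where both spaces are equipped with the Euclidean norm of Proposition~\ref{R+1}. By the Cauchy-Schwarz inequality, $\| T(x, y)\| = 2x+3y \leq \sqrt{13}\, \sqrt{x^2+y^2}=\sqrt{13}\, \| (x, y)\|$, so $T$ is bounded; since equality holds at $(x, y)=\frac{1}{\sqrt{13}}(2, 3)$, we conclude that $\| T \|=\sqrt{13}$.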
\begin{proposition}\label{R+5}
Let $T: V \longrightarrow W$ be a bounded semi-linear transformation.
Then the following hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] $T$ sends bounded sets in bounded sets;
\item [ $\operatorname{(2)}$] $\| T \|$ is a norm, called norm of $T$;
\item [ $\operatorname{(3)}$] $\| T \|$ can be written in the form
$\| T \|= \displaystyle\sup_{v \in V, \| v \| = 1 } \| T(v) \|$.
\end{itemize}
\end{proposition}
\begin{proof}
Items~$\operatorname{(1)}$~and~ $\operatorname{(2)}$ are immediate.
The proof of Item~$\operatorname{(3)}$ is analogous to the
standard proof but we present it here to guarantee that our mathematical
tools are sufficient to perform it. Let $v\neq 0$ be a semi-vector
with norm $\| v \|= a \neq 0$ and set $u=(1/a)v$. Thus,
$\| u \| =1$ and since $T$ is semi-linear one has
$$\| T \|= \displaystyle\sup_{v \in V, v\neq 0} \frac{1}{a}\| T(v)\|=\displaystyle\sup_{v \in V, v\neq 0} \| T( (1/a) v) \|= \displaystyle\sup_{u \in V, \| u \| =1} \| T(u)\|= \displaystyle\sup_{v \in V, \| v \| =1} \| T(v)\|.$$
\end{proof}
\subsubsection{The Semi-Spaces ${l}_{+}^{\infty}$, ${l}_{+}^{p}$
and ${\operatorname{C}}_{+}[a, b]$}\label{subsubsec1}
In this subsection we investigate topological aspects of
some semi-vector spaces over ${\mathbb R}_{0}^{+}$ such
as completeness and separability. We investigate the sequence spaces
${l}_{+}^{\infty}$, ${l}_{+}^{p}$, ${\operatorname{C}}_{+}[a, b]$,
which will be defined in the sequel.
We first study the space ${l}_{+}^{\infty}$, the set
of all bounded sequences of nonnegative real numbers.
Before studying such a space we must define a metric on it,
since the metric in $l^{\infty}$
which is defined as $ d(x, y)=\displaystyle\sup_{i \in
{\mathbb N}} | {x}_i - {y}_i |$,
where $x = ({x}_i )$ and $y = ({y}_i )$ are sequences in
$l^{\infty}$,
has no meaning to us, because there is no sense in
considering $- {y}_i$ if ${y}_i > 0$. Based on this fact,
we circumvent
this problem by utilizing the total order of ${\mathbb R}$
according to Proposition~\ref{R+1a}. Let $x = ({\mu}_i )$ and
$y = ({\nu}_i )$ be sequences in $l_{+}^{\infty}$. We then fix $i$,
and define $c_i$ as was done in Proposition~\ref{R+1a}:
if ${\mu}_i = {\nu}_i $ then we put $c_i = 0$; if ${\mu}_i \neq {\nu}_i $,
let ${\gamma}_i=\max \{{\mu}_i , {\nu}_i \}$
and ${\psi}_i= \min \{{\mu}_i , {\nu}_i \}$; then there exists a positive
real number $c_i$ such that ${\gamma}_i = {\psi}_i + c_i$
and, in place of $| {\mu}_i - {\nu}_i |$, we put $c_i$. Thus,
our metric becomes
\begin{eqnarray}\label{lmetric}
d(x, y) = \displaystyle\sup_{i \in {\mathbb N}} \{c_i \}.
\end{eqnarray}
It is clear that $d(x, y)$ shown in Eq.~(\ref{lmetric}) defines a metric. However,
we must show that the tools that we have are sufficient to prove this fact,
since we are working on ${\mathbb R}_{0}^{+}$.
\begin{proposition}\label{metricsup}
The function $d$ shown in Eq.~(\ref{lmetric}) is a metric on ${l}_{+}^{\infty}$.
\end{proposition}
\begin{proof}
It is clear that $d(x,y)\geq 0$ and $d(x,y)= 0 \Longleftrightarrow
x=y$. Let $x = ({\mu}_i )$ and $y = ({\nu}_i )$ be two sequences
in $l_{+}^{\infty}$. Then, for every fixed $i \in {\mathbb N}$, if $c_i=
d({\mu}_i , {\nu}_i )=0$ then ${\mu}_i = {\nu}_i$, i.e., $d({\mu}_i
, {\nu}_i )=d({\nu}_i , {\mu}_i )$. If $c_i > 0$
then $c_i= d({\mu}_i , {\nu}_i )$ is computed by ${\gamma}_i =
{\psi}_i + c_i$, where ${\gamma}_i=\max \{{\mu}_i , {\nu}_i \}$
and ${\psi}_i= \min \{{\mu}_i , {\nu}_i \}$. Hence,
$d({\nu}_i , {\mu}_i ) = c_i^{*}$ is computed by
${\gamma}_i^{*} = {\psi}_i^{*} + c_i^{*}$, where
${\gamma}_i^{*}=\max \{{\nu}_i, {\mu}_i \}$ and ${\psi}_i^{*}=
\min \{{\nu}_i, {\mu}_i \}$, which implies $d({\mu}_i , {\nu}_i )
=d({\nu}_i , {\mu}_i )$. Taking the supremum over all $i$'s we have
$d(x, y) = \displaystyle\sup_{i \in {\mathbb N}} \{c_i \}=
\displaystyle\sup_{i \in {\mathbb N}} \{c_i^{*} \}=d(y, x)$.
To show the triangle inequality, let $x = ({\mu}_i )$,
$y = ({\nu}_i )$ and $z=({\eta}_i)$ be sequences in $l_{+}^{\infty}$.
For every fixed $i$, we will prove that
$d({\mu}_i , {\eta}_i )\leq d({\mu}_i , {\nu}_i ) + d({\nu}_i ,
{\eta}_i )$. If ${\nu}_i = {\mu}_i = {\eta}_i$, the result is
trivial. If two of them are equal, the result is also trivial. Assume
that ${\mu}_i$, ${\nu}_i$ and ${\eta}_i$ are pairwise distinct.
As in the proof of Proposition~\ref{R+1a}, we must investigate the six cases:\\
$\operatorname{(1)}$ ${\mu}_i < {\nu}_i < {\eta}_i$;
$\operatorname{(2)}$ ${\mu}_i < {\eta}_i < {\nu}_i$;
$\operatorname{(3)}$ ${\nu}_i < {\mu}_i < {\eta}_i$;
$\operatorname{(4)}$ ${\nu}_i < {\eta}_i < {\mu}_i$;
$\operatorname{(5)}$ ${\eta}_i < {\mu}_i < {\nu}_i$;
$\operatorname{(6)}$ ${\eta}_i < {\nu}_i < {\mu}_i$.
We only show $\operatorname{(1)}$ and $\operatorname{(2)}$.
To show $\operatorname{(1)}$, note that there exist positive real
numbers $c_i$ and $c_i^{'}$ such that ${\nu}_i =
{\mu}_i + c_i$ and ${\eta}_i = {\nu}_i + c_i^{'}$, which implies $\eta_i =
\mu_i + c_i + c_i^{'}$. Hence, $d({\mu}_i , {\eta}_i )=c_i + c_i^{'}=
d({\mu}_i , {\nu}_i ) + d({\nu}_i , {\eta}_i )$.
Let us show $\operatorname{(2)}$. There exist positive real
numbers $b_i$ and $b_i^{'}$ such that ${\eta}_i =
{\mu}_i + b_i$ and ${\nu}_i={\eta}_i + b_i^{'}$, so ${\nu}_i = {\mu}_i
+ b_i + b_i^{'}$. Therefore, $d({\mu}_i , {\eta}_i )=b_i < d({\mu}_i ,
{\nu}_i ) + d({\nu}_i , {\eta}_i )=b_i + 2b_i^{'}$.
Taking the supremum over all $i$'s we have
$\displaystyle\sup_{i \in {\mathbb N}} \{d({\mu}_i ,
{\eta}_i ) \} \leq \displaystyle\sup_{i \in {\mathbb N}} \{d({\mu}_i ,
{\nu}_i )\} + \displaystyle\sup_{i \in {\mathbb N}} \{d({\nu}_i , {\eta}_i ) \}$, i.e.,
$d(x, z) \leq d(x, y) + d(y, z)$. Therefore, $d$ is a metric on ${l}_{+}^{\infty}$.
\end{proof}
\begin{definition}\label{defl}
The metric space ${l}_{+}^{\infty}$ is the set of all bounded
sequences of nonnegative real numbers equipped with the metric
$d(x, y) = \displaystyle\sup_{i \in {\mathbb N}} \{c_i \}$ given previously.
\end{definition}
We prove that ${l}_{+}^{\infty}$ equipped with the previous metric is complete.
\begin{theorem}\label{lcomplete}
The space ${l}_{+}^{\infty}$ with the metric $d(x, y) = \displaystyle\sup_{i
\in {\mathbb N}} \{c_i \}$ shown above is complete.
\end{theorem}
\begin{proof}
The proof follows the same line as the standard proof of completeness
of ${l}^{\infty}$; however it is necessary to adapt it
to the metric (written above) in terms of nonnegative real numbers. Let $(x_n)$
be a Cauchy sequence in ${l}_{+}^{\infty}$, where $x_i =
({\eta}_{1}^{(i)}, {\eta}_{2}^{(i)}, \ldots )$. We must show that
$(x_n )$ converges to an element of ${l}_{+}^{\infty}$. As $(x_n)$
is Cauchy, given $\epsilon > 0$, there exists a positive integer
$K$ such that, for all $n, m > K$, $$d(x_n, x_m)=\displaystyle\sup_{j
\in {\mathbb N}} \{c_j^{(n, m)} \} < \epsilon,$$
where $c_j^{(n, m)}$ is a nonnegative real number such that, if
${\eta}_{j}^{(n)}={\eta}_{j}^{(m)}$ then $c_j^{(n, m)}=0$, and
if ${\eta}_{j}^{(n)} \neq {\eta}_{j}^{(m)}$ then $c_j^{(n, m)}$
is given by $\max \{{\eta}_{j}^{(n)}, {\eta}_{j}^{(m)}\} = \min
\{{\eta}_{j}^{(n)}, {\eta}_{j}^{(m)}\} +c_j^{(n, m)}$. This
implies that for each fixed $j$ one has
\begin{eqnarray}\label{distCauchy1}
c_j^{(n, m)} < \epsilon,
\end{eqnarray}
where $n, m > K$. Thus, for each fixed $j$, it follows that
$({\eta}_{j}^{(1)}, {\eta}_{j}^{(2)}, \ldots )$ is a
Cauchy sequence in ${\mathbb R}_{0}^{+}$. Since ${\mathbb R}_{0}^{+}$ is a
complete metric space, the sequence $({\eta}_{j}^{(1)}, {\eta}_{j}^{(2)},
\ldots )$ converges to an element ${\eta}_{j}$ in ${\mathbb R}_{0}^{+}$.
Hence, for each $j$, we form the sequence $x$ whose coordinates are
the limits ${\eta}_{j}$, i.e., $x =({\eta}_{1}, {\eta}_{2}, {\eta}_{3},
\ldots )$. We must show that $x \in {l}_{+}^{\infty}$ and $x_n
\longrightarrow x$.
To show that $x$ is a bounded sequence, let us consider the number
$c_j^{(n, \infty)}$ defined as follows: if ${\eta}_{j} =
{\eta}_{j}^{(n)}$ then $c_j^{(n, \infty)}=0$, and if ${\eta}_{j} \neq {\eta}_{j}^{(n)}$,
define $c_j^{(n, \infty)}$ be the positive real number satisfying
$\max \{{\eta}_{j} , {\eta}_{j}^{(n)} \}= \min \{{\eta}_{j} , {\eta}_{j}^{(n)} \}
+ c_j^{(n, \infty)}$. From the inequality $(\ref{distCauchy1})$ one has
\begin{eqnarray}\label{distCauchy2}
c_j^{(n, \infty)}\leq\epsilon .
\end{eqnarray}
Because ${\eta}_{j} \leq {\eta}_{j}^{(n)} + c_j^{(n, \infty)}$ and since
$x_{n}=({\eta}_{j}^{(n)})_{j} \in l_{+}^{\infty}$, it follows that the sequence $({\eta}_{j})_{j}$ is
bounded. Hence, $x = ({\eta}_{1},
{\eta}_{2}, {\eta}_{3}, \ldots ) \in {l}_{+}^{\infty}$.
From $(\ref{distCauchy2})$ we have
$$\displaystyle\sup_{j \in {\mathbb N}} \{c_j^{(n, \infty)} \} \leq \epsilon,$$
which implies that $x_n \longrightarrow x$. Therefore, $l_{+}^{\infty}$ is complete.
\end{proof}
Although $l_{+}^{\infty}$ is a complete metric space, it is not separable.
\begin{theorem}\label{lnotsep}
The space ${l}_{+}^{\infty}$ with the metric $d(x, y) = \displaystyle\sup_{i
\in {\mathbb N}} \{c_i \}$ is not separable.
\end{theorem}
\begin{proof}
The proof is the same as shown in \cite[1.3-9]{Kreyszig:1978}, so it is omitted.
\end{proof}
Let us define the space analogous to the space $l^p$.
\begin{definition}\label{deflp}
Let $p \geq 1$ be a fixed real number. The set ${l}_{+}^{p}$ consists
of all sequences $x =({\eta}_{1}, {\eta}_{2}, {\eta}_{3}, \ldots )$
of nonnegative real numbers such that $\displaystyle\sum_{i=1}^{\infty} ({\eta}_{i})^{p} < \infty$,
whose metric is defined by
$ d(x, y)={\left[\displaystyle\sum_{i=1}^{\infty} {[c_{i}]}^{p}\right]}^{1/p}$,
where $y =({\mu}_{1}, {\mu}_{2}, {\mu}_{3}, \ldots )$ and $c_i$ is
defined as follows: $c_i = 0$ if ${\mu}_i = {\eta}_i $, and if ${\mu}_i > {\eta}_i$
(resp. ${\eta}_i > {\mu}_i$) then $c_i > 0$ is such that ${\mu}_i = {\eta}_i + c_i$ (resp. ${\eta}_i = {\mu}_i + c_i$).
\end{definition}
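For instance, the constant sequence $(1, 1, 1, \ldots )$ belongs to ${l}_{+}^{\infty}$ but to none of the spaces ${l}_{+}^{p}$; the sequence $(1/i)_{i \in {\mathbb N}}$ belongs to ${l}_{+}^{p}$ for every $p > 1$ but not to ${l}_{+}^{1}$; and the sequence $(1/2^{i})_{i \in {\mathbb N}}$ belongs to ${l}_{+}^{p}$ for every $p \geq 1$.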
\begin{theorem}\label{lp+complete}
The space ${l}_{+}^{p}$ with the metric $ d(x,y)=
{\left[\displaystyle\sum_{i=1}^{\infty}
{[c_{i}]}^{p}\right]}^{1/p}$ exhibited above is complete.
\end{theorem}
\begin{proof}
Recall that given two sequences $({\mu}_i)$ and $({\eta}_i )$
in ${l}_{+}^{p}$ the Minkowski inequality for sums reads as
\begin{eqnarray*}
{\left[\displaystyle\sum_{i=1}^{\infty} {|{\mu}_i +
{\eta}_i |}^{p}\right]}^{1/p} \leq {\left[\displaystyle
\sum_{j=1}^{\infty} {|{\mu}_j|}^{p}\right]}^{1/p} + {\left[\displaystyle
\sum_{k=1}^{\infty} {|{\eta}_k|}^{p}\right]}^{1/p}.
\end{eqnarray*}
Applying the Minkowski inequality as per \cite[1.5-4]{Kreyszig:1978}
with some adaptations, it follows that $d(x,y)$ is, in fact,
a metric. In order to prove the completeness of ${l}_{+}^{p}$, we proceed
similarly as in the proof of Theorem~\ref{lcomplete} with some
adaptations. The main adaptation is performed according to
the proof of completeness of $l^p$ in \cite[1.5-4]{Kreyszig:1978}
replacing the last equality $x=x_m +( x - x_m) \in l^p$
(after Eq.~(5)) by two equalities in order to avoid negative real numbers.
\begin{enumerate}
\item [ $\operatorname{(1)}$] If the $i$-th coordinate
$x^{(i)}- x_{m}^{(i)}$ of the sequence $x- x_m$ is
positive, then define $c_{m}^{(i)} = x^{(i)}- x_{m}^{(i)}$ and write
$x^{(i)} = x_{m}^{(i)} + c_{m}^{(i)}$. From Minkowski
inequality, it follows that the sequence $(x^{(i)})_i$
is in $l_{+}^{p}$.
\item [ $\operatorname{(2)}$] If $x^{(j)}- x_{m}^{(j)}$
is negative, then define $c_{m}^{(j)}= x_{m}^{(j)} -
x^{(j)}$ and write $x_{m}^{(j)}= x^{(j)} + c_{m}^{(j)} $. Since
$x_m \in l_{+}^{p}$, from the comparison criterion for
positive series it follows that the sequence $(x^{(j)})_j$ is also in $l_{+}^{p}$.
\end{enumerate}
\end{proof}
\begin{theorem}\label{lp+separable}
The space ${l}_{+}^{p}$ is separable.
\end{theorem}
\begin{proof}
The proof follows the same line of \cite[1.3-10]{Kreyszig:1978}.
\end{proof}
\begin{definition}\label{continon[a,b]}
Let $I=[a, b]$ be a closed interval in ${\mathbb R}_{0}^{+}$,
where $a\geq 0$ and $a < b$. Then ${\operatorname{C}}_{+}[a, b]$ is
the set of all continuous nonnegative real valued functions on $I=[a, b]$,
whose metric is defined by $d(f(t), g(t)) =
\displaystyle\max_{t \in I} \{c(t)\}$, where $c(t)$ is given by
$\max \{ f(t), g(t) \} =\min \{ f(t), g(t) \} + c(t)$.
\end{definition}
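For instance, take $[a, b]=[0, 1]$, $f(t)=t$ and $g(t)=t^{2}$. Since $t \geq t^{2}$ on $[0, 1]$, for each $t$ one has $t = t^{2} + c(t)$ with $c(t)= t(1-t)$, whose maximum on $[0, 1]$ is attained at $t=1/2$; hence $d(f, g)= 1/4$.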
\begin{theorem}\label{cont[a,b]complete}
The metric space $({\operatorname{C}}_{+}[a, b], d)$, where $d$ is given in
Definition~\ref{continon[a,b]}, is complete.
\end{theorem}
\begin{proof}
The proof follows the same lines as the standard one with some modifications.
Let $(f_{m})$ be a Cauchy sequence in ${\operatorname{C}}_{+}[a, b]$. Given $\epsilon > 0$
there exists a positive integer $N$ such that, for all $m, n > N$, it follows that
\begin{eqnarray}\label{In1}
d(f_{m} , f_{n}) = \displaystyle\max_{t \in I} \{c_{m, n} (t)\} < \epsilon,
\end{eqnarray}
where $\max \{ f_{m} (t) , f_{n} (t) \} = \min \{ f_{m} (t) , f_{n} (t) \} + c_{m, n}(t)$.
Thus, for any fixed $t_0 \in I$ we have $c_{m, n} (t_0 ) < \epsilon$,
for all $m, n > N$. This means that $(f_1 (t_0 ), f_2 (t_0 ), \ldots )$ is a
Cauchy sequence in ${\mathbb R}_{0}^{+}$, which converges to $f(t_0 )$ when
$m \longrightarrow \infty$ since ${\mathbb R}_{0}^{+}$ is complete. We then
define a function $f: [a, b] \longrightarrow {\mathbb R}_{0}^{+}$ by putting,
for each $t \in [a, b]$, $f(t)$ equal to the limit of the sequence $(f_{m}(t))$.
Taking $n \longrightarrow \infty$ in (\ref{In1}) we obtain
$\displaystyle\max_{t \in I} \{c_{m} (t)\} \leq \epsilon$ for all $m > N$, where
$\max \{ f_{m} (t) , f(t) \} = \min \{ f_{m} (t) , f(t) \} + c_{m}(t)$, which
implies $c_{m}(t)\leq \epsilon$ for all $t \in I$. This fact means that
$(f_{m}(t))$ converges to $f(t)$ uniformly on $I$, i.e., $f \in
{\operatorname{C}}_{+}[a, b]$ because the functions $f_{m}$'s are continuous on $I$.
Therefore, ${\operatorname{C}}_{+}[a, b]$ is complete, as desired.
\end{proof}
\subsection{Interesting Semi-Vector Spaces}\label{subsec2}
In this section we exhibit semi-vector spaces over $K= {\mathbb R}_{0}^{+}$
derived from semi-metrics, semi-metric-preserving functions, semi-norms,
semi-inner products and sub-linear functionals.
\begin{theorem}\label{teo1}
Let $X$ be a semi-metric space and ${ \mathcal M}_{X}=\{ d: X \times X\longrightarrow
{\mathbb R}; d$ $\operatorname{is \ a \ semi-metric \ on} X\}$.
Then $({ \mathcal M}_{X}, +, \cdot )$ is a semi-vector space over ${\mathbb R}_{0}^{+}$,
where $+$ and $\cdot$ are the
addition and the scalar multiplication (in ${\mathbb R}_{0}^{+}$) pointwise,
respectively.
\end{theorem}
\begin{proof}
We first show that ${ \mathcal M}_{X}$ is closed under addition.
Let $d_1 , d_2 \in { \mathcal M}_{X}$ and set
$d:= d_1 + d_2$. It is clear that $d$ is nonnegative real-valued
function. Moreover, for all $x, y \in X$, $d(x, y) = d(y, x)$.
Let $x \in X$; $d(x, x) = d_1(x, x) + d_2 (x,x) =0$.
For all $x, y, z \in X$, $d(x, z)=d_1 (x, z) + d_2 (x, z)\leq [d_1 (x, y) + d_2 (x, y)]+
[d_1 (y, z) + d_2 (y, z)]= d(x, y) + d(y, z)$.
Let us show that ${ \mathcal M}_{X}$ is closed under scalar multiplication. Let $d_1
\in { \mathcal M}_{X}$ and define $d = \lambda d_1$, where
$\lambda \in {\mathbb R}_{0}^{+}$. It is clear that $d$ is nonnegative real-valued and, for all
$x, y \in X$, $d(x, y)=d(y, x)$. Moreover, if $x \in X$, $d(x, x)=0$.
For all $x, y, z \in X$, $d(x, z)=\lambda d_1 (x, z)\leq \lambda [d_1 (x, y)
+ d_1 (y, z)]= d(x, y) + d(y, z)$.
This means that ${ \mathcal M}_{X}$ is closed under scalar multiplication.
It is easy to see that $({ \mathcal M}_{X}, +, \cdot )$ satisfies the
other conditions of Definition~\ref{defSVS}.
\end{proof}
Let $(X, d)$ be a metric space. In~\cite{Corazza:1999}, Corazza investigated
interesting functions $f:{\mathbb R}_{0}^{+}\longrightarrow {\mathbb R}_{0}^{+}$
such that the composite of $f$ with $d$, i.e., $X \times X \xrightarrow{d}
{{\mathbb R}_{0}^{+}} \xrightarrow{f} {{\mathbb R}_{0}^{+}}$ also generates
a metric on $X$. Let us put this concept formally.
\begin{definition}\label{metricprese}
Let $f:{\mathbb R}_{0}^{+}\longrightarrow {\mathbb R}_{0}^{+}$
be a function. We say that $f$ is metric-preserving if for
all metric spaces $(X, d)$, the composite $f \circ d$ is a metric.
\end{definition}
To our purpose we will consider semi-metric preserving functions as follows.
\begin{definition}\label{semi-metricprese}
Let $f:{\mathbb R}_{0}^{+}\longrightarrow {\mathbb R}_{0}^{+}$ be a
function. We say that $f$ is semi-metric-preserving if for
all semi-metric spaces $(X, d)$, the composite $f \circ d$ is a semi-metric.
\end{definition}
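\begin{example}
Simple examples of semi-metric-preserving functions are $f(t)=\frac{t}{1+t}$ and $g(t)=\min \{t, 1\}$: both vanish at $0$, are nondecreasing and subadditive, so for every semi-metric $d$ the composites $f\circ d$ and $g\circ d$ are again semi-metrics; since they vanish only at $0$, they are even metric-preserving in the sense of Definition~\ref{metricprese}. On the other hand, $h(t)=t^2$ is not semi-metric-preserving: for the usual metric $d(x, y)=|x-y|$ on ${\mathbb R}$ we get $h(d(0, 2))=4>2=h(d(0, 1))+h(d(1, 2))$, so the triangle inequality fails.
\end{example}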
We next show that the set of semi-metric preserving functions has a semi-vector
space structure.
\begin{theorem}\label{teo1a}
Let ${ \mathcal F}_{pres}=\{ f:{\mathbb R}_{0}^{+}\longrightarrow
{\mathbb R}_{0}^{+}; f \operatorname{is \ semi-metric \ preserving} \}$.
Then $({ \mathcal F}_{pres}, +, \cdot )$ is a semi-vector space over ${\mathbb R}_{0}^{+}$,
where $+$ and $\cdot$ are the addition and the scalar multiplication
(in ${\mathbb R}_{0}^{+}$) pointwise, respectively.
\end{theorem}
\begin{proof}
We begin by showing that ${ \mathcal F}_{pres}$ is closed under addition
and scalar multiplication pointwise.
Let $f, g \in { \mathcal F}_{pres}$. Given a semi-metric space $(X, d)$, we must prove that
$(f + g)\circ d$ is also semi-metric preserving. We know
that $[(f + g)\circ d] (x, y ) \geq 0$ for all $x, y \in X$. Let $x \in X$; then
$[(f + g)\circ d ](x, x )= f(d(x, x)) + g (d(x, x)) = 0$. It is clear that
$[(f + g ) \circ d](x, y)= [(f + g ) \circ d](y, x)$. Let $x, y, z \in X$. One has:
$[(f + g ) \circ d](x, y)= f(d(x, y)) + g(d(x, y))\leq [f(d(x, z))+ g(d(x, z))]+
[f(d(z, y))+ g(d(z, y))]= (f + g)(d(x, z)) + (f + g)(d(z, y))=
[(f + g)\circ d](x, z) + [(f + g)\circ d](z, y) $.
Next, we show that for each $f \in { \mathcal F}_{pres}$ and $ \alpha \in
{\mathbb R}_{0}^{+}$, it follows that $ \alpha f \in { \mathcal F}_{pres}$.
We verify only the triangle inequality since the other conditions are immediate.
Let us calculate: $[\alpha f \circ d](x, y)= \alpha f (d(x, y))\leq
\alpha f (d(x, z)) + \alpha f (d(z, y)) = [\alpha f \circ d](x, z) + [\alpha f \circ d](z, y)$.
The null vector is the null function $0_{f}:{\mathbb R}_{0}^{+}\longrightarrow
{\mathbb R}_{0}^{+}$. The other conditions are easy to verify.
\end{proof}
\begin{theorem}\label{teo2}
Let $V$ be a semi-normed real vector space and ${ \mathcal N}_{V}=
\{ \| \ \|: V\longrightarrow {\mathbb R}; \| \ \|$
$\operatorname{is \ a \ semi-norm \ on} V\}$. Then $({ \mathcal N}_{V}, +,
\cdot )$ is a semi-vector space over ${\mathbb R}_{0}^{+}$,
where $+$ and $\cdot$ are addition and scalar multiplication
(in ${\mathbb R}_{0}^{+}$) pointwise, respectively.
\end{theorem}
\begin{proof}
From hypotheses, ${ \mathcal N}_{V}$ is non-empty. Let ${\| \ \|}_{1} ,
{\| \ \|}_{2} \in { \mathcal N}_{V}$ and set $\| \ \|:=
{\| \ \|}_{1} + {\| \ \|}_{2}$. For all $v \in V$, $\| v \|\geq 0$.
If $v \in V$ and $\alpha \in {\mathbb R}$ then $\| \alpha v \|=|\alpha| \| v \|$.
For every $u, v \in V$, it follows that $\| u + v \|:= {\| u + v \|}_{1} +
{\| u + v \|}_{2}\leq ({ \| u \|}_{1} + {\| u \|}_{2} ) +
({\| v \|}_{1} + {\| v \|}_{2})= \| u \| + \| v \|$. Hence, ${ \mathcal N}_{V}$
is closed under addition.
We next show that ${ \mathcal N}_{V}$ is closed under scalar multiplication.
Let ${\| \ \|}_{1} \in { \mathcal N}_{V}$ and define
$\| \ \|:= \lambda {\| \ \|}_{1}$, where $\lambda \in {\mathbb R}_{0}^{+}$. For all
$v \in V$, $\| v \|\geq 0$. If $\alpha \in {\mathbb R}$ and $ v \in V$,
$ \| \alpha v \|= |\alpha |( \lambda {\| v\|}_{1})= |\alpha | \| v \|$.
Let $u, v \in V$. Then $\| u + v \|\leq \lambda {\| u \|}_{1}+
\lambda {\| v \|}_{1}=\|u\| + \|v\|$. Therefore, ${ \mathcal N}_{V}$
is closed under addition and scalar multiplication over ${\mathbb R}_{0}^{+}$.
The zero vector is the null function $ \textbf{0}: V \longrightarrow
{\mathbb R}$. The other conditions
of Definition~\ref{defSVS} are straightforward.
\end{proof}
\begin{remark}
Note that ${ \mathcal N}_{V}^{\diamond}=\{\| \ \|: V\longrightarrow {\mathbb R};
\| \ \|$ $\operatorname{is \ a \ norm \ on} V\}$
is also closed under pointwise function addition and under multiplication by scalars $\lambda>0$; for $\lambda=0$ one obtains the null function, which is a semi-norm but not a norm.
\end{remark}
\begin{lemma}\label{prop1}
Let $T:V\longrightarrow W$ be a linear transformation.
\begin{itemize}
\item [ $\operatorname{(1)}$] If $\| \ \|:W\longrightarrow {\mathbb R}$ is a semi-norm on
$W$ then $\| \ \|\circ T: V \longrightarrow {\mathbb R}$ is a semi-norm on $V$.
\item [ $\operatorname{(2)}$] If $T$ is injective linear and $\| \ \|:
W\longrightarrow {\mathbb R}$ is a norm on $W$ then $\| \ \|\circ T$ is a norm on $V$.
\end{itemize}
\end{lemma}
\begin{proof}
We only show Item~$\operatorname{(1)}$. It is clear that $[\| \ \|\circ T](v)
\geq 0$ for all $v \in V$. For all $\alpha
\in {\mathbb R}$ and $v \in V$, $[\| \ \|\circ T](\alpha v)=
| \alpha | \| T(v) \| = | \alpha | [\| \ \|\circ T](v)$. Moreover, $ \forall \ v_1 , v_2 \in V$,
$[\| \ \|\circ T](v_1 + v_2)\leq [\| \ \|\circ T](v_1 )+ [\| \ \|\circ T](v_2 )$.
Therefore, $\| \ \|\circ T$ is a semi-norm on $V$.
\end{proof}
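\begin{example}
To illustrate Lemma~\ref{prop1}, let $T:{\mathbb R}^{2} \longrightarrow {\mathbb R}$ be the (non-injective) linear transformation $T(v_1 , v_2 )=v_1$ and let $\| \ \|$ be the absolute value on ${\mathbb R}$. Then $[\| \ \|\circ T](v_1 , v_2 )=|v_1|$ is a semi-norm on ${\mathbb R}^{2}$ which is not a norm, since it vanishes on the nonzero vector $(0, 1)$. This shows that the injectivity hypothesis in Item~$\operatorname{(2)}$ cannot be dropped.
\end{example}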
\begin{theorem}\label{teo2a}
Let $V$ and $W$ be two semi-normed vector spaces and $T:V\longrightarrow W$ be
a linear transformation. Then
$${ \mathcal N}_{V_{T}}=\{ \| \ \| \circ T:
V\longrightarrow {\mathbb R}; \| \ \| \operatorname{is \ a \ semi-norm \ on} W\}$$
is a semi-subspace of $({ \mathcal N}_{V}, +, \cdot )$.
\end{theorem}
\begin{proof}
From hypotheses, it follows that ${ \mathcal N}_{V_{T}}$ is non-empty.
From Item~$\operatorname{(1)}$ of Lemma~\ref{prop1}, it follows that $\| \ \|\circ T$
is a semi-norm on $V$. Let $f, g \in { \mathcal N}_{V_{T}}$, i.e.,
$f = {\| \ \|}_1 \circ T$ and $g = {\| \ \|}_2 \circ T$, where ${\| \ \|}_1$ and ${\| \ \|}_2$
are semi-norms on $W$. Then $f + g = [ {\| \ \|}_1 + {\| \ \|}_2 ]\circ T \in { \mathcal N}_{V_{T}}$.
For every nonnegative real number $\lambda$ and $f \in { \mathcal N}_{V_{T}}$,
$\lambda f = \lambda [ \| \ \|\circ T] = (\lambda \| \ \| )\circ T \in { \mathcal N}_{V_{T}}$.
\end{proof}
\begin{theorem}\label{teo2b}
Let ${\mathcal N}$ be the class whose members are $\{{ \mathcal N}_{V}\}$, where the
${ \mathcal N}_{V}$ are given in Theorem~\ref{teo2}. Let
$\operatorname{Hom}({\mathcal N})$ be the class whose members are the sets
$$\operatorname{hom}({ \mathcal N}_{V}, { \mathcal N}_{W})=\{
F_T:{ \mathcal N}_{V}\longrightarrow { \mathcal N}_{W};
F_T ( {\| \ \|}_{V})= {\| \ \|}_{V} \circ T\},$$
where $T: W \longrightarrow V$ is a linear transformation and
${\| \ \|}_{V}$ is a semi-norm on $V$. Then $({\mathcal N}, \operatorname{Hom}({\mathcal N}),
Id, \circ )$ is a category.
\end{theorem}
\begin{proof}
The sets $\operatorname{hom}({ \mathcal N}_{V}, { \mathcal N}_{W})$ are pairwise disjoint.
For each ${ \mathcal N}_{V}$, there exists $Id_{({ \mathcal N}_{V})}$ given by
$Id_{({ \mathcal N}_{V})} ({\| \ \|}_{V})={\| \ \|}_{V}={\| \ \|}_{V}\circ Id_{(V)}$.
It is clear that if ${F}_{T}:{ \mathcal N}_{V}\longrightarrow { \mathcal N}_{W}$ then
${F}_{T}\circ Id_{({ \mathcal N}_{V})} = {F}_{T}$ and
$Id_{({ \mathcal N}_{W})}\circ {F}_{T} = {F}_{T}$.
It is easy to see that for every linear transformation $T:W\longrightarrow V$,
the map $F_{T}$ is semi-linear, i.e.,
$F_{T}({\| \ \|}_{V}^{(1)} + {\| \ \|}_{V}^{(2)})=
F_{T}({\| \ \|}_{V}^{(1)}) + F_{T}({\| \ \|}_{V}^{(2)})$ and
$F_{T}(\lambda {\| \ \|}_{V})= \lambda F_{T}({\| \ \|}_{V})$,
for every ${\| \ \|}_{V}, {\| \ \|}_{V}^{(1)}, {\| \ \|}_{V}^{(2)} \in { \mathcal N}_{V}$
and $\lambda \in {\mathbb R}_{0}^{+}$.
Let ${ \mathcal N}_{U}, { \mathcal N}_{V}, { \mathcal N}_{W},
{ \mathcal N}_{X} \in {\mathcal N}$ and $F_{T_1} \in
\operatorname{hom}({ \mathcal N}_{U}, { \mathcal N}_{V})$,
$F_{T_2} \in \operatorname{hom}({ \mathcal N}_{V}, { \mathcal N}_{W})$,
$F_{T_3} \in \operatorname{hom}({ \mathcal N}_{W}, { \mathcal N}_{X})$, i.e.,
$${ \mathcal N}_{U}\xrightarrow{F_{T_1}} { \mathcal N}_{V}\xrightarrow{F_{T_2}} { \mathcal N}_{W}
\xrightarrow{F_{T_3}} { \mathcal N}_{X}.$$
The linear transformations are of the forms
$$X\xrightarrow{T_3} W\xrightarrow{T_2} V \xrightarrow{T_1} U
\xrightarrow{{\| \ \|}_{U}} {\mathbb R}.$$
The associativity $(F_{T_3}\circ F_{T_2})\circ F_{T_1}=F_{T_3}\circ (F_{T_2}\circ F_{T_1})$
follows from the associativity of composition of maps. Moreover, the map
$F_{T_3}\circ F_{T_2}\circ F_{T_1}$ belongs to $\operatorname{hom}({ \mathcal N}_{U}, { \mathcal N}_{X})$ because
$(F_{T_3}\circ F_{T_2}\circ F_{T_1})({\| \ \|}_{U}) = {\| \ \|}_{U}\circ (T_1\circ T_2\circ T_3)$ and
$T_1\circ T_2\circ T_3$ is a linear transformation. Therefore, $({\mathcal N},
\operatorname{Hom}({\mathcal N}), Id, \circ )$ is a category, as required.
\end{proof}
\begin{theorem}\label{teo3}
Let $V$ be a real vector space endowed with a semi-inner product and let
${ \mathcal P}_{V}=\{ \langle \ ,
\ \rangle: V\times V\longrightarrow {\mathbb R}; \langle \
, \ \rangle$ $\operatorname{is \ a \ semi-inner \ product \ on} V\}$.
Then $({ \mathcal P}_{V}, +, \cdot )$ is a semi-vector space
over ${\mathbb R}_{0}^{+}$, where $+$ and $\cdot$ are
addition and scalar multiplication (in ${\mathbb R}_{0}^{+}$)
pointwise, respectively.
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorems~\ref{teo1}~and~\ref{teo2}.
\end{proof}
\begin{proposition}\label{prop2}
Let $V, W$ be two vector spaces and $T_1 , T_2:V\longrightarrow W$ be two
linear transformations. Let us
consider the map $T_1 \times T_2 : V \times V \longrightarrow W\times W$ given
by $T_1 \times T_2 (u, v) = (T_1(u), T_2 (v))$. If $\langle \ , \ \rangle$
is a semi-inner product on $W$ then $\langle \ , \ \rangle \circ
T_1 \times T_2$ is a semi-inner product on $V$.
\end{proposition}
\begin{proof}
The proof is immediate, so it is omitted.
\end{proof}
Let $V$ be a real vector space. Recall that a sub-linear functional on $V$
is a functional $t: V\longrightarrow {\mathbb R}$ which is sub-additive:
$\forall \ u, v \in V$, $t(u + v)\leq t(u) + t(v)$; and positive-homogeneous:
$\forall \ \alpha \in {\mathbb R}_{0}^{+}$ and $\forall \ v \in V$,
$t(\alpha v ) =\alpha t(v)$.
\begin{theorem}\label{teo4}
Let $V$ be a real vector space. Let us consider ${ \mathcal S}_{V}=
\{ S: V\longrightarrow
{\mathbb R};$ $S \operatorname{is} \operatorname{sub-linear} \operatorname{on} V\}$.
Then $({ \mathcal S}_{V}, +, \cdot )$ is a semi-vector space on
${\mathbb R}_{0}^{+}$, where $+$ and $\cdot$ are
addition and scalar multiplication (in ${\mathbb R}_{0}^{+}$) pointwise, respectively.
\end{theorem}
\begin{proof}
The proof follows the same lines as those of Theorems~\ref{teo1}, \ref{teo2} and~\ref{teo3}.
\end{proof}
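\begin{example}
For $V={\mathbb R}^{2}$, typical elements of ${ \mathcal S}_{V}$ are $S_1 (v_1 , v_2 )=|v_1|+|v_2|$ and $S_2 (v_1 , v_2 )=\max \{v_1 , v_2 \}$; more generally, every semi-norm on $V$ is sub-linear. Note that $S_2$ can take negative values, so a sub-linear functional need not be a semi-norm, while sums and nonnegative scalar multiples of sub-linear functionals are again sub-linear, in accordance with Theorem~\ref{teo4}.
\end{example}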
\subsection{Semi-Algebras}\label{subsec4}
We start this section by recalling the definition of semi-algebra and
semi-sub-algebra. For more details the reader can consult
\cite{Gahler:1999}. In \cite{Olivier:1995}, Olivier and Serrato investigated
relation semi-algebras, i.e., a semi-algebra being both a Boolean algebra and an
involutive semi-monoid, satisfying some conditions
(see page 2 in Ref.~\cite{Olivier:1995} for more details). Roy
\cite{Roy:1970} studied the semi-algebras of continuous and
monotone functions on compact ordered spaces.
\begin{definition}\label{semialgebra}
A semi-algebra $A$ over a semi-field $K$ (or a $K$-semi-algebra) is a semi-vector
space $A$ over $K$ endowed with a binary operation called multiplication
of semi-vectors $\bullet: A \times A\longrightarrow A$ such that, $\forall \ u, v, w \in A$ and
$\lambda \in K$:
\begin{itemize}
\item [ $\operatorname{(1a)}$] $ u \bullet (v + w)= (u \bullet v) + (u \bullet w)$ (left-distributivity);
\item [ $\operatorname{(1b)}$] $ (u + v)\bullet w= (u \bullet w) + (v \bullet w)$ (right-distributivity);
\item [ $\operatorname{(2)}$] $ \lambda (u \bullet v)= (\lambda u)\bullet v = u \bullet (\lambda v)$.
\end{itemize}
\end{definition}
A semi-algebra $A$ is \emph{associative} if $(u\bullet v)\bullet
w=u\bullet (v\bullet w)$ for all $u, v, w \in A$; $A$ is said to
be \emph{commutative} (or abelian) if the multiplication is commutative, that
is, $\forall \ u, v \in A$, $u\bullet v= v\bullet u$; $A$ is called
a semi-algebra with identity if there exists an element
$1_A \in A$ such that $\forall \ u \in A$, $1_A \bullet u = u \bullet 1_A =u$;
the element $1_A $ is called the identity of $A$. The identity element
of a semi-algebra $A$ is unique (if it exists). If $A$ is a
semi-free semi-vector space then the dimension of $A$ is its dimension
regarded as a semi-vector space. A semi-algebra is \emph{simple} if it
is simple as a semi-vector space.
\begin{example}\label{ex5}
The set ${\mathbb R}_{0}^{+}$ is a commutative semi-algebra with identity $e=1$.
\end{example}
\begin{example}\label{ex6}
The set of square matrices of order $n$ whose entries are in ${\mathbb R}_{0}^{+}$,
equipped with the sum of matrices, multiplication of a matrix by a scalar
(in ${\mathbb R}_{0}^{+}$, of course) and by multiplication of
matrices is an associative and non-commutative semi-algebra with identity $e=I_{n}$
(the identity matrix of order $n$), over ${\mathbb R}_{0}^{+}$.
\end{example}
\begin{example}\label{ex7}
The set ${\mathcal P}_{n}[x]$ of polynomials with coefficients
from ${\mathbb R}_{0}^{+}$ and degree less than or equal
to $n$, equipped with the usual polynomial sum and scalar multiplication, is a
semi-vector space.
\end{example}
\begin{example}\label{ex8}
Let $V$ be a semi-vector space over a semi-field $K$. Then the set
${\mathcal L}(V, V)=\{T:V\longrightarrow V;
T \operatorname{is \ a \ semi-linear \ operator}\}$ is a semi-vector space.
If we define a vector multiplication as the composite of semi-linear
operators (which is also semi-linear) then we have a semi-algebra
over $K$.
\end{example}
\begin{definition}\label{subsemialgebra}
Let $A$ be a semi-algebra over $K$. We say that a non-empty set $S \subseteq A$
is a semi-subalgebra if $S$ is closed under the operations of $A$, that is,
\begin{itemize}
\item [ $\operatorname{(1)}$] $\forall \ u, v \in S$, $u + v \in S$;
\item [ $\operatorname{(2)}$] $\forall \ u, v \in S$, $u \bullet v \in S$;
\item [ $\operatorname{(3)}$] $\forall \ \lambda \in K$ and $\forall \ u \in S$, $\lambda u \in S$.
\end{itemize}
\end{definition}
\begin{definition}\label{A-homomorphism}
Let $A$ and $B$ be two semi-algebras over $K$. We say that a
map $T:A\longrightarrow B$ is a $K$-semi-algebra homomorphism
if, $\forall \ u, v \in A$ and $\lambda \in K$, the following conditions hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] $T(u + v) = T(u) + T(v)$;
\item [ $\operatorname{(2)}$] $T(u \bullet v) = T(u) \bullet T(v)$;
\item [ $\operatorname{(3)}$] $T(\lambda v ) = \lambda T(v)$.
\end{itemize}
\end{definition}
Definition~\ref{A-homomorphism} means that $T$ is both a semi-ring homomorphism and also
semi-linear (as semi-vector space).
\begin{definition}\label{isomorphic}
Let $A$ and $B$ be two $K$-semi-algebras. A $K$-semi-algebra isomorphism
$T:A \longrightarrow B$ is a bijective $K$-semi-algebra homomorphism.
If there exists such an isomorphism, we say that $A$ is isomorphic to $B$, written $A\cong B$.
\end{definition}
The following results seem to be new, since semi-algebras over
${\mathbb R}_{0}^{+}$ have not been much investigated in the literature.
\begin{proposition}\label{propalghomo}
Assume that $A$ and $B$ are two $K$-semi-algebras, where $K={\mathbb R}_{0}^{+}$
and $A$ has identity $1_A$. Let $T:A \longrightarrow B$
be a $K$-semi-algebra homomorphism. Then the following properties hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] $T(0_A)= 0_B$;
\item [ $\operatorname{(2)}$] If $ u\in A$ is invertible then its inverse is
unique and $(u^{-1})^{-1}= u$;
\item [ $\operatorname{(3)}$] If $T$ is surjective then $T(1_A) = 1_B$, i.e., $B$ also has identity;
furthermore, $T(u^{-1})= [T(u)]^{-1}$;
\item [ $\operatorname{(4)}$] If $u, v \in A$ are invertible then
$(u\bullet v )^{-1}= v^{-1}\bullet u^{-1}$;
\item [ $\operatorname{(5)}$] the composite of $K$-semi-algebra homomorphisms is also a
$K$-semi-algebra homomorphism;
\item [ $\operatorname{(6)}$] if $T$ is a $K$-semi-algebra isomorphism then also
is $T^{-1}:B \longrightarrow A$.
\item [ $\operatorname{(7)}$] the relation $A \sim B$ if and only if $A$
is isomorphic to $B$ is an equivalence relation.
\end{itemize}
\end{proposition}
\begin{proof}
Note that Item~$\operatorname{(1)}$ holds because the additive cancellation
law holds in the definition of semi-vector spaces (see Definition~\ref{defSVS}).
We only show Item $\operatorname{(3)}$ since the remaining items are direct.
Let $v \in B$; then there exists $u \in A$ such that $T(u)=v$. It then follows that
$v \bullet T(1_A )= T(u\bullet 1_A)=v$ and $T(1_A ) \bullet v = T(1_A \bullet u)=v$;
which means that $T(1_A)$ is the identity of $B$, i.e., $T(1_A) = 1_B$.
We have: $T(u) \bullet T(u^{-1})= T( u \bullet u^{-1})=T(1_A)=1_B$ and
$T(u^{-1}) \bullet T(u)= T( u^{-1} \bullet u)=T(1_A)=1_B$, which implies
$T(u^{-1})= [T(u)]^{-1}$.
\end{proof}
\begin{proposition}\label{associunitsemi}
If $A$ is a $K$-semi-algebra with identity $1_A$ then $A$ can be embedded in
${\mathcal L}(A, A)$, the semi-algebra of semi-linear operators on $A$.
\end{proposition}
\begin{proof}
For every fixed $v \in A$, define $v^{*}:A \longrightarrow A$ as
$v^{*}(x) = v\bullet x$. It is easy to see that $v^{*}$
is a semi-linear operator on $A$. Define $h: A \longrightarrow {\mathcal L}(A, A)$ by
$h(v)= v^{*}$. We must show that
$h$ is an injective $K$-semi-algebra homomorphism, where the product in
${\mathcal L}(A, A)$ is the composite of maps from $A$ into $A$.
Fixing $u, v \in A$, we have: $[h(u + v)](x)=
(u + v)^{*}(x)= (u + v)\bullet x = u\bullet x + v \bullet x =
u^{*}(x) + v^{*}(x) = [h(u)](x) + [h(v)](x)$,
hence $h(u + v)= h(u) + h(v)$. For $\lambda \in K$ and $v \in A$, it follows
that $[h(\lambda v)](x) = (\lambda v)^{*}(x)= (\lambda v)\bullet x = \lambda (v\bullet x)=
[\lambda h(v)](x)$, i.e., $h(\lambda v)= \lambda h(v)$. For fixed $u, v \in A$,
$[h(u\bullet v)](x)= (u\bullet v)^{*}(x)= (u\bullet v)\bullet x = u\bullet
(v\bullet x)=u\bullet v^{*}(x)=u^{*}(v^{*}(x))=[h(u)\circ h(v)](x)$, i.e.,
$h(u\bullet v)= h(u) \circ h(v)$.
Assume that $h(u)=h(v)$, that is, $u^{*}=v^{*}$; hence, for every $x \in A$,
$u^{*}(x) = v^{*}(x)$, i.e., $u\bullet x = v\bullet x$. Taking in particular
$x=1_A$, it follows that $u = v$, which implies that $h$ is injective. Therefore,
$A$ is isomorphic to $h(A)$, where $h(A)\subseteq {\mathcal L}(A, A)$.
\end{proof}
\begin{definition}\label{semi-Liesemialgebra}
Let $A$ be a semi-vector space over a semi-field $K$. Then $A$ is
said to be a Lie semi-algebra if $A$ is equipped with
a product $[ \ , \ ]: A \times A\longrightarrow A$ such that the following conditions hold:
\begin{itemize}
\item [ $\operatorname{(1)}$] $[ \ , \ ]$ is semi-bilinear, i.e.,
fixing the first (second) variable, $[ \ , \ ]$ is semi-linear w.r.t.
the second (first) one;
\item [ $\operatorname{(2)}$] $[ \ , \ ]$ is anti-symmetric, i.e.,
$[v , v]=0$ $\forall \ v \in A$;
\item [ $\operatorname{(3)}$] $[ \ , \ ]$ satisfies the Jacobi identity:
$\forall \ u, v, w \in A$, $[u, [v,w]]+ [w, [u,v]]+ [v, [w, u]]=0$
\end{itemize}
\end{definition}
From Definition~\ref{semi-Liesemialgebra} we can see that a Lie semi-algebra
can be non-associative, i.e., the product $[ \ , \ ]$ is not always associative.
Let us now consider the semi-algebra ${ \mathcal M}_n ({\mathbb R}_{0}^{+})$
of matrices of order $n$ with entries in ${\mathbb R}_{0}^{+}$
(see Example~\ref{ex6}). We know that ${ \mathcal M}_n ({\mathbb R}_{0}^{+})$ is simple,
i.e., with the exception of the zero matrix (the zero vector), no matrix
has an (additive) symmetric. Nevertheless, the product of two such matrices
can be nonzero. However, in the case of a Lie semi-algebra $A$, if $A$ is
simple then the only product $[ \ , \ ]$ that can be defined over $A$ is
the zero product, as shown in the next result.
\begin{proposition}\label{semi-Lieabelian}
If $A$ is a simple Lie semi-algebra over a semi-field $K$ then the
semi-algebra is abelian, i.e., $[u, v]=0$ for all $u, v \in A$.
\end{proposition}
\begin{proof}
Assume that $u, v \in A$ and $[u, v ] \neq 0$. From
Items~$\operatorname{(1)}$~and~$\operatorname{(2)}$
of Definition~\ref{semi-Liesemialgebra}, it follows that $[u+v , u+v ]
=[u, u] + [u, v] + [v, u] + [v, v]=0$, i.e., $[u, v] + [v, u]=0$.
This means that the nonzero element $[u, v]$ has the (additive) symmetric $[v, u]$,
which contradicts the simplicity of $A$.
\end{proof}
\begin{definition}\label{subLiesemi}
Let $A$ be a Lie semi-algebra over a semi-field $K$. A Lie semi-subalgebra
$B \subseteq A$ is a semi-subspace of $A$ which is closed under
$[u, v ]$, i.e., for all $u, v \in B$, $[u, v] \in B$.
\end{definition}
\begin{corollary}
If $A$ is a simple Lie semi-algebra then all semi-subspaces of $A$ are Lie semi-subalgebras of $A$.
\end{corollary}
\begin{proof}
Apply Proposition~\ref{semi-Lieabelian}.
\end{proof}
\section{Fuzzy Set Theory and Semi-Algebras}\label{sec3a}
The theory of semi-vector spaces and semi-algebras is a natural
generalization of the corresponding theories of vector spaces
and algebras. Since the scalars are in semi-fields (weak semi-fields), some
standard properties do not hold in this new context. However,
as we have shown in Section~\ref{sec3}, even in case of
nonexistence of symmetrizable elements, several results are still true.
An application of the theory of semi-vector spaces lies in the investigation
of Fuzzy Set Theory, which was introduced by Lotfali Askar-Zadeh \cite{Zadeh:1965}.
In fact, such a theory fits naturally into the investigation and extension
of results concerning fuzzy sets and their corresponding theory.
Let us see an example.
Let $L$ be a linearly ordered complete lattice with distinct smallest and
largest elements $0$ and $1$. Recall that a fuzzy number is a function
$x:{\mathbb R}\longrightarrow L$ on the field of real numbers satisfying the following
items (see \cite[Sect. 1.1]{Gahler:1999}): $\operatorname{(1)}$ for each
$\alpha \in L_0$ the set $x_{\alpha}= \{\varphi \in {\mathbb R} |
\alpha \leq x(\varphi)\} $ is a closed interval $[x_{\alpha l} ,
x_{\alpha r}]$, where $L_0= \{ \alpha \in L | \alpha > 0\}$;
$\operatorname{(2)}$ $\{\varphi \in {\mathbb R} | 0 < x(\varphi)\}$
is bounded.
We denote the set ${\mathbb R}_L$ to be the set of all fuzzy numbers;
${\mathbb R}_L$ can be equipped with a partial order in
the following manner: $x \leq y $ if and only if $x_{\alpha l} \leq y_{\alpha l}$
and $x_{\alpha r} \leq y_{\alpha r}$ for all $\alpha \in L_0$. In this scenario,
Gahler et al. showed that the concepts of semi-algebras can be utilized to
extend the concept of fuzzy numbers, according to the following proposition:
\begin{proposition}\cite[Proposition 19]{Gahler:1999}
The set ${\mathbb R}_L$ is an ordered commutative semi-algebra.
\end{proposition}
Thus, a direct utilization of the investigation of the structures of
semi-vector spaces and semi-algebras is the possibility to generate
new interesting results on the Fuzzy Set Theory.
Another work relating semi-vector spaces and Fuzzy Set Theory is the
paper by Bedregal et al. \cite{Milfont:2021}. In order to study
the aggregation functions (geometric mean, weighted average, ordered
weighted averaging, among others) w.r.t. an admissible order
(a total order $\preceq$ on $L_n ([0, 1])$ such that for all $x, y \in L_n ([0, 1])$,
$x \ {\leq}_{n}^{p} \ y \Longrightarrow x\preceq y$), the authors worked with
semi-vector spaces over a weak semi-field.
Let $L_n ([0, 1]) = \{(x_1, x_2 , \ldots , x_n ) \in {[0, 1]}^{n}
| x_1 \leq x_2 \leq \ldots \leq x_n \}$ and $U= ([0, 1],
\oplus , \cdot)$ be a weak semi-field defined as follows: for all $x, y
\in [0, 1]$, $x \oplus y = \min\{ 1, x+y\}$ and $\cdot$ is the
usual multiplication. The product order proposed by Shang
et al.~\cite{Shang:2010} is given as follows: for all vectors $x=
(x_1, x_2 , \ldots , x_n )$ and $y= (y_1, y_2 , \ldots ,
y_n )$ in $L_n ([0, 1])$, define
$x \ {\leq}_{n}^{p} \ y \Longleftrightarrow {\pi}_{i}(x)\leq {\pi}_{i}(y) $
for each $i \in \{1, 2, \ldots , n\}$,
where ${\pi}_i : L_n ([0, 1]) \longrightarrow [0, 1] $ is the
$i$-th projection ${\pi}_i (x_1 , x_2 , \ldots , x_n ) = x_i$.
With these concepts in mind, the authors showed two important results:
\begin{theorem}(see \cite[Theorem 1]{Milfont:2021})\label{mil21}
${\mathcal L}_{n} ([0, 1]) = (L_n ([0, 1]), \dotplus, \odot)$ is a
semi-vector space over $U$, where, for $x=(x_1 , \ldots , x_n ), y=(y_1 , \ldots , y_n )
\in L_n ([0, 1])$ and $r \in [0, 1]$, $r \odot x = (rx_1 ,
\ldots , rx_n )$ and $x \dotplus y = (x_1 \oplus y_1 , \ldots , x_n
\oplus y_n ) $. Moreover, $({\mathcal L}_{n} ([0, 1]),
{\leq}_{n}^{p})$ is an ordered semi-vector space over $U$,
where ${\leq}_{n}^{p}$ is the product order.
\end{theorem}
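For instance, in ${\mathcal L}_{2}([0, 1])$ we have, for $x=(0.2, 0.6)$ and $y=(0.5, 0.7)$,
$$x \dotplus y=(0.2\oplus0.5,\, 0.6\oplus0.7)=(0.7, 1) \quad \mbox{and} \quad 0.5\odot x=(0.1, 0.3),$$
both of which again belong to $L_2 ([0, 1])$; moreover $x \ {\leq}_{2}^{p} \ y$, since $0.2\leq0.5$ and $0.6\leq0.7$.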
\begin{proposition}(see \cite[Proposition 2]{Milfont:2021})
For any bijection $f: \{1, 2 , \ldots , n\}
\longrightarrow \{1, 2 , \ldots , n\}$,
the pair\\ $({\mathcal L}_{n}([0, 1]), {\preceq}_f)$
is an ordered semi-vector space over $U$, where
${\preceq}_f$, defined in
\cite[Example 1]{Milfont:2021}, is an admissible order.
\end{proposition}
As a consequence of this investigation, the authors propose an
algorithm for multi-criteria and multi-expert decision making.
Summarizing the ideas: the further the theory of semi-vector spaces
is extended and developed, the more applications and results
become available in Fuzzy Set Theory. It is therefore important to understand
deeply the algebraic and geometric structures of semi-vector
spaces, providing, in this way, support for the development of the theory itself
as well as of other interesting theories such as
Fuzzy Set Theory.
\section{Summary}\label{sec4}
In this paper we have extended the theory of semi-vector spaces,
where the semi-field of scalars considered here is
the nonnegative real numbers. We have proved several results in the context
of semi-vector spaces and semi-linear transformations. We
introduced the concepts of eigenvalue and eigenvector of a
semi-linear operator and of a matrix and showed how to compute them
in specific cases. Topological properties of semi-vector spaces
such as completeness and separability were also investigated.
We have exhibited interesting new families of semi-vector
spaces derived from semi-metrics, semi-norms and semi-inner products,
among others. Additionally, some results concerning semi-algebras
were presented. The results presented in this paper can be possibly
utilized in the development and/or investigation of new properties of
fuzzy systems and also in the study of correlated areas of research.
\end{document}
\begin{document}
\begin{center}
{\bf Combinatorial Sums $\sum_{k\equiv r(\mbox{mod }
m)}{n\choose k}a^k$ and Lucas Quotients (II)}
\vskip 20pt
{\bf Jiangshuai Yang}\\
{\smallit Key Laboratory of Mathematics Mechanization, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, People's Republic of China}\\
{\tt yangjiangshuai@amss.ac.cn}\\
\vskip 10pt
{\bf Yingpu Deng}\\
{\smallit Key Laboratory of Mathematics Mechanization, NCMIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, People's Republic of China}\\
{\tt dengyp@amss.ac.cn}\\
\end{center}
\vskip 30pt
\centerline{\bf Abstract}
\noindent In \cite{dy}, we obtained some congruences for Lucas quotients of two infinite families of Lucas sequences by studying the combinatorial sum
$$\sum_{k\equiv r(\mbox{mod }m)}{n\choose k}a^k.$$
In this paper, we show that the sum can be expressed in terms of some recurrent sequences with orders not exceeding $\varphi{(m)}$ and
give some new congruences.
\pagestyle{myheadings}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}
\noindent Let $p$ be an odd prime. Using the formula for the sum
$$\sum_{k\equiv r(\mbox{mod }8)}{n\choose k},$$
Sun \cite{s1995} proved that
\[\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{1}{k\cdot2^k}\equiv\sum\limits_{k=1} ^{[\frac{3p}{4}]}\frac{(-1)^{k-1}}{k}\pmod p.\]
Later, Shan and E.~T.~H.~Wang \cite{sw} gave a simple proof of the above congruence. In \cite{sun5}, Sun proved five similar congruences by using the formulas for the Fibonacci quotient and the Pell quotient.
In \cite{s2002}, Sun showed that the sum
$$\sum_{k\equiv r(\mbox{mod }m)}{n\choose k},$$
where $n,m$ and $r$ are integers with $m,n>0$, can be expressed in terms of some recurrent sequences with orders not exceeding $\varphi{(m)}/2$, and obtained the following congruence
\[\sum_{k=1}^{\frac{p-1}{2}}\frac{3^k}{k}\equiv\sum_{k=1}^{\left[\frac{p}{6}\right]}\frac{(-1)^k}{k} \pmod p.
\]
In \cite{dy}, we studied the more general sum
\begin{equation}\label{generalsum}
\sum_{k\equiv r(\mbox{mod }m)}{n\choose k}a^k,
\end{equation}
and obtained congruences for Lucas quotients of two infinite families of Lucas sequences; see \cite[Theorems 4.10 and 5.4]{dy}. In this paper, we continue studying this sum. We show that it can be expressed in terms of some recurrent sequences with orders not exceeding $\varphi{(m)}$, and
obtain some new congruences.
For $x\in\mathbb{R}$, we use $[x]$ to denote the integral part of $x$, i.e., the largest integer $\leq x$. For an odd prime $p$ and an integer $b$, let $\left(\frac bp\right)$ denote the Legendre symbol and $q_p(b)$ denote the Fermat quotient $(b^{p-1}-1)/p$ if $p\nmid b$. When $c,d\in\mathbb{Z}$, as usual $(c,d)$ stands for the greatest common divisor of $c$ and $d$. For any positive integer $m$, let $\zeta_m=e^{\frac{2\pi i}{m}}$ be the primitive $m$-th root of unity and let $\varphi({m})$, $\mu{(m)}$ denote the Euler totient function and the M\"{o}bius function, respectively. Throughout this paper, we fix $a\neq 0,\pm1$.
\section{Main Results}
\begin{definition}\label{defsum}
{\rm Let $n,m,r$ be integers with $n>0$ and $m>0$. We define
$$\left[\begin{array}{c}n \\ r\\\end{array}\right] _{m}(a):=\sum_{\substack{k=0\\k\equiv r({\mbox{mod }}m)}}^n\binom nk a^k,$$
where ${n\choose k}$ is the binomial coefficient with the convention ${n\choose k}=0$ for $k<0$ or $k>n$.}
\end{definition}
\noindent Then we have the following theorem.
\begin{theorem}\label{Maintheorem}
Let $m,n\in\mathbb{Z}^+$, and $k\in\mathbb{Z}$. Write
$$W_{n}(k,m)=\sum_{\substack{l=1\\(l,m)=1}}^m\zeta_m^{-kl}(1+a\zeta_m^l)^n,$$
and
$$A_{m}(x)=\prod_{\substack{l=1\\(l,m)=1}}^m(x-1-a\zeta_m^l)=\sum\limits_{s=0}^{\varphi(m)}b_sx^s.$$
Then
$$A_{m}(x)\in \mathbb{Z}[x] \quad and \quad\sum\limits_{s=0}^{\varphi(m)}b_sW_{n+s}(k,m)=0.$$
Moreover, for any $r\in\mathbb{Z}$ we have
\begin{equation*}
\left[\begin{array}{c}n \\r\\\end{array}\right] _{m}(a)=\frac1m\sum\limits_{d\mid m }W_{n}(r,d).
\end{equation*}
\end{theorem}
\begin{proof}
It is easy to see that the coefficients of $A_{m}(x+1)$ are symmetric polynomials, with integer coefficients, in the primitive $m$-th roots of unity. Since
\[\Phi_m(x)=\prod_{\substack{l=1\\(l,m)=1}}^m(x-\zeta_m^l)\in\mathbb{Z}[x],\]
we get $A_{m}(x+1)\in \mathbb{Z}[x]$ by the Fundamental Theorem of Symmetric Polynomials. Therefore $A_{m}(x)\in \mathbb{Z}[x].$
For any positive integer $n$, we clearly have
\begin{align*}
\sum\limits_{s=0}^{\varphi(m)}b_sW_{n+s}(k,m)
&=\sum\limits_{s=0}^{\varphi(m)}b_s\sum_{\substack{l=1\\(l,m)=1}}^m\zeta_m^{-kl}(1+a\zeta_m^l)^{n+s}\\
&=\sum_{\substack{l=1\\(l,m)=1}}^m\zeta_m^{-kl}(1+a\zeta_m^l)^{n}\sum\limits_{s=0}^{\varphi(m)}b_s(1+a\zeta_m^l)^{ s}\\
&=\sum_{\substack{l=1\\(l,m)=1}}^m\zeta_m^{-kl}(1+a\zeta_m^l)^{n}A_{m}(1+a\zeta_m^l)\\
&=0.
\end{align*}
Let $r\in\mathbb{Z}$, then we have
\begin{align*}
\left[\begin{array}{c}n \\ r\\\end{array}\right] _{m}(a)
&=\sum\limits_{k=0}^n\binom nka^k\cdot\frac1m\sum\limits_{l=1}^m\zeta_m^{(k-r)l}\\
&=\frac1m\sum\limits_{l=1}^m\zeta_m^{-rl}(1+a\zeta_m^l)^n\\
&=\frac1m\sum\limits_{d\mid m}\sum_{\substack{b=1\\(b,d)=1}}^d\zeta_d^{-rb}(1+a\zeta_d^b)^n\\
&=\frac1m\sum\limits_{d\mid m}W_{n}(r,d).
\end{align*}
This ends the proof.
\end{proof}
Note that the theorem is a generalization of Theorem 1 of \cite{s2002}.
\begin{remark}\label{Wremark}
The last result shows that $\left[\begin{array}{c}n \\ r\\\end{array}\right] _{m}(a)$ can be expressed in terms of some linearly recurrent sequences with orders not exceeding $\varphi{(m)}.$
\end{remark}
Now we list $A_{m}(x)$ for $1\leq m\leq6$:
\begin{align*}
&A_{1}(x)=x-1-a, A_{2}(x)=x-1+a,\\
&A_{3}(x)=x^2-(2-a)x+a^2-a+1, A_{4}(x)=x^2-2x+a^2+1,\\
&A_{5}(x)=x^4-(4-a)x^3+(a^2-3a+6)x^2+(a^3-2a^2+3a-4)x+a^4-a^3+a^2-a+1,\\
&A_{6}(x)=x^2-(a+2)x+a^2+a+1.
\end{align*}
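For instance, for $m=2$ the polynomial $A_{2}(x)=x-1+a$ encodes the one-term recurrence $W_{n+1}(k,2)=(1-a)W_{n}(k,2)$, in agreement with $W_{n}(k,2)=(-1)^{k}(1-a)^{n}$, and Theorem \ref{Maintheorem} recovers the classical identity
$$\left[\begin{array}{c}n \\ r\\\end{array}\right] _{2}(a)=\frac1{2}\left(W_{n}(r,1)+W_{n}(r,2)\right)=\frac{(1+a)^{n}+(-1)^{r}(1-a)^{n}}{2}.$$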
\begin{lemma}\textup{(\cite{s2002})}\label{Molemma}
Let $m,c$ be integers with $m>0$. Then we have
\begin{equation*}
\sum\limits_{d\mid m}\mu(\frac md)d\delta_{d\mid c}=\varphi(m)\frac{\mu(m/(c,m))}{\varphi(m/(c,m))},
\end{equation*}
where
\[\delta_{d\mid c}=\begin{cases}1,&\mbox{ if }d\mid c \mbox{ holds};\\0,&\mbox{otherwise}.\end{cases} \]
\end{lemma}
\begin{proof}
Both sides are multiplicative with respect to $m$; thus we only need to prove the identity when $m$ is a prime power. For any prime $p$ and positive integer $k$, we have
\begin{align*}
\sum\limits_{d\mid p^k}\mu(\frac {p^k}d)d\delta_{d\mid c}
&=\sum\limits_{s=0}^k\mu(p^{k-s})p^s\delta_{p^s\mid c}\\
&=p^k\delta_{p^k\mid c}-p^{k-1}\delta_{p^{k-1}\mid c}\\
&=\begin{cases}p^k-p^{k-1}&\textup{if}\; p^k\mid c,\\
-p^{k-1}&\textup{if}\;p^{k-1}\parallel c,\\
0&\textup{if} \;p^{k-1}\nmid c.\end{cases}\\
&=\varphi(p^k)\frac{\mu(p^k/(c,p^k))}{\varphi(p^k/(c,p^k))}.
\end{align*}
This concludes the proof.
\end{proof}
\begin{theorem}\label{Wtheorem}
Let $m,n\in\mathbb{Z}^+ ,r\in\mathbb{Z}$. Then
\begin{equation*}
W_{n}(r,m)=\varphi(m)\sum\limits_{k=0}^n\frac{\mu(m/(k-r,m))}{\varphi(m/(k-r,m))}\binom nka^k.
\end{equation*}
\end{theorem}
\begin{proof}
By Theorem \ref{Maintheorem}, Lemma \ref{Molemma} and the M\"{o}bius Inversion Theorem, we have
\begin{align*}
W_{n}(r,m)
&=\sum\limits_{d\mid m}\mu(\frac md)d\left[\begin{array}{c}n \\r\\\end{array}\right]_d(a)\\
&=\sum\limits_{d\mid m}\mu(\frac md)d\sum\limits_{k=0}^n\binom nka^k\delta_{d\mid k-r} \\
&= \sum\limits_{k=0}^n\binom nka^k\sum\limits_{d\mid m}\mu(\frac md)d\delta_{d\mid k-r}\\
&=\varphi(m)\sum\limits_{k=0}^n\frac{\mu(m/(k-r,m))}{\varphi(m/(k-r,m))}\binom nka^k.
\end{align*}
\end{proof}
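As a quick check of Theorem \ref{Wtheorem}, take $m=2$, $n=2$ and $r=0$. Then $W_{2}(0,2)=(1-a)^{2}$, while the right-hand side equals
$$\varphi(2)\left(\frac{\mu(1)}{\varphi(1)}\binom20+\frac{\mu(2)}{\varphi(2)}\binom21a+\frac{\mu(1)}{\varphi(1)}\binom22a^{2}\right)=1-2a+a^{2},$$
as expected.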
\begin{corollary}\label{Wcorollary}
Let $m,n$ be two relatively prime positive integers. Then we have
\begin{equation*}
W_{n}(0,m)-\varphi(m)-\mu(m)a^n=\varphi(m)n\sum\limits_{k=1}^{n-1}\frac{\mu(m/(m,k))}{\varphi(m/(m,k))}\binom{n-1}{k-1}\frac{a^k}{k}
\end{equation*}
and
\begin{equation*}
W_{n}(n,m)-\varphi(m)a^n-\mu(m)=\varphi(m)n\sum\limits_{k=1}^{n-1}\binom{n-1}{k-1}\frac{\mu(m/(m,k))}{\varphi(m/(m,k))}\frac{a^{n-k}}{k}.
\end{equation*}
\end{corollary}
\begin{proof}
Since $\binom nk=\frac nk\binom{n-1}{k-1}$ for $1\leq k\leq n$, we can derive the results by setting $r=0$ and $r=n$, respectively, in Theorem \ref{Wtheorem}.
\end{proof}
\begin{corollary}\label{Wpcorlllary}
Let $m\in\mathbb{Z}^+$ and $p$ be an odd prime not dividing $am$. Then we have
\begin{equation*}
\frac{W_{p}(0,m)- \varphi(m)-\mu(m)a^p}{p}\equiv-\varphi(m)\sum\limits_{k=1}^{p-1}\frac{\mu(m/(m,k))}{\varphi(m/(m,k))}\cdot\frac{(-a)^k}{k}\pmod p,
\end{equation*}
and
\begin{equation*}
\frac{W_{p}(p,m)-\varphi(m)a^p-\mu(m)}{p}\equiv\varphi(m)\sum\limits_{k=1}^{p-1}\frac{\mu(m/(m,k))}{\varphi(m/(m,k))}\cdot\frac{1}{k(-a)^{k-1}}\pmod p.
\end{equation*}
\end{corollary}
\begin{proof}
Since $\binom{p-1}{k}\equiv(-1)^k\pmod p$ for $0\leq k\leq p-1$, the results follow from Corollary \ref{Wcorollary}.
\end{proof}
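For example, specializing the first congruence to $m=1$, where $W_{p}(0,1)=(1+a)^{p}$ and $\varphi(1)=\mu(1)=1$, gives the classical congruence
$$\frac{(1+a)^{p}-1-a^{p}}{p}\equiv-\sum\limits_{k=1}^{p-1}\frac{(-a)^{k}}{k}\pmod p.$$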
\section{Some New Congruences}
In this section, we give some new congruences by using the results of \cite{dy}.
\begin{lemma}\label{3uvlemma}
Let $p\nmid 3a(2-a)(a^3+1)$ be an odd prime, and $\{u_n\}_{n\geq0},\{v_n\}_{n\geq0}$ be the Lucas sequences defined as
$$u_0=0,\;u_1=1,\;u_{n+1}=(2-a)u_n-(a^2-a+1)u_{n-1}\;\textup{for}\;n\geq1;$$
$$v_0=2,\;v_1=(2-a),\;v_{n+1}=(2-a)v_n-(a^2-a+1)v_{n-1}\;\textup{for}\;n\geq1.$$
Then we have: \\
\begin{description}
\item[(1)]
\[ \frac{u_p-\left(\frac{-3}{p}\right)}{p}\equiv\sum\limits_{k=1}^{\frac{p-1}2}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}
+\left(\frac{-3}{p}\right)\left(q_p(a)-q_p(2)+\frac12q_p(3)\right)\pmod p;\]
\item[(2)]
\[ \quad\frac{v_{p}-(2-a)}{p}\equiv
(2-a)\left[-\frac12\sum\limits_{k=1}^{\frac{p-1}2}\frac{(-3)^k}{k}\cdot\left(\frac{a}{2-a}\right)^{2k}-q_p(2)+q_p(2-a)\right]\pmod p.\]
\end{description}
\end{lemma}
\begin{proof}
By Lemmas 2.1 and 2.2 of \cite{dy}, we have
$u_p=\frac1{a\sqrt{-3}} \left[\left(\frac{2-a}{2}+\frac a2\sqrt{-3}\right)^p- \left(\frac{2-a}{2}-\frac a2\sqrt{-3}\right)^p\right]$
, $v_p= \left(\frac{2-a}{2}+\frac a2\sqrt{-3}\right)^p+ \left(\frac{2-a}{2}-\frac a2\sqrt{-3}\right)^p$, and $u_p\equiv\left(\frac{-3}{p}\right)\pmod p$, $v_p=(2-a)u_p-2(a^2-a+1)u_{p-1} =2u_{p+1}-(2-a)u_p\equiv(2-a)\pmod p$. Then
\begin{align*}
2^{p-1}u_p
&=\sum_{\substack{k=0\\k\; odd}}^{p}\binom pk(2-a)^{p-k}(a\sqrt{-3})^{k-1}\\
&=a^{p-1}(-3)^{\frac{p-1}{2}}+\sum\limits_{k=1}^{\frac{p-1}{2}}\binom p{2k-1}(2-a)^{p-2k+1}a^{2k-2}(-3)^{k-1}\\
&=a^{p-1}(-3)^{\frac{p-1}{2}}+p\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\binom {p-1}{2k-2}(2-a)^{p-2k+1}a^{2k-2}\\
&\equiv a^{p-1}(-3)^{\frac{p-1}{2}}+ p\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}\pmod {p^2},\\
\end{align*}
and
\begin{align*}
2^{p-1}v_p
&=\sum_{\substack{k=0\\k\; even}}^{p}\binom pk(2-a)^{p-k}(a\sqrt{-3})^k\\
&=(2-a)^p+\sum\limits_{k=1}^{\frac{p-1}{2}}\binom p{2k}(2-a)^{p-2k}a^{2k}(-3)^k\\
&=(2-a)^p+p\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^k}{2k}\binom {p-1}{2k-1}(2-a)^{p-2k}a^{2k}\\
&\equiv(2-a)^p-\frac{2-a}2 p\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^k}{k}\cdot\left(\frac{a}{2-a}\right)^{2k}\pmod {p^2}.\\
\end{align*}
Hence (1) and (2) follow from Lemma 2.6(1) of \cite{dy}.
\end{proof}
\begin{corollary}\label{30corollary}
Let $p\nmid 3a(2-a)(a^3+1)$ be an odd prime. Then we have
\[\sum\limits_{k=1}^{[\frac{p}{3}]}\frac{(-a)^{3k}}{k}\equiv(2-a)\left[\frac12\sum\limits_{k=1}^{\frac{p-1}2}\frac{(-3)^k}{k}\cdot\left(\frac{a}{2-a}\right)^{2k}
+q_p(2)-q_p(2-a)\right]-(a+1)q_p(a+1)\pmod p.\]
\end{corollary}
\begin{proof}
The result follows from Lemma 4.9 of \cite{dy} and Lemma \ref{3uvlemma}(2).
\end{proof}
\begin{theorem}\label{31theorem}
Let $p\nmid 3a(a-1)(2-a)(a^3+1)$ be an odd prime, and $\{u_n\}_{n\geq0}$ be the Lucas sequence defined as
$$u_0=0,\;u_1=1,\;u_{n+1}=(2-a)u_n-(a^2-a+1)u_{n-1}\;\textup{for}\;n\geq1.$$
\begin{description}
\item[(1)] If $p\equiv1\pmod 3$, we have
\[\frac{u_{p-1}}{p}\equiv-\frac{2}{a(a-1)}\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}+\frac{a+1}{3a(a-1)}\left(q_p(a^2-a+1)-2q_p(a+1)\right)\pmod p\]
and
\begin{align*}
\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}
&\equiv\frac{a(a-1)}{a-2}\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}\\
&+\frac{a(a-1)}{a-2} [q_p(a)-q_p(2)+\frac12q_p(3)]\\
&-\frac{1}{3}(a+1)q_p(a+1)- \frac{ a^2-a+1}{3(a-2)}q_p(a^2-a+1) \pmod p.
\end{align*}
\item[(2)] If $p\equiv2\pmod 3$, we have
\[\frac{u_{p+1}}{p}\equiv\frac{2(a^2-a+1)}{a(a-1)}\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}-\frac{a^3+1}{3a(a-1)}\left(q_p(a^2-a+1)-2q_p(a+1)\right)\pmod p\]
and
\begin{align*}
\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}
&\equiv-\frac{a(a-1)}{a-2}\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}\\
&\quad+\frac{a(a-1)}{a-2} [q_p(a)-q_p(2)+\frac12q_p(3)]\\
&\quad-\frac{1}{3}(a+1)q_p(a+1)-\frac{ a^2-a+1}{3(a-2)}q_p(a^2-a+1) \pmod p.
\end{align*}
\end{description}
\end{theorem}
\begin{proof}
Since $(2-a)^2-4(a^2-a+1)=-3a^2$, we have $p\mid u_{p-\left(\frac{-3}{p}\right)}$ by Lemma 2.2 of \cite{dy}.
Let $\{v_n\}_{n\geq0}$ be the Lucas sequence defined as in Lemma \ref{3uvlemma}.
(1) By Lemma 2.1 and Theorem 4.1 of \cite{dy}, we have $-(a+1)u_p+(a^2-a+1)u_{p-1}=3\left[\begin{array}{c}p \\ 2\\\end{array}\right] _{3}(a)-(1+a)^p$ and $v_{p-1}=2u_p-(2-a)u_{p-1}$. Thus by Lemma 2.4 of \cite{dy}, we have
\begin{align*}
3a(a-1)u_{p-1}
&=6\left[\begin{array}{c}p \\ 2\\\end{array}\right] _{3}(a)-2(1+a)^p+(a+1)v_{p-1}\\
&\equiv-6p\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}+(a+1)\left[(v_p-2)-2((a+1)^{p-1}-1)\right]\pmod{p^2}
\end{align*}
and
\begin{align*}
3a(a-1)(u_{p}-1)
&=3(2-a)\left[\begin{array}{c}p \\ 2\\\end{array}\right] _{3}(a)-(2-a)(1+a)^p+(a^2-a+1)v_{p-1}-3a(a-1)\\
&\equiv-3(2-a)p\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}-(2-a)(1+a)((a+1)^{p-1}-1)\\
&\quad+(a^2-a+1)(v_{p-1}-2) \pmod{p^2}.
\end{align*}
Hence, by Lemma 2.7 of \cite{dy} and Lemma \ref{3uvlemma}(1),
\[\frac{u_{p-1}}{p}\equiv-\frac{2}{a(a-1)}\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}+\frac{a+1}{3a(a-1)}\left(q_p(a^2-a+1)-2q_p(a+1)\right)\pmod p,\]
and
\begin{align*}
\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-a)^{3k-1}}{3k-1}
&\equiv\frac{a(a-1)}{a-2}\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}\\
&+\frac{a(a-1)}{a-2} [q_p(a)-q_p(2)+\frac12q_p(3)]\\
&-\frac{1}{3}(a+1)q_p(a+1)- \frac{ a^2-a+1}{3(a-2)}q_p(a^2-a+1) \pmod p.
\end{align*}
(2) By Lemma 2.1 and Theorem 4.1 of \cite{dy}, we have $-u_{p+1}+(a+1)u_{p}=3\left[\begin{array}{c}p \\ 1\\\end{array}\right] _{3}(a)-(1+a)^p$ and $v_{p+1}=(2-a)u_{p+1}-2(a^2-a+1)u_p$. Thus by Lemma 2.4 of \cite{dy}, we have
\begin{align*}
-3a(a-1) u_{p+1}
&=6(a^2-a+1)\left[\begin{array}{c}p \\ 1\\\end{array}\right] _{3}(a)-2(a^2-a+1)(1+a)^p+(a+1)v_{p+1}\\
&\equiv-6(a^2-a+1)p\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}-2(a^2-a+1)(a+1)\left[(a+1)^{p-1}-1\right]\\
&\quad+(a+1)\left[v_{p+1}-2(a^2-a+1)\right]\pmod{p^2}
\end{align*}
and
\begin{align*}
-3a(a-1)(u_{p}+1)
&=3(2-a)\left[\begin{array}{c}p \\ 1\\\end{array}\right] _{3}(a)- (2-a)(1+a)^p+ v_{p+1}-3a(a-1)\\
&\equiv-3 (2-a )p\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}- (2-a)(a+1)\left[(a+1)^{p-1}-1\right]\\
&\quad +v_{p+1}-2(a^2-a+1) \pmod{p^2}.
\end{align*}
Hence, by Lemma 2.7 of \cite{dy} and Lemma \ref{3uvlemma}(1),
\[\frac{u_{p+1}}{p}\equiv\frac{2(a^2-a+1)}{a(a-1)}\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}-\frac{a^3+1}{3a(a-1)}\left(q_p(a^2-a+1)-2q_p(a+1)\right)\pmod p,\]
and
\begin{align*}
\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-a)^{3k-2}}{3k-2}
&\equiv-\frac{a(a-1)}{a-2}\sum\limits_{k=1}^{\frac{p-1}{2}}\frac{(-3)^{k-1}}{2k-1}\cdot\left(\frac{a}{2-a}\right)^{2k-2}\\
&+\frac{a(a-1)}{a-2} [q_p(a)-q_p(2)+\frac12q_p(3)]\\
&-\frac{1}{3}(a+1)q_p(a+1)-\frac{ a^2-a+1}{3(a-2)}q_p(a^2-a+1) \pmod p.
\end{align*}
\end{proof}
Setting $a=-2$ in Corollary \ref{30corollary} and Theorem \ref{31theorem}, we obtain the following two corollaries.
\begin{corollary}
Let $p\neq3,7$ be an odd prime. Then we have
\begin{description}
\item[(1)]
\[\sum\limits_{k=1}^{[\frac p3]}\frac{8^k}{k}\equiv\sum\limits_{k=1}^{\frac {p-1}{2}}\frac{2}{k}\cdot\left(-\frac34\right)^k-4q_p(2)\pmod p.\]
\item[(2)]
If $p\equiv1 \pmod 3$,
\[\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{8^k}{3k-1}\equiv4\sum\limits_{k=1}^{\frac {p-1}{2}}\frac{1}{2k-1}\cdot\left(-\frac34\right)^k-\frac 32q_p(3)+\frac 76q_p(7)\pmod p.\]
If $p\equiv2 \pmod 3$,
\[\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{8^k}{3k-2}\equiv-8\sum\limits_{k=1}^{\frac {p-1}{2}}\frac{1}{2k-1}\cdot\left(-\frac34\right)^k-3q_p(2)+\frac 73q_p(7)\pmod p.\]
\end{description}
\end{corollary}
\begin{corollary}\label{-2corollary}
Let $p\neq3,7$ be an odd prime, and $\{u_n\}_{n\geq0}$ be the Lucas sequence defined as
$$u_0=0,\;u_1=1,\;u_{n+1}=4u_n-7u_{n-1}\;\textup{for}\;n\geq1.$$
Then, if $p\equiv1\pmod 3$,
\[\frac{u_{p-1}}p\equiv-\frac{1}{6}\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{8^k}{3k-1}-\frac{1}{18} q_p(7)\pmod p,\]
if $p\equiv2\pmod 3$,
\[\frac{u_{p+1}}p\equiv\frac{7}{12}\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{8^k}{3k-2}+\frac{7}{18} q_p(7)\pmod p.\]
\end{corollary}
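As a quick numerical check of the second congruence, take $p=5\equiv2\pmod 3$: then $u_6=-180$, so $u_{p+1}/p=-36\equiv4\pmod 5$, while $\frac{7}{12}\left(\frac{8}{1}+\frac{64}{4}\right)+\frac{7}{18}q_5(7)\equiv\frac{7}{12}\cdot24\equiv14\equiv4\pmod 5$, since $q_5(7)=480\equiv0\pmod 5$.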
The following theorem reduces the number of summation terms occurring in the expressions for the Lucas quotients in Corollary 4.11 of \cite{dy} and Corollary~\ref{-2corollary}.
\begin{theorem}
Let $p\neq3,7$ be an odd prime, and $\left\{u_n\right\}_{n\geq0}$ be the Lucas sequence defined as in Corollary~\ref{-2corollary}.
Then if $p\equiv1\pmod3$,
\begin{align*}
\frac{u_{p-1}}p
&\equiv\frac16\sum\limits_{k=1}^{\frac{p-1}{6}}\frac{64^k}{k}+\frac13q_p(7)+\frac12q_p(3)\\
&\equiv-\frac{1}{3}\sum\limits_{k=1}^{\frac{p-1}{6}}\frac{64^k}{6k-1}-\frac{1}{18} q_p(7)+\frac 16q_p(3) \pmod p,
\end{align*}
if $p\equiv2\pmod3$,
\begin{align*}
\frac{u_{p+1}}p
&\equiv-\frac76\sum\limits_{k=1}^{\frac{p-5}{6}}\frac{64^k}{k}-\frac73q_p(7)-\frac72q_p(3)\\
&\equiv\frac{7}{6}\sum\limits_{k=1}^{\frac{p+1}{6}}\frac{64^k}{6k-2}+\frac{7}{18} q_p(7)+\frac 76q_p(3)\pmod p.
\end{align*}
\end{theorem}
\begin{proof}
By Lemma 2.4 and Theorem 4.5 of \cite{dy}, if $p\equiv1\pmod 3$,
\[3^{p-1}-(-3)^{\frac{p-1}{2}}=\left[\begin{array}{c}p \\2 \\\end{array}\right] _3(2)\equiv-p\sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-2)^{3k-1}}{3k-1}\pmod{p^2},\]
if $p\equiv2\pmod 3$,
\[3^{p-1}+(-3)^{\frac{p-1}{2}}=\left[\begin{array}{c}p \\1 \\\end{array}\right] _3(2)\equiv-p\sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-2)^{3k-2}}{3k-2}\pmod{p^2}.\]
Thus by the Lemma 2.6 of \cite{dy}, we have
\[ \sum\limits_{k=1}^{\frac{p-1}{3}}\frac{(-8)^{k}}{3k-1}\equiv q_p(3)\pmod p \;\textup{if} \;p\equiv1 \pmod 3,\] and
\[ \sum\limits_{k=1}^{\frac{p+1}{3}}\frac{(-8)^{k}}{3k-2}\equiv-2q_p(3) \pmod p \;\textup{if} \;p\equiv2 \pmod 3.\]
Hence the results follow from Corollaries 4.7 and 4.11 of \cite{dy} and Corollary \ref{-2corollary}.
\end{proof}
\section{A Specific Lucas Sequence}
\noindent Let $A,B\in\mathbb{Z}$. The Lucas sequences $u_n=u_n(A,B)(n\in\mathbb{N})$ and $v_n=v_n(A,B)(n\in\mathbb{N})$ are defined by
\[u_0=0,\;u_1=1,\; u_{n+1}=Bu_n-Au_{n-1}(n\geq1);\]
\[v_0=2,\;v_1=B,\; v_{n+1}=Bv_n-Av_{n-1}(n\geq1).\]
Next we give some properties of the Lucas sequences with $A=5$ and $B=2$. We need some lemmas. Let $D=B^2-4A.$
\begin{lemma}\label{ulucasmod}
Let $p$ be an odd prime not dividing $DA$.
\begin{description}
\item[(1)]
If $p\equiv1\pmod 4$, then $p\mid u_{\frac{p-1}{4}}$ if and only if $v_{\frac{p-1}{2}}\equiv2A^{\frac{p-1}{4}}\pmod p$ and
$p\mid v_{\frac{p-1}{4}}$ if and only if $v_{\frac{p-1}{2}}\equiv-2A^{\frac{p-1}{4}}\pmod p$.
\item[(2)] If $p\equiv3\pmod 4$, then $p\mid u_{\frac{p+1}{4}}$ if and only if $v_{\frac{p+1}{2}}\equiv2A^{\frac{p+1}{4}}\pmod p$ and
$p\mid v_{\frac{p+1}{4}}$ if and only if $v_{\frac{p+1}{2}}\equiv-2A^{\frac{p+1}{4}}\pmod p$.
\end{description}
\end{lemma}
\begin{proof}
(1) and (2) follow from the fact that $v_{2n}=v_n^2-2A^n=Du_n^2+2A^n.$
\end{proof}
\begin{lemma}\textup{(\cite{sun4})}\label{uvlucasmod}
Let $p$ be an odd prime and $A'$ be an integer such that $4A'\equiv B^2-4A\pmod p $. Let
$u_n'=u_n(A',B),\;v_n'=v_n(A',B)$. Then we have
\[
u_{\frac{p+1}{2}}\equiv\frac12\left(\frac2p\right)v'_{\frac{p-1}{2}}\pmod p,\;u_{\frac{p-1}{2}}\equiv-\left(\frac2p\right)u'_{\frac{p-1}{2}}\pmod p,
\]
\[v_{\frac{p+1}{2}}\equiv\left(\frac2p\right)v'_{\frac{p+1}{2}}\pmod p,\;v_{\frac{p-1}{2}}\equiv2\left(\frac2p\right)u'_{\frac{p+1}{2}}\pmod p.
\]
\end{lemma}
\begin{remark}
\begin{description}
\item[(1)] Let $S_n=u_n(1,4),\;T_n=v_n(1,4)$. For any prime $p>3$, by the facts that $u'_n=u_n(3,4)=\frac{1}{2}(3^n-1)$ and $v_n'=v_n(3,4)=3^n+1$, we have
\begin{align*}
&S_{\frac{p+1}{2}}\equiv\frac12\left(\frac{2}{p}\right)\left[\left(\frac{3}{p}\right)+1\right]\pmod p,\quad
S_{\frac{p-1}{2}}\equiv-\frac12\left(\frac{2}{p}\right)\left[\left(\frac{3}{p}\right)-1\right]\pmod p,\\
&T_{\frac{p+1}{2}}\equiv \left(\frac{2}{p}\right)\left[3\left(\frac{3}{p}\right)+1\right]\pmod p,\quad
T_{\frac{p-1}{2}}\equiv\left(\frac{2}{p}\right)\left[3\left(\frac{3}{p}\right)-1\right]\pmod p.
\end{align*}
Thus by Lemma \ref{ulucasmod}, $p\mid S_{[\frac{p+1}{4}]}$ iff $p\equiv1,19\pmod {24}$ and
$p\mid T_{[\frac{p+1}{4}]}$ iff $p\equiv7,13\pmod {24}$. Sun \cite{s2002} obtained these by studying the sum (\ref{generalsum}) for $a=1$ and $m=12$;
\item[(2)] Let $P_n=u_n(-1,2),\;Q_n=v_n(-1,2)$ and $u_n'=u_n(2,2),\;v_n'=v_n(2,2).$
For any odd prime $p$, by the facts that $u_{4n}'=0,u_{4n+1}'=(-4)^n,u_{4n+2}'=u_{4n+3}'=2(-4)^n$ and
$v_{4n}'=v_{4n+1}'=2(-4)^n,v_{4n+2}'=0,v_{4n+3}'=(-4)^{n+1}$, we have
\[P_{\frac{p-\left(\frac2p\right)}{2}}\equiv\begin{cases}0\pmod p,& \textup{if}\,p\equiv1\pmod 4,\\(-1)^{[\frac{p+5}{8}]}2^{\frac{p-3}{4}}\pmod p,& \textup{if}\;p\equiv3\pmod 4,\end{cases}
\]
\[Q_{\frac{p-\left(\frac2p\right)}{2}}\equiv\begin{cases}(-1)^{[\frac{p}{8}]}2^{\frac{p+3}{4}}\pmod p,& \textup{if}\;p\equiv1\pmod 4,\\0\pmod p,& \textup{if}\;p\equiv3\pmod 4,\end{cases}\]
and
\[P_{\frac{p+\left(\frac2p\right)}{2}}\equiv(-1)^{[\frac{p+1}{8}]}2^{[\frac{p }{4}]}\pmod p,\quad Q_{\frac{p+\left(\frac2p\right)}{2}}\equiv(-1)^{[\frac{p+5}{8}]}2^{[\frac{p+5}{4}]}\pmod p.\]
\end{description}
Sun \cite{sun2} obtained these by studying the sum (\ref{generalsum}) for $a=1$ and $m=8$.
\end{remark}
\begin{lemma}\label{uv1lucasmod}
Let $p\nmid B$ be an odd prime and $A'$ be an integer such that $A'\equiv \frac{A}{B^2}\pmod p $. Let
$u_n'=u_n(A',1),\;v_n'=v_n(A',1).$ Then we have
\[u_{\frac{p+1}{2}}\equiv\left(\frac Bp\right)u'_{\frac{p+1}{2}}\pmod p,\quad u_{\frac{p-1}{2}}\equiv\frac1B\left(\frac Bp\right)u_{\frac{p-1}{2}}'\pmod p,\]
\[v_{\frac{p+1}{2}}\equiv B\left(\frac Bp\right)v'_{\frac{p+1}{2}} \pmod p,\quad v_{\frac{p-1}{2}}\equiv \left(\frac Bp\right)v_{\frac{p-1}{2}}'\pmod p.\]
\end{lemma}
\begin{proof}
By Lemma 2.1 of \cite{dy} and $D'=1-4A'\equiv\frac{D}{B^2}\pmod p,$ we have
\begin{align*}
u_{n}
&=\sum_{\substack{k=0\\k\;odd}}^{n}\binom{n}{k}\left(\frac{B}2\right)^{n-k}\left(\frac{D}4\right)^{\frac{k-1}{2}}\\
&=B^{n-1}\sum_{\substack{k=0\\k\;odd}}^{n}\binom{n}{k}\left(\frac{1}2\right)^{n-k}\left(\frac{D}{4B^2}\right)^{\frac{k-1}{2}}\\
&\equiv B^{n-1}\sum_{\substack{k=0\\k\;odd}}^{n}\binom{n}{k}\left(\frac{1}2\right)^{n-k}\left(\frac{D'}{4}\right)^{\frac{k-1}{2}}\\
&=B^{n-1}u'_n\pmod p,
\end{align*}
and
\begin{align*}
v_{n}
&=2\sum_{\substack{k=0\\k\;even}}^{n}\binom{n}{k}\left(\frac{B}2\right)^{n-k}\left(\frac{D}4\right)^{\frac{k}{2}}\\
&=2B^{n}\sum_{\substack{k=0\\k\;even}}^{n}\binom{n}{k}\left(\frac{1}2\right)^{n-k}\left(\frac{D}{4B^2}\right)^{\frac{k}{2}}\\
&\equiv2B^{n}\sum_{\substack{k=0\\k\;even}}^{n}\binom{n}{k}\left(\frac{1}2\right)^{n-k}\left(\frac{D'}{4}\right)^{\frac{k}{2}}\\
&=B^{n}v'_n\pmod p.
\end{align*}
Thus
\begin{align*}
&u_{\frac{p+1}{2}}\equiv B^{\frac{p-1}{2}}u'_{\frac{p+1}{2}}\equiv\left(\frac Bp\right)u'_{\frac{p+1}{2}}\pmod p,\\
& u_{\frac{p-1}{2}}\equiv B^{\frac{p-3}{2}}u'_{\frac{p-1}{2}}\equiv\frac1B\left(\frac Bp\right)u_{\frac{p-1}{2}}'\pmod p, \\
&v_{\frac{p+1}{2}}\equiv B^{\frac{p+1}{2}}v'_{\frac{p+1}{2}}\equiv B\left(\frac Bp\right)v'_{\frac{p+1}{2}} \pmod p,\\
& v_{\frac{p-1}{2}}\equiv B^{\frac{p-1}{2}}v'_{\frac{p-1}{2}}\equiv \left(\frac Bp\right)v_{\frac{p-1}{2}}'\pmod p.
\end{align*}
\end{proof}
\begin{theorem} \label{52lucas}
Let $p\neq5$ be an odd prime and $\{U_n\}_{n\geq0}$ and $\{V_n\}_{n\geq0}$ be the Lucas sequences defined as
$$U_0=0,U_1=1,U_{n+1} =2U_{n}-5U_{n-1} \;\textup{for}\;n\geq1;$$
$$V_0=2,V_1=2,V_{n+1}=2V_{n}-5V_{n-1}\;\textup{for}\;n\geq1.$$
\begin{description}
\item[(1)] If $p\equiv\pm1\pmod 5$, we have
\begin{align*}
&U_{\frac{p+\left(\frac{-1}{p}\right)}{2}}\equiv\left(\frac{-1}{p}\right)(-1)^{[\frac{p+5}{10}]}5^{[\frac{p}{4}]}\pmod p,\\ &U_{\frac{p-\left(\frac{-1}{p}\right)}{2}}\equiv0\pmod p,\\
&V_{\frac{p+\left(\frac{-1}{p}\right)}{2}}\equiv2(-1)^{[\frac{p+5}{10}]}5^{[\frac{p}{4}]} \pmod p,\\
&V_{\frac{p-\left(\frac{-1}{p}\right)}{2}}\equiv2(-1)^{[\frac{p+5}{10}]}5^{[\frac{p+1}{4}]}\pmod p.
\end{align*}
\item[(2)] If $p\equiv\pm2\pmod 5$, we have
\begin{align*}
&U_{\frac{p+\left(\frac{-1}{p}\right)}{2}}\equiv\frac12\left(\frac{-1}{p}\right)(-1)^{[\frac{p+5}{10}]}5^{[\frac{p}{4}]}\pmod p,\\ &U_{\frac{p-\left(\frac{-1}{p}\right)}{2}}\equiv\frac12\left(\frac{-1}{p}\right)(-1)^{[\frac{p+5}{10}]}5^{[\frac{p+1}{4}]}\pmod p,\\
&V_{\frac{p+\left(\frac{-1}{p}\right)}{2}}\equiv4(-1)^{[\frac{p-5}{10}]}5^{[\frac{p}{4}]} \pmod p,\\
&V_{\frac{p-\left(\frac{-1}{p}\right)}{2}}\equiv0\pmod p.
\end{align*}
\end{description}
\end{theorem}
\begin{proof}
Let $F_n=u_n(-1,1)$ and $L_n=v_n(-1,1)$ be Fibonacci sequence and its companion. Then by Lemmas \ref{uvlucasmod} and \ref{uv1lucasmod}, we have
\[U_{\frac{p+1}{2}}\equiv\frac12L_{\frac{p-1}{2}}\pmod p, \quad
U_{\frac{p-1}{2}}\equiv-\frac12F_{\frac{p-1}{2}}\pmod p,\]
\[V_{\frac{p+1}{2}}\equiv2L_{\frac{p+1}{2}}\pmod p,\quad
V_{\frac{p-1}{2}}\equiv2F_{\frac{p+1}{2}}\pmod p.\]
Thus by Corollaries 1 and 2 of \cite{ss}, we can derive the results.
\end{proof}
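As a small illustration of Theorem \ref{52lucas}, take $p=3$, so that $p\equiv-2\pmod 5$ and $\left(\frac{-1}{p}\right)=-1$. The sequences begin $U_0=0,U_1=1,U_2=2$ and $V_0=2,V_1=2,V_2=-6$, and part (2) predicts $U_1\equiv-\frac12\equiv1$, $U_2\equiv-\frac52\equiv2$ and $V_2\equiv0\pmod 3$, which is indeed the case.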
\begin{remark}
In \cite{dy}, we gave some congruences for the Lucas quotient $ U_{p-\left(\frac{-1}p\right)}/p$ by studying the sum (\ref{generalsum}) for $a=-2$ and $m=4$.
\end{remark}
\begin{corollary}
Let $p\neq5$ be an odd prime, $\{U_n\}_{n\geq0}$ and $\{V_n\}_{n\geq0}$ be Lucas sequences defined as above.
\begin{description}
\item[(1)] If $p\equiv1\pmod4$, then $p\mid U_{\frac{p-1}{4}}$ if and only if $p\equiv1\pmod {20}$ and $p\mid V_{\frac{p-1}{4}}$ if and only if $p\equiv9\pmod {20}$.
\item[(2)] If $p\equiv3\pmod4$, then $p\mid U_{\frac{p+1}{4}}$ if and only if $p\equiv19\pmod {20}$ and $p\mid V_{\frac{p+1}{4}}$ if and only if $p\equiv11\pmod {20}$.
\end{description}
\end{corollary}
\begin{proof}
(1) and (2) follow from Lemma \ref{ulucasmod} and Theorem \ref{52lucas}.
\end{proof}
\noindent \textbf{Acknowledgments}\quad The work of this paper
was supported by the NNSF of China (Grant No. 11471314), and the National Center for Mathematics and Interdisciplinary Sciences, CAS.
\end{document}
\begin{document}
\title
{A Doubly Exponentially Crumbled Cake}
\author{
{Tobias Christ \footnote{Institute of Theoretical Computer Science, ETH Z\"urich, 8092 Z\"urich, Switzerland, {\texttt{\{christt, afrancke, gebauerh\}@inf.ethz.ch}} } }\quad
{Andrea Francke $^*$} \quad
{Heidi Gebauer $^*$} \quad
{Ji\v{r}\'{\i} Matou\v{s}ek \footnote{Dept. of Applied Mathematics and Institute of Theoretical Computer Science, Charles University, Malostransk\'{e} n\'{a}m. 25,
118~00~~Praha~1, Czech Republic, and Institute of Theoretical Computer Science, ETH Z\"urich, 8092 Z\"urich, Switzerland, {\texttt{matousek@kam.mff.cuni.cz}}} } \quad
{Takeaki Uno \footnote{National Institute of Informatics, 2-1-2, Hitotsubashi, Chiyoda-ku,
Tokyo 101-8430, Japan, {\texttt{uno@nii.jp}}}}
}
\maketitle
\begin{abstract}
We consider the following cake cutting game:
Alice chooses a set~$P$ of $n$~points in
the square (cake)~$[0,1]^2$, where $(0,0) \in P$;
Bob cuts out $n$ axis-parallel rectangles with disjoint
interiors, each of them having a point of $P$ as the
lower left corner; Alice keeps the rest.
It has been conjectured that Bob can always secure at least half
of the cake. This remains unsettled, and it is not even known
whether Bob can get any positive fraction independent of~$n$.
We prove that \emph{if} Alice can force Bob's share
to tend to zero, \emph{then} she must use very many points; namely,
to prevent Bob from gaining more than $1/r$ of the cake,
she needs at least $2^{2^{\Omega(r)}}$ points.
\end{abstract}
\section{Introduction}
Alice has baked a square cake with raisins for Bob, but
really she would like to keep most of it for herself.
In this, she relies on a peculiar habit of Bob: he eats only
rectangular pieces of the cake, with sides parallel
to the sides of the cake, that contain exactly one raisin each,
and that raisin has to be exactly in the lower left corner
(see Fig.~\ref{f:example}). Alice gets whatever remains
after Bob has cut out all such pieces. In order to give
Bob at least some chance, Alice has to put a raisin
in the lower left corner of the whole cake.
Mathematically, the cake is the square $[0,1]^2$, the raisins
form an $n$-point set $P\subset [0,1]^2$, where
$(0,0)\in P$ is required, and Bob's share consists of
$n$ axis-parallel rectangles with disjoint
interiors, each of them having a point of $P$ as the
lower left corner.
By placing points densely along the main diagonal,
Alice can limit Bob's share to~$\frac 12+\eps$,
with $\eps>0$ arbitrarily small.
A natural question then is, can Bob always obtain
at least half of the cake?
This question (in a cake-free formulation) appears in
Winkler~\cite{Win07} (``Packing Rectangles'', p.~133),
where he claims it to be at least 10 years old and of
origin unknown to him. The first written reference seems
to be an IBM puzzle webpage~\cite{IBM04}.
\begin{figure}
\caption{Bob cuts out axis-parallel rectangles with disjoint interiors, each having a point of $P$ as its lower left corner.}
\label{f:example}
\end{figure}
We tried to answer the question and could not, probably like many
other people before us. We believe that there are no simple examples
leaving more than $\frac 12$ to Alice, but on the other hand,
it seems difficult to prove even that Bob can always secure
$0.0001\%$ of the cake. We were thus led to seriously considering
the possibility that Alice might be able
to limit Bob's share to less than $1/r$, for every $r>0$,
but that the number of points $n$ she would need
would grow enormously as a function of~$r$.
Here we prove a doubly exponential lower bound on this function.
First we introduce the following notation. For a finite
$P\subset[0,1]^2$,
let $\bob(P)$ be the largest area Bob can win for $P$, and
let $\bob(n)$ be the infimum of $\bob(P)$ over all $n$-point $P$
as above.\footnote{It is easily checked that, given $P$,
there are finitely many possible placements of
Bob's \emph{inclusion-maximal} rectangles, and therefore,
$\bob(P)$ is attained by some choice of rectangles.
On the other hand, it is not so clear whether
$\bob(n)$ is attained; we leave this question
aside.} Also, for a real number $r>1$ let
$n(r):=\min\{n: \bob(n)\le 1/r\} \in \{1,2,\ldots\}\cup \{\infty\}$.
\begin{theorem}\label{t:} There exists a constant $r_0$ such that for all $r\ge r_0$,
$
n(r)\ge 2^{2^{r/2}}$.
\end{theorem}
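To get a feeling for the bound: already for $r=40$ (assuming $r_0\le 40$) the theorem gives $n(40)\ge 2^{2^{20}}$, a number with more than $3\cdot 10^{5}$ decimal digits. In other words, in order to keep more than $97.5\%$ of the cake for herself, Alice would need an astronomically large point set.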
The only previous work on this problem we could find
is the Master's thesis of M\"{u}ller-Itten \cite{Mue10}. She conjectured
that Alice's optimal strategy is placing the $n$ points
on the main diagonal with equal spacing (for which Bob's share
is $\frac{1}{2}\left(1 + \frac{1}{n}\right)$). She proved
this conjecture for $n\le 4$, and also in the ``grid''
case with $P=\{(0,0)\}\cup \{ (\frac{i}{n}, \frac{\pi(i)}{n}):i \in \{1, \ldots, n-1\}\}$, where $\pi$ is a permutation of $\{1, \ldots, n-1\}$.
She also showed that $\bob(n)\ge\frac 1n$.
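To see how Bob attains the value $\frac 12\left(1+\frac 1n\right)$ on the equally spaced configuration $P=\{(\frac in,\frac in): i\in\{0,\ldots,n-1\}\}$, one possible choice for his pieces is the family of vertical strips $R_i=[\frac in,\frac {i+1}n]\times[\frac in,1]$, $i=0,\ldots,n-1$: they have pairwise disjoint interiors, each has the point $(\frac in,\frac in)$ as its lower left corner, and their total area is
$$
\sum_{i=0}^{n-1}\frac 1n\Bigl(1-\frac in\Bigr)=1-\frac{n-1}{2n}=\frac 12\Bigl(1+\frac 1n\Bigr).
$$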
The problem considered here can be put into a wider context.
Various problems of fair division of resources, often phrased
as cake-cutting problems, go back at least to Steinhaus, Banach and Knaster;
see, e.g., \cite{RW98}. Even closer to our particular setting
is Winkler's \emph{pizza problem}, recently solved by
Cibulka et al.~\cite{CKMS10}.
\section{Preliminaries}
We call a point~$a$ a \emph{minimum} of a set $X\subseteq [0,1]^2$
if there is no $b\in X\setminus \{a\}$ for which both
$x(b)\le x(a)$ and $y(b)\le y(a)$.
Let $p_1,p_2,\ldots,p_k$ be an enumeration
of the minima of $P\setminus \{(0,0)\}$ in the order
of decreasing $y$-coordinate (and increasing $x$-coordinate).
Let $\stairs(P)$ be the union of all the axis-parallel rectangles
with lower left corners at $(0,0)$ whose interior avoids $P$;
see Fig.~\ref{f:sta}(a).
\begin{figure}
\caption{(a) The staircase region $\stairs(P)$ and the decomposition of its complement into the rectangles $B_1,\ldots,B_k$; (b) bounding the area of $\stairs(P)$ from above.}
\label{f:sta}
\end{figure}
Furthermore, let $s$ be the area of $\stairs(P)$, and let $\alpha$ be the largest
area of an axis-parallel rectangle contained in $\stairs(P)$.
Let us also define $\rho :=\frac s\alpha$.
For a point $p\in P$ and an axis-parallel rectangle $B\subseteq[0,1]^2$
with lower left corner at $p$, we denote by~$a$ the maximum area
of the cake Bob can gain in~$B$ using only rectangles
with lower left corner in points of $B\cap P$.
By re-scaling, we have $a=\beta\cdot\bob(P_B)$,
where $\beta$ is the area of~$B$ and $P_B$ denotes the set $P\cap B$
transformed by the affine transform that maps $B$ onto $[0,1]^2$.
We will use the monotonicity of $\bob(\cdot)$, i.e.,
$\bob(n+1)\le\bob(n)$ for all $n\ge 1$.
Indeed, Alice can always place an extra point on the right side
of the square, say, which does not influence Bob's share.
\section{The decomposition} We decompose the complement of $\stairs(P)$
into horizontal rectangles $B_1,\ldots,B_k$ as indicated
in Fig.~\ref{f:sta}(a), so that $p_i$ is the lower left
corner of $B_i$. Let $\beta_i$ be the area of $B_i$;
we have $s+\sum_{i=1}^k\beta_i=1$.
By the above and by an obvious superadditivity, we have
\begin{equation}\label{e:decompose}
\bob(P)\ge \alpha+\sum_{i=1}^k \beta_i \bob(P_i),
\end{equation}
where $P_i:= P_{B_i}.$ (This is a somewhat simple-minded estimate, since it doesn't take
into account any interaction among the $B_i$).
The following lemma captures the main properties of this
decomposition.
\begin{lemma}\label{l:nobig}
Let us assume that $\rho=\frac s\alpha\ge r_0$, where
$r_0$ is a suitable (sufficiently large) constant.
Then
\begin{itemize}\item
$s \le \frac 14 \cdot 2^{-\rho}$ (the staircase has a small area), and
\item
$\sum_{j:j\ne i}\beta_j \ge 2^\rho s$
for every $i=1,2,\ldots,k$ (none of the subproblems
occupies almost all of the area).
\end{itemize}
\end{lemma}
\begin{proof} First we note that
since no rectangle with lower left corner $(0,0)$
and upper right corner in $\stairs(P)$ has area bigger than
$\alpha$, the region $\stairs(P)$ lies below the hyperbola
$y=\frac \alpha x$. Thus
$s\le \alpha+\int_{\alpha}^1\frac \alpha x\,{\rm d}x=\alpha+\alpha
\ln\frac 1\alpha$. This yields $\alpha\le e^{-\rho+1}$,
and so $s=\rho\alpha \le \rho e^{-\rho+1}\le \frac 14 \cdot 2^{-\rho}$
(for $\rho$ sufficiently large).
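(As a quick numerical check, not needed in what follows: already at $\rho=20$ we have $\rho e^{-\rho+1}=20e^{-19}\approx 1.1\cdot 10^{-7}$, while $\frac 14\cdot 2^{-20}\approx 2.4\cdot 10^{-7}$, and the ratio $4e\rho\,(2/e)^{\rho}$ of the left-hand side to the right-hand side is decreasing in $\rho$ from this point on, so any $r_0\ge 20$ suffices for this step.)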
It remains to show that $\sum_{j:j\ne i}\beta_j \ge 2^\rho s$;
since $\sum_{j=1}^k\beta_j=1-s$, it suffices to show
$\beta_i\le 1-2\cdot 2^\rho s$ for all~$i$.
Let $y_i$ be the $y$-coordinate of $p_i$ for $i \geq 1$, and let $y_{0} = 1$;
we have $\beta_{i + 1} \le y_{i}-y_{i+1}$ for $i \geq 0$.
First, if $y_i\le\frac 12$, then $\beta_{i+1}\le
\frac 12\le 1-2\cdot 2^\rho s$ by the above, and
we are done. So we assume $y_i>\frac 12$.
The area of $\stairs(P)$ can be bounded from above as indicated
in Fig.~\ref{f:sta}(b). Namely, the rectangle $R$ has area
at most $\alpha$ (since it is contained in $\stairs(P)$), and
the rectangle $R'$ above it also has area no more than $\alpha$
(using $y_i>\frac 12$). The top right corner of $R''$
lies on the hyperbola $y=\frac\alpha x$ used above, and thus $R''$
has area at most $\alpha$ as well. Finally, the region $H$
on the right of $R''$ and below the hyperbola has area
$\int_{\alpha/y_{i+1}}^1\frac\alpha x\,{\rm d}x =\alpha\ln(y_{i+1}/\alpha)$.
Since $\stairs(P)\subseteq R\cup R'\cup R''\cup H$, we
have $s\le \alpha(3+\ln(y_{i+1}/\alpha))$. Using $\rho=\frac s\alpha$
we obtain $y_{i+1}\ge \alpha e^{\rho-3}= se^{\rho-3}/\rho
\ge 2\cdot2^{\rho} s$ (again using the assumption that $\rho$ is large).
Finally, we have $\beta_{i+1}\le 1-y_{i+1}\le 1-2\cdot 2^\rho s$,
and the lemma is proved.
\end{proof}
\section{Proof of Theorem~\ref{t:}}
\begin{proof}
Let $r\ge r_0$.
We may assume that $r$ is of the form $r=1/\bob(n)$,
where $n=n(r)$. In particular, $\bob(m)>\frac 1r$ for all $m<n$.
We will derive the following recurrence for such an $r$:
\begin{equation}\label{e:recur}
n(r)\ge 2 n(r-2^{-(r+1)/2}).
\end{equation}
Applying it iteratively $t:=2^{r/2}$ times, we find that
$n(r)\ge 2^t n(r-1)\ge 2^t$ as claimed in the theorem.
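Here is the arithmetic behind this step: as long as the current value $r'$ of the parameter satisfies $r'\ge r-1$, one application of (\ref{e:recur}) decreases it by $2^{-(r'+1)/2}\le 2^{-r/2}$, so after $t=2^{r/2}$ applications the total decrease is at most $t\cdot 2^{-r/2}=1$ and the parameter is still at least $r-1$. Since $n(\cdot)$ is nondecreasing (directly from the definition), the $t$ applications indeed yield $n(r)\ge 2^{t}\,n(r-1)\ge 2^{t}$.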
We thus start with the derivation of (\ref{e:recur}).
Let us look at the inequality (\ref{e:decompose}) for an $n$-point set $P$
that attains $\bob(n)$.\footnote{Or rather,
since we haven't proved that $\bob(n)$
is attained, we should choose $n$-point $P$
with $\bob(P)<\bob(n')$ for all $n'<n$.}
Since $n_i:=|P_i|<n$ for all $i$, we have
$\bob(P_i)>\frac 1r$ for all $i$.
Let $\alpha$ and $s$ be as above.
First we derive $\rho=\frac s\alpha\ge r$.
Indeed, if we had $\alpha>\frac sr$, then the right-hand side of
(\ref{e:decompose}) could be estimated as follows:
$$
\alpha+\sum_{i=1}^k \beta_i \bob(P_i)>
\frac 1r\biggl(s+\sum_{i=1}^k\beta_i\biggr)=
\frac1r,
$$
which, together with $\bob(P)=\frac 1r$, contradicts the inequality~(\ref{e:decompose}).
So $\rho\ge r\ge r_0$ indeed.
Let us set $\gamma_i :=
\bob(P_i)-\frac 1r$; this is Bob's ``gain'' over the ratio $\frac 1r$
in the $i$th subproblem. From (\ref{e:decompose}) we have
\begin{eqnarray*}
\frac 1r&\ge& \sum_{i=1}^k\beta_i\left(\frac 1r+\gamma_i\right)
\ge \frac 1r\biggl( \sum_{i=1}^k\beta_i\biggr)+\sum_{i=1}^k\beta_i\gamma_i
=\frac {1-s}r +\sum_{i=1}^k\beta_i\gamma_i,
\end{eqnarray*}
and so
\begin{equation}\label{e:gains}
\sum_{i=1}^k\beta_i\gamma_i\le \frac sr.
\end{equation}
According to Lemma~\ref{l:nobig}, we can partition the index set
$\{1,2,\ldots,k\}$ into two subsets $I_1,I_2$ so that
$\sum_{i\in I_j}\beta_i\ge 2^\rho s \ge 2^r s$ for $j=1,2$.
Let $i_1$ be such that $\gamma_{i_1}=\min_{i\in I_1}\gamma_i$, and
similarly for $i_2$. Then (\ref{e:gains}) gives, for $j=1,2$,
$$
\frac sr \ge \sum_{i\in I_j}\beta_i\gamma_i\ge \gamma_{i_j}\sum_{i\in I_j}\beta_i\ge \gamma_{i_j} 2^r s,
$$
and so $\gamma_{i_j}\le \gamma^*:= 2^{-r}/r$.
Let us define $r^*<r$ by $\frac 1{r^*}=\frac1r+\gamma^*$.
Then we know that at least two of the sets $P_i$ contain at least
$n(r^*)$ points each, and hence $n(r)\ge 2 n(r^*)$.
We calculate $r^*=\frac r{1+r\gamma^*}\ge r(1-r\gamma^*)= r-r2^{-r}\ge
r-2^{-(r+1)/2}$ (again using $r\ge r_0$).
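(In the last inequality we used $r\,2^{-r}\le 2^{-(r+1)/2}$, which is equivalent to $r\le 2^{(r-1)/2}$ and certainly holds once $r_0$ is large enough.)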
So we have derived the desired recurrence (\ref{e:recur}),
and Theorem~\ref{t:} is proved.
\end{proof}
\subsection*{Acknowledgments}
This research was partially done at the \emph{Gremo Workshop on Open Problems} 2010,
and the support of the ETH Z\"urich is gratefully acknowledged.
We would like to thank Michael Hoffmann and Bettina Speckmann
for useful discussion, and the organizers and participants of GWOP~2010 for a beautiful workshop.
\end{document}
\begin{document}
\setcounter{page}{15}
\publyear{22}
\papernumber{2140}
\volume{188}
\issue{1}
\finalVersionForARXIV
\title{Decidability of Definability Issues in the Theory of Real Addition}
\author{Alexis B\`{e}s\thanks{Address for correspondence: Univ. Paris Est Creteil, LACL, F-94010 Creteil, France. \newline \newline
\vspace*{-6mm}{\scriptsize{Received July 2022; \ accepted November 2022.}}}
\\
Univ. Paris Est Creteil, LACL, F-94010 Creteil, France \\
bes@u-pec.fr
\and Christian Choffrut\\
IRIF (UMR 8243), CNRS and Universit\'e Paris 7 Denis Diderot, France\\
Christian.Choffrut@irif.fr
}
\maketitle
\runninghead{A. B\`{e}s and Ch. Choffrut}{Decidability of Definability Issues in the Theory of Real Addition}
\begin{abstract}
Given a subset $X\subseteq \mathbb{R}^{n}$, we can associate with every point $x\in \mathbb{R}^{n}$ a vector space $V$ of maximal dimension with the property that for some ball centered at $x$, the subset $X$ coincides inside the ball with a union of lines parallel to $V$. A point is singular if $V$ has dimension $0$.
In an earlier paper we proved that a \Ls-definable relation $X$ is \Ss-definable if and only if the number of singular points is finite and every rational section of $X$ is \Ss-definable, where a rational section is a set obtained from $X$ by fixing some component to a rational value.
Here we show that we can dispense with the hypothesis of $X$ being \Ls-definable by requiring that the components of the singular points be rational numbers. This provides a topological characterization of first-order definability in the structure \Ss.
It also allows us to deliver a self-definable criterion (in Muchnik's terminology) of \Ss- and \Ls-definability for a wide class of relations, which turns into an effective criterion provided that the corresponding theory is decidable. In particular these results apply to the class of
so-called $k-$recognizable relations which are defined by finite Muller automata via the representation of the reals in an integer base $k$, and allow us to prove that it is decidable whether a $k-$recognizable relation (of any arity) is $l-$recognizable for every base $l \geq 2$.
\end{abstract}
\section{Introduction}
In his seminal work on Presburger Arithmetic \cite{Muchnik03}, Muchnik provides a characterization of definability of a relation $X \subseteq \mathbb{Z}^n$ in $\langle \mathbb{Z},+ , <\rangle$ in terms of sections of $X$ and local periodicity properties of $X$. He also shows that the characterization can be expressed as a $\langle \mathbb{Z},+, <,X \rangle$-sentence, and thus can be decided if $\langle \mathbb{Z},+, <, X \rangle$ is decidable. As an application Muchnik proves that it is decidable whether a $k-$recognizable relation $X \subseteq \mathbb{Z}^n$ is $\langle \mathbb{Z},+,< \rangle$-definable. Recall that given an integer $k \geq 2$, $X$ is $k-$recognizable if it is recognizable by some finite automaton whose inputs are the base-$k$ encoding of integers (see \cite{BHMV94}).
The present paper continues the line of research started in \cite{BC2020}, which aims to extend Muchnik's results and techniques to the case of reals with addition. Consider the structure \Ss\ of the additive ordered group of reals along with the
constant $1$. It is well-known that the subgroup $\mathbb{Z}$ of integers is not first-order-definable in this structure. Let \Ls\ denote the expansion of \Ss\ with the unary predicate ``$x\in \mathbb{Z}$''. In \cite{BC2020} we prove a topological characterization of \Ss-definable relations in the family of \Ls-definable relations, and use it to derive, on the one hand, that it is decidable whether or not a relation on the reals definable in \Ls\ can be defined in \Ss, and on the other hand that there is no intermediate structure between \Ls\ and \Ss\ (since then, the latter result has been generalized by Walsberg \cite{Wal20} to a large class of $o-$minimal structures).
We recall the topological characterization of \Ss\ in \Ls, \cite[Theorem 6.1]{BC2020}. We say that the neighborhood of a point $x\in \mathbb{R}^{n}$
relative to a relation $X\subseteq \mathbb{R}^{n}$ has a {\it stratum} if there exists a direction such that the intersection of $X$ with any sufficiently small neighborhood around $x$ is the trace of a union of lines parallel to the given direction. When $X$ is \Ss-definable, all points have strata, except finitely many which we call singular. In \cite{BC2020} we give necessary and sufficient conditions for a \Ls-definable relation $X \subseteq \mathbb{R}^n$ to be \Ss-definable, namely {(FSP)}: it has finitely many singular points and {(DS)}: all intersections of $X$ with arbitrary hyperplanes parallel to $n-1$ axes and having a rational component on the remaining axis are \Ss-definable.
We asked whether it is possible to remove the assumption that the given relation is \Ls-definable.
In the present paper we prove that the answer is positive if a new assumption is added, see below.
Let us first explain the structure of the proof in \cite{BC2020}. The necessity of the two conditions (FSP) and (DS) is easy. The difficult part was their sufficiency and it used very specific properties of the \Ls-definable relations, in particular the fact that \Ss- and
\Ls-definable relations are locally indistinguishable. In order to show the existence of
a \Ss-formula for $X$ we showed two intermediate
properties, (RB): for every nonsingular point $x$, there exists a basis of the strata subspace composed of vectors with rational components, and (FI): there are finitely many ``neighborhood types'', i.e., the equivalence relation $x \sim y$ on $\mathbb{R}^n$ which holds
if there exists $r>0$ such that ($x+w \in X \leftrightarrow y+w \in X$ for every $|w|<r$) has finite index.
When passing from the characterization of \Ls-definable relations to that of
general ones, the topological characterization uses the same intermediate
properties, but they are much more delicate to establish and an extra condition (RSP) is required: all singular points of $X$ must have rational components.
Moreover we show that this characterization is effective under natural conditions. Indeed,
if every nonempty \SXs{X}-definable relation contains a point with rational components, then the \Ss-definability of $X$ is
expressible in the structure \SXs{X} itself. The crucial point is the notion of \emph{quasi-singular} points
generalizing that of singular points. We were forced to consider this new notion because the \SXs{X}-predicate which
defines singular points in \Ls\ no longer defines them in general structures.
In so doing we can turn
the criterion for \Ss-definability into an effective criterion provided that the theory of \SXs{X} is decidable. More precisely we show that for every
decidable expansion $\+M$ of \Ss\ such that every nonempty $\+M$-definable relation contains a point with rational components, one can decide
whether or not a given $\+M$-definable relation is \Ss-definable.
We extend the result of \Ss-definability of a general relation to that of
\Ls-definability. Every relation
on the reals can be uniquely decomposed into some relations on the integers and some relations on the unit hypercubes (\cite{BFL08}, see also \cite{FV59}). This decomposition yields
a simple characterization of the \Ls-definable relations, which is expressible in \linebreak \LXs{X} provided that all nonempty \LXs{X}-definable relations
contain a point with rational components.
Combining the result on \Ss-definability for the reals and Muchnik's result on $\langle \mathbb{Z}, +,<,1 \rangle$-definable integer relations
we show that for every decidable expansion $\+N$ of \Ls\ such that every nonempty
$\+N$-definable relation contains a point with rational components, one can decide whether or not a given $\+N$-definable relation is \Ls-definable.
We also study a particularly significant case.
The notion of $k$-recognizability for relations on integers can be extended to the case of relations on reals, by considering Muller automata which read infinite words encoding reals written in base $k$, see \cite[Definition 1]{BRW1998}.
The class of $k-$recognizable
relations coincides with the class of relations definable in some expansion of $\langle \mathbb{R}, +,< ,\mathbb{Z}\rangle$ of the form $\langle \mathbb{R},\mathbb{Z},+,<,X_k \rangle$ where $X_k$ is a base-dependent
ternary predicate \cite[section 3]{BRW1998}. This expansion satisfies the above required condition since it has a decidable theory and every nonempty
$k-$recognizable relation contains a point with rational components.
The \Ls-definable relations define a subclass which has a very specific relevance since it coincides with the class of relations which are
$k-$recognizable for every $k \geq 2$ \cite{BB2009,BBB2010,BBL09}. A consequence of our result is that given a $k-$recognizable relation
it can be decided if it is
$\ell-$recognizable for all bases $\ell \geq 2$. This falls into the more general issue of finding effective characterizations of subclasses of $k-$reco\-gnizable relations.
A previous result of this type was proved by Milchior in \cite{Milchior17} by showing that it is decidable
whether or not a weakly $k-$recognizable subset of $\mathbb{R}$ is definable in \Ss, where ``weak'' is
defined as a natural condition on the states of a deterministic automaton.
We give a short outline of our paper. Section \ref{sec:prelim} gathers basic definitions and notation. In Section \ref{sec:useful} we recall the main useful definitions and results from \cite{BC2020} in order to make the paper self-contained. In Section \ref{sec:main} we show that the conjunction of conditions (RSP), (RB) and (FI) characterizes the \Ss-definable relations. In Section \ref{sec:selfdef} we deal with the self-definable criterion of \Ss-definability. We introduce the crucial notion of quasi-singular point and show that it is definable in \SXs{X}. We also provide an alternative, inductive, formulation of \Ss-definability for $X$: every relation obtained from $X$
by assigning fixed real values to arbitrary components contains finitely many quasi-singular points. We then show how to extend the results to the case of \Ls. In Section \ref{sec:applications} we show that the self-definable criterion of \Ss-definability (resp. \Ls-definability) of a relation $X \subseteq \mathbb{R}^n$ can be turned into an effective criterion provided that $X$ is definable in a suitable decidable theory, and apply the result to the class of $k$-recognizable relations.
\subsubsection*{Other related work.}
Muchnik's approach, namely expressing in the theory of the structure a property of the structure itself, can be used in other settings.
We refer the interested reader to the discussion in \cite[Section 4.6]{SSV14} and also to \cite{PW00,Bes13,Milchior17} for examples of such structures.
A similar method has already been used in 1966, see \cite[Thm 2.2.]{GS66} where the authors are able to express in Presburger theory whether or not
a Presburger subset is the Parikh image of a context-free language.
The theory of (expansions of) dense ordered groups has been studied extensively in model theory, in particular in connection with o-minimality, see e.g. \cite{DG17,DMS10}. Let us also mention a recent series of results by Hieronymi which deal with expansions of \Ls, and in particular with the frontier of decidability for such expansions, see, e.g., \cite{Hie19} and its bibliography.
\section{Preliminaries}
\label{sec:prelim}
Throughout this work we assume the vector space $\mathbb{R}^{n}$ is provided with the metric
$L_{\infty}$ (i.e., $|x|=\max_{1\leq i\leq n} |x_{i}|$). Let $B(x,r)$ denote the open ball centered at $x\in \mathbb{R}^{n}$ and of radius $r>0$. Given $x,y \in \mathbb{R}^n$ let $[x,y]$ (resp. $(x,y)$) denote the closed segment (resp. open segment) with extremities $x,y$. We use also notations such as $[x,y)$ or $(x,y]$ for half-open segments.
Let us specify our logical conventions and notations. We work within first-order predicate calculus with equality. We identify formal symbols and their interpretations.
We are mainly concerned with the structures \Ss\ and \Ls.
Given a structure $\cal M$ with domain $D$ and $X \subseteq D^n$, we say that $X$ is {\em definable in $\cal M$}, or {\em $\cal M$-definable}, if there exists a formula $\varphi(x_1,\dots,x_n)$ in the signature of $\cal M$ such that $\varphi(a_1,\dots,a_n)$ holds in $\cal M$ if and only if $(a_1,\dots,a_n) \in X$ (this corresponds to the usual notion of {\em definability without parameters}).
The \Ss-theory admits quantifier elimination in the following sense, which can be interpreted geometrically
as saying that a \Ss-definable relation is a finite union of closed and open polyhedra.
\begin{theorem}{\cite[Thm 1]{FR75}}
\label{th:quantifier-elimination-for-R-plus}
Every formula in \Ss\ is equivalent to a finite Boo\-lean combination of inequalities between linear combinations of variables with coefficients in $\mathbb{Z}$ (or, equivalently, in $\mathbb{Q}$).
In particular every nonempty \Ss-definable relation contains a point with rational components.
\end{theorem}
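For instance, the \Ss-formula $\exists z\,(x<z \wedge z<y \wedge z+z=1)$ is equivalent to the quantifier-free condition $x+x<1 \wedge 1<y+y$, a Boolean combination of inequalities between linear combinations of the variables with coefficients in $\mathbb{Z}$, and the relation it defines contains, e.g., the rational point $(0,1)$.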
\section{Local properties of real relations}
\label{sec:useful}
Most of the definitions and results in this section are taken from \cite{BC2020}.
{
These are variants of notions and results already known in computational geometry, see e.g. \cite{BN88,BBD12} for the case of \Ss-definable relations.
}
We only give formal proofs for the new results. In the whole section we fix $n \geq 1$ and $X \subseteq \mathbb{R}^n$.
\subsection{Strata}\label{subsection:strata}
The following clearly defines an equivalence relation.
\begin{definition}
\label{de:same-neighborhood}
Given $x,y \in \mathbb{R}^{n}$ we write $ x \sim_X y$ or simply $ x\sim y$
when $X$ is understood, if there exists a real $r>0$ such that
the translation $w \mapsto w +y-x$ is a one-to-one mapping from $B(x,r)\cap X$ onto $B(y,r)\cap X$.
\end{definition}
\begin{example}
\label{ex:square}
Let $X$ be a closed subset of the plane delimited by a square. There are ten $\sim_X$-equivalence classes:
the set of points interior to the square, the set of points interior to its complement, the four vertices and the
four open edges.
\end{example}
Let ${\mathcal Cl}(x)$ denote the $\sim$-equivalence class to which $x$ belongs.
\begin{definition}\label{de:strata}
\break
\vspace*{-8mm}
\begin{enumerate}
\item Given a non-zero vector $v \in \mathbb{R}^n$ and a point $y\in \mathbb{R}^n$, let $L_{v}(y)=\{y+\alpha v \ | \ \alpha \in \mathbb{R}\}$ be
the line passing through $y$ in the direction $v$.
More generally, if $X\subseteq \mathbb{R}^n$ let $L_{v}(X)$ denote the set $\bigcup_{x\in X} L_{v}(x)$.
\item A non-zero vector $v \in \mathbb{R}^n$ is an $X$-\emph{stratum} at $x$
(or simply a \emph{stratum} when $X$ is understood)
if there exists a real $r>0$ such that
\begin{equation}
\label{eq:saturation}
B(x, r) \cap L_{v}(X \cap B(x, r) ) \subseteq X.
\end{equation}
This can be seen as saying that inside the ball $B(x,r)$, the relation $X$ is a union of lines parallel to $v$.
By convention the zero vector is also considered as a stratum.
\item The set of $X$-strata at $x$ is denoted $\text{Str}_{X}(x)$ or simply $\text{Str}(x)$.
\end{enumerate}
\end{definition}
\begin{proposition}\cite[Proposition 3.4]{BC2020}
\label{pr:strata-subspace}
For every $x\in \mathbb{R}^{n}$ the set $\Strem(x) $ is a vector subspace of
$\mathbb{R}^{n}$.
\end{proposition}
\begin{definition}
\label{de:dimension}
The \emph{dimension} dim$(x)$ of a point $x \in \mathbb{R}^n$ is the dimension of the subspace $\Str(x)$.
We say that $x$ is a $d$-{\em point} if $d=\dim(x)$.
Moreover if $d=0$ then $x$ is said to be $X$-\emph{singular}, or simply \emph{singular}, and
otherwise it is \emph{nonsingular}.
\end{definition}
\begin{example}\label{ex:square2}(Example \ref{ex:square} continued) Let $x \in \mathbb{R}^2$. If $x$ belongs to the interior of the square or of its complement, then $\Str(x)= \mathbb{R}^2$. If $x$ is one of the four vertices of the square then
we have $\Str(x)=\{0\}$, i.e., $x$ is singular. Finally, if $x$ belongs to an open edge of the square but is not a
vertex, then $\Str(x)$ has dimension 1, and two points of opposite edges have the same strata subspace,
while two points of adjacent edges have different strata subspaces.
\end{example}
It can be shown that all strata at $x$ can be defined with respect to a common value $r$ in expression~(\ref{eq:saturation}).
\begin{proposition} \cite[Proposition 3.9]{BC2020}
\label{pr:uniform-radius}
For every $x\in \mathbb{R}^{n}$ there exists a real $r>0$ such that for every $v\in \Strem(x)\setminus \{0\}$
we have
$$
B(x, r) \cap L_{v}(X \cap B(x, r)) \subseteq X.
$$
\end{proposition}
\begin{definition}
An \emph{$X$-safe radius} (or simply a \emph{safe radius} when $X$ is understood) for $x$ is a real $r>0$ satisfying the condition of Proposition \ref{pr:uniform-radius}.
Clearly if $r$ is safe then so are all $0<s\leq r$. By convention every real is
a safe radius if $\Str(x)=\{0\}$.
\end{definition}
\begin{example}(Example \ref{ex:square} continued) For an element $x$ in the interior of the square
or the interior of its complement a safe radius is the (minimal) distance from $x$ to the edges of the square.
If $x$ is a vertex
then $\Str(x)=\{0\}$ and every $r>0$ is safe for $x$. In all other cases $r$ can be chosen as the minimal distance
of $x$ to a vertex.
\end{example}
\begin{remark} \label{re:sim-and-str}
If $x\sim y$ then $\text{\em Str}(x) =\text{\em Str}(y) $, therefore given an $\sim$-equivalence class $E$, we may define $\Str(E)$ as the set of common strata of all $x\in E$.
Observe that the converse is false.
{
In Example \ref{ex:square} for instance, points in the interior and points in the complement of the interior of the square have the same set of strata, namely $\mathbb{R}^2$, but are not $\sim-$equivalent.
}
\end{remark}
It is possible to combine the notions of strata and of safe radius.
\begin{lemma} \cite[Lemma 3.13]{BC2020}
\label{le:strx-subset-stry}
Let $x\in \mathbb{R}^{n}$ and $r$ be a safe radius for $x$. Then for all $y\in B(x,r)$ we have
$\Strem{(x)}\subseteq \Strem{(y)}$.
\end{lemma}
\begin{example}\label{ex:square3}(Example \ref{ex:square} continued) Consider a point $x$ on an (open) edge
of the square and a safe radius $r$ for $x$. For every point $y$ in $B(x,r)$ which is not on the edge we have
$\Str(x)\subsetneq \Str(y)=\mathbb{R}^{2}$. For all other points we have $\Str(x)=\Str(y)$.
\end{example}
Inside a ball whose radius is safe for the center, all points along a stratum are $\sim$-equivalent.
\begin{lemma}\label{le:tech1}
Let $x$ be non-singular, $v \in \Str(x)\setminus\{0\}$, and $r$ be safe for $x$. For every $z \in B(x,r)$ we have $L_v(z) \cap B(x,r) \subseteq {\mathcal Cl}(z)$.
\end{lemma}
\begin{proof}
Let $z' \in L_v(z) \cap B(x,r)$, and let $s>0$ be such that both $B(z,s)$ and $B(z',s)$ are included in $B(x,r)$. For every $w\in B(0,s)$ we have $z'+w \in L_v(z+w)$ with both points in $B(x,r)$; since $r$ is safe for $x$ and $v\in\Str(x)$, this gives $z+w\in X \leftrightarrow z'+w\in X$.
\end{proof}
\subsection{Relativization to affine subspaces}
We relativize the notion of singularity and strata to an affine subspace $S\subseteq \mathbb{R}^{n}$.
The next definition should come as no surprise.
\begin{definition}
\label{de:H-singular}
Given a subset $X\subseteq \mathbb{R}^n$, an affine subspace $S\subseteq \mathbb{R}^{n}$ and a point $x\in S$, we say that a vector $v \in \mathbb{R}^n \setminus \{0\}$ parallel to $S$
is an $(X,S)$-\emph{stratum for the point} $x$ if for all sufficiently small $r>0$ it holds
\begin{equation}
\label{eq:relative-stratum}
B(x,r) \cap L_{v}(X \cap B(x,r) \cap S) \subseteq X.
\end{equation}
By convention the zero vector is also considered as a $(X,S)$-stratum. The set of $(X,S)$-strata of $x$ is denoted $\text{Str}_{(X ,S)}(x)$.
We define the equivalence relation $x \sim_{(X,S)} y$ on $S$ as follows: $x \sim_{(X,S)} y$ if and only if there exists a real $r>0$ such that $x+w \in X \leftrightarrow y+w \in X$ for every $w \in \mathbb{R}^n$ parallel to $S$ and such that $|w|<r$.
A point $x\in S$ is $(X,S)$-\emph{singular} if it has no nonzero $(X,S)$-stratum. For simplicity, when $S$ is the space $\mathbb{R}^{n}$ we maintain the previous terminology and
speak of $X$-strata and
$X$-singular points. We say that a real $r>0$ is $(X,S)$-\emph{safe} if (\ref{eq:relative-stratum}) holds for every nonzero $(X,S)-$stratum $v$.
\end{definition}
\begin{remark} Singularity and nonsingularity do not go through restriction to affine subspaces. E.g., in the real plane, let $X=\{(x,y) \ | \ y<0\}$ and $S=\{(x,y) \mid x=0\}$. Then the origin is not $X-$singular but it is $(X,S)-$singular. All other elements of $S$ admit $(0,1)$ as an $(X,S)-$stratum thus they are not $(X,S)-$singular.
The opposite situation may occur. In the real plane, let $X=\{(x,y) \ | \ y<0\} \cup S$.
Then the origin is $X-$singular but it is not $(X,S)-$singular.
\end{remark}
\subsubsection{Relativization of the space of strata}
\begin{lemma}\label{le:general-projstrat}
Let $S$ be an affine hyperplane of $\mathbb{R}^n$ and $x \in S$. {Let $V$ be the vector subspace generated by $\text{Str}_X(x) \setminus \text{Str}_{(X,S)}(x)$.
If $V\not=\{0\}$ then $\text{Str}_X(x) = V + \text{Str}_{(X,S)}(x)$, and otherwise $\text{Str}_X(x) \subseteq \text{Str}_{(X,S)}(x)$.}
\end{lemma}
\begin{proof}
It is clear that if $V=\{0\}$ then every $X$-stratum of $S$ is an $(X,S)$-stratum.
Now assume there exists $v\in \text{Str}_X(x) \setminus \text{Str}_{(X,S)}(x)$. It suffices to prove that
all $w\in \text{Str}_{(X,S)}(x)$ belong to $\text{Str}_X(x)$. Let $s>0$ be simultaneously $(X ,S)-$safe and $X-$safe for $x$. Let $0<s'<s$ be such that $L_v(z) \cap S \subseteq B(x,s)$ for every $z \in B(x,s')$. Let $y_1,y_2 \in B(x,s')$ be such that $y_1-y_2$ and $w$ are parallel. It suffices to prove the equivalence $y_1 \in X \leftrightarrow y_2 \in X$ . Let $y'_1$ (resp. $y'_2$) denote the intersection point of $L_v(y_1)$ and $S$ (resp. $L_v(y_2)$ and $S$). We have $y_1,y'_1 \in B(x,s) $, $v \in \text{Str}_X{(x)}$, and $s$ is $X-$safe for $x$, thus $y_1 \in X \leftrightarrow y'_1 \in X$. Similarly we have $y_2 \in X \leftrightarrow y'_2 \in X$. Now $y'_1,y'_2 \in B(x,s)$, $y'_1-y'_2$ and $w$ are parallel, and $w \in \text{Str}_{(X,S)}(x)$, which implies $y'_1 \in X \leftrightarrow y'_2 \in X$ and thus finally
$y_1 \in X \leftrightarrow y_2 \in X$.
\end{proof}
\begin{corollary}\label{cor:projstrat}
Let $S$ be a hyperplane of $\mathbb{R}^n$ with underlying vector subspace $V$, and let $x \in S$ be non-singular.
If $\text{Str}_X(x) \setminus V$ is nonempty then
$
\text{Str}_{(X,S)}(x)=\text{Str}_X(x) \cap V
$.
\end{corollary}
\subsubsection{Relativization of the $\sim$-relation}
\begin{lemma}\label{le:tech2}
Let $S$ be an hyperplane of $\mathbb{R}^n$, $y,z \in S$, and $v \ne \{0\}$ be a common $X-$stratum of $y,z$ not parallel to $S$. If $y \sim_{(X ,S)} z$ then $y \sim_X z$.
\end{lemma}
\begin{proof}
Assume $y \sim_{(X ,S)} z$, and let $r>0$ be $(X ,S)-$ and $X-$ safe both for $y$ and $z$.
Since $v$ is not parallel to $S$ and $\dim(S)=n-1$, there exists $s>0$ such that for every $w \in \mathbb{R}^n$ with $|w|<s$, the intersection point of $L_v(y+w)$ (resp. $L_v(z+w)$) and $S$ exists and belongs to $B(y,r)$ (resp. $B(z,r)$).
It suffices to show that $y+w \in X \leftrightarrow z+w \in X$. Let $y+w'$ be the intersection point of $L_v(y+w)$ and $S$.
By our hypothesis on $s$, $y+w'$ belongs to $B(y,r)$. Moreover $r$ is $X-$safe for $y$, $v \in \text{Str}_X(y)$, and $w'-w$ is parallel to $v$, therefore $y+w \in X \leftrightarrow y+w' \in X$. Similarly we have $z+w \in X \leftrightarrow z+w' \in X$.
Now $|w'|<r$, thus by our assumptions $y \sim_{(X ,S)} z$ we have $y+w' \in X \leftrightarrow z+w' \in X$ and therefore
$y+w \in X \leftrightarrow z+w\in X$.
\end{proof}
Next we consider a particular case for $S$ which plays a crucial role in expressing the characterization stated in the main theorem.
It is also a tool for reasoning by induction in Section \ref{subsec:alternative}.
\begin{definition}
\label{de:section}
Given an index $1\leq i \leq n$ and a real $c\in \mathbb{R}$ consider the hyperplane
$$
H= \mathbb{R}^{i-1}\times \{c\} \times \mathbb{R}^{n-i}.
$$
The intersection $X \cap H$ is
called a \emph{section} of $X$. It is a \emph{rational section} if $c$ is a rational number.
We define $\pi_{H}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n-1}$ as $\pi_{H}(x_1,\dots,x_n)=(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)$.
\end{definition}
The following facts are easy consequences of the above definitions:
for all $x,y \in H$ and $v $ a vector parallel to $H$ we have:
\begin{enumerate}
\item $x \sim_{(X,H)} y$ if and only if $\pi_{H}(x) \sim_{\pi_{H}(X)} \pi_{H}(y)$
\item $v \in \text{Str}_{(X ,H)}(x)$ if and only if $\pi_{H}(v) \in \text{Str}_{\pi_{H}(X)}(\pi_{H}(x))$. In particular $x$ is $(X,H)-$singular if and only if $\pi_{H}(x)$ is $\pi_{H}(X)-$singular.
\end{enumerate}
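For instance, with $n=2$, $i=2$ and $c=0$ we get $H=\mathbb{R}\times\{0\}$ and $\pi_{H}(x_1,x_2)=x_1$; fact (2) then says that $(v_1,0)$ is an $(X,H)$-stratum at a point $(x_1,0)$ if and only if $v_1$ is a stratum at $x_1$ of the section $X\cap H$ viewed as a subset of $\mathbb{R}$; in particular $(x_1,0)$ is $(X,H)-$singular exactly when $x_1$ is a singular point of this one-dimensional relation.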
\subsection{Intersection of lines and equivalence classes}\label{subsection:intersectionlines}
In this section we describe the intersection of a $\sim$-class $E$ with a line parallel to some $v \in \Str(E)$.
It relies on the notion of adjacency of $\sim$-classes.
\begin{definition}
Let $E$ be a nonsingular $\sim$- class and let $v$ be one of its strata.
A point $x$ is $v-$\emph{adjacent}
\emph{to} $E$ if there exists $\epsilon>0$ such that for all $0< \alpha\leq \epsilon$ we have $x+\alpha v\in E$.
\end{definition}
{
\begin{example}(Example \ref{ex:square} continued)
We specify Example \ref{ex:square} by choosing the square as the unit square with vertices $(0,0),(0,1),(1,0)$ and $(1,1)$. All elements of the bottom open edge of the square belong to the same $\sim$-class $E$. The vector $v=(1,0)$ is a stratum of $E$. The vertex $(0,0)$ is $v-$adjacent to $E$. Similarly every element of $E$ is also $v-$adjacent to $E$. However the vertex $(1,0)$ is not $v-$adjacent to $E$ (but it is $(-v)-$adjacent to $E$).
\end{example}
}
The notion of adjacency is a property of the $\sim$-class.
\begin{lemma}\cite[Lemma 5.2]{BC2020}
\label{le:congruence}
Let $F$ be a $\sim$-class.
\begin{enumerate}
\item For all $x,y\in F$, all nonzero vectors $v$ and all $\sim$-classes $E$, $x$ is
$v$-adjacent to $E $ if and only if $y$ is $v$-adjacent to $E$.
\item For each vector $v$ there exists at most one $\sim$-class $E$ such that $F$ is $v$-adjacent to $E$.
\end{enumerate}
\end{lemma}
Consequently, if for some $x\in F$ and some vector $v$, $x$ is $v$-adjacent to $E$ it makes sense to say that the class $F$ is $v$-adjacent to $E$.
\begin{lemma}\label{le:open-ter}\cite[Corollary 5.6]{BC2020}
Let $x \in \mathbb{R}^n$ be non-singular, $E={\mathcal Cl}(x)$ and let $v \in \Strem(x)\setminus\{0\}$. The set $L_{v}(x)\cap E$ is a union of disjoint open segments
(possibly infinite in one or both directions) of $L_{v}(x)$, i.e.,
of the form $(y-\alpha v , y+ \beta v)$ with $0< \alpha,\beta\leq \infty$ and $y\in E$.
If $\alpha < \infty$ (resp. $\beta < \infty$) then the point $y-\alpha v$ (resp. $y+ \beta v$)
belongs to a $\sim$-class $F\not=E$ such that
$\text{dim} (F)< \text{dim} (E)$ and $F$ is $v$-adjacent (resp. $(-v)-$adjacent) (or simply \emph{adjacent}
when $v$ is understood) to $E$.
\end{lemma}
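For instance, for the unit square with vertices $(0,0),(0,1),(1,0)$ and $(1,1)$ considered above, with $E$ the class of the bottom open edge and $v=(1,0)$: for every $x\in E$ the set $L_{v}(x)\cap E$ is a single open segment, namely the open edge itself, and its endpoints $(0,0)$ and $(1,0)$ are $0$-dimensional classes which are respectively $v$-adjacent and $(-v)-$adjacent to $E$.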
\section{Characterizations of \Ss-definable relations}\label{sec:main}
\subsection{Characterization in \Ls-definable relations}
\label{subsec:properties-of-Ss-and-Ls}
We recall our previous characterization of \Ss-definable among \Ls-definable relations.
\begin{theorem}
\label{th:CNS}\cite[Theorem 6.1]{BC2020}
Let $n \geq 1$ and let $X \subseteq \mathbb{R}^n$ be \Ls-definable. Then $X$ is \str{\mathbb{R},+,<,1}-definable if and only if the following
two conditions hold
\begin{description}
\itemsep=0.9pt
\item {\em (FSP)} There exist only finitely many $X-$singular points.
\item {\em (DS)} Every rational section of $X$ is \Ss-definable.
\end{description}
\end{theorem}
The necessity of condition (FSP) is proved by Proposition 4.6 of \cite{BC2020} and that of (DS)
is trivial since a rational section is the intersection of two \Ss-definable relations.
The proof that conditions (FSP) and (DS) are sufficient uses several properties of \Ls-definable
relations which are listed in the form of a proposition below.
\begin{proposition}\label{pr:recap}
Let $n \geq 1$ and $X \subseteq \mathbb{R}^n$ be \Ls-definable. The following holds.
\begin{description}
\itemsep=0.9pt
\item (RSP) The components of the $X$-singular points are rational numbers \cite[Proposition 4.6]{BC2020}.
\item (FI) The equivalence relation $\sim$ has finite index and thus the number of different vector
spaces $\Strem(x)$ is finite when $x$ runs over $\mathbb{R}^{n}$ \cite[Corollary 4.5]{BC2020}.
\item (RB) For all nonsingular points $x$, the vector space $\Str(x)$
has a rational basis in the sense that it can be generated by a set of vectors with rational components \cite[Proposition 4.7]{BC2020}.
\end{description}
\end{proposition}
\subsection{Characterization in arbitrary relations}
\label{sec:caract-effectif}
Now we aim to characterize \Ss-definability for an arbitrary relation $X \subseteq \mathbb{R}^n$.
We prove that the conditions (FSP), (DS), (RSP) are sufficient, i.e., compared to Theorem \ref{th:CNS} one can remove the condition ``$X$ is \Ls-definable'' and add condition (RSP).
\begin{theorem}
\label{th:crit-n}
Let $n \geq 1$ and $X\subseteq \mathbb{R}^{n}$. Then $X$
is \Ss-definable if and only if it satisfies the following three conditions {\em (FSP), (DS), (RSP)}:
\begin{description}
\itemsep=0.9pt
\item {\em (FSP)} It has only finitely many singular points.
\item {\em (DS)} Every rational section of $X$ is \Ss-definable.
\item {\em (RSP)} Every singular point has rational components.
\end{description}
\end{theorem}
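For instance, the closed unit square $X=[0,1]^2$ satisfies all three conditions: its only singular points are the four vertices (cf.\ Example \ref{ex:square2}), which are finitely many and have rational components, and every rational section of $X$ is either empty or a closed segment with rational endpoints; accordingly $X$ is \Ss-definable.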
Observe that the three conditions are needed, as shown by the following relations which are not \Ss-definable.
\begin{itemize}
\itemsep=0.9pt
\item Consider the binary relation $X=\{(x,x) \ | \ x \in \mathbb{Z}\}$. The singular elements of $X$ are precisely the elements of $X$, thus $X$ satisfies (RSP) but not (FSP). It satisfies (DS) because every rational section of $X$ is either empty or equal to the singleton $\{(x,x)\}$ for some $x \in \mathbb{Z}$, thus it is \Ss-definable.
\item The binary relation $X=\mathbb{R} \times \mathbb{Z}$ has no singular point thus it satisfies (FSP) and (RSP). However it does not satisfy (DS) since, e.g., the rational section $\{0\} \times \mathbb{Z}$ is not \Ss-definable.
\item The unary relation $X=\{\sqrt{2}\}$ admits $\sqrt{2}$ as unique singular point, thus it satisfies (FSP) but not (RSP). It satisfies (DS) since every rational section of $X$ is empty.
\end{itemize}
Now we prove Theorem \ref{th:crit-n}.
\begin{proof} The necessity of the first two conditions is a direct consequence of Theorem \ref{th:CNS}, that of the third condition is due to
Proposition~\ref{pr:recap}.
Now we turn to the other direction which is the bulk of the proof and
we proceed in two steps. First we show that properties (FSP), (DS) and (RSP) imply properties (RB) and (FI)
(Claims \ref{cl:RB} and \ref{cl:FI})
and then based on these two properties we show that there exists a \Ss-formula defining~$X$.
\begin{claim}
\label{cl:RB}
If $X$ satisfies conditions (FSP), (DS) and (RSP) then it satisfies condition (RB).
\end{claim}
\begin{proof}
We prove that for every non-singular point $x \in \mathbb{R}^n$, $\Str(x)$ has a rational basis. If $n=1$ this follows from the fact that for every $x \in \mathbb{R}$ the set $\Str(x)$ is either
equal to $\{0\}$ or equal to $\mathbb{R}$, thus we assume $n \geq 2$.
For every $i\in \{1, \ldots, n\}$ let $H_{i}=\{(x_{1}, \ldots, x_{n})\in \mathbb{R}^{n}\mid x_{i}=0\}$.
Let us call \emph{rational {$i-$hyperplane}} any hyperplane $S$ of the form $S=\{(x_{1}, \ldots, x_{n})\in \mathbb{R}^{n}\mid x_{i}=c\}$ where $c \in \mathbb{Q}$.
The underlying vector space of $S$ is $H_i$.
Let $x$ be a $d-$point with $d\geq 1$, i.e., a point for which $V=\Str(x)$ has dimension $d$. For $d=n$ the result is obvious. For $1 \leq d <n$ we prove the result by induction on $d$.
\noindent \underline{Case $d=1$:} It suffices to show that every $1-$point $x$ has a stratum in $\mathbb{Q}^n$. Let $v \in \Str(x)\setminus\{0\}$, and let $r>0$ be safe for $x$. We can find $i \in \{1,\dots,n\}$ and two distinct rational $i-$hyperplanes $S_1$ and $S_2$, not parallel to $v$, such that $L_v(x)$ intersects $S_1$ (resp. $S_2$) inside $B(x,r)$, say at some point $y_1$ (resp. $y_2$). By Lemma \ref{le:tech1} we have $y_1 \sim x$. By Corollary \ref{cor:projstrat} it follows that
$$\text{Str}_{(X,S_1)}(y_1)=\text{Str}_X(y_1) \cap H_i=\text{Str}_X(x) \cap H_i$$
and the rightmost expression is reduced to $\{0\}$ since $d=1$ and $v \not\in H_i$. This implies that $y_1$ is $(X,S_1)-$singular, i.e., that $\pi_{S_1}(y_1)$ is $\pi_{S_1}(X)-$singular. Similarly $y_2$ is $(X,S_2)-$singular, i.e., $\pi_{S_2}(y_2)$ is $\pi_{S_2}(X)-$singular.
By condition (DS) the rational sections $X \cap S_1$ (resp. $X \cap S_2$) are \Ss-definable, thus the $(n-1)-$ary relations $\pi_{S_1}(X)$ (resp. $\pi_{S_2}(X)$) are also \Ss-definable; since \Ss-definable relations satisfy (RSP) (Proposition \ref{pr:recap}), this implies that $\pi_{S_1}(y_1)$ (resp. $\pi_{S_2}(y_2)$) has rational components. Thus the same holds for $y_1$ and $y_2$, and also for $y_1-y_2$, and the result follows from the fact that $y_1-y_2 \in \text{Str}_{X}(x)$.
\noindent\underline{Case $2 \leq d<n$:}
Let $I \subseteq \{1,\dots,n\}$ denote the set of indices $i$ such that $V \not\subseteq H_i$. We have $V \subseteq \bigcap_{i \in \{1,\dots,n\}\setminus I} H_i$ thus $\dim(V) \leq n-(n-|I|)=|I|$, and it follows from our assumption $\dim(V)=d\geq 2$ that $|I|\geq 2$.
Now we prove that $V= \sum_{i \in I} (V \cap H_i)$. It suffices to prove $V \subseteq \sum_{i \in I} (V \cap H_i)$, and this in turn amounts to prove that $\dim(\sum_{i \in I} (V \cap H_i))=d$. For every $1 \leq i \leq n$ we have
$$\dim(V+H_i)=\dim(V)+\dim(H_i)-\dim(V \cap H_i).$$
Now if $i \in I$ then $\dim(V+H_i)>\dim(H_i)$, i.e., $\dim(V+H_i)=n$, which leads to $\dim(V \cap H_i)=d+(n-1)-n=d-1$. Thus, in order to prove $\dim(\sum_{i \in I} (V \cap H_i))=d$ it suffices to show that there exist $i,j \in I$ such that $V \cap H_i \ne V \cap H_j$. Assume for a contradiction that for all $i,j \in I$ we have $V \cap H_i = V \cap H_j$. Then for every $i \in I$ we have
$$V \cap H_i= V \cap \bigcap_{j \in I}{H_j} \subseteq \bigcap_{j \not\in I}{H_j} \cap \bigcap_{j \in I} H_j=\{0\}$$
which contradicts the fact that $\dim(V \cap H_i)=d-1 \geq 1$.
We proved that $V= \sum_{i \in I} (V \cap H_i)$, thus it suffices to prove that for every $i \in I$, $V \cap H_i$ has a rational basis. Let $v$ be an element of $V \setminus H_i$, and let $r$ be safe for $x$. We can find a rational $i-$hyperplane $S$ not parallel to $v$ and such that the intersection point of $S$ and $L_v(x)$, say $y$, belongs to $B(x,r)$. By Lemma \ref{le:tech1} (applied to $z=x$) we have
$y \sim x$. Corollary \ref{cor:projstrat} then implies
$$\text{Str}_{(X ,S)}(y)=\text{Str}_X(y) \cap H_i=\text{Str}_X(x) \cap H_i=V \cap H_i$$
which yields
$$
\text{Str}_{\pi_S(X)}(y)=\pi_S(V \cap H_i).
$$
Now by condition (DS), $X \cap S$ is \Ss-definable, and $\pi_S(X)$ as well. Therefore by Proposition \ref{pr:recap} applied to $X \cap S$, the relation $X \cap S$ satisfies (RB) thus $\pi_S(V \cap H_i)$ has a rational basis, and this implies that $V \cap H_i$ also has a rational basis.
\end{proof}
\begin{claim}
\label{cl:FI}
If $X$ satisfies conditions (FSP), (DS) and (RSP) then it satisfies condition (FI).
\end{claim}
\begin{proof}
Before proving the claim we need a simple definition.
\begin{definition}
Given $X \subseteq \mathbb{R}^n$ and a $\sim$-class $E$, we define the \emph{isolated part of $E$} as the subset
$$Z=\{x \in E \ | \ L_{v}(x)\subseteq E \text{ for all nonzero vectors } v\in \Str(E)\}.$$
A subset of $\mathbb{R}^n$ is $X$-\emph{isolated} (or simply \emph{isolated} when $X$ is understood) if it is equal to the isolated part of some $\sim$-class.
\end{definition}
{
\begin{example}
Let $X \subseteq \mathbb{R}^2$ be defined as $X=L_1 \cup L_2$ where $L_1$ denotes the horizontal axis and $L_2$ denotes the open half-line $L_2=\{(x_1,x_2) \ | \ x_2= 1 \text{ and } x_1>0 \}$. In this case there are three $\sim$-classes, namely $E_1=X$, $E_2=\{(0,1)\}$ and $E_3=\mathbb{R}^2\setminus (E_1 \cup E_2)$. Let us describe the isolated part for each of these $\sim$-classes. A point belongs to the isolated part of a $\sim$-class if whatever stratum is chosen, all points in the direction are trapped in the class. For instance for the $\sim$-class $E_1=X$, we can show that the isolated part is obtained by deletion of the half-line $L_2$ of $X$, whose points are clearly not trapped. Indeed the subspace $\Str(E_1)$ is generated by the vector $(1,0)$. Therefore for every $v \in \Str(E_1)$, if $x \in L_1$ then $L_v(x)=L_1 \subseteq E_1$, and if $x \in L_2$ then the line $L_v(x)$ intersects $E_2$ thus $L_v(x) \not\subseteq E_1$. This shows the isolated part of $E_1$ is equal to $L_1$. The $\sim$-class $E_2$ has dimension $0$ thus obviously it is equal to its isolated part. Finally the isolated part of $E_3$ is empty since the vector $v=(0,1)$ is a stratum of $E_3$ and for every $x \in E_3$ the line $L_{v}(x)$ intersects $E_1$.
\end{example}
}
\begin{lemma}
\label{le:isolated-classes}
Let $X\subseteq \mathbb{R}^n$ satisfy (FSP), (DS) and (RSP). We have
\begin{enumerate}
\item Let $E$ be a $\sim$-class and $Z$ be its isolated part.
Then $Z$ is a {finite} union of affine subspaces with underlying vector subspace $\Str(E)$ each containing a point with rational components.
\item There exist finitely many isolated subsets.
\end{enumerate}
\end{lemma}
\begin{proof}
By induction on $n$. For $n=1$, if $X$ is equal to $\mathbb{R}$ or to the empty set,
the only isolated set is $\mathbb{R}$ and it obviously satisfies $(1)$. Otherwise a nonempty isolated set $Z$
consists of equivalent points of a $\sim$-class of dimension $0$, i.e., it is a union of singular points. Now by (FSP) and (RSP) there exist finitely many such points and they have rational components, which implies $(1)$ and $(2)$.
Now let $n \geq 1$. All isolated sets $Z$ included in a $\sim$-class $E$ of dimension $0$ satisfy $(1)$, and moreover there are finitely many such sets $Z$. Thus it suffices to consider the case where $Z \ne \emptyset$ and $\Str(E) \ne \{0\}$.
Let $v \in \Str(E) \setminus\{0\}$ and let $i\in \{1,\ldots, n\}$ be such that $v \not\in H_i$. For every $z \in Z$ we have $L_v(z) \subseteq Z$, thus $Z$ intersects the hyperplane $H_{i}$.
All elements of $Z \cap H_i$ are $\sim_X$-equivalent thus they are also $\sim_{(X,H_i)}$-equivalent.
Furthermore for every $x\in Z\cap H_{i}$ we have $\text{Str}_{(X,H_i)}(x)=\text{Str}_{X}(x)\cap H_{i}$ by Corollary \ref{cor:projstrat}
and thus for every $w\in \text{Str}_{X}(x)\cap H_{i}$ we have $L_w(x) \subseteq Z\cap H_{i}$. This shows that $\pi_{H_i}(x)$ belongs to a $\pi_{H_i}(X)-$isolated set, hence $\pi_{H_i}(Z)$ is included in a
$\pi_{H_i}(X)-$isolated set, say $W\subseteq \pi_{H_i}(H_i)$.
Now by condition (DS) the set $\pi_{H_i}(X)$ is \Ss-definable, thus by Theorem \ref{th:CNS} it satisfies also (FSP). By our induction hypothesis it follows that $W$ can be written as $W=\bigcup^{p}_{j=1} W_{j}$, where either all $W_{j}$'s are parallel affine subspaces with underlying vector space $\pi_{H_i}(\Str(E))$ each containing some point with rational components (by $(1)$), or each $W_j$ is reduced to a point with rational components (by $(1)$).
Every $W_j$ which intersects $\pi_{H_i}(Z)$ satisfies $W_j \subseteq \pi_{H_i}(Z)$, which shows that $\pi_{H_i}(Z)= \bigcup_{j \in J} W_{j}$ for some $J \subseteq \{1,\dots,p\}$. That is, we have $Z \cap H_i= \bigcup_{j \in J} W'_{j}$ where each $W'_j=\pi^{-1}_{H_i}(W_j)$. Observe that if $x$ belongs to $W_{j}$ and has rational components then the point $x'=\pi^{-1}_{H_i}(x)$ also has rational components.
Now $Z=(Z \cap H_i)+\Str(E)$ thus $Z= \bigcup_{j \in J} (W'_{j}+\Str(E))$. Since the underlying vector space of each $W'_j$ is included in $\Str(E)$, this proves $(1)$.
Concerning $(2)$ we observe that $Z$ is completely determined by $Z \cap H_i$, i.e., $\pi_{H_i}(Z)$. By our induction hypothesis there are finitely many $\pi_{H_i}(X)-$isolated parts $W=\bigcup^{p}_{j=1} W_{j}$ and each $X$-isolated part is determined by a subset of the form
$\bigcup_{j \in J} W_{j}$ for some $J \subseteq \{1,\dots,p\}$. This proves point $(2)$.
\end{proof}
Now we turn to the proof of Claim \ref{cl:FI}.
Lemma \ref{le:isolated-classes} shows that the number of $\sim$-classes having a nonempty isolated part is finite.
It thus suffices to prove that for every $0 \leq d \leq n$ there exist finitely many $d$-classes $E$ having an empty isolated part.
For $d=0$ the result follows from $(FSP)$ and the fact that each $0-$class is a union of singular points. For $d=n$ there exist at most two $d-$classes, which correspond to elements in the interior of $X$ or the interior of its complement.
For $0 \leq d <n$ we reason by induction on $d$. Observe first that if a $\sim$-class $E$ has dimension $d$ and has an empty isolated part, then there exist $x \in E$ and $v \in \Str(E)\setminus\{0\}$ such that $L_v(x) \not\subseteq E$. By Lemma \ref{le:open-ter} this implies that there exist $y \in L_v(x)$ and a $\sim$-class $F$ such that $y\in F$, $F$ is adjacent to $E$, $\dim(F)<\dim(E)$, and $[x,y)\subseteq {\mathcal Cl}(x)$. Now by our induction hypothesis there exist finitely many $\sim$-classes with dimension less than $d$. Thus in order to prove the claim, it suffices to show that there are finitely many $d$-classes to which some $d'$-class with $d'<d$ is adjacent.
In order to reach a contradiction, assume that there exists a $d'-$class $F$ which is adjacent to infinitely many $d$-classes, say $E_{j}$ with $j\in J$. We may furthermore assume that for each class $E_{j}$ there is no integer $d'<d''<d$ such that some $d''$-class is adjacent to $E_{j}$.
Because of Lemma \ref{le:congruence} it is enough to fix an element $y$ in $F$ and investigate the classes to which it is adjacent.
\noindent We first consider the case $d'=0$.
Because of condition (FSP), for some real $s>0$ the point $y$ is the unique singular point in $B(y,s)$. Moreover for every $j \in J$, $F$ is adjacent to $E_j$, thus there exists a point $x_{j}\in E_{j}$ such that $[x_j,y) \subseteq E_j$. Let $\mathit{HL}_{j}$ denote the open halfline with endpoint $y$ and containing $x_j$.
Observe that we necessarily have $\mathit{HL}_{j}\cap B(y,s)\subseteq {\mathcal Cl}(x_{j})$. Indeed, by Lemma \ref{le:open-ter} the condition
$\mathit{HL}_{j}\cap B(y,s)\subsetneq {\mathcal Cl}(x_{j})$ implies that there exists a point $z=y + \alpha (x_{j}-y) \in B(y,s)$ such that $\alpha >1$ and $\text{dim}(z)< d$. Since $y$ is the unique singular point in $B(y,s)$ this implies $\text{dim}(z)>0$ but then because of $[x_{j},z)\subseteq {\mathcal Cl}(x_{j})$ the maximality condition stipulated for $d'$ is violated.
Let $z_{j}$ be the point on $\mathit{HL}_{j}$ at distance $\frac{s}{2}$ from $y$ and let $z$ be adherent to the set $\{z_j \ | \ j \in J\}$. The
point $z$ is nonsingular since $y$ is the unique singular point in the ball $B(y,s)$. Let $v \in \Str(z)\setminus\{0\}$.
Consider
some $\ell \in \{1, \ldots, n\}$, some rational $\ell-$hyperplane $S$ such that $z \not\in S$ and some real $0<t<\frac{s}{2}$ such that $L_{v}(B(z,t))\cap S\subseteq B(z,\frac{s}{2})$. The ball $B(z,t)$ contains infinitely many pairwise non $\sim$-equivalent points, and by Lemma \ref{le:tech2} their projections on $S$ in the direction $v$ are
pairwise non $\sim_{(X,S)}$-equivalent.
But by condition (DS) the relation $X\cap S$ is \Ss-definable, thus $\pi_S(X)$ satisfies condition (FI) of Proposition \ref{pr:recap}, a contradiction.
\noindent Now we consider the case where $d'>0$.
Choose some $v \in \Str(y)\setminus\{0\}$ and let $r$ be a safe radius for $y$. We can find $0<s<r$, $k \in \{1,\dots,n\}$ and some rational $k-$hyperplane $S$ not parallel to $v$ such that
$L_v(B(y,s))\cap S \subseteq B(y,r)$.
By definition of $y$, $B(y,s)$ intersects infinitely many pairwise distinct $d-$classes. Given two non $\sim$-equivalent $d$-points $z_1,z_2 \in B(y,s)$, and their respective projections $w_1,w_2$ over $S$ along the direction $v$, we have $w_1 \not\sim_{(X,S)} w_2$ by Lemma \ref{le:tech2}.
This implies that there exist infinitely many $\sim_{(X,S)}$-classes. However by condition (DS), the relation $X \cap S$ is \Ss-definable,
thus $\pi_S(X)$ satisfies condition (FI) of Proposition \ref{pr:recap}, a contradiction.
\end{proof}
We now resume the proof of Theorem \ref{th:crit-n}.
Observe that $X$ is equal to the union of $\sim$-classes of its elements, thus by Claim \ref{cl:FI}, in order to prove that $X$ is \Ss-definable it suffices to prove that all $\sim_X$-classes are \Ss-definable.
More precisely, we prove that each $\sim$-class $E$ is definable from $\sim$-classes $F$ with smaller dimension, i.e., that $E$ is definable in the expansion of \Ss\ obtained by adding a predicate for each such $F$. We proceed by induction on the dimension $d$ of $\Str(E)$.
If $d=0$ then $E$ is a union of singular points, and by (FSP) and (RSP) it follows that $E$ is a finite subset of $\mathbb{Q}^n$ thus is \Ss-definable.
Assume now $0<d\leq n$. By Claim \ref{cl:RB} there exists a rational basis $V(E)=\{v_1,\dots,v_d\}$ of $\Str(E)$. Let $Z \subseteq E$ be the isolated part of $E$ and let $E'=E \setminus Z$. By Lemma \ref{le:isolated-classes} $(1)$, $Z$ is a {finite} union of parallel affine subspaces with underlying vector space $V(E)$ each containing a point with rational components, thus $Z$ is \Ss-definable.
It remains to prove that $E'$ is \Ss-definable. We use the following characterization of $E'$.
\begin{lemma}\label{le:non-isolated}
For every $x \in \mathbb{R}^n$, we have $x \in E'$ if and only if there exist $1 \leq p \leq d$ and a sequence of pairwise distinct elements $x_0,\dots,x_p \in \mathbb{R}^n$ such that $x_0=x$ and
\begin{enumerate}
\item for every $0\leq k \leq p-1$, $x_{k+1}-x_k\in V(E)$ and $[x_k,x_{k+1})$ does not intersect any $\sim$-class of strictly smaller dimension
than $\dim(E)$
\item if $F= {\mathcal Cl}(x_{p})$ then $F$ is $(x_{p-1}-x_{p})$-adjacent to $E$ and $\dim(F)<\dim(E)$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove that the conditions are sufficient. We prove by backward induction that $[x_k,x_{k+1}) \subseteq E$ for every $0\leq k \leq p-1$. This will imply that $x=x_0 \in E$, and the fact that $x_p-x$ belongs to $\Str(E)$ and $\dim(F)<\dim(E)$ will lead to $x \in E'$. Set $k=p-1$.
By Point 2 of Lemma \ref{le:congruence} the element $x_{p}$ is $(x_{p-1}- x_{p})$-adjacent to $E$, thus $[x_{p-1},x_{p})$ intersects
$E$. Moreover $[x_{p-1},x_{p})$ does not intersect any $\sim$-class $G$ such that $\dim(G)<\dim(E)$, thus by Lemma \ref{le:open-ter} we have $[x_{p-1},x_{p}) \subseteq E$. For $0\leq k<p-1$, by our induction hypothesis we have $x_{k+1} \in E$. Moreover $[x_{k},x_{k+1})$ does not intersect any $\sim$-class $G$ such that $\dim(G)<\dim(E)$, thus $[x_{k},x_{k+1}) \subseteq E$ again by Lemma \ref{le:open-ter}.
We prove the necessity. By definition of $E'$ and Lemma \ref{le:open-ter} there exist $v\in \Str{(E)}$ and $y\in L_v(x)$ such that
$[x,y)\subseteq E$ and $y\not\in E$. Decompose $v=\alpha_{1} v_{i_{1}} + \cdots + \alpha_{p} v_{i_{p}}$ where
$0<i_{1}< \cdots < i_{p}\leq d$ and $\alpha_{1}, \ldots, \alpha_{p} \neq 0$. We can assume
w.l.o.g.\ that $y$ is chosen such that $p$ is
minimal and furthermore that $\alpha_{p}$ is minimal too.
For $0 \leq k<p$ set $x_{k}=x + \alpha_{1} v_{i_{1}} + \cdots + \alpha_{k}v_{i_{k}}$. By minimality of $p$ and $\alpha_p$, the segments
$[x_{0}, x_{1}), \ldots, [x_{p-1}, x_{p})$ intersect no class of dimension less than $\dim(E)$.
Then $y=x_{p}$ is $(x_{p-1}-x_{p})$-adjacent to $E$.
\end{proof}
In order to prove that $E'$ is \Ss-definable it suffices to show that we can express in \Ss\ the existence of a sequence $x_0,\dots,x_p \in \mathbb{R}^n$ which satisfies both conditions of Lemma \ref{le:non-isolated}. Observe that $V(E)$ is finite and each of its element is \Ss-definable, thus we can express in \Ss\ the fact that a segment is parallel to some element of $V(E)$. Moreover by (FI) there exist finitely many $\sim$-classes $F$ such that $\dim(F)<\dim(E)$, and all such classes are \Ss-definable by our induction hypothesis. This allows us to express condition $(1)$ in \Ss. For $(2)$ we use again the fact that there are only finitely many classes $F$ to consider and that all of them are \Ss-definable.
\end{proof}
\subsection{An alternative noneffective formulation.}
\label{subsec:alternative}
In this section we re-formulate Theorem \ref{th:crit-n} in terms of (generalized) projections of $X$
by building on the notion of generalized section which extends that of
section, in the sense that it allows us to fix several components.
\begin{definition}
\label{de:generalized-section} Given $n \geq 1$ and $X \subseteq \mathbb{R}^n$, a \textit{generalized section of $X$} is a relation of the form
\begin{equation}
\label{eq:s-a}
X_{s,a} =\{(x_{1}, \ldots, x_{n})\in X \mid x_{s_{1}} =a_{{1}}, \ldots, x_{s_{r}} =a_{{r}}\}
\end{equation}
where $r>0$, $s= (s_1,\dots,s_r)$ is an $r$-tuple of integers with $1\leq s_{1} < \cdots <s_{r}\leq n$, and $a=(a_{1}, \ldots, a_{r})$ is an $r$-tuple of reals.
When $r=0$ we define $X_{s,a} =X$ by convention, i.e., $X$ is a generalized section of itself. If $r>0$ then the section is said to be {\em proper}. If all elements of $a$ are rational numbers then $X_{s,a}$ is called a {\em rational generalized section of $X$}.
In the above definition, each $X_{s,a}$ is a subset of $\mathbb{R}^n$. If we remove the $r$ fixed components $x_{s_{1}},\ldots, x_{s_{r}}$ we can see $X_{s,a}$ as a subset of $\mathbb{R}^{n-r}$, which will be called a {\em generalized projection} of $X$ (resp. a {\em rational generalized projection} of $X$ if $X_{s,a}$ is a rational generalized section of $X$).
\end{definition}
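To illustrate the definition (purely as an example), take $n=3$, $r=1$, $s=(2)$ and $a=(\frac{1}{2})$. Then $X_{s,a}=\{(x_{1},x_{2},x_{3})\in X \mid x_{2}=\frac{1}{2}\}$, and forgetting the frozen component we obtain the rational generalized projection $\{(x_{1},x_{3})\in \mathbb{R}^{2} \mid (x_{1},\frac{1}{2},x_{3})\in X\}$ of $X$, a subset of $\mathbb{R}^{2}$.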
\begin{proposition}
\label{pr:recurs-cri}
For every $n \geq 1$, a relation $X\subseteq \mathbb{R}^{n}$ is \Ss-definable if and only if every {rational generalized
projection} of $X$ has finitely many singular points and these points have rational components.
\end{proposition}
\begin{proof}
The proof goes by induction on $n$. The case $n=1$ is obvious. Assume now $n >1$.
Let $X$ be \Ss-definable and let $Y$ be a rational generalized projection of $X$. If $Y=X$ then the result follows from Theorem \ref{th:crit-n}. If $Y$ is proper then $Y$ is definable in \str{\mathbb{R},+,<,1,X} thus it is also \Ss-definable, and the result follows from our induction hypothesis.
Conversely assume that every {rational generalized projection} of $X$ has finitely many singular points and they have rational components. We show that $X$ satisfies all three conditions of Theorem \ref{th:crit-n}. Conditions (FSP) and (RSP) follow from our hypothesis and the fact that $X$ is a {rational generalized projection} of itself. It remains to prove condition (DS), namely that every rational section of $X$ is \Ss-definable. This amounts to proving that every rational projection $Z$ of $X$ is \Ss-definable. Now every rational generalized projection $Y$ of $Z$ is also a rational generalized projection of $X$, thus by our hypothesis on $X$, $Y$ has finitely many singular points and they have rational components. Since $Z$ is a proper projection of $X$, by our induction hypothesis it follows that $Z$ is \Ss-definable.
\end{proof}
\section{A definable criterion for \Ss-definability in suitable structures}
\label{sec:selfdef}
In this section we prove that for every $n \geq 1$ and $X \subseteq \mathbb{R}^n$, if every nonempty \SXs{X}-definable relation contains a point with rational
components then we can state a variant of Proposition \ref{pr:recurs-cri} which is
expressible in \SXs{X}. This means that there exists a \SXs{X}-sentence
(uniform in $X$) which expresses the fact that $X$ is \Ss-definable. This provides a \emph{definable} criterion for \Ss-definability, similar to Muchnik's result
\cite{Muchnik03} for definability in Presburger Arithmetic. We also extend these ideas to the case of \Ls-definability.
In this section ${\mathcal{S}_X}$ stands for the structure \SXs{X}.
\subsection{Quasi-singular points}
We aim to express \Ss-definability of a relation $X \subseteq \mathbb{R}^n$ in the structure ${\mathcal{S}_X}$ itself. A natural approach is to express the conditions of Proposition \ref{pr:recurs-cri} as an ${\mathcal{S}_X}$-sentence, however the formulation involves the set $\mathbb{Q}$ as well as the set of $X$-singular elements. On the one hand $\mathbb{Q}$ is not necessarily ${\mathcal{S}_X}$-definable, and on the other hand the naive definition of $X-$singularity involves
the operation of multiplication which
is also not necessarily ${\mathcal{S}_X}$-definable. For the special case where $X$ is \Ls-definable, we introduced in \cite[Lemma 4.9]{BC2020}
an ad hoc definition. Yet this definition
does not necessarily hold when the relation $X$ is no longer assumed to be \Ls-definable. In order to overcome
this difficulty we introduce a weaker property but
that is still definable in ${\mathcal{S}_X}$. This proves to be sufficient to establish our result.
\begin{definition} Let $X \subseteq \mathbb{R}^n$, $x \in \mathbb{R}^n$, and $r,s$ be two reals such that $0<s<r$.
\begin{itemize}
\item a vector $v\in \mathbb{R}^n$ is an $(r,s)$-\emph{quasi-stratum} of $x$ if $|v|\leq s$ and $(y\in X \leftrightarrow y+v \in X)$
holds for all $y \in \mathbb{R}^n$ such that $y,y+v\in B(x,r)$.
\item
We say that $x \in \mathbb{R}^n$ is $X-$\emph{quasi-singular} if it does not satisfy the following property:
\begin{equation}
\begin{array}{l}
\label{eq:qs}
\text{there exist reals } r, s>0 \text{ such that the set of $(r,s)$-{quasi-strata} of $x$ is nonempty, closed, } \\
\text{and is stable under } v \mapsto v/2. \vspace*{-6mm}
\end{array}
\end{equation}
\end{itemize}
\end{definition}
It is not difficult to check that if $x$ is not singular and $r$ is safe, then every stratum $v$ of $x$ with $|v|\leq s$ is an $(r,s)$-quasi-stratum for $0<s<r$. However even for $r$ safe, there may exist $(r,s)$-quasi-strata of $x$ which are not strata of $x$, as shown in the following example.
\begin{example}
Let $n=2$, $X=(\mathbb{Z} \cup \{-{3 \over 2},-{1 \over 2},{1 \over 2},{3 \over 2}\}) \times \mathbb{R}$, and $x=(0,0)$. Then $\Strem(x)$ is generated by the vector $(0,1)$, and every real $r>0$ is safe for $x$. Given $0<s<r$, the $(r,s)$-quasi-strata of $x$ can be characterized as follows:
\begin{itemize}
\itemsep=0.9pt
\item if $r>{5 \over 2}$ then the $(r,s)$-quasi-strata of $x$ are vectors of the form $(0,l)$ with $|l| \leq s$ (i.e.\ these are the strata of $x$ with norm at most $s$).
\item if $r \leq {5 \over 2}$ then these are vectors of the form $({k \over 2},l)$ where $k \in \mathbb{Z}$, $k \leq 2r$, and $|({k \over 2},l)|\leq s$. Note that for $s<{1 \over 2}$, these are exactly the strata of $x$ with norm at most $s$.
\end{itemize}
\end{example}
\begin{lemma}\label{le:defquasi}
Let $X \subseteq \mathbb{R}^n$. The set of quasi-singular elements of $X$ is ${\mathcal{S}_X}$-definable. Moreover the property
``$X$ has finitely many quasi-singular elements" is ${\mathcal{S}_X}$-definable (uniformly in $X$) .
\end{lemma}
\begin{proof}
The property that $v$ is an $(r,s)$-quasi-stratum of $x$ can be expressed by the formula
$$
\phi(X,x, r ,s, v)\equiv 0< |v|< s \wedge \forall y \ (y,y+v\in B(x,r) \rightarrow (y\in X \leftrightarrow y+v \in X)).
$$
The set of $X$-quasi-singular elements can be defined by the formula
\begin{equation}
\label{eq:asterisk}
\begin{array}{ll}
QS(x,X) \equiv & \neg \exists r \exists s \ ((0<s<r) \wedge \exists v (\phi(X,x,r,s,v)) \wedge \\
& \forall v\ (\phi(X,x,r,s,v)\rightarrow \phi(X,x,r,s,\frac{v}{2})) \wedge \\
& \forall v \ (((|v|<s) \wedge \forall \epsilon>0 (\exists u \ (\phi(X,x,r,s,u)\wedge |v-u|<\epsilon ))) \rightarrow \phi(X,x,r,s,v)))
\end{array}
\end{equation}
The finiteness of the set of quasi-singular points can be expressed by the following formula, which states that this set is bounded and that its points are pairwise separated by some fixed positive distance:
\begin{equation}
\label{eq:finiteQS}
\begin{array}{ll}
FS(X)\equiv & (\exists t>0 \ \forall x \ (QS(x,X) \rightarrow |x|<t))\\
& \wedge (\exists u>0
(\forall x \forall y ( (QS(x,X) \wedge QS(y,X) \wedge x\not=y)\rightarrow |x-y|>u)))
\end{array}
\end{equation}
\vspace*{-5mm}
\end{proof}
\begin{lemma}\label{le:as}
Let $X \subseteq \mathbb{R}^n$. If $x$ is not quasi-singular then for some reals $0<s<r$ there exists an $(r,s)$-quasi-stratum of $x$ and every $(r,s)$-quasi-stratum of $x$ is a stratum of $x$.
\end{lemma}
\begin{proof}
In this proof ``quasi-stratum'' stands for ``$(r,s)$-quasi-stratum''. We consider the negation of $QS$. The matrix of the formula consists of four conjuncts.
The second conjunct asserts that there exists a quasi-stratum. The third conjunct asserts that if a vector $v$ is a quasi-stratum
then for all integers $p\leq 0$ the vector $2^{p} v $ is a quasi-stratum. Also if $p\geq 0$ and $|2^{p} v|<s $ then $2^{p} v $
is a quasi-stratum. Indeed, because $B(x,r)$ is convex, if $y$ and $y+2v$ belong to $B(x,r)$
then $y+v$ belongs to $B(x,r)$ and we have
\begin{equation}
\label{eq:transitive}
y\in X \leftrightarrow y+ v\in X \leftrightarrow z= y+ 2 v\in X.
\end{equation}
This generalizes to any $p\geq 0$ provided $|2^{p} v|<s$.
We will show that if $v$ is a quasi-stratum, then it is a stratum, i.e.,
if $y,z\in B(x,r)$ and $z\in L_{v}(y)$ then $y\in X\leftrightarrow z \in X$. To fix ideas
set $z=y + 2^{\ell} \beta v$ for some integer $\ell\geq 0$ and some real $0<\beta<1$. Let $\alpha_{q}= \sum_{-q< i<0} a_{i} 2^{i}$ with $a_{i}\in \{0,1\}$, be a sequence of dyadic rationals converging to $\beta$.
Since $|2^{i} v|<s$ holds for all $-q< i<0$, every $2^{i} v $ is a quasi-stratum and therefore so is $\alpha_{q} v$.
Arguing as in (\ref{eq:transitive}) we see that for all $t$ such that $t, t+\alpha_{q} v\in B(x,r)$ we have
$t\in X \leftrightarrow t+ \alpha_{q} v \in X$. Since $\alpha_{q} <1$ implies $|\alpha_{q} v|<s$, this shows that all $ \alpha_{q} v$
are quasi-strata. The last conjunct then implies that $ \beta v$, being the limit of the $\alpha_{q} v$, is a quasi-stratum,
and again using the same argument as in (\ref{eq:transitive}) we get
$$
y\in X \leftrightarrow y+ \beta v\in X \leftrightarrow y+ 2 \beta v\in X \leftrightarrow \cdots \leftrightarrow z= y+ 2^{\ell} \beta v\in X.
$$
\vspace*{-8mm}
\end{proof}
\begin{lemma}\label{le:sing-quasi-sing}
For every $X \subseteq \mathbb{R}^n$, every $X-$singular element is $X-$quasi-singular.
\end{lemma}
\begin{proof} If $x \in \mathbb{R}^n$ is not $X-$quasi-singular then there exist $0<s<r$ and an $(r,s)$-quasi-stratum,
which is a stratum by Lemma \ref{le:as}.
\end{proof}
\begin{lemma}\label{le:equiv-quasi-sing-and-sing-for-Ss}
If $X \subseteq \mathbb{R}^n$ is \Ss-definable, every $X-$quasi-singular element is $X-$singular.
\end{lemma}
\begin{proof}
We prove that if $x \in \mathbb{R}^n$ is not $X-$singular then it is not quasi-singular, i.e., that it satisfies $\neg QS(x,X)$, cf. Expression (\ref{eq:asterisk}). We find suitable values of $r,s$ such that the set of $(r,s)$-quasi-strata of $x$ coincides with the set of strata $v$ of $x$ such that $|v|\leq s$. The result will then follow from the fact that $\Str(x)$ is a non-trivial vector subspace.
{ The relation $X$ is \Ss-definable thus by \cite[Corollary 4.4]{BC2020} there exists $r>0$ such that inside the ball $B(x,r)$, $X$ coincides with a finite collection of cones.
By cone we mean an intersection of open or closed halfspaces delimited by finitely many, say $k$,
hyperplanes of dimension $n-1$ and containing $x$. Without loss of generality we can assume that $r$ is safe. We show that a suitable value for $s$ is $s=\frac{1}{k}r$.
Since $r$ is safe}, every stratum $v$ of $x$ with $|v|\leq s$ is also an $(r,s)$-quasi-stratum of $x$. Conversely let $v$ be an $(r,s)$-quasi-stratum of $x$, and assume for a contradiction that $v \not\in \Str(x)$. Then there exists a point $y\in B(x,r)$ such that the line $L_{v}(y)$ intersects $X$ and its complement
inside $B(x,r)$.
{
Let $h$ be any homothety with ratio $0<\lambda \leq 1$ centered at $x$ such that
the segment $L_{v}(h(y))\cap B(x,r)$ has length greater than $r$.
}
Then, within $B(x,r)$, the line $L_{v}(h(y))$ decomposes into $2\leq p\leq k$ segments
which are alternatively inside and outside $X$. One of these segments has length at least $\frac{1}{p}r\geq s \geq |v|$.
We obtain that for some $z\in L_{v}(h(y))$ we have $z, z+v \in B(x,r)$ and $z\in X \leftrightarrow z+ v \not\in X $,
which contradicts our assumption that $v$ is an $(r,s)$-quasi-stratum of $x$.
\end{proof}
Note that in Lemma \ref{le:equiv-quasi-sing-and-sing-for-Ss} the condition that $X$ is \Ss-definable cannot be removed. Consider, e.g., $X=\mathbb{R} \times \mathbb{Q}$. Then it can be shown that for all $x \in \mathbb{R}^2$ we cannot find any reals $0<s<r$ for which the set of $(r,s)$-quasi-strata of $x$ is closed,
and this implies that $x$ is $X-$quasi-singular. However $x$ is not $X-$singular since $(1,0)$ is an $X-$stratum for $x$.
\subsection{Alternative characterization of \Ss-definability in ${\mathcal{S}_X}$.}
We can state the following variant of Theorem \ref{th:crit-n} for ${\mathcal{S}_X}$-definable relations under the hypothesis that
all nonempty ${\mathcal{S}_X}$-definable relations $Y$ contain a point with rational components
(recall that ${\mathcal{S}_X}$ stands for
the structure \SXs{X} where $X$ is some fixed but arbitrary relation). Observe that this implies that all
definable finite subsets $Y$ of $\mathbb{R}^{n}$ are included in $\mathbb{Q}^{n}$. Indeed, for every point $y\in Y$ with rational components
the set $Y\setminus \{y\}$ is again ${\mathcal{S}_X}$-definable; starting from $Y$, which contains a point with rational components by hypothesis, and repeatedly removing such a point, we see that every element of $Y$ has rational components.
\begin{proposition}
\label{pr:altern-cri-quasi} Let $n \geq 1$ and $X\subseteq \mathbb{R}^{n}$ be such that every nonempty ${\mathcal{S}_X}$-definable relation contains a point with rational components. Then $X$ is \Ss-definable if and only if every {generalized projection} of $X$ has finitely many quasi-singular points.
\end{proposition}
\begin{proof}
We proceed by induction on $n$.
\underline{Case $n=1$.} Assume first that $X$ is \Ss-definable. The only {generalized
projection} of $X$ that needs to be studied is $X$ itself. Now by Theorem \ref{th:crit-n}, $X$ has finitely many singular points, and by Lemmas \ref{le:sing-quasi-sing} and \ref{le:equiv-quasi-sing-and-sing-for-Ss} these are precisely its quasi-singular points.
Conversely assume that every generalized projection of $X$ has finitely many quasi-singular points. If the generalized projection is not $X$, then it is a singleton and there is nothing to check. It remains to consider the case where the projection is equal to $X$. By
Lemma \ref{le:sing-quasi-sing} this implies that $X$ has finitely many singular points. Now $X \subseteq \mathbb{R}$ thus the set of $X-$singular points
coincides with the topological boundary $Bd(X)$ of $X$. It follows that $Bd(X)$ is finite, i.e., $X$ is the union of finitely many intervals. Moreover
$Bd(X)$ is ${\mathcal{S}_X}$-definable
and by our assumption on ${\mathcal{S}_X}$ it follows that $Bd(X) \subseteq \mathbb{Q}$ thus every $X-$singular point is rational. The result follows from Theorem \ref{th:crit-n}.
\underline{Case $n>1$.}
Assume first that $X$ is \Ss-definable. The relation satisfies property (FSP) of Theorem \ref{th:crit-n} and by Lemma \ref{le:equiv-quasi-sing-and-sing-for-Ss} it has finitely many quasi-singular points. It thus suffices to consider proper generalized projections of $X$.
Assume without loss of generality that the projections are obtained by freezing the first $p$ components, where $0<p\leq n$. For every $a= (a_1,\dots,a_p)\in \mathbb{R}^{p}$, consider the projection
$$
X_{a}= \{(x_{p+1},\dots,x_n) \in \mathbb{R}^{n-p} \mid (a_1,\dots,a_p,x_{p+1},\dots,x_n) \in X\}.
$$
Consider the set $A$ of elements $a=(a_1,\dots,a_p)\in \mathbb{R}^p$ such that the relation $ X_{a}$
has infinitely many quasi-singular points.
Using expression (\ref{eq:finiteQS}), the set $A$ is ${\mathcal{S}_X}$-definable, thus it is \Ss-definable because so is $X$. If this set were nonempty,
by Theorem \ref{th:quantifier-elimination-for-R-plus} it would contain an element of $\mathbb{Q}^p$, which means that there exists a rational generalized projection of $X$ which has infinitely many quasi-singular points, a contradiction.
Conversely assume that every {generalized
projection} of $X$ has finitely many quasi-singular points, and let us prove that $X$ satisfies all conditions of Theorem \ref{th:crit-n}. Condition (DS) follows from the fact that every rational section of $X$ is a generalized projection of $X$ thus
is \Ss-definable by our induction hypothesis. For conditions (FSP) and (RSP), we observe that $X$ is a generalized projection
of itself thus the set of $X-$quasi-singular points is finite. By Lemma \ref{le:defquasi} this set is
${\mathcal{S}_X}$-definable thus it is a subset of $\mathbb{Q}^n$ by our assumption on ${\mathcal{S}_X}$, and the result follows from Lemma \ref{le:sing-quasi-sing}.
\end{proof}
\subsection{Defining \Ss-definability}
The formulation of conditions in Proposition \ref{pr:altern-cri-quasi} allows us to express \Ss-definability as a sentence in the structure ${\mathcal{S}_X}$ itself.
\begin{theorem}\label{th:seldef}
Let $n \geq 1$ and $X\subseteq \mathbb{R}^{n}$ be such that every nonempty ${\mathcal{S}_X}$-definable relation contains a point with rational components. There exists a ${\mathcal{S}_X}$-sentence $\Phi_{n}$ (which is uniform in $X$) which holds in ${\mathcal{S}_X}$ if and only if $X$ is \Ss-definable.
\end{theorem}
\begin{proof}
{Let $[n]$ denote the set $\{1, \ldots, n\}$.}
By Proposition \ref{pr:altern-cri-quasi} it suffices to express the fact that every generalized projection of $X$ has finitely many quasi-singular points.
This leads us to consider all possible
generalized projections obtained by freezing a subset $[n] \setminus I$ of components as in Definition \ref{de:generalized-section}.
Let $\mathbb{R}^{I}$ denote the product of copies of $\mathbb{R}$ indexed by $I$
and for all $x,y\in \mathbb{R}^{I}$ set $|x-y|_{I}= |(x+z) -(y+z)|$ for any $z\in \mathbb{R}^{[n]\setminus I}$.
For $x\in \mathbb{R}^{I}$ and $r\geq 0$ set
$B_{I}(x,r)=\{y\in \mathbb{R}^{I} \mid |x-y|_{I} <r\}$. We use the pair $(n,I)$ as a parameter
for the predicates $\phi, QS,FS$ (see Lemma \ref{le:defquasi}).
{ The symbol $\xi$ stands for the subvector with frozen components (we have $\xi \in \mathbb{R}^{[n]\setminus I}$). With some abuse of notation we write $\xi+z$ for $\xi \in \mathbb{R}^{[n]\setminus I}$ and $z \in \mathbb{R}^I$, that is, if $\xi=(\xi_i)_{i \in [n]\setminus I}$
and $z=(z_i)_{i \in I}$ then $\xi+z=(w_1,\dots,w_n)$ with $w_i=z_i$ if $i \in I$ and $w_i=\xi_i$ otherwise. We can define the predicates $\phi_{n,I}$, $QS_{n,I}$ and $FS_{n,I}$ as follows:}
\begin{equation}
\nonumber
\begin{array}{ll}
\phi_{n,I}(X,\xi,x, r,s,v)\equiv & \xi\in \mathbb{R}^{[n]\setminus I} \wedge x, v\in \mathbb{R}^{ I} \wedge 0< |v|_{I}< s\\
& \wedge \forall y \in \mathbb{R}^{ I} (y,y+v\in B_{I}(x,r) \rightarrow (\xi+ y\in X \leftrightarrow \xi+ y+v \in X))
\end{array}
\end{equation}
\begin{equation}
\nonumber
\begin{array}{ll}
QS_{n,I}(x,\xi, X)\equiv & \neg \exists r \exists s (( 0<s<r) \wedge \exists v\ (\phi_{n,I}(X,\xi,x, r,s,v)) \\
&\wedge \forall v\ (\phi_{n,I}(X,\xi,x,r,s,v) \rightarrow \phi_{n,I}(X,\xi,x, r,s,\frac{v}{2}) ) \\
&\wedge \forall u \ (((|u|_{I}<s) \wedge \forall \epsilon>0 (\exists v\ (\phi_{n,I}(X,\xi,x, r,s,v)\wedge |v-u|_{I}<\epsilon )))\\
& \phantom{xxxxxxxxxxxxxxxxx} \rightarrow \phi_{n,I}(X,\xi,x, r,s,u)))
\end{array}
\end{equation}
\begin{equation}
\nonumber
\begin{array}{ll}
FS_{n,I}(X,\xi)\equiv & (\exists t>0 \ \forall x \ (QS_{n,I}(x,\xi, X) \rightarrow |x|_{I}<t))\\
& \wedge (\exists s>0
(\forall x \forall y ( (QS_{n,I}(x,\xi, X) \wedge QS_{n,I}(y,\xi, X) \wedge x\not=y)\rightarrow |x-y|_{I}>s)))
\end{array}
\end{equation}
This leads to the following definition of $\Phi_{n}$:
\begin{equation}
\label{eq:ouf}
\Phi_{n}\equiv \bigwedge_{I\subseteq [n]} \forall \xi\in \mathbb{R}^{[n]\setminus I} \ \ FS_{n,I}(X,\xi).
\end{equation}
\vspace*{-7mm}
\end{proof}
{
\begin{remark}
One can prove that Theorem \ref{th:seldef} does not hold anymore if we remove the assumption that every nonempty ${\mathcal{S}_X}$-definable relation contains a point with rational components. Indeed consider $n=1$ (the general case $n \geq 1$ easily reduces to this one) and a singleton set $X=\{x\} \subseteq \mathbb{R}$. Then
by Theorem \ref{th:quantifier-elimination-for-R-plus}, $X$ is \Ss-definable if and only if $x \in \mathbb{Q}$. Thus if there exists a ${\mathcal{S}_X}$-sentence $\Phi_{n}$ which expresses that $X$ is \Ss-definable, then it is easy to transform $\Phi_{n}$ into a \Ss-formula $\Phi'_n(x)$ which defines $\mathbb{Q}$ in \Ss, and this contradicts Theorem \ref{th:quantifier-elimination-for-R-plus}.
\end{remark}
}
\subsection{Extensions to \Ls-definability}
\label{sec:extensions}
We extend the previous results to the case of \Ls-definability.
Here ${\mathcal{T}_X}$ stands for
the structure \LXs{X} with $n \geq 1$ and $X \subseteq \mathbb{R}^n$. We prove that if every nonempty ${\mathcal{T}_X}$-definable relation contains a point with rational components then one can express the property that $X$ is \Ls-definable with a ${\mathcal{T}_X}$-sentence.
The construction is based on the decomposition of any set of reals into ``integer'' and ``fractional'' sets, which allows us to reduce the \Ls-definability of $X$, on one hand to the $\langle \mathbb{Z},+, < \rangle$-definability of some subsets of $\mathbb{Z}^n$ and on the other hand to the \Ss-definability of a collection of subsets of $[0,1)^n$. In order to express these two kinds of properties in ${\mathcal{T}_X}$, we rely respectively on Muchnik's Theorem \cite{Muchnik03} and on
Theorem \ref{th:seldef}.
We start with a property which holds for all relations under no particular assumption.
Given a relation $X\subseteq \mathbb{R}^{n}$ consider the denumerable set of distinct restrictions of $X$ to unit hypercubes, i.e.,
$$
\tau_{a}\big(X \cap ([a_{1}, a_{1}+1) \times \cdots \times [a_{n}, a_{n}+1)) \big)
$$
where $a=(a_{1}, \cdots, a_{n})\in \mathbb{Z}^{n}$ and $\tau_{a}$ is the translation $x\mapsto x-a$. Let $\{\Delta_{m}\}_{m \in M}$ denote this
collection of sets, where $M$ is some denumerable index set. For each $m \in M$, let $\Sigma_{m}\subseteq \mathbb{Z}^{n}$ satisfy the condition
$$
x\in \Sigma_{m} \leftrightarrow x + \Delta_{m} = X \cap \big( [ x_{1}, x_{1}+1) \times \cdots \times [ x_{n}, x_{n}+1) \big)
$$
Observe that the decomposition
\begin{equation}
\label{eq:unique-decomposition}
X= \bigcup_{m \in M} (\Sigma_{m} + \Delta_{m})
\end{equation}
is unique by construction.
In the particular case of \Ls-definable relations we have the following result.
\begin{proposition}{(\cite[Theorem 7]{BFL08}, see also \cite{FV59})}
\label{prop-decomp}
A relation $X \subseteq \mathbb{R}^n$ is \Ls-definable if and only if in the decomposition (\ref{eq:unique-decomposition})
the following three conditions hold:
\begin{description}
\item {\em (FU)} the set $M$ in (\ref{eq:unique-decomposition}) is finite.
\item {\em (IP)} each $\Sigma_{m}$ is $\langle \mathbb{Z},+, < \rangle$-definable.
\item {\em (FP)} each $\Delta_{m}$ is \Ss-definable.
\end{description}
\end{proposition}
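As a simple illustration of the decomposition (\ref{eq:unique-decomposition}) and of the three conditions above (this example plays no role in the sequel), take $n=1$ and $X=[0,+\infty)$. For $a \in \mathbb{Z}$ we have $\tau_{a}\big(X \cap [a,a+1)\big)=[0,1)$ if $a \geq 0$ and $=\emptyset$ otherwise, so that the decomposition reads $X=(\Sigma_{1}+\Delta_{1}) \cup (\Sigma_{2}+\Delta_{2})$ with $\Delta_{1}=[0,1)$, $\Sigma_{1}=\{a \in \mathbb{Z} \mid a \geq 0\}$, $\Delta_{2}=\emptyset$ and $\Sigma_{2}=\{a \in \mathbb{Z} \mid a<0\}$. Here $M$ has two elements, both $\Sigma_{m}$ are $\langle \mathbb{Z},+, < \rangle$-definable and both $\Delta_{m}$ are \Ss-definable, in accordance with the fact that $X$ is \Ls-definable.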
\begin{proposition}\label{pr:reduc-presb}
For all $n \geq 1$ and $X \subseteq \mathbb{R}^n$, if every nonempty ${\mathcal{T}_X}$-definable relation contains a point with rational components then there exists a ${\mathcal{T}_X}$-sentence $\Gamma_n$ (uniform in $X$) which holds if and only if $X$ is \Ls-definable.
\end{proposition}
\begin{proof}
In view of Proposition \ref{prop-decomp} it suffices to show that the three conditions are expressible in
\str{\mathbb{R},+,<,\mathbb{Z},X}.
Let $\Phi(X)$ be the ${\mathcal{T}_X}$-formula which states that $X$ is \Ss-definable, see equation (\ref{eq:ouf}).
\noindent Condition (FU): Let $x \approx y$ denote the equivalence relation which says that the two points $x$ and $y$ belong to the same $\Sigma_{m}$ in the decomposition (\ref{eq:unique-decomposition}).
It is expressed by the ${\mathcal{T}_X}$-formula
\begin{equation}
\label{eq:approx}
x \in \mathbb{Z}^{n} \wedge y\in \mathbb{Z}^{n} \wedge \forall z \in [0,1)^n\ (x+z \in X \leftrightarrow y+z \in X)
\end{equation}
The finiteness of the number of classes is expressed by the ${\mathcal{T}_X}$-formula
$$
\exists N \ \forall x \in \mathbb{Z}^{n} \ \exists y \in \mathbb{Z}^{n} \ ( |y| < N \wedge y\approx x)
$$
\noindent Condition (IP): By \cite[Thm 1]{Muchnik03} for every $Y \subseteq \mathbb{Z}^n$ there exists a \str{\mathbb{Z},+,<,Y}-formula $\Psi(Y)$ (uniform in $Y$) which holds if and only if $Y$ is \str{\mathbb{Z},+,<}-definable. Let $\Psi^*(Y)$ denote the \str{\mathbb{R},\mathbb{Z},+,<,Y}-formula obtained from $\Psi(Y)$ by relativizing all quantifiers to $\mathbb{Z}$. Given $Y \subseteq \mathbb{Z}^n$ (seen as a subset of $\mathbb{R}^n$), the formula $\Psi^*(Y)$ holds in \str{\mathbb{R},\mathbb{Z},+,<,Y} if and only if $Y$ is \str{\mathbb{Z},+,<}-definable. Thus we can express in ${\mathcal{T}_X}$ the fact that all $\approx$-equivalence classes are \str{\mathbb{Z},+,<}-definable with the formula
$$
\forall x\in \mathbb{Z}^{n} \Psi^*((y\in \mathbb{Z}^{n} \wedge y\approx x))
$$
\noindent Condition (FP): The fact that every hypercube of $X$ of unit side is \Ss-definable is expressed by\vspace*{-1.8mm}
$$
\forall x_{1}, \ldots, x_{n} \in \mathbb{Z} \ \Phi((0\leq y_{1}< 1\wedge \cdots \wedge 0\leq y_{n}< 1 \wedge (x_{1}+ y_{1}, \cdots, x_{n}+y_{n}) \in X)).
$$
\vspace*{-8mm}
\end{proof}
\section{Application to decidability}
\label{sec:applications}
\subsection{Deciding \Ss-definability and \Ls-definability}
\label{subsec:decid-general}
Theorem \ref{th:seldef} and Proposition \ref{pr:reduc-presb} prove the existence of definable criteria for \Ss-definability and \Ls-definability for a given relation $X$, respectively. If $X$ is definable in some decidable expansion of \Ss\ (resp. \Ls) then we can obtain effective criteria. This can be formulated as follows.
\begin{theorem}\label{th:eff1}
Let $\+M$ be any decidable expansion of \Ss\ such that every nonempty $\+M$-definable relation contains a point with rational components. Then it is decidable whether a $\+M$-definable relation $X \subseteq \mathbb{R}^n$ is \Ss-definable.
\end{theorem}
\begin{proof}
Assume that $X$ is $\+M$-definable by the formula $\psi(x)$. In Equation (\ref{eq:ouf}), if we replace every atomic subformula
of the form $u\in X$ by $\psi(u)$
then we obtain an $\+M$-sentence which holds if and only if $X$
is \Ss-definable, and the result follows from the decidability of $\+M$.
\end{proof}
\begin{theorem}\label{th:eff2}
Let $\+N$ be any decidable expansion of \Ls\ such that every nonempty $\+N$-definable relation contains a point with rational components. Then it is decidable whether a $\+N$-definable relation $X \subseteq \mathbb{R}^n$ is \Ls-definable (resp. whether a $\+N$-definable relation $X \subseteq \mathbb{R}^n$ is \Ss-definable).
\end{theorem}
\begin{proof}
The claim about \Ss-definability follows immediately from the fact that $\+N$ satisfies the conditions of Theorem \ref{th:eff1}. For \Ls-definability, we use the same idea as for the proof of Theorem \ref{th:eff1}, but instead of $\Phi_n$ we use the sentence $\Gamma_n$ of Proposition \ref{pr:reduc-presb}.
\end{proof}
\subsection{Application to recognizable numerical relations}
We finally apply the results of Section \ref{subsec:decid-general} to the class of $k$-recognizable relations on reals.
Let us recall that given an integer {base} $k \geq 2$ and a non-negative real $x$, a {\em $k$-encoding} of $x$ is any right infinite word on the alphabet
$\Sigma_k= \{0, \ldots, k-1\}\cup \{\star\}$ of the form $w=a_{p}\ldots a_{1}\star a_{0} a_{-1} a_{-2 } \ldots$ such that $a_i \in \{0, \ldots, k-1\}$ for every $i \leq p$ and $x= \sum_{i \leq p} a_i k^i$. The definition extends to the case of negative reals $x$ by using the $k$'s complement representation method where the leftmost digit equals $k-1$: a {$k-$encoding} of $x$ is a right infinite word on the alphabet $\Sigma_k$ of the form $w=a_{p}\ldots a_{1}\star a_{0} a_{-1} a_{-2 } \ldots$ where $a_p=k-1$ and $x=-k^p + \sum_{i \leq p-1} a_i k^i$. Note that every real has infinitely many $k$-encodings.
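For instance (purely as an illustration), in base $k=2$ the word $010\star 1010^{\omega}$ is a $2$-encoding of $5.25$, and so are $0010\star 1010^{\omega}$ and $010\star 1001^{\omega}$; the word $10\star 10^{\omega}$, whose leftmost digit equals $k-1=1$, is a $2$-encoding of $-3$ since $-2^{2}+1=-3$; and the ultimately periodic word $0\star (01)^{\omega}$ is a $2$-encoding of the rational $\frac{2}{3}$, a phenomenon exploited in the proof of Theorem \ref{th:rec-z-r} below.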
In order for an automaton to be able to process $n$-tuples of base-$k$ representations of reals, we
prefix each component, if necessary, with as few occurrences of $0$ (for the nonnegative components) or $k-1$ (for the negative components) as needed
so that the $n$ components have the same length to the left of the symbol $\star$. This does not change the numerical values represented.
By identifying a real with its $k$-encodings, relations of arity $n$ on $\mathbb{R}$ can thus be viewed as subsets of $n$-tuples
of sequences
on $\{0, \ldots, k-1\}\cup \{\star\}$, i.e., as subsets of
$$
(\{0, \ldots, k-1\}^{n})^{*} \{\overbrace{(\star, \ldots, \star)}^{n \text{\tiny times}}\} (\{0, \ldots, k-1\}^{n})^{\omega}
$$
\begin{definition}
\label{de:recognizable}
A relation $X\subseteq \mathbb{R}^{n}$ is $k$-\emph{recognizable}
if the set of $k$-encodings of its elements
is recognized by some deterministic Muller automaton.
\end{definition}
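For instance, one readily checks that $\mathbb{Z}$, viewed as a subset of $\mathbb{R}$, is $k$-recognizable for every base $k$: a word is a $k$-encoding of an integer exactly when its digits of negative index (those strictly to the right of $a_{0}$) are either all equal to $0$ or all equal to $k-1$, which is clearly a condition that a deterministic Muller automaton can check.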
The collection of $k$-recognizable relations has a natural logical characterization.
\begin{theorem}\cite[Thm 5 and 6]{BRW1998}
\label{th:brw98}
Let $k \geq 2$ be an integer. A subset of $\mathbb{R}^{n}$ is $k$-recognizable if and only if it is definable in
$\langle \mathbb{R},\mathbb{Z},+,<,X_k\rangle$ where $X_k \subseteq \mathbb{R}^3$ is such that $X_k(x,y,z)$ holds if and only if $y$ is a power of $k$ and $z$ is the coefficient of
$y$ in some $k-$encoding of $x$.
Consequently, since the emptiness problem for recognizable relations is decidable, the theory of $\langle \mathbb{R},\mathbb{Z},+,<,X_k \rangle$ is decidable.
\end{theorem}
Moreover the class of \Ls-definable relations enjoys the following characterization.
\begin{theorem}\cite{BB2009,BBB2010,BBL09}
\label{th:all-bases}
A subset of $\mathbb{R}^{n}$ is \Ls-definable if and only if it is $k$-recognizable for every integer $k \geq 2$.
\end{theorem}
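To see that the restriction to a single base is a genuine one, consider (as a standard example, not needed in the sequel) the set $P=\{2^{m} \mid m \geq 0\} \subseteq \mathbb{R}$. The set of $2$-encodings of elements of $P$ is easily seen to be recognized by a deterministic Muller automaton, so $P$ is $2$-recognizable. On the other hand, in the decomposition (\ref{eq:unique-decomposition}) of $P$ the set $\Sigma_{m}$ associated with $\Delta_{m}=\{0\}$ is $P$ itself, which is not $\langle \mathbb{Z},+,< \rangle$-definable; hence $P$ is not \Ls-definable by Proposition \ref{prop-decomp} and therefore, by Theorem \ref{th:all-bases}, not $l$-recognizable for some base $l \geq 2$.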
As a consequence, deciding whether a $k$-recognizable relation is $l$-recognizable for every base $l \geq 2$ amounts to deciding whether it is \Ls-definable. We can prove the following result.
\begin{theorem}\label{th:rec-z-r}
Given an integer $k \geq 2$, it is decidable whether a $k$-recognizable relation $X \subseteq \mathbb{R}^n$ is \Ls-definable (resp. whether a $k-$recognizable relation $X \subseteq \mathbb{R}^n$ is \Ss-definable).
\end{theorem}
\begin{proof} By Theorems \ref{th:eff2} and \ref{th:brw98} it suffices to prove that every nonempty $k$-recognizable relation $Y \subseteq \mathbb{R}^n$ contains an element in $\mathbb{Q}^{n}$. By our assumption the set of $k$-encodings of elements of $Y$ is nonempty and is recognized by a finite Muller automaton, thus it contains an ultimately periodic $\omega-$word, which is the $k-$encoding of some element of $\mathbb{Q}^n$.
\end{proof}
\subsection*{Acknowledgment}
We wish to thank the anonymous referees and Fran\c{c}oise Point for their useful comments and suggestions.
\end{document} |
\begin{document}
\begin{abstract}
Given a double cone $\mathcal{C}$ with entropy at most two that is symmetric across some hyperplane, we show that any integral Brakke flow coming out of the cone must inherit the reflection symmetry for all time, provided the flow is smooth for a short time. As a corollary we prove that any such flow coming out of a rotationally symmetric double cone must stay rotationally symmetric for all time. We also show the existence of a non-self-similar flow coming out of a double cone with entropy at most two, and give an example of such a flow with a finite time singularity. Additionally, we show the existence of self-expanders with triple junctions, which are exceptions to our main theorem.
\end{abstract}
\maketitle
\section{Introduction}
A family of properly embedded hypersurfaces $\{\Sigma_t\}_{t \in I}$ satisfies the mean curvature flow (MCF) equation if:
\begin{align*}
\left(\frac{\partial x}{\partial t}\right)^\perp = H_{\Sigma_t}(x), \;\; x \in \Sigma_t.
\end{align*}
Here $H_{\Sigma_t}$ is the mean curvature vector of $\Sigma_t$, $x$ is the position vector and $\perp$ denotes the projection onto the normal bundle. In this paper we study mean curvature flows (MCF) that come out of (smooth) cones. Due to the singularity at the origin, there could be multiple distinct mean curvature flows coming out of a given cone $\mathcal{C}$. We are interested in how symmetries of the cone influence these solutions. \par
The simplest solutions coming out of cones are self-expanders. We say a properly embedded hypersurface $\Sigma^n \subset \mathbb{R}^{n+1}$ is a \textit{self-expander} if
\begin{align}
\label{self-expander-equation}
H_\Sigma(x) = \frac{x^\perp}{2}.
\end{align}
Equivalently, $\Sigma$ is a self-expander if and only if $\{\sqrt{t}\Sigma\}_{t \in (0,\infty)}$ is a solution to the MCF. Self-expanders are of particular importance in the study of singularities of MCF as they arise naturally as models of how MCFs flow through conical singularities. \par
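For instance (purely as an illustration), every hyperplane $\Pi$ through the origin is a self-expander: both sides of \cref{self-expander-equation} vanish identically, and the corresponding flow $\{\sqrt{t}\,\Pi\}_{t \in (0,\infty)} = \{\Pi\}$ is static. The self-expanders of interest in this paper are instead asymptotic to nontrivial double cones. \par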
There exist, however, other solutions coming out of the cone that are not self-similar. A class of such solutions was first sketched by Bernstein--Wang \cite{BWTopologicalUniqueness} as Morse flow lines between unstable and stable self-expanders asymptotic to the same cone. Note that the existence of such a solution requires the existence of an unstable self-expander asymptotic to $\mathcal{C}$, which is not guaranteed. \par
In the present article, we study flows coming out of a double cone with reflection symmetry. Our main result roughly says that any MCF coming out of a suitable cone with a reflection symmetry across some hyperplane must inherit the symmetry for all future times, provided the flow is initially smooth for a short time (see \cref{preliminaries} for terminologies).
\begin{thm}
\label{reflection-symmetry}
Let $\Pi \subset \mathbb{R}^{n+1}$ be a hyperplane passing through the origin and $\mathbb{H}$ an open half-space with $\partial \mathbb{H} = \Pi$. Let $\mathcal{C} \subset \mathbb{R}^{n+1}$ be a smooth double cone with $\lambda[\mathcal{C}] < 2$ such that $\mathcal{C} \cap \mathbb{H}$ is a Lipschitz graph over $\Pi$. Let $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ be an integral, unit-regular and cyclic Brakke flow coming out of $\mathcal{C}$; that is,
\begin{align*}
\lim_{t \to 0} \mu_t = \mathcal{H}^n \llcorner \mathcal{C}
\end{align*}
as Radon measures. Suppose also $\mathcal{M}$ is smooth on $(0,T)$ for some $T > 0$. If $\mathcal{C}$ is symmetric across $\Pi$, then so is $\mathcal{M}$ for $t \in [0,\infty)$. Moreover, $\mathcal{M}$ is smooth away from $\Pi$.
\end{thm}
Applying the above to rotationally symmetric double cones we obtain:
\begin{cor}
\label{main-theorem}
Let $\mathcal{C} \subset \mathbb{R}^{n+1}$ be a smooth rotationally symmetric double cone with $\lambda[\mathcal{C}] < 2$ (see \cref{preliminaries} for the precise definitions). Let $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ be an integral, unit-regular and cyclic Brakke flow coming out of $\mathcal{C}$ that is smooth on $(0,T)$ for some $T > 0$. Then $\mathcal{M}$ is rotationally symmetric with the same axis of symmetry as $\mathcal{C}$. Moreover, $\mathcal{M}$ is smooth away from its axis of symmetry.
\end{cor}
\begin{rem}
In our previous work \cite{Chen}, we only showed that the flow is rotationally symmetric up to the first singular time $T$, so the point of the current work is to show that the symmetry still holds after any singularity, which, as part of the conclusion, must lie on the axis of symmetry.
\end{rem}
\begin{rem}
The entropy condition $\lambda[\mathcal{C}] < 2$ is likely redundant. See \cref{further-remarks}.
\end{rem}
In fact, if the cone $\mathcal{C} \subset \mathbb{R}^{n+1}$ is of the form
\begin{align}
\label{good-cone}
x_1^2 = m^2\left(x_2^2 + \cdots + x_{n+1}^2\right), \;\; m > 0,
\end{align}
where $m$ is the parameter determined by the cone angle, a much stronger conclusion holds:
\begin{cor}
\label{cylindrical-singularity}
Suppose $\mathcal{C}$ is of the form \cref{good-cone} and has entropy $\lambda[\mathcal{C}] < 2$. Suppose $\mathcal{M}$ is an integral, unit-regular and cyclic Brakke flow coming out of $\mathcal{C}$. If $\mathcal{M}$ is smooth on $(0,T)$ for some $T > 0$, then $\mathcal{M}$ is rotationally symmetric across the $x_1$-axis. The only possible singularity model of $\mathcal{M}$ is the round cylinder $\mathbb{R} \times \mathbb{S}^{n-1}$. Moreover, there can be at most one such singularity, which, if it exists, must occur at the origin.
\end{cor}
We can also apply \cref{reflection-symmetry} to cones with $O(p + 1) \times O(n-p+1)$ symmetry to prove:
\begin{cor}
\label{simons-cone-thm}
Suppose $n \ge 2$ and $1 \le p \le n-1$. Let $\mathcal{C} \subset \mathbb{R}^{n+2}$ be a cone invariant under $O(p+1) \times O(n-p+1)$ with $\lambda[\mathcal{C}] < 2$. Let $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ be an integral, unit-regular and cyclic Brakke flow coming out of $\mathcal{C}$. If there is $T > 0$ such that $\mathcal{M}$ is smooth on $(0,T)$, then $\mathcal{M}$ inherits the $O(p+1) \times O(n-p+1)$ symmetry (with the same axes of symmetry).
\end{cor}
There are many other related works on the rotational symmetry of self-expanders. Observe that if there is a unique self-expander asymptotic to a rotationally symmetric cone, then it must inherit the rotational symmetry. The first nontrivial result for double cones was obtained by Fong--McGrath \cite{FongMcGrath}. They proved that a mean-convex self-expander (i.e. $H_\Sigma > 0$) asymptotic to a rotationally symmetric double cone is rotationally symmetric. This was later generalized by Bernstein--Wang (Lemma 8.3 in \cite{BWIntegerDegree}) to weakly stable self-expanders. In our previous work \cite{Chen}, we showed in full generality that any smooth self-expander asymptotic to a rotationally symmetric double cone is rotationally symmetric. Rotational symmetry of self-expanding solitons of other geometric flows has been studied by Chodosh \cite{Chodosh} and Chodosh--Fong \cite{ChodoshFong}. \par
Next we briefly comment on some of the assumptions made in \cref{reflection-symmetry}. First of all, the only extra assumption over the smooth case is the entropy bound $\lambda[\mathcal{C}] < 2$. This is due to the complicated nature of singularities of higher multiplicities arising from Brakke flows. In particular, the entropy condition is essential to the maximum principle \cref{maximum-principles} (in which the entropy controls the Gaussian density) and the proof of \cref{tameness}. \par
Secondly, the cyclicity of $\mathcal{M}$ is needed to ensure singularities modeled on triple junctions do not appear along the flow. Standard Schauder estimates and pseudolocality arguments can only guarantee smoothness outside of a large ball (see for example \cref{interior-estimates}) but do not rule out the formation of triple junctions. This condition is explicitly used in \cref{tameness}. In fact, we will show that there exists a self-expander with triple junctions in \cref{ode-appendix}. Our moving plane method will not work for such self-expanders. However, if we assume that $\mathcal{C}$ is also symmetric across the hyperplane perpendicular to its axis of symmetry, then there is still some hope that all self-expanders with triple junctions are rotationally symmetric (see \cref{further-remarks}). \par
Finally, it is not immediately clear that our theorem contains more flows than \cite{Chen}, which includes \textit{smooth} self-expanders and low entropy flow lines of Bernstein--Wang \cite{BWTopologicalUniqueness}. For this reason we will establish an existence result of a non-self-similar flow coming out of a cone $\mathcal{C}$ with $\lambda[\mathcal{C}] < 2$. We warn the reader that, while the flow constructed below agrees with the construction from \cite{BWTopologicalUniqueness} on the smooth part, they are not necessarily the same flow. This is due to the lack of understanding of how expander mean convexity is preserved past singularities.
\begin{thm}
\label{nontrivial-flow-lines}
Let $\mathcal{C} \subset \mathbb{R}^{n+1}$ be a smooth cone with $\lambda[\mathcal{C}] < 2$. Let $\Sigma$ be a self-expander $C^{2,\alpha}$-asymptotic to $\mathcal{C}$. If $\Sigma$ is unstable, there exists a non-self-similar immortal integral Brakke flow $\mathcal{M} = \{\mu_t\}_{t \in (0,\infty)}$ such that
\begin{align*}
\lim_{t \to 0} \mu_t = \mathcal{H}^n \llcorner \mathcal{C}.
\end{align*}
Moreover, there is $T > 0$ such that $\mathcal{M}$ is smooth on $(0,T)$.
\end{thm}
\begin{rem}
This existence theorem is weaker than the one obtained in \cite{BCW} in which no entropy assumption is made, but is enough for our purposes. Thanks to the entropy bound, it is much simpler to analyze our situation as the flows we produce are automatically matching. A similar construction for self-shrinkers is carried out by Chodosh--Choi--Mantoulidis--Schulze \cite{CCMSGeneric}.
\end{rem}
\begin{rem}
Without the more restrictive entropy bound, it is not enough to produce the flow line as a smooth MCF: some unstable connected self-expanders under the flow will necessarily disconnect in order to reach the stable double-disk solution. See \cref{cylindrical-singularity} and \cref{disconnection}.
\end{rem}
Let us now briefly discuss the proof of \cref{reflection-symmetry}. As in \cite{Chen}, the proof is based on the moving plane method, first used by Alexandrov to prove that embedded, compact CMC hypersurfaces are round spheres. The method was then employed by Gidas--Ni--Nirenberg \cite{GNNSymmetry} who proved radial symmetry of positive solutions to the Yamabe problem, and by Schoen \cite{Schoen} who proved the uniqueness of catenoids. Recently, there are a number of other results on MCF that utilize the moving plane method, including \cite{MSS} and \cite{CHHAncient}. Unfortunately since we are dealing with potentially singular flows, these methods for smooth flows are no longer sufficient. Instead we will use a novel variant without smoothness, recently developed by \cite{CHHW} (for parabolic equations) and \cite{HHW} (for elliptic equations). The key ingredient in the proof is the Hopf lemma without smoothness \cref{hopf-lemma}, which allows us to upgrade regularity of the flow concurrently with the symmetry. \par
In order to apply the (geometric) moving plane method to noncompact objects, it is mandatory to have a well-controlled asymptote at infinity. In the asymptotically cylindrical case \cite{CHHW} (see also the preceding \cite{CHHAncient}), a fine neck analysis is carried out in order to determine the asymptotic expansion near infinity. However, to our advantage, in the asymptotically conical case, the asymptote near infinity is entirely determined by the given cone $\mathcal{C}$, and it is an immediate consequence of the pseudolocality theorem that our flows are nice $C^{2,\alpha}$ normal graphs over $\mathcal{C}$ outside of a large ball with appropriate decay rates (see \cref{smooth-construction-appendix} or the works of Bernstein--Wang \cite{BWSmoothCompactness} or \cite{BWTopology}). Knowing this, we can then carry out the moving plane method with the usual maximum principle and Hopf lemma replaced by \cref{maximum-principles} and \cref{hopf-lemma} respectively (barring some technicality, e.g. the tameness assumption \cref{tameness}). \par
We will prove \cref{reflection-symmetry} in \cref{reflection-symmetry-section}. In \cref{rotational-symmetry-section} we apply \cref{reflection-symmetry} to prove the various corollaries mentioned above. In \cref{construction-of-matching-motion} we give a construction for \cref{nontrivial-flow-lines}. In \cref{ode-appendix} we will prove an ODE existence result of self-expanders with triple junctions, which shows that the cyclic assumption in \cref{reflection-symmetry} is indeed necessary. In \cref{smooth-construction-appendix} we review the construction of smooth Morse flow line following \cite{BWTopologicalUniqueness}. Finally in \cref{maximum-principles-appendix} we recall the key maximum principle and Hopf lemma from Section 3 of \cite{CHHW}. \par
\subsection*{Acknowledgment}
I would like to thank my advisor, Jacob Bernstein, for many helpful comments on the paper and continuous encouragement. I would also like to thank Junfu Yao for helpful conversations, Kyeongsu Choi and Or Hershkovits for explaining some arguments in \cite{CHHW}, and the referee for many constructive comments on the first draft of this article.
\section{Preliminaries}
\label{preliminaries}
\subsection{Notations} Throughout the paper lower case letters such as $x$ denote points in $\mathbb{R}^{n+1}$, while upper case letters such as $X$ denote points in the spacetime $\mathbb{R}^{n+1} \times [0,\infty)$. $B_r(x)$ denotes the Euclidean ball of radius $r$ centered at $x$, and $P(X,r)$ denotes the parabolic ball centered at $X = (x,t)$ of radius $r$, i.e.
\begin{align*}
P(X,r) = B_r(x) \times (t - r^2,t].
\end{align*}
$\mathcal{T}_r(A)$ denotes the tubular neighborhood of $A \subset \mathbb{R}^{n+1}$ of radius $r$. Finally, for $x = (x', x_{n+1})$, let
\begin{align*}
B_r^n(x) = \{(y',y_{n+1}) \in \mathbb{R}^{n+1} \mid \abs{y' - x'} < r, y_{n+1} = x_{n+1}\},
\end{align*}
and $C_r(x)$ be the open cylinder of height $r$ over $B_r^n(x)$, i.e.
\begin{align*}
C_r(x) = \{(y',y_{n+1}) \in \mathbb{R}^{n+1} \mid \abs{x' - y'} < r, \abs{x_{n+1} - y_{n+1}} < r\}.
\end{align*}
\par
By a \textit{(hyper)cone} we mean a set $\mathcal{C} \subset \mathbb{R}^{n+1}$ that is invariant under dilation, i.e. $\rho \mathcal{C} = \mathcal{C}$ for all $\rho > 0$. The \textit{link} of $\mathcal{C}$ is $\mathcal{L}(\mathcal{C}) = \mathcal{C} \cap \mathbb{S}^n$. We say $\mathcal{C}$ is smooth if $\mathcal{L}(\mathcal{C})$ is a smooth hypersurface of $\mathbb{S}^n$. A double cone is a cone $\mathcal{C}$ whose link $\mathcal{L}(\mathcal{C})$ has two connected components lying in opposite hemispheres of $\mathbb{S}^n$. A hypersurface $\Sigma$ is \textit{$C^{k,\alpha}$-asymptotically conical} to $\mathcal{C}$ if
\begin{align*}
\lim_{\rho \to 0^+} \rho \Sigma = \mathcal{C} \text{ in } C^{k,\alpha}_{loc}(\mathbb{R}^{n+1} \setminus \{0\}).
\end{align*}
We will simply say $\Sigma$ is asymptotically conical to $\mathcal{C}$ if it is smoothly asymptotically conical. In our applications, the cone $\mathcal{C}$ is almost always assumed to be smooth and the asymptotically conical hypersurfaces will come from solutions to MCF, which are in fact smoothly asymptotic to $\mathcal{C}$ by standard Schauder theory. \par
Given a hypersurface $\Sigma \subset \mathbb{R}^{n+1}$, the \textit{Gaussian surface area} of $\Sigma$ is
\begin{align*}
F[\Sigma] = (4\pi)^{-n/2}\int_\Sigma e^{-\frac{\abs{x}^2}{4}} d\mathcal{H}^n.
\end{align*}
Following Colding--Minicozzi \cite{CMGeneric}, the \textit{entropy} of $\Sigma$ is
\begin{align*}
\lambda[\Sigma] = \sup_{\rho \in \mathbb{R}^+, x_0 \in \mathbb{R}^{n+1}} F[\rho \Sigma + x_0] = \sup_{\rho, x_0} \frac{1}{(4\pi \rho)^{n/2}} \int_{\Sigma} e^{-\frac{\abs{x-x_0}^2}{4\rho}} d\mathcal{H}^n.
\end{align*}
By Huisken's monotonicity formula, the entropy is nonincreasing along a MCF.
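For orientation, let us recall some classical values, which are not needed below (cf.\ the discussion in \cite{CMGeneric}): a hyperplane has entropy $1$; the entropy $\lambda[\mathbb{S}^{n}]$ of the round sphere is decreasing in $n$, with $\lambda[\mathbb{S}^{1}]=\sqrt{2\pi/e}\approx 1.52$ and $\lambda[\mathbb{S}^{n}] \to \sqrt{2}$ as $n \to \infty$; and $\lambda[\mathbb{S}^{k}\times\mathbb{R}^{n-k}]=\lambda[\mathbb{S}^{k}]$. In particular all of these values lie below the threshold $2$ appearing in our hypotheses.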
\subsection{Integral Brakke flows}
\label{brakkeflow}
For the rest of the article we will be using notations from geometric measure theory. We refer to \cite{SimonBook} and \cite{Ilmanen} for the relevant definitions. \par
Since we are dealing with potentially singular MCF, we need to generalize the classes of MCF to make sense of the flow and hence the symmetries past singularities. The measure-theoretic generalization of MCF is the Brakke flow \cite{Brakke} (see also the more recent book of Tonegawa \cite{Tonegawa}), which is a flow of varifolds. Given an integral $n$-rectifiable Radon measure $\mu$, let $V(\mu)$ denote the associated integral varifold, and $H$ its generalized mean curvature vector given by the formula:
\begin{align*}
\int \Div_{V(\mu)} X d\mu = - \int H \cdot X d\mu.
\end{align*}
where $X$ is a compactly supported $C^1$ vector field. \par
Given an open set $U \subset \mathbb{R}^{n+1}$, by an \textit{integral $n$-Brakke flow} in $U$ we mean a family of integral $n$-rectifiable Radon measures $\mathcal{M} = \{\mu_t\}_{t \in I}$ such that:
\begin{enumerate}[label = (\alph*)]
\item For a.e. $t \in I$, $V(\mu_t)$ has locally bounded first variation and its generalized mean curvature vector $H$ is orthogonal to the approximate tangent space of $V(\mu_t)$ for $\mu_t$-a.e. $x$.
\item For any bounded interval $[a,b] \subset I$ and compact set $K \subset U$,
\begin{align*}
\int_a^b \int_K (1 + \abs{H}^2) d\mu_tdt < \infty.
\end{align*}
\item For $[a,b] \subset I$ and every $\phi \in C_c^1(U \times [a,b], \mathbb{R}^+)$,
\begin{align*}
\int \phi d\mu_b - \int \phi d\mu_a \le \int_a^b \int \left(-\phi\abs{H}^2 + \nabla \phi \cdot H + \frac{\partial \phi}{\partial t}\right)d\mu_tdt.
\end{align*}
\end{enumerate}
Since we are working in codimension one we will drop the dependence on $n$ in the definition above and simply refer to it as a Brakke flow. \par
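For example (a standard fact recorded here only for orientation), if $\{\Sigma_t\}_{t \in I}$ is a smooth MCF of properly embedded hypersurfaces with locally bounded area, then $\mu_t = \mathcal{H}^n \llcorner \Sigma_t$ defines an integral Brakke flow for which the inequality in condition (c) holds with equality; the inequality in general allows for sudden loss of mass, a point we return to below. \par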
Brakke flow has two main drawbacks. First, due to the inequality in condition (c), a Brakke flow can vanish abruptly (in fact some evolution must involve such vanishing). In order to avoid this technical difficulty in \cref{construction-of-matching-motion} we will employ the notion of a matching flow which, in some sense, prevents certain sudden loss of mass. Secondly, and as part of our motivation, Brakke flow does not have to be unique. In fact, there could be multiple (smooth) self-expanders coming out of a fixed cone, which is singular at the origin. \par
A Brakke flow $\mathcal{M}$ is \textit{unit-regular} if $\mathcal{M}$ is smooth in a spacetime neighborhood of (and in particular has no sudden mass loss at) every spacetime point $X = (x,t)$ at which some tangent flow is a multiplicity one hyperplane. We say $\mathcal{M}$ is \textit{cyclic} if the associated mod-2 flat chain $[V(\mu_t)]$ (see eg. \cite{WhiteCurrentsChains}) has no boundary. By work of Ilmanen \cite{Ilmanen}, Brakke flows produced by elliptic regularization are unit-regular and cyclic. \par
Throughout our presentation, given a Brakke flow $\mathcal{M} = \{\mu_t\}_{t \in I}$, we will write $\mathcal{M}_t = \supp \mu_t$ for $t \in I$.
\section{Reflection symmetry}
\label{reflection-symmetry-section}
In this section we prove \cref{reflection-symmetry}. Fix a smooth double cone $\mathcal{C}$ with $\lambda[\mathcal{C}] < 2$ and an integral, unit-regular Brakke flow $\mathcal{M} = \{\mu_t\}$ satisfying the assumptions of \cref{reflection-symmetry}; that is,
\begin{align*}
\lim_{t \to 0} \mu_t = \mathcal{H}^n \llcorner \mathcal{C},
\end{align*}
and $\mathcal{M}$ is smooth on $(0,T)$ for some $T > 0$. Given a hyperplane $\Pi$, if $\mathcal{C} \cap \mathbb{H}$ is graphical over $\Pi$ and symmetric across $\Pi$, the main theorem in our previous work \cite{Chen} implies that $\mathcal{M}$ is symmetric across $\Pi$ until its first singular time, past which the usual moving plane argument ceases to work. For this reason, we will prove \cref{reflection-symmetry} using a version of the moving plane method without assuming smoothness, recently developed by \cite{CHHW} (see also \cite{HHW}). \par
A technical ingredient we need for the moving plane method is the notion of tameness, which we now define.
\iffalse
At the first singular time, any singularity $X$ must be on the axis of symmetry, and so any tangent flow at $X$ must also be rotationally symmetric. Since the flow is not closed and $\lambda[\mathcal{C}] < 2$, it is not hard to see that such a tangent flow must be a multiplicity one cylinder. Knowing this, we suspect it is possible to extend the argument in the smooth case up to the second singular time. Beyond the second singular time, however, we can not rule out other types of singularity anymore. For example, it is possible that the flow $\mathcal{M}$ forms two necks simultaneously at $T$ and subsequently disconnect into three connected components, one of which will be closed and disappears at a compact singularity. \par
\fi
\begin{defn}[Definition 3.1 in \cite{CHHW}]
\label{tameness-defn}
For an integral Brakke flow $\mathcal{M}$ in $\mathbb{R}^{n+1}$, we say $X \in \mathcal{M}$ is a tame point of the flow if the $-1$ time slice of every tangent flow at $X$ is smooth with multiplicity one away from a singular set $\mathcal{S}$ with $\mathcal{H}^{n-1}(\mathcal{S}) = 0$. We say $\mathcal{M}$ is a tame flow if every point $X \in \mathcal{M}$ is a tame point.
\end{defn}
For instance, a tame flow should not have a singularity modeled on a triple junction (that is, three hyperplanes meeting at equal angles) or a multiplicity two plane. Tameness is a key assumption to apply the Hopf lemma without smoothness \cref{hopf-lemma}, which, in turn, is crucial to the moving plane method. The next proposition establishes tameness of $\mathcal{M}$.
\begin{prop}
\label{tameness}
Let $\mathcal{M}$ be as above. Then $\mathcal{M}$ is a tame flow.
\end{prop}
\begin{proof}
It suffices to check the definition. Let $\mathcal{X} = \{\nu_t\}_{t \in (-\infty,0]}$ be a tangent flow at $(x_0,t_0)$. Since $\lambda[\mathcal{C}] < 2$, $\mathcal{X}$ has multiplicity 1 (i.e. the Gaussian density is 1 $\mathcal{H}^{n-1}$-a.e. on $\mathcal{X}$, $t$-a.e.). \par
If $\mathcal{X}$ is static or quasi-static, then $\nu_{-1} = \mathcal{H}^n \llcorner \Gamma$ for some stationary cone $\Gamma$. If $\Gamma$ splits off $(n-2)$-lines, then $\nu_{-1} = \mu_{\mathbb{R}^{n-2}} \times \nu'$ where $\nu'$ is a one-dimensional stationary cone in $\mathbb{R}^2$. Hence $\nu'$ is a union of half-rays. Since $\lambda[\nu'] = \lambda[\nu_{-1}] < 2$, there are at most 3 rays. Since $\mathcal{M}$ is cyclic, $\nu'$ cannot be 3 rays meeting at the origin. This is because the triple junction is not cyclic, and a cyclic Brakke flow cannot have a singularity modeled on a non-cyclic singularity by \cite{WhiteCurrentsChains}. Therefore $\nu'$ consists of 2 lines and in fact $\nu_{-1}$ is smooth. So any singular cone $\nu_{-1}$ can split off at most $(n-3)$-lines, and consequently the singular part of $\nu_{-1}$ has codimension at least 3. \par
If $\mathcal{X}$ is a non-flat self-shrinker, then any tangent cone $\nu'$ to $\nu$ is a stationary cone with entropy at most 2 (here we used the fact that self-shrinkers are minimal surfaces with respect to the metric $g_{ij} = e^{-\frac{\abs{x}^2}{2n}}\delta_{ij}$ which is conformal to the Euclidean metric). It follows from the above discussion that $\nu'$ can split off at most $(n-3)$-lines (if $\nu'$ splits of $(n-2)$-lines then it is a multiplicity 1 hyperplane, which, by Allard regularity theorem, means that $\mathcal{S}(\mathcal{X}) \subset \{0\}$ and consequently $\mathcal{X}$ is a multiplicity 1 hyperplane), and so $\nu_{-1}$ is smooth away from a set of Hausdorff dimension at most $(n-2)$. Thus $\mathcal{M}$ is tame.
\end{proof}
\begin{rem} With a more restrictive entropy bound it is possible to refine the codimension of the singular set even more. See \cite{BWSharpLower} or Section 4 of \cite{CHHW}.
\end{rem}
Next we establish some properties of $\mathcal{M}$. The next proposition says that the flow $\mathcal{M}$ stays asymptotically conical for all future time.
\begin{prop}
\label{asymptotic-conical}
Let $\mathcal{M}$ be as above. Then $\mathcal{M}$ is asymptotically conical to $\mathcal{C}$ for all $t \in (0,\infty)$. Consequently, for every $t > 0$ there is $R = R(t)$ such that $\mathcal{M}_t \setminus B_R(0)$ is a smooth MCF.
\end{prop}
\begin{proof}
This follows from pseudolocality for Brakke flows and parabolic Schauder estimates as in \cref{interior-estimates} (see also Proposition 4.4 of \cite{BWTopology}).
\end{proof}
Fix an open half space $\mathbb{H}$ and let $\Pi = \partial \mathbb{H}$. We now use the pseudolocality theorem to prove that if $\mathcal{C}$ is graphical over $\Pi$, then so is $\mathcal{M}$ outside of a large compact set. This will serve as an asymptotic expansion at infinity and be upgraded into interior graphicality via the moving plane method. For the next two lemmas we write $\mathcal{M}_t^+ = \mathcal{M}_t \cap \mathbb{H}$.
\begin{lem}
\label{asymptotic-expansion}
Suppose $\mathcal{C} \cap \mathbb{H}$ is a Lipschitz graph over $\Pi$. Let $\mathcal{M}$ be as above. Then for every $t > 0$ there is $R = R(t)$ and a smooth function $u$ on $\mathcal{C}$ such that
\begin{align*}
\mathcal{M}_t^+ \setminus B_R(0) \subset \{p + u(p)\nu_\mathcal{C}(p) \mid p \in \mathcal{C}\}
\end{align*}
and $\abs{u(p)} \le C \abs{p}^{-1}$ for all $p \in \mathcal{C}$, for some constant $C = C(t)$.
\end{lem}
\begin{proof}
Fix a time $t_0$. Let us first show that $\mathcal{M}_t^+ \setminus B_R(0)$ can be written as a smooth normal graph over $\mathcal{C}$. By \cref{asymptotic-conical}, there exists $R = R(t) > 0$ such that $\mathcal{M}_t \setminus B_R(0)$ is asymptotically conical to $\mathcal{C}$. Since $\mathcal{M}_0 = \mathcal{C}$, by pseudolocality theorem for Brakke flows (Theorem 1.5 of \cite{IlmanenNevesSchulze}), given $\eta > 0$ there exists $t_1$ such that for $0 < t < t_1$ and $x \in \mathcal{C} \setminus B_1(0)$, $\mathcal{M}_t \cap C_{\sqrt{t_1}}(x)$ can be written as a normal graph over $B_{\sqrt{t_1}}^n(x) \cap T_{x}\mathcal{C}$ with Lipschitz constant bounded by $\eta$. By parabolic rescaling, we see that, for $0 < t < 2t_0$ and $x \in \mathcal{C} \setminus B_{\sqrt{2t_0t_1^{-1}}}(0)$, $\mathcal{M}_t \cap C_{\sqrt{2t_0}}(x)$ can be written as a normal graph over $B_{\sqrt{2t_0}}^n(x)$ with Lipschitz constant bounded by $\eta$. In particular putting $t = t_0$ gives the desired graphicality. The regularity of $u$ follows from \cref{asymptotic-conical}. \par
To see that the function $u$ decays near infinity, by a similar argument as in \cref{distance-estimate}, there exists $N$ such that for all $R > 1$ we have
\begin{align*}
\mathcal{M}_{t_0} \setminus B_{NR\sqrt{t_0+1}}(0) \subset \mathcal{T}_{R^{-1}\sqrt{t_0+1}}(\mathcal{C}).
\end{align*}
Equivalently, for $R' > N\sqrt{t_0+1}$,
\begin{align*}
\mathcal{M}_{t_0} \setminus B_{R'}(0) \subset \mathcal{T}_{N(t_0+1)(R')^{-1}}(\mathcal{C}).
\end{align*}
Enlarge $R$ if needed so that $\mathcal{M}_{t_0}^+ \setminus B_R(0)$ is a normal graph over $\mathcal{C}$. We see that $u$ satisfies $\abs{u(p)} \le C\abs{p}^{-1}$ for those $p \in \mathcal{C}$ with $p + u(p)\nu_\mathcal{C}(p) \in \mathcal{M}_{t_0}^+ \setminus B_{2R}(0)$.
\end{proof}
\begin{lem}
\label{graphicality}
Suppose $\mathcal{C} \cap \mathbb{H}$ is a Lipschitz graph over $\Pi$. Let $\mathcal{M}$ be as above. For every $t > 0$ there is $R = R(t)$ such that $\mathcal{M}_t^+ \setminus B_R(0)$ can be written as a graph over $\Pi$; that is, the projection $\pi: \mathcal{M}_t^+ \setminus B_R(0) \to \Pi$ is injective.
\end{lem}
\begin{proof}
By \cref{asymptotic-expansion}, for every $\eta > 0$, there is $R = R(t)$ such that $\mathcal{M}_t^+ \setminus B_R(0)$ is a normal graph over $\mathcal{C}$ with Lipschitz constant bounded by $\eta$. Since $\mathcal{C} \cap \mathbb{H}$ is a Lipschitz graph over $\Pi$, the unit normal vector $\nu_\Pi$ is not contained in any tangent space to $x' \in (\mathcal{C} \cap \mathbb{H}) \setminus \{0\}$. Therefore by taking $\eta$ sufficiently small we may make sure that $\nu_\Pi$ is also not contained in any tangent space to $x \in \mathcal{M}_t^+ \setminus B_R(0)$ (here $R = R(t,\eta)$, but of course $\eta$ in turn depends on $t$). This proves that $\mathcal{M}_t^+ \setminus B_R(0)$ is graphical over $\Pi$ as well.
\end{proof}
\cref{tameness} and \cref{graphicality} allow us to use the moving plane method without smoothness, which we now carry out. Let
\begin{align*}
\Pi^s = \{(x,x_{n+1}) \in \mathbb{R}^{n+1} \mid x_{n+1}= s\} \times [0,\infty) \subset \mathbb{R}^{n+1} \times [0,\infty)
\end{align*}
be the hyperplane at level $s$ in spacetime. Given a set $A \subset \mathbb{R}^{n+1} \times [0,\infty)$ and $s \in [0,\infty)$ we let
\begin{align*}
A^{s+} = \{(x,x_{n+1},t) \in A \mid x_{n+1} > s\} \text{ and } A^{s-} = \{(x,x_{n+1},t) \in A \mid x_{n+1} < s\}
\end{align*}
be the parts of $A$ lying above $\Pi^s$ and below $\Pi^s$, respectively. Finally, the set
\begin{align*}
A^{s*} = \{(x,x_{n+1},t) \mid (x, 2s - x_{n+1},t) \in A\}
\end{align*}
is the reflection of $A$ across $\Pi^s$. We say $A > B$ for $A, B \subset \mathbb{R}^{n+1} \times [0,\infty)$ provided for any $(x,s,t) \in A$ and $(x,s',t) \in B$ we have $s > s'$. In contrast, a subscript $t$ will continue to denote the time $t$ slice of a spacetime set. \par
To set up the proof of \cref{reflection-symmetry}, WLOG we may assume the hyperplane is $\{x_{n+1} = 0\}$. Fix a time $T_0 > 0$. We consider $\mathcal{M}$ on $[0,T_0)$ as its spacetime track; namely,
\begin{align*}
\mathcal{M} = \bigcup_{t =0}^{T_0} \mathcal{M}_t \times \{t\} \subset \mathbb{R}^{n+1} \times [0,T_0].
\end{align*}
Finally, let
\begin{align*}
S = \{s \in (0,\infty) \mid (\mathcal{M}^{s+})^* > \mathcal{M}^{s-}, \text{ and } (\mathcal{M}^{s+})_t \text{ is graphical over } (\Pi^s)_t \text{ for } t \in [0,T_0]\}.
\end{align*}
Here graphicality means that the projection $\pi_s: (\mathcal{M}^{s+})_t \to (\Pi^s)_t$ is injective for $t \in [0,T_0]$. Since each $(\mathcal{M}^{s+})_t$ is countably $n$-rectifiable, graphicality is equivalent to the condition that the unit normal $e_{n+1} = (0,\ldots,0,1)$ of $(\Pi^s)_t$ is not contained in the approximate tangent space of $(\mathcal{M}^{s+})_t$ for $t \in [0,T_0]$. Observe that $(\mathcal{M}^{s+})^*$ is asymptotically conical to the translated cone $(\mathcal{C} + 2se_{n+1}) \times [0,\infty)$ (in the sense of \cref{asymptotic-expansion} --- this ensures that a hypersurface cannot be simultaneously asymptotic to two distinct cones). \par
First we need a lemma about smoothness of the top part of the flow similar to Proposition 7.4 of \cite{CHHW}.
\begin{lem}
\label{smoothness}
Suppose $s > 0$ and $s \in S$. Then $\mathcal{M}^{s+}$ is a smooth MCF asymptotic to $\mathcal{C}$. Moreover, every point on $\mathcal{M} \cap \{x_{n+1} = s\}$ is a regular point of the flow.
\end{lem}
\begin{proof}
Let
\begin{align*}
I_s = \{s' \ge s \mid \mathcal{M}^{s'+} \text{ is smooth} \}.
\end{align*}
By \cref{asymptotic-conical}, there is $R = R(t)$ such that $\mathcal{M}_t \setminus B_{R}(0)$ is a smooth MCF asymptotic to $\mathcal{C}$. So for sufficiently large $s$ depending on $T_0$ we see that $\mathcal{M}^{s+}$ is asymptotic to $\mathcal{C} \times [0,T_0]$. This shows $I_s$ is not empty. \par
Let $s_0 = \inf I_s$. We first argue that $\mathcal{M} \cap \{x_{n+1} = s_0\}$ consists of regular points. Let $(x_0,t_0) \in \mathcal{M} \cap \{x_{n+1} = s_0\}$ and let $\mathcal{M}^*$ be the flow reflected across $\Pi^{s_0}$. We wish to apply the Hopf lemma \cref{hopf-lemma} to $\mathcal{M}$, $\mathcal{M}^*$ and $\mathbb{H} = \{x_{n+1} < s_0\}$ to conclude that $(x_0,t_0)$ is a regular point. To this end we must check that the conditions are satisfied. Tameness follows from \cref{tameness}. We may also assume that $\partial \mathbb{H}$ is not a tangent flow to either $\mathcal{M}$ or $\mathcal{M}^*$ at $(x_0,t_0)$, because otherwise the entropy bound together with the Brakke regularity theorem implies that $(x_0,t_0)$ is a regular point. Finally, we claim $\reg \mathcal{M}_t \cap \mathbb{H}$ and $\reg \mathcal{M}^*_t \cap \mathbb{H}$ are disjoint for $t$ sufficiently close to $t_0$. Suppose not; then there must be a first time of contact:
\begin{align}
t_1 = \inf\{t \mid (\mathcal{M}^{s_0-}_{t}) \cap (\mathcal{M}^{s_0+})^*_t \cap \mathbb{H} \ne \emptyset \}
\end{align}
in $\mathbb{H}$. Take any point
\begin{align*}
(x_1,t_1) \in \mathcal{M}^{s_0-}_{t_1} \cap (\mathcal{M}^{s_0+})^*_{t_1} \subset \mathbb{H}.
\end{align*}
By definition of $s_0$ we know $(\mathcal{M}^{s_0+})^*_{t_1}$ is in fact a smooth MCF, so maximum principle \cref{maximum-principles} implies that $\mathcal{M}^{s_0-}$ agrees with $(\mathcal{M}^{s_0+})^*$ in some parabolic cylinder around $(x_1,t_1)$. The same reasoning applied to any other point in $\mathcal{M}^{s_0-}\cap (\mathcal{M}^{s_0+})^*$ shows that a connected component of $\mathcal{M}^{s_0-}$ agrees with a connected component of $(\mathcal{M}^{s_0+})^*$. This implies that $(\mathcal{M}^{s_0+})^*$ is simultaneously asymptotic to $\mathcal{C} \times [0,\infty)$ and $(\mathcal{C} + 2s_0e_{n+1}) \times [0,\infty)$, a contradiction. Hence the last condition in order to apply \cref{hopf-lemma} is satisfied and we conclude that $\mathcal{M} \cap \{x_{n+1} = s_0\}$ is regular. \par
Lastly we show that $s_0 = s$. This is a consequence of the fact that $\mathcal{M} \cap \Pi^{s_0}$ is compact. Using small balls as barriers similar to \cref{distance-estimate}, one sees that there exists some constant $N_1$ such that
\begin{align*}
\mathcal{M}_t \setminus B_{N_1R\sqrt{t+1}}(0) \subset \mathcal{T}_{R^{-1}\sqrt{t+1}}(\mathcal{C})
\end{align*}
for $R > 1$. On the other hand, for a fixed $t$ there is a constant $N_2$ such that
\begin{align*}
\mathcal{M}_t \cap B_{N_1\sqrt{t+1}}(0) \subset \mathcal{T}_{N_2}(\mathcal{C}),
\end{align*}
since the set on the left-hand side is clearly compact. These two facts together imply the existence of a constant $N_3$ such that
\begin{align*}
\mathcal{M}_t \cap \{x_{n+1} = s_0\} \subset \mathcal{T}_{N_3}(\mathcal{C}) \cap \{x_{n+1} = s_0\}.
\end{align*}
This shows that $\mathcal{M}_t \cap \{x_{n+1} = s_0\}$ is compact (as $\mathcal{C} \cap \{x_{n+1} = s_0\}$ is compact), and since $t \in [0,T_0]$, $\mathcal{M} \cap \Pi^{s_0}$ is compact as well. To finish the proof, note that by the previous paragraph $\mathcal{M} \cap \{x_{n+1} = s_0\}$ consists of regular points only. At each regular point $(x_0,t_0)$ there is some $r = r(x_0,t_0)$ such that $\mathcal{M}$ is smooth in $P((x_0,t_0),r)$. Since $\mathcal{M} \cap \{x_{n+1} = s_0\}$ is compact, $r$ is uniformly bounded below away from 0, so if $s_0 > s$ then $\mathcal{M}^{s'+}$ is smooth for some $s' \in [s,s_0)$; this contradicts the definition of $s_0$ as an infimum unless $s_0 = s$.
\end{proof}
\begin{proof}[Proof of \cref{reflection-symmetry}]
To finish the proof we must show $S$ is nonempty, $S$ is open, and $S$ is closed. \par
Again by \cref{asymptotic-conical}, for sufficiently large $s$ we can make sure that $\mathcal{M}^{s+}$ is a smooth MCF,
\begin{align*}
(\mathcal{M}^{s+})^* \cap ((\mathcal{M}^{s-}) \cap \{x_{n+1} \ge 0\}) = \emptyset,
\end{align*}
and that for any $(x,s_1,t) \in (\mathcal{M}^{s+})^*$ and $(x,s_2,t) \in \mathcal{M}^{0-}$ it holds that $s_1 - s_2 \ge 2s-1$. These two facts imply that for sufficiently large $s$ the inequality $(\mathcal{M}^{s+})^* > \mathcal{M}^{s-}$ is valid. On the other hand, by \cref{graphicality}, there is $R = R(T_0)$ such that $(\mathcal{M}^{0+})_{T_0} \setminus B_R(0)$ is graphical over $(\Pi^0)_{T_0}$. So for $s > R$ we have $(\mathcal{M}^{s+})_t$ is graphical over $(\Pi^s)_t$ for all $t \in [0,T_0)$. This shows $S$ is not empty. \par
It is clear that $(\mathcal{M}^{s+})^* > \mathcal{M}^{s-}$ is an open condition. To see that the graphicality condition is also an open condition, let $\theta_t(x)$ be the angle between the unit normal to the approximate tangent space at a point $x \in \mathcal{M}_t$ and $e_{n+1}$. Suppose that $s \in S$, then graphicality is equivalent to $\theta_t(x) < \frac{\pi}{2}$ for all $t \in [0,T_0)$ and $x \in (\mathcal{M}^{s+})_t$. Since the flow $\mathcal{M}$ is $C^{2,\alpha}$-asymptotically conical, for given $t$ there exists $\varepsilon > 0$ such that $\theta_t(x) < \pi/2$ for all $x \in (\mathcal{M}^{s'+})_t$ where $\abs{s' - s} < \varepsilon$. Since the time interval is compact, there is a universal $\varepsilon$ such that the above holds for all $t \in [0,T_0)$. This shows openness of $S$. \par
Finally we show $S$ is closed. Obviously if $s \in S$ then $[s,\infty) \subset S$. So we assume $(s,\infty) \subset S$ and suppose for a contradiction that $s \not \in S$. At level $s$, either $(\mathcal{M}^{s+})^* \cap \mathcal{M}^{s-} \ne \emptyset$ or there is some $t_0 \in [0,T_0)$ such that $(\mathcal{M}^{s+})_{t_0}$ fails to be graphical over $(\Pi^s)_{t_0}$. \par
In the first case, $s$ is necessarily the first level of contact. By choosing $r$ small enough we can ensure $(\mathcal{M}^{s+})^*$ and $\mathcal{M}^{s-}$ are graphical in $P(X,r)$ where $X \in (\mathcal{M}^{s+})^* \cap \mathcal{M}^{s-}$. Moreover, by \cref{smoothness}, the reflected part $(\mathcal{M}^{s+})^*$ is a smooth MCF, so all the conditions of the maximum principle \cref{maximum-principles} are satisfied (note that the Gaussian density bound is automatic from the entropy bound). Applying \cref{maximum-principles}, we see $(\mathcal{M}^{s+})^*$ and $\mathcal{M}^{s-}$ agree in a neighborhood of $X$. Now an identical argument as in the proof of \cref{smoothness} shows that $(\mathcal{M}^{s+})^*$ is simultaneously asymptotic to $\mathcal{C} \times [0,\infty)$ and $(\mathcal{C} + 2se_{n+1}) \times [0,\infty)$, which is again a contradiction. \par
In the second case, WLOG we may assume $t_0$ is the first time the graphicality condition fails. Then there necessarily exists a point $X = (x,s,t_0) \in \mathcal{M}_{t_0} \cap \{x_{n+1} = s\}$ whose tangent space contains the vector $e_{n+1}$. We again check the condition to apply Hopf lemma \cref{hopf-lemma} to $\mathcal{M}^1 = (\mathcal{M}^{s+})^*$, $\mathcal{M}^2 = \mathcal{M}^{s-}$ and $\mathbb{H} = \{x_{n+1} < s\}$ as in the proof of \cref{smoothness}. Tameness follows from \cref{tameness}. Since $e_{n+1}$ is normal to the hyperplane $\{x_{n+1} = s\}$ and $X$ is a regular point of $\mathcal{M}$ by \cref{smoothness}, we see that $\partial \mathbb{H}$ is not the tangent flow to either $\mathcal{M}^1$ or $\mathcal{M}^2$ (here we used the fact that the tangent flow at a regular point agrees with the static flow of the tangent plane). The disjointness of the regular parts of $\mathcal{M}^1$ and $\mathcal{M}^2$ in $\mathbb{H}$ follows identically as in the proof of \cref{smoothness}. Hence, we may apply \cref{hopf-lemma} to conclude that $\mathcal{M}^1$ and $\mathcal{M}^2$ have distinct tangents, which is a contradiction since the tangent spaces agree at $X$. This concludes the proof that $S$ is closed. \par
This shows that $S = (0,\infty)$. At $s = 0$, one sees that the graphicality condition is preserved (alternatively one can run the moving plane method from the other side, i.e. $s < 0$), but the strict inequality $(\mathcal{M}^{0+})^* > \mathcal{M}^{0-}$ does not hold anymore, which implies that $(\mathcal{M}^{0+})^* \cap \mathcal{M}^{0-} \ne \emptyset$. Applying the maximum principle \cref{maximum-principles} once again we conclude $(\mathcal{M}^{0+})^* = \mathcal{M}^{0-}$, and this is the required reflection symmetry across $\Pi^0 = \{x_{n+1} = 0\}$.
\end{proof}
\section{Rotational symmetry}
\label{rotational-symmetry-section}
In this section we prove \cref{main-theorem} and \cref{simons-cone-thm}. First let $\mathcal{C}$ be a smooth, rotationally symmetric double cone. A typical example of such a $\mathcal{C}$ is given by \cref{good-cone} which has the $x_1$ axis as its axis of symmetry. Our theorem allows more general cones of the form
\begin{align*}
x_1^2 = \begin{cases} m_1^2(x_2^2 + x_3^2 + \cdots + x_{n+1}^2), & x_1 \ge 0 \\ m_2^2(x_2^2 + x_3^2 + \cdots + x_{n+1}^2), & x_1 < 0\end{cases},
\end{align*}
where $m_1, m_2 >0$ (i.e. the top and bottom parts of the cone can have different cone angles). A cone $\mathcal{C}$ of the above form is expected to also satisfy $\lambda[\mathcal{C}] < 2$, but this is not known explicitly. See \cref{further-remarks} for more on the entropy of cones.
\begin{proof}[Proof of \cref{main-theorem}]
WLOG we may assume the axis of symmetry is the $x_1$-axis. Observe that rotational symmetry is equivalent to reflection symmetry across every hyperplane containing the $x_1$-axis. Up to a rotation it suffices to show that $\mathcal{M}$ is symmetric across the hyperplane $\{x_{n+1} = 0\}$. Since $\mathcal{C}$ is a smooth graph over $\{x_{n+1} = 0\}$, the desired conclusion follows from \cref{reflection-symmetry}.
\end{proof}
\begin{proof}[Proof of \cref{cylindrical-singularity}]
The rotational symmetry is \cref{main-theorem}. Since $\mathcal{C}$ is symmetric across the hyperplane $\{x_1 = 0\}$ as well, we can apply \cref{reflection-symmetry} to $\mathbb{H} = \{x_1 > 0\}$ to conclude that $\mathcal{M}$ is smooth away from $\{x_1 = 0\}$. Together with \cref{main-theorem} we infer that the only possible singularity of $\mathcal{M}$ is at the origin. Moreover, any tangent flow $\mathcal{X}$ at the first singular time must be rotationally symmetric. By the classification of rotationally symmetric self-shrinkers of Kleene and M\o ller \cite{KleeneMoller}, $\mathcal{X}$ has to be one of the following: a round sphere, a round cylinder $\mathbb{R} \times \mathbb{S}^{n-1}$ or a smooth embedded $\mathbb{S}^1 \times \mathbb{S}^{n-1}$. Since $\mathcal{M}$ is not closed, we conclude that $\mathcal{X}$ has to be a round cylinder. The uniqueness of the cylinder follows from the work of Colding--Minicozzi \cite{CMUniqueness}.
\end{proof}
\begin{rem}
The above corollary does not guarantee the existence of a cylindrical singularity, as it is entirely possible that the flow remains smooth for all times. See \cref{disconnection} for sufficient conditions for the flow to have a singularity.
\end{rem}
Next we apply the same method to cones with more general symmetry groups. For this part we will work, for convenience, in $\mathbb{R}^{n+2}$ instead. Let $O(p)$ denote the symmetry group of $\mathbb{S}^{p-1} \subset \mathbb{R}^p$. Fix an integer $1 \le p \le n-1$ and suppose $\mathcal{C}$ is a smooth double cone with $\lambda[\mathcal{C}] < 2$ that has symmetry group $O(p+1) \times O(n-p+1)$. Typical examples are cones $\mathcal{C}_{n,p}$ over the families of minimal hypersurfaces in $\mathbb{S}^{n+1}$ given by
\begin{align*}
\mathcal{S}_{n,p} = \sqrt{\frac{p}{n}} \mathbb{S}^p \times \sqrt{\frac{n-p}{n}} \mathbb{S}^{n-p} \subset \mathbb{S}^{n+1}.
\end{align*}
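For the reader's convenience, here is the standard computation (not needed elsewhere; the choice of unit normal is ours) verifying that these links are minimal. Write a point of $a\,\mathbb{S}^p \times b\,\mathbb{S}^{n-p} \subset \mathbb{S}^{n+1}$, with $a^2 + b^2 = 1$, as $(a\theta, b\phi)$ and take the unit normal $\nu = (b\theta, -a\phi)$, which is tangent to $\mathbb{S}^{n+1}$. A unit-speed great circle in the first factor, $\gamma(t) = (a\theta(t), b\phi)$, has $\gamma'' = (-\theta/a, 0)$, so it contributes $\langle \gamma'', \nu\rangle = -\frac{b}{a}$ to the second fundamental form; likewise each direction in the second factor contributes $\frac{a}{b}$. Hence the mean curvature in $\mathbb{S}^{n+1}$ with respect to $\nu$ is
\begin{align*}
H = -p\,\frac{b}{a} + (n-p)\,\frac{a}{b},
\end{align*}
which vanishes precisely when $a^2 = \frac{p}{n}$ and $b^2 = \frac{n-p}{n}$, i.e. for $\mathcal{S}_{n,p}$.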
The cones $\mathcal{C}_{n,p}$ are known as Simons-type cones. An immediate consequence of \cref{simons-cone-thm} is:
\begin{cor}
Any smooth self-expander coming out of a Simons-type cone $\mathcal{C}_{n,p}$ inherits the $O(p+1) \times O(n-p+1)$-symmetry of $\mathcal{C}_{n,p}$.
\end{cor}
\begin{rem}
Similar results for minimal surfaces have been obtained by Mazet \cite{Mazet}, using the elliptic moving plane method.
\end{rem}
\begin{proof}[Proof of \cref{simons-cone-thm}]
Write $(x_1,\ldots,x_{p+1},y_1,\ldots,y_{n-p+1})$ for the standard coordinates on $\mathbb{R}^{n+2}$. WLOG we may assume $\mathcal{L}(\mathcal{C}) = \sigma_1 \times \sigma_2$ where $\sigma_1$ is rotationally symmetric across the $x_1$-axis and $\sigma_2$ is rotationally symmetric across the $y_1$-axis. Evidently showing the rotational symmetry in the $x$-coordinates is enough, as the identical argument works for the $y$-coordinates. It suffices to show that the reflection symmetry is preserved through all hyperplanes of the form
\begin{align*}
\sum_{i = 2}^{p+1} c_i x_i = 0,
\end{align*}
which, up to an ambient rotation in the $x$-coordinates, we may assume to be $\{x_{p+1} = 0\}$. Note that a cone with $O(p+1)\times O(n-p+1)$ symmetry takes the form (up to relabeling)
\begin{align*}
\sum_{i=1}^{p+1} a_i x_i^2 = \sum_{j=1}^{n-p+1} b_jy_j^2
\end{align*}
for suitable choices of coefficients $a_i$ and $b_j$. It is not hard to see that $\mathcal{C} \cap \{x_{p+1} > 0\}$ is a graph over $\{x_{p+1} = 0\}$ via
\begin{align*}
x_{p+1} = a_{p+1}^{-1/2}\left(\sum_{j=1}^{n-p+1} b_jy_j^2 - \sum_{i=1}^p a_ix_i^2\right)^{1/2}.
\end{align*}
Hence \cref{reflection-symmetry} applies and the desired reflection symmetry follows (of course, we have to put
\begin{align*}
\Pi^s = \{(x,x_{p+1},y) \in \mathbb{R}^{n+2} \mid x_{p+1} = s\} \times [0,\infty) \subset \mathbb{R}^{n+2} \times [0,\infty)
\end{align*}
and change the dimensions in the proofs accordingly).
\end{proof}
\section{Construction of the matching motion}
\label{construction-of-matching-motion}
In this section we use the ideas of Bernstein--Wang in \cite{BWTopologicalUniqueness} and \cite{BWTopology} to produce an immortal Brakke flow, starting from a cone $\mathcal{C}$ with $\lambda[\mathcal{C}] < 2$, which is not self-similarly expanding. This demonstrates that \cref{main-theorem} is not vacuous. \par
\subsection{Self-expanders}
\label{expander-preliminary}
Let us briefly summarize some basic facts about self-expanders. It is often helpful to consider the variational characterization of self-expanders. Formally, self-expanders are critical points of the functional:
\begin{align*}
E[\Sigma] = \int_\Sigma e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n
\end{align*}
and equation \cref{self-expander-equation} corresponds to the Euler-Lagrange equation of $E[\Sigma]$. We record the second variation of $E$. For a proof see for example Proposition 4.2 in \cite{BWSpace}.
\begin{thm}
Let $\{\phi_t\}_{t \in (-\varepsilon,\varepsilon)}$ be a compactly supported normal variation of $\Sigma$ with $\left.\frac{d\phi_t}{dt} \right|_{t = 0} = f\nu_\Sigma$, where $\nu_\Sigma$ is the outward unit normal. Then
\begin{align*}
\left.\frac{d^2}{dt^2}\right|_{t=0} E[\phi_t(\Sigma)] = -\int_{\Sigma} f L_\Sigma f \, e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n
\end{align*}
where $L_\Sigma$ is the stability operator of $\Sigma$ given by
\begin{align*}
L_\Sigma = \Delta_\Sigma + \frac{1}{2} x\cdot \nabla_\Sigma + \abs{A_\Sigma}^2 - \frac{1}{2}.
\end{align*}
\end{thm}
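For orientation, we also record the corresponding first variation (a formal computation of ours under the same conventions, writing $H_\Sigma$ for the mean curvature vector): for a compactly supported normal variation as above,
\begin{align*}
\left.\frac{d}{dt}\right|_{t=0} E[\phi_t(\Sigma)] = \int_\Sigma f \left( \frac{x\cdot\nu_\Sigma}{2} - H_\Sigma\cdot\nu_\Sigma \right) e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n,
\end{align*}
so critical points of $E$ are exactly the hypersurfaces satisfying $H_\Sigma = \frac{x^\perp}{2}$, i.e. the self-expanders.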
A real number $\mu \in \mathbb{R}$ is an eigenvalue for $-L_\Sigma$ if there is a function $u \in W^1_{\frac{1}{4}}(\Sigma) \setminus \{0\}$ such that $-L_\Sigma u = \mu u$, where
\begin{align*}
W^1_{\frac{1}{4}}(\Sigma) = \{f: \Sigma \to \mathbb{R} \mid \int_\Sigma (\abs{f}^2 + \abs{\nabla f}^2)e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n < \infty\}.
\end{align*}
The \textit{index} of a self-expander is the number of negative eigenvalues of $-L_\Sigma$, which is equal to
\begin{align*}
\sup \{\dim V \mid V \text{ linear subspace } \subset C_0^2(\Sigma), -\int_{\Sigma} f L_\Sigma f \le 0 \; \forall f \in V \setminus \{0\}\}.
\end{align*}
We say a self-expander is \textit{stable} if it has index zero. By Lemma 4.1 in \cite{BWIntegerDegree}, the operator $L_\Sigma$ is formally self-adjoint in $W^0_{\frac{1}{4}}(\Sigma)$, and $L_\Sigma$ has a discrete spectrum; that is, the eigenvalues of $-L_\Sigma$ can be ordered as $\mu_1 < \mu_2 < \cdots < \mu_n < \cdots$. Moreover, the space of eigenfunctions associated to the lowest eigenvalue $\mu_1$ is 1-dimensional, and any eigenfunction $f$ of $\mu_1$ has a sign. \par
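As a minimal illustration of these notions (our own example, using the $e^{\frac{\abs{x}^2}{4}}$-weighted quadratic form as above): let $\Sigma$ be a hyperplane through the origin. It is a self-expander, since $H_\Sigma = 0$ and $x^\perp = 0$ along $\Sigma$, and $\abs{A_\Sigma}^2 \equiv 0$, so $L_\Sigma = \Delta_\Sigma + \frac{1}{2}x\cdot\nabla_\Sigma - \frac{1}{2}$. Since $\Div_\Sigma(e^{\frac{\abs{x}^2}{4}}\nabla_\Sigma f) = e^{\frac{\abs{x}^2}{4}}(\Delta_\Sigma f + \frac{1}{2}x\cdot\nabla_\Sigma f)$ on $\Sigma$, integration by parts gives, for $f \in C_0^2(\Sigma)$ with $f \not\equiv 0$,
\begin{align*}
-\int_\Sigma f L_\Sigma f\, e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n = \int_\Sigma \left( \abs{\nabla_\Sigma f}^2 + \frac{1}{2}f^2 \right) e^{\frac{\abs{x}^2}{4}} d\mathcal{H}^n > 0,
\end{align*}
so hyperplanes through the origin are stable self-expanders (index zero).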
We need the following basic distance estimates of asymptotically conical self-expanders.
\begin{prop}
\label{expander-curvature-estimates}
Let $\mathcal{C}$ be a smooth cone. Suppose $\Sigma$ is a self-expander $C^{2,\alpha}$-asymptotic to $\mathcal{C}$, then there is $N > 0$ such that $\Sigma \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for $R > 1$.
\end{prop}
\begin{proof}
Since $\Sigma$ is smooth and $\rho \Sigma \to \mathcal{C}$ in $C^{2,\alpha}_{loc}(\mathbb{R}^{n+1} \setminus \{0\})$, for sufficiently small $\rho$ we have on $\rho\Sigma \cap (B_{1}(0) \setminus B_{1/4}(0))$
\begin{align*}
\abs{A_{\rho\Sigma}} \le C.
\end{align*}
Hence for $x \in \Sigma$ with $\abs{x}$ sufficiently large depending on the above,
\begin{align*}
\abs{A(x)} = \rho \abs{A_{\rho \Sigma}(\rho x)} \le C\abs{x}^{-1}
\end{align*}
if we pick $\rho = \frac{1}{2}\abs{x}^{-1}$ so that $\abs{\rho x} = \frac{1}{2}$. This proves that $\abs{A(x)} \le C \abs{x}^{-1}$ for all $x \in \Sigma$ (after enlarging $C$ to also account for the compact part of $\Sigma$).
Together with the self-expander equation, these imply that there is $C > 0$ with
\begin{align*}
\dist(x, \mathcal{C}) < C \abs{x}^{-1}
\end{align*}
for $x \in \Sigma \setminus B_1(0)$. Finally, by scaling, it follows that there is $N$ such that, for $R \ge 1$,
\begin{align*}
\Sigma \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C}). &\qedhere
\end{align*}
\end{proof}
\begin{rem}
Note that similar to the above we can also estimate the derivatives of $A$:
\begin{align*}
\abs{\nabla^m A} \le C \abs{x}^{-m-1},
\end{align*}
provided the cone is sufficiently regular.
\end{rem}
\subsection{Mean curvature flow with boundary}
\label{prelim-mcf-boundary}
Since we are dealing with noncompact initial hypersurfaces, many existence theorems (in particular the unit density theorem 11.4 in \cite{Ilmanen}) do not apply directly in our case. To account for this we will utilize White's recent work on MCF with boundary \cite{WhiteMCFBoundary}, which is a generalization of the Brakke flow in \cref{brakkeflow}. For simplicity we will only work in the ambient manifold $\overbar{B_R(0)}$. \par
Given a hypersurface $\Sigma$ with boundary $\Gamma \subset \partial B_R(0)$ in an open set $U \subset \overbar{B_R(0)}$ and an integral $n$-rectifiable Radon measure $\mu$, the first variation formula becomes:
\begin{align*}
\int \Div_{V(\mu)} X d\mu = -\int H \cdot X d\mu + \int \nu_\mu \cdot X d(\mathcal{H}^{n-1} \llcorner \Gamma)
\end{align*}
for any compactly supported $C^1$ vector field $X$, where $H$ is the generalized mean curvature vector and $\nu$ the approximating normal vector to $\Gamma$. By an \textit{integral $n$-Brakke flow with boundary $\Gamma$} in $U$ we mean a family of integral $n$-rectifiable Radon measures $\mathcal{M} = \{\mu_t\}_{t \in I}$ satisfying the items (a), (b) and (c) in \cref{brakkeflow} with the extra condition:
\begin{enumerate}[label = (\alph*)]
\setcounter{enumi}{3}
\item For a.e. $t \in I$ the normal vector satisfies $\abs{\nu_{\mu_t}} \le 1$ $\mathcal{H}^{n-1}$-a.e on $\Gamma$.
\end{enumerate}
For simplicity we will refer to the above as Brakke flow with boundary $\Gamma$. This is unambiguous since, by item (d), the boundary $\Gamma$ stays unchanged under the Brakke flow. \par
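As a simple sanity check (our own example, assuming that items (a)--(c) of \cref{brakkeflow} are the usual integrality, mass bound and Brakke inequality requirements): let $\Sigma = \{x_{n+1} = 0\} \cap \overbar{B_R(0)}$ and $\Gamma = \{x_{n+1} = 0\} \cap \partial B_R(0)$. The static family $\mu_t = \mathcal{H}^n \llcorner \Sigma$, $t \ge 0$, is a Brakke flow with boundary $\Gamma$: the Brakke inequality is trivial for a static minimal disk, and in the first variation formula above one has $H = 0$ with $\nu_\mu$ a unit conormal of $\Sigma$ along $\Gamma$, so $\abs{\nu_{\mu_t}} = 1$ and item (d) holds.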
As before, a Brakke flow with boundary $\Gamma$ is unit-regular if $\mathcal{M}$ is smooth, with no sudden mass loss, near any spacetime point $X = (x,t)$ at which some tangent flow is a multiplicity one plane. $\mathcal{M}$ is cyclic if the associated mod-2 flat chain $[V(\mu_t)]$ has boundary equal to $\Gamma$. By the work of White \cite{WhiteMCFBoundary}, Brakke flows with boundary $\Gamma$ produced by elliptic regularization are unit-regular and cyclic.
\begin{thm}[Theorem 1.1, Theorem 14.1 of \cite{WhiteMCFBoundary}]
\label{brakkeflow-with-boundary}
Let $\Sigma \subset \overbar{B_R(0)}$ be a hypersurface with boundary $\Gamma \subset \partial B_R(0)$. There exists a unit-regular and cyclic Brakke flow with boundary $\Gamma$, $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ with $\mu_0 = \mathcal{H}^{n} \llcorner \Sigma$.
\end{thm}
Similarly, a Brakke flow with boundary need not be unique, but White's theorem says that a unit-regular and cyclic one always exists. White proved in addition a strong boundary regularity theorem (Theorem 17.1 of \cite{WhiteMCFBoundary}) in the codimension one case, ruling out a scenario where interior singularities accumulate at a boundary singularity. Hence the boundary $\Gamma$ truly remains unchanged in the classical sense.
\subsection{Level set flows and matching motions}
The final ingredient we need is the set-theoretic generalization of MCF, initially developed by \cite{ChenGigaGoto} and \cite{EvansSpruck} as viscosity solutions to certain PDEs. Given a closed set $\Gamma_0 \subset \mathbb{R}^{n+1}$, we choose any uniformly continuous function $u_0$ such that $\Gamma_0 = \{x \in \mathbb{R}^{n+1}\mid u_0(x) = 0\}$. There exists a unique $u \in C(\mathbb{R}^{n+1} \times [0,\infty))$ which is a viscosity solution to the problem
\begin{align*}
\begin{cases}
u_t = \sum_{i,j=1}^{n+1} \left(\delta_{ij} - \frac{u_{x_i}u_{x_j}}{\abs{\nabla u}^2}\right)u_{x_ix_j} & \text{ on } \mathbb{R}^{n+1} \times [0,\infty) \\
u(x,0) = u_0(x) & \text{ on } \mathbb{R}^{n+1} \times \{0\}.
\end{cases}
\end{align*}
Let $\Gamma_t = \{x \in \mathbb{R}^{n+1} \mid u(x,t) = 0\}$. We call $\mathcal{K} = \bigcup_{t \in [0,\infty)} \Gamma_t \times \{t\}$ the \textit{level set flow} of $\Gamma_0$. \par
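As a quick illustration (the standard shrinking sphere example; we ignore the degeneracy at $x = 0$ and the growth of $u_0$ at infinity, which can be removed by truncating $u_0$ outside a large ball): for $\Gamma_0 = \partial B_{R_0}(0)$ one may take $u_0(x) = \abs{x}^2 - R_0^2$, and then
\begin{align*}
u(x,t) = \abs{x}^2 + 2nt - R_0^2
\end{align*}
solves the equation above, since $u_t = 2n$ while $\sum_{i,j=1}^{n+1}\left(\delta_{ij} - \frac{x_ix_j}{\abs{x}^2}\right)2\delta_{ij} = 2\left((n+1) - 1\right) = 2n$. Hence $\Gamma_t = \partial B_{\sqrt{R_0^2 - 2nt}}(0)$ for $t < \frac{R_0^2}{2n}$ and $\Gamma_t = \emptyset$ afterwards, recovering the classical shrinking sphere and showing how the level set flow continues past the extinction time. \par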
Since the viscosity solution is unique, the level set flow is also unique for given initial data. However, level set flows might fatten, i.e. $\mathcal{K}$ might develop a non-empty interior (for example the figure eight fattens immediately). Formally, the level set flow of $\Gamma_0$ fattens if $\mathcal{H}^{n+1}(\Gamma_t) > 0$ for some $t > 0$. A theorem of Ilmanen (11.3 in \cite{Ilmanen}) shows that the fattening phenomenon is not generic and can therefore be perturbed away. \par
Alternatively, it has been observed that the level set flow can be characterized as the ``biggest flow'' of a closed set satisfying the avoidance principle. There is a rich literature on this more geometrically intuitive way of handling set flows and we refer to \cite{WhiteTopology}, \cite{HershkovitsWhite} and \cite{BCW} for more information on this approach. \par
Ilmanen \cite{Ilmanen} combined ideas from Brakke flows and level set flows and introduced the notion of a matching motion, which turns out to be the suitable notion for our purposes. Let $I_n(U)$ be the set of integral $n$-currents in $U$.
\begin{defn}[8.1, 9.1 of Ilmanen \cite{Ilmanen}]
Let $\mathcal{K} \in I_{n+1}(\mathbb{R}^{n+1} \times \mathbb{R}^+)$, $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ be a Brakke flow and $\Gamma_0 \in I_n(\mathbb{R}^{n+1})$ with finite mass and empty boundary. A pair $(\mathcal{K},\mathcal{M})$ is an \textit{enhanced motion} with initial data $\Gamma_0$ if
\begin{enumerate}[label = (\alph*)]
\item $\partial \mathcal{K} = \Gamma_0$ and $\mathcal{K}_t \in I_n(\mathbb{R}^{n+1})$ for a.e. $t \ge 0$;
\item $\partial \mathcal{K}_t = 0$ and $t \to \mathcal{K}_t$ is continuous in the flat topology for $t \ge 0$;
\item $\mu_0 = \mu_{\Gamma_0}$, $\mathbb{M}[\mu_t] \le \mathbb{M}[\mu_0]$ and $\mu_{\mathcal{K}_t} \le \mu_t$ for a.e. $t \ge 0$.
\end{enumerate}
If the pair $(\mathcal{K},\mathcal{M})$ further satisfies
\begin{enumerate}[label = (\alph*),resume]
\item $\mu_t = \mu_{\mathcal{K}_t} = \mu_{V(\mu_t)}$ for $t \ge 0$,
\end{enumerate}
then we call it a \textit{matching motion}.
\end{defn}
Note that in the above definition we have already abused some notation. Indeed, in our applications $\mathcal{K}$ is going to be the level set flow from $\Gamma_0$. A fundamental result of Ilmanen (Section 12 of \cite{Ilmanen}) shows that a nonfattening level set flow is a matching motion, which justifies our abuse of notation here. \par
We will use the following result of S. Wang \cite{Shengwen}, which asserts that the limit of low entropy matching motions is a matching motion. Recall that a sequence of Brakke flows $\mathcal{M}_i = \{\mu_t^i\}_{t \in [0,\infty)}$ converges to $\mathcal{M} = \{\mu_t\}_{t \in [0,\infty)}$ if $\mu_t^i \to \mu_t$ as Radon measures and, after possibly passing to a subsequence, $V(\mu_t^i) \to V(\mu_t)$ as varifolds for a.e.\ $t \in [0,\infty)$.
\begin{prop}[Theorem 3.5 of \cite{Shengwen}]
\label{limit-matching-motion}
Let $(\mathcal{K}_i,\mathcal{M}_i)$ be a sequence of matching motions converging to an enhanced motion $(\mathcal{K},\mathcal{M})$ with $\lambda[\mathcal{M}] < 2$, then $(\mathcal{K},\mathcal{M})$ is a matching motion.
\end{prop}
\begin{rem}
The theorem fails without the entropy assumption: the set-theoretic limit of a sequence of grim reapers (each of which has entropy 2) is two lines, but the limit in the sense of currents is empty (as the two lines cancel each other).
\end{rem}
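For concreteness (a standard fact, recorded here in our own normalization): the grim reaper referred to above is the translating solution of curve shortening flow given, up to rigid motions and scaling, by the graphs
\begin{align*}
v(x,t) = t - \log\cos x, \qquad \abs{x} < \frac{\pi}{2},
\end{align*}
which indeed translates with unit speed, since $v_x = \tan x$, $v_{xx} = \sec^2 x$ and hence $\frac{v_{xx}}{1 + v_x^2} = 1 = v_t$.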
\subsection{Construction of the smooth flow}
\label{construction-of-smooth-flow}
Before we construct the weak flow, we briefly summarize the construction of a Morse flow line starting from an unstable self-expander \cite{BWTopologicalUniqueness}. For the reader's convenience we have also included a summary of the theorems, with some proofs, in \cref{smooth-construction-appendix}. \par
Recall that for an unstable self-expander $\Sigma$, the stability operator $L_\Sigma$ has discrete spectrum and eigenfunctions to the lowest eigenvalue $\mu_1 < 0$ have a sign. Let $f$ be the unique positive eigenfunction of $\mu_1$ with $\norm{f}_{W^0_{\frac{1}{4}}(\Sigma)} = 1$. For $\varepsilon > 0$ we form the perturbations of $\Sigma$ by $f$ given by
\begin{align}
\label{sigma-epsilon}
\Sigma^\varepsilon = \Psi^\varepsilon(\Sigma) \text{ where } \Psi^\varepsilon(x) = x + \varepsilon f(x) \nu_\Sigma.
\end{align}
By \cref{eigenfunction-decay-estimates}, there is $N> 0$ depending on $\varepsilon$ such that $\Sigma^\varepsilon \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for all $R > 1$. By \cref{expander-mean-convexity}, $\Sigma^\varepsilon$ is expander mean-convex (see \cref{smooth-construction-appendix} for the definition) for $\varepsilon$ sufficiently small. By the existence theorem \cref{existence-theorem} there is a unique MCF starting from an asymptotically conical, expander mean-convex hypersurface, and the expander-mean-convexity is preserved along the flow until the first singular time. Applying \cref{existence-theorem} to $\Sigma^\varepsilon$, we get for each $\varepsilon$ a unique MCF $\mathcal{M}^\varepsilon = \{\Sigma^\varepsilon_t\}_{t \in [1,T^\varepsilon)}$ with $\Sigma^\varepsilon_1 = \Sigma^\varepsilon$. Moreover, by \cref{eigenfunction-decay-estimates} and \cref{expander-curvature-estimates}, $\Sigma^\varepsilon$ has uniformly bounded curvature, so the interior estimates of Ecker--Huisken \cite{EHInterior} imply that the interval of existence is independent of $\varepsilon$. Moreover, at the first singular time $T^\varepsilon$,
\begin{align*}
\lim_{t \to T^\varepsilon} \sup_{\Sigma^\varepsilon_t \cap B_{N'\sqrt{t}}} \abs{A_{\Sigma_t^\varepsilon}} = \infty
\end{align*}
for some constant $N' > 0$. \par
\subsection{Construction of the Brakke flow}
We now turn to our construction of the weak flow. One of the methods to construct a weak flow is to cap off the hypersurfaces $\Sigma^\varepsilon \cap \overbar{B_R(0)}$ smoothly and take a sequence of Brakke flows (or weak set flows) starting from these capped-off hypersurfaces. However, one has to be careful with the above method when working with entropy bounds, since the cap might increase the entropy. On the other hand, the capping method has the advantage that, by a suitable choice of the caps, the expander mean-convexity is preserved through the cap and hence in the limits (here with weak set flows one has to interpret the mean-convexity in the weak sense as well). Such a construction for self-shrinkers can be found in Section 7 of \cite{CCMSGeneric} and, for self-expanders, in \cite{BCW}. \par
Here we use an alternative approach of Brakke flow with boundary. This construction is less technical and respects the entropy well. However, it is also less clear how expander mean-convexity is preserved through the flows and hence in the limit. This is not needed in our case because we are only concerned with existence. It is an interesting question to determine whether the flows constructed from the two methods above agree (they are, of course, the same notion when smooth, so the point is to determine how each of them flows past singularities). We believe this should be the case with the entropy bound $\lambda[\mathcal{C}] < 2$, but the general picture might be less clear. \par
The following proposition is the weak flow analogue of the smooth flow produced in \cref{existence-theorem}.
\begin{prop}
\label{prop32}
Let $\mathcal{C}, \Sigma$ be as in \cref{nontrivial-flow-lines}, and $\Sigma^\varepsilon$ be as in \cref{sigma-epsilon}. There exists $\varepsilon_0 > 0$ such that, for $\abs{\varepsilon} < \varepsilon_0$, there exists an immortal matching motion $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ where $\mathcal{K}^\varepsilon = \{\Gamma^\varepsilon_t\}_{t \in [1,\infty)}$ and $\mathcal{M}^\varepsilon = \{\mu^\varepsilon_t\}_{t \in [1,\infty)}$ such that $\Gamma^\varepsilon_1 = \Sigma^\varepsilon$ and $\mu^\varepsilon_1 = \mathcal{H}^{n} \llcorner \Sigma^\varepsilon$. Moreover the flow $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ agrees with the smooth flow starting from $\Sigma^\varepsilon$ for $t \in [1,T^\varepsilon)$.
\end{prop}
\begin{proof}
Suppose $\varepsilon > 0$, as the argument for $\varepsilon < 0$ is identical. Let $\Sigma^{\varepsilon,R} = \Sigma^\varepsilon \cap \overbar{B_R(0)}$ be the hypersurface in $\overbar{B_R(0)}$ with boundary $\Sigma^{\varepsilon} \cap \partial B_R(0)$. By \cref{brakkeflow-with-boundary}, there exists an unit-regular and cyclic Brakke flow with boundary $\mathcal{M}^{\varepsilon,R} = \{\mu^{\varepsilon,R}_t\}_{t \in [0,\infty)}$ starting from $\Sigma^{\varepsilon,R}$. The flow $\mathcal{M}^{\varepsilon,R} \llcorner B_{R/2}(0)$ is therefore a (usual) Brakke flow inside $B_{R/2}(0)$. Since nonfattening is generic, we may choose a sequence $R_i \to \infty$ such that the associated level set flow of $\mathcal{M}^{\varepsilon,R_i} \llcorner B_{R_i/2}(0)$ is nonfattening. This produces a sequence of matching motions
\begin{align*}
(\mathcal{K}^{\varepsilon,R_i},\mathcal{M}^{\varepsilon,R_i} \llcorner B_{R_i/2}(0)).
\end{align*}
By compactness of Brakke flow we may now pass to a subsequence $R_i \to \infty$ to obtain a limiting enhanced motion $(\mathcal{K}^{\varepsilon},\mathcal{M}^\varepsilon)$ in $\mathbb{R}^{n+1}$ starting from $\Sigma^\varepsilon$. \par
By Lemma 3.5 of \cite{BWSmoothCompactness}, $\lambda[\Sigma] = \lambda[\mathcal{C}] < 2$, and by Lemma 6.2 of \cite{BWTopologicalUniqueness}, for every $\delta > 0$ there exists $\varepsilon_0$ such that $\abs{\lambda[\Sigma^\varepsilon] - \lambda[\Sigma]} < \delta$ for $\varepsilon < \varepsilon_0$. Choosing $\delta$ small enough so that $\lambda[\mathcal{C}] + \delta < 2$ and $\varepsilon_0$ small according to $\delta$ ensures that $\lambda[\mathcal{M}^\varepsilon] = \lambda[\Sigma^\varepsilon] < 2$, and so, in view of \cref{limit-matching-motion}, $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ is matching. \par
Finally, using the argument in \cref{interior-estimates} with pseudolocality of MCF replaced by that of Brakke flow there exist $\delta > 0$ and $N' > 0$ such that $\supp \mu^\varepsilon_t \setminus B_{N'\sqrt{t}}(0) = \Sigma^\varepsilon_t \setminus B_{N'\sqrt{t}}(0)$ for $t \in [1,1+\delta]$. Since $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ is matching, $\Gamma_t^\varepsilon \setminus B_{N'\sqrt{t}}(0) = \Sigma^\varepsilon_t \setminus B_{N'\sqrt{t}}(0)$ as well. It follows from uniqueness of level set flow that $\Gamma_t^\varepsilon$ agrees with $\Sigma_t^\varepsilon$. Using the matching property again we infer that $\supp \mu_t^\varepsilon = \Sigma_t^\varepsilon$. It is easy to see that these flow agree up to the first singular time of $\Sigma^\varepsilon_t$ (i.e. $T^\varepsilon$).
\end{proof}
Let $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ be the matching motions constructed as above. We can once again take a limit as $\varepsilon \to 0^+$ to obtain a limiting enhanced motion $(\mathcal{K},\mathcal{M})$ where $\mathcal{K} = \{\Gamma_t\}_{t \in [1,\infty)}$ and $\mathcal{M} = \{\mu_t\}_{t \in [1,\infty)}$ such that $\Gamma_1 = \Sigma$ and $\mu_1 = \mathcal{H}^n \llcorner \Sigma$. However, this limit is not enough to prove \cref{nontrivial-flow-lines} as we will only recover the flow of the self-expander $\Sigma$ (as in the case of the smooth flow). Moreover, the limit does not attain the cone as its initial data. We need to translate the flow properly so that the starting time can be extended back to 0, and argue that we do not get the original flow of the self-expander back in the process.
\begin{proof}[Proof of \cref{nontrivial-flow-lines}] Again WLOG suppose $\varepsilon > 0$. Let $(\mathcal{K}^\varepsilon,\mathcal{M}^\varepsilon)$ be the matching motions from \cref{prop32}. It is convenient to work with the following rescaled MCF:
\begin{align*}
\tilde{\mu}^{\varepsilon}_s = \mu^{\varepsilon}_t \circ t^{-\frac{1}{2}} \text{ and } \tilde{\Gamma}^{\varepsilon}_s = \Gamma^\varepsilon_t, \;\; s = \log t,
\end{align*}
and let $(\tilde{\mathcal{K}}^\varepsilon, \tilde{\mathcal{M}}^\varepsilon)$ denote the rescaled flow. Under the rescaling, a smooth MCF will satisfy the rescaled MCF equation
\begin{align*}
\left(\frac{\partial x}{\partial s}\right)^\perp = H_{\Sigma_s} - \frac{x^\perp}{2},
\end{align*}
which has self-expanders as stationary solutions. The flow $\tilde{\mathcal{M}}^\varepsilon$ is defined on the time interval $[0,\infty)$ with $\supp \tilde{\mu}^\varepsilon_0 = \Sigma^\varepsilon$. Since $\Sigma$ is unstable, the lowest eigenvalue $\mu_1$ of $-L_\Sigma$ satisfies $\mu_1 < \frac{1}{2}$, and consequently $\mathcal{M}^{\varepsilon}$ ``flows faster'' than the parabolic rescaling $\sqrt{t}$. To be precise, for the flow of the self-expander $\Sigma$ we have
\begin{align*}
\lim_{\lambda \to 0^+} \dist(\Sigma, \lambda \Sigma_{\lambda^{-2}}) = \lim_{\lambda \to 0^+} \dist(\Sigma, \lambda \sqrt{\lambda^{-2}} \Sigma) = 0
\end{align*}
but since we are perturbing $\Sigma$ by its eigenfunction whose eigenvalue is below $\frac{1}{2}$, we must have
\begin{align*}
\lim_{\lambda \to 0^+} \dist(\Sigma^\varepsilon, \lambda \supp \mu^\varepsilon_{\lambda^{-2}}) = \infty.
\end{align*}
In the rescaled setting, the above means that $\tilde{\mathcal{M}}^\varepsilon$ moves away from $\Sigma$ at an exponential rate (the exact rate depends on $\mu_1$). As such, we can find a sequence of times $s_\varepsilon$ with
\begin{align*}
d(\supp \tilde{\mu}^\varepsilon_{s_\varepsilon},x_0) = \gamma
\end{align*}
for some fixed point $x_0 \in \Sigma$ and positive constant $\gamma$. On the other hand, as $\varepsilon \to 0^+$, the rescaled flows $\tilde{\mathcal{M}}^\varepsilon$ converge to the static rescaled flow of $\Sigma$, so in fact $s_\varepsilon \to \infty$ as $\varepsilon \to 0^+$, i.e. one has to go further in time to reach a distance $\gamma$ away from $x_0 \in \Sigma$. This fact allows us to time translate $\tilde{\mathcal{M}}^\varepsilon$ by $-s_\varepsilon$ to obtain a sequence of rescaled MCFs $\tilde{\mathcal{M}}^{\varepsilon,s_\varepsilon} = \{\tilde{\mu}^{\varepsilon,s_\varepsilon}_s\}_{s \in [-s_\varepsilon,\infty)}$ such that
\begin{align*}
\supp \tilde{\mu}^{\varepsilon,s_\varepsilon}_{-s_\varepsilon} = \Sigma^\varepsilon \text{ and } d( \supp \tilde{\mu}^{\varepsilon,s_\varepsilon}_0, x_0) = \gamma.
\end{align*}
By compactness of Brakke flows we can take $\varepsilon \to 0^+$ to obtain a limiting flow $\tilde{\mathcal{M}}$ defined on $(-\infty,\infty)$. It is easy to see that $\tilde{\mathcal{M}}$ is not the flow of $\Sigma$, as $d(\supp \tilde{\mu}_0, x_0) \ge \gamma$. Moreover, since $\Sigma$ is asymptotically conical and $\Sigma^\varepsilon \to \Sigma$ smoothly as $\varepsilon \to 0$, it follows that
\begin{align*}
\lim_{s \to -\infty} \supp \tilde{\mu}_s = \lim_{\varepsilon \to 0} \supp \tilde{\mu}_{-s_\varepsilon}^{\varepsilon,s_\varepsilon} = \lim_{\varepsilon \to 0} \Sigma^\varepsilon = \Sigma.
\end{align*}
In view of the rescaling, this proves that the flow achieves $\mathcal{C}$ as the initial data. \par
Since $\tilde{\mu}_s$ is integral, it follows from the strong maximum principle for varifolds \cite{SolomonWhite} that $\lim_{s \to -\infty} \tilde{\mu}_s = k \mathcal{H}^n \llcorner \Sigma$ for some positive integer $k$. Since $\lambda[\mathcal{C}] < 2$, $k = 1$, so the desired regularity follows from the Brakke regularity theorem.
\end{proof}
\subsection{An example of singularity formation}
To conclude the section, we give a sufficient condition for the existence of a flow coming out of $\mathcal{C}$ that has a singularity. The proof makes heavy use of the structure theory of self-expanders developed by Bernstein and Wang in a series of papers starting from \cite{BWSpace}. It would be interesting to see if a simpler proof exists.
\begin{prop}
\label{disconnection}
Suppose $\mathcal{C} \subset \mathbb{R}^{n+1}$, $2 \le n \le 6$, is a smooth double cone with $\lambda[\mathcal{C}] < 2$ and that the two connected components of $\mathcal{L}(\mathcal{C})$ are graphs over some (fixed) hyperplane. Suppose that there is a connected self-expander asymptotic to $\mathcal{C}$. Then there exists an integral Brakke flow coming out of $\mathcal{C}$ that has a singularity in finite time.
\end{prop}
\begin{rem}
We note that the above is consistent with the topological uniqueness result of \cite{BWTopologicalUniqueness}. Indeed, by Proposition 5.6 of \cite{BWTopologicalUniqueness}, if $\lambda[\mathcal{C}] < \lambda[\mathbb{S}^{n-1} \times \mathbb{R}]$, the flow produced by \cref{nontrivial-flow-lines} is smooth for all time. On the other hand, any such double cone will not have a connected self-expander.
\end{rem}
\begin{proof}
Let $\sigma = \mathcal{L}(\mathcal{C})$, and let $W$ be the connected component of $\mathbb{S}^n \setminus \sigma$ lying between the two connected components of $\sigma$. By Corollary 1.2 of \cite{BWSpace}, the set of generic cones (in the sense that no self-expander $C^{2}$-asymptotic to the cone admits a nontrivial Jacobi field fixing the cone at infinity) whose links lie in $W$ is dense near $\mathcal{C}$. These facts allow us to take a sequence of $C^{2,\alpha}$-hypersurfaces $\sigma_i$ in $\mathbb{S}^{n}$ such that
\begin{itemize}
\item $\sigma_i \to \sigma$ in $C^{2,\alpha}(\mathbb{S}^{n})$ as $i \to \infty$;
\item $\mathcal{C}_i$ is a generic, smooth double cone for all $i$, where $\mathcal{C}_i$ is the cone over $\sigma_i$;
\item $\lambda[\mathcal{C}_i] < 2$ for sufficiently large $i$, by Lemma 6.2 of \cite{BWTopologicalUniqueness}.
\end{itemize}
From the above we immediately see that there exists a unique disconnected, stable self-expander $\Gamma_i$ $C^{2,\alpha}$-asymptotic to $\mathcal{C}_i$ (by evolution of entire graph \cite{EHEntireGraph}). We also see that $\mathcal{C}_i \subset \Omega$, where $\Omega$ is the connected component of $\mathbb{R}^{n+1} \setminus \mathcal{C}$ that contains $W$. Denote by $\Sigma_0$ the connected self-expander asymptotic to $\mathcal{C}$. Using a direct method with $\Sigma_0$ as the barrier, similar to Lemma 8.2 of \cite{BWIntegerDegree}, we can find a connected self-expander asymptotic to $\mathcal{C}_i$ in $\Omega'$, where $\Omega'$ is the connected component of $\mathbb{R}^{n+1} \setminus \Sigma_0$ such that the outward unit normal of $\mathcal{C}$ points into $\Omega'$.
\par
Since there exists a unique disconnected self-expander $\Gamma_i$ asymptotic to $\mathcal{C}_i$, by the partial ordering of self-expanders asymptotic to a fixed cone (Theorem 4.1 of \cite{BWTopologicalUniqueness}), we can pick an innermost connected self-expander $\Sigma_i$ $C^{2,\alpha}$-asymptotic to $\mathcal{C}_i$ (i.e. pick any $\Sigma_i$ such that the only self-expander lying on the inside of $\Sigma_i$ is the disconnected $\Gamma_i$ - note that $\Sigma_i$ might not be unique). We claim that $\Sigma_i$ is unstable. If not, the mountain pass theorem (Corollary 1.2 \cite{BWMountainPass}, this requires $2 \le n \le 6$) and the genericity of $\mathcal{C}_i$ imply the existence of an unstable self-expander $\Sigma'$ lying between $\Sigma_i$ and $\Gamma_i$. $\Sigma'$ must then be connected, but this contradicts the partial ordering. \par
Since $\Sigma_i$ is unstable and $\lambda[\mathcal{C}_i] < 2$, we can produce using \cref{nontrivial-flow-lines} an integral Brakke flow $\mathcal{M}^i = \{\mu^i_t\}_{t \in (0,\infty)}$ that moves inwards initially (by expander mean convexity) and satisfies $\lim_{t \to 0} \mu^i_t = \mathcal{H}^n \llcorner \mathcal{C}_i$. Suppose for a contradiction that $\mathcal{M}^i$ is smooth for all $t$, then the flow is expander mean convex (in the classical sense) and moves inwards for all time. Moreover, using an almost identical argument as in Proposition 5.1(3) of \cite{BWTopologicalUniqueness}, the rescaled flow (rescaling as in the proof of \cref{nontrivial-flow-lines}) $\tilde{\mathcal{M}}^i$ converges as $s \to \infty$ to a smooth, stable self-expander asymptotic to $\mathcal{C}_i$, which must lie inside $\Sigma_i$. Since $\Sigma_i$ is an innermost connected self-expander, the stable limit must be $\Gamma_i$ which is disconnected, a contradiction. \par
Now let $s^i$ denote the first singular time of the rescaled flows $\tilde{\mathcal{M}}^i$, and time translate $\tilde{\mathcal{M}}^{i}$ by $-s^i$ to obtain rescaled flows $\tilde{\mathcal{M}}^{i,s^i}$ with a singularity at time $0$. By compactness of MCF we obtain a rescaled flow $\tilde{\mathcal{M}}$ such that $\tilde{\mathcal{M}}^{i,s^i} \to \tilde{\mathcal{M}}$ subsequentially. Rescaling back we see that, by upper semicontinuity of the Gaussian density, $\mathcal{M}$ has its first singularity at time $t = 1$. Finally, we claim that $\mathcal{M}$ indeed comes out of the cone $\mathcal{C}$. As $\mathcal{M}$ is smooth on $(0,1)$, it is enough to show that
\begin{align}
\label{link-convergence}
\lim_{t \to 0} \supp {\mu}_t \cap \mathbb{S}^{n} = \sigma \text{ in } C^{2,\alpha}(\mathbb{S}^n).
\end{align}
Since each $\mathcal{M}^i$ attains $\mathcal{C}_i$ as initial data and $\mathcal{C}_i$ is $C^{2,\alpha}$-regular, we have
\begin{align*}
\lim_{t \to 0} \supp \mu_t^i \cap \mathbb{S}^{n} = \sigma_i \text{ in } C^{2,\alpha}(\mathbb{S}^{n}).
\end{align*}
As $\sigma_i \to \sigma$ in $C^{2,\alpha}(\mathbb{S}^n)$, by a diagonalization argument, we see that \cref{link-convergence} holds. This completes the proof.
\end{proof}
Assuming \cref{yao-conjecture}, the assumptions in \cref{disconnection} are satisfied by cones of the type \cref{good-cone}, given that the parameter $m$ is sufficiently small. In fact, numerical computations do confirm that the cones $x_1^2 = m^2(x_2^2 + x_3^2)$ for $m \le 1$ have entropy less than 2. Moreover, by \cite{AngenentIlmanenChopp}, there exists a connected self-expander for sufficiently small $m$. In these cases, combining the above with \cref{cylindrical-singularity}, we have the much stronger conclusion that any such cone has a potential evolution which disconnects at a cylindrical singularity.
\section{Self expanders with triple junctions}
\label{ode-appendix}
An important question in the study of self-expanders is to determine the number of self-expanders coming out of a given cone. A classical result of Ecker--Huisken \cite{EHEntireGraph} shows that there exists a unique self-expander coming out of a graphical cone. In general, however, Angenent--Ilmanen--Chopp \cite{AngenentIlmanenChopp} showed numerically that uniqueness fails for double cones. It is proved rigorously by Helmensdorfer \cite{Helmensdorfer} that there are at least three distinct smooth self-expanders asymptotic to a rotationally symmetric double cone of the form \cref{good-cone} provided the cone angle is sufficiently large (when the cone angle is small, a barrier argument shows that uniqueness indeed holds --- see Lemma 8.1 of \cite{BWIntegerDegree}). \par
In this section we prove a simple ODE result on the existence of two self-expanders with triple junctions for rotationally symmetric double cones with sufficiently large cone angle. The proof roughly follows the setup of Helmensdorfer \cite{Helmensdorfer}, although we do not need the clearing out lemma for MCF in the following analysis. This provides an example of a singular self-expander, and also illustrates that the cyclicity assumption in \cref{reflection-symmetry} is essential as tameness (\cref{tameness}) clearly fails for self-expanders with triple junction singularities. However, the examples constructed below are in fact still rotationally symmetric. \par
We consider cones $\mathcal{C}_m$ of the form \cref{good-cone}, where $m$ is the parameter therein. Observe that $\mathcal{C}_m$ has a rotational symmetry across the $x_1$-axis as well as a reflection symmetry across the $\{x_1 = 0\}$ hyperplane. \par
Assume the expander $\Sigma$ has a triple junction singularity at a point $(0,x_0)$. Imposing rotational symmetry on $\Sigma$ across the $x_1$-axis we may assume $x_0 = (a,0,\ldots,0)$. Note also that at a triple junction singularity, the tangent cone is stationary and is therefore the union of three half-lines meeting at angles of $2\pi/3$. These observations reduce the problem to finding a function $u: \mathbb{R}^+ \to \mathbb{R}$ satisfying the following ODE (written for the rotationally symmetric profile, where $r$ denotes the coordinate along the $x_1$-axis and $u(r)$ the distance to the axis):
\begin{align}
\label{expander-ode}
\frac{u_{rr}}{1+ u_r^2} - \frac{n-1}{u} + \frac{1}{2}r u_r - \frac{1}{2}u = 0,
\end{align}
with initial data $u(0) = a$ and $u'(0) = \frac{\sqrt{3}}{3}$. The solution is asymptotic to $\mathcal{C}_m$ if
\begin{align}
\label{cone-condition}
\lim_{r \to \infty} \frac{u(r)}{r} = \frac{1}{m}.
\end{align}
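For the reader's convenience, we sketch where \cref{expander-ode} comes from (our own computation; the choice of unit normal below is only for bookkeeping). Writing the rotationally symmetric hypersurface as $\{(r, u(r)\omega) \mid \omega \in \mathbb{S}^{n-1}\}$ with unit normal $\nu = \frac{(u_r, -\omega)}{\sqrt{1+u_r^2}}$, one computes
\begin{align*}
H\cdot\nu = -\frac{u_{rr}}{(1+u_r^2)^{3/2}} + \frac{n-1}{u\sqrt{1+u_r^2}}, \qquad \frac{x\cdot\nu}{2} = \frac{ru_r - u}{2\sqrt{1+u_r^2}},
\end{align*}
so the self-expander equation $H = \frac{x^\perp}{2}$ becomes, after multiplying through by $\sqrt{1+u_r^2}$ and rearranging, exactly \cref{expander-ode}.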
Of course, by the usual ODE existence and uniqueness theorem, the solution to the problem \cref{expander-ode} is unique with a given initial data $a$. First we show that any solution to \cref{expander-ode} is asymptotically conical.
\begin{prop}
\label{prop-c1}
For any $a > 0$, there is a (unique) $m = m(a)$ such that \cref{cone-condition} holds.
\end{prop}
\begin{proof}
First we prove $u > 0$ for all $r > 0$. Suppose for a contradiction that there is $r_0$ such that $u(r_0) < 0$. Since initially $u$ and $u_r$ are both positive, by the mean value theorem there must be a local maximum $r_1 \in (0,r_0)$ with $u(r_1) > 0$, but at such an $r_1$ we can use \cref{expander-ode} to get
\begin{align*}
u_{rr}(r_1) = \frac{n-1}{u(r_1)} + \frac{1}{2}u(r_1) > 0,
\end{align*}
a contradiction. So $u$ is indeed positive. By the above calculation, this implies that all critical points of $u$ are local minima. \par
To continue, let $\alpha(r) = \arctan(u/r)$. Differentiating, we obtain
\begin{align*}
\alpha'(r) = \frac{ru_r - u}{u^2 + r^2} \text{ and } \alpha''(r) = \frac{ru_{rr}}{u^2 + r^2} - \frac{(ru_r - u)(2uu_r + 2r)}{(u^2+r^2)^2}.
\end{align*}
At a critical point $r_0$ of $\alpha(r)$, we have $r_0u_r(r_0) - u(r_0)= 0$ and \cref{expander-ode} implies that $u_{rr}(r_0) = (1+u_r(r_0)^2)\frac{n-1}{u(r_0)} > 0$. Hence $\alpha''(r_0) = \frac{r_0u_{rr}(r_0)}{u(r_0)^2 + r_0^2} > 0$. Therefore all critical points of $\alpha(r)$ are strict local minima as well; in particular $\alpha$ has at most one critical point, and is therefore eventually monotone. Since $\alpha(r)$ is also bounded, $\lim_{r \to \infty} \alpha(r)$ exists. \par
It remains to show that $\lim_{r \to \infty} \alpha(r) \in (0,\frac{\pi}{2})$. If the limit is 0, then $u(r)/r \to 0$ and the MCF of $\Sigma$ is contained in the level set flow of the $x_1$-axis, which disappears immediately. This is impossible. If the limit is $\frac{\pi}{2}$, then $u(r)/r \to \infty$ and the MCF of $\Sigma$ is contained in the level set flow of the hyperplane $\{x_1 = 0\}$, which is static. This is again impossible by, say, the initial condition $u_r(0) = \frac{\sqrt{3}}{3}$.
\end{proof}
Knowing the above, our problem becomes essentially a shooting problem: given the cone $\mathcal{C}_m$, we wish to find the appropriate initial condition $a$ so that the solution to \cref{expander-ode} satisfies \cref{cone-condition}. Let $u^a(r)$ be the solution to \cref{expander-ode} with initial condition $u^a(0) = a$. With a slight abuse of notation (the quantity below is the asymptotic slope of $u^a$, i.e. the reciprocal of the parameter $m$ appearing in \cref{cone-condition}), consider it as a function of $a$:
\begin{align*}
m(a) = \lim_{r \to \infty} \frac{u^a(r)}{r} = \lim_{r \to \infty} u_r^a(r) \in (0,\infty).
\end{align*}
\begin{prop}
\label{continuity}
$m(a)$ is a continuous function on $(0,\infty)$.
\end{prop}
\begin{proof}
First we record that
\begin{align*}
u_{rr}(0) = \frac{4}{3} \left(\frac{n-1}{a} + \frac{1}{2}a\right) > 0.
\end{align*}
From the proof of \cref{prop-c1}, we see that a critical point of $u$ must be a local minimum, but since $u$ is initially increasing and smooth, there cannot be any critical point at all. So $u$ is strictly increasing and we deduce that $u_r > 0$ for all $r > 0$. On the other hand, l'H\^{o}pital's rule yields $\lim_{r \to \infty} u_r = \lim_{r \to \infty} \frac{u(r)}{r} = m(a)$. Going back to \cref{expander-ode} and taking the limit as $r \to \infty$ also yields $\lim_{r \to \infty} u_{rr} = 0$. \par
Fix an $a \in (0,\infty)$. Clearly $u^a \ge a$ and $u_r^{a}$ is bounded above. Therefore we may fix a constant $N$ with $\frac{1}{N} < a$ such that $\abs{a' - a} < \frac{1}{N}$ implies that
\begin{align}
\label{uniform-bound}
\frac{1}{u^{a'}} \le c
\end{align}
for some constant $c$ depending on $a$ and $N$. We compute
\begin{align}
\label{estimate-2}
\left(\frac{u}{r}\right)_r = \frac{1}{r}\left(u_r - \frac{u}{r}\right) = \frac{2}{r^2}\left(\frac{n-1}{u} - \frac{u_{rr}}{1 + (u_r)^2}\right).
\end{align}
For $\abs{a - a'} < \frac{1}{N}$ there are two cases. If $u_{rr}^{a'}$ is never zero, then it is always positive. This immediately gives the bound:
\begin{align}
\label{upper-bound}
\left(\frac{u^{a'}}{r}\right)_r \le \frac{c}{r^2}.
\end{align}
If $u_{rr}^{a'}(r_0) = 0$ at some point $r_0$, differentiating \cref{expander-ode} once we get
\begin{align}
\label{urrr}
\frac{1}{1+u_r^2}\left(u_{rrr} - \frac{2u_r(u_{rr})^2}{1+u_r^2}\right) + \frac{n-1}{u^2} u_r + \frac{1}{2}ru_{rr} = 0.
\end{align}
From this we immediately see that when $u_{rr}^{a'}(r_0) = 0$,
\begin{align*}
u_{rrr}^{a'}(r_0) = -(1+(u_{r}^{a'})^2)\frac{n-1}{(u^{a'})^2}\,u_r^{a'} < 0,
\end{align*}
so every critical point of $u_r^{a'}$ is a strict local maximum, and hence there can be at most one of them. Therefore $r_0$ is the only zero of $u_{rr}^{a'}$. In this case, we see from \cref{urrr} that, at any negative local minimum of $u_{rr}^{a'}$, we have
\begin{align*}
\frac{2u_r^{a'}}{(1 + (u_r^{a'})^2)^2} (u_{rr}^{a'})^2 \le \frac{(n-1)u_r^{a'}}{(u^{a'})^2} \implies \frac{ (u_{rr}^{a'})^2}{(1 + (u_r^{a'})^2)^2} \le \frac{n-1}{2(u^{a'})^2} \le c,
\end{align*}
where we used \cref{uniform-bound}. Going back to \cref{estimate-2}, this yields the same type of uniform upper bound as \cref{upper-bound} (up to increasing $c$). Therefore we conclude that \cref{upper-bound} holds for all $\abs{a - a'} < \frac{1}{N}$. \par
Integrating \cref{upper-bound} from $r$ to $\infty$, we obtain the estimate:
\begin{align*}
m(a') - \frac{u^{a'}(r)}{r} < \frac{c}{r}, \; \; \abs{a' - a} < \frac{1}{N}.
\end{align*}
Now given $\varepsilon > 0$ we can pick $r_0 > 0$ such that $c/r_0 < \varepsilon/3$ and $\delta$ so small that $\abs{a' - a} < \delta$ implies that $\abs{u^{a'} - u^a} < \varepsilon/3$ on $(0,r_0]$ (this follows from continuous dependence on initial data as we are now in a compact set). Using the triangle inequality we get that
\begin{align*}
\abs{m(a) - m(a')} \le \abs{m(a) - \frac{u^a(r_0)}{r_0}} + \abs{\frac{u^a(r_0)}{r_0} - \frac{u^{a'}(r_0)}{r_0}} + \abs{m(a') - \frac{u^{a'}(r_0)}{r_0}} < \varepsilon.
\end{align*}
This finishes the proof of continuity.
\end{proof}
We are now in the position to prove the existence theorem.
\begin{thm}
\label{existence-singular-expander}
There is an $M_0 > 0$ such that for all $M > M_0$ there exist at least two distinct values $a_1, a_2 \in (0,\infty)$, depending on $M$, such that $m(a_1) = m(a_2) = M$.
\end{thm}
\begin{proof}
We will show that $m(a) \to \infty$ both as $a \to 0$ and as $a \to \infty$. In view of \cref{continuity} this will prove the theorem. \par
Let us show that $m(a) \to \infty$ as $a \to \infty$. First of all, we show that $u_r^a$ cannot be uniformly bounded above. Indeed if $u_r^a$ is uniformly bounded above by some constant $C$, then $u - ru_r \ge a - C$ for $r \in [0,1]$ and so $u_{rr} \ge \frac{1}{2}(a-C)$ on $[0,1]$ by \cref{expander-ode}. But then
\begin{align*}
u^a_r(1) = \frac{\sqrt{3}}{3} + \int_0^1 u^a_{rr}(r)dr \ge \frac{\sqrt{3}}{3} + \frac{1}{2}(a - C) \to \infty
\end{align*}
as $a \to \infty$, a contradiction. \par
Recall from the proof of \cref{continuity} that $u_{rr}^{a}$ can have at most one zero. If there is a sequence $a_i \to \infty$ such that $u_{rr}^{a_i}$ has one zero $r_i$, then \cref{estimate-2} immediately implies that
\begin{align*}
\left(\frac{u^{a_i}}{r}\right)_r > 0, \;\; r > r_i.
\end{align*}
Integrating the above from $r_i$ to $\infty$ we get
\begin{align}
\label{estimate-1}
m(a_i) > \frac{u^{a_i}(r_i)}{r_i}.
\end{align}
On the other hand, at a zero of $u_{rr}$, \cref{expander-ode} gives that
\begin{align}
\label{expander-ode-2}
\frac{u^{a_i}(r_i)}{r_i} = u^{a_i}_r(r_i)- \frac{2(n-1)}{u^{a_i}(r_i)r_i}
\end{align}
Observe that $u_r^{a_i}(r_i) = \sup_{r} u_r^{a_i}(r)$ because $r_i$ is a local maximum of $u^{a_i}_r$ and $u^{a_i}_{rr}(r) < 0$ for all $r > r_i$. Since $u_r^{a_i}$ is not uniformly bounded, we have that \begin{align*}
\lim_{i \to \infty} \frac{u^{a_i}(r_i)}{r_i} = \lim_{i \to \infty} u^{a_i}_r(r_i)- \frac{2(n-1)}{u^{a_i}(r_i)r_i} = \infty,
\end{align*}
where we also used $u^{a_i}(r_i) > a_i$. Recalling \cref{estimate-1}, we see that $\lim_{i \to \infty} m(a_i) = \infty$. \par
Otherwise there is $a_0 > 0$ such that $u_{rr}^a$ has no zero for all $a > a_0$. This means that $u_r^a$ is strictly increasing for all $a > a_0$. Since $u_r^a$ is not uniformly bounded we can find a sequence $a_i \to \infty$ and $\{r_i\} \subset [0,\infty)$ such that $u_r^{a_i}(r_i) > i$. Of course monotonicity implies that $m(a_i) \ge u_r^{a_i}(r_i) > i$. This shows that $m(a) \to \infty$ as $a \to \infty$. \par
Next we will show that $m(a) \to \infty$ as $a \to 0$ as well. Again we first argue that $u_r^a$ cannot be uniformly bounded near $0$. Suppose for a contradiction that there is $a_0 > 0$ such that $\abs{u_r^a} \le C$ for all $0 < a < a_0$. Then, since $u_r^a > 0$, we get
\begin{align}
\label{upper-bound-2}
u^a(r) = a + \int_0^r u_r^a(t) dt \le a + Cr, \;\; a < a_0.
\end{align}
On the other hand, \cref{expander-ode} gives that
\begin{align*}
u_{rr}^a \ge \frac{n-1}{u^a} + \frac{1}{2}(u^a - r u_r^a) \ge \frac{n-1}{2u^a} + \sqrt{n-1} - Cr, \;\; a < a_0.
\end{align*}
In particular, for sufficiently small $a$ we can ensure $\sqrt{n-1} \ge Cr$, and therefore $u_{rr}^a(r) \ge \frac{n-1}{2u^a(r)}$, for all $r < \sqrt{a}$. Hence, using \cref{upper-bound-2}, we may estimate
\begin{align*}
u_{r}^a(\sqrt{a}) &\ge \frac{\sqrt{3}}{3} + \int_0^{\sqrt{a}} \frac{n-1}{2u^a(r)} dr \\
&\ge \frac{\sqrt{3}}{3} + \frac{n-1}{2}\int_0^{\sqrt{a}} \frac{1}{a + Cr} dr = \frac{\sqrt{3}}{3} + \frac{n-1}{2C} \log(1 + Ca^{-\frac{1}{2}}) \to \infty
\end{align*}
as $a \to 0$, a contradiction. This shows that $u_r^a$ is not uniformly bounded near $0$. \par
Suppose there is a sequence $a_i \to 0$ such that $u_{rr}^{a_i}$ has one zero $r_i$. In view of \cref{expander-ode-2}, if $r_i$ is bounded away from 0, then taking $i \to \infty$ will give
\begin{align*}
\lim_{i \to \infty} \frac{u^{a_i}(r_i)}{r_i} = \lim_{i \to \infty} u^{a_i}_r(r_i)- \frac{2(n-1)}{u^{a_i}(r_i)r_i} = \infty,
\end{align*}
where we used the fact that $u^{a_i}(r_i) \ge a_i + \frac{\sqrt{3}}{3}r_i$ and that $u^{a}_r$ is not uniformly bounded near 0. By \cref{estimate-1}, we can conclude $m(a_i) \to \infty$ as $i \to \infty$ as before. So we henceforth assume that $r_i \to 0$ and assume for a contradiction that there is a constant $C$ such that $r^{-1}u^{a_i}(r) \le C$ uniformly for $r \ge r_i$. Since $u_r$ is decreasing on $[r_i,\infty)$ we get
\begin{align*}
u^{a_i}(2r_i) \ge \int_{r_i}^{2r_i} u^{a_i}_r(t) dt \ge r_i u^{a_i}_r(2r_i) \implies u_r^{a_i}(2r_i) \le \frac{u^{a_i}(2r_i)}{r_i} \le 2C.
\end{align*}
Since $u^{a_i}_{rr}(2r_i) < 0$, \cref{expander-ode} gives that
\begin{align*}
0 < \frac{u^{a_i}(2r_i)}{2r_i} \le \frac{-2(n-1)}{2r_iu^{a_i}(2r_i)} + u^{a_i}_r(2r_i).
\end{align*}
Rearranging, we deduce that $r_iu^{a_i}(2r_i) \ge (n-1)u_r^{a_i}(2r_i)^{-1} \ge \frac{1}{2}(n-1)C^{-1}$, and so
\begin{align*}
2Cr_i^2 \ge r_iu^{a_i}(2r_i) \ge \frac{1}{2}(n-1)C^{-1} \implies r_i^2 \ge \frac{1}{4}(n-1)C^{-2},
\end{align*}
a contradiction as $r_i \to 0$. Hence $r^{-1}u^{a_i}(r)$ is not uniformly bounded on $(r_i,\infty)$, so for each $i$ we may find $r'_{i} \in (r_i,\infty)$ such that $(r_i')^{-1}u^{a_i}(r'_i) > i$. \cref{estimate-1} then implies $m(a_i) \to \infty$ as $i \to \infty$. \par
Otherwise $u_{rr}^{a}$ has no zero for sufficiently small $a$ and monotonicity as before implies that $m(a) \to \infty$ as $a \to 0$. This completes the proof.
\end{proof}
\section{Further remarks}
\label{further-remarks}
We conclude our article with some open questions and conjectures, some of which we have already alluded to before. \par
The most natural question to ask is what happens if $\mathcal{C}$ is rotationally symmetric and the link $\mathcal{L}(\mathcal{C})$ has three (or more) connected components. In this case self-expanders asymptotic to $\mathcal{C}$ are not expected to be rotationally symmetric. This should be compared with the case of minimal surfaces: the Costa surface, which is not rotationally symmetric, has two catenoidal ends and one planar end. It is therefore natural to expect that something similar happens if the cone is the union of a rotationally symmetric double cone and a hyperplane. We suspect that a gluing construction, desingularizing the union of the connected expander (asymptotic to the double cone) and the hyperplane, will produce a counterexample. \par
Another important problem is to determine the entropy of rotationally symmetric double cones. In fact, we conjecture that the assumption $\lambda[\mathcal{C}] < 2$ in \cref{main-theorem} is redundant (whereas the same assumption in \cref{reflection-symmetry} is essential). More precisely, we conjecture:
\begin{conj}
\label{yao-conjecture}
Let $\mathcal{C}$ be a rotationally symmetric double cone of the form:
\begin{align*}
x_1^2 = \begin{cases} m_1(x_2^2 + x_3^2 + \cdots + x_n^2), & x_1 \ge 0 \\ m_2(x_2^2 + x_3^2 + \cdots + x_n^2), & x_1 < 0\end{cases},
\end{align*}
where $m_1, m_2 > 0$. Then $\lambda[\mathcal{C}] < 2$.
\end{conj}
When $m_1 = m_2 = m$, observe that when $m \to 0$, $\mathcal{C}$ converges to a multiplicity 2 plane which has entropy 2, and that when $m \to \infty$, $\mathcal{C}$ converges after suitable translations to a cylinder $\mathbb{R} \times \mathbb{S}^{n-1}$ which has entropy strictly less than 2. This also explains why the two connected components of $\mathcal{L}(\mathcal{C})$ need to be in different half-spaces, for otherwise when $m$ is small the double cone could be close to a multiplicity two cylinder which has entropy strictly larger than 2. \par
It is not so hard to calculate the Gaussian area of $\mathcal{C}$ centered at the origin, but, unlike for self-shrinkers, the maximum in our case need not occur at the origin (the argument for self-shrinkers can be found in, e.g., Section 7 of \cite{CMGeneric}). In fact, when $m$ is large the entropy is achieved far away from the origin (in a region where the cone looks more like a cylinder). Although we have strong numerical evidence that the conjecture is true, the analysis of the Gaussian area functional centered away from the origin is more complicated to handle. \par
Less clear is the entropy of cones with $O(p + 1) \times O(n - p + 1)$ symmetry. Ilmanen and White \cite{IlmanenWhite} give an exact formula for the Gaussian density of $\mathcal{C}_{n,p}$ at the origin:
\begin{align*}
\Theta_{\mathcal{C}_{n,p}}(0) = \frac{\sigma_{p} \sigma_{n-p}}{\sigma_{n}} \left(\frac{p}{n}\right)^{p/2}\left(\frac{n-p}{n}\right)^{(n-p)/2},
\end{align*}
where $\sigma_p$ is the volume of the unit $p$-sphere in $\mathbb{R}^{p+1}$. It can be checked that this is less than 2 for all $n,p$ (the proof is a rather tedious computation, so we omit it here, but one can easily verify it by numerics as well; see the sketch after the conjecture below). Since the $\mathcal{C}_{n,p}$ are minimal, they are also self-shrinkers. It follows from a theorem of Colding--Minicozzi \cite{CMGeneric} that the entropy of $\mathcal{C}_{n,p}$ is achieved at the origin. Hence, in fact, $\lambda[\mathcal{C}_{n,p}] = \Theta_{\mathcal{C}_{n,p}}(0) < 2$. For example, the cone $\mathcal{C}_{2,1} \subset \mathbb{R}^4$ has entropy $\frac{\pi}{2}$. It is therefore reasonable to make the following conjecture, partly due to Solomon:
\begin{conj}[cf. Section 4 of \cite{IlmanenWhite}]
Any double cone with $O(p+1) \times O(n - p + 1)$ symmetry has entropy at least that of $\mathcal{C}_{n,p}$ and at most 2.
\end{conj}
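Although not needed elsewhere, the inequality $\Theta_{\mathcal{C}_{n,p}}(0) < 2$ quoted above can be confirmed numerically from the Ilmanen--White formula; a minimal sketch follows, where the range of $n$ tested is an arbitrary choice.
\begin{verbatim}
# Numerical check (not a proof) that the Ilmanen--White density is below 2.
# sigma(p) = 2*pi^((p+1)/2) / Gamma((p+1)/2) is the volume of the unit p-sphere.
from math import pi, gamma

def sigma(p):
    return 2.0 * pi ** ((p + 1) / 2.0) / gamma((p + 1) / 2.0)

def theta(n, p):
    return (sigma(p) * sigma(n - p) / sigma(n)
            * (p / n) ** (p / 2.0) * ((n - p) / n) ** ((n - p) / 2.0))

assert abs(theta(2, 1) - pi / 2) < 1e-12                 # the cone C_{2,1} in R^4
assert all(theta(n, p) < 2.0 for n in range(2, 60) for p in range(1, n))
\end{verbatim}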
Finally we discuss the rotational symmetry of singular self-expanders. As we remarked before, the Hopf lemma \cref{hopf-lemma} cannot deal with triple junction singularities which are a possibility as we have seen in \cref{ode-appendix}.
\begin{ques}
What can we say about the symmetry of a singular self-expander coming out of a rotationally symmetric double cone? \par
In particular, if $\Sigma$ is a self-expander asymptotic to $\mathcal{C}_m$ (notation as in \cref{ode-appendix}) that is smooth away from $\{x_1 = 0\}$ and only has triple junction singularities on $\{x_1 = 0\}$, is $\Sigma$ rotationally symmetric?
\end{ques}
It is possible that one can use a parabolic variant of the argument of Bernstein--Maggi \cite{BernsteinMaggi} for singular Plateau surfaces (in particular Lemma 2.4 therein) to prove the desired rotational symmetry.
\appendix
\section{Construction of the Smooth Flow}
\label{smooth-construction-appendix}
Here we give a more detailed overview of the construction of smooth Morse flow lines from \cref{construction-of-smooth-flow}. The construction is an adaptation of the work of Bernstein and Wang in a series of papers, including \cite{BWMountainPass}, \cite{BWRelativeEntropy}, \cite{BWTopologicalUniqueness} and \cite{BWTopology}. \par
Throughout the section, we assume $\mathcal{C} \subset \mathbb{R}^{n+1}$ is a smooth cone. The first proposition was originally proven for self-shrinkers in \cite{BWTopology} and we give a modified proof for self-expanders.
\begin{prop}[cf. Lemma 4.3 of \cite{BWTopology}]
\label{distance-estimate}
Suppose $\Sigma$ is a hypersurface $C^{2,\alpha}$-asymptotic to $\mathcal{C}$, and there is $N > 0$ such that $\Sigma \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for $R > 1$. If $\{\Sigma_t\}_{t \in [1,T]}$ is an integral Brakke flow starting from $\Sigma$ in $\mathbb{R}^{n+1}$, then there is a constant $N' > 0$ such that
\begin{align*}
\Sigma_t \setminus B_{N'R \sqrt{t}}(0) \subset \mathcal{T}_{R^{-1}\sqrt{t}}(\mathcal{C})
\end{align*}
for $R > 1$.
\end{prop}
\begin{proof}
For any $x \in \mathbb{R}^{n+1} \setminus (B_{NR}(0) \cup \mathcal{T}_{R^{-1}}(\mathcal{C}))$ let
\begin{align*}
\rho(x) = \inf\{ \rho' \ge 0 \mid B_{\rho'}(x) \cap (B_{NR}(0) \cup \mathcal{T}_{R^{-1}}(\mathcal{C})) \ne \emptyset\}
\end{align*}
and let
\begin{align*}
\rho_t(x) = \begin{cases}
\sqrt{\rho(x)^2 - 2n(t-1)} & \rho(x)^2 \ge 2n(t-1) \\ 0 & \rho(x)^2 < 2n(t-1)
\end{cases}
\end{align*}
be the corresponding MCF starting from $B_{\rho}(x)$. Let
\begin{align*}
U_t = \bigcup_{\rho_t(x) > 0} B_{\rho_t(x)}(x).
\end{align*}
be the time $t$ slice of the above MCF. Since initially $\Sigma \cap U_1 = \emptyset$, the maximum principle implies that $U_t \cap \Sigma_t = \emptyset$ for all $t \in [1,T)$. This proves that
\begin{align}
\Sigma_t \setminus B_{NR+ \sqrt{2n(t-1)}}(0) \subset \mathcal{T}_{R^{-1} + \sqrt{2n(t-1)}}(\mathcal{C})
\label{eq31}
\end{align}
since $\mathbb{R}^{n+1} \setminus (B_{NR+ \sqrt{2n(t-1)}}(0) \cup \mathcal{T}_{R^{-1} + \sqrt{2n(t-1)}}(\mathcal{C})) \subset U_t$.\par
Next we consider the map $\Phi: \mathcal{C} \setminus \{0\} \times \mathbb{R} \to \mathbb{R}^{n+1}$ given by $\Phi(x,\lambda) = x + \lambda \nu_\mathcal{C}$. Choose $\lambda_0 < 1/2$ depending on $\mathcal{C}$ small enough so that $\Phi|_{\mathcal{L}(\mathcal{C}) \times (-2\lambda_0,2\lambda_0)}$ is a diffeomorphism onto its image. It follows, since $\mathcal{C}$ is a cone, that $\Phi$ is a diffeomorphism on the set
$\{(x,\lambda) \mid \abs{\lambda} < 2\lambda_0 \abs{x}\}$. \par
Consider, for some $N'$ to be chosen later,
\begin{align*}
V_t = \mathbb{R}^{n+1} \setminus (U_t \cup B_{N'R\sqrt{t}}(0)).
\end{align*}
We first claim that $N'$ can be chosen so that $y \in V_t$ can be written as $x + \lambda'\abs{x} \nu_\mathcal{C}(x)$ for some $\abs{\lambda'} < \lambda_0$. Indeed, since $y \not \in U_t$ we have that $\dist(y, \mathcal{C}) \le R^{-1} + \sqrt{2n(t-1)}$ provided we choose $N'$ large enough so that
\begin{align*}
N'R\sqrt{t} > NR + \sqrt{2n(t-1)}.
\end{align*}
Let $x$ be the nearest point projection of $y$ onto $\mathcal{C}$, then using $R > 1$ we have that
\begin{align*}
\frac{\abs{y-x}}{\abs{x}} &\le \frac{1 + \sqrt{2nt}}{N'\sqrt{t} - 1 - \sqrt{2nt}} = \frac{(\sqrt{t})^{-1} + \sqrt{2n}}{N' - (\sqrt{t})^{-1} - \sqrt{2n}} < \frac{1+\sqrt{2n}}{N' - 1 - \sqrt{2n}} < \lambda_0
\end{align*}
provided we choose $N'$ large depending on $n$ only. Hence the claim holds. \par
Let $y \in \Sigma_t \setminus B_{N'R\sqrt{t}}(0) \subset V_t$. We claim that, up to further increasing $N'$, $\dist(y, \mathcal{C}) < R^{-1}\sqrt{t}$ and this will finish the proof. To this end consider $y_0 = x + \lambda_0 \abs{x} \nu_\mathcal{C}$. We have that
\begin{align*}
\abs{y_0} = \abs{x + \lambda_0 \abs{x}\nu_\mathcal{C}} = \sqrt{1 + \lambda_0^2}\abs{x} > NR + \lambda_0\abs{x} - \frac{1}{3}R^{-1},
\end{align*}
if $N'$ is chosen large enough so that
\begin{align*}
\abs{x} \ge (N'R - \sqrt{2n})\sqrt{t} - R^{-1} > 4NR - R^{-1},
\end{align*}
where we used the fact that $\sqrt{1 + \lambda_0^2} - \lambda_0 > \frac{1}{3}$.
This implies that $B_{\lambda_0\abs{x} - R^{-1}}(y_0) \cap B_{NR}(0) = \emptyset$. Since $\dist(y_0,\mathcal{C}) = \lambda_0 \abs{x}$ we conclude that
\begin{align*}
\rho(y_0) = \lambda_0\abs{x} - R^{-1}.
\end{align*}
Increasing $N'$ if necessary we can also ensure that
\begin{align*}
\lambda_0\abs{x} - R^{-1} > \sqrt{2n(t-1)}.
\end{align*}
Since $y \not\in B_{\rho_{t}(y_0)}(y_0) \subset U_t$ we can estimate
\begin{align*}
\dist(y,\mathcal{C}) &\le \dist(y_0,\mathcal{C}) - \rho_t(y_0) =
\lambda_0 \abs{x} - \sqrt{(\lambda_0 \abs{x} - R^{-1})^2 - 2n(t-1)}.
\end{align*}
To finish the proof we compute, for large $N'$,
\begin{align*}
&\phantom{{}={}}(\lambda_0 \abs{x} - R^{-1}\sqrt{t})^2 - ((\lambda_0 \abs{x} - R^{-1})^2 - 2n(t-1)) \\
&= 2R^{-1}\lambda_0 \abs{x} (1 - \sqrt{t}) + R^{-2}(t - 1) + 2n(t-1) \\
&= R^{-1}(\sqrt{t} - 1)(-2\lambda_0 \abs{x} + (R^{-1} + 2nR)(\sqrt{t}+1)) \le 0.
\end{align*}
which is equivalent to
\begin{align*}
\lambda_0 \abs{x} - R^{-1}\sqrt{t} \le \sqrt{(\lambda_0 \abs{x} - R^{-1})^2 - 2n(t-1)} \implies \dist(y,\mathcal{C}) \le R^{-1}\sqrt{t}
\end{align*}
provided $N'$ is chosen large enough so that
\begin{align*}
\lambda_0 \abs{x} \ge (N'R - \sqrt{2n})\sqrt{t} - R^{-1} \ge (R^{-1} + 2nR)\sqrt{t}. &\qedhere
\end{align*}
\end{proof}
The next proposition shows the desired regularity for an asymptotically conical MCF. Since the argument is used repeatedly in our presentation, we include a proof for the sake of completeness. In the proof we will use the notation $[f]_{\alpha;\Omega}$ to denote the $\alpha$-H\"{o}lder seminorm of $f$ on $\Omega$ (note $\Omega$ could be a subset of $\mathbb{R}^{n+1}$ or a time interval), i.e.
\begin{align*}
[f]_{\alpha;\Omega} = \sup_{x, y \in \Omega, x \ne y} \frac{\abs{f(x) - f(y)}}{\abs{x - y}^\alpha}.
\end{align*}
\begin{prop}[Lemma 5.3(2) of \cite{BWTopologicalUniqueness}]
\label{interior-estimates}
Suppose $\Sigma$ is a hypersurface $C^{2,\alpha}$-asymptotic to $\mathcal{C}$ and let $\{\Sigma_t\}_{t \in [1,T)}$ be a MCF starting from $\Sigma$. If there is $N > 0$ such that $\Sigma_1 \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for all $R > 1$, then there is $N' > 0$ such that $\Sigma_t \setminus B_{N'\sqrt{t}}(0)$ can be written as a (smooth) normal graph over $\mathcal{C} \setminus B_{N'\sqrt{t}}(0)$. In particular we have the uniform curvature bound
\begin{align*}
\sup_{t \in [1,T)} \sup_{\Sigma_t \setminus B_{N'\sqrt{t}}} \abs{A_{\Sigma_t}} < \infty.
\end{align*}
\end{prop}
\begin{proof}
Fix $t \in [1,T)$ and let $\delta > 0$. Since $\Sigma$ is asymptotically conical, by \cref{expander-curvature-estimates} there are $N_1$ and $\varepsilon > 0$ depending on $\delta$ such that for any $x_0 \in \mathcal{C} \setminus B_{N_1}(0)$, $\Sigma_1 \cap C_{\eta}(x_0)$ can be written as a graph $\tilde{f}_{x_0}(x)$ over some neighborhood of $x_0$ in $T_{x_0}\mathcal{C}$ containing $B^n_{\eta}(x_0)$. Here $\eta = \varepsilon\abs{x_0}$. Moreover, up to increasing $N_1$, we can ensure that \cref{distance-estimate} holds and that $\tilde{f}_{x_0}$ satisfies the estimates
\begin{align}
\label{appendix-eq-1}
\sum_{i=0}^2 \eta^{-1+i} \sup_{B^n_\eta(x_0)} |\nabla^i \tilde{f}_{x_0}| + \eta^{1 + \alpha} [\nabla^2 \tilde{f}_{x_0}]_{\alpha} < \delta.
\end{align}
By pseudolocality of MCF (Theorem 1.5 in \cite{IlmanenNevesSchulze}), given $\varepsilon > 0$, there is $N_1 > 0$ such that for every $x_0 \in \Sigma_1 \setminus B_{N_1}(0)$ and $s \in [1,t]$, $\Sigma_s \cap C_{\eta/2}(x_0)$ can be written as a normal graph over $\Sigma_1 \cap B^n_{\eta/2}(x_0)$. Combining the above two facts, we see that for sufficiently small $\delta$ and $\varepsilon$, $\Sigma_s \cap C_{\eta/2}(x_0)$ can be written as the graph of a function $f_{x_0}(s,x)$ over some neighborhood of $T_{x_0}\mathcal{C}$ for all $s \in [1,t]$ and $x_0 \in \mathcal{C} \setminus B_{N_1}(0)$. Moreover, $f_{x_0}$ satisfies the pointwise estimates
\begin{align*}
(\eta/2)^{-1} \sup_{B^n_{\eta/2}(x_0)} \abs{f_{x_0}(s,\cdot)} + \sup_{B^n_{\eta/2}(x_0)} \abs{\nabla_x f_{x_0}(s,\cdot)} < 1.
\end{align*}
For the rest of the proof we fix an $x_0$ and put $f = f_{x_0}$. Since $\{\Sigma_s\}_{s \in [1,t]}$ is a graphical MCF near $x_0$, $f$ satisfies the evolution equation:
\begin{align*}
\frac{\partial f}{\partial s} = \sqrt{1 + \abs{\nabla_{x} f}^2} \Div\left(\frac{\nabla_{x}f}{\sqrt{1 + \abs{\nabla_{x} f}^2}}\right).
\end{align*}
This is a quasilinear parabolic equation, so we may use H\"{o}lder estimates (eg. Theorem 1.1 in Chapter 4 of \cite{LSU}) to get that
\begin{align*}
\sup_{s \in [1,t]} [\nabla_x f(s,\cdot)]_{\alpha;B_{\eta/4}(x_0)} + \sup_{B^n_{\eta/4}(x_0)}[\nabla_x f(\cdot,x)]_{\alpha/2;[1,t]} \le C(\eta/4)^{-\alpha}.
\end{align*}
for any $\alpha \in (0,1)$. Standard Schauder estimates (see eg. Chapter 5 of \cite{Lieberman} or Theorem 5.1 in Chapter 4 of \cite{LSU}) yield higher order estimates of the form
\begin{align*}
\sum_{i=0}^2 (\eta/8)^{i-1}\sup_{B^n_{\eta/8}(x_0)}\abs{\nabla^i f(s,\cdot)} + (\eta/8)^{1 + \alpha} [\nabla^2 f(s,\cdot)]_{\alpha;B_{\eta/8}(x_0)} \le C
\end{align*}
for $s \in [1,t]$ and
\begin{align*}
\sup_{x \in B^n_{\eta/8}(x_0)} [\nabla_x f_{x_0}(s,x)]_{\frac{1}{2};[1,t]} \le C (\eta/8)^{-1}.
\end{align*}
From the above we may estimate
\begin{align*}
\abs{f_{x_0}(s,x) - f_{x_0}(1,x_0)} \le C(s-1)(\eta/8)^{-1} + \delta \abs{x-x_0} + C(\eta/8)^{-1}\abs{x-x_0}^2
\end{align*}
where we also used the evolution equation and the fact that $\abs{\nabla_x f(1,x_0)} < \delta$ from \cref{appendix-eq-1}. This implies that, for $\rho < 1/8$, a fixed $s \in [1,t]$ and $x_0 \in \mathcal{C} \setminus B_{\tilde{N}\sqrt{s}}(0)$,
\begin{align*}
(\rho \eta)^{-1}\sup_{x \in B_{\rho \eta}^n(x_0)} \abs{f(s,x)} &\le (\rho\eta)^{-1}N_1\abs{x_0}^{-1} + C(s-1)\rho^{-1}\eta^{-2} + \delta + C\rho \\
&\le \frac{(\rho \varepsilon)^{-1} N_1}{\tilde{N}^2 s} + \frac{C(s-1)\rho^{-1}}{\tilde{N}^2 s} + \delta + C\rho
\end{align*}
where we used that $\abs{f(1,x_0)} < N_1 \abs{x_0}^{-1}$ by \cref{distance-estimate}. The right hand side of the above equation can be made arbitrarily small provided we choose $\delta$ and $\rho$ small enough and $\tilde{N}$ large enough. Similarly we can estimate the derivative
\begin{align*}
\abs{\nabla_{x} f_{x_0}(s,x) - \nabla_x f_{x_0}(1,x_0)} \le C(\eta/8)^{-1} \abs{x - x_0} + C(\eta/8)^{-1} \sqrt{s - 1}
\end{align*}
and
\begin{align*}
\sup_{x \in B^n_{\rho\eta}(x_0)}\abs{\nabla_x f(s,x)} \le \delta + C\rho + C\frac{\sqrt{s-1}}{\tilde{N} \sqrt{s}}
\end{align*}
which can be made arbitrarily small as well. These two decay estimates together with Schauder estimates above with $\eta/8$ replaced by $\rho \eta$ give
\begin{align*}
\sum_{i=0}^2 (\eta/8)^{i-1}\sup_{B^n_{\eta/8}(x_0)}\abs{\nabla^i f(s,\cdot)} + (\eta/8)^{1 + \alpha} [\nabla^2 f(s,\cdot)]_{\alpha;B_{\eta/8}(x_0)} \le \frac{1}{2} + C(\rho + \rho^{1 + \alpha})
\end{align*}
which can be made to be less than 1 provided $\rho$ is chosen small enough. This proves that $\Sigma_t \cap C_{\rho\eta}(x_0)$ is a graph over (a neighborhood of) $T_{x_0}\mathcal{C}$ for $x_0 \in \mathcal{C} \setminus B_{\tilde{N}\sqrt{t}}(0)$ with derivative bounds up to the second order. The curvature bounds follow easily, and higher order bounds follow similarly using Schauder estimates.
\end{proof}
Given a MCF $\{\Sigma_t\}_{t \in I}$, the \textit{expander mean curvature} of $\Sigma_t$ is
\begin{align*}
E_{\Sigma_t}(x) = 2tH_{\Sigma_t} + \inner{x,\nu_{\Sigma_t}}.
\end{align*}
We say $\{\Sigma_t\}$ is \textit{expander mean convex} if $E_{\Sigma_t}(x) > 0$ along the flow. For a fixed time $t$ and a hypersurface $\Sigma$, the relative expander mean curvature of $\Sigma$ is
\begin{align*}
E_\Sigma^t(x) = 2tH_\Sigma + \inner{x,\nu_{\Sigma}}.
\end{align*}
For $\beta > 0$ define the auxiliary function $g_\beta: \mathbb{R}^+ \to \mathbb{R}^+$ by
\begin{align*}
g_\beta(s) = s^{-\beta} e^{-\beta s}.
\end{align*}
We now prove the main existence theorem for expander mean convex hypersurfaces without entropy bound.
\begin{thm}[Existence Theorem, cf. Proposition 5.1 of \cite{BWTopologicalUniqueness}]
\label{existence-theorem}
Let $\Sigma$ be a hypersurface $C^{2,\alpha}$-asymptotic to $\mathcal{C}$ with no closed components. Suppose that there is $N$ such that $\Sigma \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for $R > 1$, and that there are $c,\beta > 0$ such that
\begin{align*}
E_\Sigma(x) \ge cg_\beta (1 + \abs{x}^2) > 0, \;\; x\in \Sigma.
\end{align*}
Then there exists a unique MCF $\{\Sigma_t\}_{t \in [1,T)}$ with $\Sigma_1 = \Sigma$, where $T$ is the first singular time (possibly $\infty$). Moreover the MCF satisfies
\begin{enumerate}
\item $\Sigma_t$ is $C^{2,\alpha}$-asymptotic to $\mathcal{C}$ for all $t \in [1,T)$.
\item $E_{\Sigma_t}(x) > cg_\beta(1 + \abs{x}^2 + 2n(t-1))$ for all $t \in [1,T)$ and $x \in \Sigma_t$.
\item If $T < \infty$, we have
\begin{align*}
\lim_{t \to T} \sup_{\Sigma_t \cap B_{N'\sqrt{t}}} \abs{A_{\Sigma_t}} = \infty.
\end{align*}
\end{enumerate}
\end{thm}
\begin{proof}
Consider the map $\Phi: \Sigma \times (-\varepsilon,\varepsilon) \to \mathbb{R}^{n+1}$ given by
\begin{align*}
\Phi(x,\lambda) = x + \lambda \nu_\Sigma.
\end{align*}
Since $\Sigma$ is asymptotically conical, we can choose $\varepsilon$ sufficiently small so that the above map is a diffeomorphism onto its image. Using this parametrization we can invoke the standard existence theory for MCF to conclude that there exists a unique MCF starting from $\Sigma_1 = \Sigma$. For the listed properties, Items (1) and (3) follow from \cref{interior-estimates}, and Item (2) is Lemma 5.4 of \cite{BWTopologicalUniqueness}.
\end{proof}
Given an unstable self-expander $\Sigma$ asymptotic to $\mathcal{C}$, we wish to apply \cref{existence-theorem} to the perturbed self-expander $\Sigma^\varepsilon$ defined by
\begin{align*}
\Sigma^\varepsilon = \Psi^\varepsilon(\Sigma) \text{ where } \Psi^\varepsilon(x) = x + \varepsilon f(x) \nu_\Sigma.
\end{align*}
It remains to check that $\Sigma^\varepsilon$ satisfies the assumption of \cref{existence-theorem}. We need the following $C^0$-estimate on the first eigenfunction $f$.
\begin{lem}[Proposition 3.2 of \cite{BWMountainPass}]
\label{eigenfunction-decay-estimates}
Let $\Sigma$ be an unstable connected self-expander. Let $f$ be the unique positive eigenfunction of $L_\Sigma$ with eigenvalue $\mu_1 < 0$ and $\norm{f}_{W^0_{1/4}(\Sigma)} = 1$. Then $f$ satisfies the following $C^0$ estimate:
\begin{align*}
C^{-1}(1 + \abs{x}^2)^{-\frac{1}{2}(n+1 - 2\mu_1)} e^{-\frac{\abs{x}^2}{4}} \le f \le C(1 + \abs{x}^2)^{-\frac{1}{2}(n+1 - 2\mu_1)} e^{-\frac{\abs{x}^2}{4}}.
\end{align*}
where $C = C(\Sigma)$.
\end{lem}
\cref{eigenfunction-decay-estimates} and \cref{expander-curvature-estimates} imply that there exists $N > 0$ such that $\Sigma^\varepsilon \setminus B_{NR}(0) \subset \mathcal{T}_{R^{-1}}(\mathcal{C})$ for $R > 1$, so the first condition to apply \cref{existence-theorem} is satisfied. Finally we need to show that perturbing by the first eigenfunction produces a hypersurface with positive expander mean curvature. This relies on the fact that $-L_\Sigma$ is the linearization of the self-expander equation.
\begin{lem}
\label{expander-mean-convexity}
Let $\Sigma$ be a connected self-expander $C^{2,\alpha}$-asymptotic to $\mathcal{C}$. Let $f$ be the unique positive eigenfunction corresponding to $\mu_1$ of $L_\Sigma$ with $\norm{f}_{W^0_{1/4}(\Sigma)} = 1$. Then there exists $\varepsilon_0 > 0$ such that for all $\abs{\varepsilon} < \varepsilon_0$ there is $\beta = \beta(\varepsilon)$ such that
\begin{align*}
E_{\Sigma^\varepsilon}(x) \ge cg_\beta(1+\abs{x}^2).
\end{align*}
Here $\Sigma^\varepsilon$ is the image of $\Sigma$ under the map $\Phi(x) = x + \varepsilon f(x)\nu_\Sigma$.
\end{lem}
\begin{proof}
By Lemma A.2 of \cite{BWRelativeEntropy} we have (the computation is long, but the result should be standard),
\begin{align*}
E_{\Sigma^\varepsilon}(x) = -\varepsilon L_\Sigma f + \varepsilon^2 Q(f,\inner{x,\nabla_\Sigma f},\nabla_\Sigma f,\nabla^2_\Sigma f)
\end{align*}
for some homogeneous quadratic polynomial $Q$ with bounded coefficients. When $\varepsilon > 0$, since $L_\Sigma f = \mu_1 f$ with $\mu_1 < 0$, it follows from \cref{eigenfunction-decay-estimates} that, up to further shrinking $\varepsilon$,
\begin{align*}
E_{\Sigma^\varepsilon}(x) \ge -\varepsilon\mu_1C^{-1}(1+\abs{x}^2)^{-\frac{1}{2}(n+1 - 2\mu_1)} e^{-\frac{1+\abs{x}^2}{4}} \ge cg_\beta(1 + \abs{x}^2)
\end{align*}
where $\beta = \frac{1}{2}(n+1 - 2\mu_1) > 0$. The case $\varepsilon < 0$ can be handled similarly.
\end{proof}
\cref{expander-mean-convexity} shows that $\Sigma^\varepsilon$ satisfies the second condition of \cref{existence-theorem} for sufficiently small $\varepsilon$. As such, \cref{existence-theorem} can be applied to conclude the short-time existence of an expander mean-convex MCF starting from $\Sigma^\varepsilon$.
\section{Maximum principles}
\label{maximum-principles-appendix}
Here we record the maximum principles from Section 3 of \cite{CHHW} for Brakke flows. These are the essential tools in applying the moving plane method without smoothness. If $\mathcal{M}$ is an integral Brakke flow and $X = (x_0,t_0) \in \supp \mathcal{M}$, the Gaussian density at $X$ is
\begin{align*}
\Theta_\mathcal{M}(X) = \lim_{\rho \to 0} \frac{1}{(4\pi \rho^2)^{n/2}} \int_{\mathcal{M}_{t_0 - \rho^2}} e^{-\frac{\abs{x-x_0}^2}{4\rho^2}} d\mathcal{H}^n.
\end{align*}
$\Theta_\mathcal{M}(X)$ is well-defined by Huisken's monotonicity formula. Observe that an entropy upper bound automatically gives upper bounds on all Gaussian densities.
\begin{thm}[Maximum principle for Brakke flows, Theorem 3.4 of \cite{CHHW}]
\label{maximum-principles}
Let $\mathcal{M}$ be a smooth MCF defined in a parabolic ball $P(X,r)$, where $X = (x_0,t_0) \in \supp \mathcal{M}$ and $r > 0$ is sufficiently small such that $\supp \mathcal{M}$ separates $P(X,r)$ into two open connected components $U$ and $U'$. Let $\mathcal{M}'$ be an integral Brakke flow in $P(X,r)$ with $X \in \supp \mathcal{M}'$ and Gaussian density $\Theta_{\mathcal{M}'}(X) < 2$. If $\supp \mathcal{M}' \subset U \cup \supp \mathcal{M}$, then $X$ is a smooth point for $\mathcal{M}'$, and $\mathcal{M}'$ agrees with $\mathcal{M}$ in a small parabolic ball.
\end{thm}
\begin{thm}[Hopf lemma for tame Brakke flows, Theorem 3.19 of \cite{CHHW}]
\label{hopf-lemma}
Let $\mathcal{M}$ and $\mathcal{M}'$ be two integral Brakke flows defined in a parabolic ball $P(X,r)$ where $X = (x_0,t_0) \in \supp \mathcal{M} \cap \supp \mathcal{M}'$. Suppose $X$ is a tame point (see \cref{tameness-defn}) for both $\mathcal{M}$ and $\mathcal{M}'$ and let $\mathbb{H} \subset \mathbb{R}^{n+1}$ be an open half space with $x_0 \in \partial \mathbb{H}$. If in addition $\partial \mathbb{H}$ is not the tangent flow to either $\mathcal{M}$ or $\mathcal{M}'$, and $\reg \mathcal{M}_t \cap \mathbb{H}$ and $\reg \mathcal{M}'_t \cap \mathbb{H}$ are disjoint for $t \in (t_0 - r^2, t_0)$, then $\mathcal{M}$ and $\mathcal{M}'$ are smooth at $X$ with distinct tangents.
\end{thm}
We remark that in contrast to the usual smooth maximum principle and Hopf lemma, the smoothness is in fact a conclusion in both of the statements above.
\end{document} |
\begin{document}
\begin{center}
{\large \sc \bf {Coherence evolution and transfer supplemented by the state-restoring}
}
\vskip 15pt
{\large
E.B.Fel'dman and A.I.~Zenchuk
}
\vskip 8pt
{\it $^2$Institute of Problems of Chemical Physics, RAS,
Chernogolovka, Moscow reg., 142432, Russia}.
\end{center}
\begin{abstract}
The evolution of quantum coherences comes with a set of conservation laws provided that the Hamiltonian governing this evolution conserves the spin-excitation number. Moreover, coherences of different orders do not mix during the evolution. Using a transmission line and a receiver in the initial ground state, we can transfer the coherences to the receiver without mixing them, although the matrix elements contributing to each particular coherence do become mixed in the receiver's state.
Therefore we propose a tool based on a unitary transformation on the receiver side to unmix these elements and thus restore (at least partially) the structure of the sender's initial density matrix. A communication line with a two-qubit sender and receiver is considered as an example of implementation of this technique.
\end{abstract}
\maketitle
\section{Introduction}
\label{Section:Introduction}
Multiple quantum (MQ) NMR dynamics is a basic tool of the well-developed MQ NMR spectroscopy studying the nuclear spin distribution
in different systems \cite{BMGP,DMF}.
Working with spin polarization, we essentially deal with the diagonal elements of the density matrix. The MQ NMR method, however, allows us to split the whole density matrix into $N+1$ parts, and each of these parts contributes to a specific observable quantity called a coherence intensity.
Thus, studying the coherence intensities and the methods of manipulating them becomes an important direction in the development of MQ NMR methods. For instance, the problem of relaxation of MQ coherences was studied in \cite{KS1,KS2,AS,CCGR,BFVV}. A similar problem in nanopores was considered in \cite{DFZ}.
In MQ NMR experiment, the special sequence of the magnetic pulses is used to generate the so-called two-spin/two-quantum Hamiltonian ($H_{MQ}$)
which is the non-secular part of the dipole-dipole interaction Hamiltonian averaged over fast oscillations.
It was shown in the approximation of nearest-neighbor interactions that the $H_{MQ}$ Hamiltonian can be reduced to the flip-flop XX-Hamiltonian ($H_{XX}$) \cite{Mattis} via a unitary transformation \cite{DMF}. Notice that $H_{MQ}$ does not commute with
the $z$-projection of the total
spin momentum $I_z$, while $[H_{XX},I_z]=0$.
In this paper we consider the evolution problem for the created MQ coherences.
Therefore, after creating the coherences, we switch off the irradiation and allow the coherences to evolve independently under the Hamiltonian commuting with $I_z$ (this can be, for instance, $H_{dz}$ Hamiltonian \cite{Abragam,Goldman} or $H_{XX}$ flip-flop Hamiltonian).
We show that the coherences do not mix during the evolution governed by a Hamiltonian conserving the $z$-projection of the total spin momentum. This fact gives rise to a set of conservation laws associated with such dynamics, namely, the
coherence intensity of each order is conserved.
However, the density-matrix elements contributing to the same-order coherence do mix with each other.
In addition, the coherences created in some subsystem (sender) can be transferred to another subsystem (receiver) through the transmission line
without mixing between coherences, provided that both the receiver and the transmission line are initially in a state carrying only
the zero-order coherence.
This process can be considered as a particular implementation of the remote state creation in spin systems \cite{Z_2014,BZ_2015}.
We show that the sender's density-matrix elements in the receiver's state can be unmixed using a method
based on a unitary transformation of the receiver or, more effectively, of the extended receiver. The theoretical arguments are supplemented with a particular model of a communication line having a two-node sender and receiver. Notice that the extended receiver was already used in previous papers on remote state creation \cite{BZ_2016} with the purpose of properly correcting the created state of the receiver and improving the characteristics of the remote state creation \cite{Z_2014,BZ_2015}.
The paper is organized as follows. In Sec.\ref{Section:DC} we select the matrices $\rho^{(n)}$ responsible for forming
the $n$-order coherence intensity and study some extremal values of coherence intensities.
The evolution of the coherence intensities is considered in Sec.\ref{Section:ev}. The transfer of the coherences from the sender to the receiver is studied in Sec.\ref{Section:cohtr}.
In Sec.\ref{Section:model} we apply the results of previous sections to a particular model of a chain with 2-qubit sender and receiver.
The brief discussion of obtained results is given in Sec.\ref{Section:conclusion}.
\section{Density matrix and coherences}
\label{Section:DC}
It was shown { (for instance, see \cite{FL})} that the density matrix of a quantum state can be written as a sum
\begin{eqnarray}\label{RhoC}
\rho = \sum_{n={-N}}^N \rho^{(n)},
\end{eqnarray}
where each submatrix $ \rho^{(n)}$ consists of the elements of $\rho$ responsible for the spin-state transitions changing the total $z$-projection of the spin momentum by $n$. These elements contribute to the so-called $n$-order coherence intensity $I_n$ which can be registered using the MQ NMR methods. To select the density matrix elements contributing to the $n$-order coherence we turn to the { density-matrix representation in the multiplicative basis
\begin{eqnarray}\label{multb}
|i_1\dots i_N\rangle,\;\;i_k=0,1,\;\;k=1,\dots,N,
\end{eqnarray}
where $i_k$ denotes the state of the $k$th spin.
Thus, the transformation from the computational basis to the multiplicative one reads}
\begin{eqnarray}\label{mult}
\rho_{ij}= \rho_{i_1\dots i_N;j_1\dots j_N},\;\;\; i=\sum_{n=1}^N i_n 2^{n-1} +1,\;\; j=\sum_{n=1}^N j_n 2^{n-1} +1.
\end{eqnarray}
Then,
according to the definition,
\begin{eqnarray}\label{defI}
I_n(\rho) ={\mbox{Tr}} \Big(\rho^{(n)}\rho^{(-n)}\Big) = \sum_{\sum_k (j_k - i_k) = n} |\rho_{i_1\dots i_N;j_1\dots j_N}|^2,\;\;
|n|\le N.
\end{eqnarray}
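For later reference, definition (\ref{defI}) is straightforward to evaluate numerically. A minimal sketch (in Python) is given below; the bit-counting convention simply encodes the multiplicative basis (\ref{multb}) with zero-based indices.
\begin{verbatim}
# Decompose an N-qubit density matrix into coherence submatrices as in the
# expansion above and compute the coherence intensities I_n defined above.
import numpy as np

def coherence_intensities(rho):
    N = int(np.log2(rho.shape[0]))
    exc = [bin(i).count("1") for i in range(2 ** N)]  # excited spins per basis state
    I = {n: 0.0 for n in range(-N, N + 1)}
    for i in range(2 ** N):
        for j in range(2 ** N):
            I[exc[j] - exc[i]] += abs(rho[i, j]) ** 2
    return I

# Example: the maximally mixed state of N spins gives I_0 = 2^(-N) and I_n = 0
# otherwise, in agreement with the extremal value I_0^min found below.
N = 3
print(coherence_intensities(np.eye(2 ** N) / 2 ** N))
\end{verbatim}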
\subsection{Extremal values of coherence intensities}
First of all we find the extremal values of the zero-order coherence intensity of $\rho$ provided that all other
coherences are absent, so that $\rho=\rho_0\equiv\rho^{(0)}$. By definition (\ref{defI}),
\begin{eqnarray}
I_0={\mbox{Tr}} \Big(\rho_0 \rho_0\Big) = {\mbox{Tr}} \left(U_0\Lambda_0 U_0^+\right)^2 = {\mbox{Tr}} \Lambda_0^2 =
\sum_{i=1}^{2^N} \lambda_{0i}^2,
\end{eqnarray}
where $N$ is the number of spins in the system, $\Lambda_0={\mbox{diag}}(\lambda_{01},\dots,\lambda_{02^N})$ and $U_0$ are, respectively, the
eigenvalue and eigenvector matrices of $\rho$.
Therefore we have to find the extremum of $I_0$ with the normalization condition
$\sum_{i=1}^{2^N} \lambda_{0i} =1$.
Introducing the Lagrange factor $\alpha$ we
reduce the problem to constructing the extremum of the function
\begin{eqnarray}
\tilde I_0 = \sum_{i=1}^{2^N} \lambda_{0i}^2 - \alpha \left( \sum_{i=1}^{2^N} \lambda_{0i} -1\right).
\end{eqnarray}
Differentiating with respect to $\lambda_{0i}$ and equating the result to zero we obtain the system of equations
\begin{eqnarray}
2\lambda_{0i}=\alpha,\;\;i=1,\dots,2^N,
\end{eqnarray}
therefore, $\lambda_{0i}=\frac{\alpha}{2}$. Using the normalization we have $\alpha=\frac{1}{2^{N-1}}$,
so that $\lambda_{0i}=\frac{1}{2^N}$. The second derivative of $\tilde I_0$ shows that this is a minimum. Thus, we have
\begin{eqnarray}
I_0^{min}=\frac{1}{2^N}, \;\;\rho|_{I_{0}^{min}} = \frac{1}{2^N}E,
\end{eqnarray}
where $E$ is the $2^N\times 2^N$ identity matrix.
To find the maximum value of $I_0$ we observe that
\begin{eqnarray}
\sum_{i=1}^{2^N} \lambda_{0i}^2 =\left(\sum_{i=1}^{2^N} \lambda_{0i}\right)^2 -\sum_{i\neq j} \lambda_{0i}\lambda_{0j}=1-\sum_{i\neq j} \lambda_{0i}\lambda_{0j} \le 1.
\end{eqnarray}
It is obvious that the unit can be achieved if there is only one nonzero eigenvalue $\lambda_{01}=1$.
Thus
\begin{eqnarray}
I_0^{max}=1, \;\;\rho|_{I_{0}^{max}} = {\mbox{diag}}(1,\underbrace{0,0,\dots0}_{2^N-1}).
\end{eqnarray}
Now we proceed to the analysis of the $n$-order coherence intensity for the matrix having only three non-zero coherences of zero- and $\pm n$-order,
assuming that the zero-order coherence intensity $I_{0}$ is minimal, i.e.,
\begin{eqnarray}\label{rhoin}
\rho=\frac{1}{2^N}E + \tilde \rho^{(n)} = U_n \left(\frac{1}{2^N}E +\Lambda_n\right) U^+_n,\;\;\;\tilde \rho^{(n)}=\rho^{(n)} + \rho^{(-n)}
\end{eqnarray}
where $\Lambda_n={\mbox{diag}}(\lambda_{n1},\dots,\lambda_{n2^N})$ and $U_n$ are the matrices of eigenvalues and eigenvectors of
$\tilde \rho^{(n)}$. Of course, $U_n$ is also the eigenvector matrix for the whole $\rho$ in this case and
\begin{eqnarray}\label{constr2}
\sum_{i=1}^{2^N} \lambda_{ni} =0.
\end{eqnarray}
Now we prove an interesting property of the eigenvalues for the considered case.
{\bf Proposition 1.}
Eigenvalues $\lambda_{ni}$ appear in pairs:
\begin{eqnarray}\label{pairs}
\lambda_{n(2i-1)}= \eta_{ni}, \;\;\lambda_{n(2i)}= -\eta_{ni},
\;\;\;i=1,\dots,2^{N-1}.
\end{eqnarray}
{\it Proof.} First we show that,
along with $\tilde \rho^{(n)}$, the odd powers of this matrix are also traceless.
For instance, let us show that
\begin{eqnarray}\label{rr}
{\mbox{Tr}}(\tilde \rho^{(n)})^3 = \sum_{i,j,k} \tilde \rho^{(n)}_{ij} \tilde \rho^{(n)}_{jk} \tilde \rho^{(n)}_{ki} = 0.
\end{eqnarray}
Using the multiplicative basis for the density-matrix elements in the rhs of eq. (\ref{rr}), we remark that the elements $\tilde \rho_{ij}$, $\tilde \rho_{jk}$ and
$\tilde \rho_{ki}$ are nonzero only if, respectively,
$\sum_m i_{m} -\sum_m j_{m} = \pm n$, $\sum_m j_{m} -\sum_m k_{m} = \pm n$ and $\sum_m k_{m} -\sum_m i_{m} = \pm n$. However, summing all three equalities
we obtain identically zero in the lhs and either $\pm 3 n$ or $\pm n$ in the rhs.
This contradiction means that each term of the sum (\ref{rr}) contains a zero matrix element, i.e., the trace is zero.
A similar consideration works for higher odd powers of $\tilde \rho^{(n)}$
(however, the sum $\tilde \rho^{(n)} + \tilde \rho^{(k)}$, $k\neq n$, does not possess this property, i.e., the traces of its powers are non-zero in general).
Consequently, along with (\ref{constr2}), the following equalities hold:
\begin{eqnarray}\label{sumni}
\sum_{i=1}^{2^N} \lambda_{ni}^m =0 \;\;{\mbox{for any odd}}\;\;m.
\end{eqnarray}
Condition (\ref{sumni}) holds for any odd $m$ if only the eigenvalues $\lambda_{ni}$ appear in pairs (\ref{pairs}).
{To prove this statement, first we assume that all eigenvalues are non-degenerate and
let the eigenvalue $\lambda_{n1}$ be maximal by absolute value.
We divide sum (\ref{sumni}) by $\lambda_{n1}^m$:
\begin{eqnarray}\label{sumni2}
1+\sum_{i=2}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n1}}\right)^m =0, \;\;{\mbox{for odd}}\;\;m.
\end{eqnarray}
Each term in the sum can not exceed one by absolute value. Now we take the limit
$m\to\infty$ in eq.(\ref{sumni2}). It is clear that all the terms such that $\left|\frac{\lambda_{ni}}{\lambda_{n1}}\right|<1$
vanish. Since this sum is zero, there must be an eigenvalue $\lambda_{n2}$ such that $\lambda_{n2} = -\lambda_{n1}$. Then, the
appropriate term in (\ref{sumni2}) yields -1. So, two first terms in sum (\ref{sumni2}) cancel each other which reduces
(\ref{sumni2}) to
\begin{eqnarray}\label{sumni3}
\sum_{i=3}^{2^N} \lambda_{ni}^m =0, \;\;{\mbox{for odd}}\;\;m.
\end{eqnarray}
Next, we select the maximal (by absolute value) of the remaining eigenvalues, repeat our arguments and conclude that there are two more eigenvalues equal in absolute value and having opposite signs. And so on. Finally, after at most $2^{N-1}$ steps we conclude that all eigenvalues appear in pairs (\ref{pairs}).
Now let the eigenvalue selected at the $(2k+1)$th step be $s$-fold degenerate, i.e. $\lambda_{n(2k+1)} =\dots = \lambda_{n(2k+s)}$. Then the sum (\ref{sumni}) takes the form
\begin{eqnarray}\label{sumni4}
\sum_{i=2k+1}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n(2k+1)}}\right)^m =
s +\sum_{i=2k+s+1}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n(2k+1)}}\right)^m,\;\; s\in{\mathbb N},\;\;s\le 2^N-2k,\;\;{\mbox{odd}} \;\;m.
\end{eqnarray}
Now, taking the limit $m\to\infty$ as before, to compensate the term $s$ we need an $s$-fold eigenvalue such that
$\lambda_{n(2k+s+1)} = \dots = \lambda_{n(2k+2s)} = - \lambda_{n(2k+1)}$. Thus, if there is an $s$-fold positive eigenvalue, there must be an $s$-fold negative eigenvalue of the same absolute value.
This ends the proof.} $\Box$
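Proposition 1 is also easy to test numerically: filling only the $\pm n$-order positions of a random Hermitian matrix indeed produces a spectrum that is symmetric about zero. A brief sketch (the system size and coherence order are arbitrary choices) follows.
\begin{verbatim}
# Numerical check of Proposition 1: a Hermitian matrix containing only the
# +n and -n order coherences has eigenvalues appearing in +/- pairs.
import numpy as np

N, n = 3, 1
exc = [bin(i).count("1") for i in range(2 ** N)]
rng = np.random.default_rng(0)
A = rng.normal(size=(2 ** N, 2 ** N)) + 1j * rng.normal(size=(2 ** N, 2 ** N))
T = np.zeros((2 ** N, 2 ** N), complex)
for i in range(2 ** N):
    for j in range(2 ** N):
        if exc[j] - exc[i] == n:
            T[i, j] = A[i, j]
T = T + T.conj().T                     # tilde rho^(n) = rho^(n) + rho^(-n)
ev = np.sort(np.linalg.eigvalsh(T))
print(np.allclose(ev, -ev[::-1]))      # True: the spectrum is symmetric about 0
\end{verbatim}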
Next, since all the eigenvalues of $\rho$ must be non-negative and the density matrix $\rho$ has the structure (\ref{rhoin}), the negative eigenvalues $\eta_{ni}$ can not exceed $\frac{1}{2^N}$ by absolute value. Therefore, the maximal $n$-order coherence intensity corresponds to the case
\begin{eqnarray}
\eta_{ni} =\frac{1}{2^N}.
\end{eqnarray}
Consequently,
\begin{eqnarray}
I_n^{max}+I_{-n}^{max} = 2 I_n^{max} =\sum_{i=1}^{N_n} \lambda_{ni}^2 =\frac{N_n}{2^{2N}}\le \frac{1}{2^N},
\end{eqnarray}
where $I_n^{max}=I_{-n}^{max}$ and $N_n$ is the number of nonzero eigenvalues of $\tilde \rho^{(n)}$.
This number equals to the rank of $\tilde \rho^{(n)}$ which, in turn, can be found as follows.
{\bf Proposition 2.} The rank of the matrix $\tilde \rho^{(n)}$ can be calculated using the formula
\begin{eqnarray}\label{ran}
N_n={\mbox{ran}}\;\tilde \rho^{(n)} = \sum_{k=0}^{N} \min \left(
\left(N\atop k \right) ,\left(N\atop k+n \right)+\left(N\atop k-n \right) \;\;
\right),
\end{eqnarray}
where
the binomial coefficients $\left(N\atop m \right)=0$ for $m<0$ or $m>N$.
{\it Proof.}
For the $n$-order coherence, the number of states with $k$ excited spins equals
$ \left(N\atop k \right)$. The $\pm n$-order coherence collects the elements of $\rho$ responsible for transitions from the states with
$k$ excited spins to the states with
$k\pm n$ excited spins. All together, there are $\left(N\atop k+n \right)+\left(N\atop k-n \right)$ such transitions.
These transitions can be collected into the matrix of $ \left(N\atop k \right)$ columns and
$\left(N\atop k+n \right)+\left(N\atop k-n \right)$ rows, whose maximal rank equals $\min \left(
\left(N\atop k \right) ,\left(N\atop k+n \right)+\left(N\atop k-n \right)\right)$.
Obviously, the rank of $\tilde \rho^{(n)}$ equals the sum of calculated ranks for different $k=0,\dots,N$, i.e., we obtain formula (\ref{ran}).$\Box$
{\bf Consequence.}
For the coherence intensity of the first order ($n=1$) eq.(\ref{ran}) yields:
\begin{eqnarray}\label{ran1}
N_1= \sum_{k=0}^{N}
\left(N\atop k \right) = 2^N.
\end{eqnarray}
{\it Proof.}
We have to show that in this case
\begin{eqnarray}\label{con}
\left(N\atop k \right) \le \left(N\atop k+1 \right)+\left(N\atop k-1 \right),\;\;0\le k \le N.
\end{eqnarray}
First we consider the case $k>1$ and $k<N$. Then
\begin{eqnarray}\label{intermed1}
\left(N\atop k+1 \right)+\left(N\atop k-1 \right) = \left(N\atop k \right) \left(\frac{N-k}{k+1} + \frac{k}{N-k+1}\right).
\end{eqnarray}
Let us show that the expression inside the parenthesis is $\ge 1$.
After simple transformations, this condition takes the form
\begin{eqnarray}\label{ge}
3 k^2 - 3 N k +N^2 -1\ge 0,
\end{eqnarray}
where the lhs is a quadratic expression in $k$. The roots of the lhs read
\begin{eqnarray}
k_{1,2}=\frac{3 N \pm\sqrt{12-3 N^2}}{6},
\end{eqnarray}
which are not real for $N>2$. Therefore the parabola $3 k^2 - 3 N k +N^2-1$ lies above the $k$-axis
for $N> 2$ (and touches it for $N=2$), and consequently condition (\ref{ge}) holds for $N\ge2$. In our case, the minimal $N$ is 2, which corresponds to the 1-qubit sender and 1-qubit receiver without the transmission line between them.
If $k=1$ then, instead of (\ref{intermed1}), we have
\begin{eqnarray}
\left(N\atop 2 \right)+\left(N\atop 0 \right) = \left(N\atop 2 \right) +1 = \left(N\atop 1 \right) \frac{N-1}{2} +1 \ge \left(N\atop 1 \right),\;\;N\in{\mathbb N} .
\end{eqnarray}
Therefore condition (\ref{con}) is also satisfied.
If $k=0$, then $\left(N\atop 0 \right)=1$ and
\begin{eqnarray}
\left(N\atop 1 \right)+\left(N\atop -1 \right) = \left(N\atop 1 \right) >\left(N\atop 0 \right) ,
\end{eqnarray}
therefore condition (\ref{con}) is also satisfied.
The case $k=N$ can be considered in a similar way.
$\Box$
Thus, $N_1$ equals
the maximal possible rank $N_1={\mbox{ran}} \;\tilde \rho^{(1)}$, so that $\displaystyle 2 I_1^{max}= \frac{1}{2^{N}}$.
{ Similarly, for the $N$-order coherence formula (\ref{ran}) contains only two nonzero terms, which give $N_N=2$ and
$2 I_N^{max} =\frac{1}{2^{2N-1}}$.
For the intensities of the other-order coherences we do not give a similar result for arbitrary $N$. The
maximal coherence intensities of the $n$-order ($n>0$) for $N=2,\dots,5$ are given in Table \ref{Table1}.}
This table shows the ordering of $I_n^{max}$:
\begin{eqnarray}\label{order}
I_0^{max} > I_1^{max}> \dots >I_N^{max}.
\end{eqnarray}
\begin{table}
\begin{tabular}{|c|cc|ccc|cccc|ccccc|}
\hline
$N$ & \multicolumn{2}{|c|}{2} & \multicolumn{3}{|c|}{3}&\multicolumn{4}{|c|}{4}&\multicolumn{5}{|c|}{5}\cr
\hline
$n$ & 1 & 2 & 1 & 2 &3 & 1 & 2 &3 &4 &1&2&3&4&5 \cr
$N_n$& 4 & 2 & 8 & 4 &2& 16 & 12 &4&2&32&24&14&4&2\cr
$2 I_n^{max}$&$\displaystyle \frac{1}{4}$ & $\displaystyle \frac{1}{8}$& $\displaystyle \frac{1}{8}$ & $\displaystyle \frac{1}{16}$ & $\displaystyle \frac{1}{32}$&$\displaystyle \frac{1}{16}$ & $\displaystyle \frac{3}{64}$ & $\displaystyle \frac{1}{64}$&$\displaystyle \frac{1}{128}$ & $\displaystyle \frac{1}{32}$ & $\displaystyle \frac{3}{128}$ & $\displaystyle \frac{7}{512}$&$\displaystyle \frac{1}{256}$ & $\displaystyle \frac{1}{512}$ \cr
\hline
\end{tabular}
\caption{The maximal coherence intensities $I_n^{max}$ of the $n$-order coherence and the rank $N_n$
of $\tilde \rho^{(n)}$ for the different numbers of nodes $N$ in a spin system. }\label{Table1}
\end{table}
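The entries of Table \ref{Table1} can be reproduced directly from formula (\ref{ran}) together with $2I_n^{max}=N_n/2^{2N}$; a short sketch doing so is given below.
\begin{verbatim}
# Reproduce Table 1: N_n from the rank formula and 2*I_n^max = N_n / 2^(2N).
from math import comb
from fractions import Fraction

def c(N, m):                      # binomial coefficient, zero outside 0 <= m <= N
    return comb(N, m) if 0 <= m <= N else 0

def N_n(N, n):
    return sum(min(c(N, k), c(N, k + n) + c(N, k - n)) for k in range(N + 1))

for N in range(2, 6):
    for n in range(1, N + 1):
        print(N, n, N_n(N, n), Fraction(N_n(N, n), 2 ** (2 * N)))
\end{verbatim}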
Regarding the minimum of any non-zero-order coherence intensity, its value is obvious:
\begin{eqnarray}
I_n^{min} = 0.
\end{eqnarray}
\section{Evolution of coherences}
\label{Section:ev}
\subsection{Conservation laws}
First of all we recall a well-known conservation law which holds for any evolving quantum system.
{\bf Proposition 3.}
The sum of all coherence intensities is conserved:
\begin{eqnarray}\label{Lrho2}
\frac{d}{d t} \sum_{n=-N}^N I_n = \frac{d}{d t}\sum_{n=-N}^N{\mbox{Tr}} \Big( \rho^{(n)}\rho^{(-n)}\Big) =0.
\end{eqnarray}
{\it Proof.} In fact,
{ consider the Liouville equation
\begin{eqnarray}\label{L}
i \frac{d \rho}{dt} =[H,\rho].
\end{eqnarray}
Using this equation we have
\begin{eqnarray}
i{\mbox{Tr}}\frac{d \rho^2}{dt} = {\mbox{Tr}} [H,\rho^2] =0.
\end{eqnarray}
Therefore
\begin{eqnarray}
{\mbox{Tr}}\rho^2 = {\mbox{Tr}}\left(\sum_{n=-N}^N \rho^{(n)}\rho^{(-n)}\right) = \sum_{n=-N}^N {\mbox{Tr}} (\rho^{(n)}\rho^{(-n)}) = \sum_{n=-N}^N I_n\equiv const.
\end{eqnarray}
which is equivalent to
eq.(\ref{Lrho2})}. $\Box$
In addition, if the system evolves under the Hamiltonian commuting with $I_z$,
\begin{eqnarray}\label{comm}
[H,I_z]=0,
\end{eqnarray}
then there is a family of conservation laws specified as follows.
{\bf Consequence.}
If (\ref{comm}) holds then all coherences conserve, i.e.
\begin{eqnarray}\label{cohI}
\frac{dI_n}{dt} = 0,\;\;\; |n|\le N .
\end{eqnarray}
{\it Proof.}
From eq.(\ref{L}) we have
\begin{eqnarray}
i \rho^{(n)} \frac{d \rho}{dt} + i \frac{d \rho}{dt}\rho^{(-n)} = \rho^{(n)} [ H,\rho] +
[H,\rho] \rho^{(-n)} .
\end{eqnarray}
The trace of this equation reads
\begin{eqnarray}\label{Tr0}
&& {\mbox{Tr}} \left(i \rho^{(n)} \frac{d \rho}{dt} + i \frac{d \rho}{dt}\rho^{(-n)} \right) =
i \frac{d}{dt } {\mbox{Tr}} \Big( \rho^{(n)} \rho^{(-n)}\Big) \equiv \\\nonumber
&&
i \frac{d I_n}{dt } = {\mbox{Tr}}\Big(\rho^{(n)} H\rho-\rho H\rho^{(n)}\Big) -
{\mbox{Tr}}\Big( \rho H \rho^{(-n)}-\rho^{(-n)} H \rho\Big).
\end{eqnarray}
We can introduce factors $e^{i \phi I_z}$ and $e^{-i \phi I_z} $ under the trace, substitute expansion (\ref{RhoC}) for $\rho$ and use commutation relation (\ref{comm}). Then we have
\begin{eqnarray}\label{TrTr}
&&
{\mbox{Tr}} \Big(e^{i \phi I_z} (\rho^{(n)} H\rho-\rho H\rho^{(n)} )e^{-i \phi I_z}\Big)
-{\mbox{Tr}} \Big(e^{i \phi I_z} (\rho H \rho^{(-n)}-\rho^{(-n)} H \rho)e^{-i \phi I_z}\Big) =\\\nonumber
&&
\sum_{k=-N}^N \left({\mbox{Tr}} \Big( e^{i \phi (n+k) } (\rho^{(n)} H\rho^{(k)} -\rho^{(k)} H\rho^{(n)})\Big) -
{\mbox{Tr}}\Big( e^{i \phi (k-n)} (\rho^{(k)} H \rho^{(-n)}-\rho^{(-n)} H \rho^{(k)})\Big) \right).
\end{eqnarray}
Since this trace must be independent of $\phi$, only the terms with $k=-n$ in the first trace and $k=n$ in the second trace survive.
Therefore expression (\ref{TrTr}) is identically zero and eq.(\ref{Tr0}) yields the
set of conservation laws (\ref{cohI}).
$\Box$
Equalities (\ref{cohI}) represent the set of conservation laws associated with the dynamics of a spin system under the Hamiltonian $H$
commuting with $I_z$.
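The conservation laws (\ref{cohI}) can be illustrated numerically for a small chain. In the sketch below, the flip-flop XX coupling and the random initial state are arbitrary choices used only for illustration; every intensity $I_n$ computed from definition (\ref{defI}) remains constant in time.
\begin{verbatim}
# Illustration of the conservation laws: under a Hamiltonian commuting with I_z
# (here an XX chain) every coherence intensity I_n is constant in time.
import numpy as np
from scipy.linalg import expm

sp = np.array([[0, 1], [0, 0]], complex); sm = sp.T.conj()

def op(single, site, N):
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

N = 3
H = sum(op(sp, i, N) @ op(sm, i + 1, N) + op(sm, i, N) @ op(sp, i + 1, N)
        for i in range(N - 1))                      # [H, I_z] = 0

def intensities(rho):
    exc = [bin(i).count("1") for i in range(2 ** N)]
    I = {}
    for i in range(2 ** N):
        for j in range(2 ** N):
            I[exc[j] - exc[i]] = I.get(exc[j] - exc[i], 0.0) + abs(rho[i, j]) ** 2
    return I

rng = np.random.default_rng(1)
A = rng.normal(size=(2 ** N, 2 ** N)) + 1j * rng.normal(size=(2 ** N, 2 ** N))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real                         # random density matrix
for t in (0.0, 0.7, 2.3):
    U = expm(-1j * t * H)
    print(t, intensities(U @ rho0 @ U.conj().T))    # the I_n do not change with t
\end{verbatim}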
\subsection{On the map $\rho^{(n)}(0) \to \rho^{(n)}(t)$}
Here we derive an important consequence of conservation laws (\ref{cohI}) describing the dependence of the elements of the evolutionary matrix $\rho^{(n)}(t)$ on the elements of the initial matrix $\rho^{(n)}(0)$.
First of all we notice that the Hamiltonian commuting with $I_z$ has the following block structure:
\begin{eqnarray}\label{Hn}
H=\sum_{l=0}^N H^{(l)},
\end{eqnarray}
where the block $H^{(l)}$ governs the dynamics of the states with $l$ excited spins (the $l$-excitation block).
Then any matrix $\rho^{(n)}$ can be also represented as
\begin{eqnarray}
\rho^{(n)}=\sum_{l=0}^{N-n} \rho^{(l,l+n)},\;\;
\rho^{(-n)}=\sum_{l=n}^{N} \rho^{(l,l-n)},\;\;n=0,1,\dots,N.
\end{eqnarray}
Then, introducing the evolution operators
\begin{eqnarray}
V(t)=e^{-i H t},\;\;\; V^{(l)}(t)=e^{-i H^{(l)} t},
\end{eqnarray}
we can write the evolution of the density matrix as
\begin{eqnarray}
&&
\rho(t)=V(t) \rho(0) V^+(t) = \sum_{n=-N}^N V(t) \rho^{(n)}(0) V^+(t) =\\\nonumber
&&
\sum_{n=0}^N \sum_{l=0}^{N-n} V^{(l)}(t) \rho^{(l,l+n)}(0) (V^{(l+n)}(t))^+ +
\sum_{n=1}^{N} \sum_{l=n}^{N} V^{(l)}(t) \rho^{(l,l-n)}(0) (V^{(l-n)}(t))^+ .
\end{eqnarray}
Since the operators $V^{(l)}$ do not change the excitation number, we can write
\begin{eqnarray}\label{In0}
&&
\rho(t) =\sum_{n=-N}^N \rho^{(n)}(t),\\\label{In}
&&
\rho^{(n)}(t) = \sum_{l=0}^{N-n} V^{(l)}(t) \rho^{(l,l+n)}(0) (V^{(l+n)}(t))^+\equiv P^{(n)} \left[t, \rho^{(n)}(0)\right],\\\nonumber
&&
\rho^{(-n)} = (\rho^{(n)}(t))^+ = \sum_{l=n}^{N} V^{(l)}(t) \rho^{(l,l-n)}(0) (V^{(l-n)}(t))^+\equiv P^{(-n)} \left[t, \rho^{(-n)}(0)\right],
\end{eqnarray}
where we introduce the linear evolutionary operators $P^{(n)}$ ($P^{(-n)}$) mapping
the matrix $\rho^{(n)}(0)$ ($\rho^{(-n)}(0)$) into the evolutionary matrix $\rho^{(n)}(t)$ ($\rho^{(-n)}(t)$) responsible for the same $n$-order ($(-n)$-order) coherence, i.e.,
the operator $P^{(n)}$ applied to the matrix of the $n$-order coherence does not generate coherences of a different order. We notice that, in a certain sense, formulas (\ref{In}) are similar to the Liouville representation \cite{Fano}. Hereafter we do not write $t$ in the
arguments of $P^{(n)}$ for simplicity.
\section{Coherence transfer from sender to receiver}
\label{Section:cohtr}
\subsection{Coherence transfer as map $\rho^{(S)}(0)\to \rho^{(R)}(t)$}
\label{Section:map}
Now we consider the process of coherence transfer from the $M$-qubit sender ($S$) to the $M$-qubit receiver ($R$) connected by the transmission line ($TL$).
The receiver's density matrix reads
\begin{eqnarray}\label{rhoR}
\rho^R(t)={\mbox{Tr}}_{/R}\rho(t)= \sum_{n=-M}^M \rho^{(R;n)}(t),
\end{eqnarray}
where the trace is taken over all the nodes of the quantum system except the receiver, and $\rho^{(R;n)}$ means the submatrix of $\rho^{(R)}$ contributing into the $n$-order coherence.
To proceed further, we consider the tensor product initial state
\begin{eqnarray}
\rho(0)=\rho^{(S)}(0)\otimes \rho^{(TL,R)}(0).
\end{eqnarray}
Obviously,
\begin{eqnarray}
\rho^{(n)}(0) = \sum_{n_1+n_2=n} \rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0),
\end{eqnarray}
where $\rho^{(S;n)}$ and $\rho^{(TL,R;n)}$ are the matrices contributing to the $n$-order coherence of, respectively, $\rho^{(S)}$ and $\rho^{(TL,R)}$.
Using expansion (\ref{In0}) and operators $P^{(n)}$ defined in (\ref{In}) we can write
\begin{eqnarray}
\rho^{(R)} = {\mbox{Tr}}_{/R} \sum_{n=-N}^N P^{(n)} \left[\rho^{(n)}(0)\right]=
{\mbox{Tr}}_{/R} \sum_{n=-N}^N \sum_{n_1+n_2=n} P^{(n)} \left[\rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0)\right].
\end{eqnarray}
Next we need the following Proposition.
{\bf Proposition 4.}
The partial trace of matrix $\rho$ does not mix coherences of different order and, in addition,
\begin{eqnarray}\label{PT}
{\mbox{Tr}}_{/R} \rho^{(n)} = 0,\;\; |n|>M,
\end{eqnarray}
{\it Proof.}
We split the whole multiplicative basis of quantum state into the $2^M$-dimensional sub-basis $B^{(R)}$ of the receiver's states and the $2^{N-M}$-dimensional sub-basis of the subsystem consisting of the sender and the transmission line $B^{(S,TL)}$,
i.e., $|i\rangle = |i^{S,TL}\rangle \otimes |i^R\rangle $. Then
elements of the density matrix $\rho$ are enumerated by the double indexes $i=(i^{S,TL},i^R)$ and $j=(j^{S,TL},j^R)$, i.e.,
\begin{eqnarray}
\rho_{ij}=\rho_{(i^{S,TL},i^R),(j^{S,TL},j^R)}.
\end{eqnarray}
Then eq.(\ref{rhoR}) written in components reads
\begin{eqnarray}
\rho^{(R)}_{i^Rj^R} = {\mbox{Tr}}_{/R} \rho = \sum_{i^{S,TL}} \rho_{(i^{S,TL},i^R),(i^{S,TL},j^R)}.
\end{eqnarray}
Therefore the coherences in the matrix $\rho^{(R)}$ are formed only by the transitions in the subspace spanned by $B^{(R)}$. Therefore, the matrix $\rho^{(R;n)}$ forming the $n$-order coherence of the receiver consists of the elements included into the $n$-order coherence of the whole quantum system. Consequently, trace does not mix coherences.
Since the receiver is an $M$-qubit subsystem, it can form only the coherences of order $n$ such that $|n|\le M$, which justifies condition (\ref{PT}).
$\Box$
This Proposition allows us to conclude that
\begin{eqnarray}\label{Rn}
\rho^{(R;n)} = {\mbox{Tr}}_{/R} \sum_{n_1+n_2=n} P^{(n)}\left[ \rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0)\right],\;\; |n|\le M.
\end{eqnarray}
Formula (\ref{Rn}) shows that, in general, all the coherences of the sender's initial state $\rho^{(S)}(0)$ are mixed in any particular-order coherence of the receiver's density matrix $\rho^R$. However,
this is not the case if the initial state $\rho^{(TL,R)}(0)$ consists of elements contributing only to the zero-order coherence.
Then (\ref{Rn}) gets the form
\begin{eqnarray}\label{Rn2}
\rho^{(R;n)} = {\mbox{Tr}}_{/R} \Big( P^{(n)} \Big[\rho^{(S;n)}(0)\otimes \rho^{(TL,R;0)}(0)\Big]\Big),\;\; |n|\le M.
\end{eqnarray}
In this case the elements contributing to the $n$-order coherence of $\rho^S(0)$ contribute only to the $n$-order coherence of $\rho^R(t)$.
\subsection{Restoring of sender's state at receiver's side}
\label{Section:selecting}
In Sec.~\ref{Section:map} we showed that, although the coherences of the sender's initial state are properly separated in the
receiver's state, the elements contributing to a particular $n$-order coherence of $\rho^S_0$ are mixed in $\rho^R_n$. We would like to separate the elements of $\rho^S_0$ in $\rho^R(t)$ so that, in the ideal case,
\begin{eqnarray}\label{rhoij}
&&\rho^R_{ij}(t) = f_{ij}(t) \rho^S_{ij},\;\;(i,j)\neq (2^M,2^M),\\\nonumber
&&\rho^R_{2^M2^M}(t) = 1- \sum_{i=1}^{2^M-1} f_{ii}(t) \rho^S_{ii}.
\end{eqnarray}
We refer to the state with elements satisfying (\ref{rhoij}) as a completely restored state.
In general, relation (\ref{rhoij}) cannot be realized for all elements of $\rho^R$; in other words, complete restoring of the sender's state is, in general, impossible.
A simple case in which complete restoring is possible is the transfer of a one-qubit sender state to a one-qubit receiver, because then there is only one element $\rho^S_{12}$ in $\rho^S$ contributing to the first-order coherence in $\rho^R$ and one independent element $\rho^S_{11}$ contributing to the zero-order coherence.
In addition, we notice that the highest-order coherences have the form (\ref{rhoij}) in the general case, because there is only one element of the density matrix contributing to the $\pm M$-order coherence.
Regarding the other coherences,
we can try to partially restore at least some of the elements using a local unitary transformation at the receiver side.
\subsubsection{Unitary transformation of extended receiver as state-restoring tool}
\label{Section:U}
Thus we can use the unitary transformation at the receiver to (partially) restore the initial sender's state $\rho^{(S)}(0)$ in the density matrix $\rho^{(R)}(t)$ at some time instant $t$ in the sense of definition (\ref{rhoij}).
It is easy to estimate that the number of parameters in the unitary transformation $U^{(R)}$ of the receiver itself is not enough to restore all the elements of the density matrix $\rho^{(S)}(0)$. To make complete restoring possible we must increase the number of parameters in the unitary transformation by extending the receiver to $M^{(ext)}>M$ nodes and use the transformation $U^{(ext)}$ of this extended receiver to restore the state $\rho^{(S)}(0)$.
Thus we consider the $M^{(ext)}$-qubit extended receiver
and require that the above-mentioned unitary transformation does not mix different submatrices $\rho^{(n)}$. This is possible if $U^{(ext)}$ commutes with the $z$-projection of the total spin momentum of the extended receiver.
In this case the matrix $\rho^R$ can be obtained from $\rho$ in three steps: (i) reducing $\rho(t)$ to the density matrix of the extended receiver
$\rho^{R_{ext}}(t)$, (ii) applying the restoring unitary transformation $U^{(ext)}$ and (iii) reducing the resulting density matrix $U^{(ext)}\rho^{R_{ext}}(t)(U^{(ext)})^+$ to
$\rho^{R}$.
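The three-step map just described can be illustrated by a minimal numerical sketch (Python/NumPy; the chain size, the random test state and the placeholder unitary below are our own illustrative assumptions and do not reproduce the actual chain dynamics):
\begin{verbatim}
import numpy as np

def trace_out_first(rho, n_total, n_keep):
    # partial trace over the first n_total - n_keep qubits,
    # keeping the last n_keep qubits
    d_out, d_keep = 2 ** (n_total - n_keep), 2 ** n_keep
    r = rho.reshape(d_out, d_keep, d_out, d_keep)
    return np.einsum('iaib->ab', r)

def random_state(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(0)
N, M_ext, M = 4, 3, 2                    # illustrative sizes only

rho = random_state(2 ** N, rng)          # stands for rho(t) of the whole chain
# (i) reduce rho(t) to the extended receiver
rho_ext = trace_out_first(rho, N, M_ext)
# (ii) apply a restoring unitary U^(ext); a random unitary as a placeholder
U_ext = np.linalg.qr(rng.normal(size=(2 ** M_ext, 2 ** M_ext))
                     + 1j * rng.normal(size=(2 ** M_ext, 2 ** M_ext)))[0]
rho_ext = U_ext @ rho_ext @ U_ext.conj().T
# (iii) reduce the result to the receiver
rho_R = trace_out_first(rho_ext, M_ext, M)
print(np.isclose(np.trace(rho_R).real, 1.0))   # True: trace is preserved
\end{verbatim}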
To find out the general form of the unitary transformation we consider this transformation in the basis constructed on the matrices $I^{\pm}_j$ and $I_{zj}$.
This basis reads:
for the one-qubit subsystem ($i$th qubit of the whole quantum system),
\begin{eqnarray}\label{B1}
B^{(i)}: E, I_{zi}, I^+_i, I^-_i;
\end{eqnarray}
for the two-qubit subsystem (the $i$th and $j$th qubits),
\begin{eqnarray}\label{B2}
B^{(ij)}=B^{(i)}\otimes B^{(j)};
\end{eqnarray}
for the three-qubit subsystem (the $i$th, $j$th and $k$th qubits),
\begin{eqnarray}\label{B3}
B^{(ijk)}=B^{(ij)}\otimes B^{(k)};
\end{eqnarray}
for the four-qubit subsystem (the $i$th, $j$th, $k$th and $m$th qubits),
\begin{eqnarray}\label{B4}
B^{(ijkm)}=B^{(ij)}\otimes B^{(km)},
\end{eqnarray}
and so on.
The elements of the basis commuting with $I_z$ are formed by the pairs $I^+_p I^-_q$ and by the diagonal matrices $I_{zk}$, $E$.
Thus, the one-qubit basis (\ref{B1}) involves two elements commuting with $I_z$:
\begin{eqnarray}\label{B1U}
B^{(C;i)}: E, I_{zi}.
\end{eqnarray}
The two-qubit basis (\ref{B2}) involves $6$ such elements:
\begin{eqnarray}\label{B2U}
B^{(C;ij)}: E, \;\;I_{zi},\;\; I_{zj}, \;\;I_{zi} I_{zj}, \;\;I^+_i I^-_j,\;\; I^+_j I^-_i.
\end{eqnarray}
The three-qubit basis (\ref{B3}) involves 20 such elements:
\begin{eqnarray}\label{B3U}
B^{(C;ijk)}: E,\;\; I_{zp},\;\; I_{zp} I_{zs},\;\; I_{zi} I_{zj}I_{zk},\;\; I^+_p I^-_s,I^+_p I^-_s I_{zr}, \;\; p,s,r\in \{i,j,k\}, \;r\neq p \neq s .
\end{eqnarray}
The four-qubit basis (\ref{B4}) involves 70 such elements:
\begin{eqnarray}\label{B4U}
B^{(C;ijkm)} &:& E, \;\;I_{zp}, \;\; I_{zp} I_{zs},\;\; I_{zp} I_{zs}I_{zr},\;\;I_{zi} I_{zj} I_{zk} I_{zm},\;\;
I^+_p I^-_s,\;\;I^+_p I^-_s I_{zr},\;\;I^+_p I^-_s I_{zr} I_{zq}, \\\nonumber
&&
I^+_p I^-_s I^+_r I^-_q,\;\;p,s,r,q \in \{i,j,k,m\},\;\; p\neq s \neq r \neq q ,
\end{eqnarray}
and so on.
However, there is a common phase which cannot affect the elements of the density matrix. Therefore, the number of parameters in the above unitary transformations which can affect the density-matrix elements is less than the dimensionality of the corresponding basis (\ref{B1U})--(\ref{B4U}) by one.
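The dimensions quoted above admit a simple independent check: a product-basis operator commutes with $I_z$ exactly when it contains equally many raising and lowering factors, and for $q$ qubits the number of such operators equals $\binom{2q}{q}$. A short counting sketch (Python; it merely cross-checks the numbers 2, 6, 20 and 70 and is not part of the derivation):
\begin{verbatim}
from math import comb

def count_iz_commuting(q):
    # product-basis operators on q spins built from E, I_z, I^+, I^- that
    # commute with I_z: the numbers of raising and lowering factors coincide
    total = 0
    for m in range(q // 2 + 1):
        # m sites carry I^+, m other sites carry I^-, the rest carry E or I_z
        total += comb(q, m) * comb(q - m, m) * 2 ** (q - 2 * m)
    return total

for q in range(1, 5):
    print(q, count_iz_commuting(q), comb(2 * q, q))
# prints 2, 6, 20, 70 in both columns, matching (B1U)-(B4U)
\end{verbatim}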
\section{Particular model}
\label{Section:model}
As a particular model,
we consider the spin-1/2 chain with two-qubit sender and receiver and the tensor product initial state
\begin{eqnarray}\label{in2}
\rho(0)=\rho^S(0) \otimes \rho^{TL,R}(0),
\end{eqnarray}
where $\rho^S(0)$ is an arbitrary initial state of the sender
and $\rho^{TL,R}(0)$ is the initial thermal equilibrium state of the transmission line and receiver,
\begin{eqnarray}
\label{inTLB}
\rho^{TL,R} &=&\frac{e^{bI_{z}}}{Z},\;\;Z=\left(2 \cosh\frac{b}{2}\right)^{N-2},
\end{eqnarray}
where $b=\frac{1}{k T}$, $T$ is temperature and $k$ is the Boltzmann constant. Thus, both $\rho^{(S)}$ and
$\rho^{(R)}$ are $4\times 4$ matrices.
Let the evolution of the spin chain be governed by the nearest-neighbor $XX$-Hamiltonian \cite{Mattis}
\begin{eqnarray}\label{XY}
H=\sum_{i=1}^{N-1} D (I_{ix}I_{(i+1)x} +I_{iy}I_{(i+1)y}),
\end{eqnarray}
where $D$ is a coupling constant. Obviously, $[H,I_z]=0$.
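A minimal numerical sketch of Hamiltonian (\ref{XY}) (Python/NumPy; we assume $I_\alpha=\sigma_\alpha/2$, a short chain and an arbitrary coupling constant, purely as an independent cross-check of $[H,I_z]=0$):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
id2 = np.eye(2)

def on_site(op, site, n):
    # embed a single-spin operator acting on `site` into an n-spin chain
    mats = [op if k == site else id2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

N, D = 5, 1.0   # illustrative chain length and coupling constant
H = sum(D * (on_site(sx, i, N) @ on_site(sx, i + 1, N)
             + on_site(sy, i, N) @ on_site(sy, i + 1, N))
        for i in range(N - 1))
Iz = sum(on_site(sz, i, N) for i in range(N))

print(np.allclose(H @ Iz - Iz @ H, 0))  # True: the XX chain conserves I_z
\end{verbatim}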
Using the Jordan-Wigner transformations \cite{JW,CG} we can derive the explicit formula for the density matrix of the two-qubit receiver (\ref{rhoR})
but we do not present the details of this derivation for the sake of brevity.
To proceed further, let us write out formulas (\ref{Rn}) for each particular-order coherence as follows.
For the zero order coherence we have
\begin{eqnarray}\label{coh0}
\rho^{(R;0)}_{ij}&=& \alpha_{ij;11} \rho^S_{11} + \alpha_{ij;22} \rho^S_{22} +
\alpha_{ij;33} \rho^S_{33} + \alpha_{ij;44} \rho^S_{44} +
\alpha_{ij;23} \rho^S_{23} + \alpha_{ij;32} (\rho^S_{23})^*
,\\\nonumber
&&(i,j)= (1,1),(2,2),(3,3),(2,3)\\\nonumber
\rho^{(R;0)}_{44} &=& 1- \sum_{i=1}^3 \rho^R_{ii},\;\;\alpha_{ii;32}=\alpha_{ii;23}^*,
\end{eqnarray}
Here there are $12$ real parameters $\alpha_{ii;jj}$, $i=1,2,3$, $j=1,2,3,4$, and $9$ complex parameters $\alpha_{ii;23}$, $i=1,2,3$, $\alpha_{23;ii}$, $i=1,2,3,4$, $\alpha_{23;23}$ and $\alpha_{23;32}$, i.e., 30 real parameters in total.
For the first order coherence:
\begin{eqnarray}\label{coh1}
(\rho^R_1)_{ij}= \alpha_{ij;12} \rho^S_{12} + \alpha_{ij;13} \rho^S_{13} +
\alpha_{ij;24} \rho^S_{24} + \alpha_{ij;34} \rho^S_{34},\;\;
(i,j)= (1,2),(1,3),(2,4),(3,4),
\end{eqnarray}
Here there are 16 complex parameters, i.e., 32 real ones.
Finally, for the second order coherence we have
\begin{eqnarray}\label{coh2}
\rho^R_{14}= \alpha_{14;12} \rho^S_{14},
\end{eqnarray}
Here there is one complex parameter (two real ones).
In all these formulas,
$\alpha_{ij;nm}$ are defined by the interaction Hamiltonian and they depend on the time $t$.
\subsection{Simple example of $\rho^{(S;1)}$-restoring}
We see that there are 64 real parameters we would like to adjust in eqs.~(\ref{coh0})--(\ref{coh2}).
For the purpose of complete restoring of an arbitrary state we need an extended receiver of $M^{(ext)}=4$ nodes, so that the number of effective parameters in the unitary transformation described in Sec.~\ref{Section:U} is 69.
However, for the sake of simplicity, here we use the unitary transformation of the two-qubit receiver to perform a complete restoring of the $\pm1$-order coherence matrices $\rho^{(S;\pm 1)}(0)$ of a special form, namely
\begin{eqnarray}\label{inS}
\rho^{(S;1)} + \rho^{(S;-1)} =
\left(
\begin{array}{cccc}
0&a&a&0\cr
a^*&0&0&a\cr
a^*&0&0&0\cr
0&a^*&0&0
\end{array}
\right).
\end{eqnarray}
The unitary transformation constructed on the basis (\ref{B2U}) reads:
\begin{eqnarray}\label{U2q}
U=e^{i \phi_1 ( I_1^+I_2^- + I_1^-I_2^+)} e^{ \phi_2 ( I_1^+I_2^- - I_1^-I_2^+)} e^{i \Phi},
\end{eqnarray}
where $\Phi={\mbox{diag}}(\phi_3,\dots,\phi_6)$ is a diagonal matrix and $\phi_i$, $i=1,\dots,6$, are arbitrary real parameters.
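As a cross-check of the structure of transformation (\ref{U2q}), the following sketch (Python/SciPy; the spin-operator conventions $I_\alpha=\sigma_\alpha/2$, $I^\pm=I_x\pm iI_y$ and the test values of $\phi_i$ are our own assumptions) verifies that $U$ is unitary and commutes with the total $z$-projection, so that it does not mix coherences of different order:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sz = np.diag([0.5, -0.5])
sp = np.array([[0., 1.], [0., 0.]])     # I^+
sm = sp.T                               # I^-
id2 = np.eye(2)

I1p, I1m = np.kron(sp, id2), np.kron(sm, id2)
I2p, I2m = np.kron(id2, sp), np.kron(id2, sm)
Iz = np.kron(sz, id2) + np.kron(id2, sz)

phi = [0.7, 1.1, 0.2, -0.4, 0.9, 0.3]   # arbitrary test parameters phi_1..phi_6
Phi = np.diag(phi[2:])                  # diag(phi_3,...,phi_6)
U = expm(1j * phi[0] * (I1p @ I2m + I1m @ I2p)) \
    @ expm(phi[1] * (I1p @ I2m - I1m @ I2p)) \
    @ expm(1j * Phi)

print(np.allclose(U @ U.conj().T, np.eye(4)))   # True: U is unitary
print(np.allclose(U @ Iz - Iz @ U, 0))          # True: U commutes with I_z
\end{verbatim}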
Eqs. (\ref{coh1}) reduce to
\begin{eqnarray}\label{coh1ex}
(\rho^R_1)_{ij}=\alpha_{ij} a ,\;\; \alpha_{ij}= \alpha_{ij;12} + \alpha_{ij;13} +
\alpha_{ij;24},\;\;
(i,j)= (1,2),(1,3),(2,4),(3,4).
\end{eqnarray}
We consider the chain of $N=20$ nodes and set $b=10$. The time instant for the state registration at the receiver is chosen by the requirement of maximizing the maximal-order coherence intensity (the second order in this model), because this intensity has the least maximal possible value according to (\ref{order}). This time instant was found numerically; it equals $D t=24.407$.
Next,
using the parameters $\phi_i$ of the unitary transformation (\ref{U2q}) we can set the coefficient
$\alpha_{34}$ to zero and thus obtain the completely restored matrices $\rho^{(R;\pm1)}$ in the form
\begin{eqnarray}\label{Rt}
\rho^{(R;1)} + \rho^{(R;-1)} =
\left(
\begin{array}{cccc}
0&\alpha_{12} a&\alpha_{13}a&0\cr
\alpha_{12}^*a^*&0&0&\alpha_{24}a\cr
\alpha_{13}^*a^*&0&0&0\cr
0&\alpha_{24}^*a^*&0&0
\end{array}
\right).
\end{eqnarray}
The appropriate values of the parameters $\phi_i$ are the following:
\begin{eqnarray}
\phi_1=2.41811,\;\;\phi_2=1.57113,\;\;\phi_k=0,\;\;k=3,\dots,6.
\end{eqnarray}
Therewith,
\begin{eqnarray}
\alpha_{12}=0.00021 + 0.63897 i,\;\;\;\alpha_{13}=0.00010 - 0.30585 i,\;\;\alpha_{24}=0.00010-0.30582 i .
\end{eqnarray}
Thus, using the unitary transformation of the receiver we restore the sender's initial matrices $\rho^{(S;\pm1)}(0)$ in the sense of definition (\ref{rhoij}).
This result holds for arbitrary admissible initial matrices $\rho^{(S;0)}(0)$ and $\rho^{(S;2)}(0)$.
\section{Conclusion}
\label{Section:conclusion}
The MQ coherence intensities are characteristics of a density matrix which can be measured in MQ NMR experiments. We show that the coherences evolve independently provided that
the Hamiltonian governing the spin dynamics conserves the total $z$-projection of the spin momentum. This is an important property of quantum coherences which allows us to store them, in the sense that the family of density-matrix elements contributing to a particular-order coherence does not mix with other elements during the evolution. In addition, if we connect the spin system with the formed coherences (called the sender in this case) to the transmission line and receiver, we can transfer these coherences without mixing them, provided that the initial state $\rho^{(TL,R)}(0)$ contains only the zero-order coherence.
We also describe a restoring method which allows one (at least partially) to reconstruct the sender's initial state. This state-restoring is based on a unitary transformation at the receiver side involving, in general, the so-called extended receiver, with the purpose of enlarging the number of parameters in the unitary transformation. The partial state-restoring of the two-qubit receiver via a unitary transformation on it is performed as the simplest example. Examples of more accurate restoring involving the extended receiver require
considerable technical work and will be presented in a separate paper.
This work is partially supported by the Program of RAS ``Element base of quantum
computers'' and by the Russian Foundation for Basic Research, grants No.15-07-07928 and
16-03-00056.
\end{document}
\begin{document}
\title{The classification of $n$-dimensional nilpotent non-Tortkara anticommutative algebras with $(n-4)$-dimensional annihilator}
\begin{abstract}
In this paper, we give a complete classification of $n$-dimensional nilpotent non-Tortkara anticommutative algebras with $(n-4)$-dimensional annihilator over $\mathbb{C}$.
\end{abstract}
\keywords{Anticommutative algebra, Tortkara algebra, Automorphism group.}
\section{Introduction}
One of the classical problems in the theory of algebras is to classify (up to isomorphism) the nilpotent algebras of dimension $n$ from a certain variety defined by some family of polynomial equalities. There
are many results related to the algebraic classification of small-dimensional nilpotent algebras in the varieties of Jordan,
Lie, Leibniz, Zinbiel and other algebras: the algebraic classification of nilpotent Lie algebras of dimension 7 (over algebraically closed fields and over $\mathbb{R}$) \cite{gong1998classification}; the algebraic classification of five-dimensional nilpotent Jordan algebras \cite{hegazi2016classification2}; the algebraic classification of four-dimensional nilpotent Leibniz algebras \cite{demir2017classification}; and the algebraic classification of complex 5-dimensional Zinbiel algebras \cite{alvarez2022algebraic}.
There are also results on the algebraic classification of $n$-dimensional nilpotent algebras with $(n-s)$-dimensional annihilator, where $s$ is a small positive integer: the classification of $n$-dimensional anticommutative algebras with $(n-3)$-dimensional annihilator \cite{calderon2019classification}; the classification of $n$-dimensional non-associative Jordan algebras with $(n-3)$-dimensional annihilator \cite{hegazi2018classification}; and the classification of $n$-dimensional non-Lie Malcev algebras with $(n-4)$-dimensional annihilator \cite{hegazi2016classification}.
In this paper, we give a complete classification of $n$-dimensional nilpotent non-Tortkara anticommutative algebras with $(n-4)$-dimensional annihilator over $\mathbb{C}$. An algebra $(A,\cdot)$ over $\mathbb{C}$ is anticommutative if it satisfies: $xy+yx=0$ for all $x, y \in A$. Let $A$ be an anticommutative algebra. The ideal $\mathrm{A n n}(A)=\{x \in A:xA=0\}$ is called the annihilator of $A$. An anticommutative
algebra is a Tortkara algebra if it satisfies:
$$(ab)(cb) = J(a, b, c)b, \quad\text{where } J(a, b, c)=(ab)c + (bc)a + (ca)b.$$
Our main result in this paper is the following theorem.
\begin{thm}[Main result]\label{thm1}
Let $A$ be an $n$-dimensional nilpotent non-Tortkara anticommutative algebra with $\dim \mathrm{Ann}(A)=n-4$
over $\mathbb{C}$. Then $A$ is isomorphic to one of the following:
\begin{itemize}
\item
$n=4$
\begin{itemize}
\item $\mathrm{empty\,\,set};$
\end{itemize}
\item
$n=5$
\begin{itemize}
\item
$A_{5,1}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5};$
\end{itemize}
\item
$n=6$
\begin{itemize}
\item
$A_{6,1}=A_{5,1}\oplus \mathbb{C}e_6;$
\item
$A_{6,2}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6};$
\item
$A_{6,3}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{3}=e_{6};$
\item
$A_{6,4}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{1} e_{4}=e_{6};$
\item
$A_{6,5}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{3}=e_{6}, e_{1} e_{4}=e_{6};$
\end{itemize}
\item
$n=7$
\begin{itemize}
\item
$A_{7,i}=A_{6,i}\oplus\mathbb{C}e_7,i=1,\cdots,5;$
\item
$A_{7,6}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{2} e_{3}=e_{7};$
\item
$A_{7,7}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{1} e_{4}=e_{7};$
\item
$A_{7,8}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{1} e_{4}=e_{6}, e_{2} e_{3}=e_{7};$
\item
$A_{7,9}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{1} e_{4}=e_{7}, e_{2} e_{3}=e_{7};$
\end{itemize}
\item
$n=8$
\begin{itemize}
\item
$A_{8,i}=A_{7,i}\oplus\mathbb{C}e_8,i=1,\cdots,9;$
\item
$A_{8,10}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{2} e_{3}=e_{7},e_{1} e_{4}=e_{8};$
\end{itemize}
\item
$n\geqslant9$
\begin{itemize}
\item
$
A_{n, i}=A_{8, i} \oplus \mathbb{C} e_{9} \oplus \cdots \oplus \mathbb{C} e_{n}, i=1, \ldots, 10.
$
\end{itemize}
\end{itemize}
\end{thm}
The paper is organized as follows. In Section \ref{sec2}, we describe a method (the Skjelbred-Sund method) for classifying all anticommutative algebras of dimension $n$ with $s$-dimensional annihilator, given those algebras of dimension $n-s$, which also
appeared in \cite{hegazi2016classification}. In Section \ref{sec3}, the proof of Theorem \ref{thm1} is given.
Throughout the paper we use the following notation. All the vector spaces and algebras will be assumed to be finite dimensional over $\mathbb{C}$. The multiplication of an algebra is specified by giving only the nonzero products among the basis elements.
\section{The analogue of Skjelbred-Sund method for anticommutative algebras}\label{sec2}
The analogue of the Skjelbred–Sund method for Malcev algebras was given in \cite{hegazi2016classification}. Inspired by \cite{hegazi2016classification}, we give the analogue of the Skjelbred–Sund method for anticommutative algebras in this section. Proofs of all the statements mentioned in this section can be found in \cite{hegazi2016classification}.
Let ${A}$ be an anticommutative algebra and ${V}$ a vector space, and let ${Z}^{2}({A}, {V})$ denote the set of all skew-symmetric bilinear maps $\theta: {A} \times {A} \longrightarrow {V}$. For $f \in \mathrm{Hom}({A}, {V})$, we define $\delta f \in {Z}^{2}({A}, {V})$ by the equality $\delta f(x, y)=f(x y)$ and set ${B}^{2}({A}, {V})=\{\delta f \mid f \in \operatorname{Hom}({A}, {V})\}$. One can easily check that ${B}^{2}({A}, {V})$ is a linear subspace of ${Z}^{2}({A}, {V})$. Let us define ${H}^{2}({A}, {V})$ as the quotient space ${Z}^{2}({A}, {V}) / {B}^{2}({A}, {V})$. The equivalence class of $\theta \in {Z}^{2}({A}, {V})$ in ${H}^{2}({A}, {V})$ is denoted by $[\theta]$. As usual, we call the elements of ${Z}^{2}({A}, {V})$ cocycles, those of ${B}^{2}({A}, {V})$ coboundaries, and we call ${H}^{2}({A}, {V})$ the corresponding second cohomology space.
Suppose now that $\operatorname{dim} {A}=m<n$ and $\operatorname{dim} {V}=n-m$. For $\theta \in Z^{2}(A, V)$, we can define on the space ${A}_{\theta}:={A} \oplus {V}$ the anticommutative bilinear product given by $(x+x^{\prime})(y+y^{\prime})=x y+\theta(x, y)$ for $x, y \in {A}$, $x^{\prime}, y^{\prime} \in {V}$. Then $A_{\theta}$ is an anticommutative algebra, and ${A}_{\theta}$ is nilpotent if and only if ${A}$ is nilpotent. The set $\theta^{\perp}=\{x \in A: \theta(x, A)=0\}$ is called the radical of $\theta$. Then $\mathrm{Ann}\left(A_{\theta}\right)=\left(\theta^{\perp} \cap \mathrm{Ann}(A)\right) \oplus V$ by \cite[Lemma 4]{hegazi2016classification}. The algebra ${A}_{\theta}$ is called an $(n-m)$-dimensional central extension of ${A}$ by ${V}$.
\begin{lem}\cite[Lemma 5]{hegazi2016classification}
Let $A$ be an anticommutative algebra with $\mathrm{Ann}(A) \neq 0$. Then there exists, up to isomorphism, a unique anticommutative algebra $A^{\prime}$, and $\theta \in Z^{2}\left(A^{\prime}, \mathrm{Ann} (A)\right)$ with $\theta^{\perp} \cap \mathrm{Ann} \left(A^{\prime}\right)=0$ such that $A \cong A_{\theta}^{\prime}$ and $A / \mathrm{A n n}(A) \cong A^{\prime} .$
\end{lem}
Let $e_{1}, \ldots, e_{s}$ be a basis of $V$, and $\theta \in Z ^{2}(A, V)$. Then $\theta$ can be uniquely written as $\theta(x, y)=\sum_{i=1}^{s} \theta_{i}(x, y) e_{i}$, where $\theta_{i} \in Z ^{2}(A, \mathbb{C}) .$ Moreover, $\theta^{\perp}=\theta_{1}^{\perp} \cap \theta_{2}^{\perp} \cap \cdots \cap \theta_{s} ^{\perp}.$ Further, $\theta \in B^{2}(A, V)$ if and only if all $\theta_{i} \in B^{2}(A, \mathbb{C}) .$
Let $A$ be an anticommutative algebra with a basis $e_{1}, e_{2}, \ldots, e_{n}$. Then by $\Delta_{i j}$ we denote the skew-symmetric bilinear form $\Delta_{i j}: A \times A \longrightarrow \mathbb{C}$ with $\Delta_{i j}\left(e_{i}, e_{j}\right)=-\Delta_{i j}\left(e_{j}, e_{i}\right)=1$ and $\Delta_{i j}\left(e_{l}, e_{m}\right)=0$ if $\{i, j\} \neq\{l, m\} .$ Then the set $\left\{\Delta_{i j}: 1 \leq i<j \leq n\right\}$ is a basis for the linear space of skew-symmetric bilinear forms on $A$. Then every $\theta \in Z ^{2}(A, \mathbb{C})$ can be uniquely written as $\theta=\sum_{1 \leq i<j \leq n} c_{i j} \Delta_{i, j}$, where $c_{i j} \in \mathbb{C} .$
Let $\left\{e_{1}, e_{2}, \ldots, e_{m}\right\}$ be a basis of $A^2$. Then, by \cite[Lemma 6]{hegazi2016classification}, the set $\left\{\delta e_{1}^{*}, \delta e_{2}^{*}, \ldots, \delta e_{m}^{*}\right\}$, where $e_{i}^{*}\left(e_{j}\right)=\delta_{i j}$ and $\delta_{i j}$ is the Kronecker delta, is a basis of $B^{2}(A, \mathbb{C})$. Let $\theta, \vartheta \in Z^{2}(A, V)$ be such that $[\theta]=[\vartheta]$. Then $\theta^{\perp} \cap \mathrm{Ann}(A)=\vartheta^{\perp} \cap \mathrm{Ann} (A)$ or, equivalently, $\mathrm{Ann}\left(A_{\theta}\right)=\mathrm{Ann}\left(A_{\vartheta}\right)$ by \cite[Lemma 7]{hegazi2016classification}. Furthermore, $A_{\theta} \cong A_{\vartheta}$.
Let $\mathrm{A u t}(A)$ be the automorphism group of an anticommutative algebra $A .$ Let $\phi \in \mathrm{A u t}(A)$. For $\theta \in Z^{2}(A, V)$ define $\phi \theta(x, y)=\theta(\phi(x), \phi(y)) .$ Then $\phi \theta \in Z^{2}(A, V) .$ So, $\mathrm{Aut} (A)$ acts on $Z^{2}(A, V)$. $\phi \theta \in B^{2}(A, V)$ if and only if $\theta \in B^{2}(A, V)$ by \cite[Lemma 8]{hegazi2016classification}. So, $\mathrm{Aut} (A)$ acts on $H^{2}(A, V)$.
Let $\phi=\left(a_{i j}\right) \in \mathrm{Aut}(A)$ and $\theta \in Z^{2}(A, \mathbb{C})$. Let $C=\left(c_{i j}\right)$ be the matrix representing $\theta$ and $C^{\prime}=\left(c_{i j}^{\prime}\right)$ be the matrix representing $\phi \theta$. Then $C^{\prime}=\phi^{t} C \phi$.
\begin{definition}
Let $A$ be an anticommutative algebra. If $A=I \oplus \mathbb{C} x$ is a direct sum of two ideals, then $\mathbb{C} x$ is called an annihilator component of $A .$
\end{definition}
Let $\theta(x, y)=\sum_{i=1}^{s} \theta_{i}(x, y) e_{i} \in Z^{2}(A, V)$ and $\theta^{\perp} \cap \mathrm{Ann}(A)=0 .$ Then $A_{\theta}$ has an annihilator component if and only if $\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]$ are linearly dependent in $H^{2}(A, \mathbb{C})$ by \cite[Lemma 13]{hegazi2016classification}. Let $\vartheta(x, y)=\sum_{i=1}^{s} \vartheta_{i}(x, y) e_{i}$ be another element of $Z^{2}(A, V) .$ Suppose that $A_{\theta}$ has no annihilator components and $\theta^{\perp} \cap \mathrm{Ann}(A)=\vartheta^\perp \cap \mathrm{Ann}(A)=0$. Then $A_{\theta} \cong A_{\vartheta}$ if and only if there exists a map $\phi \in \mathrm{A u t}(A)$ such that the set $\left\{\left[\phi \vartheta_{i}\right]: i=1, \ldots, s\right\}$ spans the same subspace of $H^{2}(A, \mathbb{C})$ as the set $\left\{\left[\theta_{i}\right]: i=1, \ldots, s\right\} $ by \cite[Lemma 14]{hegazi2016classification}.
Let $V$ be a finite-dimensional vector space over a $\mathbb{C}$. The Grassmannian $G_{k}(V)$ is the set of all $k$-dimensional linear subspaces of $V$. Let $G_{s}\left(H^{2}(A, \mathbb{C})\right)$ be the Grassmannian of subspaces of dimension $s$ in $H^{2}(A, \mathbb{C})$. There is a natural action of $\mathrm{Aut}(A)$ on $G_{s}\left(H^{2}(A, \mathbb{C})\right)$. Let $\phi \in \mathrm{Aut} (A)$. For $W=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle \in G_{s}\left(H^{2}(A, \mathbb{C})\right)$ define $\phi W=\left\langle\left[\phi \theta_{1}\right],\left[\phi \theta_{2}\right], \ldots,\left[\phi \theta_{s}\right]\right\rangle$. Then $\phi W \in G_{s}\left(H^{2}(A, \mathbb{C})\right)$.
We denote the orbit of $W \in G_{s}\left(H^{2}(A, \mathbb{C})\right)$ under the action of $\operatorname{\mathrm{Aut}}(A)$ by
$\mathrm{Orb}(W)$.
Let $W_{1}=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle, W_{2}=\left\langle\left[\vartheta_{1}\right],\left[\vartheta_{2}\right], \ldots,\left[\vartheta_{s}\right]\right\rangle \in G_{s}\left(H^{2}(A, \mathbb{C})\right)$. If $W_{1}=W_{2}$, then $\bigcap_{i=1}^{s} \theta_{i}^{\perp} \cap \mathrm{A n n}(A)=\bigcap_{i=1}^{s} \vartheta_{i}^{\perp} \cap \mathrm{A n n}(A)$ by \cite[Lemma 15]{hegazi2016classification}. This result allows us to define
$$
T_{s}(A)=\left\{W=\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle \in G_{s}\left(H^{2}(A, \mathbb{C})\right): \bigcap_{i=1}^{s} \theta_{i}^{\perp} \cap \mathrm{Ann}(A)=0\right\}.
$$
The set $T_{s}(A)$ is stable under the action of $\mathrm{Aut}(A)$ by \cite[Lemma 16]{hegazi2016classification}.
Let $V$ be an $s$-dimensional vector space spanned by $e_{1}, e_{2}, \ldots, e_{s}$. Given an anticommutative algebra $A$, let $E(A, V)$ denote the set of all anticommutative algebras without annihilator components which are $s$-dimensional annihilator extensions of $A$ by $V$ and have $s$-dimensional annihilator. Then $E(A, V)=$ $\left\{A_{\theta}: \theta(x, y)=\sum_{i=1}^{s} \theta_{i}(x, y) e_{i}\right.$ and $\left.\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle \in T_{s}(A)\right\} .$ Given $A_{\theta} \in E(A, V)$, let $\left[A_{\theta}\right]$ denote the isomorphism class of $A_{\theta} .$ Let $A_{\theta}, A_{\vartheta} \in E(A, V)$. Suppose that $\theta(x, y)=\sum_{i=1}^{s} \theta_{i}(x, y) e_{i}$ and $\vartheta(x, y)=\sum_{i=1}^{s} \vartheta_{i}(x, y) e_{i} .$ Then $\left[A_{\theta}\right]=\left[A_{\vartheta}\right]$ if and only if $$\mathrm{Orb}\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle= \mathrm{Orb}\left\langle\left[\vartheta_{1}\right],\left[\vartheta_{2}\right], \ldots,\left[\vartheta_{s}\right]\right\rangle$$ by \cite[Lemma 17]{hegazi2016classification}.
\begin{thm}\cite[Theorem 18]{hegazi2016classification}\label{thm2}
There exists a one-to-one correspondence between the set of $\mathrm{Aut}(A)$-orbits on $T_{s}(A)$ and the set of isomorphism classes of $E(A, V)$. This correspondence is defined by
$$\operatorname{\mathrm{Orb}}\left\langle\left[\theta_{1}\right],\left[\theta_{2}\right], \ldots,\left[\theta_{s}\right]\right\rangle \in\left\{\operatorname{\mathrm{Orb}}(W): W \in T_{s}(A)\right\} \leftrightarrow\left[A_{\theta}\right] \in\left\{\left[A_{\vartheta}\right]: A_{\vartheta} \in E(A, V)\right\}$$ where $\theta(x, y)=\sum_{i=1}^{s} \theta_{i}(x, y) e_{i}$.
\end{thm}
By this theorem, we may construct all anticommutative algebras of dimension $n$ with $s$-dimensional annihilator, given those algebras of dimension $n-s$, in the following way:
\begin{enumerate}
\item For a given anticommutative algebra $A$ of dimension $n-s$, determine $H^{2}(A, \mathbb{C})$, $\mathrm{\mathrm{Ann}}(A)$ and $\operatorname{\mathrm{Aut}}(A)$.
\item Determine the set of $\mathrm{Aut}(A)$-orbits on $T_{s}(A)$.
\item For each orbit, construct the anticommutative algebra corresponding to a representative of it.
\end{enumerate}
This method gives all (Tortkara and non-Tortkara) anticommutative algebras with non-trivial annihilator. We want to develop this method in such a way that it only gives non-Tortkara anticommutative algebras. Clearly, any annihilator extension of a non-Tortkara anticommutative algebra is non-Tortkara. So, we only have to study the central extensions of Tortkara algebras. Let $A$ be a Tortkara algebra and $\theta \in Z^{2}(A, \mathbb{C})$. Then $A_{\theta}$ is a Tortkara algebra if and only if $\theta((ab),(cb)) =\theta( J(a, b, c),b)$ for all $a, b, c \in A$, where $J(a, b, c)=(ab)c + (bc)a + (ca)b.$ Define a subspace $Z_{T}^{2}(A, \mathbb{C})$ of $Z^{2}(A, \mathbb{C})$ by
$$
Z_{T}^{2}(A, \mathbb{C})=\left\{\theta \in Z^{2}(A, \mathbb{C}): \theta((ab),(cb)) =\theta( J(a, b, c),b)\,\forall a,b,c \in A\right\}
$$
Define $H_{T}^{2}(A, \mathbb{C})=Z_{T}^{2}(A, \mathbb{C}) / B^{2}(A, \mathbb{C}) .$ Therefore, $H_{T}^{2}(A, \mathbb{C})$ is a subspace of $H^{2}(A, \mathbb{C})$. Define $R_{s}(A)= T_{s}(A)\cap G_{s}\left(H_{T}^{2}(A, \mathbb{C})\right)$. Then $T_{s}(A)=R_{s}(A) \cup U_{s}(A)$ where $U_{s}(A)=T_{s}(A)-R_{s}(A) .$ The sets $R_{s}(A)$ and $U_{s}(A)$ are stable under the action of $\mathrm{Aut}(A)$. Let $E_{T}(A, V)=\left\{A_{\theta} \in E(A, V): A_{\theta} \mathrm{\,\,is\,\,Tortkara\,\, algebra} \right\} .$ Then $E(A, V)=E_{T}(A, V) \cup E_{non-T}(A, V)$ where $E_{ {non-T }}(A, V)=E(A, V)-E_{T}(A, V) .$
\begin{thm}\cite[Theorem 19]{hegazi2016classification}\label{thm3}
Let $A$ be a Tortkara algebra.
\begin{enumerate}
\item There exists a one-to-one correspondence between the set of $\mathrm{Aut} (A)$-orbits on $R_{s}(A)$ and the set of isomorphism classes of $E_{T}(A, V)$.
\item There exists a one-to-one correspondence between the set of $ \mathrm{Aut}(A)$-orbits on $U_{s}(A)$ and the set of isomorphism classes of $E_{n o n-T}(A, V)$.
\end{enumerate}
\end{thm}
By this theorem and Theorem \ref{thm2}, we may construct all non-Tortkara anticommutative algebras of dimension $n$ with $s$-dimensional annihilator, given those algebras of dimension $n-s$, in the following way:
\begin{enumerate}
\item
For a given anticommutative algebra $A$ of dimension $n-s$, if $A$ is non-Tortkara then do the following:
\begin{enumerate}
\item
Determine $H^{2}(A, \mathbb{C})$, $\mathrm{\mathrm{\mathrm{Ann}}}(A)$ and $\mathrm{Aut} (A)$.
\item
Determine the set of $\mathrm{A u t}(A)$-orbits on $T_{s}(A)$.
\item
For each orbit, construct the anticommutative algebra corresponding to a representative of it.
\end{enumerate}
\item
Otherwise, do the following:
\begin{enumerate}
\item
Determine $H_{T}^{2}(A, \mathbb{C}), H^{2}(A, \mathbb{C}), \mathrm{A n n}(A)$ and $\mathrm{A u t}(A)$.
\item
Determine the set of $\mathrm{Aut}(A)$-orbits on $U_{s}(A)$.
\item
For each orbit, construct the anticommutative algebra corresponding to a representative of it.
\end{enumerate}
\end{enumerate}
\section{The proof of Theorem \ref{thm1}}\label{sec3}
Thanks to \cite{kaygorodov2020algebraic}, we have the classification of all nontrivial 4-dimensional nilpotent anticommutative algebras; together with the trivial algebra, the relevant data are collected in the following table.
$$
\begin{array}{|l|l|l|l|}
\hline {A} & \text {Multiplication table } & {H}_{{T}}^{2}({A},\mathbb{C}) & {H}^{2}({A},\mathbb{C}) \\
\hline {A}_{1} & \rm{trivial\,\,algebra} &
\begin{aligned}
&\left\langle\left[\Delta_{12}\right],\left[\Delta_{13}\right],\left[\Delta_{14}\right],\right.\\ &\left.\left[\Delta_{23}\right],\left[\Delta_{24}\right],\left[\Delta_{34}\right]\right\rangle
\end{aligned} &
\begin{aligned}
&\left\langle\left[\Delta_{12}\right],\left[\Delta_{13}\right],\left[\Delta_{14}\right],\right.\\ &\left.\left[\Delta_{23}\right],\left[\Delta_{24}\right],\left[\Delta_{34}\right]\right\rangle
\end{aligned}\\
\hline {A}_{2} & e_{1} e_{2}=e_{3} & \left\langle\left[\Delta_{13}\right],\left[\Delta_{14}\right],\left[\Delta_{23}\right],\left[\Delta_{24}\right],\left[\Delta_{34}\right]\right\rangle & {H}_{{T}}^{2}\left({A}_{2},\mathbb{C}\right) \\
\hline {A}_{3} & e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4} & \left\langle\left[\Delta_{14}\right],\left[\Delta_{23}\right],\left[\Delta_{24}\right]\right\rangle & {H}_{{T}}^{2}\left({A}_{3},\mathbb{C}\right) \oplus\left\langle\left[\Delta_{34}\right]\right\rangle \\
\hline
\end{array}
$$
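As an illustration of the last column of this table, one can check directly that $\Delta_{34}$ violates the Tortkara cocycle identity on $A_3$ already on basis triples, in agreement with $[\Delta_{34}]\notin {H}^2_T(A_3,\mathbb{C})$ (a minimal verification sketch in Python; the encoding of the structure constants and the brute-force search over basis triples are our own, and checking basis triples is only a necessary condition, which here already suffices):
\begin{verbatim}
import numpy as np
from itertools import product

# structure constants of A_3: e1e2=e3, e1e3=e4 (anticommutative, rest zero)
C = np.zeros((4, 4, 4))
C[0, 1, 2], C[1, 0, 2] = 1, -1
C[0, 2, 3], C[2, 0, 3] = 1, -1

def mul(u, v):
    return np.einsum('i,j,ijk->k', u, v, C)

def J(a, b, c):                  # J(a,b,c) = (ab)c + (bc)a + (ca)b
    return mul(mul(a, b), c) + mul(mul(b, c), a) + mul(mul(c, a), b)

def theta(u, v):                 # the cocycle Delta_34
    return u[2] * v[3] - u[3] * v[2]

e = np.eye(4)
violations = [(i + 1, j + 1, k + 1)
              for i, j, k in product(range(4), repeat=3)
              if not np.isclose(theta(mul(e[i], e[j]), mul(e[k], e[j])),
                                theta(J(e[i], e[j], e[k]), e[j]))]
print(violations)  # non-empty, e.g. it contains (2, 1, 3)
\end{verbatim}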
In view of \cite{gorshkov2019variety}, all anticommutative central extensions of $A_{2}$ and of the 4-dimensional trivial algebra
are Tortkara algebras, so we need only consider central extensions of $A_{3}$. After a direct calculation, we obtain the general form of an automorphism of $A_3$:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
u & v & x z & 0 \\
h & g & x v & x^{2} z
\end{array}\right)
$$
where $xz\not=0$.
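This general form can be double-checked numerically: for random values of the free entries (with $xz\neq0$), the matrix below satisfies $\phi(e_i)\phi(e_j)=\phi(e_ie_j)$ on all basis vectors of $A_3$ (a small verification sketch in Python, with the structure constants of $A_3$ encoded as in the previous sketch):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

C = np.zeros((4, 4, 4))          # structure constants of A_3
C[0, 1, 2], C[1, 0, 2] = 1, -1
C[0, 2, 3], C[2, 0, 3] = 1, -1
mul = lambda u, v: np.einsum('i,j,ijk->k', u, v, C)

x, y, z, u, v, h, g = rng.normal(size=7)   # random free parameters, xz != 0
phi = np.array([[x,  0,  0,     0],
                [y,  z,  0,     0],
                [u,  v,  x*z,   0],
                [h,  g,  x*v,   x*x*z]])

e = np.eye(4)
ok = all(np.allclose(mul(phi @ e[i], phi @ e[j]), phi @ mul(e[i], e[j]))
         for i in range(4) for j in range(4))
print(ok)   # True: phi is an automorphism of A_3
\end{verbatim}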
In the following subsections we will give the classification of all $n$-dimensional nilpotent non-Tortkara anticommutative algebras with $(n-4)$-dimensional annihilator.
\subsection{$n=4$}
There is no anticommutative algebra satisfying the conditions of Theorem \ref{thm1} in this dimension, because the annihilators of $A_1$, $A_2$ and $A_3$ are all nonzero.
\subsection{$n=5$}\label{sec3.2}
An anticommutative algebra satisfying the conditions of Theorem \ref{thm1} has no annihilator component, so it is a non-split non-Tortkara central extension of $A_3$. According to Theorem \ref{thm3}, we
need to find representatives of the $\mathrm{Aut} (A_3)$-orbits on $U_1 (A_3)$. Choose an arbitrary subspace $W\in U_1(A_3)$. Such a subspace is spanned by $[\theta]=a_1[\Delta_{14}]+a_2[\Delta_{23}]+a_3[\Delta_{24}]+a_4[\Delta_{34}]$ with $a_4\not=0$; rescaling the generator, we may assume $a_4=1$. Let $\phi\in \mathrm{Aut}(A_3)$ be the following automorphism
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
-(a_1x+a_3y) & -a_3z & x z & 0 \\
h & a_2z & -a_3zx & x^{2} z
\end{array}\right)
$$
where $xz\not=0$. Then $\phi W=<[\Delta_{34}]>$. Hence we get a representative $<[\Delta_{34}]>$. This shows that $\mathrm{Aut} (A_3)$ has only one orbit on $U_1 (A_3)$. So we get the algebra:
\[
A_{5,1}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}.
\]
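The reduction above can also be verified numerically through the matrix action $C'=\phi^tC\phi$ of Section \ref{sec2}: with the generator normalized to $a_4=1$ and random admissible parameters, the entries of $\phi^tC\phi$ at positions $(1,4)$, $(2,3)$ and $(2,4)$ vanish, so $[\phi\theta]$ is proportional to $[\Delta_{34}]$ modulo $B^2(A_3,\mathbb{C})=\langle\Delta_{12},\Delta_{13}\rangle$ (a numerical sketch in Python with our own variable names):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

a1, a2, a3 = rng.normal(size=3)
a4 = 1.0                              # generator rescaled so that a_4 = 1
x, y, z, h = rng.normal(size=4)       # xz != 0 almost surely

# skew-symmetric matrix of theta = a1*D14 + a2*D23 + a3*D24 + a4*D34
Cm = np.zeros((4, 4))
Cm[0, 3], Cm[1, 2], Cm[1, 3], Cm[2, 3] = a1, a2, a3, a4
Cm = Cm - Cm.T

# the automorphism used in the text
phi = np.array([[x,             0,      0,       0],
                [y,             z,      0,       0],
                [-(a1*x+a3*y), -a3*z,   x*z,     0],
                [h,             a2*z,  -a3*z*x,  x*x*z]])

Cp = phi.T @ Cm @ phi                 # matrix of phi.theta
print(np.allclose([Cp[0, 3], Cp[1, 2], Cp[1, 3]], 0))  # True
print(np.isclose(Cp[2, 3], x**3 * z**2))               # True
\end{verbatim}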
\subsection{$n=6$}\label{sec3.3}
First we classify the nilpotent non-Tortkara anticommutative algebras with an annihilator component. We get the algebra $A_{6,1}=A_{5,1}\oplus \mathbb{C}e_6$. Next we classify the nilpotent non-Tortkara anticommutative algebras without any annihilator component. For this, choose an arbitrary subspace $W\in U_2(A_3)$. Such a subspace is spanned by $[\theta_1]=a_1[\Delta_{14}]+a_2[\Delta_{23}]+a_3[\Delta_{24}]+a_4[\Delta_{34}]$ and $[\theta_2]=b_1[\Delta_{14}]+b_2[\Delta_{23}]+b_3[\Delta_{24}]+b_4[\Delta_{34}]$
with $(a_4,b_4)\not=(0,0)$. By possibly swapping $[\theta_1]$ and $[\theta_2]$, we may assume without loss of generality that $a_4\not=0$. Then, from Subsection \ref{sec3.2}, we may assume that $[\theta_1]=[\Delta_{34}]$. Further, by subtracting a scalar multiple of $[\theta_1]$ from $[\theta_2]$, we may assume
that $[\theta_2]=a_1[\Delta_{14}]+a_2[\Delta_{23}]+a_3[\Delta_{24}]$. Hence we may assume without loss of generality that $$W=<[\Delta_{34}],a_1[\Delta_{14}]+a_2[\Delta_{23}]+a_3[\Delta_{24}]>.
$$
Let us consider the following cases:
\textbf{Case1:} $a_3\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
-a_1xa_3^{-1} & z & 0 & 0 \\
0 & -a_2za_3^{-1}& x z & 0 \\
h & v^2z^{-1} & -a_2za_3^{-1}x & x^{2} z
\end{array}\right)
$$
where $xz\not=0$ and $v=-a_2za_3^{-1}$ denotes the $(3,2)$ entry of $\phi$. Then $\phi W=<vzx^2[\Delta_{24}]+x^3z^2[\Delta_{34}],a_3x^2z^2[\Delta_{24}]>=<[\Delta_{34}],[\Delta_{24}]>=W_1$.
\textbf{Case2:} $a_3=a_1=0$, $a_2\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $xz\not=0$. Then $\phi W=<x^3z^2[\Delta_{34}],a_2xz^2[\Delta_{23}]>=<[\Delta_{34}],[\Delta_{23}]>=W_2$.
\textbf{Case3:} $a_3=a_2=0$, $a_1\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $xz\not=0$. Then $\phi W=<x^3z^2[\Delta_{34}],a_1zx^3[\Delta_{14}]>=<[\Delta_{34}],[\Delta_{14}]>=W_3$.
\textbf{Case4:} $a_3=0$, $a_1a_2\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
0 & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $x=a_2,z=a_1a_2$. Then $\phi W=<x^3z^2[\Delta_{34}],a_1^2a_2^4([\Delta_{23}]+[\Delta_{14}])>=<[\Delta_{34}],[\Delta_{23}]+[\Delta_{14}]>=W_4$.
As shown, we have four representatives, namely
\[
\begin{aligned}
&W_1:<[\Delta_{34}],[\Delta_{24}]>\\
&W_2:<[\Delta_{34}],[\Delta_{23}]>\\
&W_3:<[\Delta_{34}],[\Delta_{14}]>\\
&W_4:<[\Delta_{34}],[\Delta_{23}]+[\Delta_{14}]>\\
\end{aligned}
\]
Next, we claim that $\mathrm{Orb}(W_i)\cap \mathrm{Orb}(W_j)=\emptyset,i\not=j$. Choose any element of $\mathrm{Aut}(A_3)$:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
u & v & x z & 0 \\
h & g & x v & x^{2} z
\end{array}\right)
$$
where $xz\not=0$. Then
\[
\phi W_1=<x^2z^2[\Delta_{34}]-zg[\Delta_{23}]+x(uz-yv)[\Delta_{14}],xz[\Delta_{24}]+v[\Delta_{23}]+xy[\Delta_{14}]>
\]
$[\Delta_{23}]$ is not in $\phi W_1$, otherwise there are $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{23}]=\lambda_1(x^2z^2[\Delta_{34}]-zg[\Delta_{23}]+x(uz-yv)[\Delta_{14}])+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}]+xy[\Delta_{14}])
\]
Comparing coefficients gives $\lambda_1=\lambda_2=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_2)=\emptyset$. Similarly, $[\Delta_{14}]$ is not in $\phi W_1$, since otherwise there would exist $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{14}]=\lambda_1(x^2z^2[\Delta_{34}]-zg[\Delta_{23}]+x(uz-yv)[\Delta_{14}])+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}]+xy[\Delta_{14}])
\]
Comparing coefficients gives $\lambda_1=\lambda_2=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_3)=\emptyset$. Likewise, $[\Delta_{14}]+[\Delta_{23}]$ is not in $\phi W_1$, since otherwise there would exist $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{14}]+[\Delta_{23}]=\lambda_1(x^2z^2[\Delta_{34}]-zg[\Delta_{23}]+x(uz-yv)[\Delta_{14}])+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}]+xy[\Delta_{14}])
\]
Comparing coefficients gives $\lambda_1=\lambda_2=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_4)=\emptyset$.
Acting on $W_2$ by $\phi$, we get
\[
\phi W_2=<x^2z^2[\Delta_{34}]+zxv[\Delta_{24}]+uzx[\Delta_{14}],[\Delta_{23}]>
\]
$[\Delta_{14}]$ is not in $\phi W_2$, otherwise there are $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{14}]=\lambda_1(x^2z^2[\Delta_{34}]+zxv[\Delta_{24}]+uzx[\Delta_{14}])+\lambda_2[\Delta_{23}]
\]
Comparing coefficients gives $\lambda_1=\lambda_2=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_3)=\emptyset$. Similarly, $[\Delta_{14}]+[\Delta_{23}]$ is not in $\phi W_2$, since otherwise there would exist $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{14}]+[\Delta_{23}]=\lambda_1(x^2z^2[\Delta_{34}]+zxv[\Delta_{24}]+uzx[\Delta_{14}])+\lambda_2[\Delta_{23}]
\]
Comparing coefficients gives $\lambda_1=0$ and $\lambda_2=1$, and then the $[\Delta_{14}]$ term cannot be matched, a contradiction. Then $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_4)=\emptyset$.
Acting on $W_3$ by $\phi$, we get
\[
\phi W_3=<x^2z^2[\Delta_{34}]+zxv[\Delta_{24}]+(v^2-zg)[\Delta_{23}],[\Delta_{14}]>
\]
$[\Delta_{14}]+[\Delta_{23}]$ is not in $\phi W_3$, otherwise there are $\lambda_1$ and $\lambda_2$ such that
\[
[\Delta_{14}]+[\Delta_{23}]=\lambda_1(x^2z^2[\Delta_{34}]+zxv[\Delta_{24}]+(v^2-zg)[\Delta_{23}])+\lambda_2[\Delta_{14}]
\]
Comparing coefficients gives $\lambda_1=0$ and $\lambda_2=1$, and then the $[\Delta_{23}]$ term cannot be matched, a contradiction. Then $\mathrm{Orb}(W_3)\cap \mathrm{Orb}(W_4)=\emptyset$. All in all, we have $\mathrm{Orb}(W_i)\cap \mathrm{Orb}(W_j)=\emptyset$ for $i\not=j$, so the corresponding anticommutative algebras are pairwise non-isomorphic. Hence
we get the algebras:
\[
\begin{aligned}
&A_{6,2}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6};\\
&A_{6,3}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{3}=e_{6};\\
&A_{6,4}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{1} e_{4}=e_{6};\\
&A_{6,5}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{3}=e_{6}, e_{1} e_{4}=e_{6}.\\
\end{aligned}
\]
\subsection{$n=7$}
First we classify the non-Tortkara anticommutative algebras with annihilator component. We get the algebras $A_{7,i}=A_{6,i}\oplus\mathbb{C}e_7,i=1,\cdots,5$. Next we classify the non-Tortkara anticommutative algebras without any annihilator component. Choose an arbitrary subspace
$W\in U_3(A_3)$. Such a subspace is spanned by
\[
\begin{aligned}
&[\theta_1]=a_1[\Delta_{14}]+a_2[\Delta_{23}]+a_3[\Delta_{24}]+a_4[\Delta_{34}], \\
&[\theta_2]=b_1[\Delta_{14}]+b_2[\Delta_{23}]+b_3[\Delta_{24}]+b_4[\Delta_{34}],\\
&[\theta_3]=c_1[\Delta_{14}]+c_2[\Delta_{23}]+c_3[\Delta_{24}]+c_4[\Delta_{34}]
\end{aligned}
\]
with $(a_4,b_4,c_4)\not=(0,0,0)$. In view of Subsection \ref{sec3.3}, we may assume without loss of generality that $W\in\{S_1, S_2, S_3,S_4\}$ where
$$
\begin{aligned}
&S_1=<[\Delta_{34}],[\Delta_{24}],a_1[\Delta_{14}]+a_2[\Delta_{23}]>\\
&S_2=<[\Delta_{34}],[\Delta_{14}],a_2[\Delta_{23}]+a_3[\Delta_{24}]>\\
&S_3=<[\Delta_{34}],[\Delta_{23}],a_1[\Delta_{14}]+a_3[\Delta_{24}]>\\
&S_4=<[\Delta_{34}],[\Delta_{14}]+[\Delta_{23}],a_1[\Delta_{14}]+a_3[\Delta_{24}]>
\end{aligned}
$$
\textbf{Case1:} $W=S_1$
\textbf{Case1.1:} $a_1=0$ or $a_2=0$. Then $W=<[\Delta_{34}],[\Delta_{24}],[\Delta_{23}]>$ or $W=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]>$.
\textbf{Case1.2:} $a_1a_2\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
0 & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $x=a_2,z=a_1a_2$. Then $\phi W=<x^3z^2[\Delta_{34}],x^2z^2[\Delta_{24}],a_1^2a_2^4([\Delta_{14}]+[\Delta_{23}])>=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]+[\Delta_{23}]>$.
\textbf{Case2:} $W=S_2$
\textbf{Case2.1:} $a_2=0$ or $a_3=0$. Then $W=<[\Delta_{34}],[\Delta_{14}],[\Delta_{24}]>$ or $W=<[\Delta_{34}],[\Delta_{14}],[\Delta_{23}]>$.
\textbf{Case2.2:} $a_2a_3\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
0 & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $x=a_2a_3^{-1},z=a_3$. Then $\phi W=<x^3z^2[\Delta_{34}],zx^3[\Delta_{14}],a_2^2a_3([\Delta_{23}]+[\Delta_{24}])> =<[\Delta_{34}],[\Delta_{14}],[\Delta_{23}]+[\Delta_{24}]>$.
\textbf{Case3:} $W=S_3$
\textbf{Case3.1:} $a_1=0$ or $a_3=0$. Then $W=<[\Delta_{34}],[\Delta_{23}],[\Delta_{24}]>$ or $W=<[\Delta_{34}],[\Delta_{23}],[\Delta_{14}]>$.
\textbf{Case3.2:} $a_1a_3\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
0 & z & 0 & 0 \\
0 & 0 & x z & 0 \\
h & 0 & 0 & x^{2} z
\end{array}\right)
$$
where $x=a_3,z=a_1$. Then $\phi W=<x^3z^2[\Delta_{34}],xz^2[\Delta_{23}],a_1^2a_3^3([\Delta_{14}]+[\Delta_{24}])> =<[\Delta_{34}],[\Delta_{23}],[\Delta_{14}]+[\Delta_{24}]>$.
\textbf{Case4:} $W=S_4$
\textbf{Case4.1:} $a_1=0$ or $a_3=0$. Then $W=<[\Delta_{34}],[\Delta_{14}]+[\Delta_{23}],[\Delta_{24}]>$ or $W=<[\Delta_{34}],[\Delta_{23}],[\Delta_{14}]>$.
\textbf{Case4.2:} $a_1a_3\not=0$. Let $\phi\in \mathrm{Aut}(A_3)$ be as follows:
$$
\phi=\left(\begin{array}{cccc}
1 & 0 & 0 & 0 \\
-a_1a_3^{-1}& 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}\right)
$$
Then $\phi W=<[\Delta_{34}],[\Delta_{14}]+[\Delta_{23}],a_3[\Delta_{24}]>=<[\Delta_{34}],[\Delta_{14}]+[\Delta_{23}],[\Delta_{24}]>$.
To summarize, we have the following representatives:
\[
\begin{aligned}
&W_1=<[\Delta_{34}],[\Delta_{24}],[\Delta_{23}]>\\
&W_2=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]>\\
&W_3=<[\Delta_{34}],[\Delta_{14}],[\Delta_{23}]>\\
&W_4=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]+[\Delta_{23}]>\\
&W_5=<[\Delta_{34}],[\Delta_{14}],[\Delta_{23}]+[\Delta_{24}]>\\
&W_6=<[\Delta_{34}],[\Delta_{23}],[\Delta_{14}]+[\Delta_{24}]>
\end{aligned}
\]
Let us now determine the possible orbits among the representatives $W_1,\cdots,W_6$. Choose any element of $\mathrm{Aut}(A_3)$:
$$
\phi=\left(\begin{array}{cccc}
x & 0 & 0 & 0 \\
y & z & 0 & 0 \\
u & v & x z & 0 \\
h & g & x v & x^{2} z
\end{array}\right)
$$
where $xz\not=0$. Then
\[
\phi W_1=<[\Delta_{23}],z[\Delta_{24}]+y[\Delta_{14}],xz[\Delta_{34}]+\left(u-vyz^{-1}\right)[\Delta_{14}]>
\]
Setting $y=z$ and $u=v$, we have $\phi W_1=W_6$, so $\mathrm{Orb}(W_1)=\mathrm{Orb}(W_6)$. $[\Delta_{14}]$ is not in $\phi W_1$, since otherwise there would exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{14}]=\lambda_1[\Delta_{23}]+\lambda_2(z[\Delta_{24}]+y[\Delta_{14}])+\lambda_3(xz[\Delta_{34}]+\left(u-vyz^{-1}\right)[\Delta_{14}])
\]
Comparing coefficients gives $\lambda_1=\lambda_2=\lambda_3=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_2)=\emptyset$, $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_3)=\emptyset$ and $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_5)=\emptyset$. We claim that $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_4)=\emptyset$; otherwise there would exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{34}]=\lambda_1[\Delta_{23}]+\lambda_2(z[\Delta_{24}]+y[\Delta_{14}])+\lambda_3(xz[\Delta_{34}]+\left(u-vyz^{-1}\right)[\Delta_{14}])
\]
This gives $\lambda_1=\lambda_2=0$ and $u=vyz^{-1}$. Next, there exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{24}]=\lambda_1[\Delta_{23}]+\lambda_2(z[\Delta_{24}]+y[\Delta_{14}])+\lambda_3(xz[\Delta_{34}]+\left(u-vyz^{-1}\right)[\Delta_{14}])
\]
This gives $\lambda_1=\lambda_3=0$ and $y=0$, hence also $u=0$. Finally, there exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{14}]+[\Delta_{23}]=\lambda_1[\Delta_{23}]+\lambda_2z[\Delta_{24}]+\lambda_3xz[\Delta_{34}]
\]
Comparing coefficients gives $\lambda_1=1$ and $\lambda_2=\lambda_3=0$, and then the $[\Delta_{14}]$ term cannot be matched, a contradiction. Then $\mathrm{Orb}(W_1)\cap \mathrm{Orb}(W_4)=\emptyset$.
Acting on $W_2$ by $\phi$, we get
\[
\phi W_2=<[\Delta_{14}],xz[\Delta_{24}]+v[\Delta_{23}],x^2z[\Delta_{34}]-g[\Delta_{23}]>
\]
Setting $g=0$ and $v=xz$, we have $\phi W_2=W_5$, so $\mathrm{Orb}(W_2)=\mathrm{Orb}(W_5)$. We claim that $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_3)=\emptyset$; otherwise there would exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{23}]=\lambda_1[\Delta_{14}]+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}])+\lambda_3(x^2z[\Delta_{34}]-g[\Delta_{23}])
\]
Comparing coefficients gives $\lambda_1=\lambda_2=\lambda_3=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_3)=\emptyset$.
We claim that $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_4)=\emptyset$; otherwise there would exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{34}]=\lambda_1[\Delta_{14}]+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}])+\lambda_3(x^2z[\Delta_{34}]-g[\Delta_{23}])
\]
This forces $g=0$. Next, there exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{24}]=\lambda_1[\Delta_{14}]+\lambda_2(xz[\Delta_{24}]+v[\Delta_{23}])+\lambda_3(x^2z[\Delta_{34}]-g[\Delta_{23}])
\]
This forces $v=0$. Finally, there exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{14}]+[\Delta_{23}]=\lambda_1[\Delta_{14}]+\lambda_2 xz[\Delta_{24}]+\lambda_3x^2z[\Delta_{34}]
\]
Comparing the $[\Delta_{23}]$ coefficients gives $1=0$, a contradiction. Then $\mathrm{Orb}(W_2)\cap \mathrm{Orb}(W_4)=\emptyset$.
Acting on $W_3$ by $\phi$, we get
\[
\phi W_3=<[\Delta_{14}],[\Delta_{23}],xz[\Delta_{34}]+v[\Delta_{24}]>
\]
We claim that $\mathrm{Orb}(W_3)\cap \mathrm{Orb}(W_4)=\emptyset$; otherwise there would exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{34}]=\lambda_1[\Delta_{14}]+\lambda_2[\Delta_{23}]+\lambda_3(xz[\Delta_{34}]+v[\Delta_{24}])
\]
This forces $v=0$. Then there exist $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\[
[\Delta_{24}]=\lambda_1[\Delta_{14}]+\lambda_2[\Delta_{23}]+\lambda_3xz[\Delta_{34}]
\]
Comparing coefficients gives $\lambda_1=\lambda_2=\lambda_3=0$, so the above equation cannot hold, a contradiction. Then $\mathrm{Orb}(W_3)\cap \mathrm{Orb}(W_4)=\emptyset$.
Thus the representatives of the distinct $\mathrm{Aut}(A_3)$-orbits are:
\[
\begin{aligned}
&W_1=<[\Delta_{34}],[\Delta_{24}],[\Delta_{23}]>\\
&W_2=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]>\\
&W_3=<[\Delta_{34}],[\Delta_{14}],[\Delta_{23}]>\\
&W_4=<[\Delta_{34}],[\Delta_{24}],[\Delta_{14}]+[\Delta_{23}]>
\end{aligned}
\]
So the corresponding anticommutative algebras are pairwise non-isomorphic. Thus we get the algebras:
\[
\begin{aligned}
&A_{7,6}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{2} e_{3}=e_{7};\\
&A_{7,7}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{1} e_{4}=e_{7};\\
&A_{7,8}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{1} e_{4}=e_{6}, e_{2} e_{3}=e_{7};\\
&A_{7,9}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{1} e_{4}=e_{7}, e_{2} e_{3}=e_{7}.
\end{aligned}
\]
\subsection{$n=8$}\label{sec3.5}
First we classify the non-Tortkara anticommutative algebras with annihilator component. We get the algebras $A_{8,i}=A_{7,i}\oplus\mathbb{C}e_8,i=1,\cdots,9$. Next we classify the non-Tortkara anticommutative algebras without any annihilator component. Choose an arbitrary subspace $W\in U_4(A_3)$. Then $W=H^2(A_3,\mathbb{C})$. So we have only one orbit with a
representative $<[\Delta_{14}],[\Delta_{23}],[\Delta_{24}],[\Delta_{34}]>$. Hence we get the algebra:
\[
A_{8,10}:e_{1} e_{2}=e_{3}, e_{1} e_{3}=e_{4}, e_{3} e_{4}=e_{5}, e_{2} e_{4}=e_{6}, e_{2} e_{3}=e_{7},e_{1} e_{4}=e_{8}.
\]
\subsection{$n\geqslant 9$}
From the results of Subsection \ref{sec3.5}, we get the algebras:
$$
A_{n, i}=A_{8, i} \oplus \mathbb{C} e_{9} \oplus \cdots \oplus \mathbb{C} e_{n}, i=1, \ldots, 10.
$$
\section{Acknowledgment}
I would like to express my sincere gratitude to Professor Jianzhi Han from Tongji University.
\end{document}
\begin{document}
\title{Sphere covering by minimal number of caps and short closed sets
\thanks{{\it 1991 AMS Subject Classification.} 52A45
{\it Key words and phrases.} Sphere covering by closed sets.}}
\author{A. B. N\'emeth}
\maketitle
\begin{abstract}
A subset of the sphere is said to be short if it is contained in an open
hemisphere. A short closed set which is geodesically convex is called a cap.
The following theorem holds:
1. The minimal number of short closed sets covering the $n$-sphere is $n+2$.
2. If $n+2$ short closed sets cover the $n$-sphere then
(i) their intersection is empty;
(ii) the intersection of any proper subfamily of them is non-empty.
In the case of caps (i) and (ii) are also sufficient for the family to
be a covering of the sphere.
\end{abstract}
\section{Introduction and the main result}
Denote by $\mathbb R^{n+1}$ the $n+1$-dimensional Euclidean space endowed
with a Cartesian reference system,
with the scalar product $\langle\cdot,\cdot\rangle$ and with the topology it generates.
Denote by $S^n$ the $n$-dimensional unit sphere in $\mathbb R^{n+1}.$
A subset of the sphere $S^n$ is said to be \emph{short} if it is contained in an open
hemisphere.
The subset $C\subset S^n$ is called {\it geodesically convex}
if together with any two of its points it contains the arc
of minimal length of the principal circle on $S^n$ through these
points. $S^n$ itself is a geodesically convex set.
A short closed set which is geodesically convex is called a \emph{cap}.
We use the notation $\co A$ for the convex hull of $A$ and the notation
$\sco A$ for the geodesical convex hull of $A\subset S^n$ (the union of the
geodesical lines with endpoints in $A$). Further $\dist(\cdot,\cdot)$ will denote
the geodesical distance of points.
Besides the standard notion of simplex we also use
the notion of the spherical simplex $\Delta$
placed in the northern hemisphere $S^+$ of $S^n$ such that its vertices are on the equator of $S^n$.
In this case
$\|\Delta\|=S^+$.
Our main result is:
\begin{theo}
\begin{enumerate}
\item The minimal number of short closed sets covering $S^n$ is $n+2$.
\item If a family $F_1,...,F_{n+2}$ of short closed sets covers $S^n$, then:
(i) $\cap_{i=1}^{n+2} F_i = \emptyset$;
(ii) $\cap_{i\not=j}F_i\not= \emptyset,\;\forall \; j=1,...,n+2$;
(iii) if $a_j\in \cap_{i\not=j}F_i$, then the vectors $a_1,...,a_{n+2}$
are the vertices of an $n+1$-simplex containing $0$ in its interior.
\end{enumerate}
If the sets $F_i$ are caps, then (i) and (ii) are also sufficient for the family to be a cover of $S^n$.
\end{theo}
Let $\Delta$ be an $n+1$-dimensional simplex with
vertices in $S^n$ containing the origin in its interior.
Then the radial projection from $0$ of the closed $n$-dimensional
faces of $\Delta$ into $S^n$ furnishes $n+2$ caps
covering $S^n$ and satisfying (i) and (ii).
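This construction is easy to test numerically: a point of $S^n$ lies in the cap obtained from the face opposite the vertex $v_i$ exactly when it is a nonnegative linear combination of the remaining $n+1$ vertices, so membership reduces to a small linear solve. A sketch for $n=2$ with a regular simplex (Python; the particular simplex and the random sampling are our own illustrative choices):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

n = 2                                       # the sphere S^n in R^{n+1}
# regular simplex with n+2 vertices on S^n and 0 in its interior:
# centre the standard basis of R^{n+2} and drop one dimension
V = np.eye(n + 2) - 1.0 / (n + 2)
Q = np.linalg.svd(V, full_matrices=False)[2][:n + 1]
verts = (Q @ V.T).T
verts /= np.linalg.norm(verts, axis=1, keepdims=True)

def in_cap(x, i):
    # x is in the radial projection of the face opposite verts[i] iff
    # x is a nonnegative combination of the other n+1 vertices
    M = np.delete(verts, i, axis=0).T
    return np.all(np.linalg.solve(M, x) >= -1e-12)

# every random point of S^n lies in some cap, but never in all n+2 of them
for _ in range(1000):
    x = rng.normal(size=n + 1)
    x /= np.linalg.norm(x)
    hits = [in_cap(x, i) for i in range(n + 2)]
    assert any(hits) and not all(hits)

# property (ii): the caps omitting cap i all contain the vertex verts[i]
print(all(in_cap(verts[i], j)
          for i in range(n + 2) for j in range(n + 2) if j != i))
\end{verbatim}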
A first version for caps of the above theorem is the content of the unpublished note \cite{nemeth2006}.
\begin{remark}
We mention the formal relation, in the case of caps, with the \emph{Nerve Theorem} (\cite{hatcher2002}, Corollary 4G3).
If we consider \emph{``open caps''} in place of caps, then conclusion (ii) can be deduced from
that theorem. Moreover, the conclusion holds for a ``good'' open cover of the sphere too,
i.e., an open cover with contractible members and contractible finite intersections.
In our theorem the covering with caps has the properties of a ``good'' covering:
the members of the covering together with their nonempty intersections are contractible, but
the members are closed, a circumstance which seems rather difficult to surmount.
(Thanks are due to Imre B\'ar\'any, who pointed out this possible connection to me.)
\end{remark}
We shall use in the proofs the following (spherical)
variant of Sperner's lemma
(considered for simplices by Ky Fan \cite{FA}):
\begin{lem}\label{sperner}
If a collection of closed sets $F_1,...,F_{n+1}$ in $S^n$
covers the spherical simplex engendered by the points $a_1,...,a_{n+1}\in S^n$ and
$\sco \{a_1,...,a_{i-1},a_{i+1},...,a_{n+1}\}\subset F_i,\; i=1,...,n+1$ then
$\cap_{i=1}^{n+1} F_i\not= \emptyset.$
\end{lem}
Our first goal is to present the proof for caps. (We mention that using the methods in
\cite{KN} and \cite{KN1} the proof can be carried out in a purely geometric way in contrast with the proof in
\cite{nemeth2006}, where we refer to the Sperner lemma.)
Using the variant of the theorem for caps and the Sperner lemma, we then prove the
variant for short closed sets.
Except the usage of Lemma \ref{sperner}, our methods are elementary:
they use repeatedly the induction with respect to the dimension.
\section{The proof of the theorem for caps}
1.
Consider $n+1$ caps $C^1,...,C^{n+1}$ on $S^n$. $C^i$, being a cap, can be separated
strictly by a hyperplane
$$H_i= \{x\in \mathbb R^{n+1}:\,\langle a_i,x\rangle +\alpha_i =0\}$$
from the origin. We can suppose, without loss of generality, that
the normals $a_i$ are linearly independent, since
by slightly moving them we can achieve this, without affecting
the geometrical picture.
If the normals $a_i$ are considered oriented toward $0$,
this strict separation means that $\alpha_i >0,\;i=1,...,n+1.$
The vectors $a_i,\,i=1,...,n+1$ engender a reference
system in $\mathbb R^{n+1}$. Let $x$ be a nonzero element of
the positive orthant of this reference system, that is, a nonzero vector with $\langle a_i,x\rangle\geq 0$ for all $i$ (such a vector exists since the $a_i$ are linearly independent). Then,
for $t\geq 0$, one has
$\langle a_i,tx \rangle \geq 0,\;\forall \,i=1,...,n+1.$
Hence, for each $t\geq 0$,
$tx$ will be a solution of the system
$$\langle a_i,y\rangle +\alpha_i>0,\;i=1,...,n+1.$$
and thus
$$(*)\qquad tx\in \cap_{i=1}^{n+1} H_i^+,\;\forall \, t\geq 0$$
with
$$H_i^+=\{y \in \mathbb R^{n+1}:\,\langle a_i,y\rangle +\alpha_i > 0\}.$$
Now, if $C^1,...,C^{n+1}$ covers $S^n$, then
so does the union $\cup_{i=1}^{n+1} H_i^-$ of halfspaces
$$H_i^-=\{y \in \mathbb R^{n+1}:\,\langle a_i,y\rangle +\alpha_i \leq 0\}.$$
Since $H_i^+$ is the complementary set of $H_i^-$
and $S^n\subset \cup_{i=1}^{n+1} H_i^-$, the set
$\cap_{i=1}^{n+1} H_i^+$ does not meet $S^n$; being convex and containing the origin
(recall that $\alpha_i>0$), it must lie inside the open unit ball and hence be bounded. But (*) shows that $tx$ with
$x\not= 0$ is in this set for any $t\geq 0$.
The obtained contradiction shows that the
family $C^1,...,C^{n+1}$ cannot cover $S^n$.
\begin{remark}
The proof of this item is also a consequence
of the Lusternik-Schnirelmann theorem
\cite{LS} which asserts that if $S^n$
is covered by the closed sets $F_1,...,F_k$ with
$F_i\cap (-F_i)=\emptyset,\,i=1,...,k,$ then $k\geq n+2.$
\end{remark}
2. Let $C^1,...,C^{n+2}$ be caps covering $S^n$.
(i) Then they cannot have a common
point $x$, since in this case $-x$ could not be covered by any
$C^i$. (No cap
can contain diametrically opposite points of $S^n$.)
Hence, condition (i) must hold.
(ii) To prove that $\cap_{j\not= i} C^j \not= \emptyset,
\;\forall \,i=1,...,n+2$ we proceed by induction.
For $S^1$, the circle, $C^i$ is an arc (containing
its endpoints) of length $< \pi$,
$i=1,2,3$. The arcs $C^1, C^2, C^3$ cover $S^1$.
Hence, they cannot have common points, and the endpoint
of each arc must be contained in exactly one of
the other two arcs. Hence, $C^i$ meets $C^j$ for
every $j\not= i.$ If $c_i\in C^j\cap C^k,\;
j\not= i\not= k\not=j$, then $c_1, \,c_2,\,c_3$ are three
pairwise different points on the circle,
hence they are in general position and $0$ is
an interior point of the triangle they span.
Suppose the assertions (ii) and (iii) hold for
$n-1$ and let us prove them for $n$.
Take $C^{n+2}$ and let $H$ be a hyperplane through
$0$ which does not meet $C^{n+2}.$ Then, $H$ determines
the closed hemispheres $S^-$ and $S^+$. Suppose
that $C^{n+2}$ is placed inside $S^-$ (in the interior
of $S^-$ with respect to the topology of $S^n$).
Hence, $C^1,...,C^{n+1}$ must cover $S^+$ and,
denoting by $S^{n-1}$ the $(n-1)$-dimensional sphere $S^n\cap H$,
these sets also cover $S^{n-1}.$
Now, $D^i= C^i\cap S^{n-1},\;i=1,...,n+1$ are
caps in $S^{n-1}$
which cover this sphere. Thus, the induction hypothesis
works for these sets.
Take the points $d_i\in \cap_{j\not= i}D^j$. Then,
$d_1,...,d_{n+1}$ will be in general position and $0$ is an
interior point of the simplex they span. By
their definition, it follows that $d_k\in D^j, \;
\forall k\not= j$ and hence $d_1,...,d_{j-1},d_{j+1},
...,d_{n+1}\in D^j,\; j=1,2,...n+1. $
Consider the closed hemisphere $S^+$ to be endowed
with a spherical simplex structure $\Delta$
whose vertices are the points $d_1,...,d_{n+1}$.
Since $C^1,...,C^{n+1}$ cover $S^+$, and
$d_1,...,d_{j-1},d_{j+1},
...,d_{n+1}\in D^j\subset C^j\cap S^+,\; j=1,2,...n+1 $,
Lemma \ref{sperner}
can be applied to the spherical simplex $\Delta$, yielding
$$\cap_{j=1}^{n+1} C^j \supset\cap_{j=1}^{n+1} (S^+\cap C^j) \not=
\emptyset.$$
This shows that each collection of $n+1$ of the sets $C^j$ has
nonempty intersection and proves (ii) for $n$.
(If we prefer a purely geometric proof of this item, we can refer to the
spherical analogue of the results in \cite{KN1}.)
From the geometric picture it is
obvious that two caps meet if and only
if their convex hulls meet. Hence, from the conditions
(i) and (ii) for the caps $C^i$, it follows that
these conditions hold also for
$A^i=\co C^i,\;i=1,...,n+2.$
Take
$$a_i\in \cap_{j\not= i}A^j,\;i=1,...,n+2.$$
Let us show that
for an arbitrary $k\in \{1,...,n+2\}$,
$$a_k\not\in
\aff \{a_1,...,a_{k-1},a_{k+1},...,a_{n+2}\}.$$
Assume the contrary. Denote
$$H=\aff \{a_1,...,a_{k-1},a_{k+1},...,a_{n+2}\}.$$
Thus, $\dim H\leq n.$ The points $a_i$ are all in the manifold $H$. Denote
$$B^i=H\cap A^i.$$
Since $a_i\in \cap_{j\not= i}A^j$ and $a_i\in H$, it follows that
$$a_i\in \cap_{j\not= i}A^j\cap H=\cap_{j\not= i} B^j,\;\forall\,i.$$
This means that the family of convex compact
sets $\{B^j:\,j=1,...,n+2\}$ in $H$ possesses the property that every $n+1$ of its
elements have nonempty intersection.
Then, by Helly's theorem, they have a
common point. But this would
be a point of $\cap_{i=1}^{n+2} A^i $ too, which contradicts (ii) for the sets $A^i$.
Hence,
every $n+2$ points $c_i\in \cap_{j\not= i} C^j \subset \cap_{j\not= i}
A^j,\; i=1,...,n+2$ are in general position.
Since $c_1,...,c_{i-1},c_{i+1},...,c_{n+2} \in C^i$, it follows
that the open halfspace, determined by the hyperplane these points engender,
which contains $0$ also contains the point $c_i$.
This proves (iii).
Suppose that the caps $C^1,...,C^{n+2}$ possess the properties in (i) and (ii).
Then, the method in the above proof yields that the points
$$c_i\in \cap_{j\not= i} C^j,\;i=1,...,n+2$$ engender an $(n+1)$-simplex with $0$
in its interior and
$$c_1,...,c_{i-1},c_{i+1},...,c_{n+2} \in C^i,\; i=1,...,n+2.$$
The radial projections of the $n$-faces of this simplex onto $S^n$
obviously cover $S^n$. The union of these projections
is contained in $\cup_{i=1}^{n+2} C^i$.
This completes the proof for caps.
\section{The proof of the theorem for short closed sets}
We carry out the proof by induction.
Consider $n=1$ and suppose $F_1,F_2,F_3$ are short closed sets covering $S^1$.
If $a\in \cap_{i=1}^3 F_i$, then by the above hypothesis $-a\not \in \cup_{i=1}^3 F_i$, which is impossible.
Hence,
$$\cap_{i=1}^3 F_i= \emptyset$$ must hold.
Denote $C_3= \sco F_3$;
then $C=\clo (S^1\setminus C_3)$ is a connected arc of $S^1$ covered by $F_1\cup F_2$.
One must have $C\cap F_i\not=\emptyset ,\; i=1,2$, since if for instance $C\cap F_2= \emptyset,$
then the closed sets $F_1$ and $C_3$, both of geodesic diameter $< \pi$,
would cover $S^1$, which is impossible.
Since $C$ is connected and $C\cap F_i,\; i=1,2$ are closed sets
in $C$ covering this set, $F_1\cap F_2\supset (C\cap F_1)\cap (C\cap F_2) \not= \emptyset$ must hold.
The geodesically convex sets $C_i=\sco F_i,\;i=1,2,3$ cover $S^1$, hence applying
the theorem for caps to $ a_j \in \cap_{i\not= j} F_i \subset \cap_{i\not= j} C_i,\; j=1,2,3$,
we conclude that these points are in general position and
the simplex engendered by them must contain $0$ as an interior point.
Suppose that the assertions hold for $n-1$; let us prove them for $n$.
Suppose that
$$S^n \subset \cup_{i=1}^{n+2} F_i,\; F_i\;\textrm{short, closed},\; i=1,...,n+2.$$
The assertion (i) is a consequence of the theorem for caps applied to
$C_i =\sco F_i,\; i=1,...,n+2$ (or a consequence of the Lusternik--Schnirelmann theorem).
Suppose that $F_{n+2}$ is contained in the interior (with respect to the topology
of $S^n$) of the south hemisphere $S^-$ and denote by $S^{n-1}$
the equator of $S^n$.
Now $S^{n-1} \subset \cup_{i=1}^{n+1} F'_i$ with $F'_i=(S^{n-1}\cap F_i
),\; i=1,...,n+1,$
and we can apply the induction hypothesis for $S^{n-1}$ and the closed sets $F'_i,\; i=1,...,n+1.$
Since $C'_i=\sco F'_i,\; i=1,...,n+1$ cover $S^{n-1}$, and they are caps,
the theorem for caps applies and hence the points
\begin{equation*}
a_j \in \cap_{i=1,i\not=j}^{n+1} C'_i,\;j=1,...,n+1
\end{equation*}
are in general position.
The closed sets
\begin{equation*}
A_i = C'_i\cup(F_i\cap S^+)= C'_i\cup (F_i\cap \inter S^+),\;i=1,...,n+1
\end{equation*}
cover $S^+$, the north hemisphere considered as a spherical simplex $\Delta$ engendered by $a_1,...,a_{n+1}$ ($\|\Delta\|=S^+$).
(Here $\inter S^+$ is the interior of $S^+$
in the space $S^n$.)
Further,
$$\sco \{a_1,...,a_{k-1},a_{k+1},...,a_{n+1}\} \subset A_k,\; k=1,...,n+1.$$
Hence, we can apply Lemma \ref{sperner} to conclude that there exists a point $a$ in
$\cap_{i=1}^{n+1} A_i \not= \emptyset.$
Since
$$C'_i\cap (F_j\cap \inter S^+)=\emptyset,\;\forall \;i,\;j,$$
it follows that
$$a \in \cap_{i=1}^{n+1} A_i= \Big(\cap_{i=1}^{n+1} C'_i\Big) \cup \Big(\cap_{i=1}^{n+1} (F_i\cap \inter S^+)\Big)=\cap_{i=1}^{n+1} (F_i\cap \inter S^+),$$
because $\cap_{i=1}^{n+1} C'_i= \emptyset$ by the induction hypothesis and the theorem for caps.
Thus,
$$a\in \cap_{i=1}^{n+1} F_i,$$
and we have condition (ii) fulfilled for $n$.
The condition (iii) follows from the theorem for caps applied to
$$C_i= \sco F_i,\;i=1,...,n+2.$$
\begin{remark}
If $S^1$ is covered by the closed sets $F_1, F_2, F_3$ with the property
$F_i\cap (-F_i)=\emptyset,\; i=1,2,3 $, then
$F_i\cap F_j \not= \emptyset$ $\forall \;i, j.$
Indeed, assume that $F_1\cap F_2= \emptyset$. Then $\dist (F_1,F_2)=\varepsilon >0.$ If $a_i\in F_i$
are the points in $F_i, \; i=1,2$ with $\dist (a_1,a_2)= \varepsilon,$ then the closed arc $C\subset S^1$
with the endpoints $a_1,\; a_2$ must be contained in $F_3$, and hence $-C\cap F_3= \emptyset$, and
then $-C$ must be covered by $F_1\cup F_2$. Since $-a_1 \in -C$ cannot be in $F_1$, it must be in $F_2$,
and $-a_2\in F_1$. Thus, $F_1\cap -C\not= \emptyset$ and $F_2\cap -C \not= \emptyset,$ while
the last two sets cover $-C$. Since $-C$ is connected and the respective sets are closed, they
must have a common point, contradicting the hypothesis $F_1\cap F_2= \emptyset$.
In this way we obtain (ii) for $n=1$ in this more general case.
We claim that the conditions also hold for $n$, that is, if the closed sets $F_1,...,F_{n+2}$ with
$F_i\cap (-F_i)=\emptyset,\; i=1,...,n+2 $ cover $S^n$, then condition (ii) holds. (Condition
(i) is a consequence of the definition of the sets $F_i$.)
\end{remark}
\end{document} |
\begin{document}
\title{A FIXED POINT FRAMEWORK FOR RECOVERING SIGNALS FROM NONLINEAR
TRANSFORMATIONS\thanks{The work of P. L. Combettes was supported by
the National Science Foundation under grant CCF-1715671 and the
work of Z. C. Woodstock was supported by the National Science
Foundation under grant DGE-1746939.}}
\author{\IEEEauthorblockN{
Patrick L. Combettes}
\IEEEauthorblockA{\textit{Department of Mathematics}\\
\textit{North Carolina State University}\\
Raleigh, NC 27695-8205, USA}
\and
\IEEEauthorblockN{
Zev C. Woodstock}
\IEEEauthorblockA{\textit{Department of Mathematics}\\
\textit{North Carolina State University}\\
Raleigh, NC 27695-8205, USA}}
\maketitle
\begin{abstract}
We consider the problem of recovering a signal from
nonlinear transformations, under convex constraints modeling
\emph{a priori} information. Standard feasibility and optimization
methods are ill-suited to tackle this problem due to the
nonlinearities. We show that, in many common
applications, the transformation model can be
associated with fixed point equations involving firmly
nonexpansive operators. In turn, the
recovery problem is reduced to a tractable common fixed point
formulation, which is solved efficiently by a provably convergent,
block-iterative algorithm. Applications to signal and image
recovery are demonstrated. Inconsistent problems are also
addressed.
\end{abstract}
\begin{IEEEkeywords}
firmly nonexpansive operator,
fixed point model,
nonlinear transformation,
signal recovery.
\end{IEEEkeywords}
\section{Introduction}
\label{sec:1}
Under consideration is the general problem of recovering an
original signal $\overline{x}$ in a Euclidean space
$\ensuremath{{\mathcal H}}$ from a finite number of transformations $(r_k)_{k\in K}$
of the form
\begin{equation}
\label{e:1}
r_k=R_k\overline{x}\in\ensuremath{{\mathcal G}}_k,
\end{equation}
where $R_k\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal G}}_k$ is an operator mapping the solution
space $\ensuremath{{\mathcal H}}$ to the Euclidean space $\ensuremath{{\mathcal G}}_k$.
In addition to these transformations, some \emph{a priori}
constraints on $\overline{x}$ are available in the
form of a finite family of closed convex subsets $(C_j)_{j\in J}$
of $\ensuremath{{\mathcal H}}$ \cite{Youl82,Aiep96,Rzep18,Tofi16,Trus84}. Altogether, the
recovery problem is to
\begin{equation}
\label{e:2}
\text{find}\;\;x\in\bigcap_{j\in J}C_j\;\;\text{such that}\;\;
(\forall k\in K)\quad R_kx=r_k.
\end{equation}
One of the most classical instances of this formulation was
proposed by Youla in \cite{Youl78}, namely
\begin{equation}
\label{e:3}
\text{find}\;\;x\in V_1\;\;\text{such that}\;\;\ensuremath{\text{\rm proj}}_{V_2}x=r_2,
\end{equation}
where $V_1$ and $V_2$ are vector subspaces of $\ensuremath{{\mathcal H}}$ and
$\ensuremath{\text{\rm proj}}_{V_2}$ is the projection operator onto $V_2$. As shown in
\cite{Youl78},
\eqref{e:3} covers many basic signal processing problems, such as
band-limited extrapolation or image reconstruction from diffraction
data, and it can be solved with a simple alternating projection
algorithm. The extension of \eqref{e:3} to recovery problems with
several transformations modeled as linear projections
$r_k=\ensuremath{\text{\rm proj}}_{V_k}\overline{x}$ is discussed in
\cite{Joat10,Reye13}.
More broadly, if the operators $(R_k)_{k\in K}$
are linear, reliable algorithms are available to solve \eqref{e:2}.
In particular, since the associated constraint set is
an affine subspace with an explicit projection operator, standard
feasibility
algorithms can be used \cite{Aiep96}. Alternatively, proximal
splitting methods can be considered; see \cite{MaPr18} and its
references.
In the present paper we consider the general situation in
which the operators $(R_k)_{k\in K}$ in \eqref{e:1} are not
necessarily linear, a stark departure from common assumptions in
signal recovery problems. Examples of such nonlinearly generated
data $(r_k)_{k\in K}$ in \eqref{e:1} include
hard-thresholded wavelet coefficients of $\overline{x}$, the
positive part of the Fourier transform of $\overline{x}$, a
mixture of best approximations of $\overline{x}$ from
closed convex sets, a maximum a posteriori denoised version of
$\overline{x}$, or measurements of $\overline{x}$ acquired through
nonlinear sensors.
A significant difficulty one faces in the nonlinear context
is that the constraint \eqref{e:1} is typically not representable
by an exploitable convex constraint; see, e.g.,
\cite{Blum13,Cast19}. As a result, finding a solution to
\eqref{e:2} with a provably convergent and numerically efficient
algorithm is a challenging task. In particular, standard convex
feasibility algorithms are not applicable. Furthermore,
variational relaxations involving a penalty of the type
$\sum_{k\in K}\phi_k(\|R_kx-r_k\|)$ typically lead to nonconvex
problems, even for choices as basic as $\phi_k=|\cdot|^2$ and $R_k$
taken as the projection operator onto a closed convex set.
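A one-dimensional instance already exhibits this lack of convexity: take
$\ensuremath{{\mathcal H}}=\ensuremath{\mathbb{R}}$, $C=\left[0,+\infty\right[$, $R=\ensuremath{\text{\rm proj}}_{C}$, $r=1$, and
$\phi=|\cdot|^2$. Then
\begin{equation}
x\mapsto|\ensuremath{\text{\rm proj}}_{C}x-1|^2=
\begin{cases}
1,&\text{if}\;\;x\leqslant 0;\\
(x-1)^2,&\text{if}\;\;x>0
\end{cases}
\end{equation}
takes the value $1$ at $x=-1$ and at $x=0$, and the value $0$ at $x=1$; its value
at the midpoint of $-1$ and $1$ thus exceeds the average of the values at these
two points, so the function is not convex.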
Our strategy to solve \eqref{e:2} is to forego the feasibility and
optimization approaches in favor of the flexible and unifying
framework of fixed point theory. Our first contribution is
to show that, while $R_k$ in \eqref{e:1} may be a very badly
conditioned (possibly discontinuous) operator, common transformation
models can be reformulated as fixed point equations
with respect to an operator with much better properties,
namely a firmly nonexpansive operator. Next, using a
suitable modeling of the constraint sets $(C_j)_{j\in J}$, we
rephrase \eqref{e:2} as an equivalent common fixed point
problem and solve it with a reliable and efficient extrapolated
block-iterative fixed point algorithm. This strategy is outlined
in Section~\ref{sec:2}, where we also provide the algorithm. In
Section~\ref{sec:3}, we present several numerical illustrations of
the proposed framework to nonlinear signal and image recovery.
Finally, inconsistent problems are addressed in Section~\ref{sec:4}.
\section{Fixed point model and algorithm}
\label{sec:2}
For background on the tools from fixed point theory and convex
analysis used in this section, we refer the reader to
\cite{Livre1}. Let us first recall that an operator
$T\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ is firmly nonexpansive if
\begin{multline}
\label{e:10}
(\forall x\in\ensuremath{{\mathcal H}})(\forall y\in\ensuremath{{\mathcal H}})\quad\|Tx-Ty\|^2\leqslant\\
\|x-y\|^2-\|(\ensuremath{\operatorname{Id}}\,-T)x-(\ensuremath{\operatorname{Id}}\,-T)y\|^2,
\end{multline}
and firmly quasinonexpansive if
\begin{equation}
\label{e:11}
(\forall x\in\ensuremath{{\mathcal H}})(\forall y\in\ensuremath{\text{\rm Fix}\,} T)\quad
\scal{y-Tx}{x-Tx}\leqslant 0,
\end{equation}
where $\ensuremath{\text{\rm Fix}\,} T=\menge{x\in\ensuremath{{\mathcal H}}}{Tx=x}$. Finally, the subdifferential
of a convex function $f\colon\ensuremath{{\mathcal H}}\to\ensuremath{\mathbb{R}}$ at $x\in\ensuremath{{\mathcal H}}$ is
\begin{equation}
\partial f(x)\!=\!\menge{u\in\ensuremath{{\mathcal H}}\!}{\!(\forall y\in\ensuremath{{\mathcal H}})
\,\scal{y-x}{u}
+f(x)\!\leqslant\!f(y)}.
\end{equation}
As discussed in Section~\ref{sec:1}, the transformation model
\eqref{e:1} is too general to make finding a solution to
\eqref{e:2} via a provably convergent method possible. We
therefore assume the following.
\begin{assumption}
\label{a:1}
The problem \eqref{e:2} has at least one solution, $J\cap K=\ensuremath{{\varnothing}}$,
and the following hold:
\begin{enumerate}
\item
\label{a:1iii}
For every $k\in K$, $S_k\colon\ensuremath{{\mathcal G}}_k\to\ensuremath{{\mathcal H}}$ is an operator such
that $S_k\circ R_k$ is firmly nonexpansive and
\begin{equation}
\label{e:ai}
\bigg(\forall x\in\bigcap_{j\in J}C_j\bigg)\;
S_k(R_kx)=S_kr_k\;\Rightarrow\; R_kx=r_k.
\end{equation}
\item
\label{a:1i}
For every $j\in J_1\subset J$, the operator $\ensuremath{\text{\rm proj}}_{C_j}$ is
easily implementable.
\item
\label{a:1ii}
For every $j\in J\smallsetminus J_1$,
$f_j\colon\ensuremath{{\mathcal H}}\to\ensuremath{\mathbb{R}}$ is a convex function such that
$C_j=\menge{x\in\ensuremath{{\mathcal H}}}{f_j(x)\leqslant 0}$.
\end{enumerate}
\end{assumption}
In view of Assumption~\ref{a:1}\ref{a:1iii}, let us replace
\eqref{e:2} by the equivalent problem
\begin{equation}
\label{e:22}
\text{find}\;x\in\bigcap_{j\in J}C_j\;\text{such that}\;
(\forall k\in K)\;S_k(R_kx)=S_kr_k.
\end{equation}
Concrete examples of suitable operators
$(S_k)_{k\in K}$ will be given in Section~\ref{sec:3}
(see also \cite{Ibap20}). The motivation behind
\eqref{e:22} is that it leads to a tractable fixed point
formulation. To see this, set
\begin{equation}
\label{e:21}
(\forall k\in K)\quad T_k=S_kr_k+\ensuremath{\operatorname{Id}}\,-S_k\circ R_k
\end{equation}
and let $x\in\bigcap_{j\in J}C_j$.
Then, for every $k\in K$, \eqref{e:1}
$\Leftrightarrow$ $S_k(R_k{x})=S_kr_k$ $\Leftrightarrow$
${x}=S_kr_k+{x}-S_k(R_k{x})$
$\Leftrightarrow$ ${x}\in\ensuremath{\text{\rm Fix}\,} T_k$. A key observation at
this point is that \eqref{e:10} implies that the operators
$(T_k)_{k\in K}$ are firmly nonexpansive, hence firmly
quasinonexpansive.
If $j\in J_1$, per Assumption~\ref{a:1}\ref{a:1i}, the set
$C_j$ will be activated in the algorithm through the use of the
operator $T_j=\ensuremath{\text{\rm proj}}_{C_j}$, which is firmly nonexpansive
\cite[Proposition~4.16]{Livre1}.
On the other hand, if $j\in J\smallsetminus J_1$,
the convex inequality representation of
Assumption~\ref{a:1}\ref{a:1ii} will lead
to an activation of $C_j$ through its subgradient projector.
Recall that the subgradient projection of $x\in\ensuremath{{\mathcal H}}$ onto $C_j$
relative to $u_j\in\partial f_j(x)$ is
\begin{equation}
\label{e:7}
T_jx=
\begin{cases}
x-\dfrac{f_j(x)}{\|u_j\|^2}u_j,&\text{if}\;\;f_j(x)>0;\\
x,&\text{if}\;\;f_j(x)\leqslant 0,
\end{cases}
\end{equation}
and that $T_j$ is firmly quasinonexpansive, with $\ensuremath{\text{\rm Fix}\,} T_j=C_j$
\cite[Proposition~29.41]{Livre1}.
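For illustration, the subgradient projector \eqref{e:7} can be prototyped in a
few lines; the sketch below and its variable names are ours and are meant only to
fix ideas. The function \texttt{f1} is the finite-difference constraint
$f_1=\|D\cdot\|-\gamma_1$ used in Section~\ref{sec:31}, for which
$u=D^{\top}Dx/\|Dx\|$ is a valid subgradient whenever $Dx\neq 0$ (an assumption of
the sketch).
\begin{verbatim}
import numpy as np

def subgrad_proj(x, f, subgrad):
    # Subgradient projection of x onto {y : f(y) <= 0}.
    fx = f(x)
    if fx <= 0:
        return x
    u = subgrad(x)              # any subgradient of f at x
    return x - (fx / np.dot(u, u)) * u

gamma1 = 1.17
D  = lambda x: np.diff(x)       # finite differences
Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))  # adjoint of D
f1 = lambda x: np.linalg.norm(D(x)) - gamma1
g1 = lambda x: Dt(D(x)) / np.linalg.norm(D(x))  # gradient of ||D.|| when Dx != 0

x = np.random.randn(2048)
print(f1(x), f1(subgrad_proj(x, f1, g1)))       # constraint value before/after
\end{verbatim}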
The advantage of the subgradient projector onto $C_j$ is that,
unlike the exact projector, it does not require solving a
nonlinear best approximation problem, which makes it much easier
to implement in the presence of convex inequality constraints
\cite{Imag97}. Altogether, \eqref{e:2} is equivalent to the
common fixed point problem
\begin{equation}
\label{e:23}
\text{find}\;\;x\in\bigcap_{i\in J\cup K}\ensuremath{\text{\rm Fix}\,} T_i,
\end{equation}
where each $T_i$ is firmly quasinonexpansive. This allows us to
solve \eqref{e:2} as follows.
\begin{theorem}{\rm\cite{Ibap20}}
\label{t:1}
Consider the setting of problem \eqref{e:2} under
Assumption~\ref{a:1}. Let
$x_0\in\ensuremath{{\mathcal H}}$, let $0<\varepsilon<1/\text{\rm card}(J\cup K)$, and
set $(\forall k\in K)$ $p_k=S_kr_k$ and $F_k=S_k\circ R_k$. Iterate
\begin{equation}
\label{e:alg}
\hskip -0.6mm
\begin{array}{l}
\text{for}\;\;n=0,1,\ldots\\
\left\lfloor
\begin{array}{l}
\ensuremath{{\varnothing}}\neq I_n\subset J\cup K\\
\{\omega_{i,n}\}_{i\in I_n}\subset[\varepsilon,1],\;
\sum_{i\in I_n}\omega_{i,n}=1\\
\text{for every}\;\;i\in I_n\\
\left\lfloor
\begin{array}{l}
\text{if}\;\;i\in J_1\\
\left\lfloor
\begin{array}{l}
y_{i,n}=\ensuremath{\text{\rm proj}}_{C_i}x_n-x_n\\
\end{array}
\right.\\
\text{if}\;\;i\in J\smallsetminus J_1\\
\left\lfloor
\begin{array}{l}
u_{i,n}\in\partial f_i(x_n)\\
y_{i,n}=
\begin{cases}
-\dfrac{f_i(x_n)}{\|u_{i,n}\|^2}u_{i,n}
&\text{if}\;f_i(x_n)>0\\
0,&\text{if}\;f_i(x_n)\leqslant 0
\end{cases}
\end{array}
\right.\\
\text{else}\\
\left\lfloor
y_{i,n}=p_i-F_ix_n
\right.\\
\nu_{i,n}=\|y_{i,n}\|\\
\end{array}
\right.\\
\nu_n=\sum_{i\in I_n} \omega_{i,n} \nu_{i,n}^2\\
\text{if}\;\nu_n=0\\
\left\lfloor
\begin{array}{l}
x_{n+1}=x_n
\end{array}
\right.\\
\text{else}\\
\left\lfloor
\begin{array}{l}
y_n=\sum_{i\in I_n}\omega_{i,n} y_{i,n}\\
\Lambda_n=\nu_n/\|y_n\|^2\\
\lambda_n\in[\varepsilon,(2-\varepsilon)\Lambda_n]\\
x_{n+1}=x_n+\lambda_n y_n.
\end{array}
\right.\\[6.8mm]
\end{array}
\right.\\
\end{array}
\end{equation}
Suppose that there exists an integer $M>0$ such
that
\begin{equation}
\label{e:33}
(\forall n\in\ensuremath{\mathbb N})\quad\bigcup_{m=0}^{M-1}I_{n+m}=J\cup K.
\end{equation}
Then $(x_n)_{n\in\ensuremath{\mathbb N}}$ converges to a solution to \eqref{e:2}.
\end{theorem}
When $K=\ensuremath{{\varnothing}}$, \eqref{e:alg} coincides with the
extrapolated method of parallel subgradient projections (EMOPSP)
of \cite{Imag97}. It has in addition the ability to
incorporate the constraints \eqref{e:1}, while maintaining the
attractive features of EMOPSP. First, it can process
the operators in blocks of variable size.
The control scheme \eqref{e:33} just
imposes that every operator be activated at least once within any
$M$ consecutive iterations. Second, because the extrapolation
parameters $(\Lambda_n)_{n\in\ensuremath{\mathbb N}}$ can attain
large values in $\left[1,+\infty\right[$, large steps are possible,
which lead to fast convergence compared to standard relaxation
schemes, where $\Lambda_n\equiv 1$.
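To make the mechanics of \eqref{e:alg} concrete, we include a minimal sketch of a
single iteration for one activated block (the code and its names are ours, not an
official implementation). Each entry of \texttt{ops} is assumed to return the
displacement $y_{i,n}$ of \eqref{e:alg}, i.e., $\ensuremath{\text{\rm proj}}_{C_i}x_n-x_n$, the
subgradient step, or $p_i-F_ix_n$.
\begin{verbatim}
import numpy as np

def iterate(x, ops, weights, eps=1e-2):
    # One iteration of the extrapolated block-iterative scheme.
    ys = [op(x) for op in ops]
    nu = sum(w * np.linalg.norm(y)**2 for w, y in zip(weights, ys))
    if nu == 0:
        return x                      # x is fixed by every activated operator
    y = sum(w * yi for w, yi in zip(weights, ys))
    Lam = nu / np.linalg.norm(y)**2   # extrapolation parameter, always >= 1
    lam = 1.99 * Lam                  # any value in [eps, (2 - eps) * Lam]
    return x + lam * y
\end{verbatim}
A full run then sweeps the index sets $I_n$ so that the control condition
\eqref{e:33} holds.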
\section{Applications}
\label{sec:3}
We illustrate several instances of \eqref{e:2}, develop tractable
reformulations of the form \eqref{e:22}, and solve them using
\eqref{e:alg}, where $x_0=0$ and the relaxation strategy
is that recommended in \cite[Chapter~5]{Aiep96}, namely
\begin{equation}
\label{e:lsteps}
(\forall n\in\ensuremath{\mathbb N})\quad\lambda_n=
\begin{cases}
\Lambda_n/2,&\text{if}\;\;n=0\mod 3;\\
1.99\Lambda_n,&\text{otherwise.}
\end{cases}
\end{equation}
\subsection{Restoration from distorted signals}
\label{sec:31}
The goal is to recover the original form of the $N$-point
($N=2048$) signal $\overline{x}$ from the following (see
Fig.~\ref{fig:1}):
\begin{itemize}
\item
A bound $\gamma_1$ on the energy of the finite differences of
$\overline{x}$, namely $\|D\overline{x}\|\leqslant\gamma_1$, where
$D\colon(\xi_i)_{i\in\{0,\ldots,N-1\}}\mapsto
(\xi_{i+1}-\xi_i)_{i\in\{0,\ldots,N-2\}}$. The bound is given
from prior information as $\gamma_1=1.17$.
\item
A distortion $r_2=R_2\overline{x}$,
where $R_2$ clips componentwise to $[-\gamma_2,\gamma_2]$
($\gamma_2=0.1$) \cite[Section~10.5]{Tarr18}.
\item
A distortion $r_3=R_3\overline{x}$ of a low-pass
version of $\overline{x}$, where
$R_3=Q_{3}\circ L_{3}$. Here $L_{3}$ bandlimits by zeroing all
but the $83$ lowest-frequency coefficients of the Discrete
Fourier Transform, and $Q_{3}$ induces componentwise distortion
via the operator \cite[Section~10.6]{Tarr18}
$\theta_3=(2/\pi)\arctan(\gamma_{3}\:\cdot)$,
where $\gamma_{3}=10$ (see Fig.~\ref{fig:2'}).
\end{itemize}
\begin{figure}
\caption{Signals in Section~\ref{sec:31}.}
\label{fig:1}
\end{figure}
The solution space is the standard Euclidean space
$\ensuremath{{\mathcal H}}=\mathbb{R}^N$. To formulate the recovery problem as an
instance of \eqref{e:2}, set $J=\{1\}$, $J_1=\ensuremath{{\varnothing}}$, $K=\{2,3\}$,
and $C_1=\menge{x\in\ensuremath{{\mathcal H}}}{f_1(x)\leqslant 0}$, where
$f_1=\|D\cdot\|-\gamma_1$. Then the objective is to
\begin{multline}
\label{e:1d}
\text{find}\;\;x\in C_1\;\;\text{such that}\;\;R_2x=r_2\;\;
\text{and}\;\;R_3x=r_3.
\end{multline}
Next, let us verify that Assumption~\ref{a:1}\ref{a:1iii} is
satisfied. On the one hand, since $R_2$ is the projection onto
the closed convex set $[-\gamma_2,\gamma_2]^{N}$, it is firmly
nonexpansive, so we set
$S_2=\ensuremath{\operatorname{Id}}\,$. On the other hand, if we set
$S_3=\gamma_{3}^{-1}L_3$, then $S_3\circ R_3$ is
firmly nonexpansive and satisfies \eqref{e:ai} \cite{Ibap20}.
We thus obtain an instance of \eqref{e:22}, to which we
apply \eqref{e:alg} with \eqref{e:lsteps} and
$(\forall n\in\mathbb{N})$
$I_n=J\cup K$ and $(\forall i\in I_n)$ $\omega_{i,n}=1/3$.
The recovered signal shown in Fig.~\ref{fig:1} effectively
incorporates the information from the prior constraint and
the nonlinear distortions.
\begin{figure}
\caption{Distortion operator $\theta_{3}$.}
\label{fig:2'}
\end{figure}
\subsection{Reconstruction from thresholded scalar products}
\label{sec:32}
The goal is to recover the original form of the $N$-point
($N=1024$) signal $\overline{x}$ shown in Fig.~\ref{fig:sig}
from thresholded scalar products $(r_k)_{k\in K}$ given by
\begin{multline}
\label{e:94}
(\forall k\in K)\quad r_k=R_k\overline{x},\quad\text{with}\\
R_k\colon\ensuremath{{\mathcal H}}\to\ensuremath{\mathbb{R}}\colon x\mapsto Q_{\gamma}\scal{x}{e_k},
\end{multline}
where
\begin{itemize}
\item
$(e_k)_{k\in K}$ is a collection of normalized vectors in $\ensuremath{\mathbb{R}}^N$
with zero-mean i.i.d. entries.
\item
$Q_{\gamma}$ ($\gamma=0.05$) is the thresholding operator
\begin{equation}
\label{e:22-1}
Q_{\gamma}\colon\xi\mapsto
\begin{cases}
\ensuremath{\text{\rm sign}}(\xi)\sqrt{\xi^2-\gamma^2},
&\text{if}\;\;|\xi|>\gamma;\\
0,&\text{if}\;\;|\xi|\leqslant\gamma
\end{cases}
\end{equation}
of \cite{Taov00} (see Fig.~\ref{fig:thr}).
\item
$K=\{1,\ldots,m\}$, where $m=1200$.
\end{itemize}
The solution space $\ensuremath{{\mathcal H}}$ is the standard Euclidean space
$\mathbb{R}^{N}$, and \eqref{e:94}
gives rise to the special case of \eqref{e:2}
\begin{equation}
\label{e:sig2}
\text{find}\;\;x\in\ensuremath{{\mathcal H}}\;\;\text{such that}\;\;(\forall k\in K)
\quad r_k=Q_{\gamma}\scal{x}{e_k},
\end{equation}
in which $J=\ensuremath{{\varnothing}}$.
\begin{figure}
\caption{Original signal $\overline{x}$.}
\label{fig:sig}
\end{figure}
\noindent
Note that the standard soft-thresholder on $[-\gamma,\gamma]$
can be written as
\begin{equation}
\label{e:st-r}
\soft{\gamma}\colon\xi\mapsto
\ensuremath{\text{\rm sign}}(Q_{\gamma}\xi)
\left(\sqrt{(Q_{\gamma}\xi)^2+\gamma^2}-\gamma\right).
\end{equation}
To formulate \eqref{e:22} we set, for every $k\in K$,
\begin{equation}
\label{e:ex2-p}
S_k\colon\ensuremath{\mathbb{R}}\to\ensuremath{{\mathcal H}}\colon\xi\mapsto
\ensuremath{\text{\rm sign}}(\xi)\left(\sqrt{\xi^2+\gamma^2}-\gamma\right)e_k,
\end{equation}
which fulfills Assumption~\ref{a:1}\ref{a:1iii} and yields
$S_k\circ R_k=(\soft{\gamma}\scal{\cdot}{e_k})e_k$ \cite{Ibap20}.
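This identity is easy to verify numerically. The sketch below (ours, for
illustration only) implements \eqref{e:22-1}, \eqref{e:94}, and \eqref{e:ex2-p}
for a single index $k$ and checks the factorization.
\begin{verbatim}
import numpy as np
gamma = 0.05

def Q(xi):       # the thresholder Q_gamma
    return np.sign(xi) * np.sqrt(xi**2 - gamma**2) if abs(xi) > gamma else 0.0

def soft(xi):    # standard soft-thresholder on [-gamma, gamma]
    return np.sign(xi) * max(abs(xi) - gamma, 0.0)

N = 1024
e_k = np.random.randn(N); e_k /= np.linalg.norm(e_k)
R_k = lambda x: Q(x @ e_k)
S_k = lambda xi: np.sign(xi) * (np.sqrt(xi**2 + gamma**2) - gamma) * e_k

x = np.random.randn(N)
print(np.allclose(S_k(R_k(x)), soft(x @ e_k) * e_k))   # True
\end{verbatim}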
We apply \eqref{e:alg} with \eqref{e:lsteps} and the following
control scheme. We split $K$ into $12$ blocks of $100$ consecutive
indices, and select $I_n$ by periodically sweeping through the
blocks, hence satisfying \eqref{e:33} with $M=12$. Moreover,
$\omega_{i,n}\equiv 1/100$. The reconstructed signal shown in
Fig.~\ref{fig:sig} illustrates the ability of the proposed approach
to effectively exploit nonlinearly generated data.
\begin{figure}
\caption{The thresholder \eqref{e:22-1}.}
\label{fig:thr}
\end{figure}
\subsection{Image recovery}
\label{sec:33}
The goal is to recover the $N\times N$ ($N=256$) image
$\overline{x}$ from the following (see Fig.~\ref{fig:IM1}):
\begin{itemize}
\item
The Fourier phase $\angle\ensuremath{\operatorname{DFT}}\,(\overline{x})$
($\ensuremath{\operatorname{DFT}}\,(\overline{x})$ denotes the 2D Discrete Fourier Transform
of $\overline{x}$).
\item
The pixel values of $\overline{x}$ reside in $[0,255]$.
\item
An upper bound $\gamma_3$ on the total variation
$\operatorname{tv}(\overline{x})$ \cite{Imag04}. In this
experiment, $\gamma_3=1.2\operatorname{tv}
(\overline{x})=1.10\times 10^{6}$.
\item
A compressed representation $r_4=R_4\overline{x}$. Here,
$R_4=Q_4\circ W$, where $W$ is the 2D Haar wavelet transform and
$Q_4$ performs componentwise hard-thresholding via ($\rho=325$)
\begin{equation}
(\forall\xi\in\ensuremath{\mathbb{R}})\quad\hard{\rho}\xi=
\begin{cases}
\xi,&\text{if}\;\;|\xi|>\rho;\\
0,&\text{if}\;\;|\xi|\leqslant\rho.
\end{cases}
\end{equation}
\item
A down-sampled blurred image $r_5=R_5\overline{x}$.
Here $R_5=Q_5\circ H_5$, where the linear operator
$H_5\colon\ensuremath{\mathbb{R}}^{N\times N}\to\ensuremath{\mathbb{R}}^{N\times N}$
convolves with a $5\times 5$ Gaussian kernel with variance $1$,
and $Q_5\colon\ensuremath{\mathbb{R}}^{N\times N}\to\ensuremath{\mathbb{R}}^{8\times 8}$ maps the average
of each of the 64 disjoint $32\times 32$ blocks of an $N\times N$
image to a representative pixel in an $8\times 8$ image
\cite{Nasr14}.
\end{itemize}
The solution space is $\ensuremath{{\mathcal H}}=\ensuremath{\mathbb{R}}^{N\times N}$ equipped with the
Frobenius norm $\|\cdot\|$. To cast the recovery task as an
instance of \eqref{e:2}, we set $J=\{1,2,3\}$, $J_1=\{1,2\}$,
$K=\{4,5\}$,
$C_1=\menge{x\in\ensuremath{{\mathcal H}}}{\angle\ensuremath{\operatorname{DFT}}\,(x)=\angle\ensuremath{\operatorname{DFT}}\,(\overline{x})}$,
$C_2=[0,255]^{N\times N}$, $f_3=\operatorname{tv}-\gamma_3$,
and $C_3=\menge{x\in\ensuremath{{\mathcal H}}}{f_3(x)\leqslant 0}$. Expressions for
$\ensuremath{\text{\rm proj}}_{C_1}$ and $\partial f_3$ are provided in \cite{Levi83} and
\cite{Imag04}, respectively. The objective is to
\begin{equation}
\label{e:im3}
\text{find}\;\;x\in\bigcap_{j=1}^3C_j\;\;\text{such that}\;\;
\begin{cases}
R_4x=r_4;\\
R_5x=r_5.
\end{cases}
\end{equation}
Let us verify that Assumption~\ref{a:1}\ref{a:1iii} holds.
For every $\xi\in\ensuremath{\mathbb{R}}$,
\begin{equation}
\label{e:ht-st}
\quad\soft{\rho}\xi =
\hard{\rho}\xi+\begin{cases}
-\rho,&\text{if}\;\;\hard{\rho}\xi>\rho;\\
0,&\text{if}\;\;-\rho\leqslant\hard{\rho}\xi\leqslant\rho;\\
\rho,&\text{if}\;\;\hard{\rho}\xi<-\rho.
\end{cases}
\end{equation}
\begin{figure}
\caption{Images from Section~\ref{sec:33}.}
\label{fig:IM1}
\end{figure}
\noindent We construct $S_4$ such that
$S_4\circ R_4=W^{-1}\circ T\circ W$,
where $T$ applies $\soft{\rho}$ componentwise. In turn, recalling
that $r_4$ is the result of hard-thresholding, $S_4r_4$ is
built by first adding the quantity on the right-hand side of
\eqref{e:ht-st} to $r_4$ componentwise, and then applying the
inverse Haar transform. This guarantees that $S_4$ satisfies
Assumption~\ref{a:1}\ref{a:1iii} \cite{Ibap20}.
Next, we let $D_5\subset\ensuremath{{\mathcal H}}$ be the subspace of
$32\times 32$-block-constant matrices and construct an
operator $S_5$ satisfying Assumption~\ref{a:1}\ref{a:1iii}
and the identity
$S_5\circ R_5=H_5\circ\ensuremath{\text{\rm proj}}_{D_5}\circ H_5$ \cite{Ibap20}. In turn,
$S_5r_5=H_5s_5$, where $s_5\in D_5$ is built by repeating
each pixel value of $r_5$ in the block it represents. We thus
arrive at an instance of \eqref{e:22}, which we solve
using \eqref{e:alg} with \eqref{e:lsteps} and
\begin{equation}
(\forall n\in\ensuremath{\mathbb N})\;\;I_n=J\cup K\;\text{and}\;
(\forall i\in I_n)\;\omega_{i,n}=1/5.
\end{equation}
The resulting image displayed in Fig.~\ref{fig:IM1}(d) shows that
our framework makes it possible to exploit the information from
the three prior constraints and from the transformations $r_4$ and
$r_5$ to obtain a quality recovery.
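As a complement, the two elementary operations entering $S_5$, namely the
projection onto the subspace $D_5$ of block-constant images and the construction
of $s_5$ from $r_5$, can be sketched as follows (the code is ours and the blur
$H_5$ is deliberately omitted).
\begin{verbatim}
import numpy as np
b = 32

def proj_D5(x):              # projection onto 32x32-block-constant images
    n = x.shape[0] // b
    m = x.reshape(n, b, n, b).mean(axis=(1, 3))
    return np.kron(m, np.ones((b, b)))

def s5_from(r5):             # repeat each pixel of the 8x8 image over its block
    return np.kron(r5, np.ones((b, b)))

x  = np.random.rand(256, 256)
r5 = x.reshape(8, b, 8, b).mean(axis=(1, 3))            # block averages (no blur)
print(np.allclose(proj_D5(s5_from(r5)), s5_from(r5)))   # s_5 lies in D_5
\end{verbatim}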
\section{Inconsistent problems}
\label{sec:4}
Inaccuracies and unmodeled dynamics may cause \eqref{e:2} to admit
no solution. In such instances, we propose the following
relaxation for \eqref{e:2} \cite{Ibap20}.
\begin{assumption}
\label{a:r}
For every $j\in J$, the operator $\ensuremath{\text{\rm proj}}_{C_j}$ is
easily implementable and, for every $k\in K$,
Assumption~\ref{a:1}\ref{a:1iii} holds. In addition,
$\{\omega_j\}_{j\in J}\subset\left]0,1\right]$ and
$\{\omega_k\}_{k\in K}\subset\left]0,1\right]$ satisfy
$\sum_{j\in J}\omega_j+\sum_{k\in K}\omega_k=1$.
\end{assumption}
\noindent Under Assumption~\ref{a:r}, the goal is to
\begin{multline}
\label{e:r}
\text{find}\;\;x\in\ensuremath{{\mathcal H}}\;\;\text{such that}\\
\sum_{j\in J}\omega_j(x-\ensuremath{\text{\rm proj}}_{C_j}x)+\sum_{k\in K}
\omega_k(S_kR_kx-S_kr_k)=0.
\end{multline}
When $K=\varnothing$, the solutions of \eqref{e:r} are the
minimizers of the least squared-distance proximity function
$\sum_{j\in J}\omega_j d^2_{C_j}$ \cite{Aiep96}. If
\eqref{e:2} does have solutions, then it is equivalent to
\eqref{e:r}. The algorithm of \cite{Comb20} can be used to solve
\eqref{e:r} block-iteratively.
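This equivalence for $K=\varnothing$ can be seen directly: each function
$d_{C_j}^2/2$ is differentiable with
$\nabla\big(d_{C_j}^2/2\big)=\ensuremath{\operatorname{Id}}\,-\ensuremath{\text{\rm proj}}_{C_j}$ (see \cite{Livre1}), so that
\eqref{e:r} reduces to
\begin{equation}
\nabla\bigg(\frac{1}{2}\sum_{j\in J}\omega_j d_{C_j}^2\bigg)(x)
=\sum_{j\in J}\omega_j\big(x-\ensuremath{\text{\rm proj}}_{C_j}x\big)=0,
\end{equation}
which is Fermat's rule for the convex function $\sum_{j\in J}\omega_j d^2_{C_j}$.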
\end{document} |
\begin{document}
\fontsize{.5cm}{.5cm}\selectfont\sf
\title[Schubert Draft]{Double Quantum Schubert Cells and\\ Quantum Mutations}
\date{\today}
\author{Hans P. Jakobsen}
\address{
Department of Mathematical Sciences\\ University of
Copenhagen\\Universitetsparken 5\\
DK-2100, Copenhagen,
Denmark} \email{jakobsen@math.ku.dk}
\begin{abstract}Let ${\mathfrak p}\subset {\mathfrak g}$ be a parabolic
subalgebra of a simple finite-dimensional Lie algebra over ${\mathbb C}$. To
each pair $w^{\mathfrak a}\leq w^{\mathfrak c}$ of minimal left coset
representatives in the quotient space $W_p\backslash W$ we construct explicitly
a quantum seed ${\mathcal Q}_q({\mathfrak a},{\mathfrak
c})$. We define Schubert creation and annihilation mutations and show that our
seeds are related by such mutations. We also introduce more elaborate seeds to
accommodate our mutations. The quantized Schubert Cell decomposition
of the quantized generalized flag manifold can be viewed as the result of such
mutations having their origins in the pair $({\mathfrak a},{\mathfrak c})=
({\mathfrak e},{\mathfrak p})$, where the empty string ${\mathfrak e}$
corresponds to the neutral element. This makes it possible to give simple proofs
by induction. We exemplify this in three directions: Prime ideals, upper cluster
algebras, and the diagonal of a quantized minor.
\end{abstract}
\subjclass[2010]{MSC 17B37 (primary),\ MSC 13F60, \ MSC 16T20 (primary), \ MSC 17A45 (secondary), \and MSC 20G42 (secondary)}
\maketitle
\section{Introduction}
We study a class of quadratic algebras connected to quantum parabolics and
double quantum Schubert cells. We begin by considering a finite-dimensional
simple Lie algebra ${\mathfrak g}$ over ${\mathbb C}$ and a parabolic
sub-algebra ${\mathfrak p}\subset{\mathfrak g}$. Then we consider a fixed Levi
decomposition
\begin{equation}
{\mathfrak p}={\mathfrak l}+{\mathfrak u},
\end{equation}
with ${\mathfrak u}\neq 0$ and ${\mathfrak l}$ the Levi subalgebra.
The main references for this study are the articles by A. Berenstein and A.
Zelevinsky \cite{bz} and by C. Geiss, B. Leclerc, and J. Schr\"oer \cite{leclerc}.
We also refer to \cite{jak-cen} for further background.
Let, as usual, $W$ denote the Weyl group. Let $W_p=\{w\in W\mid
w(\triangle^-)\cap \triangle^+\subseteq \triangle^+({\mathfrak l}) \}$ and
$W^p$, called by some the Hasse diagram of $G\backslash P$, denote the usual
set of minimal length coset representatives of $W_p\backslash W$. Our primary
input is a pair of Weyl group elements $w^{\mathfrak a},w^{\mathfrak c}\in W^p$
such that $w^{\mathfrak a}\leq w^{\mathfrak c}$. We will often, as here, label
our elements $w$ by ``words'' ${\mathfrak a}$; $w=w^{\mathfrak a}$, in a
fashion similar, though not completely identical, to that of \cite{bz}.
Details
follow in later sections, but we do mention here that the element $e$ in $W$ is
labeled by ${\mathfrak e}$ corresponding to the empty string;
$e=\omega^{\mathfrak e}$, while the longest element in $W^p$ is labeled by
${\mathfrak p}$.
To each pair $w^{\mathfrak a},w^{\mathfrak c}$ as above we construct explicitly
a quantum seed
\begin{equation}{\mathcal Q}_q({\mathfrak a},{\mathfrak
c}):=({\mathcal C}_q({\mathfrak a},{\mathfrak
c}), {\mathcal L}_q({\mathfrak a},{\mathfrak
c}), {\mathcal B}_q({\mathfrak a},{\mathfrak
c})).\end{equation}
The cluster ${\mathcal C}_q({\mathfrak a},{\mathfrak
c})$ generates a quadratic algebra ${\mathcal A}_q({\mathfrak a},{\mathfrak
c})$ in the space of functions on ${\mathcal U}_q({\mathfrak n})$.
After that we define transitions
\begin{equation}{\mathcal Q}_q({\mathfrak a},{\mathfrak
c})\rightarrow {\mathcal Q}_q({\mathfrak a}_1,{\mathfrak
c}_1).
\end{equation}
We call our transitions quantum Schubert (creation/annihilation) mutations and
prove that they are indeed just (composites of) quantum mutations in the sense
of Berenstein and Zelevinsky. These actually have to be augmented by what we
call creation/annihilation mutations which are necessary since we have to work
inside a larger ambient space. To keep the full generality, we may also have to
restrict our seeds to sub-seeds.
The natural scene turns out to be
\begin{equation}{\mathcal Q}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}):=({\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal L}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal B}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})),\end{equation}
which analogously is determined by a triple $w^{\mathfrak a},w^{\mathfrak
b},w^{\mathfrak c}\in W^p$ such that $w^{\mathfrak a}\leq w^{\mathfrak b}\leq
w^{\mathfrak c}$.
Later we extend our construction to even \begin{equation}
{\mathcal Q}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\textrm{ and }{\mathcal A}_q({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-1},{\mathfrak r}_n),
\end{equation}
though we do not use it here for anything specific.
It is a major point of this study to establish how our seeds and algebras can
be
constructed, inside an ambient space, starting from a single variable (indeed:
none). In this sense the quantized generalized flag manifold of $(G/P)_q$ as
built from quantized Schubert Cells can be built from a single cell. Furthermore, we prove that we can pass between our seeds by Schubert creation
and
annihilation mutations inside a larger ambient space.
This sets the stage for (simple) inductive arguments, which are a major point of this article and are what we will
pursue here.
We first prove by induction that the two-sided ideal
$I({\det}_{s}^{{\mathfrak
a},{\mathfrak
c}})$ in ${\mathcal A}_q({\mathfrak a},{\mathfrak c})$ generated by the quantized
minor ${\det}_{s}^{{\mathfrak
a},{\mathfrak
c}}$ is prime.
Then we prove that each upper cluster algebra ${\mathbb U}({\mathfrak
a},{\mathfrak c})$ equals its quadratic algebra ${\mathcal A}_q({\mathfrak
a},{\mathfrak c})$.
There is a sizable overlap between these results and results previously
obtained by K. Goodearl and M. Yakimov (\cite{good},\cite{good1}).
We further use our method to study the diagonal of a quantum minor.
The idea of induction in this context was introduced in \cite{jz} and applications were studied in
the case of a specific type of parabolic related to type $A_n$. Further ideas
relating to explicit constructions of compatible pairs in special cases were
studied in \cite{jp}.
\section{A little about quantum groups and cluster algebras}
\subsection{Quantum Groups}
We consider quantized enveloping algebras $U={\mathcal
U}_q({\mathfrak g})$ in
the standard notation given, e.g., by
Jantzen (\cite{jan}) or by Berenstein and Zelevinsky
(\cite{bz}), though their
assumptions do not coincide completely.
To be completely on the safe side, we state our assumptions and
notation, where
they may differ: Our algebra is a Hopf
algebra defined in the usual fashion from a semi-simple
finite-dimensional
complex Lie algebra ${\mathfrak g}$. They are
algebras over ${\mathbb Q}(q)$. $\Phi$ denotes a given set of
roots and throughout, $\Pi=\{\alpha_1,\alpha_2,\dots,\alpha_R\}$ a
fixed choice of simple roots. Our
generators are then given as
$$\{E_\alpha,F_\alpha,K^\alpha\}_{\alpha\in\Pi},$$
but we will allow elements of the form $K^\eta$ for any integer weight. $W$ denotes the Weyl group defined by $\Phi$.
Finally we let $\{\Lambda_\alpha\mid\alpha\in\Pi\}$ denote the set of
fundamental
weights. We assume throughout that the diagonalizing elements $d_\alpha$ are
determined
by
\begin{equation}
\forall \alpha\in\Pi:(\Lambda_\alpha,\alpha)=d_\alpha.
\end{equation}
\begin{Lem}[(2.27) in \cite{fz}]\label{3.1} Let $\alpha_i\in \Pi$. Then
$$(\sigma_i+1)(\Lambda_i)+\sum_{j\neq
i}a_{ji}(\Lambda_j)=0.$$
\end{Lem}
\subsection{Quantum Cluster Algebras}
We take over without further ado the terminology and constructions
of
(\cite{bz}). Results from \cite{leclerc} are also put to good use.
\begin{Def}
We say that two elements $A,B$ in some algebra over ${\mathbb C}$ $q$-commute
if, for some $r\in{\mathbb R}$:
\begin{equation}AB=q^rBA.
\end{equation}
\end{Def}
To distinguish between the original mutations and the more elaborate ones we need here, and to honor the founding fathers A. Berenstein, S. Fomin,
and
A. Zelevinsky, we use the following terminology:
\begin{Def}A quantum mutation as in \cite{bz} is called a BFZ-mutation. \end{Def}
\subsection{A simple observation}
If $\underline{a}=(a_1,a_2,\dots,a_{\mathfrak m})$ and
$\underline{f}=(f_1,f_2,\dots,f_{\mathfrak m})$ are vectors
then\begin{Lem}{(\cite{jz})}\label{2.22}\begin{equation}
{\mathcal L}_q(\underline{a})^T=(\underline{f}
)^T\Leftrightarrow\forall
i:X_iX^{\underline{a}}=q^{f_i}X^{\underline{a}}X_i.\end{equation}
In particular, if there exists a $j$ such that $\forall i:
f_i=-\delta_{i,j}$
then the column vector $\underline{a}$ can be the $j$th column
in the matrix ${\mathcal B}$
of a compatible pair.
\end{Lem}
\noindent However simple this observation is, it will be of great
importance later
on.
\section{On Parabolics}
The
origin of the following lies in A.
Borel \cite{borel}, and B. Kostant \cite{kos}. Other main
contributors are
\cite{bgg} and
\cite{stein}. See also \cite{cap}. We have also found \cite{sager} useful.
\begin{Def} Let $w\in W$. Set $$\Phi_\omega=\{\alpha\in \Delta^+\mid
w^{-1}\alpha\in
\Delta^-\}=w( \Delta^-)\cap \Delta^+.$$\end{Def}
We have that $\ell(w)=\ell(w^{-1})=\vert\Phi_\omega\vert$.
We set
$\Phi_\omega=\Delta^+(w)$.
From now on, we work with a fixed
parabolic
$\mathfrak p$ with a Levi decomposition
\begin{equation}
{\mathfrak p}={\mathfrak l}+{\mathfrak u},
\end{equation}
where ${\mathfrak l}$ is the Levi subalgebra, and where we assume ${\mathfrak
u}\neq 0$.
\begin{Def}
\begin{eqnarray*}
W_p&=&\{w\in W\mid \Phi_\omega\subseteq \Delta^+({\mathfrak l})\},\\
W^p&=&\{w\in W\mid \Phi_\omega\subseteq \Delta^+({\mathfrak u})\}.
\end{eqnarray*}
$W^p$ is a set of distinguished representatives of the right
coset space
$W_p\backslash W$.
\end{Def}
It is well known (see, e.g., \cite{sager}) that any $w\in W$ can be written
uniquely as $w=w_pw^p$ with $w_p\in W_p$ and $w^p\in W^p$.
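To fix ideas, consider ${\mathfrak g}$ of type $A_2$ with simple roots
$\alpha_1,\alpha_2$ and the parabolic determined by
$\Delta^+({\mathfrak l})=\{\alpha_1\}$, so that
$\Delta^+({\mathfrak u})=\{\alpha_2,\alpha_1+\alpha_2\}$. With the definitions
above one finds
$$W_p=\{e,\sigma_1\},\qquad W^p=\{e,\sigma_2,\sigma_2\sigma_1\};$$
for instance
$\Phi_{\sigma_2\sigma_1}=\{\alpha_2,\alpha_1+\alpha_2\}=\Delta^+({\mathfrak u})$,
whereas $\alpha_1\in\Phi_{\sigma_1\sigma_2}$, so that
$\sigma_1\sigma_2\notin W^p$. The factorization of the longest element is
$\sigma_1\sigma_2\sigma_1=\sigma_1\cdot(\sigma_2\sigma_1)$ with $\sigma_1\in W_p$
and $\sigma_2\sigma_1\in W^p$.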
One defines, for each $w$ in the Weyl Group $W$, the Schubert cell $X_w$. This
is a cell in
${\mathbb P}(V)$, the projective space over a specific
finite-dimensional
representation of ${\mathfrak g}$. The closure,
$\overline{X_w}$, is called a
Schubert variety. The main classical theorems are
\begin{Thm}[Kostant,\cite{kos}]$$G/P=\sqcup_{w\in
W^p}X_w.$$\end{Thm}
\begin{Thm}[\cite{stein}]\label{stein}
Let $w,w'\in W^p$. Then $$X_{w'}\subseteq \overline{X_{w}}$$
if and only if
$w'\leq w$ in the usual Bruhat ordering.
\end{Thm}
If $\omega^{\mathfrak r}=\omega_m\tilde\omega$ and
$\omega_{m}=\omega_n\hat\omega$ with $\omega_n,\omega_m\in W^p$
and all Weyl
group elements reduced, we say that $\omega_n<_L\omega_m$ if $\hat\omega\neq
e$.
This is the weak left Bruhat order.
\section{The quadratic algebras}\label{sec4}
Let $\omega=s_{\alpha_1}s_{\alpha_2}\dots
s_{\alpha_t}$ be an element of the Weyl group written in reduced form.
Following
Lusztig (\cite{luz}), we construct roots $\gamma_i=\omega_{i-1}(\alpha_i)$ and
elements $Z_{\gamma_i}\in {\mathcal U}_q({\mathfrak n}_\omega)$.
The following result is well known, but notice a change $q\to q^{-1}$ in
relation to
(\cite{jak-cen}).
\begin{Thm}[\cite{lev},\cite{lev0}] \label{4.1}Suppose that $1\leq i<j\leq t$. Then
$$Z_{i}Z_{j}=q^{-(
\gamma_i,\gamma_j)}Z_{j}Z_{i} + {\mathcal R}_{ij},$$ where ${\mathcal R}_{ij}$
is of lower order in the sense that it involves only elements $Z_k$ with $i< k<
j$.
Furthermore, the elements
$$Z_t^{a_t}\dots Z_2^{a_2}Z_1^{a_1}$$ with $a_1,a_2,\dots,a_t\in{\mathbb N}_0$
form a basis of ${\mathcal U}_q({\mathfrak n}_\omega)$.
\end{Thm}
Our statement follows \cite{jan},\cite{jan2}. Other authors, e.g.
\cite{lev},
\cite{leclerc}, have used the other Lusztig braid operators. The
result differs
only by an interchange of $q$ and $q^{-1}$. More accessible proofs of this
theorem are
available (\cite{cp},\cite{jan2}).
It is known that this algebra is isomorphic to the algebra of functions on
${\mathcal U}_q({\mathfrak n}_\omega)$ satisfying the usual finiteness
condition. It is analogously equivalent to the algebra of functions on
${\mathcal U}^-_q({\mathfrak n}_\omega)$ satisfying a similar finiteness
condition. See, e.g., \cite{leclerc} and \cite{jan}.
\section{Basic structure}\label{sec5}
Let $\omega^{\mathfrak p}$ be the maximal element in $W^p$. It
is the one which
maps all roots in $\Delta^+({\mathfrak
u})$ to $\Delta^-$. (Indeed: To $\Delta^-({\mathfrak u})$.)
Let $w_0$ be the
longest element in $W$ and $w_L$ the
longest in the Weyl group of ${\mathfrak l}$. Then
\begin{equation}w^{\mathfrak p}w_L=w_0.\end{equation}
Let
$\omega^{\mathfrak
r}=\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_r}\in W^p$ be written in
a fixed reduced form. Then
$\ell(\omega^{\mathfrak
r})=r$. We assume here that $r\geq 1$. We set $e=\omega^{\mathfrak e}$ and
$\ell(\omega^{\mathfrak e})=0$ where ${\mathfrak e}$
denotes the empty set, construed as the empty sequence. We also let ${\mathfrak r}$ denote the sequence
$i_1,i_2,\dots,i_r$ if ${\mathfrak r}\neq {\mathfrak e}$. If a sequence ${\mathfrak s}$ corresponds to
an analogous element $\omega^{\mathfrak s}\in W^p$ we define
\begin{equation}
{\mathfrak s}\leq {\mathfrak r}\Leftrightarrow \omega^{\mathfrak s}\leq_L
\omega^{\mathfrak r}.
\end{equation}
Set
\begin{equation}\Delta^+(\omega^{\mathfrak
r})=\{ \beta_{i_1},\dots,\beta_{i_r}\}.\end{equation}
\begin{Def}
Let
${\mathbf b}$
denote the
map $\Pi\to\{1,2,\dots,R\}$ defined by ${\mathbf
b}(\alpha_i)=i$.
Let $\overline\pi_{\mathfrak r}:\{1,2,\dots,
r\}\to\Pi$ be given by
\begin{equation}\overline\pi_{\mathfrak
r}(j)=\alpha_{i_j}.\end{equation}
If $\overline\pi_{\mathfrak r}(j)=\alpha$ we say that $\alpha$
(or
$\sigma_\alpha$) occurs at position
$j$
in $w^{\mathfrak r}$, and we say that
$\overline\pi_{\mathfrak r}^{-1}(\alpha)$ are the positions at
which $\alpha$
occurs in $w$.
Set \begin{equation}
{\pi}_{\mathfrak r}={\mathbf b}\circ\overline\pi_{\mathfrak r}.
\end{equation}
\end{Def}
$\pi_{\mathfrak e}$ is construed as a map whose image is the empty set.
Recall from (\cite{jak-cen}):
\begin{Def}Let $\omega^{\mathfrak r}\in W^p$ be given and suppose $s\in
Im(\pi_{\mathfrak r})$. Then
$s=\pi_{\mathfrak r}(n)$ for some $n$ and we set
$\omega_n:=\sigma_{i_1}\sigma_{i_2}\cdots\sigma_{i_n}$. Suppose
$\omega_n=\omega_1
\sigma_{i_n}\omega_2\dots\omega_t
\sigma_{i_n}$ and $\omega_i\in
W\setminus\{e\}$ for $i>1$. Further assume that each $\omega_i$ is reduced and
does
not contain any $\sigma_{i_n}$. We denote this simply as $n\leftrightarrow
(s,t)$. We further write
$\beta_{n}\leftrightarrow \beta_{s,t}$ and
\begin{equation}\omega_n\leftrightarrow \omega_{s,t}\end{equation} if $n,s,t$
are
connected as
above. It is convenient to set $\omega_{s,0}=e$ for all $s\in\{1,2,\dots, R\}$.
For a fixed $s\in\{1,2,\dots,R\}$ we let $s_{\mathfrak r}$ denote the
maximal such $t$. If there is no such decomposition we set $t=0$. So, in
particular, $s_{\mathfrak e}=0$, and $s_{\mathfrak r}$ is the number of times $\sigma_s$ occurs in $\omega^{\mathfrak
r}$.
Finally we set (cf. (\cite{jak-cen}))
\begin{equation}
{\mathbb U}({\mathfrak r})=\{(s,t)\in {\mathbb N}\times {\mathbb
N}_0\mid
1\leq s\leq
R\textrm{
and }0\leq t\leq s_{\mathfrak r}\}.
\end{equation}
\end{Def}
Notice that if $(s,t)\in{\mathbb U}({\mathfrak r})$ then we may
construct a subset
${\mathbb U}({\mathbf s}, {\mathbf t})$ of ${\mathbb U}$ by the above recipe,
replacing
$\omega^{\mathfrak r}$ by $\omega_{s,t}$. In this subset $t$ is
maximal. Likewise, if ${\mathfrak s}\leq {\mathfrak r}$ we have of course
${\mathbb U}({\mathfrak s})\subseteq {\mathbb U}({\mathfrak r})$ and may set
${\mathbb U}({\mathfrak r}\setminus {\mathfrak s})={\mathbb U}({\mathfrak
r})\setminus {\mathbb U}({\mathfrak s})$.
\section{Key structures and background results}
\subsection{Quantized minors}
Following a construction of classical minors by S. Fomin and
A. Zelevinsky
\cite{fz}, the last mentioned and A. Berenstein have introduced a
family of quantized
minors $\Delta_{u\cdot\lambda,v\cdot\lambda}$ in \cite{bz}. These
are elements of
the quantized coordinate ring ${\mathcal O}_q(G)$. The results by K. Brown and
K. Goodearl (\cite{brown}) were important in this process.
The element
$\Delta_{u\cdot\lambda,v\cdot\lambda}$ is determined by $u,v\in W$ and a
positive weight $\lambda$. We will always assume that $u\leq_L v$.
\subsection{Identifications} There is a well-known pairing
between
${\mathcal
U}^{\leq}$ and ${\mathcal U}^{\geq}$ (\cite{jan}) and there is a
unique
bilinear
form on ${\mathcal U}_q({\mathfrak n})$. With this we can
identify $({\mathcal
U}^{\geq})^*$ with ${\mathcal U}^{\geq}$.
One can even define a product in $({\mathcal U}_q({\mathfrak
n}))^*$ that makes
it isomorphic to ${\mathcal U}_q({\mathfrak n})$ \cite{leclerc}.
We can in this
way identify the elements $\Delta_{u\cdot\langlembda,v\cdot\langlembda}$
with elements of ${\mathcal U}^{\geq}$.
\subsection{Key results from \cite{bz} and \cite{leclerc}}
The
quantized minors are by definitions functions on ${\mathcal
U}_q({\mathfrak g})$
satisfying certain finiteness conditions.
What is needed first are certain commutation relations that they
satisfy.
Besides this, they can be restricted to being functions on
${\mathcal
U}_q({\mathfrak b})$ and even on ${\mathcal U}_q({\mathfrak
n})$. Our main references here are (\cite{bz}) and (\cite{leclerc}); the
details of the following can be found in the latter.
\begin{Lem}[\cite{bz}]The
element $\triangle_{u\lambda,v\lambda}$ indeed depends only on
the weights
$u\lambda,v\lambda$, not on the choices of
$u, v$ and their reduced words.
\end{Lem}
\begin{Thm}[A version of Theorem~10.2 in \cite{bz}]
\label{10.2}For any
$\lambda,\mu\in P^+$, and $s, s', t, t' \in W$ such that
$$\ell(s's) = \ell(s') + \ell(s), \quad \ell(t't) = \ell(t') + \ell(t),$$
the
following holds:
$$ \triangle_{s's\lambda,t'\lambda} \cdot \triangle_{s'\mu,t't\mu} =q^{(s\lambda |
\mu) - (\lambda |
t\mu)}\triangle_{s'\mu,t't\mu} \cdot
\triangle_{s's\lambda,t'\lambda}.$$
\end{Thm}
It is very important for the following that the conditions essentially are on the Weyl group elements. The requirement
on $\lambda,\mu$ is furthermore independent of those.
An equally important fact we need is the following $q$-analogue of
\cite[Theorem~1.17]{fz}:
\begin{Thm}[\cite{leclerc}, Proposition~3.2]\label{3.2}
Suppose that for $u,v\in W$ and $i\in I$ we have
$l(us_i)=l(u)+1$ and
$l(vs_i)=l(v)+1$. Then
\begin{equation}\label{eq3.2}
\Delta_{us_i(\Lambda_i),vs_i(\Lambda_i)}\,\Delta_{u(\Lambda_i),v(\Lambda_i)}=
({q^{-d_i}})\Delta_{us_i(\Lambda_i),v(\Lambda_i)}\,\Delta_{
u(\Lambda_i),
vs_i(\Lambda_i)}+
\prod_{j\neq i}\Delta_{u(\Lambda_j),v(\Lambda_j)}^{-a_{ji}}
\end{equation}
holds in ${\mathcal O}_q(\frak g)$.
\end{Thm}
(That a factor $q^{-d_i}$ must be inserted for the general case is clear.)
One considers in \cite{leclerc}, and transformed to our terminology, modified
elements
\begin{equation}\label{59}D_{\xi,\eta}=\triangle_{\xi,\eta}K^{-\eta}.\end{equation}
We suppress here the restriction map $\rho$, and our $K^{-\eta}$ is denoted as
$\triangle^\star_{\eta,\eta}$ in \cite{leclerc}. The crucial property is that
\begin{equation}
K^{-\eta}\triangle_{\xi_1,\eta_1}=q^{-(\eta,\xi_1-\eta_1)}\triangle_{\xi_1,
\eta_1}K^{-\eta}.
\end{equation}
The family $D_{\xi,\eta}$
satisfies equations analogous to those in Theorem~\ref{10.2} subject to the
same restrictions on the relations between the weights.
The following result is important:
\begin{Prop}[\cite{leclerc}]
Up to a power of $q$, the following holds:
\begin{equation}Z_{c,d}=D_{\omega^{\mathfrak
r}_{c,d-1}(\Lambda_c),\omega^{\mathfrak
r}_{c,d}(\Lambda_c)}.
\end{equation}
\end{Prop}
We need a small modification of the elements $D_{\xi,\eta}$ of \cite{leclerc}:
\begin{Def}
\begin{equation}E_{\xi,\eta}:=q^{\frac14(\xi-\eta,\xi-\eta)+\frac12(\rho,
\xi-\eta)}D_{\xi,\eta}.\end{equation}
\end{Def}
It is proved in \cite{ki} and \cite{re} that $E_{\xi,\eta}$
is invariant under the dual
bar anti-homomorphism augmented by $q\to q^{-1}$.
Notice that this change does not affect commutators:
\begin{equation}
D_1D_2=q^\alpha D_2D_1\Leftrightarrow E_1E_2=q^\alpha E_2E_1
\end{equation}
if $E_i=q^{x_i}D_i$ for $i=1,2$.
\begin{Def}We say that
\begin{equation}\label{less}
E_{\xi,\eta}<E_{\xi_1,\eta_1}
\end{equation}
if $\xi=s's\lambda$, $\eta=t'\lambda$, $\xi_1=s'\mu$ and $\eta_1=t't\mu$ and
the conditions of Theorem~\ref{10.2} are satisfied.
\end{Def}
The crucial equation is
\begin{Cor}
\begin{equation}
E_{\xi,\eta}<E_{\xi_1,\eta_1}\Rightarrow
E_{\xi,\eta}E_{\xi_1,\eta_1}=q^{(\xi-\eta,\xi_1+\eta_1)}E_{\xi_1,\eta_1}E_{\xi,
\eta}.
\end{equation}\end{Cor}
\subsection{Connecting with the toric frames}
\begin{Def}Suppose that $\triangle_i$, $i=1,\dots,r$ is a family of mutually
$q$-commuting elements. Let $n_1,\dots,n_r\in{\mathbb Z}$. We then set
\begin{equation}N(\prod_{i=1}^r
\triangle_i^{n_i})=q^m\prod_{i=1}^r\triangle_i^{n_i},
\end{equation}where $q^m$ is determined by the requirement that
\begin{equation}q^{-m}\triangle_r^{n_r}\dots
\triangle_2^{n_2}\triangle_1^{n_1}= q^m
\triangle_1^{n_1}\triangle_2^{n_2}\dots \triangle_r^{n_r}.
\end{equation}
\end{Def}
It is easy to see that
\begin{equation}
\forall \mu\in S_r: N(\prod_{i=1}^r
\triangle_{\mu(i)}^{n_{\mu(i)}})=N(\prod_{i=1}^r
\triangle_i^{n_i}).
\end{equation}
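For instance, if $\triangle_2\triangle_1=q^{2a}\triangle_1\triangle_2$, the
defining requirement gives $m=a$, so that
$$N(\triangle_1\triangle_2)=q^{a}\triangle_1\triangle_2
=q^{-a}\triangle_2\triangle_1,$$
which also makes the permutation invariance displayed above evident.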
It is known through \cite{bz} that, e.g., the quantum minors are independent of
the
choices of the reduced form of $\omega^{\mathfrak r}_{\mathfrak p}$. Naturally,
this carries over to $\omega^{\mathfrak r}$. The quadratic algebras we have
encountered are independent of actual choices. In the coming definition we wish
to maintain precisely the right amount of independence.
Let us now reformulate Theorem~\ref{3.2} in our setting, using the
language and notation of
toric frames from \cite{bz}. In the following Theorem we first state a formula which uses our terminology, and then we reformulate it in the
last two lines in terms of toric frames $M$. These frames are defined by a
cluster made up by certain elements of the form $E_{\xi,\eta}$ to be made more
precise later.
\begin{Thm}\label{toric}
\begin{eqnarray}
E_{us_i\Lambda_i,vs_i\Lambda_i}&=&N\left( E_{us_i\Lambda_i,v\Lambda_i}
E_{u\Lambda_i,vs_i\Lambda_i} E_{u\Lambda_i,v\Lambda_i}^{-1}\right)
\\\nonumber&+&N\left((\prod_{j\neq i}
E_{u(\Lambda_j),v(\Lambda_j)}^{-a_{ji}})
E_{u(\Lambda_i),v(\Lambda_i)}^{-1}\right)\\
&=&M(E_{us_i(\Lambda_i),v(\Lambda_i)}+E_{u(\Lambda_i),
vs_i(\Lambda_i)}-E_{u(\Lambda_i),v(\Lambda_i)})\\&+&
M(\sum_{j\neq
i}-a_{ji}E_{u(\Lambda_j),v(\Lambda_j)}-E_{u(\Lambda_i),v(\Lambda_i)}).
\end{eqnarray}
\end{Thm}
\noindent{\em Proof of Theorem~\ref{toric}:} We first state a lemma whose proof
is omitted as it is straightforward.
\begin{Lem}
Let $\Delta_{\xi_k}$ be a family of $q$-commuting elements of weights $\xi_k$,
$k=1,\dots,r$ in the sense that for any weight $b$:\begin{equation}
\forall k=1,\dots,r: K^b\Delta_{\xi_k}=q^{(b,\xi_k)}\Delta_{\xi_k}K^b.
\end{equation}
Let $\alpha$ be defined by
\begin{equation}
\Delta_{\xi_r}\cdots\Delta_{\xi_1}=q^{-2\alpha}
\Delta_{\xi_1}\cdots\Delta_{\xi_r}
\end{equation}Furthermore, let $b_1,\dots,b_r$ be integer weights. Then
\begin{eqnarray}
&(\Delta_{\xi_1}\Delta_{\xi_2}\cdots\Delta_{\xi_r})
K^{b_1}K^{b_2}\cdots
K^{b_r}=\\\nonumber
&q^{\sum_{k<\ell}(b_k,\xi_\ell)}(\Delta_{\xi_1}K^{b_1})(\Delta_{\xi_2}K^{
b_2})\cdots(\Delta_{\xi_r}K^{b_r}),\textrm{ and,}\\\nonumber
&(\Delta_{\xi_r}K^{b_r})\cdots(\Delta_{\xi_1}K^{b_1})=\\\nonumber &q^{-2\alpha}
q^{(\sum_{k<\ell}-\sum_{\ell<k})(b_\ell,\xi_k)}(\Delta_{\xi_1}K^{b_1}
)\cdots(\Delta_{\xi_r}K^{b_r}),
\textrm{ so that}\\\nonumber
&(\Delta_{\xi_1}K^{b_1})\cdots(\Delta_{\xi_r}K^{b_r})=\\\nonumber &q^{\alpha}
q^{-\frac12(\sum_{k<\ell}-\sum_{\ell<k})(b_\ell,\xi_k)}N\left(
(\Delta_{\xi_1}K^{b_1})\cdots(\Delta_{\xi_r}K^{b_r})\right).
\end{eqnarray}
Finally,
\begin{eqnarray}
&q^{-\alpha}(\Delta_{\xi_1}\Delta_{\xi_2}\cdots\Delta_{\xi_r})
K^{b_1}K^{b_2} \cdots K^{b_r}=\\\nonumber&q^{-\frac12(\sum_{\ell\neq
k})(b_\ell,\xi_k)}N\left(
(\Delta_{\xi_1}K^{b_1})\cdots(\Delta_{\xi_r}K^{b_r})\right).
\end{eqnarray}
\end{Lem}
We apply this lemma first to the case where the elements $\xi_k$ are taken from
the set
$\{-\textrm{sign}(a_{ki})(u\Lambda_k-v\Lambda_k)\mid a_{ki}\neq0 \}$ and where
each
element corresponding to an $a_{ki}<0$ is taken $-a_{ki}$ times. Then
$r=\sum_{k\neq
i}\vert a_{ki}\vert+1$. The terms considered actually commute so that here,
$\alpha=0$.
The weights $b_k$ are chosen in the same fashion, but here
$b_k=\textrm{sign}(a_{ki})(v\Lambda_k)$. We have that
\begin{equation}\sum_{\ell\neq
k}(b_\ell,\xi_k)=\left(\sum_{\ell}b_\ell,
\sum_{k}\xi_k\right)-\sum_{k}(b_k,\xi_k).
\end{equation}
It follows from (\ref{3.1}) that $\sum_{\ell}b_\ell=-vs_i\Lambda_i$ and
$\sum_{k}\xi_k=(us_i\Lambda_i-vs_i\Lambda_i)$. Now observe that for all $k$:
$-(v\Lambda_k,(u-v)\Lambda_k)=\frac12(\xi_k,\xi_k)$. Let
$\xi_0=(us_i-vs_i)\Lambda_i$.
The individual summands in $\sum_k(b_k,\xi_k)$ can be treated analogously.
Keeping track of the
multiplicities and signs, it follows that
\begin{eqnarray}
q^{-\alpha}(\Delta_{\xi_1}\Delta_{\xi_2}\dots\Delta_{\xi_r})K^{b_1}K^{b_2}\dots
K^{b_r}=\\\nonumber q^{-\frac14(\xi_0,\xi_0)+\frac14\sum_k\varepsilon_k(\xi_k,\xi_k)}N\left(
(\Delta_{\xi_1}K^{b_1})\dots(\Delta_{\xi_r}K^{b_r})\right).
\end{eqnarray}
Let us turn to the term
\begin{equation}
q^{-d_i}\Delta_{us_i\Lambda_i,v\Lambda_i}
\Delta_{u\Lambda_i,vs_i\Lambda_i}\Delta_{u\Lambda_i,v\Lambda_i}^{-1}K^{
-vs_i\Lambda_i}.
\end{equation}
We can of course set
$K^{-vs_i\Lambda_i}=K^{-v\Lambda_i}K^{-vs_i\Lambda_i}K^{v\Lambda_i}$.
Furthermore, it is
known (and easy to see) that
\begin{eqnarray}
&\Delta_{u\Lambda_i,v\Lambda_i}^{-1}\Delta_{u\Lambda_i,vs_i\Lambda_i}
\Delta_{us_i\Lambda_i,v\Lambda_i}=\\\nonumber&q^{-2d_i}\Delta_{us_i\Lambda_i,
v\Lambda_i}
\Delta_{u\Lambda_i,vs_i\Lambda_i}\Delta_{u\Lambda_i,v\Lambda_i}^{-1},
\end{eqnarray}
so that $\alpha=d_i$ here. We easily get again that
$\sum_{\ell}b_\ell=-vs_i\Lambda_i$
and $\sum_{k}\xi_k=(us_i\Lambda_i-vs_i\Lambda_i)$.
Let us introduce elements $\tilde
E_{\xi,\eta}=q^{\frac14(\xi-\eta,\xi-\eta)}\Delta_{\xi,\eta}K^{-\eta}$. It then
follows
that (c.f. Theorem~\ref{3.2})
\begin{eqnarray}
\tilde E_{us_i\Lambda_i,vs_i\Lambda_i}&=&N\left(\tilde
E_{us_i\Lambda_i,v\Lambda_i}
\tilde E_{u\Lambda_i,vs_i\Lambda_i}\tilde E_{u\Lambda_i,v\Lambda_i}^{-1}\right)
\\\nonumber&+&N\left((\prod_{j\neq i}\tilde
E_{u(\Lambda_j),v(\Lambda_j)}^{-a_{ji}})\tilde
E_{u(\Lambda_i),v(\Lambda_i)}^{-1}\right).
\end{eqnarray}
The elements $E_{\xi,\eta}$ differ from the elements $\tilde E_{\xi,\eta}$ by a
factor
which is $q$ raised to an exponent that is linear in the weight $\xi-\eta$. Hence
an
identical equation
holds for these elements. \qed
\section{Compatible pairs}
We now construct some general families of quantum clusters and quantum seeds.
The first, simplest, and most important correspond to double Schubert cells:
Let ${\mathfrak e}\leq {\mathfrak s}<{\mathfrak t}<{\mathfrak v}\leq{\mathfrak
p}$.
Set
\begin{eqnarray*}{\mathbb U}^{d,{\mathfrak t},{\mathfrak
v}}&:=&\{(a,j)\in {\mathbb U}({\mathfrak p})\mid a_{\mathfrak t}<j\leq
a_{\mathfrak v}\},\\
{\mathbb U}_{R<}^{d,{\mathfrak t},{\mathfrak
v}}&:=&\{(a,j)\in {\mathbb U}({\mathfrak p})\mid a_{\mathfrak t}<j<
a_{\mathfrak v}\},\\
{\mathbb U}^{u,{\mathfrak s},{\mathfrak
t}}&:=&\{(a,j)\in {\mathbb U}({\mathfrak p})\mid a_{\mathfrak s}\leq j<
a_{\mathfrak t}\},\\
{\mathbb U}_{L<}^{u,{\mathfrak s},{\mathfrak
t}}&:=&\{(a,j)\in {\mathbb U}({\mathfrak p})\mid a_{\mathfrak s}< j<
a_{\mathfrak t}\}.
\end{eqnarray*}
Further, set
\begin{eqnarray} {\mathbb U}^{d,{\mathfrak t}}&=&{\mathbb U}^{d,{\mathfrak
t},{\mathfrak
p}},\\
{\mathbb U}^{u,{\mathfrak t}}&=&{\mathbb U}^{u,{\mathfrak e},{\mathfrak
t}}.
\end{eqnarray}
It is also convenient to define
\begin{Def}
\begin{eqnarray}E_s(i,j)&:=&E_{\omega^{{\mathfrak
p}}_{(s,i)}\Lambda_s,\omega^{\mathfrak p}_{(s,j)}\Lambda_s} \quad(0\leq i<j\leq
s_{{\mathfrak p}}).
\end{eqnarray}
For $j'\geq s_{\mathfrak t}$ we set
\begin{equation}
E^d_{\mathfrak t}(s,j'):=E_s(s_{\mathfrak t},j').
\end{equation}
For $j'\leq s_{\mathfrak t}$ we set
\begin{equation}
E^u_{\mathfrak t}(s,j'):=E_s(j',s_{\mathfrak t}).
\end{equation}
Finally, we set
\begin{eqnarray}
{\mathcal C}_q^d({\mathfrak t},{\mathfrak
v})&=&\{E^d_{\mathfrak t}(s,j')\mid (s,j')\in
{\mathbb U}^{d,{\mathfrak t},{\mathfrak
v}}\},\\
{\mathcal C}_q^u({\mathfrak
s},{\mathfrak t})&=&\{E^u_{\mathfrak t}(s,j')\mid (s,j')\in {\mathbb
U}^{u,{\mathfrak
s},{\mathfrak t}}\},\\
{\mathcal C}_q^d({\mathfrak t})&=&{\mathcal C}_q^d({\mathfrak t},{\mathfrak
p}),\textrm{ and}\\
{\mathcal C}_q^u({\mathfrak t})&=&{\mathcal C}_q^u({\mathfrak e},{\mathfrak
t}).
\end{eqnarray}
\end{Def}
It is clear that ${\mathcal C}_q^d({\mathfrak t},{\mathfrak
v})\subseteq {\mathcal C}_q^d({\mathfrak t})$ for any ${\mathfrak v}>{\mathfrak
t}$ and ${\mathcal C}_q^u({\mathfrak s},{\mathfrak
t})\subseteq {\mathcal C}_q^u({\mathfrak t})$ for any ${\mathfrak s}<{\mathfrak
t}$.
\begin{Lem}The elements in the set ${\mathcal C}_q^d({\mathfrak t})$ are
$q$-commuting and the elements in the set
${\mathcal C}_q^u({\mathfrak t})$ are
$q$-commuting.\label{above}
\end{Lem}
The proof is omitted as it is very similar to the proof of
Proposition~\ref{7.13} which comes later.
\begin{Def}${\mathcal A}_q^d({\mathfrak t}, {\mathfrak v})$ denotes the ${\mathbb
C}$-algebra
generated by ${\mathcal C}_q^d({\mathfrak t},{\mathfrak v})$ and ${\mathcal
A}_q^u({\mathfrak s},{\mathfrak
t})$ denotes the ${\mathbb C}$-algebra generated by ${\mathcal C}_q^u({\mathfrak
s},{\mathfrak t})$. Further, ${\mathcal F}_q^d({\mathfrak t},{\mathfrak v})$ and
${\mathcal F}_q^u({\mathfrak s},{\mathfrak t})$ denote the corresponding
skew-fields of fractions. Likewise, ${\mathbf L}_q^d({\mathfrak t},{\mathfrak v})$ and ${\mathbf L}_q^u({\mathfrak s},{\mathfrak t})$ denote the
respective Laurent quasi-polynomial algebras. Finally, ${\mathcal
L}_q^d({\mathfrak t},{\mathfrak v})$ and ${\mathcal L}_q^u({\mathfrak s},{\mathfrak
t})$ denote the symplectic forms
associated with the clusters ${\mathcal C}_q^d({\mathfrak t},{\mathfrak v})$, and
${\mathcal C}_q^u({\mathfrak s},{\mathfrak t})$, respectively.
\end{Def}
\begin{Def}Whenever ${\mathfrak a}<{\mathfrak b}$, we set
\begin{equation}\forall s\in Im(\pi_{\mathfrak b}): {\det}_{s}^{{\mathfrak a},{\mathfrak b}}:=E_{\omega^{\mathfrak
a}\Lambda_s,\omega^{\mathfrak b}\Lambda_s}. \end{equation}
\end{Def}
We conclude
in particular that
\begin{Prop}\label{quasipol}The elements ${\det}_{s}^{{\mathfrak
t},{\mathfrak p}}$ $q$-commute with all elements
in the algebra ${\mathcal A}_q^d({\mathfrak t})$ and the elements
${\det}_{s}^{{\mathfrak
e},{\mathfrak t}}$ $q$-commute with all elements
in the algebra ${\mathcal A}_q^u({\mathfrak t})$.
\end{Prop}
\begin{Def}An element $C$ in a quadratic algebra ${\mathcal A}$ that
$q$-commutes with all the generating elements is said to be covariant.
\end{Def}
As a small aside, we mention the following easy generalization of the result in
\cite{jak-cen}:
\begin{Prop}If ${\mathfrak a}<{\mathfrak b}$, then
the spaces ${\mathcal A}_q^u({\mathfrak a},{\mathfrak b})$ and ${\mathcal
A}_q^d({\mathfrak a},{\mathfrak b})$ are quadratic algebras. In both cases, the
center is given by $Ker(\omega^{\mathfrak a}+\omega^{\mathfrak b})$. The semi-group of covariant elements is generated by $\{ {\det}_{s}^{{\mathfrak a},{\mathfrak b}}\mid s\in Im(\pi_{\mathfrak b})\}$.
\end{Prop}
We now construct some elements in ${\mathbf L}_q^d({\mathfrak t})$ and ${\mathbf
L}_q^u({\mathfrak t})$ of fundamental importance. They are indeed monomials in
the elements of $\left[{\mathcal C}_q^d({\mathfrak t})\right]^{\pm1}$ and
$\left[{\mathcal C}_q^u({\mathfrak t})\right]^{\pm1}$, respectively.
First a technical definition:
\begin{Def}
$p(a,j,k)$ denotes the largest non-negative integer for which
$$\omega^{\mathfrak p}_{(k,p(a,j,k))}\Lambda_k=\omega^{\mathfrak
p}_{(a,j)}\Lambda_k.$$
\end{Def}
We also allow $E_a(j,j)$, which is defined to be $1$.
Here, then, are the first building blocks:
\begin{Def}
\begin{eqnarray}\nonumber
&\forall (a,j)\in {\mathbb U}^{d,{\mathfrak t}}:\\& H^d_{\mathfrak
t}(a,j):=E_a(a_{\mathfrak t},j)E_a(a_{\mathfrak t},j-1)
\prod_{a_{ka}<0}E_k(k_{\mathfrak t},p(a,j,k))^{a_{ka}} \\\nonumber
&\forall (a,j)\in {\mathbb U}^{d,{\mathfrak t}}\textrm{ with }j<a_{\mathfrak
p}:\\
&B^d_{\mathfrak t}(a,j):=H^d_{\mathfrak t}(a,j)(H^d_{\mathfrak t}(a,j+1))^{-1}.
\end{eqnarray}
\end{Def}
The terms $E_k(k_{\mathfrak t},p(a,j,k))$ and $E_a(a_{\mathfrak t},j-1)$ are
well-defined but may become equal to $1$. Also notice that, where defined,
$H^d_{\mathfrak t}(a,j), B^d_{\mathfrak t}(a,j)\in {\mathbf L}_q^d({\mathfrak
t})$.
\begin{Lem}\label{7.10}If $E_{\xi,\eta}<H^d_{\mathfrak t}(a,j)$ in the sense
that it is less
than or equal to each factor $E_{\xi_1,\eta_1}$ of $H^d_{\mathfrak t}(a,j)$
(and
$<$ is defined in (\ref{less})), then
\begin{equation}\label{54}
E_{\xi,\eta}H^d_{\mathfrak t}(a,j)=q^{(\xi-\eta,\omega^{\mathfrak
t}(\alpha_a))}H^d_{\mathfrak t}(a,j)E_{\xi,\eta}.
\end{equation}
If $E_{\xi,\eta}\geq H^d_{\mathfrak t}(a,j)$, then
\begin{equation}\label{55}
E_{\xi,\eta}H^d_{\mathfrak t}(a,j)=q^{(-\xi-\eta,\omega^{\mathfrak
t}(\alpha_a))}H^d_{\mathfrak t}(a,j)E_{\xi,\eta}.
\end{equation}
\end{Lem}
\proof This follows from (\ref{less}) by observing that we have the following
pairs $(\xi_1,\eta_1)$ occurring in $H^d_{\mathfrak t}(a,j)$:
$$(\omega^{\mathfrak t}\Lambda_a,\omega(a,j)\Lambda_a),(\omega^{\mathfrak
t}\Lambda_a,\omega(a,j)\sigma_a\Lambda_a),$$ and $$(-\omega^{\mathfrak
t}\Lambda_k,-\omega(a,j)\Lambda_k) \textrm{ with multiplicity }(-a_{ka}).$$
Furthermore, as in (\ref{3.1}),
$\Lambda_a+\sigma_a\Lambda_a+\sum_ka_{ka}\Lambda_k=0$ and,
equivalently, $2\Lambda_a+\sum_ka_{ka}\Lambda_k=\alpha_a$. \qed
\begin{Prop}\label{7.10}$\forall (a,j),(b,j')\in {\mathbb U}^{d,{\mathfrak t}},
j<a_{\mathfrak p}$ the following holds:
\begin{equation}
E^d_{\mathfrak t}(b,j')B^d_{\mathfrak
t}(a,j)=q^{-2(\Lambda_a,\alpha_a)\delta_{j,j'}\delta_{a,b}}B^d_{\mathfrak
t}(a,j)E^d_{\mathfrak t}(b,j').
\end{equation}
\end{Prop}
\proof It is clear from the formulas (\ref{54}-\ref{55}) that if an element
$E_{\xi,\eta}$ either is bigger than all factors in $B^d_{\mathfrak
t}(a,j)$ or smaller than all factors, then it commutes with this element. The
important fact now is that the ordering is independent of the fundamental
weights $\Lambda_i$ - it depends only on the Weyl group elements. The factors
in any $H_{\mathfrak t}^d$ are, with a fixed ${\mathfrak t}$, of the form
$E_{\omega^{\mathfrak t}\Lambda_i,\omega\Lambda_i}$ or $E_{\omega^{\mathfrak
t}\Lambda_a,\omega\circ\sigma_a\Lambda_a}$ for some $\omega\geq
\omega^{\mathfrak t}$. The elements $E_{\xi,\eta}=E^d_{\mathfrak t}(b,j')$ we
consider thus satisfy the first or the second case in Lemma~\ref{7.10}
for each of the terms $H^d_{\mathfrak t}(a,j)$ and $H^d_{\mathfrak t}(a,j+1)$.
Clearly, we then need only consider the in-between case $H^d_{\mathfrak t}(a,j)\leq E_{\xi,\eta}\leq H^d_{\mathfrak t}(a,j+1)$, and here there appears a
factor $q^{-2(\xi,\omega^{\mathfrak t}(\alpha_a))}$ in the commutator with
$\xi=\omega^{\mathfrak t}\Lambda_{b}$. This accounts for the term
$-2(\Lambda_a,\alpha_a)\delta_{a,b}$. Finally, if $a=b$ the previous
assumption forces $j=j'$. \qed
Let us choose an enumeration
\begin{equation}
{\mathcal C}_q^d({\mathfrak t})=\{c_1,c_2,\dots, c_N\}
\end{equation}
so that each $(a,j)\leftrightarrow k$ and let us use the same enumeration of
the elements $B^d_{\mathfrak
t}(a,j)$. Set, for now, $B^d_{\mathfrak
t}(a,j)=b_k$ if $(a,j)\leftrightarrow k$. Let us also agree that the, say $n$,
non-mutable elements $\det_s^{{\mathfrak t},{\mathfrak p}}$ of ${\mathcal
C}_q^d({\mathfrak t})$ are written last, as numbers $N-n+1, N-n+2,\dots, N$.
Then, as defined,
\begin{equation}
\forall j=1,\dots, N-n: b_j=q^{\alpha_j}\prod_kc_k^{b_{kj}}
\end{equation}
for some integers $b_{kj}$ and some, here inconsequential, factor
$q^{\alpha_j}$. The symplectic form yields a matrix which we, abusing notation
slightly, also denote ${\mathcal L}_q^d({\mathfrak t})$ such that
\begin{equation}
\forall i,j=1,\dots, N:\ \left({\mathcal L}_q^d({\mathfrak
t})\right)_{ij}=\lambda_{ij}
\end{equation}
and
\begin{equation}
\forall i,j=1,\dots, N: c_ic_j=q^{\lambda_{ij}}c_jc_i.\end{equation}
Similarly, we let ${\mathcal B}_q^d({\mathfrak t})$ denote the matrix
\begin{equation}
\forall i=1,\dots, N\ \forall j=1,\dots, N-n: \left({\mathcal B}_q^d({\mathfrak
t})\right)_{ij}=b_{ij}.
\end{equation}
Then, where defined,
\begin{equation}
c_ib_j=\prod_kq^{\lambda_{ik}b_{kj}}b_jc_i,
\end{equation}
and Proposition~\ref{7.10} may then be restated as
\begin{equation}
\forall i=1,\dots, N\ \forall j=1,\dots, N-n:
\sum_k\lambda_{ik}b_{kj}=-2(\Lambda_s,\alpha_s)\delta_{ij},
\end{equation}
where we assume that $i\leftrightarrow (s,\ell)$.
Since ${\mathcal L}_q^d({\mathfrak t})$ is skew-symmetric, this says precisely that the matrix $({\mathcal B}_q^d({\mathfrak t}))^T{\mathcal L}_q^d({\mathfrak t})$ vanishes off the diagonal and has the positive entries $2(\Lambda_s,\alpha_s)$ on the diagonal. We have then established
\begin{Thm}
The pair $({\mathcal L}_q^d({\mathfrak t}),{\mathcal B}_q^d({\mathfrak t}))$ is a
compatible pair and hence,
\begin{equation}
{\mathcal Q}_q^d({\mathfrak t}):=({\mathcal C}_q^d({\mathfrak t}),{\mathcal
L}_q^d({\mathfrak t}),{\mathcal B}_q^d({\mathfrak t}))
\end{equation}
is a quantum seed with the $n$ non-mutable elements $\det_s^{{\mathfrak
t},{\mathfrak p}}$, $(s,s_{\mathfrak p})\in {\mathbb U}^d({\mathfrak t})$. The
entries of the diagonal of the matrix $\tilde D=({\mathcal B}_q^d({\mathfrak
t}))^T{\mathcal L}_q^d({\mathfrak t})$ are in the set
$\{2(\Lambda_s,\alpha_s)\mid s=1,\dots,R\}$.
\end{Thm}
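To see the compatibility condition at work in the smallest possible situation, consider the following toy example; it is purely illustrative and not tied to the lattice data ${\mathbb U}^{d,{\mathfrak t}}$ of the theorem. Take a cluster $\{c_1,c_2\}$ with $c_1$ mutable, $c_2$ non-mutable, $c_1c_2=q^{\lambda}c_2c_1$ with $\lambda=2(\Lambda_s,\alpha_s)>0$, and take the single column $b_1=c_2^{-1}$, i.e. $b_{11}=0$ and $b_{21}=-1$. Then
\begin{equation}
{\mathcal L}=\left(\begin{array}{cc}0&\lambda\\-\lambda&0\end{array}\right),\qquad
{\mathcal B}=\left(\begin{array}{c}0\\-1\end{array}\right),\qquad
\sum_k\lambda_{ik}b_{k1}=-\lambda\,\delta_{i1},
\end{equation}
so that $({\mathcal L},{\mathcal B})$ is a compatible pair with ${\mathcal B}^T{\mathcal L}=\left(\begin{array}{cc}\lambda&0\end{array}\right)$, in the form described above.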
If ${\mathfrak v}>{\mathfrak t}$, we let $({\mathcal L}_q^d({\mathfrak
t},{\mathfrak v}),{\mathcal B}_q^d({\mathfrak t},{\mathfrak v}))$ denote the part
of the compatible pair $({\mathcal L}_q^d({\mathfrak t}),{\mathcal
B}_q^d({\mathfrak t}))$ that corresponds to the cluster ${\mathcal
C}_q^d({\mathfrak t},{\mathfrak v})$ and we let ${\mathcal Q}_q^d({\mathfrak
t},{\mathfrak v})$ be the corresponding triple. It is then obvious, by simple
restriction, that we in fact have obtained
\begin{Thm}
The pair $({\mathcal L}_q^d({\mathfrak t},{\mathfrak v}),{\mathcal
B}_q^d({\mathfrak t},{\mathfrak v}))$ is a compatible pair and hence,
\begin{equation}
{\mathcal Q}_q^d({\mathfrak t},{\mathfrak v}):=({\mathcal C}_q^d({\mathfrak
t},{\mathfrak v}),{\mathcal L}_q^d({\mathfrak t},{\mathfrak v}),{\mathcal
B}_q^d({\mathfrak t},{\mathfrak v}))
\end{equation}
is a quantum seed with the $n$ non-mutable elements $\det_s^{{\mathfrak
t},{\mathfrak v}}$, $(s,s_{\mathfrak v})\in {\mathbb U}^d({\mathfrak
t},{\mathfrak v})$.
\end{Thm}
The case of ${\mathcal C}_q^u({\mathfrak t})$ is completely analogous: Define
\begin{Def}
\begin{eqnarray}
H^u_{\mathfrak t}(a,j)&:=&E_a(j, a_{\mathfrak t})E_a(j-1,a_{\mathfrak t})
\prod_{a_{ka}<0}E_k(p(a,j,k), k_{\mathfrak t})^{a_{ka}} \ (1\leq j<a_{\mathfrak
t}),\nonumber\\
B^u_{\mathfrak t}(a,j)&:=&H^u_{\mathfrak t}(a,j+1)(H^u_{\mathfrak t}(a,j))^{-1}
\ (1\leq j<a_{\mathfrak t}).
\end{eqnarray}
The terms $E_k(p(a,j,k),k_{\mathfrak t})$ are well-defined but may become equal
to
$1$. Notice also the exponents on the terms $H^u_{\mathfrak t}$.
\end{Def}
As defined, $H^u_{\mathfrak t}(a,j)$ and $B^u_{\mathfrak t}(a,j)$ are in
${\mathbf L}_q^u({\mathfrak t})$.
\begin{Prop}$\forall (a,j),(b,j')\in {\mathbb U}^{u,{\mathfrak t}}, 1\leq j$
the
following holds:
\begin{equation}
E^u_{\mathfrak t}(b,j')B^u_{\mathfrak
t}(a,j)=q^{2(\Lambda_a,\alpha_a)\delta_{j,j'}\delta_{a,b}}B^u_{\mathfrak
t}(a,j)E^u_{\mathfrak
t}(b,j').
\end{equation}
\end{Prop}
We then get in a similar way
\begin{Thm}
The pair $({\mathcal L}_q^u({\mathfrak t}),{\mathcal B}_q^u({\mathfrak t}))$ is a
compatible pair and hence,
\begin{equation}
{\mathcal Q}_q^u({\mathfrak t}):=({\mathcal C}_q^u({\mathfrak t}),{\mathcal
L}_q^u({\mathfrak t}),{\mathcal B}_q^u({\mathfrak t}))
\end{equation}
is a quantum seed with the $n$ non-mutable elements $\det_s^{{\mathfrak
e},{\mathfrak t}}$, $(s,s_{\mathfrak t})\in {\mathbb U}^u({\mathfrak t})$.
\end{Thm}
Naturally, we even have
\begin{Thm}
The pair $({\mathcal L}_q^u({\mathfrak s},{\mathfrak t}),{\mathcal
B}_q^u({\mathfrak s},{\mathfrak t}))$ is a compatible pair and hence,
\begin{equation}
{\mathcal Q}_q^u({\mathfrak s},{\mathfrak t}):=({\mathcal C}_q^u({\mathfrak
s},{\mathfrak t}),{\mathcal L}_q^u({\mathfrak s},{\mathfrak t}),{\mathcal
B}_q^u({\mathfrak s},{\mathfrak t}))
\end{equation}
is a quantum seed with the $n$ non-mutable elements $\det_s^{{\mathfrak
s},{\mathfrak t}}$, $(s,s_{\mathfrak s})\in {\mathbb U}^u({\mathfrak
s},{\mathfrak t})$.
\end{Thm}
We now wish to consider more elaborate seeds. The first generalization is the
most important:
Let
\begin{equation}
{\mathfrak e}\leq {\mathfrak a}\leq {\mathfrak
b}\leq{\mathfrak c}\leq {\mathfrak p}, \textrm{ but } {\mathfrak a}\neq
{\mathfrak c}.\end{equation}
\begin{eqnarray}\label{l1}
{\mathcal C}_q^d({\mathfrak a},{\mathfrak b},{\mathfrak
c})&:=&\{E^d_{\mathfrak a}(s,j)\mid (s,j)\in ({\mathbb U}^{d,{\mathfrak
b}}\setminus {\mathbb U}^{d,{\mathfrak
c}})= {\mathbb U}^{d,{\mathfrak
b},{\mathfrak c}}\},\\\label{l2}{\mathcal C}_q^u({\mathfrak
a},{\mathfrak b},{\mathfrak
c})&:=&\{E^u_{\mathfrak c}(s,j)\mid (s,j)\in ({\mathbb U}^{u,{\mathfrak
b}}\setminus {\mathbb U}^{u,{\mathfrak
a}})= {\mathbb U}^{u,{\mathfrak
a},{\mathfrak b}}\}.
\end{eqnarray}
In (\ref{l1}), ${\mathfrak a}={\mathfrak b}$ is allowed, and in (\ref{l2}),
${\mathfrak b}={\mathfrak c}$ is allowed.
\begin{Def}
\begin{eqnarray*}{\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})&:=&{\mathcal C}_q^d({\mathfrak a},{\mathfrak b},{\mathfrak
c})\cup{\mathcal C}_q^u({\mathfrak a},{\mathfrak b},{\mathfrak
b}),\\
{\mathcal C}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})&:=&{\mathcal C}_q^u({\mathfrak a},{\mathfrak b},{\mathfrak
c})\cup{\mathcal C}_q^d({\mathfrak b},{\mathfrak b},{\mathfrak
c}).
\end{eqnarray*}
\end{Def}
\begin{Prop}\label{7.13}
The elements of ${\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ and ${\mathcal C}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})$, respectively, $q$-commute.
\end{Prop}
\proof The two cases are very similar, so we only prove it for the first one.
We examine three cases, using the following mild version of Theorem~\ref{10.2}:
$
\triangle_{s's\lambda,t'\lambda}$ and $\triangle_{s'\mu,t't\mu}$ $q$-commute for any $\lambda,\mu\in P^+$, and $s, s', t, t' \in W$ for which $\ell(s's) =
\ell(s') + \ell(s), \ell(t't) = \ell(t') + \ell(t)$.
{\bf Case 1:} $E_{\mathfrak a}^d(s,t)$ and $E_{\mathfrak a}^d(s_1,t_1)$ for $(s,t)\in {\mathbb U}^{d, {\mathfrak b},{\mathfrak c}}$ and
$(s,t)<(s_1,t_1)$:
Set $\lambda=\Lambda_s,\mu=\Lambda_{s_1}$, $s=1,s'=\omega^{\mathfrak a}$, and
$t'=\omega^{\mathfrak c}(s,t),
t't=\omega^{\mathfrak c}(s_1,t_1)$.
{\bf Case 2:} $E_{\mathfrak b}^u(s,t)$ and $E_{\mathfrak b}^u(s_1,t_1)$ for $(s,t)\in {\mathbb U}^{u, {\mathfrak a},{\mathfrak b}}$ and
$(s,t)>(s_1,t_1)$:
Set $\lambda=\Lambda_s,\mu=\Lambda_{s_1}$, $t=1$,
$t'=\omega^{\mathfrak b}$ and
$s'=\omega^{\mathfrak p}(s_1,t_1), s's=\omega^{\mathfrak r}(s,t)$.
{\bf Case 3:} $E_{\mathfrak b}^u(s,t)$ and $E_{\mathfrak a}^d(s_1,t_1)$ for $(s,t)\in {\mathbb U}^{u, {\mathfrak a},{\mathfrak b}}$ and
$(s_1,t_1)\in
{\mathbb U}^{d, {\mathfrak b},{\mathfrak c}}$:
Set $\lambda=\Lambda_s,\mu=\Lambda_{s_1}$, $s'=\omega^{{\mathfrak a}}$,
$s=\omega^{\mathfrak p}(s,t)$,
$t'=\omega^{\mathfrak b}$ and
$t't=\omega^{\mathfrak p}(s_1,t_1)$.
\qed
Notice that the ordering in ${\mathbb U}^{u, {\mathfrak a},{\mathfrak b}}$ (Case 2) is the
opposite of that of the two other cases.
We also define, for ${\mathfrak a}<{\mathfrak b}$,
\begin{eqnarray}
{\mathcal C}_q^u({\mathfrak a},{\mathfrak b})&=&{\mathcal C}_q^u({\mathfrak a},
{\mathfrak b},{\mathfrak b}),\textrm{ and}\\\nonumber {\mathcal C}_q^d({\mathfrak
a},{\mathfrak b})&=&{\mathcal C}_q^d({\mathfrak a}, {\mathfrak a},{\mathfrak b}).
\end{eqnarray}
We let ${\mathcal L}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ and ${\mathcal L}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ denote the corresponding symplectic matrices. We proceed to construct
compatible pairs and give the details for just ${\mathcal C}_q({\mathfrak
a},{\mathfrak b},{\mathfrak
c})$. We will be completely explicit except in the special cases
$E^u_{\omega^{{\mathfrak
a}}\Lambda_s,\omega^{{\mathfrak b}}\Lambda_s}={\det}_{s}^{{\mathfrak
a},{\mathfrak b}}$ where we only give a recipe for $ {B}_q^{{\mathfrak
a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}}) $. Notice, however, the remark following (\ref{77}).
\begin{equation}\label{72}{B}_q^{{\mathfrak a}, {\mathfrak
b},{\mathfrak c}}(s,j):=\left\{\begin{array}{lll}B^d_{{\mathfrak
a}}(s,j)&\textrm{if }(s,j)\in {\mathbb U}_{R<}^{d, {\mathfrak b},{\mathfrak
c}}\\ \ \\
B^u_{{\mathfrak b}}(s,j)&\textrm{if }(s,j)\in {\mathbb U}_{L<}^{u, {\mathfrak
a},{\mathfrak b}} \end{array}\right..\end{equation}
We easily get from the preceding propositions:
\begin{Prop}\label{7.20}
Let $E(b,j')\in {\mathcal C}_q({{\mathfrak a}, {\mathfrak
b},{\mathfrak c}})$ and let ${B}_q^{{\mathfrak a}, {\mathfrak
b},{\mathfrak c}}(s,j)$ be as in the previous equation. Then
\begin{equation}
E(b,j'){B}_q^{{\mathfrak a}, {\mathfrak
b},{\mathfrak
c}}(s,j)=q^{{-2(\Lambda_s,\alpha_s)\delta_{j,j'}\delta_{s,b}}}{B}_q^{{\mathfrak
a}, {\mathfrak
b},{\mathfrak c}}(s,j)E(b,j'),
\end{equation}
and ${B}_q^{{\mathfrak a}, {\mathfrak
b},{\mathfrak c}}(s,j)$ is in the algebra ${\mathcal A}_q({{\mathfrak a},
{\mathfrak
b},{\mathfrak c}})$ generated by the elements of ${\mathcal C}_q({{\mathfrak a},
{\mathfrak
b},{\mathfrak c}})$.
\end{Prop}
This then leaves the positions $(s,s_{\mathfrak c})\in {\mathbb
U}^{d,{\mathfrak
b},{\mathfrak c}}$ and $(s,s_{\mathfrak a})\in {\mathbb U}^{u,{\mathfrak
a},{\mathfrak b}}$ to be considered. Here, the first ones are considered as the non-mutable elements. In the ambient space ${\mathcal A}_q({{\mathfrak a},
{\mathfrak b},{\mathfrak c}})$, the positions in the remaining cases define elements that are, in general, mutable.
The elements in these cases are of the form $E_{\omega^{{\mathfrak
a}}\Lambda_s,\omega^{{\mathfrak b}}\Lambda_s}$ for some $s$. To give a recipe we define the following elements in ${\mathcal
A}_q({{\mathfrak a}, {\mathfrak
b},{\mathfrak c}})$:
\begin{eqnarray}&\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}}) :=\\&\left(H^u_{{\mathfrak
b}}(s,s_{{\mathfrak a}}+1) H^d_{{\mathfrak a}}(s,s_{{\mathfrak
b}}+1)\right)^{-1} E_s(s_{{\mathfrak a}},s_{{\mathfrak
b}})^2\prod_{a_{ks}<0}E_k(k_{{\mathfrak a}},k_{{\mathfrak
b}})^{a_{ks}}.\nonumber\end{eqnarray}
If $\omega(s,s_{{\mathfrak a}}+1)=\omega^{{\mathfrak a}}\omega_x\sigma_s$ and
$\omega(s,s_{{\mathfrak b}}+1)=\omega^{{\mathfrak b}}\omega_y\sigma_s$, and if we set $u=\omega^{{\mathfrak a}}\omega_x$, $v=\omega^{{\mathfrak b}}\omega_y$, this takes the simpler form
\begin{equation}\label{75}\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}}) =E_{u\sigma_s\Lambda_s,v\Lambda_s}^{-1}E_{
u\Lambda_s,v\sigma_s\Lambda_s}^{-1}\prod_{a_{ks}<0}
E_{\omega^{\mathfrak a}\Lambda_k,v\Lambda_k}^{-a_{ks}}\prod_{a_{ks}<0}
E_{u\Lambda_k,\omega^{\mathfrak b}\Lambda_k}^{-a_{ks}}\prod_{a_{ks}<0}
E_{\omega^{\mathfrak a}\Lambda_k,\omega^{\mathfrak
b}\Lambda_k}^{a_{ks}}.\end{equation}
\begin{Prop}
\begin{equation}
\forall\ell: E_{\omega^{\mathfrak a}\Lambda_\ell, \omega^{\mathfrak
b}\Lambda_\ell}\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})=q^{-2\delta_{\ell,s}(\Lambda_s,\alpha_s)} \tilde
{B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})E_{\omega^{\mathfrak a}\Lambda_\ell, \omega^{\mathfrak
b}\Lambda_\ell}.
\end{equation}
Besides this, $\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})$ commutes with everything in the cluster except possibly elements of
the form
$$ E_{\omega^{{\mathfrak a}}\Lambda_\ell,\omega^{{\mathfrak
b}}\tilde\omega_y\Lambda_\ell}, \textrm{ and } E_{\omega^{{\mathfrak
a}}\tilde\omega_x\Lambda_\ell,\omega^{{\mathfrak
b}}\Lambda_\ell}, $$ with $1<\tilde\omega_x<\omega_x$ and
$1<\tilde\omega_y<\omega_y$.
\end{Prop}
The exceptional terms above are covered by Proposition~\ref{7.20}, which means
that we can in principle make a modification $\tilde {B}_q^{{\mathfrak
a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})\to {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})$ such that the latter expression commutes with everything
except $E_{\omega^{{\mathfrak a}}\Lambda_s,\omega^{{\mathfrak
b}}\Lambda_s}$, with which it $q$-commutes with the factor $q^{-2(\Lambda_s,\alpha_s)}$.
If $\omega_y=1$ we get a further
simplification where now $u=\omega^{{\mathfrak a}}\omega_x$ and
$v=\omega^{{\mathfrak
b}}$:
\begin{equation}\label{77}\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}}) =E_{u\sigma_s\Lambda_s,v\Lambda_s}^{-1}E_{
u\Lambda_s,v\sigma_s\Lambda_s}^{-1}\prod_{a_{ks}<0}
E_{u\Lambda_k,v\Lambda_k}^{-a_{ks}}.\end{equation}
Here we actually have $\tilde {B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}}) ={B}_q^{{\mathfrak a},{\mathfrak b},{\mathfrak
c}}(s,s_{{\mathfrak a}})$, and this expression has the exact form needed for the purposes of the next section.
We let ${\mathcal B}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ and ${\mathcal B}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ denote the corresponding $B$-matrices, built from the columns constructed above, and can now finally define
our quantum seeds:
\begin{equation}{\mathcal Q}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}):=({\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal L}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal B}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})).\end{equation}
\begin{Def}\begin{equation}{\mathcal Q}_q^o({\mathfrak a},{\mathfrak
b},{\mathfrak
c}):=({\mathcal C}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal L}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c}), {\mathcal B}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})).\end{equation}\end{Def}
According to our analysis above we have established
\begin{Thm}\label{seedth}
${\mathcal Q}_q({\mathfrak a},{\mathfrak b},{\mathfrak c})$ and ${\mathcal Q}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak c})$ are indeed quantum seeds. The non-mutable elements are in both cases the elements
${\det}_{s}^{{\mathfrak a},{\mathfrak c}}$; $s\in Im(\pi_{\omega^{\mathfrak c}})$.
\end{Thm}
Let us finally consider a general situation where we are given a finite
sequence of elements $\{\omega^{{\mathfrak r}_i}\}_{i=1}^n\in W^p$ such that
\begin{equation}\label{genseq}
{\mathfrak e}\leq {{\mathfrak r}_1}<\dots<{{\mathfrak
r}_n}\leq {{\mathfrak p}}.
\end{equation}
Observe that \begin{equation}\forall(s,t)\in{\mathbb U}({\mathfrak
r}_k):\omega^{{\mathfrak r}_k}_{(s,t)}=\omega^{{\mathfrak p}}_{(s,t)}.
\end{equation}
It may of course well happen that for some $a$, and some $ {{\mathfrak
r}_i}<{{\mathfrak
r}_j}$,
\begin{equation}\omega^{{\mathfrak r}_i}\Lambda_a=\omega^{{\mathfrak
r}_j}\Lambda_a.
\end{equation}
\begin{Def}Given (\ref{genseq}) we define
\begin{eqnarray}{\mathcal C}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)&=&{\mathcal C}_q^d({\mathfrak r}_1, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\cup {\mathcal C}_q^u({\mathfrak r}_1,{\mathfrak
r}_{2},{\mathfrak r}_{n-1})\cup\dots\\&=&\bigcup_{0<2i\leq n}{\mathcal
C}_q^d({\mathfrak r}_i,{\mathfrak r}_{n-i},{\mathfrak r}_{n-i+1})\cup
\bigcup_{0<2j\leq n-1}{\mathcal C}_q^u({\mathfrak r}_j,{\mathfrak
r}_{j+1},{\mathfrak r}_{n-j}). \nonumber
\end{eqnarray}
It is also convenient to consider
\begin{eqnarray}{\mathcal C}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)&=&{\mathcal C}_q^u({\mathfrak r}_1, {\mathfrak
r}_{2},{\mathfrak r}_n)\cup {\mathcal C}_q^d({\mathfrak r}_2,{\mathfrak
r}_{n-1},{\mathfrak r}_{n})\cup\dots\\&=&\bigcup_{0<2i\leq n}{\mathcal
C}_q^u({\mathfrak r}_i,{\mathfrak r}_{i+1},{\mathfrak r}_{n-i+1})\cup
\bigcup_{0<2j\leq n-1}{\mathcal C}_q^d({\mathfrak r}_{j+1},{\mathfrak
r}_{n-j},{\mathfrak r}_{n-j+1}). \nonumber
\end{eqnarray}
\end{Def}
Notice that
\begin{eqnarray}{\mathcal C}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)&=&{\mathcal C}_q^d({\mathfrak r}_1, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\cup {\mathcal C}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-2},{\mathfrak r}_{n-1})\\\nonumber
{\mathcal C}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)&=&{\mathcal C}_q^u({\mathfrak r}_1, {\mathfrak
r}_{2},{\mathfrak r}_n)\cup {\mathcal C}_q({\mathfrak r}_2,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)
\end{eqnarray}
For the last equations, notice that ${\mathcal C}_q^{d}({\mathfrak
e},{\mathfrak r},{\mathfrak r}) =\emptyset={\mathcal C}_q^{d}({\mathfrak
r},{\mathfrak r},{\mathfrak r})$.
\begin{Prop}\label{qcom}
The spaces
\begin{equation}
{\mathcal C}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\textrm{ and }{\mathcal C}_q({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-1},{\mathfrak r}_n)
\end{equation} each consists of $q$-commuting elements.
\end{Prop}
\proof This is proved in the same way as Proposition~\ref{7.13}. \qed
Our goal is to construct seeds out of these clusters using (and then
generalizing) Theorem~\ref{seedth}.
With Proposition~\ref{qcom} at hand, we are immediately given the
corresponding
symplectic matrices
\begin{equation}
{\mathcal L}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\textrm{ and }{\mathcal L}_q({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-1},{\mathfrak r}_n).
\end{equation}
The construction of the accompanying $B$-matrices
\begin{equation}
{\mathcal B}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\textrm{ and }{\mathcal B}_q({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-1},{\mathfrak r}_n)
\end{equation}
takes a little more work, though in principle it is straightforward.
The idea is in both cases to consider an element in the cluster as lying in a
space
\begin{eqnarray}
{\mathcal C}_q^d({\mathfrak r}_i, {\mathfrak
r}_{n-i},{\mathfrak r}_{n-i+1})\cup {\mathcal C}_q^u({\mathfrak r}_i, {\mathfrak
r}_{i+1},{\mathfrak r}_{n-i})&\subseteq&{\mathcal C}_q({\mathfrak r}_i,
{\mathfrak
r}_{n-i},{\mathfrak r}_{n-i+1})\textrm{ or}\\
{\mathcal C}_q^u({\mathfrak r}_i, {\mathfrak
r}_{i+1},{\mathfrak r}_{n-i+1})\cup {\mathcal C}_q^d({\mathfrak r}_{i+1},
{\mathfrak
r}_{n-i},{\mathfrak r}_{n-i+1})&\subseteq&{\mathcal C}_q^o({\mathfrak r}_i,
{\mathfrak
r}_{i+1},{\mathfrak r}_{n-i+1})\quad
\end{eqnarray}
as appropriate. Then we can use the corresponding matrices
${\mathcal B}_q({\mathfrak r}_i, {\mathfrak
r}_{n-i},{\mathfrak r}_{n-i+1})$ or ${\mathcal B}_q^o({\mathfrak r}_i, {\mathfrak
r}_{i+1},{\mathfrak r}_{n-i+1})$ in the sense that one can extend these
matrices
to the full size by inserting rows of zeros.
In this way, we can construct columns even for the troublesome elements of the
form $E(a_{{\mathfrak r}_i}, a_{{\mathfrak r}_j})$ that may belong to such
spaces. Indeed, we may start by including $E(a_{{\mathfrak r}_{\frac{n}2}},
a_{{\mathfrak r}_{\frac{n+2}2}})$ ($n$ even) or $E(a_{{\mathfrak
r}_{\frac{n-1}2}}, a_{{\mathfrak r}_{\frac{n+1}2}})$ ($n$ odd) in such a
space
in which they may be seen as mutable. Then these spaces have new non-mutable
elements which can be handled by viewing them in appropriate spaces. The only
ones which we cannot capture are the elements ${\det}_{s}^{{\mathfrak
r}_1,{\mathfrak r}_n}=E(s_{{\mathfrak r}_1},
s_{{\mathfrak r}_n})$.
\begin{Def}In both cases, the elements ${\det}_{s}^{{\mathfrak r}_1,{\mathfrak r}_n}$, $s\in
Im(\pi_{{\mathfrak r}_1})$
are the non-mutable elements. We let ${\mathcal N}_q({\mathfrak r}_1, {\mathfrak r}_n)$ denote the set of these.
\end{Def}
\begin{Prop}
\begin{equation}
{\mathcal Q}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\textrm{ and }
{\mathcal Q}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)
\end{equation}
are quantum seeds.
\end{Prop}
\section{Mutations}
Here is the fundamental setup: Let $\omega^{\mathfrak a}, \omega^{\mathfrak
b},\omega^{\mathfrak c}\in W^p$ satisfy
\begin{equation}{\mathfrak a}<{\mathfrak c}\textrm{ and } {\mathfrak a}\leq
{\mathfrak b}\leq{\mathfrak c}.\end{equation}
\begin{Def}\label{7.1bis}
A root $\gamma\in\triangle^+({\mathfrak c})$ is an {\bf increasing-mutation
site}
of
$\omega^{\mathfrak b}\in W^p$ (in reference to $({\mathfrak a},{\mathfrak
b},{\mathfrak c})$) if there exists a reduced form of
$\omega^{\mathfrak c}$ as
\begin{equation}
\omega^{\mathfrak c}=\hat\omega\sigma_\gamma\omega^{\mathfrak b}.
\end{equation}
Let $W^p\ni\omega^{{\mathfrak b}'}=\sigma_\gamma\omega^{\mathfrak b}$. It
follows
that
\begin{equation}\label{94}
\omega^{{\mathfrak b}'}=\omega^{\mathfrak b}\sigma_{\alpha_s}
\end{equation}
for a unique $s\in Im(\pi_{{\mathfrak b}'})$. Such a site will henceforth be
called an ${\mathfrak m}^+$ site.
We will further say that $\gamma$ is a {\bf decreasing-mutation site}, or
${\mathfrak m}^-$ site (in reference to $({\mathfrak a},{\mathfrak
b},{\mathfrak c})$)
of $\omega^{{\mathfrak b}}\in W^p$ in case there exists a rewriting of
$\omega^{{\mathfrak b}}$ as $\omega^{{\mathfrak
b}}=\sigma_\gamma\omega^{{\mathfrak b}''}$ with ${\mathfrak a}\leq
\omega^{{\mathfrak b}''}\in W^p$. Here, \begin{equation}
\omega^{{\mathfrak b}}=\omega^{{\mathfrak b}''}\sigma_{\alpha_s}
\end{equation}
for a unique $s\in Im(\pi_{{\mathfrak b}})$. We view such sites as places where
replacements are possible and will use the notation
\begin{equation}\label{m+}{\mathfrak m}^{+}_{{\mathfrak a},{\mathfrak c}}:({\mathfrak
a},{\mathfrak
b},{\mathfrak c})\to ({\mathfrak a},{\mathfrak b}',{\mathfrak c}),\end{equation}
and
\begin{equation}{\mathfrak m}^-_{{\mathfrak a},{\mathfrak c}}:({\mathfrak
a},{\mathfrak
b},{\mathfrak c})\to ({\mathfrak a},{\mathfrak b}'',{\mathfrak c}),\end{equation}
respectively, for the replacements, thereby at the same time defining what we mean
by a replacement.
Notice that ${\mathfrak a}={\mathfrak b}$ and ${\mathfrak b}'={\mathfrak c}$
are
allowed in the first while ${\mathfrak b}={\mathfrak c}$ and ${\mathfrak
b}''={\mathfrak a}$ are allowed in the second.
Furthermore,
$${\mathfrak m}_{{\mathfrak a},{\mathfrak c}}:({\mathfrak a},{\mathfrak
b},{\mathfrak c})\to ({\mathfrak a},{\mathfrak b}_1,{\mathfrak c})$$ denotes
the
composition of any finite number of such maps ${\mathfrak m}^{\pm}_{{\mathfrak
a},{\mathfrak c}}$ (in any order, subject to the limitations at each step
stipulated above).
We will further extend the meaning of ${\mathfrak m}_{{\mathfrak a},{\mathfrak
c}}$ also to include the replacements
$${\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak c})\to {\mathcal
C}_q({\mathfrak a},{\mathfrak b}_1,{\mathfrak c}),$$ and even
$${\mathcal Q}_q({\mathfrak a},{\mathfrak b},{\mathfrak c})\to {\mathcal
Q}_q({\mathfrak a},{\mathfrak b}_1,{\mathfrak c}).$$At the seed level, we will
refer to the replacements as {\bf Schubert mutations}.
Similarly, we can define maps ${\mathfrak m}^{o,\pm}_{{\mathfrak a},{\mathfrak
c}}$, and after that mutations as composites $${\mathfrak m}^{o}_{{\mathfrak
a},{\mathfrak c}}:{\mathcal Q}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak c})\to
{\mathcal Q}_q^o({\mathfrak a},{\mathfrak b}_1,{\mathfrak c}).$$
\end{Def}
We need to define another kind of replacement:
Consider\begin{equation}\label{maxim}
{\mathfrak a}<{\mathfrak b}_1<{\mathfrak b}<{\mathfrak c}.
\end{equation}
\begin{Def}We say that $({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ is a {\bf d-splitting} of $({\mathfrak a},{\mathfrak c})$ if
$${\mathcal C}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})={\mathcal C}_q({\mathfrak a},{\mathfrak c}).$$ In this case we will also say
that $({\mathfrak a},{\mathfrak c})$ is a {\bf
d-merger} of $({\mathfrak a},{\mathfrak b},{\mathfrak c})$.
\end{Def}
To make this more definitive, one might further assume that ${\mathfrak b}$ is
maximal amongst those satisfying (\ref{maxim}), but we will not need to do
this here.
Similarly,
\begin{Def}We say that $({\mathfrak a},{\mathfrak b},{\mathfrak
c})$ is a {\bf u-splitting} of $({\mathfrak a},{\mathfrak c})$ if
$${\mathcal C}_q^o({\mathfrak a},{\mathfrak b},{\mathfrak
c})={\mathcal C}_q^o({\mathfrak a},{\mathfrak c}).$$ Similarly, we will in this
case also say that $({\mathfrak a},{\mathfrak c})$ is a
{\bf u-merger} of $({\mathfrak a},{\mathfrak b},{\mathfrak
c})$.
\end{Def}
Our next definition combines the two preceding:
\begin{Def}\label{def84}A Schubert creation replacement
$$a^+_{{\mathfrak a},{\mathfrak c}}:({\mathfrak a},{\mathfrak
c})\rightarrow ({\mathfrak a},{\mathfrak b}_1,{\mathfrak c})$$
consists in a d-splitting
$$({\mathfrak a},{\mathfrak c})\rightarrow ({\mathfrak
a},{\mathfrak b},{\mathfrak c})$$ followed by a replacement
$m_{{\mathfrak a},{\mathfrak c}}$ applied to $({\mathfrak a},{\mathfrak
b},{\mathfrak c})$.
A Schubert annihilation replacement
$$a^-_{{\mathfrak a},{\mathfrak c}}:({\mathfrak a},{\mathfrak
b}_1,{\mathfrak c})\rightarrow ({\mathfrak a},{\mathfrak c})$$ is
defined as the reverse process.
Schubert creation/annihilation mutations $a^{o,\pm}_{{\mathfrak
a},{\mathfrak c}}$ are defined analogously;
$$a^{o,+}_{{\mathfrak a},{\mathfrak c}}:
{\mathcal Q}_q^o({\mathfrak a},{\mathfrak c})\to {\mathcal
Q}_q^o({\mathfrak a},{\mathfrak b}_1,{\mathfrak c}),
$$and
$$a^{o,-}_{{\mathfrak a},{\mathfrak c}}:
{\mathcal Q}_q^o({\mathfrak a},{\mathfrak b}_1,{\mathfrak c})\to
{\mathcal Q}_q^o({\mathfrak a},{\mathfrak c}).
$$
We finally extend these Schubert creation/annihilation mutations into (we
could do it more generally, but do not need to do so here)
$${\mathcal Q}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)\rightarrow {\mathcal Q}_q({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-2},\dots,{\mathfrak r}_{n\pm 1})$$
by inserting/removing an ${\mathfrak r}_x$ between
${\mathfrak r}_{\frac{n}2}$ and ${\mathfrak r}_{\frac{n}2+1}$ ($n$ even) or
between ${\mathfrak r}_{\frac{n-1}2}$ and ${\mathfrak r}_{\frac{n+1}2}$
($n$ odd). Similar maps are defined for the spaces
${\mathcal Q}_q^o({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)$.
\end{Def}
In the sequel, we will encounter expressions of the form $\check
B(u,v,s)$;
\begin{equation}\label{99}
\check B(u,v,s)=E_{u\sigma_s\Lambda_s,v\Lambda_s}^{-1}E_{
u\Lambda_s,v\sigma_s\Lambda_s}^{-1}\prod_{a_{ks}<0}
E_{u\Lambda_k,v\Lambda_k}^{a_{ks}}\end{equation} where \begin{equation}
E_{u\Lambda_s,v\Lambda_s}\check
B(u,v,s)=q^{-2(\Lambda_s,\alpha_s)}\check
B(u,v,s)E_{u\Lambda_s,v\Lambda_s},
\end{equation}
and where $\check B(u,v,s)$ commutes with all other
elements in a given cluster.
\begin{Def}
We say that $\check B(u,v,s)$ implies the change
$$E_{u\Lambda_s,v\Lambda_s}\to E_{u\sigma_s\Lambda_s,v\sigma_s\Lambda_s}.$$
\end{Def}
We will only encounter such changes where the set with
$E_{u\Lambda_s,v\Lambda_s}$ removed from the initial cluster, and
$E_{u\sigma_s\Lambda_s,v\sigma_s\Lambda_s}$ added, again is a cluster.
We further observe that a (column) vector with $-1$ at positions corresponding
to $E_{u\sigma_s\Lambda_s,v\Lambda_s}$ and $E_{u\Lambda_s,v\sigma_s\Lambda_s}$
and $
{a_{ks}}$ at each position corresponding to a $E_{u\Lambda_k,v\Lambda_k}$ with
$a_{ks}<0$ has the property that the symplectic form of the original cluster,
when applied to it, returns a
vector whose only non-zero entry is $-2(\Lambda_s,\alpha_s)$ at the position
corresponding to $E_{u\Lambda_s,v\Lambda_s}$. Hence, this can be a column
vector of the $B$-matrix of a potential compatible pair.
Even more can be ascertained: It can be seen that the last two lines of
Theorem~\ref{toric} precisely state that, with a $B$-matrix like that, the following holds:
\begin{Prop}The change $E_{u\Lambda_s,v\Lambda_s}\to E_{u\sigma_s\Lambda_s,v\sigma_s\Lambda_s}$ implied by $\check B(u,v,s)$
is the result of a BFZ mutation.
\end{Prop}
\begin{Thm} The Schubert mutation $${\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})\rightarrow{\mathcal Q}_q({\mathfrak a},{\mathfrak
b}',{\mathfrak c})$$ implied by a replacement ${\mathfrak m}^{+}_{{\mathfrak a},{\mathfrak c}}$ as in (\ref{m+}) is the result of a series of BFZ mutations.
\end{Thm}
\proof The number $s$ is given by (\ref{94}) and remains fixed throughout. We do the replacement in a number of steps. We set ${\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})={\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(0)$ and perform changes
\begin{eqnarray}
&{\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})={\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(0)\rightarrow \\\nonumber& {\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(1)\rightarrow \dots\rightarrow {\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t_0)={\mathcal Q}_q({\mathfrak a},{\mathfrak
b}',{\mathfrak c}).
\end{eqnarray}
We will see below that $t_0=s_{\mathfrak b}-s_{\mathfrak a}-1$. We set
\begin{equation}
\textrm{If } 0\leq t\leq t_0: {\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t)=({\mathcal C}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t),{\mathcal L}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t),{\mathcal B}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t)).
\end{equation}
The intermediate seeds ${\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t)$ with $0<t<t_0$ are not defined by strings $\tilde{\mathfrak a}\leq \tilde{\mathfrak b}\leq \tilde{\mathfrak c}$. At each $t$-level, only one column is replaced when passing from ${\mathcal B}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t)$ to ${\mathcal B}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})(t+1)$, and here (\ref{77}) is applied. Of course, the whole ${\mathcal B}$ matrix is given by (\ref{72}) and (\ref{75}) for a suitable seed.
Specifically, using (\ref{77}) we introduce a family of expressions $\check B$ as in (\ref{99}):
\begin{eqnarray}\label{b-conv-bar-t}{B}^{{\mathfrak a},{\mathfrak
b}(t), {\mathfrak c}}_{m^+}(s,t)=
E_{\omega(s_{\mathfrak a}+t+1)\Lambda_s,\omega^{{\mathfrak b}}\Lambda_s}^{-1}
E_{\omega(s_{\mathfrak a}+t)\Lambda_s,\omega^{{\mathfrak b}'}\Lambda_s}^{-1}
\prod
E_{\omega(s,s_{\mathfrak a}+t+1)\Lambda_j,\omega^{{\mathfrak b}'}\Lambda_j}
^{-a_{js}}\\\nonumber
=(E^u_{\mathfrak b}(s,s_{\mathfrak a}+t+1)E^u_{{\mathfrak b}'}(s,s_{\mathfrak
a}+t))^{-1}\prod E^u_{{\mathfrak b}}(j,\overline p(j,s,s_{\mathfrak
a}+t+1))^{-a_{js}},
\end{eqnarray}
implying the changes
\begin{equation}
E^u_{{\mathfrak b}}(s,s_{\mathfrak a}+t)\rightarrow E^u_{{\mathfrak
b}'}(s,s_{\mathfrak a}+t+1).
\end{equation}
If $\omega(s,s_{\mathfrak a}+t+1)=u_t\sigma_s$ and $v=\omega^{\mathfrak b}$
then
this corresponds to
\begin{equation}
\left(E_{u_t\sigma_s\Lambda_s,v\Lambda_s}E_{u_t\Lambda_s,
v\sigma_s\Lambda_s}\prod_{a_{js}<0}E_{u_t\Lambda_j,v\Lambda_j}^{a_{js}}\right)^{-1}.
\end{equation}
Here, then, is how the changes are performed, in detail:
\begin{eqnarray*}{\mathrm Step}(0):&&\\{\mathcal C}_q({\mathfrak a},{\mathfrak
b},{\mathfrak
c})\ni E^d_{{\mathfrak a}}(s,s_{\mathfrak b}+1)&\rightarrow& E^u_{{\mathfrak
b}'}(s,s_{\mathfrak a})\in {\mathcal C}_q({\mathfrak a},{\mathfrak
b}(0),{\mathfrak c}) \ (renaming),\\
{B}_q^{{\mathfrak a},{\mathfrak
b}, {\mathfrak c}}(s,s_{\mathfrak a})&\rightarrow&{B}^{{\mathfrak a},{\mathfrak
b}(0), {\mathfrak c}}_{m^+}(s,0) \ (renaming),
\\{\mathcal L}_q({\mathfrak a},{\mathfrak b},{\mathfrak
c})&\rightarrow& {\mathcal L}_q({\mathfrak a},{\mathfrak b}(0),{\mathfrak
c}) \ (renaming),
\\{\mathrm Step}(1): && (implied\ by \ {B}^{{\mathfrak a},{\mathfrak
b}(0), {\mathfrak c}}_{m^+}(s,0) ),
\\{\mathcal C}_q({\mathfrak a},{\mathfrak b}(0),{\mathfrak c}) \ni
E^u_{{\mathfrak b}}(s,s_{\mathfrak a})&\rightarrow& E^u_{{\mathfrak
b}'}(s,s_{\mathfrak a}+1) \in {\mathcal C}_q^d({\mathfrak a},{\mathfrak
b}(1),{\mathfrak c}) , \\{B}_q^{{\mathfrak a},{\mathfrak
b}, {\mathfrak c}}(s,s_{\mathfrak a}+1)&
\rightarrow&{B}^{{\mathfrak a},{\mathfrak
b}(1), {\mathfrak c}}_{m^+}(s,1) (by\ (\ref{77})),
\\{\mathcal L}_q({\mathfrak a},{\mathfrak b}(0),{\mathfrak
c})&\rightarrow& {\mathcal L}_q({\mathfrak a},{\mathfrak b}(1),{\mathfrak
c}) \ (implied),\\{\mathrm Step}(2): && (implied\ by \ {B}^{{\mathfrak
a},{\mathfrak
b}(1), {\mathfrak c}}_{m^+}(s,1) ),
\\
{\mathcal C}_q^d({\mathfrak a},{\mathfrak b}(1),{\mathfrak c}) \ni
E^u_{{\mathfrak
b}}(s,s_{\mathfrak a}+1)&\rightarrow& E^u_{{\mathfrak b}'}(s,s_{\mathfrak
a}+2)\in {\mathcal C}_q^d({\mathfrak a},{\mathfrak b}(2),{\mathfrak c}),\\\vdots\
\\{\mathrm Step}(t+1): && (implied\ by \ {B}^{{\mathfrak a},{\mathfrak
b}(t), {\mathfrak c}}_{m^+}(s,t) ),\\
{\mathcal C}_q^d({\mathfrak a},{\mathfrak b}(t),{\mathfrak c}) \ni
E^u_{{\mathfrak
b}}(s,s_{\mathfrak a}+t)&\rightarrow& E^u_{{\mathfrak b}'}(s,s_{\mathfrak
a}+t+1)\in {\mathcal C}_q^d({\mathfrak a},{\mathfrak b}(t+1),{\mathfrak c}) ,
\\{B}_q^{{\mathfrak a},{\mathfrak
b}, {\mathfrak c}}(s,s_{\mathfrak a}+t)&
\rightarrow&{B}^{{\mathfrak a},{\mathfrak
b}(t), {\mathfrak c}}_{m^+}(s,t) (by\ (\ref{77})),\\{\mathcal L}_q({\mathfrak a},{\mathfrak
b}(t),{\mathfrak
c})&\rightarrow& {\mathcal L}_q({\mathfrak a},{\mathfrak b}(t+1),{\mathfrak
c}) \ (implied).
\end{eqnarray*}
The last step is $t=s_{\mathfrak b}-s_{\mathfrak a}-1$; we have ${\mathfrak
b}(0)={\mathfrak b}$ and ${\mathfrak b}(s_{\mathfrak b}-s_{\mathfrak
a}-1)={\mathfrak b}'$.
It is easy to see that all intermediate sets indeed are seeds.
What is missing now is to connect, via a change-of-basis transformation of the
compatible pair, with the ``$E,F$'' matrices of \cite{bz}. Here we notice that both terms
\begin{equation}
(E^u_{\mathfrak b}(s,s_{\mathfrak a}+t+1)E^u_{{\mathfrak b}'}(s,s_{\mathfrak
a}+t))^{-1}(E^u_{\mathfrak b}(s,s_{\mathfrak a}+t))^{-1}
\end{equation}
and
\begin{equation}
\prod E^u_{{\mathfrak b}}(j,\overline p(j,s,s_{\mathfrak
a}+t+1))^{-a_{js}}(E^u_{\mathfrak b}(s,s_{\mathfrak a}+t))^{-1}
\end{equation}
have the same $q$-commutators as $E^u_{{\mathfrak b}'}(s,s_{\mathfrak a}+t+1)$.
The two possibilities correspond to the two signs in formulas (3.2) and (3.3) in \cite{bz}.
Indeed, the linear transformation
\begin{equation}E(t):E^u_{\mathfrak b}(s,s_{\mathfrak a}+t)\rightarrow
-E^u_{\mathfrak b}(s,s_{\mathfrak a}+t+1)-E^u_{{\mathfrak b}'}(s,s_{\mathfrak
a}+t)-E^u_{\mathfrak b}(s,s_{\mathfrak a}+t)
\end{equation}
results in a change-of-basis on the level of forms:
\begin{eqnarray}{\mathcal L}_q({\mathfrak a},{\mathfrak
b}(t),{\mathfrak
c})\rightarrow{\mathcal L}_q({\mathfrak a},{\mathfrak b}(t+1),{\mathfrak
c})&=&E^T(t){\mathcal L}_q({\mathfrak a},{\mathfrak
b}(t),{\mathfrak
c})E(t),\\\nonumber {\mathcal B}_{m^+}^{{\mathfrak a},{\mathfrak
b}(t),{\mathfrak
c}}(s,t)\rightarrow{\mathcal B}_{m^+}^{{\mathfrak a},{\mathfrak b}(t+1),{\mathfrak
c}}(s,t+1)&=&E(t){\mathcal B}_{m^+}^{{\mathfrak a},{\mathfrak
b}(t),{\mathfrak
c}}(s,t)F(t),
\end{eqnarray}
where $F(t)$ is a truncated part of $E(t)^T$ (the restriction to the mutable
elements).
With this, the proof is complete. \qed
\begin{Thm}
Any ${\mathcal Q}_q({\mathfrak r}_1,\dots, {\mathfrak
r}_{n-1},{\mathfrak r}_n)$ can be obtained from ${\mathcal Q}_q({\mathfrak
e},{\mathfrak p})$ as a sub-seed and any ${\mathcal Q}_q^o({\mathfrak r}_1,\dots,
{\mathfrak
r}_{n-1},{\mathfrak r}_n)$ can be obtained from ${\mathcal Q}_q^o({\mathfrak
e},{\mathfrak p})$ as a sub-seed through a series of Schubert creation and
annihilation
mutations.
These mutations are, apart from the trivial actions of renaming, splitting,
merging, or simple restrictions, composites of BFZ-mutations.
\end{Thm}
\proof Apart from mergers and splittings (Definition~\ref{def84}), the mutations are composites of mutations of the form ${\mathcal Q}_q({\mathfrak a},{\mathfrak
b},{\mathfrak c})\to {\mathcal Q}_q({\mathfrak a},{\mathfrak
b}',{\mathfrak c})$. \qed
\begin{Cor}\label{cor8.7}The algebras ${\mathcal A}_q^{d}({\mathfrak a},{\mathfrak
c})$ and ${\mathcal A}_q^{u}({\mathfrak a},{\mathfrak c})$ are mutation equivalent
and indeed are equal. We henceforth denote this algebra by ${\mathcal
A}_q({\mathfrak a},{\mathfrak
c})$. This is the quadratic algebra generated by the elements $\beta_{c,d}$
with $c_{\mathfrak a}<d\leq c_{\mathfrak c}$.
\end{Cor}
We similarly denote the corresponding skew-field of fractions by ${\mathcal
F}_q({\mathfrak a},{\mathfrak
c})$.
\section{Prime}
\begin{Def}
\begin{equation}{\det}_{s}^{{\mathfrak a},{\mathfrak c}}:=E_{\omega^{\mathfrak
a}\Lambda_s,\omega^{\mathfrak c}\Lambda_s}.\end{equation}
\end{Def}
\begin{Thm}\label{8.6} The two-sided ideal $I({\det}_{s}^{{\mathfrak
a},{\mathfrak
c}})$ in ${\mathcal A}_q({\mathfrak a},{\mathfrak c})$ generated
by the covariant and non-mutable element ${\det}_{s}^{{\mathfrak a},{\mathfrak
c}}$ is \underline{prime} for each $s$.
\end{Thm}
\proof We proceed by induction. The induction start is trivially satisfied. Let us then
divide the induction step into two cases. First, let $\gamma$ be an
annihilation-mutation site of
$\omega^{\mathfrak c}$ such that $\omega^{\mathfrak
c}=\sigma_\gamma\omega^{{\mathfrak c}_1}=\omega^{{\mathfrak
c}_1}\sigma_{\alpha_s}$
with $\omega^{{\mathfrak c}_1}\in W^p$. We have clearly ${\mathcal
A}_q({\mathfrak a},{\mathfrak c})= {\mathcal A}_q({\mathfrak a},{\mathfrak
c}_1)\cup I({\det}_{s}^{{\mathfrak a},{\mathfrak c}})$. Furthermore, ${\mathcal
A}_q({\mathfrak a},{\mathfrak c})\setminus {\mathcal
A}_q({\mathfrak a},{\mathfrak c}_1) =I_\ell(Z_\gamma)$, where
$I_\ell(Z_\gamma)$ denotes the left ideal generated by $Z_\gamma$. We might as
well consider the right ideal,
but not the two-sided ideal since in general there will be terms ${\mathcal R}$ of lower order, cf. Theorem~\ref{4.1}.
It follows that \begin{equation}\label{Z}{\det}_{s}^{{\mathfrak a},{\mathfrak
c}}=M_1Z_\gamma
+M_2\end{equation} where $M_1,M_2\in {\mathcal
A}_q({\mathfrak a},{\mathfrak c}_1)$ and
$M_1\neq0$.
Indeed, $M_1$ is a non-zero multiple of ${\det}_{s}^{{\mathfrak a},{\mathfrak
c}_1}$. (If
$s_{\mathfrak c}=1$ then $M_1=1$ and $M_2=0$.) We also record, partly for
later use,
that $Z_\gamma$ $q$-commutes with everything up to correction terms from
${\mathcal
A}_q({\mathfrak a},{\mathfrak c}_1)$.
Notice that we use Corollary~\ref{cor8.7}.
Now consider an equation
\begin{equation}
{\det}_{s}^{{\mathfrak a},{\mathfrak c}}p_1=p_2p_3
\end{equation}
with $p_1,p_2,p_3\in {\mathcal A}_q({\mathfrak a},{\mathfrak
c})$. Use (\ref{Z}) to write for each $i=1,2,3$
\begin{equation}
p_i=\sum_{k=0}^{n_i}({\det}_{s}^{{\mathfrak a},{\mathfrak c}})^kN_{i,k}
\end{equation}
where each $N_{i,k}\in {\mathbf L}_q({\mathfrak a},{\mathfrak
c}_1)$, and assume that $N_{i,0}\neq0\textrm{ for }i=2,3$.
Then $0\neq N_{2,0}N_{3,0}\in {\mathbf L}_q({\mathfrak a},{\mathfrak c}_1)$. At the same time,
\begin{equation}
N_{2,0}N_{3,0}=\sum_{k\geq1}({\det}_{s}^{{\mathfrak a},{\mathfrak
c}})^k\tilde N_{k}
\end{equation}
for certain elements $\tilde N_{k} \in {\mathbf L}_q({\mathfrak a},{\mathfrak
c}_1)$.
Using the linear independence (\cite[Proposition~10.8]{bz}) we easily get a
contradiction by looking at the leading term in ${\det}_{s}^{{\mathfrak
a},{\mathfrak
c}}$.
Now in the general case, the $s$ in ${\det}_{s}^{{\mathfrak a},{\mathfrak c}}$ is given
and we may write ${\omega}^{\mathfrak c}={\omega}^{{\mathfrak
c}_2}\sigma_{s}\tilde{\omega}$ where $\sigma_s$ does not occur in
$\tilde{\omega}$. Let ${\omega}^{{\mathfrak c}_1}={\omega}^{{\mathfrak
c}_2}\sigma_s$. It is clear that ${\det}_{s}^{{\mathfrak a},{\mathfrak
c}}={\det}_{s}^{{\mathfrak a},{\mathfrak c}_1}$ and by the previous,
${\det}_{s}^{{\mathfrak a},{\mathfrak c}_1}$ is prime in
${\mathcal A}_q({\mathfrak a},{\mathfrak c}_1)$. We have that ${\mathcal
A}_q({\mathfrak a},{\mathfrak c}_1)$ is
an algebra in its own right. Furthermore,
\begin{equation}{\mathcal A}_q({\mathfrak a},{\mathfrak c})={\mathcal
A}_q({\mathfrak a},{\mathfrak c}_1)[Z_{\gamma_1},\dots,Z_{\gamma_n}],
\end{equation}
where the Lusztig elements $Z_{\gamma_1},\dots,Z_{\gamma_n}$ are bigger than
the generators of ${\mathcal A}_q({\mathfrak a},{\mathfrak c}_1)$. In a PBW basis
we can put them to the right. They even generate a quadratic algebra
$\tilde{\mathcal A}_q$ in their own right! The equations we need to consider are
of
the form
\begin{equation}p_1p_2={\det}_{s}^{{\mathfrak a},{\mathfrak c}_1}p_3
\end{equation}
with $p_1,p_2,p_3\in {\mathcal A}_q({\mathfrak a},{\mathfrak c})$. The claim that
at least one
of $p_1,p_2$ contains a factor of ${\det}_{s}^{{\mathfrak a},{\mathfrak c}_1}$ follows by
easy induction on the $\tilde{\mathcal A}_q$ degree of $p_1p_2$, i.e. the sum of
the $\tilde{\mathcal A}_q$ degrees of $p_1$ and $p_2$. \qed
\section{Upper}
Let $\omega^{\mathfrak a}, \omega^{\mathfrak c}\in W^p$ and ${\mathfrak
a}<{\mathfrak
c}$.
\begin{Def}
The cluster algebra ${\mathbf A}_q({\mathfrak a},{\mathfrak c})$ is the
${\mathbb
Z}[q]$-algebra generated in the
space ${\mathcal F}_q({\mathfrak a},{\mathfrak c})$ by the inverses of the
non-mutable elements
${\mathcal N}_q({\mathfrak a},{\mathfrak c})$ together with the union of the sets
of all
variables
obtainable from the initial seed ${\mathcal Q}_q({\mathfrak a},{\mathfrak c})$ by
composites of
quantum
Schubert mutations (appropriately applied).
\end{Def}
Observe that we include ${\mathcal N}_q({\mathfrak a},{\mathfrak c})$ in the set
of variables.
\begin{Def}
The upper cluster algebra ${\mathbf U}_q({\mathfrak a},{\mathfrak c})$ connected
with
the same pair $\omega^{\mathfrak a}, \omega^{\mathfrak c}\in W^p$ is the
${\mathbb Z}[q]$-algebra in ${\mathcal F}_q({\mathfrak a},{\mathfrak c})$ given
as
the intersection of all the Laurent algebras of the
sets of variables
obtainable from the initial seed ${\mathcal Q}_q({\mathfrak a},{\mathfrak c})$ by
composites of
quantum Schubert mutations (appropriately applied).
\end{Def}
\begin{Prop}
$${\mathcal A}_q({\mathfrak a},{\mathfrak c})\subseteq {\mathbf A}_q({\mathfrak
a},{\mathfrak c})\subset {\mathbf U}_q({\mathfrak a},{\mathfrak c}).$$
\end{Prop}
\proof The first inclusion follows from \cite{leclerc}, the other is the quantum
Laurent phenomenon. \qed
\begin{Rem}
Our terminology may seem a bit unfortunate since the notions of a cluster
algebra and an
upper cluster algebra have already been introduced by Berenstein and Zelevinsky
in
terms of all mutations. We only use quantum line mutations, which form a proper
subset of the set of all quantum mutations. However, it will be a corollary to
what
follows that the two notions in fact coincide, and for this reason we
do not introduce auxiliary notation.
\end{Rem}
\begin{Thm}
$${\mathbf U}_q({\mathfrak a},{\mathfrak c})={\mathcal A}_q({\mathfrak
a},{\mathfrak c})[({\det}_{s}^{{\mathfrak a},{\mathfrak c}})^{-1}; s\in
Im(\pi_{{\mathfrak c}})].$$
\end{Thm}
\proof This follows by induction on $\ell(\omega^{\mathfrak c})$ (with start at
$\ell(\omega^{\mathfrak a})+1$) in the same way
as in the proof of \cite[Theorem~8.5]{jz}, but for clarity we give the details:
Let the notation and assumptions be as in the proof of Theorem~\ref{8.6}. First
of all, the induction start is trivial since there we are looking at the
generator of a Laurent quasi-polynomial algebra. Let then $u\in {\mathbf
U}_q({\mathfrak a},{\mathfrak c})$. We will argue by contradiction, and just as
in
the proof
of \cite[Theorem~8.5]{jz}, one readily sees that one may assume that $u\in
{\mathcal A}_q({\mathfrak a},{\mathfrak c}_1)[({\det}_{s}^{{\mathfrak
a},{\mathfrak c}_1})^{-1}, {\det}_{s}^{{\mathfrak a},{\mathfrak c}}]$. Using
(\ref{Z}) we may now write
\begin{equation} \label{neg1}
u=\left(\sum_{i=0}^K Z_\gamma^ip_i({\det}_{s}^{{\mathfrak a},{\mathfrak
c}_1})^{k_i}\right)({\det}_{s}^{{\mathfrak a},{\mathfrak c}_1})^{-\rho},
\end{equation}
with $p_i\in {\mathcal A}_q({\mathfrak a},{\mathfrak c}_1)$, $p_i\notin
I({\det}_{s}^{{\mathfrak a},{\mathfrak c}_1})$, and $k_i\geq0$. Our assumption
is that $\rho>0$. Recall that the elements
${\det}_{s}^{{\mathfrak a},{\mathfrak c}_1}$ and ${\det}_{s}^{{\mathfrak
a},{\mathfrak c}}$ are
covariant and define prime ideals in the appropriate algebras.
Using the fact that ${\mathbf U}_q({\mathfrak a},{\mathfrak c})$ is an algebra
containing ${\mathcal A}_q({\mathfrak a},{\mathfrak c})$, we can assume that the
expression in the left bracket in (\ref{neg1}) is not in
$I({\det}_{s}^{{\mathfrak a},{\mathfrak c}})$ and we may further assume that
$p_i\neq0\Rightarrow
k_i<\rho$. To wit, one can remove the factors of ${\det}_{s}^{{\mathfrak
a},{\mathfrak c}}$, then
remove the terms with $k_i\geq \rho$, then possibly repeat this process a
number
of
times.
Consider now the cluster ${\mathcal C}_q^{u}({\mathfrak a},{\mathfrak c})$. We
know that $u$ can be written as a Laurent quasi-polynomial in the elements
of
${\mathcal C}_q^{u}({\mathfrak a},{\mathfrak c})$. By factoring out, we can
then write
\begin{equation}\label{neg2}
u=p\prod_{(c,d)\in{\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}}(E^u_{\mathfrak c}(c,d))^{-\alpha_{c,d}},
\end{equation}
where $p\in {\mathcal A}_q({\mathfrak a},{\mathfrak c})$, and
$\alpha_{c,d}\geq0$.
We will
compare this to (\ref{neg1}). For the sake of this argument set $\tilde{\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}=\{(c,d)\in {\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}\mid \alpha_{c,d}>0\}$.
Of course, ${\det}_s^{{\mathfrak a},{\mathfrak c}}\in {\mathcal
C}_q^{u}({\mathfrak a},{\mathfrak c})$.
``Multiplying across'', we get from (\ref{neg1}) and (\ref{neg2}), absorbing
possibly some terms into $p$:
\begin{equation}
(\sum_{i=0}^K Z_\gamma^ip_i({\det}^{{\mathfrak a},{\mathfrak
c}_1}_{s})^{k_i})\prod_{(c,d)\in\tilde{\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}}(E^u_{\mathfrak
c}(c,d))^{\alpha_{c,d}}=p({\det}^{{\mathfrak a},{\mathfrak c}_1}_{s})^{\rho}.
\end{equation}
Any factor of ${\det}^{{\mathfrak a},{\mathfrak c}}_{s}$ in $p$ will have to be
canceled by a similar
factor of $E^u_{\mathfrak c}(s,0)$ in the left-hand side, so we can assume that
$p$ contains no
factor of ${\det}^{{\mathfrak a},{\mathfrak c}}_{s}$. After that we can assume
that $(s,0)\notin
\tilde{\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}$ since clearly ${\det}^{{\mathfrak
a},{\mathfrak c}_1}_{s}\notin
I({\det}^{{\mathfrak a},{\mathfrak c}}_{s})$. Using that $k_i<\rho$ it follows
that there must be a
factor of $({\det}^{{\mathfrak a},{\mathfrak c}_1}_{s})$ in
$\prod_{(c,d)\in\tilde{\mathbb
U}^{u,{\mathfrak a},{\mathfrak c}}}(E^u_{\mathfrak c}(c,d))^{\alpha_{c,d}}$.
Here, as just noticed, $d=0$ is excluded. The other
terms do not contain $Z_{s,1}$ but $({\det}^{{\mathfrak a},{\mathfrak
c}_1}_{s})$ does. This is an obvious contradiction. \qed
\section{The diagonal of a quantized minor}
\begin{Def}Let ${\mathfrak a}<{\mathfrak b}$. The diagonal, ${\mathbb
D}_{\omega^{\mathfrak
a}(\Lambda_s),\omega^{\mathfrak b}(\Lambda_s)}$, of
$E_{\omega^{\mathfrak a}(\Lambda_s),\omega^{\mathfrak
b}(\Lambda_s)}$ is set to
\begin{equation}
{\mathbb D}_{\omega^{\mathfrak a}(\Lambda_s),\omega^{\mathfrak
b}(\Lambda_s)}=q^{\alpha}Z_{s,s_{\mathfrak a}+1}\cdots Z_{s,s_{\mathfrak b}},
\end{equation}
where
\begin{equation}
Z_{s,s_{\mathfrak b}}\cdots Z_{s,s_{\mathfrak
a}+1}=q^{2\alpha}Z_{s,s_{\mathfrak a}+1}\cdots Z_{s,s_{\mathfrak b}} + {\mathcal R}
\end{equation}
where the terms ${\mathcal R}$ are of lower order.
\end{Def}
\begin{Prop}
$$E_{\omega^{\mathfrak a}(\Lambda_s),\omega^{\mathfrak
b}(\Lambda_s)}={\mathbb D}_{\omega^{\mathfrak a}(\Lambda_s),\omega^{\mathfrak
b}(\Lambda_s)}+{\mathcal R}$$
The terms in ${\mathcal R}$ are of lower order in our ordering induced by
$\leq_L$. They can in theory be determined from the
fact
that the full
polynomial belongs to the dual canonical
basis (\cite{bz}, \cite{leclerc}).
\end{Prop}
\proof We prove this by induction on the length $s_{\mathfrak b}-s_{\mathfrak
a}$ of any $s$-diagonal. When this length is $1$ we have at most a quasi-polynomial algebra and here the case is clear. Consider then a creation-mutation site where we go from length $r$ to $r+1$: Obviously, it is only the very last determinant we need to consider. Here we use the equation in
Theorem~\ref{3.2} but reformulate it in terms of the elements $E_{\xi,\eta}$, cf.
Theorem~\ref{toric}.
Set $\omega^{{\mathfrak b}_1}=\omega^{{\mathfrak b}}\sigma_s$ and consider
$E_{\omega^{\mathfrak a}(\Lambda_s),\omega^{{\mathfrak
b}_1}(\Lambda_s)}$. Its weight is
given as
$$\omega^{{\mathfrak
b}_1}(\Lambda_s)-\omega^{\mathfrak a}(\Lambda_s)=\beta_{s,s_{\mathfrak a}+1}+
\dots+\beta_{s,s_{\mathfrak b}+1}.$$
In the recast version of Theorem~\ref{3.2}, the terms on the left hand
side are covered by the induction hypothesis. The second term on the right hand
side contains no element of the form $Z_{s,s_{{\mathfrak b}_1}}$ and it follows
that we have an equation
\begin{equation}(Z_{s,s_{\mathfrak a}+2}\cdots Z_{s,s_{\mathfrak
b}})E_{\omega^{\mathfrak a}(\Lambda_s),\omega^{{\mathfrak
b}_1}(\Lambda_s)}=(Z_{s,s_{\mathfrak a}+2}\cdots Z_{s,s_{\mathfrak b}+1})
(Z_{s,s_{\mathfrak a}+1}\cdots Z_{s,s_{\mathfrak b}})
+{\mathcal R}.
\end{equation}
The claim follows easily from that. \qed
Recall that the associated quasi-polynomial algebra is the algebra with
relations corresponding to the top terms, i.e., colloquially speaking, the
algebra obtained by setting the lower order terms ${\mathcal R}$ equal to $0$. Let
\begin{equation}
{d}_{\omega^{\mathfrak r}_{s,t_1}(\Lambda_s),\omega^{\mathfrak
r}_{s,t}(\Lambda_s)}=z_{s,t_1+1}\cdots z_{s,t}.
\end{equation}
The following shows the importance of the diagonals:
\begin{Thm}
\begin{eqnarray}
d_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}d_{u_1\cdot\Lambda_{i_1},
v_1\cdot\Lambda_{i_1}}&=&
q^G d_{u_1\cdot\Lambda_{i_1},v_1\cdot\Lambda_{i_1}}
d_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}\Leftrightarrow\\
{\mathbb D}_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}{\mathbb
D}_{u_1\cdot\Lambda_{i_1},v_1\cdot\Lambda_{i_1}}&=&
q^G {\mathbb D}_{u_1\cdot\Lambda_{i_1},v_1\cdot\Lambda_{i_1}}
{\mathbb
D}_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}+ {\mathcal R}
\end{eqnarray}In particular, if the two elements
${E}_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}$ and ${E}_{u_1\cdot\Lambda_{i_1},
v_1\cdot\Lambda_{i_1}}$ $q$-commute:
\begin{equation}
{E}_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}{E}_{u_1\cdot\Lambda_{i_1},
v_1\cdot\Lambda_{i_1}}=
q^G {E}_{u_1\cdot\Lambda_{i_1},v_1\cdot\Lambda_{i_1}}
{E}_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}
\end{equation}then $G$ can be computed in the associated quasi-polynomial
algebra:
\begin{equation}
d_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}d_{u_1\cdot\Lambda_{i_1},
v_1\cdot\Lambda_{i_1}}=
q^G d_{u_1\cdot\Lambda_{i_1},v_1\cdot\Lambda_{i_1}}
d_{u\cdot\Lambda_{i_0},v\cdot\Lambda_{i_0}}.\end{equation}
\end{Thm}
\begin{Rem}
One can also compute $G$ directly using the formulas in \cite{bz}.
\end{Rem}
\begin{Rem}
The elements $E_{\xi,\eta}$ that we consider belong to the dual canonical
basis.
As such, they can in principle be determined from the highest order terms
${\mathbb D}_{\xi,\eta}$.
\end{Rem}
\section{Literature}
\end{document}
\begin{document}
\author[1]{Dan Wilson \thanks{corresponding author:~dwilso81@utk.edu} }
\affil[1]{Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA}
\title{Nonlinear Data-Driven Approximation of the Koopman Operator}
\begin{abstract}
Koopman analysis provides a general framework from which to analyze a nonlinear dynamical system in terms of a linear operator acting on an infinite-dimensional observable space. This theoretical framework provides a rigorous underpinning for widely used dynamic mode decomposition algorithms. While such methods have proven to be remarkably useful in the analysis of time-series data, the resulting linear models must generally be of high order to accurately approximate fundamentally nonlinear behaviors. This issue poses an inherent risk of overfitting to training data thereby limiting predictive capabilities. By contrast, this work explores strategies for nonlinear data-driven estimation of the action of the Koopman operator. General strategies that yield nonlinear models are presented for systems both with and without control. Subsequent projection of the resulting nonlinear equations onto a low-rank basis yields a low order representation for the underlying dynamical system. In both computational and experimental examples considered in this work, linear estimators of the Koopman operator are generally only able to provide short-term predictions for the observable dynamics while comparable nonlinear estimators provide accurate predictions on substantially longer timescales and replicate infinite-time behaviors that linear predictors cannot.
\end{abstract}
\section{Introduction}
Model identification is a necessary first step in the design, optimization, control, and estimation of complex dynamical systems. When the mechanisms that underlie the dynamics are well understood, models can often be derived using first principles approaches and subsequently fit to available data. However, in applications where an underlying system is too complicated to write down the underlying equations, data-driven model identification can be a powerful alternative \cite{kutz16}, \cite{brun19}. Substantial progress has been made in recent years in the development of algorithms for inferring dynamical models strictly from time-series data. Dynamic mode decomposition (DMD) \cite{kutz16}, \cite{schm10}, \cite{rowl09} is one such algorithm, with the ability to represent the evolution of snapshot data in terms of a collection of linear modes with associated eigenvalues that determine the growth/decay/oscillation rates. This general framework has inspired numerous variations that can, for instance, incorporate the influence of an exogenous control input \cite{proc16}, account for noise and uncertainty \cite{hema17}, and continuously adjust when the system parameters are time-varying \cite{zhan19}.
While DMD has been used in a wide variety of applications to explicate the underlying behavior of snapshot data in terms of eigenmode and eigenvalue pairs, without additional modifications it yields a linear estimator for the underlying dynamics. Alternative approaches have been developed to identify fully nonlinear representations for the underlying equations from data \cite{mang19}, \cite{brun16b}, \cite{pant19}, \cite{rudy17}. These methods typically consider a large nonlinear function library and subsequently use machine learning algorithms to choose a sparse subset that best matches the training data. While such methods can be readily applied to identify sparse representations of highly nonlinear (and even chaotic) systems, their efficacy is dependent on the choice of an appropriate nonlinear function library. Related machine learning approaches using neural networks have also achieved success for prediction in highly nonlinear dynamical systems \cite{path18}, \cite{vlac18}, \cite{rais18}.
Many of the data-driven model identification strategies described above have a close connection to Koopman analysis \cite{budi12}, \cite{mezi13}, \cite{mezi19}. Koopman-based approaches generally allow for the representation of a nonlinear dynamical system as a linear operator acting on an infinite-dimensional observable space. Such approaches are distinct from standard linearization techniques that consider the dynamics in a close neighborhood of some nominal solution. Rather, the goal of Koopman analysis is to identify a linear operator that can accurately capture fundamentally nonlinear dynamics -- the key challenge is in the identification of a suitable finite basis to represent the action of the generally infinite dimensional Koopman operator. The connection between DMD and spectral analysis of the Koopman operator is well established \cite{rowl09}, and in applications where high-dimensional data is readily available, the DMD algorithm can indeed be used to provide a finite-dimensional approximation of the Koopman operator. Extensions of the DMD algorithm have illustrated that more accurate approximations of the Koopman operator can be obtained using DMD in conjunction with a set of lifting functions \cite{will15} and/or time-delayed embeddings of snapshot data \cite{arba17}. Additional accuracy can also be obtained using an adaptive strategy that uses DMD to obtain a continuous family of linear models and actively chooses the one that provides the best representation at every instant \cite{wils22data}.
How best to approximate the action of the Koopman operator from snapshot data remains an open question. DMD separates data into snapshot pairs and subsequently finds a linear operator that provides a least squares fit for the mapping from one snapshot to the next. This is currently the most widely used approach. An obvious advantage of linear estimators of the Koopman operator is that they allow for subsequent analysis using a wide variety of linear techniques. Nonetheless, such linear estimators are not always suitable for highly nonlinear systems since finite dimensional linear operators cannot be used, for instance, to replicate the infinite-time behavior of systems with multiple hyperbolic fixed points or systems with stable limit cycles. Further limitations of linear estimators can also be seen in \cite{page18}, which established the difficulty of representing even the relatively simple Burgers' equation in terms of a linear operator owing to the existence of highly degenerate Koopman eigenvalues. Alternatively, nonlinear models obtained from data-driven techniques are often more difficult to analyze, but can admit lower dimensional realizations and can often provide accurate representations of chaotic behavior \cite{brun16b}, \cite{path18}, \cite{vlac18}. Recent works have considered nonlinear estimation strategies. For instance, \cite{piet19} approximates separate Koopman operators that result for different values of an applied control input and uses this information to formulate a switching time optimization problem. Related approaches consider bilinear approximations of the Koopman operator for control systems \cite{peit18}, \cite{peit20}, \cite{sura16}.
This work explores strategies for nonlinear data-driven estimation of the action of the Koopman operator. General strategies that yield nonlinear models are presented for systems both with and without control. In the various examples considered in this work, only short term predictions of the dynamical behavior can be obtained using linear estimators for the Koopman operator. By contrast, nonlinear estimators are able to provide accurate long-term estimates for the dynamics of model observables and yield accurate information about limit cycling behaviors and basin of attraction estimates. The organization of this paper is as follows:~Section \ref{koopbackground} provides necessary background on Koopman operator theory along with a brief description of associated data-driven model identification techniques including DMD \cite{kutz16}, extended DMD \cite{will15}, and Koopman model predictive control \cite{kord18}. Section \ref{koopnonlin} proposes algorithms for obtaining a nonlinear approximation for the Koopman operator from snapshot data in both autonomous and controlled systems. The proposed approach is related to the extended DMD algorithm in that it considers a dictionary of functions of the observables, however, instead of estimating the action of the Koopman operator on each of the elements of the dictionary, the explicit nonlinear dependence of the dictionary elements on the observables is retained. A variety of examples are presented in Section \ref{exampsec}. Here, linear estimators for the Koopman operator are generally able to provide short-term predictions for the dynamics of observables; comparable nonlinear estimators provide accurate predictions on substantially longer timescales and accurately identify infinite-time behaviors. Concluding remarks and suggestions for extension are provided in Section \ref{concsec}.
\section{Background} \label{koopbackground}
\subsection{Koopman Operator Theory}
Consider a discrete-time dynamical system
\begin{equation} \label{vecfield}
x^+ = F(x),
\end{equation}
where $x \in \mathbb{R}^n$ is the state and $F$ gives the potentially nonlinear dynamics of the mapping $x \mapsto x^+$. The Koopman operator $K:\mathcal{F} \rightarrow \mathcal{F}$ acts on the vector space of observables so that
\begin{equation}\label{koop2}
K \psi (x) \equiv \psi(F(x)),
\end{equation}
for every $\psi: \mathbb{R}^n \rightarrow \mathbb{R}$ belonging to the space of observables $\mathcal{F}$. This operator is linear (owing to the linearity of the composition operator). As such, it can be used to represent the dynamics associated with a fully nonlinear system. Approaches that use Koopman analysis are distinct from standard linearization techniques that are only valid in a close neighborhood of some nominal solution. Note that while the Koopman operator is linear, it is generally infinite-dimensional \cite{budi12}, \cite{mezi13}, \cite{mezi19}. In practical applications, the critical challenge of Koopman analysis is in the identification of a finite-dimensional approximation of the Koopman operator.
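As a concrete illustration of the composition property in Equation \eqref{koop2}, the following minimal Python sketch (not taken from this paper; the one-dimensional map $F$ and the observable $\psi$ below are arbitrary choices made only for illustration) evaluates the action of the Koopman operator on an observable pointwise.
\begin{verbatim}
# Minimal illustration of (K psi)(x) = psi(F(x)) for an arbitrary map F and
# observable psi; both functions below are hypothetical examples.
import numpy as np

def F(x):
    return 0.5 * x + 0.2 * x**2      # an example nonlinear map x -> x^+

def psi(x):
    return np.sin(x)                 # an example scalar observable

def K_psi(x):
    return psi(F(x))                 # Koopman operator acting on psi

x0 = 0.7
print(K_psi(x0), psi(F(x0)))         # identical by definition
\end{verbatim}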
\subsection{Finite Dimensional Approximation of the Koopman Operator} \label{koopest}
Dynamic mode decomposition (DMD) \cite{kutz16}, \cite{schm10}, \cite{tu14} is one standard approach for identifying a finite dimensional approximation of the Koopman operator. To summarize this algorithm, one can consider a series of data snapshots
\begin{equation}
s_i = (g(x_i),g(x_i^+)),
\end{equation}
for $i = 1,\dots,d$ where $g(x) = \begin{bmatrix} \psi_1(x) , \dots, \psi_m(x) \end{bmatrix} \in \mathbb{R}^m$ is a vector of observables obtained from the data and $x_i^+ = F(x_i)$. The goal of DMD is to identify a linear dynamical system of the form
\begin{equation} \label{linmod}
g_i^+ = A g_i,
\end{equation}
where $g_i = g(x_i)$, $g_i^+ = g(x_i^+)$, and $A \in \mathbb{R}^{m \times m}$ maps the observables from one time step to the next. Such an estimate can be found according to a least-squares optimization,
\begin{equation} \label{standarddmd}
A = X^+ X^\dagger,
\end{equation}
where $X \equiv [g_1 \dots g_d]$, $X^+ \equiv [g_1^+ \dots g_d^+]$, and $^\dagger$ denotes the pseudoinverse. As a slight modification, instead of taking the pseudoinverse of $X$ as in Equation \eqref{standarddmd}, it is often desirable to obtain a lower rank representation by first taking the singular value decomposition of $X$ and truncating terms associated with low magnitude singular values \cite{proc16}, \cite{brun17}. Notice that the DMD algorithm as described above does not require knowledge of the underlying state and as such, can be implemented in a purely data-driven setting. DMD often struggles in applications where few observables are available, i.e.,~when $m$ is small. In such cases, extended DMD (EDMD) can be used \cite{will15}, which considers a lifted observable space
\begin{equation} \label{hvec}
h(x) = \begin{bmatrix} g(x) \\ f_{\rm lift}(g(x)) \end{bmatrix} \in \mathbb{R}^{m+b},
\end{equation}
where $f_{\rm lift}(g(x)) \in \mathbb{R}^b$ is a possibly nonlinear function of the observables called a `dictionary'. As before, letting $h_i = h(x_i)$ and $h_i^+ = h(x_i^+)$ comprise snapshot pairs with
\begin{align} \label{caph}
H & \equiv \begin{bmatrix} h_1 \dots h_d \end{bmatrix}, \nonumber \\
H^+ & \equiv \begin{bmatrix} h_1^+ \dots h_d^+ \end{bmatrix},
\end{align}
an estimate for the Koopman operator using the lifted coordinates can be obtained according to $A_{\rm lift} = H^+ H^\dagger$. The EDMD approach can provide more accurate estimates of the Koopman operator than the standard DMD approach. Indeed, in some cases the estimated Koopman operator converges to the true Koopman operator in the limit as both the lifted state and number of measurements approach infinity \cite{klus16}, \cite{kord18b}. Possible choices of lifted coordinates include polynomials, radial basis functions, and Fourier modes \cite{will15}. Additionally, delay embeddings of time series measurements of observables \cite{brun17}, \cite{arba17} have also yielded useful results in a variety of applications.
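For reference, the least-squares fit in Equation \eqref{standarddmd} and the lifted variant $A_{\rm lift} = H^+ H^\dagger$ amount to a few lines of code; the NumPy sketch below is illustrative only (this paper does not specify an implementation), and the optional rank truncation mirrors the SVD-based truncation mentioned above.
\begin{verbatim}
# Sketch of the DMD/EDMD least-squares fit A = X^+ X^dagger with an optional
# truncated SVD; X and Xp hold the (possibly lifted) snapshots as columns.
import numpy as np

def dmd_fit(X, Xp, r=None):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if r is not None:                       # keep only the leading r modes
        U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    X_pinv = Vt.T @ np.diag(1.0 / s) @ U.T  # (truncated) pseudoinverse of X
    return Xp @ X_pinv                      # A (or A_lift when X = H, Xp = H^+)
\end{verbatim}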
\subsection{Koopman-Based Model Identification With Control} \label{koopcontrol}
Koopman-based approaches can readily be generalized to actuated systems \cite{kord18, proc18, will16}. Following the approach suggested in \cite{kord18}, consider a controlled dynamical system
\begin{equation} \label{nlinvec}
x^+ = F(x,u),
\end{equation}
with output also given by Equation \eqref{koop2}. The above equation is identical to \eqref{vecfield} with the incorporation of a control input $u \in \mathbb{R}^q \subset \mathcal{U}$. Following the approach from \cite{kord18}, one can define an extended state space that is the product of the original state space $\mathbb{R}^n$ and the space of all input sequences $l(\mathcal{U}) = \{ (u_i)_{i=0}^\infty | u_i \in \mathcal{U} \}$. Defining an observable $\phi:\mathbb{R}^n \times l(\mathcal{U}) \rightarrow \mathbb{R}$ belonging to a space of observables $\mathcal{H}$, the nonautonomous Koopman operator $K:\mathcal{H} \rightarrow \mathcal{H}$ can be defined according to
\begin{equation}
K \phi (x,(u_i)_{i=0}^\infty) = \phi(F(x,u_0),(u_i)_{i=1}^\infty).
\end{equation}
Leveraging the EDMD algorithm, an estimate for the nonautonomous Koopman operator can be obtained by defining a vector of lifted coordinates
\begin{equation}
p(x_i) = \begin{bmatrix} g(x_i) \\ f_{\rm lift}(g(x_i)) \\ u_i \end{bmatrix},
\end{equation}
and determining an estimate for the linear dynamical system $p(x_i^+) = A_c p(x_i)$, where $A_c \in \mathbb{R}^{(m+b+q) \times (m+b+q)}$. As noted in \cite{kord18}, one is generally not interested in predicting the last $q$ components of $p(x_i^+)$, i.e.,~those associated with the control input. As such, the estimation of the final $q$ rows of $A_c$ can be neglected. Let $\bar{A}$ denote the first $m+b$ rows of $A_c$. Partitioning $\bar{A} = \begin{bmatrix} A & B \end{bmatrix}$ with $A \in \mathbb{R}^{(m+b) \times (m+b)}$ and $B \in \mathbb{R}^{(m+b) \times q}$, a linear, finite dimensional approximation of the Koopman operator can be obtained using a series of snapshot triples
\begin{equation} \label{triplefn}
w_i = (h_i, h_i^+,u_i),
\end{equation}
for $i = 1, \dots, d$. Recall that $h_i$ and $h_i^+$ were defined below Equation \eqref{hvec}. Once again, defining $H$ and $H^+$ as in \eqref{caph} and letting $\Upsilon = \begin{bmatrix} u_1 \dots u_d \end{bmatrix}$, an estimate for $\bar{A}$ can be obtained according to
\begin{equation} \label{dmdcfit}
\bar{A} = \begin{bmatrix} A & B \end{bmatrix} = H^+ \begin{bmatrix} H \\ \Upsilon \end{bmatrix}^ \dagger,
\end{equation}
ultimately yielding the state space representation
\begin{equation} \label{statecontrol}
h_i^+ = A h_i + B u_i.
\end{equation}
Using the above equation, the evolution of the observables can be recovered from the first $m$ entries of $h(x)$.
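A sketch of the estimate in Equation \eqref{dmdcfit} is given below; as elsewhere, the Python/NumPy realization and the variable names are assumptions made for illustration rather than the implementation used to produce the results reported here.
\begin{verbatim}
# Sketch of Equation (dmdcfit): Abar = [A B] = H^+ [H; Upsilon]^dagger, giving
# the controlled model h_i^+ = A h_i + B u_i. H, Hp, Ups hold h_i, h_i^+, u_i
# as columns.
import numpy as np

def koopman_control_fit(H, Hp, Ups):
    mb = H.shape[0]                      # dimension of the lifted observable
    Abar = Hp @ np.linalg.pinv(np.vstack([H, Ups]))
    return Abar[:, :mb], Abar[:, mb:]    # A, B
\end{verbatim}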
\section{Nonlinear Approximations of the Koopman Operator} \label{koopnonlin}
\subsection{Nonlinear Predictors For Autonomous Systems} \label{autsys}
The estimation strategies summarized in Sections \ref{koopest} and \ref{koopcontrol} yield linear models, for instance, of the form \eqref{linmod} and \eqref{statecontrol}. The strategy detailed below allows for additional nonlinear terms in the prediction of the dynamics. To begin, consider an unperturbed, discrete time dynamical system of the form \eqref{vecfield} with observables $g(x) \in \mathbb{R}^m$. Leveraging the delayed embedding approaches considered in \cite{brun17} and \cite{arba17}, one can define a lifted state
\begin{equation} \label{liftstate}
\gamma_i = \begin{bmatrix} h(x_i) \\ h(x_{i-1}) \\ \vdots \\ h(x_{i-z}) \end{bmatrix},
\end{equation}
where $z \in \mathbb{N}$ determines the length of the delayed embedding and $h(x)$ was defined in Equation \eqref{hvec}. Here, $\gamma_i \in \mathbb{R}^{M}$ with $M = (z+1)(m+b)$. Next, a secondary lifting is defined
\begin{equation}
\sigma_i = \begin{bmatrix} \gamma_i \\ f_n(\gamma_i) \end{bmatrix},
\end{equation}
where $f_n(\gamma_i) \in \mathbb{R}^L$ is an additional, generally nonlinear function of the lifted state $\gamma_i$. The term $f_n$ represents an additional user specified lifting of the data. For example, these terms can be comprised of polynomials, radial basis functions, and Fourier modes \cite{will15}. \added{Letting $\sigma_i$ and $\sigma_i^+$ be the lifted coordinates on successive iterations}, a direct implementation of the EDMD algorithm detailed in Section \ref{koopest} would seek a matrix $A$ that solves
\begin{equation} \label{dmdminimization}
\min_A \left[ \sum_{i=1}^d || \sigma_i^+ - A \sigma_i ||_F \right],
\end{equation}
for a collection of data $(\sigma_i,\sigma_i^+)$ for $i = 1,\dots,d$ where $||\cdot||_F$ denotes the Frobenius norm. Alternatively, one can instead neglect the prediction of the final $L$ states because they are direct functions of $\gamma_i$. In this case, defining the matrix $\hat{A}$ to be the first $M$ rows of $A$ and letting $\hat{A} = \begin{bmatrix} A_n & C_n \end{bmatrix}$ where $A_n \in \mathbb{R}^{M \times M}$ and $C_n \in \mathbb{R}^{M \times L}$, the minimization problem becomes
\begin{equation} \label{optproblem}
\min_{A_n,C_n} \left[ \sum_{i=1}^d || \gamma_i^+ - A_n \gamma_i - C_n f_n(\gamma_i) ||_F \right].
\end{equation}
This minimization problem can be solved by computing
\begin{equation} \label{minsol}
\hat{A} = \begin{bmatrix} A_n & C_n \end{bmatrix} = \Gamma^+ \begin{bmatrix} \Gamma \\ F_n \end{bmatrix}^\dagger,
\end{equation}
where $\Gamma \equiv \begin{bmatrix} \gamma_1 \dots \gamma_d \end{bmatrix}$, $\Gamma^+ \equiv \begin{bmatrix} \gamma_1^+ \dots \gamma_d^+ \end{bmatrix}$, and $F_n = \begin{bmatrix} f_n(\gamma_1) \dots f_n(\gamma_d) \end{bmatrix}$. The resulting model takes the form
\begin{equation} \label{nonlinpredict}
\gamma_i^+ = A_n \gamma_i + C_n f_n(\gamma_i).
\end{equation}
Lower rank approximations of $A_n$ and $C_n$ may be desirable in order to avoid overfitting to the measured data. In this instance, one can consider the singular value decomposition
\begin{equation} \label{svdmtx}
\begin{bmatrix} \Gamma \\ F_n \end{bmatrix} = U \Sigma V^T,
\end{equation}
where $U \in \mathbb{R}^{(M+L) \times (M+L)}$, $\Sigma \in \mathbb{R}^{(M+L) \times d}$, and $V \in \mathbb{R}^{d \times d}$, and $^T$ denotes the matrix transpose. Note that $U$ and $V$ are real because $\Gamma$ and $F_n$ are real. A rank $r$ approximation of \eqref{svdmtx} can be obtained by letting $\tilde{U}$ and $\tilde{V}$ represent the first $r$ columns of $U$ and $V$, respectively, and letting $\tilde{\Sigma}$ be a square matrix containing the first $r$ singular values from $\Sigma$ so that
\begin{equation}
\begin{bmatrix} \Gamma \\ F_n \end{bmatrix} \approx \tilde{U} \tilde{\Sigma} \tilde{V}^T.
\end{equation}
With this representation, one can obtain the lower rank approximation of the solution of the optimization problem from \eqref{optproblem}
\begin{equation}
\begin{bmatrix} A_n & C_n \end{bmatrix} \approx \Gamma^+ \tilde{V} \tilde{\Sigma}^{-1} \tilde{U}^T,
\end{equation}
where $^{-1}$ denotes the matrix inverse.
In contrast to the standard EDMD algorithm, the predictor \eqref{nonlinpredict} is nonlinear. Nonetheless, as illustrated in the examples presented in Section \ref{exampsec}, the added nonlinearity can accommodate behaviors that linear predictors cannot.
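The identification step of Equations \eqref{minsol}--\eqref{nonlinpredict} amounts to a single least-squares solve with a truncated pseudoinverse. The sketch below is a minimal Python rendering under the assumption that the delay-embedded snapshots and the user-specified lifting $f_n$ are already available; the function and variable names are illustrative.
\begin{verbatim}
# Sketch of Equations (minsol)-(nonlinpredict): estimate [A_n C_n] from the
# columns of Gamma, Gamma^+ and the lifting f_n, with a rank-r truncation.
import numpy as np

def fit_nonlinear_predictor(G, Gp, f_n, r):
    Fn = np.column_stack([f_n(G[:, i]) for i in range(G.shape[1])])
    Z = np.vstack([G, Fn])                       # [Gamma; F_n]
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]        # rank-r truncation
    Ahat = Gp @ Vt.T @ np.diag(1.0 / s) @ U.T    # [A_n  C_n]
    M = G.shape[0]
    return Ahat[:, :M], Ahat[:, M:]

def predictor_step(A_n, C_n, f_n, gamma):
    # one step of the nonlinear predictor gamma^+ = A_n gamma + C_n f_n(gamma)
    return A_n @ gamma + C_n @ f_n(gamma)
\end{verbatim}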
\subsection{Reduced Order Representations Using Nonlinear Predictors} \label{redsec}
The model identification strategy proposed in Section \ref{autsys} incorporates a lifting of the observables in conjunction with a delayed embedding of the lifted coordinates. As such, the resulting nonlinear model may be high dimensional making analysis and control difficult. Because the proposed strategy yields a nonlinear predictor for the dynamics of the observables of \eqref{vecfield} (as opposed to a linear predictor obtained from the EDMD algorithm), it is generally useful to identify a reduced order representation of the dynamics. This task can be accomplished by applying proper orthogonal decomposition (POD) \cite{holm96}, \cite{rowl17} to $\Gamma$ to identify a representative set of modes from the data. Here, POD modes are found according to the eigenvectors of $\Gamma \Gamma^T$ and sorted according to the magnitude of the associated eigenvalues. Keeping the first $\rho$ POD modes and truncating the rest (i.e.,~associated with the smallest eigenvalues) yields an orthogonal basis of POD modes $\Phi = \begin{bmatrix} \mu_1 & \dots &\mu_\rho \end{bmatrix} \in \mathbb{R}^{M \times \rho}$ for which
\begin{equation} \label{gammaapprox}
\gamma_i \approx \sum_{k = 1}^\rho \mu_k \omega_{k,i},
\end{equation}
where $\omega_{k,i}$ is a coefficient that can be obtained according to $\omega_{k,i} = \mu_k^T \gamma_i$. Substituting \eqref{gammaapprox} into \eqref{nonlinpredict} yields
\begin{equation}
\sum_{k = 1}^\rho \mu_k \omega^+_{k,i} \approx A_n \sum_{k = 1}^\rho \mu_k \omega_{k,i} + C_n f_ n \left( \sum_{k = 1}^\rho \mu_k \omega_{k,i} \right).
\end{equation}
Multiplying the above equation on the left by $\Phi^T$ and rearranging (noting that the POD modes are orthogonal) yields
\begin{equation} \label{lowdim}
\Omega^+_i = \Phi^T A_n \Phi \Omega_i + \Phi^T C_n f_n(\Phi \Omega_i),
\end{equation}
where $\Omega_i = \begin{bmatrix} \omega_{1,i} & \dots & \omega_{\rho,i} \end{bmatrix}^T$ and $\Omega_i^+ = \begin{bmatrix} \omega_{1,i}^+ & \dots & \omega_{\rho,i}^+ \end{bmatrix}^T$. Equation \eqref{lowdim} provides an order $\rho$ approximation for the dynamics of the nonlinear system given by Equation \eqref{nonlinpredict}. Conversion from the reduced order basis $\Omega_i$ back to the lifted state $\gamma_i$ can be accomplished using Equation \eqref{gammaapprox}.
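A compact sketch of the projection in Equation \eqref{lowdim} is given below; it assumes that $A_n$, $C_n$, and $f_n$ have already been identified and that the data matrix $\Gamma$ is available, with the POD modes computed from the left singular vectors of $\Gamma$ (equivalently, the eigenvectors of $\Gamma\Gamma^T$).
\begin{verbatim}
# Sketch of the reduced-order model in Equation (lowdim).
import numpy as np

def pod_basis(G, rho):
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    return U[:, :rho]                           # Phi: leading rho POD modes

def reduced_step(Phi, A_n, C_n, f_n, Omega):
    gamma = Phi @ Omega                         # lift back to the full state
    return Phi.T @ (A_n @ gamma) + Phi.T @ (C_n @ f_n(gamma))
\end{verbatim}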
\subsection{Nonlinear Predictors For Controlled Systems} \label{nlincont}
Control input can readily be incorporated into the proposed model identification strategy. To do so, considering a general system of the form \eqref{nlinvec}, one can define the lifted state as
\begin{equation} \label{gammacont}
\gamma_{c,i} = \begin{bmatrix} h(x_i) \\ \vdots \\ h(x_{i-z}) \\ u_{i-1} \\ \vdots \\ u_{i-z} \end{bmatrix}.
\end{equation}
Here, $\gamma_{c,i} \in \mathbb{R}^{M_c}$ where $M_c = (z+1)(m+b) + zq$. Compared with the lifted state defined in Equation \eqref{liftstate}, $\gamma_{c,i}$ also contains an embedding of the preceding $z$ control inputs as suggested in \cite{arba18b}. This lifted state is then augmented with additional states to yield
\begin{equation} \label{augstate}
\sigma_{c,i} = \begin{bmatrix} \gamma_{c,i} \\ u_i \\ f_{c,n}(\gamma_{c,i}) \\ \end{bmatrix},
\end{equation}
where $f_{c,n}(\gamma_{c,i}) \in \mathbb{R}^L$ is a nonlinear function of $\gamma_{c,i}$. \added{Let $\sigma_{c,i}$ and $\sigma_{c,i}^+$ be the lifted coordinates on successive iterations}. Mirroring the argument from Section \ref{autsys} that starts with Equation \eqref{dmdminimization} and ends with Equation \eqref{nonlinpredict}, for a collection of snapshot pairs $(\sigma_{c,i},\sigma_{c,i}^+)$ for $i = 1,\dots,d$, a direct implementation of the EDMD algorithm would seek a matrix $A$ that solves the minimization problem $\min_A \left[ \sum_{i=1}^d || \sigma_{c,i}^+ - A \sigma_{c,i} ||_F \right]$. However, prediction of the final $L+q$ states can be omitted because prediction of the control sequence is not of interest and $f_{c,n}(\gamma_{c,i})$ is an explicit function of $\gamma_{c,i}$. Subsequently defining the matrix $\hat{A}$ to be the \added{first $M_c$ rows} of $A$ and letting $\hat{A} = \begin{bmatrix} A_c & B_c & C_c \end{bmatrix}$ the minimization problem becomes
\begin{equation} \label{optcontrol}
\min_{\added{A_c,B_c,C_c}} \left[ \sum_{i=1}^d || \gamma_{c,i}^+ - A_c \gamma_{c,i} - B_c u_i - C_c f_{c,n}(\gamma_{c,i}) ||_F \right],
\end{equation}
which can be solved by computing
\begin{equation} \label{ahatest}
\hat{A} = \begin{bmatrix} A_c & B_c & C_c \end{bmatrix} = \Gamma_c^+ \begin{bmatrix} \Gamma_c \\ U \\ F_{c,n} \end{bmatrix}^\dagger,
\end{equation}
where $\Gamma_c \equiv \begin{bmatrix} \gamma_{c,1} \dots \gamma_{c,d} \end{bmatrix}$, $\Gamma_c^+ \equiv \begin{bmatrix} \gamma_{c,1}^+ \dots \gamma_{c,d}^+ \end{bmatrix}$, $U = \begin{bmatrix} u_1 \dots u_d \end{bmatrix}$, and $F_{c,n} = \begin{bmatrix} f_{c,n}(\gamma_{c,1}) \dots f_{c,n}(\gamma_{c,d}) \end{bmatrix}$. The resulting model takes the form
\begin{equation} \label{contsys}
\gamma_{c,i}^+ = A_c \gamma_{c,i} + B_c u_i + C_c f_{c,n}(\gamma_{c,i}).
\end{equation}
As with the autonomous system of the form \eqref{nonlinpredict}, a lower rank approximation of the matrices $A_c$, $B_c$ and $C_c$ can be obtained using a truncated singular value decomposition. Likewise, a reduced order model similar to Equation \eqref{lowdim} can be obtained by projecting \eqref{contsys} onto a reduced order basis of POD modes obtained from the data contained in $\Gamma_c$.
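The controlled identification step of Equation \eqref{ahatest} has the same structure as the autonomous case; the sketch below (again an illustrative Python rendering with assumed variable names) returns the three blocks of $\hat{A}$.
\begin{verbatim}
# Sketch of Equation (ahatest): [A_c B_c C_c] = Gamma_c^+ [Gamma_c; U; F_cn]^dagger
import numpy as np

def fit_controlled_predictor(Gc, Gcp, Uin, f_cn, r):
    Fcn = np.column_stack([f_cn(Gc[:, i]) for i in range(Gc.shape[1])])
    Z = np.vstack([Gc, Uin, Fcn])                # [Gamma_c; U; F_{c,n}]
    W, s, Vt = np.linalg.svd(Z, full_matrices=False)
    W, s, Vt = W[:, :r], s[:r], Vt[:r, :]        # rank-r truncated pseudoinverse
    Ahat = Gcp @ Vt.T @ np.diag(1.0 / s) @ W.T
    Mc, q = Gc.shape[0], Uin.shape[0]
    return Ahat[:, :Mc], Ahat[:, Mc:Mc + q], Ahat[:, Mc + q:]   # A_c, B_c, C_c
\end{verbatim}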
\section{Examples With Comparisons to Other Koopman-Based Approaches} \label{exampsec}
\subsection{Forced Duffing Equation}
Consider the forced Duffing equation
\begin{align} \label{duffingeq}
\dot{x}_1 &= x_2, \nonumber \\
\dot{x}_2 &= u(t) - \delta x_2 - \alpha x_1 - \beta x_1^3,
\end{align}
with observable
\begin{equation}
g(x) = x_1,
\end{equation}
taking $\alpha = 1$, $\beta = -1$, and $\delta = 0.5$. Here $u(t)$ represents a general control input instead of the usual periodic driving force. When $u(t) = 0$, Equation \eqref{duffingeq} has one unstable equilibrium at $x_1 = x_2 = 0$ and two stable equilibria at $x_2 = 0$ and $x_1 = \pm 1$. Data is obtained for model identification taking $u(t)$ as follows:~random numbers between -1.5 and 1.5 are chosen from a uniform distribution with the value held constant over a 5 time unit interval. The resulting curve is smoothed with a spline interpolation and used as the input in Equation \eqref{duffingeq}. Simulation is performed for $t \in [0, 1000]$ and the resulting output is used to implement the model identification procedure detailed in Section \ref{nlincont} taking snapshots at time intervals $\Delta t = 0.1$, i.e.,~so that $x_i = \begin{bmatrix} x_1( \Delta t (i-1)) & x_2( \Delta t (i-1)) \end{bmatrix}^T$. Panel A of Figure \ref{duffingresults} shows the state of the system over the first 100 time units of simulation. Panels B and C show the corresponding observable and input, respectively, used for model identification. To implement the model identification strategy, a delay embedding of size $z = 1$ is used taking $h(x_i) = g(x_i)$ so that $\gamma_{c,i} \in \mathbb{R}^3$ as defined in Equation \eqref{gammacont}. The nonlinear lifting $f_{c,n}(\gamma_{c,i}) \in \mathbb{R}^{12}$ is comprised of polynomial terms in $h(x_i)$ and $h(x_{i-1})$ up to degree 4 (e.g.,~$h(x_i)^2, h(x_i)^2 h(x_{i-1}), h(x_i)h(x_{i-1})^3$). The matrix $\hat{A}$ is estimated according to Equation \eqref{ahatest} which is comprised of the matrices $A_c \in \mathbb{R}^{3\times 3}$, $B_c \in \mathbb{R}^{3\times 1}$, and $C_c \in \mathbb{R}^{3\times 12}$ from Equation \eqref{contsys}.
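The training-data generation described above can be reproduced along the following lines; the Python/SciPy sketch below is an illustration of the procedure (the integrator tolerance, the cubic-spline smoothing, and the zero initial condition are assumptions not specified in the text).
\begin{verbatim}
# Sketch of the training-data generation for the forced Duffing equation.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

alpha, beta, delta = 1.0, -1.0, 0.5
t_final, dt = 1000.0, 0.1

# piecewise-constant random input (held 5 time units), then spline-smoothed
knots = np.arange(0.0, t_final + 5.0, 5.0)
u_spline = CubicSpline(knots, np.random.uniform(-1.5, 1.5, knots.size))

def duffing(t, x):
    return [x[1], u_spline(t) - delta*x[1] - alpha*x[0] - beta*x[0]**3]

t_eval = np.linspace(0.0, t_final, int(t_final/dt) + 1)
sol = solve_ivp(duffing, (0.0, t_final), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)
g = sol.y[0]                  # observable g(x) = x_1 sampled every dt
u_samples = u_spline(t_eval)  # corresponding input samples
\end{verbatim}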
The inferred model is used to provide basin of attraction estimates for stable fixed points of the Duffing equation for different constant values of $u$. For the inferred model, initial conditions are taken to be $\gamma_{c,1} = \begin{bmatrix} x_1 & x_1 - \Delta t x_2 & 0 \end{bmatrix}^T$ and the associated basin of attraction is assigned according to the resulting steady state value of $x_1$, i.e.,~$x_{1,ss} = \lim_{j \rightarrow \infty} \left( e_1^T \gamma_{c,j} \right)$ where $e_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$. Results are shown in panels D-I of Figure \ref{duffingresults}; basin of attraction estimates between the true model \eqref{duffingeq} and the inferred model of the form \eqref{contsys} are nearly identical.
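The basin-of-attraction classification just described reduces to iterating the inferred map from the stated initial lifted state under a constant input. A minimal sketch follows; the iteration count and the use of the long-time value of $x_1$ as the basin label are illustrative assumptions.
\begin{verbatim}
# Sketch of the basin classification: iterate the inferred model from
# gamma_1 = [x1, x1 - dt*x2, 0]^T with constant input u and read off x_{1,ss}.
import numpy as np

def steady_state_x1(x1, x2, u, A_c, B_c, C_c, f_cn, dt=0.1, n_steps=5000):
    gamma = np.array([x1, x1 - dt*x2, 0.0])
    for _ in range(n_steps):
        gamma = A_c @ gamma + B_c.ravel()*u + C_c @ f_cn(gamma)
    return gamma[0]           # its value (e.g., its sign) labels the basin
\end{verbatim}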
For this example, comparison with linear estimators such as EDMD as described in Section \ref{koopcontrol} is not considered. It is well known that linear models cannot be used to accurately represent the infinite time behavior of systems with multiple hyperbolic fixed points (as is the case in Equation \eqref{duffingeq}) complicating their use for providing basin of attraction estimates. EDMD was used in \cite{will15} to obtain basin of attraction estimates of the unforced Duffing equation by considering the resulting approximation of the nontrivial Koopman eigenmode associated with nondecaying solutions. This method of analysis, however, would require data from trajectories with initial conditions uniformly distributed over a domain of interest and with a constant value of $u$. By contrast, the approximated basin of attraction estimates from Figure \ref{duffingresults} are obtained from snapshot triples using arbitrary inputs and also provide accurate basin of attraction estimates for arbitrary values of $u$.
\begin{figure}
\caption{Data-driven model identification and subsequent basin of attraction estimates for the forced Duffing Equation \eqref{duffingeq}.}
\label{duffingresults}
\end{figure}
As a final note regarding this example using the forced Duffing equation, the model equations \eqref{duffingeq} can be recast in discrete time by first letting $x_2(t) \approx (x_1(t) - x_1(t - \Delta t))/\Delta t$. Subsequently taking a forward Euler time step yields
\begin{align} \label{fealt}
x_1(t + \Delta t) &= a_1 x_1(t) + a_2x_1(t - \Delta t), \nonumber \\
x_2(t + \Delta t) &= a_3 u(t) +a_4 x_1(t) + a_5 x_1^3(t) + a_6x_1(t - \Delta t),
\end{align}
where $a_1 = 1, a_2 = -1, a_3 = \Delta t, a_4 = - (\delta + \alpha \Delta t), a_5 = -\beta \Delta t,$ and $a_6 = \delta$. When using the delay embedding and polynomial lifting strategy described above, the augmented state \eqref{augstate} contains all of the polynomial terms that comprise \eqref{fealt}. As such, for $\Delta t$ small enough, this example could readily be handled by a sparse nonlinear model identification algorithm \cite{brun16c}, \cite{brun16b} that selects appropriate functions from a library and identifies the associated coefficients. The examples to follow, however, do not admit simple, sparse representations for the dynamics of the observables.
\FloatBarrier
\subsection{Conductance-Based Neural Model}
Consider a conductance-based Wang-Buzsaki model neuron \cite{wang96} with an additional adaptation current \cite{erme98}
\begin{align} \label{wbmodel}
C \dot{V} &= -g_{\rm Na} m_\infty^3 p (V -E_{\rm Na}) - g_{\rm K} n^4(V -E_K) - g_{\rm L}(V-E_{\rm L}) - i_w + u(t) + i_b, \nonumber \\
\dot{p} &= \gamma \left[ \alpha_p(V)(1-p) - \beta_p(V)p \right], \nonumber \\
\dot{n} &= \gamma \left[ \alpha_n(V)(1-n) - \beta_n(V)n \right], \nonumber \\
\dot{w} &= a(1.5/(1+\exp((b-V)/k))-w).
\end{align}
Here, $V$ represents the transmembrane voltage with $p$ and $n$ representing gating variables. The adaptation current $i_w = g_w w (V-E_K)$ is mediated by the variable $w$ and $i_b = 10 \mu {\rm A}/{\rm cm}^2$ is a constant baseline current. The input $u(t)$ represents a transmembrane current. The membrane capacitance, $C$, is taken to be $1 \mu {\rm F}/{\rm cm}^2$. Auxiliary equations governing ionic currents are:
\begin{align*}
m_\infty &= \alpha_m(V)/(\alpha_m(V) + \beta_m(V)), \\
\beta_n(V) &= 0.125\exp(-(V+44)/80), \\
\alpha_n(V) &= -0.01(V+34)/(\exp(-0.1(V+34))-1), \\
\beta_p(V) &= 1/(\exp(-0.1(V+28))+1), \\
\alpha_p(V) &= 0.07\exp(-(V+58)/20), \\
\beta_m(V) &= 4\exp(-(V+60)/18), \\
\alpha_m(V) &= -0.1(V+35)/(\exp(-0.1(V+35))-1).
\end{align*}
Reversal potentials and conductances are $E_{\rm Na} = 55 {\rm m}V, E_{\rm K} = -90{\rm m}V, E_{\rm L} = -65 {\rm m}V, g_{\rm Na}= 35 {\rm mS}/{\rm cm}^2, g_{\rm K} = 9 {\rm mS}/{\rm cm}^2, g_{\rm L} = 0.1 {\rm mS}/{\rm cm}^2, g_w = 2 {\rm mS}/{\rm cm}^2$. Auxiliary parameters are $a = 0.02\;{\rm ms}^{-1}, b = -5 \; {\rm m}V, k = 0.5 {\rm m}V$, and $\gamma = 5$. In the absence of input, the neural model \eqref{wbmodel} is in a tonically firing regime with a stable limit cycle having period 6.53 ms. The input $u(t)$ serves to modulate the firing rate of the action potentials.
For the conductance-based neural model from Equation \eqref{wbmodel}, the state is $x = \begin{bmatrix} V & p & n & w \end{bmatrix}^T$. The observable for the spiking neural model is taken to be
\begin{equation}
g(x) = \begin{bmatrix} V \\ p \end{bmatrix},
\end{equation}
i.e.,~it is assumed that the variables $V$ and $p$ can be measured directly but that the variables $n$ and $w$ are inaccessible. The model identification strategy from Section \ref{nlincont} is implemented using 300 ms of simulated data taking a time step of $\Delta t = 0.025$ ms with an applied input $u(t) = 6 \sin (2 \pi t/200 + 0.0003 t^2)$. A delayed embedding of size $z = 10$ is used taking $h(x_i) = g(x_i)$ so that $\gamma_{c,i} \in \mathbb{R}^{32}$ as defined in Equation \eqref{gammacont}. The nonlinear lifting function is $f_{c,n}(\gamma_{c,i}) = f_1(f_2(h(x_i)))$. Here $f_2(h(x_i)) \in \mathbb{R}^{10}$ with the $j^{\rm th}$ term given by $||g(x_i) - q_j||_2$, where $q_j \in \mathbb{R}^2$ is the center of each radial basis function with the first element (associated with the transmembrane voltage) chosen randomly from a uniform distribution taking values between -300 and 200 and the second element (associated with the gating variable) chosen randomly from a uniform distribution taking values between 0 and 1, and $||\cdot||_2$ denotes the 2-norm. The function $f_1(f_2(h(x_i))) \in \mathbb{R}^{990}$ provides a second nonlinear lifting and is obtained by taking polynomial combinations of the elements of $f_2(h(x_i))$ up to degree 4. The matrix $\hat{A}$ is estimated according to Equation \eqref{ahatest} using a truncated singular value decomposition of rank 80 to approximate the pseudoinverse. This information is used to determine $A_c \in \mathbb{R}^{32 \times 32}$, $B_c \in \mathbb{R}^{32\times 1}$, and $C_c \in \mathbb{R}^{32\times 990}$ from Equation \eqref{contsys}. As described in Section \ref{redsec}, a 20 dimensional model is obtained by projecting the inferred model equations onto a POD basis obtained from the eigenvectors of $\Gamma_c \Gamma_c^T$.
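For concreteness, the lifting used in this example can be assembled as in the sketch below. The 990-term polynomial stage is consistent with monomials of the ten radial-basis-function features of total degree two through four; this degree range, the random seed, and the helper names are inferred or assumed rather than stated explicitly in the text.
\begin{verbatim}
# Sketch of the lifting f_{c,n}: distances to 10 random RBF centers (f_2),
# followed by their degree-2 to degree-4 monomials (f_1, 990 terms).
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
Q = np.column_stack([rng.uniform(-300, 200, 10),   # voltage component of q_j
                     rng.uniform(0, 1, 10)])       # gating component of q_j

def f_cn(g_xi):                                    # g_xi = [V, p]
    f2 = np.linalg.norm(Q - g_xi[None, :], axis=1)
    feats = []
    for deg in (2, 3, 4):
        for idx in combinations_with_replacement(range(f2.size), deg):
            feats.append(np.prod(f2[list(idx)]))
    return np.array(feats)                         # 55 + 220 + 715 = 990 terms
\end{verbatim}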
Simulations of the inferred model are compared to simulations of the true model \eqref{wbmodel}. Comparisons are also given when using the Koopman model predictive control strategy from \cite{kord18} which provides a least squares estimate for the update rule $a_i^+ = A a_i + B u_i$ where the lifted state space in this example is taken to be $a_i = \begin{bmatrix} \gamma_{c,i}^T & f_{c,n}(\gamma_{c,i})^T \end{bmatrix}^T$. Results are shown in Figure \ref{neuralresults}. Panel A shows the effect of a 10 ms duration positive pulse input shown in panel B. Panel C shows the effect of a comparable negative pulse input from panel D. In each case the proposed method (with the nonlinear predictor) provides a good approximation for the true model output while the linear predictor does not. The model obtained from the nonlinear predictor yields stable oscillations in response to constant inputs -- such stable oscillations are not possible to obtain when considering linear predictors. Panel E shows the predicted natural frequency for different baseline currents illustrating good agreement with the true model. These results are particularly noteworthy considering that the model was trained using only oscillatory inputs and that this model was inferred without direct access to the auxiliary variables $n$ and $w$.
\begin{figure}
\caption{Comparisons between the full order model, linear predictor, and proposed nonlinear predictor in response to various inputs. Panel A (resp.,~C) shows the response to the pulse input in panel B (resp.,~D). The proposed nonlinear predictor accurately reflects the increase (resp.,~decrease) in firing rate in response to positive (resp.,~negative) inputs as well as the subsequent rebound caused by the adaptation current. The linear predictor obtained using the strategy proposed in \cite{kord18} does not.}
\label{neuralresults}
\end{figure}
\subsection{Coupled Population of Neural Oscillators} \label{neurpopsec}
In the previous section, the dynamics of a single conductance-based neuron was considered. Here, the behavior of a coupled population of identical, noisy thalamic neural oscillators taken from \cite{rubi04} will be considered:
\begin{align} \label{neurmod}
C \dot{V}_j &= f_V(V_j,p_j,r_j) + i_b + u(t) + \sqrt{2D} \eta_j + \frac{1}{N} \sum_{i = 1}^N \sigma_c (V_i - V_j), \nonumber \\
\dot{p}_j &= f_p(V_j,p_j), \nonumber \\
\dot{r}_j &= f_r(V_j,r_j), \nonumber \\
\quad j &= 1,\dots,N.
\end{align}
Above, $V_j$ is the transmembrane voltage of neuron $j$, $p_j$ and $r_j$ are associated gating variables, $N$ denotes the total number of neurons, $u(t)$ is a transmembrane current stimulus common to each neuron, $\sqrt{2 D} \eta_j$ is a white noise process with intensity $D = 1$ associated with neuron $j$, and $C = 1 \mu {\rm F}/{\rm cm}^2$ is a membrane capacitance. For simplicity, neurons are coupled using all-to-all electrotonic coupling \cite{john95} with strength $\sigma_c$; other types of neural coupling could also be considered. Each of the remaining functions from \eqref{neurmod} is described in \cite{rubi04}. The baseline current $i_b = 5 \mu {\rm A}/{\rm cm}^2$ is chosen so that in the absence of input, coupling, and noise, each oscillator is in a tonically firing regime with a period of $T=8.39$ ms. In the limit that $u$, $D$, and $\sigma_c$ are all small in magnitude, Equation \eqref{neurmod} can be well-approximated in a phase reduced form \cite{winf01}, \cite{erme10}, \cite{izhi07}
\begin{align} \label{phasepop}
\dot{\theta}_j &= \omega + Z(\theta_j)\bigg( u(t) + \sqrt{2D} \eta_j + \frac{1}{N} \sum_{i = 1}^N \sigma_c (V(\theta_i) - V(\theta_j)) \bigg), \nonumber \\
j &= 1,\dots, N,
\end{align}
where $\theta_j \in [0,2\pi)$ is the phase of oscillator $j$, $\omega = 2\pi/T$, and $Z(\theta)$ is the phase response curve that characterizes the effect of infinitesimal inputs on the phase. In the limit as $N \rightarrow \infty$, Equation \eqref{phasepop} can be considered according to a probability density $\rho(\theta,t)$ governed by the Fokker-Planck Equation \cite{gard04}
\begin{equation} \label{fpreduc}
\frac{\partial \rho}{\partial t} = -\frac{\partial}{\partial \theta} [( \omega + Z(\theta)(u(t) + \sigma_c (\overline{V}-V(\theta)))) \rho(\theta,t) ] + \frac{\partial^2}{\partial \theta^2} [ D Z^2(\theta) \rho(\theta,t)],
\end{equation}
with periodic boundary conditions. Here, $\overline{V} = \int_0^{2\pi} V(\theta) \rho(\theta) d\theta$ is the average voltage. In previous work \cite{toth22}, \cite{wils20stab}, the above equation was analyzed in the context of developing control strategies to desynchronize a pathologically synchronized population of neural oscillators. Here, Equation \eqref{fpreduc} will be used in conjunction with the proposed data-driven model identification strategy. To obtain simulated data for this purpose, the functions $Z(\theta)$ and $V(\theta)$ are computed numerically for the individual neurons from Equation \eqref{neurmod} and Equation \eqref{fpreduc} is subsequently simulated using finite difference approximations for the partial derivatives. For this model, the state is $\rho_i = \rho(\theta, \Delta t (i-1))$. The model identification strategy from Section \ref{nlincont} is implemented using 1000 ms of simulated data taking the time step to be $\Delta t = 0.3$ ms. The input $u(t)$ for training is taken as follows:~random numbers between -1 and 1 are chosen from a uniform distribution with the value held constant over a 2 ms interval; the resulting \added{signal} is smoothed with a spline interpolation and used as the input.
Two separate observables are considered for the model \eqref{fpreduc}. For the first,
\begin{equation} \label{singleobservable}
g(\rho_i) = \rho(0,\Delta t(i-1)) \in \mathbb{R}^1.
\end{equation}
To implement the model identification strategy, a delay embedding of size $z = 30$ is used. No preliminary lifting is considered so that $h(\rho_i) = g(\rho_i)$ as defined in Equation \eqref{hvec}. As such, $\gamma_{c,i} \in \mathbb{R}^{61}$ as defined in Equation \eqref{gammacont}. The nonlinear lifting function $f_{c,n}(\gamma_{c,i}) \in \mathbb{R}^{5952}$ is comprised of all possible combinations of polynomial terms taken from $h(\rho_i), h(\rho_{i-1}), \dots, h(\rho_{i-z})$ up to degree 3. The matrix $\hat{A}$ from Equation \eqref{ahatest} is estimated using a truncated singular value decomposition of rank 40 to approximate the pseudoinverse. This yields approximations of $A_c \in \mathbb{R}^{61 \times 61}$, $B_c \in \mathbb{R}^{61 \times 1}$, and $C_c \in \mathbb{R}^{61 \times 5952}$ from Equation \eqref{contsys}. As described in Section \ref{redsec}, the resulting nonlinear equation is projected onto a 20 element POD basis obtained from the eigenvectors of $\Gamma_c \Gamma_c^T$. Comparisons are also provided using the Koopman model predictive control strategy from \cite{kord18} which provides a least squares estimate for the update rule $a_i^+ = A a_i + B u_i$ where the lifted state space in this example is taken to be $a_i = \begin{bmatrix} \gamma_{c,i}^T & f_{c,n}(\gamma_{c,i})^T \end{bmatrix}^T$.
Figure \ref{neuralpopulation} shows simulations using the true model and inferred models in response to pulse inputs. In panel A, a pulse input of magnitude $0.01 \; \mu {\rm A}/{\rm cm}^2$ lasting 9 ms in duration is applied starting at $t = 1.5$ ms. Output from the proposed nonlinear predictor is nearly indistinguishable from the output from simulations of the true model \eqref{fpreduc}. Conversely, the output from the model obtained using the linear predictor is accurate for only the first three milliseconds and ultimately develops spurious high frequency oscillations that render the results inaccurate. The differences between the linear model and the nonlinear inferred models become more pronounced when using larger inputs; panels B and C show the influence of a magnitude $1 \; \mu {\rm A}/{\rm cm}^2$ pulse with the same timing as the one considered in panel A. In this case, the nonlinear model still performs well, with outputs that are nearly indistinguishable from the true model outputs. In response to the $1 \; \mu {\rm A}/{\rm cm}^2$ pulse, the spurious oscillations in the linear model shown in panel C become substantially more pronounced.
The nonlinear model has stable oscillations when taking $u(t) = 0$ allowing for the further reduction to a phase model of the form \cite{winf01}, \cite{erme10}, \cite{izhi07}
\begin{equation} \label{phasered}
\dot{\Theta} = \Omega + Z(\Theta) u(t).
\end{equation}
Here $\Theta \in [0,2 \pi)$ denotes the phase of the population oscillation, $\Omega$ is the associated natural frequency, and $Z(\Theta)$ is the phase response curve to infinitesimal inputs. Note that capital Greek letters are used to emphasize that the phase and natural frequency are associated with the population oscillation (as opposed to the phase and natural frequencies of the individual oscillators as considered in Equation \eqref{phasepop}). Here, $\Theta = 0$ will be defined to occur the moment that $\rho(0,t)$ crosses 0.16 with a positive slope. $Z(\Theta)$ can be estimated according to the direct method \cite{izhi07}, \cite{neto12}. This strategy is implemented by applying a pulse input $u(t) = M = 1.5 \mu {\rm A}/{\rm cm}^2$ for a duration $L = 1.5$ milliseconds at a known phase $\Theta_0$ and subsequently inferring the resulting phase shift $\Delta \Theta$. This process is then repeated for different values of $\Theta_0$ to provide pointwise estimates of $Z(\Theta_0) \approx \Delta \Theta / M L$ for both the nonlinear inferred model and the true model \eqref{fpreduc}. Results are shown in panel D of Figure \ref{neuralpopulation} illustrating that the phase response of the true model to inputs is nearly identical to the phase response of the nonlinear inferred model. Note that it is not possible to provide a similar estimate for the linear inferred model because it does not have a stable periodic orbit.
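A sketch of this direct-method estimate is given below. The \texttt{simulate} and \texttt{asymptotic\_phase} helpers are placeholders for the inferred (or true) model and for the phase-reading convention defined above ($\Theta=0$ when $\rho(0,t)$ crosses 0.16 with positive slope); they are not part of the original text.
\begin{verbatim}
# Sketch of the direct method: apply a brief pulse at phase Theta_0, measure
# the asymptotic phase shift, and normalize by the pulse "area" M*L.
import numpy as np

def prc_point(theta0, simulate, asymptotic_phase, M=1.5, L=1.5):
    pulsed = simulate(start_phase=theta0, pulse_amplitude=M, pulse_duration=L)
    free = simulate(start_phase=theta0, pulse_amplitude=0.0, pulse_duration=L)
    dtheta = asymptotic_phase(pulsed) - asymptotic_phase(free)
    dtheta = (dtheta + np.pi) % (2*np.pi) - np.pi   # wrap to (-pi, pi]
    return dtheta / (M * L)                         # pointwise estimate of Z
\end{verbatim}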
\begin{figure}
\caption{Response to pulse inputs for the true model \eqref{fpreduc}.}
\label{neuralpopulation}
\end{figure}
A second observable is considered for the model \eqref{fpreduc} to illustrate the generality of the proposed model identification strategy. This second observable is taken to be
\begin{equation} \label{multiobs}
g( \rho_i ) = \begin{bmatrix} \rho(0,\Delta t(i-1)) \\ \rho(2 \pi/25,\Delta t(i-1)) \\ \rho(4 \pi / 25,\Delta t(i-1)) \\ \vdots \\ \rho(48 \pi/25,\Delta t(i-1)) \end{bmatrix} \in \mathbb{R}^{25},
\end{equation}
i.e.,~the observable is comprised of 25 measurements of $\rho(\theta,t)$ equally spaced in $\theta$. With this alternative observable, the model identification strategy is implemented using a delay embedding of size $z = 20$. Data for model identification is taken for 1000 ms of simulated data with a time step of $\Delta t = 0.3$ ms. The input $u(t)$ used for training is chosen as follows:~random numbers between -0.25 and 0.25 are chosen from a uniform distribution with the value held constant over a 2 ms interval. The resulting input is smoothed with a spline interpolation and applied for simulations of the true model \eqref{fpreduc}. Once again, no preliminary lifting is considered so that $h(\rho_i) = g(\rho_i)$. Consequently $\gamma_{c,i} \in \mathbb{R}^{545}$. The nonlinear lifting function $f_{c,n}(\gamma_{c,i}) \in \mathbb{R}^{3250}$ is comprised of all possible combinations of polynomial terms taken from $h(\rho_i)$ up to degree 4. The matrix $\hat{A}$ is estimated according to Equation \eqref{ahatest} using a truncated singular value decomposition of rank 40 to approximate the pseudoinverse. This information is used to determine $A_c \in \mathbb{R}^{545 \times 545}$, $B_c \in \mathbb{R}^{545 \times 1}$, and $C_c \in \mathbb{R}^{545 \times 3250}$ from Equation \eqref{contsys}. As described in Section \ref{redsec}, a 25-dimensional model is obtained by projecting the inferred model equations onto a POD basis obtained from the eigenvectors of $\Gamma_c \Gamma_c^T$. Once again, comparisons are provided when using the Koopman model predictive control strategy \cite{kord18} that obtains a least squares estimate for the update rule $a_i^+ = A a_i + B u_i$ where the lifted state space in this example is taken to be $a_i = \begin{bmatrix} \gamma_{c,i}^T & f_{c,n}(\gamma_{c,i})^T \end{bmatrix}^T$.
In addition to accurately predicting the output of the true model in response to input, the inferred nonlinear model accurately characterizes fixed points and periodic orbits as shown in Figure \ref{neuralpopfulldata}. For instance, panel A illustrates an unstable fixed point, $\rho_{\rm fp}(\theta)$, that exists both in the full model \eqref{fpreduc} and the nonlinear inferred model of the form \eqref{contsys} when taking $u(t) = 0$. As indicated in the figure, both the profile of the fixed point solution and the associated unstable, complex-conjugate discrete time eigenvalues (obtained from the linearization about the fixed point) are nearly identical. This unstable fixed point emerges in the true model as a result of a Hopf bifurcation where the coupling strength is the bifurcation parameter. Note that the model obtained when using the observable \eqref{singleobservable} also has a fixed point with $\rho(0,t) = 0.159$ with associated unstable discrete time eigenvalues $\lambda_{1,2} = 0.9740 \pm 0.239 i$. Despite using different data sets, the models inferred from the observables \eqref{singleobservable} and \eqref{multiobs} yield comparable estimates for the location and stability of this fixed point. Of course, for the model that uses the single observable \eqref{singleobservable}, it is not possible to reconstruct the full probability density since this information is unavailable. Initial conditions near this unstable fixed point eventually settle to a stable periodic orbit; the periodic orbits, $\rho_{\rm po}(\theta,t)$, obtained from the nonlinear predictor and the true model, represented by the colormaps in panels B and C, respectively, are nearly indistinguishable. Panel D illustrates the time course of $\rho(0,t)$ for an initial condition near the unstable fixed point when taking $u(t) = 0$. The transition from the unstable fixed point to the stable periodic orbit is well captured by the proposed nonlinear predictor. The model obtained using the Koopman model predictive control strategy \cite{kord18} does not accurately reflect this transition.
\begin{figure}
\caption{Dynamical features of the model obtained using the nonlinear predictor with observables from \eqref{multiobs}.}
\label{neuralpopfulldata}
\end{figure}
\FloatBarrier
\subsection{One Dimensional Burgers' Equation}
The Burgers' equation is often used as a test bed for Koopman-based model identification and analysis strategies \cite{page18}, \cite{arba18b}, \cite{peit19} because it has a convective nonlinearity that is similar to that of the Navier-Stokes equations. Here a 1-dimensional version of the Burgers' equation is considered
\begin{equation} \label{burgeq}
\frac{\partial w}{\partial t} = \frac{1}{{\rm Re}} \frac{\partial^2 w}{\partial x^2} - w \frac{\partial w}{\partial x}.
\end{equation}
Here $w(x,t)$ gives the state on the domain $x \in[0,1]$ and ${\rm Re} = 50$ is a constant that is analogous to the Reynolds number from the Navier-Stokes equations. In this example, Dirichlet boundary conditions $w_L(t)$ and $w_R(t)$ are considered for the boundaries at $x = 0$ and $x = 1$, respectively. These boundary conditions are also taken to be the inputs, i.e.,~$u(t) = \begin{bmatrix} w_L(t) & w_R(t) \end{bmatrix}^T$. For this model, the state is $w_i = w(x, \Delta t(i-1))$. The model identification strategy from Section \ref{nlincont} is implemented taking 2000 time units of simulated data with $\Delta t = 0.1$. The input $u(t)$ used for training is chosen as follows:~for both $w_L(t)$ and $w_R(t)$, random numbers between -0.5 and 0.5 are chosen from a uniform distribution with the value held constant over a 20 time unit interval. These signals are smoothed with a spline interpolation and the resulting inputs are used in training simulations.
The observable for the model \eqref{burgeq} is taken to be
\begin{equation} \label{burgoutput}
g(w_i) = \begin{bmatrix} w(0, \Delta t(i-1)) \\ w(0.05, \Delta t(i-1)) \\ w(0.10, \Delta t(i-1)) \\ \vdots \\ w(0.95, \Delta t(i-1)) \end{bmatrix} \in \mathbb{R}^{20},
\end{equation}
i.e.,~the observable consists of 20 measurements of $w(x,t)$ equally spaced in $x$. Following the definitions given in Sections \ref{koopbackground} and \ref{koopnonlin}, a delay embedding of size $z = 30$ is used. No preliminary lifting is considered so that $h(w_i) = g(w_i)$ as defined in Equation \eqref{hvec}. Here $\gamma_{c,i} \in \mathbb{R}^{680}$ as defined in Equation \eqref{gammacont}. The nonlinear lifting function $f_{c,n}(\gamma_{c,i}) \in \mathbb{R}^{1750}$ consists of all possible combinations of polynomial terms taken from $h(w_i)$ up to degree 3. The matrix $\hat{A}$ from Equation \eqref{ahatest} is estimated using a truncated singular value decomposition of rank 80 to approximate the pseudoinverse. This yields approximations of $A_c \in \mathbb{R}^{680 \times 680}$, $B_c \in \mathbb{R}^{680 \times 2}$, and $C_c \in \mathbb{R}^{680 \times 1750}$ from Equation \eqref{contsys}. As described in Section \ref{redsec}, the resulting nonlinear equation is projected onto a lower dimensional POD basis obtained from the eigenvectors of $\Gamma_c \Gamma_c^T$. Comparisons are also provided using the Koopman model predictive control strategy from \cite{kord18}, which gives an estimate for the update rule $a_i^+ = A a_i + B u_i$ where the lifted state space in this example is taken to be $a_i = \begin{bmatrix} \gamma_{c,i}^T & f_{c,n}(\gamma_{c,i})^T \end{bmatrix}^T$. This least squares fitting is implemented according to Equation \eqref{dmdcfit} using a truncated singular value decomposition retaining different numbers of singular values as described in the results below.
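For concreteness, the least squares step with a rank-truncated pseudoinverse can be sketched in Python as follows; the data layout and the block splitting indicated in the comments are illustrative assumptions rather than the exact implementation used here.
\begin{verbatim}
# Minimal sketch: estimate M minimizing ||Y - M X||_F, with the pseudoinverse of
# the data matrix X (columns index snapshots) approximated by a rank-r SVD.
import numpy as np

def fit_operator(Y, X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = min(rank, int(np.count_nonzero(s > 1e-12)))
    X_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # rank-r pseudoinverse
    return Y @ X_pinv

# Hypothetical usage for the Burgers' example: X stacks the delay-embedded
# observables, the inputs and the nonlinear lifting, Y stacks the corresponding
# targets, and the fitted matrix is split by blocks into A_c, B_c and C_c:
# A_hat = fit_operator(Y, X, rank=80)
\end{verbatim}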
Comparisons are provided between simulations of the true model \eqref{burgeq} and both the linear and nonlinear inferred models in response to different inputs. Panels A and B of Figure \ref{burgersresults} show the $L^2$ error, $L^2 = \int_0^1 (w_{\rm true}(t,x)- w_{\rm inferred}(t,x))^2 \, dx$, between the true model solutions and the inferred solutions when using the linear predictor (red lines) and the nonlinear predictor (blue lines) for different inputs. In panel A, $w_L(t)$ and $w_R(t)$ are chosen similarly to the training data, i.e.,~obtained by choosing random numbers between $-0.5$ and $0.5$ from a uniform distribution, holding the value constant over a 20 time unit interval, and smoothing the resulting input with a spline interpolation. Note that the inputs are not identical to those used for training because the random numbers are realized differently. Results from panel B consider a similar input, except that the values that comprise $w_L$ and $w_R$ are held constant for only 7 time units before smoothing. Effectively, this yields inputs with higher frequency content than those used for training. In each case, the $L^2$ error is approximately three orders of magnitude lower when using the nonlinear predictor as compared to the linear predictor. Panels C and D show representative traces of $w(x,t)$ corresponding to the simulations in panels A and B, respectively. The true model output and the output from the nonlinear model are nearly identical in both cases, while the linear predictor does not accurately capture the model output. Panel E shows the average $L^2$ error for inferred models of different order when applying the lower frequency input. For the linear model, order is determined by the number of singular values retained in the truncated singular value decomposition used to approximate the pseudoinverse in Equation \eqref{dmdcfit}. For the nonlinear model, order is governed by the dimension of the POD basis used for projection, as described in Section \ref{redsec}. The performance of the linear model improves slightly as the order increases. For moderate orders between 10 and 100, the inferred linear model is generally unstable, i.e.,~the inferred matrix $A$ from Equation \eqref{statecontrol} has unstable eigenvalues. These data points are omitted from panel E because the corresponding error grows unbounded as time approaches infinity. The inferred nonlinear models do not suffer from the same stability issues as the linear models. The accuracy of the nonlinear model stops improving once the model order reaches 50, at which point the output is nearly indistinguishable from the true model output.
\begin{figure}
\caption{Accuracy of the linear and nonlinear models inferred from data in response to inputs with different frequency content. In panel A, inputs similar to those used for training are considered. In panel B, inputs with higher frequency content are considered. These panels show representative traces of the $L^2$ error over a 200 time unit window of simulation. In each case, the proposed nonlinear predictor yields results that are approximately 3 orders of magnitude better than the linear predictor. Panels C and D give representative traces of the outputs from the simulations from panels A and B, respectively. In each case, the nonlinear predictor yields outputs that are nearly identical to the true model output while the linear predictor yields outputs that are substantially less accurate. Panel E shows the influence of the model order on the accuracy of the inferred linear and nonlinear models. For models with order between 10 and 100, most inferred linear models are unstable with errors that grow unbounded in time. As such, there are fewer data points for the linear predictor in panel E.}
\label{burgersresults}
\end{figure}
\subsection{Schlieren Images of Supersonic Flow Past a Cylinder} \label{schsec}
Finally, experimental schlieren image data of cylinder-generated shock-wave/transitional boundary-layer interaction is analyzed using the proposed nonlinear model identification strategy. Here, the schlieren images of the Mach 2 flow past a standing cylinder are taken at 100 kHz. Details of the experimental setup and data collection are provided in \cite{wils20acc}. Salient flow features are illustrated in panel A of Figure \ref{lambda} with the flow going from left to right. Panel B shows a characteristic schlieren image taken from this data set. A flat plate is visible on the bottom edge and the cylinder is visible near the right edge of the image. Differences in pixel intensities roughly correspond to differences in fluid density gradients. Of particular interest in this data is the location of the forward lambda shock foot. Previous studies identified a characteristic oscillation frequency of the forward shock foot of approximately 5 kHz \cite{comb18}, \cite{comb19}, \cite{wils20acc} by analyzing power spectral densities and by using linear data analysis techniques such as spectral POD and other techniques related to dynamic mode decomposition.
\begin{figure}
\caption{Panel A shows a schematic depicting the flow geometry used to study shock-wave/transitional boundary-layer interaction. Mach 2 flow enters from the left and interacts with the cylinder mounted to a flat plate. Temporal oscillation in the location of the forward shock foot is of particular interest here. Panel B shows a characteristic schlieren image taken from this data set. Adapted from \cite{wils20acc}.}
\label{lambda}
\end{figure}
The data set consists of 25,000 snapshots with each image containing 5,472 pixels. In order to make the data set more computationally tractable, every other snapshot is removed so that the data is effectively sampled at 50 kHz. Half of the remaining snapshots are used for training and the other half are used to validate the resulting data-driven model. To further compress the data, snapshots from the training set are represented using a 5 mode POD basis which captures 48\% of the total energy, as determined by taking the sum of the largest 5 eigenvalues of the covariance matrix and dividing by the total sum of the eigenvalues. The observable is taken to be
\begin{equation} \label{podobs}
g_i = \begin{bmatrix} \omega_{1,i} & \dots & \omega_{5,i} \end{bmatrix}^T \in \mathbb{R}^5,
\end{equation}
where $\omega_{k,i}$ is the amplitude of the $k^{\rm th}$ POD mode on the $i^{\rm th}$ snapshot. A representation in the space of the schlieren images can be obtained by taking a linear combination of the POD modes with weights $\omega_{1,i}, \dots, \omega_{5,i}$. The strategy from Section \ref{autsys} is implemented on the autonomous data set in order to infer a nonlinear model. Once again, no initial lifting is considered so that $h_i = g_i$. A delay embedding of size $z = 25$ is used. Consequently, $\gamma_{i} \in \mathbb{R}^{130}$ as defined in Equation \eqref{liftstate}. The nonlinear lifting function $f_n(\gamma_i) \in \mathbb{R}^{120}$ consists of all possible combinations of polynomial terms formed from the entries of $h_i$ up to degree 4. The matrix $\hat{A}$ is estimated according to Equation \eqref{minsol} yielding approximations of $A_n \in \mathbb{R}^{130\times 130}$ and $C_n \in \mathbb{R}^{130 \times 120}$ in the nonlinear estimator from Equation \eqref{nonlinpredict}. As described in Section \ref{redsec}, the resulting nonlinear equation is projected onto a low rank basis comprised of the 4 most important POD modes obtained from the eigenvectors of $\Gamma \Gamma^T$.
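For concreteness, the preprocessing just described (POD projection, delay embedding and polynomial lifting) can be sketched in Python as follows; the dimensions match the choices above, while function and variable names are assumptions made for this illustration.
\begin{verbatim}
# Minimal sketch: 5-mode POD amplitudes, delay embedding of size z, and all
# monomials of degrees 2-4 of the current POD amplitudes (120 terms for 5 modes).
import numpy as np
from itertools import combinations_with_replacement

def pod_amplitudes(snapshots, n_modes=5):
    # snapshots: (n_pixels, n_frames); returns modes, amplitudes, energy fraction
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]
    amps = modes.T @ (snapshots - mean)
    energy = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
    return modes, amps, energy

def delay_embed(g, z=25):
    # stack g_i, g_{i-1}, ..., g_{i-z+1} into one column per usable frame
    n, m = g.shape
    return np.vstack([g[:, z - 1 - k: m - k] for k in range(z)])

def monomial_lift(h, min_deg=2, max_deg=4):
    feats = []
    for d in range(min_deg, max_deg + 1):
        for idx in combinations_with_replacement(range(h.shape[0]), d):
            feats.append(np.prod(h[list(idx), :], axis=0))
    return np.vstack(feats)
\end{verbatim}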
The inferred nonlinear model of the form \eqref{nonlinpredict} has a stable periodic orbit with a frequency of 4.70 kHz, which agrees well with the oscillation frequencies identified in prior studies \cite{comb18}, \cite{comb19}, \cite{wils20acc}. This periodic orbit is identified by taking an initial condition obtained from the comparison data set (i.e.,~the portion of the data set not used for training) and iterating the model \eqref{nonlinpredict} until the initial transients decay. In contrast to the results of this study, the linear techniques applied in these previous studies did not explicitly identify a periodic orbit, but rather identified characteristic oscillation frequencies observed with the data set. Figure \ref{shockwaveresults} shows the periodic orbit obtained from the inferred nonlinear model (left column) as well as raw data from the comparison data set in the right column. The ten sequential frames are centered at the lambda shock to emphasize the oscillation in the location of the forward shock foot and correspond to approximately one period of oscillation. The cylinder and the flat plate appear at the right and bottom edges of each frame, respectively. The middle column of Figure \ref{shockwaveresults} shows the raw data projected onto the 5 POD mode basis used for the nonlinear model identification strategy; the left and middle columns are qualitatively similar to each other both in terms of the location of the forward lambda shock and in terms of the pixel intensities. Note that the higher frequency flow features, for instance those that appear in the flow separation region between the forward shock foot, $\lambda_1$, and the closure shock, $\lambda_2$, are not accurately resolved in the reduced order model; these features are filtered out in the initial projection of the data onto the 5 mode POD basis.
\begin{figure}
\caption{The inferred nonlinear model of the form \eqref{nonlinpredict} (left column) compared with the raw comparison data projected onto the 5 mode POD basis (middle column) and the raw comparison data (right column) over approximately one period of oscillation of the forward shock foot.}
\label{shockwaveresults}
\end{figure}
For comparison, the extended DMD approach \cite{will15} is also implemented on the schlieren image data set as described in Section \ref{koopest} to provide a linear least squares estimate for the update rule $a_i^+ = A a_i$. Here, the lifted state space is taken to be $a_i = \begin{bmatrix} \gamma_i^T & f_n(\gamma_i)^T \end{bmatrix}^T$. The least squares fitting uses a truncated singular value decomposition of rank 200 -- keeping more singular values results in an unstable linear system. This extended DMD approach is often used to provide an approximation for Koopman eigenmodes. However, as noted in \cite{comb19}, it is often difficult to identify which Koopman eigenmodes are most important in a given data set. Indeed, Figure \ref{shockwaveedmd} shows a plot of the frequency of the eigenmodes versus the average amplitude of the eigenmode from the snapshot data. Here, the frequency associated with a given eigenmode is equal to the imaginary component of $\log(\lambda_A)/(2 \pi \Delta t)$, where $\lambda_A$ is an eigenvalue of the inferred $A$ matrix and $\Delta t$ is the time between successive snapshots. The associated amplitude at frame $i$ is given by $w_A^T a_i$, where $w_A$ is the left eigenvector for eigenvalue $\lambda_A$. Analysis of the frequency content of these eigenmodes would identify two dominant eigenmodes with a frequency near 20 kHz. However, power spectrum analysis of the same schlieren image data performed in \cite{comb18} identifies a peak in power at approximately 4.7 kHz and relatively little power in the 20 kHz region. By contrast, the proposed nonlinear model identification technique identifies a 4.70 kHz stable periodic orbit embedded in the data, which is consistent with peaks in the power spectrum of the imaging data identified in \cite{comb18}.
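The frequency and average amplitude used to rank the eigenmodes in Figure \ref{shockwaveedmd} can be computed along the following lines; the Python sketch below is illustrative only and assumes that the fitted matrix $A$, the matrix of lifted snapshots and the sampling interval are available as the variables named in the comments.
\begin{verbatim}
# Minimal sketch: frequency and average amplitude of each EDMD eigenmode.
# A: fitted matrix; snapshots: columns are lifted states a_i; dt: frame spacing.
import numpy as np

def eigenmode_frequency_amplitude(A, snapshots, dt):
    eigvals, W = np.linalg.eig(A.T)      # columns of W are left eigenvectors of A
    freqs = np.abs(np.imag(np.log(eigvals.astype(complex))) / (2.0 * np.pi * dt))
    amps = np.abs(W.T @ snapshots).mean(axis=1)   # average |w_A^T a_i| over frames
    return freqs, amps
\end{verbatim}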
\begin{figure}
\caption{Extended DMD is applied to the schlieren image data. The resulting eigenmodes are shown according to their frequency content and their relative importance as gauged by the average amplitude of the eigenmode observed in the snapshot data. Previous analysis of this data in \cite{comb18} identified a characteristic oscillation frequency of approximately 4.7 kHz.}
\label{shockwaveedmd}
\end{figure}
\FloatBarrier
\section{Conclusion} \label{concsec}
Koopman analysis and associated data-driven model identification algorithms are invaluable tools in the study of nonlinear dynamical systems. In data-driven modeling applications, the vast majority of Koopman-based strategies consider finite-dimensional, linear estimators for the action of the Koopman operator on observables. This work proposes a general strategy to obtain a nonlinear estimator for the Koopman operator. In the examples considered in this work, the proposed strategy yields nonlinear models that are substantially more accurate on longer timescales than comparable models that consider linear estimators for the Koopman operator. It should be noted that the examples considered in this work did not explicitly consider optimization of the observables used for fitting, for instance, which might result in a low-dimensional Koopman invariant subspace \cite{take17}, \cite{brun16}, \cite{kord20}. As such one cannot rule out the possibility that a linear estimator could have provided a more accurate representation for the Koopman operator with a more careful choice of the observables for the examples considered in this work.
Despite the promising results presented in the applications considered in this paper, the proposed nonlinear model identification strategy certainly comes with its share of drawbacks. Foremost, the proposed model identification strategy yields a nonlinear model, thereby precluding the direct use of a wide variety of linear model analysis techniques and control algorithms that are available for linear models obtained using DMD \cite{kutz16}, Extended DMD \cite{will15}, and Koopman model predictive control \cite{kord18}. In applications where the dynamics can be well-approximated by a low-dimensional linear operator, linear estimators would certainly be preferable to nonlinear estimators. The proposed algorithm shares similarities with the extended DMD algorithm in that it uses a dictionary of functions of the observables to lift to a higher dimensional space. Strategies for determining an optimal choice for the lifting functions were not considered here; practically, polynomial combinations of the observables and radial basis functions worked well in the applications presented in this work. The proposed model identification technique also shares similarities with other approaches designed to determine the model equations directly from data. In contrast to sparse model identification algorithms \cite{mang19}, \cite{brun16b}, \cite{pant19}, \cite{rudy17}, the proposed strategy identifies low rank approximations for the nonlinear dynamics; consequently, the resulting model is generally not interpretable, in the sense that the terms of the learned equation do not correspond directly to the physics of the underlying system. Nonetheless, the proposed strategy does not require the use of neural networks and still provides a low-rank approximation for the dynamics.
Due to its similarity to existing Koopman-based model identification strategies, the proposed framework offers some interesting opportunities for extension. For instance, this strategy could be used alongside linear Koopman model predictive control strategies described in \cite{arba18b}, \cite{kord18}, using the linear estimator to approximate an optimal control input over a finite time horizon and subsequently using the nonlinear predictor to maintain accurate information about the state of the system. Such an approach would be particularly useful in situations where real-time information about the observables is not continuously available. Alternatively, the resulting nonlinear models can be used to obtain additional information about stable attractors in the inferred systems. This point was illustrated in Section \ref{neurpopsec}, where applying pulse inputs to the nonlinear model allowed for accurate estimation of the phase response curve for the limit cycle that emerges in the true model equations. The diversity of examples considered in this work using both computational and experimental data suggests that the proposed framework could be a versatile tool to aid in the identification of nonlinear dynamical systems, especially in applications where linear predictors alone are not sufficient.
\section*{Acknowledgments}
This material is based upon work supported by the National Science Foundation Grant CMMI-2140527. Thank you to Phil Kreth from University of Tennessee Space Institute for providing the schlieren imaging dataset considered in Section \ref{schsec}.
\end{document} |
\begin{document}
\title{On $C^*$-algebras associated to actions of discrete subgroups of $\SL(2,\mathbb{R})$ on the punctured plane}
\author{Jacopo Bassi}
\maketitle
\begin{abstract}
\noindent Dynamical conditions that guarantee stability for discrete transformation group $C^*$-algebras are determined. The results are applied to the case of some discrete subgroups of $\SL(2,\mathbb{R})$ acting on the punctured plane by means of matrix multiplication of vectors. In the case of cocompact subgroups, further properties of such crossed products are deduced from properties of the $C^*$-algebra associated to the horocycle flow on the corresponding compact homogeneous space of $\SL(2,\mathbb{R})$.
\end{abstract}
\section{Introduction}
Transformation group $C^*$-algebras represent a tool for the construction of examples of structure and classification theorems for $C^*$-algebras and provide a way to interpret dynamical properties on the $C^*$-algebraic level. Typical examples are the $C^*$-algebras associated to minimal homeomorphisms on infinite compact metric spaces with finite covering dimension (\cite{rieffel-ir,giordano-putnam-skau,toms-winter}), or more generally, free minimal actions of countable residually finite groups with asymptotically finite-dimensional box space on compact metric spaces with finite covering dimension (\cite{swz}). In these cases the structure is that of an $ASH$-algebra and classification is provided by the Elliott invariant. Moving to the non-unital setting, such classification results are still available in the case that the $C^*$-algebra is stable and contains projections, assuming a suitable Rokhlin type property for the action. In these situations the resulting transformation group $C^*$-algebra is a stabilized $ASH$-algebra. Examples come from free and minimal actions of the real numbers on compact metric spaces admitting compact transversals (\cite{hsww}), where stability is reminiscent of freeness and the transversal produces a projection in the crossed product. On the other hand, stable simple $\mathcal{Z}$-stable projectionless $C^*$-algebras admit a description of the isomorphism classes of hereditary $C^*$-subalgebras and countably generated Hilbert $C^*$-modules in terms of Cuntz equivalence of positive elements (\cite{zstable_projless}).
Dynamical conditions which ensure stability of a transformation group $C^*$-algebra were given in \cite{green}, where it is proved that $C^*$-algebras arising from actions that are free and wandering on compacts are trivial fields of compact operators. For the case of more general $C^*$-algebras, other characterizations of stability are contained in \cite{rordam-fp,brttw}.\\
The present paper focuses on transformation group $C^*$-algebras associated to the action of discrete subgroups of $\SL(2,\mathbb{R})$ on the punctured plane, by means of matrix multiplication of vectors. Ergodic properties of such dynamical systems have been investigated in several places and the duality with the horocycle flow on the corresponding homogeneous spaces for $\SL(2,\mathbb{R})$ has been successfully employed in \cite{furstenberg,ledrappier, mau_weiss}. The study of such dynamical systems and their generalizations has a number of interesting applications, as observed in \cite{goro_weiss}, such as the quantitative Oppenheim conjecture, quantitative estimates of the denseness of certain projections associated to irreducible lattices and strengthenings of distribution results concerning actions of lattices by automorphisms.
The first part of this work focuses on the study of the distribution of orbits of compact sets on the punctured plane under the action of discrete subgroups of $\SL(2,\mathbb{R})$ containing two hyperbolic elements with different axes. Rather than studying the asymptotics of the distribution of such orbits under an increasing family of finite subsets in the lattice, as in \cite{nogueira2002}, \cite{nogueira2010} and \cite{guilloux}, we consider the possibility of finding, at every step, an element in the group that sufficiently \textit{squeezes} the image of the compact set under the action of any element in the finite subset. This property of the action resembles the fact that such discrete subgroups of $\SL(2,\mathbb{R})$ actually contain an abundance of hyperbolic elements and represents a weaker version of the wandering on compacts assumption considered in \cite{green}. This dynamical condition guarantees the existence of invertible approximants for the elements in the crossed product $C^*$-algebra. By appealing to \cite{rordam-fp}, we show that in the case of actions that are contractive in a suitable sense, this property is enough to ensure stability of the crossed product $C^*$-algebra. The ``dual'' approach is used in the last part to find properties of the crossed product arising from an action of a cocompact subgroup of $\SL(2,\mathbb{R})$ on $\mathbb{R}^2 \backslash \{0\}$ by establishing a $*$-isomorphism between this $C^*$-algebra and the $C^*$-algebra associated to the horocycle flow on the corresponding compact homogeneous space for $\SL(2,\mathbb{R})$.\\
\subsection{Notation}
If $G$ is a locally compact group and $A$ is a $C^*$-algebra, by an action of $G$ on $A$ we mean a continuous group homomorphism from $G$ to the group $\Aut (A)$ of $*$-automorphisms of $A$, endowed with the topology of pointwise convergence. If $X$ is a locally compact Hausdorff space, by an action of $G$ on $X$ we mean a continuous map $G \times X \rightarrow X$ that is associative and such that the identity of the group leaves every point of the space fixed.
If a locally compact group $G$ acts on a locally compact Hausdorff space $X$ by means of an action $\alpha : G \times X \rightarrow X$, we denote by $C_0 (X) \rtimes G$ the associated (full) transformation group $C^*$-algebra, that is the full crossed product $C^*$-algebra relative to the action $\hat{\alpha}_g (f)=f \circ g^{-1}$ for $g \in G$, $f \in C_0 (X)$. Similarly $C_0 (X) \rtimes_r G$ is the reduced transformation group $C^*$-algebra, that is the reduced crossed product relative to the same action.
If $X$ and $Y$ are two Hilbert modules over a $C^*$-algebra, we write $X\Subset Y$ to mean that $X$ is compactly contained in $Y$ in the sense of \cite{cuntz_hm} Section 1.
If $F \subset S$ is an inclusion of sets, we write $F\Subset S$ to mean that $F$ has finite cardinality.
If $X$ is a topological space and $S\subset X$ a subset, we denote by $S^\circ$ its interior.
\section{Weak stable rank $1$}
The concept of stable rank for $C^*$-algebras was introduced by Rieffel in \cite{rieffel} as a noncommutative analogue of the covering dimension of a space and the case of stable rank $1$ is of particular interest (see for example \cite{cuntz_hm} and \cite{open_proj}).
Conditions under which a transformation group $C^*$-algebra has stable rank $1$ have been given in \cite{poon} for actions of the integers; for actions of other groups with finite Rokhlin dimension on compact spaces such conditions can be obtained by combining the results in \cite{hwz}, \cite{szabo} or \cite{swz} and \cite{rordam-sr}, under additional assumptions such as the existence of an invariant measure. A $C^*$-algebra $A$ is said to have stable rank $1$ if every element in its minimal unitization $\tilde{A}$ can be approximated by invertible elements in $\tilde{A}$. We will consider a more restrictive (non-stable) approximation property, which was used in a crucial way in \cite{brttw}. The following definition was given by Hannes Thiel during a lecture about the Cuntz semigroup in the Winter semester 2016/2017 at the University of M{\"u}nster.
\begin{defn}
\label{defn2.0}
Let $A$ be a $C^*$-algebra. Then $A$ has \textit{weak stable rank $1$}, $\wsr (A)=1$, if $A \subset \overline{GL(\tilde{A})}$.
\end{defn}
Another variation of the concept of stable rank $1$ is the following
\begin{defn}[\cite{zstable_projless} Definition 3.1]
Let $A$ be a $C^*$-algebra. Then $A$ has \textit{almost stable rank $1$}, $\asr (A)=1$, if $\wsr (B) =1$ for every hereditary $C^*$-subalgebra $B \subset A$.
\end{defn}
A $C^*$-algebra $A$ is said to be stable if $A\otimes \mathbb{K} \simeq A$, where $\mathbb{K}$ denotes the $C^*$-algebra of compact operators on a separable Hilbert space. Stable $C^*$-algebras always have weak stable rank $1$ by Lemma 4.3.2 of \cite{brttw} and their multiplier algebra is properly infinite by \cite{rordam-fp} Lemma 3.4. The connection between stability and stable rank in the $\sigma$-unital case was already investigated in \cite{rordam-fp} Proposition 3.5 and Proposition 3.6. For our purpose, we need the following slight variation of the results contained in \cite{rordam-fp}:
\begin{thm}
\label{thm2.0}
Let $A$ be a $\sigma$-unital $C^*$-algebra. The following are equivalent
\begin{itemize}
\item[(i)] $\wsr (A)=1$ and $M(A)$ is properly infinite;
\item[(ii)] $A$ is stable.
\end{itemize}
If $A$ is simple, they are equivalent to
\begin{itemize}
\item[(iii)] $\wsr(A)=1$ and $M(A)$ is infinite.
\end{itemize}
\end{thm}
\proof
The proof of Lemma 3.2 of \cite{rordam-fp} applies under the hypothesis of weak stable rank $1$, hence if $\wsr (A) =1$ and $M(A)$ is properly infinite, then $A$ is stable by the considerations in the proof of \cite{rordam-fp} Proposition 3.6. As already observed, for any stable $C^*$-algebra $A$, $\wsr(A)=1$ and its multiplier algebra is properly infinite. In the simple case the result follows by an application of Lemma 3.3 of \cite{rordam-fp} and the proof is complete.\\
In order to obtain stability for a transformation group $C^*$-algebra, we introduce a certain dynamical condition and observe that it guarantees weak stable rank $1$; this is the content of the rest of this section. We will deduce infiniteness properties for the multiplier algebra by adapting the results contained in \cite{sth} to the locally compact case in the next section.
\begin{defn}
\label{defn2.1}
Let $G$ be a discrete group acting on a locally compact Hausdorff space $X$. The action is said to be \textit{squeezing} if for every $F\Subset G$ and every $C \subset X$ compact there exists $\gamma \in G$ such that
\[
\gamma g \gamma h C \cap \gamma g C \cap C =\emptyset \nonumber
\]
for all $g,h \in F$.
\end{defn}
Note that Definition \ref{defn2.1} only makes sense for actions on locally compact non-compact spaces, since the space itself is globally fixed by any homeomorphism.
\begin{pro}
\label{prop2.1}
Let $G$ be a discrete group acting on a locally compact Hausdorff space $X$ by means of a squeezing action. Then $\wsr (C_0 (X) \rtimes G) =1$.
\end{pro}
\proof
Every element in $C_0 (X) \rtimes G$ can be approximated by elements in $C_c (G, C_c (X))$, hence it is enough to prove that any element in $C_c (G, C_c (X))$ is the limit of invertible elements in $(C_0 (X) \rtimes G )^\sim$. Let $F \Subset G$ and $z=\sum_{g \in F} z_g u_g$ be such that $z_g \in C_c (X)$ for every $g \in F$. Define $C := \bigcup_{g \in F} \supp (z_g)$ and let $K \subset X$ be a compact subset such that $C \subsetneq K^\circ$. There exists a continuous function $f : X \rightarrow [0,1]$ such that $\supp (f) \subset K$, $f|_{C} =1$; furthermore, since the action is squeezing, there is a group element $\gamma \in G$ such that
\[
\gamma g \gamma h K \cap \gamma g K \cap K = \emptyset \nonumber \qquad \mbox{ for all }\quad g,h \in F \cup F^{-1} \cup \{ e\}.
\]
From our choice of $f$, it follows that we can write $z = f z = (f u_{\gamma})(u_{\gamma^{-1}} z)$.
Computing the third power of $u_{\gamma^{-1}} z$ we obtain
\[
(u_{\gamma^{-1}} z)^3 = \sum_{g,g' , g'' \in F} (z_g \circ \gamma) (z_{g'} \circ (\gamma^{-1} g \gamma^{-1})^{-1}) (z_{g''} \circ (\gamma^{-1} g \gamma^{-1} g' \gamma^{-1})^{-1}) u_{\gamma^{-1} g \gamma^{-1} g' \gamma^{-1} g''}.
\nonumber
\]
For every $s \in G$ and $\phi \in C_c (X)$ we have $\supp (\phi \circ s^{-1}) = s \supp (\phi)$ and so from our choice of $K$ and $\gamma$, we see that
\[
\begin{split}
&\supp (z_g \circ \gamma) \cap \supp (z_{g'} \circ (\gamma^{-1} g \gamma^{-1})^{-1}) \cap \supp (z_{g''} \circ (\gamma^{-1} g \gamma^{-1} g' \gamma^{-1})^{-1}) \\
&\subset \gamma^{-1} (K \cap g \gamma^{-1} K \cap g \gamma^{-1} g' \gamma^{-1} K) = \emptyset,
\end{split} \nonumber
\]
since $K \cap g \gamma^{-1} K \cap g \gamma^{-1} g' \gamma^{-1} K =\emptyset$ if and only if $\gamma(g')^{-1} \gamma g^{-1} K \cap \gamma (g')^{-1} K \cap K = \emptyset$. Hence
\[
(u_{\gamma^{-1}} z)^3 = 0 \nonumber
\]
and $u_{\gamma^{-1}} z$ is nilpotent. In the same way we obtain
\[
(f u _{\gamma})^3 = f (f\circ \gamma^{-1}) (f\circ \gamma^{-2}) u_{\gamma^{3}}=0 \nonumber
\]
since $\gamma^2 K \cap \gamma K \cap K = \emptyset$. Hence $z$ is a product of nilpotent elements and is thus the limit of invertible elements in $(C_0 (X) \rtimes G)^\sim$ (cf. \cite{rordam-uhf} 4.1), and the claim follows.
\begin{rem}
\label{oss2.0}
Natural variations of Definition \ref{defn2.1} lead to the same result of Proposition \ref{prop2.1}. The reason why we chose this form is that it fits in the discussion of Section 4.
\end{rem}
\begin{rem}
\label{oss2.1}
Proposition \ref{prop2.1} applies to the reduced crossed product as well.
\end{rem}
\section{Contractive and paradoxical actions}
In the last section we determined a condition on an action of a discrete group that guarantees weak stable rank $1$ for the transformation group $C^*$-algebra. In view of Theorem \ref{thm2.0}, this section is devoted to finding conditions that guarantee infiniteness properties for the multiplier algebra of the crossed product $C^*$-algebra.\\
If $A$ is any $C^*$-algebra and $G$ a discrete group acting on it, then $A\rtimes G$ is isomorphic to an ideal in $M(A) \rtimes G$, where the action of $G$ on $M(A)$ is the extension of the action on $A$. Then there is a unital $*$-homomorphism $\phi : M(A) \rtimes G \rightarrow M(A \rtimes G)$; if we identify $M(A\rtimes G)$ with the $C^*$-algebra of double centralizers on $A\rtimes G$ and $A\rtimes G$ with its isomorphic image in $M(A) \rtimes G$, $\phi (x) y = xy$ for any $x$ in $M(A) \rtimes G$ and $y$ in $A\rtimes G$. The same results apply to the reduced crossed product as well. This will be the framework for the following considerations.\\
In virtue of the above discussion, all the results we state in the rest of this section concerning full transformation group $C^*$-algebras hold true for the reduced transformation group $C^*$-algebras as well. The same applies to the results contained in the next section, where, in order to prove the analogue of Proposition \ref{prop123} for the reduced crossed product, one can use the extension to the multiplier algebras of the surjective $*$-homomorphism from the full crossed product to the reduced crossed product.
The concept of contractive action (see below) was already considered in \cite{sth} page 22 and has to be compared with the more restrictive Definition 2.1 of \cite{delaroche}.
\begin{defn}
\label{defn3.1}
Let $G$ be a discrete group acting on a locally compact Hausdorff space $X$. The action is said to be \textit{contractive} if there exist an open set $U \subset X$ and an element $t \in G$ such that $t \overline{U} \subsetneq U$. In this case $(U,t)$ is called a \textit{contractive pair} and $U$ a \textit{contractive set}.
\end{defn}
The notion of scaling element was introduced in \cite{blackadar-cuntz} and was used to characterize stable algebraically simple $C^*$-algebras.
\begin{defn}[\cite{blackadar-cuntz} Definition 1.1]
\label{defn3.3}
Let $A$ be a $C^*$-algebra and $x$ an element in $A$. $x$ is called a \textit{scaling element} if $x^* x (xx^*) = xx^*$ and $x^* x \neq xx^*$.
\end{defn}
\begin{pro}
\label{prop3.1}
Let $G$ be a discrete group acting on a locally compact Hausdorff space $X$. Consider the following properties:
\begin{itemize}
\item[(i)] The action of $G$ on $X$ is contractive.
\item[(ii)] There exists a scaling elementary tensor in $C_c (G, C_b (X))$.
\end{itemize}
Then $(ii) \Rightarrow (i)$. If $X$ is normal, then $(i) \Rightarrow (ii)$.
\end{pro}
\proof
$(ii) \Rightarrow (i)$: Let $x=u_t f$ be a scaling elementary tensor in $C_c (G, C_b (X))$ and $U$ the interior of $\supp(f)$. Since $x^* x = |f|^2$ and $xx^* = | f \circ t^{-1} |^2$, the condition $x^* x (xx^*) = xx^*$ implies $|f| |_{t\overline{U}} =1$; in particular $t\overline{U} \subset U$. Suppose that $t\overline{U} =U$. Then
\[
|f| |_{U^c} =0, \quad |f||_U = |f||_{t\overline{U}} = 1|_{t\overline{U}} \nonumber
\]
and
\[
|f\circ t^{-1} ||_{U^c} = |f\circ t^{-1} ||_{(t\overline{U})^c} =| f\circ t^{-1} ||_{t (U)^c}=0. \nonumber
\]
Since $G$ acts by homeomorphisms, $U$ is a clopen set and $t^{-1} U = t \overline{U}$, which entails
\[
|f\circ t^{-1}||_U =1|_U = 1|_{t \overline{U}}. \nonumber
\]
This would imply $|f|= |f\circ t^{-1}|$ and $x^* x=xx^*$. Hence $t\overline{U} \subsetneq U$.\\
Suppose now that $X$ is normal and let $(U,t)$ be a contractive pair. Take $\xi \in U \backslash (t\overline{U})$. By Urysohn's Lemma (using normality) there exists a continuous function $f : X \rightarrow [0,1]$ that is $0$ on $U^c$ and $1$ on $\{ \xi \} \cup (t\overline{U})$. The element $x := u_t f \in C_c (G, C_b (X))$ satisfies $x^* x = f^2$, $xx^* = (f\circ t^{-1} )^2$ and $x^* x (xx^*)=xx^*$. Since $\supp (f\circ t^{-1} ) \subsetneq \supp (f)$, we have $x^* x \neq xx^*$, completing the proof.
\begin{cor}
\label{cor3.3.1}
Let $G$ be a group acting on a locally compact normal Hausdorff space $X$ by means of a contractive action. Then $M(C_0 (X) \rtimes G)$ is infinite.
\end{cor}
\proof
Let $x$ be as in Proposition \ref{prop3.1}; we want to show that $\phi(x^*x) \neq \phi (xx^*)$ ($\phi$ is defined at the beginning of the section). To this end, take $\xi \in U \backslash (t\overline{U})$ and let $f \in C_c (X)$ be such that $f(\xi)=1$. Then $(x^*x f)(\xi) \neq 0$ and $(xx^* f)(\xi)=0$, and so $\phi (x^* x) \neq \phi (xx^*)$. As shown in \cite{blackadar-cuntz} Theorem 3.1, the element $\phi(x)+(1-\phi(x^* x))^{1/2}$ is a nontrivial isometry and the claim follows.\\
A variation of the concept of contractive action is the following (see \cite{sth} Lemma 2.3.2) and is a particular case of Definition 2.3.6 of \cite{sth}.
\begin{defn}
\label{defn3.33}
Let $X$ be a locally compact Hausdorff space and $G$ a discrete group acting on it. We say that the action is \textit{paradoxical} if there are positive natural numbers $n$, $m$, group elements $t_1 ,..., t_{n+m}$ and non-empty open sets $U_1 ,..., U_{n+m}$ such that $\bigcup_{i=1}^n U_i = \bigcup_{i=n+1}^{n+m} U_i = X$, $\bigcup_{i=1}^{n+m} t_i (U_i) \subsetneq X$ and $t_i U_i \cap t_j U_j = \emptyset$ for every $i\neq j$.
\end{defn}
Adapting the ideas (and methods) of \cite{sth} Lemma 2.3.7 to the locally compact case, we have the following
\begin{pro}
\label{prop3.2}
Let $G$ be a discrete group acting on a locally compact normal Hausdorff space $X$. If the action is paradoxical, then $M(C_0 (X) \rtimes G)$ is properly infinite.
\end{pro}
\proof
Let $n$, $m$, $t_1 ,..., t_{n+m}$ and $U_1 ,..., U_{n+m}$ be as in Definition \ref{defn3.33}.
Taking unions and relabeling we can suppose $t_i \neq t_j$ for $i\neq j$. Let $F:= \{ t_1 ,..., t_n \}$, $F' := \{ t_{n+1} ,..., t_{n+m}\}$.\\
Since $X$ is normal we can take a partition of unity $\{\phi_t\}_{t \in F}$ subordinated to $\{U_i\}_{i=1}^n$ and a partition of unity $\{ \psi_{s}\}_{s \in F'}$ subordinated to $\{U_i\}_{i=n+1}^{n+m}$. Consider the extension of the action of $G$ to $C_b (X)$ and the associated crossed product $C^*$-algebra $C_b (X) \rtimes G$.\\
Define $x:= \sum_{t \in F} u_t \phi_t^{1/2}$ and $y:= \sum_{s \in F'} u_{s} \psi_{s}^{1/2}$. Then
\[
x^* x = y^* y = 1. \nonumber
\]
Note now that
\[
x^*y = \sum_{t \in F, s \in F'} \phi_t^{1/2} (\psi_s^{1/2} \circ s^{-1} t ) u_{t^{-1} s} =0 \nonumber
\]
and so $xx^* \perp yy^*$.\\
Let $\phi :C_b (X) \rtimes G \rightarrow M(C_0 (X) \rtimes G)$ be as at the beginning of this section. Take a positive function $f \in C_c(X)$ that takes the value $1$ on a point $\xi \in (\bigcup_{1\leq i \leq n+m} t_i U_i)^c$; such a point exists since $\bigcup_{i=1}^{n+m} t_i U_i \subsetneq X$. Then
\[
xx^* f = \sum_{t, t' \in F} (\phi_t^{1/2} \circ t^{-1}) (\phi_{t'}^{1/2} \circ t^{-1}) u_{t(t')^{-1}} f \nonumber
\]
entails $0=(xx^* f)(\xi) \neq f (\xi) =1$. Hence $xx^* f \neq f$ and $\phi (xx^*)\neq \phi (1)=1$. The same applies to $yy^*$ and so $1 \in M(C_0 (X) \rtimes G)$ is properly infinite, as claimed.\\
If a discrete group $G$ acts on a locally compact Hausdorff space $X$, the action is said to be \textit{topologically free} if for every $F \Subset G$ the set $\bigcap_{t \in F \backslash \{e\}} \{ x \in X \; | \; tx \neq x \}$ is dense in $X$ (\cite{archbold-spielberg} Definition 1).
Combining Proposition \ref{prop3.2} with the results of Section $2$ we obtain
\begin{thm}
\label{thm3.1}
Let $G$ be a discrete group acting on a locally compact, second countable metric space $X$ by means of an action that is paradoxical and squeezing. Then $C_0 (X) \rtimes G$ is stable. If the action is topologically free, minimal, squeezing and contractive, then $C_0 (X) \rtimes_r G$ is stable.
\end{thm}
\proof
Since $X$ is second countable, $C_0 (X) \rtimes G$ is separable, hence $\sigma$-unital. The first statement follows from Theorem \ref{thm2.0}, Proposition \ref{prop3.2} and Proposition \ref{prop2.1}. If the action is topologically free and minimal, then $C_0 (X)\rtimes_r G$ is simple by \cite{archbold-spielberg}; hence Theorem \ref{thm2.0}, combined with Proposition \ref{prop2.1} (see Remark \ref{oss2.1}) and Corollary \ref{cor3.3.1}, applies also in this situation.
\section{The case of discrete subgroups of $\SL(2,\mathbb{R})$}
A Fuchsian group $\Gamma$ is a discrete subgroup of $\PSL(2,\mathbb{R})$ (\cite{katok} Definition 2.2) and as such it acts on the hyperbolic plane $\mathbb{H}$ and on its boundary $\partial \mathbb{H} =\mathbb{R}\cup \{\infty\} \simeq \mathbb{R} \mathbb{P}^1$ by means of M{\"o}bius transformations.
Let $G$ be a discrete subgroup of $\SL(2,\mathbb{R})$ acting on $\mathbb{R}^2 \backslash \{0\}$ by means of matrix multiplication of vectors. The quotient map $\pi : \mathbb{R}^2 \backslash \{0\} \rightarrow \mathbb{R} \mathbb{P}^1$ induces an action of $G$ on $\mathbb{R}\mathbb{P}^1$, which factors through the action of the corresponding Fuchsian group $p (G)$, where $p: \SL(2,\mathbb{R}) \rightarrow \PSL(2,\mathbb{R})$ is the quotient by the normal subgroup $\{-1,+1\}$ of $\SL(2,\mathbb{R})$.\\
If $\gamma$ is a hyperbolic element (\cite{katok} 2.1) in $\PSL(2,\mathbb{R})$ or $\SL(2,\mathbb{R})$ acting on $\mathbb{RP}^1$, we denote by $\gamma^{- (+)}$ its repelling (attracting) fixed point.
For a subset of $\SL(2,\mathbb{R})$ or $\PSL(2,\mathbb{R})$ consisting of hyperbolic transformations, we say that its elements have different axes if the fixed-point sets for the action of the elements on $\mathbb{R} \mathbb{P}^1$ are pairwise disjoint. Note that in both $\SL(2,\mathbb{R})$ and $\PSL(2,\mathbb{R})$, discreteness of a subgroup $G$ implies that whenever two hyperbolic elements in $G$ have a common fixed point in $\mathbb{R} \mathbb{P}^1$, their axes coincide.
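To fix ideas with the standard example: any hyperbolic element of $\SL(2,\mathbb{R})$ is conjugate in $\SL(2,\mathbb{R})$ to a diagonal matrix
\[
\left( \begin{array}{cc} \lambda & 0 \\
0 & \lambda^{-1} \end{array}\right), \qquad |\lambda| >1, \nonumber
\]
which acts on $\mathbb{R} \mathbb{P}^1$ fixing exactly the two points determined by the coordinate axes, the class of $(1,0)^t$ being the attracting fixed point and the class of $(0,1)^t$ the repelling one; for a general hyperbolic element the fixed points are the classes of its eigenvectors.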
\begin{lem}
\label{lem5}
Let $\Gamma$ be a Fuchsian group containing two hyperbolic elements with different axes. Then for every $F\Subset \Gamma$ there exists a hyperbolic element $\gamma \in \Gamma$ such that
\[
g \gamma^+ \neq \gamma^- \qquad \forall g \in F. \nonumber
\]
The same is true if $\Gamma$ is a group generated by a hyperbolic element.
\end{lem}
\proof
Let $F \Subset \Gamma$. If $\Gamma$ contains two hyperbolic elements with different axes, then it contains infinitely many, hence we can take $\eta$, $\delta$ hyperbolic with different axes and such that the fixed points of $\eta$ are not fixed by any element of $F$ different from the identity. Suppose that $F$ is such that for every $n \in \mathbb{N}$ there is a $g \in F$ with $g \eta^n \delta^+ =g(\eta^n \delta \eta^{-n})^+ = (\eta^n \delta \eta^{-n})^- = \eta^n \delta^-$. Then, passing to a subsequence
\[
\exists g \in F \quad \mbox{ s.t. } \quad g\eta^{n_k} \delta^+ = \eta^{n_k} \delta^-. \nonumber
\]
Both $\eta^{n_k} \delta^+$ and $\eta^{n_k} \delta^-$ converge to $\eta^+$, so that $\eta^+$ is fixed by $g$; moreover $g$ cannot be the identity, since otherwise $\delta^+ = \delta^-$. This contradicts the choice of $\eta$.
\begin{pro}
\label{prop4.1}
Let $G$ be a discrete subgroup of $\SL(2,\mathbb{R})$ such that $p(G)$ is a Fuchsian group containing two hyperbolic elements with different axes or a Fuchsian group generated by a hyperbolic element. Then the action of $G$ on $\mathbb{R}^2 \backslash \{0\}$ is squeezing.
\end{pro}
\proof
Let $F \Subset G$ and fix an orthonormal basis $\{e_1 , e_2\}$ of $\mathbb{R}^2$. By Lemma \ref{lem5} there is a $\gamma \in p (G)$ such that $p(g) \gamma^+ \neq \gamma^-$ for every $g \in F$. Let $h \in G$ be such that $p(h) = \gamma$. Hence $h$ is hyperbolic and is conjugate in $\SL(2,\mathbb{R})$ to a diagonal matrix:
\[
h = u^{-1} \Lambda u =u^{-1} \left( \begin{array}{cc} \lambda & 0 \\
0 & \lambda^{-1} \end{array}\right)u, \qquad |\lambda| >1. \nonumber
\]
Let $g'$ and $g$ be elements in $F$ and suppose that the upper-left diagonal entry of the matrix $ug'u^{-1}$ vanishes: $(ug'u^{-1})_{1,1} =0$. This means that $\langle e_1, ug'u^{-1}e_1\rangle =0$, or equivalently, $ug'u^{-1} e_1 \in \mathbb{R} e_2$; hence, since the image of $u^{-1} e_1$ under the quotient map $\pi : \mathbb{R}^2 \backslash \{0\} \rightarrow \mathbb{R} \mathbb{P}^1$ is $\gamma^+$ and the image of $u^{-1} e_2$ under the same map is $\gamma^-$, looking at the action of $p(G)$ on $\mathbb{R} \mathbb{P}^1$ we obtain $p (g') \gamma^+ = \gamma^-$, contradicting the assumption. Hence $(ug'u^{-1})_{1,1}\neq 0$. Define $g_u := ugu^{-1}$, $g'_u := ug' u^{-1}$ and compute for $n \in \mathbb{N}$
\[
\Lambda^n g_u= \left(\begin{array}{cc} \lambda^n (g_{u})_{1,1} & \lambda^n (g_u)_{1,2} \\
\lambda^{-n} (g_{u})_{2,1} & \lambda^{-n} (g_{u})_{2,2} \end{array}\right), \nonumber
\]
and
\[
\Lambda^n g'_u \Lambda^n g_u = \left( \begin{array}{cc} \lambda^{2n} (g'_u)_{1,1} (g_{u})_{1,1} + (g'_u)_{1,2} (g_u)_{2,1} & \lambda^{2n} (g'_u)_{1,1} (g_u)_{1,2} + (g'_u)_{1,2} (g_u)_{2,2} \\
(g'_u)_{2,1} (g_u)_{1,1} + \lambda^{-2n} (g'_u)_{2,2} (g_u)_{2,1} & (g'_u)_{2,1} (g_u)_{1,2} + \lambda^{-2n} (g'_u)_{2,2} (g_u)_{2,2} \end{array}\right). \nonumber
\]
Let $C \subset \mathbb{R}^2 \backslash \{ 0 \}$ be a compact subset; take real positive numbers $r_1$ and $r_2$ such that the compact annulus $C_{r_1 , r_2} = \{ z \in \mathbb{R}^2 \; | \; r_1 \leq \| z \| \leq r_2\}$ contains $uC$. We want to show that there exists $n >0$ such that
\[
\Lambda^n g'_u \Lambda^n g_u C_{r_1 , r_2} \cap \Lambda^n g'_u C_{r_1 , r_2} \cap C_{r_1,r_2} = \emptyset, \qquad \forall g,g' \in F. \nonumber
\]
Let $(x,y)^t \in \mathbb{R}^2$ be such that $\Lambda^n g'_u \Lambda^n g_u (x,y)^t$ belongs to $C_{r_1 , r_2}$. In particular, this entails
\[
\| \Lambda^n g'_u \Lambda^n g_u (x,y)^t \| \leq r_2 \nonumber
\]
and taking the first coordinate:
\[
| \lambda^{2n} (g'_u)_{1,1} [(g_u)_{1,1} x + (g_u)_{1,2} y] + [ (g'_u)_{1,2} (g_u)_{2,1} x + (g'_u)_{1,2} (g_u)_{2,2} y ]| \leq r_2. \nonumber
\]
Hence
\begin{equation}
\label{eq4.1}
|(g_u)_{1,1} x + (g_u)_{1,2} y| \leq \frac{r_2 + | (g'_u)_{1,2} (g_u)_{2,1} x + (g'_u)_{1,2} (g_u)_{2,2} y |}{\lambda^{2n} | (g'_u)_{1,1}|}
\end{equation}
for every $(x,y)^t \in (\Lambda^n g'_u \Lambda^n g_u)^{-1} C_{r_1 , r_2}$. Furthermore, if $(x,y)^t \in \mathbb{R}^2$ is such that $\Lambda^n g_u (x,y)^t $ belongs to $C_{r_1,r_2}$, then
\begin{equation}
\label{eq4.2}
\begin{split}
r_1^2 &\leq [\lambda^n (g_u)_{1,1} x + \lambda^n (g_u)_{1,2} y]^2 + [\lambda^{-n} (g_u)_{2,1} x + \lambda^{-n} (g_u)_{2,2} y ]^2\\
& = \lambda^{2n} [(g_u)_{1,1} x + (g_u)_{1,2} y]^2 + \lambda^{-2n}[ (g_u)_{2,1} x + (g_u)_{2,2} y ]^2.
\end{split}
\end{equation}
Combining (\ref{eq4.1}) and (\ref{eq4.2}) we obtain
\begin{equation}
\label{eq4.3}
r_1^2 \leq \frac{[r_2 + | (g'_u)_{1,2} (g_u)_{2,1} x + (g'_u)_{1,2} (g_u)_{2,2} y |]^2}{\lambda^{2n} | (g'_u)_{1,1}|^2} + \lambda^{-2n} [(g_u)_{2,1} x + (g_u)_{2,2} y]^2
\end{equation}
for every $(x,y)^t \in (\Lambda^n g'_u \Lambda^n g_u)^{-1} C_{r_1,r_2} \cap (\Lambda^n g_u)^{-1} C_{r_1,r_2}$.
If $(x,y)^t$ belongs to $C_{r_1,r_2}$, then there is a constant $M >0$ such that
\[
\frac{[r_2 + | (g'_u)_{1,2} (g_u)_{2,1} x + (g'_u)_{1,2} (g_u)_{2,2} y |]^2}{ | (g'_u)_{1,1}|^2} + [(g_u)_{2,1} x + (g_u)_{2,2} y]^2 \leq M \nonumber
\]
and this constant does not depend on the choice of $g$, $g'$ in $F$. So, by (\ref{eq4.3}), for $n$ large enough
\[
(\Lambda^n g'_u \Lambda^n g_u)^{-1} C_{r_1,r_2} \cap (\Lambda^n g_u)^{-1} C_{r_1,r_2} \cap C_{r_1,r_2} = \emptyset, \nonumber
\]
which entails
\[
C_{r_1,r_2} \cap \Lambda^n g'_u C_{r_1,r_2} \cap \Lambda^n g'_u \Lambda^n g_u C_{r_1,r_2}=\emptyset \nonumber
\]
and so
\[
u^{-1}C_{r_1,r_2} \cap h^n g' u^{-1}C_{r_1,r_2} \cap h^n g' h^n g u^{-1}C_{r_1,r_2}=\emptyset. \nonumber
\]
The result follows since $C \subset u^{-1} C_{r_1 , r_2}$. \\
Hence we have determined a class of discrete subgroups of $\SL(2,\mathbb{R})$ whose action on $\mathbb{R}^2 \backslash \{0\}$ is squeezing. Conditions under which this action is contractive or paradoxical are the content of the following
\begin{pro}
\label{prop4.2}
Let $G$ be a discrete subgroup of $\SL(2,\mathbb{R})$ acting on $\mathbb{R}^2 \backslash \{ 0 \}$ by means of matrix multiplication of vectors. If $G$ contains a hyperbolic element, then the action is contractive. If $G$ contains at least two hyperbolic elements with different axes, then the action is paradoxical.
\end{pro}
\proof
Suppose that $G$ contains a hyperbolic element; then the same is true for its image under the quotient map $p : \SL(2,\mathbb{R}) \rightarrow \PSL(2,\mathbb{R})$. Since the action of $\Gamma = p (G)$ on $\mathbb{R}\mathbb{P}^1$ is by homeomorphisms and every hyperbolic element in $\Gamma$ is conjugate in $\PSL(2,\mathbb{R})$ to a M{\"o}bius transformation of the form $z \mapsto \lambda^2 z$ for some $\lambda >1$, it follows that the action of $\Gamma$ on $\mathbb{R}\mathbb{P}^1$ is contractive. Hence there are $U \subset \mathbb{R}\mathbb{P}^1$ and $\gamma \in \Gamma$ such that
\begin{equation}
\gamma \overline{U} \subsetneq U. \nonumber
\end{equation}
If $G$ contains at least two hyperbolic elements with different axes, then the same is true for $\Gamma$ and, as is well known, in this case $\Gamma$ contains a countable set of hyperbolic elements with pairwise different axes. In order to see this, let $\gamma$, $\eta$ be hyperbolic elements in $\Gamma$ with different axes; then the elements in the sequence $\{ \eta^n \gamma \eta^{-n}\}_{n \in \mathbb{N}}$ are hyperbolic transformations with different axes. In particular, for all natural numbers $n, m \geq 2$ there are group elements $\gamma_1 ,..., \gamma_{n+m}$ and contractive open sets $U_1 ,..., U_{n+m}$, where for each $i=1,...,n+m$ the set $U_i$ contains the attracting fixed point $\gamma_i^+$ of $\gamma_i$, such that
\begin{equation}
\label{eq2}
\bigcup_{i=1}^n U_i = \bigcup_{j=n+1}^{n+m} U_j = \mathbb{R}\mathbb{P}^1,
\end{equation}
\begin{equation}
\label{eq3}
\gamma_i U_i \cap \gamma_j U_j =\emptyset \qquad \forall i\neq j.
\end{equation}
Hence, we just need to observe that the same holds after replacing the sets $U_i$ with $\pi^{-1} (U_i)$ and the elements $\gamma_i$ with some representatives in $G$. Equation (\ref{eq2}) automatically holds for the sets $\pi^{-1} (U_i) \subset \mathbb{R}^2 \backslash \{0\}$. Choose a representative $g_i \in G$ for every $\gamma_i \in \Gamma$; since the action of $G$ on $\mathbb{R}\mathbb{P}^1$ factors through the action of $\Gamma$, equation (\ref{eq3}) can be replaced by
\[
g_i U_i \cap g_j U_j =\emptyset \qquad \forall i\neq j. \nonumber
\]
By equivariance of the quotient map $\pi : \mathbb{R}^2 \backslash \{0\} \rightarrow \mathbb{R} \mathbb{P}^1$ it follows that
\[
g_i (\pi^{-1} (U_i)) \cap g_j (\pi^{-1} (U_j)) = \emptyset \qquad \forall i\neq j. \nonumber
\]
We are left to check that the inverse image of a contractive open set is again a contractive open set. Since the map $\mathbb{R}^2 \backslash \{0\} \rightarrow \mathbb{R}\mathbb{P}^1$ is a quotient by a group action (the group is $\mathbb{R}^\times$), it is open and so the inverse image of the closure of a set is the closure of the inverse image of the same set; hence, if $(U, g)$ is a contractive pair with $U \subset \mathbb{R}\mathbb{P}^1$ and $g \in G$, then
\[
g \overline{( \pi^{-1} (U))} = g \pi^{-1} (\overline{U}) = \pi^{-1} (g \overline{U}) \subsetneq \pi^{-1} (U). \nonumber
\]
The proof is complete.
\begin{cor}
\label{cor4.1}
Let $G$ be a discrete subgroup of $\SL(2,\mathbb{R})$ such that $p (G) \subset \PSL(2,\mathbb{R})$ is a Fuchsian group containing two hyperbolic elements with different axes. The transformation group $C^*$-algebra $C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G$ is stable.\\
If $p(G)$ is generated by a hyperbolic transformation, then $\wsr (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G) =1$ and $M(C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)$ is infinite.
\end{cor}
\proof
Follows from Proposition \ref{prop4.2}, Proposition \ref{prop4.1}, Theorem \ref{thm3.1} and Corollary \ref{cor3.3.1}.\\
Corollary \ref{cor4.1} applies to the case of discrete subgroups of $\SL(2,\mathbb{R})$ associated to Fuchsian groups of the first kind (\cite{katok} 4.5), hence in particular the cocompact ones. Non-lattice subgroups to which Corollary \ref{cor4.1} applies are considered in \cite{semenova}.\\
In Proposition \ref{prop4.2} we deduced paradoxicality for the action of a discrete subgroup $G$ of $\SL(2,\mathbb{R})$ on $\mathbb{R}^2 \backslash \{0\}$ from paradoxicality of the action of the corresponding Fuchsian group on $\mathbb{R} \mathbb{P}^1$ and concluded from this fact that the multiplier algebra of $C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G$ is properly infinite.
It follows from \cite{glasner} Example VII.3.6 that if $\Gamma$ is a Fuchsian group of the first kind, then its action on $\mathbb{R} \mathbb{P}^1$ is extremely proximal (see \cite{glasner} page 96 for the definition) and this property represents a stronger form of paradoxicality, hence stronger infiniteness properties for the multiplier algebra of the transformation group $C^*$-algebra are expected in this case. Note that in \cite{boundary} an extremely proximal action is called a strong boundary action.\\ The next Proposition is a consequence of the results contained in \cite{boundary} and \cite{kra}.
\begin{lem}
\label{lem4.4}
Let $G$ and $H$ be locally compact groups and $A$, $B$ be $C^*$-algebras. Suppose $G$ acts on $A$ and $H$ acts on $B$ and that there is an equivariant involutive homomorphism $\phi : C_c (G,A) \rightarrow C_c (H,B)$ which is continuous for the $L^1$-norms. Then there is a $*$-homomorphism $\hat{\phi} : A\rtimes G \rightarrow B\rtimes H$.
\end{lem}
\proof
If $\rho$ is a nondegenerate $L^1$-continuous involutive representation of $L^1 (H, B)$ on a Hilbert space $\mathfrak{H}$, then the composition $\rho \circ \phi : C_c (G, A) \rightarrow B(\mathfrak{H})$ is $L^1$-continuous as well. Hence $\| \phi (f) \| \leq \| f \|$ for every $f \in C_c (G,A)$ by \cite{williams} Corollary 2.46, as claimed.
\begin{pro}
\label{prop123}
Let $G$ be a discrete subgroup of $\SL(2,\mathbb{R})$ such that $p(G) \subsetbset \PSL(2,\mathbb{R})$ is a finitely generated Fuchsian group of the first kind not containing elements of order $2$. Then $M(C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)$ contains a Kirchberg algebra in the UCT class as a unital $C^*$-subalgebra.
\end{pro}
\proof
The quotient map $\pi : \mathbb{R}^2 \backslash \{0\} \rightarrow \mathbb{R} \mathbb{P}^1$ is surjective and equivariant with respect to the action of $G$, hence it induces a unital $*$-homomorphism $ C(\mathbb{R} \mathbb{P}^1) \rtimes G \rightarrow C_b (\mathbb{R}^2 \backslash \{0\} )\rtimes G$, which can be composed with the unital $*$-homomorphism $C_b (\mathbb{R}^2 \backslash \{0\})\rtimes G \rightarrow M(C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G)$ introduced at the beginning of Section 3 in order to obtain a unital $*$-homomorphism $\phi : C(\mathbb{R} \mathbb{P}^1)\rtimes G \rightarrow M(C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G)$. By \cite{kra}, finitely generated Fuchsian groups of the first kind not admitting elements of order $2$ lift to $\SL(2,\mathbb{R})$. Write $\Gamma := p(G)$ and denote by $\kappa : \Gamma \rightarrow \kappa (\Gamma) \subset G$ a lift. Since the action of $G$ on $\mathbb{R}\mathbb{P}^1$ factors through the action of $\Gamma$, the map
\[
\psi_c : C_c (\Gamma , C(\mathbb{R} \mathbb{P}^1)) \rightarrow C_c (\kappa (\Gamma) , C(\mathbb{R}\mathbb{P}^1)) \nonumber
\]
\[
f \mapsto f \circ \kappa^{-1} \nonumber
\]
is an involutive homomorphism and preserves the $L^1$-norm, as does the inclusion $C_c (\kappa (\Gamma), C(\mathbb{R} \mathbb{P}^1)) \rightarrow C_c (G, C(\mathbb{R} \mathbb{P}^1))$. By Lemma \ref{lem4.4} there is a (unital) $*$-homomorphism $\psi : C(\mathbb{R}\mathbb{P}^1)\rtimes \Gamma \rightarrow C(\mathbb{R}\mathbb{P}^1) \rtimes G$. Hence $\phi \circ \psi: C(\mathbb{R} \mathbb{P}^1 ) \rtimes \Gamma \rightarrow M(C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)$ is a unital $*$-homomorphism. By \cite{boundary} Theorem 5 the $C^*$-algebra $C(\mathbb{R} \mathbb{P}^1 ) \rtimes \Gamma$ is a unital Kirchberg algebra in the UCT class, hence $\phi \circ \psi$ is injective and the result follows.
\section{Cocompact subgroups of $\SL(2,\mathbb{R})$}
Consider the one-parameter subgroup of $\SL(2,\mathbb{R})$
\[
N:= \{ n(t) \in \SL(2,\mathbb{R}) \; | \; n(t)=\left( \begin{array}{cc} 1 & t \\
0 & 1 \end{array}\right), \quad t \in \mathbb{R} \}. \nonumber
\]
Given a discrete subgroup $G$ of $\SL(2,\mathbb{R})$, one can define a flow on the corresponding homogeneous space $G \backslash \SL(2,\mathbb{R})$ by $Gg \mapsto Gg n(-t)$; this is called the \textit{horocycle flow} (\cite{ew} 11.3.1). The stabilizer of the point $(1,0)^t$ in $\mathbb{R}^2 \backslash \{0\}$ for the action of $\SL(2,\mathbb{R})$ is $N$ and so the quotient $\SL(2,\mathbb{R})/N$, endowed with the action of $\SL(2,\mathbb{R})$ given by left multiplication, is isomorphic, as a dynamical system, to $\mathbb{R}^2 \backslash \{0\}$. The interplay between the action of $G$ on $\mathbb{R}^2 \backslash \{0\}$ and the horocycle flow on $G \backslash \SL(2,\mathbb{R})$ is employed in the following
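Explicitly, one checks that
\[
n(t) \left( \begin{array}{c} 1 \\
0 \end{array}\right) = \left( \begin{array}{cc} 1 & t \\
0 & 1 \end{array}\right) \left( \begin{array}{c} 1 \\
0 \end{array}\right) = \left( \begin{array}{c} 1 \\
0 \end{array}\right) \qquad \mbox{ for every } t \in \mathbb{R}, \nonumber
\]
and since $\SL(2,\mathbb{R})$ acts transitively on $\mathbb{R}^2 \backslash \{0\}$, the orbit map $gN \mapsto g (1,0)^t$ identifies $\SL(2,\mathbb{R})/N$ equivariantly with $\mathbb{R}^2 \backslash \{0\}$.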
\begin{pro}
\label{prop2.1}
Let $G$ be a discrete cocompact subgroup of $\SL(2,\mathbb{R})$. The transformation group $C^*$-algebra $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is simple, separable, stable, $\mathcal{Z}$-stable, with a unique lower semicontinuous $2$-quasitrace and it has almost stable rank $1$. In particular it satisfies the hypothesis of \cite{io} Theorem 3.5.
\end{pro}
\proof
Since $G$ is countable and $\mathbb{R}^2 \backslash \{0\}$ is a locally compact second countable Hausdorff space, $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is separable.\\
As already observed in the discussion after Corollary \ref{cor4.1}, $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is stable in the case $G$ is cocompact. Since the action of $G$ on $\mathbb{R}^2 \backslash \{0\}$ is free and minimal (\cite{ergtopdyn} Theorem IV.1.9), $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is simple (\cite{archbold-spielberg}).
By simplicity, the non-trivial lower semicontinuous traces on $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ are semifinite (\cite{dixmier} 6.1.3) and so, in virtue of \cite{green2} Proposition 25 and Proposition 26, the restriction map sets up a bijection with the lower semicontinuous semifinite $G$-invariant traces on $C_0 (\mathbb{R}^2 \backslash \{0\})$. Every such trace is uniquely given by integration against a $G$-invariant Radon measure. By Furstenberg's Theorem (\cite{furstenberg}) there is exactly one such non-trivial measure. Hence $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ admits a unique non-trivial lower semicontinuous trace. Since the action of $G$ on $\mathbb{R}^2 \backslash \{0\}$ is amenable, $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is nuclear (\cite{delaroche2} Theorem 3.4). By exactness it admits a unique non-trivial lower semicontinuous $2$-quasitrace (\cite{kirchberg}).\\
In virtue of Corollary 9.1 and Corollary 6.7 of \cite{hsww} the $C^*$-algebra $C(G \backslash \SL(2,\mathbb{R})) \rtimes N$ is stable; hence it follows from Green's imprimitivity Theorem (\cite{williams} Corollary 4.11) that $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G \simeq C(G \backslash \SL(2,\mathbb{R})) \rtimes N$. By \cite{hsww} Corollary 9.1 and Theorem 3.5, $C(G \backslash \SL(2,\mathbb{R}) )\rtimes N$ has finite nuclear dimension; hence, \cite{tikusis} Corollary 8.7 entails $\mathcal{Z}$-stability.\\
As observed in \cite{noncommgeom} page 129, $C(G\backslash \SL(2,\mathbb{R})) \rtimes N$ is projectionless; hence, \cite{zstable_projless} Corollary 3.2 applies and $\asr (C(G\backslash \SL(2,\mathbb{R})) \rtimes N) =1$.\\
The result follows since the Cuntz semigroup of a stable $\mathcal{Z}$-stable $C^*$-algebra is almost unperforated (\cite{rordam-sr} Theorem 4.5).\\
\begin{rem}
The stability of $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ in Proposition \ref{prop2.1} can also be established directly from that of $C(G \backslash \SL(2,\mathbb{R})) \rtimes N$. In fact, the rest of the proof shows that $C(G \backslash \SL(2,\mathbb{R})) \rtimes N$ satisfies the hypothesis of \cite{io} Theorem 3.5. Since $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is a hereditary $C^*$-subalgebra of $C(G \backslash \SL(2,\mathbb{R})) \rtimes N$, it is then enough to prove that the non-trivial lower semicontinuous trace on $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is unbounded. But this follows since it is induced by the Lebesgue measure.
\end{rem}
As a consequence we obtain the following properties for the $C^*$-algebra associated to the action of a cocompact discrete subgroup of $\SL(2,\mathbb{R})$ on $\mathbb{R}^2 \backslash\{0\}$.
\begin{cor}
\label{horo_1}
Let $G$ be a cocompact discrete subgroup of $\SL(2,\mathbb{R})$, $\tau$ the lower semicontinuous trace associated to the Lebesgue measure $\mu_L$ on $\mathbb{R}^2 \backslash \{0\}$ and $d_\tau$ the corresponding functional on the Cuntz semigroup $Cu (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)$. Then
\[
\Ped (C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G)=\{ x \in C_0(\mathbb{R}^2 \backslash \{0\}) \rtimes G\; : \; d_\tau ([|x|]) < \infty\}. \nonumber
\]
Every hereditary $C^*$-subalgebra of $C_0(\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is either algebraically simple or isomorphic to $C_0(\mathbb{R}^2 \backslash \{0\}) \rtimes G$.
\end{cor}
\begin{cor}
\label{horo_3}
Let $G$ be a cocompact discrete subgroup of $\SL(2,\mathbb{R})$. Every countably generated right Hilbert module for $C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G$ is isomorphic to one of the form
\[
\overline{f \cdot (C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G)}, \qquad f \in C_0 (\mathbb{R}^2 \backslash \{0\}). \nonumber
\]
For two such Hilbert modules we have
\[
\overline{f \cdot (C_0 (\mathbb{R}^2 \backslash \{0\}) \rtimes G)} \simeq \overline{g \cdot (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)} \qquad \Leftrightarrow \qquad \mu_L (\mathrm{supp} (f)) = \mu_L (\mathrm{supp} (g)) \nonumber
\]
and there exists a Hilbert module $E$ such that
\[
\overline{f \cdot (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)} \simeq E \Subset \overline{g \cdot (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G)} \nonumber
\]
if and only if $\mu_L (\mathrm{supp} (f)) < \mu_L (\mathrm{supp} (g))$.
\end{cor}
\proof
The Cuntz semigroup of the $C^*$-algebra $C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes G$ is stably finite by \cite{cuntz_t} Proposition 5.2.10, hence by \cite{cuntz_t} Proposition 5.3.16 it does not contain compact elements, since this $C^*$-algebra is projectionless. Hence the countably generated Hilbert modules correspond to soft elements and Cuntz equivalence of soft elements is implemented by the unique (up to scalar multiples) nontrivial functional associated to the unique (up to scalar multiples) lower semicontinuous trace $\tau$. It follows that all the possible values in the range of the dimension function are obtained by Cuntz equivalence classes of elements in $C_0 (\mathbb{R}^2 \backslash \{0\})$ since, for every $f \in C_0 (\mathbb{R}^2 \backslash \{0\})$, we have $d_\tau (f) = \mu_L (\mathrm{supp} (f))$. The result follows from Theorem 3.5 of \cite{io}.
\section{Final remarks}
It follows from the results in the last section that if $G$ is a cocompact discrete subgroup of $\SL(2,\mathbb{R})$, the Cuntz classes of elements in the transformation group $C^*$-algebra $C_0 (\mathbb{R}^2 \backslash\{0\})\rtimes G$ are generated by continuous functions on the plane. It may be possible to derive this property directly from the dynamics.
It can be shown that if we restrict to discrete subgroups of $\SL(2,\mathbb{R})$ which are the inverse images under the quotient map $p: \SL(2,\mathbb{R}) \rightarrow \PSL(2,\mathbb{R})$ of fundamental groups of hyperbolic Riemann surfaces, the construction of the $C^*$-algebra associated to the horocycle flow on the corresponding homogeneous space of $\SL(2,\mathbb{R})$ induces a functor from a category whose objects are hyperbolic Riemann surfaces and whose morphisms are finite sheeted holomorphic coverings to the usual category of $C^*$-algebras; this suggests that it might be possible to detect the holomorphic structure at the $C^*$-algebraic level. Observe that, if $\mathcal{M}_g$ is a compact Riemann surface of genus $g$, after identifying $p^{-1} (\pi_1 (\mathcal{M}_g)) \backslash \SL(2,\mathbb{R})$ with the unit tangent bundle $T_1 (\mathcal{M}_g)$, the Thom-Connes isomorphism (\cite{thom-connes}) gives a way to compute the $K$-theory of $C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes p^{-1}(\pi_1 (\mathcal{M}_g)) \simeq C(T_1 (\mathcal{M}_g))\rtimes \mathbb{R}$ and it reads
\[
K_0 (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes p^{-1}(\pi_1 (\mathcal{M}_g))) = \mathbb{Z}^{2g+1}, \nonumber
\]
\[
K_1 (C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes p^{-1}(\pi_1 (\mathcal{M}_g)))= \mathbb{Z}^{2g+1} \oplus \mathbb{Z} /(2g-2) . \nonumber
\]
Both the order and the scale in $K_0$ are trivial since $C_0 (\mathbb{R}^2 \backslash \{0\})\rtimes p^{-1}(\pi_1 (\mathcal{M}_g))$ is projectionless and stable.\\
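For the reader's convenience, here is one way (a sketch, under the identifications above) to see how these groups arise: the Thom-Connes isomorphism gives $K_i (C(T_1 (\mathcal{M}_g))\rtimes \mathbb{R}) \simeq K_{1-i}(C(T_1 (\mathcal{M}_g)))$, while the Gysin sequence of the circle bundle $T_1 (\mathcal{M}_g) \rightarrow \mathcal{M}_g$, whose Euler number is $2-2g$, gives
\[
H^0 (T_1 (\mathcal{M}_g)) \simeq H^3 (T_1 (\mathcal{M}_g)) \simeq \mathbb{Z}, \qquad H^1 (T_1 (\mathcal{M}_g)) \simeq \mathbb{Z}^{2g}, \qquad H^2 (T_1 (\mathcal{M}_g)) \simeq \mathbb{Z}^{2g} \oplus \mathbb{Z}/(2g-2);
\]
since the Atiyah-Hirzebruch spectral sequence degenerates in dimension $3$ and the relevant extensions split, this yields $K^1 (T_1 (\mathcal{M}_g)) \simeq \mathbb{Z}^{2g+1}$ and $K^0 (T_1 (\mathcal{M}_g)) \simeq \mathbb{Z}^{2g+1}\oplus \mathbb{Z}/(2g-2)$, in accordance with the formulas above.\\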
Furthermore, by \cite{thom-connes} Corollary 2 the range of the pairing between $K_0$ and the unique trace is determined by the Ruelle-Sullivan current associated to this flow (see \cite{noncommgeom} 5-$\alpha$), which is trivial by \cite{paternain}. Thus the Elliott invariant contains information only about the genus, or equivalently, the homeomorphic class of the Riemann surface. In particular, if the Elliott conjecture holds true for this class of $C^*$-algebras and if it is possible to detect the holomorphic structure at the level of the $C^*$-algebras, this should correspond to something finer than the $C^*$-algebraic structure.
\section{Acknowledgements}
The author thanks Prof. Wilhelm Winter for the hospitality at the Westf\"alische Wilhelms-Universit\"at of M\"unster and Prof. Roberto Longo for the hospitality at the Universit\`a degli Studi di Roma Tor Vergata for the period of this research. Many thanks go to Prof. Ludwik D\k abrowski who carefully read and gave his important feedback on the parts of this paper that are contained in the author's PhD thesis. The author also thanks the anonymous referee for the valuable comments on a previous version of the manuscript which led to an improved exposition. This research is partially supported by INdAM.
\end{document} |
\begin{document}
\title{Effective Mass Dirac-Morse Problem with any $\kappa$-value}
\author{\small Altuğ Arda}
\email[E-mail: ]{arda@hacettepe.edu.tr}\affiliation{Department of
Physics Education, Hacettepe University, 06800, Ankara,Turkey}
\author{\small Ramazan Sever}
\email[E-mail: ]{sever@metu.edu.tr}\affiliation{Department of
Physics, Middle East Technical University, 06531, Ankara,Turkey}
\author{\small Cevdet Tezcan}
\email[E-mail: ]{ctezcan@baskent.edu.tr}\affiliation{Faculty of
Engineering, Başkent University, Bağlıca Campus, Ankara, Turkey}
\author{\small H\"{u}seyin Akçay}
\email[E-mail: ]{akcay@baskent.edu.tr}\affiliation{Faculty of
Engineering, Başkent University, Bağlıca Campus, Ankara, Turkey}
\date{\today}
\begin{abstract}
The Dirac-Morse problem is investigated within the framework of
an approximation to the term proportional to $1/r^2$ in the
position-dependent mass formalism. The energy eigenvalues
and corresponding wave functions are obtained by using the
parametric generalization of the Nikiforov-Uvarov method for any
$\kappa$-value. The approximate energy eigenvalues and
corresponding wave functions are also obtained in the
constant-mass case for the pseudospin and spin symmetry limits, respectively.\\
Keywords: generalized Morse potential, Dirac equation,
Position-Dependent Mass, Nikiforov-Uvarov Method, Spin Symmetry,
Pseudospin Symmetry
\end{abstract}
\pacs{03.65.-w; 03.65.Ge; 12.39.Fd}
\maketitle
The investigation of the solutions for quantum mechanical systems
having certain potentials in the case of position-dependent mass
(PDM) [1, 2] has received great attention. Many authors have
studied the solutions of different potentials for
spatially-dependent mass, such as hypergeometric type potentials
[3], Coulomb potential [4], $PT$-symmetric kink-like, and inversely
linear plus linear potentials [5]. It is well known that the theory
based on the effective-mass Schr\"{o}dinger equation is a useful
ground for investigation of some physical systems, such as
semiconductor heterostructures [6], the impurities in crystals
[7-9], and electric properties of quantum wells, and quantum dots
[10]. In the present work, we intend to solve the Dirac-Morse problem
within the PDM formalism.
The pseudospin symmetry is an interesting result appearing in Dirac
equation of a particle moving in an external scalar, and vector
potentials when the sum of the potentials is
nearly zero. It was observed that the single particle states have a
quasidegeneracy labeled with the quantum numbers $\tilde{\ell}$, and
$\tilde{s}$, which are called the pseudo-orbital angular momentum,
and pseudospin angular momentum quantum numbers, respectively
[11-16]. The concept of pseudospin symmetry has received great
attention in nuclear theory because it provides a ground for
investigating deformation and superdeformation in nuclei [17, 18],
and for building an effective shell-model coupling scheme [19, 20]. The
symmetry appears when the magnitude of the scalar
potential is nearly equal to the magnitude of the vector potential with
opposite sign [14, 21-25] and the Dirac equation has the pseudospin
symmetry, when the sum of the vector, and scalar potentials is a
constant, i.e., $\Sigma(r)=V_v(r)+V_s(r)=const.$ or
$d\Sigma(r)/dr=0$ [16]. The spin symmetry is another important
symmetry occurring in Dirac theory in the presence of external
scalar, and vector potentials. The spin symmetry appears in the
Dirac equation, when the difference of scalar, and vector potentials
is a constant, i.e., $\Delta(r)=V_{v}(r)-V_{s}(r)=const.$ [14, 16].
Recently, the pseudospin and/or spin symmetry have been studied by
many authors for some potentials, such as Morse potential [26-28],
Woods-Saxon potential [29], Coulomb [30], and harmonic potentials
[31-33], Eckart potential [34-36], P\"{o}schl-Teller potential[37,
38], Hulth\'{e}n potential [39], and Kratzer potential [40]. In
Ref. [41], the bound-state solutions of Dirac equation are studied
for generalized Hulth\'{e}n potential with spin-orbit quantum
number $\kappa$ in the position-dependent mass background. In this
letter, we intend to show that the new scheme of the
Nikiforov-Uvarov (NU) method can be used to find the energy
spectra and the corresponding eigenspinors within the framework
of an approximation to the term proportional to $1/r^2$ for
arbitrary spin-orbit quantum number $\kappa$, i.e. $\kappa\neq 0$,
when the mass depends on position. The NU method is a powerful
tool for solving a second-order differential equation by turning
it into a hypergeometric type equation [42].
Dirac equation for a spin-$\frac{1}{2}$ particle with mass $m$
moving in scalar $V_s(r)$, and vector potential $V_v(r)$ can be
written as (in $\hbar=c=1$ units)
\begin{eqnarray}
[\alpha\,.\,\textbf{P}+\beta(m+V_s(r))]\,\Psi_{n\kappa}(r)=[E-V_v(r)]\,\Psi_{n\kappa}(r)\,.
\end{eqnarray}
where $E$ is the relativistic energy of the particle, $\textbf{P}$
is three-momentum, $\alpha$ and $\beta$ are $4 \times 4$ Dirac
matrices, which have the forms of $\alpha=\Bigg(\begin{array}{cc}
0 & \sigma \\
\sigma & 0
\end{array}\Bigg)$ and $\beta=\Bigg(\begin{array}{cc}
I & 0 \\
0 & -I
\end{array}\Bigg)$, respectively [43].
Here, $\sigma$ is a three-vector whose components are Pauli
matrices and $I$ denotes the $2 \times 2$ unit matrix.
$\textbf{J}$ denotes the total angular momentum, and
$\hat{K}=-\beta(\sigma.\textbf{L}+1)$ corresponds to the
spin-orbit operator of the Dirac particle in a spherically
symmetric potential, where $\textbf{L}$ is the orbital angular
momentum operator of the particle. The eigenvalues of the
spin-orbit operator $\hat{K}$ are given as $\kappa=\pm(j+1/2)$,
where $\kappa=-(j+1/2)<0$ corresponds to the aligned spin
$j=\ell+1/2$, and $\kappa=(j+1/2)>0$ corresponds to the unaligned
spin $j=\ell-1/2$. The total angular momentum quantum number of
the particle is given by $j=\tilde{\ell}+\tilde{s}$\,, where
$\tilde{\ell}=\ell+1$ is the pseudo-orbital angular momentum
quantum number, and $\tilde{s}=1/2$ is the pseudospin angular
momentum quantum number. For a given $\kappa=\pm1, \pm2, \ldots$,
the relations between the spin-orbit quantum number $\kappa$ and
the two orbital angular momentum quantum numbers are
$\kappa(\kappa+1)=\ell(\ell+1)$ and
$\kappa(\kappa-1)=\tilde{\ell}(\tilde{\ell}+1)$.
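For instance, $\kappa=-1$ corresponds to $\ell=0$, $\tilde{\ell}=1$, and $j=1/2$, i.e., the $s_{1/2}$ states listed in Table I.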
The Dirac spinor in spherically symmetric potential can be written
in terms of upper and lower components as
\begin{eqnarray}
\Psi_{n \kappa}(r)=\,\frac{1}{r}\,\Bigg(\begin{array}{c} \,\chi_{n
\kappa}\,(r)Y_{jm}^{\ell}(\theta,\phi) \\
i\phi_{n \kappa}\,(r)Y_{jm}^{\tilde{\ell}}(\theta,\phi)
\end{array}\Bigg)\,,
\end{eqnarray}
where $Y_{jm}^{\ell}(\theta,\phi)$, and
$Y_{jm}^{\tilde{\ell}}(\theta,\phi)$ are the spherical harmonics,
and $\chi_{n \kappa}\,(r)/r$, and $\phi_{n \kappa}\,(r)/r$ are
radial parts of the upper and lower components. Substituting Eq.
(2) into Eq. (1) enables us to write the Dirac equation as a set of
two coupled differential equations in terms of $\chi_{n
\kappa}\,(r)$ and $\phi_{n \kappa}\,(r)$. By eliminating $\chi_{n
\kappa}\,(r)$ or $\phi_{n \kappa}\,(r)$ in these coupled
equations, we obtain
\begin{eqnarray}
\Big\{\,\frac{d^2}{dr^2}-\,\frac{\kappa(\kappa+1)}{r^2}\,
+\,\frac{1}{M_{\Delta}(r)}\Big(\frac{dm(r)}{dr}
-\frac{d\Delta(r)}{dr}\Big)\,(\frac{d}{dr}\,+\,\frac{\kappa}{r})\Big\}\chi_{n\kappa}(r)=
M_{\Delta}(r)M_{\Sigma}(r)\chi_{n\kappa}(r)\,,
\end{eqnarray}
\begin{eqnarray}
\Big\{\,\frac{d^2}{dr^2}-\,\frac{\kappa(\kappa-1)}{r^2}\,
-\,\frac{1}{M_{\Sigma}(r)}\Big(\frac{dm(r)}{dr}+
\frac{d\Sigma(r)}{dr}\Big)\,(\frac{d}{dr}\,-\,\frac{\kappa}{r})\Big\}\phi_{n\kappa}(r)=
M_{\Delta}(r)M_{\Sigma}(r)\phi_{n\kappa}(r)\,,
\end{eqnarray}
where $M_{\Delta}(r)=m+E_{n\kappa}-\Delta(r)$\,,
$M_{\Sigma}(r)=m-E_{n\kappa}+\Sigma(r)$, and
$\Delta(r)=V_{v}\,(r)-V_s\,(r)$, $\Sigma(r)=V_{v}\,(r)+V_s\,(r)$.
In the NU-method, the Schr\"{o}dinger equation is transformed by
using an appropriate coordinate transformation
\begin{eqnarray}
\sigma^{2}(s)\Psi''(s)+\sigma(s)\tilde{\tau}(s)
\Psi'(s)+\tilde{\sigma}(s)\Psi(s)=0\,,
\end{eqnarray}
where $\sigma(s)$ and $\tilde{\sigma}(s)$ are polynomials of at most
second degree, and $\tilde{\tau}(s)$ is a first-degree polynomial.
The polynomial $\pi(s)$ and the parameter $k$ required in the
method are defined through
\begin{eqnarray}
\pi(s)=\frac{1}{2}\,[\sigma^{\prime}(s)-\tilde{\tau}(s)]\pm
\sqrt{\frac{1}{4}\,[\sigma^{\prime}(s)-\tilde{\tau}(s)]^2-
\tilde{\sigma}(s)+k\sigma(s)},
\end{eqnarray}
\begin{eqnarray}
\lambda=k+\pi^{\prime}(s ),
\end{eqnarray}
where $\lambda$ is a constant. The function under the square root
in $\pi(s)$ in Eq. (6) must be the square of a
polynomial in order that $\pi(s)$ be a first-degree polynomial.
Substituting $k$ into Eq. (6), we define
\begin{eqnarray}
\tau(s)=\tilde{\tau}(s)+2\pi(s).
\end{eqnarray}
where the derivative of $\tau(s)$ should be negative [42]. Eq.
(5) has a particular solution with degree $n$, if $\lambda$ in Eq.
(7) satisfies
\begin{eqnarray}
\lambda=\lambda_{n}=-n\tau^{\prime}-\frac{\left[n(n-1)\sigma^{\prime\prime}\right]}{2},
\quad n=0,1,2,\ldots
\end{eqnarray}
To obtain the solution of Eq. (5) it is assumed that the solution
is a product of two independent parts as $\Psi(s)=\phi(s)~y(s)$,
where $y(s)$ can be written as
\begin{eqnarray}
y_{n}(s)\sim \frac{1}{\rho(s)}\frac{d^{n}}{ds^{n}}
\left[\sigma^{n}(s)~\rho(s)\right],
\end{eqnarray}
where the function $\rho(s)$ is the weight function, and should
satisfy the condition
\begin{eqnarray}
\left[\sigma(s)~\rho(s)\right]'=\tau(s)~\rho(s)\,,
\end{eqnarray}
and the other factor is defined as
\begin{eqnarray}
\frac{1}{\phi(s)}\frac{d\phi(s)}{ds}=\frac{\pi(s)}{\sigma(s)}.
\end{eqnarray}
In order to clarify the parametric generalization of the NU
method, let us take the following general form of a
Schr\"{o}dinger-like equation written for any potential,
\begin{eqnarray}
\left\{\frac{d^{2}}{ds^{2}}+\frac{\alpha_{1}-\alpha_{2}s}{s(1-\alpha_{3}s)}
\frac{d}{ds}+\frac{-\xi_{1}s^{2}+\xi_{2}s-\xi_{3}}{[s(1-\alpha_{3}s)]^{2}}\right\}\Psi(s)=0.
\end{eqnarray}
When Eq. (13) is compared with Eq. (5), we obtain
\begin{eqnarray}
\tilde{\tau}(s)=\alpha_{1}-\alpha_{2}s\,\,\,;\,\,\sigma(s)=s(1-\alpha_{3}s)\,\,\,;\,\,
\tilde{\sigma}(s)=-\xi_{1}s^{2}+\xi_{2}s-\xi_{3}\,.
\end{eqnarray}
Substituting these into Eq. (6)
\begin{eqnarray}
\pi(s)=\alpha_{4}+\alpha_{5}s\pm\sqrt{(\alpha_{6}-k\alpha_{3})s^{2}+(\alpha_{7}+k)s+\alpha_{8}}\,,
\end{eqnarray}
where the parameters are
\begin{eqnarray}
\begin{array}{lll}
\alpha_{4}=\frac{1}{2}\,(1-\alpha_{1})\,, &
\alpha_{5}=\frac{1}{2}\,(\alpha_{2}-2\alpha_{3})\,,
& \alpha_{6}=\alpha_{5}^{2}+\xi_{1} \\
\alpha_{7}=2\alpha_{4}\alpha_{5}-\xi_{2}\,, &
\alpha_{8}=\alpha_{4}^{2}+\xi_{3}\,. &
\end{array}
\end{eqnarray}
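As a brief check (this step is implicit in the original method), Eq. (14) gives $\sigma^{\prime}(s)-\tilde{\tau}(s)=(1-\alpha_{1})+(\alpha_{2}-2\alpha_{3})s=2\alpha_{4}+2\alpha_{5}s$, so that the expression under the square root in Eq. (6) becomes
\[
(\alpha_{4}+\alpha_{5}s)^{2}+\xi_{1}s^{2}-\xi_{2}s+\xi_{3}+ks(1-\alpha_{3}s)=(\alpha_{6}-k\alpha_{3})s^{2}+(\alpha_{7}+k)s+\alpha_{8}\,,
\]
which reproduces Eq. (15) with the parameters defined in Eq. (16).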
In the NU method, the function under the square root in Eq. (15) must
be the square of a polynomial [42], which gives the following
roots of the parameter $k$
\begin{eqnarray}
k_{1,2}=-(\alpha_{7}+2\alpha_{3}\alpha_{8})\pm2\sqrt{\alpha_{8}\alpha_{9}}\,,
\end{eqnarray}
where
$\alpha_{9}=\alpha_{3}\alpha_{7}+\alpha_{3}^{2}\alpha_{8}+\alpha_{6}$\,.
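For completeness, Eq. (17) follows by requiring that the quadratic under the square root in Eq. (15) be a perfect square, i.e., that its discriminant vanish:
\[
(\alpha_{7}+k)^{2}-4\alpha_{8}(\alpha_{6}-k\alpha_{3})=0 \quad \Rightarrow \quad k^{2}+2(\alpha_{7}+2\alpha_{3}\alpha_{8})k+\alpha_{7}^{2}-4\alpha_{6}\alpha_{8}=0\,,
\]
whose two roots are exactly $k_{1,2}$ given in Eq. (17).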
We obtain the polynomials $\pi(s)$ and $\tau(s)$ for
$k=-(\alpha_{7}+2\alpha_{3}\alpha_{8})-2\sqrt{\alpha_{8}\alpha_{9}}$,
respectively
\begin{eqnarray}
\pi(s)=\alpha_{4}+\alpha_{5}s-\left[(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}}\,)s-\sqrt{\alpha_{8}}\,\right]\,,
\end{eqnarray}
\begin{eqnarray}
\tau(s)=\alpha_{1}+2\alpha_{4}-(\alpha_{2}-2\alpha_{5})s-2\left[(\sqrt{\alpha_{9}}
+\alpha_{3}\sqrt{\alpha_{8}}\,)s-\sqrt{\alpha_{8}}\,\right].
\end{eqnarray}
Thus, in order to satisfy the condition of the method that
the derivative of the function $\tau(s)$ should be negative, we
impose
\begin{eqnarray}
\tau^{\prime}(s)&=&-(\alpha_{2}-2\alpha_{5})-2(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}}\,)
\nonumber \\
&=&-2\alpha_{3}-2(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}}\,)\quad<0.
\end{eqnarray}
Using Eqs. (7), (8), (19), and (20), and equating Eq. (7) with the
condition on $\lambda$ given by Eq. (9), we find
the eigenvalue equation
\begin{eqnarray}
\alpha_{2}n-(2n+1)\alpha_{5}&+&(2n+1)(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}}\,)+n(n-1)\alpha_{3}\nonumber\\
&+&\alpha_{7}+2\alpha_{3}\alpha_{8}+2\sqrt{\alpha_{8}\alpha_{9}}=0.
\end{eqnarray}
We obtain from Eq. (11) the polynomial $\rho(s)$ as
$\rho(s)=s^{\alpha_{10}-1}(1-\alpha_{3}s)^{\frac{\alpha_{11}}{\alpha_{3}}-\alpha_{10}-1}$
and substituting it into Eq. (10) gives
\begin{eqnarray}
y_{n}(s)=P_{n}^{(\alpha_{10}-1,\frac{\alpha_{11}}{\alpha_{3}}-\alpha_{10}-1)}(1-2\alpha_{3}s)\,,
\end{eqnarray}
where $\alpha_{10}=\alpha_{1}+2\alpha_{4}+2\sqrt{\alpha_{8}}$,
$\alpha_{11}=\alpha_{2}-2\alpha_{5}+2(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}})$
and $P_{n}^{(\alpha,\beta)}(1-2\alpha_{3}s)$ are the Jacobi
polynomials. From Eq. (12), one obtains
\begin{eqnarray}
\phi(s)=s^{\alpha_{12}}(1-\alpha_{3}s)^{-\alpha_{12}-\frac{\alpha_{13}}{\alpha_{3}}}\,,
\end{eqnarray}
then the general solution $\Psi(s)=\phi(s)y(s)$ becomes
\begin{eqnarray}
\Psi(s)=s^{\alpha_{12}}(1-\alpha_{3}s)^{-\alpha_{12}-\frac{\alpha_{13}}{\alpha_{3}}}
P_{n}^{(\alpha_{10}-1,\frac{\alpha_{11}}{\alpha_{3}}-\alpha_{10}-1)}(1-2\alpha_{3}s).
\end{eqnarray}
where $\alpha_{12}=\alpha_{4}+\sqrt{\alpha_{8}}$ and
$\alpha_{13}=\alpha_{5}-(\sqrt{\alpha_{9}}+\alpha_{3}\sqrt{\alpha_{8}}\,)$.
Let us study the case where the parameter $\alpha_3=0$. In this
type of problem, the eigenfunctions become
\begin{eqnarray}
\Psi(s)=s^{\alpha_{12}}\,e^{\alpha_{13}s}\,L^{\alpha_{10}-1}_{n}(\alpha_{11}s)\,,
\end{eqnarray}
when the limits $\lim_{\alpha_3 \rightarrow
0}\,P^{(\alpha_{10}-1\,,\frac{\alpha_{11}}{\alpha_{3}}-\alpha_{10}-1)}_{n}(1-2\alpha_{3}s)=
L^{\alpha_{10}-1}_{n}(\alpha_{11}s)$ and $\lim_{\alpha_3
\rightarrow
0}\,(1-\alpha_{3}s)^{-\,\alpha_{12}-\frac{\alpha_{13}}{\alpha_{3}}}=
e^{\alpha_{13}s}$ are satisfied and the corresponding energy
spectrum is
\begin{eqnarray}
\alpha_{2}n-2\alpha_{5}n+(2n+1)(\sqrt{\alpha_{9}\,}&-&\alpha_{3}\sqrt{\alpha_{8}\,}\,)+n(n-1)\alpha_{3}
+\alpha_{7}\nonumber\\&+&2\alpha_{3}\alpha_{8}-2\sqrt{\alpha_{8}\alpha_{9}\,}+\alpha_{5}=0\,.
\end{eqnarray}
The generalized Morse potential is given by [44]
\begin{eqnarray}
V_M(x)=De^{-2\beta x}-2De^{-\beta x}\,,
\end{eqnarray}
where $x=(r/r_0)-1$\,,\,$\beta=\alpha r_0$\,,\,$D$ is the
dissociation energy, $r_0$ is the equilibrium distance, and
$\alpha$ is the potential width. The term proportional to $1/r^2$
in Eq. (4) can be expanded about $x=0$ [45]
\begin{eqnarray}
\frac{\kappa(\kappa-1)}{r^2}=\,\frac{a_{0}}{(1+x)^2}=a_{0}(1-2x+3x^2+\ldots)\,;\,\,
a_{0}=\,\frac{\kappa(\kappa-1)}{r_0^2}\,,
\end{eqnarray}
Instead, we now replace this term by the potential [45]
\begin{eqnarray}
\tilde{V}_M(x)=a_{0}(a_{1}+a_{2}e^{-\beta x}+a_{3}e^{-2\beta
x})\,,
\end{eqnarray}
Expanding the potential $\tilde{V}_M(x)$ around $x=0$, and
combining equal powers with Eq. (28), one can find the arbitrary
constants in the new form of the potential as
\begin{eqnarray}
a_{1}=1-\,\frac{3}{\beta}\,+\,\frac{3}{\beta^2}\,\,;\,\,\,a_{2}=\,\frac{4}{\beta}\,-\,\frac{6}{\beta^2}\,\,;\,\,\,
a_{3}=-\,\frac{1}{\beta}\,+\,\frac{3}{\beta^2}\,.
\end{eqnarray}
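For clarity, we spell out this intermediate step: expanding Eq. (29) to second order in $x$ and equating the coefficients of $1$, $x$, and $x^{2}$ with those in Eq. (28) gives
\[
a_{1}+a_{2}+a_{3}=1\,,\qquad \beta\,(a_{2}+2a_{3})=2\,,\qquad \frac{\beta^{2}}{2}\,(a_{2}+4a_{3})=3\,,
\]
whose solution is Eq. (30).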
Since Eq. (4) cannot be solved analytically because of the last term in
the equation, we prefer to use the identity
$dm(r)/dr=-d\Sigma(r)/dr$ to eliminate this term. We obtain the
mass function from this identity as
\begin{eqnarray}
m(x)=m_{0}+m_{1}e^{-\beta x}+m_{2}e^{-2\beta x}\,,
\end{eqnarray}
where $m_{0}$ corresponds to the integration constant, and the
parameters $m_{1}$ and $m_{2}$ are $2D$ and $-D$, respectively.
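To make the origin of these values explicit (assuming, consistently with the quoted values of $m_{1}$ and $m_{2}$, that the "sum" potential $\Sigma(r)$ is also taken as the generalized Morse potential of Eq. (27)), integrating $dm(r)/dr=-d\Sigma(r)/dr$ gives
\[
m(x)=m_{0}-\Sigma(x)=m_{0}+2De^{-\beta x}-De^{-2\beta x}\,,
\]
so that $m_{1}=2D$ and $m_{2}=-D$.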
The parameter $m_{0}$ will denote the rest mass of the Dirac
particle. By using the potential form given by Eq. (29) in place
of Eq. (28), inserting the mass function in Eq. (31), setting the
"difference" potential $\Delta(r)$ to the generalized Morse potential
in Eq. (27) and using the new variable $s=e^{-\beta x}$, we have
\begin{eqnarray}
\Big\{\,\frac{d^2}{ds^2}\,+\,\frac{1}{s}\,\frac{d}{ds}\,&+&\frac{1}{s^2}\Big[
-\delta^2(a_{0}a_{1}+m^2_{0}-E^2)-\delta^2[a_{0}a_{2}+(m_{0}-E)(m_{1}+2D)]
s\nonumber\\&-&\delta^2[a_{0}a_{3}+(m_{0}-E)(m_{2}-D)]s^2\Big]\Big\}\phi_{n\kappa}(s)=0\,.
\end{eqnarray}
Comparing Eq. (32) with Eq. (13) gives the parameter set
\begin{eqnarray}
\begin{array}{ll}
\alpha_1=1\,, & -\xi_1=-\delta^2[a_{0}a_{3}+(m_{0}-E)(m_{2}-D)] \\
\alpha_2=0\,, &
\xi_2=-\delta^2[a_{0}a_{2}+(m_{0}-E)(m_{1}+2D)] \\
\alpha_3=0\,, & -\xi_3=-\delta^2(a_{0}a_{1}+m^2_{0}-E^2) \\
\alpha_4=0\,, & \alpha_5=0
\\ \alpha_6=\xi_1\,, & \alpha_7=-\xi_2 \\
\alpha_8=\xi_3\,, & \alpha_9=\xi_1 \\
\alpha_{10}=1+2\sqrt{\xi_3}\,, & \alpha_{11}=2\sqrt{\xi_1} \\
\alpha_{12}=\sqrt{\xi_3}\,, & \alpha_{13}=-\sqrt{\xi_1}
\end{array}
\end{eqnarray}
where $\delta=1/\alpha$. We write the energy eigenvalue equation
of the generalized Morse potential by using Eq. (26)
\begin{eqnarray}
2\delta\sqrt{a_{0}a_{1}+m^2_{0}-E^2\,}
-\delta\,\frac{a_{0}a_{2}+(m_{0}-E)(m_{1}+2D)}{\sqrt{a_{0}a_{3}+(m_{0}-E)(m_{2}-D)\,}}
=2n+1\,.
\end{eqnarray}
Since the negative energy eigenstates exist in the case of the
pseudospin symmetry [14, 15, 16], we choose the negative energy
solutions of Eq. (34). In Table I, we give some numerical values
of the negative bound state energies obtained from Eq. (34) for
the $CO$ molecule in atomic units, where we use the input parameter
set $D=11.2256$ eV, $r_{0}=1.1283$ $\AA$, $m_{0}=6.8606719$
amu, and $\alpha=2.59441$ [46], and summarize our results for different
$\tilde{\ell}$, and $n$ values. The corresponding lower spinor
component can be written by using Eq. (25)
\begin{eqnarray}
\phi(s)=s^{w_{1}}\,e^{-\,w_{2}s}L^{2w_{1}}_{n}(2w_{2}s)\,,
\end{eqnarray}
where $w_{1}=\delta\sqrt{a_{0}a_{1}+m^2_{0}-E^2\,}$, and
$w_{2}=\delta\sqrt{a_{0}a_{3}+(m_{0}-E)(m_{2}-D)\,}$.
Let us now study two special limits, the pseudospin and spin symmetry
cases, in the case of constant mass.
\subsubsection{Pseudospin Case}
The Dirac equation has the exact pseudospin symmetry if the "sum"
potential satisfies the condition $d\Sigma(r)/dr=0$, i.e.
$\Sigma(r)=A (const.)$ [14]. The parameters in our formalism
become $m_{1}=m_{2}=0$. Setting the "difference" potential
$\Delta(r)$ to the generalized Morse potential in Eq. (27), using
Eq. (29) for the term proportional to $1/r^2$, and using the new
variable $s=e^{-\beta x}$, we have from Eq. (4)
\begin{eqnarray}
\Big\{\,\frac{d^2}{ds^2}\,+\,\frac{1}{s}\,\frac{d}{ds}\,&+&\frac{1}{s^2}\Big[
-\delta^2[a_{0}a_{1}+M(m_{0}+E)]-\delta^2(2MD+a_{0}a_{2})
s\nonumber\\&+&\delta^2(MD-a_{0}a_{3})s^2\Big]\Big\}\phi(s)=0\,.
\end{eqnarray}
where $M=m_{0}+A-E$. By following the same procedure, the energy
eigenvalue equation for the exact pseudospin symmetry in the case
of constant mass is written
\begin{eqnarray}
2\sqrt{a_{0}a_{1}+M(m_{0}+E)\,}=\frac{a_{0}a_{2}+2DM}{\sqrt{a_{0}a_{3}-DM\,}}+\alpha(2n+1)\,.
\end{eqnarray}
and the corresponding wave functions read as
\begin{eqnarray}
\phi^{m_{1}=m_{2}=0}(s)=s^{w'_{1}}\,e^{-\,w'_{2}s}L^{2w'_{1}}_{n}(2w'_{2}s)\,,
\end{eqnarray}
where $w\,'_{1}=\delta\sqrt{a_{0}a_{1}+M(m_{0}+E)\,}$\,, and
$w\,'_{2}=\delta\sqrt{a_{0}a_{3}-DM\,}$\,. We must take into account
the negative bound state solutions of Eq. (37) because only
the negative eigenvalues exist in the exact pseudospin
symmetry [14, 15, 16].
\subsubsection{Spin Case}
The spin symmetry appears in the Dirac equation if the condition
is satisfied that $\Delta(r)=V_{v}(r)-V_{s}(r)=A(const.)$. In this
case, we have from Eq. (3)
\begin{eqnarray}
\Big\{\frac{d^2}{dr^2}-\frac{\kappa(\kappa+1)}{r^2}-(m_{0}+E-A)(m_{0}-E+\Sigma(r))\Big\}
\chi(r)=0\,,
\end{eqnarray}
where we set the "sum" potential to the generalized Morse potential
given in Eq. (27), and use, for the term proportional
to $1/r^2$, an approximation of the same form as Eq. (29) [45]
\begin{eqnarray}
\tilde{V}_M(x)=b_{0}(b_{1}+b_{2}e^{-\beta x}+b_{3}e^{-2\beta
x})\,,
\end{eqnarray}
where $b_{0}=\kappa(\kappa+1)/r^2_{0}$\,, and the parameters
$b_{i}\,(i=1, 2, 3)$ have the same forms as those given in Eq. (30). Using the variable
$s=e^{-\beta x}$, and inserting Eq. (40) into Eq. (39), we obtain
\begin{eqnarray}
\Big\{\,\frac{d^2}{ds^2}\,+\,\frac{1}{s}\,\frac{d}{ds}\,&+&\frac{1}{s^2}\Big[
-\delta^2[b_{0}b_{1}+M'(m_{0}-E)]+\delta^2(2DM\,'-b_{0}b_{2})
s\nonumber\\&-&\delta^2(b_{0}b_{3}+DM\,')s^2\Big]\Big\}\chi(s)=0\,.
\end{eqnarray}
where $M\,'=m_{0}+E-A$. We write the energy eigenvalue equation
and the corresponding wave functions in the spin symmetry limit,
respectively,
\begin{eqnarray}
\frac{\delta[2DM\,'-b_{0}b_{2}]}{\sqrt{b_{0}b_{3}+DM\,'\,}}
+2\delta\sqrt{b_{0}b_{1}+M\,'(m_{0}-E)\,}=2n+1\,,
\end{eqnarray}
and
\begin{eqnarray}
\chi^{m_{1}=m_{2}=0}(s)=s^{w''_{1}}\,e^{-\,w''_{2}s}L^{2w''_{1}}_{n}(2w''_{2}s)\,,
\end{eqnarray}
where $w\,''_{1}=\delta\sqrt{b_{0}b_{1}+M'(m_{0}-E)\,}$\,, and
$w\,''_{2}=\delta\sqrt{b_{0}b_{3}+DM'\,}$\,. We must take into
account the positive energy solutions in Eq. (42) in the case of
the exact spin symmetry [14, 15, 16].
In summary, we have approximately solved the effective mass Dirac
equation for the generalized Morse potential for arbitrary
spin-orbit quantum number $\kappa$ in the position-dependent mass
background. We have found the eigenvalue equation and the
corresponding two-component spinors in terms of Laguerre
polynomials by using the parametric NU method within the framework
of an approximation to the term proportional to $1/r^2$\,. We have
also obtained the energy eigenvalue equations and corresponding
wave functions for the exact pseudospin and spin symmetry limits in
the case of constant mass. We have observed that our analytical
results in the case of the pseudospin symmetry are in good agreement
with the ones obtained in the literature.
\begin{table}
\begin{ruledtabular}
\caption{Energy eigenvalues for the $CO$ molecule for different
values of $\tilde{\ell}$ and $(n,\kappa)$ in the case of position
dependent mass.}
\begin{tabular}{ccccc}
$\tilde{\ell}$ & $n$ & $\kappa$ & state & $E<0$ \\ \hline
1 & 1 & -1 & $1s_{1/2}$ & 6.15913020 \\
2 & 1 & -2 & $1p_{3/2}$ & 6.52968379 \\
3 & 1 & -3 & $1d_{5/2}$ & 6.89146288 \\
4 & 1 & -4 & $1f_{7/2}$ & 7.24974882 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\end{document} |
\begin{document}
\title{re:Linde et al. (2021): The Bayes factor, HDI-ROPE and frequentist equivalence tests can all be reverse engineered -almost exactly- from one another}
\abstract{{\footnotesize{ABSTRACT - Following an extensive simulation study comparing the operating characteristics of three different procedures used for establishing equivalence (the frequentist ``TOST'', the Bayesian ``HDI-ROPE'', and the Bayes factor interval null procedure), Linde et al. (2021) conclude with the recommendation that ``researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence.'' We redo the simulation study of Linde et al. (2021) in its entirety but with the different procedures calibrated to have the same predetermined maximum type 1 error rate. Our results suggest that, when calibrated in this way, the Bayes factor, HDI-ROPE, and frequentist equivalence tests all have -almost exactly- the same type 2 error rates. In general, any advocacy of frequentist testing as better or worse than Bayesian testing on the basis of empirical findings seems dubious at best. If one decides on which underlying principle to subscribe to in tackling a given problem, then the method follows naturally. Bearing in mind that each procedure can be reverse-engineered from the others (at least approximately), trying to use empirical performance to argue for one approach over another seems like tilting at windmills.}}
}
{\footnotesize{\paragraph{Acknowledgments - } Many thanks to the authors of \citet{linde2020decisions} who provided open access to their well documented code, without which our work would not have been possible.}}
{\footnotesize{\paragraph{Code - } Code to replicate our simulation study and all of the Figures is available at \url{https://github.com/harlanhappydog/reLinde}.}}
\pagebreak
\section{Introduction}
\citet{linde2020decisions} describe and compare three different approaches for finding evidence of equivalence between two groups:
\begin{enumerate}
\item ``TOST'': the frequentist two one-sided t-tests procedure with $\alpha=0.05$ \citep{hodges1954testing, schuirmann1987comparison, westlake1976symmetrical};
\item ``HDI-ROPE'': the Bayesian highest density interval (HDI) region of practical equivalence procedure with a 95\% HDI \citep{kruschke2011bayesian, kruschke2013bayesian}; and
\item ``BF'': the Bayes factor interval null procedure with one of two different BF decision thresholds, either $BF_{thr} = 3$ or $BF_{thr} = 10$; see \citet{morey2011bayes}.
\end{enumerate}
Following an extensive simulation study, \citet{linde2020decisions} conclude with the recommendation that ``researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence.'' This recommendation is based on the finding that the TOST and HDI-ROPE have ``limited discrimination capabilities when the sample size is relatively small.'' However, we suspect that the same remark could be made about the BF procedure if it were calibrated so as to maintain a predetermined maximum type 1 error rate.
Motivated by this suspicion, we repeat the simulation study of \citet{linde2020decisions} in its entirety to determine how the different methods compare when they are calibrated to all have the same maximum type 1 error rate. \citet{linde2020decisions} write: ``In general, it is important to evaluate statistical testing approaches based on both types of errors,'' i.e., both the type 1 error and the type 2 error. However, the degree of type 1 error is dependent on the degree of type 2 error and vice-versa. Therefore, in order to evaluate and compare the statistical power of different tests on a level playing field, one must proceed by first calibrating each test to have the same predetermined maximum type 1 error rate.
\section{Methods}
Our simulation study is identical to the one conducted by \citet{linde2020decisions} with a few notable exceptions.
First, for frequentist equivalence testing we consider the so-called ``optimal test'' based on the folded-Normal distribution \citep{romano2005optimal} in addition to the TOST. It is ``well known in the mathematical statistics literature'' \citep{mollenhoff2019efficient} that the optimal test is more powerful than the TOST, particularly for small sample sizes. While the TOST procedure for frequentist equivalence testing works well for moderate and large sample sizes, it is sub-optimal for small sample sizes; see \cite{lehmann2006testing} and \cite{wellek2010testing}. Note that both tests are asymptotically equivalent (i.e., essentially identical for sufficiently large sample sizes).
The optimal test can be summarized as follows. Reject the null hypothesis ($H_{0}: |\delta| \ge m$), whenever:
\begin{equation}
|\bar{X}_{1} - \bar{X}_{2}| < u_{\alpha}
\end{equation}
\noindent where $\delta$ is the true difference in group means, $m$ is the equivalence margin, $\bar{X}_{1}$ is the observed sample mean for the first group, $\bar{X}_{2}$ is the observed sample mean for the second group, and $u_{\alpha}$ is the $\alpha$-quantile of the folded Normal distribution, $N_{F}(m, \hat{\sigma}_{P}^{2})$, with location parameter equal to the margin, $m$, and scale parameter equal to the estimated pooled variance of the data, $\hat{\sigma}_{P}^{2}$. For full details, see Section 2.2 of \citet{mollenhoff2019efficient}. In the Appendix, we provide simple R code that can be used to conduct this test.
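For orientation, a minimal R sketch of this decision rule is given below. This is not the code from the Appendix: it assumes equal group sizes, uses the pooled standard error of the difference in means as the folded-Normal scale, and obtains $u_{\alpha}$ by root-finding on the folded-Normal CDF.
\begin{verbatim}
# Minimal sketch of the 'optimal' equivalence test based on the
# folded-Normal critical value (cf. Section 2.2 of Moellenhoff et al., 2019).
optimal_equiv_test <- function(x1, x2, m, alpha = 0.05) {
  n1 <- length(x1); n2 <- length(x2)
  diff_obs <- abs(mean(x1) - mean(x2))
  # pooled standard error of the difference in means
  s_p <- sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2))
  se  <- s_p * sqrt(1 / n1 + 1 / n2)
  # CDF of the folded Normal with location m and scale se
  pfolded <- function(x) pnorm((x - m) / se) + pnorm((x + m) / se) - 1
  # alpha-quantile u_alpha of the folded Normal, by root-finding
  u_alpha <- uniroot(function(x) pfolded(x) - alpha,
                     lower = 0, upper = m + 10 * se)$root
  list(reject_H0 = diff_obs < u_alpha, diff_obs = diff_obs, u_alpha = u_alpha)
}

# Example usage with simulated data:
set.seed(1)
x1 <- rnorm(100, 0, 1); x2 <- rnorm(100, 0.02, 1)
optimal_equiv_test(x1, x2, m = 0.3)
\end{verbatim}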
{Second, we generate 25,000 datasets for each individual combination of 4 global parameters:
\begin{enumerate}
\item the population effect size ($\delta=\{0,0.01,0.02,\ldots,0.5\}$),
\item the sample size in each group ($n=\{50,100,250,500\}$),
\item the equivalence margin ($m=\{0.1,0.2,0.3\}$), and
\item the prior scale ($r=\{0.5/\sqrt{2}, 1/\sqrt{2}, 2/\sqrt{2}\}$).
\end{enumerate}
\noindent To be clear, for each of the 1,836 ($=51 \times 4 \times 3 \times 3$) unique scenarios, we simulate 25,000 individual independent datasets.
Unlike \citet{linde2020decisions}, we define $m$ as the \textit{unstandardized} equivalence margin. We decided to define $m$ as the unstandardized margin rather than as the standardized margin so as to make things as simple as possible (the added simplicity will help us with discussing decision boundaries in Section 4) and because we are concerned about the improper use of equivalence tests defined with standardized margins; see \citet{campbell2020equivalence}. (To be brief, \citet{lakens2017equivalence}’s suggestion that one may simply define the equivalence margin in terms of the observed standard deviation is technically incorrect. Recall that a valid frequentist hypothesis cannot be defined in terms of the observed data. As such, if the equivalence margin is defined as a function of the observed standard deviation, then the equivalence test is invalid.)
For each dataset, we conduct four different procedures (the frequentist TOST, the frequentist optimal test, the Bayesian BF procedure, and the Bayesian HDI-ROPE procedure) and record:
\begin{itemize}
\item the $p$-values obtained from the frequentist equivalence testing procedures,
\item the BF obtained from the Bayes factor interval null procedure, and
\item the maximum probability of the HDI at which the HDI-ROPE procedure will predict equivalence.
\end{itemize}
We specifically chose to conduct 25,000 simulation runs so as to keep computing time within a reasonable limit while also reducing the Monte Carlo standard error to a negligible level\footnote{ Note that since the prior-scale is irrelevant for the two frequentist procedures, we essentially obtain 75,000 simulations for the frequentist results for each of 612 ($=51 \times 4 \times 3$) unique scenarios. We will consider only the first 25,000 of these (and disregard the remaining 50,000) when reporting the results so that the precision of the results for all methods is comparable.}}. Note that for estimating a false positive rate that is truly $\alpha=0.50$, the Monte Carlo SE will be approximately $0.003 \approx \sqrt{0.5(1-0.5)/25,000}$; see \cite{morris2019using}. We ran all simulations in R, based on the code provided by \cite{linde2020decisions}, using parallel nodes of the Compute Canada cluster \citep{baldwin2012compute}.
Finally, we proceed by calibrating the Bayesian procedures (the HDI-ROPE and BF procedures) so that they maintain a predetermined maximum type 1 error rate of $\alpha$. This is done by adjusting each procedure such that, for a given sample size, the proportion of equivalence predictions obtained is exactly $\alpha$ when the margin is equal to the population effect size, i.e., when $m = \delta$. The frequentist testing procedures will not be calibrated since, in theory, they should require no calibration as they are specifically designed to observe this property (at least asymptotically). The scenario in which $m=\delta$ represents the boundary of the null hypothesis, $H_{0}: |\delta| \ge m$. The calibration will therefore ensure that whenever the null hypothesis is true (i.e., whenever $|\delta| \ge m$), the proportion of equivalence predictions will be no greater than $\alpha$.
\cite{linde2020decisions} seem to suggest that, when $\delta = m$ (at the boundary of the null hypothesis), it is desirable to have ``a proportion of equivalence decisions close to 0.5'' so that the test is ``an unbiased classifier [which] maximizes accuracy.'' We therefore consider, for the results of our simulation, the maximum type 1 error set with $\alpha=0.50$. In addition, we will also report the results with $\alpha=0.05$, a common choice in frequentist analyses. The ideal value for $\alpha$ will no doubt depend on the specific context of one’s analysis \citep{lakens2018justify}. While it is true that $\alpha=0.5$ is uncommon in practice, we note that $\alpha=0.05$ is an equally arbitrary threshold which has come to prominence due primarily to a historical happenstance \citep{kennedy2019before}.
The BF procedure can be calibrated to be more or less conservative by setting a higher or lower BF decision threshold and/or by using a smaller/larger Cauchy prior scale. However, \citet{linde2020decisions} note that calibration by selecting a smaller/larger Cauchy prior scale ``is not advised.'' We proceed in the simulation study by calibrating the BF procedure by adjusting the BF decision threshold. Calibration of the HDI-ROPE procedure can be done by selecting a smaller/larger prior scale and/or by adjusting the probability of the HDI. We proceed in the simulation study by calibrating the HDI-ROPE procedure by adjusting the probability of the HDI.
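To illustrate the calibration idea concretely, the following R sketch calibrates the BF decision threshold for a single scenario. Here \texttt{interval\_null\_bf} is a hypothetical stand-in for the Bayes factor computation used in the simulation; the essential point is that the calibrated threshold is simply the empirical $(1-\alpha)$-quantile of the Bayes factors obtained at the boundary of the null ($\delta = m$).
\begin{verbatim}
calibrate_bf_threshold <- function(n, m, r_scale, alpha, n_sim = 25000) {
  bf <- replicate(n_sim, {
    x1 <- rnorm(n, mean = 0, sd = 1)
    x2 <- rnorm(n, mean = m, sd = 1)    # true effect size at the margin
    # hypothetical helper returning the interval-null Bayes factor:
    interval_null_bf(x1, x2, margin = m, r = r_scale)
  })
  # threshold such that the proportion of BFs exceeding it is alpha
  unname(quantile(bf, probs = 1 - alpha))
}
\end{verbatim}
The HDI-ROPE calibration proceeds analogously, taking the $(1-\alpha)$-quantile of the recorded maximum HDI probabilities at which equivalence would still be predicted.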
\section{Results}
The results of the simulation study with $\alpha=0.05$ are shown in the left-hand panels of Figures \ref{fig:my_label1}, \ref{fig:my_label2}, and \ref{fig:my_label3}, for equivalence margins of $m = 0.1$, $m = 0.2$, and $m = 0.3$ respectively. The results of the simulation study with $\alpha=0.5$ are shown in the right-hand panels. We have three main comments.
First and foremost, note that, when calibrated, the frequentist optimal test, the frequentist-calibrated HDI-ROPE and the frequentist-calibrated BF all display an almost identical probability of predicting equivalence for all values of $n$, $m$, $\delta$, $r$, and $\alpha$. This suggests that, for the vast majority of scenarios, regardless of which of these approaches is used for analysis, the result could be made the same by adopting the same calibration. With regards to the TOST, our results are similar to those reported by \citet{linde2020decisions}: the TOST, when $\alpha=0.05$, has little power to establish equivalence when $n$ and/or $m$ are small. With $\alpha=0.5$, the TOST appears to be overly conservative when $n$ and $m$ are small (see top panels in Figure \ref{fig:my_label1}). Note that we were unable to adequately calibrate the HDI-ROPE procedure for five small sample size scenarios with $\alpha=0.5$ and $m=0.1$ due to insufficient numerical accuracy of our results. In these five scenarios (which can be identified as those with "NA" indicated for the Z1, Z2, and Z3 values in Figure \ref{fig:my_label1}), the probability of the HDI that was required for calibration was less than 0.0001 and impossible to determine with sufficient accuracy.
Second, when calibrated to obtain a specific predetermined maximum type 1 error rate, the Bayesian procedures appear to operate identically regardless of one's choice of prior-scale. This is immediately obvious in Figures \ref{fig:my_label1}-\ref{fig:my_label3}: the three blue (green) lines -dashed ($r=0.5/\sqrt{2}$), solid ($r=1/\sqrt{2}$), and dotted ($r=2/\sqrt{2}$)- for the HDI-ROPE (the BF), are indistinguishable from one another. This suggests that choosing a smaller or larger prior-scale is essentially irrelevant, at least from a frequentist perspective, for a calibrated Bayesian procedure.
Finally, with $\alpha=0.05$, the probability values required to calibrate the HDI-ROPE procedure range from 0.34 to 0.94 and appear to increase with decreasing values of $r$, with increasing values of $m$, and with increasing values of $n$. With $\alpha=0.5$, the probability values required to calibrate the HDI-ROPE procedure are much much lower (and in some cases impossible to determine with sufficient accuracy). We suspect that as $n$ increases, the probability value required to calibrate the HDI-ROPE procedure will trend towards $(1-2\alpha)$ since the credible interval will closely approximate the confidence interval with a sufficiently large $n$. The BF decision thresholds required to calibrate the BF procedure with $\alpha=0.05$ range from 4.0 to 196.2, and with $\alpha=0.5$, range from 1.6 to 20.9. The BF decision thresholds appear to increase with increasing values of $r$, with increasing values of $n$, and with decreasing values of $\alpha$.
\begin{figure}
\caption{The proportion of equivalence predictions with an equivalence margin of $m = 0.1$ (vertical dashed line). Panels in the left-hand column correspond to results for $\alpha=0.05$ (horizontal dashed line), and panels in the right-hand column correspond to results for $\alpha=0.50$ (horizontal dashed line). Each row of panels contains results for a different sample size ($n$). Colours denote the four different inferential approaches. Line types denote the three different priors (for Bayesian procedures). Each coloured line corresponds to simulation results from 25,000 simulation runs. Predictions of equivalence are correct if the population effect size ($\delta$) lies within the equivalence interval (i.e., if $|\delta| < m$), whereas predictions of equivalence are incorrect if $\delta$ lies outside the equivalence interval (i.e., if $|\delta| \ge m$). Bayesian metrics are calibrated such that the proportion of equivalence predictions is exactly equal to $\alpha$ (horizontal dashed line) when $\delta = m$ (at the intersection of the horizontal and vertical dashed lines). The calibration for the Bayesian procedures is specified by the Z1, Z2, and Z3 probability values for the HDI-ROPE procedure and by the B1, B2, and B3 decision threshold values for the BF procedure. Note that the frequentist `optimal test,' the BF procedures, and the HDI-ROPE procedures, all produce a very similar (almost identical) proportion of equivalence predictions and therefore the seven different curved lines (the red, blue and green lines) are not independently visible in any of the panels. Also note that calibration of the HDI-ROPE procedure for five scenarios above (those with "NA" indicated for the Z1, Z2, and Z3 probability values) was not possible due to numerical limitations. }
\label{fig:my_label1}
\end{figure}
\begin{figure}
\caption{The proportion of equivalence predictions with an equivalence margin of $m = 0.2$ (vertical dashed line). Panels in the left-hand column correspond to results for $\alpha=0.05$ (horizontal dashed line), and panels in the right-hand column correspond to results for $\alpha=0.50$ (horizontal dashed line). Each row of panels contains results for a different sample size ($n$). Colours denote the four different inferential approaches. Line types denote the three different priors (for Bayesian procedures). Each coloured line corresponds to simulation results from 25,000 simulation runs. Predictions of equivalence are correct if the population effect size ($\delta$) lies within the equivalence interval (i.e., if $|\delta| < m$), whereas predictions of equivalence are incorrect if $\delta$ lies outside the equivalence interval (i.e., if $|\delta| \ge m$). Bayesian metrics are calibrated such that the proportion of equivalence predictions is exactly $\alpha=0.5$ (horizontal dashed line) when $\delta = m$ (at the intersection of the horizontal and vertical dashed lines). The calibration for the Bayesian procedures is specified by the Z1, Z2, and Z3 probability values for the HDI-ROPE procedure and by the B1, B2, and B3 decision threshold values for the BF procedure. Note that all of the procedures (with the exception of the TOST procedure for $\alpha=0.05$ and $n=50, 100$) produce a very similar (almost identical) proportion of equivalence predictions and therefore the eight different curved lines (the purple, blue, green, and maroon lines) are not independently visible. }
\label{fig:my_label2}
\end{figure}
\begin{figure}
\caption{The proportion of equivalence predictions with an equivalence margin of $m = 0.3$ (vertical dashed line). Panels in the left-hand column correspond to results for $\alpha=0.05$ (horizontal dashed line), and panels in the right-hand column correspond to results for $\alpha=0.50$ (horizontal dashed line). Each row of panels contains results for a different sample size ($n$). Colours denote the four different inferential approaches. Line types denote the three different priors (for Bayesian procedures). Each coloured line corresponds to simulation results from 25,000 simulation runs. Predictions of equivalence are correct if the population effect size ($\delta$) lies within the equivalence interval (i.e., if $|\delta| < m$), whereas predictions of equivalence are incorrect if $\delta$ lies outside the equivalence interval (i.e., if $|\delta| \ge m$). Bayesian metrics are calibrated such that the proportion of equivalence predictions is exactly $\alpha=0.5$ (horizontal dashed line) when $\delta = m$ (at the intersection of the horizontal and vertical dashed lines). The calibration for the Bayesian procedures is specified by the Z1, Z2, and Z3 probability values for the HDI-ROPE procedure and by the B1, B2, and B3 decision threshold values for the BF procedure. Note that all of the procedures (with the exception of the TOST procedure for $\alpha=0.05$ and $n=50$) produce a very similar (almost identical) proportion of equivalence predictions and therefore the eight different curved lines (the purple, blue, green, and maroon lines) are not independently visible. }
\label{fig:my_label3}
\end{figure}
\section{Discussion}
The simulation study results suggest that the Bayes factor, HDI-ROPE, and frequentist equivalence tests can all be reverse engineered to achieve the same operating characteristics as one another. While the simulation study is limited to two-sample normally distributed data, we suspect that a similar conclusion could be made in other scenarios. In order to better understand why, it is useful to consider the sufficient statistics required for each procedure.
For a given value of the observed absolute difference in sample means ($|\bar{X}_{1} - \bar{X}_{2}|$), a given value of the observed pooled standard deviation ($\hat{\sigma}_{P}$), and a fixed sample size ($n$) and margin ($m$), there is a single unique $p$-value that one will obtain from the frequentist optimal test, a single unique $p$-value one will obtain from the TOST, a single unique BF one will obtain from the BF procedure (with a given prior-scale), and a single unique probability at which the HDI-ROPE procedure (with a given prior-scale) will predict equivalence. As such, we can easily determine a 2-dimensional (dimension 1: $|\bar{X}_{1} - \bar{X}_{2}|$; dimension 2: $\hat{\sigma}_{P}$) decision threshold for each of the four procedures.
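For example, assuming the pooled-variance TOST with $n$ observations per group, the TOST decision boundary can be written explicitly: equivalence is predicted precisely when
\[
|\bar{X}_{1} - \bar{X}_{2}| \; < \; m - t_{1-\alpha,\,2n-2}\,\hat{\sigma}_{P}\sqrt{2/n}
\]
(and never when the right-hand side is negative), a straight line in the $(|\bar{X}_{1} - \bar{X}_{2}|, \hat{\sigma}_{P})$ plane; the boundaries of the other procedures are most easily obtained numerically.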
Figure \ref{fig:boundary} plots the different decision boundary lines for each of the four procedures for $\alpha=0.05$, $n=100$ and $m=0.3$, and with a prior-scale of $r=1/\sqrt{2}$ for the Bayesian procedures (these are calibrated based on the simulation study results, which gave $B1=58.1$, $Z1=0.913$). For reference, we have overlaid on this plot the two-dimensional density of the distribution of observed $|\bar{X}_{1} - \bar{X}_{2}|$ and $\hat{\sigma}_{P}$ values obtained from simulating ten million independent datasets from $X_{1} \sim N(0,1)$ and $X_{2} \sim N(0,1)$. From this figure, we conclude that, with regards to where the vast majority of the data will be observed (i.e., the area outlined by the grey contour lines), the decision boundaries for all four procedures are nearly identical.
Figure \ref{fig:margin to delta} plots the maximum value of $|\bar{X}_{1} - \bar{X}_{2}|$ that allows one to predict equivalence for a range of $m$, and for $n=100$ and a given $\hat{\sigma}_{P}=1$. We set a prior-scale of $r=1/\sqrt{2}$ for the Bayesian procedures which are calibrated based on the results of the simulation study. The frequentist procedures are calibrated with $\alpha=0.05$. For the HDI-ROPE, the BF and the TOST, note that there are values of $m$ for which one cannot predict equivalence, regardless of the value of $|\bar{X}_{1} - \bar{X}_{2}|$. However, for the optimal frequentist test, there will always be a value for $|\bar{X}_{1} - \bar{X}_{2}|$ small enough to predict equivalence no matter how small the value of $m$.
\begin{figure}
\caption{The different decision boundary lines, in terms of $|\bar{X}_{1} - \bar{X}_{2}|$ and $\hat{\sigma}_{P}$, for each of the four procedures (calibrated as described in the text), with $\alpha=0.05$, $n=100$, $m=0.3$, and a prior-scale of $r=1/\sqrt{2}$ for the Bayesian procedures.}
\label{fig:boundary}
\end{figure}
\begin{figure}
\caption{The maximum value of $|\bar{X}_{1} - \bar{X}_{2}|$ that allows one to predict equivalence, plotted for a range of equivalence margins $m$, with $n=100$ and $\hat{\sigma}_{P}=1$.}
\label{fig:margin to delta}
\end{figure}
In summary, it appears that, for fixed $n$, we can almost exactly reproduce a frequentist test at a given level $\alpha$ by adopting a Bayesian procedure and reverse-engineering it (by determining necessary values for either the $BF$ threshold or the probability of the HDI). This street, however, is open to two-way traffic. That is, we should also be able to mimic a particular Bayesian test with a frequentist test, by reverse-engineering the $\alpha$ level. See Figure \ref{fig:freq3m03} in which our simulation study results for $m=0.3$ are plotted again but with the $\alpha$ level chosen for each scenario so that, at $\delta=m$, the proportion of equivalence predictions made with the optimal frequentist test matches the proportion made with the Bayes factor test, with thresholds of 3 and 10. The notion of ``two-way traffic'' becomes clear when we are reminded that the Bayes factor has an underlying principle of its own, albeit one that is much less publicized than the frequentist control of the type 1 error rate.
If we imagine repeated sampling of datasets, with the underlying parameters themselves being {\em different} from draw to draw, then the average performance of Bayes factor testing has a decision-theoretic optimality. Specifically, if the parameters are drawn from the overall prior distribution, which itself is a mixture of the “under null” and “under alternative” prior distributions, then the BF procedure minimizes the average loss, often referred to as the Bayes' risk \citep{berger2013statistical}.
Consider the simplest case of 50\% prior weight on each of the null and alternative, and a loss function that weights type 1 and type 2 errors equally. Then the procedure which selects the null or alternative by comparing the Bayes' factor to an evidence threshold of 1 minimizes the probability of a selection error (with respect to the particular sense of repeated sampling described above). More generally, with prior probability $1-q$ on the null and $q$ on the alternative, and a type 1 error deemed to be $k$ times as damaging as a type 2 error, the average loss is minimized by basing selection on comparison of the BF to a threshold of $k(1-q)/q$.
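To spell out the reverse-engineering implicit here: selecting the alternative whenever its posterior expected loss is smaller amounts to requiring $k \Pr(H_{0} \mid \text{data}) < \Pr(H_{1} \mid \text{data})$, that is,
\[
BF_{10} \;=\; \frac{\Pr(\text{data} \mid H_{1})}{\Pr(\text{data} \mid H_{0})} \;>\; k\,\frac{1-q}{q}\,,
\]
since $\Pr(H_{1}\mid \text{data})/\Pr(H_{0}\mid \text{data}) = BF_{10}\, q/(1-q)$.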
So Bayes factor testing indeed has an underlying premise and interpretation – it just happens to differ from the frequentist principle of minimizing the probability of a type 2 error subject to an upper-bound on the maximum probability of a type 1 error; see \citet{berger2013statistical}. Coming back to the two-way traffic then, if one desires to carry out Bayesian testing as is rooted in the interpretation above, then for fixed $n$, one could reverse-engineer a value of $\alpha$ such that the frequentist test would almost exactly do the job, in terms of reproducing the decision boundary. Indeed this is what we see in Figure \ref{fig:freq3m03}.
Finally, to be clear, a Bayesian, in principle, should not be concerned with minimizing the type 2 error, given a fixed upper-bound on the type 1 error rate. And conversely, a frequentist, in principle, should not be concerned with minimizing the Bayes’ risk. However, in practice, there is nothing preventing a Bayesian from using a frequentist test calibrated in such a way so as to minimize the Bayes’ risk, and nothing preventing a frequentist from using a Bayes factor calibrated in such a way so as to control the type 1 error. There is, however, a fundamental difference between the Bayesian and the frequentist when it comes to how to consider the sample size. The frequentist will maintain the same value for $\alpha$ regardless of $n$, whereas the Bayesian will adjust $\alpha$ depending on $n$; see \citet{wagenmakers2021history}. Indeed, it is only after observing how a researcher treats two samples of different sample sizes, that one could reliably determine whether the researcher is acting as a frequentist or as a Bayesian.
\begin{figure}
\caption{ The proportion of equivalence predictions with an equivalence margin of $m = 0.3$ (vertical dashed line) and with the $\alpha$ level chosen for each scenario so that, at $m=\delta$, the proportion of equivalence predictions made with the optimal frequentist test matches the proportion made with the Bayes factor test with a BF threshold of 3 (panels in left-hand column) and a BF threshold of 10 (panels in right-hand column). Panels in each row contain results for a different sample size ($n$). Colours denote the two different inferential approaches. Line types denote the three different priors (for the Bayesian procedure). Each coloured line corresponds to simulation results from 25,000 simulation runs. Predictions of equivalence are correct if the population effect size ($\delta$) lies within the equivalence interval (i.e., if $|\delta| < m$), whereas predictions of equivalence are incorrect if $\delta$ lies outside the equivalence interval (i.e., if $|\delta| \ge m$). The calibration for the frequentist procedure is specified by the A1, A2, and A3 values for $\alpha$. Note that the frequentist `optimal test,' and the BF procedures produce a very similar (almost identical) proportion of equivalence predictions and therefore the red and green lines are not independently visible.}
\label{fig:freq3m03}
\end{figure}
\section{Conclusion}
In general, advocating for frequentist testing as better or worse than Bayesian testing on the basis of empirical findings seems dubious at best. Once one decides which underlying principle to subscribe to in tackling a given problem, the method follows naturally. And particularly bearing in mind that either procedure can be reverse-engineered from the other (at least approximately), as we have shown, trying to use empirical performance to argue for one over the other seems like tilting at windmills. This being said, it is crucial to understand how a given statistical test, be it frequentist or Bayesian, operates under different circumstances. Understanding a statistical procedure's operating characteristics is key to ensuring its proper use, and, perhaps more importantly, key to avoiding its misuse.
Recall, as an example of a misused statistical procedure, the controversial method of ``magnitude-based inference'' (MBI) \citep{barker2008inference}. While rarely used or even acknowledged in other fields, MBI became widely popular in sports medicine research. The supposedly ``philosophically and statistically distinct'' \citep{batterham2015} statistical procedure was poorly understood and led countless sports medicine researchers to unreliable and altogether erroneous conclusions. Only once the operating characteristics of MBI were better understood \citep{sainani2019magnitude, sainani2018problem} were researchers advised to avoid using it for their analyses \citep{lohse2020systematic}. Unfortunately, by that point, much damage had already been done to the field of sports medicine research.
All too often, non-significance (e.g., $p > 0.05$), or a combination of both non-significance and supposed high statistical power, is used as the basis to claim the lack of a meaningful effect. This approach is logically flawed. As the saying goes, ``absence of evidence is not evidence of absence'' ~\citep{hartung1983absence, altman1995statistics}. Researchers should instead use one of the tools available for equivalence testing. Based on our simulation study, we determined that frequentist equivalence tests, the Bayes factor, and the HDI-ROPE can all be calibrated to be roughly equivalent in terms of their power to detect the lack of a meaningful effect.
\citet{linde2020decisions} concluded ``that the BF approach is particularly useful for research with small sample sizes.'' Our simulation results suggest otherwise. We observed nothing ``particularly useful about the BF approach.'' With this in mind, we recommend that researchers, if they can properly calibrate \textit{and communicate} their results, use whatever approach suits them best. A potential advantage with frequentist tests is that they are widely used and well understood in fields outside of psychology \citep{wellek2010testing, jones1996trials}. The same cannot be said for the HDI-ROPE or the Bayes factor interval null procedures.
If the Bayes factor interval null procedure is used for predicting equivalence with standard BF decision thresholds such as $BF_{thr}=3$ or $BF_{thr}=10$ (i.e., used without frequentist calibration), one should expect to see a very high false positive rate. Indeed, \cite{linde2020decisions} observed false positive rates higher than 60\% for both $BF_{thr}=3$ and $BF_{thr}=10$ when $m=0.1$ and $r=\sqrt{2}/2$. In contrast, if the HDI-ROPE procedure is used with a standard 95\% HDI (i.e., used without frequentist calibration), one should expect to see a very low false positive rate, well below 5\%.
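For readers who wish to reproduce this behaviour, the Bayes factor interval null procedure can be computed along the following lines; this is a sketch assuming the \texttt{BayesFactor} package's \texttt{ttestBF} interface, with the margin and prior scale quoted above.
\begin{footnotesize}
\begin{verbatim}
## Sketch: BF interval null procedure with m = 0.1 and r = sqrt(2)/2.
## ttestBF() with a nullInterval returns BFs for the interval model and its
## complement (each against the point null); their ratio is the interval-vs-
## complement Bayes factor used for the equivalence decision.
library(BayesFactor)
bf_interval_null <- function(x, y, m = 0.1, rscale = sqrt(2)/2) {
  bf <- ttestBF(x = x, y = y, nullInterval = c(-m, m), rscale = rscale)
  extractBF(bf[1] / bf[2])$bf           # BF_{interval vs complement}
}
set.seed(1)
bf_interval_null(rnorm(50), rnorm(50)) > 3   # predict equivalence at BF_thr = 3?
\end{verbatim}
\end{footnotesize}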
With small sample sizes, the TOST procedure may indeed ``have no discriminatory power and result in a foregone decision for non-equivalence'' \citep{linde2020decisions}. For this reason, researchers are advised to use the so-called ``optimal test'' based on the folded Normal distribution \citep{romano2005optimal} rather than the TOST procedure when sample sizes are very small. Note that, regardless of which frequentist testing procedure is used, researchers must be careful to select an appropriate equivalence margin. This is often easier said than done; see \citet{campbell2018make}.
Finally, given a number of different procedures that, when calibrated, are essentially identical in terms of their statistical power, one might question why some researchers will prefer one approach over another \citep{andrews2013prior, dienes2014using}. To answer this, we must recognize that statistics are not entirely defined by statistical power metrics and their operating characteristics. Indeed, it is important to understand that statistics are, as \cite{kasy2019selective} wisely note, ``a \textit{social} process of communication and collective learning that involves many different actors with differences in knowledge and expertise, different objectives, and constraints on their attention and time, who engage in strategic behavior."
\pagebreak
\section{Appendix}
Below is R code for the so-called ``optimal'' frequentist equivalence test based on the folded-Normal distribution; see details in Section 2.2 of \citet{mollenhoff2019efficient}.
\begin{footnotesize}
\begin{verbatim}
## optim_equiv: two-sample equivalence testing.
## Returns both the TOST p-value and the "optimal" test p-value
## (folded-Normal test; see Section 2.2 of Moellenhoff et al. 2019).
optim_equiv <- function(sample1, sample2, margin) {
  require("VGAM")  # for qfoldnorm()
  n1 <- length(sample1); n2 <- length(sample2)
  s2_1 <- sd(sample1)^2; s2_2 <- sd(sample2)^2
  # pooled SD and standard error of the difference in means
  s_P <- sqrt((((n1 - 1) * s2_1) + ((n2 - 1) * s2_2)) / (n1 + n2 - 2))
  xbar1 <- mean(sample1); xbar2 <- mean(sample2)
  se.diff <- s_P * sqrt(1/n1 + 1/n2)
  # TOST: two one-sided t-tests against -margin and +margin
  t_1 <- (xbar1 - xbar2 - (-margin)) / se.diff
  t_2 <- (xbar1 - xbar2 - margin) / se.diff
  pval1 <- 1 - pt(t_1, n1 + n2 - 2)   # P(T >= t_1)
  pval2 <- pt(t_2, n1 + n2 - 2)       # P(T <= t_2)
  tost_pval <- max(c(pval1, pval2))
  # "Optimal" test: p-value solves |xbar1 - xbar2| = qfoldnorm(p, margin, se.diff)
  optimal_equiv <- function(x) {
    abs(xbar1 - xbar2) - qfoldnorm(x, margin, se.diff)
  }
  optim_pval <- NA
  tryCatch({
    optim_pval <- uniroot(optimal_equiv, c(0, 1 - 1/10e15), tol = 0.0001)$root
  }, error = function(e) {})
  return(c(tost = tost_pval, optim = optim_pval))
}
# Examples:
set.seed(123)
optim_equiv(rnorm(100), rnorm(260), margin = 0.4)
# tost optim
# 0.003542515 0.003349803
optim_equiv(rnorm(40), rnorm(26), margin = 0.4)
# tost optim
# 0.05371685 0.01259863
\end{verbatim}
\end{footnotesize}
\end{document} |
\begin{document}
\selectlanguage{english}
\title[Strichartz estimates for wave equation with inverse square potential]{Strichartz estimates for wave equation with inverse square potential}
\author{Changxing Miao}
\address{Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing, China, 100088}
\email{miao\_changxing@iapcm.ac.cn}
\author{Junyong Zhang}
\address{Department of Mathematics, Beijing Institute of Technology, Beijing, China, 100081}
\email{zhang\_junyong@bit.edu.cn}
\author{Jiqiang Zheng}
\address{The Graduate School of China Academy of Engineering Physics, P. O. Box 2101, Beijing, China, 100088}
\email{zhengjiqiang@gmail.com}
\begin{abstract}
In this paper, we study Strichartz-type estimates for solutions of the linear wave equation with an inverse square potential. Assuming the initial data possess additional angular regularity, in particular when they are radial, the range of admissible pairs is improved. As an application, we show global well-posedness of the semi-linear wave equation with inverse-square potential $\partial_t^2 u-\Delta u+\frac{a}{|x|^2}u=\pm|u|^{p-1}u$ for powers $p$ in a certain range when the initial data are radial. This result extends the well-posedness result of Planchon, Stalker, and Tahvildar-Zadeh.
\end{abstract}
\maketitle
\selectlanguage{english}
\tableofcontents
\noindent {\bf Mathematics Subject Classification (2000):}\quad 35Q40, 35Q55, 47J35. \\
\noindent {\bf Keywords:}\quad Inverse square potential, Strichartz estimate, Spherical harmonics.
\section{Introduction and Statement of Main Result}
The aim of this paper is to study $L^q_t(L^r_x)$-type estimates for solutions of the linear wave equation perturbed by an inverse square potential. More precisely, we consider the following wave equation with inverse square potential:
\begin{equation}\label{1.1}
\begin{cases}
\partial_t^2 u-\Delta u+\frac{a}{|x|^2}u=0,\qquad (t,x)\in\mathbb{R}\times\mathbb{R}^n, ~a\in\mathbb{R},\\
u(t,x)|_{t=0}=u_0(x),\quad \partial_tu(t,x)|_{t=0}=u_1(x).
\end{cases}
\end{equation}
The scale-covariant elliptic operator $-\Delta+\frac{a}{|x|^2}$ appearing in \eqref{1.1} plays a key role in many problems of physics and geometry. The heat and Schr\"odinger flows for the elliptic operator $-\Delta+\frac{a}{|x|^2}$ have been studied in the theory of combustion \cite{LZ} and in quantum mechanics \cite{KSWW}. The equation \eqref{1.1} arises in the study of wave propagation on conic manifolds \cite{CT}. We refer the readers to \cite{BPSS,BPSS1,PSS,PSS1} and references therein.

It is well known that Strichartz-type estimates are crucial in handling local and global well-posedness problems for nonlinear dispersive equations. Along this line, Planchon, Stalker, and Tahvildar-Zadeh \cite{PSS} first proved generalized Strichartz estimates for the equation \eqref{1.1} with radial initial data. Thereafter, Burq, Planchon, Stalker, and Tahvildar-Zadeh \cite{BPSS} removed the radial symmetry assumption of \cite{PSS} and obtained well-posedness results for the semi-linear wave equation with inverse-square potential. The range of admissible exponents $(q,r)$ for the Strichartz estimates of \eqref{1.1} obtained in \cite{BPSS,PSS} is restricted by $\frac2q\leq (n-1)(\frac12-\frac1r)$, which is the same as for the linear wave equation without potential. Sterbenz and Rodnianski \cite{Sterbenz} improved the range of the ``classical'' admissible exponents $(q,r)$ for the linear wave equation with no potential at the cost of a small loss of angular regularity.

In this paper, we study the Strichartz estimates for solutions of the equation \eqref{1.1}. By employing the asymptotic behavior of the Bessel function and some fine estimates for the Hankel transform, we improve the range of admissible pairs $(q,r)$ of \cite{BPSS,PSS} at the cost of a small loss of angular regularity. The machinery we employ here is mainly based on the spherical harmonics expansion and properties of the Hankel transform. As an application of the Strichartz estimates, we obtain well-posedness of \eqref{1.1} perturbed by the nonlinearity $|u|^{p-1}u$ with power $p_h<p<p_{\text{conf}}$ (defined below) in the radial case, which extends the well-posedness result of Planchon et al. \cite{PSS}.
Before stating our main theorems, we need some notation. We say that the pair $(q,r)\in\Lambda$ if $q,r\geq2$ and
$$\frac1q\geq\frac{n-1}2\Big(\frac12-\frac1r\Big)\quad \text{and}\quad \frac1q<(n-1)\Big(\frac12-\frac1r\Big).$$
Set the infinitesimal generators of the rotations on Euclidean space,
\begin{equation*}
\Omega_{j,k}:=x_j\partial_k-x_k\partial_j,
\end{equation*}
and define, for $s\in\mathbb{R}$,
\begin{equation*}
\Delta_\theta:=\sum_{j<k}\Omega_{j,k}^2,\quad |\Omega|^s=(-\Delta_{\theta})^{\frac{s}2}.
\end{equation*}
\begin{theorem}\label{thm} Let $u$ be a solution of the equation \eqref{1.1} with $a>\frac1{(n-1)^2}-\frac{(n-2)^2}4$. For any $\epsilon>0$ and $0<s<1+\min\big\{\frac{n-2}{2}, \sqrt{(\frac{n-2}2)^2+a}\big\}$:

$\bullet$ if $n\geq4$, then
\begin{equation}\label{1.2}
\|u(t,x)\|_{L^q_tL^r_x}\leq C_\epsilon\Big(\|\langle\Omega\rangle^{\bar{s}} u_0\|_{\dot H^s}+\|\langle\Omega\rangle^{\bar{s}} u_1\|_{\dot H^{s-1}}\Big),
\end{equation}
where $(q,r)\in \Lambda$, and
$$\bar{s}=(1+\epsilon)\Big(\frac2q-(n-1)\big(\frac12-\frac1r\big)\Big)\quad \text{and}\quad s=n\Big(\frac12-\frac1r\Big)-\frac1q;$$

$\bullet$ if $n=3$, then
\begin{equation}\label{1.3}
\|u(t,x)\|_{L^q_tL^r_x}\leq C_\epsilon\Big(\|\langle\Omega\rangle^{\bar{s}} u_0\|_{\dot H^s}+\|\langle\Omega\rangle^{\bar{s}} u_1\|_{\dot H^{s-1}}\Big),
\end{equation}
where $q\neq 2$, $(q,r)\in \Lambda$, and
$$\bar{s}=(2+\epsilon)\Big(\frac1q-\big(\frac12-\frac1r\big)\Big)\quad \text{and}\quad s=3\Big(\frac12-\frac1r\Big)-\frac1q.$$
In addition, the following estimate holds for $r>4$ and $s=3(\frac12-\frac1r)-\frac12$:
\begin{equation}\label{1.4}
\|u(t,x)\|_{L^2_tL^r_x}\leq C_\epsilon\Big(\|\langle\Omega\rangle^{\bar{s}(r)} u_0\|_{\dot H^s}+\|\langle\Omega\rangle^{\bar{s}(r)} u_1\|_{\dot H^{s-1}}\Big),
\end{equation}
where $\bar{s}(r)=1-\frac2r$ with $r\neq\infty$.
\end{theorem}
\begin{remark}
i$)$. We remark that some of the admissible pairs $(q,r)$ in Theorem 1.1 lie outside the region $ACDO$ or $ACO$ $($in the figures below$)$ obtained in \cite{BPSS,PSS}.

ii$)$. Our restriction $a>a_n:=\frac1{(n-1)^2}-\frac{(n-2)^2}4$ serves to extend the range of $(q,r)$ as widely as possible. We remark that $a_3=0$ and $a_n<0$ for $n\geq4$. Therefore, we recover the result of Theorem 1.5 in Sterbenz \cite{Sterbenz}, which considers $a=0$ and $n\geq4$.

iii$)$. In the extended region $\Lambda$ $($see the figures below$)$, the loss of angular regularity is $\bar{s}=(1+\epsilon)(\frac2q-(n-1)(\frac12-\frac1r))$. When $n=3$, the loss of angular regularity on the line $BC$ is $\bar{s}(r)>\bar{s}$, since the Strichartz estimate fails at the endpoint $(q,r,n)=(2,\infty,3)$. It seems that the methods used here do not yield such an estimate at the endpoint, since Lemma \ref{square-expression} and Lemma \ref{square-expression2} fail at $r=\infty$. One might need the wave packet method of Wolff \cite{Wolff} and the argument of Tao \cite{Tao4} to obtain the Strichartz estimate at the endpoint $(q,r,n)=(2,\infty,3)$ with some loss of angular regularity.
\end{remark}
\begin{figure}[ht]
\begin{center}
$$\ecriture{\includegraphics[width=6cm]{Graph2.eps}}
{\aat{1}{1}{\small O}\aat{-4}{12}{$\tfrac{n-3}{2(n-1)}$}\aat{-4}{25}{$\tfrac{n-2}{2(n-1)}$}
\aat{-2}{34}{$\tfrac12$}\aat{-2}{42}{$\tfrac1r$} \aat{6}{37}{\small A} \aat{16}{-1}{\small $n>3$}\aat{34}{-1}{$\tfrac12$}\aat{48}{-1}{$\tfrac1q$}\aat{37}{6}{\small D}\aat{35}{13}{\small C}\aat{35}{27}{\small B}\aat{30}{20}{\small $\Lambda$}}\qquad
\ecriture{\includegraphics[width=6cm]{123.eps}}
{\aat{1}{2}{\small O}\aat{0}{16}{$\tfrac16$}\aat{0}{22}{$\tfrac14$}\aat{0}{36}{$\tfrac12$}\aat{0}{42}{$\tfrac1r$}
\aat{7}{38}{\small A} \aat{22}{3}{\small $n=3$}\aat{44}{2}{$\tfrac12$}\aat{48}{2}{$\tfrac1q$}\aat{46}{6}{\small C}\aat{46}{25}{\small B}\aat{30}{20}{\small $\Lambda$}}$$
\end{center}
\end{figure}
As a consequence of Theorem \ref{thm} and Corollary 3.9 in \cite{PSS}, we have the following Strichartz estimates for radial initial data:
\begin{corollary}\label{cor} Let $n\geq3$ and $s<\frac n2$. Suppose $(u_0, u_1)$ are radial functions. Then for $q,r\geq2$, $\frac 1q<(n-1)(\frac12-\frac1r)$ and $s=n(\frac12-\frac1r)-\frac1q$, the solution $u$ of the equation \eqref{1.1} with $a>\frac1{(n-1)^2}-\frac{(n-2)^2}4$ satisfies
\begin{equation}\label{1.5}
\|u(t,x)\|_{L^q_tL^r_x}\leq C\Big(\|u_0\|_{\dot H^s}+\|u_1\|_{\dot H^{s-1}}\Big).
\end{equation}
\end{corollary}
As an application, we obtain a well-posedness result for the following semi-linear wave equation:
\begin{equation}\label{1.6}
\begin{cases}
\partial_t^2 u-\Delta u+\frac{a}{|x|^2}u=\pm|u|^{p-1}u,\qquad (t,x)\in\mathbb{R}\times\mathbb{R}^n, ~a\in\mathbb{R},\\
u(t,x)|_{t=0}=u_0(x),\quad \partial_tu(t,x)|_{t=0}=u_1(x).
\end{cases}
\end{equation}
In the case of the semi-linear wave equation without potential (i.e. $a=0$), there are many results on global existence and blow-up; we refer the readers to \cite{LS,Sogge2} and references therein. For the equation \eqref{1.6} with $p\geq p_{\text{conf}}:=1+\frac4{n-1}$ and $n\geq3$, Planchon et al. \cite{PSS} established global existence when the radial initial data are small in the $\dot H^{s_c}\times \dot H^{s_c-1}$-norm with $s_c:=\frac{n}2-\frac2{p-1}$. Thereafter, Burq et al. \cite{BPSS} removed the radial symmetry assumption on the initial data. As a consequence of Theorem \ref{thm}, we prove global existence of the solution to the equation \eqref{1.6} with $p_{\text{h}}:=1+\frac{4n}{(n+1)(n-1)}<p<p_{\text{conf}}$ for small radial initial data $(u_0,u_1)\in \dot H^{s_c}\times \dot H^{s_c-1}$.
\begin{theorem}\label{thm1} Let $n\geq3$ and $p_{\text{h}}<p<p_{\text{conf}}$. Let $q_0=(p-1)(n+1)/2$, $r_0=(n+1)(p-1)/(2p)$, and
\begin{equation}\label{1.7}
a>\max\Big\{\frac1{(n-1)^2}-\frac{(n-2)^2}4, \frac n{q_0}\Big(\frac n{q_0}-n+2\Big),\big(\frac n{r_0}-n\big)\big(\frac n{r_0}-2\big)\Big\}.
\end{equation}
Assume that $(u_0,u_1)$ are radial functions and that there is a small constant $\epsilon(p)$ such that
\begin{equation}\label{1.8}
\|u_0\|_{\dot H^{s_c}}+\|u_1\|_{\dot H^{s_c-1}}<\epsilon(p).
\end{equation}
Then there exists a unique global solution $u$ to \eqref{1.6} satisfying
\begin{equation}\label{1.9}
u\in C_t(\mathbb{R};\dot H^{s_c})\cap L^{q_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n).
\end{equation}
\end{theorem}
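For orientation, we note a worked instance of the exponents (not part of the statement): when $n=3$ one has $p_{\text{h}}=\frac52$ and $p_{\text{conf}}=3$, so Theorem \ref{thm1} covers $\frac52<p<3$, with critical regularity $s_c=\frac32-\frac2{p-1}$ ranging over $(\frac16,\frac12)$.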
\begin{remark}
i$)$. The above result extends the well-posedness result in \cite{PSS} from $p\geq p_{\text{conf}}$ to $p_{\text{h}}<p< p_{\text{conf}}$.

ii$)$. We remark that the $L^1$-bound of the operator $\mathcal{K}_{\lambda,\nu}^0$ defined below is the source of our constraint $p>p_h$. Inspired by the arguments of Lindblad-Sogge \cite{LS1,Sogge2} for the usual semi-linear wave equation, if one wants to extend the above result to $p>p_{\text{c}}$, one needs to explore new inhomogeneous Strichartz estimates, since the operator $\mathcal{K}_{\lambda,\nu}^0$ is not known to be bounded on $L^1$. Here $p_c$ is the positive root of $(n-1)p_c^2-(n+1)p_c-2=0$, called the Strauss exponent.
\end{remark}
This paper is organized as follows: In Section 2, we revisit the properties of the Bessel functions, the harmonic projection operator, and the Hankel transform associated with $-\Delta+\frac{a}{|x|^2}$. Section 3 is devoted to establishing some estimates for the Hankel transform. In Section 4, we use these estimates to prove Theorem \ref{thm}. We prove Theorem \ref{thm1} in Section 5. In the appendix, we sketch the proof of Lemma \ref{square-expression} by using a weak-type $(1,1)$ estimate for multiplier operators with respect to the Hankel transform.

Finally, we conclude this section with some notation used throughout the paper. We use $A\lesssim B$ to denote the statement that $A\leq CB$ for some large constant $C$ which may vary from line to line and depend on various parameters, and similarly use $A\ll B$ to denote the statement $A\leq C^{-1} B$. We employ $A\sim B$ to denote the statement $A\lesssim B\lesssim A$. If the constant $C$ depends on a special parameter other than the above, we denote it explicitly by a subscript. We briefly write $A+\epsilon$ as $A+$ for $0<\epsilon\ll1$. Throughout this paper, pairs of conjugate indices are written as $p, p'$, where $\frac{1}p+\frac1{p'}=1$ with $1\leq p\leq\infty$.
\section{Preliminaries}
In this section, we provide some standard facts about the Hankel transform and the Bessel functions. We use an oscillatory integral argument to establish the asymptotic behavior of the derivative of the Bessel function. The Littlewood-Paley theorems associated with the Hankel transform are also collected in this section. Finally, we prove a Strichartz estimate for unit frequency by making use of some results in \cite{BPSS}.
\subsection{Spherical harmonic expansions and the Bessel functions}
We begin with the spherical harmonics expansion formula. For more details, we refer to Stein-Weiss \cite{SW}. Let
\begin{equation}\label{2.1}
\xi=\rho \omega \quad\text{and}\quad x=r\theta\quad\text{with}\quad \omega,\theta\in\mathbb{S}^{n-1}.
\end{equation}
For any $g\in L^2(\mathbb{R}^n)$, we have the expansion formula
\begin{equation*}
g(x)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}a_{k,\ell}(r)Y_{k,\ell}(\theta),
\end{equation*}
where
\begin{equation*}
\{Y_{k,1},\ldots, Y_{k,d(k)}\}
\end{equation*}
is an orthogonal basis of the space $\mathcal{H}^{k}$ of spherical harmonics of degree $k$ on $\mathbb{S}^{n-1}$, whose dimension is
\begin{equation*}
d(0)=1\quad\text{and}\quad d(k)=\frac{2k+n-2}{k}C^{k-1}_{n+k-3}\simeq \langle k\rangle^{n-2}.
\end{equation*}
We remark that for $n=2$ the dimension of $\mathcal{H}^{k}$ is a constant independent of $k$. We have the orthogonal decomposition
\begin{equation*}
L^2(\mathbb{S}^{n-1})=\bigoplus_{k=0}^\infty \mathcal{H}^{k},
\end{equation*}
which gives, by orthogonality,
\begin{equation}\label{2.2}
\|g(x)\|_{L^2_\theta}=\|a_{k,\ell}(r)\|_{\ell^2_{k,\ell}}.
\end{equation}
By Theorem 3.10 in \cite{SW}, we have the Hankel transform formula
\begin{equation}\label{2.3}
\hat{g}(\rho\omega)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}2\pi i^{k}Y_{k,\ell}(\omega)\rho^{-\frac{n-2}2}\int_0^\infty J_{k+\frac{n-2}2}(2\pi r\rho)a_{k,\ell}(r)r^{\frac n2}\mathrm{d}r,
\end{equation}
where the Bessel function $J_k(r)$ of order $k$ is defined by
\begin{equation*}
J_k(r)=\frac{(r/2)^k}{\Gamma(k+\frac12)\Gamma(1/2)}\int_{-1}^{1}e^{isr}(1-s^2)^{(2k-1)/2}\mathrm{d}s\quad\text{with}~ k>-\frac12~\text{and}~ r>0.
\end{equation*}
A simple computation gives the estimates
\begin{equation}\label{2.4}
|J_k(r)|\leq \frac{Cr^k}{2^k\Gamma(k+\frac12)\Gamma(1/2)}\big(1+\frac1{k+1/2}\big)
\end{equation}
and
\begin{equation}\label{2.5}
|J'_k(r)|\leq \frac{C(kr^{k-1}+r^k)}{2^k\Gamma(k+\frac12)\Gamma(1/2)}\big(1+\frac1{k+1/2}\big),
\end{equation}
where $C$ is a constant; these estimates will be used when $r\lesssim1$. Another well-known asymptotic expansion of the Bessel function is
\begin{equation*}
J_k(r)=r^{-1/2}\sqrt{\frac2{\pi}}\cos\big(r-\frac{k\pi}2-\frac{\pi}4\big)+O_{k}(r^{-3/2}),\quad \text{as}~ r\rightarrow\infty,
\end{equation*}
but with a constant depending on $k$ (see \cite{SW}). As pointed out in \cite{Stein1}, if one seeks a bound uniform in large $r$ and $k$, then the best one can do is $|J_k(r)|\leq C r^{-\frac13}$. To investigate the asymptotic behavior in both $k$ and $r$, we turn to Schl\"afli's integral representation \cite{Watson} of the Bessel function: for $r\in\mathbb{R}^+$ and $k>-\frac12$,
\begin{equation}\label{2.6}
\begin{split}
J_k(r)&=\frac1{2\pi}\int_{-\pi}^\pi e^{ir\sin\theta-ik\theta}\mathrm{d}\theta-\frac{\sin(k\pi)}{\pi}\int_0^\infty e^{-(r\sinh s+ks)}\mathrm{d}s\\
&:=\tilde{J}_k(r)-E_k(r).
\end{split}
\end{equation}
We remark that $E_k(r)=0$ for $k\in\mathbb{Z}^+$. One easily estimates, for $r>0$,
\begin{equation}\label{2.7}
|E_k(r)|=\Big|\frac{\sin(k\pi)}{\pi}\int_0^\infty e^{-(r\sinh s+ks)}\mathrm{d}s\Big|\leq C (r+k)^{-1}.
\end{equation}
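Indeed, \eqref{2.7} can be checked in one line: since $\sinh s\geq s$ for $s\geq0$,
$$|E_k(r)|\leq\frac1{\pi}\int_0^\infty e^{-(r+k)s}\,\mathrm{d}s=\frac1{\pi(r+k)}.$$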
Next, we recall the properties of the Bessel function $J_k(r)$ from \cite{Stempak,Stein1}; the reader may also consult \cite{MZZ1} for a detailed proof.
\begin{lemma}[Asymptotics of the Bessel function]\label{Bessel} Assume $k\gg1$. Let $J_k(r)$ be the Bessel function of order $k$ defined as above. Then there exist a large constant $C$ and a small constant $c$, both independent of $k$ and $r$, such that:

$\bullet$ when $r\leq \frac k2$,
\begin{equation}\label{2.8}
|J_k(r)|\leq C e^{-c(k+r)};
\end{equation}

$\bullet$ when $\frac k 2\leq r\leq 2k$,
\begin{equation}\label{2.9}
|J_k(r)|\leq C k^{-\frac13}(k^{-\frac13}|r-k|+1)^{-\frac14};
\end{equation}

$\bullet$ when $r\geq 2k$,
\begin{equation}\label{2.10}
J_k(r)=r^{-\frac12}\sum_{\pm}a_\pm(r) e^{\pm ir}+E(r),
\end{equation}
where $|a_\pm(r)|\leq C$ and $|E(r)|\leq Cr^{-1}$.
\end{lemma}
For our purpose, we additionally need the asymptotic behavior of the derivative $J'_{k}(r)$ of the Bessel function. It is a straightforward elaboration of the argument used to prove Lemma \ref{Bessel} in \cite{MZZ1}, but we give the proof for completeness.
\begin{lemma}\label{Bessel2} Assume $r, k\gg1$. Then there exists a constant $C$ independent of $k$ and $r$ such that
$$|J'_k(r)|\leq C r^{-\frac12}.$$
\end{lemma}
\begin{proof}
When $r\leq \frac k2$ or $r\geq 2k$, we apply the recurrence formula \cite{Watson}
$$J'_k(r)=\frac12\big(J_{k-1}(r)-J_{k+1}(r)\big),$$
together with \eqref{2.8} and \eqref{2.10}, to obtain $|J'_k(r)|\leq C r^{-\frac12}$.

When $\frac k 2\leq r\leq 2k$, we have by \eqref{2.6}
\begin{equation*}
J'_k(r)=\tilde{J}'_k(r)-E'_k(r).
\end{equation*}
A simple computation gives that, for $r>0$,
\begin{equation*}
|E'_k(r)|=\Big|\frac{\sin(k\pi)}{\pi}\int_0^\infty e^{-(r\sinh s+ks)}\sinh s~\mathrm{d}s\Big|\leq C (r+k)^{-1}.
\end{equation*}
Thus we only need to estimate $\tilde{J}'_k(r)$. We distinguish the two cases $r>k$ and $r\leq k$ and estimate by the stationary phase argument. Let
$$\phi_{r,k}(\theta)=r\sin\theta-k\theta.$$

{\bf Case 1: $k<r\leq2k$.} Let $\theta_0=\cos^{-1}(\frac k r)$, so that
$$\phi'_{r,k}(\theta_0)=r\cos\theta_0-k=0.$$
Now we split $\tilde{J}'_k(r)$ into two pieces:
\begin{equation*}
\tilde{J}'_k(r)=\frac i{2\pi}\int_{\Omega_{\delta}} e^{ir\sin\theta-ik\theta}\sin\theta~\mathrm{d}\theta+\frac i{2\pi}\int_{B_{\delta}} e^{ir\sin\theta-ik\theta}\sin\theta~\mathrm{d}\theta,
\end{equation*}
where
\begin{equation*}
\Omega_{\delta}=\{\theta:|\theta\pm\theta_0|\leq \delta\},\quad B_{\delta}=[-\pi,\pi]\setminus \Omega_{\delta}\quad \text{with}\quad \delta>0.
\end{equation*}
Taking absolute values, we have
\begin{equation*}
\Big|\frac1{2\pi}\int_{\Omega_{\delta}} e^{ir\sin\theta-ik\theta}\sin\theta~\mathrm{d}\theta\Big|\leq C|\sin(\theta_0\pm\delta)|\delta.
\end{equation*}
Integrating by parts, we have
\begin{equation*}
\int_{B_{\delta}} e^{ir\sin\theta-ik\theta}\sin\theta~\mathrm{d}\theta=\frac{e^{i(r\sin\theta-k\theta)}\sin\theta}{i(r\cos\theta-k)}\Big|_{\partial B_{\delta}}-\int_{B_{\delta}}\frac{e^{ir\sin\theta-ik\theta}(r-k\cos\theta)}{i(r\cos\theta-k)^2}\mathrm{d}\theta,
\end{equation*}
where $\partial B_{\delta}=\{\pm\pi,\pm\theta_0\pm\delta\}$. It is easy to see that
\begin{equation*}
\Big|\frac{e^{i(r\sin\theta-k\theta)}\sin\theta}{i(r\cos\theta-k)}\Big|_{\partial B_{\delta}}\Big|\leq c\sin(\theta_0\pm\delta)|r\cos(\theta_0\pm\delta)-k|^{-1}.
\end{equation*}
Since $r-k\cos\theta>0$, we obtain
\begin{equation*}
\begin{split}
\Big|\int_{B_{\delta}}\frac{e^{ir\sin\theta-ik\theta}(r-k\cos\theta)}{i(r\cos\theta-k)^2}\mathrm{d}\theta\Big|&\leq \int_{B_{\delta}}\frac{|r-k\cos\theta|}{(r\cos\theta-k)^2}\mathrm{d}\theta= \frac{\sin\theta}{(r\cos\theta-k)}\Big|_{\partial B_{\delta}}\\
&\leq c|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}.
\end{split}
\end{equation*}
Therefore,
$$|\tilde{J}'_k(r)|\leq C|\sin(\theta_0\pm\delta)|\delta+c|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}.$$
We choose $\delta$ such that
$$|\sin(\theta_0\pm\delta)|\delta\sim c|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}.$$
Noting that $\cos(\theta_0\pm\delta)=\cos\theta_0\cos\delta\mp\sin\theta_0\sin\delta$ and the definition of $\theta_0$, we get
\begin{equation*}
r\cos(\theta_0\pm\delta)-k=k\cos\delta\pm\sqrt{r^2-k^2}\sin\delta-k.
\end{equation*}
Since $1-\cos\delta=2\sin^2\frac{\delta}2$, one has
$$|r\cos(\theta_0\pm\delta)-k|\sim |k\delta^2\pm\delta\sqrt{r^2-k^2}|\quad\text{for small}~\delta.$$
On the other hand, since $\sin(\theta_0\pm\delta)=\sin\theta_0\cos\delta\pm\cos\theta_0\sin\delta$, we have
\begin{equation*}
\sin(\theta_0\pm\delta)=\pm\frac{\sqrt{r^2-k^2}}r\big(1-\frac{\delta^2}2\big)\pm\frac k r\delta.
\end{equation*}
When $|r-k|\leq k^{\frac13}$, choosing $\delta=Ck^{-\frac13}$ with large $C\geq2$, we have
\begin{equation*}
|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}\lesssim k^{-\frac23}(C^2-Ck^{-\frac23}\sqrt{r^2-k^2})^{-1}\lesssim k^{-\frac23}\lesssim r^{-\frac23}.
\end{equation*}
When $|r-k|\geq k^{\frac13}$, taking $\delta=c(r^2-k^2)^{-\frac14}$ with small $c>0$, we obtain
\begin{equation*}
\begin{split}
&|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}\\
\lesssim&\big[|r-k|^{\frac12}r^{-\frac12}+r^{-1}+(r^2-k^2)^{-\frac14}\big](r^2-k^2)^{-\frac14}\big(c-c^2k(r^2-k^2)^{-\frac34}\big)^{-1}\\
\lesssim& k^{-\frac34}|r-k|^{\frac14}+k^{-\frac12}|r-k|^{-\frac12}\lesssim r^{-\frac12},
\end{split}
\end{equation*}
where we used the fact that $(r^2-k^2)^{-\frac14}\leq(2k)^{-\frac14}|r-k|^{-\frac14}$ for $k<r$.

{\bf Case 2: $\frac k2 \leq r\leq k$.} When $k-k^{\frac13}<r<k$, choosing $\theta_0=0$ and $\delta=Ck^{-\frac13}$ with large $C\geq2$, it follows from the above argument that
\begin{equation*}
|\tilde{J}'_k(r)|\lesssim|\sin(\theta_0\pm\delta)|\cdot|r\cos(\theta_0\pm\delta)-k|^{-1}\lesssim \delta\big({r\delta^2}/2-|r-k|\big)^{-1}\lesssim k^{-\frac23}\lesssim r^{-\frac23}.
\end{equation*}
When $r<k-k^{\frac13}$, there is no critical point. Hence we obtain
$$|\tilde{J}'_k(r)|\lesssim|r-k|^{-2}\lesssim r^{-\frac23}.$$
Finally, collecting all the estimates, we get $|\tilde{J}'_k(r)|\lesssim r^{-\frac12}$.
\end{proof}
Next, we record two basic results about the modified square function expressions.
\begin{lemma}[A modified Littlewood-Paley theorem \cite{Stein1}]\label{square-expression} Let $\beta\in C_0^\infty(\mathbb{R}^+)$ be supported in $[\frac12,2]$, $\beta_j(\rho)=\beta(2^{-j}\rho)$ and $\sum\limits_{j}\beta_j=1$. Then for any $\nu(k)>0$ and $1<p<\infty$, we have
\begin{equation}\label{2.11}
\begin{split}
&\bigg\|\sum_{j\in \mathbb{Z}}\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu(k)}(r\rho)\cos(t\rho)b^0_{k,\ell}(\rho)\rho^{n-1}\beta_j(\rho)\mathrm{d}\rho\bigg\|_{L^p_{r^{n-1}}}\\
&\sim \bigg\|\Big(\sum_{j\in \mathbb{Z}}\Big|\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu(k)}(r\rho)\cos(t\rho) b^0_{k,\ell}(\rho)\rho^{n-1}\beta_j(\rho)\mathrm{d}\rho\Big|^2\Big)^{\frac12}\bigg\|_{L^p_{r^{n-1}}}.
\end{split}
\end{equation}
\end{lemma}
For the sake of completeness, we prove Lemma \ref{square-expression} in the appendix by using a weak-type $(1,1)$ estimate for multiplier operators with respect to the Hankel transform.
\begin{lemma}[Littlewood-Paley-Stein theorem for the sphere, \cite{Stein2,Stri,Sterbenz}]\label{square-expression2} Let $\beta\in C_0^\infty(\mathbb{R}^+)$ be supported in $[\frac12,4]$ with $\beta(\rho)=1$ for $\rho\in[1,2]$, and set $\beta_j(\rho)=\beta(2^{-j}\rho)$. Then for any $1<p<\infty$ and any test function $f(\theta)$ defined on $\mathbb{S}^{n-1}$, we have
\begin{equation}\label{2.12}
\|f(\theta)\|_{L^p_{\theta}(\mathbb{S}^{n-1})}\sim \Big\|\Big(\big|a_{0,1}Y_{0,1}(\theta)\big|^2+\sum_{j=0}^\infty\big|\sum_{k}\sum_{\ell=1}^{{d}(k)}\beta_j(k)a_{k,\ell}Y_{k,\ell}(\theta)\big|^2\Big)^{\frac12}\Big\|_{L^p_{\theta}(\mathbb{S}^{n-1})},
\end{equation}
where $f=\sum\limits_{k=0}^{\infty}\sum\limits_{\ell=1}^{{d}(k)}a_{k,\ell}Y_{k,\ell}(\theta)$.
\end{lemma}
We conclude this subsection by showing the ``Bernstein'' inequality on the sphere,
\begin{equation}\label{Bernstein}
\big\|\sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} a_{k,\ell} Y_{k,\ell}(\theta)\big\|_{L^q(\mathbb{S}^{n-1})}\leq C_{q,n} 2^{j(n-1)(\frac12-\frac1q)}\Big(\sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |a_{k,\ell}|^2\Big)^{\frac12}
\end{equation}
for $q\geq2$, $j=0,1,2,\cdots$. In fact, since $\sum\limits_{\ell=1}^{{d}(k)} |Y_{k,\ell}(\theta)|^2=d(k)|\mathbb{S}^{n-1}|^{-1}$ for all $\theta\in \mathbb{S}^{n-1}$ (see Stein-Weiss \cite{SW}), one has
\begin{equation*}
\begin{split}
\Big\|\sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} a_{k,\ell} Y_{k,\ell}(\theta)\Big\|^2_{L^\infty(\mathbb{S}^{n-1})}&\leq C \sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |a_{k,\ell}|^2\Big\|\Big(\sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |Y_{k,\ell}(\theta)|^2\Big)^{\frac12}\Big\|^2_{L^\infty(\mathbb{S}^{n-1})}\\
&\leq C \sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |a_{k,\ell}|^2\sum_{k=2^{j}}^{2^{j+1}}k^{n-2}\leq C2^{j(n-1)} \sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |a_{k,\ell}|^2.
\end{split}
\end{equation*}
Interpolating this with
\begin{equation*}
\Big\|\sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} a_{k,\ell} Y_{k,\ell}(\theta)\Big\|^2_{L^2(\mathbb{S}^{n-1})}\leq C \sum_{k=2^{j}}^{2^{j+1}}\sum_{\ell=1}^{{d}(k)} |a_{k,\ell}|^2
\end{equation*}
yields \eqref{Bernstein}.
\subsection{Spectrum of $-\Delta+\frac{a}{|x|^2}$ and the Hankel transform}
Let us first consider the eigenvalue problem associated with the operator $-\Delta+\frac{a}{|x|^2}$:
\begin{equation*}
\begin{cases}
-\Delta u+\frac{a}{|x|^2}u=\rho^2 u,\quad x\in B=\{x:|x|\leq1\},\\
u(x)=0,\qquad x\in \mathbb{S}^{n-1}.
\end{cases}
\end{equation*}
If $u(x)=f(r)Y_k(\theta)$, we have
\begin{equation*}
f''(r)+\frac{n-1}r f'(r)+\Big[\rho^2-\frac{k(k+n-2)+a}{r^2}\Big]f(r)=0.
\end{equation*}
Letting $\lambda=\rho r$ and $f(r)=\lambda^{-\frac{n-2}2}g(\lambda)$, we obtain
\begin{equation}\label{bessfunc}
g''(\lambda)+\frac{1}\lambda g'(\lambda)+\Big[1-\frac{(k+\frac{n-2}2)^2+a}{\lambda^2}\Big]g(\lambda)=0.
\end{equation}
Define
\begin{equation}\label{2.14}
\mu(k)=\frac{n-2}2+k,\quad\text{and}\quad\nu(k)=\sqrt{\mu^2(k)+a}\quad\text{with}\quad a>-(n-2)^2/4.
\end{equation}
The Bessel function $J_{\nu(k)}(\lambda)$ solves the Bessel equation \eqref{bessfunc}, and the eigenfunctions corresponding to the spectral value $\rho^2$ can be expressed as
\begin{equation}\label{2.16}
\phi_{\rho}(x)=(\rho r)^{-\frac{n-2}2}J_{\nu(k)}(\rho r)Y_k(\theta)\quad\text{with}\quad x=r\theta,
\end{equation}
where
\begin{equation}\label{2.15}
\Big(-\Delta+\frac{a}{|x|^2}\Big)\phi_{\rho}=\rho^2\phi_{\rho}.
\end{equation}
We define the elliptic operator
\begin{equation}\label{2.17}
\begin{split}
A_{\nu(k)}:&=-\partial_r^2-\frac{n-1}r\partial_r+\frac{k(k+n-2)+a}{r^2}\\
&=-\partial_r^2-\frac{n-1}r\partial_r+\frac{\nu^2(k)-\big(\frac{n-2}2\big)^2}{r^2},
\end{split}
\end{equation}
so that $A_{\nu(k)}\phi_{\rho}=\rho^2\phi_{\rho}$. Define the Hankel transform of order $\nu$:
\begin{equation}\label{2.18}
(\mathcal{H}_{\nu}f)(\xi)=\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu}(r\rho)f(r\omega)r^{n-1}\mathrm{d}r,
\end{equation}
where $\rho=|\xi|$, $\omega=\xi/|\xi|$ and $J_{\nu}$ is the Bessel function of order $\nu$. In particular, if the function $f$ is radial, then
\begin{equation}\label{2.19}
(\mathcal{H}_{\nu}f)(\rho)=\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu}(r\rho)f(r)r^{n-1}\mathrm{d}r.
\end{equation}
If $f(x)=\sum\limits_{k=0}^{\infty}\sum\limits_{\ell=1}^{d(k)}a_{k,\ell}(r)Y_{k,\ell}(\theta)$, then we obtain by \eqref{2.3}
\begin{equation}\label{2.20}
\hat f(\xi)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}2\pi i^{k}Y_{k,\ell}(\omega)\big(\mathcal{H}_{\mu(k)}a_{k,\ell}\big)(\rho).
\end{equation}
We will also make use of the following properties of the Hankel transform, which appear in \cite{BPSS,PSS}.
\begin{lemma}\label{Hankel3}
Let $\mathcal{H}_{\nu}$ and $A_{\nu}$ be defined as above. Then

$(\rm{i})$ $\mathcal{H}_{\nu}=\mathcal{H}_{\nu}^{-1}$,

$(\rm{ii})$ $\mathcal{H}_{\nu}$ is self-adjoint, i.e. $\mathcal{H}_{\nu}=\mathcal{H}_{\nu}^*$,

$(\rm{iii})$ $\mathcal{H}_{\nu}$ is an $L^2$ isometry, i.e. $\|\mathcal{H}_{\nu}\phi\|_{L^2_\xi}=\|\phi\|_{L^2_x}$,

$(\rm{iv})$ $\mathcal{H}_{\nu}(A_{\nu}\phi)(\xi)=|\xi|^2(\mathcal{H}_{\nu} \phi)(\xi)$, for $\phi\in L^2$.
\end{lemma}
Let $\mathcal{K}_{\mu,\nu}^0=\mathcal{H}_{\mu}\mathcal{H}_{\nu}$; then, as in \cite{PSS}, one has
\begin{equation}\label{2.21}
A_{\mu}\mathcal{K}_{\mu,\nu}^0=\mathcal{K}_{\mu,\nu}^0A_{\nu}.
\end{equation}
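For later use, we record an elementary consequence of Lemma \ref{Hankel3} (a routine observation, stated here for the reader's convenience): if $u$ solves \eqref{1.1} with $\partial_t u(0,\cdot)=0$ and $a_{k,\ell}(t,r)$ denotes the radial profile of its $k$-th spherical harmonic component, with initial profile $a^0_{k,\ell}(r)$, then $a_{k,\ell}$ solves $\partial_t^2 a_{k,\ell}+A_{\nu(k)}a_{k,\ell}=0$, and properties $(\rm{i})$ and $(\rm{iv})$ give
\begin{equation*}
a_{k,\ell}(t,r)=\mathcal{H}_{\nu(k)}\big[\cos(t\rho)\,\big(\mathcal{H}_{\nu(k)}a^0_{k,\ell}\big)(\rho)\big](r),
\end{equation*}
which is precisely the expression estimated in Proposition \ref{Hankel2} below.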
For our purposes, we need a further crucial property of $\mathcal{K}_{\mu(k),\nu(k)}^0$ with $k=0$:
\begin{lemma}[Boundedness of $\mathcal{K}_{\lambda,\nu}^0$, \cite{BPSS,PSS}]\label{continuous}
Let $\nu, \alpha, \beta\in \mathbb{R}$, $\nu>-1$, $\lambda=\mu(0)=\frac{n-2}2$, $-n<\alpha<2(\nu+1)$ and $-2(\nu+1)<\beta<n$. Then the conjugation operator $\mathcal{K}_{\lambda,\nu}^0$ is continuous on $\dot H^{\beta}_{p,\text{rad}}(\mathbb{R}^n)$ provided that
\begin{equation*}
\max\Big\{0,\frac{\lambda-\nu}{n},\frac\beta n\Big\}<\frac1p<\min\Big\{\frac{\lambda+\nu+2}{n},\frac{\lambda+\nu+2+\beta}{n},1\Big\},
\end{equation*}
while the inverse operator $\mathcal{K}_{\nu,\lambda}^0$ is continuous on $\dot H^{\alpha}_{q,\text{rad}}(\mathbb{R}^n)$ provided that
\begin{equation*}
\max\Big\{0,\frac{\lambda-\nu}{n},\frac{\lambda-\nu+\alpha} n\Big\}<\frac1q<\min\Big\{\frac{\lambda+\nu+2}{n},1+\frac{\alpha}{n},1\Big\}.
\end{equation*}
\end{lemma}
We also need the Strichartz estimates for \eqref{1.1} from \cite{BPSS}:
\begin{lemma}[Strichartz estimates]\label{stri1}
For $n\geq2$, let $2\leq r<\infty$ and let $q,r,\gamma,\sigma$ satisfy
\begin{equation}\label{2.22}
\frac1q\leq\min\Big\{\frac12,\frac{n-1}2\Big(\frac12-\frac1r\Big)\Big\},\quad \sigma=\gamma+\frac1q-n\Big(\frac12-\frac1r\Big).
\end{equation}
There exists a positive constant $C$ depending on $n, a, q, r, \gamma$ such that the solution $u$ of \eqref{1.1} satisfies
\begin{equation}\label{2.23}
\big\|(-\Delta)^{\frac\sigma2}u\big\|_{L^q_t(\mathbb{R};L^r(\mathbb{R}^n))}\leq C\big(\|u_0\|_{\dot H^{\gamma}}+\|u_1\|_{\dot H^{\gamma-1}}\big),
\end{equation}
provided that, when $n=2,3$,
\begin{equation*}
-\min\Big\{\frac{n-1}2,\nu(1)-\frac12,1+\nu(0)\Big\}<\gamma<\min\Big\{\frac{n+1}2,\nu(1)+\frac12,1+\nu(0)-\frac1q\Big\},
\end{equation*}
and, when $n\geq4$,
\begin{equation*}
-\min\Big\{\frac{n}2-\frac{n+3}{2(n-1)},\nu(1)-\frac{n+3}{2(n-1)},1+\nu(0)\Big\}<\gamma<\min\Big\{\frac{n+1}2,\nu(1)+\frac12,1+\nu(0)-\frac1q\Big\}.
\end{equation*}
\end{lemma}
Next, define the projectors $M_{jj'}=P_j\tilde{P}_{j'}$ and $N_{jj'}=\tilde{P}_jP_{j'}$, where $P_j$ is the usual dyadic frequency localization at $|\xi|\sim 2^{j}$ and $\tilde{P}_j$ is the localization with respect to $\big(-\Delta+\frac a{|x|^2}\big)^{\frac12}$. More precisely, if $f$ lies in the $k$-th harmonic subspace, then
\begin{equation*}
P_j f=\mathcal{H}_{\mu(k)}\beta_j \mathcal{H}_{\mu(k)}f\quad\text{and}\quad \tilde{P}_j f=\mathcal{H}_{\nu(k)}\beta_j \mathcal{H}_{\nu(k)}f,
\end{equation*}
where $\beta_j(\xi)=\beta(2^{-j}|\xi|)$ with $\beta\in C_0^\infty(\mathbb{R}^+)$ supported in $[\frac14,2]$. We then have the following almost orthogonality estimate, proved in \cite{BPSS}.
\begin{lemma}[Almost orthogonality estimate, \cite{BPSS}]\label{orthogonality}
There exists a positive constant $C$ independent of $j,j'$, and $k$ such that the following inequalities hold for all positive $\epsilon_1<1+\min\{\frac{n-2}2, (\frac{(n-2)^2}4+a)^{\frac12}\}$:
$$\|M_{j j'}f\|_{L^2(\mathbb{R}^n)},~~ \|N_{j j'}f\|_{L^2(\mathbb{R}^n)}\leq C 2^{-\epsilon_1|j-j'|}\|f\|_{L^2(\mathbb{R}^n)},$$
where $f$ is in the $k$-th harmonic subspace.
\end{lemma}
As a consequence of Lemma \ref{stri1} and Lemma \ref{orthogonality}, we have
\begin{lemma}[Strichartz estimates for unit frequency]\label{stri}
Let $n\geq3$ and $k\in \mathbb N$. Let $u$ solve
\begin{equation*}
\begin{cases}
(\partial_t^2-\Delta+\frac a{|x|^2})u=0, \\
u|_{t=0}=u_0(x),~u_t|_{t=0}=0,
\end{cases}
\end{equation*}
where $u_0\in L^2(\mathbb{R}^n)$ and
$$u_0=\sum\limits_{k=0}^\infty\sum\limits_{\ell=1}^{{d}(k)}a_{k,\ell}(r)Y_{k,\ell}(\theta).$$
Assume that, for all $k,\ell\in\mathbb N$, $\text{supp}~\big[\mathcal{H}_{\nu(k)}a_{k,\ell}\big]\subset [1,2]$. Then the following estimate holds for $a>\frac1{(n-1)^2}-\frac{(n-2)^2}4$:
\begin{equation}\label{2.24}
\|u(t,x)\|_{L^q(\mathbb{R};L^r(\mathbb{R}^n))}\leq C\|u_0\|_{L^2(\mathbb{R}^n)},
\end{equation}
where $q\geq2$, $\frac1q=\frac{n-1}2(\frac12-\frac1r)$ and $(q,r,n)\neq(2,\infty,3)$.
\end{lemma}
\begin{proof}
Applying Lemma \ref{stri1} with $\sigma=0$ and $\gamma=n(\frac12-\frac1r)-\frac1q=\frac{n+1}{q(n-1)}$, we obtain $\|u\|_{L^q_t(\mathbb{R};L^r(\mathbb{R}^n))}\leq C\|u_0\|_{\dot H^{\gamma}}$. Since $0<\gamma\leq1$, we have by Lemma \ref{orthogonality} with $\epsilon_1=1+$,
\begin{equation*}
\begin{split}
\|u\|_{L^q_t(\mathbb{R};L^r(\mathbb{R}^n))}&\leq C\Big(\sum_{j\in\mathbb{Z}} 2^{2j\gamma}\|P_j u_0\|^2_{L^{2}}\Big)^{\frac12}= C\Big(\sum_{j\in\mathbb{Z}} 2^{2j\gamma}\Big\|\sum_{k=0}^\infty\sum_{\ell=1}^{{d}(k)}P_j\big(a_{k,\ell}(r)Y_{k,\ell}(\theta)\big)\Big\|^2_{L^{2}}\Big)^{\frac12}\\
&=C\Big(\sum_{j\in\mathbb{Z}} 2^{2j\gamma}\Big\|\sum_{k=0}^\infty\sum_{\ell=1}^{{d}(k)}P_j\tilde{P}_1\big(a_{k,\ell}(r)Y_{k,\ell}(\theta)\big)\Big\|^2_{L^{2}}\Big)^{\frac12}\\
&\leq C\Big(\sum_{j\in\mathbb{Z}} 2^{2j\gamma-2\epsilon_1|j-1|}\|u_0\|^2_{L^{2}}\Big)^{\frac12}\leq C\|u_0\|_{L^{2}}.
\end{split}
\end{equation*}
This completes the proof of Lemma \ref{stri}.
\end{proof}
\section{Estimates of Hankel transforms}
In this section, we prove some estimates for Hankel transforms of order $\nu(k)$. These estimates will be used to prove the main results in the next section.
\begin{proposition}\label{Hankel1}
Let $k\in \mathbb N$, $1\leq\ell\leq d(k)$, and let $\varphi$ be a smooth function supported in the interval $I:=[\frac12,2]$. Then
\begin{equation}\label{3.1}
\Big\|\int_0^\infty e^{- it\rho} {J}_{\nu(k)}( r\rho) b^0_{k,\ell}(\rho) \varphi(\rho)\mathrm{d}\rho \Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\leq C\min\big\{R^{\frac12},1\big\} \|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)},
\end{equation}
where $R\in 2^{\mathbb{Z}}$ and $C$ is a constant independent of $R$, $k$, and $\ell$.
\end{proposition}
\begin{proof}
Using the Plancherel theorem in $t$, we have
\begin{equation}\label{3.2}
\text{L.H.S. of}~\eqref{3.1}\lesssim \Big\| \big\|J_{\nu(k)}( r\rho) b^0_{k,\ell}(\rho) \varphi(\rho)\big\|_{L^{2}_\rho} \Big\|_{L^2_{r}([R,2R])}.
\end{equation}
We first consider the case $R\lesssim1$. Since $\nu(k)>0$, one has by \eqref{2.4}
\begin{equation}\label{3.3}
\text{L.H.S. of}~\eqref{3.1}\lesssim \big\|b^0_{k,\ell}(\rho)\big\|_{L^{2}_\rho(I)}\Big(\int_R^{2R} \Big|\frac{r^{\nu(k)}}{2^{\nu(k)}\Gamma(\nu(k)+\frac12)\Gamma(\frac12)}\Big|^{2}\mathrm{d}r\Big)^{\frac12}\lesssim R^{\frac12}\big\|b^0_{k,\ell}(\rho)\big\|_{L^2_\rho(I)}.
\end{equation}
Next we consider the case $R\gg 1$. It follows from \eqref{3.2} that \eqref{3.1} reduces to showing
\begin{equation}\label{3.4}
\int_R^{2R}|J_{k}(r)|^2\mathrm{d}r\leq C,\quad R\gg 1,
\end{equation}
where the constant $C$ is independent of $k$ and $R$. To prove \eqref{3.4}, we write
\begin{equation*}
\int_R^{2R}|J_{k}(r)|^2\mathrm{d}r=\int_{I_1}|J_{k}(r)|^2\mathrm{d}r+\int_{I_2}|J_{k}(r)|^2\mathrm{d}r+\int_{I_3}|J_{k}(r)|^2\mathrm{d}r,
\end{equation*}
where
$$I_1=[R,2R]\cap[0,\frac k 2],\quad I_2=[R,2R]\cap[\frac k 2,2k]\quad \text{and}\quad I_3=[R,2R]\cap[2k,\infty].$$
Using \eqref{2.8} and \eqref{2.10} in Lemma \ref{Bessel}, we have
\begin{equation}\label{3.5}
\int_{I_1}|J_{k}(r)|^2\mathrm{d}r\leq C \int_{I_1}e^{-cr}\mathrm{d}r\leq C e^{-cR},
\end{equation}
and
\begin{equation}\label{3.6}
\int_{I_3}|J_{k}(r)|^2\mathrm{d}r\leq C.
\end{equation}
On the other hand, one has by \eqref{2.9}
\begin{equation*}
\int_{[\frac k 2,2k]}|J_{k}(r)|^2\mathrm{d}r\leq C \int_{[\frac k 2,2k]}k^{-\frac23}(1+k^{-\frac13}|r-k|)^{-\frac 12}\mathrm{d}r\leq C.
\end{equation*}
Observing that $[R,2R]\cap[\frac k 2,2k]=\emptyset$ unless $R\sim k$, we obtain
\begin{equation}\label{3.7}
\int_{I_2}|J_{k}(r)|^2\mathrm{d}r\leq C.
\end{equation}
This, together with \eqref{3.5} and \eqref{3.6}, yields \eqref{3.4}, and hence \eqref{3.1}.
\end{proof}
\begin{equation}gin{proposition}\lambdabel{Hankel2}Let $\nablaamma\nablaeq2$ and let $k\in \mathbb N, 1\leq\varepsilonll\leq d(k)$.
Suppose $\tauext{supp}~b^0_{k,\varepsilonll}(\rho)\subset I:=[1,2]$. Then
\begin{equation}gin{equation}\lambdabel{3.8}
\begin{equation}gin{split}
\mathcal{B}ig\|\mathcal{H}_{\nu(k)}&\big[\cos(
t\rho)b^0_{k,\varepsilonll}(\rho)\big](r)\mathcal{B}ig\|_{L^2_t(\mathbb{R};L^\nablaamma_{r^{n-1}\mathrm{d}r}([R,2R]))}
\\&\leq C \min\mathcal{B}ig\{R^{\fracrac{(n+1)+(\nablaamma-2)\nu(k)}\nablaamma-\fracrac{n-1}
2}, R^{\fracrac{n-1}\nablaamma-\fracrac{n-2}2}\mathcal{B}ig\}
\|b^0_{k,\varepsilonll}(\rho)\|_{L^2_\rho(I)},
\varepsilonnd{split}
\varepsilonnd{equation}
where $R\in 2^{\mathbb{Z}}$ and $C$ is a constant independent of $R, k$ and
$\varepsilonll$.
\varepsilonnd{proposition}
\begin{proof}
We first consider the case $R\gg 1$. Using the definition of the Hankel
transform and interpolation, we only need to prove
\begin{equation}\label{3.9}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} {J}_{\nu(k)}(r\rho)b^0_{k,\ell}(\rho)
(r\rho)^{-\frac{n-2}2}&\rho^{n-1}\mathrm{d}\rho\Big\|_{L^2_t(\mathbb{R};L^2_{r^{n-1}\mathrm{d}r}([R,2R]))}\\&
\lesssim R^{\frac1 2}\| b^0_{k,\ell}(\rho)\|_{L^2_\rho},
\end{split}
\end{equation}
and
\begin{equation}\label{3.10}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} {J}_{\nu(k)}(r\rho)b^0_{k,\ell}(\rho)
(r\rho)^{-\frac{n-2}2}&\rho^{n-1}\mathrm{d}\rho\Big\|_{L^2_t(\mathbb{R};L^\infty_{r^{n-1}\mathrm{d}r}([R,2R]))}\\&
\lesssim R^{-\frac{n-2}2}\| b^0_{k,\ell}(\rho)\|_{L^2_\rho}.
\end{split}
\end{equation}
The estimate \eqref{3.9} follows from Proposition \ref{Hankel1}. To prove
\eqref{3.10}, it is enough to show that there exists a constant $C$
independent of $k,\ell$ such that
\begin{equation}\label{3.11}
\begin{split}
\Big\|\int_0^\infty e^{-it\rho} J_{\nu(k)}( r\rho)
b^0_{k,\ell}(\rho) \varphi(\rho)\mathrm{d}\rho
\Big\|_{L^2_t(\mathbb{R};L^\infty_{r}([R,2R]))}\leq C
\|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)}.
\end{split}
\end{equation}
By the Sobolev embedding $H^1(\Omega)\hookrightarrow
L^\infty(\Omega)$ with $\Omega=[R,2R]$, it suffices to show
\begin{equation}\label{3.12}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} J_{\nu(k)}(r\rho)
b^0_{k,\ell}(\rho) &\varphi(\rho)\mathrm{d}\rho
\Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\leq C
\|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)},
\end{split}
\end{equation}
and
\begin{equation}\label{3.13}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} J^{\prime}_{\nu(k)}(r\rho)
b^0_{k,\ell}(\rho) \rho \varphi(\rho)\mathrm{d}\rho
\Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\leq C
\|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)}.
\end{split}
\end{equation}
Indeed, \eqref{3.12} follows from Proposition \ref{Hankel1}, while \eqref{3.13}
is obtained by applying the Plancherel theorem in $t$ together with Lemma 2.2.

Secondly, we consider the case $R\lesssim1$. From the definition of the
Hankel transform, we need to prove
\begin{equation}\label{3.14}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} {J}_{\nu(k)}(r\rho)&b^0_{k,\ell}(\rho)
(r\rho)^{-\frac{n-2}2}\rho^{n-1}\mathrm{d}\rho\Big\|_{L^2_t(\mathbb{R};L^\gamma_{r}([R,2R]))}
\\&\lesssim R^{\frac{2+(\gamma-2)\nu(k)}{\gamma}-\frac{n-1} 2}\|
b^0_{k,\ell}(\rho)\|_{L^2_\rho}.
\end{split}
\end{equation}
On the other hand, we have by Proposition \ref{Hankel1}
\begin{equation}\label{3.15}
\begin{split}
\Big\|\int_0^\infty e^{-it\rho} {J}_{\nu(k)}(r\rho)b^0_{k,\ell}(\rho)
&(r\rho)^{-\frac{n-2}2}\rho^{n-1}\mathrm{d}\rho\Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\\&
\lesssim R^{-\frac{n-3} 2}\| b^0_{k,\ell}(\rho)\|_{L^2_\rho}.
\end{split}
\end{equation}
By interpolation, it suffices to prove the estimate
\begin{equation}\label{3.16}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} {J}_{\nu(k)}(r\rho)b^0_{k,\ell}(\rho)
&(r\rho)^{-\frac{n-2}2}\rho^{n-1}\mathrm{d}\rho\Big\|_{L^2_t(\mathbb{R};L^\infty_{r}([R,2R]))}\\&
\lesssim R^{-\frac{n-1} 2+\nu(k)}\| b^0_{k,\ell}(\rho)\|_{L^2_\rho}.
\end{split}
\end{equation}
Indeed, using the Sobolev embedding, we can prove \eqref{3.16} by
showing
\begin{equation*}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} J_{\nu(k)}(r\rho)
b^0_{k,\ell}(\rho) &\varphi(\rho)\mathrm{d}\rho
\Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\leq C R^{\frac12+\nu(k)}
\|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\Big\|\int_0^\infty e^{- it\rho} J^{\prime}_{\nu(k)}( r\rho)
b^0_{k,\ell}(\rho) \rho \varphi(\rho)\mathrm{d}\rho
\Big\|_{L^2_t(\mathbb{R};L^2_{r}([R,2R]))}\leq C R^{\nu(k)-\frac12}
\|b^0_{k,\ell}(\rho)\|_{L^2_\rho(I)}.
\end{split}
\end{equation*}
These two estimates are implied by \eqref{2.4} and \eqref{2.5}.
This concludes the proof of the proposition.
\end{proof}
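Although we do not reproduce \eqref{2.4} and \eqref{2.5} here, let us indicate how such bounds produce the powers of $R$ above, assuming (as is standard) that they encode the small-argument estimates $|J_\nu(r)|\lesssim r^{\nu}$ and $|J^{\prime}_\nu(r)|\lesssim r^{\nu-1}$ for $0<r\lesssim1$, with constants uniform in $\nu$. After applying Plancherel in $t$ and Fubini, matters reduce to the bounds, uniform in $\rho\in[1,2]$,
\begin{equation*}
\Big(\int_R^{2R}|J_{\nu(k)}(r\rho)|^2\,\mathrm{d}r\Big)^{\frac12}\lesssim\Big(\int_R^{2R}(r\rho)^{2\nu(k)}\,\mathrm{d}r\Big)^{\frac12}\lesssim R^{\nu(k)+\frac12},
\qquad
\Big(\int_R^{2R}|\rho J^{\prime}_{\nu(k)}(r\rho)|^2\,\mathrm{d}r\Big)^{\frac12}\lesssim R^{\nu(k)-\frac12},
\end{equation*}
which are precisely the factors $R^{\frac12+\nu(k)}$ and $R^{\nu(k)-\frac12}$ required in the last two displays of the proof.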
\section{Proof of Theorem \ref{thm}}
In this section, we use Proposition \ref{Hankel1} and Proposition
\ref{Hankel2} to prove Theorem \ref{thm}. We first consider the
Cauchy problem:
\begin{equation}\label{4.1}
\begin{cases}
(\partial_{tt}-\Delta +\frac{a}{|x|^2})u(x,t)=0,\\
u(x,0)=u_0(x),~\partial_tu(x,0)=0.
\end{cases}
\end{equation}
We use the spherical harmonic expansion to write
\begin{equation}\label{4.2}
u_0(x)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}a^0_{k,\ell}(r)Y_{k,\ell}(\theta).
\end{equation}
Then we have the following proposition:
\begin{proposition}\label{pro}
Let $\gamma=\frac{2(n-1)}{n-2}+$ and suppose
$\text{supp}~\big(\mathcal{H}_{\nu}a^0_{k,\ell}\big)\subset [1,2]$
for all $k\in\mathbb N$ and $1\leq\ell\leq d(k)$. Then
\begin{equation}\label{4.3}
\begin{split}
\|u(x,t)\|_{L^2_tL^\gamma_{r^{n-1}\mathrm{d}r}L^2(\mathbb{S}^{n-1})}\leq
C\|u_0\|_{L^2_x}.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
Let us consider the equation \eqref{4.1} in polar coordinates. Write
$v(t,r,\theta)=u(t,r\theta)$ and $g(r,\theta)=u_0(r\theta)$. Then
$v(t,r,\theta)$ satisfies
\begin{equation}\label{4.4}
\begin{cases}
\partial_{tt}
v-\partial_{rr}v-\frac{n-1}r\partial_rv-\frac1{r^2}\Delta_{\theta}v+\frac{a}{r^2}v=0,\\
v(0,r,\theta)=g(r,\theta),\quad\partial_t v(0,r,\theta)=0.
\end{cases}
\end{equation}
By \eqref{4.2}, we also have
\begin{equation*}
g(r,\theta)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}a^0_{k,\ell}(r)Y_{k,\ell}(\theta).
\end{equation*}
Using separation of variables, we can write $v$ as a superposition
\begin{equation}\label{4.5}
v(t,r,\theta)=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}v_{k,\ell}(t,r)Y_{k,\ell}(\theta),
\end{equation}
where $v_{k,\ell}$ satisfies the equation
\begin{equation*}
\begin{cases}
\partial_{tt}
v_{k,\ell}-\partial_{rr}v_{k,\ell}-\frac{n-1}r\partial_rv_{k,\ell}+\frac{k(k+n-2)+a}{r^2}v_{k,\ell}=0, \\
v_{k,\ell}(0,r)=a^0_{k,\ell}(r),\qquad\partial_t v_{k,\ell}(0,r)=0
\end{cases}
\end{equation*}
for each $k\in \mathbb N$ and $1\leq\ell\leq d(k)$. From the
definition of $A_{\nu}$, it becomes
\begin{equation}\label{4.6}
\begin{cases}
\partial_{tt}
v_{k,\ell}+A_{\nu(k)}v_{k,\ell}=0, \\
v_{k,\ell}(0,r)=a^0_{k,\ell}(r),\qquad\partial_t v_{k,\ell}(0,r)=0.
\end{cases}
\end{equation}
Applying the Hankel transform to the equation \eqref{4.6}, we have
by Lemma \ref{Hankel3}
\begin{equation}\label{4.7}
\begin{cases}
\partial_{tt}
\tilde{ v}_{k,\ell}+\rho^2\tilde{v}_{k,\ell}=0, \\
\tilde{v}_{k,\ell}(0,\rho)=b^0_{k,\ell}(\rho),\qquad\partial_t\tilde{v}_{k,\ell}(0,\rho)=0,
\end{cases}
\end{equation}
where
\begin{equation}\label{4.8}
\tilde{v}_{k,\ell}(t,\rho)=(\mathcal{H}_{\nu}
v_{k,\ell})(t,\rho),\quad
b^0_{k,\ell}(\rho)=(\mathcal{H}_{\nu}a^0_{k,\ell})(\rho).
\end{equation}
Solving this ODE and using the inverse Hankel transform, we obtain
\begin{equation*}
\begin{split}
v_{k,\ell}(t,r)&=\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu(k)}(r\rho)\tilde{v}_{k,\ell}(t,\rho)\rho^{n-1}\mathrm{d}\rho\\
&=\frac1{2}\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu(k)}(r\rho)\big(e^{it\rho}+e^{-it\rho}\big)b^0_{k,\ell}(\rho)\rho^{n-1}\mathrm{d}\rho.
\end{split}
\end{equation*}
Therefore, we get
\begin{equation}\label{4.9}
\begin{split} &u(x,t)=v(t,r,\theta)\\&=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu(k)}(r\rho)\cos(t\rho)b^0_{k,\ell}(\rho)\rho^{n-1}\mathrm{d}\rho\\&=\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r).
\end{split}
\end{equation}
To prove \eqref{4.3}, it suffices to show
\begin{equation}\label{4.10}
\begin{split}
\Big\|\Big(\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\big|\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r)\big|^2\Big)^{\frac12}
\Big\|_{L^2_t(\mathbb{R};L^\gamma_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\leq
C\|u_0\|_{L^2_x}.
\end{split}
\end{equation}
Using the dyadic decomposition, we have by $\ell^{2}\hookrightarrow
\ell^{\gamma}(\gamma>2)$
\begin{equation}\label{4.11}
\begin{split}
&\Big\|\Big(\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\big|\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r)\big|^2\Big)^{\frac12}
\Big\|^2_{L^2_t(\mathbb{R};L^\gamma_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\\&=
\Big\|\Big(\sum_{R\in2^{\mathbb{Z}}}\Big\|\Big(\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\big|\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r)\big|^2\Big)^{\frac12}\Big\|^\gamma_{L^\gamma_{r^{n-1}\mathrm{d}r}([R,2R])}\Big)^{\frac1\gamma}\Big\|^2_{L^2_t(\mathbb{R})}
\\&\lesssim
\sum_{R\in2^{\mathbb{Z}}}\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\Big\|\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r)\Big\|^2_{L^2_t(\mathbb{R};L^\gamma_{r^{n-1}\mathrm{d}r}([R,2R]))}.
\end{split}
\end{equation}
By Proposition \ref{Hankel2}, we obtain
\begin{equation}\label{4.12}
\begin{split}
&\Big\|\Big(\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\big|\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\big](r)\big|^2\Big)^{\frac12}
\Big\|^2_{L^2_t(\mathbb{R};L^\gamma_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}
\\&\lesssim
\sum_{R\in2^{\mathbb{Z}}}\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\min\Big\{R^{\frac{(n+1)+(\gamma-2)\nu(k)}{\gamma}-\frac{n-1}2}, R^{\frac{n-1}{\gamma}-\frac{n-2}2}\Big\}^2\|
b^0_{k,\ell}(\rho)\|^2_{L^2_\rho}\\&\lesssim
\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\|
b^0_{k,\ell}(\rho)\|^2_{L^2_\rho}.
\end{split}
\end{equation}
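Let us briefly indicate why the dyadic sum over $R\in2^{\mathbb{Z}}$ in the last step of \eqref{4.12} converges with a bound uniform in $k$, assuming (as we do here) that the $\nu(k)$ are nondecreasing in $k$ and bounded below by a positive constant. At the value $\gamma=\frac{2(n-1)}{n-2}$ one computes
\begin{equation*}
\frac{(n+1)+(\gamma-2)\nu(k)}{\gamma}-\frac{n-1}{2}=\frac{n-3}{2(n-1)}+\frac{\nu(k)}{n-1},
\qquad
\frac{n-1}{\gamma}-\frac{n-2}{2}=0,
\end{equation*}
and the ``$+$'' in $\gamma=\frac{2(n-1)}{n-2}+$ pushes the second exponent strictly below $0$ while the first stays bounded below by a positive constant. The sum over $R$ of the squared minimum is therefore controlled by two convergent geometric series, uniformly in $k$.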
Since $\text{supp}~b^0_{k,\ell}(\rho)\subset[1,2]$, we have
\begin{equation*}
\begin{split}
\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\|
b^0_{k,\ell}(\rho)\|^2_{L^2_\rho} \lesssim
\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\|
\big(\mathcal{H}_{\nu(k)}a^0_{k,\ell}\big)(\rho)\|^2_{L^2_{\rho^{n-1}\mathrm{d}\rho}}.
\end{split}
\end{equation*}
It follows from Lemma \ref{Hankel3} that
\begin{equation*}
\begin{split}
\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\|
\big(\mathcal{H}_{\nu(k)}a^0_{k,\ell}\big)(\rho)\|^2_{L^2_{\rho^{n-1}\mathrm{d}\rho}}=
\sum_{k=0}^{\infty}\sum_{\ell=1}^{d(k)}\|
a^0_{k,\ell}(r)\|^2_{L^2_{r^{n-1}\mathrm{d}r}}=\|u_0(x)\|^2_{L^2_x(\mathbb{R}^n)}.
\end{split}
\end{equation*}
This completes the proof of \eqref{4.3}.
\end{proof}
Now we turn to the proof of Theorem \ref{thm}. We choose $\beta\in
C_0^\infty(\mathbb{R}^+)$ supported in $[\frac12,2]$ such that
$\sum\limits_{N\in2^\mathbb{Z}}\beta(\frac{\rho}N)=1$ for all $\rho\in \mathbb{R}^+$.
Let $\beta_N(\rho)=\beta(\frac{\rho}N)$ and let $\tilde{\beta}_N$ be defined
similarly. For simplicity, we assume $u_1=0$. Then we can write
\begin{equation}\label{4.13}
\begin{split} u(x,t)=&\sum_{M\in2^{\mathbb{Z}}}\Big\{Y_{0,1}(\theta)\mathcal{H}_{\nu(0)}\big[\cos(t\rho)b^0_{0,1}(\rho)\beta_M(\rho)\big](r)\\&\qquad\quad+\sum_{N\in2^{\mathbb N}}\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\beta_M(\rho)\big](r)\Big\}\\:=&u_{<}(x,t)+u_{\geq}(x,t).
\end{split}
\end{equation}
Without loss of generality, it suffices to estimate
$u_{\geq}(x,t)$. By Lemma \ref{square-expression}, Lemma
\ref{square-expression2} and a scaling argument, we have, for
$2\leq q, r$ and $r<\infty$,
\begin{equation}\label{4.14}
\begin{split}
&\|u_{\geq}(t,x)\|^2_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}\sum_{N\in2^{\mathbb N}}\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(\rho)\beta_M(\rho)\big](r)\Big\|^2_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 1q-\frac nr)}\sum_{N\in2^{\mathbb N}}\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^2_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}.
\end{split}
\end{equation}
$\bullet$ {\bf Case 1:} $n\geq4$. We have by interpolation
\begin{equation}\label{4.15}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\
&\lesssim\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^\lambda_{L^2_t(\mathbb{R};L^{\gamma_0}_x(\mathbb{R}^n))}\\&\qquad\times
\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^{1-\lambda}_{L^{\infty}_t(\mathbb{R};L^2_x(\mathbb{R}^n))},
\end{split}
\end{equation}
where
\begin{equation}\label{4.16}
\begin{split}
\frac 1q=\frac\lambda 2+\frac{1-\lambda}\infty,\qquad
\frac1r=\frac\lambda {\gamma_0}+\frac{1-\lambda}2,\qquad
\frac1{\gamma_0}=\frac q2\Big(\frac1r+\frac1q-\frac12\Big).
\end{split}
\end{equation}
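For completeness, note that the third relation in \eqref{4.16} is forced by the first two: the first gives $\lambda=\frac{2}{q}$, and substituting into the second yields
\begin{equation*}
\frac{1}{\gamma_0}=\frac{1}{\lambda}\Big(\frac1r-\frac{1-\lambda}{2}\Big)
=\frac q2\Big(\frac1r-\frac12\Big)+\frac12
=\frac q2\Big(\frac1r+\frac1q-\frac12\Big).
\end{equation*}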
Since $(q,r)\in\Lambda$, one has
$\frac{2(n-1)}{n-2}<\gamma_0\leq\frac{2(n-1)}{n-3}$. By
\eqref{Bernstein} and the argument used in the proof of Proposition \ref{pro},
one has
\begin{equation}\label{4.17}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^2_t(\mathbb{R};L^{\frac{2(n-1)}{n-2}+}_{x}(\mathbb{R}^n))}\\&\lesssim
N^{\frac12+}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\big|^2\Big)^{\frac12}\Big\|_{L^2_t(\mathbb{R};L^{\frac{2(n-1)}{n-2}+}_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\\&\lesssim
N^{\frac12+}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)b^0_{k,\ell}(M{\rho})\beta(\rho)\big|^2\Big)^{\frac12}\Big\|_{L^2_\rho}.
\end{split}
\end{equation}
On the other hand, using the endpoint Strichartz estimate in Lemma
\ref{stri}, we have
\begin{equation}\label{4.18}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^2_t(\mathbb{R};L^{\frac{2(n-1)}{n-3}}_{x}(\mathbb{R}^n))}\\&\lesssim
\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)b^0_{k,\ell}(M{\rho})\beta(\rho)\big|^2\Big)^{\frac12}\Big\|_{L^2_\rho}.
\end{split}
\end{equation}
Therefore, we obtain by interpolation
\begin{equation}\label{4.19}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^2_t(\mathbb{R};L^{\gamma_0}_x(\mathbb{R}^n))}\\&\lesssim
N^{(n-1)(\frac1{\gamma_0}-\frac{n-3}{2(n-1)})+}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)b^0_{k,\ell}(M{\rho})\beta(\rho)\big|^2\Big)^{\frac12}\Big\|_{L^2_\rho}.
\end{split}
\end{equation}
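The power of $N$ in \eqref{4.19} can be read off from \eqref{4.17} and \eqref{4.18} as follows: writing
\begin{equation*}
\frac1{\gamma_0}=\theta\,\frac{n-2}{2(n-1)}+(1-\theta)\,\frac{n-3}{2(n-1)},
\qquad\text{i.e.}\qquad
\theta=2(n-1)\Big(\frac{1}{\gamma_0}-\frac{n-3}{2(n-1)}\Big),
\end{equation*}
the interpolation produces the factor $N^{\frac{\theta}{2}+}=N^{(n-1)(\frac{1}{\gamma_0}-\frac{n-3}{2(n-1)})+}$, which is the exponent stated above.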
By Lemma \ref{Hankel3}, we have
\begin{equation*}
\begin{split}
\big\|[\mathcal{H}_{\nu(k)}a_{k,\ell}](r)\big\|_{L^2_{r^{n-1}\mathrm{d}r}}=\|a_{k,\ell}(\rho)\|_{L^2_{\rho^{n-1}\mathrm{d}\rho}}.
\end{split}
\end{equation*}
Hence, in the spirit of the energy estimate, we obtain
\begin{equation}\label{4.20}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^\infty_t(\mathbb{R};L^2_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+;L^2(\mathbb{S}^{n-1})))}\\&\lesssim
\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\big|^2\Big)^{\frac12}\Big\|_{L^\infty_t(\mathbb{R};L^{2}_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\\&\lesssim
\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\tilde{\beta}_N(k)\big\|b^0_{k,\ell}(M{\rho})\beta(\rho)\big\|_{L^2_{\rho}}^2\Big)^{\frac12}.
\end{split}
\end{equation}
Combining \eqref{4.14}, \eqref{4.15}, \eqref{4.19}, and \eqref{4.20},
we have
\begin{equation}\label{4.21}
\begin{split}
&\|u_{\geq}(t,x)\|^2_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 1q-\frac nr)}\sum_{N\in2^{\mathbb N}}N^{2\lambda(n-1)(\frac1{\gamma_0}-\frac{n-3}{2(n-1)})+}\sum_{k}\sum_{\ell=1}^{d(k)}\tilde{\beta}_N(k)\big\|b^0_{k,\ell}(M{\rho})\beta(\rho)\big\|_{L^2_{\rho}}^2
\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 1q-\frac nr)}\sum_{N\in2^{\mathbb N}}N^{2[\frac2q+(n-1)(\frac1r-\frac12)]+}\sum_{k}\sum_{\ell=1}^{d(k)}\tilde{\beta}_N(k)\big\|b^0_{k,\ell}(M{\rho})\beta(\rho)\big\|_{L^2_{\rho}}^2.
\end{split}
\end{equation}
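The second inequality in \eqref{4.21} merely rewrites the power of $N$: by $\lambda=\frac2q$ and the last relation in \eqref{4.16},
\begin{equation*}
2\lambda(n-1)\Big(\frac1{\gamma_0}-\frac{n-3}{2(n-1)}\Big)
=2\Big[(n-1)\Big(\frac1r+\frac1q-\frac12\Big)-\frac{n-3}{q}\Big]
=2\Big[\frac2q+(n-1)\Big(\frac1r-\frac12\Big)\Big].
\end{equation*}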
By making use of Lemma \ref{orthogonality},
$s=n(\frac12-\frac1r)-\frac1q$, and
$\bar{s}=(1+\epsilon)(\frac2q-(n-1)(\frac12-\frac1r))$, we get
\begin{equation}\label{4.22}
\begin{split}
\|u_{\geq}(t,x)\|_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\lesssim
\|\langle\Omega\rangle^{\bar{s}}u_0\|_{\dot H^s}.
\end{split}
\end{equation}
$\bullet$ {\bf Case 2:} $n=3$. Since the endpoint Strichartz
estimate fails, the above argument breaks down. By
interpolation, we have
\begin{equation}\label{4.23}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^3))}\\
&\lesssim\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^\lambda_{L^2_t(\mathbb{R};L^{4+}_x(\mathbb{R}^3))}\\&\qquad
\times\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^{1-\lambda}_{L^{q_0}_t(\mathbb{R};L^{r_0}_x(\mathbb{R}^3))},
\end{split}
\end{equation}
where
\begin{equation}\label{4.24}
\begin{split}
\frac 1q=\frac\lambda 2+\frac{1-\lambda}{q_0},\qquad
\frac1r=\frac\lambda {4+}+\frac{1-\lambda}{r_0},\qquad
\frac{1}2=\frac 1{q_0}+\frac{1}{r_0},\quad r_0\neq \infty.
\end{split}
\end{equation}
By \eqref{Bernstein} and the argument used in the proof of Proposition
\ref{pro}, one has
\begin{equation}\label{4.25}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^2_t(\mathbb{R};L^{4+}_{x}(\mathbb{R}^3))}\\&\lesssim
N^{\frac12+}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\big|^2\Big)^{\frac12}\Big\|_{L^2_t(\mathbb{R};L^{4+}_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\\&\lesssim
N^{\frac12+}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)b^0_{k,\ell}(M{\rho})\beta(\rho)\big|^2\Big)^{\frac12}\Big\|_{L^2_\rho}.
\end{split}
\end{equation}
On the other hand, by the Strichartz estimate with $(q_0, r_0)$ in
Lemma \ref{stri}, we have
\begin{equation}\label{4.26}
\begin{split}
&\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|_{L^{q_0}_t(\mathbb{R};L^{r_0}_{x}(\mathbb{R}^3))}\\&\lesssim
\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)b^0_{k,\ell}(M{\rho})\beta(\rho)\big|^2\Big)^{\frac12}\Big\|_{L^2_\rho}.
\end{split}
\end{equation}
This together with \eqref{4.14}, \eqref{4.23} and \eqref{4.25}
yields
\begin{equation}\label{4.27}
\begin{split}
&\|u_{\geq}(t,x)\|^2_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 1q-\frac nr)}\sum_{N\in2^{\mathbb N}}N^{\lambda+}\sum_{k}\sum_{\ell=1}^{d(k)}\tilde{\beta}_N(k)\big\|b^0_{k,\ell}(M{\rho})\beta(\rho)\big\|_{L^2_{\rho}}^2
\\&\lesssim
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 1q-\frac nr)}\sum_{N\in2^{\mathbb N}}N^{(4+\epsilon)[\frac1q+\frac1r-\frac12]}\sum_{k}\sum_{\ell=1}^{d(k)}\tilde{\beta}_N(k)\big\|b^0_{k,\ell}(M{\rho})\beta(\rho)\big\|_{L^2_{\rho}}^2.
\end{split}
\end{equation}
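The replacement of $N^{\lambda+}$ by $N^{(4+\epsilon)[\frac1q+\frac1r-\frac12]}$ in \eqref{4.27} is a consequence of \eqref{4.24}: adding the first two relations and using $\frac1{q_0}+\frac1{r_0}=\frac12$ gives, up to the ``$+$'' in the exponent $4+$,
\begin{equation*}
\frac1q+\frac1r=\lambda\Big(\frac12+\frac14\Big)+(1-\lambda)\,\frac12=\frac12+\frac{\lambda}{4},
\qquad\text{so that}\qquad
\lambda=4\Big(\frac1q+\frac1r-\frac12\Big).
\end{equation*}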
Since $s=n(\frac12-\frac1r)-\frac1q$, by the scaling argument and Lemma
\ref{orthogonality}, we get
\begin{equation}\label{4.28}
\begin{split}
\|u_{\geq}(t,x)\|_{L^q_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\lesssim
\|\langle\Omega\rangle^{\bar{s}}u_0\|_{\dot H^s}.
\end{split}
\end{equation}
Moreover, for $q=2$ and $4<r<\infty$, \eqref{4.14} and the ``Bernstein''
inequality \eqref{Bernstein} imply that
\begin{equation*}
\begin{split}
&\|u_{\geq}(t,x)\|^2_{L^2_t(\mathbb{R};L^r_x(\mathbb{R}^n))}
\\ \lesssim&
\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 12-\frac nr)}\sum_{N\in2^{\mathbb N}}\Big\|\sum_{k}\tilde{\beta}_N(k)\sum_{\ell=1}^{d(k)}Y_{k,\ell}(\theta)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\Big\|^2_{L^2_t(\mathbb{R};L^r_x(\mathbb{R}^n))}\\
\lesssim&\sum_{M\in2^{\mathbb{Z}}}M^{2(n-\frac 12-\frac nr)}\sum_{N\in2^{\mathbb N}}
N^{2\bar{s}(r)}\Big\|\Big(\sum_{k}\sum_{\ell=1}^{d(k)}\big|\tilde{\beta}_N(k)\mathcal{H}_{\nu(k)}\big[\cos(t\rho)b^0_{k,\ell}(M{\rho})\beta(\rho)\big](r)\big|^2\Big)^{\frac12}\Big\|^2_{L^2_t(\mathbb{R};L^{r}_{r^{n-1}\mathrm{d}r}(\mathbb{R}^+))}\\\lesssim
& \|\langle\Omega\rangle^{\bar{s}(r)}u_0\|^2_{\dot H^s}.
\end{split}
\end{equation*}
Combining this with \eqref{4.22} and \eqref{4.28}, we complete the
proof of Theorem \ref{thm}.
\section{Proof of Theorem \ref{thm1}}
To prove Theorem \ref{thm1}, we first use the inhomogeneous
Strichartz estimates for the wave equation without potential in
\cite{H,O} and the arguments in \cite{PSS} to prove an inhomogeneous
Strichartz estimate for the wave equation with inverse-square
potential.
\begin{proposition}[Inhomogeneous Strichartz estimates]\label{inh}
Let $\widetilde{\Box}=\partial_t^2+A_{\nu}$ and let $v$ solve the
inhomogeneous wave equation $\widetilde{\Box} v=h$ in $\mathbb{R}\times\mathbb{R}^n$
with zero initial data. If $\nu>\max\{\frac{n-2}2-\frac n{q_0},
\frac n{r_0}-\frac{n-2}2-2\}$, then
\begin{equation}\label{5.1}
\begin{split}
\|v\|_{L^{q_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}\lesssim
\|h\|_{L^{r_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)},
\end{split}
\end{equation}
where $q_0=(p-1)(n+1)/2$ and $r_0=(n+1)(p-1)/(2p)$ with
$p_{\text{h}}<p<p_{\text{conf}}$.
\end{proposition}
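As a sanity check on the exponents, note that the pair $(q_0,r_0)$ satisfies the scaling relation used in the proof below:
\begin{equation*}
\frac{1}{r_0}-\frac{1}{q_0}=\frac{2p}{(n+1)(p-1)}-\frac{2}{(n+1)(p-1)}=\frac{2}{n+1};
\end{equation*}
moreover, with the usual convention $p_{\text{conf}}=\frac{n+3}{n-1}$, the restriction $p<p_{\text{conf}}$ is equivalent to $q_0<\frac{2(n+1)}{n-1}$.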
\begin{proof}
By the continuity property of $\mathcal{K}^0_{\nu,\lambda}$ in Lemma
\ref{continuous}, it follows that
\begin{equation}\label{5.2}
\begin{split}
&\|v\|_{L^{q_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}\leq
\|\mathcal{K}^0_{\nu,\lambda}\|_{q_0\rightarrow q_0}
\|\mathcal{K}^0_{\lambda,\nu}v\|_{L^{q_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}.
\end{split}
\end{equation}
By \eqref{2.21} with $k=0$, one has
$\mathcal{K}^0_{\lambda,\nu}h=\mathcal{K}^0_{\lambda,\nu}\widetilde{\Box}v=\Box\mathcal{K}^0_{\lambda,\nu}v$.
We recall the inhomogeneous Strichartz estimates: for
$\frac1{r}-\frac1{q}=\frac2{n+1}$ and
$\frac{2n}{n-1}<q<\frac{2(n+1)}{n-1}$,
\begin{equation*}
\begin{split}
\|u\|_{L^{q}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}\leq C\|
(\partial_{tt}-\Delta)u\|_{L^{r}_{t,x}(\mathbb{R}\times\mathbb{R}^n)},
\end{split}
\end{equation*}
which was shown by Harmse \cite{H} and Oberlin \cite{O}. Thus we
obtain
\begin{equation}\label{5.3}
\begin{split}
&\|v\|_{L^{q_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}\leq C
\|\mathcal{K}^0_{\nu,\lambda}\|_{q_0\rightarrow q_0}
\|\mathcal{K}^0_{\lambda,\nu}
h\|_{L^{r_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)}\\&\lesssim
\|\mathcal{K}^0_{\nu,\lambda}\|_{q_0\rightarrow
q_0}\|\mathcal{K}^0_{\lambda,\nu}\|_{r_0\rightarrow r_0}\|
h\|_{L^{r_0}_{t,x}(\mathbb{R}\times\mathbb{R}^n)},
\end{split}
\end{equation}
where we use the facts that $\frac1{r_0}-\frac1{q_0}=\frac2{n+1}$
and $\frac{2n}{n-1}<q_0<\frac{2(n+1)}{n-1}$.
\end{proof}
Now we are in a position to prove Theorem \ref{thm1}. Define the
solution map
\begin{equation*}
\begin{split}
\Phi(u)&=\cos\big(t\sqrt{P_a}\big)u_0(x)+\frac{\sin\big(t\sqrt{P_a}\big)}{\sqrt{P_a}}u_1(x)+\int_0^t\frac{\sin\big((t-s)\sqrt{P_a}\big)}{\sqrt{P_a}}F(u(s,x))\mathrm{d}s
\\&:=u_{\text{hom}}+u_{\text{inh}},
\end{split}
\end{equation*}
on the complete metric space
$$X=\big\{u: u\in C_t(\dot H^{s_c})\cap L^{q_0}_{t,x},\
\|u\|_{L^{q_0}_{t,x}}\leq 2C\epsilon\big\}$$ equipped with the metric
$d(u_1,u_2)=\|u_1-u_2\|_{L^{q_0}_{t,x}}$,
where $P_a=-\Delta+\frac{a}{|x|^2}$
with $a$ satisfying \eqref{1.7}, and $\epsilon\leq\epsilon(p)$ is as
in \eqref{1.8}.
From Lemma \ref{stri1} and \eqref{1.5}, we obtain
\begin{equation}\label{5.4}
\begin{split}
\|u_{\text{hom}}\|_{C_t(\dot H^{s_c})\cap L^{q_0}_{t,x}}\leq
C\big(\|u_0\|_{\dot H^{s_c}}+\|u_1\|_{\dot H^{s_c-1}}\big)\leq
C\epsilon.
\end{split}
\end{equation}
By Proposition \ref{inh} and the inhomogeneous version of the
Strichartz estimates \eqref{1.2}, one has
\begin{equation}\label{5.5}
\begin{split}
\|u_{\text{inh}}\|_{C_t(\dot H^{s_c})\cap L^{q_0}_{t,x}}\leq
C\|F(u)\|_{L^{r_0}_{t,x}}\leq C\|u\|^{p}_{L^{q_0}_{t,x}}\leq
C^2(C\epsilon)^{p-1}\epsilon\leq C\epsilon.
\end{split}
\end{equation}
A similar argument leads to
\begin{equation}\label{5.6}
\begin{split}
\|\Phi(u_1)-\Phi(u_2)\|_{L^{q_0}_{t,x}}\leq&
C\|F(u_1)-F(u_2)\|_{L^{r_0}_{t,x}}\\\leq&
C^2(C\epsilon)^{p-1}\|u_1-u_2\|_{L^{q_0}_{t,x}}\leq
\frac12\|u_1-u_2\|_{L^{q_0}_{t,x}}.
\end{split}
\end{equation}
Therefore, the solution map $\Phi$ is a contraction on $X$ with respect to
the metric $d(u_1,u_2)=\|u_1-u_2\|_{L^{q_0}_{t,x}}$. The standard
contraction mapping argument completes the proof.
\section{Appendix: The Proof of Lemma \ref{square-expression}}
We apply H\"ormander's technique to show a weak-type
$(1,1)$ estimate for the multiplier operators associated with the
Hankel transform.
The multiplier operators associated with the Hankel transform are
defined by
\begin{equation}\label{6.1}
[L_jf](r)=\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu}(r\rho)[\mathcal{H}_{\nu}f](\rho)\beta_j(\rho)\mathrm{d}\omega(\rho),\quad
j\in\mathbb{Z},
\end{equation}
where
\begin{equation}\label{6.2}
(\mathcal{H}_{\nu}f)(\rho)=\int_0^\infty(r\rho)^{-\frac{n-2}2}J_{\nu}(r\rho)f(r)\mathrm{d}\omega(r),\quad
\text{and}\quad \mathrm{d}\omega(\rho)=\rho^{n-1}\mathrm{d}\rho.
\end{equation}
Since $\mathcal{H}_{\nu}=\mathcal{H}^{-1}_{\nu}$, we have
$\mathcal{H}_{\nu}[L_jf]=\beta_j(\rho)[\mathcal{H}_{\nu}f]$. We
first claim that
\begin{equation}\label{6.3}
\big\|\big(\sum_{j\in\mathbb{Z}}|L_jf|^2\big)^{\frac12}\big\|_{L^p(\omega)}\sim\big\|\sum_{j\in\mathbb{Z}}L_jf\big\|_{L^p(\omega)}\sim\|f\|_{L^p(\omega)}.
\end{equation}
This implies Lemma \ref{square-expression}, by choosing
$f=\mathcal{H}_{\nu}[\cos(t\rho)b_{k,\ell}^0(\rho)]$. To show
\eqref{6.3}, we need the following lemma.
\begin{lemma}\label{square}
Let $f\in L^p(\omega)$, $1<p<\infty$. Then there exists a constant
$C_p$ such that
\begin{equation}\label{6.4}
\big\|\big(\sum_{j\in\mathbb{Z}}|{L}_jf|^2\big)^{\frac12}\big\|_{L^p(\omega)}\leq
C_p\|f\|_{L^p(\omega)}.
\end{equation}
\end{lemma}
We postpone the proof for a moment. By duality, one has
\begin{equation*}
\|f\|_{L^p(\omega)}=\sup_{\|g\|_{L^{p'}(\omega)}\leq1}\int_0^\infty
f(r)\overline{g}(r)\mathrm{d}\omega(r).
\end{equation*}
By Lemma \ref{Hankel3}, we observe that
\begin{equation*}
\begin{split}
\int_0^\infty
f(r)\overline{g}(r)\mathrm{d}\omega(r)&=\sum_{j,j'\in\mathbb{Z}}\int_0^\infty
\mathcal{H}_{\nu}[L_jf](\rho){\mathcal{H}_{\nu}[L_{j'}\overline{g}]}(\rho)\mathrm{d}\omega(\rho)
\\&=\sum_{j,j'\in\mathbb{Z}}\int_0^\infty \beta_j(\rho)\beta_{j'}(\rho)[\mathcal{H}_{\nu}f](\rho)[\mathcal{H}_{\nu}\overline{g}](\rho)\mathrm{d}\omega(\rho)
\\&\leq
C\sum_{j\in\mathbb{Z}}\int_0^\infty\beta_j(\rho)\beta_{j}(\rho)[\mathcal{H}_{\nu}f](\rho)[\mathcal{H}_{\nu}\overline{g}](\rho)\mathrm{d}\omega(\rho).
\end{split}
\end{equation*}
This implies that
\begin{equation}\label{6.5}
\begin{split}
\int_0^\infty f(r)\overline{g}(r)\mathrm{d}\omega(r)\leq
C\sum_{j\in\mathbb{Z}}\int_0^\infty [L_jf](r)[L_jg](r)\mathrm{d}\omega(r).
\end{split}
\end{equation}
Hence, by the Cauchy--Schwarz and H\"older inequalities together with Lemma \ref{square}, we obtain
\begin{equation}
\begin{split} \|f\|_{L^p(\omega)}&\leq
C\sup_{\|g\|_{L^{p'}(\omega)}\leq1}\big\|\big(\sum_{j\in\mathbb{Z}}|{L}_jf|^2\big)^{\frac12}\big\|_{L^p(\omega)}\big\|\big(\sum_{j\in\mathbb{Z}}|{L}_jg|^2\big)^{\frac12}\big\|_{L^{p'}(\omega)}
\\&\leq C\big\|\big(\sum_{j\in\mathbb{Z}}|{L}_jf|^2\big)^{\frac12}\big\|_{L^p(\omega)}.
\end{split}
\end{equation}
This together with \eqref{6.4} gives \eqref{6.3}. When $p=2$, we
have by Lemma \ref{Hankel3}
\begin{equation*}
\begin{split}
&\Big\|\big(\sum_{j\in\mathbb{Z}}|{L}_jf|^2\big)^{\frac12}\Big\|^2_{L^2(\omega)}=\sum_{j\in\mathbb{Z}}\big\|{L}_jf\big\|^2_{L^2(\omega)}
=\int_0^\infty\sum_{j\in\mathbb{Z}}|\beta_j(\rho)|^2|\mathcal{H}_{\nu}f|^2\mathrm{d}\omega(\rho)\leq
C\|f\|^2_{L^2(\omega)}.
\end{split}
\end{equation*}
Define the operator $S$ by $f~\mapsto~\{L_j f\}_{j\in\mathbb{Z}}$; then
$\|S(f)\|_{L^2(\omega;\ell^2(\mathbb{Z}))}\leq C\|f\|_{L^2(\omega)}$.
To show \eqref{6.4}, it suffices to prove
\begin{equation}\label{6.7}
\begin{split}
\|S(f)\|_{L^{1,\infty}(\omega;\ell^2(\mathbb{Z}))}\leq C\|f\|_{L^1(\omega)},
\end{split}
\end{equation}
where $L^{1,\infty}(\omega)$ denotes the weak $L^1(\omega)$ space. Define
the generalized convolution $f\# g$ by
\begin{equation}\label{6.8}
\begin{split}
f\# g(x)=\int_0^\infty(\tau_xf)(y)g(y)\mathrm{d}\omega(y), \quad
x\in \mathbb{R}^+,
\end{split}
\end{equation}
where $f,g\in L^1(\omega)$ and the Hankel translation $\tau_xf$ is
defined by
\begin{equation}\label{6.9}
\begin{split}
(\tau_xf)(y)=\int_0^\infty K_{\nu}(x,y,z)f(z)\mathrm{d}\omega(z),
\quad x,y\in \mathbb{R}^+,
\end{split}
\end{equation}
with
\begin{equation}\label{6.10}
\begin{split}
K_{\nu}(x,y,z)=\int_0^\infty(xt)^{-\frac{n-2}2}J_{\nu}(xt)(yt)^{-\frac{n-2}2}J_{\nu}(yt)(zt)^{-\frac{n-2}2}J_{\nu}(zt)\mathrm{d}\omega(t),
\quad x,y,z\in \mathbb{R}^+.
\end{split}
\end{equation}
Then $\mathcal{H}_{\nu}[f\#
g]=\mathcal{H}_{\nu}(f)\mathcal{H}_{\nu}(g)$. Moreover, we have
$L_jf=k_j\# f$ with $k_j=\mathcal{H}_{\nu}(\beta_j)$. Taking into
account the fact that $(\tau_x f)(y)=(\tau_y f)(x)$ and Theorem 2.4
in \cite{CW}, it suffices to prove the Hankel version of the
well-known H\"ormander condition
\begin{equation*}
\begin{split}
\int_{|x-y_0|>2|y-y_0|}\big(\sum_{j\in\mathbb{Z}}\big|\tau_{y}k_j(x)-\tau_{y_0}k_j(x)\big|^2\big)^{\frac12}\mathrm{d}\omega(x)\leq
C,
\end{split}
\end{equation*}
where $C$ is independent of $y, y_0$. This is implied by
\begin{equation*}
\begin{split}
\sum_{j\in\mathbb{Z}}\int_{|x-y_0|>2|y-y_0|}\big|\tau_{y}k_j(x)-\tau_{y_0}k_j(x)\big|\mathrm{d}\omega(x)\leq
C,
\end{split}
\end{equation*}
which can be proved by the arguments in \cite{BM,GS}. \vskip0.5cm
{\bf Acknowledgments:}\quad The authors would like to express their
gratitude to Professor S. Shao for his helpful discussions and
the anonymous referee
for their invaluable comments and
suggestions. The authors were partly supported by the NSF of China
(No.11171033, No.11231006) and by Beijing Center of Mathematics and
Information Science.
\begin{thebibliography}{60}
{\small
\bibitem{BPSS} N. Burq, F. Planchon, J. Stalker and A. S.
Tahvildar-Zadeh. Strichartz estimates for the wave and Schr\"odinger
equations with the inverse-square potential, J. Funct. Anal. 203
(2003), 519-549.
\bibitem{BPSS1} N. Burq, F. Planchon, J. Stalker and A. S. Tahvildar-Zadeh. Strichartz estimates for the wave and Schr\"odinger equations
with potentials of critical decay, Indiana Univ. Math. J. 53(2004),
1665-1680.
\bibitem{BM} J. Betancor and L. Rodriguez-Mesa.
Weighted inequalities for Hankel convolution operators, Illinois J.
Math., 44(2000), 230-245.
\bibitem{CT} J. Cheeger, M. Taylor. On the diffraction of waves by conical singularities. I, Comm. Pure Appl. Math., 35(1982), 275-331.
\bibitem{CW} R. Coifman and G. Weiss. Analyse harmonique non commutative sur certains
espaces homogenes, Lecture Notes in Math., 242, Springer-Verlag,
Berlin and New York, 1971.
\bibitem{GS} J. Gosselin and K. Stempak. A weak-type estimate for Fourier-Bessel
multipliers, Proc. Amer. Math. Soc. 106 (1989), 655-662.
\bibitem{H} K. Harmse. On Lebesgue space estimates for the wave equation, Indiana Univ. Math. J. 39(1990) 229-248.
\bibitem{KSWW} H. Kalf, U. W. Schmincke, J. Walter and R. W\"ust.
On the spectral theory of Schr\"odinger and Dirac operators with
strongly singular potentials. In Spectral theory and differential
equations, 182-226. Lect. Notes in Math., 448 (1975) Springer,
Berlin.
\bibitem{LS} H. Lindblad and C.D. Sogge. On existence and scattering with
minimal regularity for semi-linear wave equations, J. Funct. Anal.,
130(1995) 357-426.
\bibitem{LS1} H. Lindblad and C.D. Sogge. Long-time existence for small amplitude semilinear wave equations,
Amer. J. Math. 118(1996),1047-1135.
\bibitem{LZ} J. L. Vazquez and E. Zuazua. The Hardy inequality and the asymptotic behaviour of the heat equation with
an inverse-square potential, J. Funct. Anal., 173(2000) 103-153.
\bibitem{MZZ1} C. Miao, J. Zhang and J. Zheng. Linear Adjoint Restriction Estimates for
Paraboloid, Preprint.
\bibitem{O} D. M. Oberlin. Convolution estimates for some distributions with singularities on the light cone, Duke Math. J. 59(1989), 747-757.
\bibitem{PSS} F. Planchon, J. Stalker and A. S. Tahvildar-Zadeh. $L^p$ estimates for the wave equation with the inverse-square
potential, Discrete Contin. Dynam. Systems, 9(2003), 427-442.
\bibitem{PSS1} F. Planchon, J. Stalker and A. S. Tahvildar-Zadeh. Dispersive estimate for the wave equation with the inverse-square
potential, Discrete Contin. Dynam. Systems, 9(2003), 1387-1400.
\bibitem{Sogge2} C. D. Sogge. Lectures on nonlinear wave equations. Second edition. International Press, Boston, MA, 2008.
\bibitem{Stempak} K. Stempak. A Weighted uniform $L^p$ estimate of Bessel functions: A note on a paper of
Guo, Proc. Amer. Math. Soc. 128 (2000) 2943-2945.
\bibitem{Stein1} E.M. Stein. Harmonic
Analysis: Real Variable Methods, Orthogonality and Oscillatory
Integrals, Princeton Mathematical Series, 43(1993), Princeton
University Press, Princeton, N.J.
\bibitem{Stein2} E.M. Stein. Topics in Harmonic Analysis Related to the Littlewood-Paley Theory,
Annals of Mathematical Series, no.30, Princeton University Press,
New Jersey, 1970.
\bibitem{SW} E.M. Stein and G. Weiss. Introduction to Fourier analysis on Euclidean spaces, Princeton Mathematical
Series, 32(1971), Princeton University Press, Princeton, N. J.
\bibitem{Stri} R. Strichartz. Multipliers for spherical harmonic expansions, Trans. Amer. Math. Soc. 167(1972), 115-124.
\bibitem{Sterbenz} J. Sterbenz and Appendix by I. Rodnianski. Angular Regularity and Strichartz Estimates for the Wave
Equation, Int. Math. Res. Notices 4(2005), 187-231.
\bibitem{Tao4} T. Tao. Spherically averaged endpoint Strichartz estimates for the two-dimensional Schr\"odinger equation, Comm. Part. Diff. Equ. 25(2000), 1471-1485.
\bibitem{Watson} G. N. Watson. A Treatise on the Theory of Bessel Functions. Second Edition
Cambridge University Press, (1944).
\bibitem{Wolff} T. Wolff. A sharp bilinear cone restriction estimate, Ann. of Math.
153 (2001), 661-698.}
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We give asymptotically sharp upper bounds for the Khovanov width and the dealternation number of positive braid links, in terms of their
crossing number. The same braid-theoretic technique, combined with Ozsv\'ath, Stipsicz, and Szab\'o's Upsilon invariant, allows us to determine the exact cobordism distance between
torus knots with braid index two and six.
\end{abstract}
\maketitle
\section{Introduction}
Every link diagram with $n$ crossings can be turned into one of the two alternating diagrams with the same underlying projection by changing at most $n/2$ crossings. Therefore the ratio between the dealternation number $\dalt(L)$ -- the smallest number of crossing changes needed to turn some diagram of $L$ into an alternating {diagram} -- and the
crossing number $c(L)$ of a link $L$ is at most one-half.
We show that this ratio is bounded away from one-half for positive braid links with fixed braid index. The latter condition is necessary; we will exhibit a family of positive braid links with increasing braid index whose ratio $\dalt/c$ converges to one-half.
\begin{theorem} \label{main1}
Let $L$ be a link of
braid index $n$ that can be represented as the closure of a positive braid on $n$ strands. Then
$$\frac{\dalt(L)}{c(L)} \leq \frac{1}{2}-\frac{1}{2(n^2-n+1)}.$$
\end{theorem}
The following result shows the asymptotic optimality of this ratio. Incidentally, it also settles the question about the largest possible ratio between the Khovanov width $w_{Kh}(L)$ of a link $L$ and its crossing number $c(L)$.
\begin{proposition} \label{propLn}
The family of links $L_n$ defined as the closures of the braids $\beta_n=(\sigma_1 \ldots \sigma_{n-1} \sigma_{n-1} \ldots \sigma_1)^{n-1}$ on $n$ strands satisfies
$$\lim_{n \to \infty} \frac{\dalt(L_n)}{c(L_n)}=\lim_{n \to \infty} \frac{w_{Kh}(L_n)}{c(L_n)}=\frac{1}{2}.$$
\end{proposition}
As discussed above, the ratio $\dalt(L)/c(L)$ cannot exceed one-half. Similarly, the ratio $w_{Kh}(L)/c(L)$ has no accumulation point above one-half, since the Khovanov width is bounded from above by the dealternation number (see~\cite[Theorem~8]{CK}):
\[w_{Kh}(L) \leq \dalt(L)+2.\]
At present, the question about the largest ratio $\dalt/c$ for positive braid links with fixed braid index $n$ remains open.
However, the answer is known to be $\frac{1}{4}$ for $n=3$ by Abe and Kishimoto's work on $\dalt$ of 3--stranded braids~\cite{AK}; we determine the answer for $n=4$.
\begin{proposition} \label{prop4braids}
Let $L$ be a link of
braid index $4$ that can be represented as the closure of a positive braid on 4 strands. Then
\[\frac{\dalt(L)}{c(L)} \leq \frac{1}{3}.\]
Moreover, the family of links defined as the closures of the $4$--braids\linebreak $(\sigma_1 \sigma_2 \sigma_3 \sigma_3 \sigma_2 \sigma_1)^n$
attains this bound in the limit $n \to \infty$.
\end{proposition}
Computations suggest that the ratio $w_{Kh}(L)/c(L)$ is far less than one-half for torus links $L=T(p,q)$. In fact, we expect their asymptotic ratio to be
\[\lim_{n \to \infty} \frac{w_{Kh}(T(n,n))}{c(T(n,n))}=\frac{1}{4}.\]
This would follow from the sharpness of Sto\v{s}i\'{c}'s inequality for the Khovanov width (\cite[Corollary 5]{St}; see \cref{stosic} below).
The following result provides evidence towards this; it shows that Sto\v{s}i\'{c}'s inequality is asymptotically sharp for
torus links with braid index $6$.
\begin{proposition} \label{prop6torus}
For all integers $n\geq3$ and $k \geq 1$:\\[-2ex]
\begin{enumerate}
\item[\emph{(i)}] $\displaystyle \dalt(T(6,2n)) \leq 2n+2,$\\[-1ex]
\item[\emph{(ii)}] $\displaystyle \dalt(T(6,2n+1)) \leq 2n+2,$\\[-1ex]
\item[\emph{(iii)}] $\displaystyle 6k \leq w_{Kh}(T(6,6k))-2\leq \dalt(T(6,6k)),$\\[-1ex]
\item[\emph{(iv)}] $\displaystyle 6k-1\pm1 \leq \dalt(T(6,6k\pm1)),$\\[-1ex]
\item[\emph{(v)}] $\displaystyle \lim_{k \to \infty} \frac{\dalt(T(6,6k))}{c(T(6,6k))}=\lim_{k \to \infty} \frac{w_{Kh}(T(6,6k))}{c(T(6,6k))}=\frac{1}{5}.$
\end{enumerate}
\end{proposition}
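For orientation, item (v) is an elementary consequence of items (i) and (iii): the torus link $T(6,6k)$ is the closure of the positive $6$--braid $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{6k}$ and has braid index $6$, so by the remark at the beginning of \cref{sec:twistandwidth} its crossing number is $c(T(6,6k))=30k$; hence
\[
\frac{6k}{30k}\;\leq\;\frac{w_{Kh}(T(6,6k))-2}{c(T(6,6k))}\;\leq\;\frac{\dalt(T(6,6k))}{c(T(6,6k))}\;\leq\;\frac{6k+2}{30k},
\]
and both ratios tend to $\frac{1}{5}$ as $k\to\infty$.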
The proof of the upper bounds in \cref{prop6torus} consists in finding braid representatives of torus links with the smallest possible number of generators $\sigma_i$ with even index, i.e.~$\sigma_2$ or $\sigma_4$.
This technique for obtaining upper bounds has another interesting application to the smooth cobordism distance $d_{\text{cob}}(K,L)$ of pairs of knots $K,L$, defined as the minimal genus among all smooth cobordisms in $S^3 \times [0,1]$ connecting $K \times \{0\}$ and $L \times \{1\}$. We write $\upsilon(K)$ for $\Upsilon_K(1)$, where $\Upsilon_K$ denotes the Upsilon invariant of a knot $K$, defined by Ozsv\'ath, Stipsicz and Szab\'o in~\cite{OSS_2014}, and we denote by $\tau(K)$ the tau invariant of a knot $K$ defined by Ozsv\'ath and Szab\'o in~\cite{OzsvathSzabo_03_KFHandthefourballgenus}.
\begin{theorem} \label{main3}
For torus knots $K$ and $L$ of braid index $2$ and $6$, respectively, we have
\[d_{\text{cob}}(K,L)=\max\left\{|\upsilon(L)-\upsilon(K)|,|\tau(L)-\tau(K)|\right\}.\]
\end{theorem}
An explicit formula for $d_{\text{cob}}(K,L)$
is provided after the proof of \cref{main3}; see~\eqref{eq:cobdistexplicit}.
All the statements concerning general positive braids and 4--braids are proved in the next section; the results about torus links are proved in \cref{sec:6strands}.
\cref{sec:altvsdalt} contains an analogue of \cref{prop6torus} for
torus links with braid index $4$,
and compares the dealternation number with the alternation number.
\section{Twist regions and Khovanov width of positive braids}\label{sec:twistandwidth}
The proofs of \cref{main1} and \cref{propLn} involve an estimation of the crossing number and the dealternation number of positive braid links. The former task is easy, thanks to a result of Bennequin: if a link $L$ is represented by a positive braid whose number of strands coincides with the
braid index of~$L$, then that braid realises the
crossing number $c(L)$. Indeed, the canonical Seifert surface associated with the closure of a positive braid has minimal genus (see~\cite{B}); a diagram with fewer crossings and at least as many Seifert circles would result in a Seifert surface of smaller genus, a contradiction. Here we recall that the number of Seifert circles is not smaller than the
braid index of a link (see~\cite{Y}). For the second task, we need an upper bound for the dealternation number in terms of the number of twist regions $t$ of a positive braid representing a link $L$. A twist region of an $n$--braid is a maximal subword of the form $\sigma_i^k$, for some generator $\sigma_i$ in the braid group on $n$ strands.
The following inequality was proved by Abe and Kishimoto (\cite[Lemma 2.2]{AK}; the generalisation from $3$--braids to $n$--braids is straightforward, see \cref{fig:twist}):
\[\dalt(L) \leq \frac{t}{2}.\]
\begin{figure}
\caption{How to alternate around one twist region with one crossing change.}
\label{fig:twist}
\end{figure}
\begin{proof}[Proof of \cref{main1}]
Let $\beta$
be a positive $n$--braid whose closure is a link $L$ of
braid index $n$. We write $\beta$ as a product of positive braids $\beta_1 \ldots \beta_k \alpha$, where all $\beta_i$ have $\frac{1}{2}n(n-1)+1$ crossings, and $\alpha$ has strictly fewer crossings (the case $k=0$, i.e. $\beta=\alpha$, is also allowed). The condition on the number of crossings guarantees that every braid $\beta_i$ has two strands that cross at least two times; indeed, a positive braid word in which any two strands cross at most once has at most $\frac{1}{2}n(n-1)$ crossings. Consider an innermost bigon formed by two such strands. Then all other strands intersecting that bigon pass over it from the bottom left to the top right, or pass under it from the bottom right to the top left (see \cref{fig:digon}).
\begin{figure}\label{fig:digon}
\label{fig:markov}
\end{figure}
\begin{figure}
\caption{The braid $(\sigma_1\sigma_2\sigma_3\sigma_3\sigma_2\sigma_1)^3$ and its all-B smoothing.}
\label{fig:Bsmoothing}
\end{figure}
These strands can be moved away by an isotopy, giving rise to a positive braid containing a square of a generator. Altogether, we obtain a positive braid equivalent to $\beta$ with at least $k$ squares of generators. By Abe and Kishimoto's result, the dealternation number of $L$ is at most one-half times the number of twist regions of that braid:
$$\dalt(L) \leq \frac{1}{2}(c(L)-k).$$
If $k \geq 1$, the highest possible ratio $\dalt(L)/c(L)$ comes from the case $k=1$, $c(L)=n(n-1)+1$, $\dalt(L) \leq \frac{1}{2}n(n-1)$; it is
$$\frac{\dalt(L)}{c(L)} \leq \frac{1}{2}-\frac{1}{2(n^2-n+1)},$$
as desired. If $k=0$, i.e. $\beta=\alpha$, then either $\alpha$ contains a bigon, leading to a lower ratio $\dalt(L)/c(L)$, or $\alpha$ can be reduced by a Markov move. Indeed, in the latter case, the strand starting at the bottom left of $\alpha$ crosses some number of strands before reaching the top. It can therefore be moved to the top of $\alpha$ and then reduced by a Markov move (see \cref{fig:markov}), contradicting the assumption on the minimality of the braid index of $\beta$.
\end{proof}
\begin{proof}[Proof of \cref{propLn}]
The links $L_n$ represented by the family of braids $\beta_n$
have $n$ components. Therefore, their
braid index is $n$. By the above remark, their
crossing number is realised by the braids $\beta_n$: $c(L_n)=2(n-1)^2$.
The key observation needed to compute the Khovanov width $w_{Kh}(L_n)$ is the adequacy of the diagrams obtained by closing the braids $\beta_n$. This means by definition that the all-A smoothing and the all-B smoothing of the crossings result in a union of circles without points of self-contact. In the case of our braids~$\beta_n$, this is easy to check, since the all-A and all-B smoothings of positive braid diagrams correspond to the all vertical and all horizontal smoothings, respectively. The Khovanov width of a link $L$ with an adequate diagram $D$ can then be determined by another result of Abe (\cite[Theorem 3.2]{A}; the generalisation from knots to multi-component links is straightforward):
$$w_{Kh}(L)=\frac{1}{2}(c(D)-s_A(D)-s_B(D))+3.$$
Here $s_A(D)$ and $s_B(D)$ denote the number of circles resulting from the all-A and the all-B smoothings of $D$, respectively. We compute $s_A=n$, $s_B=3n-4$ for the closures of the braids $\beta_n$ (see \cref{fig:Bsmoothing}) and deduce
$$\lim_{n \to \infty} \frac{w_{Kh}(L_n)}{c(L_n)}=\lim_{n \to \infty} \frac{\dalt(L_n)}{c(L_n)}=\frac{1}{2}.$$
For the latter, we recall $w_{Kh}(L) \leq \dalt(L)+2$ and $\dalt(L) \leq c(L)/2$.
\end{proof}
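Concretely, with the values $c(D)=2(n-1)^2$, $s_A(D)=n$ and $s_B(D)=3n-4$ used above, Abe's formula gives
\[
w_{Kh}(L_n)=\tfrac{1}{2}\big(2(n-1)^2-n-(3n-4)\big)+3=n^2-4n+6,
\]
so that $w_{Kh}(L_n)/c(L_n)=(n^2-4n+6)/(2(n-1)^2)$ indeed tends to $\frac12$ as $n\to\infty$.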
\begin{proof}[Proof of \cref{prop4braids}]
Choose a positive $4$--braid $\beta$
representing $L$ with the minimal number of generators of type $\sigma_2$, and conjugate it so that it does not start with a generator $\sigma_2$. Then two consecutive twist regions of the form $\sigma_2^k$ are separated by at least two crossings of type $\sigma_1$ or $\sigma_3$: if they were separated by a single crossing $\sigma_j$ with $j\in\{1,3\}$, the braid relation $\sigma_2\sigma_j\sigma_2=\sigma_j\sigma_2\sigma_j$ would produce an equivalent positive word with fewer generators $\sigma_2$, contradicting minimality. This is also true for the last and first twist region, when viewing these as consecutive along the closed braid. Therefore, the number of twist regions of the form $\sigma_2^k$ is at most a third of the number of crossings of $\beta$. We conclude as in the proof of \cref{main1}. For the second statement, we observe that the links $L_n$ defined as the closures of the $4$--braids $(\sigma_1 \sigma_2 \sigma_3 \sigma_3 \sigma_2 \sigma_1)^n$
are again adequate, which allows for a simple computation of their Khovanov width. The resulting limits are
\[
\pushQED{\qed}
\lim_{n \to \infty} \frac{\dalt(L_n)}{c(L_n)}=\lim_{n \to \infty} \frac{w_{Kh}(L_n)}{c(L_n)}=\frac{1}{3}.
\qedhere
\popQED
\]
\renewcommand{\qedsymbol}{}
\end{proof}
\section{Dealternation number and cobordism distance for torus links with braid index 6}\label{sec:6strands}
The following braid-theoretic observation is the main geometric input for \cref{main3} and \cref{prop6torus}.
\begin{lemma}\label{lemma:manyodd}
For all integers $n\geq 0$, there exists a positive $6$--braid word $\beta_n$ with $8n+3$ odd generators (i.e.~$\sigma_1$, $\sigma_3$, and $\sigma_5$) and $2n+2$ even generators (i.e.~$\sigma_2$ and $\sigma_4$) such that $\beta_n$ represents the standard torus link $6$--braid $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}$.
\end{lemma}
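Note that the generator count in \cref{lemma:manyodd} is consistent with the word length, as the braid relations preserve the length of positive braid words:
\[
(8n+3)+(2n+2)=10n+5=5(2n+1),
\]
which is the length of $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}$.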
\begin{proof}
The case $n=0$ is trivial since $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)$ contains 3 odd and 2 even generators.
For the case $n=1$,
observe that the positive braid given by the $6$--braid word $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{3}$ is isotopic to
\begin{multline}\label{eq:iso1}
\sigma_1\sigma_3(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_2\sigma_3\sigma_4)(\sigma_2\sigma_3\sigma_4\sigma_5)
=\\
\sigma_1\sigma_3(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_2\sigma_3\sigma_2\sigma_4\sigma_3\sigma_4\sigma_5);
\end{multline}
compare \cref{fig:torus}.
\begin{figure}\label{fig:torus}
\label{fig:commute}
\end{figure}
By applying $\sigma_2\sigma_3\sigma_2\sigma_4\sigma_3\sigma_4=\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3$ (indicated in grey in \cref{fig:torus}), we find
\begin{equation}\label{eq:T63}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^3=\sigma_1\sigma_3(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5).\end{equation}
The right-hand side of~\eqref{eq:T63} can be taken to be $\beta_1$, since it has $11$ odd generators and $4$ even generators.
Next we consider the case $n\geq 2$. We reduce this to the case $n=1$ by using that odd generators `commute' with $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2$ as follows:
\begin{equation}\label{eq:T62commuteswithodd}
(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2\sigma_i=\sigma_{i+2}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2\end{equation}
for all $i$ in $\{1,3,5\}$, where $i+2$ is read modulo $6$ (compare \cref{fig:commute}).
Using Equations~\cref{eq:T63} and \cref{eq:T62commuteswithodd}, we rewrite $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}$ as
\begin{align*}
& (\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1} \\
\overset{\text{\eqref{eq:T63}}}{=}\ &
(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n-2}\sigma_1\sigma_3(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5)\\
\overset{\text{\eqref{eq:T62commuteswithodd}}}{=}\ &
\sigma_i\sigma_{i+2}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n-2}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5)\\
=\hspace{1pt}\ &\sigma_i\sigma_{i+2}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n-1}(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5),\end{align*}
where $i=1$, $i=3$, or $i=5$; depending on whether $n$ is $0$, $1$, or $2$ modulo 3. Again $i+2$ is read modulo $6$.
Applying the above inductively to $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2l+1}$ for $l\leq n$, we find
\begin{align*}
(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}
&=\sigma_i\sigma_{i+2}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n-1}(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5)\\
&=\sigma_i\sigma_{i+2}\sigma_{i-2}\sigma_{i}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n-3}(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5)^2\\
&\;\;\vdots\\
&=\sigma_1^{k_1}\sigma_3^{k_3}\sigma_5^{k_5}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)(\sigma_1\sigma_3\sigma_2\sigma_3\sigma_3\sigma_4\sigma_3\sigma_5)^{n},
\end{align*}
where $k_1+k_3+k_5=2n$.
\end{proof}
\cref{lemma:manyodd} has an interesting application concerning fibre surfaces of braid index $6$
torus knots, which we will use in the proof of Theorem 2. Let $F(p,q)$ denote the unique fibre surface of the torus link $T(p,q)$.
\begin{proposition}\label{prop:subsurfaces}
For all integers $n\geq 2$, the fibre surface $F(6,2n+1)$ contains $F(2,8n+1)$ as an incompressible subsurface.
In particular, \begin{align*}
d_{\text{cob}}(T(6,6k+1),T(2,24k+1))&=g(T(6,6k+ 1))-g(T(2,24k+1))\;\mbox{and}\;\\
d_{\text{cob}}(T(6,6k-1),T(2,24k-7))&=g(T(6,6k- 1))-g(T(2,24k-7)).
\end{align*}
for all positive integers $k$, where $g(T(p,q))=\frac{1}{2}(p-1)(q-1)$ denotes the Seifert genus of $T(p,q)$ for positive coprime integers $p, q$.
\end{proposition}
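For concreteness, using $g(T(p,q))=\frac{1}{2}(p-1)(q-1)$, the two cobordism distances in \cref{prop:subsurfaces} evaluate to
\[
g(T(6,6k+1))-g(T(2,24k+1))=15k-12k=3k,
\qquad
g(T(6,6k-1))-g(T(2,24k-7))=(15k-5)-(12k-4)=3k-1.
\]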
\begin{proof}[Proof of \cref{prop:subsurfaces}]
To the closure of a positive braid word $\beta$, we associate its canonical Seifert surface given by vertical disks for every strand and half twisted bands connecting them for every generator in $\beta$. As remarked in \cref{sec:twistandwidth}, this is a minimal genus Seifert surface. In particular, the $6$--strand positive braid word $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}$ yields the fibre surface $F(6,2n+1)$.
We rewrite $(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}$ as
$(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2(n-1)+1}$ and then apply \cref{lemma:manyodd} to find a braid word $\beta_{n-1}$ with $2n$ even generators and $8n-5$ odd generators such that
\[(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^{2n+1}=(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2\beta_{n-1}.\]
By deleting the $2n$ even generators in $\beta_{n-1}$, we find a positive braid word
\[\alpha_n=(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)^2\sigma_1^{k_1}\sigma_3^{k_3}\sigma_5^{k_5},\] where
$k_1$, $k_3$ and $k_5$ are positive integers such that $k_1+k_3+k_5=8n-5$. The closure of $\alpha_n$ is the torus knot $T(2,8n+1)$. Since deleting a generator in a positive braid word corresponds to deleting a band in the associated Seifert surface, we have that $F(6,2n+1)$ may be turned into $F(2,8n+1)$ by removing $2n$ bands. Consequently, $F(2,8n+1)$ is an incompressible subsurface of $F(6,2n+1)$.
For the second statement of the Proposition, we recall that, if a knot $K$ is the boundary of a genus $g_K$ incompressible subsurface of a genus $g_L$ Seifert surface with boundary the knot $L$, then there exists a cobordism of genus $g_L-g_K$ between $K$ and $L$. Applying this to $T(2,8n+1)$ and $T(6,2n+1)$ for $n=3k\pm1$ yields a cobordism of genus $n$. More explicitly, such a cobordism is e.g.~given by $2n$ saddles guided by the $2n$ bands corresponding to the deleted generators described in the previous paragraph. Finally, $n$ realises the cobordism distance since
by the triangle inequality the cobordism distance
$d_{\text{cob}}(T(6,2n+1),T(2,8n+1))$ is greater than or equal to
\[d_{\text{cob}}(T(6,2n+1),\mathrm{unknot})-d_{\text{cob}}(T(2,8n+1),\mathrm{unknot})=5n-4n=n,\]
where the equality is given by the local Thom conjecture proven by Kronheimer and Mrowka~\cite[Corollary~1.3]{KronheimerMrowka_Gaugetheoryforemb}:
\[d_{\text{cob}}(T(p,q),\mathrm{unknot})=
\frac{(p-1)(q-1)}{2}\text{ for all coprime } p,q\geq 1.\qedhere\]
\end{proof}
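For concreteness, the genus bookkeeping behind the cobordism constructed in the proof of \cref{prop:subsurfaces} reads
\begin{align*}
g(F(6,2n+1)) &= g(T(6,2n+1)) = \tfrac{1}{2}(6-1)\bigl((2n+1)-1\bigr) = 5n,\\
g(F(2,8n+1)) &= g(T(2,8n+1)) = \tfrac{1}{2}(2-1)\bigl((8n+1)-1\bigr) = 4n,
\end{align*}
so the $2n$ deleted bands account for a drop of $2n$ in the first Betti number of the canonical Seifert surface, that is, a drop of $n$ in genus, which is exactly the genus of the cobordism realised by the $2n$ saddle moves.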
For the proofs of \cref{prop6torus} and \cref{main3}, we use
\cref{lemma:manyodd} and
\cref{prop:subsurfaces} as geometric inputs, respectively. As an obstruction to cobordisms and the dealternation number we use the Upsilon invariant, which we recall next, before applying it in the proofs of \cref{prop6torus} and \cref{main3}.
In~\cite{OSS_2014}, Ozsv\'ath, Stipsicz, and Szab\'o introduced an infinite family of concordance invariants $\Upsilon(t)$, parametrised by the interval $[0,2]$. We use $\upsilon$ -- the invariant corresponding to $t=1$ -- and the $\tau$--invariant as introduced by Ozsv\'ath and Szab\'o in~\cite{OzsvathSzabo_03_KFHandthefourballgenus}. The latter can be recovered as $\lim_{t\to0}\frac{-\Upsilon(t)}{t}$.
Both $\tau$ and $\upsilon$ are integer-valued concordance invariants. In fact, they both bound the smooth slice genus and, thus, the cobordism distance of knots~\cite[Corollary~1.3]{OzsvathSzabo_03_KFHandthefourballgenus}\cite[Theorem~1.11]{OSS_2014}.
Thus, for all knots $K$ and $L$ we have
\begin{equation}\label{eq:lowerboundcob}
|\upsilon(L)-\upsilon(K)|,|\tau(L)-\tau(K)|\leq d_{\text{cob}}(K,L).
\end{equation}
As a consequence of the fact that $\upsilon$ equals $-\tau$ on alternating knots and their similar behaviour under crossing changes, one has for all knots $K$ (compare~\cite[Corollary~3]{FellerPohlmannZentner_15}):
\begin{equation}\label{eq:lowerbounddalt}
|\tau(K)+\upsilon(K)|\leq \dalt(K).
\end{equation}
The $\tau$--invariant equals the genus of positive torus knots~\cite[Corollary~1.7]{OzsvathSzabo_03_KFHandthefourballgenus}.
We recall the value of $\upsilon$ on positive torus knots of braid index $2$ and $6$.
\begin{lemma}\label{lemma:upsilonfortorusknots}
For all positive integers $k$,
\begin{align*}
\upsilon(T(2,2k+1)) &=-k,\\
\upsilon(T(6,6k+1)) &=-9k,\\
\upsilon(T(6,6k+5)) &=-9k-6.
\end{align*}
\end{lemma}
\begin{proof}The values of $\upsilon$ for torus knots with braid index $2$ (or more generally thin knots) are provided in~\cite[Theorem~1.14]{OSS_2014}. For torus knots of braid index $6$, the inductive formula from~\cite[Proposition~2.2]{FellerKrcatovich_16_OnCobBraidIndexAndUpsilon} yields
\begin{align*}
\upsilon(T(6,6k+1))&=k\upsilon(T(6,7))=-9k& \;\mbox{and}\;\\
\upsilon(T(6,6k+5))&=k\upsilon(T(6,7))+\upsilon(T(6,5))=-9k-6.&&\qedhere
\end{align*}
\end{proof}
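These values, together with the identity $\tau(T(p,q))=g(T(p,q))$ for positive torus knots recalled above, give the lower bounds used in the next proof. The following Python sketch (an illustration, not part of the proof) evaluates $\upsilon$ on braid index $6$ torus knots via the inductive formula and prints the resulting lower bound $|\tau+\upsilon|$ for $\dalt$ from~\eqref{eq:lowerbounddalt}.
\begin{verbatim}
def genus(p, q):                  # Seifert genus of T(p,q), p,q coprime
    return (p - 1) * (q - 1) // 2

def upsilon_T6(m):                # m > 0 and coprime to 6
    k, r = divmod(m, 6)
    if r == 1:
        return -9 * k             # upsilon(T(6,6k+1)) = -9k
    if r == 5:
        return -9 * k - 6         # upsilon(T(6,6k+5)) = -9k-6
    raise ValueError("m must be coprime to 6")

for m in (7, 11, 13, 17, 19):
    tau = genus(6, m)             # tau = genus for positive torus knots
    print(m, abs(tau + upsilon_T6(m)))   # prints 6, 10, 12, 16, 18
\end{verbatim}
The printed bounds are $6k$ for $m=6k+1$ and $6k-2$ for $m=6k-1$, as computed in the proof of \cref{prop6torus} below.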
\begin{proof}[Proof of \cref{prop6torus}]
Items $(i)$ and $(ii)$ follow from \cref{lemma:manyodd}. Indeed, by \cref{lemma:manyodd}, there exists a positive braid word $\beta_n$ with closure $T(6,2n+1)$ that has $2n+2$ even generators. Changing the corresponding $2n+2$ positive crossing to negative crossings in the associated diagram for $T(6,2n+1)$ yields an alternating diagram. Thus, we have $\dalt(T(6,2n+1))\leq 2n+2$.
Similarly, by \cref{lemma:manyodd}, the torus link $T(6,2n)$ is the closure of a positive braid word $\beta_{n-1}(\sigma_1\sigma_2\sigma_3\sigma_4\sigma_5)$, which has $2n+2$ even generators. By the same reasoning as above this yields $\dalt(T(6,2n))\leq 2n+2$.
The lower bound for the Khovanov width claimed in $(iii)$ is given by Sto\v{s}i\'{c}'s inequality (\cite[Corollary 5]{St}),
\begin{equation}\label{stosic}
w_{Kh}(T(2n,2kn)) \geq n(n-1)k + 2 \, .
\end{equation}
The lower bound claimed in $(iv)$ follows from~\eqref{eq:lowerbounddalt}. Indeed, by \cref{lemma:upsilonfortorusknots}, we have $\upsilon(T(6,6k+1))=-9k$ and, therefore,
\[6k=15k-9k=|\tau(T(6,6k+1))+\upsilon(T(6,6k+1))|\leq \dalt(T(6,6k+1)).\]
Similarly, we have
\begin{equation*}
\begin{split}
6k-2=15k-5-9k+3 & =|\tau(T(6,6k-1))+\upsilon(T(6,6k-1))| \\
& \leq \dalt(T(6,6k-1)).
\end{split}
\end{equation*}
Finally, $(v)$ follows from $(i)$ and $(iii)$ since $c(T(6,6k))=30k$ (compare with the beginning of \cref{sec:twistandwidth}) and $w_{Kh}\leq \dalt+2$.
\end{proof}
Next we turn to the cobordism distance between torus knots of braid index 2 and torus knots of braid index 6.
In fact, it will be clear from the proof below that $d_{\text{cob}}(K,L)=d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)$, where $J$ is the (unique) braid index 2 torus knot of maximal genus such that $d_{\text{cob}}(J,L)=g(L)-g(J)$. See~\eqref{eq:cobdistexplicit} below for an explicit formula for $d_{\text{cob}}(K,L)$.
\begin{proof}[Proof of \cref{main3}]
For the entire proof, we write $L=T(6,m)$ and $K=T(2,n)$, where $n$ is an odd integer and $m$ is an integer coprime to 6. Also, by taking mirror images, we may (and do) assume that $m$ is positive. Furthermore, we take $J$ to be the positive torus knot $T(2,4(m-1)+1)$.
Note that, by \cref{prop:subsurfaces}, there exists a cobordism of genus
\begin{equation}\label{eq:optcob}g(L)-g(J)=\tau(L)-\tau(J)=-\upsilon(J)+\upsilon(L),\end{equation}
where the last equality follows immediately from \cref{lemma:upsilonfortorusknots}.
Let us first consider the case $n\leq 4(m-1)+1$. Then
\begin{equation}\label{eq:cobdistKJ}d_{\text{cob}}(K,J)=\tau(J)-\tau(K)=\left\{\begin{array}{cc}g(J)-g(K)&\text{if }n>0\\g(J)+g(K)&\text{if }n<0\end{array}\right..\end{equation}
Therefore,
\begin{align*}
d_{\text{cob}}(K,L) & =d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L) \\
& \overset{\makebox[0pt][c]{\scriptsize\eqref{eq:cobdistKJ}\eqref{eq:optcob}}}{=}\ \tau(L)-\tau(K)=\left\{\begin{array}{cc}g(L)-g(K)&\text{if }n>0\\g(L)+g(K)&\text{if }n<0\end{array}\right..
\end{align*}
Indeed, we have $d_{\text{cob}}(K,L)\leq d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)$ by composition of cobordisms and $d_{\text{cob}}(K,L)\geq d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)$ follows from the fact that $\tau(L)-\tau(K)$ is a lower bound for $d_{\text{cob}}(K,L)$.
This leaves us with the case $n>4(m-1)+1$.
Similarly to~\eqref{eq:cobdistKJ} we have
\begin{equation}\label{eq:cobdistKJagain}d_{\text{cob}}(K,J)=-\upsilon(K)+\upsilon(J)=g(K)-g(J).\end{equation}
Thus,
\begin{align*}
d_{\text{cob}}(K,L) & =d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)\\
& \overset{\makebox[0pt][c]{\scriptsize\eqref{eq:cobdistKJagain}\eqref{eq:optcob}}}{=}\
-\upsilon(K)+\upsilon(J)-\upsilon(J)+\upsilon(L)\\
& =-\upsilon(K)+\upsilon(L).
\end{align*}
Indeed, we have $d_{\text{cob}}(K,L)\leq d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)$ by composition of cobordisms and $d_{\text{cob}}(K,L)\geq d_{\text{cob}}(K,J)+d_{\text{cob}}(J,L)$ follows from the fact that $-\upsilon(K)+\upsilon(L)$ is a lower bound for $d_{\text{cob}}(K,L)$.
\end{proof}
If we choose $K=T(2,n)$ and $L=T(6,m)$ as in the above proof, where $n$ is an odd integer and $m$ is an integer coprime to 6 and $m\geq 7$, then the distance from \cref{main3} can be explicitly given by
\begin{equation}\label{eq:cobdistexplicit}d_{\text{cob}}(K,L)=\frac{|4(m-1)+1-n|}{2}+\frac{m-1}{2}.\end{equation}
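For positive $n$, the formula~\eqref{eq:cobdistexplicit} can be compared with the case analysis in the proof of \cref{main3} by a direct computation. The following Python sketch (illustrative only, restricted to odd $n\geq 3$ and to positive $m$ coprime to $6$) does this for a range of parameters, reusing the values of $\upsilon$ from \cref{lemma:upsilonfortorusknots}.
\begin{verbatim}
def genus(p, q):
    return (p - 1) * (q - 1) // 2

def upsilon_T6(m):
    k, r = divmod(m, 6)
    return -9 * k if r == 1 else -9 * k - 6        # r == 5

def dcob_formula(n, m):                            # the closed formula
    return (abs(4 * (m - 1) + 1 - n) + (m - 1)) // 2

def dcob_cases(n, m):                              # case analysis of the proof
    if n <= 4 * (m - 1) + 1:
        return genus(6, m) - genus(2, n)           # tau(L) - tau(K)
    return (n - 1) // 2 + upsilon_T6(m)            # -upsilon(K) + upsilon(L)

for m in (7, 11, 13, 17):
    for n in range(3, 200, 2):
        assert dcob_formula(n, m) == dcob_cases(n, m)
\end{verbatim}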
\section{Alternation number and torus links of braid index 4}\label{sec:altvsdalt}
We briefly comment on the \emph{alternation number} $\alt(L)$ of a link $L$ -- the smallest number of crossing changes needed to make $L$ alternating. We note that $\alt(L)$ is different from $\dalt(L)$ -- the smallest number of crossing changes needed to turn some diagram of $L$ into an alternating \emph{diagram}. Clearly, $\alt(L)\leq\dalt(L)$ for all links. This inequality can be strict. The latter follows for example from the fact that all Whitehead doubles $W(K)$ of a knot $K$ satisfy $\alt(W(K))\leq 1$, while $\dalt(W(K))\geq w_{Kh}(W(K))-2$ can be arbitrarily large. While the lower bound given by $w_{Kh}$ no longer holds for $\alt$, the lower bound given by $|\tau+\upsilon|$ still holds; compare~\cite[Corollary~3]{FellerPohlmannZentner_15}.
Consequently all upper bounds for $\dalt$ provided in this paper also hold for $\alt$ and, for torus knots of braid index $6$, the alternation number, like the dealternation number, is determined up to an ambiguity of $2$. Indeed,
\[6k-1\pm1 \leq \alt(T(6,6k\pm1))\leq 6k+1\pm 1,\]
by \cref{prop6torus} (and its proof).
Let us conclude by discussing the case of
braid index $4$ torus links. While an analogue of \cref{lemma:manyodd} holds, the consequences for $\dalt$ are less interesting since $\alt$ was previously determined for $T(4,2n+1)$~\cite[Theorem~1]{FellerPohlmannZentner_15}. Also, the analogue of \cref{main3} (and \cref{prop:subsurfaces}) was previously established; compare~\cite[Corollary~3 (and Theorem~2)]{Feller_15_MinCobBetweenTorusknots}. We briefly summarise the results that can be obtained by the same techniques we used in \cref{sec:6strands}.
\begin{lemma}
For all integers $n\geq0$, there exists a positive $4$--braid word $\beta_n$ with $5n+2$ odd generators and $n+1$ even generators such that $\beta_n$ represents the $4$--braid $(\sigma_1\sigma_2\sigma_3)^{2n+1}$.\qed
\end{lemma}
As a consequence one finds
\begin{align}
n\leq\dalt(T(4,2n+1)) & \leq n+1\notag\qquad\;\mbox{and}\;\\
\dalt(T(4,2n))& \leq n+1;\label{eq:dalttorus4strand}
\end{align}
in comparison, one has $\alt(T(4,2n+1))=n$~\cite[Theorem~1]{FellerPohlmannZentner_15}.
By Sto\v{s}i\'{c}'s inequality and $c(T(4,4k))=12k$, \eqref{eq:dalttorus4strand} yields
\begin{corollary}\label{dalt_braid-index_4}
For all integers $k\geq 1$,
\[2k\leq w_{Kh}(T(4,4k))-2\leq\dalt(T(4,4k))\leq 2k+1\]
and
\[
\lim_{n \to \infty} \frac{w_{Kh}(T(4,4n))}{c(T(4,4n))}= \lim_{n \to \infty} \frac{\dalt(T(4,4n))}{c(T(4,4n))} = \frac{1}{6}.\]
\end{corollary}
\newcommand{\myemail}[1]{\texttt{\href{mailto:#1}{#1}}}
\myemail{sebastian.baader@math.unibe.ch}
\myemail{peter.feller@math.ch}
\myemail{lukas@lewark.de}
\myemail{raphael.zentner@mathematik.uni-regensburg.de}
\end{document} |
\begin{document}
\title{Bounded Model Checking of an MITL Fragment for Timed Automata}
\author{\IEEEauthorblockN{Roland Kindermann, Tommi Junttila and Ilkka Niemel{\"a}}
\IEEEauthorblockA{Department of Information and Computer Science\\
Aalto University\\
P.O.Box 15400, FI-00076 Aalto, Finland\\
Email: \{Roland.Kindermann,Tommi.Junttila,Ilkka.Niemela\}@aalto.fi}
}
\maketitle
\begin{abstract}
Timed automata (TAs) are a common formalism for modeling timed
systems.
Bounded model checking (BMC) is a verification method that searches
for runs violating a property using a SAT or SMT solver.
MITL is a real-time extension of the linear time logic LTL. Originally,
MITL was defined for traces of non-overlapping time intervals rather than the
``super-dense'' time traces allowing for intervals overlapping in single points that are employed by the nowadays common semantics of timed
automata.
In this paper we extend the semantics of a fragment of MITL to super-dense time traces and devise a bounded model checking encoding for the fragment.
We prove correctness and completeness in the sense that using a sufficiently large bound a counter-example to any given non-holding property can be found.
We have implemented the proposed bounded model checking approach and experimentally studied the efficiency and scalability of the implementation.
\end{abstract}
\begin{IEEEkeywords}
timed automaton; metric interval temporal logic; bounded model checking; satisfiability modulo theories
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
Fully-automated verification has many industrial applications.
A particularly interesting and challenging setting for the use of verification are systems for which timing aspects are of high importance like safety instrumented systems or communication protocols.
In this paper, we study verification in a setting where \emph{both} the \emph{system} and the \emph{specification} contain quantitative timing aspects, allowing not only to specify, e.g., that a certain situation will eventually lead to a reaction but also that the reaction will happen within a certain amount of time. Allowing such timing aspects to be part of both the specification and the system adds an additional challenge.
Timed automata~\cite{AlurDill:TCS1994} are a widely employed formalism for the representation of finite state systems augmented with real-valued clocks.
Timed automata have been studied for two decades and various tools for the verification of timed automata exist. Most existing verification techniques and tools, like the model checker Uppaal~\cite{Uppaaltutorial:2004}, however, do not support quantitative specifications on the timing of events. We feel that the ability to state, e.g., that a certain condition triggers a reaction within a certain amount of time provides a clear improvement over being able only to specify that a reaction will eventually occur.
For specifications, we use the linear time logic \ensuremath{\textup{MITL}_{0,\infty}}\ \cite{AlurFederHenzinger:JACM1996}, an extension adding lower and upper time bounds to the popular logic LTL.
Industrial size systems often have a huge discrete state space in addition to the infinite state space of timing-related parts of the system. We feel that fully symbolic verification is a key to tackling large discrete state spaces. We, thus, provide a translation of a pair of a timed automaton representing a system and a \ensuremath{\textup{MITL}_{0,\infty}}\ formula into a symbolic transition system\ that can serve as a foundation for various symbolic verification methods. It is proven that the translated system has a trace if and only if the original timed automaton has a trace satisfying the formula.
We, furthermore, demonstrate how to employ the translation for SMT-based bounded model checking using the region abstraction for timed automata \cite{AlurDill:TCS1994}. We show completeness of the approach and prove the applicability of the region abstraction to the transition system.
Finally, we evaluate the scalability of the approach and the cost for checking specifications containing timing experimentally.
\ensuremath{\textup{MITL}_{0,\infty}}\ is a fragment of the logic MITL \cite{AlurFederHenzinger:JACM1996} for which the question whether or not a given timed automaton has a trace satisfying or violating a given formula is PSPACE-complete~\cite{AlurFederHenzinger:JACM1996}. Previously, a verification approach for \ensuremath{\textup{MITL}_{0,\infty}}\ specifications was introduced in \cite{AlurFederHenzinger:JACM1996} and improved upon in~\cite{DBLP:conf/formats/MalerNP06_long}. At this point, however, there are, to the best of our knowledge, no implementations or results of experiments using these methods available. Additionally, a major difference between the techniques described in \cite{AlurFederHenzinger:JACM1996,DBLP:conf/formats/MalerNP06_long} and our approach lies in the precise semantics of timed automata used. While previous approaches use dense-time semantics, we extend \ensuremath{\textup{MITL}_{0,\infty}}\ to super-dense time. Although dense and super-dense time semantics of timed automata are often used interchangeably in the literature (and in fact do not differ in any important fashion when, e.g., verifying reachability constraints), we will show that equivalences between \ensuremath{\textup{MITL}_{0,\infty}}\ formulas fundamental to the techniques in \cite{AlurFederHenzinger:JACM1996,DBLP:conf/formats/MalerNP06_long} do not hold anymore when using super-dense time semantics.
\section{Timed Automata}
We first give basic definitions for timed automata
(see e.g.\ \cite{AlurDill:TCS1994,Alur:CAV1999,BengtssonYi:2003}).
For simplicity,
we use basic timed automata in the theoretical parts of the paper.
However,
in practice (and the experimental part of the paper)
one usually defines a network of timed automata
that can also have (shared and local) finite domain non-clock variables
manipulated on the edges.
The symbolic bounded model checking encodings presented later in the paper can
be extended to handle both of these features:
see, e.g.,~\cite{Sorea:ENTCS2002,DBLP:conf/forte/AudemardCKS02}
on how to handle synchronization in a network of timed automata.
Alternatively,
one can specify timed systems with a symbolic formalism~\cite{KindermannJunttilaNiemela:ACSD2011}.
Let $X$ be a set of real-valued \emph{clock variables}.
A
\emph{clock valuation} $xValuation$ is a function
$xValuation : X \to \mathbb{R}NonNeg$.
For $\delta \in \mathbb{R}NonNeg$ we define the valuation
$xValuation + \delta$ by
$\forall x \in X:
(xValuation+\delta)(x) = xValuation(x)+\delta$.
The set of \emph{clock constraints} over $X$, $xcons{X}$,
is defined by the grammar
$C \;::=\; \mathbf{true} \mid {x \mathbin{\bowtie} n} \mid {C \land C}$
where $x \in X$,
${\mathbin{\bowtie}} \in \Set{<, \leq, =, \geq, >}$ and
$n \in \mathbb{N}$.
A valuation $xValuation$ satisfies $C\in xcons{X}$,
denoted by $xValuation \models C$, if it evaluates $C$ to true.
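The two basic operations just introduced, delaying a valuation by $\delta$ and testing a clock constraint, are straightforward to implement. The following Python sketch (ours, purely for illustration; the clock names are chosen freely) stores a valuation as a dictionary and a constraint as a list of atoms read conjunctively.
\begin{verbatim}
import operator

OPS = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def delay(valuation, delta):
    """The valuation nu + delta: every clock advances by delta."""
    return {x: v + delta for x, v in valuation.items()}

def satisfies(valuation, constraint):
    """nu |= C, where C is a conjunction of atoms (x, op, n)."""
    return all(OPS[op](valuation[x], n) for (x, op, n) in constraint)

nu = {"x": 0.0, "y": 0.0}
inv_l1 = [("x", "<", 5)]                  # the invariant x < 5 of Fig. 1
print(satisfies(delay(nu, 3.5), inv_l1))  # True
print(satisfies(delay(nu, 5.0), inv_l1))  # False
\end{verbatim}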
A \emph{timed automaton} (TA)
is a tuple $\Tuple{L, l_\textup{init}, clocks, E, I}$ where
\begin{itemize}
\item
$L$ is a finite set of \emph{locations},
\item
$l_\textup{init} \in L$ is the \emph{initial location} of the automaton,
\item
$X$ is a finite set of real-valued \emph{clock variables},
\item
$E \subseteq {L \times xcons{X} \times 2^{X} \times L}$ is a finite set of edges,
each edge $\Tuple{l,g,R,l'}\inE$ specifying a \emph{guard} $g$ and
a set $R$ of \emph{clocks to be reset},
and
\item
$I : L \to xcons{X}$
assigns an \emph{invariant} to each location.
\end{itemize}
\iftrue
\begin{figure}
\caption{A timed automaton}
\label{Fig:TA}
\end{figure}
\else
\begin{figwindow}[1,l,
\makebox[55mm][c]{\includegraphics[width=50mm]{TA}},
{A timed automaton
\label{Fig:TA}}
]
\fi
As an example,
Figure~\ref{Fig:TA} shows a part of a timed automaton with
locations $\mathrel{\textup{\bf X}}Loc{1}$, $\mathrel{\textup{\bf X}}Loc{2}, ...$, and
two clocks $\mathrel{\textup{\bf X}}ClockA$ and $\mathrel{\textup{\bf X}}ClockB$.
The initial location is $\mathrel{\textup{\bf X}}Loc{1}$, having the invariant $\mathrel{\textup{\bf X}}ClockA < 5$.
The invariant of the location $\mathrel{\textup{\bf X}}Loc{2}$ is $\mathbf{true}$.
The edge from $\mathrel{\textup{\bf X}}Loc1$ to $\mathrel{\textup{\bf X}}Loc2$ has the guard $\mathrel{\textup{\bf X}}ClockB \ge 1$
and the reset set $\Set{\mathrel{\textup{\bf X}}ClockB}$.
The guard of the edge from $\mathrel{\textup{\bf X}}Loc2$ to $\mathrel{\textup{\bf X}}Loc3$ is $\mathbf{true}$ and
its reset set is empty.
\iftrue
\else
\end{figwindow}
\fi
A \emph{state} of a timed automaton $\mathcal{A} = \Tuple{L, l_\textup{init}, clocks, E, I}$ is a pair
$\Tuple{l, xValuation}$,
where $l \in L$ is a location
and
$xValuation$ is a clock valuation over $X$.
A state $\Tuple{l, xValuation}$ is
(i)~\emph{initial} if $l = l_\textup{init}$ and
$xValuation(x) = 0$ for each $x \in X$,
and
(ii)~\emph{valid} if $xValuation \models I(l)$.
Let $\Tuple{l, xValuation}$ and $\Tuple{l', \nu'}$ be
states of $\mathcal{A}$.
There is a \emph{time elapse step} of $\delta \in \mathbb{R}Pos$ time units
from $\Tuple{l, xValuation}$ to $\Tuple{l', \nu'}$,
denoted by $\Tuple{l, xValuation} \EStep{\delta} \Tuple{l', \nu'}$,
if
(i)~$l = l'$,
(ii)~$\nu' = xValuation+\delta$,
and
(iii)~$\Tuple{l', \nu'}$ is a valid state.
Intuitively,
there is a time elapse step from a state to another
if the second state can be reached from the first one by
letting $\delta$ amount of time pass.
There is a \emph{discrete step} from $\Tuple{l, xValuation}$ to
$\Tuple{l', \nu'}$,
denoted by $\Tuple{l, xValuation} \EStep{0} \Tuple{l', \nu'}$,
if there is an edge $\Tuple{l, \mathop{\textup{\bf G}}uard, \mathrel{\textup{\bf R}}esets, l'} \in E$
such that
(i)~$xValuation \models \mathop{\textup{\bf G}}uard$,
(ii)~$\Tuple{l', \nu'}$ is a valid state,
and
(iii)~$\nu'(x) = 0$ for all $x \in \mathrel{\textup{\bf R}}esets$ and
$\nu'(x) = xValuation(x)$ for all $x \in {X \setminus \mathrel{\textup{\bf R}}esets}$.
That is, discrete steps can be used to change the current location
as long as the guard and the target location invariant are satisfied.
A discrete step resets some clocks and leaves the others' values unchanged,
i.e., a discrete step does not take any time.
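Reusing the \texttt{delay} and \texttt{satisfies} functions of the previous sketch, time elapse steps and discrete steps can be implemented directly from these definitions; again this is only an illustration, not the symbolic encoding used later in the paper.
\begin{verbatim}
def time_elapse(state, delta, invariants):
    """<l,nu> --delta--> <l,nu+delta> if the invariant stays satisfied."""
    l, nu = state
    nu2 = delay(nu, delta)
    if delta > 0 and satisfies(nu2, invariants[l]):
        return (l, nu2)
    return None

def discrete_step(state, edge, invariants):
    """<l,nu> --0--> <l',nu'> along an edge (l, guard, resets, l')."""
    l, nu = state
    l_src, guard, resets, l_dst = edge
    if l != l_src or not satisfies(nu, guard):
        return None
    nu2 = {x: (0.0 if x in resets else v) for x, v in nu.items()}
    if satisfies(nu2, invariants[l_dst]):
        return (l_dst, nu2)
    return None
\end{verbatim}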
A \emph{run} of $\mathcal{A}$ is an infinite sequence of states
$\pi=\Tuple{l_0, xValuation_0} \EStep{\delta_{0}} \Tuple{l_1, xValuation_1} \EStep{\delta_{1}} \ldots$,
such that
(i) $\Tuple{l_0, xValuation_0}$ is valid and initial,
and
(ii) $\Tuple{l_i, xValuation_i} \EStep{\delta_{i}} \Tuple{l_{i+1}, xValuation_{i+1}}$ with some $\delta_{i} \in \mathbb{R}$
for each consecutive pair of states.
E.g.,
the automaton in Figure~\ref{Fig:TA}
has a run
$\Tuple{\mathrel{\textup{\bf X}}Loc1,(0,0)} \EStep{3.5} \Tuple{\mathrel{\textup{\bf X}}Loc1,(3.5,3.5)} \EStep{0} \Tuple{\mathrel{\textup{\bf X}}Loc2,(3.5,0.0)} \EStep{0} \Tuple{\mathrel{\textup{\bf X}}Loc3,(3.5,0.0)} \EStep{1.1} \Tuple{\mathrel{\textup{\bf X}}Loc3,(4.6,1.1)} \ldots$
where each clock valuation $\Set{\mathrel{\textup{\bf X}}ClockA \mapsto v, \mathrel{\textup{\bf X}}ClockB \mapsto w}$ is
abbreviated with $(v,w)$.
A run is
\emph{non-zeno} if the total amount $\sum_{i=0}^\infty \delta_{i}$ of time passed in the run is infinite.
In the rest of the paper, we will only consider non-zeno runs.
Observe that on timed automata runs,
the automaton can visit multiple locations without time elapsing in between.
For instance,
at the time point 3.5 in the run given above,
the automaton is after the first time elapse step in location $\mathrel{\textup{\bf X}}Loc1$,
then after the first discrete step in location $\mathrel{\textup{\bf X}}Loc2$,
and
finally after the second discrete step in location $\mathrel{\textup{\bf X}}Loc3$.
These kind of ``super-dense'' runs
differ from the dense runs that can be represented with ``signals'',
i.e.~by mapping each time point in $\mathbb{R}NonNeg$ to a single value.
As we will see in the next section,
considering super-dense timed automata runs
complicates model checking as, e.g.,
we cannot get rid of the timed until operator
in the way we would if dense runs were used.
Note that previous papers on timed automata use both dense (e.g.~\cite{AlurDill:TCS1994}) and super-dense time (e.g.~\cite{Alur:CAV1999}), often without addressing the different semantics.
From a practical perspective, super-dense runs appear paradoxical, as they permit multiple successive events to happen with no time passing in between. An alternative way of interpreting super-dense time, however, is that the amount of time in between events is just too small to be of interest and is, thus, abstracted away.
We also take the fact that Uppaal \cite{Uppaaltutorial:2004}, arguably the most successful timed model checker, not only allows for super-dense time traces but actually even makes it possible to \emph{enforce} super-dense behaviors by marking locations as ``urgent'' or ``committed'' as a strong indication that there is an interest in super-dense traces in practice.
\section{The Logic \ensuremath{\textup{MITL}_{0,\infty}}\ for super-dense time}
Next,
we describe the syntax and semantics of \ensuremath{\textup{MITL}_{0,\infty}}{} formulas
over ``super-dense timed traces''
which, as discussed in Sect.~\ref{Sect:TA2TT},
can represent timed automata runs.
\subsection{Syntax and Semantics}
Assuming a set $ps$ of atomic propositions,
the syntax of \ensuremath{\textup{MITL}_{0,\infty}}{} formulas follows that in~\cite{AlurFederHenzinger:JACM1996},
and
is defined by the BNF grammar
$
\phi
\;::=\;
p \mathrel{|}
\neg\phi \mathrel{|}
\phi \land \phi \mathrel{|}
\phi \lor \phi \mathrel{|}
\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \phi
\mathrel{|}
\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \phi
$
where $p$ ranges over $ps$,
$n$ ranges over $\mathbb{N}$,
and
${\mathbin{\bowtie}}$ ranges over $\Set{{<}, {\leq}, {\geq}, {>}}$.
Intuitively,
a strict timed until formula $\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \psi$
states that
$\phi$ holds in all later time points until
$\psi$ holds at a time point $t$ satisfying the timing constraint, i.e. $t \mathbin{\bowtie} n$.
Rational time constraints could be allowed in the temporal operators
without influencing the expressivity of the logic
(see \cite{AlurFederHenzinger:JACM1996} for \ensuremath{\textup{MITL}}{} on dense traces).
We define the
usual abbreviations:
$\mathbf{true} \equiv (p \lor {\neg p})$,
$\mathbf{false} \equiv {\neg\mathbf{true}}$,
${\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\bowtie}n}\phi} \equiv {\mathbf{true} \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \phi}$,
and
${\mathop{\textup{\bf G}^\textup{s}}I{\mathbin{\bowtie}n}\phi} \equiv {\mathbf{false} \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \phi}$.
We now define the semantics of \ensuremath{\textup{MITL}_{0,\infty}}{} over ``super-dense'' timed traces,
and
then later show the correspondence of timed automata runs to such traces.
A \emph{super-dense timed trace} over a set of atomic propositions $ps$
is an infinite sequence
$\sigma = \Tuple{\IntervalI{0},ValuI{0}} \Tuple{\IntervalI{1},ValuI{1}} \ldots$,
where
\begin{itemize}
\item
each $ValuI{i}$ is a subset of $ps$,
\item
each $\IntervalI{i}$ is either an open interval $(T_i,\mathrel{\textup{\bf U}}B_i)$
or
a singleton\ $[T_i,T_i]$
with
$0 \le T_i < \mathrel{\textup{\bf U}}B_i$
and
$T_i,\mathrel{\textup{\bf U}}B_i \in \mathbb{R}NonNeg$,
\item
$\IntervalI{0} = [0,0]$,
\item
for each $i \in \mathbb{N}$ it holds that
(i)
$\IntervalI{i} = (T_i,\mathrel{\textup{\bf U}}B_i)$ implies
$\IntervalI{i+1} = [\mathrel{\textup{\bf U}}B_i,\mathrel{\textup{\bf U}}B_i]$,
and
(ii)
$\IntervalI{i} = [T_i,T_i]$
implies either
$\IntervalI{i+1} = [T_i,T_i]$ or
$\IntervalI{i+1} = (T_i,\mathrel{\textup{\bf U}}B_{i+1})$;
and
\item
every
$t\in \mathbb{R}NonNeg$ is contained in at least one $\IntervalI{i}$.
\end{itemize}
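The conditions just listed can be checked mechanically on a finite prefix of a trace. The following Python sketch (an illustration; the final coverage condition can only be inspected up to the end of the prefix) writes the singleton $[t,t]$ as \texttt{('[]', t)} and the open interval $(t,u)$ as \texttt{('()', t, u)}, and verifies the required alternation.
\begin{verbatim}
def well_formed_prefix(trace):
    if trace[0][0] != ('[]', 0.0):                # I_0 = [0,0]
        return False
    prev = trace[0][0]
    for (interval, _props) in trace[1:]:
        if prev[0] == '()':                       # open interval (t,u):
            if interval != ('[]', prev[2]):       #   next must be [u,u]
                return False
        else:                                     # singleton [t,t]:
            t = prev[1]
            ok_single = interval == ('[]', t)
            ok_open = (interval[0] == '()' and interval[1] == t
                       and interval[2] > t)
            if not (ok_single or ok_open):
                return False
        prev = interval
    return True

sigma = [(('[]', 0.0), set()), (('()', 0.0, 4.0), {'p'}),
         (('[]', 4.0), {'p'}), (('[]', 4.0), {'q'}), (('[]', 4.0), set())]
print(well_formed_prefix(sigma))                  # True
\end{verbatim}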
For each trace element $\Tuple{\IntervalI{i},ValuI{i}}$,
equivalently written as $\IState{\IntervalI{i}}{ValuI{i}}$,
the interpretation is that the atomic propositions in $ValuI{i}$
hold in all the time points in the interval $\IntervalI{i}$.
As consecutive singletons\ are allowed,
it is possible for an atomic proposition to change its value
an
arbitrary finite number of times at a given time point.
This is required to capture timed automata traces containing two or more successive discrete steps and differentiates super-dense timed traces from dense ones.
In the semantics part we could have allowed
general intervals;
however,
our constructions depend on discriminating the end points of left/right-closed intervals and thus we use this normal form already here.
A \emph{dense timed trace} is a super-dense timed trace with no consecutive
singletons\ (i.e., every time point $t\in \mathbb{R}NonNeg$ occurs
in exactly one $\IntervalI{i}$).
The set of all \emph{points} in a trace $\sigma$ is defined by
$tpoints{\sigma} = \Setdef{\Point{i}{t}}{i\in\mathbb{N},t\in\IntervalI{i}}$.
Two points,
$\Point{i}{t},\Point{i'}{t'}\in tpoints{\sigma}$,
are ordered with the ``earlier'' relation $\prec$ defined by
$
\Point{i}{t}\prec\Point{i'}{t'}
\iff
{{i < i'} \lor ({i = i'} \land {t < t'})}
$
and
the set of all points later than $\Point{i}{t}$
is defined by
$
\LaterPoints{\sigma}{i}{t}
\deltaef
\Setdef{\Point{i'}{t'}\in tpoints{\sigma}}
{\Point{i}{t}\prec\Point{i'}{t'}}
$.
\newcommand{\sigmaAt}[2]{\sigma^{\Point{#1}{#2}}}
Given a super-dense timed trace
$\sigma$
over $ps$,
a formula $\phi$ over $ps$,
and
a point $\Point{i}{t}$ in $\sigma$,
we define the satisfies relation $\sigma^\Point{i}{t} \models \phi$
inductively as follows:
\begin{itemize}
\item
$\sigmaAt{i}{t} \models p$
iff $p \in ValuI{i}$,
where $p$ is an atomic proposition.
\item
$\sigmaAt{i}{t} \models {\neg \phi}$
iff
$\sigmaAt{i}{t} \models {\phi}$ does not hold.
\item
$\sigmaAt{i}{t} \models (\phi \land \psi)$
iff
$\sigmaAt{i}{t} \models \phi$ and
$\sigmaAt{i}{t} \models \psi$.
\item
$\sigmaAt{i}{t} \models (\phi \lor \psi)$
iff
$\sigmaAt{i}{t} \models \phi$ or
$\sigmaAt{i}{t} \models \psi$.
\item
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$
iff
$\exists \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t} :
({t' - t} \mathbin{\bowtie} n) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''} \prec \Point{i'}{t'}
\rightarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$
\item
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$
iff
$\forall \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}:
\big(({t'-t} \mathbin{\bowtie} n) \land
\neg(\sigmaAt{i'}{t'} \models \psi)\big)
\rightarrow
\big(\exists \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''}\prec\Point{i'}{t'}
\land
(\sigmaAt{i''}{t''} \models \phi)\big)$
\end{itemize}
For any formula $\phi$,
we abbreviate $\sigmaAt{0}{0} \models \phi$ with $\sigma \models \phi$.
\begin{example}\label{Ex:semantics}
Consider the super-dense timed trace
$\sigma =
\IStateC{0}{\emptyset}
\IStateO{0}{4}{\Set{p}}
\IStateC{4}{\Set{p}}
\IStateC{4}{\Set{q}}
\IStateC{4}{\emptyset}
\ldots$.
Now $\sigma \models {p \mathrel{\textup{\bf U}^\textup{s}}I{\le 4} q}$ as
$\sigmaAt{3}{4} \models q$
and
$\sigmaAt{i}{t} \models p$
for all $0 < i < 3$ and $0 \le t \le 4$.
As an another example,
$\sigma \models \mathop{\textup{\bf F}^\textup{s}}I{\le 3}((\mathop{\textup{\bf G}^\textup{s}}I{\le 1} p) \land (\mathop{\textup{\bf F}^\textup{s}}I{< 2}q))$
also holds because
(i) $\sigmaAt{1}{t} \models {\mathop{\textup{\bf G}^\textup{s}}I{\le 1} p}$ for all $0 \le t < 3$,
and
(ii) $\sigmaAt{1}{t} \models {\mathop{\textup{\bf F}^\textup{s}}I{< 2} q}$ for all $2 < t < 4$.
\end{example}
As illustrated in Ex.~\ref{Ex:semantics},
neither $\phi$ nor $\psi$ need to hold in the current point
in order to satisfy $\phi\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}\psi$.
Conversely,
$\phi\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n}\psi$ with $\mathbin{\triangleleft}\in\Set{{<},{\leq}}$ does not necessarily hold even if $\psi$ holds in the first state:
e.g.,
$\IStateC{0}{\Set{q}} \IStateO{0}{3}{\emptyset} ...$
does not satisfy $p \mathrel{\textup{\bf U}^\textup{s}}I{<2} q$.
As \cite{AlurFederHenzinger:JACM1996} observes,
the reason for this slightly unintuitive semantics is that it allows
expressing formulas that would not be expressible under a more intuitive semantics
in which the current point in time is also relevant for the timed until operator.
On the other hand,
expressing that $\phi$ holds from the current point in time on until $\psi$ holds can be done using the formula
$\psi \lor {(\phi \land (\phi\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}\psi))}$.
We can define the ``untimed versions'' of the temporal operators with
${\mathop{\textup{\bf F}^\textup{s}} \phi} \equiv {\mathop{\textup{\bf F}^\textup{s}}I{\ge 0} \phi}$,
${\mathop{\textup{\bf G}^\textup{s}} \phi} \equiv {\mathop{\textup{\bf G}^\textup{s}}I{\ge 0} \phi}$,
${\phi \mathrel{\textup{\bf U}^\textup{s}} \psi} \equiv {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\ge 0} \psi}$,
and
${\phi \mathrel{\textup{\bf R}^\textup{s}} \psi} \equiv {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\ge 0} \psi}$.
An easily made misconception is that the time-aspect of a timed trace is
irrelevant when evaluating ``untimed'' operators, i.e.,
that they could be evaluated on $\omega$-words obtained
when
removing intervals from a trace;
this is not the case.
In fact, even when not taking the ``only in the future'' part of the semantics,
illustrated in the previous example, into account,
considering the sets of propositions only is not sufficient.
As an example,
the formula $p \mathrel{\textup{\bf U}^\textup{s}} q$ is satisfied on
$
\IStateC{0}{\Set{p}}
\IStateO{0}{2}{\Set{p}}
\IStateC{2}{\Set{q}}
\ldots
$
but not on
$
\IStateC{0}{\Set{p}}
\IStateO{0}{2}{\Set{p}}
\IStateC{2}{\Set{p}}
\IStateO{2}{3.5}{\Set{q}}
\ldots
$.
The issue in the second trace is that
as the interval on which $q$ holds is an open one,
any point in it has a previous point at which only $q$, but not $p$, holds.
This illustrates that even for the ``untimed'' versions of the operators,
timing is relevant.
Observe that with super-dense timed traces we cannot get rid of
the timed until operator $\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}$ by using the
``timed until is redundant'' theorem of~\cite{DBLP:conf/formats/MalerNP06_long},
vital for the transducer construction presented there.
That is,
$\phi \mathrel{\textup{\bf U}^\textup{s}}I{\ge n} \psi$
is \emph{not} equivalent to
$(\mathop{\textup{\bf G}^\textup{s}}I{\le n}(\phi \mathrel{\textup{\bf U}} \psi)) \land {\mathop{\textup{\bf F}^\textup{s}}I{\ge n}\psi}$
in our setting.\footnote{Here, $\mathrel{\textup{\bf U}}$ is the non-strict until operator, i.e. $\phi\mathrel{\textup{\bf U}}\psi\deltaef\psi\lor(\phi\land(\phi\mathrel{\textup{\bf U}^\textup{s}}\psi))$}
For example,
in the trace
$\sigma =
\IStateC{0}{\Set{p}}
\IStateO{0}{2}{\Set{p}}
\IStateC{2}{\Set{p}}
\IStateC{2}{\Set{q}}
\IStateC{2}{\emptyset}
\ldots$
we have $\sigma \models {p \mathrel{\textup{\bf U}^\textup{s}}I{\ge 2} q}$ but
$\sigma \modelsNot (\mathop{\textup{\bf G}^\textup{s}}I{\le 2}(p \mathrel{\textup{\bf U}} q)) \land {\mathop{\textup{\bf F}^\textup{s}}I{\ge 2} q}$
as
$\sigmaAt{4}{2} \modelsNot {p \mathrel{\textup{\bf U}} q}$.
Likewise, the corresponding equivalences used in~\cite{AlurFederHenzinger:JACM1996} do not hold when using super-dense time, e.g. $p\mathrel{\textup{\bf U}^\textup{s}}I{\geq 2}q$ is \emph{not} equivalent to $\mathop{\textup{\bf G}^\textup{s}}I{<2} p\land\mathop{\textup{\bf G}^\textup{s}}I{\leq 2} (q \lor(p \land(p \mathrel{\textup{\bf U}^\textup{s}} p)))$ which can be demonstrated by the exact same trace.
Similarly,
it is not possible to
use the classic LTL equality
${\phi \mathrel{\textup{\bf R}} \psi} \equiv
(\mathop{\textup{\bf G}} \psi) \lor (\psi \mathrel{\textup{\bf U}} (\phi \land \psi))$
to handle timed release operator by means of the other operators in our setting:
e.g.,
when
$\sigma =
\IStateC{0}{\emptyset}
\IStateO{0}{2}{\Set{\psi}}
\IStateC{2}{\Set{\psi}}
\IStateO{2}{4}{\Set{\phi}}
\ldots$
we have
$\sigma \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\le 3} \psi}$
but
$\sigma \modelsNot \mathop{\textup{\bf G}^\textup{s}}I{\le 3} \psi$
and
$\sigma \modelsNot \psi \mathrel{\textup{\bf U}^\textup{s}}I{\le 3} (\phi \land \psi)$.
One can verify that the usual dualities hold
for the operators:
$\neg\neg\phi \equiv \phi$,
$\neg(\phi \lor \psi) \equiv
{(\neg\phi) \land (\neg\psi)}$,
$\neg(\phi \land \psi) \equiv
{(\neg\phi) \lor (\neg\psi)}$,
${\neg(\phi\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}\psi)} \equiv
{(\neg\phi)\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n}(\neg\psi)}$,
and
${\neg(\phi\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n}\psi)} \equiv
{(\neg\phi)\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}(\neg\psi)}$.
These allow us to transform a formula into
\emph{positive normal form} in which negations only appear in front of
atomic propositions.
From now on, we assume that all formulas are in positive normal form.
\subsection{Trace Refinement and Fineness}
To perform model checking of \ensuremath{\textup{MITL}_{0,\infty}}{} formulas, we do
not want the values of sub-formulas to change during open
intervals. We next formalize this and show how it can be
achieved by means of trace refinement; the definitions and
results here are extended from those in Sect.~2 of \cite{AlurFederHenzinger:JACM1996}.
\newcommand{\preceq}{\preceq}
A trace $\sigma'$ is a \emph{refinement} of a trace $\sigma$,
denoted by $\sigma' \preceq \sigma$,
if it can be obtained
by replacing each open interval $\IStateO{T_i}{T'_i}{v_i}$ in the trace $\sigma$
with a sequence of intervals
$\IStateO{T_{i,0}}{T_{i,1}}{v_i}
\IStateC{T_{i,1}}{v_i} \IStateO{T_{i,1}}{T_{i,2}}{v_i} \ldots
\IStateO{T_{i,k-1}}{T_{i,k}}{v_i}$
of $2k-1$ consecutive, non-overlapping intervals with
$k \ge 1$, $T_{i,0} = T_i$, and $T_{i,k} = T'_i$.
Naturally,
if $\phi$ is a \ensuremath{\textup{MITL}_{0,\infty}}{} formula and $\sigma'$ is a refinement of $\sigma$,
then
$\sigma' \models \phi$ iff $\sigma \models \phi$.
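Refining a trace is a purely syntactic operation: a single open interval $(T_i,T'_i)$ is cut at finitely many interior time points, and every new piece carries the same set of atomic propositions. A minimal Python sketch (using the interval representation of the earlier sketch) is given below; the sample call reproduces the refinement used in the example further below.
\begin{verbatim}
def refine_open(trace, index, cut_points):
    kind, t, u = trace[index][0]
    assert kind == '()' and all(t < c < u for c in cut_points)
    props = trace[index][1]
    ends = [t] + sorted(cut_points) + [u]
    pieces = []
    for a, b in zip(ends, ends[1:]):
        if pieces:                                # singleton between opens
            pieces.append((('[]', a), props))
        pieces.append((('()', a, b), props))
    return trace[:index] + pieces + trace[index + 1:]

sigma = [(('[]', 0.0), {'p'}), (('()', 0.0, 4.1), {'p'}),
         (('[]', 4.1), {'p'})]
print(refine_open(sigma, 1, [3.1]))
# [0,0]{p} (0,3.1){p} [3.1,3.1]{p} (3.1,4.1){p} [4.1,4.1]{p}
\end{verbatim}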
Taking an arbitrary trace $\sigma$,
it may happen that the value of a compound
sub-formula changes
within
an open interval.
To capture the desired case when this does not happen,
we
call
$\sigma$ \emph{fine for a formula $\phi$}
(or \emph{$\phi$-fine})
if
for each sub-formula $\psi$ of $\phi$
(including $\phi$ itself),
for each interval $\IntervalI{i}$ in $\sigma$,
and
for all $t, t' \in \IntervalI{i}$,
it holds that
$\sigmaSuffix{\sigma}{i}{t} \models \psi$ iff
$\sigmaSuffix{\sigma}{i}{t'} \models \psi$.
\begin{example}
The following super-dense timed trace
$\sigma =
\IStateC{0}{\Set{p}}
\IStateO{0}{4.1}{\Set{p}}
\IStateC{4.1}{\Set{p}}
\IStateC{4.1}{\Set{q}}
\IStateC{4.1}{\emptyset}
\ldots$
is not fine for $\mathop{\textup{\bf G}^\textup{s}}I{\le 1} p$ as, e.g.,
(i) $\sigmaSuffix{\sigma}{1}{t} \models \mathop{\textup{\bf G}^\textup{s}}I{\le 1} p$ for all $0 \le t < 3.1$
but
(ii) $\sigmaSuffix{\sigma}{1}{t} \modelsNot \mathop{\textup{\bf G}^\textup{s}}I{\le 1} p$ for all $3.1 \le t < 4.1$.
We can make the beginning of the trace $\mathop{\textup{\bf G}^\textup{s}}I{\le 1} p$-fine
by refining it to
$ \IStateC{0}{\Set{p}}
\IStateO{0}{3.1}{\Set{p}}
\IStateC{3.1}{\Set{p}}
\IStateO{3.1}{4.1}{\Set{p}}
\IStateC{4.1}{\Set{p}}
\ldots$.
\end{example}
By definition,
every trace $\sigma$ is fine for each atomic proposition $p \in ps$.
Furthermore, if $\sigma$ is $\phi$-fine and $\psi$-fine,
then it is also fine for
$\neg\phi$,
$\phi \land \psi$,
and
$\phi \lor \psi$.
For temporal operators $\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n}$ and $\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n}$,
we have the following lemma stating that their values can change only once
during an open interval
given the trace is fine for the sub-formulas:
\newcommand{\SplitIntervalText }{
If a trace $\sigma$ is fine for $\phi$ and $\psi$,
$i \in \mathbb{N}$,
$t,u \in \IntervalI{i}$,
${\mathbin{\triangleleft}} \in \Set{<,\le}$, and
${\mathbin{\triangleright}} \in \Set{\ge,>}$,
then
\begin{itemize}
\item
if
$\sigmaSuffix{\sigma}{i}{t} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$
and
$u \ge t$,
then
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$;
\item
if
$\sigmaSuffix{\sigma}{i}{t} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$
and $u \le t$,
then
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$;
\item
if
$\sigmaSuffix{\sigma}{i}{t} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$
and $u \le t$,
then
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$;
\item
if
$\sigmaSuffix{\sigma}{i}{t} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$
and $u \ge t$,
then
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$.
\end{itemize}}
\begin{lemma}\label{Lemma:SplitInterval}
\SplitIntervalText
\end{lemma}
Thus,
if $\sigma$ is fine for two formulas,
it can be made fine for their compound by
splitting each open interval at most once.
\newcommand{\ExistenceOfFineRefinementText}{
Let $\phi$ be a \ensuremath{\textup{MITL}_{0,\infty}}{} formula and $\sigma$ a trace.
There is a refinement $\sigma'$ of $\sigma$ that is $\phi$-fine.
Such a refinement can be obtained by splitting each open interval in
$\sigma$ into at most $2^K$ new open intervals and $2^K-1$ singletons ,
where $K$ is the number of timed until and release operators in $\phi$.
}
\begin{lemma}\label{Lemma:ExistenceOfFineRefinement}
\ExistenceOfFineRefinementText
\end{lemma}
\subsection{Timed Automata Runs as Super-Dense Timed Traces}
\label{Sect:TA2TT}
We now describe the relationship between timed automata
runs and super-dense timed traces. In our theory part, when
model checking timed automata with \ensuremath{\textup{MITL}_{0,\infty}}{}, we assume
that the atomic propositions only concern locations of the
automaton. That is, they are of form ``$@\mathrel{\textup{\bf X}}Loc{i}$'', where $\mathrel{\textup{\bf X}}Loc{i}$ is a
location in the automaton. Of course, in practice, when
compositions of timed automata with discrete local variables
are handled, the atomic propositions can be more complex.
However, we do assume that the atomic propositions do not
change their values during the time elapse steps.
Consider a run
$\pi =
\Tuple{l_0, xValuation_0} \EStep{\delta_{0}}
\Tuple{l_1, xValuation_1} \EStep{\delta_{1}} ...$
of a timed automaton $\mathcal{A}$.
For each
$\Tuple{l_i,xValuation_i}$ in
$\pi$
let $t_i = \sum_{j=0}^{i-1}\delta_{j}$ be the cumulative time spent in
the run before the state,
i.e.\ $t_i$ is ``the time when the state occurs in
$\pi$''.
Thus, at the time point $t_i$ the automaton is in the state $\Tuple{l_i,xValuation_i}$
and
we shall have $\IStateC{t_i}{\Set{@l_i}}$ in the corresponding timed trace.
The time elapse steps in the run produce the missing open
intervals:
when
$\Tuple{l_i, xValuation_i} \EStep{\delta_{i}}
\Tuple{l_{i+1}, xValuation_{i+1}}$
with $\delta_{i} > 0$ (and thus $l_i = l_{i+1}$),
then
an
open interval element
$\IStateO{t_i}{t_{i+1}}{\Set{@l_i}}$
lies
in
between
$\IStateC{t_i}{\Set{@l_i}}$ and
$\IStateC{t_{i+1}}{\Set{@l_i}}$
in the timed trace.
\begin{example}
The run
$\Tuple{\mathrel{\textup{\bf X}}Loc1,(0,0)} \EStep{3.5} \Tuple{\mathrel{\textup{\bf X}}Loc1,(3.5,3.5)} \EStep{0} \Tuple{\mathrel{\textup{\bf X}}Loc2,(3.5,0)} \EStep{0} \Tuple{\mathrel{\textup{\bf X}}Loc3,(3.5,0)} \EStep{1.1} \Tuple{\mathrel{\textup{\bf X}}Loc3,(4.6,1.1)} \ldots$
of the automaton in Figure~\ref{Fig:TA}
corresponds to the trace
{\small
$\sigma =
\IStateC{0}{\Set{@\mathrel{\textup{\bf X}}Loc1}}
\IStateO{0}{3.5}{\Set{@\mathrel{\textup{\bf X}}Loc1}}
\IStateC{3.5}{\Set{@\mathrel{\textup{\bf X}}Loc1}}
\IStateC{3.5}{\Set{@\mathrel{\textup{\bf X}}Loc2}}
\IStateC{3.5}{\Set{@\mathrel{\textup{\bf X}}Loc3}}
\IStateO{3.5}{4.6}{\Set{@\mathrel{\textup{\bf X}}Loc3}}
\ldots$
}
\end{example}
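This translation from runs to traces is easy to make executable. The following Python fragment (illustrative; a run is given as a list of pairs of a location name and the delay of its outgoing step) produces exactly the trace prefix of the example above.
\begin{verbatim}
def run_to_trace(run):
    trace, t = [], 0.0
    for location, delta in run:
        trace.append((('[]', t), {'@' + location}))
        if delta > 0:
            trace.append((('()', t, t + delta), {'@' + location}))
        t += delta
    return trace

run = [('l1', 3.5), ('l1', 0.0), ('l2', 0.0), ('l3', 1.1)]
for element in run_to_trace(run):
    print(element)
# [0,0]{@l1} (0,3.5){@l1} [3.5,3.5]{@l1} [3.5,3.5]{@l2}
# [3.5,3.5]{@l3} (3.5,4.6){@l3}
\end{verbatim}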
Recall that we will need to consider certain refinements of timed traces when
model checking with $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas.
All the refinements of a timed trace produced by a timed automata run can
be produced by other runs of the same automaton.
That is,
considering
a trace coming from a run
$\pi =
\Tuple{l_0, xValuation_0} \EStep{\delta_{0}}
\Tuple{l_1, xValuation_1} \EStep{\delta_{1}} ...$
of a timed automaton,
each
refinement
can be obtained by
considering the corresponding run $\pi'$
where each time elapse step
$\Tuple{l_i, xValuation_i} \EStep{\delta_i} \Tuple{l_{i+1}, xValuation_{i+1}}$ in $\pi$,
with $\delta_i > 0$ and $l_{i+1} = l_{i}$,
is split into a sequence
$\Tuple{l_i, xValuation_i} \EStep{\delta_{i,1}} \Tuple{l_i, xValuation_{i,1}} \EStep{\delta_{i,2}} ... \EStep{\delta_{i,k}} \Tuple{l_i, xValuation_{i,k}}$
of time elapse steps
such that $\sum_{1 \le j \le k}\delta_{i,j} = \delta_i$
(and thus $xValuation_{i,k} = xValuation_{i+1}$).
\newcommand{.62}{.62}
\section{Symbolic Encoding of Timed Traces}
\label{sect:sett}
We now describe how to \emph{symbolically} represent systems producing super-dense timed traces.
The symbolical representation intended not as a replacement for timed automata but
as a foundation for their symbolic verification, i.e. it is intended for use in the ``back-end'' of the verification tool and not as a modeling language.
After the formalism is introduced, it will be shown how timed automata can be represented in this framework.
The next section will then address the question of how to encode $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas in this framework so that they are symbolically evaluated.
Finally, in Sect.~\ref{Sect:BMC} it will be demonstrated how finite versions of
these encodings can be obtained by using region abstraction,
allowing us to perform actual symbolic model checking of $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas on timed automata.
\newcommand{\TraceOf}[1]{\mathop{\operatorname{trace}}(#1)}
\subsection{Symbolic Transition Systems with Clock-like Variables}
\label{ss:ts}
In the following, we use standard concepts of propositional and first-order logics, and assume that the formulas are interpreted modulo some background theory such as linear arithmetics
(see e.g.~\cite{Handbook:SMT} and the references therein).
Given a set of typed variables,
a valuation $v$ over the set is a function that assigns each variable in the set a value in the domain of the variable.
We use $v \models \phi$ to denote that $v$ evaluates
a quantifier-free formula $\phi$ over the set to true.
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{S}f}[1]{\mathcal{S}_{#1}}
\newcommand{s}{s}
\newcommand{sp}{s'}
\newcommand{sI}[1]{s_{#1}}
\newcommand{\widehat{\APs}}{\widehat{ps}}
\newcommand{\hat{\AP}}{\hat{p}}
A \emph{symbolic transition system with clock-like variables}, for brevity simply referred to as a transition system\ for the remainder of the paper,
over a set $ps$ of atomic propositions
is a tuple
$\Tuple{Z,X,\mathcal{I},\mathcal{INV},\mathcal{T},\mathcal{F},\widehat{\APs}}$,
where
\begin{itemize}
\item
$Z = \Set{z_1,\ldots,z_n}$ is a set of typed \emph{non-clock variables},
$ZNext = \Set{zNext_1,\ldots,zNext_n}$ being
their \emph{next-state versions},
\item
$X = \Set{x_1,\ldots,x_m}$ is a set of non-negative real-valued \emph{clock variables},
$XNext = \Set{xNext_1,\ldots,xNext_m}$ again being their
\emph{next-state versions},
\item
$\mathcal{I}$ is the \emph{initial state formula} over $Z \cup X$,
\item
$\mathcal{INV}$ is the \emph{state invariant formula} over $Z \cup X$,
\item
$\mathcal{T}$ is the \emph{transition relation formula}
over $Z \cup X \cup \Set{\delta} \cup ZNext \cup XNext$,
with a real-valued \emph{duration variable} $\delta$,
\item
$\mathcal{F}$ is a finite set of \emph{fairness formulas} over $Z$,
and
\item
$\widehat{\APs}$ associates each atomic proposition $p \in ps$
with a corresponding formula $\hat{\AP}$ over $Z$.
\end{itemize}
To ensure that the clock variables are used properly,
we require that all the atoms in all the formulas in the system
follow these rules:
(i) if a non-clock variable in $Z$ or in $ZNext$ occurs in the atom, then none of the variables in $X \cup XNext \cup \Set{\delta}$ occur in it,
and
(ii) if a variable in $X \cup XNext \cup \Set{\delta}$ occurs in it, then it is of the forms $xNext = 0$, $xNext = x + \delta$, $x \mathbin{\bowtie} n$, $x + \delta \mathbin{\bowtie} n$, or $\delta \mathbin{\bowtie} 0$ where $\mathbin{\bowtie} \in \Set{{<},{\le},{=},{\ge},{>}}$, $x,xNext\inX$ and $n \in \mathbb{N}$.
Furthermore,
for all valuations $\tau$ over $Z \cup X \cup \Set{\delta} \cup ZNext \cup XNext$ such that $\tau \models \mathcal{T}$,
it must hold that $\tau(\delta) \ge 0$ and
for each clock $x \in X$ either $\tau(xNext)=0$ or $\tau(xNext)=\tau(x)+\tau(\delta)$.
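As an illustration of these syntactic restrictions (and not of the encoding used in our implementation), the following sketch builds one step of such a transition relation with the Python API of the Z3 SMT solver, assumed to be available: the clock $x$ is either reset or advanced by the duration $\delta$, and clocks are only compared against integer constants.
\begin{verbatim}
from z3 import Real, Bool, And, Or, Implies, Solver

x, xp, delta = Real('x'), Real("x'"), Real('delta')
at_l1 = Bool('at_l1')

trans = And(delta >= 0,
            Or(xp == 0, xp == x + delta),          # clock update rule
            Implies(at_l1, x + delta < 5),         # invariant kept while delaying
            Implies(And(at_l1, xp == 0), x >= 1))  # a guarded reset

s = Solver()
s.add(x == 0, at_l1, trans)
print(s.check())    # sat, e.g. with 0 < delta < 5 and x' = x + delta
\end{verbatim}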
\newcommand{\TracesOf}[1]{\mathop{\operatorname{traces}}(#1)}
\newcommand{\mathcal{S}Run}{\tau}
A \emph{state} of
the system
now is
a valuation $s$ over $Z \cup X$ and
a \emph{run} an infinite sequence $s_0 \EStep{\delta_0} s_1 \EStep{\delta_1} s_2 \ldots$ such that
\begin{itemize}
\item
$\delta_0 = 0$
and for all $i \in \mathbb{N}$ we have
$\delta_i \ge 0$,
$s_i(x) \ge 0$ when $x \in X$,
and
${\delta_i>0} \rightarrow {\delta_{i+1}=0}$,
\item
$s_0 \models \mathcal{I}$ and $s_i \models \mathcal{INV}$ holds for all $i \in \mathbb{N}$,
\item
for all $i \in \mathbb{N}$ it holds that
$\Setdef{y\mapsto s_i(y)}{y\in Z\cup X}\cup\Set{\delta \mapsto \delta_i} \cup \Setdef{y' \mapsto s_{i+1}(y)}{y \in Z\cup X} \models \mathcal{T}$,
and
\item
for each $f \in \mathcal{F}$,
there are infinitely many states $s$ in the run for which $s \models f$ holds.
\end{itemize}
A run $\mathcal{S}Run = s_0 \EStep{\delta_0} s_1 \EStep{\delta_1} s_2 \EStep{\delta_2} \ldots$
represents the super-dense timed trace
$\TraceOf{\mathcal{S}Run} =
\Tuple{\IntervalI{0},ValuI{0}}
\Tuple{\IntervalI{1},ValuI{1}}
\Tuple{\IntervalI{2},ValuI{2}} \ldots$ over $ps$
where for each $i \in \mathbb{N}$,
\begin{itemize}
\item
$ValuI{i} = \Setdef{p \in ps}{s_i \models \hat{\AP}}$,
and
\item
letting $t_i = \sum_{j=0}^{i-1}\delta_j$,
(i)
if $\delta_i = 0$,
then $\IntervalI{i} = [t_i,t_i]$,
and
(ii)
if $\delta_i > 0$,
then $\IntervalI{i} = (t_i,t_i+\delta_i)$.
\end{itemize}
The set of all traces of a transition system\ $\mathcal{S}$ is $\TracesOf{\mathcal{S}} = \Setdef{\TraceOf{\mathcal{S}Run}}{\text{$\mathcal{S}Run$ is a run of $\mathcal{S}$}}$.
The transition system\ $\mathcal{S}$ is \emph{refinement-admitting}
if $\sigma \in \TracesOf{\mathcal{S}}$ implies $\sigma' \in \TracesOf{\mathcal{S}}$ for all the refinements $\sigma'$ of $\sigma$.
\subsection{Encoding Timed Automata Traces}
Recall the correspondence between timed automata runs and traces
discussed in Sect.~\ref{Sect:TA2TT}.
Given a timed automaton $\mathcal{A} = \Tuple{L, l_\textup{init}, clocks, E, I}$,
we can encode it as a transition system\
$\mathcal{S}f{\mathcal{A}} = \Tuple{Z,X,\mathcal{I},\mathcal{INV},\mathcal{T},\emptyset,\widehat{\APs}}$,
where\footnote{Strictly,
the atoms $\delta' = 0$ and $\delta' > 0$ are not allowed in $\mathcal{T}$;
this can be handled by adding new Boolean variables
$\underline{\delta = 0}$ and $\underline{\delta > 0}$ in $Z$,
forcing $\underline{\delta = 0} \rightarrow (\delta=0)$ and
$\underline{\delta > 0} \rightarrow (\delta>0)$ in $\mathcal{T}$,
and then
using $\underline{\delta = 0}'$ instead of $\delta' = 0$
and $\underline{\delta > 0}'$ instead of $\delta' > 0$
in the rest of $\mathcal{T}$.}
\begin{itemize}
\item
$Z=\Set{\mathit{at}}$, where $\mathit{at}$ is a variable with the domain $L$,
\item
$\mathcal{I} \deltaef (\mathit{at}=l_\textup{init}) \land \bigwedge_{x\in X}(x=0)$,
\item
$\mathcal{INV} \deltaef \bigwedge_{l\in L} \big((\mathit{at}=l) \rightarrow I(l)\big)$
\item
$\begin{array}[t]{@{}r@{ }c@{ }l}
\mathcal{T} & \deltaef & \big((\delta=0\land\delta'=0) \rightarrow
\bigvee_{\Tuple{l,\mathop{\textup{\bf G}}uard,\mathrel{\textup{\bf R}}esets,l'}\in E}
\mathit{at}{=}l \land \mathit{at}'{=}l'
\\
& & {}
\quad\quad \land \mathop{\textup{\bf G}}uard \land
(\bigwedge_{x\in\mathrel{\textup{\bf R}}esets} x'=0) \land
(\bigwedge_{x\in X\setminus\mathrel{\textup{\bf R}}esets} x'=x) \big)\\
& \land & \big((\delta>0\lor\delta'>0) \rightarrow
(\mathit{at}'{=}\mathit{at} \land \bigwedge_{x\in X}
xNext{=}x{+}\delta )\big)
\\
& \land & \big(\delta=0\lor\delta'=0\big)
\end{array}$
(Recall that $\delta$ is the special real-valued \emph{duration variable}.)
\item
$\widehat{\APs}$ associates each atomic proposition $@l$, where $l \in L$, with the formula $(\mathit{at} = l)$.
\end{itemize}
Now $\TracesOf{\mathcal{S}f{\mathcal{A}}}$ is exactly the set of super-dense timed traces corresponding to the runs of the automaton $\mathcal{A}$.
Every state of $\mathcal{S}f{\mathcal{A}}$ corresponds to a time interval in the timed trace of $\mathcal{A}$. Thus, there are three types of transitions
encoded in $\mathcal{T}$.
Firstly, a singleton -to-singleton\ transition, corresponding to a discrete transition of $\mathcal{A}$, occurs when $\delta$ and $\delta'$ are both zero.
Secondly, a singleton -to-open transition occurs when $\delta$ is zero and $\delta'$ is non-zero. On such a transition, all variables remain unchanged. Hence, the clock values correspond to the left bound of the interval.
Thirdly, on an open-to-singleton\ transition ($\delta>0$ and $\delta'=0$)
the clock variables are updated according to the length of the open interval.
Due to the ``repetition of time elapse steps'' property of timed automata discussed in Sect.~\ref{Sect:TA2TT},
the transition system\ $\mathcal{S}f{\mathcal{A}}$ is also refinement-admitting.
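To make the three transition types concrete, the following sketch (again assuming Z3's Python API, and writing $\delta'$ directly instead of the auxiliary Boolean variables of the footnote above) spells out $\mathcal{T}$ for the single edge from $\mathrel{\textup{\bf X}}Loc1$ to $\mathrel{\textup{\bf X}}Loc2$ of the automaton in Figure~\ref{Fig:TA}, modelling $\mathit{at}$ by one Boolean per location.
\begin{verbatim}
from z3 import Real, Bool, And, Or, Implies, Not, Solver

x, y, xp, yp = Real('x'), Real('y'), Real("x'"), Real("y'")
delta, deltap = Real('delta'), Real("delta'")
at_l1, at_l2 = Bool('at_l1'), Bool('at_l2')
at_l1p, at_l2p = Bool("at_l1'"), Bool("at_l2'")

discrete = And(at_l1, at_l2p, y >= 1,            # guard of the edge
               yp == 0, xp == x)                 # reset y, keep x
delay = And(at_l1p == at_l1, at_l2p == at_l2,    # location unchanged
            xp == x + delta, yp == y + delta)    # clocks advance

T = And(Implies(And(delta == 0, deltap == 0), discrete),
        Implies(Or(delta > 0, deltap > 0), delay),
        Or(delta == 0, deltap == 0))

s = Solver()
s.add(at_l1, Not(at_l2), x == 0, y == 1.5, delta == 0, deltap == 0, T)
print(s.check())                                 # sat: the edge can be taken
\end{verbatim}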
\section{Symbolic Encoding of $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas}
\label{sect:seozf}
Let $\mathcal{S} = \Tuple{Z,X,\mathcal{I},\mathcal{INV},\mathcal{T},\mathcal{F},\widehat{\APs}}$ be a transition system\ over $ps$ encoding some timed system producing super-dense timed traces.
We now augment $\mathcal{S}$ with new variables and constraints
so that $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas over $ps$ are symbolically evaluated
in the runs of the transition system s.
We say that the resulting transition system\ $\mathcal{S}f{\phi} = \langle Z\cup Z_\phi, X\cup X_\phi, \mathcal{I}\land\mathcal{I}_\phi, \mathcal{INV}, \mathcal{T}\land\mathcal{T}_\phi, \mathcal{F}\cup\mathcal{F}_\phi, \widehat{\APs}\rangle$ over $ps$
encodes $\phi$
if $Z_\phi$ includes a Boolean variable $\Enc{\psi}$
for each sub-formula $\psi$ of $\phi$ (including $\phi$ itself).
Furthermore,
we require two conditions on such encodings.
First,
we want to make sure that the encoding $\mathcal{S}f{\phi}$ is \emph{sound}
in the following senses:
\newcommand{\SoundDef}{
\begin{itemize}
\item
all the traces of $\mathcal{S}$ (i.e., projections of runs to the atomic propositions) are preserved:
$\TracesOf{\mathcal{S}f{\phi}} = \TracesOf{\mathcal{S}}$
\item
when $\phi$ holds in a state,
then it holds in the corresponding interval:
for each run $\mathcal{S}Run = s_0 s_1
\ldots$ of $\mathcal{S}f{\phi}$
with $\TraceOf{\mathcal{S}Run} = \sigma = \Tuple{\IntervalI{0},ValuI{0}} \Tuple{\IntervalI{1},ValuI{1}}
\ldots$,
and
each $i \in \mathbb{N}$,
$s_{i}(\Enc{\phi}) = \mathbf{true}$
implies
$\forall t \in \IntervalI{i} : \sigmaAt{i}{t} \models \phi$.
\end{itemize}
}
\SoundDef
For fine traces we want to faithfully capture the cases when a formula
holds on some interval.
To this end, we say that the encoding $\mathcal{S}f{\phi}$ is \emph{complete}
if for every $\phi$-fine trace $\sigma = \Tuple{\IntervalI{0},ValuI{0}} \Tuple{\IntervalI{1},ValuI{1}} \Tuple{\IntervalI{2},ValuI{2}} \ldots$
in $\TracesOf{\mathcal{S}}$,
there is a run $\mathcal{S}Run = s_0 s_1 s_2 \ldots$ in $\mathcal{S}f{\phi}$
such that
$\TraceOf{\mathcal{S}Run} = \sigma$
and
for all points $\Point{i}{t}$ in $\sigma$
it holds that
$\sigmaAt{i}{t} \models \phi$
implies
$s_{i}(\Enc{\phi}) = \mathbf{true}$.
Therefore,
our model checking task ``Does a refinement-admitting transition system\ $\mathcal{S}$ have a run corresponding to a trace $\sigma$ with $\sigma \models \phi$?''
is reduced to the problem of
deciding whether $\mathcal{S}f{\phi}$ has a run $s_0 s_1 s_2 \ldots$ with $s_0(\Enc{\phi}) = \mathbf{true}$.
\subsection{Encoding Propositional Subformulas}
Let $\mathcal{S} = \Tuple{Z,X,\mathcal{I},\mathcal{INV},\mathcal{T},\mathcal{F},\widehat{\APs}}$ be a transition system\ over $ps$.
\ifLong
For the atomic formulas $\phi$ of forms $p$ and $\negp$,
it is possible to make a transition system\ $\mathcal{S}f{\phi} = \Tuple{Z\cup\Set{\Enc{p}},X,\mathcal{I},\mathcal{INV},\mathcal{T}\land\mathcal{T}_\phi,\mathcal{F},\widehat{\APs}}$ encoding $\phi$
as follows:
\begin{itemize}
\item
if $\phi = p$,
then $\mathcal{T}_\phi = (\Enc{\phi} \leftrightarrow \hat{\AP})$.
\item
if $\phi = \negp$,
then $\mathcal{T}_\phi = (\Enc{\phi} \leftrightarrow {\neg\hat{\AP}})$.
\end{itemize}
\else
For the atomic formulas $\phi$ of forms $p$ and $\negp$,
it is possible to make a transition system\ $\mathcal{S}f{\phi} = \Tuple{Z\cup\Set{\Enc{p}},X,\mathcal{I},\mathcal{INV},\mathcal{T}\land\mathcal{T}_\phi,\mathcal{F},\widehat{\APs}}$ encoding $\phi$
by (i) defining $\mathcal{T}_\phi \deltaef (\Enc{\phi} lrightarrow \hat{\AP})$ if $\phi = p$
and
(ii)
$\mathcal{T}_\phi \deltaef (\Enc{\phi} lrightarrow {\neg\hat{\AP}})$
if
$\phi = \negp$.
\fi
Similarly,
assuming that $\phi$ is either of form $\alpha \land \beta$ or $\alpha \lor \beta$
for some $\ensuremath{\textup{MITL}_{0,\infty}}$ formulas $\alpha$ and $\beta$,
and
that $\mathcal{S}$ encodes both $\alpha$ and $\beta$,
we can make a transition system\ $\mathcal{S}f{\phi} = \Tuple{Z\cup\Set{\Enc{\phi}},X,\mathcal{I},\mathcal{INV},\mathcal{T}\land\mathcal{T}_\phi,\mathcal{F},\widehat{\APs}}$
encoding $\phi$ as follows:
\ifLong
\begin{itemize}
\item
if $\phi = {\alpha \lor \beta}$,
then $\mathcal{T}_\phi = (\Enc{\phi} lrightarrow (\Enc{\alpha} \lor \Enc{\beta}))$,
and
\item
if $\phi = {\alpha \land \beta}$,
then $\mathcal{T}_\phi = (\Enc{\phi} lrightarrow (\Enc{\alpha} \land \Enc{\beta}))$.
\end{itemize}
\else
(i)
if $\phi = {\alpha \lor \beta}$,
then $\mathcal{T}_\phi \deltaef (\Enc{\phi} lrightarrow (\Enc{\alpha} \lor \Enc{\beta}))$,
and,
(ii)
if $\phi = {\alpha \land \beta}$,
then $\mathcal{T}_\phi \deltaef (\Enc{\phi} lrightarrow (\Enc{\alpha} \land \Enc{\beta}))$.
\fi
The lemmas for the soundness and completeness of the encodings are given in
Sect.~\ref{Sect:SoundnessAndCompleteness}.
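For concreteness, the propositional constraints above can be sketched for a single step of an unrolled run, e.g.\ with the Z3 Python API (the implementation evaluated in the experiments is based on Yices instead); all variable and function names below are hypothetical.
\begin{verbatim}
from z3 import Bool, Solver, And, Or, Not, sat

def prop_constraints(i):
    # step i copies of the atomic propositions \hat{p}, \hat{q}
    p, q = Bool(f'p_{i}'), Bool(f'q_{i}')
    enc_p = Bool(f'enc_p_{i}')            # [[p]]
    enc_not_q = Bool(f'enc_not_q_{i}')    # [[~q]]
    enc_or = Bool(f'enc_or_{i}')          # [[p | ~q]]
    return And(enc_p == p,                # [[p]]    <-> \hat{p}
               enc_not_q == Not(q),       # [[~q]]   <-> ~\hat{q}
               enc_or == Or(enc_p, enc_not_q))

s = Solver()
s.add(prop_constraints(0))
assert s.check() == sat
\end{verbatim}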
\subsection{Encoding \ensuremath{\textup{MITL}_{0,\infty}} operators}
In the following sub-sections,
we present encodings for the other $\ensuremath{\textup{MITL}_{0,\infty}}$ operators.
In each encoding, we may introduce some new non-clock and clock variables
such as $c$ and $\mathit{lefto}$;
these variables are ``local'' to the encoded subformula $\psi$ and
not used elsewhere. For the sake of readability we do not subscript them (e.g.\ $c$ really means $c_\psi$).
We also introduce new transition relation constraints (i.e.\ conjuncts in $\mathcal{T}_\psi$), initial state constraints and fairness conditions.
We will use $\mathit{open}$ as a shorthand for $(\delta > 0)$.
\newcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{MUU}
\newcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{MUU}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{l \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight}
\mitlopsec{Encoding $l \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight$ and $l \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight$ with ${\mathbin{\triangleleft}} \in \Set{<,\le}$}
These operators can be expressed with simpler ones by using the following lemma (proven in the full version of this paper~\cite{extended}):
\newcommand{\mathrel{\textup{\bf U}}beqText}{
$\sigmaAt{i}{t} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$ iff
$\sigmaAt{i}{t} \models (\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}\psi) \land (\phi\mathrel{\textup{\bf U}^\textup{s}}\psi)$
for all
$i \in \mathbb{N}$,
$t \in \IntervalI{i}$,
${\mathbin{\triangleleft}} \in \Set{{<},{\leq}}$,
and $n \in \mathbb{N}$.}
\begin{lemma}\label{lem:ubeq}
\mathrel{\textup{\bf U}}beqText
\end{lemma}
Using the $ \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n}$ / $ \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n}$ duality, we can now also express ${\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$
as
$(\mathop{\textup{\bf G}^\textup{s}}I{\mathbin{\triangleleft}n}\psi) \lor (\phi\mathrel{\textup{\bf R}^\textup{s}}\psi)$.
\newcommand{\Enc{\Left}}{\Enc{l}}
\newcommand{\Enc{\Left}next}{\EncNext{l}}
\newcommand{\mathrel{\textup{\bf R}}IGHT}{\Enc{\mathrel{\textup{\bf R}}ight}}
\newcommand{\mathrel{\textup{\bf R}}IGHTnext}{\EncNext{\mathrel{\textup{\bf R}}ight}}
\mitlopsec{Encoding $l\mathrel{\textup{\bf U}^\textup{s}}\mathrel{\textup{\bf R}}ight$}
\label{s:enc.su}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left}}{\Enc{l}}
\renewcommand{\Enc{\Left}next}{\EncNext{l}}
\renewcommand{\mathrel{\textup{\bf R}}IGHT}{\Enc{\mathrel{\textup{\bf R}}ight}}
\renewcommand{\mathrel{\textup{\bf R}}IGHTnext}{\EncNext{\mathrel{\textup{\bf R}}ight}}
We encode ``untimed'' until formulas $l\mathrel{\textup{\bf U}^\textup{s}}\mathrel{\textup{\bf R}}ight$
essentially like in the traditional LTL case~\cite{BiereEtAl:LMCS2006} but must consider open intervals and singletons\ separately.
Assume $l\mathrel{\textup{\bf U}^\textup{s}}\mathrel{\textup{\bf R}}ight$ holds on the current interval.
If that interval is open,
then $l$ holds and one of the following holds:
(i) $\mathrel{\textup{\bf R}}ight$ holds on the current interval, (ii) $\mathrel{\textup{\bf R}}ight$ holds on the next interval (which is
a singleton ), or (iii) both $l$ and
$l\mathrel{\textup{\bf U}^\textup{s}}\mathrel{\textup{\bf R}}ight$ hold on the next interval. This is captured by the following
constraint:
\begin{equation}
\label{m:enc-u-open}
{\Enc{\Left\SRI{\AnyLower\Const}\Right}{\land}\mathit{open}}
rarrow
{\Enc{\Left} \land (\mathrel{\textup{\bf R}}IGHT \lor \mathrel{\textup{\bf R}}IGHTnext \lor (\Enc{\Left}next \land \Enc{\Left\SRI{\AnyLower\Const}\Right}next))}
\end{equation}
If, in contrast, the current interval is a singleton ,
then there are two possibilities:
(i) the next interval is a singleton\ and $\mathrel{\textup{\bf R}}ight$ holds,
or
(ii)
both $l$ and $l\mathrel{\textup{\bf U}^\textup{s}}\mathrel{\textup{\bf R}}ight$ hold on the next interval:
\begin{equation}
\label{m:enc-u-singular}
{\Enc{\Left\SRI{\AnyLower\Const}\Right} {\land} {\neg\mathit{open}}}
rarrow
{(\neg\mathit{open}' {\land} \mathrel{\textup{\bf R}}IGHTnext) \lor (\Enc{\Left}next {\land} \Enc{\Left\SRI{\AnyLower\Const}\Right}next)}
\end{equation}
\ifLong
Finally,
as in the traditional LTL encoding,
we must add the following fairness condition in order to avoid
the case where $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ and $\Enc{\Left}$ are $\mathbf{true}$ on all intervals starting from some point but $\mathrel{\textup{\bf R}}ight$ does not hold at any future time point:
\begin{equation}
\mathcal{F}_{l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight} = \Set{\neg\Enc{\Left\SRI{\AnyLower\Const}\Right} \lor \mathrel{\textup{\bf R}}IGHT}
\end{equation}
\else
Finally,
as in the traditional LTL encoding,
we must add a fairness condition in order to avoid
the case where $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ and $\Enc{\Left}$ are $\mathbf{true}$ on all intervals starting from some point but $\mathrel{\textup{\bf R}}ight$ does not hold at any future time point, i.e. $\mathcal{F}_{l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight} = \Set{\neg\Enc{\Left\SRI{\AnyLower\Const}\Right} \lor \mathrel{\textup{\bf R}}IGHT}$.
\fi
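For illustration, the two until constraints can be sketched for one step of an unrolled run as follows (a sketch only; the fairness condition is enforced on the loop part of the bounded encoding of Sect.~\ref{Sect:BMC}, and all names are hypothetical).
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

def until_step(i):
    open_i  = Real(f'delta_{i}') > 0        # "open" = (delta > 0)
    open_i1 = Real(f'delta_{i+1}') > 0
    u, u1 = Bool(f'enc_lUr_{i}'), Bool(f'enc_lUr_{i+1}')  # [[l U r]]
    l, l1 = Bool(f'enc_l_{i}'),   Bool(f'enc_l_{i+1}')    # [[l]]
    r, r1 = Bool(f'enc_r_{i}'),   Bool(f'enc_r_{i+1}')    # [[r]]
    open_case = Implies(And(u, open_i),
                        And(l, Or(r, r1, And(l1, u1))))
    singleton_case = Implies(And(u, Not(open_i)),
                             Or(And(Not(open_i1), r1), And(l1, u1)))
    return And(open_case, singleton_case)
\end{verbatim}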
\begin{figure}
\caption{Encoding $l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight$ (above the dashed line) and $\mathop{\textup{\bf F}^\textup{s}}I{>3}\mathrel{\textup{\bf R}}ight$ (below the dashed line) on an example trace}
\label{Fig:EncEx1}
\end{figure}
\begin{example}
Figure~\ref{Fig:EncEx1} illustrates
an evaluation of the encoding variables
on a trace (ignore the text below the dashed line for now).
Note that $\Enc{l \mathrel{\textup{\bf U}^\textup{s}} \mathrel{\textup{\bf R}}ight}$ is (correctly) evaluated to $\mathbf{true}$
on the second $[6,6]$-interval
despite $l$ not holding.
\end{example}
\mitlopsec{Encoding $\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight}}
For $\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$ to hold,
there must be a future interval at which $\mathrel{\textup{\bf R}}ight$ holds and
which can be reached without any time passing.
Thus, $\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$
is satisfied only on a
singleton\
where
the next interval is a singleton\ as well and (i)
$\mathrel{\textup{\bf R}}ight$ or (ii) $\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$ holds on the next interval:
\begin{equation}
\label{eq:f.leqz}
\Enc{\Left\SRI{\AnyLower\Const}\Right}
rarrow
{{\neg\mathit{open}} \land {\neg\mathit{open}'} \land (\mathrel{\textup{\bf R}}IGHTnext \lor \Enc{\Left\SRI{\AnyLower\Const}\Right}next)}
\end{equation}
No fairness conditions are needed
as the non-zenoness requirement always guarantees a future open interval.
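A one-step sketch of this constraint (hypothetical names, for illustration only):
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

def f_leq0_step(i):
    open_i, open_i1 = Real(f'delta_{i}') > 0, Real(f'delta_{i+1}') > 0
    f, f1 = Bool(f'enc_F0r_{i}'), Bool(f'enc_F0r_{i+1}')  # [[F_{<=0} r]]
    r1 = Bool(f'enc_r_{i+1}')             # [[r]] on the next interval
    # current and next interval must be singletons; r or the
    # obligation itself must hold on the next interval
    return Implies(f, And(Not(open_i), Not(open_i1), Or(r1, f1)))
\end{verbatim}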
\mitlopsec{Encoding $\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight$ with $n > 0$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight}}
In the encoding of $\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}$, we first add
the constraints for $\mathrel{\textup{\bf U}^\textup{s}}$
replacing $l$ by $\mathbf{true}$.
\begin{eqnarray}
{\Enc{\Left\SRI{\AnyLower\Const}\Right} \land \mathit{open}}
& rarrow &
{\mathrel{\textup{\bf R}}IGHT \lor \mathrel{\textup{\bf R}}IGHTnext \lor \Enc{\Left\SRI{\AnyLower\Const}\Right}next}
\label{eq:enc.f.ub.open}
\\
{\Enc{\Left\SRI{\AnyLower\Const}\Right} \land {\neg\mathit{open}}}
& rarrow &
{\mathrel{\textup{\bf R}}IGHTnext \lor \Enc{\Left\SRI{\AnyLower\Const}\Right}next}
\label{eq:enc.f.ub.singular}
\end{eqnarray}
Next, we observe that, for encoding the timing-related aspects, it is sufficient to remember at any point the earliest interval at which $\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight$ holds and after which $\mathrel{\textup{\bf R}}ight$ has not held yet.
If $\mathrel{\textup{\bf R}}ight$ is encountered in time for the earliest such interval, then the interval where $\mathrel{\textup{\bf R}}ight$ holds is close enough to any later interval where $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds as well.
Correspondingly,
we use a real-valued (clock-like) auxiliary variable $c$ and a boolean auxiliary variable $\mathit{lefto}$
to remember the time passed since, and the type of, the earliest interval on which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ held and after which we have not seen $\mathrel{\textup{\bf R}}IGHT$.
\ifLong
The correct values in the first interval are forced by the initial state formula
\begin{equation}
\mathcal{I}_{\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight}
\deltaef
{{c = 0} \land {\neg\mathit{lefto}}}
\label{eq:enc.f.ub.initial}
\end{equation}
\else
The correct values in the first interval are forced by the initial state formula
$\mathcal{I}_{\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n} \mathrel{\textup{\bf R}}ight}
\deltaef
{{c = 0} \land {\neg\mathit{lefto}}}$.
\fi
To update $c$ and $\mathit{lefto}$, we define the shorthand $R_\C$ to be $\mathbf{true}$
when $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds on the next interval and, in addition, either we
have not seen $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ without seeing $\mathrel{\textup{\bf R}}ight$ afterwards, or $\mathrel{\textup{\bf R}}ight$ holds on an open current or on an arbitrary next interval.
\begin{equation}
R_\C
\deltaef
(\neg\Enc{\Left\SRI{\AnyLower\Const}\Right} \lor (\mathit{open}\land\mathrel{\textup{\bf R}}IGHT)\lor\mathrel{\textup{\bf R}}IGHTnext) \land \Enc{\Left\SRI{\AnyLower\Const}\Right}next
\end{equation}
We then
(i) reset $c$ and $\mathit{lefto}$ on the next interval if $R_\C$ holds on the current interval,
and
(ii) update $c$ and leave $\mathit{lefto}$ unchanged if $R_\C$ does not hold.
\begin{eqnarray}
R_\C
& rarrow &
{c'=0} \land (\mathit{lefto}' lrightarrow \mathit{open}')
\label{eq:enc.f.ub.clk.reset}
\\
\negR_\C
& rarrow &
{c'=c+\delta} \land (\mathit{lefto}'lrightarrow\mathit{lefto})
\label{eq:enc.f.ub.clk.update}
\end{eqnarray}
We introduce a shorthand $T_\C$ (defined below)
such that $T_\C$
holds if
for each point on the interval where we reset $c$
there is a point on the \emph{next} interval that satisfies the $\mathbin{\triangleleft}n$ constraint.
We then require that $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ being $\mathbf{true}$,
and $\mathrel{\textup{\bf R}}ight$ being $\mathbf{false}$ or
the current interval being a singleton\ implies that $T_\C$ holds.
\begin{equation}
(\Enc{\Left\SRI{\AnyLower\Const}\Right} \land \neg(\mathrel{\textup{\bf R}}IGHT \land \mathit{open})) rarrow T_\C
\label{eq:enc.f.ub.timing}
\end{equation}
\ifLong
In the case of $\mathop{\textup{\bf F}^\textup{s}}I{<n}\mathrel{\textup{\bf R}}ight$, we define $T_\C$ as follows
\begin{equation}
T_\C
\deltaef
{c+\delta < n} \lor (\mathit{lefto} \land {c+\delta\leqn})
\end{equation}
and in the case of $\mathop{\textup{\bf F}^\textup{s}}I{\leqn}\mathrel{\textup{\bf R}}ight$ by
\begin{equation}
T_\C
\deltaef
{c+\delta < n} \lor ((\neg\mathit{open}'\lor\mathit{lefto})\land {c+\delta\leqn})
\end{equation}
\else
In the case of $\mathop{\textup{\bf F}^\textup{s}}I{<n}\mathrel{\textup{\bf R}}ight$, we define
$
T_\C
\deltaef
{c+\delta < n} \lor (\mathit{lefto} \land {c+\delta\leqn})
$
and in the case of $\mathop{\textup{\bf F}^\textup{s}}I{\leqn}\mathrel{\textup{\bf R}}ight$ we define
$
T_\C
\deltaef
{c+\delta < n} \lor ((\neg\mathit{open}'\lor\mathit{lefto})\land {c+\delta\leqn})
$.
\fi
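For illustration, the clock bookkeeping above can be sketched for one step as follows, here for $\mathop{\textup{\bf F}^\textup{s}}I{\leq 3}\mathrel{\textup{\bf R}}ight$; only the reset/update and timing constraints are shown, the until-style constraints and the initial-state constraint are as above, and all names are hypothetical.
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

N = 3   # the bound n of F_{<=n}

def f_ub_step(i):
    delta, delta1 = Real(f'delta_{i}'), Real(f'delta_{i+1}')
    open_i, open_i1 = delta > 0, delta1 > 0
    f, f1 = Bool(f'enc_Fr_{i}'), Bool(f'enc_Fr_{i+1}')    # [[F_{<=n} r]]
    r, r1 = Bool(f'enc_r_{i}'), Bool(f'enc_r_{i+1}')      # [[r]]
    c, c1 = Real(f'c_{i}'), Real(f'c_{i+1}')              # auxiliary clock
    lo, lo1 = Bool(f'lefto_{i}'), Bool(f'lefto_{i+1}')
    r_c = And(Or(Not(f), And(open_i, r), r1), f1)         # reset condition R_c
    t_c = Or(c + delta < N,                                # T_c for <= n
             And(Or(Not(open_i1), lo), c + delta <= N))
    return And(Implies(r_c, And(c1 == 0, lo1 == open_i1)),          # reset
               Implies(Not(r_c), And(c1 == c + delta, lo1 == lo)),  # update
               Implies(And(f, Not(And(r, open_i))), t_c))           # timing
\end{verbatim}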
\begin{example}
An evaluation of the encoding variables
is shown (below the dashed line) in Figure~\ref{Fig:EncEx1}.
Especially,
observe that $\Enc{\mathop{\textup{\bf F}^\textup{s}}I{>3}\mathrel{\textup{\bf R}}ight}$ is \emph{not} evaluated to true
on the interval $(6,9.3)$ although $\mathop{\textup{\bf F}^\textup{s}}I{>3}\mathrel{\textup{\bf R}}ight$ holds on \emph{some} points in the interval: we are interested in \emph{sound} encodings and
$\Enc{\mathop{\textup{\bf F}^\textup{s}}I{>3}\mathrel{\textup{\bf R}}ight}$ does not hold on \emph{all} the points in the interval.
\end{example}
\mitlopsec{Encoding $l\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight$ with $\mathbin{\triangleright} \in \Set{\ge,>}$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{l\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{l\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight}}
To encode $l\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight$,
we define shorthands $T_\C$ and $\hat\Right$.
$T_\C$ will later be defined so that $T_\C$ holds iff for every previous point at which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ held there is a point on the current interval that satisfies the $\mathbin{\triangleright}n$ timing constraint.
\ifLong
We, then, define $\hat\Right$ as follows:
\begin{equation}
\hat\Right
\deltaef
{\mathrel{\textup{\bf R}}IGHT \land T_\C}
\label{m:enc.u.lb.def.rhat}
\end{equation}
\else
We, then, define $
\hat\Right
\deltaef
{\mathrel{\textup{\bf R}}IGHT \land T_\C}
$.
\fi
Next, we add a boolean ``obligation'' variable $\mathit{oblig}$ to remember when we need to see $\hat\Right$ at a future point.
Whenever $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ is $\mathbf{true}$, we also require $\mathit{oblig}$ to be $\mathbf{true}$.
\begin{equation}
\label{m:enc.u.lb.out.obl}
\Enc{\Left\SRI{\AnyLower\Const}\Right}rarrow\mathit{oblig}
\end{equation}
In case $n>0$, we additionally require $\mathit{oblig}$ and $l$ to hold on the next interval.
\begin{equation}
\label{m:enc.u.lb.out.xobl}
\Enc{\Left\SRI{\AnyLower\Const}\Right}rarrow(\mathit{oblig}'\landl')
\end{equation}
Next, we add constraints similar to
those
for the
$\mathrel{\textup{\bf U}^\textup{s}}$-operator but with $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ and $\mathrel{\textup{\bf R}}IGHT$ replaced by $\mathit{oblig}$ and $\hat\Right$.
\iffalse
\begin{equation}
(\mathit{oblig}\land\mathit{open})
rarrow
(\Enc{\Left}\land(\hat\Right\lor\hat\Right'\lor(\Enc{\Left}next\land\mathit{oblig}')))
\label{m:enc.u.lb.obl.open}
\end{equation}
\begin{equation}
(\mathit{oblig}\wedge\neg\mathit{open})
rarrow
((\neg\mathit{open}'\land\hat\Right')\lor(\Enc{\Left}next\land\mathit{oblig}'))
\label{m:enc.u.lb.obl.singular}
\end{equation}
\else
\begin{eqnarray}
(\mathit{oblig}\land\mathit{open})
rarrow
(\Enc{\Left}\land(\hat\Right\lor\hat\Right'\lor(\Enc{\Left}next\land\mathit{oblig}')))
&&
\label{m:enc.u.lb.obl.open}
\\
(\mathit{oblig}\wedge\neg\mathit{open})
rarrow
((\neg\mathit{open}'\land\hat\Right')\lor(\Enc{\Left}next\land\mathit{oblig}'))
&&
\label{m:enc.u.lb.obl.singular}
\end{eqnarray}
\fi
We want to determine whether the $\mathbin{\triangleright}n$ constraint holds for \emph{all} previous points at which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds.
We therefore use a
real-valued
variable $c$ and
a boolean variable $\mathrel{\textup{\bf R}}ightOpen$
to measure the time since the most recent such
interval:
we reset $c$ to zero and use $\mathrel{\textup{\bf R}}ightOpen$ to remember the type of the \emph{current} interval whenever $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds; otherwise, we update $c$ and $\mathrel{\textup{\bf R}}ightOpen$ as before.
\begin{eqnarray}
\Enc{\Left\SRI{\AnyLower\Const}\Right}
&rarrow&
{c'=0} \land (\mathrel{\textup{\bf R}}ightOpen'lrightarrow\mathit{open})
\label{m:enc.u.lb.clk.reset}
\\
{\neg\Enc{\Left\SRI{\AnyLower\Const}\Right}}
&rarrow&
{c'=c+\delta} \land (\mathrel{\textup{\bf R}}ightOpen'lrightarrow\mathrel{\textup{\bf R}}ightOpen)
\label{m:enc.u.lb.clk.update}
\end{eqnarray}
\ifLong
Next, in case
$l\mathrel{\textup{\bf U}^\textup{s}}I{>n}\mathrel{\textup{\bf R}}ight$, we define
\begin{equation}
\label{m:enc.u.lb.def.timing.strict}
T_\C\deltaefc+\delta > n \lor (\mathrel{\textup{\bf R}}ightOpen\landc+\delta \geq n)
\end{equation}
and in case $l\mathrel{\textup{\bf U}^\textup{s}}I{\geqn}\mathrel{\textup{\bf R}}ight$
\begin{equation}
\label{m:enc.u.lb.def.timing.nonstrict}
T_\C\deltaefc+\delta > n \lor ((\mathrel{\textup{\bf R}}ightOpen\lor\neg\mathit{open})\landc+\delta \geq n)
\end{equation}
\else
Next, in case
$l\mathrel{\textup{\bf U}^\textup{s}}I{>n}\mathrel{\textup{\bf R}}ight$, we define
$T_\C\deltaefc+\delta > n \lor (\mathrel{\textup{\bf R}}ightOpen\landc+\delta \geq n)$
and in case $l\mathrel{\textup{\bf U}^\textup{s}}I{\geqn}\mathrel{\textup{\bf R}}ight$ we define $T_\C\deltaefc+\delta > n \lor ((\mathrel{\textup{\bf R}}ightOpen\lor\neg\mathit{open})\landc+\delta \geq n)$.
\fi
\ifLong
Finally,
as for the untimed $\mathrel{\textup{\bf U}^\textup{s}}$-operator,
we need a fairness condition to prevent a situation where $\mathit{oblig}$ holds globally but $\mathrel{\textup{\bf R}}ight$ never holds.
\begin{equation}
\label{eq:enc.u.lb.fairness}
\mathcal{F}_{l \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \mathrel{\textup{\bf R}}ight}
\deltaef
\Set{{\neg\mathit{oblig}} \lor \mathrel{\textup{\bf R}}IGHT}
\end{equation}
\else
Finally,
as for the untimed $\mathrel{\textup{\bf U}^\textup{s}}$-operator,
we need a fairness condition to prevent a situation where $\mathit{oblig}$ holds globally but $\mathrel{\textup{\bf R}}ight$ never holds. We define $\mathcal{F}_{l \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \mathrel{\textup{\bf R}}ight} \deltaef \Set{{\neg\mathit{oblig}} \lor \mathrel{\textup{\bf R}}IGHT}$.
\fi
Note that, here, we use $\mathrel{\textup{\bf R}}IGHT$, not $\hat\Right$. For instance, when $l\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight$ and $\mathrel{\textup{\bf R}}ight$ hold globally, there may never be a point where $T_\C$ is $\mathbf{true}$ and thus $\hat\Right$ always stays $\mathbf{false}$.
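For illustration, the constraints of this subsection can be sketched for one step as follows, here for $l\mathrel{\textup{\bf U}^\textup{s}}I{>2}\mathrel{\textup{\bf R}}ight$; the fairness condition is again enforced on the loop part of the bounded encoding, and all names are hypothetical.
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

N = 2   # the bound n of U_{>n}

def u_lb_step(i):
    delta, delta1 = Real(f'delta_{i}'), Real(f'delta_{i+1}')
    open_i, open_i1 = delta > 0, delta1 > 0
    u = Bool(f'enc_lUr_gt_{i}')                           # [[l U_{>n} r]]
    l, l1 = Bool(f'enc_l_{i}'), Bool(f'enc_l_{i+1}')
    r, r1 = Bool(f'enc_r_{i}'), Bool(f'enc_r_{i+1}')
    ob, ob1 = Bool(f'oblig_{i}'), Bool(f'oblig_{i+1}')
    c, c1 = Real(f'c_{i}'), Real(f'c_{i+1}')
    fo, fo1 = Bool(f'from_open_{i}'), Bool(f'from_open_{i+1}')
    t_c  = Or(c + delta > N, And(fo, c + delta >= N))     # T_c (strict bound)
    t_c1 = Or(c1 + delta1 > N, And(fo1, c1 + delta1 >= N))
    r_hat, r_hat1 = And(r, t_c), And(r1, t_c1)            # \hat{r}
    return And(Implies(u, ob), Implies(u, And(ob1, l1)),  # introduce obligations
               Implies(u, And(c1 == 0, fo1 == open_i)),   # measure from current
               Implies(Not(u), And(c1 == c + delta, fo1 == fo)),
               Implies(And(ob, open_i),                   # open-interval case
                       And(l, Or(r_hat, r_hat1, And(l1, ob1)))),
               Implies(And(ob, Not(open_i)),              # singleton case
                       Or(And(Not(open_i1), r_hat1), And(l1, ob1))))
\end{verbatim}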
\begin{example}
Figure~\ref{Fig:EncEx2} illustrates how the encoding variables
of $l \mathrel{\textup{\bf U}^\textup{s}}I{>3} \mathrel{\textup{\bf R}}ight$
could be evaluated on a trace.
Again,
$\Enc{l \mathrel{\textup{\bf U}^\textup{s}}I{>3} \mathrel{\textup{\bf R}}ight}$ is not true on the interval $(6,9.3)$
because $l \mathrel{\textup{\bf U}^\textup{s}}I{>3} \mathrel{\textup{\bf R}}ight$ holds only on some points of the interval, not on all of them.
\end{example}
\begin{figure}
\caption{Encoding $l \mathrel{\textup{\bf U}^\textup{s}}I{>3} \mathrel{\textup{\bf R}}ight$ on an example trace}
\label{Fig:EncEx2}
\end{figure}
\mitlopsec{Encoding $l\mathrel{\textup{\bf R}^\textup{s}}\mathrel{\textup{\bf R}}ight$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{l\mathrel{\textup{\bf R}^\textup{s}}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{l\mathrel{\textup{\bf R}^\textup{s}}\mathrel{\textup{\bf R}}ight}}
For encoding $l\mathrel{\textup{\bf R}^\textup{s}}\mathrel{\textup{\bf R}}ight$,
we
use an auxiliary boolean variable $\mathit{oblig}$.
Intuitively, $\mathit{oblig}$ being $\mathbf{true}$ means that before seeing any point at which $\mathrel{\textup{\bf R}}IGHT$ is $\mathbf{false}$, we need to see a point where $\Enc{\Left}$ is $\mathbf{true}$.
We require $\mathit{oblig}$ to hold on the current interval when $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds on an open interval and on the next interval when $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds on a singleton .
\begin{eqnarray}
(\Enc{\Left\SRI{\AnyLower\Const}\Right} \land \mathit{open}) & rarrow & \mathit{oblig}
\label{eq:r.unt.out-open}
\\
\label{eq:r.unt.out-sing}
(\Enc{\Left\SRI{\AnyLower\Const}\Right}\land\neg\mathit{open}) & rarrow & \mathit{oblig}'
\end{eqnarray}
The obligation to see $l$ before $\neg\mathrel{\textup{\bf R}}ight$ remains active
until
$l$ holds:
\begin{equation}
\mathit{oblig}
rarrow
(\Enc{\Left} \lor \mathit{oblig}')
\label{eq:r.unt.consec}
\end{equation}
As a final constraint, $\mathrel{\textup{\bf R}}ight$ needs to hold on all intervals where the obligation is $\mathbf{true}$, with the exception of open intervals on which $l$ holds, leading to
\begin{equation}
\mathit{oblig}
rarrow
((\mathit{open}\land\Enc{\Left}) \lor \mathrel{\textup{\bf R}}IGHT)
\label{eq:r.unt.obl-r}
\end{equation}
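A one-step sketch of these four constraints (hypothetical names, for illustration only):
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

def release_step(i):
    open_i = Real(f'delta_{i}') > 0
    rel = Bool(f'enc_lRr_{i}')                   # [[l R r]]
    l, r = Bool(f'enc_l_{i}'), Bool(f'enc_r_{i}')
    ob, ob1 = Bool(f'oblig_{i}'), Bool(f'oblig_{i+1}')
    return And(Implies(And(rel, open_i), ob),          # obligation now (open)
               Implies(And(rel, Not(open_i)), ob1),    # obligation next (singleton)
               Implies(ob, Or(l, ob1)),                # persists until l is seen
               Implies(ob, Or(And(open_i, l), r)))     # r meanwhile
\end{verbatim}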
\mitlopsec{Encoding $\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight}}
$\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$ trivially holds
when the current or the next interval is open.
Furthermore, $\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$ holds when both current and next interval are singletons\ and $\mathrel{\textup{\bf R}}ight$ and $\mathop{\textup{\bf G}^\textup{s}}I{\leq 0}\mathrel{\textup{\bf R}}ight$ hold on the next interval.
\begin{equation}
\Enc{\Left\SRI{\AnyLower\Const}\Right}
rarrow
(\mathit{open} \lor \mathit{open}' \lor (\mathrel{\textup{\bf R}}IGHTnext \land \Enc{\Left\SRI{\AnyLower\Const}\Right}next))
\label{eq:g.leqz}
\end{equation}
\mitlopsec{Encoding $\mathop{\textup{\bf G}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight$ with $n > 0$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{\mathop{\textup{\bf G}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{\mathop{\textup{\bf G}^\textup{s}}I{\mathbin{\triangleleft}n}\mathrel{\textup{\bf R}}ight}}
First,
we require that $\mathrel{\textup{\bf R}}ight$ holds on all
open intervals on which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds. Furthermore, we will later define a shorthand $T_\C$ to hold whenever there is an interval on which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ held sufficiently shortly in the past to still require $\mathrel{\textup{\bf R}}ight$ to hold, resulting in
\begin{equation}
((\Enc{\Left\SRI{\AnyLower\Const}\Right} \land \mathit{open}) \vee T_\C)
rarrow
\mathrel{\textup{\bf R}}IGHT
\label{eq:g.ub.timing}
\end{equation}
Like in the $\mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}}$ encoding, we use a real-valued variable $c$ and a boolean variable $\mathrel{\textup{\bf R}}ightOpen$ to measure time from the most recent interval at which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ held.
\begin{eqnarray}
\Enc{\Left\SRI{\AnyLower\Const}\Right}
&rarrow&
{c'=0} \land (\mathrel{\textup{\bf R}}ightOpen'lrightarrow\mathit{open})
\label{eq:g.ub.clk1}
\\
{\neg\Enc{\Left\SRI{\AnyLower\Const}\Right}}
&rarrow&
{c'=c+\delta} \land (\mathrel{\textup{\bf R}}ightOpen'lrightarrow\mathrel{\textup{\bf R}}ightOpen)
\label{eq:g.ub.clk2}
\end{eqnarray}
\ifLong
Now, in the case of $\mathop{\textup{\bf G}^\textup{s}}I{<n}$ we define
\begin{equation}
T_\C
\deltaef
{c<n}
\label{eq:g.ub.timing-strict}
\end{equation}
and
in the case of $\mathop{\textup{\bf G}^\textup{s}}I{\leqn}$
\begin{equation}
T_\C
\deltaef
{c<n} \lor ({c\leqn} \land {\neg\mathit{open}} \land {\neg\mathrel{\textup{\bf R}}ightOpen})
\label{eq:g.ub.timing-nonstrict}
\end{equation}
\else
Now, in the case of $\mathop{\textup{\bf G}^\textup{s}}I{<n}$ we define
$
T_\C
\deltaef
{c<n}
$
and
for $\mathop{\textup{\bf G}^\textup{s}}I{\leqn}$
we define
$
T_\C
\deltaef
{c<n} \lor ({c\leqn} \land {\neg\mathit{open}} \land {\neg\mathrel{\textup{\bf R}}ightOpen})
$.
\fi
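A one-step sketch for $\mathop{\textup{\bf G}^\textup{s}}I{<5}\mathrel{\textup{\bf R}}ight$ (hypothetical names; the $\mathop{\textup{\bf G}^\textup{s}}I{\leq n}$ variant only changes the definition of $T_\C$):
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

N = 5   # the bound n of G_{<n}

def g_ub_step(i):
    delta = Real(f'delta_{i}')
    open_i = delta > 0
    g = Bool(f'enc_Gr_{i}')                       # [[G_{<n} r]]
    r = Bool(f'enc_r_{i}')
    c, c1 = Real(f'c_{i}'), Real(f'c_{i+1}')
    fo, fo1 = Bool(f'from_open_{i}'), Bool(f'from_open_{i+1}')
    t_c = c < N                                   # T_c for the strict bound
    return And(Implies(Or(And(g, open_i), t_c), r),       # r while in range
               Implies(g, And(c1 == 0, fo1 == open_i)),   # measure from most recent [[..]]
               Implies(Not(g), And(c1 == c + delta, fo1 == fo)))
\end{verbatim}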
\mitlopsec{Encoding $l\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight$}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}}{\Enc{l\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight}}
\renewcommand{\Enc{\Left\SRI{\AnyLower\Const}\Right}next}{\EncNext{l\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}ight}}
For encoding the lower-bound release operators, we use a boolean variable $\mathit{oblig}$ and the same update rules as for the untimed $\mathrel{\textup{\bf R}^\textup{s}}$ operator.
\begin{eqnarray}
\label{eq:nr.lb.left.open}
(\Enc{\Left\SRI{\AnyLower\Const}\Right}\wedge\mathit{open}) & rarrow & \mathit{oblig} \\
\label{eq:nr.lb.left.singular}
(\Enc{\Left\SRI{\AnyLower\Const}\Right}\wedge\neg\mathit{open}) & rarrow & \mathit{oblig}' \\
\label{eq:nr.lb.obligation.update}
\mathit{oblig} & rarrow & (\Enc{\Left}\lor\mathit{oblig}')
\end{eqnarray}
We add a modified version of Constraint~\ref{eq:r.unt.obl-r} and use a shorthand $T_\C$ (defined later) to identify intervals that contain time points $\mathbin{\triangleright}n$ from a point where $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds.
\begin{equation}
\label{eq:nr.lb.timing}
(\mathit{oblig}\wedgeT_\C) rarrow ((\Enc{\Left}\wedge\mathit{open})\lor\mathrel{\textup{\bf R}}IGHT)
\end{equation}
Next, we add a constraint for intervals of length $>n$. On such an interval, $l$ or $\mathrel{\textup{\bf R}}ight$ has to hold if $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds.
\begin{equation}
\label{eq:nr.lb.first}
(\Enc{\Left\SRI{\AnyLower\Const}\Right}\land\delta>n) rarrow (\Enc{\Left}\lor\mathrel{\textup{\bf R}}IGHT)
\end{equation}
\begin{figure}
\caption{Encoding $l \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n} \mathrel{\textup{\bf R}}ight$: an example trace where the clock reset must be delayed}
\label{Fig:EncEx3}
\end{figure}
For encoding $\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}$, we use an auxiliary real-valued variable $c$ and a boolean variable $\mathit{lefto}$ to measure the time passed since
the earliest interval at which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds and whose obligation to see $l$ before $\neg\mathrel{\textup{\bf R}}ight$ is still active.
This is, in principle, similar to the $\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}$ encoding except for a special case illustrated in Figure~\ref{Fig:EncEx3}.
Here, on the fourth interval $c$ and $\mathit{lefto}$ are needed for two purposes: to measure the time passed since the second interval (which introduced a still open obligation) and to start measuring time since the current interval (which introduces a fresh obligation as $\Enc{\Left}$ holds satisfying the previous obligation).
\ifLong
We define a shorthand $\deltaReset$ to capture precisely this situation and will later delay resetting $c$ by one step whenever $\deltaReset$ holds.
\begin{equation}
\label{eq:nr.lb.delay}
\deltaReset\deltaef(\neg\mathit{open}\land\mathit{oblig}\landl\land\Enc{\Left\SRI{\AnyLower\Const}\Right})
\end{equation}
\else
We define a shorthand $\deltaReset\deltaef(\neg\mathit{open}\land\mathit{oblig}\landl\land\Enc{\Left\SRI{\AnyLower\Const}\Right})$ to capture precisely this situation and will later delay resetting $c$ by one step whenever $\deltaReset$ holds.
\fi
Otherwise, $c$ needs to be reset on the next interval if $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds on that
interval and
(i)
if there is an open obligation it is satisfied on the current interval
and
(ii) the current interval is not a singleton\ on which $\Enc{\Left\SRI{\AnyLower\Const}\Right}$ holds, i.e. does not add an obligation to the next
\iffalse
interval.
\begin{equation}
R_\C\deltaef\Enc{\Left\SRI{\AnyLower\Const}\Right}'\land(\neg\mathit{oblig}\lorl)\land(\mathit{open}\lor\neg\Enc{\Left\SRI{\AnyLower\Const}\Right})
\end{equation}
\else
interval, i.e.
$
R_\C\deltaef\Enc{\Left\SRI{\AnyLower\Const}\Right}'\land(\neg\mathit{oblig}\lorl)\land(\mathit{open}\lor\neg\Enc{\Left\SRI{\AnyLower\Const}\Right})
$.
\fi
As said before,
we delay resetting $c$ and $\mathit{lefto}$ by one interval when $\deltaReset$ holds, i.e.\ we set $c$ to 0 and $\mathit{lefto}$ to $\mathbf{false}$ on the next interval.
\begin{equation}
\label{eq:nr.lb.clk.delayed}
\deltaResetrarrow(c'=0\wedge\neg\mathit{lefto}')
\end{equation}
When $R_\C$ holds, $c$ and $\mathit{lefto}$ are reset as for the $\mathop{\textup{\bf F}^\textup{s}}I{\mathbin{\triangleleft}n}$ operator and when neither holds we update them as usual:
\begin{eqnarray}
\label{eq:nr.lb.clk.reset}
&R_\Crarrow(c'=0\wedge(\mathit{lefto}'lrightarrow\mathit{open}'))&\\
\label{eq:nr.lb.clk.update}
&(\negR_\C\land\neg\deltaReset)rarrow(c'=c+\delta\wedge(\mathit{lefto}'lrightarrow\mathit{lefto}))&
\end{eqnarray}
We set the initial values of $c$ and $\mathit{lefto}$ to correspond to measuring time from the initial
\iffalse
interval:
\begin{equation}
\label{eq:nr.lb.initial}
\mathcal{I}_{l\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}IGHT}
\deltaef
c=0\land\neg\mathit{lefto}
\end{equation}
\else
interval, i.e.
$
\mathcal{I}_{l\mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n}\mathrel{\textup{\bf R}}IGHT}
\deltaef
c=0\land\neg\mathit{lefto}
$.
\fi
Finally, we define $T_\C$ to hold precisely if there is a point on the current interval that is $\mathbin{\triangleright}n$ time units away from a point belonging to the interval at which we started measuring time.
\ifLong
In the case of $\mathrel{\textup{\bf R}^\textup{s}}I{>n}$, we define:
\begin{equation}
\label{eq:nr.lb.timing.strict}
T_\C\deltaefc+\delta>n
\end{equation}
and in case $\mathrel{\textup{\bf R}^\textup{s}}I{\geqn}$
\begin{equation}
\label{eq:nr.lb.timing.nonstrict}
T_\C\deltaefc+\delta>n\lor(\neg\mathit{lefto}\land\neg\mathit{open}\landc+\delta\geqn)
\end{equation}
\else
In the case of $\mathrel{\textup{\bf R}^\textup{s}}I{>n}$, we define
$
T_\C\deltaefc+\delta>n
$
and for $\mathrel{\textup{\bf R}^\textup{s}}I{\geqn}$ we define
$
T_\C\deltaefc+\delta>n\lor(\neg\mathit{lefto}\land\neg\mathit{open}\landc+\delta\geqn)
$.
\fi
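The delayed clock reset can be sketched as follows (only the reset/update constraints are shown; all names are hypothetical):
\begin{verbatim}
from z3 import Bool, Real, Implies, And, Or, Not

def r_lb_clock_step(i):
    delta = Real(f'delta_{i}')
    open_i, open_i1 = delta > 0, Real(f'delta_{i+1}') > 0
    rel, rel1 = Bool(f'enc_lRr_gt_{i}'), Bool(f'enc_lRr_gt_{i+1}')  # [[l R_{>n} r]]
    l = Bool(f'enc_l_{i}')                       # [[l]]
    ob = Bool(f'oblig_{i}')
    c, c1 = Real(f'c_{i}'), Real(f'c_{i+1}')
    lo, lo1 = Bool(f'lefto_{i}'), Bool(f'lefto_{i+1}')
    d_reset = And(Not(open_i), ob, l, rel)       # the special case DReset
    r_c = And(rel1, Or(Not(ob), l), Or(open_i, Not(rel)))
    return And(Implies(d_reset, And(c1 == 0, Not(lo1))),         # delayed reset
               Implies(r_c, And(c1 == 0, lo1 == open_i1)),       # ordinary reset
               Implies(And(Not(r_c), Not(d_reset)),
                       And(c1 == c + delta, lo1 == lo)))         # normal update
\end{verbatim}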
\subsection{Soundness and Completeness of the Encodings}
\label{Sect:SoundnessAndCompleteness}
The encoding just given is sound and complete in the sense defined by the following lemmas, which are proven in the full version of this paper~\cite{extended}.
\newcommand{cenfFText}{
The transition system\
$\mathcal{S}f{p}$ is a sound encoding for $p$ and
$\mathcal{S}f{\negp}$ is a sound encoding for $\negp$.
If a transition system\ $\mathcal{S}$ over $ps$ is a sound encoding of $\alpha$ and $\beta$,
then
the transition system\ $\mathcal{S}f{\mathrel{\textup{\bf {Op}}} \alpha}$ over $ps$ is a sound encoding of $\mathrel{\textup{\bf {Op}}} \alpha$ for each ${\mathrel{\textup{\bf {Op}}}} \in \Set{{\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}},{\mathop{\textup{\bf F}^\textup{s}}I{<n}},{\mathop{\textup{\bf F}^\textup{s}}I{\leqn}},{\mathop{\textup{\bf G}^\textup{s}}I{\leq0}},{\mathop{\textup{\bf G}^\textup{s}}I{<n}},{\mathop{\textup{\bf G}^\textup{s}}I{\leqn}}}$,
and
$\mathcal{S}f{\alpha \mathrel{\textup{\bf {Op}}} \beta}$
is a sound encoding of $\alpha \mathrel{\textup{\bf {Op}}} \beta$ for each ${\mathrel{\textup{\bf {Op}}}} \in \Set{{\land},{\lor},{\mathrel{\textup{\bf U}^\textup{s}}},{\mathrel{\textup{\bf U}^\textup{s}}I{\geqn}},{\mathrel{\textup{\bf U}^\textup{s}}I{>n}},{\mathrel{\textup{\bf R}^\textup{s}}},{\mathrel{\textup{\bf R}^\textup{s}}I{\geqn}},{\mathrel{\textup{\bf R}^\textup{s}}I{>n}}}$.
}
\begin{lemma}\label{Lemma:constraints_enforce_formula}
cenfFText
\end{lemma}
\newcommand{\mathop{\textup{\bf F}}enfCText}{
The transition system\
$\mathcal{S}f{p}$ is a complete encoding for $p$,
$\mathcal{S}f{\negp}$ is a complete encoding for $\negp$.
If a transition system\ $\mathcal{S}$ over $ps$ is a complete encoding of $\alpha$ and $\beta$,
then
the transition system\ $\mathcal{S}f{\mathrel{\textup{\bf {Op}}} \alpha}$ over $ps$ is a complete encoding of $\mathrel{\textup{\bf {Op}}} \alpha$ for each ${\mathrel{\textup{\bf {Op}}}} \in \Set{{\mathop{\textup{\bf F}^\textup{s}}I{\leq 0}},{\mathop{\textup{\bf F}^\textup{s}}I{<n}},{\mathop{\textup{\bf F}^\textup{s}}I{\leqn}},{\mathop{\textup{\bf G}^\textup{s}}I{\leq0}},{\mathop{\textup{\bf G}^\textup{s}}I{<n}},{\mathop{\textup{\bf G}^\textup{s}}I{\leqn}}}$,
and
$\mathcal{S}f{\alpha \mathrel{\textup{\bf {Op}}} \beta}$
is a complete encoding of $\alpha \mathrel{\textup{\bf {Op}}} \beta$ for each ${\mathrel{\textup{\bf {Op}}}} \in \Set{{\land},{\lor},{\mathrel{\textup{\bf U}^\textup{s}}},{\mathrel{\textup{\bf U}^\textup{s}}I{\geqn}},{\mathrel{\textup{\bf U}^\textup{s}}I{>n}},{\mathrel{\textup{\bf R}^\textup{s}}},{\mathrel{\textup{\bf R}^\textup{s}}I{\geqn}},{\mathrel{\textup{\bf R}^\textup{s}}I{>n}}}$.
}
\begin{lemma}\label{Lemma:formula_implies_constraints}
\mathop{\textup{\bf F}}enfCText
\end{lemma}
\section{Bounded Model Checking}
\label{Sect:BMC}
Naturally, one cannot directly handle infinite formula representations
capturing infinite runs with SMT solvers.
Thus in bounded model checking (\emph{BMC}) one considers finite representations, i.e.\ looping, lasso-shaped paths only.
We show that,
by using region abstraction~\cite{AlurDill:TCS1994},
we can indeed capture all runs that satisfy a $\ensuremath{\textup{MITL}_{0,\infty}}$ formula with
such finite representations.
For this we must assume that the domains of all the non-clock variables in $Z$ are finite.
Assume a transition system\
$\Tuple{Z,X,\mathcal{I},\mathcal{INV},\mathcal{T},\mathcal{F},\widehat{\APs}}$
over a set $ps$ of atomic propositions.
For each clock $x \in X$,
let $xMax{x}$ be the largest constant $n$
occurring in atoms of forms $x \mathbin{\bowtie} n$ and
$x + \delta \mathbin{\bowtie} n$ in $\mathcal{I}$, $\mathcal{INV}$, and $\mathcal{T}$.
Two states,
$s$ and $t$ (i.e. valuations over $Z\cupX$ as defined in Sect.~\ref{ss:ts}),
belong to the same equivalence class called \emph{region},
denoted by $s \mathrel{\textup{\bf R}}Equiv t$,
if
(i) $s(z) = t(z)$ for each non-clock variable $z \in Z$,
and
(ii) for all clocks $x, y \in X$
\begin{enumerate}
\item
either
(a) $cInt{s(x)} = cInt{t(x)}$
or
(b)
$s(x) > xMax{x}$ and
$t(x) > xMax{x}$;
\item
if $s(x) \leq xMax{x}$,
then
$frac{s(x)} = 0$ iff
$frac{t(x)} = 0$,
where $frac{i}$ denotes the fractional part of $i$;
and
\item
if
$s(x) \leq xMax{x}$ and
$s(y) \leq xMax{y}$,
then
$frac{s(x)} \leq frac{s(y)}$
iff
$frac{t(x)} \leq frac{t(y)}$.
\end{enumerate}
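For concreteness, this equivalence test can be stated directly on concrete valuations, e.g.\ as the following sketch in which the integer and fractional parts are computed with \texttt{math.floor}; the helper names are hypothetical.
\begin{verbatim}
import math

def frac(v):
    return v - math.floor(v)

def region_equivalent(s, t, nonclock_vars, clocks, cmax):
    # s, t: dicts mapping variable names to values;
    # cmax: dict mapping each clock to its largest constant
    if any(s[z] != t[z] for z in nonclock_vars):                 # condition (i)
        return False
    for x in clocks:                                             # conditions 1 and 2
        if not (math.floor(s[x]) == math.floor(t[x])
                or (s[x] > cmax[x] and t[x] > cmax[x])):
            return False
        if s[x] <= cmax[x] and (frac(s[x]) == 0) != (frac(t[x]) == 0):
            return False
    for x in clocks:                                             # condition 3
        for y in clocks:
            if s[x] <= cmax[x] and s[y] <= cmax[y]:
                if (frac(s[x]) <= frac(s[y])) != (frac(t[x]) <= frac(t[y])):
                    return False
    return True
\end{verbatim}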
Next, we will apply the bisimulation property of regions introduced in~\cite{AlurDill:TCS1994} to transition systems.
\begin{lemma}
\label{lem:sddfuiet}
Assume two states, $s$ and $t$,
such that $s \mathrel{\textup{\bf R}}Equiv t$.
It holds that
(i) $s \models \mathcal{I}$ iff $t \models \mathcal{I}$, and
(ii) $s \models \mathcal{INV}$ iff $t \models \mathcal{INV}$.
Furthermore,
if there is
a $\delta_s \in \mathbb{R}NonNeg$ and
a state $s'$
such that
${s \cup \Set{\delta \mapsto \delta_s} \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$,
then
there is a $\delta_t \in \mathbb{R}NonNeg$
and a state $t'$
such that
${t \cup \Set{\delta \mapsto \delta_t} \cup \Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$
and
$s' \mathrel{\textup{\bf R}}Equiv t'$.
\end{lemma}
Lemma~\ref{lem:sddfuiet}
is proven in the full version of this paper~\cite{extended}.
When the domains of the non-clock variables are finite, as we have assumed,
the set of equivalence classes induced by $\mathrel{\textup{\bf R}}Equiv$ is finite, too.
In this case
we can prove, in a similar fashion as the corresponding lemma in \cite{KindermannJunttilaNiemela:FORTE2012},
that all runs of a transition system\ also have corresponding runs whose
projections on the equivalences classes induced by $\mathrel{\textup{\bf R}}Equiv$ are
lasso-shaped looping runs:
\begin{lemma}
\label{lem:weiht}
Let $Val$ be the set of all valuations over $Z$ and $\mathrel{\textup{\bf R}}eg$ the set of clock regions.
If the transition system\ $\mathcal{S}$ has an arbitrary infinite run starting in some state $s_0$, then it also has a run
$\mathcal{S}Run = s_0 \EStep{\delta_0} s_1 \EStep{\delta_1} s_2 \EStep{\delta_2} \ldots$ such that
for some $i,k\in\mathbb{N}$ with $0 \leq i \le k \le (\left|X\right|+\left|\mathcal{F}\right|+2)\cdot\left|Val\right|\cdot\left|\mathrel{\textup{\bf R}}eg\right|$ and for every $j$ with $j\geqi$ we have
$s_{j} \mathrel{\textup{\bf R}}Equiv s_{j+k-i+1}$.
\end{lemma}
Intuitively, Lemma~\ref{lem:weiht} states that if $\mathcal{S}$ has a run starting in a given state, then $\mathcal{S}$
has a run starting in the same state that begins to loop \emph{through the same regions}
after a finite prefix. E.g., if $i=7$ and $k=10$, then $s_7\mathrel{\textup{\bf R}}Equivs_{11}\mathrel{\textup{\bf R}}Equivs_{15}\mathrel{\textup{\bf R}}Equivs_{19}\ldots$ and $s_8\mathrel{\textup{\bf R}}Equivs_{12}\mathrel{\textup{\bf R}}Equivs_{16}\mathrel{\textup{\bf R}}Equivs_{20}\ldots$.
In particular, Lemma~\ref{lem:weiht} implies that if we are interested in whether $\mathcal{S}$ has any run at all, it is sufficient to search for runs that are lasso-shaped under the region abstraction.
Such runs
can be captured with finite bounded model checking encodings.
Given a formula $\psi$ over $Z \cup X \cup \Set{\delta} \cup ZNext \cup XNext$ and an index $i \in \mathbb{N}$,
let $\mathit{at}Step{i}{\psi}$ be the formula over
$\Setdef{\mathit{at}Step{i}{y}}{y \in Z \cup X \cup \Set{\delta}} \cup \Setdef{\mathit{at}Step{i+1}{y}}{y \in Z \cup X}$
obtained by
replacing each variable $y \in Z \cup X \cup \Set{\delta}$
with the
variable $\mathit{at}Step{i}{y}$ and
each $y' \in ZNext \cup XNext$ with
the
variable $\mathit{at}Step{i+1}{y}$.
E.g.,
$\mathit{at}Step{3}{((xNext=x+\delta) \land \negp)}$ is
$(\mathit{at}Step{4}{x}=\mathit{at}Step{3}{x}+\mathit{at}Step{3}{\delta}) \land \neg\mathit{at}Step{3}{p}$.
Now the bounded model checking encoding for
bound $k$ is:
$$
\begin{array}{r@{\ }c@{\ }l}
\Enc{\mathcal{S},k}
&{\deltaef}&
{\mathit{at}Step{0}{\delta} = 0} \land
\mathit{at}Step{0}{\mathcal{I}} \land
\bigwedge_{0 \le j \le k} \mathit{at}Step{j}{\mathcal{INV}} \land {}
\\
&&
\bigwedge_{0 \le j < k} \mathit{at}Step{j}{\mathcal{T}}
\land
\bigwedge_{1 \le j \leq k} (\Loop{j} rarrow \SameRegion{j})
\land {}
\\
&&
\bigwedge_{0 \le j < k} ({\mathit{at}Step{j}{\delta}> 0} rarrow {\mathit{at}Step{j+1}{\delta}=0})
\land {}
\\
&&
\mathit{Fair}_k \land \mathit{NonZeno}_k
\land\bigvee_{1\lej\lek} \Loop{j}
\end{array}
$$
where
(i)
$\SameRegion{j}$ is a formula evaluating to true
if state $j-1$ and
state $k$ (i.e. the valuations of the variables with superscripts $j-1$ and $k$, respectively) are in the same region
(see \cite{KindermannJunttilaNiemela:FORTE2012} for different ways to implement this),
and
(ii)
$\mathit{Fair}_k$ and $\mathit{NonZeno}_k$ are constraints
forcing the fairness formulas to hold in the loop and
sufficient time to pass in the loop so that it can be unrolled to a non-zeno run
(again, see \cite{KindermannJunttilaNiemela:FORTE2012}).
Intuitively, the conjuncts of $\Enc{\mathcal{S},k}$ encode the following: (a) the first interval is a singleton\ and satisfies the initial constraint, (b) all intervals satisfy the invariant and all pairs of successive states the transition relation, (c) if some $\Loop{j}$ holds then state $j-1$ and state $k$ are in the same region, (d)~there are no two successive open intervals, (e) the fairness formulas are satisfied within the looping part of the trace, (f) the trace is non-zeno and (g) at least one $\Loop{j}$ is true, meaning that the trace is ``looping under region abstraction''.
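Assuming hypothetical helper functions that produce the corresponding per-step formulas, the overall structure of $\Enc{\mathcal{S},k}$ can be sketched as follows (for illustration only; this is not the implementation evaluated in the experiments).
\begin{verbatim}
from z3 import Bool, Real, And, Or, Implies

def bmc_encoding(k, init, inv, trans, same_region, fair_and_nonzeno):
    delta = [Real(f'delta_{j}') for j in range(k + 1)]
    loop  = [Bool(f'loop_{j}') for j in range(1, k + 1)]
    conj  = [delta[0] == 0, init(0)]               # (a) first interval is a singleton
    conj += [inv(j) for j in range(k + 1)]         # (b) invariant at every step
    conj += [trans(j) for j in range(k)]           # (b) transition relation
    conj += [Implies(loop[j - 1], same_region(j, k))
             for j in range(1, k + 1)]             # (c) loop closes a region
    conj += [Implies(delta[j] > 0, delta[j + 1] == 0)
             for j in range(k)]                    # (d) no two successive open intervals
    conj += [fair_and_nonzeno(k), Or(loop)]        # (e)-(g)
    return And(conj)
\end{verbatim}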
Now,
if we wish to find out whether a transition system\ $\mathcal{S}$ has a run corresponding to a trace $\sigma$ such that $\sigma \models \phi$ for a $\ensuremath{\textup{MITL}_{0,\infty}}$ formula $\phi$,
we can check whether
$\Enc{\mathcal{S}f{\phi},k} \land \mathit{at}Step{0}{\Enc{\phi}}$
is satisfiable for some $0 < k \le (\left|X\right|+\left|\mathcal{F}\right|+2)\cdot\left|Val\right|\cdot\left|\mathrel{\textup{\bf R}}eg\right|$.
This upper bound is
very
large and, in practice, much lower bounds are often used (and sufficient for finding traces). Then, however, the possibility remains that a trace exists despite none being found with the bound used.
\section{Experimental Evaluation}
We have studied the feasibility of the BMC encoding developed in this paper experimentally. We have devised a straightforward implementation of the approach following
the encoding scheme given in Sects.~\ref{sect:sett} and~\ref{sect:seozf}. With experiments on a class of models we (i) show that it is possible to develop relatively efficient implementations of the approach, (ii) demonstrate that the approach scales reasonably, and (iii) estimate the ``cost of timing'' by comparing the verification of properties using timed operators both to
the verification of \ensuremath{\textup{MITL}_{0,\infty}}\ properties that do not use timing constraints and to region-based LTL BMC~\cite{KindermannJunttilaNiemela:FORTE2012,BiereEtAl:TACAS1999}.
\begin{figure}
\caption{Experimental results}
\label{fig:exp-non}
\label{fig:exp-hold}
\end{figure}
As a model for the experimentation we used the Fischer mutual exclusion protocol
with two to 20 agents. This protocol is commonly used for the evaluation of timed verification approaches. The encoding used for the experiments is based on a model that comes with the model checker Uppaal~\cite{Uppaaltutorial:2004} which also uses super-dense time.
We checked one property that holds (``requesting state leads to waiting state eventually'') and one that does not (``there is no trace visiting the critical section and the non-critical section infinitely often'').\footnote{Here, we search for counter-examples, i.e. encode $\neg\phi$ instead of $\phi$.} Each property was checked in three variants: as an LTL property using the approach from~\cite{KindermannJunttilaNiemela:FORTE2012}, as the corresponding MITL property (only untimed operators) and with timing constraints added.
Both MITL BMC and LTL BMC were used in an incremental fashion, i.e.\ the bound is increased starting from one until a counter-example is found,
and constraints are shared between successive SMT solver calls where possible.
All experiments were run under Linux on Intel Xeon X5650 CPUs limiting memory to 4 GB and CPU time to 20 minutes. As an SMT solver, Yices~\cite{DBLP:conf/cav/DutertreM06} version 1.0.37 was used. All plots report minimum, maximum and median over 11 executions. The implementation and the benchmark used are available on the first author's website.
Figure~\ref{fig:exp-non} shows the time needed for finding a counter-example to the non-holding property.
No timeouts were encountered, even when using the timed MITL properties.
Figure~\ref{fig:exp-hold} shows the maximum bound reached within 20 minutes when checking the holding property. The bounds reached for the timed property are significantly lower than the bounds reached for the LTL property, with the untimed MITL BMC bounds lying in between.
While there is both a cost for using the MITL framework for an untimed property and an additional cost for adding timing constraints,
checking timed constraints using MITL BMC is certainly feasible.
The performance could be further improved using
well-known optimization techniques,
e.g.\ by adding support
for finite counter-examples~\cite{BiereEtAl:LMCS2006}, a technique already used in the
LTL BMC implementation employed in the experiments. When
verifying properties without timing constraints, however, using LTL
BMC is advisable not only because of the better
performance but also because a lower bound suffices to
find a trace, as open intervals are irrelevant for LTL formulas.
\section{Conclusions}
In this paper, we extend the linear time logic \ensuremath{\textup{MITL}_{0,\infty}}\ to super-dense time semantics. We devise a method to encode both a timed automaton and a \ensuremath{\textup{MITL}_{0,\infty}}\ formula as a symbolic transition system. The encoding provides a foundation for different kinds of fully symbolic verification methods. Soundness and completeness of the encoding are proven in the full version of this paper~\cite{extended}.
Furthermore, we demonstrate how the encoding can be employed for bounded model checking (BMC) using the well-known region abstraction. We have implemented the approach. An experimental evaluation of the BMC approach
indicates that a reasonably efficient implementation is feasible.
\ifAppendix
\appendix
\newenvironment{relemma}[1]{\renewcommand{\end{lemma}}{\end{lemma}}
\newenvironment{retheorem}[1]{\renewcommand{\end{theorem}}{\end{theorem}}
\subsection{Duality of until and release operators}
\begin{lemma}\label{Lemma:Duality}
For any trace $\sigma=\Tuple{\IntervalI{0},ValuI{0}},\Tuple{\IntervalI{1},ValuI{1}},\ldots$ over $ps$, \ensuremath{\textup{MITL}_{0,\infty}}\ formulas $\phi$ and $\psi$ over $ps$, $i\in\mathbb{N}$, $t\in\IntervalI{i}$ it holds that
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$
iff
$\sigmaAt{i}{t} \models \neg(\neg\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \neg\psi)$
\end{lemma}
\begin{IEEEproof}
$
\sigmaAt{i}{t} \models \neg({\neg\phi} \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} {\neg\psi}) $
if and only if (by definition) \\
$ \neg\forall \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}: \big(({t'-t} \mathbin{\bowtie} n) \land \neg(\sigmaAt{i'}{t'} \models {\neg\psi})\big)
rarrow \big(\exists \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t} : \Point{i''}{t''}\prec\Point{i'}{t'} \land (\sigmaAt{i''}{t''} \models \neg\phi)\big) $ \\
if and only if (pushing negations inside)
\\
$\exists \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t} :
\big(({t'-t} \mathbin{\bowtie} n) \land
\neg (\sigmaAt{i'}{t'} \models \neg \psi)\big)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''}\prec\Point{i'}{t'}
rarrow
\neg(\sigmaAt{i''}{t''} \models \neg\phi)\big)$
\\
if and only if (replacing
$\neg (\sigmaAt{i'}{t'} \models \neg \cdot)$
by
$(\sigmaAt{i'}{t'} \models \cdot)$)
\\
$\exists \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}:
\big(({t'-t} \mathbin{\bowtie} n)
\land
(\sigmaAt{i'}{t'} \models \psi)\big)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''}\prec\Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$
\\
if and only if (by definition)
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$.
\end{IEEEproof}
\begin{lemma}\label{Lemma:Duality2}
For any trace $\sigma=\Tuple{\IntervalI{0},ValuI{0}},\Tuple{\IntervalI{1},ValuI{1}},\ldots$ over $ps$, \ensuremath{\textup{MITL}_{0,\infty}}\ formulas $\phi$ and $\psi$ over $ps$, $i\in\mathbb{N}$, $t\in\IntervalI{i}$ it holds that
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$
iff
$\sigmaAt{i}{t} \models \neg(\neg\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \neg\psi)$
\end{lemma}
\begin{IEEEproof}
$\sigmaAt{i}{t} \models \neg(\neg\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\bowtie}n} \neg\psi)$
iff (by Lemma~\ref{Lemma:Duality})
$\sigmaAt{i}{t} \models \neg(\neg(\neg\neg\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \neg\neg\psi))$
iff (double negations)
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\bowtie}n} \psi)$.
\end{IEEEproof}
\subsection{Proof of Lemma~\ref{Lemma:SplitInterval}}
\begin{relemma}{\ref{Lemma:SplitInterval}}
\SplitIntervalText
\end{relemma}
\begin{IEEEproof}
If $\IntervalI{i}$ is a singleton , then the lemma holds trivially.
Thus, assume that $\IntervalI{i}$ is an open interval.
We have the following four cases.
\begin{itemize}
\item
Assume that $\sigmaAt{i}{t} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$.
Thus there exists a $\Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}$ such that
$({t' - t} \mathbin{\triangleleft} n) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''} \prec \Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$.
Let $u \ge t$ with $u \in \IntervalI{i}$.
Now $\LaterPoints{\sigma}{i}{u} \subseteq \LaterPoints{\sigma}{i}{t}$.
If $i' > i$ or
${i' = i} \land {u < t'}$,
then
$({t' - u} \mathbin{\triangleleft} n) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{u},
\Point{i''}{t''} \prec \Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$,
implying
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$ irrespective whether $\sigma$ is fine for $\phi$ and $\psi$ or not.
If $i' = i$ and $u \ge t'$,
then
there is a $u' > u$ with
$u' \in \IntervalI{i}$ and
${u' - u} \mathbin{\triangleleft} n$
as $\IntervalI{i}$ is an open interval.
As $\sigma$ is fine for $\psi$ and
$\sigmaAt{i}{t'} \models \psi$,
it holds that
$\sigmaAt{i}{u'} \models \psi$ as well.
As
$\forall \Point{i}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i}{t''} \prec \Point{i}{t'}
rarrow
(\sigmaAt{i}{t''} \models \phi)$,
there is at least one $t < t'' < t'$,
and
$\sigma$ is fine for $\phi$,
we have
$\forall \Point{i}{u''} \in \LaterPoints{\sigma}{i}{u},
\Point{i}{u''} \prec \Point{i}{u'}
rarrow
(\sigmaAt{i}{u''} \models \phi)$.
Therefore, $\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$.
\item
Assume that
$\sigmaAt{i}{t} \models
{\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$.
Thus there exists a $\Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}$ such that
$({t' - t} \mathbin{\triangleright} n) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''} \prec \Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$.
Let $u \le t$ with $u \in \IntervalI{i}$.
Thus ${t' - u} \mathbin{\triangleright} n$.
Because
(i)
$\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''} \prec \Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)$,
(ii)
there is at least one $t < t'' < t'$ with $t'' \in \IntervalI{i}$ as $\IntervalI{i}$ is open,
and
(iii)
$\sigma$ is fine for $\phi$,
we have
$\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{u},
\Point{i''}{t''} \prec \Point{i'}{t'}
rarrow
(\sigmaAt{i''}{t''} \models \phi)$.
Therefore, $\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf U}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$.
\item
Assume that
$\sigmaSuffix{\sigma}{i}{t} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$.
Thus
$\forall \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t},
\big(({t'-t} \mathbin{\triangleleft} n) \land
\neg(\sigmaAt{i'}{t'} \models \psi)\big)
rarrow
\big(\exists \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''}\prec\Point{i'}{t'}
\land
(\sigmaAt{i''}{t''} \models \phi)\big)$.
Let $u \le t$ with $u \in \IntervalI{i}$.
Suppose that
$({u'-u} \mathbin{\triangleleft} n) \land
\neg(\sigmaAt{j'}{u'} \models \psi)$
for some $\Point{j'}{u'} \in \LaterPoints{\sigma}{i}{u}$.
If $\Point{i}{t} \prec \Point{j'}{u'}$,
then
$\exists \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''}\prec\Point{j'}{u'}
\land
(\sigmaAt{i''}{t''} \models \phi)$.
On the other hand,
if $\Point{j'}{u'} = \Point{i}{t}$
or
$\Point{j'}{u'} \prec \Point{i}{t}$,
then
$j' = i$,
$\neg(\sigmaAt{i}{v} \models \psi)$
for all $v \in \IntervalI{i}$ as $\sigma$ is fine for $\psi$,
there is a $\Point{i}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i}{t} \prec \Point{i}{t''}
\land
(\sigmaAt{i}{t''} \models \phi)$
as $\IntervalI{i}$ is open,
$\sigmaAt{i}{v} \models \phi$
for all $v \in \IntervalI{i}$ as $\sigma$ is fine for $\phi$,
and
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleleft}n} \psi}$.
\item
Assume that
$\sigmaSuffix{\sigma}{i}{t} \models
{\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$.
Thus
$\forall \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t},
\big(({t'-t} \mathbin{\triangleright} n) \land
\neg(\sigmaAt{i'}{t'} \models \psi)\big)
rarrow
\big(\exists \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t},
\Point{i''}{t''}\prec\Point{i'}{t'}
\land
(\sigmaAt{i''}{t''} \models \phi)\big)$.
Let $u \ge t$ with $u \in \IntervalI{i}$.
Suppose that
$({u'-u} \mathbin{\triangleright} n) \land
\neg(\sigmaAt{j'}{u'} \models \psi)$
for some
$\Point{j'}{u'} \in \LaterPoints{\sigma}{i}{u}$.
As ${u'-t}\mathbin{\triangleright} n$,
there exists a
$\Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}$
such that
$\Point{i''}{t''}\prec\Point{j'}{u'}
\land
(\sigmaAt{i''}{t''} \models \phi)$.
If $\Point{i''}{t''} = \Point{i}{u}$ or $\Point{i}{u} \prec \Point{i''}{t''}$, we are done.
On the other hand,
if
$\Point{i''}{t''} \prec \Point{i}{u}$,
then
$i'' = i$,
$\sigmaAt{i}{v} \models \phi$ for all $v \in \IntervalI{i}$ as $\sigma$ is fine for $\phi$,
there exists a $\Point{i}{u''}$ such that
$\Point{i}{u} \prec \Point{i}{u''} \prec \Point{j'}{u'}$
as $\IntervalI{i}$ is open,
and
thus
$\sigmaSuffix{\sigma}{i}{u} \models {\phi \mathrel{\textup{\bf R}^\textup{s}}I{\mathbin{\triangleright}n} \psi}$.
\end{itemize}
\end{IEEEproof}
\subsection{Proof of Lemma~\ref{Lemma:ExistenceOfFineRefinement}}
\begin{relemma}{\ref{Lemma:ExistenceOfFineRefinement}}
\ExistenceOfFineRefinementText
\end{relemma}
\begin{IEEEproof}
Let $[\phi_1,...,\phi_n]$ be a list containing all the
sub-formulas of $\phi$
so that the sub-formulas of a sub-formula $\phi_i$ are listed before
$\phi_i$.
Thus $\phi_1$ is an atomic proposition and $\phi_n = \phi$.
We now construct a trace $\sigma_i$ for each $1 \le i \le n$
such that
$\sigma_i$ is fine for all sub-formulas $\phi_j$ with $1 \le j \le i$.
If $\phi_i$ is an atomic proposition or
of forms $\neg\phi_j$, $\phi_j \land \phi_k$,
or $\phi_j \lor \phi_k$ with $j,k < i$,
then $\sigma_i = \sigma_{i-1}$ is fine for $\phi_i$ as well.
If $\phi_i$ is an until or release formula
of forms
$\phi_j \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\bowtie}n}} \phi_k$ or
$\phi_j \mathrel{\textup{\bf R}^\textup{s}_{\mathbin{\bowtie}n}} \phi_k$,
then
by (i) recalling that $\sigma_{i-1}$ is fine for $\phi_j$ and $\phi_k$, and
(ii) applying Lemma~\ref{Lemma:SplitInterval},
we obtain a $\phi_i$-fine trace $\sigma_i$
by splitting each open interval in $\sigma_{i-1}$ into at most
two new open intervals and one singleton interval.
\end{IEEEproof}
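As a purely schematic illustration of the splitting step (the actual splitting point is the one produced by Lemma~\ref{Lemma:SplitInterval}; the boundary $c$ below is only an example), an open interval $(a,b)$ of $\sigma_{i-1}$ is refined as
\[
(a,b) \;\longrightarrow\; (a,c),\ [c,c],\ (c,b), \qquad a < c < b,
\]
so that the truth value of $\phi_i$ is constant on each of the three new intervals.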
\subsection{Proof of Lemma \ref{lem:ubeq}}
\begin{relemma}{\ref{lem:ubeq}}
\UbeqText
\end{relemma}
\begin{IEEEproof}
Recall that
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\bowtie}n}} \psi)$
iff
$\exists \Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t} :
({t' - t} \mathbin{\bowtie} n) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''} \prec \Point{i'}{t'}
\rightarrow
(\sigmaAt{i''}{t''} \models \phi)\big)$.
\begin{itemize}
\item
The ``$\Rightarrow$'' part.
As is easy to see from the semantics,
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\bowtie}n}} \psi)$ implies
both
(i) $\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}} \psi)$
and
(ii) $\sigmaAt{i}{t} \models (\mathbf{true} \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\bowtie}n}} \psi)$
corresponding to $\sigmaAt{i}{t} \models \mathop{\textup{\bf F}^\textup{s}_{\mathbin{\bowtie}n}}\psi$.
\item
The ``$\Leftarrow$'' part.
By the semantics,
if $\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}} \psi)$
we can pick a
$\Point{i'}{t'} \in \LaterPoints{\sigma}{i}{t}$
such that
$
({t' - t} \ge 0) \land
(\sigmaAt{i'}{t'} \models \psi)
\land
\big(\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''} \prec \Point{i'}{t'}
\rightarrow
(\sigmaAt{i''}{t''} \models \phi)\big)
$.
We have two cases now:
\begin{itemize}
\item
If ${t' - t} \mathbin{\triangleleft}n$,
then we immediately have
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\triangleleft}n}} \psi)$.
\item
Otherwise,
$\sigmaAt{i}{t} \models \mathop{\textup{\bf F}^\textup{s}_{\mathbin{\triangleleft}n}} \psi$
allows us to pick
$\Point{j'}{u'} \in \LaterPoints{\sigma}{i}{t}$
such that
$
(u'-t \mathbin{\triangleleft} n) \land
(\sigmaAt{j'}{u'} \models \psi)$.
As $u'-t \mathbin{\triangleleft} n$,
we know that $\Point{j'}{u'} \prec \Point{i'}{t'}$,
which in turn implies that
$\forall \Point{i''}{t''} \in \LaterPoints{\sigma}{i}{t}:
\Point{i''}{t''} \prec \Point{j'}{u'}
\rightarrow
(\sigmaAt{i''}{t''} \models \phi)$.
Thus we obtain
$\sigmaAt{i}{t} \models (\phi \mathrel{\textup{\bf U}^\textup{s}_{\mathbin{\triangleleft}n}} \psi)$.
\end{itemize}
\end{itemize}
\end{IEEEproof}
\input{proofs-sound}
\input{proofs-compl}
\subsection{Proof of Lemma~\ref{lem:sddfuiet}}
\begin{relemma}{\ref{lem:sddfuiet}}
Assume two states, $s$ and $t$,
such that $s \equiv_{\mathcal{R}} t$.
It holds that
(i) $s \models \mathcal{I}$ iff $t \models \mathcal{I}$, and
(ii) $s \models \mathcal{INV}$ iff $t \models \mathcal{INV}$.
Furthermore,
if there is
a $\delta_s \in \mathbb{R}_{\ge 0}$ and
a state $s'$
such that
${s \cup \Set{\delta \mapsto \delta_s} \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$,
then
there is a $\delta_t \in \mathbb{R}_{\ge 0}$
and a state $t'$
such that
${t \cup \Set{\delta \mapsto \delta_t} \cup \Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$
and
$s' \equiv_{\mathcal{R}} t'$.
\end{relemma}
\begin{IEEEproof}
As
$s \equiv_{\mathcal{R}} t$ and
the only atoms involving clock variables in $\mathcal{I}$ and $\mathcal{INV}$
are of form $x \mathbin{\bowtie} n$,
the definitions of the per-clock maximal constants and of $\equiv_{\mathcal{R}}$
directly imply that
(i) $s \models \mathcal{I}$ iff $t \models \mathcal{I}$, and
(ii) $s \models \mathcal{INV}$ iff $t \models \mathcal{INV}$.
To prove the remaining claim,
consider the state $s''$ such that
(i) $s''(x) = s(x)+\delta_s$ for each clock $x \in X$, and
(ii) $s''(z) = s'(z)$ for each non-clock $z \in Z$.
As ${s \cup \Set{\delta \mapsto \delta_s} \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$,
we have $\delta_s \ge 0$ and
for each $x \in X$ either
$s'(x) = 0$ or
$s'(x) = s(x)+\delta_s = s''(x)$.
That is,
intuitively $s''$ is obtained from $s'$
by ``unresetting'' the reset clocks.
Next,
take any $\delta_t \in \mathbb{R}$
and
state $t''$
such that
(i) $\delta_t \ge 0$,
(ii) $\delta_t = 0 \leftrightarrow \delta_s = 0$,
(iii) $t''(x) = t(x)+\delta_t$ for each clock $x \in X$,
(iv) $t''(z) = s'(z)$ for each non-clock $z \in Z$,
and
(v) $s'' \equiv_{\mathcal{R}} t''$.
Such $\delta_t$ and $t''$ exists
because $s \equiv_{\mathcal{R}} t$ and
of the fact that clock valuations in the same region have
time successors in same regions \cite{AlurDill:TCS1994}.
Let $t'$ be the state such that
(i) for each $x \in X$,
$t'(x) = 0$ if $s'(x)=0$ and
$t'(x) = t''(x) =t(x)+\delta_t$ otherwise,
and
(ii) $t'(z) = t''(z) = s'(z)$ for each non-clock $z \in Z$.
Now $s' \equiv_{\mathcal{R}} t'$.
In summary, $t'$ is intuitively a state in the region obtained by letting time pass in the same manner as when moving from $s$ to $s'$ and then resetting the same clocks.
Now we only have to show that ${t \cup \Set{\delta \mapsto \delta_t} \cup \Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}} \models \mathcal{T}$.
We do this by showing that the atoms in $\mathcal{T}$ evaluate to the same boolean value under both
$s \cup \Set{\delta \mapsto \delta_s} \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}$
and
$t \cup \Set{\delta \mapsto \delta_t} \cup \Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}$.
\begin{itemize}
\item
Case: the atom does not involve variables in $X \cup XNext \cup \Set{\delta}$.
In this case the atom evaluates to true under
$s \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}$
if and only if
it does under $t \cup \Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}$
because
$s \equiv_{\mathcal{R}} t$
and
$s' \equiv_{\mathcal{R}} t'$.
\item
Case: the atom is of form $x' = 0$.
Because $s' \equiv_{\mathcal{R}} t'$,
the atom evaluates to true under $\Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}$
if and only if
it does under $\Setdef{y' \mapsto t'(y)}{y \in {X \cup Z}}$.
\item
Case: the atom is of form $x' = x+\delta$.
We have to consider the following:
\begin{enumerate}
\item
Sub-case $s'(x) = 0$.
Thus $t'(x) = 0$ as well because $s' \equiv_{\mathcal{R}} t'$.
Now $x' = x+\delta$ evaluates to true under
$s \cup \Set{\delta \mapsto \delta_s} \cup \Setdef{y' \mapsto s'(y)}{y \in {X \cup Z}}$
if and only if
$s(x) = 0$ and $\delta_s = 0$
(as $x$ and $\delta$ always have non-negative values).
\begin{enumerate}
\item
If $s(x) = 0$ and $\delta_s = 0$,
then $t(x) = 0$ and $\delta_t = 0$ as well
because $s \equiv_{\mathcal{R}} t$ and
$s'' \equiv_{\mathcal{R}} t''$
(forcing that $s(x)+\delta_s = 0$
if and only if
$t(x)+\delta_t = 0$).
\item
If $s(x) > 0$,
then $t(x) > 0$ as $s \equiv_{\mathcal{R}} t$,
and thus
$s'(x) \neq s(x)+\delta_s$
and
$t'(x) \neq t(x)+\delta_t$.
\item
If $s(x) = 0$ and $\delta_s > 0$,
then $t(x) = 0$
as
$s \equiv_{\mathcal{R}} t$, and
$\delta_t > 0$
as $s'' \equiv_{\mathcal{R}} t''$ and
$s''(x) = s(x)+\delta_s > 0$,
implying that
$s'(x) \neq s(x)+\delta_s$
and
$t'(x) \neq t(x)+\delta_t$.
\end{enumerate}
\item
Sub-case $s'(x) > 0$.
Now also $t'(x) > 0$ as $s' \equiv_{\mathcal{R}} t'$.
As $s'(x) > 0$,
it must be that $s'(x) = s(x) + \delta_s$
because of the restriction imposed on $\mathcal{T}$.
By the construction of $t''$ and $t'$,
$t'(x) = t''(x) = t(x) + \delta_t$.
\end{enumerate}
\item
Case: the atom is of form $x \mathbin{\bowtie} n$.
Because $s \equiv_{\mathcal{R}} t$,
the atom evaluates to true under $s$
if and only if
it does under $t$.
\item
Case: the atom is of form $x + \delta \mathbin{\bowtie} n$.
By the construction of $s''$ and $t''$,
and the fact that $s'' \equiv_{\mathcal{R}} t''$,
we have that
$s''(x) = s(x) + \delta_s \mathbin{\bowtie} n$ if and only if
$t''(x) = t(x) + \delta_t \mathbin{\bowtie} n$.
\item
Case: the atom is of form $\delta \mathbin{\bowtie} 0$.
Because
$\delta_s \ge 0$,
$\delta_t \ge 0$, and
$\delta_s = 0 \leftrightarrow \delta_t = 0$,
the atom evaluates to true under $\Set{\delta \mapsto \delta_s}$
if and only if
it does under $\Set{\delta \mapsto \delta_t}$.
\end{itemize}
\end{IEEEproof}
\fi
\end{document} |
\begin{document}
\title[Square functions associated with operators]
{Weak and strong types estimates for square functions associated with operators}
\author{Mingming Cao}
\address{Mingming Cao\\
Instituto de Ciencias Matem\'aticas CSIC-UAM-UC3M-UCM\\
Con\-se\-jo Superior de Investigaciones Cient{\'\i}ficas\\
C/ Nicol\'as Cabrera, 13-15\\
E-28049 Ma\-drid, Spain} \email{mingming.cao@icmat.es}
\author{Zengyan Si}
\address{Zengyan Si\\
School of Mathematics and Information Science\\
Henan Polytechnic University\\
Jiaozuo 454000\\
People's Republic of China} \email{zengyan@hpu.edu.cn}
\author{Juan Zhang}
\address{Juan Zhang\\
School of Science\\
Beijing Forestry University\\
Beijing, 100083 \\
People's Republic of China}\email{juanzhang@bjfu.edu.cn}
\thanks{The first author acknowledges financial support from the Spanish Ministry of Science and Innovation, through the ``Severo Ochoa Programme for Centres of Excellence in R\&D'' (SEV-2015-0554) and from the Spanish National Research Council, through the ``Ayuda extraordinaria a Centros de Excelencia Severo Ochoa'' (20205CEX001).The second author was sponsored by Natural Science Foundation of Henan(No.202300410184), the Key Research Project for Higher Education in Henan Province(No.19A110017) and the Fundamental Research Funds for the Universities of Henan Province(No.NSFRF200329). The third author was supported by the Fundamental Research Funds for the Central Universities (No.BLX201926). }
\subjclass[2010]{42B20, 42B25}
\keywords{Square functions,
Bump conjectures,
Mixed weak type estimates,
Local decay estimates}
\date{November 19, 2020}
\begin{abstract}
Let $L$ be a linear operator in $L^2(\mathbb{R}^n)$ which generates a semigroup $e^{-tL}$ whose kernels $p_t(x,y)$ satisfy the
Gaussian upper bound. In this paper, we investigate several kinds of weighted norm inequalities for the conical square function $S_{\alpha,L}$ associated with an abstract operator $L$. We first establish two-weight inequalities including bump estimates, and Fefferman-Stein inequalities with arbitrary weights. We also present the local decay estimates using extrapolation techniques, and the mixed weak type estimates corresponding to Sawyer's conjecture by means of a Coifman-Fefferman inequality. Beyond that, we consider other weak type estimates including the restricted weak-type $(p, p)$ for $S_{\alpha, L}$ and the endpoint estimate for commutators of $S_{\alpha, L}$. Finally, all the conclusions aforementioned can be applied to a number of square functions associated to $L$.
\end{abstract}
\maketitle
\section{Introduction}\label{Introduction}
Given an operator $L$, the conical square function $S_{\alpha,L}$ associated with $L$ is defined by
\begin{align}\label{def:SaL}
S_{\alpha,L}(f)(x) :=\bigg(\iint_{\Gamma_{\alpha}(x)}|t^mLe^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12},
\end{align}
where $\Gamma_{\alpha}(x)=\{(y,t)\in \mathbb{R}^n \times (0,\infty):|x-y|<\alpha t\}$. In particular, if $m=2$ and $L=-\Delta$, $S_{\alpha,L}$ is the classical area integral function. The conical square functions associated with abstract operators have played an important role in harmonic analysis. For example, by means of $S_{\alpha, L}$, Auscher et al. \cite{ADM} introduced the Hardy space $H^1_L$ associated with an operator $L$. Soon after, Duong and Yan \cite{DY} showed that $\mathrm{BMO}_{L^*}$ is the dual space of the Hardy space $H^1_L$, which can be seen as a generalization of Fefferman and Stein's result on the duality between $H^1$ and $\mathrm{BMO}$ spaces. Later, the theory of function spaces associated with operators was developed and generalized to many other settings, see for example \cite{DL, HM, HLMMY, LW}. Recently, Martell and Prisuelos-Arribas \cite{MP-1} studied weighted norm inequalities for conical square functions. More specifically, they established boundedness and comparability in weighted Lebesgue spaces of different square functions defined via the heat and Poisson semigroups. Using these square functions, in \cite{MP-2} they defined several weighted Hardy spaces $H_L^1(w)$ and showed that they are one and the same, in view of the fact that the square functions are comparable in the corresponding weighted spaces. Very recently, Bui and Duong \cite{BD} introduced several types of square functions associated with operators and established sharp weighted estimates.
In this paper, we continue to investigate several kinds of weighted norm inequalities for such operators, including bump estimates, Fefferman-Stein inequalities with arbitrary weights, local decay estimates, and the mixed weak type estimates corresponding to Sawyer's conjecture. Beyond that, we consider other weak type estimates including the restricted weak-type $(p, p)$ estimates and the endpoint estimate for the corresponding commutators. For more information about the progress of these estimates, see \cite{CXY, F2, OPR, PW, MW, S83} and the references therein.
Suppose that $L$ is an operator which satisfies the following properties:
\begin{enumerate}
\item[(A1)] $L$ is a closed densely defined operator of type $\omega$ in $L^2(\mathbb{R}^n)$ with $0\leq \omega< \pi/ 2$, and it has a bounded $H_\infty$-functional calculus in $L^2(\mathbb{R}^n)$.
\item[(A2)] The kernel $p_t(x,y)$ of $e^{-tL}$ admits a Gaussian upper bound. That is, there exist $m\geq 1$ and $C,c>0$ so that for all $x,y\in \mathbb{R}^n$ and $t>0,$
$$|p_t(x,y)|\leq \frac{C}{t^{n/ m}}\exp\bigg(-\frac{|x-y|^{m/(m-1)}}{c \, t^{1/(m-1)}}\bigg).$$
\end{enumerate}
Examples of operators $L$ satisfying conditions (A1) and (A2) include: the Laplacian $-\Delta$ on $\mathbb{R}^n$, the Laplace operator on an open connected domain
with Dirichlet boundary conditions, the homogeneous sub-Laplacian on a homogeneous group, and the Schr\"{o}dinger operator $L=-\Delta+V$ with a nonnegative potential $0\leq V \in L^1_{\operatorname{loc}}(\mathbb{R}^n)$.
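For instance, in the model case $L=-\Delta$ (so that one may take $m=2$), the semigroup kernel is the Gauss--Weierstrass kernel,
\[
p_t(x,y) = (4\pi t)^{-\frac{n}{2}} \exp\Big(-\frac{|x-y|^2}{4t}\Big), \qquad x, y \in \mathbb{R}^n,\ t>0,
\]
so that (A2) holds in the sharpest possible form.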
The main results of this paper can be stated as follows. We begin with the bump estimates for $S_{\alpha, L}$.
\begin{theorem}\label{thm:Suv}
Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Given Young functions $A$ and $B$, we denote
\begin{equation*}
\|(u, v)\|_{A, B, p} :=
\begin{cases}
\sup\limits_{Q} \|u^{\frac1p}\|_{p, Q} \|v^{-\frac1p}\|_{B, Q}, & \text{if } 1<p \le 2,
\\%
\sup\limits_{Q} \|u^{\frac2p}\|_{A, Q}^{\frac12} \|v^{-\frac1p}\|_{B,Q}, & \text{if } 2<p<\infty.
\end{cases}
\end{equation*}
If the pair $(u, v)$ satisfies $\|(u, v)\|_{A, B, p}<\infty$ with $\bar{A} \in B_{(p/2)'}$ and $\bar{B} \in B_p$, then
\begin{align}\label{eq:SLp}
\|S_{\alpha,L}(f)\|_{L^p(u)} &\lesssim \alpha^{n} \mathscr{N}_p \|f\|_{L^p(v)},
\end{align}
where
\begin{equation*}
\mathscr{N}_p :=
\begin{cases}
\|(u, v)\|_{A,B,p} [\bar{B}]_{B_p}^{\frac1p}, & \text{if } 1<p \le 2,
\\%
\|(u, v)\|_{A,B,p} [\bar{A}]_{B_{(p/2)'}}^{\frac12-\frac1p} [\bar{B}]_{B_p}^{\frac1p}, & \text{if } 2<p<\infty.
\end{cases}
\end{equation*}
\end{theorem}
\begin{theorem}\label{thm:Sweak}
Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Let $A$ be a Young function. If the pair $(u, v)$ satisfies $[u, v]_{A,p'}<\infty$ with $\bar{A} \in B_{p'}$, then
\begin{align}\label{eq:S-weak}
\|S_{\alpha,L}(f)\|_{L^{p,\infty}(u)} \lesssim [u, v]_{A,p'} [\bar{A}]_{B_{p'}}^{\frac{1}{p'}} \|f\|_{L^p(v)}.
\end{align}
\end{theorem}
We next present the Fefferman-Stein inequalities with arbitrary weights.
\begin{theorem}\label{thm:FS}
Let $1<p<\infty$, and let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every weight $w$,
\begin{align}
\label{eq:SMw-1} \|S_{\alpha,L}(f)\|_{L^p(w)} &\lesssim \alpha^n \|f\|_{L^p(Mw)}, \quad 1<p \le 2,
\\%
\label{eq:SMw-2} \|S_{\alpha,L}(f)\|_{L^p(w)} &\lesssim \alpha^n \|f (Mw/w)^{\frac12}\|_{L^p(w)}, \quad 2<p<\infty,
\end{align}
where the implicit constants are independent of $w$ and $f$.
\end{theorem}
We turn to some weak type estimates for $S_{\alpha, L}$.
\begin{theorem}\label{thm:local}
Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Let $B \subset \mathbb{R}^n$ be a ball and let $f \in L^{\infty}_c(\mathbb{R}^n)$ with $\operatorname{supp} (f) \subset B$. Then there exist constants $c_1>0$ and $c_2>0$ such that
\begin{align}\label{eq:local}
| \big\{x \in B: S_{\alpha,L}(f)(x) > t M(f)(x) \big\}| \leq c_1 e^{- c_2 t^2} |B|, \quad \forall t>0.
\end{align}
\end{theorem}
\begin{theorem}\label{thm:mixed}
Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. If $u$ and $v$ satisfy
\begin{align*}
(1) \quad u \in A_{1} \text{ and } uv \in A_{\infty},\ \
\text{ or }\quad (2)\quad u \in A_1 \text{ and } v \in A_{\infty},
\end{align*}
then we have
\begin{align}
\bigg\|\frac{S_{\alpha,L}(f)}{v}\bigg\|_{L^{1,\infty}(uv)} \lesssim \| f \|_{L^1(u)}.
\end{align}
In particular, $S_{\alpha,L}$ is bounded from $L^1(u)$ to $L^{1, \infty}(u)$ for every $u \in A_{1}$.
\end{theorem}
Given $1 \le p<\infty$, $A_p^{\mathcal{R}}$ denotes the class of weights $w$ such that
\begin{align*}
[w]_{A_p^{\mathcal{R}}} := \sup_{E \subset Q} \frac{|E|}{|Q|} \bigg(\frac{w(Q)}{w(E)}\bigg)^{\frac1p}<\infty,
\end{align*}
where the supremum is taken over all cubes $Q$ and all measurable sets $E \subset Q$. This $A_p^{\mathcal{R}}$ class was introduced in \cite{KT} to characterize the restricted weak-type $(p, p)$ of the Hardy-Littlewood maximal operator $M$ as follows:
\begin{align}\label{eq:ME}
\|M\mathbf{1}_E\|_{L^{p,\infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}} w(E)^{\frac1p}.
\end{align}
We should mention that $A_p \subsetneq A_p^{\mathcal{R}}$ for any $1<p<\infty$.
\begin{theorem}\label{thm:RW}
Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every $2<p<\infty$, for every $w \in A_p^{\mathcal{R}}$, and for every measurable set $E \subset \mathbb{R}^n$,
\begin{align}\label{eq:SLE}
\|S_{\alpha, L}(\mathbf{1}_E)\|_{L^{p,\infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}}^{1+\frac{p}{2}} w(E)^{\frac1p},
\end{align}
where the implicit constants are independent of $w$ and $E$.
\end{theorem}
Finally, we obtain the endpoint estimate for commutators of $S_{\alpha, L}$ as follows. Given an operator $T$ and a measurable function $b$, we define, whenever it makes sense, the commutator by
\begin{align*}
C_{b}(T)(f)(x) := T((b(x)-b(\cdot))f(\cdot))(x).
\end{align*}
\begin{theorem}\label{thm:SbA1}
Let $S_{\alpha, L}$ be defined in \eqref{def:SaL} with $\alpha \ge 1$ and $L$ satisfying {\rm (A1)} and {\rm (A2)}. Then for every $w \in A_1$,
\begin{equation}
w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>t\}) \lesssim \int_{\mathbb{R}^n} \Phi \Big(\frac{|f(x)|}{t}\Big) w(x) dx, \quad\forall t>0,
\end{equation}
where $\Phi(t)=t(1+\log^{+}t)$.
\end{theorem}
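We note in passing a standard property of the function $\Phi(t)=t(1+\log^{+}t)$ appearing in Theorem \ref{thm:SbA1}, which is, up to equivalence, the Young function of the space $L\log L$: since $\log^{+}(st) \le \log^{+}s + \log^{+}t$, it is submultiplicative,
\[
\Phi(st) \le \Phi(s)\,\Phi(t), \qquad s, t>0,
\]
a property that is commonly exploited in endpoint estimates for commutators.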
\section{Applications}\label{sec:app}
The goal of this section is to give some applications of Theorems \ref{thm:Suv}--\ref{thm:SbA1}. To this end, we introduce some new operators. Associated with $L$ introduced in Section \ref{Introduction}, we can also define the square functions $g_{L}$ and $g^*_{\lambda,L}$ ($\lambda>0$) as follows:
\begin{align*}
g_{L}(f)(x) &:=\bigg(\int_0^\infty |t^m Le^{-t^m L}f(x)|^2 \frac{dt}{t}\bigg)^{\frac12},
\\%
g^*_{\lambda,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n}\bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |t^mLe^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}.
\end{align*}
If $L$ satisfies (A1) and (A2), we have the following estimates (cf. \cite[p. 891]{BD}):
\begin{align}
\label{eq:gL-1} g_{L}(f)(x) &\lesssim g^*_{\lambda, L}(f)(x), \quad x \in \mathbb{R}^n,
\\
\label{eq:gL-2} g^*_{\lambda,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k, L}f(x),\quad x \in \mathbb{R}^n,
\end{align}
whenever $\lambda>2$. By \eqref{eq:gL-1}, \eqref{eq:gL-2} and Theorems \ref{thm:Suv}--\ref{thm:SbA1}, we conclude the following:
\begin{theorem}\label{thm:app-1}
Let $L$ satisfy {\rm (A1)} and {\rm (A2)}. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} are also true for $g_L$ and $g^*_{\lambda,L}$, whenever $\lambda>2$.
\end{theorem}
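For the reader's convenience, we indicate where the factor $2^{-k\lambda n/2}$ in \eqref{eq:gL-2} comes from; the same computation yields \eqref{eq:gDL-2} and \eqref{eq:gpsiL-2} below. Splitting the integral defining $g^*_{\lambda,L}(f)(x)^2$ over the cone $|x-y|<t$ and the regions $2^{k-1}t \le |x-y|<2^k t$, $k \ge 1$, one uses
\[
\bigg(\frac{t}{t+|x-y|}\bigg)^{n\lambda} \lesssim 2^{-kn\lambda}
\qquad \text{whenever } 2^{k-1}t \le |x-y| < 2^k t,
\]
and each such region is contained in $\Gamma_{2^k}(x)$, so the corresponding piece is controlled by $2^{-kn\lambda} S_{2^k,L}(f)(x)^2$. Summing in $k$ and taking square roots gives \eqref{eq:gL-2}.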
Next, we introduce a class of square functions associated to $L$ and $D$, where $D$ is an operator which plays the role of the directional derivative or gradient operator. Assume that $m$ is a positive even integer. Let $D$ be a densely defined linear operator on $L^2(\mathbb{R}^n)$ which possesses the following properties:
\begin{enumerate}
\item[(D1)] $D^{m/ 2}L^{-1/ 2}$ is bounded on $L^2(\mathbb{R}^n)$;
\item[(D2)] there exist $c_1, c_2 > 0$ such that
$$|D^{m/ 2}p_t(x,y)|\leq \frac{c_1}{\sqrt{t}|B(x,t^{1/ m})|}\exp\bigg(-\frac{|x-y|^{m/(m-1)}}{c_2 \, t^{1/(m-1)}}\bigg).$$
\end{enumerate}
Given $\alpha \ge 1$ and $\lambda>2$, we define the following square functions associated to $L$ and $D$:
\begin{align*}
g_{D, L}(f)(x) &:=\bigg(\int_0^\infty |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^m L}f(x)|^2 \frac{dt}{t}\bigg)^{\frac12},
\\%
S_{\alpha,D, L}(f)(x) &:=\bigg(\iint_{\Gamma_{\alpha}(x)} |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12},
\\%
g^*_{\lambda,D,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n}\bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |t^{\frac{m}{2}} D^{\frac{m}{2}}e^{-t^mL}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}.
\end{align*}
It was proved in \cite[p.~895]{BD} that
\begin{align}\label{eq:gDL-1}
g_{D, L}(f)(x) \lesssim g^*_{\lambda, L}(f)(x), \quad x \in \mathbb{R}^n \text{ and } \lambda>2.
\end{align}
On the other hand, we note that $S_{\alpha,D, L}$ has the same properties as $S_{\alpha,L}$ and
\begin{align}\label{eq:gDL-2}
g^*_{\lambda,D,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k, D, L}(f)(x),\quad x \in \mathbb{R}^n \text{ and } \lambda>2.
\end{align}
Then, \eqref{eq:gDL-1}, \eqref{eq:gDL-2} and Theorems \ref{thm:Suv}--\ref{thm:SbA1} give the following.
\begin{theorem}\label{thm:app-2}
Let $L$ satisfy {\rm (A1)} and {\rm (A2)} and $D$ satisfy {\rm (D1)} and {\rm (D2)}. Let $\alpha\geq 1$ and $\lambda>2$. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} also hold for $g_{D,L}$, $S_{\alpha,D, L}$ and $g^*_{\lambda,D,L}$.
\end{theorem}
Finally, we define a class of more general square functions. Assume that $L$ is a nonnegative self-adjoint operator in $L^2(\mathbb{R}^n)$ which satisfies (A2). Denote by $E_L (\lambda)$ the spectral decomposition of $L$. Then by spectral theory, for any bounded Borel function $F : [0,\infty)\rightarrow \mathbb{C}$ we can define
\[
F(L)=\int_0^\infty F(\lambda)dE_L(\lambda)
\]
as a bounded operator on $L^2(\mathbb{R}^n)$.
Let $\psi$ be an even real-valued function in the Schwartz space $\mathcal{S}(\mathbb{R})$ such that $\int_0^\infty \psi^2(s)\frac{ds}{s}<\infty$. Given $\alpha \ge 1$ and $\lambda>2$, we now consider the following square functions:
\begin{align*}
g_{\psi, L}(f)(x) &:=\bigg(\int_0^\infty |\psi(t^\frac{m}{2}\sqrt{L})f(x)|^2 \frac{dt}{t}\bigg)^{\frac12},
\\%
S_{\alpha,\psi, L}(f)(x) &:=\bigg(\iint_{\Gamma_{\alpha}(x)} |\psi(t^\frac{m}{2}\sqrt{L})f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12},
\\%
g^*_{\lambda,\psi,L}(f)(x) &:=\bigg(\int_0^\infty\int_{\mathbb{R}^n} \bigg( \frac{t}{t+|x-y|}\bigg)^{n\lambda} |\psi(t^\frac{m}{2}\sqrt{L})f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{\frac12}.
\end{align*}
Observe that for any $N>0$,
\begin{align}\label{eq:psiL}
|\psi(t^{m/2} \sqrt{L})(x, y)| \le C_N \frac{1}{t^n} \bigg(1+\frac{|x-y|}{t}\bigg)^{-N},\quad t>0, \, x, y \in \mathbb{R}^n.
\end{align}
Using \eqref{eq:psiL} and the argument for $S_{\alpha,L}$, we obtain that the estimates in Section \ref{Introduction} are also true for $S_{\alpha,\psi, L}$. Additionally, for any $\lambda>2$,
\begin{align}
\label{eq:gpsiL-1} g_{\psi, L}f(x) &\lesssim g^*_{\lambda, \varphi, L}(f)(x) + g^*_{\lambda,\psi, L}f(x), \quad x \in \mathbb{R}^n,
\\
\label{eq:gpsiL-2} g^*_{\lambda,\psi,L}(f)(x) &\lesssim \sum_{k=0}^\infty 2^{-k\lambda n/ 2}S_{2^k,\psi, L}(f)(x),\quad x \in \mathbb{R}^n,
\end{align}
where $\varphi \in \mathcal{S}(\mathbb{R})$ is a fixed function supported in $[2^{-m/2}, 2^{m/2}]$. The proof of \eqref{eq:gpsiL-1} is given in \cite{BD}, while the proof of \eqref{eq:gpsiL-2} proceeds as before. Together with Theorems \ref{thm:Suv}--\ref{thm:SbA1}, these estimates imply the conclusions as follows.
\begin{theorem}\label{thm:app-3}
Let $L$ be a nonnegative self-adjoint operator in $L^2(\mathbb{R}^n)$ satisfying {\rm (A2)}. Let $\alpha\geq 1$ and $\lambda>2$. Then Theorems \ref{thm:Suv}--\ref{thm:SbA1} are true for $g_{\psi, L}$, $S_{\alpha,\psi, L}$ and $g^*_{\lambda,\psi,L}$.
\end{theorem}
\section{Preliminaries}\label{sec:pre}
\subsection{Muckenhoupt class}
By a weight $w$ we mean a nonnegative locally integrable function on $\mathbb{R}^n$. The weight $w$ is said to belong to the Muckenhoupt class $A_p$, $1 < p<\infty$, if
\[
[w]_{A_p} :=\sup_Q \bigg(\fint_{Q}w\, dx \bigg) \bigg(\fint_{Q} w^{-\frac{1}{p-1}}dx\bigg)^{p-1}<\infty,
\]
where the supremum is taken over all cubes in $\mathbb{R}^n$.
\subsection{Dyadic cubes}
Denote by $\ell(Q)$ the sidelength of the cube $Q$. Given a cube $Q_0 \subset \mathbb{R}^n$, let $\mathcal{D}(Q_0)$ denote the set of all dyadic cubes with respect to $Q_0$, that is, the cubes obtained by repeated subdivision of $Q_0$ and each of its descendants into $2^n$ congruent subcubes.
\begin{definition}
A collection $\mathcal{D}$ of cubes is said to be a dyadic grid if it satisfies
\begin{enumerate}
\item [(1)] For any $Q \in \mathcal{D}$, $\ell(Q) = 2^k$ for some $k \in \mathbb{Z}$.
\item [(2)] For any $Q,Q' \in \mathcal{D}$, $Q \cap Q' \in \{Q,Q',\emptyset\}$.
\item [(3)] The family $\mathcal{D}_k=\{Q \in \mathcal{D}: \ell(Q)=2^k\}$ forms a partition of $\mathbb{R}^n$ for any $k \in \mathbb{Z}$.
\end{enumerate}
\end{definition}
\begin{definition}
A subset $\mathcal{S}$ of a dyadic grid is said to be $\eta$-sparse, $0<\eta<1$, if for every $Q \in \mathcal{S}$, there exists a measurable set $E_Q \subset Q$ such that $|E_Q| \geq \eta |Q|$, and the sets $\{E_Q\}_{Q \in \mathcal{S}}$ are pairwise disjoint.
\end{definition}
By a median value of a measurable function $f$ on a cube $Q$ we mean a possibly non-unique real number $m_f (Q)$ such that
\[
\max \big\{|\{x \in Q : f(x) > m_f(Q) \}|,
|\{x \in Q : f(x) < m_f(Q) \}| \big\} \leq |Q|/2.
\]
The decreasing rearrangement of a measurable function $f$ on $\mathbb{R}^n$ is
defined by
\[
f^*(t) = \inf \{ \alpha > 0 : |\{x \in \mathbb{R}^n : |f(x)| > \alpha \}| < t \},
\quad 0 < t < \infty.
\]
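For example, with this definition one has
\[
(\mathbf{1}_E)^*(t) = \mathbf{1}_{(0, |E|]}(t), \qquad 0<t<\infty,
\]
for any measurable set $E \subset \mathbb{R}^n$ of finite measure; in general, $f$ and $f^*$ share the same distribution function.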
The local mean oscillation of $f$ is
\[
\omega_{\lambda}(f; Q)
= \inf_{c \in \mathbb{R}} \big( (f-c) \mathbf{1}_{Q} \big)^* (\lambda |Q|),
\quad 0 < \lambda < 1.
\]
Given a cube $Q_0$, the local sharp maximal function is
defined by
\[
M_{\lambda; Q_0}^{\sharp} f (x)
= \sup_{x \in Q \subset Q_0} \omega_{\lambda}(f; Q).
\]
Observe that for any $\delta > 0$ and $0 < \lambda < 1$
\begin{equation}\label{eq:mfQ}
|m_f(Q)| \leq (f \mathbf{1}_Q)^* (|Q|/2) \ \ \text{and} \ \
(f \mathbf{1}_Q)^* (\lambda |Q|) \leq
\left( \frac{1}{\lambda} \fint_{Q} |f|^{\delta} dx \right)^{1/{\delta}}.
\end{equation}
The following lemma was proved by Hyt\"{o}nen \cite[Theorem~2.3]{Hy2} in order to improve Lerner's formula from \cite{Ler11} by getting rid of the local sharp maximal function.
\begin{lemma}\label{lem:mf}
Let $f$ be a measurable function on $\mathbb{R}^n$ and let $Q_0$ be a fixed cube. Then there exists a (possibly empty) sparse family $\mathcal{S}(Q_0) \subset \mathcal{D}(Q_0)$ such that
\begin{equation}\label{eq:mf}
|f (x) - m_f (Q_0)| \leq 2 \sum_{Q \in \mathcal{S}(Q_0)} \omega_{2^{-n-2}}(f; Q) \mathbf{1}_Q (x), \quad \text{a.e. } x \in Q_0.
\end{equation}
\end{lemma}
\subsection{Orlicz maximal operators}
A function $\Phi:[0,\infty) \to [0,\infty)$ is called a Young function if it is continuous, convex, strictly increasing, and satisfies
\begin{equation*}
\lim_{t\to 0^{+}}\frac{\Phi(t)}{t}=0 \quad\text{and}\quad \lim_{t\to\infty}\frac{\Phi(t)}{t}=\infty.
\end{equation*}
Given $p \in[1, \infty)$, we say that a Young function $\Phi$ is a $p$-Young function, if $\Psi(t)=\Phi(t^{1/p})$ is a Young function.
If $A$ and $B$ are Young functions, we write $A(t) \simeq B(t)$ if there are constants $c_1, c_2>0$ such that
$c_1 A(t) \leq B(t) \leq c_2 A(t)$ for all $t \geq t_0>0$. Also, we denote $A(t) \preceq B(t)$ if there exists $c>0$ such that $A(t) \leq B(ct)$ for all $t \geq t_0>0$. Note that for all Young functions $\phi$, $t \preceq \phi(t)$. Further, if $A(t)\leq cB(t)$ for some $c>1$, then by convexity, $A(t) \leq B(ct)$.
A function $\Phi$ is said to be doubling, or $\Phi \in \Delta_2$, if there is a constant $C>0$ such that $\Phi(2t) \leq C \Phi(t)$ for any $t>0$. Given a Young function $\Phi$, its complementary function $\bar{\Phi}:[0,\infty) \to [0,\infty)$ is defined by
\[
\bar{\Phi}(t):=\sup_{s>0}\{st-\Phi(s)\}, \quad t>0,
\]
which clearly implies that
\begin{align}\label{eq:stst}
st \leq \Phi(s) + \bar{\Phi}(t), \quad s, t > 0.
\end{align}
Moreover, one can check that $\bar{\Phi}$ is also a Young function and
\begin{equation}\label{eq:Young-1}
t \leq \Phi^{-1}(t) \bar{\Phi}^{-1}(t) \leq 2t, \qquad t>0.
\end{equation}
In turn, by replacing $t$ by $\Phi(t)$ in the first inequality of \eqref{eq:Young-1}, we obtain
\begin{equation}\label{eq:Young-2}
\bar{\Phi} \Big(\frac{\Phi(t)}{t}\Big) \leq \Phi(t), \qquad t>0.
\end{equation}
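A basic example of a complementary pair: for $1<p<\infty$ and $\Phi(t)=t^p/p$ one has $\bar{\Phi}(t)=t^{p'}/p'$ with $1/p+1/p'=1$, and in this case \eqref{eq:stst} is exactly Young's inequality
\[
st \leq \frac{s^p}{p} + \frac{t^{p'}}{p'}, \qquad s, t>0.
\]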
Given a Young function $\Phi$, we define the Orlicz space $L^{\Phi}(\Omega, u)$ to be the function space with Luxemburg norm
\begin{align}\label{eq:Orlicz}
\|f\|_{L^{\Phi}(\Omega, u)} := \inf\bigg\{\lambda>0:
\int_{\Omega} \Phi \Big(\frac{|f(x)|}{\lambda}\Big) du(x) \leq 1 \bigg\}.
\end{align}
Now we define the Orlicz maximal operator
\begin{align*}
M_{\Phi}f(x) := \sup_{Q \ni x} \|f\|_{\Phi, Q} := \sup_{Q \ni x} \|f\|_{L^{\Phi}(Q, \frac{dx}{|Q|})},
\end{align*}
where the supremum is taken over all cubes $Q$ in $\mathbb{R}^n$. When $\Phi(t)=t^p$, $1\leq p<\infty$,
\begin{align*}
\|f\|_{\Phi, Q} = \bigg(\fint_{Q} |f(x)|^p dx \bigg)^{\frac1p}=:\|f\|_{p, Q}.
\end{align*}
In this case, if $p=1$, $M_{\Phi}$ agrees with the classical Hardy-Littlewood maximal operator $M$; if $p>1$, $M_{\Phi}f=M_pf:=M(|f|^p)^{1/p}$. If $\Phi(t) \preceq \Psi(t)$, then $M_{\Phi}f(x) \leq c M_{\Psi}f(x)$ for all $x \in \mathbb{R}^n$.
The H\"{o}lder inequality can be generalized to the scale of Orlicz spaces \cite[Lemma~5.2]{CMP11}.
\begin{lemma}
Given a Young function $A$, then for all cubes $Q$,
\begin{equation}\label{eq:Holder-AA}
\fint_{Q} |fg| dx \leq 2 \|f\|_{A, Q} \|g\|_{\bar{A}, Q}.
\end{equation}
More generally, if $A$, $B$ and $C$ are Young functions such that $A^{-1}(t) B^{-1}(t) \leq c_1 C^{-1}(t)$ for all $t \geq t_0>0$,
then
\begin{align}\label{eq:Holder-ABC}
\|fg\|_{C, Q} \leq c_2 \|f\|_{A, Q} \|g\|_{B, Q}.
\end{align}
\end{lemma}
The following result is an extension of the well-known Coifman-Rochberg theorem. The proof can be found in \cite[Lemma~4.2]{HP}.
\begin{lemma}
Let $\Phi$ be a Young function and let $w$ be a nonnegative function such that $M_{\Phi}w(x)<\infty$ a.e. Then
\begin{align}
\label{eq:CR-Phi} [(M_{\Phi}w)^{\delta}]_{A_1} &\le c_{n,\delta}, \quad\forall \delta \in (0, 1),
\\%
\label{eq:MPhiRH} [(M_{\Phi} w)^{-\lambda}]_{RH_{\infty}} &\le c_{n,\lambda},\quad\forall \lambda>0.
\end{align}
\end{lemma}
Given $p \in (1, \infty)$, a Young function $\Phi$ is said to satisfy the $B_p$ condition (or, $\Phi \in B_p$) if for some $c>0$,
\begin{align}\label{def:Bp}
\int_{c}^{\infty} \frac{\Phi(t)}{t^p} \frac{dt}{t} < \infty.
\end{align}
Observe that if \eqref{def:Bp} is finite for some $c>0$, then it is finite for every $c>0$. Let $[\Phi]_{B_p}$ denote the value of the integral in \eqref{def:Bp} when $c=1$. It was shown in \cite[Proposition~5.10]{CMP11} that if $\Phi$ and $\bar{\Phi}$ are doubling Young functions, then $\Phi \in B_p$ if and only if
\begin{align*}
\int_{c}^{\infty} \bigg(\frac{t^{p'}}{\bar{\Phi}(t)}\bigg)^{p-1} \frac{dt}{t} < \infty.
\end{align*}
Let us present two types of $B_p$ bumps. An important special case is the ``log-bumps" of the form
\begin{align}\label{eq:log}
A(t) =t^p \log(e+t)^{p-1+\delta}, \quad B(t) =t^{p'} \log(e+t)^{p'-1+\delta},\quad \delta>0.
\end{align}
Another interesting example is the ``loglog-bumps" as follows:
\begin{align}
\label{eq:loglog-1} &A(t)=t^p \log(e+t)^{p-1} \log\log(e^e+t)^{p-1+\delta}, \quad \delta>0,\\
\label{eq:loglog-2} &B(t)=t^{p'} \log(e+t)^{p'-1} \log\log(e^e+t)^{p'-1+\delta}, \quad \delta>0.
\end{align}
Then one can verify that in both cases above, $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$ for any $1<p<\infty$.
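To illustrate the last claim for the log-bumps in \eqref{eq:log}: up to multiplicative constants and for large $t$, the complementary function of $B(t)=t^{p'}\log(e+t)^{p'-1+\delta}$ satisfies $\bar{B}(t) \simeq t^{p}\log(e+t)^{-(1+\delta(p-1))}$, so that
\[
\int_{e}^{\infty} \frac{\bar{B}(t)}{t^{p}} \frac{dt}{t}
\simeq \int_{e}^{\infty} \frac{dt}{t \, (\log t)^{1+\delta(p-1)}} < \infty,
\]
that is, $\bar{B} \in B_{p}$; the verification of $\bar{A} \in B_{p'}$ is symmetric.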
The $B_p$ condition can also be characterized by the boundedness of the Orlicz maximal operator $M_{\Phi}$. Indeed, the following result was given in \cite[Theorem~5.13]{CMP11} and \cite[eq. (25)]{HP}.
\begin{lemma}\label{lem:MBp}
Let $1<p<\infty$. Then $M_{\Phi}$ is bounded on $L^p(\mathbb{R}^n)$ if and only if $\Phi \in B_p$. Moreover, $\|M_{\Phi}\|_{L^p(\mathbb{R}^n) \to L^p(\mathbb{R}^n)} \le C_{n,p} [\Phi]_{B_p}^{\frac1p}$. In particular, if the Young function $A$ is the same as the first one in \eqref{eq:log} or \eqref{eq:loglog-1}, then
\begin{equation}\label{eq:MAnorm}
\|M_{\bar{A}}\|_{L^{p'}(\mathbb{R}^n) \to L^{p'}(\mathbb{R}^n)} \le c_n p^2 \delta^{-\frac{1}{p'}},\quad\forall \delta \in (0, 1].
\end{equation}
\end{lemma}
\begin{definition}\label{def:sepbum}
Given $p \in (1, \infty)$, let $A$ and $B$ be Young functions such that $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$. We say that the pair of weights $(u, v)$ satisfies the {\tt double bump condition} with respect to $A$ and $B$ if
\begin{align}\label{eq:uvABp}
[u, v]_{A,B,p}:=\sup_{Q} \|u^{\frac1p}\|_{A,Q} \|v^{-\frac1p}\|_{B,Q} < \infty,
\end{align}
where the supremum is taken over all cubes $Q$ in $\mathbb{R}^n$. Also, $(u, v)$ is said to satisfy the {\tt separated bump condition} if
\begin{align}
\label{eq:uvAp} [u, v]_{A,p'} &:= \sup_{Q} \|u^{\frac1p}\|_{A,Q} \|v^{-\frac1p}\|_{p',Q} < \infty,
\\%
\label{eq:uvpB} [u, v]_{p,B} &:= \sup_{Q} \|u^{\frac1p}\|_{p,Q} \|v^{-\frac1p}\|_{B,Q} < \infty.
\end{align}
\end{definition}
Note that if $A(t)=t^p$ in \eqref{eq:uvAp} or $B(t)=t^{p'}$ in \eqref{eq:uvpB}, each of them is actually the two-weight $A_p$ condition, and we denote them by $[u, v]_{A_p}:=[u, v]_{p,p'}$. Also, the separated bump condition is weaker than the double bump condition. Indeed, \eqref{eq:uvABp} implies \eqref{eq:uvAp} and \eqref{eq:uvpB}, but the converse fails.
The first fact holds since $\bar{A} \in B_{p'}$ and $\bar{B} \in B_p$ respectively indicate that $A$ is a $p$-Young function and $B$ is a $p'$-Young function. The second fact was shown in \cite[Section~7]{ACM} by constructing log-bumps.
\begin{lemma}\label{lem:M-uv}
Let $1<p<\infty$, let $A$, $B$ and $\Phi$ be Young functions such that $A \in B_p$ and $A^{-1}(t)B^{-1}(t) \lesssim \Phi^{-1}(t)$ for any $t>t_0>0$. If a pair of weights $(u, v)$ satisfies $[u, v]_{p, B}<\infty$, then
\begin{align}\label{eq:MPhi-uv}
\|M_{\Phi}f\|_{L^p(u)} \leq C [u, v]_{p, B} [A]_{B_p}^{\frac1p} \|f\|_{L^p(v)}.
\end{align}
Moreover, \eqref{eq:MPhi-uv} holds for $\Phi(t)=t$ and $B=\bar{A}$ satisfying the same hypotheses. In this case, $\bar{A} \in B_p$ is necessary.
\end{lemma}
The two-weight inequality above was established in \cite[Theorem~5.14]{CMP11} and \cite[Theorem~3.1]{CP99}. The weak type inequality for $M_{\Phi}$ was also obtained in \cite[Proposition~5.16]{CMP11} as follows.
\begin{lemma}\label{lem:Muv-weak}
Let $1<p<\infty$, let $B$ and $\Phi$ be Young functions such that $t^{\frac1p} B^{-1}(t) \lesssim \Phi^{-1}(t)$ for any $t>t_0>0$. If a pair of weights $(u, v)$ satisfies $[u, v]_{p, B}<\infty$, then
\begin{align}\label{eq:MPuv}
\|M_{\Phi}f\|_{L^{p,\infty}(u)} \leq C \|f\|_{L^p(v)}.
\end{align}
Moreover, \eqref{eq:MPuv} holds for $M$ if and only if $[u, v]_{A_p}<\infty$.
\end{lemma}
\section{Proof of main results}
\subsection{Sparse domination}
Let $\Phi$ be a radial Schwartz function such that $\mathbf{1}_{B(0, 1)} \le \Phi \le \mathbf{1}_{B(0, 2)}$. We define
\begin{align*}
\widetilde{S}_{\alpha,L}(f)(x):=\bigg(\int_{0}^{\infty} \int_{\mathbb{R}^n}
\Phi\Big(\frac{|x-y|}{\alpha t}\Big) |Q_{t,L}f(y)|^2\frac{dydt}{t^{n+1}}\bigg)^{1/2},
\end{align*}
where $Q_{t,L}f:=t^m L e^{-t^m L}f$. It is easy to verify that
\begin{align}\label{eq:SSS}
S_{\alpha,L}(f)(x) \le \widetilde{S}_{\alpha,L}(f)(x) \le S_{2\alpha,L}(f)(x),\quad x \in \mathbb{R}^n.
\end{align}
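Indeed, \eqref{eq:SSS} follows at once from the choice of the cut-off: since $\mathbf{1}_{B(0, 1)} \le \Phi \le \mathbf{1}_{B(0, 2)}$,
\[
\mathbf{1}_{\{|x-y|< \alpha t\}} \le \Phi\Big(\frac{|x-y|}{\alpha t}\Big) \le \mathbf{1}_{\{|x-y|< 2\alpha t\}},
\]
and it remains to integrate against $|Q_{t,L}f(y)|^2 \, \frac{dydt}{t^{n+1}}$.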
Additionally, it was proved in \cite{ADM} that $S_{1,L}$ is bounded from $L^1(\mathbb{R}^n)$ to $L^{1,\infty}(\mathbb{R}^n)$. Then, this and \eqref{eq:SSS} give that
\begin{align}\label{eq:S11}
\|\widetilde{S}_{\alpha,L}(f)\|_{L^{1,\infty}(\mathbb{R}^n)} \lesssim \alpha^n \|f\|_{L^1(\mathbb{R}^n)}.
\end{align}
Using these facts, we can establish the sparse domination for $S_{\alpha,L}$ as follows.
\begin{lemma}\label{lem:S-sparse}
For any $\alpha \geq 1$, we have
\begin{align}\label{eq:S-sparse}
S_{\alpha,L}(f)(x) &\lesssim \alpha^{n} \sum_{j=1}^{3^n} \mathcal{A}_{\mathcal{S}_j}^2 (f)(x) ,\quad \text{a.e. } x \in \mathbb{R}^n,
\end{align}
where
\[
\mathcal{A}_{\mathcal{S}}^2(f)(x):= \bigg(\sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^2 \mathbf{1}_Q(x)\bigg)^{\frac12}.
\]
\end{lemma}
\begin{proof}
Fix $Q_0\in \mathcal{D}$. By \eqref{eq:mfQ}, Kolmogorov's inequality and \eqref{eq:S11}, we have
\begin{align}\label{eq:mfSL}
|m_{\widetilde{S}_{\alpha,L}(f)^2}(Q_0)|
&\lesssim \bigg(\fint_{Q_0} |\widetilde{S}_{\alpha,L}(f \mathbf{1}_{Q_0})|^{\frac12} \, dx\bigg)^4
\nonumber\\%
&\lesssim \|\widetilde{S}_{\alpha,L}(f \mathbf{1}_{Q_0})\|^2_{L^{1,\infty}(Q_0, \frac{dx}{|Q_0|})}
\lesssim \alpha^{2n} \bigg(\fint_{Q_0} |f| \, dx\bigg)^2.
\end{align}
From \cite[Proposition~3.2]{BD}, we obtain that for any dyadic cube $Q\subset \mathbb{R}^n$, $\alpha\geq 1$ and $\lambda \in (0, 1)$,
\begin{equation}\label{eq:osc-SL}
\omega_{\lambda}(\widetilde{S}_{\alpha, L}(f)^2;Q)
\lesssim \alpha^{2n} \sum_{j=0}^\infty 2^{-j\delta} \bigg(\fint_{2^j Q} |f|\, dx\bigg)^2,
\end{equation}
where $\delta \in (0, 1)$ is some constant. Invoking Lemma \ref{lem:mf}, \eqref{eq:mfSL} and \eqref{eq:osc-SL}, one can pick a sparse family $\mathcal{S}(Q_0) \subset \mathcal{D}(Q_0)$ so that
\begin{align}
\widetilde{S}_{\alpha,L}(f)(x)^2
& \lesssim |m_{\widetilde{S}_{\alpha,L}(f)^2}(Q_0)| +
\sum_{Q \in \mathcal{S}(Q_0)}\omega_{2^{-n-2}}(\widetilde{S}_{\alpha, L}(f)^2;Q) \mathbf{1}_{Q}(x)
\nonumber\\%
&\lesssim \alpha^{2n} \sum_{Q\in \mathcal{S}(Q_0)}\sum_{j=0}^\infty 2^{-j\delta}\langle |f|\rangle_{2^jQ}^2 \mathbf{1}_{Q}(x)
\label{eq:SLQj} \\%
&=: \alpha^{2n} \sum_{j=0}^\infty 2^{-j\delta} \mathcal{T}^2_{\mathcal{S}(Q_0), j}(f)(x)^2, \quad\text{ a.e. } x\in Q_0, \label{eq:SLTS}
\end{align}
where
\begin{align*}
\mathcal{T}^2_{\mathcal{S},j}(f)(x)
&:=\bigg(\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{2^jQ}^2 \mathbf{1}_{Q}(x)\bigg)^{\frac12}.
\end{align*}
Denote
\begin{align*}
\mathcal{T}_{\mathcal{S},j}(f, g)(x)
&:=\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{2^jQ} \langle |g|\rangle_{2^jQ} \mathbf{1}_{Q}(x),
\\%
\mathcal{A}_{\mathcal{S}}(f, g)(x)
&:=\sum_{Q\in \mathcal{S}} \langle |f|\rangle_{Q} \langle |g|\rangle_{2Q} \mathbf{1}_{Q}(x).
\end{align*}
Then, $\mathcal{T}^2_{\mathcal{S},j}(f)(x)=\mathcal{T}_{\mathcal{S},j}(f, f)(x)^{\frac12}$. On the other hand, the arguments in \cite[Sections~11-13]{LN} show that there exist $3^n$ dyadic grids $\mathcal{D}_j$ and sparse families $\mathcal{S}_j \subset \mathcal{D}_j$, $j=1,\ldots,3^n$, such that
\begin{align}\label{eq:TAA}
\sum_{j=0}^\infty 2^{-j\delta} \mathcal{T}_{\mathcal{S}(Q_0), j}(f,f)(x)
\lesssim \sum_{j=1}^{3^n} \mathcal{A}_{\mathcal{S}_j}(f, f)(x)
= \sum_{j=1}^{3^n} \mathcal{A}^2_{\mathcal{S}_j}(f)(x)^2.
\end{align}
Gathering \eqref{eq:SLTS} and \eqref{eq:TAA}, we deduce that
\begin{align*}
\widetilde{S}_{\alpha,L}(f)(x)\lesssim \alpha^{n} \sum_{j=1}^{3^n}\mathcal{A}^2_{\mathcal{S}_j}(f)(x), \quad\text{ a.e. } x\in Q_0.
\end{align*}
Since $\mathbb{R}^n=\bigcup_{Q \in \mathcal{D}} Q$, it follows that
\begin{align*}
S_{\alpha,L}(f)(x) \le \widetilde{S}_{\alpha,L}(f)(x)\lesssim \alpha^{n} \sum_{j=1}^{3^n}\mathcal{A}^2_{\mathcal{S}_j}(f)(x), \quad\text{ a.e. } x\in \mathbb{R}^n.
\end{align*}
This completes our proof.
\end{proof}
\subsection{Bump conjectures}
In this subsection, we are going to show two-weight inequalities invoking bump conjectures.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:Suv}.}]
By Lemma \ref{lem:S-sparse}, the inequality \eqref{eq:SLp} follows from
\begin{align}\label{eq:ASLp}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)} \lesssim \mathscr{N}_p \|f\|_{L^p(v)},
\end{align}
for every sparse family $\mathcal{S}$, where the implicit constant does not depend on $\mathcal{S}$.
To prove \eqref{eq:ASLp}, we begin with the case $1<p \le 2$. Actually, H\"{o}lder's inequality \eqref{eq:Holder-AA} gives that
\begin{align}\label{eq:ASp1}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)}^p
&=\int_{\mathbb{R}^n} \bigg(\sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^2 \mathbf{1}_{Q}(x)\bigg)^{\frac{p}{2}} u(x) dx
\le \sum_{Q \in \mathcal{S}} \langle |f| \rangle_Q^p \int_Q u(x)dx
\nonumber \\%
&\lesssim \sum_{Q \in \mathcal{S}} \|f v^{\frac1p}\|_{\bar{B}, Q}^p \|v^{-\frac1p}\|_{B, Q}^p
\|u^{\frac1p}\|_{p, Q}^p |Q|
\nonumber \\%
&\lesssim \|(u, v)\|_{A,B,p}^p \sum_{Q \in \mathcal{S}} \left(\inf_{Q} M_{\bar{B}}(f v^{\frac1p})\right)^p |E_Q|
\nonumber \\%
&\le \|(u, v)\|_{A,B,p}^p \int_{\mathbb{R}^n} M_{\bar{B}}(f v^{\frac1p})(x)^p dx
\nonumber \\%
&\le \|(u, v)\|_{A,B,p}^p \|M_{\bar{B}}\|_{L^p}^p \|f\|_{L^p(v)}^p,
\end{align}
where Lemma \ref{lem:MBp} is used in the last step.
Next let us deal with the case $2<p<\infty$. By duality, one has
\begin{align}\label{eq:AS-dual}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^p(u)}^2 = \|\mathcal{A}_{\mathcal{S}}^2(f)^2\|_{L^{p/2}(u)}
=\sup_{\substack{0 \le h \in L^{(p/2)'}(u) \\ \|h\|_{L^{(p/2)'}(u)}=1}} \int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(f)^2 h u\, dx.
\end{align}
Fix a nonnegative function $h \in L^{(p/2)'}(u)$ with $\|h\|_{L^{(p/2)'}(u)}=1$. Then using H\"{o}lder's inequality \eqref{eq:Holder-AA} three times and Lemma \ref{lem:MBp}, we obtain
\begin{align}\label{eq:ASp2}
&\int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(f)(x)^2 h(x) u(x) dx
\lesssim \sum_{Q \in \mathcal{S}} \langle |f| \rangle_{Q}^2 \langle hu \rangle_{Q} |Q|
\nonumber \\%
&\lesssim \sum_{Q \in \mathcal{S}} \|f v^{\frac1p}\|_{\bar{B}, Q}^2 \|v^{-\frac1p}\|_{B, Q}^2
\|hu^{1-\frac{2}{p}}\|_{\bar{A}, Q} \|u^{\frac{2}{p}}\|_{A, Q} |Q|
\nonumber \\%
&\lesssim \|(u, v)\|_{A,B,p}^2 \sum_{Q \in \mathcal{S}} \left(\inf_{Q} M_{\bar{B}}(f v^{\frac1p})\right)^2
\left(\inf_{Q} M_{\bar{A}}(hu^{1-\frac{2}{p}})\right) |E_Q|
\nonumber \\%
&\le \|(u, v)\|_{A,B,p}^2 \int_{\mathbb{R}^n} M_{\bar{B}}(f v^{\frac1p})(x)^2 M_{\bar{A}}(hu^{1-\frac{2}{p}})(x) dx
\nonumber \\%
&\le \|(u, v)\|_{A,B,p}^2 \|M_{\bar{B}}(f v^{\frac1p})^2\|_{L^{p/2}}
\|M_{\bar{A}}(hu^{1-\frac{2}{p}})\|_{L^{(p/2)'}}
\nonumber \\%
&\le \|(u, v)\|_{A,B,p}^2 \|M_{\bar{B}}\|_{L^p}^2
\|M_{\bar{A}}\|_{L^{(p/2)'}} \|f\|_{L^p( v)}^2 \|h\|_{L^{(p/2)'}(u)}.
\end{align}
Therefore, \eqref{eq:ASLp} immediately follows from \eqref{eq:ASp1}, \eqref{eq:AS-dual} and \eqref{eq:ASp2}.
\end{proof}
Let us present an endpoint extrapolation theorem from \cite[Corollary~8.4]{CMP11}.
\begin{lemma}
Let $\mathcal{F}$ be a collection of pairs $(f, g)$ of nonnegative measurable functions. If for every weight $w$,
\begin{align*}
\|f\|_{L^{1,\infty}(w)} \le C \|g\|_{L^1(Mw)}, \quad (f, g) \in \mathcal{F},
\end{align*}
then for all $p \in (1, \infty)$,
\begin{align*}
\|f\|_{L^{p,\infty}(u)} \le C \|g\|_{L^p(v)}, \quad (f, g) \in \mathcal{F},
\end{align*}
whenever $\sup_{B} \|u^{\frac1p}\|_{A, B} \|v^{-\frac1p}\|_{p', B}<\infty$, where $\bar{A} \in B_{p'}$.
\end{lemma}
\begin{proof} [\textbf{Proof of Theorem \ref{thm:Sweak}.}]
In view of the extrapolation result above, it suffices to prove that for every weight $w$,
\begin{align*}
\|S_{\alpha, L}(f)\|_{L^{1,\infty}(w)} \leq C \|f\|_{L^1(Mw)},
\end{align*}
where the constant $C$ is independent of $w$ and $f$. We should mention that although the norm of the weights does not appear in \cite[Corollary~8.4]{CMP11}, one can check its proof to obtain the norm constant in \eqref{eq:S-weak}. Invoking Lemma \ref{lem:S-sparse}, we are reduced to showing that there exists a constant $C$ such that for every sparse family $\mathcal{S}$ and for every weight $w$,
\begin{align}\label{eq:S-11}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^{1,\infty}(w)} \leq C \|f\|_{L^1(M_{\mathcal{D}}w)}.
\end{align}
Without loss of generality, we may assume that $f$ is bounded and has compact support. Fix $\lambda>0$ and denote $\Omega:=\{x \in \mathbb{R}^n: M_{\mathcal{D}}f(x)>\lambda\}$. By the Calder\'{o}n-Zygmund decomposition, there exists a pairwise disjoint family $\{Q_j\} \subset \mathcal{D}$ such that $\Omega=\bigcup_{j}Q_j$ and
\begin{list}{\textup{(\theenumi)}}{\usecounter{enumi}\leftmargin=1cm \labelwidth=1cm \itemsep=0.2cm
\topsep=.2cm \renewcommand{\theenumi}{\arabic{enumi}}}
\item\label{CZ-1} $f=g+b$,
\item\label{CZ-2} $g=f\mathbf{1}_{\Omega^c}+\sum_{j} \langle f \rangle_{Q_j} \mathbf{1}_{Q_j}$,
\item\label{CZ-3} $b=\sum_{j}b_j$ with $b_j=(f-\langle f \rangle_{Q_j}) \mathbf{1}_{Q_j}$,
\item\label{CZ-4} $\langle |f| \rangle_{Q_j}>\lambda$ and $|g(x)| \le 2^n \lambda$, a.e. $x \in \mathbb{R}^n$ (see the remark after this list),
\item\label{CZ-5} $\operatorname{supp}(b_j) \subset Q_j$ and $\fint_{Q_j} b_j\, dx=0$.
\end{list}
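For completeness, we recall why the bound $|g| \le 2^n \lambda$ in \eqref{CZ-4} holds. On $\Omega^c$ every dyadic cube containing $x$ has average at most $\lambda$, so $|f(x)| \le \lambda$ a.e. there by the Lebesgue differentiation theorem; on each $Q_j$, maximality implies that the dyadic parent $\widehat{Q}_j$ of $Q_j$ satisfies $\fint_{\widehat{Q}_j}|f|\,dx \le \lambda$, whence
\[
|\langle f \rangle_{Q_j}| \le \frac{|\widehat{Q}_j|}{|Q_j|} \fint_{\widehat{Q}_j} |f| \, dx \le 2^n \lambda .
\]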
Then by \eqref{CZ-1}, we split
\begin{align}\label{eq:IgIb}
&w(\{x \in \mathbb{R}^n: \mathcal{A}_{\mathcal{S}}^2(f)(x)>\lambda\})
\le w(\Omega) + {\rm I_g} + {\rm I_b},
\end{align}
where
\begin{align*}
{\rm I_g} = w(\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(g)(x)>\lambda/2\}) \quad\text{and}\quad
{\rm I_b} =w(\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(b)(x)>\lambda/2\}).
\end{align*}
For the first term, by \eqref{CZ-4} we have
\begin{align}\label{eq:wo}
w(\Omega) &\le \sum_{j} w(Q_j) \le \frac{1}{\lambda} \sum_{j} \frac{w(Q_j)}{|Q_j|} \int_{Q_j} |f(x)| dx
\nonumber \\%
&\le \frac{1}{\lambda} \sum_{j} \int_{Q_j} |f(x)| M_{\mathcal{D}}w(x) dx
\le \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx.
\end{align}
To estimate ${\rm I_b}$, we claim that $\mathcal{A}_{\mathcal{S}}^2(b_j)(x)=0$ for all $x \in \Omega^c$ and for all $j$. In fact, if there exist $x_0 \in \Omega^c$ and $j_0$ such that $\mathcal{A}_{\mathcal{S}}^2(b_{j_0})(x_0) \neq 0$, then there is a dyadic cube $Q_0 \in \mathcal{S}$ such that $x_0 \in Q_0$ and $\langle b_{j_0}\rangle_{Q_0} \neq 0$. The latter implies $Q_0 \subsetneq Q_{j_0}$ because of the support and the vanishing property of $b_{j_0}$. This in turn gives that $x_0 \in Q_{j_0}$, which contradicts $x_0 \in \Omega^c$. This shows our claim. As a consequence, the set $\{x \in \Omega^c: \mathcal{A}_{\mathcal{S}}^2(b)(x)>\lambda/2\}$ is empty, and hence ${\rm I_b}=0$.
In order to control ${\rm I_g}$, we first present a Fefferman-Stein inequality for $\mathcal{A}_{\mathcal{S}}^2$. Note that $v(x):=M_{\mathcal{D}}w(x) \ge \langle w \rangle_{Q}$ for any dyadic cube $Q \in \mathcal{S}$ containing $x$. Then for any Young function $A$ such that $\bar{A} \in B_2$,
\begin{align}\label{eq:AS-FS}
\|\mathcal{A}_{\mathcal{S}}^2(f)\|_{L^2(w)}^2
&=\sum_{Q \in \mathcal{S}} \langle |f| \rangle_{Q}^2 w(Q)
\le \sum_{Q \in \mathcal{S}} \|f v^{\frac12}\|_{\bar{A}, Q}^2 \|v^{-\frac12}\|_{A, Q}^2 w(Q)
\nonumber \\%
&\le \sum_{Q \in \mathcal{S}} \|f v^{\frac12}\|_{\bar{A}, Q}^2 \langle w \rangle_{Q}^{-1} w(Q)
\lesssim \sum_{Q \in \mathcal{S}} \Big(\inf_{Q} M_{\bar{A}}(f v^{\frac12}) \Big)^2 |E_Q|
\nonumber \\%
&\le \int_{\mathbb{R}^n} M_{\bar{A}}(f v^{\frac12})(x)^2 dx
\lesssim \|f\|_{L^2(v)}^2 = \|f\|_{L^2(M_{\mathcal{D}}w)}^2,
\end{align}
where we used Lemma \ref{lem:MBp} in the last inequality. Note that for any $x \in Q_j$,
\begin{align}\label{eq:MD}
M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x)
&\le M_{\mathcal{D}}(w\mathbf{1}_{Q_j^c})(x)
=\sup_{x \in Q \in \mathcal{D}} \frac{1}{|Q|} \int_{Q \cap Q_j^c} w(y) dy
\nonumber \\%
&\le \sup_{Q \in \mathcal{D}:Q_j \subsetneq Q} \frac{1}{|Q|} \int_{Q \cap Q_j^c} w(y) dy
\le \inf_{Q_j} M_{\mathcal{D}}w.
\end{align}
Thus, combining \eqref{eq:AS-FS} with \eqref{eq:MD}, we have
\begin{align}\label{eq:Ig}
{\rm I_g} &\le \frac{4}{\lambda^2} \int_{\Omega^c} \mathcal{A}_{\mathcal{S}}^2(g)(x)^2 w(x) dx
\nonumber \\%
&\lesssim \frac{1}{\lambda^2} \int_{\mathbb{R}^n} |g(x)|^2 M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx
\lesssim \frac{1}{\lambda} \int_{\mathbb{R}^n} |g(x)| M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx
\nonumber \\%
&\le \frac{1}{\lambda} \int_{\Omega^c} |f(x)| M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx
+ \frac{1}{\lambda} \sum_{j} \langle |f| \rangle_{Q_j} \int_{Q_j} M_{\mathcal{D}}(w\mathbf{1}_{\Omega^c})(x) dx
\nonumber \\%
&\le \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx
+ \frac{1}{\lambda} \sum_{j} \int_{Q_j} |f(y)| M_{\mathcal{D}}w(y) dy
\nonumber \\%
&\lesssim \frac{1}{\lambda} \int_{\mathbb{R}^n} |f(x)| M_{\mathcal{D}}w(x) dx.
\end{align}
Consequently, gathering \eqref{eq:IgIb}, \eqref{eq:wo} and \eqref{eq:Ig}, we conclude that \eqref{eq:S-11} holds.
\end{proof}
\subsection{Fefferman-Stein inequalities}
In order to show Theorem \ref{thm:FS}, we recall an extrapolation theorem for arbitrary weights from \cite[Theorem 1.3]{CP}; see also \cite[Theorem~4.11]{CO} for the setting of general Banach function spaces.
\begin{lemma}\label{lem:extra}
Let $\mathfrak{m}athcal{F}$ be a collection of pairs $(f, g)$ of nonnegative measurable functions. If for some $p_0 \in (0, \infty)$ and for every weight $w$,
\begin{align*}
\|f\|_{L^{p_0}(w)} \le C \|g\|_{L^{p_0}(Mw)}, \quad (f, g) \in \mathfrak{m}athcal{F},
\end{align*}
then for every $p \in (p_0, \infty)$ and for every weight $w$,
\begin{align*}
\|f\|_{L^p(w)} \le C \|g(Mw/w)^{\frac{1}{p_0}}\|_{L^p(w)}, \quad (f, g) \in \mathfrak{m}athcal{F}.
\end{align*}
\end{lemma}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:FS}.}]
Note that $v(x):=Mw(x) \ge \langle w\rangle_{Q}$ for any dyadic cube $Q \in \mathcal{S}$ containing $x$. Let $A$ be a Young function such that $\bar{A}\in B_{p}$.
By Lemma \ref{lem:MBp}, we have
\begin{align}\label{eq:MAf}
\|M_{\bar{A}} (f v^{\frac{1}{p}})\|_{L^{p}} \lesssim \|f\|_{L^{p}(v)}.
\end{align}
Thus, using Lemma \ref{lem:S-sparse}, H\"{o}lder's inequality and \eqref{eq:MAf}, we deduce that
\begin{align*}
\|S_{\alpha,L}(f)\|_{L^p(w)}^p
&\lesssim \alpha^{pn}\sum_{j=1}^{K_0} \sum_{Q \in \mathcal{S}_j}
\langle |f| \rangle_{Q}^p \int_Q w(x)dx
\\%
&\le \alpha^{pn}\sum_{j=1}^{K_0} \sum_{Q \in \mathcal{S}_j}
\|f v^{\frac{1}{p}}\|_{\bar{A}, Q}^p
\|v^{-\frac{1}{p}}\|_{A, Q}^p \int_Q w(x)dx
\\%
&\le \alpha^{pn}\sum_{j=1}^{K_0} \sum_{Q \in \mathcal{S}_j}
\|f v^{\frac{1}{p}}\|_{\bar{A}, Q}^p
|Q|
\\%
&\lesssim \alpha^{pn}\sum_{j=1}^{K_0} \sum_{Q \in \mathcal{S}_j}
\Big(\inf_{Q} M_{\bar{A}}(f v^{\frac{1}{p}}) \Big)^p |E_Q|
\\%
&\lesssim\alpha^{pn}\int_{\mathbb{R}^n}
M_{\bar{A}}(f v^{\frac{1}{p}})(x)^p dx
\\%
&\lesssim \alpha^{pn}\|f\|_{L^{p}(v)}^p
= \alpha^{pn} \|f\|_{L^{p}(Mw)}^p.
\end{align*}
This shows \eqref{eq:SMw-1}. Finally, \eqref{eq:SMw-2} is a consequence of \eqref{eq:SMw-1} and Lemma \ref{lem:extra}.
\end{proof}
\subsection{Local decay estimates}
To show Theorem \ref{thm:local}, we need the following Carleson embedding theorem from \cite[Theorem~4.5]{HP}.
\begin{lemma}\label{lem:Car-emb}
Suppose that the sequence $\{a_Q\}_{Q \in \mathcal{D}}$ of nonnegative numbers satisfies the Carleson packing condition
\begin{align*}
\sum_{Q \in \mathcal{D}:Q \subset Q_0} a_Q \leq A w(Q_0),\quad\forall Q_0 \in \mathcal{D}.
\end{align*}
Then for all $p \in (1, \infty)$ and $f \in L^p(w)$,
\begin{align*}
\bigg(\sum_{Q \in \mathcal{D}} a_Q \Big(\frac{1}{w(Q)} \int_{Q} f(x) w(x) \, dx\Big)^p \bigg)^{\frac1p}
\leq A^{\frac1p} p' \|f\|_{L^p(w)}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lem:CF-local}
For every $1<p<\infty$ and $w \in A_p$, we have
\begin{align}\label{eq:CF-local}
\|S_{\alpha,L}f\|_{L^2(B, w)} \leq c_{n,p} \alpha^{n}[w]_{A_p}^{\frac12} \|Mf\|_{L^2(B, w)},
\end{align}
for every ball $B \subset \mathbb{R}^n$ and $f\in L_c^\infty(\mathbb{R}^n)$ with $\operatorname{supp}(f) \subset B$.
\end{lemma}
\begin{proof}
Let $w \in A_p$ with $1<p<\infty$. Fix a ball $B \subset \mathbb{R}^n$. From \eqref{eq:SLQj}, there exists a sparse family $\mathcal{S}(Q) \subset \mathcal{D}(Q)$ so that
\begin{align*}
\widetilde{S}_{\alpha,L}(f)(x)^2
&\lesssim \alpha^{2n} \sum_{Q'\in \mathcal{S}(Q)} \sum_{j=0}^\infty 2^{-j\delta} \langle |f|\rangle_{2^jQ'}^2 \mathbf{1}_{Q'}(x)
\\%
&\lesssim \alpha^{2n} \sum_{Q'\in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 \mathbf{1}_{Q'}(x), \quad\text{ a.e. } x\in Q.
\end{align*}
This implies that
\begin{align*}
\|\widetilde{S}_{\alpha,L}(f)\|_{L^2(Q, w)}^2
&\lesssim \alpha^{2n}\sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q').
\end{align*}
From this and \eqref{eq:SSS}, we see that to obtain \eqref{eq:CF-local}, it suffices to prove
\begin{align}\label{eq:QSQ}
\sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q') \lesssim [w]_{A_p} \|M(f)\|_{L^2(Q, w)}^2.
\end{align}
Recall that a new version of $A_{\infty}$ was introduced by Hyt\"{o}nen and P\'{e}rez \cite{HP}:
\begin{align*}
[w]'_{A_{\infty}} := \sup_{Q} \frac{1}{w(Q)} \int_{Q} M(w \mathbf{1}_Q)(x) dx.
\end{align*}
By \cite[Proposition~2.2]{HP}, there holds
\begin{align}\label{eq:AiAi}
c_n [w]'_{A_{\infty}} \le [w]_{A_{\infty}} \leq [w]_{A_p}.
\end{align}
Observe that for every $Q'' \in \mathcal{D}$,
\begin{align*}
\sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} w(Q')
&=\sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} \langle w \rangle_{Q'} |Q'|
\lesssim \sum_{Q' \in \mathcal{S}(Q): Q' \subset Q''} \inf_{Q'} M(w \mathbf{1}_{Q''}) |E_{Q'}|
\\%
&\lesssim \int_{Q''} M(w \mathbf{1}_{Q''})(x) dx
\leq [w]'_{A_{\infty}} w(Q'') \lesssim [w]_{A_p} w(Q''),
\end{align*}
where we used the disjointness of $\{E_{Q'}\}_{Q' \in \mathcal{S}(Q)}$ and \eqref{eq:AiAi}. This shows that the collection $\{w(Q')\}_{Q' \in \mathcal{S}(Q)}$ satisfies the Carleson packing condition with the constant $c_n [w]_{A_p}$. As a consequence, this and Lemma \ref{lem:Car-emb} give that
\begin{align*}
\sum_{Q' \in \mathcal{S}(Q)} \inf_{Q'} M(f)^2 w(Q')
&\le \sum_{Q' \in \mathcal{S}(Q)} \bigg(\frac{1}{w(Q')} \int_{Q'} M(f)\, \mathbf{1}_Q w\, dx \bigg)^2 w(Q')
\\%
&\lesssim [w]_{A_p} \|M(f) \mathbf{1}_Q\|_{L^2(w)}^2
=[w]_{A_p} \|M(f) \mathbf{1}_Q\|_{L^2(Q, w)}^2,
\end{align*}
where the above implicit constants are independent of $[w]_{A_p}$ and $Q$. This shows \eqref{eq:QSQ} and completes the proof of \eqref{eq:CF-local}.
\end{proof}
Next, let us see how Lemma \ref{lem:CF-local} implies Theorem \ref{thm:local}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:local}.}]
Let $p>1$ and $r>1$ be constants to be chosen later. Define the Rubio de Francia algorithm:
\begin{align*}
\mathcal{R}h=\sum_{k=0}^{\infty} \frac{M^{k}h}{2^k\|M\|^k_{L^{r'}\to L^{r'}}}.
\end{align*}
Then it is obvious that
\begin{align}\label{eq:hRh}
h \le \mathcal{R}h \quad\text{and}\quad \|\mathcal{R}h\|_{L^{r'} (\mathbb{R}^n)} \leq 2 \|h\|_{L^{r'} (\mathbb{R}^n)}.
\end{align}
Moreover, for any nonnegative $h \in L^{r'}(\mathbb{R}^n)$, we have that $\mathcal{R}h \in A_1$ with
\begin{align}\label{eq:Rh-A1}
[\mathcal{R}h]_{A_1} \leq 2 \|M\|_{L^{r'} \to L^{r'}} \leq c_n r.
\end{align}
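For the reader's convenience, here is the routine verification of \eqref{eq:Rh-A1} (a standard computation, included only as a reminder): since $M$ is countably subadditive on nonnegative functions,
\begin{align*}
M(\mathcal{R}h) \le \sum_{k=0}^{\infty} \frac{M^{k+1}h}{2^k\|M\|^k_{L^{r'}\to L^{r'}}}
= 2\|M\|_{L^{r'}\to L^{r'}} \sum_{k=0}^{\infty} \frac{M^{k+1}h}{2^{k+1}\|M\|^{k+1}_{L^{r'}\to L^{r'}}}
\le 2\|M\|_{L^{r'}\to L^{r'}}\, \mathcal{R}h,
\end{align*}
while $\|M\|_{L^{r'}\to L^{r'}} \le c_n (r')' = c_n r$ by the Hardy-Littlewood maximal theorem.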
By the Riesz representation theorem and the first inequality in \eqref{eq:hRh}, there exists some nonnegative function $h \in L^{r'}(Q)$ with $\|h\|_{L^{r'}(Q)}=1$ such that
\begin{align}\label{eq:FQ}
\mathscr{F}_Q^{\frac1r} &:= |\{x \in Q: S_{\alpha,L}(f)(x) > t M(f)(x)\}|^{\frac1r}
\nonumber \\
&= |\{x \in Q: S_{\alpha,L}(f)(x)^2 > t^2 M(f)(x)^2\}|^{\frac1r}
\nonumber \\
& \leq \frac{1}{t^2} \bigg\| \bigg(\frac{S_{\alpha,L}(f)}{M(f)}\bigg)^2 \bigg\|_{L^r(Q)}
\leq \frac{1}{t^2} \int_{Q} S_{\alpha,L}(f)^2 \, h \, M(f)^{-2} dx
\nonumber \\
&\leq t^{-2} \|S_{\alpha,L}(f)\|_{L^2(Q, w)}^2,
\end{align}
where $w=w_1 w_2^{1-p}$, $w_1= \mathcal{R}h$ and $w_2 = M(f)^{2(p'-1)}$. Recall that the Coifman-Rochberg theorem \cite[Theorem~3.4]{Jose} asserts that
\begin{align}\label{eq:C-R}
[(M(f))^{\delta}]_{A_1} \leq \frac{c_n}{1-\delta}, \quad\forall \delta \in (0, 1).
\end{align}
In view of \eqref{eq:Rh-A1} and \eqref{eq:C-R}, we see that $w_1, w_2 \in A_1$ provided $p>3$, since $2(p'-1)<1$ precisely when $p>3$. Then the reverse $A_p$ factorization theorem gives that
$w=w_1 w_2^{1-p} \in A_p$ with
\begin{align}\label{eq:w-Ap-r}
[w]_{A_p} \leq [w_1]_{A_1} [w_2]_{A_1}^{p-1} \leq c_n r.
\end{align}
Thus, gathering \eqref{eq:CF-local}, \eqref{eq:FQ} and \eqref{eq:w-Ap-r},
we obtain
\begin{align*}
\mathscr{F}_Q^{\frac1r}
& \le c_{n} t^{-2}\alpha^{2n} [w]_{A_p} \|M(f)\|_{L^2(Q, w)}^2
\\
& = c_{n} t^{-2} \alpha^{2n}[w]_{A_p} \|\mathcal{R}h\|_{L^1(Q)}
\\
&\le c_{n} t^{-2} \alpha^{2n}[w]_{A_p} \|\mathcal{R}h\|_{L^{r'}(Q)} |Q|^{\frac1r}
\\
&\le c_{n} t^{-2}\alpha^{2n} [w]_{A_p} \|h\|_{L^{r'}(Q)} |Q|^{\frac1r}
\\
&\le c_{n} r t^{-2}\alpha^{2n} |Q|^{\frac1r}.
\end{align*}
Consequently, if $t> \sqrt{c_n e}\,\alpha^{n}$, choosing $r>1$ so that $t^2/e = c_n \alpha^{2n}r$, we have
\begin{align}\label{eq:FQr-1}
\mathscr{F}_Q \le (c_n \alpha^{2n}r t^{-2})^r |Q| = e^{-r} |Q|
= e^{-\frac{t^2}{c_n e\alpha^{2n}}} |Q|.
\end{align}
If $0<t \le \sqrt{c_n e}\alpha^{n}$, it is easy to see that
\begin{equation}\label{eq:FQr-2}
\mathscr{F}_Q \le |Q| \le e \cdot e^{-\frac{t^2}{c_n e\alpha^{2n}}} |Q|.
\end{equation}
Summing \eqref{eq:FQr-1} and \eqref{eq:FQr-2} up, we deduce that
\begin{equation*}
\mathscr{F}_Q=|\{x \in Q: S_{\alpha,L}(f)(x) > t M(f)(x)\}|
\le c_1 e^{-c_2 t^2/\alpha^{2n}} |Q|,\quad\forall t>0.
\end{equation*}
This proves \eqref{eq:local}.
\end{proof}
\subsection{Mixed weak type estimates}
To carry out the proof of Theorem \ref{thm:mixed}, we first establish a Coifman-Fefferman inequality.
\begin{lemma}\label{lem:CF}
For every $0<p<\infty$ and $w \in A_{\infty}$, we have
\begin{align}\label{eq:CF}
\|S_{\alpha, L}f\|_{L^p(w)} &\lesssim \|Mf\|_{L^p(w)}.
\end{align}
\end{lemma}
\begin{proof}
Let $w \in A_\infty$. Then, it is well known that for any $\epsilon \in (0, 1)$ there exists $\beta \in (0, 1)$ such that for any cube $Q$ and any measurable subset $A\subset Q$,
\begin{align*}
|A| \le \epsilon |Q| \quad\Longrightarrow\quad w(A) \le \beta w(Q).
\end{align*}
Thus, for the sparsity constant $\eta_j$ of $\mathcal{S}_j$ there exists
$\beta_j \in (0,1)$ such that for $Q \in \mathcal{S}_j$, we have
\begin{equation}\label{eq:w-EQ}
w(E_Q) =w(Q) - w(Q \setminus E_Q) \geq (1-\beta_j)w(Q),
\end{equation}
since $|Q \setminus E_Q| \leq (1-\eta_j)|Q|$ and hence $w(Q \setminus E_Q) \leq \beta_j w(Q)$.
It follows from \eqref{eq:S-sparse} and \eqref{eq:w-EQ} that
\begin{align}\label{eq:p=1}
\int_{\mathbb{R}^n} S_{\alpha,L}(f)(x)^2 w(x) \ dx
&\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \langle |f| \rangle_{Q}^2 w(Q)
\nonumber \\
&\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \Big(\inf_{Q}M(f) \Big)^2 w(E_Q)
\nonumber \\
&\lesssim \sum_{j=1}^{3^n} \sum_{Q \in \mathcal{S}_j} \int_{E_Q} M(f)(x)^2 w(x) \ dx
\nonumber \\
&\lesssim \int_{\mathbb{R}^n} M(f)(x)^2 w(x) \ dx.
\end{align}
This shows the inequality \eqref{eq:CF} holds for $p=2$.
To obtain the result \eqref{eq:CF} for every $p \in (0, \infty)$, we apply the $A_{\infty}$ extrapolation theorem from \cite[Corollary 3.15]{CMP11} for Lebesgue spaces, or \cite[Theorem~3.36]{CMM} for general measure spaces. Let $\mathcal{F}$ be a family of pairs of functions. Suppose that for some $p_0 \in (0, \infty)$ and for every weight $v_0 \in A_{\infty}$,
\begin{align}\label{eq:fg-some}
\|f\|_{L^{p_0}(v_0)} \leq C_1 \|g\|_{L^{p_0}(v_0)}, \quad \forall (f, g) \in \mathcal{F}.
\end{align}
Then for every $p \in (0, \infty)$ and every weight $v \in A_{\infty}$,
\begin{align}\label{eq:fg-every}
\|f\|_{L^p(v)} \leq C_2 \|g\|_{L^p(v)}, \quad \forall (f, g) \in \mathcal{F}.
\end{align}
From \eqref{eq:p=1}, we see that \eqref{eq:fg-some} holds for the exponent $p_0=2$ and the pair $(S_{\alpha,L}f, Mf)$. Therefore, \eqref{eq:fg-every} implies that \eqref{eq:CF} holds for the general case $0<p<\infty$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:mixed}.}]
In view of Lemma \ref{lem:CF}, we just present the proof for $S_{\alpha,L}$. We use a hybrid of the arguments in \cite{CMP} and \cite{LOPi}. Define
\begin{align*}
\mathcal{R}h(x)=\sum_{j=0}^{\infty} \frac{T^j_u h(x)}{2^j K_0^j},
\end{align*}
where $K_0>0$ will be chosen later and $T_uf(x) := M(fu)(x)/u(x)$ if $u(x) \neq 0$, $T_uf(x)=0$ otherwise. This immediately yields that
\begin{align}\label{eq:R-1}
h \leq \mathcal{R}h \quad
\text{and}\quad T_u(\mathcal{R}h) \leq 2K_0 \mathcal{R}h.
\end{align}
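The second inequality in \eqref{eq:R-1} follows by the same telescoping argument as before; we record it for completeness (it uses only the sublinearity of $M$, and hence of $T_u$):
\begin{align*}
T_u(\mathcal{R}h) \le \sum_{j=0}^{\infty} \frac{T_u^{j+1} h}{2^j K_0^j}
= 2K_0 \sum_{j=0}^{\infty} \frac{T_u^{j+1} h}{2^{j+1} K_0^{j+1}}
\le 2K_0\, \mathcal{R}h.
\end{align*}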
Moreover, following the same scheme as in \cite{CMP}, we obtain, for some $r>1$,
\begin{align}\label{eq:R-2}
\mathcal{R}h \cdot uv^{\frac{1}{r'}} \in A_{\infty} \quad\text{and}\quad
\|\mathcal{R}h\|_{L^{r',1}(uv)} \leq 2 \|h\|_{L^{r',1}(uv)}.
\end{align}
Note that
\begin{equation}\label{e:Lpq}
\|f^q\|_{L^{p,\infty}(w)}= \|f\|^q_{L^{pq,\infty}(w)}, \ \ 0<p,q<\infty.
\end{equation}
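Identity \eqref{e:Lpq} is just a change of variables in the level sets; we spell it out since it is used twice below:
\begin{align*}
\|f^q\|_{L^{p,\infty}(w)}
=\sup_{\lambda>0} \lambda\, w(\{|f|^q>\lambda\})^{\frac1p}
=\sup_{t>0} t^q\, w(\{|f|>t\})^{\frac{q}{pq}}
=\|f\|_{L^{pq,\infty}(w)}^q,
\end{align*}
where we substituted $\lambda=t^q$.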
This implies that
\begin{align*}
& \bigg\| \frac{S_{\alpha,L}(f)}{v} \bigg\|_{L^{1,\infty}(uv)}^{\frac{1}{r}}
= \bigg\| \bigg(\frac{S_{\alpha,L}(f)}{v}\bigg)^{\frac{1}{r}}\bigg\|_{L^{r,\infty}(uv)}
\\
&=\sup_{\|h\|_{L^{r',1}(uv)}=1}
\bigg|\int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} h(x) u(x) v(x)^{\frac{1}{r'}} dx \bigg|
\\
&\leq \sup_{\|h\|_{L^{r',1}(uv)}=1}
\int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} \mathcal{R}h(x)
u(x) v(x)^{\frac{1}{r'}} dx.
\end{align*}
Invoking Lemma \ref{lem:CF} and H\"{o}lder's inequality, we obtain
\begin{align*}
& \int_{\mathbb{R}^n} |S_{\alpha,L}(f)(x)|^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x)^{\frac{1}{r'}} dx
\\
& \lesssim \int_{\mathbb{R}^n} M(f)(x)^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x)^{\frac{1}{r'}} dx
\\
& =\int_{\mathbb{R}^n} \bigg(\frac{M(f)(x)}{v(x)}\bigg)^{\frac{1}{r}} \mathcal{R}h(x) u(x) v(x) dx
\\
& \leq \bigg\|\bigg(\frac{M(f)}{v}\bigg)^{\frac{1}{r}} \bigg\|_{L^{r,\infty}(uv)}\|\mathcal{R}h\|_{L^{r',1}(uv)}
\\
& \leq \bigg\|\frac{M(f)}{v}\bigg\|_{L^{1,\infty}(uv)}^{\frac{1}{r}} \|h\|_{L^{r',1}(uv)},
\end{align*}
where we used \eqref{e:Lpq} and \eqref{eq:R-2} in the last inequality. To conclude, we apply the weighted mixed weak-type estimate for $M$ proved in Theorem 1.1 of \cite{LOP}. Consequently, collecting the above estimates, we get the desired result
\begin{equation*}
\bigg\| \frac{S_{\alpha,L}(f)}{v} \bigg\|_{L^{1,\infty}(uv)}
\lesssim \bigg\| \frac{M(f)}{v} \bigg\|_{L^{1,\infty}(uv)}
\lesssim \| f \|_{L^1(u)}.
\end{equation*}
The proof is complete.
\end{proof}
\subsection{Restricted weak type estimates}
\begin{proof}[{\bf Proof of Theorem \ref{thm:RW}}]
In view of \eqref{eq:S-sparse}, it suffices to show that
\begin{align}
\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)\|_{L^{p, \infty}(w)} \lesssim [w]_{A_p^{\mathcal{R}}}^{p+1} w(E)^{\frac1p}.
\end{align}
Since $\mathcal{S}$ is sparse, for every $Q \in \mathcal{S}$, there exists $E_Q \subset Q$ such that $|E_Q| \simeq |Q|$ and $\{E_Q\}_{Q \in \mathcal{S}}$ is a disjoint family. Note that $Q \subset Q(x, 2\ell(Q)) \subset 3Q$ for any $x \in Q$, where $Q(x, 2\ell(Q))$ denotes the cube centered at $x$ and with sidelength $2\ell(Q)$. Hence, for all $Q \in \mathcal{S}$ and for all $x \in Q$,
\begin{align}\label{eq:EQQ}
\frac{w(Q(x, 2\ell(Q)))}{w(E_Q)} \simeq \frac{w(Q)}{w(E_Q)}
\le [w]_{A_p^{\mathcal{R}}}^p \bigg(\frac{|Q|}{|E_Q|}\bigg)^p
\lesssim [w]_{A_p^{\mathcal{R}}}^p.
\end{align}
By duality, one has
\begin{align}\label{eq:AE}
\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)\|_{L^{p, \infty}(w)}^2
=\|\mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2\|_{L^{p/2, \infty}(w)}
&=\sup_{\substack{0 \le h \in L^{(p/2)', 1}(w) \\ \|h\|_{L^{(p/2)',1}(w)}=1}}
\int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2 hw\, dx.
\end{align}
Fix such $h$ above. Then, using \eqref{eq:EQQ}, the disjointness of $\{E_Q\}_{Q \in \mathcal{S}}$, H\"{o}lder's inequality and \eqref{eq:ME}, we conclude that
\begin{align}\label{eq:AEE}
&\int_{\mathbb{R}^n} \mathcal{A}_{\mathcal{S}}^2(\mathbf{1}_E)^2 hw\, dx
=\sum_{Q \in \mathcal{S}} \langle \mathbf{1}_E \rangle_Q^2 \int_{Q} hw\, dx
\nonumber\\%
&\le \sum_{Q \in \mathcal{S}} \langle \mathbf{1}_E \rangle_Q^2 \bigg(
\fint_{Q(x, 2\ell(Q))} h\, dw\bigg) w(E_Q) \bigg(\frac{w(Q(x, 2\ell(Q)))}{w(E_Q)}\bigg)
\nonumber\\%
&\lesssim [w]_{A_p^{\mathcal{R}}}^p \sum_{Q \in \mathcal{S}} \Big(\inf_Q M\mathbf{1}_E\Big)^2 \Big(\inf_Q M_w^c h\Big) w(E_Q)
\nonumber\\%
&\le [w]_{A_p^{\mathcal{R}}}^p \int_{\mathbb{R}^n} M\mathbf{1}_E(x)^2 M_w^c h(x) w\, dx
\nonumber\\%
&\le [w]_{A_p^{\mathcal{R}}}^p \|(M\mathbf{1}_E)^2\|_{L^{p/2,\infty}(w)} \|M_w^c h\|_{L^{(p/2)', 1}(w)}
\nonumber\\%
&\lesssim [w]_{A_p^{\mathcal{R}}}^{p+2} w(E)^{\frac2p}.
\end{align}
Therefore, \eqref{eq:AE} and \eqref{eq:AEE} immediately imply \eqref{eq:SLE}.
\end{proof}
\subsection{Endpoint estimates for commutators}
Recall that the sharp maximal function of $f$ is defined by
\[
M_{\delta}^{\sharp}(f)(x):=\sup_{x\in Q} \inf_{c \in \mathbb{R}} \bigg(\fint_{Q}\big||f|^{\delta}-c\big|dx\bigg)^{\frac{1}{\delta}}.
\]
If we write $Q_{t, L} := t^m Le^{-t^m L}$, then
\begin{align*}
C_b(S_{\alpha, L})f(x) &= \bigg(\iint_{\Gamma_{\alpha}(x)} |Q_{t, L} \big((b(x)-b(\cdot))f(\cdot) \big)(y)|^2 \frac{dydt}{t^{n+1}} \bigg)^{\frac12}.
\end{align*}
\begin{lemma}\label{lem:MMf}
For any $0<\delta<1$,
\begin{align}
M_{\delta}^{\#}(\widetilde{S}_{\alpha, L} f)(x_{0}) \lesssim \alpha^{2n} Mf(x_{0}), \quad \forall x_{0} \in \mathbb{R}^n.
\end{align}
\end{lemma}
\begin{proof}
Fix an arbitrary cube $Q \ni x_{0}$. The lemma will be proved if we can show that
\[
\left(\fint_{Q}|\widetilde{S}_{\alpha,L}(f)^{2}(x)-c_{Q}|^{\delta}dx\right)^{\frac{1}{\delta}}\lesssim \alpha^{2n}Mf(x_{0})^{2},
\]
where $c_{Q}$ is a constant which will be determined later.
Denote $T(Q)=Q\times(0,\ell(Q))$. We write
\[
\widetilde{S}_{\alpha,L}(f)^{2}(x) =E(f)(x)+F(f)(x),
\]
where
\begin{align*}
E(f)(x) &:=\iint_{T(2Q)}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)|Q_{t, L}f(y)|^{2}\frac{dydt}{t^{n+1}},
\\
F(f)(x) &:= \iint_{\mathbb{R}_{+}^{n+1}\backslash T(2Q)}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)|Q_{t, L}f(y)|^{2}\frac{dydt}{t^{n+1}}.
\end{align*}
Let us choose $c_{Q}=F(f)(x_{Q})$ where $x_{Q}$ is the center of $Q$. Then
\begin{align*}
\bigg(&\fint_{Q}|\widetilde{S}_{\alpha,L}(f)^{2}-c_{Q}|^{\delta} dx\bigg)^{\frac{1}{\delta}}
=\left(\fint_{Q}|E(f)(x)+F(f)(x)-c_{Q}|^{\delta} dx\right)^{\frac{1}{\delta}}
\\
&\qquad \lesssim\left(\fint_{Q}|E(f)(x)|^{\delta} dx\right)^{\frac{1}{\delta}}+\left(\fint_{Q}|F(f)(x)-F(f)(x_{Q})|^{\delta}dx \right)^{\frac{1}{\delta}}
=:I+II
\end{align*}
We estimate each term separately. For the first term $I$, we set $f=f_0+f^\infty$, where $f_{0}=f\chi_{Q^{*}}, f^\infty=f\chi_{(Q^{*})^{c}}$ and $Q^{*}=8Q$. Then we have
\begin{equation}\label{E0}
E(f)(x)\lesssim E(f_{0})(x)+E(f^\infty)(x).
\end{equation}
Therefore,
\[
\left(\fint_{Q}|E(f)(x)|^{\delta} dx\right)^{\frac{1}{\delta}}
\lesssim\left(\fint_{Q}|E(f_{0})(x)|^{\delta} dx\right)^{\frac{1}{\delta}}
+\left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}.
\]
It was proved in \cite[p.~884]{BD} that
$\|\widetilde{S}_{\alpha,L}(f)\|_{L^{1,\infty}}\lesssim \alpha^{n}\|S_{1,L}(f)\|_{L^{1,\infty}}$.
Then, by \eqref{eq:S11} and Kolmogorov's inequality, we have
\begin{align}\label{E1}
\bigg(\fint_{Q} & |E(f_{0})(x)|^{\delta} dx\bigg)^{\frac{1}{\delta}}
\leq\left(\fint_{Q}|\widetilde{S}_{\alpha,L}(f_0)|^{2\delta} dx \right)^{\frac{1}{\delta}}
\nonumber\\
& \lesssim\|\widetilde{S}_{\alpha,L}(f_0)\|_{L^{1,\infty}(Q,\frac{dx}{|Q|})}^{2}
\lesssim \alpha^{2n}\Big(\fint_{Q^{*}}|f_{0}(x)|dx\Big)^{2}.
\end{align}
On the other hand,
\[
\begin{aligned}
\left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}&\lesssim\frac 1{|Q|}\int_{\mathbb{R}^{n}}\int_{T(2Q)}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)\Big|Q_{t, L}f^\infty(y)\Big|^{2}\frac{dydt}{t^{n+1}}dx\\
&\lesssim\frac{\alpha^{2n}}{|Q|}\int_{T(2Q)}|Q_{t, L}(f^\infty)(y)|^{2}\frac{dydt}{t},
\end{aligned}
\]
since
$\int_{\mathbb{R}^{n}}\Phi\Big(\frac{|x-y|}{\alpha t}\Big)dx\leq c_{n}(\alpha t)^{n}$.
For any $k\in \mathbb{N}_+$, let $p_{k,t}(x,y)$ denote the kernel of the operator $(tL)^k e^{-tL}$. Note that condition (A2) implies that for any $\delta_0>0$, there exist $C, c>0$ such that
\begin{align}\label{kernelestimate}
|p_{k,t}(x,y)|\leq \frac{C}{t^{n/ m}} \bigg(\frac{t^{1/ m}}{t^{1/ m}+|x-y|}\bigg)^{n+\delta_0},\; \text{for all} \; x, y\in \mathbb{R}^n.
\end{align}
Thus, \eqref{kernelestimate} implies that
\[
\begin{aligned}
&\bigg(\int_{2Q}|Q_{t, L}(f^\infty)(y)|^{2}dy\bigg)^{1/ 2}\\
&\lesssim \sum_{j\geq 3}\bigg\{\int_{2Q} \bigg[\int_{2^{j+1}Q\setminus 2^j Q}\frac{1}{t^n}\bigg(\frac{t}{t+|y-z|}\bigg)^{n+\delta_0}|f(z)|dz\bigg]^2dy\bigg\}^{1/2}\\
&\lesssim \sum_{j\geq 3}\bigg\{\int_{2Q} \bigg[\int_{2^{j+1}Q\setminus 2^j Q}\frac{1}{t^n}\bigg(\frac{t}{2^j\ell(Q)}\bigg)^{n+\delta_0}|f(z)|dz\bigg]^2dy\bigg\}^{1/2}\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |2Q|^{1/ 2} \sum_{j\geq 3} \frac{1}{2^{j\delta_0}} \bigg(\fint_{2^{j}Q}|f(z)|dz\bigg).
\end{aligned}
\]
Then one has
\begin{align*}
& \left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}
\\
& \lesssim \bigg[\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx \bigg)\bigg]^{2}\frac{\alpha^{2n}}{|Q|}\int_{0}^{2\ell(Q)}|2Q|\Big(\frac{t}{\ell(Q)}\Big)^{2\delta_0}\frac{dt}{t}
\\
&\lesssim \alpha^{2n}\bigg[\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx \bigg)\bigg]^{2}
\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2},
\end{align*}
where in the last inequality we used H\"older's inequality.
Therefore, we obtain that
\begin{equation}\label{E2}
\left(\fint_{Q}|E(f^\infty)(x)|^{\delta}dx\right)^{\frac{1}{\delta}}
\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta_0}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2}.
\end{equation}
Gathering \eqref{E0}, \eqref{E1} and \eqref{E2}, we deduce that
\[
I\lesssim\alpha^{2n}M(f)(x_{0})^{2}.
\]
To estimate the second term $II$, we shall use the following estimate \cite[Eq (35)]{BD}:
\begin{equation}
|F(f)(x)-F(f)(x_{Q})|\lesssim \alpha^{2n}\sum_{l=0}^{\infty}\frac 1{2^{l\delta}}\bigg(\fint_{2^{l}Q}|f| dx\bigg)^{2},
\end{equation}
for some $\delta>0$ and all $x\in Q$, where $x_Q$ is the center of $Q$. As a consequence, we have
\[
II=\left(\fint_{Q}|F(f)(x)-F(f)(x_{Q})|^{\delta} dx \right)^{\frac{1}{\delta}}\lesssim\alpha^{2n}Mf(x_{0})^{2}.
\]
This finishes the proof.
\end{proof}
\begin{lemma}\label{lem:MSL}
For any $0<\delta<\varepsilon<1$ and for any $b \in \mathrm{BMO}$,
\begin{align}
M_{\delta}^{\#}(C_b(\widetilde{S}_{\alpha, L})f)(x) \lesssim
\|b\|_{\mathrm{BMO}} \big(M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x) \big).
\end{align}
\end{lemma}
\begin{proof}
Let $x\in \mathbb{R}^n$, and let $Q$ be an arbitrary cube containing $x$. It suffices to show that there exists $c_Q$ such that
\begin{align}\label{eq:Cbb}
\mathscr{A}&:=\bigg(\fint_Q |C_b(\widetilde{S}_{\alpha, L})f(z)-c_Q|^{\delta} dz\bigg)^{\frac{1}{\delta}}
\lesssim \|b\|_{\mathrm{BMO}} \big(M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x) \big).
\end{align}
Split $f=f_1+f_2$, where $f_1=f \mathbf{1}_{8Q}$. Then, we have
\begin{align}\label{eq:AAA}
\mathscr{A} & \lesssim \bigg(\fint_{Q} |(b(z)-b_Q)\widetilde{S}_{\alpha, L}(f)(z)|^{\delta} dz\bigg)^{\frac{1}{\delta}}
\nonumber\\
&\qquad+ \bigg(\fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_{1})(z)|^{\delta} dz\bigg)^{\frac{1}{\delta}}
\nonumber\\
&\qquad + \bigg(\fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)-c_Q|^{\delta} dz\bigg)^{\frac{1}{\delta}}
\nonumber\\
&:=\mathscr{A}_1 + \mathscr{A}_2 + \mathscr{A}_3.
\end{align}
To bound $\mathscr{A}_1$, we choose $r \in (1, \varepsilon/\delta)$. H\"{o}lder's inequality gives that
\begin{align}\label{eq:A1}
\mathscr{A}_1
& \leq \bigg(\fint_Q |b(z)-b_Q|^{\delta r'} dz\bigg)^{\frac{1}{\delta r'}}
\bigg(\fint_Q |\widetilde{S}_{\alpha, L}f(z)|^{\delta r} dz\bigg)^{\frac{1}{\delta r}}
\nonumber\\
&\lesssim \|b\|_{\mathrm{BMO}} M_{\delta r}(\widetilde{S}_{\alpha, L}f)(x)
\le \|b\|_{\mathrm{BMO}} M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x).
\end{align}
Since $\widetilde{S}_{\alpha, L}: L^{1}(\mathbb{R}^n)\rightarrow L^{1,\infty}(\mathbb{R}^n)$ and $0<\delta <1$, there holds
\begin{align}\label{eq:A2}
\mathscr{A}_2 & \lesssim \|\widetilde{S}_{\alpha, L}((b-b_Q)f_1)\|_{L^{1,\infty}(Q, \frac{dx}{|Q|})}
\lesssim \fint_Q |(b-b_Q)f_1|dz
\nonumber\\
&\lesssim \|b-b_Q\|_{\exp L, Q} \|f\|_{L\log L, Q} \lesssim \|b\|_{\mathrm{BMO}} M_{L\log L}(f)(x).
\end{align}
For the last term, we take $c_Q=\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z_Q)$, where $z_Q$ is the center of $Q$. We have
\begin{align}\label{eq:A3}
\mathscr{A}_3 \le \fint_Q |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)-c_Q|\, dz
=:\fint_Q J_Q(z)\, dz \le \bigg(\fint_Q J_Q(z)^2\, dz\bigg)^{\frac12}.
\end{align}
For any cube $Q\subset \mathbb{R}^n$, set $T(Q)=Q \times (0, \ell(Q))$. Thus, for any $z \in Q$,
\begin{align}\label{eq:JQJQ}
J_Q(z)^2 & \leq |\widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z)^2 - \widetilde{S}_{\alpha, L}((b-b_Q)f_2)(z_Q)^2|
\nonumber\\
&\leq \iint_{T(2Q)} \Phi \Big(\frac{z-y}{\alpha t}\Big) |Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}}
\nonumber\\
&\qquad+ \iint_{T(2Q)} \Phi \Big(\frac{z_Q-y}{\alpha t}\Big) |Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}}
\nonumber\\
&\qquad + \iint_{\mathbb{R}^{n+1}_{+} \setminus T(2Q)} \Big|\Phi \Big(\frac{z-y}{\alpha t}\Big) - \Phi \Big(\frac{z_Q-y}{\alpha t}\Big)\Big|
|Q_{t, L}((b-b_Q)f_2)(y)|^2 \frac{dydt}{t^{n+1}}
\nonumber\\
&=: J_{Q, 1}(z)+J_{Q, 2}(z)+ J_{Q, 3}(z).
\end{align}
In order to estimate $J_{Q, 1}(z)$, we note that
\begin{align}\label{eq:QPhi}
\fint_Q \Phi \Big(\frac{z-y}{\alpha t}\Big) dz
\leq \frac{1}{|Q|} \int_{\mathbb{R}^n} \Phi \Big(\frac{z-y}{\alpha t}\Big) dz
\lesssim \frac{(\alpha t)^{n}}{|Q|}.
\end{align}
Furthermore, the kernel estimate \eqref{kernelestimate} gives that
\begin{align}\label{eq:QtLbb}
\bigg(\int_{2Q} & |Q_{t, L}((b-b_Q)f_2)(y)|^2 dy\bigg)^{\frac12}
\nonumber\\
&\lesssim \sum_{j\geq 3} \bigg\{\int_{2Q} \bigg[\int_{2^{j+1}Q \setminus 2^j Q} \frac{1}{t^{n}}\bigg(\frac{t}{t+|z-y|}\bigg)^{n+\delta_0}|b-b_Q| |f| dz\bigg]^{2}dy\bigg\}^{\frac12}
\nonumber\\
&\lesssim \sum_{j\geq 3}\bigg\{\int_{2Q}\bigg[\int_{2^{j+1}Q \setminus 2^jQ} \frac{1}{t^n} \bigg(\frac{t}{2^{j}\ell(Q)}\bigg)^{n+\delta_0} |b-b_Q| |f| dz\bigg]^2 dy \bigg\}^{\frac12}
\nonumber\\
&\lesssim \sum_{j\geq 3} \bigg(\frac{t}{2^{j}\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}}\fint_{2^{j+1}Q}|b-b_Q| |f| dz
\nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \sum_{j\geq 0} 2^{-j\delta_0} \|b-b_Q\|_{\exp L, 2^{j+1}Q} \|f\|_{L\log L, 2^{j+1}Q}
\nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \sum_{j\geq 0} 2^{-j\delta_0} (j+1) \|b\|_{\mathrm{BMO}} M_{L\log L}f(x)
\nonumber\\
&\lesssim \bigg(\frac{t}{\ell(Q)}\bigg)^{\delta_0} |Q|^{\frac{1}{2}} \|b\|_{\mathrm{BMO}} M_{L\log L}f(x).
\end{align}
Then, gathering \eqref{eq:QPhi} and \eqref{eq:QtLbb}, we obtain
\begin{align}\label{eq:JQ1}
\fint_Q J_{Q, 1}(z)dz
&\leq \iint_{T(2Q)} \bigg(\fint_{Q}\Phi\Big(\frac{z-y}{\alpha t}\Big)dz\bigg) |Q_{t, L}((b-b_Q)f_2)(y)|^{2}\frac{dydt}{t^{n+1}}
\nonumber\\
&\lesssim \frac{1}{|Q|} \int_{0}^{2\ell(Q)} \int_{2Q}|Q_{t, L}((b-b_Q)f_2)(y)|^2 dy \frac{dt}{t}
\nonumber\\
&\lesssim \int_{0}^{2\ell(Q)} \bigg(\frac{t}{\ell(Q)}\bigg)^{2\delta_0} \frac{dt}{t} \, \|b\|_{\mathrm{BMO}}^2 M_{L\log L}f(x)^2
\nonumber\\
&\lesssim \|b\|_{\mathrm{BMO}}^2 M_{L\log L}f(x)^2.
\end{align}
Similarly, one has
\begin{align}\label{eq:JQ2}
\fint_Q J_{Q, 2}(z)dz \lesssim \|b\|_{\mathrm{BMO}}^2 M_{L\log L}f(x)^2.
\end{align}
To control $J_{Q, 3}$, invoking \cite[eq. (35)]{BD}, we have
\begin{align}\label{eq:JQ3}
J_{Q, 3}(z) &\lesssim \sum_{j\geq 0} 2^{-j\delta_{0}} \bigg(\fint_{2^j Q} |b-b_Q| |f| dz \bigg)^2
\nonumber\\
&\lesssim \sum_{j \geq 0} 2^{-j\delta_0} \|b-b_Q\|^{2}_{\exp L, 2^j Q} \|f\|^{2}_{L\log L,2^j Q}
\nonumber\\
&\lesssim \sum_{j\geq 0} 2^{-j\delta_0} (j+1)^2 \|b\|_{\mathrm{BMO}}^2 M_{L\log L}f(x)^2
\lesssim \|b\|_{\mathrm{BMO}}^2 M_{L\log L}f(x)^2.
\end{align}
Combining \eqref{eq:A3}, \eqref{eq:JQJQ}, \eqref{eq:JQ1}, \eqref{eq:JQ2} and \eqref{eq:JQ3}, we conclude that
\begin{equation}\label{eq:A33}
\mathscr{A}_3 \lesssim \|b\|_{\mathrm{BMO}} M_{L\log L}f(x).
\end{equation}
Therefore, \eqref{eq:Cbb} immediately follows from \eqref{eq:AAA}, \eqref{eq:A1}, \eqref{eq:A2} and \eqref{eq:A33}.
\end{proof}
\begin{lemma}\label{lem:WEC}
For any $w \in A_{\infty}$ and $b\in \mathrm{BMO}$,
\begin{equation}\label{eq:SLML}
\begin{split}
\sup_{t>0} \Phi(1/t)^{-1} & w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{L\log L}f(x)>t\}),
\end{split}
\end{equation}
for all $f \in L^{\infty}_c(\mathbb{R}^n)$.
\end{lemma}
\begin{proof}
Recall the weak type Fefferman-Stein inequality:
\begin{align}\label{WF-S}
\sup_{\lambda>0} \varphi(\lambda)\omega(\{x\in \mathbb{R}^n:M_{\delta}f(x)>\lambda\})
\leq \sup_{\lambda>0}\varphi(\lambda)\omega(\{x\in \mathbb{R}^n:M^{\sharp}_{\delta}f(x)>\lambda\})
\end{align}
for all functions $f$ for which the left-hand side is finite, where $\varphi:(0,\infty)\rightarrow(0,\infty)$ is doubling. We may assume that the right-hand side of \eqref{eq:SLML} is finite since otherwise there is nothing to be proved. Now by the Lebesgue differentiation theorem we have
\begin{align*}
\mathscr{B}:=&\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\})
\\
=&\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(\widetilde{S}_{\alpha,L})f(x)|>t\})
\\
\leq & \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{\delta}(C_b(\widetilde{S}_{\alpha,L})f)(x)>t\}).
\end{align*}
Then Lemma \ref{lem:MMf}, Lemma \ref{lem:MSL} and \eqref{WF-S} give that
\begin{align*}
\mathscr{B} & \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{\delta}^{\#}(C_b(\widetilde{S}_{\alpha, L})f)(x) >t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: M_{L\log L}f(x) +M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>c_0t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\})
\\
&\quad + \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\})\\
&\quad +\sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M^{\sharp}_{\varepsilon}(\widetilde{S}_{\alpha, L}f)(x)>t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\})
\\
&\quad + \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M(f)(x)>t\})
\\
&\lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\}).
\end{align*}
The proof is complete.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:SbA1}.}]
Let $w \in A_1$. By homogeneity, it is enough to prove
\begin{align}\label{eq:Cbt=1}
w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>1\}) \lesssim \int_{\mathbb{R}^n} \Phi(|f(x)|) w(x) dx.
\end{align}
Let us recall a result from \cite[Lemma~2.11]{CY} for $m=1$. For any $w \in A_1$,
\begin{align}\label{eq:MLL}
w(\{x\in \mathbb{R}^n: M_{L\log L}f(x)>t\}) \lesssim \int_{\mathbb{R}^n} \Phi \bigg(\frac{|f(x)|}{t}\bigg) w(x) dx, \quad\forall t>0.
\end{align}
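Here $\Phi$ denotes, as in the earlier sections, the Young function associated with $L\log L$; assuming the usual normalization $\Phi(t)=t(1+\log^+t)$ (which is not restated in this section), its submultiplicativity is checked in one line:
\begin{align*}
\Phi(st)=st\big(1+\log^+(st)\big) \le st\big(1+\log^+ s\big)\big(1+\log^+ t\big)=\Phi(s)\Phi(t), \qquad s,t>0,
\end{align*}
since $\log^+(st)\le \log^+ s+\log^+ t \le \log^+ s + \log^+ t + \log^+ s\,\log^+ t$.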
Since $\Phi$ is submultiplicative, Lemma \ref{lem:WEC} and \eqref{eq:MLL} imply that
\begin{align*}
&w(\{x\in \mathbb{R}^n: C_b(S_{\alpha,L})f(x)>1\})
\\
&\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n: |C_b(S_{\alpha,L})f(x)|>t\})
\\
&\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} w(\{x\in \mathbb{R}^n:M_{L\log L}f(x)>t\})
\\
&\qquad \lesssim \sup_{t>0} \Phi(1/t)^{-1} \int_{\mathbb{R}^n} \Phi \bigg(\frac{|f(x)|}{t}\bigg) w(x) dx
\\
&\qquad \leq \sup_{t>0} \Phi(1/t)^{-1} \int_{\mathbb{R}^n} \Phi (|f(x)|) \Phi(1/t) w(x)dx
\\
&\qquad\le \int_{\mathbb{R}^n} \Phi(|f(x)|) w(x) dx.
\end{align*}
This shows \eqref{eq:Cbt=1} and hence Theorem \ref{thm:SbA1}.
\end{proof}
\mathfrak{m}athbf{b}egin{thebibliography}{999}
\bibitem{ACM}T. Anderson, D. Cruz-Uribe and K. Moen,
\emph{Logarithmic bump conditions for Calder\'{o}n-Zygmund operators on spaces of homogeneous type},
Publ. Mat. \textbf{59} (2015), 17--43.
\bibitem{ADM}P. Auscher, X.T. Duong and A. McIntosh,
\emph{Boundedness of Banach space valued singular integral operators and Hardy spaces},
Unpublished preprint (2005).
\bibitem{BD}T. A. Bui, X. T. Duong,
\emph{Sharp weighted estimates for square functions associated to operators on spaces of homogeneous type},
J. Geom. Anal. \textbf{30} (2020), 874--900.
\bibitem{CMM}M. Cao, J.J. Mar\'{i}n and J.M. Martell,
\emph{Extrapolation on function spaces and applications},
preprint (2020).
\bibitem{CO}M. Cao and A. Olivo,
\emph{Two-weight extrapolation on function spaces and applications},
preprint (2020).
\bibitem{CXY}M. Cao, Q. Xue, K. Yabuta,
\emph{Weak and strong type estimates for the multilinear pseudo-differential operators},
J. Funct. Anal. \textbf{278} (2020), 108454.
\bibitem{CY}M. Cao and K. Yabuta,
\emph{The multilinear Littlewood-Paley operators with minimal regularity conditions},
J. Fourier Anal. Appl. \textbf{25} (2019), 1203--1247.
\bibitem{CMcM}R. R. Coifman, A. McIntosh and Y. Meyer,
\emph{L'int\'egrale de Cauchy d\'efinit un op\'erateur born\'e sur $L^2$ pour les courbes lipschitziennes},
Ann. Math. \textbf{116} (1982), 361--387.
\bibitem{CP}D. Cruz-Uribe and C. P\'erez,
\emph{Two weight extrapolation via the maximal operator},
J. Funct. Anal. \textbf{174} (2000), 1--17.
\bibitem{CMP}D. Cruz-Uribe, J. M. Martell and C. P\'{e}rez,
\emph{Weighted weak-type inequalities and a conjecture of Sawyer},
Int. Math. Res. Not. \textbf{30} (2005), 1849--1871.
\bibitem{CMP11}D. Cruz-Uribe, J. M. Martell and C. P\'{e}rez,
\emph{Weights, extrapolation and the theory of Rubio de Francia},
Operator Theory: Advances and Applications, Vol. 215, Birkh\"auser/Springer Basel AG, Basel, 2011.
\bibitem{CP99}D. Cruz-Uribe and C. P\'erez,
\emph{Sharp two-weight, weak-type norm inequalities for singular integral operators},
Math. Res. Lett. \textbf{6} (1999), 1--11.
\bibitem{DL}X.T. Duong and J. Li,
\emph{Hardy spaces associated to operators satisfying Davies-Gaffney estimates and
bounded holomorphic functional calculus},
J. Funct. Anal. \textbf{264} (2013), 1409--1437.
\bibitem{F2}C. Fefferman,
\emph{The uncertainty principle},
Bull. Amer. Math. Soc. \textbf{9} (1983), 129--206.
\bibitem{DY}X. T. Duong, L. Yan,
\emph{Duality of Hardy and $\mathrm{BMO}$ spaces associated with operators with heat kernel bounds},
J. Amer. Math. Soc. \textbf{18} (2005), 943--973.
\bibitem{HM}S. Hofmann and S. Mayboroda,
\emph{Hardy and $\mathrm{BMO}$ spaces associated to divergence form elliptic operators},
Math. Ann. \textbf{344} (2009), 37--166.
\bibitem{HLMMY}S. Hofmann, G. Lu, D. Mitrea, M. Mitrea and L.X. Yan,
\emph{Hardy spaces associated to non-negative self-adjoint operators satisfying Davies-Gaffney estimates},
Mem. Amer. Math. Soc. \textbf{214} (2011).
\bibitem{Hy2}T. Hyt\"{o}nen,
\emph{The $A_2$ theorem: remarks and complements},
Contemp. Math. 612, Amer. Math. Soc., 91--106, Providence (2014).
\bibitem{HP}T. Hyt\"{o}nen and C. P\'{e}rez,
\emph{Sharp weighted bounds involving $A_{\infty}$},
Anal. PDE \textbf{6} (2013), 777--818.
\bibitem{Jose}J. Garc\'{i}a-Cuerva and J.L. Rubio de Francia,
\emph{Weighted norm inequalities and related topics}, volume 116 of North-Holland
Mathematics Studies. North-Holland Publishing Co., Amsterdam (1985).
\bibitem{KT}R. Kerman and A. Torchinsky,
\emph{Integral inequalities with weights for the Hardy maximal function},
Stud. Math. \textbf{71} (1982), 277--284.
\bibitem{Ler11}A. K. Lerner,
\emph{Sharp weighted norm inequalities for Littlewood--Paley operators and singular integrals},
Adv. Math. \textbf{226} (2011), 3912--3926.
\bibitem{LN}A. K. Lerner and F. Nazarov,
\emph{Intuitive dyadic calculus}.
Available at http://www.math.kent.edu/ zvavitch/Lerner Nazarov Book.pdf.
\bibitem{LW}J. Li and B.D. Wick,
\emph{Characterizations of $H_{\Delta_N}^1(\mathbb{R}^n)$ and $\mathrm{BMO}_{\Delta_N}(\mathbb{R}^n)$ via weak factorizations and commutators},
J. Funct. Anal. \textbf{272} (2017), 5384--5416.
\bibitem{LOPi}K. Li, S. Ombrosi and B. Picardi,
\emph{Weighted mixed weak-type inequalities for multilinear operators},
Studia Math. \textbf{244} (2019), 203--215.
\bibitem{LOP}K. Li, S. Ombrosi, C. P\'{e}rez,
\emph{Proof of an extension of E. Sawyer's conjecture about weighted mixed weak-type estimates},
Math. Ann. \textbf{374} (2019), 907--929.
\bibitem{MP-1}J.M. Martell and C. Prisuelos-Arribas,
\emph{Weighted Hardy spaces associated with elliptic operators. Part I: Weighted norm inequalities for conical square functions}, Trans. Amer. Math. Soc. \textbf{369} (2017), 4193--4233.
\bibitem{MP-2}J.M. Martell and C. Prisuelos-Arribas,
\emph{Weighted Hardy spaces associated with elliptic operators. Part II: Characterizations of $H^1_L(w)$},
Publ. Mat. \textbf{62} (2018), 475--535.
\bibitem{MW}B. Muckenhoupt and R. Wheeden,
\emph{Some weighted weak-type inequalities for the Hardy-Littlewood maximal function and the Hilbert transform},
Indiana Math. J. \textbf{26} (1977), 801--816.
\bibitem{OPR}C. Ortiz-Caraballo, C. P\'{e}rez and E. Rela,
\emph{Exponential decay estimates for singular integral operators},
Math. Ann. \textbf{357} (2013), 1217--1243.
\bibitem{PW}C. P\'{e}rez and R. Wheeden,
\emph{Uncertainty principle estimates for vector fields},
J. Funct. Anal. \textbf{181} (2001), 146--188.
\bibitem{S83}E.T. Sawyer,
\emph{Norm inequalities relating singular integrals and maximal function},
Studia Math. \textbf{75} (1983), 254--263.
\end{thebibliography}
\end{document} |
\begin{document}
\title{Virtual knot theory on a group}
\begin{abstract}
\footnotesize
Given a group endowed with a $\mathbb{Z}/2$-valued morphism, we associate to it a Gauss diagram theory, and show that for a particular choice of the group these diagrams faithfully encode virtual knots on a given arbitrary surface.
This theory contains all of the earlier attempts to decorate Gauss diagrams, in a way that is made precise via \textit{symmetry-preserving maps}. These maps become crucial when one makes use of decorated Gauss diagrams to describe finite-type invariants. In particular they allow us to generalize Grishanov-Vassiliev's formulas and to show that they define invariants of virtual knots.
\end{abstract}
\tableofcontents
\vspace*{0.2cm}
Gauss diagrams were introduced in knot theory as a means of representing knots and their finite-type invariants \citep{PolyakViro, Goussarov}, allowing compactification and generalization of formulas due to J.Lannes \citep{Lannes}. Since then, several generalizations have been attempted to adapt them to knot theory in thickened surfaces by decorating them with topological information \citep{Fiedler, Grishanov, MortierPolyakEquations}.
Our goal is to construct a unifying \enquote{father} framework, and to describe how to get down from there to other versions with less data.
First we define and study (virtual) knot diagrams on an arbitrary surface $\Sigma$: these are tetravalent graphs embedded in $\Sigma$, some of whose double points (the \enquote{real} ones) are pushed and desingularized into a real line bundle over $\Sigma$. Defining Gauss diagrams requires a global notion for the branches at a real crossing to be one \enquote{over} the other, and a global notion of writhe of a crossing. It is shown that these notions can be defined simultaneously if and only if $\Sigma$ is orientable. If it is not, we sacrifice the globality of one property, and take into account its monodromy. It is shown that when the total space of the bundle is orientable, the writhes are globally defined and the monodromy of the \enquote{over/under} datum is the first Stiefel-Whitney class of the tangent bundle to $\Sigma$, $w_1(\Sigma)$.
In Section~\ref{sec:VKTG} is given a definition of Gauss diagrams decorated by elements of a fixed group $\pi$, subject to the usual Reidemeister moves, and to additional \enquote{conjugacy moves}, depending on a fixed group homomorphism $w:\pi\rightarrow \mathbb{F}_2$. It is shown that when there is a surface $\Sigma$ such that $\pi=\pi_1(\Sigma)$ and $w=w_1(\Sigma)$, then there is a $1-1$ correspondence between Gauss diagrams and virtual knot diagrams, that induces a correspondence between the equivalence classes (virtual knot types) on both sides.
A lighter kind of Gauss diagrams, called \textit{abelian}, is defined in Subsection~\ref{subsec:abelian} following the idea of T.~Fiedler's $H_1(\Sigma)$-decorated diagrams (\cite{Fiedler}) and shown to be equivalent to the above when $\pi$ is abelian and $w$ is trivial. A minor drawback of this version is that it becomes more difficult to compute the homological decoration of an arbitrary loop. Two formulas are presented in Subsection~\ref{subsec:homform} to sort this out, involving quite unexpected combinatorial tools.
Finally, we describe invariance criteria for the analog of Goussarov-Polyak-Viro's invariants \citep{GPV} in this framework. As an application, we obtain a generalization of Grishanov-Vassiliev formulas \citep{Grishanov}, and a notion of Whitney index for virtual knots whose underlying immersed curve is non-nullhomotopic.
\section{Preliminary: classical Gauss diagrams and their Reidemeister moves}\label{subsec:classical}
\begin{definition}A \textit{classical Gauss diagram} is an equivalence class of an oriented circle in which a finite number of couples of points are linked by an abstract oriented arrow with a sign decoration, up to positive homeomorphism of the circle. A Gauss diagram with $n$ arrows is said to be of \textit{degree} $n$.
\end{definition}
It may happen that one regards Gauss diagrams as topological objects (drawing loops on them, considering their first homology). In that case, one must beware of the fact that \textit{the arrows do not topologically intersect} -- that is what is meant by \enquote{abstract}. However, the fact that two arrows may \textit{look like} they intersect is something combinatorially well-defined, and interesting for many purposes.
\textbf{Fact:}
There is a natural way to associate a Gauss diagram with a knot diagram in the sphere $\mathbb{S}^2$, from which the knot diagram can be uniquely recovered. Fig.\ref{1} illustrates this fact.
\begin{figure}
\caption{The writhe convention, a diagram of the figure eight knot, and its Gauss diagram -- the letters are here only for the sake of clarity.}
\label{1}
\end{figure}
However, not every Gauss diagram actually comes from a knot diagram in that way. This observation has led to the development of virtual knot theory \citep{KauffmanVKT99}: basically a virtual knot is a Gauss diagram which does not come from an actual knot. There is a knot-diagrammatic version of these, using virtual crossings subject to virtual Reidemeister moves - that can be thought of as a unique \enquote{detour move}. A detour move is naturally any move that leaves the underlying Gauss diagram unchanged.
Of course virtual knot diagrams are also subject to the usual Reidemeister moves, and these do change the face of the Gauss diagram.
We call them R-moves for simplicity - and to make it clear whether knot diagrams or Gauss diagrams are considered. Here is a combinatorial description of R-moves.
\subsubsection*{R$_1$-moves}
An R$_1$-move is the birth or death of an isolated arrow, as shown in Fig.\ref{Rmoves} (top-left). There is no restriction on the direction or the sign of the arrow.
\subsubsection*{R$_2$-moves}
An R$_2$-move is the birth or death of a pair of arrows with different signs, whose heads are consecutive as well as their tails (Fig.\ref{Rmoves}, top-right).
If one restricts oneself to Gauss diagrams that come from classical knot diagrams, then there is an additional condition as for the creating direction: indeed, two arcs in a knot diagram can be subject to a Reidemeister II move if and only if they \textit{face each other}. In the virtual world, there is no such condition since any two arcs can be brought to face each other by detour moves.
It may be good to know that this condition can be read directly on the Gauss diagram: indeed, two arcs face each other in a knot diagram if one can join them by walking along the diagram and \textit{turning to the left} at each time one meets a crossing. Thanks to the decorations of the arrows, it makes sense for a path in a Gauss diagram to \textit{turn to the left}.
\begin{figure}
\caption{The moves $\textrm{R}_1$ (top left), $\textrm{R}_2$ (top right) and $\textrm{R}_3$ (bottom) for Gauss diagrams.}
\label{Rmoves}
\end{figure}
\subsubsection*{R$_3$-moves}
\begin{definition}\label{def_epsilon}
In a classical Gauss diagram of degree $n$, the complement of the arrows is made of $2n$ oriented components. These are called the \textit{edges} of the diagram. In a diagram with no arrow, we still call the whole circle an edge.
Let $e$ be an edge in a Gauss diagram, between two consecutive arrow ends that do not belong to the same arrow.
Put
$$\eta (e)=\left\lbrace \begin{array}{l}
+1 \text{ if the arrows that bound $e$ cross each other} \\
-1 \text{ otherwise}
\end{array}\right. ,$$
and let $\uparrow\!\!(e)$ be the number of arrowheads at the boundary of $e$.
Then define
$$\varepsilon (e)= \eta(e)\cdot(-1)^{\uparrow(e)}.$$
Finally, define $\mathrm{w}(e)$ as the product of the writhes of the two arrows at the boundary of $e$.
\end{definition}
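To fix ideas, here is a small worked instance of Definition~\ref{def_epsilon} (the configuration is hypothetical and only meant to illustrate the signs). Suppose the two arrows bounding an edge $e$ cross each other, that exactly one of the two arrow ends adjacent to $e$ is a head, and that the arrows carry writhes $+1$ and $-1$. Then
$$\eta(e)=+1,\qquad \uparrow\!\!(e)=1,\qquad \varepsilon(e)=\eta(e)\cdot(-1)^{\uparrow(e)}=-1,\qquad \mathrm{w}(e)=(+1)(-1)=-1,$$
so that $\mathrm{w}(e)\varepsilon(e)=+1$, the quantity that enters the first condition on R$_3$-moves below.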
An R$_3$-move is the simultaneous switch of the endpoints of three arrows as shown on Fig.\ref{Rmoves} (bottom), with the following conditions:
\begin{enumerate}
\item The value of $\mathrm{w}(e)\varepsilon(e)$ should be the same for all three visible edges $e$. This ensures that the piece of diagram containing the three arrows can be represented in a knot-diagrammatic way without making use of virtual crossings.
\item The values of $\uparrow\!\!(e)$ should be pairwise different. This ensures that one of the arcs in the knot diagram version actually \enquote{goes over} the others.
\end{enumerate}
\begin{remark}
From a simplicial viewpoint, the sign $\mathrm{w}(e)\varepsilon(e)$ gives a natural co-orientation of the $1$-codimensional strata corresponding to R$_3$ moves. This is exploited in \citep{FT1cocycles} to construct finite-type $1$-cocycles.
\end{remark}
\section{Knot and virtual knot diagrams on an arbitrary surface}\label{sec:KDAS}
The goal of this section is to examine when and how one can define a couple of equivalent theories \enquote{virtual knots $-$ Gauss diagrams} that generalizes knot theory in an arbitrary $3$-manifold $M$. What first appears is that a Gauss diagram depends on a projection; so it seems unavoidable to ask for the existence of a surface $\Sigma$ (maybe with boundary, non-orientable, or non-compact), and a \enquote{nice} map $p:M\rightarrow \Sigma$. For the \textit{over} and \textit{under} branches at a crossing to be well-defined at least locally, the fibers of $p$ need to be equipped with a total order: this leaves only the possibility of a real line bundle.
\subsection{Thickenings of surfaces}
Let us now split the discussion according to the two kinds of decorations that one would expect to find on a Gauss diagram: signs (local writhes), and orientation of the arrows.
\subsubsection*{Local writhes}
For a knot in an arbitrary real line bundle, there are situations in which it is possible to switch over and under in a crossing by a mere \textit{diagram isotopy}. For instance, in the non-trivial line bundle over the annulus $\mathbb{S}^1\times\mathbb{R}$, a full rotation of the closure of the two-stranded elementary braid $\sigma_1$ turns it into the closure of $\sigma_1^{-1}$ (Fig.~\ref{pic:nonorientable}).
\begin{figure}
\caption{Non trivial line bundle over the annulus -- as one reads from left to right, the knot moves towards the right of the picture.}
\label{pic:nonorientable}
\end{figure}
Fig.~\ref{pic:nonorientable} would be exactly the same (except for the gluing indications) if one considered the trivial line bundle over the Moebius strip. Note that this diagram would then represent a $2$-component link. In fact, it is possible to embed this picture in \textit{any non-orientable total space} of a line bundle over a surface.
This phenomenon reveals the fact that in these cases, there is no way to define the local writhe of a crossing. However, according to \cite{FiedlerSSS} (Definition~1), there is a well-defined writhe as soon as the total space of the bundle is orientable.
\begin{definition}\label{def:thick}
We call a \textit{thickened surface} a real line bundle over a surface, whose total space is orientable.
\end{definition}
\begin{deflemma}\label{def:wfg}
If $M\rightarrow\Sigma$ is a thickened surface, then its first Stiefel-Whitney class coincides with that of the tangent bundle to $\Sigma$.
This class induces a homomorphism $w_1(\Sigma):\pi_1(\Sigma)\rightarrow\mathbb{F}_2$.
The couple $(\pi_1(\Sigma), w_1(\Sigma))$ is called the \textit{weighted fundamental group} of $\Sigma$. Note that in particular the thickening of $\Sigma$ is the trivial bundle $\Sigma\times\mathbb{R}$ if and only if $\Sigma$ is orientable.
\end{deflemma}
\subsubsection*{Arrow orientations}
Note that the writhe of a crossing for a knot in $M\rightarrow\Sigma$ depends only on one choice, that of an orientation for $M$. The important thing is that this choice is \textit{global}, so that it makes sense to compare the writhes of different crossings (they live in \enquote{the same} $\mathbb{Z}/2\mathbb{Z}$).
Similarly, for the orientation of the arrows in a Gauss diagram to simultaneously make sense, one needs a global definition of the over/under datum at the crossings; that is, the fibres of $M\rightarrow\Sigma$ should be simultaneously and consistently oriented. In other words, $M\rightarrow\Sigma$ should be the trivial line bundle.
According to our definition of a thickened surface, this happens only if the surface is orientable.\newline
\textit{So it seems that one has a choice to make, either restricting one's attention to orientable surfaces, or taking into account the monodromy of whatever is not globally defined. Additional \textit{conjugacy moves} will be needed when one defines Gauss diagrams. The convention to consider only fibre bundles with an orientable total space is arbitrary; its only use is to reduce the number of monodromy morphisms to $1$ instead of $2$.}
\subsubsection*{Virtual knot diagrams on an arbitrary surface}
Fix an arbitrary surface $\Sigma$ and denote its thickening by $M\rightarrow\Sigma$.
\begin{definition}\label{def:virtualknotdiags}
A \textit{virtual knot diagram} on $\Sigma$ is a generic immersion $\mathbb{S}^1\rightarrow \Sigma$ whose every double point has been decorated
\begin{itemize}
\item either with the designation \enquote{virtual} (which is nothing but a name),
\item or with a way to desingularize it locally into $M$, up to local isotopy.
\end{itemize}
\end{definition}
These diagrams are subject to the usual Reidemeister moves, dictated by local isotopy in $M$, and to the virtual \enquote{detour} moves which are studied in the next section. As explained before, if one chooses an orientation for $M$, then the real crossings of such a diagram have a well-defined writhe.
\subsection{Diagram isotopies and detour moves}\label{subsec:globalmoves}
Here by \textit{knot diagram} we mean a virtual knot diagram on a fixed arbitrary surface $\Sigma$, as defined above. In this case a \textit{diagram isotopy}, usually briefly denoted by $H:\operatorname{Id}\rightarrow h$, is the datum of a diffeomorphism $h$ of $\Sigma$ together with an isotopy from $\operatorname{Id}_\Sigma$ to $h$. A \textit{detour move} is a boundary-fixing homotopy of an arc that, before and after the homotopy, goes through only virtual crossings (such an arc is called \textit{totally virtual}). Though both of these processes seem rather simple, it will be useful to understand how they interact.
\begin{lemma}\label{lem:iso_detour} A knot diagram obtained from another by a sequence of diagram isotopies alternating with detour moves may always be obtained by a single diagram isotopy followed by detour moves.
\end{lemma}
\begin{proof}
It is enough to show that a detour move $d$ followed by a diagram isotopy $\operatorname{Id}\rightarrow h$ may be replaced with a diagram isotopy followed by a detour move (without changing the initial and final diagrams). The initial diagram is denoted by $D$.
Call $\alpha$ the totally virtual arc that is moved by the detour move. By definition, $d(\alpha)$ is boundary-fixing homotopic to $\alpha$, and is totally virtual too. Thus, $h\left( d\left( \alpha\right) \right) $ and $h(\alpha)$ are totally virtual and boundary-fixing homotopic to each other. Since $h\left(d\left(D\right) \right) $ and $h(D)$ differ only by these two arcs, it follows that there is a detour move taking $h(D)$ to $h\left( d\left( D\right) \right) $.
\end{proof}
Now an interesting question about diagram isotopies is when two of them lead to diagrams that are equivalent \textit{under detour moves}. Here is a quite useful sufficient condition.
\begin{definition}\label{def:generalizedbraids}
Let $X$ and $Y$ be two finite subsets of $\Sigma$ with the same (positive) cardinality $n$. A \textit{generalized braid} in $\Sigma\times \left[ 0,1\right]$ based on the sets $X$ and $Y$ is an embedding $\beta$ of a disjoint union of segments, such that $\operatorname{Im}\beta\cap\left( \Sigma\times \left\lbrace t\right\rbrace\right) $ has cardinality $n$ for each $t$, coincides with $X$ at $t=0$ and with $Y$ at $t=1$.
\end{definition}
Let $D$ be a knot diagram and $H$ a diagram isotopy. Let $P_1,\ldots ,P_n$ denote small neighborhoods of the real crossings $p_1\in P_1,\ldots ,p_n\in P_n$ of $D$,
and set $\mathcal{P}=\cup P_i$. Then, $\coprod H(p_i,\cdot)$ defines a generalized braid $\prescript{H\!\!}{}{\beta}$ in $\Sigma\times \left[ 0,1\right] $ with $n$ strands based on the sets $\left\lbrace p_1,\ldots ,p_n \right\rbrace$ and $\left\lbrace h(p_1),\ldots ,h(p_n) \right\rbrace$. The strand of a braid $\beta$ that intersects $\Sigma\times \left\lbrace 0\right\rbrace$ at $p_i$ is denoted by $\beta_i$.
\begin{proposition}
\label{prop:braid_homotopy}
Let $D$ and $H$ be as above. Then, up to detour moves, $h(D)$ only depends on $D$ and the boundary-fixing homotopy class of $\prescript{H\!\!}{}{\beta}$.
\end{proposition}
\begin{proof}
Let $\gamma$ be a maximal smooth arc of $D$ outside $\mathcal{P}$ (thus totally virtual). It begins at some $P_i$ and ends at some $P_j$ (of course it may happen that $j=i$). Using little arcs inside of $P_i$ and $P_j$ to join the endpoints of $\gamma$ with $p_i$ and $p_j$, one obtains an oriented path $\prescript{H\!\!}{}{\beta}_i^{-1}\gamma\prescript{H\!\!}{}{\beta}_j$.
The obvious retraction of $\Sigma\times\left[ 0,1\right]$ onto $\Sigma\times\left\lbrace 1\right\rbrace$ induces a map
$$\pi_1(\Sigma\times\left[ 0,1\right],h(\mathcal{P})\times\left\lbrace 1\right\rbrace)\longrightarrow \pi_1(\Sigma,h(\mathcal{P}))$$
that sends the class $\left[ \prescript{H\!\!}{}{\beta}_i^{-1}\gamma\prescript{H\!\!}{}{\beta}_j\right]$
to $\left[ h(\gamma)\right]$. Since the former class is unchanged under boundary-fixing homotopy of $\gamma$ and $\prescript{H\!\!}{}{\beta}$, so is the latter, which proves the result.
\end{proof}
This proposition states that the only relevant datum in a diagram isotopy of a virtual knot is the path followed by the real crossings along the isotopy, \textit{up to homotopy}: the entanglement of these paths with each other or themselves does not matter. It follows that the crossings may be moved one at a time:
\begin{corollary}\label{1by1}
Let $D$ be a knot diagram with its real crossings numbered from $1$ to $n$, and let $H:\operatorname{Id}\rightarrow h$ be a diagram isotopy. Then there is a sequence of diagram isotopies $H_1,\ldots,H_n$, such that $h_n\ldots h_1(D)$ coincides with $h(D)$ up to detour moves, and such that $H_i$ is the identity on a neighborhood of each real crossing but the $i$-th one.
\end{corollary}
\begin{remark}It is to be understood that the $i$-th crossing of $h_k\ldots h_1(D)$ is $h_k\ldots h_1(p_i)$.
\end{remark}
\begin{proof}
Any generalized braid is (boundary-fixing) homotopic to a braid $\beta\subset \Sigma\times\left[ 0,1\right]$ such that the $i$-th strand is vertical before the time $\frac{i-1}{n}$ and vertical again after the time $\frac{i}{n}$. Take such a braid $\beta$ that is homotopic to $\prescript{H\!\!}{}{\beta}$. Any diagram isotopy $H^\prime$ such that $\beta=\prescript{H^\prime\!\!}{}{\beta}$ factorizes into a product $H_n\ldots H_1$ satisfying the last required condition. The fact that $h_n\ldots h_1(D)$ and $h(D)$ coincide up to detour moves is a consequence of Proposition~\ref{prop:braid_homotopy}.
\end{proof}
\section{Virtual knot theory on a weighted group}\label{sec:VKTG}
In this section, we define a new Gauss diagram theory, that depends on an arbitrary group $\pi$ and a homomorphism $w:\pi\rightarrow\mathbb{F}_2\simeq\mathbb{Z}/2\mathbb{Z}$. These two data together are called a \textit{weighted group}. When $(\pi,w)$ is the weighted fundamental group of a surface (Definition~\ref{def:wfg}), this theory encodes, fully and faithfully, virtual knot diagrams on that surface (Definition~\ref{def:virtualknotdiags}).
\subsection{General settings and the main theorem}
\begin{definition}\label{def:GammaRm}
Let $\pi$ be an arbitrary group and $w$ a homomorphism from $\pi$ to $\mathbb{F}_2.$ A \textit{Gauss diagram on $\pi$} is a classical Gauss diagram decorated with
\begin{itemize}
\item an element of $\pi$ on each edge if the diagram has at least one arrow.
\item a single element of $\pi$ \textit{up to conjugacy} if the diagram is empty.\end{itemize}
Such diagrams are subject to the usual types of R-moves, plus an additional \textit{conjugacy move}, or \textit{$w$-move} -- the dependence on $w$ arises only there. An equivalence class modulo all these moves is called a \textit{virtual knot type on $(\pi,w)$}.
A \textit{subdiagram} of a Gauss diagram on $\pi$ is the result of removing some of its arrows. Removing an arrow involves a merging of its ($2$, $3$, or $4$) adjacent edges, and each edge resulting from this merging should be marked with the product in $\pi$ of the former markings. If all the arrows have been removed, this product is not well-defined, but its conjugacy class is.
\end{definition}
The notion of subdiagrams is useful to construct finite-type invariants (see Section~\ref{FTI}), but it already sheds light on
\begin{enumerate}
\item The distinction between empty and non empty diagrams in the definition above.
\item The \enquote{merge $\rightsquigarrow$ multiply} principle, which is omnipresent, in particular in R-moves.
\end{enumerate}
An \textbf{$\mathrm{R}_1$-move} is the local addition or removal of an isolated arrow, surrounding an edge marked with the unit $1\in\pi$. The markings of the affected edges must satisfy the rule indicated on Fig.\ref{pic:GammaRm} (top-left). There are no conditions on the decorations of the arrows.
\textit{Exceptional case:} If the isolated arrow is the only one in the diagram on the left, then the markings $a$ and $b$ on the picture actually correspond to the same edge, and the diagram on the right, with no arrow, must be decorated by $\left[ a\right] $, the conjugacy class of $a$.
\begin{figure}
\caption{The $\mathrm{R}$-moves}
\label{pic:GammaRm}
\end{figure}
\begin{figure}
\caption{The general conjugacy move (top-left) and its two exceptional cases -- in every case the orientation of the arrow switches if and only if $w(g)=-1$.}
\label{pic:conjugacy}
\end{figure}
An \textbf{$\mathrm{R}_2$-move} is the addition or removal of two arrows with opposite writhes and matching orientations as shown on Fig.\ref{pic:GammaRm} (top-right). The surrounded edges must be decorated with $1$, and the \enquote{merge $\rightsquigarrow$ multiply} rule should be satisfied.\newline
\textit{Exceptional case of type 1:} If the markings $a$ and $d$ (\textit{resp. } $b$ and $c$) correspond to the same edge, then the resulting marking shall be $cab$ (\textit{resp. } $abd$).\newline
\textit{Exceptional case of type 2:} If the middle diagram contains no arrow at all, \textit{i.e. } $a$ and $d$ match and so do $b$ and $c$, then the (only) marking of the middle diagram shall be $\left[ ab\right] $.
An \textbf{$\mathrm{R}_3$-move} may be of the two types shown on Fig.\ref{pic:GammaRm} (bottom left and right). The surrounded edges must be decorated by $1$, the value of $\mathrm{w}(\cdot)\varepsilon(\cdot)$ must be the same for all three of them, and the values of $\uparrow\!\!(\cdot)$ must be pairwise distinct (see Definition~\ref{def_epsilon}).
A \textbf{conjugacy move} depends on an element $g\in\pi$. It changes the markings of the edges adjacent to an arbitrary arrow as indicated on Fig.\ref{pic:conjugacy}. Besides, if $w(g)=-1$ then the orientation of the arrow is reversed -- though its sign remains the same.
\begin{remark}
By composing R-moves and $w$-moves, it is possible to perform \textit{generalized moves}, which look like R-moves but depend on $w$. Fig.\ref{pic:generalizedGamma} shows some of them.
\end{remark}
\begin{figure}
\caption{Some generalized moves}
\label{pic:generalizedGamma}
\end{figure}
\begin{theorem}\label{thm:Th1}
Let $(\Sigma,x)$ be an arbitrary surface with a base point, and denote by $(\pi,w)$ the weighted fundamental group of $(\Sigma,x)$ (see Definition~\ref{def:wfg}). There is a $1-1$ correspondence $\Phi$ between Gauss diagrams on $\pi$ up to R-moves and $w$-moves (\textit{i.e. } virtual knot types on $(\pi,w)$), and virtual knot diagrams on $\Sigma$ up to diagram isotopy, Reidemeister moves and detour moves \mbox{(\textit{i.e. } virtual knot types on $\Sigma$).}
\end{theorem}
\begin{proof}
Fix a subset $X$ of $\Sigma$ homeomorphic to a closed $2$-dimensional disc and containing the base point $x$ -- so that $\pi=\pi_1(\Sigma,X)$. Also, $X$ being contractible allows one to fix a trivialization of the thickening of $\Sigma$ over $X$: this gives meaning to the locally \textit{over} and \textit{under} branches when a knot diagram has a real crossing in $X$.\newline
\textbf{Construction of the bijection.} Pick a knot diagram $D$ on $\Sigma$ and assume that every real crossing of $D$ lies over $X$. Then $D$ defines a Gauss diagram on $\pi$, denoted by $\varphi (D)$: the signs of the arrows are given by the writhes, their orientation is defined by the trivialization of $M\rightarrow\Sigma$ over $X$, and each edge is decorated by the class in $\pi$ of the corresponding arc in $D$. This defines $\varphi (D)$ without ambiguity if $D$ has at least one real crossing. If it does not, then define $\varphi (D)$ as a Gauss diagram without arrows, decorated with the conjugacy class corresponding to the free homotopy class of $D$. Finally, put
$$\Phi (D):=\left[ \varphi (D)\right] \text{ $\operatorname{mod}$ R-moves and $w$-moves.}$$
\textbf{Invariance of $\Phi$ under diagram isotopy and detour moves.} It is clear from the definitions that $\varphi(D)$ is strictly unchanged under detour moves on $D$. Now assume that $D_1$ and $D_2$ are equivalent under \textit{usual} diagram isotopy -- that is, diagram isotopy that may take real crossings out of $X$ for some time. By Corollary~\ref{1by1}, it is enough to understand what happens for a diagram isotopy along which only one crossing goes out of $X$. In that case, $\varphi (D)$ is changed by a $w$-move performed on the arrow corresponding to that crossing, where the conjugating element $g$ is the loop followed by the crossing along the isotopy. Indeed, since the first Stiefel-Whitney class of the thickening of $\Sigma$ coincides with that of its tangent bundle, it follows that:
\begin{enumerate}
\item The orientation of the fibre (and thus the notions of \enquote{over} and \enquote{under}) is reversed along $g$ if and only if $w(g)=-1$, which actually corresponds to the rule for arrow orientations in a $w$-move.
\item The orientation of the fibre over the crossing is reversed along $g$ if and only if a given local orientation of $\Sigma$ is reversed along $g$, so that the writhe of the crossing never changes.
\end{enumerate}
\textbf{Invariance of $\Phi$ under Reidemeister moves.} Up to conjugacy by a diagram isotopy, it can always be assumed that a Reidemeister move happens inside $X$. In that case, at the level of $\varphi (D)$, it clearly corresponds to an R-move as described in Definition~\ref{def:GammaRm}.\newline
So far, $\Phi$ is a well-defined map from the set of virtual knot types on $\Sigma$ to the set of virtual knot types on $(\pi,w)$.
\textbf{Construction of an inverse map $\Psi$.} If $G$ is a Gauss diagram without arrows, then define $\psi (G)$ as the totally virtual knot with free homotopy class equal to the marking of $G$ -- it is well-defined up to detour moves. If $G$ has arrows, then for each of them draw a crossing inside $X$ with the required writhe, and then join these by totally virtual arcs with the required homotopy classes. The resulting diagram $\psi (G)$ is well-defined up to diagram isotopy and detour moves by this construction. In both cases, put
$$\Psi (G):=\text{virtual knot type of $\psi (G)$}.$$
Let us prove that $\varphi$ and $\psi$ are inverse maps, so that $\Psi$ will be the inverse of $\Phi$ as soon as it is invariant under R-moves and $w$-moves.
It is clear from the definitions that $\varphi\circ\psi$ coincides with the identity. It is also clear that $\psi\circ\varphi$ is the identity, up to detour moves, for \textit{totally virtual} knot diagrams.
Now fix a knot diagram $D$ with at least one real crossing (and all real crossings inside $X$). Recall that $\psi\circ\varphi (D)$ is defined up to diagram isotopy and detour moves, so fix a diagram $D^\prime$ in that class. There is a natural correspondence between the set of real crossings of $D$ and those of $D^\prime$, due to the fact that both identify by construction with the set of arrows of
$\varphi (D)$. Pick a diagram isotopy $h$ that takes each real crossing of $D$ to meet its match in $D^\prime$, \textit{without leaving $X$}. Then clearly $\varphi (h(D))=\varphi (D)$, and because $\varphi\circ\psi$ is the identity, one gets
\begin{equation}\label{proofmain}
\varphi(h(D))=\varphi(D^\prime).
\end{equation}
The choice of $h$ ensures that $h(D)$ and $D^\prime$ differ only by totally virtual arcs, and \eqref{proofmain} implies that each of these, in $h(D)$, has the same class in $\pi_1(\Sigma,X)$ as its match in $D^\prime$, which means by definition that $h(D)$ and $D^\prime$ are equivalent up to detour moves. Thus $\psi\circ\varphi$ is the identity up to diagram isotopy and detour moves.\newline
\textbf{Invariance of $\Psi$ under R-moves.}
Let us treat only the case of $\mathrm{R}_2$-moves, which contains all the ideas. Let $G_1$ and $G_2$ differ by an $\mathrm{R}_2$-move, and assume that $G_1$ is the one with more arrows. By appropriate diagram isotopy and detour moves \textit{inside} $X$, performed on $\psi(G_1)$, it is possible to make the two concerned crossings \enquote{face} each other, as in Fig.\ref{pic:R2} (left). The paths $\alpha_1$ and $\alpha_2$ from this picture are totally virtual and trivial in $\pi_1 (\Sigma,X)$, thus $\psi(G_1)$ is equivalent to the second diagram of Fig.\ref{pic:R2} up to detour moves. The fact that at this point a Reidemeister~II move is actually possible is a consequence of (in fact equivalent to) the combinatorial conditions defining the R-moves. Denote by $D$ the third diagram of the picture. The \enquote{merge $\rightsquigarrow$ multiply} principle that rules $\mathrm{R}_2$-moves implies that $\varphi(D)=G_2$, so that
\begin{equation}\label{proofmain2}
\psi(G_1)\sim D\sim\psi\circ
\varphi(D)=\psi(G_2),\end{equation}
where $\sim$ is the equivalence under diagram isotopy, detour moves and Reidemeister moves. It follows that $\psi(G_1)$ and $\psi(G_2)$ have the same knot type.
\begin{figure}
\caption{$\mathrm{R}_2$-moves}
\label{pic:R2}
\end{figure}
\begin{figure}
\caption{Performing a $w$-move -- the railway trick}
\label{pic:wmove}
\end{figure}
\textbf{Invariance of $\Psi$ under $w$-moves.} Let $G_1$ and $G_2$ differ by a $w$-move on $g\in\pi$. Call $c$ the corresponding crossing on the diagram $\psi(G_1)$. Then, pick two little arcs right before $c$, one on each branch, and make them follow $g$ by a detour move. At the end, one shall see a totally virtual $4$-lane railway as pictured on Fig.\ref{pic:wmove} (middle): the strands are made parallel, \textit{i.e. } any (virtual) crossing met by either of them is part of a larger picture as indicated by the zoom. This ensures that, using the mixed version of Reidemeister III moves, one can slide the real crossing all along the red part of the railway, ending with the diagram on the right of the picture -- let us call it $D$. The conclusion is identical to that for R-moves: again $\varphi(D)=G_2$ and \eqref{proofmain2} holds, whence $\psi(G_1)$ and $\psi(G_2)$ have the same knot type.
\end{proof}
\subsubsection{About the orbits of $w$-moves}\label{orbits}
It could feel natural to try to get rid of $w$-moves by understanding their orbits in a synthetic combinatorial way. This is what is done in Section~\ref{subsec:abelian} in the particular case of an abelian group $\pi$ endowed with the trivial homomorphism $\pi\rightarrow\mathbb{F}_2$.
In general, for a Gauss diagram $G$ on $\pi$, denote by $h_1(G)$ the \textit{set} of free homotopy classes of loops in the underlying topological space of $G$ (it is the set of conjugacy classes in a free group on $\deg(G)+1$ generators). Also, denote by $h_1(\pi)$ the set of conjugacy classes in $\pi$. Then the $\pi$-markings of $G$ define a map
$$F_G:h_1(G)\rightarrow h_1(\pi).$$
Observe that the map $G\mapsto F_G$ is invariant under $w$-moves. This raises a number of questions that amount to technical group-theoretic problems, and which will not be answered here ($G^w$ denotes the orbit of $G$ under $w$-moves):
\begin{enumerate}
\item Is the map $G^w\mapsto F_G$ injective?
\item If the answer to $1.$ is yes, then is $G^w$ determined by a finite number of values of $F_G$, for instance its values on the free homotopy classes of \textit{simple loops}?
\item Is it possible to detect in a simple manner what maps $h_1(G)\rightarrow h_1(\pi)$ lie in the image of $G^w\mapsto F_G$?
\end{enumerate}
\begin{remark}
Gauss diagrams with decorations in $h_1(\Sigma)$ can be met for example in \cite{Grishanov}, where they are used to construct knot invariants in a thickened oriented surface $\Sigma$ -- see also Section~\ref{sec:Grishanov}. If the answer to Question $1.$ above is no, then such invariants, which factor through $F_G$, stand no chance of being complete.
\end{remark}
\begin{remark}Even for diagrams with only one arrow, it still does not seem easy to answer the \enquote{simple loop} version of Question $2.$ Given $x,y,h,k$ in a free group of finite rank, is it true that $$\begin{array}{cccc}
hxh^{-1}kyk^{-1}=xy
& \Longrightarrow & \exists l, \, \left\lbrace \begin{array}{ccc}
hxh^{-1} & = & lxl^{-1} \\ kyk^{-1} & = & lyl^{-1}
\end{array}\right. & ?\end{array}$$
\end{remark}
Let us end with an example showing that the values of $F_G$ on the (finite) set of simple loops running along at most one arrow are not enough (cf. Question $2.$). Fig.\ref{exWP1} shows a Gauss diagram with such decorations -- $\left\lbrace a,b \right\rbrace$ is a set of generators for the free group $\pi_1(\Sigma)\simeq \mathbb{F} (a,b)$, where $\Sigma$ is a $2$-punctured disc. These particular values of $F_G$ do not determine the free homotopy class of the red loop $\gamma$, as is shown in Fig.\ref{exWP2}.
\begin{figure}
\caption{A Gauss diagram with $h_1$-decorations that does not define a unique virtual knot}
\label{exWP1}
\end{figure}
In fact, these two virtual knots are even distinguished by Vassiliev-Grishanov's planar chain invariants, which means they represent different virtual knot types.
\begin{figure}
\caption{One red loop is trivial, while the other is a commutator}
\label{exWP2}
\end{figure}
\subsection{Abelian Gauss diagrams}
\label{subsec:abelian}
In this subsection, $\pi$ is assumed to be abelian, and $w_0$ denotes the trivial homomorphism $\pi\rightarrow\mathbb{F}_2$. We describe a version of Gauss diagrams that carries as much information as the previously introduced virtual knot types on $(\pi,w_0)$, with two improvements:
\begin{itemize}
\item The diagrams involve less data than in the general version.
\item This version is free from conjugacy moves.
\end{itemize}
It is inspired by the decorated diagrams introduced by T. Fiedler to study combinatorial invariants for knots in thickened surfaces (see \cite{Fiedler,Fiedlerbraids} and also \cite{MortierPolyakEquations}).
We use the same notation $G$ for a Gauss diagram and its underlying topological space, which carries the structure of a $1$-dimensional cell complex with edges and arrows as oriented $1$-cells. $H_1(G)$ denotes its first integral homology group.
\begin{deflemma}[fundamental loops]\label{def:distinguished}
Let $G$ be a classical Gauss diagram of degree $n$. There are exactly $n+1$ simple loops in $G$ respecting the local orientations of edges and arrows, and going along at most one arrow. They are called the \textit{fundamental loops} of $G$ and their homology classes form a basis of $H_1(G)$.
\end{deflemma}
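For instance, if $G$ has degree $1$, with a single arrow $A$, the two fundamental loops are the base circle itself and the loop that runs along $A$ from its tail to its head, and then returns to the tail along the base circle, following its orientation.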
\begin{definition}[abelian Gauss diagram]\label{def:ab}
Let $\pi$ be an abelian group. An \textit{abelian Gauss diagram on $\pi$} is a classical Gauss diagram $G$ decorated with a group homomorphism $\mu:H_1(G)\rightarrow\pi$. It is usually represented by its values on the basis of fundamental loops, that is, one decoration in $\pi$ for each arrow, and one for the base circle -- that last one is called the \textit{global marking of $G$}.
A Gauss diagram on $\pi$ determines an abelian Gauss diagram as follows:
\begin{itemize}
\item The underlying classical Gauss diagram is the same.
\item Each fundamental loop is decorated by the sum of the markings of the edges that it meets (see Fig.~\ref{pic:ab}).
\end{itemize}
\begin{figure}
\caption{Abelianizing a Gauss diagram on an abelian group}
\label{pic:ab}
\end{figure}
This defines an \textit{abelianization map} $\operatorname{ab}$.
\end{definition}
\begin{proposition}\label{prop:abelian}
The map $\operatorname{ab}$ induces a natural $1-1$ correspondence between abelian Gauss diagrams on $\pi$ and equivalence classes of Gauss diagrams on $\pi$ up to $w_0$-moves. Moreover, if $\pi=\pi_1(\Sigma)$ is the fundamental group of a surface, then these sets are in $1-1$ correspondence with the set of virtual knot diagrams on $\Sigma$ up to diagram isotopy and detour moves.
\end{proposition}
\begin{proof}
The proof of the last statement is contained in that of Theorem~\ref{thm:Th1} -- through the facts that $\varphi$ and $\psi$ are inverse maps up to detour moves and diagram isotopy, and that $w$-moves at the level of knot diagrams can be performed using only detour moves and diagram isotopies, by the railway trick (Fig.\ref{pic:wmove}).
As for the first statement, one easily sees that $\operatorname{ab}$ is invariant under $w_0$-moves. We have to show that conversely, if $\operatorname{ab}(G_1)=\operatorname{ab}(G_2)$, then $G_1$ and $G_2$ are equivalent under $w_0$-moves.
This is clear if $G_1$ has no arrows, since then $\operatorname{ab}(G_1)=G_1$. Now proceed by induction on the number of arrows. Since $G_1$ and $G_2$ have the same abelianization, they have in particular the same underlying classical Gauss diagram, and there is a natural correspondence between their arrows.
\textbf{Case 1:} No two arrows in $G_1$ cross each other. Then at least one arrow surrounds a single isolated edge on one side (as in an $\mathrm{R}_1$-move). Choose such an arrow $\alpha$ and remove it, as well as its match in $G_2$. By induction, there is a sequence of $w_0$-moves on the resulting diagram $G_1^\prime$ that turns it into $G_2^\prime$.
Since the arrows of $G_1^\prime$ have a natural match in $G_1$, those $w_0$-moves make sense there, and take every marking of $G_1$ to be equal to its match in $G_2$, except for those in the neighborhood of $\alpha$. So we may assume that $G_1$ and $G_2$ only differ near $\alpha$ as in Fig.\ref{pic:proofab}. Since all the unseen markings coincide in $G_1$ and $G_2$, and since $\operatorname{ab}(G_1)$ and $\operatorname{ab}(G_2)$ have the same global marking, it follows that $$a+b+c=a^\prime+
b^\prime+c^\prime.$$ Thus a $w_0$-move on $\alpha$ with conjugating element $g=a^\prime-a$ turns $G_1$ into $G_2$.
\begin{figure}
\caption{Notations for case $1$}
\label{pic:proofab}
\end{figure}
\textbf{Case 2:} There is at least one arrow $\alpha$ in $G_1$ that intersects another arrow. By the same process as in case $1$, one may assume that $G_1$ and $G_2$ only differ near $\alpha$ -- see Fig.\ref{pic:proofabis}, where $a$, $b$, $c$ and $d$ actually correspond to pairwise distinct edges since $\alpha$ intersects an arrow. Again, since all the unseen markings coincide in $G_1$ and $G_2$, one obtains
$$a+d=a^\prime+d^\prime,$$ and $$b+c=b^\prime+c^\prime,$$ by considering the global marking, and the marking of $\alpha$, in $\operatorname{ab}(G_1)$ and $\operatorname{ab}(G_2)$. Moreover, there is at least one arrow intersecting $\alpha$: considering the marking of that arrow gives $$a+b=a^\prime+b^\prime.$$ The last three equations may be written as $$a^\prime-a=b-b^\prime=c^\prime-c=d-d^\prime,$$ so that, again, a $w_0$-move on $\alpha$ with conjugating element $g=a^\prime-a$ turns $G_1$ into $G_2$.
\begin{figure}
\caption{Notations for case $2$}
\label{pic:proofabis}
\end{figure}
\end{proof}
\begin{remark}
A different proof of this proposition was given in a draft paper, in the special case $\pi=\mathbb{Z}$ (\cite{MortierGaussDiagrams}, Proposition $2.2$). As an exercise, one can show that this proof extends to the case of an arbitrary abelian group.
\end{remark}
To make the picture complete, it only remains to understand R-moves in this context.
\begin{definition}[obstruction loops]
Within any local Reidemeister picture like those shown on Fig.\ref{Rmoves} featuring at least one arrow, there is exactly one (unoriented) simple loop. We call it the \textit{obstruction loop}.
Fig.\ref{pic:loops} shows typical examples.
\end{definition}
\begin{definition}[R-moves]\label{lem:obstruction}
A move from Fig.\ref{Rmoves} can define an R-move only if the obstruction loop lies in the kernel of the decorating map $H_1(G)\rightarrow \pi$ (which makes sense even though the loop is unoriented). Under that assumption, the \emph{R-moves for abelian Gauss diagrams} are defined by the usual conditions:
\begin{itemize}
\item $i=1.$ No additional condition.
\item $i=2.$ The arrows head to the same edge, and have opposite signs.
\item $i=3.$ The value of $\mathrm{w}(e)\varepsilon(e)$ is the same for all three visible edges $e$, and the values of $\uparrow\!\!(e)$ are pairwise different (see Definition~\ref{def_epsilon}).
\end{itemize}
\end{definition}
\begin{figure}
\caption{Homological obstruction to $\mathrm{R}$-moves}
\label{pic:loops}
\end{figure}
\begin{theorem}\label{thm:abelian}
The map $\operatorname{ab}$ induces a natural $1-1$ correspondence between equivalence classes of abelian Gauss diagrams on $\pi$ up to R-moves and virtual knot types on $(\pi,w_0)$.
\end{theorem}
\begin{proof}
$\operatorname{ab}$ clearly maps an R-move in the non-commutative sense to an R-move in the abelian sense. Conversely, if $\operatorname{ab}(G_1)$ and $\operatorname{ab}(G_2)$ differ by an (abelian) R-move, then the vanishing homological obstruction implies that $G_1$ and $G_2$ are in a position to perform a \enquote{generalized R-move} like the examples pictured on Fig.\ref{pic:generalizedGamma}.
\end{proof}
Theorems~\ref{thm:Th1} and~\ref{thm:abelian} together imply the following
\begin{corollary}\label{cor:orientable}
If $\Sigma$ is an orientable surface with abelian fundamental group, then there is a $1-1$ correspondence between abelian Gauss diagrams on $\pi_1(\Sigma)$ up to R-moves, and virtual knot types on $\Sigma$.
\end{corollary}
\subsection{Homological formulas}
\label{subsec:homform}
It is not obvious how to compute an arbitrary value of the linear map decorating an abelian Gauss diagram, given only its values on the fundamental loops. To end this section, we give two formulas that fill this gap, by expressing the coordinates of an arbitrary loop in the basis of fundamental loops.
\subsubsection{The energy formula}
Fix an abelian Gauss diagram $G$. Observe that as a cellular complex, $G$ has no $2$-cells, thus every $1$-homology class has a unique set of \enquote{coordinates} along the family of edges and arrows. For each $1$-cell $c$ (which may be an arrow or an edge), we denote by $\left<\cdot,c\right>:H_1(G)\rightarrow\mathbb{Z}$ the coordinate function along $c$. It is a group homomorphism.
Let us denote by $\left[ A\right] \in H_1(G)$ the class of the fundamental loop associated with an arrow $A$ (Fig.\ref{pic:energy} left).
\begin{deflemma}[Energy of a loop]\label{def:energy}
Fix an edge $e$ in $G$, and a class $\gamma\in H_1(G)$. The value of
\begin{equation}\label{eq:energy}
E_e (\gamma) = \left<\gamma,e\right> -\sum_{\left<\left[ A\right],e\right>=1} \left<\gamma,A\right>
\end{equation}
is independent of $e$. This defines a group homomorphism $E:H_1(G) \rightarrow \mathbb{Z}$.
\end{deflemma}
\begin{proof}
Let us compare the values of $E_\cdot (\gamma)$ for an edge $e$ and the edge $e^\prime$ right after it. $e$ and $e^\prime$ are separated by a vertex $P$, which is the endpoint of an arrow $A$. There are two possible situations (Fig.\ref{pic:energy}):
\begin{enumerate}
\item $P$ is the tail of $A$. Then $\left<\left[ A\right],e\right>=1$ and $\left<\left[ A\right],e^\prime\right>=0$, so that
$$E_e (\gamma)-E_{e^\prime} (\gamma)=\left<\gamma,e\right>
-\left<\gamma,A\right>
-\left<\gamma,e^\prime\right>.$$
\item $P$ is the head of $A$. Then $\left<\left[ A\right],e\right>=0$ and $\left<\left[ A\right],e^\prime\right>=1$, so that
$$E_e (\gamma)-E_{e^\prime} (\gamma)=\left<\gamma,e\right>
+\left<\gamma,A\right>
-\left<\gamma,e^\prime\right>.$$
\end{enumerate}
In both cases, $E_e (\gamma)-E_{e^\prime}(\gamma)$ is equal to $\left<\partial\gamma,P\right>$, which is $0$ since $\gamma$ is a cycle.
\end{proof}
\begin{figure}
\caption{The fundamental loop of an arrow and the two cases in the proof of Lemma~\ref{def:energy}}
\label{pic:energy}
\end{figure}
\begin{theorem}\label{thm:formula2}
For any $\gamma\in H_1(G)$, one has the decomposition
\begin{equation}\label{eq:formula2} \gamma =\sum_A\left<\gamma,A\right> \left[ A\right] +E(\gamma) \left[K \right] .\end{equation}
\end{theorem}
\begin{proof}
This formula is an identity between two group homomorphisms, so it suffices to check it on the basis of fundamental loops, which is immediate.
\end{proof}
\begin{remark}
The existence of a map $E$ such that Theorem~\ref{thm:formula2} holds was clear, since for each arrow $A$ considered as a $1$-cell, $[A]$ is the only fundamental loop that involves $A$. With that in mind, one may read \eqref{eq:energy} as follows: $E(\gamma)$ counts the (algebraic) number of times that $\gamma$ goes through an edge, minus the number of those times that are already taken care of by the fundamental loops of the arrows. This number has to be the same for all edges, so that one recovers a multiple of $\left[K \right]$. \end{remark}
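As a quick sanity check, note that $\left<[K],A\right>=0$ for every arrow $A$ while \eqref{eq:energy} gives $E([K])=1$, so \eqref{eq:formula2} reduces to $[K]=[K]$. Likewise, for a fixed arrow $A_0$ one finds $E([A_0])=0$, hence the class $\gamma=[K]-[A_0]$, whose only non-zero coordinate along the arrows is $\left<\gamma,A_0\right>=-1$, satisfies $E(\gamma)=1$, and \eqref{eq:formula2} returns $\gamma=-\left[ A_0\right]+\left[ K\right]$, as expected.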
\subsubsection{The torsion formula}
Looking at \eqref{eq:formula2} and Fig.\ref{pic:energy}, one may feel that it would be more natural to have $\left[K \right]-\left[A \right]$ involved in the formula, instead of $\left[A \right]$, for all arrows $A$ such that $\left<\gamma,A\right>$ is negative -- that is, when $\gamma$ runs along $A$ with the wrong orientation more often than not. The formula then becomes
\begin{equation}\label{eq:torsion}
\gamma=\sum_{\left<\gamma,A\right> >0}\left<\gamma,A\right> \left[ A\right]
+ \sum_{\left<\gamma,A\right> <0}\left<\gamma,A\right> \left(\left[ A\right]- \left[ K\right]\right)\,-\mathcal{T}(\gamma)\left[ K\right],
\end{equation}
where
\begin{equation}\label{TvsE}
-\mathcal{T}(\gamma)= E(\gamma)+ \sum_{\left<\gamma,A\right> <0}\left<\gamma,A\right>.
\end{equation}
\begin{definition}
$\mathcal{T}(\gamma)$ is called the \textit{torsion} of $\gamma$.
\end{definition}
How is \eqref{eq:torsion} different from \eqref{eq:formula2}?\newline
$\ominus$ On the negative side, unlike the energy, $\mathcal{T}$ is not a group homomorphism. But it actually behaves almost like one:
\begin{lemma}\label{lem:torsionmorphism}
Let $\gamma_1$ and $ \gamma_2$ be two homology classes such that
$$\forall A,\,\, \left<\gamma_1,A\right>
\left<\gamma_2,A\right> \geq 0.$$
Then
$$\mathcal{T}(\gamma_1+\gamma_2)=\mathcal{T}(\gamma_1)+\mathcal{T}(\gamma_2).$$
\end{lemma}
\begin{proof}
It follows from \eqref{TvsE} and the fact that $E$ is a homomorphism.
\end{proof}
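The hypothesis cannot be dropped: as soon as some coordinate $\left<\gamma,A\right>$ is non-zero, \eqref{TvsE} gives
$$\mathcal{T}(\gamma)+\mathcal{T}(-\gamma)=\sum_A \left|\left<\gamma,A\right>\right|>0=\mathcal{T}\left( \gamma+(-\gamma)\right),$$
since the two sums over negative coordinates together recover every $\left<\gamma,A\right>$ up to sign.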
\noindent $\oplus$ On the positive side:
\begin{lemma}\label{lem:torsionindep}
The torsion of a loop in a Gauss diagram $G$ does not depend on the orientations of the arrows of $G$.
\end{lemma}
\begin{proof}
By expanding the defining formula,
$$\mathcal{T}(\gamma)=
-\left<\gamma,e\right>
\hspace*{0.7cm}+ \sum_{\begin{array}{c}
\left<\gamma,A\right> >0\\
\left<[A],e\right> =1
\end{array}}\left<\gamma,A\right> \hspace*{0.7cm}- \sum_{\begin{array}{c}
\left<\gamma,A\right> <0\\
\left<[A],e\right> =0
\end{array}}
\left<\gamma,A\right>,$$
one sees that reversing an arrow makes its contribution (if non zero) switch from one sum to the other, while $\left<\gamma,A\right>$ also changes signs.
\end{proof}
This lemma suggests that $\mathcal{T}(\gamma)$ should admit a very simple combinatorial interpretation. It actually does, but only for a certain family of loops -- the ERS loops defined below. Fortunately enough, this family happens to positively generate $H_1(G)$, which allows one to compute the torsion of any loop by using Lemma~\ref{lem:torsionmorphism}.
\begin{definition}\label{def:ER}
The notation $\gamma$ is used for loops as well as $1$-homology classes.
A homology class $\gamma\in H_1(G)$ is said to be
\begin{itemize}
\item \textit{ER} (for \enquote{edge-respecting}), if for every edge $e$, $\left<\gamma,e\right> \geq 0.$
\item \textit{simple} if it is the class of a simple (injective) loop, that is, $\left|\left<\gamma,c\right>\right| \leq 1$ for every $1$-cell $c$ (edge or arrow).
\item \textit{ERS} if it is ER and simple.
\item \textit{proper} if it runs along at least one arrow.
\end{itemize}
\end{definition}
\begin{figure}
\caption{The local and global look of a proper ERS loop}
\label{simpleloop}
\end{figure}
Consider a permutation $\sigma\in\mathfrak{S}\left( \llbracket 1,n\rrbracket\right)$, and set
$$\,\nearrow \!(\sigma):=\sharp\left\lbrace i\in\llbracket 1,n\rrbracket\mid\sigma(i)>i\right\rbrace.$$
It is easy to check that if $\sigma_0$ is the circular permutation $(1\,2\,\ldots\,n)$, then
$$\forall \sigma\in\mathfrak{S}, \,\nearrow \!(\sigma)=\,\nearrow \!(\sigma_0\sigma\sigma_0^{-1}).$$
\begin{definition}
The invariance property from above means that $\,\nearrow$ is well-defined for permutations of a set of $n$ points lying on an abstract oriented circle.
We still denote this function by $\,\nearrow$, and call it the \textit{torsion} of a permutation.
\end{definition}
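For instance, for $n=3$ one finds $\,\nearrow \!\left( (1\,2\,3)\right) =2$, since $1\mapsto2$ and $2\mapsto3$ increase while $3\mapsto1$ does not, and $\,\nearrow \!\left( (1\,2)\right) =\,\nearrow \!\left( (2\,3)\right) =1$, in accordance with the invariance under conjugation by $\sigma_0$.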
Let $\gamma$ be a proper simple loop. Then the set of edges $e$ such that $\left<\gamma,e\right> \neq 0$ can be naturally identified with a finite subset of an oriented circle, and $\gamma$ induces a permutation of this set, which we denote by $\sigma_\gamma$.
\begin{theorem}\label{thm:torsion}
For all proper ERS loops $\gamma$,
$$\mathcal{T}(\gamma)=\,\nearrow \!(\sigma_\gamma).$$
\end{theorem}
This theorem can be useful in practice, since the torsion of a permutation can be computed at a glance on the braid-like presentation. Observe that
\begin{enumerate}
\item Every non proper loop is homologous to a multiple of $[K]$, easy to determine.
\item For every proper loop $\gamma$, there is an integer $n$ such that $\tilde\gamma=\gamma+n[K]$ is proper, ER, and has zero coordinate along at least one edge. Namely, $n=-\min_e
\left<\gamma,e\right>.$
\item Every class $\tilde\gamma$ as above may be decomposed as a sum $\tilde\gamma=\sum_i\gamma_i$ such that
\begin{itemize}
\item all the $\gamma_i$'s are proper and ERS
\item $\forall i,j, A, \left<\gamma_i,A\right>
\left<\gamma_j,A\right> \geq 0$
\end{itemize}
\item By Lemma~\ref{lem:torsionmorphism}, $\mathcal{T}(\tilde{\gamma})=\sum_i
\mathcal{T}(\gamma_i)$, and the $\mathcal{T}(\gamma_i)$'s are given by Theorem~\ref{thm:torsion}.
\end{enumerate}
This shows that the decomposition of any homology class in the basis of fundamental loops can be computed by using the torsion formula. Whether it is more interesting than the energy formula depends on the context.
\begin{proof}[Proof of Theorem~\ref{thm:torsion}]
One may assume that for every arrow $A$, $\left< \gamma,A \right>=1.$
Indeed, deleting an arrow avoided by $\gamma$, or reversing the orientation of an arrow along which $\gamma$ runs in the wrong direction, has no effect on either side of the formula (notably because of Lemma~\ref{lem:torsionindep}). Under this assumption, half of the edges of $G$ are run by $\gamma$: call them the \textit{red edges of $G$}, while the other half are called the \textit{blue edges}. Red and blue edges alternate along the orientation of the circle.
If $e$ is any (red or blue) edge, we define:
$$\lambda(e):=\sum_A \left<[A],e\right>. $$
\begin{lemma}\label{l1}
Under the assumption that $\left< \gamma,A \right>=1$ for all $A$, the value of $\lambda(e)$ only depends on the color of the edge $e$. Moreover, $$\lambda(\text{blue})=\lambda(\text{red})-1=\,\nearrow \!(\sigma_\gamma).$$
\end{lemma}
Let us temporarily admit this result. By the definition of $\lambda$,
$$\begin{array}{rcl}
\sum_A\left[ A\right] & = & \sum \text{arrows} +\lambda(\text{red}) \sum ( \text{red edges})
+\lambda(\text{blue}) \sum ( \text{blue edges})\\
& \stackrel{\text{Lemma}~
\ref{l1}}{=} & \sum \text{arrows} + \sum \left(\text{red edges} \right) + \lambda(\text{blue}) \sum \left(\text{red and blue edges} \right)\\
& \stackrel{\phantom{\text{Lemma}~
\ref{l1}}}{=} & \gamma + \lambda(\text{blue}) [K]\\
& \stackrel{\text{Lemma}~
\ref{l1}}{=} & \gamma +\,\nearrow \!(\sigma_\gamma) [K].
\end{array}$$
Since it was assumed that $\left< \gamma,A \right>=1$ for every arrow, the definition of $\mathcal{T}$ \eqref{eq:torsion} reads
$$\gamma=\sum_A\left[ A\right]-\mathcal{T}(\gamma) [K],$$ which terminates the proof of the theorem, up to Lemma~\ref{l1}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l1}]
In the case of $\sigma_0=(1\,2\,\ldots\,n)$ depicted on Fig.\ref{h1}, it is easy to see that $\lambda(\text{red})=n$ and $\lambda(\text{blue})=n-1$, while $\,\nearrow \!(\sigma_0)=n-1$.
The lemma being true for one diagram, let us show that it survives elementary changes that cover all the diagrams.
\begin{figure}
\caption{Braid-like representations of permutations are to be read from bottom to top}
\label{h1}
\end{figure}
Notice that for every proper ERS loop $\gamma$, $\sigma_\gamma$ is a \textit{cycle}, and conversely a permutation that is a cycle uniquely defines an undecorated Gauss diagram \textit{and} a proper ERS loop $\gamma$ such that for every arrow $A$, $\left< \gamma,A \right>=1$. Thus, covering all possible permutations implies covering all possible diagrams and proper ERS loops. So all we have to check is that the formula survives an operation on $\sigma_\gamma$, of the form:
$$\left(\, \ldots \, i\, j \, \ldots\,\right) \longrightarrow
\left(\, \ldots \, j\, i \, \ldots\,\right)$$
The corresponding move at the level of Gauss diagrams may be of six different types, grouped in three pairs of reverse operations (Fig.\ref{h2}).
\begin{figure}
\caption{Twist moves on Gauss diagrams}
\label{h2}
\end{figure}
On each diagram in Fig.\ref{h2}, the three moving arrows split the base circle into six regions. One computes the variation of $\lambda$ separately for each of these regions, and sees that it is the same for each of them. The results are gathered in the following table, proving the lemma.
$$
\begin{array}{c|c|c}
\text{type of move} & \text{variation of }\lambda & \text{variation of }\,\nearrow\!(\sigma_\gamma)\\
\hline A & \text{unchanged} & \text{unchanged} \\
B \text{ (from left to right)} & \text{decreases by }1 & \text{decreases by }1 \\
C \text{ (from left to right)} & \text{decreases by }1 & \text{decreases by }1 \\ \end{array}
$$
\end{proof}
\section{Finite-type invariants}\label{FTI}
One of the main points of using Gauss diagrams is their ability to describe finite-type invariants by simple formulas \citep{PolyakViro, Fiedler, ChmutovKhouryRossi, PolyakChmutovHOMFLYPT}. In the case of classical long knots in $3$-space, such formulas actually cover all Vassiliev invariants, as was shown by M.~Goussarov \citep{Goussarov}. In the virtual case, the two notions actually differ (see \citep{KauffmanVKT99} and also \citep{ChrismanGPV, ChrismanLattices}). Finite-type invariants for virtual knots that do admit Gauss diagram formulas shall be called GPV invariants \citep{GPV}.
In \citep{MortierPolyakEquations}, a simple set of criteria was given to detect a particular family of those formulas, called \textit{virtual arrow diagram formulas}. Most of the examples that are known belong to this family. That includes Chmutov-Khoury-Rossi's formulas for the coefficients of the Conway polynomial \citep{ChmutovKhouryRossi} (and their generalization by M. Brandenbursky \citep{Brandenbursky}), as well as the formulas from \citep{Fiedler, Fiedlerbraids, Grishanov} where different kinds of decorated diagrams are used. Note however that the formulas for the invariants extracted from the HOMFLYPT polynomial \citep{PolyakChmutovHOMFLYPT} are arrow diagram formulas only if the variable $a$ is specialized to $1$ (which yields back the result of \citep{ChmutovKhouryRossi}).
In this section, we extend the results from \citep{MortierPolyakEquations} to an arbitrary surface. Then we show how to apply them to any other kind of decorated diagrams found in the literature, by defining \textit{symmetry-preserving maps} which enable one to jump from one theory to another.
\subsection{General algebraic settings}\label{sec:GDspaces}
We denote by $\mathfrak{G}_n$ (\textit{resp. } $\mathfrak{G}_{\leq n}$) the $\mathbb{Q}$-vector space freely generated by Gauss diagrams on $\pi$ of degree $n$ (\textit{resp. } ${\leq n}$), and set $\mathfrak{G}= \varinjlim\mathfrak{G}_{\leq n}$. Unless $\pi$ is a finite group, these spaces are not finitely generated, and we define their hat versions $\widehat{\mathfrak{G}}_n$ (\textit{resp. } $\widehat{\mathfrak{G}}_{\leq n}$) as the $\mathbb{Q}$-spaces of formal series of Gauss diagrams of degree $n$ (\textit{resp. } $\leq n$). Finally, set $\widehat{\mathfrak{G}}= \varinjlim\widehat{\mathfrak{G}}_{\leq n}.$ An arbitrary element of $\widehat{\mathfrak{G}}$ is usually denoted by $\mathcal{G}$ and called a \textit{Gauss series}, of degree $n$ if it is represented in $\widehat{\mathfrak{G}}_{\leq n}$ but not in $\widehat{\mathfrak{G}}_{\leq n-1}$. The notation $G$ is saved for single Gauss diagrams.
A Gauss diagram $G$ of degree $n$ has a \textit{group of symmetries} $\operatorname{Aut}(G)$, which is a subgroup of $\mathbb{Z}/2n$, made of the rotations of the circle that leave unchanged a given representative of $G$ (see Subsection~\ref{sec:Sinj}).
$\mathfrak{G}$ is endowed with the scalar product for which its canonical basis is orthonormal, denoted by $(,)$, and with its normalized version $\left\langle ,\right\rangle$, defined by
\begin{eqnarray}\label{eq2}
\left\langle G,G^\prime \right\rangle :=\left|\operatorname{Aut}(G)\right|
(G,G^\prime).
\end{eqnarray}
There is a linear isomorphism $I: \mathfrak{G}_{\leq n}\rightarrow\mathfrak{G}_{\leq n}$, the keystone of the theory, which maps a Gauss diagram of degree $n$ to the formal sum of its $2^n$ subdiagrams:
\begin{equation}\label{def:I}
I(G)=\sum_{\sigma\in \left\lbrace \pm 1\right\rbrace ^{n}}G_{(\sigma)},\end{equation}
where $G_{(\sigma)}$ is $G$ deprived of the arrows that $\sigma$ maps to $-1$ (see Definition~\ref{def:GammaRm} for subdiagrams).
The inverse map of $I$ is given by
\begin{equation}\label{def:I-1}
I^{-1}(G)=\sum_{\sigma\in \left\lbrace \pm 1\right\rbrace ^{n}} \operatorname{sign}(\sigma)G_{(\sigma)}.
\end{equation}
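For instance, if $G$ has degree $1$, with a single arrow and edges marked by $a,b\in\pi$, then $I(G)=G+G_\emptyset$ and $I^{-1}(G)=G-G_\emptyset$, where $G_\emptyset$ is the arrowless diagram decorated with the conjugacy class $\left[ ab\right]$; since $I(G_\emptyset)=G_\emptyset$, one checks directly that $I\left( I^{-1}(G)\right) =G$ in this case.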
\begin{definition}
A finite-type invariant for virtual knots in the sense of Goussarov-Polyak-Viro is a virtual knot invariant given by a \textit{Gauss diagram formula}
\begin{equation}\label{eq1}
\nu_\mathcal{G}: G \mapsto \left\langle \mathcal{G},I(G)
\right\rangle,
\end{equation}
where $\mathcal{G}\in \widehat{\mathfrak{G}}$. Such a formula \enquote{counts} the subdiagrams of $G$, with weights given by the coefficients of $\mathcal{G}$. Notice that only one of the two arguments of $\left\langle,
\right\rangle$ needs to be a finite sum for the expression to make sense. We do not make a distinction between a virtual knot invariant and the linear form induced on $\mathfrak{G}$.
\end{definition}
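For instance, if $\mathcal{G}$ consists of a single diagram $G_0$, then $\nu_{G_0}(G)=\left\langle G_0,I(G)\right\rangle$ is $\left|\operatorname{Aut}(G_0)\right|$ times the number of ways to remove arrows from $G$ so that the resulting subdiagram is $G_0$.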
\subsubsection{The Polyak algebra}
A Gauss series $\mathcal{G}\in \widehat{\mathfrak{G}}$ defines a virtual knot invariant if and only if the function $\left\langle \mathcal{G},I(.)\right\rangle$ vanishes on the subspace spanned by the $\mathrm{R}$-move and $w$-move relators. Hence one has to describe the image of that subspace under $I$ by a simple family of generators. This is the idea of the construction of the Polyak algebra (\citep{P1,GPV}).
\begin{figure}
\caption{The three kinds of Polyak relations}
\label{pic:Pmoves}
\end{figure}
In the present case, $\mathcal{P}$ is defined as the quotient of $\mathfrak{G}$ by
\begin{itemize}
\item the relations shown in Fig.\ref{pic:Pmoves}, which we call $\mathrm{P}_1$, $\mathrm{P}_2$, $\mathrm{P}_3$ (or \textit{$8T$ relation}),
\item the $\mathrm{W}$ relation, which is simply the linear match of $w$-moves (\textit{i.e. } just replace the \enquote{$\leftrightsquigarrow$} with a \enquote{$=$} in all the relations from Fig.\ref{pic:conjugacy}).\end{itemize}
Be careful that unlike $\mathrm{R}_1$-moves, where an isolated arrow surrounding an edge marked with $1$ simply disappears, in a $\mathrm{P}_1$ relation the presence of such an arrow kills the diagram (\textit{i.e. } it is set equal to $0$). Fig.\ref{pic:Pmoves} does not feature the $\pi$-markings for $\mathrm{P}_3$ to lighten the picture, but they have to follow the usual \enquote{merge $\rightsquigarrow$ multiply} rule (see Definition~\ref{def:GammaRm}).
The following proposition extends Theorem $2.D$ from \citep{GPV}.
\begin{proposition}\label{thm:PolyakAlg}
The map $I$ induces an isomorphism $\mathfrak{G}/\RW
\rightarrow \mathfrak{G}/
\PW=:\mathcal{P}$. More precisely, $I$ induces an isomorphism between $\operatorname{Span}(\mathrm{R}_i)$ and $\operatorname{Span}(\mathrm{P}_i)$, for $i=1,2,3$, and between $\operatorname{Span}(\mathrm{W})$ and itself. It follows that the map $G\mapsto I(G)\in \mathcal{P}$ defines a complete invariant for virtual knots.
\end{proposition}
\subsubsection{The symmetry-preserving injections}\label{sec:Sinj}
Depending on the context, one may have to consider simultaneously different types of Gauss diagrams, with more or less decorations. This subsection presents a natural way to do it, convenient from the viewpoint of Gauss diagram invariants. The construction requires one to choose a kind of combinatorial object that is the \enquote{father} of all other kinds, in the sense of quotienting. We present the construction by taking as the father type that of Gauss diagrams on a group.
To begin with, we do not regard Gauss diagrams up to homeomorphisms of the circle: the base circle is assumed to be the unit circle in $\mathbb{C}$, the endpoints of the arrows are assumed to be located at the $2n$-th roots of unity, and the arrows are straight line segments. Such a diagram is called \textit{rigid}.
By a \enquote{type of rigid Gauss diagrams} we mean an equivalence relation on the set of rigid Gauss diagrams on $\pi$, which is required to satisfy two properties:\\
\textbf{1.} (Degree property) All diagrams in a given equivalence class shall have the same degree.\\
\textbf{2.} (Stability property) The action of $\mathbb{Z}/2n$ on the set of rigid Gauss diagrams of degree $n$ shall induce an action on the set of degree $n$ equivalence classes.\\
Since every construction in this subsection is therefore destined to be homogeneous, the degree of all Gauss diagrams is once and for all set equal to $n$. A \textit{rigid Gauss diagram of type $\sim$} is an equivalence class under the relation $\sim$. A \textit{Gauss diagram (of type $\sim$)} is the orbit of a rigid diagram of type $\sim$ under the action of $\mathbb{Z}/2n$. The corresponding $\mathbb{Q}$-spaces are respectively denoted by $\mathfrak{G}_\sim^\text{rigid}$ and $\mathfrak{G}_\sim$.
Since $\mathbb{Z}/2n$ is abelian, two elements from the same orbit have the same stabilizer, hence a Gauss diagram $G$ has a well-defined \textit{group of symmetries} $\operatorname{Aut}(G)$, which is the stabilizer of any of its rigid representatives under the action of $\mathbb{Z}/2n$.
Consequently, the space $\mathfrak{G}_\sim$ is endowed with a pairing $\left\langle ,\right\rangle$ defined by \eqref{eq2}.
Now consider two types of rigid Gauss diagrams, say $1$ and $2$, such that relation $1$ is finer than relation $2$ (\enquote{$1\prec 2$}).
\begin{definition}[Forgetful projections]\label{def:proj}
A $1$-rigid diagram $G_1$ determines a unique $2$-rigid diagram whose $\mathbb{Z}/2n$-orbit only depends on that of $G_1$. This induces a natural surjective map at the level of Gauss diagram spaces, denoted by $$\operatorname{T_2^1}:\mathfrak{G}_{(1)}\twoheadrightarrow
\mathfrak{G}_{(2)}.$$
\end{definition}
Note that this map may not be well-defined on the spaces of formal series of Gauss diagrams, if some $2$-equivalence class contains infinitely many $1$-classes.
Example: the abelianization map $\operatorname{ab}$ (Definition~\ref{def:ab}) induces by linearity a forgetful projection from Gauss diagrams on $\pi$ to abelian diagrams on $\pi$, when $\pi$ is abelian.
\begin{definition}[Symmetry-preserving injections]\label{def:Sinj}
In the opposite way, there is a map $\mathfrak{G}_{(2)}^\text{rigid}
\rightarrow
\mathfrak{G}_{(1)}^\text{rigid}$ that sends a $2$-rigid diagram $G_2$ to the formal sum of all $1$-classes that it contains. When this sum is pushed in $\mathfrak{G}_{(1)}$, the result
\begin{itemize}
\item is well-defined: a $2$-rigid diagram cannot contain infinitely many rigid representatives of a given Gauss diagram of type $1$, since the orbits are finite ($\mathbb{Z}/2n$ is finite).
\item only depends on the $\mathbb{Z}/2n$-orbit of $G_2$.
\end{itemize}
This induces an injective \textit{symmetry-preserving map} at the level of formal series,
$$\operatorname{S_2^1}:\widehat{\mathfrak{G}_{(2)}}
\hookrightarrow
\widehat{\mathfrak{G}_{(1)}}.$$
$\operatorname{S_2^1}$ is well-defined, componentwise, since $2$-rigid diagrams from different $\mathbb{Z}/2n$-orbits contain $1$-rigid diagrams from disjoint sets of $\mathbb{Z}/2n$-orbits (the images of two different Gauss diagrams do not overlap). It is injective for the same reason.
\end{definition}
The terminology is explained by the following fundamental formula:
\begin{lemma}\label{Sfund}
With notations as above, for any Gauss diagram $G_2$ of type $2$,
\begin{equation}
\operatorname{S}_2^1(G_2)=\sum_{\operatorname{T_2^1}\left( G_1\right)=G_2} \frac{\left|\operatorname{Aut}(G_2)\right|}{\left|\operatorname{Aut}(G_1)\right|}G_1.
\end{equation}
\end{lemma}
\textit{Informally, the weight given to a preimage of $G_2$ under $\operatorname{T_2^1}$ is the amount of symmetry that it has lost by the gain of more information. Note that the weights are integers, since $\operatorname{Aut}(G_1)$ identifies with a subgroup of $\operatorname{Aut}(G_2)$.}
\begin{proof}
Fix a representative $G_2^\text{rigid}$ of $G_2$. By the stability property, $\operatorname{Aut}(G_2)$ acts on the set of $1$-classes contained in $G_2^\text{rigid}$. Moreover, by definition of $\operatorname{Aut}(G_2)$, two different orbits under that action still lie in different orbits under the action of $\mathbb{Z}/2n$ itself. Therefore there is a $1-1$ correspondence between the $\operatorname{Aut}(G_2)$-orbits and the Gauss diagrams that appear in the sum $\operatorname{S}_2^1(G_2)$. The stabilizer of a given $1$-class $G_1^\text{rigid}$ is by definition $\operatorname{Aut}(G_1)$, whence the cardinality of the corresponding orbit, which is also the coefficient of $G_1$ in $\operatorname{S}_2^1(G_2)$, is $\frac{\left|\operatorname{Aut}(G_2)\right|}{\left|\operatorname{Aut}(G_1)\right|}$.
\end{proof}
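To illustrate the formula, take $\pi$ trivial, let type $1$ be Gauss diagrams themselves (the finest relation) and type $2$ the same diagrams with the signs forgotten (the arrow diagrams of the next subsubsection). If $G_2$ is an arrow diagram of degree $2$ with $\operatorname{Aut}(G_2)\simeq\mathbb{Z}/2$ generated by the half-turn, which necessarily exchanges its two arrows, then
$$\operatorname{S}_2^1(G_2)=G_{++}+G_{--}+2\,G_{+-},$$
where $G_{\varepsilon\varepsilon^\prime}$ denotes $G_2$ with signs $\varepsilon$ and $\varepsilon^\prime$ on its two arrows: $G_{++}$ and $G_{--}$ keep the half-turn symmetry and receive the weight $\frac{2}{2}=1$, while $G_{+-}=G_{-+}$ loses it and receives the weight $\frac{2}{1}=2$.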
\begin{proposition}\label{prop:ST}
\
\begin{enumerate}
\item For any three relations such that $1\prec 2\prec 3$, the following diagrams commute:
\begin{center}
\begin{tikzpicture}
\node (0) at (0.45,0.45) {$\circlearrowleft$};
\node (.) at (5.5,0) {.};
\node (1) at (0,0) {$\mathfrak{G}_{(2)}$};
\node (2) at (1.5,0) {$\mathfrak{G}_{(3)}$};
\node (3) at (0,1.5) {$\mathfrak{G}_{(1)}$};
\draw[->>,>=latex] (1) -- (2) node[midway,below] {$\operatorname{T_3^2}$};
\draw[->>,>=latex] (3) -- (1) node[midway,left] {$\operatorname{T_2^1}$};
\draw[->>,>=latex] (3) -- (2) node[midway,above right] {$\operatorname{T_3^1}$};
\node (10) at (3,1.5) {$\widehat{\mathfrak{G}_{(3)}}$};
\node (20) at (4.5,1.5) {$\widehat{\mathfrak{G}_{(2)}}$};
\node (30) at (4.5,0) {$\widehat{\mathfrak{G}_{(1)}}$};
\node (40) at (4.05,1.05) {$\circlearrowleft$};
\draw[right hook->,>=latex] (10) -- (20) node[midway,above] {$\operatorname{S}_3^2$};
\draw[right hook->,>=latex] (20) -- (30) node[midway,right] {$\operatorname{S}_2^1$};
\draw[right hook->,>=latex] (10) -- (30) node[midway,below left] {$\operatorname{S}_3^1$};
\end{tikzpicture}
\end{center}
\iotatem Injections and projections are pairwise $\left\langle ,\right\rangle$-adjoint, in the sense that
$$
\begin{array}{cc}
\forall \,\, \mathcal{G}_1\iotan
\mathfrak{G}_{(1)},\, \mathcal{G}_2\iotan\widehat{
\mathfrak{G}_{(2)}},& \left\langle \operatorname{S}_2^1\left(\mathcal{G}_2
\right),\mathcal{G}_1\right\rangle =\left\langle \mathcal{G}_2,
\operatorname{T}_2^1\left(
\mathcal{G}_1\right)\right\rangle.
\end{array}
$$
\item
$\operatorname{Im} \operatorname{S}_2^1 = \operatorname{Ker}^\bot \operatorname{T_2^1}.
$
\end{enumerate}
\end{proposition}
\begin{proof}
$1.$ The first diagram commutes directly from the definition of the maps $\operatorname{T}_i^j$. As for the maps $\operatorname{S}_i^j$, since they are defined componentwise it is enough to check it for a single diagram $G_3$. In that case, it is a consequence of Lemma~\ref{Sfund} and the relation $\operatorname{T_3^1}=\operatorname{T_3^2}\circ \operatorname{T_2^1}.$
$2.$ On both sides, it is clear that only a finite number of terms in $\mathcal{G}_2$ are relevant, namely those that are projections of some terms of $\mathcal{G}_1$ under $\operatorname{T_2^1}$. Thus, by bilinearity, it is enough to consider single diagrams $G_1$ and $G_2$. If $G_2\neq \operatorname{T_2^1}(G_1)$, then both sides are $0$.
If $G_2= \operatorname{T_2^1}(G_1)$, then $$\begin{array}{ccl}
\left\langle \operatorname{S}_2^1\left(G_2\right),G_1\right\rangle & = &
\left\langle
\frac{\left|\operatorname{Aut}(G_2)\right|}{\left|\operatorname{Aut}(G_1)\right|}
G_1
,G_1\right\rangle \\
& = & \left|\operatorname{Aut}(G_2)\right|,
\end{array}$$
while
$$\begin{array}{ccl}
\left\langle G_2,
\operatorname{T}_2^1\left(
G_1\right)\right\rangle & = & \left\langle G_2,G_2\right\rangle\\
& = & \left|\operatorname{Aut}(G_2)\right|.\end{array}
$$
$3.$ The inclusion $\operatorname{Im} \operatorname{S}_2^1 \subset \operatorname{Ker}^\bot \operatorname{T_2^1}$ follows immediately from $2.$ For the converse, pick a Gauss diagram series $\mathcal{G}_1$ in $\operatorname{Ker}^\bot \operatorname{T_2^1}$. For any two $2$-related Gauss diagrams of type $1$, $G_1$ and $G_1^\prime$, one has
$$\left\langle \mathcal{G}_1,G_1-G_1^\prime \right\rangle=0.$$
Thus, if $G_2$ is a Gauss diagram of type $2$, one can define $\phi(G_2)$ to be the value of $\left\langle \mathcal{G}_1,G_1\right\rangle$ for any preimage $G_1$ of $G_2$ under $\operatorname{T_2^1}$, and set
$$\mathcal{G}_2=\sum \frac{\phi(G_2)}{\left|\operatorname{Aut}(G_2)\right|}G_2,$$
where the sum runs over all Gauss diagrams of type $2$. Finally,
$$
\begin{array}{ccl}
\operatorname{S}_2^1(\mathcal{G}_2) & = & \sum \frac{\left|\operatorname{Aut}(\operatorname{T_2^1}\left( G_1\right))\right|}{\left|\operatorname{Aut}(G_1)\right|}
\frac{\phi(\operatorname{T_2^1}\left( G_1\right))}{\left|\operatorname{Aut}(\operatorname{T_2^1}\left( G_1\right))\right|}G_1\\
& = & \sum \frac{\left\langle \mathcal{G}_1, G_1\right\rangle}{\left|\operatorname{Aut}(G_1)\right|} G_1\\
& = & \mathcal{G}_1
\end{array}.$$
\end{proof}
In practice, point $3$ is useful in both directions: whether one needs a characterization of the series that lie in the image of some map $\operatorname{S}$ (Lemma~\ref{ImS}), or of the series that define invariants under some kind of moves (Proposition~\ref{prop:omega}). Point $2$ states that symmetry-preserving maps are the right dictionary for understanding invariants that were defined via forgetful projections.
\begin{remark}\label{openbar}
Every construction and result in this subsection can be repeated by replacing the set of rigid Gauss diagrams of degree $n$ with any set endowed with the action of an abelian finite group.
\end{remark}
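To make the weights in Lemma~\ref{Sfund} and the adjunction of Proposition~\ref{prop:ST} concrete, here is a toy instance of the abstract setting of Remark~\ref{openbar}; it involves no actual Gauss diagrams and is included only as an illustration. Let $\mathbb{Z}/4$ act on the four-element set $\left\lbrace x_0,x_1,x_2,x_3\right\rbrace$ by cyclic permutation, let relation $1$ be equality and let relation $2$ identify $x_i$ with $x_{i+2}$; both relations are stable under the action. There is a single class $c_1$ of type $1$ (the orbit of $\left\lbrace x_0\right\rbrace$), with $\operatorname{Aut}(c_1)$ trivial, and a single class $c_2$ of type $2$ (the orbit of $\left\lbrace x_0,x_2\right\rbrace$), with $\operatorname{Aut}(c_2)=\left\lbrace 0,2\right\rbrace$ of order $2$. Then $\operatorname{T}_2^1(c_1)=c_2$,
$$\operatorname{S}_2^1(c_2)=\frac{\left|\operatorname{Aut}(c_2)\right|}{\left|\operatorname{Aut}(c_1)\right|}\,c_1=2\,c_1,$$
and the adjunction reads
$$\left\langle \operatorname{S}_2^1(c_2),c_1\right\rangle=2\left|\operatorname{Aut}(c_1)\right|=2=\left|\operatorname{Aut}(c_2)\right|=\left\langle c_2,\operatorname{T}_2^1(c_1)\right\rangle.$$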
\subsubsection{Arrow diagrams and homogeneous invariants}\label{sec:arrows}
\begin{definition}[see \citep{P1,PolyakViro}]\label{def:arrowdiag}
An \textit{arrow diagram (on $\pi$)} is a Gauss diagram $G$ (on $\pi$) of which the signs decorating the arrows have been forgotten. As usual, it is considered up to homeomorphisms of the circle.\end{definition}
Arrow diagram spaces $\mathfrak{A}_{n}$, $\mathfrak{A}_{\leq n}$, $\mathfrak{A}$, the hat versions, and the pairings $(,)$ and $\left\langle ,\right\rangle$ are defined similarly to their signed versions (Subsection \ref{sec:GDspaces}). We use notations $A$
for an arrow diagram and $\mathcal{A}$ for an \textit{arrow diagram series} -- \textit{i.e. } an element of $\widehat{\mathfrak{A}}$.
\subsubsection*{Arrow diagram formulas}
In the language of Subsection~\ref{sec:Sinj}, arrow diagrams are a kind of Gauss diagrams satisfying the degree and stability properties -- the equivalence relation on rigid Gauss diagrams is given by $G\sim G^\prime \Leftrightarrow$ one may pass from $G$ to $G^\prime$ by writhe changes. Therefore, an arrow diagram $A$ has a well-defined symmetry group $\operatorname{Aut}(A)$, and there are a symmetry-preserving map $\operatorname{S}_a:\widehat{\mathfrak{A}}\hookrightarrow \widehat{\mathfrak{G}}$ and a projection $\operatorname{T}_a:\mathfrak{G}\twoheadrightarrow\mathfrak{A}$.
However, for the purpose of defining arrow diagram invariants, we are going to twist these maps a little, by pushing an additional sign into the weights:
\begin{definition}\label{twistedS}
Define the linear maps $S:\widehat{\mathfrak{A}}
\rightarrow
\widehat{\mathfrak{G}}$
and $T:\mathfrak{G}\rightarrow
\mathfrak{A}$ componentwise by
\begin{eqnarray}
S(A) & := & \sum_{\operatorname{T}_a(G)=A}\operatorname{sign}(G)\frac{\left|\operatorname{Aut}(A)\right|}{\left|\operatorname{Aut}(G)\right|}G
\\
T(G) & := & \operatorname{sign}(G)\operatorname{T}_a(G)\label{defT}
\end{eqnarray}
\end{definition}
\begin{proposition}\label{lem:ST} \
\begin{enumerate}
\item $S$ and $T$ are $\left\langle ,\right\rangle$-adjoint, in the sense that
$$
\begin{array}{cc}
\forall \,\, \mathcal{G}\in
\mathfrak{G},\, \mathcal{A}\in\widehat{
\mathfrak{A}},& \left\langle S\left(\mathcal{A}
\right),\mathcal{G}\right\rangle =\left\langle \mathcal{A},
T\left(
\mathcal{G}\right)\right\rangle.
\end{array}
$$
\item
$\operatorname{Im} S = \operatorname{Ker}^\bot T.$
\end{enumerate}
\end{proposition}
The proof is completely similar to that of Proposition~\ref{prop:ST}.
\begin{definition}
A Gauss diagram formula that lies in the image of the map $S$ is called an \textit{arrow diagram formula}.
\end{definition}
\subsubsection*{Homogeneous invariants}
\begin{definition}
For each $n\in \mathbb{N}$, there is an orthogonal projection $p_n : \widehat{\mathfrak{G}}\rightarrow \widehat{\mathfrak{G}}_n$ with respect to the scalar product $\left\langle ,\right\rangle$. For $\mathcal{G}\in \widehat{\mathfrak{G}}$, the \textit{principal part} of $\mathcal{G}$ is defined by $p_n (\mathcal{G})$, with $n=\operatorname{deg}(\mathcal{G})$.
$\mathcal{G}$ is called \textit{homogeneous} if it is equal to its principal part.
\end{definition}
\begin{figure}
\caption{Some homogeneous Polyak relations}
\label{pic:homogeneous}
\end{figure}
\begin{figure}
\caption{The homogeneous arrow relations}
\label{pic:Arelations}
\end{figure}
Let $\mathcal{G}$ be a homogeneous Gauss series. Then $\mathcal{G}$ satisfies the $\mathrm{P}_2$ and $\mathrm{P}_3$ relations (\textit{i.e. } $\left\langle \mathcal{G},\mathrm{P}_i\right\rangle=0$ for $i=2,3$) if and only if it satisfies the \textit{homogeneous} relations $\left\langle \mathcal{G},p_k(\mathrm{P}_i)\right\rangle=0$ for all $k$. These are denoted by $\mathrm{P}_2^{(n-1),1}$, $\mathrm{P}_2^{(n-2),2}$, $\mathrm{P}_3^{(n-2),2}$ (or $G6T$) and $\mathrm{P}_3^{(n-3),3}$ (or $G2T$). The parenthesized numbers in exponent indicate in each case how many arrows are unseen. $\mathrm{P}_1$ and $\mathrm{W}$ relations are already homogeneous and do not get a new name. Some examples are shown on Fig.\ref{pic:homogeneous}; for a full list, just consider the projections of the relations of Fig.\ref{pic:Pmoves}.
Homogeneous relations are also defined for arrow diagram spaces, denoted by
$\mathrm{AP}_1$,
$\mathrm{AP}_2^{(n-2),2}$, $\mathrm{AP}_3^{(n-2),2}$ (or $A6T$), $\mathrm{AP}_3^{(n-3),3}$ (or $A2T$) and $\mathrm{AW}$ (Lemma~\ref{ImS} below explains why $\mathrm{AP}_2^{(n-1),1}$ is useless: it reads $0=0$). They are the images under $T$ \eqref{defT} of the homogeneous relations for Gauss diagrams -- in particular one should be especially careful about the signs in the $A6T$ relations. A full list is presented in Fig.\ref{pic:Arelations}.
\begin{lemma}\label{SpanOrth}
Let $\mathcal{A}\in \widehat{\mathfrak{A}}$ and let $\mathrm{X}$ be a name among
$\mathrm{P}_1$, $\mathrm{P}_2^{(n-2),2}$, $\mathrm{P}_3^{(n-2),2}$, $\mathrm{P}_3^{(n-3),3}, \mathrm{W}$. Then
$$\mathcal{A}\in \operatorname{Span}^\perp(\mathrm{AX})\Longleftrightarrow S(\mathcal{A})\in \operatorname{Span}^\perp(\mathrm{X}),$$
where orthogonality is as usual in the sense of $\left\langle ,\right\rangle$.
\end{lemma}
\begin{proof}
It is a direct consequence of part $1.$ of Proposition~\ref{lem:ST}.
\end{proof}
\begin{lemma}\label{ImS}
Let $\mathcal{G}\in
\widehat{\mathfrak{G}}$. Then $\mathcal{G}$ lies in the image of the map $S:\widehat{\mathfrak{A}}\rightarrow
\widehat{\mathfrak{G}}$ if and only if $\mathcal{G}$ satisfies all the homogeneous relations $\left\langle \mathcal{G},\mathrm{P}_2^{(n-1),1}\right\rangle=0$.
\end{lemma}
\begin{proof}
Notice that the $\mathrm{P}_2^{(n-1),1}$ relators span the kernel of the map $T$. Hence the result follows from point $2.$ of Proposition~\ref{lem:ST}.
\end{proof}
The following are proved in a particular case in \citep{MortierPolyakEquations} (Lemma~3.2 and Theorem~2.5); the proof can be readily adapted to the present situation.
\begin{lemma}\label{crucial2T6T} For all $n\geq 3$:
$$\operatorname{Span}(A2T) \subseteq \operatorname{Span}(A6T)\oplus \operatorname{Span}(\mathrm{AP}_2^{(n-2),2}).$$
\end{lemma}
\begin{theorem}\label{thm:ArrowHomogeneous}
Arrow diagram formulas are exactly the linear combinations of homogeneous Gauss diagram formulas.
\end{theorem}
\subsubsection{Based and degenerate diagrams}\label{subsec:bas_deg}
\begin{definition}\label{def:basdeg}
A \textit{based} Gauss diagram is a Gauss diagram together with a distinguished (\textit{base}) edge. Based arrow diagrams are defined similarly. The corresponding spaces are denoted by $\mathfrak{G}_\bullet$ and $\mathfrak{A}_\bullet$, in reference to the dot that we use in practice to pinpoint the distinguished edge.
A \textit{degenerate Gauss diagram (with one degeneracy)} is a Gauss diagram in which one edge, whose endpoints belonged to two different arrows, has been shrunk to a point. The spaces of degenerate diagrams are denoted by $\mathfrak{DG}$ and $\mathfrak{DA}$ respectively.
\end{definition}
Even though they would encode long knots in the classical theory, based diagrams have only a combinatorial interest here. On the other hand, degenerate diagrams have a natural topological interpretation that is explained in \citep{FT1cocycles}.
The space of degenerate arrow diagrams is meant to be quotiented by the so-called \textit{triangle relations}, shown in Fig.\ref{pic:triangle}. The quotient space is denoted by $\mathfrak{DA}/\nab$. These relations originated in the early work of M.Polyak on arrow diagrams (\citep{PolyakTalk, P1}, see also \cite{PVCasson}).
\begin{figure}
\caption{The triangle relations}
\label{pic:triangle}
\end{figure}
\begin{definition}\label{def:monotonic}
Call a degenerate diagram \textit{monotonic} if an arrowhead and an arrowtail meet at the degenerate point.
\end{definition}
\begin{lemma}
$\widehat{\mathfrak{D}\mathfrak{A}/
\nab}$ is naturally isomorphic with the $\mathbb{Q}$-space of formal series of monotonic arrow diagrams.
\end{lemma}
\begin{proof}
It suffices to show that the set of monotonic diagrams forms a basis of $\mathfrak{DA}/\nab$. It is clearly a generating set thanks to the $\nabla$ relations, and it is free because every non-monotonic diagram appears in exactly one relation, and every relation contains exactly one of them.
\end{proof}
\subsection{Invariance criteria}\label{sec:polyak}
\subsubsection*{$w$-invariance}
\begin{proposition}
\label{prop:omega}
There is an injective \enquote{symmetry-preserving} map
$\operatorname{S}_{aw}^a:\widehat{\mathfrak{A}/\AW}\hookrightarrow
\widehat{\mathfrak{A}}$ defined componentwise by the formula
$$\alpha\mapsto\sum_{A\in\alpha}
\frac{\left|\operatorname{Aut}(\alpha)\right|}{\left|\operatorname{Aut}(A)\right|}A.$$
If $\mathcal{A}\in\widehat{\mathfrak{A}}$, then the map $G\mapsto \left\langle \! \left\langle \mathcal{A},G \right\rangle\!\right\rangle= \left\langle S(\mathcal{A}),I(G)\right\rangle$ is invariant under $w$-moves if and only if $\mathcal{A}$ lies in the image of $\operatorname{S}_{aw}^a$.
\end{proposition}
\begin{proof}
The equivalence relation defined by $\mathrm{AW}$-moves and writhe changes on the set of rigid Gauss diagrams on $\pi$ satisfies the degree and stability properties from Subsection~\ref{sec:Sinj}. Hence one may apply the results of that section to get the existence and elementary properties of $\operatorname{S}_{aw}^a$. The last assertion follows from successive application of the $\mathrm{W}$ part of Lemma~\ref{SpanOrth} and point $3$ of Proposition~\ref{prop:ST}.
\end{proof}
This means that an arrow diagram formula must be represented by a series of $w$-orbits of arrow diagrams. In practice, this condition is most of the time satisfied by construction. An important example is the formal sum of all elements contained in a set that is stable under $w$-moves.
\subsubsection*{$\mathrm{R}_1$ and $\mathrm{R}_2$ invariance}
\begin{proposition}
\label{thm:R1et2}
Let $\mathcal{A}\in \widehat{\mathfrak{A}}$. Then the function $G \mapsto\left\langle S(\mathcal{A}),I(G)\right\rangle$ is invariant under $\mathrm{R}_1$ (\textit{resp. } $\mathrm{R}_2$) moves if and only if $\mathcal{A}$ satisfies all $\mathrm{AP}_1$ (\textit{resp. } $\mathrm{AP}_2^{
(n-2),2}$) relations. \end{proposition}
\begin{proof}
By Proposition~\ref{thm:PolyakAlg}, the $\mathrm{R}_1$ and $\mathrm{R}_2$ invariance are equivalent to the relations $\left\langle S(\mathcal{A}),\mathrm{P}_1\right\rangle=0$ and $\left\langle S(\mathcal{A}),\mathrm{P}_2\right\rangle=0$ being satisfied. Lemma~\ref{ImS} implies that the second of these relations is actually equivalent to $\langle S(\mathcal{A}),\mathrm{AP}_2^{(n-2),2}
\rangle =0$. Lemma~\ref{SpanOrth} concludes the proof.
\end{proof}
In practice, these conditions are easy to check with the naked eye.
\subsubsection*{$\mathrm{R}_3$ invariance}
\begin{definition}\label{def:nice}
Say that a based diagram $A_\bullet$ is \textit{nice} if its base edge
\begin{itemize}
\item is decorated by $1$,
\item is bounded by the endpoints of two different arrows.
\end{itemize}
\end{definition}
\begin{lemma}\label{epsilonbased}
Nice based diagrams have a sign $\varepsilon$ induced by Definition~\ref{def_epsilon}.
\end{lemma}
\begin{proof}
The sign $\varepsilon$ from Definition~\ref{def_epsilon} takes as argument an edge of a Gauss diagram. Since it does not depend on the writhes of the arrows, it is well-defined for arrow diagrams with a preferred edge.
\end{proof}
\begin{definition}\label{def:delta}
Let $A_\bullet$ be a based arrow diagram. If $A_\bullet$ is not nice, then put $\delta (A_\bullet)=0.$
If $A_\bullet$ is nice, then
\begin{enumerate}
\item Shrink the base edge to a point.
\item Multiply the resulting degenerate diagram by $\varepsilon(A_\bullet)$, and call the result $\delta (A_\bullet).$
\end{enumerate}
This process defines a map $\delta:\widehat{\mathfrak{A}_\bullet} \rightarrow \widehat{\mathfrak{D}
\mathfrak{A}}$ -- well-defined since any monotonic diagram has finitely many preimages.
Now let $A$ be an arrow diagram, and denote by $\bullet(A)\in \mathfrak{A}_\bullet$ the sum of all based diagrams that one can form by choosing a base edge in $A$. Finally, define
$$
\begin{array}{cccc}
d: & \widehat{\mathfrak{A}} & \rightarrow & \widehat{\mathfrak{DA}} \\
& \mathcal{A} & \mapsto &
\delta (\bullet(\mathcal{A}))
\\
\end{array}.
$$
\end{definition}
\begin{theorem}\label{thm:main}
Let $\mathcal{A}\in \widehat{\mathfrak{A}}$ satisfy the $\mathrm{R}_2$ invariance condition from Proposition~\ref{thm:R1et2}. Then the following are equivalent:
\begin{itemize}
\item The map $G\mapsto\left\langle S(\mathcal{A}),I(G)\right\rangle$ is invariant under $\mathrm{R}_3$-moves.
\item $d(\mathcal{A})=0$ modulo the triangle relations.
\item $\mathcal{A}\in \operatorname{Span}^\perp (A6T)$.
\end{itemize}
\end{theorem}
The proof is similar to that of Theorem~3.6 from \citep{MortierPolyakEquations}.
\subsubsection{Invariance criterion for $w$-orbits}
As we have seen, an arrow diagram series defines an invariant only if it has a preimage in $\widehat{\mathfrak{A}/\AW}$. The above criteria for invariance under $\mathrm{R}$-moves nicely extend in terms of that preimage. This is especially interesting when $w$-orbits are well understood, for instance if $\pi$ is abelian and $w$ is trivial (Proposition~\ref{prop:abelian}).
\subsubsection*{$w$-moves for based and degenerate diagrams}
These moves are defined similarly to the regular version (Fig.\ref{pic:conjugacy}, top-left and top-right), with additional moves in the degenerate case, which modify the neighborhood of the arrows meeting at the degenerate point (Fig.\ref{pic:degwmove} shows the extreme cases -- there are obvious intermediate ones, when only one or two of the unseen arcs are empty). The arrows both change orientations if $w(g)=-1$, and keep them otherwise.
\begin{figure}
\caption{The \enquote{degenerate} $w$-moves}
\label{pic:degwmove}
\end{figure}
\begin{definition}
Pick a based arrow diagram $A_\bullet$. If its base edge is bounded by two endpoints of the same arrow, then set $\delta_w (\left[ A_\bullet\right] )=0$. If it is bounded by two different arrows, then
\begin{enumerate}
\item pick a nice diagram $A_\bullet^{(1)}$ $\mathrm{AW}$-equivalent to $A_\bullet$,
\item set $\delta_w (\left[ A_\bullet\right] )=\left[ \delta(A_\bullet^{(1)})\right] $.
\end{enumerate}
Finally, set
$$d_w(\left[ A\right] )=\delta_w (\left[\bullet(A) \right] ).$$
As usual, $\delta_w$ and $d_w$ are defined componentwise on formal series of $w$-orbits.
\end{definition}
\begin{proof}[Consistency of the definition]
Observe that when the endpoints of an edge belong to two different arrows, then any value can be given to its marking by using the appropriate $w$-move. This proves that step $1$ is always possible -- though not in a unique way.
For step $2$, first notice that two $w$-moves performed on different arrows from an arrow diagram commute, so that any finite sequence of $w$-moves amounts to a sequence made of one move for each arrow. If such a sequence leaves the marking of the base edge unchanged (and equal to $1$), then the moves on the two adjacent arrows must involve the same conjugating element $g$.
It follows that whenever $A_\bullet$ and $A^\prime_\bullet$ are nice and lie in the same $w$-orbit,
\begin{itemize}
\item $\varepsilon(A_\bullet)=
\varepsilon(A^\prime_\bullet),$
\item the degenerate diagrams obtained by shrinking their bases also lie in the same orbit.
\end{itemize}
Hence $\delta_w$ is well-defined.
\end{proof}
\subsubsection*{How to handle the quotients by $\nabla$ relations}
Note that the quotient of $\mathfrak{DA}$ by the $\nabla$ relations does not fit in the general framework in which symmetry-preserving maps were introduced: indeed, it does not come from an equivalence relation at the level of the \textit{set} of diagrams.
However, the set of classes of monotonic diagrams forms a basis of $\mathfrak{DA}/\nab$. This induces an injective section $i$ of the projection $s:\mathfrak{DA}\twoheadrightarrow\mathfrak{DA}/\nab$. Both $s$ and $i$ extend componentwise to formal series of $w$-orbits.
The same phenomenon happens between $\widehat{\mathfrak{DA}/\AW}$ and $\widehat{\mathfrak{DA}/\AWnab}$, because $w$-moves never change the status of a degenerate diagram -- monotonic or not -- and the set of monotonic $w$-orbits still forms a basis of $\mathfrak{DA}/\AWnab$.
Again there are maps $s_w$ and $i_w$ such that $s_w\circ i_w=\operatorname{Id}$, at the level of formal series.
Finally, this allows us to construct a symmetry-preserving map $\mathrm{S}_{w\nabla}^{\nabla}:\widehat{\mathfrak{DA}/\AWnab}\rightarrow \widehat{\mathfrak{DA}/\nab}$, by restricting $\mathrm{S}_{dw}^{d}: \widehat{\mathfrak{DA}/\AW}\rightarrow \widehat{\mathfrak{DA}}$ to the subspaces of monotonic diagrams.
\subsubsection*{Summary}
In the following diagram, the two squares and the two triangles on the left are commutative, as well as the internal and external squares on the right. Except for $\mathrm{S}_{w\nabla}^{\nabla}$,
all vertical arrows are symmetry-preserving injections in the usual sense.
\begin{center}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (.) at (4,-0.8) {.};
\node (1) at (-5,2) {$\widehat{\mathfrak{A}}$};
\node (1b) at (-2.5,0.8) {$\widehat{\mathfrak{A}_\bullet}$};
\node (2) at (0,2) {$\widehat{\mathfrak{DA}}$};
\node (3) at (-5,-2) {$\widehat{\mathfrak{A}/\AW}
$};
\node (3b) at (-2.5,-0.8) {$\widehat{\mathfrak{A}_\bullet/\AW}
$};
\node (4) at (0,-2) {$\widehat{\mathfrak{DA}/\AW}$};
\node (6) at (2.6,-0.8) {$\widehat{\mathfrak{DA}/\AWnab}$};
\node (5) at (2.6,0.8) {$\widehat{\mathfrak{DA}/\nab}$};
\node (s) at (1.95,2.15) {$i$};
\node (sw) at (2.1,-2.2) {$i_w$};
\draw[->,>=latex] (1) -- (2) node[midway,above] {$d$};
\draw[->,>=latex] (1) -- (1b) node[midway,below] {$\bullet$};
\draw[->,>=latex] (1b) -- (2) node[midway,below] {$\delta$};
\draw[->,>=latex] (3) -- (3b) node[midway,above] {$\bullet$};
\draw[->,>=latex] (3b) -- (4) node[midway,above] {$\delta_w$};
\draw[right hook->,>=latex] (3) -- (1) node[midway,left] {$\mathrm{S}_{aw}^a$};
\draw[right hook->,>=latex] (3b) -- (1b) node[midway,left] {$\mathrm{S}_{\bullet w}^\bullet$};
\draw[right hook->,>=latex] (4) -- (2) node[midway,left] {$\mathrm{S}_{dw}^{d}$};
\draw[->,>=latex] (3) -- (4) node[midway,below] {$d_w$};
\draw[->>,>=latex] (4) -- (6) node[midway,above left] {$s_w$};
\draw[left hook->,>=latex] (6) to[out=245,in=350] (4);
\draw[->>,>=latex] (2) -- (5) node[midway,below left] {$s$};
\draw[right hook->,>=latex] (5) to[out=120,in=15] (2);
\draw[right hook->,>=latex] (6) -- (5) node[midway,right] {$\mathrm{S}_{w\nabla}^{\nabla}$};
\end{tikzpicture}
\end{center}
\begin{theorem}
\label{thm:mainorbits}
An arrow diagram series $\mathcal{A}$ is an arrow diagram formula if and only if each of the following holds:
\begin{enumerate}
\item $\mathcal{A}$ has a preimage $\mathcal{A}_w$ by $\mathrm{S}_{aw}^a$.
\item $\mathcal{A}_w$ is mapped to $0$ in $\widehat{\mathfrak{DA}/\AWnab}$.
\item $\mathcal{A}_w$ satisfies the equations
$$\begin{array}{ccc}
\left\langle \mathcal{A}_w,\mathrm{T}_{aw}^a (\mathrm{AP_1})\right\rangle=0 & \text{and} & \left\langle \mathcal{A}_w,\mathrm{T}_{aw}^a (\mathrm{AP_2^{(n-2),2}})\right\rangle=0
\end{array}.$$
\end{enumerate}
\end{theorem}
\begin{proof}
$1$ is necessary because of Proposition~\ref
{prop:omega}, and $3$ because of Proposition~\ref{thm:R1et2} and point $2$ of Proposition~\ref{prop:ST}.
If $1$ and $3$ are satisfied, then Theorem~\ref{thm:main} implies that $\mathcal{A}$ defines an arrow diagram formula if and only if $s(d(\mathcal{A}))=0$. This is equivalent to $\mathrm{S}_{w\nabla}^{\nabla}( s_w( d_w(\mathcal{A}_w)))=0$ since the diagram commutes, and to $s_w( d_w(\mathcal{A}_w))=0$ since $\mathrm{S}_{w\nabla}^{\nabla}$ is injective.
\end{proof}
\begin{remark}
To apply the above theorem to an element of $\widehat{\mathfrak{A}/\AW}$, one never needs to push it through symmetry-preserving maps (in the upper half of the diagram). Hence the verifications are carried out using the most compact expressions.
Also, Point $3$ can be checked separately on each $w$-orbit that appears in $\mathcal{A}_w$, and for each of them it can be checked on any representative diagram by a simple criterion.\newline
For $\mathrm{AP}_1$ relations to hold, the situation at the left of Fig.\ref{R1R2orbits} simply must never happen (the $1$-marking is invariant under $w$-moves).\newline
For $\mathrm{AP_2^{(n-2),2}}$, the situations at the middle and at the right of the picture are forbidden:
\begin{enumerate}
\item if $w(g)=1$ and the arrows have \enquote{the same} orientation (in the sense of the picture),
\item if $w(g)\neq 1$ and the arrows have \enquote{different} orientations.
\end{enumerate}
Again these conditions are stable under $w$-moves.
\end{remark}
\begin{figure}
\caption{Forbidden situations -- the rules for the arrow orientations are explained above.}
\label{R1R2orbits}
\end{figure}
\subsection{Examples and applications}\label{sec:Grishanov}
It was noticed by M. Polyak \citep{PolyakTalk} that several families of formulas describing the finite-type invariants extracted from the Conway polynomial \citep{ChmutovKhouryRossi, Brandenbursky} actually define invariants of virtual knots. That also led him to conjecture that there should be an invariance criterion such as the one we give here for $\mathrm{R}_3$-moves.
We describe here two other families of examples: first, Grishanov-Vassiliev formulas \cite{Grishanov}, which are extended to degenerate cases that were excluded in the original paper, and second, given a surface $\Sigma$ we describe a regular invariant that can reasonably be used as a definition of the Whitney index for virtual knots whose projection on $\Sigma$ is a non nullhomotopic curve.
\subsubsection{Grishanov-Vassiliev's planar chain invariants}
\begin{definition}\label{def:nakedArrow}
A \textit{naked arrow diagram} is an arrow diagram with every decoration forgotten except for the local orientations. It is called \textit{planar} if no two of its arrows intersect -- thus one may regard it as a part of the plane, up to isotopy.
A \textit{chain presentation} of such a diagram with $n$ arrows is a way to number its $n+1$ bounded complementary components in the plane from $1$ to $n+1$, in such a way that the numbering increases when one goes from the left to the right of an arrow.
Let $U_n$ be the sum of all planar isotopy equivalence classes of chain presentations of naked arrow diagrams of degree $n$.
$U_n$ is called the \textit{universal degree $n$ planar chain} (\cite{Grishanov}, Definition 1).
\end{definition}
\begin{definition}
\label{def:Conjmap}
An \textit{$h_1$-decorated planar diagram} is the result of assigning an element of $h_1(\Sigma)\setminus\left\lbrace 1\right\rbrace$ to each region in a planar naked arrow diagram.
\end{definition}
We consider two ways to construct such diagrams:
\begin{enumerate}
\item From the datum of a chain presentation together with a system $\Gamma=\left\lbrace \gamma_1,\ldots,\gamma_{n+1}\right\rbrace$ -- which yields the notion of $\Gamma$-decorated diagrams.
\item From a planar arrow diagram on $\pi=\pi_1(\Sigma)$: each region of the diagram receives the conjugacy class of the product of the $\pi$-markings at its boundary, in the order induced by the orientation of the circle (the product is not well-defined, but its conjugacy class is).
\end{enumerate}
Call $\Phi_\Gamma$ the sum obtained by decorating every diagram in $U_n$ with a fixed system $\Gamma$ (\cite{Grishanov}, Definition 2). Note that some of the summands in $U_n$ -- namely those with non trivial symmetries -- may lead to the same decorated diagram if some of the $\gamma_i$'s are equal; unlike Grishanov-Vassiliev, we do not forbid that. Of course these summands appear with coefficients greater than $1$ in $\Phi_\Gamma$.
To understand $\Phi_\Gamma$ as an arrow diagram series, we apply the machinery of symmetry-preserving maps from Subsection~\ref{sec:Sinj}. Even though it is absent from the notations, all considered diagrams are planar:
\begin{itemize}
\item $\mathfrak{A}^\Gamma$ is the $\mathbb{Q}$-space generated by $\Gamma$-decorated planar diagrams (hence of degree $n$).
\item $\mathfrak{A}^{\rightsquigarrow \Gamma}$ is the subspace of $\mathfrak{A}_n$ generated by planar arrow diagrams on $\pi_1(\Sigma)$ \textit{that induce $\Gamma$-decorated diagrams}.
\item $\mathfrak{A}_n^{1,2,\ldots}$ is the $\mathbb{Q}$-space generated by all chain presentations of planar naked arrow diagrams of degree $n$.
\end{itemize}
Note that $\mathfrak{A}_n^{1,2,\ldots}$ is not defined by an equivalence relation on rigid Gauss diagrams on $\pi_1(\Sigma)$. The common refinement of all the types of Gauss diagrams in play here is the type of planar Gauss diagrams on $\pi_1(\Sigma)$ endowed with a chain presentation, such that the chain presentation and the $\pi$-markings induce the same $h_1$-decorated diagram -- it is the pullback of Diagram \eqref{eq:proofGV} below.
Let us denote by $\operatorname{S}_\Gamma^a$ the symmetry-preserving map $\widehat{\mathfrak{A}^\Gamma}\hookrightarrow \widehat{\mathfrak{A}_n}$, and set $$\widetilde{\Phi}_\Gamma= \operatorname{S}_\Gamma^a(\Phi_\Gamma).$$
\begin{theorem}\label{thm:Grishanov}
For any system of non-trivial conjugacy classes $\Gamma=(\gamma_1,\ldots,\gamma_{n+1})$, $\widetilde{\Phi}_\Gamma$ defines an arrow diagram formula for virtual knots.
\end{theorem}
The reason why $\widetilde{\Phi}_\Gamma$ coincides \textit{as an invariant} with Grishanov-Vassiliev's ${\Phi}_\Gamma$ is Point $2$ of Proposition~\ref{prop:ST}. This theorem improves Theorem $1$ of \cite{Grishanov}, since we remove the assumption that $\Gamma$ is unambiguous (\textit{i.e. } any of the $\gamma_i$'s may coincide) and show that $\widetilde{\Phi}_\Gamma$ is an invariant for virtual knots.
\begin{proof}
$1.$ First, notice that an equivalence class of arrow diagrams under $\mathrm{AW}$-moves determines an $h_1$-decorated diagram. Thus the map $\operatorname{S}_\Gamma^a$ factorizes through $\operatorname{S}_{aw}^a$ (by point $1.$ of Proposition~\ref{prop:ST}), whence Proposition~\ref{prop:omega} implies that $\widetilde{\Phi}_\Gamma$ defines an invariant under $w$-moves.
$2.$ The fact that no $\gamma_i$ may be trivial gives immediately the condition of $\mathrm{R}_1$ and $\mathrm{R}_2$ invariance from Proposition~\ref{thm:R1et2}.
$3.$ For $\mathrm{R}_3$, the most convenient approach here is to check condition $3$ of Theorem~\ref{thm:main}. In any $A6T$ relation, only three diagrams can possibly have pairwise non-intersecting arrows, and either all three of them do or none does. This yields two kinds of reduced relations. Let us consider only the one in Fig.\ref{pic:reduced6T} (ignore the markings $i$, $j$, $k$ for now); the other case is completely similar. Write the relator of the picture as $A_1-A_2-A_3$ in the order of reading. One has to prove that $$\left\langle \widetilde{\Phi}_\Gamma,
A_1
\right\rangle=\left\langle \widetilde{\Phi}_\Gamma,
A_2\right\rangle+\left\langle \widetilde{\Phi}_\Gamma,
A_3\right\rangle.$$
\begin{figure}
\caption{One of the two reduced $6$-term relations for planar diagrams}
\label{pic:reduced6T}
\end{figure}
The three spaces defined previously fit into the diagram
\begin{equation} \label{eq:proofGV}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node (1) at (0,0) {$\mathfrak{A}^\Gamma$};
\node (.) at (2.5,0) {.};
\node (2) at (1.5,1.5) {$\mathfrak{A}_n^{1,2,\ldots}$};
\node (3) at (-1.5,1.5) {$\mathfrak{A}^{\rightsquigarrow \Gamma}$};
\draw[->>,>=latex] (2) -- (1) node[midway,below right] {$\operatorname{T}^{1,2,\ldots}_\Gamma$};
\draw[->>,>=latex] (3) -- (1) node[midway,below left] {$\operatorname{T}^a_\Gamma$};
\end{tikzpicture}
\end{equation}
Pick a planar arrow diagram $A$ of degree $n$, and consider the set $\operatorname{Dec}(A)$ of all rigid representatives of all preimages of $\operatorname{T}_\Gamma^{a}(A)$ under the map $\operatorname{T}^{1,2,\ldots}_\Gamma$. By Definition~\ref{def:Sinj}, the cardinality of that set is the sum of all coefficients of $\operatorname{S}_\Gamma^{1,2,\ldots}(\operatorname{T}_\Gamma^{a}(A))$, which means, since the sum of all generators of $\mathfrak{A}_n^{1,2,\ldots}$ is $U_n$, that:
$$\sharp \operatorname{Dec}(A) = \left(U_n,\operatorname{S}_\Gamma^{1,2,\ldots}(\operatorname{T}_\Gamma^{a}(A))\right).$$
But no diagram decorated with a chain presentation admits non-trivial symmetries, whence, by successive applications of Proposition~\ref{prop:ST} $2.$,
$$\begin{array}{ccl}
\sharp \operatorname{Dec}(A) & = & \left\langle U_n,\operatorname{S}_\Gamma^{1,2,\ldots}(\operatorname{T}_\Gamma^{a}(A))\right\rangle\\
& = & \left\langle \Phi_\Gamma,\operatorname{T}_\Gamma^{a}(A)\right\rangle\\
& = &
\left\langle \widetilde{\Phi}_\Gamma,A
\right\rangle.
\end{array}$$
Now it remains to see that
$$
\sharp \operatorname{Dec}(A_1)=\sharp \operatorname{Dec}(A_2)+\sharp \operatorname{Dec}(A_3).$$
Look at Fig.\ref{pic:reduced6T} again, now considering the markings $i$, $j$, $k$. Each element of $\operatorname{Dec}(A_1)$ determines either an element of $\operatorname{Dec}(A_2)$, or an element of $\operatorname{Dec}(A_3)$, as indicated by the picture, depending on whether $i<j$ or $i>j$.
This separates $\operatorname{Dec}(A_1)$ into two parts, respectively in bijection with $\operatorname{Dec}(A_2)$ and $\operatorname{Dec}(A_3)$, and terminates the proof.
\end{proof}
\subsubsection{There is a Whitney index for non nullhomotopic virtual knots}
In the classical framework, the Whitney index is an invariant of regular plane curve homotopies which, together with the total writhe number, classifies the representatives of any given knot type up to regular isotopy. In other words, these invariants count the (algebraic) number of Reidemeister I moves that \textit{have to} happen in a sequence of moves connecting two given diagrams. Here we describe such an invariant for \textit{virtual} knot diagrams whose underlying curve on $\Sigma$ is not homotopically trivial.
\subsubsection*{The classical Whitney index}
Let $\delta:\mathbb{S}^1\rightarrow\mathbb{R}^2$ be a smooth immersion (with non-vanishing differential). There is an associated Gauss map
$$\begin{array}{cccl}
\Gamma: & \mathbb{S}^1 & \rightarrow & \mathbb{S}^1 \\
& p & \mapsto & u_p(\delta)
\end{array},$$
where $u_p(\delta)$ is the unitary tangent vector to $\delta$ at the point $p$. It depends on a trivialization of the tangent space to $\mathbb{R}^2$.
\begin{definition}[usual Whitney index]
The index of the above Gauss map only depends on the homotopy class of $\delta$ within the space of smooth immersions. It is called the \textit{rotation number}, or \textit{Whitney index}, of $\delta$.\end{definition}
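For instance, with the standard counterclockwise orientation and the usual trivialization of the plane, the boundary of a round disc has Whitney index $+1$, while the usual figure-eight curve has Whitney index $0$; these classical examples are recalled here only as familiar reference points.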
Given a projection $\mathbb{R}^3\rightarrow\mathbb{R}^2$, a generic isotopy of a knot $\mathbb{S}^1\rightarrow\mathbb{R}^3$ is called a \textit{regular isotopy} if the corresponding sequence of Reidemeister moves does not involve R-I. Clearly, the Whitney index of planar loops induces an invariant of regular isotopy classes of knots. In practice, it can be easily computed by looking at the Seifert circles of the projection (each of them contributes by $+1$ or $-1$). The total writhe number of a classical knot projection is also an invariant of regular isotopy. These two invariants together satisfy the following
\begin{lemma}[see \citep{KonK}]\label{lem:regularisotopy}
Two equivalent knot diagrams are regularly equivalent if and only if their projections have the same total writhe and the same Whitney index.
\end{lemma}
The proof (which we omit) relies essentially on the \textit{Whitney trick} (see \citep{KonK}) and the following variation table:
\begin{figure}
\caption{R-I moves sorted by their effect on the regular invariants}
\label{RIsorted}
\end{figure}
\subsubsection*{The virtual framework}
We want to define a Whitney index for virtual knots, that satisfies a version of Lemma~\ref{lem:regularisotopy}.
Given the virtual Reidemeister I moves, it does not seem reasonable to hope for counting the degree of a Gauss map. Relatedly, the Seifert circles are no longer embedded, and they do not have a well-defined contribution (at least not in the previous sense). Even when one looks only at real knot diagrams, the Whitney index is no longer invariant when virtual moves are allowed: see Fig.\ref{pic:regvir}. In other words, though virtual moves do not connect knot diagrams that are not isotopic in the usual sense, they do add bridges between different regular isotopy classes.
\begin{figure}
\caption{A \enquote{regular} sequence of moves involving virtual crossings}
\label{pic:regvir}
\end{figure}
From now on, let $\Sigma$ be an orientable surface with non-trivial fundamental group.
\begin{definition}\label{def:regvirtuals}
Two virtual knot diagrams in $\Sigma$ are called \textit{regularly equivalent} if they are connected by a sequence of moves that does not involve the \textit{real} Reidemeister I move.
\end{definition}
In \citep{KauffmanVKT99}, regular equivalence is defined as equivalence under all moves except the real R-I \textit{and} the virtual R-I. Here, our goal is to define maps on the set of Gauss diagrams, so implicitly they must be invariant under all detour moves anyway.
\begin{figure}
\caption{The regular invariants for non nullhomotopic virtual knots}
\label{pic:regvirtual}
\end{figure}
\begin{lemma}\label{lem:reginsigma}
The $h_1$-decorated diagram series $v_l$ and $v_r$ from Fig.\ref{pic:regvirtual} define invariants of regular equivalence. Moreover, two virtual knot diagrams from the same knot type with non trivial homotopy class are regularly equivalent as soon as both $v_l$ and $v_r$ coincide on them.
\end{lemma}
\begin{proof}
\textit{First assertion.} First, notice that since $\Sigma$ is orientable, $w=w_1(T\Sigma)$ is trivial and the $w$-moves never change the orientation of an arrow. Hence a $w$-orbit of planar Gauss diagrams on $\pi_1(\Sigma)$ determines an $h_1(\Sigma)$-decorated diagram (see Definition~\ref{def:Conjmap}). Restricted to the diagrams that appear in $v_l$ and $v_r$, this forgetful map is actually a $1-1$ correspondence.
Hence $v_l$ and $v_r$ can be regarded as series of $w$-orbits of Gauss diagrams, so that they satisfy the $w$ invariance criterion (Proposition~\ref{prop:omega}). The invariance under $\mathrm{R}_2$ and $\mathrm{R}_3$ moves follows from Theorem \ref{thm:mainorbits}.
\textit{Second assertion.} The proof is essentially the same as that of Lemma~\ref{lem:regularisotopy}. A little loop can run along a knot diagram without using R-I moves even where there are virtual crossings.
The table from Fig.\ref{RIsorted} becomes, respectively (for the pair $(v_l,v_r)$):
$$\begin{array}{cccc}
(0,+1) & (-1,0) & (+1,0) &(0,-1).
\end{array}$$
This is the essential reason why one needs the assumption that the knot diagrams are not nullhomotopic: otherwise the increase would be $(0,0)$ for all R-I moves.
\end{proof}
Finally, looking at the above table, which details \textit{how} $v_l$ and $v_r$ control the R-I moves, it appears that:
\textbf{1.} $v_r+v_l$ behaves like the total writhe number under Reidemeister moves: it has the same \enquote{derivative}. It follows that these differ by a constant that depends only on the virtual knot type, \textit{i.e. } a virtual knot invariant in the usual sense (see Fig.\ref{pic:totwrithe}).
\begin{figure}
\caption{The difference between $v_r+v_l$ and the (usual) total writhe number}
\label{pic:totwrithe}
\end{figure}
\textbf{2.} \textit{In restriction to real knot diagrams}, $v_r-v_l$ has the same derivative as the Whitney index, with respect to Reidemeister moves. Hence there is an invariant of real knots $c$ such that for every real knot diagram $D$, $v_r(D)-v_l(D)+c(D)$ is the usual Whitney index of $D$.
Hence we have proved:
\begin{deflemma}Call $v_r-v_l$ the \emph{Whitney index} of non nullhomotopic virtual knot diagrams, and call $v_r+v_l$ their \emph{writhe number}. These make Lemma~\ref{lem:regularisotopy} hold for non nullhomotopic virtual knot diagrams.
\end{deflemma}
\end{document} |
\begin{document}
\title[Fully bounded noetherian rings]{Fully bounded noetherian rings and
Frobenius extensions}
\author{S. Caenepeel}
\address{Faculty of Engineering,
Vrije Universiteit Brussel, VUB, B-1050 Brussels, Belgium}
\email{scaenepe@vub.ac.be}
\urladdr{http://homepages.vub.ac.be/\~{}scaenepe/}
\author{T. Gu\'ed\'enon}
\address{Faculty of Engineering,
Vrije Universiteit Brussel, VUB, B-1050 Brussels, Belgium}
\email{tguedeno@vub.ac.be, guedenon@caramail.com}
\thanks{Research supported by the project G.0278.01 ``Construction
and applications of non-commutative geometry: from algebra to physics"
from FWO Vlaanderen}
\subjclass{16W30}
\keywords{Frobenius extension, Fully bounded noetherian ring, coring, Hopf
algebra action, quasi-projective module}
\begin{abstract}
Let $i:\ A\to R$ be a ring morphism, and $\chi:\ R\to A$ a right $R$-linear map
with $\chi(\chi(r)s)=\chi(rs)$ and $\chi(1_R)=1_A$. If $R$ is a Frobenius $A$-ring, then
we can define a trace map ${\rm tr}\,:\ A\to A^R$. If there exists an element of trace 1
in $A$, then $A$ is right FBN if and only if $A^R$ is right FBN and $A$ is
right noetherian. The result can be generalized to the case where $R$ is an
$I$-Frobenius $A$-ring. We recover results of Garc\'{\i}a and del R\'{\i}o and of
D\v{a}sc\v{a}lescu, Kelarev and Torrecillas on actions of groups and Hopf algebras
on FBN rings as special cases. We also obtain applications to extensions
of Frobenius algebras, and to Frobenius corings with a grouplike element.
\end{abstract}
\maketitle
\section*{Introduction}
A ring $A$ is called right bounded if every essential right ideal contains a non-zero two-sided ideal.
$A$ is right fully bounded noetherian or right FBN if $A$ is noetherian, and
$A/P$ is right bounded for every two-sided prime ideal $P$ of $A$.
Obviously commutative noetherian rings are right FBN; more generally,
noetherian PI-rings and artinian rings are FBN. A series of conjectures in classical
ring theory can be proved in the case of rings with the FBN property; we refer
to the introduction of \cite{Sorin} for a brief survey.\\
Assume that a finite group $G$ acts on $A$. Garc\'{\i}a and Del R\'{\i}o \cite{Garcia}
investigated the relationship between the FBN property for $A$ and its subring
of invariants $A^G$. The main result is that, in case $A$ is right noetherian,
the right FBN property for $A$ is equivalent to the right FBN property for $A^G$,
if there exists an element in $A$ having trace $1$. A similar statement was proved
in \cite{Nasta} for rings graded by a finite group $G$.
These results can be generalized to Hopf algebra actions (see
\cite{Sorin,Guedenon}). \\
We have observed that the methods introduced in \cite{Garcia} can be applied in an apparently completely
different situation. Let $S$ be a Frobenius algebra (with Frobenius system
$(e=e^1\otimes e^2,\overline{\nu})$) and $j:\ S\to A$ an algebra map, with $A$ a right
noetherian ring. If there exists $a\in A$ such that $j(e^1)aj(e^2)=1$, then
$A$ is right FBN if and only if $C_S(A)$ is right FBN.\\
In this note, we propose a unified approach to these results, based on the concept
of an $A$-ring with a grouplike character, as introduced in \cite{CVW}. Basically, this
consists of a ring morphism $i:\ A\to R$, together with a right $A$-linear map
$\chi:\ R\to A$ such that the formula $a\hbox{$\leftharpoonup$} r=\chi(ar)$ makes $A$ into a
right $R$-module. The subring of invariants is defined as $B=\{b\in A~|~b\chi(r)=\chi(br)\}$.
The main result is basically the following: if $R$ is a Frobenius $A$-ring, and $A$ is projective
as a right $R$-module, then $A$ is right FBN if and only if $B$ is right FBN and
$A$ is right noetherian. The methods of proof are essentially the same as in
\cite{Garcia}. If $R$ is a Frobenius $A$-ring, then we can define a trace map
${\rm tr}\,:\ A\to B$, and $A$ is projective (and a fortiori quasi-projective) as a right
$R$-module if and only if there exists an element of trace 1. The condition that
$R$ is Frobenius can be relaxed in the sense that it suffices that $R$ is
Frobenius of the second kind, with respect to a strict Morita context
$(A,A,I,J,f,g)$. Then the trace map is a map ${\rm tr}\,:\ J\to B$.\\
The above mentioned results on group and Hopf algebra actions and extensions
of Frobenius algebras can be obtained as special cases. We also present an
application to Frobenius corings with a grouplike element.
\section{Rings with a grouplike character}\selabel{1}
Let $A$ be an associative ring with unit. The category of $A$-bimodules
${}_A\mathcal{M}_A$ is a monoidal category, and we can consider algebras in
${}_A\mathcal{M}_A$. Such an algebra $R$ is a ring $R$ together with a ring
morphism $i:\ A\to R$. The bimodule structure on $A$ is then given by
$arb=i(a)ri(b)$, for all $a,b\in A$ and $r\in R$.
A {\sl right grouplike character} on $R$ is a right $A$-linear map
$\chi:\ R\to A$ such that
\begin{equation}\eqlabel{1.1.0}
\chi(\chi(r)s)=\chi(rs)~~{\rm and}~~\chi(1_R)=1_A,
\end{equation}
for all $r,s\in R$. We then say that $(R,i,\chi)$ is an $A$-ring with
a right grouplike character.
Right grouplike characters were introduced in \cite{CVW}.
The terminology is motivated by the fact that the dual of a coring with
a grouplike element is a ring with a grouplike character (see \seref{7}).
For all $a\in A$, we have that
$$\chi(i(a))=\chi(1_R\cdot a)=\chi(1_R)a=1_Aa=a,$$
so $\chi\circ i={\rm Id}_A$, hence $i$ is injective and $\chi$ is surjective. Sometimes we will regard $i$ as
an inclusion.
$A$ is a right $R$-module, with right $R$-action
\begin{equation}\eqlabel{1.1.1}
a\hbox{$\leftharpoonup$} r=\chi(ar).
\end{equation}
$A$ is a cyclic right $R$-module, since
$$a=\chi(i(a))=\chi(1_Ai(a))=1_A\hbox{$\leftharpoonup$} i(a),$$
for all $a\in A$. For $M\in \mathcal{M}_R$, the submodule of invariants is defined as
$$M^R=\{m\in M~|~mr=m\chi(r),~{\rm for~all~}r\in R\}.$$
Let
$$B=A^R=\{b\in A~|~b\chi(r)=\chi(br),~{\rm for~all~}r\in R\}.$$
Then $B$ is a subring of $A$, $M^R$ is a right $B$-module, and we have the
invariants functor $(-)^{R}:\ \mathcal{M}_R\to \mathcal{M}_B$. We will now present some
elementary properties of
$$Q=R^R=\{q\in R~|~qr=q\chi(r),~{\rm for~all~}r\in R\}.$$
\begin{lemma}\lelabel{1.1}
Let $(R,i,\chi)$ be an $A$-ring with a right grouplike character.
\begin{enumerate}
\item $Q$ is a $(R,B)$-subbimodule of $R$;
\item $\chi$ restricts to a $B$-bimodule map $\chi:\ Q\to B$;
\item if $1_R\in Q$, then $i$ is an isomorphism of rings, with inverse $\chi$.
\end{enumerate}
\end{lemma}
\begin{proof}
1) We refer to \cite[Prop. 2.2]{CVW}.\\
2) For all $q\in Q$ and $r\in R$, we have
$$\chi(q)\chi(r)=\chi(q\chi(r))=\chi(qr)=\chi(\chi(q)r),$$
hence $\chi(q)\in B$. $\chi$ is right $A$-linear, so its restriction to $Q$
is right $B$-linear. For all $q\in Q\subset R$ and $b\in B$, we have,
by the definition of $A^R=B$ that $b\chi(q)=\chi(bq)$, so $\chi$ is also
left $B$-linear.\\
3) If $1_R\in Q$, then we have for all $r\in R$ that
$$r=1_Rr=1_R\chi(r)=1_Ri(\chi(r))=i(\chi(r)).$$
It follows that $i$ is a left inverse of $\chi$. We have seen above that $i$
is always a right inverse of $\chi$, so it follows that $i$ is an isomorphism.
\end{proof}
If $M\in \mathcal{M}_R$, then ${\rm Hom}_R(A,M)\in \mathcal{M}_B$, with right $B$-action
$(fb)(a)=f(ba)$, for all $b\in B$, $f\in {\rm Hom}_R(A,M)$ and $a\in A$.\\
${\rm End}_R(A)$ is a $B$-bimodule, with left $B$-action $(bf)(a)=bf(a)$,
for all $b\in B$, $f\in {\rm End}_R(A)$ and $a\in A$.
\begin{lemma}\lelabel{1.2}
Let $(R,i,\chi)$ be an $A$-ring with a right grouplike character,
and $M$ a right $R$-module.
\begin{enumerate}
\item ${\rm Hom}_R(A,M)\cong M^R$ as right $B$-modules;
\item ${\rm End}_R(A)\cong B$ as $B$-bimodules and as rings.
\end{enumerate}
\end{lemma}
\begin{proof}
1) For $f\in {\rm Hom}_R(A,M)$ and $r\in R$, we have
$$f(1_A)r=f(1_A\hbox{$\leftharpoonup$} r)=f(\chi(r))=f(1_A)\chi(r),$$
so $f(1_A)\in M^R$, and we have a well-defined map
$$\phi:\ {\rm Hom}_R(A,M)\to M^R,~~\phi(f)=f(1_A).$$
$\phi$ is right $B$-linear since
$$\phi(fb)=(fb)(1_A)=f(b1_A)=f(1_Ab)=f(1_A)b=\phi(f)b.$$
The inverse of $\phi$ is given by the formula
$$\phi^{-1}(m)(a)=ma,$$
for all $m\in M^R$ and $a\in A$.\\
2) If $M=A$, then $\phi$ is also left $B$-linear since
$$\phi(bf)=(bf)(1_A)=bf(1_A)=\phi(f)b.$$
\end{proof}
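For later orientation, we sketch the guiding example coming from group actions; group (and Hopf algebra) actions are recovered as special cases of the general theory further on, and the explicit formulas below are only meant to be illustrative and may differ from the conventions used there. Let a finite group $G$ act on $A$ by ring automorphisms, let $R=A\ast G$ be the skew group ring, with basis $\{u_g~|~g\in G\}$ over $A$ and multiplication rule $u_ga=g(a)u_g$, and let $i$ be the canonical inclusion. Then
$$\chi\Bigl(\sum_{g\in G}a_gu_g\Bigr)=\sum_{g\in G}g^{-1}(a_g)$$
is a right grouplike character: right $A$-linearity follows from $u_ga=g(a)u_g$, and $\chi(\chi(r)s)=\chi(rs)$ is checked on $r=a_gu_g$ and $s=b_hu_h$, both sides being equal to $h^{-1}g^{-1}(a_g)h^{-1}(b_h)$. A direct computation then shows that $B=A^R=A^G$, the subring of $G$-invariant elements of $A$.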
\section{Quasi-projective modules}\selabel{2}
A right $R$-module $M$ is called {\sl quasi-projective} if the canonical
map ${\rm Hom}_R(M,M)\to {\rm Hom}_R(M,M/N)$ is surjective, for every $R$-submodule
$N$ of $M$. This means that
every right $R$-linear map $f:\ M\to M/N$
factorizes through the canonical projection $p:\ M\to M/N$, that is,
there exists a right $R$-linear map $g:\ M\to M$ such that
$f=p\circ g$.
\begin{proposition}\prlabel{2.1}
Let $(R,i,\chi)$ be an $A$-ring with a right grouplike character. The following
assertions are equivalent.
\begin{enumerate}
\item $A$ is quasi-projective as a right $R$-module;
\item for every right $R$-submodule $I$ of $A$, and every $a+I\in (A/I)^R$,
there exists $b\in B$ such that $b-a\in I$;
\item for every right $R$-submodule $I$ of $A$, $(A/I)^R\cong (B+I)/I$.
\end{enumerate}
\end{proposition}
\begin{proof}
$\underline{1)\Rightarrow 2)}$. Observe that
\begin{equation}\eqlabel{2.1.1}
(A/I)^R=\{a+I\in A/I~|~a\chi(r)-\chi(ar)\in I,~{\rm for~all~}r\in R\}.
\end{equation}
For $a+I\in (A/I)^R$, we have a well-defined right $A$-linear map
$$f:\ A\to A/I,~~f(a')=aa'+I.$$
$f$ is right $R$-linear since
\begin{eqnarray*}
&&\hspace*{-2cm} f(a'\hbox{$\leftharpoonup$} r)=a(a'\hbox{$\leftharpoonup$} r)+I=a\chi(a'r)+I\\
&=&\chi(aa'r)+I=((aa')\hbox{$\leftharpoonup$} r)+I=f(a')\hbox{$\leftharpoonup$} r.
\end{eqnarray*}
Let $p:\ A\to A/I$ be the canonical projection. Since $A$ is quasi-projective,
there exists $g\in {\rm Hom}_R(A,A)$ such that $p\circ g= f$, that is
$aa'+I=g(a')+I$ and, in particular, $a+I=g(1_A)+I$, or $g(1_A)-a\in I$.
Let us show that $b=g(1_A)\in B$. Indeed, for all $r\in R$, we have
\begin{eqnarray*}
&&\hspace*{-2cm}
\chi(br)-b\chi(r)=\chi(g(1_A)r)-g(1_A)\chi(r)
= (g(1_A)\hbox{$\leftharpoonup$} r)-(g(1_A)\hbox{$\leftharpoonup$} (i\circ \chi)(r))\\
&=& g(1_A\hbox{$\leftharpoonup$} r)-g((\chi\circ i\circ \chi)(r))
= g(\chi(r))-g(\chi(r))=0.
\end{eqnarray*}
$\underline{2)\Rightarrow 3)}$. The map $B\to (A/I)^R$, $b\mapsto b+I$ induces a
monomorphism $(B+I)/I\to (A/I)^R$. Condition 2) means precisely that this
map is surjective.\\
$\underline{3)\Rightarrow 1)}$. Take a right $R$-linear map $f:\ A\to A/I$, with
$I$ a right $R$-submodule of $A$. Then
$$\chi(f(1_A)r)=f(1_A)\hbox{$\leftharpoonup$} r=f(1_A\hbox{$\leftharpoonup$} r)=
f(\chi(1_Ar))=f(\chi(r))=f(1_A)\chi(r),$$
so $f(1_A)\in (A/I)^R\cong (B+I)/I$. Take $b\in B$ such that
$f(1_A)=b+I$, and consider the map $g:\ A\to A$, $g(a)=ba$.
$g$ is right $R$-linear since
$$g(a\hbox{$\leftharpoonup$} r)=b(a\hbox{$\leftharpoonup$} r)=b\chi(ar)=\chi(bar)=(ba)\hbox{$\leftharpoonup$} r=g(a)\hbox{$\leftharpoonup$} r.$$
Finally
$$(p\circ g)(a)=p(ba)=ba+I=f(1_A)a=f(a).$$
\end{proof}
In \prref{2.1}, we characterize quasi-projectivity of $A$ as a right $R$-module.
Projectivity has been characterized in \cite[Prop. 2.4]{CVW}:
\begin{proposition}\prlabel{2.2}
Let $(R,i,\chi)$ be an $A$-ring with a right grouplike character. The following
assertions are equivalent.
\begin{enumerate}
\item $A$ is projective as a right $R$-module;
\item there exists $q\in Q$ such that $\chi(q)=1$.
\end{enumerate}
We refer to \cite[Prop. 2.4]{CVW} for more equivalent properties.
\end{proposition}
\begin{proposition}\prlabel{2.3}\cite[4.11]{Albu}
Let $R$ be a ring, $M$ a quasi-projective right $R$-module, and $N$ a
noetherian right $R$-module. Then ${\rm Hom}_R(M,N)$ is a noetherian right
${\rm End}_R(M)$-module.
\end{proposition}
\section{$I$-Frobenius rings}\selabel{3}
Let $(R,i)$ be an $A$-ring, and $I=(A,A,I,J,f,g)$ a strict Morita context
connecting $A$ with itself. We say that $R$ is an $I$-Frobenius $A$-ring if there exist
an element
$e=e^1\otimes u^1\otimes e^2\in R\otimes_A I\otimes_A R$ (summation
understood implicitly) and
an $A$-bimodule map
$\overline{\nu}:\ R\otimes_A I\to A$ such that the following conditions are
satisfied, for all $r\in R$ and $u\in I$:
\begin{eqnarray}
&&re^1\otimes u^1\otimes e^2=e^1\otimes u^1\otimes e^2r;\eqlabel{3.1.1}\\
&&
\overline{\nu}(e^1\otimes_A u^1)e^2=1_R;\eqlabel{3.1.2}\\
&&
e^1\otimes_A u^1\overline{\nu}(e^2\otimes_A u)=1_R\otimes_A u.\eqlabel{3.1.3}
\end{eqnarray}
If $I=(A,A,A,A,{\rm id}_A,{\rm id}_A)$, then the notion ``$I$-Frobenius" coincides with the
classical Frobenius property. Equivalent definitions are given in
\cite[Theorem 2.7]{CDM}.\\
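As an elementary illustration of the classical case $I=A$ (this example is standard, is not needed in the sequel, and we do not equip it with a grouplike character, so it only illustrates the Frobenius conditions): the matrix ring $R=M_n(A)$, with $i:\ A\to M_n(A)$ the diagonal embedding, is a Frobenius $A$-ring, with Frobenius system $e=\sum_{k,l}E_{kl}\otimes_A E_{lk}$ (the $E_{kl}$ being the elementary matrices) and $\overline{\nu}={\rm tr}$, the usual matrix trace. Indeed, ${\rm tr}$ is an $A$-bimodule map,
$$\sum_{k,l}{\rm tr}(E_{kl})E_{lk}=\sum_{k}E_{kk}=1_R=\sum_{k,l}E_{kl}{\rm tr}(E_{lk}),$$
and $re=er$ is the familiar Casimir property of $\sum_{k,l}E_{kl}\otimes_A E_{lk}$.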
$f:\ I\otimes_A J\to A$ and $g:\ J\otimes_A I\to A$ are $A$-bimodule isomorphisms, and
\begin{equation}\eqlabel{3.1.4}
f(u\otimes_A v)u'=ug(v\otimes_A u')~~;~~g(v\otimes_A u)v'=vf(u\otimes_A v'),
\end{equation}
for all $u,u'\in I$ and $v,v'\in J$. We will write
$$f^{-1}(1_A)=\sum_i u_i\otimes v_i\in I\otimes_A J.$$
From the fact that $f$ is an $A$-bimodule isomorphism, it follows easily that
\begin{equation}\eqlabel{3.1.5}
\sum_i au_i\otimes v_i=\sum_i u_i\otimes v_ia,
\end{equation}
for all $a\in A$. We have the following generalization of \cite[Theorem 2.7]{CVW}.
\begin{theorem}\thlabel{3.1}
Let $(R,i,\chi)$ be an $I$-Frobenius $A$-ring with a right grouplike character.
Then $J$ is an $(R,B)$-bimodule, with left
$R$-action
\begin{equation}\eqlabel{3.1.6}
r\cdot v=\sum_i \overline{\nu}(rg(v\otimes \chi(e^1)u^1)e^2\otimes_A u_i)v_i,
\end{equation}
and we have an isomorphism $\alpha:\ J\to Q$ of $(R,B)$-bimodules.
\end{theorem}
\begin{proof}
The map $\alpha$ is defined by the formula
\begin{equation}\eqlabel{3.1.7}
\alpha(v)=g(v\otimes_A\chi(e^1)u^1)e^2,
\end{equation}
for all $v\in J$. Let us first show that $\alpha(v)\in Q$. For all $r\in R$,
we compute
\begin{eqnarray*}
&&\hspace*{-1cm}
\alpha(v)r= g(v\otimes_A\chi(e^1)u^1)e^2r\equal{\equref{3.1.1}}
g(v\otimes_A\chi(re^1)u^1)e^2\\
&\equal{\equref{1.1.0}}&g(v\otimes_A\chi(\chi(r)e^1)u^1)e^2
\equal{\equref{3.1.1}}g(v\otimes_A\chi(e^1)u^1)e^2\chi(r)=\alpha(v)\chi(r).
\end{eqnarray*}
$\alpha$ is right $B$-linear since
\begin{eqnarray*}
&&\hspace*{-2cm}\alpha(vb)=g(vb\otimes_A\chi(e^1)u^1)e^2=
g(v\otimes_Ab\chi(e^1)u^1)e^2\\
&=& g(v\otimes_A\chi(be^1)u^1)e^2
\equal{\equref{3.1.1}} g(v\otimes_A\chi(e^1)u^1)e^2b=\alpha(v)b,
\end{eqnarray*}
for all $b\in B$. The inverse $\beta$ of $\alpha$ is given by the composition
$$Q\subset R\rTo^{R\otimes_A f^{-1}}R\otimes_AI\otimes_AJ\rTo^{\overline{\nu}\otimes_AJ}
A\otimes_AJ\cong J,$$
or
$$\beta(q)=\sum_i\overline{\nu}(q\otimes_A u_i)v_i,$$
for all $q\in Q$. Indeed, we compute for all $q\in Q$ that
\begin{eqnarray*}
&&\hspace*{-15mm}
\alpha(\beta(q))=
g(\sum_i\overline{\nu}(q\otimes_A u_i)v_i\otimes_A\chi(e^1)u^1)e^2\\
&=&\sum_ig(\overline{\nu}(q\otimes_A u_i)v_i\chi(e^1)\otimes_Au^1)e^2
\equal{\equref{3.1.5}}
\sum_ig(\overline{\nu}(q\otimes_A \chi(e^1)u_i)v_i\otimes_Au^1)e^2\\
&=&
\sum_ig(\overline{\nu}(q\chi(e^1)\otimes_A u_i)v_i\otimes_Au^1)e^2
\equal{\equref{3.1.1}}
\sum_ig(\overline{\nu}(\chi(e^1)\otimes_A u_i)v_i\otimes_Au^1)e^2q\\
&=&
\sum_i\overline{\nu}(\chi(e^1)\otimes_A u_i) g(v_i\otimes_Au^1)e^2q
= \sum_i\overline{\nu}(\chi(e^1)\otimes_A u_i g(v_i\otimes_Au^1))e^2q\\
&\equal{\equref{3.1.4}}&
\sum_i\overline{\nu}(\chi(e^1)\otimes_A f(u_i \otimes_Av_i)u^1)e^2q
= \overline{\nu}(e^1\otimes_A u^1)e^2q=q.
\end{eqnarray*}
For all $v\in J$, we have that
\begin{eqnarray*}
&&\hspace*{-15mm}
\beta(\alpha(v))=
\sum_i\overline{\nu}(g(v\otimes_A\chi(e^1)u^1)e^2\otimes_A u_i)v_i=
\sum_ig(v\otimes_A\chi(e^1)u^1)\overline{\nu}(e^2\otimes_A u_i)v_i\\
&=&\sum_ig(v\otimes_A\chi(e^1)u^1\overline{\nu}(e^2\otimes_A u_i))v_i
\equal{\equref{3.1.3}} \sum_i g(v\otimes_A \chi(1_R)u_i)v_i\\
&=&\sum_i g(v\otimes_A u_i)v_i
\equal{\equref{3.1.4}} \sum_i vf(u_i\otimes_A v_i)=v.
\end{eqnarray*}
This shows that $\alpha$ is an isomorphism of right $B$-modules. We can transport
the left $B$-action on $Q$ to $J$ such that $\alpha$ becomes an $(R,B)$-bimodule
map. This yields formula \equref{3.1.6}.
\end{proof}
The composition
$${\rm tr}\,=\chi\circ \alpha:\ J\to Q\to B$$
is a $B$-bimodule map (see \leref{1.1}), and will be called the {\sl trace map}.
It is given by the formula
\begin{equation}\eqlabel{3.18}
{\rm tr}\,(v)=\chi(g(v\otimes_A\chi(e^1)u^1)e^2).
\end{equation}
Combining \prref{2.2} and \thref{3.1}, we obtain the following result:
\begin{proposition}\prlabel{3.2}
Let $(R,i,\chi)$ be an $I$-Frobenius $A$-ring with a right grouplike character.
The following assertions are equivalent.
\begin{enumerate}
\item $A$ is projective as a right $R$-module;
\item there exists $v\in J$ such that ${\rm tr}\,(v)=1_B$.
\end{enumerate}
\end{proposition}
Now assume that $R$ is a Frobenius $A$-ring, that is, $I=A$. Then the above formulas
simplify. $e=e^1\otimes e^2\in R\otimes_AR$, $\overline{\nu}:\ R\to A$ is an $A$-bimodule
map, and the trace map ${\rm tr}\,:\ A\to B$ is given by
$${\rm tr}\,(a)= \chi(a\chi(e^1)e^2).$$
\section{Fully bounded noetherian rings}\selabel{4}
We recall some definitions and basic results from \cite{Garcia}.
Let $R$ be a ring, and $M,P\in \mathcal{M}_R$. For a subset $X$ of ${\rm Hom}_R(P,M)$, we
write
$$r_P(X)=\cap\{{\rm Ker}\, f~|~f\in X\}.$$
In particular, for $X\subset M\cong {\rm Hom}_R(R,M)$, we have
$$r_R(X)=\{r\in R~|~xr=0,~{\rm for~all~}x\in X\}.$$
$M$ is called {\sl finitely} $P$-{\sl generated} if there exists an epimorphism of right
$R$-modules $P^n\to M\to 0$.\\
$M$ is called $P$-{\sl faithful} if ${\rm Hom}_R(P,M')\neq 0$, for every nonzero
submodule $M'\subset M$.\\
$R$ is called {\sl right bounded} if every essential right ideal contains a non-zero
two-sided ideal. $R$ is called {\sl right fully bounded} if $R/P$ is right bounded,
for every two-sided prime ideal $P$ of $R$. A ring $R$ that is right fully bounded
and right noetherian is called a {\sl right fully bounded noetherian ring} or a
{\sl FBN ring}. Characterizations of right FBN rings are given in \cite[Theorem 1.2]{Garcia}.
For later use, we recall one of them.
\begin{proposition}\prlabel{4.1}
For a ring $R$, the following conditions are equivalent.
\begin{enumerate}
\item $R$ is right FBN;
\item for every finitely generated right $R$-module $M$, there exists a finite subset $F\subset M$
such that $r_R(M)=r_R(F)$.
\end{enumerate}
\end{proposition}
A right $R$-module $P$ is called a {\sl right FBN-module} if it is noetherian and for
every finitely generated $P$-faithful right $R$-module $M$, there exists a finite
subset $F\subset {\rm Hom}_R(P,M)$ such that $r_P(F)=r_P({\rm Hom}_R(P,M))$. We recall
the following properties from \cite{Garcia}.
\begin{proposition}\prlabel{4.2} \cite[Theorem 1.7]{Garcia}
For a quasi-projective, noetherian right $R$-module $P$, the following assertions are
equivalent:
\begin{enumerate}
\item ${\rm End}_R(P)$ is right FBN;
\item $P$ is an FBN right $R$-module.
\end{enumerate}
\end{proposition}
\begin{proposition}\prlabel{4.3} \cite[Corollary 1.8]{Garcia}
Let $P$ be a quasi-projective FBN right $R$-module, $Q$ a finitely $P$-generated
right $R$-module, and $M$ a finitely generated $Q$-faithful right $R$-module.
For every $X\subset {\rm Hom}_R(Q,M)$, there exists a finite subset
$F\subset X$ such that $r_Q(X)=r_Q(F)$.
\end{proposition}
\begin{proposition}\prlabel{4.4} \cite[Corollary 1.9]{Garcia}
A right noetherian ring $R$ is right FBN if and only if every finitely generated right
$R$-module is FBN.
\end{proposition}
We can now state the main result of this paper.
\begin{theorem}\thlabel{4.5}
Let $(R,i,\chi)$ be an $A$-ring with a right grouplike character, and consider the
following statements.
\begin{enumerate}
\item $R\in \mathcal{M}_A$ is finitely generated and $A$ is right FBN;
\item $R$ is right FBN and $A$ is right noetherian;
\item $B$ is right FBN and $A$ is right noetherian.
\end{enumerate}
Then $1)\Rightarrow 2)$.\\
If $A$ is quasi-projective as a right $R$-module, then $2)\Rightarrow 3)$.\\
If $A$ is projective as a right $R$-module and $R$ is an $I$-Frobenius $A$-ring for
some strict Morita context $I=(A,A,I,J,f,g)$, then $3)\Rightarrow 1)$ and the three
conditions are equivalent.
\end{theorem}
\begin{proof}
$1)\Rightarrow 2)$. It follows from \prref{4.4} that $R$ is an FBN right $R$-module.
Let $M$ be a finitely generated right $R$-module; then $M$ is also finitely generated
as a right $A$-module. We claim that $M$ is an $R$-faithful right $A$-module.
Indeed, take a non-zero right $A$-module $M'\subset M$. Since $M'\cong
{\rm Hom}_A(A,M')$, there exists a non-zero $f\in {\rm Hom}_A(A,M')$, and the composition
$f\circ \chi:\ R\to M'$ is non-zero, since $\chi$ is surjective.\\
Now take $P=R$, $Q=A$ in \prref{4.3}, and consider the subset
$M\cong {\rm Hom}_R(R,M)\subset {\rm Hom}_A(R,M)$. It follows that there exists a finite
$F\subset M$ such that $r_A(F)=r_A(M)$. It then follows from \prref{4.1} that $R$
is right FBN.\\
$2)\Rightarrow 3)$. $A$ is a finitely generated (even cyclic) right $R$-module, so
it follows from \prref{4.4} that $A$ is an FBN right $R$-module. It then follows from
\prref{4.2} that ${\rm End}_R(A)\cong B$ is right FBN.\\
$3)\Rightarrow 1)$. We will apply \prref{2.3} with $M=A$ and $N=R$.
By assumption, $A$ is quasi-projective as a right $R$-module. Since $R/A$ is
$I$-Frobenius, $R$ is finitely generated projective as a right $A$-module. Since
$A$ is right noetherian, $R$ is also right noetherian.\\
It follows from \leref{1.2}, \prref{2.3} and \thref{3.1} that ${\rm Hom}_R(A,R)\cong R^R=Q\cong J$
is noetherian as a right module over ${\rm End}_R(A)\cong A^R=B$. It then follows that $J$
is finitely generated as a right $B$-module. Let $\{e_1,\cdots,e_k\}$ be a set of
generators of $J$ as a right $B$-module.\\
Recall that we have an $A$-bimodule isomorphism $f:\ I\otimes_A J\to A$. With notation
as in \seref{3}, we have, for $a\in A$,
$$f^{-1}(a)=\sum_{i=1}^n u_i\otimes_A v_ia\in I\otimes_A J.$$
For every $i$, we can find $b_{i1},\cdots,b_{ik_i}\in B$ such that
$$v_ia=\sum_{j=1}^{k_i}e_jb_{ij}.$$
We then easily compute that
\begin{eqnarray*}
a&=& f\Bigl(\sum_{i=1}^n u_i\otimes_A v_ia\Bigr)=
f\Bigl(\sum_{i=1}^n\sum_{j=1}^{k_i} u_i\otimes_A e_jb_{ij}\Bigr)=\
\sum_{i=1}^n\sum_{j=1}^{k_i} f(u_i\otimes_Ae_j)b_{ij},
\end{eqnarray*}
and we conclude that $A$ is finitely generated as a right $B$-module.\\
Take $M\in \mathcal{M}_A$ finitely generated. Then $M$ is also finitely generated as a right
$B$-module. We now show that $M$ is an $A$-faithful right $B$-module.
Let $M'$ be a non-zero right $B$-submodule of $M$, and take $0\neq m'\in M'$.
It follows from \prref{3.2} that there exists $v\in J$ such that ${\rm tr}\,(v)=1_B$. The map
$f:\ A\to M$, $f(a)=m'{\rm tr}\,(va)$ is right $B$-linear, and different from $0$ since
$f(1_A)=m'\neq 0$.\\
Observe now that
\begin{itemize}
\item $B$ is a quasi-projective FBN right $B$-module;
\item $A$ is a finitely $B$-generated right $B$-module;
\item $M$ is a finitely generated $A$-faithful right $B$-module.
\end{itemize}
Applying \prref{4.3} to $M\cong {\rm Hom}_A(A,M)\subset {\rm Hom}_B(A,M)$, we find that
there exists a finite subset $F\subset M$ such that $r_A(F)= r_A(M)$.
It then follows from \prref{4.1} that $A$ is right FBN.
\end{proof}
\begin{remark}\relabel{4.6}
We do not know whether the implication $3)\Rightarrow 1)$ holds under the weaker
assumption that $A\in \mathcal{M}_R$ is quasi-projective. The projectivity is used at the point
where we applied \prref{4.3}.
\end{remark}
\section{Application to Frobenius algebras}\selabel{5}
Let $k$ be a commutative ring, and consider two $k$-algebras $A$ and $S$,
and an algebra map $j:\ S\to A$. All unadorned tensor products in this Section are over $k$. It is easy to establish that
$(R=S^{\rm op}\otimes A, i,\chi)$ with
$$i:\ A\to S^{\rm op} \otimes A,~~i(a)=1_S\otimes a,$$
$$\chi:\ S^{\rm op}\otimes A\to A,~~\chi(s\otimes a)=j(s)a$$
is an $A$-ring with a right grouplike character. Also observe that
the categories $\mathcal{M}_R$ and ${}_S\mathcal{M}_A$ are isomorphic. For $M\in {}_S\mathcal{M}_A$,
we have that
$$M^R=\{m\in M~|~sm=mj(s),~{\rm for~all~}s\in S\}=C_S(M).$$
In particular, $B=A^R=C_S(A)$ and
$$Q=\{\sum_i s_i\otimes a_i\in S^{\rm op} \otimes A~|~
\sum_i ts_i\otimes a_i=\sum_i s_i\otimes a_ij(t),~{\rm for~all~}t\in S\}.$$
Consequently $A$ is projective as a right $R$-module if and only if there exists
$\sum_i s_i\otimes a_i\in Q$ such that $\sum_i j(s_i)a_i=1_A$.\\
From \prref{2.1}, it follows that $A$ is quasi-projective as a right $R$-module
if and only if for every $(S,A)$-submodule $I$ of $A$ and $a\in A$ such that
$as-sa\in I$, for all $s\in S$, there exists $b\in B$ such that $a-b\in I$.\\
Assume that $S$ is a Frobenius $k$-algebra, with Frobenius system
$(e=e^1\otimes e^2,\overline{\nu})$. Then $S^{\rm op}$ is also a Frobenius algebra, with
Frobenius system $(e=e^2\otimes e^1,\overline{\nu})$, and $S^{\rm op}\otimes A$ is a Frobenius
$A$-ring, with Frobenius system $(E, N)$, with
$E=(e^2\otimes 1_A)\otimes_A (e^1\otimes 1_A)$ and
$$N:\ S^{\rm op}\otimes A\to A,~~N(s\otimes a)=\overline{\nu}(s)a.$$
We then have the isomorphism
$$\alpha:\ A\to Q,~~\alpha(a)=e^1\otimes aj(e^2)$$
and the trace map
$${\rm tr}\,:\ A\to B,~~{\rm tr}\,(a)=j(e^1)aj(e^2).$$
$A$ is projective as a right $R$-module if and only if there exists $a\in A$ such that
${\rm tr}\,(a)=1$.
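For instance (a standard example, recorded only for illustration), let $S=kG$ be the group
algebra of a finite group $G$. Then $S$ is Frobenius with Frobenius system
$$e=\sum_{g\in G}g\otimes g^{-1},~~\overline{\nu}(g)=\delta_{g,1_G},$$
and for an algebra map $j:\ kG\to A$ the trace map takes the familiar form
$${\rm tr}\,(a)=\sum_{g\in G}j(g)aj(g^{-1})\in C_S(A),$$
so the projectivity condition above amounts to the existence of an element of trace one.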
\begin{corollary}\colabel{5.1}
Let $S$ be a Frobenius algebra over a commutative ring $k$, and $j:\ S\to A$
an algebra map. Furthermore, assume that
there exists $a\in A$ such that ${\rm tr}\,(a)=1$. Then the following assertions are
equivalent:
\begin{enumerate}
\item $A$ is right FBN;
\item $S^{\rm op}\otimes A$ is right FBN and $A$ is right noetherian;
\item $B=C_S(A)$ is right FBN and $A$ is right noetherian.
\end{enumerate}
\end{corollary}
\section{Application to Hopf algebra actions}\selabel{6}
Let $H$ be a finitely generated projective Hopf algebra over a commutative ring
$k$, and $A$ a left $H$-module algebra. The smash product $R=A\# H$ is equal to $A\otimes H$ as
a $k$-module, with multiplication given by the formula
$$(a\# h)(b\# k)=a(h_{(1)}\cdot b)\# h_{(2)}k.$$
The unit is $1_A\# 1_H$. Consider the maps
$$i:\ A\to A\#H,~~i(a)=a\#1_H,$$
$$\chi:\ A\# H\to A,~~\chi(a\# h)=a\varepsilon(h).$$
Straightforward computations show that $(A\# H,i,\chi)$ is an $A$-ring
with a left grouplike character. It is also easy to prove that
$$A^R=\{a\in A~|~h\cdot a=\varepsilon(h)a,~{\rm for~all~}h\in H\}=A^H$$
is the subalgebra of invariants of $A$.\\
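As a guiding example (well known, and only included for orientation), take $H=kG$ for a finite
group $G$ acting on $A$ by automorphisms. Then $A\# kG$ is the skew group ring $A*G$, the
invariants $A^H=A^G$ form the fixed subring, the space of left integrals in $kG$ is freely
generated by $t=\sum_{g\in G}g$, and the trace map of \prref{6.3} below becomes the classical
trace $ {\rm tr}\,(a)=\sum_{g\in G}g\cdot a$.\\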
In a similar way, we can associate an $A$-ring with right grouplike character to
a right $H$-comodule algebra. We will discuss the left handed case here, in order
to recover the results from \cite{Sorin,Garcia,Guedenon}. The results from the previous
Sections can easily be restated for rings with a left grouplike character.\\
Let $I=\int_{H^*}^l$ and $J=\int_H^l$ be the spaces of left integrals on and
in $H$. $I$ and $J$ are projective rank one $k$-modules, and $H/k$ is
$I$-Frobenius (see for example \cite[Theorem 3.4]{CDM}). We need an explicit
description of the Frobenius system. From the Fundamental Theorem, it follows
that we have an isomorphism
$$\phi:\ I\otimes H\to H^*,~~\phi(\varphi\otimes h)=h\cdot \varphi,$$
with $\langle h\cdot \varphi,k\rangle=\langle \varphi,kh\rangle$. If $t\in J$, then
$$\phi(\varphi\otimes t)(h)=\langle \varphi,ht\rangle=\langle\varphi,t\rangle\varepsilon(h),$$
so $\phi$ restricts to a monomorphism $\tilde{\phi}:\ I\otimes J\to k\varepsilon$.
If $I$ and $J$ are free of rank one, then $\tilde{\phi}$ is an isomorphism,
as there exist $\varphi\in I$ and $t\in J$ such that $\langle\varphi,t\rangle=1$
(see for example \cite[Theorem 31]{CMZ}, \cite{Pareigis71}). Hence $\tilde{\phi}$ is an isomorphism
after we localize at a prime ideal $p$ of $k$, and this implies that $\tilde{\phi}$
is itself an isomorphism. Consequently $J^*\cong I$. Consider
$\tilde{\phi}^{-1}(\varepsilon)=\sum_i\varphi_i\otimes t_i\in I\otimes J$. Then
\begin{equation}\eqlabel{6.1.1}
\sum_i \langle\varphi_i,t_i\rangle=1.
\end{equation}
Furthermore $\{(\varphi_i,t_i)~|~i=1,\cdots,n\}$ is a finite dual basis for $I$,
so we have $t=\sum_i \langle\varphi_i,t\rangle t_i$, $\varphi=\sum_i\langle\varphi,t_i\rangle
\varphi_i$, for all $t\in J$ and $\varphi\in I$. $\phi$ induces an isomorphism
$$\psi:\ H\to H^*\otimes J,~~\psi(h)=\sum_i h\cdot\varphi_i\otimes t_i.$$
The inverse of $\psi$ is given by the formula
$$\psi^{-1}(h^*\otimes t)=\langle h^*,\overline{S}(t_{(1)})\rangle t_{(2)},$$
where $\overline{S}$ is the inverse of the antipode $S$; recall from
\cite{Pareigis71} that the antipode of
a finitely generated projective Hopf algebra is always bijective. Indeed,
it is straightforward to show that $\psi^{-1}$ is a right inverse of $\psi$.
First observe that
$$\psi(\psi^{-1}(h^*\otimes t))=\sum_i\langle h^*,\overline{S}(t_{(1)})\rangle t_{(2)}\cdot \varphi_i\otimes t_i.$$
Now we compute for all $h\in H$ that
\begin{eqnarray*}
&&\hspace*{-2cm}
\langle h^*,\overline{S}(t_{(1)})\rangle \langle t_{(2)}\cdot\varphi_i,h\rangle=
\langle h^*,\overline{S}(t_{(1)})\overline{S}(h_{(2)})h_{(1)}\rangle \langle \varphi_i,h_{(3)}t_{(2)}\rangle\\
&=& \langle h^*,\overline{S}(h_{(2)}t_{(1)})h_{(1)}\rangle \langle \varphi_i,h_{(3)}t_{(2)}\rangle\\
&=& \langle h^*,\overline{S}(1_H)h_{(1)}\rangle \langle \varphi_i,h_{(2)}t\rangle
= \langle h^*,h\rangle \langle \varphi_i,t\rangle,
\end{eqnarray*}
where we used the fact that $\varphi_i$ and $t$ are integrals. It follows that
$$\psi(\psi^{-1}(h^*\otimes t))=\sum_i h^*\otimes \langle\varphi_i,t\rangle t_i=h^*\otimes t.$$
A right inverse of an invertible element is also a left inverse, so it follows that
$$
1_H=\psi^{-1}(\psi(1_H))=\sum_i\langle \varphi_i,\overline{S}(t_{i(1)})\rangle t_{i(2)}=
\sum_i\langle \varphi_i\circ \overline{S},t_{i(1)}\rangle t_{i(2)}=
\sum_i\langle \varphi_i\circ \overline{S},t_i\rangle 1_H,$$
where we used the fact that $\varphi_i\circ \overline{S}$ is a right integral on $H$.
We conclude that
\begin{equation}\eqlabel{6.1.2}
\sum_i\langle \varphi_i,\overline{S}(t_i)\rangle=1.
\end{equation}
Consider the particular situation where $I$ and $J$ are free rank one modules.
Then there exist free generators
$\varphi_1$ of $ I$ and $t_1$ of $ J$ such that $\langle\varphi_1,t_1\rangle=1$.
From \equref{6.1.2} it follows that $\langle\varphi_1,\overline{S}(t_1)\rangle=1$.
For arbitrary $\varphi=x\varphi_1\in I$ and $t=yt_1\in J$, it then follows that
$\langle \varphi,t\rangle=xy \langle\varphi_1,t_1\rangle=xy=
xy\langle\varphi_1,\overline{S}(t_1)\rangle=\langle\varphi,\overline{S}(t)\rangle$.
Consider the case where $I$ and $J$ are not necessarily free, and take
$\varphi\in I$, $t\in J$ and a prime ideal $p$ of $k$.
Then the images of $\langle \varphi,t\rangle$ and
$\langle\varphi,\overline{S}(t)\rangle$ in the localized ring $k_p$ are equal, since the integral
space of the Hopf $k_p$-algebra $H_p$ is free. So we can conclude that
\begin{equation}\eqlabel{6.1.3}
\langle \varphi,t\rangle=\langle\varphi,\overline{S}(t)\rangle.
\end{equation}
\begin{lemma}\lelabel{6.1}
Let $H$ be a finitely generated projective Hopf algebra over a commutative ring
$k$. There exist $t_i\in J=\int_H^l$ and $\varphi_i\in I=\int_{H^*}^l$ such that
$\sum_i \langle\varphi_i,t_i\rangle=1$. $H$ is an $I$-Frobenius $k$-algebra, with Frobenius system
$(e,\overline{\nu})$ with
\begin{eqnarray*}
&&e=\sum_i t_{i(2)}\otimes\varphi_i\otimes\overline{S}(t_{i(1)})\\
&&\overline{\nu}=\sum_j t_j\otimes \varphi_j\in (H\otimes I)^*\cong J\otimes H^*
\end{eqnarray*}
\end{lemma}
\begin{proof}
It is straightforward to show that $e\in C_H(H\otimes I\otimes H)$; this also follows
from \cite[Prop. 3.3]{CDM}, taking into account that $e=
i'(\varphi\otimes \overline{S}(t))$.\\
Write $e=e^1\otimes u^1\otimes e^2\in H\otimes I\otimes H$. We compute that
\begin{eqnarray}
&&\hspace*{-2cm}
\overline{\nu}(e^1\otimes u^1) e^2=
\sum_{i,j}\langle \varphi_j,t_{i(2)}\rangle \langle \varphi_i,t_j\rangle \overline{S}(t_{i(1)})\nonumber\\
&=& \sum_{i}\langle \varphi_i,t_{i(2)}\rangle \overline{S}(t_{i(1)})
= \sum_i\overline{S}(\langle \varphi_i,t_i\rangle 1_H)\equal{\equref{6.1.1}}1_H.\eqlabel{6.1.4}
\end{eqnarray}
For all $\varphi\in I$, we calculate
\begin{eqnarray}
&&\hspace*{-2cm}
e^1\otimes u^1\overline{\nu}( e^2\otimes \varphi)=
\sum_{i,j} t_{i(2)}\otimes\varphi_i\langle\varphi_j,\overline{S}(t_{i(1)})\rangle\langle\varphi,t_j\rangle\nonumber\\
&=& \sum_{i,j} 1_H\otimes\varphi_i\langle\varphi_j,\overline{S}(t_{i})\rangle\langle\varphi,t_j\rangle
= \sum_{i} 1_H\otimes\varphi_i\langle\varphi,\overline{S}(t_{i})\rangle\nonumber\\
&\equal{\equref{6.1.3}}& \sum_{i} 1_H\otimes\varphi_i\langle\varphi,t_{i}\rangle=1_H\otimes \varphi.
\eqlabel{6.1.5}
\end{eqnarray}
It now follows from \cite[Theorem 3.1]{CDM} that $(e,\overline{\nu})$ is a
Frobenius system.
\end{proof}
\begin{proposition}\prlabel{6.2}
Let $H$ be a finitely generated projective Hopf algebra over a commutative ring
$k$, and $A$ a left $H$-module algebra. Then $A\otimes H$ is an $A\otimes I$-Frobenius
$A$-algebra,
with Frobenius system $(E,N)$, with
\begin{eqnarray*}
&&\hspace*{-2cm}
E=E^1\otimes_AU^1\otimes_AE^2=(1_A\# e^1)\otimes_A(1_A\otimes u^1)\otimes_A(1_A\# e^2)\\
&=&\sum_i (1_A\# t_{i(2)})\otimes_A(1_A\otimes\varphi_i)
\otimes_A(1_A\#\overline{S}(t_{i(1)})),\\
&&\hspace*{-2cm}N:\ (A\#H)\otimes_A (A\otimes I)\cong A\# H\otimes I\to A,\\
&&N(a\#h\otimes\varphi)=a\overline{\nu}(h\otimes\varphi)=\sum_j a\langle\varphi_j,h\rangle\langle \varphi,t_j\rangle.
\end{eqnarray*}
Here we used the notation introduced above.
\end{proposition}
\begin{proof}
The proof is an adaptation of the proof of \cite[Proposition 5.1]{CVW}. Let us first show
that $E$ satisfies \equref{3.1.1}.
\begin{eqnarray*}
&&\hspace*{-15mm}
\sum_i (1_A\# t_{i(2)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(1_A\#\overline{S}(t_{i(1)}))(a\# h)\\
&=&
\sum_i (1_A\# t_{i(3)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(\overline{S}(t_{i(2)})\cdot a \# \overline{S}(t_{i(1)}) h)\\
&=&
\sum_i (1_A\# t_{i(3)})\otimes_A(\overline{S}(t_{i(2)})\cdot a\otimes\varphi_i)\otimes_A(1_A \# \overline{S}(t_{i(1)}) h)\\
&=&
\sum_i ((t_{i(3)}\overline{S}(t_{i(2)}))\cdot a\# t_{i(4)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(1_A \# \overline{S}(t_{i(1)}) h)\\
&=&
\sum_i ( a\# t_{i(2)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(1_A \# \overline{S}(t_{i(1)}) h)\\
&=&
\sum_i ( a\#h t_{i(2)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(1_A \# \overline{S}(t_{i(1)}) )\\
&=&
\sum_i ( a\#h)(1_A\# t_{i(2)})\otimes_A(1_A\otimes\varphi_i)\otimes_A(1_A\#\overline{S}(t_{i(1)})).
\end{eqnarray*}
Obviously $N$ is left $A$-linear. Right $A$-linearity can be proved as follows:
\begin{eqnarray*}
&&\hspace*{-2cm}
N((1\# h\otimes \varphi)a)=N(h_{(1)}a\# h_{(2)}\otimes\varphi)\\
&=& \sum_j h_{(1)}\cdot a\langle\varphi_j,h_{(2)}\rangle\langle \varphi,t_j\rangle
= N(1\# h\otimes \varphi)a.
\end{eqnarray*}
\equref{3.1.2} is satisfied since
\begin{eqnarray*}
&&\hspace*{-2cm}
N(E^1\otimes_AU^1)E^2=1_A\overline{\nu}(e^1\otimes u^1)(1_A\# e^2)\\
&\equal{\equref{6.1.4}}& 1_A\#\overline{\nu}(e^1\otimes u^1) e^2=1_A\#1_H.
\end{eqnarray*}
Let us finally show that \equref{3.1.3} holds. For all $a\in A$ and $\varphi\in I$, we have
\begin{eqnarray*}
&&\hspace*{-2cm}
E^1\otimes_AU^1N(E^2\otimes_A(a\otimes\varphi))\\
&=&
\sum_i(1_A\#t_{i(2)})\otimes_A(1_A\otimes\varphi_i)N(a\#\overline{S}(t_{i(1)})\otimes\varphi)\\
&=&\sum_{i,j}(1_A\#t_{i(2)})\otimes_A(a\otimes\varphi_i)\langle\varphi_j,\overline{S}(t_{i(1)})\rangle
\langle\varphi,t_j\rangle\\
&\equal{\equref{6.1.5}}& (1_A\#1_H)\otimes_A (a\otimes\varphi).
\end{eqnarray*}
\end{proof}
\begin{proposition}\prlabel{6.3}
Let $H$ be a finitely generated projective Hopf algebra, and $A$ a left $H$-module
algebra. The trace map ${\rm tr}\,:\ A\otimes J\to B=A^H$ is given by the formula
$${\rm tr}\,(a\otimes t)=t\cdot a.$$
\end{proposition}
\begin{proof}
Observe that the map $g:\ (J\otimes A)\otimes_A(I\otimes A)\to A$ in the Morita context associated to
$I\otimes A$ is given by the formula
$$g((t\otimes a)\otimes_A (\varphi\otimes b))=\langle \varphi,t\rangle ab.$$
Using the left handed version of \equref{3.18}, we compute, for $V=a\otimes t\in A\otimes J$
that
\begin{eqnarray*}
&&\hspace*{-15mm}
{\rm tr}\,(a\otimes t)=\chi(E^1g(U^1\chi(E^2)\otimes_AV))
=\sum_i \chi((1_A\# t_i)g((1_A\otimes\varphi_i)\otimes_A (a\otimes t)))\\
&=&\sum_i \chi((1_A\# t_i)a\langle\varphi_i,t\rangle)=\chi((1_A\# t)a)
=\chi(t_{(1)}\cdot a\# t_{(2)})=t\cdot a.
\end{eqnarray*}
\end{proof}
We can now apply \prref{2.1}, \prref{2.2}, \prref{3.2} and \thref{4.5},
and obtain the following result.
\begin{corollary}\colabel{6.4}
Let $H$ be a finitely generated projective Hopf algebra, and $A$ a left $H$-module
algebra. Assume that there exist $a_i\in A$ and $t_i\in \int_H^l$
such that $\sum_it_i\cdot a_i=1$.\\
Then the following assertions are equivalent:
\begin{enumerate}
\item $A$ is left FBN;
\item $A\# H$ is left FBN and $A$ is left noetherian;
\item $B$ is left FBN and $A$ is left noetherian.
\end{enumerate}
\end{corollary}
We recover \cite[Theorem 2.3 and Corollary 2.4]{Garcia}, \cite[Theorem 8]{Sorin}
and \cite[Theorem 2.4]{Guedenon}. If $H$ is Frobenius (e.g. if $k$ is a field, or
$H=kG$ is a finite group algebra), then the space of left integrals is free. We can then
take a free generator $t$ of $\int_H^l$ and the condition on the trace map means that
there exists $a\in A$ such that $t\cdot a=1$. We observe that - in the case where
the space of integrals is not free - the sufficient condition in \coref{6.4} that there exist $a_i\in A$ and $t_i\in \int_H^l$
such that $\sum_it_i\cdot a_i=1$ is weaker than the one given in \cite[Theorem 8]{Sorin},
where a single $t\in \int_H^l$ and $a\in A$ with $t\cdot a=1$ are needed.\\
In \cite{Garcia} and \cite{Guedenon}, it is stated that \coref{6.4} holds under the weaker
assumption (called (C1)) that $A$ is $A\# H$-{\sl quasi}-projective. There seems to be
a hole in the proofs in \cite{Garcia} and \cite{Guedenon}: the proof of the implication $3)\Longrightarrow 1)$ uses the projectivity of $A$ as an $A\# H$-module (see
\reref{4.6}).
\section{Application to corings}\selabel{7}
Let $A$ be a ring. An $A$-coring is a coalgebra in the category of
$A$-bimodules ${}_A\mathcal{M}_A$. This means that we have two $A$-bimodule maps
$$\Delta_\mathcal{C}:\ \mathcal{C}\to \mathcal{C}\otimes_A\mathcal{C}~~{\rm and}~~\varepsilon_\mathcal{C}:\ \mathcal{C}\to A$$
satisfying some coassociativity and counit axioms. The maps $\Delta_\mathcal{C}$
and $\varepsilon_\mathcal{C}$ are called the comultiplication and counit, and we
use the Sweedler notation
$$\Delta_\mathcal{C}(c)=c_{(1)}\otimes_A c_{(2)},$$
where summation is understood implicitly. Corings were revived recently
in \cite{Brzezinski02}, and we refer to \cite{BrzezinskiWisbauer} for a
detailed discussion of all kinds of applications. The left dual
$R={}^*\mathcal{C}={}_A{\rm Hom}(\mathcal{C},A)$ is an $A$-ring, with multiplication rule
$$(f\# g)(c)=g(c_{(1)}f(c_{(2)})),$$
for all $c\in \mathcal{C}$ and $f,g\in {}^*\mathcal{C}$. The unit is $\varepsilon_\mathcal{C}$,
and the ring morphism $i:\ A\to{}^* \mathcal{C}$ is given by
$$i(a)(c)=\varepsilon_\mathcal{C}(c)a.$$
The $A$-bimodule structure on ${}^*\mathcal{C}$ is then given by the formula
$$(afb)(c)=f(ca)b,$$
for all $a,b\in A$, $f\in {}^*\mathcal{C}$ and $c\in \mathcal{C}$.\\
$x\in \mathcal{C}$ is called grouplike if $\Delta_\mathcal{C}(x)=x\otimes_A x$ and
$\varepsilon_\mathcal{C}(x)=1_A$. $(\mathcal{C},x)$ is then called an $A$-coring with
a grouplike element. Now consider the map
$\chi:\ {}^*\mathcal{C}\to A$, $\chi(f)=f(x)$.
It can be shown easily (see \cite{CVW}) that $({}^*\mathcal{C},i,\chi)$ is
an $A$-ring with a right grouplike character.
We can also compute that
$$B=A^R=\{a\in A~|~f(xa)=af(x),~{\rm for~all~}f\in {}^*\mathcal{C}\}.$$
Using the grouplike element $x$, we can define a right $\mathcal{C}$-coaction on $A$,
namely
$$\rho:\ A\to A\otimes_A\mathcal{C}\cong \mathcal{C},~~\rho(a)=1_A\otimes_A xa=xa.$$
We can consider the subring of coinvariants
$$A^{{\rm co}\mathcal{C}}=\{a\in A~|~ax=xa\}.$$
In general, $A^{{\rm co}\mathcal{C}}$ is a subring of $A^R$, and they are equal
if $\mathcal{C}$ is finitely generated and projective as a right $A$-module.\\
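A basic example to keep in mind (standard, and mentioned only for illustration) is the Sweedler
coring of a ring extension $B'\to A$: here $\mathcal{C}=A\otimes_{B'}A$, with
$$\Delta_\mathcal{C}(a\otimes_{B'}a')=(a\otimes_{B'}1)\otimes_A(1\otimes_{B'}a'),~~
\varepsilon_\mathcal{C}(a\otimes_{B'}a')=aa',$$
and $x=1\otimes_{B'}1$ is a grouplike element, for which
$A^{{\rm co}\mathcal{C}}=\{a\in A~|~a\otimes_{B'}1=1\otimes_{B'}a\}$.\\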
An $A$-coring $\mathcal{C}$ is called Frobenius if there exist an $A$-bimodule
map $\theta:\ \mathcal{C}\otimes_A\mathcal{C}\to A$ and $z\in C_A( \mathcal{C})$ (that is,
$az=za$, for all $a\in A$) such that the following
conditions hold, for all $c,d\in \mathcal{C}$:
$$c_{(1)}\theta(c_{(2)}\otimes d)=\theta(c\otimes d_{(1)})d_{(2)},$$
$$\theta(z\otimes c)=\theta(c\otimes z)=\varepsilon_\mathcal{C}(c).$$
We refer to \cite[Theorem 35]{CMZ} for the explanation of this definition.
If $\mathcal{C}$ is Frobenius, $\mathcal{C}$ is finitely generated and projective as a
(left or right) $A$-module, and ${}^*\mathcal{C}/A$ is Frobenius (see
\cite[Theorem 36]{CMZ}). Then we also have (see \cite[Sec. 3]{CVW}) that
$$Q=\{q\in {}^*\mathcal{C}~|~c_{(1)}q(c_{(2)})=q(c)x,~{\rm for~all~}c\in\mathcal{C}\}.$$
It follows from \cite[Theorem 2.7]{CVW} or \thref{3.1} that we have an isomorphism
of $({}^*\mathcal{C},B)$-bimodules $\alpha:\ A\to Q$, given by
$$\alpha(a)(c)=\theta(ca\otimes_A x),$$
for all $a\in A$ and $c\in \mathcal{C}$.
The inverse $\alpha^{-1}$ is given by $\alpha^{-1}(q)=q(z)$, and the left
${}^*\mathcal{C}$-action on $A$ is
$$f\cdot a=\theta(z_{(1)}f(z_{(2)})a\otimes_A x).$$
This can be verified directly as follows:
$$\alpha^{-1}(\alpha(a))=\theta(za\otimes_A x)=\theta(az\otimes_A x)=a\theta(z\otimes_A x)=a,$$
and
\begin{eqnarray*}
&&\hspace*{-2cm}
\alpha(\alpha^{-1}(q))(c)=\theta(cq(z)\otimes_A x)=\theta(c\otimes_A q(z)x)
= \theta(c\otimes_A z_{(1)}q(z_{(2)}))\\
&=& \theta(c\otimes_A z_{(1)})q(z_{(2)})
=q(\theta(c\otimes_A z_{(1)})z_{(2)})\\
&=&q(c_{(1)}\theta(c_{(2)}\otimes z))
= q(c_{(1)}\varepsilon(c_{(2)}))=q(c).
\end{eqnarray*}
The trace map ${\rm tr}\,:\ A\to B$ is given by
$${\rm tr}\,(a)=\theta(xa\otimes_A x).$$
\begin{corollary}\colabel{7.1}
Let $(\mathcal{C},x)$ be a Frobenius $A$-coring with a fixed grouplike element, and
Frobenius system $(\theta, z)$, and assume that there exists $a\in A$ such that
${\rm tr}\,(a)=1$. Then the following assertions are equivalent.
\begin{enumerate}
\item $A$ is right FBN;
\item ${}^*\mathcal{C}$ is right FBN and $A$ is right noetherian;
\item $B=A^{{\rm co}\mathcal{C}}$ is right FBN and $A$ is right noetherian.
\end{enumerate}
\end{corollary}
\begin{center}
{\bf Acknowledgement}
\end{center}
We thank Angel del R\'{\i}o and Sorin D\v{a}sc\v{a}lescu for discussing with us
the sufficiency of the quasi-projectivity assumption in the proof of $3)\Longrightarrow 1)$
in \thref{4.5}.
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{Benchmarking Invertible Architectures on Inverse Problems}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Jakob Kruse}{vll}
\icmlauthor{Lynton Ardizzone}{vll}
\icmlauthor{Carsten Rother}{vll}
\icmlauthor{Ullrich K\"othe}{vll}
\end{icmlauthorlist}
\icmlaffiliation{vll}{Visual Learning Lab, Heidelberg University, Germany}
\icmlcorrespondingauthor{Jakob Kruse}{jakob.kruse@iwr.uni-heidelberg.de}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
Recent work demonstrated that flow-based invertible neural networks are promising tools for solving ambiguous inverse problems.
Following up on this, we investigate how ten invertible architectures and related models fare on two intuitive, low-dimensional benchmark problems,
obtaining the best results with coupling layers and simple autoencoders.
We hope that our initial efforts inspire other researchers to evaluate their invertible architectures in the same setting and put forth additional benchmarks,
so our evaluation may eventually grow into an official community challenge.
\end{abstract}
\section{Introduction}
\label{intro}
Both in science and in everyday life, we often encounter phenomena that depend on hidden properties $\mathbf{x}$, which we would like to determine from observable quantities $\mathbf{y}$.
A common problem is that many different configurations of these properties would result in the {\em same} observable state,
especially when there are far more hidden than observable variables.
We will call the mapping $f$ from hidden variables $\mathbf{x}$ to observable variables $\mathbf{y} = f(\mathbf{x})$ the \textit{forward process}.
It can usually be modelled accurately by domain experts.
The opposite direction, the \textit{inverse process} $\mathbf{y} \rightarrow \mathbf{x}$, is much more difficult to deal with.
Since $f^{-1}(\mathbf{y})$ does not have a single unambiguous answer, a proper inverse model should instead estimate the full \textit{posterior} probability distribution $p(\mathbf{x} \!\mid\! \mathbf{y})$ of hidden variables $\mathbf{x}$ given the observation $\mathbf{y}$.
Recent work \cite{ardizzone2018analyzing} has shown that flow-based invertible neural networks such as RealNVP \cite{dinh2016density} can be trained with data from the forward process,
and then used in inverse mode to sample from $p(\mathbf{x} \!\mid\! \mathbf{y})$ for any $\mathbf{y}$.
This is made possible by introducing additional latent variables $\mathbf{z}$ that encode any information about $\mathbf{x}$ {\em not} contained in $\mathbf{y}$.
Assuming a perfectly representative training set and a fully converged model, they prove that the generated distribution is equal to the true posterior.
\begin{figure}
\caption{
Prior distributions of the parameters $\mathbf{x}$ for the inverse kinematics (left) and inverse ballistics (right) benchmark problems.}
\label{fig:inverse-kinematics-prior}
\end{figure}
Interestingly, this proof carries over to all models offering an exact inverse upon convergence.
This poses a natural question: How well can various network types approximate this ideal behavior in practice?
Fundamentally, we can distinguish between {\em hard invertibility}, where the architecture ensures that forward and backward processing are exact inverses of each other (e.g.~RealNVP), and {\em soft invertibility}, where encoder and decoder only become inverses upon convergence (e.g. autoencoders).
The former pays for guaranteed invertibility with architectural restrictions that may harm expressive power and training dynamics, whereas the latter is more flexible but only approximately invertible.
We propose two simple inverse problems, one geometric and one physical, for systematic investigation of the resulting trade-offs.
Common toy problems for invertible networks are constrained to two dimensions for visualization purposes \cite{behrmann2018invertible, grathwohl2018ffjord}.
The 4D problems shown here are more challenging,
facilitating more meaningful variance in the results of different models.
However, they still have an intuitive 2D representation (\cref{fig:inverse-kinematics-prior}) and are small enough to allow computation of ground truth posteriors via rejection sampling,
which is crucial for proper evaluation.
We test ten popular network variants on our two problems to address the following questions:
(i) Is soft invertibility sufficient for solving inverse problems?
(ii) Do architectural restrictions needed for hard invertibility harm performance?
(iii) Which architectures and losses give the most accurate results?
\section{Methods}
\label{methods}
\textbf{Invertible Neural Networks (INNs).}\;
Our starting point is the model from \cite{ardizzone2018analyzing}, which is based on RealNVP, i.e.~affine coupling layers.
They propose to use a standard L2 loss for fitting the network's $\mathbf{y}$-predictions to the training data,
\begin{align}
\mathrm{L2}(\mathbf{y}) &= (\mathbf{y} - \mathbf{y}_\mathrm{gt})^2,
\label{eq:l2}
\end{align}
and an MMD loss \cite{gretton2012kernel} for fitting the latent distribution $p(\mathbf{z})$ to $\mathcal{N}(\mathbf{0}, \mathbf{I})$, given samples:
\begin{align}
\mathrm{MMD}(\mathbf{z}) =\,
&\mathbf{E}_{i,j}[\kappa(\mathbf{z}^{(i)}, \mathbf{z}^{(j)})] - 2 \!\cdot\!
\mathbf{E}_{i,j}[\kappa(\mathbf{z}^{(i)}, \mathbf{z}_\mathrm{gt}^{(j)})] \,+ \nonumber\\
&\mathbf{E}_{i,j}[\kappa(\mathbf{z}_\mathrm{gt}^{(i)}, \mathbf{z}_\mathrm{gt}^{(j)})]
\label{eq:mmd}
\end{align}
With weighting factors $\alpha, \beta$, their training loss becomes
\begin{align}
\mathcal{L}(\mathbf{y}, \mathbf{z}) &= \mathrm{L2}(\mathbf{y}) + \alpha \cdot \mathrm{MMD}(\mathbf{z}).
\label{eq:l2-mmd}
\end{align}
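For orientation, a compact implementation of the (biased) squared-MMD estimator in \cref{eq:mmd} might look as follows; the inverse multiquadratic kernel and its bandwidths are illustrative choices rather than the exact settings used in our experiments.
\begin{verbatim}
import torch

def mmd2(x, y, scales=(0.05, 0.2, 0.9)):
    """Biased estimate of the squared MMD between sample batches x and y.

    Uses a sum of inverse multiquadratic kernels k(a, b) = s / (s + |a - b|^2);
    the bandwidths `scales` are illustrative and should be tuned per problem.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return sum(s / (s + d2) for s in scales)

    k_xx = kernel(x, x).mean()
    k_yy = kernel(y, y).mean()
    k_xy = kernel(x, y).mean()
    return k_xx - 2.0 * k_xy + k_yy

# usage: z_gt = torch.randn_like(z); loss = mmd2(z, z_gt)
\end{verbatim}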
\begin{figure}\label{fig:INN-model}
\end{figure}
We find that it is also possible to train the network with just a maximum likelihood loss \cite{dinh2016density} by assuming $\mathbf{y}$ to be normally distributed around the ground truth values $\mathbf{y}_\mathrm{gt}$ with very low variance $\sigma^2$,
\begin{align}
\mathcal{L}(\mathbf{y}, \mathbf{z}) =\,
&\tfrac{1}{2} \cdot \left( \tfrac{1}{\sigma^2} \cdot (\mathbf{y} - \mathbf{y}_\mathrm{gt})^2 +
\mathbf{z}^2 \right) - \nonumber\\
&\log \left| \det J_{\mathbf{x} \;\mapsto\, [\mathbf{y},\,\mathbf{z}]} \right|,
\label{eq:ml-yz}
\end{align}
and we compare both approaches in our experiments.
\textbf{Conditional INNs}.\;
Instead of training INNs to predict $\mathbf{y}$ from $\mathbf{x}$ while transforming the lost information into a latent distribution,
we can train them to transform $\mathbf{x}$ directly to a latent representation $\mathbf{z}$ \textit{given} the observation $\mathbf{y}$.
This is done by providing $\mathbf{y}$ as an additional input to each affine coupling layer, both during the forward and the inverse network passes. cINNs work with larger latent spaces than INNs and are also suited for maximum likelihood training:
\begin{align}
\mathcal{L}(\mathbf{z}) =\,
&\tfrac{1}{2} \cdot \mathbf{z}^2 - \log \left| \det J_{\mathbf{x} \;\mapsto\, \mathbf{z}} \right|,
\label{eq:ml-z}
\end{align}
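To make the construction concrete, a minimal conditional affine coupling block could be sketched as below; the layer widths, ReLU nonlinearity and tanh clamping of the scale are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling block that receives the observation y as extra input."""

    def __init__(self, dim_x, dim_y, hidden=128):
        super().__init__()
        self.d = dim_x // 2
        # subnet predicts log-scale s and translation t for the second half
        self.subnet = nn.Sequential(
            nn.Linear(self.d + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim_x - self.d)))

    def forward(self, x, y):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.subnet(torch.cat([x1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                 # clamp the scale for stability (assumption)
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)            # contribution to log|det J|
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z, y):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.subnet(torch.cat([z1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)
\end{verbatim}
Stacking several such blocks, with fixed permutations of the coordinates in between, and minimizing \cref{eq:ml-z} yields a cINN-style model.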
\begin{figure}\label{fig:cINN-model}
\end{figure}
\textbf{Autoregressive flows}.\;
Masked autoregressive flows (MAF) decompose multi-variate distributions into products of 1-dimensional Gaussian conditionals using the chain rule of probability \cite{papamakarios2017masked}.
Inverse autoregressive flows (IAF) similarly decompose the latent distribution \cite{kingma2016improved}.
To obtain asymptotically invertible architectures, we add standard feed-forward networks for the opposite direction in the manner of Parallel WaveNets \cite{oord2018parallel} and train with \cref{eq:ml-yz} and a cycle loss:
\begin{align}
\mathcal{L}(\mathbf{y}, \mathbf{z}, \hat{\mathbf{x}}) =\,
&\tfrac{1}{2} \cdot \left( \tfrac{1}{\sigma^2} \cdot (\mathbf{y} - \mathbf{y}_\mathrm{gt})^2 +
\mathbf{z}^2 \right) - \nonumber\\
&\log \left| \det J_{\mathbf{x} \;\mapsto\, [\mathbf{y},\,\mathbf{z}]} \right| + \alpha \cdot (\mathbf{x} - \hat{\mathbf{x}})^2
\label{eq:loss-autoregressive}
\end{align}
\begin{figure}\label{fig:autoregressive-flow-model}
\end{figure}
\textbf{Invertible Residual Networks}.\;
A more flexible approach is the i-ResNet \cite{behrmann2018invertible}, which replaces the heavy architectural constraints imposed by coupling layers and autoregressive models with a mild Lipschitz-constraint on its residual branches.
With this constraint, the model's inverse and its Jacobian determinant can be estimated iteratively with a runtime vs.~accuracy trade-off.
Finding that the estimated Jacobian determinants' gradients are too noisy
\footnote{While accurate determinants may be found numerically for toy problems, this would not scale and thus is of limited interest.},
we train with the loss from \cref{eq:l2-mmd} instead.
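To illustrate the mechanism, the sketch below shows a single residual block whose Lipschitz constant is approximately controlled by spectral normalization, together with the fixed-point iteration used for inversion; it is a simplified illustration and omits the tighter Lipschitz control and the stochastic log-determinant estimator of the original i-ResNet.
\begin{verbatim}
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ResidualBlock(nn.Module):
    """y = x + g(x), with Lip(g) < 1 enforced (approximately) via spectral norm."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.g = nn.Sequential(
            spectral_norm(nn.Linear(dim, hidden)), nn.ELU(),
            spectral_norm(nn.Linear(hidden, dim)))

    def forward(self, x):
        return x + self.g(x)

    def inverse(self, y, n_iter=50):
        # Banach fixed-point iteration x <- y - g(x); converges when Lip(g) < 1
        x = y.clone()
        for _ in range(n_iter):
            x = y - self.g(x)
        return x
\end{verbatim}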
\begin{figure}\label{fig:iResNet-model}
\end{figure}
\textbf{Invertible Autoencoders}.\;
This model proposed by \cite{teng2018invertible} uses invertible nonlinearities and orthogonal weight matrices to achieve efficient invertibility.
The weight matrices start with random initialization, but converge to orthogonal matrices during training via a cycle loss:
\begin{align}
\mathcal{L}(\mathbf{y}, \mathbf{z}, \hat{\mathbf{x}}) &= \mathrm{L2}(\mathbf{y}) + \alpha \cdot \mathrm{MMD}(\mathbf{z}) + \beta \cdot (\mathbf{x} - \hat{\mathbf{x}})^2
\label{eq:l2-mmd-cycle}
\end{align}
\begin{figure}\label{fig:invAuto-model}
\end{figure}
\textbf{Standard Autoencoders}.\;
In the limit of zero reconstruction loss, the decoder of a standard autoencoder becomes the exact inverse of its encoder.
While this approach uses two networks instead of one, it is not subject to any architectural constraints.
In contrast to standard practice, our autoencoders do not have a bottleneck but use encodings with the same dimension as the input (exactly like INNs).
The loss function is the same as \cref{eq:l2-mmd-cycle}.
\begin{figure}\label{fig:AE-model}
\end{figure}
\textbf{Conditional Variational Autoencoders}.\;
Variational autoencoders \cite{kingma2013auto} take a Bayesian approach and thus should be well suited for predicting distributions.
Since we are interested in conditional distributions and it simplifies training in this case, we focus on the conditional VAE proposed by \cite{sohn2015cvae}, with loss
\begin{align}
\mathcal{L}(\boldsymbol\mu_z, \boldsymbol\sigma_z, \hat{\mathbf{x}}) &=
(\mathbf{x} \!-\! \hat{\mathbf{x}})^2 -\tfrac{1}{2} \alpha \!\cdot\! (1 + \log \boldsymbol\sigma_z - \boldsymbol\mu_z^2 - \boldsymbol\sigma_z).
\label{eq:elbo-cycle}
\end{align}
\begin{figure}\label{fig:cVAE-model}
\end{figure}
\textbf{Mixture Density Networks (MDNs)}.\;
MDNs \cite{bishop1994mixture,kruse2020mdn} are not invertible at all, but model the inverse problem directly.
To this end, the network takes $\mathbf{y}$ as an input and predicts the parameters $\boldsymbol\mu_x, \boldsymbol\Sigma_x^{-1}$ of a Gaussian mixture model that characterizes $p(\mathbf{x} \!\mid\! \mathbf{y})$.
It is trained by maximizing the likelihood of the training data under the predicted mixture models, leading to a loss of the form
\begin{align}
\mathcal{L}(\boldsymbol\mu_x, \boldsymbol\Sigma_x^{-1}) &=
\tfrac{1}{2} \!\cdot\! (\mathbf{x} \!-\! \boldsymbol{\mu}_x)^\top \!\cdot\! \boldsymbol{\Sigma}_x^{-1} \!\cdot\! (\mathbf{x} \!-\! \boldsymbol{\mu}_x)
- \log \lvert \boldsymbol{\Sigma}_x^{-1} \rvert^{\tfrac{1}{2}}.
\label{eq:ml-mdn}
\end{align}
We include it in this work as a non-invertible baseline.
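For reference, the data term of \cref{eq:ml-mdn} for a single full-covariance Gaussian can be implemented as below; we assume, for illustration, that the network outputs a lower-triangular Cholesky factor of the precision matrix, and mixture weights over several components are omitted.
\begin{verbatim}
import torch

def gaussian_nll(x, mu, prec_chol):
    """Negative log-likelihood of x under N(mu, Sigma) with Sigma^{-1} = L L^T.

    x, mu:      (batch, d)
    prec_chol:  (batch, d, d) lower-triangular Cholesky factor L of the precision.
    Constant terms independent of the parameters are dropped.
    """
    diff = (x - mu).unsqueeze(-1)                      # (batch, d, 1)
    m = torch.matmul(prec_chol.transpose(1, 2), diff)  # L^T (x - mu)
    mahalanobis = (m ** 2).sum(dim=(1, 2))             # (x-mu)^T Sigma^{-1} (x-mu)
    # log |Sigma^{-1}|^{1/2} equals the sum of logs of the diagonal of L
    log_det_half = torch.log(torch.diagonal(prec_chol, dim1=1, dim2=2)).sum(dim=1)
    return 0.5 * mahalanobis - log_det_half
\end{verbatim}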
\begin{figure}\label{fig:MDN-model}
\end{figure}
\section{Benchmark Problems}
\label{benchmark-problems}
We propose two low-dimensional inverse problems as test cases, as they allow quick training, intuitive visualizations and ground truth estimates via rejection sampling.
\begin{table*}[t]
\caption{
Quantitative results for inverse kinematics benchmark, see \cref{sec:experiments}.
The first three columns are averaged over $1000$ different observations $\mathbf{y}^*$.
\textsc{dim$(\mathbf{z})$} denotes the dimensionality of the latent space.
\textsc{ML Loss} marks models that were trained with a maximum likelihood loss,
while \textsc{$\mathbf{y}$-Supervision} marks models that were trained with an explicit supervised loss on the forward process $\mathbf{x} \rightarrow \mathbf{y}$.
}
\label{tab:kinematics-results}
\begin{center}\begin{small}\begin{sc}
\begin{tabular}{lcccccc}
\toprule
Method & $Err_\mathrm{post}$ (\ref{eq:posterior-mismatch}) & $Err_\mathrm{resim}$ (\ref{eq:re-simulation-error}) & Inference in ms & dim($\mathbf{z}$) & ML Loss & $\mathbf{y}$-Supervision \\
\midrule
INN & 0.025 & 0.015 & 10 & ${\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
INN (L2 + MMD) & 0.017 & 0.086 & 9 & ${\bullet}{\bullet}$ & & $\checkmark$ \\
cINN & \textbf{0.015} & \textbf{0.008} & 11 & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & \\
IAF + Decoder & 0.419 & 0.222 & \textbf{0} & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
MAF + Decoder & 0.074 & 0.034 & \textbf{0} & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
iResNet & 0.713 & 0.311 & 763 & ${\bullet}{\bullet}$ & & $\checkmark$ \\
InvAuto & 0.062 & 0.022 & 1 & ${\bullet}{\bullet}$ & & $\checkmark$ \\
Autoencoder & 0.037 & 0.016 & \textbf{0} & ${\bullet}{\bullet}$ & & $\checkmark$ \\
cVAE & 0.042 & 0.019 & \textbf{0} & ${\bullet}{\bullet}$ & & \\
\midrule
\textcolor{gray}{MDN} & \textcolor{gray}{0.007} & \textcolor{gray}{0.012} & \textcolor{gray}{601} & \textcolor{gray}{${\bullet}{\bullet}{\bullet}{\bullet}$} & \textcolor{gray}{$\checkmark$} & \\
\bottomrule
\end{tabular}
\end{sc}\end{small}\end{center}
\end{table*}
\begin{figure*}
\caption{
Qualitative results for the inverse kinematics benchmark.
The faint lines are arm configurations sampled from each model's predicted posterior $\hat{p}(\mathbf{x} \,|\, \mathbf{y}^*)$, with the most likely configuration drawn in bold.}
\label{fig:kinematics-results}
\end{figure*}
\subsection{Inverse Kinematics}
\label{inverse-kinematics-intro}
First is the geometrical example used by \cite{ardizzone2018analyzing}, which asks about configurations of a multi-jointed 2D arm that end in a given position, see \cref{fig:inverse-kinematics-prior} left.
The forward process takes a starting height $x_1$ and the three joint angles $x_2, x_3, x_4$, and returns the coordinate of the arm's end point $\mathbf{y} = [y_1, y_2]$ as
\begin{align*}
y_1 \!&=\! l_1 \sin(x_2) + l_2 \sin(x_2 \!+\! x_3) + l_3 \sin(x_2 \!+\! x_3 \!+\! x_4) \!+\! x_1 \\
y_2 \!&=\! l_1 \cos(x_2) + l_2 \cos(x_2 \!+\! x_3) + l_3 \cos(x_2 \!+\! x_3 \!+\! x_4)
\end{align*}
with segment lengths $l_1 = \tfrac{1}{2},\; l_2 = \tfrac{1}{2}$ and $l_3 = 1$.
Parameters $\mathbf{x}$ follow a Gaussian prior $\mathbf{x} \sim \mathcal{N}(\mathbf{0},\; \boldsymbol{\sigma}^2 \!\cdot\! \mathbf{I})$ with
$\boldsymbol{\sigma}^2 = [\tfrac{1}{16}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}]$.
The inverse problem is to find the distribution $p(\mathbf{x} \,|\, \mathbf{y}^*)$ of all arm configurations $\mathbf{x}$ that end at some observed 2D position $\mathbf{y}^*$.
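The following snippet is a direct transcription of this forward process and of the Gaussian prior, which is all that is needed to generate training pairs $(\mathbf{x}, \mathbf{y})$; the sampling helper is included only for convenience.
\begin{verbatim}
import numpy as np

L1, L2, L3 = 0.5, 0.5, 1.0                 # segment lengths l_1, l_2, l_3
SIGMA2 = np.array([1/16, 1/4, 1/4, 1/4])   # prior variances

def forward_kinematics(x):
    """Map arm parameters x = (x1, x2, x3, x4) to the end point y = (y1, y2)."""
    x1, x2, x3, x4 = x[..., 0], x[..., 1], x[..., 2], x[..., 3]
    y1 = L1*np.sin(x2) + L2*np.sin(x2 + x3) + L3*np.sin(x2 + x3 + x4) + x1
    y2 = L1*np.cos(x2) + L2*np.cos(x2 + x3) + L3*np.cos(x2 + x3 + x4)
    return np.stack([y1, y2], axis=-1)

def sample_prior(n, rng=np.random.default_rng(0)):
    return rng.normal(0.0, np.sqrt(SIGMA2), size=(n, 4))
\end{verbatim}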
\subsection{Inverse Ballistics}
\label{inverse-ballistics-intro}
A similar, more physically motivated problem in the 2D plane arises when an object is thrown from a starting position $(x_1, x_2)$ with angle $x_3$ and initial velocity $x_4$.
This setup is illustrated in \cref{fig:inverse-kinematics-prior}, right.
For given gravity $g$, object mass $m$ and air resistance $k$, the object's trajectory $\mathbf{T}(t)$ can be computed as
\begin{align*}
T_1(t) &= x_1 - \frac{v_1 m}{k} \cdot \left( e^{-\tfrac{kt}{m}} - 1 \right) \\
T_2(t) &= x_2 - \frac{m}{k^2} \cdot \left( \big(\, gm + v_2 k \,\big) \cdot \left( e^{-\tfrac{kt}{m}} - 1 \right) + gtk \right)
\end{align*}
with $v_1 = x_4 \cdot \cos{x_3}$ and $v_2 = x_4 \cdot \sin{x_3}$. We define the location of impact as $y = T_1(t^*)$, where $t^*$ is the solution of $T_2(t^*) = 0$, i.e.~the trajectory's intersection with the $x_1$-axis of the coordinate system (if there are two such points we take the rightmost one, and we only consider trajectories that do cross the $x_1$-axis).
Note that here, $y$ is one-dimensional.
We choose the parameters' priors as $x_1 \sim \mathcal{N}(0,\; \tfrac{1}{4}),\; x_2 \sim \mathcal{N}(\tfrac{3}{2},\; \tfrac{1}{4}),\; x_3 \sim \mathcal{U}(9^{\circ},\; 72^{\circ})$ and $x_4 \sim \textrm{Poisson}(15)$.
The inverse problem here is to find the distribution $p(\mathbf{x} \,|\, y^*)$ of all throwing parameters $\mathbf{x}$ that share the same observed impact location $y^*$.
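A possible forward simulation is sketched below; the constants for gravity, mass and air resistance as well as the grid-plus-bisection root search are illustrative assumptions, since any routine that returns the rightmost root of $T_2$ will do.
\begin{verbatim}
import numpy as np

G, M, K = 9.81, 0.2, 0.25   # gravity, mass, air resistance (illustrative values)

def trajectory(x, t):
    """Position (T1, T2) at time t for throw parameters x = (x1, x2, x3, x4)."""
    x1, x2, angle, v0 = x
    v1, v2 = v0 * np.cos(angle), v0 * np.sin(angle)
    decay = np.exp(-K * t / M) - 1.0
    T1 = x1 - v1 * M / K * decay
    T2 = x2 - M / K**2 * ((G * M + v2 * K) * decay + G * t * K)
    return T1, T2

def impact_location(x, t_max=20.0, n_grid=2000):
    """Rightmost intersection of the trajectory with the T2 = 0 axis, y = T1(t*)."""
    ts = np.linspace(0.0, t_max, n_grid)
    _, T2 = trajectory(x, ts)
    # grid indices where the height changes sign from positive to non-positive
    crossings = np.where((T2[:-1] > 0.0) & (T2[1:] <= 0.0))[0]
    if len(crossings) == 0:
        return None                         # trajectory never crosses the axis
    i = crossings[-1]                       # rightmost crossing
    lo, hi = ts[i], ts[i + 1]
    for _ in range(40):                     # bisection refinement of t*
        mid = 0.5 * (lo + hi)
        if trajectory(x, mid)[1] > 0.0:
            lo = mid
        else:
            hi = mid
    return trajectory(x, 0.5 * (lo + hi))[0]
\end{verbatim}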
\section{Experiments}
\label{sec:experiments}
To compare all approaches in a fair setting, we use the same training data, train for the same number of batches and epochs and choose layer sizes such that all models have roughly the same number of trainable parameters (${\sim} 3\,\textrm{M}$).
We quantify the correctness of the generated posteriors in two ways, using $1000$ unseen conditions $\mathbf{y}^*$ obtained via prior and forward process.
Firstly, we use MMD (\cref{eq:mmd}, \citep{gretton2012kernel}) to compute the \textit{posterior mismatch} between the distribution $\hat{p}(\mathbf{x} \,|\, \mathbf{y}^*)$ generated by a model and a ground truth estimate $p_\mathrm{gt}(\mathbf{x} \,|\, \mathbf{y}^*)$ obtained via rejection sampling:
\begin{align}
Err_\mathrm{post} &=
\mathrm{MMD}\bigl( \hat{p}(\mathbf{x} \,|\, \mathbf{y}^*),\, p_\mathrm{gt}(\mathbf{x} \,|\, \mathbf{y}^*) \bigr)
\label{eq:posterior-mismatch}
\end{align}
Secondly, we apply the true forward process $f$ to the generated samples $\mathbf{x}$ and measure the \textit{re-simulation error} as the mean squared distance to the target $\mathbf{y}^*$:
\begin{align}
Err_\mathrm{resim} &= \mathbb{E}_{\,\mathbf{x} \sim \hat{p}(\mathbf{x} \,|\, \mathbf{y}^*)}
\left\lVert f(\mathbf{x}) - \mathbf{y}^* \right\rVert_2^2
\label{eq:re-simulation-error}
\end{align}
Finally, we report the inference time for each implementation using one \emph{GTX 1080 Ti}.
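A minimal version of this evaluation, written for the kinematics task and reusing the \texttt{forward\_kinematics}, \texttt{sample\_prior} and \texttt{mmd2} helpers sketched above, is given below; the Gaussian acceptance kernel and its width are illustrative, and the exact rejection scheme behind the reported numbers may differ.
\begin{verbatim}
import numpy as np
import torch

def rejection_sample(y_star, n_wanted=1000, eps=0.02, batch=100000):
    """Approximate ground-truth posterior samples p_gt(x | y*) for the kinematics task."""
    accepted = []
    while sum(len(a) for a in accepted) < n_wanted:
        x = sample_prior(batch)
        y = forward_kinematics(x)
        dist = np.linalg.norm(y - y_star, axis=1)
        # soft acceptance with a Gaussian kernel of width eps (assumption)
        keep = x[np.random.rand(batch) < np.exp(-0.5 * (dist / eps) ** 2)]
        accepted.append(keep)
    return np.concatenate(accepted)[:n_wanted]

def evaluate(model_sampler, y_star, n=1000):
    """Posterior mismatch (MMD) and re-simulation error for one observation y*.

    model_sampler is any callable returning n posterior samples for y* (hypothetical interface).
    """
    x_model = model_sampler(y_star, n)
    x_gt = rejection_sample(y_star, n)
    err_post = mmd2(torch.tensor(x_model, dtype=torch.float32),
                    torch.tensor(x_gt, dtype=torch.float32)).item()
    err_resim = np.mean(np.sum((forward_kinematics(x_model) - y_star) ** 2, axis=1))
    return err_post, err_resim
\end{verbatim}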
\subsection{Inverse Kinematics}
\label{inverse-kinematics-results}
Quantitative results for the kinematics benchmark are shown in \cref{tab:kinematics-results} (extra detail in \cref{fig:kinematics-boxplot}), while qualitative results for one challenging end point $\mathbf{y}^*$ are plotted in \cref{fig:kinematics-results}.
Architectures based on coupling layers (INN, cINN) achieve the best scores on average, followed by the simple autoencoder.
The invertible ResNet exhibits some mode collapse, as seen in \cref{fig:kinematics-results}, bottom left.
Note that we were unable to train our iResNet-implementation with the estimated Jacobian determinants, which were too inaccurate, and resorted to the loss from \cref{eq:l2-mmd}.
Similarly we would expect the autoregressive models, in particular IAF, to converge much better with more careful tuning.
MDN on the other hand performs very well for both error measures.
Note however that a full precision matrix $\boldsymbol{\Sigma}_x^{-1}$ is needed for this, as a purely diagonal $\boldsymbol{\Sigma}_x = \mathbf{I}\boldsymbol{\sigma}_x$ fails to model the potentially strong covariance among variables $x_i$.
Since $\boldsymbol{\Sigma}_x^{-1}$ grows quadratically with the size of $\mathbf{x}$ and a matrix inverse is needed during inference, the method is very slow and does not scale to higher dimensions.
\begin{table*}[ht!]
\caption{
Quantitative results for the inverse ballistics benchmark.
Rows and columns have the same meaning as in \cref{tab:kinematics-results}.
}
\label{tab:ballistics-results}
\begin{center}\begin{small}\begin{sc}
\begin{tabular}{lcccccc}
\toprule
Method & $Err_\mathrm{post}$ (\ref{eq:posterior-mismatch}) & $Err_\mathrm{resim}$ (\ref{eq:re-simulation-error}) & Inference in ms & dim($\mathbf{z}$) & ML Loss & $y$-Supervision \\
\midrule
INN & \textbf{0.047} & \textbf{0.019} & 21 & ${\bullet}{\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
INN (L2 + MMD) & 0.060 & 3.668 & 21 & ${\bullet}{\bullet}{\bullet}$ & & $\checkmark$ \\
cINN & \textbf{0.047} & 0.437 & 22 & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & \\
IAF + Decoder & 0.323 & 3.457 & \textbf{0} & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
MAF + Decoder & 0.213 & 1.010 & \textbf{0} & ${\bullet}{\bullet}{\bullet}{\bullet}$ & $\checkmark$ & $\checkmark$ \\
iResNet & 0.084 & 0.091 & 307 & ${\bullet}{\bullet}{\bullet}$ & & $\checkmark$ \\
InvAuto & 0.156 & 0.315 & 1 & ${\bullet}{\bullet}{\bullet}$ & & $\checkmark$ \\
Autoencoder & 0.049 & 0.052 & 1 & ${\bullet}{\bullet}{\bullet}$ & & $\checkmark$ \\
cVAE & 4.359 & 0.812 & \textbf{0} & ${\bullet}{\bullet}{\bullet}$ & & \\
\midrule
\textcolor{gray}{MDN} & \textcolor{gray}{0.048} & \textcolor{gray}{0.184} & \textcolor{gray}{175} & \textcolor{gray}{${\bullet}{\bullet}{\bullet}{\bullet}$} & \textcolor{gray}{$\checkmark$} & \\
\bottomrule
\end{tabular}
\end{sc}\end{small}\end{center}
\end{table*}
\begin{figure*}
\caption{
Qualitative results for the inverse ballistics benchmark.
Faint lines show the trajectories of sampled throwing parameters and as above, bold is the most likely one.
A vertical line marks the target coordinate $y^* = 5$, the distribution of actual impacts is shown in green.}
\label{fig:ballistics-results}
\end{figure*}
\subsection{Inverse Ballistics}
\label{inverse-ballistics-results}
Quantitative results for the ballistics benchmark are shown in \cref{tab:ballistics-results} (extra detail in \cref{fig:ballistics-boxplot}), while qualitative results for one representative impact location $y^*$ are plotted in \cref{fig:ballistics-results}.
Again we see INN, cINN and the simple autoencoder perform best.
Notably, we could not get the conditional VAE to predict proper distributions on this task; instead it collapses to some average trajectory with very high posterior mismatch.
The invertible ResNet does better here, perhaps due to the more uni-modal posteriors, but IAF and MAF again fail to capture the distributions properly at all.
Due to the presence of extreme outliers for the error measures in this task, the averages in \cref{tab:ballistics-results} are computed with clamped values and thus somewhat distorted.
\cref{fig:ballistics-boxplot} gives a better impression of the distribution of errors.
There the INN trained with \cref{eq:ml-yz} appears the most robust model (smallest maximal errors), followed by the autoencoder.
cINN and iResNet come close in performance if outliers are ignored.
\section{Discussion and Outlook}
In both our benchmarks, models based on RealNVP \cite{dinh2016density} and the standard autoencoder take the lead, while other invertible architectures seem to struggle in various ways.
Success in our experiments was neither tied to maximum likelihood training, nor to the use of a supervised loss on the forward process.
We are aware that training of some models can probably be improved, and welcome input from experts to do so.
In the future, the comparison should also include ODE-based methods like \citet{chen2018neural,grathwohl2018ffjord}, variants of Parallel WaveNet \cite{oord2018parallel} and classical approaches to Bayesian estimation such as MCMC.
Ideally, this paper will encourage the community to join our evaluation efforts and possibly set up an open challenge with additional benchmarks and official leader boards.
Code for the benchmarks introduced here can be found at \url{https://github.com/VLL-HD/inn_toy_data}.
\appendix
\begin{minipage}[c]{2\linewidth}
\section*{Appendix}
\begin{figure}
\caption{Boxplot of inverse kinematics results from \cref{tab:kinematics-results}.}
\label{fig:kinematics-boxplot}
\end{figure}
\begin{figure}
\caption{Boxplot of inverse ballistics results from \cref{tab:ballistics-results}.}
\label{fig:ballistics-boxplot}
\end{figure}
\end{minipage}
\end{document} |
\begin{document}
\title{Adaptive Kernel Graph Neural Network}
\begin{abstract}
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data. The layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology. During this process, GNNs are usually guided by pre-defined kernels such as the Laplacian matrix, the adjacency matrix, or their variants. However, the adoption of pre-defined kernels may restrict their generality to different graphs: a mismatch between graph and kernel entails sub-optimal performance. For example, GNNs that focus on low-frequency information may not achieve satisfactory performance when high-frequency information is significant for the graphs, and vice versa. To solve this problem, in this paper, we propose a novel framework, namely Adaptive Kernel Graph Neural Network (AKGNN), which, as a first attempt, learns to adapt to the optimal graph kernel in a unified manner. In the proposed AKGNN, we first design a data-driven graph kernel learning mechanism, which adaptively modulates the balance between all-pass and low-pass filters by modifying the maximal eigenvalue of the graph Laplacian. Through this process, AKGNN learns the optimal threshold between high and low frequency signals to relieve the generality problem. We further reduce the number of parameters by a parameterization trick and enhance the expressive power by a global readout function. Extensive experiments are conducted on acknowledged benchmark datasets, and promising results demonstrate the outstanding performance of the proposed AKGNN in comparison with state-of-the-art GNNs. The source code is publicly available at: \url{https://github.com/jumxglhf/AKGNN}.
\end{abstract}
\section{Introduction}
\label{intro}
Graph-structured data have become ubiquitous in the real world, with examples such as social networks, knowledge graphs, and molecule structures. Mining and learning expressive node representations on these graphs can contribute to a variety of real-world challenges and applications. The emphasis of this work lies in node representation learning on graphs, aiming to generate node embeddings that are expressive with respect to downstream tasks such as node classification \cite{kipf2016semi,klicpera2019predict,velivckovic2017graph}. Current state-of-the-art frameworks can be categorized as graph convolutions, where nodes aggregate information from their direct neighbors with fixed guiding kernels, e.g., different versions of adjacency or Laplacian matrices. Information from high-order neighbors can then be captured in an iterative manner by stacking convolution layers.
Although the results are promising, recent research has shown that such a propagation mechanism entails certain challenges. Firstly, \cite{chen2020measuring,oono2019graph} illustrate that the performance of GNNs can deteriorate due to over-smoothing if excessive layers are stacked. As proved in the work of \cite{zhu2021interpreting}, graph convolution can be summarized as a case of Laplacian smoothing, in which adjacent nodes become inseparable after multiple layers. \cite{oono2019graph} shows that multiple non-linearity functions between stacked convolution layers further aggravate this problem. Moreover, these aforementioned propagation layers cause the over-fitting problem \cite{wang2019improving}. In current GNNs, each layer serves as a parameterized filter where graph signals are first amplified or diminished and then combined. Adding more layers aims to capture high-order information beneficial for the downstream classification, but it meanwhile increases the number of trainable parameters, which might cancel out the intended benefits as real-world data is often scarcely labeled \cite{zhao2020data,chen2020measuring}. Effective frameworks such as JK-Nets \cite{xu2018representation} and DAGNN \cite{liu2020towards} overcome the over-smoothing issue with a global readout function between propagation layers, making local information from early layers directly accessible during the inference phase. The over-fitting issue is addressed by a single learnable matrix, placed before all propagation layers, that approximates the parameters of all layers \cite{wu2019simplifying}.
Another far less researched issue rests in the fixed graph kernels (e.g., adjacency matrix, Laplacian matrix, or their variants) that current GNNs model on, which restricts their generalization to different graphs. \cite{ma2020unified,zhu2021interpreting,dong2016learning} prove that the current GNNs can be explained in a unified framework, where the output node representation minimizes two terms: 1) the distances between adjacent nodes and 2) the similarities between the input and output signals. Some GNNs such as GCN \cite{kipf2016semi} mainly focus on the latter term, which solely extracts low-frequency information. Others like APPNP \cite{klicpera2019predict} merge these two terms by introducing the original signals through a teleport connection after low-pass filtering, and hence bring in a certain degree of high-frequency signal. But in reality, it is difficult to determine what and how much information should be encoded, unless experiments are conducted across algorithms with different hyperparameters. Merely considering one kind of information might not satisfy the needs of various downstream tasks, while introducing extra information could jeopardize the decision boundary. Some very recent works such as GNN-LF and GNN-HF \cite{zhu2021interpreting} utilize models with different pre-defined graph kernels to adapt to various datasets. These models focus on either high or low frequency signals. Still, experiments need to be conducted on both in order to know which one works best, and the threshold between low and high frequency signals needs to be defined by human experts, which might be sub-optimal under some circumstances. To the best of our knowledge, there has not yet been a unified framework that solves this problem. To fill this research gap, in this paper, we propose Adaptive Kernel Graph Neural Network (AKGNN), where a novel adaptive kernel mechanism is devised to self-learn the optimal threshold between high and low frequency signals for downstream tasks.
Specifically, to effectively combat the generality issue entailed by fixed graph kernel, AKGNN dynamically adjusts the maximal eigenvalue of graph Laplacian matrix at each layer such that the balance between all-pass and low-pass filters is dynamically optimized. And through this process, the optimal trade-off between high and low frequency signals is learned. From the spatial point of view, our model is able to raise the weight of self-loop when neighbors are not informative (i.e., all-pass filter from spectral view), or focus more on adjacent nodes when neighbors are helpful (i.e., low-pass filter). To prevent the over-fitting problem, we modify the parameterization trick proposed in \cite{wu2019simplifying}, wherein learnable parameters of all convolution layers, except maximal eigenvalues, are compressed and approximated by a single matrix. Nevertheless, it is possible that different nodes require information from neighbors of distinct orders and hence we utilize a global readout function \cite{xu2018representation} to capture node embeddings at different orders.
Finally, we demonstrate the legitimacy of the proposed AKGNN through theoretical analysis and empirical studies, where it achieves state-of-the-art results on widely acknowledged graph node classification benchmark datasets, and persistently retains outstanding performance even with an exaggerated number of convolution layers across all benchmark datasets.
\section{Problem and Related Work}
Let $G = (V,E)$ denote a graph, in which $V$ is the set of $|V| = N$ nodes and $E \subseteq V \times V$ is the set of $|E|$ edges between nodes. The adjacency matrix is denoted as $\mathbf{A} \in \{0,1\}^{N \times N}$. The element $a_{ij}$ in the $i$-th row and $j$-th column of $\mathbf{A}$ equals 1 if there exists an edge between nodes $v_i$ and $v_j$, and equals 0 otherwise. The Laplacian matrix of a graph is denoted as $\mathbf{L} = \mathbf{D} - \mathbf{A}$ or, in its normalized form, $\mathbf{L} = \mathbf{D}^{-\frac{1}{2}}(\mathbf{D} - \mathbf{A})\mathbf{D}^{-\frac{1}{2}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$, where $\mathbf{D} = diag(d(v_1),\dots,d(v_N))$ is the diagonal degree matrix and $\mathbf{I}$ is the identity matrix. Spectral graph theory studies the properties of a graph by analyzing the eigenvalues and eigenvectors of $\mathbf{L}$ \cite{kipf2016semi,defferrard2016convolutional}, and our model adaptively modifies the maximal eigenvalue of $\mathbf{L}$ to learn the optimal trade-off between all-pass and low-pass filters.
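As a purely illustrative sketch (not part of the model itself; the function name is ours), the normalized Laplacian can be assembled from a dense adjacency matrix as follows:
\begin{verbatim}
import numpy as np

def normalized_laplacian(A):
    # A: dense {0,1} adjacency matrix of shape (N, N)
    d = A.sum(axis=1).astype(float)           # degrees d(v_1), ..., d(v_N)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5      # D^{-1/2}, guarding isolated nodes
    D_inv_sqrt = np.diag(d_inv_sqrt)
    # L = I - D^{-1/2} A D^{-1/2}
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
\end{verbatim}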
\noindent \textbf{Node Classification on Graphs.} We focus on node classification on graphs. $\mathbf{X} \in \mathbb{R}^{N \times d}$ represents the feature matrix of a graph, where node $v_i$ is given a feature vector $\mathbf{x}_i \in \mathbb{R}^d$ and $d$ is the dimension size. $\mathbf{Y} \in \{0,1\}^{N \times C}$ denotes the label matrix of a graph, where $C$ is the number of total classes. Given $M$ labeled nodes ($0 < M \ll N$) with labels $\mathbf{Y}^L$ and $N-M$ unlabeled nodes with missing labels $\mathbf{Y}^U$, the objective of node classification is to learn a function $f:G, \mathbf{X}, \mathbf{Y}^L \rightarrow \mathbf{Y}^U$ that predicts the missing labels $\mathbf{Y}^U$. Traditional solutions to this problem are mainly based on DeepWalk \cite{ando2007learning,pang2017graph,dong2016learning}. Recently, GNNs have emerged as a class of powerful approaches for this problem. GCN \cite{kipf2016semi}, which iteratively approximates the Chebyshev polynomials proposed by \cite{defferrard2016convolutional}, has motivated numerous novel designs. Some typical GNNs are reviewed below.
\noindent \textbf{Graph Neural Networks.} GNNs generalize neural networks to graph-structured data \cite{scarselli2008graph,kipf2016semi,klicpera2019predict,velivckovic2017graph}. The key operation is graph convolution, where information is routed from each node to its neighbors following some deterministic rules (e.g., adjacency matrices and Laplacian matrices). For example, the propagation rule of GCN \cite{kipf2016semi} can be formulated as $\mathbf{H}^{(l+1)} = \sigma(\hat{\mathbf{A}} \mathbf{H}^{(l)} \mathbf{W}^{(l)})$, where $\hat{\mathbf{A}}$ denotes the normalized adjacency matrix with self-loops, $\sigma(.)$ denotes the non-linearity function, and $\mathbf{W}^{(l)}$ and $\mathbf{H}^{(l)}$ are the learnable parameters and node representations at the $l^{th}$ layer, respectively. That of APPNP \cite{klicpera2019predict} is formulated as $\mathbf{Z}^{(k+1)} = (1 - \alpha) \hat{\mathbf{A}} \mathbf{Z}^{(k)} + \alpha \mathbf{H}$, where $\alpha$ denotes the teleport probability, $\mathbf{H}$ is the predicted class distribution before propagation, and $\mathbf{Z}^{(k)}$ denotes the propagated class distribution at the $k^{th}$ layer.
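To make the two rules concrete, here is a minimal PyTorch-style sketch with dense tensors; the function names and the choice of ReLU for $\sigma(.)$ are our own illustrative assumptions, not the reference implementations of either model:
\begin{verbatim}
import torch

def gcn_layer(A_hat, H, W):
    # H^{(l+1)} = sigma(A_hat @ H^{(l)} @ W^{(l)}), with ReLU standing in for sigma
    return torch.relu(A_hat @ H @ W)

def appnp_step(A_hat, Z, H, alpha):
    # Z^{(k+1)} = (1 - alpha) * A_hat @ Z^{(k)} + alpha * H
    return (1.0 - alpha) * (A_hat @ Z) + alpha * H
\end{verbatim}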
\noindent \textbf{From Graph Spectral Filter to Graph Neural Network.} The idea behind graph spectral filtering is to modulate the graph signals with learnable parameters so that signals at different frequencies are either amplified or diminished. \cite{defferrard2016convolutional} proposes signal modulation with Chebyshev polynomials, which allows the model to learn neighbor information within K hops with a relatively scalable number of parameters. GCN, proposed by \cite{kipf2016semi}, approximates the K-th order Chebyshev polynomials by K convolution layers connected back-to-back, each of which assumes that K equals 1 and the maximal eigenvalue of the Laplacian matrix equals 2. Spatially, each propagation layer can be understood as gathering information from direct neighbors by mean pooling, and information from high-order neighbors can be captured through multiple stacked layers. Our proposed AKGNN simplifies this filtering process by decoupling it into two sub-tasks: 1) limiting the scope of the filters by finding the optimal trade-off between high- and low-frequency signals and 2) filtering the distilled graph signals.
\begin{figure*}
\caption{AKGNN for node classification. The parameters of the filters at all layers are approximated by a single MLP. At each propagation layer, AKGNN learns the optimal trade-off between all-pass and low-pass filters and constructs the adaptive kernel $\mathbf{A}_{k}^{*}$.}
\label{system}
\end{figure*}
\section{Methodology}
In this section, we explain the technical details of the Adaptive Kernel Graph Neural Network (AKGNN), as shown in Fig. \ref{system}. We present a type of graph convolution that is able to adaptively tune the weights of all-pass and low-pass filters by learning the maximal eigenvalue of the graph Laplacian at each layer. Through such a design, the threshold between high- and low-frequency signals is efficiently optimized. As demonstrated by comprehensive experiments, the proposed AKGNN achieves state-of-the-art results on widely acknowledged benchmarks.
\subsection{Adaptive Graph Kernel Learning}
\label{eigen}
Given an input graph $G$ and its normalized Laplacian matrix $\mathbf{L} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} = \mathbf{U}\mathbf{\Lambda}\mathbf{U}^T$, where $\mathbf{U}$ is the eigenvector matrix and $\mathbf{\Lambda}$ is the diagonal eigenvalue matrix of $\mathbf{L}$, Cheby-Filter \cite{defferrard2016convolutional} on a graph signal $\mathbf{f}$ is formulated as:
\begin{equation}
\mathbf{f}^{'} = \sum_{k = 0}^{K} \mathbf{\theta}_k \mathbf{U} T_k(\tilde{\mathbf{\Lambda}}) \mathbf{U}^T \mathbf{f}
= \sum_{k = 0}^{K} \mathbf{\theta}_k T_k(\tilde{\mathbf{L}}) \mathbf{f},
\label{cheb_poly}
\end{equation}
where $\mathbf{f}^{'}$ is the resulting modulated signal, $K$ is the order of the truncated polynomials, $\mathbf{\theta}_k$ denotes the learnable filter at the $k^{th}$ order, $T_k(.)$ refers to the $k^{th}$ polynomial basis, $\tilde{\mathbf{\Lambda}}$ denotes the normalized diagonal eigenvalue matrix, and $\tilde{\mathbf{L}} = \mathbf{U} \tilde{\mathbf{\Lambda}} \mathbf{U}^T$. Here $\tilde{\mathbf{\Lambda}} = \frac{2 \cdot \mathbf{\Lambda}}{\lambda_{max}} - \mathbf{I}$, where $\lambda_{max}$ denotes the maximum value in $\mathbf{\Lambda}$, has its entries in the range $[-1,1]$. The normalized form is used here instead of $\mathbf{\Lambda}$ since Chebyshev polynomials are orthogonal only on the range $[-1,1]$ \cite{defferrard2016convolutional}.
GCN \cite{kipf2016semi} simplifies the Cheby-Filter by assuming $K = 1$ and $\lambda_{max} \approx 2$ at each layer. Although the efficacy of GCN is promising, this simplification brings one issue: the graph convolution operation essentially conducts low-frequency filtering, as proved by \cite{zhu2021interpreting}, where the similarity between adjacent nodes is enlarged as the number of propagation layers increases, and the kernel cannot be adjusted to datasets where high-frequency information is important. Given that $T_0(\tilde{\mathbf{L}}) = \mathbf{I}$ and $T_1(\tilde{\mathbf{L}}) = \tilde{\mathbf{L}}$ \cite{defferrard2016convolutional}, Eq. \ref{cheb_poly} is re-formulated as follows:
\begin{equation} \label{our_eq1}
\begin{split}
\mathbf{f}^{'} &= \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \mathbf{\theta}_1 (\frac{2}{\lambda_{max}}(\mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}) - \mathbf{I}) \mathbf{f}\\
&= \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \frac{2}{\lambda_{max}} \mathbf{\theta}_1 \mathbf{I} \mathbf{f} -
\frac{2}{\lambda_{max}} \mathbf{\theta}_1 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f} - \mathbf{\theta}_1 \mathbf{I} \mathbf{f}.
\end{split}
\end{equation}
By setting $\mathbf{\theta}_0 = -\mathbf{\theta}_1$, Eq. \ref{our_eq1} can be simplified as follows:
\begin{equation} \label{our_eq2}
\begin{split}
\mathbf{f}^{'} &= \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \frac{2}{\lambda_{max}} \mathbf{\theta}_1 \mathbf{I} \mathbf{f} -
\frac{2}{\lambda_{max}} \mathbf{\theta}_1 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f} - \mathbf{\theta}_1 \mathbf{I} \mathbf{f}\\
&= \mathbf{\theta}_0 \mathbf{I} \mathbf{f} - \frac{2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{I} \mathbf{f} +
\frac{2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f} + \mathbf{\theta}_0 \mathbf{I} \mathbf{f} \\
&= \frac{2\lambda_{max}-2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \frac{2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f}
\end{split}
\end{equation}
In Eq. \ref{our_eq2}, the first and second terms can be regarded as an all-pass filter with weight $\frac{2\lambda_{max}-2}{\lambda_{max}}$ and a low-pass filter with weight $\frac{2}{\lambda_{max}}$, respectively. With this formulation, we have the following theorem, whose theoretical analysis is given later.
\textbf{Theorem 1.} \textit{While conducting spectral modulation, we can control the balance between all-pass and low-pass filters by simply tuning $\lambda_{max}$. And $\lim_{\lambda_{max}\to\infty} \mathbf{f}^{'} \approx \mathbf{\theta}_0 \mathbf{I} \mathbf{f}$, which is an all-pass filter; $\lim_{\lambda_{max}\to1} \mathbf{f}^{'} \approx \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f}$, which is a low-pass filter.}
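As a quick check of these limits, note that the two filter weights in Eq. \ref{our_eq2} satisfy
\begin{equation*}
\lim_{\lambda_{max}\to\infty}\left(\frac{2\lambda_{max}-2}{\lambda_{max}},\ \frac{2}{\lambda_{max}}\right)=(2,\,0),
\qquad
\lim_{\lambda_{max}\to 1}\left(\frac{2\lambda_{max}-2}{\lambda_{max}},\ \frac{2}{\lambda_{max}}\right)=(0,\,2),
\end{equation*}
so that $\mathbf{f}^{'}$ degenerates to $2\mathbf{\theta}_0 \mathbf{I} \mathbf{f}$ (all-pass) or to $2\mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f}$ (low-pass), respectively; the constant factor $2$ is absorbed into the learnable $\mathbf{\theta}_0$.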
We can find the optimal threshold between low- and high-frequency signals by tuning the weights of these two filters. With a high $\lambda_{max}$, the weight of the all-pass filter is elevated and so is the contribution of high-frequency signals; when $\lambda_{max}$ is low, the weight of the low-pass filter is high and so is the contribution of low-frequency signals. However, it is nontrivial to manually decide which part is more significant than the other, as situations differ across datasets. A natural step forward is to find the optimal eigenvalue in a data-driven fashion. Hence, in order to find the optimal threshold, we make $\lambda_{max}$ at the $k^{th}$ layer a learnable parameter:
\begin{equation}
\lambda_{max}^{k} = 1 + \text{relu}(\phi_k),
\label{lambda}
\end{equation}
where $\phi_k \in \mathbb{R}$ is a learnable scalar, and relu(.) refers to the rectified linear unit function. $\phi_k$ is initialized as 1, since $\lambda_{max}^{k} = 2$ when $\phi_k = 1$. Under this setting, the initial balance between the two filters is identical (i.e., $\frac{2\lambda_{max}-2}{\lambda_{max}} = \frac{2}{\lambda_{max}} = 1$), preventing the model from being stuck at a local minimum. We regularize $\phi_k$ by a relu function because relu has a codomain of $[0, \infty)$. This enables the propagation layer to achieve an all-pass filter when $\phi_k \rightarrow \infty$ or a low-pass filter when $\phi_k \rightarrow 0$. Utilizing a layer-wise matrix representation, we have the node embedding $\mathbf{H}^{(k)}$ at the $k^{th}$ layer as:
\begin{equation} \label{our_eq3}
\begin{split}
&\mathbf{H}^{(k)} = (\frac{2\lambda^k_{max}-2}{\lambda^k_{max}}\mathbf{I} + \frac{2}{\lambda^k_{max}} \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}) \mathbf{H}^{(k-1)} \mathbf{W}_k \\
&= (\frac{2\text{relu}(\phi_k)}{1+\text{relu}(\phi_k)} \mathbf{I} + \frac{2}{1+\text{relu}(\phi_k)} \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}) \mathbf{H}^{(k-1)} \mathbf{W}_k,
\end{split}
\end{equation}
where $\mathbf{H}^{(0)} = \mathbf{X}$, $\mathbf{W}_k \in \mathbb{R}^{d^{(k-1)} \times d^{(k)}}$ denotes the parameter matrix of filter at $k^{th}$ layer and $d^{(k)}$ refers to the dimension of signals at $k^{th}$ layer. The domain of eigenvalues of $(\frac{2\text{relu}(\phi_k)}{1+\text{relu}(\phi_k)} \mathbf{I} + \frac{2}{1+\text{relu}(\phi_k)} \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}})$ is [0, 2], which can introduce numerical instabilities and unexpected gradient issues. So using the renormalization trick proposed in \cite{kipf2016semi}, we further normalize and reformat $(\frac{2\text{relu}(\phi_k)}{1+\text{relu}(\phi_k)} \mathbf{I} + \frac{2}{1+\text{relu}(\phi_k)} \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}})$ as $\mathbf{A}_{k}^{*} = \mathbf{D}_k^{-\frac{1}{2}}\mathbf{A}_k\mathbf{D}_k^{-\frac{1}{2}}$, where $\mathbf{A}_k = \frac{2\text{relu}(\phi_k)}{1+\text{relu}(\phi_k)} \mathbf{I} + \frac{2}{1+\text{relu}(\phi_k)} \mathbf{A}$, and $\mathbf{D}_k$ denotes the diagonal degree matrix of $\mathbf{A}_k$. Putting them together, the layer-wise propagation is summarized as:
\begin{equation}
\mathbf{H}^{(k)} = \mathbf{A}_{k}^{*} \mathbf{H}^{(k-1)} \mathbf{W}_k.
\label{propagate_1}
\end{equation}
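For illustration, a dense-tensor sketch of constructing $\mathbf{A}_{k}^{*}$ from the single scalar $\phi_k$ might look as follows (our implementation uses sparse tensors instead; the names here are ours and purely illustrative):
\begin{verbatim}
import torch

def adaptive_kernel(A, phi_k):
    # A: dense adjacency matrix (N, N); phi_k: learnable scalar (0-dim tensor) of layer k
    r = torch.relu(phi_k)
    A_k = (2 * r / (1 + r)) * torch.eye(A.shape[0]) + (2 / (1 + r)) * A
    d = A_k.sum(dim=1).clamp(min=1e-12)     # degrees of A_k (clamped for safety)
    D_inv_sqrt = torch.diag(d.pow(-0.5))    # D_k^{-1/2}
    return D_inv_sqrt @ A_k @ D_inv_sqrt    # A_k^* = D_k^{-1/2} A_k D_k^{-1/2}
\end{verbatim}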
Many current GNNs \cite{klicpera2019predict,zhu2009introduction,chen2020simple} have adopted kernels where the balance between all-pass and low-pass filters is dedicatedly tailored. They utilize a pre-defined balancing variable to achieve this, but finding the optimal threshold for a specific dataset is undeniably non-trivial as the search space is usually very large. Different from these approaches, the adjacency matrix $\mathbf{A}_{k}^{*}$ we utilize at the $k^{th}$ layer is parameterized with a single scalar. This design enables our model to effectively learn the optimal balance between high- and low-frequency signals during the training phase and omits the cumbersome hyper-parameter tuning. However, it is difficult for the model to simultaneously learn both the filter $\mathbf{W}_k$ and the optimized graph Laplacian $\mathbf{A}_{k}^{*}$, since the filter operates on the current version of the graph Laplacian and dynamically updating both might lead to a situation where the whole model never converges. Moreover, as we stack numerous layers to capture high-order information, $\mathbf{W}_k$ still introduces a number of parameters, which are very likely to cause the over-fitting issue. Hence we utilize a parameterization trick to alleviate the above issues.
\subsection{Parameterization trick}
The key motivation for all graph convolutions to stack multiple propagation layers is to capture high-order information that is beneficial to downstream tasks. As mentioned in the introduction, under the designs of most GNNs, capturing such information also introduces more parameters (e.g., $\mathbf{W}_k$ in Eq. \ref{propagate_1}). This can cause the over-fitting problem when nodes are scarcely labeled and offset the intended benefits. \cite{wu2019simplifying} proposes to approximate the parameters at all layers with a single matrix and meanwhile eliminate the non-linearity in between, which is proved to achieve similar results with fewer parameters. Moreover, by conducting such an approximation, the complexity of the graph filter is significantly decreased, making it feasible to dynamically tune both the filter and the graph Laplacian. Hence, we utilize a modified version of this parameterization trick to approximate the parameters at all layers with a single matrix. Specifically, we can re-write Eq. \ref{propagate_1} by expanding $\mathbf{H}^{(k-1)}$ as follows:
\begin{equation} \label{propagate_2_1}
\begin{split}
\mathbf{H}^{(K)} &= \mathbf{A}_{K}^{*} \mathbf{H}^{(K-1)} \mathbf{W}_K \\
&= \mathbf{A}_{K}^{*} \mathbf{A}_{K-1}^{*} \dots \mathbf{A}_{1}^{*} \mathbf{X} \mathbf{W}_1 \dots \mathbf{W}_{K-1} \mathbf{W}_K,
\end{split}
\end{equation}
where $K$ is the total number of propagation layers. We propose to use a single matrix $\mathbf{W}^{*} \in \mathbb{R}^{d \times d^{(K)}}$ to approximate the functionalities of all $\mathbf{W}_k$, such that:
\begin{equation}
\begin{split}
\mathbf{H}^{(k)} = \mathbf{A}_{k}^{*} \mathbf{H}^{(k-1)}\; \; \; \; \text{for } k \geq 1, \text{and }\mathbf{H}^{(0)} = \sigma(\mathbf{X} \mathbf{W}^{*}),
\label{propagate_2}
\end{split}
\end{equation}
where $\sigma(.)$ denotes the ReLU non-linearity. From the perspective of spatial aggregation, intuitively, the $i^{th}$ row of $\mathbf{H}^{(k)}$ is simply a linear combination of the modulated graph signals of node $v_i$ and those of its neighbors whose distances to $v_i$ are within $k$ hops. Through this trick, each convolution layer has only one parameter $\phi_k$, which also significantly alleviates the convergence issue.
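Combining Eq. \ref{propagate_2} with the kernel construction above, the whole propagation can be sketched as follows (again an illustrative simplification with dense tensors; \texttt{adaptive\_kernel} refers to the sketch in the previous subsection):
\begin{verbatim}
import torch

def akgnn_propagate(A, X, W_star, phis):
    # X: (N, d) features; W_star: (d, d_K) shared filter; phis: K learnable scalars
    H = torch.relu(X @ W_star)              # H^{(0)} = sigma(X W^*)
    outputs = []
    for phi_k in phis:                      # each layer owns a single scalar phi_k
        H = adaptive_kernel(A, phi_k) @ H   # H^{(k)} = A_k^* H^{(k-1)}
        outputs.append(H)
    return outputs                          # {H^{(k)} | 1 <= k <= K}
\end{verbatim}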
\subsection{Inference and Prediction}
\label{inference}
After performing signal modulation for $K$ propagation layers, we generate $K$ node representation matrices $\{\mathbf{H}^{(k)}|1\leq k \leq K\}$. These matrices are combined through a readout function and then fed into a Multi-layer Perceptron (MLP) to predict the class labels $\mathbf{Y}^P \in \{0,1\}^{N \times C}$, formulated as:
\begin{equation}
\mathbf{Y}^P = \text{softmax}(f_{MLP}(READOUT(\{\mathbf{H}^{(k)}|1\leq k \leq K\}))),
\end{equation}
where $READOUT(.)$ denotes the layer-wise readout function. We choose to combine intermediate node representations through a readout function, instead of using a single $\mathbf{H}^{(k)}$ directly for the final prediction, because it is very possible that different nodes require distinct levels of information for node classification, and bringing in high-order information for nodes whose labels could be inferred merely through local information might jeopardize the decision margin \cite{xu2018representation,liu2020towards}. As for the selection of the readout function, we adopt element-wise summation $sum(.)$ instead of other element-wise operations (e.g., mean or max) to maximize the expressive power \cite{xu2018powerful}. Compared with other readout functions, the summation function is injective, and with such a function we reduce the possibility that nodes which share the same graph signal coefficients but are structurally different are represented identically. Layer-wise concatenation is also able to achieve similar functionality, but it increases the number of parameters in $f_{MLP}(.)$.
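A sketch of this prediction step with the summation readout could read as follows (illustrative only; \texttt{mlp} stands for an arbitrary $f_{MLP}$ module):
\begin{verbatim}
import torch

def predict(outputs, mlp):
    # outputs: list of K matrices H^{(k)}, each of shape (N, d_K)
    readout = torch.stack(outputs, dim=0).sum(dim=0)   # element-wise summation readout
    return torch.softmax(mlp(readout), dim=1)          # Y^P = softmax(f_MLP(READOUT(.)))
\end{verbatim}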
\subsection{Theoretical Analysis}
\label{eigen_proof}
In this section, we explain the mechanism of adaptive kernel learning with the maximal eigenvalue from the perspective of spectral graph theory. Recall that the graph Laplacian is formulated as $\mathbf{L} = \mathbf{D} - \mathbf{A}$ or, in its normalized form, $\mathbf{L} = \mathbf{D}^{-\frac{1}{2}}(\mathbf{D} - \mathbf{A})\mathbf{D}^{-\frac{1}{2}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$. The Laplacian matrix of either form is symmetric and positive semi-definite, which gives it an important property: it can be eigen-decomposed such that the resulting eigenvalues are all greater than or equal to zero, formulated as $\mathbf{L} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^T$. We sort the eigenvalues and their corresponding eigenvectors such that $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_{N-1} \leq \lambda_N$. The key idea behind graph spectral filtering is to modulate the graph signals' frequencies so that those beneficial to downstream tasks are magnified while others are diminished. This can be achieved by modifying the eigenvalues of the Laplacian matrix with a filter function $f(.)$. \cite{defferrard2016convolutional} proposes to model $f(.)$ by Chebyshev polynomials $T_k(.)$. In the work of \cite{defferrard2016convolutional}, it is a prerequisite that polynomials at different orders are orthogonal, because orthogonality guarantees that modifying the filter at a specific order will not interfere with other orders. It is therefore necessary to normalize $\mathbf{\Lambda}$ as $\tilde{\mathbf{\Lambda}} = \frac{2 \cdot \mathbf{\Lambda}}{\lambda_{max}} - \mathbf{I}$ such that its entries align with the domain $[-1, 1]$ on which the orthogonality of Chebyshev polynomials is defined. Under this setup, in order to modulate the signals, at least $\mathcal{O}(d \times K)$ parameters are needed, where $d$ and $K$ stand for the dimension of the input signals and the order of the truncated polynomials, respectively. To reduce the complexity of this process, we propose to make $\lambda_{max}$ a learnable parameter. $\lambda_{max}$, as a single parameter, can effectively solve one major task of the graph filter: balancing the trade-off between high and low frequencies. When $\lambda_{max}$ is large (e.g., $\lambda_{max}\to\infty$), all values on the diagonal of $\tilde{\mathbf{\Lambda}}$ become infinitely close to each other and hence every signal in the original graph Laplacian is retained, corresponding to the all-pass filter $\lim_{\lambda_{max}\to\infty} \mathbf{f}^{'} \approx \mathbf{\theta}_0 \mathbf{I} \mathbf{f}$ in Theorem 1. Notice that the spectrum of the normalized Laplacian matrix is upper-bounded by 2, while we allow the maximal eigenvalue to vary freely in $(1, \infty)$. In this case, our goal is not to find the actual maximal eigenvalue; instead, we aim to utilize this normalization process such that the trade-off between all-pass and low-pass filters is optimized. If $\lambda_{max}$ were upper-bounded by 2, then in circumstances where high-frequency signals are significant for downstream tasks, the best scenario we could possibly approach would be equal weights on the all-pass and low-pass filters (i.e., at $\lambda_{max}=2$, $\frac{2\lambda_{max}-2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \frac{2}{\lambda_{max}} \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f} = \mathbf{\theta}_0 \mathbf{I} \mathbf{f} + \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f}$).
On the other hand, when $\lambda_{max}$ is small (e.g., $\lambda_{max}\to 1$), some high-frequency signals have eigenvalues larger than $\lambda_{max}$. In this case, these signals become non-orthogonal to the low-frequency signals whose eigenvalues are less than or equal to $\lambda_{max}$, and these high-frequency signals can be generated by linear combinations of the low-frequency ones, which corresponds to the low-pass filter $\lim_{\lambda_{max}\to1} \mathbf{f}^{'} \approx \mathbf{\theta}_0 \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{f}$. With the help of the learnable $\lambda_{max}$, the complexity of graph signal filtering in AKGNN is significantly reduced, because the scope of the signal set is narrowed by the learnable $\lambda_{max}$ and the filter only needs to focus on signals beneficial to the downstream task.
\subsection{Complexity Analysis}
The complexity of AKGNN can be decoupled into two parts. The first part is graph signal filtering with $f_{MLP}(.)$, with complexity $\mathcal{O}(N\cdot d\cdot d^{(K)})$, where $N$ stands for the number of nodes, $d$ refers to the dimension of the input signal, and $d^{(K)}$ is the dimension of the filtered signal. The second part is graph convolution with the adaptive kernel, with complexity $\mathcal{O}(|E|\cdot d^{(K)})$ per layer, where $|E|$ denotes the number of edges. Hence, for a $K$-layer AKGNN, the total computational complexity is $\mathcal{O}(N\cdot d\cdot d^{(K)} + K\cdot|E|\cdot d^{(K)})$, which is linear in the number of nodes, edges, and layers.
\section{Experiments and Analysis}
We follow the standard experimental setup for node classification used in \cite{kipf2016semi,velivckovic2017graph} (i.e., the public split with 20 nodes per class for training, 500 nodes for validation, and 1,000 nodes for testing). The three datasets we evaluate on are Cora, Citeseer and Pubmed \cite{sen2008collective}.
\noindent \textbf{Baselines.} We compare AKGNN with three GNN branches:
\begin{itemize}
\item \textbf{\textit{Graph convolutions}}: ChebyNet \cite{defferrard2016convolutional}, GCN \cite{kipf2016semi}, GAT \cite{velivckovic2017graph}, JKNets \cite{xu2018representation}, APPNP \cite{klicpera2019predict}, SGC \cite{wu2019simplifying} and SSGC \cite{zhu2021simple}.
\item \textbf{\textit{Regularization based}}: VBAT \cite{deng2019batch}, Dropedge \cite{rong2020dropedge}, GAugO \cite{zhao2020data}, and PGSO \cite{dasoulas2021learning}. The backbone model used is GCN.
\item \textbf{\textit{Sampling based}}: GraphSage \cite{hamilton2017inductive}, FastGCN \cite{chen2018fastgcn}.
\end{itemize}
\subsubsection{Implementation Detail}
We utilize PyTorch as our deep learning framework to implement AKGNN. The adaptive kernel learning mechanism is engineered with sparse tensors for compact memory consumption and fast back-propagation. The weights are initialized with the Glorot normal initializer \cite{glorot2010understanding}. We use Adam to optimize the parameters of AKGNN with weight decay and use early stopping based on the validation loss to control the number of training iterations. Besides, we also utilize a dropout mechanism between all propagation layers. All the experiments in this work are run on a single NVIDIA GeForce RTX 2080 Ti with 11 GB of memory, and we did not encounter any memory bottleneck while running the experiments.
\subsubsection{Hyperparameter Detail}
\label{setting}
In AKGNN, the related hyperparameters are the number of layers $K$, the hidden size $d^{(K)}$, the dropout rate, the learning rate, the weight decay rate, and the patience for early stopping. We utilize identical hyperparameter settings over all three datasets, as our model learns to adapt to different datasets. The number of layers $K$ is 5, the hidden size $d^{(K)}$ is 64, the dropout rate between propagation layers is 0.6, the learning rate is 0.01, the weight decay rate is 5e-4, and the patience for early stopping is 100 iterations.
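For reference, the settings above correspond to the following configuration snippet (purely illustrative; the key names are ours):
\begin{verbatim}
config = {
    "num_layers": 5,        # K
    "hidden_size": 64,      # d^(K)
    "dropout": 0.6,         # between propagation layers
    "learning_rate": 0.01,
    "weight_decay": 5e-4,
    "patience": 100,        # early-stopping patience (iterations)
}
\end{verbatim}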
\begin{table}
\centering
\begin{tabular}{c|ccc}
\toprule
\toprule
\multirow{3}{*}{Method} & \multicolumn{3}{c}{Graph} \\
\cmidrule(r){2-4}
& Cora & Citeseer & Pubmed \\
\midrule
ChebyNet & 81.2 & 69.8 & 74.4\\
GCN & 81.5 & 70.3 & 79.0 \\
GAT & 83.0$\pm$0.7 & 72.5$\pm$0.7 & 79.0$\pm$0.3 \\
APPNP & 83.8$\pm$0.3 & 71.6$\pm$0.5 & 79.7$\pm$0.3 \\
SGC & 81.0$\pm$0.0 & 71.9$\pm$0.1 & 78.9$\pm$0.0 \\
SSGC & 83.5$\pm$0.0 & 73.0$\pm$0.1 & 80.2$\pm$0.0 \\
JKNets & 83.3$\pm$0.5 & 72.6$\pm$0.3 & 79.2$\pm$0.3 \\
\midrule
PGSO & 82.5$\pm$0.3 & 71.8$\pm$0.2 & 79.3$\pm$0.5 \\
VBAT & 83.6$\pm$0.5 & 73.1$\pm$0.6 & 79.9$\pm$0.4 \\
GAugO & 83.6$\pm$0.5 & 73.3$\pm$1.1 & 80.2$\pm$0.3 \\
Dropedge & 82.8 & 72.3 & 79.6 \\
\midrule
GraphSage & 78.9$\pm$0.6 & 67.4$\pm$0.7 & 77.8$\pm$0.6 \\
FastGCN & 81.4$\pm$0.5 & 68.8$\pm$0.9 & 77.6$\pm$0.5 \\
\midrule
\textbf{AKGNN} (ours) & \textbf{84.4$\pm$0.3} & \textbf{73.5$\pm$0.2} & \textbf{80.4$\pm$0.3} \\
\midrule
\midrule
w/o $\lambda$ learning & 81.4$\pm$0.2 & 71.9$\pm$0.1 & 79.1$\pm$0.2 \\
w/o PT & 83.1$\pm$0.1 & 72.2$\pm$0.5 & 80.1$\pm$0.3 \\
w/o readout & 83.5$\pm$0.2 & 73.1$\pm$0.3 & 79.4$\pm$0.2 \\
\bottomrule
\bottomrule
\end{tabular}
\caption{Overall classification accuracy (\%).}
\label{exp_overall}
\end{table}
\subsection{Overall Results}
From the upper portion of Tab. \ref{exp_overall}, we observe that AKGNN consistently outperforms all baselines by a noticeable margin over all datasets. Comparing AKGNN with the best-performing baseline on each dataset, we further conduct a t-test, and the improvement margin is statistically significant with p-values less than 0.001. The improvement of AKGNN over GCN is 3.2\%, 3.7\% and 1.4\% on Cora, Citeseer and Pubmed, whereas that over GAT is 1.4\%, 1.3\% and 1.4\%. Compared with JKNet, which utilizes a similar global readout function and parameterization trick, AKGNN has 1.1\%, 0.9\% and 1.2\% improvements, demonstrating the efficacy of the adaptive kernel learning mechanism. As graph regularization-based models gain more attention, we also compare AKGNN with these recent works, and the improvements are 0.8\%, 0.2\% and 0.2\%.
\begin{figure}
\caption{Generalization on Cora.}
\label{loss_fig}
\end{figure}
\subsection{Ablation Studies}
To analyze the contribution of different components in AKGNN, we conduct several sets of ablation studies. In order to examine the effectiveness of $\lambda_{max}$ learning, we design the first variant as our model without adaptive kernel learning, denoted as `w/o $\lambda$ learning'. Another variant is our model without the parameterization trick, denoted as `w/o PT', aimed at validating its effectiveness in combating the over-fitting issue. The last variant is our model without the readout function (i.e., $sum(.)$), in order to verify that nodes require different levels of information to achieve better performance. From the bottom portion of Tab. \ref{exp_overall}, we can first observe that all components contribute to the performance of AKGNN. The first variant, without adaptive kernel learning, experiences a significant performance downgrade and is worse than the vanilla GCN model on some datasets, because we stack a large number of convolution layers and it encounters over-smoothing. Comparing AKGNN without the readout function with the baselines, we observe similar performance.
In Fig. \ref{loss_fig}.a, both the training and validation loss of this variant are the highest and the difference between them is the smallest across all variants, indicating that the over-smoothing issue has caused node representations to become indistinguishable. The second variant, without the parameterization trick, has the lowest training loss and also the biggest gap between training and validation loss, as shown in Fig. \ref{loss_fig}.b. This indicates that the model suffers from the over-fitting problem due to the large number of parameters brought by numerous propagation layers. The third variant, without the readout function, performs relatively better than the previous two, but still not as well as AKGNN, as shown in Fig. \ref{loss_fig}.c. This illustrates that the decision boundary is diminished as a result of bringing in redundant high-order information for nodes that require only low-frequency signals. Finally, we further examine the generalization ability of our proposed method. As shown in Fig. \ref{loss_fig}.d, we can observe the lowest validation loss across all variants while the difference between training and validation loss remains small, which demonstrates the generalization ability of AKGNN.
\begin{figure}
\caption{Maximal eigenvalues vs. number of layers ($x$-axis: layer $k$, $y$-axis: $\lambda_{max}^{k}$).}
\label{lambda_fig}
\end{figure}
\subsection{Analysis of Adaptive Kernel Learning}
The key motivation of learning the maximal eigenvalue is to learn the optimal threshold between low- and high-frequency signals. In order to examine how it combats the generality issue, we visualize the maximal eigenvalue at each layer, as shown in Fig. \ref{lambda_fig}. We first analyze the dynamics of the maximal eigenvalues $\lambda_{max}^k$ within the same dataset. We can notice that the value of $\lambda_{max}^k$ incrementally decreases as $k$ progresses and reaches a plateau where $\lambda_{max}^k$ does not change much after the fifth layer across all datasets. We interpret this phenomenon as our model enforcing high-order layers to become meaningless, because a low maximal eigenvalue at a high-order layer would make node representations more indistinguishable. Moreover, $\lambda_{max}^k$ at early layers does not deviate as $K$ increases, demonstrating the strong contribution of local information and the stability of AKGNN. Then we analyze the dynamics across the three datasets. We can notice that $\lambda_{max}^k$ of Pubmed has a higher mean than those of the other two, showing that for node classification, high-frequency signals benefit Pubmed the most. Meanwhile, we can observe a significant $\lambda_{max}^k$ drop at the second layer for Pubmed, indicating the redundancy of high-order information. This phenomenon also aligns with GNNs like GCN or GAT performing best with only two layers. Besides the intuitive explanation given above, we also explicate this phenomenon theoretically. The adapted Laplacian matrices across all layers share the same eigenvectors $\mathbf{U}$, because essentially our operation only modifies the diagonal eigenvalue matrix $\mathbf{\Lambda}$. Hence the commutative rule for matrix multiplication applies to all $\mathbf{A}_{k}^{*}$, as they are simultaneously diagonalizable, and the order of the $\mathbf{A}_{k}^{*}$ multiplications in Eq. \ref{propagate_2_1} can be switched. In short, higher learned maximal eigenvalues should be observed if high-frequency signals dominate, whereas lower ones should be observed if low-frequency signals dominate. Across these three datasets, we can observe that AKGNN learns relatively high maximal eigenvalues on Citeseer compared with Cora and Pubmed, which aligns with the homophily property of these three datasets (i.e., Citeseer has the lowest homophily ratio).
\subsection{Analysis of Number of Propagation Layers}
Besides the adoption of the global readout function that has been utilized in \cite{liu2020towards, xu2018representation}, our adaptive kernel learning mechanism also improves AKGNN's resilience to over-smoothing when layers are over-stacked, by enforcing high-order layers to become meaningless, as discussed in the previous sections. The impact of the number of layers on performance is shown in Fig. \ref{layer_fig}. From this figure, we can notice that the accuracy on both testing and training reaches its highest around the fifth layer and remains stable as the number of layers increases, demonstrating AKGNN's strong resilience to over-smoothing even when an excessive number of layers is stacked. We do not conduct experiments on AKGNN with more than 10 layers, because the 10-hop sub-graph of a target node covers almost all of its reachable nodes; the resilience to over-fitting of AKGNN is only a byproduct of adaptive kernel learning and not the focus of this work.
\begin{figure}
\caption{Impact of the number of layers on accuracy. ($x$-axis: $K$, $y$-axis: accuracy (\%))}
\label{layer_fig}
\end{figure}
\section{Conclusion}
In this work, we study the problem of node representation learning on graphs and present the Adaptive Kernel Graph Neural Network (AKGNN). In AKGNN, we propose adaptive kernel learning to find the optimal threshold between high- and low-frequency signals. Together with the parameterization trick and the global readout function, AKGNN is highly scalable, achieves competitive performance, and retains it even with a large number of convolution layers. Through experiments on three acknowledged benchmark datasets, AKGNN outperforms all baselines. Different from other graph convolution models whose guiding kernels are fixed and not ideal for all kinds of graphs, AKGNN learns to adapt to different graph Laplacians, which could shed light on a new direction for designing GNN models. We do not observe ethical concerns or negative societal impacts entailed by our method. However, care must be taken to ensure positive societal consequences of machine learning. In the future, we aim to transfer similar ideas of AKGNN to directed graphs and to investigate the possibility of applying adaptive kernel learning to other kernels.
\pagebreak
\end{document} |
\begin{document}
\def\Xint#1{\mathchoice
{\XXint\displaystyle\textstyle{#1}}
{\XXint\textstyle\scriptstyle{#1}}
{\XXint\scriptstyle\scriptscriptstyle{#1}}
{\XXint\scriptscriptstyle\scriptscriptstyle{#1}}
\!\int}
\def\XXint#1#2#3{{\setbox0=\hbox{$#1{#2#3}{\int}$}
\vcenter{\hbox{$#2#3$}}\kern-.5\wd0}}
\def\Xint={\Xint=}
\def\Xint-{\Xint-}
\title[Quantitative two weight theorem]{ A new quantitative two weight theorem for the Hardy-Littlewood maximal operator}
\author{Carlos P\'erez}
\address{Departamento de An\'alisis Matem\'atico,
Facultad de Matem\'aticas, Universidad de Sevilla, 41080 Sevilla,
Spain}
\email{carlosperez@us.es}
\author{Ezequiel Rela}
\address{Departamento de An\'alisis Matem\'atico,
Facultad de Matem\'aticas, Universidad de Sevilla, 41080 Sevilla,
Spain}
\email{erela@us.es}
\thanks{Both authors are supported by the Spanish Ministry of Science and Innovation grant MTM2012-30748 and by the Junta de Andaluc\'ia, grant FQM-4745.}
\subjclass{Primary: 42B25. Secondary: 43A85.}
\keywords{Two weight theorem, Space of homogeneous type, Muckenhoupt weights, Calder\'on-Zygmund, Maximal functions}
\begin{abstract}
A quantitative two weight theorem for the Hardy-Little\-wood maximal operator is proved, improving the known ones. As a consequence, a new proof of the main results in \cite{HP} and \cite{HPR1} is obtained which avoids the use of the sharp quantitative reverse H\"older inequality for $A_{\infty}$ proved in those papers. Our results are valid within the context of spaces of homogeneous type without imposing the non-empty annuli condition.
\end{abstract}
\maketitle
\section{ Introduction and Main results} \label{sec:intro}
\subsection{Introduction}
The purpose of this note is to present a \emph{quantitative} two weight theorem for the Hardy-Littlewood maximal operator when the underlying space is a space of homogeneous type $\mathcal{S}$ (SHT in the sequel), endowed with a quasimetric $\rho$ and a doubling measure $\mu$ (see Section \ref{sec:SHT} for the precise definitions).
We briefly recall some background on this problem in the \emph{euclidean} or \emph{classical} setting, when we are working in $\mathbb{R}^n$ and we consider Lebesgue measure and euclidean metric. We also assume that in this classical setting all the maximal operators involved and $A_p$ classes of weights are defined over cubes. Let $M$ stand for the usual uncentered Hardy-Littlewood maximal operator:
\begin{equation*}
Mf(x) = \sup_{Q\ni x } \frac{1}{|Q|}\int_{Q} |f|\,dx.
\end{equation*}
The problem of characterizing the pairs of weights for which the maximal operator is bounded between weighted Lebesgue spaces was solved by Sawyer \cite{sawyer82b}:
To be more precise, if $1<p<\infty$ we define for any pair of weights $w,\sigma$, the (two weight) norm,
\begin{equation} \label{MainQuestion}
\|M(\cdot \sigma)\|_{L^p(w)}:= \sup_{f\in L^p(\sigma)} \frac{ \|M(f \sigma) \|_{L^p(w)}}{ \|f\|_{L^p(\sigma)} }
\end{equation}
then Sawyer showed that $\|M(\cdot \sigma)\|_{L^p(w)}$ is finite if and only if
\begin{equation*}
\sup_Q \frac{\int_Q M(\chi_Q \sigma )^p \, w\,dx}{ \sigma(Q)}<\infty,
\end{equation*}
where the supremum is taken over all the cubes in $\mathbb{R}^n$.
A quantitative precise version of this result is the following: if we define
\begin{equation*}
[w,\sigma]_{S_p}:= \sup_Q\left(\frac1{\sigma(Q)} \,\int_Q M(\sigma\chi_Q)^p\,w\ dx\right)^{1/p}.
\end{equation*}
then
\begin{equation}\label{eq:moen}
\|M(\cdot \sigma)\|_{L^p(w)} \sim p'[w,\sigma]_{S_p},
\end{equation}
where $\frac{1}{p}+\frac{1}{p'}=1$. This result is due to K. Moen and can be found in \cite{Moen:Two-weight}.
However, it is still an open problem to find a characterization more closely related to the $A_p$ condition of Muckenhoupt which is easier to use in applications. Indeed, recall that the two weight $A_p$ condition:
\begin{equation*}
\sup_Q\left(\Xint-_Q w\ dx\right)\left(\Xint-_Q v^{-\frac1{p-1}}\ dx\right)^{p-1}<\infty
\end{equation*}
is necessary for the boundedness of $M$ from $L^p(v)$ into $L^{p}(w)$ (which is clearly equivalent, setting $\sigma=v^{1-p'}$, to the two weight problem), but it is not sufficient. Therefore, the general idea is to strengthen the $A_p$ condition to make it sufficient. The first result on this direction is due to Neugebauer \cite{Neugebauer}, proving that, for any $r>1$, it is sufficient to consider the following ``power bump'' for the $A_p$ condition:
\begin{equation}\label{Neug}
\sup_Q\left(\Xint-_Q w^{r}\ dx\right)^\frac{1}{r}\left(\Xint-_Q v^{-\frac{r}{p-1}}\ dx\right)^{\frac{p-1}{r}}<\infty.
\end{equation}
Later, the first author improved this result in \cite{perez95} by considering a different approach which allows one to consider much larger classes of weights. The new idea is to replace \emph{only} the average norm associated
to the weight $v^{-\frac1{p-1}}$ in \eqref{Neug} by a ``stronger'' norm which is often called a ``bump''. This norm is defined
in terms of an appropriate Banach function space $X$ satisfying a certain special property. This property is related to the $L^p$ boundedness of a natural maximal function associated to the space. More precisely, for a given Banach function space $X$, the local $X$-average of a measurable function $f$ associated to the cube $Q$ is defined as
\begin{equation*}
\|f\|_{X,Q}=\left\|\tau_{\ell(Q)}(f\chi_Q)\right\|_X,
\end{equation*}
where $\tau_\delta$ is the dilation operator $\tau_\delta f(x)=f(\delta x)$, $\delta>0$ and $\ell(Q)$ stands for the sidelength of the cube $Q$. The natural maximal operator associated to the space $X$ is defined as
\begin{equation*}
M_{X}f(x)= \sup_{Q:x\in Q} \|f\|_{X,Q}
\end{equation*}
and the key property is that the maximal operator $M_{X'}$ is bounded on $L^p(\mathbb{R}^n)$
where $X'$ is the associate space to $X$ (see \eqref{bnessX'} below).
As a corollary of our main result, Theorem \ref{thm:main}, we will give a quantitative version of the main result from \cite{perez95} regarding sufficient conditions for the two weight inequality to hold:
\begin{theorem}\label{thm:perez-bump}
Let $w$ and $\sigma$ be a pair of weights that satisfies the condition
\begin{equation}\label{keySufficientCondition}
\sup_Q \left(\Xint-_Q w\ dx\right) \|\sigma^{1/p'}\|^p_{X,Q} <\infty.
\end{equation}
Suppose, in addition, that the maximal operator associated to the associate space is bounded on $L^p(\mathbb{R}^n)$:
\begin{equation}\label{bnessX'}
M_{X'}: L^p(\mathbb{R}^n)\to L^p(\mathbb{R}^n).
\end{equation}
Then there is a finite positive constant $C$ such that:
\begin{equation*}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C.
\end{equation*}
\end{theorem}
In this note we give a different result of this type with the hope that it may lead to different, possibly better, conditions for the two weight problem for Singular Integral Operators.
Most of the interesting examples are obtained when $X$ is an Orlicz space $L_\Phi$ defined in terms of the Young function $\Phi$ (see Section \ref{sec:SHT} for the precise definitions). In this case, the local average with respect to $\Phi$ over a cube $Q$ is
\begin{equation*}
\|f\|_{\Phi,Q} =\|f\|_{\Phi,Q,\mu}= \inf\left\{\lambda >0:
\frac{1}{\mu(Q)}\int_{Q} \Phi\left(\frac{ |f|}{ \lambda } \right)
dx \le 1\right\}
\end{equation*}
where $\mu$ is here the Lebesgue measure. The corresponding maximal function is
\begin{equation}\label{eq:maximaltype}
M_{\Phi}f(x)= \sup_{Q:x\in Q} \|f\|_{\Phi,Q}.
\end{equation}
Related to condition \eqref{keySufficientCondition} we introduce here the following quantities.
\begin{definition}\label{def:Ap-multiple}
Let $(\mathcal{S},d\mu)$ be a SHT. Given a ball $B\subset \mathcal{S}$, a Young function $\Phi$ and two weights $w$ and $\sigma$, we define the quantity
\begin{equation}\label{eq:A_p-local}
A_p(w,\sigma,B,\Phi):=\left( \Xint-_{B} w\, d\mu\right)\|\sigma^{1/p'}\|^p_{\Phi,B}
\end{equation}
and we say that a pair of weights belong to the $A_{p,\Phi}$ class if
\begin{equation*}
[w,\sigma,\Phi]_{A_p}:=\sup_B A_p(w,\sigma,B,\Phi) <\infty,
\end{equation*}
where the $\sup$ is taken over all balls in the space. In the particular case of $\Phi(t)=t^{p'}$, this condition corresponds to the classical $A_p$ condition and we use the notation
\begin{equation*}
[w,\sigma]_{A_p}:=\sup_B\left(\Xint-_{B} w\ d\mu\right)\left(\Xint-_{B} \sigma\ d\mu\right)^{p-1}.
\end{equation*}
\end{definition}
We now define a generalization, by means of a Young function $\Phi$, of the Fujii-Wilson constant of an $A_{\infty}$ weight $\sigma$ introduced in \cite{HP}:
\begin{equation*}
[\sigma,\Phi]_{W_p}:=\sup_B\frac{1}{\sigma(B)}\int_B M_{\Phi}\left(\sigma^{1/p}\chi_B\right)^p\ d\mu
\end{equation*}
Note that the particular choice $\Phi_p(t):=t^p$ recovers the $A_\infty$ constant (see \eqref{eq:Ainfty} in Section \ref{sec:SHT}):
\begin{equation}\label{eq:WpPhi-p--Ainfty}
[\sigma,\Phi_p]_{W_p}=\sup_B\frac{1}{\sigma(B)}\int_B M\left(\sigma\chi_B\right)\ d\mu =[\sigma]_{A_\infty}.
\end{equation}
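This identity can be checked directly: for $\Phi_p(t)=t^p$ the local Luxemburg norm is just an $L^p$ average, so that for any ball $B'$
\begin{equation*}
\|\sigma^{1/p}\chi_B\|_{\Phi_p,B'}=\left(\Xint-_{B'}\sigma\chi_B\, d\mu\right)^{1/p},
\qquad\text{and hence}\qquad
M_{\Phi_p}\left(\sigma^{1/p}\chi_B\right)^{p}=M(\sigma\chi_B).
\end{equation*}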
\subsection{Main results}
Our main purpose in the present note is to address the problem mentioned above within the context of spaces of homogeneous type. In this context, the Hardy--Littlewood maximal operator $M$ is defined over balls:
\begin{equation}\label{eq:maximal-SHT}
Mf(x) = \sup_{B\ni x } \frac{1}{\mu(B)}\int_{B} |f|\,d\mu.
\end{equation}
The Orlicz type maximal operators are defined also with balls and with respect to the measure $\mu$ in the natural way.
Our main result is the following theorem.
\begin{theorem} \label{thm:main}
Let $1 < p < \infty$ and let $\Phi$ be any Young function with conjugate function $\bar\Phi$. Then, for any pair of weights $w,\sigma$,
there exists a structural constant $C>0$ such that the (two weight) norm defined in \eqref{MainQuestion} satisfies
\begin{equation}\label{eq:main}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C p'\left( [w,\sigma,\Phi]_{A_p}[\sigma,\bar\Phi]_{W_p}\right)^{1/p},
\end{equation}
\end{theorem}
We emphasize that \eqref{eq:main}, which is even new in the usual context of Euclidean Spaces, fits into the spirit of the $A_p-A_{\infty}$ theorem derived in \cite{HP} and \cite{HPR1}. The main point here is that we have a two weight result with a better condition and with a proof that avoids completely the use of the sharp quantitative reverse H\"older inequality for $A_{\infty}$ weights proved in these papers. This property is, of course, of independent interest but it is not used in our results.
From this Theorem, we derive several corollaries. First, we have a direct proof of the two weight result derived in \cite{HP} using the $[w]_{A_{\infty}}$ constant of Fujii-Wilson \eqref{eq:Ainfty}.
\begin{corollary}\label{cor:mixed-two-weight}
Under the same hypothesis of Theorem \ref{thm:main}, we have that there exists a structural constant $C>0$ such that
\begin{equation}\label{eq:mixed-two}
\|M(\cdot \sigma)\|_{L^p(w)} \leq Cp'\left([w,\sigma]_{A_p}[\sigma]_{A_\infty}\right)^{1/p}.
\end{equation}
\end{corollary}
Note that the result in Theorem \ref{thm:main} involves two suprema, as in Corollary \ref{cor:mixed-two-weight}. It would be interesting to find out if there is a version of this result involving only one supremum. There is some evidence that it could be the case; see for example \cite{HP}, Theorem 4.3. See also the recent work \cite{LM}.
As a second consequence of Theorem \ref{thm:main}, we have the announced quantitative version of Theorem \ref{thm:perez-bump}:
\begin{corollary}\label{cor:precise-bump}
Under the same hypothesis of Theorem \ref{thm:main}, we have that there exists a structural constant $C>0$ such that
\begin{equation*}
\|M(\cdot \sigma)\|_{L^p(w)} \leq C p' [w,\sigma,\Phi]_{A_p}^{1/p} \|M_{\bar\Phi}\|_{L^p(\mathbb{R}^n)}
\end{equation*}
\end{corollary}
We remark that this approach produces a non-optimal dependence on $p$, since we have to pay one factor of $p'$ for using Sawyer's theorem. However, the ideas from the proof of Theorem \ref{thm:main} can be used to derive a direct proof of Corollary \ref{cor:precise-bump} without the $p'$ factor. We include the proof in the appendix.
Finally, for the one weight problem, we recover the known mixed bound.
\begin{corollary}\label{cor:mixed-one-weight}
For any $A_p$ weight $w$ the following mixed bound holds:
\begin{equation*}
\|M\|_{L^p(w)} \leq C p' \left([w]_{A_p}[\sigma]_{A_\infty}\right)^{1/p}
\end{equation*}
where $C$ is a structural constant and, as usual, $\sigma=w^{1-p'}$ is the dual weight.
\end{corollary}
\begin{remark}
To be able to extend the proofs to this general scenario, we need to use (and prove) suitable versions of classical tools on this subject, such as Calder\'on--Zygmund decompositions. We remark that in previous works (\cite{PW-JFA}, \cite{SW}) most of the results are proved under the assumption that the space has non-empty annuli. The main consequence of this property is that in that case the measure $\mu$ enjoys a reverse doubling property, which is crucial in the proof of Calder\'on--Zygmund type lemmas. However, this assumption implies, for instance, that the space has infinite measure and no atoms (i.e. points with positive measure) and therefore constrains the family of spaces under study. Recently, some of those results were proven without this hypothesis; see for example \cite{Pradolini-Salinas}. Here we choose to work without the annuli property and therefore we need to adapt the proofs from \cite{PW-JFA}. Hence, we will need to consider separately the cases when the space has finite or infinite measure. An important and useful result on this matter is the following:
\end{remark}
\begin{lemma}[\cite{vili}]\label{lem:bounded-finite}
Let $(\mathcal S,\rho,\mu)$ be a space of homogeneous type. Then $\mathcal{S}$ is bounded if and only if $\mu(\mathcal S)<\infty$.
\end{lemma}
\subsection{Outline}
The article is organized as follows. In Section \ref{sec:prelim} we summarize some basic needed results on spaces of homogeneous type and Orlicz spaces. We also include a Calder\'on--Zygmund type decomposition lemma. In Section \ref{sec:proofs} we present the proofs of our results. Finally, we include in Section \ref{sec:appendix} an Appendix with a direct proof of a slightly better result than Corollary \ref{cor:mixed-two-weight}.
\section{Preliminaries}\label{sec:prelim}
In this section we first summarize some basic aspects regarding spaces of homogeneous type and Orlicz spaces. Then, we include a Calder\'on--Zygmund (C--Z) decomposition lemma adapted to our purposes.
\subsection{Spaces of homogeneous type}\label{sec:SHT}
A quasimetric $d$ on a set $\mathcal{S}$ is a function $d:{\mathcal S} \times
{\mathcal S} \rightarrow [0,\infty)$ which satisfies
\begin{enumerate}
\item $d(x,y)=0$ if and only if $x=y$;
\item $d(x,y)=d(y,x)$ for all $x,y$;
\item there exists a finite constant $\kappa \ge 1$ such that, for all $x,y,z \in \mathcal{S}$,
\begin{equation*}
d(x,y)\le \kappa (d(x,z)+d(z,y)).
\end{equation*}
\end{enumerate}
Given $x \in \mathcal{S}$ and $r > 0$, we define the ball with center $x$ and radius $r$, $B(x,r) := \{y \in {\mathcal{S}} :d(x,y) < r\}$ and we denote its radius $r$ by $r(B)$ and its center $x$ by $x_B$.
A space of homogeneous type $({\mathcal{S}},d,\mu)$ is a set $\mathcal{S}$ endowed with a quasimetric $d$ and a doubling nonnegative Borel measure $\mu$ such that
\begin{equation}\label{eq:doubling}
\mu(B(x,2r)) \le C\mu(B(x,r))
\end{equation}
Let $C_\mu$ be the smallest constant satisfying \eqref{eq:doubling}. Then $D_\mu = \log_2 C_\mu$ is called the doubling order of $\mu$. It follows that
\begin{equation}
\frac{\mu(B)}{\mu(\tilde{B})} \le
C^{2+\log_2\kappa}_{\mu}\left(\frac{r(B)}{r(\tilde{B})}\right)^{D_\mu} \;\mbox{for all
balls}\; \tilde{B} \subset B.
\end{equation}
In particular for $\lambda>1$ and $B$ a ball, we have that
\begin{equation}\label{eq:doublingDIL}
\mu(\lambda B) \le (2\lambda)^{D_\mu} \mu(B).
\end{equation}
Here, as usual, $\lambda B$ stands for the dilated ball $B(x,\lambda r)$ with $\lambda>0$.
Throughout this paper, we will say that a constant $c=c(\kappa,\mu)>0$ is a \emph{structural constant} if it depends only on the quasimetric constant $\kappa$ and the doubling constant $C_\mu$.
An elementary but important property of the quasimetric is the following. Suppose that we have two balls $B_1=B(x_1,r_1)$ and $B_2=B(x_2,r_2)$ with non empty intersection. Then,
\begin{equation}\label{eq:engulfing}
r_1\le r_2 \Longrightarrow B_1\subset \kappa(2\kappa+1)B_2.
\end{equation}
This is usually known as the ``engulfing'' property and follows directly from the quasitriangular property of the quasimetric.
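For the sake of completeness, we sketch the short computation behind \eqref{eq:engulfing}: if $z\in B_1\cap B_2$, $r_1\le r_2$ and $y\in B_1$, then
\begin{equation*}
d(y,x_2)\le \kappa\big(d(y,x_1)+d(x_1,x_2)\big)\le \kappa\Big(r_1+\kappa\big(d(x_1,z)+d(z,x_2)\big)\Big)\le \kappa\big(r_2+2\kappa r_2\big)=\kappa(2\kappa+1)\,r_2,
\end{equation*}
and therefore $y\in \kappa(2\kappa+1)B_2$.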
In a general space of homogeneous type, the balls $ B(x,r)$ are not necessarily open, but by a theorem of
Macias and Segovia \cite{MS}, there is a continuous quasimetric
$d'$ which is equivalent to $d$ (i.e., there are positive
constants $c_{1}$ and $c_{2}$ such that $c_{1}d'(x,y)\le d(x,y)
\le c_{2}d'(x,y)$ for all $x,y \in \mathcal{S}$) for which every ball is
open. We always assume that the quasimetric $d$ is continuous and
that balls are open.
We will adopt the usual notation: if $\nu$ is a measure and $E$ is a measurable set, $\nu(E)$ denotes the $\nu$-measure of $E$. Also, if $f$ is a measurable function on $(\mathcal S,d,\mu)$ and $E$ is a measurable set, we will use the notation $f(E):=\int_E f(x)\ d\mu$.
We also will denote the $\mu$-average of $f$ over a ball $B$ as $f_{B} = \Xint-_B f d\mu$.
We recall that a weight $w$ (any nonnegative measurable function) satisfies the $A_p$ condition for $1<p<\infty$ if
\begin{equation*}
[w]_{A_p}:=\sup_B\left(\Xint-_B w\ d\mu\right)\left(\Xint-_B w^{-\frac{1}{p-1}}\ d\mu\right)^{p-1}<\infty,
\end{equation*}
where the supremum is taken over all the balls in $\mathcal{S}$. The $A_{\infty}$ class is defined in the natural way by $A_{\infty}:=\bigcup_{p>1}A_p$.
This class of weights can also be characterized by means of an appropriate constant. In fact, there are various different definitions of this constant, all of them equivalent in the sense that they define the same class of weights. Perhaps the most classical and well-known definition is the following, due to Hru\v{s}\v{c}ev
\cite{Hruscev} (see also \cite{GCRdF}):
\begin{equation*}
[w]^{exp}_{A_\infty}:=\sup_B \left(\Xint-_{B} w\,d\mu\right) \exp \left(\Xint-_{B} \log w^{-1}\,d\mu \right).
\end{equation*}
However, in \cite{HP} the authors use a ``new'' $A_\infty$ constant (which was originally introduced implicitly by Fujii in \cite{Fujii} and later by Wilson in \cite{Wilson:87}), which seems to be better suited. For any $w\in A_\infty$, we define
\begin{equation}\label{eq:Ainfty}
[w]_{A_\infty}:= [w]^{W}_{A_\infty}:=\sup_B\frac{1}{w(B)}\int_B M(w\chi_B )\ d\mu,
\end{equation}
where $M$ is the usual Hardy--Littlewood maximal operator. When the underlying space is $\mathbb{R}^d$, it is easy to see that $[w]_{A_\infty}\le c [w]^{exp}_{A_\infty}$ for some structural $c>0$. In fact, it is shown in \cite{HP} that there are examples showing that $[w]_{A_\infty}$ can be much smaller than $[w]^{exp}_{A_\infty}$.
The same line of ideas yields the inequality in this wider scenario. See the recent work of Beznosova and Reznikov \cite{BR} for a comprehensive and thorough study of these different $A_\infty$ constants.
We also refer the reader to the forthcoming work of Duoandikoetxea, Martin-Reyes and Ombrosi \cite{DMRO} for a discussion regarding different definitions of $A_\infty$ classes.
\subsection{Orlicz spaces}\label{sec:Orlicz}
We recall here some basic definitions and facts about Orlicz spaces.
A function $\Phi:[0,\infty) \rightarrow [0,\infty)$ is called a Young function if it is continuous, convex, increasing and satisfies $\Phi(0)=0$ and $\Phi(t) \rightarrow \infty$ as $t \rightarrow \infty$. For Orlicz spaces, we are usually only concerned about the behaviour of Young functions for $t$ large.
The space $L_{\Phi}$ is a Banach function space with the Luxemburg norm
\[
\|f\|_{\Phi} =\|f\|_{\Phi,\mu} =\inf\left\{\lambda >0: \int_{\mathcal{S}}
\Phi( \frac{ |f|}{\lambda }) \, d\mu \le 1 \right\}.
\]
Each Young function $\Phi$ has an associated complementary Young function $\bar{\Phi}$ satisfying
\begin{equation*}
t\le \Phi^{-1}(t)\bar{\Phi}^{-1}(t) \le 2t \label{propiedad}
\end{equation*}
for all $t>0$. The function $\bar{\Phi}$ is called the conjugate
of $\Phi$, and the space $L_{\bar{\Phi}}$ is called the conjugate
space of $L_{\Phi}$. For example, if $\Phi(t) = t^p$ for $1 < p <
\infty$, then $\bar{\Phi}(t) = t^{p'}, p' = p/(p-1)$, and the
conjugate space of $L^p(\mu)$ is $L^{p'}(\mu)$.
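For instance, in this model case one can verify the displayed relation directly:
\begin{equation*}
\Phi^{-1}(t)\,\bar{\Phi}^{-1}(t)=t^{1/p}\, t^{1/p'}=t, \qquad t>0,
\end{equation*}
so the lower bound is attained with equality.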
A very important property of Orlicz spaces is the generalized
H\"older inequality
\begin{equation}\label{eq:HOLDERglobal}
\int_{\mathcal{S}} |fg|\, d\mu \le 2 \|f\|_{\Phi}\|g\|_{\bar{\Phi}}.
\end{equation}
Now we introduce local versions of Luxemburg norms. If $\Phi$ is a Young function, let
\begin{equation*}
\|f\|_{\Phi,B} =\|f\|_{\Phi,B,\mu}= \inf\left\{\lambda >0:
\frac{1}{\mu(B)}\int_{B} \Phi\left(\frac{ |f|}{ \lambda }\right) \,
d\mu \le 1\right\}.
\end{equation*}
Furthermore, the local version of the generalized H\"older inequality (\ref{eq:HOLDERglobal}) is
\begin{equation}\label{eq:HOLDERlocal}
\frac{1}{\mu(B)}\int_{B}fg\, d\mu \le 2 \|f\|_{\Phi,B}\|g\|_{\bar{\Phi},B}.
\end{equation}
Recall the definition of the maximal type operators $M_\Phi$ from \eqref{eq:maximaltype}:
\begin{equation}\label{eq:maximaltype-SHT}
M_{\Phi}f(x)= \sup_{B:x\in B} \|f\|_{\Phi,B}.
\end{equation}
An important fact related to this sort of operator is that its boundedness is related to the so-called $B_p$ condition. For any positive function $\Phi$ (not necessarily a Young function), we have that
\begin{equation*}
\|M_{\Phi}\|^p_{L^{p}(\mathcal{S})}\, \leq c_{\mu,\kappa}\, \alpha_{p}(\Phi),
\end{equation*}
where $\alpha_{p}(\Phi)$ is the following tail condition
\begin{equation}\label{eq:Phi-p}
\alpha_{p}(\Phi)= \,\int_{1}^{\infty} \frac{\Phi(t)} { t^p }
\frac{dt}{t} < \infty.
\end{equation}
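As a quick illustration of \eqref{eq:Phi-p} (a standard computation, included only for the reader's convenience): if $\Phi(t)=t^{q}$ with $1\le q<p$, then
\begin{equation*}
\alpha_{p}(\Phi)=\int_{1}^{\infty} t^{\,q-p-1}\,dt=\frac{1}{p-q},
\end{equation*}
so that $M_{\Phi}f=\big(M(|f|^{q})\big)^{1/q}$ is bounded on $L^{p}(\mathcal{S})$ with $\|M_{\Phi}\|^{p}_{L^{p}(\mathcal{S})}\le c_{\mu,\kappa}/(p-q)$, while for $q\ge p$ the integral diverges and, in general, $M_{\Phi}$ fails to be bounded on $L^{p}$.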
It is worth noting that in the recent article \cite{LL} the authors define the appropriate analogue of the $B_p$ condition in order to characterize the boundedness of the \emph{strong} Orlicz-type maximal function defined over rectangles both in the linear and multilinear cases. Recent developments and improvements can also be found in \cite{Masty-Perez}, where the authors addressed the problem of studying the maximal operator between Banach function spaces.
\subsection{Calder\'on--Zygmund decomposition for spaces of homogeneous type}
\
The following lemma is a classical result in the theory, concerning a decomposition of a generic level set of the Hardy--Littlewood maximal function $M$. Some variants can be found in \cite{AimarPAMS} for $M$ and in \cite{AimarTAMS} for the centered maximal function $M^c$. In this latter case, the proof is straightforward. We include here a detailed proof for the general case of $M$, where some extra subtleties are needed.
\begin{lemma}[Calder\'on--Zygmund decomposition]\label{lem:stoppingtime}
Let $B$ be a fixed ball and let $f$ be a bounded nonnegative measurable function. Let $M$ be the usual non centered Hardy--Littlewood maximal function. Define the set $\Omega_{\lambda}$ as
\begin{equation}\label{eq:omegalambda}
\Omega_\lambda = \{x \in B: Mf(x) >\lambda\}.
\end{equation}
Let $\lambda>0$ be such that $\lambda\ge \Xint-_B f\ d\mu$. If $\Omega_{\lambda}$ is non-empty, then given $\eta > 1$, there exists a countable family
$\{B_i\}$ of pairwise disjoint balls such that, for $\theta=4\kappa^2+\kappa$,
\begin{itemize}
\item[i)] $\displaystyle \cup_{i} B_{i}\subset \Omega_{\lambda} \subset
\cup_{i} \theta B_{i}$,
\item[ii)] For all $i$,
\begin{equation*}
\lambda <\frac{1}{\mu(B_i)} \int_{B_i} f d\mu.
\end{equation*}
\item[iii)] If $B'$ is any ball such that $B_i\subset B'$ for some $i$ and $r(B')\ge \eta r(B_i)$, we have that
\begin{equation}\label{eq:ballmaximal1}
\frac{1}{\mu(\eta B')} \int_{\eta B'} f\, d\mu \le \lambda.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
Define, for each $x\in\Omega_\lambda$, the following set:
\begin{equation*}
\mathcal{R}^\lambda_x=\left\{r>0: \Xint-_{B(y,r)} f\ d\mu >\lambda \mbox{ for some ball } B(y,r)\ni x\right\},
\end{equation*}
which is clearly non-empty. The key here is to prove that $\mathcal{R}^\lambda_x$ is bounded. If the whole space is bounded, there is nothing to prove. In the case of unbounded spaces, we argue as follows.
Since the space is of infinite measure (recall Lemma \ref{lem:bounded-finite}), and clearly $\mathcal{S}=\bigcup_{r>0} B(x,r)$,
we have that $\mu(B(x,r))$ goes to $+\infty$ as $r\to\infty$ for any $x\in \mathcal{S}$. Therefore, for $K=\kappa(2\kappa+1)$, we can choose $r_1$ such that the ball $B_1=B(x,r_1)$ satisfies the inequality
\begin{equation*}
\mu(B_1)\ge \frac{2(2K)^{D_\mu}\|f\|_{L^1}}{\lambda}.
\end{equation*}
Suppose now that $\sup\mathcal{R}^\lambda_x=+\infty$. Then we can choose a ball $B_2=B(y,r_2)$ for some $y$ such that $x\in B_2$, $\Xint-_{B_2}f\ d\mu>\lambda$ and $r_2>r_1$. Now, by the engulfing property \eqref{eq:engulfing} we obtain that $B_1\subset KB_2$. The doubling condition \eqref{eq:doublingDIL} yields
\begin{equation*}
\mu(B_1)\le \mu(KB_2)\le (2K)^{D_\mu}\mu(B_2).
\end{equation*}
Combining the two previous inequalities with the fact that $\Xint-_{B_2}f\ d\mu>\lambda$, we obtain that
\begin{equation*}
\frac{2\|f\|_{L^1}}{\lambda}\le \mu(B_2)
< \frac{\|f\|_{L^1}}{\lambda},
\end{equation*}
which is a contradiction. We conclude that, in any case, for any $x\in \Omega_\lambda$, we have that $\sup \mathcal{R}^\lambda_x<\infty$.
Now fix $\eta>1$. If $x \in \Omega_\lambda$, there is a ball $B_{x}$ containing $x$, whose radius $r(B_x)$ satisfies $\frac{\sup \mathcal{R}^\lambda_x}{\eta} < r(B_x)\leq \sup \mathcal{R}^\lambda_x$, and for which $\Xint-_{B_x} f \ d\mu > \lambda$. Thus the ball $B_x$ satisfies ii) and iii). Also
note that $\Omega_\lambda = \bigcup_{x\in \Omega_{\lambda}} B_x$.
Picking a Vitali type subcover of $\{B_{x}\}_{x\in
\Omega_{\lambda}}$ as in \cite{SW}, Lemma 3.3, we obtain a
family of pairwise disjoint balls $\{B_{i}\} \subset
\{B_{x}\}_{x\in \Omega_{\lambda}}$ satisfying i). Therefore
$\{B_i\}$ satisfies i), ii) and iii).
\end{proof}
We will need another important lemma in order to handle decompositions of level sets at different scales simultaneously.
\begin{lemma}\label{lem:disjointing}
Let $B$ be a ball and let $f$ be a bounded nonnegative measurable function. Let also $a \gg 1$ be fixed and, for each integer $k$ such that $a^k>\Xint-_B f\ d\mu$, define $\Omega_{k}$ as
\begin{equation}\label{eq:Omega-k}
\Omega _{k} = \left\{x\in B: Mf(x) >a^{k} \right\}.
\end{equation}
Let $\{E_i^k\}_{i,k}$ be defined by $E_i^k=B_i^k\setminus \Omega_{k+1}$, where the family of balls $\{B_i^k\}_{i,k}$ is obtained by applying Lemma \ref{lem:stoppingtime} to each $\Omega_k$.
Then, for $\theta=4\kappa^2+\kappa$ as in the previous Lemma and $\eta=\kappa^2(4\kappa+3)$, the following inequality holds:
\begin{equation}\label{eq:Bik vs Eik}
\mu(B_i^k\cap \Omega_{k+1})< \frac{(4\theta\eta)^{D_\mu}}{a}\mu(B_i^k).
\end{equation}
Consequently, for sufficiently large $a$, we can obtain that
\begin{equation}\label{eq:Bik vs Eik one half}
\mu(B_i^k) \le 2\mu(E_i^k).
\end{equation}
\end{lemma}
\begin{proof}
To prove the claim, we apply Lemma \ref{lem:stoppingtime} with $\eta=\kappa^2(4\kappa+3)$. Then, by part i), we have that, for $\theta=4\kappa^2+\kappa$
\begin{equation*}
\Omega_{k+1}\subset \bigcup_m\theta B^{k+1}_m
\end{equation*}
and then
\begin{equation}\label{eq:decompBik-k+1}
\mu(B_{i}^{k} \cap \Omega_{k+1} )\le \sum_{m} \mu( B_{i}^{k} \cap
\theta B_{m}^{k+1} ).
\end{equation}
Suppose now that $B_i^k\cap \theta B_m^{k+1}\neq \emptyset$. We claim that $r(B_{m}^{k+1})\le r(B_{i}^{k})$. Suppose the contrary, namely $r(B_{m}^{k+1})> r(B_{i}^{k})$. Then, by property \eqref{eq:engulfing}, we can see that $B_{i}^{k} \subset \kappa^2(4\kappa+3) B_{m}^{k+1}=\eta B_{m}^{k+1}$.
For $B'=\eta B_{m}^{k+1}$, part iii) from Lemma \ref{lem:stoppingtime} gives us that the average satisfies
\begin{equation}\label{eq:avg-etaBmk+1}
\frac{1}{\mu(B')}\int_{B'} f\ d\mu\le a^k.
\end{equation}
Now, by the properties of the family $\{B_m^{k+1}\}_m$ and the doubling condition of $\mu$, we have that, for $a>(2\eta)^{D_\mu}$,
\begin{equation}\label{eq:ak}
\frac{1}{\mu(\eta B_{m}^{k+1})}\int_{\eta B_{m}^{k+1}} f\ d\mu>\frac{a^{k+1}}{ (2\eta)^{D_\mu}}>a^k.
\end{equation}
This last inequality contradicts \eqref{eq:avg-etaBmk+1}. Then, whenever $B_i^k\cap \theta B_m^{k+1}\neq \emptyset$, we have that
$r(B_{m}^{k+1})\le r(B_{i}^{k})$ and from that it follows that $ B_{m}^{k+1}\subset \eta B_{i}^{k}$. The sum \eqref{eq:decompBik-k+1} now becomes
\begin{eqnarray*}
\mu(B_{i}^{k} \cap \Omega_{k+1} )& \le & \sum_{m:B_{m}^{k+1}\subset \eta B_{i}^{k}} \mu( B_{i}^{k} \cap \theta B_{m}^{k+1} )\\
& \le & (2\theta)^{D_\mu} \sum_{m:B_{m}^{k+1}\subset \eta B_{i}^{k}} \mu(B_{m}^{k+1} )\\
&\le & \frac{(2\theta)^{D_\mu}}{a^{k+1}}\int_{\eta B_i^k}f\ d\mu
\end{eqnarray*}
since the sets $\{B_m^{k+1}\}_m$ are pairwise disjoint. Finally, by part iii) of Lemma \ref{lem:stoppingtime}, we obtain
\begin{equation*}
\mu(B_{i}^{k} \cap \Omega_{k+1})\le \frac{(4\theta\eta)^{D_\mu}}{a}\mu(B_i^k),
\end{equation*}
which is inequality \eqref{eq:Bik vs Eik}.
\end{proof}
\section{Proofs of the main results}\label{sec:proofs}
We present here the proofs of our main results. Our starting point is a version of the sharp two weight inequality \eqref{eq:moen}, valid for SHT, from \cite{kairema:twoweight}:
\begin{theorem}[\cite{kairema:twoweight}]\label{thm:kairema}
Let $(\mathcal{S},\rho,\mu)$ be a SHT. Then the Hardy--Littlewood maximal operator $M$ defined by \eqref{eq:maximal-SHT} satisfies the bound
\begin{equation}\label{eq:kairema}
\left\|M(f\sigma)\right\|_{L^p(w)}\le C p'[w,\sigma]_{S_p}\|f\|_{L^p(\sigma)},
\end{equation}
where $[w,\sigma]_{S_p}$ is Sawyer's testing constant with respect to balls:
\begin{equation}
[w,\sigma]_{S_p}:=\sup_B \left(\frac1{\sigma(B)} \int_B M(\sigma\chi_B)^pw\ d\mu\right)^{1/p}.
\end{equation}
\end{theorem}
We now present the proof of the main result.
\begin{proof}[Proof of Theorem \ref{thm:main}]
By Theorem \ref{thm:kairema}, we only need to prove that
\begin{equation*}
[w,\sigma]_{S_p}\le C [w,\sigma,\Phi]^{1/p}_{A_p}[\sigma,\bar\Phi]^{1/p}_{W_p}
\end{equation*}
for some constant $C$, for any Young function $\Phi$ and any $1<p<\infty$. Let $B$ be a fixed ball and consider the sets $\Omega_k$ from \eqref{eq:Omega-k} for the function $\sigma\chi_B$ and any $k\in \mathbb{Z}$. We remark here that, in order to apply a C--Z decomposition to these sets, we need the level of the decomposition to be larger than the average over the ball. We proceed as follows. Take any $a>1$ and consider $k_0\in\mathbb{Z}$ such that
\begin{equation}\label{eq:small-average}
a^{k_0-1}< \Xint-_B \sigma\ d\mu \le a^{k_0}.
\end{equation}
Now, let $A$ be the set where the maximal function is small:
\begin{equation*}
A=\left\{x\in B: M(\sigma\chi_B)(x)\le a\Xint-_B \sigma\ d\mu\right\}.
\end{equation*}
For any $x\in B\setminus A$, we have that
\begin{equation*}
M(\sigma\chi_B)(x)> a\Xint-_B \sigma\ d\mu>a^{k_0}\ge\Xint-_B \sigma\ d\mu.
\end{equation*}
Therefore,
\begin{eqnarray*}
\int_B M(\sigma\chi_B)^p w \ d\mu & = & \int_A M(\sigma\chi_B)^p w \ d\mu +\int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu \\
& \le & a^p w(B) \left(\Xint-_{B} \sigma\ d\mu\right)^p + \sum_{k\ge k_0} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(\sigma\chi_B)^p w\ d\mu\\
&= & I + II
\end{eqnarray*}
The first term $I$ can be bounded easily. By the local generalized H\"older inequality \eqref{eq:HOLDERlocal}, we obtain
\begin{eqnarray*}
I & \le & 2a^p\left(\Xint-_B w\ d\mu \right)\|\sigma^{1/p'}\|^p_{\Phi,B}\|\sigma^{1/p}\|^p_{\bar\Phi,B}\ \mu(B)\\
&\le & 2[w,\sigma,\Phi]_{A_p}\int_B M_{\bar\Phi}(\sigma^{1/p}\chi_B)^p\ d\mu
\end{eqnarray*}
Now, for the second term $II$, we first note that
\begin{eqnarray*}
\int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu & = & \sum_{k\ge k_0} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(\sigma\chi_B)^p w\ d\mu\\
& \le & a^p\sum_{k\ge k_0} a^{kp} w(\Omega_{k})
\end{eqnarray*}
By the choice of $k_0$, we can apply Lemma \ref{lem:stoppingtime} to perform a C--Z decomposition at all levels $k\ge k_0$ and obtain a family of balls $\{B^k_i\}_{i,k}$ with the properties listed in that lemma. Then,
\begin{eqnarray*}
\int_{B\setminus A} M(\sigma\chi_B)^p w \ d\mu &\le & a^{p} \sum_{k,i} \left(\Xint-_{B_i^k}\sigma\chi_B\ d\mu \right)^{p} w(\theta B_i^k)\\
&\le & a^{p} \sum_{k,i} \left(\frac{\mu(\theta B_i^k)}{\mu(B_i^k)}\Xint-_{\theta B_i^k}\sigma^\frac{1}{p}\sigma^\frac{1}{p'}\chi_B\ d\mu \right)^{p} w(\theta B_i^k)
\end{eqnarray*}
We now proceed as before, using the local generalized H\"older inequality \eqref{eq:HOLDERlocal} and the doubling property \eqref{eq:doublingDIL} of the measure (twice). Then we obtain
\begin{equation*}
\int_{B\setminus A} M(\sigma\chi_B)^p w d\mu \le 2a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p}\sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k)
\end{equation*}
The key here is to use Lemma \ref{lem:disjointing} to pass from the family $\{B_i^k\}$ to the pairwise disjoint family $\{E_i^k\}$. Then, for $a\ge 2(4\theta\eta)^{D_\mu}$, we can bound the last sum as follows
\begin{eqnarray*}
\sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k)& \le & 2 \sum_{k,i}\left\| \sigma^\frac{1}{p}\chi_B\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(E_i^k)\\
&\le& 2 \sum_{k,i} \int_{E_i^k}M_{\bar{\Phi}}(\sigma^\frac{1}{p}\chi_B)^p\ d\mu\\
&\le& 2 \int_B M_{\bar{\Phi}}(\sigma^\frac{1}{p}\chi_B)^p\ d\mu
\end{eqnarray*}
since the sets $\{E_i^k\}$ are pairwise disjoint. Collecting all previous estimates and dividing by $\sigma(B)$, we obtain the desired estimate
\begin{equation*}
[w,\sigma]^p_{S_p}\le 4 a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} [\sigma,\bar\Phi]_{W_p},
\end{equation*}
and the proof of Theorem \ref{thm:main} is complete.
\end{proof}
It remains to prove Corollary \ref{cor:mixed-two-weight}. To that end, we need to consider the special case of $\Phi(t)=t^{p'}$.
\begin{proof}[Proof of Corollary \ref{cor:mixed-two-weight}]
Considering then $\Phi(t)=t^{p'}$, the quantity \eqref{eq:A_p-local} is
\begin{eqnarray*}
A_p(w,\sigma,B,\Phi) & = &\left( \Xint-_{B} w\, d\mu\right)\|\sigma^{1/p'}\|^p_{\Phi,B} \\
& = & \left( \Xint-_{B} w\, d\mu\right) \left( \Xint-_{B} \sigma \, d\mu\right)^{p-1}.
\end{eqnarray*}
In addition, we have from \eqref{eq:WpPhi-p--Ainfty} that $[\sigma,\overline{\Phi_{p'}}]_{W_p}=[\sigma,\Phi_p]_{W_p}=[\sigma]_{A_\infty}$ and therefore we obtain \eqref{eq:mixed-two}.
\end{proof}
For the proof of Corollary \ref{cor:precise-bump}, we simply use the boundedness of $M_{\bar\Phi}$ on $L^p(\mu)$,
\begin{equation*}
[\sigma,\bar\Phi]_{W_p}:=\sup_B\frac{1}{\sigma(B)}\int_B M_{\bar\Phi}\left(\sigma^{1/p}\chi_B\right)^p\ d\mu
\leq \|M_{\bar\Phi}\|^p_{L^p}.
\end{equation*}
The proof of Corollary \ref{cor:mixed-one-weight} is trivial.
\section{Appendix}\label{sec:appendix}
We include here a direct proof of a version of Corollary \ref{cor:precise-bump} which is better in terms of the dependence on $p$. Precisely, we have the following proposition.
\begin{proposition}\label{pro:precise-bump-sharp-p}
Let $1 < p < \infty$. For any pair of weights $w,\sigma$ and any Young function $\Phi$,
there exists a structural constant $C>0$ such that
\begin{equation*}
\|M (f\sigma)\|_{L^p(w)}\leq C [w,\sigma,\Phi]^{1/p}_{A_p} \|M_{\bar\Phi}\|_{L^p}\|f\|_{L^{p}(\sigma)}
\end{equation*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{pro:precise-bump-sharp-p}]
By density it is enough to prove the inequality for every nonnegative bounded function $f$ with compact support. We first consider the case of unbounded $\mathcal S$. In this case we have $\Xint-_{\mathcal S} f\sigma\ d\mu=0$. Therefore, instead of the sets from \eqref{eq:Omega-k}, we consider
\begin{equation*}\label{eq:Omega-k-global}
\Omega _{k} = \left\{x\in \mathcal{S}: M(f\sigma)(x) >a^{k} \right\},
\end{equation*}
for any $a>1$ and any $k\in \mathbb{Z}$. Then, we can write
\begin{equation*}
\int_{\mathcal S} M(f\sigma)^p w \ d\mu = \sum_{k} \int_{ \Omega_{k}\setminus \Omega_{k+1}} M(f\sigma)^p w\ d\mu
\end{equation*}
Then, following the same line of ideas as in the proof of Theorem \ref{thm:main}, we obtain
\begin{equation*}
\int_{\mathcal S} M(f\sigma)^p w d\mu \le 2a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} \sum_{k,i}\left\| f\sigma^\frac{1}{p}\right\|^p_{\bar{\Phi},\theta B_i^k}\mu(B_i^k)
\end{equation*}
By Lemma \ref{lem:disjointing} we can replace the family $\{B_i^k\}$ by the pairwise disjoint family $\{E_i^k\}$ to obtain the desired estimate:
\begin{equation*}
\int_{\mathcal S} M(f\sigma)^p w \ d\mu \le 4a^{p} (2\theta)^{(p+1)D_\mu}[w,\sigma,\Phi]_{A_p} \|M_{\bar{\Phi}}\|_{L^p}^p\int_{\mathcal S}f^p\sigma\ d\mu.
\end{equation*}
In the bounded case, the whole space is a ball and we can write $\mathcal S=B(x,R)$ for any $x$ and some $R>0$. The problem here is to deal with the small values of $\lambda$, since we cannot apply Lemma \ref{lem:disjointing} for $a^k\le \Xint-_{\mathcal S} f\sigma\ d\mu$. We then take any $a>1$ and consider $k_0\in\mathbb{Z}$ satisfying \eqref{eq:small-average}:
\begin{equation*}
a^{k_0-1}< \Xint-_{\mathcal S} f\sigma\ d\mu \le a^{k_0}
\end{equation*}
and argue as in the proof of Theorem \ref{thm:main}.
\end{proof}
Now, from this last proposition, we can derive another proof of the mixed bound \eqref{eq:mixed-two} from Corollary \ref{cor:mixed-two-weight}. The disadvantage of this approach with respect to the previous one is that we need a deep property of $A_\infty$ weights: the sharp Reverse H\"older Inequality. In the whole generality of SHT, we only know a \emph{weak} version of this result from the recent paper \cite{HPR1}:
\begin{theorem}[Sharp weak Reverse H\"older Inequality, \cite{HPR1}]\label{thm:SharpRHI}
Let $w\in A_\infty$. Define the exponent $r(w)=1+\frac{1}{\tau_{\kappa\mu}[w]_{A_{\infty}}}$,
where $\tau_{\kappa\mu}$ is a structural constant.
Then,
\begin{equation*}
\left(\Xint-_B w^{r(w)}\ d\mu\right)^{1/r(w)}\leq 2(4\kappa)^{D_\mu}\Xint-_{2\kappa B} w\ d\mu,
\end{equation*}
where $B$ is any ball in $\mathcal S$.
\end{theorem}
The other ingredient for the alternative proof of Corollary \ref{cor:mixed-two-weight} is the known estimate for the operator norm of $M$: for any $1<q<\infty$, we have that $\|M\|^q_{L^q}\sim q'$.
\begin{proof}[Another proof of Corollary \ref{cor:mixed-two-weight}]
Consider the particular choice of $\Phi(t)=t^{p'r}$ for $r>1$. Then the quantity \eqref{eq:A_p-local} is
\begin{equation*}
A_p(w,\sigma,B,\Phi) =\left( \Xint-_{B} w\, d\mu\right) \left( \Xint-_{B} \sigma^r\, d\mu\right)^{\frac{p}{rp'}}.
\end{equation*}
If we choose $r$ from the sharp weak reverse H\"older property (Theorem \ref{thm:SharpRHI}), we obtain that
\begin{eqnarray*}
A_p(w,\sigma,B,\Phi) & \le &
\left( \Xint-_{B} w\ d\mu\right)\left(2(4\kappa)^{D_\mu}\Xint-_{2\kappa B} \sigma\ d\mu\right)^{p-1}\\
&\le& 2^{p-1}(4\kappa)^{pD_\mu}\left( \Xint-_{2\kappa B} w\ d\mu\right)\left(\Xint-_{2\kappa B} \sigma\ d\mu\right)^{p-1}\\
&\le & 2^{p-1}(4\kappa)^{pD_\mu}[w,\sigma]_{A_p}
\end{eqnarray*}
Therefore, the proof of Proposition \ref{pro:precise-bump-sharp-p} gives
\begin{equation*}
\|M (f\sigma)\|_{L^p(w)} \leq C [w,\sigma]_{A_p}^{1/p} \|M_{\bar{\Phi}}\|_{L^{p}(\mathcal{S},d\mu)} \, \|f\|_{L^{p}(\sigma)}.
\end{equation*}
We conclude the proof by computing $\|M_{\bar \Phi}\|_{L^p}$ for $\Phi(t)=t^{p'r}$. Using \eqref{eq:Phi-p}, we obtain that $\|M_{\bar \Phi}\|^p_{L^p}\le c\, r'p'$.
Moreover, by the choice of $r$, it follows that $r'\sim [\sigma]_{A_\infty}$, and we obtain \eqref{eq:mixed-two}.
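For the reader's convenience, we sketch the computation behind the bound $\|M_{\bar \Phi}\|^p_{L^p}\le c\, r'p'$ (a routine estimate; constants are not tracked). For $\Phi(t)=t^{p'r}$ the complementary function is comparable to $\bar\Phi(t)=t^{(p'r)'}$, and hence, by \eqref{eq:Phi-p},
\begin{equation*}
\alpha_{p}(\bar\Phi)=\int_{1}^{\infty} t^{\,(p'r)'-p-1}\,dt=\frac{1}{p-(p'r)'}=\frac{p'r-1}{p(r-1)}\le \frac{p'r}{p(r-1)}=\frac{r'p'}{p}\le r'p',
\end{equation*}
where we used that $p-(p'r)'=\frac{p(p'r-1)-p'r}{p'r-1}=\frac{p(r-1)}{p'r-1}$, since $p'(p-1)=p$.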
\end{proof}
\end{document}
\begin{document}
\title{$k$-Sets and Rectilinear Crossings in Complete Uniform Hypergraphs}
\author{Rahul Gangopadhyay\inst{1} \and
Saswata Shannigrahi\inst{2}}
\authorrunning{R. Gangopadhyay et al.}
\institute{ IIIT Delhi, India\\
\email{rahulg@iiitd.ac.in}\\
\and
Saint Petersburg State University, Russia\\
\email{saswata.shannigrahi@gmail.com}}
\maketitle
\begin{abstract}
In this paper, we study the $d$-dimensional rectilinear drawings of the complete $d$-uniform hypergraph $K_{2d}^d$. Anshu et al. [Computational Geometry: Theory and Applications, 2017] used the Gale transform and the Ham-Sandwich theorem to prove that there exist $\Omega \left(2^d\right)$ crossing pairs of hyperedges in such a drawing of $K_{2d}^d$. We improve this lower bound by showing that there exist $\Omega \left(2^d \sqrt{d}\right)$ crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$. We also prove the following results. \\
1. There are $\Omega \left(2^d {d^{3/2}}\right)$ crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ when its $2d$ vertices are either not in convex position in $\mathbb{R}^d$ or form the vertices of a $d$-dimensional convex polytope that is $t$-neighborly but not $(t+1)$-neighborly for some constant $t\geq1$ independent of $d$.\\
2. There are $\Omega \left(2^d {d^{5/2}}\right)$ crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ when its $2d$ vertices form the vertices of a $d$-dimensional convex polytope that is $(\floor{d/2}-t')$-neighborly for some constant $t' \geq 0$ independent of $d$.
\keywords{Rectilinear Crossing Number \and Neighborly Polytope \and Gale Transform \and Affine Gale Diagram \and Balanced Line \and $j$-Facet \and $k$-Set}
\end{abstract}
\section{Introduction}
\label{Intro}
For a sufficiently large positive integer $d$, let $K_{n}^d$ denote the complete $d$-uniform hypergraph with $n \geq 2d$ vertices. A \textit{$d$-dimensional rectilinear drawing} of $K_{n}^d$ is defined as an embedding of it in $\mathbb{R}^d$ such that the vertices of $K_{n}^d$ are points in general position in $\mathbb{R}^d$ and each hyperedge is drawn as the $(d-1)$-simplex formed by the $d$ corresponding vertices \cite{AS}. Note that a set of points in $\mathbb{R}^d$ is said to be in \textit{general position} if no $d+1$ of them lie on a $(d-1)$-dimensional hyperplane. For $u$ and $v$ in the range $0 \leq u, v \leq d-1$, a $u$-simplex spanned by
a set of $u+1$ points and a $v$-simplex spanned by a set of $v+1$ points (when these $u+v+2$ points are in general position in $\mathbb{R}^d$) are said to be {\textit {crossing}} if they do not have common vertices ($0$-faces) and contain a common point in their relative interiors \cite{DP}. As a result, a pair of hyperedges in a $d$-dimensional rectilinear drawing of $K_{n}^d$ are said to be \textit{crossing} if they do not have a common vertex and contain a common point in their relative interiors \cite{AGS,AS,DP}. The {\textit{$d$-dimensional rectilinear crossing number}} of $K_{n}^d$, denoted by $\overline {cr}_d(K_{n}^d)$, is defined as the minimum number of crossing pairs of hyperedges among all $d$-dimensional rectilinear drawings of $K_{n}^d$ \cite{AGS,AS}. \par
Since the crossing pairs of hyperedges formed by a set containing $2d$ vertices of $K_{n}^d$ are distinct from the crossing pairs of hyperedges formed by another set of $2d$ vertices, it can be observed that $\overline {cr}_d(K_{n}^d) \geq \overline {cr}_d(K_{2d}^d)\dbinom{n}{2d}$ \cite{AS}. The best-known lower bound on $\overline {cr}_d(K_{2d}^d)$ is $\Omega (2^d)$ \cite{AGS}. Anshu et al. \cite{AGS} also studied a $d$-dimensional rectilinear drawing of $K_{2d}^d$ where all its vertices are placed on the {\textit{$d$-dimensional moment curve}} $\gamma=\{(a,a^2,\ldots, a^d): a \in \mathbb{R}\}$ and proved that the number of crossing pairs of hyperedges is $\Theta\left(4^d/ \sqrt{d}\right)$ in this drawing. \par
As described above, an improvement of the lower bound on $\overline {cr}_d(K_{2d}^d)$ improves the lower bound on $\overline {cr}_d(K_{n}^d)$.
Let us denote the points corresponding to the set of vertices in a $d$-dimensional rectilinear drawing of the hypergraph $K_{2d}^d$ by
$V = \{v_1, v_2, \ldots, v_{2d}\}$.
The points in $V$ are said to be in {\textit{convex position}} if there does not exist any point $v_i \in V$ (for some $i$ in the range $1\leq i \leq 2d$) such that $v_i$ can be expressed as a convex combination of the points in $V \setminus \{v_i\}$, and such a drawing of $K_{2d}^d$ is called a {\textit{$d$-dimensional convex drawing}} of it \cite{AGS}. \par
Note that the convex hull of the vertices of $K_{2d}^d$ in a $d$-dimensional convex drawing of it forms a $\textit{$d$-dimensional convex polytope}$ with its vertices in general position. For any $t\geq 1$, a \textit{$d$-dimensional $t$-neighborly polytope} is a
$d$-dimensional convex polytope in which every subset of its vertex set with at most $t$ vertices forms a face \cite[Page 122]{GRU}. Such a polytope is said to be \textit{neighborly} if it is $\floor{d/2}$-neighborly. Note that any $d$-dimensional convex polytope can be at most $\floor{d/2}$-neighborly unless it is a $d$-simplex, which is $d$-neighborly \cite[Page 123]{GRU}. The {\textit{$d$-dimensional cyclic polytope}} is a special kind of neighborly polytope where all its vertices are placed on the $d$-dimensional moment curve \cite[Page 15]{ZIE}. Using these definitions and notations, let us describe our contributions in this paper.
In Section \ref{imprvcross}, we improve the lower bound on $\overline {cr}_d(K_{2d}^d)$. In particular, we prove the following theorem which implies Corollary \ref{thm41}.
\begin{theorem}
\label{thm4}
$\overline {cr}_d(K_{2d}^d)= \Omega(2^d \sqrt{d})$.
\end{theorem}
\begin{corollary}
\label{thm41}
$\overline {cr}_d(K_{n}^d)= \Omega(2^d \sqrt{d})\dbinom{n}{2d}$.
\end{corollary}
We then prove the following theorems to obtain lower bounds on the number of crossing pairs of hyperedges in some special types of $d$-dimensional rectilinear drawings of $K_{2d}^d$.
\begin{theorem}
\label{thm1}
The number of crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is $\Omega (2^d {d^{3/2}})$ if the vertices of $K_{2d}^d$ are not in convex position.\end{theorem}
\begin{theorem}
\label{thm2}
For any constant $t \geq 1$ independent of $d$, the number of crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is $\Omega (2^d {d^{3/2}})$ if the vertices of $K_{2d}^d$ are placed as the vertices of a $d$-dimensional $t$-neighborly polytope that is not $(t+1)$-neighborly.
\end{theorem}
\begin{theorem}
\label{thm3}
For any constant $t' \geq 0$ independent of $d$, the number of crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is $\Omega (2^d d^{5/2})$ if the vertices of $K_{2d}^d$ are placed as the vertices of a $d$-dimensional $(\floor{d/2}-t')$-neighborly polytope.
\end{theorem}
\noindent Note that the $t'=0$ case in Theorem \ref{thm3} corresponds to a neighborly polytope. As mentioned above, the number of crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is known to be $\Theta(4^d/\sqrt{d})$ when such a polytope is cyclic.\\
\noindent {\bf Techniques Used:} We use the properties of Gale transform, affine Gale diagram, $k$-sets and balanced lines to prove Theorems
\ref{thm4}, \ref{thm1}, \ref{thm2} and \ref{thm3}. We discuss these properties in detail in Sections \ref{TU} and \ref{TU1}. In addition, a few other results used in these proofs are mentioned below.
Let us first state the Ham-Sandwich theorem and Carath\'{e}odory's theorem, which are used in the proofs of Theorems \ref{thm4} and \ref{thm1}, respectively.\\
\noindent{\textbf {Ham-Sandwich Theorem:}} \cite{JM,STO} There exists a $(d-1)$-dimensional hyperplane $h$ which simultaneously bisects $d$ finite point sets $P_1, P_2, \ldots, P_d$ in $\mathbb{R}^d$, such that each of the open half-spaces created by $h$ contains at most $\left\lfloor\small{{|P_i|}/{2}}\right\rfloor$ points of $P_i$ for each $i$ in the range $1 \leq i \leq d$.\\
\noindent{\textbf {Carath\'{e}odory's Theorem:}} \cite{JM,ST} Let $X \subseteq \mathbb{R}^d$. Then, each point in the convex hull $Conv(X)$ of $X$ can be expressed as a convex combination of at most $d+1$ points in $X$.\\
\noindent In the following, we state the Proper Separation theorem that is used in the proof of Lemma \ref{extension}. Two non-empty convex sets are said to be \textit{properly separated} in $\mathbb{R}^d$ if they lie in the opposite closed half-spaces created by a $(d-1)$-dimensional hyperplane and they are not both contained in the hyperplane. \\
\noindent{\textbf {Proper Separation Theorem:}} \cite[Page 148]{OG} Two non-empty convex sets can be properly separated in $\mathbb{R}^d$ if and only if their relative interiors are disjoint.\\
\noindent The proof of the following lemma, which is used in the proofs of all four theorems, is the same as the proof given in \cite{AGS} for the special case $u=v=d-1$. For the sake of completeness, we repeat it here in full generality.
\begin{lemma}\cite{AGS}
\label{extension}
Consider a set $A$ that contains at least $d+1$ points in general position in $\mathbb{R}^d$. Let $B$ and $C$ be its disjoint subsets such that $|B|= b$, $|C|=
c$, $2 \leq b,c \leq d$ and $b+c \geq d+1$. If the $(b-1)$-simplex formed by
$B$ and the
$(c-1)$-simplex formed by $C$ form a crossing pair, then the $u$-simplex ($u \geq b-1$) formed by a point set $B'\supseteq B$ and the $v$-simplex ($v \geq c-1$) formed by a point set $C' \supseteq C$ satisfying $B' \cap C' = \emptyset$, $|B'|, |C'| \leq d$ and $B', C' \subset A$ also form a crossing pair.
\end{lemma}
\begin{proof}
For the sake of contradiction, we assume that there exist a $u$-simplex and a $v$-simplex, formed respectively by the disjoint point sets $B' \supseteq B$ and $C' \supseteq C$, that do not cross. We consider two cases.
\begin{case**}
{\normalfont Let us assume that $Conv(B') \cap Conv(C') = \emptyset$. It clearly leads to a contradiction since $Conv(B) \cap Conv(C) \ne \emptyset$.}
\end{case**}
\begin{case**}
{\normalfont Let us assume that $Conv(B') \cap Conv (C') \neq \emptyset$. Since the relative interiors of $Conv(B')$ and $Conv(C')$ are disjoint, the Proper Separation theorem implies that there exists a $(d-1)$-dimensional hyperplane $h$ such that $Conv(B')$ and $Conv(C')$ lie in the opposite closed half-spaces determined by $h$. It implies that $Conv(B)$ and $Conv(C)$ also lie in the opposite closed half-spaces created by $h$. Since the relative interiors of $Conv(B)$ and $Conv(C)$ are not disjoint and they lie in the opposite closed halfspaces of $h$, it implies that all $b+c \geq d+1$ points in $B \cup C$ lie on $h$. This leads to a contradiction since the points in $B \cup C$ are in general position in $\mathbb{R}^d$.} \qed
\end{case**}
\end{proof}
\section{Gale Transform and its Properties}
\label{TU}
The {\textit{Gale transformation}} is a useful technique to investigate the properties of high dimensional point sets \cite{GL}. Consider a sequence of $m>d+1$ points $P=$ $<p_1,p_2, \ldots, p_m>$ in $\mathbb{R}^d$ such that the affine hull of the points is $\mathbb{R}^d$. Let the $i^{th}$ point $p_i$ be represented as $(x_1^i, x_2^i, \ldots, x_d^i)$. To compute a {\textit{Gale transform}} of $P$, let us consider the $(d+1)\times m$ matrix $M(P)$ whose $i^{th}$ column is
$\begin{pmatrix}
x_1^i \\
x_2^i \\
\vdots \\
x_d^i \\
1
\end{pmatrix}
$.
Since there exists a set of $d+1$ points in $P$ that is affinely independent, the rank of $M(P)$ is $d+1$. Therefore, the dimension of the null space of $M(P)$ is $m-d-1$. Let $\{(b_1^1, b_2^1, \ldots, b_m^1),$ $(b_1^2, b_2^2, \ldots,
b_m^2), \ldots,(b_1^{m-d-1}, b_2^{m-d-1},$ $\ldots, b_m^{m-d-1}) \}$ be a set of $m-d-1$ vectors that spans the null space of $M(P)$. A Gale transform $D(P)$ is the sequence of vectors $D(P)$ $=$ $<g_1,g_2, \ldots, g_m>$ where $g_i= (b_i^1$, $b_i^2$, $\ldots,
b_i^{m-d-1})$ for each $i$ satisfying $1 \leq i \leq m$. Note that $D(P)$ can also be treated as a point sequence in $\mathbb{R}^{m-d-1}$.\par
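To fix ideas, here is a small worked example (our own illustration, not taken from \cite{GL}). Take $d=2$, $m=4$ and $P=$ $<p_1,p_2,p_3,p_4>$ with $p_1=(0,0)$, $p_2=(1,0)$, $p_3=(0,1)$ and $p_4=(1,1)$, the vertices of the unit square. Then
\[
M(P)=\begin{pmatrix}
0 & 1 & 0 & 1\\
0 & 0 & 1 & 1\\
1 & 1 & 1 & 1
\end{pmatrix},
\]
whose null space is one-dimensional and spanned by $(1,-1,-1,1)$. Hence a Gale transform is $D(P)=$ $<1,-1,-1,1>$, a sequence of four nonzero vectors in $\mathbb{R}^{m-d-1}=\mathbb{R}$.\par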
We define a {\textit{linear separation}} of $D (P)$ to be a partition of the vectors in $D(P)$ into two disjoint sets of vectors $D^+(P)$ and $D^-(P)$ contained in the opposite open half-spaces created by a linear hyperplane (i.e., a hyperplane passing through the origin). A linear separation is said to be \textit{proper} if one of the sets among $D^+(P)$ and $D^-(P)$ contains $\floor{{m}/{2}}$ vectors and the other contains $\ceil{{m}/{2}}$ vectors. We use the following properties of $D(P)$ in the proofs of our theorems. The first two of these properties are used in the proofs of all four theorems. The third property is used in the proof of Theorem \ref{thm1} and the last one is used in the proof of Theorem \ref{thm2}.
\begin{lemma}\cite[Page 111]{JM}
\label{genposi}
If the points
in $P$ are in general position in $\mathbb{R}^d$, each collection of $m-d-1$ vectors in $D(P)$ spans $\mathbb{R}^{m-d-1}$.
\end{lemma}
\begin{lemma}\cite[Page 111]{JM}
\label{bjection}
Consider two integers $u$ and $v$ satisfying $1 \leq u, v \leq d-1$ and $u+v+2 = m$. If the points in $P$ are in general
position in $\mathbb{R}^d$, there exists a
bijection between the crossing pairs of $u$- and $v$-simplices formed by some points
in $P$ and the linear separations of $D(P)$ into $D^+(P)$ and $D^-(P)$
such that $|D^+(P)| = u+1$ and $|D^-(P)| = v+1$.
\end{lemma}
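To illustrate Lemma \ref{bjection}, consider again the unit-square example above. There $D(P)=$ $<1,-1,-1,1>$ lies in $\mathbb{R}^{1}$, a linear hyperplane is just the origin, and the only linear separation with $|D^+(P)|=|D^-(P)|=2$ is $D^+(P)=\{g_1,g_4\}$, $D^-(P)=\{g_2,g_3\}$. With $u=v=1$, Lemma \ref{bjection} matches this separation with the unique crossing pair of $1$-simplices spanned by $\{p_1,p_4\}$ and $\{p_2,p_3\}$, namely the two diagonals of the square, which indeed cross at $(1/2,1/2)$.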
\begin{lemma}\cite[Page 111]{JM}
\label{convexity}
The points in $P$ are in convex position in $\mathbb{R}^d$ if
and only if there
is
no linear hyperplane with exactly one vector from $D(P)$ in one of the open half-spaces created by it.
\end{lemma}
\begin{lemma}\cite[Page 126]{GRU}
\label{neigh}
If the points in $P$ are in convex position, the $d$-dimensional polytope formed by the convex hull of the points in $P$ is $t$-neighborly if and only if each of the open half-spaces created by a linear hyperplane contains at least $t+1$ vectors of $D(P)$.
\end{lemma}
The proofs of the first three lemmas are easy to see. The fourth lemma, a generalized version of which is given as an exercise in \cite{GRU}, can be proved using Lemma \ref{bjection} and the fact \cite[Page 111]{JM} that any $t$-element subset $P'= \{p_{i_1},p_{i_2}, \ldots,$ $ p_{i_t}\} \subset P$ forms a $(t-1)$-dimensional face of the $d$-dimensional polytope formed by the convex hull of the points in $P$ if and only if the convex hull of the points in $D(P)\setminus \{g_{i_1},g_{i_2}, \ldots, g_{i_t}\}$ contains the origin.
\iffalse
\begin{figure}
\caption{Standard Gale Transform of $Q$}
\label{fig:stddia}
\end{figure}
\begin{proof}
Suppose $Q$ is a $t$-neighborly polytope with $d+3$ vertices. Let there be a partition of $D^*(Q)$ in two sets of size $d+3-k$ and $k$. Lemma \ref{bjection} implies that there exist a $(k-1)$-simplex and a $(d+2-k)$-simplex such that they form a crossing. This is a contradiction to the fact that every set of $k$-vertices of $Q$ forms a face of $Q$. This implies each open hemisphere of $S^1$ contains at least $k+1$ points of $D^*(Q)$.
To prove the other side, let us consider a standard Gale Transform $D^*(Q)$ which is a collection of $d+3$ unit length vector is $\mathbb{R}^2$. This implies that all the end points of the vectors lie on the unit circle centred at origin. Consider a linear partition (a partition of the vectors by a line passing through the origin) of the vectors in $D^*(Q)$by a line $l$, such that one of the partition contains $k+1$ vectors and another side contains $d+2-k$ vectors. We label the vectors as shown in the Figure \ref{fig:stddia}. Let us assume that $Q$ is not $k$-neighborly, i.e.,there exist a collection of $k$ vectors $\overline V=\{v_{i_1},v_{i_2}, \ldots, v_{i_k}\}$in $D^*(Q)$, such that the corresponding $k$-vertices in $Q$ do not span a face. This implies that convex hull of the points $D^*(Q)\setminus \overline V$ does not contain origin inside it. Let us consider following $3$ cases to prove that if $Q$ is not a $k$-neighborly polytope with $d+3$ vertices, then there exist a open hemisphere of $S^1$ containing at most $k$ vectors.
\begin{case}
Let $\overline V \subset \{v_{k+2}, v_{k+3}, \ldots, v_{d+2-k}\}$. This implies all the vectors in $ \{v_{k+2}, v_{k+3}, \ldots, v_{d+2-k}\} \setminus \overline V$ lie on the right side of the line $-v_{k+1}$ or on the left side of the line $-v_{1}$. This implies either a line along $v_1$ or a line along $v_{k+1}$ creates a open hemisphere of $S^1$ which contains at most $k$ vectors.
\end{case}
\begin{case}
Let $\overline V \subset \{v_1, v_2, \ldots, v_{k+1}\}$. This implies there exist exactly one vector ${v_i}=\{v_1, v_2, \ldots, v_{k+1}\} \setminus \overline V$, for $1\ leq i \leq k+1$, such that all the vectors in $\{v_{k+2}, v_{k+3}, \ldots, v_{d+2-k}\}$ lie on the one side of the line $l'$ drawn along $v_i$. Clearly,one open hemisphere created by $l'$ contains at most $k$ vectors.
\end{case}
\begin{case}
Let us assume that $\overline V \cap \{v_1, v_2, \ldots, v_{k+1}\} \ne \emptyset$ and $\overline V \cap \{v_{k+2}, v_{k+3}, \ldots, v_{d+2-k}\} \ne \emptyset$. Let $v_i \in \{v_1, v_2, \ldots, v_{k+1}\} $ is the minimum indexed vector such that $v_i \not \in \overline V$ and Let $v_j \in \{v_1, v_2, \ldots, v_{k+1}\} $ is the maximum indexed vector such that $v_j \not \in \overline V$. This implies all the vectors in $ \{v_{k+2}, v_{k+3}, \ldots, v_{d+2-k}\} \setminus \overline V$ lie on the right side of the line $-v_{j}$ or on the left side of the line $-v_{i}$. Hence, either a line along $v_i$ or a line along $v_{j}$ creates a open hemisphere of $S^1$ which contains at most $k$ vectors. \qed
\end{case}
\end{proof}
\fi
We obtain an {\textit{affine Gale diagram}} \cite[Page 112]{JM} of $P$ by considering a hyperplane $\bar{h}$ that is not parallel to any vector in $D(P)$ and not passing through the origin. Since the points in $P$ are in general position in $\mathbb{R}^d$, Lemma \ref{genposi} implies that $g_i \neq 0$ for every $i$. For each $i$ in the range $1\leq i \leq m$, we extend the vector $g_i \in D(P)$ either in the direction of $g_i$ or in its opposite direction until it cuts $\bar{h}$ at the point $\overline {g_i}$. We color $\overline {g_i}$ as \textit{white} if the projection is in the direction of $g_i$, and \textit{black} otherwise. The sequence of $m$ points $\overline{D(P)}$ $=$ $< \overline {g_1}, \overline {g_2}, \ldots, \overline {g_m}>$ in $\mathbb{R}^{m-d-2}$ along with the color of each point is defined as an affine Gale diagram of $P$. We define a {\textit {partition}} of the points in $\overline{D(P)}$ as two disjoint sets of points $\overline{D^+(P)}$ and $\overline{D^-(P)}$ contained in the opposite open half-spaces created by a hyperplane. Let us restate Lemma \ref{bjection} using these definitions and notations.
\begin{lemma}\cite{JM}
\label{bjection1}
Consider two integers $u$ and $v$ satisfying $1 \leq u, v \leq d-1$ and $u+v+2 = m$.
If the points in $P$ are in general
position in $\mathbb{R}^d$, there exists a
bijection between the crossing pairs of $u$- and $v$-simplices formed by some points
in $P$ and the partitions of the points in $\overline{D(P)}$ into $\overline{D^+(P)}$ and $\overline{D^-(P)}$
such that the number of white points in $\overline{D^+(P)}$ plus the number of black points in $\overline{D^-(P)}$ is $u+1$, and the number of white points in $\overline{D^-(P)}$ plus the number of black points in $\overline{D^+(P)}$ is $v+1$.
\end{lemma}
\section{Balanced Lines, $j$-Facets and $k$-Sets }
\label{TU1}
In this section, we describe the properties of balanced lines, $j$-facets and $k$-sets that are used in the proofs of all four theorems.
\subsection{Balanced Lines}
\noindent Consider a set $R$ containing $r$ points in general position in $\mathbb{R}^2$, such that $\ceil{{r}/{2}}$ points are colored white and $\floor{{r}/{2}}$ points are colored black. Let us state the definitions of a {\textit {balanced line}} and an {\textit {almost balanced line}} of $R$, and discuss their properties that are used in the proof of Theorem \ref{thm4}.\\
\noindent\textbf{Balanced Line:} \cite{PP} A balanced line $l$ of $R$ is a straight line that passes through a white and a black point in $R$ and the number of black points is equal to the number of white points in each of the open half-spaces created by $l$.\\
\noindent Note that a balanced line exists only when $r$ is even. The following lemma gives a non-trivial lower bound on the number of balanced lines of $R$.
\begin{lemma}\cite{PP}
\label{bline}
When $r$ is even, the number of balanced lines of $R$ is at least $r/2$.
\end{lemma}
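\noindent As a toy illustration of this notion (our own example, not needed in the sequel), let $r=4$ and let $R$ consist of the white points $(0,0)$, $(1,1)$ and the black points $(1,0)$, $(0,1)$. Each of the four lines through a white point and an adjacent black point of the square leaves the remaining white point and the remaining black point in the same open half-plane and is therefore balanced, while the two diagonals are monochromatic and do not qualify. Hence $R$ has exactly four balanced lines, consistent with the lower bound ${r}/{2}=2$ of Lemma \ref{bline}.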
\noindent We extend the definition of a balanced line to define an \textit{almost balanced directed line} of $R$.\\
\noindent\textbf{Almost Balanced Directed Line:} When $r$ is even, an almost balanced directed line $l$ of $R$ is a balanced line with direction assigned from the
black point to the white point it passes through. When $r$ is odd, an almost balanced directed line $l$ of $R$ is a directed straight line that passes through a white and a black point in $R$ such that the number of black points is equal to the number of white points in the positive open half-space created by $l$.\\
\noindent We obtain the following observation from Lemma \ref{bline}.
\begin{observation}
\label{obs:222}
The number of almost balanced directed lines of $R$ is at least $\floor{{r}/{2}}$.
\end{observation}
\subsection{$j$-Facets and $k$-Sets}
Consider a set $S$ containing $s$ points in general position in $\mathbb{R}^3$. Let us first state the definitions of a \textit{$j$-facet} and an \textit{$(\leq j)$-facet} of $S$ for some integer $j \geq 0$. We then state the definitions of a \textit{$k$-set} and an \textit{$(\leq k)$-set} of $S$ for some integer $k \geq 1$, and discuss their properties that are used in the proofs of Theorems \ref{thm1}, \ref{thm2} and \ref{thm3}. \\
\noindent{\textbf {$j$-facet:}} \cite{ANDR} A {\textit{$j$-facet}} of $S$ is an oriented $2$-dimensional hyperplane spanned by $3$ points in $S$ such that exactly $j$ points of $S$ lie in the positive open half-space created by it.
\\
\noindent Let us denote the number of $j$-facets of $S$ by $E_{j}$.
\\
\noindent{\textbf {$(\leq j)$-facet:}} \cite{ANDR} An $(\leq j)$-facet of $S$ is an oriented $2$-dimensional hyperplane $h$ spanned by $3$ points in $S$ such that at most $j$ points of $S$ lie in the positive open half-space created by it.\\
\noindent{\textbf {Almost Halving Triangle:}} An almost halving triangle of $S$ is a $j$-facet of $S$ such that $\left| j - (s-j-3)\right|$ is at most one.\\
\noindent When $s$ is odd, note that an almost halving triangle is a {\textit{halving triangle}} containing an equal number of points in each of the open half-spaces
created by it. The following lemma gives a non-trivial lower bound on the number of halving triangles of $S$. In fact, it is shown in \cite{MS} that this lemma is equivalent to Lemma \ref{bline}.
\begin{lemma}\cite{MS}
\label{HTR}
When $s$ is odd, the number of halving triangles of $S$ is at least $\floor{{s}/{2}}^{2}$.
\end{lemma}
\noindent We obtain the following observation from Lemma \ref{HTR}.
\begin{observation}
\label{obs:htr}
The number of almost halving triangles of $S$ is at least $\floor{{s}/{2}}^2$.
\end{observation}
\noindent We consider the following lemma which gives a non-trivial lower bound on the number of $(\leq j)$-facets of $S$.
\begin{lemma}\cite{ALCHO}
\label{ALCH}
For $j < {s}/{4}$, the number of $(\leq j)$-facets of $S$ is at least $4\dbinom{j+3}{3}$.
\end{lemma}
\noindent{\textbf {$k$-Set:}} \cite[page 265]{JM} A {\textit{$k$-set}} of $S$ is a non-empty subset of $S$ having size $k$ that can be separated from the rest of the points by a $2$-dimensional hyperplane that does not pass through any of the points in $S$.
\\
Let us denote the number of $k$-sets of $S$ by $e_{k}$.\\
\noindent{\textbf {$(\leq k)$-Set:}} \cite{ANDR} A subset $T \subseteq S$ is called an {\textit{$(\leq k)$-set}} if $1 \leq |T| \leq k$ and $T$ can be separated from $S \setminus T$ by a $2$-dimensional hyperplane that does not pass through any of the points in $S$.
\\
Andrzejak et al. \cite{ANDR} gave the following lemma about the relation between the $j$-facets and the $k$-sets of $S$.
\begin{lemma}\cite{ANDR}
\label{AND}
$e_1 = (E_0/2)+2$, $e_{s-1} = (E_{s-3}/2)+2$, and $e_{k} = (E_{k-1}+E_{k-2})/2+2$ for each $k$ in the range $2 \leq k \leq s-2$.
\end{lemma}
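\noindent As a quick sanity check of Lemma \ref{AND} (a small example of our own), let $s=4$ and let $S$ be the vertex set of a tetrahedron. Each of the four triangles spanned by three of the points, taken with its two orientations, contributes one $0$-facet and one $1$-facet, so $E_0=E_1=4$. Moreover, $e_1=4$ (each vertex can be cut off by a plane), $e_2=6$ (each of the $\dbinom{4}{2}$ pairs can be separated from the opposite pair) and $e_3=4$. These values agree with Lemma \ref{AND}: $e_1=E_0/2+2=4$, $e_2=(E_1+E_0)/2+2=6$ and $e_3=E_{s-3}/2+2=4$.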
\noindent We obtain the following observation from Observation $2$ and Lemma \ref{AND}.
\begin{observation}
\label{obs101}
There exist $\Omega(s^2)$ $k$-sets of $S$ such that $min\{k,s-k\}$ is at least $\ceil{(s-1)/2}$.
\end{observation}
\noindent We obtain the following observation from Lemma \ref{ALCH} and Lemma \ref{AND}.
\begin{observation}
\label{obs100}
The number of $\left(\leq \ceil{{s}/{4}}\right)$-sets of $S$ is $\Omega(s^3)$.
\end{observation}
\iffalse
We obtain the following observation as a consequence of Observation \ref{obs:22} and \ref{obs:222}.
\begin{observation}
\label{obs:3}
The number of almost balanced partitions in $S''$ is at least $\floor{n'/2}$.
\end{observation}
\fi
\section{Improved Lower Bound on $\overline {cr}_d(K_{2d}^d)$}
\label{imprvcross}
In this section, we first use Observation $1$ to improve the lower bound on $\overline {cr}_d(K_{2d}^d)$ to $\Omega (2^d \sqrt{d})$. We then present the proofs of Theorems \ref{thm1}, \ref{thm2} and \ref{thm3}. Note that $V=\{v_1, v_2, \ldots, v_{2d}\}$ denotes the set of points corresponding to the vertices in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ and $E$ denotes the set of $(d-1)$-simplices created by the corresponding hyperedges.\\
\noindent{\textbf{Proof of Theorem \ref{thm4}:}} Consider a set $V'=\{v_1, v_2, \ldots, v_{d+4}\} \subset V$, whose Gale transform $D(V')$ is a set of $d+4$ vectors in $\mathbb{R}^3$. As mentioned before, the vectors in $D(V')$ can be treated as points in $\mathbb{R}^3$. In order to apply the Ham-Sandwich theorem to obtain a proper linear separation of $D(V')$, we keep the origin in a set and all the points in $D(V')$ in another set. The Ham-Sandwich theorem implies that there exists a linear hyperplane $h$ such that each of the open half-spaces created by it contains at most $\floor{{(d+4)}/{2}}$ vectors of $D(V')$. Since the vectors in $D(V')$ are in general position in $\mathbb{R}^3$, note that at most two vectors in $D(V')$ can lie on $h$ and no two vectors in $D(V')$ lie on a line passing through the origin. As a result, it can be easily seen that we can slightly rotate $h$ to obtain a linear hyperplane $h'$ which creates a proper linear separation of $D(V')$. Consider a hyperplane parallel to $h'$ and project the vectors in $D(V')$ on this hyperplane to obtain an affine Gale diagram $\overline{D(V')}$. Note that $\overline{D(V')}$ contains $\floor{{(d+4)}/{2}}$ points of the same color and $\ceil{{(d+4)}/{2}}$ points of the other color in $\mathbb{R}^2$. Without loss of generality, let us assume that the majority color is white. Also, note that the points in $\overline{D(V')}$ are in general position in $\mathbb{R}^2$. \par
Observation $1$ implies that there exist at least $\floor{{(d+4)}/{2}}$ almost balanced directed lines of $\overline{D(V')}$. Consider an almost balanced directed line that passes through a white and a black point in $\overline{D(V')}$. Consider the middle point $p$ of the straight line segment connecting these two points. We rotate the almost balanced directed line slightly counter-clockwise around $p$ to obtain a partition of $\overline{D(V')}$ by a directed line that does not pass through any point of $\overline{D(V')}$. Note that this partition of $\overline{D(V')}$ corresponds to a unique linear separation of $D(V')$ having at least $\floor{{(d+2)}/{2}}$ vectors in each of the open half-spaces created by the corresponding linear hyperplane. \iffalse We can rotate each of the $\floor{{(d+4)}/{2}}$ almost balanced directed lines in the above mentioned way to obtain $\floor{{(d+4)}/{2}}$ such partitions in $\overline{D(V')}$. Note that each of these partitions is unique.\fi This implies that there exist at least $\floor{{(d+4)}/{2}}$ distinct linear separations of $D(V')$ such that each such linear separation contains at least $\floor{{(d+2)}/{2}}$ vectors in each of the open half-spaces created by the corresponding linear hyperplane. Lemma \ref{bjection} implies that there exists a unique crossing pair of $u$-simplex and $v$-simplex corresponding to each linear separation of $D(V')$, such that $u+v+2=d+4$ and $min\{u+1,v+1\} \geq \floor{{(d+2)}/{2}}$. It follows from Lemma \ref{extension} that each such crossing pair of $u$-simplex and $v$-simplex can be extended to obtain at least $\dbinom{d-4}{d-\floor{(d+2)/2}}= \Omega\left({2^d}/{\sqrt{d}}\right)$ crossing pairs of $(d-1)$-simplices formed by the hyperedges in $E$. Therefore, the total number of crossing pairs of hyperedges in a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is at least $\floor{{(d+4)}/{2}}\Omega\left({2^d}/{\sqrt{d}}\right)=\Omega \left(2^d \sqrt{d}\right)$. \qed
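\noindent For completeness, we record the standard binomial estimate used above and in the proofs below (a direct consequence of Stirling's formula):
\begin{equation*}
\dbinom{n}{\floor{n/2}+j}=\Theta\left(\frac{2^{n}}{\sqrt{n}}\right)
\end{equation*}
uniformly over $|j|\le C$ for any constant $C$ independent of $n$. In the proof above it is applied with $n=d-4$, the lower index $d-\floor{(d+2)/2}$ differing from $(d-4)/2$ by at most a constant; the same estimate justifies the $\Omega\left({2^d}/{\sqrt{d}}\right)$ bounds on the binomial coefficients appearing in the proofs of Theorems \ref{thm1}, \ref{thm2} and \ref{thm3}, where the shifts involve only the constants $t$ and $t'$.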
\noindent{\textbf{Proof of Theorem \ref{thm1}:}}
Since the points in $V$ are not in convex position in $\mathbb{R}^d$, we assume without loss of generality that $v_{d+2}$ can be expressed as a convex combination of the points in $V \setminus \{v_{d+2}\}$. Carath\'{e}odory's theorem implies that $v_{d+2}$ can be expressed as a convex combination of $d+1$ points in $V \setminus \{v_{d+2}\}$. Without loss of generality, we assume these $d+1$ points to be $\{v_1, v_2, \ldots, v_{d+1}\}$. \par
Consider the set of points $V'=\{v_1, v_2, \ldots, v_{d+5}\} \subset V$. Note that a Gale transform $D(V')$ of it is a collection of $d+5$ vectors in $\mathbb{R}^4$. Lemma \ref{convexity} implies that there exists a linear hyperplane $h$ that partitions $D(V')$ in such a way that one of the open half-spaces created by $h$ contains exactly one vector of $D(V')$. Since the points in $V'$ are in general position in $\mathbb{R}^d$, Lemma \ref{genposi} implies that at most three vectors of $D(V')$ lie on $h$. Since the vectors in $D(V')$ are in general position, it can be easily seen that we can slightly rotate $h$ to obtain a linear hyperplane $h'$ that partitions $D(V')$ such that one of the open half-spaces created by $h'$ contains $d+4$ vectors and the other one contains exactly one vector. \par
Consider a hyperplane parallel to $h'$. We project the vectors in $D(V')$ on this hyperplane to obtain an affine Gale diagram $\overline{D(V')}$. Note that $\overline{D(V')}$ contains $d+4$ points of the same color and one point of the other color in $\mathbb{R}^3$. Without loss of generality, let us assume that the majority color is white. Also, note that the points in $\overline{D(V')}$ are in general position in $\mathbb{R}^3$ since the corresponding vectors in the Gale transform $D(V')$ are in general position in $\mathbb{R}^4$.\par
Consider the set $W$ containing $d+4$ white points of $\overline{D(V')}$ in $\mathbb{R}^3$. Observation $3$ implies that there exist $\Omega(d^2)$ distinct $k$-sets of $W$ such that $min\{k,d+4-k\}$ is at least $\ceil{{(d+3)}/{2}}$. Each of these $k$-sets corresponds to a unique linear separation of $D(V')$ having at least $\ceil{{(d+3)}/{2}}$ vectors in each of the open half-spaces created by the corresponding linear hyperplane. Lemma \ref{bjection} implies that there exists a unique crossing pair of $u$-simplex and $v$-simplex corresponding to each of these linear separations of $D(V')$, such that $u+v+2=d+5$ and $min\{u+1,v+1\} \geq \ceil{({d+3})/{2}}$. It follows from Lemma \ref{extension} that each such crossing pair of $u$-simplex and $v$-simplex can be extended to obtain at least $\test{d-5}{d-\ceil{(d+3)/{2}}}$ crossing pairs of $(d-1)$-simplices formed by the hyperedges in $E$. Therefore, the total number of crossing pairs of hyperedges in such a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is at least $\Omega(d^2)\test{d-5}{d-\ceil{({d+3})/{2}}}=\Omega\left(2^d{d}^{3/2}\right)$. \qed
\noindent{\textbf{Proof of Theorem \ref{thm2}:}}
Consider the points in $V$ that form the vertex set of a $d$-dimensional $t$-neighborly polytope which is not $(t+1)$-neighborly. Lemma \ref{neigh} implies that there exists a linear hyperplane $\widetilde{h}$ such that one of the open half-spaces created by it contains $t+1$ vectors of $D(V)$. Without loss of generality, we denote the set of these $t+1$ vectors by $D^+(V)$. It implies that one of the closed half-spaces created by $\widetilde{h}$ contains $2d-t-1$ vectors of $D(V)$. If $d-2$ vectors of $D(V)$ do not lie on $\widetilde{h}$, we rotate $\widetilde{h}$ around the lower dimensional hyperplane spanned by the vectors on $\widetilde{h}$ till some new vector $g_i \in D(V)$ lies on it. We keep rotating $\widetilde{h}$ in this way till $d-2$ vectors of D(V) lie on it. Lemma \ref{neigh} implies that none of these $d-2$ vectors belongs to the set $D^+(V)$. After rotating $\widetilde{h}$ in the above mentioned way, we obtain a partition of $D(V)$ by a linear hyperplane $\widetilde{h'}$ such that one of the open half-spaces created by it contains $t+1$ vectors and the other one contains $d+1-t$ vectors. This implies that there exist a $t$-simplex and a $(d-t)$-simplex created by the vertices in $V$ such that they form a crossing. We choose any three vertices from the rest of the $d-2$ vertices in $V$ and add these to the $(d-t)$-simplex to form a $(d+3-t)$-simplex. Lemma \ref{extension} implies that the $t$-simplex forms a crossing with this $(d+3-t)$-simplex. This implies that the $t$-neighborly sub-polytope formed by the convex hull of the $d+5$ vertices corresponding to these two simplices is not $(t+1)$-neighborly.\\
\indent ~~Without loss of generality, let the vertex set of this sub-polytope be $V'=\{v_1, v_2,$ $\ldots, v_{d+5}\}$. Note that a Gale transform $D(V')$ of it is a collection of $d+5$ vectors in $\mathbb{R}^4$. Lemma \ref{neigh} implies that there exists a linear hyperplane $h$ such that one of the open half-spaces created by it contains exactly $t+1$ vectors of $D(V')$. As described in the proof of Theorem \ref{thm1}, it follows from Lemma \ref{genposi} that at most three vectors can lie on $h$. Since the vectors in $D(V')$ are in general position, we can slightly rotate $h$ to obtain a linear hyperplane $h'$ such that one of the open half-spaces created by $h'$ contains $t+1$ vectors and the other one contains $d+4-t$ vectors.\\
\indent Consider a hyperplane parallel to $h'$ and project the vectors in $D(V')$ on this hyperplane to obtain an affine Gale diagram $\overline{D(V')}$. Note that $\overline{D(V')}$ contains $d+4-t$ points of the same color and $t+1$ points of the other color in $\mathbb{R}^3$. Without loss of generality, let us assume that these $d+4-t$ points of the same color are white. Also, note that the points in $\overline{D(V')}$ are in general position in $\mathbb{R}^3$.\\
\indent Let us consider the set $W$ consisting of $d+4-t$ white points of $\overline{D(V')}$. Observation $3$ implies that there exist $\Omega(d^2)$ distinct $k$-sets of $W$ such that $min\{k,d+4-t-k\}$ is at least $\ceil{({d+3-t})/{2}}$. Each of these $k$-sets corresponds to a unique linear separation of $D(V')$ such that it contains at least $\ceil{{(d+3-t)}/{2}}$ vectors in each of the open half-spaces created by the corresponding linear hyperplane. Lemma \ref{bjection} implies that there exists a unique crossing pair of $u$-simplex and $v$-simplex corresponding to each of these linear separations of $D(V')$, such that $u+v+2=d+5$ and $min\{u+1,v+1\} \geq$ $ \ceil{{(d+3-t)}/{2}}$. It follows from Lemma \ref{extension} that each such crossing pair of $u$-simplex and $v$-simplex can be extended to obtain at least $\test{d-5}{d-\ceil{{(d+3-t)}/{2}}}$ crossing pairs of $(d-1)$-simplices formed by the hyperedges in $E$. Therefore, the total number of crossing pairs of hyperedges in such a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is at least $\Omega(d^2)\test{d-5}{d-\ceil{{(d+3-t)}/{2}}}=\Omega\left(2^d{d}^{3/2}\right)$. \qed
\noindent{\textbf{Proof of Theorem \ref{thm3}:}}
Since the points in $V$ form the vertex set of a $d$-dimensional $(\floor{d/2}-t')$-neighborly polytope, consider a sub-polytope of it formed by the convex hull of the vertex set $V'$ containing any d+5 points of $V$. Without loss of generality, let $V'$ be $\{v_1, v_2, \ldots, v_{d+5}\}$. Note that a Gale transform $D(V')$ of it is a collection of $d+5$ vectors in $\mathbb{R}^4$ and an affine Gale diagram $\overline{D(V')}$ of it is a collection of $d+5$ points in $\mathbb{R}^3$. In this proof, we ignore the colors of these points. However, note that the points in $\overline{D(V')}$ are in general position in $\mathbb{R}^3$.\par
Consider the set $\overline{D(V')}$. It follows from Observation $4$ that the number of $\left(\leq\ceil{{(d+5)}/{4}}\right)$-sets of $\overline{D(V')}$ is $\Omega(d^3)$. For each $k$ in the range $ 1\leq k \leq$ ${\scriptstyle \ceil{{(d+5)}/{4}}}$, a $k$-set of $\overline{D(V')}$ corresponds to a unique linear separation of $D(V')$. Lemma \ref{neigh} implies that each of these $\Omega(d^3)$ linear separations of $D(V')$ contains at least $\floor{d/2}-t'+1$ vectors in each of the open half-spaces created by the corresponding linear hyperplane. Lemma \ref{bjection} implies that there exists a unique crossing pair of $u$-simplex and $v$-simplex corresponding to each linear separation of $D(V')$, such that $u+v+2=d+5$ and $min\{u+1,v+1\} \geq \floor{d/2}-t'+1$. It follows from Lemma \ref{extension} that each such crossing pair of $u$-simplex and $v$-simplex can be extended to obtain at least $\dbinom{d-5}{d-\floor{d/2}+t'-1}= \Omega\left({2^d}/{\sqrt{d}}\right)$ crossing pairs of $(d-1)$-simplices formed by the hyperedges in $E$. Therefore, the total number of crossing pairs of hyperedges in such a $d$-dimensional rectilinear drawing of $K_{2d}^d$ is $\Omega(d^3)\Omega\left({2^d}/{\sqrt{d}}\right)=\Omega \left(2^d d^{5/2}\right)$. \qed
\iffalse
Consider a sequence of points $P=<p_1,p_2, \ldots, p_n>$ having $n>d+1$ points in general position in $\mathbb{R}^d$. Let the coordinate of point $p_i$ is $(a_1^i, a_2^i, \ldots, a_d^i)$ Consider a subsequence $P'=<p_{i_1}, p_{i_2}, \ldots p_{i_{d+1}}>$ which follows the same precedence order of $P$ among the points. Consider the following determinant $D_{p'}$.
\[
D_{p'} =
\begin{vmatrix}
1 & 1 & \ldots & 1\\
a_1^{i_1} & a_1^{i_2} & \ldots & a_1^{i_d} \\implies
\vdots & \vdots & \vdots & \vdots\\
a_d^{i_1} & a_d^{i_2} & \ldots & a_d^{i_d} \\
\end{vmatrix}
\]
Since the points are in general position, $D_{p'} \ne 0$. $P$ is said to be order type homogeneous, if for every $d+1$ length subsequence $P'$ of $P$, $D_{p'}$ has same sign.
A special type of $d$-dimensional neighborly polytope is the polytope which is combinatorially equivalent to a cyclic polytope. Strumfels \cite{STR} proved that, for even $d$, vertex set of any $d$-polytope which is combinatorially equivalent to a cyclic polytope can be arranged in a order type homogeneous sequence. As already mentioned the number of crossing pairs of hyperedges of $K_{2d}^d$ is $\Theta\left(4^d /\sqrt{d}\right)$ when all the $2d$-vertices of $K_{2d}^d$ are placed on a $d$-dimensional moment curve. This result along with the result of Strumfels prove that, for even d, the number of crossing pairs of hyperedges of $K_{2d}^d$ is $\Theta\left(4^d /\sqrt{d}\right)$, if the vertices of $K_{2d}^d$ are placed as the vertices of a $d$-dimensional polytope which is combinatorially equivalent to a cyclic polytope.
\fi
\iffalse
\section{Concluding Remarks}
\label{CR}
In this paper, we proved that the number of crossing pairs of hyperedges of $K_{2d}^d$ is $\Omega (2^d \sqrt{d})$, when the vertices of $K_{2d}^d$ are placed either in a) the position in $\mathbb{R}^d$, or b) as the vertices of $t$-neighborly $d$-polytope, for some constant $k$ independent of $d$. We also proved that the number of crossing pairs of hyperedges of $K_{2d}^d$ is $\Omega (2^d d^{3/2})$, if the vertices of $K_{2d}^d$ are placed as the vertices of $(\floor{d/2}-k)$-neighborly $d$-polytope for some constant $k$ independent of $d$. We already mentioned that $\overline {cr}_d(K_{2d}^d)= \Omega (2^d)$. Based on this results, we conjecture the following.
\begin{conjecture}
\label{cnj1}
$\overline {cr}_d(K_{2d}^d)= \Omega (2^d \sqrt{d})$.
\end{conjecture}
\fi
\iffalse
\begin{lemma} \cite{AS}
\label{coloring}
Consider a subset $V'=\{v_1, v_2, \ldots, v_{d+4}\} \subset V$ having $d+4$ points. The Gale transform $D(V')$ is a set of $d+4$ vectors in $\mathbb{R}^3$. There exist at least $\Omega(\log d)$ proper linear separations of $D(V')$.
\end{lemma}
\begin{proof}
$D(V')=\{g_1,g_2, \ldots, g_{d+4}\}$ is a collection of $d+4$ vectors in $\mathbb{R}^3$. As already mentioned, the vectors in $D(V')$ can be treated as points in $\mathbb{R}^3$. We use the Ham-sandwich theorem to obtain the proper linear separations of $D(V')$. Since the points in $D(V')$ are in $\mathbb{R}^3$, we use three colors $c_0$,$c_1$ and $c_2$. The coloring argument proceeds as following. \\
We color the origin with $c_0$ and all the points in $D(V')$ with $c_1$. The color of the origin remains unchanged throughout the process. By the Ham-sandwich theorem and rotating the separating hyperplane if needed (as mentioned before), we obtain a proper linear separation of $D(V')$ into $D_{11}(V')$ and $D_{12}(V')$ having $\floor{(d+4)/2}$ and $\ceil{(d+4)/2}$ vectors, respectively. We then color all the vectors in $D_{11}(V')$ with $c_1$ and all the vectors in $D_{12}(V')$ with $c_2$. The Ham-Sandwich theorem implies that we obtain a partition $D_{21}(V')$ and $D_{22}(V')$ of $D(V')$. Note that at least $\floor{(d+4)/4}$ points of $D(V')$ have stayed together in both the partitions. Next, we color these $\floor{(d+4)/4}$ points in $D(V')$ with $c_1$ and rest of the points with $c_2$ to obtain a new proper linear separation of $D(V')$ in two partitions $D_{31}(V')$ and $D_{32}(V')$. Note that $\floor{(d+4)/8}$ points of $D(V')$ have stayed together in all three proper linear separations. In particular, in the $k^{th}$ step we obtain a proper linear separation of $D(V')$ into $D_{k1}(V')$ and $D_{k2}(V')$. Note that the $k^{th}$ proper linear separation of $D(V')$ is distinct from all the $k-1$ proper linear separations obtained before. It is easy to observe that $\floor{(d+4)/2^k}$ points of $D(V')$ have stayed together in all the $k$ proper linear separations obtained so far. We then color these $\floor{(d+4)/2^k}$ points of $D(V')$ with $c_1$ and rest of the points with $c_2$ to obtain the ${k+1}^{th}$ proper linear separation of $D(V')$. We keep on coloring in this way till a pair of points stays together. This implies that we can keep on coloring for $\Omega(\log d)$ times without repeating any of the previous proper linear separations of $D(V')$.
\end{proof}
\begin{lemma}
$\overline {cr}_d(K_{2d}^d) =\Omega (2^d \sqrt{\log d})$.
\end{lemma}
\begin{proof}
Consider a subset $V'=\{v_1, v_2, \ldots, v_{d+4}\} \subset V$ having $d+4$ points. The Gale transform $D(V')$ is a set of $d+4$ vectors in $\mathbb{R}^3$. We apply the Ham-sandwich theorem to obtain $\Omega(\log d)$ proper linear separations of $D(V')$ as mentioned in Lemma \ref{coloring}. Let us denote the $i^{th}$ proper linear separation by $PL_i$. Consider an affine Gale diagram $\overline{D(V')}$. Let us ignore the color of the points in $\overline{D(V')}$. Note that the $i^{th}$ proper linear separation of $D(V')$ $PL_i$ corresponds to a $k_i$-set of $\overline{D(V')}$ for some $k_i$ satisfying $1 \leq k_i \leq d+3$. For each of this $k_i$-set, we can obtain distinct $(k_i-1)$-edge in $\overline{D(V')}$ as mentioned in the proof of Observation \ref{obs:22} and we denote the $(k_i-1)$-edge by $L_i^0$. Note that each $L_i^0$ is a directed line passing through the two points of $\overline{D(V')}$. We collect all the distinct endpoints of these lines. To span these $\Omega(\log d)$ lines, we need at least $\Omega(\sqrt{\log d})$ points of $\overline{D(V')}$. Let $\overline{D(V'')}= \{ \overline {g_{i_1}}, \overline {g_{i_2}}, \ldots, \overline {g_{i_{\sqrt{\log d}}}} \} \subset \overline{D(V')}$ denotes the set of these points. Consider the line $L_1^0$. Without the loss of generality, let us assume that it is a directed line from $\overline {g_{i_1}}$ to $\overline {g_{i_2}}$ . We rotate the line $L_1^0$ counter clockwise as mentioned in the proof of Observation \ref{obs:22} to obtain the line $L_1^0(0)$. Note that $L_1^0(0)$ does not contain any point of $\overline{D(V')}$. We get a partition of the points in $\overline{D(V')}$ by $L_1^0(0)$. This partition corresponds to a linear separation of vectors in $D(V')$ such that each of the open half-spaces contain at least $\floor{(d+1)/2}$ vectors. We now rotate $L_1^0$ counter-clockwise with respect to $\overline {g_{i_1}}$ until it meets the second point and let us denote this line by $L_1^1$. We then rotate the line $L_1^1$ counter clockwise as mentioned in the proof of Observation \ref{obs:22} to obtain the line $L_1^1(0)$. Note that $L_1^1(0)$ does not contain any point of $\overline{D(V')}$. We get a partition of the points in $\overline{D(V')}$ by $L_1^1(0)$. This partition corresponds to a linear separation of vectors in $D(V')$. We now rotate $L_1^1$ counter-clockwise with respect to $\overline {g_{i_1}}$ until it meets the next point and let us denote this line by $L_1^2$. In general, we rotate $L_1^j$ counter clockwise as mentioned in the proof of Observation \ref{obs:22} to obtain a partition of points in $\overline{D(V')}$ created by the line $L_1^j(0)$. This separation of points in $\overline{D(V')}$ corresponds to a linear separation of vectors in $\overline{D(V')}$. We then rotate $L_1^j$ with respect to ${g_{i_1}}$ until it meets the next point to obtain the line by $L_1^{j+1}$. Note that we can keep on rotating like this until all the points in $\overline{D(V')}\setminus \{g_{i_1}\}$ are covered. Note that we obtain $d+3$ distinct proper linear separations while rotating the line with respect to ${g_{i_1}}$. Let us denote the linear separation of $D(V')$ corresponds to the line $L_1^j(0)$ by $\{D_j^+(V'), D_j^-(V')\}$. Note that $\left|\left|D_j^+(V')\right|-\left|D_{j+1}^+(V')\right| \right| \leq 4$ for $j$ satisfying $0 \leq j \leq d+2$. Each of these proper linear separations corresponds to distinct crossing pair of $u$-simplex and $v$-simplex where $u+v=d+2$ and $1\leq u,v \leq d-1$. 
It follows from Lemma \ref{extension} that each such crossing pair of $u$-simplex and $v$-simplex can be extended to obtain at least $\dbinom{d-4}{d-u-1}$ crossing pairs of $(d-1)$-simplices formed by the hyperedges in $E$. The total number of crossing pairs of hyperedges obtained in this way is at least $\dbinom{d-4}{d-\floor{(d+1)/2}}+ \dbinom{d-4}{d-\floor{d+1/2}-4}+ \ldots+ \dbinom{d-4}{d-4}= \Omega(2^d)$. For each of the points in $\overline{D(V'')}$, we obtain $\Omega(2^d)$ crossing pairs of hyperedges in a similar way. Note that Observation \ref{obs:22} implies that any partition of points in $\overline{D(V')}$ obtained during the rotation with respect to $\overline {g_{i_j}} \in \overline{D(V'')}$ is distinct from the any other partition of points in $\overline{D(V')}$ obtained during the rotation with respect to $\overline {g_{i_k}} \in \overline{D(V'')}\setminus \{\overline{g_{i_j}}\}$. This implies that for each of the points in $\overline{D(V'')}$, we obtain $\Omega(2^d)$ distinct crossing pairs of hyperedges. This proves that $\overline {cr}_d(K_{2d}^d)= \Omega (2^d \sqrt{\log d})$.
\end{proof}
\fi
\end{document} |
\begin{document}
\title[ Bicomplex k-Fibonacci quaternions]{
\\ \\
Bicomplex k-Fibonacci quaternions}
\author[F\"{u}gen Torunbalc{\i} Ayd{\i}n]{F\"{u}gen Torunbalc{\i} Ayd{\i}n}
\address{
Yildiz Technical University\\
Faculty of Chemical and Metallurgical Engineering\\
Department of Mathematical Engineering\\
Davutpasa Campus, 34220\\
Esenler, Istanbul, TURKEY}
\email{ftorunay@gmail.com ; faydin@yildiz.edu.tr}
\thanks{*Corresponding Author}
\keywords{ Bicomplex number; k-Fibonacci number; bicomplex k-Fibonacci number; k-Fibonacci quaternion; bicomplex k-Fibonacci quaternion. }
\begin{abstract}
In this paper, bicomplex k-Fibonacci quaternions are defined. Also, some algebraic properties of bicomplex k-Fibonacci quaternions which are connected with bicomplex numbers and k-Fibonacci numbers are investigated. Furthermore, the Honsberger identity, the d'Ocagne's identity, Binet's formula, Cassini's identity, Catalan's identity for these quaternions are given.
\end{abstract}
\maketitle
\section{Introduction}
Many kinds of generalizations of the Fibonacci sequence have been presented in the literature \cite{9}. In 2007, the k-Fibonacci sequence $\{F_{k,n}\}_{n\in\mathbb{N}}$ was defined by Falcon and Plaza \cite{4} as follows
\begin{equation}\label{E1}
\left\{\begin{array}{rl}
{{F}_{k,0}}=&0,\,\,{{F}_{k,1}}=1 \\
{{F}_{k,n+1}}=&k\,{{F}_{k,n}}+\,{{F}_{k,n-1}},\,\ n\geq 1 \\
or \\
\{{F}_{k,n}\}_{n\in\mathbb{N}}=&\{\,0,\,1,\,k,\,k^2+1,\,k^3+2\,k,\,k^4+3\,k^2+1,...\}\\
\end{array}\right.
\end{equation} \\
Here, ${{k}}$ is a positive real number. Recently, Falcon and Plaza worked on k-Fibonacci numbers, sequences and matrices in \cite{5}, \cite{6}, \cite{7}, \cite{8}.\\
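For instance, $k=1$ recovers the classical Fibonacci sequence and $k=2$ the Pell sequence $\{0,\,1,\,2,\,5,\,12,\,29,\ldots\}$. The recurrence (\ref{E1}) is also easy to generate in a few lines of code (a minimal Python sketch; the function name \texttt{kfib} is our own choice and not taken from the cited works):
\begin{verbatim}
# k-Fibonacci numbers via F_{k,n+1} = k*F_{k,n} + F_{k,n-1},
# with F_{k,0} = 0 and F_{k,1} = 1.
def kfib(k, n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return a

print([kfib(2, n) for n in range(6)])   # [0, 1, 2, 5, 12, 29]
\end{verbatim}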
In 2010, Bolat and K\"{o}se \cite{2} gave properties of k-Fibonacci numbers.\\
In 2014, Catarino \cite{3} obtained some identities for k-Fibonacci numbers. \\
In 2015, Ramirez \cite{16} defined the k-Fibonacci and the k-Lucas quaternions as follows:
\begin{equation*}
\begin{aligned}
{D}_{k,n}=&\{{F}_{k,n}+i\,{F}_{k,n+1}+j\,\,{F}_{k,n+2}+k\,{F}_{k,n+3}\,| {F}_{k,n},\, n-th\,\, \\ & \text{ k-Fibonacci number} \},
\end{aligned}
\end{equation*} and
\begin{equation*}
\begin{aligned}
{P}_{k,n}=&\{{L}_{k,n}+i\,{L}_{k,n+1}+j\,\,{L}_{k,n+2}+k\,{L}_{k,n+3}\,| {L}_{k,n},\, n-th\,\, \\ & \text{ k-Lucas number} \}
\end{aligned}
\end{equation*}
where ${\,i,\,j,\,k\,}$ satisfy the multiplication rules
\begin{equation*}
{i}^{2}={j}^{2}={k}^{2}=-1\,,\ \ i\ j=-j\ i=k\,,\quad j\ k=-k \ j=i\,,\quad k\ i=-i\ k=j\,.
\end{equation*}
\par In 2015, Polatl{\i} gave Catalan’s identity for the k-Fibonacci quaternions \cite{13}.
\par In 2016, Polatl{\i}, K{\i}z{\i}late\c{s} and Kesim \cite{14} defined split k-Fibonacci and split k-Lucas quaternions $({M}_{k,n})$ and $({N}_{k,n})$, respectively, as follows:
\begin{equation*}
\begin{aligned}
{M}_{k,n}=&\{{F}_{k,n}+i\,{F}_{k,n+1}+j\,\,{F}_{k,n+2}+k\,{F}_{k,n+3}\,| {F}_{k,n},\, n-th\,\, \\ & \text{ k-Fibonacci number} \}
\end{aligned}
\end{equation*}
where ${\,i,\,j,\,k\,}$ are split quaternionic units which satisfy the multiplication rules
\begin{equation*}
{i}^{2}=-1,\,{j}^{2}={k}^{2}=i\ j\ k=1\,,\ \ i\,j=-j\, i=k,\,j\,k=-k\,j=-i,\,k\,i=-i\,k=j.
\end{equation*}
\par Bicomplex numbers were introduced for the first time by Corrado Segre in 1892 \cite{20}. In 1991, G. Baley Price presented the bicomplex numbers in his book on multicomplex spaces and functions \cite{15}. In recent years, fractal structures of these numbers have been studied \cite{17}, \cite{18}, \cite{19}, \cite{11}. In 2015, S{\i}dd{\i}ka {\"O}zkald{\i} Karaku{\c{s}} and Ferdag Kahraman Aksoyak worked on generalized bicomplex numbers and Lie groups \cite{10}. The set of bicomplex numbers can be expressed by the basis $\{1\,,i\,,j\,,i\,j\,\}$ as
\begin{equation}\label{E2}
\begin{aligned}
\mathbb{C}_2=\{\, q=q_1+iq_2+jq_3+ijq_4 \ | \ q_1,q_2,q_3,q_4\in \mathbb R\}
\end{aligned}
\end{equation}
where $i$,$j$ and $ij$ satisfy the conditions
\begin{equation*}
i^2=-1,\,\,\,j^2=-1,\,\,\,i\,j=j\,i.
\end{equation*} \,
\par The set of bicomplex numbers $\mathbb{C}_2$ is a real vector space with the addition and scalar multiplication operations. The vector space $\mathbb{C}_2$ equipped with the bicomplex product is a real associative algebra (see Table 1). Moreover, since the bicomplex product is commutative, $\mathbb{C}_2$ is a commutative algebra. Furthermore, three different conjugations can operate on bicomplex numbers \cite{17},\cite{18} as follows:
\begin{equation}\label{E3}
\begin{aligned}
q=q_1+i\,q_2+j\,q_3+i\,j\,q_4=(q_1+iq_2)+j\,(q_3+iq_4),\,\, q\in{\mathbb{C}_{2}}\\
{q_i}^*=q_1-iq_2+jq_3-ijq_4=(q_1-iq_2)+j\,(q_3-iq_4),\\
{q_j}^*=q_1+iq_2-jq_3-ijq_4=(q_1+iq_2)-j\,(q_3+iq_4),\\
{q_{ij}}^*=q_1-iq_2-jq_3+ijq_4=(q_1-iq_2)-j\,(q_3-iq_4).
\end{aligned}
\end{equation} \\
The norm of the bicomplex numbers is defined as
\begin{equation}\label{E4}
\begin{aligned}
{{N}_{q}}_{i}=\left\| {q\times{q}_{i}} \right\|=\sqrt[]{\left|{q}_{1}^2+{q}_{2}^2-{q}_{3}^2-{q}_{4}^2+2\,j\,({q}_{1}{q}_{3}+{q}_{2}{q}_{4})\right|}, \\
{{N}_{q}}_{j}=\left\| {q\times{q}_{j}} \right\|=\sqrt[]{\left|{q}_{1}^2-{q}_{2}^2+{q}_{3}^2-{q}_{4}^2+2\,i\,({q}_{1}{q}_{2}+{q}_{3}{q}_{4})\right|}, \\
{{N}_{q}}_{i\,j}=\left\| {q\times{q}_{i\,j}} \right\|=\sqrt[]{\left|{q}_{1}^2+{q}_{2}^2+{q}_{3}^2+{q}_{4}^2+2\,i\,j\,({q}_{1}{q}_{4}-{q}_{2}{q}_{3})\right|}.
\end{aligned}
\end{equation}
\begin{table}[ht]
\caption{Multiplication scheme of bicomplex numbers}
\centering
\begin{tabular}{c c c c c}
\\
\hline
$\times$ & 1 & i & j & i\,j \\ [0.5ex]
\hline
1 & 1 & i & j & i\,j \\
i & i & -1 & i\,j & -j \\
j & j & i\,j & -1 & -i \\
i\,j & i\,j & -j & -i & 1 \\
[1ex]
\hline
\end{tabular}
\label{table:nonlin}
\end{table}
\\
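The multiplication scheme of Table 1 can be encoded directly (a minimal Python sketch; the tuple representation $(q_1,q_2,q_3,q_4)$ with respect to the basis $\{1,\,i,\,j,\,i\,j\}$ and the name \texttt{bc\_mul} are our own conventions):
\begin{verbatim}
# Bicomplex product in the basis (1, i, j, ij), following Table 1:
# i^2 = -1, j^2 = -1, ij = ji, (ij)^2 = 1.
def bc_mul(p, q):
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 + p4*q4,   # coefficient of 1
            p1*q2 + p2*q1 - p3*q4 - p4*q3,   # coefficient of i
            p1*q3 + p3*q1 - p2*q4 - p4*q2,   # coefficient of j
            p1*q4 + p4*q1 + p2*q3 + p3*q2)   # coefficient of ij

i, j, ij = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert bc_mul(i, j) == bc_mul(j, i) == ij    # ij = ji
assert bc_mul(ij, ij) == (1, 0, 0, 0)        # (ij)^2 = 1
\end{verbatim}
In particular, the bicomplex product is commutative, in agreement with Table 1.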
In 2015, the bicomplex Fibonacci and Lucas numbers were defined by Nurkan and G\"{u}ven \cite{12} as follows
\begin{equation}\label{E5}
{BF}_{n}={F}_{n}+i\,{F}_{n+1}+j\,{F}_{n+2}+k\,{F}_{n+3}
\end{equation}
and
\begin{equation}\label{E6}
{BL}_{n}={L}_{n}+i\,{L}_{n+1}+j\,{L}_{n+2}+k\,{L}_{n+3}
\end{equation}
where the basis $\{1,\,i,\,j,\,k \}$ satisfy the conditions
\begin{equation*}
i^2=j^2=-1,\,\,k^2=1,\,i\,j=j\,i=k,\,j\,k=k\,j=-i,\,i\,k=k\,i=-j.
\end{equation*}
In 2018, the bicomplex Fibonacci quaternions were defined by Torunbalc{\i} Ayd{\i}n \cite{1} as follows
\begin{equation}\label{E7}
{Q_F}_{n}={F}_{n}+i\,{F}_{n+1}+j\,{F}_{n+2}+i\,j\,{F}_{n+3}
\end{equation}
where the basis $\{1,\,i,\,j,\,i\,j \}$ satisfy the conditions
\begin{equation*}
i^2=-1,\,\,j^2=-1,\,\,i\,j=j\,i,\,\,(i\,j)^2=1.
\end{equation*}
In this paper, the bicomplex k-Fibonacci quaternions and the bicomplex k-Lucas quaternions will be defined, respectively, as follows
\begin{equation*}
\begin{aligned}
{\mathbb{BC}}^{F_{k,n}}=\{\,{Q_F}_{k,n}=&{F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}\,| \, {F}_{k,n},\, nth\, \\ & \text{k-Fibonacci number} \}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
{\mathbb{BC}}^{L_{k,n}}=\{\,{{Q}_L}_{k,n}=&{L}_{k,n}+i\,{L}_{k,n+1}+j\,{L}_{k,n+2}+i\,j\,{L}_{k,n+3}\,| \, {L}_{k,n},\, nth\, \\ & \text{k-Lucas number} \}
\end{aligned}
\end{equation*}
where
\begin{equation*}
i^2=-1,\,\,j^2=-1,\,\,i\,j=j\,i,\,\,(i\,j)^2=1.
\end{equation*}
The aim of this work is to present, in a unified manner, a variety of algebraic properties of the bicomplex k-Fibonacci quaternions that rely on both the bicomplex numbers and the k-Fibonacci numbers. In particular, using three types of conjugation, all the properties established for bicomplex numbers and bicomplex k-Fibonacci numbers are also given for the bicomplex k-Fibonacci quaternions. In addition, Binet's formula, the Honsberger identity, the d'Ocagne's identity, Cassini's identity and Catalan's identity for these quaternions are given.
\section{The bicomplex k-Fibonacci numbers}
The bicomplex k-Fibonacci and k-Lucas numbers can be defined by means of the basis $\{1,\,i,\,j,\,i\,j\,\}$, where $i$,\,\,$j\,$ \,and\, $i\,j\,$ satisfy the conditions
\begin{equation*}
i^2=-1,\,\,j^2=-1,\,\,i\,j=j\,i,\,\,(i\,j)^2=1.
\end{equation*}
as follows
\begin{equation}\label{F1}
\begin{aligned}
{\mathbb{BC}F_{k,n}}=&({F}_{k,n}+i\,{F}_{k,n+1})+j\,({F}_{k,n+2}+i\,{F}_{k,n+3}) \\
=& {F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}
\end{aligned}
\end{equation}
and
\begin{equation}\label{F2}
\begin{aligned}
{\mathbb{BC}L_{k,n}}=&({L}_{k,n}+i\,{L}_{k,n+1})+j \,({L}_{k,n+2}+i\,{L}_{k,n+3}) \\
=& {L}_{k,n}+i\,{L}_{k,n+1}+j\,{L}_{k,n+2}+i\,j\,{L}_{k,n+3}.
\end{aligned}
\end{equation}
The addition and subtraction of two bicomplex k-Fibonacci numbers are defined by
\begin{equation}\label{F3}
\begin{array}{rl}
{\mathbb{BC}F}_{k,n}\pm{\mathbb{BC}F}_{k,m}=&({F}_{k,n}\pm{F}_{k,m})+i\,({F}_{k,n+1}\pm{F}_{k,m+1}) \\
&+j\,({F}_{k,n+2}\pm{F}_{k,m+2})+i\,j\,({F}_{k,n+3}\pm{F}_{k,m+3}) \\
\end{array}
\end{equation}
The multiplication of two bicomplex k-Fibonacci numbers is defined by
\begin{equation}\label{F4}
\begin{array}{rl}
{\mathbb{BC}F_{k,n}}\times\,{\mathbb{BC}F_{k,m}}=&({F}_{k,n}\,{F}_{k,m}-{F}_{k,n+1}\,{F}_{k,m+1} \\
&-{F}_{k,n+2}\,{F}_{k,m+2}+{F}_{k,n+3}\,{F}_{k,m+3}) \\
&+i\,({F}_{k,n}\,{F}_{k,m+1}+{F}_{k,n+1}\,{F}_{k,m} \\
&-{F}_{k,n+2}\,{F}_{k,m+3}-{F}_{k,n+3}\,{F}_{k,m+2}) \\
&+j\,({F}_{k,n}\,{F}_{k,m+2}+{F}_{k,n+2}\,{F}_{k,m} \\
&-{F}_{k,n+1}\,{F}_{k,m+3}-{F}_{k,n+3}\,{F}_{k,m+1}) \\
&+i\,j\,({F}_{k,n}\,{F}_{k,m+3}+{F}_{k,n+3}\,{F}_{k,m} \\
&+{F}_{k,n+1}\,{F}_{k,m+2}+{F}_{k,n+2}\,{F}_{k,m+1}) \\
=&{\mathbb{BC}F_{k,m}}\times\,{\mathbb{BC}F_{k,n}}\,.
\end{array}
\end{equation}
\section{The bicomplex k-Fibonacci quaternions}
In this section, firstly the bicomplex k-Fibonacci quaternions will be defined. The bicomplex k-Fibonacci quaternions are defined by using the bicomplex numbers and k-Fibonacci numbers as follows
\begin{equation}\label{G1}
\begin{aligned}
\mathbb{BC}^{F_{k,n}}=\{\,{{Q}_F}_{k,n}=&{F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}\,|\, {F}_{k,n},\, n-th \\
& \quad \quad \quad \quad \,\text{k-Fib. number} \, \}
\end{aligned}
\end{equation}
where
\begin{equation*}
i^2=-1,\,\,j^2=-1,\,\,i\,j=j\,i,\,\,(i\,j)^2=1.
\end{equation*}
Let ${Q_F}_{k,n}$ and ${Q_F}_{k,m}$ be two bicomplex k-Fibonacci quaternions such that
\begin{equation}\label{G2}
{Q_F}_{k,n}={F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}
\end{equation}
and
\begin{equation}\label{G3}
{Q_F}_{k,m}={F}_{k,m}+i\,{F}_{k,m+1}+j\,{F}_{k,m+2}+i\,j\,{F}_{k,m+3}.
\end{equation}
The addition and subtraction of two bicomplex k-Fibonacci quaternions are defined in the obvious way,
\begin{equation}\label{G4}
\begin{array}{rl}
{Q_F}_{k,n}\pm{Q_F}_{k,m}=&({F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}) \\ &\pm ({F}_{k,m}+i\,{F}_{k,m+1}+j\,{F}_{k,m+2}+i\,j\,{F}_{k,m+3}) \\
=&({F}_{k,n}\pm{F}_{k,m})+i\,({F}_{k,n+1}\pm{F}_{k,m+1}) \\
&+j\,({F}_{k,n+2}\pm{F}_{k,m+2})+ i\,j\,({F}_{k,n+3}\pm{F}_{k,m+3}).
\end{array}
\end{equation}
Multiplication of two bicomplex k-Fibonacci quaternions is defined by
\begin{equation}\label{G5}
\begin{array}{rl}
{Q_F}_{k,n}\times\,{Q_F}_{k,m}=&({F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}) \\&\,({F}_{k,m}+i\,{F}_{k,m+1}+j\,{F}_{k,m+2}+i\,j\,{F}_{k,m+3}) \\
=&[{F}_{k,n}\,{F}_{k,m}-{F}_{k,n+1}\,{F}_{k,m+1} \\
&\quad \quad-{F}_{k,n+2}\,{F}_{k,m+2}+{F}_{k,n+3}\,{F}_{k,m+3}] \\
&+i\,[{F}_{k,n}\,{F}_{k,m+1}+{F}_{k,n+1}\,{F}_{k,m} \\
&\quad \quad-{F}_{k,n+2}\,{F}_{k,m+3}-{F}_{k,n+3}\,{F}_{k,m+2}] \\
&+j\,[{F}_{k,n}\,{F}_{k,m+2}-{F}_{k,n+1}\,{F}_{k,m+3} \\
&\quad \quad+{F}_{k,n+2}\,{F}_{k,m}-{F}_{k,n+3}\,{F}_{k,m+1}] \\
&+i\,j\,[{F}_{k,n}\,{F}_{k,m+3}+{F}_{k,n+1}\,{F}_{k,m+2} \\
&\quad \quad+{F}_{k,n+2}\,{F}_{k,m+1}+{F}_{k,n+3}\,{F}_{k,m}] \\
=&{Q_F}_{k,m}\times\,{Q_F}_{k,n}\,.
\end{array}
\end{equation}
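The componentwise formula above can be checked numerically; the following minimal Python sketch (the helper names \texttt{kfib}, \texttt{bc\_mul} and \texttt{QF} are ours, not taken from the text) multiplies bicomplex k-Fibonacci quaternions and confirms that the product is commutative:
\begin{verbatim}
def kfib(k, n):                      # k-Fibonacci numbers
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return a

def bc_mul(p, q):                    # bicomplex product, basis (1, i, j, ij)
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 + p4*q4,
            p1*q2 + p2*q1 - p3*q4 - p4*q3,
            p1*q3 + p3*q1 - p2*q4 - p4*q2,
            p1*q4 + p4*q1 + p2*q3 + p3*q2)

def QF(k, n):                        # bicomplex k-Fibonacci quaternion
    return tuple(kfib(k, n + s) for s in range(4))

for k in (1, 2, 3):
    for n, m in ((1, 2), (2, 5), (4, 4)):
        assert bc_mul(QF(k, n), QF(k, m)) == bc_mul(QF(k, m), QF(k, n))
\end{verbatim}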
The scalar and the bicomplex vector parts of the bicomplex k-Fibonacci quaternion $({Q_F}_{k,n})$ are denoted by
\begin{equation}\label{G6}
{S}_{\,{{Q}_F}_{k,n}}={F}_{k,n} \ \ \text{and} \ \ \ {V}_{\,{{Q}_F}_{k,n}}=i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}.
\end{equation}
Thus, the bicomplex k-Fibonacci quaternion ${Q_F}_{k,n}$ is given by
\begin{equation*}
{Q_F}_{k,n}={S}_{\,{Q_F}_{k,n}}+{V}_{\,{Q_F}_{k,n}}.
\end{equation*}
Three kinds of conjugation can be defined for bicomplex numbers \cite{18}. Therefore, conjugation of the bicomplex k-Fibonacci quaternion is defined in three different ways as follows
\begin{equation} \label{G7}
\begin{aligned}
({Q_F}_{k,n})^{*_1}=&{F}_{k,n}-i\,{F}_{k,n+1}+j\,{F}_{k,n+2}-i\,j\,{F}_{k,n+3}, \\
\end{aligned}
\end{equation}
\begin{equation} \label{G8}
\begin{aligned}
({Q_F}_{k,n})^{*_2}=&{F}_{k,n}+i\,{F}_{k,n+1}-j\,{F}_{k,n+2}-i\,j\,{F}_{k,n+3}, \\
\end{aligned}
\end{equation}
\begin{equation} \label{G9}
\begin{aligned}
({Q_F}_{k,n})^{*_3}=&{F}_{k,n}-i\,{F}_{k,n+1}-j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}, \\
\end{aligned}
\end{equation}
\\
In the following theorem, some properties related to the conjugations of the bicomplex k-Fibonacci quaternions are given.
\begin{thm}
Let $({Q_F}_{k,n})^{*_1}$,\,$({Q_F}_{k,n})^{*_2}$ and \,$({Q_F}_{k,n})^{*_3}$,\, be three kinds of conjugation of the bicomplex k-Fibonacci quaternion. In this case, we can give the following relations:
\begin{equation}\label{G10}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_1}=F_{k,2n+1}-F_{k,2n+5}+2\,j\,{F}_{k,2n+3}, \\
\end{equation}
\begin{equation}\label{G11}
\begin{array}{rl}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_2}=&(F_{k,n}^2-F_{k,n+1}^2+F_{k,n+2}^2-F_{k,n+3}^2) \\
&+2\,i\,(2\,{F}_{k,n}\,{F}_{k,n+1}+k\,{F}_{k,2n+3}), \\
\end{array}
\end{equation}
\begin{equation}\label{G12}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_3}=(F_{k,2n+1}+{F}_{k,2n+5})+2\,i\,j\,(-1)^{n+1}\,k, \\
\end{equation}
\end{thm}
\begin{proof}
(\ref{G10}): By the Eq.(3.2) and (3.7) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_1}=&({F}_{k,n}^2+{{F}_{k,n+1}^2}-{F}_{k,n+2}^2-{F}_{k,n+3}^2) \\
&+2\,j\,(\,{F}_{k,n}\,{F}_{k,n+2}+{F}_{k,n+1}\,{F}_{k,n+3}) \\
=&{F}_{k,2n+1}-{F}_{k,2n+5}+2\,j\,{F}_{k,2n+3}.
\end{array}
\end{equation*}
where ${F}_{k,n}^2+{{F}_{k,n+1}^2}={F}_{k,2n+1}$ and ${F}_{k,n}\,{F}_{k,m-1}+{F}_{k,n+1}\,{F}_{k,m}={F}_{k,n+m}$ are used \cite{4}.\\
\\
(\ref{G11}): By the Eq.(3.2) and (3.8) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_2}=&({F}_{k,n}^2-{F}_{k,n+1}^2+{F}_{k,n+2}^2-{F}_{k,n+3}^2) \\
&+2\,i\,({F}_{k,n}\,{F}_{k,n+1}+{F}_{k,n+2}\,{F}_{k,n+3}) \\
=&({F}_{k,n}^2-{F}_{k,n+1}^2+{F}_{k,n+2}^2-{F}_{k,n+3}^2) \\
&+2\,i\,(2\,{F}_{k,n}\,{F}_{k,n+1}+k\,{F}_{k,2n+3}).
\end{array}
\end{equation*}
(\ref{G12}): By the Eq.(3.2) and (3.9) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}\,({Q_F}_{k,n})^{*_3}=&({F}_{k,n}^2+{F}_{k,n+1}^2+{F}_{k,n+2}^2+{F}_{k,n+3}^2) \\
&+2\,i\,j\,({F}_{k,n}\,{F}_{k,n+3}-{F}_{k,n+1}\,{F}_{k,n+2}) \\
=&{F}_{k,2n+1}+{F}_{k,2n+5}+2\,i\,j\,(-1)^{n+1}\,k.
\end{array}
\end{equation*}
where ${F}_{k,n}^2-{F}_{k,n-1}\,{F}_{k,n+1}=(-1)^{n+1}$ and ${F}_{k,n}\,{F}_{k,n+3}-{F}_{k,n+1}\,{F}_{k,n+2}=k\,(-1)^{n+1}$ are used \cite{4}. \\
\end{proof}
Therefore, the norm of the bicomplex k-Fibonacci quaternion ${\,{Q_F}_{k,n}}$ is defined in three different ways as follows
\begin{equation}\label{G13}
\begin{array}{rl}
{N}_{({Q_F}_{k,n})^{*_1}}=&\|{{Q}_F}_{k,n}\times\,({{Q}_F}_{k,n})^{*_1}\|^2 \\
=&|({F}_{k,n}^2+{F}_{k,n+1}^2)-({F}_{k,n+2}^2+{F}_{k,n+3}^2) \\
&+2\,j\,(\,{F}_{k,n}\,{F}_{k,n+2}+{F}_{k,n+1}\,{F}_{k,n+3})\,| \\
=&|{F}_{k,2n+1}-{F}_{k,2n+5}+2\,j\,{F}_{k,2n+3}|,
\end{array}
\end{equation}
\begin{equation}\label{G14}
{\begin{array}{rl}
{N}_{({Q_F}_{k,n})^{*_2}}=&\|{{Q}_F}_{k,n}\times\,({{Q}_F}_{k,n})^{*_2}\|^2 \\
=&|({F}_{k,n}^2-{F}_{k,n+1}^2)+({F}_{k,n+2}^2-{F}_{k,n+3}^2) \\
&+2\,i\,(2\,{F}_{k,n}\,{F}_{k,n+1}+k\,{F}_{k,2n+3})\,|,
\end{array}}
\end{equation}
\begin{equation}\label{G15}
{\begin{array}{rl}
{N}_{({Q_F}_{k,n})^{*_3}}=&\|{{Q}_F}_{k,n}\times\,({{Q}_F}_{k,n})^{*_3}\|^2 \\
=&|({F}_{k,n}^2+{F}_{k,n+1}^2)+({F}_{k,n+2}^2+{F}_{k,n+3}^2) \\
&+2\,i\,j\,({F}_{k,n}\,{F}_{k,n+3}-{F}_{k,n+1}\,{F}_{k,n+2})\,| \\
=&|{F}_{k,2n+1}+{F}_{k,2n+5}+2\,i\,j\,(-1)^{n+1}\,k\,|.
\end{array}}
\end{equation}
In the following theorem, some properties related to the bicomplex k-Fibonacci quaternions are given.
\begin{thm}
Let ${\,{Q_F}_{k,n}}$ be the bicomplex k-Fibonacci quaternion. In this case, we can give the following relations:
\begin{equation}\label{G16}
{Q_F}_{k,n}+k\,{Q_F}_{k,n+1}={Q_F}_{k,n+2},
\end{equation} \,
\begin{equation}\label{G17}
\begin{array}{rl}
({Q_F}_{k,n})^2=& ({F}_{k,n}^2-{F}_{k,n+1}^2-{F}_{k,n+2}^2+{F}_{k,n+3}^2) \\
&+2\,i\,({F}_{k,n}\,{F}_{k,n+1}-{F}_{k,n+2}\,{F}_{k,n+3}) \\
&+2\,j\,({F}_{k,n}\,{F}_{k,n+2}-{F}_{k,n+1}\,{F}_{k,n+3}) \\
\quad \quad \quad \quad &+2\,i\,j\,({F}_{k,n}\,{F}_{k,n+3}+{F}_{k,n+1}\,{F}_{k,n+2}), \\
\end{array}
\end{equation} \,
\begin{equation}\label{G18}
\begin{array}{rl}
({Q_F}_{k,n})^2+({Q_F}_{k,n+1})^2=& {Q_F}_{k,2n+1}+(k\,{F}_{k,2n+6}-{F}_{k,2n+3}) \\
&+i\,({F}_{k,2n+2}-2\,{F}_{k,2n+6}) \\
&+j\,({F}_{k,2n+3}-2\,{F}_{k,2n+5})+i\,j\,(3\,F_{k,2n+4}),
\end{array}
\end{equation}
\begin{equation}\label{G19}
\begin{array}{rl}
({Q_F}_{k,n+1})^2-({Q_F}_{k,n-1})^2=& k\,[\,{Q_F}_{k,2n}-{F}_{k,2n+2}+k\,{F}_{k,2n+5} \\
&+i\,({F}_{k,2n+1}-2\,{F}_{k,2n+5}) \\
&+j\,(-{F}_{k,2n+2}-2\,k\,{F}_{k,2n+3}) \\
&+i\,j\,(\,3\,{F}_{k,2n+3})\,],
\end{array}
\end{equation} \,
\begin{equation}\label{G20}
\begin{array}{rl}
{Q_F}_{k,n}-i\,{Q_F}_{k,n+1}+j\,{{Q}_F}_{k,n+2}-i\,j\,{{Q}_F}_{k,n+3}&={F}_{k,n}+{F}_{k,n+2}-{F}_{k,n+4} \\
&-{F}_{k,n+6}+2\,j\,{L}_{k,n+3}.
\end{array}
\end{equation}
\begin{equation}\label{G21}
\begin{array}{rl}
{Q_F}_{k,n}-i\,{Q_F}_{k,n+1}-j\,{{Q}_F}_{k,n+2}-i\,j\,{{Q}_F}_{k,n+3}&={F}_{k,n}+{F}_{k,n+2}+{F}_{k,n+4} \\
&-{F}_{k,n+6}+2\,i\,{F}_{k,n+5} \\
&+2\,j\,{F}_{k,n+4}-2\,i\,j\,{F}_{k,n+3} \\
=&{L}_{k,n+1}-k\,{F}_{k,n+5}+2\,i\,{F}_{k,n+5} \\
&+2\,j\,{F}_{k,n+4}-2\,i\,j\,{F}_{k,n+3}.
\end{array}
\end{equation}
\end{thm}
\begin{proof}
(\ref{G16}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}+k\,{Q_F}_{k,n+1}=&({F}_{k,n}+k\,{F}_{k,n+1})+i\,({F}_{k,n+1}+k\,{F}_{k,n+2}) \\
&+j\,({F}_{k,n+2}+k\,F_{k,n+3})+i\,j\,({F}_{k,n+3}+k\,{F}_{k,n+4}) \\
=&{F}_{k,n+2}+i\,{F}_{k,n+3}+j\,{F}_{k,n+4}+i\,j\,{F}_{k,n+5} \\
=&{Q_F}_{k,n+2}.
\end{array}
\end{equation*}
(\ref{G17}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
({Q_F}_{k,n})^2=& ({F}_{k,n}^2-{F}_{k,n+1}^2-{F}_{k,n+2}^2+{F}_{k,n+3}^2) \\
&+2\,i\,({F}_{k,n}\,{F}_{k,n+1}-{F}_{k,n+2}\,{F}_{k,n+3}) \\
&+2\,j\,({F}_{k,n}\,{F}_{k,n+2}-{F}_{k,n+1}\,{F}_{k,n+3}) \\
\quad \quad \quad \quad &+2\,i\,j\,({F}_{k,n}\,{F}_{k,n+3}+{F}_{k,n+1}\,{F}_{k,n+2}). \\
\end{array}
\end{equation*}
(\ref{G18}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
({Q_F}_{k,n})^2+({{Q}_F}_{k,n+1})^2=&({F}_{k,2n+1}-{F}_{k,2n+3}-{F}_{k,2n+5}+{F}_{k,2n+7}) \\
&+2\,i\,({F}_{k,2n+2}-{F}_{k,2n+6}) \\
&+2\,j\,({F}_{k,2n+3}-{F}_{k,2n+5}) \\
&+2\,i\,j\,(2\,{F}_{k,2n+4}) \\
=& ({F}_{k,2n+1}+i\,{F}_{k,2n+2}+j\,{F}_{k,2n+3}+i\,j\,{F}_{k,2n+4}) \\
&-{F}_{k,2n+3}-{F}_{k,2n+5}+{F}_{k,2n+7} \\
&+i\,({F}_{k,2n+2}-2\,{F}_{k,2n+6})+j\,({F}_{k,2n+3}-2\,{F}_{k,2n+5}) \\
&+i\,j\,(3\,F_{k,2n+4}) \\
=&{Q_F}_{k,2n+1}+(k\,{F}_{k,2n+6}-{F}_{k,2n+3}) \\
&+i\,({F}_{k,2n+2}-2\,{F}_{k,2n+6})+j\,({F}_{k,2n+3}-2\,{F}_{k,2n+5}) \\
&+i\,j\,(3\,F_{k,2n+4}).
\end{array}
\end{equation*}
(\ref{G19}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
({Q_F}_{k,n+1})^2-({{Q}_F}_{k,n-1})^2=& [\,({F}_{k,n+1}^2-{F}_{k,n-1}^2)-({F}_{k,n+2}^2-{F}_{k,n}^2) \\
&\quad \quad-({F}_{k,n+3}^2-{F}_{k,n+1}^2)+({F}_{k,n+4}^2-{F}_{k,n+2}^2)\,] \\
&+2\,i\,[\,({F}_{k,n+1}\,{F}_{k,n+2}-{F}_{k,n-1}\,{F}_{k,n}) \\
&\quad \quad-({F}_{k,n+3}\,{F}_{k,n+4}-{F}_{k,n+1}\,{F}_{k,n+2})\,] \\
&+2\,j\,[\,({F}_{k,n+1}\,{F}_{k,n+3}-{F}_{k,n-1}\,{F}_{k,n+1}) \\
&\quad \quad-({F}_{k,n+2}\,{F}_{k,n+4}-{F}_{k,n}\,{F}_{k,n+2})\,] \\
&+2\,i\,j\,[\,({F}_{k,n+1}\,{F}_{k,n+4}-{F}_{k,n-1}\,{F}_{k,n+2}) \\
&\quad \quad+({F}_{k,n+2}\,{F}_{k,n+3}-{F}_{k,n}\,{F}_{k,n+1})\,] \\
=& k\,(\,{F}_{k,2n}-{F}_{k,2n+2}-{F}_{k,2n+4}+{F}_{k,2n+6}) \\
&+2\,i\,(k\,{F}_{k,2n+1}-k\,{F}_{k,2n+5})+2\,j\,(-k^2\,{F}_{k,2n+3}) \\
&+2\,i\,j\,(2\,k\,{F}_{k,2n+3}) \\
=& k\,[\,{Q_F}_{k,2n}-{F}_{k,2n+2}+k\,{F}_{k,2n+5} \\
&+i\,({F}_{k,2n+1}-2\,{F}_{k,2n+5}) \\
&+j\,(-{F}_{k,2n+2}-2\,k\,{F}_{k,2n+3}) \\
&+i\,j\,(\,3\,{F}_{k,2n+3})\,] .
\end{array}
\end{equation*}
(\ref{G21}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}-i\,{Q_F}_{k,n+1}-j\,{Q_F}_{k,n+2}-i\,j\,{Q_F}_{k,n+3}&=({F}_{k,n}+{F}_{k,n+2}-k\,{F}_{k,n+5}) \\
&+2\,i\,{F}_{k,n+5}+2\,j\,{F}_{k,n+4} \\
&-2\,i\,j\,{F}_{k,n+3} \\
=&{L}_{k,n+1}-k\,{F}_{k,n+5}+2\,i\,{F}_{k,n+5} \\
&+2\,j\,{F}_{k,n+4}-2\,i\,j\,{F}_{k,n+3}.\,
\end{array}
\end{equation*}
(\ref{G20}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}-i\,{Q_F}_{k,n+1}+j\,{Q_F}_{k,n+2}-i\,j\,{Q_F}_{k,n+3}&=({F}_{k,n}+{F}_{k,n+2}-{F}_{k,n+4} \\
&-{F}_{k,n+6})+2\,j\,({F}_{k,n+2}+{F}_{k,n+4}) \\
=&{L}_{k,n+1}-({F}_{k,n+4}+{F}_{k,n+6}) \\
&+2\,j\,{L}_{k,n+3}.\,
\end{array}
\end{equation*}
\end{proof}
\begin{thm} Let ${Q_F}_{k,n}=({F}_{k,n},{F}_{k,n+1},{F}_{k,n+2},{F}_{k,n+3})$ and \\
${Q_L}_{k,n}=({L}_{k,n},{L}_{k,n+1},{L}_{k,n+2},{L}_{k,n+3})$ be the bicomplex k-Fibonacci quaternion and the bicomplex k-Lucas quaternion respectively. The following relations are satisfied
\begin{equation}\label{G22}
\begin{aligned}
{Q_F}_{k,n+1}+{Q_F}_{k,n-1}={L}_{k,n}+i{L}_{k,n+1}+j\,{L}_{k,n+2}+ij\,{L}_{k,n+3}={Q_L}_{k,n}, \\
\end{aligned}
\end{equation}
\begin{equation}\label{G23}
\begin{aligned}
{Q_F}_{k,n+2}-{Q_F}_{k,n-2}=k\,({L}_{k,n}+i\,{L}_{k,n+1}+j\,{L}_{k,n+2}+i\,j\,{L}_{k,n+3})=k\,{Q_L}_{k,n}.
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
\begin{equation*}
\begin{aligned}
\begin{array}{rl}
{Q_F}_{k,n+1}+{Q_F}_{k,n-1}=&({F}_{k,n+1}+{F}_{k,n-1})+i\,({F}_{k,n+2}+{F}_{k,n}) \\
&+j\,({F}_{k,n+3}+{F}_{k,n+1})\\&+i\,j\,({F}_{k,n+4}+{F}_{k,n+2}) \\
=&({L}_{k,n}+i\,{L}_{k,n+1}+j\,{L}_{k,n+2}+i\,j\,{L}_{k,n+3}) \\
=&{Q_L}_{k,n}.
\end{array}
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n+2}-{Q_F}_{k,n-2}=&({F}_{k,n+2}-{F}_{k,n-2})+i\,({F}_{k,n+3}-{F}_{k,n-1}) \\
&+j\,({F}_{k,n+4}-{F}_{k,n})+i\,j\,({F}_{k,n+5}-{F}_{k,n+1}) \\
=&k\,({L}_{k,n}+i\,{L}_{k,n+1}+j\,{L}_{k,n+2}+i\,j\,{L}_{k,n+3}) \\
=&k\,{Q_L}_{k,n}.
\end{array}
\end{equation*}
\end{proof}
\begin{thm}
For $n,m\ge0$ the Honsberger identity for the bicomplex k-Fibonacci quaternions ${Q_F}_{k,n}$ and ${Q_F}_{k,m}$ is given by
\begin{equation}\label{G24}
\begin{array}{rl}
{Q_F}_{k,n}\,{Q_F}_{k,m}+{Q_F}_{k,n+1}\,{Q_F}_{k,m+1}=& {Q_F}_{k,n+m+1}-{F}_{k,n+m+3}+k\,{F}_{k,n+m+6}\\
&+i\,({F}_{k,n+m+2}-2\,{F}_{k,n+m+6}) \\
&+j\,({F}_{k,n+m+3}-2\,{F}_{k,n+m+5})\\
&+i\,j\,(3\,{F}_{k,n+m+4}).
\end{array}
\end{equation}
\end{thm}
\begin{proof}
(\ref{G24}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}\,{Q_F}_{k,m}+{Q_F}_{k,n+1}\,{Q_F}_{k,m+1}=&[\,({F}_{k,n}{F}_{k,m}+{F}_{k,n+1}{F}_{k,m+1}) \\
&\quad \quad-({F}_{k,n+1}{F}_{k,m+1}+{F}_{k,n+2}{F}_{k,m+2}) \\
&\quad \quad-({F}_{k,n+2}{F}_{k,m+2}+{F}_{k,n+3}{F}_{k,m+3}) \\
&\quad \quad+({F}_{k,n+3}{F}_{k,m+3}+{F}_{k,n+4}{F}_{k,m+4})\,] \\
&+i\,[\,({F}_{k,n}{F}_{k,m+1}+{F}_{k,n+1}{F}_{k,m+2}) \\
&\quad \quad+({F}_{k,n+1}{F}_{k,m}+{F}_{k,n+2}{F}_{k,m+1}) \\
&\quad \quad-({F}_{k,n+2}{F}_{k,m+3}+{F}_{k,n+3}{F}_{k,m+4}) \\
&\quad \quad-({F}_{k,n+3}{F}_{k,m+2}+{F}_{k,n+4}{F}_{k,m+3})\,] \\
&+j\,[\,({F}_{k,n}{F}_{k,m+2}+{F}_{k,n+1}{F}_{k,m+3}) \\
&\quad \quad+({F}_{k,n+2}{F}_{k,m}+{F}_{k,n+3}{F}_{k,m+1}) \\
&\quad \quad-({F}_{k,n+1}{F}_{k,m+3}+{F}_{k,n+2}{F}_{k,m+4}) \\
&\quad \quad-({F}_{k,n+3}{F}_{k,m+1}+{F}_{k,n+4}{F}_{k,m+2})\,] \\
&+i\,j\,[\,({F}_{k,n}{F}_{k,m+3}+{F}_{k,n+1}{F}_{k,m+4}) \\
&\quad \quad+({F}_{k,n+1}{F}_{k,m+2}+{F}_{k,n+2}{F}_{k,m+3}) \\
&\quad \quad+({F}_{k,n+2}{F}_{k,m+1}+{F}_{k,n+3}{F}_{k,m+2}) \\
&\quad \quad+({F}_{k,n+3}{F}_{k,m}+{F}_{k,n+4}{F}_{k,m+1})\,] \\
=&({F}_{k,n+m+1}-{F}_{k,n+m+3}-{F}_{k,n+m+5} \\
&+{F}_{k,n+m+7})+2\,i\,({F}_{k,n+m+2}-{F}_{k,n+m+6}) \\
&+2\,j\,({F}_{k,n+m+3}-{F}_{k,n+m+5}) \\
&+i\,j\,(4\,{F}_{k,n+m+4}) \\
=&({F}_{k,n+m+1}+\,i\,{F}_{k,n+m+2}+j\,{F}_{k,n+m+3} \\
&+\,i\,j\,{F}_{k,n+m+4})-{F}_{k,n+m+3}+k\,{F}_{k,n+m+6} \\
&+i\,({F}_{k,n+m+2}-2\,{F}_{k,n+m+6}) \\
&+j\,({F}_{k,n+m+3}-2\,{F}_{k,n+m+5}) \\
&+i\,j\,(3\,{F}_{k,n+m+4}) \\
=&{Q_F}_{k,n+m+1}-{F}_{k,n+m+3}+k\,{F}_{k,n+m+6} \\
&+i\,({F}_{k,n+m+2}-2\,{F}_{k,n+m+6}) \\
&+j\,({F}_{k,n+m+3}-2\,{F}_{k,n+m+5})\\
&+i\,j\,(3\,{F}_{k,n+m+4}).\,
\end{array}
\end{equation*}
where the identity ${F}_{k,n}{F}_{k,m}+{F}_{k,n+1}{F}_{k,m+1}={F}_{k,n+m+1}$ was used \cite{4}.
\end{proof}
\begin{thm}
For ${n,m\ge0}$ the d'Ocagne's identity for the bicomplex k-Fibonacci quaternions ${Q_F}_{k,n}$ and ${Q_F}_{k,m}$ is given by
\begin{equation}\label{G25}
\begin{array}{rl}
{Q_F}_{k,n}\,{Q_F}_{k,m+1}-{Q_F}_{k,n+1}\,{Q_F}_{k,m}&=(-1)^m\,{F}_{k,n-m}\,[\,2\,(k^2+2)\,j+(k^3+2\,k)\,i\,j\,].
\end{array}
\end{equation}
\end{thm}
\begin{proof}
(\ref{G25}): By the Eq.(3.2) we get,
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}\,{Q_F}_{k,m+1}-{Q_F}_{k,n+1}\,{Q_F}_{k,m}=&[\,({F}_{k,n}{F}_{k,m+1}-{F}_{k,n+1}{F}_{k,m}) \\
&\quad-({F}_{k,n+1}{F}_{k,m+2}-{F}_{k,n+2}{F}_{k,m+1}) \\
&\quad-({F}_{k,n+2}{F}_{k,m+3}-{F}_{k,n+3}{F}_{k,m+2}) \\
&\quad+({F}_{k,n+3}{F}_{k,m+4}-{F}_{k,n+4}{F}_{k,m+3})\,] \\
&+\,i\,[\,({F}_{k,n}{F}_{k,m+2}-{F}_{k,n+1}{F}_{k,m+1})\\
&\quad+({F}_{k,n+1}{F}_{k,m+1}-{F}_{k,n+2}{F}_{k,m}) \\
&\quad-({F}_{k,n+2}{F}_{k,m+4}-{F}_{k,n+3}{F}_{k,m+3})\\
&\quad-({F}_{k,n+3}{F}_{k,m+3}-{F}_{k,n+4}{F}_{k,m+2})\,] \\
&+\,j\,[\,({F}_{k,n}{F}_{k,m+3}-{F}_{k,n+1}{F}_{k,m+2})\\
&\quad+({F}_{k,n+2}{F}_{k,m+1}-{F}_{k,n+3}{F}_{k,m}) \\
&\quad-({F}_{k,n+1}{F}_{k,m+4}-{F}_{k,n+2}{F}_{k,m+3})\\
&\quad-({F}_{k,n+3}{F}_{k,m+2}-{F}_{k,n+4}{F}_{k,m+1})\,] \\
&+\,i\,j\,[\,({F}_{k,n}{F}_{k,m+4}-{F}_{k,n+1}{F}_{k,m+3})\\
&\quad+({F}_{k,n+1}{F}_{k,m+3}-{F}_{k,n+2}{F}_{k,m+2}) \\
&\quad+({F}_{k,n+2}{F}_{k,m+2}-{F}_{k,n+3}{F}_{k,m+1})\\
&\quad+({F}_{k,n+3}{F}_{k,m+1}-{F}_{k,n+4}{F}_{k,m})\,] \\
=&(-1)^m\,{F}_{k,n-m}\,[\,2\,(k^2+2)\,j \\
&\quad+(k^3+2\,k)\,i\,j\,].\,
\end{array}
\end{equation*}
where the identity ${F}_{k,m}{F}_{k,n+1}-{F}_{k,m+1}{F}_{k,n}=(-1)^{n}{F}_{k,m-n}$ is used \cite{5}.
\end{proof}
\begin{thm}
Let ${Q_F}_{k,n}$ be the bicomplex k-Fibonacci quaternion. Then, we have the following identities
\begin{equation}\label{G26}
\sum\limits_{s=1}^{n}{\,{Q_F}_{k,s}}=\frac{1}{k}\,(\,{Q_F}_{k,n+1}+{Q_F}_{k,n}-{Q_F}_{k,1}-{Q_F}_{k,0}\,),
\end{equation}
\begin{equation}\label{G27}
\sum\limits_{s=1}^{n}{\,{Q_F}_{k,2s-1}}=\frac{1}{k}({Q_F}_{k,2n}-{Q_F}_{k,0}),
\end{equation}
\begin{equation}\label{G28}
\sum\limits_{s=1}^{n}{\,{Q_F}_{k,2s}}=\frac{1}{k}({Q_F}_{k,2n+1}-{Q_F}_{k,1}).
\end{equation}
\end{thm}
\begin{proof}
(\ref{G26}) Since $\sum\nolimits_{i=1}^{n}{{F}_{k,i}}=\frac{1}{k}({F}_{k,n+1}+{F}_{k,n}-1)$ \cite{4}, \, we get
\begin{equation*}
\begin{aligned}
& \sum\limits_{s=1}^{n}\,{{Q_F}_{k,s}}=\sum\limits_{s=1}^{n}{{F}_{k,s}}+i\,\sum\limits_{s=1}^{n}{{F}_{k,s+1}}+j\,\sum\limits_{s=1}^{n}{{F}_{k,s+2}}+i\,j\,\sum\limits_{s=1}^{n}{{F}_{k,s+3}} \\
& \quad =\frac{1}{k}\{({F}_{k,n+1}+{F}_{k,n}-1)+i\,({F}_{k,n+2}+{F}_{k,n+1}-k-1) \\
& \quad \quad +j\,[\,{F}_{k,n+3}+{F}_{k,n+2}-(k^2+1)-k\,] \\
& \quad \quad +i\,j\,[\,{F}_{k,n+4}+{F}_{k,n+3}-(k^3+2\,k)-(k^2+1)\,]\} \\
& \quad =\frac{1}{k}\{({F}_{k,n+1}+i\,{F}_{k,n+2}+j\,{F}_{k,n+3}+i\,j\,{F}_{k,n+4}) \\
& \quad \quad +({F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3}) \\
& \quad \quad -[\,1+(k+1)\,i+(k^2+k+1)\,j+(k^3+k^2+2\,k+1)\,i\,j\,]\}\\
& \quad =\frac{1}{k}\,(\,{Q_F}_{k,n+1}+{Q_F}_{k,n}-{Q_F}_{k,1}-{Q_F}_{k,0}\,).
\end{aligned}
\end{equation*}
(\ref{G27}): Using $\sum\limits_{i=1}^{n}{F}_{k,2i-1}=\frac{1}{k}{F}_{k,2n}$ \, and \, $\sum\limits_{i=1}^{n}{F}_{k,2i}=\frac{1}{k}({F}_{k,2n+1}-1)$ \, \cite{4}, \, we get
\begin{equation*}
\begin{array}{rl}
\sum\limits_{s=1}^{n}{\,{{Q}_F}_{k,2s-1}}= &\frac{1}{k}\,\{\,({F}_{k,2n})+i\,({F}_{k,2n+1}-1)+j\,({F}_{k,2n+2}-k)
\\ &+i\,j\,({F}_{k,2n+3}-(k^2+1)\,)\}
\\=&\frac{1}{k}\,\{[\,{F}_{k,2n}+i\,{F}_{k,2n+1}+j\,{F}_{k,2n+2}+i\,j\,{F}_{k,2n+3}]
\\ &-\,(\,0+i+k\,j+(k^2+1)\,i\,j\,)\}
\\=&\frac{1}{k}\,\{{Q_F}_{k,2n}-[{F}_{k,0}+i\,{F}_{k,1}+\,j\,{F}_{k,2}+\,i\,j\,{F}_{k,3}]\}
\\=&\frac{1}{k}\,(\,{Q_F}_{k,2n}-\,{Q_F}_{k,0}\,)\,.
\end{array}
\end{equation*}
(\ref{G28}): Using $\sum\limits_{i=1}^{n}{F}_{k,2i}=\frac{1}{k}({F}_{k,2n+1}-1)$ \, \cite{4}, \, we obtain
\begin{equation*}
\begin{array}{rl}
\sum\limits_{s=1}^{n}{\,{{Q}_F}_{k,2s}}= & \frac{1}{k}\,\{({F}_{k,2n+1}-1)+i\,({F}_{k,2n+2}-k)+j\,({F}_{k,2n+3}-(k^2+1))
\\& \quad \quad +i\,j\,({F}_{k,2n+4}-(k^3+2\,k))\}
\\=&\frac{1}{k}\,\{({F}_{k,2n+1}+i\,{F}_{k,2n+2}+j\,{F}_{k,2n+3}+i\,j\,{F}_{k,2n+4})
\\& \quad \quad -\,(1+k\,i\,+(k^2+1)\,j\,+(k^3+2\,k)\,i\,j\,)\,\}
\\=&\frac{1}{k}\,\{\,{Q_F}_{k,2n+1}-({F}_{k,1}+i\,{F}_{k,2}+j\,{F}_{k,3}+i\,j\,{F}_{k,4})\}
\\=&\frac{1}{k}\,(\,{Q_F}_{k,2n+1}-{Q_F}_{k,1}\,)\,.
\end{array}
\end{equation*}
\end{proof}
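Formula (\ref{G26}) is easy to confirm numerically; the following minimal Python sketch (the helper name \texttt{kfib} is ours) checks $k\sum_{s=1}^{n}{Q_F}_{k,s}={Q_F}_{k,n+1}+{Q_F}_{k,n}-{Q_F}_{k,1}-{Q_F}_{k,0}$ componentwise, and (\ref{G27}), (\ref{G28}) can be checked in the same way:
\begin{verbatim}
def kfib(k, n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return a

for k in (1, 2, 3):
    for n in (1, 2, 5):
        lhs = [k * sum(kfib(k, s + t) for s in range(1, n + 1))
               for t in range(4)]
        rhs = [kfib(k, n + 1 + t) + kfib(k, n + t)
               - kfib(k, 1 + t) - kfib(k, t) for t in range(4)]
        assert lhs == rhs
\end{verbatim}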
\begin{thm} \textbf{Binet's Formula}
\\
Let ${\,{Q_F}_{k,n}}$ be the bicomplex k-Fibonacci quaternion. For $n\ge 1$, Binet's formula for these quaternions is as follows:
\begin{equation}\label{G29}
{Q_F}_{k,n}=\frac{1}{\alpha -\beta }\left( \,\hat{\alpha }\,{\alpha}^{n}-\hat{\beta \,}\,{\beta }^{n} \right)\,
\end{equation}
where
\begin{equation*}
\begin{array}{l}
\hat{\alpha }=1+i\,{\alpha}+j\,{\alpha}^2+i\,j\,{\alpha}^3,\,\,\,\,\, \alpha=\frac{k+\sqrt{k^2+4}}{2},
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{l}
\hat{\beta }=1+i\,{\beta}+j\,{\beta}^2+i\,j\,{\beta}^3,\,\,\,\,\, \beta=\frac{k-\sqrt{k^2+4}}{2},
\end{array}
\end{equation*}
$\alpha +\beta =k\ ,\ \ \alpha -\beta =\sqrt{k^2+4}\,,\ \ \alpha \beta =-1$.\\
\end{thm}
\begin{proof}
By using the Binet formula for k-Fibonacci number \cite{5}, we obtain
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n}=&{F}_{k,n}+i\,{F}_{k,n+1}+j\,{F}_{k,n+2}+i\,j\,{F}_{k,n+3} \\
\\
=&\frac{\alpha^n -\beta^n }{\sqrt{k^2+4}}+i\,(\frac{\alpha^{n+1} -\beta^{n+1}\,}{\sqrt{k^2+4}})+j\,(\frac{\alpha^{n+2} -\beta^{n+2}\,}{\sqrt{k^2+4}})+i\,j\,(\frac{\alpha^{n+3} -\beta^{n+3}\,}{\sqrt{k^2+4}}) \\
\\
=&\frac{\alpha^{n}\,(1+i\,\alpha+j\,\alpha^2+i\,j\,\alpha^3)-\beta^{n}\,(1+i\,\beta+j\,\beta^2+i\,j\,\beta^3)\,}{\sqrt{k^2+4}} \\
\\
=&\frac{1}{\sqrt{k^2+4}}( \,\hat{\alpha }\,\alpha^{n}-\hat{\beta}\,\beta^{n}).
\end{array}
\end{equation*}
where $\hat{\alpha }=\,1+i\,\alpha +j\,\alpha^2+i\,j\,\alpha^3, \, \, \hat{\beta }=\,\,1+i\,\beta +j\,\beta^2+i\,j\,\beta^3$.
\end{proof}
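Binet's formula can be checked against the recurrence in a few lines (a minimal Python sketch; since floating-point arithmetic is used, the comparison employs a small tolerance):
\begin{verbatim}
from math import sqrt

def kfib(k, n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return a

def QF_binet(k, n):
    r = sqrt(k * k + 4)
    al, be = (k + r) / 2, (k - r) / 2
    # the s-th component of alpha-hat (beta-hat) is alpha^s (beta^s)
    return tuple((al**(n + s) - be**(n + s)) / r for s in range(4))

for k in (1, 2, 3):
    for n in (1, 4, 8):
        exact = [kfib(k, n + s) for s in range(4)]
        approx = QF_binet(k, n)
        assert all(abs(a - b) < 1e-6 for a, b in zip(exact, approx))
\end{verbatim}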
\begin{thm} \textbf{Cassini's Identity}
\\
Let ${\,{Q_F}_{k,n}}$ be the bicomplex k-Fibonacci quaternion. For $n\ge 1$, Cassini's identity for ${\,{Q_F}_{k,n}}$ is as follows:
\begin{equation}\label{G30}
{Q_F}_{k,n-1}\,{Q_F}_{k,n+1}-({Q_F}_{k,n})^2=(-1)^{n}\,[\,2(k^2+2)\,j+\,(k^3+2\,k)\,i\,j\,].
\end{equation}
\end{thm}
\begin{proof}
(\ref{G30}): By using (3.2) we get
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n-1}\,{Q_F}_{k,n+1}-(\,{{Q}_F}_{k,n})^2=&\,[\,({F}_{k,n-1}{F}_{k,n+1}-{F}_{k,n}^2) \\
&\quad-({F}_{k,n}{F}_{k,n+2}-{F}_{k,n+1}^2) \\
&\quad-({F}_{k,n+1}{F}_{k,n+3}-{F}_{k,n+2}^2) \\
&\quad+({F}_{k,n+2}{F}_{k,n+4}-{F}_{k,n+3}^2)\,] \\
&+i\,[\,({F}_{k,n-1}{F}_{k,n+2}-{F}_{k,n}{F}_{k,n+1}) \\
&\quad-({F}_{k,n+1}{F}_{k,n+4}-{F}_{k,n+2}{F}_{k,n+3})\,] \\
&+j\,[\,({F}_{k,n-1}{F}_{k,n+3}-{F}_{k,n}{F}_{k,n+2}) \\
&\quad-({F}_{k,n}{F}_{k,n+4}-{F}_{k,n+1}{F}_{k,n+3})\\
&\quad+({F}_{k,n+1}{F}_{k,n+1}-{F}_{k,n+2}{F}_{k,n}) \\
&\quad-({F}_{k,n+2}{F}_{k,n+2}-{F}_{k,n+3}{F}_{k,n+1})\,] \\
&+i\,j\,({F}_{k,n-1}{F}_{k,n+4}-{F}_{k,n}{F}_{k,n+3}) \\
=&(-1)^{n}\,[\,2(k^2+2)\,j+\,(k^3+2\,k)\,i\,j\,].
\end{array}
\end{equation*}
where the identity of the k-Fibonacci numbers ${F}_{k,n-1}{F}_{k,n+1}-{F}_{k,n}^2=(-1)^{n}$ is used \cite{5}. Furthermore, the identities
\begin{equation*}
\left\{\begin{array}{l}
{F}_{k,n-1}{F}_{k,n+2}-{F}_{k,n}{F}_{k,n+1}=k\,(-1)^n\,\\
{F}_{k,n-1}{F}_{k,n+3}-{F}_{k,n}{F}_{k,n+2}=(k^2+1)\,(-1)^n\,,\\
{F}_{k,n+1}{F}_{k,n+3}-{F}_{k,n}{F}_{k,n+4}=(k^2+1)\,(-1)^n\,, \\
{F}_{k,n-1}{F}_{k,n+4}-{F}_{k,n}{F}_{k,n+3}=(k^3+2\,k)\,(-1)^n\,
\end{array}\right.
\end{equation*}
are used.
\end{proof}
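Cassini's identity (\ref{G30}) can also be verified numerically (a minimal Python sketch; the helper names are ours):
\begin{verbatim}
def kfib(k, n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, k * b + a
    return a

def bc_mul(p, q):                    # bicomplex product, basis (1, i, j, ij)
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return (p1*q1 - p2*q2 - p3*q3 + p4*q4,
            p1*q2 + p2*q1 - p3*q4 - p4*q3,
            p1*q3 + p3*q1 - p2*q4 - p4*q2,
            p1*q4 + p4*q1 + p2*q3 + p3*q2)

def QF(k, n):
    return tuple(kfib(k, n + s) for s in range(4))

for k in (1, 2, 3):
    for n in (1, 2, 6):
        lhs = tuple(a - b for a, b in
                    zip(bc_mul(QF(k, n - 1), QF(k, n + 1)),
                        bc_mul(QF(k, n), QF(k, n))))
        sign = (-1) ** n
        assert lhs == (0, 0, sign * 2 * (k * k + 2), sign * (k**3 + 2 * k))
\end{verbatim}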
\begin{thm} \textbf{Catalan's Identity}
\\
Let ${Q_F}_{k,n+r}$ be the bicomplex k-Fibonacci quaternion. For $n\ge 1$, Catalan's identity for ${Q_F}_{k,n+r}$ is as follows:
\begin{equation}\label{G31}
{Q_F}_{k,n+r-1}\,{Q_F}_{k,n+r+1}-({Q_F}_{k,n+r})^2=(-1)^{n+r}\,[\,2(k^2+2)\,j+\,(k^3+2\,k)\,i\,j\,].
\end{equation}
\end{thm}
\begin{proof}
(\ref{G31}): By using (3.2) we get
\begin{equation*}
\begin{array}{rl}
{Q_F}_{k,n+r-1}\,{Q_F}_{k,n+r+1}-({Q_F}_{k,n+r})^2&=({F}_{k,n+r-1}{F}_{k,n+r+1}-{F}_{k,n+r}^2) \\
&\quad-({F}_{k,n+r}{F}_{k,n+r+2}-{F}_{k,n+r+1}^2)\\
&\quad-({F}_{k,n+r+1}{F}_{k,n+r+3}-{F}_{k,n+r+2}^2) \\
&\quad+({F}_{k,n+r+2}{F}_{k,n+r+4}-{F}_{k,n+r+3}^2) \\
&+i\,[\,({F}_{k,n+r-1}{F}_{k,n+r+2})-({F}_{k,n+r}{F}_{k,n+r+1}) \\
&\quad-({F}_{k,n+r+1}{F}_{k,n+r+4}-{F}_{k,n+r+2}{F}_{k,n+r+3})\,]\\
&+j\,[\,({F}_{k,n+r-1}{F}_{k,n+r+3}-{F}_{k,n+r}{F}_{k,n+r+2})\\
&\quad-({F}_{k,n+r}{F}_{k,n+r+4}-{F}_{k,n+r+1}{F}_{k,n+r+3}) \\
&\quad+({F}_{k,n+r+1}{F}_{k,n+r+1}-{F}_{k,n+r+2}{F}_{k,n+r}) \\
&\quad-({F}_{k,n+r+2}{F}_{k,n+r+2}-{F}_{k,n+r+3}{F}_{k,n+r+1})\,]\\
&+i\,j\,[\,({F}_{k,n+r-1}{F}_{k,n+r+4}-{F}_{k,n+r}{F}_{k,n+r+3})\\
&\quad+({F}_{k,n+r}{F}_{k,n+r+3}-{F}_{k,n+r+1}{F}_{k,n+r+2}) \\
&\quad+({F}_{k,n+r+2}{F}_{k,n+r+1}-{F}_{k,n+r+3}{F}_{k,n+r})\,] \\
&=(-1)^{n+r}\,[\,2\,(k^2+2)\,j+\,(k^3+2\,k)\,i\,j\,] \\
\end{array}
\end{equation*}
where the identity of the k-Fibonacci numbers ${F}_{k,n+r-1}{F}_{k,n+r+1}-{F}_{k,n+r}^2=(-1)^{n+r}$ is used \cite{5}. Furthermore, the identities
\begin{equation*}
\left\{\begin{array}{l}
{F}_{k,n+r-1}\,{F}_{k,n+r+2}-{F}_{k,n+r}\,{F}_{k,n+r+1}=(-1)^{n+r}\,k,\\
{F}_{k,n+r-1}\,{F}_{k,n+r+3}-{F}_{k,n+r}\,{F}_{k,n+r+2}=(-1)^{n+r}\,(k^2+1),\\
{F}_{k,n+r+1}\,{F}_{k,n+r+3}-{F}_{k,n+r}\,{F}_{k,n+r+4}=(-1)^{n+r}\,(k^2+1),\\
{F}_{k,n+r-1}\,{F}_{k,n+r+4}-{F}_{k,n+r}\,{F}_{k,n+r+3}=(-1)^{n+r}\,(k^3+2\,k),\\
{F}_{k,n+r}\,{F}_{k,n+r+3}-{F}_{k,n+r+1}\,{F}_{k,n+r+2}=(-1)^{n+r+1}\,k,\\
{F}_{k,n+r+2}\,{F}_{k,n+r+1}-{F}_{k,n+r+3}\,{F}_{k,n+r}=(-1)^{n+r}\,k\,.
\end{array}\right.
\end{equation*}
are used.
\end{proof}
\section{Conclusion}
In this paper, a number of new results on bicomplex k-Fibonacci quaternions were derived.
This study fills a gap in the literature by introducing the bicomplex k-Fibonacci quaternions, built on the definitions of the bicomplex numbers \cite{18}, the k-Fibonacci quaternions \cite{16} and the bicomplex Fibonacci quaternions \cite{1}.
\end{document} |
\begin{document}
\title{On Type I Blowups of Suitable Weak Solutions to Navier-Stokes Equations near Boundary.
}
\begin{abstract} In this note, boundary Type I blowups of suitable weak solutions to the Navier-Stokes equations are discussed. In particular, it has been shown that, under certain assumptions, the existence of non-trivial mild bounded ancient solutions in half space leads to the existence of suitable weak solutions with Type I blowup on the boundary.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
The aim of the note is to study conditions under which solutions to the Navier-Stokes equations undergo Type I blowups on the boundary.
Consider the classical Navier-Stokes equations \begin{equation}
\label{NSE}
\partial_tv+v\cdot\nabla v-\Delta v=-\nabla q,\qquad {\rm div}\,v=0
\end{equation}
in the space time domain $Q^+=B^+\times ]-1,0[$, where $B^+=B^+(1)$ and $B^+(r)=\{x\in \mathbb R^3: |x|<r,\,x_3>0\}$ is a half ball of radius $r$ centred at the origin $x=0$. It is supposed that $v$ satisfies the homogeneous Dirichlet boundary condition
\begin{equation}\label{bc}
v(x',0,t)=0
\end{equation}
for all $|x'|<1$ and $-1<t<0$. Here, $x'=(x_1,x_2)$ so that $x=(x',x_3)$ and $z=(x,t)$.
Our goal is to understand how to determine whether or not the origin $z=0$ is a singular point of the velocity field $v$. We say that $z=0$ is a regular point of $v$ if there exists $r>0$ such that $v\in L_\infty(Q^+(r))$ where $Q^+(r)=B^+(r)\times ]-r^2,0[$. It is known, see \cite{S3} and \cite{S2009},
that the velocity $v$ is H\"older continuous in a parabolic vicinity of $z=0$ if $z=0$ is a regular point. However, further smoothness even in spatial variables does not follow in the regular case, see \cite{Kang2005} and \cite{SerSve10} for counter-examples.
The class of solutions to be studied is as follows.
\begin{definition}\label{sws} A pair of functions $v$ and $q$ is called a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary if and only if the following conditions hold:
\begin{equation}\label{class}
v\in L_{2, \infty}(Q^+)\cap L_2(-1,0;W^1_2(Q^+)),\qquad q\in L_\frac 32(Q^+);
\end{equation}
$v$ and $q$ satisfy equations (\ref{NSE}) and boundary condition (\ref{bc});
$$\int\limits_{B^+}\varphi(x,t)|v(x,t)|^2dx+2\int\limits_{-1}^t\int\limits_{B^+}\varphi|\nabla v|^2dxdt\leq $$
\begin{equation}\label{energy_inequality}
\leq \int\limits_{-1}^t\int\limits_{B^+}(|v|^2(\partial_t\varphi+\Delta\varphi)+v\cdot\nabla v(|v|^2+2q))dxdt \end{equation}
for all non-negative functions $\varphi\in C^\infty_0(B\times]-1,1[)$ such that $\varphi|_{x_3=0}=0$.
\end{definition}
In what follows, some statements will be expressed in terms of scale invariant quantities (invariant with respect to the Navier-Stokes scaling: $\lambda v(\lambda x,\lambda^2 t)$ and $\lambda ^2q(\lambda x,\lambda^2 t)$). Here, they are:
$$A(v,r)=\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B^+(r)}|v(x,t)|^2dx, \qquad
E(v,r)=\frac 1r\int\limits_{Q^+(r)}|\nabla v|^2dz,$$$$
C(v,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|v|^3dz,\qquad
D_0(q,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|q-[q]_{B^+(r)}|^\frac 32dz, $$
$$D_2(q,r)=\frac 1{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt,$$
where
$$[f]_\Omega=\frac 1{|\Omega|}\int\limits_\Omega fdx.$$
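For instance, with the rescaled functions $v_\lambda(x,t)=\lambda v(\lambda x,\lambda^2t)$ and $q_\lambda(x,t)=\lambda^2q(\lambda x,\lambda^2t)$, the change of variables $y=\lambda x$, $s=\lambda^2t$ gives
$$A(v_\lambda,r)=\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B^+(r)}\lambda^2|v(\lambda x,\lambda^2t)|^2dx=\sup\limits_{-(\lambda r)^2<s<0}\frac 1{\lambda r}\int\limits_{B^+(\lambda r)}|v(y,s)|^2dy=A(v,\lambda r),$$
and, in the same way, $E(v_\lambda,r)=E(v,\lambda r)$, $C(v_\lambda,r)=C(v,\lambda r)$ and $D_0(q_\lambda,r)=D_0(q,\lambda r)$.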
We also introduce the following values:
$$g(v):=\min\{\sup\limits_{0<R<1}A(v,R), \sup\limits_{0<R<1}C(v,R),\sup\limits_{0<R<1}E(v,R)\}
$$
and, given $r>0$,
$$G_r(v,q):=$$$$\max\{\sup\limits_{0<R<r}A(v,R), \sup\limits_{0<R<r}C(v,R),\sup\limits_{0<R<r}E(v,R),\sup\limits_{0<R<r}D_0(q,R)\}.
$$
The relationship between $g(v)$ and $G_1(v,q)$ is described in the following proposition.
\begin{pro}\label{boundednesstheorem}
Let $v$ and $q$ be a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary. Then, $G_1$ is bounded if and only if $g$ is bounded.
\end{pro}
If $z=0$ is a singular point of $v$ and $g(v)<\infty$, then $z=0$ is called a Type I singular point or a Type I blowup point.
Now, we are ready to state the main results of the paper.
\begin{definition}
\label{leas}
A function $u:Q^+_-:=\mathbb R^3_+\times]-\infty,0[\,\to\mathbb R^3$ is called a local energy ancient solution if there exists a function $p:Q_-^+\to\mathbb R$ such that the pair $u$ and $p$ is a suitable weak solution in $Q^+(R)$ for any $R>0$. Here, $\mathbb R^3_+:=\{x\in \mathbb R^3:\,x_3>0\}$.
\end{definition}
\begin{theorem}\label{local energy ancient solution} There exists a suitable weak solution $v$ and $q$ with Type I blowup at the origin $z=0$ if and only if there exists a non-trivial local energy ancient solution $u$ such that $u$ and the corresponding pressure $p$ have the following properties:
\begin{equation}\label{scalequatities}
G_\infty(u,p):=\max\{\sup\limits_{0<R<\infty} A(u,R), \sup\limits_{0<R<\infty}E(u,R),$$$$\sup\limits_{0<R<\infty}C(u,R),\sup\limits_{0<R<\infty}D_0(p,R)\}<\infty
\end{equation}
and
\begin{equation}\label{singularity}
\inf\limits_{0<a<1}C(u,a)\geq \varepsilon_1>0.
\end{equation}
\end{theorem}
\begin{remark}\label{singType1}
According to (\ref{scalequatities}) and (\ref{singularity}), the origin $z=0$ is a Type I blowup point of the velocity $u$.
\end{remark}
There is another way to construct a suitable weak solution with Type I blowup. It is motivated by the recent result in \cite{AlBa18} for the interior case. Now, the main object is related to the so-called mild bounded ancient solutions in a half space, for details see \cite{SerSve15} and \cite{BaSe15}.
\begin{definition}\label{mbas}
A bounded function $u$ is a mild bounded ancient solution if and only if there exists a pressure $p=p^1+p^2$, where the even extension of $p^1$ in $x_3$ to the whole space $\mathbb R^3$
is a $L_\infty(-\infty,0;BMO(\mathbb R^3))$-function,
$$\Delta p^1={\rm divdiv}\,u\otimes u$$
in $Q^+_-$ with $p^1_{,3}(x',0,t)=0$, and $p^2(\cdot,t)$ is a harmonic function in $\mathbb R^3_+$, whose gradient satisfies the estimate
$$|\nabla p^2(x,t)|\leq \ln (2+1/x_3)$$
for all $(x,t)\in Q^+_-$ and has the property
$$\sup\limits_{x'\in \mathbb R^2}|\nabla p^2(x,t)|\to 0
$$ as $x_3\to \infty$; functions $u$ and $p$ satisfy:
$$\int\limits_{Q^+_-}u\cdot \nabla qdz=0$$
for all $q\in C^\infty_0(Q_-:=\mathbb R^3\times ]-\infty,0[)$ and, for any $t<0$,
$$\int\limits_{Q^+_-}\Big(u\cdot(\partial_t\varphi+\Delta\varphi)+u\otimes u:\nabla \varphi+p{\rm div}\,\varphi\Big)dz=0
$$ for any $\varphi\in C^\infty_0(Q_-)$ with $\varphi(x',0,t)=0$ for all $x'\in \mathbb R^2$.
\end{definition}
As it has been shown in \cite{BaSe15}, any mild bounded ancient solution $u$ in a half space is infinitely smooth up to the boundary and $u|_{x_3=0}=0$.
\begin{theorem}\label{mbas_type1}
Let $u$ be a mild bounded ancient solution such that $|u|\leq 1$ and $|u(0,a,0)|=1$ for a positive number $a$ and such that
(\ref{scalequatities}) holds. Then there exists a suitable weak solution in $Q^+$ having Type I blowup at the origin $z=0$.
\end{theorem}
{\bf Acknowledgement} The work is supported by the grant RFBR 17-01-00099-a.
\section{Basic Estimates}
\setcounter{equation}{0}
In this section, we are going to state and prove certain basic estimates for arbitrary suitable weak solutions near the boundary.
For our purposes, the main estimate of the convective term can be derived as follows. First, we apply H\"older inequality in spatial variables:
$$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q_+(r)}^\frac 32=\int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}
|v|^\frac {12}{11}|\nabla v|^\frac {12}{11}dx\Big)^\frac{11}8dt\leq$$
$$\leq\int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B_+(r)}|v|^\frac {12}5dx\Big)^\frac 58dt.
$$
Then, by interpolation, since $\frac {12}5=2\cdot\frac 35+3\cdot\frac 25$, we find
$$\Big(\int\limits_{B_+(r)}|v|^\frac {12}5dx\Big)^\frac 58\leq \Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B_+(r)}|v|^3dx\Big)^\frac 14. $$
So,
$$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q_+(r)}^\frac 32\leq $$$$\leq \int\limits^0_{-r^2}\Big(\int\limits_{B_+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B_+(r)}|v|^3dx\Big)^\frac 14dt\leq $$
\begin{equation}\label{mainest}\leq \sup\limits_{-r^2<t<0}\Big(\int\limits_{B_+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{Q_+(r)}|\nabla v|^2dxdt\Big)^\frac 34\Big(\int\limits_{Q_+(r)}|v|^3dxdt\Big)^\frac 14\leq \end{equation}
$$\leq r^\frac 38r^\frac 34r^\frac 12A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r)
$$$$=r^\frac {13}8A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r).
$$
Two other estimates are well known and valid for any $0<r\leq 1$:
\begin{equation}\label{multiple}
C(v,r)\leq cA^\frac 34(v,r)E^\frac 34(v,r)
\end{equation}
and
\begin{equation}\label{embedding}
D_0(q,r)\leq cD_2(q,r).
\end{equation}
Next, one more estimate immediately follows from the energy inequality (\ref{energy_inequality}) for a suitable choice of cut-off function $\varphi$:
\begin{equation}\label{energy}
A(v,\tau R)+E(v,\tau R)\leq c(\tau)\Big[C^\frac 23(v,R)+C^\frac 13(v,R)D_0^\frac 23(q,R)+C(v,R)\Big]
\end{equation}
for any $0<\tau<1$ and for all $0<R\leq 1$.
The last two estimates are coming out from the linear theory. Here, they are:
\begin{equation}\label{pressure}
D_2(q,r)\leq c \Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac 34(v,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)
\end{equation}
for any $0<r<\varrho\leq 1$ and
\begin{equation}\label{highder}
\|\partial_tv\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla^2v\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla q\|_{\frac {12}{11},\frac 32,Q^+(\tau R)} \leq
\end{equation}
$$\leq c(\tau)R^\frac {13}{12}\Big[D_0^\frac 23(q,R)+C^\frac 13(v,R)+E^\frac 12(v,R)+$$$$+(A^\frac 38(v,R)E^\frac 34(v,R)C^\frac 14(v,R))^\frac 23\Big]
$$
for any $0<\tau<1$ and for all $0<R\leq 1$.
Estimate (\ref{highder}) follows from bound (\ref{mainest}), from the local regularity theory for the Stokes equations (linear theory), see paper
\cite{S2009}, and from scaling. Estimate (\ref{pressure}) will be proven in the next section.
\section{Proof of (\ref{pressure})}
\setcounter{equation}{0}
Here, we follow paper \cite{S3}. We let
$\tilde f=-v\cdot\nabla v$ and observe
that
\begin{equation}\label{weakerterm}
\frac 1r\|\nabla v\|_{\frac {12}{11},\frac 32,Q^+(r)}\leq r^\frac {13}{12}E^\frac 12(v,r)
\end{equation}
and, see (\ref{mainest}),
\begin{equation}\label{righthand}
\|\tilde f\|_{\frac {12}{11},\frac 32,Q^+(r)} \leq cr^\frac {13}{12}(A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r))^\frac 23.
\end{equation}
Next, we select a convex domain with smooth boundary so that
$$B^+(1/2)\subset \tilde B\subset B^+$$ and, for $0<\varrho<1$, we let
$$\tilde B(\varrho)=\{x\in \mathbb R^3: x/\varrho\in \tilde B\},\qquad \tilde Q(\varrho)=\tilde B(\varrho)\times ]-\varrho^2,0[.
$$
Now, consider the following initial boundary value problem:
\begin{equation}\label{v1eq}
\partial_tv^1-\Delta v^1+\nabla q^1=\tilde f, \qquad {\rm div}\,v^1=0
\end{equation} in $\tilde Q(\varrho)$ and
\begin{equation}\label{v1ibc}
v^1=0
\end{equation}
on parabolic boundary $\partial'\tilde Q(\varrho)$ of $\tilde Q(\varrho)$. It is also supposed that
$[q^1]_{\tilde B(\varrho)}(t)=0$ for all $-\varrho^2<t<0$.
Due to estimate (\ref{righthand}) and due to the Navier-Stokes scaling,
a unique solution to problem (\ref{v1eq}) and (\ref{v1ibc}) satisfies the estimate
$$\frac 1{\varrho^2}\|v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+\frac 1{\varrho}\|\nabla v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}
+\|\nabla^2 v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+$$
\begin{equation}\label{v1est}
+ \frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}
+\|\nabla q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq
\end{equation}
$$\leq c\|\tilde f\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq c\varrho^\frac {13}{12}(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23,$$
where a generic constant c is independent of $\varrho$.
Regarding $v^2=v-v^1$ and $q^2=q-[q]_{B_+(\varrho/2)}-q^1$, one can notice the following:\begin{equation}\label{v2eq}
\partial_tv^2-\Delta v^2+\nabla q^2=0, \qquad {\rm div}\,v^2=0
\end{equation} in $\tilde Q(\varrho)$ and
\begin{equation}\label{v2ibc}
v^2|_{x_3=0}=0.
\end{equation}
As it was indicated in \cite{S2009}, functions $v^2$ and $q^2$ obey the estimate
\begin{equation}\label{v2q2est}\|\nabla^2 v^2\|_{9,\frac 32, Q^+(\varrho/4)}+\|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)}\leq \frac c{\varrho^\frac {29}{12}}L,\end{equation}
where
$$L:=\frac 1{\varrho^2}\| v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}.$$
As to an evaluation of $L$, we have
$$L\leq
\Big[\frac 1{\varrho^2}\| v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q-[q]_{B^+(\varrho/2)}\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$
$$+\frac 1{\varrho^2}\| v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big]\leq$$
$$\leq \Big[\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\|\nabla q\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$$$+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big].$$
So, by (\ref{weakerterm}), by (\ref{highder})
with $R=\varrho$ and $\tau=\frac 12$, and by (\ref{v1est}), one can find the following bound
\begin{equation}\label{q2est}
\|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)} \leq
\frac c{\varrho^\frac 43}\Big[E^\frac 12(v,\varrho)+D_2^\frac 23(q,\varrho)+(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23\Big].
\end{equation}
Now, assuming $0<r<\varrho/4$, we can derive from (\ref{v1est}) and from (\ref{q2est}) the estimate
$$D_2(q,r)\leq \frac c{r^\frac {13}8}\Big[\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+
\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^\frac {12}{11}dx\Big)^\frac {11}8dt\Big]\leq
$$$$
\leq \frac c{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+cr^2
\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^9dx\Big)^\frac 16dt\leq$$
$$\leq c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)+c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)+$$$$+A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)\Big]
$$
and thus
$$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)
$$
for $0<r<\varrho/4$. The latter implies estimate
(\ref{pressure}).
\section{Proof of Proposition \ref{boundednesstheorem}}
\setcounter{equation}{0}
\begin{proof} We let $g=g(v)$ and $G=G_1(v,q)$.
Let us
assume that $g<\infty$. Our aim is to show that $G<\infty$. There are three cases:
\textsc{Case 1}.
Suppose that
\begin{equation}\label{case1}
C_0:=\sup\limits_{0<R<1}C(v,R)<\infty.
\end{equation}
Then, from (\ref{energy}), one can deduce that
$$A(v, R/2)+E(v, R/2)\le c_1(1+D_0^\frac 23(q,R)).$$
Here and in what follows in this case, $c_1$ is a generic constant depending on $C_0$ only.
Now, let us use (\ref{embedding}), (\ref{pressure}) with $\varrho= R/2$, and the above estimate. As a result, we find
$$D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q, R/2)
+c_1\Big(\frac Rr\Big)^\frac {13}8[E^\frac 34(v, R/2)+1 +D_2^\frac 34(q,R)]\leq
$$$$\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {13}8[1+D_2(q,R)^\frac 23]$$
for all $0<r< R/2$. So, by Young's inequality,
\begin{equation}\label{pressure1}
D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {71}8\end{equation}
for all $0<r< R/2$.
If $ R/2\leq r\leq R$, then
$$D_2(q,r)\leq \frac 1{( R/2)^\frac {13}8}\int\limits^0_{-R^2}\Big(\int\limits_{B^+(R)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt\leq 2^\frac {13}8D_2(q,R)\Big(\frac {2r}{R}\Big)^2.$$
So, estimate (\ref{pressure1})
holds for all $0<r<R<1$.
Now, for $\mu$ and $R$ in $]0,1[$, we let $r=\mu R$ in (\ref{pressure1}) and find
$$D_2(q,\mu R)\leq c\mu^2D_2(q,R)+c_1\mu^{-\frac{71}8}.
$$
Choosing $\mu$ so small that $2c\mu\leq 1$, we show that
$$D_2(q,\mu R)\leq \mu D_2(q,R)+c_1
$$ for any $0<R<1$. One can iterate the last inequality and get the following:
$$D_2(q,\mu^{k+1}R)\leq \mu^{k+1}D_2(q,R)+c_1(1+\mu+...+\mu^k)$$
for all natural numbers $k$. The latter implies
that
\begin{equation}\label{1case1est}
D_2(q,r)\leq c_1\frac rRD_2(q,R)+c_1
\end{equation} for all $0<r<R<1$. And we can deduce from (\ref{embedding}) and from the above estimate that
$$\max\{\sup\limits_{0<R<1}D_0(q,R), \sup\limits_{0<R<1}D_2(q,\tau R)\}<\infty$$
for any $0<\tau<1$. Uniform boundedness of $A(v,R)$ and $E(v,R)$ follows from the energy estimate (\ref{energy}) and from the assumption (\ref{case1}).
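For the reader's convenience, we sketch the standard passage from the discrete iterates to (\ref{1case1est}). Given $0<r<R<1$, let $k\geq0$ be the integer with $\mu^{k+1}R<r\leq\mu^kR$. Arguing as in the case $R/2\leq r\leq R$ above, $D_2(q,r)\leq \mu^{-\frac {13}8}D_2(q,\mu^kR)$, and therefore, by the iterated inequality,
$$D_2(q,r)\leq \mu^{-\frac {13}8}\Big(\mu^kD_2(q,R)+\frac{c_1}{1-\mu}\Big)\leq c_1\frac rRD_2(q,R)+c_1,$$
where we used $\mu^k<\frac r{\mu R}$ and the fact that $\mu$ depends only on absolute constants.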
\textsc{Case 2}. Assume now that
\begin{equation}\label{case2}
A_0:=\sup\limits_{0<R<1}A(v,R)<\infty.
\end{equation}
Then, from (\ref{multiple}), it follows that
$$C(v,r)\leq cA_0^\frac 34E^\frac 34(v,r)$$
for any $0<r<1$ and thus
$$A(v,\tau \varrho)+E(v,\tau \varrho)\leq c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big].
$$ for any $0<\tau<1$ and $0<\varrho<1$.
Our next step is an estimate for the pressure quantity:
$$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac34(v,\varrho)\Big]+c_2\Big(\frac \varrho r\Big)^\frac {13}8E^\frac {15}{16}(v,\varrho)\leq$$$$\leq c\Big(\frac r\varrho\Big)^2D_2(q,\varrho)+c_2\Big(\frac \varrho r\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)$$ for any $0<r<\varrho<1$.
Here, a generic constant, depending on $A_0$ only, is denoted by $c_2$.
Letting $r=\tau \varrho$ and $\mathcal E(r):=A(v,r)+D_2(q,r)$, one can deduce from the latter inequalities, see also (\ref{embedding}), the following estimates:
$$\mathcal E(\tau \varrho)\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$
$$\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big(\frac1{\tau}\Big)^4E^\frac 34(v,\varrho)+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$
$$\leq c\tau^2\mathcal E(\varrho)
+c_3(A_0,\tau).
$$
The rest of the proof is similar to what has been done in Case 1, see derivation of (\ref{1case1est}).
\textsc{Case 3}. Assume now that
\begin{equation}\label{case3}
E_0:=\sup\limits_{0<R<1}E(v,R)<\infty.
\end{equation}
Then, from (\ref{multiple}), it follows that
$$C(v,r)\leq cE_0^\frac 34A^\frac 34(v,r)$$
for all $0<r\leq 1$. As to the pressure, we can find
$$D_2(q,\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho)
$$ for any $0<\tau<1$ and for any $0<\varrho<1$.
In turn, the energy inequality gives:
$$A(v,\tau \varrho)\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq
$$
$$\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]
$$ for any $0<\tau<1$ and for any $0<\varrho<1$.
Similar to Case 2, one can introduce the quantity $\mathcal E(r)=A(v,r)+D_2(q,r)$ and find the following inequality for it:
$$\mathcal E(\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho)+$$
$$+c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq $$
$$\leq c\tau^2\mathcal E(\varrho)+c_5(E_0,\tau)$$
for any $0<\tau<1$ and for any $0<\varrho<1$. The rest of the proof is the same as in Case 2.
\end{proof}
\section{Proof of Theorem \ref{local energy ancient solution}}
\setcounter{equation}{0}
Assume that $v$ and $q$ are a suitable weak solution in $Q^+$ with Type I blow up at the origin, so that
\begin{equation}\label{type1}
g=g(v)=\min\{ \sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),\sup\limits_{0<R<1}C(v,R)\}<\infty.
\end{equation}
By Theorem \ref{boundednesstheorem},
\begin{equation}
\label{bound1}
G_1=G_1(v,q):=\max\{\sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),\sup\limits_{0<R<1}C(v,R), \sup\limits_{0<R<1}D_0(q,R)\}<\infty.
\end{equation}
We know, see Theorem 2.2 in \cite{S2016}, that there exists a positive number $\varepsilon_1=\varepsilon_1(G_1)$ such that
\begin{equation}\label{sing1}
\inf\limits_{0<R<1}C(v,R)\geq \varepsilon_1>0.
\end{equation}
Otherwise, the origin $z=0$ is a regular point of $v$.
Let $R_k\to0$ and $a>0$ and let
$$u^{(k)}(y,s)=R_kv(x,t),\qquad p^{(k)}(y,s)=R_k^2q(x,t),
$$ where $x=R_ky$, $t=R^2_ks$. Then, we have
$$A(v,aR_k)=A(u^{(k)},a)\leq G_1,\qquad E(v,aR_k)=E(u^{(k)},a)\leq G_1,$$$$ C(v,aR_k)=C(u^{(k)},a)\leq G_1, \qquad D_0(q,aR_k)=D_0(p^{(k)},a)\leq G_1.$$
Thus, by (\ref{highder}),
$$\|\partial_tu^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla^2u^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla p^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}\leq c(a,G_1).$$
Moreover, the well known multiplicative inequality implies the following bound:
$$\sup\limits_k\int\limits_{Q^+(a)}|u^{(k)}|^\frac {10}3dz\leq c(a,G_1).$$
Using known arguments,
one can select a subsequence (still denoted in the same way as the whole sequence) such that, for any $a>0$,
$$u^{(k)}\to u$$
in $L_3(Q^+(a))$,
$$\nabla u^{(k)}\rightharpoonup \nabla u$$
in $L_2(Q^+(a))$,
$$p^{(k)}\rightharpoonup p$$
in $L_\frac 32(Q^+(a))$. The first two statements are well known and we shall comment on the last one only.
Without loss of generality, we may assume that
$$\nabla p^{(k)}\rightharpoonup w
$$ in $L_{\frac {12}{11}}(Q^+(a))$ for all positive $a$.
We let $p^{(k)}_1(x,t)=p^{(k)}(x,t)-[p^{(k)}]_{B^+(1)}(t)$. Then, there exists
a subsequence $\{k^1_j\}_{j=1}^\infty$ such that
$$p^{(k^1_j)}_1\rightharpoonup p_1$$ in $L_\frac 32(Q^+(1))$ as $j\to\infty$. Indeed, by the Poincar\'e--Sobolev inequality,
$$\|p^{(k^1_j)}_1\|_{\frac 32, Q^+(1)}\leq c \|\nabla p^{(k^1_j)}\|_{\frac {12}{11},\frac 32,Q^+(1)}\leq c(1,G_1).$$
Moreover, one has $\nabla p_1=w$ in $Q^+(1)$.
Our next step is to define $p^{(k^1_j)}_2(x,t)=p^{(k^1_j)}(x,t)-[p^{(k^1_j)}]_{B^+(2)}(t)$. For the same reason as above, there is a subsequence $\{k^2_j\}_{j=1}^\infty$ of the sequence
$\{k^1_j\}_{j=1}^\infty$ such that
$$p^{(k^2_j)}_2\rightharpoonup p_2$$ in $L_\frac 32(Q^+(2))$ as $j\to\infty$. Moreover, we claim that $\nabla p_2=w$ in $Q^+(2)$ and
$$p_2(x,t)-p_1(x,t)=[p_2]_{B^+(1)}(t)-[p_1]_{B^+(1)}(t)=[p_2]_{B^+(1)}(t)$$
for $x\in B^+(1)$ and $-1<t<0$, i.e., in $Q^+(1)$.
After $s$ steps, we arrive at the following: there exists a subsequence $\{k^s_j\}_{j=1}^\infty$ of the sequence $\{k^{s-1}_j\}_{j=1}^\infty$ such that $p^{(k^s_j)}_s(x,t)
=p^{(k^s_j)}(x,t)-[p^{(k^s_j)}]_{B^+(s)}(t)$ in $Q^+(s)$ and
$$p^{(k^s_j)}_s\rightharpoonup p_s$$ in $L_\frac 32(Q^+(s))$ as $j\to\infty$.
Moreover, $\nabla p_s=w$ in $Q^+(s)$ and
$$p_s(x,t)=p_{s-1}(x,t)+[p_s]_{B^+(s-1)}(t)$$
in $Q^+(s-1)$. And so on.
The following function $p$ is going to be well defined: $p=p_1$ in $Q^+(1)$ and
$$p(x,t)=p_{s+1}(x,t)-\sum\limits_{m=1}^s[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$
in $Q^+(s+1)$, where $\chi_\omega(t)$ is the indicator function of the set $\omega \subset \mathbb R$. Indeed, to this end, we need to verify
that
$$p_{s+1}(x,t)-\sum\limits_{m=1}^{s}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)=$$
$$=p_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$
in $Q^+(s)$. The latter is an easy exercise.
Now, let us fix $s$ and consider the sequence
$$p^{(k^s_j)}(x,t)=p^{(k^s_j)}_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p^{(k^s_j)}_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$
in $Q^+(s)$. Then, since the sequence $\{k^s_j\}_{j=1}^\infty$ is a subsequence of all sequences $\{k^{m+1}_j\}_{j=1}^\infty$ with $m\leq s-1$, one can easily check that
$$p^{(k^s_j)}\rightharpoonup p$$
in $L_\frac 32(Q^+(s))$. It remains to apply the diagonal procedure of Cantor.
With the above convergences in hand, we conclude that the pair $u$ and $p$ is a local energy ancient solution in $Q^+_-$ and that (\ref{scalequatities}) and (\ref{singularity}) hold.
The inverse statement is obvious.
\section{Proof of Theorem \ref{mbas_type1}}
\setcounter{equation}{0}
The proof is similar to the proof of Theorem \ref{local energy ancient solution}. We start with scaling $u^\lambda(y,s)=\lambda u(x,t)$ and $p^\lambda(y,s)=\lambda^2p(x,t)$ where $x=\lambda y$ and $t=\lambda^2s$ and $\lambda\to\infty$. We know
$$|u^\lambda(0,y_{3\lambda},0)|=\lambda|u(0,a,0)|=\lambda$$
where $y_{3\lambda}=a/\lambda$, so that $y_{3\lambda}\to0$ as $\lambda\to\infty$.
For any $R>0$, by the invariance with respect to the scaling, we have
$$A(u^\lambda,R)=A(u,\lambda R)\leq G(u,p)=:G_0,\qquad E(u^\lambda,R)=E(u,\lambda R)\leq G_0,
$$
$$C(u^\lambda,R)=C(u,\lambda R)\leq G_0,\qquad D_0(p^\lambda,R)=D_0(p,\lambda R)\leq G_0.$$
Now, one can apply estimate (\ref{scalequatities}) and get the following:
$$\|\partial_tu^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla^2u^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla p^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}\leq c(R,G_0).$$
Without loss of generality (passing, if necessary, to a subsequence $\lambda=\lambda_k\to\infty$, which is not relabeled), we can deduce from the above estimates that, for any $R>0$,
$$u^{\lambda}\to v$$
in $L_3(Q^+(R))$,
$$\nabla u^{\lambda}\rightharpoonup \nabla v$$
in $L_2(Q^+(R))$,
$$p^{\lambda}\rightharpoonup q$$
in $L_\frac 32(Q^+(R))$. Passing to the limit as $\lambda\to\infty$, we conclude that $v$ and $q$ are a local energy ancient solution in $Q^+_-$ for which $G(v,q)<\infty$.
Now, our goal is to prove that $z=0$ is a singular point of $v$. We argue ad absurdum. Assume that the origin is a regular point, i.e., there exist numbers $R_0>0$ and $A_0>0$
such that
$$|v(z)|\leq A_0$$
for all $z\in Q^+(R_0)$. Hence,
\begin{equation}\label{estim1}
C(v,R)=\frac 1{R^2}\int\limits_{Q^+(R)}|v|^3dz\leq cA_0^3R^3
\end{equation}
for all $0<R\leq R_0$. Moreover,
\begin{equation}\label{pass}
C(u^\lambda,R)\to C(v,R)
\end{equation}
as $\lambda\to\infty$. By weak convergence,
$$D_0(q,R)\leq G_0$$
for all $R>0$.
Now, we can calculate positive numbers $\varepsilon(G_0)$ and $c(G_0)$ of Theorem 2.2 in \cite{S2016}. Then, let us fix $0<R_1<R_0$, see (\ref{estim1}), so that $C(v,R_1)<\varepsilon(G_0)/2$. According to (\ref{pass}), one can find a number $\lambda_0>0$ such that
$$C(u^\lambda,R_1)<\varepsilon(G_0)$$
for all $\lambda>\lambda_0$. By Theorem 2.2 of \cite{S2016},
$$\sup\limits_{z\in Q^+(R_1/2)}|u^\lambda(z)|<\frac {c(G_0)}{R_1}
$$ for all $\lambda>\lambda_0$.
It remains to select $\lambda_1>\lambda_0$ such that $y_{3\lambda_1}=a/\lambda_1 <R_1/2$ and $\lambda_1>c(G_0)/R_1$. Then
$$|u^{\lambda_1}(0,y_{3\lambda_1},0)|=\lambda_1\leq \sup\limits_{z\in Q^+(R_1/2)}|u^{\lambda_1}(z)|<\frac {c(G_0)}{R_1}.$$
This is a contradiction.
\end{document} |
\begin{document}
\title{A real algebra perspective on multivariate tight wavelet frames}
\author{
Maria Charina\footnoteremember{myfootnote}{Fakult\"at f\"ur Mathematik, TU
Dortmund, D--44221 Dortmund, Germany, $maria.charina@tu-dortmund.de$}, \ Mihai
Putinar \footnote{Department of Mathematics, University of California at Santa
Barbara, CA 93106-3080, USA},\\
Claus Scheiderer \footnote{Fachbereich
Mathematik und Statistik, Universit\"at Konstanz, D--78457 Konstanz, Germany} \
and \ Joachim St\"ockler \footnoterecall{myfootnote} }
\maketitle
\begin{abstract}
Recent results from real algebraic geometry and the theory
of polynomial optimization are related in a new framework to the
existence question of multivariate tight wavelet frames whose generators have at least
one vanishing moment. Namely, several equivalent formulations of
the so-called Unitary Extension Principle (UEP) from \cite{RS95} are interpreted in terms of
hermitian sums of squares of certain nonnegative trigonometric polynomials and in terms
of semi-definite programming. The latter together with the results in \cite{LP,sch:mz}
answer affirmatively the long standing open
question of the existence of such tight wavelet frames in dimension $d=2$; we also
provide numerically efficient methods for checking their existence and actual construction in
any dimension. We exhibit a class of counterexamples in dimension $d=3$ showing that,
in general, the UEP property is not sufficient for the existence of tight wavelet frames.
On the other hand we provide stronger sufficient conditions for the existence of tight wavelet frames
in dimension $d \ge 3$ and illustrate our results by several examples.
\end{abstract}
\noindent {\bf Keywords:} multivariate wavelet frame, real algebraic geometry, torus, hermitian square,
polynomial optimization, trigonometric polynomial.\\
\noindent{\bf Math. Sci. Classification 2000:} 65T60, 12D15, 90C26, 90C22.
\section{Introduction}
Several fundamental results due to two groups of authors (I. Daubechies, B.
Han, A. Ron, Z. Shen \cite{DHRS03} and respectively C. Chui, W. He, J. St\"ockler
\cite{CHS04, CHS05}) lie at the foundation of the theory of tight wavelet frames and also
provide their characterizations. These characterizations allow on one hand
to establish the connection between frame constructions and the challenging
algebraic problem of existence of sums of squares representations (sos) of
non-negative trigonometric polynomials. On the other hand, the same characterizations
provide methods, however unsatisfactory from the practical
point of view, for constructing tight wavelet frames.
The existence and effective
construction of tight frames, together with good estimates on the number of
frame generators, are still open problems. One can easily be discouraged
by a general result by Scheiderer in \cite{S99}, which implies that not all non-negative
trigonometric polynomials in dimension $d \ge 3$ possess sos
representations. However, our main focus is on dimension $d=2$ and on
special non-negative trigonometric polynomials. This
motivates us to pursue the issue of existence of sos representations further.
It has been observed in \cite{CoifmanDonoho1995} that redundancy of wavelet
frames has advantages for applications in signal denoising - if the data is
redundant, then losing some data during transmission does not necessarily
affect the reconstruction of the original signal. Shen et al. \cite{Shen2011}
use the tight wavelet frame decomposition to recover a clear image from a single
motion-blurred image. In \cite{JoBra} the authors show how to use
multiresolution wavelet filters $p$ and $q_j$ to construct irreducible
representations for the Cuntz algebra and, conversely, how to recover wavelet
filters from these representations. Wavelet and frame decompositions for
subdivision surfaces are one of the basic tools, e.g., for progressive
compression of $3-d$ meshes or interactive surface viewing
\cite{CS07,KSS,VisualComp}. Adaptive numerical methods based on wavelet frame
discretizations have produced very promising results \cite{CDD1,CDD2} when
applied to a large class of operator equations, in particular, PDE's and integral
equations. We list some existing constructions of compactly supported MRA wavelet tight
frames of $L_2({\mathbb R}^d)$ \cite{CH,CHS01,DHRS03,HanMo2005,LS06,RS95, Selesnick2001}
that employ the Unitary Extension Principle. For any dimension and in the case
of a general expansive dilation matrix, the existence of tight wavelet frames is
always ensured by \cite{CCH,CS}, if the coefficients of the associated
refinement equation are real and nonnegative. A few other compactly
supported multi-wavelet tight frames are circulating nowadays, see \cite{CCH,
CS07,GGL}.
The main goal of this paper is to relate the existence of multivariate tight wavelet frames
to recent advances in
real algebraic geometry and
the theory
of moment problems. The starting point of our
study is the so-called Unitary Extension Principle (UEP) from \cite{RS95}, a special case of the
above mentioned characterizations in \cite{CHS04, CHS05, DHRS03}. In section
\ref{sec:UEP} we first list several equivalent well-known formulations of the UEP
from wavelet and frame literature, but use the novel algebraic terminology to state them.
It has been already observed in \cite{LS06} that a sufficient condition for the existence
of tight wavelet frames satisfying UEP can be expressed in terms of
sums of squares representations of a certain nonnegative trigonometric polynomial. In
\cite[Theorem 3.4]{LS06}, the authors also provide an algorithm for the actual construction of
the corresponding frame generators. In subsection \ref{subsec:sumsofsquares}, we extend
the result of \cite[Theorem 3.4]{LS06} and obtain another equivalent formulation of UEP,
which combined with the results from \cite{sch:mz} guarantees the existence of UEP tight wavelet
frames in the two-dimensional case, see subsection \ref{subsec:existence}. We also exhibit there a
class of three-dimensional counterexamples showing that,
in general, the UEP conditions are not sufficient for the existence of tight wavelet frames.
In those examples, however, we make a rather strong assumption on the underlying refinable
function, which leaves hope that in certain other cases we will be able to show the
existence of such tight wavelet frames. The novel,
purely algebraic equivalent formulation of the UEP in Theorem \ref{th:UEP_hermitian} is aimed at
better understanding the structure of tight wavelet frames.
The constructive method in \cite[Theorem 3.4]{LS06} yields frame generators of support
twice as large as the one of the underlying refinable function. Theorem \ref{th:UEP_hermitian} leads to
a numerically efficient method for frame constructions with no such restriction on the size
of their support. Namely, in subsection \ref{subsec:semi-definite}, we show how to reformulate
Theorem \ref{th:UEP_hermitian} equivalently as a problem of semi-definite programming. This establishes a connection between
constructions of tight wavelet frames and moment problems, see \cite{HP,
lasserrebook, LP} for details.
In section \ref{subsec:sufficient}, we give sufficient
conditions for the existence of tight wavelet frames in dimension $d \ge 3$ and
illustrate our results by several examples of three-dimensional subdivision.
In section \ref{subsec:construction}, we discuss an
elegant method that sometimes simplifies the frame construction and allows to
determine the frame generators analytically. We illustrate this method on the example of
the so-called butterfly scheme from
\cite{GDL}.
{\bf Acknowledgements.} The authors are grateful to the Mathematical Institute at Oberwolfach
for offering optimal working conditions through the Research In Pairs program in 2011. The second author was partially
supported by the National Science Foundation Grant DMS-10-01071.
\section{Background and Notation}
\subsection{Real algebraic geometry}
Let $d\in{\mathbb N}$, let $T$ denote the $d$-dimensional anisotropic real
(algebraic) torus, and let ${\mathbb R}[T]$ denote the (real) affine
coordinate ring of $T$
$$
{\mathbb R}[T]\>=\>{\mathbb R}\bigl[x_j,\,y_j\colon j=1,\dots,d\bigr]\big/
\bigl(x_j^2+y_j^2-1\colon j=1,\dots,d\bigr).
$$
In other words, $T$ is the subset of ${\mathbb R}^{2d}$ defined by the
equations $x_j^2+y_j^2=1, 1 \leq j \leq d,$ and endowed with additional algebraic structure
which will become apparent in the following pages.
Rather than working with the above description, we will mostly employ the
complexification of $T$, together with its affine coordinate ring
${\mathbb C}[T]={\mathbb R}[T]\otimes_{\mathbb R}{\mathbb C}$. This coordinate ring comes with a natural
involution $*$ on ${\mathbb C}[T]$, induced by complex conjugation. Namely,
$$
{\mathbb C}[T]\>=\>{\mathbb C}[z_1^{\pm1},\dots,z_d^{\pm1}]
$$
is the ring of complex Laurent polynomials, and $*$ sends $z_j$ to
$z_j^{-1}$ and is complex conjugation on coefficients. The real
coordinate ring ${\mathbb R}[T]$ consists of the $*$-invariant polynomials in
${\mathbb C}[T]$, i.e. $\displaystyle p=\sum_{\alpha \in {\mathbb Z}^d} p(\alpha)
z^\alpha \in {\mathbb R}[T]$ if and only if $p(-\alpha)=\overline{p(\alpha)}$.
The group of ${\mathbb C}$-points of $T$ is $T({\mathbb C})=({\mathbb C}^*)^d={\mathbb C}^*\times\cdots \times{\mathbb C}^*$.
In this paper we often denote the group of ${\mathbb R}$-points of $T$ by ${\mathbb T}^d$.
Therefore,
$$
{\mathbb T}^d\>=\>T({\mathbb R})\>=\>\{(z_1,\dots,z_d)\in({\mathbb C}^*)^d\colon|z_1|=\cdots=
|z_d|=1\}
$$
is the direct product of $d$ copies of the circle group $S^1$. The
neutral element of this group we denote by ${\boldsymbol{1}}=(1,\dots,1)$.
Via the exponential map $\hbox{exp}$, the coordinate ring
${\mathbb C}[T]={\mathbb C}[z_1^{\pm1}, \dots,z_d^{\pm1}]$ of $T$ is identified with the
algebra of (complex) trigonometric polynomials. Namely,
$\hbox{exp}$ identifies $(z_1, \dots, z_d)$ with ${\boldsymbol{e}}^{-i \omega}:=(
e^{-i\omega_1}, \dots, e^{-i\omega_d})$. In the same way, the real coordinate ring ${\mathbb R}[T]$
is identified with the ring of real trigonometric polynomials,
i.e.\ polynomials with real coefficients in $\cos(\omega_j)$ and
$\sin(\omega_j)$, $j=1,\dots,d$.
Let $M\in{\mathbb Z}^{d\times d}$ be a matrix with $\det(M)\ne0$, and write
$m:=|\det M|$. The finite abelian group
\begin{equation}\label{def:G}
G:=2\pi M^{-T}{\mathbb Z}^d/ 2\pi {\mathbb Z}^d
\end{equation}
is (via exp) a subgroup of ${\mathbb T}^d=T({\mathbb R})$. It is isomorphic to
${\mathbb Z}^d/M^T{\mathbb Z}^d$ and has order $|G|=m$. Its character group is $G'=
{\mathbb Z}^d/ M{\mathbb Z}^d$, via the natural pairing
$$
G\times G'\to{\mathbb C}^*,\quad\bil\sigma\chi=e^{i\sigma\cdot\chi},
\quad \sigma\in G, \quad \chi\in G'.
$$
Here $\sigma\cdot\chi$ is the ordinary
inner product on ${\mathbb R}^d$, and $\bil\sigma \chi$ is a root of unity of
order dividing $m$. Note that the group $G$ acts on the coordinate
ring ${\mathbb C}[T]$ via multiplication on the torus
\begin{equation}\label{def:psigma}
p\mapsto p^\sigma({\boldsymbol{e}}^{-i\omega}):= p({\boldsymbol{e}}^{-i(\omega+\sigma)}), \quad \sigma\in
G, \quad \omega \in {\mathbb R}^d.
\end{equation}
The group action commutes with the involution~$*$, that is, $(p^*)
^\sigma=(p^\sigma)^*$ holds for $p\in{\mathbb C}[T]$ and $\sigma\in G$.
From the action of the group $G$ we get an associated direct sum
decomposition of ${\mathbb C}[T]$ into the eigenspaces of this action
$$
{\mathbb C}[T]\>=\>\bigoplus_{\chi\in G'}{\mathbb C}[T]_\chi,
$$
where ${\mathbb C}[T]_\chi$ consists of all $p\in{\mathbb C}[T]$ satisfying $p^\sigma=
\bil\sigma\chi\,p$ for all $\sigma\in G$. For $\chi\in G'$ and $p\in
{\mathbb C}[T]$, we denote by $p_\chi$ the weight $\chi$ isotypical component
of~$p$. Thus,
$$
p_\chi=\frac1m\sum_{\sigma\in G}\bil\sigma\chi\,p^{\sigma}
$$
lies in ${\mathbb C}[T]_\chi$, and we have $\displaystyle p=\sum_{\chi\in G'}p_\chi$.
For every $\chi\in G'$, we choose a lift $\alpha_\chi\in{\mathbb Z}^d$ such that
\begin{equation}\label{def:p_polyphase}
\tilde p_\chi:=z^{-\alpha_\chi}p_\chi
\end{equation}
is $G$-invariant. The components $\tilde p_\chi$ are called
{\em polyphase components} of
$p$, see
\cite{StrangNguyen}.
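To fix ideas, consider the simplest univariate situation $d=1$, $M=2$ (so that $m=2$, $G\simeq\{0,\pi\}$ and $G'\simeq\{0,1\}$); this is also the setting of the Daubechies example in subsection \ref{subsec:semi-definite}. The action \eqref{def:psigma} of the nontrivial group element is $p^\pi(z)=p(-z)$, the isotypical components of $p=\sum_{\alpha}p(\alpha)z^\alpha$ collect the even- and odd-indexed coefficients,
$$
p_0=\frac12\,(p+p^\pi)=\sum_{\alpha\in 2{\mathbb Z}}p(\alpha)z^\alpha,\qquad
p_1=\frac12\,(p-p^\pi)=\sum_{\alpha\in 1+2{\mathbb Z}}p(\alpha)z^\alpha,
$$
and the polyphase components $\tilde p_0=p_0$ and $\tilde p_1=z^{-1}p_1$ are Laurent polynomials in $z^2$, hence $G$-invariant.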
\subsection{Wavelet tight frames}
A wavelet tight frame is a structured system of functions with a special
group structure: it is generated by the action of translations and dilations on a
finite set of functions $\psi_j \in L_2({\mathbb R}^d)$, $1 \le j \le N$. More
precisely, let $M\in{\mathbb Z}^{d\times d}$ be a general expansive matrix, i.e.
$\rho(M^{-1})<1$, or equivalently, all eigenvalues of $M$ are strictly larger
than $1$ in modulus, and let $m=|\det M|$.
We define translation operators $T_\alpha$ on $L_2({\mathbb R}^d)$
by $T_\alpha f=f(\cdot-\alpha) $, $\alpha\in {\mathbb Z}^d$, and
dilation (homothety) $U_M$ by $U_M f=m^{1/2} f(M\cdot)$. Note that these
operators are isometries on $L_2({\mathbb R}^d)$.
\begin{Definition} \label{def:wavelet_tight_frame}
Let $\{\psi_j \ : \ 1 \le j \le N \}\subseteq L_2({\mathbb R}^d)$. The family
$$
\Psi=\{U_M^\ell T_\alpha \psi_j \ : \ 1\le j \le N, \
\ell \in {\mathbb Z}, \ \alpha \in {\mathbb Z}^d \}
$$
is a wavelet tight frame of $L_2({\mathbb R}^d)$, if
\begin{equation}\label{def:parseval}
\|f\|^2_{L_2}=\sum_{{1 \le j \le N, \ell \in {\mathbb Z},} \atop
\alpha \in {\mathbb Z}^d} |\langle f,U_M^\ell T_\alpha \psi_j\rangle|^2 \quad \hbox{for
all} \quad f \in L_2({\mathbb R}^d).
\end{equation}
\end{Definition}
The foundation for the construction of a multiresolution wavelet basis or a wavelet
tight frame is a compactly supported function $\phi \in L_2({\mathbb R}^d)$
with the following properties.
\begin{itemize}
\item[(i)] $\phi$ is refinable, i.e. there exists a finitely
supported sequence $p=\left( p(\alpha)\right)_{\alpha \in {\mathbb Z}^d}$, $p(\alpha)\in {\mathbb C}$, such that
$\phi$ satisfies
\begin{equation} \label{eq:refinement_equation}
\phi(x)= m^{1/2} \sum_{\alpha \in {\mathbb Z}^d} p(\alpha) U_M T_\alpha \phi(x),
\quad x \in {\mathbb R}^d.
\end{equation}
Taking the Fourier transform
$$
\widehat{\phi}(\omega)=\int_{{\mathbb R}^d} \phi(x) e^{-i\omega \cdot x}
dx
$$
of both sides of \eqref{eq:refinement_equation} leads to its equivalent form
\begin{equation} \label{eq:F_refinement_equation}
\widehat{\phi}(M^T\omega)=p({\boldsymbol{e}}^{-i\omega})
\widehat{\phi}(\omega), \quad
\omega \in {\mathbb R}^d,
\end{equation}
where the trigonometric polynomial $p \in {\mathbb C}[T]$ is given by
$$
p({\boldsymbol{e}}^{-i\omega})= \sum_{\alpha \in {\mathbb Z}^d} p(\alpha) e^{-i\alpha\omega},
\qquad \omega\in{\mathbb R}^d.
$$
The isotypical components $p_\chi$ of $p$ are given by
\begin{equation}\label{def:isotypical}
p_\chi({\boldsymbol{e}}^{-i\omega})= \sum_{\alpha\equiv \chi~{\rm mod}~M{\mathbb Z}^d} p(\alpha)
e^{-i\alpha\omega},\qquad \chi\in G'.
\end{equation}
\item[(ii)] One usually assumes that $\widehat{\phi}(0)=1$ by proper
normalization. This
assumption on $\widehat{\phi}$ and \eqref{eq:F_refinement_equation} allow us to
read all properties of $\phi$
from the polynomial $p$, since the refinement equation
\eqref{eq:F_refinement_equation} then implies
$$
\widehat{\phi}(\omega)=\prod_{\ell=1}^\infty p({\boldsymbol{e}}^{-i(M^T)^{-\ell} \omega}),
\quad \omega \in {\mathbb R}^d.
$$
The uniform convergence of this infinite product on compact sets is guaranteed
by $p({\boldsymbol{1}})=1$.
\item[(iii)] One of the approximation properties of $\phi$ is the requirement
that the translates $T_\alpha \phi$, $\alpha\in{\mathbb Z}^d$, form a partition of unity.
Then
\begin{equation}\label{identity:isotypical_at_one}
p_\chi({\boldsymbol{1}})=m^{-1} ,\qquad \chi\in G'.
\end{equation}
\end{itemize}
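For instance, in the univariate dyadic case $d=1$, $M=2$, the symbol $p(z)=\left(\frac{1+z}{2}\right)^2=\frac14+\frac12 z+\frac14 z^2$ satisfies $p({\boldsymbol{1}})=1$ and \eqref{identity:isotypical_at_one}, and the infinite product in (ii) can be evaluated in closed form,
$$
\widehat{\phi}(\omega)=\prod_{\ell=1}^\infty \Big(\frac{1+e^{-i2^{-\ell}\omega}}{2}\Big)^2
=\Big(\frac{1-e^{-i\omega}}{i\omega}\Big)^2,
$$
so that $\phi$ is the piecewise linear B-spline (hat function) supported on $[0,2]$, whose integer translates indeed form a partition of unity. We recall this classical example only for orientation; it is not needed in the sequel.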
\noindent The functions $\psi_j$, $j=1, \dots, N$, are assumed to
be of the form
\begin{equation}\label{def:psij}
\widehat{\psi}_j(M^T\omega)=q_j({\boldsymbol{e}}^{-i\omega}) \widehat{\phi}(\omega),
\end{equation}
where $q_j \in {\mathbb C}[T]$. These assumptions
imply that $\psi_{j}$ have compact support and, as in
\eqref{eq:refinement_equation}, are finite linear combinations of
$U_M T_\alpha\phi$.
\section{Equivalent formulations of UEP} \label{sec:UEP}
In this section we first recall the method called UEP (unitary extension
principle) that allows us to determine the trigonometric
polynomials $q_j$, $1\le j \le N$,
such that the family $\Psi$ in Definition \ref{def:wavelet_tight_frame} is a
wavelet tight frame of $L_2({\mathbb R}^d)$, see \cite{DHRS03,RS95}.
We also give several
equivalent formulations of UEP to link frame constructions with problems in
algebraic geometry and semi-definite programming.
We assume throughout this section
that $\phi\in L_2({\mathbb R}^d)$
is a refinable function with respect to the expansive matrix
$M\in {\mathbb Z}^{d\times d}$, with trigonometric polynomial $p$ in
\eqref{eq:F_refinement_equation} and $\hat \phi(0)=1$, and
the functions $\psi_j$ are defined as in \eqref{def:psij}.
We also make use of the definitions \eqref{def:G} for $G$
and \eqref{def:psigma} for $p^\sigma$, $\sigma\in G$.
\subsection{Formulations of UEP in wavelet frame literature}
Most formulations of the UEP are given in terms of identities for trigonometric
polynomials, see \cite{DHRS03,RS95}.
\begin{Theorem} \label{th:UEP} (UEP) Let the
trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. If the
trigonometric polynomials $q_j \in {\mathbb C}[T]$, $1 \le j \le N$, satisfy the
identities
\begin{equation}\label{id:UEP}
\delta_{\sigma,\tau}-p^{\sigma*}p^{\tau}=
\sum_{j=1}^N q_j^{\sigma*}q_j^{\tau},\qquad
\sigma,\tau \in G,
\end{equation}
then the family $\Psi$ is a wavelet tight frame of $L_2({\mathbb R}^d)$.
\end{Theorem}
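Before reformulating the UEP, we illustrate \eqref{id:UEP} in the simplest univariate case $d=1$, $M=2$; this classical example is recalled only for orientation. For $p(z)=\frac{1+z}{2}$ and the single symbol $q_1(z)=\frac{1-z}{2}$ we have $p^\pi(z)=p(-z)=\frac{1-z}{2}$, $q_1^\pi(z)=\frac{1+z}{2}$, and, in ${\mathbb C}[T]$,
$$
p^*p+q_1^*q_1=\frac{(1+z^{-1})(1+z)+(1-z^{-1})(1-z)}{4}=1,\qquad
p^*p^\pi+q_1^*q_1^\pi=\frac{(1+z^{-1})(1-z)+(1-z^{-1})(1+z)}{4}=0.
$$
Applying the group action $\sigma\in G$ to these two identities yields the remaining cases of \eqref{id:UEP}, which therefore holds with $N=1$; the corresponding family $\Psi$ is generated by the Haar wavelet.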
We next state another equivalent formulation of the Unitary Extension Principle in Theorem
\ref{th:UEP} in terms of the isotypical components $p_\chi$,
$q_{j,\chi}$ of the polynomials $p$, $q_j$. In the wavelet and frame literature, see e.g.
\cite{StrangNguyen}, this equivalent formulation of UEP is usually given in terms of the
polyphase components in \eqref{def:p_polyphase} of $p$ and $q_j$. The proof we
present here serves as an illustration of the algebraic structure behind wavelet
and tight wavelet frame constructions.
\begin{Theorem} \label{th:UEP_polyphase}
Let the trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. The
identities \eqref{id:UEP} are equivalent to
\begin{equation}\label{id:equiv_UEP_1}
\begin{array}{rcl}
&& \displaystyle p_{\chi}^* p_\chi+\sum_{j=1}^N q_{j,\chi}^* q_{j,\chi}=m^{-1},
\quad \chi \in G', \\[12pt]
&& \displaystyle p_{\chi}^* p_{\eta}+\sum_{j=1}^N q_{j,\chi}^* q_{j,\eta}=0\qquad
\chi,\eta \in G', \quad \chi \not=\eta.
\end{array}
\end{equation}
\end{Theorem}
\begin{proof}
Recall that
$ \displaystyle{
p=\sum_{\chi \in G'} p_{\chi}}$ and $ \displaystyle{
p_\chi=m^{-1} \sum_{\sigma \in G}\langle\sigma, \chi \rangle p^{\sigma}}$ imply
$$
p^*=\sum_{\chi \in G'} p^*_{\chi} \quad \hbox{and} \quad p^{\sigma*}=\sum_{\chi \in G'}
(p^*_{\chi})^\sigma = \sum_{\chi \in G'} \langle\sigma, \chi\rangle
p^*_{\chi}.
$$
Thus, with $\eta'=\chi+\eta$ in the next identity, we get
$$
p^{\sigma *} p = \sum_{\chi, \eta' \in G'} \langle \sigma, \chi\rangle p^*_\chi
p_{\eta'}=\sum_{\eta \in G'} \sum_{\chi \in G'}
\langle\sigma, \chi\rangle p^*_\chi p_{\chi+\eta}.
$$
Note that the isotypical components of $p^{\sigma *} p$ are given by
\begin{equation}\label{id:thmUEPpolyphase}
(p^{\sigma *} p)_\eta = \sum_{\chi \in G'}
\langle\sigma, \chi\rangle p^*_\chi p_{\chi+\eta},\qquad \eta\in G'.
\end{equation}
Similarly for $q_j$. Therefore, we get that the identities \eqref{id:UEP} for $\tau=0$ are
equivalent to
\begin{equation} \label{id:equiv_UEP_1_aux}
\sum_{\chi \in G'} \langle\sigma, \chi\rangle \left( p^*_\chi p_{\chi+\eta}+ \sum_{j=1}^N q^*_{j,\chi} q_{j,\chi+\eta}
\right)=\delta_{\sigma,0} \delta_{\eta,0}, \quad \eta \in G', \quad \sigma \in G.
\end{equation}
Note that the identities \eqref{id:UEP} for $\tau \in G$ are redundant and it
suffices to consider only those for $\tau=0$. For fixed $\eta \in G'$, \eqref{id:equiv_UEP_1_aux} is a
system of $m$ equations indexed by $\sigma \in G$ in $m$ unknowns $\displaystyle
p_\chi^* p_{\chi+\eta}+ \sum_{j=1}^N q^*_{j,\chi} q_{j,\chi+\eta}$, $\chi \in
G'$. The corresponding system matrix $A=(\langle \sigma, \chi \rangle)_{\sigma
\in G, \chi \in G'}$ is invertible and $\displaystyle A^{-1}=m^{-1} A^*$. Thus,
\eqref{id:equiv_UEP_1_aux} is equivalent to \eqref{id:equiv_UEP_1}.
\end{proof}
It is easy to see that the identities in Theorem \ref{th:UEP} and in Theorem
\ref{th:UEP_polyphase} have equivalent matrix formulations.
\begin{Theorem}\label{th:matrixUEP}
The identities \eqref{id:UEP} are equivalent to
\begin{equation}\label{identity:UEPmatrixform}
U^*U=I_{m}
\end{equation}
with
$$
U^*=\left[
\begin{matrix} p^{\sigma*}& q_1^{\sigma*} &\cdots& q_N^{\sigma*}\end{matrix}
\right]_{\sigma\in G} \in M_{m \times (N+1)}({\mathbb C}[T]),
$$
and are also equivalent to
\begin{equation}\label{identity:UEPmatrixformpoly}
\widetilde{U}^*\widetilde{U}=m^{-1}I_{m},
\end{equation}
with
$$
\widetilde{U}^*=\left[
\begin{matrix} \tilde{p}^*_{\chi}&
\tilde{q}_{1,\chi}^{*} &\cdots& \tilde{q}_{N,\chi}^{*}\end{matrix}
\right]_{\chi \in G'} \in M_{m \times (N+1)}({\mathbb C}[T]).
$$
\end{Theorem}
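For the Haar pair $p=\frac{1+z}{2}$, $q_1=\frac{1-z}{2}$ considered above, the matrix in \eqref{identity:UEPmatrixform} is
$$
U=\frac12\left[\begin{matrix} 1+z & 1-z\\ 1-z & 1+z\end{matrix}\right],
$$
and $U^*U=I_2$ is verified by a direct computation, while the polyphase matrix in \eqref{identity:UEPmatrixformpoly} is the constant matrix $\widetilde U=\frac12\left[\begin{matrix} 1 & 1\\ 1 & -1\end{matrix}\right]$, for which $\widetilde U^*\widetilde U=\frac12 I_2$ is evident.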
\begin{Remark}
The identities \eqref{identity:UEPmatrixform} and
\eqref{identity:UEPmatrixformpoly} connect the construction of $q_1,\ldots,q_N$
to the following matrix extension problem: extend the first row
$(p^\sigma)_{\sigma\in G}$ of the polynomial matrix $U$ (or $(\widetilde
p_\chi)_{\chi\in G'}$ of $\widetilde U$) to a rectangular $(N+1)\times m$
polynomial matrix satisfying \eqref{identity:UEPmatrixform} (or
\eqref{identity:UEPmatrixformpoly}). There are two major differences
between the identities
\eqref{identity:UEPmatrixform} and \eqref{identity:UEPmatrixformpoly}.
While the first column $(p,q_1,\ldots,q_N)$ of $U$ determines all
other columns of $U$ as well, the columns of the matrix $\widetilde U$
can be
chosen independently, see \cite{StrangNguyen}. All entries of $\widetilde U$,
however, are forced to be $G$-invariant trigonometric polynomials.
\end{Remark}
The following simple consequence of the above results provides a necessary condition
for the existence of UEP tight wavelet frames.
\begin{Corollary}\label{cor:UEP_matrix}
Let the trigonometric polynomial $p \in {\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. For the
existence of trigonometric polynomials $q_j$ satisfying \eqref{id:UEP}, it is
necessary that the sub-QMF condition
\begin{equation}\label{id:subQMF}
1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma\>\ge\>0
\end{equation}
holds on ${\mathbb T}^d$. In particular, it is necessary that $1-p^*p$ is non-negative on
${\mathbb T}^d$.
\end{Corollary}
Next, we give an example of a trigonometric polynomial $p$ satisfying
$p({\boldsymbol{1}})=1$, but for which the corresponding
polynomial $f$ is negative for some $\omega \in {\mathbb R}^3$.
\begin{Example} Consider
\begin{eqnarray*}
p(z_1,z_2,z_3)&=&6z_1z_2z_3 \left(\frac{1+z_1}{2}\right)^2\left(\frac{1+z_2}{2}\right)^2\left(\frac{1+z_3}{2}\right)^2
\left(\frac{1+z_1z_2z_3}{2}\right)^2 - \\
&& \frac{5}{4}z_1 \left(\frac{1+z_1}{2}\right)\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right)^3
\left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\
&& \frac{5}{4}z_2 \left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)\left(\frac{1+z_3}{2}\right)^3
\left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\
&& \frac{5}{4}z_3 \left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right)
\left(\frac{1+z_1z_2z_3}{2}\right)^3 -\\
&& \frac{5}{4}z_1 z_2z_3\left(\frac{1+z_1}{2}\right)^3\left(\frac{1+z_2}{2}\right)^3\left(\frac{1+z_3}{2}\right)^3
\left(\frac{1+z_1z_2z_3}{2}\right).
\end{eqnarray*}
The associated refinable function is continuous as the corresponding subdivision scheme is uniformly convergent,
but $p$ does not satisfy the sub-QMF condition, as
$$
1-\sum_{\sigma\in G}|p^\sigma({\boldsymbol{e}}^{-i \omega})|^2<0 \quad \hbox{for} \quad
\omega=\left( \frac{\pi}{6},0,0\right).
$$
\end{Example}
\subsection{Sums of squares} \label{subsec:sumsofsquares}
Next we give another equivalent formulation of the UEP in terms of a sums of
squares problem for the
Laurent polynomial
\begin{equation}\label{def:f}
f:=1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}.
\end{equation}
We say that $f \in {\mathbb C}[T]$ is a {\it sum of hermitian squares},
if there exist $h_1,\ldots,h_r\in {\mathbb C}[T]$ such that $\displaystyle f=\sum_{j=1}^r h_j^*h_j$.
We start with the following auxiliary lemma.
\begin{Lemma}\label{lem:subQMFiso}
Let $p\in {\mathbb C}[T]$ with isotypical components $p_\chi$, $\chi\in G'$.
\begin{itemize}
\item[(a)]
$ \displaystyle \sum_{\sigma\in G}p^{\sigma*}p^\sigma \>=\>m\cdot\sum_{\chi\in
G'} p_\chi^*p_\chi$ is a $G$-invariant Laurent polynomial in ${\mathbb R}[T]$.
\item[(b)]
If $f$ in \eqref{def:f} is a sum of hermitian squares
\begin{equation}\label{id:tildeH}
f\>=\>\sum_{j=1}^rh_j^*h_j,
\end{equation}
with $h_j\in {\mathbb C}[T]$, then
\begin{equation}\label{id:H}
f\>=\>\sum_{j=1}^r\sum_{\chi \in
G'}\tilde h_{j,\chi}^*\tilde h_{j,\chi},
\end{equation}
with the $G$-invariant polyphase components $\tilde h _{j,\chi}\in {\mathbb C}[T]$.
\end{itemize}
\end{Lemma}
\begin{proof}
Similar computations as in the proof of Theorem \ref{th:UEP_polyphase} yield
the identity in (a). The $G$-invariance and
invariance by involution are obvious.
For (b) we observe that the left-hand side of \eqref{id:tildeH} is
$G$-invariant as well. Therefore, \eqref{id:tildeH} implies
$$
1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}= m^{-1}\sum_{j=1}^r \sum_{\sigma\in G}
h_j^{\sigma*}h_j^{\sigma}.
$$
Using the result in (a) we get
$$
m^{-1}\sum_{j=1}^r \sum_{\sigma\in G}h_j^{\sigma*}h_j^{\sigma}
\>=\>\sum_{j=1}^r\sum_{\chi\in G'}h_{j,\chi}^*h_{j,\chi}.
$$
The polyphase component $\tilde h_{j,\chi}=z^{-\alpha_\chi}h_{j,\chi}$, with
$\alpha_\chi\in{\mathbb Z}^d$ and $\alpha_\chi\equiv \chi$ mod~$M{\mathbb Z}^d$, is $G$-invariant
and satisfies $\tilde h_{j,\chi}^*\tilde h_{j,\chi}=
h_{j,\chi}^*h_{j,\chi}$.
\end{proof}
The results in \cite{LS06} imply that having a sum of hermitian squares decomposition of
\[
f=1-\sum_{\sigma\in G}p^{\sigma*}p^{\sigma}
=\sum_{j=1}^rh_j^*h_j\in {\mathbb R}[T],
\]
with $G$-invariant polynomials $h_j\in{\mathbb C}[T]$, is sufficient for the existence of
the polynomials $q_j$ in Theorem~\ref{th:UEP}. The authors in \cite{LS06} also
provide a method for the construction of $q_j$ from a sum of squares
decomposition of the trigonometric polynomial $f$. Lemma \ref{lem:subQMFiso}
shows that one does not need to require $G$-invariance of $h_j$ in
\eqref{id:tildeH}. Moreover, it is not mentioned in \cite{LS06} that the existence
of the sos decomposition of $f$ is
also a necessary condition and, therefore, provides another equivalent formulation
of the UEP conditions \eqref{id:UEP}. We state the following extension of \cite[Theorem 3.4]{LS06}.
\begin{Theorem}\label{th:LaiSt}
For any $p \in {\mathbb C}[T]$, with $p({\boldsymbol{1}})=1$, the following conditions are equivalent.
\begin{itemize}
\item[(i)] There exist trigonometric polynomials $h_1,\ldots,h_r \in {\mathbb C}[T]$ satisfying
\eqref{id:tildeH}, with $f$ defined in \eqref{def:f}.
\item[(ii)] There exist trigonometric polynomials $q_1,\ldots, q_N \in
{\mathbb C}[T]$ satisfying \eqref{id:UEP}.
\end{itemize}
\end{Theorem}
\begin{proof}
Assume that $(i)$ is satisfied. Let $\chi_k$ be the elements of
$G'\simeq\{\chi_1,\ldots,\chi_m\}$. For $1\le j\le r$ and $1\le k\le m$, we define
the polyphase components $\tilde h_{j,\chi_k}$ of $h_j$ and set $\alpha_\chi\in{\mathbb Z}^d$,
$\alpha_\chi\equiv \chi$ mod~$M{\mathbb Z}^d$, as in Lemma
\ref{lem:subQMFiso}. The constructive method in the proof of \cite[Theorem
3.4]{LS06} yields the explicit form of $q_1,\ldots,q_N$, with $N=m(r+1)$,
satisfying \eqref{id:UEP}, namely
\begin{eqnarray}
q_k&=& m^{-1/2}z^{\alpha_{\chi_k}}
(1-mpp_{\chi_k}^*), \qquad 1\le k\le m,\label{def:LSqk1}\\
q_{mj+k}&=& p
\tilde h_{j,\chi_k}^*
, \qquad 1\le k\le m, \qquad 1 \le j\le r.\label{def:LSqk2}
\end{eqnarray}
Conversely, if $(ii)$ is satisfied, we obtain from \eqref{identity:UEPmatrixform}
\[
I_m-
\left[\begin{matrix} \vdots\\p^{\sigma*}\\ \vdots\end{matrix}
\right]_{\sigma\in G}\cdot
\left[\begin{matrix} \cdots & p^{\sigma}& \cdots\end{matrix}
\right]_{\sigma\in G}= \sum_{j=1}^N
\left[\begin{matrix} \vdots\\q_j^{\sigma*}\\ \vdots\end{matrix}
\right]_{\sigma\in G}\cdot
\left[\begin{matrix} \cdots & q_j^{\sigma}& \cdots\end{matrix}
\right]_{\sigma\in G}.
\]
The determinant of the matrix on the left-hand side is equal to $f$ in
\eqref{def:f}, and, by the Cauchy-Binet formula, the determinant of the matrix
on the right-hand side is a sum of hermitian squares.
\end{proof}
\begin{Remark}\label{rem:matrix-extension}
The constructive method in \cite{LS06} yields $N=m(r+1)$ trigonometric
polynomials $q_j$ in \eqref{id:UEP}, where $r$ is the number of trigonometric
polynomials $h_j$ in \eqref{id:tildeH}. Moreover, the degree of some $q_j$ in
\eqref{def:LSqk1} and \eqref{def:LSqk2} is at least
twice as high as the degree of $p$.
\end{Remark}
Next, we give an equivalent formulation of the UEP condition in terms of
hermitian sums of squares, derived from the identities \eqref{id:equiv_UEP_1}
in Theorem
\ref{th:UEP_polyphase}. Our goal is to improve the constructive method
in \cite{LS06} and to give an algebraic equivalent formulation
that directly delivers the trigonometric polynomials $q_j$ in Theorem~\ref{th:UEP},
avoiding any extra computations as in \eqref{def:LSqk1} and \eqref{def:LSqk2}.
To this end we write $A\>=\>{\mathbb C}[T]$ and consider
$$A\otimes_{\mathbb C} A={\mathbb C}[T\times T].$$
So $A$ is the ring of Laurent polynomials in $d$ variables $z_1,\dots, z_d$. We
may identify $A\otimes A$ with the ring of Laurent polynomials in $2d$ variables
$u_1,\dots,u_d$ and $v_1,\dots,v_d$, where $u_j=z_j\otimes1$ and $v_j=1\otimes
z_j$, $j=1,\dots,d$. On $A$ we have already introduced the $G'$-grading
$A=\bigoplus_{\chi\in G'} A_\chi$ and the involution $*$ satisfying
$z_j^*=z_j^{-1}$. On $A\otimes A$ we consider the involution $*$ defined by
$(p\otimes q) ^*=q^*\otimes p^*$ for $p,q\in A$. Thus $u_j^*=v_j^{-1}$ and $v_j^*=
u_j^{-1}$. An element $f\in A\otimes A$ will be called \emph{hermitian} if
$f=f^*$. We say that $f$ is a sum of hermitian squares if there are finitely
many $q_1,\dots,q_r\in A$ with $\displaystyle f=\sum _{k=1}^rq_k^*\otimes q_k$. On $A\otimes
A$ we consider the grading
$$A\otimes A\>=\>\bigoplus_{\chi,\eta\in G'}A_\chi\otimes A_\eta.$$
So $A_\chi\otimes A_\eta$ is spanned by the monomials $u^\alpha v^\beta$ with
$\alpha+M{\mathbb Z}^d=\chi$ and $\beta+M{\mathbb Z}^d=\eta$. Note that $(A_\chi\otimes
A_\eta)^*=A_{-\eta}\otimes A_{-\chi}$.
The multiplication homomorphism $\mu\colon A\otimes A\to A$ (with $\mu(p\otimes
q)=pq$) is compatible with the involutions. Let $I= \ker(\mu)$, the ideal in
$A\otimes A$ that is generated by $u_j-v_j$ with $j=1,\dots,d$. We also need to
consider the smaller ideal
$$J\>:=\>\bigoplus_{\chi,\eta\in G'}\Bigl(I\cap\bigl(A_\chi\otimes
A_\eta\bigr)\Bigr)$$ of $A\otimes A$. The ideal $J$ is $*$-invariant. Note that
the inclusion $J\subseteq I$ is proper since, for example, $u_j-v_j\notin J$.
\begin{Theorem}\label{th:UEP_hermitian}
Let $p\in A={\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$. The following conditions are
equivalent.
\begin{itemize}
\item[(i)] The Laurent polynomial
$$f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma$$
is a sum of hermitian squares in
$A$; that is, there exist $h_1,\dots,h_r\in A$ with $\displaystyle f= \sum_{j=1}^r h_j^*h_j$.
\item[(ii)]
For any hermitian elements $h_\chi=h_\chi^*$ in $A_{-\chi}\otimes A_{\chi}$,
with $\mu(h_\chi)=\frac1m$ for all $\chi\in G'$, the element
$$g\>:=\>\sum_{\chi\in G'}h_\chi-p^*\otimes p$$
is a sum of hermitian squares in $A\otimes A$ modulo~$J$; that is,
there exist $q_1,\dots,q_N\in A$ with $\displaystyle g- \sum_{j=1}^N q_j^*q_j\in J$.
\item[(iii)]
$p$ satisfies the UEP condition \eqref{id:UEP} for suitable $q_1, \dots,q_N\in
A$.
\end{itemize}
\end{Theorem}
\begin{proof}
By Theorem \ref{th:LaiSt}, $(i)$ is equivalent to $(iii)$.
In $(ii)$, let hermitian elements $h_\chi\in A_{-\chi}\otimes A_{\chi}$ be given with
$\mu(h_\chi)=\frac1m$. Then $(ii)$ is equivalent to the existence of
$q_1,\dots,q_N\in A$ with
\begin{equation}\label{id:gmodJ}
\sum_{\chi\in G'}h_\chi-p^*\otimes p-\sum_{j=1}^Nq_j^*\otimes q_j\in J.
\end{equation}
We write
$\displaystyle p=\sum_{\chi \in G'}p_\chi$ and $q_j$
as the sum of its isotypical components and observe that \eqref{id:gmodJ} is
equivalent to
\begin{equation}\label{id:gchieta}
\delta_{\chi,\eta}h_\chi -p_\chi^*\otimes p_\eta-
\sum_{j=1}^Nq_{j,\chi}^*\otimes q_{j,\eta}
\in I\quad\hbox{for all}\quad \chi,\eta\in G'.
\end{equation}
Due to $\mu(h_\chi)=\frac1m$, the relation \eqref{id:gchieta} is
an equivalent reformulation of equations \eqref{id:equiv_UEP_1} in Theorem
\ref{th:UEP_polyphase}, and therefore equivalent to equations
\eqref{id:UEP}.
\end{proof}
\begin{Remark}\label{rem:UEP_hermitian}
\begin{itemize}
\item[$(i)$]
The proof of Theorem \ref{th:UEP_hermitian} does not
depend on the choice of the hermitian elements $h_\chi\in
A_{-\chi}\otimes A_{\chi}$ in $(ii)$. Thus, it suffices to choose
particular hermitian elements satisfying $\mu(h_\chi)=\frac1m$. For example,
if $p_\chi({\boldsymbol{1}})=m^{-1}$ is satisfied for all $\chi\in G'$, we can choose
\begin{equation}\label{def:hchi}
h_\chi = \sum_{\alpha\equiv \chi}{\rm Re}(p(\alpha))u^{-\alpha}v^\alpha,
\end{equation}
where $p(\alpha)$ are the coefficients of the Laurent polynomial $p$.
\item[$(ii)$]
The same Laurent polynomials $q_1,\ldots,q_N$ can be chosen
in Theorem~\ref{th:UEP_hermitian}
$(ii)$ and $(iii)$. This is the main advantage of working with the
condition $(ii)$ rather than with $(i)$.
\end{itemize}
\end{Remark}
\subsection{Semi-definite programming} \label{subsec:semi-definite}
We next devise a constructive method for determining the Laurent polynomials
$q_j$ in \eqref{id:UEP}. This method is based on $(ii)$ of Theorem \ref{th:UEP_hermitian}
and $(i)$ of Remark \ref{rem:UEP_hermitian}.
For a Laurent polynomial $ p=\sum_\alpha p(\alpha)z^\alpha$,
let ${\cal N}\subseteq {\mathbb Z}^d$ contain $\{\alpha \in {\mathbb Z}^d \ : \ p(\alpha) \not=0\}$.
We also define the tautological (column) vector
$$
{\boldsymbol{x}}=\left[z^\alpha \ : \ \alpha \in {\cal N} \right]^T,
$$
and the orthogonal projections $E_\chi \in {\mathbb R}^{|{\cal N}| \times |{\cal N}|}$ to be diagonal matrices with diagonal
entries given by
$$
E_\chi(\alpha,\alpha)=\left\{ \begin{array}{cc} 1, & \alpha \equiv \chi \ \hbox{mod} \ M{\mathbb Z}^d, \\
0, & \hbox{otherwise}, \end{array}\right. \alpha \in {\cal N}.
$$
\begin{Theorem} \label{th:UEP_semidefinite}
Let
\begin{equation}\label{def:p_coeff}
p={\boldsymbol{p}}\cdot {\boldsymbol{x}}\in A={\mathbb C}[T], \qquad {\boldsymbol{p}}=[p(\alpha) \ : \ \alpha \in {\cal N}]\in {\mathbb C}^{|{\cal N}|},
\end{equation}
satisfy $p_\chi({\boldsymbol{1}})=m^{-1}$ for all $\chi\in G'$. The following conditions
are equivalent.
\begin{itemize}
\item[(i)] There exist row vectors ${\boldsymbol{q}}_j=[q_j(\alpha) \ : \ \alpha \in {\cal N}]\in {\mathbb C}^{|{\cal N}|}$,
$1 \le j \le N$, satisfying the identities
\begin{eqnarray} \label{id:equiv_UEP_2}
{\boldsymbol{x}}^*E_\chi \left( {\rm diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^* {\boldsymbol{p}} -\sum_{j=1}^N {\boldsymbol{q}}_j^* {\boldsymbol{q}}_j \right) E_\eta
{\boldsymbol{x}}=0\quad\hbox{for all}\quad
\chi,\eta \in G'.
\end{eqnarray}
\item[(ii)]
$p$ satisfies the UEP condition \eqref{id:UEP} with
$$
q_j={\boldsymbol{q}}_j\cdot {\boldsymbol{x}}\in {\mathbb C}[T], \qquad j=1,\ldots,N,
$$
and suitable row vectors ${\boldsymbol{q}}_j\in {\mathbb C}^{|{\cal N}|}$.
\end{itemize}
\end{Theorem}
\begin{proof}
Define
$$
{\boldsymbol{v}}=\left[1\otimes z^\alpha \ : \ \alpha \in {\cal N} \right]^T \in (A\otimes A)^{|{\cal N}|}.
$$
Note that ${\boldsymbol{p}}{\boldsymbol{v}}=1\otimes p$ and the definition of $E_\chi$ gives
${\boldsymbol{p}} E_\chi{\boldsymbol{v}}=1\otimes p_\chi$. Therefore, we have
$$
{\boldsymbol{v}}^* E_\chi {\boldsymbol{p}}^* {\boldsymbol{p}} E_\eta {\boldsymbol{v}}=p_\chi^*\otimes p_\eta
\quad\hbox{for all}\quad \chi,\eta\in G',
$$
and the analogue for $q_{j,\chi}^*\otimes q_{j,\eta}$. Moreover, we have
$$
{\boldsymbol{v}}^* E_\chi \hbox{diag}({\rm Re}\,{\boldsymbol{p}}) E_\eta {\boldsymbol{v}}=\delta_{\chi,\eta}
\sum_{\alpha\equiv \chi}{\rm Re}(p(\alpha))u^{-\alpha}v^\alpha.
$$
Due to
$p_\chi({\boldsymbol{1}})=m^{-1}$ and by Remark \ref{rem:UEP_hermitian} we choose
$h_\chi={\boldsymbol{v}}^* E_\chi \hbox{diag}({\rm Re}\, {\boldsymbol{p}}) E_\chi {\boldsymbol{v}}$
as the hermitian elements in
Theorem \ref{th:UEP_hermitian}$(ii)$, and
the relation \eqref{id:gchieta} is equivalent to
$$
{\boldsymbol{v}}^*E_\chi \left( \hbox{diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^* {\boldsymbol{p}}
-\sum_{j=1}^N {\boldsymbol{q}}_j^* {\boldsymbol{q}}_j \right) E_\eta {\boldsymbol{v}} \in I
\quad\hbox{for all}\quad \chi,\eta\in G'.
$$
Due to $\mu({\boldsymbol{v}})={\boldsymbol{x}}$, the claim follows from the equivalence of $(ii)$ and $(iii)$ in Theorem \ref{th:UEP_hermitian}.
\end{proof}
We suggest the following constructive method based on Theorem \ref{th:UEP_semidefinite}.
Given the trigonometric polynomial $p$ and the vector ${\boldsymbol{p}}$ in \eqref{def:p_coeff},
define the matrix
\begin{equation} \label{def:R}
R=\hbox{diag}({\rm Re}\,{\boldsymbol{p}})-{\boldsymbol{p}}^*{\boldsymbol{p}}\in {\mathbb C}^{|{\cal N}| \times |{\cal N}|}.
\end{equation}
Then the task of constructing tight wavelet frames can be formulated as the
following problem of {\bf semi-definite programming}: find a
matrix $O\in {\mathbb C}^{|{\cal N}| \times |{\cal N}|}$ such that
\begin{equation}\label{id:Sposdef}
S:=R+O\quad\hbox{is positive semi-definite}
\end{equation}
subject to the constraints
\begin{equation}\label{id:null-matrices}
{\boldsymbol{x}}^* E_\chi \, O \, E_\eta {\boldsymbol{x}}=0 \quad \hbox{for all}\quad
\chi, \eta \in G'.
\end{equation}
If such a matrix $O$ exists,
we determine the trigonometric polynomials $q_j={\boldsymbol{q}}_j {\boldsymbol{x}}\in {\mathbb C}[T]$
by choosing any decomposition of the form
$$
S=\sum_{j=1}^N{\boldsymbol{q}}_j^* {\boldsymbol{q}}_j
$$
with standard methods
from linear algebra.
If the semi-definite programming problem does not have a
solution, we can increase the set ${\cal N}$ and start all over.
Note that the identities \eqref{id:null-matrices} are equivalent to the
following linear constraints on the null-matrices $O$
$$
\sum_{\alpha \equiv \chi, \beta \equiv \eta}
O_{\alpha,\beta} z^{\beta-\alpha}=0 \quad\hbox{for all}\quad \chi, \eta
\in G',
$$
or, equivalently,
$$
\sum_{\alpha \equiv \chi} O_{\alpha,
\alpha+\tau}=0 \quad \hbox{for all}\quad
\tau \in \{\beta-\alpha \ : \ \alpha, \beta \in
{\cal N}\}.
$$
\begin{Example} To illustrate the concept of null-matrices, we consider first a very
prominent one-dimensional example of a Daubechies wavelet. Let
$$
p={\boldsymbol{p}} \cdot {\boldsymbol{x}}, \quad
{\boldsymbol{p}}=\frac{1}{8} \left[\begin{array}{cccc} 1+\sqrt{3} & 3+\sqrt{3} & 3-\sqrt{3}& 1-\sqrt{3}
\end{array}\right],
$$
and ${\boldsymbol{x}}=\left[1,z,z^2,z^3\right]^T$. In this case $M=m=2$, $G\simeq\{0,\pi\}$,
$G'\simeq\{0,1\}$ and the orthogonal projections $E_\chi \in {\mathbb R}^{4 \times
4}$, $\chi \in G'$, are given by
$$
E_0=\hbox{diag}[1,0,1,0] \quad \hbox{and} \quad E_1=\hbox{diag}[0,1,0,1].
$$
By \eqref{def:R}, we have
$$
R=\frac{1}{64} \left[\begin{array}{rrrr}4+6\sqrt{3}& -6-4\sqrt{3}& -2\sqrt{3}&2\\
-6-4\sqrt{3}& 12+2\sqrt{3} & -6& 2\sqrt{3} \\ -2\sqrt{3} & -6 & 12-2\sqrt{3} & -6+4\sqrt{3} \\
2& 2\sqrt{3} & -6+4\sqrt{3} & 4-6\sqrt{3} \end{array} \right],
$$
which is not positive semi-definite. Define
$$
O=\frac{1}{64} \left[\begin{array}{rrrr}-8\sqrt{3}& 8\sqrt{3}& 0&0\\
8\sqrt{3}& -8\sqrt{3} & 0& 0 \\ 0 & 0 & 8\sqrt{3} & -8\sqrt{3} \\
0& 0 & -8\sqrt{3} & 8\sqrt{3} \end{array} \right]
$$
satisfying \eqref{id:null-matrices}. Then $S=R+O$ is positive semi-definite, of
rank one, and yields the well-known Daubechies wavelet, see \cite{Daub}, defined
by
$$
q_1= \frac{1}{8} \left[ \begin{array}{cccc}
1-\sqrt{3} & -3+\sqrt{3} & 3+\sqrt{3}& -1-\sqrt{3}
\end{array}\right] \cdot {\boldsymbol{x}}.
$$
\end{Example}
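For readers who wish to experiment with the semi-definite programming formulation \eqref{id:Sposdef}--\eqref{id:null-matrices}, we include a small numerical sketch specialized to the univariate Daubechies data of the preceding example. It is meant only to illustrate the bookkeeping: the choice of the packages ({\tt numpy} and {\tt cvxpy}, together with an SDP-capable solver), the variable names and the index handling below are ours and are not part of the theory.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# Univariate Daubechies example: N = {0,1,2,3}, M = m = 2.
s3 = np.sqrt(3.0)
p = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8.0
alphas = range(4)                        # the index set N
R = np.diag(p) - np.outer(p, p)          # R = diag(Re p) - p^* p, cf. (def:R)

O = cp.Variable((4, 4), symmetric=True)  # candidate null-matrix
cons = [O + R >> 0]                      # S = R + O positive semi-definite
# linear constraints (id:null-matrices): for each residue chi mod 2 and each
# admissible shift tau, the corresponding partial diagonal sum of O vanishes
for chi in (0, 1):
    for tau in range(-3, 4):
        idx = [(a, a + tau) for a in alphas
               if a % 2 == chi and 0 <= a + tau <= 3]
        if idx:
            cons.append(sum(O[i, j] for (i, j) in idx) == 0)
cp.Problem(cp.Minimize(0), cons).solve()

S = R + O.value                          # factor S = sum_j q_j^* q_j
w, V = np.linalg.eigh(S)
qs = [np.sqrt(lam) * V[:, k] for k, lam in enumerate(w) if lam > 1e-8]
print("frame generator coefficient vectors:", qs)
\end{verbatim}
Any feasible matrix $O$ returned by the solver yields, after the spectral factorization of $S=R+O$, coefficient vectors ${\boldsymbol{q}}_j$ as in Theorem \ref{th:UEP_semidefinite}; the solver need not reproduce the particular rank-one choice displayed above.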
Another two-dimensional example of one possible choice of an appropriate
null-matrix satisfying \eqref{id:null-matrices} is given in Example
\ref{ex:butterfly}.
\begin{Remark}
Another, very similar, way of working with null-matrices was pursued already in
\cite{CS08}.
\end{Remark}
\section{Existence and constructions of tight wavelet frames} \label{sec:algebra}
In this section we use results from algebraic geometry and Theorem \ref{th:LaiSt}
to resolve the problem of existence of tight wavelet frames. Theorem \ref{th:LaiSt}
allows us to reduce the problem of existence of $q_j$ in \eqref{id:UEP} to the problem
of existence of an sos decomposition of a single nonnegative polynomial
\begin{equation}
f=1-\sum_{\sigma \in G} p^{\sigma*} p^{\sigma} \in {\mathbb R}[T]. \notag
\end{equation}
In subsection \ref{subsec:existence}, for dimension $d=2$, we show that the
polynomials $q_1,\dots,q_N \in {\mathbb C}[T]$ as in Theorem \ref{th:UEP} always exist.
This result is based on recent progress in real algebraic geometry.
We also include an example of a three-dimensional trigonometric polynomial $p$, satisfying the
sub-QMF condition \eqref{id:subQMF}, but for which
trigonometric polynomials $q_1,\ldots,q_N$ as in Theorem \ref{th:UEP} do not exist.
In subsection \ref{subsec:sufficient}, we give sufficient conditions for the
existence of the $q_j$'s in the multidimensional case and give several explicit
constructions of tight wavelet frames in section \ref{subsec:construction}.
\subsection{Existence of tight wavelet frames} \label{subsec:existence}
In this section we show that in the two-dimensional case ($d=2$) the question of
existence of a wavelet tight frame can be positively answered using the results
from \cite{sch:surf}. Thus, Theorem \ref{th:existence2dim} answers a long-standing
open question about the existence of tight wavelet frames as in Theorem \ref{th:UEP}.
The result of Theorem \ref{th:noUEP} states that in the dimension $d \ge 3$
for a given trigonometric polynomial $p$ satisfying $p({\boldsymbol{1}})=1$ and the sub-QMF condition \eqref{id:subQMF}
one cannot always determine trigonometric polynomials $q_j$ as in
Theorem \ref{th:UEP}.
\begin{Theorem}\label{th:existence2dim}
Let $d=2$, $p\in{\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$ and
$\displaystyle \sum_{\sigma\in G}p^{\sigma *} p^\sigma \le1$ on ${\mathbb T}^2=T({\mathbb R})$. Then there exist $N\in{\mathbb N}$
and trigonometric polynomials $q_1,\dots,q_N\in{\mathbb C}[T]$ satisfying
\begin{equation}\label{aux:th:existence2dim}
\delta_{\sigma,\tau}\>=\>p^{\sigma*}p^{\tau}+\sum_{j=1}^N
q_j^{\sigma*}q_j^{\tau},\qquad \sigma,\,\tau\in G.
\end{equation}
\end{Theorem}
\begin{proof}
The torus $T$ is a non-singular affine algebraic surface over ${\mathbb R}$, and $T({\mathbb R})$
is compact. The polynomial $f$ in \eqref{def:f} is in ${\mathbb R}[T]$ and is
nonnegative on $T({\mathbb R})$ by assumption. By Corollary~3.4 of \cite{sch:surf}, there
exist $h_1,\dots,h_r\in{\mathbb C}[T]$ satisfying $\displaystyle f=\sum_{j=1}^r h_j^* h_j$. According to
Lemma \ref{lem:subQMFiso} part $(b)$, the polynomials $h_j$ can be taken to be
$G$-invariant. Thus, by Theorem \ref{th:LaiSt}, there exist
polynomials $q_1,\dots,q_N$ satisfying \eqref{aux:th:existence2dim}.
\end{proof}
The question arises whether there exists
a trigonometric polynomial $p$ that satisfies $p({\boldsymbol{1}})=1$ and the sub-QMF condition
$\displaystyle \sum_{\sigma\in G} p^{\sigma*}p^\sigma \le 1$ on ${\mathbb T}^d$, but for which no UEP tight frame
as in Theorem \ref{th:UEP} exists.
Or, equivalently by Corollary \ref{cor:UEP_matrix}, can we find such a $p$ for which the nonnegative
trigonometric polynomial $1-p^{*}p$ is not a sum of hermitian squares of trigonometric polynomials?
\begin{Theorem}\label{th:noUEP}
There exists $p\in{\mathbb C}[T]$ satisfying $p({\boldsymbol{1}})=1$ and the sub-QMF condition on ${\mathbb T}^3$,
such that $1-p^*p$ is not a sum of hermitian squares in ${\mathbb R}[T]$.
\end{Theorem}
The proof is constructive. The following example defines a family of
trigonometric polynomials with the properties stated in Theorem \ref{th:noUEP}.
We make use of the following local-global result from algebraic geometry: if
the Taylor expansion of $f\in {\mathbb R}[T]$
at one of its roots has, in local coordinates,
a homogeneous part of lowest degree which is not sos of real algebraic
polynomials, then $f$ is not sos in ${\mathbb R}[T]$.
\begin{Example}
Denote $z_j=e^{-i\omega_j}$, $j=1,2,3$.
We let
\[
p(z)=\Big(1-c \cdot m(z) \Big) a(z),\quad z \in T, \quad 0<c\le \frac{1}{3},
\]
where
\[
m(z)= y_1^4y_2^2+y_1^2y_2^4+y_3^6-3y_1^2y_2^2y_3^2 \in {\mathbb R}[T],\qquad
y_j=\sin\omega_j.
\]
In the local coordinates $(y_1,y_2,y_3)$ at $z={\boldsymbol{1}}$,
$m$ is the well-known Motzkin polynomial in ${\mathbb R}[y_1,y_2,y_3]$; i.e.
$m$ is not sos in ${\mathbb R}[y_1,y_2,y_3]$. Moreover, $a\in {\mathbb R}[T]$ is chosen
such that
\begin{equation}\label{eq:propA}
D^\alpha a({\boldsymbol{1}})=\delta_{0,\alpha},\quad
D^\alpha a(\sigma)=0,\quad 0\le |\alpha|< 8, \quad
\sigma\in G\setminus \{{\boldsymbol{1}}\},
\end{equation}
and $\displaystyle \sum_{\sigma \in G} a^{\sigma*}a^\sigma \le 1$. Such $a$ can be, for example, any scaling
symbol of a 3-D orthonormal wavelet with 8 vanishing moments; in particular, the
tensor product Daubechies symbol $a(z)=m_8(z_1) m_8(z_2) m_8(z_3)$ with $m_8$ in
\cite{Daub} satisfies conditions \eqref{eq:propA} and $\displaystyle \sum_{\sigma \in G} a^{\sigma *}a^\sigma
= 1$. The properties of $m$ and $a$ imply that
\begin{itemize}
\item[1.] $p$ satisfies the sub-QMF condition on ${\mathbb T}^3$, since $m$ is $G$-invariant and
$0\le 1-c \cdot m \le 1$ on ${\mathbb T}^3$,
\item[2.] $p$ satisfies sum rules of order at least $6$,
\item[3.] the Taylor expansion of
$1-p^*p$ at $z={\boldsymbol{1}}$, in local coordinates $(y_1,y_2,y_3)$,
has $2\cdot c \cdot m$ as its homogeneous part of lowest degree.
\end{itemize}
Therefore, $1-p^*p$ is not sos of trigonometric polynomials in ${\mathbb R}[T]$. By Corollary \ref{cor:UEP_matrix}, the corresponding
nonnegative trigonometric polynomial $f$ in \eqref{def:f} has no sos decomposition.
\end{Example}
\subsection{Sufficient conditions for existence of tight wavelet frames} \label{subsec:sufficient}
In the general multivariate case $d \ge 2$, in Theorem \ref{th:existence-ddim},
we provide a sufficient condition for the existence of a sums of squares
decomposition of $f$ in \eqref{def:f}. This condition is based on the properties
of the Hessian of $f \in {\mathbb R}[T]$
$$
{\rm Hess}(f)=\left( D^\mu f \right)_{\mu \in {\mathbb N}_0^d, |\mu|=2},
$$
where $f$ is a trigonometric polynomial in $\omega \in {\mathbb R}^d$ and $D^\mu$ denotes the partial
derivative of order $\mu$ with respect to $\omega \in {\mathbb R}^d$.
\begin{Theorem}\label{hessiancrit}
Let $V$ be a non-singular affine ${\mathbb R}$-variety for which $V({\mathbb R})$ is
compact, and let $f\in{\mathbb R}[V]$ with $f\ge0$ on $V({\mathbb R})$. For every $\xi
\in V({\mathbb R})$ with $f(\xi)=0$, assume that the Hessian of $f$ at $\xi$
is strictly positive definite. Then $f$ is a sum of squares in
${\mathbb R}[V]$.
\end{Theorem}
\begin{proof}
The hypotheses imply that $f$ has only finitely many zeros in
$V({\mathbb R})$. Therefore the claim follows from \cite[Corollary
2.17, Example 3.18]{sch:mz}.
\end{proof}
Theorem \ref{hessiancrit} implies the following result.
\begin{Theorem}\label{th:existence-ddim}
Let $p\in{\mathbb C}[T]$ satisfy $p({\boldsymbol{1}})=1$ and $\displaystyle f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma \ge 0$ on $T({\mathbb R})={\mathbb T}^d$. If the
Hessian of $f$ is positive
definite at every zero of $f$ in ${\mathbb T}^d$, then there exist $N\in{\mathbb N}$ and polynomials
$q_1,\dots,q_N\in{\mathbb C}[T]$ satisfying \eqref{id:UEP}.
\end{Theorem}
\begin{proof}
By Theorem \ref{hessiancrit}, $f$ is a sum of squares in ${\mathbb R}[T]$. The claim
follows then by Theorem \ref{th:existence2dim}.
\end{proof}
Due to $p({\boldsymbol{1}})=1$, $z={\boldsymbol{1}}$ is obviously a zero of $f$. We show next how to
express the Hessian of $f$ at ${\boldsymbol{1}}$ in terms of the gradient $\nabla p({\boldsymbol{1}})$
and the Hessian of $p$ at ${\boldsymbol{1}}$, if $p$ additionally satisfies the so-called
sum rules of order $2$, or, equivalently, satisfies the zero conditions of order
$2$. We say that $p \in {\mathbb C}[T]$ satisfies zero conditions of order $k$, if
$$
D^\mu p({\boldsymbol{e}}^{-i\sigma})=0, \quad \mu \in {\mathbb N}_0^d, \quad |\mu|<k, \quad \sigma \in G\setminus\{0\},
$$
see \cite{JePlo,JiaJiang} for details. The assumption that $p$ satisfies sum
rules of order $2$ together with $p({\boldsymbol{1}})=1$ are necessary for the continuity
of the corresponding refinable function $\phi$.
\begin{Lemma}\label{lem:Hessianf_HessianP}
Let $p\in{\mathbb C}[T]$ with real coefficients satisfy the sum rules of order $2$ and
$p({\boldsymbol{1}})=1$. Then the Hessian of $\displaystyle f=1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma$ at
${\boldsymbol{1}}$ is equal to
$$-2\,{\rm Hess}(p)({\boldsymbol{1}})-2\,\nabla p({\boldsymbol{1}})^*\nabla p({\boldsymbol{1}}).$$
\end{Lemma}
\begin{proof}
We expand the trigonometric polynomial $p$ in a neighborhood of
${\boldsymbol{1}}$ and get
$$
p({\boldsymbol{e}}^{-i\omega})=1+ \nabla p({\boldsymbol{1}}) \omega+
\frac{1}{2}\omega^T \mbox{Hess}(p)({\boldsymbol{1}}) \omega
+\mathcal{O}(|\omega|^3).
$$
Note that, since the coefficients of $p$ are real, the row vector $v=\nabla
p({\boldsymbol{1}})$ is purely imaginary and $\mbox{Hess}(p)({\boldsymbol{1}})$ is real and symmetric.
The sum rules of order $2$ are equivalent to
$$
p^\sigma({\boldsymbol{1}})= 0,\quad
\nabla p^\sigma({\boldsymbol{1}})=0\qquad
\mbox{for all}\quad
\sigma\in G\setminus\{ 0\}.
$$
Thus, we have $p^\sigma({\boldsymbol{e}}^{-i\omega})=\mathcal{O}(|\omega|^2)$ for all
$\sigma\in G\setminus\{0\}$. Simple computation yields
\begin{eqnarray*}
|p({\boldsymbol{e}}^{-i\omega})|^2&=&1+ (v+\overline{v}) \omega+ \omega^T (\mbox{Hess}(p)({\boldsymbol{1}})+v^*v) \omega
+\mathcal{O}(|\omega|^3) \\&=&1+ \omega^T (\mbox{Hess}(p)({\boldsymbol{1}})+v^*v) \omega
+\mathcal{O}(|\omega|^3).
\end{eqnarray*} Thus, the claim follows.
\end{proof}
\begin{Remark} Note that ${\rm Hess}(f)$ is the zero matrix if
$p$ is the symbol of an interpolatory subdivision scheme, i.e.,
$$
p=m^{-1}+ m^{-1}\sum_{\chi \in G' \setminus \{0\}} p_\chi,
$$
and $p$ satisfies zero conditions of order at least $3$. This property of ${\rm Hess}(f)$
follows directly from the equivalent formulation of zero conditions of order
$k$, see \cite{Cabrelli}. The examples of $p$ with such properties are for
example the butterfly scheme in Example \ref{ex:butterfly} and the
three-dimensional interpolatory scheme in Example \ref{ex:3D_butterfly}.
\end{Remark}
\begin{Remark}
The sufficient condition of Theorem \ref{hessiancrit} can be
generalized to cases when the order of vanishing of $f$ is larger than two.
Namely, let $V$ and $f$ be as in Theorem \ref{hessiancrit}, and let $\xi\in
V({\mathbb R})$ be a zero of $f$. Fix a system $x_1,\dots,x_n$ of local
(analytic) coordinates on $V$ centered at $\xi$. Let $2d>0$ be the
order of vanishing of $f$ at $\xi$, and let $F_\xi(x_1,\dots,x_n)$ be
the homogeneous part of degree $2d$ in the Taylor expansion of $f$
at $\xi$. Let us say that $f$ is \emph{strongly sos at~$\xi$} if
there exists a linear basis $g_1,\dots,g_N$ of the space of
homogeneous polynomials of degree $d$ in $x_1,\dots,x_n$ such that
$F_\xi=g_1^2+\cdots+g_N^2$. (Equivalently, if $F_\xi$ lies in the
interior of the sums of squares cone in degree~$2d$.) If $2d=2$, this
condition is equivalent to the Hessian of $f$ at $\xi$ being positive
definite.
Then the following holds: If $f$ is strongly sos at each of its zeros
in $V({\mathbb R})$, $f$ is a sum of squares in ${\mathbb R}[V]$. For a proof we refer
to \cite{sch:future}. As a result, we get a corresponding
generalization of Theorem \ref{th:existence-ddim}: if $f$ as in
Theorem \ref{th:existence-ddim} is strongly sos at each of its zeros in
${\mathbb T}^d$, then the conclusion of Theorem \ref{th:existence-ddim} holds.
\end{Remark}
For simplicity of presentation, we start by applying the result of Theorem
\ref{th:existence-ddim} to the $2-$dimensional polynomial $f$ derived from the symbol of the three-directional
piecewise linear box spline. This example also motivates the statements of Remark
\ref{remark:box_and_cosets_sec2.2}.
\begin{Example} \label{example:b111-sec2.2}
The three-directional piecewise linear box spline is defined by its associated trigonometric
polynomial
$$
p({\boldsymbol{e}}^{-i\omega})= e^{-i(\omega_1+\omega_2)} \cos\left(\frac{\omega_1}{2}\right)
\cos\left(\frac{\omega_2}{2}
\right)\cos\left(\frac{\omega_1+\omega_2}{2}\right), \quad
\omega \in {\mathbb R}^2.
$$
Note that
$$
\cos\left(\frac{\omega_1}{2}\right)
\cos\left(\frac{\omega_2}{2}
\right)\cos\left(\frac{\omega_1+\omega_2}{2}\right)
=1-\frac{1}{8}\omega^T \left(\begin{matrix} 2&1\\1&2\end{matrix}\right) \omega +
\mathcal{O}(|\omega|^4).
$$
Therefore, as the trigonometric polynomial $p$ satisfies sum rules
of order $2$, we get
$$
f({\boldsymbol{e}}^{-i\omega})=\frac{1}{4}\omega^T \left(\begin{matrix} 2&1\\1&2\end{matrix}\right) \omega +
\mathcal{O}(|\omega|^4).
$$
Thus, the Hessian of $f$ at ${\boldsymbol{1}}$ is positive definite.
To determine the other zeroes of $f$, by Lemma \ref{lem:subQMFiso} part $(a)$, we
can use either one of the representations
\begin{eqnarray*}
f({\boldsymbol{e}}^{-i\omega})&=&1-\sum_{\sigma \in \{0,\pi\}^2} \prod_{\theta \in \{0,1\}^2 \setminus \{0\}}
\cos^2\left(\frac{(\omega+ \sigma)\cdot \theta}{2}\right)\\
&=& \frac{1}{4} \sum_{\chi \in \{0,1\}^2}(1-\cos^2(\omega \cdot \chi)).
\end{eqnarray*}
It follows that the zeros of $f$ are the points $\omega \in
\pi {\mathbb Z}^2$ and, by periodicity of $f$ with period $\pi$ in both coordinate directions,
we get that
$$
\mbox{Hess}(f)({\boldsymbol{e}}^{-i\omega})=\mbox{Hess}(f)({\boldsymbol{1}}), \quad \omega \in
\pi {\mathbb Z}^2,
$$
is positive definite at all zeros of $f$.
\end{Example}
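As a sanity check of this expansion, one can evaluate $f$ directly from the shifted symbols and approximate its Hessian at ${\boldsymbol{1}}$ by central finite differences. The Python sketch below is our own numerical verification (the step size is an arbitrary choice) and confirms that the Hessian of $f$ at ${\boldsymbol{1}}$ is positive definite.
\begin{verbatim}
import numpy as np

def f(w1, w2):
    # f = 1 - sum over sigma in {0,pi}^2 of |p(e^{-i(omega+sigma)})|^2
    # for the three-directional piecewise linear box spline symbol
    total = 0.0
    for s1 in (0.0, np.pi):
        for s2 in (0.0, np.pi):
            total += (np.cos((w1+s1)/2) * np.cos((w2+s2)/2)
                      * np.cos((w1+w2+s1+s2)/2))**2
    return 1.0 - total

h = 1e-4
H = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        ei, ej = np.eye(2)[i]*h, np.eye(2)[j]*h
        H[i, j] = (f(*(ei+ej)) - f(*(ei-ej))
                   - f(*(-ei+ej)) + f(*(-ei-ej))) / (4*h*h)

print(np.round(H, 4))           # approximately [[1, 1/2], [1/2, 1]]
print(np.linalg.eigvalsh(H))    # both eigenvalues positive
\end{verbatim}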
\begin{Remark} \label{remark:box_and_cosets_sec2.2}
\noindent $(i)$ The result of \cite[Theorem 2.4]{CS07} implies the existence of
tight frames for multivariate box-splines. According to the notation in
\cite[p.~127]{deBoor}, the corresponding trigonometric polynomial is given by
$$
p({\boldsymbol{e}}^{-i\omega})=\prod_{j=1}^n \frac{1+{\boldsymbol{e}}^{-i\omega \cdot \xi^{(j)}}}{2},
\quad \omega \in {\mathbb R}^d,
$$
where $\Xi=(\xi^{(1)},\ldots,\xi^{(n)})\in{\mathbb Z}^{d\times n}$ is unimodular and has
rank $d$. (Unimodularity means that all $d\times d$-submatrices have determinant
$0,1$, or $-1$.) Moreover, $\Xi$ has the property that leaving out any column
$\xi^{(j)}$ does not reduce its rank. (This property guarantees continuity of
the box-spline and that the corresponding polynomial $p$ satisfies at least sum
rules of order $2$.) Then one can show that
$$
f= 1-\sum_{\sigma \in G}p^{\sigma*}p^\sigma \ge 0 \quad \hbox{on} \ {\mathbb T}^d,
$$
the zeros of $f$ are at $\omega \in \pi {\mathbb Z}^d$ and the Hessian of $f$ at
these zeros is positive definite. This yields an alternative proof for
\cite[Theorem 2.4]{CS07} in the case of box splines.
\noindent $(ii)$ If the summands $m^{-2}-p_\chi^* p_\chi$ are
nonnegative on ${\mathbb T}^d$, then it can be easier to determine the
zeros of $f$ by determining the common zeros of all of these summands.
\end{Remark}
\begin{Example} \label{ex:3D_butterfly}
An interpolatory scheme for 3D-subdivision with dilation matrix $2I_3$ was proposed
in \cite{CMQ}. That paper contains several inconsistencies, so we give a corrected description of the trigonometric polynomial
$p$, the so-called subdivision mask. Note that the scheme we present is an extension of the 2-D
butterfly scheme to 3-D data in the following sense: if the data are constant along
one of the coordinate directions (or along the main diagonal in ${\mathbb R}^3$), then
the subdivision procedure keeps this property and is identical with the
2-D butterfly scheme.
We describe the trigonometric polynomial $p$ associated with this 3-D scheme
by defining its isotypical components. The isotypical components,
in terms of $z_k=e^{-i\omega_k}$, $k=1,2,3$, are given by
\small
\begin{eqnarray*}
p_{0,0,0}(z_1,z_2,z_3)&=&1/8,\\[12pt]
p_{1,0,0}(z_1,z_2,z_3) &=& \frac{1}{8} \cos\omega_1 +
\frac{\lambda}{4} \Big(\cos(\omega_1+2\omega_2)+\cos(\omega_1+2\omega_3)\\&+&\cos(\omega_1+2\omega_2+2\omega_3)\Big)
- \frac{\lambda}{4} \Big(\cos(\omega_1-2\omega_2)+\cos(\omega_1-2\omega_3)\\&+&
\cos(3\omega_1+2\omega_2+2\omega_3)\Big),\\[12pt]
p_{0,1,0}(z_1,z_2,z_3) &=& p_{1,0,0}(z_2,z_1,z_3),\
p_{0,0,1}(z_1,z_2,z_3) = p_{1,0,0}(z_3,z_1,z_2),\\
p_{1,1,1}(z_1,z_2,z_3) &=& p_{1,0,0}(z_1z_2z_3,z_1^{-1},z_2^{-1}),\\[12pt]
p_{1,1,0}(z_1,z_2,z_3) &=& \Big(\frac{1}{8}-\lambda \Big) \cos(\omega_1+\omega_2) +
\lambda \Big(\cos(\omega_1-\omega_2)+\cos(\omega_1+\omega_2+2\omega_3)\Big)\\
&-&\frac{\lambda}{4} \Big(\cos(\omega_1-\omega_2+2\omega_3)+\cos(\omega_1-\omega_2-2\omega_3)+ \\
&& \cos(3\omega_1+\omega_2+2\omega_3)+\cos(\omega_1+3\omega_2+2\omega_3)\Big),\\[12pt]
p_{1,0,1}(z_1,z_2,z_3) &=& p_{1,1,0}(z_1,z_3,z_2),\qquad
p_{0,1,1}(z_1,z_2,z_3) = p_{1,1,0}(z_2,z_3,z_1),
\end{eqnarray*}
\normalsize where $\lambda$ is the so-called tension parameter.
The polynomial $p$ also satisfies
$$
p(z_1,z_2,z_3)=\frac{1}{8} (1+z_1)(1+z_2)(1+z_3)(1+z_1z_2z_3)q(z_1,z_2,z_3), \quad q({\boldsymbol{1}})=1,
$$
which implies sum rules of order $2$.
\begin{itemize}
\item[$(a)$] For $\lambda=0$, we have
$q(z_1,z_2,z_3)=1/(z_1z_2z_3)$. Hence, $p$ is the scaling symbol of the trivariate
box spline with the direction set $(1,0,0)$, $(0,1,0)$, $(0,0,1)$, $(1,1,1)$ and whose support center
is shifted to the origin.
\item[$(b)$] For $0\le \lambda< 1/16$, the corresponding subdivision scheme converges and
has a continuous limit function. The only zeros of the associated nonnegative
trigonometric polynomial $f$ are at $\pi {\mathbb Z}^3$, and the Hessian of $f$
at these zeros is given by
$$
\hbox{Hess}(f)({\boldsymbol{1}})=\hbox{Hess}(f)({\boldsymbol{e}}^{-i\omega})=\left( \begin{array}{ccc} 1-16\lambda &\frac{1}{2}-8\lambda&\frac{1}{2}-8\lambda\\
\frac{1}{2}-8\lambda&1-16\lambda&\frac{1}{2}-8\lambda
\\\frac{1}{2}-8\lambda&\frac{1}{2}-8\lambda&1-16\lambda \end{array}\right)
$$
for all $\omega \in \pi {\mathbb Z}^3$.
The existence of the sos decomposition of $f$ is guaranteed by Theorem \ref{th:existence-ddim} and
one possible decomposition of $f$ is computed as follows.
\begin{itemize}
\item[$(b_1)$] Denote $u:=\cos(\omega_1+\omega_2)$, $v:=\cos(\omega_1+\omega_3)$,
$w:=\cos(\omega_2+\omega_3)$, and $\tilde u:=\sin(\omega_1+\omega_2)$, $\tilde v:=\sin(\omega_1+\omega_3)$,
$\tilde w:=\sin(\omega_2+\omega_3)$.
Simple computations yield
\[
p_{1,1,0} =\frac18 -(1-u)(\frac18-\lambda v^2-\lambda w^2)-\lambda (v-w)^2,
\]
and
\begin{eqnarray*}
\frac{1}{64}&-&|p_{1,1,0}|^2=
\lambda^2 (v^2-w^2)^2 + \Big(\Big(\frac1{16}-\lambda v^2\Big) \\&+&
\Big(\frac1{16}-\lambda w^2\Big)\Big)
\left(\frac1{8}\tilde u^2+\lambda (v-uw)^2+\lambda(w-uv)^2\right).
\end{eqnarray*}
Therefore, $\frac{1}{64}-|p_{1,1,0}|^2$ has an sos decomposition with $7$ summands $h_j$, and each $h_j$ has
only one nonzero isotypical component.
\item[$(b_2)$] The isotypical component $p_{1,0,0}$ is not bounded by $1/8$;
consider, for example,
$p_{1,0,0}({\boldsymbol{e}}^{-i\omega})$ at the point $\omega=\left( -\frac{\pi}{6}, -\frac{2\pi}{3}, -\frac{2\pi}{3}\right)$.
Yet we obtain, by simple computations,
\[
p_{1,0,0} = \frac18\cos\omega_1+\frac{\lambda}{2} A \sin\omega_1,\
A:=\sin 2(\omega_1+\omega_2+\omega_3)-\sin 2\omega_2-\sin 2\omega_3,
\]
and
\[
\frac{1}{16}-|p_{1,0,0}|^2-|p_{0,1,0}|^2-|p_{0,0,1}|^2-|p_{1,1,1}|^2 =
E_{1,0,0}+E_{0,1,0}+E_{0,0,1}+E_{1,1,1},
\]
where
\[
E_{1,0,0}= \frac{3\lambda}{16} \sin^4\omega_1 +
\frac{\lambda}{64} (2\sin\omega_1- A \cos\omega_1)^2 +
\frac{1-16\lambda}{64} \sin^2\omega_1 (1+\lambda A^2) ;
\]
the other $E_{i,j,k}$ are given by the same coordinate transformations
as $p_{i,j,k}$. Hence, for $ \frac{1}{16}-|p_{1,0,0}|^2-|p_{0,1,0}|^2-|p_{0,0,1}|^2-|p_{1,1,1}|^2$, we obtain an sos
decomposition with $12$ summands $g_j$, each of which has only one nonzero isotypical component.
\end{itemize}
Thus, for the trivariate interpolatory subdivision
scheme with tension parameter $0\le\lambda<1/16$, by Theorem~\ref{th:LaiSt},
we have explicitly constructed a tight frame with 41 generators $q_j$
as in Theorem \ref{th:UEP}.
\item[$(c)$] For $\lambda=1/16$, the sum rules of order $4$ are satisfied.
In this particular case, the scheme is $C^1$ and the Hessian of $f$ at ${\boldsymbol{1}}$
is the zero matrix, thus the result of Theorem~\ref{th:existence-ddim} is not applicable.
Nevertheless, the sos decomposition of $1-\sum_{\sigma \in G} p^{\sigma *} p^\sigma$ in $(b)$,
with further simplifications for $\lambda=1/16$, gives a tight frame with 31 generators
for the trivariate interpolatory subdivision scheme.
\end{itemize}
\end{Example}
\subsection{Constructions of tight wavelet frames} \label{subsec:construction}
Lemma \ref{lem:subQMFiso} part $(a)$ sometimes yields an elegant
method for determining the sum of squares decomposition of the polynomial $f$ in \eqref{def:f} and, thus,
constructing the trigonometric polynomials $q_j$ in Theorem \ref{th:UEP}. Note that
\begin{equation} \label{idea:method}
f\>=1-\sum_{\sigma \in G} p^{\sigma *} p^\sigma \>=\>1-m\sum_{\chi\in G'} p_\chi^* p_\chi \>=\>m\sum_{\chi\in G'}
\Bigl(\frac1{m^2}-p_\chi^* p_\chi \Bigr).
\end{equation}
So it suffices to find an sos decomposition for each of
the polynomials $m^{-2}-p_\chi^* p_\chi$, provided that they are all
nonnegative. This nonnegativity assumption is satisfied, for
example, for the special case when all coefficients $p(\alpha)$ of
$p$ are nonnegative. This is due to the simple fact that for
nonnegative $p(\alpha)$ we get
$$
p^*_\chi p_\chi\>\le\>|p_\chi({\boldsymbol{1}})|^2\>=\>m^{-2}
$$
on ${\mathbb T}^d$, for all $\chi\in G'$.
The last equality in \eqref{idea:method} allows us to simplify the construction of frame generators
considerably. In Example \ref{example:b111_2_construction} we apply
this method to the three-directional piecewise linear box spline. Example \ref{ex:butterfly} illustrates the advantage of the representation
in \eqref{idea:method} for the butterfly scheme \cite{GDL}, an interpolatory subdivision method
with the corresponding mask $p \in {\mathbb C}[T]$ of a larger support, some of whose coefficients are negative. Example \ref{ex:jiang_oswald} shows that
our method is also applicable for at least one of the interpolatory $\sqrt{3}-$subdivision schemes studied in \cite{JO}.
For the three-dimensional example that also demonstrates our constructive approach,
see Example \ref{ex:3D_butterfly}, part $(b_1)$.
\begin{Example}\label{example:b111_2_construction}
Consider the three-directional piecewise linear box spline with the
symbol
$$p(z_1,z_2)\>=\>\frac18\,(1+z_1)(1+z_2)(1+z_1z_2),\quad z_j=
e^{-i\omega_j}.$$
The sos decomposition for the isotypical components yields
$$
f\>=\>1-m\sum_{\chi\in G'} p_\chi^* p_\chi \>=\>\frac14\sin^2(\omega_1)+
\frac14\sin^2(\omega_2)+\frac14\sin^2({\omega_1+\omega_2}).
$$
Thus, in \eqref{id:tildeH} we have a decomposition with $r=3$.
Since each of $h_1$, $h_2$, $h_3$ has only one isotypical
component, we get a representation $f=\tilde h_1^2+\tilde
h_2^2+\tilde h_3^2$ with $3$ $G$-invariant polynomials $\tilde
h_j$. By Theorem
\ref{th:LaiSt} we get $7$ frame generators. Note that the
method in \cite[Example 2.4]{LS06} yields $6$ generators of
slightly larger support. The method in \cite[Section 4]{CH} based on
properties of the Kronecker product leads to $7$ frame generators
whose support is the same as the one of $p$. One can also employ the
technique discussed in \cite[Section]{GR} and get $7$ frame
generators.
\end{Example}
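The sum of squares above can be confirmed numerically; the following Python sketch (a verification aid only) compares $1-\sum_{\sigma\in G}p^{\sigma*}p^\sigma$ with the three sine terms at random points.
\begin{verbatim}
import numpy as np

def p(w1, w2):
    z1, z2 = np.exp(-1j*w1), np.exp(-1j*w2)
    return (1 + z1)*(1 + z2)*(1 + z1*z2)/8

rng = np.random.default_rng(1)
for w1, w2 in rng.uniform(-np.pi, np.pi, size=(1000, 2)):
    lhs = 1 - sum(abs(p(w1 + s1, w2 + s2))**2
                  for s1 in (0, np.pi) for s2 in (0, np.pi))
    rhs = (np.sin(w1)**2 + np.sin(w2)**2 + np.sin(w1 + w2)**2)/4
    assert abs(lhs - rhs) < 1e-12
\end{verbatim}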
Another prominent example of a subdivision scheme is the so-called
butterfly scheme. This example shows the real advantage of
treating the isotypical components of $p$ separately for $p$ with larger support.
\begin{Example} \label{ex:butterfly}
The butterfly scheme describes an interpolatory subdivision scheme
that generates a smooth regular surface
interpolating a given set of points \cite{GDL}. The trigonometric
polynomial $p$ associated with the butterfly scheme is given by
\begin{eqnarray*}
p(z_1,z_2) & = & \frac{1}{4}+\frac{1}{8}\Bigl(z_1+z_2+z_1z_2+z_1^{-1}+z_2^{-1}+
z_1^{-1}z_2^{-1}\Bigr) \\
&& +\frac{1}{32}\Bigl(z_1^2z_2+z_1z_2^2+z_1z_2^{-1}+z_1^{-1}z_2+z_1^{-2}
z_2^{-1}+z_1^{-1}z_2^{-2}\Bigr) \\
&& -\frac1{64}\Bigl(z_1^3z_2+z_1^3z_2^2+z_1^2z_2^3+z_1z_2^3+z_1^2
z_2^{-1}+z_1z_2^{-2} \\
&& +z_1^{-1}z_2^2+z_1^{-2}z_2+z_1^{-3}z_2^{-1}+z_1^{-3}z_2^{-2}+
z_1^{-2}z_2^{-3}+z_1^{-1}z_2^{-3}\Bigr).
\end{eqnarray*}
Its first isotypical component is $p_{0,0}=\frac14$, which is the case for every
interpolatory subdivision scheme. The other isotypical components, in terms of
$z_k=e^{-i\omega_k}$, $k=1,2$, are given by $p_{1,0}
(z_1,z_2)=\frac14\cos(\omega_1)+\frac1{16}\cos(\omega_1+2\omega_2)
-\frac1{32}\cos(3\omega_1+2\omega_2)-\frac1{32}\cos(\omega_1 -2\omega_2)$, i.e.,
$$
p_{1,0}(z_1,z_2)\>=\>\frac14\cos(\omega_1)+\frac18\sin^2(\omega_1)
\cos(\omega_1+2\omega_2),
$$
and $p_{0,1}(z_1,z_2)=p_{1,0}(z_2,z_1)$, $p_{1,1}(z_1,z_2)=p_{1,0}
(z_1z_2,z_2^{-1})$. Note that on ${\mathbb T}^2$
$$
|p_\chi|\le\frac14 \quad \hbox{for all} \quad \chi\in G',
$$
thus, our method is applicable. Simple computation shows that
\begin{eqnarray*}
1-16\,\bigl|p_{1,0}(z_1,z_2)
\bigr|^2&=&1-\cos^2(\omega_1)-\cos(\omega_1)\sin^2(\omega_1)\cos
(\omega_1+2\omega_2)\\&-&\frac14\sin^4(\omega_1)\cos^2(\omega_1+ 2\omega_2).
\end{eqnarray*}
Setting $u_j:=\sin(\omega_j)$, $j=1,2$, $v:= \sin(\omega_1+\omega_2)$,
$v':=\sin(\omega_1-\omega_2)$, $w:=\sin (\omega_1+2\omega_2)$ and $w':=\sin
(2\omega_1+\omega_2)$, we get
$$1-16\,\bigl|p_{1,0}(z_1,z_2)\bigr|^2\>=\>\frac14\,u_1^2\Bigl(w^2+
(u_2^2+v^2)^2+2u_2^2+2v^2\Bigr).$$
Therefore,
\begin{eqnarray*}
f=1-\sum_{\sigma\in G} p^{\sigma *} p^\sigma & = & \frac14\Bigl(u_1^2u_2^2+u_1^2
v^2+u_2^2v^2\Bigr)+\frac1{16}\Bigl(u_1^2w^2+u_2^2w'^2+v^2v'^2
\Bigr) \\
&& +\frac1{16}\Bigl(u_1^2(u_2^2+v^2)^2+u_2^2(u_1^2+v^2)^2+v^2(u_1^2
+u_2^2)^2\Bigr).
\end{eqnarray*}
This provides a decomposition $\displaystyle f=
\sum_{j=1}^9 h_j^* h_j$ into a sum of $9$ squares. As in the previous example, each
$h_j$ has only one nonzero isotypical component $h_{j,\chi_j}$. Thus, by part $(b)$
of Lemma \ref{lem:subQMFiso} and by Theorem \ref{th:LaiSt}, there
exists a tight frame with $13$ generators. Namely, as in the proof of Theorem \ref{th:LaiSt},
we get
\begin{eqnarray*}
q_1(z_1,z_2)&=&\frac{1}{2}-\frac{1}{2}p(z_1,z_2), \quad q_2(z_1,z_2)=
\frac{1}{2}z_1-2 p(z_1,z_2)p_{(1,0)}^*(z_1,z_2) \\
q_3(z_1,z_2)&=&q_2(z_2,z_1), \quad q_4(z_1,z_2)=q_2(z_1z_2,z_2^{-1})\\
q_{4+j}(z_1,z_2)&=&p(z_1,z_2) \widetilde{h}^*_{j,\chi_j}, \quad j=1, \dots, 9,
\end{eqnarray*}
where $\widetilde{h}_{j,\chi_j}$ are the lifted isotypical components defined as
in Lemma \ref{lem:subQMFiso}. Let ${\cal N}=\{0, \dots ,7\}^2$, $p={\boldsymbol{p}} \cdot
{\boldsymbol{x}}$ and $q_j={\boldsymbol{q}}_j \cdot {\boldsymbol{x}}$ with ${\boldsymbol{x}}= [
z^\alpha \ : \ \alpha \in {\cal N}]^T$. The corresponding null-matrix $O \in {\mathbb R}^{64
\times 64}$ satisfying \eqref{id:null-matrices} is given by
$$
{\boldsymbol{x}}^* \cdot O \cdot {\boldsymbol{x}}= {\boldsymbol{x}}^* \left[
\sum_{j=1}^{13} {\boldsymbol{q}}_j^T {\boldsymbol{q}}_j -\hbox{diag}({\boldsymbol{p}})+{\boldsymbol{p}}^T {\boldsymbol{p}} \right] {\boldsymbol{x}}.
$$
Note that other factorizations of the positive semi-definite matrix
$\hbox{diag}({\boldsymbol{p}})-{\boldsymbol{p}}^T {\boldsymbol{p}}+O$ of rank $13$ lead to other possible tight frames
with at least $13$ frame generators. An advantage of using semi-definite
programming techniques is that they can possibly yield $q_j$ of smaller degree and
reduce the rank of $\hbox{diag}({\boldsymbol{p}})-{\boldsymbol{p}}^T {\boldsymbol{p}}+O$.
Using the technique of semi-definite programming the authors in \cite{CS08}
constructed numerically a tight frame for the butterfly scheme with 18 frame
generators. The advantage of our construction is that the frame
generators are determined analytically. The disadvantage is that their support
is approximately twice as large as that of the frame generators in \cite{CS08}.
\end{Example}
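The key algebraic step in this example is the closed form for $1-16|p_{1,0}|^2$ in terms of $u_1,u_2,v,w$; the short Python check below (a numerical spot check only, not part of the construction) confirms it at random points.
\begin{verbatim}
import numpy as np

def p10(w1, w2):
    # isotypical component p_{1,0} of the butterfly symbol
    return np.cos(w1)/4 + np.sin(w1)**2 * np.cos(w1 + 2*w2)/8

rng = np.random.default_rng(2)
for w1, w2 in rng.uniform(-np.pi, np.pi, size=(1000, 2)):
    u1, u2 = np.sin(w1), np.sin(w2)
    v, w = np.sin(w1 + w2), np.sin(w1 + 2*w2)
    lhs = 1 - 16*p10(w1, w2)**2
    rhs = u1**2 * (w**2 + (u2**2 + v**2)**2 + 2*u2**2 + 2*v**2)/4
    assert abs(lhs - rhs) < 1e-12
\end{verbatim}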
The next example is one of the family of interpolatory $\sqrt{3}-$subdivision schemes studied in \cite{JO}.
The associated dilation matrix is $\displaystyle{M=\left[\begin{array}{rr} 1&2\\-2&-1 \end{array} \right]}$
and $m=3$.
\begin{Example} \label{ex:jiang_oswald}
The symbol of the scheme is given by
$$
p(z_1,z_2)= p_{(0,0)}(z_1,z_2)+ p_{(1,0)}(z_1,z_2)+ p_{(0,1)}(z_1,z_2)
$$
with isotypical components $p_{(0,0)}=\frac{1}{3}$,
\begin{eqnarray*}
p_{(0,1)}(z_1,z_2)=\frac{4}{27}(z_2+z_1^{-1}+z_1z_2^{-1})-\frac{1}{27}(z_1^{-2}z_2^{2}+z_1^2+z_2^{-2})
\end{eqnarray*}
and $ p_{(1,0)}(z_1,z_2)=p_{(0,1)}(z_2,z_1)$. We have by Lemma
\ref{lem:subQMFiso} and due to the equality $|p_{(0,1)}(z_1,z_2)|^2=|p_{(1,0)}(z_1,z_2)|^2$
$$
1-\sum_{\sigma \in G} p^{\sigma *} p^\sigma=2\left( \frac19-p_{(0,1)}^*p_{(0,1)}\right),
$$
thus it suffices to consider only
\begin{eqnarray*}
\frac{1}{9}&-&|p_{(0,1)}(z_1,z_2)|^2=3^{-2}-27^{-2} \Big(51+16\cos(\omega_1+\omega_2)+16\cos(2\omega_1-\omega_2) \\
&+&16\cos(\omega_1-2\omega_2)+2\cos(2\omega_1+2\omega_2)+2\cos(2\omega_1-4\omega_2)\\&+&2\cos(4\omega_1-2\omega_2)
-8\cos(3\omega_1)-8\cos(3\omega_2)-8\cos(3\omega_1-3\omega_2)\Big).
\end{eqnarray*}
Numerical tests show that this polynomial is nonnegative.
\end{Example}
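Such a numerical test can be as simple as evaluating $\frac19-|p_{(0,1)}|^2$ on a fine grid; a Python sketch of this check follows (the grid resolution is our choice).
\begin{verbatim}
import numpy as np

def p01(w1, w2):
    z1, z2 = np.exp(-1j*w1), np.exp(-1j*w2)
    return (4*(z2 + 1/z1 + z1/z2) - (z2**2/z1**2 + z1**2 + 1/z2**2))/27

w = np.linspace(-np.pi, np.pi, 1001)
W1, W2 = np.meshgrid(w, w)
vals = 1/9 - np.abs(p01(W1, W2))**2
print(vals.min())  # approximately 0 (up to rounding); no clearly negative values
\end{verbatim}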
\begin{thebibliography}{99}
\bibitem{JoBra} {O. Bratteli, P.E.T. Jorgensen, Isometries, shifts, Cuntz algebras
and multiresolution wavelet analysis of scale $N$,
{\sl Integral Equations and Operator Theory} {\bf 28} (1997), 382--443.}
\bibitem{Cabrelli} {C. Cabrelli, C. Heil, U. Molter, Accuracy of lattice translates of several multidimensional refinable
functions, {\sl J. Approx. Theory} {\bf 95} (1998), 5--52.}
\bibitem{CCH} {M. Charina, C. K. Chui, W. He, Tight frames of compactly
supported multivariate multi-wavelets, {\sl J. Comp. Appl. Math.} {\bf 223}
(2010), 2044--2061.}
\bibitem{CS}{ M.~Charina and J.~St\"ockler, {\it Tight wavelet frames for irregular
multiresolution analysis}, {\sl Appl. Comp. Harmonic
Anal.} {\bf 25} (2008), 98--113.}
\bibitem{CS07}{ M.~Charina and J.~St\"ockler, Tight wavelet frames for
subdivision, \sl{ J. Comp. Applied
Math.} {\bf 221} (2008), 293--301.}
\bibitem{CS08} {M. Charina and J. St\"ockler, Construction of tight wavelet frames by semi--definite programming,
\sl{J. Approx. Theory} {\bf 162} (2010), 1429-1449.}
\bibitem{CH} {C. K. Chui and W. He, Construction of multivariate tight frame via Kronecker products,
{\sl Appl. Comput. Harmon. Anal.} {\bf 11} (2001), 305--312.}
\bibitem{CHS04} {C. K.~Chui, W.~He and J.~St\"ockler, Nonstationary tight wavelet frames, I:
Bounded Intervals, {\sl Appl. Comp. Harmonic Anal.} {\bf 17}
(2004), 141--197.}
\bibitem{CHS05} {C. K.~Chui, W.~He and J.~St\"ockler, Nonstationary tight wavelet frames, II:
Unbounded Intervals, {\sl Appl. Comp. Harmonic Anal.} {\bf 18}
(2005), 25--66.}
\bibitem{CHS01} {C. K.~Chui, W.~He and J.~St\"ockler,
Compactly supported tight and sibling frames with maximum vanishing moments, {\sl Appl. Comp. Harmonic Anal.} {\bf 13} (2002), 224--262.}
\bibitem{CDD1} {A.~Cohen and W.~Dahmen and R.~DeVore,
Adaptive wavelet methods for elliptic operator equations -
Convergence rates, {\sl Math. Comput.} {\bf 70} (2001), 27--75.}
\bibitem{CDD2} {A.~Cohen and W.~Dahmen and R.~DeVore,
Adaptive wavelet methods {II}: Beyond the elliptic case, {\sl
Found. Comput. Math.} {\bf 2} (2002), 203--245.}
\bibitem{CMQ} {Y.-S.~Chang, K.~McDonnell, H.~Qin, An interpolatory subdivision for volumetric models over
simplicial complexes, {\sl in Proceedings of Shape Modeling International}, 2003, 143--152.}
\bibitem{CoifmanDonoho1995} {R. R. Coifman, D. L. Donoho,
Translation-invariant de-noising, in: A. Antoniadis (ed.) et al.,
{\sl Wavelets and Statistics}, Springer-Verlag, Lect. Notes Stat.
vol. 103, 125--150, 1995.}
\bibitem{Daub}
I.~Daubechies, {\it Ten Lectures on Wavelets}, CBMS-NSF Regional Conference
Series in Applied Mathematics, vol.~61, SIAM, Philadelphia, 1992.
\bibitem{DHRS03} {I.~Daubechies, B.~Han, A.~Ron, Z.~Shen, Framelets: MRA-based constructions
of wavelet frames, {\sl Appl. Comp. Harmonic Anal.} {\bf 14} (2003), 1--46.}
\bibitem{deBoor}
{C.~de Boor, K.~H\"ollig, and S.~D.~Riemenschneider, {\it Box
Splines}, Appl.~Math.~Sci., vol.~98, Springer-Verlag, New York,
1993.}
\bibitem{Shen2011} {B. Dong, H. Ji, J. Li, Z. Shen, and Y. Xu, Wavelet frame
based blind image inpainting, {\sl Appl. Comp. Harmonic Anal.}, {\bf 32} (2012), 268--279.}
\bibitem{GDL} {J.~A.~Gregory, N.~Dyn, D.~Levin, A butterfly subdivision scheme for surface
interpolation with a tension control, {\sl ACM Transactions on Graphics 9}, {\bf 2} (1990), 160--169.}
\bibitem{GR} {K. Gr\"ochenig, A. Ron, Tight compactly supported wavelet frames of arbitrarily high smoothness,
{\sl Proc. Amer. Math. Soc}, {\bf 126} (1998), 1101--1107.}
\bibitem{GGL} {S. S. Goh, T. T. Goodman and S. L. Lee, Constructing tight
frames of multivariate functions, {\sl J. Approx. Theory}, {\bf 158} (2009), 49--68.}
\bibitem{GT} {S. S. Goh and K. M. Teo, Extension principles for tight wavelet frames
of periodic functions, {\sl Appl. Comput. Harmon. Anal.} {\bf 25} (2008), 168--186. }
\bibitem{HanMo2005} {B. Han and Q. Mo, Multiwavelet frames from refinable function
vectors, {\sl Adv. Comput. Math.} {\bf 18} (2003), 211--245.}
\bibitem{HP} J.W. Helton, M. Putinar, {\it Positive Polynomials in Scalar and
Matrix Variables, the Spectral Theorem and Optimization}, in vol. {\it Operator
Theory, Structured Matrices, and Dilations}, Theta, Bucharest, 2007, 229--306.
\bibitem{JePlo}
K.~Jetter and G.~Plonka, A survey on $L_2$-approximation orders
from shift-invariant spaces, {\it in:} N.~Dyn, D.~Leviatan,
D.~Levin, and A.~Pinkus (eds.), {\it Multivariate Approximation
and Applications}, Cambridge University Press, Cambridge, 2001,
73--111.
\bibitem{JiaJiang} {R.-Q.~Jia, Q.~Jiang, Approximation power of refinable vectors of functions, in Wavelet analysis and applications,
In D. Deng, D. Huang, R. Q. Jia, W. Lin, and J. Wang, editors, Proceedings of an International
Conference on Wavelet Analysis and its Applications, 2002, 155--178.}
\bibitem{JO} {Q. Jiang and P. Oswald, Triangular $\sqrt{3}-$subdivision schemes:
the regular case, {\sl J. Comp. Appl. Math.} {\bf 156} (2003), 47--75.}
\bibitem{KSS} {A.~Khodakovsky, P.~Schr\"oder, W.~Sweldens,
Progressive geometry compression, {\sl Proceedings of SIGGRAPH} 2000.}
\bibitem{LS06} {M. J. Lai, J. St\"ockler, Construction of multivariate
compactly supported tight wavelet frames,
{\sl Appl. Comp. Harmonic Anal.} {\bf 21} (2006), 324--348.}
\bibitem{lasserrebook}
J.B. Lasserre, {\it Moments, Positive Polynomials and Their Applications},
Imperial College Press, London 2009.
\bibitem{LP} J. B. Lasserre, M. Putinar, {\it Positivity and optimization: beyond
polynomials}, in vol. {\it Handbook on Semidefinite, Cone and Polynomial
Optimization} (M. Anjos, J.-B. Lasserre, eds.), Springer, Berlin, 2012, 407--434.
\bibitem{VisualComp} {X.-Z. Liang, Y.-H. Xue, Q. Li, Some applications of Loop-subdivision wavelet
tight frames to the processing of 3D graphics, {\sl The Visual Computer} {\bf 2701} (2011),
35--43.}
\bibitem{RS95} {A.~Ron, Z.~Shen, Affine systems in $L_2({\mathbb R}^d)$: the
analysis of the analysis operator, {\sl J. Func. Anal.}
{\bf 148} (1997), 408--447.}
\bibitem{sch:surf}
{C.~Scheiderer,
Sums of squares on real algebraic surfaces.
Manuscripta Math.\ \textbf{119} (2006), 395--410.}
\bibitem{sch:mz}
{C. Scheiderer,
Sums of squares on real algebraic curves.
{\sl Math.~Z.} \textbf{245} (2003), 725--760.}
\bibitem{S99}
{C. Scheiderer, Sums of squares of regular functions on real
algebraic varieties, {\sl Trans. AMS} {\bf 352} (1999), 1039--1069.}
\bibitem{sch:future}
{C. Scheiderer, Work in progress.}
\bibitem{Selesnick2001} {I. W. Selesnick, Smooth wavelet tight frames with zero
moments, {\sl Appl. Comput. Harmon. Anal.} {\bf 10} (2001), 163--181.}
\bibitem{StrangNguyen} {G. Strang, T. Nguyen, {\it Wavelets and Filter Banks.}
Wellesley-Cambridge Press, Wellesley 1997.}
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Checking the Quality of Approximation of $p$-values
in Statistical Tests for Random Number Generators
by Using a Three-Level Test}
\author[haramoto]{Hiroshi Haramoto\corref{cor1}}
\address[haramoto]{Faculty of Education, Ehime University, Ehime 790-8577, Japan}
\ead{haramoto@ehime-u.ac.jp}
\cortext[cor1]{Corresponding Author}
\author[matsumoto]{Makoto Matsumoto}
\address[matsumoto]
{Graduate School of Sciences, Hiroshima University, Hiroshima 739-8526, Japan}
\ead{m-mat@math.sci.hiroshima-u.ac.jp}
\begin{abstract}
Statistical tests of pseudorandom number generators (PRNGs) are
applicable to any type of random number generators and are
indispensable for evaluation. While several practical packages
for statistical tests of randomness exist, they may suffer from
a lack of reliability: for some tests, the amount of approximation error
can be deemed significant.
Reducing this error by finding a better approximation is necessary,
but it generally requires an enormous amount of effort.
In this paper, we introduce an experimental method for revealing defects
in statistical tests by using a three-level test proposed by Okutomi and Nakamura.
In particular, we investigate the NIST test suite and the test batteries
in TestU01, which are widely used statistical packages.
Furthermore, we show the efficiency of several modifications for some tests.
\end{abstract}
\begin{keyword}
Statistical testing \sep Pseudorandom number generations \sep
Three-level test
\end{keyword}
\end{frontmatter}
\section{Introduction}
Statistical testing of pseudorandom number generators (PRNGs) is
indispensable for their evaluation and many such test suites exist.
Widely used examples are
TestU01 by L'Ecuyer and Simard \cite{L'Ecuyer:2007:TCL:1268776.1268777},
and the test suite of the National Institute of Standards and Technology
(NIST) \cite{Bassham:2010:SRS:2206233}.
Those suites are easy to apply to PRNGs, and further tests are still
being designed.
However, implementers and users always face an important problem
in determining whether each test yields correct $p$-values.
Common problems include making tests based on incorrect mathematical
analyses, parameter selection through experiments,
poor implementations damaging testing credibility, etc.
\textcolor{black}{Moreover, some statistical tests yield erroneous results
because they use approximation formulas for $p$-values with
non-negligible error.
Therefore, checking the accuracy of the approximation formula
is important.}
The aim of this paper is to develop a method for checking the
\textcolor{black}{quality of the approximation for the $p$-values}
of statistical tests by using a three-level test.
This method has the merit of being easily conducted experimentally.
Furthermore, our criterion only makes use of the uniformity of $p$-values,
meaning that a wide range of tests can be subjected to the three-level
method.
Additionally, the result of this test is a $p$-value, so it is easy to
understand as a figure of merit in statistical tests.
The rest of this paper is organized as follows.
In Section 2, we briefly review statistical testing for PRNGs.
In section 3, we consider a three-level test for checking
\textcolor{black}{the quality of the approximation for the $p$-values}
of statistical tests proposed by Okutomi and Nakamura \cite{110007504717}.
In section 4, we present several results for the NIST test suite,
SmallCrush and Crush in TestU01.
We also present some modifications to those suites.
These results support the usefulness of the three-level test.
\section{Statistical testing for PRNGs \textcolor{black}{and approximation
error}}
This section gives a brief explanation on statistical testing for PRNGs,
especially one-level and two-level tests.
You can find further descriptions and explanations in
\cite{Knuth:1997:ACP:270146, rLEC92a, MR1310607, doi:10.1080/00949659708811859,
doi:10.1287/opre.48.2.308.12385, TestU01Manual}.
\textcolor{black}{Our aim is to use these methods to
evaluate the precision of the approximations used
in statistical tests.}
Let $I$ denote the two element set $\{ 0, 1\}$ or
the interval $[0, 1)$.
Let $X_1, X_2, \ldots$ be random variables distributed over $I$,
with each $X_k$ representing the $k$-th output of the tested PRNG.
A statistical test (called a one-level test) looks for empirical evidence
against the null hypothesis
$$
\mathcal{H}_0 : X_1, X_2, \ldots, X_n \underset{{i.i.d.}}{\sim} U(I)
$$
with a test statistic
$$
f : I^n \to \mathbb{R}.
$$
Let $\bm{X} = (X_1, \ldots, X_n)$.
\textcolor{black}{
In a statistical test, we assume that
the distribution of $f(\bm{X})$ under $\mathcal{H}_0$
is well-approximated by a known (cumulative) distribution $F$.
}
Thus, for our purpose to test the exactness of the
approximation under $\mathcal{H}_0$,
we make the following hypothesis
$$
\mathcal{H}' : f(\bm{X}) {\sim} F.
$$
Let $\bm{a} = (a_1,\ldots, a_n) \in I^n$
be an output sequence of the PRNG. If the $p$-value
$$
F(f(\bm{a}))=\Pr \left( f(\bm{X}) \leq f(\bm{a}) \right)
$$
is too close to 0 or too close to 1, then
either $\mathcal{H}_0$ or $\mathcal{H}'$ is rejected.
In usual tests for PRNGs, $\mathcal{H}'$ is assumed, and hence
it is the randomness of the PRNG ($\mathcal{H}_0$) that is rejected.
In this manuscript, $\mathcal{H}_0$ is assumed,
and hence it is the precision of the approximation ($\mathcal{H}'$)
that is rejected.
If the $p$-value is very small (e.g., less than $10^{-10}$),
then it is clear that either $\mathcal{H}_0$ or $\mathcal{H}'$
is rejected.
However, it is difficult to judge when the $p$-value is suspicious
but not very small ($10^{-4}$, for example).
In order to avoid such difficulties, a two-level test is often used,
see \cite{Knuth:1997:ACP:270146, rLEC92a}.
A two-level test can be considered as a composite function
$$
I^{nN} \stackrel{f^N}{\longrightarrow}
\mathbb{R}^N \stackrel{g}{\longrightarrow} \mathbb{R},
$$
where $f$ is the test statistic of the one-level test
and $f^N$ is defined by
$$
f^N(\bm{a}_1, \ldots, \bm{a}_N) :=
\left( f(\bm{a}_1), \ldots, f(\bm{a}_N) \right) ~~~
(\bm{a}_1, \ldots, \bm{a}_N \in I^{n}).
$$
At the second level, the function $g$ corresponds to
a Goodness-Of-Fit (GOF) test that compares
the empirical distribution of the $N$ $p$-values
$$
F(f(\bm{a}_1)), \ldots, F(f(\bm{a}_N))
$$
from the observations $f(\bm{a_1})$, $\ldots$, $f(\bm{a}_N)$
with its theoretical distribution;
the sample size at the second level is $N$.
If the $p$-value at the second level is small,
either $\mathcal{H}_0$ or $\mathcal{H}'$
is rejected.
Two-level tests permit one to apply the test with a larger total sample size
to increase its power.
Hence, if the generator has a defect of the kind detected by the test,
then the $p$-value at the second level tends to become extremely small
as the sample size $N$ is increased.
However, the $p$-value also tends to be very small if the approximation
of the $p$-values at the first level
\textcolor{black}{is not good enough} (i.e.\ if ${\mathcal H}'$ fails).
In this case, computational errors accumulate at each level
and two-level tests detect the Lack-Of-Fit of that approximation,
leading to rejection even if the generator is good
\cite{rLEC92a, doi:10.1287/opre.48.2.308.12385, rLEC02c, 6135498}.
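To make this structure concrete, a schematic Python sketch of a two-level test is given below. The monobit-style first-level statistic and the Kolmogorov-Smirnov GOF test at the second level are illustrative choices only; they are not the specific tests of any particular suite.
\begin{verbatim}
import numpy as np
from scipy import stats

def two_level_test(next_bits, one_level_pvalue, n, N):
    """Compute N first-level p-values on disjoint blocks of n bits,
    then compare their empirical distribution with U(0,1)."""
    pvals = [one_level_pvalue(next_bits(n)) for _ in range(N)]
    return stats.kstest(pvals, "uniform").pvalue

def monobit_pvalue(bits):
    # simple frequency (monobit) statistic with a normal approximation
    s = abs(2*bits.sum() - len(bits)) / np.sqrt(len(bits))
    return 2*stats.norm.sf(s)

rng = np.random.default_rng(12345)
print(two_level_test(lambda n: rng.integers(0, 2, size=n),
                     monobit_pvalue, n=10**5, N=100))
\end{verbatim}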
\section{Checking the quality of the approximation of the $p$-value
by using a three-level test}
Although it is easy to extend the level of a statistical test
from two to three (or higher) using a technique such as
$$
I^{nNN'} \,
\stackrel{f^{N \times N'}}{\longrightarrow}
\mathbb{R}^{NN'} \stackrel{g^{N'}}{\longrightarrow}
\mathbb{R}^{N'} \stackrel{h}{\longrightarrow} \mathbb{R},
$$
this type of test is \textcolor{black}{often useless,
because the approximation error of the second level
may destroy the result: the resulting $p$-values tend to be too close
to 0}.
By contrast, Okutomi and Nakamura proposed a three-level test that
can be considered as reliable as a two-level one \cite{110007504717}.
The novelty of their method is that it uses an error-free function
at the second level.
This allows us to increase the sample size by a factor of $N'$,
and consequently to increase the power while
avoiding an accumulation of computational errors.
Okutomi and Nakamura originally intended to develop
a new statistical test for PRNGs,
but their method is also useful for checking the quality of the approximations used in statistical tests.
Let $f$ be an $n$-variable statistic corresponding to the one-level test.
Suppose we want to check the quality of the approximation of the
distribution of \textcolor{black}{$f(\bm{X})$}, namely $\mathcal{H}'$.
Let $(\bm{a}_1, \ldots, \bm{a}_{NN'}) \in I^{n N N'}$
($\bm{a}_i \in I^n$) be a sequence of $NN'$ vectors in $I^n$.
At the first level, we compute
$f(\bm{a}_1)$, $\ldots$, $f(\bm{a}_{N N'})$.
\textcolor{black}{Here we make an assumption: the approximating distribution
$F$ in the hypothesis ${\mathcal H}'$ is assumed
to be continuous.}
Under ${\mathcal H}_0$ and ${\mathcal H}'$, the probability distribution of
$F(f(\bm{X}))$
is uniform in $[0,1]$.
This is proved by
$\Pr(F(f(\bm{X}))\leq p)=\Pr(f(\bm{X})\leq F^{-1}(p))
=F(F^{-1}(p))=p$, where $F^{-1}$ is the
generalized inverse distribution function
$F^{-1}(p)=\inf\{x \in \mathbb{R} \mid F(x) \geq p\}$
and the equalities follow from the continuity of $F$.
\textcolor{black}{Note that in the case of $I=\{0,1\}$,
$f(\bm{X})$ cannot have a continuous distribution,
thus ${\mathcal H}'$ must have some error.
Therefore, we should distinguish the right and left $p$-values
\cite{L'Ecuyer:2007:TCL:1268776.1268777, TestU01Manual}.
In this paper, the assumption ${\mathcal H}'$
means that $F$ is an approximation good enough
so that the statistical tests behave well.
Thus, ${\mathcal H}'$ includes
the assumption that each probability mass is small
enough to be negligible by itself.
}
We fix an arbitrary significance level $\alpha \in (0, 1)$.
The function $g$, which corresponds to the second level,
is the function that counts the number $T_i$ of $p$-values
greater than or equal to $\alpha$ in
$$
F(f(\bm{a}_{1+(i-1)N})), \ldots,
F(f(\bm{a}_{iN}))
$$
for $i=1, \ldots, N'$.
Under the hypotheses
$\mathcal{H}_0$, $\mathcal{H}'$ and the continuity
of $F$,
the distribution of the above $p$-values should be
independently uniformly distributed over the interval
$[0,1]$, as shown above.
Therefore, $T_i$ should have the binomial distribution
$B(N, 1-\alpha)$.
Finally, at the third level, we compare the empirical distributions of
$T_1$, $\ldots$, $T_{N'}$ and $B(N, 1-\alpha)$ via a GOF test.
If the resulting $p$-value at the third level is extremely small,
\textcolor{black}{it strongly suggests that either ${\mathcal H}_0$ or
${\mathcal H}'$ fails. In our purpose, we use good
PRNGs so that ${\mathcal H}_0$ is assumed,
and consider that ${\mathcal H}'$ is rejected, or equivalently
the approximation of $f(\bm{X})$ by $F$ is not good enough.}
In this paper, following \cite{110007504717}, we use the parameters
$\alpha = 0.01$, $N=10^3$, and $N'=10^3$,
as well as the following categorization:
\begin{align*}
C_0 &= \{0, 1, \ldots, 981\}, \\
C_i &= \{981+i\} ~~~ (i=1,2,\ldots, 15), \\
C_{16} &= \{997, 998, 999, 1000\}.
\end{align*}
Let $Y_i := \# \{ T_j \mid j=1, \ldots, N', T_j \in C_i\}$
for $i=0, \ldots, 16$.
We compute the $\chi^2$-value
$$
h(T_1, \ldots, T_{N'}) := \sum_{i=0}^{16} \frac{(Y_i-N'p_i)^2}{N'p_i},
$$
where $p_i = \sum_{j \in C_i} \binom{N}{j}(1-\alpha)^j \alpha^{N-j}$.
The distribution of this statistic under $\mathcal{H}_0$ and
$\mathcal{H}'$
is approximated by the $\chi^2$-distribution with
$16$ degrees of freedom.
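For concreteness, the second and third levels described above can be written in a few lines. The Python sketch below is our illustration of the procedure (not the code of \cite{110007504717}); it takes the $NN'$ first-level $p$-values as input and returns the third-level $p$-value.
\begin{verbatim}
import numpy as np
from scipy import stats

ALPHA, N, NPRIME = 0.01, 1000, 1000

def third_level_pvalue(first_level_pvalues):
    P = np.asarray(first_level_pvalues).reshape(NPRIME, N)
    T = (P >= ALPHA).sum(axis=1)   # level two: exact counts, no approximation

    # level three: chi-square GOF of T_1,...,T_{N'} against Binomial(N, 1-ALPHA)
    cats = ([np.arange(0, 982)] + [np.array([981 + i]) for i in range(1, 16)]
            + [np.arange(997, 1001)])        # categories C_0, ..., C_16
    probs = np.array([stats.binom.pmf(c, N, 1 - ALPHA).sum() for c in cats])
    counts = np.array([np.isin(T, c).sum() for c in cats])
    chi2 = np.sum((counts - NPRIME*probs)**2 / (NPRIME*probs))
    return stats.chi2.sf(chi2, df=len(cats) - 1)
\end{verbatim}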
\textcolor{black}{
It might seem to be more natural to
use a GOF test on the entire distribution
of the $N'$ $p$-values under the uniform distribution hypothesis,
by using a test such as a Kolmogorov-Smirnov test.
However, if the distribution of the test statistic at the second level
is only approximated and the approximation error is significant,
a three-level test will detect this approximation
error, and tends to give $p$-values nearly zero.}
On the other hand, the presented method counts only the number of
$p$-values at the second level, which has \textcolor{black}{no}
approximation error introduced at this level
(under the hypotheses ${\mathcal H}_0$ and
${\mathcal H}'$).
This is a reason why the proposed test is better.
\section{Experimental results}
This section shows the experimental results of
\textcolor{black}{the three-level test} for
the NIST test suite, SmallCrush, and Crush in TestU01.
In order to mimic truly random number sequences at the first level,
we adopt Mersenne Twister (MT) \cite{DBLP:journals/tomacs/MatsumotoN98}
and a PRNG from the SHA1 algorithm.
Note that MT fails certain tests (e.g. the linear complexity test,
the binary matrix rank test with large matrix size) even when
the $p$-value is computed correctly with no significant error.
Thus we need both generators in the following experiments.
\subsection{Results of the NIST test suite}
The NIST test suite
consists of 15 statistical tests for randomness of bit sequences,
and its result is a list of $188$ $p$-values.
Since its publication in 2001, many modifications and corrections have been studied.
However, the latest version 2.1.2 \cite{Bassham:2010:SRS:2206233},
released in 2014, does not incorporate several modifications.
We will show that the three-level method can reveal known defects of some statistical tests
and demonstrate the effectiveness of several proposals to increase the reliability of those tests.
In addition,
\textcolor{black}{through the three-level methods, we deduce new
constraints}
for the Random Excursions test and the Random Excursions Variant test.
First, we consider all tests other than the Random Excursions test and
the Random Excursions Variant test.
In this experiment, the sample size at the first level $n$
is fixed to $10^6$, as recommended by NIST.
\textcolor{black}{Recall that throughout the experiments,
the number of iterations $N$ in the second level
and $N'$ in the third level are both $1000$
with categorizations described in the previous section.
}
Table 1 shows the
results
\textcolor{black}{of the three-level test}
for the original NIST test suite and the modified tests explained later.
The Cumulative Sums test and the Serial test have
two statistics each, so two $p$-values are listed in Table 1.
The Non-Overlapping Template Matching test reports $148$ $p$-values,
so only the passing rates are given in the table.
From our experiments, the $p$-values from the
Discrete Fourier Transform (DFT) test,
the Overlapping Template Matching test,
and Maurer's Universal Statistical test in the original test suite are
much too small.
Additionally, the $p$-values of the Longest Runs of Ones in a Block test
are relatively small.
This result indicates that those tests have some flaws.
After we applied appropriate modifications,
the three-level test reported reasonable $p$-values for the four tests
\textcolor{black}{as described in the two columns at the right
in Table~1}.
\begin{table}[htpb]
\begin{center}
\caption{
Results
\textcolor{black}{of the three-level test}
for the NIST test suite with $n = 10^6$}
\begin{tabular}{c|c|c|c|c}
\multicolumn{1}{l|}{} & \multicolumn{2}{|c|}{$p$-value(Original)}
& \multicolumn{2}{|l}{$p$-value(Modified)} \\ \cline{2-5}
Test Name & MT & SHA1 & MT & SHA1 \\ \hline
Frequency & $0.85$ & $0.59$ & - & - \\
Frequency test within a Block & $0.017$ & $0.68$ & - & - \\
Cumulative Sums Test & $0.13$, $0.64$ & $0.37$, $0.43$ & - & - \\
Runs & $0.56$ & $0.47$ & - & - \\
Longest Run of Ones in a Block & 3.9E$-$5 & 1.3E$-$8 & $0.44$ & $0.0011$ \\
Binary Matrix Rank & $0.30$ & $0.13$ & - & - \\
Discrete Fourier Transform & 4.1E$-119$ & 7.2E$-$116 & $0.19$ & $0.026$ \\
Non-Overlapping Template Matching & 148/148 & 148/148 & - & - \\
Overlapping Template Matching & 7.5E$-$80 & 5.6E$-$73 & $0.70$ & $0.88$ \\
Maurer's Universal Statistical & 8.7E$-$76 & 4.1E$-$66 & $0.99$ & $0.77$ \\
Approximate Entropy & $0.40$ & $0.036$ & - & - \\
Serial & $0.67$, $0.70$ & $0.28$, $0.39$ & - & - \\
Linear Complexity & $0.023$ & $0.0030$ & - & -
\end{tabular}
\end{center}
\end{table}
Note that the current implementation of the NIST test suite
uses one-level or two-level tests, differently from the above
experiments, and the approximation error in $p$-values
provided by those tests are not large.
For example, Table 2 and Table 3 show $p$-values provided by
the NIST test suite.
\begin{table}[H]
\begin{center}
\caption{$p$-values of one-level tests and a two-level test for MT}
\begin{tabular}{c|c|c|c|c|c||c}
\multicolumn{1}{c|}{} & \multicolumn{5}{|c||}{first level ($n=10^6$)}
& second level \\ \cline{2-6}
Test Name & 1st & 2nd & 3rd & 4th & 5th & ($N=10^3$) \\ \hline
Longest Run of Ones in a Block &
0.15 & 0.39 & 0.64 & 0.029 & 0.47 & 0.88 \\ \hline
Discrete Fourier Transform
& 0.48 & 0.44 & 0.31 & 0.89 & 0.66 & 0.41 \\ \hline
Overlapping Template Matching
& 0.58 & 0.69 & 0.18 & 0.47 & 0.99 & 0.15 \\ \hline
Maurer's Universal Statistical
& 0.78 & 0.96 & 0.083 & 0.40 & 0.38 & 0.99
\end{tabular}
\caption{$p$-values of one-level tests and a two-level test for SHA1}
\begin{tabular}{c|c|c|c|c|c||c}
\multicolumn{1}{c|}{} & \multicolumn{5}{|c||}{first level ($n=10^6$)}
& second level \\ \cline{2-6}
Test Name & 1st & 2nd & 3rd & 4th & 5th & ($N=10^3$) \\ \hline
Longest Run of Ones in a Block
& 0.65 & 0.50 & 0.69 & 0.44 & 0.052 & 0.64 \\ \hline
Discrete Fourier Transform
& 0.73 & 0.038 & 0.13 & 0.77 & 0.34 & 0.034 \\ \hline
Overlapping Template Matching
& 0.21 & 0.75 & 0.91 & 0.087 & 0.76 & 0.14 \\ \hline
Maurer's Universal Statistical
& 0.32 & 0.33 & 0.63 & 0.89 & 0.090 & 0.083
\end{tabular}
\end{center}
\end{table}
Let us explain the modifications.
We begin by considering the DFT test.
Let $X_k$ be the $k$-th bit of the tested sequence.
The DFT test computes the discrete Fourier coefficients
$$
F_i = \sum_{k=0}^{n-1} (2X_k-1) \exp(-2\pi \sqrt{-1} k i/ n), ~~~
i=0,1, \ldots, n/2-1.
$$
The $p$-value of the DFT test is \textcolor{black}{approximated by}
$$
\Pr ((o_h-0.95 n/2)/\sqrt{0.05 \cdot 0.95 n / d} < Z),
~~~ Z \sim N(0, 1)
$$
for a realization $o_h$ of the number $O_h$ of $|F_j|$'s that
are smaller than some constant $h$.
The latest version of the NIST test suite uses the parameter $d=4$
proposed by Kim et al. \cite{journals/iacr/KimUH04}.
Subsequently, Pareschi et al. \cite{6135498} proposed $d=3.8$
for $n \approx 10^6$, which we use here as modification.
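A compact Python sketch of this test is given below. The threshold $h$ is not restated above; the sketch assumes the commonly used value $h=\sqrt{n\log(1/0.05)}$.
\begin{verbatim}
import numpy as np
from scipy import stats

def dft_test_pvalue(bits, d=3.8):
    # Spectral (DFT) test following the formula above; d = 3.8 is the
    # correction of Pareschi et al., d = 4 the one of Kim et al.
    n = len(bits)
    x = 2*np.asarray(bits, dtype=float) - 1
    F = np.fft.fft(x)[: n//2]              # F_0, ..., F_{n/2 - 1}
    h = np.sqrt(n*np.log(1/0.05))          # assumed threshold
    o_h = int(np.count_nonzero(np.abs(F) < h))
    z = (o_h - 0.95*n/2) / np.sqrt(0.05*0.95*n/d)
    return stats.norm.sf(z)                # Pr(Z > z), as in the text
\end{verbatim}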
The Overlapping Template Matching test uses a $\chi^2$ GOF test that
compares the empirical distribution of occurrences of a certain
bit template with the theoretical one. NIST once used the probabilities
derived by an approximation formula, and now it adopts more accurate values
derived by \cite{Hamano:2007:COT:1521215.1521232}. However,
the C code \texttt{overlappingTemplateMatching.c} overwrites the new values
with the former, incorrect ones. We thus remove this instruction
from the original code (lines 40--44),
\textcolor{black}{which is our modification}.
Maurer's Universal Statistical test detects whether the sequence can be
significantly compressed without loss of information.
The original test adopts an asymptotic measure. We use the
modification by Coron \cite{Coron1999}, a variant test statistic
which enables better detection of defects in the tested sequence.
The Longest Runs of Ones in a Block test also uses a $\chi^2$ GOF test.
The NIST test suite uses approximation values to four decimal places
instead of the theoretical probabilities.
We \textcolor{black}{replace these values with more accurate} ones, given to fifteen decimal places.
Unlike the other tests, the Random Excursions test and
the Random Excursions Variant test do not always yield $p$-values.
We review the algorithms of those tests and explain why this happens.
Both tests are based on considering successive sums of the bits
as a one-dimensional random walk.
Let $X_1, \ldots, X_n$ be random variables distributed over $\{0,1\}$.
The Random Excursions and Random Excursions Variant tests
compute the partial sums
$$
S_i := \sum_{k=1}^{i} (2X_k-1), ~~~ i=1, \ldots, n,
$$
\textcolor{black}{called the $i$-th \textit{state} of the random walk.
For an integer $x$, we say that $S_i$ takes the value $x$
if $S_i=x$.
Consider the sequence $(0, S_1, \ldots, S_n, 0)$,
and let $J$ be the number of $0$'s minus one in this sequence.}
\textcolor{black}{We call a subsequence of $(0, S_1, \ldots, S_n, 0)$
a \textit{cycle} if it has length no less than two,
it starts with $0$, ends with $0$, and contains
no $0$ between the first $0$ and the last $0$.
Hence $J$ is the total number of cycles in $(0, S_1, \ldots, S_n, 0)$.}
\textcolor{black}{
Let $x$ be an integer among
$x=\pm 1$, $\pm 2$, $\pm 3$, $\pm 4$.
For each $x$ among the eight, the Random Excursions test uses
the test statistic consisting
of six integers $\nu_k(x)$ ($k=0,1,2,3,4,5$).
For $k<5$, $\nu_k(x)$ is the number of cycles in which
the frequency of the value $x$ in the states is exactly $k$.
For $k=5$, $\nu_5(x)$ is the number of cycles in which
the frequency of the value $x$ is $5$ or more.
Thus, $\sum_{k=0}^5\nu_k(x)=J$ holds.
}
\textcolor{black}{The corresponding $\chi^2$ statistic is}
$$
\chi^2 := \sum_{k=0}^5 \frac{(\nu_k(x)-J \pi_k(x))^2}{J\pi_k(x)},
$$
where $\pi_k(x)$ is the probability
\textcolor{black}{that the state $S_i$ visits the value $x$ exactly $k$ times in a cycle,
under $\mathcal{H}_0$.}
\textcolor{black}{For the test statistic to have approximately a chi-square distribution,}
the expectation $J\pi_k(x)$ for each $k$ should not be too small, say $J\pi_k(x) \geq 5$.
The NIST test suite \textcolor{black}{discards the sample} if $J < 500$
because the minimum value of $\pi_k(x)$'s is $\pi_4(4) \approx 0.0105$.
\textcolor{black}{Thus, each test yields eight $p$-values (one for each $x$)
when $J\geq 500$,
and yields no result when $J<500$}.
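For concreteness, the following Python sketch (ours, not the NIST C implementation) computes $J$, the counts $\nu_k(x)$, and the resulting $\chi^2$ $p$-values for a $0/1$ sequence; the probabilities $\pi_k(x)$ follow the standard formulae in the NIST documentation, which indeed give $\pi_4(4) \approx 0.0105$ as quoted above.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def random_excursions_pvalues(bits, x_values=(-4, -3, -2, -1, 1, 2, 3, 4)):
    # Partial sums S_i of the +/-1 steps, padded with a 0 at both ends.
    s = np.concatenate(([0], np.cumsum(2 * np.asarray(bits, int) - 1), [0]))
    zeros = np.flatnonzero(s == 0)
    J = len(zeros) - 1                    # number of cycles
    if J < 500:                           # NIST constraint: discard the sample
        return J, None
    pvalues = {}
    for x in x_values:
        p = 1.0 / (2 * abs(x))            # pi_k(x) per the NIST documentation
        pi = np.array([1 - p] + [p * p * (1 - p) ** (k - 1) for k in range(1, 5)]
                      + [p * (1 - p) ** 4])
        nu = np.zeros(6)                  # nu_5 pools "5 or more" visits
        for a, b in zip(zeros[:-1], zeros[1:]):
            visits = np.count_nonzero(s[a:b] == x)
            nu[min(visits, 5)] += 1
        stat = np.sum((nu - J * pi) ** 2 / (J * pi))
        pvalues[x] = chi2.sf(stat, df=5)
    return J, pvalues
\end{verbatim}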
The Random Excursions Variant test computes the number $\xi(x)$ of times
that $x$ occurs across all $J$ cycles for $x=\pm 1, \pm2, \ldots, \pm 9$.
The limiting distribution of $\xi(x)$ is known to be normal with
mean $J$ and variance $J(4|x|-2)$ for each $x$: thus, the test suite uses
the statistic
$$
Z:=(\xi(x)-J)/(\sqrt{J(4|x|-2)}).
$$
The constraint is also $J \geq 500$.
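An analogous sketch for the Random Excursions Variant statistic, reporting the two-sided normal $p$-value, is the following (again ours, for illustration only):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def random_excursions_variant_pvalues(bits):
    s = np.concatenate(([0], np.cumsum(2 * np.asarray(bits, int) - 1), [0]))
    J = np.count_nonzero(s == 0) - 1
    if J < 500:
        return J, None
    pvalues = {}
    for x in [v for v in range(-9, 10) if v != 0]:
        xi = np.count_nonzero(s == x)            # occurrences of x over all J cycles
        z = (xi - J) / np.sqrt(J * (4 * abs(x) - 2))
        pvalues[x] = 2 * norm.sf(abs(z))         # two-sided normal p-value
    return J, pvalues
\end{verbatim}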
\textcolor{black}{In the Random Excursions test and
the Random Excursions Variant test, $J$ plays the role of the sample size
in computing $p$-values.
Hence, the approximations of the test statistics
by a chi-square distribution and a normal distribution
improve as $J$ increases.}
\textcolor{black}
{
Since these tests discard some parts of the output
of the PRNG, the formalism of the three-level test
does not apply directly. However, for both tests,
the first-level procedure yields a sequence
of $p$-values which are i.i.d.\ uniform on $[0,1]$
under the hypotheses ${\mathcal H}_0$ and ${\mathcal H}'$.
We iterate the first-level tests until we obtain
$N(=1000)$ sample $p$-values.
The rest of the three-level test then proceeds in the
same manner.
}
We show the results of the three-level test for the Random Excursions test
in Table 4, and for the Random Excursions Variant test in Table 5.
Note that we use a first-level sample size of
$n=10^7$ to reduce the number of tests in which the test procedure
is discontinued.
\begin{table}[h]
\begin{center}
\caption{$p$-values of
\textcolor{black}{the three-level test}
of the Random Excursions test}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\multicolumn{1}{c|}{} & \multicolumn{2}{|c|}{$J\geq 500$}
& \multicolumn{2}{|c}{$J\geq 1000$} & \multicolumn{2}{|c}{$J\geq 1500$}
& \multicolumn{2}{|c}{$J\geq 2000$}\\ \cline{2-9}
$x$ & MT & SHA1 & MT & SHA1 & MT & SHA1 & MT & SHA1 \\ \hline
$-4$ & 1.0E$-$10 & 9.1E$-$20 & 2.7E$-$03 & 4.9E$-$14 & 2.6E$-$02 & 6.5E$-$11 &
1.1E$-$03 & 1.2E$-$04 \\
$-3$ & 3.2E$-$06 & 8.1E$-$07 & 3.4E$-$04 & 2.3E$-$02 & 1.2E$-$01 & 2.4E$-$01 &
8.4E$-$02 & 4.1E$-$01 \\
$-2$ & 4.5E$-$01 & 3.1E$-$01 & 3.8E$-$01 & 1.9E$-$01 & 3.1E$-$01 & 2.1E$-$04 &
3.2E$-$01 & 1.1E$-$01 \\
$-1$ & 6.4E$-$02 & 8.9E$-$01 & 8.6E$-$01 & 2.9E$-$01 & 1.6E$-$01 & 4.1E$-$01 &
2.8E$-$02 & 2.8E$-$01 \\
$1$ & 9.0E$-$02 & 6.3E$-$01 & 5.7E$-$01 & 3.5E$-$01 & 2.9E$-$01 & 5.8E$-$01 &
2.5E$-$01 & 4.7E$-$01 \\
$2$ & 3.8E$-$02 & 5.8E$-$02 & 4.0E$-$02 & 1.3E$-$01 & 9.2E$-$03 & 4.7E$-$01 &
8.2E$-$02 & 1.7E$-$01 \\
$3$ & 3.5E$-$06 & 2.5E$-$08 & 8.5E$-$04 & 6.8E$-$05 & 1.5E$-$03 & 1.6E$-$04 &
5.5E$-$03 & 9.1E$-$05 \\
$4$ & 6.7E$-$16 & 4.7E$-$18 & 5.7E$-$03 & 6.0E$-$10 & 9.7E$-$02 & 4.8E$-$05 &
5.3E$-$02 & 4.0E$-$06
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{$p$-values of the Random Excursions Variant test}
\begin{tabular}{c|c|c|c|c}
\multicolumn{1}{c|}{} & \multicolumn{2}{|c|}{$J\geq 500$}
& \multicolumn{2}{|c}{$J\geq 1000$} \\ \cline{2-5}
$x$ & MT & SHA1 & MT & SHA1 \\ \hline
$-9$ & 1.5E$-$07 & 9.3E$-$10 & 2.5E$-$01 & 5.3E$-$01 \\
$-8$ & 2.8E$-$07 & 3.6E$-$05 & 3.3E$-$01 & 8.1E$-$01 \\
$-7$ & 1.3E$-$08 & 2.2E$-$05 & 3.9E$-$01 & 2.4E$-$01 \\
$-6$ & 8.9E$-$03 & 5.9E$-$03 & 6.0E$-$01 & 6.9E$-$01 \\
$-5$ & 5.1E$-$02 & 1.6E$-$02 & 9.2E$-$01 & 7.9E$-$01 \\
$-4$ & 7.4E$-$04 & 1.7E$-$01 & 4.5E$-$01 & 6.6E$-$01 \\
$-3$ & 8.6E$-$03 & 4.7E$-$03 & 5.2E$-$01 & 8.7E$-$01 \\
$-2$ & 2.5E$-$02 & 8.3E$-$01 & 3.1E$-$01 & 1.2E$-$02 \\
$-1$ & 4.9E$-$01 & 7.1E$-$04 & 7.2E$-$01 & 9.3E$-$01 \\
$1$ & 3.4E$-$01 & 8.8E$-$01 & 2.0E$-$01 & 7.0E$-$01 \\
$2$ & 3.4E$-$02 & 1.6E$-$02 & 6.1E$-$02 & 1.3E$-$01 \\
$3$ & 6.0E$-$01 & 1.6E$-$03 & 7.0E$-$01 & 8.5E$-$01 \\
$4$ & 8.5E$-$02 & 1.4E$-$02 & 2.1E$-$01 & 2.5E$-$01 \\
$5$ & 1.5E$-$01 & 1.5E$-$03 & 5.5E$-$01 & 5.6E$-$01 \\
$6$ & 1.8E$-$03 & 5.5E$-$05 & 1.5E$-$01 & 1.5E$-$01 \\
$7$ & 2.8E$-$03 & 1.0E$-$06 & 6.7E$-$01 & 5.3E$-$01 \\
$8$ & 2.7E$-$04 & 1.0E$-$05 & 8.7E$-$02 & 9.0E$-$01 \\
$9$ & 1.2E$-$07 & 2.6E$-$09 & 5.5E$-$01 & 4.7E$-$01
\end{tabular}
\end{center}
\end{table}
From our experiments, the Random Excursions test
\textcolor{black}{for $x=4$ shows some flaw up to
$J=1500$. To be safe, we recommend a constraint
$J\geq 2000$, stronger than the $J\geq 500$ specified by NIST, together with
a larger sample size $n=10^7$.
For the Random Excursions Variant test, given the excessively small
$p$-values for $x=\pm 9$, we recommend the constraint
$J \geq 1000$.}
\subsection{Results for SmallCrush and Crush in TestU01}
We examine the
\textcolor{black}{quality of the approximation of the $p$-values}
of the SmallCrush and Crush batteries in TestU01.
The SmallCrush battery consists of $10$ statistical tests ($16$ statistics).
Of those tests, the \texttt{smarsa\char`_BirthdaySpacings} test and
one of the \texttt{sknuth\char`_Collision} tests are based on a Poisson distribution,
meaning that their distributions of $p$-values are not uniform.
We thus assess the
\textcolor{black}{quality of the approximation of the $p$-values}
of the remaining $14$ test statistics.
Table 6 indicates that
\textcolor{black}{the $p$-value approximations of all $14$ statistics
are sufficiently accurate.}
\begin{table}[h]
\begin{center}
\caption{$p$-values of the
\textcolor{black}{three-level} test of the SmallCrush}
\begin{tabular}{c|c|l|l}
test name & distribution & MT & SHA1 \\ \hline
\texttt{sknuth\char`_Collision} & normal &$0.057$ & $0.76$ \\
\texttt{sknuth\char`_Gap} & $\chi^2$ & $0.059$ & $0.37$ \\
\texttt{sknuth\char`_SimplePoker} & $\chi^2$ & $0.47$ & $0.75$ \\
\texttt{sknuth\char`_CouponCollector} & $\chi^2$ & $0.62$ & $0.94$ \\
\texttt{sknuth\char`_MaxOft} & normal & $0.0047$ & $0.47$ \\
\texttt{sknuth\char`_MaxOft} & $\chi^2$ & $0.017$ & $0.62$ \\
\texttt{svaria\char`_WeightDistrib} & $\chi^2$ & $0.50$ & $0.049$ \\
\texttt{smarsa\char`_MatrixRank} & $\chi^2$ & $0.29$ & $0.90$ \\
\texttt{sstring\char`_HammingIndep} & normal & $0.019$ & $0.095$ \\
\texttt{swalk\char`_RandomWalk1} (\texttt{H}) & $\chi^2$ & $0.0020$ & $0.043$ \\
\texttt{swalk\char`_RandomWalk1} (\texttt{M}) & $\chi^2$ & $0.011$ & $0.48$ \\
\texttt{swalk\char`_RandomWalk1} (\texttt{J}) & $\chi^2$ & $0.26$ & $0.90$ \\
\texttt{swalk\char`_RandomWalk1} (\texttt{R}) & $\chi^2$ & $0.83$ & $0.12$ \\
\texttt{swalk\char`_RandomWalk1} (\texttt{C}) & $\chi^2$ & $0.23$ & $0.40$
\end{tabular}
\end{center}
\end{table}
The Crush battery consists of 96 tests and reports 144 $p$-values.
We check the quality of the approximation of the 76 tests (90 statistics),
whose statistics have continuous distributions,
ignoring those whose statistics are discrete, namely
the \texttt{smarsa\char`_CollisionOver} test (No.3--10),
the \texttt{smarsa\char`_BirthdaySpacings} test (No.11--17),
the \texttt{snpair\char`_ClosePairs} test (No.18--20),
the \texttt{snpair\char`_ClosePairsBitMatch} test (No.21--22),
and one of the test statistics of the
\texttt{sknuth\char`_CollisionPermut} test (No.39--40),
where the numbers correspond to the enumeration of the tests
in the user's guidebook \cite{TestU01Manual}.
To reduce the computation time, we check
Crush using the following procedure.
We
\textcolor{black}{apply the three-level test with Mersenne Twister to}
each test.
If the resulting $p$-value is smaller than $10^{-10}$,
we check the test again with a PRNG based on SHA1.
The test (i.e.\ hypothesis ${\mathcal H}'$)
is rejected if both $p$-values are smaller
than $10^{-10}$.
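Schematically, the screening logic is the following (the function names are ours; \texttt{three\_level\_pvalue} stands for the whole three-level procedure applied to one Crush statistic with the indicated generator):
\begin{verbatim}
THRESHOLD = 1e-10

def screen_crush_statistic(three_level_pvalue):
    # First pass with Mersenne Twister; re-check with SHA1 only if suspicious.
    p_mt = three_level_pvalue("MT19937")
    if p_mt >= THRESHOLD:
        return "pass"
    p_sha1 = three_level_pvalue("SHA1")
    return "reject H'" if p_sha1 < THRESHOLD else "pass"
\end{verbatim}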
Table 7 shows the tests rejected by both Mersenne Twister and SHA1.
Because the \texttt{sspectral\char`_Fourier3} test has three statistics,
the three corresponding $p$-values are listed for each generator
in Table~7,
each of which is smaller than $10^{-300}$.
The \texttt{sstring\char`_Run} test has two statistics,
so two $p$-values are listed in the table.
Similarly to the case of the NIST test suite,
the approximation error in the $p$-values computed by TestU01
is not that large, even though we find $\varepsilon$ values
in these three-level tests.
\begin{table}[ht]
\begin{center}
\caption{Rejected tests in Crush and their $p$-values
($\varepsilon$ : the $p$-value $<10^{-300}$)}
\begin{tabular}{c|c|c|c}
test name & parameters & MT & SHA1 \\ \hline
\texttt{svaria\char`_SampleCorr} & $n=5 \times 10^8$, $k=1$ & 1.8E$-$222 & 5.5E$-$237 \\
\texttt{smarsa\char`_Savir2} & $n=2 \times 10^7$, $m=2^{20}$, $t=30$
& 2.7E$-$49 & 9.9E$-$32 \\
\texttt{scomp\char`_LempelZiv} & $n=2^{25}$ &
$\varepsilon$ & $\varepsilon$ \\
\texttt{sspectral\char`_Fourier3} & $n=2^{14} \times 50000$ &
$\varepsilon$, $\varepsilon$, $\varepsilon$ &
$\varepsilon$, $\varepsilon$, $\varepsilon$ \\
\texttt{sstring\char`_Run} & $n = 10^9$ &
$\varepsilon$, $\varepsilon$ & $\varepsilon$, $\varepsilon$ \\
\end{tabular}
\end{center}
\end{table}
\textcolor{black}{
Among these rejected tests, we find that two of them can be modified
to pass the three-level test.
These are the \texttt{svaria\char`_SampleCorr} test
and the \texttt{sstring\char`_Run} test.
The improvements are shown in Table~8.
}
The \texttt{svaria\char`_SampleCorr} test computes a correlation among
$X_1, \ldots, X_n$, which are random variables distributed over $[0, 1)$.
TestU01 assumes that the statistic
$$
\frac1{n-k} \sum_{j=1}^{n-k} (X_jX_{j+k}-1/4)
$$
has the normal distribution with mean $0$ and variance $1/(12(n-k))$.
Fishman \cite{Fishman:1978:PDE:539984} shows that the statistic
$$
\frac1{n-k} \sum_{j=1}^{n-k} (X_j-1/2) (X_{j+k}-1/2)
$$
converges to normal with mean $0$ and variance $1/(144(n-k))$.
\textcolor{black}{We modified the original statistic
$\frac1{n-k} \sum_{j=1}^{n-k} (X_jX_{j+k}-1/4)$ to
$\frac1{n-k} \sum_{j=1}^{n-k} (X_j-1/2) (X_{j+k}-1/2)$.
}
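A minimal Python sketch of the modified statistic and its two-sided normal $p$-value (the variable names are ours, for illustration only):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def sample_corr_pvalue(u, k=1):
    # Modified svaria_SampleCorr statistic (Fishman's centered form).
    u = np.asarray(u, float)
    n = len(u)
    t = np.mean((u[:n - k] - 0.5) * (u[k:] - 0.5))  # (1/(n-k)) sum (X_j-1/2)(X_{j+k}-1/2)
    z = t * np.sqrt(144.0 * (n - k))                # approximately N(0,1)
    return 2 * norm.sf(abs(z))
\end{verbatim}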
The \texttt{sstring\char`_Run} test is a variant of the run test applicable
to a bit sequence, which yields two $p$-values:
the test statistics are
based on a normal distribution and a $\chi^2$ distribution.
Let $Y$ be the total number of bits needed to obtain $2n$ runs.
Under $\mathcal{H}_0$, we have $Y = \sum_{i=1}^{2n} X_i +2n$
where $X_i$ are independent geometric random variables with parameter $1/2$.
TestU01 adopts the statistic $(Y-4n)/\sqrt{8n}$ and assumes
that it can be approximated by the standard normal distribution.
However, the expectation of $X_i$ is $1$ and the variance
of $X_i$ is $2$, so
$$
E[Y] = \sum_{i=1}^{2n} E[X_i] + 2n = 4n,
~~~ V[Y] = \sum_{i=1}^{2n} V[X_i] = 4n.
$$
Thus the appropriate statistic is $(Y-4n)/\sqrt{4n}$; this is our
modification.
The other test statistic is
$$
\sum_{i=1}^{k}\dfrac{(X_{0,i}-np_i)^2}{np_i(1-p_i)} +
\sum_{i=1}^{k}\dfrac{(X_{1,i}-np_i)^2}{np_i(1-p_i)},
$$
where $X_{0, i}$ and $X_{1, i}$ are the number of runs of $0$'s and $1$'s
of length $i$ for $i=1, \ldots, k$, where $k$ is some positive integer, and
$p_i=2^{-i}$. TestU01 assumes that the statistic has approximately
the $\chi^2$ distribution with $2(n-1)$ degrees of freedom for
a $\chi^2$ GOF test.
However, the factor $1-p_i$ in the denominator seems to be unnecessary,
so our modification removes it.
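The two modified statistics can be sketched as follows (ours, for illustration; only run lengths $1,\ldots,k$ enter the chi-square sum, exactly as in the formula above):
\begin{verbatim}
import numpy as np

def run_test_statistics(bits, n, k):
    # Scan the sequence until 2n runs are observed; Y is the number of bits consumed.
    b = np.asarray(bits, int)
    change = np.flatnonzero(np.diff(b) != 0) + 1       # indices where a new run starts
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(b)]))
    lengths = (ends - starts)[:2 * n]                  # first 2n runs
    values = b[starts[:2 * n]]
    Y = int(np.sum(lengths))
    z = (Y - 4 * n) / np.sqrt(4 * n)                   # modified normal statistic
    p = 2.0 ** -np.arange(1, k + 1)                    # p_i = 2^{-i}
    chi2_stat = 0.0
    for v in (0, 1):
        counts = np.array([np.count_nonzero((values == v) & (lengths == i))
                           for i in range(1, k + 1)])
        chi2_stat += np.sum((counts - n * p) ** 2 / (n * p))   # 1-p_i removed
    return z, chi2_stat
\end{verbatim}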
Table 8 shows the $p$-values for those test statistics.
The results indicate that the above modifications are effective
in improving the reliability of the tests.
\begin{table}[ht]
\begin{center}
\caption{$p$-values of the original tests and their modifications
($\varepsilon$ : the $p$-value $<10^{-300}$)}
\begin{tabular}{c|c|c|c|c}
\multicolumn{1}{l|}{} & \multicolumn{2}{|c|}{$p$-value (Original)}
& \multicolumn{2}{|l}{$p$-value(Improved)} \\ \cline{2-5}
test name & MT & SHA1 & MT & SHA1 \\ \hline
\texttt{svaria\char`_SampleCorr} & 1.8E$-$222 & 5.5E$-$237
& $0.498$ & $0.825$ \\
\texttt{sstring\char`_Run} (normal) & $\varepsilon$ & $\varepsilon$
& $0.657$ & $0.302$ \\
\texttt{sstring\char`_Run} (chi-squared) & $\varepsilon$ & $\varepsilon$
& $0.715$ & $0.0479$
\end{tabular}
\end{center}
\end{table}
\textcolor{black}{We now discuss the remaining three tests, for which we are not
able to give satisfactory modifications.}
The \texttt{smarsa\char`_Savir2} test is a modified version of
the Savir test proposed by Marsaglia \cite{1676623w}.
Let $U_1$, $U_2$, $\ldots$, $U_t$ be independent uniform random variables
over $(0, 1)$. For a given $m$, the random integers
$I_1$, $I_2$, $\ldots$, $I_t$ are defined by
$I_1 = \lceil m U_1 \rceil$, $I_2 = \lceil I_1 U_2 \rceil$, $\ldots$,
$I_t = \lceil I_{t-1} U_t \rceil$. It thus generates $n$ values of $I_t$
and compares their empirical distribution with the theoretical one
via a $\chi^2$-test.
TestU01 recommends the values of $m$ and $t$ that satisfy
$m \approx 2^t$ and Crush adopts $m=2^{20}$ and $t=30$.
Table 9 shows the $p$-values obtained with $n=2 \times 10^7$, $m=2^{20}$
and various values of $t$.
The $p$-values are slightly suspicious but not too small.
It is therefore necessary to investigate this test mathematically,
but we are unable to do so at present.
Tentatively, we propose taking $t=9$ as a compromise between
the reliability of the test and the preference for a larger value of $t$.
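For reference, the recursion generating the $I_t$ samples is easy to reproduce; the sketch below is ours and is only meant to illustrate the test, not TestU01's implementation.
\begin{verbatim}
import numpy as np

def savir2_sample(n, m, t, rng=None):
    # I_1 = ceil(m*U_1), I_j = ceil(I_{j-1}*U_j); return n realizations of I_t.
    rng = rng or np.random.default_rng()
    out = np.empty(n, dtype=np.int64)
    for s in range(n):
        i = int(np.ceil(m * rng.random()))
        for _ in range(t - 1):
            i = int(np.ceil(i * rng.random()))
        out[s] = i
    return out
\end{verbatim}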
\begin{table}[H]
\begin{center}
\caption{$p$-values of the \texttt{smarsa\char`_Savir2} test
for various $t$'s}
\begin{tabular}{c}
\begin{minipage}{0.5\hsize}
\begin{center}
\begin{tabular}{c|c|c}
$t$ & MT & SHA1 \\ \hline
$5$ & 2.7E$-$07 & 6.9E$-$06 \\
$6$ & 4.2E$-$12 & 3.4E$-$05 \\
$7$ & 2.3E$-$10 & 3.5E$-$08 \\
$8$ & 5.2E$-$06 & 1.5E$-$09 \\
$9$ & 1.1E$-$06 & 1.6E$-$05 \\
$10$ & 3.2E$-$04 & 2.1E$-$13 \\
$11$ & 5.6E$-$14 & 1.6E$-$15
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\begin{tabular}{c|c|c}
$t$ & MT & SHA1 \\ \hline
$12$ & 1.9E$-$06 & 1.6E$-$12 \\
$13$ & 1.6E$-$07 & 2.3E$-$08 \\
$14$ & 3.8E$-$06 & 4.9E$-$16 \\
$15$ & 4.7E$-$16 & 1.7E$-$17 \\
$20$ & 2.2E$-$13 & 2.0E$-$18 \\
$25$ & 1.3E$-$28 & 1.2E$-$17 \\
$30$ & 2.7E$-$49 & 9.9E$-$32
\end{tabular}
\end{center}
\end{minipage}
\end{tabular}
\end{center}
\end{table}
The \texttt{scomp\char`_LempelZiv} test measures the compressibility of
the bit sequence using the Lempel-Ziv compression algorithm. TestU01 uses
approximations of the mean and variance obtained by simulation.
The \texttt{sspectral\char`_Fourier3} test is a kind of discrete Fourier transform (DFT) test
proposed by Erdmann.
The authors of TestU01 remark that such tests tend not
to be very sensitive.
Nevertheless, the resulting $p$-values of these two tests
are smaller than $10^{-300}$, so more mathematical justification
for them is needed.
\section{Concluding remarks}
We introduced a three-level test to check the
\textcolor{black}{quality of the approximation for the $p$-values}
in statistical tests
for PRNGs.
\textcolor{black}{We find that some statistical tests use approximations
with flaws, and we list such tests from the NIST test suite and TestU01.
This does not mean that these tests are erroneous,
but their reliability is increased if the approximation
is improved. We give satisfactory modifications to three test statistics
in Crush, and propose new parameters for several tests from this viewpoint.}
\textcolor{black}{In this study,
we need to assume that the approximated statistics
are continuous, because our three-level test is based on
the uniformity of $p$-values in $[0,1]$ at the first level.}
This condition is
not essential: if the distribution of $p$-values can be computed exactly,
we can conduct the three-level test with an appropriate GOF test
at the third level. For example,
the exact probability formula of the \texttt{smarsa\char`_BirthdaySpacings}
test is presented in \cite{Knuth:1997:ACP:270146}.
It indicates the possibility
of calculating the exact distribution of its $p$-values.
In future work, we hope to assess the reliability of all of
the remaining tests in the Crush battery.
Following the original proposal in \cite{110007504717},
we employed a $\chi^2$ GOF test at the third level.
However, a Kolmogorov-Smirnov (KS) test seems to be more appropriate
and more powerful.
An accurate approximation of the KS distribution is now available
\cite{JSSv039i11}, so we should experiment with this method
to obtain more decisive conclusions.
\section*{References}
\end{document}
\begin{document}
\title{Reference-frame-independent measurement-device-independent quantum key distribution based on polarization multiplexing}
\author{Hongwei Liu\textsuperscript{1,2}}
\author{Jipeng Wang\textsuperscript{1,2}}
\author{Haiqiang Ma\textsuperscript{2}}
\email{hqma@bupt.edu.cn}
\author{Shihai Sun\textsuperscript{1}}
\email{shsun@nudt.edu.cn}
\affiliation{1. College of Liberal Arts and Science, National University of Defense Technology, Hunan, Changsha 410073, China\\
2. School of Science and State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China}
\date{\today}
\begin{abstract}
Measurement-device-independent quantum key distribution (MDI-QKD) has been proved to eliminate all potential detector side-channel attacks. Combined with the reference-frame-independent (RFI) scheme, the complexity of a practical system can be reduced because alignment of the reference frame becomes unnecessary. Here, based on polarization multiplexing, we propose a time-bin encoding structure and experimentally demonstrate the RFI-MDI-QKD protocol. With this structure, two of the four Bell states can be distinguished, whereas only one was used to generate the secure key in previous RFI-MDI-QKD experiments. To the best of our knowledge, this is the first demonstration of the RFI-MDI-QKD protocol with a clock rate of 50 MHz and a distance of more than one hundred kilometers between the legitimate parties Alice and Bob. In the asymptotic case, we experimentally compare the RFI-MDI-QKD protocol with the original MDI-QKD protocol at a transmission distance of 160 km under different misalignments of the reference frame. By considering observables and statistical fluctuations jointly, the four-intensity decoy-state RFI-MDI-QKD protocol with biased bases is experimentally realized at transmission distances of 100 km and 120 km. The results show the robustness of our scheme, and under a large misalignment of the reference frame the key rate of RFI-MDI-QKD is markedly higher than that of the original protocol.
\end{abstract}
\pacs{}
\maketitle
\section{\label{sec:level1}Introduction}
In this information age, the privacy of information is vital to personal life and to the management of companies and governments. Researchers have therefore turned to physical principles, such as quantum physics, rather than mathematical complexity, in search of unconditionally secure schemes. Such is the significance of quantum key distribution (QKD) \cite{Bennett1984}, which has attracted wide attention. Tremendous theoretical and experimental efforts have been made in this field \cite{COWE,Wang2005Beating,DECOY05,RRDPS,Laing2010Reference,Wang20122,Peng2010Decoy,Yuan2008Gigahertz}.
However, the actual performance of practical apparatuses should be taken into account in a real QKD system; otherwise the gap between the theoretical and the practical model will weaken its security \cite{TAG04,ATTACK1,PhysRevA.83.062331,wei2017feasible,PhysRevA.73.022320,1367-2630-12-11-113026,PhysRevLett.107.110501,qi2007time,li2011attacking}. There are three main approaches to close this gap. The first is the security patch \cite{yuan2010avoiding,da2012real}, but it is not universal against all potential and unnoticed security loopholes. The second is device-independent QKD (DI-QKD) \cite{acin2007device,gisin2010proposal,curty2011heralded}, which is still challenging with current technology since a loophole-free Bell test is needed \cite{hensen2015loophole}. The third and most promising approach is measurement-device-independent QKD (MDI-QKD) \cite{lo2012measurement,braunstein2012side}. It successfully removes all detection-related security loopholes, which means a secure key can be generated even when the measurement unit is fully controlled by the adversary Eve. Furthermore, with current technology, MDI-QKD provides a solution for building more secure long-distance key-distribution links and metropolitan networks \cite{yin2016measurement,tang2016measurement}.
The merits of the MDI-QKD protocol have attracted extensive attention in recent years, and a series of achievements have been made both in theory \cite{xu2014protocol,curty2014finite,xu2015discrete,CVDV,zhou2016making} and in experiment \cite{rubenok2013real,da2013proof,liu2013experimental,comandar2016quantum,tang2016experimental}. Since the relative phase and the time bins of pulses can be firmly maintained along the transmission, time-bin encoding is a suitable scheme for fiber-based QKD systems, whereas the polarization of light is not stable owing to the birefringence of the fiber. It is noted that most experiments based on time-bin encoding schemes can only distinguish one Bell state, such as $\left| {{\psi ^ - }} \right\rangle $, which eventually leads to the loss of a factor of 3/4 in the final key. In addition, active reference-frame alignment is needed to ensure a higher secure key rate. Although additional calibration parts appear feasible, they increase the complexity of the MDI-QKD system, which may lead to extra information leakage through these ancillary processes \cite{Jain2011Device}.
As a promising solution that eliminates the requirement for reference-frame calibration, the reference-frame-independent (RFI) MDI-QKD protocol has been proposed \cite{yin2014reference}. As far as we know, only two experimental verifications have been reported so far \cite{wang2015phase,wang2017measurement}; their systems operate at 1 MHz, and the longest distance between Alice and Bob is 20 km. An experimental demonstration with a higher clock rate and a longer transmission distance is still missing. Furthermore, although simulations have been carried out to compare the performance of the RFI-MDI-QKD protocol with that of the original MDI-QKD protocol under different misalignments of the reference frames \cite{zhang2017practical,zhang2017decoy}, a clear experimental comparison is also missing.
In this paper, we propose an effective time-bin encoding scheme based on polarization multiplexing. Combined with the efficient detection scheme proposed in our previous work \cite{tang2016time}, both Bell states $\left| {{\psi ^ \pm }} \right\rangle $ can be distinguished, which means the loss factor in the final key is reduced to 1/2. A proof-of-principle experiment based on the RFI-MDI-QKD protocol over a symmetric channel is performed to show the feasibility of our scheme. The system clock rate is increased to 50 MHz. In the asymptotic case, we compare the performance of the RFI-MDI-QKD protocol with that of the original MDI-QKD protocol at a transmission distance of 160 km. A key rate an order of magnitude higher is achieved for the RFI-MDI-QKD protocol when the misalignment of the relative reference frame $\beta$ is controlled at 25 degrees. For real-world applications, we deploy the decoy-state RFI-MDI-QKD protocol with biased bases proposed in \cite{zhang2017decoy} in our system. By employing the elegant statistical fluctuation analysis proposed in \cite{wang2017measurement}, positive secure key rates are achieved for $\beta = {0^ \circ }$ at a transmission distance of 120 km and for $\beta = {25^ \circ }$ at a transmission distance of 100 km. We believe these results further illustrate the feasibility and the merit of the RFI-MDI-QKD protocol at a higher clock rate and over a longer secure transmission distance, especially when a large misalignment of the reference frame occurs. Eliminating the calibration of the primary reference frames of the system reduces the complexity of the realistic setup and prevents extra information leakage through the ancillary alignment processes.
\section{\label{sec:level2}Protocol}
In both the RFI-MDI-QKD and the original MDI-QKD protocol, Alice and Bob first make a random selection among several bases to prepare their phase-randomized weak coherent states: the \emph{Z} basis states ($\left| {\rm{0}} \right\rangle $, $\left| {\rm{1}} \right\rangle $) and \emph{X} basis states ($\left| {\rm{ + }} \right\rangle {\rm{ = }}{{\left( {\left| {\rm{0}} \right\rangle {\rm{ + }}\left| {\rm{1}} \right\rangle } \right)}/ {\sqrt {\rm{2}} }}$, $\left| - \right\rangle {\rm{ = }}{{\left( {\left| {\rm{0}} \right\rangle - \left| {\rm{1}} \right\rangle } \right)} / {\sqrt {\rm{2}} }}$) for the original MDI-QKD protocol, with the additional \emph{Y} basis states ($\left| { + i} \right\rangle {\rm{ = }}{{\left( {\left| {\rm{0}} \right\rangle + i\left| {\rm{1}} \right\rangle } \right)} /{\sqrt {\rm{2}} }}$, $\left| { - i} \right\rangle {\rm{ = }}{{\left( {\left| {\rm{0}} \right\rangle - i\left| {\rm{1}} \right\rangle } \right)}/{\sqrt {\rm{2}} }}$) required in the RFI-MDI-QKD protocol. The states are then sent to an untrusted relay Charlie, who performs a Bell state measurement (BSM) and announces the corresponding measurement results. Charlie's measurement projects the incoming states onto one of two Bell states, $\left| {{\psi ^ + }} \right\rangle = \left( {\left| {01} \right\rangle + \left| {10} \right\rangle } \right)/\sqrt 2 $ or $\left| {{\psi ^ - }} \right\rangle = \left( {\left| {01} \right\rangle - \left| {10} \right\rangle } \right)/\sqrt 2 $. Alice and Bob keep the data that conform to these instances and discard the rest. After basis sifting and error estimation, they can obtain the total counting rate $Q_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _B}}$ and the quantum bit error rate (QBER) $E_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _B}}$, where ${\lambda _{A\left( B \right)}} \in \left\{ {\mu_i ,\nu_i ,o} \right\}$ denotes that Alice (Bob) randomly prepares signal states $\mu_i $ or decoy states $\nu_i $ for basis $i_{A(B)} \in \left\{ {Z,X,Y} \right\}$, or vacuum states $o$. It is noted that Alice and Bob do not choose any basis for the vacuum states.
If the deviation ${\beta _{A\left( B \right)}}$ of the practical reference frame from the ideal one is considered, the \emph{Z} basis is assumed to be well aligned (${Z_A} = {Z_B} = Z$), and the \emph{X} and \emph{Y} bases can be written as follows \cite{yin2014reference,wang2015phase}:
\begin{equation}
\begin{array}{c}
{X_B} = \cos \beta {X_A} + \sin \beta {Y_A},\\
{Y_B} = \cos \beta {Y_A} - \sin \beta {X_A},\\
\beta = \left| {{\beta _A} - {\beta _B}} \right| / 2.
\end{array}
\label{1}
\end{equation}
The secure key is extracted from the data for which both Alice and Bob encode their bits using signal states ($\mu$) in the \emph{Z} basis. The rest of the data are used to estimate the parameters entering the secure key rate calculation. The secure key rate is given by \cite{lo2012measurement,wang2017measurement}
\begin{equation}
R \ge {P_{zz}}P_{zz}^{\mu \mu }\left\{ {{\mu ^2}{e^{ - 2\mu }}S_{ZZ}^{11,L}\left[ {1 - {I_E}} \right] - Q_{ZZ}^{\mu \mu }fH\left( {E_{ZZ}^{\mu \mu }} \right)} \right\},
\label{3}
\end{equation}
where $S_{ZZ}^{11,L}$ is a lower bound on the yield of single-photon states in the \emph{Z} basis, $P_{zz}$ is the probability that both Alice and Bob choose the \emph{Z} basis, and $P_{zz}^{\mu \mu }$ is the probability that both send signal states given that the \emph{Z} basis is chosen. The parameter $f$ is the error correction efficiency, and $H\left( x \right) = - x{\log _2}\left( x \right) - \left( {1 - x} \right){\log _2}\left( {1 - x} \right)$ is the binary Shannon entropy function.
When the sources of both Alice and Bob are assumed perfect, Eve's information ${I_E}$ in Eq.~(\ref{3}) can be estimated by ${I_E} = H( {e_{XX}^{11,U}} )$ for the original MDI-QKD protocol, where ${e_{XX}^{11,U}}$ is an upper bound on the quantum error rate of single-photon states in the \emph{X} basis. For the RFI-MDI-QKD protocol, ${I_E}$ can be bounded by \cite{yin2014reference}
\begin{equation}
\begin{aligned}
{I_E} &= ( {1 - e_{ZZ}^{11,U}} )H\left[ {\left( {1 + u} \right)/2} \right] + e_{ZZ}^{11,U}H\left[ {\left( {1 + v} \right)/2} \right],\\
v &= \sqrt {C/2 - {{( {1 - e_{ZZ}^{11,U}} )}^2}{u^2}} /e_{ZZ}^{11,U},\\
u &= \min [ {\sqrt{C/2}/( {1 - e_{ZZ}^{11,U}} ),1} ].
\end{aligned}
\end{equation}
Obviously, ${I_E}$ is a function of the upper bound $e_{ZZ}^{11,U}$ on the quantum error rate of single-photon states in the \emph{Z} basis and of the quantity $C$. When there is no Eve and no other errors, $C$ always equals $2$. In order to upper bound Eve's information ${I_E}$, the value of $C$ should be lower bounded; it can be estimated by
\begin{equation}
C \ge \sum\limits_{\omega '} {\min \left[ {{{(1 - 2e_{\omega '}^{11,U})}^2},{{(1 - 2e_{\omega '}^{11,L})}^2}} \right]},
\label{5}
\end{equation}
where $\omega ' \in \left\{ {{X_A}{X_B},{X_A}{Y_B},{Y_A}{X_B},{Y_A}{Y_B}} \right\}$ and $e_{\omega '}^{11,U(L)}$ is an upper (lower) bound of the quantum error rate of single-photon states when Alice and Bob choose the $\omega '$ bases simultaneously. Note that $E_{{X_A}{Y_B}}^{\mu \mu }$ and $E_{{Y_A}{X_B}}^{\mu \mu }$ are theoretically symmetrical about 0.5. Thus we assume $E_{\omega'} ^{{\lambda _A}{\lambda _B}} \le 0.5$ for simplicity; if not, Bob can simply flip his bits corresponding to the relevant bases \emph{X}, \emph{Y}. In this scenario, the value of $C$ can be simplified to $C \ge {\sum\limits_{\omega '} {( {1 - 2e_{\omega '} ^{11}} )} ^2}$, where $e_{\omega '}^{11} = \min \left\{ {0.5,e_{\omega '}^{11,U}} \right\}$.
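As a numerical illustration of the above bounds (a sketch with our own parameter names, not part of the protocol itself), Eve's information and the key rate of Eq.~(\ref{3}) can be evaluated as follows; for instance, with $e_{ZZ}^{11,U}\approx 0.004$ and $C\approx 1.67$, values reported in the experimental section below, it gives $I_E\approx 0.25$, in line with Table~\ref{t1}.
\begin{verbatim}
import numpy as np

def h2(x):
    # Binary Shannon entropy H(x).
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def eve_information_rfi(e_zz, C):
    # I_E for the RFI protocol from e_ZZ^{11,U} and the quantity C.
    u = min(np.sqrt(C / 2) / (1 - e_zz), 1.0)
    v = np.sqrt(max(C / 2 - (1 - e_zz) ** 2 * u ** 2, 0.0)) / e_zz
    return (1 - e_zz) * h2((1 + u) / 2) + e_zz * h2((1 + v) / 2)

def key_rate(P_zz, P_zz_mumu, mu, S11_L, I_E, Q_zz_mumu, E_zz_mumu, f=1.16):
    # Secure key rate of Eq. (3).
    return P_zz * P_zz_mumu * (mu ** 2 * np.exp(-2 * mu) * S11_L * (1 - I_E)
                               - Q_zz_mumu * f * h2(E_zz_mumu))
\end{verbatim}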
\section{\label{sec:level2}Experimental setup}
\begin{figure*}
\caption{Experimental setup of our scheme. Laser, a distributed feedback (DFB) laser combined with a home-built drive board; EPC, electronic polarization controller; PC, polarization controller; IM, intensity modulator; PM, phase modulator; PS, phase shifter; Cir, circulator, its ports and directions is labelled above; ATT, attenuator; SPD, single photon detector; QC, a SMF-28 fiber spool, its channel attenuation is measured at $\alpha=0.195 dB/km$; BS, beam splitter; PBS, polarizing beam splitter; FR, ${90^ \circ }
\label{f1}
\end{figure*}
The time-bin encoding method is used in our system, and the experimental setup is shown in Fig. \ref{f1}. For both Alice and Bob, we employ a distributed feedback (DFB) laser combined with a home-built drive board. By operating the laser below and above the lasing threshold, we first generate phase-randomized laser pulses with a 2 ns temporal width and a 50 MHz repetition rate, which eliminates the possibility of an unambiguous-state-discrimination attack \cite{tang2013source}. The electrical pulses are created by a field-programmable gate array (FPGA)-based signal generator (not pictured in Fig. \ref{f1}). In order to calibrate the wavelength, the laser pulses are injected into an optical spectrum analyzer (YOKOGAWA AQ6370D, OSA) through the BSs placed after the two lasers. The OSA, whose resolution is 10-20 pm, is used to monitor the wavelength difference of the two independent lasers, which can be minimized by precisely adjusting the operating temperature of the lasers through the temperature controllers on the laser drive boards.
\begin{table}[htbp]
\centering
\caption{\bf The detail of time-bin encoding scheme.}
\begin{tabular}{ccccccc}
\hline
& $\left| {\rm{0}} \right\rangle$ & $\left| {\rm{1}} \right\rangle$ & $\left| {\rm{+}} \right\rangle$ &$\left| {\rm{-}} \right\rangle$ & $\left| {+ i} \right\rangle$ & $\left| {- i} \right\rangle$\\
\hline
PM1 & 0 & $\pi$ & $\frac{\pi }{{\rm{2}}}$ & $\frac{\pi }{{\rm{2}}}$ & $\frac{\pi }{{\rm{2}}}$ & $\frac{\pi }{{\rm{2}}}$ \\
PM2 &0 &0 &0 & $\pi$ & $\frac{\pi }{{\rm{2}}}$ & $\frac{3 \pi }{{\rm{2}}}$ \\
\hline
\end{tabular}
\label{t2}
\end{table}
Since Alice's and Bob's parts are symmetric, we use Alice's part as an example to illustrate our experimental setup. To realize the decoy-state preparation, an intensity modulator (IM, Photline, MXER-LN-10) is used to modulate the laser pulses into two different intensities, while the vacuum states are prepared by pausing the trigger on the lasers. The circulator (Cir) transmits the incident pulses from port 1 to port 2. Each pulse is then divided into two adjacent pulses with 5 ns separation by the first modified asymmetric Mach-Zehnder interferometer (AMZI1), which is composed of a beam splitter (BS) and a polarizing beam splitter (PBS). The relative phase of these two successive pulses is modulated by the phase modulator (PM1, Photline, MPZ-LN-10) in the Sagnac interferometer (SI). When a phase of 0 or $\pi$ is modulated, a \emph{Z} basis state is prepared. We define the light passing through the upper path of the second AMZI (AMZI2) as the time-bin state $\left| {\rm{0}} \right\rangle$, and that passing through the lower path of AMZI2 as the time-bin state $\left| {\rm{1}} \right\rangle$. These two time bins are separated by a 4.2 ns time delay. When a phase of $\pi/2$ is modulated by PM1, the phases modulated by PM2 in AMZI2 are $0$, $\pi$ for the \emph{X} basis, and $\pi/2$, $3\pi/2$ for the \emph{Y} basis. For detail, we list our time-bin encoding scheme in Table \ref{t2}.
Notably, neither an IM nor a variable optical attenuator (VOA) is needed to normalize the average photon number of the \emph{Z} basis states in the two time bins \cite{yin2016measurement,tang2016measurement,wang2017measurement}; this can be achieved simply by adjusting the modulating voltage of PM1 accordingly in our system, which also reduces the complexity of the system to some extent. Furthermore, orthogonal polarization states (\emph{H}, \emph{V}) are multiplexed onto the time bins because of the PBS at the output of AMZI2. For the sake of comparing the performance under different misalignments, phase shifters (PS) in AMZI2 are applied to control the reference frame, with the quantum error rate in the \emph{X} basis $E_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}}$ serving as a guide to set the deviation of the relative reference frame. Note that the whole time-bin encoding unit is strictly thermally and mechanically isolated to enhance its stability.
At the measurement site, since the time bins are multiplexed with the orthogonal polarization states (\emph{H} and \emph{V}), we can use a PBS to demultiplex them easily. Two electric polarization controllers (EPC, General Photonics, PCD-M02-4X) are used to counteract polarization fluctuations: they adjust the polarization of the input light until the SPD count rates are maximized, and hence all polarization changes during photon transmission are compensated for. Two BSs are used to realize the interference. Four commercial InGaAs SPDs (ID210) with an efficiency of ${\eta _d}=12.5\%$, a dark count rate of ${P_d}=1.2 \times {10^{ - 6}}$ and a dead time of 5 $\mu s$ are placed one at each output of the BSs. Therefore, all results of the BSM are effectively detected: we identify the Bell state $\left| {{\psi ^{\rm{ + }}}} \right\rangle $ when D1 and D4 or D2 and D3 in Fig. \ref{f1} click simultaneously, and the simultaneous clicks of D1 and D3 or D2 and D4 represent $\left| {{\psi ^ - }} \right\rangle $. The parameters for the experiment and numerical simulations are listed in Table \ref{t22}.
\begin{table}[htbp]
\centering
\caption{\bf Parameters for experiment and numerical simulations.}
\begin{tabular}{ccccc}
\hline
${\eta _d}$ & ${e_d}$ & $\alpha $ & ${P_d}$ & $f$ \\
\hline
12.5\% & 0.5\% & $0.195dB/km$ & $1.2 \times {10^{ - 6}}$ & 1.16 \\
\hline
\end{tabular}
\label{t22}
\end{table}
\section{Results and discussion}
We first test the indistinguishability of the photons from Alice and Bob by measuring the Hong-Ou-Mandel (HOM) interference visibility. We obtain a visibility of 42.7\%, which is smaller than the maximum possible value of 50\% for a weak coherent source. The low HOM visibility is mainly caused by detector-side imperfections due to after-pulses; it has been shown that the after-pulse effect of SPADs has a large impact on the measurement of the HOM visibility \cite{Wang:17}. Furthermore, two PBSs are used before the interference in our scheme; the change of polarization of the incident pulses after a long transmission distance leads to a fluctuating intensity, and the finite extinction ratio (about 20 dB) of the PBS also lowers the visibility. Moreover, the beam-splitting ratio and the detection-efficiency mismatch of the detectors can partly influence the HOM visibility, as discussed in \cite{Wang:17}. The central wavelength of the laser pulses is 1558.18 nm after calibration. Next, we show and discuss our experimental results for the asymptotic case and the finite-size case separately.
\subsection{Asymptotic case}
In the asymptotic case, we adopt the symmetric three-intensity decoy-state protocol for simplicity, which means ${\mu _i} = {\mu _{i'}}{\rm{ = }} \mu $ for signal states and ${\nu _i} = {\nu _{i'}}{\rm{ = }}\nu $ for decoy states. By modeling the total gains and error rates of our system (see Appendices A and B for details), we find that the optimal average photon numbers for the original MDI-QKD protocol (O-MDI) and the RFI-MDI-QKD protocol (R-MDI) are almost the same, as depicted by the blue and purple dashed lines in Fig. \ref{f3b}, when the misalignment of the reference frame is controlled at $\beta = {0^ \circ }$. This means the secure key rate (SKR) for both protocols can be obtained from a single experiment. The simulation and experimental results are presented in Table \ref{t1} and Fig. \ref{f3a}, which shows that the two curves almost overlap (red line for R-MDI and blue dashed line for O-MDI). We set the average photon number of the vacuum state to $0$ since no pulses are emitted when the trigger on the lasers is paused. The value of $C$ for R-MDI is estimated to be 1.668. A QBER of 0.6\% is obtained for the \emph{Z} basis; it comes from the successful BSMs declared by Charlie when Alice and Bob prepared the same states in the Z basis. In the ideal case, the QBER of the Z basis should be 0, whereas the detectors' dark counts and the finite extinction ratio of the first AMZI in Fig. \ref{f1} lead to incorrect coincidence counts and thus increase the QBER of the Z basis. Meanwhile, the vacuum and multiphoton components of the weak coherent states cause accidental coincidences, which introduce an error rate of 50\%. Thus, the error rate of the X basis has an expected value of 25\%, and so does that of the Y basis. However, when the HOM visibility is lower than 50\%, the QBER of the X basis is higher than 25\%, since the error counts come from the cases in which the Bell state $\left| {{\psi ^{\rm{ + }}}} \right\rangle $ was announced while Alice and Bob prepared the same states in the X basis, or $\left| {{\psi ^ - }} \right\rangle $ was declared while orthogonal states were prepared. In our system, it is measured to be 27.9\%.
\begin{table}[htbp]
\begin{threeparttable}
\centering
\caption{\bf Experimental results when misalignment of reference frame are $\beta = {0^ \circ }$ and $\beta = {25^ \circ }$.}
\begin{tabular}{ccccccc}
\hline
Protocol & $\mu $ & $\nu $ & $E_{ZZ}^{\mu \mu }$ & $E_{XX}^{\mu \mu }$ & ${I_E}$ & SKR\\
\hline
\multicolumn{7}{c}{$\beta = {0^ \circ }$\tnote{*}}\\
\cline{4-5}
R-MDI & \multirow{2}*{0.67} & \multirow{2}*{0.01} & \multirow{2}*{0.006} & \multirow{2}*{0.279} & 0.254 & $5.225 \times {10^{ - 8}}$ \\
O-MDI & & & & & 0.296& $4.690 \times {10^{ - 8}}$ \\
\hline
\multicolumn{7}{c}{$\beta = {25^ \circ }$}\\
\cline{4-5}
R-MDI & 0.67 & 0.01 & 0.008 & 0.348 & 0.297 & $4.866 \times {10^{ - 8}}$ \\
O-MDI & 0.35 & 0.01 & 0.010 & 0.338 & 0.686 & $1.655\times {10^{ - 9}}$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item[*] The optimal average photon number for O-MDI and R-MDI is identical when the misalignment of the reference frame is controlled at $\beta = {0^ \circ }$. Thus, the SKR for both protocols can be obtained from a single experiment. The $\mu $ and $\nu $ are optimized in all the tests.
\end{tablenotes}
\rule{\linewidth}{0.05mm}
\label{t1}
\end{threeparttable}
\end{table}
\begin{figure}
\caption{Lower secure key rate bound of RFI-MDI-QKD protocol (R-MDI) and the original MDI-QKD protocol (O-MDI) when the deviations of reference frame are controlled at $\beta ={ 0^ \circ }
\label{f3a}
\end{figure}
\begin{figure}
\caption{Lower secure key rate bound and optimal average photon number of RFI-MDI-QKD protocol (R-MDI) and the original MDI-QKD protocol (O-MDI) versus the different misalignments of reference frame at the distance of 160 km. Since the simulation results are symmetrical about $\beta=45^{\circ}
\label{f3b}
\end{figure}
In order to investigate the performance of the RFI-MDI-QKD protocol and the original MDI-QKD protocol at a nonzero deviation of the relative reference frame, we control the voltage of the PS in Fig. \ref{f1} according to the simulated value of $E_{XX}^{\mu \mu }$ so as to emulate this deviation. Fig. \ref{f3b} presents the SKR and the optimal average photon number versus the deviation of the reference frame $\beta$ when the transmission distance between Alice and Bob is 160 km. It is obvious that O-MDI depends strongly on the change of $\beta$. However, the SKR and the optimal average photon number for R-MDI are almost identical at different deviations of the reference frame. Thus, for R-MDI at $\beta=25^{\circ}$, we keep the values of the average photon number consistent with the setting at $\beta=0^{\circ}$. In this case, the simulation results in Fig. \ref{f3a} show that the red curve for $\beta=0^{\circ}$ almost overlaps with the green curve marked with crosses for $\beta=25^{\circ}$. For comparison, the optimal values of $\mu$ and $\nu$ for O-MDI at $\beta=25^{\circ}$ are used to conduct the experimental test. The related experimental results are presented in Table \ref{t1} and Fig. \ref{f3a}. The value of $C$ for R-MDI is estimated to be 1.595. At $\beta ={ 25^ \circ }$, the secure key rate of R-MDI is close to the one at $\beta ={ 0^ \circ }$, and is an order of magnitude higher than that of O-MDI at the transmission distance of 160 km. Thus, unlike for O-MDI, changes of the reference frame hardly influence the secure key rate of R-MDI, nor the optimal average photon number settings. These results well illustrate the robustness of the RFI-MDI-QKD protocol against deviations of the relative reference frame.
\subsection{Finite-size pulses case}
In real-world applications, the key size is always finite; thus we must consider the effect of statistical fluctuations caused by the finite number of pulses. Such an analysis is crucial to ensure the security of RFI-MDI-QKD. The three-intensity decoy-state RFI-MDI-QKD protocol with biased bases proposed in \cite{zhang2017decoy} has been shown to improve the achievable secret key rate and transmission distance appreciably compared with the original protocol, since it avoids the useless decoy states in the Z basis and thus simplifies the operation of the system. Recently, a universal analysis appropriate for fluctuating systems with an arbitrary number of observables was developed in \cite{wang2017measurement}; it shows that, by adopting both collective constraints and joint parameter estimation techniques, the secret key rate and transmission distance can be impressively improved for the four-intensity decoy-state RFI-MDI-QKD protocol.
Here, by using this elegant fluctuation analysis method, we deploy the four-intensity decoy-state RFI-MDI-QKD protocol with biased bases in our experiment. In this scheme, except for the vacuum states, Alice and Bob need to prepare signal states $\mu_z$ for the Z basis and $\mu_x$ for both the X and Y bases, owing to the symmetry of the X, Y bases in Eq. (\ref{5}), whereas the decoy states $\nu_x$ are prepared only for the X and Y bases. All related parameters, including $\mu_z$, $\mu_x$, $\nu_x$, $P_z$, $P_x$, and $P_{x}^{\mu_x }$, should be optimized to achieve the highest secure key rate. It is found that the achievable secure key rate and transmission distance in this scheme can also be notably improved, as shown in Fig. \ref{f4}.
We apply the Chernoff bound for the fluctuation estimation in our experiment, with a fixed failure probability of $\varepsilon = {10^{ - 10}}$ and a total number of pulse pairs $N = 3 \times {10^{12}}$. After the simulation with full parameter optimization shown in Fig. \ref{f4}, we find some results that differ from the asymptotic case. It is obvious that RFI-MDI-QKD deteriorates with increasing $\beta$ when statistical fluctuations are considered, which can be explained by the fact that the correlations among $e_{{X_A}{X_B}}^{11}$, $e_{{Y_A}{Y_B}}^{11}$, $e_{{X_A}{Y_B}}^{11}$, and $e_{{Y_A}{X_B}}^{11}$ are smeared as $\beta$ increases, leading to a poorer estimation of the value of $C$ in Eq. (\ref{5}). Furthermore, the optimal parameter settings for the experiment change as $\beta$ increases, whereas they remain almost the same in the asymptotic case, as shown in Fig. \ref{f3b}. For instance, when the transmission distance is 100 km, the optimal signal intensity for the Z basis $\mu_z$ at $\beta=0^\circ$ is 0.4407, while it becomes 0.2648 at $\beta=25^\circ$.
\begin{table}[htbp]
\centering
\caption{\bf Experimental results when statistical fluctuations are considered.}
\begin{tabular}{ccccccc}
\hline
Distance & $\beta$ & ${\mu _{zz}}$ & $E_{ZZ}^{\mu \mu }$ & C &$I_E$ & $SKR$ \\
\hline
100 km & $25^\circ$ & 0.265& 0.9\% & 0.44 & 0.83& $1.22 \times {10^{ - 10}}$ \\
120 km &$0^\circ$ &0.324 &1.15\% & 0.56 & 0.78 & $2.30 \times {10^{ - 10}}$ \\
\hline
\end{tabular}
\label{4}
\end{table}
We experimentally demonstrate the feasibility of the four-intensity biased decoy-state scheme when statistical fluctuations are considered. Secure key rates for transmission distances of 120 km and 100 km are obtained, as presented in Table \ref{4} and Fig. \ref{f4}. The deviations of the reference frame are controlled at $\beta=0^\circ$ and $\beta=25^\circ$, respectively.
\begin{figure}
\caption{Lower secure key rate bound of RFI-MDI-QKD protocol (R-MDI) with biased bases when statistical fluctuations are considered. Black dashed line is the results at $\beta=25^{\circ}
\label{f4}
\end{figure}
\section{Conclusion}
In conclusion, RFI-MDI-QKD with a high clock rate of 50 MHz and a distance of more than one hundred kilometers is demonstrated based on time-bin encoding and polarization multiplexing. Two of the four Bell states $\left| {{\psi ^ \pm }} \right\rangle $ can be distinguished without loss, and the states in the different bases can be prepared using only phase modulators, without the need for intensity modulators. The value of the quantum error rate of the Z basis $E_{ZZ}^{\mu \mu }$ shows the feasibility of this scheme. In the asymptotic case, we experimentally compare the performance of the RFI-MDI-QKD protocol and the original MDI-QKD protocol under different deviations of the reference frame at a distance of 160 km. The secure key rate of the RFI-MDI-QKD protocol is an order of magnitude higher than that of the original MDI-QKD protocol when the misalignment of the reference frame is $\beta = {25^ \circ }$. Moreover, a simulation model for the RFI-MDI-QKD protocol is given, and together with the experimental results, the robustness of the RFI-MDI-QKD protocol against reference frame changes is verified by the invariance of the secure key rate and of the optimal average photon numbers under different deviations of the reference frame. The four-intensity decoy-state RFI-MDI-QKD protocol with biased bases is employed to take statistical fluctuations into account in our experiment. By adopting both collective constraints and joint parameter estimation techniques, the achievable secret key rate and transmission distance are improved appreciably compared with the original biased decoy-state RFI-MDI-QKD protocol. We also experimentally realize this protocol, for the first time, at a transmission distance of 120 km when the deviation of the reference frame is controlled at $\beta = {0^ \circ }$ and at a distance of 100 km when $\beta = {25^ \circ }$.
\begin{appendices}
\section*{Appendix A: simulation model}
In order to simulate the protocol performance and obtain the optimal average photon numbers for the experimental system, we first derive the model for the total counting rate $Q_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _B}}$ and the error counting rate $EQ_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _B}}$. Following the method in \cite{Ma2012Alternative}, they are given by
\begin{equation}
\renewcommand\theequation{A1}
\begin{aligned}
Q_{{Z_A}{Z_B}}^{{\lambda _A}{\lambda _B}} &= {Q_C} + {Q_E},\\
Q_{{Z_A}{Z_B}}^{{\lambda _A}{\lambda _B}}E_{{Z_A}{Z_B}}^{{\lambda _A}{\lambda _B}} & = {e_d}{Q_C} + \left( {1 - {e_d}} \right){Q_E},\\
Q_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}} &= 2{y^2}\left[ {2{y^2} - 4y{I_0}\left( x \right) + {I_0}\left( {\rm B} \right) + {I_0}\left( {\rm E} \right)} \right],\\
Q_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}}E_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}} &= 2{y^2}\left[ {{y^2} - 2y{I_0}\left( x \right) + {e_d}{I_0}\left( {\rm B} \right) + \left( {1 - {e_d}} \right){I_0}\left({\rm E} \right)} \right],\\
Q_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _B}} &= 2{y^2}\left\{ {2{y^2} - 4y{I_0}\left( x \right) + {I_0}\left[ \Theta \right] + {I_0}\left[ \Xi \right]} \right\},\\
Q_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _B}}E_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _B}} & = 2{y^2}\left\{ {{y^2} - 2y{I_0}\left( x \right) + {e_d}{I_0}\left[ \Xi \right] + \left( {1 - {e_d}} \right){I_0}\left[ \Theta \right]} \right\},\\
Q_{{Y_A}{X_B}}^{{\lambda _A}{\lambda _B}}E_{{Y_A}{X_B}}^{{\lambda _A}{\lambda _B}} &= 2{y^2}\left\{ {{y^2} - 2y{I_0}\left( x \right) + {e_d}{I_0}\left[ \Theta \right] + \left( {1 - {e_d}} \right){I_0}\left[ \Xi \right]} \right\},
\end{aligned}
\label{eq:refname1}
\end{equation}
where
\begin{equation}
\renewcommand\theequation{A2}
\begin{array}{l}
{Q_C} = 2{\left( {1 - {P_d}} \right)^2}{e^{ - \mu '/2}}\left[ {1 - \left( {1 - {P_d}} \right){e^{ - {\eta _A}{\lambda _A}/2}}} \right]\\
\times \left[ {1 - \left( {1 - {P_d}} \right){e^{ - {\eta _B}{\lambda _B}/2}}} \right],\\
{Q_E} = 2{P_d}{\left( {1 - {P_d}} \right)^2}{e^{ - \mu '/2}}\left[ {{I_0}\left( {2x} \right) - \left( {1 - {P_d}} \right){e^{ - \mu '/2}}} \right],\\
{\rm B} = 2x\cos \beta ,\\
{\rm E} = 2x\sin \beta ,\\
\Theta = \sqrt 2 x\left( {\cos \beta + \sin \beta } \right),\\
\Xi = \sqrt 2 x\left( {\cos \beta - \sin \beta } \right),
\end{array}
\label{eq:refname2}
\end{equation}
${I_0}\left( \cdot \right)$ is the modified Bessel function of the first kind, ${e_d}$=0.005 is the misalignment-error probability, ${P_d}$ is the dark count rate of a single-photon detector, ${\eta _A}\left( {{\eta _B}} \right)$ is the channel transmittance of Alice (Bob), and $\mu ' = {\eta _A}{\lambda _A} + {\eta _B}{\lambda _B}$, $x = \sqrt {{\eta _A}{\lambda _A}{\eta _B}{\lambda _B}} /2$ and $y = (1 - {P_d}){e^{-\mu '/4}}$. Owing to the symmetry of the quantum channel and of the \emph{X}, \emph{Y} bases in Eq. (\ref{5}), we treat the parameters of the \emph{X} and \emph{Y} bases and the average photon number settings of Alice and Bob as equal for simplicity. Accordingly, ${\mu _A} = {\mu _B}$, $Q_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}}= Q_{{Y_A}{Y_B}}^{{\lambda _A}{\lambda _B}}$, $Q_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _B}}= Q_{{Y_A}{X_B}}^{{\lambda _A}{\lambda _B}}$, and $EQ_{{X_A}{X_B}}^{{\lambda _A}{\lambda _B}}= EQ_{{Y_A}{Y_B}}^{{\lambda _A}{\lambda _B}}$. The quantum error rate can be calculated as $E_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _b}} = EQ_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _b}}/Q_{{i_A}{i_B}} ^{{\lambda _A}{\lambda _b}}$; it is obvious that $E_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _b}}$ and $E_{{Y_A}{X_B}}^{{\lambda _A}{\lambda _b}}$ are symmetrical about $0.5$. As mentioned above, we assume the quantum error rate is smaller than 0.5 for simplicity; thus $e_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _b}} = 1 - E_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _b}}$ if $E_{{X_A}{Y_B}}^{{\lambda _A}{\lambda _b}} > 0.5$.
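The following Python sketch evaluates the \emph{Z}- and \emph{X}-basis gains and error rates of Eqs. (A1)--(A2); the parameter names are ours, and the snippet is meant only to illustrate the model, not to reproduce the full parameter optimization.
\begin{verbatim}
import numpy as np
from scipy.special import i0      # modified Bessel function of the first kind

def z_basis_gain(lam_a, lam_b, eta_a, eta_b, Pd, ed):
    # Q_ZZ and E_ZZ from Q_C and Q_E, Eqs. (A1)-(A2).
    mu_p = eta_a * lam_a + eta_b * lam_b
    x = np.sqrt(eta_a * lam_a * eta_b * lam_b) / 2
    QC = (2 * (1 - Pd) ** 2 * np.exp(-mu_p / 2)
          * (1 - (1 - Pd) * np.exp(-eta_a * lam_a / 2))
          * (1 - (1 - Pd) * np.exp(-eta_b * lam_b / 2)))
    QE = (2 * Pd * (1 - Pd) ** 2 * np.exp(-mu_p / 2)
          * (i0(2 * x) - (1 - Pd) * np.exp(-mu_p / 2)))
    Q_zz = QC + QE
    E_zz = (ed * QC + (1 - ed) * QE) / Q_zz
    return Q_zz, E_zz

def x_basis_gain(lam_a, lam_b, eta_a, eta_b, Pd, ed, beta):
    # Q_XX and E_XX when both parties choose the X basis, Eqs. (A1)-(A2).
    mu_p = eta_a * lam_a + eta_b * lam_b
    x = np.sqrt(eta_a * lam_a * eta_b * lam_b) / 2
    y = (1 - Pd) * np.exp(-mu_p / 4)
    B, E = 2 * x * np.cos(beta), 2 * x * np.sin(beta)
    Q_xx = 2 * y ** 2 * (2 * y ** 2 - 4 * y * i0(x) + i0(B) + i0(E))
    EQ_xx = 2 * y ** 2 * (y ** 2 - 2 * y * i0(x) + ed * i0(B) + (1 - ed) * i0(E))
    return Q_xx, EQ_xx / Q_xx
\end{verbatim}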
\section*{Appendix B: secure key rate estimation}
The secure key rate of Eq. (\ref{3}) is calculated with an analytical method with two decoy states according to \cite{wang2017measurement,yu2013three}. The lower and upper bounds of the single-photon yield and the error yield are given by
\begin{equation}
\renewcommand\theequation{B1}
\begin{aligned}
{m^{11L}} &\ge \frac{{{T_1} - {T_2} - {a'_1}{b'_2}{T_3}}}{{{a_1}{a'_1}\left( {{b_1}{b'_2} - {b'_1}{b_2}} \right)}},\\
{m^{11U}} &\le \frac{{{M^{{v_i}{v_i}}} - {T_3}}}{{{a_1}{b_1}}},
\label{8}
\end{aligned}
\end{equation}
where
\begin{equation}
\renewcommand\theequation{B2}
\begin{array}{l}
{T_1} = {a'_1}{b'_2}{M^{{v_i}{v_i}}} + {a_1}{b_2}{a'_0}{M^{o\mu_i }} + {a_1}{b_2}{a'_0}{M^{\mu_i o}},\\
{T_2} = {a_1}{b_2}{M^{{\mu_i}{ \mu_i} }} + {a_1}{b_2}{a'_0}{b'_0}{M^{oo}},\\
{T_3} = {a_0}{M^{o{v_i}}} + {b_0}{M^{{v_i}o}} - {a_0}{b_0}{M^{oo}},\\
a'{\left( {b'} \right)_k} = \mu _{i}^k{e^{ - \mu_{i} }}/k!,\\
a{\left( b \right)_k} = v_{i}^k{e^{ - v_{i}}}/k!.
\end{array}
\end{equation}
In the above formula, ${{M^{{\lambda _A}{\lambda _B}}} \in \left\{ {{Q^{{\lambda _A}{\lambda _B}}},E{Q^{{\lambda _A}{\lambda _B}}}} \right\}}$, ${m^{11}} \in \left\{ {{S^{11}},e{S^{11}}} \right\}$, $i \in \left\{ {Z,X,Y} \right\}$, and ${e^{11L(U)}} = e{S^{11L(U)}}/{S^{11U(L)}}$.
It is noted that the expression in Eq. (B1) is independent of $\omega $; thus, we can use the above equations to estimate the parameters in Eq. (\ref{3}) for the asymptotic case, which are listed in Table \ref{t5}, and then calculate the secure key rate. However, since there are only signal states for the Z basis in the biased decoy-state protocol, we emphasize that $e_{ZZ}^{11U}$ and $e_{XX}^{11U}$ may be different and should be estimated individually. By using the following formula
\begin{equation}
\renewcommand\theequation{B3}
\begin{aligned}
{m^{11U}} &\le \frac{{{M^{u_{z}u_{z}}} - {T'_3}}}{{{a'_1}{b'_1}}},
\label{B3}
\end{aligned}
\end{equation}
the upper bound of the error yield for the \emph{Z} basis can be estimated, where
\begin{equation}
\renewcommand\theequation{B4}
\begin{aligned}
{T'_3} = {a'_0}{M^{o{u_{z}}}} + {b'_0}{M^{u_{z}o}} - {a'_0}{b'_0}{M^{oo}}.
\label{B4}
\end{aligned}
\end{equation}
By using the fluctuation analysis method proposed in \cite{wang2017measurement}, the parameters used for secure key rate estimation are listed in Table \ref{t5}.
\begin{table}[htbp]
\centering
\caption{\bf Parameters estimated in the process of secure key rate calculation.}
\begin{tabular}{ccccccc}
\hline
$\beta $ & $e_{ZZ}^{11U}$ & $e_{XX}^{11U}$ & $e_{YY}^{11U}$ & $e_{XY}^{11U}$ & $e_{YX}^{11U}$ & $S_{ZZ}^{11L} (10^{ - 6})$ \\
\hline
\multicolumn{7}{c}{R-MDI in asymptotic case}\\
\hline
${0^ \circ }$ & 0.004 & 0.052 & 0.035 & 0.534 & 0.527 & 1.084 \\
${25^ \circ }$ & 0.005 & 0.174 & 0.225 & 0.176 & 0.166 & 1.221 \\
\hline
\multicolumn{7}{c}{O-MDI in asymptotic case }\\
\hline
${0^ \circ }$ & 0.004 & 0.052 & - & - & -& 1.084 \\
${25^ \circ }$ & 0.005 & 0.182 &- & - & - & 1.200 \\
\hline
\multicolumn{7}{c}{R-MDI with biased bases in finite-data case }\\
\hline
${0^ \circ }$ & 0.020 & 0.262 & 0.212 & 0.683 &0.631& 6.959 \\
${25^ \circ }$ & 0.015 & 0.348 &0.350 & 0.319 &0.316 & 17.305 \\
\hline
\end{tabular}
\label{t5}
\end{table}
\end{appendices}
\section*{Funding Information}
National Natural Science Foundation of China (NSFC) (11674397); Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications) (No. IPOC2017ZT04), P. R. China.
\section*{Acknowledgments}
The authors would like to thank Chao Wang for helpful discussion in statistical fluctuations analysis.
\end{document}
\begin{document}
\keywords{operator ideals, diagonals, arithmetic mean closed, block tridiagonal}
\subjclass[2020]{Primary 47B10, 47L20; Secondary 15A42, 47B07.}
\title{Matrix splitting and ideals in $\B(\Hil)$}
\begin{abstract}
We investigate the relationship between ideal membership of an operator and its pieces relative to several canonical types of partitions of the entries of its matrix representation with respect to a given orthonormal basis.
Our main theorems establish that if $T$ lies in an ideal $Mhcal{I}$, then $\sum P_n T P_n$ (or more generally $\sum Q_n T P_n$) lies in the arithmetic mean closure of $Mhcal{I}$ whenever $\delim\{\}{P_n}$ (and also $\delim\{\}{Q_n}$) is a sequence of mutually orthogonal projections; and in any basis for which $T$ is a block band matrix, in particular, when in Patnaik--Petrovic--Weiss universal block tridiagonal form, then all the sub/super/main-block diagonals of $T$ are in $Mhcal{I}$.
In particular, the principal ideal generated by such a $T$ is the finite sum of the principal ideals generated by each of its sub/super/main-block diagonals.
\end{abstract}
\section{Introduction}
In the study of infinite matrix representations of operators in $Mhcal{B}(Mhcal{H})$, and especially the structure of commutators, it is common and natural to split a target operator $T$ into a sum of two (or finitely many) natural parts.
For example, every finite matrix is the sum of its upper triangular part and its lower triangular part (including the diagonal in either part as you choose).
Formally this obviously holds also for infinite matrices, but not in $Mhcal{B}(Mhcal{H})$.
That is, as is well-known, the upper or lower triangular part of a matrix representation for a bounded operator is not necessarily a bounded operator.
The Laurent matrix $Mrep{\frac{1}{i-j}}_{i \neq j}$ with zero diagonal represents a bounded operator, but its upper and lower triangular parts represent unbounded operators.
From this we can produce a compact operator whose upper triangular part is unbounded.
\begin{example}
\label{ex:bdd-upp-triangular-unbdd}
Consider the zero-diagonal Laurent matrix $Mrep{\frac{1}{i-j}}_{i \neq j}$, which corresponds to the Laurent multiplication operator $M_{\phi} \in Mhcal{B}(L^2(\mathbb{S}^1))$ where
\begin{equation*}
\phi(z) := \sum_{0 \neq n \in \mathbb{Z}} \frac{z^n}{n} = \sum_{n=1}^{\infty} \frac{z^n}{n} - \sum_{n=1}^{\infty} \frac{\conj{z}^n}{n} = \log(1-\conj{z}) - \log(1-z) = \log\delim()*{\frac{\conj{1-z}}{1-z}},
\end{equation*}
which is bounded: the function $\frac{\conj{1-z}}{1-z}$ has unit modulus, so its principal logarithm takes values in $i(-\pi,\pi]$, and hence $\phi \in L^{\infty}(\mathbb{S}^1)$.
On the other hand, the upper triangular part $\Delta(M_{\phi})$ of $M_{\phi}$ corresponds to multiplication by one of the two logarithmic summands of $\phi$ (which one depends on indexing conventions), neither of which lies in $L^{\infty}(\mathbb{S}^1)$; it is therefore not a bounded operator.
Additionally, as is well-known, the same boundedness/unboundedness properties are shared by the corresponding Toeplitz operator $T_{\phi}$ and its $\Delta(T_{\phi})$.
Indeed, this follows from the fact that if $P \in Mhcal{B}(L^2(\mathbb{S}^1))$ is the projection onto the Hardy space $H^2$, then $P M_{\phi} P$ and $P^{\perp} M_{\phi} P^{\perp}$ are unitarily equivalent, and $P M_{\phi} P^{\perp} = P \Delta(M_{\phi}) P^{\perp}$ is bounded.
To produce a compact operator whose upper triangular part is unbounded, take successive corners $P_n T_{\phi} P_n$ where $P_n$ is the projection onto $\spans \delim\{\}{e_1,\ldots,e_n}$.
Since $\norm{P_n T_{\phi} P_n} \uparrow \norm{T_{\phi}}$ and $\norm{P_n \Delta(T_{\phi}) P_n} \uparrow \infty$, the operator $\bigoplus_{n=1}^{\infty} \frac{P_n T_{\phi} P_n}{\norm{P_n \Delta(T_{\phi}) P_n}^{1/2}}$ is compact and its upper triangular part is unbounded.
Similarly, $\bigoplus_{n=1}^{\infty} \frac{P_n T_{\phi} P_n}{\norm{P_n \Delta(T_{\phi}) P_n}}$ is compact but its upper triangular part is bounded and noncompact.
\end{example}
Focusing attention on $Mhcal{B}(Mhcal{H})$ ideals yields a fruitful area of study:
for a Hilbert--Schmidt operator, in any basis, any partition of the entries of its matrix representation has its parts again Hilbert--Schmidt.\footnotemark{}
This leads to a natural question for which the authors are unaware of the answer: is the Hilbert--Schmidt ideal the \emph{only} (nonzero) ideal with this property?
\footnotetext{
Of course, for any ideal $Mhcal{I}$ contained within the Hilbert--Schmidt ideal $Mhcal{L}_2$, and any $T \in Mhcal{I}$, the upper triangular part $\Delta(T) \in Mhcal{L}_2$, but one may wonder if anything stronger can be said.
In the case of the trace-class ideal $Mhcal{L}_1$, Gohberg--Krein \cite[Theorem~III.2.1]{GK-1970} showed that $\Delta(T)$, in the terminology of \cite{DFWW-2004-AM}, lies in the arithmetic mean closure of the principal ideal generated by $\diag(\frac{1}{n})$.
}
For the compact operators $Mhcal{K}(Mhcal{H})$, depending on the shape of the matrix parts for $T$, the problem of determining when its parts are in $Mhcal{K}(Mhcal{H})$ (i.e., ideal invariant) can be a little subtler.
Indeed, as noted in \Cref{ex:bdd-upp-triangular-unbdd}, the upper triangular part of a compact operator may fail to be compact, or even bounded;
on the other hand, it is well-known and elementary that the diagonal sequence $\delim(){d_n}$ of a compact operator converges to zero (i.e., $\diag \delim(){d_n}$ is compact), and the same holds for all the sub/super-diagonals as well.
In contrast, this fails for certain matrix representations for a finite rank operator;
that is, the diagonal of a finite rank operator may not be finite rank (e.g., $(\frac{1}{ij})_{i,j \ge 1}$ is rank-1 but its diagonal $\diag(\frac{1}{j^2}) \notin Mhcal{F}(Mhcal{H})$).
Here we study this question for general $Mhcal{B}(Mhcal{H})$-ideals: For an ideal $Mhcal{I}$ and all pairs $\delim\{\}{P_n}, \delim\{\}{Q_n}$ of sequences of mutually orthogonal projections, when are the generalized diagonals $\sum Q_n T P_n \in Mhcal{I}$ whenever $T \in Mhcal{I}$? (The canonical block diagonals are $\sum P_{n+k} T P_n$ and $\sum P_n T P_{n+k}$.)
We find this especially pertinent to our current search for commutator forms of compact operators \cite{PPW-2021}, which grew out of \cite{BPW-2014-VLOT}; and, in view of the second author's work with V. Kaftal \cite{KW-2011-IUMJ} on diagonal invariance for ideals, it is also useful for the recent discoveries by the second author with S. Petrovic and S. Patnaik \cite{PPW-2020-TmloVLOt} on their universal finite-block tridiagonalization of arbitrary $Mhcal{B}(Mhcal{H})$ operators and for the consequent work on commutators \cite{PPW-2021}.
The questions we address evolved as follows:
\begin{enumerate}
\item For which $Mhcal{B}(Mhcal{H})$-ideals $Mhcal{I}$ does a tridiagonal operator $T$ have its three diagonal parts also in $Mhcal{I}$?
This question arose from the stronger question: for which tridiagonal operators $T \in Mhcal{K}(Mhcal{H})$ are the diagonal parts in $\delim<>{T}$?
\Cref{thm:bandable} guarantees the latter is always true, even for finite band operators.
\item The same questions but more generally for a block tridiagonal $T$ (see \Cref{def:block-decomposition}) and its three block diagonals (see \Cref{def:shift-representation}).
Again, \Cref{thm:bandable} guarantees this is always true, and likewise for finite block band operators.
That is, if
\(
T =
\begin{pNiceMatrix}[small,xdots/line-style=solid]
B & A & 0 & {} \\[-1em]
C & \Ddots & \Ddots & \Ddots \\[-1em]
0 & \Ddots & & {} \\[-1em]
& \Ddots & & {} \\
\end{pNiceMatrix}
\in Mhcal{I}
\),
then
\(
\begin{pNiceMatrix}[small,xdots/line-style=solid]
0 & A & 0 & {} \\[-1em]
0 & \Ddots & \Ddots & \Ddots \\[-1em]
0 & \Ddots & & {} \\[-1em]
& \Ddots & & {} \\
\end{pNiceMatrix}
\in Mhcal{I}
\),
and similarly for $B,C$.
\item A more general context: given two sequences of (separately) mutually orthogonal projections, $\delim\{\}{P_n}_{n=1}^{\infty}, \delim\{\}{Q_n}_{n=1}^{\infty}$, for $T \in Mhcal{I}$ what can be said about ideal membership for $\sum_{n=1}^{\infty} Q_n T P_n$?
In \Cref{thm:sum-off-diagonal-corners-am-closure} we establish that $\sum_{n=1}^{\infty} Q_n T P_n$ always lies in the arithmetic mean closure $\amclosure{Mhcal{I}}$ defined in \cite{DFWW-2004-AM} (see herein \cpageref{def:am-closed}).
This follows from a generalization (see \Cref{thm:fans-theorem-pinching}) of Fan's famous submajorization theorem \cite[Theorem~1]{Fan-1951-PNASUSA} concerning partial sums of diagonals of operators.
\end{enumerate}
Throughout the paper we will prefer bi-infinite sequences (i.e., indexed by $\mathbb{Z}$ instead of $\mathbb{N}$) of projections, but this is only to make the descriptions simpler;
we will not, however, use the term \term{bi-infinite} unless necessary for context.
The projections are allowed to be zero, so this is no restriction.
We first establish some terminology.
\begin{definition}
\label{def:block-decomposition}
A sequence $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ of mutually orthogonal projections $P_n \in Mhcal{B}(Mhcal{H})$ for which $\sum P_n = I$ is a \term{block decomposition} and for $T \in Mhcal{B}(Mhcal{H})$, partitions it into a (bi-)infinite matrix of operators $T_{i,j} := P_i T P_j$.
We say that an operator $T$ is a \term{block band operator relative to $\delim\{\}{P_n}$} if there is some $M \ge 0$, called the \term{block bandwidth}, for which $T_{i,j} = 0$ whenever $\abs{i - j} > M$.
If $M=0$ (resp. $M=1$), we say $T$ is \emph{block diagonal (resp. block tridiagonal) relative to $\delim\{\}{P_n}$}.
Finally, in all the above definitions, if $\trace P_n \le 1$ for all $n \in \mathbb{Z}$, which, up to a choice of phase for each range vector, simply corresponds to a choice of orthonormal basis, then we omit the word ``block.''
In this case, the operators $T_{i,j}$ are scalars and $Mrep{T_{i,j}}$ is the matrix representation (again, up to a choice of phase for each vector) for $T$ relative to this basis.
If $\delim\{\}{Q_n}_{n \in \mathbb{Z}}$ is an (unrelated) block decomposition, the pair $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ still determines a (bi-)infinite matrix of operators $T_{i,j} = Q_i T P_j$, but this time there is an inherent asymmetry in that $(T^{*})_{i,j} \neq (T_{j,i})^{*}$.
In this case, the terms defined just above may be modified with the adjective ``asymmetric.''
\end{definition}
\begin{definition}
\label{def:shift-representation}
Suppose that $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ is a block decomposition for an operator $T \in Mhcal{B}(Mhcal{H})$.
For each $k \in \mathbb{Z}$, we call
\begin{equation*}
T_k := \sum_{n \in \mathbb{Z}} T_{n,n+k} = \sum_{n \in \mathbb{Z}} P_n T P_{n+k}
\end{equation*}
the \term{$k^{Mhrm{th}}$ block diagonal} of $T$, which converges in the strong operator topology.
Visually, these operators may be described with the following diagram\footnotemark{}:
\begin{center}
\includegraphics{shift-decomposition-figure.pdf}
\end{center}
\footnotetext{
For the case when the projections $P_n = 0$ for $n \in \mathbb{Z} \setminus \mathbb{N}$, the matrix below is uni-infinite.
This recovers uni-infinite matrix results from the bi-infinite approach we described in the paragraph preceding \Cref{def:block-decomposition}.
}
We call the collection $\delim\{\}{T_k}_{k \in \mathbb{Z}}$ the \term{shift decomposition} of $T$ (relative to the block decomposition $\delim\{\}{P_n}_{n \in \mathbb{Z}}$).
The \term{asymmetric shift decomposition} $\delim\{\}{T_k}_{k \in \mathbb{Z}}$ relative to \emph{different} block decompositions $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ is given by
\begin{equation*}
T_k := \sum_{n \in \mathbb{Z}} Q_n T P_{n+k}.
\end{equation*}
We note for future reference that sums of the above form don't require the sequences of projections to sum to the identity in order to converge in the strong operator topology, only that each sequence consists of mutually orthogonal projections.
Moreover, it is elementary to show that when $T$ is compact, so is $T_k$ for all $k \in \mathbb{Z}$.
\end{definition}
\begin{remark}
\label{rem:shift-decomposition-explanation}
Although one has the formal equality $T = \sum_{k \in \mathbb{Z}} T_k$ in the sense that $T$ is uniquely determined by $\delim\{\}{T_k}_{k \in \mathbb{Z}}$, this sum doesn't necessarily converge even in the weak operator topology \cite{Mer-1985-PAMS}, hence it doesn't converge in any of the usual operator topologies.
If $\rank P_n = 1$ (and $Q_n = P_n$) for all $n \in \mathbb{Z}$ then $\sum_{k \in \mathbb{Z}} T_k$ does converge to $T$ in the \term{Bures topology}\footnotemark{} \cite{Bur-1971,Mer-1985-PAMS}.
On the other hand, if $T$ is a block band operator relative to this block decomposition, then convergence is irrelevant: $T = \sum_{k=-M}^M T_k$.
\footnotetext{
The Bures topology on $B(Mhcal{H})$ is a locally convex topology constructed from the (rank-1) projections $P_n$ as follows.
Let $Mhcal{D} = \bigoplus_{n \in \mathbb{Z}} P_n B(Mhcal{H}) P_n$ be the algebra of diagonal matrices and $E : B(Mhcal{H}) \to Mhcal{D}$ the conditional expectation given by $T \mapsto T_0 := \sum_{n \in \mathbb{Z}} P_n T P_n$.
Then to each $\omega \in \ell_1 \cong Mhcal{D}_{*}$, associate the seminorm $T \mapsto \trace(\diag(\omega) E(T^{*}T)^{\frac{1}{2}})$, where $\diag : \ell_{\infty} \to Mhcal{D}$ is the natural *-isomorphism.
These seminorms generate the Bures topology.
}
The reason for our ``shift'' terminology in \Cref{def:shift-representation} is that if the block decomposition $\delim\{\}{P_n}_{n \in \mathbb{Z}}$ consists of rank-1 projections, then the operators $T_k$ have the form $T_k = U^k D_k$ where $D_k$ are diagonal operators and $U$ is the bilateral shift relative to any orthonormal basis corresponding to $\delim\{\}{P_n}_{n \in \mathbb{Z}}$.
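For instance (a sketch, under the orientation convention $U e_j = e_{j-1}$, so that $U^k e_{n+k} = e_n$ for every $k \in \mathbb{Z}$):
\begin{equation*}
T_k = \sum_{n \in \mathbb{Z}} T_{n,n+k}\, e_n \otimes e_{n+k} = U^k \sum_{n \in \mathbb{Z}} T_{n,n+k}\, e_{n+k} \otimes e_{n+k} = U^k D_k,
\end{equation*}
where $D_k$ is the diagonal operator whose entry in position $n+k$ is the scalar $T_{n,n+k}$.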
\end{remark}
\begin{remark}
\label{rem:tridiagonalizability}
All compact selfadjoint operators are diagonalizable via the spectral theorem.
However, this is certainly not the case for arbitrary selfadjoint operators, the selfadjoint approximation theorem of Weyl--von Neumann notwithstanding.
Nevertheless, every selfadjoint operator with a cyclic vector is \emph{tri}diagonalizable;
for $T = T^{*}$ with cyclic vector $v$, apply Gram--Schmidt to the linearly independent spanning collection $\delim\{\}{T^n v}_{n=0}^{\infty}$ and then $T$ is tridiagonal in the resulting orthonormal basis.
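In outline: since $e_n \in \spans\delim\{\}{v, Tv, \ldots, T^n v}$, one has $T e_n \in \spans\delim\{\}{e_0, \ldots, e_{n+1}}$, so $\delimpair<{[.],}>{T e_n}{e_m} = 0$ for $m > n+1$; selfadjointness then gives $\delimpair<{[.],}>{T e_n}{e_m} = \delimpair<{[.],}>{e_n}{T e_m} = 0$ also for $m < n-1$, so the matrix of $T$ in the basis $\delim\{\}{e_n}_{n=0}^{\infty}$ is indeed tridiagonal.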
Consequently, by decomposing $Mhcal{H}$ into an orthogonal direct sum of cyclic subspaces, every selfadjoint operator is block diagonal with each nonzero block in the direct sum itself tridiagonal.
The second author, along with Patnaik and Petrovic \cite{PPW-2020-TmloVLOt,PPW-2021}, recently established that every bounded operator is \emph{block} tridiagonalizable, meaning $T = T_{-1} + T_0 + T_1$, hence block banded (with block bandwidth $1$) and with finite block sizes growing no faster than exponential.
\end{remark}
Our first main theorem is an algebraic equality of ideals for block band operators relative to some block decomposition.
\begin{theorem}
\label{thm:bandable}
Let $T \in Mhcal{B}(Mhcal{H})$ be an asymmetric block band operator of bandwidth $M$ relative to the block decompositions $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$, and let $\delim\{\}{T_k}_{k=-M}^M$ be the asymmetric shift decomposition of $T$.
Then the following ideal equality holds:
\begin{equation*}
\delim<>{T} = \sum_{k=-M}^M \delim<>{T_k}.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is essentially due to the following observation:
if you zoom out and squint, then a band matrix looks diagonal.
That is, we exploit the relative thinness of the diagonal strip of support entries.
The ideal inclusion $\delim<>{T} \subseteq \sum_{k=-M}^M \delim<>{T_k}$ is obvious since $T = \sum_{k=-M}^M T_k$.
Therefore it suffices to prove $T_k \in \delim<>{T}$ for each $-M \le k \le M$.
Indeed, for $-M \le j,k \le M$ define projections $R_{k,j} := \sum_{n \in \mathbb{Z}} P_{n(2M+1) + j + k}$ and $S_j := \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j}$.
Then whenever $n \neq m$, $Q_{n(2M+1) + j} T P_{m(2M+1) + j + k} = 0$ since the bandwidth of $T$ is $M$ and
\begin{align*}
\abs{\delim()!{n(2M+1) + j} - \delim()!{m(2M+1) + j + k} }
&\ge \abs{n-m}(2M+1) - \abs{k} \\
&\ge (2M+1) - M > M.
\end{align*}
Therefore, for each $k,j$,
\begin{equation*}
S_j T R_{k,j} = \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j} T P_{n(2M+1) + j + k}
\end{equation*}
converges in the strong operator topology, and summing over $j$ yields
\begin{equation*}
\sum_{j=-M}^M S_j T R_{k,j} = \sum_{j=-M}^{M} \sum_{n \in \mathbb{Z}} Q_{n(2M+1) + j} T P_{n(2M+1) + j + k} = \sum_{n \in \mathbb{Z}} Q_n T P_{n+k} = T_k.
\end{equation*}
As a finite sum, the left-hand side is trivially in $\delim<>{T}$ and therefore so is each $k^{Mhrm{th}}$ generalized block diagonal $T_k$.
\end{proof}
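To illustrate the construction in the proof with a sketch of the simplest nontrivial case ($M = 1$, $Q_n = P_n$ of rank one, so $T$ is a scalar tridiagonal matrix): the indices are grouped modulo $2M+1 = 3$, and for the first superdiagonal ($k = 1$) the three compressions
\begin{equation*}
S_{-1} T R_{1,-1} = \sum_{n \in \mathbb{Z}} Q_{3n-1} T P_{3n}, \qquad
S_0 T R_{1,0} = \sum_{n \in \mathbb{Z}} Q_{3n} T P_{3n+1}, \qquad
S_1 T R_{1,1} = \sum_{n \in \mathbb{Z}} Q_{3n+1} T P_{3n+2}
\end{equation*}
each select every third entry of that superdiagonal; their sum is exactly $T_1$, and each of them, being a two-sided compression of $T$ by projections, lies in $\delim<>{T}$.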
Before establishing our second main theorem (\Cref{thm:sum-off-diagonal-corners-am-closure}), we acquaint the reader with the prerequisite ideas concerning Fan's theorem \cite[Theorem~1]{Fan-1951-PNASUSA}, Hardy--Littlewood submajorization, fundamentals of the theory of operator ideals and arithmetic mean closed ideals, all of which are intimately related.
For a single operator, Fan's submajorization theorem \cite[Theorem~1]{Fan-1951-PNASUSA} states that if the matrix representation for a compact operator $T \in Mhcal{K}(Mhcal{H})$ has diagonal sequence $\delim(){d_j}_{j \in J}$ (with any index set $J$), then
\begin{equation}
\label{eq:submajorization}
\sum_{n=1}^m \abs{d_n}^{*} \le \sum_{n=1}^m s_n(T) \quad\text{for all } m \in \mathbb{N},
\end{equation}
where $s(T) := \delim(){s_n(T)}_{n \in \mathbb{N}}$ denotes the (monotone) \term{singular value sequence} of $T$, and where $\delim(){\abs{d_n}^{*}}_{n \in \mathbb{N}}$ denotes the \term{monotonization}\footnotemark{} of the (possibly unordered) sequence $\delim(){\abs{d_j}}_{j \in J}$;
the monotonization is always an element of the convex cone $c_0star$ of nonnegative nonincreasing sequences (indexed by $\mathbb{N}$) converging to zero, even when $\delim(){\abs{d_j}}_{j \in J}$ is indexed by another set $J$ different from $\mathbb{N}$.
The set of inequalities \eqref{eq:submajorization} may be encapsulated, for pairs of sequences in $c_0star$, by saying that $\delim(){\abs{d_n}^{*}}$ is \term{submajorized} by $s(T)$, which is often denoted $\delim(){\abs{d_n}^{*}} \pprec s(T)$, although the precise notation for submajorization varies throughout the literature.
We remark the trivial fact that the submajorization order is finer than the usual pointwise order on $c_0star$;
that is, $\delim(){a_n} \le \delim(){b_n}$ implies $\delim(){a_n} \pprec \delim(){b_n}$ for any $\delim(){a_n}, \delim(){b_n} \in c_0star$.
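For a concrete finite-rank illustration of \eqref{eq:submajorization}: the rank-one matrix $\left(\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ has singular value sequence $(2,0,0,\ldots)$ and diagonal $(1,1,0,\ldots)$, and the partial sums satisfy $1 \le 2$ and $1+1 \le 2+0$; thus the diagonal is submajorized by the singular value sequence even though the pointwise inequality fails in the second entry.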
\footnotetext{
This is the measure-theoretic \term{nonincreasing rearrangement} relative to the counting measure on the index set, say $J$, of $\delim(){\abs{d_n}}$.
Associated to this, there is an injection (not necessarily a bijection) $\pi : \mathbb{N} \to J$ with $d^{-1}(\mathbb{C}\setminus\delim\{\}{0}) \subseteq \pi(\mathbb{N})$ such that $\abs{d_n}^{*} = \abs{d_{\pi(n)}}$.
This of course requires $0 \notin (d \circ \pi)(\mathbb{N})$ when $d^{-1}(\mathbb{C}\setminus\delim\{\}{0})$ is infinite since $\delim(){\abs{d_n}^{*}}$ is nonincreasing.
}
However, we view Fan's theorem in a slightly different way which is more amenable to our purposes.
In particular, consider the canonical trace-preserving conditional expectation\footnotemark{} $E : Mhcal{B}(Mhcal{H}) \to Mhcal{D}$ onto the masa (maximal abelian selfadjoint algebra) of diagonal operators relative to a fixed, but arbitrary, orthonormal basis.
Then the sequence $\delim(){\abs{d_n}^{*}}$ is simply $s(E(T))$, and in this language:
\footnotetext{
For an inclusion of unital C*-algebras $Mhcal{B} \subseteq Mhcal{A}$ (with $1_{Mhcal{B}} = 1_{Mhcal{A}}$), a \term{conditional expectation of $Mhcal{A}$ onto $Mhcal{B}$} is a unital positive linear map $E : Mhcal{A} \to Mhcal{B}$ such that $E(bab') = bE(a)b'$ for all $a \in Mhcal{A}$ and $b,b' \in Mhcal{B}$.
A conditional expectation is called \term{faithful} if $a \ge 0$ and $E(a) = 0$ imply $a = 0$.
If $Mhcal{A}$ is a semifinite von Neumann algebra with a faithful normal semifinite trace $\tau$, then the expectation is said to be \term{trace-preserving} if $\tau(a) = \tau(E(a))$ for all $a \in Mhcal{A}_+$.
}
\begin{theorem}[\protect{\cite[Theorem~1]{Fan-1951-PNASUSA}}]
\label{thm:fans-theorem}
If $T \in Mhcal{K}(Mhcal{H})$ and $E : Mhcal{B}(Mhcal{H}) \to Mhcal{D}$ is the canonical conditional expectation onto a masa of diagonal operators, then
\begin{equation*}
s(E(T)) \pprec s(T),
\end{equation*}
that is, $s(E(T))$ is submajorized by $s(T)$.
\end{theorem}
The submajorization order features prominently in operator theory, but especially in the theory of diagonals of operators and in the related theory of operator ideals in $Mhcal{B}(Mhcal{H})$.
For the reader's convenience we briefly review the basics of ideal theory.
Let $c_0star$ denote the convex cone of nonnegative nonincreasing sequences converging to zero.
To an ideal $Mhcal{I}$, Schatten \cite{Sch-1970}, in a manner quite similar to Calkin \cite{Cal-1941-AoM2}, associated the convex subcone $\Sigma(Mhcal{I}) := \delim\{\}b{ s(T) \in c_0star }{ T \in Mhcal{I} }$, called the \term{characteristic set} of $Mhcal{I}$, which satisfies the properties:
\begin{enumerate}
\item If $\delim(){a_n} \le \delim(){b_n}$ (pointwise) and $\delim(){b_n} \in \Sigma(Mhcal{I})$, then $\delim(){a_n} \in \Sigma(Mhcal{I})$;
that is, $\Sigma(Mhcal{I})$ is a \term{hereditary subcone} of $c_0star$ with respect to the usual pointwise ordering.
\item If $\delim(){a_n} \in \Sigma(Mhcal{I})$, then $\delim(){a_{\delim\lceil\rceil{\frac{n}{2}}}} \in \Sigma(Mhcal{I})$;
that is, $\Sigma(Mhcal{I})$ is closed under \term{$2$-ampliations}.
\end{enumerate}
Likewise, if $S$ is a hereditary (with respect to the pointwise order) convex subcone of $c_0star$ which is closed under $2$-ampliations, then $Mhcal{I}_S := \delim\{\}b{ T \in Mhcal{K}(Mhcal{H}) }{ s(T) \in S }$ is an ideal of $Mhcal{B}(Mhcal{H})$.
Finally, the maps $S \mapsto Mhcal{I}_S$ and $Mhcal{I} \mapsto \Sigma(Mhcal{I})$ are inclusion-preserving inverses between the classes of $Mhcal{B}(Mhcal{H})$-ideals and characteristic subsets of $c_0star$.
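For example, the characteristic set of the trace class $Mhcal{L}_1$ is $\Sigma(Mhcal{L}_1) = \delim\{\}b{ \delim(){a_n} \in c_0star }{ \sum_{n} a_n < \infty }$: it is clearly a hereditary convex subcone, and it is closed under $2$-ampliations because $\sum_{n} a_{\delim\lceil\rceil{\frac{n}{2}}} = 2 \sum_{n} a_n$.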
Ideals whose characteristic sets are also hereditary subcones with respect to the submajorization order (i.e., $B \in Mhcal{I}$ and $s(A) \pprec s(B)$ implies $A \in Mhcal{I}$) were introduced by Dykema, Figiel, Weiss and Wodzicki\fnmark{dfww} in \cite{DFWW-2004-AM} and are said to be \label{def:am-closed}\term{arithmetic mean closed}\footnotemark{} (abbreviated as \term{am-closed}).
Given an ideal $Mhcal{I}$, the smallest am-closed ideal containing $Mhcal{I}$ is called the am-closure, denoted $\amclosure{Mhcal{I}}$, and its characteristic set consists simply of the hereditary closure (with respect to the submajorization order) of $\Sigma(Mhcal{I})$.
That is,
\begin{equation*}
\Sigma\delim()1{\amclosure{Mhcal{I}}} = \delim\{\}b1{ \delim(){a_n} \in c_0star }{ \exists \delim(){b_n} \in \Sigma(Mhcal{I}), \delim(){a_n} \pprec \delim(){b_n} }.
\end{equation*}
In general, ideals are not am-closed.
Indeed, the sequence $\delim(){1,0,0,\dots}$ corresponding to a rank-1 projection $P$ submajorizes any (nonnegative) sequence $\delim(){a_n}$ whose sum is at most $1$.
Consequently, if $T \in Mhcal{L}_1$, the trace class, then $s(T) \pprec s(\trace(\abs{T})P)$.
Therefore, since any nonzero ideal $Mhcal{I}$ contains the finite rank operators, if it is am-closed it must also contain the trace class $Mhcal{L}_1$.
Additionally, it is immediate that $Mhcal{L}_1$ is am-closed, making it the minimum nonzero am-closed ideal.
\footnotetext[\arabic{dfww}]{
The description given in \cite{DFWW-2004-AM} is not in terms of the submajorization order, but these two definitions are easily shown to be equivalent.
Instead, for an ideal $Mhcal{I}$, \cite{DFWW-2004-AM} defines the \term{arithmetic mean ideal} $Mhcal{I}_a$ and \term{pre-arithmetic mean ideal} ${}_aMhcal{I}$ whose characteristic sets are given by
\begin{gather*}
\Sigma(Mhcal{I}_a) := \delim\{\}b*{ \delim(){a_n} \in c_0star }{ \exists \delim(){b_n} \in \Sigma(Mhcal{I}), a_n \le \frac{1}{n} \sum_{k=1}^n b_k } \\
\Sigma({}_aMhcal{I}) := \delim\{\}b*{ \delim(){a_n} \in c_0star }{ \exists \delim(){b_n} \in \Sigma(Mhcal{I}), \frac{1}{n} \sum_{k=1}^n a_k \le b_n }
\end{gather*}
Then the \term{arithmetic mean closure} of $Mhcal{I}$ is $\amclosure{Mhcal{I}} := {}_a(Mhcal{I}_a)$, and $Mhcal{I}$ is called \term{am-closed} if $Mhcal{I} = \amclosure{Mhcal{I}}$.
This viewpoint also allows one to define the \term{arithmetic mean interior} $({}_aMhcal{I})_a$, and one always has the inclusions ${}_aMhcal{I} \subseteq ({}_aMhcal{I})_a \subseteq Mhcal{I} \subseteq {}_a(Mhcal{I}_a) \subseteq Mhcal{I}_a$.
}
\footnotetext{
Although am-closed ideals were introduced in this generality by \cite{DFWW-2004-AM}, they had been studied at least as early as \cite{GK-1969-ITTTOLNO,Rus-1969-FAA}, but only in the context of \term{symmetrically normed ideals}.
In the study of symmetrically normed ideals by Gohberg and Krein \cite{GK-1969-ITTTOLNO}, they only considered those which were already am-closed, but they did not have any terminology associated to this concept.
Around the same time, both Mityagin \cite{Mit-1964-IANSSM} and Russu \cite{Rus-1969-FAA} concerned themselves with the existence of so-called \term{intermediate} symmetrically normed ideals, which are necessarily not am-closed, or in the language of Russu, do not possess the \term{majorant property}.
In \cite{Rus-1969-FAA}, Russu also established that the majorant property is equivalent to the \term{interpolation property} studied by Mityagin \cite{Mit-1965-MSNS} and Calder\'on \cite{Cal-1966-SM}.
In the modern theory of symmetrically normed ideals, those which are am-closed (equivalently, have the majorant or interpolation properties), are said to be \term{fully symmetric}, but this term also implies the norm preserves the submajorization order.
For more information on fully symmetrically normed ideals and related topics, we refer the reader to \cite{LSZ-2013-STTAA}.
}
Arithmetic mean closed ideals are important within the lattice of operator ideals not least for their connection to Fan's theorem, but also because of the following sort of converse due to the second author with Kaftal.
\begin{theorem}[\protect{\cite[Corollaries~4.4,~4.5]{KW-2011-IUMJ}}]
\label{thm:diagonal-invariance}
For an operator ideal $Mhcal{I}$, and the canonical conditional expectation $E : Mhcal{B}(Mhcal{H}) \to Mhcal{D}$ onto a masa of diagonal operators,
\begin{equation*}
E(Mhcal{I}) = \amclosure{Mhcal{I}} \cap Mhcal{D}.
\end{equation*}
Consequently, $Mhcal{I}$ is am-closed if and only if $E(Mhcal{I}) \subseteq Mhcal{I}$.
\end{theorem}
They used the term \term{diagonal invariance} to refer to $E(Mhcal{I}) \subseteq Mhcal{I}$, and so $Mhcal{I}$ is am-closed if and only if it is diagonally invariant.
The reader should note that the inclusion $E(Mhcal{I}) \subseteq \amclosure{Mhcal{I}} \cap Mhcal{D}$ is a direct consequence of Fan's theorem, when viewed through the lens of \Cref{thm:fans-theorem}, so the new content of \Cref{thm:diagonal-invariance} lies primarily in the reverse inclusion.
At this point, we note an important contrapositive consequence of \Cref{thm:bandable} and \Cref{thm:diagonal-invariance}.
If $T$ is positive and $\delim<>{T}$ is not am-closed, then by \Cref{thm:diagonal-invariance} there is some basis in which the main diagonal of $T$ does not lie in $\delim<>{T}$; therefore, by \Cref{thm:bandable}, $T$ is not a band operator in this basis.
The next theorem, due originally to Gohberg--Krein \cite[Theorems~II.5.1 and III.4.2]{GK-1969-ITTTOLNO}, bootstraps \Cref{thm:fans-theorem} to apply to conditional expectations onto block diagonal algebras instead of simply diagonal masas.
We include this more modern proof both for completeness and to make the statement accord with that of \Cref{thm:fans-theorem}.
\begin{theorem}
\label{thm:fans-theorem-pinching}
Let $Mhcal{P} = \delim\{\}{P_n}_{n \in \mathbb{Z}}$ be a block decomposition and consider the associated conditional expectation $E_{\mathcal{P}} : Mhcal{B}(Mhcal{H}) \to \bigoplus_{n \in \mathbb{Z}} P_n Mhcal{B}(Mhcal{H}) P_n$ defined by $E_{\mathcal{P}}(T) := T_0 = \sum_{n \in \mathbb{Z}} P_n T P_n$.
If $T \in Mhcal{K}(Mhcal{H})$, then $s(E_{\mathcal{P}}(T))$ is submajorized by $s(T)$, i.e.,
\begin{equation*}
s(E_{\mathcal{P}}(T)) \pprec s(T).
\end{equation*}
Moreover, if $T \in Mhcal{I}$, then $E_{\mathcal{P}}(T) \in \amclosure{Mhcal{I}}$.
In addition, if $s(E_{\mathcal{P}}(T)) = s(T)$, then $E_{\mathcal{P}}(T) = T$.
\end{theorem}
\begin{proof}
Suppose that $Mhcal{D}$ is a diagonal masa contained in the algebra $\bigoplus_{n \in \mathbb{Z}} P_n Mhcal{B}(Mhcal{H}) P_n$, and let $E : Mhcal{B}(Mhcal{H}) \to Mhcal{D}$ be the associated canonical trace-preserving conditional expectation.
Because of the algebra inclusions, we see that $E \circ E_{\mathcal{P}} = E$.
Let $T \in Mhcal{K}(Mhcal{H})$ and consider $E_{\mathcal{P}}(T)$.
By applying the Schmidt decomposition to each $P_n T P_n$ one obtains partial isometries $U_n, V_n$ (the latter may even be chosen unitary) in $P_n Mhcal{B}(Mhcal{H}) P_n$ so that $U_n P_n T P_n V_n$ is a positive operator in $Mhcal{D}$.
Then $U := \bigoplus_{n \in \mathbb{Z}} U_n, V := \bigoplus_{n \in \mathbb{Z}} V_n$ are partial isometries for which $s(E(U E_{\mathcal{P}}(T) V)) = s(E_{\mathcal{P}}(T))$.
Then since $U,V \in \bigoplus_{n \in \mathbb{Z}} P_n Mhcal{B}(Mhcal{H}) P_n$ they commute with the conditional expectation $E_{\mathcal{P}}$ and hence
\begin{equation*}
s(E_{\mathcal{P}}(T)) = s(E(U E_{\mathcal{P}}(T) V)) = s(E(E_{\mathcal{P}}(UTV))) = s(E(UTV)).
\end{equation*}
By Fan's theorem (\Cref{thm:fans-theorem}), $s(E(UTV)) \pprec s(UTV) \le \norm{U}\norm{V}\, s(T) \le s(T)$, and therefore $s(E_{\mathcal{P}}(T)) \pprec s(T)$.
Finally, this fact along with the definition of the arithmetic mean closure guarantees $T \in Mhcal{I}$ implies $E_{\mathcal{P}}(T) \in \amclosure{Mhcal{I}}$.
For the case of equality, now suppose that $s(E_{\mathcal{P}}(T)) = s(T)$.
Let $\delim\{\}{e_n}_{n \in \mathbb{N}}$ be an orthonormal sequence of eigenvectors of $E_{\mathcal{P}}(T)^{*} E_{\mathcal{P}}(T)$, each of which is inside one of the subspaces $P_j Mhcal{H}$, satisfying $E_{\mathcal{P}}(T)^{*} E_{\mathcal{P}}(T) e_n = s_n(T)^2 e_n$.
Then the projections $Q_n$ onto $\spans\delim\{\}{e_1,\ldots,e_n}$ commute with each $P_j$, and hence also with the expectation $E_{\mathcal{P}}$.
We note for later reference that
\begin{equation}
\label{eq:epTQnperp}
\norm{E_{\mathcal{P}}(T)Q_n^{\perp}}^2 = \norm{Q_n^{\perp}E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T) Q_n^{\perp}} \le s_{n+1}(E_{\mathcal{P}}(T))^2.
\end{equation}
Observe that for any operator $X$, because $P_j X^{*} P_j X P_j \le P_j X^{*} X P_j$
\begin{equation}
\label{eq:epX}
E_{\mathcal{P}}(X)^{*}E_{\mathcal{P}}(X) = \sum_{j \in \mathbb{Z}} P_j X^{*} P_j X P_j \le \sum_{j \in \mathbb{Z}} P_j X^{*} X P_j = E_{\mathcal{P}}(X^{*}X),
\end{equation}
with equality if and only if $P_j X^{*} P_j^{\perp} X P_j = 0$ for all $j \in \mathbb{Z}$ if and only if $P_j^{\perp} X P_j = 0$ for all $j \in \mathbb{Z}$ if and only if $X = E_{\mathcal{P}}(X)$.
Applying \eqref{eq:epX} to $X = TQ_n$,
\begin{align*}
\sum_{j=1}^n s_j(E_{\mathcal{P}}(T))^2 &= \trace(Q_n E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T) Q_n) \\
&= \trace(E_{\mathcal{P}}(TQ_n)^{*} E_{\mathcal{P}}(TQ_n)) \\
&\le \trace(E_{\mathcal{P}}(Q_n T^{*} T Q_n)) \\
&= \trace(Q_n T^{*} T Q_n) \le \sum_{j=1}^n s_j(T)^2,
\end{align*}
where the last inequality follows from \Cref{thm:fans-theorem}.
We must have equality throughout since $s(E_{\mathcal{P}}(T)) = s(T)$.
Consequently, $TQ_n = E_{\mathcal{P}}(TQ_n) = E_{\mathcal{P}}(T)Q_n$ for all $n \in \mathbb{N}$ by the equality case of \eqref{eq:epX}.
By construction, $\norm{E_{\mathcal{P}}(T)Q_n^{\perp}} \to 0$ as $n \to \infty$, but we also claim
\begin{equation}
\label{eq:TQnperp}
\norm{TQ_n^{\perp}} \le s_{n+1}(T).
\end{equation}
Suppose not.
Then we could find some unit vector $x \in Q_n^{\perp} Mhcal{H}$ with $\delimpair<{[.],}>{T^{*}T x}{x} = \norm{T x}^2 > s_{n+1}(T)^2$, and therefore, for the projection $R = Q_n + (x \otimes x)$,
\begin{equation*}
\trace(RT^{*}TR) = \trace(Q_n T^{*}T Q_n) + \delimpair<{[.],}>{T^{*}T x}{x} > \sum_{j=1}^{n+1} s_j(T)^2,
\end{equation*}
contradicting the fact that, because $R$ is a projection of rank $n+1$, by \Cref{thm:fans-theorem}
\begin{equation*}
\trace(RT^{*}TR) \le \sum_{j=1}^{n+1} s_j(RT^{*}TR) \le \sum_{j=1}^{n+1} s_j(T)^2.
\end{equation*}
Finally, again noting that $TQ_n = E_{\mathcal{P}}(T)Q_n$,
\begin{equation*}
0 \le \norm{T - E_{\mathcal{P}}(T)} \le \norm{T - TQ_n} + \norm{E_{\mathcal{P}}(T)Q_n - E_{\mathcal{P}}(T)} = \norm{TQ_n^{\perp}} + \norm{E_{\mathcal{P}}(T)Q_n^{\perp}}.
\end{equation*}
Since $\norm{TQ_n^{\perp}} \le s_{n+1}(T)$ by \eqref{eq:TQnperp} and $\norm{E_{\mathcal{P}}(T)Q_n^{\perp}} \le s_{n+1}(E_{\mathcal{P}}(T))$ by \eqref{eq:epTQnperp}, the right-hand side converges to zero as $n \to \infty$.
Therefore, $\norm{T - E_{\mathcal{P}}(T)} = 0$ and hence $T = E_{\mathcal{P}}(T)$.
\end{proof}
\begin{remark}
\label{rem:T=ep(T)-hilbert-schmidt}
When $T$ is Hilbert--Schmidt, the proof that $s(E_{\mathcal{P}}(T)) = s(T)$ implies $E_{\mathcal{P}}(T) = T$ may be shortened considerably.
In particular, $T^{*}T, E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$ are trace-class with $s(T^{*}T) = s(E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))$ and so $\trace(T^{*}T) = \trace(E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))$.
Since the expectation $E_{\mathcal{P}}$ is trace-preserving,
\begin{align*}
\trace(E_{\mathcal{P}}(T^{*}T) - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)) &= \trace(E_{\mathcal{P}}(T^{*}T - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T))) \\
&= \trace(T^{*}T - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)) = 0.
\end{align*}
Since $E_{\mathcal{P}}(T^{*}T) - E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$ is a positive operator by \eqref{eq:epX} and the trace is faithful, we must have $E_{\mathcal{P}}(T^{*}T) = E_{\mathcal{P}}(T)^{*}E_{\mathcal{P}}(T)$, and hence $T = E_{\mathcal{P}}(T)$ by the equality case of \eqref{eq:epX}.
\end{remark}
\begin{remark}
Fan's theorem (\Cref{thm:fans-theorem}) is a special case of \Cref{thm:fans-theorem-pinching} by selecting the projections $P_n$ to have rank one, and therefore $E = E_{\mathcal{P}}$.
As we need \Cref{thm:fans-theorem} to prove \Cref{thm:fans-theorem-pinching}, this doesn't provide an independent proof of Fan's theorem.
\end{remark}
Our second main theorem says that there is nothing special about the main diagonal $T_0$: for all $k \in \mathbb{Z}$, $s(T_k) \pprec s(T)$;
moreover, this holds even for \emph{asymmetric} shift decompositions.
\begin{theorem}
\label{thm:sum-off-diagonal-corners-am-closure}
Suppose that $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ are block decompositions and let $T \in Mhcal{K}(Mhcal{H})$ with asymmetric shift decomposition $\delim\{\}{T_k}_{k \in \mathbb{Z}}$.
Then $s(T_k) \pprec s(T)$.
Consequently, if $T$ lies in some ideal $Mhcal{I}$, then $T_k \in \amclosure{Mhcal{I}}$.
\end{theorem}
\begin{proof}
It suffices to prove the theorem for $T_0$ since $T_k$ is simply $T_0$ relative to the translated block decomposition pair $\delim\{\}{P_{n+k}}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$.
Each $Q_n T P_n$ has the polar decomposition $Q_n T P_n = U_n \abs{Q_n T P_n}$ where $U_n$ is a partial isometry\footnotemark{} with $Q_n U_n = U_n = U_n P_n$.
Then $U := \sum_{n \in \mathbb{Z}} U_n$ converges in the strong operator topology since the collections $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ are each mutually orthogonal, and hence $U$ is also a partial isometry.
Moreover,
\begin{equation*}
T_0^{*} T_0 = \delim()*{ \sum_{n \in \mathbb{Z}} P_n T^{*} Q_n } \delim()*{ \sum_{m \in \mathbb{Z}} Q_m T P_m } = \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n}^2.
\end{equation*}
Since the operators $\abs{Q_n T P_n}^2$ are orthogonal (i.e., their products are zero), $\abs{T_0} = (T_0^{*}T_0)^{\frac{1}{2}} = \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n}$.
Thus,
\begin{align*}
E_{\mathcal{P}}(U^{*}T) &= \sum_{n \in \mathbb{Z}} P_n U^{*} T P_n = \sum_{n \in \mathbb{Z}} \delim()*{ \sum_{m \in \mathbb{Z}} P_n U^{*}_m T P_n } \\
&= \sum_{n \in \mathbb{Z}} \delim()*{ \sum_{m \in \mathbb{Z}} P_n P_m U^{*}_m Q_m T P_n } = \sum_{n \in \mathbb{Z}} U^{*}_n Q_n T P_n \\
&= \sum_{n \in \mathbb{Z}} \abs{Q_n T P_n} = \abs{T_0}.
\end{align*}
Finally, by \Cref{thm:fans-theorem-pinching} and since $U^{*}$ is a contraction,
\begin{equation*}
s(T_0) = s(\abs{T_0}) = s(E_{\mathcal{P}}(U^{*}T)) \pprec s(U^{*}T) \le s(T).
\end{equation*}
Therefore, if $T \in Mhcal{I}$, then $T_0 \in \amclosure{Mhcal{I}}$ by definition.
\end{proof}
\footnotetext{That $Q_n U_n = U_n = U_n P_n$ follows from well-known facts (e.g., see \cite[Theorem~I.8.1]{Dav-1996}) when $U_n$ is taken to be the canonical unique partial isometry on $Mhcal{H}$ mapping $\closure{\range(\abs{Q_n T P_n})} \to \closure{\range(Q_n TP_n)}$ and noting also the range projection of $Q_n T P_n$ is dominated by $Q_n$ and the projection onto $\closure{\range(\abs{Q_n T P_n})} = \ker^{\perp}(Q_n T P_n)$ is dominated by $P_n$.}
\begin{remark}
In the previous theorem we assumed that $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ were block decompositions, but the condition that they sum to the identity is not actually necessary (the same proof given above still works).
Therefore, if $\delim\{\}{P_n}_{n \in \mathbb{Z}}, \delim\{\}{Q_n}_{n \in \mathbb{Z}}$ are sequences of mutually orthogonal projections then $s(\sum_{n \in \mathbb{Z}} Q_n T P_n) \pprec s(T)$;
consequently, if $T \in Mhcal{I}$ then we still have $\sum_{n \in \mathbb{Z}} Q_n T P_n \in \amclosure{Mhcal{I}}$.
\end{remark}
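As a quick finite-dimensional sanity check of the submajorization inequality above (a sketch; the matrix, block sizes, and tolerance below are arbitrary choices):
\begin{verbatim}
# Check that the pinching sum_n P_n T P_n is submajorized by T: partial sums of
# its singular values never exceed those of T (here Q_n = P_n).
import numpy as np

rng = np.random.default_rng(0)
N = 12
T = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

sizes = [3, 4, 5]                 # block decomposition of C^12
T0 = np.zeros_like(T)
start = 0
for s in sizes:
    block = slice(start, start + s)
    T0[block, block] = T[block, block]   # P_n T P_n
    start += s

sT  = np.linalg.svd(T,  compute_uv=False)   # nonincreasing singular values
sT0 = np.linalg.svd(T0, compute_uv=False)
assert np.all(np.cumsum(sT0) <= np.cumsum(sT) + 1e-10)
\end{verbatim}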
\section*{Acknowledgments}
The authors would like to thank Fedor Sukochev for providing insight into the history of fully symmetric ideals.
\printbibliography
\end{document}